# Computability-logic web: an alternative to deep learning
Keehang Kwon
Department of Computing Sciences, DongA University, Korea.
Email<EMAIL_ADDRESS>
###### Abstract
Computability logic (CoL) is a powerful, mathematically rigorous computational
model. In this paper, we show that CoL-web, a web extension to CoL, naturally
supports web programming where database updates are involved. To be specific,
we discuss an implementation of the AI ATM based on CoL (CL9 to be exact).
More importantly, we argue that CoL-web supports a general AI and, therefore,
is a good alternative to neural nets and deep learning. We also discuss how to
integrate neural nets into CoL-web.
Keywords: Computability logic; Web programming; Game semantics; AI;
## 1 Introduction
It is not difficult to point out the weaknesses of neural nets and deep
learning. Simply put, neural nets are too weak to support general AI. They
receive inputs (numbers), perform simple arithmetic operations and produce
outputs (numbers). Consequently, they provide only primitive services such as
object classification. Although object classification has some interesting
applications, its power is in fact small compared to all the complex services
a human can provide. Complex services – making coffee, withdrawing money from
an ATM, etc. – are not well supported by neural nets. In addition, their
classification services are only approximate, never perfect.
A human can provide complex services to others. The notion of services and how
to complete them thus play a key role for an AI to imitate a human. In other
words, the right move towards general AI would be to find (a) a mathematical
notion for services, and (b) how an AI automatically generates a strategy for
completing the service calls.
Fortunately, Japaridze developed a theory for services/games involving complex
ones. Computability logic (CoL) [1]-[4], is an elegant theory of (multi-)agent
services. In CoL, services are seen as games between a machine and its
environment and logical operators stand for operations on games. It
understands interaction among agents in its most general — game-based — sense.
In this paper, we discuss a web programming model based on CoL and implement
an AI ATM. An AI ATM is different from a regular ATM in that the former
automatically generates a strategy for a service call, while the latter does
not.
We assume the following in our model:
* •
Each agent corresponds to a web site with a URL. An agent’s knowledgebase (KB)
is described on its homepage.
* •
Agents are initially inactive. An inactive agent becomes activated when
another agent invokes a query for the former.
* •
Our model supports the query/knowledge duality, also known as querying
knowledge. That is, knowledge of an agent can be obtained from another agent
by invoking queries to the latter.
To make things simple, we choose CL9 – a fragment of CoL – as our target
language. CL9 includes the sequential operators: sequential disjunction
($\bigtriangledown$) and sequential conjunction ($\bigtriangleup$).
These operators model knowledgebase updates. Imagine an ATM that maintains
balances on Kim. Balances change over time. Whenever Kim presses the deposit
button for $1, the machine must be able to update the balance of the person.
This can be represented by
$balance(\$0)\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}balance(\$1)\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots$
In this paper, we present $\mbox{\bf CL9}^{\Phi}$, which is a web-based
implementation of CL9. This implementation is straightforward and its
correctness is rather obvious. What is interesting is that CL9 is a novel web
programming model with possible database updates. It would provide a good
starting point for future high-level web programming.
## 2 Preliminaries
In this section a brief overview of CL9 is given.
There are two players: the machine $\top$ and the environment $\bot$.
There are two sorts of atoms: elementary atoms $p$, $q$, … to represent
elementary games, and general atoms $P$, $Q$, … to represent any,
not-necessarily-elementary, games.
Constant elementary games
$\top$ is always a true proposition, and $\bot$ is always a false proposition.
Negation
$\neg$ is a role-switch operation: For example, $\neg(0=1)$ is true, while
$(0=1)$ is false.
Choice operations
The choice operations model decision steps in the course of interaction, with
disjunction $\sqcup$ meaning the machine’s choice, and conjunction $\sqcap$
meaning choice by the environment.
Parallel operations
$A\wedge B$ means the parallel-and, while $A\vee B$ means the parallel-or. In
$A\wedge B$, $\top$ is considered the winner if it wins in both $A$ and $B$,
while in $A\vee B$ it is sufficient to win in one of $A$ and $B$.
Reduction
$\rightarrow$ is defined by $\neg A\vee B$.
Sequential operations
$A\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}B$ (resp.
$A\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}B$) is a game that starts and
proceeds as a play of $A$; it will also end as an ordinary play of $A$ unless,
at some point, $\top$ (resp. $\bot$) decides — by making a special switch move
— to abandon $A$ and switch to $B$.
$A\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}B$ is quite similar to
the $if$-$then$-$else$ in imperative languages.
We reserve $\S$ as a special symbol for switch moves. Thus, whenever $\bot$
wants to switch from a given component $A_{i}$ to $A_{i+1}$ in
$A_{0}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}A_{n}$,
it makes the move $\S$. Note that $\top$, too, needs to make switch moves in a
$\bigtriangleup$-game to “catch up” with $\bot$. The switches made by $\bot$
in a $\bigtriangleup$-game we call leading switches, and the switches made by
$\top$ in a $\bigtriangleup$-game we call catch-up switches.
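The switch mechanics above can be sketched as a tiny state machine. Everything below – the class name, the string-valued components, the `switch` method – is our own illustration, not part of CL9's formal machinery:

```python
# A toy model of a sequential game A0 △ A1 △ ... △ An as a list of
# components with a cursor; the entitled player advances the cursor
# with the special switch move §.
class SequentialGame:
    def __init__(self, components):
        self.components = list(components)
        self.active = 0                     # index of the active component

    def switch(self):
        """The § move: abandon the active component and move right."""
        if self.active + 1 >= len(self.components):
            raise ValueError("no component left to switch to")
        self.active += 1
        return self.components[self.active]

# The ATM balance game from the introduction: a deposit is a leading
# switch by the environment.
g = SequentialGame(["balance($0)", "balance($1)", "balance($2)"])
g.switch()                                  # Kim deposits $1
print(g.components[g.active])               # balance($1)
```

Once the last component is reached, no further switch is possible, matching the finite games of CL9.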
## 3 Logic $\mbox{\bf CL9}^{\Phi}$
In this section we review the propositional system CL9 [5] and slightly extend
it. Our presentation closely follows the one in [5]. We assume that there are
infinitely many nonlogical elementary atoms, denoted by $p,q,r,s$ and
infinitely many nonlogical general atoms, denoted by $P,Q,R,S$.
Formulas, to which we refer as CL9-formulas, are built from atoms and
operators in the standard way.
###### Definition 3.1
The class of CL9-formulas is defined as the smallest set of expressions such
that all atoms are in it and, if $F$ and $G$ are in it, then so are $\neg F$,
$F\wedge G$, $F\vee G$, $F\rightarrow G$, $F\sqcap G$, $F\sqcup G$,
$F\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}G$,
$F\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}G$.
Now we define $\mbox{\bf CL9}^{\Phi}$, a slight extension of CL9 with
environment parameters. Let $F$ be a CL9-formula. We introduce a new env-
annotated formula $F^{\omega}$, which reads as ‘play $F$ against an agent
$\omega$’. For an $\sqcap$-occurrence or an $\bigtriangleup$-occurrence $O$ in
$F^{\omega}$, we say $\omega$ is the matching environment of $O$. For example,
$(p\sqcap(q\sqcap r))^{w}$ is an agent-annotated formula and $w$ is the
matching environment of both occurrences of $\sqcap$. Similarly for
$(p\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}(q\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}r))^{w}$.
We extend this definition to subformulas and formulas. For a subformula
$F^{\prime}$ of the above $F^{\omega}$, we say that $\omega$ is the matching
environment of both $F^{\prime}$ and $F$.
In introducing environments to a formula $F$, one issue is whether we allow
‘env-switching’ formulas of the form $(F[R^{u}])^{w}$. Here $F[R]$ represents
a formula with some occurrence of a subformula $R$. That is, the machine
initially plays $F$ against agent $w$ and then switches to playing against
another agent $u$ in the course of playing $F$. For technical reasons, we
focus on non-‘env-switching’ formulas. This leads to the following definition:
###### Definition 3.2
The class of $\mbox{\bf CL9}^{\Phi}$-formulas is defined as the smallest set
of expressions such that (a) for any CL9-formula $F$ and any agent $\omega$,
$F^{\omega}$ is in it and, (b) if $H$ and $J$ are in it, then so are $\neg
H$, $H\wedge J$, $H\vee J$, $H\rightarrow J$.
###### Definition 3.3
Given a $\mbox{\bf CL9}^{\Phi}$-formula $J$, the skeleton of $J$ – denoted by
$skeleton(J)$ – is obtained by replacing every occurrence $F^{\omega}$ by $F$.
For example, $skeleton((p\sqcap(q\sqcap r))^{w})=p\sqcap(q\sqcap r)$.
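Definitions 3.1–3.3 can be rendered as a small abstract syntax tree. The constructor names and the operator encoding below are our own; the paper fixes no concrete syntax:

```python
from dataclasses import dataclass

# A minimal AST for CL9^Φ-formulas. Env wraps a formula with its
# matching environment ω, i.e. it represents F^ω.
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Neg:
    sub: object

@dataclass(frozen=True)
class Bin:
    op: str            # one of '∧', '∨', '→', '⊓', '⊔', '△', '∇'
    left: object
    right: object

@dataclass(frozen=True)
class Env:
    formula: object    # F in F^ω
    agent: str         # the matching environment ω

def skeleton(j):
    """Definition 3.3: drop every env annotation F^ω, keeping F."""
    if isinstance(j, Env):
        return skeleton(j.formula)
    if isinstance(j, Neg):
        return Neg(skeleton(j.sub))
    if isinstance(j, Bin):
        return Bin(j.op, skeleton(j.left), skeleton(j.right))
    return j

# skeleton((p ⊓ (q ⊓ r))^w) = p ⊓ (q ⊓ r), as in the example above.
f = Env(Bin('⊓', Atom('p'), Bin('⊓', Atom('q'), Atom('r'))), 'w')
assert skeleton(f) == Bin('⊓', Atom('p'), Bin('⊓', Atom('q'), Atom('r')))
```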
We borrow the following definitions from [5]. They apply both to CL9 and
$\mbox{\bf CL9}^{\Phi}$.
An interpretation for CL9 is a function that sends each nonlogical elementary
atom to an elementary game, and sends each general atom to any, not-
necessarily-elementary, static game. This mapping extends to all formulas by
letting it respect all logical operators as the corresponding game operations.
That is, $\top^{*}=\top$,
$(E\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}F)^{*}=E^{*}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}F^{*}$,
etc. When $F^{*}=A$, we say that ∗ interprets $F$ as $A$.
A formula $F$ is said to be valid iff, for every interpretation ∗, the game
$F^{*}$ is computable. And $F$ is uniformly valid iff there is an HPM $\cal
H$, called a uniform solution for $F$, such that $\cal H$ wins $F^{*}$ for
every interpretation ∗.
A sequential (sub)formula is one of the form
$F_{0}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}F_{n}$
or
$F_{0}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}\ldots\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}F_{n}$.
We say that $F_{0}$ is the head of such a (sub)formula, and
$F_{1},\ldots,F_{n}$ form its tail.
The capitalization of a formula is the result of replacing in it every
sequential subformula by its head.
A formula is said to be elementary iff it is a formula of classical
propositional logic.
An occurrence of a subformula in a formula is positive iff it is not in the
scope of $\neg$. Otherwise it is negative.
A surface occurrence is an occurrence that is not in the scope of a choice
connective and not in the tail of any sequential subformula.
The elementarization of a CL9-formula $F$ means the result of replacing in the
capitalization of $F$ every surface occurrence of the form
$G_{1}\sqcap\ldots\sqcap G_{n}$ by $\top$, every surface occurrence of the
form $G_{1}\sqcup\ldots\sqcup G_{n}$ by $\bot$, and every positive surface
occurrence of each general literal by $\bot$.
Finally, a formula is said to be stable iff its elementarization is a
classical tautology; otherwise it is instable.
The proof system of $\mbox{\bf CL9}^{\Phi}$ is identical to that of CL9, in
the sense that agent parameters play no role in the rules. $\mbox{\bf
CL9}^{\Phi}$ consists of the following four rules of inference.
###### Definition 3.4
Wait:
$\vec{H}\mapsto F$, where $F$ is stable and $\vec{H}$ is the smallest set of
formulas satisfying the following two conditions:
1. 1.
whenever $F$ has a surface occurrence of a subformula $G_{1}\sqcap\ldots\sqcap
G_{n}$ whose matching environment is $\omega$, for each
$i\in\\{1,\ldots,n\\}$, $\vec{H}$ contains the result of replacing that
occurrence in $F$ by $G_{i}^{\omega}$;
2. 2.
whenever $F$ has a surface occurrence of a subformula
$G_{0}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}G_{1}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}G_{n}$
whose matching environment is $\omega$, $\vec{H}$ contains the result of
replacing that occurrence in $F$ by
$(G_{1}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}G_{n})^{\omega}$.
Choose:
$H\mapsto F$, where $H$ is the result of replacing in $F$ a surface occurrence
of a subformula $G_{1}\sqcup\ldots\sqcup G_{n}$ whose matching environment is
$\omega$ by $G_{i}^{\omega}$ for some $i\in\\{1,\ldots,n\\}$.
Switch:
$H\mapsto F$, where $H$ is the result of replacing in $F$ a surface occurrence
of a subformula
$G_{0}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}G_{1}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}\ldots\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}G_{n}$
whose matching environment is $\omega$ by
$(G_{1}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}\ldots\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}G_{n})^{\omega}$.
Match:
$H\mapsto F$, where $H$ is the result of replacing in $F$ two — one positive
and one negative — surface occurrences of some general atom by a nonlogical
elementary atom that does not occur in $F$.
###### Example 3.5
The following is a $\mbox{\bf CL9}^{\Phi}$-proof of
$(b0\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b2)^{u}\rightarrow(b0\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b2)^{w}$:
1. $b2^{u}\rightarrow b2^{w}$ (from $\\{\\}$ by Wait);
2. $b2^{u}\rightarrow(b1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b2)^{w}$ (from 1 by Switch);
3. $(b1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b2)^{u}\rightarrow(b1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b2)^{w}$ (from 2 by Wait);
4. $(b1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b2)^{u}\rightarrow(b0\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b2)^{w}$ (from 3 by Switch);
5. $(b0\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b2)^{u}\rightarrow(b0\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b2)^{w}$ (from 4 by Wait).
## 4 Logic $\mbox{\bf CL9}^{\circ,\Phi}$
To facilitate the execution procedure, following [5], we modify $\mbox{\bf
CL9}^{\Phi}$ to obtain $\mbox{\bf CL9}^{\circ,\Phi}$. Unlike $\mbox{\bf
CL9}^{\Phi}$, this new language allows hyperformulas which contain the
following.
* •
Hybrid atom: each hybrid atom is a pair consisting of a general atom $P$,
called its general component, and a nonlogical elementary atom $q$, called its
elementary component. We denote such a pair by $P_{q}$. It keeps track of the
exact origin of each such elementary atom $q$.
* •
Underlined sequential formula: it is introduced so that we do not forget the
earlier components of sequential subformulas when Switch or Wait is applied.
We now require that, in every sequential (sub)formula, one of the components
be underlined.
The formulas of this modified language we call hyperformulas. We borrow the
following definitions from [5].
By the general dehybridization of a hyperformula $F$ we mean the CL9-formula
that results from $F$ by replacing in the latter every hybrid atom by its
general component, and removing all underlines in sequential subformulas.
A surface occurrence of a subexpression in a given hyperformula $F$ means an
occurrence that is not in the scope of a choice operator, such that, if the
subexpression occurs within a component of a sequential subformula, that
component is underlined or occurs earlier than the underlined component.
An active occurrence is an occurrence such that, whenever it happens to be
within a component of a sequential subformula, that component is underlined.
An abandoned occurrence is an occurrence such that, whenever it happens to be
within a component of a sequential subformula, that component is to the left
of the underlined component of the same subformula.
An elementary hyperformula is one not containing choice and sequential
operators, underlines, and general and hybrid atoms.
The capitalization of a hyperformula $F$ is defined as the result of replacing
in it every sequential subformula by its underlined component, after which all
underlines are removed.
The elementarization
$\parallel F\parallel$
of a hyperformula $F$ is the result of replacing, in the capitalization of
$F$, every surface occurrence of the form $G_{1}\sqcap\ldots\sqcap G_{n}$ by
$\top$, every surface occurrence of the form $G_{1}\sqcup\ldots\sqcup G_{n}$
by $\bot$, every positive surface occurrence of each general literal by
$\bot$, and every surface occurrence of each hybrid atom by the elementary
component of that atom.
A hyperformula $F$ is stable iff its elementarization $\parallel F\parallel$
is a classical tautology; otherwise it is instable.
A hyperformula $F$ is said to be balanced iff, for every hybrid atom $P_{q}$
occurring in $F$, the following two conditions are satisfied:
1. 1.
$F$ has exactly two occurrences of $P_{q}$, one positive and the other
negative, and both occurrences are surface occurrences;
2. 2.
the elementary atom $q$ does not occur in $F$, nor is it the elementary
component of any hybrid atom occurring in $F$ other than $P_{q}$.
An active occurrence of a hybrid atom (or the corresponding literal) in a
balanced hyperformula is widowed iff the other occurrence of the same hybrid
atom is abandoned.
We extend $\mbox{\bf CL9}^{\Phi}$ to $\mbox{\bf CL9}^{\circ,\Phi}$. The
language of $\mbox{\bf CL9}^{\circ,\Phi}$ allows any balanced hyperformulas,
which we also refer to as $\mbox{\bf CL9}^{\circ,\Phi}$-formulas.
###### Definition 4.1
Logic $\mbox{\bf CL9}^{\circ,\Phi}$ is given by the following rules for
balanced hyperformulas (below simply referred to as “(sub)formulas”):
Wait∘:
$\vec{H}\mapsto F$, where $F$ is stable and $\vec{H}$ is the smallest set of
formulas satisfying the following two conditions:
1. 1.
whenever $F$ has an active surface occurrence of a subformula
$G_{1}\sqcap\ldots\sqcap G_{n}$ whose matching environment is $\omega$ , for
each $i\in\\{1,\ldots,n\\}$, $\vec{H}$ contains the result of replacing that
occurrence in $F$ by $G_{i}^{\omega}$;
2. 2.
whenever $F$ has an active surface occurrence of a subformula
$G_{0}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\underline{G_{m}}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}G_{m+1}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}G_{n}$
whose matching environment is $\omega$, $\vec{H}$ contains the result of
replacing that occurrence in $F$ by
$(G_{0}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}G_{m}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\underline{G_{m+1}}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}G_{n})^{\omega}$.
Choose∘:
$H\mapsto F$, where $H$ is the result of replacing in $F$ an active surface
occurrence of a subformula $G_{1}\sqcup\ldots\sqcup G_{n}$ whose matching
environment is $\omega$ by $G_{i}^{\omega}$ for some $i\in\\{1,\ldots,n\\}$.
Switch∘:
$H\mapsto F$, where $H$ is the result of replacing in $F$ an active surface
occurrence of a subformula
$G_{0}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}\ldots\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}\underline{G_{m}}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}G_{m+1}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}\ldots\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}G_{n}$
whose matching environment is $\omega$ by
$(G_{0}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}\ldots\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}{G_{m}}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}\underline{G_{m+1}}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}\ldots\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}G_{n})^{\omega}.$
Match∘:
$H\mapsto F$, where $H$ has two — a positive and a negative — active surface
occurrences of some hybrid atom $P_{q}$, and $F$ is the result of replacing in
$H$ both occurrences by $P$.
An effective procedure that converts any $\mbox{\bf CL9}^{\Phi}$-proof of any
formula $G$ into a $\mbox{\bf CL9}^{\circ,\Phi}$-proof of $G$ is given in [5].
## 5 Execution Phase
The machine model of $\mbox{\bf CL9}^{\circ,\Phi}$ is designed to process only
one query/formula at a time. In distributed systems, however, it is natural
for an agent to receive and process multiple queries from different users. For
this reason, we introduce multiple queries to our machine. To do this, we
assume that an agent maintains two queues: the incoming queue $QI$, which
stores a sequence of new incoming queries of the form $(Q_{1},\ldots,Q_{n})$,
and the temporarily solved queue $QS$, which stores a sequence of temporarily
solved queries of the form $(KB_{1}\rightarrow Q_{1},\ldots,KB_{n}\rightarrow
Q_{n})$. Here each $Q_{i}$ is a query and each $KB_{i}$ is a knowledgebase. A
query $Q$ with respect to some knowledgebase is temporarily solved if $Q$ is
solved but $\top$ has a remaining switch move in $Q$. Otherwise $Q$ is said to
be completely solved.
As expected, processing real-time multiple queries causes some complications.
To be specific, we process $QI$ of the form $(Q_{1},\ldots,Q_{m})$ and $QS$ of
the form $(KB_{1}\rightarrow Q^{\prime}_{1},\ldots,KB_{n}\rightarrow
Q^{\prime}_{n})$ in the following way:
1. 1.
The first stage is to initialize a temporary variable $NewKB$ to $KB$.
2. 2.
The second stage is to follow the $loop$ procedure:
procedure $loop$:
* •
Case 1: $QI$ is not empty:
The machine tries to solve $Q_{1}$ by calling $Exec(NewKB\rightarrow Q_{1})$.
* –
If it fails, then report a failure, remove $Q_{1}$ from $QI$ and repeat
$loop$.
* –
Suppose it is a success and $NewKB$ and $Q_{1}$ evolve to $NewKB^{\prime}$ and
$Q^{\prime}_{1}$ after solving this query. We consider two cases.
(a) If it is completely solved, then report a success, remove $Q_{1}$ from
$QI$, update $NewKB$ to $NewKB^{\prime}$ and repeat $loop$. (b) If it is
temporarily solved, then report a success, remove $Q_{1}$ from $QI$, insert
$NewKB^{\prime}\rightarrow Q^{\prime}_{1}$ to $QS$, update $NewKB$ to
$NewKB^{\prime}$ and repeat $loop$.
* •
Case 2. $QI$ is empty and $QS$ nonempty: The machine tries to solve the first
query $KB_{1}\rightarrow Q^{\prime}_{1}$ in $QS$.
* –
If $KB=NewKB$, it means nothing has changed since the last check. Hence the
machine waits for any change such as the environment’s new move.
* –
Otherwise, the machine tries to solve $Q^{\prime}_{1}$ with respect to
$NewKB$. It thus removes the above query from $QS$, adds $Q^{\prime}_{1}$ to
$QI$, and repeats $loop$.
* •
Case 3. $QI$ is empty and $QS$ is empty: wait for new incoming service calls.
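The three cases above can be rendered schematically as follows. $Exec$ is abstracted to a caller-supplied function returning a status, an updated knowledgebase, and a residual query; all names and encodings here are our own scaffolding, not the paper's machinery (and the sketch returns where the real machine would block and wait):

```python
from collections import deque

def loop(QI, QS, KB, exec_fn):
    """Process the incoming queue QI and temporarily-solved queue QS."""
    new_kb = KB
    log = []
    while True:
        if QI:                                    # Case 1
            q = QI.popleft()
            status, kb2, residual = exec_fn(new_kb, q)
            if status == 'fail':
                log.append(('fail', q))           # report failure, repeat
                continue
            new_kb = kb2                          # update NewKB
            if status == 'complete':              # (a) completely solved
                log.append(('done', q))
            else:                                 # (b) temporarily solved
                log.append(('partial', q))
                QS.append((kb2, residual))
        elif QS:                                  # Case 2
            kb1, q1 = QS[0]
            if kb1 == new_kb:
                return new_kb, log                # nothing changed: wait
            QS.popleft()                          # KB changed: retry query
            QI.append(q1)
        else:                                     # Case 3: wait for calls
            return new_kb, log

# One fresh query is solved completely; its KB update re-triggers a
# temporarily solved query sitting in QS.
def exec_fn(kb, q):
    return ('complete', kb + 1, None)

kb, log = loop(deque(['q1']), deque([(0, 'q0')]), 0, exec_fn)
print(log)   # [('done', 'q1'), ('done', 'q0')]
```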
Below we will introduce an algorithm that executes a formula $J$. The
algorithm is a minor variant of the one in [5] and contains two stages:
Algorithm Exec(J): % $J$ is a $\mbox{\bf CL9}^{\circ,\Phi}$-formula
1. 1.
Fix an interpretation ∗. The first stage is to initialize a temporary variable
$E$ to $J$ and a position variable $\Omega$ to the empty position
$\langle\rangle$. Activate all the resource agents specified in $J$ by
invoking proper queries to them. That is, for each negative occurrence of an
annotated formula $F^{\omega}$ in $J$, activate $\omega$ by querying $F^{\mu}$
to $\omega$, where $\mu$ is the current machine. On the other hand, we assume
that all the querying agents – which appear positively in $J$ – are already
active.
2. 2.
The second stage is to play $J$ according to the following $loop$ procedure
(a minor variant of the one in [5]):
procedure $loop(Tree)$: % $Tree$ is a proof tree of $J$
If $E$ is derived by Choose∘ from $H$, the machine makes the move $\alpha$
whose effect is choosing $G_{i}$ in the $G_{1}\sqcup\ldots\sqcup G_{n}$
subformula of $E$. So, after making move $\alpha$, the machine calls $loop$ on
$\langle\Omega\rangle H^{*}$. Let $\omega$ be the matching environment; the
machine then informs $\omega$ of the move $\alpha$.
If $E$ is derived by Switch∘ from $H$, then the machine makes the move
$\alpha$ whose effect is making a switch in the
$G_{0}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}\ldots\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}\underline{G_{m}}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}G_{m+1}\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}\ldots\mbox{\small\raisebox{1.70709pt}{$\bigtriangledown$}}G_{n}$
subformula. So, after making move $\alpha$, the machine calls $loop$ on
$\langle\Omega,\top\alpha\rangle H^{*}$.
If $E$ is derived by Match∘ from $H$ through replacing the two (active
surface) occurrences of a hybrid atom $P_{q}$ in $H$ by $P$, then the machine
finds within $\Omega$ and copies, in the positive occurrence of $P_{q}$, all
of the moves made so far by the environment in the negative occurrence of
$P_{q}$, and vice versa. This series of moves brings the game down to
$\langle\Omega^{\prime}\rangle E^{*}=\langle\Omega^{\prime}\rangle H^{*}$,
where $\Omega^{\prime}$ is the result of adding those moves to $\Omega$. So, now
the machine calls $loop$ on $\langle\Omega^{\prime}\rangle H^{*}$.
Finally, suppose $E$ is derived by Wait∘. Our machine keeps granting
permission (“waiting”).
Case 1. $\alpha$ is a move whose effect is moving in some abandoned subformula
or a widowed hybrid literal of $E$. In this case, the machine calls $loop$ on
$\langle\Omega,\bot\alpha\rangle E^{*}$.
Case 2. $\alpha$ is a move whose effect is moving in some active surface
occurrence of a general atom in $E$. Again, in this case, the machine calls
$loop$ on $\langle\Omega,\bot\alpha\rangle E^{*}$.
Case 3. $\alpha$ is a move whose effect is making a catch-up switch in some
active surface occurrence of a $\bigtriangledown$-subformula. The machine
calls $loop$ on $\langle\Omega,\bot\alpha\rangle E^{*}$.
Case 4. $\alpha$ is a move whose effect is making a move $\gamma$ in some
active surface occurrence of a non-widowed hybrid atom. Let $\beta$ be the
move whose effect is making the same move $\gamma$ within the other active
surface occurrence of the same hybrid atom. In this case, the machine makes
the move $\beta$ and calls $loop$ on
$\langle\Omega,\bot\alpha,\top\beta\rangle E^{*}$.
Case 5: $\alpha$ is a move whose effect is a choice of the $i$th component in
an active surface occurrence of a subformula $G_{1}\sqcap\ldots\sqcap G_{n}$.
Then the machine calls $loop$ on $\langle\Omega\rangle H^{*}$, where $H$ is
the result of replacing the above subformula by $G_{i}$ in $E$.
Case 6: $\alpha$ signifies a (leading) switch move within an active surface
occurrence of a subformula
$G_{0}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\underline{G_{m}}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}G_{m+1}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}G_{n}.$
Then the machine makes the same move $\alpha$ (signifying a catch-up
switch within the same subformula), and calls $loop$ on
$\langle\Omega,\bot\alpha,\top\alpha\rangle H^{*}$, where $H$ is the result of
replacing the above subformula by
$G_{0}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}G_{m}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\underline{G_{m+1}}\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}\ldots\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}G_{n}.$
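The procedure's overall shape – read a rule off each node of the proof tree, act, descend to the premise – can be sketched as a dispatcher. The node and move encodings below are our own, and the Wait cases 1–6 are collapsed into a single "grant permission" action:

```python
# Walk a (linear) proof tree from conclusion toward axioms, emitting
# the machine's moves for each rule that derived the current node.
def machine_moves(node):
    moves = []
    while node is not None:
        rule = node['rule']
        if rule == 'Choose':
            moves.append(('choose', node['i']))     # choose G_i, inform ω
        elif rule == 'Switch':
            moves.append(('switch', '§'))           # switch in a ∇-subformula
        elif rule == 'Match':
            moves.append(('mirror', node['atom']))  # copy env moves across P_q
        else:                                       # Wait: grant permission
            moves.append(('wait', None))
        node = node.get('premise')
    return moves

# The spine of Example 3.5, read from the conclusion (step 5) down to
# step 1: Wait, Switch, Wait, Switch, Wait.
proof = {'rule': 'Wait', 'premise':
         {'rule': 'Switch', 'premise':
          {'rule': 'Wait', 'premise':
           {'rule': 'Switch', 'premise':
            {'rule': 'Wait', 'premise': None}}}}}
print(machine_moves(proof))
# [('wait', None), ('switch', '§'), ('wait', None), ('switch', '§'), ('wait', None)]
```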
## 6 Examples
As an example of a web system, we will look at the ATM of some bank. It is
formulated with the user, an ATM, a database, and a credit company. We assume
the following:
* •
There are two kinds of agents: super agents and regular agents. Super agents
are prefixed with $\$$. For example, $\$kim$ is a super agent. While regular
agents behave according to the $Exec$ procedure, super agents behave
unpredictably.
* •
For simplicity, we assume the bank has only one customer named Kim. Further,
the balance is restricted to one of the three amounts: $0, $1 or $2.
* •
The database maintains balance information on Kim.
* •
Both the credit company and the ATM request balance checking to the database.
* •
The ATM has a ($1) deposit button. Whenever pressed, it adds $\$1$ to the
account.
The above can be implemented as follows:
$agent\ credit$. % credit company
---
$(b0\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b2)^{db}$.
% b0 means the balance is $0, and so on.
% Here, we assume that ATM usage charge is zero, meaning deposit = balance.
$agent\ db$. % database
---
$(d0\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}d1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}d2)^{m}$.
% d0 means the accumulated deposit is $0, and so on.
$d0\rightarrow b0$.
$d1\rightarrow b1$.
$d2\rightarrow b2$.
$agent\ m$. % ATM machine
---
$(d0\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}d1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}d2)^{\$kim}$.
% request deposit checking to kim
$(b0\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b2)^{db}$.
% request balance checking to DB
$agent\ \$kim$. % $\$kim$ is a super agent.
---
$(b0\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b1\mbox{\small\raisebox{0.0pt}{$\bigtriangleup$}}b2)^{m}$.
% request balance checking to ATM
Now let us consider the agent $\$kim$ and the agent $credit$. They both want
to know the balance of Kim’s account. The initial balance checking will return
$b0$, meaning zero dollars. Later, suppose Kim deposits $1. In this case, the
balance information on $db$ will be updated to one dollar and, subsequently,
the response to balance checking by the ATM, $\$kim$ and the credit company
will be updated to $b1$, as desired.
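The propagation just described can be simulated as follows. The agent wiring – a flat observer list standing in for the web of queries – is our own simplification; in CoL-web the agents are web sites exchanging moves:

```python
# A toy simulation: a leading switch by the environment at the ATM is
# matched by catch-up switches in the agents observing it.
class Agent:
    def __init__(self, name, states):
        self.name = name
        self.states = list(states)
        self.active = 0
        self.observers = []                  # agents that must catch up

    def current(self):
        return self.states[self.active]

    def switch(self):
        self.active += 1                     # leading switch
        for o in self.observers:
            o.switch()                       # catch-up switch downstream

db = Agent('db', ['b0', 'b1', 'b2'])         # balance game
atm = Agent('m', ['d0', 'd1', 'd2'])         # deposit game
atm.observers.append(db)
atm.switch()                                 # $kim presses the $1 button
print(atm.current(), db.current())           # d1 b1
```

After the deposit, any balance check against $db$ answers $b1$, matching the narrative above.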
## 7 Adding neural networks
The integration of neural nets and symbolic AI is often beneficial. There are
several ambitious approaches, such as DeepProbLog [6], that try to combine
both worlds within a single agent. Unfortunately, these approaches
considerably increase the complexity of the machine.
Fortunately, in the multi-agent setting, this integration can be achieved in a
rather simple way by introducing a new kind of agents called $\eta$-agents
(neural-net agents).
There are now three kinds of agents:
* •
regular agents, who perform deductive reasoning,
* •
$\eta$-agents, who perform inductive reasoning, and
* •
super agents, who are able to create resources.
An $\eta$-agent is an agent which is designed to implement low-level
perceptions (visual data, etc.) via training. That is, its knowledgebase is in
neural-net form for easy training. In the sequel, $\eta$-agents are prefixed
with $\eta$.
We assume the following:
(1) the input/output of a neural network is encapsulated in the form of an
atomic predicate,
(2) the output of a neural network is deterministic (thus, we do not consider
probability here), and
(3) for simplicity, neural nets are specified in logical form ($\mbox{\bf
CL9}^{\Phi}$ to be exact), instead of in functional form.
An $\eta$-agent with its knowledgebase $N$ proceeds in two modes:
* •
When there is a query $A$, it proceeds in deductive mode by processing $A$
using $N$ via $\mbox{\bf CL9}^{\Phi}$ deduction.
* •
When it is idle, it trains itself on sample data by updating $N$.
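The two modes can be sketched as below. Training is stubbed as a table update; a real $\eta$-agent would adjust network weights instead, and the method names are our own:

```python
# A minimal sketch of an η-agent's two modes: train while idle,
# deduce on a service call.
class EtaAgent:
    def __init__(self):
        self.mode = 'training'
        self.kb = {}                         # stands in for the network N

    def idle_tick(self, image, label):
        self.mode = 'training'               # idle: train on sample data
        self.kb[image] = label               # "adjust weights" (stubbed)

    def query(self, image):
        self.mode = 'deduction'              # service call: switch modes
        return self.kb.get(image)            # deterministic (assumption 2)

d = EtaAgent()
d.idle_tick('i3', 'lion')
print(d.query('i3'), d.mode)                 # lion deduction
```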
As an example, we will look at a program which, given an image of an animal,
identifies its habitat.
We assume the following:
* •
The regular agent $a$ implements the predicate $habitat(j,h)$ where $j$ is an
image of an animal and $h$ is the main habitat of the animal. We consider two
kinds of animals: lion and tiger. We assume that $j$ belongs to
$S=\\{i_{1},i_{2},i_{3}\\}$, where $S$ is a set of three images of animals.
* •
The $\eta$-agent $\eta d$ implements the predicate $animal(j,n)$, where $j$ is
an image of an animal and $n$ is the corresponding animal.
The above can be implemented as follows:
$agent\ a$. % animal habitats
---
% Below is a query to $\eta d$.
$((animal(i_{1},lion)\sqcup animal(i_{1},tiger))\sqcap$
$(animal(i_{2},lion)\sqcup animal(i_{2},tiger))\sqcap$
$(animal(i_{3},lion)\sqcup animal(i_{3},tiger)))^{\eta d}$.
% Rules that map animals to the corresponding habitats.
$animal(i_{1},tiger)\rightarrow habitat(i_{1},india)$.
$animal(i_{2},tiger)\rightarrow habitat(i_{2},india)$.
$animal(i_{3},tiger)\rightarrow habitat(i_{3},india)$.
$animal(i_{1},lion)\rightarrow habitat(i_{1},senegal)$.
$animal(i_{2},lion)\rightarrow habitat(i_{2},senegal)$.
$animal(i_{3},lion)\rightarrow habitat(i_{3},senegal)$.
% Given an image $j$, the agent $\eta d$ produces the corresponding animal via deep learning. We do not show the details here.
$agent\ \eta d$. % $\eta$-agent
---
$\vdots$.
When the agent $\eta d$ is idle, it trains itself and updates its
knowledgebase by adjusting weights. Now let us invoke a query
$habitat(i_{3},india)\sqcup habitat(i_{3},senegal)$ to the agent $a$ where
$i_{3}$ is the image of some animal. To solve this query, the agent $a$
invokes another query shown above to the agent $\eta d$. Now $\eta d$ switches
from training mode to deduction mode to solve this query. Let us assume that
the response from $\eta d$ is $animal(i_{3},lion)$. Using the rules related to
animal’s habitats, the agent $a$ will return $habitat(i_{3},senegal)$ to the
user. Note that our agents behave just like real-life agents. For example, a
doctor typically trains himself when he is idle. If there is a service
request, then he switches from training mode to deduction mode.
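The query-resolution walk-through above can be sketched in ordinary code. The sketch below is illustrative only; it is not the $\mbox{\bf CL9}^{\Phi}$ deduction mechanism. The $\eta$-agent's trained network is replaced by a fixed lookup stub whose answers (including $animal(i_{3},lion)$, as assumed in the text) are our own, and all names (`eta_d`, `agent_a`, `HABITAT_RULES`) are hypothetical.

```python
# Stub for the eta-agent's neural knowledgebase: a real eta-agent would run
# a trained network in deduction mode; here its answers are hard-coded.
ETA_KNOWLEDGEBASE = {"i1": "tiger", "i2": "tiger", "i3": "lion"}

def eta_d(image):
    """Deduction mode of the eta-agent: answer animal(image, ?) deterministically."""
    return ETA_KNOWLEDGEBASE[image]

# Rules of agent a: animal(j, n) -> habitat(j, h), for every image j.
HABITAT_RULES = {"tiger": "india", "lion": "senegal"}

def agent_a(image):
    """Solve habitat(image, india) or habitat(image, senegal) by querying eta_d."""
    animal = eta_d(image)          # query delegated to the eta-agent
    return HABITAT_RULES[animal]   # apply the habitat rules

print(agent_a("i3"))  # with the stub's assumed answer animal(i3, lion): senegal
```

With the stub's assumed answer $animal(i_{3},lion)$, the call returns senegal, matching the walk-through.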
## References
* [1] G. Japaridze. Introduction to computability logic. Annals of Pure and Applied Logic 123 (2003), pp. 1-99.
* [2] G. Japaridze. Propositional computability logic I. ACM Transactions on Computational Logic 7 (2006), No.2, pp. 302-330.
* [3] G. Japaridze. Propositional computability logic II. ACM Transactions on Computational Logic 7 (2006), No.2, pp. 331-362.
* [4] G. Japaridze. In the beginning was game semantics. In: Games: Unifying Logic, Language and Philosophy. O. Majer, A.-V. Pietarinen and T. Tulenheimo, eds. Springer Verlag, Berlin (to appear). Preprint is available at http://arxiv.org/abs/cs.LO/0507045
* [5] G. Japaridze. Sequential operators in computability logic. Information and Computation 206, No.12 (2008), pp. 1443-1475.
* [6] R. Manhaeve et al. DeepProbLog: neural probabilistic logic programming, NIPS’18: Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018, pp.3753–3763.
# Chiral Bloch states in single layer graphene with Rashba spin-orbit
coupling: Spectrum and spin current density
Y. Avishai1 and Y. B. Band2 1Department of Physics, Ben-Gurion University of
the Negev, Beer-Sheva, Israel,
New York University and the NYU-ECNU Institute of Physics at NYU Shanghai,
3663 Zhongshan Road North, Shanghai, 200062, China,
and Yukawa Institute for Theoretical Physics, Kyoto, Japan
2Department of Physics, Department of Chemistry, Department of Electro-Optics,
and The Ilse Katz Center for Nano-Science, Ben-Gurion University of the Negev,
###### Abstract
We study the Bloch spectrum and spin physics of 2D massless Dirac electrons in
single layer graphene subject to a one dimensional periodic Kronig-Penney
potential and Rashba spin-orbit coupling. The Klein paradox exposes novel
features in the band dispersion and in graphene spintronics. In particular it
is shown that: (1) The Bloch energy dispersion $\varepsilon(p)$ has unusual
structure: There are two Dirac points at Bloch momenta $\pm p\neq 0$ and a
narrow band emerges between the wide valence and conduction bands. (2) The
charge current and the spin density vector vanish. (3) Yet, all the non-
diagonal elements of the spin current density tensor are finite and their
magnitude increases linearly with the spin-orbit strength. In particular,
there is a spin density current whose polarization is perpendicular to the
graphene plane. (4) The spin density currents are space-dependent, hence their
continuity equation includes a finite spin torque density.
Introduction: Following the discovery of graphene [1], novel phenomena were
predicted in its electronic properties [2, 3]. Among these, the Klein paradox
[4] and chiral tunneling in single layer graphene (SLG) were reported in a
seminal paper [5], and further findings were reported in Refs. [6, 7]. Due to
chirality near a Dirac point, electrons execute unimpeded transmission through
a potential barrier even for energies below the barrier. This scenario is
related to the absence of back-scattering for electron-impurity scattering in
carbon nanotubes [8]. Several extensions were reported in Refs. [9, 10, 11].
In parallel, investigation of the role of electron spin in graphene led to the
emergence of a new field: graphene spintronics [12, 13, 14, 15, 16, 17, 18,
19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56].
The role of the Klein paradox in graphene spintronics is reported in Refs. [33, 39, 56], which study electron transmission through a barrier in the presence of Rashba spin-orbit coupling (RSOC).
In this work we expose yet another facet of the Klein paradox in graphene
spintronics by elucidating the physics of electrons in SLG subject to a
periodic one dimensional Kronig-Penney potential (1DKPP) and uniform RSOC.
Thereby the roles of the Klein paradox [5] and RSOC in SLG are combined with
the Bloch theorem, and novel aspects of band structure and spin related
observables are exposed. Recall that RSOC can be controlled by an externally
applied uniform electric field ${\bf E}=E_{0}\hat{\bf z}$ perpendicular to the
SLG lying in the $x$-$y$ plane, as in the Rashba model for the two-dimensional
electron gas [57]. We hope this study will motivate further investigation of graphene-based spintronic devices that do not rely on an external magnetic field or magnetic materials.
Observables that are calculated include the Bloch spectrum $\varepsilon(p)$
($p=$ crystal momentum), spin density, and spin current density (related to
spin torques [58]). Their properties are remarkably different from those predicted in bulk SLG in the absence of a 1DKPP (wherein the Klein paradox does not play a role): (1) The spin-orbit (SO) splitting of levels in the
Bloch energy dispersion is rather unusual: Recall that for $\lambda=0$, there
are two degenerate levels in the valence and the conduction band and the gap
is closed at a single Dirac point at Bloch momentum $p=0$ [see Fig. 1(a)
below]. For $\lambda>0$ this single Dirac point is split into two points
located at $\pm p\neq 0$ [see Fig. 1(c) below]. (2) Although the charge
current and the spin density vector vanish, the non-diagonal elements of the
spin current density tensor $J_{ij}$ are finite (here $i=x,y,z$ is the
polarization direction and $j=x,y$ is the propagation direction). Thus, unlike
in bulk SLG [47], $J_{zx}\neq 0$ and $J_{zy}\neq 0$ (current is polarized
perpendicular to the SLG plane). (3) $J_{ij}(x)$ is space-dependent so that
there is a finite spin torque [58]. (4) The response of the spin current
densities to the RSOC strength $\lambda$ is substantial even for small
$\lambda$ (the magnitude of $\lambda$ due to a strong perpendicular electric
field in SLG as reported in Ref. [44] is a fraction of meV). These predictions
regarding graphene spintronics are experimentally verifiable.
Formalism: Consider a system of massless 2D Dirac electrons in SLG lying in
the $x$-$y$ plane subject to a uniform electric field ${\bf E}=E_{0}\hat{\bf
z}$ and a 1D periodic Kronig-Penney potential,
$u(x)=u_{0}\sum_{m=-\infty}^{\infty}\Theta(x-mR)\Theta(mR+d-x),\quad R=d+\ell.$ (1)
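As a quick check on the geometry of the periodic potential, the sketch below evaluates it numerically, using the barrier width $d=200$ nm, spacing $\ell=260$ nm, and period $R=d+\ell$ consistent with the unit-cell decomposition (barrier $[0,d]$, spacing $[d,d+\ell]$) used below; the function name `u_of_x` is ours.

```python
# Periodic Kronig-Penney potential: u = u0 on each barrier [m*R, m*R + d],
# and 0 on each spacing of width l, with period R = d + l.
def u_of_x(x, u0=98.85, d=200.0, l=260.0):
    R = d + l
    xr = x % R              # reduce x to the fundamental cell [0, R)
    return u0 if xr < d else 0.0

print(u_of_x(100.0))   # inside the m = 0 barrier -> 98.85
print(u_of_x(300.0))   # in the spacing -> 0.0
```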
The (Fermi) energy $\varepsilon$ and the potential height $u_{0}$ satisfy the
inequality $u_{0}>\varepsilon>0$ (the condition for the emergence of the Klein
paradox wherein electrons propagate under the barriers). Our goal is to derive
the Bloch spectrum and Bloch functions in order to predict spin-related observables. The problem is treated here within the continuum formulation near
one of the Dirac points, say ${\bf K}^{\prime}$. Since the transverse wave
number $k_{y}$ is conserved, the wave function can be factored:
$\Psi(x,y)=e^{ik_{y}y}\psi(x)$. Recall that, in addition to the isospin
${\bm{\tau}}$ encoding the two-lattice structure of SLG, there is now a real
spin, ${\bm{\sigma}}$. Hence, the wave function $\psi(x)$ is a four component
spinor in ${\bm{\sigma}}\otimes{\bm{\tau}}$ (spin$\otimes$isospin) space. It
has dimensions of $1/\sqrt{A}$ where $A$ is some relevant area. Hereafter we
take $A=(d+\ell)\times 1$ (nm)${}^{2}$, and omit this factor when no confusion
arises. The Hamiltonian is,
$h(-i\partial_{x},k_{y},\lambda)=\gamma\\{[-i\partial_{x}+\lambda(\hat{\bf z}\times{\bm{\sigma}})_{x}]\tau_{x}+[k_{y}+\lambda(\hat{\bf z}\times{\bm{\sigma}})_{y}]\tau_{y}\\}+u(x)\equiv h_{0}(-i\partial_{x},k_{y},\lambda)+u(x),$ (2)
which is a 4$\times$4 matrix first-order differential operator. Here $\gamma=\hbar v_{F}=659.107$ meV$\cdot$nm is the kinetic energy parameter, and
$\lambda$ is the RSOC strength parameter [59] (it is also the inverse SO
length parameter $\lambda=1/\ell_{so}\propto E_{0}$). The products,
$\sigma_{x}\tau_{x}$, $\sigma_{y}\tau_{y}$, implicitly incorporate a Kronecker
product. $\psi(x)$ is a combination of four component plane-wave spinors,
$e^{\pm ik_{x}x}v(\pm k_{x})$ (between barriers), and $e^{\pm iq_{x}x}w(\pm
q_{x})$ (in the barriers). The constant vectors $v(\pm k_{x})$ and $w(\pm
q_{x})$ satisfy the algebraic linear equations,
$h_{0}(\pm k_{x},k_{y},\lambda)v(\pm k_{x})=\varepsilon v(\pm k_{x}),\quad h(\pm q_{x},k_{y},\lambda)w(\pm q_{x})=\varepsilon w(\pm q_{x}).$ (3)
The vectors $v(\pm k_{x})$ and $w(\pm q_{x})$ cannot be chosen as spin eigenfunctions because spin is not conserved. Moreover, Eqs. (3) are not eigenvalue equations. Indeed, assuming fixed
transverse wave number $k_{y}$, potential parameters $u_{0},d,\ell$ and RSOC
strength $\lambda$, the wave numbers $k_{x}$ and $q_{x}$ must depend on the
(yet unknown) energy $\varepsilon$. For $\varepsilon>0$ (recall the condition
of the Klein paradox), and for each sign $s=\pm$, there are two wave numbers
that solve these implicit equations: $sk_{xn}(\varepsilon)$ for
$x\notin[0,d]$, and $sq_{xn}(\varepsilon)$ for $x\in[0,d]$ ($n=1,2$). (The
ubiquitous energy dependence will be occasionally omitted). Therefore,
Eqs. (3) are implicit equations for $sk_{xn}(\varepsilon)$ and $sq_{xn}(\varepsilon)$ as well as for $v_{ns}\equiv v[sk_{xn}(\varepsilon)]$ and $w_{ns}\equiv w[sq_{xn}(\varepsilon)]$ (where $n=1,2$ and $s=\pm$). The solution of Eqs. (3) is given by
$k_{xn}^{2}=[\varepsilon+(-1)^{n+1}\lambda]^{2}-\lambda^{2}-k_{y}^{2},\quad q_{xn}^{2}=[\varepsilon+(-1)^{n+1}\lambda-u_{0}]^{2}-\lambda^{2}-k_{y}^{2},$ (4)
together with the vectors $v_{ns}$ and $w_{ns}$ (their analytic expressions
will not be explicitly given here). They are normalized as $\langle
v_{ns}|v_{n^{\prime}s}\rangle=\langle
w_{ns}|w_{n^{\prime}s}\rangle=\delta_{nn^{\prime}}$, but $\langle
v_{n+}|v_{n-}\rangle\neq 0$ and $\langle w_{n+}|w_{n-}\rangle\neq 0$.
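As a minimal numerical reading of the wave-number formulas just quoted, the sketch below evaluates $k_{xn}$ and $q_{xn}$ for the parameters used later in the paper ($k_{y}=0$, $u_{0}=98.85$ meV converted with $\gamma=659.107$ meV$\cdot$nm); the function name is ours.

```python
import math

# Wave numbers between the barriers (k_xn) and inside them (q_xn),
# with energies and wave numbers in 1/nm and k_y = 0.
def wave_numbers(eps, lam, u0, ky=0.0):
    k, q = [], []
    for n in (1, 2):
        s = (-1) ** (n + 1)                       # +1 for n = 1, -1 for n = 2
        k2 = (eps + s * lam) ** 2 - lam ** 2 - ky ** 2
        q2 = (eps + s * lam - u0) ** 2 - lam ** 2 - ky ** 2
        k.append(math.sqrt(k2))                   # assumes k2 > 0, i.e. eps > 2*lam
        q.append(math.sqrt(q2))                   # real in the Klein regime
    return k, q

# eps = 0.0243 1/nm, lam = 0.0016 1/nm, u0 = 98.85 meV / (659.107 meV*nm)
k, q = wave_numbers(0.0243, 0.0016, 98.85 / 659.107)
```

Both pairs come out real, as required below in the Klein regime with $\varepsilon>2\lambda$.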
The general form of the wave functions between and within the barriers is
then:
$\psi(x)=\sum_{n,s=\pm}\begin{cases}a_{ns}e^{isk_{xn}x}v_{ns},&u(x)=0,\\\ b_{ns}e^{isq_{xn}x}w_{ns},&u(x)=u_{0}.\end{cases}$ (5)
The constants $a_{ns}(\varepsilon)$ and $b_{ns}(\varepsilon)$ with $n=1,2$,
and $s=\pm$, are determined by matching the wave functions on the walls of the
barrier and employing Bloch condition to which we now turn.
Consider the unit cell $[0,R]$ consisting of the barrier region $[0,d]$ and
the spacing $[d,d+\ell=R]$, corresponding to the case $m=0$ in Eq. (1). The
matching equations at the left wall of the barrier $x=0$ implies
$\psi(0^{-})=\psi(0^{+})$. It is written in terms of
$\\{a_{ns}\\},\\{b_{ns}\\}$ using the following notation:
${\bf a}=(a_{1^{+}},a_{2^{+}},a_{1^{-}},a_{2^{-}})^{T},\quad{\bf b}=(b_{1^{+}},b_{2^{+}},b_{1^{-}},b_{2^{-}})^{T}.$ (6)
${\bf a}$ and ${\bf b}$ are the $4$$\times$$1$ column vectors of coefficients
introduced in Eq. (5). Moreover,
$V=(v_{1^{+}},v_{2^{+}},v_{1^{-}},v_{2^{-}}),\quad W=(w_{1^{+}},w_{2^{+}},w_{1^{-}},w_{2^{-}}),$ (7)
are $4$$\times$$4$ matrices built from the $4$$\times$$1$ column vectors
introduced in Eq. (5). The matching equations at $x=0$ and the transfer matrix
carrying $\psi(0^{-})$ to $\psi(0^{+})$ are then given by,
$V{\bf a}=W{\bf b},\ \Rightarrow\ T_{0^{-}\to 0^{+}}=W^{-1}V,$ (8)
so that $T_{0^{-}\to 0^{+}}{\bf a}={\bf b}$. Similarly, the transfer matrix
carrying $\psi(d^{-})\to\psi(d^{+})$ across the right wall of the barrier is
$T_{d^{-}\to d^{+}}=V^{-1}W$. To complete the construction of the transfer
matrix $T$ that carries the wave function across a unit cell from $x=0^{-}$ to
$x=R^{-}=\ell+d^{-}$ recall that the propagation of $\psi(x)$ from $0^{+}\to
d^{-}$ and from $d^{+}\to R^{-}$ is respectively controlled by the 4$\times$4
diagonal phase-factor matrices,
$\Phi_{q}=\mbox{diag}[e^{iq_{x1}d},e^{iq_{x2}d},e^{-iq_{x1}d},e^{-iq_{x2}d}],\quad\Phi_{k}=\mbox{diag}[e^{ik_{x1}\ell},e^{ik_{x2}\ell},e^{-ik_{x1}\ell},e^{-ik_{x2}\ell}],$ (9)
which leads eventually to the expression, $T=\Phi_{k}T_{d^{-}\to
d^{+}}\Phi_{q}T_{0^{-}\to 0^{+}}$. $T$ is a symplectic $4$$\times$$4$ matrix
satisfying $\mbox{Det}[T]=1$ and $T^{\dagger}\Sigma_{z}T=\Sigma_{z}$, where
$\Sigma_{z}={\bf 1}_{2\times 2}\otimes\tau_{z}$. The Bloch theorem (for fixed
$\lambda,k_{y},u_{0},d,\ell$) requires that $\psi(x+R)=e^{ipR}\psi(x)$ where
$p$ is the crystal wave number. This implies the eigenvalue equation
$T(\varepsilon){\bf a}(\varepsilon)=e^{ipR}{\bf a(\varepsilon)}.$ (10)
Equation (10) defines a relation between the four eigenvalues
$\\{\lambda_{j}(\varepsilon)\\}$ ($j=1,2,3,4$) of $T(\varepsilon)$ and the
Bloch wave number $p$, that is, Im$[\lambda_{j}(\varepsilon)]=\sin pR$.
Thereby we get the dispersion curves
$\varepsilon_{j}(p)=[\lambda^{I}_{j}]^{-1}\big{(}\sin pR\big{)}$. The
eigenvalues of $T$ satisfy the equalities $\lambda_{1}=1/\lambda_{2},\
\lambda_{3}=1/\lambda_{4}$ so that if $\lambda_{j}(\varepsilon)$ is real the
energy $\varepsilon_{j}(p)$ is in the gap. Otherwise, the eigenvalues consist
of two pairs of conjugate complex numbers lying on the unit circle, re-
numbered as $\lambda_{1}=1/\lambda_{1}^{*},\lambda_{2}=1/\lambda_{2}^{*}$.
Consequently, there are two symmetric dispersion curves
$\varepsilon_{1}(p)=\varepsilon_{1}(-p)$ and
$\varepsilon_{2}(p)=\varepsilon_{2}(-p)$ corresponding to the two SO split
levels. As we shall see below, for fixed $k_{y}=0$ and RSOC strength
$\lambda\to 0$, the two curves coincide, forming valence and conduction bands
that display a Dirac point at $p=0$, with linear dispersion
$\varepsilon_{j}(p)=\varepsilon_{j}(0)+(-1)^{j}a|p|$ for small $p$ (where
$a>0$ is a real constant), see Fig. 1(a). The more intriguing case $\lambda>0$
is discussed below. We are now in a position to present our results based on
the above formalism.
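The pseudo-unitarity constraint $T^{\dagger}\Sigma_{z}T=\Sigma_{z}$ is what forces the eigenvalue pairing used above. The sketch below illustrates this generically: it does not build the graphene $T$ (that requires the spinors $v_{ns},w_{ns}$, whose expressions are not given here), but generates an arbitrary matrix satisfying the same constraint via $T=\exp(i\Sigma_{z}H)$ with $H$ Hermitian, and checks numerically that for every eigenvalue $\lambda$, $1/\lambda^{*}$ is also an eigenvalue; unit-modulus eigenvalues then take the Bloch form $e^{ipR}$ of Eq. (10).

```python
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential by truncated Taylor series (adequate for small norms)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(0)
tau_z = np.diag([1.0, -1.0])
Sigma_z = np.kron(np.eye(2), tau_z)          # Sigma_z = 1_{2x2} (x) tau_z

A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                     # Hermitian generator
T = expm_taylor(1j * Sigma_z @ H)            # pseudo-unitary by construction

# T† Sigma_z T = Sigma_z ...
assert np.allclose(T.conj().T @ Sigma_z @ T, Sigma_z)

# ... hence for every eigenvalue lam of T, 1/conj(lam) is also an eigenvalue.
evals = np.linalg.eigvals(T)
for lam in evals:
    assert np.min(np.abs(evals - 1.0 / np.conj(lam))) < 1e-8
```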
Choice of parameters: Our objectives are twofold: (1) to elucidate the Bloch
dispersion and its dependence on the RSOC strength $\lambda$ (tunable by the
electric field). (2) To calculate wave functions and spin-related observables,
to assess their space dependence and their response to variation of $\lambda$.
As we hope to enrich our understanding of graphene spintronics, it is
important to choose potential parameters $u_{0},d,\ell$ and RSOC strength
$\lambda$ in accordance with experimental capability.
Below, the lengths $x,y,d,\ldots$ are given in nm, and the energies $\varepsilon,u_{0},\lambda$ as well as the wave numbers $k_{x},k_{y},q_{x}$ (introduced above) are given in (nm)${}^{-1}$ [1 (nm)${}^{-1}$ corresponds to 659.107 meV].
The size of $\lambda$ is dictated by experiments on Rashba spin-splitting in SLG. In Ref. [44], it is shown that $\lambda$ is on the order of a fraction of 1 meV. Here we let $0<\lambda\leq 0.0016$ nm${}^{-1}$ ($0<\lambda\leq 1.0544$ meV). It
is also required that the wave numbers $k_{xn}$ and $q_{xn}$ should be real [see Eq. (4)]. For $k_{y}=0$, this implies
$\varepsilon>2\lambda$ (for real $k_{xn}$) and
$(u_{0}-\varepsilon)^{2}+2\lambda(u_{0}-\varepsilon)>0$ for $q_{xn}$. Finally,
for simplicity, we consider forward scattering, $k_{y}=0$. The case $k_{y}\neq
0$ will be explored in a future communication. (Note that it is experimentally
difficult to tune $k_{y}$ for fixed Fermi energy $\varepsilon$). In summary:
(1) The fixed parameters are: $k_{y}=0,u_{0}=98.85$ meV, $d=200$ nm,
$\ell=260$ nm. (2) In the calculations of the spectrum the Bloch energy is
varied in the interval $[2\lambda,u_{0}/2]$. (3) In the calculations of the
spin observables, the Fermi energy is fixed at 0.0243 (nm)${}^{-1}=13.2$ meV.
(4) Bloch spectrum and spin observables are calculated for $\lambda=0.00016$
(nm)${}^{-1}=0.10544$ meV (nearly $\lambda=0$), and $\lambda=0.0016$
(nm)${}^{-1}=1.0544$ meV.
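For convenience, the unit conversion used throughout [1 (nm)${}^{-1}$ corresponds to $\gamma=659.107$ meV] can be encoded as a one-liner; applied to the two RSOC strengths above, it reproduces their quoted meV values up to rounding. The function name is ours.

```python
GAMMA = 659.107  # meV * nm, i.e. hbar * v_F

def to_meV(x_inv_nm):
    """Convert an energy or RSOC strength quoted in 1/nm to meV."""
    return GAMMA * x_inv_nm

print(to_meV(0.0016))    # lambda = 0.0016 1/nm  -> about 1.055 meV
print(to_meV(0.00016))   # lambda = 0.00016 1/nm -> about 0.105 meV
```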
Results: In the series of figures below we show our results for the Bloch
spectrum, the charge density, and the non-diagonal elements of the spin
current density. It is argued that the charge current density and the spin
density vector vanish. Expressions for all these quantities are given below.
First, we discuss the Bloch spectrum. In Fig. 1(a) the dispersion curves
$\varepsilon(p)$ are shown for very small (actually vanishing) RSOC strength
$\lambda=0.00016$ (nm)${}^{-1}=0.10544$ meV. It consists of two (virtually)
degenerate levels in the valence and the conduction bands with a single Dirac
point at $p=0$. Strictly speaking, the periodic potential is 1D, so we should refer to this linear dispersion as a Dirac triangle rather than a Dirac cone. As we increase $\lambda$ to 0.0016 (nm)${}^{-1}=1.0544$ meV, the pattern is unusually
modified as shown in Fig. 1(b). To explain what happens, it is useful to plot the inverse function $\sin p(\varepsilon)R$ as a function of $\varepsilon$ (restricted to positive $p$ for simplicity). In Fig. 1(c) it is shown for
$\lambda\to 0$, where the two $p$ levels coincide and form a Dirac point with
linear dispersion. For $\lambda>0$ the red $p$ level “pulls the Dirac point
up”, and the two blue $p$ levels repel each other. As a result (taking into account the symmetric pattern for $p<0$), RSOC causes level repulsion in both energy (except at the Dirac points) and momentum. The single Dirac point at $p=0$ is now split into a pair of Dirac points at $\pm p\neq 0$.
But the dispersion at these two Dirac points remains linear, unlike in the
pattern encountered in bulk SLG [41]. From the point of view of band
structure, the central rhombus in Fig. 1(b) specifies a narrow “semi-metallic
band” between the valence and conduction bands.
Figure 1: (a) Bloch spectrum at $\lambda=0.00016$ (nm)${}^{-1}=0.10544$ meV. (b) Bloch spectrum at $\lambda=0.0016$ (nm)${}^{-1}=1.0544$ meV has two Dirac points. (c) and (d) Compare the inverse function $\sin p(\varepsilon)R$: (c) is for $\lambda=0.00016$ (nm)${}^{-1}=0.10544$ meV and (d) is for $\lambda=0.0016$ (nm)${}^{-1}=1.0544$ meV. These show that the effect of RSOC
is to cause energy and momentum splitting of the two SO levels (prevailing at
$\lambda=0$) while maintaining the Dirac points.
Now we consider Bloch wave functions and derivation of local observables.
Calculations are carried out at a given energy $\varepsilon$=0.025 nm-1 that
passes through the two Dirac points at $pR=\pm$0.13. There are four wave
functions $\\{\psi_{p_{i}}(x)\\},\ i=1,2,3,4$, corresponding to the four
points $\\{p_{i}\\}$ at which the constant energy line crosses the four
dispersion curves. The expressions of the wave functions are given in Eq. (5),
wherein the coefficients $\\{a_{ns}\\}$ are the components of the vector ${\bf a}$ [defined in Eq. (6)] that is an eigenvector of $T$ with eigenvalue $e^{ip_{i}R}$. Similarly, the coefficients $\\{b_{ns}\\}$ are the components of the vector ${\bf b}$ defined after Eq. (8).
An operator $\hat{O}$ is representable as a 4$\times$4 hermitian matrix in
${\bm{\sigma}}\otimes{\bm{\tau}}$ (spin $\otimes$ isospin) space. Local
observables are obtained by
$O(x)=\frac{1}{4}\sum_{i=1}^{4}\psi_{p_{i}}^{\dagger}(x)\hat{O}\psi_{p_{i}}(x).$
(11)
(This is not an expectation value; the observables may depend on $x$.) Below we
will consider operators of charge density, charge current (or velocity), spin
density and spin current density, and check the space dependence of the
corresponding observables.
For the charge density, the relevant operator is $\hat{I}_{4\times 4}$ and the
density is then
$\rho(x)=\frac{1}{4}\sum_{i=1}^{4}\psi_{p_{i}}^{\dagger}(x)\psi_{p_{i}}(x)$.
$\rho(x)$ is shown in Figs. 2(a) for $\lambda=0.00016$ (nm)-1 and 2(b) for
$\lambda=0.0016$ (nm)-1. Note the concentration of oscillations around 1. The
reason is that Bloch waves propagate in the longitudinal direction (recall
that $k_{y}=0$). In the absence of RSOC the Klein paradox implies that
transmission through a barrier is unimpeded. As shown in Ref. [60], in the
presence of RSOC the transmission is still high but not perfect. Increasing
$\lambda$ implies larger oscillation amplitudes. This is manifested here by
noting that the amplitude of oscillations of the density at higher $\lambda$
as shown in Fig. 2(b), is larger than those for $\lambda\to 0$ in Fig. 2(a).
The higher frequency in the barrier region $0<x<d$ (compared with the spacing region $-\ell<x<0$) reflects the inequality of wave numbers $q_{xn}>k_{xn}$, see Eq. (4).
Next we consider the velocity operator (which is also the charge current),
$\hat{\bf V}={\bf I}_{2\times 2}\otimes{\bm{\tau}}.$ (12)
As expected, we find that $V_{x}=0$, due to left-right symmetry. Also,
$V_{y}=0$ because we have chosen $k_{y}=0$. However, the velocity operator
will contribute to the spin current density (see below).
Figure 2: Density $\rho(x)$ in the unit cell $-\ell<x<d$ for (a) $\lambda=0.00016$ (nm)${}^{-1}=0.10544$ meV and (b) $\lambda=0.0016$ (nm)${}^{-1}=1.0544$ meV.
The spin density operators $\hat{\bf S}$ (from which the spin density
observables ${\bf S}(x)$ are derived via Eq. (11)), are given by, $\hat{\bf
S}=(\hat{S}_{x},\hat{S}_{y},\hat{S}_{z})=\tfrac{1}{2}\hbar{\bm{\sigma}}\otimes{\bf
I}_{2}.$ The unit of the observable ${\bf S}(x)$ is $S_{0}=\hbar/A$. In the present case, however, it is found that ${\bf S}=0$: for $S_{x}$, no spin density is expected along the direction of motion; for $S_{z}$, no spin density is expected perpendicular to the SLG plane; and for $S_{y}$, the four contributions in Eq. (11) cancel.
Now let us focus on the spin current density. The corresponding operator is the tensor $\hat{J}_{ij}$, from which the spin current density observables $J_{ij}(x)$ are derived via Eq. (11); it is defined as
$\hat{J}_{ij}=\frac{1}{2}\\{\hat{S}_{i},\hat{V}_{j}\\},$ (13)
where $\hat{\bf S}$ is the spin density operator defined above, and $\hat{\bf V}={\bf I}_{2}\otimes{\bm{\tau}}$ is the velocity operator defined in Eq. (12). In Eq. (13), $i=1,2,3=x,y,z$ specifies the polarization direction, and $j=1,2=x,y$ specifies the axis along which electrons propagate. The unit of
spin current density observables $J_{ij}$ is $J_{0}=S_{0}v_{F}=\gamma/A$
meV/nm.
Our calculations show that the only non-zero spin current density observables are the non-diagonal elements, explicitly $J_{xy}(x)=J_{yx}(x)$, $J_{zx}(x)$ and $J_{zy}(x)$. They are shown in Figs. 3, 4 and 5, respectively, for $\lambda=0.00016$ (nm)${}^{-1}=0.10544$ meV in panel (a) and $\lambda=0.0016$ (nm)${}^{-1}=1.0544$ meV in panel (b). Note that (1) Despite the fact
that ${\bf V}={\bf S}=0$, the spin current density does not vanish. (2)
Increasing $\lambda$ by a factor 10 increases the amplitudes of the spin
current density by a factor of about 15 for $J_{xy}$ and $J_{zx}$ and about 50
for $J_{zy}$. (3) The spin current densities have a rich space dependence
implying a non-zero torque, see below.
Figure 3: Spin current density $J_{xy}(x)$ in the unit cell $-\ell<x<d$ for (a) $\lambda=0.00016$ (nm)${}^{-1}=0.10544$ meV and (b) $\lambda=0.0016$ (nm)${}^{-1}=1.0544$ meV. The ratio of the amplitude of $J_{xy}$ in (b) to that in (a) is about 15. Note that $J_{yx}(x)=J_{xy}(x)$.
Figure 4: Spin current density $J_{zx}(x)$ in the unit cell $-\ell<x<d$ for (a) $\lambda=0.00016$ (nm)${}^{-1}=0.10544$ meV and (b) $\lambda=0.0016$ (nm)${}^{-1}=1.0544$ meV. The ratio of the amplitude of $J_{zx}$ in (b) to that in (a) is about 15.
Figure 5: Spin current density $J_{zy}(x)$ in the unit cell $-\ell<x<d$ for (a) $\lambda=0.00016$ (nm)${}^{-1}=0.10544$ meV and (b) $\lambda=0.0016$ (nm)${}^{-1}=1.0544$ meV. The ratio of the amplitude of $J_{zy}$ in (b) to that in (a) is about 50.
The spin current density was calculated in bulk SLG in Ref. [47]. The authors
found that (1) $J_{xx}=J_{yy}=J_{zx}=J_{zy}=0$, (2) $J_{xy}=-J_{yx}$, and (3)
the spin currents are not space dependent (see Eq. (5) in Ref. [47]). In our
calculations it is shown that in the presence of a 1D potential (where there
is no rotational symmetry around the $z$-axis), the symmetry relation (valid
in bulk SLG [47]) is reversed, $J_{xy}=+J_{yx}$. Moreover, although the value
of $\lambda$ used in our calculations is about two orders of magnitude smaller than that used in Ref. [47], the sizes of the spin current densities in the two systems are of the same order of magnitude. Another noticeable difference from
SLG is that in the present system, spin current densities are space dependent
and the divergence of the spin current density does not vanish. The continuity
equation for the spin current density must contain a spin torque density term
[58]. As we have shown in Ref. [60], for a spin current density that depends only on $x$, the components $J_{ix}(x)$ have non-zero torque. In the present case these are $J_{yx}$ and $J_{zx}$.
Summary and Conclusion: The Klein paradox in SLG occurs when an electron at
the Fermi energy $\varepsilon$ tunnels through a 1D potential barrier of
height $u_{0}$ (which can be experimentally controlled by a gate voltage) in
the region $u_{0}>\varepsilon>0$. When, in addition, a uniform perpendicular
electric field ${\bf E}=E_{0}\hat{\bf z}$ is applied, the role of electron
spin enters due to RSOC. This system was studied in relation to transmission
[33, 39, 56] and spin current densities [56] with the quest to reveal
interesting facets of graphene spintronics within a time-reversal invariant
formalism. Its study is appealing because $u_{0}$ and the RSOC strength $\lambda$ can be experimentally controlled, making the predictions verifiable.
The present work targets graphene spintronics not through the properties of
transmission, but rather through the properties of the stationary states. It
combines the four pillars of 2D Dirac electrons, Klein Paradox, Bloch theorem
and RSOC, and establishes a theoretical framework with predictive power. It
presents a plethora of observables that can be experimentally tested. It is
shown that RSOC results in an unusual Bloch dispersion band structure with two
Dirac points and a narrow semi-metallic band between the valence and
conduction bands. Spin observables are calculated and shown to have different
properties than those found in bulk SLG. In particular, spin current density
exists also if the polarization direction is perpendicular to the graphene
plane. Moreover, despite the upper (experimental) limit on the strength
$\lambda$ of the RSOC, the size of the spin current density is not small. In
addition, the spin current density has non-trivial space dependence along the
periodic lattice direction, implying the occurrence of finite and oscillating
spin torque density [58].
This work is partially motivated by the quest for constructing spintronic
devices without the use of an external magnetic field or magnetic materials
(in addition to the many references mentioned above, see also Refs. [64, 65,
66]). We hope that our results advance this goal. The sensitivity of the
spectrum $\varepsilon(p)$ and the components $J_{ij}(x)$ of the spin current
density tensor to variation of the RSOC strength $\lambda$ are particularly
promising aspects in this regard.
Acknowledgement: Discussions with J. Nitta, J. Fabian, K. Richter and M. H.
Liu are highly appreciated, and were indispensable for understanding some
crucial issues.
## References
* [1] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Science 306, 666 (2004); A. K. Geim and K. S. Novoselov Nature Mater. 6 183 (2007); A. K. Geim Science 324, 1530 (2009).
* [2] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
* [3] S. Das Sarma, Shaffique Adam, E. H. Hwang, and Enrico Rossi, Rev. Mod. Phys. 83, 407 (2011).
* [4] O. Klein, Z. Phys. 53 157 (1929); C. W. J. Beenakker, Rev. Mod. Phys. 80, 1337 (2008).
* [5] M. I. Katsnelson, K. S. Novoselov, and A. K. Geim, Nature Physics 2, 620 (2006).
* [6] T. Tudorovskiy, K J. A. Reijnders and M. I. Katsnelson, Phys. Scr. T146, 014010 (2012).
* [7] Pierre E. Allain and J-N. Fuchs, Eur. Phys. J. B 83, 301 (2011).
* [8] T. Ando, and T. Nakanishi, J. Phys. Soc. Jpn. 67, 1704 (1998).
* [9] J. M. Pereira Jr., V. Molnar, F. M. Peeters, P. Vasilopoulos, Phys. Rev. B 74, 045424 (2006).
* [10] M. Barbier, P. Vasilopoulos, and F. M. Peeters, Philosophical Transactions: Mathematical, Physical and Engineering Sciences 368, No. 193, 5499 (2010), arXiv:1101.4117.
* [11] Y. Avishai and Y. B. Band, Phys. Rev. B102, 085435 (2020).
* [12] H. D.Huertas, F. Guinea and A. Brataas, Phys. Rev. B 74, 155426 (2006).
* [13] H. Min, J.E. Hill, N.A. Shinitsyn, B.R. Sahu, L. Kleinman, and A.H. MacDonald, Phys. Rev. B 74, 165310 (2006).
* [14] Y. Yao, F. Ye, X. L. Qi, S. C. Zhang, and Z. Fang, Phys. Rev. B 75, 041401(R) (2007).
* [15] E. V. Castro, K. S. Novoselov, S. V. Morozov, N. M. R. Peres, J. M. B. L. dos Santos, J. Nilsson, F. Guinea, A. K. Geim, and A. H. Castro Neto, Phys. Rev. Lett. 99, 216802 (2007).
* [16] B. Trauzettel, D. V. Bulaev, D. Loss, and G. Burkard, Nature Phys. 3, 192 (2007).
* [17] N. Tombros, C. Jozsa, M. Popinciuc, H. T. Jonkman, and B. J. van Wees, Nature 448, 571 (2007).
* [18] N. Tombros et al., Phys. Rev. Lett. 101, 046601 (2008).
* [19] S. Cho, Y. Chen, and M. S. Fuhrer, Appl. Phys. Lett. 91, 123105 (2007).
* [20] M. Zarea and N. Sandler, Phys. Rev. B 79, 165442 (2009).
* [21] E. I. Rashba, Phys. Rev. B 79, 161409(R) (2009).
* [22] M.-H. Liu and C.-R. Chang, Phys. Rev. B 80, 241304(R) (2009).
* [23] M. Gmitra, S. Konschuh, C. Ertler, C. Ambrosch-Draxl, and J. Fabian, Phys. Rev. B 80, 235431 (2009).
* [24] D. Bercioux and De Martino, Phys. Rev. B 81, 165410 (2010).
* [25] S. Konschuh, M. Gmitra, and J. Fabian, Phys. Rev. B 82, 245412 (2010).
* [26] F. Schwierz, Nature Nanotechnol. 5, 487 (2010).
* [27] W.-K. Tse, Z. Qiao, Y. Yao A. H. MacDonald, and Q. Niu, Phys. Rev. B 83, 155447 (2011).
* [28] S. Konschuh, Spin-Orbit Coupling Effects From Graphene To Graphite, Ph.D. Thesis, Universität Regensburg (2011).
* [29] G. Miao, M. Münzenberg, and J. S. Moodera, Rep. Prog. Phys. 74, 036501 (2011).
* [30] S. Jo, D. Ki, D. Jeong, H. Lee, and S. Kettemann, Phys. Rev. B 84, 075453 (2011).
* [31] J. F. Liu, B. K. S. Chan and J. Wang, Nanotechnology, 23(9):095201 (2012).
* [32] D. Marchenko, A. Varykhalov, M.R. Scholz, G. Bihlmayer, E.I. Rashba, A. Rybkin, A.M. Shikin and O. Rader, Nature Communications 3, 1232 (2012).
* [33] Ming-Hao Liu, Jan Bundesmann, and Klaus Richter, Phys. Rev. B 85, 085406 (2012).
* [34] A. K. Patra et al., Phys. Lett. 101, 162407 (2012).
* [35] B. Dulbak et al., Nature Phys. 8, 557 (2012).
* [36] P. J. Zomer et al., Phys. Rev. B 86, 161416(R) (2012).
* [37] Q. Zhang, K. S. Chan, Z. Lin and J. F. Liu, Phys. Lett. A 377, 632 (2013).
* [38] L. Lenz, D. F. Urban and D. Bercioux, The European Physical Journal B 86, 502 (2013).
* [39] K. Shakouri, M. R. Masir, A. Jellal, E. B. Choubabi, and F. M. Peeters, Phys. Rev. B 88, 115408 (2013).
* [40] Z. Tang, E. Shikoh, H. Ago, K. Kawahara, Y. Ando, T. Shinjo, and M. Shiraishi, Phys. Rev. B 87, 140401 (2013).
* [41] W. Han, R. K. Kawakami, M. Gmitra and J. Fabian, Nature Nanotechnology 9, 794 (2014).
* [42] D. Kochan, M. Gmitra, and J. Fabian, Phys. Rev. Lett. 112, 116602 (2014).
* [43] et al., Nature Phys. 10, 857 (2014).
* [44] M. Gmitra and J. Fabian, Phys. Rev. B 92, 155403 (2015).
* [45] A. C Ferrari et al., Nanoscale 7, 4598 (2015).
* [46] S. Roch et al. 2D Mater. 2, 030202 (2015).
* [47] H. Zhang, Z. Ma and J. F. Liu, Scientific Reports 4, 6464 (2015).
* [48] M. Dröegler et al., Nano Lett. 16, 3533 (2016).
* [49] A. Avsar et. al., NPG Asia Mater. 8, 274 (2016).
* [50] J. Ingla-Aynés, R. J. Meijerink, and B. J. van Wees, Nano Lett. 16, 4825 (2016).
* [51] B. Berche, F. Mireles, and E. Medina, Condensed Matter Physics 20, 13702 (2017).
* [52] X. Li, Z. Wu and J. Liu, Scientific Reports 7, 6526 (2017).
* [53] Xiaoyang Lin et al., Phys. Rev. Appl. 8, 034006 (2017).
* [54] A. M. Afzal, K. H. Min, B. M. Ko and J. Eom, RSC Adv. 9, 31797 (2019).
* [55] S. Qi, Y. Han, F. Xu, X. Xu and Z. Qiao, Phys. Rev. B 99, 195439 (2019).
* [56] Y. Avishai and Y. B. Band, arXiv:2012.10971.
* [57] E. I. Rashba, Sov. Phys. Solid State 2, 1109 (1960); Y. A. Bychkov and E. I. Rashba, JETP Lett. 39, 78 (1984).
* [58] J. Shi, P. Zhang, D. Xiao, and Q. Niu, Phys. Rev. Lett. 96, 076604 (2006).
* [59] A. Shnirman, Geometric phases and spin-orbit effects, Lecture 2 in Karlsruhe Institute of Technology.
* [60] Y. Avishai and Y. B. Band Arxiv (2020).
* [61] D. D. Awschalom, D. Loss, and N. Samarth, eds., Semiconductor Spintronics and Quantum Computation (Springer, Berlin, 2002).
* [62] I. Zutic, J. Fabian, and S. Das Sarma, Rev. Mod. Phys. 76, 323 (2004).
* [63] H. A. Engel E. I. Rashba and B. I. Halperin, arXiv:cond-mat/0603306.
* [64] N. Hatano, R. Shirasaki, H. Nakamura, Phys. Rev. A 75, 032107 2007.
* [65] S. Matityahu, Y. Utsumi, A. Aharony, O. Entin-Wohlman and C. A. Balseiro, Phys. Rev. B 93, 075407 (2016).
* [66] Y. Avishai and Y. B. Band, Phys. Rev. B 95, 104429 (2017).
|
# Continual Learning of Generative Models with Limited Data: From
Wasserstein-1 Barycenter to Adaptive Coalescence
Mehmet Dedeoglu Sen Lin Zhaofeng Zhang Junshan Zhang
###### Abstract
Learning generative models is challenging for a network edge node with limited
data and computing power. Since tasks in similar environments share model
similarity, it is plausible to leverage pre-trained generative models from the
cloud or other edge nodes. Appealing to optimal transport theory tailored
towards Wasserstein-1 generative adversarial networks (WGAN), this study aims
to develop a framework which systematically optimizes continual learning of
generative models using local data at the edge node while exploiting adaptive
coalescence of pre-trained generative models. Specifically, by treating the
knowledge transfer from other nodes as Wasserstein balls centered around their
pre-trained models, continual learning of generative models is cast as a
constrained optimization problem, which is further reduced to a Wasserstein-1
barycenter problem. A two-stage approach is devised accordingly: 1) The
barycenters among the pre-trained models are computed offline, where
displacement interpolation is used as the theoretic foundation for finding
adaptive barycenters via a “recursive” WGAN configuration; 2) the barycenter
computed offline is used as meta-model initialization for continual learning
and then fast adaptation is carried out to find the generative model using the
local samples at the target edge node. Finally, a weight ternarization method,
based on joint optimization of weights and threshold for quantization, is
developed to compress the generative model further.
Continual Learning, Wasserstein Barycenter, Edge Learning, Wasserstein
Generative Adversarial Networks
## 1 Introduction
The past few years have witnessed an explosive growth of Internet of Things
(IoT) devices at the network edge. On the grounds that the cloud has abundant
computing resources, the conventional method for AI at the network edge is
that the cloud trains the AI models with the data uploaded from edge devices,
and then pushes the models back to the edge for on-device inference (e.g.,
Google Edge TPU). However, an emerging view is that this approach suffers from
overwhelming communication overhead incurred by the data transmission from
edge devices to the cloud, as well as potential privacy leakage. It is
therefore of great interest to obtain generative models for the edge data,
because they require a smaller number of parameters than the data volume and
it is much more parsimonious compared to sending the edge data to the cloud,
and further they can also help to preserve data privacy. Taking a forward-
looking view, this study focuses on continual learning of generative models at
edge nodes.
There are a variety of edge devices and edge servers, ranging from self-
driving cars to robots, from 5G base station servers to mobile phones. Many
edge AI applications (e.g., autonomous driving, smart robots, safety-critical
health applications, and augmented/virtual reality) require edge intelligence
and continual learning capability via fast adaptation with local data samples
so as to adapt to dynamic application environments. Although deep generative
models can parametrize high dimensional data samples at edge nodes
effectively, it is often not feasible for a single edge server to train a deep
generative model from scratch, which would require enormous
training data and high computational power (Yonetani et al., 2019; Wang et
al., 2018b, 2020). A general consensus is that learning tasks across different
edge nodes often share some model similarity. For instance, different robots
may perform similar coordination behaviors according to the environment
changes. With this insight, we advocate utilizing the pre-trained generative
models from other edge nodes to speed up the learning at a given edge
node, and seek to answer the following critical questions: (1) “What is the
right abstraction of knowledge from multiple pre-trained models for continual
learning?” and (2) “How can an edge server leverage this knowledge for
continual learning of a generative model?”
The key to answering the first question lies in efficient model fusion of
multiple pre-trained generative models. A common approach is the _ensemble
method_ (Breiman, 1996; Schapire, 1999) where the outputs of different models
are aggregated to improve the prediction performance. However, this requires
the edge server to maintain all the pre-trained models and run each of them,
which would outweigh the resources available at edge servers. Another way for
model fusion is direct _weight averaging_ (Smith & Gashler, 2017; Leontev et
al., 2020). Because the weights in neural networks are highly redundant and no
one-to-one correspondence exists between the weights of two different neural
networks, this method is known to yield poor performance even if the networks
represent the same function of the input. As for the second question, transfer
learning is a promising paradigm where an edge node incorporates knowledge
from the cloud or another node into training with its local samples
(Wang et al., 2018b, 2020; Yonetani et al., 2019). Notably, recent work on
Transferring GANs (Wang et al., 2018b) proposed several transfer
configurations to leverage pre-trained GANs to accelerate the learning
process. However, since the transferred GAN is used only as initialization,
Transferring GANs suffers from catastrophic forgetting.
Figure 1: Continual learning of generative models based on coalescence of pre-
trained generative models $\\{\mu_{k},k=1,\ldots,K\\}$ and local dataset at
Node 0 (denoted by $\hat{\mu}_{0}$).
To tackle these challenges, this work aims to develop a framework which
explicitly optimizes the continual learning of generative models for the edge,
based on the adaptive coalescence of pre-trained generative models from other
edge nodes, using optimal transport theory tailored towards GANs. To mitigate
the mode collapse problem due to the vanishing gradients, multiple GAN
configurations have been proposed based on the Wasserstein-$p$ metric $W_{p}$,
including Wasserstein-1 distance (Arjovsky et al., 2017) and Wasserstein-2
distance (Leygonie et al., 2019; Liu et al., 2019). Although Wasserstein-2
GANs are analytically tractable, their implementation often requires
regularization and is often outperformed by the Wasserstein-1 GAN (W1GAN).
With this insight, in this paper we focus on the W1GAN (WGAN refers to W1GAN
throughout).
Specifically, we consider a setting where an edge node, denoted Node 0, aims
to learn a generative model. It has been shown that training a WGAN is
intimately related to finding a distribution minimizing the Wasserstein
distance from the underlying distribution $\mu_{0}$ (Arora et al., 2017). In
practice, an edge node has only a limited number of samples with empirical
distribution $\hat{\mu}_{0}$, which is distant from $\mu_{0}$. A naive
approach is to train a WGAN based on the limited local samples only, which can
be captured via the optimization problem given by
$\min\nolimits_{\nu\in\mathcal{P}}~{}W_{1}(\nu,\hat{\mu}_{0})$, with
$W_{1}(\cdot,\cdot)$ being the Wasserstein-$1$ distance between two
distributions. The best possible outcome of solving this optimization problem
can generate a distribution very close to $\hat{\mu}_{0}$, which however could
still be far away from the true distribution ${\mu}_{0}$. Clearly, training a
WGAN simply based on the limited local samples at an edge node would not work
well.
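For intuition (a toy sketch, not part of the paper), the Wasserstein-1 distance between equal-size 1-D empirical samples reduces to the mean absolute difference of sorted values; this makes the pitfall concrete: a generator can achieve $W_{1}(\nu,\hat{\mu}_{0})=0$ while staying far from $\mu_{0}$.

```python
def w1_empirical(xs, ys):
    """Wasserstein-1 distance between equal-size 1-D empirical samples.

    For equal-size samples the optimal coupling matches sorted order,
    so W1 reduces to the mean absolute difference of sorted values.
    """
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# A model fit only to a few local samples can match them exactly
# (W1 = 0) while remaining far from the true distribution mu_0.
local = [0.0, 0.5, 1.0]        # limited samples, i.e., mu_0_hat
true_like = [2.0, 2.5, 3.0]    # stand-in for samples from mu_0
print(w1_empirical(local, local))      # 0.0
print(w1_empirical(local, true_like))  # 2.0
```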
As alluded to earlier, learning tasks across different edge nodes may share
model similarity. To facilitate the continual learning at Node 0, pre-trained
generative models from other related edge nodes can be leveraged via knowledge
transfer. Without loss of generality, we assume that there are a set
$\mathcal{K}$ of $K$ edge nodes with pre-trained generative models. Since one
of the most appealing benefits of WGANs is the ability to continuously
estimate the Wasserstein distance during training (Arjovsky et al., 2017), we
assume that the knowledge transfer from Node $k$ to Node 0 is in the form of a
Wasserstein ball with radius $\eta_{k}$ centered around its pre-trained
generative model $\mu_{k}$ at Node $k$, for $k=1,\ldots,K$. Intuitively,
radius $\eta_{k}$ represents the relevance (hence utility) of the knowledge
transfer, and the smaller it is, the more informative the corresponding
Wasserstein ball is. Building on this knowledge transfer model, we treat the
continual learning problem at Node 0 as the coalescence of $K$ generative
models and empirical distribution $\hat{\mu}_{0}$ (Figure 1), and cast it as
the following constrained optimization problem:
$\underset{\nu\in\mathcal{P}}{\min}~{}W_{1}(\nu,\hat{\mu}_{0}),~{}\text{s.t.}~{}~{}W_{1}(\nu,\mu_{k})\leq\eta_{k},\forall
k\in\mathcal{K}.$ (1)
Observe that the constraints in problem (1) dictate that the optimal coalesced
generative model, denoted by $\nu^{*}$, lies within the intersection of $K$
Wasserstein balls (centered around $\\{\mu_{k}\\}$), exploiting the knowledge
transfer systematically. It is worth noting that the optimization problem (1)
can be extended to other distance functionals, e.g., Jensen-Shannon
divergence.
The contributions of this work are summarized as follows.
1) We propose a systematic framework to enable continual learning of
generative models via adaptive coalescence of pre-trained generative models
from other edge nodes and local samples at Node 0. In particular, by treating
the knowledge transferred from each node as a Wasserstein ball centered around
its local pre-trained generative model, we cast the problem as a constrained
optimization problem which optimizes the continual learning of generative
models.
2) Applying Lagrangian relaxation to (1), we reduce the optimization problem
to finding a Wasserstein-1 barycenter of $K+1$ probability measures, among
which $K$ of them are pre-trained generative models and the last one is the
empirical distribution (not a generative model though) corresponding to local
data samples at Node 0. We propose a barycentric fast adaptation approach to
efficiently solve the barycenter problem, where the barycenter $\nu_{K}^{*}$
for the $K$ pre-trained generative models is found recursively offline in the
cloud, and then the barycenter between the empirical distribution
$\hat{\mu}_{0}$ of Node 0 and $\nu_{K}^{*}$ is solved via fast adaptation at
Node 0. A salient feature in this proposed barycentric approach is that
generative replay, enabled by pre-trained GANs, is used to mitigate
catastrophic forgetting.
3) It is known that the Wasserstein-1 barycenter is notoriously difficult to
analyze, partly because of the existence of infinitely many minimizers of the
Monge Problem. Appealing to optimal transport theory, we use displacement
interpolation as the theoretic foundation to devise recursive algorithms for
finding adaptive barycenters, which ensures the resulting barycenters lie in
the baryregion.
4) From the implementation perspective, we introduce a “recursive” WGAN
configuration, where a 2-discriminator WGAN is used per recursive step to find
adaptive barycenters sequentially. Then the resulting barycenter in offline
training is treated as the meta-model initialization and fast adaptation is
carried out to find the generative model using the local samples at Node 0. A
weight ternarization method, based on joint optimization of weights and
threshold for quantization, is developed to compress the generative model and
enable efficient edge learning. Extensive experiments corroborate the efficacy
of the proposed framework for fast edge learning of generative models.
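The weight ternarization step mentioned above can be sketched as follows. This is a generic magnitude-threshold ternarizer that maps weights to $\{-\alpha, 0, +\alpha\}$ with a closed-form $\alpha$; the paper's method jointly optimizes the weights and the threshold, a detail this toy omits.

```python
def ternarize(weights, delta):
    """Ternarize weights to {-alpha, 0, +alpha} given a threshold delta.

    alpha is set to the mean magnitude of the surviving weights, a common
    closed-form choice; jointly optimizing weights and threshold (as in
    the paper) is not modeled here.
    """
    kept = [abs(w) for w in weights if abs(w) > delta]
    if not kept:
        return [0.0] * len(weights)
    alpha = sum(kept) / len(kept)
    return [alpha if w > delta else -alpha if w < -delta else 0.0
            for w in weights]

w = [0.9, -0.8, 0.05, -0.02, 0.7]
print(ternarize(w, 0.1))   # large weights -> +/-alpha, small weights -> 0
```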
Figure 2: The illustrations of image morphing using 3 different techniques:
$I$) Barycentric fast adaptation, $II$) Transferring GANs and $III$) Ensemble
method. $5000$ samples from classes “2” and “9” in MNIST are used in
experiments and horizontal axis represents training iterations.
The proposed barycentric fast adaptation approach is useful for many
applications, including image morphing (Simon & Aberdam, 2020), clustering
(Cuturi & Doucet, 2014), super resolution (Ledig et al., 2017) and privacy-
aware synthetic data generation (Shrivastava et al., 2017) at edge nodes. To
get a more concrete sense, Figure 2 illustrates a comparison of image morphing
using three methods, namely _barycentric fast adaptation_ , _Transferring
GANs_ and _ensemble_. Observe that Transferring GANs quickly morphs images
from class “2” to class “9”, but forgets the previous knowledge. In
contrast, barycentric fast adaptation morphs class “2” to a barycenter model
between the two classes “2” and “9,” because it uses generative replay in the
training (we will elaborate further on this in the WGAN configuration), thus
mitigating catastrophic forgetting. The ensemble method learns both classes
“2” and “9” at the end, but its morphing process takes longer.
## 2 Related Work
Optimal transport theory has recently been studied for deep learning
applications (see, e.g., (Brenier, 1991; Ambrosio et al., 2008; Villani,
2008)). (Agueh & Carlier, 2011) has developed an analytical solution to the
Wasserstein barycenter problem. Aiming to solve the Wasserstein barycenter
problem, (Cuturi, 2013; Cuturi & Doucet, 2014; Cuturi & Peyré, 2016) proposed
smoothing through entropy regularization for the discrete setting, based on
linear programming. (Srivastava et al., 2015) employed posterior sampling
algorithms in studying Wasserstein barycenters, and (Anderes et al., 2016)
characterized Wasserstein barycenters for the discrete setting (cf. (Staib et
al., 2017; Ye et al., 2017; Singh & Jaggi, 2019)). GANs (Goodfellow et al.,
2014) have recently emerged as a powerful deep learning tool for obtaining
generative models. Recent work (Arjovsky et al., 2017) has introduced
Wasserstein metric in GANs, which can help mitigate the vanishing gradient
issue to avoid mode collapse. Though weight clipping is applied to enforce
the $1$-Lipschitz condition, it may still lead to non-convergence; (Gulrajani
et al., 2017) proposed a gradient penalty to overcome the shortcomings of
weight clipping. Using optimal transport theory, recent advances of
Wasserstein GANs have shed light on understanding generative models. Recent
works (Leygonie et al., 2019; Liu et al., 2019) proposed two distinct
transport theory based GANs using 2-Wasserstein distance. Furthermore, (Lei et
al., 2017) devised a computationally efficient method for computing the
generator when the cost function is convex. In contrast, for the Wasserstein-1
GAN, the corresponding discriminator may constitute one of infinitely many
optimal maps from underlying empirical data distribution to the generative
model (Ambrosio et al., 2008; Villani, 2008), and it remains open to decipher
the relation between the model training and the optimal transport maps. Along
a different line, a variety of techniques have been proposed for more robust
training of GANs (Qi et al., 2019; Yonetani et al., 2019; Durugkar et al.,
2016; Simon & Aberdam, 2020).
Pushing the AI frontier to the network edge for achieving edge intelligence
has recently emerged as the marriage of AI and edge computing (Zhou et al.,
2019). Yet, the field of edge intelligence is still in its infancy stage and
there are significant challenges since AI model training generally requires
tremendous resources that greatly outweigh the capability of resource-limited
edge nodes. To address this, various approaches have been proposed in the
literature, including model compression (Shafiee et al., 2017; Yang et al.,
2017; Wang et al., 2019), knowledge transfer learning (Osia et al., 2020; Wang
et al., 2018a), hardware acceleration (Venkataramani et al., 2017; Wang et
al., 2017), collaboration-based methods (Lin et al., 2020; Zhang et al.,
2020), etc. Different from these existing studies, this work focuses on
continual learning of generative models at the edge node. Rather than learning
the new model from scratch, continual learning aims to design algorithms
leveraging knowledge transfer from pre-trained models to the new learning task
(Thrun, 1995), assuming that the training data of previous tasks are
unavailable for the newly coming task. Clearly, continual learning fits
naturally in edge learning applications. Notably, the elastic weight
consolidation method (Kirkpatrick et al., 2017; Zenke et al., 2017) estimates
importance of all neural network parameters and encodes it into the Fisher
information matrix, and changes of important parameters are penalized during
the training of later tasks. Generative replay is gaining more attention where
synthetic samples corresponding to earlier tasks are obtained with a
generative model and replayed in model training for the new task to mitigate
forgetting (Rolnick et al., 2019; Rebuffi et al., 2017). In this work, by
learning generative models via the adaptive coalescence of pre-trained
generative models from other nodes, the proposed “recursive” WGAN
configuration facilitates fast edge learning in a continual manner, which can
be viewed as an innovative integration of a few key ideas in continual
learning, including the replay method (Shin et al., 2017; Wu et al., 2018;
Ostapenko et al., 2019; Riemer et al., 2019) which generates pseudo-samples
using generative models, and the regularization-based methods (Kirkpatrick et
al., 2017; Lee et al., 2017; Schwarz et al., 2018; Dhar et al., 2019) which
sets the regularization for the model learning based on the learned knowledge
from previous tasks, in continual learning (De Lange et al., 2019).
## 3 Adaptive Coalescence of Generative Models: A Wasserstein-1 Barycenter
Approach
In what follows, we first recast problem (1) as a variant of the Wasserstein
barycenter problem. Then, we propose a two-stage recursive algorithm,
characterize the geometric properties of geodesic curves therein and use
displacement interpolation as the foundation to devise recursive algorithms
for finding adaptive barycenters.
### 3.1 A Wasserstein-1 barycenter formulation via Lagrangian relaxation
Observe that the Lagrangian for (1) is given as follows:
$\mathcal{L}(\\{\lambda_{k}\\},\nu)=W_{1}(\nu,\hat{\mu}_{0})+\sum\limits_{k=1}^{K}\lambda_{k}W_{1}(\nu,\mu_{k})-\sum\limits_{k=1}^{K}\lambda_{k}\eta_{k},\vspace{-0.08in}$
(2)
where $\\{\lambda_{k}\geq 0\\}_{1:K}$ are the Lagrange multipliers. Based on
(Volpi et al., 2018), problem (1) can be solved by using the following
Lagrangian relaxation with $\lambda_{k}=\frac{1}{\eta_{k}},\forall
k\in\mathcal{K}$, and $\lambda_{0}=1$:
$\underset{\nu\in\mathcal{P}}{\min}~{}\sum\limits_{k=1}^{K}\frac{1}{\eta_{k}}W_{1}(\nu,\mu_{k})+W_{1}(\nu,\hat{\mu}_{0}).\vspace{-0.08in}$
(3)
It is shown in (Sinha et al., 2017) that the selection
$\lambda_{k}=\frac{1}{\eta_{k}},\forall k\in\mathcal{K}$ ensures the same
levels of robustness for (3) and (1). Intuitively, such a selection of
$\\{\lambda_{k}\\}_{0:K}$ strikes a right balance, in the sense that larger
weights are assigned to the knowledge transfer models (based on the pre-
trained generative models $\\{\mu_{k}\\}$) from the nodes with higher
relevance, captured by smaller Wasserstein-1 ball radii. For given
$\\{\lambda_{k}\geq 0\\}$, (3) turns out to be a Wasserstein-1 barycenter
problem (cf. (Agueh & Carlier, 2011; Srivastava et al., 2015)), with the new
complication that $\hat{\mu}_{0}$ is an empirical distribution corresponding
to local samples at Node 0. Since $\hat{\mu}_{0}$ is not a generative model
per se, its coalescence with other $K$ general models is challenging.
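Assuming each model is represented by an equal-size 1-D sample (a toy stand-in for generated data, not the paper's implementation), the relaxed objective (3) for a candidate $\nu$ can be evaluated directly; note how a smaller radius $\eta_{k}$ inflates the weight $1/\eta_{k}$ of the corresponding transfer term.

```python
def w1_empirical(xs, ys):
    # equal-size 1-D samples: W1 is the mean absolute sorted difference
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def relaxed_objective(nu, mu0_hat, pretrained, radii):
    """Objective (3): W1(nu, mu0_hat) + sum_k (1/eta_k) * W1(nu, mu_k)."""
    cost = w1_empirical(nu, mu0_hat)
    for mu_k, eta_k in zip(pretrained, radii):
        cost += w1_empirical(nu, mu_k) / eta_k
    return cost

# a more relevant model (smaller eta_k) contributes a larger weight
nu, mu0_hat = [1.0, 2.0], [1.0, 2.0]
print(relaxed_objective(nu, mu0_hat, [[2.0, 3.0]], [2.0]))  # 0 + 1/2 = 0.5
print(relaxed_objective(nu, mu0_hat, [[2.0, 3.0]], [0.5]))  # 0 + 1/0.5 = 2.0
```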
### 3.2 A Two-stage adaptive coalescence approach for Wasserstein-1
barycenter problem
Based on (3), we take a two-stage approach to enable efficient learning of the
generative model at edge Node 0. The primary objective of Stage I is to find
the barycenter for $K$ pre-trained generative models
$\\{\mu_{1},\ldots,\mu_{K}\\}$. Clearly, the ensemble method would not work
well due to required memory and computational resources. With this insight, we
develop a recursive algorithm for adaptive coalescence of pre-trained
generative models. In Stage II, the resulting barycenter solution in Stage I
is treated as the model initialization, and is further trained using the local
samples at Node 0. We propose that the offline training in Stage I is
asynchronously performed in the cloud, and the fast adaptation in Stage II is
carried out at the edge server (in the same spirit as the model update of
Google Edge TPU), as outlined below:
Stage I: Find the barycenter of $K$ pre-trained generative models across $K$
edge nodes offline. Mathematically, this entails the solution of the following
problem:
$\displaystyle\min_{\nu\in\mathcal{P}}~{}\sum_{k=1}^{K}\frac{1}{\eta_{k}}W_{1}(\nu,\mu_{k}).$
(4)
To reduce computational complexity, we propose the following recursive
algorithm: Take $\mu_{1}$ as an initial point, i.e., $\nu^{*}_{1}=\mu_{1}$,
and let $\nu^{*}_{k-1}$ denote the barycenter of $\\{\mu_{i}\\}_{1:k-1}$
obtained at iteration $k-1$ for $k=2,\ldots,K$. Then, at each iteration $k$, a
new barycenter $\nu^{*}_{k}$ is solved between the barycenter $\nu^{*}_{k-1}$
and the pre-trained generative model $\mu_{k}$. (Details are in Algorithm 1 in
the appendix.)
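As a toy illustration of the Stage I recursion (not the paper's WGAN-based implementation), the sketch below represents each model by an equal-size 1-D sample and takes a quantile-wise convex combination per step; in 1-D such an interpolant lies on a Wasserstein-1 geodesic between the two measures, i.e., it is one of the possibly many W1 minimizers.

```python
def recursive_barycenter(models, weights):
    """Stage-I sketch: coalesce K models one at a time (cf. Algorithm 1).

    Each model is an equal-size 1-D sample; each step takes a quantile-wise
    convex combination of the running barycenter and the next model, with
    mixing weight proportional to the new model's weight (e.g., 1/eta_k).
    """
    nu, acc = sorted(models[0]), weights[0]
    for mu_k, w_k in zip(models[1:], weights[1:]):
        t = w_k / (acc + w_k)          # relative pull of the new model
        nu = [(1 - t) * a + t * b for a, b in zip(nu, sorted(mu_k))]
        acc += w_k
    return nu

# three point masses at 0, 3, 6 with equal weights coalesce near 3
print(recursive_barycenter([[0.0], [3.0], [6.0]], [1.0, 1.0, 1.0]))
```

Stage II then amounts to one more such step between the offline result and the local empirical sample at Node 0.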
Stage II: Fast adaptation to find the barycenter between $\nu_{K}^{*}$ and the
local dataset at Node 0. Given the solution $\nu_{K}^{*}$ obtained in Stage I,
we subsequently solve the following problem:
$\min_{\nu\in\mathcal{P}}~{}W_{1}(\nu,\hat{\mu}_{0})+W_{1}(\nu,\nu_{K}^{*}).$
By taking $\nu_{K}^{*}$ as the model initialization, fast adaptation based on
local samples is used to learn the generative model at Node 0. (See Algorithm
2 in the appendix.)
### 3.3 From Displacement Interpolation to Adaptive Barycenters
As noted above, in practical implementation, the W1-GAN often outperforms
Wasserstein-$p$ GANs ($p>1$). However, the Wasserstein-1 barycenter is
notoriously difficult to analyze due to the non-uniqueness of the minimizer to
the Monge Problem (Villani, 2008). Appealing to optimal transport theory, we
next characterize the performance of the proposed two-stage recursive
algorithm for finding the Wasserstein-1 barycenter of pre-trained generative
models $\\{\mu_{k},k=1,\ldots,K\\}$ and the local dataset at Node 0, by
examining the existence of the barycenter and characterizing its geometric
properties based on geodesic curves.
The seminal work (McCann, 1997) has established the existence of geodesic
curves between any two distribution functions $\sigma_{0}$ and $\sigma_{1}$ in
the $p$-Wasserstein space, $\mathcal{P}_{p}$, for $p\geq 1$. It is shown in
(Villani, 2008) that there are infinitely many minimal geodesic curves between
$\sigma_{0}$ and $\sigma_{1}$, when $p=1$. This is best illustrated in $N$
dimensional Cartesian space, where the minimal geodesic curves between
$\varsigma_{0}\in\mathbb{R}^{N}$ and $\varsigma_{1}\in\mathbb{R}^{N}$ can be
parametrized as follows:
$\varsigma_{t}=\varsigma_{0}+s(t)(\varsigma_{1}-\varsigma_{0}),$ where $s(t)$
is an arbitrary function of $t$, indicating that there are infinitely many
minimal geodesic curves between $\varsigma_{0}$ and $\varsigma_{1}$. This is
in stark contrast to the case $p>1$ where there is a unique geodesic between
$\varsigma_{0}$ and $\varsigma_{1}$. In a similar fashion, there exists
infinitely many transport maps, $T^{1}_{0}$, from $\sigma_{0}$ to $\sigma_{1}$
when $p=1$. For convenience, let $C(\sigma_{0},\sigma_{1})$ denote an
appropriate transport cost function quantifying the minimum cost to move a
unit mass from $\sigma_{0}$ to $\sigma_{1}$. It has been shown in (Villani,
2008) that when $p=1$, two interpolated distribution functions on two distinct
minimal curves may have a non-zero distance, i.e.,
$C(\hat{T}^{1}_{0}\\#\sigma_{0},\tilde{T}^{1}_{0}\\#\sigma_{0})\geq 0$, where
$\\#$ denotes the push-forward operator, thus yielding multiple minimizers to
(4). For convenience, define
$\mathcal{F}:=\hat{\mu}_{0}\cup\\{\mu_{k}\\}_{1:K}$.
###### Definition 1.
(Baryregion) Let $g_{t}(\mu_{k},\mu_{\ell})_{0\leq t\leq 1}$ denote any
minimal geodesic curve between any pair $\mu_{k},\mu_{\ell}\in\mathcal{F}$,
and define the union
$\mathcal{R}:=\bigcup_{k=1}^{K}\bigcup_{\ell=k+1}^{K+1}g_{t}(\mu_{k},\mu_{\ell})_{0\leq
t\leq 1}$. Then, the baryregion $\mathcal{B}_{\mathcal{R}}$ is given by
$\mathcal{B}_{\mathcal{R}}=\bigcup_{\sigma\in\mathcal{R}}\bigcup_{\varpi\in\mathcal{R},\varpi\not=\sigma}g_{t}(\sigma,\varpi)_{0\leq
t\leq 1}$.
Intuitively, $\mathcal{B}_{\mathcal{R}}$ encapsulates all possible
interpolations through distinct geodesics between any two distributions in
$\mathcal{R}$ or $\mathcal{F}$. Since each geodesic has finite length,
$\mathcal{B}_{\mathcal{R}}$ defines a bounded set in $\mathcal{P}_{1}$. Next
we restate in Lemma 1 the renowned _Displacement Interpolation_ result
(McCann, 1997), which sets the foundation for each recursive step in finding a
barycenter in our proposed two-stage algorithm. In particular, Lemma 1 leads
to the fact that the barycenter $\nu^{*}$ resides in
$\mathcal{B}_{\mathcal{R}}$.
###### Lemma 1.
(Displacement Interpolation, (Villani, 2003)) Let $C(\sigma_{0},\sigma_{1})$
denote the minimum transport cost between $\sigma_{0}$ and $\sigma_{1}$, and
suppose $C(\sigma_{0},\sigma_{1})$ is finite for
$\sigma_{0},\sigma_{1}\in\mathcal{P}(\mathcal{X})$. Assume that
$C(\sigma_{s},\sigma_{t})$, the minimum transport cost between $\sigma_{s}$
and $\sigma_{t}$ for any $0\leq s\leq t\leq 1$, is continuous. Then, the
following holds true for any given continuous path
$g_{t}(\sigma_{0},\sigma_{1})_{0\leq t\leq 1}$:
$C(\sigma_{t_{1}},\sigma_{t_{2}})+C(\sigma_{t_{2}},\sigma_{t_{3}})=C(\sigma_{t_{1}},\sigma_{t_{3}}),~{}~{}0\leq
t_{1}\leq t_{2}\leq t_{3}\leq 1.$ (5)
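Lemma 1 can be checked numerically in the simplest setting: a single unit point mass transported along a path $x_{t}=a+s(t)(b-a)$ for an arbitrary monotone $s$ with $s(0)=0$ and $s(1)=1$. The transport cost between interpolants is $|s(t_{2})-s(t_{1})|\,|b-a|$, which is additive as in (5).

```python
def cost(t1, t2, s, a, b):
    """Transport cost between path points at times t1 <= t2 for a unit
    point mass moving along x_t = a + s(t) * (b - a)."""
    return abs(s(t2) - s(t1)) * abs(b - a)

a, b = 0.0, 4.0
s = lambda t: t ** 2   # any monotone reparametrization, s(0)=0, s(1)=1
t1, t2, t3 = 0.2, 0.5, 0.9
lhs = cost(t1, t2, s, a, b) + cost(t2, t3, s, a, b)
rhs = cost(t1, t3, s, a, b)
print(abs(lhs - rhs) < 1e-12)  # costs along the path are additive, as in (5)
```

Since any such $s$ works, this also illustrates why the $p=1$ case admits infinitely many minimal geodesics.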
In the adaptive coalescence algorithm, the $k$th recursion defines a
baryregion, $\mathcal{B}_{\\{\nu_{k-1}^{*},\mu_{k}\\}}$, consisting of
geodesics between the barycenter $\nu^{*}_{k-1}$ found in the $(k-1)$th
recursion and the generative model $\mu_{k}$. Clearly,
$\mathcal{B}_{\\{\nu_{k-1}^{*},\mu_{k}\\}}\subset\mathcal{B}_{\mathcal{R}}$.
Viewing each recursive step in the above two-stage algorithm as adaptive
displacement interpolation, we have the following main result on the geodesics
and the geometric properties regarding $\nu^{*}$ and
$\\{\nu_{k}^{*}\\}_{1:K}$.
###### Proposition 1.
(Displacement interpolation for adaptive barycenters) The adaptive barycenter,
$\nu_{k}^{*}$, obtained at the output of $k$th recursive step in Stage I, is a
displacement interpolation between $\nu_{k-1}^{*}$ and $\mu_{k}$ and resides
inside $\mathcal{B}_{\mathcal{R}}$. Further, the final barycenter $\nu^{*}$
resulting from Stage II of the recursive algorithm resides inside
$\mathcal{B}_{\mathcal{R}}$.
## 4 Recursive WGAN Configuration for Adaptive Coalescence and Continual
Learning
Based on the above theoretic results on adaptive coalescence via Wasserstein-1
barycenters, we next turn attention to the implementation of computing
adaptive barycenters. Notably, assuming the knowledge of accurate empirical
distribution models on discrete support, (Cuturi & Doucet, 2014) introduces a
powerful linear program (LP) to compute Wasserstein-$p$ barycenters, but the
computational complexity of this approach is excessive. In light of this, we
propose a WGAN-based configuration for finding the Wasserstein-1 barycenter,
which in turn enables fast learning of generative models based on the
coalescence of pre-trained models. Specifically, (3) can be rewritten as:
$\displaystyle\underset{G}{\min}$
$\displaystyle\underset{\\{\varphi_{k}\\}_{0:K}}{\max}\left\\{\mathbb{E}_{{x}\sim\hat{\mu}_{0}}[\varphi_{0}({x})]-\mathbb{E}_{{z}\sim\vartheta}[\varphi_{0}(G({z}))]\right\\}$
$\displaystyle+\sum_{k=1}^{K}\frac{1}{\eta_{k}}\left\\{\mathbb{E}_{{x}\sim\mu_{k}}[\varphi_{k}({x})]-\mathbb{E}_{{z}\sim\vartheta}[\varphi_{k}(G({z}))]\right\\},$
(6)
where $G$ represents the generator and $\\{\varphi_{k}\\}_{0:K}$ are
$1-$Lipschitz functions for discriminator models, respectively. Observe that
the optimal generator DNN $G^{*}$ facilitates the barycenter distribution
$\nu^{*}$ at its output. We note that multi-discriminator WGAN
configurations have recently been developed (Durugkar et al., 2016; Hardy et
al., 2019; Neyshabur et al., 2017), using a common latent space to train
multiple discriminators so as to improve stability. In stark contrast, in this
work distinct generative models from multiple nodes are exploited to train
different discriminators, aiming to learn distinct transport plans among
generative models.
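Given fixed critics, the objective in (6) can be estimated by Monte-Carlo averages. The sketch below is a hedged toy, not the paper's training code: it takes the critics $\varphi_{k}$ as given (assumed 1-Lipschitz) callables, each paired with samples from $\hat{\mu}_{0}$ ($k=0$) or a pre-trained model $\mu_{k}$, and weights the terms as in (3).

```python
def wgan_objective(critic_data_pairs, fakes, weights):
    """Monte-Carlo estimate of the generator objective in (6) for fixed
    critics.

    critic_data_pairs : list of (phi_k, samples_k) for k = 0..K, where
    phi_k is an (assumed 1-Lipschitz) critic and samples_k is drawn from
    mu_0_hat (k = 0) or pre-trained model mu_k; fakes are generator
    outputs G(z); weights = [1, 1/eta_1, ..., 1/eta_K].
    """
    mean = lambda f, xs: sum(f(x) for x in xs) / len(xs)
    return sum(w * (mean(phi, xs) - mean(phi, fakes))
               for (phi, xs), w in zip(critic_data_pairs, weights))

# toy: identity critic (1-Lipschitz); the generator matches the local
# data exactly but misses the transferred model's distribution
phi = lambda x: x
pairs = [(phi, [0.0, 1.0]), (phi, [2.0, 3.0])]
print(wgan_objective(pairs, [0.0, 1.0], [1.0, 0.5]))  # 0.0 + 0.5*2.0 = 1.0
```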
Figure 3: A 2-discriminator WGAN for efficient learning of $k$th barycenter
generator in offline training, where ${x}$ denotes the synthetic data
generated from pretrained models.
A naive approach is to implement the above multi-discriminator WGAN in a one-
shot manner where the generator and $K+1$ discriminators are trained
simultaneously, which however would require overwhelming computation power and
memory. To enable efficient training, we use the proposed two-stage algorithm
and develop a “recursive” WGAN configuration to sequentially compute 1) the
barycenter $\nu_{K}^{*}$ for the offline training in the cloud, as shown in
Figure 3; and 2) the barycenter $\nu^{*}$ for the fast adaptation at the
target edge node, as shown in Figure 4. The analytical relation between _one-
shot_ and _recursive_ barycenters has been studied for the Wasserstein-2
distance, and sufficient conditions for their equivalence are presented in
(Boissard et al., 2015); these conditions do not suffice for the Wasserstein-1
distance, however, because multiple Wasserstein-1 barycenters may exist.
Proposition 1 shows that any barycenter solution to the recursive algorithm
resides inside a baryregion, which can be viewed as the counterpart of the
_one-shot_ solution. We
highlight a few important advantages of the “recursive” WGAN configuration for
the barycentric fast adaptation algorithm.
1) _A 2-discriminator WGAN implementation per recursive step to enable
efficient training._ At each recursive step $k$, we aim to find the barycenter
$\nu_{k}^{*}$ between pre-trained model $\mu_{k}$ and the barycenter
$\nu_{k-1}^{*}$ from the last round, which is achieved by training a
2-discriminator WGAN as follows:
$\displaystyle\underset{\mathcal{G}_{k}}{\min}$
$\displaystyle\underset{\psi_{k},\tilde{\psi}_{k}}{\max}\lambda_{\psi_{k}}\big{\\{}\mathbb{E}_{{x}\sim\mu_{k}}[\psi_{k}({x})]-\mathbb{E}_{{z}\sim\vartheta}[{\psi}_{k}(\mathcal{G}_{k}({z}))]\big{\\}}$
$\displaystyle+\lambda_{\tilde{\psi}_{k}}\big{\\{}\mathbb{E}_{{x}\sim\nu_{k-1}^{*}}[\tilde{\psi}_{k}({x})]-\mathbb{E}_{{z}\sim\vartheta}[\tilde{\psi}_{k}(\mathcal{G}_{k}({z}))]\big{\\}},$
(7)
where $\psi_{k}$ and $\tilde{\psi}_{k}$ denote the discriminators for the
pre-trained model $G_{k}$ (with distribution $\mu_{k}$) and the barycenter
model $\mathcal{G}_{k-1}^{*}$ from the previous recursive step, respectively.
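The recursion itself is simple control flow; a minimal sketch, where `train_2disc_wgan` is a hypothetical stand-in for one run of the 2-discriminator training in (7):

```python
def recursive_barycenter(pretrained_models, init_generator, train_2disc_wgan):
    """Fold K pre-trained models into one barycenter generator, one per step.

    train_2disc_wgan(G_init, mu_k) is assumed to optimize objective (7),
    using G_init both as initialization and as the sampler for nu_{k-1}^*,
    and to return the updated generator."""
    G = init_generator
    for mu_k in pretrained_models:  # k = 1, ..., K
        G = train_2disc_wgan(G, mu_k)
    return G

# toy stand-in: each "training run" just records which model was folded in
G_star = recursive_barycenter(["mu_1", "mu_2", "mu_3"], [],
                              lambda G, mu: G + [mu])
# -> ['mu_1', 'mu_2', 'mu_3']
```

Only one pre-trained model and one running barycenter are active per step, which is what keeps the memory footprint at two discriminators.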
2) _Model initialization in each recursive step._ For the initialization of
the generator $\mathcal{G}_{k}$, we use the trained generator
$\mathcal{G}_{k-1}^{*}$ from the last step. Since $\mathcal{G}_{k-1}^{*}$
corresponds to the barycenter $\nu_{k-1}^{*}$, using it as the initialization
lets the displacement interpolation move along the geodesic curve from
$\nu_{k-1}^{*}$ to $\mu_{k}$ (Leygonie et al., 2019). It has been shown that
training GANs with such initializations would accelerate the convergence
compared with training from scratch (Wang et al., 2018b). Finally,
$\nu_{K}^{*}$ is adopted as initialization to enable fast adaptation at the
target edge node. With the barycenter $\nu_{K}^{*}$ solved via offline
training, a new barycenter $\nu^{*}$ between the local dataset (represented by
$\hat{\mu}_{0}$) and $\nu_{K}^{*}$ can be obtained by training a
2-discriminator WGAN, and fine-tuning the generator $\mathcal{G}_{0}$ from
$\mathcal{G}_{K}^{*}$ is notably _faster and more accurate_ than
learning the generative model from local data only.
Figure 4: Fast adaptation for learning generative model at Node 0.
3) _Fast adaptation for training ternary WGAN at Node 0._ As outlined in
Algorithm 2, fast adaptation is used to find the barycenter between
$\nu_{K}^{*}$ and the local dataset at Node 0. To further enhance edge
learning, we adopt the weight ternarization method to compress the WGAN model
during training. Weight ternarization not only replaces
computationally expensive multiplication operations with efficient
addition/subtraction operations, but also induces sparsity in the model
parameters (Han et al., 2015). Specifically, the ternarization process is
formulated as:
$w_{l}^{\prime}=S_{l}\cdot
Tern\left(w_{l},\Delta_{l}^{\pm}\right)=S_{l}\cdot\left\\{\begin{array}[]{ll}+1&w_{l}>\Delta_{l}^{+}\\\
0&\Delta_{l}^{-}\leq w_{l}\leq\Delta_{l}^{+}\\\
-1&w_{l}<\Delta_{l}^{-}\end{array}\right.$ (8)
where $\\{w_{l}\\}$ are the full-precision weights of the $l$th layer,
$\\{w_{l}^{\prime}\\}$ are the weights after ternarization, $S_{l}$ is
the layer-wise weight scaling coefficient, and $\Delta_{l}^{\pm}$ are the
layer-wise thresholds. Since fixed weight thresholds may lead to accuracy
degradation, $S_{l}$ is approximated as a differentiable closed-form function
of $\Delta_{l}^{\pm}$ so that both weights and thresholds can be optimized
simultaneously through back-propagation (He & Fan, 2019). Let the generator
and the discriminators of WGAN at Node 0 be denoted by $\mathcal{G}_{0}$,
$\tilde{\psi}_{0}$ and $\psi_{0}$, which are parametrized by the ternarized
weights
$\\{\mathbf{w}^{\prime}_{l_{\mathcal{G}}}\\}_{l_{\mathcal{G}}=1}^{L_{\mathcal{G}}}$,
$\\{\mathbf{w}^{\prime}_{l_{\tilde{\psi}}}\\}_{l_{\tilde{\psi}}=1}^{L_{\tilde{\psi}}}$
and $\\{\mathbf{w}^{\prime}_{l_{\psi}}\\}_{l_{\psi}=1}^{L_{\psi}}$,
respectively. The barycenter $\nu^{*}$ at Node 0, captured by
$\mathcal{G}_{0}^{*}$, can be obtained by training the ternary WGAN via
iterative updates of both weights and thresholds:
$\displaystyle\underset{\mathcal{G}_{0}}{\min}\underset{\psi_{0},\tilde{\psi}_{0}}{\max}$
$\displaystyle\mathbb{E}_{{x}\sim\hat{\mu}_{0}}[\psi_{0}({x})]-\mathbb{E}_{{z}\sim\vartheta}[{\psi}_{0}(\mathcal{G}_{0}({z}))]$
$\displaystyle+\mathbb{E}_{{x}\sim\nu_{K}^{*}}[\tilde{\psi}_{0}({x})]-\mathbb{E}_{{z}\sim\vartheta}[\tilde{\psi}_{0}(\mathcal{G}_{0}({z}))],$
(9)
which takes three steps in each iteration: a) calculating the scaling
coefficients and the ternary weights for $\mathcal{G}_{0}$, $\tilde{\psi}_{0}$
and $\psi_{0}$, b) calculating the loss function using the ternary weights via
forward-propagation and c) updating the full precision weights and the
thresholds via back-propagation.
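The layer-wise map in (8) can be sketched directly; a minimal numpy version, where the sample weights and thresholds are illustrative and the differentiable approximation of $S_{l}$ from (He & Fan, 2019) is omitted:

```python
import numpy as np

def ternarize(w, delta_neg, delta_pos, scale):
    """Eq. (8): map full-precision weights w_l to {-S_l, 0, +S_l}."""
    t = np.zeros_like(w)
    t[w > delta_pos] = 1.0   # w_l > Delta_l^+  -> +1
    t[w < delta_neg] = -1.0  # w_l < Delta_l^-  -> -1
    return scale * t         # multiply by the layer scale S_l

w = np.array([-0.8, -0.1, 0.05, 0.6])
wq = ternarize(w, delta_neg=-0.3, delta_pos=0.3, scale=0.5)
# -> [-0.5, 0.0, 0.0, 0.5]
```

In the actual training loop, this map is applied in step a) of each iteration before the forward pass, while gradients update the full-precision weights and the thresholds.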
## 5 Experiments
Datasets, Models and Evaluation. We extensively examine the performance of
learning a generative model, using the barycentric fast adaptation algorithm,
on a variety of datasets widely adopted in the GAN literature, including
CIFAR10, CIFAR100, LSUN and MNIST. In the experiments, we used various
DCGAN-based architectures (Radford et al., 2015) depending on the dataset, as
different datasets vary in image size, feature diversity and sample size;
e.g., image samples in MNIST have less diversity than those in the other
datasets, while LSUN contains the largest number of samples with larger image
sizes.
Further, we used the weight ternarization method (He & Fan, 2019) to jointly
optimize weights and quantizers of the generative model at the target edge
node, reducing the memory burden of generative models in memory-limited edge
devices. Details on the characteristics of datasets and network architectures
used in experiments are relegated to the appendix.
The Fréchet Inception Distance (FID) score (Heusel et al., 2017a) is used for
evaluating the performance of the two-stage adaptive coalescence algorithm and
all baseline algorithms. The FID score is widely adopted for evaluating the
performance of GAN models in the literature (Chong & Forsyth, 2019; Wang et
al., 2018b; Grnarova et al., 2019), since it provides a quantitative
assessment of the similarity of a dataset to another reference dataset. In all
experiments, we use the entire dataset as the reference dataset. We emphasize
that a smaller FID score indicates better performance. A more comprehensive
discussion of the FID score is relegated to the
appendix.
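For reference, the FID score is the Fréchet distance between two Gaussians fitted to Inception features of the two datasets; a minimal numpy/scipy sketch of that distance, with the feature-extraction step omitted:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# identical feature statistics give a score of 0
mu, sigma = np.zeros(2), np.eye(2)
score = frechet_distance(mu, sigma, mu, sigma)
```

A score of zero means the fitted Gaussians coincide; larger scores indicate that the generated samples' feature statistics drift away from the reference dataset's.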
To demonstrate the improvements by using the proposed framework based on
_barycentric fast adaptation_ , we conduct extensive experiments and compare
performance with $3$ distinct baselines: 1) _transferring GANs_ (Wang et al.,
2018b): a pre-trained GAN model is used as initialization at Node 0 for
training a new WGAN model by using local data samples. 2) _Ensemble method_ :
The model initialization, obtained by using pre-trained GANs at other edge
nodes, is further trained using both local data from Node 0 and synthetic data
samples. 3) _Edge-Only_ : only the local dataset at Node 0 is used in WGAN
training.
Following (Heusel et al., 2017b; Wang et al., 2018b), we use the FID score to
quantify the image quality. Due to the lack of sample diversity at the target
edge node, the WGAN model trained using local data only is not expected to
attain a small FID score. In stark contrast, the WGAN model trained using the
proposed two-stage adaptive coalescence algorithm inherits diversity from the
pre-trained models at other edge nodes and can attain lower FID scores
than its counterparts. We note that if the entire dataset were available at
Node $0$, then the minimum FID score would be achieved (see the appendix).
(a) Comparison of convergence over MNIST: The non-overlapping case.
(b) Comparison of convergence over MNIST: The overlapping case.
(c) Comparison of convergence over CIFAR10: The overlapping case.
(d) Comparison of convergence over CIFAR100: The overlapping case.
(e) Comparison of convergence over CIFAR10: The overlapping case.
(f) Comparison of convergence over LSUN: The overlapping case.
Figure 5: Performance comparison of barycentric fast adaptation with various
baselines.
Fine-tuning via fast adaptation. We investigate the convergence and the
generated image quality of various training scenarios on CIFAR100 and MNIST
datasets. Specifically, we consider the following two scenarios: 1) _The
overlapping case_ : the classes of the data samples at other edge nodes and at
Node 0 overlap; 2) _The non-overlapping case_ : the classes of the data
samples at other edge nodes and at Node 0 are mutually exclusive. As
illustrated in Figures 5 and 6, _barycentric fast adaptation_ clearly
outperforms all baselines. Transferring GANs suffers from catastrophic
forgetting, because continual learning is performed over the local data
samples at Node 0 only. On the contrary, the barycentric fast adaptation and
the ensemble method leverage generative replay, which mitigates the negative
effects of catastrophic forgetting. Further, observe that the ensemble method
suffers because of the limited data samples at Node 0, which are significantly
outnumbered by synthetic data samples from pre-trained GANs, and this
imbalance degrades the applicability of the ensemble method for continual
learning. On the other hand, the barycentric fast adaptation can obtain the
barycenter between the local data samples at Node 0 and the barycenter model
trained offline, and hence can effectively leverage the abundance of data
samples from edge nodes and the accuracy of local data samples at Node 0 for
better continual learning.
Impact of number of pre-trained generative models. To quantify the impact of
cumulative model knowledge from pre-trained generative models on the learning
performance at the target node, we consider the scenario where $10$ classes in
CIFAR10/MNIST are split into $3$ subsets: e.g., the first pre-trained model
has classes $\\{0,1,2\\}$, the second pre-trained model has classes
$\\{2,3,4\\}$, and the third pre-trained model has the remaining classes. One
barycenter model is trained offline using the first two pre-trained models,
and a second barycenter model is trained using all $3$ pre-trained models;
based on these, we evaluate the performance of barycentric fast adaptation
with $1000$ data samples at the target node. Figures 5(b) and 5(c) show that
the more model knowledge is accumulated in the barycenter computed offline,
the higher the image quality achieved at Node 0. As expected,
more model knowledge can help new edge nodes in training higher-quality
generative models. In both figures, the barycentric fast adaptation
outperforms Transferring GANs.
Figure 6: Image samples from $3$ different approaches for CIFAR10.
Impact of the Number of Data Samples at Node 0. Figure 5(e) further
illustrates the convergence across different numbers of data samples at the
target node on the CIFAR10 dataset. As expected, the FID score gap between
barycentric fast adaptation and _edge-only_ method decreases as the number of
data samples at the target node increases, simply because the empirical
distribution becomes more ‘accurate’. In particular, the significant gap of
FID scores between _edge-only_ and the _barycentric fast adaptation_
approaches in the initial stages indicates that the barycenter found via
offline training and adopted as the model initialization for fast adaptation,
is indeed close to the underlying model at the target node, hence enabling
faster and more accurate edge learning than _edge-only_.
Impact of Wasserstein ball radii. Intuitively, the Wasserstein ball radius
$\eta_{k}$ for pre-trained model $k$ represents the relevance (and hence
utility) of the knowledge transfer, which is also intimately related to the
capability to generalize beyond the pre-trained generative models; the
smaller it is, the more informative the corresponding Wasserstein ball is.
Hence, larger weights $\lambda_{k}=1/\eta_{k}$ are assigned to the nodes
with higher relevance. We note that the weights are determined by the
constraints and thus are fixed. Since we introduce the recursive WGAN
configuration, the order of coalescence (each corresponding to a geodesic
curve) may impact the final barycentric WGAN model, and hence the performance
of barycentric fast adaptation. To this end, we compute the coalescence of
models of nodes with higher relevance at latter recursions to ensure that the
final barycentric model is closer to the models of nodes with higher
relevance.
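The ordering heuristic above can be sketched in a few lines; the radii `etas` below are illustrative (smaller means more relevant), and the returned indices give the recursion order, least relevant first:

```python
def coalescence_order(etas):
    """Fold models with larger eta_k (smaller weight 1/eta_k) first, so the
    final recursive barycenter stays closest to the most relevant models."""
    return sorted(range(len(etas)), key=lambda k: etas[k], reverse=True)

order = coalescence_order([0.5, 2.0, 1.0])
# -> [1, 2, 0]: the most relevant model (eta = 0.5) is folded in last
```

Folding the most relevant models in the latter recursions means the final geodesic segments end near their distributions, which is the design choice described above.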
Ternary WGAN based barycentric fast adaptation. With the model initialization
in the form of a full-precision barycenter model computed in offline training,
we next train a ternary WGAN with 2 discriminators for the target node to
compress the generative model further. In particular, we use the same split of
classes as in the experiment illustrated in Figure 5(e), and compare the image
quality obtained by ternary WGAN-based fast adaptation against both its
full-precision counterpart and _Edge-Only_. As the FID scores in Figure 5(f)
show, the ternary WGAN-based barycentric fast adaptation incurs negligible
performance degradation compared to its full-precision counterpart, and is
still much better than the _Edge-Only_ approach.
## 6 Conclusions
In this work, we propose a systematic framework for continual learning of
generative models via adaptive coalescence of pre-trained models from other
edge nodes. Particularly, we cast the continual learning problem as a
constrained optimization problem that can be reduced to a Wasserstein-1
barycenter problem. Appealing to optimal transport theory, we characterize the
geometric properties of geodesic curves therein and use displacement
interpolation as the foundation to devise recursive algorithms for finding
adaptive barycenters. Next, we take a two-stage approach to efficiently solve
the barycenter problem, where the barycenter of the pre-trained models is
first computed offline in the cloud via a “recursive” WGAN configuration based
on displacement interpolation. Then, the resulting barycenter is treated as
the meta-model initialization and fast adaptation is used to find the
generative model using the local samples at the target edge node. A weight
ternarization method, based on joint optimization of weights and thresholds
for quantization, is developed to compress the edge generative model further.
Extensive experimental studies corroborate the efficacy of the proposed
framework.
## References
* Agueh & Carlier (2011) Agueh, M. and Carlier, G. Barycenters in the wasserstein space. _SIAM Journal on Mathematical Analysis_ , 43(2):904–924, 2011.
* Ambrosio (2003) Ambrosio, L. Lecture notes on optimal transport problems. In _Mathematical aspects of evolving interfaces_ , pp. 1–52. Springer, 2003.
* Ambrosio et al. (2008) Ambrosio, L., Gigli, N., and Savaré, G. _Gradient flows: in metric spaces and in the space of probability measures_. Springer Science & Business Media, 2008.
* Anderes et al. (2016) Anderes, E., Borgwardt, S., and Miller, J. Discrete wasserstein barycenters: Optimal transport for discrete data. _Mathematical Methods of Operations Research_ , 84(2):389–409, 2016.
* Arjovsky et al. (2017) Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein gan. _arXiv preprint arXiv:1701.07875_ , 2017.
* Arora et al. (2017) Arora, S., Ge, R., Liang, Y., Ma, T., and Zhang, Y. Generalization and equilibrium in generative adversarial nets (gans). In _Proceedings of the 34th International Conference on Machine Learning-Volume 70_ , pp. 224–232. JMLR. org, 2017.
* Boissard et al. (2015) Boissard, E., Le Gouic, T., Loubes, J.-M., et al. Distribution’s template estimate with wasserstein metrics. _Bernoulli_ , 21(2):740–759, 2015.
* Breiman (1996) Breiman, L. Bagging predictors. _Machine learning_ , 24(2):123–140, 1996.
* Brenier (1991) Brenier, Y. Polar factorization and monotone rearrangement of vector-valued functions. _Communications on Pure and Applied Mathematics_ , 44(4):375–417, 1991.
* Chong & Forsyth (2019) Chong, M. J. and Forsyth, D. Effectively unbiased fid and inception score and where to find them, 2019.
* Cuturi (2013) Cuturi, M. Sinkhorn distances: Lightspeed computation of optimal transport. In _Advances in Neural Information Processing Systems 26_ , pp. 2292–2300. Curran Associates, Inc., 2013.
* Cuturi & Doucet (2014) Cuturi, M. and Doucet, A. Fast computation of wasserstein barycenters. In _International Conference on Machine Learning_ , pp. 685–693, 2014.
* Cuturi & Peyré (2016) Cuturi, M. and Peyré, G. A smoothed dual approach for variational wasserstein problems. _SIAM Journal on Imaging Sciences_ , 9(1):320–343, 2016.
* De Lange et al. (2019) De Lange, M., Aljundi, R., Masana, M., Parisot, S., Jia, X., Leonardis, A., Slabaugh, G., and Tuytelaars, T. A continual learning survey: Defying forgetting in classification tasks. _arXiv preprint arXiv:1909.08383_ , 2019.
* Dhar et al. (2019) Dhar, P., Singh, R. V., Peng, K.-C., Wu, Z., and Chellappa, R. Learning without memorizing. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 5138–5146, 2019.
* Durugkar et al. (2016) Durugkar, I., Gemp, I., and Mahadevan, S. Generative multi-adversarial networks. _arXiv preprint arXiv:1611.01673_ , 2016.
* Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In _Advances in neural information processing systems_ , pp. 2672–2680, 2014.
* Grnarova et al. (2019) Grnarova, P., Levy, K. Y., Lucchi, A., Perraudin, N., Goodfellow, I., Hofmann, T., and Krause, A. A domain agnostic measure for monitoring and evaluating gans. In _Advances in Neural Information Processing Systems 32_ , pp. 12092–12102. Curran Associates, Inc., 2019.
* Gulrajani et al. (2017) Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. C. Improved training of wasserstein gans. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems 30_ , pp. 5767–5777. Curran Associates, Inc., 2017.
* Han et al. (2015) Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. _arXiv preprint arXiv:1510.00149_ , 2015.
* Hardy et al. (2019) Hardy, C., Le Merrer, E., and Sericola, B. Md-gan: Multi-discriminator generative adversarial networks for distributed datasets. In _2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS)_ , pp. 866–877. IEEE, 2019.
* He & Fan (2019) He, Z. and Fan, D. Simultaneously optimizing weight and quantizer of ternary neural network using truncated gaussian approximation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 11438–11446, 2019.
* Heusel et al. (2017a) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems 30_ , pp. 6626–6637. Curran Associates, Inc., 2017a.
* Heusel et al. (2017b) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In _Advances in neural information processing systems_ , pp. 6626–6637, 2017b.
* Kirkpatrick et al. (2017) Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al. Overcoming catastrophic forgetting in neural networks. _Proceedings of the national academy of sciences_ , 114(13):3521–3526, 2017.
* Le Gouic & Loubes (2017) Le Gouic, T. and Loubes, J.-M. Existence and consistency of wasserstein barycenters. _Probability Theory and Related Fields_ , 168(3-4):901–917, 2017.
* Ledig et al. (2017) Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., and Shi, W. Photo-realistic single image super-resolution using a generative adversarial network, 2017.
* Lee et al. (2017) Lee, S.-W., Kim, J.-H., Jun, J., Ha, J.-W., and Zhang, B.-T. Overcoming catastrophic forgetting by incremental moment matching. In _Advances in neural information processing systems_ , pp. 4652–4662, 2017.
* Lei et al. (2017) Lei, N., Su, K., Cui, L., Yau, S.-T., and Gu, D. X. A geometric view of optimal transportation and generative model, 2017.
* Leontev et al. (2020) Leontev, M. I., Islenteva, V., and Sukhov, S. V. Non-iterative knowledge fusion in deep convolutional neural networks. _Neural Processing Letters_ , 51(1):1–22, 2020.
* Leygonie et al. (2019) Leygonie, J., She, J., Almahairi, A., Rajeswar, S., and Courville, A. C. Adversarial computation of optimal transport maps. _CoRR_ , abs/1906.09691, 2019.
* Lin et al. (2020) Lin, S., Yang, G., and Zhang, J. A collaborative learning framework via federated meta-learning. _arXiv preprint arXiv:2001.03229_ , 2020.
* Liu et al. (2019) Liu, H., Gu, X., and Samaras, D. Wasserstein gan with quadratic transport cost. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 4832–4841, 2019.
* McCann (1997) McCann, R. J. A convexity principle for interacting gases. _Advances in mathematics_ , 128(1):153–179, 1997.
* Neyshabur et al. (2017) Neyshabur, B., Bhojanapalli, S., and Chakrabarti, A. Stabilizing GAN training with multiple random projections. _CoRR_ , abs/1705.07831, 2017.
* Osia et al. (2020) Osia, S. A., Shamsabadi, A. S., Sajadmanesh, S., Taheri, A., Katevas, K., Rabiee, H. R., Lane, N. D., and Haddadi, H. A hybrid deep learning architecture for privacy-preserving mobile analytics. _IEEE Internet of Things Journal_ , 7(5):4505–4518, 2020.
* Ostapenko et al. (2019) Ostapenko, O., Puscas, M., Klein, T., Jahnichen, P., and Nabi, M. Learning to remember: A synaptic plasticity driven framework for continual learning. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 11321–11329, 2019.
* Qi et al. (2019) Qi, M., Wang, Y., Qin, J., and Li, A. Ke-gan: Knowledge embedded generative adversarial networks for semi-supervised scene parsing. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 5237–5246, 2019.
* Radford et al. (2015) Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. _arXiv preprint arXiv:1511.06434_ , 2015.
* Rebuffi et al. (2017) Rebuffi, S.-A., Kolesnikov, A., Sperl, G., and Lampert, C. H. icarl: Incremental classifier and representation learning. In _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition_ , pp. 2001–2010, 2017.
* Riemer et al. (2019) Riemer, M., Klinger, T., Bouneffouf, D., and Franceschini, M. Scalable recollections for continual lifelong learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pp. 1352–1359, 2019.
* Rolnick et al. (2019) Rolnick, D., Ahuja, A., Schwarz, J., Lillicrap, T., and Wayne, G. Experience replay for continual learning. In _Advances in Neural Information Processing Systems_ , pp. 350–360, 2019.
* Schapire (1999) Schapire, R. E. A brief introduction to boosting. In _Ijcai_ , volume 99, pp. 1401–1406, 1999.
* Schwarz et al. (2018) Schwarz, J., Luketina, J., Czarnecki, W. M., Grabska-Barwinska, A., Teh, Y. W., Pascanu, R., and Hadsell, R. Progress & compress: A scalable framework for continual learning. _arXiv preprint arXiv:1805.06370_ , 2018.
* Shafiee et al. (2017) Shafiee, M. J., Li, F., Chwyl, B., and Wong, A. Squishednets: Squishing squeezenet further for edge device scenarios via deep evolutionary synthesis. _arXiv preprint arXiv:1711.07459_ , 2017.
* Shin et al. (2017) Shin, H., Lee, J. K., Kim, J., and Kim, J. Continual learning with deep generative replay. In _Advances in Neural Information Processing Systems_ , pp. 2990–2999, 2017.
* Shrivastava et al. (2017) Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., and Webb, R. Learning from simulated and unsupervised images through adversarial training. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 2107–2116, 2017.
* Simon & Aberdam (2020) Simon, D. and Aberdam, A. Barycenters of natural images constrained wasserstein barycenters for image morphing. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 7910–7919, 2020.
* Singh & Jaggi (2019) Singh, S. P. and Jaggi, M. Model fusion via optimal transport. _arXiv preprint arXiv:1910.05653_ , 2019.
* Sinha et al. (2017) Sinha, A., Namkoong, H., Volpi, R., and Duchi, J. Certifying some distributional robustness with principled adversarial training. _arXiv preprint arXiv:1710.10571_ , 2017.
* Smith & Gashler (2017) Smith, J. and Gashler, M. An investigation of how neural networks learn from the experiences of peers through periodic weight averaging. In _2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)_ , pp. 731–736. IEEE, 2017.
* Srivastava et al. (2015) Srivastava, S., Cevher, V., Dinh, Q., and Dunson, D. Wasp: Scalable bayes via barycenters of subset posteriors. In _Artificial Intelligence and Statistics_ , pp. 912–920, 2015.
* Staib et al. (2017) Staib, M., Claici, S., Solomon, J. M., and Jegelka, S. Parallel streaming wasserstein barycenters. In _Advances in Neural Information Processing Systems_ , pp. 2647–2658, 2017.
* Thrun (1995) Thrun, S. A lifelong learning perspective for mobile robot control. In _Intelligent robots and systems_ , pp. 201–214. Elsevier, 1995.
* Venkataramani et al. (2017) Venkataramani, S., Ranjan, A., Banerjee, S., Das, D., Avancha, S., Jagannathan, A., Durg, A., Nagaraj, D., Kaul, B., Dubey, P., et al. Scaledeep: A scalable compute architecture for learning and evaluating deep networks. In _Proceedings of the 44th Annual International Symposium on Computer Architecture_ , pp. 13–26, 2017.
* Villani (2003) Villani, C. _Topics in optimal transportation_. Number 58. American Mathematical Soc., 2003.
* Villani (2008) Villani, C. _Optimal transport: old and new_ , volume 338. Springer Science & Business Media, 2008.
* Volpi et al. (2018) Volpi, R., Namkoong, H., Sener, O., Duchi, J. C., Murino, V., and Savarese, S. Generalizing to unseen domains via adversarial data augmentation. In _Advances in Neural Information Processing Systems_ , pp. 5334–5344, 2018.
* Wang et al. (2018a) Wang, J., Zhang, J., Bao, W., Zhu, X., Cao, B., and Yu, P. S. Not just privacy: Improving performance of private deep learning in mobile cloud. In _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pp. 2407–2416, 2018a.
* Wang et al. (2019) Wang, J., Bao, W., Sun, L., Zhu, X., Cao, B., and Philip, S. Y. Private model compression via knowledge distillation. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pp. 1190–1197, 2019.
* Wang et al. (2020) Wang, J., Zhou, W., Qi, G.-J., Fu, Z., Tian, Q., and Li, H. Transformation gan for unsupervised image synthesis and representation learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 472–481, 2020.
* Wang et al. (2017) Wang, Q., Li, Y., Shao, B., Dey, S., and Li, P. Energy efficient parallel neuromorphic architectures with approximate arithmetic on fpga. _Neurocomputing_ , 221:146–158, 2017.
* Wang et al. (2018b) Wang, Y., Wu, C., Herranz, L., van de Weijer, J., Gonzalez-Garcia, A., and Raducanu, B. Transferring gans: generating images from limited data. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , pp. 218–234, 2018b.
* Wu et al. (2018) Wu, C., Herranz, L., Liu, X., van de Weijer, J., Raducanu, B., et al. Memory replay gans: Learning to generate new categories without forgetting. In _Advances in Neural Information Processing Systems_ , pp. 5962–5972, 2018.
* Yang et al. (2017) Yang, T.-J., Chen, Y.-H., and Sze, V. Designing energy-efficient convolutional neural networks using energy-aware pruning. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 5687–5695, 2017.
* Ye et al. (2017) Ye, J., Wu, P., Wang, J. Z., and Li, J. Fast discrete distribution clustering using wasserstein barycenter with sparse support. _IEEE Transactions on Signal Processing_ , 65(9):2317–2332, 2017.
* Yonetani et al. (2019) Yonetani, R., Takahashi, T., Hashimoto, A., and Ushiku, Y. Decentralized learning of generative adversarial networks from non-iid data. _arXiv preprint arXiv:1905.09684_ , 2019.
* Zenke et al. (2017) Zenke, F., Poole, B., and Ganguli, S. Continual learning through synaptic intelligence. _Proceedings of machine learning research_ , 70:3987, 2017.
* Zhang et al. (2020) Zhang, Z., Lin, S., Dedeoglu, M., Ding, K., and Zhang, J. Data-driven distributionally robust optimization for edge intelligence. In _IEEE INFOCOM 2020-IEEE Conference on Computer Communications_ , pp. 2619–2628. IEEE, 2020.
* Zhou et al. (2019) Zhou, Z., Chen, X., Li, E., Zeng, L., Luo, K., and Zhang, J. Edge intelligence: Paving the last mile of artificial intelligence with edge computing. _Proceedings of the IEEE_ , 107(8):1738–1762, 2019.
## Appendices
## Appendix A A Preliminary Review on Optimal Transport Theory and
Wasserstein GANs
This section provides a brief overview of optimal transport theory and
Wasserstein GAN, which serves as the theoretic foundation for the proposed
two-stage adaptive coalescence algorithm for fast edge learning of generative
models. In particular, it is known that the Wasserstein-1 barycenter is
difficult to analyze, because of the existence of infinitely many minimizers
of the Monge Problem. We will review related geometric properties of geodesic
curves therein and introduce displacement interpolation.
### A.1 Monge Problem and Optimal Transport Plan
Optimal transport theory has been extensively utilized in economics for
decades, and has recently garnered much interest in deep learning applications
(see, e.g., (Brenier, 1991; Ambrosio et al., 2008; Villani, 2008)). Simply
put, optimal transport theory aims to find the most efficient transport map
from one probability distribution to another with respect to a predefined cost
function $c(x,y)$. The optimal distribution preserving the transport map can
be obtained by solving the Monge problem.
###### Definition 2.
(Monge Problem) Let $(\mathcal{X},d)$ and $\mathcal{P}(\mathcal{X})$ denote a
complete and separable metric space, i.e., a Polish space, and the set of
probability distributions on $\mathcal{X}$, respectively. Given
$\mu\in\mathcal{P}(\mathcal{X})$ and $\nu\in\mathcal{P}(\mathcal{Y})$ defined
on two Polish spaces connected by a Borel map $T:\mathcal{X}\to\mathcal{Y}$,
the Monge problem is defined as:
$\displaystyle\underset{T:T\\#\mu=\nu}{\inf}\int_{\mathcal{X}}c(x,T(x))d\mu(x).$
(10)
In Definition 2, $T$ is referred to as the distribution-preserving _transport
map_ and $\\#$ denotes the push-forward operator. Owing to this strict
constraint, an optimal transport map for the Monge problem may not exist. A
relaxation of the Monge problem leads to Kantorovich’s optimal transport
problem.
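The need for this relaxation can be seen on a tiny discrete example: a single atom cannot be pushed onto two atoms by any map $T$, yet a plan with the required marginals exists once mass splitting is allowed. The atoms and weights below are purely illustrative:

```python
import numpy as np

# Mass splitting: mu = delta_0 cannot be pushed onto nu = (delta_{-1} + delta_1)/2
# by any map T (one atom maps to exactly one point), but the plan gamma below
# has exactly the required marginals.
mu = np.array([1.0])                # single atom at x = 0
nu = np.array([0.5, 0.5])           # atoms at y = -1 and y = +1
gamma = np.array([[0.5, 0.5]])      # plan: split the atom in half

print(gamma.sum(axis=1))            # row marginal -> mu: [1.]
print(gamma.sum(axis=0))            # column marginal -> nu: [0.5 0.5]
```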
###### Definition 3.
(Kantorovich Problem) Let $\mu\in\mathcal{P}(\mathcal{X})$ and
$\nu\in\mathcal{P}(\mathcal{Y})$ be two probability distributions defined on
two Polish spaces. The Kantorovich problem is defined as:
$\displaystyle\underset{\gamma\in\Pi(\mu,\nu)}{\inf}\int_{\mathcal{X}\times\mathcal{Y}}c(x,y)d\gamma(x,y),$
(11)
where $\Pi(\mu,\nu)$ is the admissible set with its elements satisfying:
$\displaystyle\pi_{\mu}\\#\gamma=\mu,$ $\displaystyle\pi_{\nu}\\#\gamma=\nu,$
(12)
where $\pi_{\mu}$ and $\pi_{\nu}$ are two projector transport maps.
In Definition 3, $\gamma$ is referred to as the _transference plan_ and the
admissible set $\Pi$ is a relaxation of $T\\#\mu=\nu$. In contrast to a
transport map, a transference plan can leverage _mass splitting_, and hence a
solution exists under the semi-continuity assumptions. Mass splitting further
enables the celebrated Kantorovich duality, as shown in the following lemma,
which facilitates an alternative and convenient representation of the
Kantorovich problem.
###### Lemma 2.
(Kantorovich Duality, (Villani, 2003)) Let $\mu\in\mathcal{P}(\mathcal{X})$
and $\nu\in\mathcal{P}(\mathcal{Y})$ be two probability distributions defined
on Polish spaces $\mathcal{X}$ and $\mathcal{Y}$, respectively, and let
$c(x,y)$ be a lower semi-continuous cost function. Further, define $\Phi_{c}$
as the set of all measurable functions $(\varphi,\psi)\in L^{1}(d\mu)\times
L^{1}(d\nu)$ satisfying:
$\displaystyle\varphi(x)+\psi(y)\leq c(x,y),$ (13)
for $d\mu$-almost all $x\in\mathcal{X}$ and $d\nu$-almost all
$y\in\mathcal{Y}$. Then, the following strong duality holds for $c$-concave
function $\varphi$:
$\displaystyle\underset{\gamma\in\Pi(\mu,\nu)}{\inf}\int_{\mathcal{X}\times\mathcal{Y}}c(x,y)d\gamma(x,y)=\underset{(\varphi,\psi)\in\Phi_{c}}{\sup}\int_{\mathcal{X}}\varphi(x)d\mu(x)+\int_{\mathcal{Y}}\psi(y)d\nu(y).$
(14)
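On a finite discrete instance, the strong duality (14) can be verified directly by solving both sides as linear programs; the atoms, weights, and cost below are illustrative, and this is only a sketch using `scipy`:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import wasserstein_distance

# Tiny discrete check of Kantorovich duality: primal transport cost (11)
# equals the dual potential value (14), and both match the closed-form
# 1-D Wasserstein-1 distance.
xs = np.array([0.0, 1.0]); mu = np.array([0.5, 0.5])
ys = np.array([2.0, 3.0]); nu = np.array([0.5, 0.5])
C = np.abs(xs[:, None] - ys[None, :])            # cost c(x_i, y_j) = |x - y|
m, n = C.shape

# Primal: min <C, gamma> subject to the marginal constraints (12).
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0             # sum_j gamma_ij = mu_i
for j in range(n):
    A_eq[m + j, j::n] = 1.0                      # sum_i gamma_ij = nu_j
primal = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                 bounds=(0, None))

# Dual: max mu.phi + nu.psi subject to phi_i + psi_j <= c_ij.
A_ub = np.zeros((m * n, m + n))
for i in range(m):
    for j in range(n):
        A_ub[i * n + j, i] = 1.0                 # coefficient of phi_i
        A_ub[i * n + j, m + j] = 1.0             # coefficient of psi_j
dual = linprog(-np.concatenate([mu, nu]), A_ub=A_ub, b_ub=C.ravel(),
               bounds=(None, None))

print(primal.fun, -dual.fun, wasserstein_distance(xs, ys, mu, nu))
# all three values coincide, as strong duality predicts
```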
As the right-hand side of (14) is an optimization over two functions,
efficient gradient algorithms can be employed to learn the optimal solution.
Moreover, (14) can be further simplified using the $c$-transform (Villani,
2008): $\psi(y)$ can be replaced by
$\varphi^{c}(y)={\inf}_{{x\in\mathcal{X}}}c(x,y)-\varphi(x)$, where $\varphi$
is referred to as the Kantorovich potential. The following lemma establishes
the existence of an optimal transport plan, which in turn connects the
Kantorovich problem to the Monge problem.
###### Lemma 3.
(Existence of Optimal Transport Plan, (Ambrosio, 2003)) For a lower semi-
continuous cost function $c(x,y)$ defined on $\mathcal{X}\times\mathcal{Y}$,
there exists at least one $\gamma\in\Pi(\mu,\nu)$ solving the Kantorovich
problem. Furthermore, if $c(x,y)$ is continuous and real-valued, and $\mu$ has
no atoms, then the minimums to both Monge and Kantorovich problems are
equivalent, i.e.,
$\displaystyle\underset{T:T\\#\mu=\nu}{\inf}\int_{\mathcal{X}}c(x,T(x))d\mu(x)=\underset{\gamma\in\Pi(\mu,\nu)}{\inf}\int_{\mathcal{X}\times\mathcal{Y}}c(x,y)d\gamma(x,y).$
(15)
Lemma 3 indicates that at least one transference plan solves the Kantorovich
problem, and that under the stated conditions it is induced by a transport
map. We remark, however, that not all transference plans are transport maps.
Together with McCann’s celebrated displacement interpolation result (McCann,
1997), Lemma 3 further facilitates a connection between dataset interpolation
and the Wasserstein GAN configuration proposed in this study.
### A.2 From Vanilla Generative Adversarial Networks (GAN) to Wasserstein-1 GAN
A generative adversarial network is composed of a generator and a
discriminator neural network. Random noise samples are fed into the generator,
which transforms them into structured data samples at its output. The
generated (or fake) samples are then fed into the discriminator along with
real-world samples taken from the dataset. The discriminator acts as a
classifier and incurs a loss when mislabeling takes place. From a game
theoretic point of view, the generator and the discriminator play a zero-sum
game, in which the generator seeks to manipulate the discriminator to classify
fake samples as real by generating samples similar to the real-world dataset.
In principle, GAN training is equivalent to solving for the following
optimization problem:
$\displaystyle\underset{G}{\min}~{}\underset{D}{\max}~{}V(D,G)$
$\displaystyle=\underset{G}{\min}~{}\underset{D}{\max}~{}\mathbb{E}_{\mathbf{x}\sim\mu}[\log
D(\mathbf{x})]+\mathbb{E}_{\mathbf{z}\sim\vartheta}[\log(1-D(G(\mathbf{z})))]$
$\displaystyle=\underset{G}{\min}~{}\underset{D}{\max}~{}\mathbb{E}_{\mathbf{x}\sim\mu}[\log
D(\mathbf{x})]+\mathbb{E}_{\mathbf{y}\sim\nu}[\log(1-D(\mathbf{y}))],$ (16)
where $D$ and $G$ represent the discriminator and generator networks,
respectively. Let $\mu$, $\nu$ and $\vartheta$ denote the distributions from
empirical data, at generator output and at generator input, respectively. The
latent distribution $\vartheta$ is often selected to be uniform or Gaussian.
The output of the generator, denoted
$\mathbf{y}=G(\mathbf{z},\theta_{G})\sim\nu$, is obtained by propagating
$\mathbf{z}$ through a nonlinear transformation parametrized by the neural
network parameters $\theta_{G}$. The model parameters $\theta_{G}$ constrain
$\nu$ to reside in a parametric probability distribution space
$\mathcal{Q}_{G}$, constructed by passing $\vartheta$ through $G$. It has been
shown in (Goodfellow et al., 2014) that the solution to (16) can be expressed
as an optimization problem over $\nu$ as:
$\displaystyle\underset{\nu\in\mathcal{Q}_{G}}{\min}~{}-\log(4)+2\cdot\text{JSD}(\mu||\nu),$
(17)
where JSD denotes Jensen-Shannon divergence. Clearly, the solution to (17) can
be achieved at $\nu^{*}=\mu$, and the corresponding $\theta_{G}^{*}$ is the
optimal generator model parameter.
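On discrete distributions, the optimality claim in (17) can be checked numerically; the distributions below are illustrative, and note that `scipy`'s `jensenshannon` returns the square root of the divergence:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Sanity check of (17): the objective -log(4) + 2*JSD(mu || nu) is
# minimized exactly when nu = mu, where the minimum equals -log(4).
mu = np.array([0.1, 0.4, 0.5])
nus = [np.array([0.1, 0.4, 0.5]),      # nu = mu
       np.array([0.3, 0.3, 0.4]),
       np.array([0.6, 0.2, 0.2])]
# jensenshannon returns sqrt(JSD), so square it to recover the divergence.
costs = [-np.log(4) + 2 * jensenshannon(mu, nu, base=2) ** 2 for nu in nus]
print(costs)  # the first entry (nu = mu) is the minimum, equal to -log(4)
```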
The vanilla GAN training process suffers from the mode collapse issue, which
is often caused by vanishing gradients during training (Arjovsky et al.,
2017). In contrast to the JSD, under mild conditions the Wasserstein distance
does not incur vanishing gradients, and hence exhibits more useful gradient
properties for preventing mode collapse. Training a Wasserstein-1-distance-
based GAN amounts to solving the optimization problem
${\min}_{\nu\in\mathcal{Q}_{G}}~{}W_{1}(\nu,\mu)$. Since
the $c$-transform of the Kantorovich potential admits a simpler and more
convenient form for $W_{1}$, i.e., $\varphi^{c}=-\varphi$, the Wasserstein-1
GAN cost function can be rewritten as:
$\displaystyle W_{1}(\nu,\mu)=\underset{||\varphi||_{L}\leq
1}{\sup}\left\\{\mathbb{E}_{\mathbf{x}\sim\mu}[\varphi(\mathbf{x})]-\mathbb{E}_{\mathbf{y}\sim\nu}[\varphi(\mathbf{y})]\right\\},$
(18)
where $\varphi$ is constrained to be a $1$-Lipschitz function. Following the
same line as in the vanilla GAN, $\varphi$ in (18) can be characterized by a
neural network, which is parametrized by model parameter $\theta_{D}$.
Consequently, training a Wasserstein-1 GAN is equivalent to solving the
following non-convex optimization problem by training the generator and
discriminator neural networks:
$\displaystyle\underset{G}{\min}\underset{||\varphi||_{L}\leq
1}{\max}\left\\{\mathbb{E}_{\mathbf{x}\sim\mu}[\varphi(\mathbf{x})]-\mathbb{E}_{\mathbf{y}\sim\nu}[\varphi(\mathbf{y})]\right\\}=\underset{G}{\min}\underset{||\varphi||_{L}\leq
1}{\max}\left\\{\mathbb{E}_{\mathbf{x}\sim\mu}[\varphi(\mathbf{x})]-\mathbb{E}_{\mathbf{z}\sim\vartheta}[\varphi(G(\mathbf{z}))]\right\\}.$
(19)
We here note that $\varphi$ must be selected from a family of $1$-Lipschitz
functions. To this end, various training schemes have been proposed in the
literature.
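A minimal single-update sketch of (19), assuming weight clipping for the Lipschitz constraint (one of the training schemes alluded to above; the network sizes, learning rate, and clip value are illustrative):

```python
import torch
import torch.nn as nn

# One Wasserstein-1 GAN update step (19) with weight clipping as the
# Lipschitz control (Arjovsky et al., 2017). All sizes are illustrative.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))    # generator
phi = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # critic
opt_d = torch.optim.RMSprop(phi.parameters(), lr=5e-5)
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)

real = torch.randn(64, 2) + 3.0           # stand-in batch from mu
z = torch.randn(64, 8)                    # latent prior vartheta

# Critic ascent step: maximize E_mu[phi(x)] - E_nu[phi(G(z))].
d_loss = -(phi(real).mean() - phi(G(z).detach()).mean())
opt_d.zero_grad(); d_loss.backward(); opt_d.step()
for p in phi.parameters():                # crude 1-Lipschitz control
    p.data.clamp_(-0.01, 0.01)

# Generator descent step: minimize -E[phi(G(z))].
g_loss = -phi(G(z)).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```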
### A.3 From Wasserstein-1 Barycenter to Multi-Discriminator GAN Cost
Problem (3) can be expressed in terms of Kantorovich potentials by applying
Kantorovich’s Duality as:
$\displaystyle\underset{\nu\in\mathcal{P}}{\min}\sum\limits_{k=1}^{K}\frac{1}{\eta_{k}}W_{1}(\nu,\mu_{k})+W_{1}(\nu,\hat{\mu}_{0})=\underset{\nu\in\mathcal{P}}{\min}~{}\sum\limits_{k=1}^{K}\frac{1}{\eta_{k}}$
$\displaystyle\underset{(\varphi_{k},\psi_{k})\in\Phi_{c}}{\sup}\left\\{\int_{\mathcal{X}}\varphi_{k}(x)d\mu_{k}(x)+\int_{\mathcal{Y}}\psi_{k}(y)d\nu(y)\right\\}$
$\displaystyle+$
$\displaystyle\underset{(\varphi_{0},\psi_{0})\in\Phi_{c}}{\sup}\left\\{\int_{\mathcal{X}}\varphi_{0}(x)d\hat{\mu}_{0}(x)+\int_{\mathcal{Y}}\psi_{0}(y)d\nu(y)\right\\}.$
(20)
By using the $c$-transform, we have $\psi_{k}(y)=\varphi_{k}^{c}(y)$. In
particular, for the Wasserstein-1 distance, we have
$\varphi_{k}^{c}(y)=-\varphi_{k}(y)$, and hence (20) is further simplified
as:
$\displaystyle\underset{\nu\in\mathcal{P}}{\min}\sum\limits_{k=1}^{K}\frac{1}{\eta_{k}}$
$\displaystyle
W_{1}(\nu,\mu_{k})+W_{1}(\nu,\hat{\mu}_{0})=\underset{\nu\in\mathcal{P}}{\min}~{}\sum\limits_{k=1}^{K}\frac{1}{\eta_{k}}\underset{\lVert\varphi_{k}\rVert_{L}\leq
1}{\max}\left\\{\mathbb{E}_{\mathbf{x}\sim\mu_{k}}[\varphi_{k}(\mathbf{x})]-\mathbb{E}_{\mathbf{y}\sim\nu}[\varphi_{k}(\mathbf{y})]\right\\}$
$\displaystyle+$ $\displaystyle\underset{\lVert\varphi_{0}\rVert_{L}\leq
1}{\max}\left\\{\mathbb{E}_{\mathbf{x}\sim\hat{\mu}_{0}}[\varphi_{0}(\mathbf{x})]-\mathbb{E}_{\mathbf{y}\sim\nu}[\varphi_{0}(\mathbf{y})]\right\\}$
$\displaystyle=$
$\displaystyle\underset{G}{\min}\underset{\\{\lVert\varphi_{k}\rVert_{L}\leq
1\\}_{0:K}}{\max}\left\\{\mathbb{E}_{\mathbf{x}\sim\hat{\mu}_{0}}[\varphi_{0}(\mathbf{x})]-\mathbb{E}_{{\mathbf{z}}\sim\vartheta}[\varphi_{0}(G(\mathbf{z}))]\right\\}+\sum_{k=1}^{K}\frac{1}{\eta_{k}}\left\\{\mathbb{E}_{\mathbf{x}\sim\mu_{k}}[\varphi_{k}(\mathbf{x})]-\mathbb{E}_{{\mathbf{z}}\sim\vartheta}[\varphi_{k}(G(\mathbf{z}))]\right\\}.$
(21)
Therefore, a barycenter of $K$ distributions can be obtained by minimizing the
cost in (21) through a specially designed GAN configuration.
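The multi-critic cost in (21) can be sketched as a weighted sum of per-critic Wasserstein surrogates; the networks, weights $1/\eta_{k}$, and stand-in batches below are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Sketch of the multi-discriminator barycenter objective (21): one critic
# per pre-trained distribution mu_k plus one for the local dataset.
# Weights lam = 1/eta_k and all network sizes are assumptions.
K, dim, zdim = 3, 2, 8
critics = [nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 1))
           for _ in range(K + 1)]           # index 0 is the local critic
G = nn.Sequential(nn.Linear(zdim, 16), nn.ReLU(), nn.Linear(16, dim))
lam = [1.0] + [0.5] * K                     # 1/eta_k, equal for k = 1..K here

z = torch.randn(32, zdim)
batches = [torch.randn(32, dim) + k for k in range(K + 1)]  # stand-in mu_k

fake = G(z)
# Weighted sum of E_{mu_k}[phi_k(x)] - E[phi_k(G(z))] over all K+1 critics.
loss = sum(l * (c(x).mean() - c(fake).mean())
           for l, c, x in zip(lam, critics, batches))
print(float(loss))   # the generator minimizes this; the critics maximize it
```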
### A.4 A Discussion on the Relationship between One-Shot and Recursive Configurations
Even though the multi-discriminator GAN can in principle yield a Wasserstein-1
barycenter, training it in a one-shot manner is overwhelming for memory-
limited edge nodes. The proposed two-stage recursive configuration addresses
the memory problem by converting the one-shot formulation into a nested
Wasserstein barycenter problem. In a nutshell, a two-discriminator GAN
configuration suffices to obtain a shape-preserving interpolation of all
distributions. As discussed above, the Wasserstein-1 barycenter problem does
not necessarily admit a unique solution, due to the non-uniqueness of geodesic
curves between distributions in the probability space. Proposition 1 asserts
that any solution to each pairwise Wasserstein-1 barycenter problem, referred
to as a barycenter in this study, resides inside the baryregion formed by
$\\{\mu_{k}\\}_{1:K}$. Consequently, the final barycenter $\nu^{*}$, obtained
at the end of all recursions, also resides inside the baryregion. However, the
two-stage recursive configuration may not yield the same solution as the one-
shot Wasserstein-1 barycenter problem. Guided by the intuition that the
Wasserstein ball radius $\eta_{k}=\frac{1}{\lambda_{\psi_{k}}}$ for
pre-trained model $k$ represents the relevance (and hence utility) of
distribution $k$, larger weights $\lambda_{k}=1/\eta_{k}$ are assigned to the
nodes with higher relevance. Because the WGAN configuration is recursive, the
order of coalescence (each step corresponding to a geodesic curve) may impact
the final barycentric WGAN model, and hence the performance of barycentric
fast adaptation. To this end, we coalesce the models of nodes with higher
relevance at later recursions, so that the final barycentric model is closer
to the models of those nodes.
### A.5 Refined Forming Set
The following definition identifies a more compact forming set for a
baryregion, when one exists.
###### Definition 4.
(Refined Forming Set) Let $\\{\mu_{k}\\}_{k\in\kappa}$ be a subset of the
forming set $\\{\mu_{k}\\}_{1:K}$ for a set $\kappa\subset\mathcal{K}$, and
let $\mathcal{B}_{\mathcal{R}}(\kappa)$ represent the baryregion facilitated
by $\\{\mu_{k}\\}_{k\in\kappa}$. The smallest subset
$\\{\mu_{k}\\}_{k\in\kappa^{*}}$, satisfying
$\mathcal{B}_{\mathcal{R}}(\kappa^{*})\supseteq\mathcal{B}_{\mathcal{R}}$, is
defined as the refined forming set of $\mathcal{B}_{\mathcal{R}}$.
A refined forming set characterizes a baryregion as completely as the original
forming set, but better captures the geometric properties of the barycenter
problem. In particular, a refined forming set $\kappa^{*}$ dictates that
$\\{\mu_{k}\\}_{k\in\kappa^{*}}$ engenders exactly the same geodesic curves as
in $\mathcal{B}_{\mathcal{R}}$.
###### Proposition 2.
(Non-uniqueness) A refined forming set of $\\{\mu_{k}\\}_{1:K}$ is not
necessarily unique.
###### Proof.
To prove Proposition 2, it suffices to construct a counterexample. Consider a
forming set $\\{\mu_{k}\\}_{1:4}$ with the probability measures
$\mu_{1}=\delta_{(0,0)}$, $\mu_{2}=\delta_{(1,0)}$, $\mu_{3}=\delta_{(0,1)}$,
and $\mu_{4}=\delta_{(1,1)}$, where $\delta_{(\mathbf{a},\mathbf{b})}$ is the
delta function with value $1$ at $(x,y)=(\mathbf{a},\mathbf{b})$ and $0$
otherwise. Further, let $\\{\mu_{k}\\}_{k\in\\{1,4\\}}$ and
$\\{\mu_{k}\\}_{k\in\\{2,3\\}}$ be two subsets of the forming set. Then, the
length of the minimal geodesic curve between $\mu_{1}$ and $\mu_{4}$ can be
computed as:
$\displaystyle W_{1}(\mu_{1}(x),\mu_{4}(y))$
$\displaystyle=\underset{\gamma\in\Pi(\mu_{1},\mu_{4})}{\inf}\int_{\mathcal{X}\times\mathcal{Y}}d(x,y)d\gamma(x,y)$
$\displaystyle=\int_{\mathcal{Y}}\int_{\mathcal{X}}d([0,0]^{T},[1,1]^{T})\delta_{([0,0]^{T},[1,1]^{T})}dxdy=2.$
(22)
By recalling that there exist infinitely many minimal geodesics attaining
(22), we check the lengths of two other geodesics that traverse through
$\mu_{2}$ and $\mu_{3}$, respectively. First, for $\mu_{2}$,
$\displaystyle W_{1}($ $\displaystyle\mu_{1}(x),\mu_{4}(y))\leq
W_{1}(\mu_{1}(x),\mu_{2}(z))+W_{1}(\mu_{2}(z),\mu_{4}(y))$
$\displaystyle=\underset{\gamma\in\Pi(\mu_{1},\mu_{2})}{\inf}\int_{\mathcal{X}\times\mathcal{Z}}d(x,z)d\gamma(x,z)+\underset{\gamma\in\Pi(\mu_{2},\mu_{4})}{\inf}\int_{\mathcal{Z}\times\mathcal{Y}}d(z,y)d\gamma(z,y)$
$\displaystyle=\int_{\mathcal{Z}}\int_{\mathcal{X}}d([0,0]^{T},[1,0]^{T})\delta_{([0,0]^{T},[1,0]^{T})}dxdz+\int_{\mathcal{Y}}\int_{\mathcal{Z}}d([1,0]^{T},[1,1]^{T})\delta_{([1,0]^{T},[1,1]^{T})}dzdy$
$\displaystyle=2\leq W_{1}(\mu_{1}(x),\mu_{4}(y)),$ (23)
based on the triangle inequality and the definition of the Wasserstein-1
distance. Similarly, for $\mu_{3}$, we can show that
$\displaystyle W_{1}(\mu_{1}(x),\mu_{4}(y))\leq
W_{1}(\mu_{1}(x),\mu_{3}(z))+W_{1}(\mu_{3}(z),\mu_{4}(y))\leq
W_{1}(\mu_{1}(x),\mu_{4}(y)),$ (24)
through the selections $\gamma(x,z)=\delta_{([0,0]^{T},[0,1]^{T})}$ and
$\gamma(z,y)=\delta_{([0,1]^{T},[1,1]^{T})}$. As a result, there exists at
least a single minimal geodesic between $\mu_{1}$ and $\mu_{4}$ passing
through $\mu_{\ell}$ for $\ell\in\\{2,3\\}$, indicating that
$\mu_{2},\mu_{3}\in\mathcal{R}(\\{\mu_{k}\\}_{k\in\\{1,4\\}})$ and
$\mathcal{B}_{\mathcal{R}}(\\{\mu_{k}\\}_{k\in\\{1,4\\}})\supseteq\mathcal{B}_{\mathcal{R}}$.
Observing that there exists no smaller forming set than
$\\{\mu_{k}\\}_{k\in\\{1,4\\}}$, we conclude that
$\\{\mu_{k}\\}_{k\in\\{1,4\\}}$ is a refined forming set.
Following the same line, we can have that $\\{\mu_{k}\\}_{k\in\\{2,3\\}}$ is
another refined forming set of $\\{\mu_{k}\\}_{1:4}$ by first showing the
following three inequalities:
$\displaystyle W_{1}(\mu_{2}(x),\mu_{3}(y))$
$\displaystyle=\int_{\mathcal{Y}}\int_{\mathcal{X}}d([1,0]^{T},[0,1]^{T})\delta_{([1,0]^{T},[0,1]^{T})}dxdy=2,$
(25) $\displaystyle W_{1}(\mu_{2}(x),\mu_{3}(y))$ $\displaystyle\leq
W_{1}(\mu_{2}(x),\mu_{1}(z))+W_{1}(\mu_{1}(z),\mu_{3}(y))\leq
W_{1}(\mu_{2}(x),\mu_{3}(y)),$ (26) $\displaystyle
W_{1}(\mu_{2}(x),\mu_{3}(y))$ $\displaystyle\leq
W_{1}(\mu_{2}(x),\mu_{4}(z))+W_{1}(\mu_{4}(z),\mu_{3}(y))\leq
W_{1}(\mu_{2}(x),\mu_{3}(y)),$ (27)
where the transport maps $\gamma(x,z)=\delta_{([1,0]^{T},[0,0]^{T})}$ and
$\gamma(z,y)=\delta_{([0,0]^{T},[0,1]^{T})}$ are used for (26), and
$\gamma(x,z)=\delta_{([1,0]^{T},[1,1]^{T})}$ and
$\gamma(z,y)=\delta_{([1,1]^{T},[0,1]^{T})}$ for (27). Consequently, there
exists at least a single minimal geodesic between $\mu_{2}$ and $\mu_{3}$
passing through $\mu_{\ell}$ for $\ell\in\\{1,4\\}$, indicating that
$\mu_{1},\mu_{4}\in\mathcal{R}(\\{\mu_{k}\\}_{k\in\\{2,3\\}})$ and
$\mathcal{B}_{\mathcal{R}}(\\{\mu_{k}\\}_{k\in\\{2,3\\}})\supseteq\mathcal{B}_{\mathcal{R}}$.
Since there exists no smaller forming set than
$\\{\mu_{k}\\}_{k\in\\{2,3\\}}$, we have that $\\{\mu_{k}\\}_{k\in\\{2,3\\}}$
is another refined forming set, thereby completing the proof of
non-uniqueness. ∎
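Since $W_{1}$ between Dirac measures reduces to the ground distance, the geodesic-length identities used in the proof can be checked numerically under the $L_{1}$ ground metric implied by (22):

```python
import numpy as np

# For Dirac measures, W1(delta_a, delta_b) = d(a, b). With the L1 ground
# metric used in (22)-(27), the equalities in the proof follow directly.
d = lambda a, b: np.abs(np.asarray(a) - np.asarray(b)).sum()
m1, m2, m3, m4 = (0, 0), (1, 0), (0, 1), (1, 1)

assert d(m1, m4) == 2                        # eq. (22)
assert d(m1, m2) + d(m2, m4) == d(m1, m4)    # geodesic through mu_2, eq. (23)
assert d(m1, m3) + d(m3, m4) == d(m1, m4)    # geodesic through mu_3, eq. (24)
assert d(m2, m3) == 2                        # eq. (25)
assert d(m2, m1) + d(m1, m3) == d(m2, m3)    # geodesic through mu_1, eq. (26)
assert d(m2, m4) + d(m4, m3) == d(m2, m3)    # geodesic through mu_4, eq. (27)
print("all geodesic identities hold")
```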
## Appendix B Proof of Proposition 1
###### Proof.
Let $\\{\mu_{k}\\}_{1:K}$ be any set of probability measures on a refined
forming set and $\nu_{k}^{*}$ denote a continuous probability measure with no
atoms, which minimizes the problem
$\underset{\nu_{k}}{\min}~{}W_{1}(\mu_{k},\nu_{k})+W_{1}(\nu_{k-1}^{*},\nu_{k})$
(Ambrosio et al., 2008). By Proposition 2, there exist multiple refined
forming sets, and the following proof holds for any refined forming set
induced by the original set of probability distributions. The proof utilizes
the geodesic property and the existence of a barycenter in Wasserstein-1
space, for which the details can be found in (Villani, 2003; Ambrosio et al.,
2008) and (Le Gouic & Loubes, 2017), respectively. Suppose
that $\alpha\not\in\mathcal{B}_{\mathcal{R}}$ is a distribution satisfying
$\displaystyle
W_{1}(\mu_{2},\nu_{2}^{*})+W_{1}(\mu_{1},\nu_{2}^{*})=W_{1}(\mu_{2},\alpha)+W_{1}(\mu_{1},\alpha).$
(28)
Let $\nu_{2}^{*}=\alpha$. Note that if
$\alpha\not\in\mathcal{B}_{\mathcal{R}}$, $\alpha$ cannot reside on the
geodesic curve $g_{t}(\mu_{1},\mu_{2})_{0\leq t\leq 1}$ since
$g_{t}(\mu_{1},\mu_{2})_{0\leq t\leq 1}\in\mathcal{B}_{\mathcal{R}}$.
Subsequently, by considering another distribution $\beta$ which resides on
geodesic curve $g_{t}(\mu_{1},\mu_{2})$, we can also show that:
$\displaystyle W_{1}(\mu_{1},\beta)+W_{1}(\mu_{2},\beta)$
$\displaystyle=W_{1}(\mu_{1},\beta)+W_{1}(\beta,\mu_{2})=W_{1}(\mu_{1},\mu_{2})$
$\displaystyle<W_{1}(\mu_{1},\alpha)+W_{1}(\alpha,\mu_{2})=W_{1}(\mu_{2},\nu_{2}^{*})+W_{1}(\mu_{1},\nu_{2}^{*}),$
(29)
indicating that $\beta$ attains a lower cost than the minimizer $\nu_{2}^{*}$,
which is a contradiction; hence $\nu_{2}^{*}$ must reside in
$\mathcal{B}_{\mathcal{R}}$. Similarly, $\nu_{3}^{*}$ must also reside in
$\mathcal{B}_{\mathcal{R}}$:
$\displaystyle W_{1}(\mu_{3},\beta)+W_{1}(\nu_{2}^{*},\beta)$
$\displaystyle=W_{1}(\mu_{3},\beta)+W_{1}(\beta,\nu_{2}^{*})=W_{1}(\mu_{3},\nu_{2}^{*})$
$\displaystyle<W_{1}(\mu_{3},\alpha)+W_{1}(\alpha,\nu_{2}^{*}).$ (30)
By induction, $\beta\in\mathcal{B}_{\mathcal{R}}$ attains a lower cost
compared with $\alpha\not\in\mathcal{B}_{\mathcal{R}}$ at the $k$th iteration:
$\displaystyle W_{1}(\mu_{k},\beta)+W_{1}(\nu_{k-1}^{*},\beta)$
$\displaystyle=W_{1}(\mu_{k},\beta)+W_{1}(\beta,\nu_{k-1}^{*})=W_{1}(\mu_{k},\nu_{k-1}^{*})$
$\displaystyle<W_{1}(\mu_{k},\alpha)+W_{1}(\alpha,\nu_{k-1}^{*}).$ (31)
Hence, $\nu_{k}^{*}=\beta\in\mathcal{B}_{\mathcal{R}}$. Consequently, the
barycenters at all iterations must reside in the baryregion
$\mathcal{B}_{\mathcal{R}}$.
Similarly, we can show that for stage $\mathrm{II}$ the following holds:
$\displaystyle W_{1}(\mu_{0},\beta)+W_{1}(\nu_{K}^{*},\beta)$
$\displaystyle=W_{1}(\mu_{0},\beta)+W_{1}(\beta,\nu_{K}^{*})=W_{1}(\mu_{0},\nu_{K}^{*})$
$\displaystyle<W_{1}(\mu_{0},\alpha)+W_{1}(\alpha,\nu_{K}^{*}).$ (32)
Consequently, $\nu^{*}$ also resides in $\mathcal{B}_{\mathcal{R}}$, which
completes the proof. ∎
## Appendix C Algorithms and Experiment Settings
### C.1 Algorithms
For the proposed two-stage adaptive coalescence algorithm, the offline
training in Stage I is done in the cloud, and the fast adaptation in Stage II
is carried out at the edge server, in the same spirit as the model update of
the Google Edge TPU. Particularly, as illustrated in Figure 7, each edge node
sends its pre-trained generative model (instead of its own training dataset)
to the cloud. As noted before, the amount of bandwidth required to transmit
data from an edge node to cloud is also significantly reduced by transmitting
only a generative model, because neural network model parameters require much
smaller storage than the dataset itself. The algorithms developed in this
study are summarized as follows:
Algorithm 1 Offline training to solve the barycenter of $K$ pre-trained
generative models
1: Inputs: $K$ pre-trained generator-discriminator pairs
$\\{(G_{k},D_{k})\\}_{1:K}$ of corresponding source nodes $k\in\mathcal{K}$,
noise prior $\vartheta(z)$, the batch size $m$, learning rate $\alpha$
2: Outputs: Generator $\mathcal{G}^{*}_{K}$ for barycenter $\nu_{K}^{*}$,
discriminators $\tilde{\psi}_{K}^{*}$, $\psi_{K}^{*}$;
3: Set $\mathcal{G}^{*}_{1}\leftarrow G_{1}$, $\tilde{\psi}^{*}_{1}\leftarrow
D_{1}$; //Barycenter initialization
4: for iteration $k=2,...,K$ do
5: Set $\mathcal{G}_{k}\leftarrow\mathcal{G}^{*}_{k-1}$,
$\tilde{\psi}_{k}\leftarrow\\{\tilde{\psi}_{k-1}^{*},\psi_{k-1}^{*}\\}$,
$\psi_{k}\leftarrow D_{k}$ and choose $\lambda_{\tilde{\psi}_{k}}$,
$\lambda_{\psi_{k}}$; //Recursion initialization
6: while generator $\mathcal{G}_{k}$ has not converged do
7: Sample batches of prior samples $\\{z^{(i)}\\}_{i=1}^{m}$,
$\\{{z}^{(i)}_{\tilde{\psi}_{k}}\\}_{i=1}^{m}$,
$\\{{z}^{(i)}_{\psi_{k}}\\}_{i=1}^{m}$ independently from prior
$\vartheta(z)$;
8: Generate synthetic data batches
$\\{{x}^{(i)}_{\tilde{\psi}_{k}}\\}_{i=1}^{m}\sim\nu_{k-1}^{*}$ and
$\\{{x}^{(i)}_{\psi_{k}}\\}_{i=1}^{m}\sim\mu_{k}$ by passing
$\\{{z}^{(i)}_{\tilde{\psi}_{k}}\\}_{i=1}^{m}$ and
$\\{{z}^{(i)}_{\psi_{k}}\\}_{i=1}^{m}$ through $\mathcal{G}_{k-1}^{*}$ and
$G_{k}$, respectively;
9: Compute gradients $g_{\tilde{\psi}_{k}}$ and $g_{\psi_{k}}$:
$\left\\{g_{\omega}\leftarrow\lambda_{\omega}\nabla_{\omega}\frac{1}{m}\sum_{i=1}^{m}\left[\omega(x^{(i)}_{\omega})-\omega(\mathcal{G}_{k}(z^{(i)})\right]\right\\}_{\omega={\tilde{\psi}_{k}},{\psi_{k}}}$;
10: Update both discriminators $\psi_{k}$ and $\tilde{\psi}_{k}$:
$\left\\{\omega\leftarrow\omega+\alpha\cdot\text{Adam}(\omega,g_{\omega})\right\\}_{\omega=\psi_{k},\tilde{\psi}_{k}}$;
11: Compute gradient
$g_{\mathcal{G}_{k}}\leftarrow-\nabla_{\mathcal{G}_{k}}\frac{1}{m}\sum_{i=1}^{m}\left[\lambda_{\psi_{k}}\psi_{k}(\mathcal{G}_{k}(z^{(i)}))+\lambda_{\tilde{\psi}_{k}}\tilde{\psi}_{k}(\mathcal{G}_{k}(z^{(i)}))\right]$;
12: Update generator $\mathcal{G}_{k}$:
$\mathcal{G}_{k}\leftarrow\mathcal{G}_{k}-\alpha\cdot\text{Adam}(\mathcal{G}_{k},g_{\mathcal{G}_{k}})$
until optimal generator $\mathcal{G}^{*}_{k}$ is computed;
13: end while
14: end for
15: return generator $\mathcal{G}^{*}_{K}$, discriminators
$\tilde{\psi}_{K}^{*}$, $\psi_{K}^{*}$.
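One inner iteration of Algorithm 1 (steps 7-12) can be sketched in PyTorch as follows; the network sizes, weights $\lambda$, and the stand-in pre-trained generators are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Sketch of one inner iteration of Algorithm 1 (steps 7-12): update the two
# critics psi_k and tilde-psi_k by ascent, then the generator G_k by descent.
zdim, dim, m, alpha = 8, 2, 32, 1e-4
net = lambda i, o: nn.Sequential(nn.Linear(i, 16), nn.ReLU(), nn.Linear(16, o))
Gk = net(zdim, dim)                               # current barycenter generator
G_prev, G_node = net(zdim, dim), net(zdim, dim)   # stand-ins for G*_{k-1}, G_k
psi, psi_t = net(dim, 1), net(dim, 1)             # the two critics
lam_psi, lam_psi_t = 0.5, 0.5                     # lambda weights (assumed)
opt_d = torch.optim.Adam(list(psi.parameters()) + list(psi_t.parameters()), lr=alpha)
opt_g = torch.optim.Adam(Gk.parameters(), lr=alpha)

z, z_t, z_p = (torch.randn(m, zdim) for _ in range(3))   # step 7: prior samples
x_t = G_prev(z_t).detach()                               # step 8: ~ nu_{k-1}
x_p = G_node(z_p).detach()                               #          ~ mu_k

# Steps 9-10: critic ascent on the weighted W1 surrogates.
d_loss = -(lam_psi_t * (psi_t(x_t).mean() - psi_t(Gk(z).detach()).mean())
           + lam_psi * (psi(x_p).mean() - psi(Gk(z).detach()).mean()))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Steps 11-12: generator descent.
g_loss = -(lam_psi * psi(Gk(z)).mean() + lam_psi_t * psi_t(Gk(z)).mean())
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```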
Figure 7: Illustration of offline training in the cloud: Each edge node sends
its pre-trained generative model (instead of its dataset) to the cloud, based
on which the cloud computes adaptive barycenters using the recursive
configuration.
Algorithm 2 Fast adaptive learning of the ternary generative model for edge Node 0
1: Inputs: Training dataset $\mathcal{S}_{0}$, generator $\mathcal{G}^{*}_{K}$
for the barycenter $\nu_{K}^{*}$, offline pre-trained discriminators
$\psi_{K}^{*}$, $\tilde{\psi}_{K}^{*}$, noise prior $\vartheta(z)$, the batch
size $m$, learning rate $\alpha$, the number of layers
$L_{\mathcal{G}}=L_{\psi}=L_{\tilde{\psi}}=L$;
2: Outputs: the ternary generator $\mathcal{G}_{0}$;
3: Set $\mathcal{G}_{0}\leftarrow\mathcal{G}^{*}_{K}$,
$\tilde{\psi}_{0}\leftarrow\tilde{\psi}_{K}^{*}$ and
$\psi_{0}\leftarrow\psi_{K}^{*}$; //Initialization
4: while generator $\mathcal{G}_{0}$ has not converged do
5: for $l:=1$ to $L$ //Weight ternarization do
6: $\left\\{\mathbf{w}^{\prime}_{l_{\omega}}\leftarrow
Tern(\mathbf{w}_{l_{\omega}},S_{l_{\omega}},\Delta_{l_{\omega}}^{\pm})\right\\}_{\omega={\tilde{\psi}_{k}},{\psi_{k}}}$;
7: $\mathbf{w}^{\prime}_{l_{\mathcal{G}}}\leftarrow
Tern(\mathbf{w}_{l_{\mathcal{G}}},S_{l_{\mathcal{G}}},\Delta_{l_{\mathcal{G}}}^{\pm})$;
8: end for
9: Sample batches of prior samples $\\{z^{(i)}\\}_{i=1}^{m}$ from prior
$\vartheta(z)$;
10: Sample batches of training samples $\\{x_{0}^{i}\\}_{i=1}^{m}$ from local
dataset $\mathcal{S}_{0}$;
11: for $l:=L$ to $1$ //Update the thresholds do
12: Compute gradients
$\left\\{g_{\Delta_{l_{\omega}}^{\pm}}\right\\}_{\omega={\tilde{\psi}_{k}},{\psi_{k}}}$:
$\left\\{g_{\Delta_{l_{\omega}}^{\pm}}\leftarrow\nabla_{\Delta_{l_{\omega}}^{\pm}}\frac{1}{m}\sum_{i=1}^{m}\left[\omega_{0}(x^{(i)}_{0})-\omega_{0}(\mathcal{G}_{0}(z^{(i)})\right]\right\\}_{\omega={\tilde{\psi}_{k}},{\psi_{k}}}$;
13: Update
$\left\\{\Delta_{l_{\omega}}^{\pm}\right\\}_{\omega={\tilde{\psi}_{k}},{\psi_{k}}}$:
$\left\\{\Delta_{l_{\omega}}^{\pm}\leftarrow\Delta_{l_{\omega}}^{\pm}+\alpha\cdot
g_{\Delta_{l_{\omega}}^{\pm}}\right\\}_{\omega={\tilde{\psi}_{k}},{\psi_{k}}}$;
14: Compute gradient $g_{\Delta_{l_{\mathcal{G}}}^{\pm}}$:
$g_{\Delta_{l_{\mathcal{G}}}^{\pm}}\leftarrow-\nabla_{\Delta_{l_{\mathcal{G}}}^{\pm}}\frac{1}{m}\sum_{i=1}^{m}\left[\psi_{0}(\mathcal{G}_{0}(z^{(i)}))+\tilde{\psi}_{0}(\mathcal{G}_{0}(z^{(i)}))\right]$;
15: Update $\Delta_{l_{\mathcal{G}}}^{\pm}$:
$\Delta_{l_{\mathcal{G}}}^{\pm}\leftarrow\Delta_{l_{\mathcal{G}}}^{\pm}-\alpha\cdot
g_{\Delta_{l_{\mathcal{G}}}^{\pm}}$;
16: end for
17: Repeat steps 3-5 using the updated thresholds;
18: for $l:=L$ to $1$ //Update the full-precision weights do
19: Compute gradients
$\left\\{g_{\mathbf{w}_{l_{\omega}}}\right\\}_{\omega={\tilde{\psi}_{k}},{\psi_{k}}}$:
$\left\\{g_{\mathbf{w}_{l_{\omega}}}\leftarrow\nabla_{\mathbf{w}_{l_{\omega}}}\frac{1}{m}\sum_{i=1}^{m}\left[\omega_{0}(x^{(i)}_{0})-\omega_{0}(\mathcal{G}_{0}(z^{(i)})\right]\right\\}_{\omega={\tilde{\psi}_{k}},{\psi_{k}}}$;
20: Update
$\left\\{\mathbf{w}_{l_{\omega}}\right\\}_{\omega={\tilde{\psi}_{k}},{\psi_{k}}}$:
$\left\\{\mathbf{w}_{l_{\omega}}\leftarrow\mathbf{w}_{l_{\omega}}+\alpha\cdot\text{Adam}(\mathbf{w}_{l_{\omega}},g_{\mathbf{w}_{l_{\omega}}})\right\\}_{\omega={\tilde{\psi}_{k}},{\psi_{k}}}$;
21: Compute gradient $g_{\mathbf{w}_{l_{\mathcal{G}}}}$:
$g_{\mathbf{w}_{l_{\mathcal{G}}}}\leftarrow-\nabla_{\mathbf{w}_{l_{\mathcal{G}}}}\frac{1}{m}\sum_{i=1}^{m}\left[\psi_{0}(\mathcal{G}_{0}(z^{(i)}))+\tilde{\psi}_{0}(\mathcal{G}_{0}(z^{(i)}))\right]$;
22: Update $\mathbf{w}_{l_{\mathcal{G}}}$:
$\mathbf{w}_{l_{\mathcal{G}}}\leftarrow\mathbf{w}_{l_{\mathcal{G}}}-\alpha\cdot\text{Adam}(\mathbf{w}_{l_{\mathcal{G}}},g_{\mathbf{w}_{l_{\mathcal{G}}}})$;
23: end for
24: Repeat steps 3-5 using the updated full-precision weights;
25: end while
26: return the ternary generator $\mathcal{G}_{0}$.
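The operator $Tern(\cdot)$ in Algorithm 2 is not spelled out above; a plausible sketch maps each weight to $\{-s,0,+s\}$ using the layer thresholds $\Delta^{\pm}$, where the scaling rule (mean magnitude of the surviving weights) is an assumption:

```python
import torch

# Hypothetical sketch of Tern(.) in Algorithm 2: each full-precision weight
# is mapped to {-s, 0, +s} via thresholds delta_pos/delta_neg, with s taken
# as the mean magnitude of the weights that survive thresholding (assumed).
def tern(w, delta_pos, delta_neg):
    pos = w > delta_pos
    neg = w < delta_neg
    keep = pos | neg
    scale = w[keep].abs().mean() if keep.any() else w.new_tensor(0.0)
    out = torch.zeros_like(w)
    out[pos] = scale
    out[neg] = -scale
    return out

w = torch.tensor([0.8, -0.6, 0.05, -0.02, 0.4])
print(tern(w, 0.1, -0.1))  # small weights -> 0, others -> +/- mean magnitude
```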
### C.2 Experiment Settings
This section outlines the architecture of deep neural networks and hyper-
parameters used in the experiments.
Network architectures deployed in the experiments. Figures 8, 9 and 10 depict
the details of the DNN architecture used in our experiments; the shapes for
convolution layers follow
$(batch~{}size,number~{}of~{}filters,kernel~{}size,stride,padding)$; and the
shapes for network inputs follow
$(batch~{}size,number~{}of~{}channels,heights,widths)$.
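As an illustration of this convention, a hypothetical tuple $(64,128,4,2,1)$ (not one taken from the figures) would describe a convolution layer as follows; note that the batch size belongs to the input tensor rather than the layer itself:

```python
import torch
import torch.nn as nn

# Shape convention: (batch, filters, kernel, stride, padding) for a conv
# layer, and (batch, channels, height, width) for its input tensor.
x = torch.randn(64, 3, 32, 32)                        # input: (64, 3, 32, 32)
conv = nn.Conv2d(in_channels=3, out_channels=128,     # layer: (64, 128, 4, 2, 1)
                 kernel_size=4, stride=2, padding=1)
print(conv(x).shape)  # output spatial size: (32 + 2*1 - 4)/2 + 1 = 16
```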
Hyper-parameters used in the experiments. All experiments are conducted in
PyTorch on a server with RTX 2080 Ti and 64GB of memory. The selection of most
parameter values, e.g., the number of generator iterations, batch size,
optimizer, gradient penalty factor, and the number of discriminator iterations
per generator iterations, follows (Arjovsky et al., 2017; Gulrajani et al.,
2017; Wang et al., 2018b). For other parameters, we select the values giving
the best performance via trial and error. Tables 1 and 2 list all
hyper-parameters. We have considered different ranges of values for different
parameters. The number of generator iterations (fast adaptation) ranges from
$800$ up to $100000$; for better illustration, the figures depict only the
iterations until satisfactory image quality is achieved. For the number of
samples at the target edge node, $500\sim 10000$ samples in CIFAR10, $20\sim
500$ samples in MNIST, and $500\sim 1000$ samples in LSUN and CIFAR100 are
used. Each curve is smoothed via a moving-average filter for better
visualization. More details and instructions for modifying the
hyper-parameters are available in the accompanying code, which will be made
publicly available on GitHub once the review process is over.
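The moving-average smoothing mentioned above can be realized with a uniform convolution; the window size and the sample curve below are illustrative choices:

```python
import numpy as np

# Moving-average smoothing of a metric curve via a uniform kernel.
def smooth(curve, window=5):
    kernel = np.ones(window) / window
    return np.convolve(curve, kernel, mode="valid")

fid = np.array([50, 48, 52, 45, 44, 43, 46, 41, 40, 39], dtype=float)
print(smooth(fid))  # "valid" mode yields 10 - 5 + 1 = 6 smoothed points
```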
Figure 8: The DNN architecture used in experiments for LSUN dataset.
Figure 9: The DNN architecture used in experiments for CIFAR10 and CIFAR100
datasets.
Figure 10: The DNN architecture used in experiments for MNIST dataset.
Table 1: List of hyper-parameters used in all experiments.
| CIFAR10 | | |
---|---|---|---|---
Parameter\Experiment Figure | Fig. 5(e) | Fig. 14(c) | Fig. 5(c) | | |
Num. of Generator Iter. (Offline Training) | 100000 | 100000 | 5000 | | |
Num. of Generator Iter. (fast adaptation) | 5000 | 2000 | 3000 | | |
Batch Size | 64 | 64 | 64 | | |
Optimizer Parameters | Adam($\beta_{1}=0.5,\beta_{2}=0.999$, l.r.$=0.0001$) | | |
Num. of Discriminator Iter. per Gen. Iter. | 5 | 5 | 5 | | |
Gradient-Penalty Factor | 10 | 10 | 10 | | |
Num. of Samples in Node $0$ | Varies | 1000 | 1000 | | |
$\\{\lambda_{k}\\}$ | Equal | Equal | Equal | | |
Number of Training Samples | 50000 | 50000 | 50000 | | |
Avg. Dur. of Pretraining/Transfer per Iter. | 0.755 seconds | | |
Avg. Dur. of Training Barycenter per Iter. | 1.480 seconds | | |
| LSUN | | |
Parameter\Experiment Figure | Fig. 13(c) | Fig. 5(f) | Fig. 13(a) | | |
Num. of Generator Iter. (Offline Training) | 10000 | 10000 | 10000 | | |
Num. of Generator Iter. (fast adaptation) | 10000 | 10000 | 5000 | | |
Batch Size | 64 | 64 | 64 | | |
Optimizer Parameters | Adam($\beta_{1}=0.5,\beta_{2}=0.999$, l.r.$=0.0001$) | |
Num. of Discriminator Iter. per Gen. Iter. | 5 | 5 | 5 | | |
Gradient-Penalty Factor | 10 | 10 | 10 | | |
Num. of Samples in Node $0$ | 1000 | 1000 | Varies | | |
$\\{\lambda_{k}\\}$ | Equal | Equal | Equal | | |
Number of Training Samples | 100000 | 100000 | 100000 | | |
Avg. Dur. of Pretraining/Transfer per Iter. | 1.055 seconds | | |
Avg. Dur. of Training Barycenter per Iter. | 2.125 seconds | | |
| MNIST | |
Parameter\Experiment | Fig. 12(a), 12(b) | Fig. 5(a) | Fig. 14(b) | Fig. 5(b) | |
Num. of Generator Iter. (Offline Training) | 1000 | 3000 | 1000 | 1000 | |
Num. of Generator Iter. (fast adaptation) | 800 | 1000 | 800 | 800 | |
Batch Size | 256 | 256 | 256 | 256 | |
Optimizer Parameters | Adam($\beta_{1}=0.5,\beta_{2}=0.999$, l.r.$=0.0001$) | |
Num. of Discriminator Iter. per Gen. Iter. | 5 | 5 | 5 | 5 | |
Gradient-Penalty Factor | 10 | 10 | 10 | 10 | |
Num. of Samples in Node $0$ | Varies | 8 | 100 | 20 | |
$\\{\lambda_{k}\\}$ | Equal | Equal | Equal | Equal | |
Number of Training Samples | 60000 | 60000 | 60000 | 60000 | |
Avg. Dur. of Pretraining/Transfer per Iter. | 0.535 seconds | |
Avg. Dur. of Training Barycenter per Iter. | 1.020 seconds | |
Table 2: List of hyper-parameters used in all experiments.
| CIFAR100 | |
---|---|---|---
Parameter\Experiment Figure | Fig. 13(d) | Fig. 13(b) | Fig. 14(a) | Fig. 15(a) | |
Num. of Generator Iter. (Offline Training) | 10000 | 10000 | 10000 | 10000 | |
Num. of Generator Iter. (fast adaptation) | 10000 | 10000 | 10000 | 10000 | |
Batch Size | 64 | 64 | 64 | 64 | |
Optimizer Parameters | Adam($\beta_{1}=0.5,\beta_{2}=0.999$, l.r.$=0.0001$) | |
Num. of Discriminator Iter. per Gen. Iter. | 5 | 5 | 5 | 5 | |
Gradient-Penalty Factor | 10 | 10 | 10 | 10 | |
Num. of Samples in Node $0$ | 1000 | Varies | 500 | 500 | |
$\\{\lambda_{k}\\}$ | Equal | Equal | Equal | Varies | |
Number of Training Samples | 50000 | 50000 | 50000 | 50000 | |
Avg. Dur. of Pretraining/Transfer per Iter. | 0.756 seconds | |
Avg. Dur. of Training Barycenter per Iter. | 1.482 seconds | |
CIFAR100 (continued)
| Parameter \ Experiment | Fig. 15(b) | Fig. 15(d) | Fig. 15(c) | Fig. 5(d) |
|---|---|---|---|---|
| Num. of Generator Iter. (Offline Training) | 5000 | 5000 | 10000 | 10000 |
| Num. of Generator Iter. (fast adaptation) | 5000 | 5000 | 10000 | 7000 |
| Batch Size | 64 | 64 | 64 | 64 |
| Optimizer Parameters | Adam($\beta_{1}=0.5,\beta_{2}=0.999$, l.r. $=0.0001$) | | | |
| Num. of Discriminator Iter. per Gen. Iter. | 5 | 5 | 5 | 5 |
| Gradient-Penalty Factor | 10 | 10 | 10 | 10 |
| Num. of Samples in Node $0$ | 1000 | 1000 | 5000 | 400 |
| $\{\lambda_{k}\}$ | Equal | Equal | Equal | Equal |
| Number of Training Samples | 50000 | 50000 | 50000 | 50000 |
| Avg. Dur. of Pretraining/Transfer per Iter. | 0.756 seconds | | | |
| Avg. Dur. of Training Barycenter per Iter. | 1.482 seconds | | | |
Figure 11: The DNN architecture used for extracting features in MNIST images and computing the modified FID scores.

Figure 12: Image quality performance of the two-stage adaptive coalescence algorithm in various scenarios. (a) Evolution of image quality on MNIST using the FID score under different numbers of samples at the target edge node. (b) Evolution of image quality on MNIST using the modified FID score under different numbers of samples at the target edge node.
## Appendix D Experiments and Further Discussion
### D.1 Frechet Inception Distance Score
An overview of the FID score. Quantifying the quality of generated images is an important problem for performance comparison in the GAN literature. A variety of metrics have been proposed to quantify image quality while accounting for over-fitting and mode collapse. This study adopts the FID score (Heusel et al., 2017a), which has been shown to evaluate image quality and over-fitting accurately, independent of the number of classes. Since most of the datasets considered in this study (CIFAR10, LSUN and MNIST) contain only $10$ classes, and they are further split into subsets, a class-independent metric is essential for our study; metrics that depend strongly on the number of classes, e.g., the Inception score (IS), may not be appropriate here.
Similar to IS, a pre-trained ‘Inception’ network is utilized to extract features from real and fake images; the FID score is then the Fréchet distance between Gaussians fitted to the two sets of features. A perfect score of $0$ is obtained only if the feature distributions of the real and fake datasets coincide, i.e., the fake images span every image in the real dataset. Consequently, if a generative model is trained only on a subset of the real-world dataset, the model over-fits that subset and does not capture the features of the remaining real samples, thus yielding a poor FID score.
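Concretely, the FID score is the Fréchet distance between two Gaussians fitted to the feature activations of real and fake images. A minimal sketch (feature extraction through the Inception network is omitted; the function and variable names are ours):

```python
import numpy as np
from scipy import linalg

def fid_score(feats_real, feats_fake):
    # Fit a Gaussian (mean, covariance) to each set of feature vectors,
    # then compute the Frechet distance between the two Gaussians:
    # ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2})
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov1 = np.cov(feats_real, rowvar=False)
    cov2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):  # discard tiny numerical imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))
```

Identical feature distributions give a score near $0$, and the score grows as the real and fake feature statistics diverge.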
Modified FID score for the MNIST dataset. Since the ‘Inception’ network is pre-trained on the ‘ILSVRC 2012’ dataset, both IS and FID scores are best suited to RGB images (e.g., CIFAR); they cannot accurately capture the valuable features in MNIST images, simply because the ‘ILSVRC 2012’ dataset does not contain MNIST classes.
To resolve this issue, we train a new neural network specifically to extract useful features for the MNIST dataset. The architecture of the corresponding DNN is shown in Figure 11. The fully trained network achieves an accuracy of $99.23\%$ when classifying MNIST images. Although this architecture is much simpler than the ‘Inception’ network, the high classification accuracy indicates that the network extracts the most valuable features in the MNIST dataset.
To further demonstrate the difference between the FID and modified FID scores, we evaluate the results of Experiment $4$ using both approaches, as shown in Figures 12(a) and 12(b), respectively. Upon convergence, the FID scores for ‘Edge-Only’ with different numbers of samples are similar, whereas the modified FID scores are more distinct from each other and correctly reflect the learning performance. Moreover, ‘Edge-Only’ with $20$ samples incorrectly scores better than ‘Edge-Only’ with $100$ samples under the FID score, while both perform as expected under the modified FID score. Hence, the modified FID score better captures the image features and is a more suitable metric for evaluating image quality in experiments with MNIST.
Figure 13: Image quality performance of barycentric fast adaptation in various scenarios. (a) Evolution of image quality on LSUN using the FID score under different numbers of samples at the target edge node. (b) Evolution of image quality on CIFAR100 using the FID score under different numbers of samples at the target edge node. (c) Convergence of barycentric fast adaptation compared to 3 different baselines: LSUN. (d) Convergence of barycentric fast adaptation compared to 3 different baselines: CIFAR100.
### D.2 Additional Experiments on MNIST, CIFAR10, CIFAR100 and LSUN
Fine-tuning via fast adaptation. We investigate the convergence and the image
quality of various training scenarios on MNIST, CIFAR10, CIFAR100 and LSUN
datasets. To demonstrate the improvements by using the proposed framework
based on _Barycentric Fast-Adaptation_ , we conduct extensive experiments and
compare performance with $3$ additional baselines: 1) _Edge-Only_ : only local
dataset with few samples at the target edge node is used in WGAN training; 2)
_Weight-Average_ : an initial model for training a WGAN model at the target
edge node is computed by weight-averaging pre-trained models across other edge
nodes, and then _Barycentric Fast-Adaptation_ is used to train a WGAN model;
3) _Whole Data at Node $0$_: the whole dataset available across all edge nodes
is used in WGAN training.
As illustrated in Figures 12(b), 13(a) and 13(b), _barycentric fast adaptation_ outperforms _Edge-Only_ in all scenarios with different training-set sizes. In particular, the significant gap in modified FID scores between the two approaches in the initial stages indicates that the barycenter found via offline training, adopted as the model initialization for fast adaptation, is indeed close to the underlying model at the target edge node, hence enabling faster and more accurate edge learning than _Edge-Only_. Moreover, upon convergence, _barycentric fast adaptation_ achieves a better FID score (hence better image quality) than _Edge-Only_ , because the former converges to a barycenter residing between the coalesced model computed offline and the empirical model at the target edge node. We further notice that _barycentric fast adaptation_ noticeably mitigates the catastrophic forgetting apparent in _Transferring GANs_ and _Edge-Only_ , but cannot eliminate it completely in Figure 13. As illustrated in Figure 15, catastrophic forgetting can be eliminated by selecting appropriate $\eta_{k}$ values. As expected, the modified FID score gap between the two approaches decreases as the number of data samples at the target node increases, simply because the empirical distribution becomes more ‘accurate’.
Figure 14: Image quality performance of the two-stage adaptive coalescence algorithm in various scenarios. (a) Convergence of ternarized and full-precision barycentric fast adaptation methods on CIFAR100. (b) Convergence of image quality of ternarized and full-precision barycentric fast adaptation techniques on MNIST. (c) Convergence of ternarized and full-precision fast adaptation methods on CIFAR10.
Figures 13(c) and 13(d) compare the performance of _Barycentric Fast-Adaptation_ on LSUN and CIFAR100 with 2 additional baselines, _Weight-Average_ and _Whole Data at Node $0$_. Again, _Barycentric Fast-Adaptation_ outperforms all baselines in the initial stages of training, but, as expected, _Whole Data at Node $0$_ achieves the best FID score upon convergence, as it utilizes the whole reference dataset. Unsurprisingly, _Weight-Average_ performs poorly, since weight averaging does not constitute a shape-preserving transformation of the pre-trained models, while _Barycentric Fast-Adaptation_ does, by utilizing displacement interpolation in the Wasserstein space.
Figure 15: Image quality performance of the two-stage adaptive coalescence algorithm in various scenarios; the CIFAR100 dataset is used in all experiments. (a) Evolution of image quality for different Wasserstein ball radii. (b) Evolution of the quality of images generated by fast adaptation using a pre-trained model or using few samples at the target node. (c) Evolution of the quality of images generated by fast adaptation for different numbers of data samples with the same data classes at edge nodes. (d) Evolution of the quality of images generated by fast adaptation for a disjoint dataset at the target node.
Ternary WGAN based fast adaptation. In the same spirit as the LSUN experiment, we compare the image quality obtained by ternary WGAN-based fast adaptation against both its full-precision counterpart and _Edge-Only_ for the CIFAR100, CIFAR10 and MNIST datasets. The modified FID scores (Figures 14(b), 14(c) and 14(a)) show that ternary WGAN-based fast adaptation achieves image quality between its full-precision counterpart and the _Edge-Only_ approach, indicating that the ternary method incurs negligible performance degradation compared to the full-precision method.
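The text does not spell out the quantizer used for the ternary WGAN. A common ternary weight scheme (our assumption, not necessarily the paper's: threshold by a fraction of the maximum magnitude, then scale surviving weights by their mean magnitude) could be sketched as:

```python
import numpy as np

def ternarize(w, thresh_ratio=0.05):
    # Quantize weights to {-a, 0, +a}: zero out small-magnitude weights,
    # keep the sign of the rest, and use the mean kept magnitude as scale a.
    # thresh_ratio is a hypothetical hyperparameter of this sketch.
    t = thresh_ratio * np.max(np.abs(w))
    mask = np.abs(w) > t
    a = np.abs(w[mask]).mean() if mask.any() else 0.0
    return a * np.sign(w) * mask
```

Storing only the sign pattern and one scale per tensor is what yields the memory savings, at the cost of slower convergence noted above.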
### D.3 Additional Experiment Settings
This subsection features additional experiment setups, which are not
considered as primary use cases for the proposed _Barycentric fast adaptation_
, but might provide useful insights regarding the algorithm.
The impact of Wasserstein ball radii. To demonstrate the impact of the Wasserstein ball radii, we design an experiment with different radius values in the fast adaptation stage. The CIFAR100 dataset is split equally across 2 edge nodes, and an offline barycenter is computed with equal Wasserstein ball radii. We then train 3 different models for fast adaptation with varying weights $\lambda_{k}=\nicefrac{{1}}{{\eta_{k}}}$. As noted in Section 1, the radius $\eta_{k}$ represents the relevance (hence utility) of the knowledge transfer: the smaller it is, the more informative the corresponding Wasserstein ball. As illustrated in Figure 15(a), the performance of _barycentric fast adaptation_ improves as the weight $\lambda_{k}$ increases, because the knowledge transfer from the offline barycenter is more informative. Consequently, fast adaptation benefits more from the coalesced model, which mitigates catastrophic forgetting and leads to better image quality.
Pre-training a WGAN at the target edge node. In this experiment, we explore the effects of using a pre-trained WGAN model, trained on the local samples at the target edge node, instead of using those samples directly as in the proposed barycentric fast adaptation phase. Specifically, the CIFAR100 dataset is split into 2 equal-size subsets, each placed on one of two edge nodes, from which an offline barycenter model is trained. In addition, another WGAN model is pre-trained on the local samples at the target edge node, as in _Edge-Only_. Subsequently, model fusion is applied using the offline barycenter model and the pre-trained WGAN model at the target edge node. Figure 15(b) shows that this approach performs worse than the proposed barycentric fast adaptation.
Disjoint classes at the target edge node. In this experiment, we investigate the performance degradation of fast adaptation when the datasets at the source edge nodes and at the target edge node share no classes. To this end, two disjoint subsets of CIFAR100, with $50$ and $40$ classes, are placed on 2 edge nodes, from which an offline barycenter is trained. A subset of samples from the remaining $10$ classes is placed on the target edge node. Figure 15(d) shows the performance benefit of _barycentric fast adaptation_ over _Edge-Only_. As expected, _barycentric fast adaptation_ with disjoint classes yields less knowledge transfer from offline training to fast adaptation (though the classes still share common features), but it still performs better than its _Edge-Only_ counterpart.
The impact of sample sizes. Next, we explore whether the offline barycenter model benefits fast adaptation when all edge nodes possess the same dataset classes but different sample sizes. For this purpose, 250, 200 and 50 disjoint samples per class are drawn from CIFAR100 and placed at the two edge nodes and the target node, respectively. Note that the offline barycenter is now simply a barycenter of two close empirical distributions that share the same underlying distribution, so this setup is more akin to transfer learning than to edge learning. Nonetheless, _barycentric fast adaptation_ utilizes the additional samples from offline training, in the same spirit as _transfer learning_ , and improves the FID score compared to _Edge-Only_ , which has access to only 5000 samples (Figure 15(c)).
### D.4 Additional Synthetic Images
In this section, we present more synthetic images generated using the _Edge-Only, transferring GANs, barycentric fast adaptation_ and _ternarized barycentric fast adaptation_ techniques. Figures 16, 17 and 18 illustrate 100 additional images generated by _barycentric fast adaptation, transferring GANs_ and _ternarized barycentric fast adaptation_ , respectively. For barycentric fast adaptation and transferring GANs, the synthetic images are collected at iteration 1000, since both techniques attain a good FID score in the early stages of training. However, transferring GANs suffers from catastrophic forgetting in later stages of training, while barycentric fast adaptation largely prevents it, generating high-quality synthetic images even in later stages. We collected synthetic images from ternary barycentric fast adaptation at iteration 5000 since, as expected, this technique takes longer to converge to a good generative model. However, it saves significant memory compared to full-precision barycentric fast adaptation, at the expense of negligible performance degradation.
Finally, Figures 19 and 20 show images generated using Edge-Only at iterations 5000 and 90000, respectively. As can be observed in Figure 19, Edge-Only has not yet converged to a good GAN model at iteration 5000. The image quality at iteration 90000 in Figure 20 is significantly better, since Edge-Only has converged to the empirical distribution at Node 0, but it is still not as good as that generated by barycentric fast adaptation.
Figure 16: Image samples generated at the 1000th iteration using barycentric fast adaptation on the CIFAR10 dataset.
Figure 17: Image samples generated at the 1000th iteration using Transferring GANs on the CIFAR10 dataset.
Figure 18: Image samples generated at the 5000th iteration using ternarized barycentric fast adaptation on the CIFAR10 dataset.
Figure 19: Image samples generated at the 5000th iteration using Edge-Only on the CIFAR10 dataset.
Figure 20: Image samples generated at the 90000th iteration using Edge-Only on the CIFAR10 dataset.
# Where does the Stimulus go? Deep Generative Model for Commercial Banking
Deposits
Ni Zhan
Carnegie Mellon University
<EMAIL_ADDRESS>
###### Abstract
This paper examines deposits of individuals ("retail") and large companies ("wholesale") in the U.S. banking industry, and how these deposit types are impacted by macroeconomic factors such as quantitative easing (QE). Actual data for deposits by holder are unavailable. We use a dataset of banks’ financial information and a probabilistic generative model to predict the industry retail-wholesale deposit split from 2000 to 2020. Our model assumes account balances arise from separate retail and wholesale lognormal distributions, and fits the distribution parameters by minimizing the error between actual bank metrics and metrics simulated via the model’s generative process. We then use time-series regression to forward-predict retail-wholesale deposits as a function of loans, retail loans, and reserve balances at Fed banks. We find that an increase in reserves (representing QE) increases wholesale but not retail deposits, and that an increase in loans increases both wholesale and retail deposits evenly. The result shows that QE following the 2008 financial crisis benefited large companies more than average individuals, a relevant finding for economic decision making. In addition, this work benefits bank management strategy by providing forecasting capability for retail-wholesale deposits.
## 1 Introduction
The Federal Reserve (Fed) has used quantitative easing (QE) as a response to economic slowdowns, including the 2008 financial crisis and, recently, the Covid-19 crisis. In QE, the Fed purchases US bonds and other assets to increase the money supply, improve liquidity, and encourage economic activity. The initial response to the Covid-19 crisis, the Cares Act, injected $2.2 trillion into the US economy and was followed by a rapid, large increase in commercial banking deposits, which are money held at banks and the predominant part of the macroeconomic M1 and M2 money supply measures. The scale and swiftness of this deposit increase (19% from March to May 2020) are unprecedented in the past 20 years [13]. With large amounts of QE and a growing money supply, we would like to examine the breakdown of where the stimulus landed. In particular, this work focuses on determining how macroeconomic factors, including QE and loans, impact the financial position of large corporations and individuals in terms of their deposits at depository institutions. We focus on deposits because they are a large part of the money supply, and the perspective is useful for bank balance-sheet management and for understanding where stimulus landed in the macroeconomy.
We classify deposits into two types: retail and wholesale. A deposit is considered "retail" when the depositor is an individual and "wholesale" when the depositor is a large company. Large companies (for example Apple, Walmart) have bank accounts at certain banks, but usually only at large banks. From a bank’s perspective, deposits are balance-sheet liabilities, and retail and wholesale deposits have different characteristics, such as liquidity outflow, risk exposure to interest rates, etc. [3]. This work aids the management of banks’ balance-sheet risks by increasing understanding of how macroeconomic factors, including QE, separately impact retail and wholesale deposits. From an economic policy perspective, this work examines the effect of actions such as QE and increased loan programs on the deposits of individuals and corporations.
One of the main challenges is the lack of direct data for retail and wholesale deposits across the industry. Banks are unlikely to volunteer information about their client mix, and available data report total deposits irrespective of whether an individual or a corporation is the holder. Therefore this work has two main parts: first estimating an industry retail-wholesale deposit split, and then predicting the retail-wholesale deposits’ temporal changes based on macroeconomic factors. In the first part, we use the FDIC Statistics on Depository Institutions (SDI) dataset in an unsupervised machine learning problem. The SDI data contain financial and demographic information about each depository institution, and we build a probabilistic generative model to infer retail-wholesale deposits from an institution’s data. In the second part, we build a time-series regression to predict industry retail-wholesale deposits using macroeconomic factors as input. To the authors’ knowledge, this is the first work to estimate industry retail-wholesale deposits and examine their macroeconomic drivers.
## 2 Related Work
Theoretically, in a closed economy, QE money will become apparent somewhere in the system, an idea known as flow-of-funds [2, 17]. Related work created a flow-of-funds simulation between banks, government, Fed, households, and firms [4], and our work also examines the impact of these sectors on each other, via an alternative method. Nagurney et al. created a network flow-of-funds model and solved it as an optimization problem in a case study with the Fed balance sheet [24]. These papers show the validity of, and opportunity in, examining Fed and bank balance-sheet interactions, and our work also takes advantage of these theoretical underpinnings.
In addition, there is significant interest in the impact of QE on the banking system, with studies of recent QE in the U.S. [5, 26], the Euro area [18], England [20], and Japan [23]. Matousek et al. used vector autoregression (VAR) and found deposit growth after a QE shock in large banks with non-performing loans [23], and we likewise find deposit growth after QE. Other studies focused predominantly on the impact of QE on bank lending [18, 20] and bond prices [6]. Our work is unique in examining the impact of QE on retail-wholesale deposits.
Some papers have examined bank types and retail-wholesale banks, although they defined retail-wholesale differently than we do [16, 9]. Gertler et al. studied wholesale banks and their role in the 2008 financial crisis [16]. They defined a wholesale bank as highly leveraged and funded by short-term debt, such as overnight repo, roughly corresponding to "shadow banks" such as Lehman Brothers. Craig et al. examined liabilities structure in relation to competition among banks, and categorized liabilities into wholesale funding and deposits [9]. The difference is that our work categorizes deposits by wholesale-retail, and does not examine liabilities other than deposits. Some papers clustered and differentiated banks using the FDIC dataset [8, 1], similarly to this work. Cohen et al. examined market share and the impact of product differentiation on bank competition and profitability, and classified banks into multimarket, community, and thrifts [8]. Adams et al. investigated market segmentation and substitution between thrift and multimarket banks, with implications for bank merger policy [1].
Other works that use deep learning for banking are summarized in review papers [19, 22]. In macroeconomics applications, Sevim et al. used neural networks to predict currency crises [27]. Other works focused on modeling banking default risk, credit risk, and liquidity risk [19, 22]. Banking risk management is an active area of interest [9, 3], and our work also contributes to this topic.
Finally the generative model used in this work is inspired by variational
autoencoders [21, 10], because our model also simultaneously draws samples
from probability distributions and incorporates the samples into the loss
function that trains the distributions.
## 3 Methods
### 3.1 Dataset
We used the FDIC SDI dataset, which contains financial information about each FDIC-insured institution [12]. The data are available in SQL-style structure as CSV files, at quarterly frequency from 1993. (We use YYYY-Q# to indicate end of quarter throughout.) There were around 5,300 institutions in 2019 and 10,100 institutions in 2000. Institutions include commercial banks, thrifts (such as the former Washington Mutual), and savings banks. The FDIC classifies institutions by the field "bank charter class" into six classes based on commercial, savings, thrift, and federal or state charter. We compared the deposits reported by SDI with the H6 [13] and H8 [14] datasets published by the Fed. H8 large commercial aligns with FDIC commercial banks with national charter (bank class "N") to within 0.4% of domestic deposits from 2004 onwards. H6 commercial aligns with FDIC bank classes N, NM, and SM to within 1.1% of domestic deposits from 2008 onwards.
To provide additional intuition on retail-wholesale banks, we consider some
examples. The largest banks JPMorgan Chase, Bank of America, Citigroup, and
Wells Fargo have retail and wholesale deposits generally following their lines of business. For example, Bank of America’s quarterly earnings supplement separates deposits by line of business (LoB): Consumer Banking, Global Wealth and Investment Management, Global Banking, and Global Markets. From the description of Bank of America’s LoBs, we can naively consider Consumer
Banking as retail, Global Banking and Global Markets as wholesale, and Global
Wealth and Investment Management as mix of retail-wholesale. However,
obtaining the data for banks along LoBs is limited (depending on if the bank
provides the information) and tedious (as it involves scraping quarterly
PDFs). We also believe that some banks (Bank of New York Mellon, State Street)
are purely wholesale, and many small banks serving local regions are purely
retail.
The data on each bank includes balance sheet items, amounts for different
types of loans, deposits in checking and savings accounts, demographic information such as number of offices, and many other items. Because we want to estimate retail and wholesale deposits, which are unknown, we used data
which intuitively may differentiate banks as having more wholesale or retail
deposits. These useful data fields, which became inputs to our model, are
described in Table 1 for 2019-Q4 snapshot. Banks with less than $4.5M deposits
or fewer than 250 accounts were dropped from subsequent analysis, and these
banks had negligible effect on industry results. In Table 1, "percent small
amount" indicates percentage of deposits in accounts less than the FDIC
insurance limit ($100,000 before 2010 and $250,000 after 2010). "Percent small
accounts" indicates percentage of accounts smaller than the FDIC limit. To
provide intuition, the first three features in Table 1 roughly measure bank
size, and banks with large values for these features are more likely to have
wholesale deposits. We also note the distributions of these features have very
fat tails to the right, from examination of the log-values, particularly the
means and maxes. For the last three features in Table 1, we expect banks with
larger values of these features to have larger composition of retail deposits,
because small account balances are more likely retail and retail-dominated
banks likely have more retail loans. Note that most banks have percent of
small accounts very close to 98%, while the percent of small amounts is much
lower with mean 65%. This indicates that account balances within a bank are
distributed with fat tails on the right as well.
Table 1: Bank features and statistics for 5113 FDIC 2019-Q4 banks. Logs of dollar values were taken in thousands of dollars. Means and StdDev w.r.t. logs are parameters of a log-normal approximating the distribution of banks. Retail loans were calculated as the sum of residential construction loans, residential real estate loans, and loans to individuals. Deposits and accounts have fat tails on the right. A larger % small amount should indicate a retail-dominated bank.

| | Domestic deposits | | Deposits per office | | Total accounts | | % small amount | % small accounts | % retail loans |
|---|---|---|---|---|---|---|---|---|---|
| | $B | Log | $B | Log | $\times 10^{5}$ | Log | | | |
| Mean | 2.6 | 12.4 | 0.27 | 10.9 | 1.1 | 9.1 | 65 | 98 | 41 |
| StdDev | - | 1.5 | - | 0.9 | - | 1.4 | 15 | 3 | 22 |
| Min | 0.006 | 8.7 | 0.002 | 7.7 | 0.003 | 5.5 | 0.1 | 46 | 0 |
| Max | 1400 | 21.1 | 123 | 18.6 | 980 | 18.4 | 100 | 100 | 100 |
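The preprocessing described above (the small-bank filter, log-transforms of dollar features, and fractions instead of percentages, as used in Section 3.2) can be sketched as follows; the column names are our assumptions, not the actual SDI field names:

```python
import numpy as np
import pandas as pd

def prepare_features(df):
    """Build model inputs from an SDI-style table (column names are ours).

    Dollar amounts are in thousands, so banks below $4.5M in deposits or
    with fewer than 250 accounts are dropped, as described in the text.
    """
    df = df[(df["deposits"] >= 4500) & (df["accounts"] >= 250)].copy()
    feats = pd.DataFrame({
        "log_deposits": np.log(df["deposits"]),
        "log_dep_per_office": np.log(df["deposits"] / df["offices"]),
        "log_accounts": np.log(df["accounts"]),
        "frac_small_amount": df["pct_small_amount"] / 100.0,
        "frac_small_accounts": df["pct_small_accounts"] / 100.0,
        "frac_retail_loans": df["pct_retail_loans"] / 100.0,
    })
    # Normalize the log features before feeding them to the NNs
    logs = ["log_deposits", "log_dep_per_office", "log_accounts"]
    feats[logs] = (feats[logs] - feats[logs].mean()) / feats[logs].std()
    return feats
```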
### 3.2 Deep Generative Model (DGM)
With the available data, we aim to estimate industry retail deposits. The
problem is unsupervised because we do not have data for retail deposit
fraction. We can think of retail deposit fraction as a hidden variable to be
estimated from observed bank data. However, using traditional methods such as a Hidden Markov Model or a Gaussian Mixture Model is difficult because each bank is not discretely retail or wholesale. Therefore we created a generative model customized for this problem.
We consider the total deposits at a bank as the sum of account balances. If we
knew the balance of each account, we would have the distribution of account
balances. We do not have the complete distribution, however we have some
information about the distribution of accounts at each bank from the SDI
dataset, namely: number of small accounts, deposits in small accounts, total
number of accounts, and total deposits. Intuitively, knowing the distribution
of accounts helps determine retail-wholesale deposits because large accounts
are more likely to be wholesale. In addition, each account is either retail or
wholesale. We build this intuition into a probabilistic generative model.
Our model assumes account balances arise from two separate distributions:
retail and wholesale, and we assume lognormal distributions for both.
Lognormals work well for income data [7] and seem reasonable given our fat-tailed data. Each bank $b$ has five parameters at a point in time $t$:
$p_{b,t}$ fraction of retail accounts, $\mu_{b,t,\text{ret}}$,
$\sigma_{b,t,\text{ret}}$, $\mu_{b,t,\text{ws}}$, $\sigma_{b,t,\text{ws}}$
representing means and standard deviations of retail and wholesale
distributions, respectively. Bank features in array form [#banks, #timesteps,
#features] are inputs to neural networks (NNs) which output the five
parameters. The NNs provide some generalization by mapping banks with similar
features to similar parameters, and we use NNs because the functional form
between bank features and parameters is unknown. The NNs can also be
regularized with, for example, L1 or L2 weight regularization, although we did
not do this here. We then sample account balances from each parameterized
distribution, i.e.
$s_{b,t,\text{ret}}\sim\text{lognormal}(\mu_{b,t,\text{ret}},\sigma_{b,t,\text{ret}}^{2})$
and
$s_{b,t,\text{ws}}\sim\text{lognormal}(\mu_{b,t,\text{ws}},\sigma_{b,t,\text{ws}}^{2})$.
The objective we minimize includes error terms between observed data
$\mathbf{x}$ and calculated metrics from the samples $\mathbf{v}$. The
included observed data are total deposits, number of small accounts, and
deposits in small accounts, denoted by set $\mathcal{I}$.
$\mathcal{L}_{\text{samples}}=\frac{1}{n_{t}\cdot
n_{b}}\sum_{t}^{n_{t}}\sum_{b}^{n_{b}}\sum_{\mathcal{I}}(\mathbf{x}_{b,t,i}-\mathbf{v}_{b,t,i})^{2}$
(1)
For the different metrics, $\mathbf{v}$ was calculated as follows.
$\mathbf{v}_{b,t,\text{ total
deposits}}=\frac{n_{\text{accounts},b,t}}{n_{\text{samples}}}[p_{b,t}\sum
s_{b,t,\text{ret}}+(1-p_{b,t})\sum s_{b,t,\text{ws}}]$ (2)
$\mathbf{v}_{b,t,\text{ deposits in small
accounts}}=\frac{n_{\text{accounts},b,t}}{n_{\text{samples}}}[p_{b,t}\sum_{\mathcal{S}:s_{b,t,\text{ret}}<l}s_{b,t,\text{ret}}+(1-p_{b,t})\sum_{\mathcal{S}:s_{b,t,\text{ws}}<l}s_{b,t,\text{ws}}]$
(3) $\mathbf{v}_{b,t,\text{ number of small
accounts}}=p_{b,t}P(s_{b,t,\text{ret}}<l)+(1-p_{b,t})P(s_{b,t,\text{ws}}<l)$
(4)
where probabilities $P$ are given from the CDF, and $l$ is FDIC insurance
limit.
Based on $\mathcal{L}_{\text{samples}}$, we see the model fits parameters by
minimizing errors between actual bank metrics and simulated metrics from the
generative process. The model’s generative process is specifically formulated
to allow estimation of retail and wholesale deposits. To perform inference, we
calculate retail deposits $\mathbf{r}$ using
$\mathbf{r}_{b,t}=\frac{n_{\text{accounts},b,t}}{n_{\text{samples}}}p_{b,t}\sum
s_{b,t,\text{ret}}\hskip 5.69046pt.$ (5)
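A minimal numerical sketch of this generative step (Eqs. 2–5) for a single bank and timestep; the variable names are ours, and the small-account fraction of Eq. 4 is estimated empirically from the samples rather than from the lognormal CDF:

```python
import numpy as np

def simulated_metrics(p, mu_ret, sig_ret, mu_ws, sig_ws,
                      n_accounts, limit, n_samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    # Sample account balances from the retail and wholesale lognormals
    s_ret = rng.lognormal(mu_ret, sig_ret, n_samples)
    s_ws = rng.lognormal(mu_ws, sig_ws, n_samples)
    scale = n_accounts / n_samples
    total = scale * (p * s_ret.sum() + (1 - p) * s_ws.sum())          # Eq. 2
    small_amount = scale * (p * s_ret[s_ret < limit].sum()
                            + (1 - p) * s_ws[s_ws < limit].sum())     # Eq. 3
    # Fraction of small accounts, estimated empirically (cf. Eq. 4)
    frac_small = (p * (s_ret < limit).mean()
                  + (1 - p) * (s_ws < limit).mean())
    retail = scale * p * s_ret.sum()                                  # Eq. 5
    return total, small_amount, frac_small, retail
```

During training, the simulated metrics are compared against the observed bank data in $\mathcal{L}_{\text{samples}}$ (Eq. 1), while Eq. 5 is used at inference time to read off retail deposits.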
Because we assume the wholesale distribution is distinct from, and has a larger mean than, the retail distribution, we constrain the distribution parameters, or "give a prior", as shown in Eq. 6.
$\mathcal{L}_{\text{constrain}}=\lambda\sum_{t}^{n_{t}}\sum_{b}^{n_{b}}[(\mu_{b,t,\text{ret}}-{\mu_{0}}_{t,\text{ret}})^{2}+(\mu_{b,t,\text{ws}}-{\mu_{0}}_{t,\text{ws}})^{2}+(\sigma_{b,t,\text{ret}}-{\sigma_{0}}_{t,\text{ret}})^{2}+(\sigma_{b,t,\text{ws}}-{\sigma_{0}}_{t,\text{ws}})^{2}]$
(6)
where $\lambda$ is a hyperparameter balancing $\mathcal{L}_{\text{constrain}}$
and $\mathcal{L}_{\text{samples}}$. Overall, we minimize
$\mathcal{L}_{\text{constrain}}+\mathcal{L}_{\text{samples}}$. In Eq. 6,
$\mu_{0}$ and $\sigma_{0}$ are our guesses, which vary over timesteps but do
not vary over banks. In Section 4.1, we validate the assumption that retail
and wholesale distributions are separate.
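The constraint term of Eq. (6) is a plain quadratic penalty tying per-bank parameters to the time-varying priors; a minimal sketch, with dictionary keys that are illustrative rather than taken from the paper's code:

```python
import numpy as np

def loss_constrain(params, priors, lam=0.05):
    """Quadratic penalty of Eq. (6). `params` holds arrays of shape
    [n_banks, n_t] (per-bank, per-timestep parameters); `priors` holds
    arrays of shape [n_t] (time-varying, bank-independent guesses)."""
    total = 0.0
    for key in ("mu_ret", "mu_ws", "sigma_ret", "sigma_ws"):
        # broadcast the [n_t] prior across the bank dimension
        total += np.sum((params[key] - priors[key][None, :]) ** 2)
    return lam * total
```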
The model was implemented in TensorFlow, and we minimized the loss using
stochastic gradient descent. We used one neural network for the retail
parameters, one for the wholesale parameters, and one for the fraction of
retail accounts $p_{b,t}$. Each network had one hidden layer with 50 ReLU
units, and the retail and wholesale networks had separate output layers for
the mean and standard deviation. The network inputs were the features in
Table 1; specifically, we used normalized log-values and fractional values
instead of percentages. Outputs for standard deviations were exponentiated
to ensure positivity, and the output for $p_{b,t}$ was passed through a
sigmoid to constrain it to $(0,1)$. The model was
trained on data from 2000-Q1 to 2019-Q4, with priors for
${\mu_{0}}_{t,\text{ret}}$ linearly spaced from 1.2 to 1.5,
${\sigma_{0}}_{t,\text{ret}}$ from 1.3 to 1.5, ${\mu_{0}}_{t,\text{ws}}$ from
4.5 to 5.0, ${\sigma_{0}}_{t,\text{ws}}$ from 3.0 to 3.5, and $\lambda=0.05$.
The priors were chosen based on the results of Section 4.1. We drew 10,000
samples from each of the retail and wholesale distributions during training
and inference, and averaged 10 calculations of Eq. 5 for our final inference
predictions to reduce the variability of the sampling process.
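The output transformations described above (exponentiation for standard deviations, sigmoid for $p_{b,t}$) can be sketched as a minimal numpy forward pass; the paper's implementation is in TensorFlow, and all names below are illustrative:

```python
import numpy as np

def init_net(n_features, n_hidden=50, seed=0):
    """Random weights for one hidden ReLU layer plus two linear heads."""
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(0.0, 0.1, (n_features, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W_mu": rng.normal(0.0, 0.1, (n_hidden, 1)), "b_mu": np.zeros(1),
        "W_sig": rng.normal(0.0, 0.1, (n_hidden, 1)), "b_sig": np.zeros(1),
    }

def forward(net, x):
    """Return (mu, sigma) for the lognormal balance distribution."""
    h = np.maximum(x @ net["W1"] + net["b1"], 0.0)   # 50 ReLU units
    mu = h @ net["W_mu"] + net["b_mu"]               # mean head (unbounded)
    sigma = np.exp(h @ net["W_sig"] + net["b_sig"])  # exponentiation keeps sigma > 0
    return mu, sigma

def retail_fraction(net, x):
    """Sigmoid output keeps the retail-account fraction p in (0, 1).
    (In the paper this is a separate network; we reuse a head for brevity.)"""
    h = np.maximum(x @ net["W1"] + net["b1"], 0.0)
    return 1.0 / (1.0 + np.exp(-(h @ net["W_mu"] + net["b_mu"])))
```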
### 3.3 Benchmark
We compare our retail-wholesale estimation against a benchmark based on the
FDIC Branch Office Deposits dataset, which reports deposits in each branch of
each institution and has been published annually since 1994 [11]. In the
benchmark, deposits in branches holding less than $500 million are considered
entirely retail, and deposits in branches holding more than $500 million are
considered wholesale.
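The threshold rule reduces to a one-line split; a sketch assuming, as in FDIC data, that deposits are reported in thousands of dollars (so $500 million corresponds to 500,000):

```python
import numpy as np

def threshold_benchmark(branch_deposits, threshold=500_000.0):
    """Classify branch-level deposits (in thousands of dollars) as retail
    if the branch holds less than the threshold, wholesale otherwise.
    Returns (retail_total, wholesale_total). Illustrative sketch only."""
    d = np.asarray(branch_deposits, dtype=float)
    return d[d < threshold].sum(), d[d >= threshold].sum()
```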
### 3.4 Time Series Regression
We use the retail-wholesale deposits from the DGM as the ground truth, and
predict their future movements based on macroeconomic data. If the time-series
regression accurately predicts movements, then the model inputs are likely
drivers, although this shows association but not causation. The initial
macroeconomic inputs considered were reserves, Treasury General Account (TGA),
loans at commercial banks, retail loans at commercial banks, Fed assets, and
currency. After trying different combinations of features, we used reserves,
loans, and retail loans because they gave good predictions. Specifically, we
used reserves (WRBWFRBL) and loans (TOTLLNSA) from FRED Economic Data [15].
Reserves are the balances banks hold on deposit at the Fed. During QE, the
Fed purchases assets with newly created bank reserves, so the increase in
reserves corresponds to the degree of QE. Retail loans came from the SDI
dataset as indicated in Table 1, although similar retail loan data are
released weekly by the Fed in H8 [14].
Our model was a linear regression taking four lagged (backward) periods of
data as input. We found that using fewer than four lags decreased prediction
accuracy, while more than four did not significantly improve it. We used
quarter-over-quarter differences (q/q diff) for the model inputs and outputs
to achieve better numeric behavior and stationarity. We used 55 prediction
cases from 2003 to 2017 for training and 10 data points from 2017 to 2020
for testing. As an example of a model prediction, we define the q/q
diff for 6-30-2020 to be level at 6-30-2020 minus level at 3-31-2020. If we
were to predict deposit q/q diff for 6-30-2020, our input would be q/q diffs
of data for 9-30-2019, 12-31-2019, 3-31-2020, and 6-27-2020. We use the most
recent macroeconomic data available closest to the prediction date. Our model
input data are released weekly by the Fed, so we estimate the most recent data
could be three days prior to the prediction date. We used a zero bias (no
intercept) in the linear regression because we wanted to attribute all
changes in deposits to the input features.
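The lag structure and zero-bias fit above can be sketched in a few lines of numpy; `features` and `target` hold q/q differences, and all names are illustrative:

```python
import numpy as np

def make_lagged_design(features, target, n_lags=4):
    """Stack the previous `n_lags` quarters of macro-feature q/q diffs as
    regressors for the next quarter's deposit q/q diff. `features` has
    shape [n_quarters, n_features]; `target` has shape [n_quarters]."""
    X, y = [], []
    for t in range(n_lags, len(target)):
        X.append(np.asarray(features[t - n_lags:t]).ravel())  # lags, flattened
        y.append(target[t])
    return np.array(X), np.array(y)

def fit_no_intercept(X, y):
    """Least-squares fit through the origin (zero bias), so all predicted
    deposit changes are attributed to the input features."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```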
## 4 Results
### 4.1 Validation of Retail and Wholesale Account Balance Distributions
We want to validate the assumption that retail and wholesale account balances
arise from separate distributions. We fit two subsets of the banks (one
representing purely retail and one purely wholesale) to a simplified DGM
with only one distribution. The parameters obtained with the retail subset of
banks represent an estimated retail distribution, and parameters obtained with
the wholesale subset represent an estimated wholesale distribution. We would
like to observe a difference in parameters to confirm the assumption that
retail and wholesale account balances can be modeled with two distinct
distributions.
For the retail subset, we use thrift institutions (FDIC "SA" and "SB" bank
class), and for the wholesale subset, we use State Street and Bank of New York
Mellon (because we were unaware of other banks that are purely wholesale), and
we perform the comparison for 2019-Q4. The means and standard deviations of
the thrift institutions are shown in Fig. 1. Most means $\mu_{b,t,\text{ret}}$
are between 1.0 and 2.0. (Deposits are reported in thousands of dollars, so
a mean of 1.0 corresponds to $\$1000\,e^{1.0}$ (or \$2,718), and 2.0 to
$\$1000\,e^{2.0}$ (or \$7,389).) Most standard deviations
$\sigma_{b,t,\text{ret}}$ are below
2.0. For 2019-Q4, the best fit $\mu_{b,t,\text{ws}}$ for State Street and Bank
of New York Mellon were 4.5 ($54,500) and 3.5 ($33,000), respectively, and the
$\sigma_{b,t,\text{ws}}$ were 2.3 and 2.5, respectively. The distribution
representing wholesale is shifted to the right of the retail distribution,
which matches our assumption and intuition. We note that a small fraction of
the thrift institutions had larger means, and it is possible that these were
purely wholesale banks. For example, we found a thrift institution called
NexBank which primarily serves institutional clients [25] and had mean 3.2 in
our model.
A deposit from a large company and one from a high-net-worth individual may
look the same to our model. However, because we use lognormals for both
distributions, sampling the retail distribution will still produce some
large account balances, which we attribute to high-net-worth individuals.
Figure 1: Means and standard deviations of lognormal distribution of account
balances for thrift institutions in 2019-Q4
### 4.2 DGM Prediction
We use the banks with at least one branch holding over $500 million in
deposits in 2019 as the training dataset, and fit the DGM neural network
parameters with data
from 2000-Q1 to 2019-Q4. We chose the training dataset this way because these
banks are likely to have a mix of wholesale and retail deposits. The neural
network inputs are bank features in array form [#banks, #timesteps,
#features], and the targets are metrics in the form [#banks, #timesteps,
#metrics], with the metrics described in Section 3.2. After minimizing the
objective, we perform
inference of retail deposits on all banks in FDIC SDI.
We examined the predicted retail fraction for the larger individual banks.
Selected retail percentages: Wells Fargo (55%), JPMorgan Chase (50%), Bank of
America (64%), Citi (35%), Bank of New York Mellon (2%), Ally (98%), State
Street (2%), Morgan Stanley (38%), Goldman Sachs (73%). Overall the
predictions seem reasonable, although Morgan Stanley and Goldman Sachs seem
high for investment banks. The predictions are difficult to
verify for most banks. In addition, the predictions seem to have high
variance, and we took averages of 10 trials to smooth them. Further work can
investigate how changes in training and model set-up affect prediction
variance.
Fig. 2 shows the retail-wholesale deposit prediction for 2000 to 2020-Q1. Fig.
2(a) shows the retail fraction over time for our model prediction vs. the 500M
threshold benchmark. Both predictions show a downtrend in retail deposit
fraction dropping from 70% in 2000 to around 40% in 2020. The trend may
indicate that large companies’ cash holdings grew faster than individuals’ for
the past twenty years. We note that the training dataset may have influenced
the model prediction toward a trend similar to that of the 500M threshold
method; however, we did include all FDIC banks in model inference, and the
sensitivity of the model to the training data can be explored in future work.
We also assert that our model prediction improves upon the benchmark because
our model gives four times as many predictions, estimated with a more
precise, data-driven method. Our improved retail-wholesale estimation allows
quantification of the response to macroeconomic factors, which is not
possible with the rough yearly estimates of the benchmark.
Fig. 2(b) shows the same model prediction in dollar amounts. Total industry
deposits are readily available, and our model contributes the split between
retail and wholesale. The last timestep, 2020-Q1, was not in
the training dataset, and our model predicted a large increase in wholesale
rather than retail deposits. We believe the model accurately captured what
actually occurred. Large companies were able to quickly respond to the shock
of Covid-19 onset in the U.S. in March. Companies were able to draw loans from
their agreements with banks, and the loans became deposits in the commercial
banking system. On the other hand, it is likely that individuals did not have
a large financial response to Covid-19 in March 2020.
(a) DGM prediction vs. 500M threshold benchmark
(b) Nominal industry deposits 2000-2020
Figure 2: Deep Generative Model (DGM) predictions: fraction retail and
nominal. DGM improves upon benchmark, and DGM predicts large jump in wholesale
deposits for 2020-Q1.
### 4.3 Time Series Model
With the nominal prediction of retail-wholesale industry deposits, we build a
time-series model with macroeconomic inputs. We take the DGM prediction as the
ground truth for the time-series model, with model formulation as described in
Section 3.4. The model predicts one-step-forward quarter-over-quarter
difference, and we compare the predictions against ground truth for retail and
wholesale deposits in Fig. 3. For both retail and wholesale deposits, the
magnitude, variance, and direction of movements match the ground truth. This
is true for train and test data, indicating that the model does not overfit.
We were particularly interested in model performance for the test point
2020-Q1, and the model captures much of the jump in deposits at that time.
(a) Retail deposits
(b) Wholesale deposits
Figure 3: Time series model forecast of ground truth deposits, matches well in
direction and magnitude
We now quantify the impact of macroeconomic inputs on the change in retail and
wholesale deposits. The linear regression predicts forward-in-time changes and
purposely has a zero bias so that deposit predictions are wholly accounted for
by model input weights. We proxy the macro-factor impacts by summing the
model weights over the four lag timesteps; the impacts are shown in Table 2.
To interpret them, a $1.00 increase in reserves leads to around a $0.60
increase in wholesale deposits and a negligible change in retail deposits.
The model also predicts an approximately even change in retail and wholesale
deposits in response to changes in loans. Because the increase in reserves
corresponds to the degree of QE, our results (namely the combined results
from the DGM and time-series models) predict that QE increases wholesale
deposits more strongly than retail deposits. This result matches our
intuition that QE money is more easily captured by entities with already large
funds and access to capital, such as large companies.
We examined the historical time periods when wholesale and retail deposit
growth diverged. Over the past 15 years, wholesale growth has generally been
faster than retail growth. Fig. 4(a) highlights in green the time periods
when wholesale deposit growth was abnormally faster than retail. These time
periods were 2009-Q3
to 2012-Q4 and 2014-Q2 to 2015-Q1. Fig. 4(b) highlights the model inputs for
the same time-periods with a 1-year backward lag, because the time-series
model essentially uses the previous year data for prediction. We observe large
reserves growth with low loan growth; the time-periods were QE periods
following the 2008 financial crisis. Our results indicate that QE contributed
to wholesale deposits growing much faster than retail, historically. We note
that the QE time period coincided with an economic recession, which may have
partially caused the low retail deposit growth. However
this does not change the fact that wholesale deposits grew significantly
faster in the same QE/recession time-period. If we have future rounds of
stimulus for Covid, we will likely see the same pattern. The type of economic
stimulus likely also matters. Stimulus that increases loan generation may more
evenly benefit individuals and large companies. Future work can apply causal
analysis to the associations between loans, QE and retail-wholesale deposits
found here.
Table 2: Impact of Macroeconomic Inputs on Deposits

| | Reserves | Total Loans | Retail Loans |
|---|---|---|---|
| Retail | -6% | 53% | -23% |
| Wholesale | 61% | 64% | -23% |
| Total | 54% | 116% | -45% |
(a) Wholesale minus retail year-over-year differences
(b) Reserves and total loans
Figure 4: Green time-periods represent faster than average wholesale deposit
growth compared with retail. The corresponding model-input time periods have
a 1-year backward lag and show a large increase in reserves and
negative-to-low loan growth.
## 5 Conclusion
This work developed the first model of U.S. banking industry deposits split by
holder: individuals and large companies. We used a novel generative model
which fit distributions of account balances per bank based on its financial
data. The machine learning method took advantage of a rich FDIC dataset, which
could be used in future investigations. With the wholesale-retail split, we
showed that QE increases wholesale deposits, while loans increase both
wholesale and retail deposits, an intuitive result consistent with what was
observed following the 2008 financial crisis. The results have direct
implications for economic policy,
namely that QE has greater benefit for large companies and high-net-worth
individuals. QE may increase economic inequality; at the same time other
aspects of QE may backstop economic crises and can be explored in future work.
Other extensions of this work include examining effects of interest rate on
retail-wholesale deposits, and examining macro-factors impact on changes in
market share across small regional banks and the largest banks.
## References
* [1] Robert M. Adams, Kenneth P. Brevoort and Elizabeth K. Kiser “Who Competes With Whom? the Case of Depository Institutions” In _Journal of Industrial Economics_ 55.1, 2007, pp. 141–167 DOI: 10.1111/j.1467-6451.2007.00306.x
* [2] A.D. Bain “Surveys in Applied Economics: Flow of Funds Analysis” In _The Economic Journal_ 83.332, 1973, pp. 1055 DOI: 10.2307/2230842
* [3] “Balance sheet management benchmark survey” In _PriceWaterhouseCoopers_ , 2009 URL: https://www.pwc.com/gx/en/banking-capital-markets/assets/balance-sheet-management-benchmark-survey.pdf
* [4] Alessandro Caiani et al. “Agent Based-Stock Flow Consistent Macroeconomics: Towards a Benchmark Model” In _Journal of Economic Dynamics and Control_ 69, 2016, pp. 375–408 DOI: 10.1016/j.jedc.2016.06.001
* [5] Céline Choulet “QE and bank balance sheets: the American experience” In _Conjoncture_ , 2015 URL: https://economic-research.bnpparibas.com/html/en-US/QE-bank-balance-sheets-American-experience-7/23/2015,25852
* [6] Jens Henrik Eggert Christensen and Signe Krogstrup “A Portfolio Model of Quantitative Easing” In _SSRN Electronic Journal_ , 2016 DOI: 10.2139/ssrn.2777690
* [7] Fabio Clementi and Mauro Gallegati “Pareto’s Law of Income Distribution: Evidence for Germany, the United Kingdom, and the United States”, 2005 URL: https://ideas.repec.org/p/wpa/wuwpmi/0505006.html
* [8] Andrew M. Cohen and Michael J. Mazzeo “Market Structure and Competition Among Retail Depository Institutions” In _Review of Economics and Statistics_ 89.1, 2007, pp. 60–74 DOI: 10.1162/rest.89.1.60
* [9] Ben R. Craig and Valeriya Dinger “Deposit Market Competition, Wholesale Funding, and Bank Risk” In _Journal of Banking & Finance_ 37.9, 2013, pp. 3605–3622 DOI: 10.1016/j.jbankfin.2013.05.010
* [10] Peter Dayan, Geoffrey E Hinton, Radford M Neal and Richard S Zemel “The helmholtz machine” In _Neural computation_ 7.5 MIT Press, 1995, pp. 889–904
* [11] “Federal Deposit Insurance Corporation Branch Office Deposits” In _Federal Deposit Insurance Corporation_ , 2020 URL: https://www7.fdic.gov/idasp/warp_download_all.asp
* [12] “Federal Deposit Insurance Corporation Statistics of Depository Institutions” In _Federal Deposit Insurance Corporation_ , 2020 URL: https://www7.fdic.gov/sdi/download_large_list_outside.asp
* [13] “Federal Reserve H.6 Money Stock Measures” In _Federal Reserve_ , 2020 URL: http://www.federalreserve.gov/releases/h6/
* [14] “Federal Reserve H.8 Assets and Liabilities of Commercial Banks in the United States” In _Federal Reserve_ , 2020 URL: http://www.federalreserve.gov/releases/h8/
* [15] “FRED Economic Data” In _FRED Economic Data_ , 2020 URL: https://fred.stlouisfed.org/
* [16] M. Gertler, N. Kiyotaki and A. Prestipino “Wholesale Banking and Bank Runs in Macroeconomic Modeling of Financial Crises” In _Handbook of Macroeconomics_ , Handbook of Macroeconomics Elsevier, 2016, pp. 1345–1425 DOI: 10.1016/bs.hesmac.2016.03.009
* [17] Christopher J. Green and Victor Murinde “Flow of Funds: Implications for Research on Financial Sector Development and the Real Economy” In _Journal of International Development_ 15.8, 2003, pp. 1015–1036 DOI: 10.1002/jid.961
* [18] Maximilian Horst and Ulrike Neyer “The Impact of Quantitative Easing on Bank Loan Supply and Monetary Policy Implementation in the Euro Area” In _Review of Economics_ 70.3, 2020, pp. 229–265 DOI: 10.1515/roe-2019-0033
* [19] Jian Huang, Junyi Chai and Stella Cho “Deep Learning in Finance and Banking: a Literature Review and Classification” In _Frontiers of Business Research in China_ 14.1, 2020, pp. 13 DOI: 10.1186/s11782-020-00082-6
* [20] Michael Joyce and Marco Spaltro “Quantitative Easing and Bank Lending: a Panel Data Approach” In _SSRN Electronic Journal_ , 2014 DOI: 10.2139/ssrn.2487793
* [21] Diederik P Kingma and Max Welling “Auto-encoding variational bayes” In _arXiv preprint arXiv:1312.6114_ , 2013
* [22] Martin Leo, Suneel Sharma and K. Maddulety “Machine Learning in Banking Risk Management: a Literature Review” In _Risks_ 7.1, 2019, pp. 29 DOI: 10.3390/risks7010029
* [23] Roman Matousek, Stephanos T. Papadamou, Aleksandar and Nickolaos G. Tzeremes “The Effectiveness of Quantitative Easing: Evidence From Japan” In _Journal of International Money and Finance_ 99, 2019, pp. 102068 DOI: 10.1016/j.jimonfin.2019.102068
* [24] Anna Nagurney and Merritt Hughes “Financial Flow of Funds Networks” In _Networks_ 22.2, 1992, pp. 145–161 DOI: 10.1002/net.3230220203
* [25] “NexBank” In _NexBank_ , 2020 URL: https://www.nexbank.com/
* [26] Alexander Rodnyansky and Olivier M. Darmouni “The Effects of Quantitative Easing on Bank Lending Behavior” In _The Review of Financial Studies_ 30.11, 2017, pp. 3858–3887 DOI: 10.1093/rfs/hhx063
* [27] Cuneyt Sevim et al. “Developing an Early Warning System To Predict Currency Crises” In _European Journal of Operational Research_ 237.3, 2014, pp. 1095–1104 DOI: 10.1016/j.ejor.2014.02.047
# Nucleation rates from small scale atomistic simulations and transition state theory

Kristof M. Bal
<EMAIL_ADDRESS>
Department of Chemistry and NANOlab Center of Excellence, University of Antwerp, Universiteitsplein 1, 2610 Antwerp, Belgium
###### Abstract
The evaluation of nucleation rates from molecular dynamics trajectories is
hampered by the slow nucleation time scale and impact of finite size effects.
Here, we show that accurate nucleation rates can be obtained in a very general
fashion relying only on the free energy barrier, transition state theory
(TST), and a simple dynamical correction for diffusive recrossing. In this
setup, the time scale problem is overcome by using enhanced sampling methods,
in casu metadynamics, whereas the impact of finite size effects can be
naturally circumvented by reconstructing the free energy surface from an
appropriate ensemble. Approximations from classical nucleation theory are
avoided. We demonstrate the accuracy of the approach by calculating
macroscopic rates of droplet nucleation from argon vapor, spanning sixteen
orders of magnitude and in excellent agreement with literature results, all
from simulations of very small (512 atom) systems.
kinetics, free energy barriers, chemical reactions, nucleation, metadynamics
## I Introduction
First order phase transitions are initiated by a nucleation event, in which a
small embryo of a thermodynamically favored phase is formed within a bulk
metastable phase. Nucleation is an inherently difficult process to study. In
principle, the nanoscale dimensions of the critical nucleus make molecular
dynamics (MD) simulations a natural choice to probe the nucleation process.
The rare event nature of critical nucleus formation, which may take seconds or
longer, however puts it well beyond the MD time scale.
The difficulties associated with nucleation simulations are nicely illustrated
by one of the simplest nucleation processes, namely, the formation of a liquid
droplet in argon vapor. Even here, direct MD simulations can only capture
nucleation events at very high supersaturations [1] or in very expensive
massively parallel large-scale calculations. [2] In addition, the use of
computationally efficient small simulation cells introduces significant finite
size artifacts. [3] Moreover, indirect rate calculations based on classical
nucleation theory (CNT) may be in error by several orders of magnitude. [2]
Recently, accelerated molecular dynamics approaches have started to address
the key issues in this field. [4, 5] Slow argon droplet nucleation events can
be observed in direct MD simulations when an external bias potential is
applied. Under certain conditions, it is then possible to quantitatively
correct for the impact of the bias potential on the apparent (shortened)
nucleation time. [6, 7] This way, trajectories corresponding to physical
nucleation times up to $\tau=10^{4}$ s have been sampled. This methodology is
in principle highly generic because, besides the simulation model itself, no
specific mechanistic assumptions are made. In addition, by using concepts from
CNT, estimates of macroscopic nucleation rates were obtained a posteriori by
correcting nucleation times from small-scale simulations. [4] Such accelerated
MD approaches have however not yet been applied to other types of nucleation
problems.
Yet, the enhanced sampling methods on which recent accelerated MD work has
been based have already been broadly applied to different types of phase
transitions in atomistic simulations. Recent examples of such studies include
melting, [8] solid–solid transitions, [9, 10] crystallization from the liquid,
[11, 12, 13, 14, 15, 16] and crystallisation from solution. [17, 18, 19] Such
approaches however do not directly produce nucleation rates, and rather aim to
reconstruct the free energy surface (FES) of the nucleation process.
The FES concept unifies the description of thermodynamics across systems,
avoiding any process-specific theories: an appropriate set of low-dimensional
order parameters is the only required system-dependent information. Calculated
FES for nucleation have therefore primarily been used to investigate the
thermodynamic aspects of phase transitions.
The FES does, however, in principle also encode kinetic information. It is
possible to obtain free energy barriers and, thus, calculate rates using
transition state theory (TST), at least for chemical reactions. [20] Would
this also be possible for nucleation, thus unifying rate calculation within a
single framework? Indeed, dedicated “extraordinary rate theories [21]” for
specific processes tend to share many common aspects and are ultimately
equivalent.
Even though a large portfolio of tools has already been applied to the
calculation of nucleation rates—with recent studies of various processes
having employed seeding approaches, [22, 23] transition path sampling (TPS),
[24] transition interface sampling (TIS), [25, 26] and forward flux sampling
(FFS) [27, 28, 29, 30]—accurate calculation of realistic nucleation rates is a
formidable challenge in general. Computed nucleation rates in any system are
highly sensitive to every methodological aspect or approximation in sometimes
non-obvious ways. [31] Moreover, different approaches to the calculation of
nucleation rates can sometimes disagree quite strongly. [2, 32, 33, 34] Any
new methodological perspective might therefore be valuable to understand such
discrepancies.
In this manuscript, we demonstrate that highly accurate nucleation rates can
be calculated within the generic workflow of a free energy calculation. It is
not necessary to rely on mechanistic assumptions [35] or expression derived
from CNT, [36, 37] nucleation events need not be explicitly sampled from
dynamical trajectories, and it is possible to account for finite size effects
in a natural manner. In order to demonstrate the accuracy of our approach, we
first validate its rate estimates for droplet nucleation from argon vapor in a
small simulation cell over a wide range of supersaturations. We then show how,
using the same small-scale system, this workflow also allows us to calculate
rates free of finite-size errors.
## II Methodology
### II.1 TST rates for droplet nucleation
In order to generate a FES of an arbitrary process, one must first identify at
least one suitable collective variable (CV) $\chi=\chi(\mathbf{R})$ that is a
function of the system coordinates $\mathbf{R}$ and that can distinguish all
states of interest. This CV will also serve as a candidate reaction coordinate
of the process. Recently, we have demonstrated that, whenever such a FES
$F(\chi)$ is available for a process $A\rightarrow B$, its TST rate
$k^{\mathrm{TST}}$ can be calculated unambiguously. [20] For an _arbitrary_
choice of the reaction coordinate $\chi$ one can derive that the total flux
through a dividing surface $\chi=\chi^{*}$ in the configuration space
$\mathbf{R}$ is equal to
$k^{\mathrm{TST}}=\sqrt{\frac{1}{2\pi\beta
m}}\frac{\displaystyle\int\mathrm{d}\mathbf{R}\left|\nabla\chi\right|_{\chi=\chi^{*}}\cdot\delta[\chi^{*}-\chi(\mathbf{R})]\,e^{-\beta
U(\mathbf{R})}}{\displaystyle\int\mathrm{d}\mathbf{R}\,H[\chi^{*}-\chi(\mathbf{R})]\,e^{-\beta
U(\mathbf{R})}},$ (1)
in which $U(\mathbf{R})$ is the potential energy of the system.
$|\nabla\chi|$ is the norm of the gradient of $\chi(\mathbf{R})$ with
respect to all
coordinates $\mathbf{R}$ and serves as a gauge correction to ensure invariance
of the rate with respect to the parametrization of $\chi$.
Here $\beta=(k_{B}T)^{-1}$, with $k_{B}$ the Boltzmann constant, $T$ the
temperature, and $m$ the mass of the nucleating particles. The step function
$H$ and delta
function $\delta$ are used to select configurations belonging to the initial
metastable state ($\chi<\chi^{*}$) and dividing surface ($\chi=\chi^{*}$)
respectively. The dividing surface $\chi=\chi^{*}$ can also be referred to as
the location of the transition state (TS). This expression can be recast in
terms of the free energy surface $F(\chi)$ (or $G(\chi)$):
$k^{\mathrm{TST}}=\frac{\langle|\nabla\chi|\rangle_{\chi=\chi^{*}}}{\sqrt{2\pi\beta
m}}e^{-\beta(F(\chi^{*})-F_{A})},$ (2)
in which $F_{A}$ is the integrated free energy of state $A$, which we
have defined as all configurations for which $\chi<\chi^{*}$:
$F_{A}=-\frac{1}{\beta}\ln\int_{\chi<\chi^{*}}\mathrm{d}\chi\,e^{-\beta
F(\chi)}.$ (3)
We can now rewrite the expression for $k^{\mathrm{TST}}$ to be equivalent to
the well-known Eyring formula:
$k^{\mathrm{TST}}=\frac{1}{h\beta}e^{-\beta\Delta^{\ddagger}F},$ (4)
in which $h$ is the Planck constant. To do this, we must define the free
energy barrier $\Delta^{\ddagger}F$ as:
$\Delta^{\ddagger}F_{A\rightarrow
B}=F(\chi^{*})+\frac{1}{\beta}\ln\frac{\langle|\nabla\chi|\rangle^{-1}_{\chi=\chi^{*}}}{h}\sqrt{\frac{2\pi
m}{\beta}}-F_{A}.$ (5)
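Eqs. (2)-(3) translate directly into a short numerical routine once $F(\chi)$ and $\langle|\nabla\chi|\rangle$ are available on a grid. The following is an illustrative sketch, not the paper's code: it assumes a uniform grid in $\chi$, SI units throughout, and a simple Riemann sum for the basin integral.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def tst_rate(chi, F, grad_norm, m, T):
    """TST rate from a 1D free energy profile F(chi) (in joules), per
    Eqs. (2)-(3): locate the dividing surface chi* at the maximum of F,
    integrate the reactant basin (chi < chi*) to get F_A, and apply the
    flux prefactor. `grad_norm` holds <|grad chi|> on the same grid."""
    beta = 1.0 / (KB * T)
    dchi = chi[1] - chi[0]                      # uniform grid assumed
    i_star = int(np.argmax(F))                  # dividing surface chi = chi*
    # Integrated free energy of the reactant state A, Eq. (3)
    F_A = -np.log(np.sum(np.exp(-beta * F[:i_star])) * dchi) / beta
    prefactor = grad_norm[i_star] / np.sqrt(2.0 * np.pi * beta * m)
    return prefactor * np.exp(-beta * (F[i_star] - F_A))  # Eq. (2), units 1/s
```

Dividing the result by the cell volume then gives the volumetric rate $J^{\mathrm{TST}}$ of Eq. (7).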
As we have argued before, besides ensuring compatibility with the Eyring
expression, this definition of $\Delta^{\ddagger}F$ also has a physical
significance: It measures the probability of generating a configuration
$\chi=\chi^{*}$ from the full ensemble of $A$ states, i.e., states for which
$\chi<\chi^{*}$. [20]
Note that we have made no assumptions whatsoever about the mechanism or nature
of the $A\rightarrow B$ transition, or put any requirements on $\chi$. To be
useful, $\chi$ of course has to be able to properly discriminate states $A$
and $B$, and parametrize an appropriate dividing surface between the two.
In a condensation process state $A$ is the vapor phase $g$, and $B$ is the
liquid $l$. For droplet nucleation, the number of liquid atoms (the ten
Wolde–Frenkel parameter) $n$ [38] has been a common choice. [1, 4, 5] An atom
is considered liquid when it has more than 5 close neighbors. [38]
Thus, we choose $\chi=n$. It must be noted that $n$, as defined in this work,
does not strictly count the number of atoms inside the largest droplet. It is
rather a measure of the number of highly coordinated atoms. Low-coordinated
atoms also make a (small) contribution to $n$, even if they have no direct
neighbors, while atoms at the surface of a droplet may not be fully counted.
This is a consequence of requiring $n$ to be a computationally convenient
continuous function that can be used to induce transitions from gas to liquid,
and back. As a result, $n$ is also not a valid definition of cluster size
within the context of classical nucleation theory because it does not strictly
count the number of atoms inside the largest cluster only. [33]
None of this matters much from a TST perspective. $n=n(\mathbf{R})$ is
ultimately just some order parameter that exfoliates configuration space and
parametrizes a dividing surface $n=n^{*}$. $n^{*}$ is in this context defined
as the value of $n$ that maximizes the geometric free energy surface
$F^{G}(n)$ [39]:
$F^{G}(n)=F(n)-\frac{1}{\beta}\ln\langle|\nabla n|\rangle_{n(\mathbf{R})=n}.$
(6)
This is mathematically equivalent to variationally minimizing the TST rate
of Eq. (1), which is a classical upper bound of the true rate. $F^{G}(n)$ can
be obtained simultaneously with the standard FES $F(n)$ by reweighting. [20]
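Given $F(n)$ and the reweighted average $\langle|\nabla n|\rangle$ on a common grid, Eq. (6) and the location of the dividing surface reduce to a few lines; an illustrative sketch:

```python
import numpy as np

def geometric_fes(n_grid, F, grad_norm, beta):
    """Geometric free energy surface of Eq. (6) and the optimal dividing
    surface n*, taken as the maximizer of F^G(n). All inputs are arrays
    on a common grid of the order parameter n; names are illustrative."""
    FG = F - np.log(grad_norm) / beta   # Eq. (6)
    i_star = int(np.argmax(FG))
    return FG, n_grid[i_star]
```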
We sometimes refer to $n^{*}$ as the “critical nucleus size” although it is,
strictly speaking, not the size of the actual critical nucleus within
macroscopic CNT for the reasons outlined above. Whether or not we can identify
the critical nucleus—or if it even exists—is however irrelevant. After all, we
could just as well have used a more advanced reaction coordinate that also
accounts for cluster shape [5] or one that has an even less pronounced
connection to the nucleus size, such as a generic measure of global order.
[12] Any possibility to interpret the reaction coordinate in terms of cluster
size is then lost, but we can still proceed to calculate the TST rate as
described previously. [20]
It is worth pointing out that an earlier model of the droplet nucleation rate
was also based on TST. [35] Compared to our approach, the TST rate expression
was constructed only for a single (elementary) monomer addition or evaporation
process at a cluster of fixed size $n$, whereas we derive a single expression
for the total nucleation rate without explicitly assuming a mechanism based on
sequential monomer addition only.
The FES is calculated within a small simulation cell, and the resultant
barrier derived from this FES is the formation free energy of a critical
nucleus (or, more generally, dividing surface) within this cell. That is, the
barrier in Eq. (5) and TST rate Eq. (4) are only defined for the $N$-atom
system in which the nucleation free energy surface was obtained. Therefore,
$k^{\mathrm{TST}}$ is the TST nucleation rate inside this particular
simulation cell. To obtain a global nucleation rate $J$, we must divide $k$ by
the initial volume $V$ of the simulation cell:
$J^{\mathrm{TST}}=\frac{k^{\mathrm{TST}}}{V}.$ (7)
$k^{\mathrm{TST}}$ is a strict upper bound to the true rate and, consequently,
$J^{\mathrm{TST}}$ could overestimate the true nucleation rate $J$. A failure
of TST can usually be traced back to one of the two following phenomena:
1. the reaction coordinate $n$ does not parametrize a proper dividing surface and $\Delta^{\ddagger}F$ is underestimated, or
2. not every crossing of the dividing surface $n=n^{*}$ results in an effective transition $g\rightarrow l$.
### II.2 Committor analysis and recrossing correction
We propose that committor analysis, which is a standard way to verify the
quality of a candidate reaction coordinate $\chi$, can simultaneously be used
to obtain a transmission coefficient $\kappa$, which compensates for
recrossings of the dividing surface. The committor $p_{l}$ is the probability
that an ensemble of configurations commits to the liquid state $l$. [40] If
If $p_{l}>0.5$, configurations can be considered to belong to the liquid state; if $p_{l}<0.5$, they are part of the vapor; and if $p_{l}=0.5$, they are part of the transition state ensemble. As a result, the quality of our putative
dividing surface $n=n^{*}$ as identified from the geometric FES (6) can be
assessed by subjecting a sample of $n=n^{*}$ states to a committor test.
Finding $p_{l}=0.5$ is a necessary condition for being a suitable dividing
surface and can thus be used to validate the reaction coordinate $n$.
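The committor test described above amounts to a simple binomial estimate; the ten trajectory outcomes below are hypothetical:

```python
import numpy as np

def committor_pl(outcomes):
    """p_l as the fraction of trial trajectories committing to the liquid.
    Returns (p_l, standard error) assuming independent binomial trials."""
    outcomes = np.asarray(outcomes, dtype=float)
    p = outcomes.mean()
    se = np.sqrt(p * (1.0 - p) / outcomes.size)
    return p, se

# Hypothetical outcome of 10 committor shots launched from n = n* states
# (True = relaxed to the liquid basin):
p_l, err = committor_pl([True, False, True, True, False,
                         False, True, False, True, False])
```

With only 10 shots the standard error on $p_{l}$ is about 0.16, which is sufficient to distinguish $p_{l}\approx 0.5$ from a strongly committed ensemble.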
If we find $p_{l}\approx 0.5$, we assume that recrossings are intrinsic to the
true dividing surface. We now also assume that the system spends such a long
time in the transition state (TS) region that it becomes fully decorrelated.
This corresponds to fully diffusional barrier crossing dynamics. The TST rate,
by definition, amounts to all crossings of $n=n^{*}$. Therefore, if we count
the average number of TS crossings $j_{\mathrm{cross}}$ during the committor
analysis we can directly measure the correction to the TST rate. The fraction
of TS crossings that effectively results in a nucleation event is
$p_{l}/\langle j_{\mathrm{cross}}\rangle$. This quantity therefore corresponds
to the transmission coefficient $\kappa$. In addition, if it was previously
found that $p_{l}=0.5$, we have $\kappa=(2\langle
j_{\mathrm{cross}}\rangle)^{-1}$.
The final estimate of the nucleation rate inside the cell volume $V$ is now
$k=\kappa k^{\mathrm{TST}}=\frac{\kappa}{h\beta}e^{-\beta\Delta^{\ddagger}F},$
(8)
and the global nucleation rate is
$J=\kappa J^{\mathrm{TST}}=\frac{\kappa k^{\mathrm{TST}}}{V}.$ (9)
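Putting Eqs. (8) and (9) together gives a minimal sketch in Python; the barrier, crossing count, and box size below are illustrative inputs chosen near the $S=8.68$ NVT conditions, not values taken from this work:

```python
import numpy as np

h_kj = 3.9903e-13    # Planck constant in kJ s / mol (6.62607e-34 J s * N_A / 1000)
kB_kj = 0.0083145    # Boltzmann constant in kJ / mol / K

def nucleation_rate(dF, T, p_l, j_cross_mean, V_cm3):
    """Eqs. (8)-(9): kappa-corrected rate k and global rate J = k / V.

    dF is the barrier in kJ/mol, j_cross_mean the average number of TS
    crossings per committor trajectory, V_cm3 the cell volume in cm^3.
    """
    beta = 1.0 / (kB_kj * T)
    k_tst = np.exp(-beta * dF) / (h_kj * beta)  # TST rate, i.e. kappa = 1
    kappa = p_l / j_cross_mean                  # transmission coefficient
    k = kappa * k_tst
    return kappa, k, k / V_cm3                  # J in cm^-3 s^-1

# Illustrative inputs loosely resembling the S = 8.68 NVT system
# (an 11.5 nm cubic box); not fitted to the tabulated data:
kappa, k, J = nucleation_rate(dF=7.46, T=80.7, p_l=0.5,
                              j_cross_mean=250.0, V_cm3=(11.5e-7) ** 3)
```

With these inputs $\kappa$ comes out in the $10^{-3}$ range and $J$ in the $10^{22}$ cm$^{-3}$ s$^{-1}$ range, i.e., the $\kappa$ correction lowers the TST estimate by roughly three orders of magnitude.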
Transmission coefficients and recrossings have received much interest, and
several theories have been developed to rationalize the concept, including CNT and Kramers’ theory. [41] Here, however, we only calculate numerical values of
$\kappa$ for the chosen reaction coordinate, directly employing its definition
within TST: The ratio between the effective rate and the TST rate associated
with the reaction coordinate. Compared to more dedicated theories, the current
approach offers little direct physical insight but, as we will show, it is
accurate and simple to apply.
## III Computational details
All simulations were carried out with LAMMPS [42] and the PLUMED plugin. [43,
44] The interatomic interactions between the Ar atoms were described using a Lennard-Jones potential with $\epsilon=0.99797$ kJ/mol and $\sigma=3.405$ Å.
The interaction was truncated at a distance of $6.75\sigma$. These parameters
fully match those used earlier. [1, 4, 5]
The equations of motion were integrated with a time step of 5 fs and
temperature control at $T=80.7$ K was achieved using a Langevin thermostat
[45] with a time scale of 1 ps. A Langevin thermostat was found to be
necessary to maintain strict equipartition of energy between the vapor and liquid phases. Note that such a thermostat should not be
used when explicitly sampling nucleation times (i.e., in brute force MD or
infrequent metadynamics), since the Langevin friction affects the rate of
processes. For this reason, we used a global thermostat [46] when performing
committor analysis, which retains the equilibration efficiency of its local
Langevin counterpart while leaving dynamical trajectories mostly unperturbed.
[47]
Constant volume (NVT) simulations were performed in periodic cubic simulation
cells of different size. Following previous definitions [1, 4, 5], each system
is identified by its supersaturation level $S$, defined as
$S=\frac{Nk_{B}T}{p_{e}V},$ (10)
in which $p_{e}=0.43$ bar.
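Eq. (10) can be inverted to pick the cell edge for a target supersaturation; a minimal sketch assuming SI inputs:

```python
kB = 1.380649e-23  # Boltzmann constant, J/K

def edge_length_nm(S, N=512, T=80.7, p_e=0.43e5):
    """Cubic cell edge L (nm) that realizes S = N k_B T / (p_e V), Eq. (10).
    p_e is given in Pa (0.43 bar)."""
    V = N * kB * T / (p_e * S)     # cell volume in m^3
    return V ** (1.0 / 3.0) * 1e9  # edge length in nm

L = edge_length_nm(8.68)  # close to the 11.5 nm cell used at S = 8.68
```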
Constant pressure (NPT) simulations were performed in the same way as the NVT
simulations, except that the equations of motion are those of a Nosé–Hoover
style barostat. [48] The imposed pressure $p$ for each value of $S$ is chosen
by an initial NVT simulation in a box with a size found from eq. (10). We use
this approach, rather than using $p=Sp_{e}$, because we wish to comply with
the ideal gas law-based naming scheme established earlier.
In order to achieve sufficient sampling along the reaction coordinate $n$,
some enhanced sampling scheme is necessary. Here, we choose well-tempered
metadynamics [49, 50] because it is a widely available method that
demonstrates the ease by which our approach can be implemented practically.
The approach presented here is however method-agnostic and other sampling
strategies can be employed to reconstruct the FES if they are more efficient or
practical. Other free energy studies of phase transitions have used, for
example, umbrella sampling, [38, 36, 37, 34] variationally enhanced sampling
(VES), [12, 15] adiabatic free-energy dynamics (AFED), [8, 10] on-the-fly
probability-enhanced sampling (OPES), [16] and reweighted Jarzynski sampling.
[51]
The metadynamics parameters were almost equal in all systems. As a CV, we used
the number of liquid atoms $n$, defined using switching functions of the type
$s(x)=\frac{1-(x/x_{0})^{6}}{1-(x/x_{0})^{12}}.$ (11)
First, a coordination number $c_{i}$ is calculated for each atom $i$, by
summing $s(r_{ij})$ using the pairwise distance $r_{ij}$ with all other atoms
$j$ within 10 Å, and $r_{0}=5$ Å. Then, $n$ is calculated as the sum of all
$s(c_{i})$, using $c_{0}=5$.
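Since $1-r^{12}=(1-r^{6})(1+r^{6})$, the switching function of Eq. (11) can be implemented in the equivalent form $1/(1+(x/x_{0})^{6})$, which avoids the removable $0/0$ singularity at $x=x_{0}$:

```python
def switch(x, x0):
    """Eq. (11), rewritten as 1/(1 + (x/x0)^6).
    Algebraically identical to the rational form, but avoids 0/0 at x = x0."""
    return 1.0 / (1.0 + (x / x0) ** 6)

# s decays smoothly from 1 toward 0 around x0, with s(x0) = 1/2.
# In the CV, switch(r_ij, 5.0) is summed into coordination numbers c_i,
# and switch(c_i, 5.0) is then summed over atoms, as described in the text.
```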
The external bias potential in metadynamics is history-dependent and expressed
as a sum of repulsive Gaussians. Every 50 ps, a Gaussian of initial height
$w=0.5$ kJ/mol was added to the total bias. The width of each new Gaussian was
determined using a diffusional scheme, [52] on a time scale of 25 ps. Well-
tempered metadynamics was used with bias factor $\gamma=15$, to gradually
reduce the size of the newly added Gaussians and smoothly converge the bias.
[50] FES estimates were produced by reweighting [53] 200 ns chunks of the
biased trajectory. The total simulation time was 1 $\mu$s for each system.
We used harmonic restraints on $n$ to keep the droplet from growing too large.
These were placed at $n=64$ (for $S>6.01$) or $n=128$ (otherwise).
Representative system configurations with $n=n^{*}$ for committor analysis
were generated using steered MD (SMD), and a set of 10 independent
trajectories were launched for each condition. For each trajectory, we
recorded the number of times $j_{\mathrm{cross}}$ the system crosses the
dividing surface defined by $n=n^{*}$. 20 ns per trajectory proved to be
sufficient for all systems, except for $S=4.81$.
## IV Results and discussion
### IV.1 The finite size limit
We first study nucleation in a vapor of 512 Ar atoms in the canonical (NVT)
ensemble using Langevin dynamics [45] within several fixed box volumes $V$
(Table 1). Each box size represents a different supersaturation level $S$.
Specifically, we consider a series of systems at a temperature $T=80.7$ K, for
which accurate rate estimates are available from (accelerated) MD
trajectories. [1, 4, 5] As an example, we plot in Fig. 1a the FES for
nucleation in a cubic cell with an edge length of 11.5 nm, or $S=8.68$.
Figure 1: Calculation of Ar droplet nucleation rates from TST. (a) Effect of the ensemble on the free energy surface. Inset: snapshot of a droplet $n\approx 64$. (b) Comparing computed TST rates $J^{\mathrm{TST}}$ with recrossing-corrected rates $J=\kappa J^{\mathrm{TST}}$ and literature results $J^{\mathrm{ref}}$ as a function of the supersaturation level $S$. (c) Comparing rates $J$ in the finite system and NVT ensemble to their NPT counterparts $J_{\infty}$ and finite size-corrected macroscopic rates from the literature $J_{\infty}^{\mathrm{ref}}$ as a function of $S$.

Table 1: Nucleation barriers $\Delta^{\ddagger}F$, TST rates $J^{\mathrm{TST}}$, transmission coefficients $\kappa$ and final rate estimates $J$ in a fixed-volume (NVT) 512 Ar system. Different supersaturations $S$ are simulated by choosing the simulation cell edge length $L$. Reference rates $J^{\mathrm{ref}}$ are given for comparison. (Reference values taken from Tsai et al. [5] for $S\geq 9.04$ and from Salvalaglio et al. [4] otherwise.)

$S$ | $L$ | $\Delta^{\ddagger}F$ | $J^{\mathrm{TST}}$ | $\kappa$ | $J$ | $J^{\mathrm{ref}}$
---|---|---|---|---|---|---
| (nm) | (kJ/mol) | (cm$^{-3}$ s$^{-1}$) | ($10^{-3}$) | (cm$^{-3}$ s$^{-1}$) | (cm$^{-3}$ s$^{-1}$)
11.43 | 10.5 | $3.76\pm 0.15$ | $5.34\pm 1.23\times 10^{27}$ | $4.9\pm 1.4$ | $2.62\pm 0.95\times 10^{25}$ | $1.84\pm 0.27\times 10^{25}$
9.87 | 11.0 | $5.13\pm 0.27$ | $6.03\pm 2.44\times 10^{26}$ | $2.6\pm 0.9$ | $1.58\pm 0.83\times 10^{24}$ | $1.09\pm 0.19\times 10^{24}$
9.04 | 11.3 | $6.59\pm 0.18$ | $6.36\pm 1.69\times 10^{25}$ | $2.5\pm 0.4$ | $1.61\pm 0.50\times 10^{23}$ | $1.10\pm 0.27\times 10^{23}$
8.68 | 11.5 | $7.46\pm 0.30$ | $1.65\pm 0.73\times 10^{25}$ | $2.1\pm 0.5$ | $3.52\pm 1.73\times 10^{22}$ | $2.80\pm 0.82\times 10^{22}$
6.76 | 12.5 | $12.20\pm 0.19$ | $1.10\pm 0.32\times 10^{22}$ | $1.0\pm 0.2$ | $1.10\pm 0.39\times 10^{19}$ | $0.64\pm 0.33\times 10^{19}$
6.01 | 13.0 | $16.31\pm 0.51$ | $2.12\pm 1.63\times 10^{19}$ | $1.1\pm 0.2$ | $2.37\pm 1.85\times 10^{16}$ | $1.26\pm 0.56\times 10^{16}$
5.36 | 13.5 | $20.78\pm 0.13$ | $2.44\pm 0.47\times 10^{16}$ | $0.6\pm 0.2$ | $1.57\pm 0.48\times 10^{13}$ | $1.30\pm 0.75\times 10^{13}$
4.81 | 14.0 | $25.24\pm 0.07$ | $2.81\pm 0.29\times 10^{13}$ | $0.2\pm 0.1$ | $5.46\pm 1.08\times 10^{9}$ |
If we assume $\kappa=1$, we can use (4) to calculate the TST rate
$k^{\mathrm{TST}}$ and also obtain a TST-style estimate
$J^{\mathrm{TST}}$ of the global nucleation rate $J$. Only the free energy
surface $F(n)$ (and an appropriate gauge correction) is needed to calculate
$k^{\mathrm{TST}}$.
While TS crossing in many chemical reactions can be considered to be ballistic
(and $k\approx k^{\mathrm{TST}}$), this may not be the case for nucleation
processes. Not every occurrence of a configuration $\mathbf{R}$ for which
$n(\mathbf{R})=n^{*}$ necessarily corresponds to a nucleation event. As can be
seen in Fig. 1b and Table 1, a poor agreement with literature rates is
obtained when we calculate a nucleation rate $J^{\mathrm{TST}}$ from
$k^{\mathrm{TST}}$. On average $J^{\mathrm{TST}}$ and $J^{\mathrm{ref}}$
deviate by three orders of magnitude.
From committor tests we found $p_{l}\approx 0.5$ in all cases,
confirming the quality of the CV $n$. From just 10 trajectories per $S$, we
also obtained estimates of $\kappa$ that have a precision similar to that of
$J^{\mathrm{TST}}$, and are on the order of $10^{-3}$ (Table 1). TS crossing
is therefore highly diffusive, thus validating our assumption that
trajectories around $n=n^{*}$ become fully decorrelated. Our final nucleation
rate estimates $J$ now match very well the $J^{\mathrm{ref}}$ values, as can
be seen in Fig. 1b. This agreement is even more remarkable when realizing that
rate estimates purely from CNT can be off by several orders of magnitude. [2]
Such inconsistencies in nucleation rate predictions are quite common: A
spectacular example is ice formation, for which rates calculated by different
approaches (seeding, forward flux sampling, and a CNT-based recipe) were found
to span nine orders of magnitude even though the employed water model and
simulation conditions were the same. [34]
Figure 2: Relative contributions to the nucleation rate $J$. Values of the
exponential term $e^{-\beta\Delta^{\ddagger}F}$ and transmission coefficient
$\kappa$ are plotted as a function of $\ln^{-2}S$.
As can be seen in Fig. 2, the relative contribution of $\kappa$ to the overall
nucleation rate is similar to that of the exponential
$e^{-\beta\Delta^{\ddagger}F}$ term at high supersaturations. With decreasing
$S$, the nucleation barrier increases strongly, while $\kappa$ only has a weak
dependence on $S$. The fact that nucleation time scales over the whole studied
supersaturation range span sixteen orders of magnitude can therefore be almost
exclusively attributed to the exponential term in the rate expression Eq. (9).
### IV.2 Macroscopic nucleation rates
Nucleation rates calculated in a small simulation box with fixed dimensions
are affected by finite size effects. This is because the growing droplet
depletes the gas phase and, thus, artificially decreases the supersaturation.
Salvalaglio et al. [4] corrected their accelerated MD simulations by
estimating the finite size error from CNT expressions. However, a much more
generic solution to the finite size problem is available within our FES-based
approach. If we calculate the FES within the constant pressure NPT ensemble,
the vapor phase is kept at its initial pressure throughout the simulation
because the box size is allowed to vary.
Table 2: Nucleation barriers $\Delta^{\ddagger}G$, TST rates $J^{\mathrm{TST}}_{\infty}$, transmission coefficients $\kappa$ and final rate estimates $J_{\infty}$ in a constant pressure (NPT) 512 Ar system that approximates the physics of a macroscopically sized system. Different supersaturations $S$ are simulated by enforcing a pressure $p$. Reference rates $J_{\infty}^{\mathrm{ref}}$ are given for comparison. (Reference values taken from Salvalaglio et al. [4] where available.)

$S$ | $p$ | $\Delta^{\ddagger}G$ | $J_{\infty}^{\mathrm{TST}}$ | $\kappa$ | $J_{\infty}$ | $J_{\infty}^{\mathrm{ref}}$
---|---|---|---|---|---|---
| (atm) | (kJ/mol) | (cm$^{-3}$ s$^{-1}$) | ($10^{-3}$) | (cm$^{-3}$ s$^{-1}$) | (cm$^{-3}$ s$^{-1}$)
11.43 | 3.92 | $3.65\pm 0.17$ | $6.33\pm 1.58\times 10^{27}$ | $5.9\pm 2.0$ | $3.75\pm 1.56\times 10^{25}$ | $3.04\pm 0.70\times 10^{25}$
9.87 | 3.52 | $4.99\pm 0.12$ | $7.44\pm 1.32\times 10^{26}$ | $2.9\pm 1.0$ | $2.16\pm 0.84\times 10^{24}$ |
9.04 | 3.30 | $6.12\pm 0.07$ | $1.28\pm 0.13\times 10^{26}$ | $3.9\pm 0.9$ | $5.05\pm 1.25\times 10^{23}$ |
8.68 | 3.16 | $6.83\pm 0.20$ | $4.22\pm 1.24\times 10^{25}$ | $2.4\pm 0.4$ | $1.01\pm 0.35\times 10^{23}$ | $8.64\pm 2.53\times 10^{22}$
6.76 | 2.55 | $11.47\pm 0.38$ | $3.25\pm 1.84\times 10^{22}$ | $3.1\pm 0.9$ | $1.00\pm 0.64\times 10^{20}$ | $0.51\pm 0.27\times 10^{20}$
6.01 | 2.31 | $14.49\pm 0.24$ | $3.20\pm 1.13\times 10^{20}$ | $1.2\pm 0.4$ | $3.87\pm 1.94\times 10^{17}$ | $2.57\pm 1.14\times 10^{17}$
5.36 | 2.08 | $17.42\pm 0.06$ | $3.62\pm 0.31\times 10^{18}$ | $0.8\pm 0.3$ | $2.92\pm 1.04\times 10^{15}$ | $1.35\pm 0.78\times 10^{15}$
4.81 | 1.89 | $20.98\pm 0.29$ | $1.62\pm 0.69\times 10^{16}$ | $0.9\pm 0.2$ | $1.39\pm 0.67\times 10^{13}$ |
4.33 | 1.71 | $26.23\pm 0.13$ | $5.77\pm 1.14\times 10^{12}$ | $0.6\pm 0.2$ | $3.32\pm 1.13\times 10^{9}$ |
We therefore repeat our metadynamics simulations in the NPT ensemble. Taking
yet again the case of $S=8.68$ as an example, we see that the FES of
nucleation—which now represents the Gibbs free energy $G$ rather than the
Helmholtz definition $F$—is significantly affected by the ensemble change
(Fig. 1a and Table 2). In this system, the nucleation barrier
$\Delta^{\ddagger}G_{g\rightarrow l}$ decreases by about 0.6 kJ/mol
(${\sim}k_{B}T$). $\kappa$ appears not to be appreciably affected by finite size
effects, meaning that our final estimate of the macroscopic nucleation rate
$J_{\infty}$ is about 3 times higher than the finite size estimate $J$, which
is in perfect agreement with the estimates of Salvalaglio et al. (Fig. 1c).
More generally, our results closely match finite size-corrected nucleation
rates for all $S$ with available reference data. With decreasing $S$, the
magnitude of the finite size effect increases very strongly, up to four
orders of magnitude for $S=4.81$.
It may also be possible to directly sample nucleation rates in the NPT
ensemble, using accelerated MD. However, although the employed thermo- and
barostat correctly reproduce the thermodynamic averages of the target
ensemble, they achieve this by augmenting the equations of motion with an
artificial friction term. [45, 48] The dynamical trajectories of all atoms are
thus affected. It has therefore been argued that nucleation times, too, could
be unphysical to some extent, although the magnitude of this possible effect
was not quantified. [2, 32] In contrast, the TST rate is purely an equilibrium
property of the system: The FES (or barrier) only depends on the underlying
thermodynamic distributions, and not the precise dynamical trajectories.
More generally, calculating nucleation free energy barriers is a matter of
sampling along a suitable reaction coordinate (or CV), while maintaining the
nucleating particles at a physically meaningful chemical potential $\mu$.
Depending on the process, this can be achieved in the NPT, [11, 12, 15, 10,
13, 14, 8, 9] NVT, [19] or $\mu$VT ensembles. [18] As we show here, the
resultant FES (and accompanying committor analysis) then suffices to calculate
accurate macroscopic nucleation rates from TST. Care must be taken, however,
to ensure that the system is large enough to accommodate the critical nucleus.
Convergence tests using different system sizes can reveal any remaining size
effects [11] which, as will be shown in Section IV.6, are absent in our setup.
### IV.3 Efficiency of the rate calculation
The efficiency of accelerated MD simulations is often expressed in terms of an
_acceleration factor_ $\alpha$, which is the ratio of the transition time
$\tau$ and the length of the MD trajectory needed to observe it in the biased
simulation, i.e.,
$\alpha=\frac{\tau}{t_{\mathrm{MD}}}.$ (12)
The reference rates $J^{\mathrm{ref}}$ used here were obtained within the
infrequent metadynamics framework. [7] In infrequent metadynamics, a standard
metadynamics setup is employed, but the deposition of the Gaussian bias
potentials is done more slowly. The idea is that this helps to ensure that the
dividing surface between states (here $n=n^{*}$) is not biased, and the
simulation becomes equivalent to hyperdynamics. [6] Then, $\alpha=\langle
e^{\beta V(n)}\rangle_{b}$, in which $V(n)$ is the bias potential and
$\langle\cdots\rangle_{b}$ denotes a time average over the biased trajectory.
Using infrequent metadynamics simulations, $\alpha_{\mathrm{iMetaD}}=1.7\times
10^{11}$ could be reached for $S=5.36$. However, in order to obtain correct
statistics, several independent observations $n_{\mathrm{sim}}$ of the
transition were needed. We can use a similar definition to calculate
$\alpha_{\mathrm{FES}}$ for the FES-based estimation of the rate, in which we
take $t_{\mathrm{MD}}$ as the time needed to converge the FES estimates (1
$\mu$s in total), plus the time spent for committor analysis ($10\times 20$
ns). The discrepancy between the two definitions lies in the value of
$n_{\mathrm{sim}}$. Therefore, we can compute the relative efficiency $\eta$
of the two approaches as:
$\eta=n_{\mathrm{sim}}\frac{\alpha_{\mathrm{FES}}}{\alpha_{\mathrm{iMetaD}}}.$
(13)
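For concreteness, Eqs. (12) and (13) applied to one row of Table 3 ($S=6.76$: $\tau=4.64\times 10^{-2}$ s, $n_{\mathrm{sim}}=50$, $\alpha_{\mathrm{iMetaD}}=2.4\times 10^{5}$, with 1 $\mu$s of metadynamics plus $10\times 20$ ns of committor trajectories):

```python
def relative_efficiency(tau, t_md, n_sim, alpha_imetad):
    """Eqs. (12)-(13): alpha_FES = tau / t_MD, then
    eta = n_sim * alpha_FES / alpha_iMetaD."""
    alpha_fes = tau / t_md
    return alpha_fes, n_sim * alpha_fes / alpha_imetad

# S = 6.76 row of Table 3; t_MD combines the metadynamics run and the
# committor trajectories.
t_md = 1e-6 + 10 * 20e-9   # 1.2e-6 s of simulated time
alpha_fes, eta = relative_efficiency(4.64e-2, t_md, n_sim=50,
                                     alpha_imetad=2.4e5)
```

This reproduces the tabulated $\alpha_{\mathrm{FES}}\approx 3.87\times 10^{4}$ and $\eta\approx 8$ for that row.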
Table 3: Nucleation times $\tau$ in NVT simulations, number of infrequent metadynamics runs $n_{\mathrm{sim}}$ and acceleration factors $\alpha_{\mathrm{iMetaD}}$ in the literature, acceleration by the FES-based approach $\alpha_{\mathrm{FES}}$, and relative efficiency $\eta$ of the two. ($\alpha_{\mathrm{iMetaD}}$ and $n_{\mathrm{sim}}$ values taken from Tsai et al. [5] for $S\geq 9.04$ and from Salvalaglio et al. [4] otherwise.)

$S$ | $\tau$ (s) | $n_{\mathrm{sim}}$ | $\alpha_{\mathrm{iMetaD}}$ | $\alpha_{\mathrm{FES}}$ | $\eta$
---|---|---|---|---|---
11.43 | $3.30\times 10^{-8}$ | 20 | $6.40\times 10^{1}$ | $2.75\times 10^{-2}$ | 0.01
9.87 | $4.77\times 10^{-7}$ | 20 | $1.10\times 10^{3}$ | $3.97\times 10^{-1}$ | 0.01
9.04 | $4.29\times 10^{-6}$ | 20 | $7.80\times 10^{3}$ | $3.58\times 10^{0}$ | 0.01
8.68 | $1.87\times 10^{-5}$ | 100 | $1.80\times 10^{2}$ | $1.56\times 10^{1}$ | 9.95
6.76 | $4.64\times 10^{-2}$ | 50 | $2.40\times 10^{5}$ | $3.87\times 10^{4}$ | 8.06
6.01 | $1.92\times 10^{1}$ | 50 | $6.30\times 10^{7}$ | $1.60\times 10^{7}$ | 14.44
5.36 | $2.59\times 10^{4}$ | 50 | $1.70\times 10^{11}$ | $2.16\times 10^{10}$ | 11.66
4.81 | $6.67\times 10^{7}$ | | | $5.56\times 10^{13}$ |
A clear discrepancy between different infrequent metadynamics studies becomes
apparent, where Tsai et al. used significantly more aggressive biasing
parameters than Salvalaglio et al. In addition, and also not directly
discernible from Table 3, the former authors used a more complex CV, which
besides $n$ also contained information about droplet shape. A single MD step
in the study of Tsai et al. therefore also required more CPU time. A short
test indicated that our implementation of $n$ is about 30 times faster to
evaluate than their preferred CV for biasing. Indeed, due to the high cost of
their simulations no supersaturations below 9.04 could be simulated. [5]
Nevertheless, we see that an FES-based approach only starts to become
competitive at lower supersaturations, where it quite consistently outperforms
infrequent metadynamics by an order of magnitude. In addition, without
changing biasing parameters, we could calculate rates for supersaturations as
low as $S=4.33$ in the NPT simulations, with $\tau\sim 10^{8}$ s, and
$\alpha_{\mathrm{FES}}=8.2\times 10^{13}$. We can therefore anticipate that
using the FES and TST becomes an increasingly attractive option when
interatomic potentials become more expensive and/or nucleation barriers become
higher.
Furthermore, metadynamics may not necessarily be the most efficient free
energy method under all conditions. Plenty of alternative free energy methods
have been reported in the literature and implemented in widely available codes
such as PLUMED. We have recently already demonstrated that a new method based
on nonequilibrium sampling improves upon metadynamics by a factor ${\sim}3$ in
the $S=8.68$ NVT case. [51]
### IV.4 Discussion of errors
Overall, the agreement between infrequent metadynamics and the TST-based
approach is very good. This is quite remarkable considering that the most
prominent sources of error of both methods go in opposite directions.
Suboptimal CVs have a negative impact on the performance of infrequent
metadynamics: If the CV does not contain all slow modes in the system, the
bias potential will not be effective, leading to overfilling of the metastable
basin before a transition can occur. Or, put differently, a poor CV will not
properly distinguish transition states from metastable states, meaning that
bias is also added to the transition states, leading to a violation of the
hyperdynamics assumption. [6] As a result, transition times will be
overestimated, and predicted rates will be _underestimated_. [5, 54]
A poor CV can still be sufficient to enhance sampling and converge a FES.
Because it mixes TS configurations with stable states, the apparent free energy of the
TS will however be too low. Therefore, rates computed from this barrier will
always be an upper bound of the true rate, and are prone to _overestimate_ it.
We attempted to minimize the error in the reference nucleation rates by
selecting the values Tsai et al. obtained using their optimized CV, rather
than $n$. Salvalaglio et al. only used $n$, but were significantly more
prudent with respect to their biasing parameters.
Despite using $n$ as a CV, which may be suboptimal [5], our rates are very
close to the infrequent metadynamics estimates. There may, however, be a
slight bias to somewhat higher rates (up to 2 times higher than the reference,
but always within error bars), in line with the reasoning outlined above. The
overall good agreement of the competing approaches is however consistent with
the observation of Tsai et al. that the barrier along their optimized CV was
not appreciably higher than the one along $n$. [5] Note, also, that small
differences in numerical precision between the employed codes may introduce
small deviations.
Finally, our committor analysis reveals one important point of caution when
applying infrequent metadynamics along $n$. Because the system may spend up to
10 ns in the TS region, it is very difficult to guarantee an uncorrupted TS if
new Gaussian biases are continuously added: Only simulations with
impractically low bias addition rates or very small Gaussians are truly
trustworthy.
Rates have an exponential dependence on nucleation free energy barriers, so
even small uncertainties in the FES can result in large error bars on a final
rate estimate. For almost every system we have managed to keep the uncertainty
on the barrier well below $k_{B}T$, leading mostly to errors between 30 and 60
% on the rate. These error bars are similar to those reported by Salvalaglio
et al. [4] Somewhat higher uncertainties have been reported on nucleation
rates computed by forward flux sampling (between 150 and 500 %) in diverse
systems. [27, 28, 29]
### IV.5 Interpretation of the transmission coefficient
In the strictest sense, the objective of our study is to obtain nucleation
rates. As a consequence we have not interpreted the values of the nucleation
barrier or transmission coefficient $\kappa$ in great detail. These two
quantities are however quite interconnected.
Indeed, the value of $\kappa$ as used in our study can potentially serve two
purposes. If we assume that $n=n^{*}$ is the best possible choice of dividing
surface, the free energy barrier will be maximized (in the spirit of
variational TST) and $\kappa<1$ represents the inherent diffusivity in the TS
region. In such case, a no-recrossing dividing surface does not exist, and
$\kappa$ captures dynamical (friction) effects that lower the true TST rate.
Alternatively, recrossings may also be a consequence of a poorly chosen
dividing surface. In that case $n=n^{*}$ also contains configurations with
lower free energies (so the apparent barrier is too low) and is crossed more
than the true dividing surface (so $\kappa$ becomes smaller). In such a
situation, the final rate estimate may still be accurate, but barrier and
transmission coefficient have less of a clear-cut physical significance.
However, note that we use steered MD to generate trial $n=n^{*}$
configurations for committor analysis, starting in the $g$ state. If $n=n^{*}$
is a poor dividing surface, one would expect these configurations to be biased
towards the $g$ state because there is a residual barrier still unaccounted
for in $F(n)$. As a result, the SMD run will be unable to place the system
exactly on the true dividing surface and $p_{l}<0.5$. Because we do, however, find $p_{l}\approx 0.5$, we conclude that such a residual barrier is negligible
(${\sim}k_{B}T$). More rigorous tests for candidate dividing surfaces have
also been proposed. [55]
It is likely, then, that the very low values of $\kappa$ (on the order of $10^{-3}$) are mostly a manifestation of intrinsic dynamic effects.
This is a reasonable conclusion, considering that droplet growth is a process
fully driven by diffusion of gas atoms, balanced by re-evaporation of atoms
from the liquid. The stochastic nature of these phenomena is compatible with
small transmission coefficients.
### IV.6 Comparison with a CNT-based approach
Our method bears some resemblance to the popular “parameter-free”
implementation of CNT pioneered by Auer and Frenkel. [36, 37] As in our
approach, a nucleation FES must be reconstructed first and a rate estimate is
calculated from the barrier. It is important to note, however, that this
expression is based on a definition of the barrier within the macroscopic CNT
framework.
To wit, within the NPT ensemble, the rate is expressed as
$J=Zf^{+}\rho_{g}e^{-\beta G^{*}}.$ (14)
Herein, $G^{*}=G(n^{*})-G(0)$, $\rho_{g}$ is the number density of the
metastable vapor, $f^{+}$ is the attachment rate on the critical nucleus, and
$Z$ is the Zeldovich factor.
The attachment rate $f^{+}$ can be calculated in several ways. Most commonly,
one launches several trajectories starting from a critical nucleus, and
calculates $f^{+}$ as a diffusion coefficient in $n$:
$f^{+}=\frac{\langle(n(t)-n(0))^{2}\rangle}{2t}.$ (15)
Alternatively, in the case of droplet nucleation, one can use kinetic gas
theory [4]:
$f^{+}=A(n^{*})\frac{\rho_{g,e}}{\sqrt{2\pi\beta m}}.$ (16)
The density of the vapor at coexistence is $\rho_{g,e}=\rho_{g}/S$. $A(n^{*})$
is the surface area of the critical nucleus, which can be calculated if the
number density $\rho_{l}$ of the liquid is known:
$A(n^{*})=\left(\frac{36\pi}{\rho_{l}^{2}}\right)^{1/3}(n^{*})^{2/3}.$ (17)
The Zeldovich factor $Z$ is computed from the free energy surface $G(n)$ as
$Z=\sqrt{\frac{\beta}{2\pi}\left|\frac{\mathrm{d}^{2}G(n)}{\mathrm{d}n^{2}}\right|_{n=n^{*}}}.$
(18)
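A minimal sketch of the CNT ingredients in Eqs. (14)-(18); the central finite difference in the Zeldovich factor is exact for the toy parabolic barrier used here, and the arguments passed to the kinetic-gas attachment rate are rough, illustrative Ar-like values, not results from this work:

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def zeldovich(n, G, i_star, beta):
    """Eq. (18): Z from a central finite difference of G(n) (in J) at the top."""
    dn = n[1] - n[0]
    d2G = (G[i_star + 1] - 2.0 * G[i_star] + G[i_star - 1]) / dn ** 2
    return np.sqrt(beta * abs(d2G) / (2.0 * np.pi))

def f_plus_kinetic(n_star, rho_g, S, rho_l, T, m):
    """Eqs. (16)-(17): attachment rate from kinetic gas theory (SI units)."""
    A = (36.0 * np.pi / rho_l ** 2) ** (1.0 / 3.0) * n_star ** (2.0 / 3.0)
    beta = 1.0 / (kB * T)
    return A * (rho_g / S) / np.sqrt(2.0 * np.pi * beta * m)

# Toy parabola G(n) = -0.5*a*(n - n*)^2 with beta*a = 2*pi gives Z = 1 exactly:
n = np.arange(0.0, 11.0)
G = -np.pi * (n - 5.0) ** 2      # with beta = 1 J^-1, so a = 2*pi J
Z = zeldovich(n, G, i_star=5, beta=1.0)

# Illustrative Ar-like inputs (densities in m^-3, mass in kg):
f_plus = f_plus_kinetic(n_star=60.0, rho_g=3.4e26, S=8.68,
                        rho_l=2.1e28, T=80.7, m=6.63e-26)
```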
We illustrate the application of this approach with a calculation of
$J_{\infty}$ for $S=8.68$. From the FES $G(n)$ we compute $Z=0.065$. Our two
possible estimates of $f^{+}$ however differ quite strongly: The direct
measurement of the diffusion coefficient using Eq. (15) yields $2.5\times
10^{11}$ s-1, whereas the kinetic gas theory expression Eq. (16) predicts
$9.2\times 10^{9}$ s-1. Also note that the former estimate is difficult to
converge, and has a relative error bar of 100%.
Figure 3: Size dependency of the nucleation free energy surface $G(n)$ for $S=8.68$. Different system sizes $N$ are compared.

Table 4: Size dependence of nucleation barriers obtained from NPT simulations with different numbers of atoms $N$, for $S=8.68$. (All barriers were computed from a single, long reweighting run; for fairness, this is also true for the $N=512$ case. This explains the slightly different value for $\Delta^{\ddagger}G$ compared to the value in Table 2, as well as the lack of error bars.) Values of $J_{\infty}^{\mathrm{TST}}$, directly estimated from Eqs. (4) and (7), show that our TST-based procedure naturally accounts for the extensive nature of $\Delta^{\ddagger}G$.

$N$ | $G(n^{*})-G(n_{\mathrm{min}})$ | $\Delta^{\ddagger}G$ | $J_{\infty}^{\mathrm{TST}}$
---|---|---|---
| (kJ/mol) | (kJ/mol) | (cm$^{-3}$ s$^{-1}$)
216 | 7.74 | 7.26 | $5.2\times 10^{25}$
512 | 6.86 | 6.68 | $5.3\times 10^{25}$
1000 | 6.18 | 6.36 | $4.4\times 10^{25}$
Now, we must calculate $G^{*}=G(n^{*})-G(0)$. This expression is however only
valid if $G(n)$ has the shape predicted by CNT. Because the order parameter
$n$ does not strictly count the number of atoms in the critical nucleus only,
$G(n)$ deviates from the CNT shape, in particular for small $n$ and with increasing system size $N$ (Fig. 3). The minimum of this curve is now located at $n_{\mathrm{min}}>0$. Therefore, the apparent barrier height
$G(n^{*})-G(n_{\mathrm{min}})$ is also size-dependent (Table 4). The precise
nature of these issues was only recently addressed in full detail. [56, 33] In
principle, the definition of the barrier as $G(n^{*})-G(0)$ can be retained
only if $G(n)$ is transformed first into a macroscopic function consistent
with the CNT definition of critical cluster size. [33]
A more ad hoc correction can be derived as follows. $e^{-\beta G^{*}}$ is
defined as the relative “equilibrium” probability to form a critical nucleus
around one monomer, and has to be multiplied by the monomer density $\rho_{g}$
to yield the “equilibrium concentration” of critical nuclei. It is therefore
an intensive, macroscopic quantity. The quantity
$G(n^{*})-G(n_{\mathrm{min}})$ is the work required to form a critical nucleus
inside the simulation cell, i.e., to form a critical nucleus around _any_
of the monomer particles. It is therefore an extensive quantity.
$e^{-\beta(G(n^{*})-G(n_{\mathrm{min}}))}$ is therefore the probability of
finding a critical nucleus within the simulation cell volume, relative to the
system residing exactly in its local minimum. The critical nucleus
concentration therefore equals $e^{-\beta(G(n^{*})-G(n_{\mathrm{min}}))}/V$.
If a nucleation barrier is obtained from microscopic simulations, one can
therefore approximate the term $\rho_{g}e^{-\beta G^{*}}$ by
$e^{-\beta(G(n^{*})-G(n_{\mathrm{min}}))}/V$ in Eq. (14), as noted before.
[56] Alternatively, $\beta G^{*}\approx\beta G(n^{*})-\beta
G(n_{\mathrm{min}})+\ln N$.
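The finite-size correction $\beta G^{*}\approx\beta(G(n^{*})-G(n_{\mathrm{min}}))+\ln N$ can be sketched numerically. This is a minimal illustration, not the authors' code; the temperature is an assumed value chosen for demonstration, and the gas constant stands in for $k_{B}$ because the barriers in Table 4 are given per mole.

```python
import math

R = 8.314462618e-3  # gas constant [kJ/(mol K)], standing in for k_B for molar energies

def macroscopic_barrier(dG_sim, n_atoms, temperature):
    """Approximate macroscopic barrier G* (kJ/mol) from the extensive,
    simulated barrier dG_sim = G(n*) - G(n_min) via
    beta*G* ~= beta*(G(n*) - G(n_min)) + ln N."""
    beta = 1.0 / (R * temperature)
    return (beta * dG_sim + math.log(n_atoms)) / beta

# Illustration with the N = 512 barrier of Table 4 (6.86 kJ/mol).
# T = 80 K is an assumed value for demonstration, not taken from the paper.
g_star = macroscopic_barrier(6.86, 512, 80.0)
```

The $\ln N$ term grows with system size exactly as the apparent barrier shrinks, which is why the resulting $G^{*}$ is approximately size-independent.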
When Eq. (15) is now used to estimate $f^{+}$, employing an appropriate
definition of $G^{*}$, we find a predicted rate of $J_{\infty}\approx 4\times
10^{23}$ cm$^{-3}$ s$^{-1}$, which is quite close to our TST-based result of
$1.0\times 10^{23}$ cm$^{-3}$ s$^{-1}$ when taking into account the very large
uncertainty on $f^{+}$. Eq. (16) fares worse, yielding an estimated
$J_{\infty}\approx 1.4\times 10^{22}$ cm$^{-3}$ s$^{-1}$. These results
highlight that TST, CNT, and related approaches are equivalent theories that
can be used to calculate nucleation rates.
Application of the CNT-derived expression Eq. (14) thus requires some
processing to turn the microscopic simulation data into appropriately
macroscopic quantities. [33, 56] The TST rate of Eq. (1), in contrast, is one
monolithic expression for the flux through the dividing surface $n=n^{*}$. It
is therefore a purely microscopic quantity that is rigorously defined _within
the chosen simulation cell_. This local rate estimate can then
straightforwardly be converted in a global nucleation rate (through Eq. (7) or
(9)), which is also an experimentally verifiable observable. In this sense the
barrier $\Delta^{\ddagger}G$ and transmission coefficient $\kappa$ only have
significance for the specific simulation setup in which they were obtained;
they serve as input for our procedure to yield the final nucleation rate
estimate $J$.
$\Delta^{\ddagger}G$ as defined by Eq. (5), in particular, does not correspond
to the macroscopic nucleation barrier $G^{*}$ of Eq. (14) because it is also
size-dependent (Table 4). Application of Eqs. (1) and (7), however, takes care
of producing a macroscopic quantity. It can also be seen that no lingering
size effects remain in nucleation rate estimates from our procedure because
the predicted TST nucleation rate $J_{\infty}^{\mathrm{TST}}$ is the same
(within error) for each system size (Table 4).
## V Conclusions
Enhanced sampling methods and TST provide a unified theoretical framework for
rate calculations. Whenever it is possible to converge a FES along a suitable
approximate reaction coordinate, accurate rates can be computed at little
extra cost. Here, we have used Ar droplet nucleation as an example. Global,
macroscopic, nucleation rates can be unambiguously obtained from small model
systems without the need to invoke a process-specific approximation such as
CNT, as long as an appropriate ensemble is simulated.
Only two ingredients are required in a TST-based nucleation rate calculation.
Calculation of the TST rate requires the _free energy barrier_ , which can be
obtained through an ever-increasing array of free energy methods. The quality
of the free energy barrier can subsequently be verified from a committor
analysis of the candidate transition state, which yields an accurate
_recrossing correction_ (i.e., transmission coefficient) as a byproduct.
Accurate, consistent, and reproducible nucleation rates are thus accessible
through a straightforward application of widely available, well-tested and
actively developed tools.
Although we have focused on the nucleation process, the highly generic nature
of the approach should make it straightforward to apply to any type of process
in chemistry, materials science, and biology.
###### Acknowledgements.
K.M.B. was funded as a junior postdoctoral fellow of the FWO (Research
Foundation – Flanders), Grant 12ZI420N. The computational resources and
services used in this work were provided by the HPC core facility CalcUA of
the Universiteit Antwerpen, and VSC (Flemish Supercomputer Center), funded by
the FWO and the Flemish Government. K.M.B. thanks Erik Neyts for his
continuous support.
## Author Declarations
### Conflict of interest
The author has no conflicts to disclose.
## Data availability
The data that support the findings of this study are available from the
corresponding author upon reasonable request. Sample inputs to reproduce the
reported simulations are deposited on PLUMED-NEST (www.plumed-nest.org), the
public repository of the PLUMED consortium [44], as plumID:21.009.[57]
## References
* Chkonia _et al._ [2009] G. Chkonia, J. Wölk, R. Strey, J. Wedekind, and D. Reguera, “Evaluating nucleation rates in direct simulations,” J. Chem. Phys. 130, 064505 (2009).
* Diemand _et al._ [2013] J. Diemand, R. Angélil, K. K. Tanaka, and H. Tanaka, “Large scale molecular dynamics simulations of homogeneous nucleation,” J. Chem. Phys. 139, 074309 (2013).
* Wedekind, Reguera, and Strey [2006] J. Wedekind, D. Reguera, and R. Strey, “Finite-size effects in simulations of nucleation,” J. Chem. Phys. 125, 214505 (2006).
* Salvalaglio _et al._ [2016] M. Salvalaglio, P. Tiwary, G. M. Maggioni, M. Mazzotti, and M. Parrinello, “Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations,” J. Chem. Phys. 145, 211925 (2016).
* Tsai, Smith, and Tiwary [2019] S.-T. Tsai, Z. Smith, and P. Tiwary, “Reaction coordinates and rate constants for liquid droplet nucleation: Quantifying the interplay between driving force and memory,” J. Chem. Phys. 151, 154106 (2019).
* Voter [1997] A. F. Voter, “A method for accelerating the molecular dynamics simulation of infrequent events,” J. Chem. Phys. 106, 4665–4677 (1997).
* Tiwary and Parrinello [2013] P. Tiwary and M. Parrinello, “From metadynamics to dynamics,” Phys. Rev. Lett. 111, 230602 (2013).
* Samanta _et al._ [2014] A. Samanta, M. E. Tuckerman, T.-Q. Yu, and W. E, “Microscopic mechanisms of equilibrium melting of a solid,” Science 346, 729–732 (2014).
* Gimondi and Salvalaglio [2017] I. Gimondi and M. Salvalaglio, “CO2 packing polymorphism under pressure: Mechanism and thermodynamics of the I–III polymorphic transition,” J. Chem. Phys. 147, 114502 (2017).
* Rogal, Schneider, and Tuckerman [2019] J. Rogal, E. Schneider, and M. E. Tuckerman, “Neural-network-based path collective variables for enhanced sampling of phase transformations,” Phys. Rev. Lett. 123, 245701 (2019).
* Quigley and Rodger [2009] D. Quigley and P. M. Rodger, “A metadynamics-based approach to sampling crystallisation events,” Mol. Simul. 35, 613–623 (2009).
* Piaggi, Valsson, and Parrinello [2017] P. M. Piaggi, O. Valsson, and M. Parrinello, “Enhancing entropy and enthalpy fluctuations to drive crystallization in atomistic simulations,” Phys. Rev. Lett. 119, 015701 (2017).
* Pipolo _et al._ [2017] S. Pipolo, M. Salanne, G. Ferlat, S. Klotz, A. M. Saitta, and F. Pietrucci, “Navigating at will on the water phase diagram,” Phys. Rev. Lett. 119, 245701 (2017).
* Zhang _et al._ [2019] Y.-Y. Zhang, H. Niu, G. Piccini, D. Mendels, and M. Parrinello, “Improving collective variables: The case of crystallization,” J. Chem. Phys. 150, 094509 (2019).
* Piaggi and Car [2020] P. M. Piaggi and R. Car, “Phase equilibrium of liquid water and hexagonal ice from enhanced sampling molecular dynamics simulations,” J. Chem. Phys. 152, 204116 (2020).
* Karmakar _et al._ [2021] T. Karmakar, M. Invernizzi, V. Rizzi, and M. Parrinello, “Collective variables for the study of crystallisation,” Mol. Phys. 40, e1893848 (2021).
* Salvalaglio _et al._ [2015] M. Salvalaglio, C. Perego, F. Giberti, M. Mazzotti, and M. Parrinello, “Molecular-dynamics simulations of urea nucleation from aqueous solution,” Proc. Natl. Acad. Sci. U.S.A. 112, E6–E14 (2015).
* Karmakar, Piaggi, and Parrinello [2019] T. Karmakar, P. M. Piaggi, and M. Parrinello, “Molecular dynamics simulations of crystal nucleation from solution at constant chemical potential,” J. Chem. Theory. Comput. 15, 6923–6930 (2019).
* Fukuhara _et al._ [2021] S. Fukuhara, K. M. Bal, E. C. Neyts, and Y. Shibuta, “Entropic and enthalpic factors determining the thermodynamics and kinetics of carbon segregation from transition metal nanoparticles,” Carbon 171, 806–813 (2021).
* Bal _et al._ [2020] K. M. Bal, S. Fukuhara, Y. Shibuta, and E. C. Neyts, “Free energy barriers from biased molecular dynamics simulations,” J. Chem. Phys. 153, 114118 (2020).
* Peters [2015] B. Peters, “Common features of extraordinary rate theories,” J. Phys. Chem. B 119, 6349–6356 (2015).
* Espinosa _et al._ [2016] J. R. Espinosa, C. Vega, C. Valeriani, and E. Sanz, “Seeding approach to crystal nucleation,” J. Chem. Phys. 144, 034501 (2016).
* Zimmermann _et al._ [2018] N. E. R. Zimmermann, B. Vorselaars, J. R. Espinosa, D. Quigley, W. R. Smith, E. Sanz, C. Vega, and B. Peters, “NaCl nucleation from brine in seeded simulations: Sources of uncertainty in rate estimates,” J. Chem. Phys. 148, 222838 (2018).
* Arjun and Bolhuis [2021] A. Arjun and P. G. Bolhuis, “Molecular understanding of homogeneous nucleation of CO2 hydrates using transition path sampling,” J. Phys. Chem. B 125, 338–349 (2021).
* Arjun and Bolhuis [2020] A. Arjun and P. G. Bolhuis, “Rate prediction for homogeneous nucleation of methane hydrate at moderate supersaturation using transition interface sampling,” J. Phys. Chem. B 124, 8099–8109 (2020).
* Menon _et al._ [2020] S. Menon, G. Díaz Leines, R. Drautz, and J. Rogal, “Role of pre-ordered liquid in the selection mechanism of crystal polymorphs during nucleation,” J. Chem. Phys. 153, 104508 (2020).
* Wang, Valeriani, and Frenkel [2009] Z.-J. Wang, C. Valeriani, and D. Frenkel, “Homogeneous bubble nucleation driven by local hot spots: A molecular dynamics study,” J. Phys. Chem. B 113, 3776–3784 (2009).
* Haji-Akbari and Debenedetti [2015] A. Haji-Akbari and P. G. Debenedetti, “Direct calculation of ice homogeneous nucleation rate for a molecular model of water,” Proc. Natl. Acad. Sci. U.S.A. 112, 10582–10588 (2015).
* Sosso _et al._ [2016] G. C. Sosso, T. Li, D. Donadio, G. A. Tribello, and A. Michaelides, “Microscopic mechanism and kinetics of ice formation at complex interfaces: Zooming in on kaolinite,” J. Phys. Chem. Lett. 7, 2350–2355 (2016).
* Jiang _et al._ [2018] H. Jiang, A. Haji-Akbari, P. G. Debenedetti, and A. Z. Panagiotopoulos, “Forward flux sampling calculation of homogeneous nucleation rates from aqueous NaCl solutions,” J. Chem. Phys. 148, 044505 (2018).
* Blow, Quigley, and Sosso [2021] K. E. Blow, D. Quigley, and G. C. Sosso, “The seven deadly sins: when computing crystal nucleation rates, the devil is in the details,” (2021), arXiv:2104.13104 .
* Diemand _et al._ [2014] J. Diemand, R. Angélil, K. K. Tanaka, and H. Tanaka, “Direct simulations of homogeneous bubble nucleation: Agreement with classical nucleation theory and no local hot spots,” Phys. Rev. E 90, 052407 (2014).
* Cheng and Ceriotti [2017] B. Cheng and M. Ceriotti, “Bridging the gap between atomistic and macroscopic models of homogeneous nucleation,” J. Chem. Phys. 146, 034106 (2017).
* Cheng, Dellago, and Ceriotti [2018] B. Cheng, C. Dellago, and M. Ceriotti, “Theoretical prediction of the homogeneous ice nucleation rate: disentangling thermodynamics and kinetics,” Phys. Chem. Chem. Phys. 20, 28732–28740 (2018).
* Schenter, Kathmann, and Garrett [1999] G. K. Schenter, S. M. Kathmann, and B. C. Garrett, “Dynamical nucleation theory: A new molecular approach to vapor-liquid nucleation,” Phys. Rev. Lett. 82, 3484–3487 (1999).
* Auer and Frenkel [2001] S. Auer and D. Frenkel, “Prediction of absolute crystal-nucleation rate in hard-sphere colloids,” Nature 409, 1020–1023 (2001).
* Auer and Frenkel [2004] S. Auer and D. Frenkel, “Numerical prediction of absolute crystallization rates in hard-sphere colloids,” J. Chem. Phys. 120, 3015–3029 (2004).
* ten Wolde and Frenkel [1998] P. R. ten Wolde and D. Frenkel, “Computer simulation study of gas–liquid nucleation in a Lennard–Jones system,” J. Chem. Phys. 109, 9901–9918 (1998).
* Hartmann and Schütte [2007] C. Hartmann and C. Schütte, “Comment on two distinct notions of free energy,” Phys. D 228, 59–63 (2007).
* Geissler, Dellago, and Chandler [1999] P. L. Geissler, C. Dellago, and D. Chandler, “Kinetic pathways of ion pair dissociation in water,” J. Phys. Chem. B 103, 3706–3710 (1999).
* Kramers [1940] H. Kramers, “Brownian motion in a field of force and the diffusion model of chemical reactions,” Physica 7, 284–304 (1940).
* Plimpton [1995] S. Plimpton, “Fast parallel algorithms for short-range molecular dynamics,” J. Comput. Phys. 117, 1–19 (1995).
* Tribello _et al._ [2014] G. A. Tribello, M. Bonomi, D. Branduardi, C. Camilloni, and G. Bussi, “PLUMED 2: New feathers for an old bird,” Comput. Phys. Commun. 185, 604–613 (2014).
* The PLUMED consortium [2019] The PLUMED consortium, “Promoting transparency and reproducibility in enhanced molecular simulations,” Nat. Methods 16, 670–673 (2019).
* Bussi and Parrinello [2007] G. Bussi and M. Parrinello, “Accurate sampling using Langevin dynamics,” Phys. Rev. E 75, 056707 (2007).
* Bussi, Donadio, and Parrinello [2007] G. Bussi, D. Donadio, and M. Parrinello, “Canonical sampling through velocity rescaling,” J. Chem. Phys. 126, 014101 (2007).
* Bussi and Parrinello [2008] G. Bussi and M. Parrinello, “Stochastic thermostats: comparison of local and global schemes,” Comput. Phys. Commun. 179, 26–29 (2008).
* Martyna, Tobias, and Klein [1994] G. J. Martyna, D. J. Tobias, and M. L. Klein, “Constant pressure molecular dynamics algorithms,” J. Chem. Phys. 101, 4177–4189 (1994).
* Laio and Parrinello [2002] A. Laio and M. Parrinello, “Escaping free-energy minima,” Proc. Natl. Acad. Sci. U.S.A. 99, 12562–12566 (2002).
* Barducci, Bussi, and Parrinello [2008] A. Barducci, G. Bussi, and M. Parrinello, “Well-tempered metadynamics: A smoothly converging and tunable free-energy method,” Phys. Rev. Lett. 100, 020603 (2008).
* Bal [2021a] K. M. Bal, “Reweighted Jarzynski sampling: Acceleration of rare events and free energy calculation with a bias potential learned from nonequilibrium work,” (2021a), arXiv:2105.03483 .
* Branduardi, Bussi, and Parrinello [2012] D. Branduardi, G. Bussi, and M. Parrinello, “Metadynamics with adaptive Gaussians,” J. Chem. Theory Comput. 8, 2247–2254 (2012).
* Tiwary and Parrinello [2015] P. Tiwary and M. Parrinello, “A time-independent free energy estimator for metadynamics,” J. Phys. Chem. B 119, 736–742 (2015).
* Khan, Dickson, and Peters [2020] S. A. Khan, B. M. Dickson, and B. Peters, “How fluxional reactants limit the accuracy/efficiency of infrequent metadynamics,” J. Chem. Phys 153, 054125 (2020).
* Mullen, Shea, and Peters [2014] R. G. Mullen, J.-E. Shea, and B. Peters, “An existence test for dividing surfaces without recrossing,” J. Chem. Phys. 140, 041104 (2014).
* Yi and Rutledge [2012] P. Yi and G. C. Rutledge, “Molecular origins of homogeneous crystal nucleation,” Annu. Rev. Chem. Biomol. Eng. 3, 157–182 (2012).
* Bal [2021b] K. M. Bal, “Nucleation rates from small scale atomistic simulations and transition state theory,” https://www.plumed-nest.org/eggs/21/009 (2021b), PLUMED-NEST, plumID:21.009.
# Benchmarking Dust Emission Models in M101
Jérémy Chastenet, Karin Sandstrom, and I-Da Chiang (江宜達): Center for Astrophysics and Space Sciences, Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
Brandon S. Hensley and Bruce T. Draine: Princeton University Observatory, Peyton Hall, Princeton, NJ 08544-1001, USA
Karl D. Gordon: Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA; Sterrenkundig Observatorium, Universiteit Gent, Gent, Belgium
Eric W. Koch: University of Alberta, Department of Physics, 4-183 CCIS, Edmonton AB T6G 2E1, Canada
Adam K. Leroy and Dyas Utomo: Department of Astronomy, The Ohio State University, 4055 McPherson Laboratory, 140 West 18th Ave, Columbus, OH 43210, USA
Thomas G. Williams: Max Planck Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany
###### Abstract
We present a comparative study of four physical dust models and two single-
temperature modified blackbody models by fitting them to the resolved WISE,
Spitzer, and Herschel photometry of M101 (NGC 5457). Using identical data and
a grid-based fitting technique, we compare the resulting dust and radiation
field properties derived from the models. We find that the dust mass yielded
by the different models can vary by up to a factor of 3 (a factor of 1.4
between physical models only), although the fits have similar quality. Despite
differences in their definition of the carriers of the mid-IR aromatic
features, all physical models show the same spatial variations for the
abundance of that grain population. Using the well determined metallicity
gradient in M101 and resolved gas maps, we calculate an approximate upper
limit on the dust mass as a function of radius. All physical dust models are
found to exceed this maximum estimate over some range of galactocentric radii.
We show that renormalizing the models to match the same Milky Way high
latitude cirrus spectrum and abundance constraints can reduce the dust mass
differences between models and bring the total dust mass below the maximum
estimate at all radii.
Keywords: Spectral energy distribution (2129); Interstellar dust (836); Dust continuum emission (412); Gas-to-dust ratio (638); Metallicity (1031); M101 (NGC 5457)

Journal: AJ
## 1 Introduction
Dust grains play key roles in processes that shape the interstellar medium
(ISM) and galaxy evolution. They release photo-electrons that participate in
heating gas (e.g. Wolfire et al., 1995; Weingartner & Draine, 2001a; Croxall
et al., 2012), they shield dense molecular clouds from stellar UV radiation
and aid their collapse (e.g. Fumagalli et al., 2010; Byrne et al., 2019), they
catalyze a number of chemical reactions, and offer surface area for the
production of H2 (e.g. Bron et al., 2014; Castellanos et al., 2018; Thi et
al., 2020, see also a review by Wakelam et al. 2017). It is therefore
critically important to understand dust properties and abundance, and how dust
affects these processes.
One main way to infer dust properties in external galaxies is to interpret
infrared (IR) spectral energy distributions (SEDs) with the aid of dust
models. The near-to-mid-IR part of the spectrum is dominated by the emission
of stochastically heated grains that do not achieve a steady-state equilibrium
with the incident radiation field. At longer wavelengths, the emission is
almost entirely due to grains in the thermal equilibrium, with a large enough
radius to be constantly receiving and emitting photons. In this regime, the
steady-state grain temperature is set by the strength of the incident
radiation field.
Modified blackbody models are a convenient parametric representation of the
emission from large grains in thermal equilibrium. As such, they provide good
fits to the far-IR SED and yield satisfactory dust masses if correctly
calibrated (e.g. Bianchi, 2013), since these grains contain most of the dust
mass. Large grains in thermal equilibrium are reasonably well described by a
single temperature, in which case their emission can be represented by
blackbody radiation, $B_{\nu}$, modified by an effective grain opacity,
$\kappa_{\nu}(\lambda)$, so that the grain emission
$I_{\nu}\propto\kappa_{\nu}(\lambda)\ B_{\nu}(\lambda,T)$. The opacity as a
function of wavelength is often described with a power-law with spectral
index, $\beta$, such that $\kappa_{\nu}=\kappa_{0}(\nu/\nu_{0})^{\beta}$.
Several variations to the modified blackbody model have been used to fit the
far-IR SED, for example multiple dust populations having different
temperatures and $\beta$, or a different functional form for $\kappa_{\nu}$
such as a broken power-law.
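The parametric form described above, $I_{\nu}\propto\kappa_{0}(\nu/\nu_{0})^{\beta}\,B_{\nu}(\nu,T)$, can be sketched in a few lines. This is a minimal illustration, not any of the fitted models in this paper; the reference wavelength of 160 $\mu$m and the unit opacity are arbitrary assumed choices.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_nu(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def modified_blackbody(nu, T, beta, kappa0=1.0, nu0=None):
    """I_nu = kappa0 * (nu/nu0)**beta * B_nu(nu, T); nu0 defaults to the
    frequency of 160 um (an arbitrary reference choice for this sketch)."""
    if nu0 is None:
        nu0 = C / 160e-6
    return kappa0 * (nu / nu0)**beta * planck_nu(nu, T)

# Example: 20 K dust with beta = 2, evaluated at 250 um
i_250 = modified_blackbody(C / 250e-6, 20.0, 2.0)
```

Fitting such a model to far-IR photometry amounts to adjusting $T$, $\beta$, and the overall normalization; variations with two temperatures or a broken power-law emissivity add parameters to this same template.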
Physical dust models aim to reproduce the IR emission, extinction, and
depletions, among other observations, with a self-consistent description of
dust properties. Building these models requires specifying grain sizes,
shapes, and chemical composition, which lead to the optical properties and
heat capacities. These grain populations are then illuminated by a radiation
field with a specified intensity and spectrum. Once the radiation field is
modeled, one can compare the predicted and observed dust emission. Dust
extinction measured towards specific lines-of-sight (not high $A_{V}$) helps
constrain the size distribution of grains, their composition, and total mass
relative to H (e.g. Weingartner & Draine, 2001b). Depletion measurements
provide important limits on the elemental abundance locked in dust grains
(e.g. Jenkins, 2009; Tchernyshyov et al., 2015). Experimentally, many studies
rely on material thought to be ISM dust analogs to provide laboratory
constraints on optical properties and heat capacities (e.g. Richey et al.,
2013; Demyk et al., 2017; Mutschke & Mohr, 2019).
Both physical and modified blackbody models are almost always calibrated in
the Milky Way (MW), where the relevant observables of dust are well-
constrained. This includes measurements of the diffuse emission and extinction
per H column of the ISM at high Galactic latitudes, also referred to as the
Milky Way cirrus (e.g. Compiègne et al., 2011; Guillet et al., 2018). The
cirrus is also a unique place where the interstellar radiation field that
heats dust grains, a necessary component to constrain models, can be
estimated. The radiation field measured by Mathis et al. (1983) at the
galactocentric distance $D_{\rm G}=10~{}$kpc is generally used to describe the
starlight heating dust grains.
The stringency with which each model follows the elemental abundances locked
in dust grains from depletion studies varies. Some models use them as strict
constraints (e.g. Gordon et al., 2014). On the other hand, most physical
models allow more flexibility in the mass of metals locked up in dust grains,
to more closely match other, better constrained observables. For example, the
Zubko et al. (2004) dust models follow the depletion constraints strictly,
while the Weingartner & Draine (2001c) models can require up to $\sim 30$%
more silicon (assuming Si/H = 36 ppm in the model) than observed in the cold
neutral medium. However, the latter reproduce the observed extinction to a
better degree than the former.
With the increasing number of observational constraints on dust models, the
complexity of physical dust models has grown. One of the earliest dust models
described grains as a single mixture with a power-law size distribution
(Mathis et al., 1977). Later on, very small grains known as the Polycyclic
Aromatic Hydrocarbons (PAHs) were suggested as responsible for the mid-IR
emission features (Leger & Puget, 1984; Allamandola et al., 1985), and
included in dust models. While some dust models consider them to be defined by
their size (in the model description; e.g. Draine & Li, 2007; Compiègne et
al., 2011; Galliano et al., 2011), other models identify the mid-IR features
carriers in the form of aromatic-rich mantles onto amorphous grains (e.g.
Jones et al., 2013, and their hydrogenated amorphous hydrocarbons component).
The precise nature of large grains is also uncertain. The presence of
amorphous silicate material in grains was demonstrated early on by conspicuous
absorption features (Mathis et al., 1977; Kemper et al., 2004) and included in
dust models. But their exact composition remains an active research topic
(e.g. Zeegers et al., 2019). While some models have a strong focus towards
reproducing the observations (e.g. Draine & Li, 2007, “astrosilicates”), other
models are closely tied to new laboratory data (e.g. Jones et al., 2013,
olivine and pyroxene). Finally, with the growing amount of far-IR polarization
data, new models emerge and take into account this important grain property
(e.g Guillet et al., 2018; Hensley & Draine, 2021). Future missions and
instruments are being developed to focus on polarization, and will bring new
constraints to dust properties (e.g. SOFIA/HAWC+: Harper et al., 2018).
There are now several physical dust models available, all of which differ in
some, sometimes small, ways. However, these small differences in modeling can
lead to significant differences in the derived dust properties, as many
studies of nearby galaxies have shown. For instance, Gordon et al. (2014) and
Chiang et al. (2018) have used a number of blackbody variations (e.g. simple
power-law emissivity, two temperatures, broken-emissivity) to model the far-IR
SEDs of the Magellanic Clouds and M101, respectively. They both found that the
dust mass derived by several blackbody variations (namely, simple power-law
emissivity, two temperature) violates the available elemental content, making
these approaches unlikely to be valid descriptions of dust grain emission. In
the Magellanic Clouds, Chastenet et al. (2017) found that dust masses can vary
by almost an order of magnitude depending on the physical model chosen. In the
Large Magellanic Cloud, Paradis et al. (2019) found that not all models
require both neutral and ionized PAHs for a good fit at short wavelengths.
Other discrepancies may arise from simply using a different fitting approach,
where uncertainties are treated differently, or with a different data-set. For
instance, Sandstrom et al. (2010) found an increased abundance of PAHs in
dense gas regions of the Small Magellanic Clouds. This behavior was confirmed,
but with different PAH fractions in Chastenet et al. (2019), by using the same
model with a different wavelength sampling. Most of the uncertainties in
comparing the results of the studies mentioned above arise from the wavelength
coverage of their data-set, the definition of radiation field(s) they use or
simply the dust model they choose.
To reach coherent results on dust properties (dust-to-gas ratio, dust-to-metal
ratio, fraction of PAHs, etc.), we need to be able to compare between model
results. In this paper, we carry out a rigorous comparison among some of the
widely used dust models available in the literature by fitting the IR emission
of M101 (NGC 5457) in a strictly identical way for all models. We therefore
reduce the differences to those due to the physical modeling choices only. We
compare the models from Compiègne et al. (2011), Draine & Li (2007), and
THEMIS (overview in Jones et al., 2017), as well as Hensley & Draine (2021).
We also include two modified blackbody models previously used in the
literature: a simple-emissivity, and a broken-emissivity modified blackbody.
We perform our analysis on the galaxy M101 (NGC 5457), which has multiple
advantages. First, its distance ($\sim 6.7~{}$Mpc; Tully et al., 2009) and
low inclination allow for well-resolved photometry, even in the far-IR.
Second, the available IR photometry and spectroscopy from recent space
telescopes with high sensitivity are ideal to constrain the dust models.
Finally, metallicity gradients of M101 also offer an independent route to put
an upper limit on its dust content. The galaxy has detailed measurements of
metallicity from auroral lines (e.g. Croxall et al., 2016; Berg et al., 2020),
which puts good constraints on the gas phase metal abundance. It has also been
extensively targeted for deep H I and CO observations (Walter et al., 2008;
Leroy et al., 2009), allowing us to account for all the gas (modulo any
limitations in the ability of the CO and 21 cm lines to trace the gas).
Combining these, we can evaluate the impact of model choice on derived dust
properties across a wide range of environments with well-understood metal and
gas content.
In Section 2 we present the photometry used to sample the IR emission of M101.
Section 3 describes the physical and modified blackbody dust models and
Section 4 the technical aspects of the emission fitting technique. The results
of these fits are presented in Section 5 with discussions of the differences
in dust properties yielded by the dust models. Finally in Section 6 we analyze
the calibration differences in the dust models themselves.
## 2 Data
We present our adopted distance and orientation parameters for M101 in Table
1.
Table 1: M101 (NGC 5457) properties.

Property | Value | Reference
---|---|---
Right ascension | 14:03:12.6 | Makarov et al. (2014)^a
Declination | +54:20:57 | Makarov et al. (2014)^a
Inclination | 30° | de Blok et al. (2008)^b
Position angle | 38° | Sofue et al. (1999)
$\rm r_{25}$ | $11.4^{\prime}$ | Makarov et al. (2014)^a
Distance | 6.7 Mpc | Tully et al. (2009)

^a From the HyperLEDA database; http://leda.univ-lyon1.fr/
^b Note the difference with the HyperLEDA database value (16°).
In order to derive the dust properties in M101, we perform fits to its IR SED,
comprised of measurements at 16 different photometric bands:
* $\bullet$
3.4, 4.6, 12, and 22 $\mu$m from the Wide-field Infrared Survey Explorer
(WISE; Wright et al., 2010). We use the maps delivered by Leroy et al. (2019)
in the $z=0$ Multiwavelength Galaxy Survey, already convolved to a
$15^{\prime\prime}$-FWHM Gaussian resolution;
* $\bullet$
3.6, 4.5, 5.8, and 8.0 $\mu$m from the Infrared Array Camera instrument (IRAC;
Fazio et al., 2004), and 24 and 70 $\mu$m from the Multiband Imaging
Photometer for Spitzer instrument (MIPS; Rieke et al., 2004), both on-board
the Spitzer Space Telescope (Werner et al., 2004). (We do not use MIPS 160,
e.g. as opposed to Aniano et al. 2020, to gain back some resolution, since the
MIPS 160 PSF is $\sim 39^{\prime\prime}$, without losing wavelength coverage;
this does not lead to major differences.) We used the product delivery DR5
(https://irsa.ipac.caltech.edu/data/SPITZER/LVL/LVL_DR5_v5.pdf) from the
Local Volume Legacy survey for IRAC and MIPS maps (Dale et al., 2009);
* $\bullet$
70, 100, and 160 $\mu$m from the Photoconductor Array Camera and Spectrometer
instrument (PACS; Poglitsch et al., 2010) and 250, 350, and 500 $\mu$m from
the Spectral and Photometric Imaging Receiver instrument (SPIRE; Griffin et
al., 2010), both on-board the Herschel Space Observatory (Pilbratt et al.,
2010). We downloaded the scans from the KINGFISH program (Kennicutt et al.,
2011) in the Herschel Science Archive, and processed them from L0 to L1 with
HIPE (v. 15; PACS Calibration v. 77; SPIRE Calibration v. 14.3; Ott, 2010) and
Scanamorphos (v. 25; Roussel, 2013).
### 2.1 Data Processing
We correct the images for extended source calibration appropriately and
convert them to the same units in the following way. We apply extended-source
correction factors for the IRAC images. These are multiplicative corrections
of 0.91, 0.94, 0.695, and 0.74 at 3.6, 4.5, 5.8 and 8.0 $\mu$m, respectively,
as suggested by the IRAC Instrument Handbook
(https://irsa.ipac.caltech.edu/data/SPITZER/docs/irac/iracinstrumenthandbook/29/).
For SPIRE, we adopt calibration factors of 90.646, 51.181, and 23.580
MJy/sr/(Jy/beam) at 250, 350, and 500 $\mu$m, respectively, as suggested by
the SPIRE Handbook
(http://herschel.esac.esa.int/Docs/SPIRE/spire_handbook.pdf). These
factors convert the data to MJy/sr from Jy/beam and also correct from point
source to extended source calibration. The PACS data in units of Jy/pixel are
converted to units of MJy/sr using the image pixel size (contained in the
headers).
We then process all the maps following the same steps, as described below. We
remove a background in each image at their native resolution, by fitting a 2-D
plane using regions identified not to have significant galaxy emission. We
find these regions with the following procedure: 1) We use the $\rm r_{25}$
radius ($\rm r_{25}$ $\sim 12^{\prime}$) to first mask the galaxy (this covers
the visible SPIRE 500 emission completely); 2) We measure the median of the
remaining pixels and the standard deviation, and clip all of those that are 3
standard deviations above that median; 3) We iterate that clipping until the
medians at iterations $i$ and $i+1$ differ by less than 1%. This clipping is
only done for the purpose of measuring a background level, and we do not keep
this mask applied to the data for the following steps. Table 2 lists the final
standard deviations of the background pixels in each band.
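The iterative clipping described above can be sketched in Python (a minimal sketch; the function name and the synthetic inputs are ours, not from the actual pipeline):

```python
import numpy as np

def clip_background(image, galaxy_mask, nsigma=3.0, tol=0.01, max_iter=50):
    """Iteratively clip pixels more than `nsigma` standard deviations above
    the median, outside the galaxy mask, until the median changes by less
    than `tol` (1%) between iterations (steps 2-3 of the procedure)."""
    pix = image[~galaxy_mask]
    pix = pix[np.isfinite(pix)]
    median = np.median(pix)
    for _ in range(max_iter):
        clipped = pix[pix <= median + nsigma * np.std(pix)]
        new_median = np.median(clipped)
        if abs(new_median - median) <= tol * max(abs(median), 1e-12):
            return clipped, new_median
        pix, median = clipped, new_median
    return pix, median
```

The surviving pixels are only used to estimate the background level; as stated above, the clipping mask is not propagated to the later steps.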
Table 2: Band-related details. References to the $\sigma_{\rm cal}$ and $\sigma_{\rm sta}$ coefficients: WISE: http://wise2.ipac.caltech.edu/docs/release/prelim/expsup/sec4_3g.html; IRAC: Reach et al. (2005) and https://irsa.ipac.caltech.edu/data/SPITZER/docs/irac/iracinstrumenthandbook/29/; MIPS: Engelbracht et al. (2007) and Gordon et al. (2007); PACS: Müller et al. (2011) and Balog et al. (2013); SPIRE: Griffin et al. (2010) and Bendo et al. (2013). Band | $\sigma_{\rm bkg}$aaThe background standard deviation in the pixels considered to measure the background covariance matrix, once the maps are background-removed, convolved and projected. | $m_{\rm cal}$bb$m_{\rm cal}$ is the error on the calibration used in each instrument. The large errors of the IRAC bands are from the extended source corrections, which we consider to be correlated calibration errors. | $m_{\rm sta}$cc$m_{\rm sta}$ measures the stability of an instrument, i.e. the scatter when measuring the same signal.
---|---|---|---
| 10-1 MJy/sr | % | %
WISE 3.4 | 0.170 | 2.4 | 10.0
IRAC 3.6 | 0.149 | 9.0 | 1.5
IRAC 4.5 | 0.107 | 6.0 | 1.5
WISE 4.6 | 0.095 | 2.8 | 10.0
IRAC 5.8 | 0.109 | 30 | 1.5
IRAC 8.0 | 0.085 | 26 | 1.5
WISE 12 | 0.114 | 4.5 | 10.0
WISE 22 | 0.055 | 5.7 | 10.0
MIPS 24 | 0.049 | 4.0 | 0.4
MIPS 70 | 5.24 | 10.0 | 4.5
PACS 70 | 10.7 | 10.0 | 2.0
PACS 100 | 10.1 | 10.0 | 2.0
PACS 160 | 7.95 | 10.0 | 2.0
SPIRE 250 | 3.96 | 8.0 | 1.5
SPIRE 350 | 2.73 | 8.0 | 1.5
SPIRE 500 | 1.82 | 8.0 | 1.5
Each map is then convolved to the SPIRE 500 PSF (FWHM $\sim
36^{\prime\prime}$), using convolution kernels from Aniano et al. (2011).
Finally, all the maps are aligned and projected onto the astrometric grid of
the SPIRE 500 image. In Figure 1, we show the 16 bands that we use to model
the dust emission from 3.4 to 500 $\mu$m.
The final pixel size ($9^{\prime\prime}$) oversamples the SPIRE 500 beam size,
to which all data are convolved. We take this into account by correcting by
$\sqrt{N_{\rm pix}/N_{\rm beam}}$ when calculating uncertainties on quantities
that use the values of multiple pixels.
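For a Gaussian beam, this correction factor can be computed as follows (a sketch under our assumptions; the function name is ours):

```python
import numpy as np

pix_size = 9.0   # final pixel size (arcsec)
fwhm = 36.0      # SPIRE 500 PSF FWHM (arcsec)

# Solid angle of a Gaussian beam, and the number of pixels per beam
beam_area = np.pi * fwhm**2 / (4.0 * np.log(2.0))
pix_per_beam = beam_area / pix_size**2    # ~18 pixels per beam

def inflate_uncertainty(sigma, n_pix):
    """Correct an uncertainty computed from n_pix correlated pixels: the
    number of independent samples is n_beam = n_pix / pix_per_beam, so the
    uncertainty is multiplied by sqrt(n_pix / n_beam)."""
    n_beam = n_pix / pix_per_beam
    return sigma * np.sqrt(n_pix / n_beam)
```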
Figure 1: Emission of M101 (NGC 5457) from 3.4 to 500 $\mu$m, in all the bands
used in this study. The maps show the final version of each map: after
extended source correction (IRAC and SPIRE bands), unit conversion (PACS and
SPIRE bands), background removal, convolution to SPIRE 500 PSF ($\sim
36^{\prime\prime}$) and regridding. On the MIPS 70 panel, we also show: the
location of the pixel for the fit example (white cross; Figure 4), the region
used for the IRS measurement (white rectangle; Section 5.6), and the contours
for the $3\sigma$ detection threshold.
### 2.2 Ancillary data: neutral and molecular gas
In Section 6, we use gas surface density and metallicity measurements in
addition to IR SEDs. We use the same data as Chiang et al. (2018) to get the
total gas surface density, $\Sigma_{\rm gas}$, combining gas surface density
maps from H I 21 cm and ${\rm CO~{}(2\to 1)}$ emission converted to H2. The H
I 21 cm line data is from the THINGS collaboration (The H I Nearby Galaxy
Survey; Walter et al., 2008). The map is converted from integrated intensity
to surface density assuming optically thin 21 cm emission, following Walter et
al. (2008, Equ. 1 and 5, and multiplying by the atomic mass of hydrogen).
We use ${\rm CO}~{}(2\to 1)$ emission from HERACLES (Leroy et al., 2009), and
convert it to H2 surface density, assuming a line ratio $R_{21}=0.7$ and an
$\alpha_{\rm CO}$ conversion factor, in units of M⊙/pc2 (K km/s$)^{-1}$,
following two prescriptions (both including helium): the first one is
representative of the MW CO-to-H2 conversion (a standard $X_{\rm CO}=2\times
10^{20}~{\rm cm^{-2}}$ (K km/s)-1 is assumed for the column density conversion
factor; the mass of helium and heavier elements has been accounted for in
$\alpha_{\rm CO}$),
$\alpha_{\rm CO}^{\rm MW}=4.35,$ (1)
and the second one follows Bolatto, Wolfire, & Leroy (2013),
$\alpha_{\rm CO}^{\rm BWL13}=2.9\ \text{e}^{(0.4/Z^{\prime})}\left(\Sigma^{100}_{\rm Total}\right)^{-\gamma},\quad\textrm{with }\gamma=0.5\textrm{ if }\Sigma^{100}_{\rm Total}\geq 1,\ \gamma=0\textrm{ otherwise}$ (2)
where $Z^{\prime}$ is the metallicity relative to solar metallicity
($Z^{\prime}=10^{\left(\rm(12+log(O/H))-(12+log(O/H)_{\odot})\right)}$, with
12+log(O/H)${}_{\odot}=8.69$), and $\Sigma^{100}_{\rm Total}$ is the total
surface density map in units of 100 M⊙/pc2. The gas maps are convolved to a
$41^{\prime\prime}$ Gaussian PSF (Chiang et al., 2020); since we plot the
radial profile of the gas maps by averaging in growing annuli, we find that
the minor differences between the dust and gas maps resolutions are
negligible. We measure an uncertainty of $\sim~{}0.3$ K km/s for the ${\rm
CO~{}(2\to 1)}$ map, and an uncertainty of $\sim~{}0.4~{}$M⊙/pc2 for the H I
21 cm map (the rms per channel is $\sim 0.46~{}$mJy/beam; assuming
$\sigma_{\rm z,~{}gas}=11~{}$km/s (Leroy et al., 2008), the uncertainty is
0.96 M⊙/pc2, and the 5% calibration error (Walter et al., 2008) becomes
significant in the dense regions). We build a radial profile by averaging
$\Sigma_{\rm gas}$ in growing annuli from the center out to $\rm r_{25}$.
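The two prescriptions above (Equ. 1 and 2) and the CO-to-H2 conversion can be sketched as follows (function names and the scalar interface are our assumptions, not the pipeline's):

```python
import numpy as np

ALPHA_CO_MW = 4.35  # Milky Way value (Equ. 1), in Msun/pc^2 (K km/s)^-1

def alpha_co_bwl13(z_prime, sigma_total_100):
    """Bolatto, Wolfire & Leroy (2013) conversion factor (Equ. 2).
    z_prime: metallicity relative to solar; sigma_total_100: total surface
    density in units of 100 Msun/pc^2."""
    gamma = 0.5 if sigma_total_100 >= 1.0 else 0.0
    return 2.9 * np.exp(0.4 / z_prime) * sigma_total_100 ** (-gamma)

def sigma_h2(i_co21, alpha_co, r21=0.7):
    """H2 surface density from CO(2-1) integrated intensity (K km/s),
    assuming a line ratio R21 = 0.7."""
    return alpha_co * i_co21 / r21
```

Note that at solar metallicity and $\Sigma^{100}_{\rm Total}=1$, Equ. 2 gives $2.9\,e^{0.4}\approx 4.33$, close to the MW value of Equ. 1.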
### 2.3 Ancillary data: metallicity
Metallicity measurements $12+{\rm log_{10}(O/H)}$ are taken from the CHAOS
survey (Berg et al., 2020), derived from 72 H II regions of M101. In
particular, we use their fitted metallicity gradient
$12+{\rm log_{10}(O/H)}=8.78\pm 0.04-0.75\pm 0.07\ (r/{\rm r_{25}})$ (3)
to convert it to a radial profile of metal surface density (see Section 6.1).
We adjusted the slope given by Berg et al. (2020, 0.90) to account for the
different $\rm r_{25}$ used here. Berg et al. (2020) find that the scatter in
individual region metallicities around the measured gradient is $\sim 10$%.
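As a small worked example, the gradient of Equ. 3 and the $Z^{\prime}$ conversion of Section 2.2 can be evaluated as follows (central values only; uncertainties are not propagated in this sketch):

```python
def log_oh(r_over_r25):
    """12 + log10(O/H) from the CHAOS gradient (Equ. 3, central values)."""
    return 8.78 - 0.75 * r_over_r25

def z_prime(r_over_r25, log_oh_sun=8.69):
    """Metallicity relative to solar, with 12 + log(O/H)_sun = 8.69."""
    return 10.0 ** (log_oh(r_over_r25) - log_oh_sun)
```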
Tracing the total metal mass from nebular emission lines of oxygen is subject
to systematic uncertainties. As pointed out by Jenkins (2009), oxygen shows
unexplained depletion patterns, compared to how much is expected to be locked
in the solid phase. However, Peimbert & Peimbert (2010) measured the
depletions of heavy elements in Galactic and extra-galactic H II regions, and
found minimal depletion of oxygen ($\lesssim 0.1~{}$dex). Other elements may
be of use for tracing metallicity (as is the case in Berg et al., 2020),
but given the widespread use of O/H for extragalactic studies, we use the
typical $12+{\rm log(O/H})$ tracer.
## 3 Dust emission models
The goal of this study is to investigate the differences in dust properties
derived using physical and modified blackbody models. In this Section, we
present the characteristics of each model. The parameters we use to fit the IR
SEDs are presented in Table 3. A summary table of the following information is
presented in Appendix A.
### 3.1 Modified blackbodies
We use single-temperature modified blackbody models. For these two models,
we only fit photometry from 100 to 500 $\mu$m. At $\lambda<100~{}\mu$m,
stochastically heated grains contribute to the emission and are not well
modeled by a modified blackbody. Assuming the optically thin case, the dust
emission $I_{\nu}$ is described as
$I_{\nu}(\lambda)=\kappa_{\nu}(\lambda)\ \Sigma_{\rm dust}\
B_{\nu}(\lambda,T_{\rm dust}),$ (4)
where $B_{\nu}(\lambda,T_{\rm dust})$ is the Planck function at wavelength
$\lambda$ (in MJy/sr), $T_{\rm dust}$ the dust temperature, $\Sigma_{\rm
dust}$ the dust surface density, and $\kappa_{\nu}$ the opacity. We use the
simple power-law opacity (Equ. 5) and a broken power-law opacity (Equ. 6),
which we normalize at $\lambda=160~{}\mu$m. The broken power-law model
presented the best results in terms of quality of fits in the study by Chiang
et al. (2018) and yielded physically reasonable $\Sigma_{\rm dust}$ and
$T_{\rm dust}$ values in their study.
We follow Gordon et al. (2014) and Chiang et al. (2018) and calibrate the
opacity values for each model. We fit the modified blackbody model to the dust
emission per H column of the MW high-latitude cirrus described in Chiang et
al. (2018) to derive the opacity. The abundance constraints are based on a
depletion strength factor typical for lines-of-sight with similar $N_{\rm H}$
to the MW cirrus, e.g., $F_{*}=0.36$ (Jenkins, 2009). This sets the allowed
dust mass per H atom. By fitting the temperature for the MW cirrus we can then
derive the opacity for each modified blackbody model. More details on the
opacity and comparison to the physical models can be found in Section 3.2.
#### 3.1.1 Simple-Emissivity (SE)
In this case, the opacity is described as a single power-law:
$\kappa_{\nu}=\kappa_{\nu_{0}}\left(\frac{\nu}{\nu_{0}}\right)^{\beta},$ (5)
where $\beta$ is the spectral index. For all blackbody models, we fix
$\lambda_{0}=160~{}\mu$m, and $\nu_{0}=c/\lambda_{0}$. The free parameters for
this model are then the dust surface density, $\Sigma_{\rm dust}$, the dust
temperature, $T_{\rm dust}$, and the dust spectral index, $\beta$. In this
model the calibrated $\kappa_{\nu_{0}}=10.10\pm 1.42$ cm2/g (Chiang et al.,
2018), from fitting the high latitude cirrus as described above.
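A minimal numerical sketch of the simple-emissivity model (Equ. 4-5) follows; the function names and the Msun/pc$^2$ to g/cm$^2$ unit conversion are ours, and this is not the fitting code used in the paper:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_mjy_sr(lam_um, t_dust):
    """Planck function B_nu at wavelength lam_um (micron), in MJy/sr."""
    nu = C / (lam_um * 1e-6)
    b_si = 2.0 * H * nu**3 / C**2 / (np.exp(H * nu / (KB * t_dust)) - 1.0)
    return b_si * 1e20   # W m^-2 Hz^-1 sr^-1 -> MJy/sr

def mbb_se(lam_um, sigma_dust, t_dust, beta, kappa0=10.10, lam0=160.0):
    """Simple-emissivity modified blackbody (Equ. 4-5), in MJy/sr.
    sigma_dust in Msun/pc^2; kappa0 in cm^2/g at lam0 = 160 um."""
    kappa = kappa0 * (lam0 / lam_um) ** beta   # nu/nu0 = lam0/lam
    sigma_cgs = sigma_dust * 2.0885e-4         # Msun/pc^2 -> g/cm^2
    return kappa * sigma_cgs * planck_mjy_sr(lam_um, t_dust)
```

Since $\kappa_\nu\Sigma_{\rm dust}\ll 1$ here, the product is the optical depth and the optically thin assumption of Equ. 4 is self-consistent.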
#### 3.1.2 Broken-Emissivity (BE)
The broken-emissivity model stemmed from the identification of the sub-
millimeter excess (Gordon et al., 2014). It allows a change of the dust
spectral index with wavelength, meant to better reproduce the far-IR slope
than does a simple modified blackbody. In this model, the value of the opacity
is wavelength-dependent, such that:
$\kappa_{\nu}=\left\{\begin{array}{ll}\kappa_{\nu_{0}}\left(\frac{\nu}{\nu_{0}}\right)^{\beta}&\lambda<\lambda_{\rm c}\\ \kappa_{\nu_{0}}\left(\frac{\nu_{\rm c}}{\nu_{0}}\right)^{\beta}\left(\frac{\nu}{\nu_{\rm c}}\right)^{\beta_{2}}&\lambda\geq\lambda_{\rm c}\end{array}\right.$ (6)
where $\lambda_{\rm c}$ is the wavelength at which the opacity changes, and
$\nu_{\rm c}$ its equivalent frequency. Following Chiang et al. (2018), we fix
$\beta=2$, and $\lambda_{\rm c}=300~{}\mu$m. The free parameters are then the
dust surface density, $\Sigma_{\rm dust}$, the dust temperature, $T_{\rm
dust}$, and the second dust spectral index, $\beta_{2}$. The calibrated
opacity, $\kappa_{\nu_{0}}$, for this model is $20.73\pm 0.97$ cm2/g (Chiang
et al., 2018), as described above.
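With the fixed values used here ($\beta=2$, $\lambda_{\rm c}=300~\mu$m, $\kappa_{\nu_0}=20.73$ cm$^2$/g), the broken power-law opacity of Equ. 6 can be written as a short function (a sketch; continuity at the break follows from the $(\nu_{\rm c}/\nu_0)^\beta$ prefactor):

```python
def kappa_be(lam_um, beta2, kappa0=20.73, lam0=160.0, beta=2.0, lam_c=300.0):
    """Broken-emissivity opacity (Equ. 6) in cm^2/g: power-law slope `beta`
    below lam_c, slope `beta2` above, continuous at lam_c."""
    if lam_um < lam_c:
        return kappa0 * (lam0 / lam_um) ** beta
    return kappa0 * (lam0 / lam_c) ** beta * (lam_c / lam_um) ** beta2
```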
### 3.2 Opacity calibrations
The physical dust models used in our study have opacities that are set by each
individual model’s calibration procedure. All models use similar constraints
from the MW high latitude cirrus (described in more detail in Appendix A). For
ease of comparison to the opacities we have derived for the modified blackbody
models (10.1 cm2/g for the simple-emissivity and 20.7 cm2/g for the broken-
emissivity at 160 $\mu$m), we list the opacities of the physical models below,
all scaled to 160 $\mu$m:
* •
Draine & Li (2007): from the Weingartner & Draine (2001c) model updated in
DL07 we report $\kappa_{160}~{}=~{}10.2~{}$cm2/g (see also Bianchi, 2013);
* •
Compiègne et al. (2011): from the work of Bianchi (2013) we report
$\kappa_{160}~{}=~{}12.0~{}$cm2/g;
* •
Jones et al. (2017, THEMIS): from the work of Galliano et al. (2018) we report
$\kappa_{160}~{}=~{}14.2~{}$cm2/g;
* •
Hensley & Draine (2021): the model uses $\kappa_{160}~{}\sim~{}9.95~{}$cm2/g
(B. Hensley; priv. comm.).
For all models, we use the listed opacity regardless of the specific
environment in M101 we are studying. Few of the currently available physical
dust models have been calibrated in any other environment than the MW cirrus
(although the Draine & Li 2007 model does have Small and Large Magellanic
Cloud-like calibrations, though they are not widely used even for these very
galaxies; Sandstrom et al., 2010; Chastenet et al., 2019). Indeed, it is
standard in current extragalactic applications to apply MW cirrus $R_{V}=3.1$
dust calibrations across all environments (e.g. Davies et al., 2017; Aniano et
al., 2020). In detail, this is unlikely to be correct since the opacity can
and probably should evolve as a function of environment and it is clear that a
single $F_{*}~{}=~{}0.36$ value does not describe the depletion in the ISM
over the full range of column densities probed in galaxies. However, for the
purposes of our comparative study of widely used dust models, we proceed by
using the $R_{V}=3.1$ MW cirrus calibrations from each model. Even if there
were a potential way to adjust opacity with H column in M101, work by Ysard et
al. (2015) using THEMIS suggests that this is insufficient to predict changes
in dust properties.
### 3.3 Physical dust models
Physical dust models assume a composition, density, and shape for the dust
grains and adopt heat capacities and optical properties from laboratory and
theoretical studies that are appropriate for such materials. For simplicity,
most models assume spherical grains or planar molecules for PAHs. In the case
of PAHs, the grains/molecules are additionally described by an ionization
state, which changes the absorption cross-sections as a function of
wavelength. The temperature and emission of a grain of a given size, shape and
composition in a radiation field with a specified intensity and spectrum can
then be calculated analytically (e.g. Draine & Lee, 1984; Desert et al.,
1986).
The full dust population is represented by a grain size distribution and
abundance relative to H for the specified compositions. Physical dust models
are calibrated by adjusting the grain size distributions and dust mass per H
to simultaneously match observations of extinction, emission, and abundances
(and more recently, polarization) in a location where the underlying radiation
field that is heating the dust is well known. This has generally been taken to
be the high-latitude MW cirrus, where the radiation field intensity and
spectrum are approximately given by the Mathis et al. (1983) model for the
Solar neighborhood. The degree to which the models must adhere to the somewhat
uncertain abundance constraints varies model to model. For example, the
modified blackbody models from Gordon et al. (2014) use the depletion
measurements in the MW as a strict limit. However, most physical models allow
the final element abundances to vary from depletions, using them only as a
loose guide.
In the following, we use 4 physical dust models: Draine & Li (2007), Compiègne
et al. (2011), THEMIS (Jones et al., 2017) and Hensley & Draine (2021). Here
we briefly describe these models and the key differences between them. Details
on their respective calibration methodologies can be found in Appendix A.
#### 3.3.1 Draine & Li (2007)
In the Draine & Li (2007, hereafter DL07) model, dust is comprised of PAHs,
graphite grains and amorphous silicate grains. It stems from the original
models presented in Li & Draine (2001a). The carbonaceous dust optical
properties are adopted from Li & Draine (2001b) with updates to the PAH cross-
sections and form of the grain size distribution. A balance between ionized
and neutral PAHs is assumed, following Li & Draine (2001b). The optical
properties of silicate material are adopted from the “astrosilicates” in the
original model. We do not make use of the mass renormalization in Draine et
al. (2014).
The mass fraction of PAHs, $q_{\rm\sc{PAH}}$, is defined as the mass of
carbonaceous grains with fewer than $10^{3}$ carbon atoms relative to the
total dust mass. We obtain $q_{\rm\sc{PAH}}$ in % by converting the part-per-
million carbon abundance $b_{\rm C}$ to a PAH fraction, using the reference
$q_{\rm PAH}=4.7\%\equiv b_{\rm C}=55~{\rm ppm}$.
The calibration of the model is described in several papers (Draine & Li,
2001; Li & Draine, 2001b; Weingartner & Draine, 2001c). We use the DL07 Milky
Way model, with $R_{\rm V}=3.1$, which has a fixed ratio of silicate to
carbonaceous grains. Details on the calibration can be found in Appendix A.1.
#### 3.3.2 Compiègne et al. (2011)
The Compiègne et al. (2011, hereafter MC11) model is composed of PAHs,
hydrogenated amorphous carbon grains, and amorphous silicate grains. The size
distribution of the carbonaceous components includes PAHs, small amorphous
carbon grains (SamC), and large amorphous carbon grains (LamC). The PAH cross-
sections and ionization as a function of size adopted in the model are based
on Draine & Li (2007) with slight modifications to the cross sections of
several bands. Amorphous carbonaceous grains have optical properties from
Zubko et al. (2004) and heat capacities from Draine & Li (2001). The amorphous
silicates (aSil) have optical properties from Draine (2003) and heat
capacities from Draine & Li (2001). Details on the calibration can be found in
Appendix A.2.
The DustEM tool (DustEM outputs the extinction, emission and polarization of
dust grains heated by a given radiation field; see
https://www.ias.u-psud.fr/DUSTEM/) allows both ionized and neutral PAHs
to be fit independently. Because not all models allow that separation, we tie
their emission spectra together by summing them, hence keeping their ratio
constant at roughly 60% neutral and 40% ionized (the ratio of ionized and
neutral PAHs is size-dependent; the values are set in DustEM from the
MIX_PAHx.DAT files). Additionally, we fix the mass fraction of the large
carbonaceous-to-silicate grains such that $(M_{\rm dust}^{\rm LamC}/M_{\rm
H})/(M_{\rm dust}^{\rm aSil}/M_{\rm H})=0.22$ (this value is from the
DustEM file GRAIN_MC10.DAT). Since these share a very similar far-IR spectral
index, their respective emission cannot be properly determined independently
with our wavelength coverage, and would lead to degenerate abundances if fit
separately.
#### 3.3.3 THEMIS
The Heterogeneous dust Evolution Model for Interstellar Solids (THEMIS; Jones
et al., 2013; Köhler et al., 2014; Ysard et al., 2015; Jones et al., 2017) is
a core/mantle dust model, consisting of large silicate and hydrocarbons, both
coated with aromatic-rich particles (Hydrogenated Amorphous Hydrocarbons,
HACs). This model defines its dust components focusing strongly on laboratory
data, slightly adjusted to better match observations. We use the diffuse ISM
version of the model, described in Jones et al. (2013). The amorphous carbon
grain properties are size-dependent, and there is no strictly independent
population of grains responsible for the mid-IR features, which are instead
carried by aromatic clusters in the form of mantles and very small grains
(sCM20). As these grains grow in size to larger grains (lCM20), their core
becomes aliphatic-rich, coated with
an aromatic mantle. The silicate grains (aSilM5) in particular are discussed
in Köhler et al. (2014). They are a mixture of olivine and pyroxene-type
material, with nano-inclusions of Fe and FeS. Their mass ratio is kept
constant due to their extreme resemblance in emission in the considered
wavelength range. The dust evolution models (from diffuse to denser medium)
are discussed in Ysard et al. (2015), with the impact of aggregates and
thicker mantles on the final abundances. In the diffuse model we use, aSil
have a 5 nm mantle, and hydrocarbons have a 20 nm mantle. In our fitting, the
lCM20-to-aSilM5 mass ratio is kept constant, such that $(M_{\rm dust}^{\rm
lCM20}/M_{\rm H})/(M_{\rm dust}^{\rm aSilM5}/M_{\rm H})=0.24$ (this value
is from the DustEM file GRAIN_J13.DAT). Details on the calibration can be
found in Appendix A.3.
#### 3.3.4 Hensley & Draine (2021)
Rather than employ separate amorphous silicate and carbonaceous grain
components, the Hensley & Draine (2021, hereafter HD21) model invokes a single
homogeneous composition, “astrodust” (Draine & Hensley, 2020), to model most
of the interstellar grain mass. In addition to astrodust, the model
incorporates PAHs using the cross-sections from Draine & Li (2007) and a small
amount of graphite using the turbostratic graphite model presented in Draine
(2016). The HD21 model was developed to reproduce the observed properties of
dust polarization, including both polarized extinction and emission, in
addition to total extinction and emission. This raises the emissivity in the
far-IR, forcing the dust to be slightly cooler than in other models and
requiring a higher radiation field to produce comparable amounts of
emission. We use cross sections computed for 2:1 prolate spheroids, but the
grain shape has only a small effect on the far-infrared total intensity
studied in this work. Details on the calibration can be found in Appendix A.4.
For the purposes of this study, the HD21 model has been parameterized in the
same way as Draine & Li (2007), i.e., utilizing parameters $U$ and $q_{\rm
PAH}$.
## 4 Fitting methodology
### 4.1 Making matched model grids
Since each model was developed independently, it is not always possible to
create model grids that have exactly the same parameter sampling, because of
the lack of parameter equivalence. However, we attempt to do so as much as
possible.
Radiation field — We implement identical dust heating in each of the physical
models: a fraction $\gamma$ of the dust mass in a pixel is heated by a power-
law distribution of radiation fields with $U_{\rm min}<U\leq{\rm U_{max}}$
(Dale et al., 2001), where $U$ is the interstellar radiation field, expressed
in units of the MW-diffuse radiation field at 10 kpc from Mathis et al.
(1983). The remaining fraction of dust $(1-\gamma)$ is heated by the minimum
radiation field $U_{\rm min}$ (Draine & Li, 2007):
$\begin{split}\frac{1}{M_{\rm dust}}\frac{dM_{\rm dust}}{dU}=&(1-\gamma)\ \delta(U-U_{\rm min})+\\ &\gamma\ \frac{(\alpha-1)}{U_{\rm min}^{1-\alpha}-{\rm U}_{\rm max}^{1-\alpha}}\ U^{-\alpha}\end{split}$ (7)
We fix $\alpha=2$ for all models, ${\rm U_{max}}=10^{7}$ for the DL07, THEMIS
and MC11 models, and ${\rm U_{max}}=10^{6}$ for HD21. This choice is
constrained by the available parameter ranges in the models: the ${\rm
U_{max}}$ values for the DL07 and HD21 models are fixed, and we do not have
the freedom to change them; thanks to DustEM, we can adjust ${\rm U_{max}}$
for MC11 and THEMIS. (Using DustEM and THEMIS, we checked the effect of using
${\rm U_{max}}=10^{6}$ or $10^{7}$ for a range of $\gamma$ values. We find
that most of the mid-IR bands used in this study are only minimally affected.
The difference for IRAC 4.5 and WISE 4.6 can be significant at $\gamma\geq
0.1$, while the maximum $\gamma$ value reached by the model fits is 0.07, for
DL07. Additionally, these bands are dominated by starlight, and mostly modeled
by the 5,000 K blackbody.) For THEMIS and the MC11 model, which
have multiple dust populations that can be independently heated, each
population is heated by the same radiation field.
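For reference, with $\alpha=2$ the heating distribution of Equ. 7 has a closed-form mass-weighted mean radiation field, sketched below (the function name is ours; this assumes, as stated above, that the fraction $(1-\gamma)$ is heated at exactly $U_{\rm min}$):

```python
import numpy as np

def mean_u(u_min, gamma, u_max=1e7):
    """Mass-weighted mean radiation field <U> for Equ. 7 with alpha = 2:
    a fraction (1 - gamma) sits at U_min, and the power-law part integrates
    to ln(U_max/U_min) / (1/U_min - 1/U_max)."""
    return ((1.0 - gamma) * u_min
            + gamma * np.log(u_max / u_min) / (1.0 / u_min - 1.0 / u_max))
```

Even a small $\gamma$ raises $\langle U\rangle$ appreciably because of the long power-law tail.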
The minimum radiation field parameter grid is fixed by the values provided in
the Draine & Li (2007) model. Thanks to the DustEM tool, we can use the exact
same values with THEMIS and the MC11 model. The parameter values in HD21,
however, are not exactly identical to $U_{\rm min}^{\rm DL07}$. We
therefore use the spectra with the closest $U_{\rm min}^{\rm HD21}$ values.
Although not strictly equal, the radiation field values in Hensley & Draine
are within 5% of $U_{\rm min}^{\rm DL07}$.
For the modified blackbody models, we use a single radiation field intensity
and translate it into a dust temperature. We use the relationship
$U\propto T_{\rm dust}^{\beta+4}\quad\textrm{ with }\beta=2$ (8)
to convert $U_{\rm min}$ to $T_{\rm dust}$ and find the approximately matching
sampling to use in the blackbody models. We use the normalization from Draine
et al. (2014), i.e. $U=1$ at $T_{\rm dust}=18~{}$K, found using the same
radiation field from Mathis et al. (1983).
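Under this normalization, Equ. 8 inverts to a one-line conversion (a sketch; the function name is ours):

```python
def u_to_tdust(u, beta=2.0, t_ref=18.0):
    """Convert a radiation field U to the equivalent dust temperature via
    U ∝ T^(beta+4) (Equ. 8), normalized so that U = 1 at T = 18 K."""
    return t_ref * u ** (1.0 / (beta + 4.0))
```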
Mid-IR feature carriers — The $q_{\rm\sc{PAH}}$ parameter is kept strictly
identical between DL07 and HD21. We choose to use THEMIS and the MC11 models
in a similar fashion. Despite the definition of “PAHs” being different in
THEMIS, we parameterize the model so that the fraction of small grains can
vary, keeping the large-carbonaceous-to-silicate grains ratio constant. The
MC11 default model has 4 populations that can vary independently, and we
choose to tie together SamC, LamC and aSil, to leave the amount of PAHs as a
free parameter. We explain these choices in more detail in Section 7.
Stellar surface brightness — In addition to the dust model parameters, we use
a scaling parameter of a 5,000 K blackbody, $\Omega_{*}$, to model the stellar
surface brightness visible at the shortest wavelengths. This temperature is a
good approximation as the shortest wavelengths are nearly on the Rayleigh-
Jeans tail. The free parameter $\Omega_{*}$ scales the amplitude of the
stellar blackbody.
Table 3: Fitting parameters Parameter | Range | Step | Unit
---|---|---|---
All physical models
${\rm log_{10}}$($\Sigma_{\rm dust}$) | [-2.2, 0.1] | 0.035 | M⊙/pc2
$U_{\rm min}$ | [0.1, 50] | Irraa$U_{\rm min}\in$ {0.1, 0.12, 0.15, 0.17, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8, 1.0, 1.2, 1.5, 1.7, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0, 7.0, 8.0, 10.0, 12.0, 15.0, 17.0, 20.0, 25.0, 30.0, 35.0, 40.0, 50.0}. | –
${\rm log_{10}}$($\gamma$) | [-4, 0] | 0.15 | –
${\rm log_{10}}$($\Omega_{*}$) | [-1, 2.5] | 0.075 | –
Model-specific
DL07, HD21: $q_{\rm\sc{PAH}}$ | [0, 6.1] | 0.25 | %
THEMIS: $f_{\rm sCM20}$bbWe use a “fraction” parameter so that $\Sigma_{X}=f_{X}\times\Sigma_{\rm d}$, where $X$ = {${\rm sCM20^{THEMIS}}$, ${\rm PAHs^{MC11}}\\}$. | [0, 0.5] | 0.03 | –
MC11: $f_{\rm PAHs}$bbWe use a “fraction” parameter so that $\Sigma_{X}=f_{X}\times\Sigma_{\rm d}$, where $X$ = {${\rm sCM20^{THEMIS}}$, ${\rm PAHs^{MC11}}\\}$. | [0, 0.5] | 0.03 | –
Modified-blackbody models
${\rm log_{10}}$($\Sigma_{\rm dust}$) | [-2.2, 0.1] | 0.035 | M⊙/pc2
$T_{\rm dust}$ | [12, 35] | 0.3 | K
SE: $\beta$, BE: $\beta_{2}$ | [-1,4] | 0.05 | –
In Table 3, we list the final free parameters we use for each model. There are
5 free parameters for the physical models, and 3 for the modified blackbody
models. Figure 2 shows all the dust emission models used in this study. The
top-left panel shows the fiducial MW high galactic latitude diffuse ISM
models, labeled ‘Galactic SED’. The IR MW diffuse emission from Compiègne et
al. (2011) is also plotted. The bottom-left panel shows the physical dust
models at the same radiation field $U_{\rm min}$ $=1$. The other panels
detail the models: THEMIS is divided into two grain populations when
fitted to the SED of M101 (note that we tie the lCM20 and aSilM5 populations);
for the MC11 model we tie the SamC, LamC and aSil emissions together, and thus
there are two free parameters for the dust mass: $f_{\rm PAH}$ or $f_{\rm
sCM20}$, and the total dust surface density. The two modified blackbodies are
shown with their best-fit values to the MW diffuse SED: {$T_{\rm dust}$
$=20.9$ K, $\beta=1.44$} for the simple-emissivity model, and {$T_{\rm dust}$
$=18.0$ K, $\beta_{2}=1.55$} for the broken-emissivity model.
Figure 2: The dust emission models used in this study: Draine & Li (2007),
Jones et al. (2017, THEMIS), Compiègne et al. (2011), Hensley & Draine (2021),
simple-emissivity, and broken-emissivity. Top left: all six models at the
$U_{\rm min}$ value that best fits the calibration SED. Bottom left: the four
physical models at $U_{\rm min}$ = 1. The right side panels show each model at
the $U_{\rm min}$ value that best fits the calibration SED, and their break-
down as they are used in this study.
### 4.2 Fitting tool
#### 4.2.1 Bayesian fitting with DustBFF
We use the DustBFF tool from Gordon et al. (2014) to fit the data with the
chosen dust models. DustBFF is a Bayesian fitting tool that uses flat priors
for all parameters. The probability that a model with parameters $\theta$ fits
the measurements ($S^{\rm obs}$) is given by:
$\mathcal{P}({\bf S}^{\rm
obs}|\theta)=\frac{1}{Q}\text{e}^{-\chi^{2}(\theta)/2}$ (9)
with
$\begin{split}Q^{2}&=(2\pi)^{\rm n}\ \text{det}|\mathbb{C}|\\\
\chi^{2}(\theta)&=[{\bf S}^{\rm obs}-{\bf S}^{\rm mod}(\theta)]^{\rm T}\
\mathbb{C}^{-1}\ [{\bf S}^{\rm obs}-{\bf S}^{\rm mod}(\theta)]\end{split}$
(10)
where ${\bf S}^{X}$ is the observed ($X={\rm obs}$) or modeled ($X={\rm mod}$)
16-band SED used here, and $\mathbb{C}$ is the covariance matrix that includes
uncertainties from random noise, astronomical backgrounds and instrument
calibration (described further below).
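Equations 9-10 amount to a standard correlated-Gaussian likelihood; a minimal sketch follows (this is not DustBFF itself, and the function name is ours):

```python
import numpy as np

def log_likelihood(s_obs, s_mod, cov):
    """ln P(S_obs | theta) from Equ. 9-10: -0.5 * (chi^2 + ln Q^2), with
    chi^2 the Mahalanobis distance under the covariance matrix C."""
    resid = np.asarray(s_obs) - np.asarray(s_mod)
    chi2 = resid @ np.linalg.solve(cov, resid)
    n = len(resid)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (chi2 + n * np.log(2.0 * np.pi) + logdet)
```

Evaluated over a grid of models, the normalized exponential of this quantity gives the posterior probability of each parameter set under flat priors.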
To create ${\bf S}^{\rm mod}(\theta)$ each model spectrum is convolved with
the spectral responses for the photometric bands used here. PACS and SPIRE
band integrations are done in energy units, whereas all the others are done in
photon units, as necessitated by the instrument’s calibration scheme. We also
follow the reference spectra used for the calibration of each instrument: the
MIPS bands require a reference shape in the form of a $10^{4}$ K blackbody,
while the other bands use a $1/\nu$ reference shape.
#### 4.2.2 Covariance matrices
The covariance matrix in the previous equations describes the uncertainties on
the measured flux, both due to astronomical backgrounds, and instrumental
noise and the uncertainties in calibrating the instruments. This takes into
account the correlation between the photometric bands due to the calibration
scheme and the correlated nature of astronomical background signals. We define
a background matrix and an instrument matrix such that
$\mathbb{C}=\mathbb{C}_{\rm bkg}+\mathbb{C}_{\rm ins}$ to propagate the
correlated errors of the background and noise, and of the calibration
uncertainties, respectively.
##### Background covariance matrix $\mathbb{C}_{\rm bkg}$
The “background” in our images encompasses astronomical signals that do not
come from the IR emission of the target M101. It is dominated by different
objects depending on the wavelength, and can therefore be correlated between
bands. For instance, the background from 3.4 to 5.8 $\mu$m is mostly from
foreground stars, as well as zodiacal light; from 8 to 24 $\mu$m, it is from
evolved stars and background galaxies; in the far-IR, it is dominated by
Galactic cirrus emission and background galaxies. To include this uncertainty,
we measure this combination of signals in “background pixels” using the
processed data and a masking procedure described in Section 4.2.3.
The elements of the background covariance matrix are calculated as
$(\mathbb{C}_{\rm bkg})_{i,j}^{2}=\frac{\sum_{k}^{\rm N}{(S_{i}^{k}-\langle
S_{i}\rangle)(S_{j}^{k}-\langle S_{j}\rangle)}}{{\rm N}-1},$ (11)
where $S_{X}^{k}$ is the flux of pixel $k$, in band $X$, and $\langle
S_{X}\rangle$ is the average background emission in band $X$ (close to 0). The
final number of 9′′ pixels used to measure the covariance matrix is
$N=18,920$.
We display the background _correlation_ matrix in Figure 3 (not the elements’
absolute values but the Pearson correlation coefficients; see Gordon et al.,
2014). Three clear sets of positive correlations appear, as explained above: correlations are due to starlight in the near-IR, evolved stars and MW cirrus in the mid-IR, and background galaxies and MW cirrus in the far-IR.
Additionally, some bands may be noise dominated: this is the case for the PACS 70 band, which is only weakly correlated with the other bands.
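The two steps above, Equ. 11 followed by the conversion to Pearson coefficients shown in Figure 3, can be sketched in numpy. This is a minimal illustration assuming the background pixels are stored as an (N, nbands) array; the function names and toy data are ours, not the paper's.

```python
import numpy as np

def background_covariance(S):
    """Background covariance matrix (Equ. 11).

    S : (N, nbands) array of background-pixel fluxes S_i^k.
    The sample mean <S_i> is subtracted and the sum is
    normalized by N - 1.
    """
    dS = S - S.mean(axis=0)          # S_i^k - <S_i>
    return dS.T @ dS / (S.shape[0] - 1)

def correlation_matrix(C):
    """Pearson correlation coefficients from a covariance matrix
    (what Figure 3 displays)."""
    sigma = np.sqrt(np.diag(C))
    return C / np.outer(sigma, sigma)

# Toy example: two bands sharing a common "background" component,
# mimicking correlated astronomical signals between bands.
rng = np.random.default_rng(0)
common = rng.normal(size=1000)
S = np.stack([common + 0.1 * rng.normal(size=1000),
              common + 0.1 * rng.normal(size=1000)], axis=1)
C_bkg = background_covariance(S)
R = correlation_matrix(C_bkg)        # strong positive off-diagonal term
```

With N = 18,920 real background pixels, `S` would simply have that many rows; the definition is otherwise unchanged.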
Figure 3: Background correlation matrix. The dark color indicates a strong
correlation between the bands. The matrix is symmetric and we show only half.
The text indicates the astronomical signals that dominate the contoured bands and explain their strong correlation.
##### Instrument calibration matrix $\mathbb{C}_{\rm ins}$
This matrix is calculated as the quadratic sum of the correlated and uncorrelated errors for each instrument. The correlated errors (matrix with diagonal _and_ off-diagonal terms) refer to the instrument calibration itself ($m_{\rm cal}$ in Table 2), while the uncorrelated errors (matrix with diagonal terms only) express the instrument stability, or repeatability ($m_{\rm sta}$ in Table 2). We use the same calibration errors reported in Chastenet et al. (2017, see Table 2) for the Spitzer/MIPS and Herschel bands. The correlated errors for the IRAC bands were changed to take into account the uncertainties in the extended source correction factors, which are larger than the calibration errors themselves. The errors due to repeatability are unchanged. The errors for the WISE bands were taken from the WISE documentation (http://wise2.ipac.caltech.edu/docs/release/prelim/expsup/sec4_3g.html).
The elements of the calibration matrix $\mathbb{C}_{\rm ins}$ are calculated
“model-by-model” as
$(m_{\rm ins})_{i,j}^{2}=S_{i}^{\rm mod}(\theta)\ S_{j}^{\rm mod}(\theta)\ (m^{2}_{{\rm cal},\,i,j}+m^{2}_{{\rm sta},\,i,j})$ (12)
with particular elements
$\begin{split}m_{{\rm cal},\,i,j}&=0\ \text{if }i,\ j\text{ belong to two \emph{different} instruments;}\\ m_{{\rm sta},\,i,j}&=0\ \text{if }i\neq j.\end{split}$
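A small sketch of how $\mathbb{C}_{\rm ins}$ could be assembled per Equ. 12. One point is not spelled out in the text: for $i\neq j$ within the same instrument we interpret $m^{2}_{{\rm cal},\,i,j}$ as $m_{{\rm cal},i}\,m_{{\rm cal},j}$; that interpretation, the function name, and the toy numbers are our assumptions.

```python
import numpy as np

def calibration_matrix(S_mod, m_cal, m_sta, instrument):
    """Instrument covariance matrix (Equ. 12), built model-by-model.

    S_mod      : (nbands,) model fluxes S_i^mod(theta)
    m_cal      : (nbands,) fractional calibration errors; correlated
                 within one instrument, zero across instruments
    m_sta      : (nbands,) fractional repeatability errors; diagonal only
    instrument : (nbands,) instrument label of each band

    For i != j within the same instrument we take
    m^2_cal,i,j = m_cal[i] * m_cal[j] (an assumed reading of Equ. 12).
    """
    nb = len(S_mod)
    C = np.zeros((nb, nb))
    for i in range(nb):
        for j in range(nb):
            cal = m_cal[i] * m_cal[j] if instrument[i] == instrument[j] else 0.0
            sta = m_sta[i] ** 2 if i == j else 0.0
            C[i, j] = S_mod[i] * S_mod[j] * (cal + sta)
    return C

# Toy example with three bands from two instruments (values illustrative).
S_mod = np.array([1.0, 2.0, 3.0])
m_cal = np.array([0.10, 0.10, 0.05])
m_sta = np.array([0.02, 0.02, 0.02])
C_ins = calibration_matrix(S_mod, m_cal, m_sta, ["IRAC", "IRAC", "MIPS"])
```

Because the matrix depends on the model fluxes $S^{\rm mod}(\theta)$, it must be rebuilt for every parameter vector $\theta$ evaluated during the fit.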
We fit all pixels above the 1$\sigma$ background level in all bands. We use these pixels to show parameter maps and radial profiles, while the galaxy-integrated values are calculated for pixels above a 3$\sigma$ detection above the background (black contours in Appendices B and C).
#### 4.2.3 Stars in the background/foreground
Here we describe the masking procedure to measure the background covariance
matrix in Section 4.2.2. To do so, we use the final images, i.e. background-
subtracted, convolved and projected to the same pixel grid.
The covariance matrix elements are calculated under the assumption of Gaussian noise. While this assumption works well for faint, unresolved stars and the cosmic infrared background galaxies, it is no longer correct if we include bright stars. Bright foreground stars must therefore be excluded when measuring this matrix, which makes the Gaussian-noise approximation for the remaining background more accurate. Note, however, that they are not masked for the fitting within the boundary of the galaxy (we find no pixels showing a bad fit due to a foreground star; the $\Omega_{*}$ maps do not show conspicuous peaks, indicating that the foreground stars are not dominant and the models successfully fit the galaxy emission), but only for the purpose of the covariance matrix measurement.
In the process of masking, we first exclude the region within $\rm r_{25}$, to
mask the galactic emission. We then mask the brightest stars, using the star
masks from Leroy et al. (2019) that leverage the known positions of stars from
the GAIA and 2MASS catalogs (https://irsa.ipac.caltech.edu/data/WISE/z0MGS/overview.html). To
match the final products, we convolve and regrid these star masks to the SPIRE
500 resolution. After convolution, the mask values are no longer binary 0 and
1, but show intermediate values around the position of bright stars. We mask
pixels above 0.15 to exclude these regions where bright foreground stars
contaminate our measurements from the covariance matrix calculation.
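The selection of usable background pixels described above can be sketched as a boolean mask: exclude everything inside $\rm r_{25}$ and every pixel where the convolved (hence no longer binary) star mask exceeds 0.15. The pixel-based circular geometry and all names here are illustrative simplifications.

```python
import numpy as np

def background_pixel_mask(shape, center, r25_pix, star_mask_conv, thresh=0.15):
    """Boolean selection of "background pixels" for the covariance
    measurement: pixels inside r25 are excluded, as are pixels where
    the convolved star mask exceeds `thresh` (0.15 in the text).
    Distances are in pixels; a circular r25 is assumed for simplicity.
    """
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1])
    return (r > r25_pix) & (star_mask_conv <= thresh)

# Toy example: a 10x10 grid with one "bright star" in a corner.
star_mask = np.zeros((10, 10))
star_mask[0, 0] = 0.4                      # above the 0.15 cut
ok = background_pixel_mask((10, 10), (5, 5), 3.0, star_mask)
```

The rows of the `S` array fed to the covariance calculation would then be the fluxes of the pixels where `ok` is True.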
One argument against the decision to mask bright stars to measure the
covariance matrix would be that the emission from these stars is an
astrophysical signal that should be taken into account to propagate the noise
from a band to another. This would require creating a new noise model and
significant changes to the fitting methodology. Rather than major changes to
the fitting approach, we decide to mask the stars to measure the background
covariance matrix.
## 5 Results
Figure 4: Example of the best fits in a pixel for the six models used in this
study (color lines). The data is shown as the empty symbols, with 1$\sigma$
error-bars. The synthetic photometry from the model spectrum is shown with a cross symbol; because of the band integration, it does not always sit exactly on the spectrum. The location of the measurement is marked by a cross in Figure
1.
We investigate the differences in some of the key parameters from dust
emission modeling, when the models presented here are all used in an identical
fitting framework. All residuals in this analysis are presented as (Data-
Model)/Model. For radial profiles, we use the pixel size as the annulus width
and cover from the center of M101 to $\rm r_{25}$. Appendix B shows the
resolved maps of the fitted parameters for each model. Appendix C shows the
residual maps of each model.
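The radial profiles used throughout this section, annuli one pixel wide from the galaxy center out to $\rm r_{25}$, can be sketched as follows. Geometry is kept in pixel units and the function name is ours.

```python
import numpy as np

def radial_profile(image, center, r25_pix, dr=1.0):
    """Azimuthally averaged radial profile with annuli of width `dr`
    (one pixel in the text), from the galaxy center out to r25.
    `center` is (row, col) in pixel coordinates."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    edges = np.arange(0.0, r25_pix + dr, dr)
    prof = np.array([image[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])
    return 0.5 * (edges[:-1] + edges[1:]), prof

# Sanity check: a flat map must give a flat profile.
r_mid, prof = radial_profile(np.ones((21, 21)), (10, 10), 8.0)
```

The same helper applies to any of the fitted-parameter maps (e.g. $\Sigma_{\rm dust}$ or $\overline{U}$), masking non-fitted pixels beforehand.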
### 5.1 Quality of fits
Figure 4 shows an example of the fits for each model in a single pixel (marked
by the cross in Figure 1). In each panel we plot the best fit spectrum
(colored lines) to the data SED (empty circles). The residuals are shown in
the colored symbols. Negative residuals mean that the model overestimates the
data. For example, the negative (and decreasing) residuals from 250 to 500
$\mu$m in the physical models panels are representative of a systematic
overestimation of the data (present also in other locations; see Appendix
Figures 19–22). In Figure 5, we show the fractional residuals (Data-
Model)/Model in each band, for the pixels above the 3$\sigma$ threshold.
Appendix D shows the reduced $\chi^{2}$ for all models.
The bulk of the residuals in the short wavelength bands are within the
instrument uncertainties and calibration errors. For example, despite larger
uncertainty due to the extended source correction, the IRAC 8 band shows
residuals mostly within 10%. Below 4 $\mu$m, THEMIS shows a clear offset that
may be related to the absence, or low amount, of an ionized component in the
HAC population, which leads to enhanced mid-IR features. It is also worth
noting these bands are dominated by starlight, modeled by a 5,000 K blackbody,
which is independent from the dust model itself. The residuals at 12 $\mu$m
are systematically positively offset by less than 10%, but all physical models
show very narrow residual distributions. This is in contrast with the broader
residuals in IRAC 8 and WISE 22 bands, where we can see more differences
between models.
All models show differences in the central values of the residual distribution
at 100 $\mu$m. At 160 $\mu$m, all residuals overlap and models perform fits of
similar quality. At longer wavelengths, significant differences begin to
appear. At all far-IR wavelengths, the modified blackbody models reproduce the
SEDs the best. This is likely because of the additional parameter that can
adjust the far-IR slope ($\beta$ in the simple-emissivity model and
$\beta_{2}$ in the broken-emissivity model). The SPIRE 250 band shows
symmetrical residuals centered on 0 for the physical models (except THEMIS),
while the residuals get progressively worse at 350 and 500 $\mu$m for all
physical models, showing that on average, the modeled far-IR slope of the SEDs
is steeper than the data.
In the SPIRE 350 and SPIRE 500 bands, many of the pixels underestimated by the models show residuals much larger than the
uncertainties, ruling out statistical noise and indicating that the models are
not able to fit these wavelengths. In the SPIRE 500 band, some of the pixels
show the so-called “sub-millimeter excess” seen in other studies (e.g.
Galametz et al., 2014; Gordon et al., 2014; Paradis et al., 2019).
Figure 5: Fractional residuals (Data-Model)/Model for each model in each band.
We plot the (Gaussian) kernel density estimates of the residual
distributions. The WISE 12 band shows narrow, offset fits for all models while
other mid-IR bands show clear over/under-estimations by some models. The
physical models perform a good fit at 250 $\mu$m that gets progressively worse
towards longer wavelengths. Only the modified blackbody models show systematically good fits, within 10%, in all far-IR bands, likely because the spectral index $\beta$ is a free parameter.
### 5.2 Total Dust Mass and Average Radiation Field
We compute several galaxy averaged quantities: the total dust mass, $M_{\rm
dust}$, the dust mass-weighted average radiation field
$\langle\overline{U}\rangle$ for the physical models, and the mass-weighted
average dust temperature $\langle T_{\rm dust}\rangle$ for the blackbody
models. The average radiation field $\overline{U}$ is calculated for each
pixel as
$\overline{U}=(1-\gamma)\,U_{\rm min}+\gamma\,U_{\rm min}\,\frac{\ln(U_{\rm max}/U_{\rm min})}{1-U_{\rm min}/U_{\rm max}}$ (13)
since we fixed $\alpha=2$ (Aniano et al., 2020). The galaxy-integrated _mass-
weighted_ averages are calculated as
$\begin{split}\langle\overline{U}\rangle&=\frac{\sum_{j}\overline{U}_{j}\,\Sigma_{{\rm dust},j}}{\sum_{j}\Sigma_{{\rm dust},j}}\\ \langle T_{\rm dust}\rangle&=\frac{\sum_{j}T_{{\rm dust},j}\,\Sigma_{{\rm dust},j}}{\sum_{j}\Sigma_{{\rm dust},j}}.\end{split}$ (14)
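Equations 13 and 14 translate directly into a few lines of numpy; a minimal sketch (function names ours), with the $\alpha=2$ form of Equ. 13 assumed as stated:

```python
import numpy as np

def mean_radiation_field(gamma, U_min, U_max):
    """Per-pixel average radiation field for alpha = 2 (Equ. 13)."""
    return ((1.0 - gamma) * U_min
            + gamma * U_min * np.log(U_max / U_min) / (1.0 - U_min / U_max))

def mass_weighted_average(x, sigma_dust):
    """Galaxy-integrated, dust-mass-weighted average (Equ. 14); `x`
    is either U-bar or T_dust over the fitted pixels."""
    return np.sum(x * sigma_dust) / np.sum(sigma_dust)
```

Note that for $\gamma=0$ the power-law term vanishes and $\overline{U}=U_{\rm min}$, as expected from Equ. 13.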
The integrated values are calculated over the pixels above the 3$\sigma$
detection threshold. In Figure 6, we show these measurements for each model as
diagonal elements. We also provide the ratios between all models in $M_{\rm
dust}$ and $\langle\overline{U}\rangle$ (or $\langle T_{\rm dust}\rangle$) to
explicitly show their differences. These off-diagonal elements read as Y-model
/ X-model (e.g. $M_{\rm dust}^{\rm HD21}/M_{\rm dust}^{\rm DL07}=0.77$).
The top panel of Figure 6 shows the total dust masses. The broken-emissivity
modified blackbody model yields the lowest total dust mass, while the MC11
model yields the highest. The simple-emissivity model shows very different
spatial variation of dust surface density than the other models (see Section
5.3). In a recent study of the KINGFISH sample, Aniano et al. (2020) found a total dust mass of $9.14\times 10^{7}$ M⊙ (before their renormalization) by fitting the DL07 model at the MIPS 160 resolution ($\sim 39^{\prime\prime}$; the difference between their dust mass and ours is due to the larger area used in the former for the total dust mass calculation). Using the
CIGALE SED fitting tool (e.g. Boquien et al., 2019, and reference therein) and
THEMIS, Nersesian et al. (2019) found a total dust mass of $4.70\times 10^{7}$
M⊙, which is similar to the $5.05\times 10^{7}$ M⊙ from THEMIS in this study.
It is worth noting that Nersesian et al. (2019) performed fits to the
integrated SED, which could lead to a lower dust mass (Aniano et al., 2012;
Utomo et al., 2019). However, low signal-to-noise pixels are included in the integrated SED but excluded from the resolved fits.
It is interesting to note that despite the fairly good agreement ($\sim 10\%$,
Figure 5) of all physical models with each other (and modified blackbody
models at far-IR wavelengths) in reproducing the data, the differences in dust
masses can be much larger. This suggests that intrinsic opacity values, rather than fit quality, dominate the differences in dust mass between models. This is
supported by the recent, extensive study done by Fanciullo et al. (2020). By
comparing the literature opacities (including 3 models used in this work) with
laboratory dust analogue opacities, they found that dust masses can be
overestimated by more than an order of magnitude.
The bottom panel of Figure 6 shows the integrated values for
$\langle\overline{U}\rangle$ and $\langle T_{\rm dust}\rangle$. As expected,
the HD21 model requires the highest radiation field, owing to its colder dust.
THEMIS and the MC11 model show similar values of $\langle\overline{U}\rangle$.
Nersesian et al. (2019) found a dust temperature of $21.7$ K by fitting a
modified blackbody (with the THEMIS opacity), close to that yielded by the
modified blackbody models here.
The mass-weighted average temperatures correspond to radiation fields of 2.5
and 2.3 (using Equ. 8) for the broken-emissivity and the simple-emissivity
models, respectively. They are in good agreement with the radiation field
values fitted by the physical models.
Figure 6: Integrated values (over the 3$\sigma$ pixels). The diagonal elements
show the total dust mass (top panel), and radiation field (bottom panel, above
the line) or temperature (bottom panel, below the line). The off-diagonal
elements are ratios of the integrated values between different models, and are
read as “model Y-axis/model X-axis” (e.g. $M_{\rm dust}^{\rm HD21}/M_{\rm
dust}^{\rm DL07}=0.77$).
### 5.3 Dust Surface Density, $\Sigma_{\rm dust}$
Figure 7 shows maps of the dust surface density, $\Sigma_{\rm dust}$, for each
model, as well as their ratios with each other. We can see that the physical
dust models DL07, HD21, THEMIS and MC11 are all fairly close to each other
(light colors), with variations in dust surface density within a factor of 2.
They all yield similar dust surface density structures, and appear to vary
from one another by a spatially smooth offset. THEMIS and the HD21 model show
the closest $\Sigma_{\rm dust}$ values, but show an inversion of their ratio
around 0.38 $r/$$\rm r_{25}$, where THEMIS requires less dust (see Figure 12).
The HD21/DL07 and MC11/THEMIS ratio maps are particularly flat, with ratios
$\sim 1.3$–$1.4$ in both cases, pointing at the resemblance of their large-grain properties and size distributions. The MC11 model requires the most dust
mass. This is consistent with the comparison analysis in Chastenet et al.
(2017). The dust surface density from the broken-emissivity model is
consistently lower than the physical dust models. It shows a rather smooth
offset, which indicates that the spatial variations are fairly similar between
them. The dust surface density from the simple-emissivity model shows
different spatial structures in the center and the outskirts of the galaxy. It
yields high $\Sigma_{\rm dust}$ values in the center, but drops more rapidly
than any other model with increasing radius (Figure 12; also Chiang et al.,
2018).
A caveat of our approach is the difference in treating the heating of dust
grains between physical dust models and modified blackbodies. In the latter,
we only use a single temperature, which is not equivalent to the ensemble of
radiation fields used in the physical models. However, given the two extreme
behaviors of the modified blackbodies (the simple-emissivity model requiring a
high dust mass, and the broken-emissivity model requiring the lowest), it is
not obvious that the use of multiple temperatures (or radiation fields) drives
the differences in dust mass observed here. Rather, the calibration of the
modified blackbodies, and their effective opacity, seem to be more important.
Figure 7: Dust surface density maps (diagonal) and corresponding ratios with
each model. The MC11 model requires the largest dust mass, followed by the
simple-emissivity and DL07 models. All physical dust models show very similar
spatial variations despite having different values of $\Sigma_{\rm dust}$. The
HD21/DL07 and MC11/THEMIS ratio maps are particularly smooth across the disk,
but the MC11/DL07 and THEMIS/HD21 have the ratios closest to 1. The simple-
emissivity model shows clear structural differences with the other models, by
requiring a lot of dust in the center, but rapidly dropping in the outskirts.
### 5.4 Average radiation field, $\overline{U}$
We perform the same ratio analysis with $\overline{U}$, derived from the
fitted parameters in the physical dust models (Equ. 13 and 14). In Figure 8 we
show the radial profiles for $\overline{U}$, as well as the parameter maps and
the ratios of each model. In the radial profile, the thick lines stop where
the selection effect due to fitting only bright pixels becomes important.
Using the SPIRE 500 image, we found that the radial profiles of IR emission for all pixels and for fitted pixels only differ significantly at $\sim 6^{\prime}$ (i.e. 0.5 $\rm r_{25}$). The variations in $\overline{U}$ are reflective of
those of $U_{\rm min}$, since the $\gamma$ values are overall small, lending
more power to the delta-function than the power-law (Equ. 7).
The overall variations of $\overline{U}$ appear to be rather smooth, which is
expected since it is dominated by the diffuse radiation field $U_{\rm min}$.
However, we do find enhanced values of $\overline{U}$ in H II regions. The
ratio maps do not strongly show these peaks, which indicates that all models
behave similarly and require higher radiation field intensities in these
regions. Like for the $\Sigma_{\rm dust}$ parameter, the ratio maps for
$\overline{U}$ do not display any conspicuous spatial differences, but rather
an offset between each model. This is also visible in the upper panel of
Figure 8.
The HD21 model shows the highest values of $\overline{U}$. This is expected as
the dust in the model is “colder” than other models, due to very few large
carbonaceous grains. This leads to a higher radiation field intensity required
to reach the same luminosity.
The spatial distributions of the $\gamma$ parameter are similar in all
physical models, and we chose not to show them. Instead, we use the average
radiation field, $\overline{U}$, that includes $\gamma$ in its calculation.
The HD21 model shows the lowest values of $\gamma$, which, combined with the
highest values of $U_{\rm min}$, means it requires more power in the delta
function than the other physical models.
Figure 8: Top: Radial profile of $\langle\overline{U}\rangle$ (mass averaged
radiation field, see Equ. 14) for each physical model. The thick lines stop
where the radial profile is affected by the selection effect due to fitting
only bright pixels (Section 4.2.2). Bottom: Average radiation field,
$\overline{U}$, maps (diagonal) and corresponding ratios with each model. The
HD21 model shows the highest values of $\overline{U}$ in all the pixels.
Despite different values, all models show very similar spatial distribution of
$\overline{U}$, including in H II regions.
### 5.5 Fraction of PAHs
Three models have an explicitly defined PAH fraction. The DL07 and HD21 models
define the parameter $q_{\rm\sc{PAH}}$ as the fraction of the dust mass
contained in carbonaceous grains with fewer than $10^{3}$ carbon atoms, roughly less
than 1 nm in size. We use the $f_{\rm PAH}$ parameter from the MC11 model to
estimate a PAH fraction, i.e. mass of grains with sizes from 0.35 to 1.2 nm,
as defined by the fiducial parameters. We refer to the mid-IR emission feature carriers in THEMIS as HACs. We use the definition in Lianou et al. (2019) to compute a fraction of HACs from the THEMIS results that can be compared to the PAHs in other dust models: they found that this fraction of HACs
corresponds to grains between 0.7 and 1.5 nm of the sCM20 component. It is
important to remember that the strict definition of the PAH/HAC fraction is
different in each model, but its purpose—fitting the mid-IR emission
features—remains similar.
We investigate the variations of the surface density of the carriers,
$\Sigma_{\rm PAH}$ and $\Sigma_{\rm HAC}$, instead of their abundances. In the
top panel of Figure 9, we show the radial profiles of
$\Sigma_{\rm\\{PAH;~{}HAC\\}}$. Although the absolute values of the surface
density of PAHs (HACs) differ by a factor up to $\sim 3.5$ (similar to dust
masses), their gradients are very similar. This behavior shows that the grain
populations that are held responsible for the mid-IR features in each model
follow comparable distributions. In these models, their contributions to the
total dust mass vary significantly but all prove to be a good fit to the mid-
IR bands (see also Figure 5). This is also exemplified by the normalized ratio
maps in Figure 9 (bottom panel). The dark colors in the outer-most pixels of
the HD21 model are due to a best-fit $q_{\rm\sc{PAH}}$ consistent with 0%. To
visualize the variations between models, we normalize each parameter map to
their mean value (as shown in the color-bar labels). We are thus able to
compare the spatial variations of the maps, and avoid the offsets due to the
definition differences of PAHs or HACs.
The $q_{\rm\sc{PAH}}$ map in Aniano et al. (2020, using the DL07 model) shows
similar features to ours. A large portion of the disk of M101 has a rather
constant distribution of $q_{\rm\sc{PAH}}$, with conspicuous drops in H II
regions. In their study using the Desert, Boulanger, & Puget (1990) dust
model, Relaño et al. (2020) found a flat radial profile of the small-to-large
grain mass ratio, up to 0.8 $\rm r_{25}$ ($\sim 9.1^{\prime}$). Our maps of
the fraction of PAHs, or HACs, present a somewhat flat distribution
(variations less than 1%) out to $\sim 0.3~{}$$\rm r_{25}$ ($\sim
3.4^{\prime}$; see Appendix B), and a steep change further out.
Figure 9: Top: Radial profiles for $\Sigma_{\rm PAH}$, or $\Sigma_{\rm HAC}$,
in M⊙/pc2. The models yield very similar spatial variations. This indicates
that the definition of the feature carriers matters in terms of their
contribution to the total dust mass, but they all reproduce the mid-IR
features similarly. The thick lines stop where the radial profile is affected
by the selection effect due to fitting only bright pixels (Section 4.2.2).
Bottom: Fraction of PAHs (or HACs) (diagonal) centered on their respective
mean value, with boundaries at 5 and 95 percentiles ($P_{5}$, $P_{95}$). The
normalized ratios (normalized to the mean; off-diagonal) show the spatial
variations between models.
### 5.6 Reproducing the mid-IR emission features
To investigate in more detail the ability of each physical model to reproduce
the PAH features, we perform a fit on an integrated SED and compare the
results to measurements from the Infrared Spectrograph (IRS; on-board Spitzer)
in that same region (J. D. T. Smith, private communication). Figure 10 shows a
zoom on the mid-IR part of the models and the results of the fits to the
integrated SED. From that fit, it appears that all models are able to
generally reproduce the mid-IR features, with their different parameters, but
we can notice a few differences between models.
From the residuals (bottom panel), we see that the models perform similarly at
5.8, 8.0 and 12 $\mu$m. At 22 and 24 $\mu$m, the offset between the
measurements means the models tend to split the difference and sit between the
points. We note that all models appear to overestimate the continuum around 7 $\mu$m, in between the 6.2 and 7.7 $\mu$m PAH features.
There are nonetheless a couple of noticeable differences between each model.
For instance, the HD21 model shows a higher continuum at 10 $\mu$m than the
other three models, despite a similar continuum at 20 $\mu$m. This rules out
the higher $U_{\rm min}$ found in the HD21 model as the reason for the higher
flux at 10 $\mu$m. Rather, in this model, the emission at $10~{}\mu$m is
strongly dependent on the amount of nano-silicates, used to account for the
lack of correlation between PAH emission and anomalous microwave emission
(Hensley & Draine, 2017). The ratio between the flux at 20 and 10 $\mu$m is
$\sim 1.7$ for the HD21 models and 2 or above for the other three models. On
the other hand, THEMIS shows no emission feature around 17 $\mu$m, while the
other models do (although it has no impact on this particular fit). Around 7
$\mu$m, all models show a higher flux than the one seen in the IRS spectrum.
It is notable that, using only photometric bands from WISE and Spitzer/IRAC in the fit, all models reproduce the mid-IR emission features reasonably well, despite having different values of the PAH (or HAC) fraction and different definitions of the carriers. However, the comparison to spectroscopic measurements shows that there are still differences between models.
Figure 10: Fits to the integrated SED from the broad-band photometry (open
circles) within a rectangle box (drawn in Figure 1). The Spitzer/IRS spectrum
and its corresponding SED (convolved in the 16 bands used here) are shown in
gray (line and filled circles). All models perform a good fit to the mid-IR
part of the SED, but show different fractions of PAHs (or HACs). Differences
can be noticed between models: a higher 10 $\mu$m emission in the HD21 model,
due to nano-silicates, or the lack of an emission feature at 17 $\mu$m in
THEMIS.
### 5.7 SPIRE 500 and $\Sigma_{\rm dust}$
The monochromatic dust emission in far-IR wavelengths has often been used as a
mass tracer of the ISM (e.g. Eales et al., 2012; Berta et al., 2016; Scoville
et al., 2017; Groves et al., 2015; Aniano et al., 2020; Baes et al., 2020). In
Figure 11, we plot the emission of M101 at 500 $\mu$m as a function of the
fitted $\Sigma_{\rm dust}$ for each model (pixels above the 3$\sigma$
detection threshold), color-coded by the minimum radiation field $U_{\rm
min}$, or dust temperature $T_{\rm dust}$.
In all cases, we can see two distinct relations as the SPIRE 500 emission
increases. The majority of the fitted pixels show a linear scaling between the
emission at 500 $\mu$m and $\Sigma_{\rm dust}$, while in some specific regions
of the galaxy, all models prefer a higher radiation field (or temperature) and
a lower dust surface density. We provide the scaling relations between the
emission at 500 $\mu$m in MJy/sr, and the dust surface density in M⊙/pc2, for
each model. We measure the 5th and 95th percentiles of the data points above
the 3$\sigma$ detection threshold (to keep the bulk of the distribution only).
We fit a linear slope to these points (the fit coefficients and uncertainties were measured using the numpy.polyfit procedure):
$\begin{split}\log_{10}(\Sigma_{\rm d})^{\rm DL07}&=(1.21\pm 0.01)\,\log_{10}(I_{\nu}^{500\mu m})-(1.38\pm 0.005)\\ \log_{10}(\Sigma_{\rm d})^{\rm MC11}&=(1.25\pm 0.02)\,\log_{10}(I_{\nu}^{500\mu m})-(1.39\pm 0.008)\\ \log_{10}(\Sigma_{\rm d})^{\rm THEMIS}&=(1.32\pm 0.02)\,\log_{10}(I_{\nu}^{500\mu m})-(1.56\pm 0.008)\\ \log_{10}(\Sigma_{\rm d})^{\rm HD21}&=(1.21\pm 0.01)\,\log_{10}(I_{\nu}^{500\mu m})-(1.50\pm 0.006)\\ \log_{10}(\Sigma_{\rm d})^{\rm SE}&=(1.61\pm 0.03)\,\log_{10}(I_{\nu}^{500\mu m})-(1.80\pm 0.02)\\ \log_{10}(\Sigma_{\rm d})^{\rm BE}&=(1.08\pm 0.08)\,\log_{10}(I_{\nu}^{500\mu m})-(1.78\pm 0.005).\end{split}$ (15)
To identify the pixels that “branch out” from the bulk, we select any pixel that falls more than one standard deviation below the fit (dashed lines). In each
panel, we show the spatial location of these pixels. It becomes clear that the
regions that need a higher $U_{\rm min}$ (or $T_{\rm dust}$) are H II regions
and in the outskirts of the galaxy. These pixels account for between 4 and 11%
of the pixels above the 3$\sigma$ detection. The branching-out from the main
relation is likely the consequence of the fact that the dust in these H II
regions is significantly hotter than average (as shown by the enhanced
$\overline{U}$ in all models in Figure 8). Lower dust-to-gas ratios in the
galaxy outskirts may also contribute to this trend.
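The fit of Equ. 15 and the branch selection can be reproduced with numpy.polyfit, the routine named in the text. This sketch omits the 5th–95th percentile clipping used in the paper; the function name and synthetic data are ours.

```python
import numpy as np

def fit_and_branch(I500, sigma_dust, nsigma=1.0):
    """Degree-1 fit of log10(Sigma_dust) against log10(I_500) with
    numpy.polyfit (as in Equ. 15), then flag the "branching-out"
    pixels falling more than `nsigma` standard deviations of the
    residuals below the fit (the dashed lines in Figure 11)."""
    x, y = np.log10(I500), np.log10(sigma_dust)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return slope, intercept, resid < -nsigma * resid.std()

# Synthetic pixels following log10(Sigma_d) = 1.2 log10(I) - 1.4,
# close to the DL07 coefficients in Equ. 15.
rng = np.random.default_rng(1)
logI = rng.uniform(-1.0, 1.0, size=500)
logS = 1.2 * logI - 1.4 + 0.02 * rng.normal(size=500)
slope, intercept, branch = fit_and_branch(10.0 ** logI, 10.0 ** logS)
```

The boolean `branch` array, reshaped to the map geometry, gives the colored pixels shown in the Figure 11 insets.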
Figure 11: SPIRE 500 emission as a function of the fitted $\Sigma_{\rm dust}$
for each model, color-coded by the radiation field $U_{\rm min}$ or the dust
temperature $T_{\rm dust}$. We identify a separation in the linear scaling of
$\Sigma_{\rm dust}$ with the dust emission at 500 $\mu$m, marked by the dashed
lines. The pixels below the lines are plotted in color in the maps, and are
located in H II regions and their surroundings. In these specific locations, the luminosity does not follow the same relation with $\Sigma_{\rm dust}$ as the rest of the galaxy.
In Aniano et al. (2020), the authors found that this relation is well
represented by a power-law scaling in the KINGFISH sample, with a slope of
0.942, which is lower than our measured value of 1.2 for M101 with the DL07
model. A linear slope would be expected if the dust temperature and optical
properties were uniform throughout the galaxy, leading to a constant
$I_{\nu}^{500}/\Sigma_{\rm dust}$ ratio. The spatial distribution of
temperatures throughout each individual galaxy leads to distinct slopes, and
for M101 we find that in regions of higher dust surface density, the dust is
also warmer, leading to more 500 $\mu$m emission (this can be seen in the change of the color table in Figure 11 at the highest $\Sigma_{\rm dust}$). The branch
of H II region points we see represents only a small fraction of the total
data and may not be evident on the Aniano et al. (2020) plot.
## 6 Model Performance Given Abundance Constraints on Dust Mass
### 6.1 Maximum Dust Surface Density
The calibration of dust models involves a constraint on elements locked in
grains (see Section 3). This step relies on depletion measurements, which
characterize the distribution of heavy elements between the gas and solid
phases. The final amount of elements allowed in dust grains varies between
different physical dust models. The final dust masses derived by each model
vary as well, as discussed in Sections 5.2 and 5.3.
A way to assess the performance of dust models is to verify that the required
dust mass does not exceed the available heavy element mass, as constrained by
metallicity measurements (e.g. Gordon et al., 2014; Chiang et al., 2018). We
perform this test in M101 since its metallicity gradient has been thoroughly
characterized (e.g. Zaritsky et al., 1994; Moustakas et al., 2010; Croxall et
al., 2016; Berg et al., 2020; Skillman et al., 2020).
We estimate the dust mass surface density upper-limit by assuming all
available metals are in dust and calculating the metal mass surface density
from the metallicity gradient and observed gas mass surface density:
$\frac{M_{\rm dust}^{\rm max}}{M_{\rm gas}}=\frac{M_{\rm Z}}{M_{\rm gas}},$
(16)
where $M_{\rm gas}$ and $M_{\rm Z}$ are determined as follows.
The gas surface density is the sum of H I and H2 surface densities including a
correction for the mass of He (we neglect the ionized gas contribution). The
latter is built from CO emission, assuming two prescriptions for the CO-to-H2
conversion (see Section 2). We include the MW $\alpha_{\rm CO}$ prescription
as it is widely used (Equ. 1). We also choose the $\alpha_{\rm CO}$
prescription from Bolatto, Wolfire, & Leroy (2013), which takes into account
environmental variations of $\alpha_{\rm CO}$ with metallicity and surface
density (Equ. 2). We emphasize that the $\Sigma_{\rm dust}$ upper-limit in
this section is dependent on the choice of $\alpha_{\rm CO}$ to derive a gas
surface density. This is particularly true in the central region of M101,
where H2 dominates (e.g. Schruba et al., 2011; Vílchez et al., 2019) and where
the two $\alpha_{\rm CO}$ differ the most. Note also that this result differs
with that of Chiang et al. (2018). This is expected as the $\alpha_{\rm CO}$
conversion factor in their study (from Sandstrom et al., 2013) is lower than
the ones used in this study, which leads to a lower upper-limit.
We use the $12+{\rm log_{10}(O/H)}$ radial profile from Berg et al. (2020) and
convert it to metallicity through:
$\frac{M_{\rm Z}}{M_{\rm gas}}=\frac{\frac{m_{\rm O}}{m_{\rm H}}\,10^{\left(12+\log_{10}({\rm O/H})\right)-12}}{1.36\,\frac{M_{\rm O}}{M_{\rm Z}}},$
(17)
with ${\rm m_{O},m_{H}}$ the atomic masses of oxygen and hydrogen,
respectively; and the oxygen-to-metals mass ratio $M_{\rm O}/{M_{Z}}=0.445$
(Asplund et al., 2009).
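Equ. 16 and 17 can be evaluated directly; the snippet below does so, using a roughly solar abundance of $12+\log_{10}({\rm O/H})=8.69$ as an illustrative input.

```python
# Minimal evaluation of Equ. 16-17: metal mass fraction from a gas-phase
# oxygen abundance, and the dust surface density upper-limit obtained by
# assuming all metals are in dust. The abundance value is illustrative.

m_O, m_H = 15.999, 1.008   # atomic masses of oxygen and hydrogen
M_O_over_M_Z = 0.445       # oxygen-to-metals mass ratio (Asplund et al. 2009)

def metal_mass_fraction(logOH_12):
    """Equ. 17: M_Z / M_gas from 12 + log10(O/H)."""
    OH = 10.0 ** (logOH_12 - 12.0)   # O/H number ratio
    return (m_O / m_H) * OH / (1.36 * M_O_over_M_Z)

def sigma_dust_max(sigma_gas, logOH_12):
    """Equ. 16: dust surface density upper-limit, all metals in dust."""
    return metal_mass_fraction(logOH_12) * sigma_gas

# At roughly solar abundance this recovers M_Z/M_gas ~ 0.013, close to
# the solar metal mass fraction, a useful sanity check.
print(metal_mass_fraction(8.69))
```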
The top panels in Figure 12 show the radial profile of $\Sigma_{\rm dust}$ for
each model and the $\Sigma_{\rm dust}^{\rm max}$ upper-limits yielded by the
two $\alpha_{\rm CO}$ prescriptions used here (black lines, dotted and dash-
dotted). We see on the top left panel that the DL07 and MC11 models are above
both $\Sigma_{\rm dust}^{\rm max}$ upper-limits in almost all the significant
pixels. THEMIS and the HD21 models are fairly in line with the Bolatto,
Wolfire, & Leroy (2013) upper-limit, even under the conservative assumption
that 100% of metals are in dust grains. These behaviors suggest that the dust
emissivity in all physical models is too low, which leads to requiring too
much dust. The large dust masses are likely due to the opacity calibration
rather than a poor fit in the far-IR bands: in Figure 5, all physical models
show a reasonable fit at 160 $\mu$m, much closer to the IR peak than the 500
$\mu$m band. We are therefore confident that the IR peak is correctly
recovered, and that the high dust masses are not driven by the sub-millimeter
excess.
Figure 12: Top: Radial profiles for $\Sigma_{\rm dust}$ (colored lines), and
dust surface density upper-limits (black dotted and dash-dotted lines). Upper-
limits are estimated from gas and metallicity measurements, assuming all
metals are locked in dust grains (Section 6.1). We have projected the dust
surface density maps onto the pixel grid of the gas maps, and masked the gas maps where
there is no dust data (to ensure we are selecting identical pixels in building
radial profiles). The upper-limits are invariant between the left and right
panels. Left: radial profiles from the fits. All physical models and the
simple-emissivity model are either above or similar to the upper-limits, out
to 0.4 $r/\rm r_{25}$. Right: renormalized dust surface densities for the
physical models (Section 6.2). The renormalization forces physical models to
the same abundance constraints ($M_{\rm dust}/M_{\rm H}~{}=~{}1/150$) and to
fit the same diffuse MW IR emission. Doing so, we derive correction factors
and apply them to the dust surface densities, scaling them down to plausible
values (below the $\Sigma_{\rm dust}^{\rm max}$ lines). Bottom: Dust-to-gas
ratios for each model, assuming the gas surface density derived with
$\alpha_{\rm CO}$ from Bolatto, Wolfire, & Leroy (2013). The gray line
represents the upper-limit from Berg et al. (2020) and assuming a dust-to-
metal ratio of 1 (Equ. 17). The thick lines stop where the radial profile is
affected by the selection effect due to fitting only bright pixels (Section
4.2.2), creating the conspicuous features in the H II region locations. The
main bump at 0.6 $r/\rm r_{25}$ corresponds to the two H II regions NGC 5447
and NGC 5450. The less visible bump at 0.5 $r/\rm r_{25}$ corresponds to the
H II region NGC 5462.
### 6.2 (Re-)Normalization
Despite sharing a common calibration approach, the details of the opacity
calibration in the dust models used in this study vary in small but
significant ways. While all models were calibrated to MW diffuse emission,
they did not use exactly the same high-latitude cirrus spectrum. In addition,
the $M_{\rm dust}/M_{\rm H}$ adopted for the MW diffuse ISM varies among the
physical dust models. Finally, the radiation field that best reproduces the
MW diffuse emission, $U^{\rm MW}$, differs slightly from one model to the
next. Because of the relationships between radiation field and dust
temperature $(U\propto T_{\rm dust}^{4+\beta})$ and between dust temperature
and luminosity, even a slight difference in the assumed radiation field may
lead to a significant change in the model’s calibrated dust opacity.
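The sensitivity of the calibration to the assumed radiation field can be illustrated numerically. The numbers below ($\beta=2$, $T=20$ K, 250 $\mu$m, a 30% offset in $U$) are illustrative assumptions, not the models' calibrated values; the point is that a modest offset in $U$ shifts $T$ only slightly, yet the long-wavelength Planck function, and hence the opacity needed to reproduce a fixed observed flux, changes by a larger fraction.

```python
# Propagation of an offset in the assumed MW radiation field U into the
# long-wavelength brightness, via U ~ T^(4+beta). Illustrative numbers.
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants
beta, T0, lam = 2.0, 20.0, 250e-6          # assumed beta, temperature, wavelength

def planck(T, lam):
    """Planck function B_lambda(T); only ratios are used here."""
    x = h * c / (lam * k * T)
    return (2 * h * c**2 / lam**5) / np.expm1(x)

U_ratio = 1.3                                  # a 30% offset in the assumed U
T1 = T0 * U_ratio ** (1.0 / (4 + beta))        # ~4.5% change in temperature...
flux_ratio = planck(T1, lam) / planck(T0, lam) # ...>10% change in B(250 um)
print(T1 / T0, flux_ratio)
```

Integrated over the full far-IR SED the effect is larger still, which is why even small differences in $U^{\rm MW}$ matter for the opacity calibration.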
To investigate calibration discrepancies, we re-normalize each of the dust
models via a fit to a common MW diffuse emission spectrum using the same
abundance constraints. We use the MW SED described in Gordon et al. (2014),
which we previously used to calibrate $\kappa_{\nu}$ of the modified blackbody
models (Chiang et al., 2018). This SED is the same as that used in Compiègne
et al. (2011), a combination of DIRBE and FIRAS measurements (e.g. Boulanger
et al., 1996; Arendt et al., 1998). (Footnote 18: Although more recent
measurements from Planck are available in the far-IR, we emphasize that the
important aspect here is uniformity. We choose the DIRBE+FIRAS SED as it is
conveniently the one used to calibrate the modified blackbody models.
Additionally, the main contribution of the Planck measurements lies past the
wavelength range used in our study, in the sub-millimeter and millimeter
range.) We do not use the ionized gas correction because depletion
measurements do not correct for it, and instead use a correction factor of
0.97 for molecular gas only (Compiègne et al., 2011). We integrate the SED in
the PACS 100, PACS
160, SPIRE 250, SPIRE 350 and SPIRE 500 bands, so all models use the same
wavelength coverage. We use 2.5% uncorrelated and 5% correlated errors to
account for FIRAS and DIRBE uncertainties (Gordon et al., 2014). The $M_{\rm
dust}/M_{\rm H}$ ratio set in the normalization is 1/150, as suggested by
depletions studies ($F_{*}=0.36$ from Jenkins, 2009, see also Gordon et al.
2014).
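The quoted error budget can be assembled into a band-to-band covariance matrix as sketched below: a 2.5% uncorrelated term on the diagonal plus a 5% term fully correlated across all bands. The flux values are illustrative placeholders, not the integrated MW SED itself.

```python
# Covariance matrix from a 2.5% uncorrelated and a 5% correlated error
# term, for five bands (PACS 100/160, SPIRE 250/350/500). Fluxes are
# illustrative placeholders.
import numpy as np

flux = np.array([1.0, 1.4, 1.1, 0.7, 0.35])   # band fluxes (illustrative)
f_unc, f_cor = 0.025, 0.05                     # fractional error terms

cov = (np.diag((f_unc * flux) ** 2)              # uncorrelated part
       + np.outer(f_cor * flux, f_cor * flux))   # fully correlated part

print(np.sqrt(np.diag(cov)) / flux)   # total fractional error per band
```

The diagonal recovers the quadrature sum of the two terms in each band, while the off-diagonal elements carry only the correlated 5% contribution.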
To perform the re-normalization using the MW SED, we use the same fitting
technique as previously described with the following choices: 1) We do not use
the combination of radiation fields, nor the stellar component (i.e.
$\Omega_{*}=\gamma=0$). 2) We allow the minimum radiation field $U_{\rm min}$
and the total dust surface density $\Sigma_{\rm dust}$ to vary in each
physical model. 3) We keep the relative ratios between grain populations fixed
and do not vary them independently (e.g. for each model, we use the total
spectra shown as the solid lines ‘DL07’, ‘HD21’, ‘MC11’ and ‘THEMIS’ in
Figure 2).
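With these choices, each re-normalization fit reduces to two free parameters, $U_{\rm min}$ and $\Sigma_{\rm dust}$. A minimal grid-search sketch of such a fit is shown below; `model_sed`, the template spectrum, and the synthetic data are hypothetical stand-ins for the actual model spectra, not the paper's fitting code.

```python
# Two-parameter (U_min, Sigma_dust) chi-square grid search, as a stand-in
# for the re-normalization fit with gamma = Omega_* = 0. The template and
# data below are synthetic placeholders.
import numpy as np

def model_sed(u_min, sigma_dust, template):
    # template(u) returns band fluxes per unit dust surface density.
    return sigma_dust * template(u_min)

def fit(data, err, template, u_grid, s_grid):
    best, best_chi2 = None, np.inf
    for u in u_grid:
        for s in s_grid:
            chi2 = np.sum(((data - model_sed(u, s, template)) / err) ** 2)
            if chi2 < best_chi2:
                best, best_chi2 = (u, s), chi2
    return best, best_chi2

# Synthetic demonstration: the SED shape depends on u, so the fit can
# recover known input parameters.
template = lambda u: np.array([1.0, 1.0 + u, u**2, 2 * u, 0.5])
truth = model_sed(1.2, 2.0, template)
best_params, best_chi2 = fit(truth, 0.05 * truth, template,
                             np.linspace(0.5, 2.0, 16),
                             np.linspace(0.5, 4.0, 36))
print(best_params, best_chi2)
```

In practice the grids would be replaced by whatever optimizer the fitting framework uses; the sketch only shows how fixing the grain-population ratios collapses the problem to two parameters.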
The fits yield renormalization factors that correct all physical models from
their respective assumptions to a dust-to-H ratio of 1/150. These corrections
range from 1.5 for THEMIS, to 3 for the DL07 model (see Appendix A). With this
normalization, we are able to meet the metallicity constraints. The top right
panel of Figure 12 shows the radial profiles with the correction factors
applied to the surface densities of the physical dust models. The
renormalization brings the models to lower dust surface densities that agree
with the upper-limit based on the metal content. It is interesting to note
that the renormalized models now show three distinct behaviors in the dust
mass surface density radial profile: DL07, HD21 and the broken power-law
emissivity modified blackbody yield very similar results; THEMIS and MC11 are
very similar to each other and offset by a factor of $\sim 2$ from the first
group; and the simple power-law emissivity modified blackbody has a steeper
increase that still puts it above the abundance constraints even though it is
similarly normalized to the MW cirrus spectrum. The offset between the first
two groups, DL07/HD21 and THEMIS/MC11, is likely due to differences in the
best-fitting $U_{\rm min}^{\rm MW}$ for the MW SED and in the initial spectra
used for calibration.
In the bottom panels of Figure 12, we show the radial profile of the dust-to-
gas ratios (DGR) for each model, using the $\alpha_{\rm CO}^{\rm BWL13}$
conversion factor. The bottom left shows the DGR with the derived dust surface
densities, and the bottom right panel shows the DGR after re-normalization. The thick
gray line shows the upper-limit of the DGR using the metallicity gradient from
Berg et al. (2020) and Equ. 17, with $\alpha_{\rm CO}^{\rm BWL13}$. We find
the same abrupt change in the DGR as Vílchez et al. (2019) around 0.5 $\rm
r_{25}$, although with slightly lower values, consistent with the higher dust
surface densities in this study.
## 7 Discussion
In the previous sections, we investigated systematic effects linked to our
modeling choices. Here we discuss some next steps that should be undertaken
to improve dust emission modeling.
We showed that physical dust models are likely to require too much dust mass,
exceeding what is available based on metallicity measurements (for reasonable
choices of CO-to-H2 conversion factors, though we note that, in the central
region, this assertion is strongly dependent on this choice). This excess is
linked to the calibration of these models, in particular the elemental
abundances prescribed, and their assumed radiation field. Combined with the
growing number of metallicity measurements in nearby galaxies (e.g. Kreckel et
al., 2019; Berg et al., 2020), additional constraints for external
environments (beyond the MW) may help to perform better fits of dust emission.
Several aspects of galaxy evolution studies rely on grain properties. Dust
evolution models heavily rely on observational constraints to find the
parameters that best match the observed properties of dust. The balance
between destruction and formation processes in dust evolution models is
adjusted using observed dust masses, which need to be accurately measured
(e.g. by emission fitting). Similarly, the dust-to-gas ratio evolution with
metallicity is often derived using dust masses from emission measurements
(e.g. Rémy-Ruyer et al., 2014; De Vis et al., 2019; Nersesian et al., 2019;
De Looze et al., 2020; Nanni et al., 2020), and is subject to the systematic
biases found in this work.
Our study was designed for a rigorous comparison between models fit to mid-
through far-IR SEDs. Several choices made in this study follow from our goal
of implementing each model in as similar a framework as possible. This
requires a uniform radiation field description, and the parameters describing
the physical dust models in their fiducial form are not always adapted to the
radiation field model chosen in this work (Equ. 7).
Because of the limited wavelength coverage and SED sampling of this study, we
“tie” together different grain populations, based on the similarity of their
respective emission spectra. In the MC11 model, the large carbon (LamC) and silicate
(aSil) grains have very similar slopes in the far-IR, which would make these
fit parameters strongly degenerate if both were allowed to vary. In THEMIS,
the key difference between the large carbon grains (lCM20) and the large
silicate grains (aSilM5) is their slopes in the far-IR: the lCM20 grains have
a flatter SED than aSilM5. However, the spectral coverage used in this study
is too limited to properly constrain the emission from the two grain
populations. These choices have implications for the evolution of dust
composition in the ISM, since the ratio of carbon to silicate in large grains
is assumed to be constant for each model in this study.
Additionally, our further tests show that the $\gamma$ parameter is degenerate
with the emission of some of the grain populations in MC11 and THEMIS. The
abundance of small amorphous carbon (SamC) in MC11 helps adjust the slope
between 24 and 70 $\mu$m. In the radiation field parameterization chosen for
this study, the $\gamma$ parameter has a similar impact on the shape of the
dust emission. Keeping both the small amorphous carbon grains abundance and
$\gamma$ as free parameters introduces a degeneracy in the fitting. For this
reason, we keep the fiducial relative abundance of SamC with respect to the
big grains (LamC+aSil) fixed. In THEMIS, when allowing both lCM20
and aSilM5 populations to vary (e.g. with $f_{\rm aSil}$, the fraction of
large grains in the form of aSilM5), we also introduce a degeneracy with the
$\gamma$ parameter. A varying carbon-to-silicate grain ratio produces a
similar change in the SED shape, and the two parameters, $\gamma$ and $f_{\rm
aSil}$, become slightly degenerate. Future studies with more wavelength
coverage and more detailed constraints on individual elemental abundances may
be able to allow for more free parameters in the fits.
## 8 Conclusions
In this study, we compared the dust properties of M101 derived from six dust
models: four physical dust models and two blackbody models. We used the models
from Draine & Li (2007), Compiègne et al. (2011), Jones et al. (2017, THEMIS)
and Hensley & Draine (2021), as well as simple-emissivity and broken-
emissivity modified blackbody models, to assess the differences in various dust
properties yielded by fitting the mid- to far-IR emission from WISE, the
Spitzer Space Telescope and the Herschel Space Observatory photometry. Our
main conclusions are:
* $\bullet$
There are a few notable trends in the fitting residuals (described as (Data-
Model)/Model; Figure 5). All physical models reproduce the mid-IR bands within
10%, with very similar residual distributions in the WISE 12 band. All models
perform fits of similar quality at 160 $\mu$m. While the modified blackbody
models can reproduce the data in all far-IR bands (residuals centered on 0),
the fits from physical models have large residuals at long wavelengths. This
suggests that the flexibility to adjust the long wavelength slope of the
opacity is important to reproduce the observed SEDs.
* $\bullet$
All physical models reproduce the mid-IR emission features but yield different
values of the mass fraction of their carriers (Figure 9). Models that
attribute the mid-IR emission features to PAHs or HACs do similarly well in
reproducing the mid-IR spectrum.
* $\bullet$
We provide a scaling relation $\Sigma_{\rm dust}=f(I_{\nu}^{500~{}\mu m})$,
and identify a diverging relation in H II regions, where hot dust changes the
relationship between dust emission and mass (Figure 11).
Examining the fitting results of total dust masses and dust surface density
distributions, we find:
* $\bullet$
Models yield different total dust masses, up to a factor of 1.4 between
physical models, and up to 3 including modified blackbodies (Figure 6), but
all show similar spatial distributions of dust surface density (note the
fairly low discrepancy between dust masses from physical models, compared to
modified blackbody models). The MC11 model requires the highest dust mass, and
the broken-emissivity model the lowest.
* $\bullet$
We use metallicity and gas measurements to calculate a dust surface density
upper-limit (assuming all metals in dust) and show that all physical dust
models require too much dust over some radial ranges in M101. Only the broken-
emissivity modified blackbody model is below the upper limit of $\Sigma_{\rm
dust}^{\rm max}$ (Figure 12). This finding is dependent on the chosen
prescription for the CO-to-H2 conversion factor.
* $\bullet$
To investigate the differences between dust masses and their relationship to
the available heavy elements, we renormalized the models via fits to the same
SED of the MW diffuse emission, assuming a strict abundance constraint of
$M_{\rm dust}/M_{\rm H}=1/150$ (Section 6.2). We derive scaling factors and
apply them to the fitted dust surface density, and find renormalized dust mass
values lower than $\Sigma_{\rm dust}^{\rm max}$ (Figure 12). We find that the
choices made to calibrate dust models have a non-negligible impact on the
derived dust masses.
To provide the strictest comparison, we do not always use dust models in
their fiducial form, sometimes assuming a fixed ratio between two dust grain
populations. The observational constraints brought by IR emission fitting are
used to validate evolution models or derive scaling relations like the dust-
to-gas ratio. Our results show that these derived dust properties have
systematic uncertainties that should be taken into account. Although there are
still systematic uncertainties inherent in H II region metallicity
measurements, resolved metallicity gradients in nearby galaxies can be helpful
for testing the opacity calibrations in dust models.
We thank the referee for a very thorough reading and providing detailed
comments about the manuscript, which greatly improved the clarity of the
paper. The work of JC, KS, IC, AKL, and DU is supported by NASA ADAP grants
NNX16AF48G and NNX17AF39G and National Science Foundation grant No. 1615728.
The work of AKL and DU is partially supported by the National Science
Foundation under Grants No. 1615105, 1615109, and 1653300. TGW acknowledges
funding from the European Research Council (ERC) under the European Union’s
Horizon 2020 research and innovation programme (grant agreement No. 694343).
This work uses observations made with ESA Herschel Space Observatory. Herschel
is an ESA space observatory with science instruments provided by European-led
Principal Investigator consortia and with important participation from NASA.
This publication makes use of data products from the Wide-field Infrared
Survey Explorer, which is a joint project of the University of California, Los
Angeles, and the Jet Propulsion Laboratory/California Institute of Technology,
funded by the National Aeronautics and Space Administration. This work is
based in part on observations made with the Spitzer Space Telescope, which was
operated by the Jet Propulsion Laboratory, California Institute of Technology
under a contract with NASA. This research made use of matplotlib, a Python
library for publication quality graphics (Hunter, 2007). This research made
use of Astropy, a community-developed core Python package for Astronomy
(Astropy Collaboration et al., 2018, 2013). This research made use of NumPy
(Van Der Walt et al., 2011). This research made use of SciPy (Virtanen et al.,
2020). This research made use of APLpy, an open-source plotting package for
Python (Robitaille & Bressert, 2012; Robitaille, 2019). We acknowledge the
usage of the HyperLeda database (http://leda.univ-lyon1.fr).
## References
* Allamandola et al. (1985) Allamandola, L. J., Tielens, A. G. G. M., & Barker, J. R. 1985, ApJ, 290, L25, doi: 10.1086/184435
* Aniano et al. (2011) Aniano, G., Draine, B. T., Gordon, K. D., & Sandstrom, K. 2011, PASP, 123, 1218, doi: 10.1086/662219
* Aniano et al. (2012) Aniano, G., Draine, B. T., Calzetti, D., et al. 2012, ApJ, 756, 138, doi: 10.1088/0004-637X/756/2/138
* Aniano et al. (2020) Aniano, G., Draine, B. T., Hunt, L. K., et al. 2020, ApJ, 889, 150, doi: 10.3847/1538-4357/ab5fdb
* Arendt et al. (1998) Arendt, R. G., Odegard, N., Weiland, J. L., et al. 1998, ApJ, 508, 74, doi: 10.1086/306381
* Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481, doi: 10.1146/annurev.astro.46.060407.145222
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
* Baes et al. (2020) Baes, M., Trčka, A., Camps, P., et al. 2020, MNRAS, 494, 2912, doi: 10.1093/mnras/staa990
* Balog et al. (2013) Balog, Z., Müller, T., Nielbock, M., et al. 2013, Experimental Astronomy, doi: 10.1007/s10686-013-9352-3
* Bedell et al. (2018) Bedell, M., Bean, J. L., Meléndez, J., et al. 2018, ApJ, 865, 68, doi: 10.3847/1538-4357/aad908
* Bendo et al. (2013) Bendo, G. J., Griffin, M. J., Bock, J. J., et al. 2013, MNRAS, 433, 3062, doi: 10.1093/mnras/stt948
* Berg et al. (2020) Berg, D. A., Pogge, R. W., Skillman, E. D., et al. 2020, ApJ, 893, 96, doi: 10.3847/1538-4357/ab7eab
* Berta et al. (2016) Berta, S., Lutz, D., Genzel, R., Förster-Schreiber, N. M., & Tacconi, L. J. 2016, A&A, 587, A73, doi: 10.1051/0004-6361/201527746
* Bianchi (2013) Bianchi, S. 2013, A&A, 552, A89, doi: 10.1051/0004-6361/201220866
* Bolatto et al. (2013) Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, ARA&A, 51, 207, doi: 10.1146/annurev-astro-082812-140944
* Boquien et al. (2019) Boquien, M., Burgarella, D., Roehlly, Y., et al. 2019, A&A, 622, A103, doi: 10.1051/0004-6361/201834156
* Boulanger et al. (1996) Boulanger, F., Abergel, A., Bernard, J. P., et al. 1996, A&A, 312, 256
* Bron et al. (2014) Bron, E., Le Bourlot, J., & Le Petit, F. 2014, A&A, 569, A100, doi: 10.1051/0004-6361/201322101
* Byrne et al. (2019) Byrne, L., Christensen, C., Tsekitsidis, M., Brooks, A., & Quinn, T. 2019, ApJ, 871, 213, doi: 10.3847/1538-4357/aaf9aa
* Castellanos et al. (2018) Castellanos, P., Candian, A., Andrews, H., & Tielens, A. G. G. M. 2018, A&A, 616, A167, doi: 10.1051/0004-6361/201833221
* Chastenet et al. (2017) Chastenet, J., Bot, C., Gordon, K. D., et al. 2017, A&A, 601, A55, doi: 10.1051/0004-6361/201629133
* Chastenet et al. (2019) Chastenet, J., Sandstrom, K., Chiang, I. D., et al. 2019, ApJ, 876, 62, doi: 10.3847/1538-4357/ab16cf
* Chiang et al. (2018) Chiang, I. D., Sandstrom, K. M., Chastenet, J., et al. 2018, ApJ, 865, 117, doi: 10.3847/1538-4357/aadc5f
* Chiang et al. (2020) Chiang, I.-D., Sandstrom, K. M., Chastenet, J., et al. 2020, arXiv e-prints, arXiv:2011.10561. https://arxiv.org/abs/2011.10561
* Compiègne et al. (2011) Compiègne, M., Verstraete, L., Jones, A., et al. 2011, A&A, 525, A103, doi: 10.1051/0004-6361/201015292
* Croxall et al. (2016) Croxall, K. V., Pogge, R. W., Berg, D. A., Skillman, E. D., & Moustakas, J. 2016, ApJ, 830, 4, doi: 10.3847/0004-637X/830/1/4
* Croxall et al. (2012) Croxall, K. V., Smith, J. D., Wolfire, M. G., et al. 2012, ApJ, 747, 81, doi: 10.1088/0004-637X/747/1/81
* Dale et al. (2001) Dale, D. A., Helou, G., Contursi, A., Silbermann, N. A., & Kolhatkar, S. 2001, ApJ, 549, 215, doi: 10.1086/319077
* Dale et al. (2009) Dale, D. A., Cohen, S. A., Johnson, L. C., et al. 2009, ApJ, 703, 517, doi: 10.1088/0004-637X/703/1/517
* Davies et al. (2017) Davies, J. I., Baes, M., Bianchi, S., et al. 2017, PASP, 129, 044102, doi: 10.1088/1538-3873/129/974/044102
* de Blok et al. (2008) de Blok, W. J. G., Walter, F., Brinks, E., et al. 2008, AJ, 136, 2648, doi: 10.1088/0004-6256/136/6/2648
* De Looze et al. (2020) De Looze, I., Lamperti, I., Saintonge, A., et al. 2020, MNRAS, doi: 10.1093/mnras/staa1496
* De Vis et al. (2019) De Vis, P., Jones, A., Viaene, S., et al. 2019, A&A, 623, A5, doi: 10.1051/0004-6361/201834444
* Demyk et al. (2017) Demyk, K., Meny, C., Lu, X. H., et al. 2017, A&A, 600, A123, doi: 10.1051/0004-6361/201629711
* Desert et al. (1990) Desert, F. X., Boulanger, F., & Puget, J. L. 1990, A&A, 500, 313
* Desert et al. (1986) Desert, F. X., Boulanger, F., & Shore, S. N. 1986, A&A, 160, 295
* Draine (2003) Draine, B. T. 2003, ApJ, 598, 1026, doi: 10.1086/379123
* Draine (2011) —. 2011, Physics of the Interstellar and Intergalactic Medium
* Draine (2016) —. 2016, ApJ, 831, 109, doi: 10.3847/0004-637X/831/1/109
* Draine & Hensley (2020) Draine, B. T., & Hensley, B. S. 2020, arXiv e-prints, arXiv:2009.11314. https://arxiv.org/abs/2009.11314
* Draine & Lee (1984) Draine, B. T., & Lee, H. M. 1984, ApJ, 285, 89, doi: 10.1086/162480
* Draine & Li (2001) Draine, B. T., & Li, A. 2001, ApJ, 551, 807, doi: 10.1086/320227
* Draine & Li (2007) —. 2007, ApJ, 657, 810, doi: 10.1086/511055
* Draine et al. (2014) Draine, B. T., Aniano, G., Krause, O., et al. 2014, ApJ, 780, 172, doi: 10.1088/0004-637X/780/2/172
* Eales et al. (2012) Eales, S., Smith, M. W. L., Auld, R., et al. 2012, ApJ, 761, 168, doi: 10.1088/0004-637X/761/2/168
* Engelbracht et al. (2007) Engelbracht, C. W., Blaylock, M., Su, K. Y. L., et al. 2007, PASP, 119, 994, doi: 10.1086/521881
* Fanciullo et al. (2020) Fanciullo, L., Kemper, F., Scicluna, P., Dharmawardena, T. E., & Srinivasan, S. 2020, MNRAS, 499, 4666, doi: 10.1093/mnras/staa2911
* Fazio et al. (2004) Fazio, G. G., Hora, J. L., Allen, L. E., et al. 2004, ApJS, 154, 10, doi: 10.1086/422843
* Finkbeiner et al. (1999) Finkbeiner, D. P., Davis, M., & Schlegel, D. J. 1999, ApJ, 524, 867, doi: 10.1086/307852
* Fitzpatrick (1999) Fitzpatrick, E. L. 1999, PASP, 111, 63, doi: 10.1086/316293
* Fitzpatrick et al. (2019) Fitzpatrick, E. L., Massa, D., Gordon, K. D., Bohlin, R., & Clayton, G. C. 2019, ApJ, 886, 108, doi: 10.3847/1538-4357/ab4c3a
* Fumagalli et al. (2010) Fumagalli, M., Krumholz, M. R., & Hunt, L. K. 2010, ApJ, 722, 919, doi: 10.1088/0004-637X/722/1/919
* Galametz et al. (2014) Galametz, M., Albrecht, M., Kennicutt, R., et al. 2014, MNRAS, 439, 2542, doi: 10.1093/mnras/stu113
* Galliano et al. (2018) Galliano, F., Galametz, M., & Jones, A. P. 2018, ARA&A, 56, 673, doi: 10.1146/annurev-astro-081817-051900
* Galliano et al. (2011) Galliano, F., Hony, S., Bernard, J. P., et al. 2011, A&A, 536, A88, doi: 10.1051/0004-6361/201117952
* Gordon et al. (2007) Gordon, K. D., Engelbracht, C. W., Fadda, D., et al. 2007, PASP, 119, 1019, doi: 10.1086/522675
* Gordon et al. (2014) Gordon, K. D., Roman-Duval, J., Bot, C., et al. 2014, ApJ, 797, 85, doi: 10.1088/0004-637X/797/2/85
* Grevesse & Sauval (1998) Grevesse, N., & Sauval, A. J. 1998, Space Sci. Rev., 85, 161, doi: 10.1023/A:1005161325181
* Griffin et al. (2010) Griffin, M. J., Abergel, A., Abreu, A., et al. 2010, A&A, 518, L3, doi: 10.1051/0004-6361/201014519
* Groves et al. (2015) Groves, B. A., Schinnerer, E., Leroy, A., et al. 2015, ApJ, 799, 96, doi: 10.1088/0004-637X/799/1/96
* Guillet et al. (2018) Guillet, V., Fanciullo, L., Verstraete, L., et al. 2018, A&A, 610, A16, doi: 10.1051/0004-6361/201630271
* Harper et al. (2018) Harper, D. A., Runyan, M. C., Dowell, C. D., et al. 2018, Journal of Astronomical Instrumentation, 7, 1840008, doi: 10.1142/S2251171718400081
* Hensley & Draine (2017) Hensley, B. S., & Draine, B. T. 2017, ApJ, 836, 179, doi: 10.3847/1538-4357/aa5c37
* Hensley & Draine (2020a) —. 2020a, arXiv e-prints, arXiv:2009.00018. https://arxiv.org/abs/2009.00018
* Hensley & Draine (2020b) —. 2020b, ApJ, 895, 38, doi: 10.3847/1538-4357/ab8cc3
* Hensley & Draine (2021) —. 2021, arXiv e-prints. https://arxiv.org/abs/200x.xxxxx
* Hunter (2007) Hunter, J. D. 2007, Computing In Science & Engineering, 9, 90
* Jenkins (2009) Jenkins, E. B. 2009, ApJ, 700, 1299, doi: 10.1088/0004-637X/700/2/1299
* Jones et al. (2013) Jones, A. P., Fanciullo, L., Köhler, M., et al. 2013, A&A, 558, A62, doi: 10.1051/0004-6361/201321686
* Jones et al. (2017) Jones, A. P., Köhler, M., Ysard, N., Bocchio, M., & Verstraete, L. 2017, A&A, 602, A46, doi: 10.1051/0004-6361/201630225
* Kemper et al. (2004) Kemper, F., Vriend, W. J., & Tielens, A. G. G. M. 2004, ApJ, 609, 826, doi: 10.1086/421339
* Kennicutt et al. (2011) Kennicutt, R. C., Calzetti, D., Aniano, G., et al. 2011, PASP, 123, 1347, doi: 10.1086/663818
* Köhler et al. (2014) Köhler, M., Jones, A., & Ysard, N. 2014, A&A, 565, L9, doi: 10.1051/0004-6361/201423985
* Kreckel et al. (2019) Kreckel, K., Ho, I. T., Blanc, G. A., et al. 2019, ApJ, 887, 80, doi: 10.3847/1538-4357/ab5115
* Leger & Puget (1984) Leger, A., & Puget, J. L. 1984, A&A, 500, 279
* Lenz et al. (2017) Lenz, D., Hensley, B. S., & Doré, O. 2017, ApJ, 846, 38, doi: 10.3847/1538-4357/aa84af
* Leroy et al. (2008) Leroy, A. K., Walter, F., Brinks, E., et al. 2008, AJ, 136, 2782, doi: 10.1088/0004-6256/136/6/2782
* Leroy et al. (2009) Leroy, A. K., Walter, F., Bigiel, F., et al. 2009, AJ, 137, 4670, doi: 10.1088/0004-6256/137/6/4670
* Leroy et al. (2019) Leroy, A. K., Sandstrom, K. M., Lang, D., et al. 2019, ApJS, 244, 24, doi: 10.3847/1538-4365/ab3925
* Li & Draine (2001a) Li, A., & Draine, B. T. 2001a, ApJ, 554, 778, doi: 10.1086/323147
* Li & Draine (2001b) —. 2001b, ApJ, 554, 778, doi: 10.1086/323147
* Lianou et al. (2019) Lianou, S., Barmby, P., Mosenkov, A. A., Lehnert, M., & Karczewski, O. 2019, A&A, 631, A38, doi: 10.1051/0004-6361/201834553
* Makarov et al. (2014) Makarov, D., Prugniel, P., Terekhova, N., Courtois, H., & Vauglin, I. 2014, A&A, 570, A13, doi: 10.1051/0004-6361/201423496
* Mathis (1990) Mathis, J. S. 1990, ARA&A, 28, 37, doi: 10.1146/annurev.aa.28.090190.000345
* Mathis et al. (1983) Mathis, J. S., Mezger, P. G., & Panagia, N. 1983, A&A, 500, 259
* Mathis et al. (1977) Mathis, J. S., Rumpl, W., & Nordsieck, K. H. 1977, ApJ, 217, 425, doi: 10.1086/155591
* Moustakas et al. (2010) Moustakas, J., Kennicutt, Robert C., J., Tremonti, C. A., et al. 2010, ApJS, 190, 233, doi: 10.1088/0067-0049/190/2/233
* Müller et al. (2011) Müller, T., Nielbock, M., Balog, Z., Klaas, U., & Vilenius, E. 2011, PACS Photometer - Point-Source Flux Calibration, Tech. Rep. PICC-ME-TN-037, Herschel
* Mutschke & Mohr (2019) Mutschke, H., & Mohr, P. 2019, A&A, 625, A61, doi: 10.1051/0004-6361/201834805
* Nanni et al. (2020) Nanni, A., Burgarella, D., Theulé, P., Côté, B., & Hirashita, H. 2020, arXiv e-prints, arXiv:2006.15146. https://arxiv.org/abs/2006.15146
* Nersesian et al. (2019) Nersesian, A., Xilouris, E. M., Bianchi, S., et al. 2019, A&A, 624, A80, doi: 10.1051/0004-6361/201935118
* Onaka et al. (1996) Onaka, T., Yamamura, I., Tanabe, T., Roellig, T. L., & Yuen, L. 1996, PASJ, 48, L59, doi: 10.1093/pasj/48.5.L59
* Ott (2010) Ott, S. 2010, in Astronomical Society of the Pacific Conference Series, Vol. 434, Astronomical Data Analysis Software and Systems XIX, ed. Y. Mizumoto, K.-I. Morita, & M. Ohishi, 139. https://arxiv.org/abs/1011.1209
* Paradis et al. (2019) Paradis, D., Mény, C., Juvela, M., Noriega-Crespo, A., & Ristorcelli, I. 2019, A&A, 627, A15, doi: 10.1051/0004-6361/201935158
* Peimbert & Peimbert (2010) Peimbert, A., & Peimbert, M. 2010, ApJ, 724, 791, doi: 10.1088/0004-637X/724/1/791
* Pilbratt et al. (2010) Pilbratt, G. L., Riedinger, J. R., Passvogel, T., et al. 2010, A&A, 518, L1, doi: 10.1051/0004-6361/201014759
* Planck Collaboration et al. (2011) Planck Collaboration, Abergel, A., Ade, P. A. R., et al. 2011, A&A, 536, A24, doi: 10.1051/0004-6361/201116485
* Planck Collaboration Int. XVII (2014) Planck Collaboration Int. XVII. 2014, A&A, 566, A55, doi: 10.1051/0004-6361/201323270
* Planck Collaboration Int. XXII (2015) Planck Collaboration Int. XXII. 2015, A&A, 576, A107, doi: 10.1051/0004-6361/201424088
* Planck Collaboration XI (2018) Planck Collaboration XI. 2018, arXiv e-prints, arXiv:1801.04945. https://arxiv.org/abs/1801.04945
* Poglitsch et al. (2010) Poglitsch, A., Waelkens, C., Geis, N., et al. 2010, A&A, 518, L2, doi: 10.1051/0004-6361/201014535
* Reach et al. (2005) Reach, W. T., Megeath, S. T., Cohen, M., et al. 2005, PASP, 117, 978, doi: 10.1086/432670
* Relaño et al. (2020) Relaño, M., Lisenfeld, U., Hou, K. C., et al. 2020, A&A, 636, A18, doi: 10.1051/0004-6361/201937087
* Rémy-Ruyer et al. (2014) Rémy-Ruyer, A., Madden, S. C., Galliano, F., et al. 2014, A&A, 563, A31, doi: 10.1051/0004-6361/201322803
* Richey et al. (2013) Richey, C. R., Kinzer, R. E., Cataldo, G., et al. 2013, ApJ, 770, 46, doi: 10.1088/0004-637X/770/1/46
* Rieke et al. (2004) Rieke, G. H., Young, E. T., Engelbracht, C. W., et al. 2004, ApJS, 154, 25, doi: 10.1086/422717
* Robitaille (2019) Robitaille, T. 2019, APLpy v2.0: The Astronomical Plotting Library in Python, doi: 10.5281/zenodo.2567476
* Robitaille & Bressert (2012) Robitaille, T., & Bressert, E. 2012, APLpy: Astronomical Plotting Library in Python, Astrophysics Source Code Library. http://ascl.net/1208.017
* Roussel (2013) Roussel, H. 2013, PASP, 125, 1126, doi: 10.1086/673310
* Sandstrom et al. (2010) Sandstrom, K. M., Bolatto, A. D., Draine, B. T., Bot, C., & Stanimirović, S. 2010, ApJ, 715, 701, doi: 10.1088/0004-637X/715/2/701
* Sandstrom et al. (2013) Sandstrom, K. M., Leroy, A. K., Walter, F., et al. 2013, ApJ, 777, 5, doi: 10.1088/0004-637X/777/1/5
* Schlafly et al. (2016) Schlafly, E. F., Meisner, A. M., Stutz, A. M., et al. 2016, ApJ, 821, 78, doi: 10.3847/0004-637X/821/2/78
* Schruba et al. (2011) Schruba, A., Leroy, A. K., Walter, F., et al. 2011, AJ, 142, 37, doi: 10.1088/0004-6256/142/2/37
* Scott et al. (2015a) Scott, P., Asplund, M., Grevesse, N., Bergemann, M., & Sauval, A. J. 2015a, A&A, 573, A26, doi: 10.1051/0004-6361/201424110
* Scott et al. (2015b) Scott, P., Grevesse, N., Asplund, M., et al. 2015b, A&A, 573, A25, doi: 10.1051/0004-6361/201424109
* Scoville et al. (2017) Scoville, N., Lee, N., Vanden Bout, P., et al. 2017, ApJ, 837, 150, doi: 10.3847/1538-4357/aa61a0
* Skillman et al. (2020) Skillman, E. D., Berg, D. A., Pogge, R. W., et al. 2020, ApJ, 894, 138, doi: 10.3847/1538-4357/ab86ae
* Sofue et al. (1999) Sofue, Y., Tutui, Y., Honma, M., et al. 1999, ApJ, 523, 136, doi: 10.1086/307731
* Tanaka et al. (1996) Tanaka, M., Matsumoto, T., Murakami, H., et al. 1996, PASJ, 48, L53, doi: 10.1093/pasj/48.5.L53
* Tchernyshyov et al. (2015) Tchernyshyov, K., Meixner, M., Seale, J., et al. 2015, ApJ, 811, 78, doi: 10.1088/0004-637X/811/2/78
* Thi et al. (2020) Thi, W. F., Hocuk, S., Kamp, I., et al. 2020, A&A, 634, A42, doi: 10.1051/0004-6361/201731746
* Tully et al. (2009) Tully, R. B., Rizzi, L., Shaya, E. J., et al. 2009, AJ, 138, 323, doi: 10.1088/0004-6256/138/2/323
* Utomo et al. (2019) Utomo, D., Chiang, I. D., Leroy, A. K., Sandstrom, K. M., & Chastenet, J. 2019, ApJ, 874, 141, doi: 10.3847/1538-4357/ab05d3
* Van Der Walt et al. (2011) Van Der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science & Engineering, 13, 22
* Vílchez et al. (2019) Vílchez, J. M., Relaño, M., Kennicutt, R., et al. 2019, MNRAS, 483, 4968, doi: 10.1093/mnras/sty3455
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: https://doi.org/10.1038/s41592-019-0686-2
* Wakelam et al. (2017) Wakelam, V., Bron, E., Cazaux, S., et al. 2017, Molecular Astrophysics, 9, 1, doi: 10.1016/j.molap.2017.11.001
* Walter et al. (2008) Walter, F., Brinks, E., de Blok, W. J. G., et al. 2008, AJ, 136, 2563, doi: 10.1088/0004-6256/136/6/2563
* Weingartner & Draine (2001a) Weingartner, J. C., & Draine, B. T. 2001a, ApJS, 134, 263, doi: 10.1086/320852
* Weingartner & Draine (2001b) —. 2001b, ApJ, 548, 296, doi: 10.1086/318651
* Weingartner & Draine (2001c) —. 2001c, ApJ, 548, 296, doi: 10.1086/318651
* Werner et al. (2004) Werner, M. W., Roellig, T. L., Low, F. J., et al. 2004, ApJS, 154, 1, doi: 10.1086/422992
* Wolfire et al. (1995) Wolfire, M. G., Hollenbach, D., McKee, C. F., Tielens, A. G. G. M., & Bakes, E. L. O. 1995, ApJ, 443, 152, doi: 10.1086/175510
* Wright et al. (2010) Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868, doi: 10.1088/0004-6256/140/6/1868
* Ysard et al. (2015) Ysard, N., Köhler, M., Jones, A., et al. 2015, A&A, 577, A110, doi: 10.1051/0004-6361/201425523
* Zaritsky et al. (1994) Zaritsky, D., Kennicutt, Robert C., J., & Huchra, J. P. 1994, ApJ, 420, 87, doi: 10.1086/173544
* Zeegers et al. (2019) Zeegers, S. T., Costantini, E., Rogantini, D., et al. 2019, A&A, 627, A16, doi: 10.1051/0004-6361/201935050
* Zubko et al. (2004) Zubko, V., Dwek, E., & Arendt, R. G. 2004, ApJS, 152, 211, doi: 10.1086/382351
## Appendix A Calibration Details
We present here the details of the calibration methodology used in each
physical model, and a summary of the calibration constraints.
### A.1 Draine & Li (2007)
DL07 was calibrated using the following constraints. The extinction is
described in Weingartner & Draine (2001c) and uses the Fitzpatrick (1999)
extinction curves with a normalization of $N_{\rm H}/E(B-V)=5.8\times 10^{21}$
H cm$^{-2}$, or $A_{V}/N_{\rm H}=5.3\times 10^{-22}$ cm$^{2}$. The high-latitude cirrus
emission per H observed by DIRBE (Diffuse Infrared Background Experiment) and
FIRAS (Far Infrared Absolute Spectrophotometer; Arendt et al., 1998;
Finkbeiner et al., 1999) is used as a reference for the far-IR emission,
complemented by mid- and near-IR emission from IRTS (Infrared Telescope in
Space; Onaka et al., 1996; Tanaka et al., 1996). Weingartner & Draine (2001b)
adopt solar abundances from Grevesse & Sauval (1998), assuming 30% of carbon
is in the gas phase. They assume all silicon is depleted and has abundance
equal to the Solar value. DL07 uses $M_{\rm dust}/M_{\rm H}=1.0\times
10^{-2}$. The radiation field used in the Draine & Li (2007) model is based on
Mathis et al. (1983).
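As a quick consistency check (our own illustrative arithmetic, not part of the published calibration), the two DL07 normalizations quoted above are linked through $R_{V}=A_{V}/E(B-V)$, so that $A_{V}/N_{\rm H}=R_{V}/(N_{\rm H}/E(B-V))$ with the shared $R_{V}=3.1$:

```python
# Illustrative cross-check (ours, not from DL07): A_V/N_H follows from
# R_V = A_V/E(B-V) combined with the quoted N_H/E(B-V) normalization.
R_V = 3.1                        # shared by all models considered here
NH_over_EBV = 5.8e21             # H cm^-2 mag^-1
AV_over_NH = R_V / NH_over_EBV   # mag cm^2 per H atom
print(f"A_V/N_H = {AV_over_NH:.2e} cm^2")  # ~5.3e-22, as quoted in the text
```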
### A.2 Compiègne et al. (2011)
MC11 was calibrated using the following constraints. Extinction constraints
were taken from Mathis (1990) and Fitzpatrick (1999), including the
$R_{V}=3.1$ extinction curve in the UV-visible and a normalization of
$N_{\rm H}/E(B-V)=5.8\times 10^{21}$ H cm$^{-2}$. At $\lambda>25~{}\micron$, MC11 use
the MW cirrus emission per H observed by COBE-DIRBE and WMAP (integrated in
the Herschel and Planck/HFI bands; see MC11). At $\lambda\leq 25~{}\micron$,
a compilation of mid-IR observations of high latitude MW cirrus is used
(combining measurements from AROME, DIRBE, and ISOCAM; we refer the reader to
the Compiègne et al. paper for details). They scale the emission SED by 0.77
to account for ionized and molecular gas not included in the H column. The
allowed dust-phase abundances for C, O, and other dust components come from
the difference between Solar (or F/G star) abundances and the observed gas
phase abundances. In total, the $M_{\rm dust}$/$M_{\rm H}=1.02\times 10^{-2}$.
MC11 assumes the Mathis et al. (1983, $D_{\rm G}=10$ kpc) solar neighborhood
radiation field to heat the dust grains.
### A.3 THEMIS
THEMIS was calibrated using the same constraints as Compiègne et al. (2011)
presented in the previous Section, with the addition of the far-IR-to-
extinction relation $\tau_{250}/E(B-V)=5.8\times 10^{-4}~{}$ (Planck
Collaboration et al., 2011). In THEMIS, $M_{\rm dust}/M_{\rm H}=7.4\times
10^{-3}$.
### A.4 Hensley & Draine (2021)
The full set of observational constraints used to develop the model are
described in Hensley & Draine (2020a). In brief, the extinction curve is
primarily a synthesis of those of Fitzpatrick et al. (2019) in the UV and
optical, Schlafly et al. (2016) in the optical and near-infrared, and Hensley
& Draine (2020b) in the mid-infrared. The normalization $N_{\rm
H}/E\left(B-V\right)=8.8\times 10^{21}$ cm$^{-2}$ mag$^{-1}$ is used to normalize
extinction to the hydrogen column (Lenz et al., 2017). The infrared emission
in both total intensity and polarization are based on the analyses presented
in Planck Collaboration Int. XVII (2014), Planck Collaboration Int. XXII
(2015), and Planck Collaboration XI (2018), including the normalization to
$N_{\rm H}$. The solid phase interstellar abundances are re-determined in
Hensley & Draine (2020a) using a set of Solar abundances (Asplund et al.,
2009; Scott et al., 2015a, b), a measurement of Galactic chemical enrichment
from Solar twin studies (Bedell et al., 2018), and determination of the gas
phase abundances from absorption spectroscopy (Jenkins, 2009). $M_{\rm
dust}/M_{\rm H}=1.0\times 10^{-2}$ in this model. HD21 assumes the same
radiation field as the DL07 model to heat the dust grains, with updates from
Draine (2011).
Table 4: Physical Models Calibration Summary
| Draine & Li (2007) | Compiègne et al. (2011) | THEMIS | Hensley & Draine (2021)
---|---|---|---|---
Extinction curve | Fitzpatrick (1999) | Mathis (1990) | Mathis (1990) | Fitzpatrick et al. (2019)
| | | | Schlafly et al. (2016)
| | | | Hensley & Draine (2020b)
$N_{\rm H}/E(B-V)$ | $5.8\times 10^{21}$ H cm$^{-2}$ | $5.8\times 10^{21}$ H cm$^{-2}$ | $5.8\times 10^{21}$ H cm$^{-2}$ (additional constraint: $\tau_{250}/E(B-V)=5.8\times 10^{-4}$; Planck Collaboration et al., 2011) | $8.8\times 10^{21}$ cm$^{-2}$ mag$^{-1}$
Emission spectrum | Onaka et al. (1996) | compiled in Compiègne et al. (2011) | compiled in Compiègne et al. (2011) | Planck Collaboration Int. XVII (2014)
| Tanaka et al. (1996) | | | Planck Collaboration Int. XXII (2015)
| Arendt et al. (1998) | | | Planck Collaboration XI (2018)
| Finkbeiner et al. (1999) | | |
$M_{\rm d}/M_{\rm H}$ | $1.0\times 10^{-2}$ | $1.02\times 10^{-2}$ | $7.4\times 10^{-3}$ | $1.0\times 10^{-2}$
Radiation Field | Mathis et al. (1983) | Mathis et al. (1983) | Mathis et al. (1983) | Draine (2011)
Renormalization: emission constraint _only_ from Compiègne et al.
(2011) (corrected for molecular gas only, not ionized gas) and forcing $M_{\rm
d}/M_{\rm H}=6.6\times 10^{-3}$
$U_{\rm min}^{\rm MW}$ | 0.6 | 1.0 | 1.0 | 1.6
Normalization factor | 3.1 | 2.1 | 1.5 | 2.5
Note. — All models share $R_{V}=3.1$.
## Appendix B Fitted parameter maps
We present the spatial variations of the fitted parameters for all six models,
in Figures 13 – 17. The contours mark the 3$\sigma$ detection threshold. We
use the same scale for identical parameters, when possible.
Figure 13: Maps of fitted parameters. Top: simple-emissivity model, dust
temperature ($T_{\rm dust}$), total dust surface density ($\Sigma_{\rm dust}$)
and spectral index ($\beta$). Bottom: broken-emissivity model, dust
temperature ($T_{\rm dust}$), total dust surface density ($\Sigma_{\rm dust}$)
and second spectral index ($\beta_{2}$); the breaking wavelength is fixed
($\lambda_{\rm c}=300~{}\mu$m) as well as the first spectral index
($\beta=2$). Figure 14: Maps of the fitted parameters for the Draine & Li
(2007) model: minimum radiation field ($U_{\rm min}$), total dust surface
density ($\Sigma_{\rm dust}$), fraction of dust mass heated by a power-law
distribution of radiation field ($\gamma$), PAH fraction (mass in grains with
less than $10^{3}$ C atoms, $q_{\rm PAH}$), and scaling parameter of
surface brightness (5,000 K blackbody, $\Omega_{*}$). Figure 15: Maps of the
fitted parameters for the Compiègne et al. (2011) model: minimum radiation
field ($U_{\rm min}$), total dust surface density ($\Sigma_{\rm dust}$),
fraction of dust mass heated by a power-law distribution of radiation field
($\gamma$), PAH fraction (with respect to total dust mass, $f_{\rm PAH}$), and
scaling parameter of surface brightness (5,000 K blackbody, $\Omega_{*}$).
Figure 16: Maps of the fitted parameters for the Jones et al. (2013) model:
minimum radiation field ($U_{\rm min}$), total dust surface density
($\Sigma_{\rm dust}$), fraction of dust mass heated by a power-law
distribution of radiation field ($\gamma$), sCM20 fraction (small carbon
grains, $f_{\rm sCM20}$), and scaling parameter of surface brightness (5,000 K
blackbody, $\Omega_{*}$). Figure 17: Maps of the fitted parameters for the
Hensley & Draine (2021) model: minimum radiation field ($U_{\rm min}$), total
dust surface density ($\Sigma_{\rm dust}$), fraction of dust mass heated by a
power-law distribution of radiation field ($\gamma$), PAH fraction (mass in
grains with less than $10^{3}$ C atoms, $q_{\rm PAH}$), and scaling
parameter of surface brightness (5,000 K blackbody, $\Omega_{*}$).
## Appendix C Residual maps
We present the spatial variations of the fractional residuals
(Data$-$Model)/Model for all six models, in Figures 18 – 22. The contours mark the
3$\sigma$ detection threshold. The so-called “sub-millimeter” excess is
visible in most maps at SPIRE 500 (blue shade).
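For concreteness, the fractional residual plotted in these maps is computed per pixel as follows (a minimal sketch with made-up values, not the actual data):

```python
import numpy as np

# Minimal sketch of the fractional residual (Data - Model)/Model shown in
# Figures 18-22; the numbers here are made up for illustration.
data = np.array([10.0, 12.0, 8.0])    # observed surface brightnesses
model = np.array([9.5, 12.5, 6.5])    # best-fit model predictions
frac_res = (data - model) / model     # positive = excess over the model
print(np.round(frac_res, 3))
```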
Figure 18: Maps of the fractional residuals for the simple-emissivity and
broken-emissivity models. Figure 19: Maps of the fractional residuals for the
Draine & Li (2007) model. Figure 20: Maps of the fractional residuals for the
Compiègne et al. (2011) model. Figure 21: Maps of the fractional residuals for
the Jones et al. (2013) model. Figure 22: Maps of the fractional residuals for
the Hensley & Draine (2021) model.
## Appendix D Fits quality
We show the relative quality of the fits between each model. The value
displayed is the maximum likelihood, in arbitrary units. The simple-emissivity
and broken-emissivity models show the least dynamic range but never reach the
highest values of the physical models. For the physical models, we can clearly
see the H II regions showing fits with low confidence, likely related to the
issues mentioned in Section 5.7.
Figure 23: Maps of the reduced $\chi^{2}$ in each pixel. The H II regions show
the lowest confidence for the physical models, while the modified blackbodies
show good fits on the entire disk. The contour marks the 3-$\sigma$ detection
threshold.
# High order Coherence Functions and Spectral Distributions as given by the
Quantum Theory of Laser Radiation
Tao Peng Texas A&M University, College Station, Texas, 77843, USA Xingchen
Zhao Texas A&M University, College Station, Texas, 77843, USA Yanhua Shih
University of Maryland, Baltimore County, Baltimore, Maryland 21250, USA
Marlan O. Scully<EMAIL_ADDRESS>Texas A&M University, College Station,
Texas, 77843, USA Baylor University, Waco, 76706, USA Princeton University,
Princeton, New Jersey 08544, USA
###### Abstract
We propose and demonstrate a method for measuring the time evolution of the
off-diagonal elements $\rho_{n,n+k}(t)$ of the reduced density matrix obtained
from the quantum theory of the laser. The decay rates of the off-diagonal
matrix element $\rho_{n,n+k}(t)$ (k=2,3) are measured for the first time and
compared with that of $\rho_{n,n+1}(t)$, which corresponds to the linewidth of
the laser. The experimental results agree with the quantum theory of the
laser.
††preprint: APS/123-QED
## I Introduction
Quantum coherence effects in molecular physics are largely based on the
existence of the laser Pestov et al. (2007). Indeed, in most of our
experiments and calculations we take the laser to be an ideal monochromatic
light source. If the laser linewidth is important then we usually just include
a “phase diffusion” linewidth into the logic. But what if we are thinking
about higher order correlation effects in an ensemble of coherently driven
molecules? For example, photon correlation and light beating spectroscopy
involving Glauber second order correlation functions Cummins (2013); Glauber
(1963). Furthermore, third and higher order photon correlations of the laser
used to drive our molecular system can be important. The investigation of
higher order quantum laser noise is the focus of the present paper.
Fifty years ago the quantum theory of the laser (QTL) was developed using a
density matrix formalism Scully and Lamb Jr (1966). In the interesting
threshold region Haken (1966); DeGiorgio and Scully (1970) the steady state
laser photon statistics is given by the diagonal elements of the laser density
matrix as
$\displaystyle\rho_{n,n}=\mathfrak{N}\prod\limits^{n}_{m=0}[\alpha-\beta
m]/\gamma,$ (1)
where $\alpha$ is the linear gain, $\beta$ is the nonlinear saturation
coefficient, $\gamma$ is the cavity loss rate, and $\mathfrak{N}$ is the
normalization constant:
$\displaystyle\mathfrak{N}^{-1}=\sum\limits_{n}{\prod\limits^{n}_{m=0}[\alpha-\beta
m]/\gamma}.$ (2)
Eq. (1) is plotted in Fig. 1 where it is compared with a coherent state.
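Eqs. (1) and (2) are straightforward to evaluate numerically. The sketch below is our own illustration (not the authors' code), with parameter values chosen to mimic Fig. 1 — a laser 20 percent above threshold with $\bar{n}=(\alpha-\gamma)/\beta=200$ — and it shows that the distribution peaks near $\bar{n}$:

```python
import numpy as np

# Our own numerical evaluation of Eq. (1); parameters chosen to mimic
# Fig. 1: laser 20 percent above threshold with nbar = 200.
gamma = 1.0                      # cavity loss rate
alpha = 1.2 * gamma              # linear gain, 20 percent above threshold
beta = (alpha - gamma) / 200.0   # so nbar = (alpha - gamma)/beta = 200

n_max = 600
m = np.arange(n_max + 1)
# work in logs: log of prod_{m=0}^{n} (alpha - beta m)/gamma
log_rho = np.cumsum(np.log((alpha - beta * m) / gamma))
rho = np.exp(log_rho - log_rho.max())
rho /= rho.sum()                 # normalization constant N of Eq. (2)

print(int(np.argmax(rho)))       # peaks near n = nbar = 200
```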
The formalism developed in the QTL density matrix analysis has since been
successfully applied to many other physical systems such as the single-atom
maser(aka the micromaser) Meschede et al. (1985), the Bose-Einstein condensate
(aka the atom laser, see Table 1) Scully (1999), pion physics Hoang (1997),
etc. Other applications of the formalism have been developed recently and more
will likely emerge. Thus we are motivated to deepen our understanding of the
QTL by further analyzing and experimentally verifying the time dependence of
the off-diagonal elements $\rho_{n,n+k}(t)\equiv\rho^{(k)}_{n}(t)$. The diagonal
elements of the laser density matrix, for which $k=0$, have been well studied;
the off-diagonal elements have not. In particular $\rho^{(1)}_{n}(t)$ yields
the Schawlow-Townes laser linewidth. But what about the higher order
correlations $k=2,3\cdots$? That is the focus of the current paper.
## II Theory and Experiment
The off-diagonal elements vanish at steady state, regressing to zero as Scully
and Lamb Jr (1966)
$\displaystyle\rho^{(k)}_{n}(t)=\rho^{(k)}_{n}(0)\text{exp}(-k^{2}Dt)$ (3)
where $D=\gamma/\bar{n}$ is the Schawlow-Townes phase diffusion linewidth and
$\bar{n}=(\alpha-\gamma)/\beta$. The expectation value of the laser amplitude
operator is given by
$\displaystyle\langle\hat{E}(z,t)\rangle=\mathscr{E}_{0}\sin\kappa
z\sum_{n}{\rho^{(1)}_{n}(0)\sqrt{n+1}e^{-Dt}e^{i\nu t}},$ (4)
where $\nu$ is the center frequency of the laser field and the electric field
per photon is given by $\mathscr{E}_{0}=\sqrt{\hbar\nu/\epsilon_{0}V}$, where
$\epsilon_{0}$ is the permittivity of free space and V is the laser cavity
volume. The physics is explained in Fig. 2 and associated text.
Figure 1: Steady state photon distribution function for coherent (orange
dashed line) and laser radiation (blue solid line). The laser is taken to be
20 percent above threshold, $\langle n\rangle=200$.
As is discussed in the following, the second order off-diagonal elements are
given by the field operator averages
$\displaystyle\ \ \ \ \langle\hat{E}(z,t)\hat{E}(z,t)\rangle$ (5)
$\displaystyle=\mathscr{E}^{2}_{0}\sin^{2}\kappa
z\sum_{n}{\rho^{(2)}_{n}(0)\sqrt{(n+1)(n+2)}e^{-4Dt}e^{i2\nu t}},$ (6)
and the third order off-diagonal elements are given by
$\displaystyle\ \ \ \ \langle\hat{E}(z,t)\hat{E}(z,t)\hat{E}(z,t)\rangle$ (7)
$\displaystyle=\mathscr{E}^{3}_{0}\sin^{3}\kappa
z\small{\sum_{n}}{\rho^{(3)}_{n}(0)\sqrt{(n+1)(n+2)(n+3)}e^{-9Dt}e^{i3\nu
t}},$ (8)
where again the physics is explained in Fig. 3 and associated text.
Eq. (4) gives the time evolution associated with the first order off-diagonal
elements $\rho^{(1)}_{n}$, yielding the spectral profile of the laser. The
heterodyne method is usually adopted to measure the linewidth of the laser
Okoshi et al. (1980); Richter et al. (1986), in which case the center
frequency is shifted from optical frequency to the radio frequency range. A
natural way to measure the laser linewidth is to beat two almost identical but
uncorrelated lasers Muanzuala et al. (2015) such that the beat frequency
between the lasers is in the MHz range. The result, as seen from Eq. (16), is
twice the laser linewidth when the two independent lasers are nearly
identical.
Many experiments have been carried out to determine the linewidth Okoshi et
al. (1980) and photon statistics Arecchi (1965) of the laser. Other
experiments have measured the intensity correlation of the laser at threshold
Corti et al. (1973), revealing the influence of the intensity fluctuation on
the laser spectrum. However, to the best of our knowledge, no measurements
have been made of the higher order phase correlations ($k\geq 2$). Here we
measure the second and third correlation of the heterodyne signals from two
independent lasers, which yields the second and third order time evolution of
a laser above threshold. Specifically, we performed the following experiments:
the first set of experiments is to measure the spectral profile of the laser
beat note, i.e., allows us to measure the decay rate as shown in Eq. (4). The
other two sets of experiments determine the spectral profile of the second and
third order correlated beat notes; this allows us to measure the decay rates as
shown in Eq. (5) and Eq. (7).
| Laser | BEC
---|---|---
$\alpha$ | |
Linear stimulated emission gain | Rate of cooling due to interaction
| with walls times the number of atom N
$\beta$ | |
Nonlinear saturation due to the reabsorption | Nonlinearity parameter due to the constraint that
of photons generated by stimulated emission | there are N atoms in the BEC:
| | numerically equal to $\alpha/N$.
$\gamma$ | |
Loss rate due to photons absorbed | Loss rate due to photon absorption from the
in cavity mirrors etc. | thermal bath (walls) equal to $\alpha(T/T_{c})^{3}$.
Table 1: Parameters in laser and BEC systems.
The laser spectral measurement is conceptually illustrated in Fig. 2A. The
frequency difference between the two laser fields of laser 1 and 2 is
$\nu_{0}\equiv\nu_{1}-\nu_{2}$, $\nu_{1}$ and $\nu_{2}$ represent the center
frequencies of the laser 1 and 2, respectively. The lower level doublet may be
thought as hyperfine doublet whose dipole is driven at frequency $\nu_{0}$ and
detected by a pick-up coil. Fig. 2B illustrates the setup of the first set of
experiments. This is a typical heterodyne detection setup, the center
frequency between the two He-Ne lasers is in the MHz range. This difference
allows us to analyze the beat signal around a non-zero value hence the full
shape of the linewidth is obtained unambiguously. A non-polarizing
beamsplitter (BS) is used to mix the two laser beams. The beat signal is then
directed to the photodiode ($D1$) after the BS. A fast Fourier transform (FFT)
of the signal is performed by the spectrum analyzer (SA) giving the frequency
spectrum of the beat note.
Figure 2: (A) Conceptual illustration of laser spectral measurement by beating
two laser fields laser 1 and 2. The frequency difference between the two
fields is $\nu_{0}$. The lower level doublet may be thought of as a hyperfine
doublet whose dipole is driven at frequency $\nu_{0}$ and detected by a pick-
up coil (not shown) in the usual way. (B) Experimental setup used in measuring
the spectrum of the beat note between the laser and local oscillator. The beat
note signal is measured by the detector ($D1$) and analyzed by the spectrum
analyzer(SA). BS, non-polarizing beamsplitter.
For the first set of experiments, the first order coherence function Glauber
(1963); Scully and Lamb Jr (1966) is
$\displaystyle G^{(1)}(t)$
$\displaystyle=\mathrm{Tr}\\{\rho[(\hat{E}^{\dagger}_{1}(t)+\hat{E}^{\dagger}_{2}(t))(\hat{E}_{1}(t)+\hat{E}_{2}(t))]\\}$
(10)
$\displaystyle=\mathrm{Tr}\\{(\rho_{1}\otimes\rho_{2})[|\hat{E}_{1}(t)|^{2}+|\hat{E}_{2}(t)|^{2}+\hat{E}^{\dagger}_{1}(t)\hat{E}_{2}(t)+c.c.]\\}$
(11)
$\displaystyle=\mathscr{E}^{2}_{1}\mathrm{Tr}[\rho_{1}\hat{a}^{\dagger}_{1}(t)\hat{a}_{1}(t)]+\mathscr{E}^{2}_{2}\mathrm{Tr}[\rho_{2}\hat{a}^{\dagger}_{2}(t)\hat{a}_{2}(t)]$
(12)
$\displaystyle+\mathscr{E}_{1}\mathscr{E}_{2}\\{\mathrm{Tr}[(\rho_{1}\otimes\rho_{2})\hat{a}^{\dagger}_{1}(t)\hat{a}_{2}(t)]e^{i(\nu_{1}-\nu_{2})t}+c.c.\\},$
(13)
where $\rho=\rho_{1}\otimes\rho_{2}$ is the density operator of the system,
$\rho_{1}$ and $\rho_{2}$ represent the density operators of laser 1 and 2,
respectively.
Figure 3: Conceptual illustration of the second order (A) and third-order (B)
correlated spectral measurement. Again, the frequency difference between the
two laser fields is $\nu_{0}$. The lower level doublet may be thought as
hyperfine doublet (triplet) whose dipole is driven at frequency $2\nu_{0}$
($3\nu_{0}$) and detected by a pick-up coil.
From the above equation, we can see that the only terms that carry the beat note
frequency are
$\displaystyle\Gamma^{(1)}(t)=\mathscr{E}_{1}\mathscr{E}_{2}\mathrm{Tr}[(\rho_{1}\otimes\rho_{2})\hat{a}^{\dagger}_{1}(t)\hat{a}_{2}(t)]e^{i\nu_{0}t},$
(14)
with its complex conjugate which contributes to the $-\nu_{0}$ frequency
component. Under the condition that the two lasers are independent, we can
rewrite Eq.(14) as
$\displaystyle\Gamma^{(1)}(\nu_{0},t)=\mathscr{E}_{1}\sum_{n_{1}}{\sqrt{n_{1}+1}\rho^{(1)}_{n_{1}}(0)e^{-D_{1}t}}\times\mathscr{E}_{LO}\sum_{n_{2}}{\sqrt{n_{2}}\rho^{(-1)}_{n_{2}}(0)e^{-D_{2}t}}e^{i\nu_{0}t}.$
(15)
Taking the Fourier transform, we have a Lorentzian spectrum centered at the
beat frequency $\nu_{0}$ with a width $D^{\prime}=D_{1}+D_{2}$, which is
essentially twice the width of one laser
$\displaystyle
S_{\nu_{0}}(\omega)\propto\frac{D^{\prime}}{(\omega-\nu_{0})^{2}+(D^{\prime})^{2}}.$
(16)
As shown in Fig. 3, the higher order spectral measurements can be conceptually
understood in a similar way as the laser linewidth measurement. The frequency
difference between the two laser fields is again $\nu_{0}$. The lower level
doublet can be thought of as a hyperfine doublet whose dipole is driven at
frequency $2\nu_{0}$ or $3\nu_{0}$, respectively, and detected by a pick-up
coil. The second and third experiments measure the spectral profile of the
second and third order correlation of beat notes; the setup is shown in Fig.
4. We used the same two lasers to create the beat signal, where three
detectors $Di(i=1,2,3)$ are used. The outputs from the photodiodes are used as
inputs for a frequency mixer. The output from the mixer is then sent to the
spectrum analyzer and the frequency spectrum of the correlated signal is
obtained after the FFT. As shown in Fig. 4, this set of experiments measures
the laser field correlation that is governed by the time evolution of the
second and third order off-diagonal elements $\rho^{(2)}_{n}(t)$ and
$\rho^{(3)}_{n}(t)$, respectively. The quantity we now measure is determined
by the correlation of the heterodyne signals from the detectors as in Fig. 4. The
signal of interest at frequency $2\nu_{0}$ from the second order
coherence function is
$\displaystyle\Gamma^{(2)}(t)=\mathscr{E}^{2}_{1}\mathscr{E}^{2}_{2}\mathrm{Tr}(\rho_{1}\otimes\rho_{2})\hat{a}^{\dagger}_{1}(t)\hat{a}^{\dagger}_{1}(t)\hat{a}_{2}(t)\hat{a}_{2}(t)e^{i2\nu_{0}t},$
(17)
whose complex conjugate contributes to the $-2\nu_{0}$ frequency. The
correlated heterodyne signal is
$\displaystyle\Gamma^{(2)}(2\nu_{0},t)$
$\displaystyle=\mathscr{E}^{2}_{1}\sum_{n_{1}}{\rho^{(2)}_{n_{1}}(0)\sqrt{(n_{1}+2)(n_{1}+1)}e^{-4D_{1}t}}$
(18)
$\displaystyle\times\mathscr{E}^{2}_{2}\sum_{n_{2}}{\rho^{(-2)}_{n_{2}}(0)\sqrt{(n_{2}-1)n_{2}}e^{-4D_{2}t}}e^{i2\nu_{0}t}.$
(19)
Taking the Fourier transform, we get a Lorentzian spectral profile centered at
$2\nu_{0}$ with a width of $4D^{\prime}$
$\displaystyle
S_{2\nu_{0}}(\omega)\propto\frac{4D^{\prime}}{(\omega-2\nu_{0})^{2}+(4D^{\prime})^{2}}.$
(20)
Similarly, we have
$\displaystyle
S_{3\nu_{0}}(\omega)\propto\frac{9D^{\prime}}{(\omega-3\nu_{0})^{2}+(9D^{\prime})^{2}}.$
(21)
The main experimental results are shown in Fig. 5. All measurements were taken
with the laser operating at the same average output power level. The
resolution bandwidth (RBW) of the SA is 10 kHz, video bandwidth (VBW) is 30
kHz in all the measurements. For the sake of simplicity, the full width at
half maximum (FWHM) linewidth is taken as the -3 dB width of the measured
spectrum by considering only the Lorentzian fitting Muanzuala et al. (2015).
Figure 4: Schematic setup for measuring higher order spectral line
distribution up to 3rd order. Laser 1 and 2 : He-Ne lasers; P: polarizer; F:
filter; A: analyzer; BS: non-polarizing beamsplitter; Mixer: frequency mixer;
$D1$, $D2$, and $D3$, photodiode detectors. Figure 5: Experimental results
from the three sets of measurements. The bandwidth of the detectors is 50 MHz,
and the resolution bandwidth of the SA is 10 kHz. The black dots are experimental
data and the red curves are theory. (A) is the beat signal from $D1$, where
the FWHM is 107.9 kHz, averaged over 50 measurements. Theory is the Fourier transform
of the laser fields' time evolution ($e^{-D^{\prime}t}$) associated with
frequency $\nu_{0}$, as shown in Eq. (16); (B) is the correlated signal from $D1$
and $D2$, where the FWHM bandwidth is 420.6 kHz, averaged over 50 measurements. Theory
is the Fourier transform of the correlated laser fields' time evolution
($e^{-4D^{\prime}t}$) associated with frequency $2\nu_{0}$, as shown in Eq.
(20); (C) is the correlated signal from $D1$, $D2$, and $D3$, where the FWHM is
963.3 kHz, averaged over 50 measurements. Theory is the Fourier transform of the
correlated laser fields' time evolution ($e^{-9D^{\prime}t}$) associated with
frequency $3\nu_{0}$, as shown in Eq. (21).
Fig. 5A represents the data of the first set of experiments with an average of
50 measurements of beat note signal from $D1$. The theoretical fitting in the
red solid line is based on Eq. (16), and the FWHM is 107.9 kHz. Fig. 5B
represents the data of the second set of experiments with 50 measurements of
correlated beat note signals from $D1$ and $D2$. The theoretical fitting in
the red solid line is based on Eq. (20), and the FWHM is estimated to be 420.6
kHz. Fig. 5C represents the data of the third order experiments with 50
measurements from all three detectors. The theoretical fitting in the red
solid line is based on Eq. (21), and the FWHM is estimated to be 963.3 kHz.
First of all, we see that the linewidth obtained from the second order
correlation spectrum is essentially 4 times wider than the single beat
note linewidth, while the third order spectrum is 9 times wider than the
single beat note linewidth, validating our theoretical expectation.
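This scaling can be checked directly against the $k^{2}D$ decay of Eq. (3); the following is our own arithmetic on the FWHM values reported above:

```python
# Our own check of the reported FWHM values against the k^2 scaling of
# Eq. (3): the k = 2 and k = 3 widths should be ~4 and ~9 times the k = 1 width.
fwhm_1 = 107.9e3   # Hz, single beat note (Fig. 5A)
fwhm_2 = 420.6e3   # Hz, second order correlated beat note (Fig. 5B)
fwhm_3 = 963.3e3   # Hz, third order correlated beat note (Fig. 5C)
print(round(fwhm_2 / fwhm_1, 2), round(fwhm_3 / fwhm_1, 2))  # close to 4 and 9
```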
Secondly, we see that the theoretical curves fit the data well in the center
peak, but not as well at the tails. This is mainly due to other noise
sources that also contribute to the spectral profile. For the same
reason, we see that the single beat note signal can be fitted better than the
second and third order correlation signals; this is mainly because the
measured higher order spectral signal is close to the noise level of the
detection system. Using a more intense local oscillator and a more sensitive
detection system (detector and spectrum analyzer) should resolve this
issue. Nevertheless, our data confirm the Lorentzian spectral profile of the
signal and the time evolution described by Eq. (3), in the case of $k=1$,
$k=2$, and $k=3$.
## III Conclusion
In conclusion, we have studied the time evolution in the laser. We
particularly measured the bandwidth of the laser beat note and the bandwidth
of the correlated laser beat note, which reveal the evolution of the first,
second, and third order off-diagonal elements of the laser density operator.
The higher order spectra reveal the influence of the randomness in the phase
of the laser field due to quantum fluctuation. Experimental results agreed
with the QTL showing that the bandwidth of the third order and second order
spectral profile are nine times and four times wider than that of the first
order spectral profile, respectively.
## Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any
commercial or financial relationships that could be construed as a potential
conflict of interest.
## Author Contributions
TP, YS, and MOS discussed the design principle. TP, YS, and MOS derived the
theory. TP and XZ performed the experiment and data analysis. TP and MOS wrote
the paper. All the authors commented on the paper.
## Funding
Air Force Office of Scientific Research (Award No. FA9550-20-1-0366 DEF),
Office of Naval Research (Award No. N00014-20-1-2184), Robert A. Welch
Foundation (Grant No. A-1261), National Science Foundation (Grant No.
PHY-2013771), King Abdulaziz City for Science and Technology (KACST).
## Acknowledgments
We thank Z. H. Yi, R. Nessler, H. Cai, and J. Sprigg for helpful discussion.
# A Survey of Requirements
for COVID-19 Mitigation Strategies
Part II: Elicitation of Requirements
Wojciech Jamroga1,2
(1 Interdisciplinary Centre on Security, Reliability and Trust, SnT,
University of Luxembourg
2 Institute of Computer Science, Polish Academy of Sciences,
Warsaw, Poland )
###### Abstract
The COVID-19 pandemic has influenced virtually all aspects of our lives.
Across the world, countries have applied various mitigation strategies, based
on social, political, and technological instruments. We postulate that multi-
agent systems can provide a common platform to study (and balance) their
essential properties. We also show how to obtain a comprehensive list of the
properties by “distilling” them from media snippets. Finally, we present a
preliminary take on their formal specification, using ideas from multi-agent
logics.
Keywords: COVID-19, mitigation strategies, specification, agent logics
## 1 Introduction
COVID-19 has influenced virtually all aspects of our lives. Across the world,
countries applied wildly varying mitigation strategies for the epidemic,
ranging from minimal intrusion in the hope of obtaining “herd immunity”, to
imposing severe lockdowns on the other extreme. It seems clear at the first
glance what all those measures are trying to achieve, and what the criteria of
success are. But is it really that clear? Quoting an oft-repeated phrase, with
COVID-19 we fight _an unprecedented threat to health and economic stability_
Soltani et al. (2020). While fighting it, we must _protect privacy, equality
and fairness_ Morley et al. (2020) and _do a coordinated assessment of
usefulness, effectiveness, technological readiness, cyber security risks and
threats to fundamental freedoms and human rights_ Stollmeyer et al. (2020).
Taken together, this is hardly a straightforward set of goals and
requirements. Thus, paraphrasing Stollmeyer et al. (2020), one may ask:
What problem does a COVID mitigation strategy solve exactly?
Even a quick survey of news articles, manifestos, and research papers
published since the beginning of the pandemic reveals a diverse landscape of
postulates and opinions. Some authors focus on medical goals, some on
technological requirements; others are concerned by the economic, social, or
political impact of a containment strategy. The actual stance is often related
to the background of the author (in case of a researcher) or their information
sources (in case of a journalist). Moreover, the authors advocating a
particular aspect of the strategy most often neglect all the other aspects. We
propose that the field of multi-agent systems can offer a common platform to
study all the relevant properties, due to its interdisciplinary nature Weiss
(1999); Wooldridge (2002); Shoham and Leyton-Brown (2009), well developed
theories of heterogeneous agency Bratman (1987); Cohen and Levesque (1990);
Rao and Georgeff (1991); Wooldridge (2000); Broersen et al. (2001), and a
wealth of formal methods for specification and verification Dastani et al.
(2010); Shoham and Leyton-Brown (2009); Jamroga (2015).
This still leaves the question of how to gather the _actual_ goals and
requirements for a COVID-19 mitigation strategy. One way to achieve it is to
look at what is considered relevant by the general public, and referred to in
the media. To this end, we collected a number of news quotes on the topic,
ordered them thematically and with respect to the type of concern, and
presented in Jamroga et al. (2020). Here, we take the news clips from Jamroga
et al. (2020), and distill a comprehensive list of goals, requirements, and
most relevant risks. The list is presented in Section 2. In Section 3, we make
the first step towards a formalization of the properties by formulas of multi-
agent logics. We conclude in Section 4.
Besides potential input to the design of anti-COVID-19 strategies, the main
contribution of this paper is methodological: we demonstrate how to obtain a
comprehensive and relatively unbiased specification of properties for complex
MAS by searching for hints in the public space.
## 2 Extracting Goals and Requirements from News Clips
Specification of properties is probably the most neglected part of formal
verification for MAS. The research on formal verification usually concentrates
on defining the decision problem, establishing its theoretical properties, and
designing algorithms that solve the problem at an abstract level Dastani et
al. (2010). Fortunately, the algorithms are more and more often implemented in
the form of a publicly available model-checker Alur et al. (2000); Lomuscio et
al. (2017); Behrmann et al. (2004); Kant et al. (2015); Kurpiewski et al.
(2019). The tools come with examples of how to model the behavior of a system,
but writing the input formulas is generally considered easy. The big question,
however, is: _Where do the formulas come from?_ In a realistic multi-agent
scenario, it is not clear at all.
Mitigating COVID-19 illustrates the point well. Research on mitigation
measures is typically characterized by: (a) strong focus on the native domain
of the authors, and (b) focus on the details, rather than the general picture.
In order to avoid “overlooking the forest for the trees,” we came up with a
different methodology. We looked for relevant phrases that appeared in the
media, with no particular method of source selection Jamroga et al. (2020).
Then, we extracted the properties, and whenever possible generalized
statements on specific measures to the mitigation strategy in general.
Finally, we sorted them thematically, and divided them into three categories: _goals_ ,
additional _requirements_ , and potential _risks and threats_.
While most of the collected snippets focus on digital contact tracing, the
relevance of the requirements goes clearly beyond that, and applies to all the
aspects of this epidemic, as well as the ones that may happen in the future.
### 2.1 Epidemiological and Health-Related Concerns
COVID-19 is first and foremost a threat to people’s health and lives.
Accordingly, we begin with requirements related to this aspect of mitigation
strategies.
#### 2.1.1 Epidemiological Goals
The goal of the mitigation strategy in general, and digital measures in
particular, is to:
1. (i)
provide an _epidemic response_ Soltani et al. (2020)
2. (ii)
_bring the pandemic under control_ Morley et al. (2020)
3. (iii)
_slow the spread of the virus_ Woodhams (2020); NCS (2020); Soltani et al.
(2020); Bicheno (2020); hel (2020); Ilves (2020)
4. (iv)
_prevent deaths_ AFP (2020)
5. (v)
_reduce the reproduction rate_ of the virus, i.e., how many people are
infected by someone with the virus AFP (2020).
The specific goals of digital measures are to:
1. (i)
_trace the spread of the virus_ and _identify dangerous Covid-19 clusters_
Ilves (2020)
2. (ii)
_find potential new infections_ Timberg (2020)
3. (iii)
_register contacts between potential carriers and those who might be infected_
Ilves (2020)
4. (iv)
_deter people from breaking quarantine_ Clarance (2020)
Requirements:
1. (1)
The efforts must meet _public health needs_ best Soltani et al. (2020); Ilves
(2020).
2. (2)
Digital measures should be a _component of the epidemic response_ Soltani et
al. (2020), and _enhance traditional forms of contact tracing_ Timberg (2020)
3. (3)
They should be designed to _help the health authorities_ hel (2020).
#### 2.1.2 Effectiveness of Epidemic Response
Requirements:
1. (1)
The strategy should be _effective_ Soltani et al. (2020); Stollmeyer et al.
(2020)
2. (2)
It should _make a difference_ Burgess (2020).
Risks and threats:
1. (a)
_Inaccurate detection_ of carriers and infected people due to the limitations
of the technology and the underlying model of human interaction Soltani et al.
(2020)
2. (b)
Specifically, this may _adversely impact relaxation of lockdowns_ Woodhams
(2020)
3. (c)
Misguided _assurance_ that going out is safe Soltani et al. (2020).
#### 2.1.3 Information Flow to Counter the Epidemic
The strategy should support rapid identification and notification of the most
concerned. That is, it should allow:
1. (1)
to _identify people who might have been exposed to the virus_ Zastrow (2020)
2. (2)
to _alert those people_ Morley et al. (2020); hel (2020); Timberg (2020); POL
(2020).
3. (3)
The identification and notification must be _rapid_ Zastrow (2020); Morley et
al. (2020).
#### 2.1.4 Monitoring Pandemic and Containment Strategy
The containment strategy should enable:
1. (1)
_monitoring the state of the pandemic_ , e.g., the outbreaks and the spread of
the virus POL (2020); Frasier (2020)
2. (2)
_monitoring the behavior of people_ , in particular if they are following the
rules Scott and Wanat (2020)
3. (3)
to _monitor the effectiveness of the strategy_ Davies (2020).
#### 2.1.5 Tradeoffs
There are _tradeoffs_ between effective containment of the epidemic and other
concerns, such as _privacy_ and protection of _fundamental freedoms_ McCarthy
(2020); Clarance (2020); POL (2020); Ilves (2020). E.g., effective monitoring
is often at odds with privacy Davies (2020). The strategy should
1. (1)
_strike the right balance_ between different concerns Ilves (2020).
We will see more tradeoff-related requirements in the subsequent sections.
### 2.2 Economic and Social Impact
Most measures to contain the epidemic are predominantly social (cf. lockdown),
and have strong social and economic impact.
#### 2.2.1 Economic Stability
The containment strategy should:
1. (1)
minimize the _cost to local economies_ and the negative impact on _economic
growth_ Soltani et al. (2020); AFP (2020)
2. (2)
allow for _return to normal economy and society_ and make resumption of
economic and social activities _safer_ Timberg (2020); Taylor (2020).
#### 2.2.2 Social and Political Impact
The containment strategy (and digital measures in particular) should:
1. (1)
_ease lockdowns_ and _home confinement_ Soltani et al. (2020); Stollmeyer et
al. (2020); Zastrow (2020); Taylor (2020)
2. (2)
minimize adverse impact on _social relationships_ and _personal well-being_
Soltani et al. (2020)
3. (3)
_prohibit economic and social discrimination_ on the basis of information and
technology being part of the strategy Soltani et al. (2020)
4. (4)
_protect the communities_ that can be harmed by the collection and
exploitation of personal data Soltani et al. (2020).
Detailed requirements:
1. (1)
Surveillance technologies should not become _compulsory for public and social
engagements_ , with unaffected individuals _restricted from participating in
social and economic activities_ Soltani et al. (2020).
Risks and threats:
1. (a)
_Little knowledge_ about social impact of the measures O’Neill et al. (2020)
2. (b)
_Discrimination_ and creation of _social divides_ Soltani et al. (2020); Mat
(2020)
3. (c)
_Disinformation_ and _information abuse_ Soltani et al. (2020); Woodhams
(2020)
4. (d)
Providing a _false sense of security_ Soltani et al. (2020)
5. (e)
_Political manipulation_ , creating _social unrest_ , and _dishonest
competition_ by false reports of coronavirus Soltani et al. (2020)
6. (f)
Too much political influence of IT companies on _the decisions of sovereign
democratic countries_ Ilves (2020).
#### 2.2.3 Costs, Human Resources, Logistics
Requirements:
1. (1)
The _financial cost_ of the measures should be minimized Hern (2020)
2. (2)
Minimization of the involved _human resources_ Scott and Wanat (2020); Soltani
et al. (2020)
3. (3)
_Timeliness_ Hern (2020)
4. (4)
_Coordination_ between different institutions and authorities Tahir and Lima
(2020); Eur (2020), including the establishment of _common standards_ Tahir
and Lima (2020).
### 2.3 Ethical and Legal Aspects
In this section, we look at requirements that aim at the long-term robustness
and resilience of the social structure.
#### 2.3.1 Ethical and Legal Requirements
1. (1)
The mitigation strategy must be _ethically justifiable_ Morley et al. (2020)
2. (2)
The measures should be _necessary_ , _proportionate_ , _legitimate_ , _just_ ,
_scientifically valid_ , and _time-bound_ Morley et al. (2020); Woodhams
(2020); Oslo (2020); Scott and Wanat (2020); Mat (2020)
3. (3)
They should not be _invasive_ Clarance (2020) and must not be done at the
expense of _individual civil rights_ Bicheno (2020); O’Neill et al. (2020);
Mat (2020)
4. (4)
Means of protection should be _available to anyone_ Morley et al. (2020)
5. (5)
They should be _voluntary_ O’Neill et al. (2020); NCS (2020)
6. (6)
The measures must _comply with legal regulations_ Mat (2020); McCarthy (2020);
Wodinsky (2020)
7. (7)
_Implementation_ and _impact_ must also be considered Morley et al. (2020);
Woodhams (2020)
8. (8)
_Impact assessment_ should be _conducted_ and _made public_ Mat (2020).
#### 2.3.2 Risks and Threats
1. (a)
Serious and long-lasting _harms to fundamental rights and freedoms_ Morley et
al. (2020)
2. (b)
Costs of _not devoting resources to something else_ Morley et al. (2020)
3. (c)
Measures designed and implemented _without adequate scrutiny_ Woodhams (2020)
4. (d)
Measures that support _extensive physical surveillance_ Woodhams (2020)
5. (e)
_Mandatory_ use of digital measures, _collecting sensitive information_ ,
_sharing the data_ with the government Clarance (2020); Zastrow (2020)
6. (f)
_Censorship practices_ to silence critics and control the flow of information
Woodhams (2020).
### 2.4 Privacy and Data Protection
Privacy-related issues for COVID-19 mitigation strategies have triggered
heated discussion, and at some point gained much media coverage. This is
understandable, since privacy and data protection is an important aspect of
medical information flow, even in ordinary times. Moreover, the IT measures
against COVID-19 are usually designed by computer scientists and specialists,
for whom security requirements are relatively easy to identify and understand.
#### 2.4.1 General Privacy Requirements
1. (1)
The strategy should be designed with _privacy_ and _information security_ in
mind Soltani et al. (2020); Timberg (2020); O’Neill et al. (2020)
2. (2)
It should _mitigate privacy concerns_ inherent in a _technological approach_
Soltani et al. (2020)
3. (3)
It should be _anonymous_ under data protection laws, i.e., it cannot _lead to
the identification of an individual_ Burgess (2020)
4. (4)
The _information_ about users should be _protected at all times_ NCS (2020)
5. (5)
The design should include recommendations for _how back-end systems should be
secured_ , and identify _vulnerabilities_ as well as _unintended consequences_
Soltani et al. (2020).
Risks and Threats:
1. (a)
Lack of _clear privacy policies_ Woodhams (2020); Eisenberg (2020); O’Neill et
al. (2020)
2. (b)
_Exploitation of personal information_ by authorities or third parties
Eisenberg (2020); Woodhams (2020); Garthwaite and Anderson (2020), in
particular _live or near-live tracking of users’ locations_ and _linking
sensitive personal information to an individual_ Garthwaite and Anderson
(2020)
3. (c)
_Linking different datasets_ at some point in the future Wodinsky (2020)
4. (d)
Alerts can be _too revealing_ BBC (2020)
5. (e)
It may be possible to work out _who is associating with whom_ McCarthy (2020).
#### 2.4.2 Data Protection and Potential Misuse of Data
Here, the key question is: _What data is collected_ and _who is it shared
with_? O’Neill et al. (2020); Soltani et al. (2020) This leads to the
following requirements:
1. (1)
Clear and reasonable _limits on the data collection types_ Tahir and Lima
(2020); Clarance (2020); NCS (2020); Soltani et al. (2020); Timberg (2020)
2. (2)
Limitations on _how the data is used_ O’Neill et al. (2020)
3. (3)
In particular, the data is to be _used strictly for disease control_ and not
_shared with law enforcement agencies_ Clarance (2020); Taylor (2020)
4. (4)
Less _state access_ and _control_ over _user data_ Bicheno (2020)
5. (5)
Data collection should be _minimized_ O’Neill et al. (2020) and based on
_informed consent_ of the participants Ilves (2020)
6. (6)
Giving access to one’s data should be _voluntary_ O’Neill et al. (2020)
7. (7)
One should be able to _delete their personal information_ at any time hel
(2020); McCarthy (2020)
8. (8)
One should have the _right to access their own data_ hel (2020); McCarthy
(2020)
9. (9)
For digital measures, the user should be able to _remove the software_ and
_disable more invasive features_ hel (2020).
Risks and threats:
1. (a)
Data storage that can be _hacked_ and _exploited_ Davies (2020); Zastrow
(2020); Woodhams (2020)
2. (b)
_Data breaches_ due to insider threats Eisenberg (2020)
3. (c)
_Function creep_ and _state surveillance_ Zastrow (2020)
4. (d)
_Sharing data_ across agencies or _selling_ to a third party Eisenberg (2020);
Woodhams (2020)
5. (e)
Integration with _commercial services_ Woodhams (2020).
#### 2.4.3 Sunsetting, Safeguards, and Monitoring
Requirements:
1. (1)
Sunsetting: the measures should be _terminated_ as soon as possible Scott and
Wanat (2020); Soltani et al. (2020); hel (2020)
2. (2)
Data should be eventually or even periodically _destroyed_ Scott and Wanat
(2020); O’Neill et al. (2020); hel (2020); McCarthy (2020); Soltani et al.
(2020); Timberg (2020), in particular _when it is no longer needed to help
manage the spread of coronavirus_ NCS (2020)
3. (3)
_Transparency_ of data collection O’Neill et al. (2020)
4. (4)
There should be clear _policies to prevent abuse_ O’Neill et al. (2020)
5. (5)
Privacy must be backed up with clear lines of _accountability_ and processes
for _evaluation_ and _monitoring_ Wodinsky (2020)
6. (6)
_Judicial oversight_ must be provided Soltani et al. (2020)
7. (7)
Safeguards should be backed by _an independent figure_ Scott and Wanat (2020).
Risks and threats:
1. (a)
Surveillance might _continue to be used_ after the threat of the coronavirus
recedes Garthwaite and Anderson (2020)
2. (b)
Data can _stay with the government_ longer than necessary Scott and Wanat
(2020).
#### 2.4.4 Impact of Privacy
Requirements:
1. (1)
People must _get the information they need_ to protect themselves and others
BBC (2020)
2. (2)
There must be protections against _economic and social discrimination_ based
on _information_ and _technology_ designed to fight the pandemic, in
particular with respect to communities vulnerable to _collection and
exploitation of personal data_ Soltani et al. (2020)
3. (3)
Information should be used in such a way that people who fear being judged
will not _put other people in danger_ BBC (2020).
Risks and threats:
1. (a)
_Fear of social stigma_ BBC (2020)
2. (b)
Online _judgement_ and _ridicule_ BBC (2020).
#### 2.4.5 Privacy vs. Epidemiological Efficiency
There is a tradeoff between protecting privacy vs. collecting and processing
all the information that can be useful in fighting the epidemic:
* •
Privacy hinders _making the best possible use of the data_ , including
_analysis of the population_ , _contact matching_ , _modeling the network of
contacts_ , enabling _epidemiological insights_ such as _revealing clusters_
and _superspreaders_ , and providing _advice to people_ McCarthy (2020);
Zastrow (2020); Taylor (2020)
* •
Privacy-preserving solutions put users in _more control of their information_
and require _no intervention from a third party_ McCarthy (2020).
The relationship is not simply antagonistic, though:
* •
Privacy is instrumental in building _trust_. Conversely, lack of privacy
undermines trust, and may _hinder the epidemiological, economic, and social
effects of the mitigation activities_ Eisenberg (2020).
#### 2.4.6 Reasonable Privacy
While it might be necessary to waive users’ privacy in the short term in order
to contain the epidemic, one must look for mechanisms such that
1. (1)
_exploiting the risks would require significant effort by the attackers for
minimal reward_ Zastrow (2020).
### 2.5 User-Related Aspects
The measures must be adopted and followed by the people, in order to make them
effective.
#### 2.5.1 User Incentives
Goals:
1. (i)
_High acceptance rate_ for the mitigation measures Timberg (2020).
2. (ii)
_Creating incentives_ and overcoming _incentive problems_ for individual
people to adopt the strategy Soltani et al. (2020)
Risks and threats:
1. (a)
Lack of _immediate benefits_ for the participants Soltani et al. (2020)
2. (b)
Perceived _privacy_ and _security risks_ Timberg (2020)
3. (c)
Some measures can _divert attention from more important measures_ , and _make
people less alert_ Szymielewicz et al. (2020)
4. (d)
Creating _false sense of security_ from the pandemic Frasier (2020).
Countermeasures:
1. (a)
Pointing out _indirect benefits_ (e.g., opening of the schools and businesses,
reviving the national economy) Soltani et al. (2020)
2. (b)
Reliance on _personal responsibility_ Stollmeyer et al. (2020).
#### 2.5.2 Adoption and Its Impact
Requirements:
1. (1)
_Enough people_ should _download_ and _use_ the app to make it _effective_
Timberg (2020); O’Neill et al. (2020); Zastrow (2020); Bezat (2020). Note:
this requirement is _graded_ rather than binary O’Neill (2020); Hinch et al.
(2020).
Risks and threats:
1. (a)
Lack of users’ _trust_ Burgess (2020); Eisenberg (2020), see also the
connection between privacy and trust in Section 2.4.5
2. (b)
Lack of _social knowledge_ and _empathy_ by the authorities POL (2020).
### 2.6 Technological Aspects
General requirements:
1. (1)
The concrete measures and tools must be _operational_ McCarthy (2020); Scott
and Wanat (2020)
2. (2)
In particular, they should be _compatible_ with their environment of
implementation Wodinsky (2020)
3. (3)
Design and implementation should be _transparent_ O’Neill et al. (2020); SDZ
(2020).
Specific requirements for digital measures:
1. (1)
They should be _compatible with most available devices_ Wodinsky (2020)
2. (2)
Reasonable _use of battery_ Wodinsky (2020)
3. (3)
_Usable interface_ Wodinsky (2020)
4. (4)
_Accurate measurements_ of how close two devices are Zastrow (2020)
5. (5)
_Cross-border interoperability_ Cyb (2020)
6. (6)
Possibility to _verify the code_ by the public and experts SDZ (2020).
### 2.7 Evaluation and Learning for the Future
COVID-19 mitigation activities should be rigorously assessed. Moreover, their
outcomes should be used to extend our knowledge about the pandemic, and better
defend ourselves in the future. The main goal here is:
1. (i)
to use the collected data in order to _develop efficient infection control
measures_ and _gain insight into the effect of changes to the measures for
fighting the virus_ hel (2020); McCarthy (2020).
Requirements:
1. (1)
A _review_ and _exit strategy_ should be defined Morley et al. (2020)
2. (2)
Before implementing the measures, an _institutional assessment_ is needed of
their _usefulness, effectiveness, technological readiness, cyber-security
risks and threats to fundamental freedoms and human rights_ Stollmeyer et al.
(2020)
3. (3)
After the pandemic, there must be _the society’s assessment_ whether the
strategy has been effective and appropriate BBC (2020)
4. (4)
The _assessments_ should be conducted _by an independent body_ at _regular
intervals_ Morley et al. (2020).
## 3 Towards Formal Specification
Here, we briefly show how the requirements presented in Section 2 can be
rewritten in a more formal way. To this end, we use _modal logics for
distributed and multi-agent systems_ that have been in constant development
for over 40 years Emerson (1990); Fagin et al. (1995); Wooldridge (2000);
Broersen et al. (2001); Alur et al. (2002); Bulling et al. (2015). Note that
the following specifications are only _semi_ -formal, as we do not fix the
models nor give the precise semantics of the logical operators and atomic
predicates. We leave that step for the future work.
### 3.1 Temporal and Epistemic Properties
The simplest kind of requirements are those that refer to achievement or
maintenance of a particular state of affairs. Typically, they can be expressed
by formulas of the branching-time logic $\mathbf{CTL^{\star}}$ Emerson (1990),
with path quantifiers $\mathsf{E}$ (_there is a path_), $\mathsf{A}$ (_for all
paths_), and temporal operators $\mathrm{X}\,$ (_in the next moment_),
$\mathrm{F}\,$ (_sometime from now on_), $\mathrm{G}\,$ (_always from now
on_), and $\,\mathrm{U}\,$ (_until_). For example, goal (ii) in Section 2.1
can be tentatively rewritten as the $\mathbf{CTL^{\star}}$ formula
$\mathsf{A}\mathrm{F}\,\textsf{control-pandemic},$
saying that, for all possible execution paths, control-pandemic must
eventually hold. (In fact, a better specification is given by
$\mathsf{A}\mathrm{F}\,\mathrm{G}\,\textsf{control-pandemic}$, saying that
the pandemic is not only brought, but also kept, under control from some point
on.) Similarly, goal (iii) can be expressed by the formula $\forall
n\,.\,(\textsf{R0}=n)\rightarrow\mathsf{A}\mathrm{F}\,(\textsf{R0}<n)$. It
is easy to see that the latter requirement is more precise than the former.
Moreover, goal (iv) can be captured by
$\mathsf{A}\mathrm{G}\,(\textsf{\#deaths}<k)$, for a reasonably chosen $k$.
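As an illustration of how such a temporal property could be evaluated mechanically, here is a minimal sketch of the standard fixpoint computation for $\mathsf{A}\mathrm{F}\,p$ over a toy Kripke structure. The states, transitions, and the control-pandemic label below are purely illustrative, not taken from any model in this paper:

```python
# Toy CTL model checking: compute the states satisfying AF p
# ("on all paths, eventually p") by a least fixpoint.

def check_AF(states, succ, holds):
    """AF p = p OR (the state has successors AND all of them satisfy AF p);
    iterate until the satisfying set stabilizes."""
    sat = {s for s in states if holds(s)}
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in sat and succ[s] and all(t in sat for t in succ[s]):
                sat.add(s)
                changed = True
    return sat

# A 4-state model: from s0 every path eventually reaches s2, where
# 'control-pandemic' holds; s3 loops forever without ever reaching it.
states = {"s0", "s1", "s2", "s3"}
succ = {"s0": {"s1", "s2"}, "s1": {"s2"}, "s2": {"s2"}, "s3": {"s3"}}
labels = {"s2": {"control-pandemic"}}
holds = lambda s: "control-pandemic" in labels.get(s, set())

print(sorted(check_AF(states, succ, holds)))  # ['s0', 's1', 's2']
```

Model checkers such as those cited in Section 3.5 implement essentially this fixpoint scheme, only symbolically and for much richer logics.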
The identification and monitoring aspects can be expressed by a combination of
$\mathbf{CTL^{\star}}$ with epistemic operators $K_{a}\varphi$ (_“ $a$ knows
that $\varphi$”_). For example, the information flow requirement (1) in
Section 2.1.3 can be transcribed as
$\textsf{exposed}_{i}\rightarrow\mathsf{A}\mathrm{F}\,K_{a}\textsf{exposed}_{i}$,
where $a$ is the name of the agent (or authority) supposed to identify the
vulnerable people. A more faithful transcription can be obtained using the
past-time operator $\mathrm{F}\,^{-1}$ (_sometime in the past_) Laroussinie
and Schnoebelen (1995) with
$exp_{i}\rightarrow\mathsf{A}\mathrm{F}\,K_{a}\,exp_{i},\quad\text{where }exp_{i}\equiv\mathrm{F}^{-1}\,\textsf{exposed}_{i},$
saying that if $\textsf{exposed}_{i}$ held at some point in the past, then
$a$ will eventually know about it. Likewise, the information flow requirement
(2) can be captured by
$K_{a}exp_{i}\rightarrow\mathsf{A}\mathrm{F}\,K_{i}exp_{i}$. Similar temporal-
epistemic formulas may be used to express some privacy-related requirements in
Section 2.4, e.g., $\forall j\neq i\,.\,\mathsf{A}\mathrm{G}\,(\neg
K_{j}(x=i)\land\neg K_{j}(x\neq i))$ tentatively captures the anonymity of
person $i$ w.r.t. the database entry represented by $x$, cf. requirement
2.4.1.(3).
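The epistemic pattern above admits an equally small sketch: in the standard possible-worlds semantics, agent $j$ knows $\varphi$ iff $\varphi$ holds in every world $j$ cannot distinguish from the actual one. The worlds and the indistinguishability relation below are illustrative assumptions, not part of the (still unfixed) models of this paper:

```python
# Sketch: possible-worlds check of the anonymity pattern
# ~K_j(x = i)  AND  ~K_j(x != i).

def knows(phi, world, indist):
    """Agent knows phi at 'world' iff phi holds in every world
    the agent cannot distinguish from it."""
    return all(phi(w) for w in indist[world])

# Three candidate owners of database entry x; observer j cannot tell
# the worlds apart, so j considers every owner possible.
worlds = {"w_alice", "w_bob", "w_carol"}
indist = {w: set(worlds) for w in worlds}

def anonymous(i, actual):
    """Owner i is anonymous to j: j neither knows x = i nor x != i."""
    is_i = lambda w: w == "w_" + i
    not_i = lambda w: w != "w_" + i
    return not knows(is_i, actual, indist) and not knows(not_i, actual, indist)

print(anonymous("alice", "w_alice"))  # True: j cannot pin down the owner
```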
### 3.2 Strategic Requirements
The temporal and epistemic patterns presented above can be refined by
replacing the path quantifiers $\mathsf{A},\mathsf{E}$ with the strategic
operators $\langle\!\langle A\rangle\!\rangle$ of the logic
$\mathbf{ATL^{*}}$ Alur et al. (2002); Bulling et al. (2015),
where $\langle\!\langle A\rangle\!\rangle\varphi$ says
that _“the agents in $A$ can bring about $\varphi$”_. For example, the
information flow requirement 2.1.3.(2) can be rewritten as
$K_{a}(\mathrm{F}^{-1}\,\textsf{exposed}_{i})\rightarrow\langle\!\langle a\rangle\!\rangle\mathrm{F}\,\langle\!\langle i\rangle\!\rangle\mathrm{F}\,K_{i}(\mathrm{F}^{-1}\,\textsf{exposed}_{i}),$
saying that if the health authority $a$ knows that $i$ was exposed, then $a$
can provide $i$ with the information sufficient to realize that. Strategic
operators are also useful for the monitoring requirements in Section 2.1.4,
e.g., $\langle\!\langle a\rangle\!\rangle\mathrm{G}\,(K_{a}\textsf{outbreak}\lor
K_{a}\neg\textsf{outbreak})$ can be used for requirement 2.1.4.(1). The same
applies to the access control properties in Section 2.4.2, e.g., requirement
(6) can be formalized by the formula $\forall d\in data(i)\,\forall j\neq
i\,.\,\langle\!\langle i\rangle\!\rangle\mathrm{F}\,\textsf{access}(j,d)\land\langle\!\langle i\rangle\!\rangle\mathrm{G}\,\neg\textsf{access}(j,d)$.
For some requirements, this should be combined with bounds imposed on the
execution time Penczek and Polrola (2006); Knapik et al. (2019), mental
complexity Jamroga et al. (2019), and/or resources needed to accomplish the
tasks Alechina et al. (2010); Bulling and Farwer (2010); Alechina et al.
(2017). For example, the notification requirement 2.1.3.(2) can be refined as:
$\langle\!\langle a\rangle\!\rangle\mathrm{F}^{\,t\leq 10}\,\langle\!\langle i\rangle\!\rangle^{compl\leq 5}\mathrm{F}\,K_{i}(\mathrm{F}^{-1}\,\textsf{exposed}_{i}),$
based on the assumption that $a$ should notify $i$ in at most $10$ time units,
and $i$ has a strategy of complexity at most $5$ to infer the relevant
knowledge from the notification.
To reason explicitly about the outcome of anti-COVID-19 measures, we may need
model update operators $(supp\ a:\sigma)$ saying _“suppose that agent $a$
plays strategy $\sigma$”_ Bulling et al. (2008), and similar to strategy
binding in Strategy Logic Mogavero et al. (2010) and
$\mathbf{ATL}$ with Strategy Contexts Brihaye et al. (2009). Then,
the effectiveness requirement (2) of Section 2.1.2 for mitigation strategy
$\mathcal{S}$ can be written as
$\neg\psi\land(supp\ a:\mathcal{S})\,\psi,$
where $\psi$ can be any of the properties of epidemic response, formalized in
Section 3.1.
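To give a feel for what checking a strategic operator involves, the sketch below brute-forces all memoryless strategies of a single agent in a tiny deterministic game and tests whether one of them enforces reaching a goal state, i.e., a crude check of $\langle\!\langle a\rangle\!\rangle\mathrm{F}\,\textsf{goal}$. All names and transitions are invented for illustration; real tools use far more efficient symbolic fixpoint algorithms:

```python
# Sketch: brute-force check of a one-agent strategic property <<a>> F goal.
from itertools import product

def has_strategy(states, actions, trans, start, goal):
    """Does some memoryless strategy (one action per state) lead from
    start to goal? Transitions here are deterministic, so each strategy
    induces a single path; we follow it until a repeat or the goal."""
    for choice in product(actions, repeat=len(states)):
        strat = dict(zip(states, choice))
        seen, s = set(), start
        while s not in seen:
            if s == goal:
                return True
            seen.add(s)
            s = trans[(s, strat[s])]
    return False

states = ["s0", "s1", "s2"]
actions = ["wait", "act"]
trans = {
    ("s0", "wait"): "s0", ("s0", "act"): "s1",
    ("s1", "wait"): "s1", ("s1", "act"): "s2",
    ("s2", "wait"): "s2", ("s2", "act"): "s2",
}
print(has_strategy(states, actions, trans, "s0", "s2"))  # True: always 'act'
print(has_strategy(states, actions, trans, "s2", "s0"))  # False: s2 is absorbing
```

With an adversarial environment or nondeterministic transitions, each strategy would induce a set of paths, all of which must reach the goal; that is precisely what the fixpoint algorithms for $\mathbf{ATL^{*}}$ compute.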
### 3.3 Probabilistic Extensions
Many events have probabilistic execution, e.g., actions may fail with some
(typically small) probability. Scenarios with probabilistic events can be
modeled by variants of Markov decision processes, and their properties can be
specified by a probabilistic variant of $\mathbf{CTL^{\star}}$ Baier and
Kwiatkowska (1998) or $\mathbf{ATL^{*}}$ Chen et al. (2013). For
instance, formula
$\langle\!\langle a\rangle\!\rangle^{P\geq 0.99}\mathrm{F}^{\,t\leq 10}\,\langle\!\langle i\rangle\!\rangle^{compl\leq 5}\mathrm{F}\,K_{i}(\mathrm{F}^{-1}\,\textsf{exposed}_{i}),$
refines the previous specification by demanding that the authority can
successfully notify $i$ with probability at least $99\%$.
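For intuition, such a probabilistic operator bottoms out in a reachability computation over a Markov chain. The sketch below estimates, by value iteration, the probability of eventually reaching a notified state in a toy chain; the states and probabilities are invented to echo the bound above, not derived from any real model:

```python
# Sketch: probability of eventually reaching 'notified' in a toy Markov
# chain, computed by value iteration.

def reach_prob(states, P, target, iters=1000):
    """Iterate x[s] <- sum_t P[s][t] * x[t], with x[target] pinned to 1;
    starting from x = 0, this converges to the reachability probability."""
    x = {s: (1.0 if s == target else 0.0) for s in states}
    for _ in range(iters):
        for s in states:
            if s != target:
                x[s] = sum(p * x[t] for t, p in P[s].items())
    return x

states = ["try", "fail", "notified"]
P = {
    "try":      {"notified": 0.99, "fail": 0.01},  # delivery succeeds w.p. 0.99
    "fail":     {"fail": 1.0},                     # failure is absorbing
    "notified": {"notified": 1.0},
}
print(round(reach_prob(states, P, "notified")["try"], 4))  # 0.99
```

Probabilistic model checkers like PRISM solve the same equations, exactly, over models with millions of states.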
### 3.4 Meta-Properties
The requirements presented in Section 2, and formalized above, refer to the
“correctness” of a given mitigation strategy. Two meta-properties, well known
in computer science, can be also useful in case of the present scenario,
namely _diagnosability_ and _resilience_ Ezekiel and Lomuscio (2017). Given a
correctness requirement $\Phi$ and a responsible agent $a$, those can be
expressed by the following templates:
$\mathsf{A}\mathrm{G}\,(\neg\Phi\rightarrow\langle\!\langle a\rangle\!\rangle\mathrm{F}\,K_{a}\neg\Phi)$
(diagnosability)
$\mathsf{A}\mathrm{G}\,(\neg\Phi\rightarrow\langle\!\langle a\rangle\!\rangle\mathrm{F}\,\Phi)$
(resilience)
The templates can be used e.g. for monitoring-type requirements.
### 3.5 Towards Formal Analysis
Ideally, one would like to automatically evaluate COVID-19 strategies with
respect to the requirements, and choose the best one. In the future, we plan
to use model checking tools, such as MCMAS Lomuscio et al. (2017), Uppaal
Behrmann et al. (2004), or PRISM Kwiatkowska et al. (2002), to formally verify
our formulas over micro-level models created to simulate and predict the
progress of the pandemic Neil M. Ferguson, Daniel Laydon, Gemma Nedjati-Gilani
et al. (2020); Adamik et al. (2020); Bock et al. (2020); McCabe et al. (2020).
As we already pointed out, different requirements may be in partial conflict.
Thus, selecting an optimal mitigation strategy may require solving a
multi-criteria optimization problem Zionts (1981); Collette and Siarry (2004);
Radulescu et al. (2020), e.g., by identifying the Pareto frontier and choosing
a criterion to select a point on the frontier.
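The Pareto-based selection can be sketched as follows; the candidate strategies and their scores on the three criteria (all rescaled so that higher is better) are hypothetical illustration data:

```python
# Identify the Pareto frontier among candidate mitigation strategies
# scored on several criteria, all to be maximized. Strategy names and
# scores are hypothetical illustration data.

def dominates(x, y):
    """x dominates y: at least as good on every criterion, better on one."""
    return (all(a >= b for a, b in zip(x, y))
            and any(a > b for a, b in zip(x, y)))

def pareto_frontier(strategies):
    """Keep the strategies not dominated by any other strategy."""
    return {name: score for name, score in strategies.items()
            if not any(dominates(other, score)
                       for o, other in strategies.items() if o != name)}

# Scores: (health benefit, economic activity, privacy) -- higher is better.
strategies = {
    "full lockdown":   (0.9, 0.2, 0.9),
    "contact tracing": (0.7, 0.8, 0.4),
    "do nothing":      (0.1, 0.9, 1.0),
    "weak tracing":    (0.5, 0.7, 0.4),  # dominated by "contact tracing"
}

front = pareto_frontier(strategies)
print(sorted(front))  # ['contact tracing', 'do nothing', 'full lockdown']
```

Choosing a single point on the frontier then requires an explicit criterion, e.g. a weighted aggregate of the objectives or a lexicographic preference over them.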
## 4 Conclusions
In this paper, we make the first step towards a systematic analysis of
strategies for effective and trustworthy mitigation of the current pandemic.
The strategies may incorporate medical, social, economic, as well as
technological measures. Consequently, there is a large number of medical,
social, economic, and technological requirements that must be taken into
account when deciding which strategy to adopt. For computer scientists, the technological requirements are the most natural to focus on, and that is exactly the pitfall to avoid. The goals (and acceptability criteria) for a
mitigation strategy are much more diverse, and we must consciously choose a
solution that satisfies the multiple criteria to a reasonable degree. We
suggest that formal methods for MAS provide an excellent framework for that.
We also propose a methodology to collect preliminary requirements while
avoiding the usual bias of research papers.
Acknowledgments. The author acknowledges the support of the Luxembourg
National Research Fund (FNR) under the COVID-19 project SmartExit, and the
support of the National Centre for Research and Development Poland (NCBR) and
the Luxembourg National Research Fund (FNR), under the PolLux/CORE project STV
(POLLUX-VII/1/2019).
## References
* BBC [2020] Coronavirus privacy: Are South Korea’s alerts too revealing? _BBC News_ , 5 March 2020. URL https://www.bbc.com/news/amp/world-asia-51733145. Retrieved on 27.05.2020.
* Cyb [2020] Cybernetica proposes privacy-preserving decentralised architecture for COVID-19 mobile application for Estonia. _Cybernetica_ , 6 May 2020. URL https://cyber.ee/news/2020/05-06/. Retrieved on 15.05.2020.
* Eur [2020] Coronavirus: Member States agree on an interoperability solution for mobile tracing and warning apps. _European Commission - Press release_ , 16 June 2020. URL https://ec.europa.eu/digital-single-market/en/news/coronavirus-member-states-agree-interoperability-solution-mobile-tracing-and-warning-apps. Retrieved on 26.06.2020.
* Mat [2020] Legal advice on smartphone contact tracing published. _matrix chambers_ , 3 May 2020. URL https://www.matrixlaw.co.uk/news/legal-advice-on-smartphone-contact-tracing-published/. Retrieved on 26.05.2020.
* NCS [2020] NHS COVID-19: the new contact-tracing app from the nhs. _NCSC_ , 14 May 2020. URL https://www.ncsc.gov.uk/information/nhs-covid-19-app-explainer. Retrieved on 15.05.2020.
* POL [2020] Getting it right: States struggle with contact tracing push. _POLITICO_ , 17 May 2020. URL https://www.politico.com/news/2020/05/17/privacy-coronavirus-tracing-261369. Retrieved on 17.05.2020.
* RTL [2020] German coronavirus tracing app now available in Luxembourg. _RTL Today_ , 25 June 2020. URL https://today.rtl.lu/news/luxembourg/a/1539625.html. Retrieved on 25.06.2020.
* SDZ [2020] Corona-app soll open source werden. _Süddeutsche Zeitung_ , 6 May 2020. URL https://www.sueddeutsche.de/digital/corona-app-tracing-open-source-1.4899711. Retrieved on 26.05.2020.
* hel [2020] Together we can fight coronavirus – Smittestopp. _helsenorge_ , 28 April 2020. URL https://helsenorge.no/coronavirus/smittestopp?redirect=false. Retrieved on 28.05.2020.
* Adamik et al. [2020] Barbara Adamik, Marek Bawiec, Viktor Bezborodov, Przemyslaw Biecek, Wolfgang Bock, Marcin Bodych, Jan Pablo Burgard, Tyll Krueger, Agata Migalska, Tomasz Ożański, Barbara Pabjan, Magdalena Rosińska, Malgorzata Sadkowska-Todys, Piotr Sobczyk, and Ewa Szczurek. Estimation of the severeness rate, death rate, household attack rate and the total number of COVID-19 cases based on 16 115 Polish surveillance records. _Preprints with The Lancet_ , 2020. doi: http://dx.doi.org/10.2139/ssrn.3696786. URL https://ssrn.com/abstract=3696786.
* AFP [2020] AFP. Major finding: Lockdowns averted 3 million deaths in 11 European nations: study. _RTL Today_ , 9 June 2020. URL https://today.rtl.lu/news/science-and-environment/a/1530963.html. Retrieved on 25.06.2020.
* Alechina et al. [2010] N. Alechina, B. Logan, H.N. Nguyen, and A. Rakib. Resource-bounded alternating-time temporal logic. In _Proceedings of International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS)_ , pages 481–488, 2010.
* Alechina et al. [2017] N. Alechina, B. Logan, H.N. Nguyen, and F. Raimondi. Model-checking for resource-bounded ATL with production and consumption of resources. _Journal of Computer and System Sciences_ , 88:126–144, 2017. doi: 10.1016/j.jcss.2017.03.008.
* Alur et al. [2000] R. Alur, L. de Alfaro, T. A. Henzinger, S.C. Krishnan, F.Y.C. Mang, S. Qadeer, S.K. Rajamani, and S. Tasiran. MOCHA: Modularity in model checking. Technical report, University of Berkeley, 2000.
* Alur et al. [2002] R. Alur, T. A. Henzinger, and O. Kupferman. Alternating-time Temporal Logic. _Journal of the ACM_ , 49:672–713, 2002. doi: 10.1145/585265.585270.
* Baier and Kwiatkowska [1998] Christel Baier and Marta Z. Kwiatkowska. Model checking for a probabilistic branching time logic with fairness. _Distributed Comput._ , 11(3):125–155, 1998. doi: 10.1007/s004460050046.
* Baylis [2020] Françoise Baylis. Ten reasons why immunity passports are a bad idea. _Nature Comment_ , 21 May 2020. URL https://www.nature.com/articles/d41586-020-01451-0. Retrieved on 26.06.2020.
* Behrmann et al. [2004] G. Behrmann, A. David, and K.G. Larsen. A tutorial on uppaal. In _Formal Methods for the Design of Real-Time Systems: SFM-RT_ , number 3185 in LNCS, pages 200–236. Springer, 2004.
* Bezat [2020] Jean-Michel Bezat. L’application StopCovid, activée seulement par 2% de la population, connaît des débuts décevants. _Le Monde_ , 10 June 2020. URL https://www.lemonde.fr/pixels/article/2020/06/10/l-application-stopcovid-connait-des-debuts-decevants_6042404_4408996.html. Retrieved on 26.06.2020.
* Bicheno [2020] Scott Bicheno. Unlike France, Germany decides to do smartphone contact tracing the Apple/Google way. _telecoms.com_ , 27 April 2020. URL https://telecoms.com/503931/unlike-france-germany-decides-to-do-smartphone-contact-tracing-the-apple-google-way/. Retrieved on 24.06.2020.
* Bock et al. [2020] Wolfgang Bock, Barbara Adamik, Marek Bawiec, Viktor Bezborodov, Marcin Bodych, Jan Pablo Burgard, Thomas Goetz, Tyll Krueger, Agata Migalska, Barbara Pabjan, Tomasz Ozanski, Ewaryst Rafajlowicz, Wojciech Rafajlowicz, Ewa Skubalska-Rafajlowicz, Sara Ryfczynska, Ewa Szczurek, and Piotr Szymanski. Mitigation and herd immunity strategy for COVID-19 is likely to fail. _medRxiv_ , 2020. doi: 10.1101/2020.03.25.20043109. URL https://www.medrxiv.org/content/early/2020/05/05/2020.03.25.20043109.
* Bratman [1987] M.E. Bratman. _Intentions, Plans, and Practical Reason_. Harvard University Press, 1987.
* Brihaye et al. [2009] T. Brihaye, A. Da Costa Lopes, F. Laroussinie, and N. Markey. ATL with strategy contexts and bounded memory. In _Proceedings of LFCS_ , volume 5407 of _Lecture Notes in Computer Science_ , pages 92–106. Springer, 2009.
* Broersen et al. [2001] J. Broersen, M. Dastani, Z. Huang, and L. van der Torre. The BOID architecture: conflicts between beliefs, obligations, intentions and desires. In J.P. Müller, E. Andre, S. Sen, and C. Frasson, editors, _Proceedings of the Fifth International Conference on Autonomous Agents_ , pages 9–16. ACM Press, 2001.
* Bulling and Farwer [2010] N. Bulling and B. Farwer. Expressing properties of resource-bounded systems: The logics RTL* and RTL. In _Proceedings of Computational Logic in Multi-Agent Systems (CLIMA)_ , volume 6214 of _Lecture Notes in Computer Science_ , pages 22–45, 2010.
* Bulling et al. [2008] N. Bulling, W. Jamroga, and J. Dix. Reasoning about temporal properties of rational play. _Annals of Mathematics and Artificial Intelligence_ , 53(1-4):51–114, 2008.
* Bulling et al. [2015] N. Bulling, V. Goranko, and W. Jamroga. Logics for reasoning about strategic abilities in multi-player games. In J. van Benthem, S. Ghosh, and R. Verbrugge, editors, _Models of Strategic Reasoning. Logics, Games, and Communities_ , volume 8972 of _Lecture Notes in Computer Science_ , pages 93–136. Springer, 2015. doi: 10.1007/978-3-662-48540-8.
* Burgess [2020] Matt Burgess. Just how anonymous is the NHS Covid-19 contact tracing app? _Wired_ , 12 May 2020. URL https://www.wired.co.uk/article/nhs-covid-app-data-anonymous. Retrieved on 25.06.2020.
* Chen et al. [2013] T. Chen, V. Forejt, M. Kwiatkowska, D. Parker, and A. Simaitis. PRISM-games: A model checker for stochastic multi-player games. In _Proceedings of Tools and Algorithms for Construction and Analysis of Systems (TACAS)_ , volume 7795 of _Lecture Notes in Computer Science_ , pages 185–191. Springer, 2013.
* Clarance [2020] Andrew Clarance. Aarogya Setu: Why India’s Covid-19 contact tracing app is controversial. _BBC News_ , 15 May 2020. URL https://www.bbc.com/news/world-asia-india-52659520. Retrieved on 15.05.2020.
* Cohen and Levesque [1990] P.R. Cohen and H.J. Levesque. Intention is choice with commitment. _Artificial Intelligence_ , 42:213–261, 1990.
* Collette and Siarry [2004] Y. Collette and P. Siarry. _Multiobjective Optimization: Principles and Case Studies_. Springer, 2004.
* Dastani et al. [2010] M. Dastani, K. Hindriks, and J.-J. Meyer, editors. _Specification and Verification of Multi-Agent Systems_. Springer, 2010.
* Davies [2020] Jamie Davies. UK snubs Google and Apple privacy warning for contact tracing app. _telecoms.com_ , 28 April 2020. URL https://telecoms.com/503967/uk-snubs-google-and-apple-privacy-warning-for-contact-tracing-app/. Retrieved on 27.05.2020.
* Eisenberg [2020] Amanda Eisenberg. Privacy fears threaten New York City’s coronavirus tracing efforts. _Politico_ , 4 June 2020. URL https://www.politico.com/states/new-york/albany/story/2020/06/04/privacy-fears-threaten-new-york-citys-coronavirus-tracing-efforts-1290657. Retrieved on 25.06.2020.
* Emerson [1990] E.A. Emerson. Temporal and modal logic. In J. van Leeuwen, editor, _Handbook of Theoretical Computer Science_ , volume B, pages 995–1072. Elsevier, 1990.
* Ezekiel and Lomuscio [2017] J. Ezekiel and A. Lomuscio. Combining fault injection and model checking to verify fault tolerance, recoverability, and diagnosability in multi-agent systems. _Information and Computation_ , 254:167–194, 2017. doi: https://doi.org/10.1016/j.ic.2016.10.007.
* Fagin et al. [1995] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. _Reasoning about Knowledge_. MIT Press, 1995.
* Frasier [2020] Sarah Lewin Frasier. Coronavirus antibody tests have a mathematical pitfall. _Scientific American_ , 1 July 2020. URL https://www.scientificamerican.com/article/coronavirus-antibody-tests-have-a-mathematical-pitfall/. Retrieved on 30.10.2020.
* Garthwaite and Anderson [2020] Rosie Garthwaite and Ian Anderson. Coronavirus: Alarm over ’invasive’ Kuwait and Bahrain contact-tracing apps. _BBC News_ , 16 June 2020. URL https://www.bbc.com/news/world-middle-east-53052395. Retrieved on 30.10.2020.
* Guelev et al. [2011] D.P. Guelev, C. Dima, and C. Enea. An alternating-time temporal logic with knowledge, perfect recall and past: axiomatisation and model-checking. _Journal of Applied Non-Classical Logics_ , 21(1):93–131, 2011.
* Hern [2020] Alex Hern. UK abandons contact-tracing app for Apple and Google model. _The Guardian_ , 18 June 2020. URL https://www.theguardian.com/world/2020/jun/18/uk-poised-to-abandon-coronavirus-app-in-favour-of-apple-and-google-models. Retrieved on 26.06.2020.
* Hinch et al. [2020] Robert Hinch, Will Probert, Anel Nurtay, Michelle Kendall, Chris Wymant, Matthew Hall, Katrina Lythgoe, Ana Bulas Cruz, Lele Zhao, Andrea Stewart, Luca Ferretti, Michael Parker, Ares Meroueh, Bryn Mathias, Scott Stevenson, Daniel Montero, James Warren, Nicole K. Mather, Anthony Finkelstein, Lucie Abeler-Dörner, David Bonsall, and Christophe Fraser. Effective configurations of a digital contact tracing app: A report to NHSX. Technical report, Oxford University, 2020. URL https://github.com/BDI-pathogens/covid-19_instant_tracing/blob/master/Report-EffectiveConfigurationsofaDigitalContactTracingApp.pdf.
* Ilves [2020] Ieva Ilves. Why are Google and Apple dictating how european democracies fight coronavirus? _The Guardian_ , 16 June 2020. URL https://www.theguardian.com/commentisfree/2020/jun/16/google-apple-dictating-european-democracies-coronavirus. Retrieved on 24.06.2020.
* Jakubowska [2020] Ella Jakubowska. COVID-Tech: the sinister consequences of immunity passports. _EDRi_ , 10 June 2020. URL https://edri.org/covid-tech-the-sinister-consequences-of-immunity-passports/. Retrieved on 26.06.2020.
* Jamroga [2015] W. Jamroga. _Logical Methods for Specification and Verification of Multi-Agent Systems_. ICS PAS Publishing House, 2015. ISBN 978-83-63159-25-2.
* Jamroga et al. [2019] Wojciech Jamroga, Vadim Malvone, and Aniello Murano. Natural strategic ability. _Artificial Intelligence_ , 277, 2019. doi: 10.1016/j.artint.2019.103170.
* Jamroga et al. [2020] Wojciech Jamroga, David Mestel, Peter B. Rønne, Peter Y. A. Ryan, and Marjan Skrobot. A survey of requirements for COVID-19 mitigation strategies. part I: newspaper clips. _CoRR_ , abs/2011.07887, 2020. URL https://arxiv.org/abs/2011.07887.
* Kant et al. [2015] G. Kant, A. Laarman, J. Meijer, J. van de Pol, S. Blom, and T. van Dijk. LTSmin: High-performance language-independent model checking. In _Tools and Algorithms for the Construction and Analysis of Systems. Proceedings of TACAS_ , volume 9035 of _Lecture Notes in Computer Science_ , pages 692–707. Springer, 2015. doi: 10.1007/978-3-662-46681-0\\_61.
* Knapik et al. [2019] Michal Knapik, Étienne André, Laure Petrucci, Wojciech Jamroga, and Wojciech Penczek. Timed ATL: forget memory, just count. _Journal of Artificial Intelligence Research_ , 66:197–223, 2019. doi: 10.1613/jair.1.11612.
* Kunat [2020] Karol Kunat. Rzadowa aplikacja ma blad pozwalajacy na sprawdzenie, czy nasi znajomi podlegaja kwarantannie. _Tabletowo_ , 29 April 2020. URL https://www.tabletowo.pl/blad-aplikacji-karantanna-domowa-ujawnia-dane-z-bazy-rzadowej/. Retrieved on 26.05.2020.
* Kurpiewski et al. [2019] Damian Kurpiewski, Wojciech Jamroga, and Michał Knapik. STV: Model checking for strategies under imperfect information. In _Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems AAMAS 2019_ , pages 2372–2374. IFAAMAS, 2019.
* Kwiatkowska et al. [2002] M. Kwiatkowska, G. Norman, and D. Parker. PRISM: probabilistic symbolic model checker. In _Proceedings of TOOLS_ , volume 2324 of _Lecture Notes in Computer Science_ , pages 200–204. Springer, 2002. doi: 10.1007/3-540-46029-2_13.
* Laroussinie and Schnoebelen [1995] F. Laroussinie and Ph. Schnoebelen. A hierarchy of temporal logics with past. _Theoretical Computer Science_ , 148(2):303–324, 1995.
* Lomuscio et al. [2017] A. Lomuscio, H. Qu, and F. Raimondi. MCMAS: An open-source model checker for the verification of multi-agent systems. _International Journal on Software Tools for Technology Transfer_ , 19(1):9–30, 2017. doi: 10.1007/s10009-015-0378-x.
* McCabe et al. [2020] Ruth McCabe, Mara D. Kont, Nora Schmit, Charles Whittaker, Alessandra Loechen, Marc Baguelin, Edward Knock, Lilith Whittles, John Lees, Patrick G.T. Walker, Azra C. Ghani, Neil M. Ferguson, Peter J. White, Christl A. Donnelly, Katharina Hauck, and Oliver Watson. Modelling ICU capacity under different epidemiological scenarios of the COVID-19 pandemic in three western European countries. Technical Report 36 (16-11-2020), Imperial College London, 2020.
* McCarthy [2020] Kieren McCarthy. UK finds itself almost alone with centralized virus contact-tracing app that probably won’t work well, asks for your location, may be illegal. _The Register_ , 5 May 2020. URL https://www.theregister.co.uk/2020/05/05/uk_coronavirus_app/. Retrieved on 26.05.2020.
* Mogavero et al. [2010] F. Mogavero, A. Murano, and M.Y. Vardi. Reasoning about strategies. In _Proceedings of FSTTCS_ , pages 133–144, 2010.
* Morley et al. [2020] Jessica Morley, Josh Cowls, Mariarosaria Taddeo, and Luciano Floridi. Ethical guidelines for COVID-19 tracing apps. _Nature Comment_ , pages 29–31, 4 June 2020. URL https://www.nature.com/articles/d41586-020-01578-0. Retrieved on 9.06.2020.
* Neil M. Ferguson, Daniel Laydon, Gemma Nedjati-Gilani et al. [2020] Neil M. Ferguson, Daniel Laydon, Gemma Nedjati-Gilani et al. Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand. Technical Report 9 (16-03-2020), Imperial College London, 2020.
* O’Neill [2020] Patrick Howell O’Neill. No, coronavirus apps don’t need 60% adoption to be effective. _MIT Technology Review_ , 5 June 2020. URL https://www.technologyreview.com/2020/06/05/1002775/covid-apps-effective-at-less-than-60-percent-download/. Retrieved on 25.06.2020.
* O’Neill et al. [2020] Patrick Howell O’Neill, Tate Ryan-Mosley, and Bobbie Johnson. A flood of coronavirus apps are tracking us. now it’s time to keep track of them. _MIT Technology Review_ , 7 May 2020. URL https://www.technologyreview.com/2020/05/07/1000961/launching-mittr-covid-tracing-tracker/. Retrieved on 25.06.2020.
* Oslo [2020] AFP Oslo. Norway suspends virus-tracing app due to privacy concerns. _The Guardian_ , 15 June 2020. URL https://www.theguardian.com/world/2020/jun/15/norway-suspends-virus-tracing-app-due-to-privacy-concerns. Retrieved on 26.06.2020.
* Parker and Jones [2020] Imogen Parker and Elliot Jones. Something to declare? surfacing issues with immunity certificates. _Ada Lovelace Institute_ , 02 June 2020. URL https://www.adalovelaceinstitute.org/something-to-declare-surfacing-issues-with-immunity-certificates/. Retrieved on 26.06.2020.
* Penczek and Polrola [2006] W. Penczek and A. Polrola. _Advances in Verification of Time Petri Nets and Timed Automata: A Temporal Logic Approach_ , volume 20 of _Studies in Computational Intelligence_. Springer, 2006.
* Radulescu et al. [2020] Roxana Radulescu, Patrick Mannion, Diederik M. Roijers, and Ann Nowé. Multi-objective multi-agent decision making: a utility-based analysis and survey. _Autonomous Agents and Multi Agent Systems_ , 34(1):10, 2020. doi: 10.1007/s10458-019-09433-x.
* Rao and Georgeff [1991] A.S. Rao and M.P. Georgeff. Modeling rational agents within a BDI-architecture. In _Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning_ , pages 473–484, 1991.
* Scott and Wanat [2020] Mark Scott and Zosia Wanat. Poland’s coronavirus app offers playbook for other governments. _POLITICO_ , 2 April 2020. URL https://www.politico.eu/article/poland-coronavirus-app-offers-playbook-for-other-governments/. Retrieved on 27.05.2020.
* Shoham and Leyton-Brown [2009] Y. Shoham and K. Leyton-Brown. _Multiagent Systems - Algorithmic, Game-Theoretic, and Logical Foundations_. Cambridge University Press, 2009. ISBN 978-0-521-89943-7.
* Soltani et al. [2020] Ashkan Soltani, Ryan Calo, and Carl Bergstrom. Contact-tracing apps are not a solution to the COVID-19 crisis. _Brookings Tech Stream_ , 27 April 2020. URL https://www.brookings.edu/techstream/inaccurate-and-insecure-why-contact-tracing-apps-could-be-a-disaster/. Retrieved on 23.06.2020.
* Stollmeyer et al. [2020] Alice Stollmeyer, Marietje Schaake, and Frank Dignum. The dutch tracing app ’soap opera’ - lessons for europe. _euobserver_ , 7 May 2020. URL https://euobserver.com/opinion/148265. Retrieved on 15.05.2020.
* Szymielewicz et al. [2020] Katarzyna Szymielewicz, Anna Obem, and Tomasz Zieliński. Jak polska walczy z koronawirusem i dlaczego aplikacja nas przed nim nie ochroni? _Panoptykon_ , 5 May 2020. URL https://panoptykon.org/protego-safe-ryzyka. Retrieved on.
* Tahir and Lima [2020] Darius Tahir and Cristiano Lima. Google and Apple’s rules for virus tracking apps sow division among states. _Politico_ , 10 June 2020. URL https://www.politico.com/news/2020/06/10/google-and-apples-rules-for-virus-tracking-apps-sow-division-among-states-312199. Retrieved on 26.06.2020.
* Taylor [2020] Josh Taylor. How did the Covidsafe app go from being vital to almost irrelevant? _The Guardian_ , 23 May 2020. URL https://www.theguardian.com/world/2020/may/24/how-did-the-covidsafe-app-go-from-being-vital-to-almost-irrelevant. Retrieved on 26.05.2020.
* Timberg [2020] Craig Timberg. Most Americans are not willing or able to use an app tracking coronavirus infections. that’s a problem for Big Tech’s plan to slow the pandemic. _Washington Post_ , 29 April 2020. URL https://www.washingtonpost.com/technology/2020/04/29/most-americans-are-not-willing-or-able-use-an-app-tracking-coronavirus-infections-thats-problem-big-techs-plan-slow-pandemic/. Retrieved on 17.05.2020.
* Weiss [1999] G. Weiss, editor. _Multiagent Systems. A Modern Approach to Distributed Artificial Intelligence_. MIT Press: Cambridge, Mass, 1999.
* Wodinsky [2020] Shoshana Wodinsky. The UK’s contact-tracing app breaks the UK’s own privacy laws (and is just plain broken). _Gizmodo_ , 13 May 2020. URL https://gizmodo.com/the-uk-s-contact-tracing-app-breaks-the-uk-s-own-privac-1843439962. Retrieved on 25.06.2020.
* Woodhams [2020] Samuel Woodhams. COVID-19 digital rights tracker. _Top10VPN_ , 10 June 2020. URL https://www.top10vpn.com/research/investigations/covid-19-digital-rights-tracker/. Retrieved on 26.06.2020.
* Wooldridge [2000] M. Wooldridge. _Reasoning about Rational Agents_. MIT Press : Cambridge, Mass, 2000.
* Wooldridge [2002] M. Wooldridge. _An Introduction to Multi Agent Systems_. John Wiley & Sons, 2002.
* Zastrow [2020] Mark Zastrow. Coronavirus contact-tracing apps: can they slow the spread of COVID-19? _Nature (Technology Feature)_ , 19 May 2020. URL https://www.nature.com/articles/d41586-020-01514-2. Retrieved on 25.06.2020.
* Zionts [1981] Stanley Zionts. A multiple criteria method for choosing among discrete alternatives. _European Journal of Operational Research_ , 7(2):143–147, 1981. ISSN 0377-2217. doi: https://doi.org/10.1016/0377-2217(81)90275-7. Fourth EURO III Special Issue.
# Signatures of $f(Q)$-gravity in cosmology
Noemi Frusciante Instituto de Astrofisíca e Ciências do Espaço, Faculdade de
Ciências da Universidade de Lisboa, Edificio C8, Campo Grande, P-1749016,
Lisboa, Portugal
(August 28, 2024)
###### Abstract
We investigate the impact on cosmological observables of $f(Q)$-gravity, a
specific class of modified gravity models in which gravity is described by the
non-metricity scalar, $Q$. In particular we focus on a specific model which is
indistinguishable from the $\Lambda$-cold-dark-matter ($\Lambda$CDM) model at
the background level, while showing peculiar and measurable signatures at
linear perturbation level. These are attributed to a time-dependent Planck
mass and are regulated by a single dimensionless parameter, $\alpha$. In
comparison to the $\Lambda$CDM model, we find for positive values of $\alpha$
a suppressed matter power spectrum, a suppressed lensing effect on the Cosmic Microwave Background (CMB) angular power spectrum, and an enhanced integrated Sachs-Wolfe tail of CMB temperature anisotropies. The opposite behaviors are
present when the $\alpha$ parameter is negative. We also investigate the
modified propagation of Gravitational Waves (GWs) and show the prediction of the
GWs luminosity distance compared to the standard electromagnetic one. Finally,
we infer the accuracy on the free parameter of the model with standard sirens
at future GWs detectors.
## I Introduction
The late-time cosmic acceleration has been confirmed by different cosmological
observations Riess _et al._ (1998); Perlmutter _et al._ (1999); Betoule _et
al._ (2014); Spergel _et al._ (2003); Ade _et al._ (2016); Aghanim _et al._
(2016); Eisenstein _et al._ (2005); Beutler _et al._ (2011). Within General
Relativity (GR), it is the cosmological constant $\Lambda$ that gives rise to the observed acceleration of the Universe. In this picture the resulting standard
cosmological model ($\Lambda$CDM), besides providing an accurate description
of the Universe, comes along with some theoretical problems Joyce _et al._
(2015) and mild observational tensions, i.e. on the measurements of the value
of the Hubble constant $H_{0}$ Adam _et al._ (2016); Aghanim _et al._
(2020); Riess _et al._ (2011, 2016, 2019); Delubac _et al._ (2015) and the
amplitude of the matter power spectrum at present time $\sigma_{8}$
Hildebrandt _et al._ (2017); de Jong _et al._ (2015); Kuijken _et al._
(2015); Fenech Conti _et al._ (2017); Joudaki _et al._ (2020) from different
surveys. These might signal the necessity of looking for new physics beyond
the standard model.
Several modified gravity (MG) proposals have been considered which modify the
gravitational interaction on cosmological scales. Following the GR
construction, most of them have null non-metricity and torsion Nojiri and
Odintsov (2011); Lue, Scoccimarro, and Starkman (2004); Copeland, Sami, and
Tsujikawa (2006); Silvestri and Trodden (2009); Capozziello and De Laurentis
(2011); Clifton _et al._ (2012); Tsujikawa (2010); Joyce _et al._ (2015); de
Rham (2014); Heisenberg (2014); Koyama (2016); Nojiri, Odintsov, and Oikonomou
(2017); Ferreira (2019); Kobayashi (2019); Frusciante and Perenon (2020).
Alternatively one can construct theories of gravity built from the scalars
associated with torsion ($T$) and non-metricity ($Q$). While the actions $\int
d^{4}x\sqrt{-g}\,T$ and $\int d^{4}x\sqrt{-g}\,Q$ are equivalent to GR in flat
space Jiménez, Heisenberg, and Koivisto (2019), their generalizations with
$f(T)$ Li, Sotiriou, and Barrow (2011); Wu and Yu (2010a, b); Cai _et al._
(2016); Benetti, Capozziello, and Lambiase (2020) and $f(Q)$ Nester and Yo
(1999); Dialektopoulos, Koivisto, and Capozziello (2019); Beltrán Jiménez _et
al._ (2020); Lu, Zhao, and Chee (2019, 2019); Bajardi, Vernieri, and
Capozziello (2020) can be ascribed to the class of MG models. In the following we focus on $f(Q)$-gravity, which introduces at least two additional scalar modes; these disappear around maximally symmetric backgrounds, which can cause strong-coupling problems. However, while the $f(T)$-gravity models suffer from strong
coupling problems when considering perturbations around a Friedmann-Lemaître-
Robertson-Walker (FLRW) background Golovnev and Koivisto (2018), these are
absent in the case of $f(Q)$-gravity Beltrán Jiménez _et al._ (2020). In
$f(Q)$-gravity the main equations for the linear perturbations for scalar,
vector and tensor modes and the matter density perturbation have been derived
Beltrán Jiménez _et al._ (2020). From this study, modifications in the
evolution of the gravitational potentials and tensor propagation equations
emerge with respect to the $\Lambda$CDM model, thus making it worthwhile to investigate further the signatures these modifications leave on cosmological observables. Furthermore, constraints on the deviations of the
$f(Q)$-gravity from the $\Lambda$CDM background have been performed using
different observational probes for several parameterizations of the $f(Q)$
function in terms of redshift, $z$, i.e. $f(z)$ Lazkoz _et al._ (2019) and at
linear perturbation level, the modification in the evolution of the matter
density perturbation has been tested against redshift space distortion (RSD)
data Barros _et al._ (2020).
In this work we focus on the $f(Q)$ model which shares the same background
evolution as in $\Lambda$CDM, while leaving precise and measurable effects on
cosmological observables. We perform a thorough analysis of its phenomenology
at linear order in perturbations, by comparing the predictions of the theory
to the $\Lambda$CDM model for the temperature-temperature (TT) power spectrum,
lensing potential auto-correlation power spectrum and matter power spectrum.
To this aim we implement the model in the public Einstein-Boltzmann code
MGCAMB Zhao _et al._ (2009); Hojjati, Pogosian, and Zhao (2011); Zucca _et
al._ (2019). Finally we investigate the modification in the Gravitational
waves (GWs) sector by presenting the GWs luminosity distance and we infer the
accuracy on the free parameter of the model using previous forecasts from
standard sirens Belgacem _et al._ (2018a, 2019) at the Einstein Telescope
(ET) Sathyaprakash _et al._ (2012) and the Laser Interferometer Space Antenna
(LISA) Amaro-Seoane _et al._ (2017).
## II $f(Q)$ model
The action of the $f(Q)$-gravity can be written as follows Beltrán Jiménez,
Heisenberg, and Koivisto (2018)
$S=\int d^{4}x\sqrt{-g}\left[-\frac{1}{2}f(Q)+L_{m}\right]\,,$ (1)
where $g$ is the determinant of the metric $g_{\mu\nu}$, $f(Q)$ is a general
function of the non-metricity scalar $Q=-Q_{\alpha\mu\nu}P^{\alpha\mu\nu}$,
with $Q_{\alpha\mu\nu}=\nabla_{\alpha}g_{\mu\nu}$ being the non-metricity
tensor and
$P^{\alpha}_{\phantom{\alpha}\mu\nu}=-L^{\alpha}_{\phantom{\alpha}\mu\nu}/2+\left(Q^{\alpha}-\tilde{Q}^{\alpha}\right)g_{\mu\nu}/4-\delta^{\alpha}_{(\mu}Q_{\nu)}/4$
where $Q_{\alpha}=g^{\mu\nu}Q_{\alpha\mu\nu}$,
$\tilde{Q}_{\alpha}=g^{\mu\nu}Q_{\mu\alpha\nu}$ and
$L^{\alpha}_{\phantom{\alpha}\mu\nu}=(Q^{\alpha}_{\phantom{\alpha}\mu\nu}-Q_{(\mu\nu)}^{\phantom{(\mu\nu)}\alpha})/2$.
Finally, $L_{m}$ is the matter Lagrangian of standard matter fields. The
choice $f=Q/8\pi G_{N}$, where $G_{N}$ is the Newtonian constant, reproduces
the dynamics of GR. A particular class of $f(Q)$-theory that gives an
expansion history on a FLRW background identical to that of $\Lambda$CDM is
Beltrán Jiménez _et al._ (2020):
$f=\frac{1}{8\pi G_{N}}\left(Q+M\sqrt{Q}\right)\,,$ (2)
where $M$ is a constant. In the following we use the dimensionless parameter
$\alpha\equiv M/H_{0}$, where $H_{0}$ is the present time value of the Hubble
parameter $H(t)\equiv\frac{1}{a}\frac{da}{dt}$ and $a(t)$ is the scale factor.
Since the model is indistinguishable from $\Lambda$CDM at the background level, different values of $\alpha$ can only be distinguished by analyzing the evolution of the linear perturbations. The model in Eq. (2) will be the
subject of the following analysis. Hereafter we will also make use of the
redefinition: $f\rightarrow f/8\pi G_{N}$.
Considering that the non-metricity scalar assumes the form $Q=6H^{2}$ on a
FLRW background, we show in Figs. 1 and 2 top panels the behaviors of the
$f(Q)$ function given by the model in Eq. (2) respectively for negative and
positive values of the $\alpha$ parameter. The values of $\alpha$ are chosen so as to visualize and quantify the modifications. We also include as
reference the $\alpha=0$ case corresponding to the GR limit. The values for
the cosmological parameters used in this paper are the Planck 2018 best fit
values Aghanim _et al._ (2020). Significant deviations from the GR behavior
appear for smaller redshift ($z<4$) and these differences start to vanish in
the distant past. We can also note that while for positive values of $\alpha$,
$f(Q)$ is enhanced with respect to the GR limit, for negative ones it is
suppressed. These different trends will impact the evolution of the linear
perturbations. The derivative of $f(Q)$ with respect to the non-metricity
scalar, $f_{Q}\equiv df/dQ$, indeed is identified to be the effective Planck
mass Beltrán Jiménez _et al._ (2020) and as such it is expected to impact on
the shape of large-scale observables. In the following we will investigate the physics of the Cosmic Microwave Background (CMB) scalar angular power spectra, the matter power spectrum, and the implications of a modified propagation of GWs.
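The behaviors of $f$ and $f_{Q}$ shown in Figs. 1 and 2 can be sketched numerically. The flat-$\Lambda$CDM form of $H(z)$ and the value $\Omega_{m}=0.315$ below are assumptions made for illustration (the latter close to the Planck 2018 best fit):

```python
import math

# f(Q) model of Eq. (2) after the redefinition f -> f/8*pi*G_N:
#   f(Q) = Q + alpha*H0*sqrt(Q),  with Q = 6 H^2 on a FLRW background.
# Assumption: flat LCDM background, H(z) = H0*sqrt(Om*(1+z)^3 + 1 - Om),
# with an illustrative Om = 0.315. All quantities are in units of H0.

OMEGA_M = 0.315

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for a flat LCDM background."""
    return math.sqrt(OMEGA_M * (1 + z) ** 3 + 1 - OMEGA_M)

def Q(z):
    """Non-metricity scalar in units of H0^2: Q = 6 H^2."""
    return 6.0 * E(z) ** 2

def f(z, alpha):
    """f(Q) in units of H0^2 for the model f = Q + alpha*H0*sqrt(Q)."""
    q = Q(z)
    return q + alpha * math.sqrt(q)

def f_Q(z, alpha):
    """Effective Planck mass f_Q = df/dQ = 1 + alpha*H0/(2*sqrt(Q))."""
    return 1.0 + alpha / (2.0 * math.sqrt(Q(z)))

# alpha > 0 enhances f and f_Q relative to GR (alpha = 0), alpha < 0
# suppresses them, and the deviation fades at high z where Q grows.
for z in (0.0, 1.0, 4.0):
    print(z, f(z, 1.0) / f(z, 0.0), f_Q(z, 1.0), f_Q(z, -1.0))
```

The printed ratios shrink toward unity with increasing redshift, matching the vanishing of the deviations in the distant past noted above.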
Figure 1: Evolution of $f$ and $f_{Q}$ as a function of the redshift for negative values of the $\alpha$ parameter.
Figure 2: Evolution of $f$ and $f_{Q}$ as a function of the redshift $z$ for positive values of the $\alpha$ parameter.
Figure 3: Evolution of the phenomenological function $\mu$ as a function of the redshift $z$ for positive (top panel) and negative (bottom panel) values of the $\alpha$ parameter.
Figure 4: Evolution of the linear growth factor $D$, normalized to unity today and divided by the scale factor $a$, as a function of the redshift for different values of the $\alpha$ parameter.
## III Effects on observations from the scalar sector
Let us consider the perturbations of the metric in Newtonian gauge,
$ds^{2}=-(1+2\Psi)dt^{2}+a^{2}(1-2\Phi)dx^{2}$, where $\Phi(t,x_{i})$ and
$\Psi(t,x_{i})$ are the gravitational potentials. In the standard cosmological
model, these two potentials are equal during the period of structure
formation. This is no longer true when modifications of the gravitational
interaction are considered. In Fourier space the function
$\eta(a,k)\equiv\Phi/\Psi$ defines the non-zero anisotropic stress, and the
modifications in the Poisson equation are enclosed in the $\mu(a,k)$ function
as:
$-k^{2}\Psi=4\pi G_{N}a^{2}\mu(a,k)\rho_{m}\delta_{m}\,,$ (3)
where $\delta_{m}=\delta\rho_{m}/\rho_{m}$ is the density contrast,
$\rho_{m}(t)$ is the background matter density and $\mu(a,k)$ defines the
effective gravitational coupling. Moreover, a light deflection parameter
$\Sigma(a,k)\equiv\mu(1+\eta)/2$ measures the deviation in the Weyl potential
$(\Phi+\Psi)$.
In the quasi-static approximation, for perturbations deep inside the Hubble
radius, $f(Q)$-gravity yields $\eta=1$ and $\mu=1/f_{Q}$ Beltrán Jiménez
_et al._ (2020). For the model under consideration, the latter turns out to
be a function of time only and reads:
$\mu(a)=\frac{12H}{12H+\sqrt{6}\alpha H_{0}}\,.$ (4)
In Figs. 1 and 2 (bottom panels) we show the evolution of $f_{Q}$ as a
function of the redshift. Being the effective Planck mass, $f_{Q}$ is always
positive definite, as expected. When $\alpha<0$ we have $f_{Q}<1$; thus,
according to the evolution of $\mu$, the gravitational interaction is stronger
than in GR (see bottom panel in Fig. 3). On the other hand, when $\alpha>0$
the opposite holds (upper panel in Fig. 3). This aspect is quite
interesting in light of RSD, galaxy clustering (GC) and weak lensing (WL) data
which independently detected a lower growth rate of matter density
perturbations than that predicted by the $\Lambda$CDM model Hildebrandt _et
al._ (2017); de Jong _et al._ (2015); Kuijken _et al._ (2015); Joudaki _et
al._ (2020); Abbott _et al._ (2018); Beutler _et al._ (2014); Samushia _et
al._ (2014); Macaulay, Wehus, and Eriksen (2013); Vikhlinin _et al._ (2009).
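The sign of the deviation of $\mu$ from unity can be checked directly from Eq. (4) evaluated on the $\Lambda$CDM background (the model leaves the expansion history unchanged). The Python sketch below is illustrative only, not the MGCAMB implementation; a Planck-like matter density is assumed, and $f_{Q}=1/\mu$ follows from the quasi-static result quoted above.

```python
import numpy as np

OMEGA_M = 0.315  # assumed Planck-like matter density; flat universe

def hubble_ratio(z):
    """H(z)/H0 for the LCDM background shared by all values of alpha."""
    return np.sqrt(OMEGA_M * (1 + z) ** 3 + (1.0 - OMEGA_M))

def mu(z, alpha):
    """Effective gravitational coupling of Eq. (4): 12H/(12H + sqrt(6)*alpha*H0)."""
    h = hubble_ratio(z)  # H/H0
    return 12.0 * h / (12.0 * h + np.sqrt(6.0) * alpha)

def f_Q(z, alpha):
    """Effective Planck mass, f_Q = 1/mu in the quasi-static approximation."""
    return 1.0 / mu(z, alpha)

# alpha < 0: f_Q < 1 and mu > 1 (gravity stronger than in GR);
# alpha > 0: the opposite. At high z both tend to their GR values,
# reproducing the trends of Figs. 1-3.
for alpha in (-1.0, 1.0):
    print(alpha, mu(0.0, alpha), f_Q(0.0, alpha), f_Q(100.0, alpha))
```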
In order to perform explorations of cosmological observables we have modified
the public Einstein-Boltzmann code MGCAMB 111 MGCAMB webpage:
https://github.com/sfu-cosmo/MGCAMB Zhao _et al._ (2009); Hojjati, Pogosian,
and Zhao (2011); Zucca _et al._ (2019), which evolves the linear cosmological
perturbation equations taking into account the MG effects given by Eq. (4).
The modification introduced by the model in Eq. (2) has three major effects on
observations:
1.
It changes the growth of matter perturbations. In modified gravity the matter
density perturbation $\delta_{m}$ obeys the linear equation:
$\ddot{\delta}_{m}+2H\dot{\delta}_{m}-4\pi G_{N}\mu\rho_{m}\delta_{m}=0\,,$
(5)
which can be solved by setting initial conditions in the matter dominated era.
We show in Fig. 4 the redshift evolution of the growth factor
$D(a)\equiv\delta_{m}(a)/\delta_{m}(a=1)$. As expected, the models with
positive values of $\alpha$ and $\mu<1$ have a larger growth factor than
$\Lambda$CDM; as such, we predict a lower value for the amplitude of the matter
power spectrum at present time, $\sigma_{8}$, compared to $\Lambda$CDM,
assuming they share the same initial amplitude of primordial perturbations,
$A_{s}$ Dolag _et al._ (2004); De Boni _et al._ (2011); Pace _et al._
(2014). Accordingly we observe for positive values of $\alpha$ a suppression
of the growth of structures in the total matter power spectrum for $k\geq
10^{-3}$ with respect to the $\Lambda$CDM one, as depicted in the lower panel
of Fig. 5. For the values considered the deviations are estimated to be in the
range 5%-10%. When $\alpha$ is chosen to be negative, the growth factor and the
matter power spectrum show the exact opposite behavior, and a deviation of
15% is found for $\alpha=-1$ in the matter power spectrum. RSD data have been
used to constrain the $\alpha$ and $\sigma_{8}$ parameters while keeping all
the others fixed to the Planck 2018 best fit values. It is found that
$\alpha=2.0331^{+3.8212}_{-1.9596}$ and
$\sigma_{8}=0.8326^{+0.1386}_{-0.0630}$ at 1$\sigma$ Barros _et al._ (2020).
From their central values we can infer a preliminary estimation of the
corresponding value for $A_{s}\simeq 2.7\times 10^{-9}$.
2.
It modifies the gravitational lensing. For $f(Q)$-gravity the modification
in the lensing gravitational potential $\phi_{\rm lens}=(\Psi+\Phi)/2$ is
associated with $\Sigma=\mu$. Thus, for the specific case that we explore, the
gravitational lensing is enhanced when $\mu>1$ and suppressed for $\mu<1$. Let
us consider now the lensing potential auto-correlation power spectrum which
using the line of sight integration method reads Lewis and Challinor (2006):
$\displaystyle C_{\ell}^{\phi\phi}=4\pi\int\frac{{\rm
d}k}{k}\mathcal{P}(k)\left[\int_{0}^{\chi_{\ast}}{\rm
d}\chi\,S_{\phi}(k;\tau)j_{\ell}(k\chi)\right]^{2},$ (6)
with $\mathcal{P}(k)=\Delta^{2}_{\mathcal{R}}(k)$ being the primordial power
spectrum of curvature perturbations, $j_{\ell}$ is the spherical Bessel
function and
$S_{\phi}(k;\tau)=2T_{\phi}(k;\tau_{0}-\chi)\left(\frac{\chi_{\ast}-\chi}{\chi_{\ast}\chi}\right)\,,$
(7)
where $T_{\phi}(k,\tau)=k\,\phi_{\rm lens}$ is the transfer function and
$\chi$ is the comoving distance ($\chi_{\ast}$ being that to the last
scattering surface), related to the conformal time $\tau$ by
$\chi=\tau_{0}-\tau$, with $\tau_{0}$ its value today. The modifications in the
lensing potential $\phi_{\rm lens}$ discussed before, when included in the
source term in Eq. (7), impact the lensing power spectrum as shown in the
central panel of Fig. 5. We note that for the negative values of $\alpha$
($\mu>1$) the lensing power spectrum is enhanced with respect to $\Lambda$CDM,
while positive values correspond to a suppression of the lensing power
($\mu<1$). The largest deviations occur for the highest values of $|\alpha|$:
they reach 50% for $\alpha=-1$ and 25% for $\alpha=1$.
3.
It impacts the late-time Integrated Sachs-Wolfe (ISW) effect. A modification
in the time variation of the lensing potential is expected due to the presence
of $\Sigma\neq 1$ at small $z$, which induces a late-time ISW effect. Let us
consider the TT angular spectrum Seljak and Zaldarriaga (1996)
$C_{\ell}^{\rm TT}=(4\pi)^{2}\int\frac{{\rm d}k}{k}\,\mathcal{P}(k)\Big{|}\Delta_{\ell}^{\rm T}(k)\Big{|}^{2}\,,$ (8)
with
$\Delta_{\ell}^{\rm T}(k)=\int_{0}^{\tau_{0}}{\rm
d}\tau\,e^{ik\tilde{\mu}(\tau-\tau_{0})}S_{\rm
T}(k,\tau)j_{\ell}[k(\tau_{0}-\tau)]\,,$ (9)
where $\tilde{\mu}$ is the angular separation, and $S_{\rm T}(k,\tau)$ is the
radiation transfer function. The ISW contribution to $S_{\rm T}(k,\tau)$ is
given by
$\displaystyle S_{\rm
T}(k,\tau)\sim\left(\frac{d\Psi}{d\tau}+\frac{d\Phi}{d\tau}\right)e^{-\kappa}\,,$
(10)
where $\kappa$ is the optical depth. The late time variation of $\phi_{\rm
lens}$ expected from $\mu$ thus induces the late-time ISW effect. We find that
a suppression in the lensing power spectrum corresponds to an enhancement of
the amplitude of the low-$\ell$ TT power spectrum relative to $\Lambda$CDM, as
shown in the top panel of Fig. 5. On the contrary an enhancement in the
lensing power spectrum results in a suppression of the large scale ISW tails.
As for the lensing case, the magnitude of the deviations from $\Lambda$CDM
depends on $|\alpha|$. We find that up to a 50% deviation is present for
$\alpha=1$ and 30% for $\alpha=-1$. The effect on the ISW tail should then be
tightly constrained by CMB data. We note that the realization of both a lower
ISW tail and an enhanced matter power spectrum is also present in
$f(R)$-gravity Song, Peiris, and Hu (2007) and in the Galileon model Peirone
_et al._ (2019).
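The scalar-sector effects listed above can be illustrated by integrating the growth equation (5) on the fixed $\Lambda$CDM background. The sketch below (illustrative only, with an assumed Planck-like $\Omega_{\rm m}$; not the MGCAMB implementation) rewrites Eq. (5) in e-folds $x=\ln a$ and confirms that $\mu<1$ ($\alpha>0$) suppresses the growth of $\delta_{m}$.

```python
import numpy as np
from scipy.integrate import solve_ivp

OMEGA_M = 0.315  # assumed Planck-like value; flat LCDM background

def E2(a):            # (H/H0)^2
    return OMEGA_M / a**3 + (1.0 - OMEGA_M)

def Om(a):            # matter density parameter at scale factor a
    return OMEGA_M / a**3 / E2(a)

def mu(a, alpha):     # effective coupling of Eq. (4), with H/H0 = sqrt(E2)
    h = np.sqrt(E2(a))
    return 12.0 * h / (12.0 * h + np.sqrt(6.0) * alpha)

def delta_today(alpha, a0=1e-2):
    """Integrate Eq. (5) rewritten in x = ln a:
    delta'' + (2 - 1.5*Om)*delta' - 1.5*mu*Om*delta = 0,
    with delta ~ a deep in matter domination."""
    def rhs(x, y):
        a = np.exp(x)
        d, dp = y
        return [dp, -(2.0 - 1.5 * Om(a)) * dp + 1.5 * mu(a, alpha) * Om(a) * d]
    sol = solve_ivp(rhs, [np.log(a0), 0.0], [a0, a0], rtol=1e-8)
    return sol.y[0, -1]

# mu < 1 (alpha > 0) weakens gravity and suppresses the growth of delta_m:
for alpha in (-1.0, 0.0, 1.0):
    print(alpha, delta_today(alpha))
```

The ordering of the three printed amplitudes reproduces the trend discussed for $\sigma_{8}$: a positive $\alpha$ yields less growth than $\Lambda$CDM, a negative one more.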
We showed that the $f(Q)$ model analyzed in this section leaves precise and
measurable signatures on the CMB temperature anisotropies as well as on the
matter power spectrum. In particular some of them can be relevant to
understand whether the model can alleviate some tensions. For example, the
model allows one to realize weaker gravity than $\Lambda$CDM, and this can be very
promising in light of the $\sigma_{8}$ tension. A preliminary result using
only RSD data showed that indeed this can be the case Barros _et al._ (2020).
However in the analysis only $\alpha$ and $\sigma_{8}$ are varied, while all
the other cosmological parameters are fixed. Thus a more general study is
required which should include the variation of all other parameters and the
use of several datasets as well. Moreover, an enhancement in the lensing might
accommodate the lensing excess in the CMB Planck temperature data Ade _et al._
(2014); Adam _et al._ (2016); Aghanim _et al._ (2020), while a suppressed ISW
tail might fit the CMB data better than the standard cosmological scenario, as
shown for the Galileon Ghost Condensate model Peirone _et al._ (2019). These
features cannot all be present at the same time; thus a
detailed Markov chain Monte Carlo (MCMC) analysis involving several current
observational data is needed.
Figure 5: Power spectra of different cosmological observables for different
values of the $\alpha$ parameter and for the $\Lambda$CDM model. Top panel:
CMB temperature-temperature power spectra at low-$\ell$. Central panel:
lensing potential auto-correlation power spectra. Bottom panel: matter power
spectra.
## IV Gravitational waves luminosity distance
Figure 6: Evolution of $d_{L}^{gw}/d_{L}^{em}$ as function of the redshift for
positive (top panel) and negative (bottom panel) values of the $\alpha$
parameter.
The propagation of GW modes is governed by the following second-order action
in Fourier space Beltrán Jiménez, Heisenberg, and Koivisto (2018):
$S=\frac{1}{2}\sum_{\lambda}\int
d^{3}kdt\,a^{3}\,f_{Q}\left[(\dot{h}_{(\lambda)})^{2}-\frac{k^{2}}{a^{2}}h_{(\lambda)}^{2}\right]\,,$
(11)
where $h_{(\lambda)}$ are the two helicity modes of the tensor perturbation of
the metric. According to this action, the corresponding equation of
propagation of GWs acquires a modified friction term, identified as
$\delta(z)=\frac{d\ln\sqrt{f_{Q}}}{d(1+z)}\,.$ (12)
For $f(Q)$-gravity we can connect the friction term to the running of the
effective Planck mass $f_{Q}$, via $\alpha_{M}=-2\delta$. A modified friction
term affects the amplitude of GWs such that
the GWs luminosity distance, $d_{L}^{gw}$, is no longer equal to the standard
electromagnetic luminosity distance, $d_{L}^{em}$ Lombriser and Taylor (2016);
Saltas _et al._ (2014); Nishizawa (2018); Belgacem _et al._ (2018b, a).
Indeed they are related by:
$d_{L}^{gw}(z)=d_{L}^{em}(z)\exp\left\\{-\int^{z}_{0}\frac{dz^{\prime}}{1+z^{\prime}}\delta(z^{\prime})\right\\}\,.$
(13)
In Fig. 6 we show the evolution of the ratio $d_{L}^{gw}/d_{L}^{em}$ as a
function of the redshift for some values of the parameter $\alpha$. Regardless
of the value of $\alpha$, $d_{L}^{gw}/d_{L}^{em}\rightarrow 1$ at small $z$,
while at higher $z$ it approaches a constant, larger than 1 for $\alpha>0$ and
smaller than 1 for $\alpha<0$. The fact that the ratio goes to $1$ at small
$z$ is expected, since there cannot be any effect from MG when the source is
at $z\simeq 0$. Moreover, when $\alpha<0$ the ratio $d_{L}^{gw}/d_{L}^{em}$ is
smaller than $1$; the source of GWs is thus magnified and can be seen out to
larger distances. The opposite holds for $\alpha>0$.
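Eq. (13) can be evaluated numerically to reproduce the limits just described. In the sketch below (illustrative only; a Planck-like $\Omega_{\rm m}$ is assumed) we take $f_{Q}=1/\mu$ from Eq. (4) and, following the standard convention, treat $\delta$ in Eq. (12) as the derivative of $\ln\sqrt{f_{Q}}$ with respect to $\ln(1+z)$, in which case the integral in Eq. (13) evaluates to $\sqrt{f_{Q0}/f_{Q}(z)}$.

```python
import numpy as np
from scipy.integrate import quad

OMEGA_M = 0.315  # assumed Planck-like value; flat universe

def hubble_ratio(z):
    return np.sqrt(OMEGA_M * (1 + z) ** 3 + (1.0 - OMEGA_M))

def f_Q(z, alpha):
    # f_Q(z) = 1 + alpha/(2*sqrt(6)) * H0/H(z), equivalent to 1/mu with Eq. (4)
    return 1.0 + alpha / (2.0 * np.sqrt(6.0)) / hubble_ratio(z)

def delta_friction(z, alpha, eps=1e-5):
    # delta(z) = d ln sqrt(f_Q) / d ln(1+z), by central finite differences
    lnp = np.log(1.0 + z)
    g = lambda l: 0.5 * np.log(f_Q(np.exp(l) - 1.0, alpha))
    return (g(lnp + eps) - g(lnp - eps)) / (2.0 * eps)

def dl_ratio(z, alpha):
    # Eq. (13): d_L^gw / d_L^em = exp(-int_0^z delta(z')/(1+z') dz')
    integral, _ = quad(lambda zp: delta_friction(zp, alpha) / (1.0 + zp), 0.0, z)
    return np.exp(-integral)

# ratio -> 1 at z ~ 0, and at high z it approaches sqrt(f_Q(0)/f_Q(z)):
# a constant > 1 for alpha > 0 and < 1 for alpha < 0, as in Fig. 6.
print(dl_ratio(0.0, 1.0))
print(dl_ratio(10.0, 1.0), np.sqrt(f_Q(0.0, 1.0) / f_Q(10.0, 1.0)))
print(dl_ratio(10.0, -1.0))
```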
In Ref. Belgacem _et al._ (2018a) the following phenomenological
parametrization has been proposed: $d_{L}^{gw}/d_{L}^{em}=\Xi(z)$, where
$\Xi=\Xi_{0}+\frac{1-\Xi_{0}}{(1+z)^{n}}\,,$ (14)
with $\\{\Xi_{0},n\\}$ being constant free parameters. This parametrization
smoothly interpolates the limits $\Xi(z\ll 1)=1$ and $\Xi(z\gg 1)=\Xi_{0}$.
The combination of LISA standard sirens with CMB, BAO and SNIa datasets, and
the joint analysis of ET standard sirens with the same cosmological dataset,
allowed forecast constraints on the $\Xi_{0}$ parameter at percent-level
accuracy: respectively $\Delta\Xi_{0}=0.044$ Belgacem _et al._
(2019) and $\Delta\Xi_{0}=0.008$ Belgacem _et al._ (2018a), while the
uncertainty on $n$ remains large.
We make a correspondence between the $f(Q)$-gravity and the phenomenological
parametrization in Eq. (14) by identifying:
$\Xi_{0}\simeq\frac{1}{2}(1+f_{Q0})\,,\qquad
n\simeq\left(\frac{f^{\prime}_{Q}}{f_{Q}-1}\right)_{0}\,,$ (15)
where $f_{Q0}\equiv f_{Q}(z=0)$ and prime is the derivative with respect to
$\ln a$. For the model in Eq. (2), we find
$\Xi_{0}\simeq 1+\frac{\alpha}{4\sqrt{6}}\,,\qquad
n\simeq\frac{1}{2}(3\Omega_{\rm m}^{0}+4\Omega_{\rm r}^{0})\,,$ (16)
where $\Omega_{\rm m}^{0}$ and $\Omega_{\rm r}^{0}$ are the present time
values of the density parameters of matter and radiation components
respectively. Considering the 1$\sigma$ forecasted error on $\Xi_{0}$ and its
$\Lambda$CDM fiducial ($\Xi_{0}=1$) we infer for the derived parameter
$\alpha$ ($\alpha_{\rm fiducial}=0$), the following forecasted errors:
$\Delta\alpha\simeq 0.078$ for ET and $\Delta\alpha\simeq 0.43$ for LISA
standard sirens.
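Since $\Xi_{0}$ depends linearly on $\alpha$ through Eq. (16), the quoted forecast errors on $\alpha$ follow from simple error propagation, $\Delta\alpha=4\sqrt{6}\,\Delta\Xi_{0}$. A quick check of the arithmetic:

```python
import math

# Eq. (16): Xi_0 = 1 + alpha/(4*sqrt(6)), with fiducial alpha = 0 (Xi_0 = 1).
# Propagating a forecast error Delta_Xi0 onto alpha gives
# Delta_alpha = 4*sqrt(6) * Delta_Xi0.
factor = 4.0 * math.sqrt(6.0)

delta_xi0 = {"ET": 0.008, "LISA": 0.044}  # forecast 1-sigma errors on Xi_0
for detector, err in delta_xi0.items():
    print(detector, factor * err)  # ET -> ~0.078, LISA -> ~0.43
```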
## V Conclusion
We have presented theoretical predictions on linear cosmological observables
from a modified gravity model based on the non-metricity scalar, $Q$. For this
class of models a general function of $Q$, $f(Q)$, is included in the action.
The first derivative of $f$ with respect to $Q$ is identified with a time
dependent Planck mass and constitutes the source of the modification with
respect to the $\Lambda$CDM behavior at large linear scales. The specific
model we analyzed in this work does not change the expansion history with
respect to $\Lambda$CDM, thus we focused on the scalar angular power spectra
and matter power spectrum as well as on the GWs propagation.
The $f(Q)$-gravity model studied in this paper is given in Eq. (2) and it has
one extra free parameter, $\alpha$. Depending on the sign of $\alpha$ we found
measurable and specific signatures. The anisotropic stress parameter is equal
to 1, and from this it follows that the effective gravitational coupling is
equal to the light deflection parameter. In detail, we found that values of
$\mu>1$ ($\alpha<0$) enhance both the matter power spectrum and the lensing
potential auto-correlation power spectrum in comparison to the $\Lambda$CDM
model. In turn, modifications in the lensing potential impact the
low-$\ell$ ISW tail of the CMB TT power spectrum due to a modified late-time
ISW effect, which for $\alpha<0$ is suppressed with respect to the
$\Lambda$CDM model. This aspect proved to be the key feature for a better
fit to data than the $\Lambda$CDM scenario in other MG models. The case
$0<\mu<1$ ($\alpha>0$) generates a suppressed lensing power spectrum and an
enhanced low-$\ell$ CMB TT power spectrum. Additionally, it allows one to realize
weaker gravity than $\Lambda$CDM corresponding to a suppressed matter power
spectrum. As discussed in Section III, these features need to be tested
against data, as some of them are very promising in light of the $\sigma_{8}$
tension arising from the mismatch at more than 4$\sigma$ in the measurements
by Planck and that obtained from WL observations, and in the interpretation of
the lensing excess in the CMB Planck temperature data. The model does not show
these features at the same time, as they are driven by different signs of the
$\alpha$ parameter; thus only a thorough MCMC analysis involving several
datasets can give us indications of which one is preferred by data.
Furthermore, the model under investigation includes a modified friction term
in the equation of propagation of GWs which introduces a modification of the
luminosity distance of standard sirens. We analyzed the predicted ratio of the
GW luminosity distance to the electromagnetic one as a function of redshift,
which was shown to follow the phenomenological parametrization
introduced in Belgacem _et al._ (2018a) in terms of two parameters
$\\{\Xi_{0},n\\}$. From the forecasts obtained from standard sirens at ET and
LISA using this parametrization we computed the relation between $\Xi_{0}$ and
the free parameter of our model and we deduced the accuracy on the $\alpha$
parameter. The next generation of GW detectors will strongly help in
constraining deviations due to a running Planck mass and, for the case under
analysis, will help constrain the $\alpha$ parameter with high accuracy.
In conclusion, the model shows very interesting signatures which deserve to be
tested extensively against data. Additionally, a model selection analysis
would indicate whether the model is statistically preferred by the data over
the $\Lambda$CDM scenario. We leave these investigations for future work.
###### Acknowledgements.
The author thanks B. J. Barros, F. Pace and D. Vernieri for useful discussions
and comments. This work is supported by Fundação para a Ciência e a Tecnologia
(FCT) through the research grants UID/FIS/04434/2019, UIDB/04434/2020 and
UIDP/04434/2020, by FCT project “DarkRipple – Spacetime ripples in the dark
gravitational Universe” with ref. number PTDC/FIS-OUT/29048/2017 and FCT
project “CosmoTests – Cosmological tests of gravity theories beyond General
Relativity” with ref. number CEECIND/00017/2018.
## References
* Riess _et al._ (1998) A. G. Riess _et al._ (Supernova Search Team), Astron. J. 116, 1009 (1998), arXiv:astro-ph/9805201 .
* Perlmutter _et al._ (1999) S. Perlmutter _et al._ (Supernova Cosmology Project), Astrophys. J. 517, 565 (1999), arXiv:astro-ph/9812133 .
* Betoule _et al._ (2014) M. Betoule _et al._ (SDSS), Astron. Astrophys. 568, A22 (2014), arXiv:1401.4064 [astro-ph.CO] .
* Spergel _et al._ (2003) D. Spergel _et al._ (WMAP), Astrophys. J. Suppl. 148, 175 (2003), arXiv:astro-ph/0302209 .
* Ade _et al._ (2016) P. Ade _et al._ (Planck), Astron. Astrophys. 594, A13 (2016), arXiv:1502.01589 [astro-ph.CO] .
* Aghanim _et al._ (2016) N. Aghanim _et al._ (Planck), Astron. Astrophys. 594, A11 (2016), arXiv:1507.02704 [astro-ph.CO] .
* Eisenstein _et al._ (2005) D. J. Eisenstein _et al._ (SDSS), Astrophys. J. 633, 560 (2005), arXiv:astro-ph/0501171 .
* Beutler _et al._ (2011) F. Beutler, C. Blake, M. Colless, D. Jones, L. Staveley-Smith, L. Campbell, Q. Parker, W. Saunders, and F. Watson, Mon. Not. Roy. Astron. Soc. 416, 3017 (2011), arXiv:1106.3366 [astro-ph.CO] .
* Joyce _et al._ (2015) A. Joyce, B. Jain, J. Khoury, and M. Trodden, Phys. Rept. 568, 1 (2015), arXiv:1407.0059 [astro-ph.CO] .
* Adam _et al._ (2016) R. Adam _et al._ (Planck), Astron. Astrophys. 594, A1 (2016), arXiv:1502.01582 [astro-ph.CO] .
* Aghanim _et al._ (2020) N. Aghanim _et al._ (Planck), Astron. Astrophys. 641, A6 (2020), arXiv:1807.06209 [astro-ph.CO] .
* Riess _et al._ (2011) A. G. Riess, L. Macri, S. Casertano, H. Lampeitl, H. C. Ferguson, A. V. Filippenko, S. W. Jha, W. Li, and R. Chornock, Astrophys. J. 730, 119 (2011), [Erratum: Astrophys.J. 732, 129 (2011)], arXiv:1103.2976 [astro-ph.CO] .
* Riess _et al._ (2016) A. G. Riess _et al._ , Astrophys. J. 826, 56 (2016), arXiv:1604.01424 [astro-ph.CO] .
* Riess _et al._ (2019) A. G. Riess, S. Casertano, W. Yuan, L. M. Macri, and D. Scolnic, Astrophys. J. 876, 85 (2019), arXiv:1903.07603 [astro-ph.CO] .
* Delubac _et al._ (2015) T. Delubac _et al._ (BOSS), Astron. Astrophys. 574, A59 (2015), arXiv:1404.1801 [astro-ph.CO] .
* Hildebrandt _et al._ (2017) H. Hildebrandt _et al._ , Mon. Not. Roy. Astron. Soc. 465, 1454 (2017), arXiv:1606.05338 [astro-ph.CO] .
* de Jong _et al._ (2015) J. T. A. de Jong _et al._ , Astron. Astrophys. 582, A62 (2015), arXiv:1507.00742 [astro-ph.CO] .
* Kuijken _et al._ (2015) K. Kuijken _et al._ , Mon. Not. Roy. Astron. Soc. 454, 3500 (2015), arXiv:1507.00738 [astro-ph.CO] .
* Fenech Conti _et al._ (2017) I. Fenech Conti, R. Herbonnet, H. Hoekstra, J. Merten, L. Miller, and M. Viola, Mon. Not. Roy. Astron. Soc. 467, 1627 (2017), arXiv:1606.05337 [astro-ph.CO] .
* Joudaki _et al._ (2020) S. Joudaki _et al._ , Astron. Astrophys. 638, L1 (2020), arXiv:1906.09262 [astro-ph.CO] .
* Nojiri and Odintsov (2011) S. Nojiri and S. D. Odintsov, Phys. Rept. 505, 59 (2011), arXiv:1011.0544 [gr-qc] .
* Lue, Scoccimarro, and Starkman (2004) A. Lue, R. Scoccimarro, and G. D. Starkman, Phys. Rev. D 69, 124015 (2004), arXiv:astro-ph/0401515 .
* Copeland, Sami, and Tsujikawa (2006) E. J. Copeland, M. Sami, and S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006), arXiv:hep-th/0603057 .
* Silvestri and Trodden (2009) A. Silvestri and M. Trodden, Rept. Prog. Phys. 72, 096901 (2009), arXiv:0904.0024 [astro-ph.CO] .
* Capozziello and De Laurentis (2011) S. Capozziello and M. De Laurentis, Phys. Rept. 509, 167 (2011), arXiv:1108.6266 [gr-qc] .
* Clifton _et al._ (2012) T. Clifton, P. G. Ferreira, A. Padilla, and C. Skordis, Phys. Rept. 513, 1 (2012), arXiv:1106.2476 [astro-ph.CO] .
* Tsujikawa (2010) S. Tsujikawa, Lect. Notes Phys. 800, 99 (2010), arXiv:1101.0191 [gr-qc] .
* de Rham (2014) C. de Rham, Living Rev. Rel. 17, 7 (2014), arXiv:1401.4173 [hep-th] .
* Heisenberg (2014) L. Heisenberg, JCAP 05, 015 (2014), arXiv:1402.7026 [hep-th] .
* Koyama (2016) K. Koyama, Rept. Prog. Phys. 79, 046902 (2016), arXiv:1504.04623 [astro-ph.CO] .
* Nojiri, Odintsov, and Oikonomou (2017) S. Nojiri, S. Odintsov, and V. Oikonomou, Phys. Rept. 692, 1 (2017), arXiv:1705.11098 [gr-qc] .
* Ferreira (2019) P. G. Ferreira, Ann. Rev. Astron. Astrophys. 57, 335 (2019), arXiv:1902.10503 [astro-ph.CO] .
* Kobayashi (2019) T. Kobayashi, Rept. Prog. Phys. 82, 086901 (2019), arXiv:1901.07183 [gr-qc] .
* Frusciante and Perenon (2020) N. Frusciante and L. Perenon, Phys. Rept. 857, 1 (2020), arXiv:1907.03150 [astro-ph.CO] .
* Jiménez, Heisenberg, and Koivisto (2019) J. B. Jiménez, L. Heisenberg, and T. S. Koivisto, Universe 5, 173 (2019), arXiv:1903.06830 [hep-th] .
* Li, Sotiriou, and Barrow (2011) B. Li, T. P. Sotiriou, and J. D. Barrow, Phys. Rev. D 83, 064035 (2011), arXiv:1010.1041 [gr-qc] .
* Wu and Yu (2010a) P. Wu and H. W. Yu, Phys. Lett. B 693, 415 (2010a), arXiv:1006.0674 [gr-qc] .
* Wu and Yu (2010b) P. Wu and H. W. Yu, Phys. Lett. B 692, 176 (2010b), arXiv:1007.2348 [astro-ph.CO] .
* Cai _et al._ (2016) Y.-F. Cai, S. Capozziello, M. De Laurentis, and E. N. Saridakis, Rept. Prog. Phys. 79, 106901 (2016), arXiv:1511.07586 [gr-qc] .
* Benetti, Capozziello, and Lambiase (2020) M. Benetti, S. Capozziello, and G. Lambiase, (2020), 10.1093/mnras/staa3368, arXiv:2006.15335 [astro-ph.CO] .
* Nester and Yo (1999) J. M. Nester and H.-J. Yo, Chin. J. Phys. 37, 113 (1999), arXiv:gr-qc/9809049 .
* Dialektopoulos, Koivisto, and Capozziello (2019) K. F. Dialektopoulos, T. S. Koivisto, and S. Capozziello, Eur. Phys. J. C 79, 606 (2019), arXiv:1905.09019 [gr-qc] .
* Beltrán Jiménez _et al._ (2020) J. Beltrán Jiménez, L. Heisenberg, T. S. Koivisto, and S. Pekar, Phys. Rev. D 101, 103507 (2020), arXiv:1906.10027 [gr-qc] .
* Lu, Zhao, and Chee (2019) J. Lu, X. Zhao, and G. Chee, Eur. Phys. J. C 79, 530 (2019), arXiv:1906.08920 [gr-qc] .
* Bajardi, Vernieri, and Capozziello (2020) F. Bajardi, D. Vernieri, and S. Capozziello, Eur. Phys. J. Plus 135, 912 (2020), arXiv:2011.01248 [gr-qc] .
* Golovnev and Koivisto (2018) A. Golovnev and T. Koivisto, JCAP 11, 012 (2018), arXiv:1808.05565 [gr-qc] .
* Lazkoz _et al._ (2019) R. Lazkoz, F. S. Lobo, M. Ortiz-Baños, and V. Salzano, Phys. Rev. D 100, 104027 (2019), arXiv:1907.13219 [gr-qc] .
* Barros _et al._ (2020) B. J. Barros, T. Barreiro, T. Koivisto, and N. J. Nunes, Phys. Dark Univ. 30, 100616 (2020), arXiv:2004.07867 [gr-qc] .
* Zhao _et al._ (2009) G.-B. Zhao, L. Pogosian, A. Silvestri, and J. Zylberberg, Phys. Rev. D 79, 083513 (2009), arXiv:0809.3791 [astro-ph] .
* Hojjati, Pogosian, and Zhao (2011) A. Hojjati, L. Pogosian, and G.-B. Zhao, JCAP 1108, 005 (2011), arXiv:1106.4543 [astro-ph.CO] .
* Zucca _et al._ (2019) A. Zucca, L. Pogosian, A. Silvestri, and G.-B. Zhao, JCAP 05, 001 (2019), arXiv:1901.05956 [astro-ph.CO] .
* Belgacem _et al._ (2018a) E. Belgacem, Y. Dirian, S. Foffa, and M. Maggiore, Phys. Rev. D 98, 023510 (2018a), arXiv:1805.08731 [gr-qc] .
* Belgacem _et al._ (2019) E. Belgacem _et al._ (LISA Cosmology Working Group), JCAP 07, 024 (2019), arXiv:1906.01593 [astro-ph.CO] .
* Sathyaprakash _et al._ (2012) B. Sathyaprakash _et al._ , Class. Quant. Grav. 29, 124013 (2012), [Erratum: Class.Quant.Grav. 30, 079501 (2013)], arXiv:1206.0331 [gr-qc] .
* Amaro-Seoane _et al._ (2017) P. Amaro-Seoane _et al._ (LISA), (2017), arXiv:1702.00786 [astro-ph.IM] .
* Beltrán Jiménez, Heisenberg, and Koivisto (2018) J. Beltrán Jiménez, L. Heisenberg, and T. Koivisto, Phys. Rev. D 98, 044048 (2018), arXiv:1710.03116 [gr-qc] .
* Abbott _et al._ (2018) T. Abbott _et al._ (DES), Phys. Rev. D 98, 043526 (2018), arXiv:1708.01530 [astro-ph.CO] .
* Beutler _et al._ (2014) F. Beutler _et al._ (BOSS), Mon. Not. Roy. Astron. Soc. 443, 1065 (2014), arXiv:1312.4611 [astro-ph.CO] .
* Samushia _et al._ (2014) L. Samushia _et al._ , Mon. Not. Roy. Astron. Soc. 439, 3504 (2014), arXiv:1312.4899 [astro-ph.CO] .
* Macaulay, Wehus, and Eriksen (2013) E. Macaulay, I. K. Wehus, and H. K. Eriksen, Phys. Rev. Lett. 111, 161301 (2013), arXiv:1303.6583 [astro-ph.CO] .
* Vikhlinin _et al._ (2009) A. Vikhlinin _et al._ , Astrophys. J. 692, 1060 (2009), arXiv:0812.2720 [astro-ph] .
* Dolag _et al._ (2004) K. Dolag, M. Bartelmann, F. Perrotta, C. Baccigalupi, L. Moscardini, M. Meneghetti, and G. Tormen, Astron. Astrophys. 416, 853 (2004), arXiv:astro-ph/0309771 .
* De Boni _et al._ (2011) C. De Boni, K. Dolag, S. Ettori, L. Moscardini, V. Pettorino, and C. Baccigalupi, Mon. Not. Roy. Astron. Soc. 415, 2758 (2011), arXiv:1008.5376 [astro-ph.CO] .
* Pace _et al._ (2014) F. Pace, L. Moscardini, R. Crittenden, M. Bartelmann, and V. Pettorino, Mon. Not. Roy. Astron. Soc. 437, 547 (2014), arXiv:1307.7026 [astro-ph.CO] .
* Lewis and Challinor (2006) A. Lewis and A. Challinor, Phys. Rept. 429, 1 (2006), arXiv:astro-ph/0601594 .
* Seljak and Zaldarriaga (1996) U. Seljak and M. Zaldarriaga, Astrophys. J. 469, 437 (1996), arXiv:astro-ph/9603033 .
* Song, Peiris, and Hu (2007) Y.-S. Song, H. Peiris, and W. Hu, Phys. Rev. D 76, 063517 (2007), arXiv:0706.2399 [astro-ph] .
* Peirone _et al._ (2019) S. Peirone, G. Benevento, N. Frusciante, and S. Tsujikawa, Phys. Rev. D 100, 063540 (2019), arXiv:1905.05166 [astro-ph.CO] .
* Ade _et al._ (2014) P. Ade _et al._ (Planck), Astron. Astrophys. 571, A16 (2014), arXiv:1303.5076 [astro-ph.CO] .
* Lombriser and Taylor (2016) L. Lombriser and A. Taylor, JCAP 03, 031 (2016), arXiv:1509.08458 [astro-ph.CO] .
* Saltas _et al._ (2014) I. D. Saltas, I. Sawicki, L. Amendola, and M. Kunz, Phys. Rev. Lett. 113, 191101 (2014), arXiv:1406.7139 [astro-ph.CO] .
* Nishizawa (2018) A. Nishizawa, Phys. Rev. D 97, 104037 (2018), arXiv:1710.04825 [gr-qc] .
* Belgacem _et al._ (2018b) E. Belgacem, Y. Dirian, S. Foffa, and M. Maggiore, Phys. Rev. D 97, 104066 (2018b), arXiv:1712.08108 [astro-ph.CO] .
# Extracting Lifestyle Factors for Alzheimer’s Disease from Clinical Notes
Using Deep Learning with Weak Supervision
Zitao Shen, Yoonkwon Yi, Anusha Bompelli, Fang Yu, Yanshan Wang, Rui Zhang
College of Science & Engineering, University of Minnesota, Minneapolis, USA; Department of Pharmaceutical Care & Health Systems, University of Minnesota, Minneapolis, USA; Edson College of Nursing and Health Innovation, Arizona State University, Phoenix, USA; Department of AI & Informatics, Mayo Clinic, Rochester, USA; Institute for Health Informatics, University of Minnesota, Minneapolis, USA
###### Abstract
Background Since no effective therapies exist for Alzheimer’s disease (AD),
prevention has become more critical through lifestyle factor changes and
interventions. Analyzing electronic health records (EHR) of patients with AD
can help us better understand lifestyle’s effect on AD. However, lifestyle
information is typically stored in clinical narratives. Thus, the objective of
the study was to demonstrate the feasibility of natural language processing
(NLP) models to classify lifestyle factors (e.g., physical activity and
excessive diet) from clinical texts.
Methods We automatically generated labels for the training data by using a
rule-based NLP algorithm. We conducted weak supervision for pre-trained
Bidirectional Encoder Representations from Transformers (BERT) models on the
weakly labeled training corpus. These models include the BERT base model,
PubMedBERT(abstracts + full text), PubMedBERT(only abstracts), Unified Medical
Language System (UMLS) BERT, Bio BERT, and Bio-clinical BERT. We performed two
case studies: physical activity and excessive diet, in order to validate the
effectiveness of BERT models in classifying lifestyle factors for AD. These
models were compared on the developed Gold Standard Corpus (GSC) on the two
case studies.
Results The PubMedBERT(only abstracts) model achieved the best performance for physical
activity, with its precision, recall, and F-1 scores of 0.96, 0.96, and 0.96,
respectively. Regarding classifying excessive diet, the Bio BERT model showed
the highest performance with perfect precision, recall, and F-1 scores.
Conclusion The proposed approach leveraging weak supervision could
significantly increase the sample size, which is required for training the
deep learning models. The study also demonstrates the effectiveness of BERT
models for extracting lifestyle factors for Alzheimer’s disease from clinical
notes.
###### keywords:
Natural language processing; Machine learning; Electronic health records; Deep learning; Alzheimer's disease; Clinical text classification

Equal contributor
## Background
Alzheimer’s disease (AD) is the most common cause of dementia, accounting for
60 to 80 percent of all dementia cases [1]. Around 5.8 million Americans were
living with AD in 2020, and this number is expected to increase to
approximately 14 million by 2050 [2]. Currently, no treatments can cure AD, but
several lifestyle factors have been associated with a substantially reduced
risk for AD, albeit with inconsistent findings. For example, high levels of physical
and cognitive activity showed the strongest associations with reduced AD risk
ranging from 11% to 44% [3]. Multiple lifestyle modifications, including
physical activity, no-smoking, light-to-moderate alcohol consumption,
cognitive activities, and high-quality diets were correlated with a 60%
decreased risk for AD [4]. Furthermore, the Finnish Geriatric Intervention
Study to Prevent Cognitive Impairment and Disability (FINGER) found that
people at high risk of developing AD showed improvements in their cognitive
abilities following two years of lifestyle changes [5]. These findings led to
the launch of multi-lifestyle intervention trials globally, such as the U.S.
Pointer[6]. Few Randomized Controlled Trials (RCTs) have the resources of the
U.S. Pointer to enroll a large sample. Hence, alternative, innovative,
scalable, and cost-effective approaches that use electronic health records
(EHRs) to establish causal effects are critically needed.
Since 2009 when the Health Information Technology for Economic and Clinical
Health Act (HITECH Act) was enacted[7], EHRs have been adopted exponentially.
Consequently, EHR studies have increased dramatically and have been
acknowledged as a way of enhancing patient care and promoting clinical
research [8, 9, 10, 11]. EHRs document information obtained during healthcare
delivery, including detailed explanations of the occurrence, treatment, and
progression of diseases. Secondary analysis of observational EHR data has been
widely used in multiple clinical domains [12].
Much lifestyle information is documented in EHRs in the unstructured format,
making it difficult to process and obtain desired information. To overcome
this difficulty, natural language processing (NLP) techniques have been
applied, with promising results, to extract pertinent information from
unstructured data for clinical research. For example, in our previous study,
we demonstrated that extracting lifestyle factors using standard terminologies
such as Unified Medical Language System (UMLS) and the existing NLP model
(MetaMap) was feasible and reliable [13]. These previous studies used
standardized rule-based NLP models without the aid of annotated data. In
addition, in our latest research, accepted at the HealthNLP 2020 workshop
[14], we demonstrated the feasibility of using NLP methods to automatically
extract lifestyle factors from EHRs. Previously, conventional machine learning
methods such as random forest (RF), support vector machine (SVM), conditional
random field, logistic regression, bagged decision trees, and K-nearest
neighbors have been used to extract lifestyle factors related to excessive
diet, physical activity, sleep deprivation, and substance abuse. However, a
common limitation was the small size of the annotated corpora, mainly due to
the labor-intensive process of developing them.
To reduce human efforts to generate annotations, weak supervision is one
approach that trains machine learning models using weak labels generated by
rule-based methods. Previously, Wang et al. [15] have demonstrated the
feasibility of using weak supervision and deep representation using word
embeddings for clinical text classification tasks. Recently, more advanced
neural network-based representations, such as Bidirectional Encoder
Representation from Transformers (BERT)[16], have further improved NLP
performance in multiple NLP downstream tasks, such as question answering. BERT
models have been further applied in the biomedical domain, and their variants
have been pre-trained on various biomedical corpora, such as biomedical
literature and clinical records (e.g., MIMIC [17]), to gain a deep
representation of biomedicine. In general, these domain-specific BERT models
have shown promising performance on clinical applications [18, 19, 20]. For
instance, Lee et al. [21] showed that Bio BERT, pre-trained on large-scale
biomedical corpora, significantly outperforms the standard BERT model on some
popular biomedical NLP tasks, i.e., named entity recognition, relation
extraction, and question answering. Michalopoulos et al. [22] also showed
similar results by comparing the general BERT model with their proposed
domain-specific model, UMLS BERT. However, to the best of our knowledge, no
clinical applications utilize BERT models with weak supervision so far. Hence,
the objective of this study was to demonstrate the feasibility of using
various pre-trained biomedical BERT models to classify lifestyle factors in
clinical notes. Our contributions include: 1) evaluating state-of-the-art
clinical BERT models on the classification of lifestyle factors in clinical
notes of patients with AD, and 2) using weak supervision to overcome the
burdensome task of creating a hand-labeled dataset.
## Methodology
We conducted our experiments on two types of lifestyle factors: physical
activity and excess diet. For each case study, we followed the same steps: 1)
collecting clinical notes from patients with AD; 2) applying the rule-based
NLP classifier to assign weak labels to the lifestyle-based sentences; 3)
fine-tuning BERT models on the data with the weak label for each case study
described below; 4) manually annotating a small portion of the selected
sentences to develop the Gold Standard Corpus (GSC); 5) evaluating the
performances of various BERT models in the GSC. Fig.1 demonstrates the whole
workflow for this study.
Figure 1: Overview of the study design
### Data Source
The clinical notes were sourced from the Clinical Data Repository (CDR) of the
University of Minnesota (UMN). More than 180 million clinical notes are
currently held by the aforementioned CDR, containing more than 2.9 million
patients from 8 hospitals and more than 40 local clinics. Approval was
obtained to access the EHRs for patients with AD from the Institutional Review
Board (IRB).
To extract sentences and generate weak labels for them, we utilized NLP-PIER,
an information extraction (IE) platform that allows direct access to patient
data stored in the free text of clinical notes in the CDR [23]. NLP-PIER is
built on Elasticsearch technology and features an open-source NLP system,
BioMedICUS (BioMedical Information Collection and Understanding System) [24].
For the rules, we focused on two lifestyle factors, physical inactivity and
excessive diet, that had been found to be related to the development of AD.
Then, using the online UMLS Metathesaurus browser and building on our previous
work, we manually collected all concept unique identifiers (CUIs) associated
with physical activity and excess diet [14].
These CUIs were used in the NLP-PIER to identify sentences with corresponding
labels.
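The CUI-based weak-labeling step can be approximated with simple term matching. The sketch below is illustrative only: the actual pipeline matched UMLS CUIs via NLP-PIER, and the term lists and function name here are hypothetical stand-ins.

```python
# Illustrative weak labeler for the excess diet case study: assign a class to a
# sentence by matching surface terms associated with each lifestyle concept.
# (The study matched UMLS CUIs via NLP-PIER; these term lists are placeholders.)
DIET_TERMS = {
    "high calorie diet": ["high calorie", "high-calorie", "calorie-dense"],
    "high salt diet": ["high salt", "high-salt", "too much salt"],
    "high fat diet": ["high fat", "high-fat", "fatty food"],
}

def weak_label_diet(sentence: str) -> str:
    """Return a weak excess-diet label for a sentence, or 'normal/none'."""
    text = sentence.lower()
    for label, terms in DIET_TERMS.items():
        if any(term in text for term in terms):
            return label
    return "normal/none"  # no diet mention matched
```

For example, `weak_label_diet("Pt is having fatty food daily")` returns `"high fat diet"`, while a sentence with no diet mention falls through to `"normal/none"`.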
To generate the GSC, three annotators independently annotated 50 sentences for
each case study using INCEpTION [25], a semantic annotation platform. Based on
the presence of lifestyle factor entities of interest in the GSC, the labels
were assigned in the gold standard sentence level for this task. For physical
activity, sentences were labeled ”yes” or ”no” (”no” indicates physical
inactivity or no physical activity mention). For excess diet, sentences
were assigned ”high calorie diet,” ”high salt diet,” ”high fat diet,” or
”normal/none” (indicating a normal diet or no diet mention). The Cohen’s
Kappa scores reached 1.0 between any two annotators by the end. The entity
names and example sentences can be found in Tab 1. Note that the GSC and the
weakly labeled training data were kept mutually exclusive by ensuring that
their note IDs did not overlap.
Table 1: Example sentences with weak label for excessive diet and physical
activity
Category | Class | Sentence Example
---|---|---
Excess Diet | High Fat Diet | Pt is having fatty food …
Excess Diet | High Calorie Diet | He had took high calorie diet for two weeks…
Excess Diet | High Salt Diet | His current diet contains too much food with high salt..
Excess Diet | Normal/None | She backs to normal diet…
Physical Activity | Physical Activity | Pt has increase regular physical activity…
Physical Activity | Physical Inactivity | He didn’t maintain daily exercise…
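The inter-annotator agreement reported above can be computed with Cohen's kappa; a minimal stdlib sketch (illustrative, not the study's code):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators' label sequences."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items the two annotators label identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement, from each annotator's label frequencies.
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)
```

A kappa of 1.0, as reached here, means the annotators agreed on every sentence; a kappa of 0.0 means agreement is no better than chance.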
### Model
Transfer learning is a process in which a model is first pre-trained on a
large domain-related dataset and its pre-trained weights are then fine-tuned
on a small corpus for a specific task. A key benefit of transfer learning is
that relatively high performance can be achieved with less task-specific
data. In this study, we evaluated six BERT models, with the BERT base model
serving as the baseline. The other five BERT models, pre-trained on
domain-specific corpora, were: PubMedBERT (pre-trained on the abstracts and
full text of biomedical literature) [20], PubMedBERT (pre-trained on abstracts
only) [20], Bio BERT [21], Unified Medical Language System (UMLS) BERT [22],
and Bio-clinical BERT [19]. Besides the BERT base model, the five variants can
be classified into two groups based on their pre-training corpora. One group
was pre-trained from scratch using the
text from biomedical literature in PubMed. More precisely, PubMedBERT(Abs+Ft)
used the abstract and full text, which included approximately 16.8 billion
words (107 GB). The PubMedBERT(Abs) model was developed using the abstracts
alone, which included 3.2 billion words (21 GB). Similarly, Bio BERT was
trained from PubMed abstracts and PubMed Central full-text articles. The
remaining two variations of BERT models were trained on Medical Information
Mart for Intensive Care III (MIMIC III) dataset [17]. The MIMIC III corpus has
approximately 0.5 billion words (3.7 GB). Note that the Bio+Clinical BERT
model was initialized from Bio BERT, and the UMLS BERT model was initialized
from bio-clinical BERT while using the information from the CUIs.
### Training and Evaluation
For both case studies, we split the weak-labeled corpus into training (90%)
and validation (10%) sets. The random seed was kept the same for all six
models. We fine-tuned all six BERT models on the training data for ten epochs
with a learning rate of $2\times 10^{-5}$ and applied them to the validation
set. Balancing performance against efficiency, we used a batch size of 512
for the physical activity case study and 64 for the excessive diet case
study. Each model ended with a dropout layer (rate 0.3) for regularization
followed by a fully connected layer, and was optimized with Adam using the
cross-entropy loss. Sentences were padded to length 50, since the majority of
sentences were shorter than that. The models that performed best on the
validation set were further evaluated on the GSC.
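The data preparation described above, a reproducible 90/10 split and padding to length 50, can be sketched in plain Python. The seed value and pad id below are illustrative placeholders, since the paper reports only that the seed was kept fixed:

```python
import random

MAX_LEN, PAD_ID = 50, 0  # padding length from the paper; PAD_ID is a placeholder

def train_val_split(items, val_frac=0.1, seed=42):
    """Shuffle with a fixed seed and split into (train, validation) lists."""
    rng = random.Random(seed)  # the same seed reproduces the same split
    items = list(items)
    rng.shuffle(items)
    n_val = int(len(items) * val_frac)
    return items[n_val:], items[:n_val]

def pad(token_ids, max_len=MAX_LEN, pad_id=PAD_ID):
    """Right-pad (or truncate) a token-id sequence to a fixed length."""
    return (list(token_ids) + [pad_id] * max_len)[:max_len]
```

For instance, splitting the 886 weak-labeled excess diet sentences this way yields 798 training and 88 validation items.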
## Results
### Corpus statistics
We here describe the corpora for the two case studies to evaluate BERT models’
effectiveness with weak supervision.
#### Case 1: Physical activity
We extracted 23,559 sentences by searching for physical activity related CUIs
on the NLP-PIER. Of all the sentences, 22,785 (96.7%) sentences were assigned
as ”physical activity,” and 777 (3.3%) as ”physical inactivity.” In the GSC,
the numbers of sentences in the physical activity and physical inactivity
categories were 78 and 122, respectively.
#### Case 2: Excessive diet
In total, 886 sentences were used as training data with weak labels. Training
data were distributed as 302(34%), 133(15%), 153(17.3%), and 300(33.7%),
respectively, for high calorie diet, high fat diet, high salt diet, and
normal/none. In the GSC, the corresponding numbers were 18, 20, 20, and 70,
respectively.
### Model performance
All experiments were conducted on an Intel® Xeon® Gold 6152 processor with 22
cores and 256 GB of RAM. In the first case study, training one BERT model
took between 12 and 18 hours. For the excessive diet case study, training
took 0.5 to 2 hours, much shorter owing to the smaller training sample size
compared to the first study.
As shown in Tab 2, all pre-trained clinical BERT models outperformed the BERT
base model for physical activity. The PubmedBERT(Abs) model performed best,
with precision, recall, and macro F-1 scores of 0.96, 0.96, and 0.96,
respectively; these are 9%, 11%, and 10% higher than those of the BERT base
model, whose precision, recall, and F-1 scores were 0.88, 0.86, and 0.87,
respectively.
Table 2: Comparison of results for various BERT models for the first case study (physical activity). | Precision | Recall | F1
---|---|---|---
BERT Base | 0.88 | 0.86 | 0.87
PubmedBERT(Abs+Ft) | 0.92 | 0.90 | 0.91
PubmedBERT(Abs) | 0.96 | 0.96 | 0.96
Bio BERT | 0.88 | 0.90 | 0.88
UMLS BERT | 0.93 | 0.92 | 0.93
Bio-clinical BERT | 0.89 | 0.88 | 0.89
Table 3: Comparison of results for various BERT models for the second case study (excessive diet). | Precision | Recall | F1
---|---|---|---
BERT Base | 0.94 | 0.97 | 0.95
PubmedBERT(Abs+Ft) | 1.00 | 0.99 | 0.99
PubmedBERT(Abs) | 0.85 | 0.90 | 0.83
Bio BERT | 1.00 | 1.00 | 1.00
UMLS BERT | 0.79 | 0.86 | 0.77
Bio-clinical BERT | 0.83 | 0.92 | 0.85
In the excessive diet use case, the BERT base model performed very well, with
precision, recall, and F-1 scores of 0.94, 0.97, and 0.95, respectively (Tab
3). Only PubMedBERT(Abs+Ft) and Bio BERT outperformed the baseline model, and
only marginally. The Bio BERT model reached perfect performance, with
precision, recall, and F-1 scores all equal to 1.00, while UMLS BERT performed
the worst with significantly lower scores. The performance differences
between BERT models were also wider than in the previous case study, where
all models achieved relatively similar results.
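For reference, macro-averaged scores like those in Tab 2 and Tab 3 are per-class metrics averaged with equal weight per class; a minimal sketch (illustrative, not the study's evaluation code):

```python
def macro_metrics(y_true, y_pred):
    """Macro-averaged precision, recall and F1 over the classes in y_true."""
    classes = sorted(set(y_true))
    per_class = []
    for c in classes:
        # One-vs-rest counts for class c.
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        per_class.append((prec, rec, f1))
    n = len(classes)
    # Average each metric over classes (unweighted, hence "macro").
    return tuple(sum(col) / n for col in zip(*per_class))
```

Because each class contributes equally, macro averaging penalizes poor performance on rare classes such as ”physical inactivity,” which is why it is a sensible choice for the imbalanced corpora used here.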
## Discussion
The wide adoption of EHRs in the healthcare system generates big data about
patients’ healthcare delivery. The rapid advancement of artificial
intelligence (AI) methods and computational resources provides an
alternative, cost-effective way to study the impact of lifestyle factors on
AD using rich EHR data [26]. In this study, we demonstrated the feasibility
of using BERT models with weak supervision to classify lifestyle factors for
AD in clinical notes. The approach described here can be extended to other
lifestyle factors, accelerating the investigation of their roles in AD.
Training deep neural network models effectively requires extensive training
data. As in other studies [15], weak supervision can generate sufficiently
large training data with a rule-based NLP system and without further human
effort. Although weakly labeled training data contain some label noise, our
findings demonstrated the promising performance of BERT models on weakly
labeled data for two lifestyle factor classification tasks. Even though these
BERT models were all pre-trained in the biomedical or clinical domain, their
performances on different tasks were still distinguishable. For example,
PubMedBERT(Abs) outperformed PubMedBERT(Abs+Ft) on physical activity, whereas
the latter performed better on classifying excessive diet. In addition, the
BERT model and its variants were robust to a dataset with imbalanced classes,
since all of them performed impressively with a class ratio of roughly 1:30.
During the evaluation of the models and error analysis, we found most BERT
models’ variations had difficulties in classifying the sentences in a
complicated contextual situation. For example, the sentence ”…states she
stayed physically active with gardening and housework, but would like to
increase her aerobic exercise…” meant that the patient was physically active.
Still, she would like to be even more active. However, all models predicted
the sentence mentioned above as ”physical inactivity” while it should be
”physical activity.” Similarly, all models classified the sentence, ”Patient
continues to be physically active without doing any aerobic exercise outside
of cardiac rehab” as ”physical inactivity.” At the same time, it should have
been labeled as ”physical activity.” We believe that the phrase ”without doing
any aerobic exercise” may have led the model to mistakenly classify this
sentence as it introduces a piece of contradicting information to the first
part of the sentence. In addition, some sentences, such as ”The patient was
very physically inactive,” which seemed obviously ”physical inactivity,” were
mispredicted as ”physical activity.” We believe that a more extensive
training dataset could prevent these errors.
Regarding the second case study (excessive diet), most of the incorrectly
classified sentences were from the ”normal/none” class, and most of these
were misclassified as ”high salt” or ”high fat.” For example, the sentence
”history of cocaine abuse and acute syphilis” was classified as ”high salt”
when it should have been ”normal/none.” It is possible that some
domain-specific BERT models, compared with the BERT base model trained on a
broader-domain corpus, lacked the knowledge to recognize sentences with no
mention of lifestyle factors. In fact, for such sentences, a rule-based
classification model would perform better.
While our performance metrics were relatively high (0.97–0.98) for both case
studies, our study has some limitations. First, BERT models are sensitive to
the size of the training data, and the excessive diet dataset, with 886
sentences, was still relatively small. Together with the differences in model
construction and pre-training corpora, this may explain why the differences
between BERT models and their variants were larger than in the first case
study. Second, the highly imbalanced classes in the physical activity case
study could be a limitation. Taken together, our results and limitations
indicate that our approach performed well, but further replication is needed
to understand all models better.
Collectively, our findings have significant implications for further lifestyle
research in AD. They provide a methodology to use unstructured EHR data to
address the inconsistent findings of the strengths of association between
lifestyle factors and AD risk and allow the simultaneous examinations of
multiple lifestyle factors and their interactive/synergistic effects on
cognitive changes and AD risk. They also provide an approach for future
causal modeling of the effects of lifestyle changes on clinical outcomes in
AD. EHRs offer a potential source of data that can be evaluated and defined
to address questions measuring the causal effect of an intervention or
exposure on an outcome of interest. Unlike RCTs, which are time-consuming and
expensive to carry out and have limited generalizability, studies using EHR
data are scalable and affordable. Developing causal modeling methods using
EHR data will allow large-scale and pragmatic trials.
## Conclusion
In this study, we used weak supervision and various pre-trained clinical BERT
models to classify lifestyle factors in AD from clinical notes. The purpose
of using weak supervision was to avoid the laborious task of creating a
hand-labeled dataset. We evaluated effectiveness on two text classification
case studies: classifying sentences regarding physical activity and excessive
diet. We tested one baseline model and five other BERT models:
PubmedBERT(Abs+Ft), PubmedBERT(Abs), Bio BERT, UMLS BERT, and Bio-clinical
BERT. PubmedBERT(Abs) and the Bio BERT model performed best in the two use
cases, respectively. This approach can be further extended to other factors,
such as substance abuse, to investigate their effects on AD and provide
additional AD research opportunities.
## Funding
Y.Y. thanks the University of Minnesota’s Undergraduate Research Opportunities
Program (UROP). This work was partially supported by the National Institutions
of Health’s National Center for Complementary & Integrative Health (NCCIH),
the Office of Dietary Supplements (ODS) and National Institute on Aging (NIA)
grant number R01AT009457 (PI: Zhang) and Clinical and Translational Science
Award (CTSA) program grant number UL1TR002494 (PI: Blazar). The content is
solely the responsibility of the authors and does not represent the official
views of the NCCIH, ODS or NIA.
## Abbreviations
AD: Alzheimer’s Disease; BERT: Bidirectional Encoder Representations from
Transformer; CUIs:Concept Unique Identifiers; GSC: Gold Standard Corpus; MIMIC
III: Medical Information Mart for Intensive Care III; UMLS: Unified Medical
Language System;
## Competing interests
There are no competing interests.
## References
* [1] Alzheimer’s_Association: What is Alzheimer’s? https://www.alz.org/alzheimers-dementia/what-is-alzheimers
* [2] NIH: Alzheimer’s Disease Fact Sheet. U.S. Department of Health and Human Services. https://www.nia.nih.gov/health/alzheimers-disease-fact-sheet
* [3] Frederiksen, K.S., Gjerum, L., Waldemar, G., Hasselbalch, S.G.: Physical activity as a moderator of alzheimer pathology: A systematic review of observational studies. Current Alzheimer Research 16(4), 362–378 (2019). doi:10.2174/1567205016666190315095151
* [4] Dhana, K., Evans, D.A., Rajan, K.B., Bennett, D.A., Morris, M.C.: Healthy lifestyle and the risk of alzheimer dementia: Findings from 2 longitudinal studies. Neurology 95(4), 374–383 (2020)
* [5] Kivipelto, M., Solomon, A., Ahtiluoto, S., Ngandu, T., Lehtisalo, J., Antikainen, R., Bäckman, L., Hänninen, T., Jula, A., Laatikainen, T., et al.: The finnish geriatric intervention study to prevent cognitive impairment and disability (finger): Study design and progress. Alzheimer’s & Dementia 9(6), 657–665 (2013). doi:10.1016/j.jalz.2012.09.012
* [6] Alzheimer’s_Association: A Lifestyle Intervention Trial to Support Brain Health and Prevent Cognitive Decline. https://alz.org/us-pointer/overview.asp
* [7] Blumenthal, D.: Launching hitech. New England Journal of Medicine 362(5), 382–385 (2010). doi:10.1056/NEJMp0912825. PMID: 20042745. https://doi.org/10.1056/NEJMp0912825
* [8] Wang, Y., Wang, L., Rastegar-Mojarad, M., Moon, S., Shen, F., Afzal, N., Liu, S., Zeng, Y., Mehrabi, S., Sohn, S., Liu, H.: Clinical information extraction applications: A literature review. Journal of Biomedical Informatics 77, 34–49 (2018). doi:10.1016/j.jbi.2017.11.011
* [9] Velupillai, S., Suominen, H., Liakata, M., Roberts, A., Shah, A.D., Morley, K., Osborn, D., Hayes, J., Stewart, R., Downs, J., Chapman, W., Dutta, R.: Using clinical natural language processing for health outcomes research: Overview and actionable suggestions for future advances. Journal of Biomedical Informatics 88, 11–19 (2018). doi:10.1016/j.jbi.2018.10.005
* [10] Névéol, A., Zweigenbaum, P.: Clinical natural language processing in 2014: Foundational methods supporting efficient healthcare. Yearbook of Medical Informatics 24(01), 194–198 (2015). doi:10.15265/iy-2015-035
* [11] Wu, Y., Jiang, M., Xu, J., Zhi, D., Xu, H.: Clinical named entity recognition using deep learning models. In: AMIA Annual Symposium Proceedings, vol. 2017, p. 1812 (2017). American Medical Informatics Association
* [12] Critical Data, M.: Secondary Analysis of Electronic Health Records. Springer (2016)
* [13] Zhou, X., Wang, Y., Sohn, S., Therneau, T.M., Liu, H., Knopman, D.S.: Automatic extraction and assessment of lifestyle exposures for alzheimer’s disease using natural language processing. International Journal of Medical Informatics 130, 103943 (2019). doi:10.1016/j.ijmedinf.2019.08.003
* [14] Yi, Y., Shen, Z., Bompelli, A., Yu, F., Wang, Y., Zhang, R.: Natural language processing methods to extract lifestyle exposures for alzheimer’s disease from clinical notes. In: HealthNLP Workshop 2020 (in Press) (2020)
* [15] Wang, Y., Sohn, S., Liu, S., Shen, F., Wang, L., Atkinson, E.J., Amin, S., Liu, H.: A clinical text classification paradigm using weak supervision and deep representation. BMC medical informatics and decision making 19(1), 1 (2019)
* [16] Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. CoRR abs/1810.04805 (2018). 1810.04805
* [17] Johnson, A.E., Pollard, T.J., Shen, L., Li-Wei, H.L., Feng, M., Ghassemi, M., Moody, B., Szolovits, P., Celi, L.A., Mark, R.G.: Mimic-iii, a freely accessible critical care database. Scientific data 3(1), 1–9 (2016)
* [18] Gu, Y., Tinn, R., Cheng, H., Lucas, M., Usuyama, N., Liu, X., Naumann, T., Gao, J., Poon, H.: Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing (2020). arXiv:2007.15779
* [19] Alsentzer, E.: Clinical BERT Embeddings. GitHub (2020). https://github.com/EmilyAlsentzer/clinicalBERTs
* [20] Gu, Y., Tinn, R., Cheng, H., Lucas, M., Usuyama, N., Liu, X., Naumann, T., Gao, J., Poon, H.: Domain-specific language model pretraining for biomedical natural language processing. arXiv preprint arXiv:2007.15779 (2020)
* [21] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H., Kang, J.: Biobert: a pre-trained biomedical language representation model for biomedical text mining. CoRR abs/1901.08746 (2019). 1901.08746
* [22] Michalopoulos, G., Wang, Y., Kaka, H., Chen, H., Wong, A.: Umlsbert: Clinical domain knowledge augmentation of contextual embeddings using the unified medical language system metathesaurus. arXiv preprint arXiv:2010.10391 (2020)
* [23] McEwan, R., Melton, G.B., Knoll, B.C., Wang, Y., Hultman, G., Dale, J.L., Meyer, T., Pakhomov, S.V.: Nlp-pier: a scalable natural language processing, indexing, and searching architecture for clinical notes. AMIA Summits on Translational Science Proceedings 2016, 150 (2016)
* [24] Knoll, B.: BioMedICUS: A biomedical and clinical NLP engine. GitHub (2020). https://github.com/nlpie/biomedicus3
* [25] Klie, J.-C.: Inception: Interactive machine-assisted annotation. In: Proceedings of the First Biennial Conference on Design of Experimental Search & Information Retrieval Systems, pp. 105–105 (2018). http://tubiblio.ulb.tu-darmstadt.de/106627/
* [26] Zhang, R., Simon, G., Yu, F.: Advancing alzheimer’s research: A review of big data promises. International journal of medical informatics 106, 48–56 (2017)
# Seshadri constants and K-stability of Fano manifolds
Hamid Abban and Ziquan Zhuang _Hamid Abban_
Department of Mathematical Sciences, Loughborough University, Loughborough
LE11 3TU, UK
<EMAIL_ADDRESS>_Ziquan Zhuang_
Department of Mathematics, MIT, Cambridge, MA, 02139, USA
<EMAIL_ADDRESS>
###### Abstract.
We give a lower bound of the $\delta$-invariants of ample line bundles in
terms of Seshadri constants. As applications, we prove the uniform K-stability
of infinitely many families of Fano hypersurfaces of arbitrarily large index,
as well as the uniform K-stability of most families of smooth Fano threefolds
of Picard number one.
## 1\. Introduction
Existence of Kähler-Einstein metrics on Fano manifolds is detected by
K-stability: a Fano manifold admits a Kähler-Einstein metric if and only if it
is K-polystable [CDS, Tian-YTD]. However, deciding whether a given Fano
manifold is K-polystable is quite a challenging problem. We refer to the
recent survey by Xu [Xu-survey] for details on the subject and its
development. A uniform approach to checking K-stability was proposed recently
by the authors in [AZ-K-adjunction], which offers an inductive approach to
K-stability; the skeleton of its proof relies on lifting the calculation of
the so-called $\delta$-invariants (see Section 2.2) to certain flags of
subvarieties by adjunction. One particularly useful tool provided in
[AZ-K-adjunction] is a K-stability criterion that only involves the
existence of a linear system $|L|$ satisfying a simple numerical condition, so
that through each point there is a curve given by complete intersection of
divisors in $|L|$ (see [AZ-K-adjunction]*Theorem 1.2). In this article, we
provide an even stronger criterion using the Seshadri constant, an invariant
that was originally introduced by Demailly [Dem-Seshadri] to measure the local
positivity of line bundles.
###### Theorem A (Theorem 3.1).
Let $X$ be a projective variety of dimension $n\geq 2$ and let $L$ be an ample
line bundle on $X$. Let $x\in X$ and let $S=H_{1}\cap\cdots\cap
H_{n-2}\subseteq X$ be a complete intersection surface passing through $x$,
where each $H_{i}\in|L|$. Assume that $S$ is integral, and smooth at $x$. Then
$\delta_{x}(L)\geq\frac{n+1}{(L^{n})}\cdot\varepsilon_{x}(L|_{S}).$
For a precise description of the equality cases we refer to the statement of
Theorem 3.1.
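As an illustration (our sanity check, not part of the statement above): take $X=\mathbb{P}^{2}$ and $L=\mathcal{O}(1)$, so $n=2$ and $S=X$. Then $\varepsilon_{x}(L)=1$ and $(L^{2})=1$, so the bound reads $\delta_{x}(L)\geq 3$. This is attained: using the valuative characterization $\delta_{x}(L)=\inf_{E}A_{X}(E)/S(L;E)$ of [BJ-delta], the exceptional divisor $E$ of the blow-up $\pi\colon Y\to\mathbb{P}^{2}$ at $x$, which has $A_{X}(E)=2$, gives

```latex
% vol(pi^* O(1) - tE) = 1 - t^2 for 0 <= t <= 1, and zero afterwards, so
S(L;E) = \frac{1}{(L^{2})}\int_{0}^{\infty}\mathrm{vol}(\pi^{*}L - tE)\,dt
       = \int_{0}^{1}(1-t^{2})\,dt = \frac{2}{3},
\qquad
\frac{A_{X}(E)}{S(L;E)} = \frac{2}{2/3} = 3,
```

so equality holds in the bound for this divisor.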
This result enables us to give strong estimates of the $\delta$-invariants of
many Fano varieties. One major application is to prove uniform K-stability for
a large class of smooth hypersurfaces. Before stating the result, we recall
the following folklore conjecture; see [Xu-survey]*Part 3.
###### Conjecture.
Any smooth Fano hypersurface $X\subseteq\mathbb{P}^{n+1}$ of degree $d\geq 3$
is K-stable.
For general Fano hypersurfaces, this conjecture follows from the K-stability
of the Fermat hypersurfaces [Tian-Fermat, AGP, Z-equivariant], together with
the openness of the K-stable locus in smooth families [Oda-openness, Don-
openness, Xu-quasimonomial, BLX-openness]. For arbitrary hypersurfaces, the
conjecture is known to be true when the Fano index is at most two [Fuj-alpha,
LX-cubic-3fold, AZ-K-adjunction], or when the dimension is at most $4$ [Liu-
cubic-4-fold]. Using Theorem A and some careful study of Seshadri constants,
we are able to extend this to a much larger class of hypersurfaces:
###### Theorem B (Theorem 4.1).
Let $X\subseteq\mathbb{P}^{n+1}$ be a smooth Fano hypersurface of Fano index
$r\geq 3$ and dimension $n\geq r^{3}$. Then $X$ is uniformly K-stable.
For smooth Fano manifolds, uniform K-stability is equivalent to K-stability
by a combination of the analytic works [CDS, Tian-YTD, BBJ-variational]
(postscript note: an algebraic proof is now available in the very recent work
[LXZ-HRFG]). Our proof of the above theorem is completely
algebraic. In particular, even for general hypersurfaces of Fano index $\geq
3$ in the given dimensions, this is perhaps the first algebraic proof of their
uniform K-stability (see [AZ-K-adjunction] for the small index cases).
The next application concerns smooth Fano threefolds of Picard number one.
They are classified by Iskovskikh into seventeen families; see [Fano-book].
Some of them, such as $\mathbb{P}^{3}$, the quadric $Q$, and the Fano
threefold $V_{5}$ of index two and degree $5$, have infinite automorphism
groups and therefore are not K-stable. It is also well-known that not all Fano
threefolds of degree $22$ are K-stable [Tian-K-stability-defn]. We show that
in most of the remaining degrees, the Fano threefolds are uniformly K-stable.
This is new when the Fano threefold has index one and degree at least $8$. It
also provides a unified and purely algebraic proof for all the sporadic cases
that were previously known [AGP, Der-finite-cover, Fuj-alpha, LX-cubic-3fold,
Z-cpi, AZ-K-adjunction].
###### Theorem C (Theorem 5.1).
Let $X$ be a smooth Fano threefold of Picard number one. Assume that
$(-K_{X})^{3}\neq 18,22$ and $X\neq\mathbb{P}^{3},Q$ or $V_{5}$. Then $X$ is
uniformly K-stable.
### 1.1. Structure of the paper
We set the notation and gather some preliminary results in Section 2. The main
technical result of this paper, Theorem A, which relates Seshadri constants to
stability thresholds, is contained in Section 3. In Section 4, we apply this
to obtain the first application, Theorem B, which concerns the uniform
K-stability of hypersurfaces. Finally, in Section 5 we present the other
application, Theorem C, which concerns the uniform K-stability of Fano
threefolds.
### Acknowledgements
We are grateful to Fabian Gundlach, Yuji Odaka, Artie Prendergast-Smith and
Chenyang Xu for helpful discussion. We would also like to thank the referees
for careful reading and helpful comments. HA is supported by EPSRC grants
EP/T015896/1 and EP/V048619/1. ZZ is partially supported by NSF Grant
DMS-2055531.
## 2\. Preliminary
### 2.1. Notation and conventions
We work over $\mathbb{C}$. Unless otherwise specified, all varieties are
assumed to be normal and projective. A pair $(X,\Delta)$ consists of a variety
$X$ and an effective $\mathbb{Q}$-divisor $\Delta$ such that $K_{X}+\Delta$ is
$\mathbb{Q}$-Cartier. The notions of klt and lc singularities are defined as
in [Kol-mmp]*Definition 2.8. If $\pi:Y\to X$ is a projective birational
morphism and $E$ is a prime divisor on $Y$, then we say $E$ is a divisor over
$X$. A valuation on $X$ will mean a valuation
$v\colon\mathbb{C}(X)^{*}\to\mathbb{R}$ that is trivial on $\mathbb{C}^{*}$.
We write $C_{X}(E)$ (resp. $C_{X}(v)$) for the center of a divisor (resp.
valuation) and $A_{X,\Delta}(E)$ (resp. $A_{X,\Delta}(v)$) for the log
discrepancy of the divisor $E$ (resp. the valuation $v$) with respect to the
pair $(X,\Delta)$ (see [JM-valuation, BdFFU-valuation]). We write
$\mathrm{Val}_{X,\Delta}^{*}$ for the set of nontrivial valuations $v$ on $X$
such that $(X,\Delta)$ is klt at the center of $v$ and
$A_{X,\Delta}(v)<\infty$. For any valuation $v$ and any linear series $V$ we
denote by $\mathcal{F}_{v}$ the filtration on $V$ given by
$\mathcal{F}_{v}^{\lambda}V=\\{s\in V\,|\,v(s)\geq\lambda\\}$. For a klt pair
$(X,\Delta)$, a closed subset $Z\subseteq X$ and an effective divisor (or an
ideal sheaf) $D$ on $X$, we denote by
$\mathrm{lct}_{Z}(X,\Delta;D)$ the largest number $\lambda\geq 0$ such that
the non-lc locus of $(X,\Delta+\lambda D)$ does not contain $Z$.
### 2.2. K-stability and stability thresholds
In this section, we recall the definition of K-stability through stability
thresholds.
###### Definition 2.1 ([FO-delta, BJ-delta]).
Let $(X,\Delta)$ be a projective pair, let $Z\subseteq X$ be a subvariety, and
let $L$ be an ample line bundle on $X$. Let $m>0$ be an integer such that
$H^{0}(X,mL)\neq 0$.
1. (1)
An $m$-basis type $\mathbb{Q}$-divisor of $L$ is a $\mathbb{Q}$-divisor of the
form
$D=\frac{1}{mN_{m}}\sum_{i=1}^{N_{m}}\\{s_{i}=0\\}$
where $N_{m}=h^{0}(X,mL)$ and $s_{1},\cdots,s_{N_{m}}$ is a basis of
$H^{0}(X,mL)$. We define $\delta_{m}(L)$ (resp. $\delta_{Z,m}(L)$) to be the
largest number $\lambda\geq 0$ such that $(X,\Delta+\lambda D)$ is lc (resp.
lc at the generic point of $Z$) for every $m$-basis type $\mathbb{Q}$-divisor
$D$ of $L$.
2. (2)
Let $v$ be a nontrivial valuation on $X$. We define
$T_{m}(L;v)=\frac{\max\\{v(D)\,|\,D\in|mL|\\}}{m}$
and set $T(L;v)=\lim_{m\to\infty}T_{m}(L;v)$ (it is usually called the pseudo-
effective threshold). We say that $v$ is of linear growth if $T(L;v)<\infty$
(this is the case when $v$ is divisorial or $v\in\mathrm{Val}^{*}_{X,\Delta}$;
see [BSKM-linear-growth]*Section 2.3 and [BJ-delta]*Section 3.1). For such
valuations we set
$\mathrm{vol}(L;v\geq t)=\lim_{m\to\infty}\frac{\dim\\{s\in
H^{0}(X,mL)\,|\,v(s)\geq mt\\}}{m^{\dim X}/(\dim X)!}$
and $S(L;v)=\frac{1}{\mathrm{vol}(L)}\int_{0}^{\infty}\mathrm{vol}(L;v\geq
t)\mathrm{d}t$. If $E$ is a divisor over $X$, we define
$S(L;E):=S(L;\mathrm{ord}_{E})$, $T(L;E):=T(L;\mathrm{ord}_{E})$, etc.
3. (3)
Assume that $(X,\Delta)$ is klt (or klt at the generic point of $Z$ in the
local case). The local and global stability thresholds (or $\delta$-invariant)
are defined to be
$\delta_{Z}(L):=\inf_{v\in\mathrm{Val}^{*}_{X,\Delta},Z\subseteq
C_{X}(v)}\frac{A_{X,\Delta}(v)}{S(L;v)},\quad\delta(L):=\inf_{v\in\mathrm{Val}^{*}_{X,\Delta}}\frac{A_{X,\Delta}(v)}{S(L;v)}.$
Clearly $\delta(L)=\inf_{x\in X}\delta_{x}(L)$. We say that a valuation
$v\in\mathrm{Val}^{*}_{X,\Delta}$ computes $\delta_{Z}(L)$ (or $\delta(L)$) if
it achieves the above infimum. Such valuations always exist by
[BJ-delta]*Theorem E. We will sometimes write $\delta_{Z}(X,\Delta;L)$ and
$\delta(X,\Delta;L)$ if the pair $(X,\Delta)$ is not clear from the context.
By [BJ-delta]*Theorem A, we have
$\lim_{m\to\infty}\delta_{m}(L)=\delta(L)$. By the same proof as in
_loc. cit._ we also have
$\lim_{m\to\infty}\delta_{Z,m}(L)=\delta_{Z}(L)$.
###### Theorem-Definition 2.2 ([FO-delta, BJ-delta, Fuj-criterion, Li-
criterion]).
Let $X$ be a Fano variety. Then it is K-semistable (resp. uniformly K-stable)
if and only if $\delta(-K_{X})\geq 1$ (resp. $\delta(-K_{X})>1$).
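As a toy illustration of these definitions (our own example, not taken from the cited references), one can compute the stability threshold of $\mathbb{P}^{1}$ by hand:

```latex
% Toy check of Definition 2.1 on X = P^1, L = -K_X = O(2), v = ord_x.
% A section of mL = O(2m) vanishing to order >= mt at x exists iff t <= 2:
\[
T(-K_{X};\mathrm{ord}_{x})=2,\qquad
\mathrm{vol}(-K_{X};\mathrm{ord}_{x}\geq t)=2-t\quad(0\leq t\leq 2),
\]
\[
S(-K_{X};\mathrm{ord}_{x})
=\frac{1}{\mathrm{vol}(-K_{X})}\int_{0}^{2}(2-t)\,\mathrm{d}t
=\frac{2}{2}=1=A_{X}(\mathrm{ord}_{x}).
\]
% Every divisorial valuation on P^1 is a rescaling of some ord_x, so
% delta(-K_{P^1}) = 1: P^1 is K-semistable but not uniformly K-stable.
```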
### 2.3. Seshadri constants, movable thresholds and pseudo-effective
thresholds
Let $X$ be a variety and let $L$ be an ample line bundle on $X$. Let $v$ be a
valuation of linear growth on $X$ whose center $C_{X}(v)$ has codimension at
least two. Then the movable threshold $\eta(L;v)$ is defined as
$\begin{split}\eta(L;v)=\sup\\{&\eta>0\,|\,\text{ for some }m\in\mathbb{N},\\\
&\,\mathrm{Bs}|\mathcal{F}_{v}^{m\eta}H^{0}(X,mL)|\text{ has codimension at
least two}\\}.\end{split}$
Note that $\eta(L;v)>0$. Indeed, if we choose some sufficiently large integer
$m>0$ such that $\mathcal{O}_{X}(mL)\otimes\mathcal{I}_{C_{X}(v)}$ is globally
generated, then as the base locus of this linear system has codimension at
least $2$ we get $\eta(L;v)\geq\frac{1}{m}v(\mathcal{I}_{C_{X}(v)})>0$.
Let $x\in X$ be a smooth point, let $\pi\colon\widetilde{X}\to X$ be the
blowup of $x$ and let $E$ be the exceptional divisor. Then the Seshadri
constant of $L$ at $x$ is defined to be (see [Dem-Seshadri] or [Laz-
positivity-1]*Chapter 5)
$\varepsilon_{x}(L):=\sup\\{t\geq 0\,|\,\pi^{*}L-tE\mbox{ is
nef}\\}=\inf_{C\subseteq X}\frac{(L\cdot C)}{\mathrm{mult}_{x}C},$
where the infimum is taken over all irreducible curves $C\subseteq X$ passing
through $x$. We also denote the pseudo-effective threshold $T(L;E)$ (resp.
movable threshold $\eta(L;E)$) in this case by $\tau_{x}(L)$ (resp.
$\eta_{x}(L)$). Note that $\varepsilon_{x}(L)=\eta_{x}(L)$ when $X$ is a
surface. By definition it is easy to see that
$\varepsilon_{x}(L)\leq\eta_{x}(L)\leq\tau_{x}(L)$ and
$\tau_{x}(L)=\sup\\{\mathrm{mult}_{x}D\,|\,0\leq D\sim_{\mathbb{Q}}L\\}.$
As we have $(\pi^{*}L-\varepsilon_{x}(L)E)^{n}\geq 0$ (where $n=\dim X$), it
follows that $\sqrt[n]{(L^{n})}\geq\varepsilon_{x}(L)$. It is also well-known
that $\tau_{x}(L)\geq\sqrt[n]{(L^{n})}$ (see e.g. [Laz-positivity-2]*Lemma
10.4.12). When $L$ is very ample, we also have
$\eta_{x}(L)\leq\sqrt{(L^{n})}$: otherwise we can find two effective
$\mathbb{Q}$-divisors $D_{1},D_{2}\sim_{\mathbb{Q}}L$ that have no common
components such that $\mathrm{mult}_{x}D_{i}>\sqrt{(L^{n})}$, and we have
$(L^{n})=(D_{1}\cdot D_{2}\cdot H_{1}\cdot\ldots\cdot
H_{n-2})\geq\mathrm{mult}_{x}D_{1}\cdot\mathrm{mult}_{x}D_{2}>(L^{n})$
for some general members $H_{1},\cdots,H_{n-2}$ of the linear system
$|L\otimes\mathfrak{m}_{x}|$, a contradiction. For later use, we recall some
more properties of these invariants.
###### Lemma 2.3.
Let $L$ be an ample line bundle on a variety $X$ and let $v$ be a valuation of
linear growth on $X$ whose center has codimension at least two. Assume that
$X$ is $\mathbb{Q}$-factorial of Picard number one and $\eta(L;v)<T(L;v)$.
Then there exists a unique irreducible $\mathbb{Q}$-divisor
$D_{0}\sim_{\mathbb{Q}}L$ on $X$ such that $v(D_{0})>\eta(L;v)$. Moreover, we
have $v(D_{0})=T(L;v)$ and for any effective $\mathbb{Q}$-divisor
$D\sim_{\mathbb{Q}}L$ such that $v(D)\geq\eta(L;v)$ we have
$D\geq\frac{v(D)-\eta(L;v)}{T(L;v)-\eta(L;v)}\cdot D_{0}.$
###### Proof.
For ease of notation, let $\eta=\eta(L;v)$ and $T=T(L;v)$. We first prove the
uniqueness. Suppose that there exist two such irreducible
$\mathbb{Q}$-divisors $D_{0}$, $D_{1}$. Let
$\lambda=\min\\{v(D_{0}),v(D_{1})\\}>\eta$. Then for some sufficiently
divisible integer $m$ the base locus of
$|\mathcal{F}_{v}^{m\lambda}H^{0}(mL)|$ has codimension at least two since
$mD_{0},mD_{1}\in|\mathcal{F}_{v}^{m\lambda}H^{0}(mL)|$. Hence
$\eta\geq\lambda$, a contradiction. This proves the uniqueness of $D_{0}$.
For the existence, let $D\sim_{\mathbb{Q}}L$ be an effective
$\mathbb{Q}$-divisor on $S$ such that $v(D)>\eta$ (which exists as
$T(L;v)>\eta(L;v)$). Since $\rho(X)=1$, we may write $D=\sum\lambda_{i}D_{i}$
where $\lambda_{i}>0$, $\sum\lambda_{i}=1$ and each $D_{i}\sim_{\mathbb{Q}}L$
is irreducible. As $v(D)>\eta$, at least one of the $D_{i}$ satisfies
$v(D_{i})>\eta$. This proves the existence of $D_{0}$. Since such $D_{0}$ is
unique, it is then clear from the definition of pseudo-effective threshold
that $v(D_{0})=T$ and moreover we have $v(D_{i})\leq\eta$ for all $i>0$. Thus
$v(D-\lambda_{0}D_{0})\leq(1-\lambda_{0})\eta$. Solving this inequality gives
$\lambda_{0}\geq\frac{v(D)-\eta}{T-\eta}$. ∎
###### Lemma 2.4.
Let $S$ be a $\mathbb{Q}$-factorial surface of Picard number one, let $L$ be
an ample line bundle on $S$ and let $x\in S$ be a smooth point. Then we have
$\varepsilon_{x}(L)\cdot\tau_{x}(L)=(L^{2})$.
###### Proof.
Again let $\varepsilon=\varepsilon_{x}(L)$ and $\tau=\tau_{x}(L)$. Clearly
$\varepsilon\leq\tau$ by definition. If $\varepsilon=\tau$, then since
$\tau\geq\sqrt{(L^{2})}\geq\varepsilon$ we necessarily have
$\varepsilon=\tau=\sqrt{(L^{2})}$, hence $\varepsilon\tau=(L^{2})$. Thus we
may assume that $\varepsilon<\tau$. By Lemma 2.3, there exists a unique
irreducible $\mathbb{Q}$-divisor $D_{0}\sim_{\mathbb{Q}}L$ on $S$ such that
$\mathrm{mult}_{x}D_{0}=\tau$. Let $C\subseteq S$ be an irreducible curve
passing through $x$. If $D_{0}$ is supported on $C$, then
$\frac{(C\cdot L)}{\mathrm{mult}_{x}C}=\frac{(D_{0}\cdot
L)}{\mathrm{mult}_{x}D_{0}}=\frac{(L^{2})}{\tau},$
otherwise we have
$\frac{(C\cdot L)}{\mathrm{mult}_{x}C}=\frac{(C\cdot
D_{0})}{\mathrm{mult}_{x}C}\geq\mathrm{mult}_{x}D_{0}=\tau.$
Since $\tau^{2}\geq(L^{2})$, by the definition of Seshadri constants we see
that $\varepsilon=\frac{(L^{2})}{\tau}$. ∎
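As a quick sanity check of Lemma 2.4 (again our own example), take $S=\mathbb{P}^{2}$ and $L=\mathcal{O}_{\mathbb{P}^{2}}(1)$:

```latex
% Intersecting an effective Q-divisor D ~_Q L with a general line l
% through x (chosen not inside Supp D) gives
\[
1=(D\cdot\ell)\geq\mathrm{mult}_{x}D\cdot\mathrm{mult}_{x}\ell
=\mathrm{mult}_{x}D,
\]
% so tau_x(L) <= 1, with equality attained by any line through x; the
% same line also computes eps_x(L) = 1.  Hence
\[
\varepsilon_{x}(L)\cdot\tau_{x}(L)=1=(L^{2}),
\]
% as predicted (here eps = tau = sqrt{(L^2)}).
```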
### 2.4. Restricted volumes
We refer to [ELMNP] for the original definition of the restricted volume
$\mathrm{vol}_{X|Z}(L)$ of a divisor $L$ along a subvariety $Z$. For our
purposes, their most important properties are summarized in the following
statement.
###### Lemma 2.5.
Let $L$ be an ample line bundle on a projective variety $X$ of dimension $n$,
let $\pi\colon Y\to X$ be a birational morphism and let $E\subseteq Y$ be a
prime divisor on $Y$. Then
(2.1)
$\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{vol}(\pi^{*}L-tE)=-n\cdot\mathrm{vol}_{Y|E}(\pi^{*}L-tE)$
for all $0\leq t<T(L;E)$. Moreover, the function
$t\mapsto\mathrm{vol}_{Y|E}(\pi^{*}L-tE)^{\frac{1}{n-1}}$ is concave on
$[0,T(L;E))$.
###### Proof.
The equality (2.1) follows from [BFJ-volume-C^1] or [LM-okounkov-
body]*Corollary C. The concavity part follows from [ELMNP]*Theorem A. ∎
We may then rewrite the formula for the $S$-invariant using restricted
volumes as follows.
###### Lemma 2.6.
In the situation of Lemma 2.5, we have
$S(L;E)=\frac{n}{(L^{n})}\int_{0}^{T(L;E)}x\cdot\mathrm{vol}_{Y|E}(\pi^{*}L-xE)\mathrm{d}x.$
###### Proof.
By definition,
$S(L;E)=\frac{1}{(L^{n})}\int_{0}^{T(L;E)}\mathrm{vol}(\pi^{*}L-tE)\mathrm{d}t$.
Thus the statement follows from (2.1) and integration by parts. ∎
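For instance (our own sanity check), let $X=\mathbb{P}^{2}$, $L=\mathcal{O}(1)$, and let $\pi\colon Y\to X$ be the blowup of a point with exceptional divisor $E$. Then $\pi^{*}L-xE$ is nef for $0\leq x\leq 1=T(L;E)$, so $\mathrm{vol}_{Y|E}(\pi^{*}L-xE)=((\pi^{*}L-xE)\cdot E)=x$, and the formula of Lemma 2.6 gives

```latex
\[
S(L;E)=\frac{2}{(L^{2})}\int_{0}^{1}x\cdot x\,\mathrm{d}x=\frac{2}{3},
\]
% which agrees with the direct computation
\[
S(L;E)=\int_{0}^{1}\mathrm{vol}(\pi^{*}L-tE)\,\mathrm{d}t
=\int_{0}^{1}(1-t^{2})\,\mathrm{d}t=\frac{2}{3}.
\]
```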
### 2.5. Filtered linear series and compatible divisors
In this section, we briefly recall some definitions from [AZ-K-
adjunction]*Sections 2.5-2.6.
###### Definition 2.7.
Let $L_{1},\cdots,L_{r}$ be line bundles on $X$. An $\mathbb{N}^{r}$-graded
linear series $W_{\vec{\bullet}}$ on $X$ associated to the $L_{i}$’s consists
of finite dimensional subspaces
$W_{\vec{a}}\subseteq H^{0}(X,\mathcal{O}_{X}(a_{1}L_{1}+\cdots+a_{r}L_{r}))$
for each $\vec{a}\in\mathbb{N}^{r}$ such that $W_{\vec{0}}=\mathbb{C}$ and
$W_{\vec{a}_{1}}\cdot W_{\vec{a}_{2}}\subseteq W_{\vec{a}_{1}+\vec{a}_{2}}$
for all $\vec{a}_{1},\vec{a}_{2}\in\mathbb{N}^{r}$. The support
$\mathrm{Supp}(W_{\vec{\bullet}})\subseteq\mathbb{R}^{r}$ of
$W_{\vec{\bullet}}$ is defined as the closure of the convex cone spanned by
all $\vec{a}\in\mathbb{N}^{r}$ such that $W_{\vec{a}}\neq 0$. In this paper we
only consider multi-graded linear series that have bounded support and contain
ample series, see [LM-okounkov-body]*Section 4.3 or [AZ-K-
adjunction]*Definition 2.11 for the precise definition. A filtration
$\mathcal{F}$ on $W_{\vec{\bullet}}$ is given by the data of vector subspaces
$\mathcal{F}^{\lambda}W_{\vec{a}}\subseteq W_{\vec{a}}$ for each
$\lambda\in\mathbb{R}$ and $\vec{a}\in\mathbb{N}^{r}$ such that
$\mathcal{F}^{\lambda_{1}}W_{\vec{a}_{1}}\cdot\mathcal{F}^{\lambda_{2}}W_{\vec{a}_{2}}\subseteq\mathcal{F}^{\lambda_{1}+\lambda_{2}}W_{\vec{a}_{1}+\vec{a}_{2}}$
for all $\lambda_{i}\in\mathbb{R}$ and all $\vec{a}_{i}\in\mathbb{N}^{r}$. We
only consider linearly bounded filtrations, i.e. there are constants $C_{1}$
and $C_{2}$ such that $\mathcal{F}^{\lambda}W_{\vec{a}}=W_{\vec{a}}$ for all
$\lambda<C_{1}|\vec{a}|$ and $\mathcal{F}^{\lambda}W_{\vec{a}}=0$ for all
$\lambda>C_{2}|\vec{a}|$. Any valuation $v$ of linear growth on $X$ induces a
filtration $\mathcal{F}_{v}$ on $W_{\vec{\bullet}}$ such that
$\mathcal{F}^{\lambda}_{v}W_{\vec{a}}=\\{s\in
W_{\vec{a}}\,|\,v(s)\geq\lambda\\}$.
If we think of $W_{\vec{\bullet}}$ as an
$\mathbb{N}\times\mathbb{N}^{r-1}$-graded linear series and view the first
$\mathbb{N}$-factor as the level grading, then we may define $m$-basis type
$\mathbb{Q}$-divisors of $W_{\vec{\bullet}}$ as a $\mathbb{Q}$-divisor of the
form
$D=\frac{1}{mN_{m}}\sum_{i=1}^{N_{m}}\\{s_{i}=0\\}$
where $s_{1},\cdots,s_{N_{m}}$ enumerate some basis of $W_{m,\vec{a}}$ for all
$\vec{a}\in\mathbb{N}^{r-1}$ (we call it an $m$-basis of $W_{\vec{\bullet}}$)
and $N_{m}=\sum_{\vec{a}}\dim W_{m,\vec{a}}$. Following [AZ-K-adjunction], we
say that $D$ is compatible with a filtration $\mathcal{F}$ on
$W_{\vec{\bullet}}$ if every $\mathcal{F}^{\lambda}W_{m,\vec{a}}$ is spanned
by some of the $s_{i}$. Let
$\Delta(W_{\vec{\bullet}})=\\{\vec{a}\in\mathbb{R}^{r-1}\,|\,(1,\vec{a})\in\mathrm{Supp}(W_{\vec{\bullet}})\\}$.
For any $\vec{a}\in\mathbb{Q}^{r-1}$ in the interior of
$\Delta(W_{\vec{\bullet}})$, we set
$\mathrm{vol}_{W_{\vec{\bullet}}}(\vec{a}):=\lim_{m\to\infty}\frac{\dim
W_{m,m\vec{a}}}{m^{n}/n!}\quad(n=\dim X)$
where the limit is taken over integers $m$ such that
$m\vec{a}\in\mathbb{N}^{r-1}$. By [LM-okounkov-body]*Corollary 4.22, it
extends to a continuous function on the interior of
$\Delta(W_{\vec{\bullet}})$,
which we still denote by $\mathrm{vol}_{W_{\vec{\bullet}}}(\cdot)$. For each
$\vec{a}\in\mathbb{N}^{r}$, we let $M_{\vec{a}}$ (resp. $F_{\vec{a}}$) be the
movable (resp. fixed) part of $W_{\vec{a}}$. We define
$F(W_{\vec{\bullet}}):=\lim_{m\to\infty}F_{m}(W_{\vec{\bullet}})\in\mathrm{Div}(X)_{\mathbb{R}}$,
where
$F_{m}(W_{\vec{\bullet}})=\frac{1}{mN_{m}}\sum_{\vec{a}\in\mathbb{N}^{r-1}}\dim(W_{m,\vec{a}})\cdot
F_{m,\vec{a}}.$
Note that the limit exists by [AZ-K-adjunction]*Lemma-Definition 2.25. As in
_loc. cit._ , we also set
$c_{1}(W_{\vec{\bullet}}):=\lim_{m\to\infty}c_{1}(D_{m})\in{\rm
NS}(X)_{\mathbb{R}}$ (where $D_{m}$ is any $m$-basis type $\mathbb{Q}$-divisor
of $W_{\vec{\bullet}}$) and
$c_{1}(M_{\vec{\bullet}}):=c_{1}(W_{\vec{\bullet}})-F(W_{\vec{\bullet}})$. Many of
the invariants defined in Section 2.2 also generalize to this setting; see
[AZ-K-adjunction]*Section 2.6. For our purposes we recall the following.
1. (1)
The pseudo-effective threshold $T(W_{\vec{\bullet}};\mathcal{F})$ of a
filtration $\mathcal{F}$ on $W_{\vec{\bullet}}$ is defined as
$T(W_{\vec{\bullet}};\mathcal{F}):=\lim_{m\to\infty}\frac{T_{m}(W_{\vec{\bullet}};\mathcal{F})}{m}=\sup_{m\in\mathbb{N}}\frac{T_{m}(W_{\vec{\bullet}};\mathcal{F})}{m},$
where
$T_{m}(W_{\vec{\bullet}};\mathcal{F})=\sup\\{\lambda\in\mathbb{R}\,|\,\mathcal{F}^{\lambda}W_{m,\vec{a}}\neq
0\mbox{ for some }\vec{a}\\}.$
2. (2)
We set
$S(W_{\vec{\bullet}};\mathcal{F}):=\lim_{m\to\infty}S_{m}(W_{\vec{\bullet}};\mathcal{F})$
where
$S_{m}(W_{\vec{\bullet}};\mathcal{F})=\frac{1}{mN_{m}}\sum_{\lambda,\vec{a}}\lambda\cdot\dim\mathrm{Gr}_{\mathcal{F}}^{\lambda}W_{m,\vec{a}}.$
3. (3)
Given a closed subset $Z\subseteq X$, we define
$\delta_{Z}(W_{\vec{\bullet}},\mathcal{F}):=\limsup_{m\to\infty}\delta_{Z,m}(W_{\vec{\bullet}},\mathcal{F}),$
where
$\delta_{Z,m}(W_{\vec{\bullet}},\mathcal{F})=\inf_{D}\mathrm{lct}_{Z}(X,\Delta;D)$
and the infimum runs over all $m$-basis type $\mathbb{Q}$-divisors $D$ of
$W_{\vec{\bullet}}$ that are compatible with $\mathcal{F}$.
Our multi-graded linear series mostly come from the refinement of a complete
linear series.
###### Definition 2.8 ([AZ-K-adjunction]*Example 2.15).
Let $L$ be a big line bundle on $X$ and let $V_{\vec{\bullet}}$ be the
complete linear series associated to $L$, i.e. $V_{m}=H^{0}(X,mL)$. Let
$\pi\colon Y\to X$ be a birational morphism and let $F$ be a Cartier prime
divisor on $Y$. The refinement of $V_{\vec{\bullet}}$ by $F$ is the
$\mathbb{N}^{2}$-graded linear series $W_{\vec{\bullet}}$ associated to
$\pi^{*}L|_{F}$ and $-F|_{F}$ on $F$ given by
$W_{m,j}={\rm Im}(H^{0}(Y,m\pi^{*}L-jF)\to H^{0}(F,m\pi^{*}L|_{F}-jF|_{F})).$
Note that any filtration $\mathcal{F}$ on $V_{\vec{\bullet}}$ naturally
induces a filtration $\bar{\mathcal{F}}$ on $W_{\vec{\bullet}}$, i.e.
$\bar{\mathcal{F}}^{\lambda}W_{m,j}$ is the image of
$\mathcal{F}^{\lambda}V_{m}\cap H^{0}(Y,m\pi^{*}L-jF)$.
###### Lemma 2.9.
In the above notation, we have
$S(V_{\vec{\bullet}};\mathcal{F})=S(W_{\vec{\bullet}};\bar{\mathcal{F}})$.
###### Proof.
It suffices to show that
$S_{m}(V_{\vec{\bullet}};\mathcal{F})=S_{m}(W_{\vec{\bullet}};\bar{\mathcal{F}})$.
By [AZ-K-adjunction]*Lemma 3.1, we may find a basis $s_{1},\dots,s_{N_{m}}$ of
$V_{m}$ that is compatible with both $\mathcal{F}$ and $\mathcal{F}_{F}$, the
filtration induced by $F$. By construction, they restrict to form an $m$-basis
of $W_{\vec{\bullet}}$ that is compatible with $\bar{\mathcal{F}}$. Let
$\lambda_{i}=\sup\\{\lambda\,|\,s_{i}\in\mathcal{F}^{\lambda}V_{m}\\}$. Then
by the definition of $S$-invariants it is easy to see that
$S_{m}(V_{\vec{\bullet}};\mathcal{F})=\frac{1}{mN_{m}}\sum_{i=1}^{N_{m}}\lambda_{i}=S_{m}(W_{\vec{\bullet}};\bar{\mathcal{F}})$.
∎
For computations we often choose refinements that are almost complete [AZ-K-
adjunction]*Definition 2.27.
###### Definition 2.10.
Let $L$ be a big line bundle on $X$ and let $W_{\vec{\bullet}}$ be an
$\mathbb{N}^{r}$-graded linear series. We say that $W_{\vec{\bullet}}$ is
_almost complete_ (with respect to $L$) if for every $\vec{a}\in{\rm
int}(\mathrm{Supp}(W_{\vec{\bullet}}))\cap\mathbb{Q}^{r}$ and all sufficiently
divisible integers $m$ (depending on $\vec{a}$), we have
$|M_{m\vec{a}}|\subseteq|L_{m,\vec{a}}|$ for some
$L_{m,\vec{a}}\equiv\ell_{m,\vec{a}}L$ and some
$\ell_{m,\vec{a}}\in\mathbb{N}$ such that
$\frac{\dim W_{m\vec{a}}}{h^{0}(X,\ell_{m,\vec{a}}L)}=\frac{\dim
M_{m\vec{a}}}{h^{0}(X,\ell_{m,\vec{a}}L)}\to 1$
as $m\to\infty$.
In the surface case, all refinements as in Definition 2.8 are almost complete
by [AZ-K-adjunction]*Lemma 4.10. Another common example is the refinement of
the complete linear series associated to an ample line bundle $L$ by some
integral member $H\in|L|$, see [AZ-K-adjunction]*Example 2.28.
## 3\. Seshadri constants and stability thresholds
In this section, we prove the following statement, which gives a lower bound
on stability thresholds in terms of Seshadri constants on complete
intersection surfaces. This will be a key tool for verifying the K-stability
of Fano varieties in subsequent sections.
###### Theorem 3.1.
Let $X$ be a projective variety of dimension $n\geq 2$ and let $L$ be an ample
line bundle on $X$. Let $x\in X$ be a smooth point and let
$S=H_{1}\cap\cdots\cap H_{n-2}\subseteq X$ be a complete intersection surface
passing through $x$, where each $H_{i}\in|L|$. Assume that $S$ is integral and
is smooth at $x$. Then
$\delta_{x}(L)\geq\frac{n+1}{(L^{n})}\cdot\varepsilon_{x}(L|_{S}).$
When equality holds, we have at least one of the following:
1. (1)
$\varepsilon_{x}(L|_{S})=\tau_{x}(L|_{S})=\sqrt{(L^{n})}=1$, and
$\delta_{x}(L)$ is computed by any $H\in|L\otimes\mathfrak{m}_{x}|$, or
2. (2)
$\varepsilon_{x}(L|_{S})=\tau_{x}(L|_{S})=\sqrt{(L^{n})}>1$, and the center of
any valuation $v$ that computes $\delta_{x}(L)$ has dimension $\dim
C_{X}(v)\geq n-2$, or
3. (3)
$\varepsilon_{x}(L|_{S})\tau_{x}(L|_{S})=(L^{n})$, and every valuation that
computes $\delta_{x}(L)$ is divisorial and induced by a prime divisor
$G\subseteq X$ containing $x$ such that $S\not\subseteq G$ and
$L\equiv\tau_{x}(L|_{S})G$.
As one might expect, the careful analysis of the equality cases in the above
statement will be useful in proving _uniform_ K-stability in several cases.
Note that an upper bound on the log canonical threshold in terms of Seshadri
constants was studied in [Odaka-Sano]. However, the relation in Theorem 3.1
goes in the opposite direction, which offers significantly more flexibility
in estimating the $\delta$-invariant.
The proof of Theorem 3.1 is by induction on the dimension, where the inductive
step is based on [AZ-K-adjunction]*Lemma 4.6. Apart from that, the heart of
the proof is a detailed analysis of the surface case, where we can be even
more precise about the equality cases:
###### Lemma 3.2.
Let $S$ be a surface and let $L$ be an ample line bundle on $S$. Let $x\in S$
be a smooth point. Then
$\delta_{x}(L)\geq\frac{3}{(L^{2})}\cdot\varepsilon_{x}(L),$
and equality holds if and only if
$\varepsilon_{x}(L)=\tau_{x}(L)=\sqrt{(L^{2})}$, or
$\varepsilon_{x}(L)\tau_{x}(L)=(L^{2})$ and there exists a unique irreducible
curve $C\subseteq S$ containing $x$ such that $L\equiv\tau_{x}(L)C$. Moreover,
in the latter case, the curve $C$ is the only divisor that computes
$\delta_{x}(L)$.
The idea to prove the above statement is to consider the refinement
$W_{\vec{\bullet}}$ of the linear series associated to $L$ by the ordinary
blowup of $x$ and then compare the stability thresholds of $L$ and
$W_{\vec{\bullet}}$ using tools from [AZ-K-adjunction]. Using Zariski
decomposition on surfaces, we will estimate the stability threshold
$\delta(W_{\vec{\bullet}})$ in terms of restricted volume functions and reduce
the inequality in Lemma 3.2 to an inequality of the following type.
###### Lemma 3.3.
Let $0<a\leq b$ and let $g(x)$ be a bounded concave function on $[0,b)$ such
that $g(x)=x$ for all $x\in[0,a)$. Then
$3a\int_{0}^{b}(2x-g(x))\cdot g(x)\mathrm{d}x\leq
4\left(\int_{0}^{b}g(x)\mathrm{d}x\right)^{2},$
and equality holds if and only if $a=b$, or $g(x)=h(x)$ for all $x\in[0,b)$,
where
$h(x)=\begin{cases}x&\text{if }0\leq x\leq a,\\\ \frac{a(b-x)}{b-a}&\text{if
}a<x\leq b.\end{cases}$
In particular, when equality holds we have
$\int_{0}^{b}g(x)\mathrm{d}x=\frac{1}{2}ab$.
###### Proof.
It is straightforward to check that equality holds when $a=b$, thus we may
assume that $b>a$. Let $f(x)=g(x+a)-h(x+a)$ and let $c=b-a>0$. Then $f(0)=0$,
$f(x)$ is a bounded concave function on $[0,c)$ and the inequality in the
statement of the lemma is equivalent to
$\begin{split}a^{2}\int_{0}^{c}\left(\frac{6x}{c}-4\right)f(x)\mathrm{d}x&+a\int_{0}^{c}(6x-4c)f(x)\mathrm{d}x\\\
&-3a\int_{0}^{c}f(x)^{2}\mathrm{d}x-4\left(\int_{0}^{c}f(x)\mathrm{d}x\right)^{2}\leq
0\end{split}$
by an elementary calculation. We claim that
$\int_{0}^{c}(3x-2c)f(x)\mathrm{d}x\leq 0,$
which clearly implies the previous inequality as well as the equality
condition $f(x)\equiv 0$. To prove the claim, consider
$F(t)=\int_{0}^{t}(3x-2t)f(x)\mathrm{d}x$ as a function of $t\in[0,c]$. Then
$F(0)=0$ and $F^{\prime}(t)=t\cdot f(t)-2\int_{0}^{t}f(x)\mathrm{d}x\leq
t\cdot f(t)-2\int_{0}^{t}\frac{x}{t}f(t)\mathrm{d}x=0$ ($\forall t\in(0,c)$)
by the concavity of $f(x)$. Thus $F(c)\leq 0$ and we are done. ∎
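Since the extremal profile $h$ reappears in the equality analysis, a numerical sanity check of Lemma 3.3 may be reassuring. The sketch below (plain Python, our own illustration, not part of the proof) integrates both sides of the inequality by the trapezoid rule: for $h$ equality should hold with common value $a^{2}b^{2}$, while for the truncated concave profile $g(x)=\min(x,a)$ the inequality should be strict.

```python
# Numerical check of the inequality in Lemma 3.3:
#   3a * \int_0^b (2x - g(x)) g(x) dx  <=  4 ( \int_0^b g(x) dx )^2
# for a bounded concave g with g(x) = x on [0, a).

def integrate(f, lo, hi, n=100_000):
    """Composite trapezoid rule for f on [lo, hi]."""
    step = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        total += f(lo + i * step)
    return total * step

def lemma_sides(g, a, b):
    """Return (LHS, RHS) of the Lemma 3.3 inequality for the profile g."""
    lhs = 3 * a * integrate(lambda x: (2 * x - g(x)) * g(x), 0.0, b)
    rhs = 4 * integrate(g, 0.0, b) ** 2
    return lhs, rhs

a, b = 1.0, 2.5

def h(x):
    """Extremal profile from the lemma: equality, common value a^2 b^2."""
    return x if x <= a else a * (b - x) / (b - a)

def g_trunc(x):
    """Concave but non-extremal profile: strict inequality expected."""
    return min(x, a)

lhs_h, rhs_h = lemma_sides(h, a, b)
lhs_t, rhs_t = lemma_sides(g_trunc, a, b)
print(lhs_h, rhs_h)  # both close to a^2 b^2 = 6.25
print(lhs_t, rhs_t)  # strictly smaller LHS
```

For the extremal profile one also finds $\int_{0}^{b}h=ab/2$ numerically, consistent with the last assertion of the lemma.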
To further analyze the equality case in Theorem 3.1 and Lemma 3.2, we need two
more auxiliary results.
###### Lemma 3.4.
Let $a>0$ and let $g(x)$ be a nonnegative bounded concave function on $[0,a)$
such that $g(0)>0$. Let $n>0$ be an integer. Then
$g(0)^{n-1}\int_{0}^{a}x\cdot
g(x)^{n-1}\mathrm{d}x\leq\frac{n}{n+1}\left(\int_{0}^{a}g(x)^{n-1}\mathrm{d}x\right)^{2},$
with equality if and only if $n=1$ or $g(x)=(1-\frac{x}{a})g(0)$.
###### Proof.
The result is clear when $n=1$, so we may assume that $n\geq 2$. Up to
rescaling, we may also assume that $g(0)=1$. For each $b>0$, let
$f_{b}(x)=1-\frac{x}{b}$ $(0\leq x\leq b)$. Since $g(x)$ is nonnegative and
concave, we have $g(x)\geq f_{a}(x)$ for all $x\in[0,a]$ and thus
$\int_{0}^{a}g(x)^{n-1}\mathrm{d}x\geq\int_{0}^{a}f_{a}(x)^{n-1}\mathrm{d}x$.
As $\lim_{b\to\infty}\int_{0}^{b}f_{b}(x)^{n-1}\mathrm{d}x=\infty$, by the
intermediate value theorem there exists some $b\geq a$ such that
(3.1)
$\int_{0}^{a}g(x)^{n-1}\mathrm{d}x=\int_{0}^{b}f_{b}(x)^{n-1}\mathrm{d}x.$
It is easy to check that $\int_{0}^{b}x\cdot
f_{b}(x)^{n-1}\mathrm{d}x=\frac{n}{n+1}\left(\int_{0}^{b}f_{b}(x)^{n-1}\mathrm{d}x\right)^{2}$,
hence it suffices to show
(3.2) $\int_{0}^{a}x\cdot g(x)^{n-1}\mathrm{d}x\leq\int_{0}^{b}x\cdot
f_{b}(x)^{n-1}\mathrm{d}x.$
For ease of notation, set $g(x)=0$ when $a<x\leq b$ and set
$h(x)=f_{b}(x)^{n-1}-g(x)^{n-1}$. Since $g(x)$ is concave on $[0,a]$ and
$f_{b}(x)$ is linear, there exists some $c\leq a$ such that $h(x)\leq 0$ for
all $x\in[0,c]$ and $h(x)>0$ for all $x\in(c,b)$. Note that $c>0$ by (3.1). We
then have
$\int_{0}^{b}xh(x)\mathrm{d}x=\int_{0}^{c}xh(x)\mathrm{d}x+\int_{c}^{b}xh(x)\mathrm{d}x\geq
c\int_{0}^{c}h(x)\mathrm{d}x+c\int_{c}^{b}h(x)\mathrm{d}x=0,$
where the last equality follows from (3.1). This proves (3.2). When equality
holds, we have $h(x)=0$, thus $b=a$ and $g(x)=1-\frac{x}{a}$. ∎
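For completeness, the identity for $f_{b}$ invoked in the proof can be verified by the substitution $u=1-x/b$:

```latex
\[
\int_{0}^{b}f_{b}(x)^{n-1}\,\mathrm{d}x
=b\int_{0}^{1}u^{n-1}\,\mathrm{d}u=\frac{b}{n},
\]
\[
\int_{0}^{b}x\,f_{b}(x)^{n-1}\,\mathrm{d}x
=b^{2}\int_{0}^{1}(1-u)u^{n-1}\,\mathrm{d}u
=b^{2}\Big(\frac{1}{n}-\frac{1}{n+1}\Big)
=\frac{b^{2}}{n(n+1)}
=\frac{n}{n+1}\Big(\frac{b}{n}\Big)^{2}.
\]
```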
###### Lemma 3.5.
Let $L$ be an ample line bundle on a variety $X$ of dimension $n$. Let
$G\subseteq X$ be a prime divisor on $X$. Then
$S(L;G)\leq\frac{(L^{n})}{(n+1)(L^{n-1}\cdot G)},$
with equality if and only if $L\equiv aG$ for some $a>0$.
###### Proof.
The result is clear when $n=1$, so we assume that $n\geq 2$. Let $\pi\colon
Y\to X$ be a log resolution such that the strict transform
$\widetilde{G}=\pi^{-1}_{*}G$ of $G$ is smooth. Let $a=T(L;G)$. By Lemmas 2.5
and 2.6, we have
$S(L;G)=\frac{n}{(L^{n})}\int_{0}^{a}x\cdot\mathrm{vol}_{Y|\widetilde{G}}(\pi^{*}L-x\widetilde{G})\mathrm{d}x,$
(3.3)
$(L^{n})=n\int_{0}^{a}\mathrm{vol}_{Y|\widetilde{G}}(\pi^{*}L-x\widetilde{G})\mathrm{d}x.$
Since $L$ is ample, by [ELMNP]*Lemma 2.4 we also have
$\mathrm{vol}_{Y|\widetilde{G}}(\pi^{*}L)=\mathrm{vol}_{X|G}(L)=(L^{n-1}\cdot
G)$, hence the inequality follows directly from Lemma 3.4 applied to
$g(x)=\mathrm{vol}_{Y|\widetilde{G}}(\pi^{*}L-x\widetilde{G})^{\frac{1}{n-1}}$,
which is concave by [ELMNP]*Theorem A. Suppose that equality holds, then by
Lemma 3.4 we have $g(x)=(1-\frac{x}{a})g(0)$, i.e.
$\mathrm{vol}_{Y|\widetilde{G}}(\pi^{*}L-x\widetilde{G})=\left(1-\frac{x}{a}\right)^{n-1}(L^{n-1}\cdot
G)$
for all $0\leq x<a$. A direct calculation through (3.3) then yields
$(L^{n})=a(L^{n-1}\cdot G)$, or $(L^{n-1}\cdot L-aG)=0$. It follows that
$L-aG\equiv 0$ as $L-aG$ is pseudo-effective and $L$ is ample. Clearly
$S(L;G)=\frac{a}{n+1}$ if $L\equiv aG$. This finishes the proof. ∎
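The equality case of Lemma 3.5 can be seen concretely on projective space (our own example): for $X=\mathbb{P}^{n}$, $L=\mathcal{O}(1)$ and $G=H$ a hyperplane, we have $L\equiv H$, $T(L;H)=1$ and

```latex
\[
S(L;H)=\frac{1}{(L^{n})}\int_{0}^{1}\mathrm{vol}(L-tH)\,\mathrm{d}t
=\int_{0}^{1}(1-t)^{n}\,\mathrm{d}t=\frac{1}{n+1}
=\frac{(L^{n})}{(n+1)(L^{n-1}\cdot H)},
\]
% so the bound of Lemma 3.5 is attained, consistent with L = 1*H.
```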
We are ready to present the proofs of Lemma 3.2 and Theorem 3.1.
###### Proof of Lemma 3.2.
Let $\pi\colon T\to S$ be the ordinary blowup at $x$ with exceptional divisor
$E\cong\mathbb{P}^{1}$. Let $V_{\vec{\bullet}}$ be the complete linear series
associated to $L$ and let $W_{\vec{\bullet}}$ be its refinement by $E$. Note
that $\delta_{x}(L)=\delta_{x}(V_{\vec{\bullet}})$. Let
$\lambda=\frac{3}{(L^{2})}\cdot\varepsilon_{x}(L)$, let
$\varepsilon=\varepsilon_{x}(L)$ and let $\tau=\tau_{x}(L)=T(L;E)$. Since
$W_{\vec{\bullet}}$ is almost complete by [AZ-K-adjunction]*Lemma 4.10,
applying [AZ-K-adjunction]*Corollary 3.4 we know that
$\delta_{x}(V_{\vec{\bullet}})\geq\lambda$ as long as
(3.4) $\lambda\leq\frac{A_{S}(E)}{S(L;E)}$
and $\delta(E,\lambda\cdot
F(W_{\vec{\bullet}});c_{1}(M_{\vec{\bullet}}))\geq\lambda$ holds, where
$M_{\vec{\bullet}}$ is the movable part of $W_{\vec{\bullet}}$. By the
definition of stability thresholds, the latter inequality is equivalent to
saying
(3.5) $\lambda\cdot
S(c_{1}(M_{\vec{\bullet}});P)+\lambda\cdot\mathrm{mult}_{P}F(W_{\vec{\bullet}})\leq
1$
for every closed point $P\in E$. Let us verify that both conditions (3.4) and
(3.5) hold in our situation. First, we have
$F(W_{\vec{\bullet}})=\frac{2}{(L^{2})}\int_{0}^{\tau}\left(\mathrm{vol}_{T|E}(\pi^{*}L-xE)\cdot
N_{\sigma}(\pi^{*}L-xE)|_{E}\right)\mathrm{d}x$
by [AZ-K-adjunction]*Lemma 4.13, and
$\mathrm{vol}_{T|E}(\pi^{*}L-xE)=\left(P_{\sigma}(\pi^{*}L-xE)\cdot E\right)$
by [ELMNP]*Corollary 2.17 and Example 2.19, where $P_{\sigma}(\cdot)$ (resp.
$N_{\sigma}(\cdot)$) denotes the nef (resp. negative) part in the Zariski
decomposition of a $($pseudo-effective$)$ divisor. In particular, letting
$g(x)=\mathrm{vol}_{T|E}(\pi^{*}L-xE)$ ($0\leq x<\tau$), we have
$\left(N_{\sigma}(\pi^{*}L-xE)\cdot E\right)=((\pi^{*}L-xE)\cdot
E)-\left(P_{\sigma}(\pi^{*}L-xE)\cdot E\right)=x-g(x).$
By the definition of Seshadri constant, we also have $g(x)=x$ for all $0\leq
x\leq\varepsilon$. Therefore as $g(x)$ is concave by [ELMNP]*Theorem A, Lemma
2.6 and Lemma 3.3 yield
$\displaystyle S(L;E)+\deg F(W_{\vec{\bullet}})$
$\displaystyle=\frac{2}{(L^{2})}\int_{0}^{\tau}x\cdot g(x)\
\mathrm{d}x+\frac{2}{(L^{2})}\int_{0}^{\tau}(x-g(x))\cdot g(x)\mathrm{d}x$
$\displaystyle\leq\frac{2}{(L^{2})}\cdot\frac{4\left(\int_{0}^{\tau}g(x)\mathrm{d}x\right)^{2}}{3\varepsilon}=\frac{2(L^{2})}{3\varepsilon}=\frac{2}{\lambda}.$
It follows that
(3.6) $\lambda\leq\frac{2}{S(L;E)+\deg
F(W_{\vec{\bullet}})}\leq\frac{2}{S(L;E)}=\frac{A_{S}(E)}{S(L;E)}$
which verifies (3.4). Since $E\cong\mathbb{P}^{1}$ is a curve, we have
$S(c_{1}(M_{\vec{\bullet}});P)=\frac{1}{2}\deg c_{1}(M_{\vec{\bullet}})$ for
any closed point $P\in E$. By [AZ-K-adjunction]*(3.1), we also have
$\deg\left(c_{1}(M_{\vec{\bullet}})+F(W_{\vec{\bullet}})\right)=\deg
c_{1}(W_{\vec{\bullet}})=S(L;E)$. Thus we obtain
$\displaystyle\lambda\cdot
S(c_{1}(M_{\vec{\bullet}});P)+\lambda\cdot\mathrm{mult}_{P}F(W_{\vec{\bullet}})$
$\displaystyle\leq\lambda\cdot\deg\left(\frac{1}{2}c_{1}(M_{\vec{\bullet}})+F(W_{\vec{\bullet}})\right)$
$\displaystyle=\frac{\lambda}{2}\cdot(S(L;E)+\deg F(W_{\vec{\bullet}}))\leq 1$
for any closed point $P\in E$, which verifies (3.5). Hence according to the
discussions at the beginning of the proof, [AZ-K-adjunction]*Corollary 3.4
implies that $\delta_{x}(L)=\delta_{x}(V_{\vec{\bullet}})\geq\lambda$ as
desired.
It remains to prove the equality conditions. It is straightforward to check
that $\frac{A_{S}(E)}{S(L;E)}=\lambda$ (resp.
$\frac{A_{S}(C)}{S(L;C)}=\lambda$) when $\varepsilon=\tau=\sqrt{(L^{2})}$
(resp. $\varepsilon\tau=(L^{2})$ and there exists some curve $C\subseteq S$
containing $x$ such that $L\equiv\tau C$), hence $\delta_{x}(L)=\lambda$ in
either case. Conversely, assume that $\delta_{x}(L)=\lambda$. If
$\delta_{x}(L)$ is computed by $E$, then by (3.6) we have
$F(W_{\vec{\bullet}})=0$, hence $N_{\sigma}(\pi^{*}L-xE)=0$ and $\pi^{*}L-xE$
is nef for all $0\leq x<\tau$. It follows that $\varepsilon=\tau$. Since
$\varepsilon\leq\sqrt{(L^{2})}\leq\tau$, we must have
$\varepsilon=\tau=\sqrt{(L^{2})}$ as desired. On the other hand, if $E$ does
not compute $\delta_{x}(L)$, then $\varepsilon<\tau$ and by the equality
description in Lemma 3.3 we have
$(L^{2})=2\int_{0}^{\tau}g(x)\mathrm{d}x=\varepsilon\tau$. By [BJ-
delta]*Theorem E and [AZ-K-adjunction]*Corollary 3.4, we also see that
$\delta_{x}(L)$ is computed by some valuation $v$ such that
$C_{S}(v)\neq\\{x\\}$. Thus the center of $v$ is a curve $C\subseteq S$; in
particular, $v$ is divisorial and the curve $C$ also computes $\delta_{x}(L)$,
i.e. $S(L;C)=\frac{1}{\lambda}$. By Lemma 3.5 and the definition of Seshadri
constant, we deduce
$\varepsilon\leq\frac{(L\cdot C)}{\mathrm{mult}_{x}C}\leq(L\cdot
C)\leq\frac{(L^{2})}{3\cdot
S(L;C)}=\frac{\lambda\cdot(L^{2})}{3}=\varepsilon,$
thus equality holds everywhere. In particular, $L\equiv aC$ and
$S(L;C)=\frac{a}{3}$ for some $a>0$ by Lemma 3.5. But since
$S(L;C)=\frac{1}{\lambda}=\frac{(L^{2})}{3\varepsilon}=\frac{\varepsilon\tau}{3\varepsilon}=\frac{\tau}{3}$,
we must have $a=\tau$. Since there can be at most one irreducible curve
$C\subseteq S$ containing $x$ such that $L\equiv\tau C$ (otherwise it follows
from the definition of Seshadri constants that $\varepsilon\geq\tau$), this
concludes the proof of the equality cases. ∎
###### Corollary 3.6.
Let $S$ be a smooth surface of Picard number one and let $L$ be an ample line
bundle on $S$. Let $x\in S$ be a closed point. Then
$\delta_{x}(L)\geq\frac{3}{\tau_{x}(L)}$ and equality holds if and only if
$\varepsilon_{x}(L)=\tau_{x}(L)$ or $L\sim_{\mathbb{Q}}\tau_{x}(L)C$ for some
irreducible curve $C\subseteq S$ passing through $x$.
###### Proof.
This is immediate from Lemma 3.2 and Lemma 2.4. ∎
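For example (our own sanity check), on $S=\mathbb{P}^{2}$ with $L=\mathcal{O}(1)$ we have $\varepsilon_{x}(L)=\tau_{x}(L)=1$, so the corollary gives $\delta_{x}(L)=3$ for every $x$:

```latex
% Since S(aL; v) = a S(L; v) for a > 0, the delta-invariant scales as
\[
\delta_{x}(\mathcal{O}(1))=3\,\delta_{x}(\mathcal{O}(3))
=3\,\delta_{x}(-K_{\mathbb{P}^{2}}),
\]
% and delta(-K_{P^2}) = 1 (P^2 is K-semistable), which matches
% delta(O(1)) = inf_x delta_x(O(1)) = 3.
```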
###### Remark 3.7.
The corollary is false without the Picard number one assumption. Consider for
example $S=\mathbb{P}^{1}\times\mathbb{P}^{1}$ and take $L$ to be the line
bundle of bi-degree $(a,b)$ with $0<a\leq b$. Then by [Z-product]*Theorem 1.2
we know that $\delta(L)=\frac{2}{b}$. On the other hand it is not hard to see
that $\tau_{x}(L)=a+b$ for all $x\in S$ and therefore
$\delta_{x}(L)\geq\frac{3}{\tau_{x}(L)}$ only when $b\leq 2a$.
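The numerology of the remark is elementary to verify: the inequality $\frac{2}{b}\geq\frac{3}{a+b}$ reduces to $2(a+b)\geq 3b$, i.e. $b\leq 2a$. An exact-arithmetic check over a range of bidegrees (illustrative):

```python
from fractions import Fraction

# On S = P^1 x P^1 with L of bidegree (a, b), 0 < a <= b:
# delta(L) = 2/b and tau_x(L) = a + b, so delta(L) >= 3/tau_x(L) iff b <= 2a.
for a in range(1, 30):
    for b in range(a, 80):
        assert (Fraction(2, b) >= Fraction(3, a + b)) == (b <= 2 * a)
```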
###### Proof of Theorem 3.1.
We prove the theorem by induction on the dimension $n$. When $n=2$, the only
part that is
not covered by Lemma 3.2 is the assertion that $\delta_{x}(L)$ is computed by
any $H\in|L|$ when $\varepsilon_{x}(L)=1=(L^{2})$ and $\delta_{x}(L)=3$.
However, this follows immediately from the fact that
$\frac{A_{X}(H)}{S(L;H)}=3$ for any $H\in|L|$ (by Lemma 3.5). When $n\geq 3$,
we have
$\delta_{x}(L|_{H_{1}})\geq\frac{n}{(L^{n-1}\cdot
H_{1})}\varepsilon_{x}(L|_{S})=\frac{n}{(L^{n})}\varepsilon_{x}(L|_{S})$
by induction hypothesis. By [AZ-K-adjunction]*Lemma 4.6, we then have
$\delta_{x}(L)\geq\min\left\\{n+1,\frac{n+1}{n}\delta_{x}(L|_{H_{1}})\right\\}\geq\min\left\\{n+1,\frac{n+1}{(L^{n})}\cdot\varepsilon_{x}(L|_{S})\right\\}.$
Since
$\varepsilon_{x}(L|_{S})\leq\sqrt{(L|_{S}^{2})}\leq(L|_{S}^{2})=(L^{n})$, we
obtain
$\delta_{x}(L)\geq\frac{n+1}{(L^{n})}\cdot\varepsilon_{x}(L|_{S}).$
Suppose that equality holds. Let $v$ be any valuation that computes
$\delta_{x}(L)$. By [AZ-K-adjunction]*Lemma 4.6 and the above discussion, we
have either $\delta_{x}(L)=n+1$ and $\varepsilon_{x}(L|_{S})=(L^{n})=1$, or
$\delta_{x}(L|_{H_{1}})=\frac{n}{(L^{n})}\cdot\varepsilon_{x}(L|_{S})$. In the
former case, we also have $\tau_{x}(L|_{S})=\varepsilon_{x}(L|_{S})=1$ since
$\varepsilon_{x}(L|_{S})=\sqrt{(L|_{S}^{2})}$. By the same argument as in the
$n=2$ case, we also know that in this case $\delta_{x}(L)$ is computed by any
$H\in|L|$. In the latter case, by [AZ-K-adjunction]*Lemma 4.6 we also know
that $C_{X}(v)\not\subseteq H_{1}$ and that for every irreducible component
$Z$ of $C_{X}(v)\cap H_{1}$ containing $x$, there exists a valuation $v_{1}$
on $H_{1}$ with center $Z$ that computes $\delta_{x}(L|_{H_{1}})$. By
induction hypothesis, either
$\varepsilon_{x}(L|_{S})=\tau_{x}(L|_{S})=\sqrt{(L^{n})}>1$ and $\dim
C_{H_{1}}(v_{1})\geq n-3$, in which case $\dim C_{X}(v)=\dim
C_{H_{1}}(v_{1})+1\geq n-2$; or $\varepsilon_{x}(L|_{S})\tau_{x}(L|_{S})=(L^{n})$
and the center of $v_{1}$ on $H_{1}$ is a prime divisor that does not contain
$S$. Suppose that we are in the last case. Then $G=C_{X}(v)$ is also a prime
divisor that does not contain $S$. Since $v$ computes $\delta_{x}(L)$, we have
$\frac{1}{S(L;G)}=\delta_{x}(L)=\frac{n+1}{(L^{n})}\cdot\varepsilon_{x}(L|_{S})$.
As in the proof of Lemma 3.2, we then obtain
$\begin{split}\varepsilon_{x}(L|_{S})\leq\varepsilon_{x}(L|_{S})\cdot\mathrm{mult}_{x}(G|_{S})&\leq(L|_{S}\cdot
G|_{S})=(L^{n-1}\cdot G)\\\
&\leq\frac{(L^{n})}{(n+1)S(L;G)}=\varepsilon_{x}(L|_{S}).\end{split}$
Hence equality holds everywhere and $L\equiv\tau_{x}(L|_{S})G$ by Lemma 3.5 as
in the proof of Lemma 3.2. This completes the proof. ∎
## 4\. Hypersurfaces
As a first application of Theorem 3.1, in this section we prove the uniform
K-stability of the following hypersurfaces.
###### Theorem 4.1.
Let $X\subseteq\mathbb{P}^{n+1}$ be a smooth Fano hypersurface of Fano index
$r\geq 3$ and dimension $n\geq r^{3}$. Then $X$ is uniformly K-stable.
The main difficulty we need to overcome in order to apply Theorem 3.1 in this
situation is that the Seshadri constants on complete intersection surfaces are
not always large enough. For example, for any effective $\mathbb{Q}$-divisor
$D\sim_{\mathbb{Q}}L$ (where $L$ is the hyperplane class) and any general
complete intersection surface $S$ passing through some fixed $x$ we have
$\varepsilon_{x}(L_{S})\leq\frac{(L_{S}^{2})}{\mathrm{mult}_{x}(D\cap
S)}=\frac{(L^{n})}{\mathrm{mult}_{x}D}$. If there exists some $D$ such that
$\mathrm{mult}_{x}D$ is relatively large (more precisely, if
$\tau_{x}(L)>\frac{n+1}{r}$), then we will not be able to derive
$\delta_{x}(X)\geq 1$ directly through Theorem 3.1. Thus we need to analyze
these “bad” loci. This is done in the next two lemmas. In particular, it turns
out that the “bad” locus corresponds exactly to points that support divisors
of high multiplicities.
###### Lemma 4.2.
Let $X\subseteq\mathbb{P}^{N}$ be a smooth variety of dimension $n\geq 4$,
Picard number one and degree $d$. Let $x\in X$ be a closed point and let $L$
be the hyperplane class. Assume that for some constant $c>\sqrt{d}$ and for
any general hyperplane section $Y\subseteq X$ containing $x$ we have
$\tau_{x}(L_{Y})\geq c$. Then $\tau_{x}(L)\geq c$.
###### Proof.
For each $t\in\mathbb{P}H^{0}(X,\mathcal{O}_{X}(1)\otimes\mathfrak{m}_{x})$,
let $Y_{t}\subseteq X$ be the corresponding hyperplane section containing
$x$. When $t$ is general, $Y_{t}$ is smooth by Bertini theorem and has Picard
number one by Lefschetz theorem. Since $L$ is very ample, we have
$\eta_{x}(L_{Y_{t}})\leq\sqrt{d}$ (see the remark before Lemma 2.3). It then
follows from Lemma 2.3 and the assumption that there exists a unique
irreducible $\mathbb{Q}$-divisor $0\leq D_{t}\sim_{\mathbb{Q}}L_{Y_{t}}$ on
$Y_{t}$ such that $\mathrm{mult}_{x}D_{t}\geq c$. By a standard Hilbert scheme
argument, we may also assume that $mD_{t}$ is integral for some fixed integer
$m>0$.
We first treat a special case. Suppose that a general $D_{t}$ is covered by
lines passing through $x$. Let $Z\subseteq X$ be the union of all lines
passing through $x$. Then $\mathrm{Supp}(D_{t})\subseteq Z\cap Y_{t}$ and
hence $Z$ has codimension at most one. Note that $Z\neq X$ since otherwise $X$
is a cone over its hyperplane section, but as $X$ is smooth it must be a
linear subspace and it is easy to see that the assumption of the lemma is not
satisfied. If $Z_{1},\cdots,Z_{k}\subseteq Z$ are the irreducible components
of codimension one in $X$, then as $\dim Z_{i}\geq 3$, its image under the
projection from $x$ has dimension at least $2$, hence $Z_{i}\cap Y_{t}$ is
irreducible for general $t$ by Bertini theorem. Since $D_{t}$ is also
irreducible and is swept out by lines containing $x$, we deduce that
$\mathrm{Supp}(D_{t})=Z_{i}\cap Y_{t}$ for some $1\leq i\leq k$. As $X$ has
Picard number one, there exists some $\lambda_{i}>0$ such that
$D=\lambda_{i}Z_{i}\sim_{\mathbb{Q}}L$. By comparing degrees, we then have
$D_{t}=D|_{Y_{t}}$. Since $Y_{t}$ is a general hyperplane section, we also
have $\mathrm{mult}_{x}D=\mathrm{mult}_{x}D_{t}\geq c$. This proves the lemma
in this special case.
In the sequel, we may assume that $D_{t}$ is not covered by lines containing
$x$. In particular, the projection from $x$ defines a generically finite
rational map on $D_{t}$. Since $\dim D_{t}\geq 2$, we see that $D_{t}\cap
Y_{s}$ is irreducible for general
$s,t\in\mathbb{P}H^{0}(X,\mathcal{O}_{X}(1)\otimes\mathfrak{m}_{x})$ by
Bertini theorem. Note that each $D_{t}$ is also a codimension two cycle on
$X$. If there exists some general $s\neq t$ such that $D_{s}\cap D_{t}$ has
codimension $4$ (here we need $n\geq 4$ to ensure that $D_{s}\cap D_{t}$ is
nonempty), then we get
$d=\deg(D_{s}\cdot
D_{t})\geq\mathrm{mult}_{x}D_{s}\cdot\mathrm{mult}_{x}D_{t}\geq c^{2}>d,$
a contradiction. Thus $D_{s}\cap D_{t}$ contains a divisor on both $D_{s}$ and
$D_{t}$. Clearly this divisor is contained in $Y_{s}\cap D_{t}$, which is
irreducible for general $s,t$. It follows that
$\mathrm{Supp}(Y_{s}\cap D_{t})\subseteq\mathrm{Supp}(D_{s}\cap
D_{t})\subseteq\mathrm{Supp}(D_{s}).$
Now consider a general pencil
$\ell\in\mathbb{P}H^{0}(X,\mathcal{O}_{X}(1)\otimes\mathfrak{m}_{x})$ and let
$G\subseteq X$ be the divisor swept out by $\mathrm{Supp}(D_{t})$ for general
$t\in\ell$. In other words, $G$ is the image of the universal divisor
$\mathcal{D}\subseteq\mathcal{Y}$ under the natural evaluation map ${\rm
ev}\colon\mathcal{Y}\to X$, where $\mathcal{Y}\to\ell$ is the corresponding
family of hyperplane sections. Since $D_{t}$ is irreducible for general $t$, we
see that $\mathcal{D}$ and $G$ are both irreducible. Since $X$ has Picard
number one, we have $G\sim_{\mathbb{Q}}rL$ for some $r\in\mathbb{Q}$. Let
$D=\frac{1}{r}G$. We claim that $\mathrm{mult}_{x}D\geq c$. Indeed, for
general $t\in\ell$ and
$s\in\mathbb{P}H^{0}(X,\mathcal{O}_{X}(1)\otimes\mathfrak{m}_{x})$, we know
that $G\cap Y_{s}$ is irreducible by Bertini theorem as before and
$\mathrm{Supp}(Y_{s}\cap D_{t})\subseteq D_{s}$ by the previous steps. As $t$
varies, the locus $\mathrm{Supp}(Y_{s}\cap D_{t})$ sweeps out a divisor on
$Y_{s}$, which is necessarily contained in both $D_{s}$ and $G\cap Y_{s}$.
Since $D_{s}$ and $G\cap Y_{s}$ are both irreducible, we deduce that they are
proportional to each other. By comparing degrees, we see that
$D_{s}=D|_{Y_{s}}$. As $Y_{s}$ is a general hyperplane section, this implies
$\mathrm{mult}_{x}D=\mathrm{mult}_{x}D_{s}\geq c$ and finishes the proof. ∎
###### Lemma 4.3.
Let $X\subseteq\mathbb{P}^{N}$ be a smooth threefold of Picard number one and
degree $d$. Let $L$ be the hyperplane class and let $x\in X$ be a closed point
that is contained in at most finitely many lines on $X$. Assume that a very
general hyperplane section $Y\subseteq X$ containing $x$ has Picard number one
and for some constant $c>\sqrt[3]{d^{2}}$, we have $\tau_{x}(L_{Y})\geq c$.
Then $\tau_{x}(L)\geq c$.
###### Proof.
Let $Y_{t}\subseteq X$ be the hyperplane section corresponding to
$t\in\mathbb{P}H^{0}(X,\mathcal{O}_{X}(1)\otimes\mathfrak{m}_{x}).$
As in the proof of Lemma 4.2, when $t$ is very general, there exists a unique
irreducible $\mathbb{Q}$-divisor $0\leq D_{t}\sim_{\mathbb{Q}}L|_{Y_{t}}$ such
that $\mathrm{mult}_{x}D_{t}\geq c$. Note that $D_{t}$ is a curve on $X$.
Since $(L^{3})=d>\left(\frac{d}{c}\right)^{3}$, there exists a
$\mathbb{Q}$-divisor $0\leq D\sim_{\mathbb{Q}}L$ on $X$ such that
$\mathrm{mult}_{x}D>\frac{d}{c}$. Since $X$ has Picard number one, we may
further assume as before that $D$ is irreducible. Since
$(D\cdot D_{t})=d<\mathrm{mult}_{x}D\cdot\mathrm{mult}_{x}D_{t},$
we see that $\mathrm{Supp}(D_{t})\subseteq\mathrm{Supp}(D)$. By assumption,
the projection from $x$ defines a generically finite rational map on $D$,
hence $D\cap Y_{t}$ is irreducible by Bertini theorem and $D_{t}=D|_{Y_{t}}$
as in the proof of Lemma 4.2. As before this implies
$\mathrm{mult}_{x}D=\mathrm{mult}_{x}D_{t}\geq c$. ∎
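The two numeric facts driving this proof can be spot-checked directly (an illustrative parameter sweep, not part of the argument):

```python
# (1) c > d^(2/3) implies (d/c)^3 < d = (L^3), so some divisor D ~ L with
#     mult_x(D) > d/c exists;
# (2) then mult_x(D) * mult_x(D_t) > (d/c) * c = d = (D . D_t), which
#     forces Supp(D_t) to lie inside Supp(D).
for d in range(1, 500):
    c = 1.01 * d ** (2 / 3)   # any c > d^(2/3); the factor is illustrative
    assert (d / c) ** 3 < d
    mult_D = 1.001 * d / c    # a value with mult_x(D) > d/c
    mult_Dt = c               # mult_x(D_t) >= c
    assert mult_D * mult_Dt > d
```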
We now prove Theorem 4.1. The key point is that, while the “bad” locus is in
general non-empty, it consists of at most a countable number of points. This
allows us to use the K-stability criterion [AZ-K-adjunction]*Lemma 4.23 from
our previous work, after which Theorem 3.1 applies to conclude the proof.
###### Proof of Theorem 4.1.
By [AZ-K-adjunction]*Lemmas 4.23 and 4.25, it suffices to show that
$\delta_{Z}(X)\geq\frac{n+1}{n}$ for any subvariety $Z\subseteq X$ of
dimension $\geq 1$. Let $L$ be the hyperplane class and let
$d:=(L^{n})=n+2-r$. By assumption $n\geq d$ and $d\geq 26$. We first show that
$\tau_{x}(L)\leq\sqrt{d}+1$ for a very general point $x\in Z$. Suppose not.
Then by Lemma 2.3, for each very general $x\in Z$ there exists a unique
irreducible $\mathbb{Q}$-divisor $0\leq D_{x}\sim_{\mathbb{Q}}L$ such that
$\mathrm{mult}_{x}D_{x}>\sqrt{d}+1$ (it is easy to see that
$\eta_{x}(L)\leq\sqrt{d}$). By a
standard Hilbert scheme argument, we can find an irreducible and reduced
divisor $G\subseteq X\times U$ (where $U\subseteq Z$ is an affine open subset)
and an integer $m>0$ such that $G\in|\mathrm{pr}_{1}^{*}\mathcal{O}_{X}(mL)|$
and $G|_{X\times\\{x\\}}=mD_{x}$ for very general $x\in Z$. In particular, by
[EKL-Seshadri]*Lemma 2.1 we have
$\mathrm{mult}_{W}G=\mathrm{mult}_{x}(mD_{x})>m(\sqrt{d}+1)$, where
$W\subseteq X\times U$ is the graph of the embedding $U\hookrightarrow X$. By
[EKL-Seshadri]*Proposition 2.3, there exists a divisor
$G^{\prime}\in|\mathrm{pr}_{1}^{*}\mathcal{O}_{X}(mL)|$ such that
$G\not\subseteq\mathrm{Supp}(G^{\prime})$ and
$\mathrm{mult}_{W}G^{\prime}>m(\sqrt{d}+1)-1\geq m\sqrt{d}$. Let
$D^{\prime}_{x}=\frac{1}{m}G^{\prime}|_{X\times\\{x\\}}\sim_{\mathbb{Q}}L$ for
general $x\in Z$, then $D_{x}$ and $D^{\prime}_{x}$ have no common components
and $\mathrm{mult}_{x}D^{\prime}_{x}>\sqrt{d}$. It follows that
$d=\deg(D_{x}\cdot
D^{\prime}_{x})\geq\mathrm{mult}_{x}D_{x}\cdot\mathrm{mult}_{x}D^{\prime}_{x}>(\sqrt{d}+1)\sqrt{d}>d,$
a contradiction. Hence $\tau_{x}(L)\leq\sqrt{d}+1$ for a very general point
$x\in Z$.
Fix some $x\in Z$ with $\tau_{x}(L)\leq\sqrt{d}+1$ and let $Y\subseteq X$ be a
general linear subspace section of dimension $3$ that passes through $x$. By
Lefschetz theorem, we know that $Y$ has Picard number one. Let
$S\in|H^{0}(Y,\mathcal{O}_{Y}(2L)\otimes\mathfrak{m}_{x})|$ be a very general
member. By [DGF-Noether-Lefschetz]*Theorem 1.1, the surface $S$ also has
Picard number one. By Lemma 4.2, we have
$\tau_{x}(L_{Y})\leq\sqrt{d}+1\leq\sqrt[3]{d^{2}}$ (as $d\geq 26$) and hence
$\tau_{x}(2L_{Y})=2\tau_{x}(L_{Y})\leq 2\sqrt[3]{d^{2}}$. Since $(2L\cdot
C)\geq 2$ for any curves $C\subseteq Y$, the threefold $Y$ does not contain
any lines under the embedding given by $|2L|$, thus by Lemma 4.3 we also have
$\tau_{x}(2L_{S})\leq 2\sqrt[3]{d^{2}}$. By Corollary 3.6 we get
$\delta_{x}(2L_{S})\geq\frac{3}{2\sqrt[3]{d^{2}}}$, thus by [AZ-K-
adjunction]*Lemma 4.6, we have
$\delta_{x}(2L_{Y})\geq\frac{2}{\sqrt[3]{d^{2}}}$, i.e.
$\delta_{x}(L_{Y})\geq\frac{4}{\sqrt[3]{d^{2}}}$. Repeated use of [AZ-K-
adjunction]*Lemma 4.6 then yields
$\delta_{Z}(L)\geq\delta_{x}(L)\geq\frac{n+1}{4}\delta_{x}(L_{Y})\geq\frac{n+1}{\sqrt[3]{d^{2}}},$
hence $\delta_{Z}(X)=\frac{\delta_{Z}(L)}{r}\geq\frac{n+1}{n}$ as long as
$n\geq r\sqrt[3]{d^{2}}$. Since $n\geq d$, this is automatic when $n\geq
r^{3}$. The proof is now complete. ∎
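The numeric inequalities used in this proof are easy to verify directly; the following sweep (illustrative, not part of the argument) checks the relevant ranges:

```python
from fractions import Fraction

# (1) sqrt(d) + 1 <= d^(2/3) once d >= 26 (used to apply Lemma 4.2);
# (2) the repeated adjunction constant telescopes:
#     (n+1)/n * n/(n-1) * ... * 5/4 = (n+1)/4;
# (3) with d = n + 2 - r, the hypotheses r >= 3 and n >= r^3 imply
#     n >= r * d^(2/3), hence delta_Z(X) >= (n+1)/n.
for d in range(26, 1000):
    assert d ** 0.5 + 1 <= d ** (2 / 3)

for n in range(4, 30):
    prod = Fraction(1)
    for k in range(4, n + 1):
        prod *= Fraction(k + 1, k)
    assert prod == Fraction(n + 1, 4)

for r in range(3, 8):
    for n in range(r ** 3, r ** 3 + 300):
        d = n + 2 - r
        assert n >= r * d ** (2 / 3)
```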
## 5\. Fano threefolds
As another application of Theorem 3.1, we prove in this section the uniform
K-stability of most Fano threefolds of Picard number one. Recall that the
degree of a Fano threefold $X$ of Picard number one is defined to be
$(H^{3})$, where $H$ is the ample generator of $\mathrm{Pic}(X)$. Using the
classification of Fano threefolds [Fano-book], we may restate Theorem C as
follows.
###### Theorem 5.1.
Let $X$ be a smooth Fano threefold of Picard number one. Assume that $X$ has
index two and degree at most $4$, or it has index one and degree at most $16$.
Then $X$ is uniformly K-stable.
We remark that while Fano threefolds of Picard number one have been fully
classified (most of them are complete intersections in rational homogeneous
manifolds), we only need very little information from this classification. As
will be seen below, the two key properties of Fano threefolds we need are the
following.
1. (1)
For any closed point $x\in X$ on the Fano threefold, there exists some smooth
member $S\in|H|$ passing through $x$.
2. (2)
Most Fano threefolds $X$ of Fano index one are cut out by quadrics and there
are only finitely many lines through a fixed point.
Before presenting the details, let us give a brief summary of the proof of
Theorem 5.1 and indicate where the above two properties are used. We focus on
the case when $X$ has Fano index one (the index two cases are easier).
Clearly, point (1) allows us to apply Theorem 3.1 and immediately obtain
(5.1) $\delta_{x}(X)\geq\frac{4\varepsilon_{x}(H|_{S})}{(H^{3})}.$
Note that $S$ is a K3 surface. Seshadri constants on K3 surfaces have been
studied by [Knu-K3-Seshadri] and usually they are easier to compute when the
surfaces have Picard number one, thus we seek to choose the surface $S$
carefully so that it not only contains $x$, but also has Picard number one.
This is a Noether-Lefschetz type problem with a base point constraint and will
be studied in Subsection 5.3. Note that this requirement on $S$ can fail on
certain Fano threefolds (e.g. quartic threefolds with generalized Eckardt
points), but we will show that it is satisfied whenever point (2) holds.
Fortunately, in the remaining cases already the trivial bound of
$\varepsilon_{x}(H|_{S})$ is enough to imply K-stability through (5.1). Once
we find a surface $S$ of Picard number one, the argument is relatively
straightforward using Seshadri constants calculations and Theorem 3.1, except
when $X$ has degree $16$. In this case we only get $\delta(X)\geq 1$ (i.e.
K-semistability) and need to study the equality condition a bit further. This
is done in Subsection 5.1 by analyzing the inductive step, which is based on
inversion of adjunction, in the proof of Theorem 3.1.
### 5.1. Equality conditions in adjunction
As indicated in the summary of proof above, the goal of this subsection is to
further analyze the equality conditions in Theorem 3.1. The main technical
results are Lemma 5.4 and Corollary 5.6, which list a few constraints that
need to be satisfied in order to have equality in the adjunction of stability
thresholds. They will play an important role when treating Fano threefolds of
degree $16$ and complete intersections of two quadrics.
###### Lemma 5.2.
Let $X$ be a projective variety of dimension $n$, let $L$ be an ample line
bundle on $X$ and let $v$ be a valuation of linear growth on $X$. Then
$S(L;v)=\frac{n}{n+1}T(L;v)$ if and only if
(5.2) $\frac{\mathrm{vol}(L;v\geq
t)}{(L^{n})}=1-\left(\frac{t}{T(L;v)}\right)^{n}$
for all $0\leq t\leq T(L;v)$. If in addition $v$ is quasi-monomial, then its
center is a closed point on $X$.
###### Remark 5.3.
In general we have $S(L;v)\leq\frac{n}{n+1}T(L;v)$ (see for example [AZ-K-
adjunction]*Lemma 4.2), thus the above statement gives a description of the
equality conditions. When $v$ is divisorial, this is essentially proved in
[Fuj-alpha]*Proposition 3.2.
###### Proof.
Assume that $S(L;v)=\frac{n}{n+1}T(L;v)$. Let $T=T(L;v)$. After replacing $L$
by $rL$ for some sufficiently large integer $r$ we may assume that $L$ is very
ample. Let $H\in|L|$ be a general member, let $W_{\vec{\bullet}}$ be the
refinement by $H$ of the complete linear series $V_{\vec{\bullet}}$ associated
to $L$, and let $\mathcal{F}$ be the filtration on $W_{\vec{\bullet}}$ induced
by the filtration $\mathcal{F}_{v}$ on $R(X,L)$. Concretely,
$W_{\vec{\bullet}}$ is $\mathbb{N}^{2}$-graded and we have
(5.3)
$\mathcal{F}^{\lambda}W_{m,j}=\text{Im}(\mathcal{F}_{v}^{\lambda}H^{0}(X,\mathcal{O}_{X}((m-j)L))\to
H^{0}(H,\mathcal{O}_{H}((m-j)L))).$
In particular, $W_{m,j}=H^{0}(H,\mathcal{O}_{H}((m-j)L))$ when $m-j\gg 0$. By
the definition of the pseudo-effective threshold $T(L;v)$, it follows that
$\mathrm{Supp}(W_{\vec{\bullet}}^{t})\cap(\\{1\\}\times\mathbb{R})=[0,1-t/T]$
where $W_{\vec{\bullet}}^{t}$ ($0<t<T$) is the multi-graded linear series
given by $W^{t}_{m,j}=\mathcal{F}^{mt}W_{m,j}$. By Lemma 2.9, we also have
$S(L;v)=S(V_{\vec{\bullet}};\mathcal{F}_{v})=S(W_{\vec{\bullet}};\mathcal{F})$.
Let
$f(t,\gamma)=\mathrm{vol}_{W_{\vec{\bullet}}^{t}}(\gamma)\quad(0<t<T,0<\gamma<1-t/T).$
It is clear that
$f(t,\gamma)\leq\mathrm{vol}_{W_{\vec{\bullet}}}(\gamma)=(1-\gamma)^{n-1}(L^{n})$.
Note that $\mathrm{vol}(W_{\vec{\bullet}})=\mathrm{vol}(L)=(L^{n})$. We then
obtain
$\displaystyle S(L;v)=S(W_{\vec{\bullet}};\mathcal{F})$
$\displaystyle=\frac{1}{(L^{n})}\int_{0}^{T}\mathrm{vol}(W_{\vec{\bullet}}^{t})\mathrm{d}t$
$\displaystyle=\frac{n}{(L^{n})}\int_{0}^{T}\mathrm{d}t\int_{0}^{1-\frac{t}{T}}f(t,\gamma)\mathrm{d}\gamma$
$\displaystyle\leq\int_{0}^{T}\mathrm{d}t\int_{0}^{1-\frac{t}{T}}n(1-\gamma)^{n-1}\mathrm{d}\gamma=\frac{n}{n+1}T,$
where the second equality follows from [AZ-K-adjunction]*Corollary 2.22 while
the third equality is implied by [AZ-K-adjunction]*Lemma 2.23. Since
$S(L;v)=\frac{n}{n+1}T$ by assumption, we see that
$f(t,\gamma)=(1-\gamma)^{n-1}(L^{n})$ for all $t,\gamma$. By another
application of [AZ-K-adjunction]*Lemma 2.23 (used in the second equality
below), we then have
$\displaystyle\mathrm{vol}(L;v\geq t)=\mathrm{vol}(W_{\vec{\bullet}}^{t})$
$\displaystyle=n\int_{0}^{1-\frac{t}{T}}f(t,\gamma)\mathrm{d}\gamma$
$\displaystyle=\int_{0}^{1-\frac{t}{T}}n(1-\gamma)^{n-1}(L^{n})\mathrm{d}\gamma=\left(1-\left(\frac{t}{T}\right)^{n}\right)(L^{n}),$
which proves (5.2). Conversely, it is easy to check that
$S(L;v)=\frac{n}{n+1}T$ as long as (5.2) holds. This proves the first part of
the lemma.
Suppose next that $v$ is quasi-monomial and $\dim C_{X}(v)\geq 1$. Let
$\pi\colon Y\to X$ be a log resolution and let $Z=C_{Y}(v)$. Since $\dim
C_{X}(v)\geq 1$, we have $(Z\cdot\pi^{*}L)\neq 0$ by the projection formula,
hence $H_{Y}$ intersects $Z$ where $H_{Y}$ is the strict transform of $H$. By
Izumi’s inequality we have
$\mathrm{lct}_{Z}(f)\geq\frac{1}{\mathrm{mult}_{Z}(f)}$. On the other hand
$\mathrm{lct}_{Z}(f)\leq\frac{A_{Y}(v)}{v(f)}$ by definition, thus $v(f)\leq
A_{Y}(v)\mathrm{mult}_{Z}(f)$ for any $f\in\mathcal{O}_{Y,Z}$. Up to rescaling
of the valuation $v$ we may assume that $A_{Y}(v)=1$. Thus by (5.3) we see
that $\mathcal{F}^{\lambda}W_{m,j}\subseteq
H^{0}(H_{Y},\mathcal{I}^{\lambda}_{H_{Y}\cap Z}((m-j)\pi^{*}L))$. As
$\pi^{*}L$ is nef and big, by [FKL-volume]*Theorem A(v) we obtain
$\mathrm{vol}_{W_{\vec{\bullet}}^{t}}(\gamma)<\mathrm{vol}_{W_{\vec{\bullet}}}(\gamma)$
for all $0<t,\gamma\ll 1$. By the proof above it then follows that
$S(L;v)<\frac{n}{n+1}T$. This proves the second part of the lemma. ∎
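One half of the equivalence in Lemma 5.2 is a direct integration: the profile (5.2) gives $S(L;v)=\int_{0}^{T}\left(1-(t/T)^{n}\right)\mathrm{d}t=\frac{n}{n+1}T$. A numeric sanity check with illustrative values:

```python
# One direction of Lemma 5.2 in numbers: if the normalized volume profile
# is 1 - (t/T)^n, then S(L;v) = \int_0^T (1 - (t/T)^n) dt = (n/(n+1)) T.
T = 3.0  # illustrative value of T(L;v)
N = 200000
dt = T / N
for n in range(2, 8):
    # midpoint-rule approximation of the integral
    S = sum(1 - ((i + 0.5) * dt / T) ** n for i in range(N)) * dt
    assert abs(S - n / (n + 1) * T) < 1e-6
```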
###### Lemma 5.4.
Let $X$ be a projective variety of dimension $n\geq 2$ with klt singularities
and let $L$ be an ample line bundle on $X$ such that the linear system $|L|$
is base point free. Assume that $\delta(L)$ is computed by some valuation $v$
on $X$ with $\dim C_{X}(v)\geq 1$ and
$\delta(L)\leq\frac{n+1}{n}\delta_{Z}(L|_{H})$
for some general member $H\in|L|$ and some irreducible component $Z$ of
$C_{X}(v)\cap H$. Then one of the following holds:
1. (1)
$\frac{A_{X}(v)}{T(L;v)}<\frac{n-1}{n+1}\delta(L)$, or
2. (2)
$\frac{A_{X}(v)}{T(L;v)}=\frac{n-1}{n+1}\delta(L)$, $\dim C_{X}(v)=1$, and
$\frac{\mathrm{vol}(L;v\geq
t)}{(L^{n})}=1-n\left(\frac{t}{T}\right)^{n-1}+(n-1)\left(\frac{t}{T}\right)^{n}$
for all $0\leq t\leq T:=T(L;v)$.
###### Remark 5.5.
Note that this implies $\alpha(L)\leq\frac{n-1}{n+1}\delta(L)$, which is
stronger than the usual inequality $\alpha(L)\leq\frac{n}{n+1}\delta(L)$ (see
for example [BJ-delta]*Theorem A).
###### Proof.
Let $W_{\vec{\bullet}}$ be the refinement by $H$ of the complete linear series
associated to $L$ and let $L_{0}=L|_{H}$. Note that $W_{\vec{\bullet}}$ is
almost complete, $F(W_{\vec{\bullet}})=0$ and
$c_{1}(W_{\vec{\bullet}})=\frac{n}{n+1}L_{0}$ by [AZ-K-adjunction]*Example
2.28 and (3.1). Let $\mathfrak{a}_{\bullet}(v)$ be the valuation ideals of
$v$, i.e., $\mathfrak{a}_{m}(v)=\\{f\in\mathcal{O}_{X}\,|\,v(f)\geq m\\}$. Let
$\mathfrak{a}_{m}=\mathfrak{a}_{m}(v)|_{H}$ and let $v_{0}$ be a quasi-
monomial valuation on $H$ with center $Z$ that computes
$\mathrm{lct}_{Z}(H;\mathfrak{a}_{\bullet})$, which exists by [Xu-
quasimonomial]*Theorem 1.1. After rescaling, we may assume that
$A_{H}(v_{0})=A_{X}(v)$. By inversion of adjunction,
$\mathrm{lct}_{Z}(H;\mathfrak{a}_{\bullet})\leq\mathrm{lct}_{Z}(X;\mathfrak{a}_{\bullet}(v))$.
Since $v$ calculates $\mathrm{lct}_{Z}(X;\mathfrak{a}_{\bullet}(v))=A_{X}(v)$
by [BJ-delta]*Proposition 4.8, we deduce that
$v_{0}(\mathfrak{a}_{\bullet})\geq 1$ and hence
$\mathfrak{a}_{m}(v)|_{H}\subseteq\mathfrak{a}_{m}(v_{0})$ for all $m$. We now
define two filtrations on $W_{\vec{\bullet}}$: the first one, denoted by
$\bar{\mathcal{F}}_{v}$, is the restriction of the filtration
$\mathcal{F}_{v}$ on $X$ induced by the valuation $v$, while the second one
$\mathcal{F}_{v_{0}}$ is induced by the valuation $v_{0}$. From the previous
argument we see that $\mathcal{F}_{v_{0}}$ dominates $\bar{\mathcal{F}}_{v}$,
i.e.
$\bar{\mathcal{F}}_{v}^{\lambda}W_{\vec{\bullet}}\subseteq\mathcal{F}_{v_{0}}^{\lambda}W_{\vec{\bullet}}$
for all $\lambda\geq 0$. By [AZ-K-adjunction]*Corollary 2.22, this implies
(5.4) $S(W_{\vec{\bullet}};\bar{\mathcal{F}}_{v})\leq
S(W_{\vec{\bullet}};\mathcal{F}_{v_{0}})=S(W_{\vec{\bullet}};v_{0}),$
and when equality holds we have
$T(W_{\vec{\bullet}};\bar{\mathcal{F}}_{v})=T(W_{\vec{\bullet}};\mathcal{F}_{v_{0}})$.
From the construction, it is clear that
$T(W_{\vec{\bullet}};\bar{\mathcal{F}}_{v})=T(L;v)$ and
$T(W_{\vec{\bullet}};\mathcal{F}_{v_{0}})=T(L_{0};v_{0})$. By Lemma 2.9, we
also have
$S(L;v)=S(W_{\vec{\bullet}};\bar{\mathcal{F}}_{v}),$
thus as $v$ computes $\delta(L)$ and $H\in|L|$ is general, we deduce that
$A_{H}(v_{0})=A_{X}(v)=\delta(L)S(L;v)\leq\delta(L)S(W_{\vec{\bullet}};v_{0}).$
On the other hand, we have
$S(W_{\vec{\bullet}};v_{0})=\frac{n}{n+1}S(L_{0};v_{0})$ by [AZ-K-
adjunction]*Lemma 2.29. Combined with our assumptions we obtain
$\delta(L)S(W_{\vec{\bullet}};v_{0})\leq\frac{n+1}{n}\delta_{Z}(L_{0})\cdot\frac{n}{n+1}S(L_{0};v_{0})=\delta_{Z}(L_{0})S(L_{0};v_{0})\leq
A_{H}(v_{0}).$
Therefore equality holds everywhere (including in (5.4)) and we get
$\delta(L)=\frac{n+1}{n}\delta_{Z}(L_{0}),\quad
A_{X}(v)=A_{H}(v_{0})=\delta_{Z}(L_{0})S(L_{0};v_{0})$
i.e. $v_{0}$ computes $\delta_{Z}(L_{0})$, and
$T(L;v)=T(W_{\vec{\bullet}};\bar{\mathcal{F}}_{v})=T(W_{\vec{\bullet}};\mathcal{F}_{v_{0}})=T(L_{0};v_{0}).$
Note that $S(L_{0};v_{0})\leq\frac{n-1}{n}T(L_{0};v_{0})$ by [AZ-K-
adjunction]*Lemma 4.2. It follows that
$A_{X}(v)\leq\frac{n-1}{n}\delta_{Z}(L_{0})T(L_{0};v_{0})=\frac{n-1}{n+1}\delta(L)T(L;v).$
If equality holds, then by Lemma 5.2 we know that $Z$ is a closed point and
$\frac{\mathrm{vol}(L_{0};v_{0}\geq
t)}{(L^{n})}=1-\left(\frac{t}{T}\right)^{n-1},$
where $0\leq t\leq T:=T(L_{0};v_{0})=T(L;v)$. It follows that $\dim
C_{X}(v)=1$. Let $W_{\vec{\bullet}}^{t}$ be the multi-graded linear series
given by $W^{t}_{m,j}=\mathcal{F}_{v_{0}}^{mt}W_{m,j}$. Since
$W_{m,j}=H^{0}(H,\mathcal{O}_{H}((m-j)L))$ when $m-j\gg 0$, we see that
$\mathrm{vol}_{W_{\vec{\bullet}}^{t}}(\gamma)=\mathrm{vol}((1-\gamma)L_{0};v_{0}\geq
t)$
and hence
$\frac{\mathrm{vol}_{W_{\vec{\bullet}}^{t}}(\gamma)}{(L^{n})}=(1-\gamma)^{n-1}-\left(\frac{t}{T}\right)^{n-1}$
for all $0\leq\gamma<1-\frac{t}{T}$. Since equality holds in (5.4), using [AZ-
K-adjunction]*Lemma 2.23 we obtain
$\displaystyle\mathrm{vol}(L;v\geq t)=\mathrm{vol}(W_{\vec{\bullet}}^{t})$
$\displaystyle=n(L^{n})\int_{0}^{1-\frac{t}{T}}\left((1-\gamma)^{n-1}-\left(\frac{t}{T}\right)^{n-1}\right)\mathrm{d}\gamma$
$\displaystyle=\left(1-n\left(\frac{t}{T}\right)^{n-1}+(n-1)\left(\frac{t}{T}\right)^{n}\right)(L^{n})$
as desired. ∎
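As a consistency check of case (2), the stated profile integrates to $S(L;v)=\frac{n-1}{n+1}T$, which matches $\frac{A_{X}(v)}{T(L;v)}=\frac{n-1}{n+1}\delta(L)$ via $A_{X}(v)=\delta(L)S(L;v)$. A numeric verification with illustrative values:

```python
# Case (2) profile: v(t) = 1 - n (t/T)^(n-1) + (n-1) (t/T)^n integrates
# over [0, T] to ((n-1)/(n+1)) T, so A_X(v) = delta(L) S(L;v) recovers
# A_X(v)/T = ((n-1)/(n+1)) delta(L).
T = 2.0  # illustrative value of T(L;v)
N = 200000
dt = T / N
for n in range(2, 8):
    # midpoint-rule approximation of the integral of the profile
    S = sum(
        1 - n * ((i + 0.5) * dt / T) ** (n - 1)
        + (n - 1) * ((i + 0.5) * dt / T) ** n
        for i in range(N)
    ) * dt
    assert abs(S - (n - 1) / (n + 1) * T) < 1e-6
```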
###### Corollary 5.6.
Under the assumption of Lemma 5.4, assume in addition that $X$ is
$\mathbb{Q}$-factorial, $\rho(X)=1$ and $C_{X}(v)$ has codimension $\geq 2$ in
$X$. Then at least one of the following holds:
$\frac{A_{X}(v)}{T(L;v)}<\frac{n-1}{n+1}\delta(L),\quad\text{or}\quad\frac{A_{X}(v)}{\eta(L;v)}\leq\frac{n-1}{n+1}\delta(L).$
###### Proof.
Denote $T:=T(L;v)$ and $\eta:=\eta(L;v)$. It suffices to show that $\eta=T$ in
the second case of Lemma 5.4. Suppose not, i.e. $\eta<T$. Then by Lemma 2.3,
there exists an irreducible $\mathbb{Q}$-divisor $D_{0}\sim_{\mathbb{Q}}L$ on
$X$ such that $v(D_{0})=T$ and for any $t\in(\eta,T)$ and any effective
$\mathbb{Q}$-divisor $D\sim_{\mathbb{Q}}L$ with $v(D)\geq t$ we have
$D=\frac{t-\eta}{T-\eta}D_{0}+\frac{T-t}{T-\eta}G$ where $G\sim_{\mathbb{Q}}L$
is effective and $v(G)\geq\eta$. It follows that
$\mathrm{vol}(L;v\geq
t)=\left(\frac{T-t}{T-\eta}\right)^{n}\mathrm{vol}(L;v\geq\eta),$
which contradicts the expression from Lemma 5.4 (note that $n\geq 3$ since the
curve $C_{X}(v)$ has codimension $\geq 2$ in $X$). Thus $\eta=T$ and we are
done. ∎
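The incompatibility of the two volume profiles can also be seen numerically: the profile from Lemma 5.4(2) vanishes to order $2$ at $t=T$, while $\left(\frac{T-t}{T-\eta}\right)^{n}\mathrm{vol}(L;v\geq\eta)$ vanishes to order $n\geq 3$. An illustrative comparison at $n=3$:

```python
# Illustrative incompatibility check at n = 3, T = 1, eta = 1/2:
# the Lemma 5.4(2) profile v(t) = 1 - 3 t^2 + 2 t^3 vanishes to order 2
# at t = T, whereas ((T - t)/(T - eta))^3 * v(eta) vanishes to order 3,
# so the two cannot agree on (eta, T).
n, T, eta = 3, 1.0, 0.5

def v(t):
    return 1 - 3 * t ** 2 + 2 * t ** 3

t = 0.75
lhs = v(t)                                 # profile from Lemma 5.4(2)
rhs = ((T - t) / (T - eta)) ** n * v(eta)  # profile forced by eta < T
assert abs(lhs - rhs) > 0.05
```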
### 5.2. The index two case
As an application of Theorem 3.1 and the results from the previous subsection,
we now prove:
###### Theorem 5.7.
Let $X$ be a Fano threefold of Picard number one, Fano index two and degree at
most $4$. Then $X$ is uniformly K-stable.
###### Proof.
Let $H$ be the ample generator of $\mathrm{Pic}(X)$ and let $d=(H^{3})$ be the
degree of $X$. Using the classification of Fano threefolds (see for example
[Fano-book]), it is straightforward to check that for any closed point $x\in
X$ there exists some smooth member $S\in|H|$ passing through $x$ (see for
example the proof of [AZ-K-adjunction]*Corollary 4.9(5) for the degree $1$
case, the other cases are much easier). By adjunction, $S$ is a del Pezzo
surface of degree $d$ and $H|_{S}\sim-K_{S}$. Since
$\delta_{x}(X)=\frac{1}{2}\delta_{x}(H)$, it suffices to show
$\delta_{x}(H)>2$.
Suppose first that $d=1$. By [Bro-dP-Seshadri]*Théorème 1.3, we know that
$\varepsilon_{x}(-K_{S})\geq\frac{1}{2}$, hence by Theorem 3.1 we obtain
$\delta_{x}(H)\geq 2$. Moreover the equality cannot hold since
$1=(H^{3})\neq\left(\frac{1}{2}\right)^{2}$ and $H$ is a primitive element in
$\mathrm{Pic}(X)$. Thus $\delta_{x}(H)>2$ and we are done in this case.
Similarly, when $d=2$, by [Bro-dP-Seshadri]*Théorème 1.3 we know that
$\varepsilon_{x}(-K_{S})\geq 1$, thus the same argument as above gives
$\delta_{x}(H)>2$.
Suppose next that $d=3$. If $x\in X$ is a generalized Eckardt point, then
$\delta_{x}(X)=\frac{6}{5}>1$ by [AZ-K-adjunction]*Theorem 4.18. If $x\in X$
is not a generalized Eckardt point, then there are only finitely many lines on
$X$ passing through $x$, thus if $S\subseteq X$ is general then $x$ is not
contained in any lines on $S$. By [Bro-dP-Seshadri]*Théorème 1.3, we have
$\varepsilon_{x}(-K_{S})=\frac{3}{2}$, hence $\delta_{x}(H)\geq 2$ by Theorem
3.1. Moreover, equality cannot hold as before. Thus $\delta_{x}(H)>2$ and we
are done in this case.
Finally assume that $d=4$. Note that $X$ is a complete intersection of two
quadrics in this case. There is a pencil of tangent hyperplanes at any closed
point $x$ and every line on $X$ passing through $x$ is contained in the base
locus of this pencil, which is a complete intersection curve of degree $4$. It
follows that there are at most $4$ lines containing $x$ on $X$. In particular,
we may arrange that $x$ is not contained in any lines on $S$. By [Bro-dP-
Seshadri]*Théorème 1.3, we know that $\varepsilon_{x}(-K_{S})\geq 2$, hence
Theorem 3.1 yields $\delta_{x}(H)\geq 2$. Suppose that equality holds. Then by
Theorem 3.1 there exists some valuation $v$ with positive dimensional center
on $X$ such that $\frac{A_{X}(v)}{S(H;v)}=2$. We show that this is impossible.
Indeed, if the center $C_{X}(v)$ is a prime divisor $D$ on $X$, then we may
assume that $v=\mathrm{ord}_{D}$. Since $\mathrm{Pic}(X)$ is generated by $H$,
we have $D\sim rH$ for some $r\geq 1$. By Lemma 3.5 we have
$S(H;v)=\frac{1}{4r}$, hence $\frac{A_{X}(v)}{S(H;v)}=4r>2$, a contradiction.
Hence we may assume that the center $C_{X}(v)$ is a curve $C$ on $X$. If
$T\in|H|$ is a general member and $x\in T\cap C_{X}(v)$, then
$\varepsilon_{x}(-K_{T})\geq 2$ and $\delta_{x}(-K_{T})\geq\frac{3}{2}$ by
[Bro-dP-Seshadri]*Théorème 1.3 and Theorem 3.1 as above. Thus the assumptions
of Corollary 5.6 are satisfied and we deduce that either $A_{X}(v)<T(H;v)$ or
$A_{X}(v)\leq\eta(H;v)$. The first case is impossible by [CS-
lct-3fold]*Theorem 6.1, thus it remains to exclude the other possibility.
By the definition of the movable threshold $\eta(H;v)$, for any
$0<\varepsilon\ll 1$ we can find two effective $\mathbb{Q}$-divisors
$D_{1},D_{2}\sim_{\mathbb{Q}}H$ without common components such that
$v(D_{i})\geq(1-\varepsilon)\eta(H;v)$ ($i=1,2$). Let $m$ be a sufficiently
divisible integer and let $Z\subseteq X$ be the complete intersection
subscheme $mD_{1}\cap mD_{2}$. Note that $\deg Z=4m^{2}$. If $\deg C\geq 2$,
then $\mathrm{mult}_{C}Z\leq 2m^{2}$ and by [dFEM-mult-and-lct]*Theorem 0.1,
applied at the generic point of $C$, we have
$\mathrm{lct}_{C}(X;\mathcal{I}_{Z})\geq\frac{\sqrt{2}}{m}$. In particular,
$A_{X}(v)\geq\frac{\sqrt{2}}{m}v(\mathcal{I}_{Z})=\sqrt{2}\min\\{v(D_{1}),v(D_{2})\\}\geq\sqrt{2}(1-\varepsilon)\eta(H;v)>\eta(H;v),$
a contradiction. If $\deg C=1$, i.e. $C$ is a line on $X$, then
$\mathrm{mult}_{C}Z\leq 4m^{2}$ and by [Fano-book]*Proposition 3.4.1(ii), we
have $\mathrm{mult}_{C}D_{i}<\frac{3}{2}<2$ for some $i=1,2$. Hence
$\mathcal{I}_{Z}\not\subseteq\mathcal{I}_{C}^{\frac{3}{2}m}$ and by [Z-bsr-
loc-closed]*Lemma 2.6, applied at the generic point of $C$, we see that there
exists some absolute constant $\varepsilon_{1}>0$, which in particular does
not depend on $D_{i}$ or $\varepsilon$, such that
$\mathrm{lct}_{C}(X;\mathcal{I}_{Z})>\frac{1+\varepsilon_{1}}{m}$. It then
follows as before that
$A_{X}(v)>\frac{1+\varepsilon_{1}}{m}v(\mathcal{I}_{Z})\geq(1+\varepsilon_{1})(1-\varepsilon)\eta(H;v)>\eta(H;v)$,
a contradiction. Hence the equality $\delta_{x}(H)=2$ never holds and $X$ is
uniformly K-stable. ∎
### 5.3. Noether-Lefschetz for prime Fano threefolds
In this subsection, we prove the following Noether-Lefschetz type result on
Fano threefolds. As explained at the beginning of the section, this is another
key ingredient in our study of uniform K-stability when the Fano threefolds
have index one.
###### Theorem 5.8.
Let $X$ be a Fano threefold of Picard number one, Fano index one and degree
$\geq 6$. Let $x\in X$ be a closed point. Then a very general hyperplane
section $S\in|-K_{X}|$ passing through $x$ has Picard number one.
###### Remark 5.9.
The assumption that $X$ has degree $\geq 6$ is indeed necessary, since the
statement fails on quartic threefolds with generalized Eckardt points.
For the proof of the theorem we first recall the following criterion.
###### Lemma 5.10.
Let $X\subseteq\mathbb{P}^{N}$ be a smooth projective threefold, let
$\ell\subseteq(\mathbb{P}^{N})^{*}$ be a Lefschetz pencil of hyperplane
sections and let $Y=X\cap H$ where $H$ is a very general member of $\ell$.
Assume that the natural map $H^{2,0}(X)\to H^{2,0}(Y)$ is not surjective. Then
the restriction $\mathrm{Pic}(X)\to\mathrm{Pic}(Y)$ is an isomorphism.
###### Proof.
This should be well known to experts. By assumption, the vanishing cohomology
$H^{2}(Y,\mathbb{C})_{\rm van}$, the orthogonal complement of
$H^{2}(X,\mathbb{C})$ relative to the intersection form on
$H^{2}(Y,\mathbb{C})$, has non-trivial intersection with $H^{2,0}(Y)$, hence
is not generated by algebraic classes. By [SGA7II]*Théorème 1.4, this implies
that $H^{2}(Y,\mathbb{C})_{\rm van}\cap\mathrm{Pic}(Y)=\\{0\\}$ and therefore
every line bundle on $Y$ is a pullback from $X$; in other words,
$\mathrm{Pic}(X)\to\mathrm{Pic}(Y)$ is surjective. It is also injective by the
Lefschetz hyperplane theorem. ∎
In the remaining part of this section, let $X$ be a smooth Fano threefold as
in Theorem 5.8 and let $x\in X$ be a closed point. It is well known (see for
example [Fano-book]) that the anti-canonical linear system $|-K_{X}|$ induces
an embedding $X\subseteq\mathbb{P}^{N}$. Let
$\check{\mathbb{P}}\subseteq(\mathbb{P}^{N})^{*}$ be the dual projective space
parametrizing hyperplanes of $\mathbb{P}^{N}$ containing $x$. To apply Lemma
5.10, we need to find a Lefschetz pencil in $\check{\mathbb{P}}$.
###### Lemma 5.11.
A general pencil $\ell\subseteq\check{\mathbb{P}}$ is a Lefschetz pencil.
###### Proof.
We first recall some notation and results from [Voisin-Hodge2]*Section 2.1.1.
Let $Z\subseteq X\times(\mathbb{P}^{N})^{*}$ be the algebraic subset defined
by
$Z=\\{(y,H)\,|\,X_{H}:=X\cap H\mbox{ is singular at }y\\}.$
Let $\mathcal{D}_{X}:={\rm pr}_{2}(Z)\subseteq(\mathbb{P}^{N})^{*}$ be the set
of singular hyperplane sections and let
$\mathcal{D}_{X}^{0}\subseteq\mathcal{D}_{X}$ be the subset of hyperplanes $H$
such that $X_{H}$ has at most one ordinary double point as singularity. Let
$W:=\mathcal{D}_{X}\setminus\mathcal{D}_{X}^{0}$. By [Voisin-Hodge2]*Corollary
2.8, and the comments thereafter, we have $\dim W\leq N-2$ and
$\check{\mathbb{P}}$ intersects transversally with $\mathcal{D}_{X}^{0}$ away
from $Z_{x}:={\rm pr}_{1}^{-1}(x)\subseteq Z$. Since $\dim
Z_{x}=N-4=\dim\check{\mathbb{P}}-3$, a general line
$\ell\subseteq\check{\mathbb{P}}$ is disjoint from $Z_{x}$ and thus intersects
transversally with $\mathcal{D}_{X}^{0}$. By [Voisin-Hodge2]*Proposition 2.9,
$\ell$ is a Lefschetz pencil if and only if $\ell$ is also disjoint from $W$.
This would be the case if $W\cap\check{\mathbb{P}}$ (set-theoretic
intersection) has codimension at least $2$ in $\check{\mathbb{P}}$. Clearly
$W\cap\check{\mathbb{P}}=W_{1}\cup W_{2}$ where $W_{1}$ (resp. $W_{2}$)
parametrizes hyperplanes $H$ containing $x$ such that $X_{H}$ has degenerate
singularities (resp. more than one ordinary double point). It suffices to show
that both $W_{i}$ ($i=1,2$) have codimension at least $2$ in
$\check{\mathbb{P}}$.
To this end, consider the closed subset
$R:=\\{y\in X\,|\,x\in\Lambda_{y}\\}\subseteq X,$
where $\Lambda_{y}:=T_{y}X\subseteq\mathbb{P}^{N}$ denotes the tangent space
of $X$ at $y$. We claim that
(5.5) $\dim R\leq 1.$
Assuming this claim for now, let us finish the proof of the lemma. Let
$\check{Z}=\mathrm{pr}_{2}^{*}\check{\mathbb{P}}$. By construction,
$\check{Z}$ is a divisor in $Z$ and $\mathrm{pr}_{1}\colon\check{Z}\to X$ is a
$\mathbb{P}^{N-5}$-bundle away from $R$; in particular, $\check{Z}$ is smooth
outside $\mathrm{pr}_{1}^{-1}(R)$. Note that by (5.5), we have
(5.6) $\dim\mathrm{pr}_{1}^{-1}(R)\leq N-3<\dim\check{Z}=N-2.$
Let $(y,H)\in\check{Z}\setminus\mathrm{pr}_{1}^{-1}(R)$. As in [Voisin-
Hodge2]*Lemma 2.7 and Corollary 2.8, $X_{H}$ has an ordinary double point at
$y$ if and only if the restriction of $\mathrm{pr}_{2}$ to $\check{Z}$ is an
immersion at $(y,H)$, and in that case $\mathrm{pr}_{2*}(T_{\check{Z},(y,H)})$
can be identified with the linear subspace of
$\mathbb{C}^{N}=T_{(\mathbb{P}^{N})^{*},H}$ consisting of functions that
vanish at both $x$ and $y$. By generic smoothness and (5.6), this immediately
implies that $\dim W_{1}\leq N-3$. On the other hand, let $H$ be a general
smooth point of an irreducible component of $W_{2}$. By construction, $X_{H}$
has at least two ordinary double points $y_{1},y_{2}$. Since the Fano
threefold $X$ is cut out by quadrics and cubics [Isk-Fano-3fold-II]*Corollary
2.6, the line $\ell^{\prime}$ joining $y_{1}$ and $y_{2}$ is contained in $X$,
otherwise $\ell^{\prime}$ has intersection number at least $4$ with one of
those quadrics or cubics. It follows that either $x\in\ell^{\prime}$, in which
case $y_{1},y_{2}\in R$ and $H\in\mathrm{pr}_{2}(\mathrm{pr}_{1}^{-1}(R))$, or
$x\not\in\ell^{\prime}$, in which case
$T_{W_{2},H}\subseteq\mathrm{pr}_{2*}(T_{\check{Z},(y_{1},H)})\cap\mathrm{pr}_{2*}(T_{\check{Z},(y_{2},H)})$
has dimension at most $N-3$. In either case we conclude that $\dim W_{2}\leq
N-3$. In other words, both $W_{1}$ and $W_{2}$ have codimension at least $2$
in $\check{\mathbb{P}}$. As explained earlier, this implies the statement of
the lemma.
It remains to prove claim (5.5). Let $y\in R\setminus\\{x\\}$. If $\deg X\geq
8$, then since $X$ is cut out by quadrics [Isk-Fano-3fold-II]*Corollary 2.6
and $x\in\Lambda_{y}$, the line joining $x$ and $y$ is contained in $X$:
otherwise as it is tangent to $X$ at $y$, it has intersection number at least
$3$ with one of the quadrics, a contradiction. Since there are only finitely
many lines on $X$ passing through $x$ [Fano-book]*Proposition 4.2.2, we deduce
that $\dim R\leq 1$. If $\deg X=6$, then $X$ is the complete intersection of a
quadric (denoted by $Q$) and a cubic. Again for any $y\in R\setminus\\{x\\}$,
the line joining $x$ and $y$ is contained in $Q$, thus $y\in H_{0}$, where
$H_{0}$ is the tangent hyperplane of $Q$ at $x$. It follows that $R\subseteq
H_{0}\cap X$. Since $\mathrm{Pic}(X)=\mathbb{Z}\cdot[H_{0}]$, we see that
$H_{0}\cap X$ is irreducible and reduced. As there are only finitely many
lines passing through $x$, the linear projection from $x$ is finite on
$H_{0}\cap X$, therefore by generic smoothness, it cannot be ramified
everywhere. In other words, there exists some smooth point $y_{0}\in H_{0}\cap
X$ such that $x\not\in T_{y_{0}}(H_{0}\cap X)=H_{0}\cap T_{y_{0}}X$, or
equivalently, $x\not\in T_{y_{0}}X$. Thus $R\subsetneq H_{0}\cap X$ and we
deduce that $\dim R\leq 1$. This finishes the proof. ∎
We are now ready to prove Theorem 5.8.
###### Proof of Theorem 5.8.
By Lemma 5.11, there exists a Lefschetz pencil of hyperplane sections passing
through $x$. A smooth member $Y$ of the pencil is a smooth K3 surface, thus
$\dim H^{2,0}(Y)=1$; on the other hand, as $X$ is Fano, we have $\dim
H^{2,0}(X)=0$. In particular, the map $H^{2,0}(X)\to H^{2,0}(Y)$ is not
surjective and the theorem follows immediately from Lemma 5.10. ∎
### 5.4. The index one case
We are now ready to prove Theorem 5.1. By Theorem 5.7, the remaining case is:
###### Theorem 5.12.
Every smooth Fano threefold $X$ of Picard number one, Fano index one and
degree $d\leq 16$ is uniformly K-stable.
###### Proof.
Let $H=-K_{X}$. We will denote by $H_{Y}$ the restriction of the hyperplane
class $H$ to a subvariety $Y\subseteq X$. Let $x\in X$ and let $S\in|H|$ be a
very general hyperplane section containing $x$. When $d\leq 4$, it is easy to
see that $\varepsilon_{x}(H_{S})\geq 1$ since $H$ is base point free, hence
$\delta_{x}(H)>1$ and $X$ is uniformly K-stable by Theorem 3.1 as in the proof
of Theorem 5.7. Thus for the rest of the proof we may assume that $d\geq 6$.
By Theorem 5.8, $\mathrm{Pic}(S)=\mathbb{Z}\cdot[H_{S}]$.
We claim that $\tau_{x}(H_{S})\leq 4$. Suppose not, then for some integer
$m>0$ there exists an integral curve $C\sim-mK_{X}|_{S}$ containing $x$ such
that $\mathrm{mult}_{x}C>4m$ (we can assume $C$ is integral since $S$ has
Picard number one). Since $\mathrm{mult}_{x}C$ is an integer, we have
$\mathrm{mult}_{x}C\geq 4m+1$. By adjunction, $K_{S}\sim 0$. It follows that
$16m^{2}\geq dm^{2}=(K_{S}+C)\cdot C=2p_{a}(C)-2\geq\mathrm{mult}_{x}C\cdot(\mathrm{mult}_{x}C-1)-2\geq 4m(4m+1)-2>16m^{2},$
a contradiction. Hence $\tau_{x}(H_{S})\leq 4$ and by Lemma 2.4 and Theorem
3.1 we obtain $\delta_{x}(H)\geq\frac{4}{\tau_{x}(H_{S})}\geq 1$. Moreover,
when equality holds, we have $\varepsilon_{x}(H_{S})=\tau_{x}(H_{S})=4$ or
$H\sim_{\mathbb{Q}}4G$ for some prime divisor $G$ on $X$. The latter case
cannot occur since $H$ is a primitive generator of $\mathrm{Pic}(X)$. By Lemma
2.4, the former case can only happen when $d=16$. This proves that $X$ is
uniformly K-stable when $d\leq 14$ and is K-semistable when $d=16$.
It remains to analyze the equality conditions when $X$ has degree $d=16$.
Suppose that $\delta_{x}(X)=1$, then by Theorem 3.1 there exists some
valuation $v$ with positive dimensional center on $X$ such that
$A_{X}(v)=S(H;v)$. Since $\mathrm{Pic}(X)=\mathbb{Z}\cdot[H]$, using Lemma 3.5
it is easy to see that $S(H;D)\leq\frac{1}{4}<1=A_{X}(D)$ for any prime
divisor $D$ on $X$ (cf. the proof of Theorem 5.7), hence $C_{X}(v)$ cannot be
a surface and must be a curve $C$.
Suppose first that $C$ has degree $(H\cdot C)\geq 2$. We claim that in this
case $\delta_{C}(X)>1$. To see this, let $T\in|H|$ be a very general
hyperplane section such that $\mathrm{Pic}(T)=\mathbb{Z}\cdot[H_{T}]$ and that
$T\cap C$ consists of at least two points, and let $G\in|H_{T}|$ be a general
hyperplane section on $T$ that is disjoint from $T\cap C$. Let
$W_{\vec{\bullet}}$ be the refinement by $T$ of the complete linear series
associated to $H$. Note that $W_{\vec{\bullet}}$ is almost complete,
$F(W_{\vec{\bullet}})=0$ and $c_{1}(W_{\vec{\bullet}})=\frac{3}{4}H_{T}$ by
[AZ-K-adjunction]*(3.1). Consider the admissible flag $Y_{\bullet}$ on $X$
given by $Y_{0}=X$, $Y_{1}=T$ and $Y_{2}=G$. By [AZ-K-adjunction]*Theorem 3.5,
we have
(5.7)
$\delta_{C}(X;-K_{X})\geq\min\left\\{\frac{A_{X}(T)}{S(H;T)},\delta_{T\cap
C}(T;W_{\vec{\bullet}},\mathcal{F})\right\\}=\min\\{4,\delta_{T\cap
C}(T;W_{\vec{\bullet}},\mathcal{F})\\},$
where $\mathcal{F}$ is the filtration induced by the curve $G$. In particular,
$\delta_{C}(X)>1$ as long as we have $\delta_{T\cap
C}(T;W_{\vec{\bullet}},\mathcal{F})>1$.
We show that this is indeed the case. Let $m\in\mathbb{N}$ and let $D$ be an
$m$-basis type $\mathbb{Q}$-divisor of $W_{\vec{\bullet}}$ that is compatible
with $\mathcal{F}$. Then we have
$D=S_{m}(W_{\vec{\bullet}};G)\cdot G+\Gamma$
for some effective $\mathbb{Q}$-divisor $\Gamma$. As $G$ is disjoint from
$T\cap C$, it is clear that
(5.8) $\mathrm{lct}_{T\cap C}(T;D)=\mathrm{lct}_{T\cap C}(T;\Gamma).$
Since
$\lim_{m\to\infty}S_{m}(W_{\vec{\bullet}};G)=S(W_{\vec{\bullet}};G)=\frac{3}{4}S(H_{T};G)=\frac{1}{4}$
by [AZ-K-adjunction]*Lemma 2.29, we see that
$\Gamma\sim_{\mathbb{Q}}c_{1}(W_{\vec{\bullet}})-S_{m}(W_{\vec{\bullet}};G)\cdot
G\sim_{\mathbb{Q}}\lambda_{m}H_{T}$ for some $\lambda_{m}\geq 0$ with
$\lim_{m\to\infty}\lambda_{m}=\frac{1}{2}$. By [AZ-K-adjunction]*Lemma 2.21
and the last part of its proof, we also know that there exists some
$\eta_{m}\in(0,1)$ with $\lim_{m\to\infty}\eta_{m}=1$ such that $\eta_{m}\cdot
S_{m}(W_{\vec{\bullet}};F)<S(W_{\vec{\bullet}};F)$ for any divisor $F$ over
$T$. In particular, we have
$\mathrm{ord}_{F}(\eta_{m}\Gamma)\leq\eta_{m}\cdot
S_{m}(W_{\vec{\bullet}};F)<S(W_{\vec{\bullet}};F)\leq\frac{1}{4}$
for any irreducible curve $F\subseteq T$ (recall that $\mathrm{Pic}(T)$ is
generated by $H_{T}$, thus by [AZ-K-adjunction]*Lemma 2.29,
$S(W_{\vec{\bullet}};F)=\frac{3}{4}S(H_{T};F)=\frac{1}{4r}$ if $F\sim
rH_{T}$). Perturbing the $\eta_{m}$, we may also assume that
$\eta_{m}\lambda_{m}<\frac{1}{2}$. It follows that $(T,4\eta_{m}\Gamma)$ is
klt outside a finite number of points and $2H_{T}-(K_{T}+4\eta_{m}\Gamma)$ is
ample (recall that $K_{T}=0$ by adjunction).
We now apply an argument from [Z-cpi] to estimate $\mathrm{lct}_{T\cap
C}(T,\Gamma)$. More precisely, let
$\mathcal{J}=\mathcal{J}(T,4\eta_{m}\Gamma)$ be the multiplier ideal, which is
co-supported at a finite number of points by the previous step. By Nadel
vanishing, $H^{1}(T,\mathcal{J}(2H_{T}))=0$, hence
$\ell(\mathcal{O}_{T}/\mathcal{J})\leq h^{0}(T,2H_{T})=2(H_{T}^{2})+2=34$. As
$|T\cap C|\geq 2$, we see that $\ell(\mathcal{O}_{T,x}/\mathcal{J}_{x})\leq
17$ for some $x\in T\cap C$. On the other hand, by [Z-cpi]*Lemmas 3.4 and 5.2,
we have $\mathrm{lct}_{x}(T,\mathfrak{a})>\frac{1}{3}$ for any ideal
$\mathfrak{a}\subseteq\mathcal{O}_{T,x}$ with
$\ell(\mathcal{O}_{T,x}/\mathfrak{a})\leq 21\leq\bar{\sigma}_{2,3}$ (we follow
the notation of [Z-cpi]). Since such ideals $\mathfrak{a}$ can be parametrized
by some scheme of finite type and the log canonical thresholds are
constructible in families, we deduce that there exists some absolute constant
$\alpha>\frac{1}{3}$ such that $\mathrm{lct}_{x}(T,\mathfrak{a})>\alpha$ for
all $\mathfrak{a}\subseteq\mathcal{O}_{T,x}$ with
$\ell(\mathcal{O}_{T,x}/\mathfrak{a})\leq 21$. In particular, we have
$\mathrm{lct}_{x}(T,\mathcal{J})\geq\alpha$. By [Z-cpi]*Theorem 1.6 and Remark
3.1, we then have
$\mathrm{lct}_{x}(T,4\eta_{m}\Gamma)\geq\frac{\alpha}{\alpha+1}$ and therefore
$\mathrm{lct}_{T\cap
C}(T,\Gamma)\geq\mathrm{lct}_{x}(T,\Gamma)\geq\frac{4\alpha\eta_{m}}{\alpha+1}.$
Combined with (5.8) and letting $m\to\infty$, we obtain
$\delta_{T\cap
C}(T;W_{\vec{\bullet}},\mathcal{F})\geq\frac{4\alpha}{\alpha+1}>1.$
By (5.7), this implies that $\delta_{C}(X)>1$ for any curve $C\subseteq X$ of
degree at least $2$.
Hence the only remaining possibility for $C_{X}(v)$ is a line $L$ on $X$. The
rest of the argument is similar to those of Theorem 5.7. By Corollary 5.6, we
have $A_{X}(v)<\frac{1}{2}T(H;v)$ or $A_{X}(v)\leq\frac{1}{2}\eta(H;v)$. In
the former case, there exists some $\mathbb{Q}$-divisor $0\leq
D\sim_{\mathbb{Q}}H$ such that $\mathrm{lct}_{L}(X;D)<\frac{1}{2}$. Since $X$
has Picard number one, we may further assume that $D$ is irreducible. Note
that $\mathrm{mult}_{L}D>2$, otherwise $(X,\frac{1}{2}D)$ is log canonical
[Kol-mmp]*Theorem 2.29. By [Fano-book]*Theorems 4.3.3(vii), Remark 4.3.4 and
4.3.7(iii), $2D$ is integral and $\mathrm{mult}_{L}D=\frac{5}{2}$. Moreover,
if $\rho\colon\widetilde{X}\to X$ is the ordinary blowup of the line $L$ with
exceptional divisor $F$ and $D^{\prime}$ is the strict transform of $D$, then
the scheme theoretic intersection $G=2D^{\prime}\cap F$ contains a non-
hyperelliptic curve $\Gamma$, $\rho|_{G}\colon G\to L$ is finite of degree $5$
and
$\rho^{*}(K_{X}+\frac{1}{2}D)=K_{\widetilde{X}}+\frac{1}{4}F+\frac{1}{2}D^{\prime}.$
It follows that $\rho|_{\Gamma}\colon\Gamma\to L$ has degree at least $3$ and
each component of $G\setminus\Gamma$ is different from $\Gamma$ and has
multiplicity at most $2$. From here we deduce that every component of
$\frac{1}{2}D^{\prime}\cap F$ has multiplicity $\leq\frac{1}{2}$ and by
inversion of adjunction we see that
$(\widetilde{X},\frac{1}{4}F+\frac{1}{2}D^{\prime})$ is klt over the generic
point of $L$ and the same is true for $(X,\frac{1}{2}D)$. But this is a
contradiction as $\mathrm{lct}_{L}(X;D)<\frac{1}{2}$. Therefore we must have
$A_{X}(v)\leq\frac{1}{2}\eta(H;v)$. For any $0<\varepsilon\ll 1$ we can find
effective $\mathbb{Q}$-divisors $D_{1},D_{2}\sim_{\mathbb{Q}}H$ without common
components such that $v(D_{i})>(1-\varepsilon)\eta(H;v)$. Let $m$ be a
sufficiently divisible integer and let $Z\subseteq X$ be the complete
intersection subscheme $mD_{1}\cap mD_{2}$. Note that
$\mathrm{mult}_{L}Z\leq\deg Z=16m^{2}$ and by [Fano-book]*Theorem 4.3.3(vii),
we have $\mathrm{mult}_{L}D_{i}<\frac{5}{2}<4$ for some $i=1,2$. Hence
$\mathcal{I}_{Z}\not\subseteq\mathcal{I}_{L}^{\frac{5}{2}m}$ and by [Z-bsr-
loc-closed]*Lemma 2.6, applied at the generic point of $L$, we see that there
exists some absolute constant $\varepsilon_{1}>0$, which does not depend on
$D_{i}$ and $\varepsilon$, such that
$\mathrm{lct}_{L}(X;\mathcal{I}_{Z})>\frac{1+\varepsilon_{1}}{2m}$. It then
follows as in the proof of Theorem 5.7 that
$A_{X}(v)>\frac{1+\varepsilon_{1}}{2m}v(\mathcal{I}_{Z})>\frac{1}{2}\eta(H;v)$,
a contradiction.
Therefore we conclude that the inequality $\delta_{x}(H)\geq 1$ is always
strict. In other words, $X$ is uniformly K-stable. ∎
###### Proof of Theorem 5.1.
This is just a combination of Theorems 5.7 and 5.12. ∎
# Automatic Cerebral Vessel Extraction in TOF-MRA Using Deep Learning
V. de Vos (Eindhoven University of Technology, The Netherlands); K.M. Timmins,
I.C. van der Schaaf, Y. Ruigrok, B.K. Velthuis, H.J. Kuijf (University Medical
Center Utrecht, The Netherlands)
###### Abstract
Deep learning approaches may help radiologists in the early diagnosis and
timely treatment of cerebrovascular diseases. Accurate cerebral vessel
segmentation of Time-of-Flight Magnetic Resonance Angiographs (TOF-MRAs) is an
essential step in this process. This study investigates deep learning
approaches for automatic, fast and accurate cerebrovascular segmentation for
TOF-MRAs.
The performance of several data augmentation and selection methods for
training a 2D and 3D U-Net for vessel segmentation was investigated in five
experiments: a) without augmentation, b) Gaussian blur, c) rotation and
flipping, d) Gaussian blur, rotation and flipping and e) different input patch
sizes. All experiments were performed by patch-training both a 2D and 3D U-Net
and predicted on a test set of MRAs. Ground truth was manually defined using
an interactive threshold and region growing method. The performance was
evaluated using the Dice Similarity Coefficient (DSC), Modified Hausdorff
Distance and Volumetric Similarity, between the predicted images and the
interactively defined ground truth.
The segmentation performance of all trained networks on the test set was found
to be good, with DSC scores ranging from 0.72 to 0.83. Both the 2D and 3D
U-Net had the best segmentation performance with Gaussian blur, rotation and
flipping compared to other experiments without augmentation or only one of
those augmentation techniques. Additionally, training on larger patches or
slices gave optimal segmentation results.
In conclusion, vessel segmentation can be optimally performed on TOF-MRAs
using a trained 3D U-Net on larger patches, where data augmentation including
Gaussian blur, rotation and flipping was performed on the training data.
###### keywords:
Cerebrovascular diseases, Magnetic Resonance Angiography (MRA), segmentation,
deep learning, U-Net
## 1 Introduction
Stroke, including ischemic and hemorrhagic stroke and aneurysmal subarachnoid
hemorrhage, is a major cause of death and disability worldwide with more than
six million deaths in 2015 [1]. In some cases it can be caused by
abnormalities of the intracranial arteries including stenosis, intracranial
aneurysms and other vascular malformations. The incidence is rising further as
the population ages [2, 1].
For an early diagnosis and timely treatment of various cerebrovascular
diseases, detailed information about the vasculature might aid a radiologist
in decision making. This information could be obtained from cerebrovascular
segmentations, where the blood vessels are extracted from the images. This
will allow for quantitative analysis of the vasculature, as well as better
(3D) visualization [1, 3, 4]. Currently, use of such segmentations is not
common practice, because this often requires manual segmentation; a difficult
and time-consuming procedure, which is prone to inter- and intra-rater
variability [1, 5]. Automatic vessel extraction methods could overcome this
issue; such methods include Markov random fields [6], multi-scale filtering
[3], deformable models [7], hybrid methods [8] and deep learning [1, 5]. Such
methods create a 3D vascular model for every patient, which can be useful to
find vessel abnormalities [4]. In a study by Gan et al. (2005), an automatic
vessel segmentation method based on maximum intensity projections (MIP) was
presented. This method compiled the vessel segmentation iteratively by using
the segmentation of the MIP images along a fixed direction. The MIP images
were segmented with a finite mixture model (FMM) and expectation maximization
(EM) algorithm. Once the images were segmented along the individual axes, the
results were combined [4]. In addition, a study by Phellan et al. (2017)
proposed a deep Convolutional Neural Network (CNN) to automatically segment
the vessels in TOF-MRA images of healthy subjects. Experiments were performed
with a varying number of images for training the CNN and cross validation was
used to test the generalization of the model. The ground truth was obtained
from manually annotated image patches extracted in the axial, coronal and sagittal
directions [1].
This study provides an automatic vessel segmentation method by training and
evaluating a CNN with U-Net architecture [9], which is one of the most
promising deep learning networks for segmentation tasks. To evaluate the
performance of this network, different experiments were performed to compare a
2D and 3D U-Net architecture with several training data augmentation and
selection methods.
## 2 Materials and Methods
### 2.1 Dataset
The data used in this study included 69 patients with unruptured aneurysms
scanned in the University Medical Center Utrecht, the Netherlands. All
patients underwent a 3D TOF-MRA scan in the period between 2004 and 2012 and
were scanned twice: a baseline scan and a follow-up scan. An example of one
slice of a TOF-MRA is shown in Figure 1. Overall, the slice thickness ranged
from 0.4 to 0.7 mm and the in-plane voxel size ranged from 0.195x0.195 to
0.586x0.586 mm.
Figure 1: Example slice in the transverse plane of a TOF-MRA.
### 2.2 Pre-processing
Before segmenting the vascular structure, the images in the dataset, as
described in section 2.1, were preprocessed by using N4 bias field
inhomogeneity correction [10, 11] and Z-score normalization [12].
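The Z-score step is simple to sketch. N4 bias field correction is a separate iterative algorithm, typically run through an external tool such as SimpleITK, and is omitted here; the function name below is our own:

```python
import numpy as np

def zscore_normalize(volume: np.ndarray) -> np.ndarray:
    """Rescale a volume to zero mean and unit standard deviation."""
    volume = volume.astype(np.float64)
    return (volume - volume.mean()) / volume.std()

# Synthetic intensity volume standing in for a bias-corrected TOF-MRA
rng = np.random.default_rng(0)
vol = rng.normal(loc=100.0, scale=20.0, size=(8, 8, 8))
norm = zscore_normalize(vol)
```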
The dataset did not contain delineations of the brain vasculature. To acquire
the labelled ground truth data for vessel segmentation, interactive vessel
segmentation was performed. First, the image was interactively thresholded by
using histogram-based thresholding in which the user can choose the image
specific intensity percentage at which the threshold was determined. The
threshold of all images was chosen between 95% and 99% of the maximum image
intensity. The resulting thresholded image was used to define seed points for
region growing. The resulting labels were manually checked for accuracy and
corrected as required. The interactive vessel segmentation was performed in
MevisLab (version 3.2) [13].
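The interactive procedure itself was carried out in MeVisLab; a minimal non-interactive sketch of the two steps (percentile thresholding to obtain seeds, then intensity-based region growing) could look like the following, where the function names and the 6-connectivity choice are our own assumptions:

```python
from collections import deque
import numpy as np

def region_grow(volume, seeds, low):
    """Grow 6-connected regions from seed voxels, adding any neighbour
    whose intensity is at least `low`."""
    mask = np.zeros(volume.shape, dtype=bool)
    for s in seeds:
        mask[s] = True
    queue = deque(seeds)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            inside = all(0 <= n[i] < volume.shape[i] for i in range(3))
            if inside and not mask[n] and volume[n] >= low:
                mask[n] = True
                queue.append(n)
    return mask

# Synthetic "vessel": a bright line through a dark volume
vol = np.zeros((5, 5, 5))
vol[2, 2, :] = 1.0
# Seeds from a high-percentile threshold (the paper thresholds at
# 95-99% of the maximum image intensity, chosen interactively)
seeds = list(zip(*np.where(vol >= 0.95 * vol.max())))
vessel_mask = region_grow(vol, seeds, low=0.5 * vol.max())
```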
### 2.3 Network
Both a 2D and 3D fully convolutional neural network with U-Net architecture
[9] were trained on randomly selected and augmented patches from TOF-MRA
images. For the 2D network, the input patches had a size of 64x64 voxels and
for the 3D network a size of 16x16x16 voxels in order to train on the same
number of voxels per patch in 2D and 3D. The same patches were used for all
the experiments.
A balanced number of patches from vessel (80%) and non-vessel (20%) regions
were used for training. The selection of patches was based on the center voxel
of each patch. When this voxel was labelled as vessel in the ground truth
image, the patch was categorized as a patch containing vessels and otherwise
it was categorized as non-vessel patch.
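As a sketch of this sampling rule: the 80/20 ratio and the centre-voxel criterion come from the paper, while the function and its parameter names are our own illustration:

```python
import numpy as np

def sample_patches(image, label, n_patches, patch=16, vessel_frac=0.8, seed=0):
    """Sample cubic patches; a patch counts as 'vessel' when the ground-truth
    label at its centre voxel is 1 (the criterion used in the paper)."""
    rng = np.random.default_rng(seed)
    half = patch // 2
    # Candidate centres, restricted so patches stay inside the volume
    valid = np.zeros(label.shape, dtype=bool)
    valid[half:-half, half:-half, half:-half] = True
    vessel_idx = np.argwhere(valid & (label == 1))
    bg_idx = np.argwhere(valid & (label == 0))
    n_vessel = int(round(vessel_frac * n_patches))
    picks = [(vessel_idx, n_vessel), (bg_idx, n_patches - n_vessel)]
    out_x, out_y = [], []
    for idx, n in picks:
        for c in idx[rng.choice(len(idx), size=n, replace=True)]:
            z, y, x = c
            sl = np.s_[z - half:z + half, y - half:y + half, x - half:x + half]
            out_x.append(image[sl])
            out_y.append(label[sl])
    return np.stack(out_x), np.stack(out_y)

# Toy volume with a small "vessel" region
img = np.random.default_rng(1).random((32, 32, 32))
lab = np.zeros((32, 32, 32), dtype=np.uint8)
lab[14:18, 14:18, :] = 1
X, Y = sample_patches(img, lab, n_patches=10)
```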
Finally, both the 2D and 3D network were optimized using a dice loss function,
Adam optimizer and a learning rate of $1\times 10^{-4}$.
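The networks themselves were trained inside a deep learning framework, which is omitted here; as an illustration of the loss only, a framework-agnostic NumPy version of the soft Dice loss $1-\frac{2\sum_i p_i t_i}{\sum_i p_i+\sum_i t_i}$ is:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps: 1 - 2|P.T| / (|P| + |T|)."""
    pred = pred.ravel().astype(np.float64)
    target = target.ravel().astype(np.float64)
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

t = np.zeros((4, 4, 4))
t[1:3, 1:3, 1:3] = 1.0
perfect = dice_loss(t, t)                 # near 0: full overlap
empty = dice_loss(np.zeros_like(t), t)    # near 1: no overlap
```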
### 2.4 Experiments
For both the 2D and 3D architectures (190,396 trainable parameters), five
experiments were compared. In all experiments, the same MRAs were used for
training (n = 84, 64%), validation (n = 21, 16%) and test (n = 26, 20%). The
first experiment, (a), was performed without applying any augmentation
technique to the training data. Next, three experiments were performed by
training the networks with the patches with different augmentation techniques:
b) Gaussian blurring, c) rotation and flipping and d) both Gaussian blurring
and rotation and flipping. The fifth experiment, (e), was performed by
training the networks with full slices instead of patches for 2D and training
the 3D network with larger patches (64x64x64 voxels) with all augmentation
techniques mentioned before.
The resulting trained networks were used to segment the blood vessels in the
pre-processed test set of MRAs. Voxels with a probability larger than 0.7 were
assumed to be inside a vessel.
Post-processing was performed using connected component analysis in which
regions with less than 200 voxels were eliminated from the segmentation.
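The two post-processing steps (thresholding at 0.7 and dropping small connected components) can be sketched as follows; a plain BFS over 6-connected neighbours is used here only to stay self-contained, where a library routine such as `scipy.ndimage.label` would normally be preferred:

```python
from collections import deque
import numpy as np

def remove_small_components(mask, min_voxels=200):
    """Keep only 6-connected components with at least `min_voxels` voxels."""
    mask = mask.astype(bool)
    visited = np.zeros(mask.shape, dtype=bool)
    out = np.zeros(mask.shape, dtype=bool)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.where(mask)):
        if visited[start]:
            continue
        comp, queue = [start], deque([start])
        visited[start] = True
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not visited[n]:
                    visited[n] = True
                    comp.append(n)
                    queue.append(n)
        if len(comp) >= min_voxels:
            for v in comp:
                out[v] = True
    return out

# Probability map -> binary mask at the 0.7 threshold used in the paper
prob = np.zeros((10, 10, 10))
prob[0:5, 0:5, 0:5] = 0.9   # large component, 125 voxels
prob[8, 8, 8] = 0.9          # isolated voxel, removed below
mask = prob > 0.7
cleaned = remove_small_components(mask, min_voxels=100)
```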
### 2.5 Evaluation metrics
To evaluate and compare the performances of the different experiments, the
Dice Similarity Coefficient (DSC) [14, 15], Modified Hausdorff Distance (MHD)
[14, 15] and Volumetric Similarity (VS) [15] between the predicted
segmentation and the generated ground truth segmentation for each MRA were
determined.
The DSC was used to evaluate the overlap between the ground truth and
predicted segmentation. However, the DSC is limited for the evaluation of the
vessel segmentations as vessels are narrow and elongated. For this reason,
segmentation errors can quickly lead to a loss of overlap. Therefore, a
distance metric was also used for evaluation [5]. A commonly used distance
metric is the Hausdorff Distance (HD). However, this measure is very sensitive
to outliers, which are common in medical segmentations. For this reason, the
Modified Hausdorff Distance (MHD) was used, which is not based on the maximum
distance between points but on a defined percentile (95%) of the distance
between boundary points [14, 15]. Finally, the VS was used to compare the
segmented volumes without taking into account the location or overlap of the
segmentations.
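The overlap and volume metrics are straightforward to compute on binary masks; a sketch of DSC and VS follows, where VS uses the common definition $1-|V_a-V_b|/(V_a+V_b)$. The MHD additionally requires boundary extraction and distance transforms (e.g. via `scipy.ndimage`) and is omitted:

```python
import numpy as np

def dsc(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volumetric_similarity(a, b):
    """VS = 1 - |V_a - V_b| / (V_a + V_b): compares volumes only,
    ignoring location and overlap."""
    va, vb = int(a.sum()), int(b.sum())
    return 1.0 - abs(va - vb) / (va + vb)

# Two equal-volume cubes, shifted by one voxel in each direction
gt = np.zeros((6, 6, 6), dtype=bool); gt[1:4, 1:4, 1:4] = True  # 27 voxels
pr = np.zeros((6, 6, 6), dtype=bool); pr[2:5, 2:5, 2:5] = True  # 27 voxels
```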
A Wilcoxon signed-rank test was performed to compare the results achieved by
the different experiments. This test was performed with the goal of
determining whether there is a difference between the evaluation metrics of
the experiments [16]. Python version 3.7.6 with the SciPy library was used to
perform this test.
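With per-scan metric values for two experiments as paired samples, the test is a single call to SciPy's `scipy.stats.wilcoxon`; the DSC values below are made up for illustration:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired DSC values on the same test scans for
# experiment (a) (no augmentation) and experiment (d) (full augmentation)
dsc_a = np.array([0.70, 0.72, 0.68, 0.75, 0.71, 0.69, 0.73, 0.70])
dsc_d = np.array([0.75, 0.78, 0.75, 0.83, 0.80, 0.79, 0.84, 0.82])

# Two-sided Wilcoxon signed-rank test on the paired differences;
# all differences here are positive, so the p-value is small
stat, p_value = wilcoxon(dsc_a, dsc_d)
significant = p_value < 0.05  # the threshold used in the paper
```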
## 3 Results
Tables 1(a) and 1(b) show the mean numerical results expressing the
performance of the experiments in 2D and 3D, respectively. It can be
observed that the segmentation performance of all trained networks in both 2D
and 3D was good with all mean DSC scores larger than 0.70.
Table 1: Segmentation metrics for the test set for the proposed augmentation
techniques and the use of patches or slices for the training of the U-Net.
Values are provided as the mean $\pm$ the standard deviation. The size in
voxels of the patches used for the different experiments are indicated between
the brackets. (a) 2D U-Net, (b) 3D U-Net.
(a) 2D U-Net

| | Input | Augmentation | DSC | MHD [mm] | VS |
|---|---|---|---|---|---|
| a | Patches (64x64) | None | 0.74 $\pm$ 0.17 | 47.6 $\pm$ 40.4 | 0.74 $\pm$ 0.18 |
| b | Patches (64x64) | Gaussian blur | 0.81 $\pm$ 0.12 | 41.6 $\pm$ 42.5 | 0.83 $\pm$ 0.13 |
| c | Patches (64x64) | Rotation and flipping | 0.80 $\pm$ 0.14 | 35.8 $\pm$ 39.0 | 0.84 $\pm$ 0.16 |
| d | Patches (64x64) | Gaussian blur, rotation and flipping | 0.82 $\pm$ 0.15 | 34.1 $\pm$ 42.5 | 0.85 $\pm$ 0.17 |
| e | Slices | Gaussian blur, rotation and flipping | 0.83 $\pm$ 0.14 | 28.0 $\pm$ 37.0 | 0.85 $\pm$ 0.16 |

(b) 3D U-Net

| | Input | Augmentation | DSC | MHD [mm] | VS |
|---|---|---|---|---|---|
| a | Patches (16x16x16) | None | 0.72 $\pm$ 0.15 | 81.3 $\pm$ 57.0 | 0.78 $\pm$ 0.17 |
| b | Patches (16x16x16) | Gaussian blur | 0.76 $\pm$ 0.15 | 27.5 $\pm$ 30.3 | 0.77 $\pm$ 0.16 |
| c | Patches (16x16x16) | Rotation and flipping | 0.79 $\pm$ 0.12 | 33.7 $\pm$ 33.1 | 0.85 $\pm$ 0.15 |
| d | Patches (16x16x16) | Gaussian blur, rotation and flipping | 0.81 $\pm$ 0.12 | 36.6 $\pm$ 37.0 | 0.85 $\pm$ 0.15 |
| e | Patches (64x64x64) | Gaussian blur, rotation and flipping | 0.83 $\pm$ 0.11 | 29.9 $\pm$ 30.9 | 0.86 $\pm$ 0.12 |
In addition, Figure 2 shows the boxplots of the used evaluation metrics of the
2D U-Net segmentation results compared to the ground truth. According to this
figure and a Wilcoxon signed-rank test, it is observed that the performance of
the 2D U-Net improved by augmenting the training data (experiments (b)-(e))
compared to no augmentation (experiment (a)). This was observed from the DSC
and VS of experiments (b)-(e), which were significantly higher compared to
experiment (a) (p$<$0.05).
Figure 3 shows the boxplots of the DSC, MHD and VS computed from the 3D U-Net
results. From this figure and a Wilcoxon signed-rank test it is also observed
that the performance of the 3D U-Net was improved by augmenting the training
data (experiments (b)-(e)) compared to no augmentation (experiment (a)). This
can be observed from the DSC of experiments (b) (p=0.002) and (e)
($p=3.43\times 10^{-8}$), which were significantly higher compared to experiment
(a). Additionally, the MHD of experiment (a) was significantly higher compared
to all the other experiments (p$<$0.05).
Figure 2: Boxplots of the vessel segmentation results obtained by training a
2D U-Net. The evaluation was performed by five experiments: A) without
augmentation; B) augmented training data with Gaussian blurring; C) augmented
training data with rotation and flipping; D) augmented training data with
Gaussian blurring, rotation and flipping and E) trained on slices with
augmented training data with Gaussian blurring, rotation and flipping.
Figure 3: Boxplots of the vessel segmentation results obtained by training a
3D U-Net. The evaluation was performed by five experiments: A) without
augmentation; B) augmented training data with Gaussian blurring; C) augmented
training data with rotation and flipping; D) augmented training data with
Gaussian blurring, rotation and flipping and E) trained on larger patches
(64x64x64) with augmented training data with Gaussian blurring, rotation and
flipping.
Figures 4 and 5 display the segmentation results of experiment (e), the
best-performing method. From Figure 4, no important differences were
visually observed between the ground truth and automatically obtained
segmentation result, as confirmed by the quantitative analysis. Only a small
oversegmentation in the automatic segmentation was observed of a posterior
cortical vein (arrow 1). In addition, Figure 5 showed an undersegmentation in
the automatic segmentation of the left posterior cerebral artery (arrow 2).
(a)
(b)
Figure 4: Example segmentation for one slice in the transverse plane of a TOF-
MRA. (a) Ground truth segmentation. (b) Automatic segmentation produced by
the 3D U-Net trained on patches of size 64x64x64 voxels with Gaussian blur,
rotation and flipping. Arrow 1 indicates a small oversegmentation in the
automatic segmentation.
(a)
(b)
Figure 5: Example segmentation for one slice in the transverse plane of a TOF-
MRA. (a) Ground truth segmentation. (b) Automatic segmentation produced by
the 3D U-Net trained on patches of size 64x64x64 voxels with Gaussian blur,
rotation and flipping. Arrow 2 indicates an undersegmentation in the automatic
segmentation.
The optimum method for cerebrovascular segmentation was found to be the 3D
U-Net trained on patches of size 64x64x64 voxels with all augmentation
procedures, which resulted in a DSC of 0.83, MHD of 29.9 mm and VS of 0.86.
## 4 Discussion
Comparing the performance of the proposed deep learning experiments for vessel
segmentation yielded some interesting results. This study showed that automatic
cerebrovascular segmentation can be performed accurately using a CNN with a
U-Net architecture, and that the performance of the U-Net can be improved by
augmenting the training data. The optimal network for vessel segmentation was
the 3D U-Net trained on patches of size 64x64x64 voxels with training data
augmented by Gaussian blur, rotation and flipping.
As described in section 3, all experiments performed with the proposed CNN
with U-Net architecture resulted in good DSC scores ranging from 0.72 to 0.83.
In general, this overlap measure was higher than the DSC of 0.74 reported by
Chen et al. (2017), who used a 3D convolutional autoencoder for vessel
segmentation [17]. Another CNN for vessel segmentation in TOF-MRA, proposed by
Phellan et al. (2017), resulted in DSCs ranging from 0.764 to 0.786 depending
on the number of images used for training [1]. In contrast, the U-Net framework
of Livne et al. (2019) achieved a higher overlap, with a mean DSC of 0.88 [5].
This could be caused by the larger patches used in that study, which found an
optimal patch size of 96x96 voxels [5]. In our study, the experiments trained
on slices or larger patches likewise gave the best results, as described in
section 3.
In general, the 2D U-Net performed better than the 3D U-Net for
cerebrovascular segmentation, except in experiment (e) with training on slices
or larger patches. This may be caused by the complicated 3D shape of the
vessels, which makes it more difficult for the network to learn.
As described in section 2, we performed experiments with Gaussian blur,
rotation and flipping. The Gaussian blur could help the network learn more
robust features by varying the contrast between the vessels and the surrounding
tissues, while rotation and flipping counteract positional biases. Combining
these augmentation techniques yields more diverse training data and thereby
better segmentation accuracy. This was also observed in section 3, where
experiments (d) and (e) gave the best results for both the 2D and 3D U-Net.
That section also showed that both the 2D and 3D U-Net performances improved
with data augmentation compared to no augmentation, demonstrating the
importance of data augmentation for increasing the diversity of the data
without actually collecting new data.
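A minimal sketch of the three augmentations on a 2D slice stored as nested lists. The 3-tap binomial kernel is a cheap stand-in for a proper Gaussian filter, and the random policy is an illustrative assumption, not the study's exact pipeline:

```python
import random

def flip_lr(img):
    """Mirror a 2D slice left-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a 2D slice 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

def blur_row(row, k=(0.25, 0.5, 0.25)):
    """3-tap binomial kernel: a rough 1D stand-in for Gaussian blurring."""
    padded = [row[0]] + list(row) + [row[-1]]  # replicate-pad the edges
    return [k[0] * padded[i] + k[1] * padded[i + 1] + k[2] * padded[i + 2]
            for i in range(len(row))]

def augment(img):
    """Randomly flip and rotate one training slice (illustrative policy)."""
    if random.random() < 0.5:
        img = flip_lr(img)
    for _ in range(random.randrange(4)):
        img = rot90(img)
    return img
```

In 3D the same idea applies per axis; production pipelines would use a true Gaussian kernel with a chosen sigma rather than this fixed 3-tap approximation.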
Finally, for both the 2D and 3D U-Net, the best results were obtained by
training on slices or larger patches (64x64x64 voxels) (experiment (e)). This
was also reported in a study by Livne et al. (2019) [5] and may be due to the
larger patches providing a better representation of the small vessels in the
full brain MRA and thereby improving the learning process of the vessel
locations in the brain.
### 4.1 Advantages and limitations
The proposed vessel segmentation experiments have both advantages and
limitations.
Firstly, the computation time of the algorithm is important in clinical use.
As described in section 1, this is one of the main reasons to provide an
automatic vessel segmentation method. The trained U-Net can provide vessel
segmentations on the order of seconds per image.
Secondly, as described in section 2, the same MRAs were used for both the 2D
and 3D vessel segmentation experiments. In addition, the patches used for
training were of size 64x64 voxels for the 2D network and 16x16x16 voxels for
the 3D network, so that both networks were trained on the same number of voxels
per patch. These factors make the experiments performed in this study easier to
compare.
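A quick check of the voxel-count parity described above: 64x64 patches in 2D and 16x16x16 patches in 3D both contain 4096 voxels, while the 64x64x64 patches of experiment (e) contain 64 times more:

```python
import math

# 2D patches of 64x64 voxels and 3D patches of 16x16x16 voxels match exactly.
patch_2d, patch_3d = (64, 64), (16, 16, 16)
assert math.prod(patch_2d) == math.prod(patch_3d) == 4096

# Experiment (e) used 64x64x64 patches, i.e. 64 times more voxels per patch.
print(math.prod((64, 64, 64)) // math.prod(patch_3d))  # -> 64
```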
One main limitation of deep learning is its dependency on training data, which
should represent the unlabelled test data well enough to provide good results.
In this study, the dataset consisted of patients with unruptured aneurysms. To
obtain a more representative dataset for vessel segmentation, healthy subjects
and patients with other pathologies, such as vessels containing stenoses,
occlusions or infarcts, could be included.
Another limitation of the vessel segmentation is the lack of a manually
labeled vessel imaging dataset. Instead, an interactive vessel segmentation
method (described in section 2.2) was used to generate the ground truth labels.
Manual annotations are labour- and time-intensive, and this study showed that
it is possible to produce a robust vessel segmentation without them. However,
some small vessels were missed by the interactive ground truth generation
technique, which was the main cause of the relatively high MHD results
described in section 3. Further investigation into optimising the ground truth
segmentation is warranted.
Finally, as described in section 3, the best vessel segmentation results were
obtained by training a U-Net on slices in 2D or larger patches in 3D. However,
the largest patches in 3D were of size 64x64x64 voxels, as the patch size was
limited by memory constraints. This potentially reduced the performance of the
3D U-Net in cases where more context might be needed.
### 4.2 Future work
As described in section 2, the data used for the vessel segmentation was
randomly split into a training, validation and testing set. Cross-validation
could be performed to ensure the robustness and generalization of the trained
network.
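The proposed cross-validation could be sketched as follows. The choice k = 5 and the fixed seed are illustrative, since the study itself used a single random train/validation/test split:

```python
import random

def kfold_splits(ids, k=5, seed=0):
    """Yield (train, test) ID lists for k-fold cross-validation.

    Shuffles once with a fixed seed, then rotates which fold is held out,
    so every case is tested exactly once across the k folds.
    """
    ids = list(ids)
    random.Random(seed).shuffle(ids)
    folds = [ids[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test
```

Reporting the mean and spread of DSC, MHD and VS across the k held-out folds would give a more robust performance estimate than a single split.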
As this was a preliminary study, the test set used for the evaluation of the
proposed vessel segmentation algorithm only contained 26 images, which is
relatively small. Consequently, outlier images could have a large influence on
the results. Future work could focus on using a larger dataset to evaluate the
performance of the proposed segmentation method. For example, the full dataset
provided by the Aneurysm Detection And segMentation (ADAM) challenge
containing 113 sets of brain MR images for training and 142 sets for testing
could be used [18].
Furthermore, future work could improve the ground truth used for deep
learning. With multiple medical experts, a systematic quantitative rating could
be performed that accounts for intra- and inter-rater variability and improves
the ground truth segmentations.
In this study, only vessel segmentations generated by the U-Net architecture
were evaluated. The U-Net architecture was chosen because of its prevalent and
successful use in previous medical image segmentation problems. In future
work, other network architectures for vessel segmentation could be
investigated. However, due to the nature of this segmentation problem, no
large improvements with respect to the U-Net performance are expected. In
addition, a study of Livne et al. (2019) compared the performance of the U-Net
to the performance of a U-Net with half of the convolutional layers. This
resulted in comparable segmentation results while reducing the training time
[5]. Further research could evaluate the half U-Net, or a U-Net with fewer
parameters, for vessel segmentation and compare its performance to that of the
original U-Net described in our study.
## 5 Conclusion
In conclusion, our study found that a 3D U-Net trained on patches of size
64x64x64 voxels augmented using Gaussian blur, rotation and flipping performs
optimally for vessel segmentation from TOF-MRAs.
## References
* [1] R. Phellan, A. Peixinho, A. Falcão, and N. Forkert, “Vascular segmentation in tof mra images of the brain using a deep convolutional neural network,” Lecture Notes in Computer Science 10552, pp. 39–46, 2017.
* [2] M. Katan and A. Luft, “Global burden of stroke,” Seminars in Neurology 38(2), pp. 208–211, 2018.
* [3] A. Frangi, W. Niessen, K. Vincken, and M. Viergever, “Multiscale vessel enhancement filtering,” Lecture Notes in Computer Science 1496, pp. 130–137, 1998.
* [4] R. Gan, W. Wong, and A. Chung, “Statistical cerebrovascular segmentation in three-dimensional rotational angiography based on maximum intensity projections,” Medical Physics 32(9), pp. 3017–3028, 2005.
* [5] M. Livne, J. Rieger, O. Aydin, A. Taha, E. Akay, T. Kossen, J. Sobesky, J. Kelleher, K. Hildebrand, D. Frey, and V. Madai, “A u-net deep learning framework for high performance vessel segmentation in patients with cerebrovascular disease,” Frontiers in Neuroscience 13(97), 2019.
* [6] K. Fang, D. Wang, L. Lui, S. Zhou, W. Chu, A. Ahuja, and P. Heng, “3d model-based method for vessel segmentation in tof-mra,” Proceedings of the 2011 International Conference on Machine Learning and Cybernetics, Guilin, pp. 1607–1611, 2011.
* [7] T. McInerney and D. Terzopoulos, “Medical image segmentation using topologically adaptable surface,” Lecture Notes in Computer Science 1205, pp. 23–32, 1997.
* [8] T. Chen and D. Metaxas, “Gibbs prior models, marching cubes, and deformable models: A hybrid framework for 3d medical image segmentation,” Lecture Notes in Computer Science 2879, pp. 703–710, 2003.
* [9] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” Medical Image Computing and Computer-Assisted Intervention (MICCAI) 9351, pp. 234–241, 2015.
* [10] N. Tustison, B. Avants, P. Cook, Y. Zheng, A. Egan, P. Yushkevich, and J. Gee, “N4itk: Improved n3 bias correction,” IEEE Transactions on Medical Imaging 29(6), pp. 1310–1320, 2010.
* [11] “Advanced normalization tools (ants).” http://stnava.github.io/ANTs/. Accessed: 2020-10-23.
* [12] B. Ellingson, T. Zaw, T. Cloughesy, K. Naeini, S. Lalezari, S. Mong, A. Lai, P. Nghiemphu, and W. Pope, “Comparison between intensity normalization techniques for dynamic susceptibility contrast (dsc)-mri estimates of cerebral blood volume (cbv) in human gliomas,” Journal of Magnetic Resonance Imaging (JMRI) 35(6), pp. 1472–1477, 2012.
* [13] F. Ritter et al., “Medical image analysis,” IEEE Pulse 2(6), pp. 60–70, 2011.
* [14] K. Toennies, Guide to Medical Image Analysis - Advances in Computer Vision and Pattern Recognition, Springer, 2012. pp. 418.
* [15] A. Taha and A. Hanbury, “Metrics for evaluating 3d medical image segmentation: analysis, selection, and tool,” BMC Medical Imaging 15(29), 2015.
* [16] E. Whitley and J. Ball, “Statistics review 6: Nonparametric methods,” Critical Care 6(6), pp. 509–513, 2002.
* [17] L. Chen, Y. Xie, J. Sun, N. Balu, M. Mossa-Basha, K. Pimentel, T. Hatsukami, J. Hwang, and C. Yuan, “3d intracranial artery segmentation using a convolutional autoencoder,” IEEE International Conference on Bioinformatics and Biomedicine (BIBM) , pp. 704–707, 2017.
* [18] “Aneurysm detection and segmentation (adam) challenge.” http://adam.isi.uu.nl. Accessed: 2020-10-20.
a) Theoretical Physics Department, CERN, 1 Esplanade des Particules, Geneva 23, CH-1211, Switzerland
b) Institute of Physics, Laboratory for Particle Physics and Cosmology, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
c) Instituut-Lorentz, Leiden University, Niels Bohrweg 2, 2333 CA Leiden, The Netherlands
d) Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, DK-2010, Copenhagen, Denmark
e) Department of Physics, Taras Shevchenko National University of Kyiv, 64 Volodymyrs’ka str., Kyiv 01601, Ukraine
# An allowed window for heavy neutral leptons below the kaon mass
Kyrylo Bondarenko c Alexey Boyarsky b Juraj Klaric c,e Oleksii Mikulenko d
Oleg Ruchayskiy d Vsevolod Syvolap b and Inar Timiryasov
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
The extension of the Standard Model with two gauge-singlet Majorana fermions
can simultaneously explain two beyond-the-Standard-Model phenomena: neutrino
masses and oscillations, as well as the origin of the matter-antimatter
asymmetry in the Universe. The parameters of such a model are constrained by
the neutrino oscillation data, direct accelerator searches, big bang
nucleosynthesis, and the requirement of successful baryogenesis. We show that
their combination still leaves an allowed region in the parameter space below
the kaon mass. This region can be probed by further searches at the NA62,
DUNE, or SHiP experiments.
## 1 Introduction
The observed light neutrino masses and the baryon asymmetry of the Universe
(BAU) remain among the strongest hints of physics beyond the Standard Model
(SM). One of the simplest answers to both questions could be the
existence of gauge-singlet Majorana fermions — also known as right-handed
neutrinos, sterile neutrinos, or heavy neutral leptons (HNLs). HNLs can
provide neutrino masses via the seesaw mechanism Minkowski:1977sc ;
Yanagida:1979as ; GellMann:1980vs ; Mohapatra:1979ia ; Schechter:1980gr ;
Schechter:1981cv . Right-handed neutrinos can also be responsible for the
generation of the BAU through the process known as _leptogenesis_ , see, e.g.
Davidson:2008bu ; Canetti:2012zc ; Bodeker:2020ghk for reviews and
Klaric:2020lov ; Klaric:2021cpi for a recent update. The neutrino oscillation
data requires at least two right-handed neutrinos. It turns out that the same
HNLs with masses in MeV–GeV range can successfully generate the BAU
Asaka:2005pn ; Canetti:2010aw ; Klaric:2020lov .¹This model can be viewed
as a part of the $\nu$MSM, where the third singlet fermion plays the role of a
dark matter candidate Asaka:2005an ; Asaka:2005pn . Dark Matter in the
$\nu$MSM can be produced resonantly Shi:1998km ; Shaposhnikov:2008pf ;
Laine:2008pg ; Canetti:2012kh ; Ghiglieri:2020ulj , which requires a large
lepton asymmetry (see, e.g., the recent work Ghiglieri:2020ulj and references
therein). Alternatively, it can be produced during preheating Bezrukov:2011sz
; Shaposhnikov:2020aen .
As a result of the seesaw mechanism, the SM neutrinos (flavor eigenstates) mix
with the light ($\nu_{i}$) and heavy ($N_{I}$) mass eigenstates:
$\nu_{L\alpha}=V_{\alpha i}^{\text{PMNS}}\nu_{i}+\theta_{\alpha I}N_{I}^{c},$
(1)
where $V_{\alpha i}^{\text{PMNS}}$ is the PMNS matrix (see, e.g. Zyla:2020zbs
) and the matrix $\theta_{\alpha I}$ characterizes the mixing between the HNLs
and flavor states. These mixings are subject to a number of constraints:
* The ratios of the mixing angles $\theta_{\alpha I}$ are constrained by the
neutrino oscillation data (see e.g. Ruchayskiy:2011aa ; Asaka:2011pb ;
Drewes:2016jae ; Caputo:2017pit ). Somewhat surprisingly, the values
$\theta_{\alpha I}$ themselves are not bounded from above by the neutrino
oscillation data, provided that certain cancellations between these elements
ensure the smallness of neutrino masses Wyler:1982dd ; Mohapatra:1986bd ;
Branco:1988ex ; GonzalezGarcia:1988rw ; Shaposhnikov:2006nn ; Kersten:2007vk ;
Abada:2007ux ; Gavela:2009cd .
* If HNLs are sufficiently light (below the electroweak scale), their existence
can be probed directly Shrock:1980vy ; Shrock:1980ct ; Shrock:1981wq ;
Shrock:1981cq ; Shrock:1982sc ; Atre:2009rg ; Drewes:2013gca ; Gronau:1984ct ;
Cvetic:2013eza ; Cvetic:2014nla ; Cvetic:2015ura ; Cvetic:2015naa ;
Cvetic:2018elt ; Cvetic:2019rms ; Deppisch:2015qwa ; Zamora-Saa:2016ito ;
Caputo:2016ojx ; Bolton:2019pcu ; Das:2017zjc ; Dib:2019tuj ; Antusch:2017hhu
; Bryman:2019bjg . A partial list of the searches at the existing experiments
is Liventsev:2013zz ; Artamonov:2014urb ; Aaij:2014aba ; Khachatryan:2015gha ;
Aad:2015xaa ; CortinaGil:2017mqf ; Izmaylov:2017lkv ; Sirunyan:2018mtv ;
Abe:2019kgx ; NA62:2020mcv ; NA62mupreliminary ; Aad:2019kiz ; Tastet:2020tzh
. HNL searches are an important part of the physics program of many proposed
experiments, see, e.g. Mermod:2017ceo ; Gligorov:2017nwh ; Curtin:2018mvb ;
Drewes:2018gkc ; Zamora-Saa:2019naq ; Boiarska:2019jcw ; Beacham:2019nyx ;
Ballett:2019bgd ; SHiP:2018xqw ; Hirsch:2020klk . In the absence of positive
results, one establishes upper limits on the mixing angles.
* The HNLs can be copiously produced in the early Universe. Their subsequent
decays can affect light element abundances and establish an upper bound on the
lifetime of HNLs, i.e. a lower bound on their mixing angles Dolgov:2000pj ;
Dolgov:2000jw ; Dolgov:2003sg ; Fuller:2011qy ; Ruchayskiy:2012si ;
Hernandez:2013lza ; Hernandez:2014fha ; Vincent:2014rja ; Gelmini:2019wfp ;
Kirilova:2019dlk ; Gelmini:2020ekg ; Sabti:2020yrt ; Boyarsky:2020dzc ;
Boyarsky:2021yoh .
* Finally, as we already mentioned, HNLs can be responsible for the generation
of the BAU. Leptogenesis in this model has attracted significant interest of
theoretical community and several groups have performed studies of the
parameter space Canetti:2012vf ; Canetti:2012kh ; Shuve:2014zua ;
Abada:2015rta ; Hernandez:2015wna ; Drewes:2016gmt ; Drewes:2016jae ;
Hernandez:2016kel ; Hambye:2017elz ; Abada:2017ieq ; Antusch:2017pkq ;
Ghiglieri:2017csp ; Eijima:2018qke ; Ghiglieri:2018wbs ; Klaric:2020lov ;
Eijima:2020shs ; Klaric:2021cpi . The requirement of successful leptogenesis
limits the values of the mixings $\theta_{\alpha I}$ from above and from
below.
Experimental studies usually report their results in terms of a single HNL
mixing with a single flavor. While convenient for comparison between studies,
such a single-HNL model does not per se solve any of the BSM problems outlined
above: neutrino masses require at least two HNLs, these HNLs must mix with
several flavors to explain oscillations, and low-scale leptogenesis also
requires at least two HNLs with sufficient mass degeneracy to enhance the
production of the baryon asymmetry. The simplest HNL model capable of
incorporating the above BSM phenomena is the model with two HNLs of
approximately equal masses ($M_{N_{1}}\approx M_{N_{2}}=M_{N}$). It turns out
that the parameter space of such a model looks quite different from that of a
toy model with a single HNL. To correctly combine the different constraints,
one has to reanalyze the experimental data. Our work is devoted to such a
reanalysis for the model with two degenerate HNLs.
Similar reinterpretation works have been performed in the past. In Ref.
Drewes:2015iva collider searches were combined with the constraints from the
seesaw mechanism for three heavy neutrinos, as well as the limits on HNL
lifetime from BBN (see also Chrzaszcz:2019inj for a recent analysis using the
GAMBIT Athron:2017ard framework). The minimal model with two HNLs is more
restrictive and leads to stronger bounds on the properties of HNLs, even more
so when combined with the condition of successful leptogenesis as was done in
Ref. Drewes:2016jae . Ref. Hernandez:2016kel also discussed constraints on
the parameter space of the two HNL model, taking into account leptogenesis as
well as potential future searches for the HNLs and for the neutrinoless double
beta decay signal.
In this work we revise and improve upon the existing studies in a number of
ways:
1. There has been a significant improvement of the Big Bang Nucleosynthesis
constraints, based on the meson-driven conversion effect Boyarsky:2020dzc (as
compared to the $\tau<0.1\,\mathrm{s}$ bound in Refs. Drewes:2015iva ;
Drewes:2016jae ; Chrzaszcz:2019inj ). This drastically affects our study,
ruling out a significant portion of the parameter space.
2. We include state-of-the-art leptogenesis calculations that incorporate
important processes neglected in previous studies (see e.g. Eijima:2017anv ;
Ghiglieri:2017gjz ).
3. The allowed mixing angles are significantly affected by the improved bounds
on the CP-violating phase $\delta_{\text{CP}}$ Esteban:2020cvm , which was not
constrained in previous studies.
4. We include the latest results of the direct searches from the NA62
experiment NA62:2020mcv .
Given the impact of the works from recent years, it is important to see how
the combination of these constraints is affected, and to reevaluate the
remaining parameter space, especially given the attention of the community
towards feebly interacting particles (FIPs) Agrawal:2021dbo . In this work we
present the most up-to-date bounds on the properties of HNLs in the minimal
model.
The paper is organized as follows: in Section 2 and 3 we discuss constraints
on the model from the neutrino oscillation data and accelerator searches. In
Section 4 we discuss constraints from BBN, taking into account the effects of
mesons produced from the HNL decays, while in Section 5 we present the
parameter region where the BAU can be successfully generated. We combine these
limits in Section 6 and conclude with a discussion in Section 7.
## 2 Constraints from neutrino oscillations
The experimentally observed neutrino oscillations cannot be explained within
the Standard Model of particle physics, where neutrinos are massless and the
flavor lepton number is conserved. One possible way to solve this problem is to
add two right-handed neutrinos to the model. In addition to the
Dirac masses $m_{D}=vF$, which couple the left-handed and right-handed
neutrinos, the right-handed neutrinos, being gauge-singlet states, can also
have Majorana masses $M_{M}$, unrelated to the SM Higgs field. To find the
physical states we need to diagonalise the full mass matrix of the left- and
right-handed neutrinos
$\displaystyle\mathcal{L}\supset\frac{1}{2}\begin{pmatrix}\overline{\nu_{L}}&\overline{\nu_{R}^{c}}\end{pmatrix}\begin{pmatrix}0&m_{D}\\ m_{D}^{T}&M_{M}\end{pmatrix}\begin{pmatrix}\nu_{L}^{c}\\ \nu_{R}\end{pmatrix}\,.$ (2)
If the Dirac masses are small compared to the Majorana masses, we can block-
diagonalise this matrix to find two sets of masses,
$\displaystyle m_{\nu}\simeq m_{D}M_{M}^{-1}m_{D}^{T}\quad\text{and}\quad M_{M}\simeq\begin{pmatrix}M_{1}&0\\ 0&M_{2}\end{pmatrix}\,.$ (3)
This is the famous _seesaw_ formula Minkowski:1977sc ; Yanagida:1979as ;
GellMann:1980vs ; Mohapatra:1979ia ; Schechter:1980gr ; Schechter:1981cv , in
which the smallness of the light neutrino masses $m_{\nu}$ is explained by the
parametrically small ratio of the Dirac and Majorana masses $m_{D}/M_{1,2}\ll
1$. Another consequence of the seesaw mechanism is that the heavy states
(henceforth _heavy neutral leptons_ – HNLs) are mixtures of the left-handed
and right-handed neutrinos, and can interact with the rest of the SM – in
particular with the $W$ and $Z$ bosons. The strength of this interaction is
given by the _mixing angle_ :
$\displaystyle\theta\simeq m_{D}M_{M}^{-1}\,,\qquad m_{\nu}=\theta M_{M}\theta^{T}\,.$ (4)
The modulus squared of the mixing angle quantifies how suppressed the HNL
interactions are compared to the interactions of the light neutrinos. It is
often useful to introduce the quantities
$\displaystyle U_{\alpha I}^{2}\equiv|\theta_{\alpha I}|^{2}\,,$
$\displaystyle U_{I}^{2}\equiv\sum_{\alpha}U_{\alpha I}^{2}\,,$ (5a)
$\displaystyle U^{2}_{\alpha}\equiv\sum_{I}U^{2}_{\alpha I}\,,$ $\displaystyle
U^{2}\equiv\sum_{\alpha I}U_{\alpha I}^{2}\,,$ (5b)
which quantify the overall suppression of the HNL interactions. If the HNLs
are degenerate in mass, it is also useful to consider the sum over the HNL
flavors (5b).
It is important to note that the size of the observed neutrino masses
$m_{\nu}$ constrains neither the mixing angles $\theta$ nor the Majorana mass
$M_{M}$, but only their combination in Eq. (4). This suggests that the seesaw
mechanism does not imply a mass scale for the heavy neutrinos.
Nonetheless, using the seesaw relation (3), we can connect the HNL mixing
angles $\theta$ to the known neutrino oscillation data through the Casas-
Ibarra parametrization Casas:2001sr :
$\displaystyle\theta=iV^{\text{PMNS}}\left(m_{\nu}^{\mathrm{diag}}\right)^{1/2}R\left(M_{M}^{\mathrm{diag}}\right)^{-1/2}\,,$
(6)
where $V^{\text{PMNS}}$ is the PMNS matrix, and $m_{\nu}^{\mathrm{diag}}$ is
the light neutrino mass matrix with $m_{1}=0$ for normal hierarchy (NH) and
$m_{3}=0$ for inverted hierarchy (IH); in the model with two HNLs considered
here, the lightest active neutrino is massless at tree level Davidson:2006tg ,
and therefore we use the term hierarchy rather than ordering. The complex
matrix $R$ satisfies the relation $R^{T}R=\mathbb{1}_{2\times 2}$ and depends
on the neutrino mass hierarchy
$\displaystyle R^{\rm NH}=\begin{pmatrix}0&0\\ \cos\omega&\sin\omega\\ -\xi\sin\omega&\xi\cos\omega\end{pmatrix}\,,\qquad R^{\rm IH}=\begin{pmatrix}\cos\omega&\sin\omega\\ -\xi\sin\omega&\xi\cos\omega\\ 0&0\end{pmatrix}\,,$ (7)
where $\omega$ is a complex-valued angle and $\xi=\pm 1$. The symmetry
$\xi\leftrightarrow-\xi$, $N_{1}\leftrightarrow N_{2}$,
$\omega\leftrightarrow\omega+\frac{\pi}{2}$ allows us to consider only the
$\xi=+1$ case.
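A sketch of the Casas-Ibarra construction of Eqs. (6)-(7) for the normal hierarchy, using plain Python complex arithmetic. The sanity check uses an identity matrix in place of the PMNS matrix and toy masses, so all numbers are purely illustrative:

```python
import cmath
import math

def matmul(A, B):
    """Multiply two small complex matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def casas_ibarra_nh(V, m2, m3, M_N, omega, xi=1):
    """theta = i V sqrt(m_diag) R (M_diag)^(-1/2) for NH (m1 = 0), Eqs. (6)-(7),
    with two degenerate HNLs of common mass M_N (all masses in GeV).

    V is the 3x3 PMNS matrix; omega may be complex. Returns the 3x2
    matrix of mixing angles theta_{alpha I}.
    """
    sqrt_m = [[0.0, 0.0, 0.0],
              [0.0, math.sqrt(m2), 0.0],
              [0.0, 0.0, math.sqrt(m3)]]
    c, s = cmath.cos(omega), cmath.sin(omega)
    R = [[0, 0], [c, s], [-xi * s, xi * c]]
    M_inv_sqrt = [[M_N ** -0.5, 0], [0, M_N ** -0.5]]
    theta = matmul(matmul(matmul(V, sqrt_m), R), M_inv_sqrt)
    return [[1j * t for t in row] for row in theta]

# Sanity check with V = identity and omega = 0: |theta_{mu 1}|^2 = m2 / M_N.
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
th = casas_ibarra_nh(I3, m2=1e-20, m3=4e-20, M_N=1.0, omega=0)
```

Taking a complex $\omega$ with large $|\mathrm{Im}\,\omega|$ in this construction reproduces the $e^{2|\mathrm{Im}\,\omega|}$ enhancement of the mixing angles discussed below.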
While this parametrization cannot provide a limit on the individual mixing
angles $U^{2}_{\alpha I}$ Drewes:2019mhg , in the degenerate mass limit it
gives a lower bound on the summed mixing angles, see e.g. Ruchayskiy:2011aa ;
Eijima:2018qke . If the HNL mixing angle is large, $U^{2}_{\alpha}\gg
m_{\nu}/M_{N}$ (if $|m_{D}|^{2}/M_{N}^{2}\approx U^{2}=\mathcal{O}(1)$, the
seesaw expansion breaks down and one should go beyond the Casas-Ibarra
parametrization Donini:2012tt ; given the strong experimental constraints on
$U^{2}$, we can safely neglect such a correction in the present analysis), the
two HNLs form a quasi-Dirac pair with mixing angles that are approximately
equal up to a phase, $\theta_{\alpha 2}\approx i\theta_{\alpha 1}$, and the
expression for $U^{2}_{\alpha}$ is given by Shaposhnikov:2008pf ; Asaka:2011wq
; Ruchayskiy:2011aa ; Hernandez:2016kel ; Drewes:2016jae :
$\displaystyle U_{\alpha}^{2}=|U_{\alpha 1}|^{2}+|U_{\alpha 2}|^{2}\approx\frac{e^{2|\operatorname{Im}\omega|}}{2M_{N}}\Bigl(m_{2}|V^{\text{PMNS}}_{\alpha 2}|^{2}+m_{3}|V^{\text{PMNS}}_{\alpha 3}|^{2}-\operatorname{sgn}(\operatorname{Im}\omega)\cdot 2\sqrt{m_{2}m_{3}}\,\operatorname{Im}[V^{\text{PMNS}}_{\alpha 2}(V^{\text{PMNS}}_{\alpha 3})^{*}]\Bigr),$ (8)
for the NH; for the IH the r.h.s. is obtained by replacing $2\rightarrow 1$
and $3\rightarrow 2$. When we normalize the flavored mixing angles
$U^{2}_{\alpha}$ to the total mixing angle $U^{2}$,
the dependence on the unknown HNL parameters drops out, and the ratios
$U^{2}_{\alpha}/U^{2}$ depend only on the PMNS parameters Drewes:2016jae ;
Caputo:2017pit . In what follows we will encounter these ratios very often, so
we introduce
$x_{\alpha}\equiv U^{2}_{\alpha}/U^{2},\qquad x_{e}+x_{\mu}+x_{\tau}=1.$ (9)
The Majorana phases entering the PMNS matrix, $\alpha_{21}$ and $\alpha_{31}$
(in PDG conventions), which also affect the ratios (9), cannot be determined
in oscillation experiments, but could instead be measured indirectly through
neutrinoless double beta decay experiments in the near future (see, e.g.
Bezrukov:2005mx ; Drewes:2016lqo ; Asaka:2016zib ; Hernandez:2016kel ). In the
limit of two HNLs, there is effectively only one Majorana phase which we
denote $\eta$. The phase $\eta$ is equal to $(\alpha_{21}-\alpha_{31})/2$ in
the case of normal hierarchy and $\alpha_{21}/2$ in the case of inverted
hierarchy.
Figure 1: 95% bounds for $x_{\alpha}=U^{2}_{\alpha}/U^{2}$ for normal
hierarchy (left) and inverted hierarchy (right) in the $e^{2\,\text{Im
}\omega}\gg 1$ limit. The Majorana phase takes values $\eta\in[0,2\pi)$, while
$\Delta\chi^{2}$ is taken for the measured values of the PMNS angles
$\theta_{23}$ and $\delta$, that affect the region most strongly. Gray area
corresponds to the forbidden region of the parameter space $x_{e}+x_{\mu}>1$,
see Eq. (9).
To determine the allowed mixing patterns, we take the latest neutrino
oscillation parameters from nuFIT 5.0 Esteban:2020cvm (without Super-
Kamiokande atmospheric data). We perform a numerical scan over the Dirac phase
of the PMNS matrix $\delta_{\text{CP}}$, the angle $\theta_{23}$, and $\eta$,
since for the remaining parameters the experimental uncertainty is much
smaller and their variation only slightly changes the allowed parameter space.
For each point in the $x_{e}$–$x_{\mu}$ plane we find the smallest possible
$\Delta\chi^{2}(\delta_{\text{CP}},\theta_{23})$ and keep only the points with
$\Delta\chi^{2}<6$, which corresponds to the $95\%$ region for 2 degrees of
freedom.
The result of this procedure is shown in Fig. 1. We see that for normal
hierarchy $x_{e}$ can reach small values, while for inverted hierarchy all
three $x_{\alpha}$ can be small. As we will see later, the results for the
minimal allowed HNL mass depend on these small numbers, so one needs to
determine them with high accuracy. Therefore, we analyzed minimal values of
all $x_{\alpha}$, using the two-dimensional $\Delta\chi^{2}$ projection from
nuFIT data for the two most relevant parameters for each case. The minimal
values within 2$\sigma$ bounds are given in Table 1, c.f. Ruchayskiy:2011aa .
|  | NH rel. param. | NH min value | IH rel. param. | IH min value |
|---|---|---|---|---|
| $x_{e}$ | $\theta_{12},\theta_{13}$ | 0.0034 | $\theta_{12},\Delta m_{\text{sol}}$ | $0.026$ |
| $x_{\mu}$ | $\theta_{23},\delta_{\text{CP}}$ | 0.11 | $\theta_{12},\delta_{\text{CP}}$ | $3.2\cdot 10^{-4}$ |
| $x_{\tau}$ | $\theta_{23},\delta_{\text{CP}}$ | 0.11 | $\theta_{12},\delta_{\text{CP}}$ | 0.0011 |
Table 1: Minimal values of $x_{\alpha}=U^{2}_{\alpha}/U^{2}$ allowed by
neutrino oscillation data for both normal (NH) and inverted (IH) hierarchies.
The column “rel. param.” shows the most relevant neutrino oscillation
parameters that change the minimal $x_{\alpha}$ values.
## 3 Constraints from accelerator experiments
There exist two types of accelerator experiments capable of searching for MeV–
GeV mass HNLs. The first type is _missing energy experiments_ , searching for
decays $\pi/K\rightarrow e/\mu+\text{(invisible)}$. The probability of these
decays depends solely on $U^{2}_{e/\mu}$, directly probing the mixing angles
independently of the mixing pattern. The bounds obtained by this type of
experiment are generally stronger than those from other types, but they can
only constrain HNLs with masses below the kaon mass. In addition, they are not
sensitive to combinations $U_{\alpha}U_{\beta}$ ($\alpha\neq\beta$) and cannot
constrain $U^{2}_{\tau}$ (because of the large tau-lepton mass,
$m_{\tau}>m_{K}$). We use explicit bounds from: PIENU Aguilar-Arevalo:2017vlf ,
TRIUMF Britton:1992xv ($\pi\rightarrow e$), KEK Yamazaki:1984sj , NA62
NA62:2020mcv ; NA62mupreliminary ($K\rightarrow e/\mu$), and E949
Artamonov:2014urb ($K\rightarrow\mu$). For the NA62 $K\rightarrow\mu$ decay,
only 30% of the current data has been processed NA62mupreliminary .
The second type of experiment is the _displaced vertices_ search for the
appearance of SM particles in the decays of long-lived HNLs. Such experiments
can probe combinations $U^{2}_{\alpha}U^{2}_{\beta}$, because the production
and decay channels can be governed by different mixing angles. The relevant
experiments are PS-191 Bernardi:1985ny ; Bernardi:1987ek ,⁴ CHARM
Bergsma:1985is , NuTeV Vaitaitis:1999wq , as well as DELPHI Abreu:1996pa . The
experimental bounds for pure $U_{e}^{2}$ and $U_{\mu}^{2}$ mixings are shown in
Fig. 2.
⁴We note that the constraints from PS-191 used here should be considered with
caution. Using simple estimates according to Bondarenko:2019yob , we were not
able to reproduce the claimed sensitivity of PS-191 and suspect that the number
of kaons in the original analysis was overestimated. However, as the
constraints from PS-191 do not significantly change the results of our
analysis, we leave this question for future work.
Figure 2: Accelerator bounds for $U^{2}_{e}$ (left panel) and $U^{2}_{\mu}$
(right panel) for the HNL mass below $5$ GeV. Also, the expected DUNE
sensitivity Coloma:2020lgy is shown (dashed line).
The displaced vertex experiments typically report bounds only on some of the
mixings. A reanalysis of these bounds for the general case of 3 HNLs was
performed using GAMBIT in Chrzaszcz:2019inj . We use these results for
$m_{N}>0.2\,\mathrm{GeV}$ and combine them with the results from the missing
energy experiments. Also, to cover the small window $m_{N}\approx
0.13-0.14\,\mathrm{GeV}$ in the $U^{2}_{e}$ bound, we have explicitly included
the PS-191 results for $U^{2}_{e}$, reanalyzed following the prescriptions
given in Ruchayskiy:2011aa . The full set of bounds used in this work is shown
in Fig. 3.
Figure 3: Full set of bounds used in this work for normal hierarchy (left
panel) and for inverted hierarchy (right panel). For the
$U_{\alpha}U_{\beta\neq\alpha}$ and $U^{2}_{\tau}$ bounds we use only GAMBIT
results Chrzaszcz:2019inj starting from $m_{N}=0.2\,\mathrm{GeV}$.
To combine the accelerator limits with other constraints for a given mixing
pattern $x_{\alpha}$, we estimate the actual upper bound on $U^{2}$. To find
it we need to take into account that $U^{2}_{\tau}$ is typically less
constrained than $U^{2}_{e}$ and $U^{2}_{\mu}$; however, large values
$U^{2}_{\tau}\gg U^{2}_{e,\mu}$ (i.e. $x_{\tau}\approx 1$) are not allowed by
the neutrino oscillation data. Therefore, for each mixing pattern we compute
the maximal mixing angle that does not contradict any of the
$U_{\alpha}U_{\beta}$ bounds using:
$U^{2}_{\text{upper}}(x_{\alpha})=\min\left(\frac{U^{2}_{e,\text{acc}}}{x_{e}},\frac{(U_{e}U_{\mu})_{\text{acc}}}{\sqrt{x_{e}x_{\mu}}},\frac{U^{2}_{\mu,\text{acc}}}{x_{\mu}},\dots\right)$ (10)
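The minimum in Eq. (10) can be evaluated with a small helper; the sketch below is illustrative (the function name and dictionary layout are our own), exploiting that $U_{\alpha}U_{\beta}=U^{2}\sqrt{x_{\alpha}x_{\beta}}$, which turns each bound on a product of mixing angles into a bound on $U^{2}$:

```python
import math

def u2_upper(x, acc_bounds):
    """Upper bound on U^2 from Eq. (10).

    x          -- flavor fractions x_alpha = U_alpha^2 / U^2
    acc_bounds -- accelerator limits keyed by flavor pairs:
                  ('e','e') bounds U_e^2, ('e','mu') bounds U_e*U_mu, etc.
    Since U_alpha * U_beta = U^2 * sqrt(x_alpha * x_beta), each limit
    translates into a bound on U^2; the strongest one wins.
    """
    candidates = []
    for (a, b), bound in acc_bounds.items():
        weight = math.sqrt(x[a] * x[b])  # equals x_alpha when a == b
        if weight > 0:
            candidates.append(bound / weight)
    return min(candidates)
```

For a pattern with $x_{\tau}\approx 1$ all the listed weights are small, so the accelerator limits translate into only weak bounds on $U^{2}$, which is why the oscillation-data restriction on $x_{\tau}$ matters.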
## 4 Constraints from Big Bang Nucleosynthesis
Accelerator searches provide an upper bound on the HNL mixing angles, Eq.
(10). On the other hand, the requirement that the presence of HNLs in the
primordial plasma does not lead to the over-production of light elements
(deuterium, Helium-4) provides a lower bound on the HNL mixing angles. For
HNLs heavier than the $\pi^{\pm}$ meson, the strongest BBN bound on the HNL
lifetime comes from meson-driven $n\leftrightarrow p$ conversion
Boyarsky:2020dzc . Pions and kaons produced in HNL decays at the time when
free neutrons are present in the plasma modify the resulting freeze-out ratio
of neutron to proton abundances, leading to larger values of the
$\isotope[4]{He}$ abundance compared to Standard Model BBN. If meson
production is kinematically allowed, the following constraint can be derived
Boyarsky:2020dzc :
$\tau_{N}\lesssim\frac{0.023\,\mathrm{s}}{1+0.07\ln\left(\dfrac{P_{\text{conv}}}{0.1}\dfrac{\text{Br}_{N\rightarrow
h}}{0.1}\dfrac{Y_{N}\zeta}{10^{-3}}\right)}\,,$ (11)
where $P_{\text{conv}}$ is the probability for a meson to interact before
decaying, $\text{Br}_{N\rightarrow h}$ is the branching fraction of
semileptonic HNL decays producing a given meson $h$, $Y_{N}$ is the initial
HNL abundance, and $\zeta\equiv(a_{\text{SM}}/a_{\text{SM+HNLs}})^{3}<1$ is
the dilution factor. In the combination
$P_{\text{conv}}\text{Br}_{N\rightarrow h}$ a summation over meson species is
assumed. Note the logarithmic dependence on these parameters: for
$\tau_{N}\ll 0.1\,\mathrm{s}$, HNLs, and consequently mesons, have
exponentially small abundances at the time of interest.
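Since Eq. (11) is an explicit formula, the lifetime bound can be evaluated directly; a minimal sketch (the function name is ours, the reference values are those in the equation):

```python
import math

def tau_bbn_limit(p_conv, br_h, yn_zeta):
    """Meson-driven BBN bound on the HNL lifetime, Eq. (11), in seconds.
    At the reference point P_conv = 0.1, Br_{N->h} = 0.1, Y_N*zeta = 1e-3
    the logarithm vanishes and the bound is 0.023 s."""
    return 0.023 / (1.0 + 0.07 * math.log(
        (p_conv / 0.1) * (br_h / 0.1) * (yn_zeta / 1e-3)))
```

The small prefactor of the logarithm makes the bound only weakly sensitive to the exact values of $P_{\text{conv}}$, $\text{Br}_{N\rightarrow h}$, and $Y_{N}\zeta$.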
Implementing different mixing patterns changes only the value of
$\text{Br}_{N\rightarrow h}$, since $Y_{N}\zeta$ depends on processes at high
temperature, where all lepton species are in equilibrium, and
$P_{\text{conv}}$ is related solely to mesons. The value of $Y_{N}\zeta$
varies within $10^{-3}-10^{-2}$; therefore we use the conservative lower bound
$Y_{N}\zeta=10^{-3}$ Boyarsky:2020dzc . In terms of
$(U^{2}_{e},U^{2}_{\mu},U^{2}_{\tau})=U^{2}(x_{e},x_{\mu},x_{\tau})$, the
branching ratio can be parametrized in the following way:
$\text{Br}_{N\rightarrow h}=\sum_{X\in\text{states with
}h}n_{h}(X)\frac{x_{e}\Gamma(N_{e}\rightarrow
X)+x_{\mu}\Gamma(N_{\mu}\rightarrow X)+x_{\tau}\Gamma(N_{\tau}\rightarrow
X)}{x_{e}\Gamma(N_{e})+x_{\mu}\Gamma(N_{\mu})+x_{\tau}\Gamma(N_{\tau})}$ (12)
where the notation $N_{\alpha}$ corresponds to an HNL with the mixing angles
$U^{2}_{\alpha}=1$ and $U^{2}_{\beta\neq\alpha}=0$, $\Gamma(N_{\alpha})$ is
the total decay width, $\Gamma(N_{\alpha}\rightarrow X)$ is the HNL decay
width into state $X$, and $n_{h}(X)$ is the meson $h$ multiplicity for the
final state $X$. For a given mixing pattern it is straightforward to compute
the corresponding $P_{\text{conv}}\text{Br}_{N\rightarrow h}$ and substitute
in (11).
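The parametrization (12) can be sketched as a weighted average over the flavor fractions; the dictionary layout and names below are illustrative, with the partial widths $\Gamma(N_{\alpha}\rightarrow X)$ supplied externally:

```python
def br_meson(x, gamma_tot, gamma_x, n_h):
    """Meson branching ratio of Eq. (12) for a mixing pattern x.

    x         -- flavor fractions {alpha: x_alpha}
    gamma_tot -- total widths {alpha: Gamma(N_alpha)}
    gamma_x   -- partial widths {X: {alpha: Gamma(N_alpha -> X)}}
                 for the final states X containing the meson h
    n_h       -- meson multiplicities {X: n_h(X)}
    """
    denom = sum(x[a] * gamma_tot[a] for a in x)
    numer = sum(n_h[X] * sum(x[a] * gamma_x[X][a] for a in x)
                for X in gamma_x)
    return numer / denom
```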
Figure 4: Top panel: the lifetime bounds for the pure mixing cases. Left
panel: muon mixing with a small contribution of $U^{2}_{e}$. Right panel: tau
mixing with a small contribution of $U^{2}_{e}$.
For the pure mixing cases the bound is applicable only for HNL masses
exceeding the meson production threshold:
$\displaystyle m_{N}>m_{\pi}+m_{e}\approx 130\,\mathrm{MeV}$ for electron mixing,
$\displaystyle m_{N}>m_{\pi}+m_{\mu}\approx 240\,\mathrm{MeV}$ for muon mixing,
$\displaystyle m_{N}>m_{\eta}\approx 550\,\mathrm{MeV}$ for tau mixing.
However, even a small fraction of $U^{2}_{e}$ can relax this restriction to
$m_{N}>m_{\pi}+m_{e}$ due to the logarithmic dependence on the total branching
ratio; see the examples in Fig. 4.
For the parameter region where the meson constraint does not apply we use the
conservative estimate $\tau_{N}<0.1\,\mathrm{s}$ from Sabti:2020yrt (the
actual estimate of the HNL lifetime depends on the maximally admissible value
of $\Delta Y_{\text{He}}/Y_{\text{He}}$; here we use $\Delta
Y_{\text{He}}/Y_{\text{He}}\leq 4.35\%$, as in Boyarsky:2020dzc and adopted
in this work, while Ref. Sabti:2020yrt reports a bound twice as strong because
it adopts a tighter margin for $Y_{\text{He}}$). Taking this into account, the
resulting expression for the lower bound on $U^{2}$ is
$U^{2}_{\text{lower}}(x_{\alpha})=\frac{1}{\tau^{\text{BBN}}_{N}(x_{\alpha})\cdot\sum_{\alpha}x_{\alpha}\Gamma(N_{\alpha})}$
(13)
where $\tau^{\text{BBN}}_{N}$ is given by the minimal value between the r.h.s.
of (11) and $0.1\,\mathrm{s}$.
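Given the total widths $\Gamma(N_{\alpha})$, Eq. (13) is a one-liner; the sketch below (names ours) assumes the widths are supplied in $\mathrm{s}^{-1}$ so that the result is dimensionless:

```python
def u2_lower(x, gamma, tau_bbn):
    """Lower bound on U^2 from BBN, Eq. (13).

    x       -- flavor fractions {alpha: x_alpha}
    gamma   -- widths {alpha: Gamma(N_alpha)} at U_alpha^2 = 1, in 1/s
    tau_bbn -- lifetime bound in s (min of Eq. (11) and 0.1 s)
    """
    return 1.0 / (tau_bbn * sum(x[a] * gamma[a] for a in x))
```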
## 5 Constraints from Leptogenesis
The smallness of the light neutrino masses is not the only problem HNLs can
solve: they can also explain the observed BAU through leptogenesis
Fukugita:1986hr . The condition of reproducing the observed BAU
Aghanim:2018eyx ; Zyla:2020zbs ,
$\displaystyle\frac{n_{B}}{n_{\gamma}}=(5.8-6.5)\times 10^{-10}\,,$ (14)
imposes further constraints on the properties of the HNLs. When combined with
the bounds from the seesaw mechanism, leptogenesis imposes a strong constraint
on the mass spectrum of the HNLs: namely, it forbids hierarchical HNL masses
if the lightest mass is below $10^{9}$ GeV Davidson:2002qv . This implies that (in
the minimal model) any HNLs we can observe in the near future are degenerate
in mass, and that leptogenesis is realized either via a resonant enhancement
in HNL decays Liu:1993tg ; Flanz:1994yx ; Flanz:1996fb ; Covi:1996wh ;
Covi:1996fm ; Pilaftsis:1997jf ; Pilaftsis:1997dr ; Pilaftsis:1998pd ;
Buchmuller:1997yu ; Pilaftsis:2003gt , or via HNL oscillations Akhmedov:1998qx
; Asaka:2005pn . If we combine these two mechanisms, leptogenesis is possible
for all HNL masses larger than $\sim 100$ MeV Klaric:2020lov .
Nonetheless, leptogenesis can also provide other interesting constraints on
the HNL properties. Phenomenologically, the most important constraint is the
limit on the maximal size of the HNL mixing angles $U^{2}$ Shaposhnikov:2008pf
; Canetti:2012kh ; Drewes:2016gmt ; Hernandez:2016kel ; Drewes:2016jae ;
Antusch:2017pkq ; Eijima:2018qke ; Klaric:2020lov . This limit arises from the
fact that for large mixing angles the HNL interactions become too fast, and
the lepton number reaches thermal equilibrium before the sphalerons freeze-out
at $T\sim 130$ GeV.
#### Allowed flavor mixing patterns.
The upper bounds on $U^{2}$ can have a strong dependence on the choice of the
flavor mixing pattern Drewes:2016jae ; Antusch:2017pkq ; Eijima:2018qke , as
shown in Fig. 5. A tiny mixing with a particular lepton flavor means that this
lepton flavor equilibrates more slowly in the early Universe, and can thus
prevent the complete equilibration of the lepton number. The allowed mixing
patterns are almost completely determined by the low-energy phases, as shown
in Fig. 1.
This means that the leptogenesis bounds can also shift as the neutrino
oscillation data is updated. For example, in the case of inverted hierarchy,
the choice of optimal phases corresponded to $\delta_{\text{CP}}=0$
Drewes:2016jae ; Eijima:2018qke , which is disfavored by the latest fits of
the light neutrino parameters Esteban:2020cvm .
Figure 5: Flavor patterns consistent with both the neutrino oscillation data
(as in Fig. 1), and leptogenesis for $M_{N}\sim 140\,\mathrm{MeV}$. The upper
bound on $U^{2}$ depends on the ratios $U_{\alpha}^{2}/U^{2}$, as this can
prevent a large washout of the lepton asymmetries. The color coding indicates
the maximal $U^{2}$ for which baryogenesis via leptogenesis remains possible.
We note here that the experimental and BBN bounds on the mixing angles are not
included in these figures, as in this range of HNL masses they completely
dominate over the constraints from leptogenesis, as shown in Fig. 9.
#### The HNL Mass splitting.
The mass splitting between the HNLs is one of the key parameters determining
the size of the BAU. The condition of successful leptogenesis constrains the
maximal size of the mass splitting (see, e.g. Canetti:2012kh ; Drewes:2016gmt
; Hernandez:2016kel ; Drewes:2016jae ; Antusch:2017pkq ; Eijima:2018qke ;
Klaric:2020lov ), which can have direct consequences for the various lepton
number violating signatures at direct search experiments Cvetic:2015ura ;
Anamiati:2016uxp ; Gluza:2015goa ; Dev:2015pga ; Antusch:2017ebe ;
Drewes:2019byd ; Tastet:2019nqj ; Antusch:2020pnn , or for the indirect
signatures such as neutrinoless double beta decay Drewes:2016lqo ;
Hernandez:2016kel . As an example, for $M_{N}\approx 140$ MeV, we show how
leptogenesis constrains the remaining parameters after we apply all the other
cuts. The allowed range of mass splittings $\Delta M_{N}=|M_{2}-M_{1}|/2$,
consistent with leptogenesis depends on $U^{2}$ and is shown in Fig. 6.
Figure 6: The allowed range of HNL mass splittings consistent with
leptogenesis for a benchmark mass $M_{N}=140\,\mathrm{MeV}$. All points are
consistent with the experimental constraints. Notably, relatively large
$(1\,\mathrm{MeV})$ mass splittings are allowed, which could potentially be
resolved experimentally. All mass splittings are large enough that the rates
of lepton number violating and conserving decays are approximately equal.
## 6 Results
### 6.1 Numerical procedure
Our procedure for finding viable HNL models (green points) is as follows. We
consider two HNLs degenerate in mass that pass _all of the following
constraints_ :
1. 1.
The mixing angles $U_{\alpha}^{2}(x)$ are chosen such that neutrino
oscillation data is satisfied. This is ensured by the Casas-Ibarra
parametrization (2). By varying the CP phase $\delta_{\text{CP}}$ and
$\theta_{23}$ within their $95\%$ confidence region ($\Delta\chi^{2}<6.0$) and
by changing the unconstrained Majorana phase $\eta\in[0,2\pi)$ we determine
the region of parameters $(x_{e},x_{\mu})$ admissible by the neutrino
oscillation data.
2. 2.
All $U_{\alpha}^{2}(x)$ must be _smaller_ than the corresponding accelerator
limits for the flavor $\alpha$. To ensure this we scan over the points in the
$(x_{e},x_{\mu})$ plane consistent with neutrino oscillation data and for each
mass $M_{N}$ compute the upper bound $U^{2}(x)$ from the accelerator
experiments
$U^{2}_{\text{upper}}(x_{\alpha})=\min\left(\frac{U^{2}_{e,\text{max}}}{x_{e}},\frac{(U_{e}U_{\mu})_{\text{max}}}{\sqrt{x_{e}x_{\mu}}},\frac{U^{2}_{\mu,\text{max}}}{x_{\mu}},\dots\right)$
(15)
The admissible mixing angles $U_{\alpha}^{2}$ should be below
$x_{\alpha}U^{2}_{\text{upper}}(x_{\alpha})$.
3. 3.
All $U_{\alpha}^{2}(x)$ must be _larger_ than the corresponding BBN bounds for
the given flavor. To this end we find
$U^{2}_{\text{lower}}(x_{\alpha})=\frac{1}{\tau^{\text{BBN}}_{N}(x_{\alpha})\cdot\sum_{\alpha}x_{\alpha}\Gamma(N_{\alpha})}$
(16)
(where the quantities in Eq. (16) are defined in Section 4) and compute the
admissible $U_{\alpha}^{2}(x)=x_{\alpha}U^{2}_{\text{lower}}(x)$.
4. 4.
All $U^{2}_{\alpha}$ are minimized/maximized independently with respect to
$x_{\alpha}$.
5. 5.
When we start to approach the seesaw line
($MU_{\alpha}^{2}\approx\sum_{i}m_{i}$), the two HNLs may in principle have
different mixing angles, i.e. $U_{\alpha 1}\neq U_{\alpha 2}$, and,
correspondingly, different lifetimes. To probe the region near the seesaw
bound, i.e. when $U^{2}M\sim(m_{1}+m_{2})$ or, equivalently, when
$2\operatorname{Im}\omega\to 0$, we used the general expression (6) to
generate a large sample of points in the range $2\operatorname{Im}\omega\in[0,\ln
100]$, $\operatorname{Re}\omega\in[0,2\pi)$ to ensure that the above conditions
are satisfied by each of the HNLs.
6. 6.
Finally, we ensure that the observed value of the BAU can be reproduced. To
this end we numerically solve the quantum kinetic equations of Ref.
Klaric:2020lov .
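The selection in steps 2-3 amounts to keeping points whose total mixing lies between the BBN lower bound and the accelerator upper bound for the chosen pattern. A toy sketch of this cut (the bound functions here are constant placeholders, not the real $U^{2}_{\text{upper}}$, $U^{2}_{\text{lower}}$):

```python
def viable(u2, x, upper_fn, lower_fn):
    """A point (U^2, x) survives steps 2-3 if every flavor mixing lies
    between the BBN lower bound and the accelerator upper bound; with
    U_alpha^2 = x_alpha * U^2 this reduces to a condition on U^2."""
    return lower_fn(x) < u2 < upper_fn(x)

# toy scan over U^2 for a fixed mixing pattern with placeholder bounds
pattern = {'e': 0.1, 'mu': 0.3, 'tau': 0.6}
points = [i * 1e-9 for i in range(1, 100)]
allowed = [u2 for u2 in points
           if viable(u2, pattern,
                     upper_fn=lambda x: 3.95e-8,
                     lower_fn=lambda x: 2.05e-8)]
```

In the real scan the two bound functions depend on $M_{N}$ and on $(x_{e},x_{\mu})$, and the surviving points are additionally filtered by the leptogenesis condition of step 6.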
### 6.2 The space of viable 2 HNL models
Our results are presented in Figures 7 (for the normal hierarchy) and 8 (for
the inverted hierarchy). For each mass and each flavor we show a set of
admissible models (green points). A model is selected as viable (a green
point) if it explains neutrino oscillations, provides the correct baryon
asymmetry of the Universe, and satisfies the accelerator and BBN constraints;
see Section 6.1 for details. The blue curve shows the _minimal_
$U_{\alpha}^{2}$ ($U^{2}$) compatible with BBN in the model with 2 HNLs
explaining neutrino masses and oscillations. The blue curve does not take into
account whether the other mixing angles pass the selection conditions or
whether the BBN curve lies below the accelerator curve. The red curve shows
the upper limit on $U_{\alpha}^{2}$ (correspondingly, $U^{2}$) compatible with
accelerator searches and neutrino oscillations.
Several comments are in order:
1. 1.
Although most of the parameter space below $\approx 330-360\,\mathrm{MeV}$ is
closed, there remains an open window of viable models for the normal hierarchy
of neutrino masses (see insets in Fig. 7). The corresponding HNLs have masses
$0.12\,\mathrm{GeV}\leq M_{N}\leq 0.14\,\mathrm{GeV}$ (17)
and the mixing angles (without describing the actual shape of the region)
$\displaystyle 3\cdot 10^{-6}\leq U^{2}\leq 15\cdot 10^{-6}$
$\displaystyle 10^{-8}\leq U_{e}^{2}\leq 28\cdot 10^{-8}$
$\displaystyle 4\cdot 10^{-7}\leq U_{\mu}^{2}\leq 26\cdot 10^{-7}$
$\displaystyle 10^{-6}\leq U_{\tau}^{2}\leq 12\cdot 10^{-6}$
The existence of this window follows from the fact that the missing energy
experiments do not provide strong constraints in this mass region. Experiments
based on pion decays lack sensitivity due to the shrinking of available phase
space, while experiments with kaon decays suffer in this region from large
backgrounds, see e.g. recent discussion in Tastet:2020tzh . Finally, the meson
driven conversion effect which dominates the BBN constraint is not applicable
in this mass range. To close the window completely one would need to improve
the bounds on $U^{2}$ by a factor of 5-10.
The shape of the region can be seen in Fig. 7 with the relevant experiments,
while the specific (benchmark) points are listed in Table 2. The flavor ratios
$U_{\alpha}^{2}/U^{2}$, and the PMNS parameters realized in this region of
parameter space are shown in Fig. 9. Note that a precise determination of the
PMNS parameters may be sufficient to determine the viability of this region.
For the inverted hierarchy the procedure also predicts a small region at
$M_{N}\approx 0.137-0.14\,\mathrm{GeV}$ with $U^{2}$ changing slightly (we
list the values for a benchmark point in Table 2).
2. 2.
Apart from this window, the lower mass of viable HNLs is given by
$\displaystyle M_{N}>0.33\,\mathrm{GeV}$ for the normal hierarchy, (18)
$\displaystyle M_{N}>0.36\,\mathrm{GeV}$ for the inverted hierarchy.
 | $M_{N}$ [GeV] | $U^{2}_{e}$ | $U^{2}_{\mu}$ | $U^{2}_{\tau}$
---|---|---|---|---
NH | 0.12 | $3\cdot 10^{-8}$ | $2\cdot 10^{-6}$ | $7\cdot 10^{-6}$
NH | 0.14 | $2\cdot 10^{-8}$ | $1\cdot 10^{-6}$ | $8\cdot 10^{-6}$
NH | $0.33$ | $8\cdot 10^{-10}$ | $5\cdot 10^{-9}$ | $3\cdot 10^{-8}$
IH | 0.14 | $7\cdot 10^{-7}$ | $8\cdot 10^{-7}$ | $1\cdot 10^{-6}$
IH | $0.36$ | $2\cdot 10^{-9}$ | $2\cdot 10^{-8}$ | $2\cdot 10^{-8}$
Table 2: Benchmark models for the boundary masses of the allowed regions.
3. 3.
For each individual flavor there are regions where the accelerator bounds
(red) lie above the BBN limits (blue), yet the point is white. This means that
such a mass is excluded by the combination of the lower and upper boundaries
for some other flavor.
Figure 7: The parameter space of the model with two HNLs. Green points are
consistent with all experimental bounds, explain neutrino data for the normal
neutrino mass hierarchy (NH) and generate the correct BAU. Independent bounds
for each flavor from the accelerator experiments (red) and BBN (blue) are also
shown.
Figure 8: The parameter space of the model with two HNLs. Green points are
consistent with all experimental bounds, explain neutrino data for the
inverted neutrino mass hierarchy (IH) and generate the correct BAU. Other
notations are the same as in Fig. 7.
Figure 9: Left: the allowed mixing angles in the open window $M_{N}\approx
140$ MeV for NH, with all of the constraints applied (red line). Right:
$\Delta\chi^{2}$ distribution in $(\delta_{\text{CP}},\theta_{23})$ plane
taken from the nuFIT 5.0 Esteban:2020cvm . The region inside the red curve
corresponds to the allowed HNL models for NH and $M_{N}\approx 140$ MeV. The
CP-violating angle $\delta_{\text{CP}}$ is in degrees. If $\delta_{\text{CP}}$
and $\theta_{23}$ are measured to be outside the red boundary, the allowed
window is excluded without a need for a dedicated search experiment.
### 6.3 Future searches
The results, including the future constraints from NA62, DUNE, and SHiP, are
presented in Fig. 10.
To estimate the future sensitivity of the NA62 experiment, we assume that the
experiment will collect 8 times more data than has been published (the goal of
NA62 is to collect 80 rare kaon decay $K^{+}\to\pi^{+}\nu\bar{\nu}$ events
Ceccucci:832885 , while the existing HNL constraint NA62:2020mcv is based on a
dataset where only $9.5$ rare kaon events are expected NA62:2713499 ).
Assuming that neither the data collection nor the analysis strategy will
change significantly in the future and that no HNLs will be detected, the
current limit can be scaled down by $\sqrt{8}$, taking into account that the
HNL analysis is background dominated NA62:2020mcv . We see that for the normal
hierarchy the future NA62 measurement will not explore the HNL mass “window”
beyond the pion mass. The remainder of the allowed parameter space is pushed
to a lower mass of $M_{N}\gtrsim 0.38(0.39)\,\mathrm{GeV}$ for NH (IH).
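The $\sqrt{8}$ rescaling used for the NA62 projection follows from background-dominated statistics: with $N$ times more data the limit improves as $\sqrt{N}$. A one-line sketch (the function name is ours):

```python
import math

def scaled_limit(current_limit, data_factor=8):
    """Projected background-dominated limit: with data_factor times more
    data the bound improves (is scaled down) by sqrt(data_factor)."""
    return current_limit / math.sqrt(data_factor)
```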
The DUNE near detector will be very sensitive to HNLs Krasnov:2019kdc ;
Ballett:2019bgd ; Coloma:2020lgy . In particular, it will be able to push the
lower bound to $M_{N}=0.39\,\mathrm{GeV}$ for both hierarchies and cover the
open window at lower masses. When estimating the sensitivity for DUNE we took
$U^{2}_{e}$, $U^{2}_{\mu}$ bounds as reported in Coloma:2020lgy and derived
$U^{2}$, $U_{\tau}^{2}$ bounds consistent with oscillation data.
The SHiP experiment SHiP:2018yqc at CERN will provide unprecedented
sensitivity to heavy neutral leptons in the mass range of interest. Using the
sensitivity matrix provided by the SHiP collaboration SHiP:2018xqw , we have
performed a full scan in the $(M_{N},U^{2},x_{e},x_{\mu})$ space to find the
allowed region (determined by the number of events $n_{\text{events}}<2.3$).
The SHiP experiment will fully explore the “window” at low masses and push the
lower mass limit beyond the kaon threshold: $M_{N}\gtrsim
0.43(0.60)\,\mathrm{GeV}$ for NH (IH). We note that this is a conservative
estimate and the actual sensitivity will be even higher, as our analysis only
included HNLs coming from D-mesons SHiP:2018xqw , while HNLs originating from
kaon decays will significantly increase the sensitivity Gorbunov:2020rjx .
Figure 10: Parameter space of the models with two HNLs, including the
projected increase of the sensitivity due to the NA62 ($\times 8$ collected
data), DUNE, or SHiP experiments. The allowed points are consistent with all
experimental bounds, explain neutrino data for the normal (NH) or inverted
(IH) mass hierarchy, and generate correct BAU. For NA62, the minimal mass
after the pion mass window $M_{N}\approx m_{\pi}$ (for NH) becomes
$M_{N}=0.38(0.39)\,\mathrm{GeV}$ for NH(IH). For DUNE, the projections are
based on Coloma:2020lgy ; the minimal mass will be pushed up to $M_{N}\simeq
0.39\,\mathrm{GeV}$ for both hierarchies. For SHiP, the minimal mass is
$M_{N}\approx 0.43(0.60)\,\mathrm{GeV}$ for NH(IH).
## 7 Discussion and outlook
The idea that new particles need not be heavier than the electroweak scale,
but rather can be light and feebly interacting, is drawing increasing
attention from both the theoretical and experimental communities (see e.g.
Alekhin:2015byh, ; Beacham:2019nyx, ; Strategy:2019vxc, ). In particular, the
idea that heavy neutral leptons are responsible for (some of the)
beyond-the-Standard-Model phenomena has been actively explored in recent
years, see e.g. Boyarsky:2009ix ; Drewes:2013gca ; Alekhin:2015byh ;
Deppisch:2015qwa and refs. therein. This idea is motivated in the first place
by the type-I seesaw model that explains neutrino oscillations. Furthermore,
the same HNLs with nearly degenerate masses in the MeV–TeV range can explain
the BAU (see e.g. Klaric:2020lov and refs. therein).
However, while theoretical developments have focused on models with two or
more HNLs mixing with different flavors, experimental searches have
concentrated on a model with a single HNL mixing with a single flavor
Liventsev:2013zz ; Artamonov:2014urb ; Aaij:2014aba ; Khachatryan:2015gha ;
Aad:2015xaa ; Gligorov:2017nwh ; Izmaylov:2017lkv ; SHiP:2018xqw ;
Sirunyan:2018mtv ; Aad:2019kiz ; NA62:2020mcv . Such a model is simple to
analyze and provides a number of useful benchmarks. Nevertheless, taken at
face value it is incompatible with the observed neutrino masses and cannot
generate the BAU.
In this paper we address this issue. We recast the existing accelerator and
cosmological bounds in the model with 2 HNLs with degenerate masses. We
perform a scan over all parameter sets of the two-HNL model that
simultaneously: (a) explain neutrino oscillations; (b) are consistent with all
previous non-detections at accelerators; (c) do not spoil the predictions of
Big Bang nucleosynthesis; (d) allow for the generation of the baryon asymmetry
of the Universe.
Our main findings are as follows.
1. 1.
For the normal neutrino mass hierarchy, we have found an _open window_ for
masses $0.12-0.14$ GeV and then for $M_{N}\gtrsim 0.33\,\mathrm{GeV}$.
2. 2.
For the inverted neutrino mass hierarchy, the open window around $0.14$ GeV is
very tiny with $U^{2}$ varying by a factor $\sim 2$ around the value $2\times
10^{-6}$. The majority of the viable models have $M_{N}\gtrsim
0.36\,\mathrm{GeV}$.
3. 3.
Future experiments, DUNE or SHiP, will be able to fully cover the region of
parameter space $0.12-0.14$ GeV for all values of the mixing angle.
4. 4.
The lower mass limit above $300$ MeV will be pushed only slightly by DUNE or
NA62, but will be moved beyond the kaon threshold by the SHiP experiment.
5. 5.
A precise determination of the PMNS parameters $\delta$ and $\theta_{23}$ may
be sufficient for closing the $0.12-0.14$ GeV window for the normal mass
ordering.
###### Acknowledgements.
We would like to thank M. Ovchynnikov and M. Shaposhnikov for useful
discussions. This project has received funding from the European Research
Council (ERC) under the European Union’s Horizon 2020 research and innovation
programme (GA 694896), from the Carlsberg Foundation, and from the NWO Physics
Vrij Programme “The Hidden Universe of Weakly Interacting Particles”, nr.
680.92.18.03, which is partly financed by the Dutch Research Council NWO.
## Appendix A BBN constraints on long-lived HNLs
If HNLs possess semi-leptonic decay channels and have lifetimes
$\tau_{N}\gtrsim 0.02\,\mathrm{s}$, the mesons from HNL decays completely
dominate the $n$-$p$ conversion rates, driving the neutron-to-baryon ratio to
$X_{n}\simeq\frac{1}{2}$. The resulting abundance of Helium-4,
$Y_{p}\simeq 2X_{n}$, is then $Y_{p}\approx 1$, incompatible with
observations, which give $Y_{p}<0.2573$ Boyarsky:2020dzc . HNLs with lifetimes
near the seesaw bound and masses above the pion production threshold can have
lifetimes in the range $\mathcal{O}(10^{2}-10^{3})$ s. For such lifetimes the
HNL decay products may not only affect the neutron abundance but also destroy
already synthesized light elements (whose production starts at Hubble times
around 40 s) – a case that has not been analyzed in Boyarsky:2020dzc .
Below we demonstrate that for all values of the HNL masses/lifetimes
compatible with neutrino oscillations, such HNLs lead to an overproduction of
Helium-4 or other light elements, and therefore the region $\tau_{N}\gtrsim
40\,\mathrm{s}$ and $M_{N}>m_{\pi}$ is also excluded by BBN. The details of
the analysis will be presented elsewhere very_long_HNLs .
Indeed, all the neutrons in the primordial plasma will either decay or bind
into light elements (deuterium, Helium-3, Helium-4, etc). The presence of
pions in the plasma effectively “prevents” neutrons from decaying because the
rate of $n+\pi^{+}\to p+\pi^{0}$ exceeds both the Hubble expansion rate and
the decay rate $n\to p+e^{-}+\bar{\nu}_{e}$. As a result, decays (and other
weak processes) can be ignored until Hubble times $\sim 10^{6}$ sec, leading
to the following equation of neutron balance:
$X_{n}^{\rm(free)}+X_{D}+X_{\isotope[3]{He}}+2X_{\isotope[4]{He}}+\dots\approx\frac{1}{2}$
(19)
The cross-sections of all reactions that change the abundances in (19)
($n\leftrightarrow p$ conversion by pions, nucleosynthesis, dissociation of
light nuclei by pions, etc.) are of the same order. Therefore, the rates of
the various reactions are determined solely by the concentrations. Without
going into details (see very_long_HNLs ), there are two qualitative regimes.
If the instantaneous concentration of pions is $n_{\pi}\gtrsim n_{B}$, the
pions will efficiently destroy the synthesized nuclei, and all terms on the
l.h.s. of Eq. (19) will end up being of the same order,
$X_{n}^{\rm(free)}\sim X_{D}\sim
X_{\isotope[3]{He}}\sim X_{\isotope[4]{He}}\sim\mathcal{O}(1)$. If, on the
other hand, the instantaneous concentration of pions is small,
$n_{\pi}<n_{B}$, most of the neutrons will bind into nuclei, leading to
$X_{\isotope[4]{He}}\sim 1$ and $X_{n}^{\rm(free)}\ll 1$. Both cases are
incompatible with the experimentally observed abundances $X_{D}\sim
X_{\isotope[3]{He}}\sim 10^{-5}$ and $X_{\isotope[4]{He}}\sim 0.0643$ (we
remind the reader that the mass fraction $Y_{p}=4X_{\isotope[4]{He}}=2X_{n}$).
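The mass-fraction relation makes the incompatibility explicit: with meson-driven conversion keeping $X_{n}\simeq 1/2$, the helium mass fraction $Y_{p}=2X_{n}$ lands far above the observational bound. A trivial numerical check:

```python
def helium_mass_fraction(x_n):
    """Mass fraction of Helium-4 when essentially all free neutrons end
    up bound in He-4: Y_p = 4*X_He4 = 2*X_n."""
    return 2.0 * x_n

# meson-driven conversion drives X_n ~ 1/2, giving Y_p ~ 1,
# far above the observational bound Y_p < 0.2573
```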
Finally, a few words should be said about long-lived HNLs with
$M_{N}<m_{\pi}$. The influence of such particles on BBN has been analyzed in a
number of recent works (see e.g. Ruchayskiy:2012si, ; Sabti:2020yrt, ),
providing an upper bound on the HNL lifetime that lies below the seesaw limit.
Near the seesaw boundary the HNLs are long-lived, so they can survive until
the onset of nuclear reactions and their decay products can dissociate light
nuclei. The recent analysis of Domcke:2020ety , based on Hufnagel:2018bjp (see
also Forestell:2018txr ), demonstrated that MeV-mass HNLs with lifetimes
exceeding the seesaw bound are _excluded_ by cosmological observations (BBN
plus CMB), and therefore no “open window” exists below the seesaw line but
above the limits of Ruchayskiy:2012si ; Sabti:2020yrt .
## References
* (1) P. Minkowski, $\mu\to e\gamma$ at a Rate of One Out of $10^{9}$ Muon Decays?, Phys. Lett. B 67 (1977) 421–428.
* (2) T. Yanagida, Horizontal gauge symmetry and masses of neutrinos, Conf. Proc. C 7902131 (1979) 95–99.
* (3) M. Gell-Mann, P. Ramond, and R. Slansky, Complex Spinors and Unified Theories, Conf. Proc. C 790927 (1979) 315–321, [arXiv:1306.4669].
* (4) R. N. Mohapatra and G. Senjanovic, Neutrino Mass and Spontaneous Parity Nonconservation, Phys. Rev. Lett. 44 (1980) 912.
* (5) J. Schechter and J. Valle, Neutrino Masses in SU(2) x U(1) Theories, Phys. Rev. D 22 (1980) 2227.
* (6) J. Schechter and J. Valle, Neutrino Decay and Spontaneous Violation of Lepton Number, Phys. Rev. D 25 (1982) 774.
* (7) S. Davidson, E. Nardi, and Y. Nir, Leptogenesis, Phys. Rept. 466 (2008) 105–177, [arXiv:0802.2962].
* (8) L. Canetti, M. Drewes, and M. Shaposhnikov, Matter and Antimatter in the Universe, New J. Phys. 14 (2012) 095012, [arXiv:1204.4186].
* (9) D. Bödeker and W. Buchmüller, Baryogenesis from the weak scale to the grand unification scale, [arXiv:2009.07294].
* (10) J. Klarić, M. Shaposhnikov, and I. Timiryasov, Uniting low-scale leptogeneses, [arXiv:2008.13771].
* (11) J. Klaric, M. Shaposhnikov, and I. Timiryasov, Reconciling resonant leptogenesis and baryogenesis via neutrino oscillations, [arXiv:2103.16545].
# Strong edge geodetic problem on grids
Eva Zmazek
###### Abstract
Let $G=(V(G),E(G))$ be a simple graph. A set $S\subseteq V(G)$ is a strong
edge geodetic set if there exists an assignment of exactly one shortest path
between each pair of vertices from $S$, such that these shortest paths cover
all the edges $E(G)$. The cardinality of a smallest strong edge geodetic set
is the strong edge geodetic number $\operatorname{\rm sg_{e}}(G)$ of $G$. In
this paper, the strong edge geodetic problem is studied on the Cartesian
product of two paths. The exact value of the strong edge geodetic number is
computed for $P_{n}\,\square\,P_{2}$, $P_{n}\,\square\,P_{3}$ and
$P_{n}\,\square\,P_{4}$. Some general upper bounds for $\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{m})$ are also proved.
Faculty of Mathematics and Physics, University of Ljubljana, Slovenia
<EMAIL_ADDRESS>
Keywords: strong geodetic problem; strong edge geodetic problem; Cartesian
product of paths
AMS Subj. Class.: 05C12, 05C70
## 1 Introduction
Various covering problems involving shortest paths have been studied in the
literature. For example, the geodetic problem was introduced in $1993$ in [3] and its edge
version in $2007$ in [11]. The strong geodetic problem was introduced in $2016$,
the seminal paper [9] being published only recently. Since then, a
considerable amount of work has been done on the strong geodetic problem.
The exact value of the strong geodetic number has been computed for several
families of graphs. For example, for complete bipartite graphs $K_{n,m}$ it
was first computed for the cases $n=m$ and $n\gg m$ in [4], and later in
the general case in [1]. In [6], the strong geodetic number was computed for
some balanced complete multipartite graphs and it was shown that computing the
strong geodetic number of general complete multipartite graphs is NP-complete.
The exact strong geodetic number was also computed for crown graphs
$S_{0}^{n}$ in [1], Hamming graphs $K_{m}\,\square\,K_{n}$ in [5], Cartesian
products $K_{1,n}\,\square\,P_{l}$ in [5], thin ($n\gg m$) grids
$P_{n}\,\square\,P_{m}$ and thin ($n\gg m$) cylinders $P_{n}\,\square\,C_{m}$
in [7], and $i$-level complete Apollonian networks $A(i)$ in [9].
Several general bounds on the strong geodetic number have been given in terms
of different graph invariants. In [4], bounds on $\operatorname{\rm
sg_{e}}(G)$ were given depending on the diameter $\operatorname{\rm diam}(G)$
of $G$. In [12], an upper bound on the strong geodetic number was given using
the connectivity number. In [9], lower and upper bounds were given using the
isometric path number. Gledel et al. [2] proved an upper bound on the strong
geodetic number of Cartesian product graphs using the so-called strong
geodetic core number. In [1], an upper bound on the strong edge geodetic
number was given for hypercubes. A general upper bound for the strong geodetic
number of Cartesian product graphs was derived in [5], and an upper bound for
the strong geodetic number of the Cartesian product of a path with an
arbitrary graph was obtained in [7].
Using a reduction from the NP-complete dominating set problem, Manuel et al.
[9] proved that the strong geodetic problem is NP-complete. On the positive
side, Mezzini [10] gave a polynomial algorithm for computing the strong
geodetic number of outerplanar graphs. Some general properties of the
strong geodetic number have also been established. For example, in [12] the graphs with
strong geodetic number $2$, $n(G)$, or $n(G)-1$ were characterized. In
[5], relations between the strong geodetic number of a graph and its induced,
convex, or gated subgraphs were derived.
As with the geodetic problem and many other problems in graph theory, there is an
interesting edge version of the problem. In $2017$, the edge version of the
strong geodetic problem, called the strong edge geodetic problem, was
introduced in [8], but it has not received as much attention as the vertex version.
This gap is in part filled in this paper. In the seminal paper [8] it was
proved that the strong edge geodetic problem is NP-complete, and some general
upper and lower bounds were given using the isometric path number, the number
of simplicial vertices in a graph, and the number of convex components in a
graph. In this article, we show that even though the vertex and edge
versions of the strong geodetic problem seem similar at first sight, they
differ considerably. For example, in [7] it was proved that if $2\leq n\leq m$, then
$\operatorname{\rm sg}(P_{n}\,\square\,P_{m})\leq\left\lceil
2\sqrt{n}~{}\right\rceil$, while we will prove that this is not true for the
strong edge geodetic number $\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{m})$
when $m=3$ or $4$. The main results of this article are the following three
theorems.
###### Theorem 1.1.
If $n\geq 2$, then $\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{2})=\left\lceil 2\sqrt{n}~{}\right\rceil$.
###### Theorem 1.2.
If $n\geq 2$, then $\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{3})=\left\lceil 2\sqrt{n+1}~{}\right\rceil$.
###### Theorem 1.3.
If $n\geq 2$, then
$\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{4})=\begin{cases}2k+1;&n=k^{2}+h,0\leq h\leq
k-1,\\\ 2k+2;&n=k^{2}+h,k\leq h\leq 2k-1,\\\ 2k+3;&n=k^{2}+2k.\end{cases}$
Theorem 1.3 implies that $\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{4})=\left\lceil 2\sqrt{n+2}~{}\right\rceil$ for all
$n\in\mathbb{N}$ except when $n=k^{2}+k-1$ for some $k\in\mathbb{N}$. This can
also be interpreted as $\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{4})=\left\lceil 2\sqrt{n+1}~{}\right\rceil$ for all
$n\in\mathbb{N}$ except when $n=k^{2}+2k$ for some $k\in\mathbb{N}$. This
shows that the pattern from Theorems 1.1 and 1.2 does not extend to $m\geq 4$.
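The case analysis of Theorem 1.3 can be checked mechanically. The following sketch (our own illustration; the helper names are ours, not from the paper) encodes the piecewise formula and verifies both ceiling readings over a range of $n$, using the exact identity $\lceil 2\sqrt{x}\,\rceil=\operatorname{isqrt}(4x-1)+1$ for positive integers $x$ to avoid floating-point issues:

```python
from math import isqrt

def sg_e_p4(n):
    """Piecewise formula of Theorem 1.3 for sg_e(P_n x P_4), n >= 2."""
    k = isqrt(n)                 # write n = k^2 + h with 0 <= h <= 2k
    h = n - k * k
    if h <= k - 1:
        return 2 * k + 1
    if h <= 2 * k - 1:
        return 2 * k + 2
    return 2 * k + 3             # h == 2k

def ceil_2sqrt(x):
    """Exact ceil(2*sqrt(x)) for a positive integer x (no floating point)."""
    return isqrt(4 * x - 1) + 1

for n in range(2, 10_000):
    k, f = isqrt(n), sg_e_p4(n)
    # The ceil(2*sqrt(n+2)) reading fails exactly when n = k^2 + k - 1, ...
    assert (f == ceil_2sqrt(n + 2)) == (n != k * k + k - 1)
    # ... and the ceil(2*sqrt(n+1)) reading exactly when n = k^2 + 2k.
    assert (f == ceil_2sqrt(n + 1)) == (n != k * k + 2 * k)
```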
In the next section we formally define concepts needed in this paper and
prepare several preliminary results. Then, in Sections 3-5, we prove Theorems
1.1, 1.2, 1.3, respectively. In the last section we give three general upper
bounds on $\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{m})$.
## 2 Preliminaries
Let $G=(V(G),E(G))$ be a simple graph. An $x,y$-geodesic is a shortest path
between vertices $x$ and $y$. With $P(G;x,y)$ we denote the set of all
shortest paths in $G$ between vertices $x$ and $y$. A set $S\subseteq V(G)$ is
a _strong edge geodetic set_ if there exists an assignment of shortest paths
$P_{x,y}\in P(G;x,y)$ for every pair $x,y\in V(G)$, such that
$\bigcup\limits_{\\{x,y\\}\in\binom{S}{2}}E(P_{x,y})=E(G),$
where $E(P_{x,y})$ denotes the set of edges from the selected shortest path
$P_{x,y}$. The set of these shortest paths is called the _strong edge
covering_. The _strong edge geodetic number_ of $G$, denoted by
$\operatorname{\rm sg_{e}}(G)$, is the cardinality of a smallest strong edge
geodetic set of $G$.
_The Cartesian product_ $G\,\square\,H$ of graphs $G$ and $H$ is the graph on
the vertex set $V(G\,\square\,H)=V(G)\times V(H)$, where two vertices
$(g_{1},h_{1})$ and $(g_{2},h_{2})$, $g_{1},g_{2}\in V(G)$, $h_{1},h_{2}\in
V(H)$ are adjacent if $g_{1}g_{2}\in E(G)$ and $h_{1}=h_{2}$, or if
$g_{1}=g_{2}$ and $h_{1}h_{2}\in E(H)$. An edge $(g_{1},h_{1})(g_{2},h_{2})\in
E(G\,\square\,H)$ is said to be _horizontal_ if $h_{1}=h_{2}$ and is said to
be _vertical_ if $g_{1}=g_{2}$. A _grid_ is the Cartesian product of two
paths. The _$i$ -th row_, $1\leq i\leq m$, in $P_{n}\,\square\,P_{m}$ is the
vertex set $\\{(1,i),\dots,(n,i)\\}$ together with the horizontal edges
between them. Similarly, the _$j$ -th column_, $1\leq j\leq n$, in
$P_{n}\,\square\,P_{m}$ is the vertex set $\\{(j,1),\dots,(j,m)\\}$ together
with the vertical edges between them. Because of the commutativity of the
Cartesian product operation, $\operatorname{\rm
sg_{e}}(G\,\square\,H)=\operatorname{\rm sg_{e}}(H\,\square\,G)$. A subgraph
$H$ of a graph $G$ is _convex_ if for every pair of vertices $\\{x,y\\}$ in
$H$, every $x,y$-geodesic lies completely in $H$. A set of edges $F\subseteq
E(G)$ is a _convex edge-cut_ if $G-F$ has precisely two convex components.
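To make the definitions concrete, the following brute-force sketch (our own illustration, feasible only for very small grids) enumerates candidate sets $S$ and, for each pair of vertices of $S$, all shortest paths in the grid (which are exactly the monotone lattice paths), returning the smallest $S$ that admits a strong edge covering:

```python
from itertools import combinations, product

def all_geodesics(u, v):
    """All shortest u,v-paths in a grid: exactly the monotone lattice paths."""
    (x1, y1), (x2, y2) = u, v
    if u == v:
        return [[u]]
    paths = []
    if x1 != x2:
        step = 1 if x2 > x1 else -1
        paths += [[u] + p for p in all_geodesics((x1 + step, y1), v)]
    if y1 != y2:
        step = 1 if y2 > y1 else -1
        paths += [[u] + p for p in all_geodesics((x1, y1 + step), v)]
    return paths

def path_edges(path):
    return {frozenset(e) for e in zip(path, path[1:])}

def sg_e(n, m):
    """Brute-force strong edge geodetic number of P_n x P_m (tiny grids only)."""
    V = [(i, j) for i in range(1, n + 1) for j in range(1, m + 1)]
    E = {frozenset(((i, j), (i + 1, j))) for i in range(1, n) for j in range(1, m + 1)}
    E |= {frozenset(((i, j), (i, j + 1))) for i in range(1, n + 1) for j in range(1, m)}
    for size in range(2, len(V) + 1):
        for S in combinations(V, size):
            options = [all_geodesics(u, v) for u, v in combinations(S, 2)]
            # try every assignment of one geodesic per pair
            for choice in product(*options):
                if set().union(*map(path_edges, choice)) == E:
                    return size
    return len(V)
```

For example, it reproduces $\operatorname{\rm sg_{e}}(P_{2}\,\square\,P_{2})=3=\left\lceil 2\sqrt{2}~{}\right\rceil$ and $\operatorname{\rm sg_{e}}(P_{3}\,\square\,P_{2})=4$, in accordance with Theorem 1.1.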
###### Lemma 2.1.
If $n\geq 2$ and $m\geq 2$, then $\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{m})\geq\left\lceil 2\sqrt{n}~{}\right\rceil$.
###### Proof.
In [8, Corollary 6.4] it is proved that if $F$ is a convex edge-cut of a graph
$G$, then $\operatorname{\rm sg_{e}}(G)\geq\lceil 2\sqrt{|F|}\rceil.$ In our
case, the set of all vertical edges between the first and the second row of
$P_{n}\,\square\,P_{m}$ is a convex edge-cut of
$P_{n}\,\square\,P_{m}$ (see Fig. 1). Because there are exactly $n$ vertical
edges between the first and the second row of $P_{n}\,\square\,P_{m}$, the
inequality follows.
Figure 1: Edge-cut in $P_{n}\,\square\,P_{m}$.
∎
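One way to read the bound $\operatorname{\rm sg_{e}}(G)\geq\lceil 2\sqrt{|F|}\rceil$ is as a counting argument: if a strong edge geodetic set has $s$ vertices, $p$ of them in one convex component and $q=s-p$ in the other, then by convexity each chosen geodesic crosses $F$ at most once, so at most $p\cdot q\leq\lfloor s^{2}/4\rfloor$ edges of $F$ can be covered. The following sketch (our own illustration, not code from [8]) confirms that the smallest $s$ with $\lfloor s^{2}/4\rfloor\geq n$ is exactly $\lceil 2\sqrt{n}~\rceil$:

```python
from math import isqrt

def min_s(f):
    """Smallest s such that floor(s^2/4) >= f, i.e. some split p + q = s has p*q >= f."""
    s = 1
    while s * s // 4 < f:
        s += 1
    return s

# The smallest admissible s coincides with ceil(2*sqrt(f)) for every cut size f,
# where ceil(2*sqrt(f)) is computed exactly as isqrt(4f - 1) + 1.
for f in range(1, 5000):
    assert min_s(f) == isqrt(4 * f - 1) + 1
```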
Throughout the rest of this paper, we will use Algorithm 1 to prove upper
bounds on the strong edge geodetic number. The input of this algorithm
consists of two integers $n$ and $m$. The integer $n$ can be uniquely written as the sum
$n=k^{2}+h$, where $k$ and $h$ are integers and $0\leq h\leq 2k$. The algorithm
defines a set of vertices $S$ and a set of shortest paths, where for each pair
of vertices in $S$ it uses at most one shortest path between them, such that
the union of these shortest paths covers all the vertical edges in
$P_{n}\,\square\,P_{m}$.
Algorithm 1 first takes two vertices, $a_{1}=(1,1)$ and $b_{1}=(1,m)$, from the
first column, and covers the vertical edges in the first column by the unique
$a_{1},b_{1}$-geodesic. In the next step, if $k^{2}\geq 4$, the algorithm takes the
vertices $a_{2}=(2^{2},1)$ and $b_{2}=(2^{2},m)$ and covers the second column
of edges by the $a_{2},b_{1}$-geodesic, the third column of edges by the
$a_{1},b_{2}$-geodesic, and the fourth column of edges by the
$a_{2},b_{2}$-geodesic. After the $(i-1)$-th step, where $(i-1)^{2}\leq n$, all of the
first $(i-1)^{2}$ columns are already covered. In the $i$-th step, if
$i^{2}\leq n$, we add the vertices $a_{i}=(i^{2},1)$ and $b_{i}=(i^{2},m)$. We
cover the next $i-1$ columns by the $a_{i},b_{1}$-, …, $a_{i},b_{i-1}$-geodesics,
respectively, in such a way that each geodesic covers the leftmost column of edges
not yet covered. Similarly, we cover the $i-1$ columns of edges from the
$((i-1)^{2}+(i-1)+1)$-th to the $((i-1)^{2}+2(i-1))$-th column by the $a_{1},b_{i}$-,
…, $a_{i-1},b_{i}$-geodesics, respectively, each covering the leftmost column of edges
not yet covered. Finally, the algorithm covers the $i^{2}$-th column of edges by
the unique $a_{i},b_{i}$-geodesic. This way we cover all the vertical edges
of the first $k^{2}$ columns.
If $1\leq h\leq k$, we add the vertex $b_{k+1}=(n,m)$ and then cover the
remaining columns of edges by the $a_{1},b_{k+1}$-, …,
$a_{h},b_{k+1}$-geodesics, respectively, each covering the leftmost not yet
covered column of edges. If $k+1\leq h\leq 2k$, we add the vertices
$a_{k+1}=(n,1)$ and $b_{k+1}=(n,m)$. We cover the next $k$ columns by the
$a_{1},b_{k+1}$-, …, $a_{k},b_{k+1}$-geodesics, respectively, each covering
the leftmost not yet covered column of edges. Similarly, we cover the
remaining columns of edges by the $a_{k+1},b_{1}$-, …,
$a_{k+1},b_{h-k}$-geodesics, respectively, each covering the leftmost not yet
covered column of edges.
Input: integer $n=k^{2}+h$, where $0\leq h\leq 2k$, and integer $m$
Result: strong edge geodetic set and a set of shortest paths in
$P_{n}\,\square\,P_{m}$, $m\geq 2$, that cover all vertical edges
for _$i=1,\dots,k$_ do
    $a_{i}=\left(i^{2},1\right)$;
    $b_{i}=\left(i^{2},m\right)$;
    for _$j=1,\dots,i-1$_ do
        connect vertices $a_{i}$ and $b_{j}$ with a geodesic that covers all vertical edges in the $((i-1)^{2}+j)$-th column of $P_{n}\,\square\,P_{m}$
    end for
    for _$j=1,\dots,i-1$_ do
        connect vertices $a_{j}$ and $b_{i}$ with a geodesic that covers all vertical edges in the $(i(i-1)+j)$-th column of $P_{n}\,\square\,P_{m}$
    end for
    connect vertices $a_{i}$ and $b_{i}$ with the unique geodesic between them
end for
if _$1\leq h\leq k$_ then
    $b_{k+1}=\left(n,m\right)$;
    for _$j=1,\dots,h$_ do
        connect vertices $a_{j}$ and $b_{k+1}$ with a geodesic that covers all vertical edges in the $(k^{2}+j)$-th column of $P_{n}\,\square\,P_{m}$
    end for
else if _$k+1\leq h\leq 2k$_ then
    $a_{k+1}=\left(n,1\right)$;
    $b_{k+1}=\left(n,m\right)$;
    for _$j=1,\dots,k$_ do
        connect vertices $a_{j}$ and $b_{k+1}$ with a geodesic that covers all vertical edges in the $(k^{2}+j)$-th column of $P_{n}\,\square\,P_{m}$
    end for
    for _$j=1,\dots,h-k$_ do
        connect vertices $a_{k+1}$ and $b_{j}$ with a geodesic that covers all vertical edges in the $(k(k+1)+j)$-th column of $P_{n}\,\square\,P_{m}$
    end for
end if
Algorithm 1 Covering vertical edges in grids
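For concreteness, the vertex selection and column assignment of Algorithm 1 can be sketched in Python. This is our illustration, not part of the paper: each geodesic is recorded only by its two endpoints and by the single column that holds all of its vertical edges, and the function name `cover_vertical_edges` is ours.

```python
import math

def cover_vertical_edges(n, m):
    """Sketch of Algorithm 1: choose the vertices a_i, b_i and assign to each
    used pair one geodesic, identified by the column it covers vertically."""
    k = math.isqrt(n)
    h = n - k * k  # n = k^2 + h with 0 <= h <= 2k
    a = {i: (i * i, 1) for i in range(1, k + 1)}
    b = {i: (i * i, m) for i in range(1, k + 1)}
    paths = []  # (endpoint in row 1, endpoint in row m, covered column)
    for i in range(1, k + 1):
        for j in range(1, i):              # a_i, b_j covers column (i-1)^2 + j
            paths.append((a[i], b[j], (i - 1) ** 2 + j))
        for j in range(1, i):              # a_j, b_i covers column i(i-1) + j
            paths.append((a[j], b[i], i * (i - 1) + j))
        paths.append((a[i], b[i], i * i))  # unique a_i, b_i-geodesic
    if 1 <= h <= k:
        b[k + 1] = (n, m)
        for j in range(1, h + 1):          # a_j, b_{k+1} covers column k^2 + j
            paths.append((a[j], b[k + 1], k * k + j))
    elif h > k:
        a[k + 1], b[k + 1] = (n, 1), (n, m)
        for j in range(1, k + 1):          # a_j, b_{k+1} covers column k^2 + j
            paths.append((a[j], b[k + 1], k * k + j))
        for j in range(1, h - k + 1):      # a_{k+1}, b_j covers column k(k+1) + j
            paths.append((a[k + 1], b[j], k * (k + 1) + j))
    return list(a.values()) + list(b.values()), paths

vertices, paths = cover_vertical_edges(20, 2)
assert sorted(p[2] for p in paths) == list(range(1, 21))  # each column covered once
assert len(vertices) == math.ceil(2 * math.sqrt(20))      # 9 vertices for n = 20
```

Each pair of chosen vertices appears in at most one recorded path, mirroring the condition in the definition of a strong edge geodetic set.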
We conclude the preliminaries with the following technical lemma to be used in
our proofs.
###### Lemma 2.2.
$\left\lceil 2\sqrt{n\phantom{!}}\right\rceil=\begin{cases}2k;&n=k^{2},~{}k\in\mathbb{N},\\ 2k+1;&n=k^{2}+h,~{}k\in\mathbb{N},~{}1\leq h\leq k,\\ 2k+2;&n=k^{2}+h,~{}k\in\mathbb{N},~{}k+1\leq h\leq 2k.\end{cases}$
###### Proof.
If $n=k^{2}$ for some $k\in\mathbb{N}$, then $\left\lceil
2\sqrt{n\phantom{!}}\right\rceil=\left\lceil 2\sqrt{k^{2}}\right\rceil=2k$.
Suppose $n=k^{2}+h$ for some $k,h\in\mathbb{N}$ where $1\leq h\leq k$. Then
because $h>0$ and $\left\lceil 2\sqrt{k^{2}}\right\rceil$ is an integer, it
holds
$\left\lceil 2\sqrt{n\phantom{!}}\right\rceil=\left\lceil
2\sqrt{k^{2}+h}\right\rceil>\left\lceil 2\sqrt{k^{2}}\right\rceil=2k.$
Also, because $h\leq k$, it holds
$\left\lceil 2\sqrt{n\phantom{!}}\right\rceil\leq\left\lceil
2\sqrt{k^{2}+k}\right\rceil=\left\lceil
2\sqrt{(k+1/2)^{2}-1/4}\right\rceil\leq\left\lceil
2\sqrt{(k+1/2)^{2}}\right\rceil=2k+1.$
Because $\left\lceil 2\sqrt{n}~{}\right\rceil$ is an integer greater than $2k$
and less than or equal to $2k+1$, we conclude that $\left\lceil
2\sqrt{n}~{}\right\rceil=2k+1$ when $n=k^{2}+h$, $1\leq h\leq k$.
Now suppose that $n=k^{2}+h$ for some $k,h\in\mathbb{N}$ where $k+1\leq h\leq
2k$. Then because $h\geq k+1$ and $\left\lceil
2\sqrt{(k+1/2)^{2}}\right\rceil$ is an integer, it holds
$\left\lceil 2\sqrt{n\phantom{!}}\right\rceil\geq\left\lceil
2\sqrt{k^{2}+k+1}\right\rceil=\left\lceil
2\sqrt{(k+1/2)^{2}+3/4}\right\rceil>\left\lceil
2\sqrt{(k+1/2)^{2}}\right\rceil=2k+1.$
Also, because $h\leq 2k$, it holds
$\left\lceil 2\sqrt{n\phantom{!}}\right\rceil\leq\left\lceil
2\sqrt{k^{2}+2k}\right\rceil\leq\left\lceil
2\sqrt{k^{2}+2k+1}\right\rceil=2k+2.$
Because $\left\lceil 2\sqrt{n}~{}\right\rceil$ is an integer greater than
$2k+1$ and less than or equal to $2k+2$, we get $\left\lceil
2\sqrt{n}~{}\right\rceil=2k+2$ when $n=k^{2}+h$, $k+1\leq h\leq 2k$. ∎
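The case analysis of Lemma 2.2 is also easy to confirm numerically; the following sketch (ours, not from the paper) compares the closed form with an exact integer evaluation of $\left\lceil 2\sqrt{n}\right\rceil$:

```python
import math

def ceil_2_sqrt(n):
    # exact integer value of ceil(2*sqrt(n)) = ceil(sqrt(4n))
    t = math.isqrt(4 * n)
    return t if t * t == 4 * n else t + 1

def lemma_2_2(n):
    # closed form from Lemma 2.2, with n = k^2 + h, 0 <= h <= 2k
    k = math.isqrt(n)
    h = n - k * k
    if h == 0:
        return 2 * k
    return 2 * k + 1 if h <= k else 2 * k + 2

assert all(ceil_2_sqrt(n) == lemma_2_2(n) for n in range(1, 100000))
```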
## 3 Proof of Theorem 1.1
The lower bound $\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{2})\geq\left\lceil 2\sqrt{n}~{}\right\rceil$
follows from Lemma 2.1. It remains to prove the upper bound $\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{2})\leq\left\lceil 2\sqrt{n}~{}\right\rceil$.
We will use Algorithm 1 for $m=2$ to cover all the vertical edges in
$P_{n}\,\square\,P_{2}$. When $n=k^{2}$ for some $k\in\mathbb{N}$, the number
of vertices used in Algorithm 1 is exactly $2k$; these are the vertices
$a_{1},\dots,a_{k},b_{1},\dots,b_{k}$. When $n=k^{2}+h$, $k\in\mathbb{N}$,
$1\leq h\leq k$, the number of vertices used in Algorithm 1 is exactly $2k+1$,
and when $n=k^{2}+h$, $k\in\mathbb{N}$, $k+1\leq h\leq 2k$, exactly $2k+2$
vertices are used. By Lemma 2.2, we see that Algorithm 1 uses exactly $\lceil
2\sqrt{n}~{}\rceil$ vertices.
It remains to cover the horizontal edges. Observe that Algorithm 1 uses only
the $a_{j_{1}}$,$b_{j_{2}}$-geodesics for some $j_{1},j_{2}\in\mathbb{N}$.
If $n=k^{2}$, all horizontal edges from the first row can be covered by the
unique $a_{1}$,$a_{k}$-geodesic. Similarly, all horizontal edges from the
second row can be covered by the unique $b_{1}$,$b_{k}$-geodesic.
If $n=k^{2}+h$, where $1\leq h\leq k$, we can cover the horizontal edges in
the first row up to the $k^{2}$-th column with the unique
$a_{1}$,$a_{k}$-geodesic. The remaining horizontal edges in the first row are
already covered by the shortest path from Algorithm 1 that covers the vertical
edges in the $n$-th column. We can cover the horizontal edges from the second
row with the unique $b_{1}$,$b_{k+1}$-geodesic.
If $n=k^{2}+h$, where $k+1\leq h\leq 2k$, we can cover the horizontal edges
from the first row with the unique $a_{1}$,$a_{k+1}$-geodesic, and the
horizontal edges from the second row with the unique
$b_{1}$,$b_{k+1}$-geodesic.
This proves Theorem 1.1. See Fig. 2 for some typical optimal strong edge
geodetic sets in $P_{n}\,\square\,P_{2}$.
Figure 2: Strong edge geodetic set for graphs $P_{16}\,\square\,P_{2}$,
$P_{20}\,\square\,P_{2}$ and $P_{24}\,\square\,P_{2}$.
## 4 Proof of Theorem 1.2
We first show:
###### Lemma 4.1.
If $n\geq 2$, then $\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{3})\leq\left\lceil 2\sqrt{n}~{}\right\rceil+1$.
###### Proof.
Use Algorithm 1 and then cover horizontal edges in the first row and in the
last row in the same way as in the proof of Theorem 1.1. Add vertex $(n,2)$ to
the existing vertex set from Algorithm 1, and connect it with $(1,1)$ by a
geodesic that covers all horizontal edges from the second row, see Fig. 3. ∎
Figure 3: Shortest path between $(1,1)$ and $(n,2)$.
By Lemma 4.1 and Lemma 2.1 (for $m=3$) we know that for every $n\geq 2$, the
strong edge geodetic number $\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{3})$
is either $\left\lceil 2\sqrt{n}~{}\right\rceil$ or $\left\lceil
2\sqrt{n}~{}\right\rceil+1$. Because by Lemma 2.2 it holds $\left\lceil
2\sqrt{n}~{}\right\rceil=\left\lceil 2\sqrt{n+1}~{}\right\rceil$, except when
$n=k^{2}$ or $n=k^{2}+k$ for some $k\in\mathbb{N}$, it is enough to prove that
when $n=k^{2}$ or $n=k^{2}+k$, there is no strong edge geodetic set of size
$\left\lceil 2\sqrt{n}~{}\right\rceil$ and to find a strong edge geodetic set
of size $\left\lceil 2\sqrt{n}~{}\right\rceil$ in the other cases.
First, let us show that there exists a strong edge geodetic set of size
$\left\lceil 2\sqrt{n}~{}\right\rceil$ on $P_{n}\,\square\,P_{3}$ for
$n=k^{2}+h$, where $1\leq h\leq k-1$ or $k+1\leq h\leq 2k$.
###### Proposition 4.2.
If $n=k^{2}+h$, $n\geq 3$, $k\geq 1$, where $1\leq h\leq k-1$ or $k+1\leq
h\leq 2k$, then
$\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{3})\leq\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{2}).$
###### Proof.
To cover all the vertical edges and all the horizontal edges in the second
row, we will adjust Algorithm 1. We will divide this adjustment into two
cases.
###### Case 1:
$n=k^{2}+h$, $1\leq h\leq k-1$ (see Fig. 4).
We can see that in this case, because $h\leq k-1$, Algorithm 1 never uses an
$a_{k}$,$b_{k+1}$-geodesic and that the $a_{1}$,$b_{k+1}$-geodesic covers all
vertical edges in the $(k^{2}+1)$-th column. Because $a_{k}=(k^{2},1)$, we can
add an $a_{k}$,$b_{k+1}$-geodesic that covers all the vertical edges in the
$(k^{2}+1)$-th column and then replace the existing $a_{1}$,$b_{k+1}$-geodesic
with one that covers all the horizontal edges in the second row. This way
we have covered all the vertical edges in $P_{n}\,\square\,P_{3}$ and also all
the horizontal edges in the second row.
In the same way as in $P_{n}\,\square\,P_{2}$ we can cover the horizontal
edges in the first row (using the $a_{1}$,$a_{k}$-geodesic and also the
existing $a_{h}$,$b_{k+1}$-geodesic) and in the third row (using the
$b_{1}$,$b_{k+1}$-geodesic) of $P_{n}\,\square\,P_{3}$.
Figure 4: Strong edge geodetic set in $P_{3^{2}+2}\,\square\,P_{3}$.
###### Case 2:
$n=k^{2}+h$, $k+1\leq h\leq 2k$ (see Fig. 5).
In this case Algorithm 1 does not use the unique
$a_{k+1}$,$b_{k+1}$-geodesic. This shortest path covers all the vertical
edges in the $n$-th column. Because the $a_{k+1}$,$b_{h-k}$-geodesic from
Algorithm 1 also covers all the vertical edges from the $n$-th column, we can
add the $a_{k+1}$,$b_{k+1}$-geodesic and then replace the existing
$a_{k+1}$,$b_{h-k}$-geodesic with one that covers all the vertical edges in
the $(k^{2}+1)$-th column. Similarly to the previous case, we can then replace
the existing $a_{1}$,$b_{k+1}$-geodesic (which covers all the vertical edges
in the $(k^{2}+1)$-th column in Algorithm 1) with one that covers all the
horizontal edges in the second row of $P_{n}\,\square\,P_{3}$.
In the same way as in $P_{n}\,\square\,P_{2}$ we now cover the horizontal
edges in the first row (using the $a_{1}$,$a_{k+1}$-geodesic) and in the third
row (using the $b_{1}$,$b_{k+1}$-geodesic) of $P_{n}\,\square\,P_{3}$.
Figure 5: Strong edge geodetic set in $P_{3^{2}+2\cdot 3}\,\square\,P_{3}$.
In both cases we have adjusted Algorithm 1 such that all the vertical edges
together with all the horizontal edges are covered without changing the set of
vertices used in the algorithm. It follows that $\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{3})\leq\left\lceil
2\sqrt{n}~{}\right\rceil=\left\lceil 2\sqrt{n+1}~{}\right\rceil$ for all
$n\in\mathbb{N}$ with $n\neq k^{2}$ and $n\neq k^{2}+k$ for every
$k\in\mathbb{N}$. ∎
In the second part of the proof we will prove that there is no strong edge
geodetic set of size $\left\lceil 2\sqrt{n}~{}\right\rceil$ if $n=k^{2}$ or
$n=k^{2}+k$.
###### Proposition 4.3.
If $n=k^{2}$ or $n=k^{2}+k$ for some $k\geq 2$, $k\in\mathbb{N}$, then
$\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{3})>\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{2}).$
###### Proof.
Let us define the function
$f_{s}(a,b,c)=ab+bc+2ac,$
which gives an upper bound on how many different vertical edges can be covered
by shortest paths between $s$ vertices of $P_{n}\,\square\,P_{3}$, where $a$
vertices lie in the first row, $b$ vertices in the second row, and $c$
vertices in the third row (so $a+b+c=s$), and for each pair of these vertices
we use at most one shortest path. The extrema of $f_{s}$ under the constraint
$a+b+c=s$ can be obtained by computer. For example, if $a\geq 0$, $b\geq 0$,
and $c\geq 0$, $a,b,c\in\mathbb{R}$, then the maximum of $f_{s}$ is equal to
$s^{2}/2$, and if $a\geq 0$, $b\geq 1$, and $c\geq 0$, $a,b,c\in\mathbb{R}$,
then the maximum of $f_{s}$ is equal to $(s^{2}-1)/2$. We prove Proposition
4.3 by assuming the opposite in each case.
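The two maxima quoted above can be reproduced by a coarse grid search over the simplex $a+b+c=s$; the sketch below is our reconstruction of such a computer check (the function name `max_f3` is ours), and it agrees with the stated values up to the grid resolution:

```python
def max_f3(s, b_min=0.0, step=0.01):
    # grid search for the maximum of f(a,b,c) = ab + bc + 2ac over the
    # simplex a + b + c = s with a, c >= 0 and b >= b_min
    best = 0.0
    a = 0.0
    while a <= s:
        b = b_min
        while a + b <= s:
            c = s - a - b
            best = max(best, a * b + b * c + 2 * a * c)
            b += step
        a += step
    return best

s = 6
assert abs(max_f3(s) - s * s / 2) < 0.01                   # b >= 0: max s^2/2
assert abs(max_f3(s, b_min=1.0) - (s * s - 1) / 2) < 0.01  # b >= 1: (s^2-1)/2
```

For $b\geq 0$ the maximum is attained at $b=0$, $a=c=s/2$; for $b\geq 1$ at $b=1$, $a=c=(s-1)/2$.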
###### Case 1:
$n=k^{2}$.
Suppose that there exists a strong edge geodetic set $S$ in
$P_{n}\,\square\,P_{3}$ with $2k$ elements. If this set contains a vertex from
the second row (in which case $b\geq 1$), $f_{s}(a,b,c)$ is at most
$(s^{2}-1)/2$, which is in our case (when $s=2k$) equal to $(4k^{2}-1)/2$.
This is less than the number $2k^{2}$ of all the vertical edges in
$P_{n}\,\square\,P_{3}$, which in turn means that if there exists such a
strong edge geodetic set, then all of its vertices are in the first and the
third row ($b=0$).
Now consider the edge $(1,2)(2,2)$. A shortest path that covers this edge has
to have one endpoint at $(1,1)$ or $(1,3)$, which means that at least one of
these vertices is in $S$. Without loss of generality we can assume that it has
one endpoint at $(1,1)$ (Fig. 6a). This shortest path then also covers the
edge $(1,1)(1,2)$. If $(1,3)\in S$, then the edge $(1,1)(1,2)$ is also covered
by the unique $(1,1)$,$(1,3)$-geodesic (Fig. 6b). Otherwise, the shortest
path that covers the edge $(1,2)(1,3)$ also covers the edge $(1,1)(1,2)$ (Fig.
6c). In both cases the edge $(1,1)(1,2)$ is covered at least twice, which
means that with the set of $2k$ vertices we can only find shortest paths
between them that cover at most $\max(f_{2k})-1\leq 2k^{2}-1$ different
vertical edges, which is again less than the number $2k^{2}$ of all the
vertical edges in $P_{n}\,\square\,P_{3}$. This implies that such a strong
edge geodetic set $S$ cannot exist.
Figure 6: Covering the edge $(1,2)(2,2)$ in $P_{n}\,\square\,P_{3}$ when
$n=k^{2}$.
###### Case 2:
$n=k^{2}+k$.
Suppose that there exists a strong edge geodetic set $S$ of
$P_{n}\,\square\,P_{3}$ with $2k+1$ elements. If this set does not include the
vertex $(1,2)$, we can, similarly as in the previous case, without loss of
generality conclude that the edge $(1,1)(1,2)$ is covered at least twice,
which implies that with a set of $2k+1$ vertices we can only find shortest
paths between them that cover at most
$\max(f_{2k+1})-1\leq(2k+1)^{2}/2-1=2k^{2}+2k-1/2$ different vertical edges,
which is less than the number $2(k^{2}+k)$ of all the vertical edges in
$P_{n}\,\square\,P_{3}$. This implies that if such a strong edge geodetic set
in $P_{n}\,\square\,P_{3}$ exists, it includes the vertex $(1,2)$. By
symmetry, it also includes the vertex $(n,2)$. But then, because $b\geq 2$,
the value of $f_{2k+1}(a,b,c)$ is less than or equal to $((2k+1)^{2}-4)/2$,
which is again less than the number $2k^{2}+2k$ of vertical edges in
$P_{n}\,\square\,P_{3}$.
Since in both cases we got a contradiction, we conclude that
$\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{3})>\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{2})$. ∎
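The counting inequalities used in the two cases of the proof can be sanity-checked for a range of $k$; the following sketch (ours, not from the paper) verifies them:

```python
for k in range(2, 1000):
    s, n = 2 * k, k * k          # Case 1: 2k vertices, 2k^2 vertical edges
    assert (s * s - 1) / 2 < 2 * n   # a vertex in row 2 covers too few edges
    assert s * s / 2 - 1 < 2 * n     # one doubly covered edge already too much
    s, n = 2 * k + 1, k * k + k  # Case 2: 2k+1 vertices, 2(k^2+k) vertical edges
    assert s * s / 2 - 1 < 2 * n     # (1,2) not in S: a doubly covered edge
    assert (s * s - 4) / 2 < 2 * n   # (1,2) and (n,2) in S: b >= 2
```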
Propositions 4.2 and 4.3, together with Lemma 2.1 for $m=3$, and Lemma 4.1
imply Theorem 1.2.
## 5 Proof of Theorem 1.3
###### Lemma 5.1.
$\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{4})\leq\left\lceil
2\sqrt{n}~{}\right\rceil+1$.
###### Proof.
Use Algorithm 1 and cover horizontal edges in the first row and in the last
row in the same way as in the proof of Theorem 1.1. To the existing vertex set
from Algorithm 1, add vertex $(n,2)$ and connect it with $(1,1)$ by a geodesic
that covers all horizontal edges from the second row (Fig. 7a) and with
$(1,4)$ by a geodesic that covers all horizontal edges from the third row
(Fig. 7b).
Figure 7: $(1,1)$,$(n,2)$-geodesic and $(1,4)$,$(n,2)$-geodesic.
∎
By Lemma 5.1 and Lemma 2.1 for $m=4$, we know that for every $n\geq 2$, the
strong edge geodetic number $\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{4})$
is either $\left\lceil 2\sqrt{n}~{}\right\rceil$ or $\left\lceil
2\sqrt{n}~{}\right\rceil+1$. From Lemma 2.2 we see that Theorem 1.3 says that
$\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{4})=\left\lceil
2\sqrt{n}~{}\right\rceil$ except when $n=k^{2}$, $n=k^{2}+k$, or $n=k^{2}+2k$
for some $k\in\mathbb{N}$. Hence it is enough to prove that when $n=k^{2}$,
$n=k^{2}+k$, or $n=k^{2}+2k$ there is no strong edge geodetic set of size
$\left\lceil 2\sqrt{n}~{}\right\rceil$ and to find a strong edge geodetic set
of size $\left\lceil 2\sqrt{n}~{}\right\rceil$ in the other cases.
First, let us show that there exists a strong edge geodetic set of size
$\left\lceil 2\sqrt{n}~{}\right\rceil$ for $P_{n}\,\square\,P_{4}$ when
$n=k^{2}+h$ for some $k\in\mathbb{N}$, where $1\leq h\leq k-1$, or $k+1\leq
h\leq 2k-1$.
###### Proposition 5.2.
If $n=k^{2}+h$, $n\geq 3$, $k\geq 1$, where $1\leq h\leq k-1$ or $k+1\leq
h\leq 2k-1$, then
$\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{4})\leq\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{2}).$
###### Proof.
To cover all the vertical edges and the horizontal edges in the second and the
third row, we will adjust Algorithm 1. We will divide this adjustment into two
cases.
###### Case 1:
$n=k^{2}+h$, $1\leq h\leq k-1$ (see Fig. 8).
First, let us use Algorithm 1. In the next step, replace the existing vertex
$b_{k+1}=(n,m)$ with the vertex $c=(n,3)$ and for $j=h,\dots,1$ replace
the existing $a_{j}$,$b_{k+1}$-geodesic with two shortest paths: the
$a_{j+1}$,$c$-geodesic (it exists because $h\leq k-1$) that covers the edges
$(k^{2}+j,1)(k^{2}+j,2)$ and $(k^{2}+j,2)(k^{2}+j,3)$, and the
$b_{j+1}$,$c$-geodesic that covers the edge $(k^{2}+j,3)(k^{2}+j,4)$. This
replacement is well defined because we replaced all the shortest paths that
had one endpoint at $b_{k+1}$ and added some new shortest paths that have one
endpoint at $c$, so the condition that for each pair of vertices from the set
$\\{a_{1},\dots,a_{k},b_{1},\dots,b_{k},c\\}$ we use at most one shortest
path still holds.
Figure 8: Strong edge geodetic set in $P_{3^{2}+2}\,\square\,P_{4}$.
In this way we have adjusted Algorithm 1 such that it covers all the vertical
edges in $P_{n}\,\square\,P_{4}$, while it uses neither the
$a_{1}$,$c$-geodesic nor the $b_{1}$,$c$-geodesic. This means that we can
add the $a_{1}$,$c$-geodesic that covers all the horizontal edges in the
second row and the $b_{1}$,$c$-geodesic that covers all the horizontal edges
in the third row. Some horizontal edges from the first row are already covered
by the $a_{h+1}$,$c$-geodesic, and the rest of them can be covered by the
unique $a_{1}$,$a_{k}$-geodesic. By symmetry, some of the horizontal edges
from the fourth row are covered by the $b_{h+1}$,$c$-geodesic, and the
remaining ones are covered by the $b_{1}$,$b_{k}$-geodesic.
###### Case 2:
$n=k^{2}+h$, $k+1\leq h\leq 2k-1$ (see Fig. 9).
In this case, because $h\leq 2k-1$, Algorithm 1 never uses an
$a_{k+1}$,$b_{k}$-geodesic. It also never uses the unique
$a_{k+1}$,$b_{k+1}$-geodesic. If we add this shortest path, we can replace the
existing $b_{h-k}$,$a_{k+1}$-geodesic (this shortest path covers the vertical
edges in the $n$-th column) with the one that covers the vertical edges in the
$(k(k+1)+1)$-th column (in Algorithm 1 covered by the
$a_{k+1}$,$b_{1}$-geodesic). Observe that this step does nothing if $h=k+1$.
Either way, we can then replace the existing $a_{k+1}$,$b_{1}$-geodesic with
the one that covers all the horizontal edges in the third row.
Also, if we add the $a_{k+1}$,$b_{k}$-geodesic that covers all the vertical
edges in the $(k^{2}+1)$-th column, we can replace the existing
$a_{1}$,$b_{k+1}$-geodesic with the one that covers all the horizontal edges
in the second row.
Figure 9: Strong edge geodetic set in $P_{3^{2}+2\cdot 3-1}\,\square\,P_{4}$.
In the same way as in $P_{n}\,\square\,P_{2}$ we now cover the horizontal
edges in the first row (using the $a_{1}$,$a_{k+1}$-geodesic) and in the
fourth row (using the $b_{1}$,$b_{k+1}$-geodesic) of $P_{n}\,\square\,P_{4}$.
Since in both cases we found a strong edge geodetic set of size
$\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{2})$, we can conclude that
$\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{4})\leq\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{2})$ for all $n\in\mathbb{N}$ with $n\neq k^{2}$,
$n\neq k^{2}-1$, and $n\neq k^{2}+k$ for every $k\in\mathbb{N}$. ∎
We will now show that there is no strong edge geodetic set of size
$\left\lceil 2\sqrt{n}\right\rceil$ for $P_{n}\,\square\,P_{4}$ when
$n=k^{2}-1$, $k^{2}$, or $k^{2}+k$ for some $k\in\mathbb{N}$.
###### Proposition 5.3.
If $n=k^{2}$, $n=k^{2}-1$, or $n=k^{2}+k$ for some $k\geq 2$,
$k\in\mathbb{N}$, then
$\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{4})>\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{2}).$
###### Proof.
Let us define the function
$f_{s}(a,b,c,d)=ab+bc+cd+2ac+2bd+3ad,$
which gives an upper bound on how many different vertical edges can be covered
by shortest paths between $s$ vertices, where $a$ vertices are in the first
row, $b$ vertices in the second row, $c$ vertices in the third row, and $d$
vertices in the fourth row of $P_{n}\,\square\,P_{4}$ (so $a+b+c+d=s$). The
maximum values of the function $f_{s}$ under various additional constraints
are again computed by computer and are gathered in Table 1.
conditions | upper bound
---|---
$a,b,c,d\geq 0$ | $f_{s}(a,b,c,d)\leq{1\over 4}(3s^{2})$
$a,c,d\geq 0$, $b\geq 1$ | $f_{s}(a,b,c,d)\leq{1\over 12}(9s^{2}-8)$
$a,b,c,d\geq 0$, $b+c\geq 2$ | $f_{s}(a,b,c,d)\leq{1\over 4}(3s^{2}-8)$
$a,b,c,d\geq 0$, $b+c\geq 3$ | $f_{s}(a,b,c,d)\leq{1\over 4}(3s^{2}-18)$
Table 1: Maximal values of function $f_{s}(a,b,c,d)$.
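The entries of Table 1 can likewise be reproduced by a grid search over $a+b+c+d=s$ under the listed constraints. The sketch below is our reconstruction of such a computer check, not the authors' code; the function name `max_f4` is ours.

```python
def max_f4(s, step=0.05, b_min=0.0, bc_min=0.0):
    # grid search for the max of f = ab + bc + cd + 2ac + 2bd + 3ad over the
    # simplex a + b + c + d = s, with b >= b_min and b + c >= bc_min
    best = 0.0
    a = 0.0
    while a <= s:
        b = b_min
        while a + b <= s:
            c = max(0.0, bc_min - b)  # enforce b + c >= bc_min
            while a + b + c <= s:
                d = s - a - b - c
                f = a * b + b * c + c * d + 2 * a * c + 2 * b * d + 3 * a * d
                best = max(best, f)
                c += step
            b += step
        a += step
    return best

s = 4
assert abs(max_f4(s) - 3 * s * s / 4) < 0.05                    # a,b,c,d >= 0
assert abs(max_f4(s, b_min=1.0) - (9 * s * s - 8) / 12) < 0.05  # b >= 1
assert abs(max_f4(s, bc_min=2.0) - (3 * s * s - 8) / 4) < 0.05  # b + c >= 2
assert abs(max_f4(s, bc_min=3.0) - (3 * s * s - 18) / 4) < 0.05 # b + c >= 3
```

For instance, the bound ${1\over 4}(3s^{2}-8)$ is attained at $a=b=c=d=s/4$ when $s=4$.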
Every strong edge geodetic set $S$ has to include at least one vertex from the
first column, otherwise it would not be possible to cover the edge
$(1,1)(1,2)$. Similarly, every strong edge geodetic set includes at least one
vertex from the last column. Depending on the strong edge geodetic set $S$ in
$P_{n}\,\square\,P_{4}$, we define the type of the first and of the last
column of vertices. All the types for a strong edge geodetic set with at least
one vertex in the first and at least one vertex in the last column are
gathered in Table 2. For example, if $(1,1),(1,2)\in S$ and
$(1,3),(1,4)\not\in S$, we say that the first column of vertices in
$P_{n}\,\square\,P_{4}$ is of type $D$. Symmetrically, if
$(1,1),(1,2)\not\in S$ and $(1,3),(1,4)\in S$, we also say that the first
column of vertices in $P_{n}\,\square\,P_{4}$ is of type $D$. The type of the
last column is defined symmetrically.
Type $T$ | $r(T)$
---|---
$A$ | $7$
$B$ | $2$
$C$ | $4$
$D$ | $1$
$E$ | $1$
$F$ | $2$
$G$ | $0$
$H$ | $3$
$I$ | $1$
Table 2: Different types of a strong edge geodetic set in the first or the
last column.
For a shortest path $P$ in $P_{n}\,\square\,P_{4}$, let $E_{i}(P)$ denote the
set of edges of $P$ in the $i$-th column, that is,
$E_{i}(P)=E(P)\cap\\{(i,1)(i,2),(i,2)(i,3),(i,3)(i,4)\\}.$
For a strong edge geodetic covering $C=\\{P_{x,y}\\}$, set
$r_{C}^{1}=\sum_{\\{x,y\\}\in\binom{S}{2}}|E_{1}(P_{x,y})|-3.$
Roughly speaking, $r_{C}^{1}$ measures redundancy of the strong edge geodetic
covering in the first column of $P_{n}\,\square\,P_{4}$. Analogously, the
redundancy $r_{C}^{n}$ with respect to the last column is introduced.
For each type $T$ of the first column determined by $S$, we can now compute
the minimum number of redundant coverings in the first column of edges as
$r_{1}(T)=\min_{C}\\{r_{C}^{1}\\}$, where $C$ is a strong edge geodetic
covering for a strong edge geodetic set $S$, where the first column in
$P_{n}\,\square\,P_{4}$ is of type $T$. Similarly we can define the minimum
number of redundant coverings in the last column of edges, $r_{n}(T)$. By
symmetry, these two numbers are the same, so we can denote
$r(T)=r_{1}(T)=r_{n}(T)$. For each type $T$, the number $r(T)$ is listed in
the last column of Table 2.
We will show how to compute $r(T)$ for type $T=C$; for the other types it is
similar. We can without loss of generality assume
$\\{(1,1),(1,2),(1,4)\\}\subset S$. Between the pairs of these three vertices,
there are three unique shortest paths $P_{1}$, $P_{2}$, and $P_{3}$. To cover
the horizontal edge $(1,3)(2,3)$ we need another shortest path $P_{4}$ that
has one endvertex in the vertex set $\\{(1,1),(1,2),(1,4)\\}$. Because
$(1,3)\not\in S$, this path includes at least one vertical edge from the first
column, which implies
$\sum_{\\{x,y\\}\in\binom{S}{2}}|E_{1}(P_{x,y})|-3\geq|E_{1}(P_{1})|+|E_{1}(P_{2})|+|E_{1}(P_{3})|+|E_{1}(P_{4})|-3\geq
4.$
Suppose now that $S$ is a strong edge geodetic set for $P_{n}\,\square\,P_{4}$
of cardinality $|S|=\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{2})$. We
distinguish four different cases.
###### Case 1:
$S$ does not contain any vertex from the set $\\{(1,2),(1,3),(n,2),(n,3)\\}$.
In this case the first and the last column are either of type $F$ or type $G$,
which implies that $r_{1}+r_{n}\geq 4$. Because
$f_{s}(a,b,c,d)\leq{3s^{2}\over 4}$ when $a,b,c,d\geq 0$, it holds
$f_{s}(a,b,c,d)-4\leq{3\cdot(2k)^{2}\over 4}-4=3k^{2}-4<3n\text{ for }n=k^{2}-1;$
$f_{s}(a,b,c,d)-4\leq{3\cdot(2k)^{2}\over 4}-4=3k^{2}-4<3n\text{ for }n=k^{2};$
$f_{s}(a,b,c,d)-4\leq{3\cdot(2k+1)^{2}\over 4}-4=3k^{2}+3k-{13\over 4}<3n\text{ for }n=k^{2}+k,$
for $s=\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{2})$.
###### Case 2:
$S$ contains at least three vertices from the second and the third row.
Because $f_{s}(a,b,c,d)\leq{1\over 4}(3s^{2}-18)$ when $a,b,c,d\geq 0$,
$b+c\geq 3$, it holds
$f_{s}(a,b,c,d)\leq{1\over 4}(3(2k)^{2}-18)=3k^{2}-{18\over 4}<3n\text{ for }n=k^{2}-1;$
$f_{s}(a,b,c,d)\leq{1\over 4}(3(2k)^{2}-18)=3k^{2}-{18\over 4}<3n\text{ for }n=k^{2};$
$f_{s}(a,b,c,d)\leq{1\over 4}(3(2k+1)^{2}-18)=3k^{2}+3k-{15\over 4}<3n\text{ for }n=k^{2}+k,$
for $s=\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{2})$.
###### Case 3:
$S$ contains exactly two vertices from the set $\\{(1,2),(1,3),(n,2),(n,3)\\}$
and no other vertex from the second and the third row.
If we look at all types of the first and the last column such that the
condition holds, we see that $r_{1}+r_{n}$ is always at least $2$ (observe
that type $G$ can only be combined with types $F$ and $H$). Because
$f_{s}(a,b,c,d)\leq{1\over 4}(3s^{2}-8)$ when $b+c\geq 2$, and $a,b,c,d\geq
0$, it holds
$f_{s}(a,b,c,d)-2\leq{1\over 4}(3(2k)^{2}-8)-2=3k^{2}-4<3n\text{ for }n=k^{2}-1;$
$f_{s}(a,b,c,d)-2\leq{1\over 4}(3(2k)^{2}-8)-2=3k^{2}-4<3n\text{ for }n=k^{2};$
$f_{s}(a,b,c,d)-2\leq{1\over 4}(3(2k+1)^{2}-8)-2=3k^{2}+3k-{13\over 4}<3n\text{ for }n=k^{2}+k,$
for $s=\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{2})$.
###### Case 4:
$S$ contains exactly one vertex from the set $\\{(1,2),(1,3),(n,2),(n,3)\\}$
and no other vertex from the second and the third row.
By symmetry, we can without loss of generality assume that
$(1,3),(n,2),(n,3)\not\in S$ and $(1,2)\in S$. This implies that the first
column is of type $C$, $D$, $E$, or $H$ and the last column is of type $F$ or
$G$. The sum of the numbers of redundant coverings for the first and the last
column is then at least $3$. Because $f_{s}(a,b,c,d)\leq{1\over 12}(9s^{2}-8)$
when $b\geq 1$ and $a,c,d\geq 0$, it holds
$f_{s}(a,b,c,d)-3\leq{1\over 12}(9(2k)^{2}-8)-3=3k^{2}-{11\over 3}<3n\text{ for }n=k^{2}-1;$
$f_{s}(a,b,c,d)-3\leq{1\over 12}(9(2k)^{2}-8)-3=3k^{2}-{11\over 3}<3n\text{ for }n=k^{2};$
$f_{s}(a,b,c,d)-3\leq{1\over 12}(9(2k+1)^{2}-8)-3=3k^{2}+3k-{35\over 12}<3n\text{ for }n=k^{2}+k,$
for $s=\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{2})$.
Since $3n$ is the number of all vertical edges in $P_{n}\,\square\,P_{4}$, in
every case above the shortest paths between the vertices of $S$ cannot cover
all the vertical edges. This contradicts the assumption that $S$ is a strong
edge geodetic set of $P_{n}\,\square\,P_{4}$ of cardinality $\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{2})$.
Because all four cases led us to a contradiction, we can conclude that for
$P_{n}\,\square\,P_{4}$, where $n=k^{2}-1,k^{2}$, or $k^{2}+k$ for some
$k\in\mathbb{N}$, there is no strong edge geodetic set of size
$\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{2})$. By Lemma 5.1 this means
that $\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{4})=\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{2})+1$ when $n=k^{2}-1,k^{2}$, or $k^{2}+k$ for
some $k\in\mathbb{N}$. In other words, $\operatorname{\rm
sg_{e}}(P_{k^{2}-1}\,\square\,P_{4})=\operatorname{\rm
sg_{e}}(P_{k^{2}}\,\square\,P_{4})=2k+1$ and $\operatorname{\rm
sg_{e}}(P_{k^{2}+k}\,\square\,P_{4})=2k+2$. Also, because
$k^{2}+2k=(k+1)^{2}-1$, we have $\operatorname{\rm
sg_{e}}(P_{k^{2}+2k}\,\square\,P_{4})=2(k+1)+1=2k+3$. ∎
## 6 Upper bounds
In this concluding section we give two upper bounds on $\operatorname{\rm
sg_{e}}(P_{n}\,\square\,P_{m})$.
###### Proposition 6.1.
If $n\geq 2$ and $m\geq 2$, then
$\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{m})\leq\left\lceil
2\sqrt{n}~{}\right\rceil+\left\lceil 2\sqrt{m-2}~{}\right\rceil.$
###### Proof.
First we cover all the vertical edges in $P_{n}\,\square\,P_{m}$ by using
Algorithm 1. This algorithm uses $\left\lceil 2\sqrt{n}~{}\right\rceil$
vertices from the first and the last row. We also, similarly as in
$P_{n}\,\square\,P_{2}$, use these vertices to cover the horizontal edges from
the first and the last row (Fig. 10a).
Figure 10: Strong edge geodetic set for $P_{n}\,\square\,P_{m}$ with
cardinality $\left\lceil 2\sqrt{n}~{}\right\rceil+\left\lceil
2\sqrt{m-2}~{}\right\rceil$ for $n=14$ and $m=8$.
In the second step we look at the subgraph $H$ of $P_{n}\,\square\,P_{m}$
without the first and the last row (Fig. 10b). $H$ is isomorphic to
$P_{n}\,\square\,P_{m-2}$. To cover all the horizontal edges of $H$, we use
Algorithm 1 on $H$ rotated by $90$ degrees, which is the same as using
Algorithm 1 on $P_{m-2}\,\square\,P_{n}$ (Fig. 10c). This uses exactly
$\left\lceil 2\sqrt{m-2}~{}\right\rceil$ vertices and covers all the
horizontal edges of $H$, which are exactly the edges that have not been
covered in the first part of the proof. ∎
With a similar, but a bit more involved, idea as in the proof of Proposition
6.1, we are going to prove the following proposition.
###### Proposition 6.2.
If $n\geq 3$ and $m\geq 3$, then
$\operatorname{\rm sg_{e}}(P_{n}\,\square\,P_{m})\leq\left\lceil
2\sqrt{n+2}\right\rceil+\left\lceil 2\sqrt{m\phantom{!}}\right\rceil-4.$
###### Proof.
First we will adjust Algorithm 1 such that it will use all the corner
vertices, that is $(1,1),(1,m),(n,1),(n,m)$. We call the new algorithm
Algorithm 1*. For $n=k^{2}+h$, where $h=0$ or $k+1\leq h\leq 2k$, Algorithm 1
already uses all the corner vertices. When $n=k^{2}+h$, $1\leq h\leq k$,
redefine the vertex $a_{k}$ as $a_{k}=(n,1)$. The shortest paths between
vertices $a_{k}$ and $b_{i}$ in Algorithm 1* are the ones that cover the same
vertical edges as shortest paths between vertices $a_{k}$ and $b_{i}$ in
Algortihm 1. An example output of this algorithm is shown in Fig. 11.
Figure 11: Strong edge geodetic set of $P_{4^{2}+2}\,\square\,P_{5}$ from
Algorithm 1*.
When $n=k^{2}$, $n=k^{2}+k-1$, $n=k^{2}+k$, or $n=k^{2}+2k$ for some
$k\in\mathbb{N}$, we will cover the vertical edges in $P_{n}\,\square\,P_{m}$
in the following way. First, we use Algorithm 1* with its
$\left\lceil 2\sqrt{n}~{}\right\rceil$ vertices $V_{1}$. To $V_{1}$ we add
$c=((k-1)^{2}+k,m)$. We can then cover the vertical edges covered by the
$(1,1),(n,m)$-geodesic and the $(n,1),(1,m)$-geodesic with the
$(1,1),c$-geodesic and the $(n,1),c$-geodesic. In this way, we can remove the
shortest paths between the corner vertices (Fig. 12).
Figure 12: Covering vertical edges in $P_{n}\,\square\,P_{m}$ without using
shortest paths between vertices $(1,1)$ and $(n,m)$ and between vertices
$(n,1)$ and $(1,m)$.
When $n=k^{2}+h$; $k,h\in\mathbb{N}$; $1\leq h\leq k-2$, we will adjust
Algorithm 1* to cover all the vertical edges with the same vertex set and
without using the shortest paths between vertices $a_{1}$ and $b_{k+1}$ and
between vertices $a_{k}$ and $b_{1}$. First we notice that Algorithm 1* does
not use the unique $a_{k}$,$b_{k+1}$-geodesic. If we add it, we can replace
the $a_{h}$,$b_{k+1}$-geodesic with the one that covers all the vertical edges
in the $(k^{2}+1)$-th column. We can then remove the
$a_{1}$,$b_{k+1}$-geodesic (it also covers the vertical edges in the
$(k^{2}+1)$-th column). Algorithm 1* also does not use the
$a_{k-1}$,$b_{k+1}$-geodesic. If we add the $a_{k-1}$,$b_{k+1}$-geodesic that
covers all the vertical edges in the $((k-1)^{2}+k)$-th column, we can remove
the $a_{k}$,$b_{k+1}$-geodesic.
When $n=k^{2}+h$, $k,h\in\mathbb{N}$, $k+1\leq h\leq 2k-1$, we can use a
similar adjustment as in the proof of Theorem 1.3 for $n=k^{2}+h$, $k+1\leq
h\leq 2k-1$, and remove the shortest paths between vertices $a_{1}$ and
$b_{k+1}$ and between vertices $a_{k+1}$ and $b_{1}$ that cover the horizontal
edges in the second and the third row.
In the first part of the proof we defined the algorithm that uses $\left\lceil
2\sqrt{n+2}~{}\right\rceil$ vertices and covers all the vertical edges in
$P_{n}\,\square\,P_{m}$, while it uses neither the $(1,1)$,$(n,m)$-geodesic,
nor the $(1,m)$,$(n,1)$-geodesic.
To cover the horizontal edges, we use Algorithm 1* on rotated
$P_{n}\,\square\,P_{m}$ by $90$ degrees. This part will use
$\left\lceil 2\sqrt{m}~{}\right\rceil$ vertices, where four
($(1,1),(1,m),(n,1),(n,m)$) of them are already used in the first part.
Because the first part does not use shortest paths between these four
vertices, the condition that between any two vertices we use at most one
shortest path still holds.
In the first and in the second part we have covered all the edges in
$P_{n}\,\square\,P_{m}$, using exactly
$\left\lceil 2\sqrt{n+2}~{}\right\rceil+\left\lceil 2\sqrt{m}~{}\right\rceil-4$
vertices. ∎
Combining all the results of this paper, we see that the bound from
Proposition 6.1 is sharp when $m=2$ but not for $m=3$ and $m=4$. The
bound from Proposition 6.2 is sharp when $m=3$ and $n=k^{2}$ or $k^{2}+k$
for some integer $k$, as well as when $m=4$ and $n=k^{2}$, $k^{2}+k$ or
$k^{2}+2k$ for some integer $k$.
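For concreteness, the bound of Proposition 6.2 is easy to evaluate numerically. The sketch below is our own illustration (the helper name `prop62_bound` is not from the paper):

```python
import math

def prop62_bound(n, m):
    """Upper bound from Proposition 6.2 for sg_e(P_n x P_m), for n, m >= 3."""
    return math.ceil(2 * math.sqrt(n + 2)) + math.ceil(2 * math.sqrt(m)) - 4
```

For example, `prop62_bound(9, 3)` evaluates $\left\lceil 2\sqrt{11}\right\rceil+\left\lceil 2\sqrt{3}\right\rceil-4=7+4-4=7$, one of the sharp cases ($m=3$, $n=k^{2}$ with $k=3$).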
# Non-negative integral matrices with given spectral radius and controlled
dimension
Mehdi Yazdi
###### Abstract.
A celebrated theorem of Douglas Lind states that a positive real number is
equal to the spectral radius of some integral primitive matrix if and only if
it is a Perron algebraic integer. Given a Perron number $p$, we prove that
there is an integral irreducible matrix with spectral radius $p$, and with
dimension bounded above in terms of the algebraic degree, the ratio of the
first two largest Galois conjugates, and arithmetic information about the ring
of integers of its number field. This arithmetic information can be taken to
be either the discriminant or the minimal Hermite-like thickness.
Equivalently, given a Perron number $p$, there is an irreducible shift of
finite type with entropy $\log(p)$ defined as an edge shift on a graph whose
number of vertices is bounded above in terms of the aforementioned data.
## 1\. Introduction
A real algebraic integer $\lambda\geq 1$ is _Perron_ if $\lambda$ is strictly
larger than all its other Galois conjugates in absolute value. In what
follows, all matrices are considered to be square size. For a real matrix $A$,
by $A>0$ we mean that all entries of $A$ are positive. A non-negative real
matrix $A$ is _primitive_ (or _aperiodic_) if there is some natural number $k$
such that $A^{k}>0$. A non-negative real matrix is _irreducible_ if for any
two indices $i$ and $j$, there is some natural number $k=k(i,j)$ such that
$(A^{k})_{ij}>0$. The Perron–Frobenius theorem implies that
I) the spectral radius of any _integral_ primitive matrix is a Perron number; and
II) the spectral radius of any _integral_ irreducible matrix is equal to the
$n$th root of a Perron number, where $n$ is the _period_ of the irreducible
matrix.
Moreover, the spectral radius of an integral non-negative non-nilpotent matrix
is equal to the spectral radius of an integral irreducible one.
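These definitions are straightforward to check computationally. The following is a minimal sketch (function names are ours, for illustration only): primitivity is tested by powering up to Wielandt's bound $k\leq n^{2}-2n+2$, irreducibility via the standard criterion $(I+A)^{n-1}>0$, and the spectral radius numerically.

```python
import numpy as np

def spectral_radius(A):
    """Spectral radius: largest eigenvalue modulus."""
    return max(abs(e) for e in np.linalg.eigvals(np.asarray(A, float)))

def is_primitive(A):
    """A is primitive iff A^k > 0 entrywise for some k; Wielandt's bound
    k <= n^2 - 2n + 2 makes the search finite."""
    A = np.asarray(A, float)
    n = A.shape[0]
    P = np.eye(n)
    for _ in range(n * n - 2 * n + 2):
        P = P @ A
        if np.all(P > 0):
            return True
    return False

def is_irreducible(A):
    """A is irreducible iff (I + A)^(n-1) > 0 entrywise."""
    A = np.asarray(A, float)
    n = A.shape[0]
    return bool(np.all(np.linalg.matrix_power(np.eye(n) + A, n - 1) > 0))
```

For instance, the permutation matrix $\begin{psmallmatrix}0&1\\1&0\end{psmallmatrix}$ is irreducible but not primitive, while $\begin{psmallmatrix}0&1\\1&1\end{psmallmatrix}$ is primitive with spectral radius the golden ratio, a Perron number.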
Lind [Lin84, Theorem 1] proved a converse to the ‘integer’ Perron–Frobenius
theorem, namely for every Perron number $\lambda$ there is an integral
primitive matrix with spectral radius equal to $\lambda$. It readily follows
that for any Perron number $\lambda$ and any natural number $n$, there is an
integral irreducible matrix with spectral radius equal to $\sqrt[n]{\lambda}$;
see [Lin84, Theorem 3].
Associated to a non-negative and non-degenerate (i.e. with no zero rows or
columns) integral matrix $A=[a_{ij}]$ with spectral radius $\lambda$ is a
_shift of finite type_ with entropy equal to $\log(\lambda)$, which is defined
as the _edge shift_ on a directed finite graph $G_{A}$ as follows. The graph
$G_{A}$ has one vertex for each row of the matrix $A$, and there are exactly
$a_{ij}$ oriented edges from the vertex $v_{i}$ to the vertex $v_{j}$. In
particular, the dimension of the matrix $A$ (i.e. its number of rows or
columns) is equal to the number of vertices of the graph $G_{A}$. The matrix
is primitive if and only if the associated shift of finite type is
topologically mixing. If the matrix is irreducible then the corresponding
shift of finite type is called _irreducible_. Irreducible shifts of finite
type are those which are topologically transitive. See e.g. [LM21].
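The passage from a matrix to its edge shift can be prototyped directly (a toy sketch; `edge_shift_graph` and `edge_shift_entropy` are our names, following the convention just described):

```python
import numpy as np

def edge_shift_graph(A):
    """Return (number of vertices, list of directed edges) of G_A:
    one vertex per row of A and a_ij parallel edges v_i -> v_j."""
    edges = [(i, j) for i, row in enumerate(A)
                    for j, a in enumerate(row)
                    for _ in range(a)]
    return len(A), edges

def edge_shift_entropy(A):
    """Entropy of the edge shift: log of the spectral radius of A."""
    A = np.asarray(A, float)
    return float(np.log(max(abs(e) for e in np.linalg.eigvals(A))))
```

Note that the number of vertices equals the dimension of $A$, which is exactly the quantity Theorem 1.3 bounds.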
Given a Perron number $\lambda$, the _Perron–Frobenius degree_ of $\lambda$,
$d_{PF}(\lambda)$, is the least dimension of an integral _primitive_ matrix
with spectral radius equal to $\lambda$. Clearly we have
$d_{PF}(\lambda)\geq d,$
where $d$ denotes the algebraic degree of $\lambda$ as an algebraic integer.
It is easy to see that equality holds if $\lambda$ is quadratic (see
[Yaz21, Remark 3.1]), so we consider the case of $d\geq 3$. Lind observed that
if the _trace_ (i.e. sum of Galois conjugates) of $\lambda$ is negative, then
$d_{PF}(\lambda)$ is strictly larger than $d$; see [Lin84, page 289]. In
[Yaz21], using an idea of Lind, we gave a lower bound for $d_{PF}(\lambda)$ in
terms of the layout of the two largest (in absolute value) Galois conjugates
of $\lambda$ in the complex plane. As a corollary, it was shown that there are
examples of cubic Perron numbers with arbitrarily large Perron–Frobenius
degrees, a result previously known to Lind, McMullen, and Thurston, although
unpublished.
###### Definition 1.1.
Given a Perron number $\lambda$, define the _spectral ratio_ of $\lambda$ as
$\max_{i}\frac{|\lambda_{i}|}{\lambda}$, where $\lambda_{i}\neq\lambda$ are
the remaining Galois conjugates of $\lambda$.
###### Notation 1.2.
For a Perron number $\lambda$, let $d_{PF}^{irr}(\lambda)$ be the smallest
dimension of an integral _irreducible_ matrix with spectral radius $\lambda$.
In this paper, we give an explicit upper bound for $d_{PF}^{irr}(\lambda)$ in
terms of the algebraic degree of $\lambda$, the spectral ratio of $\lambda$,
and arithmetic information about the ring of integers
$\mathcal{O}_{\mathbb{K}}$ of the number field
$\mathbb{K}:=\mathbb{Q}(\lambda)$. This arithmetic quantity, which we call the
_minimal Hermite-like thickness_ and denote it by
$\tau_{\min}(\mathcal{O}_{\mathbb{K}})$, was previously defined by Bayer
Fluckiger [Bay06] in relation to _Minkowski’s conjecture_ ; see Definition
3.1. Intuitively, once an inner product is chosen on $\mathbb{R}^{d}$, the
Hermite-like thickness is defined as the square of the covering radius,
normalised properly, for the inclusion of the lattice
$\mathcal{O}_{\mathbb{K}}$ in $\mathbb{R}^{d}$. The minimal Hermite-like
thickness is then defined by taking the infimum of Hermite-like thickness over
an appropriate space of inner products on $\mathbb{R}^{d}$. As a corollary of
our main result, and using an inequality due to Banaszczyk and Bayer Fluckiger
(see inequality (25)), we obtain a similar bound in terms of the
_discriminant_ $D_{\mathbb{K}}$ of $\mathcal{O}_{\mathbb{K}}$ instead of
$\tau_{\min}(\mathcal{O}_{\mathbb{K}})$. See Definition 3.3.
###### Theorem 1.3.
Let $\lambda$ be a Perron number of algebraic degree $d\geq 3$ and spectral
ratio $\rho$. Set $\mathbb{K}:=\mathbb{Q}(\lambda)$. Let
$\mathcal{O}_{\mathbb{K}}$ be the ring of integers of $\mathbb{K}$, and denote
the discriminant and the minimal Hermite-like thickness of $\mathbb{K}$ by,
respectively, $D_{\mathbb{K}}$ and $\tau_{\min}(\mathcal{O}_{\mathbb{K}})$.
Then $d_{PF}^{irr}(\lambda)$ is bounded above by each of
$\Big{(}\dfrac{8d}{1-\rho}\Big{)}^{d^{2}}\tau_{\min}(\mathcal{O}_{\mathbb{K}})^{\frac{d}{2}}$
and
$\Big{(}\dfrac{8d}{1-\rho}\Big{)}^{d^{2}}\sqrt{D_{\mathbb{K}}}.$
###### Remark 1.4.
Given a natural number $n$ and an integral irreducible matrix $A$ with
spectral radius $\lambda$, one can readily construct an integral irreducible
matrix $B$ with spectral radius $\sqrt[n]{\lambda}$ such that
$\dim(B)=n\dim(A)$. Hence, Theorem 1.3 can be used to give an upper bound for
$d_{PF}^{irr}(\sqrt[n]{\lambda})$. See e.g. the proof of [Lin84, Theorem 3].
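The construction behind Remark 1.4 can be sketched concretely: arrange $n\times n$ blocks in a cycle, with identity blocks on the block superdiagonal and $A$ in the lower-left corner, so that $B^{n}$ is block-diagonal with copies of $A$. The code below is one standard way to lay this out, not fixed by the text:

```python
import numpy as np

def nth_root_matrix(A, n):
    """Block-cyclic matrix B with dim(B) = n * dim(A) whose spectral radius
    is the n-th root of the spectral radius of A (since B^n = diag(A,...,A))."""
    A = np.asarray(A)
    d = A.shape[0]
    B = np.zeros((n * d, n * d), dtype=A.dtype)
    for k in range(n - 1):  # identity blocks on the block superdiagonal
        B[k * d:(k + 1) * d, (k + 1) * d:(k + 2) * d] = np.eye(d, dtype=A.dtype)
    B[(n - 1) * d:, :d] = A  # A closes the cycle
    return B
```

If $A$ is irreducible, so is $B$, since the block cycle keeps the associated directed graph strongly connected.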
Theorem 1.3 immediately translates into the context of irreducible shifts of
finite type with a given entropy. We can derive an upper bound for the
Perron–Frobenius degree using Theorem 1.3. See also Remark 3.9 and Question
5.1. First we need to introduce some notation.
###### Notation 1.5.
Let $\mathbb{K}$ be a real number field (i.e. with at least one real place),
and $\rho\in(0,1)$. Set
$M=1+\frac{4}{1-\rho},$
and denote
$\kappa(\mathbb{K},\rho):=\max\\{d_{PF}(\alpha)\hskip 2.84526pt|\hskip
2.84526pt\alpha\in[1,M]\cap\mathbb{K}\text{ is a Perron number}\\}.$
Note that for any $M>1$, there are only finitely many Perron numbers in the
interval $[1,M]$ with degree at most $d$. Hence, $\kappa(\mathbb{K},\rho)$ can
be computed in principle.
###### Theorem 1.6.
Let $\lambda$ be a Perron number of degree $d\geq 3$ and spectral
ratio $\rho$. Set $\mathbb{K}:=\mathbb{Q}(\lambda)$. Denote the bound from
Theorem 1.3 by $B(\mathbb{K},\rho)$; note that $d=[\mathbb{K}:\mathbb{Q}]$ is
uniquely determined by $\mathbb{K}$. The Perron–Frobenius degree of $\lambda$
is bounded above by
$\max\\{2^{d^{2}}B(\mathbb{K},\rho),\kappa(\mathbb{K},\rho)\\}.$
### 1.1. Previous work
In [Lin84], Lind gave a method for producing _all_ integral primitive matrices
with a given Perron number $\lambda$ as their spectral radius. Let
$B\colon\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ be the _companion matrix_
associated to $\lambda$. In what follows, unless otherwise specified all
eigenvectors are column (right) eigenvectors; similarly for eigenspaces. Pick
an eigenvector $v$ for the eigenvalue $\lambda$, and denote the one
dimensional eigenspace of $\lambda$ by $E_{\lambda}$. Let $E$ be the positive
half-space associated to $v$, i.e. if $\pi_{1}\colon\mathbb{R}^{d}\rightarrow
E_{\lambda}$ is the projection along the complementary invariant subspace,
then
$E=\\{x\in\mathbb{R}^{d}\hskip 5.69054pt|\hskip 5.69054pt\pi_{1}(x)=rv\hskip
8.53581pt\text{for some }r>0\\}.$
###### Theorem 1.7 (Lind).
Let $\lambda$ be a Perron number of algebraic degree $d$, and $B$ and $E$ be
as above. If $A=[a_{ij}]$ is an $n$-dimensional primitive non-negative
integral matrix with spectral radius $\lambda$, then there are
$z_{i}\in\mathbb{Z}^{d}\cap E$ for $1\leq i\leq n$ such that
$Bz_{j}=\sum_{i=1}^{n}a_{ij}z_{i}$.
Conversely, if $\lambda$, $B$, and $E$ are as above, and the points
$z_{i}\in\mathbb{Z}^{d}\cap E$ and a non-negative integral matrix $A=[a_{ij}]$
satisfy $Bz_{j}=\sum_{i=1}^{n}a_{ij}z_{i}$, then every irreducible component
of $A$ has spectral radius equal to $\lambda$.
The above theorem of Lind gives a practical way to produce an integral
primitive matrix with a given spectral radius; see [Lin84, page 289]. However,
it does not tell us how to find such a matrix with smallest (or close to
smallest) dimension, since we are not given control over the size of the
coordinates of $z_{i}$. Nevertheless, the referee has kindly mentioned to me
that there exists a simple, but not necessarily practical, algorithm that
computes the Perron–Frobenius degree of a Perron number: Assume that $\lambda$
is given by its minimal polynomial, and denote the algebraic degree of
$\lambda$ by $d$. For every positive integer $n$, there are only finitely many
primitive $n\times n$ integral matrices with spectral radius less than or
equal to $\lambda$. One can algorithmically enumerate these. For each of them,
one can algorithmically determine whether the spectral radius is equal to
$\lambda$. So for $n=d$, we can check whether there is an integral primitive
matrix of dimension $n$ which has $\lambda$ as the spectral radius.
Recursively, if we fail at $n$, then we try at $n+1$. By Lind’s theorem we
eventually find an $n$ where we succeed; that $n$ is equal to
$d_{PF}(\lambda)$.
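The referee's enumeration argument can be rendered as a toy search. In the sketch below (our code), the explicit entry bound is an assumption made for simplicity; the actual finiteness argument comes from the spectral-radius constraint, and equality of spectral radii is tested numerically rather than exactly:

```python
import itertools
import numpy as np

def d_pf_bruteforce(target, entry_bound, max_dim, tol=1e-9):
    """Smallest n <= max_dim admitting a primitive n x n integral matrix,
    entries in {0, ..., entry_bound}, with spectral radius ~= target."""
    def primitive(A):
        n = len(A)
        P = np.eye(n)
        for _ in range(n * n - 2 * n + 2):  # Wielandt's bound on the power
            P = P @ A
            if np.all(P > 0):
                return True
        return False

    for n in range(1, max_dim + 1):
        for entries in itertools.product(range(entry_bound + 1), repeat=n * n):
            A = np.array(entries, float).reshape(n, n)
            if primitive(A) and \
               abs(max(abs(e) for e in np.linalg.eigvals(A)) - target) < tol:
                return n
    return None
```

For the golden ratio $(1+\sqrt{5})/2$ the search fails at $n=1$ and succeeds at $n=2$ (e.g. with $\begin{psmallmatrix}0&1\\1&1\end{psmallmatrix}$), recovering $d_{PF}=d=2$ for this quadratic Perron number.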
For matrices with non-negative integral _polynomial_ entries, the situation is
different. See the work of Boyle and Lind, which gives a uniform upper bound
(in fact a 2 by 2 matrix) in this context [BL02]. For the related topic of
inverse spectral problem for non-negative integral matrices see the works of
Boyle–Handelman [BH91] and Kim–Ormes–Roush [KOR00] and the references therein.
### 1.2. Idea of the proof
In [Thu14], Thurston gave a simpler proof of Lind’s converse to the integer
Perron–Frobenius theorem. Our proof of Theorem 1.3 follows Thurston’s
approach, while controlling the dimension of a constructed matrix.
The tensor product $\mathbb{Q}(\lambda)\otimes_{\mathbb{Q}}\mathbb{R}$ can be
identified with $\mathbb{R}^{d}$. Let $M_{\lambda}$ be the linear endomorphism
of $\mathbb{R}^{d}\cong\mathbb{Q}(\lambda)\otimes_{\mathbb{Q}}\mathbb{R}$
induced by multiplication by $\lambda$ in $\mathbb{Q}(\lambda)$. The
eigenvalues of $M_{\lambda}$ are the Galois conjugates of $\lambda$. Then
$\mathbb{R}^{d}$ decomposes into invariant subspaces of $M_{\lambda}$
corresponding to real places and pairs of conjugate complex places of
$\lambda$; see the opening paragraphs to Section 3. Fix an eigenvector for
$M_{\lambda}$ with eigenvalue $\lambda$, and consider the positive half-space
corresponding to $\lambda$.
We start with a polygonal cone with apex at the origin that lies in the
positive half-space and is invariant under $M_{\lambda}$. We then perturb the
vertices of the cone to obtain an invariant polygonal cone $\mathcal{C}$ with
_integral_ vertices; see Steps 2–4 of the proof of Theorem 1.3. It is during
this perturbation that the minimal Hermite-like thickness appears in the
picture. Since the polygonal cone has integral vertices, the semigroup $S$
generated by the set of integral points in the cone $\mathcal{C}$ under
addition of vectors is finitely generated; see Proposition 2.1. The
cardinality of a generating set for the semigroup $S$ gives an upper bound for
the dimension of an integral non-negative matrix $A$ with spectral radius
$\lambda$. Moreover, after possibly passing to an irreducible component of
$A$, an integral irreducible matrix with spectral radius $\lambda$ is
obtained; see Step 5. Finally we give an upper bound for the dimension of $A$;
see Step 6.
### 1.3. Plan of the paper
In Section 2, we present a few preliminary lemmas. The proof of Theorem 1.3 is
given in Section 3. Theorem 1.6 is proved in Section 4. In Section 5, a few
questions are posed.
### 1.4. Acknowledgement
I would like to thank Douglas Lind for sharing his intuition that some lattice
property of the ring of integers should play a role in the Perron–Frobenius
degree. Many thanks to Curtis T. McMullen and Eva Bayer Fluckiger for helpful
comments connected to Remark 3.14. I am grateful to the anonymous referee for
his/her very helpful comments, in particular for explaining how the bound for
primitive matrices in Theorem 1.6 can be obtained. During this work, the
author was supported by a Glasstone Research Fellowship in Science, a
Titchmarsh Fellowship, and a UKRI Postdoctoral Research Fellowship.
## 2\. Preliminaries
### 2.1. Lattice points
Throughout this article, by a _lattice_ $\Lambda\subset\mathbb{R}^{d}$ we mean
a lattice of full rank; i.e. a discrete subgroup of $\mathbb{R}^{d}$
isomorphic to $\mathbb{Z}^{d}$.
###### Proposition 2.1.
Let $\Lambda\subset\mathbb{R}^{d}$ be a lattice, and
$x_{1},\cdots,x_{k}\in\Lambda$. Denote by $\mathcal{C}$ the convex cone over
the points $x_{i}$ inside $\mathbb{R}^{d}$, that is
$\mathcal{C}:=\\{\alpha_{1}x_{1}+\cdots+\alpha_{k}x_{k}\hskip 5.69054pt|\hskip
5.69054pt\alpha_{i}\geq 0\hskip 8.53581pt\text{for every }i\\}.$
Let $S$ be the semigroup generated by elements of $\mathcal{C}\cap\Lambda$
under vector addition. Define the compact set $C$, and the finite set
$C_{\Lambda}\subset C$ as
$C:=\\{\alpha_{1}x_{1}+\cdots+\alpha_{k}x_{k}\hskip 5.69054pt|\hskip
5.69054pt0\leq\alpha_{i}\leq 1\hskip 8.53581pt\text{for every }i\\},\hskip
8.53581ptC_{\Lambda}:=C\cap\Lambda.$
Then $C_{\Lambda}$ is a finite generating set for the semigroup $S$.
###### Proof.
For $\alpha\in\mathbb{R}$, denote the fractional part of $\alpha$ by
$\\{\alpha\\}$, and let $\lfloor\alpha\rfloor=\alpha-\\{\alpha\\}$. Any point
$y\in S$ can be written as
$y=\alpha_{1}x_{1}+\cdots+\alpha_{k}x_{k}=\big{(}\lfloor\alpha_{1}\rfloor
x_{1}+\cdots+\lfloor\alpha_{k}\rfloor
x_{k}\big{)}+\big{(}\\{\alpha_{1}\\}x_{1}+\cdots+\\{\alpha_{k}\\}x_{k}\big{)},$
where $\alpha_{i},\lfloor\alpha_{i}\rfloor\geq 0$ for each $i$. Note that we
have $x_{i}\in C_{\Lambda}$ for each $i$, hence the first parenthesis is a sum
of elements of $C_{\Lambda}$. As $y$ and the first parenthesis are both in
$\Lambda$, the second parenthesis should represent a point in $\Lambda$ as
well. On the other hand, the coefficients of the second parenthesis are in the
interval $[0,1)$, and hence the second parenthesis lies in
$C_{\Lambda}=C\cap\Lambda$. We have written $y$ as a sum of elements of
$C_{\Lambda}$, hence $C_{\Lambda}$ is a generating set for the semigroup $S$.
Since $C$ is compact and $\Lambda$ is a lattice, $C_{\Lambda}=C\cap\Lambda$ is
a finite set. ∎
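For the integer lattice $\Lambda=\mathbb{Z}^{d}$ the generating set $C_{\Lambda}$ of Proposition 2.1 is easy to compute by brute force. The sketch below (our code, assuming the $x_{i}$ are linearly independent integer vectors with $k=d$) scans an integer bounding box of $C$ and tests membership via the coefficients $\alpha_{i}$:

```python
import numpy as np
from itertools import product

def c_lambda(xs):
    """Lattice points of C = {a1*x1 + ... + ak*xk : 0 <= ai <= 1} in Z^d,
    assuming x1, ..., xk are linearly independent integer vectors (k = d)."""
    X = np.array(xs, float).T  # columns are the x_i
    corners = [X @ np.array(s) for s in product([0.0, 1.0], repeat=len(xs))]
    lo = np.floor(np.min(corners, axis=0)).astype(int)
    hi = np.ceil(np.max(corners, axis=0)).astype(int)
    pts = []
    for p in product(*(range(a, b + 1) for a, b in zip(lo, hi))):
        alpha = np.linalg.solve(X, np.array(p, float))
        if np.all(alpha >= -1e-9) and np.all(alpha <= 1 + 1e-9):
            pts.append(p)
    return pts
```

For $x_{1}=(2,1)$, $x_{2}=(1,2)$ this returns the six points $(0,0),(1,1),(2,1),(1,2),(2,2),(3,3)$, which generate the semigroup of lattice points of the cone.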
By a _Euclidean space_ of dimension $d$ we mean a $d$-dimensional vector space
$\mathbb{R}^{d}$ equipped with an inner product. A _polytope_ is the convex
hull of finitely many points in $\mathbb{R}^{d}$. Let
$\Lambda\subset\mathbb{R}^{d}$ be a lattice. If $\\{v_{1},\cdots,v_{d}\\}$ is
a basis for $\Lambda\cong\mathbb{Z}^{d}$, then a _fundamental domain_ for
$\Lambda$ is
$\\{\alpha_{1}v_{1}+\cdots+\alpha_{d}v_{d}\hskip 5.69054pt|\hskip
5.69054pt0\leq\alpha_{i}<1\text{ for every }i\\}.$
If $\mathbb{R}^{d}$ is a Euclidean space, then the _covolume_ of $\Lambda$ is
defined as the volume of any fundamental domain for $\Lambda$. A _lattice
polytope_ is a polytope whose vertices are lattice points.
###### Proposition 2.2.
Let $\mathbb{R}^{d}$ be a Euclidean space, $\Lambda\subset\mathbb{R}^{d}$ be a
lattice with covolume $\mathrm{Covol}(\Lambda)$, and $P\subset\mathbb{R}^{d}$
be a $d$-dimensional lattice polytope with volume $\mathrm{Vol}(P)$. Denote
the number of lattice points in $P$ by $|P\cap\Lambda|$. Then
$|P\cap\Lambda|\leq\frac{\mathrm{Vol}(P)}{\mathrm{Covol}(\Lambda)}\cdot(d+1)!.$
Equality holds exactly when $P$ is a $d$-simplex with
$|P\cap\Lambda|=d+1$.
###### Proof.
First consider the special case that $P$ has exactly $d+1$ vertices, and every
lattice point in $P$ is a vertex of $P$. If
$v_{0},v_{1},\cdots,v_{d}\in\mathbb{R}^{d}$ are the vertices of $P$, then the
volume of the parallelepiped formed by the vectors
$v_{1}-v_{0},\cdots,v_{d}-v_{0}$ is equal to $d!\times\mathrm{Vol}(P)$. Since
the volume of this parallelepiped is at least as large as the volume of a
fundamental domain for $\Lambda$, we have
$\displaystyle d!\times\mathrm{Vol}(P)\geq\mathrm{Covol}(\Lambda),$
implying that
$|P\cap\Lambda|=d+1\leq\frac{\mathrm{Vol}(P)}{\mathrm{Covol}(\Lambda)}\cdot(d+1)!.$
In general, decompose $P$ into $d$-simplices $\Delta_{1},\cdots,\Delta_{n}$
with disjoint interiors such that each simplex $\Delta_{i}$ contains no
lattice point except for its vertices. This can be done for example as
follows. Decompose $P$ into smaller polyhedra by coning off from one of the
vertices of $P$. Here by coning off from a point $v\in P$ we mean that for
every facet $F$ of $P$, the polyhedron which is the convex hull of $F\cup\\{v\\}$
is added unless its dimension is strictly smaller than that of $P$; for
example if the starting polyhedron is a polygon, then coning off from a vertex
$v$ is just decomposing the polygon into triangles via adding all the
diagonals emanating from $v$. For any of the resulting polyhedra, successively
take a lattice point inside or on the boundary, and cone off from that lattice
point. After finitely many repetitions, we arrive at the decomposition into
$\Delta_{i}$.
The desired inequality follows from adding up the corresponding inequalities
for simplices $\Delta_{1},\cdots,\Delta_{n}$. Note that if $P$ is not a
$d$-simplex or $|P\cap\Lambda|>d+1$, then at least one lattice point in
$P\cap\Lambda$ is counted for more than one simplex $\Delta_{i}$, and so the
inequality is strict. ∎
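The inequality of Proposition 2.2 is cheap to verify in the plane ($d=2$, $\Lambda=\mathbb{Z}^{2}$, $\mathrm{Covol}(\Lambda)=1$). The sketch below (function names are ours) counts lattice points of a convex polygon by half-plane tests and compares with $\mathrm{Vol}(P)\cdot 3!$:

```python
from itertools import product

def area(verts):
    """Shoelace area of a simple polygon given as a cyclic vertex list."""
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1]))
    return abs(s) / 2

def lattice_point_count(verts):
    """Number of Z^2-points in a convex polygon (boundary included)."""
    def inside(p):
        signs = []
        for i in range(len(verts)):
            (x1, y1), (x2, y2) = verts[i], verts[(i + 1) % len(verts)]
            signs.append((x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1))
        return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)
    xs = [v[0] for v in verts]; ys = [v[1] for v in verts]
    box = product(range(min(xs), max(xs) + 1), range(min(ys), max(ys) + 1))
    return sum(1 for p in box if inside(p))
```

For the standard simplex with vertices $(0,0),(1,0),(0,1)$ one gets $3=\tfrac{1}{2}\cdot 3!$, the equality case; for the unit square, $4<1\cdot 3!$, a strict inequality as the proposition predicts.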
### 2.2. Minkowski sum and difference
For sets $A,B\subset\mathbb{R}^{d}$, define their _Minkowski sum_ as
$A+B:=\\{a+b\hskip 5.69054pt|\hskip 5.69054pta\in A,\text{ and }b\in
B\\}\subset\mathbb{R}^{d}.$
Intuitively, $A+B$ is the union of all translates of $A$ by elements of $B$
$A+B=\bigcup_{b\in B}(A+b).$
Define the _Minkowski difference_ of $A$ and $B$ by
$A\div B:=\\{c\in\mathbb{R}^{d}\hskip 5.69054pt|\hskip 5.69054ptB+c\subseteq
A\\}.$
If $B$ is empty, $A\div B$ is, by convention, equal to $\mathbb{R}^{d}$.
Intuitively, $A\div B$ is the intersection of all translates of $A$ by the
antipodes of elements of $B$
$A\div B=\bigcap_{b\in B}(A-b).$
We have used the rather odd notation $\div$ for the Minkowski difference, in
order to distinguish it from the set
$\\{a-b\hskip 5.69054pt|\hskip 5.69054pta\in A,\text{ and }b\in
B\\}\subset\mathbb{R}^{d}.$
The Minkowski sum and difference are _not_ in general the inverse of each
other.
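For finite subsets of $\mathbb{Z}^{d}$ the two operations can be prototyped directly. This is a sketch only: the Minkowski difference below searches a caller-supplied finite universe of candidate translates, whereas the definition ranges over all of $\mathbb{R}^{d}$.

```python
def minkowski_sum(A, B):
    """A + B = { a + b : a in A, b in B }."""
    return {tuple(x + y for x, y in zip(a, b)) for a in A for b in B}

def minkowski_diff(A, B, universe):
    """A 'div' B = { c in universe : B + c is a subset of A }."""
    A = set(A)
    return {tuple(c) for c in universe
            if all(tuple(x + y for x, y in zip(b, c)) in A for b in B)}
```

Note that the equality $(A+B)\div B=A$ of Lemma 2.3 below requires compactness and convexity; for arbitrary discrete sets only the inclusion $A\subset(A+B)\div B$ is guaranteed, which the code exhibits on a small grid example.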
###### Lemma 2.3.
The following properties hold for sets $A,B,C\subset\mathbb{R}^{d}$:
$\displaystyle A\subset(A+B)\div B,$
$\displaystyle A\subset B\implies A\div C\subset B\div C,$
$\displaystyle A\subset B\implies A+C\subset B+C.$
Moreover, if $A$ and $B$ are non-empty compact, convex sets, then
$(A+B)\div B=A.$
###### Proof.
The first three properties directly follow from the definition. We sketch the
proof of the last implication, and refer the reader to e.g. [Sch13, Lemma
3.1.11, and Section 1.7] for details. Pick an inner product
$\langle\cdot,\cdot\rangle$ on $\mathbb{R}^{d}$, and denote the _support
functions_ for $A$ and $B$ by, respectively, $h_{A}$ and $h_{B}$. By
definition, for any $x\in\mathbb{R}^{d}$
$h_{A}(x):=\sup\\{\langle x,y\rangle\hskip 5.69054pt|\hskip 5.69054pty\in
A\\},$
$h_{B}$ is defined similarly. By the _separation theorem_ for convex sets, for
a non-empty compact convex set $A$ we have
$a\in A\iff\langle a,x\rangle\leq h_{A}(x)\hskip 5.69054pt\text{for all
}x\in\mathbb{R}^{d}.$
Assuming $x\in(A+B)\div B$, we would like to show that $x\in A$. By
hypothesis, $x+B\subset A+B$. Equivalently, the support function for $x+B$
does not exceed that of $A+B$ pointwise. This implies
$h_{\\{x\\}}+h_{B}\leq h_{A}+h_{B},$
using the fact that $h_{A+B}=h_{A}+h_{B}$ for non-empty compact convex sets
$A$ and $B$; see [Sch13, Theorem 1.7.5]. Cancelling $h_{B}$ from both sides
gives us the inequality $h_{\\{x\\}}\leq h_{A}$, implying that $x\in A$. ∎
## 3\. Proof of Theorem 1.3
We follow Bayer Fluckiger [Bay06] and Jarvis [Jar14] for the definitions
below. Let $\lambda$ be an algebraic integer, and
$\mathbb{K}=\mathbb{Q}(\lambda)$ be the number field obtained by adjoining
$\lambda$ to $\mathbb{Q}$. We may interpret the points of $\mathbb{K}$ as
lying in a $d$-dimensional real linear space as follows. Let
$\sigma_{1},\cdots,\sigma_{r}$ be the real embeddings of $\mathbb{K}$, and
$\sigma_{r+1},\overline{\sigma}_{r+1},\cdots,\sigma_{r+s},\overline{\sigma}_{r+s}$
be the pairwise conjugate complex embeddings of $\mathbb{K}$, where
(1) $\displaystyle r+2s=d.$
Consider the embedding
$\displaystyle\sigma\colon\mathbb{K}\longrightarrow\mathbb{R}^{r}\times\mathbb{C}^{s},$
$\displaystyle\sigma(x)=(\sigma_{1}(x),\cdots,\sigma_{r+s}(x)).$
We may identify $\mathbb{C}$ with $\mathbb{R}^{2}$ by identifying $a+bi$ with
$(a,b)$; then $\sigma$ becomes an embedding
$\sigma\colon\mathbb{K}\longrightarrow\mathbb{R}^{d}.$
The mapping $\sigma\colon\mathbb{K}\rightarrow\mathbb{R}^{d}$ identifies the
vector space $\mathbb{R}^{d}$ with the tensor product
$\mathbb{K}_{\mathbb{R}}:=\mathbb{K}\otimes_{\mathbb{Q}}\mathbb{R}$,
$\displaystyle\mathbb{K}\otimes_{\mathbb{Q}}\mathbb{R}\longleftrightarrow\mathbb{R}^{d}$
$\displaystyle x\otimes a\mapsto(\sigma x)a.$
$\mathbb{R}^{r}\times\mathbb{C}^{s}$ has a canonical involution, which is
identity on $\mathbb{R}^{r}$ and complex conjugation on $\mathbb{C}^{s}$. Let
(2)
$\displaystyle\mathfrak{B}:=\\{\alpha\in\mathbb{R}^{r}\times\mathbb{C}^{s}\hskip
5.69054pt|\hskip 5.69054pt\alpha=\bar{\alpha},\text{ and all components of
}\alpha\text{ are positive}\\}.$
Given $\alpha\in\mathfrak{B}$, define the symmetric positive definite bilinear
form $q_{\alpha}$ on $\mathbb{K}_{\mathbb{R}}$ by
$\displaystyle
q_{\alpha}\colon\mathbb{K}_{\mathbb{R}}\times\mathbb{K}_{\mathbb{R}}\longrightarrow\mathbb{R}$
$\displaystyle(x,y)\mapsto\text{Trace}(\alpha x\bar{y}).$
Here
$\text{Trace}(x_{1},\cdots,x_{r+s}):=\sum_{i=1}^{r}x_{i}+\sum_{j=r+1}^{r+s}(x_{j}+\overline{x}_{j})$
denotes the trace of
$\mathbb{K}_{\mathbb{R}}\cong\mathbb{R}^{r}\times\mathbb{C}^{s}$, and $\alpha
x\bar{y}$ denotes the component-wise product in
$\mathbb{R}^{r}\times\mathbb{C}^{s}$. Then $q_{\alpha}$ induces the norm
$|\cdot|_{\alpha}$ on $\mathbb{K}_{\mathbb{R}}$ given by the following
formula, where $\alpha=(\alpha_{1},\cdots,\alpha_{r+s})$ with
$\alpha_{i}\in\mathbb{R}^{>0}$
(3)
$\displaystyle|x|^{2}_{\alpha}=\sum_{i=1}^{r}\alpha_{i}\cdot|\sigma_{i}(x)|^{2}+2\sum_{j=r+1}^{r+s}\alpha_{j}\cdot|\sigma_{j}(x)|^{2}.$
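As a concrete instance of formula (3), take $\mathbb{K}=\mathbb{Q}(\sqrt{2})$, which has $r=2$ real places and $s=0$: the two embeddings send $a+b\sqrt{2}$ to $a\pm b\sqrt{2}$, and $|x|^{2}_{\alpha}=\alpha_{1}\sigma_{1}(x)^{2}+\alpha_{2}\sigma_{2}(x)^{2}$ with no complex places. A worked sketch (our code, not from the text):

```python
import math

SQRT2 = math.sqrt(2)

def embeddings(a, b):
    """The two real embeddings of x = a + b*sqrt(2) in Q(sqrt(2))."""
    return (a + b * SQRT2, a - b * SQRT2)

def norm_sq(x, alpha):
    """|x|^2_alpha from formula (3); all places real, so no factor of 2."""
    return sum(al * s * s for al, s in zip(alpha, embeddings(*x)))
```

With $\alpha=(1,1)$ this recovers the usual trace form: $|1|^{2}_{\alpha}=2$ and $|\sqrt{2}|^{2}_{\alpha}=4$.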
Denote the ring of integers of $\mathbb{K}$ by $\mathcal{O}_{\mathbb{K}}$. Let
$(\mathcal{O}_{\mathbb{K}},q_{\alpha})$ denote the lattice
$\mathcal{O}_{\mathbb{K}}$ equipped with the inner product $q_{\alpha}$. The
_maximum_ of $(\mathcal{O}_{\mathbb{K}},q_{\alpha})$ is defined as
$\displaystyle\max(\mathcal{O}_{\mathbb{K}},q_{\alpha})=\inf\\{u\in\mathbb{R}\hskip
2.84526pt|\hskip 2.84526pt\text{for all }x\in\mathbb{K}_{\mathbb{R}},\text{
there exists }y\in\mathcal{O}_{\mathbb{K}}\text{ with }q_{\alpha}(x-y,x-y)\leq
u\\}.$
The _covering radius_ of $(\mathcal{O}_{\mathbb{K}},q_{\alpha})$ is, by
definition, the square root of $\max(\mathcal{O}_{\mathbb{K}},q_{\alpha})$.
Define the _determinant_ of $(\mathcal{O}_{\mathbb{K}},q_{\alpha})$ as the
determinant of the matrix of $q_{\alpha}$ in a $\mathbb{Z}$-basis of
$\mathcal{O}_{\mathbb{K}}$; i.e. if $\omega_{1},\cdots,\omega_{d}$ is a basis
for the abelian group $\mathcal{O}_{\mathbb{K}}\cong\mathbb{Z}^{d}$ (under
addition) then $\det(\mathcal{O}_{\mathbb{K}},q_{\alpha})$ is the determinant
of the $d\times d$ matrix $(q_{\alpha}(w_{i},w_{j}))$. With this definition,
the determinant of $(\mathcal{O}_{\mathbb{K}},q_{\alpha})$ is equal to the
_square_ of the volume of any fundamental domain for the lattice
$(\mathcal{O}_{\mathbb{K}},q_{\alpha})$. We remark that some texts define the
determinant as the volume of a fundamental domain, but we preferred to follow
Bayer Fluckiger’s convention as in [Bay06].
###### Definition 3.1.
Define the _Hermite-like thickness_
$\tau(\mathcal{O}_{\mathbb{K}},q_{\alpha})$ of
$(\mathcal{O}_{\mathbb{K}},q_{\alpha})$ as
$\displaystyle\tau(\mathcal{O}_{\mathbb{K}},q_{\alpha}):=\frac{\max(\mathcal{O}_{\mathbb{K}},q_{\alpha})}{\det(\mathcal{O}_{\mathbb{K}},q_{\alpha})^{\frac{1}{d}}}.$
Define the _minimal Hermite-like thickness_ as
$\displaystyle\tau_{\min}(\mathcal{O}_{\mathbb{K}})=\inf\\{\tau(\mathcal{O}_{\mathbb{K}},q_{\alpha})\hskip
5.69054pt|\hskip 5.69054pt\alpha\in\mathfrak{B}\\},$
where $\mathfrak{B}$ is as in (2).
###### Remark 3.2.
Although we called $\tau_{\min}(\mathcal{O}_{\mathbb{K}})$ the minimal
Hermite-like thickness, it should be noted that the minimum is taken over the
set of inner products coming from elements of $\mathfrak{B}$ and not all
possible inner products on $\mathbb{R}^{d}$.
It is clear that the concepts of maximum, covering radius, and Hermite-like
thickness can be defined more generally for a lattice in a Euclidean space;
see [Bay06].
###### Definition 3.3.
Let $\mathbb{K}$ be a number field. Assume that $\omega_{1},\cdots,\omega_{d}$
is any integral basis for $\mathcal{O}_{\mathbb{K}}$. Denote the complete list
of places of $\mathbb{K}$ by $\sigma_{1},\cdots,\sigma_{d}$. The
_discriminant_ of $\mathbb{K}$ is defined as the square of the determinant of
the $d\times d$ matrix $(\sigma_{i}(\omega_{j}))$.
See [Jar14, Chapters 3 and 7] for further properties of the discriminant as
well as the embedding of $\mathcal{O}_{\mathbb{K}}$ in $\mathbb{R}^{d}$.
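A worked check of Definition 3.3, for illustration: for $\mathbb{K}=\mathbb{Q}(\sqrt{2})$ an integral basis is $\{1,\sqrt{2}\}$ and the two places send $\sqrt{2}$ to $\pm\sqrt{2}$, so the discriminant is the square of $\det\begin{psmallmatrix}1&\sqrt{2}\\ 1&-\sqrt{2}\end{psmallmatrix}=-2\sqrt{2}$, namely $8$:

```python
import numpy as np

s = np.sqrt(2)
M = np.array([[1.0, s],    # rows: places sigma_1, sigma_2
              [1.0, -s]])  # columns: integral basis 1, sqrt(2)
discriminant = np.linalg.det(M) ** 2  # D_K for K = Q(sqrt(2))
```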
###### Theorem 1.3.
Let $\lambda$ be a Perron number of algebraic degree $d\geq 3$ and spectral
ratio $\rho$. Set $\mathbb{K}:=\mathbb{Q}(\lambda)$. Let
$\mathcal{O}_{\mathbb{K}}$ be the ring of integers of $\mathbb{K}$, and denote
the discriminant and the minimal Hermite-like thickness of $\mathbb{K}$ by,
respectively, $D_{\mathbb{K}}$ and $\tau_{\min}(\mathcal{O}_{\mathbb{K}})$.
Then $d_{PF}^{irr}(\lambda)$ is bounded above by each of
$\Big{(}\dfrac{8d}{1-\rho}\Big{)}^{d^{2}}\tau_{\min}(\mathcal{O}_{\mathbb{K}})^{\frac{d}{2}}$
and
$\Big{(}\dfrac{8d}{1-\rho}\Big{)}^{d^{2}}\sqrt{D_{\mathbb{K}}}.$
###### Proof.
Let $\mathbb{K}_{\mathbb{R}}:=\mathbb{K}\otimes_{\mathbb{Q}}\mathbb{R}$. Let
$\sigma_{1},\cdots,\sigma_{r+s}$ be as before, and identify
$\mathbb{K}_{\mathbb{R}}$ with
$\mathbb{R}^{r}\times\mathbb{C}^{s}\cong\mathbb{R}^{d}$. A place $\sigma_{j}$
is a field homomorphism
$\sigma_{j}\colon\mathbb{Q}(\lambda)\rightarrow(\mathbb{R}\text{ or
}\mathbb{C})$ and hence $\sigma_{j}$ is completely determined by
$\sigma_{j}(\lambda)$ which is one of the Galois conjugates of $\lambda$.
Assume that $\sigma_{1}$ is the real place corresponding to $\lambda$ itself;
i.e. $\sigma_{1}(\lambda)=\lambda$. Let $M_{\lambda}$ be the linear
endomorphism of $\mathbb{K}_{\mathbb{R}}$ induced by multiplication by
$\lambda$ in $\mathbb{K}$. The eigenvalues of $M_{\lambda}$ are the Galois
conjugates of $\lambda$. For any Galois conjugate $\lambda_{i}$, denote by
$E_{i}$ the invariant subspace for $M_{\lambda}$ with eigenvalue
$\lambda_{i}$, and let $\pi_{i}\colon\mathbb{K}_{\mathbb{R}}\rightarrow E_{i}$
be the projection along the complementary invariant subspace. Therefore,
$\pi_{i}$ is the projection onto the $i$th factor under the identification
$\mathbb{K}_{\mathbb{R}}\cong\mathbb{R}^{r}\times\mathbb{C}^{s}$.
Before going into the details, we explain the main idea in the case where
$\lambda$ is a cubic algebraic integer that is not totally real. In order to
find a non-negative integral matrix with spectral radius $\lambda$, we follow
Thurston’s proof of Lind’s theorem. See [Thu14, pages 353–354] and [Lin84]. Let $E_{1}$
be the one dimensional invariant subspace for $M_{\lambda}$ with eigenvalue
$\lambda$, and $E_{2}$ be the two dimensional invariant subspace corresponding
to the pair of complex Galois conjugates $\\{\delta,\overline{\delta}\\}$ of
$\lambda$. The endomorphism $M_{\lambda}$ of $\mathbb{K}_{\mathbb{R}}$ leaves
$E_{1}$ and $E_{2}$ invariant, and it acts on $E_{1}\cong\mathbb{R}$ and
$E_{2}\cong\mathbb{C}$ by multiplication by, respectively, the numbers
$\lambda$ and $\delta$. Pick a large positive integer $N=N(\delta,\lambda)$
such that if $P_{\delta}$ is a regular $N$-gon inscribed in a circle of radius
$R$ around the origin in $E_{2}$, then $P_{\delta}\subset
E_{2}\cong\mathbb{C}$ is invariant under multiplication by the complex number
$\delta/\lambda$. Such an integer $N$ exists since by the Perron condition the
absolute value of $\delta/\lambda$ is strictly less than $1$. Let $v$ be an
eigenvector of $M_{\lambda}$ with eigenvalue $\lambda$, and $E_{2}^{v}$ be the
affine plane $Rv+E_{2}$. Then $E_{2}^{v}$ lies in the positive half-space $E$
corresponding to $\lambda$ and $v$.
Denote the vertices of the shifted polygon $P_{v}:=Rv+P_{\delta}\subset
E_{2}^{v}$ by $v_{1},\cdots,v_{k}$, and choose integral points
$z_{1},\cdots,z_{k}\subset\mathbb{R}^{3}$ such that the distance between
$z_{i}$ and $v_{i}$ is ‘small’. Since the cone over the points
$v_{1},\cdots,v_{k}$ lies in the positive half-space and is invariant under
$M_{\lambda}$, and the distance between $z_{i}$ and $v_{i}$ is small, it is
reasonable to expect that the cone $\mathcal{C}$ over the points $z_{i}$ also
lies in the positive half-space $E$ and is invariant under $M_{\lambda}$ for
‘large’ $R$. Let $S$ be the semigroup generated by the set of integral points
in the cone $\mathcal{C}$ under vector addition. Then $M_{\lambda}$ preserves
$S$ and induces an action $M_{\lambda}^{S}$ on $S$. Moreover, $S$ has a finite
generating set, and we can estimate an upper bound for the size $|G|$ of a
generating set $G$ using Proposition 2.1. If we write the action of
$M_{\lambda}^{S}$ on $S$ in the generating set $G$, we obtain a non-negative
integral matrix of size $|G|$ whose spectral radius is equal to $\lambda$. The
details of the proof are as follows.
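Before the formal steps, here is a minimal numerical sketch (not part of the proof) of this cubic setup, taking $\lambda$ to be the plastic constant, the real root of $x^{3}-x-1$: the eigenvalues of the companion matrix are the Galois conjugates, and the contraction ratio $|\delta/\lambda|$ is strictly less than $1$, which is exactly what makes the invariant polygon $P_{\delta}$ possible.

```python
import numpy as np

# Companion matrix B of x^3 - x - 1; its eigenvalues are lambda (the plastic
# constant, approximately 1.3247) and a complex conjugate pair delta, bar-delta.
B = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
eigs = np.linalg.eigvals(B)
lam = max(e.real for e in eigs if abs(e.imag) < 1e-9)   # the Perron root
delta = next(e for e in eigs if e.imag > 1e-9)           # one complex conjugate
print(lam, abs(delta / lam))  # the ratio |delta/lambda| is strictly below 1
```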
Step 1: Choosing an inner product on $\mathbb{R}^{r}\times\mathbb{C}^{s}$.
Define $\mathfrak{B}$ as in (2). Pick $\alpha\in\mathfrak{B}$ and equip
$\mathbb{R}^{r}\times\mathbb{C}^{s}$ with the inner product $q_{\alpha}$. Note
that for the norm $|\cdot|_{\alpha}$ and for any $x\in\mathbb{K}_{\mathbb{R}}$
(4) $\displaystyle|x|_{\alpha}^{2}=\sum_{j=1}^{r+s}|\pi_{j}(x)|_{\alpha}^{2},$
and hence
(5) $\displaystyle|x|_{\alpha}\geq|\pi_{j}(x)|_{\alpha}\hskip
17.07164pt\text{for}\hskip 5.69054pt1\leq j\leq r+s.$
Let $\ell$ be the covering radius of $(\mathcal{O}_{\mathbb{K}},q_{\alpha})$.
Then
(6)
$\displaystyle\ell:=\max(\mathcal{O}_{\mathbb{K}},q_{\alpha})^{\frac{1}{2}}=\tau(\mathcal{O}_{\mathbb{K}},q_{\alpha})^{\frac{1}{2}}\cdot\det(\mathcal{O}_{\mathbb{K}},q_{\alpha})^{\frac{1}{2d}}.$
Step 2: Defining the polygon $P_{j}$ in the invariant subspace $E_{j}$ for
each $j>1$.
Define
(7)
$\displaystyle\rho_{j}=\frac{|\sigma_{j}(\lambda)|}{\lambda}\in\mathbb{R}\hskip
17.07164pt\text{for}\hskip 5.69054pt1<j\leq r+s.$
Hence
(8) $\displaystyle\rho=\max_{j>1}\\{\rho_{j}\\}.$
By the Perron condition, $\rho_{j}\in(0,1)$ for each $j>1$. Set
(9) $\displaystyle R_{j}=\frac{(2\sqrt{d}+4)\ell}{1-\rho}\hskip
17.07164pt\text{for}\hskip 5.69054pt1<j\leq r.$
###### Remark 3.4.
Clearly $R_{j}$ does not depend on $1<j\leq r$. However, we keep this notation
because it is convenient for anyone wishing to improve the bounds in this
article by substituting $\rho_{j}$ for $\rho$ in the definition of $R_{j}$.
Similarly, in what follows $R_{j}$ for $j>r$ will not depend on $j$.
For each real place $\sigma_{j}$ with $j>1$, define $P_{j}$ as the set of
points of distance at most $R_{j}$ from the origin in $E_{j}$; in particular
$P_{j}$ is an interval. Note, for future use, that
(10) $\displaystyle R_{j}\leq\Big{(}\frac{8\sqrt{d}}{1-\rho}\Big{)}\ell\hskip
17.07164pt\text{for}\hskip 5.69054pt1<j\leq r.$
For $j>r$, define the natural number $N_{j}\geq 3$ as the smallest positive
integer satisfying
(11) $\displaystyle N_{j}^{2}\geq\frac{2\sqrt{d}+9}{1-\rho}.$
Note, for later use, that
(12) $\displaystyle N_{j}^{2}\leq\frac{16\sqrt{d}}{1-\rho}.$
In the above, we used the fact that the smallest perfect square greater than
or equal to $u\geq 16$ is at most $(\sqrt{u}+1)^{2}\leq 3u/2+1$. Set
(13) $\displaystyle R_{j}=N_{j}^{2}\ell\hskip 17.07164pt\text{for}\hskip
5.69054ptr<j\leq r+s.$
For $j>r$, define the solid polygon $P_{j}\subset E_{j}$ as a regular
$N_{j}$-gon inscribed in the circle of radius $R_{j}$ around the origin.
Define the polytope $P$ as the product of $P_{j}$ for $j>1$
$P:=\\{z\in\prod_{j>1}E_{j}\hskip 5.69054pt|\hskip 5.69054pt\pi_{j}(z)\in
P_{j}\hskip 5.69054pt\text{for every }j\\}\subset\prod_{j>1}E_{j}.$
The number of vertices of $P$ is equal to the product of the number of
vertices of $P_{j}$ for $j>1$, which is equal to
$2^{r-1}\times\prod_{j>r}N_{j}$.
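The choice of $N_{j}$ in (11) and the bound (12) can be spot-checked numerically; the following sketch (with an arbitrary grid of values for $d$ and $\rho$) is purely illustrative.

```python
import math

def smallest_N(u):
    """Smallest integer N >= 3 with N**2 >= u, as in the definition of N_j."""
    N = 3
    while N * N < u:
        N += 1
    return N

# Spot-check (12): N_j**2 <= 16*sqrt(d)/(1 - rho) over a grid of d and rho.
for d in range(3, 30):
    for rho in (0.1, 0.5, 0.9, 0.99):
        u = (2 * math.sqrt(d) + 9) / (1 - rho)   # right-hand side of (11)
        N = smallest_N(u)
        assert N * N <= 16 * math.sqrt(d) / (1 - rho)
print("ok")
```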
Step 3: Defining the cone $\mathcal{C}$ in the positive half-space of
$\lambda$.
Let $v\in E_{1}$ be the positive (with respect to $\pi_{1}$) unit length
eigenvector for the eigenvalue $\lambda$. Here unit length is considered with
respect to the distance $|\cdot|_{\alpha}$. Let $E$ be the positive half-space
corresponding to $v$. Set
(14) $\displaystyle L=\max\\{R_{j}\hskip 5.69054pt|\hskip 5.69054pt1<j\leq
r+s\\}.$
Note, for future use, that
(15) $\displaystyle L>3\ell.$
Denote by $P_{v}$ the translated polytope $Lv+P$, and let
$\mathcal{C}_{v}\subset E$ be the cone over the polytope $P_{v}$.
Equivalently, if $\frac{1}{L}\cdot P_{j}$ is the dilation of $P_{j}$ by the
factor $\frac{1}{L}$ centered at the origin of $E_{j}$, then
(16) $\displaystyle\mathcal{C}_{v}=\\{z\in E\hskip 5.69054pt|\hskip
5.69054pt\frac{\pi_{j}(z)}{|\pi_{1}(z)|_{\alpha}}\in\frac{1}{L}\cdot
P_{j}\hskip 8.53581pt\text{for every }j>1\\}.$
Denote the vertices of $P_{v}$ by $v_{1},v_{2},\cdots,v_{k}$, where
(17) $\displaystyle k=2^{r-1}\times\prod_{j>r}N_{j}.$
Pick integral points $z_{1},z_{2},\cdots,z_{k}$ in $\mathcal{O}_{\mathbb{K}}$
such that the distance between $v_{i}$ and $z_{i}$ does not exceed the
covering radius $\ell$ of $(\mathcal{O}_{\mathbb{K}},q_{\alpha})$.
First we show that each $z_{i}$ lies in the positive half-space $E$. Identify
the subspace $E_{1}$ with $\mathbb{R}$ via
$x=rv\in E_{1}\longleftrightarrow r\in\mathbb{R}.$
It is enough to show that for each $i$ we have $\pi_{1}(z_{i})>0$. By the
triangle inequality
$\pi_{1}(z_{i})\geq\pi_{1}(v_{i})-|\pi_{1}(v_{i}-z_{i})|\geq\pi_{1}(v_{i})-|v_{i}-z_{i}|_{\alpha}\geq
L-\ell>0,$
where the second inequality holds by (5), and the last inequality is true by
(15).
Define $\mathcal{C}$ as the cone over the points $z_{1},z_{2},\cdots,z_{k}$
with apex at the origin
(18) $\displaystyle\mathcal{C}=\\{z\in E\hskip 5.69054pt|\hskip
5.69054ptz=\beta_{1}z_{1}+\cdots+\beta_{k}z_{k},\hskip 8.53581pt\beta_{i}\geq
0\hskip 8.53581pt\text{for every }i\\}.$
For each $z_{i}$, define $w_{i}$ as the intersection point of the ray through
$z_{i}$ from the origin and the affine hyperplane $Lv+E_{1}^{c}$, where
$E_{1}^{c}=\oplus_{j>1}E_{j}$ is the complementary invariant subspace to
$E_{1}$. Then
(19) $\displaystyle\mathcal{C}=\\{z\in E\hskip 5.69054pt|\hskip
5.69054ptz=\beta_{1}w_{1}+\cdots+\beta_{k}w_{k},\hskip 8.53581pt\beta_{i}\geq
0\hskip 8.53581pt\text{for every }i\\}.$
Equivalently, if $Q_{v}\subset Lv+E_{1}^{c}$ is the convex hull of
$w_{1},w_{2},\cdots,w_{k}$, then
(20) $\displaystyle\mathcal{C}=\\{z\in E\hskip 5.69054pt|\hskip
5.69054pt\frac{z}{|\pi_{1}(z)|_{\alpha}}\in\frac{1}{L}\cdot Q_{v}\\}=\\{z\in
E\hskip 5.69054pt|\hskip 5.69054pt\frac{Lz}{|\pi_{1}(z)|_{\alpha}}\in
Q_{v}\\}.$
Hence, $\mathcal{C}$ is the cone over the points $z_{1},z_{2},\cdots,z_{k}$,
or equivalently the cone over the points $w_{1},w_{2},\cdots,w_{k}$. In what
follows, we will use the two descriptions of $\mathcal{C}$ as needed.
Step 4: Showing that the cone $\mathcal{C}$ is invariant.
We need a few lemmas in order to prove that $\mathcal{C}$ is invariant.
###### Lemma 3.5.
For each $i$ we have
$\displaystyle|v_{i}|_{\alpha}\leq\sqrt{d}L$
$\displaystyle|v_{i}-w_{i}|_{\alpha}\leq(2\sqrt{d}+2)\ell.$
###### Proof.
For the first inequality, we have
$\displaystyle|v_{i}|_{\alpha}^{2}$
$\displaystyle=\sum_{j=1}^{r+s}|\pi_{j}(v_{i})|_{\alpha}^{2}\leq
L^{2}+\sum_{j>1}R_{j}^{2}\leq(r+s)L^{2}\leq dL^{2},$
where we used (4) and (14).
Assume that $w_{i}=r_{i}z_{i}$ for some positive real number $r_{i}$. Note
that
$\displaystyle|\pi_{1}(z_{i})-\pi_{1}(v_{i})|_{\alpha}=|\pi_{1}(z_{i}-v_{i})|_{\alpha}\leq|z_{i}-v_{i}|_{\alpha}\leq\ell$
$\displaystyle\implies\pi_{1}(z_{i})=\pi_{1}(v_{i})+\pi_{1}(z_{i}-v_{i})\in[L-\ell,L+\ell],$
where we used (5) in the first line above. Therefore
$\displaystyle
r_{i}=\frac{\pi_{1}(w_{i})}{\pi_{1}(z_{i})}=\frac{L}{\pi_{1}(z_{i})}\in[\frac{L}{L+\ell},\frac{L}{L-\ell}]$
$\displaystyle\implies r_{i}\leq\frac{L}{L-\ell},\hskip
8.53581pt\text{and}\hskip 8.53581pt|r_{i}-1|\leq\frac{\ell}{L-\ell}.$
Hence, by the triangle inequality
$\displaystyle|w_{i}-v_{i}|_{\alpha}=|r_{i}z_{i}-v_{i}|_{\alpha}\leq|r_{i}z_{i}-r_{i}v_{i}|_{\alpha}+|r_{i}v_{i}-v_{i}|_{\alpha}$
$\displaystyle=r_{i}|z_{i}-v_{i}|_{\alpha}+|r_{i}-1||v_{i}|_{\alpha}\leq(\frac{L}{L-\ell})\ell+(\frac{\ell}{L-\ell})\sqrt{d}L$
$\displaystyle=(\sqrt{d}+1)\frac{L\ell}{L-\ell}\leq(2\sqrt{d}+2)\ell.$
The last inequality above uses $L\leq 2(L-\ell)$, which holds by (15). ∎
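The final estimate of Lemma 3.5 can be spot-checked numerically: the following sketch (over an arbitrary grid) verifies that $(\sqrt{d}+1)\frac{L\ell}{L-\ell}\leq(2\sqrt{d}+2)\ell$ whenever $L\geq 3\ell$.

```python
import math

# Check that (sqrt(d)+1)*L*l/(L-l) <= (2*sqrt(d)+2)*l whenever L >= 3*l,
# i.e. the step L <= 2*(L-l) at the end of the proof of Lemma 3.5.
for d in (3, 5, 10, 25):
    for l in (0.5, 1.0, 2.0):
        for L in (3.0 * l, 5.0 * l, 100.0 * l):
            lhs = (math.sqrt(d) + 1) * L * l / (L - l)
            rhs = (2 * math.sqrt(d) + 2) * l
            assert lhs <= rhs
print("ok")
```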
Set
(21) $\displaystyle b=(2\sqrt{d}+2)\ell.$
###### Lemma 3.6.
The following estimates hold for every $z_{i}$, where $M_{\lambda}$ is the
linear endomorphism of $\mathbb{K}_{\mathbb{R}}$ induced by multiplication by
$\lambda$ in $\mathbb{K}$.
1. a)
$\frac{|\pi_{j}(M_{\lambda}(z_{i}))|_{\alpha}}{|\pi_{1}(M_{\lambda}(z_{i}))|_{\alpha}}\leq\rho_{j}\Big{(}\frac{R_{j}+\ell}{L-\ell}\Big{)}\hskip
17.07164pt\text{for every }j>1.$
2. b)
$\frac{|\pi_{j}(M_{\lambda}(z_{i}))|_{\alpha}}{|\pi_{1}(M_{\lambda}(z_{i}))|_{\alpha}}\leq\frac{R_{j}-b}{L}\hskip
17.07164pt\text{for every }1<j\leq r.$
3. c)
$\frac{|\pi_{j}(M_{\lambda}(z_{i}))|_{\alpha}}{|\pi_{1}(M_{\lambda}(z_{i}))|_{\alpha}}\leq\frac{1}{L}\bigg{(}R_{j}\big{(}1-\frac{5}{N_{j}^{2}}\big{)}-b\bigg{)}\hskip
17.07164pt\text{for every }j>r.$
###### Proof.
1. a)
We have
$\displaystyle\frac{|\pi_{j}(M_{\lambda}(z_{i}))|_{\alpha}}{|\pi_{1}(M_{\lambda}(z_{i}))|_{\alpha}}$
$\displaystyle=\frac{|\sigma_{j}(\lambda)|}{|\sigma_{1}(\lambda)|}\cdot\frac{|\pi_{j}(z_{i})|_{\alpha}}{|\pi_{1}(z_{i})|_{\alpha}}=\rho_{j}\cdot\frac{|\pi_{j}(z_{i})|_{\alpha}}{|\pi_{1}(z_{i})|_{\alpha}}$
$\displaystyle\leq\rho_{j}\cdot\frac{|\pi_{j}(v_{i})|_{\alpha}+|\pi_{j}(z_{i}-v_{i})|_{\alpha}}{|\pi_{1}(v_{i})|_{\alpha}-|\pi_{1}(v_{i}-z_{i})|_{\alpha}}$
$\displaystyle\leq\rho_{j}\cdot\frac{|\pi_{j}(v_{i})|_{\alpha}+|z_{i}-v_{i}|_{\alpha}}{|\pi_{1}(v_{i})|_{\alpha}-|v_{i}-z_{i}|_{\alpha}}\leq\rho_{j}\Big{(}\frac{R_{j}+\ell}{L-\ell}\Big{)},$
where we have used inequality (5).
2. b)
Assume that $\sigma_{j}$ is a real place where $1<j\leq r$, and set
$R^{\prime}_{j}=R_{j}/\ell$, $L^{\prime}=L/\ell$, and $b^{\prime}=b/\ell$.
Then
$\displaystyle\rho_{j}\Big{(}\frac{R_{j}+\ell}{L-\ell}\Big{)}\leq\frac{R_{j}-b}{L}$
$\displaystyle\iff\rho_{j}\Big{(}\frac{R^{\prime}_{j}+1}{L^{\prime}-1}\Big{)}\leq\frac{R^{\prime}_{j}-b^{\prime}}{L^{\prime}}$
$\displaystyle\iff(\rho_{j}+b^{\prime})L^{\prime}+R_{j}^{\prime}-b^{\prime}\leq(1-\rho_{j})L^{\prime}R^{\prime}_{j}.$
Hence, using $\rho_{j}\in(0,1)$, it is enough to have
$\displaystyle(1+b^{\prime})L^{\prime}+R_{j}^{\prime}\leq(1-\rho_{j})L^{\prime}R^{\prime}_{j}$
$\displaystyle\iff\frac{1+b^{\prime}}{R^{\prime}_{j}}+\frac{1}{L^{\prime}}\leq
1-\rho_{j}$
$\displaystyle\iff\frac{2\sqrt{d}+3}{R^{\prime}_{j}}+\frac{1}{L^{\prime}}\leq
1-\rho_{j}.$
On the other hand, $L^{\prime}\geq R_{j}^{\prime}$ by (14). Hence, it suffices
to have
$\displaystyle\frac{2\sqrt{d}+4}{R^{\prime}_{j}}\leq 1-\rho_{j}\iff
R^{\prime}_{j}\geq\frac{2\sqrt{d}+4}{1-\rho_{j}},$
which holds by (9).
3. c)
Now assume that $\sigma_{j}$ is a complex place where $j>r$. Then
$\displaystyle\rho_{j}\Big{(}\frac{R_{j}+\ell}{L-\ell}\Big{)}\leq\frac{1}{L}\bigg{(}R_{j}\big{(}1-\frac{5}{N_{j}^{2}}\big{)}-b\bigg{)}$
$\displaystyle\iff\rho_{j}(R_{j}+\ell)\leq\Big{(}\frac{L-\ell}{L}\Big{)}\Big{(}R_{j}\big{(}1-\frac{5}{N_{j}^{2}}\big{)}-b\Big{)}.$
By (14), we have $L\geq R_{j}=N_{j}^{2}\ell$, and so
$\displaystyle\frac{L-\ell}{L}=1-\frac{\ell}{L}\geq 1-\frac{1}{N_{j}^{2}}.$
Set $b^{\prime}=b/\ell$. After substituting $R_{j}=N_{j}^{2}\ell$ and dividing
both sides by $\ell$, it is enough to have
$\displaystyle\rho_{j}(N_{j}^{2}+1)\leq\Big{(}1-\frac{1}{N_{j}^{2}}\Big{)}\Big{(}N_{j}^{2}-5-b^{\prime}\Big{)}.$
Multiplying both sides by $N_{j}^{2}$ and then expanding, the last inequality
is equivalent to
$\displaystyle N_{j}^{4}(1-\rho_{j})+(5+b^{\prime})\geq
N_{j}^{2}(b^{\prime}+6+\rho_{j}).$
So, using $\rho_{j}\in(0,1)$ and neglecting the positive term $5+b^{\prime}$,
it suffices to have
$\displaystyle N_{j}^{4}(1-\rho_{j})\geq N_{j}^{2}(b^{\prime}+7)\iff
N_{j}^{2}\geq\frac{b^{\prime}+7}{1-\rho_{j}}=\frac{2\sqrt{d}+9}{1-\rho_{j}},$
and the last inequality holds by (11).
∎
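The inequality in part b) can be spot-checked numerically in the worst case $\rho_{j}=\rho$; the following sketch (over an arbitrary grid) takes $R^{\prime}=R_{j}/\ell$ and $b^{\prime}=b/\ell$ as in the proof.

```python
import math

# Spot-check of Lemma 3.6(b) with rho_j = rho: for R = (2*sqrt(d)+4)/(1-rho),
# b = 2*sqrt(d)+2, and any L >= R (all in units of the covering radius ell),
# we have rho*(R+1)/(L-1) <= (R-b)/L.
for d in (3, 4, 10, 25):
    for rho in (0.05, 0.5, 0.9):
        b = 2 * math.sqrt(d) + 2
        R = (2 * math.sqrt(d) + 4) / (1 - rho)
        for L in (R, 2 * R, 10 * R):
            assert rho * (R + 1) / (L - 1) <= (R - b) / L
print("ok")
```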
Recall that $\mathcal{C}$ is the cone over the points $w_{1},\cdots,w_{k}$
with apex at the origin, i.e. the cone over the polytope $Q_{v}$ (i.e. the
convex hull of $w_{i}$). For each $i$, the point $w_{i}$ of $Q_{v}$ is of
distance at most $b=(2\sqrt{d}+2)\ell$ from the corresponding point $v_{i}$ of
$P_{v}$. Since $b$ is ‘small’, we expect the polytope $Q_{v}$ to contain a
‘smaller version’ of $P_{v}$. More precisely, define
$\displaystyle P_{v}^{b}=\\{x\in P_{v}\hskip 5.69054pt|\hskip
5.69054pt\text{dist}(x,\partial P_{v})\geq b\\},$
where
$\displaystyle\text{dist}(X,Y):=\min\\{|x-y|_{\alpha}\hskip 5.69054pt|\hskip
5.69054ptx\in X,\hskip 5.69054pty\in Y\\}.$
###### Lemma 3.7.
We have $P_{v}^{b}\subset Q_{v}$.
###### Proof.
Let $B$ be the ball of radius $b$ around the origin in
$E_{1}^{c}=\oplus_{j>1}E_{j}$. We show that
$P_{v}\subset Q_{v}+B,$
where $Q_{v}+B$ denotes the Minkowski sum. To see this, pick an arbitrary
point $x\in P_{v}$ and write it as a convex combination
$x=\sum_{i=1}^{k}\alpha_{i}v_{i},$
with $0\leq\alpha_{i}\leq 1$ and $\sum_{i=1}^{k}\alpha_{i}=1$. Then
$x=\sum_{i=1}^{k}\alpha_{i}w_{i}+\sum_{i=1}^{k}\alpha_{i}(v_{i}-w_{i}).$
If we set $y=\sum_{i=1}^{k}\alpha_{i}w_{i}$ and
$z=\sum_{i=1}^{k}\alpha_{i}(v_{i}-w_{i})$, then $y\in Q_{v}$. Moreover, by the
triangle inequality $z\in B$, since each term $v_{i}-w_{i}$ lies in $B$. This
shows that $P_{v}\subseteq Q_{v}+B$.
If we denote the Minkowski difference by $\div$, then we have
$P_{v}\div B\subset(Q_{v}+B)\div B=Q_{v},$
where the last equality holds since $Q_{v}$ and $B$ are non-empty, compact,
and convex. See Lemma 2.3. Therefore, it suffices to show that
$P_{v}^{b}\subset P_{v}\div B$. It follows from the definition that
$P_{v}^{b}+B\subset P_{v}$. Hence, assuming that $P_{v}^{b}$ is non-empty, we
have
$P_{v}^{b}=(P_{v}^{b}+B)\div B\subset P_{v}\div B,$
where we used Lemma 2.3 twice. This completes the proof. ∎
Define
$\displaystyle P_{j}^{b}:=\\{x\in P_{j}\hskip 5.69054pt|\hskip
5.69054pt\text{dist}(x,\partial P_{j})\geq b\\},$
and set
$\displaystyle V:=Lv+\\{x\in E_{1}^{c}\hskip 5.69054pt|\hskip
5.69054pt\pi_{j}(x)\in P_{j}^{b}\hskip 8.53581pt\text{for every }j>1\\},$
$\displaystyle W:=Lv+\\{x\in E_{1}^{c}\hskip 2.84526pt|\hskip
2.84526pt|\pi_{j}(x)|_{\alpha}\leq R_{j}-b\hskip 5.69054pt\text{for }1<j\leq
r,\hskip 2.84526pt|\pi_{j}(x)|_{\alpha}\leq
R_{j}(1-\frac{5}{N_{j}^{2}})-b\hskip 5.69054pt\text{for }j>r\\}.$
###### Lemma 3.8.
We have $W\subset V\subset P_{v}^{b}\subset Q_{v}$.
###### Proof.
The inclusion $W\subset V$ follows from the following simple fact: Let $T$ be
a regular $N$-gon inscribed in a circle of radius $R$ centered at the point
$O$. Every point in $\partial T$ is of distance at least $R(1-5/N^{2})$ from
$O$.
For completeness, we give a proof of the above fact. After scaling, we may
assume that $R=1$. The minimum distance $d(O,x)$ for $x\in\partial T$ is
obtained by drawing the perpendicular from $O$ to a side of $T$. Such a
perpendicular has length $\cos(\frac{\pi}{N})$. Hence
$d(O,x)\geq\cos(\frac{\pi}{N})>1-\frac{\pi^{2}}{2N^{2}}>1-\frac{5}{N^{2}},$
where we have used the inequality $\cos(x)>1-\frac{x^{2}}{2}$, valid for all
$x\neq 0$.
The inclusion $P_{v}^{b}\subset Q_{v}$ was proved in Lemma 3.7. Now we show
that $V\subset P_{v}^{b}$. Pick arbitrary points $x\in V$ and $y\in\partial
P_{v}=\partial(Lv+P)$. Since $P=\prod_{j>1}P_{j}$ is a product, there is $j>1$
such that $\pi_{j}(y)\in\partial P_{j}$. Therefore
$\text{dist}(x,y)\geq\text{dist}(\pi_{j}(x),\pi_{j}(y))\geq\text{dist}(P_{j}^{b},\partial
P_{j})\geq b.$
In the above, we used the fact that projection onto a non-empty closed convex
set in a Euclidean space is distance decreasing; see e.g. [Sch13, Theorem
1.2.1]. Hence $x\in P_{v}^{b}$, proving the inclusion $V\subset P_{v}^{b}$. ∎
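The polygon fact used in the proof above is easy to verify numerically; a quick sketch:

```python
import math

# For a regular N-gon inscribed in a unit circle, the distance from the center
# to the boundary is cos(pi/N), and cos(pi/N) > 1 - 5/N**2 for every N >= 3.
for N in range(3, 1000):
    assert math.cos(math.pi / N) > 1 - 5 / N**2
print("ok")
```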
Using (20), in order to show that $\mathcal{C}$ is invariant, it is enough to
prove that for every $z_{i}$
$\displaystyle
L\cdot\frac{M_{\lambda}(z_{i})}{|\pi_{1}(M_{\lambda}(z_{i}))|_{\alpha}}\in
W\subset Q_{v}.$
Equivalently, we would like to show that for every $j>1$ and every $z_{i}$ the
following inequalities hold
$\displaystyle\frac{|\pi_{j}(M_{\lambda}(z_{i}))|_{\alpha}}{|\pi_{1}(M_{\lambda}(z_{i}))|_{\alpha}}\leq\frac{R_{j}-b}{L}\hskip
8.53581pt\text{for }1<j\leq r,$
$\displaystyle\frac{|\pi_{j}(M_{\lambda}(z_{i}))|_{\alpha}}{|\pi_{1}(M_{\lambda}(z_{i}))|_{\alpha}}\leq\frac{1}{L}\bigg{(}R_{j}\big{(}1-\frac{5}{N_{j}^{2}}\big{)}-b\bigg{)}\hskip
8.53581pt\text{for }j>r.$
But the above inequalities hold by Lemma 3.6. This finishes the proof of the
invariance of $\mathcal{C}$.
Step 5: Defining a non-negative integral matrix $A$, where each irreducible
component of $A$ has spectral radius equal to $\lambda$.
Let $S$ be the semigroup generated by elements of
$\mathcal{C}\cap\mathcal{O}_{\mathbb{K}}$ under vector addition. By
Proposition 2.1, the set of integral points (i.e. elements of
$\mathcal{O}_{\mathbb{K}}$) in the compact region
(22) $\displaystyle
C:=\\{\alpha_{1}z_{1}+\alpha_{2}z_{2}+\cdots+\alpha_{k}z_{k}\hskip
2.84526pt|\hskip 2.84526pt0\leq\alpha_{i}\leq 1\hskip 8.53581pt\text{for every
}i\\}$
generate $S$. Enumerate the set of points in $C\cap\mathcal{O}_{\mathbb{K}}$
by $c_{1},\cdots,c_{n}$. Recall that $M_{\lambda}$ acts on $S$. Moreover,
$M_{\lambda}$ can be represented by the companion matrix $B$ when written in
the basis $\\{1,\lambda,\cdots,\lambda^{d-1}\\}$ of
$\mathbb{K}_{\mathbb{R}}=\mathbb{Q}(\lambda)\otimes\mathbb{R}$. Let
$A=[a_{ij}]$ be a non-negative integral matrix corresponding to the action of
$B$ on $S$ in the basis $\\{c_{1},\cdots,c_{n}\\}$. Hence
(23) $\displaystyle Bc_{j}=\sum_{i=1}^{n}a_{ij}c_{i},$
where $B\colon\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ is the companion matrix
associated with $\lambda$. There might be various choices for $A$, when
$Bc_{j}$ can be written as a non-negative linear combination of
$c_{1},\cdots,c_{n}$ in more than one way.
Lind’s argument [Lin84, pages 288–289] shows that every irreducible component
of $A$ has spectral radius $\lambda$. For the reader’s convenience, we briefly
follow the proof given in [LM21, Lemma 11.1.10]. Replace $A$ by one of its
irreducible components; this amounts to taking a minimal subset of
$\\{c_{1},\cdots,c_{n}\\}$ for which (23) holds. Let $\mu$ be the spectral
radius of $A$. Let $e_{i}$ be the $i$th unit vector in $\mathbb{R}^{n}$, and
define the linear map $P\colon\mathbb{R}^{n}\rightarrow\mathbb{R}^{d}$ by
$P(e_{i})=c_{i}$. Then by (23) we have $PA=BP$. Since $A$ is non-negative and
irreducible, by the Perron–Frobenius theorem $A$ has a positive eigenvector
$v_{\mu}$ corresponding to $\mu$. Since $Pv_{\mu}$ is a positive linear
combination of the $c_{j}$, and each $c_{j}$ satisfies $\pi_{1}(c_{j})>0$, we
necessarily have $\pi_{1}(Pv_{\mu})>0$ and hence $Pv_{\mu}\neq 0$. Moreover,
$B(Pv_{\mu})=P(Av_{\mu})=\mu(Pv_{\mu}).$
Hence, $Pv_{\mu}$ is an eigenvector for $B$ with eigenvalue $\mu$. But the
only eigenvectors of $B$ with positive $E_{1}$ component lie in $E_{1}$. Hence
$Pv_{\mu}$ is a multiple of $v$ (the eigenvector with eigenvalue $\lambda$)
and
$\displaystyle\mu(Pv_{\mu})$ $\displaystyle=B(Pv_{\mu})=\lambda(Pv_{\mu})$
$\displaystyle\implies\mu=\lambda.$
###### Remark 3.9.
In [Thu14, page 354], Thurston gave a new proof of Lind’s converse to the
integer Perron–Frobenius theorem. Thurston assumed that the matrix constructed
via his method can be taken to be primitive as well. If this is the case, then
Theorem 1.3 readily upgrades to give an upper bound for the Perron–Frobenius
degree of a Perron number.
Step 6: Bounding the dimension of the matrix $A$.
By Proposition 2.2, the number of integral points in $C$ is at most
$\displaystyle\frac{\textsc{Vol}(C)}{\text{Covol}(\mathcal{O}_{\mathbb{K}})}\cdot(d+1)!=\frac{\textsc{Vol}(C)}{\text{det}(\mathcal{O}_{\mathbb{K}},q_{\alpha})^{\frac{1}{2}}}\cdot(d+1)!.$
We give an upper bound for $\textsc{Vol}(C)$ via a series of lemmas.
###### Lemma 3.10.
For every $j>1$ and every $w_{i}$, we have
$|\pi_{j}(w_{i})|_{\alpha}<2R_{j}.$
###### Proof.
By the triangle inequality
$\displaystyle|\pi_{j}(w_{i})|_{\alpha}$
$\displaystyle\leq|\pi_{j}(v_{i})|_{\alpha}+|\pi_{j}(w_{i}-v_{i})|_{\alpha}$
$\displaystyle\leq|\pi_{j}(v_{i})|_{\alpha}+|w_{i}-v_{i}|_{\alpha}$
$\displaystyle\leq R_{j}+(2\sqrt{d}+2)\ell<2R_{j}$ $\displaystyle\iff
R_{j}>(2\sqrt{d}+2)\ell.$
In the first line above (5) is used. The second line follows from Lemma 3.5,
and the last inequality holds by (9), (11), and (13). ∎
###### Lemma 3.11.
Let $M:=2k\sqrt{d}$, where $k$ is the number of vertices of $P_{v}$. Then
$C\subset\\{rx\in\mathbb{R}^{d}\hskip 5.69054pt|\hskip 5.69054ptx\in
Q_{v},\hskip 2.84526pt\text{and}\hskip 5.69054pt0\leq r\leq M\\}.$
###### Proof.
Since $Q_{v}$ is the convex hull of the $w_{i}$, the ray from the origin
passing through an arbitrary point $p$ of $C$ intersects $Q_{v}$ at some
point $w$. Therefore, if we define $M_{1}$ as
$M_{1}=\frac{\max_{p\in C}|p|_{\alpha}}{\min_{w\in Q_{v}}|w|_{\alpha}},$
then
$C\subset\\{rx\in\mathbb{R}^{d}\hskip 5.69054pt|\hskip 5.69054ptx\in
Q_{v},\hskip 2.84526pt\text{and}\hskip 5.69054pt0\leq r\leq M_{1}\\}.$
It is enough to show that $M_{1}\leq M$. For every point $w$ in $Q_{v}$
$|w|_{\alpha}\geq|\pi_{1}(w)|_{\alpha}=L.$
On the other hand if $p$ is a point in $C$, then by the triangle inequality we
have
$|p|_{\alpha}\leq k\cdot\max_{i}|z_{i}|_{\alpha}\leq
k\cdot\max_{i}(|v_{i}|_{\alpha}+\ell)\leq k(\sqrt{d}L+\ell)<2k\sqrt{d}L,$
where we used Lemma 3.5 for the implication $|v_{i}|_{\alpha}\leq\sqrt{d}L$.
Combining the previous inequalities gives
$M_{1}=\frac{\max_{p\in C}|p|_{\alpha}}{\min_{w\in
Q_{v}}|w|_{\alpha}}\leq\frac{2k\sqrt{d}L}{L}=2k\sqrt{d}=M.$
∎
###### Lemma 3.12.
We have
$\textsc{Vol}(Q_{v})\leq
2^{2d-2}\times\prod_{j>1}^{r}R_{j}\times\prod_{j>r}R_{j}^{2},$
where $\textsc{Vol}(Q_{v})$ is the $(d-1)$-dimensional volume of $Q_{v}$.
###### Proof.
We have
$\displaystyle\textsc{Vol}(Q_{v})\leq\prod_{j>1}^{r}\textsc{Length}(\pi_{j}(Q_{v}))\times\prod_{j>r}\textsc{Area}(\pi_{j}(Q_{v})),$
where Vol, Length and Area are with respect to the inner product $q_{\alpha}$.
For $j>1$, define $\textbf{R}_{j}$ as
$\textbf{R}_{j}=\max_{w\in Q_{v}}|\pi_{j}(w)|_{\alpha}.$
Therefore
$\textsc{Vol}(Q_{v})\leq\prod_{j>1}^{r}(2\textbf{R}_{j})\times\prod_{j>r}(\pi\textbf{R}_{j}^{2}).$
By Lemma 3.10 and the triangle inequality we have $\textbf{R}_{j}<2R_{j}$. The
lemma now follows from the inequalities $\textbf{R}_{j}<2R_{j}$ and $\pi<4$,
and the equality $r+2s=d$. ∎
###### Lemma 3.13.
We have
$\textsc{Vol}(C)<2^{d^{2}+6d-1}\times
d^{\frac{ds}{4}+\frac{3d}{2}-1}\times\frac{\ell^{d}}{(1-\rho)^{d+\frac{ds}{2}}},$
where $C$ is defined as in (22).
###### Proof.
Note that
(24) $\displaystyle M=2k\sqrt{d}=2^{r}\times\prod_{j>r}N_{j}\times\sqrt{d}\leq
2^{r}\times\Big{(}\frac{16\sqrt{d}}{1-\rho}\Big{)}^{\frac{s}{2}}\times\sqrt{d}<2^{d}\times\frac{d^{\frac{s}{4}+1}}{(1-\rho)^{\frac{s}{2}}},$
where we used (17), (12), and the identity $r+2s=d$. By Lemma 3.11
$\displaystyle\textsc{Vol}(C)$
$\displaystyle\leq\textsc{Vol}(\\{rx\in\mathbb{R}^{d}\hskip 2.84526pt|\hskip
2.84526ptx\in Q_{v},\hskip 2.84526pt\text{and}\hskip 5.69054pt0\leq r\leq
M\\}).$
The upper bound above is the volume of a cone over $M\cdot Q_{v}$, where
$M\cdot Q_{v}$ denotes a dilation of $Q_{v}$ with the factor of $M$ from the
point $0\in\mathbb{R}^{d}$. The height of this cone is $ML$, since the height
of $Q_{v}$ measured perpendicularly from its apex along the $E_{1}$-axis is
$L$. Hence, the volume of the cone is equal to
$\frac{(ML)\textsc{Vol}(M\cdot
Q_{v})}{d}=\frac{M^{d}L}{d}\textsc{Vol}(Q_{v}).$
Therefore, we have
$\displaystyle\textsc{Vol}(C)\leq\frac{M^{d}L}{d}\times\textsc{Vol}(Q_{v})$
$\displaystyle\leq\Big{(}2^{d}\times\frac{d^{\frac{s}{4}+1}}{(1-\rho)^{\frac{s}{2}}}\Big{)}^{d}\times\frac{L}{d}\times
2^{2d-2}\times\prod_{j>1}^{r}R_{j}\times\prod_{j>r}R_{j}^{2},$
where we used Lemma 3.12. Moreover, using (10), (12), and (13) we have
$\displaystyle\prod_{j>1}^{r}R_{j}\times\prod_{j>r}R_{j}^{2}$
$\displaystyle\leq\prod_{j>1}^{r}\Big{(}\frac{8\sqrt{d}\ell}{1-\rho}\Big{)}\times\prod_{j>r}\Big{(}\frac{16\sqrt{d}\ell}{1-\rho}\Big{)}^{2}$
$\displaystyle=2^{3r+8s-3}\times
d^{\frac{d-1}{2}}\times\frac{\ell^{d-1}}{(1-\rho)^{d-1}}.$
Combining with the previous upper bound for $\textsc{Vol}(C)$, and using
$L<\frac{16\sqrt{d}\ell}{1-\rho}$, we have
$\displaystyle\textsc{Vol}(C)\leq 2^{d^{2}+2d+3r+8s-1}\times
d^{\frac{ds}{4}+\frac{3d}{2}-1}\times\frac{\ell^{d}}{(1-\rho)^{d+\frac{ds}{2}}}\leq
2^{d^{2}+6d-1}\times
d^{\frac{ds}{4}+\frac{3d}{2}-1}\times\frac{\ell^{d}}{(1-\rho)^{d+\frac{ds}{2}}},$
where we used the relation $r+2s=d$.
∎
Therefore, the number of integral points in $C$ is at most
$\displaystyle\frac{\textsc{Vol}(C)}{\text{det}(\mathcal{O}_{\mathbb{K}},q_{\alpha})^{\frac{1}{2}}}\cdot(d+1)!$
$\displaystyle\leq 2^{d^{2}+6d}\times
d^{\frac{ds}{4}+\frac{5d}{2}-1}\times\frac{\ell^{d}}{\text{det}(\mathcal{O}_{\mathbb{K}},q_{\alpha})^{\frac{1}{2}}}\times\frac{1}{(1-\rho)^{d+\frac{ds}{2}}}$
$\displaystyle=2^{d^{2}+6d}\times
d^{\frac{ds}{4}+\frac{5d}{2}-1}\times\tau(\mathcal{O}_{\mathbb{K}},q_{\alpha})^{\frac{d}{2}}\times\frac{1}{(1-\rho)^{d+\frac{ds}{2}}}.$
In the first line above, we used the inequality
$(d+1)!=(d+1)d!<(2d)d^{d-1}=2d^{d}.$
The second line above uses the definition of $\ell$, and Definition 3.1. By
taking infimum over all $\alpha\in\mathfrak{B}$, we obtain the upper bound
$\displaystyle 2^{d^{2}+6d}\times
d^{\frac{ds}{4}+\frac{5d}{2}-1}\times\frac{\tau_{\min}(\mathcal{O}_{\mathbb{K}})^{\frac{d}{2}}}{(1-\rho)^{d+\frac{ds}{2}}},$
for the number of integral points in $C$ and hence for the dimension of the
matrix $A$.
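The factorial estimate used above is easy to verify numerically; a quick sketch:

```python
import math

# Check (d+1)! = (d+1)*d! < (2d)*d**(d-1) = 2*d**d for small d >= 3.
for d in range(3, 20):
    assert math.factorial(d + 1) < 2 * d**d
print("ok")
```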
Bayer Fluckiger showed in [Bay06, Proposition 4.2], as a corollary of the work
of Banaszczyk [Ban93, Theorem 2.2], that for any $\alpha\in\mathfrak{B}$
(25)
$\displaystyle\tau(\mathcal{O}_{\mathbb{K}},q_{\alpha})\leq\frac{d}{4}\cdot
D_{\mathbb{K}}^{\frac{1}{d}}.$
This gives the upper bound
$2^{d^{2}+5d}\times
d^{\frac{ds}{4}+3d-1}\times\frac{\sqrt{D_{\mathbb{K}}}}{(1-\rho)^{d+\frac{ds}{2}}}$
for the dimension of $A$. Both parts of the theorem now follow from the crude
estimates
$2^{d^{2}+6d}\leq 8^{d^{2}},\hskip 17.07164pt\frac{ds}{4}+3d-1<d^{2},\hskip
17.07164ptd+\frac{ds}{2}<d^{2}.$
∎
The bound in Theorem 1.3 is perhaps enormous compared to the Perron–Frobenius
degree, so we have not tried to make the constants optimal. The point is to
have an explicit bound in terms of data that we believe are relevant to the
Perron–Frobenius degree; see Question 5.2.
###### Remark 3.14.
Denote the _covering conjecture_ for dimension $d$ by $C_{d}$.
###### Conjecture 3.15 (Covering conjecture).
The covering radius of any well-rounded unimodular lattice $L$ in
$\mathbb{R}^{d}$ (with the standard norm $|\cdot|$) satisfies
$\sup_{x\in\mathbb{R}^{d}}\inf_{y\in L}|x-y|\leq\frac{\sqrt{d}}{2}.$
Equality happens if and only if $L=g\cdot\mathbb{Z}^{d}$ for some
$g\in\mathrm{SO}_{d}(\mathbb{R})$.
See McMullen [McM05] for the definition of _well-rounded lattice_ , and the
application of the covering conjecture to _Minkowski’s conjecture_. The
covering conjecture is proved for $d\leq 10$; see [KR20, KR16] and the
references therein. Moreover, it is known to be false for $d\geq 30$; see
[RSW17].
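As a sanity check (a sketch, not part of the text), $\mathbb{Z}^{d}$ attains the stated equality: the deep hole $(1/2,\cdots,1/2)$ realizes the covering radius $\sqrt{d}/2$.

```python
import math

def dist_to_Zd(x):
    """Distance from a point x in R^d to the nearest point of Z^d."""
    return math.sqrt(sum(min(t - math.floor(t), math.ceil(t) - t) ** 2
                         for t in x))

# The deep hole (1/2, ..., 1/2) of Z^d is at distance sqrt(d)/2 from the
# lattice, matching the equality case of the covering conjecture.
for d in range(1, 9):
    assert abs(dist_to_Zd([0.5] * d) - math.sqrt(d) / 2) < 1e-12
print("ok")
```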
In [Bay06, page 313], Bayer Fluckiger pointed out that McMullen’s results
[McM05] together with the covering conjecture $C_{d}$ imply that for any
totally real $\lambda$ of degree $d$ we have
$\tau_{\min}(\mathcal{O}_{\mathbb{K}})\leq\frac{d}{4}$. Hence, for any $d\leq
10$ and for any totally real Perron number $\lambda$ of degree $d$
$d_{PF}^{irr}(\lambda)\leq\Big{(}\frac{8d}{1-\rho}\Big{)}^{d^{2}}.$
It would be interesting to know which number fields satisfy an inequality
similar to $\tau_{\min}(\mathcal{O}_{\mathbb{K}})\leq\frac{d}{4}$, with an
upper bound depending only on the degree $d$. As McMullen kindly pointed out
to me, it is easy to see that no such bound exists for
imaginary quadratic fields $\mathbb{Q}(\sqrt{n})$ for $n<0$. On the other
hand, Bayer Fluckiger showed that the inequality
$\tau_{\min}(\mathcal{O}_{\mathbb{K}})\leq\frac{d}{4}$ holds for fields of the
form $\mathbb{K}=\mathbb{Q}(\zeta_{p^{r}})$ or
$\mathbb{K}=\mathbb{Q}(\zeta_{p^{r}}+\zeta^{-1}_{p^{r}})$, where $p$ is an odd
prime number, $r$ is a natural number, and $\zeta_{p^{r}}$ is a primitive
$p^{r}$th root of unity; see respectively [Bay06, page 319, line 4] and
[Bay06, Lemma 8.5].
###### Example 3.16 (Pisot numbers).
A real algebraic integer $\alpha>1$ is called _Pisot_ if all of its other
Galois conjugates lie in the open unit disk $\\{z\in\mathbb{C}\hskip
5.69054pt|\hskip 5.69054pt|z|<1\\}$. Note that a Pisot number is always
Perron. We collect a few facts about Pisot numbers.
1. (1)
A number field is called _real_ if it has at least one real place. Clearly,
every number field containing a Pisot number of the same degree must be real.
Pisot [Pis38] proved that every real number field $\mathbb{K}$ of degree $d$
contains a Pisot number of degree $d$. Moreover, the set of Pisot numbers of
degree $d$ in $\mathbb{K}$ is closed under multiplication; see [Mey72,
Corollary on page 33].
2. (2)
The smallest Pisot number is the largest root $p$ of $x^{3}-x-1$ and is known
as the _plastic constant_. This was identified as the smallest known Pisot
number by Salem [Sal44], and Siegel proved it to be the smallest possible
Pisot number [Sie44]. In particular, if $\lambda$ is a Pisot number with
spectral ratio $\rho$, then $\rho<p^{-1}$ implying that
$\frac{1}{1-\rho}<\frac{1}{1-p^{-1}}.$
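As a quick numerical sanity check of fact (2) (a hedged illustration, not part of the original argument), one can verify in floating point that the real root of $x^{3}-x-1$ is approximately $1.324718$, that its complex conjugates lie in the open unit disk, and that its spectral ratio satisfies $\rho<p^{-1}$:

```python
import numpy as np

roots = np.roots([1, 0, -1, -1])                 # roots of x^3 - x - 1
p = max(roots, key=lambda r: r.real).real        # the unique real root
others = [r for r in roots if abs(r - p) > 1e-6]

print(round(p, 6))                               # 1.324718, the plastic constant
print(all(abs(r) < 1 for r in others))           # True: conjugates in the open unit disk

# spectral ratio of p itself, confirming rho < p^{-1} for this Pisot number
rho = max(abs(r) for r in others) / p
print(rho < 1 / p)                               # True
```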
Let $\mathbb{K}$ be a real number field of degree $d\geq 3$. Let $\lambda$ be
any Pisot number in $\mathbb{K}$ of degree $d$, and note that
$\mathbb{Q}(\lambda)=\mathbb{K}$. By Theorem 1.3, $d_{PF}^{irr}(\lambda)$ is
bounded above by
$\Big{(}\dfrac{8d}{1-p^{-1}}\Big{)}^{d^{2}}\sqrt{D_{\mathbb{K}}}.$
Note this upper bound only depends on $\mathbb{K}$ and not on the Pisot number
$\lambda\in\mathbb{K}$. Every number field $\mathbb{K}$ has only finitely many
subfields $\mathbb{K}^{\prime}\subset\mathbb{K}$. Hence there is a similar
upper bound depending only on $\mathbb{K}$ for an arbitrary Pisot number
$\lambda\in\mathbb{K}$ (with any degree).
## 4\. Primitive matrices
In this section, we use Theorem 1.3 to derive an upper bound for the
Perron–Frobenius degree of a Perron number $\lambda$. A useful observation is
that if there exists an irreducible matrix $A$ with spectral radius
$\lambda-1$, then $\lambda$ is the spectral radius of the _primitive_ matrix
$I+A$. This is because $I+A$ is irreducible and has positive trace; hence it
is primitive.
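The observation above can be illustrated numerically (a hedged sketch with a hypothetical example matrix; `is_primitive` is an ad hoc helper, not a standard library routine):

```python
import numpy as np

# A 3-cycle permutation matrix: irreducible but NOT primitive (period 3).
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

def is_primitive(M, max_power=20):
    """Ad hoc check: M is primitive iff some power of M is entrywise positive."""
    P = np.eye(M.shape[0])
    for _ in range(max_power):
        P = P @ M
        if np.all(P > 0):
            return True
    return False

print(is_primitive(A))               # False: powers of the cycle stay sparse
print(is_primitive(np.eye(3) + A))   # True: (I + A)^2 is already positive

# spectral radii shift accordingly: rho(I + A) = 1 + rho(A) = 2
print(max(abs(np.linalg.eigvals(np.eye(3) + A))))
```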
###### Theorem 1.6.
Let $\mathbb{\lambda}$ be a Perron number of degree $d\geq 3$ and spectral
ratio $\rho$. Set $\mathbb{K}:=\mathbb{Q}(\lambda)$. Denote the bound from
Theorem 1.3 by $B(\mathbb{K},\rho)$, and let $\kappa(\mathbb{K},\rho)$ be as
in Notation 1.5. The Perron–Frobenius degree of $\lambda$ is bounded above by
$\max\\{2^{d^{2}}B(\mathbb{K},\rho),\kappa(\mathbb{K},\rho)\\}.$
###### Proof.
We may assume that $\lambda\geq M=1+\frac{4}{1-\rho}$. First we show that
$\lambda-1$ is Perron. The Galois conjugates of $\lambda-1$ are equal to
$\lambda_{i}-1$. We have
$\displaystyle\frac{|\lambda_{i}-1|}{\lambda-1}$
$\displaystyle\leq\frac{|\lambda_{i}|+1}{\lambda-1}\leq\frac{\rho\lambda+1}{\lambda-1}=\rho+\frac{\rho+1}{\lambda-1}\leq$
$\displaystyle\rho+\frac{2}{\lambda-1}\leq\rho+\frac{1-\rho}{2}<1,$
where we used the assumptions $\rho<1$ and $\lambda\geq 1+\frac{4}{1-\rho}$.
Denote the spectral ratio for $\lambda-1$ by $\mu$. Note that
(26) $\displaystyle 1-\mu\geq\frac{1-\rho}{2}.$
In fact, we just showed that for every $i$
$\displaystyle\frac{|\lambda_{i}-1|}{\lambda-1}\leq\rho+\frac{1-\rho}{2},$
hence
$\displaystyle 1-\frac{|\lambda_{i}-1|}{\lambda-1}\geq
1-(\rho+\frac{1-\rho}{2})=\frac{1-\rho}{2}.$
Inequality (26) in particular implies that
(27) $\displaystyle B(\mathbb{K},\mu)\leq 2^{d^{2}}B(\mathbb{K},\rho).$
Now $\lambda-1$ is a Perron number of spectral ratio $\mu$ and it generates
the same number field $\mathbb{K}$ as $\lambda$. By Theorem 1.3, there is an
integral irreducible matrix $A$ with spectral radius $\lambda-1$ and dimension
at most $B(\mathbb{K},\mu)$. Then $I+A$ is a primitive matrix of the same
dimension and with spectral radius $\lambda$.
∎
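The key inequalities of the proof can be checked numerically on a sample Perron number (a hedged illustration only; the cubic $x^{3}-10x^{2}+1$ is a hypothetical example chosen so that the hypothesis $\lambda\geq 1+\frac{4}{1-\rho}$ holds):

```python
import numpy as np

roots = sorted(np.roots([1, -10, 0, 1]), key=abs, reverse=True)
lam, conjs = roots[0].real, roots[1:]            # Perron root and its conjugates

rho = max(abs(z) for z in conjs) / lam           # spectral ratio of lambda
assert lam >= 1 + 4 / (1 - rho)                  # the hypothesis lambda >= M

mu = max(abs(z - 1) for z in conjs) / (lam - 1)  # spectral ratio of lambda - 1
print(mu < 1)                                    # True: lambda - 1 is Perron
print(1 - mu >= (1 - rho) / 2)                   # True: inequality (26)
```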
## 5\. Questions
In Theorem 1.3, for technical reasons, we constructed an integral
_irreducible_ matrix with spectral radius equal to a given Perron number,
although it would have been more natural to construct an integral _primitive_
matrix instead. This motivates the following:
###### Question 5.1.
1. (1)
Are there upper bounds for $d_{PF}(\lambda)$ in terms of
$d_{PF}^{irr}(\lambda)$?
2. (2)
Does $d_{PF}(\lambda)=d_{PF}^{irr}(\lambda)$ always hold?
The lower bound for the Perron–Frobenius degree in [Yaz21] is in terms of the
two largest Galois conjugates in the complex plane. Therefore, the following
is a natural question.
###### Question 5.2.
Let $d$ and $\rho$ denote, respectively, the algebraic degree and the spectral
ratio. Is there an upper bound $B=B(d,\rho)$ for $d_{PF}^{irr}(\lambda)$
(respectively $d_{PF}(\lambda)$) where $\lambda$ is an arbitrary Perron
number?
We expect the above question to have a negative answer, meaning that in
Theorem 1.3 the arithmetic information on $\mathcal{O}_{\mathbb{K}}$ cannot be
overlooked. A unit algebraic integer $\lambda$ is _bi-Perron_ if all other
Galois conjugates of $\lambda$ lie in the annulus
$\\{z\in\mathbb{C}\,:\,\lambda^{-1}<|z|<\lambda\\}$ except possibly for
$\pm\lambda^{-1}$. See [McM14].
###### Question 5.3.
How can the bound in Theorem 1.3 be improved for special classes of algebraic
integers such as totally real Perron numbers, Pisot numbers, Salem numbers, or
bi-Perron numbers?
In particular, we ask the following about Pisot numbers.
###### Question 5.4.
Let $d$ denote the algebraic degree. Is there an upper bound $B=B(d)$ for
$d_{PF}^{irr}(\lambda)$ (respectively $d_{PF}(\lambda)$) where $\lambda$ is an
arbitrary Pisot number?
## References
* [Ban93] Wojciech Banaszczyk. New bounds in some transference theorems in the geometry of numbers. Mathematische Annalen, 296(1):625–635, 1993.
* [Bay06] Eva Bayer Fluckiger. Upper bounds for Euclidean minima of algebraic number fields. Journal of Number Theory, 121(2):305–323, 2006.
* [BH91] Mike Boyle and David Handelman. The spectra of nonnegative matrices via symbolic dynamics. Annals of Mathematics, pages 249–316, 1991.
* [BL02] Mike Boyle and Douglas Lind. Small polynomial matrix presentations of non-negative matrices. Linear algebra and its applications, 355:49–70, 2002.
* [Jar14] Frazer Jarvis. Algebraic number theory. Springer, 2014.
* [KOR00] Ki Kim, Nicholas Ormes, and Fred Roush. The spectra of nonnegative integer matrices via formal power series. Journal of the American Mathematical Society, 13(4):773–806, 2000.
* [KR16] Leetika Kathuria and Madhu Raka. On conjectures of Minkowski and Woods for $n=9$. Proceedings-Mathematical Sciences, 126(4):501–548, 2016.
* [KR20] Leetika Kathuria and Madhu Raka. On conjectures of Minkowski and Woods for $n=10$. arXiv preprint arXiv:2009.09992, 2020.
* [Lin84] Douglas A Lind. The entropies of topological Markov shifts and a related class of algebraic integers. Ergodic Theory and Dynamical Systems, 4(2):283–300, 1984.
* [LM21] Douglas Lind and Brian Marcus. An Introduction to Symbolic Dynamics and Coding. Cambridge Mathematical Library. Cambridge University Press, 2 edition, 2021.
* [McM05] Curtis McMullen. Minkowski’s conjecture, well-rounded lattices and topological dimension. Journal of the American Mathematical Society, 18(3):711–734, 2005.
* [McM14] Curtis McMullen. Slides for Dynamics and algebraic integers: Perspectives on Thurston’s last theorem. http://www.math.harvard.edu/~ctm/expositions/home/text/talks/cornell/2014/slides/slides.pdf, 2014.
* [Mey72] Yves Meyer. Algebraic Numbers and Harmonic Analysis, volume 2. Elsevier, 1972.
* [Pis38] Charles Pisot. La répartition modulo 1 et les nombres algébriques. Annali della Scuola Normale Superiore di Pisa-Classe di Scienze, 7(3-4):205–248, 1938.
* [RSW17] Oded Regev, Uri Shapira, and Barak Weiss. Counterexamples to a conjecture of Woods. Duke Math. J., 166(13):2443–2446, 2017.
* [Sal44] Raphael Salem. A remarkable class of algebraic integers. Proof of a conjecture of Vijayaraghavan. Duke Mathematical Journal, 11(1):103–108, 1944.
* [Sch13] Rolf Schneider. Convex Bodies: The Brunn–Minkowski Theory. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 2 edition, 2013.
* [Sie44] Carl Ludwig Siegel. Algebraic integers whose conjugates lie in the unit circle. Duke Mathematical Journal, 11(3):597–602, 1944.
* [Thu89] William P Thurston. Groups, tilings and finite state automata. In AMS Colloq. Lectures, 1989.
* [Thu14] William Thurston. Entropy in dimension one. Frontiers in Complex Dynamics: In Celebration of John Milnor’s 80th Birthday, 2014.
* [Yaz21] Mehdi Yazdi. Lower bound for the Perron–Frobenius degrees of Perron numbers. Ergodic Theory and Dynamical Systems, 41(4):1264–1280, 2021.
###### Abstract
We address the applicability of quantum key distribution with continuous-
variable coherent and squeezed states over long-distance satellite-based
links, considering low Earth orbits and taking into account strong varying
channel attenuation, atmospheric turbulence and finite data ensemble size
effects. We obtain tight security bounds on the untrusted excess noise on the
channel output, which suggest that substantial efforts aimed at setup
stabilization and reduction of noise and loss are required, or the protocols
can be realistically implemented over satellite links once either individual
or passive collective attacks are assumed. Furthermore, splitting the
satellite pass into discrete segments and extracting the key from each rather
than from the overall single pass allows to effectively improve robustness
against the untrusted channel noise and establish a secure key under active
collective attacks. We show that feasible amounts of optimized signal
squeezing can substantially improve the applicability of the protocols
allowing for lower system clock rates and aperture sizes and resulting in
higher robustness against channel attenuation and noise compared to the
coherent-state protocol.
###### keywords:
quantum cryptography; quantum optics; quantum key distribution; continuous
variables; coherent states; squeezed states; satellite; low Earth orbit
10.3390/e23010055; Received: 13 December 2020; Accepted: 29 December 2020; Published: 31 December 2020
Applicability of squeezed- and coherent-state continuous-variable quantum key distribution over satellite links
Ivan Derkach † and Vladyslav C. Usenko †. Correspondence: <EMAIL_ADDRESS>† These authors contributed equally to this work.
PACS: 03.67.Hk, 03.67.Dd, 84.40.Ua
## 1 Introduction
Quantum key distribution (QKD) Diamanti et al. (2016); Xu et al. (2020);
Pirandola et al. (2020) aims at developing methods (protocols) for sharing a
secret key between legitimate users, who can later use the key for confidential
information transfer. Having started with discrete-variable protocols based on
direct detection of single-photon states (and their emulation using weak
coherent pulses or entangled photon pairs Gisin et al. (2002)), QKD was later
extended to the realm of continuous variables (CV) Braunstein and Van Loock
(2005), based on efficient and low-noise homodyne detection of multiphoton
coherent or squeezed states of light.
One of the important applications of QKD is in the extra-terrestrial channels,
which potentially allow extremely long-distance secure communication enabled
by QKD over a satellite. While discrete-variable protocols were recently
successfully tested over the satellite links Vallone et al. (2015); Liao et
al. (2018); Villar et al. (2020); Yin et al. (2020), the applicability of
satellite-based CV QKD remains less studied. Indeed, it has so far been
considered in the asymptotic regime of infinitely many quantum states
Hosseinidehaj and Malaney (2016); Hosseinidehaj et al. (2019), which, however,
is never the case in practice. Moreover, CV QKD may have an important practical advantage in
free-space applications and particularly in the satellite-based channels
because a homodyne detector, in which the signal is coupled to a narrow-band
local oscillator (bright coherent beam used as a phase reference),
intrinsically filters out the background radiation at unmatched wavelengths
Elser et al. (2009). Thus CV QKD can operate in conditions of strong stray
light and potentially at daytime, which, in the case of discrete-variable
protocols, would require additional filtering, increasing attenuation and
complexity of the set-up. Recently, the feasibility of coherent-state CV QKD
over satellite links was discussed in Dequal et al. (2020). In the current
work we analyze applicability of CV QKD over satellite-based channels
considering also squeezed signal states. As feasible squeezing of up to 10 dB,
achievable with current technology Andersen et al. (2016), is known to improve
the robustness of CV QKD to noise García-Patrón and Cerf (2009); Madsen et al.
(2012); Usenko and Filip (2011), we confirm its usefulness in satellite-based
links as well. We build the channel model on the assumption of normal
fluctuation of deflected signal beam center around the receiving aperture
center, and study applicability of CV QKD, taking into account the finite data
ensemble size. We show that in this regime the protocols appear to be
extremely sensitive to strong channel attenuation, and large amounts of data
are required for a successful realization of CV QKD over satellites, which
conflicts with the relatively short passage times. Possible solutions to
circumvent the problem are i) the use of squeezed states, which reduce the
requirements on the data ensemble size and tolerate stronger attenuation and
channel noise; ii) relaxation of the security assumptions to individual attacks
or passive eavesdropping introducing no excess noise; iii) increase of the link
transmittance using larger telescopes in the downlink regime; iv) increase of
the repetition rate of the system in order to accumulate larger statistics.
Our results reveal substantial challenges for implementing CV QKD over
satellites, but show no fundamental limits for such realizations. Even
under the strict assumption of collective eavesdropping attacks and untrusted
channel noise, CV QKD protocols using feasible squeezing should be applicable
with low-orbit satellites, while under the assumption of passive eavesdropping
squeezed-state CV QKD can tolerate up to 43 dB of channel attenuation, which
paves the way towards realization over geostationary satellites.
## 2 Security of CV QKD
We address satellite-based implementation of Gaussian CV QKD protocols
Weedbrook et al. (2012) using coherent or squeezed states of light as shown in
Figure 1 (a). We describe the quantum states of light in a given mode of
electromagnetic radiation using two complementary observables, namely
quadratures, being analogues of position and momentum operators of a single
particle, and expressed through mode’s quantum operators as
$\hat{x}=\hat{a}^{\dagger}+\hat{a}$ and
$\hat{p}=i[\hat{a}^{\dagger}-\hat{a}]$. The sender Alice prepares coherent or
squeezed states using respectively a laser source or an optical parametric
oscillator Giordmaine and Miller (1965), and modulates the states by applying
quadrature displacements, governed by independent zero-centered Gaussian
distributions, by using quadrature modulators. This way Alice prepares the
states described by the quadratures $\hat{x}_{A}=\hat{x}_{S}+\hat{x}_{M}$ and
$\hat{p}_{A}=\hat{p}_{S}+\hat{p}_{M}$, where $\hat{x}_{S}$ and $\hat{p}_{S}$
with variances $Var(\hat{x}_{S})=V_{S}$ and $Var(\hat{p}_{S})=1/V_{S}$ (we
define variance of an operator $\hat{r}$ with zero mean value as
$Var(\hat{r})=\langle\hat{r}^{2}\rangle$) are the quadrature values of the
signal (so that either $V_{S}=1$ for coherent states or with no loss of
generality we assume $x-$quadrature squeezed states with $V_{S}<1$).
$\hat{x}_{M}$ and $\hat{p}_{M}$ with $Var(\hat{x}_{M})=Var(\hat{p}_{M})=V_{M}$
are the displacements known to Alice, which constitute her classical data
contributing to the final secret key. The signal then travels through a
generally noisy and lossy quantum channel, which can be optimally Navascués et
al. (2006); Garcia-Patron and Cerf (2006) represented as a Gaussian channel
resulting in the output state described by the quadratures
$\hat{x}_{B}=\sqrt{\eta}\hat{x}_{A}+\sqrt{1-\eta}\hat{x}_{0}+\hat{x}_{N}$,
where $\eta$ is the channel transmittance, $\hat{x}_{0}$, with
$Var(\hat{x}_{0})=1$, is the vacuum noise associated with the channel
attenuation, and $\hat{x}_{N}$ is the contribution from the excess noise on the
channel output with $Var(\hat{x}_{N})=\epsilon$; the same $\eta$ and $\epsilon$
apply to the $p$-quadrature, as the free-space channel is typically
phase-insensitive. Trusted parties use a beacon laser for signal acquisition,
tracking and pointing, as well as for timing synchronization Liao et
al. (2017). The receiving side Bob is performing homodyne detection on the
incoming mode by measuring $\hat{x}_{B}$ or $\hat{p}_{B}$ (bases have to be
randomly switched between in order to fully characterize channel loss and
excess noise). Preferably, such a measurement is made using a locally generated
local oscillator Soh et al. (2015); Qi et al. (2015), which, despite an
additional noise contribution from signal wavefront aberration Kish et al.
(2020) and phase noise due to relative phase drift Marie and Alléaume (2017);
Kleis et al. (2017); Laudenbach et al. (2019), allows one to avoid
phase-reference pulse attenuation in a strongly lossy satellite link and
provides enhanced security of the protocol by ruling out attacks on the local
oscillator. After
accumulating a certain amount of data points $N$ from state preparation and
measurement, parties use $N-n$ of these to estimate the channel parameters
$\eta$ and $\epsilon$, from which the security of the protocol can be assessed
as described below. The remaining $n$ points are processed using error
correction and privacy amplification algorithms Gisin et al. (2002) in order
to obtain the resulting provably secure key which can then be used for
classical encryption. The ratio $n/N$ can be optimized Ruppert et al. (2014),
here for simplicity we assume it to be $1/2$ (which is close to optimal except
for the low repetition rates) and resulting estimates to be perfectly accurate
i.e. with infinitesimal confidence intervals (or already being pessimistic
lower bounds complying with a given probability of failure of the channel
estimation procedure). We assume that the remote trusted side (Bob) is the
reference side for the error correction algorithms, thus using so-called
reverse reconciliation, which was shown robust against any level of pure
channel loss Grosshans et al. (2003) and is applicable in the strongly
attenuating links with $\eta\ll 1$.
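The prepare-and-measure model and channel-estimation step described above can be sketched in a short Monte Carlo simulation (a hedged illustration; all parameter values, including the excess noise, are arbitrary and larger than the realistic values discussed later, chosen only to make the estimation visible):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (hypothetical) parameters -- not the paper's values:
V_S, V_M = 0.5, 10.0    # squeezed signal variance and modulation variance (SNU)
eta, eps = 0.01, 0.05   # channel transmittance and output excess noise
N = 500_000             # number of simulated channel uses

x_M = rng.normal(0, np.sqrt(V_M), N)        # Alice's modulation data
x_S = rng.normal(0, np.sqrt(V_S), N)        # squeezed-state quadrature noise
x_0 = rng.normal(0, 1.0, N)                 # vacuum noise from attenuation
x_N = rng.normal(0, np.sqrt(eps), N)        # excess noise at the channel output
x_B = np.sqrt(eta) * (x_M + x_S) + np.sqrt(1 - eta) * x_0 + x_N

# Channel estimation from correlations, as the trusted parties would do:
eta_hat = (np.mean(x_M * x_B) / V_M) ** 2   # from C_AB = sqrt(eta) * V_M
eps_hat = np.var(x_B) - eta_hat * (V_S + V_M) - (1 - eta_hat)
print(f"estimated eta = {eta_hat:.4f}, estimated eps = {eps_hat:.4f}")
```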
Figure 1: a) CV QKD scheme based on a signal state preparation using a Source
(laser or optical parametric oscillator for preparation of coherent or
squeezed states respectively) and a quadrature Modulator (driven by the data
$x_{M},p_{M}$) on the side of Alice and on a homodyne detection on the side of
Bob, resulting in measurement outcomes $x_{B}$ or $p_{B}$ after the
propagation through an untrusted quantum channel with transmittance $\eta$ and
excess noise $\epsilon$ related to the channel output; b) Theoretical
purification scheme of the state preparation on the side of Alice based on
generation of two oppositely squeezed states with variances (3) on optical
parametric oscillators (squeezers) $S_{1}$ and $S_{2}$, coupling squeezed
states on a balanced beamsplitter, and local homodyne measurement on the
Alice’s side resulting in $x_{A}$ or $p_{A}$ on one of the modes (equivalent
to modulation of squeezed states; in the case of coherent-state protocol, a
heterodyne detection, resulting in $x_{A}$ and $p_{A}$, is considered at
Alice’s side), while the other mode is sent to the channel towards Bob, the
rest of the scheme is as in a).
Security of CV QKD has been established against two main types of attacks:
individual, when an eavesdropper couples optimal probe states to the signal
states and then measures the probes individually, and collective, when the
probes are assumed to be stored in a quantum memory after the interaction and
then optimally measured collectively, which increases the amount of
information that can potentially be obtained by an eavesdropper Diamanti and
Leverrier (2015). Security of the coherent-state protocol with heterodyne
detection against collective attacks can be extended to security against
general attacks using the de Finetti reduction Leverrier (2017); similar
extensions for the squeezed-state protocol with homodyne detection are more
demanding Furrer et al. (2012); Leverrier et al. (2013).
We study the security of the above described protocol by evaluating the lower
bound on the secure key rate per channel use, which in the finite-size regime,
in the reverse reconciliation scenario, and first assuming collective attacks
performed by an eavesdropper, reads
$K=\text{max}\left\\{0,\frac{n}{N}\Big{[}\beta
I_{AB}-\chi_{BE}-\delta(n)\Big{]}\right\\},$ (1)
where $\beta\in[0,1]$ is post-processing efficiency representing how close the
trusted parties are able to reach the Shannon mutual information $I_{AB}$
using realistic error correction codes, $\chi_{BE}$ is the Holevo bound Holevo
and Werner (2001), giving the upper bound on an eavesdropper’s information on
the measurement results at the remote receiving side, Bob, and
$\delta(n)\approx 7\sqrt{\frac{\log_{2}{(2/\bar{\epsilon})}}{n}}$ is the
correction parameter related to the finite-size effects Leverrier et al.
(2010), where $\bar{\epsilon}$ is the smoothing parameter that contributes to
the overall failure probability of the protocol, and is set further to
$10^{-10}$.
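As a quick arithmetic illustration of the finite-size correction (a hedged sketch using block sizes of the order discussed later, $N=10^{11}$ and $n=N/2$):

```python
import math

N = 1e11          # data points over one satellite pass (GHz rate assumed)
n = N / 2         # half of the data contributes to the key
eps_bar = 1e-10   # smoothing parameter
delta = 7 * math.sqrt(math.log2(2 / eps_bar) / n)
print(f"delta(n) = {delta:.2e} bits per channel use")  # ~1.8e-4
```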
The mutual information between the trusted parties
$I_{AB}=1/2\log_{2}{V_{B}/V_{B|A}}$ for the Gaussian-distributed data can be
expressed through variances and correlations between the measurement outcomes,
i.e., through the variance of Bob’s measurement $V_{B}=\eta
V+(1-\eta)+\epsilon$, where $V\equiv V_{S}+V_{M}$, and conditional variance
$V_{B|A}=V_{B}-C_{AB}^{2}/V_{A}$, $V_{A}\equiv V_{M}$ is the variance of
Alice’s data and $C_{AB}=\sqrt{\eta}V_{M}$ is the correlation between
modulation data and measurements on $\hat{x_{B}}$, for the zero-mean-
distributed observables obtained as
$C_{AB}=\langle\hat{x}_{M}\hat{x}_{B}\rangle$. The resulting expression for
the mutual information then reads
$I_{AB}=\frac{1}{2}\log_{2}{\Big{[}1+\frac{\eta V_{M}}{\eta
V_{S}+1-\eta+\epsilon}\Big{]}}$ (2)
and is essentially determined by the signal modulation variance $V_{M}$,
signal state variance $V_{S}$, channel transmittance $\eta$, and channel noise
$\epsilon$ related to the channel output.
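Equation (2) is straightforward to evaluate; the following hedged sketch compares illustrative coherent- and squeezed-state settings at 30 dB of loss (parameter values are arbitrary, not the optimized ones):

```python
import math

def mutual_information(V_M, V_S, eta, eps):
    """Shannon mutual information I_AB of Eq. (2), in bits per channel use."""
    return 0.5 * math.log2(1 + eta * V_M / (eta * V_S + 1 - eta + eps))

eta = 10 ** (-30 / 10)   # 30 dB of channel attenuation
i_coh = mutual_information(V_M=10, V_S=1.0, eta=eta, eps=1e-4)  # coherent states
i_sq = mutual_information(V_M=10, V_S=0.1, eta=eta, eps=1e-4)   # squeezed states
print(i_coh, i_sq)  # the squeezed case is marginally larger, as eta*V_S shrinks
```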
Evaluation of the Holevo bound on the other hand is more involved. It is based
on the assumption that Eve holds purification of the noise added in the
channel (due to losses and excess noise) and relies on the evaluation of von
Neumann entropies of the state shared between Alice and Bob Usenko and Filip
(2016). This is performed in the equivalent entanglement-based representation,
when state preparation is purified using two-mode entangled state. In the case
of coherent-state protocol a symmetrical two-mode squeezed vacuum (TMSV) Ou et
al. (1992) state with variance $V=1+V_{M}$ is used for purification, such that
Alice is measuring one of the modes using a heterodyne (balanced homodyne)
detector Usenko and Filip (2016). For a general state preparation in the
squeezed-state protocol, assuming independent levels of signal squeezing and
modulation variance (contrary to the standard symmetrically modulated
protocol, where squeezing and modulation are essentially related as
$V_{M}=1/V_{S}-V_{S}$ Cerf et al. (2001)), we use the generalized
entanglement-based scheme using an asymmetrical entangled state instead of
TMSV and a homodyne detection on the local mode Usenko and Filip (2011), as
shown in Figure 1 (b). The preparation in this case is equivalent to the
prepare-and-measure scheme provided the asymmetrical state is constructed of
the oppositely squeezed states with variances in $x$-quadratures being
$\displaystyle V_{1}=V_{S}+V_{M}-\sqrt{V_{M}(V_{S}+V_{M})},\hskip
28.45274ptV_{2}=1/\big{[}V_{S}+V_{M}+\sqrt{V_{M}(V_{S}+V_{M})}\big{]},$ (3)
while having the opposite variances ($1/V_{1}$ and $1/V_{2}$ respectively) in
the $p$-quadratures.
In the purification-based scenario, the Holevo bound is evaluated as
$\chi_{BE}=S(AB)-S(A|B)$, where $S(\cdot)$ denotes the von Neumann (quantum)
entropy of a state, $S(AB)$ is the quantum entropy of a (generally noisy)
state shared between the trusted parties and $S(A|B)=S(A|x_{B})$ is the von
Neumann entropy of the state of the trusted parties, conditioned by Bob’s
measurement results in $x$-quadrature. We obtain the relevant von Neumann
entropies from bosonic entropic functions of symplectic eigenvalues of
respective covariance matrices of the states, shared between Alice and Bob
(see Usenko and Filip (2016) for details of security analysis techniques in CV
QKD).
In the case of individual attacks and reverse reconciliation scenario, the
upper bound on the information leakage is reduced to the classical (Shannon)
information between Bob and Eve, which, similarly to $I_{AB}$ (2) described
above, reads $I_{BE}=(1/2)\log_{2}{(V_{B}/V_{B|E})}$. The evaluation of
$V_{B|E}$ is done in the assumption that Eve is able to purify the channel
noise. The optimal individual attack in this case is the entangling cloner
attack Grosshans et al. (2003), which is a TMSV state of variance
$V_{N}=1+\frac{\epsilon}{1-\eta}$ set so to emulate the channel loss $\eta$
and noise $\epsilon$. One of the modes of the cloner interacts with the signal
with a linear coupling $\eta$, corresponding to the channel loss, and
resulting in $V_{B}=\eta V+(1-\eta)V_{N}$ (which gives exactly $V_{B}$ as
described above and as expected by the trusted parties), while the other mode
is measured by Eve in order to reduce her uncertainty on the quantum noise
added in the channel. The mutual information between Bob and Eve then reads
Grosshans et al. (2003); Garcia-Patron Sanchez (2007)
$I_{BE}=\frac{1}{2}\log_{2}\left\\{\frac{1}{V}[1+\epsilon+\eta(V-1)][V(\epsilon+1)-\eta(V-1)]\right\\},$
(4)
which gives the bounds on the secure key rate in the case of individual
eavesdropping attacks, similarly to (1).
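A hedged numerical sketch of the individual-attack bound: the code below combines the mutual information (2), the entangling-cloner bound written as $I_{BE}=\frac{1}{2}\log_{2}(V_{B}/V_{B|E})$ with $V_{B|E}=(\eta/V+1-\eta+\epsilon)^{-1}$, and the finite-size correction $\delta(n)$ from (1). All parameter values are illustrative rather than optimized, and applying the same cloner bound to both signal-state choices is a simplifying assumption:

```python
import math

def key_rate_individual(V_M, V_S, eta, eps, N, beta=0.95, eps_bar=1e-10):
    """Sketch of the key-rate lower bound under individual attacks."""
    n = N / 2
    V = V_S + V_M
    I_AB = 0.5 * math.log2(1 + eta * V_M / (eta * V_S + 1 - eta + eps))
    V_B = eta * V + 1 - eta + eps                 # Bob's measured variance
    I_BE = 0.5 * math.log2(V_B * (eta / V + 1 - eta + eps))  # log2(V_B/V_B|E)/2
    delta = 7 * math.sqrt(math.log2(2 / eps_bar) / n)
    return max(0.0, (n / N) * (beta * I_AB - I_BE - delta))

eta = 10 ** (-30 / 10)   # 30 dB of channel attenuation
k_coh = key_rate_individual(V_M=3, V_S=1.0, eta=eta, eps=1e-4, N=1e11)
k_sq = key_rate_individual(V_M=3, V_S=0.1, eta=eta, eps=1e-4, N=1e11)
print(k_coh, k_sq)   # both positive; the squeezed-state rate is larger
```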
## 3 CV QKD over satellite channels
The main challenge in the satellite-based communication, and particularly the
quantum one, is the extremely strong attenuation levels, which are much higher
than the typical loss in the terrestrial fiber and free-space links in which
QKD was mostly tested.
The total level of signal loss in a satellite link can widely vary and depends
on the type of satellite and technical specifications of the channel
realization. Indeed, in the recent experiment with measurement of the quantum-
limited signal from a geostationary satellite using the homodyne detection,
the total loss of 69 dB was observed Günthner et al. (2017) with an aperture
of 27 cm. The loss can be reduced to 55 dB once a bigger aperture of 1.5 m is
used. Alternatively, the channel loss from a low Earth orbit (LEO) satellite
can be substantially smaller and as low as 31 dB for an Alphasat-like
satellite at a distance of 500 km. The loss can be reduced to about 20 dB by
using larger receiving apertures Günthner et al. (2017). It is therefore
important to assess the applicability of CV QKD in various scenarios, mainly
resulting in different optical link attenuation levels.
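The attenuation levels quoted above translate into channel transmittances via $\eta=10^{-L_{\mathrm{dB}}/10}$; for instance:

```python
# Link budget in dB vs. channel transmittance eta = 10**(-dB/10):
etas = {db: 10 ** (-db / 10) for db in (20, 31, 55, 69)}
for db, eta in etas.items():
    print(f"{db} dB -> eta = {eta:.3e}")   # e.g. 31 dB -> 7.943e-04
```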
### 3.1 Quantum channel and protocol parameters
The protocols are essentially influenced by the excess noise on the channel
output, $\epsilon$, further fixed to $10^{-4}$ shot-noise units (SNU), which
are the vacuum quadrature fluctuations. This complies with the experiment in
the 100-km optical fiber with a total attenuation of 20 dB, where the noise
at the channel input was estimated as $3\%$ SNU Huang et al. (2016), and with
the recent experiment in a 300-km-long low-loss fiber with a total
attenuation of 32 dB, where the noise at the channel input was estimated in
the worst case as $3.8\%$ SNU Zhang et al. (2020). Note that the excess noise
is mainly attributable to imperfect parameter estimation from the homodyne data
on the receiving side of the protocol (even if channel noise is physically
absent, the pessimistic assumption on the noise level, related to the
estimation error, results in an effectively non-zero level of channel noise in
order to comply with the required probability of failure of the channel
estimation procedures Ruppert et al. (2014)), and substantially depends on the
stability of the set-up. It is essential that we fix the noise at the channel
output, contrary to the standard approach in CV QKD, in which the noise was
fixed relative to the channel input and then scaled by the channel attenuation,
making the assessment of channel noise in long-distance channels too
optimistic. Alternatively, and taking into account the fact that the excess
noise atop of the calibrated electronic noise of the detector appears most
likely due to the imperfect estimation of a noiseless quantum channel, one may
assume passive eavesdropping such that no untrusted channel excess noise is
present. In the case of satellite-based links, where line of sight between the
sender and the receiver suggests the absence of equipment capable of active
eavesdropping, this is a particularly sensible assumption and it was applied
recently for feasibility study of DV QKD over satellite links Vergoossen et
al. (2019). We also take this assumption into account in CV QKD by assuming
the excess noise to be trusted (i.e., being out of control by an eavesdropper)
and including it in the state purification using the scheme similar to the
entangling cloner with a strongly unbalanced coupling to the signal prior to
detection Usenko and Filip (2016).
For the data ensemble size, we rely on the typical passage time of 300 seconds
observed for the Micius quantum satellite Liao et al. (2018). Assuming
repetition rate of a CV QKD system to be of order of GHz, which is challenging
but feasible with the current technology Zhang et al. (2018), we may expect
$N=10^{11}$ data points acquired during a satellite passage, half of which
will then contribute to the key. Lastly, post-processing efficiency is taken
$\beta=0.95$, complying with the currently available algorithms Milicevic et
al. (2018).
The maximum tolerable channel attenuation gives an idea of the protocols'
applicability independently of the aperture setting and random atmospheric
disturbances, i.e., based only on the link optical budget. The results are
summarized in Table 1 for given security assumptions and protocol parameters
(signal states and clock rates).
Attack assumption | Coherent, 100 MHz | Coherent, GHz | Coherent, 10 GHz | Squeezed, 100 MHz | Squeezed, GHz | Squeezed, 10 GHz
---|---|---|---|---|---|---
Active collective attack | 23 dB | 24 dB | 25 dB | 29 dB | 30 dB | 31 dB
Passive collective attack | 27 dB | 32 dB | 37 dB | 33 dB | 37 dB | 42 dB
Active individual attack | 29 dB | 34 dB | 39 dB | 33 dB | 37 dB | 43 dB
Passive individual attack | 29 dB | 34 dB | 39 dB | 33 dB | 38 dB | 43 dB
Table 1: Tolerable levels of channel attenuation (rounded) for various
security assumptions and optimized CV QKD protocol settings. Excess noise on
the channel output is fixed to $\epsilon=10^{-4}$ SNU, and optimal squeezing
is $V_{S}\geq 0.1$ SNU. During a passive attack the excess noise is presumed
to be trusted.
Evidently, in the assumption of passive eavesdropping there is no substantial
difference between collective and individual attacks. On the other hand, when
the channel excess noise is assumed to be untrusted, relaxing the assumptions
on the possible attacks from collective to individual ones can substantially
extend the tolerable loss. Note that the use of squeezed states typically
increases the tolerable channel attenuation by 4–6 dB depending on the attack
assumption (the larger improvement being observed for the stricter collective
attacks). Furthermore, at high repetition rates the squeezed-state protocol
can tolerate from 30 to 44 dB of channel attenuation depending on the attack
assumption, making it potentially feasible in the geostationary scenario.
Performance of CV QKD over short-range (terrestrial) free-space links can be
essentially limited by quantum channel transmittance fluctuations Usenko et
al. (2012); Pirandola (2020), also referred to as fading, which are mainly
caused by the atmospheric turbulence effects of beam wander Vasylyev et al.
(2012), when the beam spot wanders around the receiving aperture. The channel
fading then results in an increase of the channel noise detected at the
receiver Usenko et al. (2012). Even though this effect will be partially compensated
for in the long-distance realization of CV QKD, where beam spot drastically
expands during the propagation, which results in channel stabilization at the
cost of increasing the overall loss Usenko et al. (2018) as well as by active
beam tracking and stabilization systems, it must be taken into account in the
realistic CV QKD security analysis. However, as the residual transmittance
fluctuations due to atmospheric effects are slow (of the order of KHz Ursin et
al. (2007)) compared to high achievable repetition rates of CV QKD systems
(enabled by GHz rates of homodyne detectors Zhang et al. (2018)), the fading
can be further compensated for by properly grouping the data according to
estimated relatively stable transmittance values Ruppert et al. (2019), which
we also apply in our study.
### 3.2 Satellite-to-ground channel model
In our study we consider downlink satellite channels, as the turbulence
effects, being destructive for CV QKD Usenko et al. (2012), are less
pronounced in this regime compared to the uplink scenario, where the signal is
strongly affected by the atmosphere already at the beginning of the channel
Bourgoin et al. (2013). An optical Gaussian beam in the satellite-to-ground link
is influenced by analytically predictable systematic and statistical effects
occurring during the communication window. Aside from losses within the
receiver optical system $\eta_{det}$, due to coupling and detection
inefficiencies, systematic effects also include diffraction- and refraction-induced
losses. The main source of loss is beam-spot broadening caused by
diffraction. The broadening limits the maximal transmission efficiency
$\eta_{0}$, as the finite receiving aperture of size $a$ truncates and measures
only a part of the incoming collimated beam with spot size $W$. The latter is
largely determined as $W=\theta_{d}L(\zeta)$ by the divergence of the beam
$\theta_{d}$ and the channel length $L(\zeta)$. The line-of-sight distance, also
referred to as the slant range, between the observer and the satellite depends on
the exact position of the latter. In the following we assume a perfectly circular
low-Earth orbit within the observer's meridian plane, characterized by the altitude
above the ground $H$, ranging from 200 km to 2000 km. The slant range is
obtained as
$L(\zeta)=\sqrt{H^{2}+2HR_{\oplus}+R_{\oplus}^{2}\cos^{2}\zeta}-R_{\oplus}\cos\zeta,$
(5)
where $R_{\oplus}$ is the Earth radius and $\zeta$ is the zenith angle, i.e. the
angle between the line pointing from the observer in the direction opposite to
gravity and the slant range; see also Figure 2. Additionally, air density and the
optical refractive index change with altitude, causing bending of the beam and
making the overall traveling distance longer than the straight geometrical
slant range. The refraction-induced elongation also depends on the signal
wavelength, the geographical position and altitude of the receiver, as well as
atmospheric conditions (relative humidity, temperature, wind, pressure). We
assume the communication window is established up to the zenith angle
$\zeta_{max}=70^{\circ}$, where the difference between the actual and perceived
slant ranges is small Vasylyev et al. (2019). While such a limitation reduces
the communication window and the size of the data block, it also avoids
contributions from the longest propagation distances and the thickest air mass
Bedington et al. (2017); Lee et al. (2019). The duration of the
communication window, i.e. the total time in view of the satellite $t$, determines
the amount of accumulated data points $N$ for a given source repetition rate.
The time in view is calculated from geometrical considerations and the satellite
orbital velocity, which for a circular orbit in the observer's meridian plane can
be simplified as Larson and Wertz (1999); Lissauer and De Pater (2013):
$t\approx 2\frac{(R_{\oplus}+H)^{3/2}}{\sqrt{G\cdot
M_{\oplus}}}\left(\zeta_{max}-\arcsin{\left[\frac{R_{\oplus}}{R_{\oplus}+H}\sin{\zeta_{max}}\right]}\right),$
(6)
where $G$ is the gravitational constant, and $M_{\oplus}$ is Earth mass.
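Eqs. (5) and (6) are straightforward to evaluate numerically. The sketch below (the constant and function names are ours; standard values are assumed for $G$, $M_{\oplus}$ and $R_{\oplus}$) illustrates both:

```python
import math

R_EARTH = 6.371e6   # Earth radius (m)
M_EARTH = 5.972e24  # Earth mass (kg)
G = 6.674e-11       # gravitational constant (m^3 kg^-1 s^-2)

def slant_range(H, zeta):
    """Eq. (5): line-of-sight distance (m) to a satellite on a circular
    orbit at altitude H (m), seen under zenith angle zeta (rad)."""
    c = math.cos(zeta)
    return math.sqrt(H**2 + 2*H*R_EARTH + R_EARTH**2 * c**2) - R_EARTH * c

def time_in_view(H, zeta_max):
    """Eq. (6): approximate duration (s) of the communication window for
    a circular orbit in the observer's meridian plane."""
    ratio = R_EARTH / (R_EARTH + H)
    return 2 * (R_EARTH + H)**1.5 / math.sqrt(G * M_EARTH) * (
        zeta_max - math.asin(ratio * math.sin(zeta_max)))

# A 500 km LEO pass observed up to zeta_max = 70 degrees: the slant range
# at zenith equals H, and the pass lasts a few hundred seconds.
L_zenith = slant_range(500e3, 0.0)
t_pass = time_in_view(500e3, math.radians(70))
```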
Figure 2: Geometrical representation of the satellite-to-ground communication
scheme. $H$ is the circular orbit altitude; $R_{\oplus}$ is the Earth radius;
$\zeta\in[-\pi/2,\pi/2]$ is the angle between the ray in the meridian plane,
pointing above the observer (Bob), and the direct line connecting Bob and the
satellite (Alice), in practice limited to $\pm\zeta_{max}$; and $L(\zeta)$ is
the slant range.
The volume of air mass is related to the atmospheric extinction ratio
$\eta_{ext}$, which is a measure of the Rayleigh scattering, scattering due to
aerosols, and molecular absorption experienced by the signal beam in the
terrestrial atmosphere Palmer and Davenhall (2001). The value of the extinction
ratio $\eta_{ext,\zeta=0}$ depends on the signal wavelength, air
constituents, temperature and weather conditions, and its change with the increase
of the path length through the atmosphere (in terms of the zenith angle) can be
approximated (for angles up to $\zeta_{max}$, where refraction effects are
small) as Hardie and Hiltner (1962); Tomasi and Petkov (2014):
$\eta_{ext}(\zeta)=\eta_{ext,\zeta=0}^{\sec(\zeta)}.$ (7)
The ratio at zenith can be obtained from the MODTRAN atmospheric transmittance
and radiance model Berk et al. (2014), which for a rural or urban sea-level
mid-latitude location with clear-sky visibility yields $\eta_{ext,\zeta=0}=0.908$.
Aside from the aforementioned effects, refraction and diffraction are also caused
by wind shear and temperature fluctuations, and consequently by spatial and
temporal variations of the refractive index in the channel. Such perturbations
result in scintillation, deviation of the beam spot from the center of the
receiving aperture, and deformation of the beam spot. For satellite links, the
beam-spot radius is always significantly larger than the aperture, i.e.,
$W>a$, which allows us to ignore the deformation of the Gaussian beam profile,
hence making beam wandering the dominant effect governing the fluctuation
statistics of the channel transmittance.
Figure 3: Mean signal loss in the satellite-to-ground channel as a function of
orbit altitude $H$, with receiver aperture radius $a=0.5,\,0.75,\,1$ m (from top
to bottom respectively). The single-pass transmittance distribution is assembled
from simulated individual links at every zenith angle within
$-\zeta_{max}<\zeta<\zeta_{max}$.
The maximal transmission efficiency is reached when the incoming signal is
perfectly aligned with the receiving aperture ($r=0$), and is defined by the
ratio $a/W$ of the aperture and beam-spot sizes as follows:
$\eta_{0}=1-\exp\left[-2\left(\frac{a}{W}\right)^{2}\right].$ (8)
The efficiency $\eta\in[0,\eta_{0}]$ decreases with increasing deflection
distance $r$, with the approximate analytical solution Vasylyev et al. (2012)
being
$\eta=\eta_{0}\exp\left[-\left(\frac{r}{R}\right)^{\lambda}\right],$ (9)
where $\lambda$ and $R$ are shape and scale parameters respectively:
$\lambda=8\left(\frac{a}{W}\right)^{2}\frac{\exp\left[-4\left(\frac{a}{W}\right)^{2}\right]I_{1}\left[4\left(\frac{a}{W}\right)^{2}\right]}{1-\exp\left[-4\left(\frac{a}{W}\right)^{2}\right]I_{0}\left[4\left(\frac{a}{W}\right)^{2}\right]}\left\{\ln\left[\frac{2\eta_{0}}{1-\exp\left[-4\left(\frac{a}{W}\right)^{2}\right]I_{0}\left[4\left(\frac{a}{W}\right)^{2}\right]}\right]\right\}^{-1},$
(10)
$R=a\left\{\ln\left[\frac{2\eta_{0}}{1-\exp\left[-4\left(\frac{a}{W}\right)^{2}\right]I_{0}\left[4\left(\frac{a}{W}\right)^{2}\right]}\right]\right\}^{-1/\lambda},$
(11)
where $I_{n}$ is the $n$-th order modified Bessel function of the first kind. The
position of the deflected beam center is assumed to fluctuate normally around
the aperture center Vasylyev et al. (2018), and is described by the random
transverse vector $\textbf{r}_{0}$:
$\mathcal{P}(\textbf{r}_{0})=\frac{1}{2\pi\sigma^{2}_{BW}}\exp\left[-\frac{\textbf{r}_{0}^{2}}{2\sigma^{2}_{BW}}\right],$
(12)
where the beam-wandering variance is limited by the tracking accuracy and beam
stabilization, $\sigma_{BW}=\theta_{p}L(\zeta)$, and $r=|\textbf{r}_{0}|$.
The analytical form of the probability distribution of atmospheric
transmittance is the log-negative Weibull distribution Vasylyev et al. (2018):
$\mathcal{P}(\eta)=\frac{R^{2}}{\lambda\eta\sigma^{2}_{BW}}\left(\ln\frac{\eta_{0}}{\eta}\right)^{2/\lambda-1}\times\exp\left[-\frac{R^{2}}{2\sigma^{2}_{BW}}\left(\ln\frac{\eta_{0}}{\eta}\right)^{2/\lambda}\right].$
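The beam-wandering model of Eqs. (8)-(12) can also be sampled directly by Monte Carlo. The sketch below uses our own helper names, a hand-rolled power series for the modified Bessel functions, and illustrative geometry values (an assumed $a=0.75$ m aperture, $W=5$ m spot and $\sigma_{BW}=0.6$ m, roughly what $\theta_d=10\,\mu$rad and $\theta_p=1.2\,\mu$rad give at 500 km), not the paper's simulation pipeline:

```python
import math
import random

def bessel_i(n, x, terms=40):
    """Modified Bessel function of the first kind I_n(x), via its power
    series (adequate for the moderate arguments arising here)."""
    return sum((x / 2.0) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def beam_wandering_params(a, W):
    """Eqs. (8), (10), (11): maximal transmittance eta_0 and the shape
    (lambda) and scale (R) parameters of the deflection model."""
    x = 4.0 * (a / W) ** 2
    eta0 = 1.0 - math.exp(-x / 2.0)
    denom = 1.0 - math.exp(-x) * bessel_i(0, x)
    log_term = math.log(2.0 * eta0 / denom)
    lam = 2.0 * x * math.exp(-x) * bessel_i(1, x) / denom / log_term
    R = a * log_term ** (-1.0 / lam)
    return eta0, lam, R

def sample_transmittance(a, W, sigma_bw, rng):
    """Eq. (9): one Monte Carlo draw of the channel transmittance, with
    the beam centre wandering normally (Eq. (12)) with std sigma_bw."""
    eta0, lam, R = beam_wandering_params(a, W)
    r = math.hypot(rng.gauss(0.0, sigma_bw), rng.gauss(0.0, sigma_bw))
    return eta0 * math.exp(-((r / R) ** lam))

rng = random.Random(1)
eta0, lam, R_scale = beam_wandering_params(0.75, 5.0)
samples = [sample_transmittance(0.75, 5.0, 0.6, rng) for _ in range(20000)]
mean_eta = sum(samples) / len(samples)
```

For $a\ll W$ the shape parameter approaches $\lambda\approx 2$, so the transmittance distribution reduces to the log-negative Weibull form quoted above.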
We simulate the values of $\eta$ for each zenith angle
$\zeta\in[-\zeta_{max},\,\zeta_{max}]$ within the communication window with
respect to the clock rate of the CV QKD protocol, and combine it with
systematic receiver loss $\eta_{det}$, and respective extinction ratio
$\eta_{ext}\,(\zeta)$ to obtain an overall mean transmittance
$\langle\eta_{tot}\rangle$ (as well as $\langle\sqrt{\eta_{tot}}\rangle$) of
the satellite pass. Hence the transmittance statistics during an overall
communication window consists of contributions from individual simulated free-
space channels at every permitted zenith angle $\zeta$. Both
$\langle\eta_{tot}\rangle$ and $\langle\sqrt{\eta_{tot}}\rangle$ govern the
evolution of a covariance matrix of the state shared between Alice and Bob
over the fluctuating channel Dong et al. (2010). Note that we evaluate the
mutual information between the trusted parties, $I_{AB}$, from the overall
covariance matrix averaged over the whole transmittance distribution
(similarly to evaluating the Holevo bound), which we observe to be lower than
the average mutual information, hence providing a pessimistic estimate.
The following parameters are used in the simulation: receiver aperture radius
$a=0.5$ m (respective plots shown in green), $0.75$ m (blue), and $1$ m (red);
wavelength $\lambda=1550$ nm; detection efficiency $\eta_{det}=-3$ dB Bedington
et al. (2017); tracking and pointing accuracy $\theta_{p}=1.2\,\mu$rad Liao et
al. (2017); and beam divergence $\theta_{d}=10\,\mu$rad Liao et al. (2018). The
resulting mean transmittance for a single pass at altitude $H$ is shown in
Figure 3.
### 3.3 Security evaluation
Figure 4: (Left) Asymptotic lower bound on the key rate (in bits per channel
use) of the optimized coherent-state (dashed) and squeezed-state CV QKD
protocols (solid) secure against active individual attacks for a single pass
of a LEO satellite with orbit altitude $H$ and receiver aperture radius
$a=0.5,\,0.75m$ (green and blue lines respectively). Channel excess noise at
the output $\epsilon=10^{-4}$ SNU. (Right) The key rate (in bits per channel
use) of optimized coherent-state (dashed) and squeezed-state protocol (solid)
under passive collective attacks for a single pass of a LEO satellite with
orbit altitude $H$, and receiver aperture radius $a=0.5,\,0.75m$ (green and
blue lines respectively). Note that the coherent-state protocol can only be
securely established with the larger aperture $a=0.75$ m. The overall block size
depends on the length of the communication window (given by the orbit altitude
$H$) and the clock rate (from bottom to top) 100 MHz, 1 GHz, and 10 GHz.
Trusted channel noise at the output is $10^{-4}$ SNU. Both squeezing and
modulation variance are optimized (with optimal squeezing limited by a
feasible value as $V_{S}\geq 0.1$) in order to establish a secure key
regardless of the altitude.
Figure 5: Maximal tolerable excess noise in the case of active collective
attacks on the squeezed- or coherent-state protocols (solid and dashed lines
respectively), with receiver aperture size $a=0.5$ m (left) and $0.75$ m (right).
From top to bottom: asymptotic regime, then the finite-size regime with repetition
rate (from bottom to top) 100 MHz, 1 GHz, and 10 GHz. Squeezing $V_{S}\geq 0.1$ and
modulation variance $V_{M}$ are optimized.
Figure 6: Secure key rate without channel subdivision (solid lines) and with
channel subdivision into 3 segments (dashed lines) versus orbit altitude $H$
(km), secure against collective attacks in the finite-size regime with the number
of data points determined by (from darker to lighter shade) repetition rates
100 MHz, 1 GHz, and 10 GHz, for the squeezed-state protocol with optimized
squeezing $V_{S}\geq 0.1$ in the presence of untrusted excess noise
$\epsilon=10^{-4}$ related to the channel output. Aperture size $a=0.5$ m
(left) and $a=0.75$ m (right).
We first assess the security by examining the impact of individual attacks in
the asymptotic regime (see Figure 4, left). In this regime, security can be
established at every LEO satellite altitude, and feasible squeezing $V_{S}\geq
0.1$ (SNU) provides a noticeable rate gain. Note that under individual
attacks, higher levels of squeezing always translate into a higher secure
key rate, which is not the case for collective attacks, where the transmittance
fluctuations limit the applicable values of squeezing Derkach et al. (2020)
and require active squeezing optimization based on the estimated atmospheric
transmittance distribution. This optimization is a crucial step
required for establishing a secure key based on the data generated
from a single pass of the satellite.
The performance of the optimized CV QKD protocol under passive collective
attacks with trusted noise $\epsilon=10^{-4}$ is depicted in Figure 4 (right).
The impact of finite-size effects reduces with an increase of the altitude $H$
as the communication window gets longer and consequently the overall block
size $N$ gets larger. Low-altitude satellite downlinks exhibit less mean
attenuation, but this advantage is partially offset by larger confidence
intervals of the estimated channel parameters and a shorter raw key. While
optimization of the estimation block size $N-n$ can lengthen the key to some
extent, increasing the repetition rate of the system is necessary to greatly
extend the raw key. A higher repetition rate is especially crucial for the
coherent-state protocol, which can be implemented at altitudes above 500 km
only with 10 GHz clock rates. Increasing the size of the receiving aperture also
leads to a significant improvement of the secure key rate.
Mean channel attenuation and fading noise, originating from transmittance
fluctuations, diminish the tolerance to the channel excess noise, as shown in
Figure 5. Evidently, both protocols are extremely sensitive to excess noise
and no secure key can be generated at any orbit altitude if the noise at the
output of the squeezed-state protocol is $\epsilon\geq 2\cdot 10^{-3}$ SNU for
$a=0.5$, or if the noise $\epsilon\geq 5\cdot 10^{-3}$ for $a=0.75$, with
orders of magnitude lower values needed for security of the coherent-state
protocol ($\epsilon\geq 6\cdot 10^{-5}$ at $a=0.5$, or $\epsilon\geq 4\cdot
10^{-4}$ at $a=0.75$). Note that with an increase of the satellite orbit altitude,
the rate at which the protocol loses noise tolerance decreases. As the
optical channel becomes longer it also becomes more stable Usenko et al.
(2012, 2018), i.e. both mean attenuation and fading (viewed as the variance of
transmittance fluctuations
$\langle\eta_{tot}\rangle-\langle\sqrt{\eta_{tot}}\rangle^{2}$) are
simultaneously reduced.
While the effect at LEO altitudes is more apparent for the coherent-state
protocol, the same dependency can be expected for the squeezed-state protocol
at higher altitudes (MEO or GEO). Furthermore, channel stabilization will be
more pronounced with less accurate beam-tracking $\theta_{p}$ which directly
limits the beam-wandering and consequently channel fading.
Clearly, in order to operate under collective attacks with untrusted channel
noise, the noise has to be limited to very low values. This can be achieved by
proper control of the set-up or precise parameter estimation; however, one can
also reduce the amount of fading noise by dividing the overall single-pass
data block into a set of smaller blocks Usenko et al. (2012). Data
clusterization with respect to channel attenuation allows one to compensate for
the effect of channel transmittance fluctuations Ruppert et al. (2019); however,
such post-processing can be demanding for satellite-based QKD and is not
needed for the slow systematic changes of transmittance during the satellite pass.
In the current work we therefore adopt a simpler albeit similar method,
splitting the tracked satellite pass into a set of segments and generating the
key for each one. This yields the following lower bound on the
overall secure key rate as a weighted sum of the key rates from the individual
segments:
$K=\sum_{i}\max\left\{0,\frac{n_{i}}{N}\left[\beta I_{AB}-\chi_{BE}-\delta(n_{i})\right]\right\},$ (13)
where $n_{i}$ is the raw key length for a given segment $i$ containing $N_{i}$
data points, so that $\sum_{i}N_{i}=N$; $N_{i}-n_{i}$ data points
are used for segment channel estimation, and the weight is determined by the
relative size of the segment $n_{i}/N$, with $N$ being the overall block size
for a given satellite pass. The segments are chosen in accordance with the
zenith angle $\zeta$, so that for $i=1,2,3$ we obtain 3 segments each
containing measurement results at respectively
$[-\zeta_{max},-2/3\,\zeta_{max})\cup(2/3\,\zeta_{max},\zeta_{max}]$,
$[-2/3\,\zeta_{max},-1/3\,\zeta_{max})\cup(1/3\,\zeta_{max},2/3\,\zeta_{max}]$,
and $[-1/3\,\zeta_{max},1/3\,\zeta_{max}]$. The finite-size effects are
stronger within each segment, and the transmittance fluctuations, although
reduced, remain substantial. Three segments are already sufficient to attain an
enhanced positive secure key rate and to extend the range of secure altitudes, as
shown in Figure 6. For systems with smaller apertures this implies an effective
increase of the no-tracking zone $\zeta_{max}$, as the segment characterized by
the longest slant ranges $L(\zeta)$ might not contribute to the overall key.
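The weighted sum of Eq. (13) is easy to evaluate once the per-segment quantities are known. The sketch below uses entirely hypothetical segment numbers and an assumed reconciliation efficiency $\beta$ (none of these values are from the paper); the segment with the lowest rate is clipped to zero, as in the equation:

```python
def overall_key_rate(segments, N, beta=0.95):
    """Eq. (13): lower bound on the secure key rate from a segmented
    satellite pass.  Each segment is (n_i, I_AB, chi_BE, delta_i): raw-key
    length, mutual information, Holevo bound and finite-size correction.
    Segments with a negative rate contribute zero."""
    return sum(max(0.0, n_i / N * (beta * I_AB - chi_BE - delta_i))
               for n_i, I_AB, chi_BE, delta_i in segments)

# Three hypothetical segments of a pass with N = 1e9 data points; the
# first (longest slant range, lowest I_AB) is negative and is clipped.
segments = [(2e8, 0.010, 0.012, 1e-3),
            (3e8, 0.050, 0.030, 1e-3),
            (4e8, 0.080, 0.040, 1e-3)]
K = overall_key_rate(segments, 1e9)
```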
## 4 Conclusions and discussion
We studied the applicability of CV QKD over satellite links considering coherent-
and squeezed-state protocols, taking into account a realistic satellite passage,
atmospheric effects, finite data-ensemble size, system clock rate and
data-processing efficiency. We show that the protocols are very sensitive to
channel noise at the respective loss levels, so that either set-up
stabilization resulting in a drastic decrease of noise, or a relaxation of the
security assumptions to individual or passive collective attacks, is required for
implementation over low-Earth-orbit satellites. Satellite-pass
segmentation provides another viable option to reduce channel fading,
improving the secure key rate and allowing one to establish a secure link with
satellites at higher altitudes. We show that the use of squeezing can make the
protocols more applicable in satellite links, allowing for higher
attenuation and noise levels at the given security assumptions, and that squeezing
has to be optimized in the collective-attacks scenario. On the other hand, under
the trusted-noise assumption and at high repetition rates, squeezed-state
CV QKD can tolerate attenuation levels up to 42 dB, which may open the possibility
of use over geostationary satellites. The obtained results are promising for
satellite-based QKD, potentially applicable in daylight conditions.
###### Acknowledgements.
The authors acknowledge support from the project LTC17086 of the
INTER-EXCELLENCE program of the Czech Ministry of Education, project 19-23739S
of the Czech Science Foundation, and the EU H2020 Quantum Flagship initiative
project No. 820466, CiViQ. Conflicts of Interest: The authors declare no
conflict of interest.
## References
* Diamanti et al. (2016) Diamanti, E.; Lo, H.K.; Qi, B.; Yuan, Z. Practical challenges in quantum key distribution. npj Quantum Information 2016, 2, 16025.
* Xu et al. (2020) Xu, F.; Ma, X.; Zhang, Q.; Lo, H.K.; Pan, J.W. Secure quantum key distribution with realistic devices. Reviews of Modern Physics 2020, 92, 025002.
* Pirandola et al. (2020) Pirandola, S.; Andersen, U.L.; Banchi, L.; Berta, M.; Bunandar, D.; Colbeck, R.; Englund, D.; Gehring, T.; Lupo, C.; Ottaviani, C.; others. Advances in quantum cryptography. Advances in Optics and Photonics 2020, 12, 1012–1236.
* Gisin et al. (2002) Gisin, N.; Ribordy, G.; Tittel, W.; Zbinden, H. Quantum cryptography. Reviews of Modern Physics 2002, 74, 145.
* Braunstein and Van Loock (2005) Braunstein, S.L.; Van Loock, P. Quantum information with continuous variables. Reviews of Modern Physics 2005, 77, 513.
* Vallone et al. (2015) Vallone, G.; Bacco, D.; Dequal, D.; Gaiarin, S.; Luceri, V.; Bianco, G.; Villoresi, P. Experimental satellite quantum communications. Physical Review Letters 2015, 115, 040502.
* Liao et al. (2018) Liao, S.K.; Cai, W.Q.; Handsteiner, J.; Liu, B.; Yin, J.; Zhang, L.; Rauch, D.; Fink, M.; Ren, J.G.; Liu, W.Y.; Li, Y.; Shen, Q.; Cao, Y.; Li, F.Z.; Wang, J.F.; Huang, Y.M.; Deng, L.; Xi, T.; Ma, L.; Hu, T.; Li, L.; Liu, N.L.; Koidl, F.; Wang, P.; Chen, Y.A.; Wang, X.B.; Steindorfer, M.; Kirchner, G.; Lu, C.Y.; Shu, R.; Ursin, R.; Scheidl, T.; Peng, C.Z.; Wang, J.Y.; Zeilinger, A.; Pan, J.W. Satellite-Relayed Intercontinental Quantum Network. Physical Review Letters 2018, 120, 030501.
* Villar et al. (2020) Villar, A.; Lohrmann, A.; Bai, X.; Vergoossen, T.; Bedington, R.; Perumangatt, C.; Lim, H.Y.; Islam, T.; Reezwana, A.; Tang, Z.; others. Entanglement demonstration on board a nano-satellite. Optica 2020, 7, 734–737.
* Yin et al. (2020) Yin, J.; Li, Y.H.; Liao, S.K.; Yang, M.; Cao, Y.; Zhang, L.; Ren, J.G.; Cai, W.Q.; Liu, W.Y.; Li, S.L.; others. Entanglement-based secure quantum cryptography over 1,120 kilometres. Nature 2020, 582, 501–505.
* Hosseinidehaj and Malaney (2016) Hosseinidehaj, N.; Malaney, R. CV-QKD with Gaussian and Non-Gaussian Entangled States over Satellite-Based Channels. 2016 IEEE Global Communications Conference (GLOBECOM). IEEE, 2016.
* Hosseinidehaj et al. (2019) Hosseinidehaj, N.; Babar, Z.; Malaney, R.; Ng, S.X.; Hanzo, L. Satellite-Based Continuous-Variable Quantum Communications: State-of-the-Art and a Predictive Outlook. IEEE Communications Surveys Tutorials 2019, 21, 881–919.
* Elser et al. (2009) Elser, D.; Bartley, T.; Heim, B.; Wittmann, C.; Sych, D.; Leuchs, G. Feasibility of free space quantum key distribution with coherent polarization states. New Journal of Physics 2009, 11, 045014.
* Dequal et al. (2020) Dequal, D.; Vidarte, L.T.; Rodriguez, V.R.; Vallone, G.; Villoresi, P.; Leverrier, A.; Diamanti, E. Feasibility of satellite-to-ground continuous-variable quantum key distribution. NPJ Quantum Information 2021, 7, 1.
* Andersen et al. (2016) Andersen, U.L.; Gehring, T.; Marquardt, C.; Leuchs, G. 30 years of squeezed light generation. Physica Scripta 2016, 91, 053001.
* García-Patrón and Cerf (2009) García-Patrón, R.; Cerf, N.J. Continuous-variable quantum key distribution protocols over noisy channels. Physical Review Letters 2009, 102, 130501.
* Madsen et al. (2012) Madsen, L.S.; Usenko, V.C.; Lassen, M.; Filip, R.; Andersen, U.L. Continuous variable quantum key distribution with modulated entangled states. Nature Communications 2012, 3, 1083.
* Usenko and Filip (2011) Usenko, V.C.; Filip, R. Squeezed-state quantum key distribution upon imperfect reconciliation. New Journal of Physics 2011, 13, 113007.
* Weedbrook et al. (2012) Weedbrook, C.; Pirandola, S.; García-Patrón, R.; Cerf, N.J.; Ralph, T.C.; Shapiro, J.H.; Lloyd, S. Gaussian quantum information. Reviews of Modern Physics 2012, 84, 621–669.
* Giordmaine and Miller (1965) Giordmaine, J.A.; Miller, R.C. Tunable Coherent Parametric Oscillation in LiNbO3at Optical Frequencies. Physical Review Letters 1965, 14, 973–976.
* Navascués et al. (2006) Navascués, M.; Grosshans, F.; Acin, A. Optimality of Gaussian attacks in continuous-variable quantum cryptography. Physical Review Letters 2006, 97, 190502.
* Garcia-Patron and Cerf (2006) Garcia-Patron, R.; Cerf, N.J. Unconditional optimality of Gaussian attacks against continuous-variable quantum key distribution. Physical Review Letters 2006, 97, 190503.
* Liao et al. (2017) Liao, S.K.; Cai, W.Q.; Liu, W.Y.; Zhang, L.; Li, Y.; Ren, J.G.; Yin, J.; Shen, Q.; Cao, Y.; Li, Z.P.; others. Satellite-to-ground quantum key distribution. Nature 2017, 549, 43–47.
* Soh et al. (2015) Soh, D.B.; Brif, C.; Coles, P.J.; Lütkenhaus, N.; Camacho, R.M.; Urayama, J.; Sarovar, M. Self-referenced continuous-variable quantum key distribution protocol. Physical Review X 2015, 5, 041010.
* Qi et al. (2015) Qi, B.; Lougovski, P.; Pooser, R.; Grice, W.; Bobrek, M. Generating the local oscillator “locally” in continuous-variable quantum key distribution based on coherent detection. Physical Review X 2015, 5, 041009.
* Kish et al. (2020) Kish, S.; Villaseñor, E.; Malaney, R.; Mudge, K.; Grant, K. Use of a Local Local Oscillator for the Satellite-to-Earth Channel. arXiv preprint arXiv:2010.09399 2020.
* Marie and Alléaume (2017) Marie, A.; Alléaume, R. Self-coherent phase reference sharing for continuous-variable quantum key distribution. Physical Review A 2017, 95, 012316.
* Kleis et al. (2017) Kleis, S.; Rueckmann, M.; Schaeffer, C.G. Continuous variable quantum key distribution with a real local oscillator using simultaneous pilot signals. Optics letters 2017, 42, 1588–1591.
* Laudenbach et al. (2019) Laudenbach, F.; Schrenk, B.; Pacher, C.; Hentschel, M.; Fung, C.H.F.; Karinou, F.; Poppe, A.; Peev, M.; Hübel, H. Pilot-assisted intradyne reception for high-speed continuous-variable quantum key distribution with true local oscillator. Quantum 2019, 3, 193.
* Ruppert et al. (2014) Ruppert, L.; Usenko, V.C.; Filip, R. Long-distance continuous-variable quantum key distribution with efficient channel estimation. Physical Review A 2014, 90.
* Grosshans et al. (2003) Grosshans, F.; Van Assche, G.; Wenger, J.; Brouri, R.; Cerf, N.J.; Grangier, P. Quantum key distribution using Gaussian-modulated coherent states. Nature 2003, 421, 238–241.
* Diamanti and Leverrier (2015) Diamanti, E.; Leverrier, A. Distributing Secret Keys with Quantum Continuous Variables: Principle, Security and Implementations. Entropy 2015, 17, 6072–6092.
* Leverrier (2017) Leverrier, A. Security of continuous-variable quantum key distribution via a Gaussian de Finetti reduction. Physical Review Letters 2017, 118, 200501.
* Furrer et al. (2012) Furrer, F.; Franz, T.; Berta, M.; Leverrier, A.; Scholz, V.B.; Tomamichel, M.; Werner, R.F. Continuous variable quantum key distribution: finite-key analysis of composable security against coherent attacks. Physical review letters 2012, 109, 100502.
* Leverrier et al. (2013) Leverrier, A.; García-Patrón, R.; Renner, R.; Cerf, N.J. Security of continuous-variable quantum key distribution against general attacks. Physical review letters 2013, 110, 030502.
* Holevo and Werner (2001) Holevo, A.S.; Werner, R.F. Evaluating capacities of bosonic Gaussian channels. Physical Review A 2001, 63, 032312.
* Leverrier et al. (2010) Leverrier, A.; Grosshans, F.; Grangier, P. Finite-size analysis of a continuous-variable quantum key distribution. Physical Review A 2010, 81, 062343.
* Usenko and Filip (2016) Usenko, V.; Filip, R. Trusted Noise in Continuous-Variable Quantum Key Distribution: A Threat and a Defense. Entropy 2016, 18, 20.
* Ou et al. (1992) Ou, Z.Y.; Pereira, S.F.; Kimble, H.J.; Peng, K.C. Realization of the Einstein-Podolsky-Rosen paradox for continuous variables. Physical Review Letters 1992, 68, 3663–3666.
* Cerf et al. (2001) Cerf, N.J.; Lévy, M.; Assche, G.V. Quantum distribution of Gaussian keys using squeezed states. Physical Review A 2001, 63, 052311.
* Grosshans et al. (2003) Grosshans, F.; Cerf, N.J.; Wenger, J.; Tualle-Brouri, R.; Grangier, P. Virtual entanglement and reconciliation protocols for quantum cryptography with continuous variables. Quantum Info. Comput. 2003, 3, 535–552.
* Garcia-Patron Sanchez (2007) Garcia-Patron Sanchez, R. Quantum information with optical continuous variables: from Bell tests to key distribution 2007.
* Günthner et al. (2017) Günthner, K.; Khan, I.; Elser, D.; Stiller, B.; Bayraktar, Ö.; Müller, C.R.; Saucke, K.; Tröndle, D.; Heine, F.; Seel, S.; others. Quantum-limited measurements of optical signals from a geostationary satellite. Optica 2017, 4, 611–616.
* Huang et al. (2016) Huang, D.; Huang, P.; Lin, D.; Zeng, G. Long-distance continuous-variable quantum key distribution by controlling excess noise. Scientific Reports 2016, 6, 19201.
* Zhang et al. (2020) Zhang, Y.; Chen, Z.; Pirandola, S.; Wang, X.; Zhou, C.; Chu, B.; Zhao, Y.; Xu, B.; Yu, S.; Guo, H. Long-Distance Continuous-Variable Quantum Key Distribution over 202.81 km of Fiber. Phys. Rev. Lett. 2020, 125, 010502.
* Vergoossen et al. (2019) Vergoossen, T.; Bedington, R.; Grieve, J.A.; Ling, A. Satellite quantum communications when man-in-the-middle attacks are excluded. Entropy 2019, 21, 387.
* Zhang et al. (2018) Zhang, X.; Zhang, Y.; Li, Z.; Yu, S.; Guo, H. 1.2-GHz Balanced Homodyne Detector for Continuous-Variable Quantum Information Technology. IEEE Photonics Journal 2018, 10, 1–10.
* Milicevic et al. (2018) Milicevic, M.; Feng, C.; Zhang, L.M.; Gulak, P.G. Quasi-cyclic multi-edge LDPC codes for long-distance quantum cryptography. NPJ Quantum Information 2018, 4, 1–9.
* Usenko et al. (2012) Usenko, V.C.; Heim, B.; Peuntinger, C.; Wittmann, C.; Marquardt, C.; Leuchs, G.; Filip, R. Entanglement of Gaussian states and the applicability to quantum key distribution over fading channels. New Journal of Physics 2012, 14, 093048.
* Pirandola (2020) Pirandola, S. Limits and Security of Free-Space Quantum Communications. arXiv preprint arXiv:2010.04168 2020.
* Vasylyev et al. (2012) Vasylyev, D.Y.; Semenov, A.; Vogel, W. Toward global quantum communication: beam wandering preserves nonclassicality. Physical review letters 2012, 108, 220501.
* Usenko et al. (2018) Usenko, V.C.; Peuntinger, C.; Heim, B.; Günthner, K.; Derkach, I.; Elser, D.; Marquardt, C.; Filip, R.; Leuchs, G. Stabilization of transmittance fluctuations caused by beam wandering in continuous-variable quantum communication over free-space atmospheric channels. Optics Express 2018, 26, 31106.
* Ursin et al. (2007) Ursin, R.; Tiefenbacher, F.; Schmitt-Manderbach, T.; Weier, H.; Scheidl, T.; Lindenthal, M.; Blauensteiner, B.; Jennewein, T.; Perdigues, J.; Trojek, P.; others. Entanglement-based quantum communication over 144 km. Nature physics 2007, 3, 481–486.
# Multiphase AGN winds from X-ray irradiated disk atmospheres
Tim Waters (Department of Physics & Astronomy, University of Nevada, Las Vegas, 4505 S. Maryland Pkwy, Las Vegas, NV 89154-4002, USA; Theoretical Division, Los Alamos National Laboratory)
Daniel Proga (Department of Physics & Astronomy, University of Nevada, Las Vegas, 4505 S. Maryland Pkwy, Las Vegas, NV 89154-4002, USA)
Randall Dannen (Department of Physics & Astronomy, University of Nevada, Las Vegas, 4505 S. Maryland Pkwy, Las Vegas, NV 89154-4002, USA)
(Received January 2020; Revised April 2020; Accepted April 2020)
###### Abstract
The mechanism of thermal driving for launching mass outflows is interconnected
with classical thermal instability (TI). In a recent paper, we demonstrated
that as a result of this interconnectedness, radial wind solutions of X-ray
heated flows are prone to becoming clumpy. In this paper, we first show that
the Bernoulli function determines whether or not the entropy mode can grow due
to TI in dynamical flows. Based on this finding, we identify a critical
‘unbound’ radius beyond which TI should accompany thermal driving. Our
numerical disk wind simulations support this result and reveal that clumpiness
is a consequence of buoyancy disrupting the stratified structure of steady
state solutions. Namely, instead of a smooth transition layer separating the
highly ionized disk wind from the cold phase atmosphere below, hot bubbles
formed from TI rise up and fragment the atmosphere. These bubbles first appear
within large scale vortices that form below the transition layer, and they
result in the episodic production of distinctive cold phase structures
referred to as irradiated atmospheric fragments (IAFs). Upon interacting with
the wind, IAFs advect outward and develop extended crests. The subsequent
disintegration of the IAFs takes place within a turbulent wake that reaches
high elevations above the disk. We show that these dynamics have the following
observational implications: dips in the absorption measure distribution are no
longer expected within TI zones, and there can be a less sudden desaturation of
X-ray absorption lines such as O VIII Ly$\alpha$, as well as multiple absorption
troughs in Fe XXV K$\alpha$.
galaxies: active - radiation: dynamics
## 1 Introduction
The radiation associated with disk-mode accretion in active galactic nuclei
(AGNs) may power a wide variety of the ionized outflows that have been
observed (e.g., Giustini & Proga, 2019). In order of decreasing kinetic power,
these include extremely high-velocity outflows, broad absorption line quasar
outflows, ultrafast outflows, several other classes of broad and narrow
absorption line winds, X-ray obscurers, and warm absorbers (Rodríguez Hidalgo
et al., 2020; Laha et al., 2020, and references therein). If the radiation
primarily transfers its momentum to the gas, the resulting wind is referred to
as being line-driven, dust-driven, or radiation pressure driven, depending on
whether the dominant source of opacity is, respectively, spectral lines, dust,
or electron scattering. It is common to employ a
simple treatment of the gas thermodynamics (e.g., an isothermal approximation)
when modeling such winds, which precludes the possibility that the gas can
become multiphase due to thermal instability (TI; Field 1965). To explain
X-ray obscurers and warm absorbers, which seem to be multiphase components
within a more highly ionized outflow (e.g., Kaastra et al., 2002, 2014), a
more compelling wind launching mechanism results from radiation primarily
transferring its energy rather than its momentum to the gas, i.e. thermal
driving. The appeal of this explanation is due to the fundamental connection
between thermal driving and TI, our focus in this paper.
This connection was first utilized in a pioneering study by Begelman et al.
(1983; hereafter BMS83) to reveal how the irradiation of optically thin gas
results in a thermally driven wind beyond the Compton radius,
$R_{\rm IC}=\frac{GM_{\rm bh}\bar{m}}{k\,T_{C}}=5.24\times 10^{-2}\,\frac{\mu
M_{7}}{T_{C}/10^{8}{\rm\,K}}\,{\,\rm pc},$ (1)
the distance where the sound speed of gas heated to the Compton temperature
$T_{C}$ exceeds the escape speed from the disk. (Here $M_{\rm bh}$ is the
black hole mass, $M_{7}=M_{\rm bh}/10^{7}{M_{\odot}}$, and $\bar{m}=\mu m_{p}$
is the mean particle mass of the plasma.) Compared to $T_{C}$, the thermalized
gas in the disk itself is much colder at $T\sim 10^{4}\,\rm{K}$. Prior to the
work of BMS83, it had been shown by Krolik et al. (1981) that the process of
heating gas from $\sim 10^{4}\,\rm{K}$ to $T_{C}$ is highly prone to TI. In
particular, Krolik et al. (1981), Kallman & McCray (1982) and Lepp et al.
(1985) calculated a variety of so-called S-curves in the AGN environment,
revealing that they commonly have thermally unstable regions (‘TI zones’) at
temperatures where recombination cooling, bremsstrahlung, and Compton heating
are the dominant heating and cooling processes.
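As a quick numeric sanity check (our own, not from the paper), the prefactor in Eq. (1) can be reproduced directly from CGS constants; for $\mu = M_{7} = T_{C}/10^{8}\,{\rm K} = 1$ it should come out near $5.24\times 10^{-2}$ pc (small deviations reflect the precision of the constants used):

```python
# Numeric check of Eq. (1): R_IC = G * M_bh * mbar / (k * T_C),
# expressed in parsecs. Constants are CODATA-like CGS values.
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
k = 1.3807e-16      # Boltzmann constant [erg K^-1]
m_p = 1.6726e-24    # proton mass [g]
M_sun = 1.989e33    # solar mass [g]
pc = 3.0857e18      # parsec [cm]

def R_IC(mu=1.0, M7=1.0, T_C=1e8):
    """Compton radius in pc for mean particle mass mbar = mu * m_p."""
    return G * (M7 * 1e7 * M_sun) * (mu * m_p) / (k * T_C) / pc

print(R_IC())  # ~0.052 pc, matching the 5.24e-2 prefactor of Eq. (1)
```

For the NGC 5548-like parameters used later in the paper ($M_{7}\approx 5.2$), the same function gives the quoted $R_{\rm IC}\approx 0.2$–$0.3$ pc scale.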
As BMS83 commented, the effective scale height of an irradiated accretion disk
is set by the highest stable temperature on the cold branch of the S-curve (of
order $10^{5}\,{\rm K}$) rather than by the conditions inside the disk. Above
this temperature, the presence of a TI zone leads to runaway heating and an
accompanying steep drop in density, thereby facilitating the heating of
atmospheric gas to Compton temperatures. The magnetohydrodynamical (MHD)
turbulent nature of the disk is not expected to alter this picture because the
Compton radius is at distances where the local dissipation energy (and hence
the magnetic energy) is far less than that due to radiative heating. While
self-gravity and dust physics may be important within the accretion disks at
these radii, they should be dynamically unimportant within the irradiated disk
atmosphere and disk wind, which have comparatively low density and are hotter
than dust sublimation temperatures.
The spectral energy distribution (SED) determines several properties of the
irradiated gas, in particular the Compton temperature and the net radiative
cooling rates relevant for assessing the number and temperature ranges of TI
zones. How the existence and properties of TI zones depend on the underlying
SED was systematically explored in a series of papers by Chakravorty et al.
(2008, 2009, 2012) with further dependencies assessed by Ferland et al.
(2013), Lee et al. (2013) and Różańska et al. (2014). SEDs for individual
systems are inferred by fitting the observed spectra to theoretical models for
the intrinsic components of an AGN spectrum (e.g., Done et al., 2012; Jin et
al., 2012; Ferland et al., 2020). In this work, we present calculations for an
S-curve obtained in this way for the well studied system NGC 5548 (see
Mehdipour et al., 2015). While the velocity, density, and temperature
structure of thermal disk winds can be sensitive to the SED, our main results
regarding TI are expected to hold so long as the S-curve features at least one
TI zone.
SED fitting calculations for NGC 5548 performed by different groups using
photoionization modeling are in agreement that the corresponding S-curves have
prominent TI zones (e.g., Arav et al., 2015; Mehdipour et al., 2015). The SED
found by Mehdipour et al. (2015), which has been implemented into CLOUDY
(Ferland et al., 2017), yields a Compton temperature of $T_{\rm C}\approx
10^{8}\,\rm{K}$. The Compton radius for NGC 5548 is therefore about $R_{\rm
IC}=0.2\,\rm{pc}$, which is within the inferred distance of the warm absorbers
observed in this system (Arav et al., 2015; Ebrero et al., 2016) and other
similar systems like NGC 7469 (e.g., Grafton-Waters et al., 2020). It is this
correspondence that is suggestive of warm absorbers, which are found in more
than half of all nearby AGN (Crenshaw et al., 2003), being probes of the
multiphase gas dynamics taking place within the thermally driven winds
expected at these distances.
In a recent study applying a thermally driven disk wind model to explain the
origin of warm absorbers, Mizumoto et al. (2019) find that steady state
solutions (discussed here in §2) generally have velocities and column
densities in the right range ($300-3000\,{\rm km/s}$ and
$10^{20}-10^{22}\,{\rm cm^{-2}}$) to account for the sample of warm absorbers
collected by Laha et al. (2014). However, the ionization parameters were found
to be systematically too large, so they concluded that there must be an
additional process such as condensation due to TI that can explain the
presence of lower ionization gas at high velocity. Note that most previous
studies considering the role of TI in explaining the warm absorber phenomenon
have done so in the context of a thermal wind being driven from a
geometrically thick disk (i.e., a torus) (e.g., Krolik & Kriss, 2001;
Dorodnitsyn & Kallman, 2017; Kallman & Dorodnitsyn, 2019), not from a
geometrically thin disk.
To further understand the interconnectedness between TI and heating by
irradiation, in recent papers we took a step back from a disk wind geometry by
adopting an even simpler radial wind setup. This allowed us to first perform
controlled studies of global 1D solutions that result from using heating and
cooling processes obtained from photoionization calculations. Specifically, in
Dyda et al. (2017), we investigated the luminosity and SED-dependence of
isotropically irradiated radial winds. We verified that the critical
luminosity identified by BMS83 does indeed determine whether or not a flow
will occupy the S-curve (i.e. reach thermal equilibrium) or evolve to a lower
temperature below the S-curve due to adiabatic cooling (see also Higginbottom
et al., 2018). While this deviation from thermal equilibrium was expected
based on the flow timescales of warm absorbers (Kallman, 2010), it was
important to show that it can be quantified using time-dependent calculations
because many photoionization studies assume that gas only occupies the S-curve
(e.g., Chakravorty et al., 2008, 2009, 2012; Ponti et al., 2012; Ebrero et
al., 2016). In Dannen et al. (2020, D20 hereafter), we followed up this work
with both 1D and 2D radial wind simulations that reveal under what conditions
TI can be triggered in an outflow. We showed that for a small range of the
ratio of the local escape speed to the sound speed, corresponding to parsec-
scale distances for typical AGN parameters, unsteady clumpy outflow solutions
are obtained instead of smooth, steady-state solutions.
In this paper, we return to disk wind simulations to assess if clumpy disk
wind solutions can be obtained as simple extensions to the clumpy radial wind
solutions of D20. The full answer is complicated but the short answer is ‘yes’
— we show that irradiated disk atmospheres are unstable to linear, isobaric
perturbations, the outcome being the formation of what we will refer to as
irradiated atmospheric fragments (IAFs). Our simulations initially reach a
smooth state that is itself multiphase when viewed from lines of sight
intersecting both the disk atmosphere and disk wind. A transition from a
smooth to clumpy solution occurs as IAFs form and evolve into larger scale
filaments. The wind becomes multiphase over a large solid angle once these
filaments begin to break into clumps and disintegrate upon mixing with the
fast outflow, with the final gas distribution spanning a broad range of
ionization parameters.
This paper is organized as follows. In §2, we describe the dynamics of steady
state thermally driven wind models after presenting the framework that is now
widely used to obtain these solutions. In §3, we present our results. We first
connect our study of TI in radial winds to disk winds and derive a
characteristic radius beyond which thermally driven outflows are expected to
be clumpy. Then we present our numerical simulations of clumpy disk winds
followed by an analysis of the absorption measure distribution and several
X-ray absorption lines along two illustrative lines of sight. We focus our
discussion in §4 on the possibility of TI occurring at smaller radii. Finally,
in §5 we summarize our understanding of these new disk wind solutions.
## 2 Overview of steady state solutions
In this section, we summarize the flow structure of steady state thermal wind
solutions obtained using a hydrodynamical modeling framework in which the
effects of radiation are treated in the optically thin limit. We first review
this framework because the overall takeaway from this work is that these
solutions serve as initial conditions for multiphase disk wind solutions.
Additionally, these thermal wind models are complementary to the latest
efforts to model MHD winds together with the internal dynamics of the disk in
full 3D, using either MRI turbulent disks (Zhu & Stone, 2018; Jacquemin-Ide et
al., 2020) or strongly magnetized disks, where a magnetic diffusivity is
included to permit accretion (e.g., Sheikhnezami & Fendt, 2018). While the
thermodynamics used in the above works is highly simplified compared to that
employed here, it may be feasible to combine both approaches (see e.g., Waters
& Proga, 2018).
### 2.1 Modeling framework
The ‘classical disk wind solution’ approach taken here is to not explicitly
treat the disk physics. Rather, a boundary condition (BC) along the midplane
is applied, providing a reservoir of mass and angular momentum. A self-
consistent choice of parameters and domain size will prevent the atmosphere
from becoming Compton thick, so that it becomes possible to model how
irradiation launches a thermal wind using an optically thin heating and
cooling function. As discussed below, given an SED, these solutions are
governed by just three dimensionless parameters. Formulating the problem in
terms of these parameters enables a comparison with our prior results on
radial winds, as well as with an already large body of published results.
In X-ray irradiated environments, the density BC cannot be chosen arbitrarily;
it is constrained by the value of the pressure ionization parameter,
$\Xi\equiv(4\pi J_{\rm ion}/c)/p$, where $J_{\rm ion}$ is the mean intensity
of the ionizing radiation (defined as energies between $1$ and $10^{3}\,{\rm
Ry}$; see Krolik et al. (1981)) local to a given parcel of gas with pressure
$p$. For parsec-scale gas irradiated by an effective point source, $4\pi
J_{\rm ion}$ is the ionizing flux $F_{X}$, making $\Xi$ the ratio of the
radiation pressure $p_{\rm rad}=F_{X}/c$ and gas pressure. Taking the central
source to have a total ionizing luminosity $L_{X}$, we have
$F_{X}=L_{X}\,e^{-\tau(r)}/4\pi r^{2}$, where
$\tau(r)=\int^{r}_{0}n(r^{\prime})\sigma(r^{\prime})\,dr^{\prime}$ is the
optical depth along the line of sight from the source at a distance $r$.
Hence, without invoking an optically thin approximation, the density is poorly
constrained due to the highly nonlinear dependence on optical depth. AGN warm
absorbers are located at parsec scale distances where this approximation may
apply, thereby reducing the ionization parameter to $\Xi=L_{X}/(4\pi
r^{2}c\,n\,k\,T)$.
The density in these models is therefore determined relative to the midplane
BC on the ionization parameter, $\Xi_{0}=L_{X}/(4\pi
r_{0}^{2}c\,n_{0}\,k\,T_{0})$, where $r_{0}$ is the radius at which $n=n_{0}$
and $T_{0}$ is the equilibrium temperature on the S-curve corresponding to
$\Xi_{0}$; $n_{0}$ thus evaluates to
$n_{0}=2.0\times 10^{6}\frac{(L_{X}/10^{44}\,{\rm
erg\,s^{-1}})}{\Xi_{0}\,(T_{0}/10^{5}\,{\rm K})\,(r_{0}/{\rm pc})^{2}}\>{\rm
cm^{-3}}.$ (2)
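The normalization in Eq. (2) follows directly from the definition $\Xi_{0}=L_{X}/(4\pi r_{0}^{2}c\,n_{0}\,k\,T_{0})$ solved for $n_{0}$; a short numeric check (our own, with CGS constants) recovers the $2.0\times 10^{6}\,{\rm cm^{-3}}$ prefactor:

```python
import math

# Check of Eq. (2): n_0 = L_X / (4 pi r_0^2 c Xi_0 k T_0),
# i.e. the midplane BC on the pressure ionization parameter
# solved for the number density.
c_light = 2.998e10   # speed of light [cm/s]
k = 1.3807e-16       # Boltzmann constant [erg/K]
pc = 3.0857e18       # parsec [cm]

def n0(L_X=1e44, Xi0=1.0, T0=1e5, r0_pc=1.0):
    """Midplane number density [cm^-3] implied by the BC on Xi."""
    r0 = r0_pc * pc
    return L_X / (4 * math.pi * r0**2 * c_light * Xi0 * k * T0)

print(n0())  # ~2.0e6 cm^-3, the prefactor in Eq. (2)
```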
This normalization density is considered to be typical of the density at
heights in the irradiated layers of the accretion disk where the plasma first
becomes Compton thin. As indicated above, in the classic disk wind solution
approach, this value is assigned to the midplane of the computational domain
at $(r,\theta)=(r_{0},90^{\circ})$. (In Waters & Proga (2018), we describe a
modified approach that introduces a shielded adiabatic accretion disk into the
setup, which in 3D would allow combining these irradiated disk solutions with
MRI turbulent disks.) The inner and outer boundaries of the computational
domain are set relative to $r_{0}$, and the midplane density profile is given
by $n(r)=n_{0}(r_{0}/r)^{2}$ to keep $\Xi(r)=\Xi_{0}$ along the midplane.
The fiducial radius $r_{0}$ can be expressed in terms of the hydrodynamic
escape parameter (HEP), defined as
${\rm HEP}=\frac{v_{\rm esc}^{2}}{c_{s}^{2}}=\frac{GM_{\rm
bh}(1-\Gamma)}{rc_{s}^{2}}.$ (3)
Here, $c_{s}=\sqrt{\gamma kT/\bar{m}}$ is the adiabatic sound speed, $v_{\rm
esc}$ the effective escape speed from the disk, and $\Gamma\equiv L/L_{\rm
Edd}$ is the Eddington parameter for a system with total luminosity $L$. The
$\sqrt{1-\Gamma}$ reduction to the escape speed $v_{\rm esc}=\sqrt{GM_{\rm
bh}(1-\Gamma)/r}$ is due to radiation pressure from the central engine. Using
Eq. (1), we have
$r_{0}=\frac{1}{\gamma\rm{HEP_{0}}}\,\frac{T_{C}}{T_{0}}\,R_{\rm
IC}(1-\Gamma),$ (4)
where ${\rm HEP_{0}}$ is the HEP assigned to $T_{0}$; this is the main
parameter controlling the strength of the thermal wind. If we insist on having
a strong wind beyond $r_{0}$, we require $r_{0}\geq R_{\rm IC}(1-\Gamma)$
[Proga & Kallman (2002); this criterion is just that of BMS83 for $\Gamma=0$],
giving the upper bound
$\rm{HEP_{0}}\lesssim\frac{1}{\gamma}\frac{T_{C}}{T_{0}}\>\>\>\text{(requirement
for a strong wind)}.$ (5)
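Equations (3) and (4) are algebraically consistent: substituting the $r_{0}$ of Eq. (4) back into the HEP definition of Eq. (3), evaluated at $T=T_{0}$, recovers ${\rm HEP_{0}}$ exactly. A minimal check (our own; parameter values are arbitrary illustrations):

```python
import math

# Consistency check of Eqs. (1), (3), (4): r_0 from Eq. (4),
# inserted into HEP = G M (1-Gamma) / (r c_s^2) at T = T_0,
# must return HEP_0.
G = 6.674e-8; k = 1.3807e-16; m_p = 1.6726e-24; M_sun = 1.989e33
gamma = 5.0 / 3.0

def check(HEP0=100.0, T0=1e5, TC=1e8, mu=0.6, M7=1.0, Gamma=0.5):
    M = M7 * 1e7 * M_sun
    mbar = mu * m_p
    R_IC = G * M * mbar / (k * TC)                        # Eq. (1)
    r0 = (1 / (gamma * HEP0)) * (TC / T0) * R_IC * (1 - Gamma)  # Eq. (4)
    cs2 = gamma * k * T0 / mbar                           # adiabatic c_s^2
    return G * M * (1 - Gamma) / (r0 * cs2)               # Eq. (3) at r0

print(check())  # ~100.0 (recovers HEP_0 up to roundoff)
```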
To complete the specification of a thermal wind model, we must determine the
heating and cooling rates corresponding to the radiation field. In recent
years, we have developed methods to compute these from the observationally
inferred intrinsic SED as self-consistently as is currently possible (Dyda et
al., 2017; Dannen et al., 2019) using the photoionization code XSTAR (Bautista
& Kallman, 2001; Kallman & Bautista, 2001). These calculations require
introducing the density ionization parameter, $\xi=L_{X}/(n_{H}\,r^{2})$, where
$n_{H}=(\mu_{H}/\mu)n$ is the hydrogen number density (with $\mu
m_{p}\equiv\rho/n$ and $\mu_{H}m_{p}\equiv\rho/n_{H}$ for mass density
$\rho$). In this work, we use rates derived from the unobscured SED for NGC
5548 (see Mehdipour et al., 2015), which has $f_{X}\equiv L_{X}/L=0.36$. The
S-curve corresponding to this SED has a Compton temperature $T_{C}=1.01\times
10^{8}\,{\rm K}$. Notice that for a fixed SED and elemental abundances (needed
to determine the heating and cooling rates in photoionization calculations),
the three dimensionless parameters governing thermal disk wind solutions are
$\Gamma$, $\Xi_{0}$, and ${\rm HEP_{0}}$.
An important property of thermal wind solutions is that they apply to any mass
black hole system if the SED is assumed to be unchanged. (The implicit
dependence of the cooling rate ($C_{23}$ in Eq. (6)) on density and
temperature can make these solutions sensitive to the S-curve and hence to the
SED, but because AGN SEDs are overall similar yet uncertain for any given
system, a good first approximation can be found by simply adopting a
representative S-curve and applying it to different systems.) The only term
that can potentially break the scale-free nature of the hydrodynamic equations
is the non-adiabatic source term. When non-dimensionalized, it enters the
equations multiplied by $t_{0}/t_{\rm cool}(r_{0})$, where $t_{0}$ is a
characteristic dynamical time such as the orbital period or $R_{g}/c$
($R_{g}=GM_{\rm bh}/c^{2}$) and $t_{\rm cool}$ is the cooling time defined by
$t_{\rm
cool}\equiv\frac{\mathcal{E}}{\Lambda}=6.57\frac{T_{5}}{n_{4}\,C_{23}}\>{\rm
yrs}.$ (6)
Here $\mathcal{E}=c_{\rm v}T$ is the gas internal energy density,
$\Lambda=10^{-23}\,(n/\bar{m})\,C_{23}$ is the total cooling rate in units of
${\rm erg\,s^{-1}g^{-1}}$ with $C_{23}$ the rate in units of $10^{-23}\,{\rm
erg\,cm^{3}\,s^{-1}}$, and $n_{4}=n/10^{4}\,{\rm cm^{-3}}$. The minimum value
of $t_{\rm cool}$ is reached along the midplane at $r_{0}$ when this is taken
as the location of the inner boundary. Using Eq. (2) and Eq. (4), we find
$t_{\rm cool}(r_{0})=2.22\,\frac{\mu\,M_{7}(1-\Gamma)^{2}}{f_{X}\,\Gamma({\rm
HEP_{0}}/100)^{2}}\frac{\Xi_{0}}{\,C_{23}}\,\>{\rm hrs},$ (7)
where we have eliminated $L_{X}$ in favor of $f_{X}$ using $L_{X}=f_{X}\Gamma
L_{\rm Edd}$. Because both $t_{0}$ and $t_{\rm cool}(r_{0})$ are proportional
to $M_{\rm bh}$, the equations have no mass dependence. This is ultimately due to
the density scaling as $n_{0}\propto M_{\rm bh}^{-1}$ in the above ionization
parameter framework.
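The prefactors in Eqs. (6) and (7) can be verified numerically. The sketch below (our own check, not from the paper) evaluates $t_{\rm cool}=\mathcal{E}/\Lambda = kT/[(\gamma-1)\,10^{-23}\,n\,C_{23}]$, in which $\bar{m}$ cancels, and then rebuilds $t_{\rm cool}(r_{0})$ from Eqs. (2) and (4) with $L_{X}=f_{X}\Gamma L_{\rm Edd}$; we restrict the second check to $\mu=1$ (a hedged choice, since the $\mu$-scaling is not re-derived here):

```python
import math

# Checks of Eq. (6) and Eq. (7) (mu = 1, T0 = 1e5 K assumed).
G = 6.674e-8; k = 1.3807e-16; m_p = 1.6726e-24
M_sun = 1.989e33; c_light = 2.998e10; sigma_T = 6.652e-25
yr = 3.156e7; gamma = 5.0 / 3.0

def t_cool(T, n, C23=1.0):
    """Cooling time [s] from E/Lambda; mbar cancels out."""
    return k * T / ((gamma - 1) * 1e-23 * n * C23)

# Eq. (6) prefactor: T = 1e5 K, n = 1e4 cm^-3, C23 = 1 -> ~6.57 yr
print(t_cool(1e5, 1e4) / yr)

def t_cool_r0(M7=1.0, Gamma=0.5, fX=0.36, HEP0=100.0, Xi0=1.0, C23=1.0):
    """Eq. (7) rebuilt from Eqs. (1), (2), (4) with mu = 1; in hours."""
    M = M7 * 1e7 * M_sun
    L_Edd = 4 * math.pi * G * M * m_p * c_light / sigma_T
    L_X = fX * Gamma * L_Edd
    r0 = G * M * m_p * (1 - Gamma) / (gamma * HEP0 * k * 1e5)   # Eq. (4)
    n0 = L_X / (4 * math.pi * r0**2 * c_light * Xi0 * k * 1e5)  # Eq. (2)
    return t_cool(1e5, n0, C23) / 3600.0

# Eq. (7): 2.22 * mu * M7 * (1-Gamma)^2 / (fX * Gamma * (HEP0/100)^2)
#          * Xi0 / C23 hrs = 2.22 * 0.25 / 0.18 ~ 3.08 hrs here
print(t_cool_r0())
```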
Figure 1: An example of the steady state density and velocity structure of
thermally driven disk wind solutions near the Compton radius $R_{\rm IC}$. The
solution consists of a tenuous outflow (brownish gas) above a somewhat denser
corona (yellowish gas) that lies above a dense and nearly hydrostatic cold
phase atmosphere (blue and green gas). A set of streamlines are overlaid and
color-coded according to where they originate: along the inner spherical
boundary (gray), parallel to the atmosphere/wind interface (red), and parallel to
the midplane at $z=0.1\,R_{\rm IC}$ (white). As the red streamlines show, the
base of the disk wind is in the upper layers of the atmosphere where poloidal
velocities are less than $10\,{\rm km/s}$. The lowest black contour denotes
$500\,{\rm km/s}$ and the next two $1000\,{\rm km/s}$ and $1500\,{\rm km/s}$.
The white dotted line marks the sonic surface. The circulation apparent in the
cold phase atmosphere increases with radius and permits the formation of a
vortex for $r>15\,R_{\rm IC}$ (see Fig. 5), which in turn triggers a
multiphase outflow at even larger radii.
Figure 2: Phase diagram showing the
S-curve corresponding to the SED of NGC 5548 (solid black curve). Blue dots
and vertical lines mark $\Xi_{0}$ (the midplane BC on $\Xi$) for each of the
solutions in Fig. 3, $\log(\Xi_{0})=0.5$, $\log(\Xi_{0})=0.92$, and
$\log(\Xi_{0})=1.05$. The Balbus contour is plotted as a dashed gray curve;
all points to the left of it are thermally unstable and to the right thermally
stable according to Balbus’ criterion for TI (Balbus, 1986). The two unstable
regions encountered while tracing the S-curve starting from low temperatures
are referred to as the upper and lower TI zones. The base of the lower TI zone
is conventionally denoted as $\Xi_{\rm c,max}$ and is marked. Overplotted on
this $(T,\Xi)$-plane are the phase-‘tracks’ for each of the streamlines in
Fig. 1. (The white streamlines are plotted in cyan here.) Notice that the disk
wind streamlines pass through both TI zones.
### 2.2 Basic flow structure of irradiated disks
As shown in Fig. 1, the density structure that results from this modeling
framework consists of three distinct regions: a dense disk atmosphere (blue to
green regions), a less dense corona (yellow regions), and a tenuous disk wind
(brown to whitish regions). That the solution consists of these basic
components was already apparent from the early study by Woods et al. (1996;
see also Proga & Kallman 2002; Jimenez-Garate et al. 2002), but the focus
subsequently shifted to simulating only the corona and disk wind regions to
determine if the velocities of these models can reach those observed in low-
mass X-ray binaries (e.g., Luketic et al., 2010; Higginbottom & Proga, 2015;
Higginbottom et al., 2017). As discussed in §2.3, this is accomplished by
placing $\Xi_{0}$ just below $\Xi_{\rm c,max}$ on the S-curve. In Fig. 2, we
mark the location of $\Xi_{\rm c,max}$ on the S-curve used in this work. We
also show three values of $\Xi_{0}$; the midplane BC of the solution shown in
Fig. 1 corresponds to the middle blue line at $\log(\Xi_{0})=0.92$
($\xi_{0}=50$).
The velocity structure can also be grouped into three sets of streamlines
(although these groupings do not correspond to that of the density field):
those originating along the inner boundary of the computational domain, those
with footpoints nearly parallel to the atmosphere/wind interface, and those
along the midplane. We show this streamline geometry in Fig. 1, and we
plot the temperature along these streamlines on the $(T,\Xi)$-plane in Fig. 2.
As the gas becomes less bound, the velocity field transitions from streamlines
with appreciable poloidal divergence for $1\,R_{\rm IC}\lesssim r\lesssim
5\,R_{\rm IC}$, to nearly parallel streamlines for $r>5\,R_{\rm IC}$. Also,
contours of constant velocity (black lines denote $500\,{\rm km/s}$,
$1000\,{\rm km/s}$, and $1500\,{\rm km/s}$) become more widely spaced. The
disk atmosphere is characterized by streamlines that turn back into the
midplane or by segments of streamlines near the footpoints that curve before
transitioning into the wind, indicative of bound gas with some amount of
circulation. The caption of Fig. 1 provides further details on the velocity
dynamics.
The highest temperatures are reached near the inner boundary, corresponding to
the gray streamlines closest to the Compton branch of the S-curve in Fig. 2.
This hot gas is responsible for launching the fastest outflow at high
latitudes. We will refer to this as the radial wind region because the
streamlines do not originate from the disk. We note that the midplane BC is
the source of matter for gas along the inner boundary, as the pressure
gradients above the atmosphere establish the density profile along $r_{\rm
in}$, but this gas then becomes the base of a disconnected wind. Temperature
decreases with distance along these wind streamlines because of the near
spherical expansion, which permits the timescale for adiabatic cooling to be
shorter than the timescale for radiative heating. The radiative heating rate
scales similarly to $t_{\rm cool}^{-1}$. Outside the atmosphere, $t_{\rm
cool}$ is shortest in the disk corona, explaining why gas becomes nearly
Comptonized only in this region. Radiative heating timescales are shortest
here due to the nearly $1/r^{2}$ scaling of density in the radial wind region.
The streamlines originating near the atmosphere/wind interface show similar
behavior on the $(T,\Xi)$-plane to the radial wind region streamlines only at
large distances. The footpoints of these streamlines are located on the
S-curve slightly below $\Xi_{\rm c,max}$, and the temperature sharply rises
with distance upon entering the region of runaway heating. All streamlines
pass through both the lower and upper TI zones (the regions falling to the
left of the Balbus contour, the dashed line in Fig. 2). The temperature peaks
occurring between $\sim 10^{7}\,{\rm K}$ and $5\times 10^{7}\,{\rm K}$ are at
points beyond the sonic surface (dotted white line in Fig. 2), and the
decreasing temperature thereafter is again due to adiabatic cooling, which is
less efficient for non-spherical expansion. There is a shear layer associated
with the steep rise in temperature at the atmosphere/wind interface, but we
checked that the velocity gradient in this layer is too small to make it
unstable to the Kelvin-Helmholtz instability (KHI). Specifically, we
introduced constant pressure perturbations with a range of wavenumbers
throughout the atmosphere to test stability to both TI and KHI. Stability to
TI is discussed further in §2.4 and §3.1.2, but it is likely that even a
higher velocity gradient shear layer will be stable to KHI because sound waves
are highly damped in the presence of radiative heating/cooling.
Figure 3: Top panels: Temperature maps showing how the height of the disk
atmosphere varies with the value of the ionization parameter assigned to the
midplane of the computational domain ($\xi_{0}=100.0$, 50.0, and 5.0 correspond
to $\Xi_{0}=11.324$, 8.235, and 3.171, respectively). It has become standard
practice to exclude an atmosphere entirely by placing $\Xi_{0}$ right below
$\Xi_{\rm c,max}$, which results in a solution like that shown in the left
panel. However, the presence of a cold phase atmosphere is necessary to
capture TI in the simulations. In this work, we present results for
$\xi_{0}=50$ solutions. Even smaller values of $\xi_{0}$ result in a lower
temperature atmosphere that extends to greater heights above the midplane
(compare middle and right panels). This $T$-dependence of the height of the
disk atmosphere is opposite that of an underlying disk, as seen by comparing
the classical disk scale height $h=\epsilon r$, marked with the black dashed
lines, with the height of the atmosphere. The temperature and velocity
structure of the disk wind, meanwhile, are comparatively insensitive to
$\Xi_{0}$. To show this clearly, we again plot the sonic surface as well as
the velocity contours from Fig. 1. Bottom panels: Maps of the cooling time (in
years), calculated for NGC 5548 ($M_{\rm bh}=5.2\times 10^{7}M_{\odot}$). The
velocity structure is again overlaid and the orange contour marks where
$t_{\rm cool}/t_{\rm ff}=0.01$.
### 2.3 Scale height dependence of irradiated disk atmospheres on $\Xi_{0}$
Recent work applying these solutions to explain the observed changes in wind
velocities accompanying state changes in low-mass X-ray binaries has pointed
out how the extent of the atmosphere is sensitive to $\Xi_{0}$ (see the
appendix of Tomaru et al. 2019). This dependence is illustrated in Fig. 3,
where we compare temperature maps for solutions with $\xi_{0}=100$,
$\xi_{0}=50$, and $\xi_{0}=5$ ($\Xi_{0}=11.32$, $\Xi_{0}=8.24$, and
$\Xi_{0}=3.17$), corresponding to different cold phase locations along the
S-curve (marked with vertical blue lines in Fig. 2). Most previous studies
have focused on solutions similar to those shown in the left panel, where
there is no disk atmosphere at all (e.g., Luketic et al., 2010; Higginbottom &
Proga, 2015; Higginbottom et al., 2017, 2018; Tomaru et al., 2018; Waters &
Proga, 2018). This corresponds to taking $\Xi_{0}$ somewhat below $\Xi_{\rm
c,max}$. Including the cold phase atmosphere incurs much greater computational
expense because the cooling time becomes orders of magnitude smaller, as shown
in the bottom panels. Once the atmosphere is present, taking a lower $\Xi_{0}$
results in it extending to larger heights above the disk midplane before the
transition into an outflowing corona and disk wind occurs at $T>T(\Xi_{\rm
c,max})$.
A comparison of these solutions reveals that the vertical heights of accretion
disks and their irradiated atmospheres scale oppositely with temperature. The
black dashed lines on the temperature maps in Fig. 3 mark, for the same
midplane temperature, the classical scale height of a disk, $h=\epsilon\,r$,
where $\epsilon=c_{\rm iso}/v_{\rm esc}=1/\sqrt{\gamma\rm{HEP}}$ (with $c_{\rm
iso}=\sqrt{kT/\bar{m}}$ the isothermal sound speed). A cooler disk implies a
thinner, less pressurized disk, but a cooler irradiated atmosphere requires it
to extend to higher latitudes and be of higher pressure. The latter property
follows immediately from the definition of the pressure ionization parameter,
$\Xi\equiv p_{\rm rad}/p$. For a fixed radiation flux, i.e. at a given radius
in the atmosphere, gas in thermal equilibrium at a lower $\Xi$ has a higher
gas pressure. The greater extent of the cold phase follows because the
atmosphere is nearly in hydrostatic equilibrium, so a fixed pressure gradient
is required to balance the gravitational and centrifugal forces. When starting
from a higher pressure, therefore, cold phase gas must extend to larger
heights to ‘climb’ the S-curve at this particular pressure gradient.
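As a brief consistency aside (our own check, not from the paper), the identity $1/\sqrt{\gamma\,{\rm HEP}}=c_{\rm iso}/v_{\rm esc}$ follows from Eq. (3) together with $c_{s}^{2}=\gamma c_{\rm iso}^{2}$; the speed values below are arbitrary illustrations:

```python
import math

# HEP = v_esc^2 / c_s^2 with c_s^2 = gamma * c_iso^2, so
# gamma * HEP = v_esc^2 / c_iso^2 and eps = 1/sqrt(gamma*HEP)
# is the ratio of isothermal sound speed to escape speed.
gamma = 5.0 / 3.0
c_iso = 1.2e6      # arbitrary isothermal sound speed [cm/s]
v_esc = 4.7e7      # arbitrary effective escape speed [cm/s]

HEP = v_esc**2 / (gamma * c_iso**2)   # Eq. (3) rewritten with c_iso
eps = 1.0 / math.sqrt(gamma * HEP)

print(eps, c_iso / v_esc)  # the two agree
```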
It is the runaway heating process that occurs when gas reaches $\Xi_{\rm
c,max}$ on the S-curve that is responsible for the atmosphere transitioning
into a disk wind (BMS83). Thus, while the height of the atmosphere is
sensitive to the value of $\Xi_{0}$, the temperature at the base of the disk
wind is the same in each of the panels of Fig. 3. It is therefore expected
that the run of temperature with distance in the wind is not sensitive to
$\Xi_{0}$. Because the velocity structure of the disk wind is set by the
temperature at $\Xi_{\rm c,max}$, it also is nearly the same from panel to
panel. To see this clearly, we overplot the sonic surface (white contour) and
three constant velocity surfaces.
### 2.4 Stability of the disk wind vs. atmosphere to TI
To emphasize how TI can operate differently in the cold phase atmosphere
versus in the disk wind, it is useful to draw a comparison with studies of the
circumgalactic medium (CGM), where TI is invoked as a condensation process
only. Focus has been given to the importance of the ratio $t_{\rm cool}/t_{\rm
ff}$ (with $t_{\rm ff}=r\sqrt{2}/\sqrt{GM_{\rm bh}/r}$ the free-fall
timescale) in assessing the stability of CGM atmospheres to TI (e.g., Sharma
et al., 2012; Gaspari et al., 2015; Voit et al., 2017). We computed this ratio for the solutions here; note that the relevant dynamical time is the orbital period, which is just $t_{\rm dyn}=\pi\sqrt{2}\,t_{\rm ff}$. We find that
$t_{\rm cool}/t_{\rm ff}<1$ in the entire computational domain, falling to
less than $0.01$ below the orange contour plotted in the bottom panels of Fig.
3. This ratio is a qualitative diagnostic of whether cooling times are short
enough to permit condensation out of the hot phase, which they are. The
fundamental criterion for triggering TI, however, is that gas occupy the TI
zone long enough for perturbations to grow. As explained in §3, dynamical
effects prohibit this in the disk wind.
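The relation $t_{\rm dyn}=\pi\sqrt{2}\,t_{\rm ff}$ quoted above is a pure identity, independent of radius and black hole mass; a quick check with representative cgs values:

```python
import math

# t_ff = r*sqrt(2)/sqrt(G*M_bh/r) and the orbital period
# t_dyn = 2*pi*sqrt(r^3/(G*M_bh)), so t_dyn/t_ff = pi*sqrt(2) always.
G = 6.674e-8                  # cgs
M_bh = 1.0e7 * 1.989e33       # 10^7 solar masses, in g (illustrative)
r = 3.086e18                  # 1 pc in cm (illustrative)

t_ff = r * math.sqrt(2.0) / math.sqrt(G * M_bh / r)
t_dyn = 2.0 * math.pi * math.sqrt(r**3 / (G * M_bh))
print(t_dyn / t_ff, math.pi * math.sqrt(2.0))   # both ~4.44
```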
Gas in the disk atmosphere already occupies the cold phase, so condensation
clearly cannot take place. It is still relevant that $t_{\rm cool}/t_{\rm
ff}\ll 1$ in this gas, however, because it implies that $t_{\rm heat}/t_{\rm
ff}\ll 1$. That is, the only way for TI to operate in the atmosphere (at least
initially) is for perturbations to undergo runaway heating to form heated
layers or pockets of warmer gas. We identified this as the dominant process
triggering clumpiness in the radial wind solutions presented in D20. Because
the atmosphere is cooler than the temperature of the lower TI zone,
perturbations can only grow near the disk wind interface where $T$ reaches
$T(\Xi_{\rm c,max})$.
## 3 Results
The solutions shown in Fig. 3 only reach a steady state if the outer boundary
is located within about $15\,R_{\rm IC}$. We now show that in runs with larger domains, the innermost flow region can still reach a steady state but the outer regions will not. This is because the inner flow region is just
within the radius where TI can first be triggered due to dynamical changes in
the atmosphere. We note that $15\,R_{\rm IC}$ corresponds to about
$1\,\rm{pc}$ for $M_{\rm bh}\approx 10^{7}\,M_{\odot}$ (see Eq. (1)) and is
close to the inner radius of our clumpy disk wind simulations presented in
§3.2. This solution regime therefore corresponds to scales associated with the
torus in the AGN literature (e.g., Ramos Almeida & Ricci, 2017; Combes et al.,
2019; Hönig, 2019). We choose not to speculate here on this correspondence
other than to point out the following possibility. The inverse dependence
between the atmospheric height and the value of $\Xi_{0}$ pointed out in §2.2,
which is an overlooked property of thermal wind solutions, can in principle
accommodate a smooth transition from ionized to atomic and molecular gas due
to the small ram pressure of the atmospheric gas residing on the cold phase
branch of our S-curve.
We emphasize again that these solutions do not depend on the black hole mass,
so for AGNs with $M_{\rm bh}\approx 10^{6}\,M_{\odot}$, these solutions are
within the inferred distance to the torus and the dynamics can therefore be
considered independently of outflow models developed for the dusty torus
region (e.g., Dorodnitsyn, Kallman, & Bisnovatyi-Kogan, 2012; Wada, 2012; Chan
& Krolik, 2016, 2017; Williamson, Hönig, & Venanzi, 2019). We briefly comment
on the effects of adding additional physical processes to these solutions in
§4.2.
### 3.1 From clumpy radial winds to clumpy disk winds
Here we examine the dynamics of steady-state disk wind solutions beyond $15\,R_{\rm IC}$ by connecting them to our previous work on radial winds,
where we discussed how the flow can be stable to TI despite passing through a
TI zone. It is beyond the scope of this work to develop a comprehensive theory
of ‘dynamical TI’. It is clear, however, that the list of qualitative
requirements identified by D20 for triggering TI in radial winds holds also
for disk winds. Therefore, we first recall our list from D20.
#### 3.1.1 Necessary conditions for triggering TI in outflows
* •
For radiation pressure due to a central source, the gas pressure along a
streamline must decrease as or slower than $r^{-2}$ upon entering a TI zone to
allow $\Xi\propto 1/(r^{2}p)$ to remain constant or decrease. Otherwise gas
will quickly exit the TI zone, as is clear from Fig. 2. In more complicated
radiation fields, the gas pressure gradient along streamlines will still
likely need to permit $\Xi=4\pi J/p$ to vary similarly with distance in the TI
zone.
* •
The flow must not be so fast that perturbations do not have time to become
nonlinear during their passage through a TI zone.
* •
The stretching of perturbations due to acceleration terms (from the nonlinear
term $\mbox{\boldmath$v$}\cdot\nabla\mbox{\boldmath$v$}$) must be less
efficient than the amplification of perturbations from TI.
Bullets (i) and (ii) relate to pressure gradients in the flow field and (iii) to velocity gradients, none of which are accounted for in the local theory of TI. (While the local dispersion relation for TI is unchanged in a uniform velocity field, due to Galilean invariance, bullet (ii) arises because TI zones may span only a small range of pressures on the S-curve, and the flow pressure typically drops off rapidly. For prior applications of dynamical TI in global flows see Balbus & Soker 1989 and Mościbrodzka & Proga 2013; see also the 3D simulations of Kurosawa & Proga 2009.)
#### 3.1.2 Connection between TI and the Bernoulli function
In D20, we presented four qualitatively different types of flow solutions
(Models A, B, C, & D) in 1D. The model A solution had the highest
$\rm{HEP_{0}}$ and is susceptible to TI only during a transient phase of
evolution, while models B, C, & D are clumpy wind solutions that never reached
a steady state. After improving our numerical methods (see Appendix A), all of
these solutions reach a steady state, which allowed us to perform a follow-up
analysis based on the Bernoulli theorem. This analysis is presented in Fig. 4
and amounts to the following empirical result:
* •
The above conditions for an outflow to be unstable to TI cannot be met if the
flow enters a TI zone with a negative value of the Bernoulli function,
$Be=\frac{v^{2}}{2}+\frac{c_{s}^{2}}{\gamma-1}-\frac{GM_{\rm
bh}}{r}(1-\Gamma).$ (8)
Figure 4: Top panel: Steady state versions of our 1D radial wind solutions
from D20 plotted on the $(T,\Xi)$-plane. Plus signs denote sonic points and a
black dot marks $\Xi_{\rm c,max}$. Model A is stable to TI and exits the TI
zone, whereas models B and C are unstable and settle onto the S-curve inside
the TI zone. Only models B and C will turn into clumpy wind solutions if they
are perturbed (by applying linear perturbations at the base, for example).
Bottom panel: The Bernoulli function (normalized to its value at $r_{\rm
out}$) versus radius. Plus signs mark sonic points. Black dots mark the
locations where $\Xi=\Xi_{\rm c,max}$, the entry to the lower TI zone. As
discussed in §3.1.2, models B and C are unstable solutions because they enter
the TI zone with $Be>0$. From the dynamical criterion that a sign change in
$Be$ at the location where $\Xi=\Xi_{\rm c,max}$ implies stability, we
identified the characteristic ‘unbound’ radius $R_{\rm u}$ beyond which a
transition to instability should occur (see §3.1.3). Like $R_{\rm IC}$,
$R_{\rm u}$ is a property of the S-curve; the vertical line marks $R_{\rm
u}=98.3\,R_{\rm IC}$.
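The sign behavior of Eq. (8) discussed here can be illustrated with representative numbers. The values below are purely illustrative (not taken from the simulations); the point is that for virialized cold gas the gravitational term dominates, while heating toward the Compton temperature makes the enthalpy term dominate:

```python
import math

# Illustrative evaluation of the Bernoulli function, Eq. (8), in cgs units.
G = 6.674e-8
k_B = 1.381e-16
m_p = 1.673e-24
M_bh = 1.0e7 * 1.989e33      # hypothetical 10^7 M_sun black hole
Gamma = 0.3                  # Eddington fraction of the radial wind models
gamma = 5.0 / 3.0
mu = 0.6
r = 0.1 * 3.086e18           # 0.1 pc (hypothetical radius)

def bernoulli(v, T, r):
    """Be = v^2/2 + c_s^2/(gamma-1) - G*M_bh*(1-Gamma)/r."""
    cs2 = gamma * k_B * T / (mu * m_p)   # adiabatic sound speed squared
    return 0.5 * v**2 + cs2 / (gamma - 1.0) - G * M_bh * (1.0 - Gamma) / r

v_circ = math.sqrt(G * M_bh / r)         # circular (orbital) speed
Be_cold = bernoulli(v_circ, 1.0e5, r)    # virialized cold-phase gas
Be_hot = bernoulli(v_circ, 1.0e8, r)     # gas heated toward T_C
print(Be_cold < 0.0, Be_hot > 0.0)       # True True
```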
To understand this result, we consider what must happen to trigger TI starting
from a steady state flow field. First recall that isobaric TI only amplifies
the entropy mode [unless the S-curve has a negative slope on the
$(\xi,T)$-plane; see Waters & Proga 2019], and that entropy modes, unlike
sound waves, are advected with the flow. Therefore, only those streamlines
that already pass through a TI zone can amplify perturbations. Next recall the
close connection between Bernoulli’s theorem and the steady state entropy
equation. For a net radiative cooling function $\mathcal{L}$ having units
${\rm erg~{}g^{-1}s^{-1}}$, the heat equation can be expressed as
$T\,Ds/Dt=-\mathcal{L}$ (for entropy per unit mass $s$ having units ${\rm
erg~{}g^{-1}K^{-1}}$ and with $D/Dt$ the Lagrangian derivative). In a steady
state,
$\mbox{\boldmath$v$}\cdot\nabla s=-\frac{\mathcal{L}}{T}.$ (9)
Meanwhile, Bernoulli’s theorem is
$\mbox{\boldmath$v$}\cdot\nabla Be=-\mathcal{L}.$ (10)
By eliminating $\mathcal{L}$, we arrive at Crocco’s theorem ($\nabla
Be=T\nabla s$ along streamlines) for steady state solutions, but it is clear
from the derivation that the advected value of the Bernoulli function closely
tracks that of entropy. An abrupt jump in the value of $Be$ therefore implies
a similarly large change in the steady state entropy profile.
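The elimination of $\mathcal{L}$ between Eqs. (9) and (10) can be made explicit numerically: for any steady 1D profile, defining the net cooling through Eq. (9) and substituting into Eq. (10) recovers Crocco's relation $\nabla Be=T\nabla s$ along the flow. A minimal sketch with made-up profiles:

```python
import numpy as np

# Verify dBe/dr = T ds/dr for an arbitrary 1D steady flow.
r = np.linspace(1.0, 10.0, 1000)
v = 1.0 + 0.1 * r            # arbitrary velocity profile
T = 2.0 / r                  # arbitrary temperature profile
s = np.log(r)                # arbitrary entropy profile

ds_dr = np.gradient(s, r)
L = -T * v * ds_dr           # net cooling defined via Eq. (9): v ds/dr = -L/T

dBe_dr = -L / v              # Eq. (10): v dBe/dr = -L
print(np.allclose(dBe_dr, T * ds_dr))   # True: Crocco's relation holds
```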
The key insight revealed by this Bernoulli analysis is the effect this profile
has on an advected entropy mode that can become unstable to TI. Only for an
entropy profile that does not suffer a dramatic jump at the entry to a TI zone
can the 3 bullet points from §3.1.1 be satisfied. That is, TI can amplify an
entropy mode into the nonlinear regime and thus seed the formation of an IAF
only if the mode is not disrupted, which requires the entropy and $Be$
profiles to be smooth at the TI zone.
#### 3.1.3 Characteristic radius at which thermally driven outflows become
clumpy
The sign of $Be$ is negative in the disk atmosphere because this cold phase
gas is virialized. It becomes positive in the disk wind because runaway
heating makes the enthalpy large and accelerates the flow, i.e., the first two
terms in Eq. (8) dominate the third. For an entropy mode to avoid an abrupt
change as it enters a TI zone, $Be$ must not change signs, hence our previous
bullet point. The radius at which $Be$ first becomes positive just as $T$
reaches $T_{\rm c,max}\equiv T(\Xi_{\rm c,max})$ from below defines the
characteristic ‘unbound radius’ beyond which thermally driven winds are
expected to be clumpy. The exact radius is only implicitly defined by the
Bernoulli function, but as shown in Appendix B, a lower bound is
$R_{\rm u}=\frac{2}{\gamma}\frac{\gamma-1}{\gamma+1}\frac{T_{C}}{T_{\rm
c,max}}\,R_{\rm IC}\,(1-\Gamma).$ (11)
Eliminating $R_{\rm IC}$ using Eq. (1) shows $R_{\rm u}$ to be proportional to
‘the other’ characteristic distance set by the radiation field for a given
black hole mass, $GM_{\rm bh}\bar{m}/kT_{\rm c,max}$. Like $R_{\rm IC}$,
therefore, it is a property of the S-curve for any particular system; the
ratio $T_{C}/T_{\rm c,max}=4.68\times 10^{2}$ for our S-curve, giving $R_{\rm
u}=140\,R_{\rm IC}\,(1-\Gamma)$ for $\gamma=5/3$. The vertical line in Fig. 4
marks this radius for our radial wind solutions (which have $\Gamma=0.3$),
showing that it is indeed a good estimate of the parameter space that we
uncovered by trial and error in D20. Because $R_{\rm u}$ is simply the
approximate radius beyond which gas entering the TI zone has enough thermal
energy to escape the system, it characterizes both radial wind and disk wind
solutions.
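A quick arithmetic check of Eq. (11) against the numbers quoted in this subsection ($\gamma=5/3$, $T_{C}/T_{\rm c,max}=4.68\times 10^{2}$, and $\Gamma=0.3$ for the radial wind solutions):

```python
gamma = 5.0 / 3.0
TC_over_Tcmax = 4.68e2   # T_C / T_c,max for the S-curve used here

# Eq. (11): R_u = (2/gamma) * (gamma-1)/(gamma+1) * (T_C/T_c,max) * (1-Gamma) * R_IC
prefactor = (2.0 / gamma) * (gamma - 1.0) / (gamma + 1.0) * TC_over_Tcmax
print(prefactor)                 # ~140, i.e. R_u = 140 R_IC (1 - Gamma)
print(prefactor * (1.0 - 0.3))   # ~98.3 R_IC, the vertical line in Fig. 4
```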
Figure 5: The same $\xi_{0}=50$ solution from Figs. 1 and 3 but showing the
full domain, which extends to $40\,R_{\rm IC}$. In addition to the streamlines
overlaid in Fig. 1, we add a set parallel to the midplane with footpoints
$z=2.3\,R_{\rm IC}$ spaced $1\,R_{\rm IC}$ apart along $x$ (black). Beyond
$25\,{\rm R_{IC}}$, these streamlines are all outflowing and do not enter the
disk wind, showing that the disk wind becomes dynamically disconnected from
the interior of the outflowing atmosphere. Within $25\,{\rm R_{IC}}$, some
black streamlines flow into a vortex centered at $r\approx 18.5\,R_{\rm IC}$.
This vortex spans about $9\,R_{\rm IC}$, corresponding to $\approx 1.5\,{\rm
pc}$ for NGC 5548 ($M_{\rm bh}=5.2\times 10^{7}\,M_{\odot}$) and $0.03\,{\rm
pc}$ for an AGN with $M_{\rm bh}=10^{6}\,M_{\odot}$. In the lower panel, we
zoom-in somewhat to better show the vortex, which turns out to be critical for
the onset of TI.
#### 3.1.4 The role of a cold phase vortex
Well within the distance $R_{\rm u}$, the gas dynamics in the cold phase
atmosphere changes significantly, as we show in Fig. 5. For $r\gtrsim
25\,R_{\rm IC}$, the atmosphere transitions into a slow outflow. Prior to
that, a large scale vortex appears in the flow at $16\,R_{\rm
IC}\lesssim r\lesssim 25\,R_{\rm IC}$, and this dynamics is associated with a
steepening of the slope of the atmosphere/wind interface at $15\,R_{\rm
IC}\lesssim r\lesssim 16\,R_{\rm IC}$. We are not aware of any previous
authors reporting on the presence of such a counter-clockwise vortex in the
atmosphere. This circulation is driven by pressure gradients accompanying the
steepest bend in the S-curve at about $\log(T)=5.1$.
The vortex turns out to be the critical element that connects the geometry of
radial winds to that of disk winds and allows TI to operate. In radial winds,
the streamlines in the cold phase atmosphere are necessarily normal to the
atmosphere’s surface, whereas in disk winds they are nearly parallel to it
beyond $25\,R_{\rm IC}$. Within $15\,R_{\rm IC}$, atmosphere streamlines can
connect to the disk wind, as Fig. 1 shows. However, as discussed above, these
streamlines are stable to TI. Beyond $25\,R_{\rm IC}$, the base of the disk
wind is disconnected from the atmosphere’s interior. The temperature at the
base is that of the lower TI zone, $T_{\rm c,max}$, but pressure falls off
with distance, meaning $\Xi$ increases outward and gas does not enter the TI
zone. Thus, starting from a steady state at least, the only path for matter to
enter the wind from the atmosphere at radii $r>15\,R_{\rm IC}$ is through the
vortex. We sum this up with a final bullet point:
* •
In the case of thermally driven disk winds, pressure gradients along the
S-curve must drive a vortex in the atmosphere, as this allows perturbations to
enter the TI zone and become nonlinear.
Figure 6: Density maps for our mid-res run capturing the onset of TI and the
subsequent formation of an IAF. Velocity arrows and contours at $500\,{\rm
km\,s^{-1}}$ and $10^{3}\,{\rm km\,s^{-1}}$ are overlaid as in previous plots.
The $Be=0$ contour is shown in red and it coincides with the contour $T=T_{\rm
c,max}$ marking the position of the lower TI zone. To visualize the slow
velocity field within the disk atmosphere, we overplot a second set of
velocity arrows (gray) that have $10\times$ smaller magnitude. This reveals a
vortex at $(x,z)\approx(75,25)\,R_{\rm IC}$, the formation site of a hot spot
(first visible at $t=1049$). This spot forms due to TI (see Fig. 7).
Figure 7: Analysis of hot spot/bubble dynamics. Top Panels: Colormaps of the number
density as in Fig. 6 but zooming in on a $50\,R_{\rm IC}\times 50\,R_{\rm IC}$
region centered on the hot spot. Three streamlines are overplotted (blue,
orange, and green). In the left panel, the footpoints are located within the
vortex and the orange and green streamlines are wrapped around the vortex. In
the right panel, they are located at smaller radii to illustrate that as the
hot spot becomes a hot bubble, it separates from the vortex. Bottom Panels:
Phase diagrams similar to Fig. 2 showing the tracks for the streamlines in the
panels above. For the hot spot, the streamlines enter the lower TI zone only
(left diagram). For the hot bubble, the streamlines enter both the lower and
upper TI zones (right diagram).
Figure 8: As in Fig. 6 but for the hi-res run. Rather than through the growth
of an embedded hot spot on the bottom edge of a vortex, the IAF here initially
forms by hot gas from the wind region flowing into the atmosphere along the
top edge of a vortex. As the bottom panels show, embedded hot spots also form,
indicating that both IAF formation channels operate simultaneously.
Additionally, in this hi-res run a turbulent wake forms in the wind downstream
of the IAF and a vortex shedding process can be seen taking place. An
animation of this is viewable at
https://trwaters.github.io/multiphaseAGNdiskwinds/.
Figure 9: Density map snapshots as in Fig. 6 but showing the subsequent
evolution of the flow. The full domain is now plotted. The white circular line
marks the ‘unbound radius’ $R_{\rm u}$ that approximates the location in the
atmosphere where the contours $Be=0$ and $T=T_{\rm c,max}$ no longer coincide.
As shown by the red contour, $Be>0$ beyond $R_{\rm u}$; the flow at these
distances becomes highly susceptible to TI. The brown ‘spots’ in the
atmosphere at $t\geq 1600$ are thermally unstable hot bubbles.
### 3.2 Numerical simulations of clumpy disk winds
Thus far we have examined solutions for a few different $\Xi_{0}$, all with
$\rm{HEP_{0}}=225$ and $\Gamma=0.1$, corresponding to $r_{0}\approx 1/3\,{\rm
pc}$ for $M_{\rm bh}=5.2\times 10^{7}\,M_{\odot}$, the mass of NGC 5548 (e.g.,
Kriss et al., 2019). These domains have $r_{\rm in}=r_{0}/2=1.0\,R_{\rm IC}$
and $r_{\rm out}/r_{\rm in}=40$. Given the theoretical expectations from
§3.1.1-§3.1.2 for when TI can and cannot be triggered, the inner boundary of
the computational domain need not begin at $R_{\rm IC}$ but should at least
include the transition from bound to outflowing streamlines in the atmosphere,
which occurs around $20\,R_{\rm IC}$ for $\rm{HEP_{0}}=225$. The outer
boundary should extend beyond $R_{\rm u}=140\,R_{\rm IC}\,(1-\Gamma)$, where
clumpy wind dynamics is expected to occur. To have maximal numerical
resolution at minimal cost, it is useful to minimize the dynamic range. Based
on these considerations, here we present solutions for $\rm{HEP_{0}}=36$,
which gives $r_{0}=13.0\,R_{\rm IC}$ for $\Gamma=0.1$. Our radial domain is
then assigned as $r_{\rm in}=r_{0}$ with a dynamic range $r_{\rm out}/r_{\rm
in}=22$, giving $r_{\rm out}=1.55\,R_{\rm u}$.
This fiducial setup was arrived at after analyzing more than a dozen
simulations with different domain sizes and governing dimensionless parameters
(${\rm HEP}_{0}$, $\Xi_{0}$, and $\Gamma$). The processes at play are robust
and hence captured by a single simulation. Our numerical methods are detailed
in Appendix A; in summary, we solve the equations of non-adiabatic gas
dynamics using the Athena++ code. Our simulations are done in spherical
coordinates assuming axisymmetry, and we employ static mesh refinement (SMR).
We present results at two resolutions: our mid-res (hi-res) run has three
(four) levels of SMR. We emphasize that these solutions are obtained without
adding any perturbations to the flow.
To better enable a direct comparison with prior studies, in our figures we
quote times in units of the Keplerian orbital period at $R_{\rm IC}$ ($2\pi
R_{\rm IC}/\sqrt{GM_{\rm bh}/R_{\rm IC}}$):
$t_{\rm kep}(R_{\rm IC})=3.53\times
10^{2}\,\frac{\mu^{3/2}M_{7}}{(T_{C}/10^{8}\,{\rm K})}\,{\rm yrs}.$ (12)
With $\mu=0.6$, $t_{\rm kep}(R_{\rm IC})=844\,{\rm yrs}$ for $M_{\rm
bh}=5.2\times 10^{7}\,M_{\odot}$, while $t_{\rm kep}(R_{\rm IC})=16.2\,{\rm
yrs}$ for $M_{\rm bh}=10^{6}\,M_{\odot}$ (the black hole mass assumed in D20).
We continue to express distances in units of $R_{\rm IC}$, while number
densities are computed in cgs units for $M_{\rm bh}=5.2\times
10^{7}\,M_{\odot}$.
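Equation (12) scales linearly with black hole mass at fixed $\mu$ and $T_{C}$, so the two quoted periods can be cross-checked directly, and the coefficient can be inverted for the implied Compton temperature:

```python
# Eq. (12): t_kep(R_IC) = 3.53e2 * mu^{3/2} * M_7 / (T_C / 1e8 K) yrs,
# so at fixed mu and T_C the period scales linearly with M_bh.
t_kep_ngc5548 = 844.0    # yrs, quoted for M_bh = 5.2e7 M_sun (M_7 = 5.2)
M7_ngc5548 = 5.2
M7_d20 = 0.1             # M_bh = 1e6 M_sun, the mass assumed in D20

t_kep_d20 = t_kep_ngc5548 * (M7_d20 / M7_ngc5548)
print(f"{t_kep_d20:.1f} yrs")         # ~16.2 yrs, as quoted

# Inverting Eq. (12) with mu = 0.6 gives the implied Compton temperature:
implied_TC = 3.53e2 * 0.6**1.5 * M7_ngc5548 / t_kep_ngc5548 * 1.0e8
print(f"T_C ~ {implied_TC:.2e} K")    # ~1e8 K
```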
Because the dynamical time scales as $(r/R_{\rm IC})^{3/2}$, we see from Fig.
6 that a characteristic time span for the flow to visibly evolve is $\Delta
t\approx 50$, consistent with the inner boundary being at $r_{0}=13\,R_{\rm
IC}$ (i.e. $13^{3/2}\approx 47$). The other relevant timescale is the cooling
time evaluated for temperatures near $T_{\rm c,max}$, the lowest temperature
where gas can become thermally unstable. As will be seen below, this typically
occurs for gas near the atmosphere/wind interface where number densities are
$\sim 10^{3}\,{\rm cm^{-3}}$. Re-evaluating Eq. (6) gives
$t_{\rm cool}=1.42\times 10^{2}\frac{T/T_{\rm c,max}}{n_{3}\,C_{23}}\>{\rm
yrs}.$ (13)
This is comparable to the timescale for TI to operate within a single
computational zone.
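The numbers in this paragraph follow from simple arithmetic; here we set the normalized density and cooling factors $n_{3}=C_{23}=1$, an illustrative choice:

```python
# The dynamical time scales as (r/R_IC)^(3/2), so at the inner
# boundary r_0 = 13 R_IC the evolution timescale in code units is:
dt_inner = 13.0**1.5
print(f"{dt_inner:.0f}")   # ~47, consistent with dt ~ 50 seen in Fig. 6

# Eq. (13) at T = T_c,max with n_3 = C_23 = 1 (assumed for illustration):
t_cool = 1.42e2 * 1.0 / (1.0 * 1.0)    # yrs
t_kep = 844.0                          # yrs at R_IC for M_bh = 5.2e7 M_sun
print(f"t_cool = {t_cool:.0f} yrs = {t_cool / t_kep:.2f} t_kep(R_IC)")
```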
A comparison of Fig. 5 and Fig. 6 reveals that reducing $\rm{HEP_{0}}$ from
225 to 36 mostly preserves the dynamics in the disk wind. Namely,
$\rm{HEP_{0}}=36$ solutions are nearly equivalent to $\rm{HEP_{0}}=225$
solutions at locations in the wind beyond where $\rm{HEP}<36$ along the
midplane. The location of vortices in the atmosphere, meanwhile, is somewhat
sensitive to $\rm{HEP_{0}}$ because the dynamics of gas in the cold phase
depends more closely (due to the higher densities) on the value of $t_{\rm
cool}$.
#### 3.2.1 The formation of IAFs
Referring to Fig. 6, unsteady behavior first appears after
$10^{3}\,t_{\rm kep}(R_{\rm IC})$. Shown are six stages capturing how the
first IAF in this run forms. The red contour here is the surface where $Be=0$,
which coincides with the surface where $\Xi=\Xi_{\rm c,max}$. This implies
that streamlines enter the TI zone with $Be<0$, and so TI cannot easily
amplify entropy modes (see §3.1.2). However, there are several vortices in the
atmosphere, and the flow along streamlines traversing a smaller vortex at
$(x,z)\approx(75,25)\,R_{\rm IC}$ in the first panel gets compressed enough to
reach the temperature of the TI zone. Because it is circulating, the flow
passes through the TI zone multiple times, giving more time for TI to operate.
It is apparent from the second panel that a ‘hot spot’ is forming due to the
$Be=0$ contour appearing within the atmosphere (as a red ‘dot’). This spot
becomes a hot bubble by the third panel. As the bubble expands further in the
next 3 frames, a thin layer of the atmosphere peels off, hence our referring
to this as an irradiated atmospheric fragment.
To show that the dynamics just described is attributed to TI, in Fig. 7, we
analyze the flow behavior on the $(T,\Xi)$-plane as in Fig. 2. In the top
panels, three streamlines are plotted on zoomed-in versions of the second and third frames shown in Fig. 6. All three pass through the hot spot, which is
located near the bottom of the vortex where the streamlines bunch together,
indicative of compressional heating. The bottom left panel of Fig. 7 shows
that these streamlines take several passes through the lower TI zone. The
bottom right panel of Fig. 7 shows that the transition from hot spot to hot
bubble coincides with the flow also passing through the upper TI zone. The
right-most tracks on the $(T,\Xi)$-plane that avoid the upper TI zone
correspond to the streamlines exiting the vortex and entering the disk wind.
A second IAF formation channel also occurs in Fig. 6: rather than an embedded hot spot forming, the final two panels show hot gas from the disk
wind carving its way into the atmosphere. This channel is the first to occur
in the hi-res run, as shown in Fig. 8. Notice the IAF begins forming at an
earlier time, which is not unexpected because higher resolution can permit
faster growing entropy modes. The depression at $r\approx 50\,R_{\rm IC}$
starts out as just a ripple on the surface of the atmosphere. The ripples, in
turn, are due to the small level of amplification that entropy modes undergo
when gas with $Be<0$ passes through the TI zone upon reaching this interface.
Consistent with our explanation in §3.1.2, the process that deepens the
depression is not the direct growth of entropy modes along streamlines passing
through the TI zone within the ripples, as there are ripples elsewhere that do
not form depressions. Rather, it is again dynamics associated with the vortex:
instead of an embedded spot forming on a path through the bottom side of a
vortex, pre-existing hot gas flows through the upper entry into the vortex.
Thus, it is really the same physical mechanism at play — flow through a vortex
facilitating the growth of TI modes — and the bottom panels of Fig. 8 show
that both of these IAF formation channels will in general occur
simultaneously.
#### 3.2.2 The evolution of IAFs
Notice from both Fig. 6 and Fig. 8 that once an IAF is exposed to the disk
wind, it begins to disintegrate, sending multiphase gas into the wind. The
classic turbulent flow patterns associated with vortex shedding accompany this
process in the case of the second formation channel, when a small filament
appears at the top of the more distant surface of the depression in the
atmosphere. This budding IAF acts as an obstruction to the wind, resulting in
what clearly looks to be a Kármán vortex street in the downstream flow. This
is visible in the third, fourth, and fifth panels of Fig. 8.
The disintegration of the filaments/crests of IAFs combined with the
downstream mixing and gradual heating of the shed multiphase gas within
their wakes will together be referred to as ‘evaporation’. We use this term to
cast equal emphasis on the wake dynamics, which is what truly distinguishes
smooth and clumpy evolution in terms of global wind diagnostics (see §3.3).
Note that since we ignore the effects of thermal conduction, classical
evaporation (McKee & Begelman, 1990) does not take place in these simulations.
The ‘evaporative’ wakes that form downstream of IAFs are reminiscent of both
the cometary-structured clumps that result from the photoevaporation of
neutral gas clouds illuminated by massive stars (e.g., Nakatani & Yoshida,
2019) and of the cloud destruction dynamics of wind-cloud interactions (e.g.,
Banda-Barragán et al., 2019).
Fig. 9 depicts the evolution of the flow on longer timescales, starting at
$t=1200$, the final snapshot from Fig. 6. The IAF formation dynamics described
above happens episodically, and the small scale IAFs shown up close in Fig. 6
gradually disrupt the flow at larger radii. As seen in the $t=1600$ panel, an
IAF can take on a tsunami-like appearance just within $R_{\rm u}$. As shown in
the final three panels, this IAF becomes increasingly filamentary and
separates from the atmosphere, resulting in a smaller scale clump becoming
entrained in the wind. This clump evaporates on a timescale of about $\Delta
t=100$, corresponding to a few thousand years for $M_{\rm
bh}=10^{6}\,M_{\odot}$ and $\sim 10^{5}\,{\rm yrs}$ for $M_{\rm bh}=5.2\times
10^{7}\,M_{\odot}$ (see Eq. (12)).
This ‘early’ evolution, while useful for examining the
formation/disintegration dynamics of IAFs, can be considered ‘transient’
behavior that eventually leads to a fully developed clumpy disk wind solution
starting from smooth initial conditions. As shown below, the fully developed
solution features long lived hot bubbles evolving within the disk atmosphere
at the base of even larger scale, tsunami-like IAFs. In Fig. 9, these bubbles
are noticeable in the final five snapshots (see the brown gas to the lower
left of the IAFs between $100\,R_{\rm IC}$ and $200\,R_{\rm IC}$).
Figure 10: Comparison of the temperature structure and TI dynamics of smooth
and clumpy wind stages. Top panels: Colormaps of the temperature at $10^{3}$
orbital periods at $R_{\rm IC}$ when the solution is still smooth (left) and
after evolving the solution for 1 orbital period at $r_{\rm out}$ when we end
the simulation (right). For each snapshot, we plot four streamlines anchored
at various heights at $x=150\,R_{\rm IC}$. Bottom panels: Phase diagrams
showing the tracks for the streamlines in the panels above. In both cases, the
red streamline only occupies the cold branch of the S-curve. In the clumpy
wind snapshot, the green streamline passes through the inner hot bubble, while
the orange streamline passes through both the inner and outer bubbles, visible
as two separate tracks through the upper TI zone. Both bubbles undergo runaway
heating associated with TI.
Figure 11: Comparison of the ionization structure
and TI dynamics of smooth and clumpy wind stages along two different
sightlines for the hi-res run. Left panels: colormaps of the ionization
parameter $\xi$ for an early smooth phase (top panel) and at two later stages
when IAFs become fully developed. Velocity arrows and contours are again
overlaid. The radius $R_{\rm u}$ is shown in white. Two sightlines at
$54^{\degree}$ and $63^{\degree}$ from the polar axis are drawn; the red
portions of these lines trace the radial range $r>R_{\rm u}$ used in
subsequent figures. Right panels: Corresponding phase diagrams of gas along
the $54^{\degree}$ (left plot) and $63^{\degree}$ sightline (right plot). The
solid gray line is the S-curve. The shaded gray area shows thermally unstable
regions according to Balbus’ criterion for TI; the boundary of this region
defines the Balbus contour shown in previous phase diagrams. The upper right
panel shows that sightlines passing through both the disk atmosphere and the
base of the disk wind span a large range of ionization states and only skim
the instability region. The smooth flow results in a continuous track on the
phase diagram but would still be considered ‘multiphase’ if observed along the
$63^{\degree}$ sightline. The subsequent clumpy flow regime shows an even
larger range of ionization states. Depending on the sightline and evolutionary
stage, passing through an IAF can entail traversing brownish, yellowish,
greenish, and bluish gas both upon entry and exit (corresponding to multiple
passes through the instability region).
Figure 12: Comparison of wind
properties along $\theta$ at $r_{\rm out}$ for smooth and clumpy stages of
evolution. These two states correspond to the $t=588$ and $t=2235$ snapshots
shown in Fig. 11. The solid and dashed lines in the second panel correspond to
the total velocity $v$ and its radial component $v_{r}$, showing that the
velocity field of the disk wind region is almost purely radial. The next two
panels show the mass loss rate and kinetic power. Their integrated values are
compared in Table 1. Consistent with the table, we exclude the region
$85^{\degree}<\theta<90^{\degree}$ to allow for an underlying accretion disk.
The dotted vertical line divides the remaining domain equally.
### 3.3 Smooth versus clumpy solutions
We now quantify the difference between smooth and clumpy solutions by
comparing our runs at late times with an earlier phase when the flow resembles
a steady state solution. In Fig. 10, we present a streamline analysis similar
to that of Fig. 7 but now focusing on the flow at large radii. The red
streamline in both temperature maps traverses the atmosphere, and as seen on
the accompanying phase diagrams, traces only points on the cold stable branch.
For the smooth flow, neither the orange nor blue streamline enters a TI zone.
Only the green streamline that enters the wind through the atmosphere
traverses the lower TI zone, and it does so with $\Xi$ increasing. This is
indicative of the flow still being marginally stable to TI at this stage
despite this region being beyond $R_{\rm u}$.
For the clumpy flow, the question arises: does TI operate only at the
formation sites of IAFs or also within the IAFs as they are advected
downstream? In Fig. 10, the green streamline in the late time snapshot shows
that TI is indeed operational in the distant flow. This streamline passes
through a hot bubble and is seen to occupy both the upper and lower TI zones.
Similarly, there are two separate tracks of the orange streamline in the upper
TI zone, one reaching a lower temperature than the other. This is due to TI
also taking place in the less hot bubble at $r\approx 250\,R_{\rm IC}$.
Finally, the blue streamline passing through the crest of a fully developed
IAF actually starts off in the upper TI zone, likely due to its footpoint
being in hot gas from the nearby bubble. It passes isobarically to the cold
phase as the IAF is traversed, and then pressure decreases downstream from
there as it enters the yellow gas of the lower disk wind. Thus, while Fig. 6
shows that the dynamics of hot bubbles and IAFs are coupled, here we see that
the bubbles themselves stay heated due to their being thermally unstable. This
continued heating of the hot gas further expands the bubble, forcing more of
the cool gas enveloping it to enter the TI zone.
One can ask a related but more observationally relevant question: does a line
of sight analysis reveal the same behavior, namely a paucity of gas at $\xi$
ranges corresponding to TI zones in smooth flows compared to clumpy ones? To
answer this question, we switch to analyzing the high-resolution run because
it yields the most accurate calculation of the absorption measure distribution
(AMD). Fig. 11 shows maps of $\xi$, again for a smooth stage of evolution and
also at two later, clumpy stages that capture how the crest of an IAF can
break off, leaving an isolated, outflowing clump. The upper sightline in the
bottom panel passes through this clump, whereas the same sightline at $t=2141$
passes just above the crest. Thus, there is a much broader range of $\xi$ at
$t=2235$ compared to $t=2141$, as is also apparent from the corresponding
phase diagrams, which are now plotted using the $(T,\xi)$-plane. There are
again multiple passes through both TI zones for sightlines that intersect cold
phase gas, implying higher column densities of gas in the temperature range of
the lower TI zone, $T\approx 2-5\times 10^{6}\,{\rm K}$. For sightlines
through the smooth solution, however, gas only occupies this temperature range
if the upper atmosphere is skimmed. This contrast in what a distant observer
will see indicates that smooth and clumpy wind models are very different, and
both happen to defy the standard narrative that accompanies discussions of the
AMD (see §3.4).
Quantity | Smooth $<42.5^{\degree}$ | Smooth $>42.5^{\degree}$ | Clumpy $<42.5^{\degree}$ | Clumpy $>42.5^{\degree}$
---|---|---|---|---
$\langle v\rangle\>{\rm[km\,s^{-1}]}$ | 857 | 123 | 835 | 105
$4\pi\,\dot{m}/\dot{M}_{\odot}$ | 0.13 | 0.57 | 0.13 | 0.62
$4\pi\,\dot{K}/L\>[10^{-4}]$ | 1.04 | 0.11 | 1.04 | 0.07
Table 1: Global wind diagnostics of smooth versus clumpy wind stages
corresponding to the $t=588$ and $t=2235$ snapshots in Fig. 11. The top
entries are the mean outflow speeds at $r_{\rm out}$, $\langle v\rangle=\int
v_{r}(r_{\rm out})\sin\theta\,d\theta/\int\sin\theta\,d\theta$. The next
entries are the integrated profiles shown in the lower panels of Fig. 12:
$4\pi\,\dot{m}/\dot{M}_{\odot}$ is the mass loss rate in units of
$M_{\odot}\,{\rm yr^{-1}}$ and $4\pi\,\dot{K}/L$ is the kinetic luminosity in
units of $L=6.32\times 10^{44}\,{\rm erg\,s^{-1}}$ (the luminosity for $M_{\rm
bh}=5\times 10^{7}\,M_{\odot}$ assuming $\eta=0.1$). The $>42.5^{\degree}$
values exclude a $5^{\degree}$ region above the midplane to allow for an
underlying disk. The entries for $\dot{m}$ and $\dot{K}$ account for midplane
symmetry, so their sum gives $\dot{M}$ and $P_{\rm K}$, the total values over
a $4\pi$ solid angle.
On the other hand, in Fig. 12 we present a comparison of quantities along the
outer boundary, which shows that smooth and clumpy wind stages hardly differ
outside of the multiphase wind region. (The particular flow states compared
here are top and bottom snapshots in Fig. 11.) Within this region, the spikes
in density in the top panel result in a somewhat larger mass loss rate, but
this enhancement in the mass flux is outweighed by the drop in velocity (due
to drag forces), yielding an overall reduction in kinetic luminosity. In Table
1, we list precise values for these global wind properties, which are computed
as $\dot{M}=4\pi\int_{0}^{\pi/2}\dot{m}\,\sin\theta\,d\theta$ and
$P_{K}=4\pi\int_{0}^{\pi/2}\dot{K}\,\sin\theta\,d\theta$, where $\dot{m}=\rho
v_{r}r_{\rm out}^{2}$ and $\dot{K}=(1/2)\rho v^{2}v_{r}r_{\rm out}^{2}$. These
calculations require $v_{r}$ in addition to $\rho$ and
$v=\sqrt{v_{r}^{2}+v_{\theta}^{2}+v_{\phi}^{2}}$, which is shown by the dashed
curve in the second panel from the top of Fig. 12. Notice that the profiles of
$v_{r}$ and $v$ overlap above the disk atmosphere, indicating that the disk
wind regions are radially outflowing. Within the atmosphere, $v_{\phi}$ is
dominant, explaining the higher velocity for $\theta>65^{\degree}$. In Table
1, for both smooth and clumpy states, we compute values to the left and right
of the vertical line in Fig. 12, which approximately separates the subsonic,
multiphase flow from the highly ionized, supersonic flow above.
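As a sanity check of the bookkeeping, the angular integrations defining $\dot{M}$ and $P_{K}$ can be sketched numerically. The snippet below is illustrative only; the grid and variable names are hypothetical and not taken from the simulation code.

```python
import numpy as np

def _trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def global_wind_diagnostics(theta, rho, v_r, v, r_out):
    """Integrate mdot = rho*v_r*r_out^2 and Kdot = (1/2)*rho*v^2*v_r*r_out^2
    over polar angle: Mdot = 4*pi * int_0^{pi/2} mdot sin(theta) dtheta,
    and similarly for P_K. theta is in radians; units are left to the caller."""
    mdot = rho * v_r * r_out**2
    kdot = 0.5 * rho * v**2 * v_r * r_out**2
    Mdot = 4.0 * np.pi * _trapz(mdot * np.sin(theta), theta)
    P_K = 4.0 * np.pi * _trapz(kdot * np.sin(theta), theta)
    return Mdot, P_K
```

For a uniform test flow with $\rho=v_r=v=r_{\rm out}=1$ this returns $4\pi$ and $2\pi$, since $\int_0^{\pi/2}\sin\theta\,d\theta=1$.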
That the solutions are so similar in terms of bulk wind properties is merely a
reflection of the underlying dynamics serving mainly to restructure parts of
the flow without affecting the efficiency of thermal driving. Specifically,
buoyancy effects lead to a rearrangement of matter along the atmosphere/wind
interface, resulting in enhanced vertical rather than radial motion within the
domain. This restructuring does not impact the fast, smooth flow at high
latitudes, meaning it remains nearly undetectable in absorption (since it
consists of almost fully ionized plasma). The clumpiness in the multiphase
wind region, a result of the evaporation dynamics within the wakes of the
IAFs, leads to a broader range of ionization parameters — all characteristic
of warm absorbers — implying that these new solutions may be important for the
interpretation of AGN spectra.
### 3.4 Absorption diagnostics
Ionized AGN outflows can show absorption from ions that are widely separated
in ionization energy. From the distribution of ionic column densities, it is
possible to reconstruct the AMD (Holczer et al., 2007), which is the
distribution of hydrogen column density $N_{\rm H}$ as a function of the
density ionization parameter, $\xi$. Conversely, from a model of the gas
distribution along the line of sight, the AMD can be computed as the slope of
the column density $N_{\rm H}$ when plotted against $\log(\xi)$,
${\rm AMD}\equiv\frac{dN_{\rm H}}{d\log(\xi)}.$ (14)
Because $N_{\rm H}$ is a monotonically increasing function by definition, the AMD
is a positive quantity that should be overall large in high density regions
characterized by smaller ionization parameters, such as inside an AGN cloud.
The generic shape of the AMD across regions with sharp density gradients, such
as the surface of clouds (IAFs in these models), is unclear. We therefore
present an in-depth analysis of the AMD for the two sightlines discussed
already in §3.3. It is hoped that our discussion here provides a useful
perspective into the ongoing debate over what shapes the AMD — is it
individual clouds, a stratified outflow, or as suggested by Behar (2009) and
as we find here, individual clouds (IAFs) evolving within a stratified
outflow?
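In discretized form, Eq. (14) amounts to binning the column contributions along a ray in $\log\xi$. A minimal sketch, assuming tabulated samples of radius, hydrogen number density, and $\xi$ along the sightline (all names here are hypothetical):

```python
import numpy as np

def amd_from_sightline(r, n_H, xi, dlogxi=0.1):
    """AMD = dN_H/dlog(xi): bin per-segment columns dN = n_H * dr
    by each segment's midpoint log10(xi). Returns bin centers and AMD."""
    dN = n_H[:-1] * np.diff(r)                      # column per segment
    logxi = 0.5 * (np.log10(xi[:-1]) + np.log10(xi[1:]))
    nbins = max(1, int(np.ceil((logxi.max() - logxi.min()) / dlogxi)))
    edges = np.linspace(logxi.min(), logxi.min() + nbins * dlogxi, nbins + 1)
    N_bin, _ = np.histogram(logxi, bins=edges, weights=dN)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, N_bin / dlogxi
```

By construction, summing the AMD times the bin width recovers the total hydrogen column along the ray, which provides a useful consistency check.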
To further assess the extent to which smooth and clumpy disk wind solutions
differ in terms of absorption signatures, we also present calculations of
X-ray absorption line profiles for selected ions spanning a similar range of
ionization parameters as the AMD. We evaluate the formal solution to the
radiative transfer equation for an absorption line against a background
intensity, $I_{0}$, that is uniform in space and frequency,
$\displaystyle I_{\nu}=I_{0}\,e^{-\tau_{\nu}(\theta_{0})},$ (15)
where
$\tau_{\nu}(\theta_{0})=\int\kappa_{\nu}(r,\theta_{0})\,\rho(r,\theta_{0})\,dr$
is the optical depth at $r_{\rm out}$ for a fixed inclination $\theta_{0}$
(the viewing angle for a distant observer). These calculations therefore
neglect contributions due to the wind’s intrinsic emission along the line of
sight, as well as scattering into the line of sight. Following the methods of
Waters et al. (2017), we evaluate
$\kappa_{\nu}(r)=\kappa_{0}(r)\phi(\nu)/\phi(\nu_{D})$ by post-processing our
simulation results to Doppler broaden the lines according to the radial
velocity and temperature profiles; $\phi(\nu)$ is the (radially dependent)
Gaussian profile with a width $\Delta\nu_{0}=\nu_{0}\,v_{\rm th}/c$ set by the
thermal velocity $v_{\rm th}(r)=\sqrt{kT(r)/m_{\rm ion}}$ and with the line
center frequency Doppler-shifted to $\nu_{D}(r)=\nu_{0}[1+v_{r}(r)/c]$. We use
lookup tables generated from XSTAR output to obtain the line-center opacity
$\kappa_{0}(r)=\kappa_{0}[\xi(r),T(r)]$ for each ion. These opacity tables are
generated from the same grid of photoionization calculations used to tabulate
our net cooling function $\mathcal{L}$ that defines our S-curve. See Ganguly
et al. (2021) for a more thorough description of these calculations, as well
as for a detailed analysis of various factors affecting line formation.
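The post-processing described above reduces, for each frequency, to an integral of the Doppler-shifted Gaussian opacity along the ray. A schematic implementation under assumed inputs (variable names and units are illustrative; the actual calculation draws $\kappa_{0}$ from XSTAR-based lookup tables):

```python
import numpy as np

C_KMS = 2.998e5  # speed of light [km/s]

def absorption_profile(x_grid, r, kappa0, rho, v_r_kms, vth_kms):
    """I_nu/I_0 = exp(-tau_nu) for a pure-absorption line (Eq. 15).
    x_grid : frequencies in units of the rest frequency nu_0;
    r, kappa0, rho, v_r_kms, vth_kms : profiles along the sightline.
    kappa_nu = kappa0 * phi(nu)/phi(nu_D), with a Gaussian phi whose
    center is Doppler-shifted to nu_D = nu_0 (1 + v_r/c)."""
    nu_D = 1.0 + v_r_kms / C_KMS     # shifted line center (units of nu_0)
    dnu = vth_kms / C_KMS            # Doppler width (units of nu_0)
    I = np.empty_like(x_grid)
    for i, x in enumerate(x_grid):
        k = kappa0 * rho * np.exp(-((x - nu_D) / dnu) ** 2)
        tau = np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(r))  # trapezoid rule
        I[i] = np.exp(-tau)
    return I
```

With zero opacity the profile is flat at $I_{0}$, while a static slab with $\kappa_{0}\rho=1$ over unit path length gives a line-center depth of $e^{-1}$, as expected from the formal solution.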
#### 3.4.1 Absorption measure distribution (AMD)
We analyze the AMD for the particular states already examined in a dynamical
context in §3.3. The middle panels in the left column of Fig. 13 show the AMD
computed along the two sightlines plotted in Fig. 11. To aid our analysis, in
the right middle panels we also show the AMD over just the outer portion of
the sightline extending from $r>R_{\rm u}$ (the red portion in Fig. 11).
Additionally, in the upper panels we plot the cumulative column density to
show that the shape of the AMD is consistent with the slope of these curves.
Finally, in the lower panels we plot the column density weighted radial
velocity
$\overline{v_{r}}\equiv\int_{\Delta}N\,v_{r}\,d\xi/\int_{\Delta}N\,d\xi$,
where $\Delta=0.1\,{\rm dex}$ denotes the bin size of $\log(\xi)$ over which
the integrals were evaluated; $\overline{v_{r}}$ is indicative of the expected
blueshifts of absorption lines.
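The binned average $\overline{v_r}$ can be computed with the same segment decomposition used for the AMD; a sketch with hypothetical inputs:

```python
import numpy as np

def column_weighted_velocity(logxi, dN, v_r, edges):
    """Column-density-weighted v_r per log10(xi) bin:
    vbar = sum(dN * v_r) / sum(dN) within each bin (NaN where empty)."""
    N_bin, _ = np.histogram(logxi, bins=edges, weights=dN)
    Nv_bin, _ = np.histogram(logxi, bins=edges, weights=dN * v_r)
    return np.where(N_bin > 0, Nv_bin / np.maximum(N_bin, 1e-300), np.nan)
```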
Focusing first on the smooth state (upper panel in Fig. 11), notice that
starting from the outer boundary, the two sightlines trace gas with increasing
ionization as $r$ decreases. These radial profiles are therefore in direct
correspondence with the AMD. For the $63^{\degree}$ sightline, the cold phase
atmosphere gas (blue color) is all at the same low ionization and there is a
large column of it compared to the less dense gas beyond (green through brown
colors), explaining the dip of more than two orders of magnitude in the AMD.
AMD of the $54^{\degree}$ sightline spans a much smaller range of $\xi$ and is
overall small (a few $10^{21}\,{\rm cm^{-2}}$) as only low density gas is
being probed. There is a dip upon exiting the $\xi$-range of the upper TI zone
(marked with vertical dashed lines) because the higher ionization gas beyond
has a smaller rise in column density, as shown in the top panels of Fig. 13.
Before discussing the AMD for the clumpy states, we assess how the analysis
just given fits in with the most common discussion of the AMD in the
literature (‘the standard narrative’ hereafter), in which reference is made to
the theory of local TI. Namely, dips in the AMD are interpreted as indicating
thermally unstable $\xi$-ranges, the outcome of local TI being to replace
these ranges by disparate regions of the S-curve that are connected
isobarically (technically, in the theory of local TI, gas will not typically
reach the upper stable branch of the S-curve due to the effects of thermal
conduction; see Proga & Waters, 2015). Below we will discuss in more detail
local versus ‘dynamical’ TI, but here we emphasize that the dip in the AMD is
not due to dynamical TI because the flow is still smooth at this stage. While
the pronounced dip in the AMD for the $63^{\degree}$ sightline is merely a
column density effect, it is not coincidental that it occurs near $\xi_{\rm
c,max}$, the entry to the lower TI zone. The flow structure along this
sightline actually epitomizes the intrinsic connection between thermal driving
and TI discussed in §2. To recap, the presence of a TI zone is responsible for
the sharp transition from a bound atmosphere to a thermally driven wind at
$\xi=\xi_{\rm c,max}$. Gas with $T>T_{\rm c,max}$ is much more tenuous in a
steady equilibrium state. Hence, for sightlines through the atmosphere, the
AMD will naturally exhibit a dramatic falloff once $\xi_{\rm c,max}$ is
reached. Compared to the standard narrative, therefore, pronounced dips in the
AMD here are caused by the full structure (see §2) of thermally driven wind
solutions being probed.
The most basic result in Fig. 13 is that the AMDs for clumpy states span a
larger range of $\xi$. The AMD always extends to lower $\xi$ for clumpy
compared to smooth states because IAFs undergo compression. It can extend to
larger $\xi$ when gas within bubbles is probed. We showed bubbles to be the
primary sites where TI takes place; hence they are subjected to continuous
runaway heating that increases $\xi$.
Figure 13: AMD analysis for the two sightlines shown in Fig. 11. The top,
middle, and bottom subpanels show the column density, the AMD, and the column
density weighted line of sight velocity (as defined in the text). Calculations
for the full sightline are shown in the left panels, while the right panels
show a calculation only along the red portion of the sightline shown in Fig.
11, covering the range $r>R_{\rm u}$. Black, blue, and orange curves
correspond to the three snapshots shown in Fig. 11. The $\xi$-ranges of the
upper and lower TI zones are marked by pairs of solid and dashed vertical
lines in the AMD plots.
Notice that beyond $\xi_{\rm c,max}$, the AMDs for both clumpy states feature
dips or remain relatively flat between the pairs of vertical lines marking the
‘unstable’ $\xi$-ranges that correspond to the upper and lower TI zones. Dips
can also occur beyond the upper TI zone. For smooth states, the AMD rises
slightly within the upper TI zone.
These features are inconsistent with the standard narrative. Rather than
featuring a dip within the unstable $\xi$-ranges, the AMD shape for smooth,
thermally driven wind solutions is simpler still: gas continues to occupy the
‘unstable’ $\xi$-range of the S-curve because this same range is actually
stable just slightly off the S-curve. As shown in the top panels of Fig. 11,
gas does not actually pass through the upper TI zone; hence for $t=588$, the
flow is formally stable in the range between the vertical dashed lines in Fig.
13, and it lies below the equilibrium curve in a region of net radiative
heating. A steady state dynamical equilibrium is maintained by this heating
being balanced by adiabatic cooling due to expansion. For the $63^{\degree}$
sightline, gas also passes through the lower TI zone, but it has not yet
become unstable due to the predominance of the stabilizing effects discussed
in §3.1.2. In clumpy states, dips at large $\xi$ beyond the upper TI zone are
simply due to the overheated gas within bubbles having low column densities.
To understand the AMD of the clumpy flow states in detail, it is helpful to
contrast the regimes of local TI and dynamical TI. The depletion of gas within
TI zones — the principal outcome of local TI — still takes place within global
outflow solutions once they cease to be smooth. In local TI, there is a finite
supply of the initial unstable gas reservoir, while in global flows, the TI
zone can be continually replenished with new gas. In papers that reference the
theory of local TI when interpreting the AMD (e.g., Behar, 2009; Detmers et
al., 2011; Adhikari et al., 2019), the implicit assumption is therefore being
made that thermal timescales are much shorter than dynamical ones to allow
depletion to dominate replenishment. This would entail, for example, that a
hot bubble in pressure equilibrium with the cold atmosphere forms rapidly
(compared to orbital timescales) at the onset of TI and then no longer evolves
thermally, thereby keeping the TI zone devoid of gas. Such a scenario is
inconsistent with the force equation, however, because hot bubbles tend to
buoyantly rise. The dynamics accompanying the bubbles appearing in our
simulations indeed involves the effects of buoyancy, as this is what leads to
the formation of IAFs (see §3.2.1). Moreover, our simulations show that
bubbles continue to expand but on timescales slow compared to orbital times,
and we suspect that this is only possible when the rates for gas entering and
exiting TI zones are closely balanced.
Based on this qualitative understanding of dynamical TI, we would have been
surprised to find an association between dips in the AMD and the locations of
TI zones. Rather, the expected signature of dynamical TI is a dip occurring
outside of TI zones at even larger $\xi$. This dip probes the small column of
gas within a hot bubble. Referring to the lower ($63^{\degree}$) sightline in
the bottom panel of Fig. 11, the corresponding partial AMD (orange curve in
bottom right AMD panel of Fig. 13) shows prominent dips at $\xi\approx
1.8\times 10^{3}$ and $\xi\approx 3.5\times 10^{3}$. The lower ionization dip
is from the entrained bubble at $r\approx 200\,R_{\rm IC}$, while the higher
ionization one is the darker brown gas between $R_{\rm u}$ and the first IAF
encountered along this portion of the sightline. This latter gas is seen to
originate from another bubble located just below the sightline that is also
subjected to TI. However, both of these dips become mostly filled when we
include the full sightline because the gas at small radii covers the same
ionization parameter range and is also denser. This analysis therefore casts
serious doubt on the whole notion that the AMD can be used to draw any
conclusions about TI. In principle it can, but there is degeneracy with
‘normally’ heated gas now that we have shown that TI should not lead to dips
within the $\xi$-ranges of TI zones.
The AMD of the same sightline but for the $t=2141$ snapshot (blue
$63^{\degree}$ line) does show a wide dip in the TI zone for the partial AMD
calculation. This dip also gets filled in upon including the full sightline,
but what is interesting is that the red portion of the sightline traces only
region of yellowish/brownish gas — that of the evaporating flow of the IAF
just beyond $r=R_{\rm u}$. This is the most highly ionized gas probed and is
seen to be at the $\xi$ marking the location of the dip. Furthermore, the
evaporative layer of the other much thicker IAF along this sightline contains
the less ionized (light green colored) gas with $\xi\approx 800$ that is
within the same TI zone. The primary dip in the AMD near $\xi_{c,max}$ is
unambiguously the dark green evaporation layer. Thus, we have arrived at the
expected AMD shape of a single IAF: an overall peak for $\xi<\xi_{c,max}$, a
large dip near $\xi=\xi_{c,max}$ (that corresponds to ‘exiting the atmosphere’
as in the AMD for the smooth phase) followed by secondary dips at larger $\xi$
that often coincide with TI zones but are attributable to a sharp falloff in
column density in the IAF’s wake rather than to TI.
The orange AMD curves in the top panels of Fig. 13 provide another example of
this AMD shape. This is the upper sightline in the bottom panel of Fig. 11
that passes through an isolated clump. Such clumps arise when the crest of an
IAF becomes detached after being ablated by the wind. This is clearly the
lowest ionization gas along this sightline, showing that the primary dip in the
AMD beginning near $\xi_{c,max}$ is the signature of this clump. There would
be a secondary dip in the upper TI zone but it is filled in by the larger
column of brownish gas at smaller radii. Notice that this evaporative
structure is reminiscent of the cometary tails discussed in the context of
occultation events (e.g., Bianchi et al., 2012). The evaporating wakes there
are depicted as moving transverse to the line of sight rather than along it,
but the ionization structure they inferred is nevertheless consistent with the
dynamics taking place in our simulations.
In summary, the AMD for clumpy multiphase disk wind models can be understood
as a combination of that expected for a smooth outflow with that for one or
more embedded IAFs (which are equivalent to discrete clouds when probed only
along the line of sight). Specifically, the AMD shape has several variants
depending on the sightline. For sightlines intersecting cold phase gas,
indicative of either passage through the atmosphere or an IAF, we find
* •
a nearly flat distribution for $\xi<\xi_{\rm c,max}$ corresponding to
compressed cold phase gas;
* •
a pronounced dip just above $\xi_{\rm c,max}$, accompanied by somewhat larger
blueshifts because IAFs are subjected to ram pressure by the disk wind.
For sightlines passing through hot bubbles, where TI is actively taking place,
* •
a dip should occur at $\xi$ larger than that of any TI zones (for $\log\xi>3$
by our calculations).
Finally, for sightlines passing above the dense IAFs but still probing their
evaporative wakes,
* •
the low-$\xi$ range of the AMD does not extend to $\xi_{\rm c,max}$ and
blueshifts are systematically higher.
The last point was not discussed above, but it is evident from the blue curve
in the upper panels of Fig. 11 and is relevant to the synthetic line profile
calculations that we now present.
Figure 14: Synthetic X-ray absorption line profiles calculated for the
$54^{\degree}$ (top panels) and $63^{\degree}$ (bottom panels) sightline for
each snapshot in Fig. 11. Shown are profiles for the ${\rm K}\alpha$ resonance
lines of Fexxvi and Fexxv followed by the doublets Siviii Ly$\beta$ and Oviii
Ly$\alpha$; the individual components of these blended doublets are shown as
blue and red colors. Bottom subpanels are scatter plots showing $\xi$ as a
function of the line of sight velocity (the x-axis is $v_{\rm los}=-v_{r}$);
red dots correspond to the red portions of the sightlines in Fig. 11. Hashed
areas mark the $\xi$-ranges of the upper/lower TI zones.
#### 3.4.2 Synthetic absorption line profiles
We frame our discussion of X-ray absorption line profiles around the four
bullet points above. These AMD properties correspond, respectively, to the
following expectations:
* •
Deeper absorption should occur for unsaturated lines in clumpy compared to
smooth states for ions probing the cold phase gas.
* •
There should be a spectral signature corresponding to the prominent dip near
$\xi_{\rm c,max}$: for ions with peak abundance at $\xi\gtrsim\xi_{\rm
c,max}$, a gradual transition from core to wing as the line desaturates; for
ions with peak abundance at $\xi<\xi_{\rm c,max}$, a sharp desaturation should
occur right at $\xi_{\rm c,max}$. For the $54^{\degree}$ sightline here, this
occurs at $v\approx 150-200\,{\rm km\,s^{-1}}$.
* •
Deeper absorption should occur at the lower velocities characterizing gas in
the atmosphere for ions directly probing gas in the hot bubbles (Fexxv
K$\alpha$ and Fexxvi K$\alpha$ for our S-curve). For ions probing the cold
phase gas, less absorption should accompany bubbles due to the reduction in
atmospheric column density caused by their presence.
* •
Multiple absorption troughs should mimic the shapes of absorption line
doublets for ions probing the evaporative wakes of the IAFs, due to the AMD
showing dips at different velocities.
In Fig. 14, line profile calculations are presented for ions with peak
abundance at a few $10^{5}~{}{\rm K}$ where $\xi\approx\xi_{\rm c,max}$
(Oviii), at higher temperatures ($\sim 10^{6}~{}{\rm K}$) characteristic of
the evaporative wakes of IAFs (Siviii), and in the hot bubbles or the highly
ionized gas at small radii (Fexxv, Fexxvi) where $T>10^{7}~{}{\rm K}$. The
third and fourth bulleted expectations are readily seen to be realized for
Fexxv K$\alpha$ and Fexxvi K$\alpha$. The abundance of Fexxv peaks in what is
essentially the brightest yellow gas in Fig. 11, while for Fexxvi the peak is
for the brown (higher ionization) gas, which has higher velocity. This
explains the overall line shapes for the $63^{\degree}$ sightline (bottom
panels of Fig. 14). Comparing the amounts of brown gas along the red portion
of this sightline in Fig. 11, the $t=2235$ snapshot should show deeper
absorption in Fexxvi K$\alpha$ at velocities corresponding to the bubbles
($v\approx 125\,{\rm km\,s^{-1}}$) compared to $t=588$. This is clearly the
case, and it occurs more prominently even for Fexxv K$\alpha$, although for
Fexxvi K$\alpha$ the effect serves to make it appear as though this singlet is
blended with another line of comparable opacity.
For the $54^{\degree}$ sightline, the shape of Fexxv K$\alpha$ at $t=2235$ is
such that it masquerades as a blended 2:1 doublet (where the blue component
has an oscillator strength twice that of the red component). Siviii Ly$\beta$
is an actual 2:1 doublet line with blended components. The effects of
clumpiness are mostly concealed for it because this line mainly traces the
green gas in Fig. 11, which is confined to the outer portion of the sightline
and hence rather localized in velocity space. However, comparing the shapes of
the Siviii Ly$\beta$ profiles for the $63^{\degree}$ sightline at $t=588$ and
$t=2235$, the effects of evaporation dynamics in IAFs can be inferred: a more
gradual transition from core to wing implies increased ion opacity across a
broader velocity range. This reflects the underlying IAF dynamics, where the
warmer gas in the wake is being accelerated to higher velocity.
This spectral signature for evaporation dynamics is even more prevalent in the
line profiles for Oviii Ly$\alpha$, which is another prominent 2:1 X-ray
doublet line. It is very optically thick in the cold phase gas (blue colors in
Fig. 11), while it is marginally optically thin to evaporating gas (green and
yellow colors). For the $54^{\degree}$ sightline at $t=588$ and $t=2141$, Fig.
14 therefore shows that this line is unsaturated because there is no cold
phase gas present along the line of sight. For $t=2235$ in the upper set of
plots, the line becomes saturated due to the dense clump in the wind (see Fig.
11). The wing is no longer Gaussian-like as in the profile at $t=588$. This
extended wing feature in clumpy compared to smooth snapshots is seen most
clearly for the profiles of the $63^{\degree}$ sightline. Note the difference
with the extended wings of the Fexxvi K$\alpha$ profiles. Fexxvi is not a
direct tracer of evaporation dynamics, as its ${\rm K}\alpha$ line forms in
the hottest gas far downstream in an IAF’s wake.
The Oviii Ly$\alpha$ line profiles for the $54^{\degree}$ sightline exhibit
the expected behavior given in the first bullet above. There is no cold gas
along this sightline for the smooth state and progressively more in subsequent
states. We note that the Siviii Ly$\beta$ profiles are also consistent with
our expectations despite showing opposite trends in moving from $t=588$ to
$t=2235$ when comparing both sightlines. Because this ion’s abundance peaks in
the evaporative wakes (green gas) rather than in the cold phase, it deepens in
sightlines passing above rather than through the atmosphere.
As for the second bullet, here we have only considered ions with peak
abundances near or above $\xi_{\rm c,max}$, so the core to wing transition is
gradual rather than sharp. In other words, a prominent dip in the AMD results
in a sharp loss of opacity only if the ion abundance is not significantly
increasing beyond the dip. For high ionization UV lines, such as the doublet C
iv $\lambda\lambda 1548,1550$, there is indeed a sharp core to wing transition
(see Ganguly et al., 2021, where synthetic line profiles for our 1D radial
wind solutions are presented).
## 4 Discussion
In this work, we found that the standard framework for modeling thermally
driven disk winds allows steady state solutions to be obtained only when the
computational domain excludes the cold phase atmosphere and the accompanying
large scale vortices that form at $r\gtrsim 15\,R_{\rm IC}$. Clumpy disk wind
solutions result otherwise, and we showed these to be analogs of the clumpy
radial wind solutions obtained by D20 as the clumpiness is again caused by TI.
Here we discuss this instance of dynamical TI in the larger context of AGN
physics.
At scales well outside of the broad line region (BLR) as probed here, TI plays
two related but distinct roles: (i) it first causes the formation of small hot
spots in the upper atmosphere that subsequently expand, becoming larger hot
bubbles; and (ii) it facilitates continuous heating of new gas entering the
bubbles. As a result of (ii), the bubbles further expand and rise, causing a
fragmented layer of the atmosphere to be lifted into the disk wind above. The
resulting structure, which we refer to as an irradiated atmospheric fragment
(IAF), forms crests and filaments that can break off, sending smaller clumps
into the wind. The evaporation of the IAF and any resulting clumps supplies
the disk wind with a substantially higher column of lower ionization gas than
is present in the smooth evolutionary stages at the same viewing angle. IAFs
are large scale structures and the dynamics just described occurs on
timescales spanning $10^{3}-10^{4}$ years. According to these models, the
system is frozen in a state like those shown in the bottom two panels of Fig.
11 on timescales of years.
### 4.1 Dynamical TI in the presence of other forces
We previously showed that radiation forces alter the basic outcome of local TI
in an anisotropic radiation field (Proga & Waters, 2015). Line driving at the
distances where IAFs form appears to be too weak (due to over-ionization and
large optical depth in lines) to significantly affect these multiphase disk
wind solutions (Dannen et al., 2019). It is useful nonetheless to draw a
comparison with these previous cloud acceleration simulations to address the
conflicting claims regarding the role of TI in both the BLR (e.g.,
Beltrametti, 1981; Mathews & Ferland, 1987; Mathews & Doane, 1990; Elvis,
2017; Wang et al., 2017; Matthews et al., 2020) and intermediate to narrow
line regions of AGNs (e.g., Krolik & Vrtilek, 1984; Stern et al., 2014;
Goosmann et al., 2016; Bianchi et al., 2019; Borkar et al., 2021). In Proga &
Waters (2015), we were tracking a single cloud as it formed and accelerated
(mainly by line driving), and we envisioned that globally, the environment of
this cloud would be similar to that surrounding the isolated clump shown in
the bottom panel of Fig. 11. Rather than the result of condensation out of a
warm unstable phase, this clump is an ejected piece of the crest of an IAF.
‘Cloud formation due to TI’ has therefore taken on a new meaning in this
paper, as clumps ejected from IAFs ultimately owe their existence to TI. It is
thus necessary to distinguish between local TI, in which clouds form via
condensation, and dynamical TI. In this work, the latter is only directly
responsible for forming hot bubbles, which then go on to fragment the cold
phase gas in the upper layers of the disk atmosphere.
At much smaller radii where line-driving becomes very important, it is first
of all unclear if TI is even relevant. In the original discovery of clumpy
disk winds by Proga et al. (1998), for example, the wind was taken to be
isothermal, thereby showing that clumps can arise due to radiation forces and
gravity alone. Additionally, the recent work of Matthews et al. (2020)
demonstrates that a combination of ‘microclumping’ and density stratification
shows promise for explaining the known properties of the BLR. However, it has
been proposed that BLR clouds are the result of condensation out of the hot
phase of line-driven winds (Shlosman et al., 1985), i.e. local TI was
envisioned.
Let us suppose, therefore, that TI and line driving are interdependent in the
BLR. It then becomes unclear if the above distinction between dynamical TI and
local TI still needs to be drawn. Waters & Li (2019) argue that the local TI
approximation is valid in the BLR and furthermore show that if the flow
dynamics can be modeled using multiphase turbulence simulations, then the
resulting ionization properties are consistent with those of local optimally
emitting cloud (LOC; see e.g., Leighly & Casebeer, 2007) models. The
properties of multiphase turbulence can likely resolve most of the
shortcomings of the original two phase model of Krolik et al. (1981)
identified by Mathews & Ferland (1987) and Mathews & Doane (1990), who
concluded that BLR clouds are unlikely to be formed by TI. The remaining
issues depend on the validity of the local TI approximation, which will
typically break down wherever gradients in the bulk flow become steep, e.g.,
within the transition layers expected between disks, atmospheres, coronae, and
winds. The line emitting gas is likely produced within such layers (or within
the interfaces of embedded dense clumps), so these issues should be revisited
after the much richer multiphase gas dynamics accompanying dynamical TI under
strong radiation and/or magnetic forces is better understood.
Our finding that the sign of the Bernoulli function at the entrance to a TI
zone determines whether or not the flow is stable to TI is an important
development that will hopefully lead to a more complete understanding of
dynamical TI. The need for a sign change in $Be$ may turn out to be unique to
thermally driven flows at large distances where gravity is small. More
fundamental is the accompanying interpretation that individual entropy modes
propagating along streamlines get disrupted upon encountering a large gradient
in $Be$. By considering Balbus’ criterion for TI,
$\left(\frac{\partial(\mathcal{L}/T)}{\partial T}\right)_{p}<0,$ (16)
and the fact that in a steady state,
$\mathcal{L}/T=-T^{-1}\mathbf{v}\cdot\nabla Be$ by Bernoulli’s
theorem (see Eq. (8)), we see directly that $\nabla Be$ along streamlines is
relevant to the onset of TI. A formal inquiry into this connection requires a
Lagrangian perturbation analysis, however.
Since a generalized Bernoulli function is central to the theory of MHD
outflows (e.g., Spruit, 1996), it will be interesting to determine if a jump
in $Be$ caused by magnetic pressure alone can inhibit TI. In extending these
solutions to MHD, the disk atmospheres are expected to be unstable to the MRI.
It remains to be seen if the resulting MHD turbulence can upend the important
role of the atmospheric vortices in this work, which result in the episodic
production of IAFs. This is a pertinent issue to address, as these vortices
can become much weaker upon including the pressure gradient necessary for a
radial force balance in the midplane BC (see Appendix A). While this term is
only a small correction for a cold disk and is therefore typically omitted
(e.g., Tomaru et al. 2018; Higginbottom et al. 2018), our test runs showed
that IAFs develop on significantly longer timescales upon including it. An MRI
disk setup will not fully alleviate the ambiguity regarding this term.
(Footnote: Ambiguity arises because the midplane BC serves to approximate the
angular momentum supply of the underlying disk but not its temperature, which
is significantly colder than the actual temperature assigned to
$\theta=\pi/2$. Optical depth effects in the transition from disk to
atmosphere will dictate the value of this pressure gradient, so radiation MHD
is ultimately needed.)
We point out that by extending the current modeling framework to include an
MRI disk setup, it becomes possible to investigate one of the most plausible
theories envisioned for the origin of BLR clouds — dense clumps being ejected
from disk atmospheres by large scale magnetic fields (Emmering et al., 1992;
de Kool & Begelman, 1995). IAFs, if they can somehow still form at these small
radii, can no longer be advected outward as gravity is too strong (i.e. the
bound in Eq. (5) is violated). It is entirely conceivable, nevertheless, that
they could facilitate the mass loading of cold phase atmospheric gas at the
base of a magnetic flux tube, perhaps in concert with a combination of line
driving and magnetic pressure forces. However, solving the non-adiabatic MHD
equations in the BLR is a very computationally demanding problem, even in 2D.
The ${\rm HEP}_{0}$ of the BLR is found by rearranging Eq. (4),
${\rm HEP}_{0}=3.73\times 10^{4}\,\frac{\mu(1-\Gamma)}{(T_{0}/10^{5}\,{\rm K})}\,(r_{0}/1\,{\rm ld})^{-1}.$ (17)
For $M_{\rm bh}=5.2\times 10^{7}\,M_{\odot}$ and a typical BLR radius of
$10\,{\rm ld}$ (characteristic of NGC 5548; see e.g., Kriss et al., 2019),
${\rm HEP}_{0}\approx 10^{4}$. Noting from §2.2 that $t_{\rm
cool}(r_{0})/t_{\rm kep}\propto{\rm HEP}_{0}^{-1/2}$, applying the same
methods used here would incur a timestep at least $10\times$ smaller than our
current one, and our current one is already an order of magnitude smaller than
the timestep necessary to perform global MRI disk simulations using ideal MHD.
Nevertheless, we aim to investigate such a role for dynamical TI in the near
future.
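As a sanity check on the magnitude quoted above, the following sketch evaluates ${\rm HEP}_0$ from its physical definition (the ratio of gravitational to thermal energy). Since Eq. (4) is not reproduced here, the normalization with the adiabatic sound speed, $\Gamma=0$, and $T_0=10^5\,$K are assumptions for illustration only.

```python
# Order-of-magnitude sketch of HEP_0 (Eq. 17) under the assumption
# HEP_0 = G M_bh mu m_H (1 - Gamma) / (gamma k T_0 r_0); Gamma = 0 and
# T_0 = 1e5 K are illustrative, not the paper's exact normalization.
G = 6.674e-8         # cm^3 g^-1 s^-2
M_SUN = 1.989e33     # g
M_H = 1.6726e-24     # g
K_B = 1.3807e-16     # erg K^-1
LIGHT_DAY = 2.59e15  # cm

def hep0(M_bh_msun, T0_K, r0_ld, mu=0.6, Gamma=0.0, gamma=5.0 / 3.0):
    """Hydrodynamic escape parameter at the base radius r0 (in light days)."""
    M = M_bh_msun * M_SUN
    r0 = r0_ld * LIGHT_DAY
    return G * M * mu * M_H * (1.0 - Gamma) / (gamma * K_B * T0_K * r0)

# NGC 5548-like values quoted in the text: M_bh = 5.2e7 Msun, r0 = 10 ld
print(f"HEP_0 ~ {hep0(5.2e7, 1e5, 10.0):.1e}")  # of order 1e4, as in the text
```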
### 4.2 Limitations of the present models
Our current solutions account for the gravity of the black hole, the radiation
force due to electron scattering, and the effects of X-ray irradiation for
optically thin flows. Several other processes need to be included to establish
IAFs as a robust phenomenon in the environment of AGNs. Perhaps the most
important ones are geometrical: upon relaxing axisymmetry by running fully 3-D
simulations, there is an additional degree of freedom afforded to TI that may
allow IAFs to form at smaller radii. Specifically, while the poloidal velocity
field prevents the growth of entropy modes within $\sim 15\,R_{\rm IC}$ in
2-D, entropy modes undergoing condensation in the $\phi-$direction will be
subject to weaker stretching. Overall this should increase the efficiency of
thermal driving due to the enhanced heating accompanying clump formation.
Other neglected effects include those of self-gravity, radiative transfer, and
dust opacity. Of these, only dust opacity is expected to also strengthen the
outflow. Note that because dust can only survive inside the clumps, the
contribution of dust opacity to the driving force can only be accurately
assessed after determining the cold gas distribution from 3-D simulations.
Both radiative transfer and the disk self-gravity are expected to slow the
outflow by leading to an effective increase in the local HEP value (the ratio
of gravitational to thermal energy). The former will likely reduce the net
heating of cold gas and hence lower the thermal energy, while the latter
should increase the local gravity.
## 5 Summary and Conclusions
In this paper, we have shown that the thermally unstable radial wind solutions
obtained by D20 have counterpart disk wind solutions. This regime is found by
modifying D20’s fixed-density spherical boundary condition to a ‘midplane’
boundary condition where density varies as $r^{-2}$ and a near-Keplerian
azimuthal velocity profile is assigned. Because our understanding of TI
derives mainly from studies performed in a local approximation (using periodic
boundary conditions), we emphasize that these clumpy wind solutions are
realizations of flows encountering ‘dynamical TI’, where effects such as the
stretching of perturbations due to the acceleration of the flow are more
important than Field’s criterion for determining stability.
Prior work on thermally driven disk winds has shown that within $\sim
15\,R_{\rm IC}$, the presence of a TI zone on the S-curve simply aids heating
to the Compton temperature by providing an efficient heating mechanism (see
§2.2). The flow can enter the TI zone, meaning it becomes locally thermally
unstable, but fundamentally the local theory of TI describes the evolution of
individual entropy modes. We showed in §3.1.2 that by considering the
evolution of such modes advected along streamlines, strong disruptive effects
encountered as they enter the TI zone (such as the above mentioned stretching
due to rapid acceleration) will prevent their growth. Beyond the large scale
vortices that are present in the atmospheres of our solutions within
$80\,R_{\rm IC}$, streamlines in the atmosphere transition from being
connected to the disk wind to being separately outflowing. On a phase diagram,
this atmospheric outflow occupies the cold phase branch of the S-curve below
the temperature of the TI zone. The only actual pathway into the TI zone is
along streamlines passing through the vortices themselves, and the circulation
permits multiple passes through the zone, giving TI more time to amplify
entropy modes. These streamlines are confined to a small range of radii
relative to the domain size but nevertheless occupy a region spanning more
than $10\,R_{\rm IC}$ (see Fig. 7).
The resulting nonlinear perturbations produce hot spots that then seed the
formation of an IAF. Namely, IAFs are what we call the large, filamentary and
outward propagating vortical structures that arise as hot spots become hot
bubbles and disrupt the transition layer between the atmosphere and disk wind.
They are lifted into the disk wind by the expansion and buoyancy of the
bubbles, which undergo continuous runaway heating as a consequence of
remaining thermally unstable. Once exposed to the wind, the filamentary layers
of the IAFs evaporate and supply the disk wind with multiphase gas over a
large solid angle. Occasionally these layers break into smaller clumps that
also evaporate before entering the supersonic flow. Importantly, however, this
evaporation process occurs on timescales of thousands to hundreds of
thousands of years in AGNs, so IAFs can provide a physical explanation for the
fact that warm absorber type outflows are very common.
Our simulations show that the IAFs transition from mere bulges and depressions
in the atmosphere to large scale filaments right around the ‘unbound radius’,
$R_{\rm u}\approx 3.03\times 10^{2}\,\frac{T_{C}/10^{8}\,{\rm K}}{T_{\rm c,max}/10^{5}\,{\rm K}}\,R_{\rm IC}(1-\Gamma),$ (18)
which was identified as the distance beyond which our radial wind solutions
from D20 can become clumpy. We note that $R_{\rm IC}(1-\Gamma)$ is the
generalized Compton radius upon accounting for radiation pressure (Proga &
Kallman, 2002) and that $R_{\rm u}$ (like $R_{\rm IC}$) is a property of the
S-curve. Therefore, the theory of multiphase thermally driven disk winds is
predictive: this radius is sensitive to both the luminosity and the SED,
making it possible to compare $R_{\rm u}$ with the inferred locations of warm
absorbers across a population of AGNs with different values of $\Gamma$ and
$T_{C}/T_{\rm c,max}$.
In conclusion, here we have uncovered what are likely the simplest type of
multiphase disk wind solutions. It is also noteworthy that this year marks 25
years since the pioneering numerical work by Woods et al. (1996), whose
hydrodynamical simulations of thermally driven winds not only confirmed and
refined the theory developed by BMS83, but also established a framework for
building self-consistent disk wind models. Here we have applied this framework
to further extend the basic theory of thermally driven disk winds by
clarifying the role of dynamical TI and its relation to local TI. We view this
development as a building block toward a much broader theory of multiphase
disk winds in AGNs. A full theory should encompass multiple wind launching
mechanisms (e.g., thermal-driving, line-driving, magnetic-driving, and shock-
driving) as well as different ways that gas can become multiphase (e.g.,
clumps can alternatively arise from compression due to shocks or radiation
pressure).
## 6 Acknowledgements
We thank the anonymous referee for a constructive report that led us to
improve the organization of §3 and to add §4.2. We thank Sergei Dyda for
regular discussions over the course of this project. TW thanks Hui Li and the
Theoretical Division at Los Alamos National Laboratory (LANL) for allowing him
to retain access to the institutional computing (IC) clusters. These
calculations were performed under the LANL IC allocation award
`w19_rhdccasims`. Support for this work was provided by the National
Aeronautics Space Administration under ATP grant NNX14AK44G and through
Chandra Award Number TM0-21003X issued by the Chandra X-ray Observatory
Center, which is operated by the Smithsonian Astrophysical Observatory for and
on behalf of the National Aeronautics Space Administration under contract
NAS8-03060.
## References
* Adhikari et al. (2019) Adhikari, T. P., Różańska, A., Hryniewicz, K., Czerny, B., & Behar, E. 2019, ApJ, 881, 78, doi: 10.3847/1538-4357/ab2dfc
* Arav et al. (2015) Arav, N., Chamberlain, C., Kriss, G. A., et al. 2015, A&A, 577, A37, doi: 10.1051/0004-6361/201425302
* Balbus (1986) Balbus, S. A. 1986, ApJ, 303, L79, doi: 10.1086/184657
* Balbus & Soker (1989) Balbus, S. A., & Soker, N. 1989, ApJ, 341, 611, doi: 10.1086/167521
* Banda-Barragán et al. (2019) Banda-Barragán, W. E., Zertuche, F. J., Federrath, C., et al. 2019, MNRAS, 486, 4526, doi: 10.1093/mnras/stz1040
* Bautista & Kallman (2001) Bautista, M. A., & Kallman, T. R. 2001, ApJS, 134, 139, doi: 10.1086/320363
* Begelman et al. (1983) Begelman, M. C., McKee, C. F., & Shields, G. A. 1983, ApJ, 271, 70, doi: 10.1086/161178
* Behar (2009) Behar, E. 2009, ApJ, 703, 1346, doi: 10.1088/0004-637X/703/2/1346
* Beltrametti (1981) Beltrametti, M. 1981, ApJ, 250, 18, doi: 10.1086/159344
* Bianchi et al. (2019) Bianchi, S., Guainazzi, M., Laor, A., Stern, J., & Behar, E. 2019, MNRAS, 485, 416, doi: 10.1093/mnras/stz430
* Bianchi et al. (2012) Bianchi, S., Maiolino, R., & Risaliti, G. 2012, Advances in Astronomy, 2012, 782030, doi: 10.1155/2012/782030
* Borkar et al. (2021) Borkar, A., Adhikari, T. P., Różańska, A., et al. 2021, MNRAS, 500, 3536, doi: 10.1093/mnras/staa3515
* Chakravorty et al. (2009) Chakravorty, S., Kembhavi, A. K., Elvis, M., & Ferland, G. 2009, MNRAS, 393, 83, doi: 10.1111/j.1365-2966.2008.14249.x
* Chakravorty et al. (2008) Chakravorty, S., Kembhavi, A. K., Elvis, M., Ferland, G., & Badnell, N. R. 2008, MNRAS, 384, L24, doi: 10.1111/j.1745-3933.2007.00414.x
* Chakravorty et al. (2012) Chakravorty, S., Misra, R., Elvis, M., Kembhavi, A. K., & Ferland, G. 2012, MNRAS, 422, 637, doi: 10.1111/j.1365-2966.2012.20641.x
* Chan & Krolik (2016) Chan C.-H., Krolik J. H., 2016, ApJ, 825, 67, doi: 10.3847/0004-637X/825/1/67
* Chan & Krolik (2017) Chan C.-H., Krolik J. H., 2017, ApJ, 843, 58, doi: 10.3847/1538-4357/aa76e4
* Combes et al. (2019) Combes F., García-Burillo S., Audibert A., Hunt L., Eckart A., Aalto S., Casasola V., et al., 2019, A&A, 623, A79, doi: 10.1051/0004-6361/201834560
* Crenshaw et al. (2003) Crenshaw, D. M., Kraemer, S. B., & George, I. M. 2003, ARA&A, 41, 117, doi: 10.1146/annurev.astro.41.082801.100328
* Dannen et al. (2019) Dannen, R. C., Proga, D., Kallman, T. R., & Waters, T. 2019, ApJ, 882, 99, doi: 10.3847/1538-4357/ab340b
* Dannen et al. (2020) Dannen, R. C., Proga, D., Waters, T., & Dyda, S. 2020, ApJ, 893, L34, doi: 10.3847/2041-8213/ab87a5
* de Kool & Begelman (1995) de Kool, M., & Begelman, M. C. 1995, ApJ, 455, 448, doi: 10.1086/176594
* Detmers et al. (2011) Detmers, R. G., Kaastra, J. S., Steenbrugge, K. C., et al. 2011, A&A, 534, A38, doi: 10.1051/0004-6361/201116899
* Dorodnitsyn, Kallman, & Bisnovatyi-Kogan (2012) Dorodnitsyn A., Kallman T., Bisnovatyi-Kogan G. S., 2012, ApJ, 747, 8, doi: 10.1088/0004-637X/747/1/8
* Dorodnitsyn & Kallman (2017) Dorodnitsyn A., Kallman T., 2017, ApJ, 842, 43, doi: 10.3847/1538-4357/aa7264
* Done et al. (2012) Done, C., Davis, S. W., Jin, C., Blaes, O., & Ward, M. 2012, MNRAS, 420, 1848, doi: 10.1111/j.1365-2966.2011.19779.x
* Dyda et al. (2017) Dyda, S., Dannen, R., Waters, T., & Proga, D. 2017, MNRAS, 467, 4161, doi: 10.1093/mnras/stx406
* Ebrero et al. (2016) Ebrero, J., Kaastra, J. S., Kriss, G. A., et al. 2016, A&A, 587, A129, doi: 10.1051/0004-6361/201527808
* Elvis (2017) Elvis, M. 2017, ApJ, 847, 56, doi: 10.3847/1538-4357/aa82b6
* Emmering et al. (1992) Emmering, R. T., Blandford, R. D., & Shlosman, I. 1992, ApJ, 385, 460, doi: 10.1086/170955
* Ferland et al. (2020) Ferland, G. J., Done, C., Jin, C., Landt, H., & Ward, M. J. 2020, MNRAS, 494, 5917, doi: 10.1093/mnras/staa1207
* Ferland et al. (2013) Ferland, G. J., Kisielius, R., Keenan, F. P., et al. 2013, ApJ, 767, 123, doi: 10.1088/0004-637X/767/2/123
* Ferland et al. (2017) Ferland, G. J., Chatzikos, M., Guzmán, F., et al. 2017, Rev. Mexicana Astron. Astrofis., 53, 385. https://arxiv.org/abs/1705.10877
* Field (1965) Field, G. B. 1965, ApJ, 142, 531, doi: 10.1086/148317
* Ganguly et al. (2021) Ganguly S., Proga D., Waters T., Dannen R. C., Dyda S., Giustini M., Kallman T., et al., 2021, arXiv, arXiv:2103.06497
* Gaspari et al. (2015) Gaspari, M., Brighenti, F., & Temi, P. 2015, A&A, 579, A62, doi: 10.1051/0004-6361/201526151
* Giustini & Proga (2019) Giustini, M., & Proga, D. 2019, A&A, 630, A94, doi: 10.1051/0004-6361/201833810
* Goosmann et al. (2016) Goosmann, R. W., Holczer, T., Mouchet, M., et al. 2016, A&A, 589, A76, doi: 10.1051/0004-6361/201425199
* Grafton-Waters et al. (2020) Grafton-Waters, S., Branduardi-Raymont, G., Mehdipour, M., et al. 2020, A&A, 633, A62, doi: 10.1051/0004-6361/201935815
* Higginbottom et al. (2018) Higginbottom, N., Knigge, C., Long, K. S., et al. 2018, MNRAS, 479, 3651, doi: 10.1093/mnras/sty1599
* Higginbottom & Proga (2015) Higginbottom, N., & Proga, D. 2015, ApJ, 807, 107, doi: 10.1088/0004-637X/807/1/107
* Higginbottom et al. (2017) Higginbottom, N., Proga, D., Knigge, C., & Long, K. S. 2017, ApJ, 836, 42, doi: 10.3847/1538-4357/836/1/42
* Holczer et al. (2007) Holczer, T., Behar, E., & Kaspi, S. 2007, ApJ, 663, 799, doi: 10.1086/518416
* Hönig (2019) Hönig S. F., 2019, ApJ, 884, 171, doi: 10.3847/1538-4357/ab4591
* Jacquemin-Ide et al. (2020) Jacquemin-Ide, J., Lesur, G., & Ferreira, J. 2020, arXiv e-prints, arXiv:2011.14782. https://arxiv.org/abs/2011.14782
* Jimenez-Garate et al. (2002) Jimenez-Garate, M. A., Raymond, J. C., & Liedahl, D. A. 2002, ApJ, 581, 1297, doi: 10.1086/344364
* Jin et al. (2012) Jin, C., Ward, M., & Done, C. 2012, MNRAS, 422, 3268, doi: 10.1111/j.1365-2966.2012.20847.x
* Kaastra et al. (2002) Kaastra, J. S., Steenbrugge, K. C., Raassen, A. J. J., et al. 2002, A&A, 386, 427, doi: 10.1051/0004-6361:20020235
* Kaastra et al. (2014) Kaastra, J. S., Kriss, G. A., Cappi, M., et al. 2014, Science, 345, 64, doi: 10.1126/science.1253787
* Kallman & McCray (1982) Kallman, T. R., & McCray, R. 1982, ApJS, 50, 263, doi: 10.1086/190828
* Kallman & Bautista (2001) Kallman, T. & Bautista, M. 2001, ApJS, 133, 221, doi: 10.1086/319184
* Kallman (2010) Kallman, T. R. 2010, Space Sci. Rev., 157, 177, doi: 10.1007/s11214-010-9711-6
* Kallman & Dorodnitsyn (2019) Kallman, T., & Dorodnitsyn, A. 2019, ApJ, 884, 111, doi: 10.3847/1538-4357/ab40aa
* Kriss et al. (2019) Kriss, G. A., De Rosa, G., Ely, J., et al. 2019, ApJ, 881, 153, doi: 10.3847/1538-4357/ab3049
* Krolik & Kriss (2001) Krolik, J. H., & Kriss, G. A. 2001, ApJ, 561, 684, doi: 10.1086/323442
* Krolik et al. (1981) Krolik, J. H., McKee, C. F., & Tarter, C. B. 1981, ApJ, 249, 422, doi: 10.1086/159303
* Krolik & Vrtilek (1984) Krolik, J. H., & Vrtilek, J. M. 1984, ApJ, 279, 521, doi: 10.1086/161916
* Kurosawa & Proga (2009) Kurosawa, R., & Proga, D. 2009, ApJ, 693, 1929, doi: 10.1088/0004-637X/693/2/1929
* Laha et al. (2014) Laha, S., Guainazzi, M., Dewangan, G. C., Chakravorty, S., & Kembhavi, A. K. 2014, MNRAS, 441, 2613, doi: 10.1093/mnras/stu669
* Laha et al. (2020) Laha, S., Reynolds, C. S., Reeves, J., et al. 2020, Nature Astronomy, doi: 10.1038/s41550-020-01255-2
* Lee et al. (2013) Lee, J. C., Kriss, G. A., Chakravorty, S., et al. 2013, MNRAS, 430, 2650, doi: 10.1093/mnras/stt050
* Leighly & Casebeer (2007) Leighly, K. M., & Casebeer, D. 2007, in Astronomical Society of the Pacific Conference Series, Vol. 373, The Central Engine of Active Galactic Nuclei, ed. L. C. Ho & J. W. Wang, 365
* Lepp et al. (1985) Lepp, S., McCray, R., Shull, J. M., Woods, D. T., & Kallman, T. 1985, ApJ, 288, 58, doi: 10.1086/162763
* Lodders et al. (2009) Lodders, K., Palme, H., & Gail, H. P. 2009, Landolt Börnstein, 4B, 712, doi: 10.1007/978-3-540-88055-4_34
* Luketic et al. (2010) Luketic, S., Proga, D., Kallman, T. R., Raymond, J. C., & Miller, J. M. 2010, ApJ, 719, 515, doi: 10.1088/0004-637X/719/1/515
* Mathews & Doane (1990) Mathews, W. G., & Doane, J. S. 1990, ApJ, 352, 423, doi: 10.1086/168548
* Mathews & Ferland (1987) Mathews, W. G., & Ferland, G. J. 1987, ApJ, 323, 456, doi: 10.1086/165843
* Matthews et al. (2020) Matthews, J. H., Knigge, C., Higginbottom, N., et al. 2020, MNRAS, 492, 5540, doi: 10.1093/mnras/staa136
* McKee & Begelman (1990) McKee, C. F. & Begelman, M. C. 1990, ApJ, 358, 392, doi: 10.1086/168995
* Mehdipour et al. (2015) Mehdipour, M., Kaastra, J. S., Kriss, G. A., et al. 2015, A&A, 575, A22, doi: 10.1051/0004-6361/201425373
* Mizumoto et al. (2019) Mizumoto, M., Done, C., Tomaru, R., & Edwards, I. 2019, MNRAS, 489, 1152, doi: 10.1093/mnras/stz2225
* Mościbrodzka & Proga (2013) Mościbrodzka, M., & Proga, D. 2013, ApJ, 767, 156, doi: 10.1088/0004-637X/767/2/156
* Nakatani & Yoshida (2019) Nakatani, R., & Yoshida, N. 2019, ApJ, 883, 127, doi: 10.3847/1538-4357/ab380a
* Ponti et al. (2012) Ponti, G., Fender, R. P., Begelman, M. C., et al. 2012, MNRAS, 422, L11, doi: 10.1111/j.1745-3933.2012.01224.x
* Proga & Kallman (2002) Proga, D., & Kallman, T. R. 2002, ApJ, 565, 455, doi: 10.1086/324534
* Proga et al. (1998) Proga, D., Stone, J. M., & Drew, J. E. 1998, MNRAS, 295, 595, doi: 10.1046/j.1365-8711.1998.01337.x
* Proga & Waters (2015) Proga, D., & Waters, T. 2015, ApJ, 804, 137, doi: 10.1088/0004-637X/804/2/137
* Ramos Almeida & Ricci (2017) Ramos Almeida C., Ricci C., 2017, NatAs, 1, 679, doi: 10.1038/s41550-017-0232-z
* Rodríguez Hidalgo et al. (2020) Rodríguez Hidalgo, P., Khatri, A. M., Hall, P. B., et al. 2020, ApJ, 896, 151, doi: 10.3847/1538-4357/ab9198
* Różańska et al. (2014) Różańska, A., Czerny, B., Kunneriath, D., et al. 2014, MNRAS, 445, 4385, doi: 10.1093/mnras/stu2066
* Sharma et al. (2012) Sharma, P., McCourt, M., Quataert, E., & Parrish, I. J. 2012, MNRAS, 420, 3174, doi: 10.1111/j.1365-2966.2011.20246.x
* Sheikhnezami & Fendt (2018) Sheikhnezami, S., & Fendt, C. 2018, ApJ, 861, 11, doi: 10.3847/1538-4357/aac5dc
* Shlosman et al. (1985) Shlosman, I., Vitello, P. A., & Shaviv, G. 1985, ApJ, 294, 96, doi: 10.1086/163278
* Spruit (1996) Spruit, H. C. 1996, in NATO Advanced Study Institute (ASI) Series C, Vol. 477, Evolutionary Processes in Binary Stars, ed. R. A. M. J. Wijers, M. B. Davies, & C. A. Tout, 249–286
* Stern et al. (2014) Stern, J., Behar, E., Laor, A., Baskin, A., & Holczer, T. 2014, MNRAS, 445, 3011, doi: 10.1093/mnras/stu1960
* Stone et al. (2020) Stone, J. M., Tomida, K., White, C. J., & Felker, K. G. 2020, ApJS, 249, 4, doi: 10.3847/1538-4365/ab929b
* Tomaru et al. (2018) Tomaru, R., Done, C., Odaka, H., Watanabe, S., & Takahashi, T. 2018, MNRAS, 476, 1776, doi: 10.1093/mnras/sty336
* Tomaru et al. (2019) Tomaru, R., Done, C., Ohsuga, K., Nomura, M., & Takahashi, T. 2019, MNRAS, 490, 3098, doi: 10.1093/mnras/stz2738
* Townsend (2009) Townsend, R. H. D. 2009, ApJS, 181, 391, doi: 10.1088/0067-0049/181/2/391
* Voit et al. (2017) Voit G. M., Meece G., Li Y., O’Shea B. W., Bryan G. L., Donahue M., 2017, ApJ, 845, 80, doi: 10.3847/1538-4357/aa7d04
* Wada (2012) Wada K., 2012, ApJ, 758, 66, doi: 10.1088/0004-637X/758/1/66
* Wang et al. (2017) Wang, J.-M., Du, P., Brotherton, M. S., et al. 2017, Nature Astronomy, 1, 775, doi: 10.1038/s41550-017-0264-4
* Waters & Li (2019) Waters, T., & Li, H. 2019, arXiv e-prints, arXiv:1912.03382. https://arxiv.org/abs/1912.03382
* Waters & Proga (2018) Waters, T., & Proga, D. 2018, MNRAS, 481, 2628, doi: 10.1093/mnras/sty2398
* Waters & Proga (2019) —. 2019, ApJ, 875, 158, doi: 10.3847/1538-4357/ab10e1
* Waters et al. (2017) Waters, T., Proga, D., Dannen, R., & Kallman, T. R. 2017, MNRAS, 467, 3160, doi: 10.1093/mnras/stx238
* Williamson, Hönig, & Venanzi (2019) Williamson D., Hönig S., Venanzi M., 2019, ApJ, 876, 137, doi: 10.3847/1538-4357/ab17d5
* Woods et al. (1996) Woods, D. T., Klein, R. I., Castor, J. I., McKee, C. F., & Bell, J. B. 1996, ApJ, 461, 767, doi: 10.1086/177101
* Zhu & Stone (2018) Zhu, Z., & Stone, J. M. 2018, ApJ, 857, 34, doi: 10.3847/1538-4357/aaafc9
## Appendix A Numerical Methods
Using Athena++ (Stone et al., 2020), we solve the equations of non-adiabatic
gas dynamics, accounting for the forces of gravity and radiation pressure due
to electron scattering opacity:
$\frac{D\rho}{Dt}=-\rho\nabla\cdot{\bf v},$ (A1)
$\rho\frac{D{\bf v}}{Dt}=-\nabla p-\frac{G\,M_{\rm bh}\rho}{r^{2}}(1-\Gamma),$ (A2)
$\rho\frac{D\mathcal{E}}{Dt}=-p\nabla\cdot{\bf v}-\rho\mathcal{L}.$ (A3)
For an ideal gas, $p=(\gamma-1)\rho\mathcal{E}$, and we set $\gamma=5/3$. Our
calculations are performed in spherical polar coordinates, $(r,\theta,\phi)$,
assuming axial symmetry about a rotational axis at $\theta=0^{\degree}$. The
S-curve of our net cooling function $\mathcal{L}$ is shown in Fig. 2 and was
obtained by running a large grid of photoionization calculations using XSTAR
(see Dyda et al., 2017) for the SED of NGC 5548 obtained by Mehdipour et al.
(2015). The abundances are those of Lodders et al. (2009), but for simplicity,
in this work we set $\mu_{H}=1$, keeping only $\mu=0.6$.
The net cooling term can be expressed as $\rho\mathcal{L}=n^{2}(C-H)$, and we
interpolate from tables of the total cooling ($C$) and total heating ($H$)
rates given in units of ${\rm erg\,cm^{3}\,s^{-1}}$; these tables were made
publicly available by Dannen et al. (2019). To obtain the clumpy outflow
solutions in D20, we used bilinear interpolation to access values from the
tables (for details, see Dyda et al., 2017). We suspected that this basic
algorithm could be somewhat inaccurate near the S-curve, where $\mathcal{L}$
is close to zero. In the testing phase of this work, we verified this
suspicion, confirming that interpolation errors introduce perturbations into
the flow that are continually amplified by TI, allowing even our 1D solutions
to remain clumpy indefinitely. Specifically, the percent difference in rates
evaluated near the S-curve where $C\approx H$ could exceed 1000% when
comparing rates looked up from a table of an analytic function (sampled
similarly to our actual tables) against the analytic values themselves.
We therefore implemented a GSL bicubic interpolation routine that we found
reduces the percent differences for rates near the S-curve to less than 1% and
returns rates that are on average about $10\times$ more accurate. As discussed
in §3.1.2, all of the 1D radial wind solutions in D20 reach a steady state
with this improvement.
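The effect can be illustrated with a simplified 1D analogue (a sketch under stated assumptions, not the actual 2D XSTAR tables or the GSL routine): tabulate a smooth toy "net rate" with a zero crossing on a coarse grid, then compare a linear lookup (the 1D analogue of bilinear) against a Catmull-Rom cubic lookup (a stand-in for bicubic) near the crossing, where relative errors blow up.

```python
import math

def f(x):                       # toy analytic "net rate": zero at x = ln 2
    return math.exp(x) - 2.0

h, x0 = 0.25, -1.0
xs = [x0 + i * h for i in range(13)]    # coarse table of the analytic function
ys = [f(x) for x in xs]

def lin(x):
    """Piecewise-linear lookup, analogous to bilinear in 2D."""
    i = max(0, min(int((x - x0) / h), len(xs) - 2))
    t = (x - xs[i]) / h
    return (1 - t) * ys[i] + t * ys[i + 1]

def cubic(x):
    """Catmull-Rom cubic lookup, a stand-in for bicubic in 2D."""
    i = max(1, min(int((x - x0) / h), len(xs) - 3))
    t = (x - xs[i]) / h
    p0, p1, p2, p3 = ys[i - 1], ys[i], ys[i + 1], ys[i + 2]
    return 0.5 * (2 * p1 + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Relative errors blow up near the zero crossing; cubic stays far smaller
probes = [math.log(2.0) + d for d in (-0.05, -0.02, 0.02, 0.05)]
for x in probes:
    exact = f(x)
    print(f"x={x:+.3f}  linear err={abs(lin(x) - exact) / abs(exact):6.1%}"
          f"  cubic err={abs(cubic(x) - exact) / abs(exact):6.1%}")
```

In this toy setup the linear errors reach tens of percent right next to the crossing while the cubic errors stay at the percent level, mirroring the bilinear-versus-bicubic behavior described above.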
### A.1 Initial conditions, boundary conditions, and computational domain
Our initial conditions are $\rho(r,\theta_{\rm max})=\rho_{0}(r_{0}/r)^{2}$
(with $\rho_{0}=\bar{m}\,n_{0}$), $\mathbf{v}(r,\theta_{\rm max})=v_{\phi}(r)\hat{\phi}$,
and $p(r,\theta_{\rm max})=n_{0}k\,T_{0}(r_{0}/r)^{2}$, where $\theta_{\rm max}$
denotes zones with cell edges at $\pi/2$ ($\tt{je}$ zones in Athena++ notation).
All other zones have $\rho(r,\theta)=\rho_{0}(T_{C}/T_{0})(r_{0}/r)^{2}$,
$\mathbf{v}(r,\theta)=(1.1\,v_{0}\sqrt{1-r_{\rm in}/r}+10^{-4}v_{0})\hat{r}$,
and $p(r)=p(r,\theta_{\rm max})$, where $v_{0}=\sqrt{GM_{\rm bh}/r_{0}}$.
Reflecting BCs are applied at $\theta=\pi/2$. In our fiducial runs, the
rotation profile along this boundary is enforced to be sub-Keplerian to
account for the radiation force, $v_{\phi}(r)=\sqrt{GM_{\rm bh}(1-\Gamma)/r}$
at $\theta=\pi/2$. We also ran tests accounting for the contribution of the
pressure gradient in the radial force balance along the midplane, given by
$\frac{v_{\phi}^{2}}{r}=\frac{GM_{\rm bh}}{r^{2}}(1-\Gamma)+\frac{1}{\rho}\frac{dp}{dr}.$ (A4)
For the isothermal midplane BC that we use, $dp/dr=(-2/r)p(r)$ at
$\theta=\pi/2$. This midplane BC is applied inside our heating and cooling
routine (implemented using EnrollUserExplicitSourceFunction()). We reset the
midplane density, velocity, and pressure on every call ($2\times$ per timestep
for a 2nd-order integrator) to the above values. Strictly, only the density and
velocity must be reset: Athena++ arrives at the same solution without resetting
the pressure, but only if a timestep $2\times$ smaller is used. This stability
constraint arises because the midplane contains the
densest gas and thus gas with the shortest cooling times in the domain — see
Fig. 3.
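The rotation profile implied by Eq. (A4) can be sketched as follows. For the isothermal midplane BC, $p\propto r^{-2}$ gives $dp/dr=(-2/r)p$, so the correction term $(r/\rho)\,dp/dr$ reduces to $-2p/\rho$. The parameter values below are illustrative assumptions, not the simulation's actual inputs.

```python
# Hedged sketch of the midplane rotation from Eq. (A4), with and without
# the pressure-gradient correction; numbers are illustrative code units.
import math

def v_phi(r, GM, Gamma=0.0, cs2=None):
    """Rotation speed from radial force balance at the midplane.

    cs2 = p/rho (isothermal sound speed squared). With p ~ r^-2, the
    correction (r/rho) dp/dr equals -2 cs2, making rotation sub-Keplerian.
    """
    v2 = GM * (1.0 - Gamma) / r
    if cs2 is not None:
        v2 -= 2.0 * cs2
    return math.sqrt(v2)

GM, r, Gamma = 1.0, 1.0, 0.1       # code units, illustrative
cs2 = 0.01 * GM / r                # cold disk: cs^2 << v_K^2
no_corr = v_phi(r, GM, Gamma)
with_corr = v_phi(r, GM, Gamma, cs2)
print(no_corr, with_corr)          # pressure term makes rotation slower still
```

For a cold disk the correction is small, consistent with it typically being omitted, yet as noted above it can noticeably change the IAF development timescale.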
We apply a ‘constant gradient’ BC at $r_{\rm in}$, in which all primitive
variables are linearly extrapolated from the first active zone into the ghost
zones. At $r_{\rm out}$, we apply modified outflow BCs that prevent inflows
from developing at the boundary. Specifically, $v_{r}$ is set to 0 in the
ghost zones if it is found to be less than zero (in Athena++ notation, we set
$\tt{prim(IVX,k,j,ie+i})=0$ if $\tt{prim(IVX,k,j,ie+i})<0$). Without this
latter BC, a transient disturbance will enter from the outer boundary after
hundreds of orbital times at $R_{\rm IC}$ (but before IAFs develop).
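Schematically, the clamp works as follows (the actual implementation operates on Athena++'s `prim(IVX,...)` array in C++ ghost-zone loops; the helper below is a hypothetical Python analogue):

```python
# Schematic analogue of the modified outflow BC at r_out: copy the last
# active-zone radial velocity into the ghost zones, but clamp negative
# values to zero so no inflow can develop at the outer boundary.
def fill_outer_ghost_zones(v_r, n_ghost):
    """v_r: active-zone radial velocities; returns ghost-zone values."""
    edge = v_r[-1]
    return [max(edge, 0.0)] * n_ghost

print(fill_outer_ghost_zones([0.5, 0.8, 1.2], 2))   # outflow: value copied
print(fill_outer_ghost_zones([0.5, 0.2, -0.3], 2))  # would-be inflow: zeroed
```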
Our calculations utilize static mesh refinement (SMR) to give an effective
grid size, $N_{r}\times N_{\theta}$ ($\tt{nx1\times nx2}$ in Athena++
notation), of $1056\times 576$ for our mid-res run and $2112\times 1152$ for
our hi-res run. Our base grid is $132\times 72$, and SMR levels are set at
$15^{\degree}<\theta<90^{\degree}$, $30^{\degree}<\theta<90^{\degree}$, and
$35^{\degree}<\theta<90^{\degree}$ for our mid-res run. For our hi-res run, a
fourth SMR level is applied at $35^{\degree}<\theta<90^{\degree}$. Our radial
domain extends from $r_{\rm in}=13.1\,R_{\rm IC}$ to $r_{\rm out}=285\,R_{\rm IC}$.
### A.2 Timestep constraint
In Waters & Proga (2018), we used a semi-implicit solver to include the
heating and cooling source term as described by Dyda et al. (2017). For this
work, we had to abandon this routine in favor of a simpler explicit solver
after finding that the semi-implicit solve causes a small fraction of the
coldest zones within the atmosphere to ‘jump off’ the S-curve at every
timestep when these zones are visualized on the $(T,\Xi)$-plane. This behavior
indicates that numerically, multiple temperature solutions can exist for the
same timestep, $\Delta t$. This is a generic property of implicit routines
(Townsend, 2009), with the simple remedy being to resort to an explicit solve.
Note that Townsend’s exact integration routine requires $\mathcal{L}$ to be a
function of temperature alone, whereas here, $\mathcal{L}=\mathcal{L}(T,\xi)$.
An explicit solve requires implementing a custom timestep (using
EnrollUserTimeStepFunction) because $t_{\rm cool}$ is generally smaller than
the CFL constraint on $\Delta t$ for zones near the midplane. This makes
these solutions computationally very costly, and indeed it is prohibitively
expensive to obtain exact analogues of the solutions in D20,
which are for $\xi_{0}=5$. Our $\xi_{0}=5$ solution shown in Fig. 3 used only
2 levels of SMR and was not run long enough for the regions beyond $15\,R_{\rm
IC}$ to reach a steady state. However, the cost is greatly reduced for the
$\rm{HEP_{0}}=36$ solutions presented in §3.2 because $t_{\rm
cool}(r_{0})\propto{\rm HEP_{0}}^{-2}$ (see Eq. (7)).
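A minimal sketch of such a custom timestep limiter (not the actual Athena++ implementation; the safety factor of 0.3 is an assumption for illustration):

```python
# In addition to the CFL constraint, an explicit heating/cooling solve must
# limit dt to a fraction of the shortest cooling time anywhere in the domain.
def t_cool(E_int, L):
    """Cooling time for specific internal energy E_int and net rate L."""
    return abs(E_int / L) if L != 0.0 else float("inf")

def timestep(dt_cfl, zones, safety=0.3):
    """zones: iterable of (E_int, L) pairs; returns the limited dt."""
    t_min = min(t_cool(E, L) for E, L in zones)
    return min(dt_cfl, safety * t_min)

# Zones with short cooling times (e.g. dense midplane gas) set the timestep
print(timestep(1.0, [(1.0, 10.0), (1.0, 0.5)]))
```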
## Appendix B The ‘unbound radius’
Dividing Eq. (8) by $c_{s}^{2}$ and writing $\phi$ in terms of ${\rm HEP}_{0}$
gives $Be/c_{s}^{2}=M^{2}/2+1/(\gamma-1)-{\rm HEP}_{0}(T_{0}/T)(r_{0}/r)$,
where $M$ is the Mach number. As shown by the red contours in Fig. 6 through
Fig. 9, the locus of points at which $Be$ equals zero coincides with those
where $T$ reaches $T_{\rm c,max}\equiv T(\Xi_{\rm c,max})$ up until some
distance. Setting $Be=0$ defines this contour implicitly as
$r=r_{0}\,{\rm HEP}_{0}\,\frac{T_{0}/T_{\rm c,max}}{M^{2}_{\rm c,max}/2+1/(\gamma-1)},$ (B1)
where $M_{\rm c,max}$ is the Mach number evaluated at $T=T_{\rm c,max}$. The
‘unbound radius’ we are after is the transition radius where surfaces of
$Be=0$ and $T=T_{\rm c,max}$ no longer coincide in Fig. 9. It lies interior to
the sonic surface, so a lower limit is found by setting $M_{\rm c,max}=1$ in
the above expression. This defines a characteristic radius
$R_{\rm u}\equiv 2\,r_{0}\,{\rm
HEP}_{0}\frac{\gamma-1}{\gamma+1}\frac{T_{0}}{T_{\rm c,max}}.$ (B2)
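As a quick symbolic sanity check (a sympy sketch; the symbol names are ours), substituting $M_{\rm c,max}=1$ into Eq. (B1) reproduces Eq. (B2), since the denominator becomes $1/2+1/(\gamma-1)=(\gamma+1)/[2(\gamma-1)]$:

```python
import sympy as sp

r0, HEP0, T0, Tc, gamma, M = sp.symbols('r0 HEP0 T0 Tc gamma M', positive=True)

# Eq. (B1): radius of the Be = 0 contour (Tc stands for T_c,max)
r_B1 = r0 * HEP0 * (T0 / Tc) / (M**2 / 2 + 1 / (gamma - 1))

# Eq. (B2): characteristic radius R_u obtained by setting M = 1
R_u = 2 * r0 * HEP0 * ((gamma - 1) / (gamma + 1)) * (T0 / Tc)

assert sp.simplify(r_B1.subs(M, 1) - R_u) == 0
```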
Eliminating $r_{0}\,{\rm HEP}_{0}$ in favor of $R_{\rm IC}$ using Eq. (4)
gives the expression quoted in Eq. (11).
† These authors contributed equally.
# Sub-dimensional topologies, indicators and higher order phases
Gunnar F. Lange<EMAIL_ADDRESS>(TCM Group, Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE, United Kingdom)
Adrien Bouhon<EMAIL_ADDRESS>(Nordic Institute for Theoretical Physics (NORDITA), Stockholm, Sweden; Department of Physics and Astronomy, Uppsala University, Box 516, SE-751 21 Uppsala, Sweden)
Robert-Jan Slager<EMAIL_ADDRESS>(TCM Group, Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE, United Kingdom)
###### Abstract
The study of topological band structures has sparked prominent research
interest over the past decade, culminating in the recent formulation of rather
prolific classification schemes that encapsulate a large fraction of phases
and features. Within this context we recently reported on a class of
unexplored topological structures that thrive on the concept of sub-
dimensional topology. Although such phases have trivial indicators and band
representations when evaluated over the complete Brillouin zone, they have
stable or fragile topologies within sub-dimensional spaces, such as planes or
lines. This perspective does not just refine classification pursuits, but can
result in observable features in the full dimensional sense. In three spatial
dimensions (3D), for example, sub-dimensional topologies can be characterized
by non-trivial planes, having general topological invariants, that are
compensated by Weyl nodes away from these planes. As a result, such phases
have 3D stable characteristics such as Weyl nodes, Fermi arcs and edge states
that can be systematically predicted by sub-dimensional analysis. Within this
work we further elaborate on these concepts. We present refined representation
counting schemes and address distinctive bulk-boundary effects, including
momentum-dependent (higher-order) edge states with a signature dependence on
the perpendicular momentum. As such, we hope that these insights might spur
on new activities to further deepen the understanding of these unexplored
phases.
## I Introduction
Topological effects in band structures have received significant attention in
recent years Qi and Zhang (2011); Hasan and Kane (2010), leading to a myriad
of topological phases and effects Slager _et al._ (2012); Fu (2011); Slager
(2019); Bzdušek _et al._ (2016); Höller and Alexandradinata (2018); Shiozaki
_et al._ (2017); Fang _et al._ (2012); Lee _et al._ (2020); Mesaros _et
al._ (2013); Kariyado and Slager (2020); Zhang and Liu (2015); Rhim _et al._
(2018); Bouhon and Black-Schaffer (2017a). Of particular interest in this
regard, however, is recent progress on mapping out a considerable fraction of
topological materials. Namely, using constraints on band representations
between high-symmetry points in the Brillouin zone (BZ) that can reproduce the
full K-theory in certain cases Kruthoff _et al._ (2017), elaborate schemes
have emerged that, upon comparing these conditions in momentum space to real
space, provide direct indicators of topological non-triviality Po _et al._
(2017); Bradlyn _et al._ (2017); Watanabe _et al._ (2018); Elcoro _et al._
(2020); Bouhon _et al._ (2020a). This is usually phrased in terms of so-
called symmetry indicators Po _et al._ (2017) or elementary band
representations Bradlyn _et al._ (2017). While the former is roughly obtained
by considering the constraints in momentum space as a vector space, which
delivers indicators upon dividing out Fourier transformed trivial atomic limit
configurations, the rationale behind considering elementary band
representations (EBR) is that a split EBR must lead to a non-trivial behavior
Bradlyn _et al._ (2017). That is, a topological configuration can by
definition not be represented in terms of Wannier functions (of the localized
kind) that also respect all symmetries.
This progress in itself has already sparked the discovery of new kinds of
topologies. It was for example found that mismatches between stable symmetry
indicators and split EBRs can be understood by the existence of fragile
topological phases Po _et al._ (2018). Such fragile phases formally amount to
a difference of trivial phases and can consequently be trivialized by the
addition of trivial bands, rather than bands having opposite topological
invariants Bouhon _et al._ (2019); Peri _et al._ (2020); Song _et al._
(2019). This concept of fragile topology in fact not only applies to symmetry
indicated phases (i.e. those that can be deduced from the irreducible
representation content at high symmetry points) but can be generalized by
taking into account multi-gap conditions Bouhon _et al._ (2020b). Such phases
can physically be understood as arising by braiding non-Abelian frame charges
Wu _et al._ (2019); Ahn _et al._ (2019); Bouhon _et al._ (2020c), leading
to novel types of invariants and physical effects Ünal _et al._ (2020).
In recent work Bouhon _et al._ (2020a), we elucidated the idea of fragile
topology in a magnetic context Jo _et al._ (2020); Qi _et al._ (2008);
Wieder and Bernevig (2018); Liu _et al._ (2020); Yang _et al._ (2021); Mong
_et al._ (2010); Otrokov _et al._ (2019); Zhang and Liu (2015); Elcoro _et
al._ (2020) and outlined its connection to the concept of sub-dimensional
topology. The essential idea revolves around the fact that while EBRs can
be globally connected, thus appearing trivial in previous schemes, they can
still be splittable on sub-dimensional spaces such as planes in 3D Brillouin
zones. These sub-dimensional spaces can effectively be diagnosed in terms of
symmetry indicators and EBR content. This is not merely an esoteric
observation but results in real (in the 3D sense) consequences. The non-
trivial plane might for example host stable invariants such as Chern numbers
or spin Chern numbers that necessarily have to be compensated by Weyl nodes to
lead to a globally connected EBR. As a result, there are phases that have stable
topological features (such as Chern planes and Weyl nodes) with distinct
physical observables (such as edge states and Fermi arcs) that depend on this
new concept, i.e. the analysis of sub-dimensions that in turn reveal the
necessary existence of nodes in the 3D band structure. While we presented an
exhaustive table of all space groups in which the simplest form of this
mechanism occurs Bouhon _et al._ (2020a), it was conjectured that this new
view can play a role in other settings as well.
In this work, we further develop the idea of fragile magnetic topology and
sub-dimensional topology by investigating the bulk-corner and bulk-hinge
correspondence, the twisted boundary conditions and the bulk-edge
correspondence for these phases. This elucidates the connection between
fragile magnetic topology, sub-dimensional topology and other well-studied
phases such as higher-order topological insulators (HOTI). In particular, we
explore the physical consequences of sub-dimensional topology by looking at
the hinge and edge state spectra of a sub-dimensional phase, and by examining
its bulk-edge correspondence. Interestingly, we find that the sub-dimensional
perspective can manifest itself by affecting the edge/hinge spectrum as
function of the momentum along the hinge or direction perpendicular to the
plane hosting the non-trivial sub-dimensional topology. We thus find as a main
result that this refined perspective can result in specific consequences at
the edge, namely momentum-dependent hinge spectra, thereby providing new
physical signatures.
The paper is organised as follows. In section II we introduce the magnetic
space-groups (MSG) and representation content which we will be considering. In
section III, we present the spectra in various finite-dimensional geometries
and comment on their connection to corner/edge charges and to HOTI. In section
IV, we further corroborate our findings by connecting them to the recently
introduced idea of real-space invariants and twisted boundary conditions Song
_et al._ (2020); Peri _et al._ (2020). In section V, we finally connect our
discussion to Wilson loops and a bulk-edge correspondence. We conclude in
section VI.
## II Setup and topology for MSG75.5 and MSG77.18
### II.1 Setup and topology for MSG75.5
The magnetic space-group (MSG) 75.5 is generated from the tetragonal (non-
magnetic) space group 75 (P4, generated by $C_{4}$ rotation) by including the
antiunitary symmetry $(E|\tau)^{\prime}$, with $E$ the identity,
$\tau=\boldsymbol{a}_{1}/2+\boldsymbol{a}_{2}/2$, $(\cdot)^{\prime}$ denoting
time-reversal and $\boldsymbol{a}_{i}$ the primitive vectors of a (primitive)
tetragonal Bravais lattice. Thus MSG75.5 is a Shubnikov type IV MSG which
hosts anti-ferromagnetic ordering Bradley and Cracknell (1972). We showed in
Bouhon _et al._ (2020a) that starting from magnetic Wyckoff position (mWP)
$2b$, with sites $\boldsymbol{r}_{A}=\boldsymbol{a}_{1}/2$ and
$\boldsymbol{r}_{B}=\boldsymbol{a}_{2}/2$, the real-space symmetries
necessitate a minimum of four states to be present in the unit cell. Our
choice of unit cell is shown in Fig. 1a). We introduced a spinful model of
this MSG with two sites per unit cell, each hosting two orbitals, in Bouhon
_et al._ (2020a) (summarized in Appendix A.1). This model can be split into
two disconnected two-band subspaces over the entire BZ whilst respecting all
symmetries of MSG75.5, and thus realizes a split magnetic elementary band
representation (MEBR) Elcoro _et al._ (2020). All symmetry indicators are
trivial in our model, and therefore one of these subspaces necessarily
realizes fragile topology, i.e. it can be trivialized by coupling to trivial
bands. The other two-band subspace realizes an atomically obstructed limit,
where the electrons localize at a mWP distinct from the mWP of the atomic
orbitals Bradlyn _et al._ (2017). We note, however, that both subspaces
display non-trivial Wilson loop winding as discussed in section V. Thus whilst
the obstructed atomic insulator label is useful pictorially, a more careful,
rigorous analysis based on the Wilson loop is needed in general, which we
carry out in section V. The split in our model can be written explicitly as:
$\mathrm{MEBR}^{2b}_{75.5}\rightarrow\underbrace{(\mathrm{MEBR}_{75.5}^{2b}\ominus\mathrm{MEBR}_{75.5}^{2a})}_{\mathrm{Lower\ subspace}}\oplus\underbrace{\mathrm{MEBR}_{75.5}^{2a}}_{\mathrm{Upper\ subspace}},$
where $\ominus$ denotes formal subtraction of MEBRs. In terms of the spinful
site-symmetry co-IRREPs (using notation from the Bilbao Crystallographic
Server Perez-Mato _et al._ (2011); Aroyo _et al._ (2006a, b)) this can be
written as:
$[\underbrace{(^{1}\overline{E}^{2}\overline{E})_{2b}\uparrow
G\ominus(^{1}\overline{E}_{1})_{2a}\uparrow G}_{\mathrm{Lower\
subspace}}]\oplus\underbrace{(^{1}\overline{E}_{1})_{2a}\uparrow
G}_{\mathrm{Upper\ subspace}}$ (1)
This decomposition can be determined directly from the momentum space IRREPs,
using the formalism of (magnetic) topological quantum chemistry Bradlyn _et
al._ (2017); Elcoro _et al._ (2020). For MSG75.5, the unitary symmetries do
not dictate any $k_{z}$ dependence. It is therefore legitimate to consider the
planes containing the time-reversal invariant momentum points (TRIMPs) in the
BZ separately, which results in an effective 2D model. We choose to focus on
the plane $k_{z}=0$.
To explore the physical consequences of magnetic fragile topology in this
system, we analyze its edge/corner spectrum and also consider its evolution
under twisted boundary conditions (TBC). We therefore build a finite 2D
lattice version of our model which respects $C_{4}$ symmetry. We consider two
different ways of building this lattice, illustrated in Fig. 1 b)-c). As can
be seen in Fig. 1b), including an integer number of unit cells necessarily
violates $C_{4}$ symmetry for a cut along the crystallographic axis. We
therefore consider a half-integer number of unit cells in both directions for
such a cut. We also consider how these spectra depend on the real-space
termination, which leads us to also consider the cut in Fig. 1c). This cut is
also useful with regard to the TBC, which we discuss in section IV, as there
are no orbitals on the boundaries between regions related by $C_{4}$ symmetry.
We refer to this as the diagonal cut.
Figure 1: Illustration of the unit cell (panel a) and two possible lattice
cuts that respect $C_{4}$ symmetry (panels b and c) for MSG75.5. a) The unit
cell, with (m)WPs labelled, using both the convention of MSG75.5
($2a,2a^{\prime},2b,2b^{\prime}$) and of wallpaper group 10/p4
($1a,1b,2c,2c^{\prime}$). We place one spin-up and one spin-down orbital at
$2b$ and $2b^{\prime}$, respectively. b) An integer number of unit cells
violates $C_{4}$ symmetry. This can be restored by considering a half-integer
number of unit cells, which corresponds to ignoring the sites circled in
green. c) To have $C_{4}$ symmetric sectors with no orbitals on the boundary,
we can perform the cut shown in the green square, with the boundary between
the sectors indicated.
Note that our finite model necessarily breaks the non-symmorphic antiunitary
symmetry $(E|\tau)^{\prime}$. This effectively reduces the symmetry to the
spinful ferro/ferrimagnetic phase 75.1 (Shubnikov type I), which in the 2D
case reduces to wallpaper group 10 (p4) with strong spin-orbit coupling (SOC)
and without time-reversal symmetry (TRS). This group has been well-studied in
e.g. Schindler _et al._ (2019); Song _et al._ (2020). We reproduce some of
their results to make the connection to the novel sub-dimensional topology in
MSG77.18 more transparent.
Wallpaper group 10 (p4) has three (non-magnetic) WPs: $1a$ and $1b$ with
point-group (PG) symmetry $C_{4}$ and $2c$ with PG symmetry $C_{2}$. This
labeling is also shown in Fig. 1a). This is similar to the model considered in
Song _et al._ (2020); there, however, the fragile phase originates from TRS.
In our case, the fragile phase results from magnetic symmetries which manifest
in the gluing of IRREPs in momentum space. In terms of the site-symmetry
IRREPs of MSG75.1, our model realizes the decomposition in equation (2),
$\underbrace{[(^{1}\overline{E})_{2c}(^{2}\overline{E})_{2c}\ominus(^{1}\overline{E}_{1})_{1a}(^{2}\overline{E}_{1})_{1b}]}_{\mathrm{Lower\
subspace}}\oplus\underbrace{(^{1}\overline{E}_{1})_{1a}(^{2}\overline{E}_{1})_{1b}}_{\mathrm{Upper\
subspace}}.$ (2)
As suggested by Eq. (1) and Eq. (2), in the following we refer to the
unoccupied band subspace as the obstructed phase because it is formally
compatible with an obstructed atomic limit in terms of its IRREPs content, and
we call the occupied band subspace the fragile phase because it can only be
written as a subtraction of two atomic limits.
### II.2 Setup and topology for MSG77.18
As there are no symmetry constraints in the $k_{z}$ direction for MSG75.5, we
can freely consider topologies on the planes $k_{z}=0$ and $k_{z}=\pi$
independently. This is not the case in MSG77.18, as was discussed in Bouhon
_et al._ (2020a). This MSG is similar to MSG75.5, except that all $C_{4}$
rotations are replaced by screw rotations $C_{4_{2}}=(C_{4}|00\frac{1}{2})$,
and the non-symmorphic time-reversal is replaced by $(E|\tau_{d})^{\prime}$
with
$\tau_{d}=\boldsymbol{a}_{1}/2+\boldsymbol{a}_{2}/2+\boldsymbol{a}_{3}/2$. The
non-symmorphic screw symmetry imposes connectivity constraints along the
$k_{z}$ direction which prevent us from globally gapping the band structure in
the 3D BZ. However, we can still gap the band structure on the planes
$k_{z}=0$ and $k_{z}=\pi$. We constructed a model in Bouhon _et al._ (2020a),
summarized in Appendix A.2, which realizes a gapped band structure on these
planes. Our model is based on mWP $2a$ in MSG77.18, with sites
$\boldsymbol{r}_{A}=\boldsymbol{a}_{1}/2+z\boldsymbol{a}_{3}$,
$\boldsymbol{r}_{B}=\boldsymbol{a}_{2}/2+(z+1/2)\boldsymbol{a}_{3}$, where we
specialize to $z=0$. This model agrees with the model for MSG75.5 on the plane
$k_{z}=0$, and therefore everything we find for MSG75.5 holds on the plane
$k_{z}=0$ of MSG77.18 as well.
To satisfy the symmetry constraints, we must necessarily have two nodal points
along the $\overline{\Gamma\text{Z}}$ line and two other nodal points on the
$\overline{\text{MA}}$ line, resulting in a semi-metallic phase as studied by
e.g. Bouhon and Black-Schaffer (2017a, b); Mathai and Thiang (2017); Sun _et
al._ (2018); Slager _et al._ (2017); Yang and Nagaosa (2014); Slager _et
al._ (2016); Zou _et al._ (2019); Wieder _et al._ (2020); Călugăru _et al._
(2019); Lin and Hughes (2018). These Weyl nodes manifest through Fermi arcs in
the surface spectrum, and as gapless states in the hinge spectrum as we show
in Fig. 5. Furthermore, the pair of Weyl points on one vertical line are
related by $C_{2}T$ symmetry and must thus have equal chirality, say positive
chirality for the Weyl points on $\overline{\Gamma\text{Z}}$ Bouhon _et al._
(2020a). We can thus define a vertical cylinder surrounding the $k_{z}$-axis
over which the Chern number must be $+2$. By the Nielsen-Ninomiya theorem this
must in turn be compensated by the Weyl points appearing on the
$\overline{\text{MA}}$ line contributing a Chern number of $-2$. We thus see
that upon employing the sub-dimensional analysis we can predictively
enumerate in-plane non-triviality and, accordingly, the necessarily present Weyl
nodes that are the consequence of the in-plane topology Bouhon _et al._
(2020a). In the rest of this paper, we further explore the consequences of
these topological structures in MSG75.5 and MSG77.18.
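The compensation argument above hinges on evaluating Chern numbers on closed 2D surfaces in the BZ. As an illustration only (not our four-band models), here is a minimal sketch of a lattice Chern-number computation via the Fukui-Hatsugai link method, demonstrated on the standard two-band QWZ Chern insulator:

```python
import numpy as np

def chern_number(hk, nk=40, band=0):
    """Lattice Chern number of one band via the Fukui-Hatsugai link method."""
    ks = np.linspace(0.0, 2*np.pi, nk, endpoint=False)
    dim = hk(0.0, 0.0).shape[0]
    u = np.empty((nk, nk, dim), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(hk(kx, ky))
            u[i, j] = vecs[:, band]            # eigenvector of the chosen band

    def link(a, b):                            # U(1) link variable
        z = np.vdot(a, b)
        return z / abs(z)

    F = 0.0
    for i in range(nk):
        for j in range(nk):
            ip, jp = (i + 1) % nk, (j + 1) % nk
            plaq = (link(u[i, j], u[ip, j]) * link(u[ip, j], u[ip, jp])
                    * link(u[ip, jp], u[i, jp]) * link(u[i, jp], u[i, j]))
            F += np.angle(plaq)                # field strength, principal branch
    return F / (2*np.pi)

# Two-band Chern insulator (QWZ model) as a test case
_sx = np.array([[0, 1], [1, 0]], dtype=complex)
_sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
_sz = np.array([[1, 0], [0, -1]], dtype=complex)

def qwz(m):
    return lambda kx, ky: (np.sin(kx)*_sx + np.sin(ky)*_sy
                           + (m + np.cos(kx) + np.cos(ky))*_sz)
```

Summing this invariant over a cylinder enclosing the $k_{z}$-axis is precisely what forces the compensating Weyl nodes on the $\overline{\text{MA}}$ line.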
## III Corner charges
To elucidate the physical consequences of magnetic fragile phases in sub-
dimensional topologies, we begin by considering the corner charges present in
the system. Corner charges were studied in detail, for 2D non-magnetic
systems, in Benalcazar _et al._ (2019); Schindler _et al._ (2019). They can
be related to electric multipole moments, as described in Benalcazar _et al._
(2017). For an obstructed atomic or fragile insulator it can happen that
charge neutrality and space-group symmetry are incompatible. Symmetry then
necessitates an imbalance of ionic and electronic degrees of freedom, which
gives rise to a filling anomaly $\chi$, defined as:
$\chi=(\#\mathrm{Electronic\ sites}-\#\mathrm{Ionic\ sites})\ \mathrm{mod}\
n$ (3)
Symmetry-allowed perturbations of the edges and corners can change this number
in general, but it is always well-defined modulo some number $n$ related to
the order of the symmetry. For the corner charges in MSG75.5 and MSG77.18,
$n=4$ due to the fourfold rotation symmetry.
If the edges are insulating, then the excess charge must localize at the
corners of the system. If $\chi$ is incompatible with the order of the
symmetry, then there will be fractional charges at the corners. This charge on
the corner is only well-defined in the absence of an edge state, as edge
states generically allow charge to flow away from the corner. Corner charges
are therefore closely linked with higher-order topological insulators (HOTIs),
as explored in Schindler _et al._ (2018); Slager _et al._ (2015); Wieder
_et al._ (2020); Călugăru _et al._ (2019); Benalcazar _et al._ (2019);
Khalaf _et al._ (2018); Van Miert and Ortix (2018); Khalaf _et al._ (2019).
We find that this allows for a counting procedure to determine the excess
charge in MSG75.5. We only consider half-filling, so that every ionic and
electronic site contributes a single charge.
### III.1 Corner charges in MSG75.5
As $\mathrm{MEBR}_{75.5}^{2b}$ corresponds to a trivial insulator, with the
electrons (center of band charges) localized at the ionic sites, its total
filling anomaly must be zero. It follows that the filling anomalies of the
occupied and unoccupied subspaces of the split EBR in MSG75.5 must sum to zero
(modulo 4). We can therefore determine the filling anomaly of the fragile
phase (i.e. the occupied subspace) at half-filling by studying the atomic
obstructed phase (i.e. the unoccupied subspace) and counting the ionic and
electronic sites in the system (this only works because of the vanishing bulk
polarization, see Schindler _et al._ (2019)). For this purpose, we redraw
Fig. 1 with the sites of the magnetic WP$2a$ included. This corresponds to the
non-magnetic WPs $1a$ and $1b$ and is shown in Fig. 2.
Figure 2: Same cuts for the $k_{z}=0$ plane of MSG75.5/MSG77.18 as in Fig. 1b)
and c), but including the WPs where the electrons localize. The electrons
localize on the smaller (black/green) sites, but the ions sit on the larger
(red/blue) sites.
Counting the total number of electronic and ionic sites gives for the cut in
Fig. 2a) a filling anomaly of $\chi=41-40=1$. Note, however, that whether or
not we include the electronic sites on the boundary is a matter of convention,
as they only represent the localization centers of the electrons (they are not
real sites in our model). We can therefore discount charges on the edge, but
must do so in a $C_{4}$ symmetric fashion, as discussed above. We thus expect
the obstructed phase to have an excess charge of $1$ mod $4$ electrons, and
the fragile phase must then have a compensating excess charge of $3$ mod $4$
electrons. The same counting gives for the diagonal cut in Fig. 2b) an anomaly of
$\chi=25-16=9$, which gives the same $\chi$ of $1$ mod $4$ in the obstructed
phase.
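The counting above is simple modular arithmetic; a minimal sketch with the site counts quoted in the text:

```python
def filling_anomaly(n_electronic, n_ionic, n=4):
    """Eq. (3): chi = (#electronic sites - #ionic sites) mod n,
    with n = 4 fixed here by the fourfold rotation symmetry."""
    return (n_electronic - n_ionic) % n

# Straight cut, Fig. 2a): 41 electronic vs 40 ionic sites
assert filling_anomaly(41, 40) == 1
# Diagonal cut, Fig. 2b): 25 vs 16 sites, chi = 9 = 1 mod 4
assert filling_anomaly(25, 16) == 1
```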
To determine whether or not this filling anomaly gives rise to corner states,
we must determine whether there is any excess charge localized on the edge of
the system, which could result in a conducting edge. We show the counting of
charges on the edge in Fig. 3, together with the edge spectra for the relevant
cuts.
Figure 3: Counting of charges on the edge for MSG75.5. The arrows indicate
which directions are considered to be periodic. The greyed out orbitals stem
from adjacent unit cells and the black dotted lines indicate a possible choice
of cell for the diagonal case. This is shown for the straight cut in panels a)
and b) and the diagonal cut in panel c) and d), with associated edge band
structures. The edge bands were calculated using the PythTB package Coh and
Vanderbilt (2016), with 150 unit cells perpendicular to the edge.
To count the orbitals, we must take care not to overcount sites which are
periodic images of each other (indicated by greyed out orbitals). This gives a
filling anomaly of $\chi=7-7=0$ for Fig. 3a), and $14-12=2$ for Fig. 3b).
Note, however, that we can remove charges on the edge, as long as we remove
them symmetrically from both edges, i.e. the edge charges are quantized mod 2.
We therefore do not expect any fractional charges on the edge for either cut,
and therefore expect quantized corner charges. We noted in Ref. Bouhon _et
al._ (2020a), that we have a quantized Berry phase of $\pi$ in the fragile
subspace. This does not give rise to a topological edge state because the
ionic sites are shifted from the origin by the same amount as the electronic
sites, as discussed in section V. If this relation is violated, we expect an
edge state to arise. We show the effect of removing various orbitals on the
edge on the edge spectrum in Fig. 9 in Appendix C.1. This confirms that we can
get in-gap states by removing edge orbitals. When we remove pairs of orbitals
at one edge, there are no topological in-gap edge states.
We note that this counting can also be used to predict the split of states
into the occupied/unoccupied space in this case. If there are $N$ total states
in the system, we expect $N/2-1$ states below the gap, $4$ corner states in
the gap and $N/2-3$ states above the gap. Note, however, as shown in Fig. 3c)
and d), that there are residual, model-specific, edge branches in the system
which slightly extend into the gap but are not genuine topological in-gap
states. These likely come from a nearby symmetry which our model only breaks
weakly. We therefore expect some residual charge on the edges, but this charge
is not topologically protected. To compute the corner charges, we therefore
sum over the charge from the occupied bands contained in a region with finite
thickness including the edge. We show the total charge for both cuts in Fig.
4, together with the associated spectra and typical corner states.
Figure 4: Corner charges and spectrum for the straight cut a) and b) and
diagonal cut c) and d) respectively for MSG75.5. On the top, we show the
charge distribution, with Fermi level set at the top of the gap, above the
corner states. Red indicates excess charge (relative to the center), blue a
deficit of charge where we sum over all occupied states. We confirm
numerically that the excess electronic charge in a region away from the center
is $3$ for both cuts. In the lower panels we show the spectrum, including the
absolute value squared of a typical corner state. In both cuts, there are
$N/2-1$ occupied states, $4$ corner states and $N/2-3$ unoccupied states,
where $N$ is the total number of states. All calculations were done using the
PythTB package Coh and Vanderbilt (2016).
To confirm that these are indeed corner charges, we also plot the same system
with a single spin removed on the boundary in Fig. 11 in Appendix C.2. This
violates the integer quantization of charges (since we are removing one
orbital while we remain at half-filling) and naturally leads to the appearance
of an edge state with edge charges. In Fig. 11, we clearly see that the edge
states dominate the corner charges, illustrating that, in contrast, Fig. 4
displays corner charges. We also confirm numerically that the excess charge in
a region including the boundary is $3$ when fixing the Fermi energy above the
corner states but below the upper (conduction) subspace. The sub-dimensional
topology in MSG75.5 thus hosts corner charges. As the unitary symmetries do
not dictate the $k_{z}$ dependence for MSG75.5, we expect that these $C_{4}$
symmetry protected corner charges are present for all values of $k_{z}$,
leading to hinge states.
### III.2 Corner charges in MSG77.18
The translational symmetry in the $z$-direction for MSG75.5 allows for the
existence of hinge states that can be traced to the corner states of the
fragile topology at $k_{z}=0$ and $k_{z}=\pi$. The screw symmetry in MSG77.18
breaks this translational symmetry, and connects the planes at $k_{z}=0$ and
$k_{z}=\pi$. Our model for MSG77.18 was introduced in Bouhon _et al._
(2020a), and is discussed in Appendix A.2. It agrees with the model for
MSG75.5 on the plane $k_{z}=0$, and we therefore expect corner charges on this
plane. As we move along the $k_{z}$ direction, the screw symmetry necessitates
the existence of Weyl nodes along $\overline{\Gamma\mathrm{Z}}$ and
$\overline{\mathrm{MA}}$, which leads to Fermi arcs in the surface spectrum.
We denote the $k_{z}>0$ coordinates of these Weyl nodes as
$k_{\overline{\Gamma\mathrm{Z}}}$ and $k_{\overline{\mathrm{MA}}}$. In the
hinge spectrum, we then expect a gap closing for all
$k_{z}\in\big{[}\mathrm{min}\big{(}k_{\overline{\Gamma\mathrm{Z}}},k_{\overline{\mathrm{MA}}}\big{)},\mathrm{max}\big{(}k_{\overline{\Gamma\mathrm{Z}}},k_{\overline{\mathrm{MA}}}\big{)}\big{]}$.
We plot the hinge spectrum for the straight and diagonal cut in Fig. 5.
Figure 5: Hinge spectrum and edge spectrum in the $k_{z}=\pi$ plane for a slab
calculation of MSG77.18. Panels a) and b) correspond to straight cuts. Panels
c) and d) present data for the diagonal cut. All calculations were done using
the PythTB package Coh and Vanderbilt (2016).
We note that the symmetry $k_{z}\rightarrow-k_{z}$ is not maintained for
edge/corner states, as the non-symmorphic $C_{2}T$ symmetry (relating $k_{z}$
to $-k_{z}$) is broken at the hinge. We also note that we get clear in-gap
states for the hinge spectrum in the diagonal cut. We show the equivalent of
Fig. 4 for the $k_{z}=\pi$ plane of 77.18 in Fig. 6.
Figure 6: Corner charges for the straight cut a) and b) and diagonal cut c)
and d) respectively for the $k_{z}=\pi$ plane of MSG77.18. On the top, we show
the charge distribution, with Fermi level fixed to the same value as for
MSG75.5, shown in Fig. 4. Red indicates excess charge (relative to the
center), blue a deficit of charge. We confirm numerically that the excess
electronic charge in a region away from the center is $1$ for the straight
cut. In the lower panels we show the spectrum, including the absolute value
squared of a typical corner state. In the straight cut, there are $N/2-3$
occupied states, $4$ corner states, and $N/2-1$ empty states, with $N$ the
total number of states. All calculations were done using the PythTB package
Coh and Vanderbilt (2016).
We note that the diagonal cut actually hosts edge modes on this plane, and
that the corner charge differs from that on the $k_{z}=0$ plane. This
illustrates that the counting approach, which we adopted for MSG75.5 (or
equivalently the $k_{z}=0$ plane of MSG77.18), breaks down on the plane
$k_{z}=\pi$. Thus care must be taken when extending counting schemes in 2D
such as those discussed in Benalcazar _et al._ (2019); Schindler _et al._
(2019) to full 3D models. This difference between the planes can be understood
by considering more carefully where the electrons localize on the plane
$k_{z}=\pi$ of MSG77.18, as we describe in section V. The counting for the
$k_{z}=0$ plane of MSG77.18 only works because we have a direct map to an
explicit 2D model, where the obstructed limits are known.
## IV Real space invariants and twisted boundary conditions
Having explored the corner and edge spectrum, we will turn to relating the
bulk and boundary features from a Wilson flow perspective in the next section.
However, before we move to this main topic, we briefly comment on the
connection with the recently introduced concepts of twisted boundary
conditions (TBC) Song _et al._ (2020); Peri _et al._ (2020) and real-space
invariants. In particular, we note that on a single plane (e.g. $k_{z}=0$) of
MSG75.5, everything we have discussed is exclusively protected by $C_{4}$
symmetry. As a result, we can directly relate to the results as discussed in
Ref. Song _et al._ (2020), providing an alternative perspective on fragile
phases. This also serves to connect our work to other recent works, and can be
relevant in an experimental setting Peri _et al._ (2020). This formalism is
most transparent when the phase being considered is gapped and when there are
no orbitals on the boundary between regions related by $C_{4}$ symmetry. We
therefore focus exclusively on the $k_{z}=0$ plane of MSG75.5/MSG77.18, using
the diagonal cut illustrated in Fig. 1c), as MSG77.18 hosts a gapless phase on
the $k_{z}=\pi$ plane for this cut.
This plane hosts wallpaper group p4 with SOC but without TRS. Using the
expressions for the real-space invariants (RSIs) found in Song _et al._
(2020), we find that both the occupied and the unoccupied subspace have
nonzero RSIs, the expressions of which can be found in appendix D. This
implies a non-trivial flow under twisted boundary conditions (TBC), where the
coupling between the $C_{4}$ symmetric sectors is twisted by a factor
$\lambda=e^{i\theta}$. The spectrum under this twisting is shown in Fig. 7,
and we check that it agrees with the flow predicted from the RSIs. Note also
the presence of the corner states which, as they are not cut by the TBCs, do
not flow.
Figure 7: Twisted boundary conditions for a finite model of the $k_{z}=0$
plane of MSG75.5. The hoppings between adjacent regions (orange arrows) in the
clockwise/counterclockwise direction are multiplied by $\lambda=e^{\pm
i\theta}$. The hoppings between diagonal regions (purple arrows) are
multiplied by Re($\lambda^{2}$).
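As a minimal illustration of this twisting procedure (a toy sketch of our own, not the actual finite model of Fig. 7), consider four sites, one per $C_{4}$-related region, coupled on a ring; multiplying the inter-sector hoppings by $e^{\pm i\theta}$ in the clockwise/counterclockwise direction threads an effective flux and produces the spectral flow characteristic of TBCs:

```python
import numpy as np

def twisted_ring(t=1.0, theta=0.0, n=4):
    """Toy twisted boundary conditions: n sites (one per C4 sector) on a
    ring, with hoppings multiplied by exp(+i*theta) in the clockwise and
    exp(-i*theta) in the counterclockwise direction. Returns the sorted
    eigenvalues."""
    lam = np.exp(1j * theta)
    H = np.zeros((n, n), dtype=complex)
    for j in range(n):
        H[j, (j + 1) % n] = t * lam            # clockwise hop, twisted
        H[(j + 1) % n, j] = t * np.conj(lam)   # Hermitian conjugate
    return np.sort(np.linalg.eigvalsh(H))

# untwisted spectrum 2t*cos(pi*m/2): {-2, 0, 0, 2} in units of t
print(twisted_ring(theta=0.0))
# the spectrum flows with theta and returns to itself at theta = pi/2
```

The analytic spectrum is $2t\cos(2\pi m/4+\theta)$, so levels exchange under the twist, which is the elementary mechanism behind the RSI-predicted flow (states pinned away from the cut, such as corner states, do not couple to $\theta$ and hence do not flow).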
The RSIs can also be used to predict whether or not a set of bands is
fragile. We find, as anticipated from the above discussion, that the lower
band subspace is fragile, whereas the upper subspace is not. We refer to
Appendix D for further details.
## V Bulk-edge correspondence
The Wilson loop spectrum provides a powerful tool for the bulk-edge
correspondence, which we here consider for the different edge terminations and
at the different $k_{z}$-planes. We recall that upon integrating the Wilson
loop along the momentum direction ($k_{\perp}$) perpendicular to the edge
direction ($x_{\parallel}$), the Wilson loop phases give the
$x_{\perp}$-component of the center of charge of the occupied band states.
Each phase thus relates to a specific WP, which can then be used to predict an
atomic obstruction (i.e. the displacement of band charges to a WP distinct
from the WP of the atomic orbitals) Zak (1989) and, subsequently, an
edge-specific charge anomaly Su _et al._ (1979). Importantly, this argument is
conclusive only when the Wilson
loop spectrum is quantized as an effect of symmetries, since different WPs
only then relate to topologically distinct phases (thus avoiding the adiabatic
transfer of charges from one WP to another). In the following we make use of
the $\\{0,\pi\\}$ Berry phases (given as the sum of all Wilson loop phases)
protected by $C_{2}T$ symmetry Bouhon _et al._ (2019) and the
$\mathbb{Z}_{2}$ polarization protected by TRS and $C_{2}$ symmetry Lau _et
al._ (2016) associated with Kramers degeneracies of the Wilson loop spectrum.
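As a computational aside, the Wilson loop spectrum invoked here can be obtained numerically as the eigenphases of the path-ordered product of overlap matrices of the occupied Bloch eigenstates along a closed momentum path; the Berry phase is then the sum of these eigenphases. A minimal sketch follows (the function name is ours, and the SSH chain in the demo is only a stand-in with a well-known $\pi$ Berry phase, not one of the paper's models):

```python
import numpy as np

def wilson_loop_phases(states):
    """Eigenphases of the Wilson loop built from occupied Bloch eigenvectors.

    states: array of shape (Nk, dim, nocc) along a closed k-path, with the
    last frame equal to the first. The result is gauge invariant because the
    first and last frames are identical."""
    nocc = states.shape[2]
    W = np.eye(nocc, dtype=complex)
    for i in range(states.shape[0] - 1):
        M = states[i].conj().T @ states[i + 1]  # overlap matrix <u_k|u_{k+1}>
        U, _, Vh = np.linalg.svd(M)
        W = W @ (U @ Vh)  # unitarize to suppress discretization error
    return np.sort(np.angle(np.linalg.eigvals(W)))

# demo on a two-band SSH chain, whose occupied band carries Berry phase pi
# in the topological regime (intracell hopping < intercell hopping)
t_intra, t_inter = 0.5, 1.0
ks = np.linspace(0.0, 2*np.pi, 200, endpoint=False)
frames = []
for k in ks:
    h = t_intra + t_inter*np.exp(-1j*k)
    Hk = np.array([[0.0, h], [np.conj(h), 0.0]])
    _, v = np.linalg.eigh(Hk)
    frames.append(v[:, :1])       # occupied (lower) band
frames.append(frames[0])          # close the loop
berry = wilson_loop_phases(np.array(frames)).sum() % (2*np.pi)
print(berry)  # close to pi
```

The per-step gauge ambiguity of `np.linalg.eigh` drops out because the first and last frames coincide, which is the discrete analogue of the gauge invariance of the Wilson loop over a non-contractible path.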
We start with a discussion of the effect of the symmetries of MSG75.5 and
MSG77.18 on the Wilson loops in relation to (i) the sub-dimensional bulk
topologies ($k_{z}=0,\pi$), and (ii) the two possible choices of geometries
(straight versus diagonal, see Fig. 8 a)). This allows us to determine which
topological invariant ($\boldsymbol{Z}_{2}$ Berry phase and
$\boldsymbol{Z}_{2}$ polarization) is associated with an edge geometry, as
well as its actual value indicated by symmetry. We then motivate the bulk-edge
correspondence for each case.
### V.1 Topological insights from Wilson loop
We show in Fig. 8 the Wilson loop spectrum for MSG77.18 (and MSG75.5, see
below) corresponding to a straight geometry at the plane b) $k_{z}=0$ and d)
$k_{z}=\pi$, and corresponding to the $45^{\circ}$-rotated (diagonal) geometry
at the plane c) $k_{z}=0$ and e) $k_{z}=\pi$. Fig. 8 a) shows the directions
of integration in the momentum space with a full-line double arrow (blue) for
the straight geometry and a dashed double arrow (orange) for the diagonal
geometry. The Wilson loop spectrum for MSG75.5 is qualitatively the same as in
Fig. 8 b) for the straight geometry and Fig. 8 c) for the diagonal geometry,
both for the planes $k_{z}=0$ and $k_{z}=\pi$. Indeed, for MSG75.5 the sub-
dimensional topologies at $k_{z}=0$ and $k_{z}=\pi$ are the same.
Figure 8: a) Planes of the Brillouin zone at $k_{z}=0$ ($\Gamma$, X, M, X’)
and $k_{z}=\pi$ (Z, R, A, R’), along straight directions
$\\{\boldsymbol{b}_{1},\boldsymbol{b}_{2}\\}$, and diagonal directions
$\\{\boldsymbol{b}_{1}+\boldsymbol{b}_{2},-\boldsymbol{b}_{1}/2+\boldsymbol{b}_{2}/2\\}$.
b)-e) Wilson loop spectrum for MSG77.18 integrated (b,d) along the straight
direction $\boldsymbol{b}_{1}$ (while varying $k_{y}$), and (c,e) along the
diagonal direction $\boldsymbol{b}_{1}+\boldsymbol{b}_{2}$ (while varying
$k_{2}$), for the plane (b,c) $k_{z}=0$ and (d,e) $k_{z}=\pi$. The respective
paths of Wilson loop integration are
$l_{k_{y}}=[(0,k_{y})+\boldsymbol{b}_{1}\leftarrow(0,k_{y})]$ for the straight
geometry (full blue) and
$l_{k_{2}}=[(0,k_{2})+\boldsymbol{b}_{1}+\boldsymbol{b}_{2}\leftarrow(0,k_{2})]$
for the diagonal geometry (dashed orange), shown as double arrows in a).
#### V.1.1 Sub-dimensional topologies: $k_{z}=0$ vs $k_{z}=\pi$
Let us now explain the differences between the Wilson loops in terms of the
sub-dimensional topologies, i.e. $k_{z}=0$ versus $k_{z}=\pi$, for MSG77.18.
The sub-dimensional topology of the $k_{z}=0$ plane for MSG77.18 is identical
to the sub-dimensional topology of MSG75.5, with the high-symmetry points
$\\{\Gamma,\text{M}\\}$ as TRIMPs (time-reversal invariant momentum points)
and with $[C_{2}T]^{2}=+1$, which protects the complete winding of the
two-band Wilson loop, thus indicating a nontrivial Euler class (fragile)
topology Bouhon _et al._ (2019, 2020a). The winding of the Wilson loop does not depend on
the geometry (straight or diagonal) and we accordingly find complete Wilson
loop windings in Fig. 8 b) and c). At $k_{z}=\pi$, the sub-dimensional
topology for MSG77.18 is characterized by the TRIMPs
$\\{\text{R},\text{R'}\\}$ (which differ from the TRIMPs
$\\{\text{Z},\text{A}\\}$ for MSG75.5) and with $[C_{2}T]^{2}=-1$ which
implies the Kramers degeneracy of the energy bands over the whole momentum
plane, while ruling out Euler class topology Bouhon _et al._ (2020a).
Accordingly, we find that there is no complete winding of the Wilson loops in
Fig. 8 d) and e).
#### V.1.2 Symmetries and quantizations of Wilson loop: straight vs diagonal
geometry
We now turn to a detailed account of the quantizations and symmetries of
Wilson loop spectra protected by symmetries for the two geometries, i.e.
straight versus diagonal. Our aim is to determine which topological invariant
($\boldsymbol{Z}_{2}$ Berry phase and $\boldsymbol{Z}_{2}$ polarization) can
be associated with an edge geometry and when it is symmetry indicated (i.e.
with a definite value). These results combine the effects of $C_{2}$ symmetry,
and the non-symmorphic anti-unitary symmetries TRS and $C_{2}T$. The
derivation of the constraints on the Wilson loop due to the non-symmorphic
anti-unitary symmetries (TRS and $C_{2}T$) is given in Appendix B. Below we
write $\mathcal{W}[l_{k}]$ for the Wilson loop with spectrum (phases)
$\\{\varphi_{1}(k),\varphi_{2}(k)\\}$ evaluated over the non-contractible base
path $l_{k}$ that is parametrized by $k=k_{y}$ in the straight geometry, and
by $k=k_{2}$ in the diagonal geometry [Fig. 8 a)].
In the straight geometry, the Wilson loop phases at $k_{y}=0$ and $k_{y}=\pi$
are quantized to $(\varphi_{1},\varphi_{2})=(0,\pi)$ by the $C_{2}$ symmetry,
as follows from the $C_{2}$ eigenvalues of the band eigenstates at the
high-symmetry points $\Gamma$ and M Alexandradinata _et al._ (2014); Bouhon
and Black-Schaffer (2017c); Bouhon _et al._ (2019), so that the Berry phase must
be
$\gamma_{B}[l_{k_{y}=0,\pi}]=\varphi_{1}+\varphi_{2}\>\mathrm{mod}\>2\pi=\pi\>\mathrm{mod}\>2\pi$.
This is true on both planes $k_{z}=0,\pi$. Then on the $k_{z}=0$ plane, by the
$\boldsymbol{Z}_{2}$ quantization of the Berry phase protected by $C_{2}T$
($[C_{2}T]^{2}=+1$), we conclude that the Berry phase must be $\pi$ for all
$k_{y}$. On the $k_{z}=\pi$ plane, we have $[C_{2}T]^{2}=-1$ and there is the
question of the quantization of the Berry phase. We have shown in Appendix B
that the non-symmorphic $C_{2}T$ imposes
$\varphi_{2}(k_{y})=-\varphi_{1}(k_{y})+\pi\>\mathrm{mod}\>2\pi$ on the
spectrum of the Wilson loop. We thus get
$\gamma_{B}(k_{y})=\varphi_{1}(k_{y})+\varphi_{2}(k_{y})=\pi\>\mathrm{mod}\>2\pi$
for all $k_{y}$. We note that the non-symmorphic TRS furthermore requires
$\varphi_{2}(-k_{y})=\varphi_{1}(k_{y})+\pi\>\mathrm{mod}\>2\pi$ (see
Appendix B), which explains the global structure of the Wilson loops in Fig. 8 b)
and d). Also, since there is no path $l_{k_{y}}$ for which TRS squares to $-1$
for all $k_{x}$, no $\boldsymbol{Z}_{2}$ polarization can be defined in the
straight geometry.
In the diagonal geometry, the Wilson loop is quantized by $C_{2}$ at
$k_{2}=\pm\pi/\sqrt{2}$ to $(\varphi_{1},\varphi_{2})=(0,0)$, for $k_{z}=0$
and $k_{z}=\pi$ (this follows from the fact that the $C_{2}$ eigenvalues of
the occupied-band eigenstates at X and at R are all equal to $\mathrm{i}$ or
$-\mathrm{i}$), such that the Berry phase is zero. (At $k_{2}=0$, and
$k_{z}=0$ or $k_{z}=\pi$, on the contrary, there is no quantization of the
Wilson loop from $C_{2}$.) At $k_{z}=0$, the $\boldsymbol{Z}_{2}$ quantization
of the Berry phase (protected by $C_{2}T$ with $[C_{2}T]^{2}=+1$) implies it
must be zero for all $k_{2}$, in agreement with Fig. 8 c). At $k_{z}=\pi$, the
non-symmorphic $C_{2}T$ symmetry requires
$\varphi_{2}(k_{2})=-\varphi_{1}(k_{2})$ (see Appendix B), hence again the
Berry phase must be zero for all $k_{2}$, in agreement with Fig. 8 e). We now
address the effect of non-symmorphic TRS which requires
$\varphi_{2}(k_{2})=\varphi_{1}(-k_{2})$ for $k_{z}=0$ and $k_{z}=\pi$ (see
Appendix B), which explains the global structure of the Wilson loops in Fig. 8
c) and e). Furthermore, from the square of TRS,
$T^{2}=-\mathrm{e}^{-\mathrm{i}(k_{x}+k_{y}+k_{z})}$, we predict one diagonal
per $k_{z}$-plane along which $T^{2}=-1$. We find $T^{2}=-1$ for
$k_{x}=k_{y}\>\mathrm{mod}\>2\pi$ at $k_{z}=0$, i.e. on the diagonal
$\overline{\Gamma\text{M}}$ (connecting the TRIMPs $\Gamma$ and M), and
$T^{2}=-1$ for $k_{x}=k_{y}+\pi\>\mathrm{mod}\>2\pi$ at $k_{z}=\pi$, i.e. on
the diagonal $\overline{\text{RR'}}$ (connecting the TRIMPs R and R’). This
results in the presence of Kramers degeneracies of the Wilson loop spectrum at
$k_{2}=0$ for $k_{z}=0$, namely $(\varphi_{1},\varphi_{2})=(\pi,\pi)$, and at
$k_{2}=\pm\pi/\sqrt{2}$ for $k_{z}=\pi$, namely
$(\varphi_{1},\varphi_{2})=(0,0)$. Moreover, the combination of $C_{2}$ and
TRS leads to the definition of a $\nu[l]\in\boldsymbol{Z}_{2}$ polarization on
these diagonals which is directly indicated by the phases of the Kramers
degeneracies, i.e.
$\displaystyle\nu^{(k_{z}=0)}[l_{k_{2}=0}]$ $\displaystyle=1,$ (4)
$\displaystyle\mathrm{and}~{}\nu^{(k_{z}=\pi)}[l_{k_{2}=\pi/\sqrt{2}}]$
$\displaystyle=0.$
We show below that these have important consequences for the bulk-edge
correspondence.
### V.2 Sub-dimensional charge anomalies and edge states
Let us first summarize the findings of the previous section. The Berry phase
is $\pi$ in the straight geometry on both planes $k_{z}=0$ and $k_{z}=\pi$.
The Berry phase is $0$ in the diagonal geometry on both planes $k_{z}=0$ and
$k_{z}=\pi$. There is no $\boldsymbol{Z}_{2}$ polarization in the straight
geometry. In the diagonal geometry, there is a nontrivial $\boldsymbol{Z}_{2}$
polarization on $l^{(k_{z}=0)}_{k_{2}=0}=\overline{\Gamma\text{M}}$, and a
trivial one on $l^{(k_{z}=\pi)}_{k_{2}=\pi/\sqrt{2}}=\overline{\text{RR'}}$.
#### V.2.1 Straight geometry
We consider here the bulk-edge correspondence at $k_{z}=0$ for which the
phases for MSG75.5 and MSG77.18 are the same. We use MSG75.5 to give a
detailed account of the bulk-edge correspondence in that momentum plane.
Let us start with a discussion of the straight edge geometry, arguing
explicitly for MSG75.5 that the $\pi$-Berry phase does not indicate a charge
anomaly. Indeed, the Wyckoff position $2b$ of the atomic orbitals and the
Wyckoff positions $2a$ of the obstructed band charges both have, component-
wise, one site centered in the unit cell and one site shifted to the unit cell
boundary. That is, defining the (unordered) sets of component-wise positions
relative to the unit cell center
$\displaystyle\\{\boldsymbol{r}^{(2b)}_{A,i},\boldsymbol{r}^{(2b)}_{B,i}\\}$
$\displaystyle=\\{0,a/2\\}\>\mathrm{mod}\>a,$ (5)
$\displaystyle\mathrm{and}~{}\\{\boldsymbol{r}^{(2a)}_{C,i},\boldsymbol{r}^{(2a)}_{D,i}\\}$
$\displaystyle=\\{0,a/2\\}\>\mathrm{mod}\>a,$
for $i=x,y$, there is no difference. Therefore, even in the case of an
obstruction (from WP $2b$ to WP $2a$) there is no charge anomaly, i.e. no
mismatch between the number of charges localized at the distinct WPs, in a
ribbon system with straight edge cuts, assuming that a slice of the ribbon
contains an integer number of unit cells (see Appendix C.1 for the results
for a fractional number of unit cells). This can also be readily checked
through direct counting. We thus conclude that there is no topological edge
branch along the straight edges. This conclusion transposes directly to
MSG77.18 at $k_{z}=0$, as well as to $k_{z}=\pi$ for MSG75.5.
Since the Berry phase is also $\pi$ at $k_{z}=\pi$ for MSG77.18, we find,
similarly to $k_{z}=0$, that there is no edge state in the straight geometry
(assuming an integer number of unit cells per slice of the ribbon). We
conclude that in the straight geometry there is no effect of the
sub-dimensional topology (i.e. comparing $k_{z}=0$ and $k_{z}=\pi$) on the
bulk-edge correspondence for MSG77.18.
#### V.2.2 Diagonal geometry
To fix ideas, we assume that $x_{1}=x_{\perp}$ is the direction perpendicular
to the edge, and $x_{2}=x_{\parallel}$ the direction parallel to it. Then, we
define the diagonal unit cell through
$\boldsymbol{a}_{\perp}=\boldsymbol{a}_{1}/2+\boldsymbol{a}_{2}/2=a/\sqrt{2}\hat{e}_{1}$
and
$\boldsymbol{a}_{\parallel}=-\boldsymbol{a}_{1}+\boldsymbol{a}_{2}=\sqrt{2}a\hat{e}_{2}$,
such that invariance under a translation by $\boldsymbol{a}_{\parallel}$ is
satisfied. Note that this unit cell is different from the edge unit cell in
Fig. 5. Writing the atomic positions in the diagonal axes, i.e.
$\boldsymbol{r}=(x_{1},x_{2})$, the perpendicular components ($x_{1}$) of the
Wyckoff positions in the diagonal geometry are
$\displaystyle(\boldsymbol{r}^{(2b)}_{A,1},\boldsymbol{r}^{(2b)}_{B,1})$
$\displaystyle=\dfrac{a}{2\sqrt{2}}(1,1)\>\mathrm{mod}\>\dfrac{a}{\sqrt{2}},$
(6)
$\displaystyle\mathrm{and}~{}(\boldsymbol{r}^{(2a)}_{C,1},\boldsymbol{r}^{(2a)}_{D,1})$
$\displaystyle=(0,0)\>\mathrm{mod}\>\dfrac{a}{\sqrt{2}}.$
The zero Berry phase obtained for the diagonal geometry thus indicates that
there must be an even number of charges at WP $2b$ and at WP $2a$. Assuming an
obstruction of the charges from WP $2b$ to WP $2a$, the diagonal edge cut
leads to a total charge anomaly of $\pm 2e$ for the ribbon if we assume that a
slice of the ribbon contains an integer number of diagonal unit cells (see
Appendix C.1 for the results for a fractional number of unit cells). This
means that we have a charge anomaly of $\pm e$ per diagonal edge.
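The contrast between Eqs. (5) and (6) can be verified with a few lines of arithmetic. The explicit site coordinates below are an assumption on our part, chosen to reproduce the component-wise sets quoted in the text (for each WP, one site at the cell center and one shifted component-wise to the boundary):

```python
import numpy as np

a = 1.0
# assumed representative positions reproducing Eqs. (5) and (6):
wp2b = [np.array([0.0, a/2]), np.array([a/2, 0.0])]  # atomic orbitals (WP 2b)
wp2a = [np.array([0.0, 0.0]), np.array([a/2, a/2])]  # obstructed charges (WP 2a)

# straight geometry: component-wise positions mod a are identical, Eq. (5),
# so an obstruction 2b -> 2a produces no mismatch on a straight cut
for i in (0, 1):
    assert sorted(r[i] % a for r in wp2b) == [0.0, a/2]
    assert sorted(r[i] % a for r in wp2a) == [0.0, a/2]

# diagonal geometry: project onto e_perp = (x+y)/sqrt(2), mod a/sqrt(2),
# Eq. (6): now the two WPs project to distinct perpendicular positions
e_perp = np.array([1.0, 1.0]) / np.sqrt(2)
x1 = lambda r: (r @ e_perp) % (a / np.sqrt(2))
print([x1(r) for r in wp2b])  # both near a/(2*sqrt(2))
print([x1(r) for r in wp2a])  # both near 0 (mod a/sqrt(2))
```

The diagonal projection separates the two WPs that the straight projection cannot distinguish, which is exactly why the charge anomaly appears only for the diagonal cut.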
Let us now make use of the $\nu\in\boldsymbol{Z}_{2}$ polarization, which we
have seen is well defined in the diagonal geometry. At $k_{z}=0$, we have
found $\nu_{k_{2}=0}=1$ which corresponds to the Kramers degenerate Wilson
loop phases $(\varphi_{1},\varphi_{2})=(\pi,\pi)$. This implies that the band
charges are not obstructed, i.e. they can be located at WP $2b$. There is thus
no charge anomaly and, consequently, no edge state. This conclusion holds
for MSG75.5 at $k_{z}=0$ and $k_{z}=\pi$, and for MSG77.18 at $k_{z}=0$ only.
Considering now MSG77.18 at $k_{z}=\pi$, we have found
$\nu_{k_{2}=\pi/\sqrt{2}}=0$ which corresponds to the Kramers degenerate
Wilson loop phases $(\varphi_{1},\varphi_{2})=(0,0)$. This implies an
obstruction of the band charges, i.e. they are shifted away from the atomic
mWP $2a$ of MSG77.18. We have argued above that with an obstruction there is a charge
anomaly of $\pm e$ per diagonal edge (assuming an integer number of diagonal
unit cells in one slice of the ribbon). We also know that the
$\boldsymbol{Z}_{2}$ polarization predicts the presence of an odd number of
integer-valued electronic charges at a single edge Lau _et al._ (2016), which
by virtue of the Kramers degeneracy means a half-integer-valued charge per
spin. The topological (helical) edge states take the form of an odd number of
edge Kramers pairs per edge. This is fully consistent with the numerical
results shown in Fig. 5 where we find one Kramers pair of edge states per
edge.
We finally conclude that there is a non-trivial effect of the sub-dimensional
topology on the bulk-edge correspondence that is observable in the diagonal
geometry.
## VI Conclusion
In this work we have further explored the concept of sub-dimensional topology.
While these phases have connected EBRs and thus appear trivial, they feature
split EBRs on sub-dimensional spaces, such as planes, in the Brillouin zone.
These in-plane topologies are subsequently compensated by Weyl nodes and have
full-dimensional non-trivial features. We in particular find that these
concepts can be related to more refined counting and symmetry indicator
arguments. Most notable, however, is the connection of these insights to
consequences on the edge. We find that the sub-dimensional topology results in
distinctive bulk-boundary signatures. These include hinge spectra and edge
states that have a distinct dependence on the perpendicular momentum,
underpinning the physical significance of this new topological concept. We
therefore hope that our results will stimulate further exploration of these
features.
###### Acknowledgements.
R.-J. S. acknowledges funding from the Marie Skłodowska-Curie programme under
EC Grant No. 842901 and the Winton programme as well as Trinity College at the
University of Cambridge. G.F.L acknowledges funding from the Aker Scholarship.
## References
* Qi and Zhang (2011) Xiao-Liang Qi and Shou-Cheng Zhang, “Topological insulators and superconductors,” Rev. Mod. Phys. 83, 1057–1110 (2011).
* Hasan and Kane (2010) M. Z. Hasan and C. L. Kane, “Colloquium: Topological insulators,” Rev. Mod. Phys. 82, 3045–3067 (2010).
* Slager _et al._ (2012) Robert-Jan Slager, Andrej Mesaros, Vladimir Juričić, and Jan Zaanen, “The space group classification of topological band-insulators,” Nat. Phys. 9, 98 (2012).
* Fu (2011) Liang Fu, “Topological crystalline insulators,” Phys. Rev. Lett. 106, 106802 (2011).
* Slager (2019) Robert-Jan Slager, “The translational side of topological band insulators,” Journal of Physics and Chemistry of Solids 128, 24 – 38 (2019).
* Bzdušek _et al._ (2016) Tomáš Bzdušek, QuanSheng Wu, Andreas Rüegg, Manfred Sigrist, and Alexey A. Soluyanov, “Nodal-chain metals,” Nature 538, 75 (2016).
* Höller and Alexandradinata (2018) Judith Höller and Aris Alexandradinata, “Topological bloch oscillations,” Phys. Rev. B 98, 024310 (2018).
* Shiozaki _et al._ (2017) Ken Shiozaki, Masatoshi Sato, and Kiyonori Gomi, “Topological crystalline materials: General formulation, module structure, and wallpaper groups,” Phys. Rev. B 95, 235425 (2017).
* Fang _et al._ (2012) Chen Fang, Matthew J. Gilbert, and B. Andrei Bernevig, “Bulk topological invariants in noninteracting point group symmetric insulators,” Phys. Rev. B 86, 115112 (2012).
* Lee _et al._ (2020) Kyungchan Lee, Gunnar F. Lange, Lin-Lin Wang, Brinda Kuthanazhi, Thais V. Trevisan, Na Hyun Jo, Benjamin Schrunk, Peter P. Orth, Robert-Jan Slager, Paul C. Canfield, and Adam Kaminski, “Discovery of a weak topological insulating state and van hove singularity in triclinic rhbi2,” (2020), arXiv:2009.12502 [cond-mat.str-el] .
* Mesaros _et al._ (2013) Andrej Mesaros, Robert-Jan Slager, Jan Zaanen, and Vladimir Juricic, “Zero-energy states bound to a magnetic $\pi$-flux vortex in a two-dimensional topological insulator,” Nucl. Phys. B 867, 977 – 991 (2013).
* Kariyado and Slager (2020) Toshikaze Kariyado and Robert-Jan Slager, “Selective branching, quenching, and converting of topological modes,” (2020), arXiv:2007.01876 [cond-mat.mes-hall] .
* Zhang and Liu (2015) Rui-Xing Zhang and Chao-Xing Liu, “Topological magnetic crystalline insulators and corepresentation theory,” Phys. Rev. B 91, 115317 (2015).
* Rhim _et al._ (2018) Jun-Won Rhim, Jens H. Bardarson, and Robert-Jan Slager, “Unified bulk-boundary correspondence for band insulators,” Phys. Rev. B 97, 115143 (2018).
* Bouhon and Black-Schaffer (2017a) Adrien Bouhon and Annica M. Black-Schaffer, “Global band topology of simple and double dirac-point semimetals,” Phys. Rev. B 95, 241101 (2017a).
* Kruthoff _et al._ (2017) Jorrit Kruthoff, Jan de Boer, Jasper van Wezel, Charles L. Kane, and Robert-Jan Slager, “Topological classification of crystalline insulators through band structure combinatorics,” Phys. Rev. X 7, 041069 (2017).
* Po _et al._ (2017) Hoi Chun Po, Ashvin Vishwanath, and Haruki Watanabe, “Symmetry-based indicators of band topology in the 230 space groups,” Nat. Commun. 8, 50 (2017).
* Bradlyn _et al._ (2017) Barry Bradlyn, L. Elcoro, Jennifer Cano, M. G. Vergniory, Zhijun Wang, C. Felser, M. I. Aroyo, and B. Andrei Bernevig, “Topological quantum chemistry,” Nature 547, 298 (2017).
* Watanabe _et al._ (2018) Haruki Watanabe, Hoi Chun Po, and Ashvin Vishwanath, “Structure and topology of band structures in the 1651 magnetic space groups,” Science Advances 4 (2018).
* Elcoro _et al._ (2020) Luis Elcoro, Benjamin J. Wieder, Zhida Song, Yuanfeng Xu, Barry Bradlyn, and B. Andrei Bernevig, “Magnetic topological quantum chemistry,” (2020), arXiv:2010.00598 [cond-mat.mes-hall] .
* Bouhon _et al._ (2020a) Adrien Bouhon, Gunnar F. Lange, and Robert-Jan Slager, “Topological correspondence between magnetic space group representations,” (2020a), arXiv:2010.10536 .
* Po _et al._ (2018) Hoi Chun Po, Haruki Watanabe, and Ashvin Vishwanath, “Fragile topology and wannier obstructions,” Phys. Rev. Lett. 121, 126402 (2018).
* Bouhon _et al._ (2019) Adrien Bouhon, Annica M. Black-Schaffer, and Robert-Jan Slager, “Wilson loop approach to fragile topology of split elementary band representations and topological crystalline insulators with time-reversal symmetry,” Phys. Rev. B 100, 195135 (2019).
* Peri _et al._ (2020) Valerio Peri, Zhi-Da Song, Marc Serra-Garcia, Pascal Engeler, Raquel Queiroz, Xueqin Huang, Weiyin Deng, Zhengyou Liu, B. Andrei Bernevig, and Sebastian D. Huber, “Experimental characterization of fragile topology in an acoustic metamaterial,” Science 367, 797–800 (2020).
* Song _et al._ (2019) Zhida Song, L. Elcoro, Nicolas Regnault, and B. Andrei Bernevig, “Fragile phases as affine monoids: Full classification and material examples,” (2019), arXiv:1905.03262 [cond-mat.mes-hall] .
* Bouhon _et al._ (2020b) Adrien Bouhon, Tomáš Bzdušek, and Robert-Jan Slager, “Geometric approach to fragile topology beyond symmetry indicators,” Phys. Rev. B 102, 115135 (2020b).
* Wu _et al._ (2019) QuanSheng Wu, Alexey A. Soluyanov, and Tomáš Bzdušek, “Non-abelian band topology in noninteracting metals,” Science 365, 1273–1277 (2019).
* Ahn _et al._ (2019) Junyeong Ahn, Sungjoon Park, and Bohm-Jung Yang, “Failure of nielsen-ninomiya theorem and fragile topology in two-dimensional systems with space-time inversion symmetry: Application to twisted bilayer graphene at magic angle,” Phys. Rev. X 9, 021013 (2019).
* Bouhon _et al._ (2020c) Adrien Bouhon, QuanSheng Wu, Robert-Jan Slager, Hongming Weng, Oleg V. Yazyev, and Tomáš Bzdušek, “Non-abelian reciprocal braiding of weyl points and its manifestation in zrte,” Nature Physics 16, 1137–1143 (2020c).
* Ünal _et al._ (2020) F. Nur Ünal, Adrien Bouhon, and Robert-Jan Slager, “Topological euler class as a dynamical observable in optical lattices,” Phys. Rev. Lett. 125, 053601 (2020).
* Jo _et al._ (2020) Na Hyun Jo, Lin-Lin Wang, Robert-Jan Slager, Jiaqiang Yan, Yun Wu, Kyungchan Lee, Benjamin Schrunk, Ashvin Vishwanath, and Adam Kaminski, “Intrinsic axion insulating behavior in antiferromagnetic $\text{MnBi}_{6}\text{Te}_{10}$,” Phys. Rev. B 102, 045130 (2020).
* Qi _et al._ (2008) Xiao-Liang Qi, Taylor L. Hughes, and Shou-Cheng Zhang, “Topological field theory of time-reversal invariant insulators,” Phys. Rev. B 78, 195424 (2008).
* Wieder and Bernevig (2018) Benjamin J Wieder and B Andrei Bernevig, “The axion insulator as a pump of fragile topology,” arXiv preprint arXiv:1810.02373 (2018).
* Liu _et al._ (2020) Chang Liu, Yongchao Wang, Hao Li, Yang Wu, Yaoxin Li, Jiaheng Li, Ke He, Yong Xu, Jinsong Zhang, and Yayu Wang, “Robust axion insulator and chern insulator phases in a two-dimensional antiferromagnetic topological insulator,” Nature Materials 19, 522–527 (2020).
* Yang _et al._ (2021) Jian Yang, Chen Fang, and Zheng-Xin Liu, “Symmetry-protected nodal points and nodal lines in magnetic materials,” (2021), arXiv:2101.01733 [cond-mat.mes-hall] .
* Mong _et al._ (2010) Roger S. K. Mong, Andrew M. Essin, and Joel E. Moore, “Antiferromagnetic topological insulators,” Phys. Rev. B 81, 245209 (2010).
* Otrokov _et al._ (2019) M. M. Otrokov, I. I. Klimovskikh, H. Bentmann, D. Estyunin, A. Zeugner, Z. S. Aliev, S. Gaß, A. U. B. Wolter, A. V. Koroleva, A. M. Shikin, M. Blanco-Rey, M. Hoffmann, I. P. Rusinov, A. Yu. Vyazovskaya, S. V. Eremeev, Yu. M. Koroteev, V. M. Kuznetsov, F. Freyse, J. Sánchez-Barriga, I. R. Amiraslanov, M. B. Babanly, N. T. Mamedov, N. A. Abdullayev, V. N. Zverev, A. Alfonsov, V. Kataev, B. Büchner, E. F. Schwier, S. Kumar, A. Kimura, L. Petaccia, G. Di Santo, R. C. Vidal, S. Schatz, K. Kißner, M. Ünzelmann, C. H. Min, Simon Moser, T. R. F. Peixoto, F. Reinert, A. Ernst, P. M. Echenique, A. Isaeva, and E. V. Chulkov, “Prediction and observation of an antiferromagnetic topological insulator,” Nature 576, 416–422 (2019).
* Song _et al._ (2020) Zhi Da Song, Luis Elcoro, and B. Andrei Bernevig, “Twisted bulk-boundary correspondence of fragile topology,” Science 367, 794–797 (2020).
* Bradley and Cracknell (1972) C.J. Bradley and A.P. Cracknell, _The Mathematical Theory of Symmetry in Solids_ (Oxford University Press, 1972).
* Perez-Mato _et al._ (2011) J. Perez-Mato, D Orobengoa, Emre Tasci, Gemma De la Flor Martin, and A Kirov, “Crystallography online: Bilbao crystallographic server,” Bulgarian Chemical Communications 43, 183–197 (2011).
* Aroyo _et al._ (2006a) Mois Ilia Aroyo, Juan Manuel Perez-Mato, Cesar Capillas, Eli Kroumova, Svetoslav Ivantchev, Gotzon Madariaga, Asen Kirov, and Hans Wondratschek, “Bilbao Crystallographic Server: I. Databases and crystallographic computing programs,” in _Zeitschrift fur Kristallographie_, Vol. 221 (De Gruyter, 2006) pp. 15–27.
* Aroyo _et al._ (2006b) M. I. Aroyo, A. Kirov, C. Capillas, J. M. Perez-Mato, and H. Wondratschek, “Bilbao crystallographic server ii: Representations of crystallographic point groups and space groups,” Acta Cryst. A 62, 115–128 (2006b).
* Schindler _et al._ (2019) Frank Schindler, Marta Brzezińska, Wladimir A. Benalcazar, Mikel Iraola, Adrien Bouhon, Stepan S. Tsirkin, Maia G. Vergniory, and Titus Neupert, “Fractional corner charges in spin-orbit coupled crystals,” Phys. Rev. Research 1, 033074 (2019).
* Bouhon and Black-Schaffer (2017b) Adrien Bouhon and Annica M. Black-Schaffer, “Bulk topology of line-nodal structures protected by space group symmetries in class ai,” (2017b), arXiv:1710.04871 [cond-mat.mtrl-sci] .
* Mathai and Thiang (2017) Varghese Mathai and Guo Chuan Thiang, “Global topology of weyl semimetals and fermi arcs,” J. Phys. A: Math. Theor. 50, 11LT01 (2017).
* Sun _et al._ (2018) Xiao-Qi Sun, Shou-Cheng Zhang, and Tomáš Bzdušek, “Conversion rules for weyl points and nodal lines in topological media,” Phys. Rev. Lett. 121, 106402 (2018).
* Slager _et al._ (2017) Robert-Jan Slager, Vladimir Juričić, and Bitan Roy, “Dissolution of topological fermi arcs in a dirty weyl semimetal,” Phys. Rev. B 96, 201401 (2017).
* Yang and Nagaosa (2014) Bohm-Jung Yang and Naoto Nagaosa, “Classification of stable three-dimensional dirac semimetals with nontrivial topology,” Nature Communications 5, 4898 (2014).
* Slager _et al._ (2016) Robert-Jan Slager, Vladimir Juričić, Ville Lahtinen, and Jan Zaanen, “Self-organized pseudo-graphene on grain boundaries in topological band insulators,” Phys. Rev. B 93, 245406 (2016).
* Zou _et al._ (2019) Jinyu Zou, Zhouran He, and Gang Xu, “The study of magnetic topological semimetals by first principles calculations,” npj Computational Materials 5 (2019), arXiv:1909.11999 .
* Wieder _et al._ (2020) Benjamin J. Wieder, Zhijun Wang, Jennifer Cano, Xi Dai, Leslie M. Schoop, Barry Bradlyn, and B. Andrei Bernevig, “Strong and fragile topological Dirac semimetals with higher-order Fermi arcs,” Nature Communications 11, 1–13 (2020), arXiv:1908.00016 .
* Călugăru _et al._ (2019) Dumitru Călugăru, Vladimir Juričić, and Bitan Roy, “Higher-order topological phases: A general principle of construction,” Phys. Rev. B 99, 041301 (2019).
* Lin and Hughes (2018) Mao Lin and Taylor L. Hughes, “Topological quadrupolar semimetals,” Phys. Rev. B 98, 241103 (2018).
* Benalcazar _et al._ (2019) Wladimir A. Benalcazar, Tianhe Li, and Taylor L. Hughes, “Quantization of fractional corner charge in Cn -symmetric higher-order topological crystalline insulators,” Physical Review B 99, 245151 (2019), arXiv:1809.02142 .
* Benalcazar _et al._ (2017) Wladimir A. Benalcazar, B. Andrei Bernevig, and Taylor L. Hughes, “Electric multipole moments, topological multipole moment pumping, and chiral hinge states in crystalline insulators,” Phys. Rev. B 96, 245115 (2017).
* Schindler _et al._ (2018) Frank Schindler, Ashley M. Cook, Maia G. Vergniory, Zhijun Wang, Stuart S.P. Parkin, B. Andrei Bernevig, and Titus Neupert, “Higher-order topological insulators,” Science Advances 4, eaat0346 (2018), arXiv:1708.03636 .
* Slager _et al._ (2015) Robert-Jan Slager, Louk Rademaker, Jan Zaanen, and Leon Balents, “Impurity-bound states and green’s function zeros as local signatures of topology,” Phys. Rev. B 92, 085126 (2015).
* Khalaf _et al._ (2018) Eslam Khalaf, Hoi Chun Po, Ashvin Vishwanath, and Haruki Watanabe, “Symmetry indicators and anomalous surface states of topological crystalline insulators,” Phys. Rev. X 8, 031070 (2018).
* Van Miert and Ortix (2018) Guido Van Miert and Carmine Ortix, “Higher-order topological insulators protected by inversion and rotoinversion symmetries,” Physical Review B 98, 081110 (2018), 1806.04023 .
* Khalaf _et al._ (2019) Eslam Khalaf, Wladimir A. Benalcazar, Taylor L. Hughes, and Raquel Queiroz, “Boundary-obstructed topological phases,” (2019), arXiv:1908.00011 [cond-mat.mes-hall] .
* Coh and Vanderbilt (2016) Sinisa Coh and David Vanderbilt, “Python tight binding (pythtb) code,” (2016).
* Zak (1989) J. Zak, “Berry’s phase for energy bands in solids,” Phys. Rev. Lett. 62, 2747–2750 (1989).
* Su _et al._ (1979) W. P. Su, J. R. Schrieffer, and A. J. Heeger, “Solitons in polyacetylene,” Phys. Rev. Lett. 42, 1698–1701 (1979).
* Lau _et al._ (2016) Alexander Lau, Jeroen van den Brink, and Carmine Ortix, “Topological mirror insulators in one dimension,” Phys. Rev. B 94, 165164 (2016).
* Alexandradinata _et al._ (2014) Aris Alexandradinata, Xi Dai, and B. Andrei Bernevig, “Wilson-loop characterization of inversion-symmetric topological insulators,” Phys. Rev. B 89, 155114 (2014).
* Bouhon and Black-Schaffer (2017c) Adrien Bouhon and Annica M. Black-Schaffer, “Global band topology of simple and double dirac-point semimetals,” Phys. Rev. B 95, 241101 (2017c).
* Wang _et al._ (2016) Zhijun Wang, Aris Alexandradinata, R. J. Cava, and B. Andrei Bernevig, “Hourglass fermions,” Nature 532, 189 (2016).
## Appendix A Summary of the models
Here we reproduce the models used for MSG75.5 and MSG77.18. These models were
originally introduced in Bouhon _et al._ (2020a), and are written in the
Bloch basis:
$|\varphi_{\alpha,\sigma},\boldsymbol{k}\rangle=\sum_{\boldsymbol{R}\in\boldsymbol{T}}\mathrm{e}^{\mathrm{i}\boldsymbol{k}\cdot(\boldsymbol{R}+\boldsymbol{r}_{\alpha})}|w_{\alpha,\sigma},\boldsymbol{R}+\boldsymbol{r}_{\alpha}\rangle,$
(7)
where $\alpha\in\\{A,B\\}$ labels the sites in the unit cell, and
$\sigma\in\\{\uparrow,\downarrow\\}$ labels spin components. We choose to
order our basis as
$\boldsymbol{\varphi}=(\varphi_{A,\uparrow},\varphi_{A,\downarrow},\varphi_{B,\uparrow},\varphi_{B,\downarrow})$.
A more thorough analysis of these models in momentum space, including band
structures, can be found in Bouhon _et al._ (2020a).
### A.1 The model for MSG75.5
Our model for MSG75.5 is defined as:
$\displaystyle
H(\boldsymbol{k})=t_{1}f_{1}(\boldsymbol{k})\sigma_{z}\otimes\sigma_{z}$ (8)
$\displaystyle+t_{2}f_{2}(\boldsymbol{k})\sigma_{y}\otimes\mathbb{1}+t_{3}f_{3}(\boldsymbol{k})\sigma_{x}\otimes\mathbb{1}$
$\displaystyle+\lambda_{1}g_{1}(\boldsymbol{k})\mathbb{1}\otimes\sigma_{+}+\lambda_{1}^{*}g_{1}^{*}(\boldsymbol{k})\mathbb{1}\otimes\sigma_{-}$
$\displaystyle+\lambda_{2}g_{2}(\boldsymbol{k})\sigma_{x}\otimes\sigma_{+}+\lambda_{2}^{*}g_{2}^{*}(\boldsymbol{k})\sigma_{x}\otimes\sigma_{-},$
with $\sigma_{\pm}=(\sigma_{x}\pm\mathrm{i}\sigma_{y})/2$ and lattice form
factors
$\begin{array}[]{ll}f_{1}=\cos\boldsymbol{a}_{1}\boldsymbol{k}-\cos\boldsymbol{a}_{2}\boldsymbol{k},&g_{1}=\sin\boldsymbol{a}_{1}\boldsymbol{k}-\mathrm{i}\sin\boldsymbol{a}_{2}\boldsymbol{k},\\\
f_{2}=\cos\boldsymbol{\delta}_{1}\boldsymbol{k}-\cos\boldsymbol{\delta}_{2}\boldsymbol{k},&g_{2}=\sin\boldsymbol{\delta}_{1}\boldsymbol{k}-\mathrm{i}\sin\boldsymbol{\delta}_{2}\boldsymbol{k},\\\
f_{3}=\cos\boldsymbol{\delta}_{1}\boldsymbol{k}+\cos\boldsymbol{\delta}_{2}\boldsymbol{k},&\end{array}$
(9)
These are defined in terms of the bond vectors
$\boldsymbol{\delta}_{1,2}=(\boldsymbol{a}_{1}\mp\boldsymbol{a}_{2})/2$. We have assumed that
$\\{t_{1},t_{2},t_{3}\\}$ are real, while $\\{\lambda_{1},\lambda_{2}\\}$ can
be complex. We fix $t_{1},t_{2},t_{3}=1$, and
$\lambda_{1},\lambda_{2}=(1/2)\mathrm{e}^{\mathrm{i}\pi/5}$.
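For concreteness, the Hamiltonian of Eq. (8) can be assembled numerically. The sketch below assumes a square lattice with $\boldsymbol{a}_{1}=(1,0)$ and $\boldsymbol{a}_{2}=(0,1)$ (the lattice vectors are not fixed in the text); the function and variable names are ours, and the only checks performed are Hermiticity and the $k=0$ spectrum.

```python
import numpy as np

# Pauli matrices and identity
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = (sx + 1j * sy) / 2   # sigma_+
sm = (sx - 1j * sy) / 2   # sigma_-

# Assumed square-lattice vectors; bond vectors delta_{1,2} = (a1 -/+ a2)/2
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
d1, d2 = (a1 - a2) / 2, (a1 + a2) / 2

# Parameters fixed in the text
t1 = t2 = t3 = 1.0
lam1 = lam2 = 0.5 * np.exp(1j * np.pi / 5)

def H_msg75(k):
    """4-band Bloch Hamiltonian of Eq. (8), basis (A up, A dn, B up, B dn)."""
    f1 = np.cos(a1 @ k) - np.cos(a2 @ k)
    f2 = np.cos(d1 @ k) - np.cos(d2 @ k)
    f3 = np.cos(d1 @ k) + np.cos(d2 @ k)
    g1 = np.sin(a1 @ k) - 1j * np.sin(a2 @ k)
    g2 = np.sin(d1 @ k) - 1j * np.sin(d2 @ k)
    return (t1 * f1 * np.kron(sz, sz)
            + t2 * f2 * np.kron(sy, s0) + t3 * f3 * np.kron(sx, s0)
            + lam1 * g1 * np.kron(s0, sp) + np.conj(lam1 * g1) * np.kron(s0, sm)
            + lam2 * g2 * np.kron(sx, sp) + np.conj(lam2 * g2) * np.kron(sx, sm))

H = H_msg75(np.array([0.3, 0.7]))
assert np.allclose(H, H.conj().T)  # Hermiticity check
```

At $k=0$ only the $t_{3}f_{3}$ term survives, so the spectrum there is $\pm 2t_{3}$, each twice.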
### A.2 The model for MSG77.18
Our model for MSG77.18 is defined by adding an extra term $H^{\prime}$ to the
above model for MSG75.5:
$H^{\prime}(\boldsymbol{k})=H[f_{1},f_{2}^{\prime},f_{3}^{\prime},g_{1},g_{2}^{\prime}](\boldsymbol{k})+\rho_{1}h_{1}(\boldsymbol{k})\sigma_{x}\otimes\sigma_{z}+\rho_{2}h_{2}(\boldsymbol{k})\sigma_{y}\otimes\sigma_{z},$
(10)
where $H(\boldsymbol{k})$ is given in Eq. (8), and the lattice form factors
have been extended to 3D momentum space,
$\displaystyle f^{\prime}_{2}(\boldsymbol{k})$
$\displaystyle=\left(\cos\boldsymbol{\delta}^{\prime}_{1}\boldsymbol{k}-\cos\boldsymbol{\delta}^{\prime}_{2}\boldsymbol{k}+\cos\boldsymbol{\delta}^{\prime}_{3}\boldsymbol{k}-\cos\boldsymbol{\delta}^{\prime}_{4}\boldsymbol{k}\right)/2,$
(11) $\displaystyle f^{\prime}_{3}(\boldsymbol{k})$
$\displaystyle=\left(\cos\boldsymbol{\delta}^{\prime}_{1}\boldsymbol{k}+\cos\boldsymbol{\delta}^{\prime}_{2}\boldsymbol{k}+\cos\boldsymbol{\delta}^{\prime}_{3}\boldsymbol{k}+\cos\boldsymbol{\delta}^{\prime}_{4}\boldsymbol{k}\right)/2,$
$\displaystyle g^{\prime}_{2}(\boldsymbol{k})$
$\displaystyle=\left(\sin\boldsymbol{\delta}^{\prime}_{1}\boldsymbol{k}-\mathrm{i}\sin\boldsymbol{\delta}^{\prime}_{2}\boldsymbol{k}-\sin\boldsymbol{\delta}^{\prime}_{3}\boldsymbol{k}+\mathrm{i}\sin\boldsymbol{\delta}^{\prime}_{4}\boldsymbol{k}\right)/2,$
$\displaystyle h_{1}(\boldsymbol{k})$
$\displaystyle=\left(\sin\boldsymbol{\delta}^{\prime}_{1}\boldsymbol{k}+\sin\boldsymbol{\delta}^{\prime}_{2}\boldsymbol{k}+\sin\boldsymbol{\delta}^{\prime}_{3}\boldsymbol{k}+\sin\boldsymbol{\delta}^{\prime}_{4}\boldsymbol{k}\right)/2,$
$\displaystyle h_{2}(\boldsymbol{k})$
$\displaystyle=\left(\sin\boldsymbol{\delta}^{\prime}_{1}\boldsymbol{k}-\sin\boldsymbol{\delta}^{\prime}_{2}\boldsymbol{k}+\sin\boldsymbol{\delta}^{\prime}_{3}\boldsymbol{k}-\sin\boldsymbol{\delta}^{\prime}_{4}\boldsymbol{k}\right)/2,$
with
$\boldsymbol{\delta}^{\prime}_{1,2}=\boldsymbol{\delta}_{1,2}+\boldsymbol{a}_{3}/2$,
and
$\boldsymbol{\delta}^{\prime}_{3,4}=-\boldsymbol{\delta}_{1,2}+\boldsymbol{a}_{3}/2$,
and with new real parameters $\rho_{1},\rho_{2}\in\mathbb{R}$. We fix
$\rho_{1}=-1$ and $\rho_{2}=-2/5$.
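The primed bond vectors and 3D form factors above can be sanity-checked numerically. The sketch below assumes cubic lattice vectors (the actual tetragonal cell only rescales $\boldsymbol{a}_{3}$) and verifies the limiting values at $\boldsymbol{k}=0$, where all sine factors vanish and $f^{\prime}_{3}=2$; the names are ours.

```python
import numpy as np

# Assumed lattice vectors: square lattice in-plane plus a3 along z
a1, a2, a3 = np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])
d1, d2 = (a1 - a2) / 2, (a1 + a2) / 2                 # in-plane bond vectors
dp = [d1 + a3/2, d2 + a3/2, -d1 + a3/2, -d2 + a3/2]   # delta'_1..4

def form_factors(k):
    """3D form factors of Eq. (11) for MSG77.18."""
    c = [np.cos(d @ k) for d in dp]
    s = [np.sin(d @ k) for d in dp]
    f2p = (c[0] - c[1] + c[2] - c[3]) / 2
    f3p = (c[0] + c[1] + c[2] + c[3]) / 2
    g2p = (s[0] - 1j * s[1] - s[2] + 1j * s[3]) / 2
    h1 = (s[0] + s[1] + s[2] + s[3]) / 2
    h2 = (s[0] - s[1] + s[2] - s[3]) / 2
    return f2p, f3p, g2p, h1, h2

# At k = 0: all odd (sine) factors vanish, f'_2 = 0 and f'_3 = 2
f2p, f3p, g2p, h1, h2 = form_factors(np.zeros(3))
assert abs(f3p - 2.0) < 1e-12 and abs(f2p) < 1e-12
assert abs(g2p) < 1e-12 and abs(h1) < 1e-12 and abs(h2) < 1e-12
```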
## Appendix B Symmetries of the Wilson loop due to non-symmorphic TRS and
$C_{2}T$ symmetry
Here we derive the effect of non-symmorphic TRS and $C_{2}T$ symmetry on the
Wilson loop for the different geometries, i.e. straight versus diagonal, as
shown in Fig. 8 a). We do this explicitly for MSG77.18, but the final results
apply to MSG75.5 as well. Let us write
$\mathcal{W}[l_{\boldsymbol{k}_{0}}]=\langle\boldsymbol{u},\boldsymbol{k}_{0}+\boldsymbol{K}|\prod\limits_{\boldsymbol{k}}^{\boldsymbol{k}_{0}+\boldsymbol{K}\leftarrow\boldsymbol{k}_{0}}\mathcal{P}_{\boldsymbol{k}}|\boldsymbol{u},\boldsymbol{k}_{0}\rangle$
the Wilson loop over the occupied Bloch eigenvectors
$\\{|u_{n},\boldsymbol{k}_{0}\rangle\\}_{n=1,2}$ integrated over the base loop
$l_{\boldsymbol{k}_{0}}=[\boldsymbol{k}_{0}+\boldsymbol{K}\leftarrow\boldsymbol{k}_{0}]$
that crosses the Brillouin zone with $\boldsymbol{K}$ defined as the smallest
reciprocal lattice vector in that direction. We define the anti-unitary
representation of the non-symmorphic TRS for MSG77.18, $(E|\tau_{d})^{\prime}$
where $\tau_{d}=(\boldsymbol{a}_{1}+\boldsymbol{a}_{2}+\boldsymbol{a}_{3})/2$,
in the basis of the occupied Bloch eigenstates through
$\mathcal{T}(\boldsymbol{k})=\langle\boldsymbol{\psi},-\boldsymbol{k}|^{(E|\tau_{d})^{\prime}}|\boldsymbol{\psi},\boldsymbol{k}\rangle=\mathrm{e}^{\mathrm{i}\boldsymbol{k}\cdot\tau_{d}}\langle\boldsymbol{u},-\boldsymbol{k}|(\sigma_{x}\otimes-\mathrm{i}\sigma_{y})\mathcal{K}|\boldsymbol{u},\boldsymbol{k}\rangle=\mathrm{e}^{\mathrm{i}\boldsymbol{k}\cdot\tau_{d}}\langle\boldsymbol{u},-\boldsymbol{k}|(\sigma_{x}\otimes-\mathrm{i}\sigma_{y})|\boldsymbol{u}^{*},\boldsymbol{k}\rangle\mathcal{K}=\mathcal{U}\mathcal{K}$
where $\mathcal{K}$ is the complex conjugation and $\mathcal{U}$ is unitary
Bouhon _et al._ (2020a). It is then convenient to write the unitary
representation with the phase factor of the non-symmorphicity removed, i.e.
$U(\boldsymbol{k})=\mathrm{e}^{-\mathrm{i}\boldsymbol{k}\cdot\tau_{d}}\mathcal{U}(\boldsymbol{k})$
(this will correspond below to taking the “periodic gauge” Wang _et al._
(2016); Bouhon and Black-Schaffer (2017c, b) in the Wilson loop over a non-
contractible path of the torus Brillouin zone). Similarly to the symmorphic
case Alexandradinata _et al._ (2014), the constraint imposed by
$(E|\tau_{d})^{\prime}$ on the Wilson loop is found to be
$U^{*}(\boldsymbol{k}_{0}+\boldsymbol{K})^{-1}\mathcal{W}^{T}[l_{-\boldsymbol{k}_{0}-\boldsymbol{K}}]U^{*}(\boldsymbol{k}_{0})=\mathcal{W}[l_{\boldsymbol{k}_{0}}]$
with
$l_{-\boldsymbol{k}_{0}-\boldsymbol{K}}=[-\boldsymbol{k}_{0}\leftarrow-\boldsymbol{k}_{0}-\boldsymbol{K}]$,
which we rewrite as
$\mathrm{e}^{-\mathrm{i}\boldsymbol{K}\cdot\tau_{d}}\mathcal{U}^{*}(\boldsymbol{k}_{0}+\boldsymbol{K})^{-1}\mathcal{W}^{T}[l_{-\boldsymbol{k}_{0}-\boldsymbol{K}}]\mathcal{U}^{*}(\boldsymbol{k}_{0})=\mathcal{W}[l_{\boldsymbol{k}_{0}}].$
(12)
We thus conclude that in the straight geometry, i.e. taking
$\boldsymbol{k}_{0}=(0,k_{y})$ and $\boldsymbol{K}=\boldsymbol{b}_{1}$, the
Wilson loop spectrum satisfies the following symmetry
$\left\\{\varphi_{n}[l_{(0,-k_{y})}]\right\\}_{n=1,2}=\left\\{\varphi_{n}[l_{(0,k_{y})}]+\pi\>\mathrm{mod}\>2\pi\right\\}_{n=1,2},$
(13)
while in the diagonal geometry, i.e. taking $\boldsymbol{k}_{0}=(0,k_{2})$ and
$\boldsymbol{K}=\boldsymbol{b}_{1}+\boldsymbol{b}_{2}$, the symmetry reads
$\left\\{\varphi_{n}[l_{(0,-k_{2})}]\right\\}_{n=1,2}=\left\\{\varphi_{n}[l_{(0,k_{2})}]\right\\}_{n=1,2}.$
(14)
We now consider the effect of the non-symmorphic $C_{2}T$ symmetry of
MSG77.18, $(C_{2}|\tau_{d})^{\prime}$, on the Wilson loop spectrum. Its anti-
unitary representation in the occupied Bloch eigenstates reads
$\mathcal{A}(\boldsymbol{k})=\langle\boldsymbol{\psi},-C_{2}\boldsymbol{k}|^{(C_{2}|\tau_{d})^{\prime}}|\boldsymbol{\psi},\boldsymbol{k}\rangle=\mathrm{e}^{\mathrm{i}C_{2}\boldsymbol{k}\cdot\tau_{d}}\langle\boldsymbol{u},-C_{2}\boldsymbol{k}|(\sigma_{x}\otimes\mathrm{i}\sigma_{x})|\boldsymbol{u}^{*},\boldsymbol{k}\rangle\mathcal{K}=\mathcal{R}\mathcal{K}$
where $\mathcal{R}$ is unitary. Defining
$R(\boldsymbol{k})=\mathrm{e}^{-\mathrm{i}C_{2}\boldsymbol{k}\cdot\tau_{d}}\mathcal{R}(\boldsymbol{k})$,
we then find, similarly to the case of TRS above,
$R^{*}(\boldsymbol{k}_{0}+\boldsymbol{K})^{-1}\mathcal{W}^{*}[l_{\boldsymbol{k}_{0}}]R^{*}(\boldsymbol{k}_{0})=\mathcal{W}[l_{\boldsymbol{k}_{0}}]$,
which we rewrite as
$\mathrm{e}^{-\mathrm{i}C_{2}\boldsymbol{K}\cdot\tau_{d}}\mathcal{R}^{*}(\boldsymbol{k}_{0}+\boldsymbol{K})^{-1}\mathcal{W}^{*}[l_{\boldsymbol{k}_{0}}]\mathcal{R}^{*}(\boldsymbol{k}_{0})=\mathcal{W}[l_{\boldsymbol{k}_{0}}].$
(15)
We therefore conclude that in the straight geometry
($\boldsymbol{k}_{0}=(0,k_{y})$, $\boldsymbol{K}=\boldsymbol{b}_{1}$) the
Wilson loop spectrum satisfies the following symmetry
$\left\\{\varphi_{n}[l_{(0,k_{y})}]\right\\}_{n=1,2}=\left\\{-\varphi_{n}[l_{(0,k_{y})}]+\pi\>\mathrm{mod}\>2\pi\right\\}_{n=1,2},$
(16)
while in the diagonal geometry ($\boldsymbol{k}_{0}=(0,k_{2})$,
$\boldsymbol{K}=\boldsymbol{b}_{1}+\boldsymbol{b}_{2}$) the symmetry reads
$\left\\{\varphi_{n}[l_{(0,k_{2})}]\right\\}_{n=1,2}=\left\\{-\varphi_{n}[l_{(0,k_{2})}]\right\\}_{n=1,2}.$
(17)
We emphasize that these Wilson loop symmetries hold similarly at $k_{z}=0$ and
at $k_{z}=\pi$ since $\boldsymbol{K}$ has no $k_{z}$ component. This is also
why these results apply to MSG75.5.
We conclude by noting that these results are fully consistent with the
computed Wilson loops in Fig. 8.
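The projector-product definition of the Wilson loop used above can be discretized directly. The sketch below illustrates the construction on a toy two-band Chern model (not the four-band models of Appendix A): on a closed loop the product of overlaps between neighbouring occupied eigenvectors is gauge invariant and becomes unitary in the continuum limit, and its phase (the Berry phase) winds by $2\pi$ times the Chern number as the base point sweeps the Brillouin zone.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, m=1.0):
    # Toy two-band Chern insulator, used only to illustrate the Wilson loop
    return np.sin(kx) * sx + np.sin(ky) * sy + (m - np.cos(kx) - np.cos(ky)) * sz

def wilson_loop(ky, nk=200):
    """Discrete Wilson loop of the occupied band over kx: 0 -> 2*pi."""
    ks = np.linspace(0, 2 * np.pi, nk, endpoint=False)
    vecs = []
    for kx in ks:
        w, v = np.linalg.eigh(H(kx, ky))
        vecs.append(v[:, 0])          # occupied (lowest-energy) eigenvector
    W = 1.0 + 0j
    for i in range(nk):
        W *= np.vdot(vecs[i], vecs[(i + 1) % nk])   # projector-product overlap
    return W   # complex; its phase is the Berry phase, |W| -> 1 as nk -> inf

Ws = [wilson_loop(ky) for ky in np.linspace(-np.pi, np.pi, 41)]
assert all(0.9 < abs(w) <= 1.0 + 1e-9 for w in Ws)
phases = np.unwrap([np.angle(w) for w in Ws])
winding = (phases[-1] - phases[0]) / (2 * np.pi)   # ~ +/-1 for this model
```

The arbitrary eigenvector phases returned by `eigh` drop out of the closed product, which is why no gauge fixing is needed here; for multiband occupied subspaces the scalar overlaps become overlap matrices and `W` a matrix product, diagonalized to obtain the Wilson loop spectrum $\\{\varphi_{n}\\}$.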
## Appendix C Additional figures
In this Appendix we present additional figures to further detail the findings
presented in the main text. In particular, we present the edge spectrum which
results from removing various orbitals on the edges in appendix C.1, and we
present the charge distribution when removing a single spin on the entire edge
in appendix C.2.
### C.1 Alternative edge termination
We noted in Section III of the main text that removing a single orbital at the
edge of the system can induce edge states. In Fig. 9 we show a selection of
ways this can be done for MSG75.5 (equivalently the $k_{z}=0$ plane of
MSG77.18). In Fig. 10, we show the corresponding plot for the $k_{z}=\pi$
plane of MSG77.18.
Figure 9: Edge spectra for a selection of removed edge orbitals in the
straight cut (panel a - panel d) and diagonal cut (panel e - panel h) for
MSG75.5/the $k_{z}=0$ plane of MSG77.18. A full circle indicates the presence
of both spin components, a single arrow indicates that the other spin has been
removed and the lack of a site indicates that both spin components on this
site have been removed. a) Removing a down spin from the top edge. b) Removing
an up spin from the bottom edge. c) Removing a down spin from both edges. d)
Removing a down spin from the top edge and an up spin from the bottom edge. e)
Removing a down spin from the A site on the top edge. f) Removing both spin
components from the A site on the top edge. g) Removing a down spin from the A
site on the top edge and an up spin from the A site at the bottom edge. h)
Removing all spins from the A and B site at the top edge. All in-gap states
are singly degenerate. The states extending slightly into the gap in h) are
doubly degenerate.
Figure 10: Edge spectra for a selection of removed edge
orbitals in the straight cut (panel a - panel d) and diagonal cut (panel e -
panel h) for the $k_{z}=\pi$ plane of MSG77.18. A full circle indicates the
presence of both spin components, a single arrow indicates that the other spin
has been removed and the lack of a site indicates that both spin components on
this site have been removed. a) Removing a down spin from the top edge. b)
Removing an up spin from the bottom edge. c) Removing a down spin from both
edges. d) Removing a down spin from the top edge and an up spin from the
bottom edge. e) Removing a down spin from the A site on the top edge. f)
Removing both spin components from the A site on the top edge. g) Removing a
down spin from the A site on the top edge and an up spin from the A site at
the bottom edge. h) Removing all spins from the A and B site at the top edge.
The in-gap bands in d), g) and h) are doubly degenerate.
### C.2 Removing a single spin
We show the effect of removing a single spin on the boundary in Fig. 11.
Figure 11: Corner charges for the straight cut (panels a and b) and diagonal
cut (panels c and d) respectively for MSG75.5/the $k_{z}=0$ plane of MSG77.18.
On the top, we show the charge distribution, with Fermi level fixed to the
same value as for MSG75.5, shown in Fig. 4. Red indicates excess charge
(relative to the center), blue a deficit of charge, where we sum over all
occupied bands. On the bottom we show the spectrum, together with a state just
below the Fermi level. All calculations were done using the PythTB package Coh
and Vanderbilt (2016).
## Appendix D Real-space invariants and twisted boundary conditions for
MSG75.5
In this appendix, we provide further detail on the RSIs and TBCs discussed in
section IV. Our notation follows that of Song _et al._ (2020) closely, and we
use their tables throughout.
### D.1 Real space invariants
Real-space invariants are defined at every WP as quantities which do not
change as we move orbitals in a symmetry-preserving fashion from a high-
symmetry WP to a lower-symmetry WP or vice versa. In the finite model of the
$k_{z}=0$ plane of MSG75.5, where the symmetries are broken down to wallpaper
group p4, the relevant (non-magnetic) WPs are $1a$, $1b$ and $2c$, as shown in
Fig. 1a). Removing an orbital from WP $1a$ or $1b$ (with site-symmetry
$C_{4}$) requires a minimum of four orbitals to come together, as this is the
only way to consistently subduce to the trivial position. Thus, the imbalance
between the four site-symmetry orbitals at WP $1a$ or $1b$ is protected, but
not the total number of orbitals. This allows for the definition of three
independent invariants (the real-space invariants) for each of these WP. For
(non-magnetic) WP $2c$, with site-symmetry group $C_{2}$, pairs of orbitals
must come together, so there is a single RSI. A full enumeration of the RSIs
for wallpaper groups can be found in Song _et al._ (2020). For our model, we
find the RSIs for the lower/upper subspace as:
$\displaystyle\delta_{1a}=\delta_{2a}=\delta_{3a}=\pm 1,\quad\delta_{1b}=\mp 1,\quad\delta_{2b}=\delta_{3b}=0,\quad\delta_{1c}=0,$
where $\delta_{iw}$ denotes RSI $i$ for the WP with label $w$. These RSIs can
be used to directly confirm the fragility of our occupied manifold. In
particular, for our symmetry setting (spinful $C_{4}$ without TRS), the
criterion for fragility is that it be impossible to find numbers
$N_{a},N_{b},N_{c}$ summing to $N_{\mathrm{bands}}=2$ whilst satisfying the
constraints tabulated in Song _et al._ (2020) and reproduced here for
convenience:
$\displaystyle N_{a/b}=\delta_{1a/1b}+\delta_{2a/2b}+\delta_{3a/3b}\ \mathrm{mod}\ 4,$
$\displaystyle N_{a/b}\geq-3\delta_{1a/1b}+\delta_{2a/2b}+\delta_{3a/3b},$
$\displaystyle N_{a/b}\geq\delta_{1a/1b}-3\delta_{2a/2b}+\delta_{3a/3b},$
$\displaystyle N_{a/b}\geq\delta_{1a/1b}+\delta_{2a/2b}-3\delta_{3a/3b},$
$\displaystyle N_{a/b}\geq\delta_{1a/1b}+\delta_{2a/2b}+\delta_{3a/3b},$
$\displaystyle N_{c}=\delta_{1c}\ \mathrm{mod}\ 2,$
$\displaystyle N_{c}\geq-\delta_{1c},$
$\displaystyle N_{c}\geq\delta_{1c}.$
In the occupied subspace, $N_{a}$ and $N_{b}$ have to equal $3\ \mathrm{mod\
4}$, whereas $N_{c}$ has to equal $0\ \mathrm{mod\ 2}$. However, this is not
possible to satisfy if $N_{a}+N_{b}+N_{c}=2$. Thus, the occupied subspace is
indeed fragile. The RSIs in the unoccupied subspace, on the other hand,
require $N_{a}$ and $N_{b}$ to equal $1\ \mathrm{mod\ 4}$, whilst $N_{c}$
still equals $0$ mod 2. This can be satisfied with $N_{a}=N_{b}=1$ and
$N_{c}=0$. As can be easily checked, this also satisfies all other conditions,
implying that the unoccupied subspace is not fragile, as expected.
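The fragility argument above is a small feasibility check that can be brute-forced. The sketch below encodes the constraints tabulated for our symmetry setting and the RSI tuples listed in the text; the helper names are ours.

```python
from itertools import product

def allowed(N, d1, d2, d3, mod):
    """Check the RSI constraints for one Wyckoff position."""
    if mod == 4:                      # C4 site (WP 1a or 1b)
        return (N % 4 == (d1 + d2 + d3) % 4
                and N >= -3 * d1 + d2 + d3
                and N >= d1 - 3 * d2 + d3
                and N >= d1 + d2 - 3 * d3
                and N >= d1 + d2 + d3)
    return N % 2 == d1 % 2 and N >= -d1 and N >= d1   # C2 site (WP 2c)

def compatible(rsi, n_bands):
    """Does any orbital assignment (N_a, N_b, N_c) realize these RSIs?"""
    d1a, d2a, d3a, d1b, d2b, d3b, d1c = rsi
    for Na, Nb, Nc in product(range(n_bands + 1), repeat=3):
        if Na + Nb + Nc != n_bands:
            continue
        if (allowed(Na, d1a, d2a, d3a, 4)
                and allowed(Nb, d1b, d2b, d3b, 4)
                and allowed(Nc, d1c, 0, 0, 2)):
            return True
    return False

occupied = (+1, +1, +1, -1, 0, 0, 0)     # lower subspace RSIs
unoccupied = (-1, -1, -1, +1, 0, 0, 0)   # upper subspace RSIs
assert not compatible(occupied, 2)   # fragile: no atomic limit with 2 bands
assert compatible(unoccupied, 2)     # trivial: N_a = N_b = 1, N_c = 0
```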
### D.2 Twisted boundary conditions
The twisted boundary conditions (TBC) are designed as perturbations to the
Hamiltonian which leave the RSIs invariant but exchange the $C_{4}$
eigenvalues of bands Song _et al._ (2020). As the RSIs are defined in terms
of the $C_{4}$ eigenvalues, if the relative balance of states with differing
$C_{4}$ eigenvalues changes between the occupied and the unoccupied subspace,
then there is necessarily a gap closing under this perturbation to ensure that
the RSIs are invariant.
More concretely, as our system has $C_{4}$ symmetry, we can divide it into
four regions which are related by symmetry, shown in Fig. 7a). As we choose
the cut illustrated in Fig. 1c), there are no sites on the boundary between
regions so each site is uniquely assigned to a region. Multiplying the hopping
between the regions by some well-chosen parameter $\lambda$ (as described in
Song _et al._ (2020)) amounts to a gauge change in the $C_{4}$ symmetry operator,
permuting the site-symmetry representation. These are the TBC for this system.
Using the RSIs, which are invariant under such transformations, we can predict
which orbitals are exchanged. If chosen correctly, we can force an exchange
between the occupied and the unoccupied spaces. The result of implementing the
TBC by tuning $\lambda:1\rightarrow i$ is shown in Fig. 7. Note that the
corner states (localized in the gap) are not cut by the boundary conditions
and therefore do not flow. Checking the IRREPs of the states that exchange
under TBC confirms that this pattern corresponds to our expected fragile
phases, as explained in Song _et al._ (2020). This is thus another physical
signature of the fragile magnetic topology in MSG75.5, which in this case is
equivalent to fragility in the p4 wallpaper group. A similar pattern holds on
the $k_{z}=0$ plane of MSG77.18.
# Laser threshold magnetometry using green light absorption by diamond
nitrogen vacancies in an external cavity laser
James L. Webb<EMAIL_ADDRESS>Center for Macroscopic Quantum States (bigQ), Department of Physics, Technical University of Denmark, Kgs. Lyngby, Denmark
Andreas F. L. Poulsen, Center for Macroscopic Quantum States (bigQ), Department of Physics, Technical University of Denmark, Kgs. Lyngby, Denmark
Robert Staacke, Division of Applied Quantum System, Felix Bloch Institute for Solid State Physics, Leipzig University, 04103, Leipzig, Germany
Jan Meijer, Division of Applied Quantum System, Felix Bloch Institute for Solid State Physics, Leipzig University, 04103, Leipzig, Germany
Kirstine Berg-Sørensen, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
Ulrik Lund Andersen, Center for Macroscopic Quantum States (bigQ), Department of Physics, Technical University of Denmark, Kgs. Lyngby, Denmark
Alexander Huck, Center for Macroscopic Quantum States (bigQ), Department of Physics, Technical University of Denmark, Kgs. Lyngby, Denmark
###### Abstract
Nitrogen vacancy (NV) centers in diamond have attracted considerable recent
interest for use in quantum sensing, promising increased sensitivity for
applications ranging from geophysics to biomedicine. Conventional sensing
schemes involve monitoring the change in red fluorescence from the NV center
under green laser and microwave illumination. Due to the strong fluorescence
background from emission in the NV triplet state and low relative contrast of
any change in output, sensitivity is severely restricted by a high optical
shot noise level. Here, we propose a means to avoid this issue, by using the
change in green pump absorption through the diamond as part of a semiconductor
external cavity laser run close to lasing threshold. We show that a
theoretical sensitivity to magnetic field at the pT/$\sqrt{\rm Hz}$ level is
possible using a diamond with an optimal density of NV centers. We discuss the
physical requirements and limitations of the method, particularly the role of
amplified spontaneous emission near threshold, and explore realistic
implementations using current technology.
## I Introduction
Optical manipulation of material defects represents an ideal method for
quantum sensing, exploiting properties such as entanglement and superposition
[1]. The nitrogen-vacancy (NV) center in diamond, possessing long quantum
coherence times at room temperature, has in particular drawn considerable
interest [2, 3, 4]. Diamond is an ideal material for sensing, being
mechanically hard, chemically stable, isotopically pure as well as
biocompatible [5, 6]. The negatively charged nitrogen-vacancy center (NV-) has
an energy level structure that results in optical properties that are highly
sensitive to temperature [7], strain (pressure) [8], electric field [9] and
particularly magnetic field. Sensing is conventionally performed by detecting
changes in the intensity of red fluorescence ($\approx$ 637-750 nm) under
irradiation with green light and resonant microwaves via a process termed
optically detected magnetic resonance (ODMR) spectroscopy [4, 10, 11, 12]. It
can be done using a continuous wave (CW) method [13], or by using short laser
and microwave pulses [14, 15].
However, measuring via red fluorescence suffers from two considerable physical
limitations. First, the signal to be measured has a very low contrast against
the bright emission from decay in the NV- triplet state. Although for a single
NV-, spin dependent contrast can be up to 30$\%$ [16], for a large ensemble of
NVs suitable for a diamond sensor the contrast can be at most a few percent
[17, 18]. Sensitivity is therefore limited by this low contrast and the high
level of shot noise from the bright background arising from triplet state
fluorescence emission. The second physical limitation is the high refractive
index of diamond, which traps the majority of the fluorescence inside the
diamond. Microfabrication schemes have been proposed to mitigate this issue,
but have yet to deliver significant improvements [19, 20].
An alternative method is to use optical absorption of the pump light by the
NVs. Previous work has used the change in green absorption in an optical
cavity [21, 22] or by using changes in infrared (IR) absorption by the singlet
state [23]. These schemes are technically demanding, requiring an optical
cavity or unusual wavelength (1042 nm) laser. A promising alternative is laser
threshold sensing [24], using changes in optical absorption resulting from the
parameter to be sensed (e.g. magnetic field or temperature) to push a medium
across lasing threshold. This method eliminates the bright background that
limits sensitivity using conventional fluorescence detection. A further
attraction is wide applicability to any material with variable optical
absorption, including a wider range of defects in diamond, SiC and 2D
materials [25, 26].
Building on the work by Dumeige et al. [27] and our own previous work on
diamond absorption magnetometry [21, 22], we outline here a scheme to use
laser threshold sensing of magnetic field with green light in a standard
external cavity laser. We show it is possible to achieve high sensitivity in
the pT/$\sqrt{\rm Hz}$ range with realistic assumptions for key physical
parameters. Our proposal differs from that of previous work by using simpler
green pump absorption rather than IR absorption and by using an ordinary
current driven laser diode/gain chip medium without the need for an additional
pump laser. We show that this configuration, highly suitable for
miniaturization, can deliver high sensitivity, and we discuss the key physics
required to reach such sensitivity levels. Finally, we discuss and calculate
limiting factors that may prevent these levels from being reached in practice.
This includes factors that may not have been previously considered, such as
amplified spontaneous emission near the lasing threshold.
## II Methods
### II.1 External Cavity Laser Model
Figure 1: a) Schematic of the external cavity setup with a Fabry-Perot
semiconductor laser diode of cavity length $L_{m}$ and end facet power
reflectivities $R_{1}$ and $R_{2}$ coupled to an external cavity of length
$L_{r}$ via mirror $R_{3}$ and diamond of thickness $d$. Laser emission
(dashed line) is through $R_{1}$. We model our external cavity laser with the
diamond as a single cavity with equivalent end reflectance $R_{e}$ and the
diamond absorption loss $\alpha_{d}$ included in the total cavity loss
$\alpha_{t}$, b) Simplified schematic of the laser threshold process, where a
reduction in diamond absorption through application of resonant microwaves
reduces the threshold current to $I_{th}^{\rm on}$, producing lasing output
$P_{out}$ when driven at $I_{th}^{\rm off}$.
We place the diamond into a standard external cavity laser setup as described
schematically in Fig. 1,a). This consists of a Fabry-Perot semiconductor laser
diode or gain chip of length $L_{m}$ with end facet reflectivities $R_{1}$ and
$R_{2}$, coupled via mirror $R_{3}$ to an external cavity of length $L_{r}$
containing the diamond of thickness $d$. We assume
normal incidence and that transmission through the diamond is high, with
minimal reflection from the diamond facets. For simplicity, we assume a single
optical mode at a single wavelength. We consider the optical loss due to
absorption in a diamond of thickness $d$. The change in laser intensity
$\hat{I}$ on a pass through the diamond is given by
$\Delta\hat{I}=\hat{I_{0}}-\hat{I_{0}}e^{-\alpha_{d}z},$ (1)
where $\hat{I_{0}}$ is the intensity of the laser emission with no diamond
present in the external cavity, $\alpha_{d}$ is the absorption coefficient in
the diamond and $z$ is the path length taken within the diamond. For normal
incidence $z$ = $d$ and the absorption coefficient can be derived from the
rate equation model given in the following section or can be measured
experimentally. For the semiconductor lasing medium between mirrors 1 and 2,
we assume a total cavity loss $\alpha_{t}$, given by the sum of the intrinsic
cavity loss of the gain medium $\alpha_{c}$ and the losses from the mirrors
and end facets $\alpha_{m}$:
$\alpha_{t}=\alpha_{c}+\alpha_{m}=\alpha_{c}+\frac{1}{L_{m}}\ln\left(\frac{1}{\sqrt{R_{1}R_{2}}}\right).$
(2)
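As a minimal numerical sketch of Eq. (2), with illustrative (not paper-specified) values for the diode cavity length, facet reflectivities and intrinsic loss:

```python
import math

def mirror_loss(L_m, R1, R2):
    """Mirror contribution to the cavity loss, Eq. (2): (1/L_m) ln(1/sqrt(R1 R2))."""
    return (1.0 / L_m) * math.log(1.0 / math.sqrt(R1 * R2))

# Illustrative numbers only: 1 mm diode cavity, 30% facet reflectivities,
# intrinsic gain-medium loss 10 cm^-1 = 1000 m^-1
L_m, R1, R2, alpha_c = 1e-3, 0.3, 0.3, 1000.0
alpha_m = mirror_loss(L_m, R1, R2)   # ~ 1204 m^-1
alpha_t = alpha_c + alpha_m
```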
In order to simplify the analysis of the external cavity structure, we use the
three mirror model [28, 29, 30, 31] to treat the complete diode/external
cavity structure as a single cavity of length $L$ = $L_{m}$ \+ $L_{r}$, with
mirror $R_{2}$ replaced by an effective reflectivity $R_{e}$. This single
cavity contains the optical losses of the external cavity and diamond, the
internal losses of the gain medium in the laser diode and the loss from the
cavity through the mirrors. By assuming the losses due to the diamond are
spread evenly throughout, we redefine the loss coefficient due to the diamond
as $\alpha_{e}=(\alpha_{d}/L)d$, and our total cavity loss as
$\alpha_{t}=\alpha_{c}+\alpha_{e}+\frac{1}{L}\ln\left(\frac{1}{\sqrt{R_{1}R_{e}}}\right),$
(3)
where $R_{e}=\left|r_{e}\right|^{2}$ relates the power reflectivity to the
complex field reflectivity, $r_{e}$. We use the model for the effective
reflectivity by Voumard et al. [32], detailed further in the Supplementary
Information. Neglecting phase components, at threshold $R_{1}R_{e}e^{(\Gamma
g-\alpha_{t})2L}=1$, where $g$ = $g_{th}$ is the (threshold) gain coefficient.
For the full structure, the rate equations for photon ($S$) and carrier ($N$)
density are given by the standard equations for a laser diode as
$\frac{dN}{dt}=\frac{I}{qV}-\frac{N}{\tau_{N}}-GS,$ (4)
and
$\frac{dS}{dt}=GS-\frac{S}{\tau_{P}}+\frac{\beta N}{\tau_{N}}.$ (5)
Here $I$ is the drive current, $V$ the volume of the gain region, $G$ the gain
of the lasing medium and $q$ the electronic charge. The term $GS$ arises from
stimulated emission in the laser diode gain medium and $S/\tau_{P}$ includes
the cavity loss from the mirrors, gain medium and diamond. Further, $\tau_{P}$
is the photon lifetime in the cavity and $\tau_{N}$ the carrier lifetime in
the laser diode. Carriers are generated by a current $I$ in a volume
$V=L\times w\times t_{h}$, where $t_{h}$ is the thickness and $w$ the width of
the laser diode active region. The term $\beta N/\tau_{N}$ relates to
spontaneous emission, governed by the spontaneous emission factor $\beta$.
We can define gain $G$ phenomenologically, in the form [33]
$G=\Gamma g=\Gamma a(N-N_{tr})(1-\epsilon S),$ (6)
where $\Gamma$ is the confinement factor and $\epsilon$ is the gain
compression factor that phenomenologically accounts for effects such as
spectral hole burning at higher optical power. The carrier density at
transparency is given by $N_{tr}$, and the factor $a$ is the differential gain
coefficient, a material specific property defining how well the semiconductor
can generate carriers for population inversion. The rate equations for photon
and carrier density can be solved for the steady state condition ($dS/dt$ = 0,
$dN/dt$ = 0). For carrier density $N$ = $N_{th}$ close to $N_{tr}$ and
neglecting spontaneous emission ($\beta$ = 0), the gain balances the cavity
loss. Equation (6) is valid for heterostructure laser diodes and certain
quantum well structures where the threshold is close to the transparency
carrier density. Unless otherwise stated, we use the Eq. (6) model in this
work.
Using Eqs. (4-6) at lasing threshold, where $S=0$, $G$ = 1/$\tau_{p}$ and
$\Gamma g_{th}$ = $\alpha_{t}$ we can derive an equation for carrier density
at threshold $N_{th}$
$N_{th}=N_{tr}+\frac{\alpha_{t}}{\Gamma a},$ (7)
and inserting this result into the rate equation for carrier density - Eq. (4)
- allows us to calculate the threshold current
$I_{th}=\frac{qV}{\eta_{i}\tau_{N}}N_{th}=\frac{qV}{\eta_{i}\tau_{N}}\left(N_{tr}+\frac{\alpha_{t}}{\Gamma
a}\right).$ (8)
Here we introduce the quantum efficiency of carrier to photon conversion
$\eta_{i}$. By using Eq. (8) in the rate equations at $I>I_{th}$, we can
calculate the photon density at any current above the lasing threshold. We can
then calculate the laser light power that can be emitted from the left hand
side mirror $R_{1}$ using the factor $\eta_{o}$, the output coupling
efficiency, which is defined as the ratio of photons lost through the mirror
$R_{1}$ to the total cavity loss $\alpha_{t}$ =
$\alpha_{m}+\alpha_{c}+\alpha_{e}$
$P_{out}=\eta_{o}\frac{hc}{\lambda\tau_{p}}\frac{V}{\Gamma}S,$ (9)
where $V/\Gamma$ is the effective mode volume of the cavity, $\lambda$ the
wavelength and $h$ and $c$ Planck’s constant and the speed of light
respectively. In the limit of $\epsilon S\rightarrow 0$ where there is no
limiting effect on the gain, the power output can be rewritten directly in
terms of the threshold current
$P_{out}=\eta_{o}\eta_{i}\frac{hc}{q\lambda}(I-I_{th}).$ (10)
In both of these expressions
$\eta_{o}=\frac{\alpha_{m_{1}}}{\alpha_{m}+\alpha_{c}+\alpha_{e}}=\frac{\ln\frac{1}{\sqrt{R_{1}}}}{\ln\frac{1}{\sqrt{R_{1}R_{e}}}+(\alpha_{e}+\alpha_{c})L}.$
(11)
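Equations (8) and (10) can be combined into a minimal sketch of the sensing principle: a microwave-induced drop in $\alpha_{t}$ lowers $I_{th}$, so a laser driven at the off-resonance threshold emits only on resonance. All parameter values below (including the 520 nm wavelength) are illustrative assumptions, not values from this work, and $\eta_{o}$ is taken as a fixed number rather than evaluated from Eq. (11).

```python
q = 1.602e-19   # electronic charge [C]
h = 6.626e-34   # Planck's constant [J s]
c = 2.998e8     # speed of light [m/s]

def I_th(alpha_t, V=4e-16, eta_i=0.8, tau_N=2e-9,
         Gamma=0.3, a=2.5e-20, N_tr=1.5e24):
    """Threshold current, Eq. (8); illustrative diode parameters."""
    return q * V / (eta_i * tau_N) * (N_tr + alpha_t / (Gamma * a))

def P_out(I, alpha_t, lam=520e-9, eta_i=0.8, eta_o=0.5, **kw):
    """Output power above threshold, Eq. (10); zero below threshold."""
    return max(0.0, eta_o * eta_i * h * c / (q * lam)
               * (I - I_th(alpha_t, eta_i=eta_i, **kw)))

# Microwaves reduce diamond absorption and hence alpha_t, lowering I_th:
I_off, I_on = I_th(2500.0), I_th(2400.0)   # loss values are illustrative
assert I_on < I_off
# Driving at I = I_th(off), lasing output appears only on resonance:
assert P_out(I_off, 2500.0) == 0.0 and P_out(I_off, 2400.0) > 0.0
```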
For larger finite values of $\epsilon$ well above threshold or including
finite spontaneous emission through nonzero $\beta$, we can numerically solve
the steady state rate equations (Eqs. (4) and (5)) to calculate $N$, $S$ and
the laser power output.
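As a sketch of such a numerical solution, the steady state of Eqs. (4) and (5) with $\epsilon=0$ can be found by bisection in $N$: solving $dS/dt=0$ for $S(N)$ and locating the root of the remaining carrier equation. The parameter values are illustrative assumptions, and a group velocity $v_{g}$ (not written explicitly in Eq. (6)) is introduced to convert the per-length gain and loss into rates via $G=\Gamma v_{g}a(N-N_{tr})$ and $1/\tau_{P}=v_{g}\alpha_{t}$.

```python
q = 1.602e-19            # electronic charge [C]
V = 4e-16                # active volume [m^3]
tau_N = 2e-9             # carrier lifetime [s]
Gamma, a = 0.3, 2.5e-20  # confinement factor, differential gain [m^2]
N_tr = 1.5e24            # transparency carrier density [m^-3]
beta = 1e-4              # spontaneous emission factor
v_g = 7.5e7              # group velocity in the gain medium [m/s]
alpha_t = 2500.0         # total cavity loss [m^-1]

def steady_state(I):
    inv_tau_P = v_g * alpha_t
    def G(N):
        return Gamma * v_g * a * (N - N_tr)
    def S_of(N):                       # photon density from dS/dt = 0
        return beta * N / (tau_N * (inv_tau_P - G(N)))
    def F(N):                          # residual of dN/dt = 0
        return I / (q * V) - N / tau_N - G(N) * S_of(N)
    # gain clamping bounds N below the density where G reaches 1/tau_P
    N_sat = N_tr + alpha_t / (Gamma * a)
    lo, hi = 0.0, N_sat * (1 - 1e-12)
    for _ in range(200):               # bisection on the monotone residual
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
    N = 0.5 * (lo + hi)
    return N, S_of(N)

N1, S1 = steady_state(0.05)   # below threshold (~73 mA for these numbers)
N2, S2 = steady_state(0.10)   # above threshold: S jumps by orders of magnitude
```

Nonzero $\beta$ keeps $S$ finite but small below threshold, reproducing the amplified-spontaneous-emission pedestal discussed later; nonzero $\epsilon$ would couple $G$ back to $S$ and require a two-dimensional root search instead.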
The total cavity absorption $\alpha_{t}$ will change when microwaves are
applied to the diamond at a frequency equal to the splitting of the NV triplet
ground state levels, reducing the lasing threshold current by $\Delta I_{th}$ =
$I_{th}^{\rm off}$ \- $I_{th}^{\rm on}$, where $I_{th}^{\rm on}$ is the
threshold current on microwave resonance, and $I_{th}^{\rm off}$ the threshold
current off resonance. By running at drive current equal to $I_{th}^{\rm
off}$, laser output is generated only while on microwave resonance. This is
shown schematically in Fig. 1,b).
### II.2 Absorption model
We use the rate equation model from [34] in order to calculate the optical
absorption of green pump light by the diamond and the maximum change in
absorption when on microwave resonance. The parameters we use for the
transition rates are the same as those in [27], derived from [35, 36, 37, 38].
We calculate the normalized occupancies of each energy level with microwaves
supplied $n_{i}^{\rm on}$ and without microwaves $n_{i}^{\rm off}$, where
$\sum_{i}n_{i}$ = 1 and the index $i$ runs from 1 to 8: $i$=1 refers to the
$m_{s}$=0 ground state level, $i$=2 the $m_{s}$=$\pm$1 ground state levels,
$i$=3,4 the spin triplet excited states, $i$=5,6 the spin singlet shelving
states and $i$=7,8 the ground and excited states of NV0. We define a total
NV- density $N_{NV}$ in ppm. Off resonance, the total number density of NV-
in each state $N^{\rm off}_{i}$ is given by
$N^{\rm off}_{i}=N_{NV}\frac{n_{i}^{\rm off}}{\sum_{i}n_{i}^{\rm off}}.$ (12)
We define a measurement axis along one of the 4 possible crystallographic axes
for the NV. We calculate that when microwaves are applied, we drive only the
NVs aligned along one axis such that the total number density on resonance
$N^{\rm on}_{i}$ is given by
$N^{\rm on}_{i}=\frac{1}{4}N_{NV}\frac{n_{i}^{\rm on}}{\sum_{i}n_{i}^{\rm
on}}+\frac{3}{4}N_{NV}\frac{n_{i}^{\rm off}}{\sum_{i}n_{i}^{\rm off}}.$ (13)
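The axis-averaging in Eqs. (12) and (13) can be sketched as follows: the driven NV axis holds 1/4 of the centers and the three undriven axes the remaining 3/4. The occupancy vectors below are arbitrary placeholders standing in for the rate-model output, not computed values from this work.

```python
import numpy as np

# Sketch of Eqs. (12)-(13): the driven NV axis holds 1/4 of the centers,
# the three undriven axes the remaining 3/4. The occupancy vectors below
# are arbitrary placeholders standing in for the rate-model output.
N_NV = 10.0  # total NV- density in ppm (placeholder)

n_off = np.array([0.55, 0.25, 0.05, 0.05, 0.03, 0.03, 0.03, 0.01])  # i = 1..8
n_on  = np.array([0.40, 0.38, 0.06, 0.06, 0.04, 0.02, 0.03, 0.01])

N_off = N_NV * n_off / n_off.sum()                 # Eq. (12)
N_on = 0.25 * N_NV * n_on / n_on.sum() \
     + 0.75 * N_NV * n_off / n_off.sum()           # Eq. (13)

print(f"total on-resonance density: {N_on.sum():.1f} ppm")  # conserved
```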
We calculate the change in intensity on a single pass when on and off
microwave resonance as
$\begin{split}\hat{I_{\rm on}}=\hat{I_{0}}e^{-\alpha_{\rm on}d},\\\
\hat{I_{\rm off}}=\hat{I_{0}}e^{-\alpha_{\rm off}d},\end{split}$ (14)
where $d$ is the thickness of the diamond and the absorption coefficient
$\alpha$ on and off resonance is given by
$\alpha^{\rm on}=\sigma_{g}(N^{\rm on}_{1}+N^{\rm on}_{2})+\sigma_{g0}N^{\rm on}_{7}+\sigma_{e}(N^{\rm on}_{3}+N^{\rm on}_{4})+\sigma_{r}N^{\rm on}_{8},$ (15)
$\alpha^{\rm off}=\sigma_{g}(N^{\rm off}_{1}+N^{\rm off}_{2})+\sigma_{g0}N^{\rm off}_{7}+\sigma_{e}(N^{\rm off}_{3}+N^{\rm off}_{4})+\sigma_{r}N^{\rm off}_{8}.$ (16)
Here $\sigma_{g}$ and $\sigma_{g0}$ are respectively the absorption cross
sections of green light for NV- and NV0, and $\sigma_{e}$ and $\sigma_{r}$ the
ionisation cross sections for transfer between the charged and uncharged
defect states. This allows us to calculate the change in absorption when the
diamond is present without microwaves $\hat{I_{\rm off}}/\hat{I_{0}}$, the
change when driven on microwave resonance $\hat{I_{\rm on}}/\hat{I_{0}}$ and
the change between these, which we term the absorption contrast
$C=(\hat{I_{\rm off}}/\hat{I_{0}})-(\hat{I_{\rm on}}/\hat{I_{0}}).$ (17)
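The contrast calculation of Eqs. (14)-(17) reduces to a Beer-Lambert comparison, sketched below. The absorption coefficients passed in are placeholders rather than outputs of the rate model.

```python
import math

# Beer-Lambert sketch of Eqs. (14)-(17); the absorption coefficients
# passed in are placeholders rather than outputs of the rate model.
def contrast(alpha_on, alpha_off, d):
    """Absorption contrast C of Eq. (17) for unit incident intensity."""
    return math.exp(-alpha_off * d) - math.exp(-alpha_on * d)

d = 500e-6  # diamond thickness (m), as used in this work
C = contrast(alpha_on=510.0, alpha_off=500.0, d=d)  # alphas in 1/m
print(f"C = {100 * C:.3f} %")
```

Since absorption increases on resonance ($\alpha^{\rm on} > \alpha^{\rm off}$), the contrast defined this way is positive.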
### II.3 Key Physical Parameters
The key physical parameters of the model can be divided into those that are
intrinsic to the semiconductor gain medium, those intrinsic to the diamond and
those defined by the setup. Examples of the latter include the mirror
reflectivities $R_{1}$, $R_{2}$, $R_{3}$, the cavity length $L$ and any other
losses, such as reflection out of the cavity or from absorption by other
optical components such as lenses, included in the cavity loss factor
$\alpha_{c}$. These factors will also influence the photon lifetime in the
cavity $\tau_{P}$. The maximum Rabi frequency $\Omega_{R}$ that can be reached
also depends on microwave power and how well the microwaves can be coupled
into the diamond.
Figure 2: Dephasing time $T_{2}^{*}$ vs NV- density $N_{NV}$ where both values
are given in other works (citations in the main text). $T_{2}^{*}$ in those
with low NV concentration is limited by interaction with 13C spins, with the
highest values given by diamonds isotopically purified with 12C during
growth. $T_{2}^{*}$ in those with high NV concentration is limited by dipolar
interaction between defects, including other substitutional nitrogen defects
such as P1 centers. (Note: the NV- density for the work by Childress et al. is
an upper estimate made here assuming a 10$\%$ NV- fraction; total
substitutional nitrogen content for this diamond was given as $\leq$0.1 ppm.)
The parameters which are intrinsic to the diamond are the diamond thickness
$d$, $NV^{-}$ density $N_{NV}$, ensemble dephasing time $T_{2}^{*}$ defining
the ODMR linewidth and absorption contrast $C$ arising from changes in pump
absorption on or off microwave resonance. These factors define the diamond
absorption factor $\alpha_{d}$.
A number of these parameters are interrelated. The ODMR linewidth is
proportional to the inverse of $T_{2}^{*}$, which in turn is dependent on
$N_{NV}$ concentration in the limit of high nitrogen content and the abundance
of 13C for low nitrogen content [39]. There is also a dependence on other
material properties such as strain [40], which makes the relationship between
the parameters difficult to determine. We therefore consider values in the
experimental literature as a guide. Fig. 2 shows a plot of $T_{2}^{*}$ versus
NV- density $N_{NV}$ for a range of diamonds from the literature [41, 36, 42,
43, 44, 45, 46]. Typical NV- densities range from 0.1 ppb up to tens of
ppm [47]. In general, $T_{2}^{*}$ $<$ 1 $\mu$s for samples with natural
(1.1$\%$) 13C content [48, 49]. Experiments typically realize Rabi frequencies
$\Omega_{R}$ of 1-5 MHz, with up to 10 MHz using optimal antenna geometries
[50].
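The relation $f_{l} = 1/(\pi T_{2}^{*})$ between dephasing time and pulsed-readout ODMR linewidth, used later in this work, can be evaluated directly; below it is applied to the three example dephasing times chosen for Diamonds D1-D3 in Section III.

```python
import math

# Pulsed-readout ODMR linewidth f_l = 1/(pi T2*), evaluated for the three
# example dephasing times used for Diamonds D1-D3 in this work.
def linewidth(T2_star):
    """FWHM linewidth in Hz for ensemble dephasing time T2* in seconds."""
    return 1.0 / (math.pi * T2_star)

for label, T2 in [("D1", 5e-6), ("D2", 0.75e-6), ("D3", 0.1e-6)]:
    print(f"{label}: f_l = {linewidth(T2)/1e6:.2f} MHz")
```

For D3 ($T_{2}^{*}$ = 0.1 $\mu$s) this gives the 3.2 MHz linewidth quoted in the Fig. 6 caption.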
Parameter | Range | Ref.
---|---|---
Transp. carrier density, $N_{tr}$ | $3\times 10^{18}$-$2\times 10^{19}$ cm$^{-3}$ | [51, 52, 53, 54, 55]
Carrier lifetime, $\tau_{N}$ | 1-5 ns | [56, 57]
Differential gain factor, $a$ | $10^{-17}$-$10^{-22}$ m$^2$ | [58, 59]
Confinement factor, $\Gamma$ | 0.01-0.1 | [60, 61]
Spont. emission factor, $\beta$ | $10^{-5}$-$10^{-2}$ | [62, 63]
Table 1: Typical ranges for the key semiconductor gain medium parameters. Here
$N_{tr}$, $\tau_{N}$ and $a$ are taken for typical III-nitride semiconductors.
The range for $\Gamma$ is given for laser diodes with a thin (sub-$\mu$m)
active layer and is typically no more than a few percent. The range of $\beta$
is given for literature values for a range of laser diodes where strong
confinement is not deliberately sought; in microcavities, for example, values
several orders of magnitude higher than the given range are possible [64].
Those parameters intrinsic to the laser diode/gain chip used are the carrier
density at transparency $N_{tr}$, the gain compression factor $\epsilon$ that
arises from effects that limit the gain well above threshold, the differential
gain coefficient $a$ that relates gain and carrier density, the threshold
carrier lifetime $\tau_{N}$, the confinement factor $\Gamma$, the volume of
the gain medium $V$ and the spontaneous emission factor $\beta$. For our gain
medium we take a III-V semiconductor heterostructure device, such as the
nitride compounds capable of emission at green wavelengths (e.g. InGaN) [65].
Table 1 shows a typical range of values for each of these parameters. We take
the typical ranges shown based on experimental results from different
structures (quantum well, vertical cavity) and from calculations based on bulk
material properties such as effective mass. $N_{tr}$ effectively defines the
size of the lasing threshold current $I_{th}$. The desired change in threshold
current on change in absorption factor $\alpha_{t}$ is defined in particular
by $\Gamma$ and the gain coefficient $a$ in Eqs. (3) and (7).
## III Results
### III.1 Absorption contrast
Figure 3: a) Absorption contrast percentage calculated from the rate model for
Diamond D3. This is the maximum change in absorption between on microwave
resonance and off microwave resonance as a function of Rabi frequency
$\Omega_{R}$ and laser intensity $\hat{I}$ in W/m$^2$. b)-d) Normalized level
occupancy for the NV- triplet ground state, responsible for green absorption,
the uncharged NV0 defect state and the NV- singlet state. At high laser
intensities, population transfer to NV0 limits the achievable absorption
contrast. For reference, 10 mW of laser power with a 1 mm diameter circular
beam on the diamond gives an intensity $\hat{I}$ = $10^{4}$ W/m$^2$. The black
spot indicates the Rabi frequency and power for calculations later in this
work.
We first calculate from the rate model the fraction of incident pump light
which is absorbed by the diamond and the change in this absorption ($C$) when
on microwave resonance. We choose to model three different diamonds covering
different regimes: D1, D2 and D3 with parameters (NV density and $T_{2}^{*}$)
representative of the values seen in the literature (Fig. 2). For Diamond D1
we choose a low NV- concentration $N_{NV}$ = 0.001 ppm, high $T_{2}^{*}$ = 5
$\mu$s, representative of 12C enriched diamonds. For Diamond D2 we choose a
medium NV- concentration $N_{NV}$ = 0.1 ppm, $T_{2}^{*}$ = 0.75 $\mu$s,
representative of CVD-grown diamond with natural 13C abundance. For Diamond D3
we choose $N_{NV}$ = 10 ppm, $T_{2}^{*}$ = 0.1 $\mu$s, characteristic of high
nitrogen content high-pressure high-temperature (HPHT) diamond. We use a
diamond thickness $d$ = 500 $\mu$m for all, representative of commercially
available single crystal plates.
Using the rate model, we can calculate the absorption of light incident on the
diamond 1 - $\hat{I_{\rm off}}$/$\hat{I_{0}}$ where $\hat{I_{0}}$ is intensity
of the incident light and $\hat{I_{\rm off}}$ the intensity of the light after
the diamond (without supplying microwaves). The calculated absorption for
Diamonds D1-D3 is 0.015$\%$, 1.498$\%$ and 77$\%$ respectively, increasing
with NV density as expected. We can also calculate the change in absorption when on
and off microwave resonance. This is shown in Fig. 3 as absorption contrast
$C$ for D3 as a function of microwave drive power (as Rabi frequency
$\Omega_{R}$) and laser output (as intensity). The equivalent plots for D1 and
D2 are given in the Supplementary Information. The contrast is maximal for D3,
with $C$ = 0.22$\%$, and lowest for D1, which has the lowest NV density, with $C$ = $10^{-4}\%$. This
contrast is comparable to our previous absorption experiments using a diamond
with equivalent ppb-level NV- density [21]. We note that at Rabi frequencies
above 100 kHz and laser intensities above $10^{6}$ W/m$^2$ the absorption
contrast begins to drop. This results from depopulation of the triplet ground
state $^{3}A_{2}$ (normalized occupancy in Fig. 3,b) in favor of the NV0 (Fig. 3,c) and the
singlet shelving state (Fig. 3,d). However, since we aim to operate near the
lasing threshold, laser intensity will be low in our scheme, avoiding this
issue and ensuring we remain in the region of highest contrast.
Figure 4: Experimental absorption contrast percentage as a function of
microwave drive frequency and microwave power before the amplifier
(Minicircuits ZHL-16W), measuring through the diamond with 100 mW of laser
light ($\hat{I}$ = $2.5\times 10^{4}$ W/m$^2$). The sample was used to test the absorption
model using estimates of NV density and $T_{2}^{*}$ from the observed
linewidth (values in the main text). Note: due to input loss we remain below
the maximum gain threshold of the amplifier for all microwave powers shown,
which is exceeded at +3dBm.
To further validate the absorption modeling, we have also measured diamond
absorption on a high density sample consisting of a 1 mm thick HPHT diamond
with 200 ppm nitrogen content, irradiated with 10 MeV electrons and annealed
at 900 °C. The estimated NV content for this sample was 10-20 ppm. The
absorption contrast for this sample is shown in Fig. 4. The sample was found
to be moderately polycrystalline and was therefore measured without an offset
field to produce a single central dip in fluorescence, with a number of
satellite features resulting from the polycrystallinity and residual magnetic
field in the laboratory. Here total off resonance diamond absorption was
90$\%$ of incident pump light and maximum absorption contrast $C$ = 0.13$\%$.
For comparison to experiment, we model absorption contrast with our model with
NV density of 15 ppm, $T_{2}^{*}$ = 100 ns, derived from an estimate of the
resonance linewidth, an estimated Rabi frequency of 1 MHz and the same 100 mW
laser power as used experimentally. This gives a total off resonance
absorption of 89$\%$ of the pump light and absorption contrast of $C$ =
0.14$\%$, in good agreement with our measurements.
### III.2 Change in Threshold Current
Figure 5: a) Threshold current as a function of differential gain factor $a$
and confinement factor $\Gamma$ for Diamond D3. b) Change in threshold current
due to the diamond absorption contrast $C$ = 0.02$\%$ for Diamond D3. Here
external cavity length $L_{r}$ was 10 mm and output mirror reflectivity
$R_{1}$ = 0.9.
We first calculate the lasing threshold current $I_{th}$ with the diamond
absent from the cavity. To do this we fixed some of the parameters of the
semiconductor gain medium. We choose a transparency carrier density of
$N_{tr}$ = $1\times 10^{25}$ m$^{-3}$, in the range typical for InGaN laser
structures [51], and a gain region volume of $V$ = $1.25\times 10^{-16}$ m$^3$
(25 $\mu$m $\times$ 100 nm $\times$ 100 $\mu$m). For zero total cavity
absorption $\alpha_{t}$ = 0, we take a typical differential gain factor $a$ =
$5\times 10^{-20}$ m$^2$, a confinement factor $\Gamma$ of 2$\%$ and a carrier
lifetime $\tau_{N}$ = 4 ns, giving a reasonable lasing threshold current of
50 mA [65]. We take the relation
between the gain and the carrier density to be linear, with the carrier
density close to transparency. We make the simplifying assumption that due to
the low power, running close to lasing threshold we do not encounter gain
compression effects, such that the factor $\epsilon\rightarrow 0$. We also
initially make the simplifying assumption that the spontaneous emission rate
is low, with $\beta\rightarrow 0$ (the importance of this second assumption
will be tested in the final section of this work). These assumptions allow the
threshold current $I_{th}$ to be calculated easily from Eq. (8). We define
$L_{r}$ = 10 mm, sufficient to include the diamond and any necessary optics in
a practical implementation. We set mirror reflectivity $R_{3}$ = 0.99 and
collect laser output from transmission through mirror $R_{1}$. We calculate
reflectivity $R_{2}$ from the Fresnel equations assuming an
In$_{x}$Ga$_{1-x}$N/air interface with refractive index $n\approx$ 2.6-2.9 for In$_{x}$Ga$_{1-x}$N [66].
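The normal-incidence Fresnel reflectivity of the facet follows directly from the quoted refractive index range; a minimal sketch:

```python
# Normal-incidence Fresnel reflectivity R = ((n - n0)/(n + n0))^2 for the
# InGaN/air facet, over the refractive index range quoted in the text.
def fresnel_r(n, n0=1.0):
    return ((n - n0) / (n + n0)) ** 2

for n in (2.6, 2.9):
    print(f"n = {n}: R2 = {fresnel_r(n):.3f}")
```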
We impose two feasibility limits on the threshold current $I_{th}$. The first
is that it should not exceed 300 mA, based on the limits discussed in
technical documentation, in order to maintain thermal stability and for
practical heatsinking for a miniaturized diode/gain chip medium. The second is
that the change in the threshold caused by the diamond absorption must exceed
the shot noise of the drive current. From Fig. 5, we can see that these are
mutually exclusive objectives. Setting an absorption contrast $C$ = 0.2$\%$
(Diamond D3) and output mirror reflectivity $R_{1}$ = 0.9 in order to achieve
laser output while keeping threshold current reasonably low, a low confinement
factor $\Gamma$ and high differential gain coefficient $a$ result in the
highest change in threshold current $\Delta I_{th}$ and thus strongest effect
for sensing, but for very high $I_{th}$. Conversely, a higher value of
$\Gamma$ or lower $a$ gives lower $I_{th}$, but $\Delta I_{th}$ shifts which
are too small to be resolved.
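The two feasibility limits above can be checked numerically. The sketch below uses the standard shot-noise expression $I_{sh}=\sqrt{2qI}$ per unit bandwidth; the currents and shifts are placeholder values, not model outputs.

```python
import math

# The two feasibility checks described above, sketched numerically. The
# drive-current shot noise in a 1 Hz bandwidth is I_sh = sqrt(2*q*I); the
# threshold shift must exceed it. Values below are placeholders.
q = 1.602e-19  # electron charge (C)

def feasible(I_th, dI_th, I_max=0.300):
    """True if I_th stays under I_max AND the threshold shift resolves above shot noise."""
    I_sh = math.sqrt(2 * q * I_th)  # A/sqrt(Hz)
    return I_th <= I_max and dI_th > I_sh

print(feasible(I_th=0.100, dI_th=1e-6))  # shift well above shot noise
print(feasible(I_th=0.500, dI_th=1e-6))  # threshold current too high
```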
Although $N_{tr}$ is usually fixed by the semiconductor material, we note that
the other parameters defining $I_{th}$ ($\Gamma$, $a$ and the total cavity
loss $\alpha_{t}$, including the mirror reflectivity and cavity output through
$R_{1}$) are all factors which are well understood and can be
controlled and optimized at either the semiconductor growth stage or in the
external cavity design.
### III.3 Simulated ODMR
Figure 6: a) Simulated ODMR for Diamond D3 at a range of differential gain
factors $a$ = $10^{-21}$-$10^{-19.5}$ m$^2$ (exponents given in legend),
measured by calculating external cavity laser output power $P$ as a function
of microwave frequency for a Lorentzian lineshape transition centered at 2.83
GHz and of linewidth defined by $f_{l}$ = $\frac{1}{\pi T_{2}^{*}}$ = 3.2 MHz. b)
Maximum laser power output $P_{max}$ on resonance as a function of gain
coefficient $a$ for confinement factor $\Gamma$ = 0.01, 0.05, 0.1, c) the
lasing threshold current for $I_{th}$ $<$ 300 mA, for the same three values of
$\Gamma$.
Here we calculate the ODMR spectrum that would be produced from the external
cavity laser. We model a single microwave resonance from a single $m_{s}$ = 0
$\rightarrow$ $m_{s}$ = $\pm$1 transition using a Lorentzian lineshape typical
of ODMR for diamond [17]. We center our resonance at 2.82 GHz, replicating an
ODMR resonance feature associated with a single NV axis, split from resonance
features from other axes by an arbitrary weak DC offset magnetic field. The
maximum amplitude is defined by the maximum change in threshold current
between on and off microwave resonance and full width at half maximum
linewidth $f_{l}$. For simplicity we assume that we can reach the pulsed readout linewidth
defined by $T_{2}^{*}$. We calculate the external cavity laser output power
using Eq. (10). Fig. 6,a) shows the simulated ODMR for Diamond D3, with a
laser power output in the mW range for reasonable values of $\Gamma$ $<$ 0.1
and $a$ = $10^{-17}$-$10^{-21}$ m$^2$. The equivalent plots for Diamond D1 and D2 are given
in the Supplementary Information, with maximum power outputs in the range of
nW and $\mu$W respectively. Unlike for conventional red fluorescence ODMR, the
spectrum using this method is a peak at microwave resonance with zero
background, rather than a small percentage change on a bright background.
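The simulation procedure described above can be sketched as follows: the threshold current follows an inverted Lorentzian around resonance, and output power follows Eq. (10). The threshold shift $\Delta I_{th}$, drive current and efficiency values below are placeholder assumptions, not results of the full model.

```python
import numpy as np

# Sketch of the simulated ODMR spectrum: the threshold current follows an
# inverted Lorentzian around resonance and output power follows Eq. (10).
# Delta I_th, the drive current and the eta values are placeholders.
f0, fl = 2.83e9, 3.2e6            # resonance center and FWHM (Hz)
I_drive = I_th_off = 0.050        # drive at the off-resonance threshold (A)
dI_th = 1e-4                      # on-resonance threshold reduction (A)

f = np.linspace(f0 - 20e6, f0 + 20e6, 2001)
lorentz = (fl / 2) ** 2 / ((f - f0) ** 2 + (fl / 2) ** 2)
I_th = I_th_off - dI_th * lorentz          # threshold dips on resonance

h, c, q, lam = 6.626e-34, 2.998e8, 1.602e-19, 520e-9
eta_o, eta_i = 0.3, 1.0
P = eta_o * eta_i * h * c / (q * lam) * np.clip(I_drive - I_th, 0.0, None)

print(f"peak output on resonance: {P.max()*1e6:.1f} uW")
```

As in the text, the spectrum is a peak on a near-zero background: driving at $I_{th}^{\rm off}$ means emission appears only where the threshold dips.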
### III.4 Magnetic Field Sensitivity
Figure 7: a) Optical shot noise limited sensitivity for Diamond D3 within a
viable range for diode parameters $a$ and $\Gamma$. Sensitivity is in the
picotesla range, enabled by the elimination of the high noise from the
background in the conventional fluorescence detection scheme. b) The ultimate
sensitivity limit imposed by shot noise on the laser drive current, worse by
up to 2 orders of magnitude. These plots give no consideration to practical
viability, with threshold currents $>$4 A at low $\Gamma$.
We calculate sensitivity to magnetic field by taking the background noise
level, dividing by the maximum ODMR slope and converting from frequency to
magnetic field using the NV frequency shift of 28 Hz $\approx$ 1 nT [13]. In our model, the primary sources of noise
are readout noise from the photodetector and noise on the drive current. The
ultimate limit on both is shot noise, of the output laser light and of the
drive current respectively. We make no account for other direct sources of
noise which are difficult to quantify, such as vibration or temperature
fluctuations. Fig. 7 shows a plot of sensitivity for Diamond D3 versus laser
diode parameters for a) the optical shot noise and b) drive current shot noise
limited regimes, with best sensitivity of 50 pT/$\sqrt{\rm Hz}$ when limited
by the shot noise of the drive current. We note that in practice the shot
noise limited operation may be experimentally difficult to realize and include
an estimate based on a commercial current source with ppm-level noise in the
Supplementary Information.
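The sensitivity recipe just described (noise floor divided by maximum ODMR slope, converted to field with 28 Hz per nT) can be sketched for the optical shot noise case. The peak power and linewidth below are placeholders, not outputs of the full model; the slope expression is the exact maximum slope of a Lorentzian.

```python
import math

# Sensitivity sketch: optical shot noise floor divided by the maximum
# ODMR slope, converted to field with 28 Hz per nT. Signal level and
# linewidth are placeholders, not outputs of the full model.
h, c, lam = 6.626e-34, 2.998e8, 520e-9
gamma_nv = 28.0     # Hz per nT

P_peak = 1e-3       # on-resonance output power (W), placeholder
fl = 3.2e6          # ODMR FWHM (Hz)

# maximum slope of a Lorentzian of height P_peak and FWHM fl (exact result)
slope_max = 3 * math.sqrt(3) * P_peak / (4 * fl)        # W/Hz

# optical shot noise in a 1 Hz bandwidth at the peak power
noise = math.sqrt(2 * h * (c / lam) * P_peak)           # W/sqrt(Hz)

eta_B = noise / slope_max / gamma_nv                    # nT/sqrt(Hz)
print(f"shot-noise-limited sensitivity ~ {eta_B*1e3:.1f} pT/sqrt(Hz)")
```

Even with these rough placeholder inputs the estimate lands in the pT/$\sqrt{\rm Hz}$ range, consistent with the regime of Fig. 7.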
Figure 8: Regions where the sensor can and cannot operate due to imposed
limitations. In Regions A and C, operation is constrained by having a change
in threshold less than the shot noise of the drive current and $I_{th}$ $>$
300 mA respectively. In Region B, operation is possible.
The noise limitations as a function of diode parameters are highlighted in
Fig. 8 for Diamond D3. Here the regions A and C represent where operation is
noise limited and the region B represents the region in which the system can
operate. In region A, the change in threshold current is less than the shot
noise of the laser drive current ($\Delta I_{th}$ $<$ $I_{sh}$). In region C,
the threshold current $I_{th}$ $>$ 300 mA exceeds a reasonable maximum drive
current in order to maintain thermal stability. The limitations we impose mean
that Diamonds D1 and D2 have no viable operating region. For completeness, their
sensitivity plots are included in the Supplementary Information.
Figure 9: Best field sensitivity optimizing variables listed in the main text
as a function of a) NV- density and b) $T_{2}^{*}$ in $\mu$s, with the inset
showing a zoomed plot at the highest simulated values of $T_{2}^{*}$. The best
sensitivity was observed at the highest $T_{2}^{*}$, for NV- density $10^{4}$ ppb.
Above this the total absorption for the diamond was too high, limiting laser
output and sensitivity.
By solving the rate model and calculating for laser diode output, we can
calculate sensitivity to magnetic field for any valid physical parameters of
the system, regardless of whether a diamond can be created with the requisite
properties. This includes whether a value of $T_{2}^{*}$ can be realized for a
corresponding $N_{NV}$, making no assumption regarding the relation between
these parameters, or whether $N_{NV}$ can be realized experimentally. Here we
choose parameters $R_{1}$, $\Gamma$, $T_{2}^{*}$, $a$ and NV- density $N_{NV}$
as optimization variables, while fixing diamond thickness ($d$ = 500 $\mu$m),
Rabi frequency ($\Omega_{R}$=1 MHz), laser beam width (0.5 mm), power (200 mW)
and mirror reflectivities. We limit our laser power to 200 mW based on our
rate model calculations, to ensure the majority of light is absorbed by the
NV- defects. We optimize using standard gradient descent methods. Fig. 9 shows
a plot of optical shot noise limited field sensitivity as a function of
$T_{2}^{*}$ and NV density. Sensitivity increased with higher $T_{2}^{*}$, as
expected, with maximum sensitivity at an NV density of $10^{4}$ ppb, above
which high overall absorption by the diamond acted to excessively reduce laser
output. Sub-picotesla level sensitivity is predicted for $T_{2}^{*}$ $>$ 1
$\mu$s (0.3-0.02 pT/$\sqrt{\rm Hz}$ for $T_{2}^{*}$ = 1-10 $\mu$s). Here the
optimal parameters were $a$ = $1.6\times 10^{-20}$ m$^2$, $R_{1}$ = 0.154 and $\Gamma$ =
0.025. These are parameters within the achievable range for a semiconductor
gain medium (see Table 1).
Figure 10: Calculated magnetic field sensitivity and laser diode threshold
current as a function of differential gain factor $a$ using an empirical model
for a quantum well laser diode. The lasing threshold current increases such
that sub-picotesla sensitivity is not reached at a feasible threshold current
($<$300 mA). Here we take confinement factor $\Gamma$=0.01 as an example of
the low values typical of a quantum well laser diode.
We note that in general, the highest sensitivity is realized for the lowest
differential gain factor $a$. A standard laser diode demands a large $a$,
maximizing gain vs carrier density (steeper output power vs drive current
slope). Our scheme requires the reverse: that a small change in gain produced
by the diamond on/off microwave resonance results in a large change in
$N_{th}$ and $I_{th}$. In this respect, a quantum well structure with a
flatter logarithmic relation between gain and carrier density would seem
preferable. However, as we demonstrate in Fig. 10, applying our model with
modifications to the phenomenological description of the medium gain (detailed
in the Supplementary Information) and the same optimization methodology as
above, the threshold current grows exponentially and exceeds drive current
feasibility limits before sub-picotesla/$\sqrt{\rm Hz}$ sensitivity is
reached, for any typical value of $\Gamma$ in the low percentage range.
### III.5 Effect of spontaneous emission
Figure 11: a) External cavity laser output $P$ as a function of semiconductor
laser medium drive current $I$, varying spontaneous emission factor $\beta$.
The result of increasing $\beta$ is that there is no longer a sharp lasing
cut-on at threshold. b) shows the effect on the achievable sensitivity of this
effect, with sensitivity considerably reduced by up to 2 orders of magnitude
for $\beta$ = 10-2.
In the previous sections and past literature, the physical role of spontaneous
emission in the semiconductor gain medium used was not considered. In order to
maximize sensitivity, it is necessary to operate at or close to the off-
resonance lasing threshold. Without spontaneous emission, this can be treated
as a step cut-on, with zero or near-zero emission before lasing begins at
$I_{th}$. With spontaneous emission included, modeled by finite $\beta$ in
Eqs. (4) and (5) above, the power-current relationship close to threshold
instead follows a shallow curve, resulting from weak amplification of
spontaneous emission near threshold producing light emission below $I_{th}$.
Fig. 11,a) shows this effect for varying $\beta$. This acts to severely limit
sensitivity (Fig. 11,b) by reducing the contrast and adding background shot
noise. Typical values of $\beta$ range from $10^{-3}$ to $10^{-5}$, depending on laser
diode structure. We estimate approximately an order of magnitude worse
sensitivity at the low end of this range than with $\beta$ = 0.
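The softening of the threshold with finite $\beta$ can be reproduced with a generic single-mode steady-state rate-equation pair in the spirit of Eqs. (4)-(5); the exact equations of this work may differ, and all parameter values below (including the group velocity) are placeholder assumptions.

```python
from scipy.optimize import brentq

# A generic single-mode steady-state rate-equation pair in the spirit of
# Eqs. (4)-(5), used to illustrate the soft threshold at finite beta. All
# parameter values (including the group velocity vg) are placeholders.
q = 1.602e-19                            # electron charge (C)
eta_i, tau_N, tau_p = 1.0, 4e-9, 2e-12   # injection eff., carrier/photon lifetimes
V, Gamma, vg = 1.25e-16, 0.02, 8.3e7     # gain volume (m^3), confinement, group velocity (m/s)
a, N_tr = 5e-20, 1e25                    # differential gain (m^2), transparency density (m^-3)

def carrier_density(I, S):
    # dN/dt = 0: pumping balances recombination plus stimulated emission
    return (eta_i * I / (q * V) + vg * a * N_tr * S) / (1.0 / tau_N + vg * a * S)

def photon_residual(S, I, beta):
    # dS/dt = 0: net modal gain plus spontaneous emission into the mode
    N = carrier_density(I, S)
    return (Gamma * vg * a * (N - N_tr) - 1.0 / tau_p) * S + Gamma * beta * N / tau_N

def steady_S(I, beta):
    """Steady-state photon density at drive current I for emission factor beta > 0."""
    return brentq(photon_residual, 1e-6, 1e25, args=(I, beta))

# convert photon density to output power via Eq. (9); eta_o is a placeholder
h, c, lam, eta_o = 6.626e-34, 2.998e8, 520e-9, 0.3
def power(S):
    return eta_o * h * c / (lam * tau_p) * (V / Gamma) * S

for beta in (1e-5, 1e-3, 1e-2):
    P_sub = power(steady_S(0.070, beta))   # below threshold (~80 mA here)
    print(f"beta = {beta:.0e}: subthreshold output {P_sub*1e6:.3g} uW")
```

Increasing $\beta$ raises the emitted power below threshold, reproducing the loss of the sharp cut-on seen in Fig. 11,a).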
## IV Conclusion
In this work, we propose a scheme for laser threshold sensing using an
external cavity laser configuration with a current driven semiconductor lasing
medium. Using the change in lasing threshold, light emission only occurs on
microwave resonance. This eliminates the bright background that limits
sensitivity using conventional red fluorescence emission. Predicted
sensitivities for magnetometry with realistic cavity parameters and intrinsic
material parameters are in the pT/$\sqrt{\rm Hz}$ range, offering a route to
improvement over existing methods. Our model has limitations: we base our
calculations on emission into a single laser mode and do not calculate the
dynamics of the system, such as rapid switching in a pulsed operation scheme.
Although beyond the focus of this work, we note that the latter may be a
promising route for future investigation. A scheme where the laser medium
could be initially pumped and then the pump shut off while retaining
population inversion during sensing, typical of a Q-switched setup, would only
be limited by the optical shot noise of any emitted laser light. This is
however challenging to achieve for a semiconductor laser due to the short
excited state (carrier) lifetime.
A key physical limitation of any laser threshold scheme is the role of
amplified spontaneous emission. This blurs the sharp lasing transition, giving
nonzero light emission even below threshold and compromising sensitivity. A
broad transition can be avoided by minimizing the spontaneous emission factor $\beta$, although
this is difficult for a semiconductor laser, particularly since $\beta$ can
scale inversely with the size of the gain medium [67]. Obtaining a gain chip
or antireflective coated laser diode with the right parameters is challenging,
especially for green wavelengths. This problem also exists for infrared
absorption, since the laser emission must match the 1042 nm gap in the singlet
state. In our scheme, running in the infrared could be achieved by extending
the external cavity design proposed here using a diffraction grating in a
Littrow or Littman-Metcalf configuration to create a tunable system.
We consider in this work a normal incidence beam path through a fixed diamond
thickness $d$ = 500 $\mu$m. We note that a higher sensitivity within feasible
limits of threshold current could potentially be reached with a thinner
diamond with a very high NV density. However, in the limit of
$d$$\rightarrow$0, other effects not considered in our model may act to limit
performance, such as variation in NV density and the role of other types of
defects. Measurements of absorption and $T_{2}^{*}$ as a function of diamond
thickness would be extremely useful in determining behavior in this regime. We
also consider that it may be possible to reach higher sensitivity in the low
NV density regime using an extended beam path achieved through internal
reflection in a thicker diamond. This again requires new experimental
measurements to precisely quantify losses (due to reflection or absorption) in
such a geometry.
We note that the fundamental limit for the scheme is the level of contrast $C$
generated between the on and off microwave resonance states, which is very low
for a large diamond ensemble. However, the scheme is not specifically limited to diamond
and is broadly applicable for any material where a large enough, controllable
difference in optical absorption could be generated. The advantage of using
diamond is the ability to coherently manipulate the desired states in a
quantum sensing scheme. Our calculations indicate the scheme will likely only
work for diamonds with a high ($>$ 1 ppm) NV- density. Such diamonds have a
worse ensemble $T_{2}^{*}$ time, limited by nitrogen spin interaction. A
developing solution here may be to use optimal control methods in order to
better control the ensemble. Such methods are widely implemented for nuclear
magnetic resonance and electron spin resonance on bulk samples, but have yet
to be fully developed for sensing using diamond defects [68].
## V Acknowledgments
The work presented here was funded by the Novo Nordisk foundation through the
synergy grant bioQ and the bigQ Center funded by the Danish National Research
Foundation (DNRF).
## References
* Schleich et al. [2016] W. P. Schleich, K. S. Ranade, C. Anton, M. Arndt, M. Aspelmeyer, M. Bayer, G. Berg, T. Calarco, H. Fuchs, E. Giacobino, et al., Applied Physics B 122 (2016).
* Gruber [1997] A. Gruber, Science 276, 2012 (1997).
* Doherty et al. [2013] M. W. Doherty, N. B. Manson, P. Delaney, F. Jelezko, J. Wrachtrup, and L. C. Hollenberg, Physics Reports 528, 1 (2013).
* Taylor et al. [2008] J. M. Taylor, P. Cappellaro, L. Childress, L. Jiang, D. Budker, P. R. Hemmer, A. Yacoby, R. Walsworth, and M. D. Lukin, Nature Physics 4, 810 (2008).
* Schirhagl et al. [2014] R. Schirhagl, K. Chang, M. Loretz, and C. L. Degen, Annual Review of Physical Chemistry 65, 83 (2014).
* Webb et al. [2020] J. L. Webb, L. Troise, N. W. Hansen, J. Achard, O. Brinza, R. Staacke, M. Kieschnick, J. Meijer, J.-F. Perrier, K. Berg-Sørensen, et al., Frontiers in Physics 8 (2020).
* Kucsko et al. [2013] G. Kucsko, P. C. Maurer, N. Y. Yao, M. Kubo, H. J. Noh, P. K. Lo, H. Park, and M. D. Lukin, Nature 500, 54 (2013).
* Knauer et al. [2020] S. Knauer, J. P. Hadden, and J. G. Rarity, npj Quantum Information 6 (2020).
# Non-conservative $H^{\nicefrac{{1}}{{2}}-}$ weak solutions of the
incompressible 3D Euler equations
Tristan Buckmaster Department of Mathematics, Princeton University, Princeton,
NJ 08544<EMAIL_ADDRESS> Nader Masmoudi NYUAD Research Institute,
New York University Abu Dhabi, PO Box 129188, Abu Dhabi, UAE & Courant
Institute of Mathematical Sciences, New York University, New York, NY 10012,
<EMAIL_ADDRESS> Matthew Novack Courant Institute of Mathematical
Sciences, New York University, New York, NY 10012<EMAIL_ADDRESS> Vlad
Vicol Courant Institute of Mathematical Sciences, New York University, New
York, NY 10012<EMAIL_ADDRESS>
###### Abstract
For any positive regularity parameter $\beta<\nicefrac{{1}}{{2}}$, we
construct non-conservative weak solutions of the 3D incompressible Euler
equations which lie in $H^{\beta}$ uniformly in time. In particular, we
construct solutions which have an $L^{2}$-based regularity index _strictly
larger_ than $\nicefrac{{1}}{{3}}$, thus deviating from the
$H^{\nicefrac{{1}}{{3}}}$-regularity corresponding to the Kolmogorov-Obukhov
$\nicefrac{{5}}{{3}}$ power spectrum in the inertial range.
###### Contents
1. 1 Introduction
1. 1.1 Context and motivation
2. 1.2 Ideas and difficulties
2. 2 Outline of the convex integration scheme
1. 2.1 A guide to the parameters
2. 2.2 Inductive assumptions
3. 2.3 Intermittent pipe flows
4. 2.4 Higher order stresses
5. 2.5 Cut-off functions
6. 2.6 The perturbation
7. 2.7 The Reynolds stress error and heuristic estimates
3. 3 Inductive assumptions
1. 3.1 General notations
2. 3.2 Inductive estimates
3. 3.3 Main inductive proposition
4. 3.4 Proof of Theorem 1.1
4. 4 Building blocks
1. 4.1 A careful construction of intermittent pipe flows
2. 4.2 Deformed pipe flows and curved axes
3. 4.3 Placements via relative intermittency
5. 5 Mollification
6. 6 Cutoffs
1. 6.1 Definition of the velocity cutoff functions
2. 6.2 Properties of the velocity cutoff functions
3. 6.3 Definition of the temporal cutoff functions
4. 6.4 Estimates on flow maps
5. 6.5 Stress estimates on the support of the new velocity cutoff functions
6. 6.6 Definition of the stress cutoff functions
7. 6.7 Properties of the stress cutoff functions
8. 6.8 Definition and properties of the checkerboard cutoff functions
9. 6.9 Definition of the cumulative cutoff function
7. 7 From $q$ to $q+1$: breaking down the main inductive estimates
1. 7.1 Induction on $q$
2. 7.2 Notations
3. 7.3 Induction on $\widetilde{n}$
8. 8 Proving the main inductive estimates
1. 8.1 Definition of $\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}$ and $w_{q+1,{\widetilde{n}},{\widetilde{p}}}$
2. 8.2 Estimates for $w_{q+1,{\widetilde{n}},{\widetilde{p}}}$
3. 8.3 Identification of error terms
4. 8.4 Transport errors
5. 8.5 Nash errors
6. 8.6 Type 1 oscillation errors
7. 8.7 Type 2 oscillation errors
8. 8.8 Divergence corrector errors
9. 8.9 Time support of perturbations and stresses
9. 9 Parameters
1. 9.1 Definitions and hierarchy of the parameters
2. 9.2 Definitions of the $q$-dependent parameters
3. 9.3 Inequalities and consequences of the parameter definitions
4. 9.4 Mollifiers and Fourier projectors
5. 9.5 Notation
10. A Useful lemmas
1. A.1 Transport estimates
2. A.2 Proof of Lemma 6.2
3. A.3 $L^{p}$ decorrelation
4. A.4 Sobolev inequality with cutoffs
5. A.5 Consequences of the Faa di Bruno formula
6. A.6 Bounds for sums and iterates of operators
7. A.7 Commutators with material derivatives
8. A.8 Intermittency-friendly inversion of the divergence
## 1 Introduction
We consider the homogeneous incompressible Euler equations
$\partial_{t}v+\mathrm{div\,}(v\otimes v)+\nabla p=0,$ (1.1a)
$\mathrm{div\,}v=0,$ (1.1b)
for the unknown velocity vector field $v$ and scalar pressure field $p$, posed
on the three-dimensional box $\mathbb{T}^{3}=[-\pi,\pi]^{3}$ with periodic
boundary conditions. We consider weak solutions of (1.1), which may be defined
in the usual way for $v\in L^{2}_{t}L^{2}_{x}$.
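For the reader's convenience, one standard form of this weak formulation reads as follows (stated here as the usual convention; details such as the treatment of initial data vary between references):

```latex
% v \in L^2_t L^2_x is a weak solution of (1.1) if, for every
% divergence-free test field
% \varphi \in C^\infty_c(\mathbb{T}^3 \times (0,T); \mathbb{R}^3),
\int_0^T \int_{\mathbb{T}^3}
  \Big( v \cdot \partial_t \varphi
        + (v \otimes v) : \nabla \varphi \Big) \, dx \, dt = 0,
% and v(\cdot,t) is divergence-free in the sense of distributions:
\int_{\mathbb{T}^3} v(x,t) \cdot \nabla \phi(x) \, dx = 0
\quad \text{for a.e. } t \text{ and all } \phi \in C^\infty(\mathbb{T}^3).
```

Since (1.1) is in divergence form, every derivative falls on the test function, which is why $v\in L^{2}_{t}L^{2}_{x}$ suffices to make each integral finite.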
We show that within the class of weak solutions of regularity
$C^{0}_{t}H^{\nicefrac{{1}}{{2}}-}_{x}$, the 3D Euler system (1.1) is
flexible.111Loosely speaking, we consider a system of partial differential
equations of physical origin to be flexible in a certain regularity class, if
at this regularity level the PDEs are not anymore predictive: there exist
infinitely many solutions, which behave in a non-physical way, in stark
contrast to the behavior of the PDE in the smooth category. We refer the
interested reader to the discussion in the surveys of De Lellis and
Székelyhidi Jr. [30, 32] which draw the analogy with the flexibility in
Gromov’s $h$-principle [40]. An example of this flexibility is provided by:
###### Theorem 1.1 (Main result).
Fix $\beta\in(0,\nicefrac{{1}}{{2}})$. For any divergence-free
$v_{\mathrm{start}},v_{\mathrm{end}}\in L^{2}(\mathbb{T}^{3})$ which have the
same mean, any $T>0$, and any $\epsilon>0$, there exists a weak solution $v\in
C([0,T];H^{\beta}(\mathbb{T}^{3}))$ to the 3D Euler equations (1.1) such that
$\left\|v(\cdot,0)-v_{\mathrm{start}}\right\|_{L^{2}(\mathbb{T}^{3})}\leq\epsilon$
and
$\left\|v(\cdot,T)-v_{\mathrm{end}}\right\|_{L^{2}(\mathbb{T}^{3})}\leq\epsilon$.
Since the vector field $v_{\mathrm{end}}$ may be chosen to have a much higher
(or much lower) kinetic energy than the vector field $v_{\mathrm{start}}$, the
above result shows the existence of infinitely many non-conservative weak
solutions of 3D Euler in the regularity class
$C^{0}_{t}H^{\nicefrac{{1}}{{2}}-}_{x}$. Theorem 1.1 further shows that the
set of so-called wild initial data is dense in the space of $L^{2}$ periodic
functions of given mean. The novelty of this result is that these weak
solutions have more than $\nicefrac{{1}}{{3}}$ regularity, when measured on a
$L^{2}_{x}$-based Banach scale.
###### Remark 1.2 (Corollaries of the proof).
We have chosen to state the flexibility of the 3D Euler equations as in
Theorem 1.1 because it is a simple way to exhibit weak solutions which are
non-conservative, leaving the entire emphasis of the proof on the regularity
class in which the weak solutions lie. Using by now standard approaches
encountered in convex integration constructions for the Euler equations, we
may alternatively establish the following variants of flexibility for (1.1)
within the class of $C^{0}_{t}H^{\nicefrac{{1}}{{2}}-}_{x}$ weak solutions:
1. (a)
The proof of Theorem 1.1 also shows that: given any
$\beta<\nicefrac{{1}}{{2}}$, $T>0$, and $E>0$, there exists a weak solution
$v\in C(\mathbb{R},H^{\beta}(\mathbb{T}^{3}))$ of the 3D Euler equations such that:
$\mathrm{supp\,}_{t}v\subset[-T,T]$, and
$\left\|v(\cdot,0)\right\|_{L^{2}}\geq E$. Such weak solutions are nontrivial
and have compact support in time, thereby implying the non-uniqueness of weak
solutions to (1.1) in the regularity class
$C^{0}_{t}H^{\nicefrac{{1}}{{2}}-}_{x}$. The argument is sketched in Remark
3.7 below.
2. (b)
The proof of Theorem 1.1 may be modified to show that: given any
$\beta\in(0,\nicefrac{{1}}{{2}})$, and any $C^{\infty}$ smooth function
$e\colon[0,T]\to(0,\infty)$, there exists a weak solution $v\in
C^{0}([0,T];H^{\beta}(\mathbb{T}^{3}))$ of the 3D Euler equations, such that
$v(\cdot,t)$ has kinetic energy $e(t)$, for all $t\in[0,T]$. In particular,
the flexibility of 3D Euler in $C^{0}_{t}H^{\nicefrac{{1}}{{2}}-}_{x}$ may be
shown to also hold within the class of dissipative weak solutions, by choosing
$e$ to be a non-increasing function of time. This is further discussed in
Remark 3.8 below.
### 1.1 Context and motivation
Classical solutions of the Cauchy problem for the 3D Euler equations (1.1) are
known to exist, locally in time, for initial velocities which lie in
$C^{1,\alpha}$ for some $\alpha>0$ (see e.g. Lichtenstein [48]). These
solutions are unique, and they conserve (in time) the kinetic energy
${\mathcal{E}}(t)=\frac{1}{2}\int_{\mathbb{T}^{3}}|v(x,t)|^{2}dx$, giving two
manifestations of rigidity of the Euler equations within the class of smooth
solutions.
Motivated by hydrodynamic turbulence, it is natural to consider a much broader
class of solutions to the 3D Euler system; these are the distributional or
weak solutions of (1.1), which may be defined in the natural way as soon as
$v\in L^{2}_{t}L^{2}_{x}$, since (1.1) is in divergence form. Indeed, one of
the fundamental assumptions of Kolmogorov’s ’41 theory of turbulence [46] is
that in the infinite Reynolds number limit, turbulent solutions of the 3D
Navier-Stokes equations exhibit anomalous dissipation of kinetic energy; by
now, this is considered to be an experimental fact, see e.g. the book of
Frisch [39] for a detailed account. In particular, this anomalous dissipation
of energy necessitates that the family of Navier-Stokes solutions does not
remain uniformly bounded in the topology of $L^{3}_{t}B^{\alpha}_{3,\infty,x}$
for any $\alpha>\nicefrac{{1}}{{3}}$, as the Reynolds number diverges, as was
alluded to in the work of Onsager [57].222Onsager did not use the Besov norm
$\left\|v\right\|_{B^{\alpha}_{p,\infty}}=\left\|v\right\|_{L^{p}}+\sup_{|z|>0}|z|^{-\alpha}\left\|v(\cdot+z)-v(\cdot)\right\|_{L^{p}}$;
here we use this modern notation and the sharp version of this conclusion, cf.
Constantin, E, and Titi [22], Duchon and Robert [35], Drivas and Eyink [34].
Thus, in the infinite Reynolds number limit for turbulent solutions of 3D
Navier-Stokes, one expects the convergence to _weak_ solutions of 3D Euler,
not classical ones.
It turns out that even in the context of weak solutions, the 3D Euler
equations enjoy some conditional variants of rigidity. An example is the
classical weak-strong uniqueness property.333If $v$ is a strong solution of
the Cauchy problem for (1.1) with initial datum $v_{0}\in L^{2}$, and $w\in
L^{\infty}_{t}L^{2}_{x}$ is merely a weak solution of the Cauchy problem for
(1.1), which has the additional property that its kinetic energy
${\mathcal{E}}(t)$ is less than the kinetic energy of $v_{0}$, for a.e. $t>0$,
then in fact $v\equiv w$. See e.g. the review [65] for a detailed account.
Another example is the question of whether weak solutions of the 3D Euler
equation conserve kinetic energy. This is the subject of the Onsager
conjecture [57], one of the most celebrated connections between
phenomenological theories in turbulence and the rigorous mathematical analysis
of the PDEs of fluid dynamics. For a detailed account we refer the reader to
the reviews [37, 21, 60, 30, 63, 32, 33, 12, 14] and mention here only a few
of the results in the Onsager program for 3D Euler.
Constantin, E, and Titi [22] established the rigid side of the Onsager
conjecture, which states that if a weak solution $v$ of (1.1) lies in
$L^{3}_{t}B^{\beta}_{3,\infty,x}$ for some $\beta>\nicefrac{{1}}{{3}}$, then
$v$ conserves its kinetic energy. The endpoint case
$\beta=\nicefrac{{1}}{{3}}$ was addressed by Cheskidov, Constantin,
Friedlander, and Shvydkoy [16], who established a criterion which is known to
be sharp in the context of 1D Burgers. By using the Bernstein inequality to
transfer information from $L^{2}_{x}$ into $L^{3}_{x}$, the authors of [16]
also prove energy-rigidity for weak solutions based on a regularity condition
for an $L^{2}_{x}$ based scale: if $v\in L^{3}_{t}H^{\beta}_{x}$ with
$\beta>\nicefrac{{5}}{{6}}$, then $v$ conserves kinetic energy (see also the
work of Sulem and Frisch [62]). We emphasize the discrepancy between the
energy-rigidity threshold exponents $\nicefrac{{5}}{{6}}$ for the
$L^{2}$-based Sobolev scale, and $\nicefrac{{1}}{{3}}$ for $L^{p}$-based
regularity scales with $p\geq 3$.
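The discrepancy between the two thresholds can be seen from a Littlewood-Paley heuristic (a back-of-the-envelope sketch, not an argument from this paper): transferring an $H^{\beta}$ bound into the $L^{3}$-based scale via Bernstein's inequality loses half a derivative in three dimensions.

```latex
% For a function localized at frequency 2^j on \mathbb{T}^3, Bernstein's
% inequality gives
\|P_j v\|_{L^3}
  \lesssim 2^{3j\left(\frac12-\frac13\right)} \|P_j v\|_{L^2}
  = 2^{\frac{j}{2}} \|P_j v\|_{L^2}.
% If v \in H^\beta, then \|P_j v\|_{L^2} \lesssim 2^{-j\beta}, hence
\|P_j v\|_{L^3} \lesssim 2^{-j\left(\beta-\frac12\right)},
\qquad \text{i.e.} \qquad v \in B^{\beta-\frac12}_{3,\infty}.
% The Constantin-E-Titi rigidity threshold \beta - 1/2 > 1/3 then yields
% precisely \beta > 5/6.
```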
The first flexibility results were obtained by Scheffer [58], who constructed
non-trivial weak solutions of the 2D Euler system, which lie in
$L^{2}_{t}L^{2}_{x}$ and have compact support in space and time. The existence
of infinitely many dissipative weak solutions to the Euler equations was first
proven by Shnirelman in [59], in the regularity class
$L^{\infty}_{t}L^{2}_{x}$. Inspired by the work [53] of Müller and Šverak for
Lipschitz differential inclusions, in [29] De Lellis and Székelyhidi Jr. have
constructed infinitely many dissipative weak solutions of (1.1) in the
regularity class $L^{\infty}_{t}L^{\infty}_{x}$ and have developed a
systematic program towards the resolution of the flexible side of the Onsager
conjecture, using the technique of convex integration. Inspired by Nash’s
paradoxical constructions for the isometric embedding problem [54], the first
proof of flexibility of the 3D Euler system in a Hölder space was given by De
Lellis and Székelyhidi Jr. in the work [31]. This breakthrough or crossing of
the $L^{\infty}_{x}$ to $C^{0}_{x}$ barrier in convex integration for 3D Euler
[31] has in turn spurred a number of results [8, 6, 9, 27] which have used
finer properties of the Euler equations to increase the regularity of the wild
weak solutions being constructed. The flexible part of the Onsager conjecture
was finally resolved by Isett [43, 42] in the context of weak solutions with
compact support in time (see also the subsequent work by the first and last
authors with De Lellis and Székelyhidi Jr. [11] for dissipative weak
solutions), by showing that for any regularity parameter
$\beta<\nicefrac{{1}}{{3}}$, the 3D Euler system (1.1) is flexible in the
class of $C^{\beta}_{t,x}$ weak solutions. We refer the reader to the review
papers [30, 63, 32, 33, 12, 14] for more details concerning convex integration
constructions in fluid dynamics, and for open problems in this area.
Since the aforementioned convex integration constructions are spatially
homogeneous, they yield weak solutions whose Hölder regularity index cannot
taken to be larger than $\nicefrac{{1}}{{3}}$ (recall that weak solutions in
$L^{3}_{t}C^{\beta}_{x}$ with $\beta>\nicefrac{{1}}{{3}}$ must conserve
kinetic energy). However, the exponent $\nicefrac{{1}}{{3}}$ is not expected
to be a sharp threshold for energy-rigidity/flexibility if the weak solutions’
regularity is measured on an $L^{p}_{x}$-based Banach scale with $p<3$. This
expectation stems from the measured intermittent nature of turbulent flows,
see e.g. Frisch [39, Figure 8.8, page 132]. In broad terms, intermittency is
characterized as a deviation from the Kolmogorov ’41 scaling laws, which were
derived under the assumptions of homogeneity and isotropy (for a rigorous way
to measure this deviation, see Cheskidov and Shvydkoy [20]). A common
signature of intermittency is that for $p\neq 3$, the $p^{th}$ order structure
function444In analogy with $L^{p}$-based Besov spaces, absolute $p^{th}$ order
structure functions are typically defined as
$S_{p}(\ell)=\fint_{0}^{T}\fint_{\mathbb{T}^{3}}\fint_{{\mathbb{S}}^{2}}|v(x+\ell
z,t)-v(x,t)|^{p}dzdxdt$. The structure function exponents in Kolmogorov’s ’41
theory are then given by $\zeta_{p}=\limsup_{\ell\to 0^{+}}\frac{\log
S_{p}(\ell)}{\log(\epsilon\ell)}$, where $\epsilon>0$ is the postulated
anomalous dissipation rate in the infinite Reynolds number limit. Of course,
for any non-conservative weak solution we may define a positive number
$\epsilon=\fint_{0}^{T}|\frac{d}{dt}{\mathcal{E}}(t)|dt$ as a substitute for
Kolmogorov’s $\epsilon$, which allows one to define $\zeta_{p}$ accordingly.
exponents $\zeta_{p}$ deviate from the Kolmogorov-predicted values of
$\nicefrac{{p}}{{3}}$. We note that the regularity statement $v\in
C^{0}_{t}B^{s}_{p,\infty}$ corresponds to a structure function exponent
$\zeta_{p}=sp$; that is, Kolmogorov ’41 predicts that $s=\nicefrac{{1}}{{3}}$
for all $p$. The exponent $p=2$ plays a special role, as it allows one to
measure the intermittent nature of turbulent flows on the Fourier side as a
power-law decay of the energy spectrum. Throughout the last five decades, the
experimentally measured values of $\zeta_{2}$ (in the inertial range, for
viscous flows at very high Reynolds numbers) have been consistently observed
to exceed the Kolmogorov-predicted value of $\nicefrac{{2}}{{3}}$ [1, 50, 61,
45, 15, 44, 55], thus showing a steeper decay rate in the inertial range power
spectrum than the one predicted by the Kolmogorov-Obukhov $5/3$ law. Moreover,
in the mathematical literature, Constantin and Fefferman [23] and Constantin,
Nie, and Tanveer [24] have used the 3D Navier-Stokes equations to show that
the Kolmogorov ’41 prediction $\zeta_{2}=\nicefrac{{2}}{{3}}$ is only
consistent with a lower bound for $\zeta_{2}$, instead of an exact equality.
Prior to this work, it was not known whether the 3D Euler equation can sustain
weak solutions which have kinetic energy that is uniformly bounded in time but
not conserved, and which have spatial regularity equal to or exceeding
$H^{\nicefrac{{1}}{{3}}}_{x}$, corresponding to
$\zeta_{2}\geq\nicefrac{{2}}{{3}}$; see [12, Open Problem 5] and [14,
Conjecture 2.6]. Theorem 1.1 gives the first such existence result. The
solutions in Theorem 1.1 may be constructed to have second order structure
function exponent $\zeta_{2}$ an arbitrary number in $(0,1)$, showing that
(1.1) exhibits weak solutions which severely deviate from the
Kolmogorov-Obukhov $5/3$ power spectrum.
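The dictionary between Sobolev regularity, the second order structure function exponent, and the power spectrum can be summarized heuristically as follows (a standard correspondence, recalled here for orientation rather than quoted from the paper):

```latex
% Uniform-in-time H^\beta regularity corresponds, at the level of second
% order increments, to
S_2(\ell) \sim \ell^{\,2\beta}
\qquad \Longleftrightarrow \qquad \zeta_2 = 2\beta,
% while the associated inertial-range energy spectrum decays like
E(k) \sim k^{-(1+\zeta_2)} = k^{-(1+2\beta)}.
% Kolmogorov '41 (\beta = 1/3) recovers E(k) \sim k^{-5/3}; any
% \beta \in (1/3, 1/2), as in Theorem 1.1, gives a spectrum strictly
% steeper than k^{-5/3}, approaching k^{-2} as \beta \to 1/2.
```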
We note that in a recent work [18], Cheskidov and Luo established the
sharpness of the $L^{2}_{t}L^{\infty}_{x}$ endpoint of the Prodi-Serrin
criteria for the 3D Navier-Stokes equations, by constructing non-unique weak
(mild) solutions of these equations in $L^{p}_{t}L^{\infty}_{x}$, for any
$p<2$.555See also [19] for a proof that the space $C^{0}_{t}L^{p}_{x}$ is
critical for uniqueness at $p=2$, in two space dimensions. As noted in [18,
Theorem 1.10], their approach also applies to the 3D Euler equations, yielding
weak solutions that lie in $L^{1}_{t}C^{\beta}_{x}$ for any $\beta<1$, and
thus these weak solutions also have more than $\nicefrac{{1}}{{3}}$
regularity. The drawback is that the solutions constructed in [18] do not have
bounded (in time) kinetic energy, in contrast to Theorem 1.1, which yields
weak solutions with kinetic energy that is continuous in time.
Theorem 1.1 is proven by using an intermittent convex integration scheme,
which is necessary in order to reach beyond the $\nicefrac{{1}}{{3}}$
regularity exponent, uniformly in time. Intermittent convex integration
schemes have been introduced by the first and last authors in [13] in order to
prove the non-uniqueness of weak (mild) solutions of the 3D Navier-Stokes
equations with bounded kinetic energy, and then refined in collaboration with
Colombo [7] to construct solutions which have partial regularity in time.
Recently, intermittent convex integration techniques have been used
successfully to construct non-unique weak solutions for the transport equation
(cf. Modena and Székelyhidi Jr. [52, 51], Brué, Colombo, and De Lellis [5],
and Cheskidov and Luo [17]), the 2D Euler equations with vorticity in a
Lorentz space (cf. [4]), the stationary 4D Navier-Stokes equations (cf. Luo
[49]), the $\alpha$-Euler equations (cf. [3]), in the context of the MHD
equations (cf. Dai [26], the first and last authors with Beekie [2]), and the
effect of temporal intermittency has recently been studied by Cheskidov and
Luo [18]. We refer the reader to the reviews [12, 14] for further references,
and for a comparison between intermittent and homogeneous convex integration.
When applied to three-dimensional nonlinear problems, intermittent convex
integration has insofar only been successful at producing weak solutions with
negligible spatial regularity indices, uniformly in time. As we explain in
Section 1.2, there is a fundamental obstruction to achieving high regularity:
in physical space, intermittency causes concentrations that result in the
formation of intermittent peaks, and to handle these peaks the existing
techniques have used an extremely large separation between the frequencies in
consecutive steps of the convex integration scheme.666This becomes less of an
issue when one considers the equations of fluid dynamics in very high space
dimensions, cf. Tao [64]. This paper is the first to successfully implement a
high-regularity (in $L^{2}$), spatially-intermittent, temporally-homogenous,
convex integration scheme in three space dimensions, and shows that for the 3D
Euler system any regularity exponent $\beta<\nicefrac{{1}}{{2}}$ may be
achieved.777It was known within the community (see Section 2.4.1 for a
detailed explanation) that there is a key obstruction to reaching a regularity
index in $L^{2}$ for a solution to the Euler equations larger than
$\nicefrac{{1}}{{2}}$ via convex integration. In fact, the techniques
developed in this paper are the backbone for the recent work [56] of the last
two authors, which gives an alternative, intermittent, proof of the Onsager
conjecture.
### 1.2 Ideas and difficulties
As alluded to in the previous paragraph, the main difficulty in reaching a
high regularity exponent for weak solutions of (1.1) is that the existing
intermittent convex integration schemes do not allow for consecutive frequency
parameters $\lambda_{q}$ and $\lambda_{q+1}$ to be close to each other. In
essence, this is because intermittency smears out the set of active
frequencies in the approximate solutions to the Euler system (instead of
concentric spheres, they are more akin to thick concentric annuli), and
several of the key estimates in the scheme require frequency separation to
achieve $L^{p}$-decoupling (see Section 2.4.1). Indeed, high regularity
exponents necessitate an almost geometric growth of frequencies
($\lambda_{q}=\lambda_{0}^{q}$), or at least a barely super-exponential growth
rate $\lambda_{q+1}=\lambda_{q}^{b}$ with $0<b-1\ll 1$ (in comparison, the
schemes in [13, 7] require $b\approx 10^{3}$). Essentially every new idea in
this paper is aimed either directly or indirectly at rectifying this issue:
how does one take advantage of intermittency, and at the same time keep the
frequency separation to be nearly geometric?
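To see why the achievable regularity exponent is tied to the frequency growth rate, consider the following simplified heuristic (a sketch with the usual convex integration conventions; the paper's actual parameter hierarchy is set out in Section 9):

```latex
% In a scheme with frequencies \lambda_q and amplitudes
% \delta_q = \lambda_q^{-2\beta}, the velocity increment w_{q+1}
% oscillates at frequency \lambda_{q+1} with
\|w_{q+1}\|_{L^2} \approx \delta_{q+1}^{1/2},
\qquad
\|w_{q+1}\|_{H^{\beta'}}
  \approx \delta_{q+1}^{1/2} \lambda_{q+1}^{\beta'}
  = \lambda_{q+1}^{\beta'-\beta},
% so \sum_q w_{q+1} converges in H^{\beta'} precisely when \beta' < \beta.
% Errors produced at intermediate frequencies between \lambda_q and
% \lambda_{q+1} = \lambda_q^b shrink only by powers of the ratio
% \lambda_{q+1}/\lambda_q; pushing \beta toward 1/2 therefore forces
% b \to 1, i.e. nearly geometric frequency growth.
```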
The building blocks used in the convex integration scheme are intermittent
pipe flows,888The moniker used in [27] and the rest of the literature for
these stationary solutions has been “Mikado flows”. However, we rely rather
heavily on the geometric properties of these solutions, such as orientation
and concentration around axes, and so to emphasize the tube-like nature of
these objects, we will frequently use the name “pipe flows”. which we describe
in Section 2.3. Due to their spatial concentration and their periodization
rate, quadratic interactions of these building blocks produce both the helpful
low frequency term which is used to cancel the previous Reynolds stress
$\mathring{R}_{q}$, and also a number of other errors which live at
intermediate frequencies. These errors are spread throughout the frequency
annulus with inner radius $\lambda_{q}$ and outer radius $\lambda_{q+1}$, and
may have size only slightly less than that of $\mathring{R}_{q}$. If left
untreated, these errors only allow for a very small regularity parameter
$\beta$. In order to increase the regularity index of our weak solutions, we
need to take full advantage of the frequency separation between the slow
frequency $\lambda_{q}$ and the fast frequency $\lambda_{q+1}$. As such, the
intermediate-frequency errors need to be further corrected via velocity
increments designed to push these residual stresses towards the frequency
sphere of radius $\lambda_{q+1}$. The quadratic interactions among these
higher-order velocity corrections themselves, and in principle also with the
old velocity increments, in turn create higher order Reynolds stresses, which
live again at intermediate frequencies (slightly higher than before), but
whose amplitude is slightly smaller than before. This process of adding higher
order velocity perturbations designed to cancel intermediate-frequency higher
order stresses has to be repeated many times until all the resulting errors
are either small, or they live at frequency $\approx\lambda_{q+1}$, and thus
are also small upon inverting the divergence. See Sections 2.4 and 2.6 for a
more thorough account of this iteration.
Throughout the process described in the above paragraph, we need to keep
adding velocity increments, while at the same time keeping the high-high-high
frequency interactions under control. The fundamental obstacle here is that
when composing the intermittent pipe flows with the Lagrangian flow of the
slow velocity field, the resulting deformations are not spatiotemporally
homogeneous. In essence, the intermittent nature of the approximate velocity
fields implies that a sharp global control on their Lipschitz norm is
unavailable, thus precluding us from implementing a gluing technique as in
[42, 11]. Additionally, we are faced with the issue that pipe flows which were
added at different stages of the higher order correction process have
different periodization rates and different spatial concentration rates, and
may a priori overlap. Our main idea here is to implement a placement technique
which uses the relative intermittency of pipe flows from previous or same
generations, in conjunction with a sharp bound on their local Lagrangian
deformation rate, to determine suitable spatial shifts for the placement of
new pipe flows so that they dodge all other bent pipes which live in a
restricted space-time region. This geometric placement technique is discussed
in Section 2.5.2.
A rigorous mathematical implementation of the heuristic ideas described in the
previous two paragraphs, which crucially allows us to slow down the frequency
growth to be almost geometric, requires extremely sharp information on all
higher order errors and their associated velocity increments. For instance, in
order to take advantage of the transport nature of the linearized Euler system
while mitigating the loss of derivatives issue which is characteristic of
convex integration schemes, we need to keep track of essentially _infinitely
many sharp material derivative estimates_ for all velocity increments and
stresses. Such estimates are naturally only attainable on a local inverse
Lipschitz timescale, which in turn necessitates keeping track of the precise
location in space of the peaks in the densities of the pipe flows, and
performing a frequency localization with respect to both the Eulerian and the
Lagrangian coordinates. In order to achieve this, we introduce carefully
designed cutoff functions, which are defined recursively for the velocity
increments (in order to keep track of overlapping pipe flows from different
stages of the iteration), and iteratively for the Reynolds stresses (in order
to keep track of the correct amplitude of the perturbation which needs to be
added to correct these stresses); see Section 2.5. The cutoff functions we
construct effectively play the role of a joint Eulerian-and-Lagrangian
Littlewood-Paley frequency decomposition, which in addition keeps track of
both the position in space and the amplitude of various objects (more akin to
a wavelet decomposition). The analysis of these cutoff functions requires
estimating very high order commutators between Lagrangian and Eulerian
derivatives which in great part are responsible for the length of this paper
(see Section 6 and Appendix A). Lastly, we mention an additional technical
complication: since the sharp control of the Lipschitz norm of the approximate
velocities in our scheme is local in space and time, we need to work with an
inverse divergence operator (e.g. for computing higher order stresses) which,
up to much lower order error terms, maintains the spatial support of the
vector fields that it is applied to. Additionally, we need to be able to
estimate an essentially infinite number of material derivatives applied to the
output of this inverse divergence operator. This issue is addressed in Section
A.8.
The rest of the paper is organized as follows. Section 2 contains an outline
of the convex integration scheme, in which we replace some of the actual
estimates and definitions appearing in the proof with heuristic ones in order
to highlight the new ideas at an intuitive level. The proof of Theorem 1.1 is
given in Section 3, assuming that a number of estimates hold true inductively
for the solutions of the Euler-Reynolds system at every step of the convex
integration iteration. The remainder of the paper is dedicated to showing that
the inductive bounds stated in Section 3.2 may indeed be propagated from step
$q$ to step $q+1$. Section 4 contains the construction of the intermittent
pipe flows used in this paper and describes the careful placement required to
show that these pipe flows do not overlap on a suitable space-time set. The
mollification step of the proof is performed in Section 5. Section 6 contains
the definitions of the cutoff functions used in the proof and establishes
their properties. Section 7 breaks down the main inductive bounds from
Section 3.2 into components which take into account the higher order stresses
and perturbations. Section 8 then proves the constituent parts of the
inductive bounds outlined in the previous section. Section 9 carefully defines
the many parameters in the scheme, states the precise order in which they are
chosen, and lists a few consequences of their definitions. Finally, Appendix A
contains the analytical toolshed to which we appeal throughout the paper.
### Acknowledgements
T.B. was supported by the NSF grant DMS-1900149 and a Simons Foundation
Mathematical and Physical Sciences Collaborative Grant. N.M. was supported by
the NSF grant DMS-1716466 and by Tamkeen under the NYU Abu Dhabi Research
Institute grant of the center SITE. V.V. was supported by the NSF grant CAREER
DMS-1911413.
## 2 Outline of the convex integration scheme
### 2.1 A guide to the parameters
In order to make sharp estimates throughout the scheme, we will require
numerous parameters. For the reader’s convenience, we have collected in this
section the heuristic definitions of all the parameters introduced in the
following sections of the outline. The parameters are listed in Section 2.1.1
in the order corresponding to their first appearance in the outline. We give
as well brief descriptions of the significance of each parameter.
#### 2.1.1 Definitions
###### Definition 2.1 (Parameters Introduced in Section 1).
1. (1)
$\beta$ \- The regularity exponent corresponding to a final solution $v\in
C\left(\mathbb{R};H^{\beta}(\mathbb{T}^{3})\right)$.
###### Definition 2.2 (Parameters Introduced in Section 2.2).
1. (1)
$q$ \- The integer which represents the primary stages of the iterative convex
integration scheme.
2. (2)
${\lambda_{q}=a^{(b^{q})}}$ \- The primary parameter used to quantify
frequencies. $a$ and $b$ will be chosen later, with $a\in\mathbb{R}_{+}$ being
a sufficiently large positive number and $b\in\mathbb{R}$ a real number
slightly larger than $1$.
3. (3)
$\delta_{q}=\lambda_{q}^{-2\beta}$ \- The primary parameter used to quantify
amplitudes of stresses and perturbations.
4. (4)
$\tau_{q}=(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q})^{-1}$ \- The primary
parameter used to quantify the cost of a material derivative
$\partial_{t}+v_{q}\cdot\nabla$. (For technical reasons, the timescale $\tau_{q}$ will be chosen to be slightly shorter than $(\delta_{q}^{\frac{1}{2}}\lambda_{q})^{-1}$. For the heuristic calculations, one may ignore this modification and simply use $\tau_{q}^{-1}=\delta_{q}^{\frac{1}{2}}\lambda_{q}$.)
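To make these definitions concrete, the following sketch tabulates $\lambda_{q}$, $\delta_{q}$, and $\tau_{q}^{-1}$ for illustrative values of $a$, $b$, and $\beta$ (these particular values are placeholders chosen for readability; the admissible ranges are fixed rigorously in Section 9):

```python
import math

# Illustrative values only; the admissible ranges are fixed in Section 9.
a, b, beta = 10.0, 1.1, 0.45

def lam(q):      # frequency lambda_q = a^(b^q), so lambda_{q+1} = lambda_q^b
    return a ** (b ** q)

def delta(q):    # amplitude delta_q = lambda_q^(-2 beta)
    return lam(q) ** (-2 * beta)

def tau_inv(q):  # heuristic cost of a material derivative, tau_q^{-1}
    return math.sqrt(delta(q)) * lam(q)

for q in range(1, 5):
    print(f"q={q}: lambda={lam(q):.3e}, delta={delta(q):.3e}, tau^-1={tau_inv(q):.3e}")
```

As expected, the frequencies grow almost geometrically (since $b$ is close to $1$), the amplitudes decrease, and the material derivative cost $\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}=\lambda_{q}^{1-\beta}$ increases with $q$.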
###### Definition 2.3 (Parameters Introduced in Section 2.3).
1. (1)
$n$ \- The primary parameter which will be used to divide up the frequencies
between $\lambda_{q}$ and $\lambda_{q+1}$ and which will take non-negative
integer values. The divisions will be used both for the frequencies of the
higher order stresses in Section 2.4 as well as the thickness of the
intermittent pipe flows used to correct the higher order stresses.
2. (2)
${n_{\rm max}}$ \- A large integer which is fixed independently of $q$ and
which sets the largest allowable value of $n$.
3. (3)
$\displaystyle{r_{q+1,n}=\left(\lambda_{q}\lambda_{q+1}^{-1}\right)^{\left(\frac{4}{5}\right)^{n+1}}}$
\- The parameter quantifying intermittency, or the thickness of a tube
periodized at unit scale for values of $n$ such that $0\leq n\leq{n_{\rm max}}$. (In particular, this choice gives $r_{q+1,n+1}=r_{q+1,n}^{\frac{4}{5}}$. In our proof, the inequality $r_{q+1,n}^{3}\ll r_{q+1,n+1}^{4}$ plays a crucial role. In order to absorb $q$-independent constants, as well as to ensure that there is a sufficient gap between these parameters to ensure decoupling, we have chosen to work with the $\frac{4}{5}$ instead of the $\frac{3}{4}$ geometric scale.)
4. (4)
$\displaystyle{\lambda_{q,n}=\lambda_{q+1}r_{q+1,n}=\lambda_{q}^{\left(\frac{4}{5}\right)^{n+1}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n+1}}}$
\- The minimum frequency present in an intermittent pipe flow
$\mathbb{W}_{q+1,n}$. Equivalently, $\left(\lambda_{q+1}r_{q+1,n}\right)^{-1}$
is the scale to which $\mathbb{W}_{q+1,n}$ is periodized.
Figure 1: Schematic of the frequency parameters appearing in Definitions 2.2
and 2.4.
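The relations between $r_{q+1,n}$ and $\lambda_{q,n}$ recorded above can be checked numerically; the sketch below uses the same illustrative values of $a$ and $b$ as before (not the values fixed in Section 9):

```python
# Illustrative values; a and b are chosen rigorously in Section 9.
a, b = 10.0, 1.1

def lam(q):
    return a ** (b ** q)

def r(q, n):       # intermittency r_{q+1,n} = (lambda_q / lambda_{q+1})^((4/5)^(n+1))
    return (lam(q) / lam(q + 1)) ** ((4 / 5) ** (n + 1))

def lam_qn(q, n):  # minimum frequency of the pipe flow W_{q+1,n}
    return lam(q + 1) * r(q, n)

q = 3
for n in range(4):
    print(f"n={n}: r={r(q, n):.3e}, lambda_qn={lam_qn(q, n):.3e}")
```

The output illustrates that $r_{q+1,n+1}=r_{q+1,n}^{\nicefrac{{4}}{{5}}}$, and that the minimum frequencies $\lambda_{q,n}$ increase with $n$, filling in the gap between $\lambda_{q}$ and $\lambda_{q+1}$ as in Figure 1.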
###### Definition 2.4 (Parameters Introduced in Section 2.4).
1. (1)
For $2\leq n\leq{n_{\rm max}}$,
$\displaystyle\lambda_{q,n,0}=\lambda_{q}^{\left(\frac{4}{5}\right)^{n-1}\cdot\frac{5}{6}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n-1}\cdot\frac{5}{6}}$
is the minimum frequency present in the higher order stress
$\mathring{R}_{q,n}$. Conversely, $\lambda_{q,n+1,0}$ is the maximum frequency
present in $\mathring{R}_{q,n}$. When $n=0$, we set
$\lambda_{q,0,0}=\lambda_{q}$ to be the maximum frequency present in
$\mathring{R}_{q,0}=\mathring{R}_{q}$, and when $n=1$,
$\lambda_{q,1,0}=\lambda_{q,0}$ is the minimum frequency present in
$\mathring{R}_{q,1}$, while $\lambda_{q,2,0}$ is the maximum frequency.
2. (2)
$p$ \- A secondary parameter which takes positive integer values and which
will be used to divide up the frequencies in between $\lambda_{q,n,0}$ and
$\lambda_{q,n+1,0}$, as well as the higher order stresses.
3. (3)
${p_{\rm max}}$ \- A large integer, fixed independently of $q$, which is the
largest allowable value of $p$.
4. (4)
$\displaystyle\lambda_{q,n,p}=\lambda_{q,n,0}^{1-\frac{p}{{p_{\rm
max}}}}\lambda_{q,n+1,0}^{\frac{p}{{p_{\rm max}}}}$ \- The maximum frequency
present in the higher order stress $\mathring{R}_{q,n,p}$ for $1\leq
n\leq{n_{\rm max}}$ and $1\leq p\leq{p_{\rm max}}$. Conversely,
$\lambda_{q,n,p-1}$ is the minimum frequency in $\mathring{R}_{q,n,p}$. When
$n=0$ and $p$ takes any value, we adopt the convention that
$\lambda_{q,0,p}=\lambda_{q}$.
5. (5)
$\displaystyle f_{q,n}=\lambda_{q,n+1,0}^{\frac{1}{{p_{\rm
max}}}}\lambda_{q,n,0}^{-\frac{1}{{p_{\rm max}}}}$ \- The increment between
frequencies $\lambda_{q,n,p-1}$ and $\lambda_{q,n,p}$ for $n\geq 1$. We have
the equalities
$\lambda_{q,n,p}=\lambda_{q,n,0}f_{q,n}^{p},\qquad\lambda_{q,n+1,0}=\lambda_{q,n,0}f_{q,n}^{p_{\rm
max}}\,.$
For ease of notation, when $n=0$ we set $f_{q,n}=1$.
6. (6)
For $n=0$ and $p=1$, $\delta_{q+1,0,1}:=\delta_{q+1}$ is the amplitude of
$\mathring{R}_{q}:=\mathring{R}_{q,0}$. For $n=0$ and $p\geq 2$,
$\delta_{q+1,0,p}=0$, since there are no higher order stresses at $n=0$. For
$n\geq 1$ and any value of $p$, the amplitude of $\mathring{R}_{q,n,p}$ is
given by
$\displaystyle\delta_{q+1,n,p}:=\frac{\delta_{q+1}\lambda_{q}}{\lambda_{q,n,p-1}}\cdot\prod_{n^{\prime}<n}f_{q,n^{\prime}}.$
One should view the product of $f_{q,n^{\prime}}$ terms as a negligible error,
which is justified by calculating
$\displaystyle\prod\limits_{0\leq n^{\prime}\leq{n_{\rm
max}}}f_{q,n^{\prime}}$ $\displaystyle=\left(\frac{\lambda_{q,{n_{\rm
max}}+1,0}}{\lambda_{q,1,0}}\right)^{\frac{1}{{p_{\rm
max}}}}\leq\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{\frac{1}{{p_{\rm
max}}}}$ (2.1)
and assuming that ${p_{\rm max}}$ is large.
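The telescoping identities in item (5) can be verified directly; the following sketch again uses illustrative values of $a$, $b$, and ${p_{\rm max}}$ (the actual values are fixed in Section 9), and restricts to the range $2\leq n\leq{n_{\rm max}}$ where the displayed formula for $\lambda_{q,n,0}$ applies:

```python
a, b = 10.0, 1.1   # illustrative; fixed rigorously in Section 9
p_max = 10

def lam(q):
    return a ** (b ** q)

def lam_qn0(q, n):   # minimum frequency of R_{q,n}, valid for 2 <= n <= n_max
    e = (4 / 5) ** (n - 1) * (5 / 6)
    return lam(q) ** e * lam(q + 1) ** (1 - e)

def f(q, n):         # frequency increment f_{q,n}
    return (lam_qn0(q, n + 1) / lam_qn0(q, n)) ** (1 / p_max)

def lam_qnp(q, n, p):
    return lam_qn0(q, n) ** (1 - p / p_max) * lam_qn0(q, n + 1) ** (p / p_max)

q, n = 3, 2
# Telescoping: lam_{q,n,p} = lam_{q,n,0} * f_{q,n}^p, and p = p_max reaches lam_{q,n+1,0}
for p in (1, p_max // 2, p_max):
    print(p, lam_qnp(q, n, p), lam_qn0(q, n) * f(q, n) ** p)
```

Each printed pair agrees, confirming that the $\lambda_{q,n,p}$ interpolate geometrically between $\lambda_{q,n,0}$ and $\lambda_{q,n+1,0}$ in increments of $f_{q,n}$.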
###### Definition 2.5 (Parameters Introduced in Section 2.5).
1. (1)
$\varepsilon_{\Gamma}$ \- A very small positive number.
2. (2)
$\displaystyle{\Gamma_{q+1}=\left(\lambda_{q+1}\lambda_{q}^{-1}\right)^{\varepsilon_{\Gamma}}}$
\- A parameter which will be used to quantify deviations in amplitude. In
particular, $\Gamma_{q}$ will be used to quantify amplitudes of both velocity
fields and (higher-order) stresses.
### 2.2 Inductive assumptions
For every non-negative integer $q$ we will construct a solution
$(v_{q},p_{q},\mathring{R}_{q})$ to the Euler-Reynolds system
$\displaystyle\partial_{t}v_{q}+\mathrm{div\,}(v_{q}\otimes v_{q})+\nabla
p_{q}$ $\displaystyle=\mathrm{div\,}\mathring{R}_{q}$ (2.2a)
$\displaystyle\mathrm{div\,}v_{q}$ $\displaystyle=0\,.$ (2.2b)
Here $\mathring{R}_{q}$ is assumed to be a trace-free symmetric matrix. The
relative size of the approximate solution $v_{q}$ and the Reynolds stress
error $\mathring{R}_{q}$ will be measured in terms of the frequency parameter
$\lambda_{q}$ and the amplitude parameter $\delta_{q}$, which are defined in
Definition 2.2. We will propagate the following basic inductive estimates on
$(v_{q},\mathring{R}_{q})$ (by $\left\|v_{q}\right\|_{H^{1}}$, we actually mean $\left\|v_{q}\right\|_{C^{0}_{t}H^{1}_{x}}$; similarly, $\|\mathring{R}_{q}\|_{L^{1}}$ stands for $\|\mathring{R}_{q}\|_{C^{0}_{t}L^{1}_{x}}$; unless stated explicitly otherwise, all the norms used in this paper represent analogous uniform-in-time estimates and will be abbreviated as such):
$\displaystyle\left\|v_{q}\right\|_{H^{1}}$
$\displaystyle\leq\delta_{q}^{\frac{1}{2}}\lambda_{q}$ (2.3)
$\displaystyle\|\mathring{R}_{q}\|_{L^{1}}$ $\displaystyle\leq\delta_{q+1}.$
(2.4)
We shall see later that in order to build solutions belonging to
$\dot{H}^{\beta}$ for $\beta$ approaching $\frac{1}{2}$, we must propagate
additional estimates on higher order material and spatial derivatives of both
$v_{q}$ and $\mathring{R}_{q}$ in $L^{2}$ and $L^{1}$, respectively. Roughly
speaking, every spatial derivative on either $v_{q}$ or $\mathring{R}_{q}$
costs a factor of $\lambda_{q}$. Additional material derivatives are more
delicate and will be discussed further in Section 2.5, but for the time being,
one may imagine that each material derivative
$D_{t,q}:=\partial_{t}+v_{q}\cdot\nabla$ on $v_{q}$ or $\mathring{R}_{q}$
costs a factor of $\tau_{q}^{-1}$.
### 2.3 Intermittent pipe flows
Pipe flows, both homogeneous and intermittent, have proven to be one of the
most useful components of many convex integration schemes. Homogeneous pipe
flows were introduced first by Daneri and Székelyhidi Jr. [27]. The
prototypical pipe flow in the $\vec{e}_{2}$ direction is constructed using a
smooth function $\rho:\mathbb{R}^{2}\rightarrow\mathbb{R}$ which is compactly
supported, for example in a ball of radius $1$ centered at the origin, and has
zero mean. Letting $\varrho:\mathbb{T}^{2}\rightarrow\mathbb{R}$ be the
$\mathbb{T}^{2}$-periodized version of $\rho$, the $\mathbb{T}^{3}$-periodic
pipe flow $\mathbb{W}:\mathbb{T}^{3}\rightarrow\mathbb{R}^{3}$ is defined as
$\mathbb{W}(x_{1},x_{2},x_{3})=\varrho(x_{1},x_{3}){e}_{2}\,.$ (2.5)
It is immediate that $\mathbb{W}$ is divergence-free and a stationary solution
to the Euler equations. Pipe flows such as $\mathbb{W}$ have been used in
convex integration schemes which produce solutions in $L^{\infty}$-based
spaces [27, 43, 11]. At the $q^{\textnormal{th}}$ stage of the iteration, the
$\frac{\mathbb{T}^{3}}{\lambda_{q+1}}$-periodized pipe flow
$\mathbb{W}\left(\lambda_{q+1}\cdot\right)$ is used to construct the
perturbation.
By contrast, intermittent pipe flows are _not_ spatially homogeneous.
Intermittency in the context of convex integration schemes was introduced by
the first and last authors in [13] via _intermittent Beltrami flows_ , which
are defined via their Fourier support and may be likened to modified and
renormalized Dirichlet kernels. Intermittent pipe flows were introduced by
Modena and Székelyhidi Jr. in the context of the transport and transport-diffusion equations [52], and have also been utilized for the higher dimensional (at least four-dimensional) Navier-Stokes equations [49, 64]. (In three dimensions, intermittent pipe flows are not sufficiently sparse to handle the error term arising from the Laplacian. This issue was addressed by Colombo and the first and last authors in [7] through the usage of _intermittent jets_ , and similar objects have been used in subsequent papers as well; see work of Brue, Colombo, and De Lellis [5] and of Cheskidov and Luo [17, 18].)
The precise objects we use are defined in (4.10) in Proposition 4.4, but let
us briefly describe some of their important attributes. The intermittency is
quantified by the parameter $r_{q+1,n}\ll 1$. Let
$\rho_{r_{q+1,n}}\colon\mathbb{R}^{2}\rightarrow\mathbb{R}$ be defined by
$\rho_{r_{q+1,n}}(\cdot)=\rho\left(\frac{\cdot}{r_{q+1,n}}\right)$, and let
$\varrho_{r_{q+1,n}}$ be the $\mathbb{T}^{2}$-periodized version of
$\rho_{r_{q+1,n}}$. Thus one can see that $r_{q+1,n}$ describes the _thickness
of the pipes at unit scale._ In order to make the intermittent pipe flows of
unit size in $L^{2}(\mathbb{T}^{3})$, one must multiply by a factor of
$r_{q+1,n}^{-1}$, meaning that the Lebesgue norms of the resulting object
$\mathbb{W}_{r_{q+1,n}}$ scale like
$\left\|\mathbb{W}_{r_{q+1,n}}\right\|_{L^{p}(\mathbb{T}^{3})}\sim
r_{q+1,n}^{\frac{2}{p}-1}.$ (2.6)
Let $\mathbb{W}_{q+1,n}$ be the
$\frac{\mathbb{T}^{3}}{(r_{q+1,n}\lambda_{q+1})}$-periodic version of
$\mathbb{W}_{r_{q+1,n}}$. Notice that this implies that the thickness of the
pipes comprising $\mathbb{W}_{q+1,n}$ is of order $\lambda_{q+1}^{-1}$ for all
$n$, and that the Lebesgue norms of the periodized object $\mathbb{W}_{q+1,n}$
depend only on $r_{q+1,n}$. Per Definition 2.3, the thickness of the pipes
used in the perturbation at stage $q+1$ will be quantified by
$r_{q+1,n}=\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{n+1}}.$
This choice will be justified upon calculation of the heuristic bounds.
Figure 2: A pipe flow $\mathbb{W}_{q+1,n}$ which is periodized to scale
$(\lambda_{q+1}r_{q+1,n})^{-1}=\lambda_{q,n}^{-1}$ is placed in a direction
parallel to the $e_{2}$ axis. Upon taking into account periodic shifts, we
note that there are $r_{q+1,n}^{-2}$ many options to place this pipe. This
degree of freedom will be used later, see e.g. Figure 7.
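The scaling (2.6) can be verified numerically with a stand-in profile (a compactly supported bump, ignoring the zero-mean requirement, which does not affect Lebesgue scaling; this is an illustration only, not the actual $\rho$ from Proposition 4.4):

```python
# A stand-in compactly supported profile (not the actual rho from Proposition 4.4).
def rho(x, y):
    s2 = x * x + y * y
    return max(0.0, 0.25 - s2)   # supported in the disk of radius 1/2

def lp_norm(f, p, N=400):
    # midpoint-rule Riemann sum for ||f||_{L^p([-1/2,1/2]^2)}
    h = 1.0 / N
    total = 0.0
    for i in range(N):
        x = -0.5 + (i + 0.5) * h
        for j in range(N):
            y = -0.5 + (j + 0.5) * h
            total += abs(f(x, y)) ** p
    return (total * h * h) ** (1.0 / p)

r = 0.1   # illustrative intermittency parameter

def rho_r(x, y):
    # thickness-r rescaling, with the factor 1/r restoring the L^2 size of rho
    return rho(x / r, y / r) / r

ratios = {}
for p in (1, 2, 4):
    ratios[p] = lp_norm(rho_r, p) / lp_norm(rho, p)
    print(f"p={p}: ratio={ratios[p]:.4f}, predicted r^(2/p-1)={r ** (2 / p - 1):.4f}")
```

The computed ratios match the prediction $r^{\frac{2}{p}-1}$: the rescaled profile keeps its $L^{2}$ size, shrinks in $L^{1}$, and grows in $L^{p}$ for $p>2$, which is precisely the intermittent behavior exploited in the scheme.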
#### 2.3.1 Lagrangian coordinates, intermittency, and placements
In order to achieve the optimum regularity $\beta$, we will define the pipe
flows which comprise the perturbation at stage $q+1$ in Lagrangian coordinates
corresponding to the velocity field $v_{q}$. Due to the inherent instability
of Lagrangian coordinates over timescales longer than that dictated by the
Lipschitz norm of the velocity field, there will be many sets of coordinates
used in different time intervals which are then patched together using a
partition of unity. This technique has been used frequently in recent convex
integration schemes, beginning with work of Isett [41], the first author, De
Lellis, and Székelyhidi Jr. [10], and Isett, the first author, De Lellis, and
Székelyhidi Jr. [8], but perhaps most notably in the proof of the Onsager
conjecture by Isett [43] and the subsequent strengthening to dissipative
solutions by the first and last authors, De Lellis, and Székelyhidi Jr. [11].
The proof of Onsager’s conjecture employs the gluing technique to prevent pipe
flows defined using different Lagrangian coordinate systems from overlapping.
The intermittent quality of our building blocks, and thus the approximate
solution $v_{q}$, appears to obstruct the successful implementation of the
gluing technique, since gluing requires a sharp control on the global
Lipschitz norm of the velocity field which will be unavailable. Thus, we
cannot use the gluing technique and must control in a different fashion the
possible interactions between two intermittent pipe flows defined using
different Lagrangian coordinate systems.
To control these interactions, we have introduced a _placement technique_ (cf.
Proposition 4.8) which is used to completely prevent all such interactions.
This placement technique is predicated on a simple observation about
intermittent pipe flows, which to our knowledge has not yet been used in any
convex integration schemes to date. When the diameter of the pipe at unit
scale is of size $r_{q+1,n}$, there are $(r_{q+1,n})^{-2}$ disjoint choices
for the support of the pipe. These choices simply correspond to shifting the
intersection of the axis of the pipe in the plane which is perpendicular to
the axis, cf. Proposition 4.3. This degree of freedom is unaffected by
periodization and is depicted in Figure 2 for a
$\frac{\mathbb{T}^{3}}{\lambda_{q+1}r_{q+1,n}}$-periodic intermittent pipe
flow $\mathbb{W}_{q+1,n}$. We will exploit this degree of freedom to choose
placements for each set of pipes which _entirely avoid_ other sets of pipes on
small discretized regions of space-time. The space-time discretization is made
possible through the usage of cutoff functions which will be discussed in more
detail later in Section 2.5. We remark that De Lellis and Kwon [28] have
introduced a placement technique in the context of $C^{\alpha}$, globally
dissipative solutions to the 3D Euler equations which is predicated on
restricting the timescale of the Lagrangian coordinate systems to be
significantly shorter than the Lipschitz timescale. This restriction
significantly limits the regularity of the final solution and is thus not
suited for an intermittent scheme aimed at $H^{\frac{1}{2}-}$ regularity.
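The counting that underlies the placement technique can be illustrated in a toy discrete model (a sketch only; the precise statement is Proposition 4.3). A pipe of thickness $r$ at unit scale occupies, in the plane perpendicular to its axis, a cell of side $r$, and shifting by multiples of $r$ yields $r^{-2}$ pairwise disjoint placements:

```python
from fractions import Fraction

# Toy model: the cross-section of a thickness-r "pipe" is a square cell of
# side r; distinct shifts by multiples of r give pairwise disjoint supports.
r = Fraction(1, 8)
shifts = [(i * r, j * r) for i in range(int(1 / r)) for j in range(int(1 / r))]

def cell(shift):
    sx, sy = shift
    return (sx, sx + r, sy, sy + r)   # half-open square [sx, sx+r) x [sy, sy+r)

def disjoint(c1, c2):
    return c1[1] <= c2[0] or c2[1] <= c1[0] or c1[3] <= c2[2] or c2[3] <= c1[2]

print(len(shifts))   # r^{-2} = 64 available placements
assert all(disjoint(cell(s), cell(t)) for s in shifts for t in shifts if s != t)
```

It is this surplus of $r_{q+1,n}^{-2}$ disjoint options, surviving periodization, that leaves room to dodge the (far fewer) previously placed pipes in a restricted space-time region.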
### 2.4 Higher order stresses
#### 2.4.1 Regularity beyond $\nicefrac{{1}}{{3}}$
The resolution of the flexible side of the Onsager conjecture in [43] and [11]
mentioned previously shows that given some prescribed regularity index
$\beta\in(0,\frac{1}{3})$, one can construct dissipative weak solutions $u$ in
$C^{\beta}$. Conversely, following on partial work by Eyink [36], Constantin,
E, and Titi [22] have proven that conservation of energy in the Euler
equations requires only that $u\in
L^{3}_{t}\left(B^{\alpha}_{3,\infty}\right)$ for $\alpha>\nicefrac{{1}}{{3}}$.
This leaves open the possibility of building dissipative weak solutions with
more than $\frac{1}{3}$-many derivatives in $L^{p}\left(\mathbb{T}^{3}\right)$
(uniformly in time in our case) for $p<3$.
Let us present a heuristic estimate which indicates a regularity limit of
$H^{\frac{1}{2}}$ for solutions produced via convex integration schemes. For
this purpose, let us focus on one of the principal parts of the stress in an
intermittent convex integration scheme (for the familiar reader, this is part
of the oscillation error). The perturbations include a coefficient function $a$ which depends on $\mathring{R}_{q}$, so that derivatives of $a$ cost a factor of $\lambda_{q}$, and which has amplitude $\delta_{q+1}^{\nicefrac{{1}}{{2}}}$ (the square root of the amplitude of the stress). These coefficient functions
are multiplied by intermittent pipe flows $\mathbb{W}_{q+1,0}$ for which
derivatives cost $\lambda_{q+1}$ and which have unit size in $L^{2}$, but are
only periodized to scale $\left(\lambda_{q+1}r_{q+1,0}\right)^{-1}$. When the
divergence lands on the square of the coefficient function $a^{2}$ in the
nonlinear term, the resulting error term satisfies the estimate
$\displaystyle\left\|\mathrm{div\,}^{-1}\left(\nabla(a^{2})\mathbb{P}_{\neq
0}(\mathbb{W}_{q+1,0}\otimes\mathbb{W}_{q+1,0})\right)\right\|_{L^{1}}\leq\frac{\delta_{q+1}\lambda_{q}}{\lambda_{q+1}r_{q+1,0}}.$
(2.7)
The numerator is the size of $\nabla(a^{2})$ in $L^{1}$, while the denominator
is the gain induced by inverting the divergence at $\lambda_{q+1}r_{q+1,0}$,
which is the minimum frequency of $\mathbb{P}_{\neq
0}(\mathbb{W}_{q+1,0}\otimes\mathbb{W}_{q+1,0})=\mathbb{W}_{q+1,0}\otimes\mathbb{W}_{q+1,0}-\fint_{\mathbb{T}^{3}}\mathbb{W}_{q+1,0}\otimes\mathbb{W}_{q+1,0}$.
Note that we have used implicitly that $\mathbb{W}_{q+1,0}$ has unit $L^{2}$
norm, and that by periodicity $\mathbb{P}_{\neq
0}(\mathbb{W}_{q+1,0}\otimes\mathbb{W}_{q+1,0})$ decouples from
$\nabla(a^{2})$. This error would be minimized when $r_{q+1,0}=1$, in which
case
$\displaystyle\frac{\delta_{q+1}\lambda_{q}}{\lambda_{q+1}}<\delta_{q+2}$
$\displaystyle\iff\lambda_{q+1}^{-2\beta+\frac{1}{b}}<\lambda_{q+1}^{-2\beta
b+1}$ $\displaystyle\iff 2\beta b^{2}-2\beta b<b-1$ $\displaystyle\iff 2\beta
b(b-1)<b-1$ $\displaystyle\iff\beta<\frac{1}{2b}.$ (2.8)
Any intermittency parameter $r_{q+1,0}\ll 1$ would weaken this estimate since
the gain induced from inverting the divergence will only be
$\lambda_{q+1}r_{q+1,0}\ll\lambda_{q+1}$. On the other hand, we will see that
a small choice of $r_{q+1,0}$ _strengthens all other error terms_ , and
because of this, in our construction we will choose $r_{q+1,0}$ as in
Definition 2.3, item (3). One may refer to the blog post of Tao [64] for a
slightly different argument which reaches the same apparent regularity limit.
This apparent regularity limit is independent of dimension, and we believe
that the method in this paper cannot be modified to yield weak solutions with
regularity $L^{\infty}_{t}W^{s,p}_{x}$ with $s>1/2$, for any $p\in[1,2]$.
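Since the equivalences in (2.8) are exact comparisons of exponents, the threshold $\beta<\frac{1}{2b}$ can be sanity-checked numerically for any $q$; the values of $a$ and $b$ below are illustrative only:

```python
a, b = 10.0, 1.1   # illustrative; b slightly larger than 1 as in Definition 2.2

def lam(q):
    return a ** (b ** q)

def delta(q, beta):
    return lam(q) ** (-2 * beta)

q = 5
for beta in (0.4 / b, 0.6 / b):   # just below and just above the threshold 1/(2b)
    lhs = delta(q + 1, beta) * lam(q) / lam(q + 1)   # oscillation error with r_{q+1,0} = 1
    rhs = delta(q + 2, beta)                          # target size for the next stress
    print(f"beta={beta:.3f}: error < target? {lhs < rhs}")
```

For $\beta$ below $\frac{1}{2b}$ the error fits under the target amplitude $\delta_{q+2}$, and for $\beta$ above the threshold it does not, matching the chain of equivalences in (2.8).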
The higher order stresses mentioned in Section 1.2 will compensate for the
losses incurred in this nonlinear error term when $r_{q+1,0}\ll 1$. As we
shall describe in the next section, we use the phrase “higher order stresses”
to describe errors which are higher in frequency and smaller in amplitude than
$\mathring{R}_{q}$, but neither sufficiently small nor at sufficiently high frequency to belong to $\mathring{R}_{q+1}$. Similarly, “higher order
perturbations” are used to correct the higher order stresses and thus increase
the extent to which an approximate solution solves the Euler equations.
#### 2.4.2 Specifics of the higher order stresses
In convex integration schemes which measure regularity in $L^{\infty}$ (i.e.
using Hölder spaces $C^{\alpha}$), pipe flows interact through the
nonlinearity to produce low ($\approx\lambda_{q}$) and high
($\approx\lambda_{q+1}$) frequencies. We denote by $w_{q+1,0}$ the
perturbation designed to correct $\mathring{R}_{q}$. In the absence of
intermittency, the low frequencies from the self-interaction of $w_{q+1,0}$
cancel the Reynolds stress error $\mathring{R}_{q}$, and the high frequencies
are absorbed by the pressure up to an error small enough to be placed in
$\mathring{R}_{q+1}$. In an intermittent scheme, the self-interaction of the
intermittent pipe flows comprising $w_{q+1,0}$ produces low, intermediate, and
high frequencies. The low and high frequencies play a similar role as before.
However, the intermediate frequencies cannot be written as a gradient, nor are they small enough to be absorbed in $\mathring{R}_{q+1}$. This issue has limited
the available regularity on the final solution in many previous intermittent
convex integration schemes. In order to reach the threshold $H^{\frac{1}{2}}$,
we address this issue using higher order Reynolds stress errors
$\mathring{R}_{q,n}$ for $n=1,2,\dots,{{n_{\rm max}}}$, cf. Figure 3.
Figure 3: Adding the increment $w_{q+1,0}$ corrects the stress
$\mathring{R}_{q,0}=\mathring{R}_{q}$, but produces error terms which live at
frequencies that are intermediate between $\lambda_{q}$ and $\lambda_{q+1}$,
due to the intermittency of $w_{q+1,0}$. These new errors are sorted into
higher order stresses $\mathring{R}_{q,n}$ for $1\leq n\leq{n_{\rm max}}$, as
depicted above. The heights of the boxes correspond to the amplitudes of the
errors that will fall into them, while the frequency support of each box
increases from $\lambda_{q}$ for $\mathring{R}_{q,0}=\mathring{R}_{q}$, to
$\lambda_{q+1}$ for $\mathring{R}_{q+1}$.
After the addition of $w_{q+1,0}$ to correct $\mathring{R}_{q}$, which is
labeled in Figure 4 as $\mathring{R}_{q,0}$, low frequency error terms are
produced, which we divide into higher order stresses. To correct the error
term of this type at the _lowest_ frequency, which is labeled
$\mathring{R}_{q,1}$ in Figure 4, we add a sub-perturbation $w_{q+1,1}$. The
subsequent bins are lighter in color to emphasize that they are not yet full;
that is, there are more error terms which have yet to be constructed but will
be sorted into such bins. The emptying of the bins then proceeds inductively
on $n$, as we add higher order perturbations $w_{q+1,n}$, which are designed
to correct $\mathring{R}_{q,n}$. For $1\leq n\leq{n_{\rm max}}$, the frequency
support of $\mathring{R}_{q,n}$ is (in reality, the higher order stresses are not compactly supported in frequency; however, they will satisfy derivative estimates to very high order which are characteristic of functions with compact frequency support)
$\left\\{k\in{\mathbb{Z}}^{3}\colon\lambda_{q,n,0}\leq|k|<\lambda_{q,n+1,0}\right\\}.$
(2.9)
This division will be justified upon calculation of the heuristic bounds in
Section 2.7.
Figure 4: Adding $w_{q+1,n}$ to correct $\mathring{R}_{q,n}$ produces error
terms which are distributed among the Reynolds stresses
$\mathring{R}_{q,n^{\prime}}$ for $n+1\leq n^{\prime}\leq{n_{\rm max}}$.
Let us now explain the motivation for the division of $\mathring{R}_{q,n}$
into the further subcomponents $\mathring{R}_{q,n,p}$. Suppose that we add a
perturbation $w_{q+1,n}$ to correct $\mathring{R}_{q,n}$ for $n\geq 1$. The
amplitude of $w_{q+1,n}$ would depend on the amplitude of
$\mathring{R}_{q,n}$, which in turn depends on the gain induced by inverting
the divergence to produce $\mathring{R}_{q,n}$, which depends then on the
minimum frequency $\lambda_{q,n,0}$. However, derivatives on the low frequency
coefficient function used to define $w_{q+1,n}$ would depend on the maximum
frequency of $\mathring{R}_{q,n}$, which is $\lambda_{q,n+1,0}$. The (sharp-
eyed) reader may at this point object that the first derivative on the low-
frequency coefficient function $\nabla(a(\mathring{R}_{q,n}))$ should be
cheaper, since $\mathring{R}_{q,n}$ is obtained from inverting the divergence,
and taking the gradient of the cutoff function written above should thus
morally involve bounding a zero-order operator. However, constructing the low-
frequency coefficient function presents technical difficulties which prevent
us from taking advantage of this intuition. In fact, the failure of this
intuition is the sole reason for the introduction of the parameter $p$, as one
may see from the heuristic estimates later. In any case, increasing the
regularity $\beta$ of the final solution requires minimizing this gap between
the gain in amplitude provided by inverting the divergence and the cost of a
derivative, and so we subdivide $\mathring{R}_{q,n}$ into further components
$\mathring{R}_{q,n,p}$ for $1\leq p\leq{p_{\rm max}}$. (There are certainly a multitude of ways to manage the bookkeeping for amplitudes and frequencies. Using both $n$ and $p$ is convenient because then $n$ is the only index which quantifies the rate of periodization.) Both ${{n_{\rm max}}}$ and ${p_{\rm max}}$ are fixed independently of $q$. Each component $\mathring{R}_{q,n,p}$
then will have frequency support in the set
$\left\\{k\in{\mathbb{Z}}^{3}\colon\lambda_{q,n,p-1}\leq|k|<\lambda_{q,n,p}\right\\}=\left\\{k\in{\mathbb{Z}}^{3}\colon\lambda_{q,n,0}f_{q,n}^{p-1}\leq|k|<\lambda_{q,n,0}f_{q,n}^{p}\right\\}.$
(2.10)
Notice that by the definition of $f_{q,n}$ in Definition 2.4, (2.10) defines a
partition of the frequencies in between $\lambda_{q,n,0}$ and
$\lambda_{q,n+1,0}$ for $1\leq p\leq{p_{\rm max}}$. Figure 5 depicts this
division, and we shall describe in the heuristic estimates how each
subcomponent $\mathring{R}_{q,n,p}$ is corrected by $w_{q+1,n,p}$, with all
resulting errors absorbed into either $\mathring{R}_{q+1}$ or
$\mathring{R}_{q,n^{\prime}}$ for $n^{\prime}>n$.
Figure 5: The higher order stress $\mathring{R}_{q,n}$ is decomposed into
components $\mathring{R}_{q,n,p}$, which increase in frequency and decrease in
amplitude as $p$ increases. We use the base of the red boxes to indicate
support in frequency, where frequency is increasing from left to right, and
the height to indicate amplitudes. Each subcomponent $\mathring{R}_{q,n,p}$ is
corrected by its own corresponding sub-perturbation $w_{q+1,n,p}$, which has a
commensurate frequency and amplitude.
Thus, the net effect of the higher order stresses is that one may take errors
for which the inverse divergence provides a weak estimate due to the presence
of relatively low frequencies and push them to higher frequencies for which
the inverse divergence estimate is stronger. We will repeat this process until
all errors are moved (almost) all the way to frequency $\lambda_{q+1}$, at
which point they are absorbed into $\mathring{R}_{q+1}$. Heuristically, this
means that in constructing the perturbation $w_{q+1}$ at stage $q$, we have
_eliminated_ all the higher order error terms which arise from self-
interactions of intermittent pipe flows, thus producing a solution $v_{q+1}$
to the Euler-Reynolds system at level $q+1$ which is _as close as possible_ to
a solution of the Euler equations. We point out that one side effect of the
higher order perturbations is that the total perturbation $w_{q+1}$ has
spatial support which is _not_ particularly sparse, since as $n$ increases the
perturbations $w_{q+1,n}$ become successively less intermittent and thus more
homogeneous. At the same time, the frequency support of our solution is also
not too sparse, since $b$ is close to $1$ and
$r_{q+1,0}=\left(\lambda_{q}\lambda_{q+1}^{-1}\right)^{\frac{4}{5}}$, so that
many of the frequencies between $\lambda_{q}$ and $\lambda_{q+1}$ are active.
### 2.5 Cut-off functions
#### 2.5.1 Velocity and stress cut-offs
The concept of a turnover time, which is proportional to the inverse of the
gradient of the mean flow $v_{q}$, is crucial to the previously mentioned
convex integration schemes which utilized Lagrangian coordinates.
Since the perturbation is expected to be roughly flowed by the mean flow
$v_{q}$, the turnover time determines a timescale on which the perturbation is
expected to undergo significant deformations. An important property of pipe
flows, first noted by Daneri and Székelyhidi Jr. in [27] and utilized
crucially by Isett [43] towards the proof of Onsager’s conjecture, is that the
length of time for which pipe flows written in Lagrangian coordinates remain
approximately stationary solutions to Euler depends only on the Lipschitz norm
of the transport velocity $v_{q}$ and not the Lipschitz norms of the original
(undeformed) pipe flow. However, the timescale under which pipe flows
transported by an intermittent velocity field remain coherent is space-time
dependent, in contrast to previous convex integration schemes in which the
timescale was uniform across $\mathbb{R}\times\mathbb{T}^{3}$. As such, we
will need to introduce space-time cut-offs $\psi_{i,q}$ in order to determine
the local turnover time. In particular, the cut-off $\psi_{i,q}$ will be
defined such that
$\displaystyle\left\|\nabla v_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}\lesssim\delta_{q}^{\nicefrac{1}{2}}\lambda_{q}\Gamma_{q+1}^{i}:=\tau_{q}^{-1}\Gamma_{q+1}^{i}\,.$
(2.11)
With such cut-offs defined, we then define in addition a family of temporal
cut-offs $\chi_{i,k,q}$ which will be used to restrict the timespan of the
intermittent pipe flows in terms of the local turnover time. Each cut-off function
$\chi_{i,k,q}$ will have temporal support contained in an interval of length
$\tau_{q}\Gamma_{q+1}^{-i}.$ (2.12)
It should be noted that we will design the cut-offs so that we can deduce much
more on their supports than (2.11). Since the material derivative
$D_{t,q}:=\partial_{t}+v_{q}\cdot\nabla$ will play an important role, we will
require estimates involving material derivatives $D_{t,q}^{N}$ of very high
order. (Footnote: the loss of a material derivative in the transport error
means that to produce solutions with regularity approaching
$\dot{H}^{\frac{1}{2}}$, we have to propagate material derivative estimates of
arbitrarily high order on the stress.) We expect the cost of a material
derivative to be related to the
turnover time, which itself is local in nature. As such, high order material
derivative estimates will be done on the support of the cut-off functions and
will be of the form
$\left\|\psi_{i,q}D_{t,q}^{N}\mathring{R}_{q,n,p}\right\|_{L^{r}}\,.$
In addition to the family of cut-offs $\psi_{i,q}$ and $\chi_{i,k,q}$, we will
also require stress cut-offs $\omega_{i,j,q,n,p}$ which determine the local
size of the Reynolds stress errors $\mathring{R}_{q,n,p}$; in particular
$\omega_{i,j,q,n,p}$ will be defined such that
$\displaystyle\left\|\nabla^{M}\mathring{R}_{q,n,p}\right\|_{L^{\infty}(\mathrm{supp\,}\omega_{i,j,q,n,p})}\leq\delta_{q+1,n,p}\Gamma_{q+1}^{2j}\lambda_{q,n,p}^{M}\,.$
(2.13)
Previous intermittent convex integration schemes have managed to successfully
cancel intermittent stress terms with much simpler stress cutoff functions
than the ones we use. However, mitigating the loss of spatial derivative in
the oscillation error means that we have to propagate sharp spatial derivative
estimates of arbitrarily high order on the stress in order to produce
solutions with regularity approaching $\dot{H}^{\frac{1}{2}}$. Due to this
requirement, we then have to estimate the _second_ derivative (and higher) of
the stress cutoff function
$\left\|\nabla^{2}\left(\omega^{2}\left(\mathring{R}_{q,n,p}\right)\right)\right\|_{L^{1}}\,,$
which in turn necessitates bounding the local $L^{2}$ norm of
$\nabla\mathring{R}_{q,n,p}$ due to the term
$\left\|\left(\nabla^{2}(\omega^{2})\right)\left(\mathring{R}_{q,n,p}\right)\;\left|\nabla\mathring{R}_{q,n,p}\right|^{2}\right\|_{L^{1}}\,.$
Given inductive estimates about the derivatives of $\mathring{R}_{q}$ _only
in_ $L^{1}$ which have not been upgraded to $L^{p}$ for $p>1$, this term will
obey a fatally weak estimate, which is why we must estimate
$\mathring{R}_{q,n,p}$ in $L^{\infty}$ as in (2.13).
#### 2.5.2 Checkerboard cut-offs
As mentioned in the discussion of intermittent pipe flows, we must prevent
pipes originating from different Lagrangian coordinate systems from
intersecting. The first step is to reduce the complexity of this problem by
restricting the size of the spatial domain on which intersections must be
prevented. Towards this end, consider the maximum frequency of the original
stress $\mathring{R}_{q}=\mathring{R}_{q,0}$, or any of the higher order
stresses $\mathring{R}_{q,n}$ for $n\geq 1$. We may write these frequencies as
$\lambda_{q+1}r_{1}$ for $\lambda_{q}\lambda_{q+1}^{-1}\leq r_{1}<1$. We then
decompose $\mathring{R}_{q,n}$ using a checkerboard partition of unity
comprised of bump functions which follow the flow of $v_{q}$ and have support
of diameter $\left(\lambda_{q+1}r_{1}\right)^{-1}$. These two properties
ensure that we have _preserved_ the derivative bounds on $\mathring{R}_{q,n}$.
Thus, we fix the set $\Omega$ to be the support of an individual checkerboard
cutoff function in this partition of unity at a fixed time, cf. (4.28).
Suppose furthermore that $\Omega$ is inhabited by disjoint sets of deformed
intermittent pipe flows which are periodized to spatial scales no finer than
$\left(\lambda_{q+1}r_{2}\right)^{-1}$ for $0<r_{1}<r_{2}<1$. In practice,
$r_{2}$ will be $r_{q+1,n}$, where $r_{q+1,n}$ is the amount of intermittency
used in the pipes which comprise the perturbation $w_{q+1,n}$ which is used to
correct $\mathring{R}_{q,n}$. The pipes which already inhabit $\Omega$ may
first be from previous generations of perturbations $w_{q+1,n^{\prime}}$ for
$n^{\prime}<n$, in which case they are periodized to spatial scales much
broader than $\left(\lambda_{q+1}r_{2}\right)^{-1}$, or from an overlapping
checkerboard cutoff function used to decompose $\mathring{R}_{q,n}$ on which a
placement of pipes periodized to spatial scale
$\left(\lambda_{q+1}r_{2}\right)^{-1}$ has already been chosen. In either
case, these pipes will have been deformed by the velocity field $v_{q}$ on the
time-scale given by the inverse of the local Lipschitz norm. We represent the
support of these deformed pipe flows in terms of axes
$\{A_{i}\}_{i\in\mathcal{I}}$ around which the pipes
$\{P_{i}\}_{i\in\mathcal{I}}$ are concentrated to thickness
$\lambda_{q+1}^{-1}$ (recall from Section 2.3 that all intermittent pipe flows
used in our scheme have this thickness).
We will now explain that, _on the support of_ $\Omega$ and _under appropriate
restrictions on $r_{1}$ and $r_{2}$_, one may choose a new set of (straight,
i.e. not deformed) intermittent pipe flows $\mathbb{W}_{r_{2},\lambda_{q+1}}$
periodized to scale $\left(\lambda_{q+1}r_{2}\right)^{-1}$ which are disjoint
from each deformed pipe $P_{i}$. Heuristically, this task
becomes easier when $r_{2}$ is smaller, since this means both that we have
more choices of placement for the new set, and there are fewer pipes $P_{i}$
inhabiting $\Omega$. Conversely, this task becomes more difficult when $r_{1}$
is smaller, since then $\Omega$ is larger and will contain more pipes $P_{i}$.
We assume throughout that the deformations of the $P_{i}$’s are mild enough to
preserve the expected length, curvature, and spacing bounds between
neighboring pipes that arise from writing pipes in Lagrangian coordinates and
flowing for a length of time which is strictly less than the inverse of the
Lipschitz norm of the velocity field.
First, we can estimate the cardinality of the set $\mathcal{I}$ (which indexes
the axes $A_{i}$ and pipes $P_{i}$) from above by $r_{2}^{2}r_{1}^{-2}$. To
understand this bound, first note that if we had _straight_ pipes $P_{i}$
periodized to scale $\left(\lambda_{q+1}r_{2}\right)^{-1}$ inhabiting a _cube_
of side length $\left(\lambda_{q+1}r_{1}\right)^{-1}$, this bound would hold.
Using the fact that our deformed pipes obey similar length, curvature, and
spacing bounds as straight pipes and that our set $\Omega$ can be considered
as a subset of a cube with side length proportional to
$\left(\lambda_{q+1}r_{1}\right)^{-1}$, the same bound will hold up to
dimensional constants. Secondly, by the intermittency of the desired set of
new pipes, we have $r_{2}^{-2}$ choices for the placement of the new set, as
indicated in Figure 2.
To finish the argument, we must estimate how many of these $r_{2}^{-2}$
choices would lead to non-empty intersections between the new pipes and any
$P_{i}$. To calculate this bound, we will imagine the placement of the new set
of straight pipes as occurring on a two-dimensional plane which is
perpendicular to the axes of the pipes. After projecting each $P_{i}$ onto
this two-dimensional plane, our task is to choose the intersection points of
the new pipes with the plane so that the new pipes do not intersect the
shadows of the $P_{i}$’s.
Figure 6: In the figure on the left we display $\mathbb{T}^{3}$, in which we
have: in green, a set of pipe flows (old generation, very sparse) that were
deformed by $v_{q}$; in blue, the support of a cutoff function
$\zeta_{q,i,k,n,\vec{l}}$, whose diameter is
$\approx(\lambda_{q+1}r_{1})^{-1}$. Due to the sparseness, very few (if any!)
of these green pipes intersect the blue region. The figure on the right
further zooms into the blue region, to emphasize its contents. On the support
of $\zeta_{q,i,k,n,\vec{l}}$ we have displayed two sets of deformed pipe
flows, in pink and orange. These pipe flows were also deformed by $v_{q}$,
from a nearby time at which they were straight and periodic at scale
$(\lambda_{q+1}r_{2})^{-1}$. At the current time, at which the above figure is
considered, these pipe flows aren’t quite periodic anymore, but they are
close. The question now is: can we place a straight pipe flow, periodic at
scale $(\lambda_{q+1}r_{2})^{-1}$, whose axis is orthogonal to the front face
of the blue box (pictured in black), and which does not intersect any of the
existing pipes in this region? To see that this is possible, in Figure 7 we
estimate the area of shadows on this face of the cube.
Figure 7: As mentioned
in the caption of Figure 6, we consider the image on the right and take the
projection of all pipes present in the blue box (green, pink, orange), onto
the front face of the cube (parallel to the $e_{3}-e_{1}$ plane). Because
these existing pipes were bent by $v_{q}$, the shadow does not consist of
straight lines, and in fact the projections can overlap. By estimating the
area of this projection, we see that if $r_{2}^{4}\ll r_{1}^{3}$ then there is
enough room left to insert a new pipe flow with orientation axis $e_{2}$
(represented by the black disks in the above figure), which will not intersect
any of the projections of the existing pipes, and thus not intersect the
existing pipes themselves.
Given one of the deformed pipes $P_{i}$, since its thickness is
$\lambda_{q+1}^{-1}$ and its length inside $\Omega$ is proportional to the
diameter of $\Omega$, specifically $\left(\lambda_{q+1}r_{1}\right)^{-1}$, we
may cover the shadow of $P_{i}$ on the plane with $\approx r_{1}^{-1}$ many
balls of diameter $\lambda_{q+1}^{-1}$. Covering all the $P_{i}$’s thus
requires $\approx r_{2}^{2}r_{1}^{-2}\cdot r_{1}^{-1}$ balls of diameter
$\lambda_{q+1}^{-1}$. Now, imagine the intersection of the new set of pipes
with the plane. Each choice of placement defines this intersection as
essentially a set of balls of diameter $\approx\lambda_{q+1}^{-1}$ equally
spaced at distance $\left(\lambda_{q+1}r_{2}\right)^{-1}$. The intermittency
ensures that there are $r_{2}^{-2}$ disjoint choices of placement, i.e.
$r_{2}^{-2}$ disjoint sets of balls which represent the intersection of a
particularly placed new set of pipes with the plane. As long as
$r_{2}^{2}r_{1}^{-2}\cdot r_{1}^{-1}\ll r_{2}^{-2}\quad\iff r_{2}^{4}\ll
r_{1}^{3}$
there must exist at least one choice of placement which does not produce _any_
intersections between $\mathbb{W}_{r_{2},\lambda_{q+1}}$ and the $P_{i}$’s.
Notice that if $r_{1}$ is too small or if $r_{2}$ is too large, this
inequality will not be satisfied, thus validating our previous heuristics
about $r_{1}$ and $r_{2}$.
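The counting argument above can be sanity-checked numerically. The following sketch (the sample values of $r_1$ and $r_2$ are hypothetical, chosen only for illustration) compares the number of blocked placements against the number of available ones:

```python
import math

# Heuristic check of the pipe-placement counting argument above.
# Existing pipes: cardinality ~ r2^2 * r1^{-2}; each shadow is covered by
# ~ r1^{-1} balls of diameter lambda_{q+1}^{-1}; available disjoint
# placements: r2^{-2}. A disjoint placement exists when the ratio
# blocked/available = r2^4 / r1^3 is << 1.
def placement_margin(r1, r2):
    blocked = (r2**2 * r1**-2) * r1**-1   # balls covering all shadows
    available = r2**-2                     # disjoint placement choices
    return blocked / available             # equals r2^4 / r1^3

# Sample values (hypothetical): r2^4 << r1^3 holds, so the margin is << 1.
assert math.isclose(placement_margin(1e-2, 1e-2), 1e-2, rel_tol=1e-9)
# If r2 is too large relative to r1, every placement can be blocked:
assert placement_margin(1e-4, 1e-2) > 1
```

The first assertion reflects the borderline bookkeeping, while the second illustrates the failure mode when the inequality $r_2^4\ll r_1^3$ is violated.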
To obey the relative intermittency inequality between $r_{1}$ and $r_{2}$
derived above for placements of new intermittent pipes on sets of a certain
diameter, we will utilize cutoff functions
$\zeta_{q,i,k,n,\vec{l}}$
which are defined using a variety of parameters. The index $q$ describes the
stage of the convex integration scheme, while $i$ and $k$ refer to the
velocity and temporal cutoffs defined above. The parameter $n$ corresponds to
a higher order stress $\mathring{R}_{q,n}$ and refers to its minimum frequency
$\lambda_{q,n,0}$, quantifying the value of $(\lambda_{q+1}r_{1})^{-1}$ and
the diameter of the support as described earlier. The parameter
$\vec{l}=(l,w,h)$ depends on $q$ and $n$ and provides an enumeration of the
(three-dimensional) checkerboard covering $\mathbb{T}^{3}$ at scale
$\left(\lambda_{q,n,0}\right)^{-1}$. On the support of one of these
checkerboard cutoff functions, we can inductively place pipes periodized to
scale $\left(\lambda_{q+1}r_{2}\right)^{-1}=\lambda_{q,n}^{-1}$ which are
disjoint. The checkerboard cutoff functions and the pipes themselves all
follow the same velocity field, and so ensuring the disjointness at a single
time slice is sufficient.
#### 2.5.3 Cumulative cut-off function
Finally, the variety of cut-offs described above will be combined into the
family of cut-offs
$\eta_{i,j,k,q,n,p,\vec{l}}:=\eta_{i,j,k,q,n,p}:=\chi_{i,k,q}\psi_{i,q}\omega_{i,j,q,n,p}\zeta_{q,i,k,n,\vec{l}},$
which have timespans of $\tau_{q}\Gamma_{q+1}^{-i}$ and $L^{2}$ norms
$\left\|\eta_{i,j,k,q,n,p,\vec{l}}\right\|_{L^{2}}\lesssim\Gamma^{-\frac{i}{2}}_{q+1}\cdot\Gamma^{-\frac{j}{2}}_{q+1}\,.$
(2.14)
We will also require a cut-off $\eta_{i\pm,j\pm,k\pm,q,n,p,\vec{l}}$ which is
defined to be $1$ on the support of $\eta_{i,j,k,q,n,p,\vec{l}}$ and satisfies
the estimate
$\left\|\eta_{i\pm,j\pm,k\pm,q,n,p,\vec{l}}\right\|_{L^{2}}\lesssim\Gamma^{-\frac{i}{2}}_{q+1}\cdot\Gamma^{-\frac{j}{2}}_{q+1}.$
(2.15)
We remark that (2.14) and (2.15) are only heuristics (see Lemma 6.41 for the
precise estimate). Designing the cut-offs turned out to be, for the authors,
perhaps the most significant technical challenge of the paper. Their
definition will be inductive, and estimates involving them will require
several layers of induction.
### 2.6 The perturbation
The intermittent pipe flows of Section 2.3, the higher order stresses of
Section 2.4, and the cut-off functions of Section 2.5 provide the key
ingredients in the construction of the perturbation
${w_{q+1}:=\sum_{n=0}^{n_{\rm max}}\sum_{p=1}^{p_{\rm
max}}w_{q+1,n,p}}:=\sum_{n=0}^{n_{\rm max}}w_{q+1,n}.$
In the above double sum, we will adopt the convention that $w_{q+1,0,p}=0$
unless $p=1$ to streamline notation. Let us emphasize that $w_{q+1}$ is
constructed _inductively_ on $n$ for the following reason. Each perturbation
$w_{q+1,n}=\sum_{p=1}^{p_{\rm max}}w_{q+1,n,p}$ will contribute error terms to
all higher order stresses $\mathring{R}_{q,{\widetilde{n}},p}$ for
${\widetilde{n}}>n$ and $1\leq p\leq{p_{\rm max}}$, and so
$\mathring{R}_{q,{\widetilde{n}}}=\sum_{p=1}^{p_{\rm
max}}\mathring{R}_{q,{\widetilde{n}},p}$ is not a well-defined object until
each $w_{q+1,n^{\prime}}$ has been constructed for all $n^{\prime}<n$. For the
purposes of the following heuristics, we will abbreviate the cutoff functions
by $a_{n,p}$, and ignore summation over many of the indexes which parametrize
the cutoff functions, as they are not necessary to understand the heuristic
estimates. We will freely use the heuristic that the cutoff functions allow us
to use the $L^{\infty}_{t}H^{1}_{x}$ norm of $v_{q}$ to control terms (usually
related to the turnover time) which previously required global Lipschitz
bounds on $v_{q}$.
Let $\Phi_{q,k}:\mathbb{R}\times\mathbb{T}^{3}\rightarrow\mathbb{T}^{3}$ be
the solution to the transport equation
$\partial_{t}\Phi_{q,k}+v_{q}\cdot\nabla\Phi_{q,k}=0$
with initial data given to be the identity at time $t_{k}=k\tau_{q}$. We
mention that this definition is _purely_ heuristic, since as mentioned
previously, the Lagrangian coordinate systems will have to be indexed by
another parameter which encodes the fact that $\nabla v_{q}$ is spatially
inhomogeneous. (Footnote: the actual transport maps used in the proof are
defined in Definition 6.26.) For the time being let us ignore this issue. Each map
$\Phi_{q,k}$ has an effective timespan
$\tau_{q}=(\delta_{q}^{\frac{1}{2}}\lambda_{q})^{-1}$, at which point one
resets the coordinates and defines a new transport map $\Phi_{q,k+1}$ starting
from the identity. Let $\mathbb{W}_{q+1,n}$ denote the pipe flow with
intermittency $r_{q+1,n}$ periodized to scale
$\left(\lambda_{q+1}r_{q+1,n}\right)^{-1}$. The perturbation $w_{q+1,n,p}$ is
then defined heuristically by
$\displaystyle w_{q+1,n,p}(x,t)$
$\displaystyle=\sum_{k}a_{n,p}\left(\mathring{R}_{q,n,p}(x,t)\right)\left(\nabla\Phi_{q,k}(x,t)\right)^{-1}\mathbb{W}_{q+1,n}(\Phi_{q,k}(x,t)).$
We have adopted the convention that
$\mathring{R}_{q}=\mathring{R}_{q,0}=\mathring{R}_{q,0,1}$ and
$\mathring{R}_{q,0,p}=0$ if $p\geq 2$. Composing with $\Phi_{q,k}$ adapts the
pipe flows to the Lagrangian coordinate system associated to $v_{q}$ so that
$(\nabla\Phi_{q,k})^{-1}\mathbb{W}_{q+1,n}(\Phi_{q,k})$ is Lie-advected and
remains divergence-free to leading order. The perturbation $w_{q+1,n,p}$ has
the following properties:
1. (1)
The thickness (at unit scale) of the pipes on which $w_{q+1,n,p}$ is supported
depends only on $q$ and $n$ and is quantified by
$r_{q+1,n}=\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{n+1}}.$
(2.16)
Thus, the perturbations become less intermittent as $n$ increases, since the
thickness of the pipes (periodized at unit scale) becomes larger as $n$
increases. Notice that the maximum frequency of $\mathring{R}_{q,n,p}$ is
$\lambda_{q,n,p}$ for $n\geq 1$ per (2.10), and $\lambda_{q}$ for $n=0$, while
the minimum frequency of the intermittent pipe flow $\mathbb{W}_{q+1,n}$ used
to construct $w_{q+1,n,p}$ is $\lambda_{q,n}$. Referring back to Definition
2.3 and Definition 2.4, we have that for $1\leq n\leq{n_{\rm max}}$ and $1\leq
p\leq{p_{\rm max}}$,
$\lambda_{q,n,p}=\lambda_{q,n,0}^{1-\frac{p}{{p_{\rm
max}}}}\lambda_{q,n+1,0}^{\frac{p}{{p_{\rm
max}}}}\leq\lambda_{q,n+1,0}=\lambda_{q}^{\left(\frac{4}{5}\right)^{n}\cdot\frac{5}{6}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n}\cdot\frac{5}{6}}\ll\lambda_{q}^{\left(\frac{4}{5}\right)^{n+1}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n+1}}=\lambda_{q,n},$
which ensures that the low frequency portion of $w_{q+1,n,p}$ decouples from
the high frequency intermittent pipe flow $\mathbb{W}_{q+1,n}$. For $n=0$, the
maximum frequency of $\mathring{R}_{q,0}=\mathring{R}_{q}$ is $\lambda_{q}$,
which is much less than $\lambda_{q,0}$ per Definition 2.3.
2. (2)
The $L^{2}$ size of $w_{q+1,n,p}$ is equal to the square root of the $L^{1}$
norm of $\mathring{R}_{q,n,p}$, which in turn depends on the minimum frequency
of $\mathring{R}_{q,n,p}$ and will be $\delta_{q+1,n,p}$, where we define
$\delta_{q+1,0,p}=\delta_{q+1}$. For $n\geq 1$ and $1\leq p\leq{p_{\rm max}}$,
we have from Definition 2.5 that
$\displaystyle\delta_{q+1,n,p}=\frac{\delta_{q+1}\lambda_{q}}{\lambda_{q,n,p-1}}\prod_{n^{\prime}<n}f_{q,n^{\prime}}.$
3. (3)
For $n\geq 1$, derivatives on the low frequency coefficient function of
$w_{q+1,n,p}$ cost the maximum frequency of $\mathring{R}_{q,n,p}$, which is
$\lambda_{q,n,p}$. For $n=0$, $\mathring{R}_{q,0}=\mathring{R}_{q}$, so that
each spatial derivative on the coefficient function of $w_{q+1,0}$ costs
$\lambda_{q}$.
4. (4)
The transport error and Nash error created by the addition of $w_{q+1,n,p}$
are small enough to be absorbed into $\mathring{R}_{q+1}$ for every $n$.
5. (5)
Per Definition 2.3, the oscillation error which results from $w_{q+1,n,p}$
interacting with itself has minimum frequency
$\lambda_{q,n}=\lambda_{q+1}r_{q+1,n}=\lambda_{q}^{\left(\frac{4}{5}\right)^{n+1}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n+1}}.$
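Properties (1) and (2) can be illustrated numerically. In the sketch below the parameter values ($\lambda_q$, $\lambda_{q+1}$, ${p_{\rm max}}$, and the frequency ratio $f$) are hypothetical samples, and the constant product of the $f_{q,n'}$'s in the amplitude is dropped since it does not affect monotonicity in $p$:

```python
# Illustration (hypothetical sample parameters) of properties (1)-(2).
lam_q, lam_q1, p_max = 1.0e3, 1.0e4, 10

def lam(alpha):
    """Interpolated frequency lambda_q^alpha * lambda_{q+1}^(1-alpha)."""
    return lam_q ** alpha * lam_q1 ** (1 - alpha)

# (1) r_{q+1,n} = (lam_q/lam_q1)^{(4/5)^{n+1}} increases toward 1 with n,
# i.e. the perturbations become successively less intermittent; and the
# max frequency lambda_{q,n+1,0} of R_{q,n,p} sits strictly below the
# min frequency lambda_{q,n} of the pipe flow W_{q+1,n}:
r = [(lam_q / lam_q1) ** ((4/5) ** (n + 1)) for n in range(6)]
assert all(r[n] < r[n + 1] < 1 for n in range(5))
for n in range(1, 8):
    # exponent (4/5)^n * 5/6 > (4/5)^{n+1}, and lam(.) decreases in its
    # exponent since lam_q < lam_q1, so the frequencies decouple:
    assert lam((4/5) ** n * (5/6)) < lam((4/5) ** (n + 1))

# (2) delta_{q+1,n,p} ~ delta_{q+1} lam_q / lambda_{q,n,p-1} decreases in p,
# since lambda_{q,n,p-1} = lambda_{q,n,0} f^{p-1} grows geometrically.
lam_n0, f = 2.0e3, 1.2          # sample minimum frequency and ratio f_{q,n}
delta = [1.0 / (lam_n0 * f ** (p - 1)) for p in range(1, p_max + 1)]
assert all(delta[i] > delta[i + 1] for i in range(p_max - 1))
```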
### 2.7 The Reynolds stress error and heuristic estimates
Note that since the relation (2.2) is linear in the Reynolds stress, replacing
$q$ with $q+1$, the right hand side can be split into three components:
$\begin{split}\mathrm{div\,}(w_{q+1}\otimes w_{q+1}+\mathring{R}_{q})\\
\partial_{t}w_{q+1}+v_{q}\cdot\nabla w_{q+1}\\
w_{q+1}\cdot\nabla v_{q}\,,\end{split}$ (2.17)
which we call the _oscillation error_ , _transport error_ and _Nash error_
respectively.
#### 2.7.1 Type 1 oscillation error
In this section, we sketch the heuristic estimates which justify the following
principle: the low frequency, high amplitude errors arising from the self
interaction of an intermittent pipe flow can be transferred to higher
frequencies and smaller amplitudes through the higher order stresses and
perturbations. We shall show that the following estimates are self-consistent
and allow for the constructions of solutions approaching the regularity
threshold $\dot{H}^{\frac{1}{2}}$:
$\left\|\nabla^{M}\mathring{R}_{q}\right\|_{L^{1}}\leq\delta_{q+1}\lambda_{q}^{M}$
(2.18)
$\left\|\nabla^{M}\mathring{R}_{q,n,p}\right\|_{L^{1}}\leq\frac{\delta_{q+1}\lambda_{q}}{\lambda_{q,n,p-1}}\prod_{n^{\prime}<n}f_{q,n^{\prime}}\lambda_{q,n,p}^{M}=\delta_{q+1,n,p}\lambda_{q,n,p}^{M}\,.$
(2.19)
The higher order stress $\mathring{R}_{q,n,p}$ is defined using the spatial
Littlewood-Paley projection operator
$\mathbb{P}_{[q,n,p]}:=\mathbb{P}_{\left[\lambda_{q,n,p-1},\lambda_{q,n,p}\right)}=\mathbb{P}_{\geq\lambda_{q,n,p-1}}\mathbb{P}_{<\lambda_{q,n,p}},$
which projects onto the frequencies from (2.10). We define
$\mathring{R}_{q,n,p}$ as follows:
$\mathring{R}_{q,n,p}:=\sum_{n^{\prime}<n}\sum\limits_{p^{\prime}=1}^{p_{\rm
max}}\mathrm{div\,}^{-1}\left(\nabla\left(a_{n^{\prime},p^{\prime}}^{2}(\mathring{R}_{q,n^{\prime},p^{\prime}})\nabla\Phi_{q,k}^{-1}\otimes\nabla\Phi_{q,k}^{-T}\right):\left(\mathbb{P}_{[q,n,p]}\left(\mathbb{W}_{q+1,n^{\prime}}\otimes\mathbb{W}_{q+1,n^{\prime}}\right)\right)(\Phi_{q,k})\right).$
(2.20)
We pause here to point out an important consequence of this definition. Let
$n^{\prime}$ be fixed, and consider the right side of the above equality.
Then, due to the periodicity of $\mathbb{W}_{q+1,n^{\prime}}$ at scale
$(\lambda_{q+1}r_{q+1,n^{\prime}})^{-1}$, we have (denoting by
$\mathbb{P}_{\neq 0}$ the operator which subtracts from a function its mean in
space)
$\displaystyle\mathbb{W}_{q+1,n^{\prime}}\otimes\mathbb{W}_{q+1,n^{\prime}}$
$\displaystyle=\mathbb{P}_{=0}\left(\mathbb{W}_{q+1,n^{\prime}}\otimes\mathbb{W}_{q+1,n^{\prime}}\right)+\mathbb{P}_{\neq
0}\left(\mathbb{W}_{q+1,n^{\prime}}\otimes\mathbb{W}_{q+1,n^{\prime}}\right)$
$\displaystyle=\mathbb{P}_{=0}\left(\mathbb{W}_{q+1,n^{\prime}}\otimes\mathbb{W}_{q+1,n^{\prime}}\right)+\mathbb{P}_{\geq\lambda_{q+1}r_{q+1,n^{\prime}}}\left(\mathbb{W}_{q+1,n^{\prime}}\otimes\mathbb{W}_{q+1,n^{\prime}}\right).$
For $n^{\prime}\geq 1$, we have that
$\lambda_{q+1}r_{q+1,n^{\prime}}=\lambda_{q}^{\left(\frac{4}{5}\right)^{n^{\prime}+1}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n^{\prime}+1}}\gg\lambda_{q}^{\left(\frac{4}{5}\right)^{n^{\prime}}\cdot\frac{5}{6}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n^{\prime}}\cdot\frac{5}{6}}=\lambda_{q,n^{\prime}+1,0}=\lambda_{q,n^{\prime},{p_{\rm
max}}},$
where $\lambda_{q,n^{\prime}+1,0}$ is the minimum frequency of
$\mathring{R}_{q,n^{\prime}+1}=\sum_{p^{\prime}=1}^{p_{\rm
max}}\mathring{R}_{q,n^{\prime}+1,p^{\prime}}$, while for $n^{\prime}=0$ we
have that
have that
$\lambda_{q+1}r_{q+1,0}=\lambda_{q,1}=\lambda_{q}^{\left(\frac{4}{5}\right)}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)}=\lambda_{q,1,0},$
which is the minimum frequency of $\mathring{R}_{q,1}$. Therefore, we have
shown that the error terms arising from _all_ non-zero modes of
$\mathbb{W}_{q+1,n^{\prime}}\otimes\mathbb{W}_{q+1,n^{\prime}}$ are accounted
for in the higher order stresses $\mathring{R}_{q,{\widetilde{n}}}$ for
${\widetilde{n}}>n^{\prime}$. Thus, the higher order stresses created by the
interaction of $w_{q+1,n^{\prime}}$ will be absorbed into higher order
stresses with _strictly larger_ values of $n$.
Now assuming that $\mathring{R}_{q,n^{\prime},p^{\prime}}$ and
$w_{q+1,n^{\prime},p^{\prime}}$ are well-defined for all $n^{\prime}<n$ and
$1\leq p^{\prime}\leq{p_{\rm max}}$ and using the heuristic estimates from the
previous section for $w_{q+1,n^{\prime},p^{\prime}}$, we can estimate the
component of $\mathring{R}_{q,n,p}$ coming from
$w_{q+1,n^{\prime},p^{\prime}}$ by recalling (2.20) and writing
$\displaystyle\left\|\mathring{R}_{q,n,p}\right\|_{L^{1}}$
$\displaystyle\leq\sum_{n^{\prime}<n}\frac{\delta_{q+1,n^{\prime},p^{\prime}}\lambda_{q,n^{\prime},p^{\prime}}}{\lambda_{q,n,p-1}}$
$\displaystyle=\sum_{n^{\prime}<n}\frac{\frac{\delta_{q+1}\lambda_{q}}{\lambda_{q,n^{\prime},p^{\prime}-1}}\prod_{n^{\prime\prime}<n^{\prime}}f_{q,n^{\prime\prime}}\,\lambda_{q,n^{\prime},p^{\prime}}}{\lambda_{q,n,p-1}}$
$\displaystyle\leq\sum_{n^{\prime}<n}\frac{\delta_{q+1}\lambda_{q}}{\lambda_{q,n,p-1}}\prod_{n^{\prime\prime}\leq n^{\prime}}f_{q,n^{\prime\prime}}$
$\displaystyle\lesssim\frac{\delta_{q+1}\lambda_{q}}{\lambda_{q,n,p-1}}\prod_{n^{\prime\prime}<n}f_{q,n^{\prime\prime}}=\delta_{q+1,n,p}\,.$
The denominator comes from the gain induced by the combination of the inverse
divergence and the Littlewood-Paley projector $\mathbb{P}_{[q,n,p]}$. The
numerator is the amplitude of
$\nabla|a_{n^{\prime},p^{\prime}}(\mathring{R}_{q,n^{\prime},p^{\prime}})|^{2}$,
computed using the chain rule and the assumption (2.19) on
$\nabla\mathring{R}_{q,n^{\prime},p^{\prime}}$. We have used that the $L^{2}$
norm of $\mathbb{W}_{q+1,n^{\prime}}$ is normalized to unit size. Any
derivatives on $\mathring{R}_{q,n,p}$ will cost $\lambda_{q,n,p}$, which is
the maximum frequency in the Littlewood-Paley projector
$\mathbb{P}_{[q,n,p]}$. Thus, all terms which will land in
$\mathring{R}_{q,n,p}$ will satisfy the correct estimates given that
$\mathring{R}_{q,n^{\prime},p^{\prime}}$ satisfies the correct estimates for
$n^{\prime}<n$ and $1\leq p^{\prime}\leq{p_{\rm max}}$. Since
$\mathring{R}_{q}=:\mathring{R}_{q,0}$ satisfies the inductive assumptions, we
can initiate this iteration at level $n=0$ while satisfying (2.18).
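The final $\lesssim$ in the chain of estimates above is a geometric-series observation: since the factors $f_{q,n''}$ are large, the sum over $n'$ is dominated by its largest term. A toy numerical check, with a hypothetical common value of $f$ standing in for the $f_{q,n''}$'s:

```python
# Toy check (hypothetical f >> 1) that the sum over n' < n of the partial
# products prod_{n'' <= n'} f is dominated by its largest term, which is
# prod_{n'' < n} f -- the content of the final "lesssim" in the chain above.
f, n = 10.0, 6
partial_products = [f ** (k + 1) for k in range(n)]  # prod over n'' <= n', n' = 0..n-1
assert max(partial_products) == partial_products[-1] == f ** n
# The full sum is within a constant factor of the last term alone:
assert sum(partial_products) < 2 * partial_products[-1]
```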
Now that $\mathring{R}_{q,n,p}$ satisfies the appropriate estimates, we can
correct it with a perturbation $w_{q+1,n,p}$ as described in the previous
section. As before, since $\mathbb{W}_{q+1,n}$ has minimum frequency
$\lambda_{q,n}=\lambda_{q+1}r_{q+1,n}=\lambda_{q}^{\left(\frac{4}{5}\right)^{n+1}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n+1}}\gg\lambda_{q}^{\left(\frac{4}{5}\right)^{n}\cdot\frac{5}{6}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n}\cdot\frac{5}{6}}=\lambda_{q,n+1,0}\,,$
and the minimum frequency in $\mathring{R}_{q,n+1}$ is $\lambda_{q,n+1,0}$,
every error term resulting from the self interaction of $w_{q+1,n,p}$ will be
absorbed into higher order stresses $\mathring{R}_{q,{\widetilde{n}}}$ for
${\widetilde{n}}>n$. Therefore, we can induct on $n$ to add a sequence of
perturbations $w_{q+1,n}=\sum_{p=1}^{p_{\rm max}}w_{q+1,n,p}$ such that all
nonlinear error terms are canceled by subsequent perturbations. Upon reaching
${n_{\rm max}}$ and recalling (2.1), we can estimate the final nonlinear error
term by
$\displaystyle\frac{\delta_{q+1}\lambda_{q}}{\lambda_{q+1}r_{q+1,{n_{\rm max}}}}\prod_{n^{\prime}<{n_{\rm max}}}f_{q,n^{\prime}}\leq\delta_{q+2}$
$\displaystyle\impliedby\delta_{q+1}\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{1-\left(\frac{4}{5}\right)^{{n_{\rm
max}}+1}-\frac{1}{{p_{\rm max}}}}\leq\delta_{q+2}$
$\displaystyle\iff\lambda_{q+1}^{-2\beta}\lambda_{q+1}^{\left(\frac{1}{b}-1\right)\left(1-\left(\frac{4}{5}\right)^{{n_{\rm
max}}+1}-\frac{1}{{p_{\rm max}}}\right)}\leq\lambda_{q+1}^{-2\beta b}$
$\displaystyle\iff 2\beta
b(b-1)\leq(b-1)\left(1-\left(\frac{4}{5}\right)^{{n_{\rm
max}}+1}-\frac{1}{{p_{\rm max}}}\right)$
$\displaystyle\iff\beta\leq\frac{1}{2b}\left(1-\left(\frac{4}{5}\right)^{{n_{\rm
max}}+1}-\frac{1}{{p_{\rm max}}}\right).$
Choosing $b$ to be close to $1$ and ${n_{\rm max}}$ and ${p_{\rm max}}$
sufficiently large shows that these error terms are commensurate with
$\dot{H}^{\frac{1}{2}-}$ regularity.
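The threshold in the last line of the display can be evaluated directly. The sketch below (parameter choices hypothetical) confirms that $\beta$ can be taken arbitrarily close to $\nicefrac{1}{2}$:

```python
# Regularity threshold from the display above:
# beta <= (1/(2b)) * (1 - (4/5)^{n_max + 1} - 1/p_max).
def beta_max(b, n_max, p_max):
    return (1 - (4/5) ** (n_max + 1) - 1 / p_max) / (2 * b)

# With b close to 1 and n_max, p_max large, the bound approaches 1/2
# from below (sample values are hypothetical):
assert 0.49 < beta_max(b=1.001, n_max=50, p_max=1000) < 0.5
# Crude parameters still give a positive, but much smaller, exponent:
assert 0 < beta_max(b=1.5, n_max=3, p_max=5)
```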
#### 2.7.2 Type 2 oscillation error
We now consider the second type of oscillation error, which would arise as a
result of two distinct pipes intersecting and would thus serve no purpose in
the cancellation of stresses. Beginning with
$\mathring{R}_{q}=\mathring{R}_{q,0}$, we have that every derivative on
$\mathring{R}_{q,0}$ costs $\lambda_{q}$. Therefore, we may decompose
$\mathring{R}_{q,0}$ using a checkerboard partition of unity at scale
$\lambda_{q}^{-1}$. Referring back to the discussion of the checkerboard
cutoff functions, this sets the value of $r_{1}$ to be
$\lambda_{q}\lambda_{q+1}^{-1}$. Now, suppose that on a single square of this
checkerboard, we have placed a set of intermittent pipe flows
$\mathbb{W}_{q+1,0}$ which are periodized to scale
$\left(\lambda_{q+1}r_{q+1,0}\right)^{-1}$. After flowing the pipes and the
checkerboard square by $v_{q}$ for a short length of time (the length of time
is the inverse of the local Lipschitz norm of $v_{q}$ on the support of the
cutoff $\psi_{i,q}$, given by the time-cutoff hidden in $a_{n,p}$), we must
place a new set of pipes $\mathbb{W}^{\prime}_{q+1,0}$ which are disjoint from
the flowed pipes $\mathbb{W}_{q+1,0}$. Given the choice of $r_{1}$, this will
be possible provided that
$\displaystyle r_{q+1,0}=r_{2}\ll r_{1}^{\frac{3}{4}}\,.$ (2.21)
Thus, the _minimum_ amount of intermittency needed to successfully place
disjoint sets of intermittent pipes is
$\left(\lambda_{q}\lambda_{q+1}^{-1}\right)^{\frac{3}{4}}$. Per Definition
2.3, our choice of $r_{q+1,0}$ is
$\left(\lambda_{q}\lambda_{q+1}^{-1}\right)^{\frac{4}{5}}$, which is then
sufficiently small.
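Since $\lambda_q\lambda_{q+1}^{-1}<1$ and $\nicefrac{4}{5}>\nicefrac{3}{4}$, this choice indeed sits below the minimum required intermittency. A one-line numerical check (the sample frequency ratio is hypothetical):

```python
# r_{q+1,0} = ratio^{4/5} must be << ratio^{3/4}, the threshold from (2.21).
# Since the base ratio = lambda_q / lambda_{q+1} is < 1, the larger
# exponent 4/5 > 3/4 yields the smaller value.
ratio = 1e-3                      # hypothetical lambda_q / lambda_{q+1}
r_q1_0 = ratio ** (4/5)           # the scheme's choice of r_{q+1,0}
r_min = ratio ** (3/4)            # minimum intermittency threshold
assert r_q1_0 < r_min < 1
```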
Let us now assume that we have successfully corrected
$\mathring{R}_{q,n^{\prime}}$ for $n^{\prime}<n$, and that we wish to correct
$\mathring{R}_{q,n}=\sum_{p=1}^{p_{\rm max}}\mathring{R}_{q,n,p}$ with a
perturbation $w_{q+1,n}=\sum_{p=1}^{p_{\rm max}}w_{q+1,n,p}$. First, we recall
that
$\left\|\nabla^{M}\mathring{R}_{q,n,p}\right\|_{L^{1}}\lesssim\delta_{q+1,n,p}\lambda_{q,n,p}^{M}.$
Therefore, we can multiply $\mathring{R}_{q,n,p}$ by a checkerboard partition
of unity at scale $\lambda_{q,n,0}^{-1}\gg\lambda_{q,n,p}^{-1}$ while
preserving these bounds. We must choose values of $r_{1}$ and $r_{2}$, as in
Section 2.5.2. Since for $n\geq 2$
$\lambda_{q+1}r_{1}=\lambda_{q,n,0}=\lambda_{q}^{\left(\frac{4}{5}\right)^{n-1}\cdot\frac{5}{6}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n-1}\cdot\frac{5}{6}}=\lambda_{q+1}\cdot\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{n-1}\cdot\frac{5}{6}},$
and for $n=1$
$\lambda_{q,1,0}=\lambda_{q}^{\frac{4}{5}}\lambda_{q+1}^{\frac{1}{5}}\gg\lambda_{q+1}\cdot\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{1-1}\cdot\frac{5}{6}},$
we have that for all $n\geq 1$
$r_{1}\geq\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{n-1}\cdot\frac{5}{6}}.$
Recall that $\mathring{R}_{q,n,p}$ will be corrected by $w_{q+1,n,p}$, which
is constructed using intermittent pipe flows $\mathbb{W}_{q+1,n}$ with
intermittency
$r_{q+1,n}=\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{n+1}}=r_{2}.$
Thus in order to succeed in placing pipes $\mathbb{W}_{q+1,n}$ which avoid
both previous generations of pipes, which are periodized to scales rougher
than $\mathbb{W}_{q+1,n}$, and pipes from the same generation on overlapping
cutoff functions, we must ensure that
$\displaystyle r_{2}$ $\displaystyle\ll r_{1}^{\frac{3}{4}}$
$\displaystyle\iff\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{n+1}}$
$\displaystyle\ll\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{n-1}\cdot\frac{5}{6}\cdot\frac{3}{4}}$
$\displaystyle\iff\left(\frac{4}{5}\right)^{n-1}\cdot\frac{5}{6}\cdot\frac{3}{4}$
$\displaystyle<\left(\frac{4}{5}\right)^{n+1}$ $\displaystyle\iff\frac{1}{2}$
$\displaystyle<\left(\frac{4}{5}\right)^{3}=\frac{64}{125}.$
So our choice of $r_{q+1,n}$ is sufficient to ensure that we can successfully
place intermittent pipe flows when constructing $w_{q+1,n,p}$ which are
disjoint from all other pipe flows from either previous generations
($n^{\prime}<n$) or the same generation (the same value of $n$).
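The chain of equivalences above can be sanity-checked numerically (illustration only; the range of $n$ below is a hypothetical sample):

```python
# Illustration only: the placement inequality
#   (4/5)**(n-1) * (5/6) * (3/4) < (4/5)**(n+1)
# reduces, after dividing both sides by (4/5)**(n-1), to 1/2 < (4/5)**3 = 64/125,
# so it holds uniformly in n >= 1.
assert 1 / 2 < (4 / 5) ** 3
for n in range(1, 64):
    assert (4 / 5) ** (n - 1) * (5 / 6) * (3 / 4) < (4 / 5) ** (n + 1)
```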
#### 2.7.3 Nash and transport errors
The heuristic for the Nash and transport errors is that our choice of
$r_{q+1,n}$ provides much more intermittency than is needed to ensure that
linear errors arising from $w_{q+1,n,p}$ can be absorbed into
$\mathring{R}_{q+1}$. One may verify that in three dimensions, the
minimum amount of intermittency needed to absorb the Nash and transport errors
arising from $w_{q+1,0}$ into $\mathring{R}_{q+1}$ at regularity approaching
$\dot{H}^{\frac{1}{2}}$ is
$r_{q+1,0}=\lambda_{q}^{\frac{1}{2}}\lambda_{q+1}^{-\frac{1}{2}}$. In general,
one can further verify that given errors supported at frequency
$\lambda_{q}^{\alpha}\lambda_{q+1}^{1-\alpha}$, one could correct them using
intermittent pipe flows with minimum frequency
$\lambda_{q}^{\frac{\alpha}{2}}\lambda_{q+1}^{1-\frac{\alpha}{2}}$ while
absorbing the resulting Nash and transport errors into $\mathring{R}_{q+1}$.
One should compare this with (2.21), which shows that the placement technique
requires more intermittency, which at level $n=0$ corresponds to
$\lambda_{q}^{\frac{3}{4}}\lambda_{q+1}^{-\frac{3}{4}}$. In other words, the
Type 2 oscillation errors require much more intermittency than the Nash and
transport errors do.
Let us start with the Nash error arising from the addition of $w_{q+1,0,1}$,
which is designed to correct $\mathring{R}_{q}$. Using decoupling, the cost of
a derivative on $\mathbb{W}_{q+1,0}$ being $\lambda_{q+1}$ (so that inverting
the divergence gains a factor of $\lambda_{q+1}$), the size of $\nabla v_{q}$
in $L^{2}$, and the $L^{1}$ size of $\mathbb{W}_{q+1,0}$ being $r_{q+1,0}$,
the size of this error is
$\displaystyle\frac{1}{\lambda_{q+1}}\delta_{q+1}^{\nicefrac{{1}}{{2}}}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}r_{q+1,0}$
$\displaystyle=\frac{1}{\lambda_{q+1}}\delta_{q+1}^{\nicefrac{{1}}{{2}}}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)}.$
This is (much) less than $\delta_{q+2}$ since
$\displaystyle\frac{\delta_{q+1}^{\nicefrac{{1}}{{2}}}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{\nicefrac{{3}}{{2}}}}{\lambda_{q+1}^{\nicefrac{{3}}{{2}}}}\leq\delta_{q+2}$
$\displaystyle\iff\lambda_{q+1}^{-\beta}\lambda_{q+1}^{-\frac{\beta}{b}}\lambda_{q+1}^{\frac{1}{b}\cdot\frac{3}{2}}\lambda_{q+1}^{-\frac{3}{2}}\leq\lambda_{q+1}^{-2\beta
b}$ $\displaystyle\iff 2\beta b^{2}-\beta b-\beta\leq(b-1)\cdot\frac{3}{2}$
$\displaystyle\iff\beta(2b+1)(b-1)\leq(b-1)\cdot\frac{3}{2}.$ (2.22)
Choosing $b$ close to $1$ will make this error commensurate with
$\dot{H}^{\frac{1}{2}-}$ regularity.
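The equivalence in (2.22) can be checked numerically (illustration only; the sample values of $\beta$ and $b$ below are hypothetical):

```python
# Illustration only: (2.22) reads beta*(2b+1)*(b-1) <= (b-1)*3/2, i.e. for
# b > 1 this is beta <= 3/(2*(2b+1)); the admissible beta approaches 1/2
# as b -> 1, consistent with H^{1/2-} regularity.
def nash_condition(beta, b):
    return beta * (2 * b + 1) * (b - 1) <= (b - 1) * 3 / 2

assert nash_condition(0.49, 1.01)      # beta near 1/2 is admissible for b near 1
assert not nash_condition(0.49, 1.4)   # but fails once b is too large
assert abs(3 / (2 * (2 * 1.0 + 1)) - 0.5) < 1e-12  # threshold at b = 1 is 1/2
```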
Let us now estimate the Nash error arising from the addition of $w_{q+1,n,p}$
for $n\geq 2$, given by
$\left\|\mathrm{div\,}^{-1}\left(\left(a_{n,p}\nabla\Phi_{q,k}^{-1}\mathbb{W}_{q+1,n}(\Phi_{q,k})\right)\cdot\nabla
v_{q}\right)\right\|_{L^{1}}.$
Using again decoupling, the cost of a derivative on $\mathbb{W}_{q+1,n}$ being
$\lambda_{q+1}$ (so that inverting the divergence gains a factor of
$\lambda_{q+1}$), the size of $\nabla v_{q}$ in $L^{2}$, the $L^{1}$ size of
$\mathbb{W}_{q+1,n}$ being $r_{q+1,n}$, and (2.1), we have that for $n\geq 2$,
the size of this error is
$\displaystyle\frac{1}{\lambda_{q+1}}\cdot\delta_{q+1,n,p}^{\frac{1}{2}}r_{q+1,n}\cdot\delta_{q}^{\frac{1}{2}}\lambda_{q}$
$\displaystyle\leq\frac{1}{\lambda_{q+1}}\cdot\delta_{q+1,n,1}^{\frac{1}{2}}r_{q+1,n}\cdot\delta_{q}^{\frac{1}{2}}\lambda_{q}$
$\displaystyle=\frac{1}{\lambda_{q+1}}\left(\frac{\delta_{q+1}\lambda_{q}}{\lambda_{q,n,0}}\right)^{\frac{1}{2}}\left(\prod_{n^{\prime}<n}f_{q,n^{\prime}}\right)^{\frac{1}{2}}\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{n+1}}\delta_{q}^{\frac{1}{2}}\lambda_{q}$
$\displaystyle\leq\frac{1}{\lambda_{q+1}}\left(\frac{\delta_{q+1}\lambda_{q}}{\lambda_{q}^{\left(\frac{4}{5}\right)^{n-1}\cdot\frac{5}{6}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n-1}\cdot\frac{5}{6}}}\right)^{\frac{1}{2}}\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{n+1}-\frac{1}{2{p_{\rm
max}}}}\delta_{q}^{\frac{1}{2}}\lambda_{q}\,.$
Since
$\frac{1}{2{p_{\rm
max}}}+\frac{1}{2}\cdot\frac{5}{6}\cdot\left(\frac{4}{5}\right)^{n-1}<\left(\frac{4}{5}\right)^{n+1}$
independently of $n\geq 2$ if ${p_{\rm max}}$ is sufficiently large, the Nash
error will be smaller than $\delta_{q+2}$ based on the preceding estimates.
Furthermore, one may check that
$\delta_{q+1,1,1}^{\frac{1}{2}}r_{q+1,1}<\delta_{q+1,2,1}^{\frac{1}{2}}r_{q+1,2},$
so that the Nash error arising from the addition of $w_{q+1,1,p}$ is also
satisfactorily small for all $p$.
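As a hedged numerical check of the exponent inequality above (the values of $n_{\max}$ and $p_{\max}$ below are hypothetical samples; the scheme only uses finitely many levels $n$, and $p_{\max}$ is chosen large enough for all of them):

```python
# Illustration only: verify the Nash-error exponent inequality
#   1/(2*p_max) + (1/2)*(5/6)*(4/5)**(n-1) < (4/5)**(n+1)
# for all levels 2 <= n <= n_max, with n_max a hypothetical sample value.
n_max = 8
# need 1/(2*p_max) < (4/5)**(n-1) * (16/25 - 5/12) at the worst level n = n_max:
p_max = int(150 / 67 * (5 / 4) ** (n_max - 1)) + 1
for n in range(2, n_max + 1):
    lhs = 1 / (2 * p_max) + 0.5 * (5 / 6) * (4 / 5) ** (n - 1)
    assert lhs < (4 / 5) ** (n + 1)
```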
Now let us consider the transport error. The size of the transport error
arising from the addition of $w_{q+1,n,p}$ is
$\displaystyle\left\|\mathrm{div\,}^{-1}\left((D_{t,q}a_{n,p})\nabla\Phi_{q,l}^{-1}\mathbb{W}_{q+1,n}\right)\right\|_{L^{1}}$
$\displaystyle\leq\frac{1}{\lambda_{q+1}}\tau_{q}^{-1}\delta_{q+1,n,p}^{\frac{1}{2}}r_{q+1,n}$
$\displaystyle=\frac{1}{\lambda_{q+1}}\cdot\delta_{q+1,n,p}^{\frac{1}{2}}r_{q+1,n}\cdot\delta_{q}^{\frac{1}{2}}\lambda_{q}.$
(2.23)
Thus, the transport error is the same size as the Nash error and is
sufficiently small to be put into $\mathring{R}_{q+1}$.
## 3 Inductive assumptions
While in Section 2 we have outlined in broad terms the main steps in the proof
of Theorem 1.1, along with the heuristics for some of the choices we have made
in our proof, starting with the current section, we work with precise
estimates.
In Section 3.1 we introduce some of the notation used in the proof, such as
the Euler-Reynolds system, the mollified velocity, velocity increments,
material/directional derivatives, our notation for geometric upper bounds with
two different bases, and our notation for $\left\|\cdot\right\|_{L^{p}}$.
In Section 3.2 we introduce the principal amplitude and frequency parameters
used in the proof (the precise definitions and the order of choosing these
parameters are detailed in Section 9.1). Next, in Sections 3.2.1 and 3.2.2 we
state the primary inductive assumptions for the velocity, velocity increments,
and Reynolds stress. These primary estimates hold on the support of previous
generation velocity cutoff functions, which are inductively assumed to satisfy
a number of properties, listed in Section 3.2.3. Lastly, in Section 3.2.4 we
list a number of bounds for the velocity increments and mollified velocities,
which involve all possible combinations of space and material derivatives, up
to a certain order. These bounds are technical in nature, and should be
ignored at a first reading; their sole purpose is to allow us to bound
commutators between $D^{n}$ and $D_{t,q}^{m}$ for very high values of $n$ and
$m$.
Finally, in Section 3.4 we show that if we are able to propagate the
previously stated inductive estimates from step $q$ to step $q+1$, for every
$q\geq 0$, then Theorem 1.1 follows. At the end of the section we discuss the
modifications to the proof which would be necessary in order to obtain other
types of flexibility statements.
### 3.1 General notations
As is standard in convex integration schemes for the Euler system [29], we
introduce the Euler-Reynolds system for the unknowns
$(v_{q},\mathring{R}_{q})$:
$\displaystyle\partial_{t}v_{q}+\mathrm{div\,}(v_{q}\otimes v_{q})+\nabla
p_{q}$ $\displaystyle=\mathrm{div\,}\mathring{R}_{q}$ (3.1a)
$\displaystyle\mathrm{div\,}v_{q}$ $\displaystyle=0.$ (3.1b)
Here and throughout the paper, the pressure $p_{q}$ is uniquely defined by
solving $\Delta
p_{q}=\mathrm{div\,}\mathrm{div\,}(\mathring{R}_{q}-v_{q}\otimes v_{q})$, with
$\int_{\mathbb{T}^{3}}p_{q}dx=0$.
In order to avoid the usual derivative-loss issue in convex integration
schemes, for $q\geq 0$ we use the space-time mollification operator defined in
Section 9.4, equation (9.64), to smooth out the velocity field $v_{q}$ as:
$\displaystyle v_{\ell_{q}}:=\mathcal{P}_{q,x,t}v_{q}\,.$ (3.2)
In particular, we note that spatial mollification is performed at scale
$\widetilde{\lambda}_{q}^{-1}$ (which is just slightly smaller than
$\lambda_{q}^{-1}$), while temporal mollification is at scale
$\widetilde{\tau}_{q-1}$ (which is a lot smaller than $\tau_{q-1}$).
Next, for all $q\geq 1$, define
$\displaystyle w_{q}:=v_{q}-v_{\ell_{q-1}},\qquad
u_{q}:=v_{\ell_{q}}-v_{\ell_{q-1}}.$ (3.3)
For consistency of notation, define $w_{0}=v_{0}$ and $u_{0}=v_{\ell_{0}}$.
Note that
$\displaystyle
u_{q}=\mathcal{P}_{q,x,t}w_{q}+(\mathcal{P}_{q,x,t}v_{\ell_{q-1}}-v_{\ell_{q-1}})$
(3.4)
so that we may morally think that $u_{q}=w_{q}+$ a small error term (the
smallness of this error term will be ensured by choosing a mollifier with a
large number of vanishing moments, cf. (9.62)).
We use the following notation for the material derivative corresponding to the
vector field $v_{\ell_{q}}$:
$\displaystyle D_{t,q}:=\partial_{t}+v_{\ell_{q}}\cdot\nabla.$ (3.5)
With this notation, we have that
$\displaystyle D_{t,q}=D_{t,q-1}+u_{q}\cdot\nabla.$ (3.6)
We also introduce the directional derivatives
$\displaystyle D_{q}:=u_{q}\cdot\nabla$ (3.7)
which allow us to transfer information between $D_{t,q-1}$ and $D_{t,q}$ via
$D_{t,q}=D_{t,q-1}+D_{q}$.
###### Remark 3.1 (Geometric upper bounds with two bases).
If for a sequence of numbers $\\{a_{n}\\}_{n\geq 0}$, and for two parameters
$0<\lambda<\Lambda$ we have the bounds
$\displaystyle a_{n}$ $\displaystyle\leq\lambda^{n},\quad\mbox{for all}\quad
n\leq N_{*}$ $\displaystyle a_{n}$
$\displaystyle\leq\lambda^{N_{*}}\Lambda^{n-N_{*}}\quad\mbox{for all}\quad
n>N_{*},$
for some $N_{*}\geq 1$, we will abbreviate these bounds as
$\displaystyle a_{n}\leq\mathcal{M}\left(n,N_{*},\lambda,\Lambda\right)\,,$
where we define
$\displaystyle\mathcal{M}\left(n,N_{*},\lambda,\Lambda\right):=\lambda^{\min\\{n,N_{*}\\}}\Lambda^{\max\\{n-N_{*},0\\}}$
(3.8)
for all $n\geq 0$. The first entry of
$\mathcal{M}\left(\cdot,\cdot,\lambda,\Lambda\right)$ measures the index in
the sequence (typically the number of derivatives considered) and the second entry
determines the index after which the base of the geometric bound changes from
$\lambda$ to $\Lambda$. This notation has the following consequence, which
will be used throughout the paper: if $1\leq\lambda\leq\Lambda$, then
$\displaystyle\mathcal{M}\left(a,N_{*},\lambda,\Lambda\right)\mathcal{M}\left(b,N_{*},\lambda,\Lambda\right)\leq\mathcal{M}\left(a+b,N_{*},\lambda,\Lambda\right).$
(3.9)
When either $a$ or $b$ is larger than $N_{*}$, the above inequality creates a
loss; for $a+b\leq N_{*}$, it is an equality.
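A direct implementation of (3.8) makes the product inequality (3.9) easy to check by brute force (illustration only; the parameter values below are hypothetical):

```python
# Illustration only: the two-base geometric bound M from (3.8), together with
# a brute-force check of the product inequality (3.9) for 1 <= lambda <= Lambda.
def M(n, N_star, lam, Lam):
    return lam ** min(n, N_star) * Lam ** max(n - N_star, 0)

lam, Lam, N_star = 2.0, 16.0, 5
for a in range(12):
    for b in range(12):
        prod = M(a, N_star, lam, Lam) * M(b, N_star, lam, Lam)
        assert prod <= M(a + b, N_star, lam, Lam)
        if a + b <= N_star:          # the equality regime noted above
            assert prod == M(a + b, N_star, lam, Lam)
```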
###### Remark 3.2 (Norms are uniform in time).
Throughout this section, and the remainder of the paper, in order to
abbreviate notation we shall use the notation $\left\|f\right\|_{L^{p}}$ to
denote $\left\|f\right\|_{L^{\infty}_{t}(L^{p}(\mathbb{T}^{3}))}$. That is,
all $L^{p}$ norms stand for $L^{p}$ norms in space, uniformly in time.
Similarly, when we wish to emphasize a set dependence of an $L^{p}$ norm, we
write $\left\|f\right\|_{L^{p}(\Omega)}$, for some space-time set
$\Omega\subset\mathbb{R}\times\mathbb{T}^{3}$, to stand for
$\left\|{\mathbf{1}}_{\Omega}\;f\right\|_{L^{\infty}_{t}(L^{p}(\mathbb{T}^{3}))}$.
### 3.2 Inductive estimates
The proof is based on propagating estimates for solutions
$(v_{q},\mathring{R}_{q})$ of the Euler-Reynolds system (3.1), inductively for
$q\geq 0$. In order to state these bounds, we first need to fix a number of
parameters in terms of which these inductive estimates are stated. We start by
picking a regularity exponent
$\beta\in(\nicefrac{{1}}{{3}},\nicefrac{{1}}{{2}})$, and a super-exponential
rate parameter $b\in(1,\nicefrac{{3}}{{2}})$ such that $2\beta b<1$. In terms
of this choice of $\beta$ and $b$, a number of additional parameters (${n_{\rm
max}},\ldots\mathsf{N}_{\rm fin}$) are fixed, whose precise definition is
summarized for convenience in items (iii)–(xii) of Section 9.1. Note that at
this point the parameter $a_{*}(\beta,b)$ from item (xiii) in Section 9.1 is
not yet fixed. With this choice, we then introduce the fundamental
$q$-dependent frequency and amplitude parameters from Section 9.2. We state
here for convenience the main $q$-dependent parameters defined in (9.15),
(9.17), (9.18), and (9.21):
$\displaystyle\lambda_{q}=2^{\big{\lceil}{(b^{q})\log_{2}a}\big{\rceil}}\approx\lambda_{q-1}^{b}\,,\qquad$
$\displaystyle\delta_{q}=\lambda_{1}^{\beta(b+1)}\lambda_{q}^{-2\beta}\,,$
(3.10a)
$\displaystyle\tau_{q}^{-1}=\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{\mathsf{c_{0}}+11}\,,\qquad$
$\displaystyle\Gamma_{q+1}=\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{\varepsilon_{\Gamma}}\approx\lambda_{q}^{(b-1)\varepsilon_{\Gamma}}\,,$
(3.10b)
where the constant $\mathsf{c_{0}}$ is defined by (9.6). The $\approx$ symbols
in (3.10) mean that the left side of the $\approx$ symbol lies between two
(universal) constant multiples of the right side (see e.g. (9.16)).
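The hierarchy in (3.10) can be simulated numerically (illustration only; the values of $a$, $b$, $\beta$, and $\varepsilon_{\Gamma}$ below are hypothetical samples, not the parameters fixed in Section 9.1):

```python
import math

# Illustration only: the parameter hierarchy (3.10) for hypothetical sample
# values of a, b, beta, eps_Gamma (the actual values are fixed in Section 9.1).
a, b, beta, eps_Gamma = 16, 1.25, 0.45, 0.05

def lam(q):                         # lambda_q = 2^{ceil(b^q log2 a)} ~ lambda_{q-1}^b
    return 2 ** math.ceil(b ** q * math.log2(a))

def delta(q):                       # delta_q = lambda_1^{beta(b+1)} lambda_q^{-2 beta}
    return lam(1) ** (beta * (b + 1)) * lam(q) ** (-2 * beta)

def Gamma(q1):                      # Gamma_{q+1} = (lambda_{q+1}/lambda_q)^{eps_Gamma}
    return (lam(q1) / lam(q1 - 1)) ** eps_Gamma

for q in range(1, 6):
    assert lam(q + 1) > lam(q)      # frequencies grow super-exponentially in q
    assert delta(q + 1) < delta(q)  # amplitudes decay
    assert Gamma(q + 1) > 1         # Gamma is a small positive power of the ratio
```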
###### Remark 3.3 (Usage of the symbol $\lesssim$ and choice of $a_{*}$).
Throughout the subsequent sections, we will make frequent use of the symbol
$\lesssim$. We emphasize that any implicit constants indicated by $\lesssim$
are only allowed to depend on the parameters defined in Section 9.1, items
(i)–(xii). The implicit constants in $\lesssim$ are however always independent
of the parameters $a$ and $q$, which appear in (3.10). This allows us at the
end of the proof, cf. item (xiii) in Section 9.1 to choose $a_{*}(\beta,b)$ to
be sufficiently large so that for all $a\geq a_{*}(\beta,b)$ and all $q\geq
0$, the parameter $\Gamma_{q+1}$ appearing in (3.10) is larger than all the
implicit constants in $\lesssim$ symbols encountered throughout the paper.
That is, upon choosing $a_{*}$ sufficiently large, any inequality of the type
$A\lesssim B$ which appears in this manuscript, may be rewritten as
$A\leq\Gamma_{q+1}B$, for any $q\geq 0$.
In order to state the inductive assumptions we use four large integers,
defined precisely in Section 9.1. For the moment it is just important to note
that these fixed parameters are independent of $q$ and that they satisfy the
ordering
$\displaystyle 1\ll\mathsf{N}_{\rm
cut,t}\ll\mathsf{N}_{\textnormal{ind,t}}\ll\mathsf{N}_{\textnormal{ind,v}}\ll\mathsf{N}_{\rm
fin}\,.$ (3.11)
The precise definitions of these integers, and the meaning of the $\ll$
symbols in (3.11), are given in (9.9), (9.10), (9.11), and (9.14). Roughly
speaking, the role of these parameters is as follows:
* •
$\mathsf{N}_{\rm cut,t}$ is the number of sharp material derivatives which are
built into the velocity and stress cutoff functions.
* •
$\mathsf{N}_{\textnormal{ind,t}}$ is the number of sharp material derivatives
propagated for velocities and stresses.
* •
$\mathsf{N}_{\textnormal{ind,v}}$ is used to quantify the number of (lossy)
higher order space and time derivatives for velocities and stresses.
* •
$\mathsf{N}_{\rm fin}$ is used to quantify the highest order derivatives
appearing in the proof.
Next, we state the inductive assumptions for the velocity increments and
stresses at various levels $q\geq 0$. Throughout the section we frequently
refer to the notation $\mathcal{M}\left(n,N_{*},\lambda,\Lambda\right)$ from
(3.8).
#### 3.2.1 Primary inductive assumption for velocity increments
We make $L^{2}$ inductive assumptions for
$u_{q^{\prime}}=v_{\ell_{q^{\prime}}}-v_{\ell_{q^{\prime}-1}}$ at levels
$q^{\prime}$ strictly below $q$. For all $0\leq q^{\prime}\leq q-1$ we assume
that
$\displaystyle\left\|\psi_{i,q^{\prime}-1}D^{n}D^{m}_{t,q^{\prime}-1}u_{q^{\prime}}\right\|_{L^{2}}\leq\delta_{q^{\prime}}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q^{\prime}},\widetilde{\lambda}_{q^{\prime}}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q^{\prime}}^{i}\tau_{q^{\prime}-1}^{-1},\widetilde{\tau}_{q^{\prime}-1}^{-1}\right)$
(3.12)
holds for all $n+m\leq\mathsf{N}_{\rm fin}$.
At level $q$, we assume that the velocity increment $w_{q}$ satisfies
$\displaystyle\left\|\psi_{i,q-1}D^{n}D^{m}_{t,q-1}w_{q}\right\|_{L^{2}}\leq\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{n}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q}^{i-1}\tau_{q-1}^{-1},\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}\right)$
(3.13)
for $n,m\leq 7\mathsf{N}_{\textnormal{ind,v}}$. Moreover, recalling from
(9.67) that $\mathrm{supp\,}_{t}f$ denotes the temporal support of a function
$f$, we inductively assume that
$\displaystyle\mathrm{supp\,}_{t}(\mathring{R}_{q-1})\subset[T_{1},T_{2}]\quad\Rightarrow\quad\mathrm{supp\,}_{t}(w_{q})\subset\left[T_{1}-(\lambda_{q-1}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{-1},T_{2}+(\lambda_{q-1}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{-1}\right]\,.$
(3.14)
#### 3.2.2 Inductive assumption for the stress
For the Reynolds stress $\mathring{R}_{q}$, we make $L^{1}$ inductive
assumptions
$\displaystyle\left\|\psi_{i,q-1}D^{n}D_{t,q-1}^{m}\mathring{R}_{q}\right\|_{L^{1}}\leq\Gamma_{q}^{-\mathsf{C_{R}}}\delta_{q+1}\lambda_{q}^{n}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q}^{i+1}\tau_{q-1}^{-1},\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}\right)$
(3.15)
for all $0\leq n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$.
#### 3.2.3 Inductive assumption for the previous generation velocity cutoff
functions
More assumptions are needed in relation to the previous velocity perturbations
and old cutoff functions. First, we assume that the velocity cutoff functions
form a partition of unity for $q^{\prime}\leq q-1$:
$\displaystyle\sum_{i\geq 0}\psi_{i,q^{\prime}}^{2}\equiv
1,\qquad\mbox{and}\qquad\psi_{i,q^{\prime}}\psi_{i^{\prime},q^{\prime}}=0\quad\textnormal{for}\quad|i-i^{\prime}|\geq
2.$ (3.16)
Second, we assume that there exists an ${i_{\rm max}}={i_{\rm max}}(q)>0$,
which is bounded uniformly in $q$ as
$\displaystyle\frac{b+1}{b-1}\leq{i_{\rm
max}}(q)\leq\frac{4}{\varepsilon_{\Gamma}(b-1)}\,,$ (3.17)
such that
$\displaystyle\psi_{i,q^{\prime}}$ $\displaystyle\equiv 0\quad\mbox{for
all}\quad i>{i_{\rm
max}}(q^{\prime})\,,\qquad\mbox{and}\qquad\Gamma_{q^{\prime}+1}^{{i_{\rm
max}}(q^{\prime})}\leq\lambda_{q^{\prime}}^{\nicefrac{{5}}{{3}}}\,,$ (3.18)
for all $q^{\prime}\leq q-1$. For all $0\leq q^{\prime}\leq q-1$ and $0\leq
i\leq{i_{\rm max}}$ we assume the following pointwise derivative bounds for
the cutoff functions $\psi_{i,q^{\prime}}$. For mixed space and material
derivatives (recall the notation from (3.5)) we assume that
$\displaystyle\frac{{\mathbf{1}}_{\mathrm{supp\,}\psi_{i,q^{\prime}}}}{\psi_{i,q^{\prime}}^{1-(K+M)/\mathsf{N}_{\rm
fin}}}\left|\left(\prod_{l=1}^{k}D^{\alpha_{l}}D_{t,q^{\prime}-1}^{\beta_{l}}\right)\psi_{i,q^{\prime}}\right|$
$\displaystyle\qquad\lesssim\mathcal{M}\left(K,\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q^{\prime}}\lambda_{q^{\prime}},\Gamma_{q^{\prime}}\widetilde{\lambda}_{q^{\prime}}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t},\Gamma_{q^{\prime}+1}^{i+3}\tau_{q^{\prime}-1}^{-1},\Gamma_{q^{\prime}+1}^{-1}\widetilde{\tau}_{q^{\prime}}^{-1}\right)$
(3.19)
for $K,M,k\geq 0$ with $0\leq K+M\leq\mathsf{N}_{\rm fin}$, where
$\alpha,\beta\in{\mathbb{N}}^{k}$ are such that $|\alpha|=K$ and $|\beta|=M$.
Lastly, we consider mixtures of space, material, and directional derivatives
(recall the notation from (3.7)). Then with $K,M,\alpha,\beta$ and $k$ as
above, and with $N\geq 0$, we assume that
$\displaystyle\frac{{\mathbf{1}}_{\mathrm{supp\,}\psi_{i,q^{\prime}}}}{\psi_{i,q^{\prime}}^{1-(N+K+M)/\mathsf{N}_{\rm
fin}}}\left|D^{N}\left(\prod_{l=1}^{k}D_{q^{\prime}}^{\alpha_{l}}D_{t,q^{\prime}-1}^{\beta_{l}}\right)\psi_{i,q^{\prime}}\right|$
$\displaystyle\qquad\lesssim\mathcal{M}\left(N,\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q^{\prime}}\lambda_{q^{\prime}},\Gamma_{q^{\prime}}\widetilde{\lambda}_{q^{\prime}}\right)(\Gamma_{q^{\prime}+1}^{i-\mathsf{c_{0}}}\tau_{q^{\prime}}^{-1})^{K}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t},\Gamma_{q^{\prime}+1}^{i+3}\tau_{q^{\prime}-1}^{-1},\Gamma_{q^{\prime}+1}^{-1}\widetilde{\tau}_{q^{\prime}}^{-1}\right)$
(3.20)
as long as $0\leq N+K+M\leq\mathsf{N}_{\rm fin}$.
In addition to the above pointwise estimates for the cutoff functions
$\psi_{i,q^{\prime}}$, we also assume that we have a good $L^{1}$ control.
More precisely, we postulate that
$\displaystyle\left\|\psi_{i,q^{\prime}}\right\|_{L^{1}}\lesssim\Gamma_{q^{\prime}+1}^{-2i+{\mathsf{C}_{b}}}\qquad\mbox{where}\qquad{\mathsf{C}_{b}}=\frac{4+b}{b-1}$
(3.21)
holds for $0\leq q^{\prime}\leq q-1$ and all $0\leq i\leq{i_{\rm
max}}(q^{\prime})$.
#### 3.2.4 Secondary inductive assumptions for velocities
Next, for $0\leq q^{\prime}\leq q-1$, $0\leq i\leq{i_{\rm max}}$, $k\geq 1$,
$K,M\geq 0$, $\alpha,\beta\in\mathbb{N}^{k}$ with $|\alpha|=K$ and
$|\beta|=M$, we assume that the following mixed space-and-material derivative
bounds hold
$\displaystyle\left\|\Big{(}\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q^{\prime}-1}^{\beta_{i}}\Big{)}u_{q^{\prime}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q^{\prime}})}$
$\displaystyle\qquad\lesssim(\Gamma_{q^{\prime}+1}^{i+1}\delta_{q^{\prime}}^{\nicefrac{{1}}{{2}}})\mathcal{M}\left(K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q^{\prime}}\lambda_{q^{\prime}},\widetilde{\lambda}_{q^{\prime}}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q^{\prime}+1}^{i+3}\tau_{q^{\prime}-1}^{-1},\Gamma_{q^{\prime}+1}^{-1}\widetilde{\tau}_{q^{\prime}}^{-1}\right)$
(3.22)
for $K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$,
$\displaystyle\left\|\Big{(}\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q^{\prime}}^{\beta_{i}}\Big{)}Dv_{\ell_{q^{\prime}}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q^{\prime}})}$
$\displaystyle\qquad\lesssim(\Gamma_{q^{\prime}+1}^{i+1}\delta_{q^{\prime}}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q^{\prime}})\mathcal{M}\left(K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q^{\prime}}\lambda_{q^{\prime}},\widetilde{\lambda}_{q^{\prime}}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q^{\prime}+1}^{i-\mathsf{c_{0}}}\tau_{q^{\prime}}^{-1},\Gamma_{q^{\prime}+1}^{-1}\widetilde{\tau}_{q^{\prime}}^{-1}\right)$
(3.23)
for $K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$, and
$\displaystyle\left\|\Big{(}\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q^{\prime}}^{\beta_{i}}\Big{)}v_{\ell_{q^{\prime}}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q^{\prime}})}$
$\displaystyle\qquad\lesssim(\Gamma_{q^{\prime}+1}^{i+1}\delta_{q^{\prime}}^{\nicefrac{{1}}{{2}}}\lambda_{q^{\prime}}^{2})\mathcal{M}\left(K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q^{\prime}}\lambda_{q^{\prime}},\widetilde{\lambda}_{q^{\prime}}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q^{\prime}+1}^{i-\mathsf{c_{0}}}\tau_{q^{\prime}}^{-1},\Gamma_{q^{\prime}+1}^{-1}\widetilde{\tau}_{q^{\prime}}^{-1}\right)$
(3.24)
for $K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$. Additionally, for
$N\geq 0$ we postulate that mixed space-material-directional derivatives
satisfy
$\displaystyle\left\|D^{N}\Big{(}\prod_{i=1}^{k}D_{q^{\prime}}^{\alpha_{i}}D_{t,q^{\prime}-1}^{\beta_{i}}\Big{)}u_{q^{\prime}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q^{\prime}})}$
$\displaystyle\qquad\lesssim(\Gamma_{q^{\prime}+1}^{i+1}\delta_{q^{\prime}}^{\nicefrac{{1}}{{2}}})^{K+1}\mathcal{M}\left(N+K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q^{\prime}}\lambda_{q^{\prime}},\widetilde{\lambda}_{q^{\prime}}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q^{\prime}+1}^{i+3}\tau_{q^{\prime}-1}^{-1},\Gamma_{q^{\prime}+1}^{-1}\widetilde{\tau}_{q^{\prime}}^{-1}\right)$
(3.25a)
$\displaystyle\qquad\lesssim(\Gamma_{q^{\prime}+1}^{i+1}\delta_{q^{\prime}}^{\nicefrac{{1}}{{2}}})\mathcal{M}\left(N,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q^{\prime}}\lambda_{q^{\prime}},\widetilde{\lambda}_{q^{\prime}}\right)(\Gamma_{q^{\prime}+1}^{i-\mathsf{c_{0}}}\tau_{q^{\prime}}^{-1})^{K}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q^{\prime}+1}^{i+3}\tau_{q^{\prime}-1}^{-1},\Gamma_{q^{\prime}+1}^{-1}\widetilde{\tau}_{q^{\prime}}^{-1}\right)$
(3.25b)
whenever $N+K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$.
###### Remark 3.4.
Identity (A.39) shows that (3.25b) automatically implies the bound
$\displaystyle\left\|D^{N}D_{t,q^{\prime}}^{M}u_{q^{\prime}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q^{\prime}})}$
$\displaystyle\qquad\lesssim(\Gamma_{q^{\prime}+1}^{i+1}\delta_{q^{\prime}}^{\nicefrac{{1}}{{2}}})\mathcal{M}\left(N,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q^{\prime}}\lambda_{q^{\prime}},\widetilde{\lambda}_{q^{\prime}}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q^{\prime}+1}^{i-\mathsf{c_{0}}}\tau_{q^{\prime}}^{-1},\Gamma_{q^{\prime}+1}^{-1}\widetilde{\tau}_{q^{\prime}}^{-1}\right)$
(3.26)
for all $N+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$. To see this, we
take $B=D_{t,q^{\prime}-1}$ and $A=D_{q^{\prime}}$, so that
$A+B=D_{t,q^{\prime}}$. The estimate (3.26) now is a consequence of identity
(A.39) and the parameter inequalities
$\Gamma_{q^{\prime}+1}^{\mathsf{c_{0}}+3}\tau_{q^{\prime}-1}^{-1}\leq\tau_{q^{\prime}}^{-1}$
(which follows from (9.40)) and
$\Gamma_{q^{\prime}+1}^{i-\mathsf{c_{0}}+1}\tau_{q^{\prime}}^{-1}\leq\widetilde{\tau}_{q^{\prime}}^{-1}$
(which is a consequence of (3.18) and (9.43)). In a similar fashion, the bound
(3.20) and identity (A.39) imply that
$\displaystyle\frac{{\mathbf{1}}_{\mathrm{supp\,}\psi_{i,q^{\prime}}}}{\psi_{i,q^{\prime}}^{1-(N+M)/\mathsf{N}_{\rm
fin}}}\left|D^{N}D_{t,q^{\prime}}^{M}\psi_{i,q^{\prime}}\right|$
$\displaystyle\qquad\lesssim\mathcal{M}\left(N,\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q^{\prime}}\lambda_{q^{\prime}},\Gamma_{q^{\prime}}\widetilde{\lambda}_{q^{\prime}}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t},\Gamma_{q^{\prime}+1}^{i-\mathsf{c_{0}}}\tau_{q^{\prime}}^{-1},\Gamma_{q^{\prime}+1}^{-1}\widetilde{\tau}_{q^{\prime}}^{-1}\right)$
(3.27)
for all $N+M\leq\mathsf{N}_{\rm fin}$. Indeed, the above estimates follow from
the same parameter inequalities mentioned above, and from identity (A.39) with
$A=D_{q^{\prime}}$ and $B=D_{t,q^{\prime}-1}$.
###### Remark 3.5.
The inductive assumptions for the velocities given in Sections 3.2.1 and
3.2.4, with the definition of the mollifier operator ${\mathcal{P}}_{q,x,t}$
in Section 9.4, imply that the new velocity field $v_{q}=w_{q}+v_{\ell_{q-1}}$
is very close to its mollification $v_{\ell_{q}}$, uniformly in space and
time. That is, we have
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}(v_{\ell_{q}}-v_{q})\right\|_{L^{\infty}}\leq\lambda_{q}^{-2}\delta_{q}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1}\Gamma_{q}^{i-1},\tilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1}\right)$
(3.28)
for all $n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$. The proof of the above
bound is given in Lemma 5.1, cf. estimate (5.4).
### 3.3 Main inductive proposition
The main inductive proposition, which propagates the inductive estimates in
Section 3.2 from step $q$ to step $q+1$, is as follows.
###### Proposition 3.6.
Fix $\beta\in[\nicefrac{{1}}{{3}},\nicefrac{{1}}{{2}})$ and choose
$b\in(1,\nicefrac{{1}}{{2\beta}})$. Solely in terms of $\beta$ and $b$, define
the parameters ${n_{\rm max}}$, ${\mathsf{C}_{b}}$, $\mathsf{C_{R}}$,
$\mathsf{c_{0}}$, $\varepsilon_{\Gamma}$, $\alpha_{\mathsf{R}}$,
$\mathsf{N}_{\rm cut,t}$, $\mathsf{N}_{\rm cut,x}$,
$\mathsf{N}_{\textnormal{ind,t}}$, $\mathsf{N}_{\textnormal{ind,v}}$,
${\mathsf{N}_{\rm dec}}$, ${\mathsf{d}}$, and $\mathsf{N}_{\rm fin}$, by the
definitions in Section 9.1, items (i)–(xii). Then, there exists a sufficiently
large $a_{*}=a_{*}(\beta,b)\geq 1$, such that for any $a\geq a_{*}$, the
following statement holds for any $q\geq 0$. Given a velocity field $v_{q}$
which solves the Euler-Reynolds system with stress $\mathring{R}_{q}$, define
$v_{\ell_{q}},w_{q}$, and $u_{q}$ via (3.2)–(3.3). Assume that
$\\{u_{q^{\prime}}\\}_{q^{\prime}=0}^{q-1}$ satisfies (3.12), $w_{q}$ obeys
(3.13)–(3.14), $\mathring{R}_{q}$ satisfies (3.15), and that for every
$q^{\prime}\leq q-1$ there exists a partition of unity
$\\{\psi_{i,q^{\prime}}\\}_{i\geq 0}$ such that properties (3.16)–(3.18) and
estimates (3.19)–(3.25) hold. Then, there exists a velocity field $v_{q+1}$, a
stress $\mathring{R}_{q+1}$, and a partition of unity $\\{\psi_{i,q}\\}_{i\geq
0}$, such that $v_{q+1}$ solves the Euler-Reynolds system with stress
$\mathring{R}_{q+1}$, $u_{q}$ satisfies (3.12) for $q^{\prime}\mapsto q$,
$w_{q+1}$ obeys (3.13)–(3.14) for $q\mapsto q+1$, $\mathring{R}_{q+1}$
satisfies (3.15) for $q\mapsto q+1$, and the $\psi_{i,q}$ are such that
(3.16)–(3.25) hold when $q^{\prime}\mapsto q$.
### 3.4 Proof of Theorem 1.1
Choose the parameters $\beta,b,\ldots,a_{*}$, as described in Section 9.1, and
assume that with these parameter choices, and for any $a\geq a_{*}$, we are
able to propagate the inductive bounds claimed in Sections 3.2.1–3.2.4 from
step $q$ to step $q+1$, for all $q\geq 0$; this is achieved in Sections 6–8.
We next show that if $a\geq a_{*}$ is chosen sufficiently large, depending
additionally on $v_{\mathrm{start}}$, $v_{\mathrm{end}}$, $T>0$, and
$\epsilon>0$ from the statement of Theorem 1.1, then the inductive assumptions
imply Theorem 1.1.
Without loss of generality, assume that
$\int_{\mathbb{T}^{3}}v_{\mathrm{start}}(x)dx=\int_{\mathbb{T}^{3}}v_{\mathrm{end}}(x)dx=0$.
Since these functions lie in $L^{2}(\mathbb{T}^{3})$, there exists $R>0$ such
that upon defining
$\displaystyle v_{0}^{(1)}:=\mathbb{P}_{\leq
R}v_{\mathrm{start}}\,,\qquad\mbox{and}\qquad v_{0}^{(2)}:=\mathbb{P}_{\leq
R}v_{\mathrm{end}}\,,$
where $\mathbb{P}_{\leq R}$ denotes the Fourier truncation operator to
frequencies $|\xi|\leq R$, we have that
$\displaystyle\|v_{0}^{(1)}-v_{\mathrm{start}}\|_{L^{2}(\mathbb{T}^{3})}+\|v_{0}^{(2)}-v_{\mathrm{end}}\|_{L^{2}(\mathbb{T}^{3})}\leq\frac{\epsilon}{2}\,.$
(3.29)
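The Fourier truncation step admits a minimal numerical sketch (illustration only, on a 1D torus as a stand-in for $\mathbb{T}^{3}$; the function and resolutions below are hypothetical):

```python
import math

# Illustration only (1D torus stand-in for T^3): truncating the Fourier series
# of an L^2 function to frequencies |k| <= R makes the L^2 error as small as
# desired, which is the fact used to pick R in (3.29).
N = 512                                  # quadrature points on [0, 2*pi)
xs = [2 * math.pi * j / N for j in range(N)]
f = [abs(x - math.pi) for x in xs]       # a continuous but non-smooth sample function

def coeff(k):
    """k-th Fourier coefficient of f, via the rectangle rule."""
    re = sum(f[j] * math.cos(k * xs[j]) for j in range(N)) / N
    im = -sum(f[j] * math.sin(k * xs[j]) for j in range(N)) / N
    return re, im

def l2_err(R):
    """L^2 distance between f and its Fourier truncation to |k| <= R."""
    cs = {k: coeff(k) for k in range(-R, R + 1)}
    err = 0.0
    for j in range(N):
        s = sum(re * math.cos(k * xs[j]) - im * math.sin(k * xs[j])
                for k, (re, im) in cs.items())
        err += (f[j] - s) ** 2
    return math.sqrt(2 * math.pi * err / N)

assert l2_err(16) < l2_err(4) < l2_err(1)    # the error shrinks as R grows
```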
Note that $v_{0}^{(1)},v_{0}^{(2)}\in C^{\infty}(\mathbb{T}^{3})$, and thus by
the classical local well-posedness theory plus propagation of regularity (see
Foias, Frisch, and Temam [38]), there exists $T_{0}>0$ and unique strong
solutions $v^{(1)}\in C^{\infty}((-T_{0},T_{0})\times\mathbb{T}^{3})$ and
$v^{(2)}\in C^{\infty}((T-T_{0},T+T_{0})\times\mathbb{T}^{3})$ of the 3D Euler
system (1.1), such that $v^{(1)}(x,0)=v_{0}^{(1)}(x)$ and
$v^{(2)}(x,T)=v_{0}^{(2)}(x)$. Without loss of generality, we may take
$T_{0}\leq\nicefrac{{T}}{{2}}$.
Next, let $\varphi\colon[0,T]\to[0,1]$ be a non-increasing $C^{\infty}$ smooth
function such that $\varphi\equiv 1$ on $[0,\nicefrac{{T_{0}}}{{2}}]$ and
$\varphi\equiv 0$ on $[T_{0},T]$. Define the $C^{\infty}$-smooth function
$\displaystyle v_{0}(x,t):=\varphi(t)v^{(1)}(x,t)+\varphi(T-t)v^{(2)}(x,t)\,.$
(3.30)
On $[0,T]$, $v_{0}$ solves the Euler-Reynolds system (3.1) for a suitable zero
mean scalar pressure $p_{0}$, with the $C^{\infty}$-smooth stress
$\mathring{R}_{0}$ defined by
$\displaystyle\mathring{R}_{0}(x,t)$
$\displaystyle:=(\partial_{t}\varphi)(t)\mathcal{R}v^{(1)}(x,t)-(\partial_{t}\varphi)(T-t)\mathcal{R}v^{(2)}(x,t)$
$\displaystyle\quad+\varphi(t)(\varphi(t)-1)(v^{(1)}\mathring{\otimes}v^{(1)})(x,t)+\varphi(T-t)(\varphi(T-t)-1)(v^{(2)}\mathring{\otimes}v^{(2)})(x,t)\,,$
(3.31)
where $\mathcal{R}$ is the classical nonlocal inverse-divergence operator (see
(A.100) for the definition). From the above definition and the fact that
$\varphi\equiv 1$ on $[0,\nicefrac{{T_{0}}}{{2}}]$, we deduce that
$\displaystyle\mathrm{supp\,}_{t}(\mathring{R}_{0})\subset[\nicefrac{{T_{0}}}{{2}},T-\nicefrac{{T_{0}}}{{2}}]\,.$
(3.32)
This fact will be needed towards the end of the proof.
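To see where (3.31) comes from, here is a sketch (assuming $v^{(1)},v^{(2)}$ solve (1.1) with smooth pressures $p^{(1)},p^{(2)}$, and using that $\varphi(t)$ and $\varphi(T-t)$ have disjoint supports because $T_{0}\leq\nicefrac{{T}}{{2}}$, so no cross terms appear in $v_{0}\otimes v_{0}$):

```latex
\begin{aligned}
\partial_t v_0 + \operatorname{div}(v_0\otimes v_0)
&= (\partial_t\varphi)(t)\,v^{(1)} - (\partial_t\varphi)(T-t)\,v^{(2)}
   + \varphi(t)\,\partial_t v^{(1)} + \varphi(T-t)\,\partial_t v^{(2)} \\
&\qquad + \varphi(t)^2\operatorname{div}\bigl(v^{(1)}\otimes v^{(1)}\bigr)
        + \varphi(T-t)^2\operatorname{div}\bigl(v^{(2)}\otimes v^{(2)}\bigr) \\
&= (\partial_t\varphi)(t)\,v^{(1)} - (\partial_t\varphi)(T-t)\,v^{(2)}
   + \varphi(t)\bigl(\varphi(t)-1\bigr)\operatorname{div}\bigl(v^{(1)}\otimes v^{(1)}\bigr) \\
&\qquad + \varphi(T-t)\bigl(\varphi(T-t)-1\bigr)\operatorname{div}\bigl(v^{(2)}\otimes v^{(2)}\bigr)
   - \nabla\bigl(\varphi(t)\,p^{(1)} + \varphi(T-t)\,p^{(2)}\bigr)\,,
\end{aligned}
```

where in the second step we used $\partial_t v^{(i)}+\operatorname{div}(v^{(i)}\otimes v^{(i)})=-\nabla p^{(i)}$ and $\varphi^{2}=\varphi+\varphi(\varphi-1)$. Applying $\mathcal{R}$ to the zero-mean terms $(\partial_t\varphi)v^{(i)}$, replacing $\otimes$ by $\mathring{\otimes}$, and absorbing the resulting trace parts into the pressure yields (3.31) with a suitable zero mean $p_{0}$.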
For consistency of notation, we also define $v_{-1}=v_{\ell_{-1}}=u_{-1}=0$,
so that $v_{0}=w_{0}$ holds by (3.3). For the velocity cutoffs, we let
$\psi_{0,-1}=1$ and $\psi_{i,-1}=0$ for all $i\geq 1$. It is then immediate to
check that the $\\{\psi_{i,-1}\\}_{i\geq 0}$ satisfy the inductive assumptions
(3.16)–(3.21), for $q^{\prime}=-1$, with the derivative bounds (3.19) and
(3.20) being empty statements for $K+M\geq 1$ and for $N+K+M\geq 1$,
respectively. Moreover, the bounds (3.12) and (3.22)–(3.25b) hold for $q^{\prime}=-1$
since the left side of these inequalities vanishes identically. Lastly, the
assumption (3.14) is empty since there is no $\mathring{R}_{-1}$ stress to
speak of.
It thus remains to verify that the pair $(v_{0},\mathring{R}_{0})$ defined in
(3.30)–(3.31) satisfies the estimates (3.13) and (3.15), where by the above
choices we have $D_{t,-1}=\partial_{t}$. Note that the parameter
$\mathsf{N}_{\textnormal{ind,v}}$ was already chosen; thus, we have that
$\displaystyle C_{\mathrm{datum}}$ $\displaystyle:=\max_{0\leq n,m\leq
7\mathsf{N}_{\textnormal{ind,v}}}\left\|D^{n}\partial_{t}^{m}v_{0}\right\|_{L^{\infty}(0,T;L^{2}(\mathbb{T}^{3}))}+\max_{0\leq
n,m\leq
3\mathsf{N}_{\textnormal{ind,v}}}\left\|D^{n}\partial_{t}^{m}\mathring{R}_{0}\right\|_{L^{\infty}(0,T;L^{1}(\mathbb{T}^{3}))}<\infty\,.$
(3.33)
Note that $C_{\mathrm{datum}}$ only depends on $v_{\mathrm{start}}$,
$v_{\mathrm{end}}$, the cutoff frequency $R>0$, the choice of the cutoff
function $\varphi$, on $T>0$, and on the parameter
$\mathsf{N}_{\textnormal{ind,v}}$. In particular, $C_{\mathrm{datum}}$ does
not depend on the parameter $a$, which is the base of the exponential defining
$\lambda_{q}$ in (3.10). Defining
$\tau_{-1}=\Gamma_{0}^{-1}=\lambda_{0}^{-\varepsilon_{\Gamma}}$ and
$\widetilde{\tau}_{-1}=\Gamma_{0}^{-3}=\lambda_{0}^{-3\varepsilon_{\Gamma}}$
(these parameters are never used again), and using that $\lambda_{0}\geq a\geq
a_{*}\geq 1$, we thus have that (3.13) and (3.15) hold if we ensure that
$\displaystyle
C_{\mathrm{datum}}\leq\Gamma_{0}^{-1}\delta_{0}^{\nicefrac{{1}}{{2}}}\qquad\mbox{and}\qquad
C_{\mathrm{datum}}\leq\Gamma_{0}^{{-\mathsf{C_{R}}}}\delta_{1}\,.$ (3.34)
Using that $\varepsilon_{\Gamma}$ is sufficiently small with respect to
$\beta$ and $b$, we have that
$\Gamma_{0}^{-1}\delta_{0}^{\nicefrac{{1}}{{2}}}=\lambda_{0}^{-\varepsilon_{\Gamma}}\lambda_{1}^{(b+1)\beta/2}\lambda_{0}^{-\beta}\geq(\lambda_{1}\lambda_{0}^{-1})^{(b+1)\beta/2}\geq(a^{b-1}/2)^{\beta}$.
Also, by using that $\varepsilon_{\Gamma}$ is chosen to be sufficiently small
with respect to $\beta$ and $b$, we have that
$\Gamma_{0}^{{-\mathsf{C_{R}}}}\delta_{1}=\lambda_{0}^{(4b+1)\varepsilon_{\Gamma}}\lambda_{1}^{(b-1)\beta}\geq(\lambda_{1}\lambda_{0}^{-1})^{(b-1)\beta}\geq(a^{b-1}/2)^{(b-1)\beta}$.
Thus, if in addition to the requirement $a\geq a_{*}$, as specified by item (xiii) in Section
9.1, we choose $a$ to be sufficiently large in terms of $\beta,b$
and the constant $C_{\mathrm{datum}}$ from (3.33), in order to ensure that
$a^{(b-1)^{2}\beta}\geq 4C_{\mathrm{datum}}\,,$
then the condition (3.34) is satisfied. We make this choice of $a$, and thus
all the estimates claimed in Sections 3.2.1–3.2.4 hold true for the base step
in the induction, the case $q=0$.
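As a sanity check on this arithmetic (a sketch, using only $1<b<\nicefrac{{3}}{{2}}$, $0<\beta<\nicefrac{{1}}{{2}}$, $a\geq 1$, and $a^{(b-1)^{2}\beta}\geq 4C_{\mathrm{datum}}$):

```latex
\begin{aligned}
\bigl(a^{b-1}/2\bigr)^{\beta}
 &= a^{(b-1)\beta}\,2^{-\beta}
 \;\geq\; a^{(b-1)^{2}\beta}\,2^{-1}
 \;\geq\; 2\,C_{\mathrm{datum}} \,, \\
\bigl(a^{b-1}/2\bigr)^{(b-1)\beta}
 &= a^{(b-1)^{2}\beta}\,2^{-(b-1)\beta}
 \;\geq\; 4\,C_{\mathrm{datum}}\cdot 2^{-1}
 \;=\; 2\,C_{\mathrm{datum}} \,,
\end{aligned}
```

since $(b-1)^{2}\beta\leq(b-1)\beta$ (as $b-1<1$) and both $2^{\beta}$ and $2^{(b-1)\beta}$ are at most $2$. Combined with the displayed lower bounds for $\Gamma_{0}^{-1}\delta_{0}^{\nicefrac{{1}}{{2}}}$ and $\Gamma_{0}^{-\mathsf{C_{R}}}\delta_{1}$, this gives (3.34).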
Proceeding inductively, these estimates hold true for all $q\geq 0$. This
allows us to define a function $v\in
C^{0}(0,T;H^{\beta^{\prime}}(\mathbb{T}^{3}))$ for any $\beta^{\prime}<\beta$
via the absolutely convergent series (we may equivalently define
$v=\lim_{q\to\infty}v_{q}=\lim_{q\to\infty}w_{q}+\sum_{q^{\prime}=0}^{q-1}u_{q^{\prime}}=\sum_{q^{\prime}\geq
0}u_{q^{\prime}}$; we choose to work with (3.35) because it highlights the
dependence on $v_{0}$)
$\displaystyle v=\lim_{q\to\infty}v_{q}=v_{0}+\sum_{q\geq
0}(v_{q+1}-v_{q})=v_{0}+\sum_{q\geq
0}\left(w_{q+1}+(v_{\ell_{q}}-v_{q})\right)\,,$ (3.35)
where we recall the notation (3.2) and (3.3). Indeed, by (3.13), (3.16), and
interpolation, we have that $\left\|w_{q}\right\|_{H^{\beta^{\prime}}}\leq
2\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{\beta^{\prime}}=2\Gamma_{q}^{-1}\lambda_{1}^{\nicefrac{{(b+1)\beta}}{{2}}}\lambda_{q}^{-(\beta-\beta^{\prime})}$
which is summable for $q\geq 0$ whenever $\beta^{\prime}<\beta$. By appealing
to the bound (3.28), we furthermore obtain that
$\left\|v_{\ell_{q}}-v_{q}\right\|_{H^{\beta^{\prime}}}\lesssim\lambda_{q}^{-2}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{\beta^{\prime}}\lesssim\lambda_{1}^{\nicefrac{{(b+1)\beta}}{{2}}}\lambda_{q}^{-2-(\beta-\beta^{\prime})}$,
which is again summable over $q\geq 0$. This justifies the definition of $v$
in (3.35), and the fact that $v\in
C^{0}(0,T;H^{\beta^{\prime}}(\mathbb{T}^{3}))$ for any $\beta^{\prime}<\beta$.
Finally, we note that by additionally appealing to (3.15), which yields
$\|\mathring{R}_{q}\|_{L^{1}}\lesssim\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}\to
0$ as $q\to\infty$, in view of (3.1) the function $v$ defined in (3.35) is a
weak solution of the Euler equations on $[0,T]$.
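The limiting argument is standard; here is a sketch. For a divergence-free test field $\phi\in C^{\infty}_{c}((0,T)\times\mathbb{T}^{3};\mathbb{R}^{3})$, testing the Euler-Reynolds system (3.1) at level $q$ (the pressure term vanishes since $\operatorname{div}\phi=0$) gives

```latex
\int_{0}^{T}\!\!\int_{\mathbb{T}^{3}}
  v_{q}\cdot\partial_{t}\phi
  + \bigl(v_{q}\otimes v_{q}\bigr):\nabla\phi \;dx\,dt
 \;=\; \int_{0}^{T}\!\!\int_{\mathbb{T}^{3}} \mathring{R}_{q}:\nabla\phi \;dx\,dt \,.
```

Since $v_{q}\to v$ in $C^{0}(0,T;L^{2}(\mathbb{T}^{3}))$, the left side converges to the corresponding expression for $v$, while the right side is bounded by $\|\nabla\phi\|_{L^{\infty}}\|\mathring{R}_{q}\|_{L^{\infty}_{t}L^{1}_{x}}\to 0$; hence $v$ is a weak solution.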
In order to complete the proof, we return to (3.35) and note that due to
(3.14) (with $q=1$), the property (3.32) of $\mathring{R}_{0}$, and the fact
that
$\lambda_{0}\delta_{0}^{\nicefrac{{1}}{{2}}}=\lambda_{0}^{1-\beta}\lambda_{1}^{\nicefrac{{(b+1)\beta}}{{2}}}\geq\nicefrac{{4}}{{T_{0}}}$
(which holds upon choosing $a$ sufficiently large with respect to
$T_{0},\beta,b$), we have that $w_{1}\equiv 0$ on the set
$\left([0,\nicefrac{{T_{0}}}{{4}}]\cup[T-\nicefrac{{T_{0}}}{{4}},T]\right)\times\mathbb{T}^{3}$.
Thus, from (3.35) and the previously established bounds for $w_{q}$ (via
(3.13), (3.16)) and $v_{\ell_{q}}-v_{q}$ (via (3.28)), we have that
$\displaystyle\left\|v-v_{0}\right\|_{L^{\infty}([0,\nicefrac{{T_{0}}}{{4}}]\cup[T-\nicefrac{{T_{0}}}{{4}},T];L^{2}(\mathbb{T}^{3}))}$
$\displaystyle\leq\sum_{q\geq
2}\left\|w_{q}\right\|_{L^{\infty}([0,T];L^{2}(\mathbb{T}^{3}))}+\sum_{q\geq
0}\left\|v_{\ell_{q}}-v_{q}\right\|_{L^{\infty}([0,T];L^{2}(\mathbb{T}^{3}))}$
$\displaystyle\leq 2\lambda_{1}^{\nicefrac{{(b+1)\beta}}{{2}}}\sum_{q\geq
2}\Gamma_{q}^{-1}\lambda_{q}^{-\beta}+\lambda_{1}^{(b+1)\beta}\sum_{q\geq
0}\lambda_{q}^{-2-\beta}$ $\displaystyle\leq
4\lambda_{1}^{\nicefrac{{(b+1)\beta}}{{2}}}\Gamma_{2}^{-1}\lambda_{2}^{-\beta}+2\lambda_{1}^{(b+1)\beta}\lambda_{0}^{-2-\beta}$
$\displaystyle\leq
8\Gamma_{2}^{-1}\lambda_{1}^{\nicefrac{{(b+1)\beta}}{{2}}}\lambda_{1}^{-\beta
b}+4\lambda_{0}^{(b+1)b\beta}\lambda_{0}^{-2-\beta}$
$\displaystyle\leq\lambda_{1}^{-\nicefrac{{(b-1)\beta}}{{2}}}+4\lambda_{0}^{-\nicefrac{{1}}{{2}}}$
$\displaystyle\leq\frac{\epsilon}{2}$ (3.36)
once $a$ (and thus $\lambda_{0}$ and $\lambda_{1}$) is taken to be
sufficiently large with respect to $b,\beta$, and $\epsilon$. Here, in the
second-to-last inequality we have used that
$\beta(b^{2}+b-1)\leq\nicefrac{{3}}{{2}}$, which holds since
$\beta<\nicefrac{{1}}{{2}}$ and $b<\nicefrac{{3}}{{2}}$. Combining (3.36) with
the definition of the functions $v^{(1)}$, $v^{(2)}$, and $v_{0}$, and the
bound (3.29), we deduce that
$\left\|v(\cdot,0)-v_{\mathrm{start}}\right\|_{L^{2}(\mathbb{T}^{3})}\leq\epsilon$
and
$\left\|v(\cdot,T)-v_{\mathrm{end}}\right\|_{L^{2}(\mathbb{T}^{3})}\leq\epsilon$.
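In detail, since $\varphi(0)=1$ and $\varphi(T)=0$ give $v_{0}(\cdot,0)=v^{(1)}(\cdot,0)=v_{0}^{(1)}$, the first bound follows from the triangle inequality:

```latex
\left\|v(\cdot,0)-v_{\mathrm{start}}\right\|_{L^{2}(\mathbb{T}^{3})}
 \;\leq\; \left\|v(\cdot,0)-v_{0}(\cdot,0)\right\|_{L^{2}(\mathbb{T}^{3})}
    + \left\|v_{0}^{(1)}-v_{\mathrm{start}}\right\|_{L^{2}(\mathbb{T}^{3})}
 \;\leq\; \frac{\epsilon}{2}+\frac{\epsilon}{2} \;=\; \epsilon \,,
```

by (3.36) and (3.29); the bound at $t=T$ is identical, using $v_{0}(\cdot,T)=v^{(2)}(\cdot,T)=v_{0}^{(2)}$.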
This concludes the proof of Theorem 1.1, with $\beta$ being replaced by an
arbitrary $\beta^{\prime}\in(0,\beta)$.
###### Remark 3.7 (Modifications for achieving compact support in time).
The proof outlined above may be easily modified to show the existence of
infinitely many weak solutions in $C^{0}_{t}H^{\nicefrac{{1}}{{2}}-}_{x}$
which are nontrivial and have compact support in time, as mentioned in Remark
1.2. The argument is as follows. Let $\varphi(t)$ be a $C^{\infty}$ smooth
cutoff function, with $\varphi\equiv 1$ on
$[-\nicefrac{{T}}{{4}},\nicefrac{{T}}{{4}}]$ and $\varphi\equiv 0$ on
$\mathbb{R}\setminus[-\nicefrac{{T}}{{2}},\nicefrac{{T}}{{2}}]$. Then, instead
of (3.30), we define $v_{0}(x,t)=E\varphi(t)(\sin(x_{3}),0,0)$. Note
that the kinetic energy of $v_{0}$ at time $t=0$ is larger than
$E(2\pi)^{\nicefrac{{3}}{{2}}}/2\geq 2E$, and that $v_{0}$ has time-support in
$[-\nicefrac{{T}}{{2}},\nicefrac{{T}}{{2}}]$. Since $(\sin(x_{3}),0,0)$ is a
shear flow, the zero order stress $\mathring{R}_{0}$ is given by
$E\varphi^{\prime}(t)$ multiplied by a matrix whose entries are zero, except
for the $(1,3)$ and $(3,1)$ entries which equal $-\cos(x_{3})$ (see [12,
Section 5.2] for details). The point is that $\mathring{R}_{0}$ is smooth, and its time-
support lies in the interval
$\nicefrac{{T}}{{4}}\leq|t|\leq\nicefrac{{T}}{{2}}$, which plays the role of
(3.32). Using the same argument used in the proof of Theorem 1.1, we may show
that for $a$ sufficiently large, the above defined pair
$(v_{0},\mathring{R}_{0})$ satisfies the inductive assumptions at level $q=0$,
and that these inductive assumptions may be propagated to all $q\geq 0$. As in
(3.36), we deduce that the limiting weak solution $v$ has kinetic
energy at time $t=0$ which is strictly larger than $E$. The fact that
$\mathrm{supp\,}_{t}v_{0},\mathrm{supp\,}_{t}\mathring{R}_{0}\subset[-\nicefrac{{T}}{{2}},\nicefrac{{T}}{{2}}]$,
combined with the inductive assumption (3.14) and the fact that the
mollification procedure in Lemma 5.1 expands time-supports by at most a factor
of
$\widetilde{\tau}_{q-1}\ll(\lambda_{q-1}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{-1}$,
implies that the weak solution $v$ has time-support in the set
$|t|\leq\nicefrac{{T}}{{2}}+4\sum_{q\geq
0}(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}})^{-1}\leq\nicefrac{{T}}{{2}}+8\lambda_{0}^{\beta-1}$.
Choosing $a$ sufficiently large shows that
$\mathrm{supp\,}_{t}v\subset[-T,T]$.
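For completeness, the last sum may be bounded as follows (a sketch, using $\delta_{q}^{\nicefrac{{1}}{{2}}}=\lambda_{1}^{\nicefrac{{(b+1)\beta}}{{2}}}\lambda_{q}^{-\beta}\geq\lambda_{q}^{-\beta}$ and the super-exponential growth of $\lambda_{q}$):

```latex
\bigl(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}}\bigr)^{-1}
 \;=\; \lambda_{1}^{-\nicefrac{{(b+1)\beta}}{{2}}}\,\lambda_{q}^{\beta-1}
 \;\leq\; \lambda_{q}^{\beta-1}\,,
 \qquad
 4\sum_{q\geq 0}\bigl(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}}\bigr)^{-1}
 \;\leq\; 4\sum_{q\geq 0}\lambda_{q}^{\beta-1}
 \;\leq\; 8\lambda_{0}^{\beta-1}\,,
```

since $\beta<1$ and $\lambda_{q+1}^{\beta-1}\leq\frac{1}{2}\lambda_{q}^{\beta-1}$ once $a$ is large, so the series is dominated by twice its first term; choosing $a$ large enough that $8\lambda_{0}^{\beta-1}\leq\nicefrac{{T}}{{2}}$ yields $\mathrm{supp\,}_{t}v\subset[-T,T]$.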
###### Remark 3.8 (Modifications for attaining a given energy profile).
The intermittent convex integration scheme described in this paper may be
modified to show that within the regularity class
$C^{0}_{t}H^{\nicefrac{{1}}{{2}}-}_{x}$, weak solutions of 3D Euler may be
constructed to attain any given smooth energy profile, as mentioned in Remark
1.2. The main modifications required to prove this fact are as follows. As in
previous schemes (see e.g. De Lellis and Székelyhidi Jr. [31], equations (7)
and (9), or [13], equations (2.5) and (2.6), etc.) we need to measure the
distance between the energy resolved at step $q$ in the iteration, and the
desired energy profile $e(t)$. The energy pumping produced in steps $q\mapsto
q+1$ by the additions of pipe flows which comprise the velocity increments
$w_{q+1}$, and the error due to mollification, was already understood in
detail in Daneri and Székelyhidi Jr. [27] and in [11]. An additional
difficulty in this paper is due to the presence of the higher order stresses:
the energy profile would have to be inductively adjusted also throughout the
steps $n\mapsto n+1$ and $p\mapsto p+1$. The other difficulty is the presence
of the cutoff functions. This issue was however already addressed in [13], cf.
Sections 4.5 and 6, albeit for a simpler version of the cutoff functions,
which only included the stress cutoffs. With some effort, the argument in [13]
may be indeed modified to deal with the cutoff functions present in this work.
## 4 Building blocks
In Section 4.1, we specify in Propositions 4.1 and 4.3 the axes
and shifts, respectively, that will characterize our intermittent pipe flows.
A sufficiently diverse set of vector directions for the axes ensures that we
can span a neighborhood of the identity in the space of symmetric $3\times 3$
matrices using positive linear combinations of simple tensors. Proposition 4.3
crucially describes the $r^{-2}$ choices of placement afforded by the
parameter $r$, which quantifies the diameter of the pipe. Then in Proposition
4.4, we construct the intermittent pipe flows used in the rest of the paper
and specify the essential properties. Section 4.2 contains Lemma 4.7, which
studies the evolution of the axes of the pipes under flow by an incompressible
velocity field and related properties. Section 4.3 contains Proposition 4.8,
which is the placement lemma used to eliminate the Type 2 oscillation errors.
We remark that the results of this section are _only_ used in Section 8:
first to ensure the cancellation of errors in Section 8.3, and second to show
that the Type 2 errors vanish in Section 8.7.
### 4.1 A careful construction of intermittent pipe flows
We recall from [54, Lemma 1] or [27, Lemma 2.4] a version of the following
geometric decomposition:
###### Proposition 4.1 (Choosing Vectors for the Axes).
Let $B_{\nicefrac{{1}}{{2}}}(\mathrm{Id})$ denote the ball of symmetric
$3\times 3$ matrices, centered at $\mathrm{Id}$, of radius
$\nicefrac{{1}}{{2}}$. Then, there exists a finite subset
$\Xi\subset\mathbb{S}^{2}\cap{\mathbb{Q}}^{3}$ and, for every $\xi\in\Xi$,
a smooth positive function $\gamma_{\xi}\in
C^{\infty}\left(B_{\nicefrac{{1}}{{2}}}(\mathrm{Id})\right)$,
such that for each $R\in B_{\nicefrac{{1}}{{2}}}(\mathrm{Id})$ we have the
identity
$R=\sum_{\xi\in\Xi}\left(\gamma_{\xi}(R)\right)^{2}\xi\otimes\xi.$ (4.1)
Additionally, for every $\xi\in\Xi$, there exist vectors
$\xi^{(2)},\xi^{(3)}\in\mathbb{S}^{2}\cap\mathbb{Q}^{3}$ such that
$\\{\xi,\xi^{(2)},\xi^{(3)}\\}$ is an orthonormal basis of $\mathbb{R}^{3}$.
Moreover, there exists a least positive integer $n_{\ast}$ such that
$n_{\ast}\xi,n_{\ast}\xi^{(2)},n_{\ast}\xi^{(3)}\in\mathbb{Z}^{3}$ for every
$\xi\in\Xi$.
In order to adapt the proof of Proposition 4.8 to pipe flows oriented around
axes which are not parallel to the standard basis vectors $e_{1}$, $e_{2}$, or
$e_{3}$, it is helpful to consider functions which are periodic not only with
respect to $\mathbb{T}^{3}$, but also with respect to a torus for which one
face is perpendicular to the axis of the pipe (i.e., that one edge of the
torus is parallel to the axis).
Figure 8: The torus on the left, $\mathbb{T}^{3}$, has axes parallel to the
usual coordinate axes, while the torus on the right, denoted
$\mathbb{T}^{3}_{\xi}$, has been rotated and has axes parallel to a new set of
vectors $\xi$, $\xi^{(2)}$, and $\xi^{(3)}$.
###### Definition 4.2 ($\mathbb{T}_{\xi}^{3}$-periodicity).
Let $\\{\xi,\xi^{(2)},\xi^{(3)}\\}\subset\mathbb{S}^{2}\cap\mathbb{Q}^{3}$ be
an orthonormal basis for $\mathbb{R}^{3}$, and let
$f:\mathbb{R}^{3}\rightarrow\mathbb{R}^{n}$. We say that $f$ is
$\mathbb{T}_{\xi}^{3}$-periodic if for all
$(k_{1},k_{2},k_{3})\in\mathbb{Z}^{3}$ and
$(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}$,
$f\left((x_{1},x_{2},x_{3})+2\pi\left(k_{1}\xi+k_{2}\xi^{(2)}+k_{3}\xi^{(3)}\right)\right)=f(x_{1},x_{2},x_{3})$
(4.2)
and write $f:\mathbb{T}_{\xi}^{3}\rightarrow\mathbb{R}^{n}$. If
$\\{\xi,\xi^{(2)},\xi^{(3)}\\}=\\{e_{1},e_{2},e_{3}\\}$, i.e. the standard
basis for $\mathbb{R}^{3}$, we drop the subscript $\xi$ and write
$\mathbb{T}^{3}$. For sets $\mathcal{S}\subset\mathbb{R}^{3}$, we say that
$\mathcal{S}$ is $\mathbb{T}_{\xi}^{3}$-periodic if the indicator function of
$\mathcal{S}$ is $\mathbb{T}_{\xi}^{3}$-periodic. Additionally, if $L$ is a
positive number, we say that $f$ is
$\left(\frac{\mathbb{T}^{3}_{\xi}}{L}\right)$-periodic if
$f\left((x_{1},x_{2},x_{3})+\frac{2\pi}{L}\left(k_{1}\xi+k_{2}\xi^{(2)}+k_{3}\xi^{(3)}\right)\right)=f(x_{1},x_{2},x_{3})$
for all $(k_{1},k_{2},k_{3})\in\mathbb{Z}^{3}$ and
$(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}$. Note that if $L$ is a positive
integer, $\frac{\mathbb{T}_{\xi}^{3}}{L}$-periodicity implies
$\mathbb{T}_{\xi}^{3}$-periodicity.
We can now construct shifted intermittent pipe flows concentrated around axes
with a prescribed vector direction $\xi$ while imposing that each flow is
supported in a single member of a large collection of disjoint sets. For the
sake of clarity, we split the construction into two steps. First, in
Proposition 4.3 we construct the shifts and then periodize and rotate the
scalar-valued flow profiles and potentials associated to the pipe flows
$\mathbb{W}_{\xi,\lambda,r}$. The support and placement properties are ensured
at the level of the flow profile and potential. Next, we use the flow profiles
to construct the pipe flows themselves in Proposition 4.4.
###### Proposition 4.3 (Rotating, Shifting, and Periodizing).
Fix $\xi\in\Xi$, where $\Xi$ is as in Proposition 4.1. Let
${r^{-1},\lambda\in\mathbb{N}}$ be given such that $\lambda r\in\mathbb{N}$.
Let $\varkappa:\mathbb{R}^{2}\rightarrow\mathbb{R}$ be a smooth function with
support contained inside a ball of radius $\frac{1}{4}$. Then for
$k\in\\{0,...,r^{-1}-1\\}^{2}$, there exist functions
$\varkappa^{k}_{\lambda,r,\xi}:\mathbb{R}^{3}\rightarrow\mathbb{R}$ defined in
terms of $\varkappa$, satisfying the following additional properties:
1. (1)
We have that $\varkappa^{k}_{\lambda,r,\xi}$ is simultaneously
$\left(\frac{\mathbb{T}^{3}}{\lambda r}\right)$-periodic and
$\left(\frac{\mathbb{T}_{\xi}^{3}}{\lambda rn_{\ast}}\right)$-periodic.
2. (2)
Let $F_{\xi}$ be one of the two faces of the cube
$\frac{\mathbb{T}_{\xi}^{3}}{\lambda rn_{\ast}}$ which is perpendicular to
$\xi$. Let $\mathbb{G}_{\lambda,r}\subset F_{\xi}\cap 2\pi\mathbb{Q}^{3}$ be
the grid consisting of $r^{-2}$-many points spaced evenly at distance
$2\pi(\lambda n_{\ast})^{-1}$ on $F_{\xi}$ and containing the origin. Then
each grid point $g_{k}$ for $k\in\\{0,...,r^{-1}-1\\}^{2}$ satisfies
$\left(\mathrm{supp\,}\varkappa_{\lambda,r,\xi}^{k}\cap
F_{\xi}\right)\subset\left\\{x:|x-g_{k}|\leq 2\pi\left(4\lambda
n_{\ast}\right)^{-1}\right\\}.$ (4.3)
3. (3)
The support of $\varkappa_{\lambda,r,\xi}^{k}$ consists of a pipe (cylinder)
centered around a $\left(\frac{\mathbb{T}^{3}}{\lambda r}\right)$-periodic and
$\left(\frac{\mathbb{T}_{\xi}^{3}}{\lambda rn_{\ast}}\right)$-periodic line
parallel to $\xi$, which passes through the point $g_{k}$. The radius of the
cylinder’s cross-section is as given in (4.3).
4. (4)
For $k\neq k^{\prime}$,
$\mathrm{supp\,}\varkappa_{\lambda,r,\xi}^{k}\cap\mathrm{supp\,}\varkappa_{\lambda,r,\xi}^{k^{\prime}}=\emptyset$.
Figure 9: We have pictured above a grid on the front face of $\mathbb{T}^{3}$
in which there are $4^{2}=(\lambda r)^{2}$ many periodic cells, each with
$4^{2}=r^{-2}$ many subcells of diameter $16^{-1}=\lambda^{-1}$. The
periodized axes of the pipes are the green lines, and they have been placed in
the highlighted red squares on the front face of the torus.
###### Proof of Proposition 4.3.
For ${r^{-1}\in\mathbb{N}}$, which quantifies the rescaling, and for
$k=(k_{1},k_{2})\in\\{0,...,r^{-1}-1\\}^{2}$ which quantifies the shifts,
define $\varkappa_{r}^{k}$ to be the rescaled and shifted function
$\varkappa_{r}^{k}\left(x_{1},x_{2}\right):=\frac{1}{2\pi
r}\varkappa\left(\frac{x_{1}}{2\pi r}-k_{1},\frac{x_{2}}{2\pi
r}-k_{2}\right).$ (4.4)
Then $(x_{1},x_{2})\in\mathrm{supp\,}\varkappa_{r}^{k}$ if and only if
$\left|\frac{x_{1}}{2\pi r}-k_{1}\right|^{2}+\left|\frac{x_{2}}{2\pi
r}-k_{2}\right|^{2}\leq\frac{1}{16}.$ (4.5)
This implies that
$k_{1}-\frac{1}{4}\leq\frac{x_{1}}{2\pi r}\leq k_{1}+\frac{1}{4},\qquad
k_{2}-\frac{1}{4}\leq\frac{x_{2}}{2\pi r}\leq k_{2}+\frac{1}{4}.$ (4.6)
Since these inequalities cannot be satisfied by a single pair $(x_{1},x_{2})$ for both
$k=(k_{1},k_{2})$ and $k^{\prime}=(k_{1}^{\prime},k_{2}^{\prime})$
simultaneously when $k\neq k^{\prime}$, it follows that
$\mathrm{supp\,}\varkappa_{r}^{k}\cap\mathrm{supp\,}\varkappa_{r}^{k^{\prime}}=\emptyset$
(4.7)
for all $k\neq k^{\prime}$. Also, notice that plugging $k_{1}=0$ and
$k_{1}=r^{-1}-1$ into (4.6) shows that the set of $x_{1}$ for which there
exists $(k_{1},k_{2})$ such that $\varkappa_{r}^{k}(x)\neq 0$ is contained in
$\left\\{-\frac{r\pi}{2}\leq x_{1}\leq 2\pi-\frac{3r\pi}{2}\right\\},$
which is a set with diameter strictly less than $2\pi$. Therefore, periodizing
in $x_{1}$ will not cause overlap in the supports of the periodized objects.
Arguing similarly for $x_{2}$ and enumerating the pairs $(k_{1},k_{2})$ with
$k\in\\{0,...,r^{-1}-1\\}^{2}$, we overload notation and denote by
$\varkappa_{r}^{k}$ the $\mathbb{T}^{2}$-periodized version of
$\varkappa_{r}^{k}$. Thus we have produced $r^{-2}$-many functions which are
$\mathbb{T}^{2}$-periodic and which have disjoint supports.
Now define $\mathbb{G}_{r}\subset\mathbb{T}^{2}$ to be the grid containing
$r^{-2}$-many points evenly spaced at distance $2\pi r$ and containing the
origin. Then
$\mathbb{G}_{r}=\left\\{g_{k}^{0}:=2\pi
rk:k\in\\{0,...,r^{-1}-1\\}^{2}\right\\}\subset 2\pi\mathbb{Q}^{2}\,.$
Thus the support of each function $\varkappa_{r}^{k}$ is centered at
$g_{k}^{0}$ and contains no other grid points.
Let $\xi\in\Xi$ be fixed, with the associated orthonormal basis
$\\{\xi,\xi^{(2)},\xi^{(3)}\\}$. For $x=(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}$
and $\lambda r\in\mathbb{N}$, define
$\varkappa_{\lambda,r,\xi}^{k}(x):=\varkappa_{r}^{k}\left(n_{\ast}\lambda
rx\cdot\xi^{(2)},n_{\ast}\lambda rx\cdot\xi^{(3)}\right).$ (4.8)
Then for $(k_{1},k_{2},k_{3})\in\mathbb{Z}^{3}$,
$\displaystyle\varkappa_{\lambda,r,\xi}^{k}\left(x+\frac{2\pi}{\lambda
r}(k_{1},k_{2},k_{3})\right)$
$\displaystyle=\varkappa_{r}^{k}\left(n_{\ast}\lambda
r\Bigl{(}x+\frac{2\pi}{\lambda
r}(k_{1},k_{2},k_{3})\Bigr{)}\cdot\xi^{(2)},n_{\ast}\lambda
r\Bigl{(}x+\frac{2\pi}{\lambda
r}(k_{1},k_{2},k_{3})\Bigr{)}\cdot\xi^{(3)}\right)$
$\displaystyle=\varkappa_{r}^{k}\left(n_{\ast}\lambda
rx\cdot\xi^{(2)},n_{\ast}\lambda rx\cdot\xi^{(3)}\right)$
$\displaystyle=\varkappa_{\lambda,r,\xi}^{k}(x)$
since $n_{\ast}\xi^{(2)},n_{\ast}\xi^{(3)}\in\mathbb{Z}^{3}$ and
$\varkappa_{r}^{k}$ is $\mathbb{T}^{2}$-periodic, and thus
$\varkappa_{\lambda,r,\xi}^{k}$ is $\frac{\mathbb{T}^{3}}{\lambda
r}$-periodic. Similarly,
$\displaystyle\varkappa_{\lambda,r,\xi}^{k}\left(x+\frac{2\pi}{\lambda
rn_{\ast}}(k_{1}\xi+k_{2}\xi^{(2)}+k_{3}\xi^{(3)})\right)$
$\displaystyle=\varkappa_{r}^{k}\left(n_{\ast}\lambda
r\Bigl{(}x+\frac{2\pi}{\lambda
rn_{\ast}}(k_{1}\xi+k_{2}\xi^{(2)}+k_{3}\xi^{(3)})\Bigr{)}\cdot\xi^{(2)},n_{\ast}\lambda
r\Bigl{(}x+\frac{2\pi}{\lambda
rn_{\ast}}(k_{1}\xi+k_{2}\xi^{(2)}+k_{3}\xi^{(3)})\Bigr{)}\cdot\xi^{(3)}\right)$
$\displaystyle=\varkappa_{r}^{k}\left(n_{\ast}\lambda
rx\cdot\xi^{(2)},n_{\ast}\lambda rx\cdot\xi^{(3)}\right)$
$\displaystyle=\varkappa^{k}_{\lambda,r,\xi}(x)$
since
$2\pi(k_{1}\xi+k_{2}\xi^{(2)}+k_{3}\xi^{(3)})\cdot\xi^{(2)}=2\pi k_{2},\qquad
2\pi(k_{1}\xi+k_{2}\xi^{(2)}+k_{3}\xi^{(3)})\cdot\xi^{(3)}=2\pi k_{3}$
and $\varkappa_{r}^{k}$ is $\mathbb{T}^{2}$-periodic. Thus
$\varkappa_{\lambda,r,\xi}^{k}$ is $\frac{\mathbb{T}_{\xi}^{3}}{\lambda
rn_{\ast}}$-periodic, and as a consequence
$\frac{\mathbb{T}_{\xi}^{3}}{\lambda r}$-periodic as well. Therefore, we have
proved point 1.
To prove point 2, define
$\mathbb{G}_{\lambda,r}=\left\\{g_{k}:=2\pi k_{1}\left(\lambda
n_{\ast}\right)^{-1}\xi^{(2)}+2\pi k_{2}\left(\lambda
n_{\ast}\right)^{-1}\xi^{(3)}:k_{1},k_{2}\in\\{0,...,r^{-1}-1\\}\right\\}\,.$
(4.9)
We claim that $\varkappa_{\lambda,r,\xi}^{k}|_{F_{\xi}}$ is supported in a
$2\pi(4\lambda n_{\ast})^{-1}$-neighborhood of $g_{k}$. To prove the claim,
let $x\in F_{\xi}$ be such that $\varkappa_{\lambda,r,\xi}^{k}(x)\neq 0$. Then
since
$\displaystyle\varkappa_{\lambda,r,\xi}^{k}(x)$
$\displaystyle=\varkappa_{r}^{k}\left(n_{\ast}\lambda
rx\cdot\xi^{(2)},n_{\ast}\lambda rx\cdot\xi^{(3)}\right),$
we can use (4.5) to assert that
$x\in\mathrm{supp\,}\varkappa_{\lambda,r,\xi}^{k}$ if and only if
$x=(x_{1},x_{2},x_{3})$ satisfies
$\displaystyle\left|\frac{n_{\ast}\lambda rx\cdot\xi^{(2)}}{2\pi
r}-k_{1}\right|^{2}+\left|\frac{n_{\ast}\lambda rx\cdot\xi^{(3)}}{2\pi
r}-k_{2}\right|^{2}\leq\frac{1}{16}$
$\displaystyle\quad\iff\left|(x_{1},x_{2},x_{3})-\left(\frac{2\pi}{n_{\ast}\lambda}k_{1}\xi^{(2)}+\frac{2\pi}{n_{\ast}\lambda}k_{2}\xi^{(3)}\right)\right|^{2}\leq\left(\frac{2\pi}{4n_{\ast}\lambda}\right)^{2},$
which proves the claim.
Items 3 and 4 follow immediately after noting that
$\varkappa_{\lambda,r,\xi}^{k}$ is constant on every plane parallel to
$F_{\xi}$, and that the grid points $g_{k}\in\mathbb{G}_{\lambda,r}$ around
which the supports of $\varkappa_{\lambda,r,\xi}^{k}$ are centered, are spaced
at a distance which is twice the diameter of the supports. ∎
###### Proposition 4.4 (Construction and properties of shifted Intermittent
Pipe Flows).
Fix a vector $\xi$ belonging to the set of rational vectors
$\Xi\subset\mathbb{S}^{2}\cap\mathbb{Q}^{3}$ from Proposition 4.1, $r^{-1},\lambda\in\mathbb{N}$
with $\lambda r\in\mathbb{N}$, and large integers $2\mathsf{N}_{\rm fin}$ and
${\mathsf{d}}$. There exist vector fields
$\mathbb{W}^{k}_{\xi,\lambda,r}:\mathbb{T}^{3}\rightarrow\mathbb{R}^{3}$ for
$k\in\\{0,...,r^{-1}-1\\}^{2}$ and implicit constants depending on
$\mathsf{N}_{\rm fin}$ and ${\mathsf{d}}$ but not on $\lambda$ or $r$ such
that:
1. (1)
There exists $\varrho:\mathbb{R}^{2}\rightarrow\mathbb{R}$ given by the
iterated Laplacian $\Delta^{\mathsf{d}}\vartheta=:\varrho$ of a potential
$\vartheta:\mathbb{R}^{2}\rightarrow\mathbb{R}$ with compact support in a ball
of radius $\frac{1}{4}$ such that the following holds. Let
$\varrho_{\xi,\lambda,r}^{k}$ and $\vartheta_{\xi,\lambda,r}^{k}$ be defined
as in Proposition 4.3. Then there exist
$\mathbb{U}^{k}_{\xi,\lambda,r}:\mathbb{T}^{3}\rightarrow\mathbb{R}^{3}$ such
that
$\displaystyle{\mathrm{curl\,}\mathbb{U}^{k}_{\xi,\lambda,r}=\xi\lambda^{-2{\mathsf{d}}}\Delta^{\mathsf{d}}\left(\vartheta^{k}_{\xi,\lambda,r}\right)=\xi\varrho^{k}_{\xi,\lambda,r}=:\mathbb{W}^{k}_{\xi,\lambda,r}}\,.$
(4.10)
2. (2)
Each of the sets of functions $\\{\mathbb{U}_{\xi,\lambda,r}^{k}\\}_{k}$,
$\\{\varrho_{\xi,\lambda,r}^{k}\\}_{k}$,
$\\{\vartheta_{\xi,\lambda,r}^{k}\\}_{k}$, and
$\\{\mathbb{W}_{\xi,\lambda,r}^{k}\\}_{k}$ satisfy items (1)–(4) of
Proposition 4.3. In particular, when $k\neq k^{\prime}$, we have that the
intersection of the supports of $\mathbb{W}^{k}_{\xi,\lambda,r}$ and
$\mathbb{W}_{\xi,\lambda,r}^{k^{\prime}}$ is empty, and similarly for the
other sets of functions.
3. (3)
$\mathbb{W}^{k}_{\xi,\lambda,r}$ is a stationary, pressureless solution to the
Euler equations, i.e.
$\mathrm{div\,}\mathbb{W}^{k}_{\xi,\lambda,r}=0,\qquad\mathrm{div\,}\left(\mathbb{W}^{k}_{\xi,\lambda,r}\otimes\mathbb{W}^{k}_{\xi,\lambda,r}\right)=0.$
4. (4)
$\displaystyle{\frac{1}{|\mathbb{T}^{3}|}\int_{\mathbb{T}^{3}}\mathbb{W}^{k}_{\xi,\lambda,r}\otimes\mathbb{W}^{k}_{\xi,\lambda,r}=\xi\otimes\xi}$
5. (5)
For all $n\leq 2\mathsf{N}_{\rm fin}$,
${\left\|\nabla^{n}\vartheta^{k}_{\xi,\lambda,r}\right\|_{L^{p}(\mathbb{T}^{3})}\lesssim\lambda^{n}r^{\left(\frac{2}{p}-1\right)}},\qquad{\left\|\nabla^{n}\varrho^{k}_{\xi,\lambda,r}\right\|_{L^{p}(\mathbb{T}^{3})}\lesssim\lambda^{n}r^{\left(\frac{2}{p}-1\right)}}$
(4.11)
and
${\left\|\nabla^{n}\mathbb{U}^{k}_{\xi,\lambda,r}\right\|_{L^{p}(\mathbb{T}^{3})}\lesssim\lambda^{n-1}r^{\left(\frac{2}{p}-1\right)}},\qquad{\left\|\nabla^{n}\mathbb{W}^{k}_{\xi,\lambda,r}\right\|_{L^{p}(\mathbb{T}^{3})}\lesssim\lambda^{n}r^{\left(\frac{2}{p}-1\right)}}.$
(4.12)
6. (6)
Let $\Phi:\mathbb{T}^{3}\times[0,T]\rightarrow\mathbb{T}^{3}$ be the periodic
solution to the transport equation
$\displaystyle\partial_{t}\Phi+v\cdot\nabla\Phi$ $\displaystyle=0,$ (4.13a)
$\displaystyle\Phi|_{t=t_{0}}$ $\displaystyle=x\,,$ (4.13b)
with a smooth, divergence-free, periodic velocity field $v$. Then
$\nabla\Phi^{-1}\cdot\left(\mathbb{W}^{k}_{\xi,\lambda,r}\circ\Phi\right)=\mathrm{curl\,}\left(\nabla\Phi^{T}\cdot\left(\mathbb{U}^{k}_{\xi,\lambda,r}\circ\Phi\right)\right).$
(4.14)
7. (7)
For $\mathbb{P}_{[\lambda_{1},\lambda_{2}]}$ a Littlewood-Paley projector,
$\Phi$ as in (4.13), and $A=(\nabla\Phi)^{-1}$,
$\displaystyle\bigg{[}\nabla\cdot\bigg{(}A\,\mathbb{P}_{[\lambda_{1},\lambda_{2}]}\left(\mathbb{W}_{\xi,\lambda,r}\otimes\mathbb{W}_{\xi,\lambda,r}\right)(\Phi)A^{T}\bigg{)}\bigg{]}_{i}$
$\displaystyle\qquad=A_{k}^{j}\,\mathbb{P}_{[\lambda_{1},\lambda_{2}]}\left(\mathbb{W}^{k}_{\xi,\lambda,r}\mathbb{W}_{\xi,\lambda,r}^{l}\right)(\Phi)\partial_{j}A_{l}^{i}$
$\displaystyle\qquad=A_{k}^{j}\xi^{k}\xi^{l}\partial_{j}A_{l}^{i}\,\mathbb{P}_{[\lambda_{1},\lambda_{2}]}\left(\left(\varrho^{k}_{\xi,\lambda,r}\right)^{2}\right)$
(4.15)
for $i=1,2,3$.
###### Remark 4.5.
The identity (4.15) is one of the main advantages of pipe flows over Beltrami
flows. The utility of this identity is that when checking whether a pipe flow
$\mathbb{W}_{\xi,\lambda,r}$ which has been deformed by $\Phi$ is still an
approximately stationary solution of the pressureless Euler equations, one
does not need to estimate any derivatives of $\mathbb{W}_{\xi,\lambda,r}$, but
only derivatives of the flow map $\Phi$, which will cost much less than
$\lambda$.
###### Remark 4.6.
The formulation of (4.15) is useful for our inversion of the divergence
operator, which is presented in Proposition A.17 and the subsequent remark. We
refer to the statement of that proposition and the subsequent remark for
further properties related to (4.15).
###### Proof of Proposition 4.4.
With the definition
$\mathbb{W}_{\xi,\lambda,r}^{k}:=\xi\varrho_{\xi,\lambda,r}^{k}$, the equality
$\lambda^{-2{\mathsf{d}}}\Delta^{\mathsf{d}}(\vartheta_{\xi,\lambda,r}^{k})=\varrho_{\xi,\lambda,r}^{k}$
follows from the proof of Proposition 4.3, specifically equations (4.4)
and (4.8). The equality
$\mathrm{curl\,}\mathbb{U}_{\xi,\lambda,r}^{k}=\mathbb{W}_{\xi,\lambda,r}^{k}$
follows as well using the standard vector calculus identity
$\mathrm{curl\,}\mathrm{curl\,}=\nabla\mathrm{div\,}-\Delta$. Secondly,
properties (1), (2), and (4) from Proposition 4.3 for
$\vartheta_{\xi,\lambda,r}^{k}$ follow from Proposition 4.3 applied to
$\varkappa=\vartheta$. The same properties for $\varrho_{\xi,\lambda,r}^{k}$,
$\mathbb{U}_{\xi,\lambda,r}^{k}$, and $\mathbb{W}_{\xi,\lambda,r}^{k}$ follow
from differentiating. Next, it is clear that $\mathbb{W}_{\xi,\lambda,r}^{k}$
solves the pressureless Euler equations since
$\xi\cdot\nabla\varrho_{\xi,\lambda,r}^{k}=0$. The normalization in (4)
follows from imposing that
$\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}^{2}}(\Delta^{\mathsf{d}}\vartheta(x_{1},x_{2}))^{2}\,dx_{1}\,dx_{2}=1,$
recalling that orthogonal transformations, shifts, and scaling do not alter
the $L^{p}$ norms of $\mathbb{T}^{3}$-periodic functions, and using (4.4). The
estimates in (5) follow similarly using (4.4). The proof of (4.14) in (6) can
be found in the paper of Daneri and Székelyhidi Jr. [27].
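To see where the $L^{p}$ scaling in (5) comes from, here is a sketch for $n=0$ and the profile functions (the rotation in (4.8) preserves $L^{p}$ norms on the torus): by (4.4) and the change of variables $y=\frac{x}{2\pi r}-k$,

```latex
\left\|\varkappa_{r}^{k}\right\|_{L^{p}(\mathbb{T}^{2})}^{p}
 \;=\; (2\pi r)^{-p}\int \Bigl|\varkappa\Bigl(\tfrac{x_1}{2\pi r}-k_1,\tfrac{x_2}{2\pi r}-k_2\Bigr)\Bigr|^{p}\,dx
 \;=\; (2\pi r)^{2-p}\left\|\varkappa\right\|_{L^{p}(\mathbb{R}^{2})}^{p}\,,
 \qquad\text{so}\qquad
 \left\|\varkappa_{r}^{k}\right\|_{L^{p}} \;\lesssim\; r^{\frac{2}{p}-1}\,,
```

and each spatial derivative of $\varkappa^{k}_{\lambda,r,\xi}(x)=\varkappa_{r}^{k}\left(n_{\ast}\lambda rx\cdot\xi^{(2)},n_{\ast}\lambda rx\cdot\xi^{(3)}\right)$ brings a factor $n_{\ast}\lambda r\cdot(2\pi r)^{-1}\lesssim\lambda$, which accounts for the $\lambda^{n}$ in (4.11).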
The proof of (4.15) from (7) is simple and similar in spirit to (6) but
perhaps not standard, and so we will check it explicitly here. We first set
$\mathcal{P}$ to be the $\mathbb{T}^{3}$-periodic convolution kernel
associated with the projector $\mathbb{P}_{[\lambda_{1},\lambda_{2}]}$ and
write
$\displaystyle\nabla\cdot$
$\displaystyle\bigg{(}(\nabla\Phi)^{-1}\mathbb{P}_{[\lambda_{1},\lambda_{2}]}\left(\mathbb{W}_{\xi,\lambda,r}\otimes\mathbb{W}_{\xi,\lambda,r}\right)(\Phi)(\nabla\Phi)^{-T}\bigg{)}(x)$
$\displaystyle=\nabla_{x}\cdot\bigg{(}(\nabla\Phi)^{-1}(x)\left(\int_{\mathbb{T}^{3}}\mathcal{P}(y)(\mathbb{W}_{\xi,\lambda,r}\otimes\mathbb{W}_{\xi,\lambda,r})(\Phi(x-y))\,dy\right)(\nabla\Phi)^{-T}(x)\bigg{)}$
$\displaystyle=\nabla_{x}\cdot\bigg{(}\int_{\mathbb{T}^{3}}(\nabla\Phi)^{-1}(x)\mathcal{P}(y)(\mathbb{W}_{\xi,\lambda,r}\otimes\mathbb{W}_{\xi,\lambda,r})(\Phi(x-y))(\nabla\Phi)^{-T}(x)\,dy\bigg{)}$
$\displaystyle=\nabla_{x}\cdot\left(\int_{\mathbb{T}^{3}}\mathcal{P}(y)\left((\nabla\Phi)^{-1}(x)\mathbb{W}_{\xi,\lambda,r}(\Phi(x-y))\right)\otimes\left((\nabla\Phi)^{-1}(x)\mathbb{W}_{\xi,\lambda,r}(\Phi(x-y))\right)\,dy\right).$
(4.16)
Then applying (4.14), we obtain that (4.16) is equal to
$\displaystyle\int_{\mathbb{T}^{3}}\mathcal{P}(y)\left((\nabla\Phi)^{-1}(x)\mathbb{W}_{\xi,\lambda,r}(\Phi(x-y))\right)\cdot\nabla_{x}\left((\nabla\Phi)^{-1}(x)\mathbb{W}_{\xi,\lambda,r}(\Phi(x-y))\right)\,dy.$
Writing out the $i^{th}$ component of this vector and using the notation
$A=(\nabla\Phi)^{-1}$, we obtain
$\displaystyle\bigg{[}\int_{\mathbb{T}^{3}}\mathcal{P}(y)$
$\displaystyle\left(A(x)\mathbb{W}_{\xi,\lambda,r}(\Phi(x-y))\right)\cdot\nabla_{x}\left(A(x)\mathbb{W}_{\xi,\lambda,r}(\Phi(x-y))\right)\,dy\bigg{]}_{i}$
$\displaystyle=\int_{\mathbb{T}^{3}}\mathcal{P}(y)A_{k}^{j}(x)\mathbb{W}_{\xi,\lambda,r}^{k}(\Phi(x-y))A_{l}^{i}(x)\partial_{n}\mathbb{W}^{l}_{\xi,\lambda,r}(\Phi(x-y))\partial_{j}\Phi_{n}(x)\,dy$
$\displaystyle\qquad+\int_{\mathbb{T}^{3}}\mathcal{P}(y)A_{k}^{j}(x)\mathbb{W}_{\xi,\lambda,r}^{k}(\Phi(x-y))\partial_{j}A_{l}^{i}(x)\mathbb{W}^{l}_{\xi,\lambda,r}(\Phi(x-y))\,dy\,.$
(4.17)
Since the second term in (4.17) can be rewritten as
$\displaystyle\int_{\mathbb{T}^{3}}\mathcal{P}(y)A_{k}^{j}(x)$
$\displaystyle\mathbb{W}_{\xi,\lambda,r}^{k}(\Phi(x-y))\partial_{j}A_{l}^{i}(x)\mathbb{W}^{l}_{\xi,\lambda,r}(\Phi(x-y))\,dy$
$\displaystyle=A_{k}^{j}(x)\mathbb{P}_{[\lambda_{1},\lambda_{2}]}\left(\mathbb{W}^{k}_{\xi,\lambda,r}\mathbb{W}_{\xi,\lambda,r}^{l}\right)(\Phi(x))\partial_{j}A_{l}^{i}(x),$
to conclude the proof, we must show that the first term in (4.17) is equal to
$0$. Using that
$A_{k}^{j}\partial_{j}\Phi^{n}=\delta_{nk}$
and
$\mathbb{W}_{\xi,\lambda,r}^{k}\partial_{k}\mathbb{W}_{\xi,\lambda,r}^{l}=0$
for all $l$, we can simplify the first term as
$\displaystyle\int_{\mathbb{T}^{3}}\mathcal{P}(y)A_{k}^{j}(x)$
$\displaystyle\mathbb{W}_{\xi,\lambda,r}^{k}(\Phi(x-y))A_{l}^{i}(x)\partial_{n}\mathbb{W}^{l}_{\xi,\lambda,r}(\Phi(x-y))\partial_{j}\Phi_{n}(x)\,dy$
$\displaystyle=\int_{\mathbb{T}^{3}}\mathcal{P}(y)\delta_{nk}\mathbb{W}_{\xi,\lambda,r}^{k}(\Phi(x-y))A_{l}^{i}(x)\partial_{n}\mathbb{W}^{l}_{\xi,\lambda,r}(\Phi(x-y))\,dy$
$\displaystyle=\int_{\mathbb{T}^{3}}\mathcal{P}(y)\mathbb{W}_{\xi,\lambda,r}^{k}(\Phi(x-y))A_{l}^{i}(x)\partial_{k}\mathbb{W}^{l}_{\xi,\lambda,r}(\Phi(x-y))\,dy$
$\displaystyle=0,$
proving (4.15). ∎
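The vanishing used in the last step, $\mathbb{W}_{\xi,\lambda,r}^{k}\partial_{k}\mathbb{W}_{\xi,\lambda,r}^{l}=0$, is a general feature of shear-type pipe flows: a vector field pointing in a fixed direction and depending only on the coordinates perpendicular to that direction is transported trivially along itself. As a hedged numerical illustration (a toy flow $\mathbb{W}(x)=g(x_{1},x_{2})\,e_{3}$ with a hypothetical profile $g$, not the construction of Proposition 4.3 itself), the following sketch checks $(\mathbb{W}\cdot\nabla)\mathbb{W}=0$ by finite differences:

```python
import math

def W(x):
    # Toy "pipe flow": points in the e3 direction, depends only on (x1, x2).
    g = math.exp(-10.0 * ((x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2))
    return (0.0, 0.0, g)

def advective_term(x, h=1e-5):
    # (W . grad) W at x, computed by central finite differences.
    w = W(x)
    out = []
    for l in range(3):
        acc = 0.0
        for k in range(3):
            xp = list(x); xp[k] += h
            xm = list(x); xm[k] -= h
            acc += w[k] * (W(xp)[l] - W(xm)[l]) / (2 * h)
        out.append(acc)
    return out

# The advective term vanishes identically: W has only an e3 component,
# and W is independent of x3, so W^k d_k W^l = W^3 d_3 W^l = 0.
pts = [(0.3, 0.4, 0.1), (0.55, 0.5, 0.9), (0.7, 0.2, 0.4)]
residual = max(abs(c) for p in pts for c in advective_term(p))
print(residual)  # 0.0
```

The same cancellation is what makes the first term in (4.17) vanish after the Jacobian factors collapse to $\delta_{nk}$.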
### 4.2 Deformed pipe flows and curved axes
###### Lemma 4.7 (Control on Axes, Support, and Spacing).
Consider a convex neighborhood of space $\Omega\subset\mathbb{T}^{3}$. Let $v$
be an incompressible velocity field, and define the flow $X(x,t)$ by
$\displaystyle\partial_{t}X(x,t)$ $\displaystyle=v\left(X(x,t),t\right)$
(4.18a) $\displaystyle X\big|_{t=t_{0}}$ $\displaystyle=x\,,$ (4.18b)
and its inverse $\Phi(x,t)=X^{-1}(x,t)$ by
$\displaystyle\partial_{t}\Phi+v\cdot\nabla\Phi$ $\displaystyle=0$ (4.19a)
$\displaystyle\Phi\big|_{t=t_{0}}$ $\displaystyle=x\,.$ (4.19b)
Define $\Omega(t):=\\{x\in\mathbb{T}^{3}:\Phi(x,t)\in\Omega\\}=X(\Omega,t)$.
For an arbitrary $C>0$, let $\tau>0$ be a parameter such that
$\tau\leq\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{C+2}\right)^{-1}\,.$
(4.20)
Furthermore, suppose that the vector field $v$ satisfies the Lipschitz
bound\footnote{The implicit constant in this inequality is assumed to be
independent of $q$, cf. (6.60).}
$\sup_{t\in[t_{0}-\tau,t_{0}+\tau]}\left\|\nabla
v(\cdot,t)\right\|_{L^{\infty}(\Omega(t))}\lesssim\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{C}\,.$
(4.21)
Let
$\mathbb{W}^{k}_{\lambda_{q+1},r,\xi}:\mathbb{T}^{3}\rightarrow\mathbb{R}^{3}$
be a set of straight pipe flows constructed as in Proposition 4.3 and
Proposition 4.4 which are $\frac{\mathbb{T}^{3}}{\lambda_{q+1}r}$-periodic for
$\frac{\lambda_{q}}{\lambda_{q+1}}\leq r\leq 1$ and concentrated around axes
$\\{A_{i}\\}_{i\in\mathcal{I}}$ oriented in the vector direction $\xi$ for
$\xi\in\Xi$. Then
$\mathbb{W}:=\mathbb{W}^{k}_{\lambda_{q+1},r,\xi}(\Phi(x,t))$, defined for
$(x,t)\in\Omega(t)\times[t_{0}-\tau,t_{0}+\tau]$, satisfies the following
conditions:
1. (1)
We have the inequality
$\textnormal{diam}(\Omega(t))\leq\left(1+\Gamma_{q+1}^{-1}\right)\textnormal{diam}(\Omega).$
(4.22)
2. (2)
If $x$ and $y$ with $x\neq y$ belong to a particular axis
$A_{i}\subset\Omega$, then
$\frac{X(x,t)-X(y,t)}{|X(x,t)-X(y,t)|}=\frac{x-y}{|x-y|}+\delta_{i}(x,y,t)$
(4.23)
where $|\delta_{i}(x,y,t)|<\Gamma_{q+1}^{{-1}}$.
3. (3)
Let $x$ and $y$ belong to a particular axis $A_{i}\subset\Omega$. Denote the
length of the axis $A_{i}(t):=X(A_{i}\cap\Omega,t)$ in between $X(x,t)$ and
$X(y,t)$ by $L(x,y,t)$. Then
$L(x,y,t)\leq\left(1+\Gamma_{q+1}^{-1}\right)\left|x-y\right|.$ (4.24)
4. (4)
The support of $\mathbb{W}$ is contained in a
$\displaystyle\left(1+\Gamma_{q+1}^{-1}\right)\frac{2\pi}{4n_{\ast}\lambda_{q+1}}$-neighborhood
of
$\bigcup_{i}A_{i}(t).$ (4.25)
5. (5)
$\mathbb{W}$ is “approximately periodic” in the sense that for distinct axes
$A_{i},A_{j}$ with $i\neq j$ and
$\mathrm{dist\,}(A_{i}\cap\Omega,A_{j}\cap\Omega)=d$,
$\left(1-\Gamma_{q+1}^{-1}\right)d\leq\mathrm{dist\,}\left(A_{i}(t),A_{j}(t)\right)\leq\left(1+\Gamma_{q+1}^{-1}\right)d.$
(4.26)
###### Proof of Lemma 4.7.
First, we have that for $x,y\in\Omega$,
$\displaystyle\left|X(x,t)-X(y,t)\right|$
$\displaystyle=\left|x-y+\int_{t_{0}}^{t}\partial_{s}X(x,s)-\partial_{s}X(y,s)\,ds\right|$
$\displaystyle\leq|x-y|+\int_{t_{0}}^{t}\left|v\left(X(x,s),s\right)-v\left(X(y,s),s\right)\right|\,ds.$
Furthermore,
$\displaystyle\left|v^{\ell}\left(X(x,s),s\right)-v^{\ell}\left(X(y,s),s\right)\right|$
$\displaystyle=\left|\int_{0}^{1}\partial_{j}v^{\ell}\left(X(x+t(y-x),s),s\right)\partial_{k}X^{j}(x+t(y-x),s)(y-x)^{k}\,dt\right|$
$\displaystyle\leq\left\|\nabla v\right\|_{L^{\infty}(\Omega(s))}\left\|\nabla
X\right\|_{L^{\infty}(\Omega(s))}|x-y|$
$\displaystyle\leq\frac{3}{2}\delta_{q}^{\frac{1}{2}}\lambda_{q}\Gamma_{q+1}^{C}|x-y|\,.$
Integrating this bound from $t_{0}$ to $t$ and using a factor of
$\Gamma_{q+1}$ to absorb the constant, we deduce that
$\left(1-\Gamma_{q+1}^{-1}\right)|x-y|\leq\left|X(x,t)-X(y,t)\right|\leq\left(1+\Gamma_{q+1}^{-1}\right)|x-y|.$
(4.27)
The inequality in (4.22) follows immediately.
To prove (4.23), we will show that for $x,y\in\Omega\cap A_{i}$ for a chosen
axis $A_{i}$,
$\left|\frac{x-y}{|x-y|}-\frac{X(x,t)-X(y,t)}{|X(x,t)-X(y,t)|}\right|<\Gamma_{q+1}^{-1}.$
At time $t_{0}$, the above quantity vanishes. Differentiating inside the
absolute value in time, we have that
$\displaystyle\frac{d}{dt}\left[\frac{X(x,t)-X(y,t)}{|X(x,t)-X(y,t)|}\right]$
$\displaystyle=\frac{\partial_{t}X(x,t)-\partial_{t}X(y,t)}{|X(x,t)-X(y,t)|}-\frac{X(x,t)-X(y,t)}{|X(x,t)-X(y,t)|^{2}}\frac{\left(\partial_{t}X(x,t)-\partial_{t}X(y,t)\right)\cdot\left(X(x,t)-X(y,t)\right)}{|X(x,t)-X(y,t)|}$
$\displaystyle=\frac{v(X(x,t),t)-v(X(y,t),t)}{|X(x,t)-X(y,t)|}-\frac{X(x,t)-X(y,t)}{|X(x,t)-X(y,t)|}\frac{\left(v(X(x,t),t)-v(X(y,t),t)\right)\cdot\left(X(x,t)-X(y,t)\right)}{|X(x,t)-X(y,t)|^{2}}\,.$
Utilizing the mean value theorem and the Lipschitz bound on $v$ and (4.27), we
deduce
$\displaystyle\left|\frac{v(X(x,t),t)-v(X(y,t),t)}{|X(x,t)-X(y,t)|}-\frac{X(x,t)-X(y,t)}{|X(x,t)-X(y,t)|}\frac{\left(v(X(x,t),t)-v(X(y,t),t)\right)\cdot\left(X(x,t)-X(y,t)\right)}{|X(x,t)-X(y,t)|^{2}}\right|$
$\displaystyle\leq 2\left\|\nabla v\right\|_{L^{\infty}}$ $\displaystyle\leq
2\delta_{q}^{\frac{1}{2}}\lambda_{q}\Gamma_{q+1}^{C}.$
Integrating in time from $t_{0}$ to $t$ for
$|t-t_{0}|\leq\left(\delta_{q}^{\frac{1}{2}}\lambda_{q}\Gamma_{q+1}^{C+2}\right)^{-1}$
and using the extra factors of $\Gamma_{q+1}$ to again kill the constants, we
obtain (4.23).
To prove (4.24), we parametrize the curve using $X$ to obtain
$\displaystyle L(x,y,t)=\int_{0}^{1}\left|\nabla
X(x+r(y-x),t)\cdot(x-y)\right|\,dr\leq\left(1+\Gamma_{q+1}^{-1}\right)|x-y|.$
The claims in (4.25) and (4.26) follow immediately from (4.27) and (4.3). ∎
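The distortion estimate (4.27) is the quantitative heart of the lemma: over a time span $\tau\lesssim(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{C+2})^{-1}$, the flow of a field with Lipschitz norm $\lesssim\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{C}$ changes relative distances by at most a factor $1\pm\Gamma_{q+1}^{-1}$. A minimal numerical sketch (with a hypothetical smooth divergence-free field and placeholder values standing in for $\|\nabla v\|_{L^{\infty}}\cdot\tau\leq\Gamma_{q+1}^{-2}$) illustrates the mechanism:

```python
import math

GAMMA = 10.0                  # placeholder for Gamma_{q+1}
LIP = 1.0                     # placeholder Lipschitz norm of v
TAU = 1.0 / (LIP * GAMMA**2)  # time span, so that LIP * TAU = GAMMA^{-2}

def v(p):
    # A hypothetical divergence-free velocity field with |grad v| <= LIP.
    x, y = p
    return (LIP * math.sin(y), LIP * math.sin(x))

def flow(p, t, steps=1000):
    # RK4 integration of dX/dt = v(X), X(0) = p.
    h = t / steps
    x = list(p)
    for _ in range(steps):
        k1 = v(x)
        k2 = v([x[i] + 0.5 * h * k1[i] for i in range(2)])
        k3 = v([x[i] + 0.5 * h * k2[i] for i in range(2)])
        k4 = v([x[i] + h * k3[i] for i in range(2)])
        x = [x[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
    return x

a, b = (0.1, 0.2), (0.4, 0.7)
Xa, Xb = flow(a, TAU), flow(b, TAU)
ratio = math.dist(Xa, Xb) / math.dist(a, b)
# Analogue of (4.27): distances distorted by at most a factor 1 +- GAMMA^{-1}.
assert 1 - 1 / GAMMA <= ratio <= 1 + 1 / GAMMA
print(round(ratio, 4))
```

Since the distortion factor is bounded by $e^{\pm\|\nabla v\|_{L^{\infty}}|t-t_{0}|}=e^{\pm\Gamma^{-2}}$, the computed ratio stays well inside the $(1\pm\Gamma^{-1})$ window, mirroring how the extra factor of $\Gamma_{q+1}$ absorbs the constants in the proof.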
### 4.3 Placements via relative intermittency
We now state and prove the main proposition regarding the placement of a new
set of intermittent pipe flows which do not intersect with previously placed
and possibly deformed pipes _within_ a subset $\Omega$ of the full torus
$\mathbb{T}^{3}$. We do not claim that intersections do not occur outside of
$\Omega$. In applications, $\Omega$ will be the support of a cutoff
function.\footnote{Technically, $\Omega$ will be a set slightly larger than the
support of a cutoff function. See (8.115), (8.118), and (8.129).} We state the
proposition for new pipes periodized to spatial scale
$\left(\lambda_{q+1}r_{2}\right)^{-1}$ with axes parallel to a direction
vector $\xi\in\Xi$. By “relative intermittency,” we mean the inequality (4.31)
satisfied by $r_{1}$ and $r_{2}$. The proof proceeds, first in the case
$\xi=e_{3}$, by an elementary but rather tedious counting argument for the
number of cells in a two-dimensional grid which may intersect a set
concentrated around a smooth curve. In applications, these correspond to a
piece of a periodic pipe flow concentrated around its deformed axis and then
projected onto a plane. Then using (1) and (2) from Proposition 4.3, we
describe the minor adjustments needed to obtain the same result for new pipes
with axes parallel to arbitrary direction vectors $\xi\in\Xi$.
###### Proposition 4.8 (Placing straight pipes which avoid bent pipes).
Consider a neighborhood of space $\Omega\subset\mathbb{T}^{3}$ such that
$\textnormal{diam}(\Omega)\leq 16(\lambda_{q+1}r_{1})^{-1},$ (4.28)
where $\nicefrac{{\lambda_{q}}}{{\lambda_{q+1}}}\leq r_{1}\leq 1$. Assume that
there exist smooth $\mathbb{T}^{3}$-periodic curves
$\\{A_{n}\\}_{n=1}^{N_{\Omega}}\subset\Omega$\footnote{That is, the range of
each curve is contained in $\Omega$; otherwise replace the curves with
$A_{n}\cap\Omega$.} and $\mathbb{T}^{3}$-periodic sets
$\\{S_{n}\\}_{n=1}^{N_{\Omega}}\subset\Omega$ satisfying the following
properties:
1. (1)
There exists a positive constant $\mathcal{C}_{A}$ and a parameter $r_{2}$,
with $r_{1}<r_{2}<1$ such that
$N_{\Omega}\leq\mathcal{C}_{A}r_{2}^{2}r_{1}^{-2}\,.$ (4.29)
2. (2)
For any $x,x^{\prime}\in A_{n}$, let the length of the curve $A_{n}$ which
lies between $x$ and $x^{\prime}$ be denoted by $L_{n,x,x^{\prime}}$. Then,
for every $1\leq n\leq N_{\Omega}$ we have
$\displaystyle L_{n,x,x^{\prime}}\leq 2\left|x-x^{\prime}\right|\,.$ (4.30)
3. (3)
For every $1\leq n\leq N_{\Omega}$, we have that $S_{n}$ is contained in a
$2\pi(1+\Gamma_{q+1}^{-1})\left(4n_{*}\lambda_{q+1}\right)^{-1}$-neighborhood
of $A_{n}$.
Then, there exists a geometric constant $C_{*}\geq 1$ such that if
$\displaystyle C_{*}\mathcal{C}_{A}r_{2}^{4}\leq r_{1}^{3},$ (4.31)
then, for any $\xi\in\Xi$ (recall the set $\Xi$ from Proposition 4.1), we can
find a set of pipe flows
$\mathbb{W}^{k_{0}}_{\lambda_{q+1},r_{2},\xi}\colon\mathbb{T}^{3}\to\mathbb{R}^{3}$
which are $\frac{\mathbb{T}^{3}}{\lambda_{q+1}r_{2}}$-periodic, concentrated
to width $\frac{2\pi}{4\lambda_{q+1}n_{*}}$ around axes with vector direction
$\xi$, satisfy the properties listed in Proposition 4.4, and for all
$n\in\left\\{1,...,N_{\Omega}\right\\}$,
$\displaystyle\mathrm{supp\,}\mathbb{W}^{k_{0}}_{\lambda_{q+1},r_{2},\xi}\cap
S_{n}=\emptyset.$ (4.32)
###### Remark 4.9.
As mentioned previously, the sets $S_{n}$ will be supports of previously
placed pipes oriented around deformed axes $A_{n}$. The properties of $S_{n}$
and $A_{n}$ will follow from Lemma 4.7.
###### Proof of Proposition 4.8.
For simplicity, we first give the proof for $\xi=e_{3}$, and explain how to
treat the case of general $\xi\in\Xi$ at the end of the proof.
The proof will proceed by measuring the size of the shadows of the
$\\{S_{n}\\}_{n=1}^{N_{\Omega}}$ when projected onto the face of the cube
$\mathbb{T}^{3}$ which is perpendicular to $e_{3}$, so it will be helpful to
set some notation related to this projection. Let $F_{e_{3}}$ be the face of
the torus $\mathbb{T}^{3}$ which is perpendicular to $e_{3}$. For the sake of
concreteness, we will occasionally identify $F_{e_{3}}$ with the set of points
$x=(x_{1},x_{2},x_{3})\in\mathbb{T}^{3}$ such that $x_{3}=0$, or use that
$F_{e_{3}}$ is isomorphic to $\mathbb{T}^{2}$. Let $A_{n}^{p}$ be the
projection of $A_{n}$ onto $F_{e_{3}}$ defined by
$A_{n}^{p}:=\left\\{(x_{1},x_{2})\in F_{e_{3}}:(x_{1},x_{2},x_{3})\in
A_{n}\right\\},$ (4.33)
and let $S_{n}^{p}$ be defined similarly as the projection of $S_{n}$ onto
$F_{e_{3}}$. For $x=(x_{1},x_{2},x_{3})\in\mathbb{T}^{3}$ and
$x^{\prime}=(x_{1}^{\prime},x_{2}^{\prime},x_{3}^{\prime})\in\mathbb{T}^{3}$
we let $P(x)=(x_{1},x_{2})\in F_{e_{3}}$ and
$P(x^{\prime})=(x_{1}^{\prime},x_{2}^{\prime})\in F_{e_{3}}$ be the projection
of these points onto $F_{e_{3}}$. Since projections do not increase distances,
we have that
$\left|P(x)-P(x^{\prime})\right|\leq\left|x-x^{\prime}\right|.$ (4.34)
Since both $A_{n}$ and $A_{n}^{p}$ are smooth curves\footnote{Technically, the
proof still applies if $A_{n}^{p}$ is self-intersecting, but the conclusions
of Lemma 4.7 eliminate this possibility, so we shall ignore this issue and use
the word “smooth”.} and can be approximated by piecewise linear polygonal
paths, (4.34), (4.28), and (4.30) imply that if $L_{n,x,x^{\prime}}^{p}$ is
the length of the projected curve $A_{n}^{p}$ in between the points $P(x)$ and
$P(x^{\prime})$, then
$L_{n,x,x^{\prime}}^{p}\leq 2|x-x^{\prime}|\leq
32\left(\lambda_{q+1}r_{1}\right)^{-1}.$ (4.35)
In particular, taking $x$ and $x^{\prime}$ to be the endpoints of the curve
$A_{n}$, we obtain a bound for the total length of $A_{n}^{p}$. Additionally,
(4.34) and the third assumption of the lemma imply that $S_{n}^{p}$ is
contained inside a
$2\pi(1+\Gamma_{q+1}^{-1})(4n_{*}\lambda_{q+1})^{-1}$-neighborhood of
$A_{n}^{p}$. Finally, since $\mathbb{W}_{\lambda_{q+1},r_{2},e_{3}}^{k}$ is
independent of $x_{3}$ for all $k\in\\{0,...,r_{2}^{-1}-1\\}^{2}$, it is clear
that the conclusion (4.32) will be achieved if we can show that there exists a
shift $k_{0}$ such that
$S_{n}^{p}\cap\left(\mathrm{supp\,}\mathbb{W}_{\lambda_{q+1},r_{2},e_{3}}^{k_{0}}\cap\\{x_{3}=0\\}\right)=\emptyset\,,$
(4.36)
for all $1\leq n\leq N_{\Omega}$. To prove (4.36), we will apply a covering
argument to each $S_{n}^{p}$.
Let $\mathbb{S}_{\lambda_{q+1}}$ be the grid of
$(\lambda_{q+1}n_{*})^{2}$-many open squares contained in $F_{e_{3}}$, evenly
centered around a grid of $(\lambda_{q+1}n_{*})^{2}$-many points
$\mathbb{G}_{\lambda_{q+1}}$ which contains the origin. By Proposition 4.3,
for each choice of $k=(k_{1},k_{2})\in\\{0,\ldots,r_{2}^{-1}-1\\}^{2}$, the
support of the shifted pipe $\mathbb{W}_{\lambda_{q+1},r_{2},e_{3}}^{k}$
intersects $F_{e_{3}}$ in a $\frac{2\pi}{4\lambda_{q+1}n_{*}}$-neighborhood of
a finite subcollection of grid points from $\mathbb{G}_{\lambda_{q+1}}$, which
we call $\mathbb{G}_{\lambda_{q+1}}^{k}$, and which by construction is
$\frac{\mathbb{T}^{3}}{\lambda_{q+1}r_{2}n_{*}}$-periodic. Furthermore, two
subcollections for $k\neq k^{\prime}$ contain no grid points in common. Let
$\mathbb{S}^{k}_{\lambda_{q+1}}$ be the set of open squares centered around
grid points in $\mathbb{G}_{\lambda_{q+1}}^{k}$, so that
$\mathbb{S}^{k}_{\lambda_{q+1}}$ and $\mathbb{S}^{k^{\prime}}_{\lambda_{q+1}}$
are disjoint if $k\neq k^{\prime}$. To prove (4.36), we will identify a shift
$k_{0}$ such that the set of squares $\mathbb{S}_{\lambda_{q+1}}^{k_{0}}$ has
empty intersection with $S_{n}^{p}$ for all $n$. Then by Proposition 4.3, we
have that the pipe flow $\mathbb{W}_{\lambda_{q+1},r_{2},e_{3}}^{k_{0}}$
intersects $F_{e_{3}}$ inside of $\mathbb{S}_{\lambda_{q+1}}^{k_{0}}$, and so
we will have verified (4.36).
Figure 10: The boundary of the projection of $\Omega$ onto the face
$F_{e_{3}}$ is represented by the black oval. The blue grid cells represent
the elements of $\mathbb{S}_{\lambda_{q+1}}$, while the center points are the
elements of $\mathbb{G}_{\lambda_{q+1}}$. A projected pipe $S_{n}^{p}$ with
axis $A_{n}^{p}$ is represented in shades of green. A point $x_{i}\in
A_{n}^{p}$, its associated grid cell $s_{x_{i}}$, and its $3\times 3$ cluster
$S_{x_{i},9}$ are represented in pink. The union of the pink clusters,
$\cup_{i}S_{x_{i},9}$, generously covers $S_{n}^{p}$.
In order to identify a suitable shift $k_{0}$ such that
$\mathbb{S}_{\lambda_{q+1}}^{k_{0}}$ has empty intersection with $S_{n}^{p}$,
we first present a generous cover for $S_{n}^{p}$; see Figure 10. Let
$x_{1}\in A_{n}^{p}$ be arbitrary. Set
$s_{x_{1}}\in\mathbb{S}_{\lambda_{q+1}}$ to be the grid square of sidelength
$\frac{2\pi}{\lambda_{q+1}n_{*}}$ containing $x_{1}$,\footnote{If $x_{1}$ is on
the boundary of more than one square, any choice of $s_{x_{1}}$ will work.} and
let $S_{x_{1},9}$ be the $3\times 3$ cluster of squares surrounding
$s_{x_{1}}$. Then either $x_{1}$ is within distance
$\frac{2\pi}{\lambda_{q+1}n_{*}}$ of an endpoint of $A_{n}^{p}$, or the length
of $A_{n}^{p}\cap S_{x_{1},9}$ is at least $\frac{2\pi}{n_{*}\lambda_{q+1}}$.
If possible, choose $x_{2}\in A_{n}^{p}$ so that $S_{x_{2},9}$ is disjoint
from $S_{x_{1},9}$, and iteratively continue choosing $x_{i}\in A_{n}^{p}$
with $S_{x_{i},9}$ disjoint from $S_{x_{j},9}$ for $1\leq j\leq i-1$. Due to
the aforementioned observation about the lower bound on the length of
$A_{n}^{p}$ in each $S_{x_{i},9}$, after a finite number of steps, which we
denote by $i_{n}$, one cannot choose $x_{i_{n}+1}\in A_{n}^{p}$ so that
$S_{x_{i_{n}+1},9}$ is disjoint from the previous clusters; see Figure 10. By the
length constraint on $A_{n}^{p}$ and the observations on the length of
$A_{n}^{p}\cap S_{x_{i},9}$ for each $i$, we obtain the bound
$\displaystyle 32(\lambda_{q+1}r_{1})^{-1}$
$\displaystyle\geq|A_{n}^{p}|\geq(i_{n}-2)2\pi\left(n_{*}\lambda_{q+1}\right)^{-1}$
which implies that $i_{n}$ may be bounded from above as
$i_{n}\leq\frac{32r_{1}^{-1}n_{*}}{2\pi}+2\leq 6n_{*}r_{1}^{-1}+2\leq
8n_{*}r_{1}^{-1}$ (4.37)
since $r_{1}^{-1}\geq 1$. By the definition of $i_{n}$, any point $x\in
A_{n}^{p}$ which does not belong to any of the clusters
$\\{S_{x_{i},9}\\}_{i=1}^{i_{n}}$, must be such that $S_{x,9}$ has non-empty
intersection with $S_{x_{j},9}$ for some $j\leq i_{n}$. Thus, if we let
$S_{x_{j},81}$ denote the cluster of $9\times 9$ grid squares centered at $x_{j}$,
it follows that $x$ belongs to $S_{x_{j},81}$, and thus
$A_{n}^{p}\subset\cup_{i\leq i_{n}}S_{x_{i},81}$. Furthermore, since it was
observed earlier that $S_{n}^{p}$ is contained inside a
$2\pi(1+\Gamma_{q+1}^{-1})\left(4n_{*}\lambda_{q+1}\right)^{-1}$-neighborhood
of $A_{n}^{p}$, we have in addition that
$S_{n}^{p}\subset\bigcup_{i=1}^{i_{n}}S_{x_{i},81}.$
Thus, we have covered $S_{n}^{p}$ using no more than
$81i_{n}\leq 81\cdot 8n_{*}r_{1}^{-1}=648n_{*}r_{1}^{-1}$
grid squares. Set $C_{\ast}=1300n_{*}$. Repeating this argument for every
$1\leq n\leq N_{\Omega}$ and taking the union over $n$, we have thus covered
$\cup_{n\leq N_{\Omega}}S_{n}^{p}$ using no more than
$\displaystyle\frac{1}{2}C_{\ast}\mathcal{C}_{A}\cdot r_{2}^{2}r_{1}^{-2}\cdot
r_{1}^{-1}<r_{2}^{-2}$ (4.38)
grid squares of sidelength $\frac{2\pi}{\lambda_{q+1}n_{*}}$; the strict
inequality in (4.38) follows from the assumption (4.31).
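The chain of inequalities above is elementary but worth double-checking: each projected pipe is covered by at most $81i_{n}\leq 648n_{*}r_{1}^{-1}$ squares, there are at most $N_{\Omega}\leq\mathcal{C}_{A}r_{2}^{2}r_{1}^{-2}$ pipes, and the assumption (4.31) with $C_{\ast}=1300n_{*}$ turns the total into strictly fewer than $r_{2}^{-2}$ occupied squares. A short script, with illustrative placeholder values for $n_{*}$, $\mathcal{C}_{A}$, $r_{1}$ (none of which are taken from the paper), verifies this arithmetic:

```python
import math

# Illustrative placeholder parameters (not taken from the paper):
n_star = 2
C_A = 3.0
r1 = 1e-5
C_star = 1300 * n_star  # geometric constant fixed in the proof

# Choose r2 > r1 satisfying the relative intermittency condition (4.31):
# C_* C_A r2^4 <= r1^3.
r2 = 0.9 * (r1**3 / (C_star * C_A)) ** 0.25
assert r1 < r2 < 1 and C_star * C_A * r2**4 <= r1**3

# Covering count: each of the N_Omega pipes needs at most 648 n_* / r1 squares.
i_n = 32 / (2 * math.pi) * n_star / r1 + 2      # bound (4.37), pre-simplification
assert i_n <= 8 * n_star / r1                   # simplified form of (4.37)
N_Omega = C_A * r2**2 / r1**2                   # assumption (4.29)
occupied = 81 * (8 * n_star / r1) * N_Omega     # squares covering all S_n^p
assert occupied <= 0.5 * C_star * C_A * r2**2 * r1**-3  # left side of (4.38)

# Pigeonhole room: strictly fewer occupied squares than available shifts.
assert occupied < r2**-2
print("free shift guaranteed")
```

The middle assertion holds identically since $81\cdot 8=648\leq 650=\nicefrac{{1}}{{2}}\cdot 1300$, and the last one is exactly the strict inequality (4.38).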
Figure 11: We revisit Figure 10. The union of the pink clusters $S_{x_{i},9}$
covers $S_{n}^{p}$. We would like to determine which set
$\mathbb{S}_{\lambda_{q+1}}^{k_{0}}$ of
$\frac{2\pi}{\lambda_{q+1}r_{2}n_{*}}$-periodic grid cells is free (we index
these cells by the shift parameter $k_{0}$), so that we can place a
$\frac{2\pi}{\lambda_{q+1}r_{2}n_{*}}$-periodic pipe flow
$\mathbb{W}_{\lambda_{q+1},r_{2},e_{3}}^{k_{0}}$ at the centers of the cells.
This pipe flow then will not intersect the cells taken up by the union of the
pink clusters $\cup_{i}S_{x_{i},9}$. Towards this purpose, consider one of the
red periodic cells of sidelength $\frac{2\pi}{\lambda_{q+1}r_{2}n_{*}}$; e.g.
bottom row, second from left. This cell contains $r_{2}^{-2}$-many blue cells
of sidelength $\frac{2\pi}{\lambda_{q+1}n_{*}}$, which in the figure we index
by an integer $k\in\\{1,\ldots,36\\}$ (that is, $r_{2}^{-1}=6$). In order to
determine which of these blue cells are “free,” we verify for every $k$
whether a periodic copy of the $k$-cell lies in the union of the pink clusters
$\cup_{i}S_{x_{i},9}$; if yes, we label this index $k$ in black, and we also
label with the same number the cell in $\cup_{i}S_{x_{i},9}$ where this cell
appears. For instance, the cell with label $9$ appears three times within the
union of the pink clusters; the cell with label $3$ appears twice; while the
cell with label $36$ appears just once. In the above figure we discover that
there are only three “free” blue cells, corresponding to the red indices $7$,
$12$, and $20$. Any of these indices indicates a location where we may place a
new pipe flow $\mathbb{W}_{\lambda_{q+1},r_{2},e_{3}}^{k_{0}}$; in the figure
we have chosen $k_{0}$ to correspond to the label $7$, and have represented by
a $\frac{2\pi}{\lambda_{q+1}r_{2}n_{*}}$-periodic array of purple circles the
intersections of the pipes in $\mathbb{W}_{\lambda_{q+1},r_{2},e_{3}}^{k_{0}}$
with $F_{e_{3}}$.
In order to conclude the proof, we appeal to a pigeonhole argument, made
possible by the bound (4.38). Indeed, the left side of (4.38) is an upper
bound on the number of grid cells in $\mathbb{S}_{\lambda_{q+1}}$ which
are deemed “occupied” by $\cup_{n\leq N_{\Omega}}S_{n}^{p}$, while the right
side of (4.38) represents the number of possible choices for the shifts
$k_{0}\in\\{0,...,r_{2}^{-1}-1\\}^{2}$ belonging to the
$\frac{2\pi}{\lambda_{q+1}r_{2}n_{*}}$-periodic subcollection
$\mathbb{S}_{\lambda_{q+1}}^{k_{0}}$. See Figure 11 for details. We conclude
by (4.38) and the pigeonhole principle that there exists a “free” shift
$k_{0}\in\\{0,...,r_{2}^{-1}-1\\}^{2}$ such that _none_ of the squares in
$\mathbb{S}_{\lambda_{q+1}}^{k_{0}}$ intersect the covering $\cup_{i\leq
i_{n}}S_{x_{i},81}$ of $\cup_{n\leq N_{\Omega}}S_{n}^{p}$. Choosing the pipe
flow $\mathbb{W}_{\lambda_{q+1},r_{2},e_{3}}^{k_{0}}$, we have proven (4.36),
concluding the proof of the proposition when $\xi=e_{3}$.
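The pigeonhole step above can be mimicked in a toy discrete setting: on an $N\times N$ fine grid whose cells are partitioned into $M^{2}$ periodic shift classes (a cell $(i,j)$ belongs to class $(i\bmod M,\,j\bmod M)$, playing the role of the shift $k_{0}\in\\{0,...,r_{2}^{-1}-1\\}^{2}$), any occupied set touching fewer than $M^{2}$ cells must leave some class entirely free. The following sketch, with arbitrary illustrative sizes, carries this out:

```python
import random

random.seed(0)

M = 6                       # shifts per direction, playing the role of r2^{-1}
N = 10 * M                  # fine grid is N x N, with N a multiple of M
num_occupied = M * M - 1    # strictly fewer occupied cells than M^2 classes

# Randomly "occupy" fine cells (standing in for the covering of the S_n^p).
occupied = set()
while len(occupied) < num_occupied:
    occupied.add((random.randrange(N), random.randrange(N)))

# Each fine cell (i, j) belongs to the periodic shift class (i mod M, j mod M).
touched = {(i % M, j % M) for (i, j) in occupied}

# Pigeonhole: at most M^2 - 1 classes are touched, so a free shift k0 exists.
free = [(a, b) for a in range(M) for b in range(M) if (a, b) not in touched]
assert free, "pigeonhole guarantees at least one free shift"
k0 = free[0]

# Every cell in the periodic class k0 avoids all occupied cells.
assert all((i % M, j % M) != k0 for (i, j) in occupied)
print(len(free) >= 1)  # True
```

Placing the new pipes on the cells of class `k0` is the discrete analogue of choosing $\mathbb{W}_{\lambda_{q+1},r_{2},e_{3}}^{k_{0}}$ so that (4.36) holds.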
To prove the Proposition when $\xi\neq e_{3}$, first consider the
portion\footnote{Recall that $\Omega$ is a $\mathbb{T}^{3}$-periodic set but
can be considered as a subset of $\mathbb{R}^{3}$, cf. Definition 4.2.} of
$\Omega\subset\mathbb{R}^{3}$ restricted to the cube $[-\pi,\pi]^{3}$, denoted
$\Omega|_{[-\pi,\pi]^{3}}$, and consider similarly $S_{n}|_{[-\pi,\pi]^{3}}$
and $A_{n}|_{[-\pi,\pi]^{3}}$. Let $3\mathbb{T}_{\xi}^{3}$ be the $3\times
3\times 3$ cluster of periodic cells for $\mathbb{T}_{\xi}^{3}$ centered at
the origin. Then $[-\pi,\pi]^{3}$ is contained in this cluster, and in
particular $[-\pi,\pi]^{3}$ has _empty_ intersection with the boundary of
$3\mathbb{T}_{\xi}^{3}$ (understood as the boundary of the
$3\mathbb{T}_{\xi}^{3}$-periodic cell centered at the origin when simply
viewed as a subset of $\mathbb{R}^{3}$). Thus $\Omega|_{[-\pi,\pi]^{3}}$,
$S_{n}|_{[-\pi,\pi]^{3}}$, and $A_{n}|_{[-\pi,\pi]^{3}}$ also have empty
intersection with the boundary of $3\mathbb{T}_{\xi}^{3}$ and may be viewed as
$3\mathbb{T}_{\xi}^{3}$-periodic sets. Up to a dilation which replaces
$3\mathbb{T}_{\xi}^{3}$ with $\mathbb{T}_{\xi}^{3}$, we have exactly satisfied
the assumptions of the proposition, but with $\mathbb{T}^{3}$-periodicity
replaced by $\mathbb{T}_{\xi}^{3}$-periodicity. This dilation will shrink
everything by a factor of $3$, which we may compensate for by choosing a pipe
flow $\mathbb{W}_{3\lambda_{q+1},r_{2},\xi}$, and then undoing the dilation at
the end. Any constants related to this dilation are $q$-independent and may be
absorbed into the geometric constant $C_{*}$ at the end of the proof. At this
point we may then redo the proof of the proposition with minimal adjustments.
In particular, we replace the projection of $S_{n}$ and $A_{n}$ onto the face
$F_{e_{3}}$ of the box $\mathbb{T}^{3}$ with the projection of the restricted
and dilated versions of $S_{n}$ and $A_{n}$ onto the face $F_{\xi}$ of the box
$\mathbb{T}^{3}_{\xi}$. We similarly replace the grids and squares on
$F_{e_{3}}$ with grids and squares on $F_{\xi}$, exactly analogous to (4.3).
The covering argument then proceeds exactly as before. The proof produces
pipes belonging to the intermittent pipe flow
$\mathbb{W}_{3\lambda_{q+1},r_{2},\xi}^{k_{0}}$ which are
$\frac{\mathbb{T}^{3}}{3\lambda_{q+1}n_{*}r_{2}}$-periodic and disjoint from
the dilated and restricted versions of the $S_{n}$’s. Undoing the dilation, we
find that $\mathbb{W}_{\lambda_{q+1},r_{2},\xi}^{k_{0}}$ is
$\frac{\mathbb{T}^{3}}{\lambda_{q+1}r_{2}}$-periodic and disjoint from each
$S_{n}$. Then all the conclusions of Proposition 4.8 have been achieved,
finishing the proof. ∎
## 5 Mollification
Because the principal inductive assumptions for the velocity increments (3.13)
and the Reynolds stress (3.15) are only assumed to hold for a limited number
of space and material derivatives ($\leq 7\mathsf{N}_{\textnormal{ind,v}}$ and
$\leq 3\mathsf{N}_{\textnormal{ind,v}}$ respectively), and because in our
proof we need to appeal to derivative bounds of much higher orders, it is
customary to employ a mollification step prior to adding the convex
integration perturbation. This mollification step is discussed in Lemma 5.1.
Note that the mollification step is only employed once (for every inductive
step $q\mapsto q+1$), and is not repeated for the higher order stresses
$R_{q,n,p}$. In particular, Lemma 5.1 already shows that the inductive
assumption (3.12) holds for $q^{\prime}=q$.
###### Lemma 5.1 (Mollifying the Euler-Reynolds system).
Let $(v_{q},\mathring{R}_{q})$ solve the Euler-Reynolds system (3.1), and
assume that $\psi_{i,q^{\prime}},u_{q^{\prime}}$ for $q^{\prime}<q$, $w_{q}$,
and $\mathring{R}_{q}$ satisfy (3.12)–(3.25b). Then, we mollify
$(v_{q},\mathring{R}_{q})$ at spatial scale $\widetilde{\lambda}_{q}^{-1}$ and
temporal scale $\widetilde{\tau}_{q-1}$ (cf. the notation in (9.64)), and
accordingly define
$\displaystyle
v_{\ell_{q}}:=\mathcal{P}_{q,x,t}v_{q}\qquad\mbox{and}\qquad\mathring{R}_{\ell_{q}}:=\mathcal{P}_{q,x,t}\mathring{R}_{q}\,.$
(5.1)
The mollified pair $(v_{\ell_{q}},\mathring{R}_{\ell_{q}})$ satisfies
$\displaystyle\partial_{t}v_{\ell_{q}}+\mathrm{div\,}(v_{\ell_{q}}\otimes
v_{\ell_{q}})+\nabla p_{\ell_{q}}$
$\displaystyle=\mathrm{div\,}\mathring{R}_{\ell_{q}}+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}\,,$
(5.2a) $\displaystyle\mathrm{div\,}v_{\ell_{q}}$ $\displaystyle=0\,.$ (5.2b)
The commutator stress $\mathring{R}_{q}^{\textnormal{comm}}$ satisfies the
estimate (consistent with (3.15) at level $q+1$)
$\displaystyle\left\|D^{n}D_{t,q}^{m}\mathring{R}_{q}^{\textnormal{comm}}\right\|_{L^{\infty}}\leq\Gamma_{q+1}^{-1}\Gamma_{q+1}^{-\mathsf{C_{R}}}\delta_{q+2}\lambda_{q+1}^{n}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1},\Gamma_{q}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(5.3)
for all $n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$, and we have that
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}(v_{\ell_{q}}-v_{q})\right\|_{L^{\infty}}\leq\lambda_{q}^{-2}\delta_{q}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1}\Gamma_{q}^{i-1},\tilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1}\right)$
(5.4)
for all $n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$. Furthermore,
$u_{q}=v_{\ell_{q}}-v_{\ell_{q-1}}$
satisfies the bound (3.12) with $q^{\prime}$ replaced by $q$, namely
$\displaystyle\left\|\psi_{i,q-1}D^{n}D_{t,q-1}^{m}u_{q}\right\|_{L^{2}}\leq\delta_{q}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q}^{i}\tau_{q-1}^{-1},\tilde{\tau}_{q-1}^{-1}\right).$
(5.5)
for all $n+m\leq 2\mathsf{N}_{\rm fin}$. In fact, when either $n\geq
3\mathsf{N}_{\textnormal{ind,v}}$ or $m\geq 3\mathsf{N}_{\textnormal{ind,v}}$
are such that $n+m\leq 2\mathsf{N}_{\rm fin}$, then the above estimate holds
uniformly
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}u_{q}\right\|_{L^{\infty}}\leq\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1},\tilde{\tau}_{q-1}^{-1}\right).$
(5.6)
Finally, $\mathring{R}_{\ell_{q}}$ satisfies bounds which extend (3.15) to
$\displaystyle\left\|\psi_{i,q-1}D^{n}D_{t,q-1}^{m}\mathring{R}_{\ell_{q}}\right\|_{L^{1}}\lesssim\Gamma_{q}^{-\mathsf{C_{R}}}\delta_{q+1}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\tilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q}^{i+2}\tau_{q-1}^{-1},\tilde{\tau}_{q-1}^{-1}\right)$
(5.7)
for all $n+m\leq 2\mathsf{N}_{\rm fin}$. In fact, the above estimate holds
uniformly
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}\mathring{R}_{\ell_{q}}\right\|_{L^{\infty}}\lesssim\Gamma_{q}^{-1}\Gamma_{q}^{-\mathsf{C_{R}}}\delta_{q+1}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\tilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1},\tilde{\tau}_{q-1}^{-1}\right)$
(5.8)
whenever either $n\geq 3\mathsf{N}_{\textnormal{ind,v}}$ or $m\geq
3\mathsf{N}_{\textnormal{ind,v}}$ are such that $n+m\leq 2\mathsf{N}_{\rm
fin}$.
###### Remark 5.2 ($L^{\infty}$ estimates on the support of $\psi_{i,q-1}$).
The bounds (5.6) and (5.8) provide $L^{\infty}$ estimates for
$D^{n}D_{t,q-1}^{m}$ applied to $u_{q}$ and, respectively, to
$\mathring{R}_{\ell_{q}}$, but only when either $n$ or $m$ is sufficiently
large. In the remaining cases, we note that (5.5), combined with the partition
of unity property (3.16), and the inductive assumption (3.19) (with $M=0$, and
$K=4$), implies the bound
$\displaystyle\left\|D^{n}D^{m}_{t,q-1}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q-1})}\lesssim\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1}\Gamma_{q}^{i+1},\widetilde{\tau}_{q-1}^{-1}\right)$
(5.9)
for all $n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$. Indeed, we may apply Lemma
A.3 (estimate (A.18b)) with $\psi_{i}=\psi_{i,q-1}$, $f=u_{q}$,
$C_{f}=\delta_{q}^{\nicefrac{{1}}{{2}}}$,
$\rho=\lambda_{q-1}\Gamma_{q-1}\leq\lambda_{q}$ (cf. (9.38)),
$\lambda=\lambda_{q}$, $\widetilde{\lambda}=\widetilde{\lambda}_{q}$,
$\mu_{i}=\tau_{q}^{-1}\Gamma_{q}^{i}$,
$\widetilde{\mu}_{i}=\widetilde{\tau}_{q-1}^{-1}$,
$N_{x}=2\mathsf{N}_{\textnormal{ind,v}}$,
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$, and $N_{\circ}=2\mathsf{N}_{\rm
fin}$, to conclude that (5.9) holds for all $n+m\leq 2\mathsf{N}_{\rm fin}-2$,
and in particular for $n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$.
A similar argument shows that estimate (5.7) and Lemma A.3 imply
$\displaystyle\left\|D^{n}D^{m}_{t,q-1}\mathring{R}_{\ell_{q}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q-1})}\lesssim\Gamma_{q}^{-\mathsf{C_{R}}}\delta_{q+1}\widetilde{\lambda}_{q}^{3}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\tilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q}^{i+3}\tau_{q-1}^{-1},\tilde{\tau}_{q-1}^{-1}\right)$
(5.10)
for $n+m\leq 2\mathsf{N}_{\rm fin}-4$, and in particular for $n,m\leq
3\mathsf{N}_{\textnormal{ind,v}}$.
###### Proof of Lemma 5.1.
The bound (5.3) requires a different proof than (5.5) and (5.7), so we
start with the former.
Proof of (5.3). Recall that
$\displaystyle\mathring{R}_{q}^{\textnormal{comm}}=\mathcal{P}_{q,x,t}v_{q}\mathring{\otimes}\mathcal{P}_{q,x,t}v_{q}-\mathcal{P}_{q,x,t}(v_{q}\mathring{\otimes}v_{q})\,.$
(5.11)
We note, cf. (9.64), that $\mathcal{P}_{q,x,t}$ mollifies in space at length
scale $\widetilde{\lambda}_{q}^{-1}$, and in time at time scale
$\widetilde{\tau}_{q-1}$. Let us denote by $K_{q}$ the space-time
mollification kernel for $\mathcal{P}_{q,x,t}$, which thus equals the product
of the bump functions
$\phi_{\widetilde{\lambda}_{q}}^{(x)}\phi_{\widetilde{\tau}_{q-1}^{-1}}^{(t)}$.
For brevity of notation, it is convenient (locally in this proof) to denote
space-time points $(x,t),(y,s),(z,r)\in\mathbb{T}^{3}\times\mathbb{R}$ by
$\displaystyle(x,t)=\theta,\qquad(y,s)=\kappa,\qquad(z,r)=\zeta.$ (5.12)
Using this notation we may write out the commutator stress
$\mathring{R}_{q}^{\textnormal{comm}}$ explicitly, and symmetrizing the
resulting expression leads to the formula
$\displaystyle\mathring{R}_{q}^{\textnormal{comm}}(\theta)=\frac{-1}{2}\int\\!\\!\\!\int_{(\mathbb{T}^{3}\times\mathbb{R})^{2}}\left(v_{q}(\theta-\kappa)-v_{q}(\theta-\zeta)\right)\mathring{\otimes}\left(v_{q}(\theta-\kappa)-v_{q}(\theta-\zeta)\right)K_{q}(\kappa)K_{q}(\zeta)\,d\kappa\,d\zeta\,.$
(5.13)
Expanding $v_{q}$ in a Taylor series in space and time around $\theta$ yields
the formula
$\displaystyle
v_{q}(\theta-\kappa)=v_{q}(\theta)+\sum_{|\alpha|+m=1}^{{N_{\textnormal{c}}-1}}\frac{1}{\alpha!m!}D^{\alpha}\partial_{t}^{m}v_{q}(\theta)(-\kappa)^{(\alpha,m)}+R_{N_{\textnormal{c}}}(\theta,\kappa)$
(5.14)
where the remainder term with $N_{\textnormal{c}}$ derivatives is given by
$\displaystyle
R_{N_{\textnormal{c}}}(\theta,\kappa)=\sum_{|\alpha|+m=N_{\textnormal{c}}}\frac{N_{\textnormal{c}}}{\alpha!m!}(-\kappa)^{(\alpha,m)}\int_{0}^{1}(1-\eta)^{N_{\textnormal{c}}-1}D^{\alpha}\partial_{t}^{m}v_{q}(\theta-\eta\kappa)\,d\eta.$
(5.15)
The value of $N_{\textnormal{c}}$ will be chosen later so that
$\mathsf{N}_{\textnormal{ind,t}}\ll
N_{\textnormal{c}}=\mathsf{N}_{\textnormal{ind,v}}-2$, more precisely, such
that conditions (5.24) and (9.50a) hold.
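For orientation, note that in the simplest case $N_{\textnormal{c}}=1$ the sum in (5.14) is empty and the expansion (5.14)–(5.15) reduces to the fundamental theorem of calculus along the segment from $\theta-\kappa$ to $\theta$: writing $\kappa=(y,s)$,
$\displaystyle v_{q}(\theta-\kappa)=v_{q}(\theta)-\int_{0}^{1}\left(y\cdot\nabla_{x}+s\,\partial_{t}\right)v_{q}(\theta-\eta\kappa)\,d\eta\,.$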
Using that by (9.62) all moments of $K_{q}$ vanish up to order
$N_{\textnormal{c}}$, we rewrite (5.13) as
$\displaystyle\mathring{R}_{q}^{\textnormal{comm}}(\theta)$
$\displaystyle=\int_{\mathbb{T}^{3}\times\mathbb{R}}\sum_{|\alpha|+m=1}^{N_{\textnormal{c}}-1}\frac{(-\kappa)^{(\alpha,m)}}{\alpha!m!}D^{\alpha}\partial_{t}^{m}v_{q}(\theta)\,\mathring{\otimes}_{\rm
s}\,R_{N_{\textnormal{c}}}(\theta,\kappa)K_{q}(\kappa)\,d\kappa$
$\displaystyle\quad-\int_{\mathbb{T}^{3}\times\mathbb{R}}R_{N_{\textnormal{c}}}(\theta,\kappa)\mathring{\otimes}R_{N_{\textnormal{c}}}(\theta,\kappa)K_{q}(\kappa)\,d\kappa$
$\displaystyle\quad-\int\\!\\!\\!\int_{(\mathbb{T}^{3}\times\mathbb{R})^{2}}R_{N_{\textnormal{c}}}(\theta,\kappa)\mathring{\otimes}_{\mathrm{sym}}R_{N_{\textnormal{c}}}(\theta,\zeta)K_{q}(\kappa)K_{q}(\zeta)\,d\kappa\,d\zeta$
$\displaystyle=:\mathring{R}_{q,1}^{\textnormal{comm}}(\theta)+\mathring{R}_{q,2}^{\textnormal{comm}}(\theta)+\mathring{R}_{q,3}^{\textnormal{comm}}(\theta)\,,$
(5.16)
where we have used the notation (9.66).
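Let us record the basic cancellation underlying (5.16): after substituting (5.14) into (5.13), every cross term which is polynomial in both $\kappa$ and $\zeta$ factorizes under the double integral and vanishes, since
$\displaystyle\int_{(\mathbb{T}^{3}\times\mathbb{R})^{2}}(-\kappa)^{(\alpha,m_{1})}(-\zeta)^{(\alpha^{\prime},m_{2})}K_{q}(\kappa)K_{q}(\zeta)\,d\kappa\,d\zeta=\left(\int(-\kappa)^{(\alpha,m_{1})}K_{q}(\kappa)\,d\kappa\right)\left(\int(-\zeta)^{(\alpha^{\prime},m_{2})}K_{q}(\zeta)\,d\zeta\right)=0$
whenever $1\leq|\alpha|+m_{1}\leq N_{\textnormal{c}}-1$ and $1\leq|\alpha^{\prime}|+m_{2}\leq N_{\textnormal{c}}-1$, each factor being a vanishing moment of $K_{q}$. This, together with the cancellation of the zeroth order terms $v_{q}(\theta)$ in the difference $v_{q}(\theta-\kappa)-v_{q}(\theta-\zeta)$, explains why (5.16) contains no polynomial-polynomial cross terms.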
In order to prove (5.3), we first show that every term in
$D^{n}D_{t,q}^{m}\mathring{R}_{q}^{\textnormal{comm}}$ can be decomposed into
products of pure space and time differential operators applied to products of
$v_{\ell_{q}}$ and $v_{q}$. More generally, for any sufficiently smooth
function $F=F(x,t)$ and for any $n,m\geq 0$, the Leibniz rule implies that
$\displaystyle D^{n}D_{t,q}^{m}F$
$\displaystyle=D^{n}(\partial_{t}+v_{\ell_{q}}\cdot\nabla_{x})^{m}F=\sum_{\begin{subarray}{c}m^{\prime}\leq
m\\\ n^{\prime}+m^{\prime}\leq
n+m\end{subarray}}d_{n,m,n^{\prime},m^{\prime}}(x,t)D^{n^{\prime}}\partial_{t}^{m^{\prime}}F$
(5.17a) $\displaystyle d_{n,m,n^{\prime},m^{\prime}}(x,t)$
$\displaystyle=\sum_{k=0}^{m-m^{\prime}}\sum_{\begin{subarray}{c}\\{\gamma\in{\mathbb{N}}^{k}\colon|\gamma|=n-n^{\prime}+k,\\\
\beta\in{\mathbb{N}}^{k}\colon|\beta|=m-m^{\prime}-k\\}\end{subarray}}c(m,n,k,\gamma,\beta)\prod_{\ell=1}^{k}\left(D^{\gamma_{\ell}}\partial_{t}^{\beta_{\ell}}v_{\ell_{q}}(x,t)\right)$
(5.17b)
where $c(m,n,k,\gamma,\beta)$ denotes an explicitly computable combinatorial
coefficient which depends only on the factors inside the parentheses, and is
in particular independent of $q$ (which is why we do not carefully track these
coefficients). The identity (5.17a)–(5.17b) holds because $D$ and $\partial_{t}$
commute; its proof is a straightforward induction on $n$ and $m$. Clearly, if $D_{t,q}$
in (5.17a) is replaced by $D_{t,q-1}$, then the same formula holds, with the
$v_{\ell_{q}}$ factors in (5.17b) being replaced by $v_{\ell_{q-1}}$.
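As a consistency check on (5.17a)–(5.17b), consider the case $m=1$: the Leibniz rule gives
$\displaystyle D^{n}D_{t,q}F=D^{n}\partial_{t}F+D^{n}\left(v_{\ell_{q}}\cdot\nabla_{x}F\right)=D^{n}\partial_{t}F+\sum_{j=0}^{n}\binom{n}{j}\left(D^{j}v_{\ell_{q}}\right)\cdot\nabla_{x}D^{n-j}F\,,$
where the first term is the contribution of $k=0$ and $(n^{\prime},m^{\prime})=(n,1)$ in (5.17a)–(5.17b), while the term with $j$ derivatives on $v_{\ell_{q}}$ corresponds to $k=1$, $m^{\prime}=0$, $n^{\prime}=n-j+1$, so that indeed $|\gamma|=j=n-n^{\prime}+k$ and $|\beta|=0=m-m^{\prime}-k$.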
In order to prove (5.3) we consider (5.17a)–(5.17b) for $n,m\leq
3\mathsf{N}_{\textnormal{ind,v}}$ and with
$F=\mathring{R}_{q}^{\textnormal{comm}}$. In order to estimate the factors
$d_{n,m,n^{\prime},m^{\prime}}$ in (5.17b), we need to bound
$D^{n}\partial_{t}^{m}v_{q}$ for $n\leq
6\mathsf{N}_{\textnormal{ind,v}}+N_{\textnormal{c}}$ and $m\leq
3\mathsf{N}_{\textnormal{ind,v}}+N_{\textnormal{c}}$, with $n+m\leq
6\mathsf{N}_{\textnormal{ind,v}}+N_{\textnormal{c}}$. Recall that
$v_{q}=w_{q}+v_{\ell_{q-1}}$ and thus we will obtain the needed estimate from
bounds on $D^{n}\partial_{t}^{m}w_{q}$ and
$D^{n}\partial_{t}^{m}v_{\ell_{q-1}}$. We start with the latter.
We recall that $v_{\ell_{q-1}}=w_{q-1}+v_{\ell_{q-2}}$. Using (3.16) with
$q^{\prime}=q-2$ and the inductive assumption (3.13) with $q$ replaced with
$q-1$, we obtain from Sobolev interpolation that
$\left\|w_{q-1}\right\|_{L^{\infty}}\lesssim\left\|w_{q-1}\right\|_{L^{2}}^{\nicefrac{{1}}{{4}}}\left\|D^{2}w_{q-1}\right\|_{L^{2}}^{\nicefrac{{3}}{{4}}}\lesssim\delta_{q-1}^{\nicefrac{{1}}{{2}}}\lambda_{q-1}^{\nicefrac{{3}}{{2}}}$.
Additionally, combining (3.24) with $q^{\prime}=q-2$ and (3.18) with
$q^{\prime}=q-2$, we obtain
$\left\|v_{\ell_{q-2}}\right\|_{L^{\infty}}\lesssim\lambda_{q-2}^{2}\Gamma_{q-1}^{{i_{\rm
max}}+1}\delta_{q-1}^{\nicefrac{{1}}{{2}}}\lesssim\lambda_{q-2}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}}$.
Jointly, these two estimates imply
$\displaystyle\left\|v_{q-1}\right\|_{L^{\infty}}\lesssim\left\|w_{q-1}\right\|_{L^{\infty}}+\left\|v_{\ell_{q-2}}\right\|_{L^{\infty}}\lesssim\delta_{q-1}^{\nicefrac{{1}}{{2}}}\lambda_{q-1}^{4}\,.$
Now, using that $v_{\ell_{q-1}}=\mathcal{P}_{q-1,x,t}v_{q-1}$, and that the
mollifier operator $\mathcal{P}_{q-1,x,t}$ localizes at scale
$\widetilde{\lambda}_{q-1}$ in space and $\widetilde{\tau}_{q-2}^{-1}$ in
time, we deduce the global estimate
$\displaystyle\left\|D^{n}\partial_{t}^{m}v_{\ell_{q-1}}\right\|_{L^{\infty}}\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})\widetilde{\lambda}_{q-1}^{n}\widetilde{\tau}_{q-2}^{-m}$
(5.18)
for $n+m\leq 2\mathsf{N}_{\rm fin}$. Note that from the definitions (9.19) and
(9.20), it is immediate that
$\widetilde{\tau}_{q-2}^{-1}\ll\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}$.
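We record for convenience how (5.18) follows: writing $v_{\ell_{q-1}}=K_{q-1}*v_{q-1}$, where $K_{q-1}$ denotes the space-time mollification kernel of $\mathcal{P}_{q-1,x,t}$ (defined in analogy with the kernel $K_{q}$ above), all derivatives may be placed on the kernel, and Young's convolution inequality gives
$\displaystyle\left\|D^{n}\partial_{t}^{m}v_{\ell_{q-1}}\right\|_{L^{\infty}}\leq\left\|v_{q-1}\right\|_{L^{\infty}}\left\|D^{n}\partial_{t}^{m}K_{q-1}\right\|_{L^{1}}\lesssim\left\|v_{q-1}\right\|_{L^{\infty}}\widetilde{\lambda}_{q-1}^{n}\widetilde{\tau}_{q-2}^{-m}\,,$
and the prefactor $\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}}$ is supplied by the $L^{\infty}$ bound on $v_{q-1}$ established above.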
As mentioned earlier, the bound for the space-time derivatives of
$v_{\ell_{q-1}}$ needs to be combined with similar estimates for $w_{q}$ in
order to yield a control of $v_{q}$. For this purpose, we appeal to the
Sobolev embedding $H^{2}\subset L^{\infty}$ and the bound (3.13) (in which we
take a supremum over $0\leq i\leq{i_{\rm max}}$ and use (9.43)) to deduce
$\displaystyle\left\|D^{n}D^{m}_{t,q-1}w_{q}\right\|_{L^{\infty}}\lesssim\left\|D^{n}D^{m}_{t,q-1}w_{q}\right\|_{H^{2}}\lesssim(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{2})\lambda_{q}^{n}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m}$
(5.19)
for all $n\leq 7\mathsf{N}_{\textnormal{ind,v}}-2$ and $m\leq
7\mathsf{N}_{\textnormal{ind,v}}$. Using the above estimate we may apply Lemma
A.10 with the decomposition
$\partial_{t}=-v_{\ell_{q-1}}\cdot\nabla+D_{t,q-1}=A+B$, $v=-v_{\ell_{q-1}}$
and $f=w_{q}$. The condition (A.40) in Lemma A.10 holds in view of the
inductive estimate (3.24) at level $q-1$, with the following choice of
parameters: $p=\infty$, $\Omega=\mathbb{T}^{3}$,
$\mathcal{C}_{v}=\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}}$,
$N_{x}=\mathsf{N}_{\textnormal{ind,v}}-2$,
$\lambda_{v}=\Gamma_{q-1}\lambda_{q-1}$,
$\widetilde{\lambda}_{v}=\widetilde{\lambda}_{q-1}$,
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$,
$\mu_{v}=\lambda_{q-1}^{2}\tau_{q-1}^{-1}$,
$\widetilde{\mu}_{v}=\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}$, and
$N_{*}=\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$. On the other hand, using
(5.19) we have that condition (A.41) holds with the parameters: $p=\infty$,
$\Omega=\mathbb{T}^{3}$,
$\mathcal{C}_{f}=\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{2}$,
$\lambda_{f}=\widetilde{\lambda}_{f}=\lambda_{q}$,
$\mu_{f}=\widetilde{\mu}_{f}=\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}$, and
$N_{*}=7\mathsf{N}_{\textnormal{ind,v}}-2$. We deduce from (A.44) and the
inequalities $\widetilde{\lambda}_{q-1}\leq\lambda_{q}$ and
$\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}}\lambda_{q}\leq\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}$
(cf. (9.39), (9.43), and (9.20)), that
$\displaystyle\left\|D^{n}\partial_{t}^{m}w_{q}\right\|_{L^{\infty}}\lesssim(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{2})\lambda_{q}^{n}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m}$
(5.20)
holds for $n+m\leq 7\mathsf{N}_{\textnormal{ind,v}}-2$.
By combining (5.18) and (5.20) with the definition (3.3) we thus deduce
$\displaystyle\left\|D^{n}\partial_{t}^{m}v_{q}\right\|_{L^{\infty}}\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})\lambda_{q}^{n}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m}$
(5.21)
for all $n+m\leq 7\mathsf{N}_{\textnormal{ind,v}}-2$, where we have used that
$\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}}\geq\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{2}$
and that
$\widetilde{\tau}_{q-2}^{-1}\leq\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}$.
By the definition of $v_{\ell_{q}}$ in (5.1) we thus also deduce that
$\displaystyle\left\|D^{n}\partial_{t}^{m}v_{\ell_{q}}\right\|_{L^{\infty}}\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})\lambda_{q}^{n}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m}$
(5.22)
for all $n+m\leq 7\mathsf{N}_{\textnormal{ind,v}}-2$. Note that by the
definition of the mollifier operator $\mathcal{P}_{q,x,t}$, any further space
derivative on $v_{\ell_{q}}$ costs a factor of $\widetilde{\lambda}_{q}$,
while each additional temporal derivative costs a factor of
$\widetilde{\tau}_{q-1}^{-1}$, up to a total of $2\mathsf{N}_{\rm fin}$ derivatives.
With (5.22) in hand, we may return to (5.17b) and deduce that for $n,m\leq
3\mathsf{N}_{\textnormal{ind,v}}$, we have
$\displaystyle\left\|d_{n,m,n^{\prime},m^{\prime}}\right\|_{L^{\infty}}$
$\displaystyle\lesssim\sum_{k=0}^{m-m^{\prime}}\lambda_{q}^{n-n^{\prime}+k}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m-m^{\prime}-k}(\lambda_{q-1}^{4}\Gamma_{q}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{k}$
$\displaystyle\lesssim\lambda_{q}^{n-n^{\prime}}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m-m^{\prime}}\,.$
(5.23)
In the last inequality above we have used that
$\lambda_{q}\lambda_{q-1}^{4}\Gamma_{q}\delta_{q-1}^{\nicefrac{{1}}{{2}}}\leq\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1}$,
which is a consequence of (9.39), (9.43), and (9.20).
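To make the last step in (5.23) explicit, the sum over $k$ is controlled by a geometric series:
$\displaystyle\sum_{k=0}^{m-m^{\prime}}\lambda_{q}^{n-n^{\prime}+k}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m-m^{\prime}-k}(\lambda_{q-1}^{4}\Gamma_{q}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{k}=\lambda_{q}^{n-n^{\prime}}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m-m^{\prime}}\sum_{k=0}^{m-m^{\prime}}\left(\frac{\lambda_{q}\lambda_{q-1}^{4}\Gamma_{q}\delta_{q-1}^{\nicefrac{{1}}{{2}}}}{\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1}}\right)^{k}\lesssim\lambda_{q}^{n-n^{\prime}}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m-m^{\prime}}\,,$
since the ratio in the parenthesis is at most $1$ by the parameter inequality just mentioned, and the finitely many terms of the sum are absorbed into the implicit constant.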
Returning to (5.17a) with $F=\mathring{R}_{q}^{\textnormal{comm}}$, we use the
expansion in (5.16), the definition (5.15), and the bound (5.21) to estimate
$D^{n^{\prime}}\partial_{t}^{m^{\prime}}\mathring{R}_{q}^{\textnormal{comm}}$
when $n^{\prime},m^{\prime}\leq 3\mathsf{N}_{\textnormal{ind,v}}$. Using
(5.21) and the choice
$\displaystyle N_{\textnormal{c}}=\mathsf{N}_{\textnormal{ind,v}}-2\,,$ (5.24)
which is required in order to ensure that $n^{\prime}+m^{\prime}+N_{\textnormal{c}}\leq
7\mathsf{N}_{\textnormal{ind,v}}-2$, we first obtain the pointwise estimate
$\displaystyle\left|D^{n^{\prime\prime}}\partial_{t}^{m^{\prime\prime}}R_{N_{\textnormal{c}}}(\theta,\kappa)\right|\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})\sum_{|\alpha|+m_{1}=N_{\textnormal{c}}}\left|\kappa^{(\alpha,m_{1})}\right|\lambda_{q}^{n^{\prime\prime}+|\alpha|}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m^{\prime\prime}+m_{1}}\,,$
(5.25)
where we recall the notation in (5.12). Using (5.25), the Leibniz rule, and
the fact that $\lambda_{q}\Gamma_{q}\leq\widetilde{\lambda}_{q}$, we may
estimate
$\displaystyle\left\|D^{n^{\prime}}\partial_{t}^{m^{\prime}}\mathring{R}_{q,2}^{\textnormal{comm}}\right\|_{L^{\infty}}$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{2}\sum_{|\alpha|+m_{1}=N_{\textnormal{c}}}\sum_{|\alpha^{\prime}|+m_{2}=N_{\textnormal{c}}}\lambda_{q}^{n^{\prime}+|\alpha|+|\alpha^{\prime}|}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m^{\prime}+m_{1}+m_{2}}\int_{\mathbb{T}^{3}\times\mathbb{R}}|\kappa^{(\alpha+\alpha^{\prime},m_{1}+m_{2})}||K_{q}(\kappa)|d\kappa$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{2}\sum_{|\alpha|+m_{1}=N_{\textnormal{c}}}\sum_{|\alpha^{\prime}|+m_{2}=N_{\textnormal{c}}}\lambda_{q}^{n^{\prime}+|\alpha|+|\alpha^{\prime}|}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m^{\prime}+m_{1}+m_{2}}\widetilde{\lambda}_{q}^{-|\alpha|-|\alpha^{\prime}|}\widetilde{\tau}_{q-1}^{m_{1}+m_{2}}$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{2}\lambda_{q}^{n^{\prime}}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m^{\prime}}\Gamma_{q}^{-2N_{\textnormal{c}}}$
whenever $n^{\prime},m^{\prime}\leq 3\mathsf{N}_{\textnormal{ind,v}}$. It is
clear that a very similar argument also gives the bound
$\displaystyle\left\|D^{n^{\prime}}\partial_{t}^{m^{\prime}}\mathring{R}_{q,3}^{\textnormal{comm}}\right\|_{L^{\infty}}\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{2}\lambda_{q}^{n^{\prime}}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m^{\prime}}\Gamma_{q}^{-2N_{\textnormal{c}}}$
for the same range of $n^{\prime}$ and $m^{\prime}$. Lastly, by combining
(5.25), (5.21), and the Leibniz rule, we similarly deduce
$\displaystyle\left\|D^{n^{\prime}}\partial_{t}^{m^{\prime}}\mathring{R}_{q,1}^{\textnormal{comm}}\right\|_{L^{\infty}}$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{2}\sum_{|\alpha|+m_{1}=1}^{N_{\textnormal{c}}-1}\sum_{|\alpha^{\prime}|+m_{2}=N_{\textnormal{c}}}\lambda_{q}^{n^{\prime}+|\alpha|+|\alpha^{\prime}|}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m^{\prime}+m_{1}+m_{2}}\int_{\mathbb{T}^{3}\times\mathbb{R}}\left|\kappa^{(\alpha+\alpha^{\prime},m_{1}+m_{2})}\right||K_{q}(\kappa)|d\kappa$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{2}\sum_{|\alpha|+m_{1}=1}^{N_{\textnormal{c}}-1}\sum_{|\alpha^{\prime}|+m_{2}=N_{\textnormal{c}}}\lambda_{q}^{n^{\prime}+|\alpha|+|\alpha^{\prime}|}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m^{\prime}+m_{1}+m_{2}}\widetilde{\lambda}_{q}^{-|\alpha|-|\alpha^{\prime}|}\widetilde{\tau}_{q-1}^{m_{1}+m_{2}}$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{2}\lambda_{q}^{n^{\prime}}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m^{\prime}}\Gamma_{q}^{-N_{\textnormal{c}}-1}\,.$
Combining the above three bounds, identity (5.16) yields
$\displaystyle\left\|D^{n^{\prime}}\partial_{t}^{m^{\prime}}\mathring{R}_{q}^{\textnormal{comm}}\right\|_{L^{\infty}}\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{2}\lambda_{q}^{n^{\prime}}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m^{\prime}}\Gamma_{q}^{-N_{\textnormal{c}}-1}$
(5.26)
whenever $n^{\prime},m^{\prime}\leq 3\mathsf{N}_{\textnormal{ind,v}}$.
Lastly, by combining (5.17a) with (5.23) and (5.26) we obtain
$\displaystyle\left\|D^{n}D_{t,q}^{m}\mathring{R}_{q}^{\textnormal{comm}}\right\|_{L^{\infty}}\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{2}\lambda_{q}^{n}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m}\Gamma_{q}^{-N_{\textnormal{c}}-1}$
for all $n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$. Therefore, in order to
prove (5.3), it suffices to verify that
$\displaystyle(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})^{2}\lambda_{q}^{n}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m}\Gamma_{q}^{-N_{\textnormal{c}}}\leq\Gamma_{q+1}^{-1}\Gamma_{q+1}^{-\mathsf{C_{R}}}\delta_{q+2}\lambda_{q+1}^{n}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1},\widetilde{\tau}_{q}^{-1}\Gamma_{q}^{-1}\right)$
for all $0\leq n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$. Since
$\lambda_{q}\leq\lambda_{q+1}$,
$\widetilde{\tau}_{q-1}^{-1}\leq\widetilde{\tau}_{q}^{-1}$, and
$\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1}\geq\tau_{q}^{-1}\geq\tau_{q-1}^{-1}$,
the above condition is ensured by the more restrictive condition
$\displaystyle\lambda_{q-1}^{8}\Gamma_{q+1}^{1+{\mathsf{C_{R}}}}\frac{\delta_{q-1}}{\delta_{q+2}}\left(\frac{\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1}}{\tau_{q}^{-1}}\right)^{\mathsf{N}_{\textnormal{ind,t}}}\leq\lambda_{q-1}^{8}\Gamma_{q+1}^{1+{\mathsf{C_{R}}}}\frac{\delta_{q-1}}{\delta_{q+2}}\left(\frac{\widetilde{\tau}_{q-1}^{-1}}{\tau_{q-1}^{-1}}\right)^{\mathsf{N}_{\textnormal{ind,t}}}\leq\Gamma_{q}^{N_{\textnormal{c}}}=\Gamma_{q}^{\mathsf{N}_{\textnormal{ind,v}}-2}$
(5.27)
which holds as soon as $\mathsf{N}_{\textnormal{ind,v}}$ is chosen
sufficiently large with respect to $\mathsf{N}_{\textnormal{ind,t}}$; see
(9.50a) below. This completes the proof of (5.3).
Proof of (5.5) and (5.6). Using Hölder’s inequality and the extra factor of
$\Gamma_{q}^{-1}$ present in (5.6), it is clear that for all $n,m$ such that
(5.6) holds, the estimate (5.5) is also true. The proof is thus split into three
parts: first we consider $n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$, then we
consider $m>3\mathsf{N}_{\textnormal{ind,v}}$, and lastly
$n>3\mathsf{N}_{\textnormal{ind,v}}$.
We start with the proof of (5.5). In view of (3.4), we first bound the main
term, $\mathcal{P}_{q,x,t}w_{q}$, which we claim may be estimated as
$\displaystyle\left\|\psi_{i,q-1}D^{n}D_{t,q-1}^{m}\mathcal{P}_{q,x,t}w_{q}\right\|_{L^{2}}\leq\frac{1}{2}\delta_{q}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1}\Gamma_{q}^{i},\tilde{\tau}_{q-1}^{-1}\right)$
(5.28)
for all $n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$, and as
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}\mathcal{P}_{q,x,t}w_{q}\right\|_{L^{\infty}}\leq\Gamma_{q}^{-2}\delta_{q}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1},\tilde{\tau}_{q-1}^{-1}\right)$
(5.29)
when $n+m\leq 2\mathsf{N}_{\rm fin}$, and either
$n>3\mathsf{N}_{\textnormal{ind,v}}$ or $m>3\mathsf{N}_{\textnormal{ind,v}}$.
By the definition of $\mathcal{P}_{q,x,t}$ in (9.64), in view of the moment
condition (9.62) for the associated mollifier kernel, we have that
$\displaystyle\mathcal{P}_{q,x,t}w_{q}(\theta)-w_{q}(\theta)$
$\displaystyle=\sum_{|\alpha|+m^{\prime\prime}=N_{\textnormal{c}}}\frac{N_{\textnormal{c}}}{\alpha!m^{\prime\prime}!}\int\\!\\!\\!\int_{\mathbb{T}^{3}\times\mathbb{R}}K_{q}(\kappa)(-\kappa)^{(\alpha,m^{\prime\prime})}\int_{0}^{1}(1-\eta)^{N_{\textnormal{c}}-1}D^{\alpha}\partial_{t}^{m^{\prime\prime}}w_{q}(\theta-\eta\kappa)\,d\eta
d\kappa$ (5.30)
where we have appealed to the notation in (5.12), and
$N_{\textnormal{c}}=\mathsf{N}_{\textnormal{ind,v}}-2$. For $n,m\leq
3\mathsf{N}_{\textnormal{ind,v}}$, we appeal to the identity (5.17a) with
$F=\mathcal{P}_{q,x,t}w_{q}-w_{q}$, and with $D_{t,q}$ replaced by
$D_{t,q-1}$, to obtain
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}(\mathcal{P}_{q,x,t}w_{q}-w_{q})\right\|_{L^{\infty}}\lesssim\sum_{\begin{subarray}{c}m^{\prime}\leq
m\\\ n^{\prime}+m^{\prime}\leq
n+m\end{subarray}}\left\|d_{n,m,n^{\prime},m^{\prime}}\right\|_{L^{\infty}}\left\|D^{n^{\prime}}\partial_{t}^{m^{\prime}}(\mathcal{P}_{q,x,t}w_{q}-w_{q})\right\|_{L^{\infty}}$
(5.31)
where
$\displaystyle d_{n,m,n^{\prime},m^{\prime}}$
$\displaystyle=\sum_{k=0}^{m-m^{\prime}}\sum_{\begin{subarray}{c}\\{\gamma\in{\mathbb{N}}^{k}\colon|\gamma|=n-n^{\prime}+k,\\\
\beta\in{\mathbb{N}}^{k}\colon|\beta|=m-m^{\prime}-k\\}\end{subarray}}c(m,n,k,\gamma,\beta)\prod_{\ell=1}^{k}\left(D^{\gamma_{\ell}}\partial_{t}^{\beta_{\ell}}v_{\ell_{q-1}}(x,t)\right)\,.$
From (5.18), and the parameter inequality
$\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q-1}\leq\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}$
we deduce the bound
$\left\|D^{n^{\prime\prime}}\partial_{t}^{m^{\prime\prime}}v_{\ell_{q-1}}\right\|_{L^{\infty}}\lesssim\widetilde{\lambda}_{q-1}^{n^{\prime\prime}-1}(\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1})^{m^{\prime\prime}+1}$
for $n^{\prime\prime}+m^{\prime\prime}\leq 2\mathsf{N}_{\rm fin}$, and
therefore
$\displaystyle\left\|d_{n,m,n^{\prime},m^{\prime}}\right\|_{L^{\infty}}\lesssim\lambda_{q}^{n-n^{\prime}}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m-m^{\prime}}\,.$
(5.32)
Combining this estimate with the bound (5.20), we deduce that
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}(\mathcal{P}_{q,x,t}w_{q}-w_{q})\right\|_{L^{\infty}}$
$\displaystyle\lesssim\sum_{\begin{subarray}{c}m^{\prime}\leq m\\\
n^{\prime}+m^{\prime}\leq
n+m\end{subarray}}\lambda_{q}^{n-n^{\prime}}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m-m^{\prime}}\left\|D^{n^{\prime}}\partial_{t}^{m^{\prime}}(\mathcal{P}_{q,x,t}w_{q}-w_{q})\right\|_{L^{\infty}}$
$\displaystyle\lesssim\sum_{\begin{subarray}{c}m^{\prime}\leq m\\\
n^{\prime}+m^{\prime}\leq
n+m\end{subarray}}\sum_{|\alpha|+m^{\prime\prime}=N_{\textnormal{c}}}\lambda_{q}^{n-n^{\prime}}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m-m^{\prime}}$
$\displaystyle\qquad\qquad\times(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{2})\lambda_{q}^{n^{\prime}+|\alpha|}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m^{\prime}+m^{\prime\prime}}\int_{\mathbb{T}^{3}\times\mathbb{R}}\left|\kappa^{(\alpha,m^{\prime\prime})}\right||K_{q}(\kappa)|d\kappa$
$\displaystyle\lesssim(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{2})\sum_{|\alpha|+m^{\prime\prime}=N_{\textnormal{c}}}\lambda_{q}^{n+|\alpha|}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m+m^{\prime\prime}}\widetilde{\lambda}_{q}^{-|\alpha|}\widetilde{\tau}_{q-1}^{m^{\prime\prime}}$
$\displaystyle\lesssim(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{2})\lambda_{q}^{n}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m}\Gamma_{q}^{-N_{\textnormal{c}}}\,.$
(5.33)
Next, we claim that the above estimate is consistent with (5.28): for $n,m\leq
3\mathsf{N}_{\textnormal{ind,v}}$ we have
$\displaystyle(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{2})\lambda_{q}^{n}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m}\Gamma_{q}^{-N_{\textnormal{c}}}\lesssim\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{n}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1}\Gamma_{q}^{i-1},\tilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1}\right)\,.$
(5.34)
Recalling the definition of $N_{\textnormal{c}}$ in (5.24), the above bound is
in turn implied by the estimate
$\displaystyle\Gamma_{q}^{3}\lambda_{q}^{2}\left(\frac{\widetilde{\tau}_{q-1}^{-1}}{\tau_{q-1}^{-1}}\right)^{\mathsf{N}_{\textnormal{ind,t}}}\leq\Gamma_{q}^{\mathsf{N}_{\textnormal{ind,v}}}$
which holds since
$\mathsf{N}_{\textnormal{ind,v}}\gg\mathsf{N}_{\textnormal{ind,t}}$; in fact,
it is easy to see that the above condition is less stringent than (5.27).
Summarizing (5.33)–(5.34), and appealing to the inductive assumption (3.13),
we deduce that
$\displaystyle\left\|\psi_{i,q-1}D^{n}D^{m}_{t,q-1}\mathcal{P}_{q,x,t}w_{q}\right\|_{L^{2}}$
$\displaystyle\lesssim\left\|\psi_{i,q-1}D^{n}D^{m}_{t,q-1}w_{q}\right\|_{L^{2}}+\left\|D^{n}D^{m}_{t,q-1}(\mathcal{P}_{q,x,t}w_{q}-w_{q})\right\|_{L^{\infty}}$
$\displaystyle\lesssim\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{n}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1}\Gamma_{q}^{i-1},\tilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1}\right)$
(5.35)
for all $0\leq n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$. The above estimate
verifies (5.28).
We next turn to the proof of (5.29). The key observation is that when
establishing (5.35), the two main properties of the mollification kernel
$K_{q}(\kappa)$ which we have used are: the vanishing of the moments
$\int\\!\\!\\!\int_{\mathbb{T}^{3}\times\mathbb{R}}K_{q}(\kappa)(-\kappa)^{(\alpha,m^{\prime\prime})}d\kappa=0$
for $1\leq|\alpha|+m^{\prime\prime}\leq\mathsf{N}_{\textnormal{ind,v}}$ and
the fact that
$\|K_{q}(\kappa)(-\kappa)^{(\alpha,m^{\prime\prime})}\|_{L^{1}(d\kappa)}\lesssim\widetilde{\lambda}_{q}^{-|\alpha|}\widetilde{\tau}_{q-1}^{m^{\prime\prime}}$
for all $|\alpha|+m^{\prime\prime}\leq\mathsf{N}_{\textnormal{ind,v}}$. We
claim that, for any $\widetilde{n}+\widetilde{m}\leq 2\mathsf{N}_{\rm fin}$,
the kernel
$\displaystyle
K_{q}^{(\widetilde{n},\widetilde{m})}(y,s):=D_{y}^{\widetilde{n}}\partial_{s}^{\widetilde{m}}K_{q}(y,s)\widetilde{\lambda}_{q}^{-\widetilde{n}}\widetilde{\tau}_{q-1}^{\widetilde{m}}$
satisfies exactly the same two properties. The second property, concerning the
$L^{1}$ norm, is immediate from the above definition, by scaling, together with
the properties of the Friedrichs mollifier densities $\phi$ and $\widetilde{\phi}$
from (9.62). Concerning the vanishing moment condition, we note that
$K_{q}^{(\widetilde{n},\widetilde{m})}$ has in fact more vanishing moments than $K_{q}$, as is easily
seen from integration by parts in $\kappa$. The upshot of this observation is
that in precisely the same way that (5.35) was proven, we may show that
$\displaystyle\left\|D^{n}D^{m}_{t,q-1}D^{\widetilde{n}}\partial_{t}^{\widetilde{m}}\mathcal{P}_{q,x,t}w_{q}\right\|_{L^{2}}$
$\displaystyle\lesssim\sum_{i=0}^{{i_{\rm
max}}}\left\|\psi_{i,q-1}D^{n}D^{m}_{t,q-1}w_{q}\right\|_{L^{2}}+\left\|D^{n}D^{m}_{t,q-1}(D^{\widetilde{n}}\partial_{t}^{\widetilde{m}}\mathcal{P}_{q,x,t}w_{q}-w_{q})\right\|_{L^{\infty}}$
$\displaystyle\lesssim\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{n}\widetilde{\lambda}_{q}^{\widetilde{n}}(\tilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m}(\tilde{\tau}_{q-1}^{-1})^{\widetilde{m}}$
(5.36)
for all $0\leq n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$, and for all
$0\leq\widetilde{n}+\widetilde{m}\leq 2\mathsf{N}_{\rm fin}$. Here we have
used (3.16) and (3.18) with $q^{\prime}=q-1$, and the parameter inequality
$\tau_{q-1}^{-1}\Gamma_{q}^{{i_{\rm
max}}-1}\leq\tau_{q-1}^{-1}\lambda_{q-1}^{2}\leq\tilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1}$.
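For the reader's convenience, the integration by parts invoked above for the vanishing moment condition reads
$\displaystyle\int_{\mathbb{T}^{3}\times\mathbb{R}}K_{q}^{(\widetilde{n},\widetilde{m})}(\kappa)\,(-\kappa)^{(\alpha,m^{\prime\prime})}\,d\kappa=(-1)^{\widetilde{n}+\widetilde{m}}\,\widetilde{\lambda}_{q}^{-\widetilde{n}}\widetilde{\tau}_{q-1}^{\widetilde{m}}\int_{\mathbb{T}^{3}\times\mathbb{R}}K_{q}(\kappa)\,D_{y}^{\widetilde{n}}\partial_{s}^{\widetilde{m}}\left[(-\kappa)^{(\alpha,m^{\prime\prime})}\right]d\kappa\,,$
so that every moment of $K_{q}^{(\widetilde{n},\widetilde{m})}$ is a linear combination of moments of $K_{q}$ of order lowered by $\widetilde{n}+\widetilde{m}$, or vanishes outright when the derivatives annihilate the monomial.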
Next, consider $n+m\leq 2\mathsf{N}_{\rm fin}$ such that $n\leq
3\mathsf{N}_{\textnormal{ind,v}}$ and $m>3\mathsf{N}_{\textnormal{ind,v}}$.
Define $\bar{m}=m-3\mathsf{N}_{\textnormal{ind,v}}>0$, which is the number of
excess material derivatives not covered by the bound (5.35). We rewrite the
term which we need to estimate in (5.29) as
$\displaystyle\left\|D^{n}D^{m}_{t,q-1}\mathcal{P}_{q,x,t}w_{q}\right\|_{L^{\infty}}=\left\|D^{n}D^{3\mathsf{N}_{\textnormal{ind,v}}}_{t,q-1}D_{t,q-1}^{\bar{m}}\mathcal{P}_{q,x,t}w_{q}\right\|_{L^{\infty}}\,.$
(5.37)
Using (5.17a)–(5.17b) we expand $D_{t,q-1}^{\bar{m}}$ into space and time
derivatives and apply the Leibniz rule to deduce
$\displaystyle D_{t,q-1}^{\bar{m}}\mathcal{P}_{q,x,t}w_{q}$
$\displaystyle=\sum_{\begin{subarray}{c}\bar{m}^{\prime}\leq\bar{m}\\\
\bar{n}^{\prime}+\bar{m}^{\prime}\leq\bar{m}\end{subarray}}d_{\bar{m},\bar{n}^{\prime},\bar{m}^{\prime}}D^{\bar{n}^{\prime}}\partial_{t}^{\bar{m}^{\prime}}\mathcal{P}_{q,x,t}w_{q}$
(5.38a) $\displaystyle d_{\bar{m},\bar{n}^{\prime},\bar{m}^{\prime}}(x,t)$
$\displaystyle=\sum_{k=0}^{\bar{m}-\bar{m}^{\prime}}\sum_{\begin{subarray}{c}\\{\gamma\in{\mathbb{N}}^{k}\colon|\gamma|=-\bar{n}^{\prime}+k,\\\
\beta\in{\mathbb{N}}^{k}\colon|\beta|=\bar{m}-\bar{m}^{\prime}-k\\}\end{subarray}}c(\bar{m},k,\gamma,\beta)\prod_{\ell=1}^{k}\left(D^{\gamma_{\ell}}\partial_{t}^{\beta_{\ell}}v_{\ell_{q-1}}(x,t)\right)\,.$
(5.38b)
Using the Leibniz rule, the previously established bound (5.36), and the
Sobolev embedding $H^{2}\subset L^{\infty}$, we deduce that
$\displaystyle\left\|D^{n}D^{3\mathsf{N}_{\textnormal{ind,v}}}_{t,q-1}D_{t,q-1}^{\bar{m}}\mathcal{P}_{q,x,t}w_{q}\right\|_{L^{\infty}}$
$\displaystyle\lesssim\sum_{a=0}^{n}\sum_{b=0}^{3\mathsf{N}_{\textnormal{ind,v}}}\sum_{\begin{subarray}{c}\bar{m}^{\prime}\leq\bar{m}\\\
\bar{n}^{\prime}+\bar{m}^{\prime}\leq\bar{m}\end{subarray}}\left\|D^{a}D_{t,q-1}^{b}d_{\bar{m},\bar{n}^{\prime},\bar{m}^{\prime}}\right\|_{L^{\infty}}\left\|D^{n-a}D_{t,q-1}^{3\mathsf{N}_{\textnormal{ind,v}}-b}D^{\bar{n}^{\prime}}\partial_{t}^{\bar{m}^{\prime}}\mathcal{P}_{q,x,t}w_{q}\right\|_{L^{\infty}}$
$\displaystyle\lesssim\sum_{a=0}^{n}\sum_{b=0}^{3\mathsf{N}_{\textnormal{ind,v}}}\sum_{\begin{subarray}{c}\bar{m}^{\prime}\leq\bar{m}\\\
\bar{n}^{\prime}+\bar{m}^{\prime}\leq\bar{m}\end{subarray}}\left\|D^{a}D_{t,q-1}^{b}d_{\bar{m},\bar{n}^{\prime},\bar{m}^{\prime}}\right\|_{L^{\infty}}\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{n-a}\widetilde{\lambda}_{q}^{\bar{n}^{\prime}+2}(\tilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{3\mathsf{N}_{\textnormal{ind,v}}-b}(\tilde{\tau}_{q-1}^{-1})^{\bar{m}^{\prime}}\,.$
(5.39)
Thus, in order to obtain the desired bound on (5.37), we need to estimate
space and material derivatives $D^{a}D_{t,q-1}^{b}$ of the term defined in
(5.38b), and in particular of
$D^{\gamma_{\ell}}\partial_{t}^{\beta_{\ell}}v_{\ell_{q-1}}$. We may however
appeal to (5.31)–(5.32) with $(\mathcal{P}_{q,x,t}w_{q}-w_{q})$ replaced by
$D^{\gamma_{\ell}}\partial_{t}^{\beta_{\ell}}v_{\ell_{q-1}}$, and to the bound
(5.18) to deduce that
$\displaystyle\left\|D^{a^{\prime}}D_{t,q-1}^{b^{\prime}}D^{\gamma_{\ell}}\partial_{t}^{\beta_{\ell}}v_{\ell_{q-1}}\right\|_{L^{\infty}}$
$\displaystyle\lesssim\sum_{\begin{subarray}{c}b^{\prime\prime}\leq
b^{\prime}\\\ a^{\prime\prime}+b^{\prime\prime}\leq
a^{\prime}+b^{\prime}\end{subarray}}\lambda_{q}^{a^{\prime}-a^{\prime\prime}}(\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1})^{b^{\prime}-b^{\prime\prime}}\left\|D^{a^{\prime\prime}}\partial_{t}^{b^{\prime\prime}}D^{\gamma_{\ell}}\partial_{t}^{\beta_{\ell}}v_{\ell_{q-1}}\right\|_{L^{\infty}}$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})\lambda_{q}^{a^{\prime}}\widetilde{\lambda}_{q-1}^{\gamma_{\ell}}(\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1})^{b^{\prime}+\beta_{\ell}}$
$\displaystyle\lesssim\lambda_{q}^{a^{\prime}}\widetilde{\lambda}_{q-1}^{\gamma_{\ell}-1}(\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1})^{b^{\prime}+\beta_{\ell}+1}\,,$
where in the last estimate we have used the parameter inequality
$\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q-1}\leq\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}$.
Using the above bound and the definition (5.38b) we deduce that
$\displaystyle\left\|D^{a}D_{t,q-1}^{b}d_{\bar{m},\bar{n}^{\prime},\bar{m}^{\prime}}\right\|_{L^{\infty}}\lesssim\lambda_{q}^{a}\widetilde{\lambda}_{q-1}^{-\bar{n}^{\prime}}(\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1})^{b+\bar{m}-\bar{m}^{\prime}}\,.$
(5.40)
The above display may be combined with (5.39) and yields
$\displaystyle\left\|D^{n}D^{3\mathsf{N}_{\textnormal{ind,v}}}_{t,q-1}D_{t,q-1}^{\bar{m}}\mathcal{P}_{q,x,t}w_{q}\right\|_{L^{\infty}}$
$\displaystyle\lesssim\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{n}\widetilde{\lambda}_{q}^{2}\sum_{b=0}^{3\mathsf{N}_{\textnormal{ind,v}}}\sum_{\begin{subarray}{c}\bar{m}^{\prime}\leq\bar{m}\\\
\bar{n}^{\prime}+\bar{m}^{\prime}\leq\bar{m}\end{subarray}}\widetilde{\lambda}_{q-1}^{-\bar{n}^{\prime}}(\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1})^{b+\bar{m}-\bar{m}^{\prime}}(\tilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{3\mathsf{N}_{\textnormal{ind,v}}-b}(\tilde{\tau}_{q-1}^{-1})^{\bar{m}^{\prime}}$
$\displaystyle\lesssim\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{n}\widetilde{\lambda}_{q}^{2}\sum_{\bar{m}^{\prime}\leq\bar{m}}(\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1})^{m-\bar{m}^{\prime}}(\tilde{\tau}_{q-1}^{-1})^{\bar{m}^{\prime}}$
(5.41)
where we have recalled that $3\mathsf{N}_{\textnormal{ind,v}}+\bar{m}=m$. The
above estimate has to be compared with the right side of (5.29), and for this
purpose we note that for
$\bar{m}^{\prime}\leq\bar{m}=m-3\mathsf{N}_{\textnormal{ind,v}}$ we have
$\displaystyle\lambda_{q}^{n}(\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1})^{m-\bar{m}^{\prime}}(\tilde{\tau}_{q-1}^{-1})^{\bar{m}^{\prime}}$
$\displaystyle\lesssim\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\Gamma_{q}^{-(m-\bar{m}^{\prime})}(\widetilde{\tau}_{q-1}^{-1})^{-m}$
$\displaystyle\lesssim\Gamma_{q}^{-3\mathsf{N}_{\textnormal{ind,v}}}(\widetilde{\tau}_{q-1}^{-1}\tau_{q-1})^{\mathsf{N}_{\textnormal{ind,t}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)$
where we have used the fact that $m-\bar{m}^{\prime}\geq
m-\bar{m}=3\mathsf{N}_{\textnormal{ind,v}}$. Taking
$\mathsf{N}_{\textnormal{ind,v}}\gg\mathsf{N}_{\textnormal{ind,t}}$ such that
$\displaystyle\widetilde{\lambda}_{q}^{2}(\widetilde{\tau}_{q-1}^{-1}\tau_{q-1})^{\mathsf{N}_{\textnormal{ind,t}}}\leq\Gamma_{q}^{3\mathsf{N}_{\textnormal{ind,v}}-2}\,,$
(5.42)
a condition which is satisfied due to (9.50c), it follows from (5.41) that
(5.29) holds whenever $m>3\mathsf{N}_{\textnormal{ind,v}}$, $n\leq
3\mathsf{N}_{\textnormal{ind,v}}$, and $m+n\leq 2\mathsf{N}_{\rm fin}$.
It remains to consider the case $n>3\mathsf{N}_{\textnormal{ind,v}}$ and
$n+m\leq 2\mathsf{N}_{\rm fin}$. In this case we still use (5.38a)–(5.38b),
but with $\bar{m}$ replaced by $m$; arguing similarly to (5.39), but appealing
to the bounds (5.18) and (5.32) instead of (5.40), we obtain
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}\mathcal{P}_{q,x,t}w_{q}\right\|_{L^{\infty}}$
$\displaystyle\lesssim\sum_{a=0}^{n}\sum_{\begin{subarray}{c}\bar{m}^{\prime}\leq
m\\\ \bar{n}^{\prime}+\bar{m}^{\prime}\leq
m\end{subarray}}\left\|D^{a}d_{m,\bar{n}^{\prime},\bar{m}^{\prime}}\right\|_{L^{\infty}}\left\|D^{n-a+\bar{n}^{\prime}}\partial_{t}^{\bar{m}^{\prime}}\mathcal{P}_{q,x,t}w_{q}\right\|_{L^{\infty}}$
$\displaystyle\lesssim\sum_{a=0}^{n}\sum_{\begin{subarray}{c}\bar{m}^{\prime}\leq
m\\\ \bar{n}^{\prime}+\bar{m}^{\prime}\leq
m\end{subarray}}\lambda_{q}^{a-\bar{n}^{\prime}}(\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1})^{m-\bar{m}^{\prime}}\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}^{2}\mathcal{M}\left(n-a+\bar{n}^{\prime},3\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)(\tilde{\tau}_{q-1}^{-1})^{\bar{m}^{\prime}}$
$\displaystyle\lesssim\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}^{2}\mathcal{M}\left(n,3\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)(\tilde{\tau}_{q-1}^{-1})^{m}\,.$
To conclude the proof of (5.29) in this case, we note that for $n\geq
3\mathsf{N}_{\textnormal{ind,v}}$ the definition (9.19) implies
$\displaystyle\mathcal{M}\left(n,3\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\leq\Gamma_{q+1}^{-5\mathsf{N}_{\textnormal{ind,v}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)$
and this factor is sufficiently small to absorb losses due to bad material
derivative estimates. Indeed, we have that
$\displaystyle\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}^{2}\mathcal{M}\left(n,3\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)(\tilde{\tau}_{q-1}^{-1})^{m}$
$\displaystyle\lesssim\Gamma_{q}^{-3}\delta_{q}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1},\tilde{\tau}_{q-1}^{-1}\right)\Gamma_{q}^{2}\widetilde{\lambda}_{q}^{2}\left(\frac{\tilde{\tau}_{q-1}^{-1}}{\tau_{q-1}^{-1}}\right)^{\mathsf{N}_{\textnormal{ind,t}}}\Gamma_{q+1}^{-5\mathsf{N}_{\textnormal{ind,v}}}$
$\displaystyle\lesssim\Gamma_{q}^{-1}\delta_{q}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1},\tilde{\tau}_{q-1}^{-1}\right)$
by appealing to the condition
$\mathsf{N}_{\textnormal{ind,v}}\gg\mathsf{N}_{\textnormal{ind,t}}$ given in
(9.50b). This concludes the proof of (5.29) for all $n+m\leq 2\mathsf{N}_{\rm
fin}$, if either $n$ or $m$ are larger than
$3\mathsf{N}_{\textnormal{ind,v}}$.
The bounds (5.28)–(5.29) estimate the leading order contribution to $u_{q}$.
According to the decomposition (3.4), the proofs of (5.5) and (5.6) are
completed if we are able to verify that
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}(\mathcal{P}_{q,x,t}-\mathrm{Id})v_{\ell_{q-1}}\right\|_{L^{\infty}}\leq\Gamma_{q}^{-2}\delta_{q}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1},\tilde{\tau}_{q-1}^{-1}\right)$
(5.43)
holds for all $n+m\leq 2\mathsf{N}_{\rm fin}$.
In order to establish this bound, we appeal to (5.31)–(5.32) and obtain
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}(\mathcal{P}_{q,x,t}-\mathrm{Id})v_{\ell_{q-1}}\right\|_{L^{\infty}}\lesssim\sum_{\begin{subarray}{c}m^{\prime}\leq
m\\\ n^{\prime}+m^{\prime}\leq
n+m\end{subarray}}\lambda_{q}^{n-n^{\prime}}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m-m^{\prime}}\left\|D^{n^{\prime}}\partial_{t}^{m^{\prime}}(\mathcal{P}_{q,x,t}-\mathrm{Id})v_{\ell_{q-1}}\right\|_{L^{\infty}}$
(5.44)
for $n,m\geq 0$ such that $n+m\leq 2\mathsf{N}_{\rm fin}$. Here we distinguish
two cases. If either $n>3\mathsf{N}_{\textnormal{ind,v}}$ or
$m>3\mathsf{N}_{\textnormal{ind,v}}$, then we simply appeal to (5.18), use
that $\mathcal{P}_{q,x,t}$ commutes with $D$ and $\partial_{t}$, and obtain
from the above display that
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}(\mathcal{P}_{q,x,t}-\mathrm{Id})v_{\ell_{q-1}}\right\|_{L^{\infty}}$
$\displaystyle\lesssim\sum_{\begin{subarray}{c}m^{\prime}\leq m\\\
n^{\prime}+m^{\prime}\leq
n+m\end{subarray}}\lambda_{q}^{n-n^{\prime}}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m-m^{\prime}}(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})\widetilde{\lambda}_{q-1}^{n^{\prime}}\widetilde{\tau}_{q-2}^{-m^{\prime}}$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})\lambda_{q}^{n}(\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1})^{m}$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})(\tau_{q-1}\widetilde{\tau}_{q-1}^{-1})^{\mathsf{N}_{\textnormal{ind,t}}}\Gamma_{q}^{-3\mathsf{N}_{\textnormal{ind,v}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1},\tilde{\tau}_{q-1}^{-1}\right)$
$\displaystyle\lesssim\Big{(}\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}}\Gamma_{q}^{2}\delta_{q}^{-\nicefrac{{1}}{{2}}}(\tau_{q-1}\widetilde{\tau}_{q-1}^{-1})^{\mathsf{N}_{\textnormal{ind,t}}}\Gamma_{q}^{-3\mathsf{N}_{\textnormal{ind,v}}}\Big{)}\Gamma_{q}^{-2}\delta_{q}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1},\tilde{\tau}_{q-1}^{-1}\right)$
Using that
$\mathsf{N}_{\textnormal{ind,v}}\gg\mathsf{N}_{\textnormal{ind,t}}$, as
described in (9.50c), the above estimate then readily implies (5.43).
We are thus left to consider (5.44) for $n,m\leq
3\mathsf{N}_{\textnormal{ind,v}}$. In this case, the bound for the term
$\|D^{n^{\prime}}\partial_{t}^{m^{\prime}}(\mathcal{P}_{q,x,t}-\mathrm{Id})v_{\ell_{q-1}}\|_{L^{\infty}}$
present in (5.44) is different. Similarly to (5.30) we use that the kernel
$K_{q}$ has vanishing moments of orders between $1$ and
$\mathsf{N}_{\textnormal{ind,v}}$, and thus we have
$\displaystyle\mathcal{P}_{q,x,t}v_{\ell_{q-1}}(\theta)-v_{\ell_{q-1}}(\theta)$
$\displaystyle=\sum_{|\alpha|+m^{\prime\prime}=\mathsf{N}_{\textnormal{ind,v}}}\frac{\mathsf{N}_{\textnormal{ind,v}}}{\alpha!m^{\prime\prime}!}\int\\!\\!\\!\int_{\mathbb{T}^{3}\times\mathbb{R}}K_{q}(\kappa)(-\kappa)^{(\alpha,m^{\prime\prime})}\int_{0}^{1}(1-\eta)^{\mathsf{N}_{\textnormal{ind,v}}-1}D^{\alpha}\partial_{t}^{m^{\prime\prime}}v_{\ell_{q-1}}(\theta-\eta\kappa)\,d\eta
d\kappa\,.$ (5.45)
Using (5.18) and (5.45), we may then estimate
$\displaystyle\left\|D^{n^{\prime}}\partial_{t}^{m^{\prime}}(\mathcal{P}_{q,x,t}-\mathrm{Id})v_{\ell_{q-1}}\right\|_{L^{\infty}}$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})\sum_{|\alpha|+m^{\prime\prime}=\mathsf{N}_{\textnormal{ind,v}}}\widetilde{\lambda}_{q}^{-|\alpha|}\widetilde{\tau}_{q-1}^{m^{\prime\prime}}\widetilde{\lambda}_{q-1}^{n^{\prime}+|\alpha|}(\Gamma_{q}^{-1}\widetilde{\tau}_{q}^{-1})^{m^{\prime}+m^{\prime\prime}}$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})\Gamma_{q}^{-\mathsf{N}_{\textnormal{ind,v}}}\lambda_{q}^{n^{\prime}}(\Gamma_{q}^{-1}\widetilde{\tau}_{q}^{-1})^{m^{\prime}}\,.$
Combining the above display with (5.44) we arrive at
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}(\mathcal{P}_{q,x,t}-\mathrm{Id})v_{\ell_{q-1}}\right\|_{L^{\infty}}$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})\Gamma_{q}^{-\mathsf{N}_{\textnormal{ind,v}}}\lambda_{q}^{n}(\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1})^{m}$
$\displaystyle\lesssim(\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}})(\widetilde{\tau}_{q-1}^{-1}\tau_{q-1})^{\mathsf{N}_{\textnormal{ind,t}}}\Gamma_{q}^{-\mathsf{N}_{\textnormal{ind,v}}}\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)\,.$
(5.46)
Using that
$\mathsf{N}_{\textnormal{ind,v}}\gg\mathsf{N}_{\textnormal{ind,t}}$, see
condition (9.50c), the above estimate concludes the proof of (5.43).
Combining the bounds (5.28), (5.29), and (5.43) concludes the proofs of (5.5)
and (5.6).
Proof of (5.4). By (3.3) we have that
$\displaystyle
v_{\ell_{q}}-v_{q}=(\mathcal{P}_{q,x,t}-\mathrm{Id})v_{q}=(\mathcal{P}_{q,x,t}-\mathrm{Id})w_{q}+(\mathcal{P}_{q,x,t}-\mathrm{Id})v_{\ell_{q-1}}\,.$
From (5.33) and (5.34) we deduce that the first term on the right side of the
above display is bounded as
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}(\mathcal{P}_{q,x,t}-\mathrm{Id})w_{q}\right\|_{L^{\infty}}$
$\displaystyle\qquad\lesssim\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\Gamma_{q}^{2}\lambda_{q}^{2}(\widetilde{\tau}_{q-1}^{-1}\tau_{q-1})^{\mathsf{N}_{\textnormal{ind,t}}}\Gamma_{q}^{-\mathsf{N}_{\textnormal{ind,v}}}\right)\lambda_{q}^{n}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1}\Gamma_{q}^{i-1},\widetilde{\tau}_{q-1}^{-1}\Gamma_{q}^{-1}\right)\,,$
while the second term is estimated from (5.46) as
$\displaystyle\left\|D^{n}D_{t,q-1}^{m}(\mathcal{P}_{q,x,t}-\mathrm{Id})v_{\ell_{q-1}}\right\|_{L^{\infty}}$
$\displaystyle\qquad\lesssim\left(\delta_{q-1}^{\nicefrac{{1}}{{2}}}\lambda_{q-1}^{4}(\widetilde{\tau}_{q-1}^{-1}\tau_{q-1})^{\mathsf{N}_{\textnormal{ind,t}}}\Gamma_{q}^{-\mathsf{N}_{\textnormal{ind,v}}}\right)\mathcal{M}\left(n,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)\,,$
for $n,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$. Since
$\mathsf{N}_{\textnormal{ind,v}}\gg\mathsf{N}_{\textnormal{ind,t}}$, see e.g.
the parameter inequality (9.50a), the above two displays directly imply (5.4).
Proof of (5.7) and (5.8). The argument is nearly identical to how the
inductive bounds on $w_{q}$ in (3.13) were shown earlier to imply bounds for
$\mathcal{P}_{q,x,t}w_{q}$ as in (5.28). The crucial ingredients in this proof
were: that for each material derivative the bound on the mollified function
$\mathcal{P}_{q,x,t}w_{q}$ is relaxed by a factor of $\Gamma_{q}$, that the
cost of space derivatives is relaxed from $\lambda_{q}$ to
$\widetilde{\lambda}_{q}$ when $n\geq\mathsf{N}_{\textnormal{ind,v}}$, and
that the available number of estimates on the un-mollified function $w_{q}$
was much larger than $\mathsf{N}_{\textnormal{ind,v}}$ (more precisely
$7\mathsf{N}_{\textnormal{ind,v}}$). But the same ingredients are available
for the transfer of estimates from $\mathring{R}_{q}$ to
$\mathring{R}_{\ell_{q}}=\mathcal{P}_{q,x,t}\mathring{R}_{q}$. Indeed, the
derivatives available in (3.15) extend significantly past
$\mathsf{N}_{\textnormal{ind,v}}$ (this time up to
$3\mathsf{N}_{\textnormal{ind,v}}$); moreover, when comparing the desired bound
on $\mathring{R}_{\ell_{q}}$ in (5.7) with the available inductive bound in
(3.15), we note that the cost of each material derivative is relaxed by a
factor of $\Gamma_{q}$, and that the cost of each additional space derivative
is relaxed from $\lambda_{q}$ to $\widetilde{\lambda}_{q}$ when $n$ is
sufficiently large. To avoid redundancy, we omit these details. ∎
## 6 Cutoffs
This section is dedicated to the construction of the cutoff functions
described in Section 2.5, which play the role of a joint Eulerian-and-
Lagrangian Littlewood-Paley frequency decomposition, and which in addition
keep track of the size of objects in physical space. During a first pass at
the paper, the reader may skip this technical section, provided that Lemmas
6.8, 6.14, 6.18, 6.21, 6.35, 6.36, 6.38, 6.40, 6.41, and Corollaries 6.27 and
6.33 are taken for granted.
This section is organized as follows. In Section 6.1 we define the velocity
cutoff functions $\psi_{i,q}$, recursively in terms of the previous level
(meaning $q-1$) velocity cutoff functions $\psi_{i^{\prime},q-1}$ which are
assumed to satisfy the inductive bounds and properties mentioned in Section
3.2.3. In Section 6.2 we then verify that the velocity cutoff functions at
level $q$, and the velocity fields $u_{q}$ and $v_{\ell_{q}}$ satisfy all the
inductive estimates claimed in Sections 3.2.3 and 3.2.4, for $q^{\prime}=q$.
Section 6.2 forms the bulk of Section 6, and it is here that the various
commutators between Eulerian (space and time) derivatives and Lagrangian
derivatives cause a plethora of difficulties.
###### Remark 6.1 (Inductive assumptions which involve cutoffs and
commutators).
We note that by the conclusion of Section 6.2 we have verified all the
inductive assumptions from Section 3.2, except for (3.13)–(3.14) for the new
velocity increment $w_{q+1}$, and (3.15) for the new stress
$\mathring{R}_{q+1}$. These three inductive assumptions will be revisited,
broken down, and restated in Section 7 and proven in Section 8.
Next, in Section 6.3 we introduce the temporal cutoffs $\chi_{i,k,q}$, indexed
by $k$, which are meant to subdivide the support of the velocity cutoff
$\psi_{i,q}$ into time slices of width inversely proportional to the local
Lipschitz norm of $v_{\ell_{q}}$. This allows us in Section 6.4 to properly
define and
estimate the Lagrangian flow maps induced by the incompressible vector field
$v_{\ell_{q}}$, on the support of $\psi_{i,q}\chi_{i,k,q}$. We next turn to
defining the stress cutoff functions $\omega_{i,j,q,n,p}$, indexed by $j$, for
the stress $\mathring{R}_{q,n,p}$, on the support of $\psi_{i,q}$. Coupling
the stress and velocity cutoffs in this way allows us in Section 6.7 to
sharply estimate spatial and material derivatives of these higher order
stresses, but also to estimate the derivatives of the stress cutoffs
themselves. At last, we define in Section 6.8 the checkerboard cutoffs
$\zeta_{q,i,k,n,\vec{l}}$, indexed by an address $\vec{l}=(l,w,h)$ which
identifies a specific cube of side-length $2\pi/\lambda_{q,n,0}$ within
$\mathbb{T}^{3}$. This specific size of the support of
$\zeta_{q,i,k,n,\vec{l}}$ is important for ensuring that Oscillation Type $2$
errors vanish (see Lemmas 8.11 and 8.12). These cutoff functions are flowed by
the backwards Lagrangian flows $\Phi_{i,k,q}$ defined earlier, explaining
their dependence on the indices $q,i,k$. Lastly, the cumulative cutoff
function $\eta_{i,j,k,q,n,p,\vec{l}}$ is defined in Section 6.9, along with
some of its principal properties. We emphasize that this cumulative cutoff has
embedded into it information about the local size and the cost of space and
Lagrangian derivatives of the velocity, the stress, and the Lagrangian maps.
### 6.1 Definition of the velocity cutoff functions
For all $q\geq 1$ and $0\leq m\leq\mathsf{N}_{\rm cut,t}$, we construct cutoff
functions with the properties listed in the following lemma, whose proof is
contained in Appendix A.2.
###### Lemma 6.2.
For all $q\geq 1$ and $0\leq m\leq\mathsf{N}_{\rm cut,t}$, there exist smooth
cutoff functions
$\widetilde{\psi}_{m,q},\psi_{m,q}:[0,\infty)\rightarrow[0,1]$ which satisfy
the following.
1. (1)
The support of $\widetilde{\psi}_{m,q}$ is precisely the set
$\left[0,\Gamma_{q}^{2(m+1)}\right]$, and furthermore
1. (a)
On the interval $\left[0,\frac{1}{4}\Gamma_{q}^{2(m+1)}\right]$,
$\widetilde{\psi}_{m,q}\equiv 1$.
2. (b)
On the interval
$\left[\frac{1}{4}\Gamma_{q}^{2(m+1)},\Gamma_{q}^{2(m+1)}\right]$,
$\widetilde{\psi}_{m,q}$ decreases from $1$ to $0$.
2. (2)
The support of $\psi_{m,q}$ is precisely the set
$\left[\frac{1}{4},\Gamma_{q}^{2(m+1)}\right]$, and furthermore
1. (a)
On the interval $\left[\frac{1}{4},1\right]$, $\psi_{m,q}$ increases from $0$
to $1$.
2. (b)
On the interval $\left[1,\frac{1}{4}\Gamma_{q}^{2(m+1)}\right]$,
$\psi_{m,q}\equiv 1$.
3. (c)
On the interval
$\left[\frac{1}{4}\Gamma_{q}^{2(m+1)},\Gamma_{q}^{2(m+1)}\right]$,
$\psi_{m,q}$ decreases from $1$ to $0$.
3. (3)
For all $y\geq 0$, a partition of unity is formed as
$\displaystyle\widetilde{\psi}_{m,q}^{2}(y)+\sum_{{i\geq
1}}\psi_{m,q}^{2}\left(\Gamma_{q}^{-2i(m+1)}y\right)=1$ (6.1)
4. (4)
$\widetilde{\psi}_{m,q}$ and
$\psi_{m,q}\left(\Gamma_{q}^{-2i(m+1)}\cdot\right)$ satisfy
$\displaystyle\mathrm{supp\,}\widetilde{\psi}_{m,q}(\cdot)\cap\mathrm{supp\,}\psi_{m,q}\left(\Gamma_{q}^{-2i(m+1)}\cdot\right)$
$\displaystyle=\emptyset\quad\textnormal{if}\quad i\geq 2,$
$\displaystyle\mathrm{supp\,}\psi_{m,q}\left(\Gamma_{q}^{-2i(m+1)}\cdot\right)\cap\mathrm{supp\,}\psi_{m,q}\left(\Gamma_{q}^{-2i^{\prime}(m+1)}\cdot\right)$
$\displaystyle=\emptyset\quad\textnormal{if}\quad|i-i^{\prime}|\geq 2.$ (6.2)
5. (5)
For $0\leq N\leq\mathsf{N}_{\rm fin}$, when $0\leq y<\Gamma_{q}^{2(m+1)}$ we
have
$\displaystyle\frac{|D^{N}\widetilde{\psi}_{m,q}(y)|}{(\widetilde{\psi}_{m,q}(y))^{1-N/\mathsf{N}_{\rm
fin}}}$ $\displaystyle\lesssim\Gamma_{q}^{-2N(m+1)}.$ (6.3)
For $\frac{1}{4}<y<1$ we have
$\displaystyle\frac{|D^{N}\psi_{m,q}(y)|}{(\psi_{m,q}(y))^{1-N/\mathsf{N}_{\rm
fin}}}$ $\displaystyle\lesssim 1,$ (6.4)
while for $\frac{1}{4}\Gamma_{q}^{2(m+1)}<y<\Gamma_{q}^{2(m+1)}$ we have
$\displaystyle\frac{|D^{N}\psi_{m,q}(y)|}{(\psi_{m,q}(y))^{1-N/\mathsf{N}_{\rm
fin}}}$ $\displaystyle\lesssim\Gamma_{q}^{-2N(m+1)}.$ (6.5)
In each of the above inequalities, the implicit constants depend on $N$ but
not on $m$ or $q$.
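The partition-of-unity identity (6.1) reflects a telescoping structure. The following is a minimal sketch of one construction consistent with the stated supports (the actual construction, together with the derivative bounds (6.3)–(6.5), is carried out in Appendix A.2):

```latex
% Sketch: choose \psi_{m,q} as a difference of two rescaled plateaus,
%   \psi_{m,q}^2(y) := \widetilde{\psi}_{m,q}^2(y)
%                       - \widetilde{\psi}_{m,q}^2\big(\Gamma_q^{2(m+1)} y\big),
% which is supported on [1/4, \Gamma_q^{2(m+1)}], as required by item (2).
% The sum in (6.1) then telescopes:
\widetilde{\psi}_{m,q}^{2}(y)
  + \sum_{i=1}^{I} \Big( \widetilde{\psi}_{m,q}^{2}\big(\Gamma_{q}^{-2i(m+1)}y\big)
      - \widetilde{\psi}_{m,q}^{2}\big(\Gamma_{q}^{-2(i-1)(m+1)}y\big) \Big)
  = \widetilde{\psi}_{m,q}^{2}\big(\Gamma_{q}^{-2I(m+1)}y\big)
  \longrightarrow 1
% as I \to \infty, for each fixed y \geq 0, since \widetilde{\psi}_{m,q} \equiv 1
% on [0, \tfrac14 \Gamma_q^{2(m+1)}].
```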
###### Definition 6.3.
Given $i,j,q\geq 0$, we define
$\displaystyle i_{*}=i_{*}(j,q)=i_{*}(j)=\min\\{i\geq
0\colon\Gamma_{q+1}^{i}\geq\Gamma_{q}^{j}\\}.$
In view of the definition (3.10), we see that
$i_{*}(j)=\left\lceil
j\frac{\log(\lambda_{q})-\log(\lambda_{q-1})}{\log(\lambda_{q+1})-\log(\lambda_{q})}\right\rceil=\left\lceil
j\frac{\log\left(\left\lceil
a^{b^{q}}\right\rceil\right)-\log\left(\left\lceil
a^{b^{q-1}}\right\rceil\right)}{\log\left(\left\lceil
a^{b^{q+1}}\right\rceil\right)-\log\left(\left\lceil
a^{b^{q}}\right\rceil\right)}\right\rceil.$
One may check that as $q\rightarrow\infty$ or $a\rightarrow\infty$, $i_{*}(j)$
converges to $\left\lceil\frac{j}{b}\right\rceil$ for any $j$, and so if $a$
is sufficiently large, $i_{*}(j)$ is bounded from above and below
independently of $q$ for each $j$. Note that in particular, for $j=0$ we have
that $i_{*}(j)=0$.
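A minimal computation supporting the claimed limit, using $\lambda_{q}=\lceil a^{b^{q}}\rceil$ from (3.10):

```latex
% Since \lambda_q = \lceil a^{b^q} \rceil, we have \log\lambda_q = b^q \log a + o(1),
% where the o(1) term vanishes as q \to \infty or as a \to \infty. Hence
\frac{\log\lambda_{q}-\log\lambda_{q-1}}{\log\lambda_{q+1}-\log\lambda_{q}}
  = \frac{(b^{q}-b^{q-1})\log a + o(1)}{(b^{q+1}-b^{q})\log a + o(1)}
  \longrightarrow \frac{b^{q}-b^{q-1}}{b^{q+1}-b^{q}} = \frac{1}{b}\,,
% so that i_*(j) \to \lceil j/b \rceil, consistent with the statement above.
```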
At stage $q\geq 1$ of the iteration (by convention $w_{0}=u_{0}=0$) and for
$m\leq\mathsf{N}_{\rm cut,t}$ and $j_{m}\geq 0$, we can now define
$\displaystyle h_{m,j_{m},q}^{2}(x,t):=\sum_{n=0}^{\mathsf{N}_{\rm
cut,x}}\Gamma_{q+1}^{-2i_{*}\left(j_{m}\right)}\delta_{q}^{-1}\left(\lambda_{q}\Gamma_{q}\right)^{-2n}\left(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{*}(j_{m})+2}\right)^{-2m}|D^{n}D_{t,q-1}^{m}u_{q}(x,t)|^{2}.$
(6.6)
###### Definition 6.4 (Intermediate Cutoff Functions).
Given $q\geq 1$, $m\leq\mathsf{N}_{\rm cut,t}$, and $j_{m}\geq 0$ we define
$\psi_{m,i_{m},j_{m},q}$ by
$\displaystyle\psi_{m,i_{m},j_{m},q}(x,t)$
$\displaystyle=\psi_{m,q+1}\left(\Gamma_{q+1}^{-2(i_{m}-i_{*}(j_{m}))(m+1)}h_{m,j_{m},q}^{2}(x,t)\right)$
(6.7)
for $i_{m}>i_{*}(j_{m})$, while for $i_{m}=i_{*}(j_{m})$,
$\displaystyle\psi_{m,i_{*}(j_{m}),j_{m},q}(x,t)$
$\displaystyle=\widetilde{\psi}_{m,q+1}\left(h_{m,j_{m},q}^{2}(x,t)\right).$
(6.8)
The intermediate cutoff functions $\psi_{m,i_{m},j_{m},q}$ are equal to zero
for $i_{m}<i_{*}(j_{m})$.
The indices $i_{m}$ and $j_{m}$ will be shown to run up to some maximal values
$i_{\mathrm{max}}$ and $\widetilde{i}_{\textnormal{max}}$ to be determined in
the proof (see Lemma 6.14 and (6.27)). With this notation and in view of (6.1)
and (6.2), it immediately follows that
$\displaystyle\sum_{i_{m}\geq 0}\psi_{m,i_{m},j_{m},q}^{2}=\sum_{i_{m}\geq
i_{*}(j_{m})}\psi_{m,i_{m},j_{m},q}^{2}=\sum_{\\{i_{m}\colon\Gamma_{q+1}^{i_{m}}\geq\Gamma_{q}^{j_{m}}\\}}\psi_{m,i_{m},j_{m},q}^{2}\equiv
1$ (6.9)
for any $m$ and for $|i_{m}-i^{\prime}_{m}|\geq 2$,
$\psi_{m,i_{m},j_{m},q}\psi_{m,i_{m}^{\prime},j_{m},q}=0.$ (6.10)
###### Definition 6.5 ($m^{\textnormal{th}}$ Velocity Cutoff Function).
For $q\geq 1$ and $i_{m}\geq 0$ (later we will show that
$\psi_{m,i_{m},q}\equiv 0$ if $i_{m}\geq{i_{\rm max}}$), we inductively define
the $m^{\textnormal{th}}$ velocity cutoff function
$\psi_{m,i_{m},q}^{2}=\sum\limits_{\\{j_{m}\colon i_{m}\geq
i_{*}(j_{m})\\}}\psi_{j_{m},q-1}^{2}\psi_{m,i_{m},j_{m},q}^{2}.$ (6.11)
In order to define the full velocity cutoff function, we use the notation
$\vec{i}=\\{i_{m}\\}_{m=0}^{\mathsf{N}_{\rm
cut,t}}=\left(i_{0},...,i_{\mathsf{N}_{\rm
cut,t}}\right)\in\mathbb{N}_{0}^{\mathsf{N}_{\rm cut,t}+1}$ (6.12)
to denote a tuple of non-negative integers of length $\mathsf{N}_{\rm
cut,t}+1$.
###### Definition 6.6 (Velocity cutoff function).
For $0\leq i\leq i_{\textrm{max}}(q)$ and $q\geq 0$, we inductively define the
velocity cutoff function $\psi_{i,q}$ as follows. When $q=0$, we let
$\displaystyle\psi_{i,0}=\begin{cases}1&\mbox{if }i=0\\\
0&\mbox{otherwise}.\end{cases}$ (6.13)
Then, we inductively on $q$ define
$\displaystyle\psi_{i,q}^{2}=\sum\limits_{\left\\{\vec{i}\colon\max\limits_{0\leq
m\leq\mathsf{N}_{\rm
cut,t}}i_{m}=i\right\\}}\prod\limits_{m=0}^{\mathsf{N}_{\rm
cut,t}}\psi_{m,i_{m},q}^{2}\,,$ (6.14)
for all $q\geq 1$.
The sum used to define $\psi_{i,q}$ for $q\geq 1$ is over all tuples with a
maximum entry of $i$. The number of such tuples is clearly $q$-independent
once it is demonstrated in Lemma 6.14 that $i_{m}\leq{i_{\rm max}}(q)$ (which
implies $i\leq{i_{\rm max}}(q)$), and ${i_{\rm max}}(q)$ is bounded above
independently of $q$.
For notational convenience, given an $\vec{i}$ as in the sum of (6.14), we
shall denote
$\displaystyle\mathrm{supp\,}\left(\prod\limits_{m=0}^{\mathsf{N}_{\rm
cut,t}}\psi_{m,i_{m},q}\right)=\bigcap_{m=0}^{\mathsf{N}_{\rm
cut,t}}\mathrm{supp\,}(\psi_{m,i_{m},q})=:\mathrm{supp\,}(\psi_{\vec{i},q})\,.$
(6.15)
In particular, we will frequently use that
$(x,t)\in\mathrm{supp\,}(\psi_{i,q})$ if and only if there exists
$\vec{i}\in\mathbb{N}_{0}^{\mathsf{N}_{\rm cut,t}+1}$ such that $\max_{0\leq
m\leq\mathsf{N}_{\rm cut,t}}i_{m}=i$, and
$(x,t)\in\mathrm{supp\,}(\psi_{\vec{i},q})$.
### 6.2 Properties of the velocity cutoff functions
#### 6.2.1 Partitions of unity
###### Lemma 6.7 ($\psi_{m,i_{m},q}$ \- Partition of unity).
For all $m$, we have that
$\displaystyle\sum_{i_{m}\geq 0}\psi_{m,i_{m},q}^{2}\equiv
1\,,\qquad\psi_{m,i_{m},q}\psi_{m,i^{\prime}_{m},q}=0\quad\textnormal{for}\quad|i_{m}-i^{\prime}_{m}|\geq
2.$ (6.16)
###### Proof of Lemma 6.7.
The proof proceeds inductively. When $q=0$ there is nothing to prove as
$\psi_{m,i_{m},q}$ is not defined. Thus we assume $q\geq 1$. From (6.13) when
$q-1=0$, and from the inductive assumption (3.16) when $q-1\geq 1$, the
functions $\\{\psi_{j,q-1}^{2}\\}_{j\geq 0}$ form a partition of unity.
first part of (6.16), we may use (6.9) and (6.11) and reorder the summation to
obtain
$\displaystyle\sum_{i_{m}\geq 0}\psi_{m,i_{m},q}^{2}=$
$\displaystyle\sum_{i_{m}\geq 0}\sum_{\\{j_{m}\colon i_{*}(j_{m})\leq
i_{m}\\}}~{}\psi_{j_{m},q-1}^{2}\psi_{m,i_{m},j_{m},q}^{2}(x,t)$
$\displaystyle=$ $\displaystyle\sum_{j_{m}\geq
0}~{}\psi_{j_{m},q-1}^{2}\underbrace{\sum_{\\{i_{m}\colon i_{m}\geq
i_{*}(j_{m})\\}}\psi_{m,i_{m},j_{m},q}^{2}}_{\equiv 1\mbox{ by
}\eqref{eq:psi:i:j:partition:0}}=\sum_{j_{m}\geq
0}~{}\psi_{j_{m},q-1}^{2}\equiv 1.$
The last equality follows from the inductive assumption (3.16).
The proof of the second claim is more involved and will be split into cases.
Using the definition in (6.11), we have that
$\displaystyle\psi_{m,i_{m},q}\psi_{m,i^{\prime}_{m},q}=\sum\limits_{\\{j_{m}:i_{m}\geq
i_{*}(j_{m})\\}}\sum\limits_{\\{j_{m}^{\prime}:i_{m}^{\prime}\geq
i_{*}(j_{m}^{\prime})\\}}\psi_{j_{m},q-1}^{2}\psi_{j^{\prime}_{m},q-1}^{2}\psi_{m,i_{m},j_{m},q}^{2}\psi_{m,i^{\prime}_{m},j^{\prime}_{m},q}^{2}.$
Recalling the inductive assumption (3.16), we have that the above sum only
includes pairs of indices $j_{m}$ and $j^{\prime}_{m}$ such that
$|j_{m}-j^{\prime}_{m}|\leq 1$. So we may assume that
$(x,t)\in\mathrm{supp\,}\psi_{m,i_{m},j_{m},q}\cap\mathrm{supp\,}\psi_{m,i^{\prime}_{m},j^{\prime}_{m},q},$
(6.17)
where $|j_{m}-j^{\prime}_{m}|\leq 1$. The first and simplest case is the case
$j_{m}=j^{\prime}_{m}$. We then appeal to (6.10) to deduce that it must be the
case that $|i_{m}-i^{\prime}_{m}|\leq 1$ in order for (6.17) to be true.
Before moving to the second and third cases, we first show that by symmetry it
will suffice to prove that $\psi_{m,i_{m},q}\psi_{m,i^{\prime}_{m},q}=0$ when
$i_{m}^{\prime}\leq i_{m}-2$. Assuming this has been proven, let
$i_{m_{1}},i_{m_{2}}$ be given with $|i_{m_{1}}-i_{m_{2}}|\geq 2$. Without
loss of generality we may assume that $i_{m_{1}}\geq i_{m_{2}}$, which implies
that $i_{m_{1}}\geq i_{m_{2}}+2$. Using the assumption and setting
$i_{m_{2}}=i^{\prime}_{m}$ and $i_{m_{1}}=i_{m}$, we deduce that
$\psi_{m,i_{m_{1}},q}\psi_{m,i_{m_{2}},q}=0$. Thus, we have reduced the proof
to showing that $\psi_{m,i_{m},q}\psi_{m,i^{\prime}_{m},q}=0$ when
$i_{m}^{\prime}\leq i_{m}-2$, which we will show next by contradiction.
Let us consider the second case, $j^{\prime}_{m}=j_{m}+1$. When
$i_{m}=i_{*}(j_{m})$, using that $i_{*}(j_{m})\leq i_{*}(j_{m}+1)$, we obtain
$i^{\prime}_{m}\leq
i_{m}-2=i_{*}(j_{m})-2<i_{*}(j_{m}+1)=i_{*}(j^{\prime}_{m}),$
and so by Definition 6.4, we have that
$\psi_{m,i^{\prime}_{m},j^{\prime}_{m},q}=0$. Thus, in this case there is
nothing to prove, and we need to only consider the case $i_{m}>i_{*}(j_{m})$.
From (6.17), points 1 and 2 from Lemma 6.2, and Definition 6.4, we have that
$\displaystyle h_{m,j_{m},q}(x,t)$
$\displaystyle\in\left[\frac{1}{2}\Gamma_{q+1}^{(m+1)(i_{m}-i_{*}(j_{m}))},\Gamma_{q+1}^{(m+1)(i_{m}+1-i_{*}(j_{m}))}\right],$
(6.18a) $\displaystyle h_{m,j_{m}+1,q}(x,t)$
$\displaystyle\leq\Gamma_{q+1}^{(m+1)(i^{\prime}_{m}+1-i_{*}(j_{m}+1))}.$
(6.18b)
Note that from the definition of $h_{m,j_{m},q}$ in (6.6), we have that
$\Gamma_{q+1}^{(m+1)(i_{*}\left(j_{m}+1\right)-i_{*}\left(j_{m}\right))}h_{m,j_{m}+1,q}=h_{m,j_{m},q}.$
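This identity can be read off from (6.6), since the only $j_{m}$-dependence there enters through the prefactor $\Gamma_{q+1}^{-2i_{*}(j_{m})}$ and through the factor $\Gamma_{q+1}^{i_{*}(j_{m})}$ inside the material-derivative weight. Writing $H_{m,q}$ for the $j_{m}$-independent remainder (a notation used only in this sketch), we have

```latex
h_{m,j_{m},q}^{2}
  = \Gamma_{q+1}^{-2(m+1)\,i_{*}(j_{m})}\, H_{m,q}^{2}\,,
\qquad
H_{m,q}^{2} := \sum_{n=0}^{\mathsf{N}_{\rm cut,x}}
    \delta_{q}^{-1}\left(\lambda_{q}\Gamma_{q}\right)^{-2n}
    \left(\tau_{q-1}^{-1}\Gamma_{q+1}^{2}\right)^{-2m}
    |D^{n}D_{t,q-1}^{m}u_{q}|^{2}\,,
```

so that $h_{m,j_{m},q}=\Gamma_{q+1}^{(m+1)(i_{*}(j_{m}+1)-i_{*}(j_{m}))}h_{m,j_{m}+1,q}$, as claimed.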
Then, since $i^{\prime}_{m}\leq i_{m}-2$, from (6.18b) we have that
$\displaystyle\Gamma_{q+1}^{-(m+1)(i_{m}-i_{*}(j_{m}))}h_{m,j_{m},q}$
$\displaystyle=\Gamma_{q+1}^{-(m+1)(i_{m}-i_{*}(j_{m}))}h_{m,j_{m}+1,q}\Gamma_{q+1}^{(m+1)(i_{*}\left(j_{m}+1\right)-i_{*}\left(j_{m}\right))}$
$\displaystyle\leq\Gamma_{q+1}^{-(m+1)(i_{m}-i_{*}(j_{m}))}\Gamma_{q+1}^{(m+1)(i^{\prime}_{m}+1-i_{*}(j_{m}+1))}\Gamma_{q+1}^{(m+1)(i_{*}\left(j_{m}+1\right)-i_{*}\left(j_{m}\right))}$
$\displaystyle=\Gamma_{q+1}^{(m+1)(i^{\prime}_{m}+1-i_{m})}$
$\displaystyle\leq\Gamma_{q+1}^{-(m+1)}\,.$
Since $m\geq 0$, the above estimate contradicts the lower bound on
$h_{m,j_{m},q}$ in (6.18a) because $\Gamma_{q+1}^{-1}\ll\nicefrac{{1}}{{2}}$
for $a$ sufficiently large.
We move to the third and final case, $j^{\prime}_{m}=j_{m}-1$. As before, if
$i_{m}=i_{*}(j_{m})$, then since $i_{*}(j_{m})\leq i_{*}(j_{m}-1)+1$, we have
that
$i^{\prime}_{m}\leq i_{m}-2=i_{*}(j_{m})-2\leq
i_{*}(j_{m}-1)-1<i_{*}(j_{m}-1)=i_{*}(j^{\prime}_{m})\,,$
which by Definition 6.4 implies that
$\psi_{m,i^{\prime}_{m},j^{\prime}_{m},q}=0$, and there is nothing to prove.
Thus, we only must consider the case $i_{m}>i_{*}(j_{m})$. Using the
definition (6.6) we have that
$h_{m,j_{m},q}=\Gamma_{q+1}^{(m+1)(i_{*}(j_{m}-1)-i_{*}(j_{m}))}h_{m,j_{m}-1,q}\,.$
On the other hand, for $i^{\prime}_{m}\leq i_{m}-2$ we have from (6.18b) that
$\displaystyle
h_{m,j_{m}-1,q}\leq\Gamma_{q+1}^{(m+1)(i^{\prime}_{m}+1-i_{*}(j_{m}-1))}\leq\Gamma_{q+1}^{(m+1)(i_{m}-1-i_{*}(j_{m}-1))}\,.$
Therefore, combining the above two displays and the inequality
$-i_{*}(j_{m})\geq-i_{*}(j_{m}-1)-1$, we obtain the bound
$\displaystyle\Gamma_{q+1}^{-(m+1)(i_{m}-i_{*}(j_{m}))}h_{m,j_{m},q}$
$\displaystyle\leq\Gamma_{q+1}^{-(m+1)(i_{m}-i_{*}(j_{m}))}\Gamma_{q+1}^{(m+1)(i_{*}(j_{m}-1)-i_{*}(j_{m}))}\Gamma_{q+1}^{(m+1)(i_{m}-1-i_{*}(j_{m}-1))}$
$\displaystyle=\Gamma_{q+1}^{-(m+1)}\,.$
As before, since $m\geq 0$ this produces a contradiction with the lower bound
on $h_{m,j_{m},q}$ given in (6.18a), since
$\Gamma_{q+1}^{-1}\ll\nicefrac{{1}}{{2}}$. ∎
With Lemma 6.7 in hand, we can now verify the inductive assumption (3.16) at
level $q$.
###### Lemma 6.8 ($\psi_{i,q}$ is a partition of unity).
We have that for $q\geq 0$,
$\displaystyle\sum_{i\geq 0}\psi_{i,q}^{2}\equiv
1\,,\qquad\psi_{i,q}\psi_{i^{\prime},q}=0\quad\textnormal{for}\quad|i-i^{\prime}|\geq
2.$ (6.19)
###### Proof of Lemma 6.8.
When $q=0$, both statements are immediate from (6.13). To prove the first
claim for $q\geq 1$, let us introduce the notation
$\Lambda_{i}=\left\\{\vec{i}=(i_{0},...,i_{\mathsf{N}_{\rm
cut,t}})\colon\max_{0\leq m\leq\mathsf{N}_{\rm cut,t}}i_{m}=i\right\\}$
(6.20)
Then
$\psi_{i,q}^{2}=\sum_{\vec{i}\in\Lambda_{i}}\prod\limits_{m=0}^{\mathsf{N}_{\rm
cut,t}}\psi_{m,i_{m},q}^{2},$
and thus
$\displaystyle\sum_{i\geq 0}\psi_{i,q}^{2}=\sum_{i\geq
0}\sum_{\vec{i}\in\Lambda_{i}}\prod\limits_{m=0}^{\mathsf{N}_{\rm
cut,t}}\psi_{m,i_{m},q}^{2}$
$\displaystyle=\sum_{\vec{i}\in\mathbb{N}_{0}^{\mathsf{N}_{\rm
cut,t}+1}}\left(\prod\limits_{m=0}^{\mathsf{N}_{\rm
cut,t}}\psi_{m,i_{m},q}^{2}\right)$
$\displaystyle=\prod\limits_{m=0}^{\mathsf{N}_{\rm
cut,t}}\left(\sum_{i_{m}\geq
0}\psi_{m,i_{m},q}^{2}\right)=\prod\limits_{m=0}^{\mathsf{N}_{\rm cut,t}}1=1$
after using (6.16).
To prove the second claim, assume towards a contradiction that there exist
indices with $|i-i^{\prime}|\geq 2$ such that
$\psi_{i,q}\psi_{i^{\prime},q}\not\equiv 0$. Then
$\displaystyle 0$
$\displaystyle\neq\psi_{i,q}^{2}\psi_{i^{\prime},q}^{2}=\sum_{\vec{i}\in\Lambda_{i}}\sum_{\vec{i}^{\prime}\in\Lambda_{i^{\prime}}}\prod\limits_{m=0}^{\mathsf{N}_{\rm
cut,t}}\psi_{m,i_{m},q}^{2}\psi_{m,i^{\prime}_{m},q}^{2}.$ (6.21)
In order for (6.21) to be non-vanishing, by (6.16), there must exist
$\vec{i}=(i_{0},...,i_{\mathsf{N}_{\rm cut,t}})\in\Lambda_{i}$ and
$\vec{i}^{\prime}=(i^{\prime}_{0},...,i^{\prime}_{\mathsf{N}_{\rm
cut,t}})\in\Lambda_{i^{\prime}}$ such that $|i_{m}-i^{\prime}_{m}|\leq 1$ for
all $0\leq m\leq\mathsf{N}_{\rm cut,t}$. By the definition of $i$ and
$i^{\prime}$, there exist $m_{*}$ and $m^{\prime}_{*}$ such that
$i_{m_{*}}=\max_{m}i_{m}=i,\qquad
i^{\prime}_{m^{\prime}_{*}}=\max_{m}i^{\prime}_{m}=i^{\prime}.$
But then
$\displaystyle i$ $\displaystyle=i_{m_{*}}\leq i^{\prime}_{m_{*}}+1\leq
i^{\prime}_{m^{\prime}_{*}}+1=i^{\prime}+1$ $\displaystyle i^{\prime}$
$\displaystyle=i^{\prime}_{m^{\prime}_{*}}\leq i_{m^{\prime}_{*}}+1\leq
i_{m_{*}}+1=i+1,$
implying that $|i-i^{\prime}|\leq 1$, a contradiction. ∎
In view of the preceding two lemmas and (6.10), and for convenience of
notation, we define
$\displaystyle\psi_{i\pm,q}(x,t)$
$\displaystyle=\left(\psi^{2}_{i-1,q}(x,t)+\psi^{2}_{i,q}(x,t)+\psi^{2}_{i+1,q}(x,t)\right)^{\nicefrac{{1}}{{2}}},$
(6.22)
which are cutoffs with the property that
$\displaystyle\psi_{i\pm,q}$ $\displaystyle\equiv
1\quad\mbox{on}\quad{\mathrm{supp\,}(\psi_{i,q})}.$ (6.23)
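Indeed, (6.23) is an immediate consequence of the two lemmas above: on $\mathrm{supp\,}(\psi_{i,q})$, every cutoff $\psi_{j,q}$ with $|j-i|\geq 2$ vanishes, so the partition of unity collapses to the three retained terms,
$\displaystyle\psi_{i\pm,q}^{2}=\psi_{i-1,q}^{2}+\psi_{i,q}^{2}+\psi_{i+1,q}^{2}=\sum_{j\geq 0}\psi_{j,q}^{2}=1\quad\mbox{on}\quad\mathrm{supp\,}(\psi_{i,q})\,.$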
###### Remark 6.9 (Rewriting $\psi_{i,q}$).
The definition (6.14) is not convenient to use directly for estimating
material derivatives of the $\psi_{i,q}$ cutoffs, because differentiating the
terms $\psi_{m,i_{m},q}$ individually ignores certain cancellations which
arise due to the fact that $\\{\psi_{m,i_{m},q}\\}_{i_{m}\geq 0}$ is a
partition of unity (as was shown above in Lemma 6.7). For this purpose, we re-sum the terms in the definition (6.14) as follows. For any given $0\leq
m\leq\mathsf{N}_{\rm cut,t}$, we introduce the summed cutoff function
$\displaystyle\Psi_{m,i,q}^{2}=\sum_{i_{m}=0}^{i}\psi_{m,i_{m},q}^{2}$ (6.24)
and note via Lemma 6.7 its chief property:
$\displaystyle
D(\Psi_{m,i,q}^{2})=D(\psi_{m,i,q}^{2}){\mathbf{1}}_{\mathrm{supp\,}(\psi_{m,i+1,q})}\,.$
(6.25)
The above inclusion holds because on the support of $\psi_{m,i_{m},q}$ with
$i_{m}<i$, we have that $\Psi_{m,i,q}\equiv 1$. With the notation (6.24) we
return to the definition (6.14) and note that
$\displaystyle\psi_{i,q}^{2}$ $\displaystyle=\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\psi_{m,i,q}^{2}\prod_{m^{\prime}=0}^{m-1}\Psi_{m^{\prime},i,q}^{2}\prod_{m^{\prime\prime}=m+1}^{\mathsf{N}_{\rm
cut,t}}(\Psi_{m^{\prime\prime},i,q}^{2}-\psi_{m^{\prime\prime},i,q}^{2})$
$\displaystyle=\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\psi_{m,i,q}^{2}\prod_{m^{\prime}=0}^{m-1}\Psi_{m^{\prime},i,q}^{2}\prod_{m^{\prime\prime}=m+1}^{\mathsf{N}_{\rm
cut,t}}\Psi_{m^{\prime\prime},i-1,q}^{2}\,.$ (6.26)
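For completeness, let us indicate why the first equality in (6.26) holds; it is a telescoping identity. Abbreviating $A_{m}=\Psi_{m,i,q}^{2}$ and $B_{m}=\Psi_{m,i-1,q}^{2}$, so that $A_{m}-B_{m}=\psi_{m,i,q}^{2}$ by (6.24), we have the generic telescoping identity
$\displaystyle\sum_{m=0}^{\mathsf{N}_{\rm cut,t}}(A_{m}-B_{m})\prod_{m^{\prime}=0}^{m-1}A_{m^{\prime}}\prod_{m^{\prime\prime}=m+1}^{\mathsf{N}_{\rm cut,t}}B_{m^{\prime\prime}}=\prod_{m=0}^{\mathsf{N}_{\rm cut,t}}A_{m}-\prod_{m=0}^{\mathsf{N}_{\rm cut,t}}B_{m}\,.$
Expanding the products via (6.24), the right side equals the sum of $\prod_{m}\psi_{m,i_{m},q}^{2}$ over all $\vec{i}$ with $\max_{m}i_{m}\leq i$, minus the corresponding sum over all $\vec{i}$ with $\max_{m}i_{m}\leq i-1$; the difference is exactly the sum over $\vec{i}\in\Lambda_{i}$, which is $\psi_{i,q}^{2}$ by (6.14). The second equality in (6.26) is then just the identity $\Psi_{m^{\prime\prime},i,q}^{2}-\psi_{m^{\prime\prime},i,q}^{2}=\Psi_{m^{\prime\prime},i-1,q}^{2}$.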
###### Remark 6.10 (Size of maximal $j_{m}$ in (6.11)).
Define $j_{*}(i,q)=\max\\{j\colon i_{*}(j)\leq i\\}$ to be the largest index
of $j_{m}$ appearing in the sum in (6.11). We note here that
$\displaystyle\Gamma_{q+1}^{i-1}<\Gamma_{q}^{j_{*}(i,q)}\leq\Gamma_{q+1}^{i}$
(6.27)
holds. This fact will be used later on in the proof in conjunction with Lemma
6.14 to bound the maximal values of $j_{m}$.
The following lemma is a direct consequence of the definitions of the cutoffs.
###### Lemma 6.11.
If $(x,t)\in\mathrm{supp\,}(\psi_{m,i_{m},j_{m},q})$ then
$\displaystyle
h_{m,j_{m},q}\leq\Gamma_{q+1}^{(m+1)\left(i_{m}+1-i_{*}(j_{m})\right)}.$
(6.28)
Moreover, if $i_{m}>i_{*}(j_{m})$ we have
$\displaystyle
h_{m,j_{m},q}\geq(\nicefrac{{1}}{{2}})\Gamma_{q+1}^{(m+1)(i_{m}-i_{*}(j_{m}))}$
(6.29)
on the support of $\psi_{m,i_{m},j_{m},q}$. As a consequence, we have
$\displaystyle\left\|D^{N}D_{t,q-1}^{m}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{m,i_{m},q})}$
$\displaystyle\leq\delta_{q}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1}^{i_{m}+1}(\lambda_{q}\Gamma_{q})^{N}(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{m}+3})^{m}$
(6.30)
$\displaystyle\left\|D^{N}D_{t,q-1}^{M}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\leq\delta_{q}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1}^{i+1}(\lambda_{q}\Gamma_{q})^{N}(\tau_{q-1}^{-1}\Gamma_{q+1}^{i+3})^{M}$
(6.31)
for all $0\leq m,M\leq\mathsf{N}_{\rm cut,t}$ and $0\leq N\leq\mathsf{N}_{\rm
cut,x}$.
###### Proof of Lemma 6.11.
Estimates (6.28) and (6.29) follow directly from the definitions of
$\widetilde{\psi}_{m,q+1}$ and $\psi_{m,q+1}$. In order to prove (6.30), we
note that for $(x,t)\in\mathrm{supp\,}(\psi_{m,i_{m},q})$, by (6.11) there
must exist a $j_{m}$ with $i_{*}(j_{m})\leq i_{m}$ such that
$(x,t)\in\mathrm{supp\,}(\psi_{m,i_{m},j_{m},q})$. Using (6.28), we conclude
that
$\displaystyle\left\|D^{N}D_{t,q-1}^{m}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{m,i_{m},j_{m},q})}$
$\displaystyle\leq\Gamma_{q+1}^{(m+1)(i_{m}+1-i_{*}(j_{m}))}\Gamma_{q+1}^{i_{*}(j_{m})}(\Gamma_{q}\lambda_{q})^{N}(\Gamma_{q+1}^{i_{*}(j_{m})+2}\tau_{q-1}^{-1})^{m}\delta_{q}^{\nicefrac{{1}}{{2}}}$
$\displaystyle=\delta_{q}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1}^{i_{m}+1}\left(\lambda_{q}\Gamma_{q}\right)^{N}\left(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{m}+3}\right)^{m}$
(6.32)
which completes the proof of (6.30). The proof of (6.31) follows from the fact
that we have employed the maximum over $m$ of $i_{m}$ to define $\psi_{i,q}$
in (6.14). ∎
An immediate corollary of the bound (5.9) and of the previous Lemma is that
estimates for the derivatives of $u_{q}$ are also available on the support of
$\psi_{i,q}$, instead of $\psi_{i,q-1}$.
###### Corollary 6.12.
For $N,M\leq 3\mathsf{N}_{\textnormal{ind,v}}$, and $i\geq 0$, we have the
bound
$\displaystyle\left\|D^{N}D_{t,q-1}^{M}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}\lesssim\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(N,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)\,.$
(6.33)
Recall that when $N>3\mathsf{N}_{\textnormal{ind,v}}$ or
$M>3\mathsf{N}_{\textnormal{ind,v}}$, with $N+M\leq 2\mathsf{N}_{\rm
fin}$, suitable estimates for $D^{N}D^{M}_{t,q-1}u_{q}$ are already provided
by (5.6).
###### Proof of Corollary 6.12.
When $0\leq N\leq\mathsf{N}_{\rm cut,x}$ and $0\leq M\leq\mathsf{N}_{\rm
cut,t}\leq\mathsf{N}_{\textnormal{ind,t}}$, the desired bound was already
established in (6.31).
For the remaining cases, note that if $0\leq m\leq\mathsf{N}_{\rm cut,t}$ and
$(x,t)\in\mathrm{supp\,}\psi_{m,i_{m},q}$, there exists $j_{m}\geq 0$ with
$i_{*}(j_{m})\leq i_{m}$, such that $(x,t)\in\mathrm{supp\,}\psi_{j_{m},q-1}$.
Thus, we may appeal to (5.9) and deduce that
$\displaystyle\left|D^{N}D^{M}_{t,q-1}u_{q}\right|\lesssim\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}}\mathcal{M}\left(N,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q}^{j_{m}+1}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)\,.$
Since $i_{*}(j_{m})\leq i_{m}$ implies
$\Gamma_{q}^{j_{m}}\leq\Gamma_{q+1}^{i_{m}}$, we deduce that
$\displaystyle\left\|D^{N}D^{M}_{t,q-1}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{m,i_{m},q})}\lesssim\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}}\mathcal{M}\left(N,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i_{m}+1}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)\,.$
Note that the above estimate does not have a factor of
$\Gamma_{q+1}^{i_{m}+1}$ next to the $\delta_{q}^{\nicefrac{{1}}{{2}}}$ at the
amplitude.
We now consider two cases. If $\mathsf{N}_{\rm cut,x}<N\leq
3\mathsf{N}_{\textnormal{ind,v}}$, then
$\displaystyle\mathcal{M}\left(N,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\lesssim\Gamma_{q}^{-\mathsf{N}_{\rm
cut,x}}\mathcal{M}\left(N,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\,.$
On the other hand, if $\mathsf{N}_{\rm cut,t}<M\leq
3\mathsf{N}_{\textnormal{ind,v}}$, then
$\displaystyle\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i_{m}+1}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)\lesssim\Gamma_{q+1}^{-2\mathsf{N}_{\rm
cut,t}}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i_{m}+3}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)\,.$
Combining the above three displays, and recalling the definition of
$\psi_{i,q}$ in (6.14), we deduce that if either $N>\mathsf{N}_{\rm cut,x}$ or
$M>\mathsf{N}_{\rm cut,t}$, we have
$\displaystyle\left\|D^{N}D^{M}_{t,q-1}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\lesssim\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}}\max\\{\Gamma_{q}^{-\mathsf{N}_{\rm
cut,x}},\Gamma_{q+1}^{-2\mathsf{N}_{\rm
cut,t}}\\}\mathcal{M}\left(N,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+1}\Gamma_{q}^{2}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)\,,$
and the proof of (6.33) is completed by taking $\mathsf{N}_{\rm cut,x}$ and
$\mathsf{N}_{\rm cut,t}$ sufficiently large to ensure that
$\displaystyle\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}}\max\\{\Gamma_{q}^{-\mathsf{N}_{\rm
cut,x}},\Gamma_{q+1}^{-2\mathsf{N}_{\rm cut,t}}\\}\leq 1\,.$ (6.34)
This condition holds by (9.51). ∎
#### 6.2.2 Pure spatial derivatives
In this section we prove that the cutoff functions $\psi_{i,q}$ satisfy sharp
spatial derivative estimates, which are consistent with (3.19) for
$q^{\prime}=q$.
###### Lemma 6.13 (Spatial derivatives for the cutoffs).
Fix $q\geq 1$, $0\leq m\leq\mathsf{N}_{\rm cut,t}$, and $i_{m}\geq 0$. For all
$j_{m}\geq 0$ such that $i_{m}\geq i_{*}(j_{m})$ and all $N\leq\mathsf{N}_{\rm
fin}$, we have
$\displaystyle{\mathbf{1}}_{\mathrm{supp\,}(\psi_{j_{m},q-1})}\frac{|D^{N}\psi_{m,i_{m},j_{m},q}|}{\psi_{m,i_{m},j_{m},q}^{1-N/\mathsf{N}_{\rm
fin}}}\lesssim\mathcal{M}\left(N,\mathsf{N}_{\textnormal{ind,v}},\lambda_{q}\Gamma_{q},\widetilde{\lambda}_{q}\Gamma_{q}\right)\,,$
(6.35)
which in turn implies
$\displaystyle\frac{|D^{N}\psi_{i,q}|}{\psi_{i,q}^{1-N/\mathsf{N}_{\rm
fin}}}\lesssim\mathcal{M}\left(N,\mathsf{N}_{\textnormal{ind,v}},\lambda_{q}\Gamma_{q},\widetilde{\lambda}_{q}\Gamma_{q}\right)$
(6.36)
for all $i\geq 0$, all $N\leq\mathsf{N}_{\rm fin}$.
###### Proof of Lemma 6.13.
We first show that (5.9) implies (6.35). We distinguish two cases. The first
case is when $\psi=\widetilde{\psi}_{m,q+1}$, or $\psi=\psi_{m,q+1}$ _and_ we
have the lower bound
$h_{m,j_{m},q}^{2}\Gamma_{q+1}^{-2\left(i_{m}-i_{*}(j_{m})\right)(m+1)}\geq\frac{1}{4}\Gamma_{q+1}^{2(m+1)}$
(6.37)
so that (6.5) applies. The goal is then to apply Lemma A.4 to the function
$\psi=\widetilde{\psi}_{m,q+1}$ or $\psi=\psi_{m,q+1}$ as described above in
conjunction with $\Gamma_{\psi}=\Gamma_{q+1}^{m+1}$,
$\Gamma=\Gamma_{q+1}^{(m+1)(i_{m}-i_{*}(j_{m}))}$, and
$h(x,t)=(h_{m,j_{m},q}(x,t))^{2}$. The assumption (A.21) holds by (6.3) or
(6.5) for all $N\leq\mathsf{N}_{\rm fin}$, and so we need to obtain bounds on
the derivatives of $h_{m,j_{m},q}^{2}$, which are consistent with assumption
(A.22) of Lemma A.4. For $B\leq\mathsf{N}_{\rm fin}$, the Leibniz rule gives
$\displaystyle\left|D^{B}h_{m,j_{m},q}^{2}\right|$
$\displaystyle\lesssim(\lambda_{q}\Gamma_{q})^{B}\sum_{B^{\prime}=0}^{B}\sum_{n=0}^{\mathsf{N}_{\rm
cut,x}}\Gamma_{q+1}^{-i_{*}(j_{m})}(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{*}(j_{m})+2})^{-m}(\lambda_{q}\Gamma_{q})^{-n-B^{\prime}}\delta_{q}^{-\nicefrac{{1}}{{2}}}|D^{n+B^{\prime}}D^{m}_{t,q-1}u_{q}|$
$\displaystyle\qquad\qquad\times\Gamma_{q+1}^{-i_{*}(j_{m})}(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{*}(j_{m})+2})^{-m}(\lambda_{q}\Gamma_{q})^{-n-B+B^{\prime}}\delta_{q}^{-\nicefrac{{1}}{{2}}}|D^{n+B-B^{\prime}}D^{m}_{t,q-1}u_{q}|\,.$
(6.38)
For the terms with $L\in\\{n+B^{\prime},n+B-B^{\prime}\\}\leq\mathsf{N}_{\rm
cut,x}$ we may appeal to estimate (6.28), which gives
$\displaystyle\Gamma_{q+1}^{-i_{*}(j_{m})}(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{*}(j_{m})+2})^{-m}(\lambda_{q}\Gamma_{q})^{-L}\delta_{q}^{-\nicefrac{{1}}{{2}}}\left\|D^{L}D_{t,q-1}^{m}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{m,i_{m},j_{m},q})}\leq\Gamma_{q+1}^{(m+1)(i_{m}+1-i_{*}(j_{m}))}\,.$
(6.39)
On the other hand, for $\mathsf{N}_{\rm
cut,x}<L\in\\{n+B^{\prime},n+B-B^{\prime}\\}\leq\mathsf{N}_{\rm cut,x}+B\leq
2\mathsf{N}_{\rm fin}-\mathsf{N}_{\textnormal{ind,t}}$, we may appeal to
estimates (5.6) and (5.9), and since $m\leq\mathsf{N}_{\rm
cut,t}<\mathsf{N}_{\textnormal{ind,t}}$, we deduce that
$\displaystyle\Gamma_{q+1}^{-i_{*}(j_{m})}(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{*}(j_{m})+2})^{-m}(\lambda_{q}\Gamma_{q})^{-L}\delta_{q}^{-\nicefrac{{1}}{{2}}}\left\|D^{L}D_{t,q-1}^{m}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{j_{m},q-1})}$
$\displaystyle\lesssim(\Gamma_{q}^{j_{m}+1}\Gamma_{q+1}^{-i_{*}(j_{m})-2})^{m}(\Gamma_{q}^{-L}\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}})\lambda_{q}^{-L}\mathcal{M}\left(L,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)$
$\displaystyle\lesssim\mathcal{M}\left(L,2\mathsf{N}_{\textnormal{ind,v}},1,\lambda_{q}^{-1}\widetilde{\lambda}_{q}\right)$
$\displaystyle\leq\Gamma_{q+1}^{(m+1)(i_{m}+1-i_{*}(j_{m}))}\mathcal{M}\left(L,2\mathsf{N}_{\textnormal{ind,v}},1,\lambda_{q}^{-1}\widetilde{\lambda}_{q}\right).$
(6.40)
In the last inequality we have used that $i_{m}\geq i_{*}(j_{m})$, while in
the second to last inequality we have used that if $L\geq\mathsf{N}_{\rm
cut,x}$ then
$\Gamma_{q}^{L}\geq\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}}$, which
follows once $\mathsf{N}_{\rm cut,x}$ is chosen to be sufficiently large, as
in (9.51). Summarizing the bounds (6.38)–(6.40), since $n\leq\mathsf{N}_{\rm
cut,x}$, we arrive at
$\displaystyle{\mathbf{1}}_{\mathrm{supp\,}(\psi_{j_{m},q-1}\psi_{m,i_{m},j_{m},q})}\left|D^{B}h_{m,j_{m},q}^{2}\right|$
$\displaystyle\lesssim(\lambda_{q}\Gamma_{q})^{B}\mathcal{M}\left(2\mathsf{N}_{\rm
cut,x}+B,2\mathsf{N}_{\textnormal{ind,v}},1,\lambda_{q}^{-1}\widetilde{\lambda}_{q}\right)\Gamma_{q+1}^{2(m+1)(i_{m}+1-i_{*}(j_{m}))}$
$\displaystyle\lesssim\mathcal{M}\left(B,\mathsf{N}_{\textnormal{ind,v}},\lambda_{q}\Gamma_{q},\widetilde{\lambda}_{q}\Gamma_{q}\right)\Gamma_{q+1}^{2(m+1)(i_{m}+1-i_{*}(j_{m}))}\,.$
whenever $B\leq\mathsf{N}_{\rm fin}$. Here we have used that $2\mathsf{N}_{\rm
cut,x}\leq\mathsf{N}_{\textnormal{ind,v}}$. Thus, assumption (A.22) holds with
$C_{h}=\Gamma_{q+1}^{2(m+1)(i_{m}+1-i_{*}(j_{m}))}$,
$\lambda=\Gamma_{q}\lambda_{q}$, $\Lambda=\widetilde{\lambda}_{q}\Gamma_{q}$,
$N_{*}=\mathsf{N}_{\textnormal{ind,v}}$. Note that with these choices of
parameters, we have $C_{h}\Gamma_{\psi}^{-2}\Gamma^{-2}=1$. We may thus apply
Lemma A.4 and conclude that
$\displaystyle{\mathbf{1}}_{\mathrm{supp\,}(\psi_{j_{m},q-1})}\frac{\left|D^{N}\psi_{m,i_{m},j_{m},q}\right|}{\psi_{m,i_{m},j_{m},q}^{1-N/\mathsf{N}_{\rm
fin}}}\lesssim\mathcal{M}\left(N,\mathsf{N}_{\textnormal{ind,v}},\lambda_{q}\Gamma_{q},\widetilde{\lambda}_{q}\Gamma_{q}\right)$
for all $N\leq\mathsf{N}_{\rm fin}$, proving (6.35) in the first case.
Recalling the inequality (6.37), the second case is when $\psi=\psi_{m,q+1}$
and
$h_{m,j_{m},q}^{2}\Gamma_{q+1}^{-2\left(i_{m}-i_{*}(j_{m})\right)(m+1)}\leq\frac{1}{4}\Gamma_{q+1}^{2(m+1)}.$
(6.41)
However, since $\psi_{m,q+1}$ is uniformly equal to $1$ when the left hand
side of the above display takes values in
$\left[1,\frac{1}{4}\Gamma_{q+1}^{2(m+1)}\right]$, (6.35) is trivially
satisfied. Thus we may reduce to the case that
$h_{m,j_{m},q}^{2}\Gamma_{q+1}^{-2\left(i_{m}-i_{*}(j_{m})\right)(m+1)}\leq
1.$ (6.42)
As in the first case, we aim to apply Lemma A.4 with $h=h_{m,j_{m},q}^{2}$,
but now with $\Gamma_{\psi}=1$ and
$\Gamma=\Gamma_{q+1}^{(m+1)(i_{m}-i_{*}(j_{m}))}$. From (6.4), the assumption
(A.21) holds. Towards estimating derivatives of $h$, for the terms with
$L\in\\{n+B^{\prime},n+B-B^{\prime}\\}\leq\mathsf{N}_{\rm cut,x}$, (6.42)
gives immediately that
$\displaystyle\Gamma_{q+1}^{-i_{*}(j_{m})}(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{*}(j_{m})+2})^{-m}(\lambda_{q}\Gamma_{q})^{-L}\delta_{q}^{-\nicefrac{{1}}{{2}}}\left\|D^{L}D_{t,q-1}^{m}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{m,i_{m},j_{m},q})}\leq\Gamma_{q+1}^{(m+1)(i_{m}-i_{*}(j_{m}))}\,.$
(6.43)
Conversely, when $\mathsf{N}_{\rm cut,x}<L$, we may argue as in the estimates
which gave (6.40), only this time using that since $i_{m}\geq i_{*}(j_{m})$,
we can achieve the slightly improved bound\footnote{This bound was also
available in (6.40), but we wrote the worse bound there to match the chosen
value of $\mathcal{C}_{h}$.}
$\Gamma_{q+1}^{(m+1)(i_{m}-i_{*}(j_{m}))}\mathcal{M}\left(L,2\mathsf{N}_{\textnormal{ind,v}},1,\lambda_{q}^{-1}\widetilde{\lambda}_{q}\right).$
(6.44)
We then arrive at
$\displaystyle{\mathbf{1}}_{\mathrm{supp\,}(\psi_{j_{m},q-1}\psi_{m,i_{m},j_{m},q})}$
$\displaystyle\left|D^{B}h_{m,j_{m},q}^{2}\right|$
$\displaystyle\lesssim(\lambda_{q}\Gamma_{q})^{B}\mathcal{M}\left(2\mathsf{N}_{\rm
cut,x}+B,2\mathsf{N}_{\textnormal{ind,v}},1,\lambda_{q}^{-1}\widetilde{\lambda}_{q}\right)\Gamma_{q+1}^{2(m+1)(i_{m}-i_{*}(j_{m}))}$
$\displaystyle\lesssim\mathcal{M}\left(B,\mathsf{N}_{\textnormal{ind,v}},\lambda_{q}\Gamma_{q},\widetilde{\lambda}_{q}\Gamma_{q}\right)\Gamma_{q+1}^{2(m+1)(i_{m}-i_{*}(j_{m}))}\,.$
whenever $B\leq\mathsf{N}_{\rm fin}$, again using that $2\mathsf{N}_{\rm
cut,x}\leq\mathsf{N}_{\textnormal{ind,v}}$. Thus, assumption (A.22) now holds
with $\mathcal{C}_{h}=\Gamma_{q+1}^{2(m+1)(i_{m}-i_{*}(j_{m}))}$,
$\lambda=\Gamma_{q}\lambda_{q}$, $\Lambda=\widetilde{\lambda}_{q}\Gamma_{q}$,
$N_{*}=\mathsf{N}_{\textnormal{ind,v}}$. Note that with these new choices of
parameters, we still have $\mathcal{C}_{h}\Gamma_{\psi}^{-2}\Gamma^{-2}=1$. We
may thus apply Lemma A.4 and conclude that
$\displaystyle{\mathbf{1}}_{\mathrm{supp\,}(\psi_{j_{m},q-1})}\frac{\left|D^{N}\psi_{m,i_{m},j_{m},q}\right|}{\psi_{m,i_{m},j_{m},q}^{1-N/\mathsf{N}_{\rm
fin}}}\lesssim\mathcal{M}\left(N,\mathsf{N}_{\textnormal{ind,v}},\lambda_{q}\Gamma_{q},\widetilde{\lambda}_{q}\Gamma_{q}\right)$
for all $N\leq\mathsf{N}_{\rm fin}$, proving (6.35) in the second case.
From the definition (6.11), and the bound (6.35) we next estimate derivatives
of the $m^{th}$ velocity cutoff function $\psi_{m,i_{m},q}$, and claim that
$\displaystyle\frac{|D^{N}\psi_{m,i_{m},q}|}{\psi_{m,i_{m},q}^{1-N/\mathsf{N}_{\rm
fin}}}\lesssim\mathcal{M}\left(N,\mathsf{N}_{\textnormal{ind,v}},\lambda_{q}\Gamma_{q},\widetilde{\lambda}_{q}\Gamma_{q}\right)$
(6.45)
for all $i_{m}\geq 0$, all $N\leq\mathsf{N}_{\rm fin}$. We prove (6.45) by
induction on $N$. When $N=0$ the bound trivially holds, which gives the
induction base. For the induction step, assume that (6.45) holds for all
$N^{\prime}\leq N-1$. By the Leibniz rule we obtain
$\displaystyle
D^{N}(\psi_{m,i_{m},q}^{2})=2\psi_{m,i_{m},q}D^{N}\psi_{m,i_{m},q}+\sum_{N^{\prime}=1}^{N-1}{N\choose
N^{\prime}}D^{N^{\prime}}\psi_{m,i_{m},q}\,D^{N-N^{\prime}}\psi_{m,i_{m},q}$
(6.46)
and thus
$\displaystyle\frac{D^{N}\psi_{m,i_{m},q}}{\psi_{m,i_{m},q}^{1-N/\mathsf{N}_{\rm
fin}}}$
$\displaystyle=\frac{D^{N}(\psi_{m,i_{m},q}^{2})}{2\psi_{m,i_{m},q}^{2-N/\mathsf{N}_{\rm
fin}}}-\frac{1}{2}\sum_{N^{\prime}=1}^{N-1}{N\choose
N^{\prime}}\frac{D^{N^{\prime}}\psi_{m,i_{m},q}}{\psi_{m,i_{m},q}^{1-N^{\prime}/\mathsf{N}_{\rm
fin}}}\frac{D^{N-N^{\prime}}\psi_{m,i_{m},q}}{\psi_{m,i_{m},q}^{1-(N-N^{\prime})/\mathsf{N}_{\rm
fin}}}.$
Since $N^{\prime},N-N^{\prime}\leq N-1$ by the induction assumption (6.45) we
obtain
$\displaystyle\frac{\left|D^{N}\psi_{m,i_{m},q}\right|}{\psi_{m,i_{m},q}^{1-N/\mathsf{N}_{\rm
fin}}}$
$\displaystyle\lesssim\frac{|D^{N}(\psi_{m,i_{m},q}^{2})|}{\psi_{m,i_{m},q}^{2-N/\mathsf{N}_{\rm
fin}}}+\mathcal{M}\left(N,\mathsf{N}_{\textnormal{ind,v}},\lambda_{q}\Gamma_{q},\widetilde{\lambda}_{q}\Gamma_{q}\right).$
(6.47)
Thus, establishing (6.45) for the $N$th derivative, reduces to bounding the
first term on the right side of the above. For this purpose we recall (6.11)
and compute
$\displaystyle\frac{\left|D^{N}(\psi_{m,i_{m},q}^{2})\right|}{\psi_{m,i_{m},q}^{2-N/\mathsf{N}_{\rm
fin}}}$ $\displaystyle=\frac{1}{\psi_{m,i_{m},q}^{2-N/\mathsf{N}_{\rm
fin}}}\sum_{\\{j_{m}\colon i_{*}(j_{m})\leq i_{m}\\}}\sum_{K=0}^{N}{N\choose
K}D^{K}(\psi_{j_{m},q-1}^{2})D^{N-K}(\psi_{m,i_{m},j_{m},q}^{2})$
$\displaystyle=\sum_{\\{j_{m}\colon i_{*}(j_{m})\leq
i_{m}\\}}\sum_{K=0}^{N}\sum_{L_{1}=0}^{K}\sum_{L_{2}=0}^{N-K}{N\choose
K}{K\choose L_{1}}{N-K\choose
L_{2}}\frac{\psi_{j_{m},q-1}^{2-K/\mathsf{N}_{\rm
fin}}\psi_{m,i_{m},j_{m},q}^{2-(N-K)/\mathsf{N}_{\rm
fin}}}{\psi_{m,i_{m},q}^{2-N/\mathsf{N}_{\rm fin}}}$
$\displaystyle\qquad\qquad\times\frac{D^{L_{1}}\psi_{j_{m},q-1}}{\psi_{j_{m},q-1}^{1-L_{1}/\mathsf{N}_{\rm
fin}}}\frac{D^{K-L_{1}}\psi_{j_{m},q-1}}{\psi_{j_{m},q-1}^{1-(K-L_{1})/\mathsf{N}_{\rm
fin}}}\frac{D^{L_{2}}\psi_{m,i_{m},j_{m},q}}{\psi_{m,i_{m},j_{m},q}^{1-L_{2}/\mathsf{N}_{\rm
fin}}}\frac{D^{N-K-
L_{2}}\psi_{m,i_{m},j_{m},q}}{\psi_{m,i_{m},j_{m},q}^{1-(N-K-
L_{2})/\mathsf{N}_{\rm fin}}}\,.$
Since $K,N-K\leq N$, and $\psi_{j_{m},q-1},\psi_{m,i_{m},j_{m},q}\leq 1$, we
have by (6.11) that
$\displaystyle\frac{\psi_{j_{m},q-1}^{2-K/\mathsf{N}_{\rm
fin}}\psi_{m,i_{m},j_{m},q}^{2-(N-K)/\mathsf{N}_{\rm
fin}}}{\psi_{m,i_{m},q}^{2-N/\mathsf{N}_{\rm
fin}}}\leq\frac{\psi_{j_{m},q-1}^{2-N/\mathsf{N}_{\rm
fin}}\psi_{m,i_{m},j_{m},q}^{2-N/\mathsf{N}_{\rm
fin}}}{\psi_{m,i_{m},q}^{2-N/\mathsf{N}_{\rm fin}}}\leq 1.$
Furthermore, the estimate (6.35), the inductive assumption (3.19), combined
with the parameter estimate
$\Gamma_{q-1}\widetilde{\lambda}_{q-1}\leq\Gamma_{q}\lambda_{q}$ (see (9.38))
and the previous three displays, conclude the proof of (6.45). In particular,
note that this upper bound is independent of the value of $i_{m}$.
In order to conclude the proof of the Lemma, we argue that (6.45) implies
(6.36). Recalling (6.14), we have that $\psi_{i,q}^{2}$ is given as a sum of
products of $\psi_{m,i_{m},q}^{2}$, for which suitable derivative bounds are
available (due to (6.45)). Thus, the proof of (6.36) is again done by
induction on $N$, mutatis mutandis to the proof of (6.45): indeed, we note that
$\psi_{m,i_{m},q}^{2}$ was also given as a sum of squares of cutoff functions,
for which derivative bounds were available. The proof of the induction step is
thus again based on the application of the Leibniz rule for $\psi_{i,q}^{2}$;
in order to avoid redundancy we omit these details. ∎
#### 6.2.3 Maximal indices appearing in the cutoff
A consequence of the inductive assumptions, Lemma 6.11, and of Lemma 6.13
above, is that we may a priori estimate the maximal $i$ appearing in
$\psi_{i,q}$, labeled as $i_{\mathrm{max}}(q)$.
###### Lemma 6.14 (Maximal $i$ index in the definition of the cutoff).
There exists ${i_{\rm max}}={i_{\rm max}}(q)\geq 0$, determined by the formula
(6.53) below, such that
$\displaystyle\psi_{i,q}\equiv 0\quad\mbox{for all}\quad i>i_{\mathrm{max}}$
(6.48)
and
$\displaystyle\Gamma_{q+1}^{i_{\mathrm{max}}}\leq\lambda_{q}^{\nicefrac{{5}}{{3}}}$
(6.49)
holds for all $q\geq 0$.
Moreover ${i_{\rm max}}(q)$ is bounded uniformly in $q$ as
$\displaystyle{i_{\rm max}}(q)\leq{\frac{4}{\varepsilon_{\Gamma}(b-1)}}\,,$
(6.50)
assuming $\lambda_{0}$ is sufficiently large.
###### Proof of Lemma 6.14.
Assume $i\geq 0$ is such that $\mathrm{supp\,}(\psi_{i,q})\neq\emptyset$. Our
goal is to prove that $\Gamma_{q+1}^{i}\leq\lambda_{q}^{\nicefrac{{5}}{{3}}}$.
From (6.14) it follows that for any $(x,t)\in\mathrm{supp\,}(\psi_{i,q})$,
there must exist at least one $\vec{i}=(i_{0},\ldots,i_{\mathsf{N}_{\rm
cut,t}})$ such that $\max\limits_{0\leq m\leq\mathsf{N}_{\rm cut,t}}i_{m}=i$,
and with $\psi_{m,i_{m},q}(x,t)\neq 0$ for all $0\leq m\leq\mathsf{N}_{\rm
cut,t}$. Therefore, in light of (6.11), for each such $m$ there exists a
maximal $j_{m}$ such that $i_{*}(j_{m})\leq i_{m}$, with
$(x,t)\in\mathrm{supp\,}(\psi_{j_{m},q-1})\cap\mathrm{supp\,}(\psi_{m,i_{m},j_{m},q})$.
In particular, this holds for any of the indices $m$ such that $i_{m}=i$. For
the remainder of the proof, we fix such an index $0\leq m\leq\mathsf{N}_{\rm
cut,t}$.
If we have $i=i_{m}=i_{*}(j_{m})=i_{*}(j_{m},q)$, since
$(x,t)\in\mathrm{supp\,}(\psi_{j_{m},q-1})$, then by the inductive assumption
(3.18), we have that $j_{m}\leq{i_{\rm max}}(q-1)$. Then, due to (6.27), we
have $\Gamma_{q+1}^{i-1}<\Gamma_{q}^{j_{m}}\leq\Gamma_{q}^{{i_{\rm
max}}(q-1)}$, and thus
$\displaystyle\Gamma_{q+1}^{i}\leq\Gamma_{q+1}\Gamma_{q}^{{i_{\rm
max}}(q-1)}\leq\Gamma_{q+1}\lambda_{q-1}^{\nicefrac{{5}}{{3}}}<\lambda_{q}^{\nicefrac{{5}}{{3}}}.$
(6.51)
The last inequality above uses the fact that
$\lambda_{q}^{\nicefrac{{(b+1)}}{{2}}}\leq\lambda_{q+1}$ since $b>1$ and $a$
is taken sufficiently large.
On the other hand, if $i=i_{m}\geq i_{*}(j_{m})+1$, from (6.29) we have
$|h_{m,j_{m},q}(x,t)|\geq(\nicefrac{{1}}{{2}})\Gamma_{q+1}^{(m+1)(i_{m}-i_{*}(j_{m}))}$,
and by the pigeonhole principle, there exists $0\leq n\leq\mathsf{N}_{\rm
cut,x}$ with
$\displaystyle|D^{n}D_{t,q-1}^{m}u_{q}(x,t)|$
$\displaystyle\geq\frac{1}{2\mathsf{N}_{\rm
cut,x}}\Gamma_{q+1}^{(m+1)(i_{m}-i_{*}(j_{m}))}{\Gamma_{q+1}^{i_{*}(j_{m})}}\delta_{q}^{\nicefrac{{1}}{{2}}}(\lambda_{q}\Gamma_{q})^{n}(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{*}(j_{m})+2})^{m}$
$\displaystyle\geq\frac{1}{2\mathsf{N}_{\rm
cut,x}}\Gamma_{q+1}^{i_{m}}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{n}(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{m}+2})^{m},$
and we also know that $(x,t)\in\mathrm{supp\,}(\psi_{j_{m},q-1})$. By (5.9),
the fact that $\mathsf{N}_{\rm cut,x}\leq 2\mathsf{N}_{\textnormal{ind,v}}-2$,
and $\mathsf{N}_{\rm cut,t}\leq\mathsf{N}_{\textnormal{ind,t}}$, we know that
$\displaystyle|D^{n}D_{t,q-1}^{m}u_{q}(x,t)|$ $\displaystyle\leq
M_{b}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{n}\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}}(\tau_{q-1}^{-1}\Gamma_{q}^{j_{m}+1})^{m}$
$\displaystyle\leq
M_{b}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{n}\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}}(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{*}(j_{m})+1})^{m}$
$\displaystyle\leq
M_{b}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{n}\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}}(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{m}})^{m}$
for some constant $M_{b}$ which is the maximal constant appearing in the
$\lesssim$ symbol of (5.9) with $n+m\leq\mathsf{N}_{\rm fin}$. In particular,
$M_{b}$ is independent of $q$. The proof is now completed, since the previous
two inequalities imply that
$\Gamma_{q+1}^{i}\leq 2\mathsf{N}_{\rm
cut,x}M_{b}\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}}\leq\lambda_{q}^{\nicefrac{{5}}{{3}}}.$
(6.52)
In view of (6.51) and (6.52), the value of $i_{\mathrm{max}}$ is chosen as
$\displaystyle
i_{\mathrm{max}}(q)=\sup\left\\{i^{\prime}\colon\Gamma_{q+1}^{i^{\prime}}\leq\lambda_{q}^{\nicefrac{{5}}{{3}}}\right\\}\,.$
(6.53)
To show that ${i_{\rm max}}(q)<\infty$, and in particular that it is bounded
independently of $q$, note that
$\displaystyle\frac{\log(\lambda_{q}^{\nicefrac{{5}}{{3}}})}{\log(\Gamma_{q+1})}\to\frac{\nicefrac{{5}}{{3}}}{\varepsilon_{\Gamma}(b-1)}\,,$
as $q\to\infty$. Thus, since
$(b-1)\varepsilon_{\Gamma}\leq\nicefrac{{1}}{{5}}$ and $\lambda_{0}$ is
assumed sufficiently large, the bound (6.50) holds. ∎
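The limit used in the last step of the proof can be made explicit under the standard parameter scaling for such schemes, $\lambda_{q}\approx a^{(b^{q})}$ and $\Gamma_{q+1}=(\lambda_{q+1}/\lambda_{q})^{\varepsilon_{\Gamma}}$ (an assumption here, consistent with the roles of $a$, $b$, and $\varepsilon_{\Gamma}$ above): in that case
$\displaystyle\frac{\log(\lambda_{q}^{\nicefrac{{5}}{{3}}})}{\log(\Gamma_{q+1})}\approx\frac{\nicefrac{{5}}{{3}}\,b^{q}\log a}{\varepsilon_{\Gamma}(b^{q+1}-b^{q})\log a}=\frac{\nicefrac{{5}}{{3}}}{\varepsilon_{\Gamma}(b-1)}\,.$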
#### 6.2.4 Mixed derivative estimates
Recall from (3.7) the notation $D_{q}=u_{q}\cdot\nabla$ for the directional
derivative in the direction of $u_{q}$. With this notation, cf. (3.6) we have
$D_{t,q}=D_{t,q-1}+D_{q}$. Thus, $D_{q}$ derivatives are useful for
transferring bounds on $D_{t,q-1}$ derivatives to bounds for $D_{t,q}$
derivatives.
From the Leibniz rule we have that
$\displaystyle D_{q}^{K}=\sum_{j=1}^{K}f_{j,K}D^{j}$ (6.54)
where
$\displaystyle
f_{j,K}=\sum_{\\{\gamma\in\mathbb{N}^{K}\colon|\gamma|=K-j\\}}c_{j,K,\gamma}\prod_{\ell=1}^{K}D^{\gamma_{\ell}}u_{q}$
(6.55)
where $c_{j,K,\gamma}$ are explicitly computable coefficients that depend only
on $K,j$, and $\gamma$. Similarly to the coefficients in (A.49), the precise
value of these constants is not important, since all the indices appearing
throughout the proof are taken to be less than $2\mathsf{N}_{\rm fin}$. The
decomposition (6.54)–(6.55) will be used frequently in this section.
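As a concrete illustration of (6.54)–(6.55), take $K=2$: expanding $D_{q}^{2}=(u_{q}\cdot\nabla)(u_{q}\cdot\nabla)$ in components gives, schematically,
$\displaystyle D_{q}^{2}=u_{q}^{i}(\partial_{i}u_{q}^{j})\partial_{j}+u_{q}^{i}u_{q}^{j}\partial_{i}\partial_{j}\,,$
so that $f_{1,2}$ is (schematically) $u_{q}\,Du_{q}$, corresponding to $|\gamma|=K-j=1$, while $f_{2,2}$ is $u_{q}\otimes u_{q}$, corresponding to $|\gamma|=0$; in each case $f_{j,K}$ is a product of $K=2$ factors of $u_{q}$ and its derivatives, consistent with (6.55).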
###### Remark 6.15.
Since throughout the paper the maximal number of spatial or material
derivatives is bounded from above by $2\mathsf{N}_{\rm fin}$, which is a
number that is independent of $q$, we have not explicitly stated the formula
for the coefficients $c_{a,k,\beta}$ in (A.49), as all these constants will be
absorbed in a $\lesssim$ symbol. We note however that the proof of Lemma A.13
does yield a recursion relation for the $c_{a,k,\beta}$, which may be used if
desired to compute the $c_{a,k,\beta}$ explicitly.
With the notation in (6.55) we have the following bounds.
###### Lemma 6.16.
For $q\geq 1$ and $1\leq K\leq 2\mathsf{N}_{\rm fin}$, the functions
$\\{f_{j,K}\\}_{j=1}^{K}$ defined in (6.55) obey the estimate
$\displaystyle\left\|D^{a}f_{j,K}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}})^{K}\mathcal{M}\left(a+K-j,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right).$
(6.56)
for any $a\leq 2\mathsf{N}_{\rm fin}-K+j$, and any $0\leq i\leq{i_{\rm
max}}(q)$.
###### Proof of Lemma 6.16.
Note that no material derivative appears in (6.55), and thus to establish
(6.56) we appeal to Corollary 6.12 with $M=0$, and to the bound (5.6) with
$m=0$. From the product rule we obtain that
$\displaystyle\left\|D^{a}f_{j,K}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\lesssim\sum_{\\{\gamma\in\mathbb{N}^{K}\colon|\gamma|=K-j\\}}\sum_{\\{\alpha\in\mathbb{N}^{K}\colon|\alpha|=a\\}}\prod_{\ell=1}^{K}\left\|D^{\alpha_{\ell}+\gamma_{\ell}}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\lesssim\sum_{\\{\gamma\in\mathbb{N}^{K}\colon|\gamma|=K-j\\}}\sum_{\\{\alpha\in\mathbb{N}^{K}\colon|\alpha|=a\\}}\prod_{\ell=1}^{K}\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\mathcal{M}\left(\alpha_{\ell}+\gamma_{\ell},2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)$
$\displaystyle\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}})^{K}\mathcal{M}\left(a+K-j,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)$
since $|\gamma|=K-j$. ∎
Next, we supplement the space-and-material derivative estimates for $u_{q}$
obtained in (5.6) and (6.33), with derivatives bounds that combine space,
directional, and material derivatives.
###### Lemma 6.17.
For $q\geq 1$ and $0\leq i\leq{i_{\rm max}}$, we have that
$\displaystyle\left\|D^{N}D_{q}^{K}D_{t,q-1}^{M}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}})^{K+1}\mathcal{M}\left(N+K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)$
$\displaystyle\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}})\mathcal{M}\left(N,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)(\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1})^{K}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)$
holds for $0\leq K+N+M\leq 2\mathsf{N}_{\rm fin}$.
###### Proof of Lemma 6.17.
The second estimate in the Lemma follows from the parameter inequality
$\Gamma_{q+1}^{1+\mathsf{c_{0}}}\widetilde{\lambda}_{q}\delta_{q}^{1/2}\leq\tau_{q}^{-1}$,
which is a consequence of (9.39). In order to prove the first statement, we
let $0\leq a\leq N$ and $1\leq j\leq K$. From estimate (6.33) and (5.6) we
obtain
$\displaystyle\left\|D^{N-a+j}D_{t,q-1}^{M}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}})\mathcal{M}\left(N-a+j,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)$
$\displaystyle\quad\times\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right),$
which may be combined with (6.54)–(6.55), and the bound (6.56), to obtain that
$\displaystyle\left\|D^{N}D_{q}^{K}D_{t,q-1}^{M}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\quad\lesssim\sum_{a=0}^{N}\sum_{j=1}^{K}\left\|D^{a}f_{j,K}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}\left\|D^{N-a+j}D_{t,q-1}^{M}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\quad\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}})^{K+1}\mathcal{M}\left(N+K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)$
holds, concluding the proof of the lemma. ∎
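The passage from the first estimate to the second hinges on two elementary properties of $\mathcal{M}$: losing $K$ spatial derivatives costs at most a factor $\widetilde{\lambda}^{K}$ of the largest frequency, and $\mathcal{M}$ is monotone in its frequency arguments. A minimal numerical sanity check of these two facts, assuming the definition $\mathcal{M}(n,N,\lambda,\Lambda)=\lambda^{\min\{n,N\}}\Lambda^{\max\{n-N,0\}}$ used throughout (the concrete parameter values below are illustrative only):

```python
from itertools import product

def M(n, N, lam, Lam):
    # assumed definition: lam^min(n, N) * Lam^max(n - N, 0)
    return lam ** min(n, N) * Lam ** max(n - N, 0)

for n, K, N in product(range(8), range(6), range(6)):
    for lam, Lam in [(2.0, 4.0), (3.0, 3.0), (1.5, 10.0)]:
        # derivative loss: M(n + K, ...) <= M(n, ...) * Lam^K whenever lam <= Lam
        assert M(n + K, N, lam, Lam) <= M(n, N, lam, Lam) * Lam ** K * (1 + 1e-12)
        # monotonicity in both frequency arguments
        assert M(n, N, lam, Lam) <= M(n, N, 2 * lam, 2 * Lam) * (1 + 1e-12)
```

Combined with the parameter inequality $\Gamma_{q+1}^{1+\mathsf{c_{0}}}\widetilde{\lambda}_{q}\delta_{q}^{\nicefrac{{1}}{{2}}}\leq\tau_{q}^{-1}$, the first property converts each factor $\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}$ into $\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1}$, which is exactly the shape of the second estimate.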
The next Lemma shows that the inductive assumptions (3.22)–(3.25b) hold also
for $q^{\prime}=q$.
###### Lemma 6.18.
For $q\geq 1$, $k\geq 1$, $\alpha,\beta\in{\mathbb{N}}^{k}$ with $|\alpha|=K$
and $|\beta|=M$, we have
$\displaystyle\left\|\Big{(}\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q-1}^{\beta_{i}}\Big{)}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\quad\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}})\mathcal{M}\left(K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.57)
for all $K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$. Additionally, for
$N\geq 0$, the bound
$\displaystyle\left\|D^{N}\Big{(}\prod_{i=1}^{k}D_{q}^{\alpha_{i}}D_{t,q-1}^{\beta_{i}}\Big{)}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\quad\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}})^{K+1}\mathcal{M}\left(N+K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.58)
$\displaystyle\quad\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}})\mathcal{M}\left(N,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)(\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1})^{K}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.59)
holds for all $0\leq K+M+N\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$.
Lastly, we have the estimate
$\displaystyle\left\|\Big{(}\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q}^{\beta_{i}}\Big{)}Dv_{\ell_{q}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\qquad\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q})\mathcal{M}\left(K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.60)
for all $K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$, and
$\displaystyle\left\|\Big{(}\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q}^{\beta_{i}}\Big{)}v_{\ell_{q}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\qquad\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}^{2})\mathcal{M}\left(K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.61)
for all $K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$.
###### Remark 6.19.
As shown in Remark 3.4, the bound (6.59) and identity (A.39) imply that
estimate (3.26) also holds with $q^{\prime}=q$.
###### Proof of Lemma 6.18.
We note that (6.59) follows directly from (6.58), by appealing to the
parameter inequality
$\Gamma_{q+1}^{1+\mathsf{c_{0}}}\delta_{q}^{1/2}\widetilde{\lambda}_{q}\leq\tau_{q}^{-1}$,
which is a consequence of (9.39). We first show that (6.57) holds, then
establish (6.58), and lastly, prove the bounds (6.60)–(6.61).
Proof of (6.57). The statement is proven by induction on $k$. For $k=1$ the
estimate is given by Corollary 6.12 and the bound (5.6); in fact, for $k=1$ we
have derivative estimates up to level $2\mathsf{N}_{\rm fin}$, and not just
$\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$. For the induction step, assume
that (6.57) holds for any $k^{\prime}\leq k-1$. We denote
$\displaystyle
P_{k^{\prime}}=\Big{(}\prod_{i=1}^{k^{\prime}}D^{\alpha_{i}}D_{t,q-1}^{\beta_{i}}\Big{)}u_{q}$
(6.62)
and write
$\displaystyle\Big{(}\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q-1}^{\beta_{i}}\Big{)}u_{q}$
$\displaystyle=(D^{\alpha_{k}}D_{t,q-1}^{\beta_{k}})(D^{\alpha_{k-1}}D_{t,q-1}^{\beta_{k-1}})P_{k-2}$
$\displaystyle=(D^{\alpha_{k}+\alpha_{k-1}}D_{t,q-1}^{\beta_{k}+\beta_{k-1}})P_{k-2}+D^{\alpha_{k}}\left[D_{t,q-1}^{\beta_{k}},D^{\alpha_{k-1}}\right]D_{t,q-1}^{\beta_{k-1}}P_{k-2}.$
(6.63)
The first term in (6.63) already obeys the correct bound, since we know that
(6.57) holds for $k^{\prime}=k-1$. In order to treat the second term on the
right side of (6.63), we use Lemma A.12 to write the commutator as
$\displaystyle
D^{\alpha_{k}}\left[D_{t,q-1}^{\beta_{k}},D^{\alpha_{k-1}}\right]D_{t,q-1}^{\beta_{k-1}}P_{k-2}$
$\displaystyle=D^{\alpha_{k}}\sum_{1\leq|\gamma|\leq\beta_{k}}\frac{\beta_{k}!}{\gamma!(\beta_{k}-|\gamma|)!}\left(\prod_{\ell=1}^{\alpha_{k-1}}(\mathrm{ad\,}D_{t,q-1})^{\gamma_{\ell}}(D)\right)D_{t,q-1}^{\beta_{k}+\beta_{k-1}-|\gamma|}P_{k-2}.$
(6.64)
From Lemma A.13 and the Leibniz rule we claim that one may expand
$\displaystyle\prod_{\ell=1}^{\alpha_{k-1}}(\mathrm{ad\,}D_{t,q-1})^{\gamma_{\ell}}(D)=\sum_{j=1}^{\alpha_{k-1}}g_{j}D^{j}$
(6.65)
for some explicit functions $g_{j}$ which obey the estimate
$\displaystyle\left\|D^{a}g_{j}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}\lesssim\widetilde{\lambda}_{q-1}^{a+\alpha_{k-1}-j}\mathcal{M}\left(|\gamma|,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i}\Gamma_{q}^{-\mathsf{c_{0}}}\tau_{q-1}^{-1},\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}\right)$
(6.66)
for all $a$ such that
$a+\alpha_{k-1}-j+|\gamma|\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$. The
claim (6.66) requires a proof, which we sketch next. Using the definition
(6.11), the inductive estimate (3.23) at level $q^{\prime}=q-1$ with
$k=1$, and the parameter inequality (9.39) at level $q-1$, for any $0\leq
m\leq\mathsf{N}_{\rm cut,t}$ we have that
$\displaystyle\left\|D^{a}D_{t,q-1}^{b}Dv_{\ell_{q-1}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{m,i_{m},q})}$
$\displaystyle\lesssim\sum_{\\{j_{m}\colon\Gamma_{q}^{j_{m}}\leq\Gamma_{q+1}^{i_{m}}\\}}\left\|D^{a}D_{t,q-1}^{b}Dv_{\ell_{q-1}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{j_{m},q-1})}$
$\displaystyle\lesssim\sum_{\\{j_{m}\colon\Gamma_{q}^{j_{m}}\leq\Gamma_{q+1}^{i_{m}}\\}}(\Gamma_{q}^{j_{m}+1}\delta_{q-1}^{\nicefrac{{1}}{{2}}})\widetilde{\lambda}_{q-1}^{a+1}\mathcal{M}\left(b,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q}^{j_{m}-\mathsf{c_{0}}}\tau_{q-1}^{-1},\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}\right)$
$\displaystyle\lesssim(\Gamma_{q+1}^{i_{m}}\Gamma_{q}\delta_{q-1}^{\nicefrac{{1}}{{2}}})\widetilde{\lambda}_{q-1}^{a+1}\mathcal{M}\left(b,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i_{m}}\Gamma_{q}^{-\mathsf{c_{0}}}\tau_{q-1}^{-1},\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}\right)$
$\displaystyle\lesssim\widetilde{\lambda}_{q-1}^{a}\mathcal{M}\left(b+1,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i_{m}}\Gamma_{q}^{-\mathsf{c_{0}}}\tau_{q-1}^{-1},\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}\right)$
for all $a+b\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$. Thus, from the
definition (6.14) we deduce that
$\displaystyle\left\|D^{a}D_{t,q-1}^{b}Dv_{\ell_{q-1}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}\lesssim\widetilde{\lambda}_{q-1}^{a}\mathcal{M}\left(b+1,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i}\Gamma_{q}^{-\mathsf{c_{0}}}\tau_{q-1}^{-1},\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}\right)$
(6.67)
for all $a+b\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$. When combined with
the formula (A.49), which allows us to write
$\displaystyle(\mathrm{ad\,}D_{t,q-1})^{\gamma}(D)=f_{\gamma,q-1}\cdot\nabla$
(6.68)
for an explicit function $f_{\gamma,q-1}$ which is defined in terms of
$v_{\ell_{q-1}}$, estimate (6.67) and the Leibniz rule give the estimate
$\displaystyle\left\|D^{a}f_{\gamma,q-1}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}\lesssim\widetilde{\lambda}_{q-1}^{a}\mathcal{M}\left(\gamma,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i}\Gamma_{q}^{-\mathsf{c_{0}}}\tau_{q-1}^{-1},\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}\right)$
(6.69)
for all $a+\gamma\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$. In order to
conclude the proof of (6.65)–(6.66), we use (6.68) to write
$\displaystyle\prod_{\ell=1}^{\alpha_{k-1}}(\mathrm{ad\,}D_{t,q-1})^{\gamma_{\ell}}(D)=\prod_{\ell=1}^{\alpha_{k-1}}\left(f_{\gamma_{\ell},q-1}\cdot\nabla\right)=\sum_{j=1}^{\alpha_{k-1}}g_{j}D^{j}$
and now the claimed estimate for $g_{j}$ follows from the previously
established bound (6.69) for the $f_{\gamma_{\ell},q-1}$’s and their
derivatives, and the Leibniz rule.
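The algebraic mechanism behind (6.64)–(6.68) can be verified symbolically at the lowest nontrivial orders. The sketch below (in one space dimension, with a generic velocity $v$ standing in for $v_{\ell_{q-1}}$; all names are illustrative) checks that $(\mathrm{ad\,}D_{t})(D)$ and $(\mathrm{ad\,}D_{t})^{2}(D)$ are again first-order operators of the form $g\,D$, and that $[D_{t}^{2},D]$ expands with the combinatorial coefficients of (6.64):

```python
import sympy as sp

x, t = sp.symbols("x t")
v = sp.Function("v")(x, t)   # generic advecting velocity (stand-in for v_{ell_{q-1}})
f = sp.Function("f")(x, t)   # arbitrary test function

D  = lambda g: sp.diff(g, x)                       # spatial derivative
Dt = lambda g: sp.diff(g, t) + v * sp.diff(g, x)   # material derivative

# (ad D_t)(D) f = D_t(D f) - D(D_t f) = g1 * D f, with g1 = -v_x
g1 = -sp.diff(v, x)
assert sp.simplify(Dt(D(f)) - D(Dt(f)) - g1 * D(f)) == 0

# (ad D_t)^2(D) f is again first order: its coefficient is g2 = D_t(g1) + g1^2
C = lambda g: g1 * D(g)          # the operator (ad D_t)(D)
g2 = Dt(g1) + g1 ** 2
assert sp.simplify(Dt(C(f)) - C(Dt(f)) - g2 * D(f)) == 0

# [D_t^2, D] f = 2 (ad D_t)(D) D_t f + (ad D_t)^2(D) f,
# matching the coefficients beta!/(gamma!(beta-|gamma|)!) = 2, 1 for beta = 2
lhs = Dt(Dt(D(f))) - D(Dt(Dt(f)))
rhs = 2 * g1 * D(Dt(f)) + g2 * D(f)
assert sp.simplify(lhs - rhs) == 0
```

In particular, each iterated commutator only involves $v$ through first-order coefficients multiplying spatial derivatives, which is what makes the bound (6.66) on the $g_{j}$'s possible.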
With (6.65)–(6.66) in hand, and using estimate (6.57) with $k^{\prime}=k-1$,
we return to (6.64) and obtain
$\displaystyle\left\|D^{\alpha_{k}}\left[D_{t,q-1}^{\beta_{k}},D^{\alpha_{k-1}}\right]D_{t,q-1}^{\beta_{k-1}}P_{k-2}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\lesssim\sum_{j=1}^{\alpha_{k-1}}\sum_{1\leq|\gamma|\leq\beta_{k}}\left\|D^{\alpha_{k}}\left(g_{j}\;D^{j}D_{t,q-1}^{\beta_{k}+\beta_{k-1}-|\gamma|}P_{k-2}\right)\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\lesssim\sum_{j=1}^{\alpha_{k-1}}\sum_{1\leq|\gamma|\leq\beta_{k}}\sum_{a^{\prime}=0}^{\alpha_{k}}\left\|D^{\alpha_{k}-a^{\prime}}g_{j}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}\left\|D^{a^{\prime}+j}D_{t,q-1}^{\beta_{k}+\beta_{k-1}-|\gamma|}P_{k-2}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\lesssim\sum_{j=1}^{\alpha_{k-1}}\sum_{|\gamma|=1}^{\beta_{k}}\sum_{a^{\prime}=0}^{\alpha_{k}}\lambda_{q}^{\alpha_{k}+\alpha_{k-1}-j-a^{\prime}}\mathcal{M}\left(|\gamma|,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i}\Gamma_{q}^{-\mathsf{c_{0}}}\tau_{q-1}^{-1},\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}\right)(\Gamma_{q+1}^{i+1}\delta_{q}^{1/2})$
$\displaystyle\qquad\times\mathcal{M}\left(K-\alpha_{k}-\alpha_{k-1}+j+a^{\prime},2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M-|\gamma|,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
$\displaystyle\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{1/2})\mathcal{M}\left(K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.70)
for $M\leq\mathsf{N}_{\textnormal{ind,t}}$ and
$K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$. The $+1$ in the range of
derivatives is simply a consequence of the fact that the summand in the third
line of the above display starts with $j\geq 1$ and with $|\gamma|\geq 1$. This concludes
the proof of the inductive step for (6.57).
Proof of (6.58). This estimate follows from Lemma A.10. Indeed, letting
$v=f=u_{q}$, $B=D_{t,q-1}$, $\Omega=\mathrm{supp\,}\psi_{i,q}$, $p=\infty$,
the previously established bound (6.57) allows us to verify conditions
(A.40)–(A.41) of Lemma A.10 with $N_{*}=\nicefrac{{3\mathsf{N}_{\rm
fin}}}{{2}}+1$,
$\mathcal{C}_{v}=\mathcal{C}_{f}=\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}$,
$\lambda_{v}=\lambda_{f}=\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{v}=\widetilde{\lambda}_{f}=\widetilde{\lambda}_{q},N_{x}=2\mathsf{N}_{\textnormal{ind,v}},\mu_{v}=\mu_{f}=\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\widetilde{\mu}_{v}=\widetilde{\mu}_{f}=\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1},N_{t}=\mathsf{N}_{\textnormal{ind,t}}$.
As $|\alpha|=K$ and $|\beta|=M$, the bound (6.58) now is a direct consequence
of (A.42).
Proof of (6.60) and (6.61). First we consider the bound (6.60), inductively
on $k$. For the case $k=1$ the main idea is to appeal to estimate (A.44) in
Lemma A.10 with the operators $A=D_{q},B=D_{t,q-1}$ and the functions
$v=u_{q}$ and $f=Dv_{\ell_{q}}$, so that
$D^{n}(A+B)^{m}f=D^{n}D_{t,q}^{m}Dv_{\ell_{q}}$. As before, the assumption
(A.40) holds due to (6.57) with $\Omega=\mathrm{supp\,}\psi_{i,q}$,
$N_{*}=\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$,
$\mathcal{C}_{v}=\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}$,
$\lambda_{v}=\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{v}=\widetilde{\lambda}_{q},N_{x}=2\mathsf{N}_{\textnormal{ind,v}},\mu_{v}=\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1}$,
$\widetilde{\mu}_{v}=\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}$, and
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$. Verifying condition (A.41) is more
involved this time, and follows by rewriting
$f=Dv_{\ell_{q}}=Du_{q}+Dv_{\ell_{q-1}}$. By using (6.57), and the parameter
inequality
$\Gamma_{q+1}^{3}\tau_{q-1}^{-1}\leq\Gamma_{q+1}^{-\mathsf{c_{0}}}\tau_{q}^{-1}$
(cf. (9.40)), we conveniently obtain
$\displaystyle\left\|\Big{(}\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q-1}^{\beta_{i}}\Big{)}Du_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\qquad\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q})\mathcal{M}\left(K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.71)
for all $|\alpha|+|\beta|=K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$
(note that the maximal number of derivatives is not
$\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$ anymore, but instead it is just
$\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$; the reason is that we are
estimating $Du_{q}$ and not $u_{q}$). On the other hand, from the inductive
assumption (3.23) with $q^{\prime}=q-1$ we obtain that
$\displaystyle\left\|\Big{(}\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q-1}^{\beta_{i}}\Big{)}Dv_{\ell_{q-1}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{j,q-1})}\lesssim(\Gamma_{q}^{j+1}\delta_{q-1}^{\nicefrac{{1}}{{2}}})(\widetilde{\lambda}_{q-1})^{K+1}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q}^{j-\mathsf{c_{0}}}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)$
for $K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$. Recalling the
definitions (6.11)–(6.14) and the notation (6.15), we have that
$(x,t)\in\mathrm{supp\,}(\psi_{i,q})$ if and only if
$(x,t)\in\mathrm{supp\,}(\psi_{\vec{i},q})$, and thus for every
$m\in\\{0,\ldots,\mathsf{N}_{\rm cut,t}\\}$, there exists $j_{m}$ with
$\Gamma_{q}^{j_{m}}\leq\Gamma_{q+1}^{i_{m}}\leq\Gamma_{q+1}^{i}$ and
$(x,t)\in\mathrm{supp\,}(\psi_{j_{m},q-1})$. Thus, the above stated estimate
and our usual parameter inequalities imply that
$\displaystyle\left\|\Big{(}\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q-1}^{\beta_{i}}\Big{)}Dv_{\ell_{q-1}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\lesssim(\Gamma_{q+1}^{i+1}\delta_{q-1}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q-1})(\widetilde{\lambda}_{q-1})^{K}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i}\Gamma_{q}^{-\mathsf{c_{0}}}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)$
$\displaystyle\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q})(\Gamma_{q}\lambda_{q})^{K}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.72)
whenever $K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$. Here we have used
that
$\delta_{q-1}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q-1}\leq\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}$
and that
$\Gamma_{q+1}^{i}\Gamma_{q}^{-\mathsf{c_{0}}}\tau_{q-1}^{-1}\leq\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1}\leq\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}$,
for all $i\leq{i_{\rm max}}$. In the last inequality, we have used (9.20) and
(6.49). Combining (6.71) and (6.72) we may now verify condition (A.41) for
$f=Dv_{\ell_{q}}$, with $p=\infty$, $\Omega=\mathrm{supp\,}(\psi_{i,q})$,
$\mathcal{C}_{f}=\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}$,
$\lambda_{f}=\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{f}=\widetilde{\lambda}_{q},N_{x}=2\mathsf{N}_{\textnormal{ind,v}},\mu_{f}=\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\widetilde{\mu}_{f}=\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1},N_{t}=\mathsf{N}_{\textnormal{ind,t}}$,
and $N_{*}=\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$. We may thus appeal to
(A.44) and obtain that
$\displaystyle\left\|D^{K}D_{t,q}^{M}Dv_{\ell_{q}}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\qquad\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q})\mathcal{M}\left(K,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)$
$\displaystyle\qquad\times\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\max\\{\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}\\},\max\\{\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1},\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}\\}\right)$
whenever $K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$. The parameter
inequalities
$\Gamma_{q+1}^{\mathsf{c_{0}}+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}\leq\tau_{q}^{-1}$
from (9.39) and
$\Gamma_{q+1}^{i+2}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}\leq\widetilde{\tau}_{q}^{-1}$,
which follows from (9.43) and (6.49), conclude the proof of (6.60) for $k=1$.
In order to prove (6.60) for a general $k$, we proceed by induction. Assume
the estimate holds for every $k^{\prime}\leq k-1$. Proving (6.60) at level $k$
is done in the same way as we have established the induction step (in $k$) for
(6.57). We let
$\displaystyle\widetilde{P}_{k^{\prime}}=\left(\prod_{i=1}^{k^{\prime}}D^{\alpha_{i}}D_{t,q}^{\beta_{i}}\right)Dv_{\ell_{q}}$
and decompose
$\displaystyle\left(\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q}^{\beta_{i}}\right)Dv_{\ell_{q}}$
$\displaystyle=(D^{\alpha_{k}+\alpha_{k-1}}D_{t,q}^{\beta_{k}+\beta_{k-1}})\widetilde{P}_{k-2}+D^{\alpha_{k}}\left[D_{t,q}^{\beta_{k}},D^{\alpha_{k-1}}\right]D_{t,q}^{\beta_{k-1}}\widetilde{P}_{k-2}$
and note that the first term is directly bounded using the induction
assumption (at level $k-1$). To bound the commutator term, similarly to
(6.64)–(6.66), we obtain from Lemmas A.12 and A.13 that
$\displaystyle
D^{\alpha_{k}}\left[D_{t,q}^{\beta_{k}},D^{\alpha_{k-1}}\right]D_{t,q}^{\beta_{k-1}}\widetilde{P}_{k-2}=D^{\alpha_{k}}\sum_{1\leq|\gamma|\leq\beta_{k}}\frac{\beta_{k}!}{\gamma!(\beta_{k}-|\gamma|)!}\left(\sum_{j=1}^{\alpha_{k-1}}\widetilde{g}_{j}D^{j}\right)D_{t,q}^{\beta_{k}+\beta_{k-1}-|\gamma|}\widetilde{P}_{k-2}\,,$
where one may use the previously established bound (6.60) with $k=1$ (instead
of (6.67)) to estimate
$\displaystyle\left\|D^{a}\widetilde{g}_{j}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}\lesssim\mathcal{M}\left(a+\alpha_{k-1}-j,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(|\gamma|,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right).$
(6.73)
Note that the above estimate is not merely (6.66) with $q$ increased by $1$.
Rather, it is proven in the same way as (6.66), by first showing that the
analogous version of (6.69) is
$\displaystyle\left\|D^{a}f_{\gamma,q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}\lesssim\mathcal{M}\left(a,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(\gamma,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)\,,$
from which the claimed estimate (6.73) on $D^{a}\widetilde{g}_{j}$ follows.
The estimate
$\displaystyle\left\|D^{\alpha_{k}}\left[D_{t,q}^{\beta_{k}},D^{\alpha_{k-1}}\right]D_{t,q}^{\beta_{k-1}}\widetilde{P}_{k-2}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}$
$\displaystyle\quad\lesssim(\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}})\mathcal{M}\left(K+1,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.74)
follows similarly to (6.70), from the estimate (6.73) for $\widetilde{g}_{j}$,
and the bound (6.60) with $k-1$ terms in the product. This concludes the proof
of estimate (6.60).
To conclude the proof of the Lemma, we also need to establish the estimates
for $v_{\ell_{q}}$ claimed in (6.61). The proof of this bound is nearly
identical to that of (6.60), as is readily seen for $k=1$: we just need to
replace $Du_{q}$ estimates with $u_{q}$ estimates, and $Dv_{\ell_{q-1}}$
bounds with $v_{\ell_{q-1}}$ bounds. For instance, instead of (6.71), we
appeal to (6.59) and obtain a bound for $D^{K}D_{t,q}^{M}u_{q}$ which is
better than (6.71) by a factor of $\widetilde{\lambda}_{q}$, and which holds
for $K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$. This estimate is
sharper than required by (6.61). The estimate for
$D^{K}D_{t,q}^{M}v_{\ell_{q-1}}$ is obtained similarly to (6.72), except that
instead of appealing to the induction assumption (3.23) at level
$q^{\prime}=q-1$, we use (3.24) with $q^{\prime}=q-1$. The Sobolev loss
$\lambda_{q-1}^{2}$ is then apparent from (3.24), and the estimates hold for
$K+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$. These arguments establish
(6.61) with $k=1$. The case of general $k\geq 2$ is treated inductively
exactly as before, because the commutator term is bounded in the same way as
(6.74), except that $K+1$ is replaced by $K$. To avoid redundancy, we omit
these details. ∎
#### 6.2.5 Material derivatives
The estimates in the previous sections, which have led up to Lemma 6.18, allow
us to estimate mixed space, directional, and material derivatives of the
velocity cutoff functions $\psi_{i,q}$, which in turn allow us to establish
the inductive bounds (3.19) and (3.20) with $q^{\prime}=q$.
In order to achieve this we crucially recall Remark 6.9. Note that if we were
to directly differentiate (6.14), then we would need to consider all vectors
$\vec{i}\in\mathbb{N}_{0}^{\mathsf{N}_{\rm cut,t}+1}$ such that $\max_{0\leq
m\leq\mathsf{N}_{\rm cut,t}}i_{m}=i$, and then for each one of these $\vec{i}$
consider the term
${\mathbf{1}}_{\mathrm{supp\,}(\psi_{\vec{i},q})}D_{t,q-1}(\psi_{m,i_{m},q}^{2})$
for each $0\leq m\leq\mathsf{N}_{\rm cut,t}$. In this situation, however, we
would encounter for instance a term with $i_{0}=0$ and $i_{m^{\prime}}=i$ for
all $1\leq m^{\prime}\leq\mathsf{N}_{\rm cut,t}$, and the bounds available on
this term would be catastrophic due to the mismatch $i_{0}<i_{m^{\prime}}$ for
all $m^{\prime}>0$. Identity (6.26) precisely permits us to avoid this situation,
because it has essentially ordered the indices
$\\{i_{m}\\}_{m=0}^{\mathsf{N}_{\rm cut,t}}$ to be non-increasing in $m$.
Indeed inspecting (6.26) and using identity (6.25) and the definitions (6.15),
(6.24), we see that
$\displaystyle(x,t)\in\mathrm{supp\,}(D_{t,q-1}\psi_{i,q}^{2})\quad\Leftrightarrow\quad$
$\displaystyle\exists\vec{i}\in\mathbb{N}_{0}^{\mathsf{N}_{\rm cut,t}+1}\mbox{ and }\exists 0\leq m\leq\mathsf{N}_{\rm cut,t}$
$\displaystyle\mbox{with }i_{m}\in\\{i-1,i\\}\mbox{ and }\max_{0\leq m^{\prime}\leq\mathsf{N}_{\rm cut,t}}i_{m^{\prime}}=i$
$\displaystyle\mbox{such that }(x,t)\in\mathrm{supp\,}(\psi_{\vec{i},q})\cap\mathrm{supp\,}(D_{t,q-1}\psi_{m,i_{m},q})$
$\displaystyle\mbox{and }i_{m^{\prime}}\leq i_{m}\mbox{ whenever }m<m^{\prime}\leq\mathsf{N}_{\rm cut,t}\,.$
(6.75)
The generalization of characterization (6.75) to higher order material
derivatives $D_{t,q-1}^{M}$ is direct:
$(x,t)\in\mathrm{supp\,}(D_{t,q-1}^{M}\psi_{i,q}^{2})$ if and only if there
exists $\vec{i}\in\mathbb{N}_{0}^{\mathsf{N}_{\rm cut,t}+1}$ with maximal
index equal to $i$, such that for every $0\leq m\leq\mathsf{N}_{\rm cut,t}$
for which
$(x,t)\in\mathrm{supp\,}(\psi_{\vec{i},q})\cap\mathrm{supp\,}(D_{t,q-1}\psi_{m,i_{m},q})$
(there may be more than one such $m$ if $M\geq 2$, due to the Leibniz
rule), we have $i_{m^{\prime}}\leq i_{m}\in\\{i-1,i\\}$ whenever
$m<m^{\prime}$. In light of this characterization, we have the following
bounds:
###### Lemma 6.20.
Let $q\geq 1$, $0\leq i\leq{i_{\rm max}}(q)$, and fix
$\vec{i}\in\mathbb{N}_{0}^{\mathsf{N}_{\rm cut,t}+1}$ such that $\max_{0\leq
m\leq\mathsf{N}_{\rm cut,t}}i_{m}=i$, as in the right side of (6.75). Fix
$0\leq m\leq\mathsf{N}_{\rm cut,t}$ such that $i_{m}\in\\{i-1,i\\}$ and such
that $i_{m^{\prime}}\leq i_{m}$ for all $m\leq m^{\prime}\leq\mathsf{N}_{\rm
cut,t}$. Lastly, fix $j_{m}$ such that $i_{*}(j_{m})\leq i_{m}$. For
$N,K,M,k\geq 0$, $\alpha,\beta\in{\mathbb{N}}^{k}$ such that $|\alpha|=K$ and
$|\beta|=M$, we have
$\displaystyle\frac{{\mathbf{1}}_{\mathrm{supp\,}(\psi_{\vec{i},q})}{\mathbf{1}}_{\mathrm{supp\,}(\psi_{j_{m},q-1})}}{\psi_{m,i_{m},j_{m},q}^{1-(K+M)/\mathsf{N}_{\rm fin}}}\left|\left(\prod_{l=1}^{k}D^{\alpha_{l}}D_{t,q-1}^{\beta_{l}}\right)\psi_{m,i_{m},j_{m},q}\right|$
$\displaystyle\quad\lesssim\mathcal{M}\left(K,\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\Gamma_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm cut,x},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.76)
for all $K$ such that $0\leq K+M\leq\mathsf{N}_{\rm fin}$. Moreover,
$\displaystyle\frac{{\mathbf{1}}_{\mathrm{supp\,}(\psi_{\vec{i},q})}{\mathbf{1}}_{\mathrm{supp\,}(\psi_{j_{m},q-1})}}{\psi_{m,i_{m},j_{m},q}^{1-(N+K+M)/\mathsf{N}_{\rm fin}}}\left|D^{N}\left(\prod_{l=1}^{k}D_{q}^{\alpha_{l}}D_{t,q-1}^{\beta_{l}}\right)\psi_{m,i_{m},j_{m},q}\right|$
$\displaystyle\quad\lesssim\mathcal{M}\left(N,\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\Gamma_{q}\right)(\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1})^{K}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm cut,x},\Gamma_{q+1}^{3}\tau_{q-1}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.77)
holds whenever $0\leq N+K+M\leq\mathsf{N}_{\rm fin}$.
###### Proof of Lemma 6.20.
Note that for $M=0$ estimate (6.76) was already established in (6.35). The
bound (6.77) with $M=0$, i.e., an estimate for
$D^{N}D_{q}^{K}\psi_{m,i_{m},j_{m},q}$, holds by appealing to the expansion
(6.54)–(6.55), the bound (6.56) (which is applicable since in the context of
estimate (6.77) we work on the support of $\psi_{i,q}$), to the bound (6.76)
with $M=0$, and to the parameter inequality
$\Gamma_{q+1}^{2+\mathsf{c_{0}}}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}\leq\tau_{q}^{-1}$
(which follows from (9.39)). The rest of the proof is dedicated to the case
$M\geq 1$. The proofs are very similar to the proof of Lemma 6.13, but we
additionally need to appeal to bounds and arguments from the proof of Lemma
6.18.
Proof of (6.76). As in the proof of Lemma 6.13, we start with the case $k=1$,
and estimate $D^{K}D_{t,q-1}^{M}\psi_{m,i_{m},j_{m},q}$ for
$K+M\leq\mathsf{N}_{\rm fin}$, with $M\geq 1$. We note that, just like $D$, the
operator $D_{t,q-1}$ is a scalar differential operator, and thus the Faà di
Bruno argument which was used to bound (6.35) may be repeated. As was done
there, we recall the definitions (6.7)–(6.8) and split the analysis in two
cases, according to whether (6.37) or (6.42) holds.
Let us first consider the case (6.37). Our goal is to apply Lemma A.5 to the
function $\psi=\psi_{m,q+1}$ or $\psi=\widetilde{\psi}_{m,q+1}$, with
$\Gamma_{\psi}=\Gamma_{q+1}^{m+1}$,
$\Gamma=\Gamma_{q+1}^{(m+1)(i_{m}-i_{*}(j_{m}))}$,
$h(x,t)=h_{m,j_{m},q}^{2}(x,t)$, and $D_{t}=D_{t,q-1}$. Estimate (A.24) holds
by (6.3) and (6.5), so that it remains to obtain a bound on the material
derivatives of $(h_{m,j_{m},q}(x,t))^{2}$ and establish a bound which
corresponds to (A.25) on the set
$\mathrm{supp\,}(\psi_{\vec{i},q})\cap\mathrm{supp\,}(\psi_{j_{m},q-1}\psi_{m,i_{m},j_{m},q})$.
Similarly to (6.38), for $K^{\prime}+M^{\prime}\leq\mathsf{N}_{\rm fin}$ the
Leibniz rule and definition (6.6) give
$\displaystyle\left|D^{K^{\prime}}D_{t,q-1}^{M^{\prime}}h_{m,j_{m},q}^{2}\right|$
$\displaystyle\lesssim(\lambda_{q}\Gamma_{q})^{K^{\prime}}(\tau_{q-1}^{-1}\Gamma_{q+1}^{2})^{M^{\prime}}\Gamma_{q+1}^{-2(m+1)i_{*}(j_{m})}$
$\displaystyle\times\sum_{K^{\prime\prime}=0}^{K^{\prime}}\sum_{M^{\prime\prime}=0}^{M^{\prime}}\sum_{n=0}^{\mathsf{N}_{\rm cut,x}}(\tau_{q-1}^{-1}\Gamma_{q+1}^{2})^{-m-M^{\prime\prime}}(\lambda_{q}\Gamma_{q})^{-n-K^{\prime\prime}}\delta_{q}^{-\nicefrac{{1}}{{2}}}|D^{n+K^{\prime\prime}}D^{m+M^{\prime\prime}}_{t,q-1}u_{q}|$
$\displaystyle\qquad\times(\tau_{q-1}^{-1}\Gamma_{q+1}^{2})^{-m-M^{\prime}+M^{\prime\prime}}(\lambda_{q}\Gamma_{q})^{-n-K^{\prime}+K^{\prime\prime}}\delta_{q}^{-\nicefrac{{1}}{{2}}}|D^{n+K^{\prime}-K^{\prime\prime}}D^{m+M^{\prime}-M^{\prime\prime}}_{t,q-1}u_{q}|\,.$
(6.78)
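The double sum in (6.78) is the Leibniz rule for $D^{K^{\prime}}D_{t,q-1}^{M^{\prime}}$ applied to the product $h_{m,j_{m},q}\cdot h_{m,j_{m},q}$, with the Faà di Bruno argument of Lemma A.5 handling the composition with the cutoff profile. Both mechanisms can be checked symbolically at low order; the sketch below uses a generic function $h$ and the concrete profile $\sin$ as a stand-in for the cutoff (illustrative names only):

```python
import sympy as sp

x, u = sp.symbols("x u")
h = sp.Function("h")(x)

# Leibniz structure of (6.78): D^K(h^2) = sum_k binom(K, k) D^k h * D^{K-k} h
K = 3
lhs = sp.diff(h ** 2, x, K)
rhs = sum(sp.binomial(K, k) * sp.diff(h, x, k) * sp.diff(h, x, K - k)
          for k in range(K + 1))
assert sp.simplify(lhs - rhs) == 0

# Faa di Bruno at order 2 for the composition psi(h^2), as in Lemma A.5:
# D^2 psi(H) = psi''(H) (D H)^2 + psi'(H) D^2 H, with H = h^2
PSI = sp.sin(u)               # concrete stand-in for the cutoff profile
H = h ** 2
comp = PSI.subs(u, H)
lhs2 = sp.diff(comp, x, 2)
rhs2 = (PSI.diff(u, 2).subs(u, H) * sp.diff(H, x) ** 2
        + PSI.diff(u).subs(u, H) * sp.diff(H, x, 2))
assert sp.simplify(lhs2 - rhs2) == 0
```

Each term on the right of (6.78) corresponds to one pair $(K^{\prime\prime},M^{\prime\prime})$ in this Leibniz expansion, normalized by the natural size of the corresponding derivative of $u_{q}$.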
By the characterization (6.75), for every $(x,t)$ in the support described on
the left side of (6.76) we have that for every $m\leq R\leq\mathsf{N}_{\rm
cut,t}$, there exist $i_{R}\leq i_{m}$ and $j_{R}$ with $i_{*}(j_{R})\leq
i_{R}$, such that
$(x,t)\in\mathrm{supp\,}\psi_{j_{R},q-1}\psi_{R,i_{R},j_{R},q}$. As a
consequence, for the terms in the sum (6.78) with
$L\in\\{n+K^{\prime\prime},n+K^{\prime}-K^{\prime\prime}\\}\leq\mathsf{N}_{\rm
cut,x}$ and
$R\in\\{m+M^{\prime\prime},m+M^{\prime}-M^{\prime\prime}\\}\leq\mathsf{N}_{\rm
cut,t}$, we may appeal to estimate (6.28) which gives a bound on
$h_{R,j_{R},q}$, and thus obtain
$\displaystyle(\tau_{q-1}^{-1}\Gamma_{q+1}^{2})^{-R}(\lambda_{q}\Gamma_{q})^{-L}\delta_{q}^{-\nicefrac{{1}}{{2}}}\left\|D^{L}D_{t,q-1}^{R}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{R,i_{R},j_{R},q})}$
$\displaystyle\leq\Gamma_{q+1}^{(R+1)i_{*}(j_{R})}\Gamma_{q+1}^{(R+1)(i_{R}+1-i_{*}(j_{R}))}$
$\displaystyle\leq\Gamma_{q+1}^{(R+1)(i_{m}+1)}\,.$
On the other hand, if $L>\mathsf{N}_{\rm cut,x}$, or if $R>\mathsf{N}_{\rm
cut,t}$, then by (5.6) and (5.9) we have that
$\displaystyle(\tau_{q-1}^{-1}\Gamma_{q+1}^{2})^{-R}(\lambda_{q}\Gamma_{q})^{-L}\delta_{q}^{-\nicefrac{{1}}{{2}}}\left\|D^{L}D_{t,q-1}^{R}u_{q}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{j_{m},q-1})}$
$\displaystyle\leq\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}}\Gamma_{q}^{-L}\Gamma_{q+1}^{-2R}\mathcal{M}\left(L,2\mathsf{N}_{\textnormal{ind,v}},1,\lambda_{q}^{-1}\widetilde{\lambda}_{q}\right)\mathcal{M}\left(R,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q}^{j_{m}+1},\tau_{q-1}\widetilde{\tau}_{q-1}^{-1}\right)$
$\displaystyle\leq\mathcal{M}\left(L,2\mathsf{N}_{\textnormal{ind,v}},1,\lambda_{q}^{-1}\widetilde{\lambda}_{q}\right)\mathcal{M}\left(R,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i_{m}+1},\tau_{q-1}\widetilde{\tau}_{q-1}^{-1}\right)\,.$
(6.79)
Here we have used that $\mathsf{N}_{\rm cut,x}$ and $\mathsf{N}_{\rm cut,t}$ were
taken sufficiently large to obey (9.51). Combining (6.78)–(6.79), we may derive that
$\displaystyle{\mathbf{1}}_{\mathrm{supp\,}(\psi_{\vec{i},q})}{\mathbf{1}}_{\mathrm{supp\,}(\psi_{j_{m},q-1})}\left|D^{K^{\prime}}D_{t,q-1}^{M^{\prime}}h_{m,j_{m},q}^{2}\right|$
$\displaystyle\lesssim\Gamma_{q+1}^{2(m+1)(i_{m}-i_{*}(j_{m})+1)}(\lambda_{q}\Gamma_{q})^{K^{\prime}}(\tau_{q-1}^{-1}\Gamma_{q+1}^{2})^{M^{\prime}}\mathcal{M}\left(2\mathsf{N}_{\rm cut,x}+K^{\prime},2\mathsf{N}_{\textnormal{ind,v}},1,\lambda_{q}^{-1}\widetilde{\lambda}_{q}\right)\Gamma_{q+1}^{-2m(i_{m}+1)}$
$\displaystyle\quad\times\sum_{M^{\prime\prime}=0}^{M^{\prime}}\mathcal{M}\left(m+M^{\prime\prime},\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i_{m}+1},\tau_{q-1}\widetilde{\tau}_{q-1}^{-1}\right)\mathcal{M}\left(m+M^{\prime}-M^{\prime\prime},\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i_{m}+1},\tau_{q-1}\widetilde{\tau}_{q-1}^{-1}\right)$
$\displaystyle\lesssim\Gamma_{q+1}^{2(m+1)(i_{m}-i_{*}(j_{m})+1)}(\lambda_{q}\Gamma_{q})^{K^{\prime}}(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{m}+3})^{M^{\prime}}\mathcal{M}\left(K^{\prime},\mathsf{N}_{\textnormal{ind,v}},1,\lambda_{q}^{-1}\widetilde{\lambda}_{q}\right)$
$\displaystyle\qquad\times\mathcal{M}\left(M^{\prime},\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm cut,t},1,\tau_{q-1}\Gamma_{q+1}^{-(i_{m}+1)}\widetilde{\tau}_{q-1}^{-1}\right)$
$\displaystyle\lesssim\Gamma_{q+1}^{2(m+1)(i_{m}-i_{*}(j_{m})+1)}\mathcal{M}\left(K^{\prime},\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\Gamma_{q}\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M^{\prime},\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm cut,t},\tau_{q-1}^{-1}\Gamma_{q+1}^{i+3},\Gamma_{q+1}^{2}\widetilde{\tau}_{q-1}^{-1}\right)$
$\displaystyle\lesssim\Gamma_{q+1}^{2(m+1)(i_{m}-i_{*}(j_{m})+1)}\mathcal{M}\left(K^{\prime},\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\Gamma_{q}\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M^{\prime},\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm cut,t},\tau_{q-1}^{-1}\Gamma_{q+1}^{i+3},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.80)
for all $K^{\prime}+M^{\prime}\leq\mathsf{N}_{\rm fin}$. Here we have used
that $\mathsf{N}_{\textnormal{ind,v}}\geq 2\mathsf{N}_{\textnormal{ind,t}}$,
that $m\leq\mathsf{N}_{\rm cut,t}$, and that $i_{m}\leq i$. The upshot of
(6.80) is that condition (A.25) in Lemma A.5 is now verified, with
$\mathcal{C}_{h}=\Gamma_{q+1}^{2(m+1)(i_{m}-i_{*}(j_{m})+1)}$, and
$\lambda=\Gamma_{q}\lambda_{q}$,
$\widetilde{\lambda}=\Gamma_{q}\widetilde{\lambda}_{q}$,
$\mu=\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{m}+3}$,
$\widetilde{\mu}=\Gamma_{q+1}^{2}\widetilde{\tau}_{q-1}^{-1}$,
$N_{x}=\mathsf{N}_{\textnormal{ind,v}}$, and
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm cut,t}$. We obtain from
(A.26) and the fact that $(\Gamma_{\psi}\Gamma)^{-2}\mathcal{C}_{h}=1$ that
(6.76) holds when $k=1$ for those $(x,t)$ such that $h_{m,j_{m},q}(x,t)$
satisfies (6.37). The case when $h_{m,j_{m},q}(x,t)$ satisfies the bound
(6.42) is nearly identical, as was the case in the proof of Lemma 6.13. The
only changes are that now $\Gamma_{\psi}=1$ (according to (6.4)), and that the
constant $\mathcal{C}_{h}$ which we read from the right side of (6.80) is now
improved to $\Gamma_{q+1}^{2(m+1)(i_{m}-i_{*}(j_{m}))}$. These two changes
offset each other, resulting in exactly the same bound. Thus, we have shown that
(6.76) holds when $k=1$.
The general case $k\geq 1$ in (6.76) is obtained via induction on $k$, in
precisely the same fashion as the proof of estimate (6.57) in Lemma 6.18. At
the heart of the matter lies a commutator bound similar to (6.70), which is
proven in precisely the same way by appealing to the fact that we work on
$\mathrm{supp\,}(\psi_{\vec{i},q})\subset\mathrm{supp\,}(\psi_{i,q})$, and
thus bound (6.66) is available; in turn, this bound provides sharper space and
material estimates than required in (6.76), completing the proof. In order to
avoid redundancy we omit further details.
Proof of (6.77). This estimate follows from Lemma A.10 with $v=u_{q}$,
$B=D_{t,q-1}$, $f=\psi_{m,i_{m},j_{m},q}$,
$\Omega=\mathrm{supp\,}(\psi_{\vec{i},q})\cap\mathrm{supp\,}(\psi_{j_{m},q-1})\cap\mathrm{supp\,}(\psi_{m,i_{m},j_{m},q})$,
and $p=\infty$. Technically, the presence of the
$\psi_{m,i_{m},j_{m},q}^{-1+(N+K+M)/\mathsf{N}_{\rm fin}}$ factor on the left
side of (6.77) means that the bound does not follow from the statement of Lemma
A.10 but rather from its proof; the changes to the argument are
minor and we ignore this distinction. First, we note that since
$\Omega\subset\mathrm{supp\,}(\psi_{i,q})$, estimate (6.57) allows us to
verify condition (A.40) of Lemma A.10 with $N_{*}=\nicefrac{{3\mathsf{N}_{\rm
fin}}}{{2}}+1$,
$\mathcal{C}_{v}=\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}$,
$\lambda_{v}=\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{v}=\widetilde{\lambda}_{q},N_{x}=2\mathsf{N}_{\textnormal{ind,v}}\geq\mathsf{N}_{\textnormal{ind,v}},\mu_{v}=\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\widetilde{\mu}_{v}=\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1},N_{t}=\mathsf{N}_{\textnormal{ind,t}}\geq\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t}$. On the other hand, condition (A.41) of Lemma A.10 holds in view of
(6.76) with $\mathcal{C}_{f}=1$,
$\lambda_{f}=\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{f}=\Gamma_{q}\widetilde{\lambda}_{q},N_{x}=\mathsf{N}_{\textnormal{ind,v}},\mu_{f}=\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\widetilde{\mu}_{f}=\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1},N_{t}=\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t}$. As $|\alpha|=K$ and $|\beta|=M$, the bound (6.77) is now a direct
consequence of (A.42) and the parameter inequality
$\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\Gamma_{q}\widetilde{\lambda}_{q}\leq\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1}\Leftarrow\Gamma_{q+1}^{\mathsf{c_{0}}+2}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}\leq\tau_{q}^{-1}$,
cf. (9.39). ∎
A direct consequence of Lemma 6.20 and identity (6.75) is that the inductive
bounds (3.19) and (3.20) hold for $q^{\prime}=q$, as is shown by the following
Lemma.
###### Lemma 6.21 (Mixed spatial and material derivatives for velocity
cutoffs).
Let $q\geq 1$, $0\leq i\leq i_{\mathrm{max}}(q)$, $N,K,M,k\geq 0$, and let
$\alpha,\beta\in{\mathbb{N}}^{k}$ be such that $|\alpha|=K$ and $|\beta|=M$.
Then we have
$\displaystyle\frac{1}{\psi_{i,q}^{1-(K+M)/\mathsf{N}_{\rm
fin}}}\left|\left(\prod_{l=1}^{k}D^{\alpha_{l}}D_{t,q-1}^{\beta_{l}}\right)\psi_{i,q}\right|$
$\displaystyle\qquad\lesssim\mathcal{M}\left(K,\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\Gamma_{q}\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.81)
for $K+M\leq\mathsf{N}_{\rm fin}$, and
$\displaystyle\frac{1}{\psi_{i,q}^{1-(N+K+M)/\mathsf{N}_{\rm
fin}}}\left|D^{N}\left(\prod_{l=1}^{k}D_{q}^{\alpha_{l}}D_{t,q-1}^{\beta_{l}}\right)\psi_{i,q}\right|$
$\displaystyle\qquad\lesssim\mathcal{M}\left(N,\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\Gamma_{q}\widetilde{\lambda}_{q}\right)(\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1})^{K}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.82)
holds for $N+K+M\leq\mathsf{N}_{\rm fin}$.
###### Remark 6.22.
As shown in Remark 3.4, the bound (6.82) and identity (A.39) imply that
estimate (3.27) also holds with $q^{\prime}=q$, namely that
$\displaystyle\frac{1}{\psi_{i,q}^{1-(N+M)/\mathsf{N}_{\rm
fin}}}\left|D^{N}D_{t,q}^{M}\psi_{i,q}\right|\lesssim\mathcal{M}\left(N,\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\Gamma_{q}\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.83)
for $N+M\leq\mathsf{N}_{\rm fin}$. Note that for all $M\geq 0$ we have
$\displaystyle\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
$\displaystyle\leq\Gamma_{q+1}^{-(\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t})}\left(\tau_{q}\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)^{\mathsf{N}_{\rm
cut}}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}+1}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
$\displaystyle\leq\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}+1}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
once $\mathsf{N}_{\textnormal{ind,t}}$ is taken to be sufficiently large when
compared to $\mathsf{N}_{\rm cut,t}$ to ensure that
$\displaystyle\left(\tau_{q}\widetilde{\tau}_{q}^{-1}\right)^{\mathsf{N}_{\rm
cut}}\leq\Gamma_{q+1}^{\mathsf{N}_{\textnormal{ind,t}}}$
for all $q\geq 1$. This condition holds in view of (9.52). In summary, we have
thus obtained
$\displaystyle\frac{1}{\psi_{i,q}^{1-(N+M)/\mathsf{N}_{\rm
fin}}}\left|D^{N}D_{t,q}^{M}\psi_{i,q}\right|$
$\displaystyle\lesssim\mathcal{M}\left(N,\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\Gamma_{q}\widetilde{\lambda}_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}+1}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(6.84)
for $N+M\leq\mathsf{N}_{\rm fin}$.
###### Proof of Lemma 6.21.
Note that for $M=0$ estimate (6.81) holds by (6.36). The bound (6.82) holds
for $M=0$ due to the expansion (6.54)–(6.55), the bound (6.56) on the support
of $\psi_{i,q}$, the bound (6.81) with $M=0$, and the parameter
inequality
$\Gamma_{q+1}^{2+\mathsf{c_{0}}}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}\leq\tau_{q}^{-1}$
(cf. (9.39)). The rest of the proof is dedicated to the case $M\geq 1$.
The argument is very similar to the proof of Lemma 6.13 and so we only
emphasize the main differences. We start with the proof of (6.81). We claim
that in the same way that (6.35) was shown to imply (6.45), one may show
that estimate (6.76) implies that for any $\vec{i}$ and $0\leq
m\leq\mathsf{N}_{\rm cut,t}$ as on the right side of (6.75) (in particular, as
in Lemma 6.18), we have that
$\displaystyle\frac{{\mathbf{1}}_{\mathrm{supp\,}(\psi_{\vec{i},q})}}{\psi_{m,i_{m},q}^{1-(K+M)/\mathsf{N}_{\rm
fin}}}\left|\left(\prod_{l=1}^{k}D^{\alpha_{l}}D_{t,q-1}^{\beta_{l}}\right)\psi_{m,i_{m},q}\right|$
$\displaystyle\quad\lesssim\mathcal{M}\left(K,\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\Gamma_{q}\right)\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm cut,t},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)\,.$
(6.85)
The proof of the above estimate is done by induction on $k$. For $k=1$, the
first step in establishing (6.85) is to use the Leibniz rule and induction on
the number of material derivatives to reduce the problem to an estimate for
$\psi_{m,i_{m},q}^{-2+(K+M)/\mathsf{N}_{\rm
fin}}D^{K}D_{t,q-1}^{M}(\psi_{m,i_{m},q}^{2})$; this is achieved in precisely
the same way that (6.47) was proven. The derivatives of $\psi_{m,i_{m},q}^{2}$
are now bounded via the Leibniz rule and the definition (6.11). Indeed, when
$D^{K^{\prime}}D_{t,q-1}^{M^{\prime}}$ derivatives fall on
$\psi_{m,i_{m},j_{m},q}^{2}$ the required bound is obtained from (6.76), which
gives the same upper bound as the one required by (6.85). On the other hand,
if $D^{K-K^{\prime}}D_{t,q-1}^{M-M^{\prime}}$ derivatives fall on
$\psi_{j_{m},q-1}^{2}$, the required estimate is provided by (3.27) with
$q^{\prime}=q-1$ and $i$ replaced by $j_{m}$; the resulting estimates are
strictly better than what is required by (6.85). This shows that estimate
(6.85) holds for $k=1$. We then proceed inductively in $k\geq 1$, in the same
fashion as the proof of estimate (6.57) in Lemma 6.18; the corresponding
commutator bound is applicable because we work on
$\mathrm{supp\,}(\psi_{m,i_{m},q})\cap\mathrm{supp\,}(\psi_{i,q})$. In order
to avoid redundancy we omit these details, and conclude the proof of (6.85).
As in the proof of Lemma 6.13, we are now able to show that (6.81) is a
consequence of (6.85). As before, by induction on the number of material
derivatives and the Leibniz rule we reduce the problem to an estimate for
$\psi_{i,q}^{-2+(K+M)/\mathsf{N}_{\rm
fin}}\prod_{l=1}^{k}D^{\alpha_{l}}D_{t,q-1}^{\beta_{l}}(\psi_{i,q}^{2})$; see
the proof of (6.47) for details. In order to estimate derivatives of
$\psi_{i,q}^{2}$, we use identities (6.25) and (6.26), which imply upon
applying a differential operator, say $D_{t,q-1}$, that
$\displaystyle D_{t,q-1}(\psi_{i,q}^{2})$
$\displaystyle=D_{t,q-1}\left(\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\prod_{m^{\prime}=0}^{m-1}\Psi_{m^{\prime},i,q}^{2}\cdot\psi_{m,i,q}^{2}\cdot\prod_{m^{\prime\prime}=m+1}^{\mathsf{N}_{\rm
cut,t}}\Psi_{m^{\prime\prime},i-1,q}^{2}\right)$
$\displaystyle=\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\sum_{\bar{m}^{\prime}=0}^{m-1}D_{t,q-1}(\psi_{\bar{m}^{\prime},i,q}^{2})\prod_{\begin{subarray}{c}0\leq
m^{\prime}\leq m-1\\\
m^{\prime}\neq\bar{m}^{\prime}\end{subarray}}\Psi_{m^{\prime},i,q}^{2}\cdot\psi_{m,i,q}^{2}\cdot\prod_{m^{\prime\prime}=m+1}^{\mathsf{N}_{\rm
cut,t}}\Psi_{m^{\prime\prime},i-1,q}^{2}$
$\displaystyle\quad+\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\sum_{\bar{m}^{\prime\prime}=m+1}^{\mathsf{N}_{\rm
cut,t}}\prod_{m^{\prime}=0}^{m-1}\Psi_{m^{\prime},i,q}^{2}\cdot\psi_{m,i,q}^{2}\cdot
D_{t,q-1}(\Psi_{\bar{m}^{\prime\prime},i-1,q}^{2})\prod_{\begin{subarray}{c}m+1\leq
m^{\prime\prime}\leq\mathsf{N}_{\rm cut,t}\\\
m^{\prime\prime}\neq\bar{m}^{\prime\prime}\end{subarray}}\Psi_{m^{\prime\prime},i-1,q}^{2}$
$\displaystyle\quad+\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\prod_{m^{\prime}=0}^{m-1}\Psi_{m^{\prime},i,q}^{2}\cdot
D_{t,q-1}(\psi_{m,i,q}^{2})\cdot\prod_{m^{\prime\prime}=m+1}^{\mathsf{N}_{\rm
cut,t}}\Psi_{m^{\prime\prime},i-1,q}^{2}\,.$ (6.86)
Higher order material derivatives of $\psi_{i,q}^{2}$, and mixtures of space
and material derivatives are obtained similarly, by an application of the
Leibniz rule. Equality (6.86) in particular justifies why we have only proven
(6.85) for $\vec{i}$ and $0\leq m\leq\mathsf{N}_{\rm cut,t}$ as on the right
side of (6.75)! With (6.85) and (6.86) in hand, we now repeat the argument
from the proof of Lemma 6.13 (see the two displays below (6.47)) and conclude
that (6.81) holds.
In order to conclude the proof of the Lemma, it remains to establish (6.82).
This bound follows now directly from (6.81) and an application of Lemma A.10
(to be more precise, we need to use the proof of this Lemma), in precisely the
same way that (6.76) was shown earlier to imply (6.77). As there are no
changes to be made to this argument, we omit these details. ∎
#### 6.2.6 $L^{1}$ size of the velocity cutoffs
The purpose of this section is to show that the inductive estimate (3.21)
holds with $q^{\prime}=q$.
###### Lemma 6.23 (Support estimate).
For all $0\leq i\leq{i_{\rm max}}(q)$ we have that
$\displaystyle\left\|\psi_{i,q}\right\|_{L^{1}}\lesssim\Gamma_{q+1}^{-2i+{\mathsf{C}_{b}}}$
(6.87)
where ${\mathsf{C}_{b}}$ is defined in (3.21) and thus depends only on $b$.
###### Proof of Lemma 6.23.
If $i\leq({\mathsf{C}_{b}}-1)/2$ then (6.87) trivially holds because
$0\leq\psi_{i,q}\leq 1$, and $|\mathbb{T}^{3}|\leq\Gamma_{q+1}$ for all $q\geq
1$, once $a$ is chosen to be sufficiently large. Thus, we only need to be
concerned with $i$ such that $({\mathsf{C}_{b}}+1)/2\leq i\leq{i_{\rm
max}}(q)$.
First, we note that Lemma 6.7 implies that the functions $\Psi_{m,i^{\prime},q}$
defined in (6.24) satisfy $0\leq\Psi_{m,i^{\prime},q}^{2}\leq 1$, and thus
(6.26) implies that
$\displaystyle\left\|\psi_{i,q}\right\|_{L^{1}}\leq\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\left\|\psi_{m,i,q}\right\|_{L^{1}}\,.$ (6.88)
Next, we let $j_{*}(i)=j_{*}(i,q)$ be the maximal index of $j_{m}$ appearing
in (6.11). In particular, recalling also (6.27), we have that
$\displaystyle\Gamma_{q+1}^{i-1}<\Gamma_{q}^{j_{*}(i)}\leq\Gamma_{q+1}^{i}<\Gamma_{q}^{j_{*}(i)+1}\,.$
(6.89)
Using (6.11), in which we simply write $j$ instead of $j_{m}$, the fact that
$0\leq\psi_{j,q-1}^{2},\psi_{m,i,j,q}^{2}\leq 1$, and the inductive assumption
(3.21) at level $q-1$, we may deduce that
$\displaystyle\left\|\psi_{m,i,q}\right\|_{L^{1}}$
$\displaystyle\leq\left\|\psi_{j_{*}(i),q-1}\right\|_{L^{1}}+\left\|\psi_{j_{*}(i)-1,q-1}\right\|_{L^{1}}+\sum_{j=0}^{j_{*}(i)-2}\left\|\psi_{j,q-1}\psi_{m,i,j,q}\right\|_{L^{1}}$
$\displaystyle\leq\Gamma_{q}^{-2j_{*}(i)+{\mathsf{C}_{b}}}+\Gamma_{q}^{-2j_{*}(i)+2+{\mathsf{C}_{b}}}+\sum_{j=0}^{j_{*}(i)-2}\left|\mathrm{supp\,}(\psi_{j,q-1}\psi_{m,i,j,q})\right|\,.$
(6.90)
The second term on the right side of (6.90) is estimated using the last
inequality in (6.89) as
$\displaystyle\Gamma_{q}^{-2j_{*}(i)+2+{\mathsf{C}_{b}}}\leq\Gamma_{q+1}^{-2i}\Gamma_{q}^{4+{\mathsf{C}_{b}}}\leq\Gamma_{q+1}^{-2i+{\mathsf{C}_{b}}-1}\Gamma_{q}^{4+{\mathsf{C}_{b}}-b({\mathsf{C}_{b}}-1)}=\Gamma_{q+1}^{-2i+{\mathsf{C}_{b}}-1}$
(6.91)
where in the last equality we have used the definition of ${\mathsf{C}_{b}}$
in (3.21). Clearly, the first term on the right side of (6.90) is also bounded
by the right side of (6.91). We are left to estimate the terms appearing in
the sum on the right side of (6.90). The key fact is that for any $j\leq
j_{*}(i)-2$ we have that $i\geq i_{*}(j)+1$; this can be seen to hold because
$b<2$. Recalling the definition (6.7) and item 2 of Lemma 6.2, we obtain that
for $j\leq j_{*}(i)-2$ we have
$\displaystyle\mathrm{supp\,}(\psi_{j,q-1}\psi_{m,i,j,q})$
$\displaystyle\subseteq\left\\{(x,t)\in\mathrm{supp\,}(\psi_{j,q-1})\colon
h_{m,j,q}^{2}\geq\frac{1}{4}\Gamma_{q+1}^{2(m+1)(i-i_{*}(j))}\right\\}$
$\displaystyle\subseteq\left\\{(x,t)\colon\psi_{j\pm,q-1}^{2}h_{m,j,q}^{2}\geq\frac{1}{4}\Gamma_{q+1}^{2(m+1)(i-i_{*}(j))}\right\\}\,.$
(6.92)
In the second inclusion of (6.92) we have appealed to (6.23) at level $q-1$.
By Chebyshev’s inequality and the definition of $h_{m,j,q}$ in (6.6) we deduce
that
$\displaystyle\left|\mathrm{supp\,}(\psi_{j,q-1}\psi_{m,i,j,q})\right|$
$\displaystyle\leq
4\Gamma_{q+1}^{-2(m+1)(i-i_{*}(j))}\sum_{n=0}^{\mathsf{N}_{\rm
cut,x}}\Gamma_{q+1}^{-2i_{*}(j)}\delta_{q}^{-1}(\lambda_{q}\Gamma_{q})^{-2n}\left(\tau_{q-1}^{-1}\Gamma_{q+1}^{i_{*}(j)+2}\right)^{-2m}\left\|\psi_{j\pm,q-1}D^{n}D_{t,q-1}^{m}u_{q}\right\|_{L^{2}}^{2}\,.$
Since in the above display we have that $n\leq\mathsf{N}_{\rm cut,x}\leq
2\mathsf{N}_{\textnormal{ind,v}}$ and $m\leq\mathsf{N}_{\rm
cut,t}\leq\mathsf{N}_{\textnormal{ind,t}}$, we may combine the above estimate
with (5.5) and deduce that
$\displaystyle\left|\mathrm{supp\,}(\psi_{j,q-1}\psi_{m,i,j,q})\right|$
$\displaystyle\leq
4\Gamma_{q+1}^{-2(m+1)(i-i_{*}(j))}\Gamma_{q+1}^{-2i_{*}(j)}\left(\Gamma_{q}^{j+1}\Gamma_{q+1}^{-i_{*}(j)-2}\right)^{2m}\sum_{n=0}^{\mathsf{N}_{\rm
cut,x}}\Gamma_{q}^{-2n}$ $\displaystyle\leq
8\Gamma_{q+1}^{-2i}\left(\Gamma_{q}^{j+1}\Gamma_{q+1}^{-i-2}\right)^{2m}$
$\displaystyle\leq\Gamma_{q+1}^{-2i+{\mathsf{C}_{b}}-1}\,.$ (6.93)
In the last inequality we have used that $\Gamma_{q}^{j}\leq\Gamma_{q+1}^{i}$,
that $m\geq 0$, and that ${\mathsf{C}_{b}}\geq 2$ (since $b\leq 6$).
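For concreteness, the exponent bookkeeping behind (6.91) can be checked by hand. The middle inequality in (6.91) uses $\Gamma_{q}^{b}\leq\Gamma_{q+1}$ (an assumption in this sketch, consistent with how the parameters are chosen), while the final equality forces the exponent of $\Gamma_{q}$ to vanish; since $\Gamma_{q}>1$, this determines ${\mathsf{C}_{b}}$ and recovers the constraint $b\leq 6$ invoked above:

```latex
4+\mathsf{C}_{b}-b(\mathsf{C}_{b}-1)=0
\iff \mathsf{C}_{b}=\frac{b+4}{b-1}\,,
\qquad
\frac{b+4}{b-1}\geq 2
\iff b+4\geq 2b-2
\iff b\leq 6\,.
```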
Combining (6.88), (6.90), (6.91), and (6.93) we deduce that
$\displaystyle\left\|\psi_{i,q}\right\|_{L^{1}}\leq\mathsf{N}_{\rm
cut,t}\,j_{*}(i)\,\Gamma_{q+1}^{-2i+{\mathsf{C}_{b}}-1}\,.$
In order to conclude the proof of the Lemma, we use that $\mathsf{N}_{\rm
cut,t}$ is a constant independent of $q$, and that by (6.90) and (3.17) we
have
$\displaystyle j_{*}(i)\leq
i\frac{\log\Gamma_{q+1}}{\log\Gamma_{q}}\leq{i_{\rm
max}}(q)b\leq\frac{4b}{\varepsilon_{\Gamma}(b-1)}\,.$
Thus $j_{*}(i)$ is also bounded from above by a constant independent of $q$
and upon taking $a$ sufficiently large we have
$\displaystyle\mathsf{N}_{\rm
cut,t}\,j_{*}(i)\,\Gamma_{q+1}^{-1}\leq\frac{4\mathsf{N}_{\rm
cut,t}b}{\varepsilon_{\Gamma}(b-1)}\Gamma_{q+1}^{-1}\leq 1$
which concludes the proof. ∎
### 6.3 Definition of the temporal cutoff functions
Let $\chi:(-1,1)\rightarrow[0,1]$ be a $C^{\infty}$ function which induces a
partition of unity according to
$\displaystyle\sum_{k\in\mathbb{Z}}\chi^{2}(\cdot-k)\equiv 1.$ (6.94)
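Such a $\chi$ exists by a standard normalization trick: take any $C^{\infty}$ bump $\rho$ supported in $(-1,1)$ whose squared integer translates have a strictly positive sum, and set $\chi=\rho/(\sum_{k}\rho^{2}(\cdot-k))^{\nicefrac{{1}}{{2}}}$. A minimal numerical sketch of this construction (the specific bump, grid, and truncation of the sum below are illustrative choices, not taken from the text):

```python
import numpy as np

def rho(t):
    """Smooth bump supported in (-1, 1): exp(-1/(1 - t^2)) inside, 0 outside."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

def chi(t):
    """chi = rho / sqrt(sum_k rho^2(. - k)), so that sum_k chi^2(. - k) == 1."""
    t = np.asarray(t, dtype=float)
    # Only translates k with |t - k| < 1 contribute to the normalizing sum.
    ks = np.arange(np.floor(t.min()) - 1, np.ceil(t.max()) + 2)
    total = sum(rho(t - k) ** 2 for k in ks)
    return rho(t) / np.sqrt(total)

ts = np.linspace(-3.0, 3.0, 1201)
partition = sum(chi(ts - k) ** 2 for k in range(-5, 6))
print(np.max(np.abs(partition - 1.0)))  # machine-precision zero
```

Since the normalizing sum in `chi` is invariant under integer shifts of its argument, the squared translates of `chi` telescope to exactly $1$, which is what the numerical check confirms.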
Consider the translated and rescaled function
$\chi\left(t\tau_{q}^{-1}\Gamma^{i-\mathsf{c_{0}}+2}_{q+1}-k\right)\,,$
which is supported in the set of times $t$ satisfying
$\left|t-\tau_{q}\Gamma_{q+1}^{-i+\mathsf{c_{0}}-2}k\right|\leq\tau_{q}\Gamma_{q+1}^{-i+\mathsf{c_{0}}-2}\quad\iff
t\in\left[(k-1)\tau_{q}\Gamma_{q+1}^{-i+\mathsf{c_{0}}-2},(k+1)\tau_{q}\Gamma_{q+1}^{-i+\mathsf{c_{0}}-2}\right]\,.$
(6.95)
We then define temporal cut-off functions
$\displaystyle\chi_{i,k,q}(t)=\chi_{(i)}(t)=\chi\left(t\tau_{q}^{-1}\Gamma^{i-\mathsf{c_{0}}+2}_{q+1}-k\right)\,.$
(6.96)
It is then clear that
$\displaystyle{|\partial_{t}^{m}\chi_{i,k,q}|\lesssim(\Gamma_{q+1}^{i-\mathsf{c_{0}}+2}\tau_{q}^{-1})^{m}}$
(6.97)
for $m\geq 0$ and
$\chi_{i,k_{1},q}(t)\chi_{i,k_{2},q}(t)=0$ (6.98)
for all $t\in\mathbb{R}$ unless $|k_{1}-k_{2}|\leq 1$. In analogy to
$\psi_{i\pm,q}$, we define
$\chi_{(i,k\pm,q)}(t):=\left(\chi_{(i,k-1,q)}^{2}(t)+\chi_{(i,k,q)}^{2}(t)+\chi_{(i,k+1,q)}^{2}(t)\right)^{\frac{1}{2}},$
(6.99)
which are cutoffs with the property that
$\chi_{(i,k\pm,q)}\equiv 1\textnormal{ on }\mathrm{supp\,}{(\chi_{(i,k,q)})}.$
(6.100)
Next, we define the cutoffs $\widetilde{\chi}_{i,k,q}$ by
$\widetilde{\chi}_{i,k,q}(t)=\widetilde{\chi}_{(i)}(t)=\chi\left(t\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c_{0}}}-\Gamma_{q+1}^{-\mathsf{c_{0}}}k\right).$
(6.101)
For comparison with (6.95), we have that $\widetilde{\chi}_{i,k,q}$ is
supported in the set of times $t$ satisfying
$\left|t-\tau_{q}\Gamma_{q+1}^{-i+\mathsf{c_{0}}}k\right|\leq\tau_{q}\Gamma_{q+1}^{-i+\mathsf{c_{0}}}.$
(6.102)
As a consequence of these definitions and a sufficiently large choice of
$\lambda_{0}$, if $(i,k)$ and $({i^{*}},{k^{*}})$ are such that
$\mathrm{supp\,}\chi_{i,k,q}\cap\mathrm{supp\,}\chi_{{i^{*}},{k^{*}},q}\neq\emptyset$
and ${i^{*}}\in\\{i-1,i,i+1\\}$, then
$\mathrm{supp\,}\chi_{i,k,q}\subset\mathrm{supp\,}\widetilde{\chi}_{{i^{*}},{k^{*}},q}.$
(6.103)
Finally, we shall require cutoffs $\overline{\chi}_{q,n,p}$ which satisfy the
following three properties:
1. (1)
$\overline{\chi}_{q,n,p}(t)\equiv 1$ on
$\mathrm{supp\,}_{t}\mathring{R}_{q,n,p}$
2. (2)
$\overline{\chi}_{q,n,p}(t)=0$ if
$\left\|\mathring{R}_{q,n,p}(\cdot,t^{\prime})\right\|_{L^{\infty}(\mathbb{T}^{3})}=0$
for all
$|t-t^{\prime}|\leq\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{2}\right)^{-1}$
3. (3)
$\left|\partial_{t}^{m}\overline{\chi}_{q,n,p}\right|\lesssim\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{2}\right)^{m}$ for all $m\geq 0$
For the sake of specificity, recalling (9.63), we may set
$\overline{\chi}_{q,n,p}=\phi^{(t)}_{\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{2}\right)}\ast\mathbf{1}_{\left\\{t:\left\|\mathring{R}_{q,n,p}\right\|_{L^{\infty}\left(\left[t-\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{2}\right)^{-1},t+\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{2}\right)^{-1}\right]\times\mathbb{T}^{3}\right)}>0\right\\}}\,.$
(6.104)
It is then clear that $\overline{\chi}_{q,n,p}$ slightly expands and then
mollifies the characteristic function of the time support of
$\mathring{R}_{q,n,p}$ so that the inductive assumptions (7.12), (7.19), and
(7.26) regarding the time support of $w_{q+1,n,p}$ may be verified.
### 6.4 Estimates on flow maps
We can now make estimates regarding the flows of the vector field
$v_{\ell_{q}}$ on the support of a cutoff function.
###### Lemma 6.24 (Lagrangian paths don’t jump many supports).
Let $q\geq 0$ and $(x_{0},t_{0})$ be given. Assume that the index $i$ is such
that $\psi_{i,q}^{2}(x_{0},t_{0})\geq\kappa^{2}$, where
$\kappa\in\left[\frac{1}{16},1\right]$. Then the forward flow
$(X(t),t):=(X(x_{0},t_{0};t),t)$ of the velocity field $v_{\ell_{q}}$
originating at $(x_{0},t_{0})$ has the property that
$\psi_{i,q}^{2}(X(t),t)\geq\nicefrac{{\kappa^{2}}}{{2}}$ for all $t$ such that
$|t-t_{0}|\leq\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{i+3}\right)^{-1}$,
which by (9.39) and (9.19) is satisfied for
$|t-t_{0}|\leq\tau_{q}\Gamma_{q+1}^{-i+5+\mathsf{c_{0}}}$.
###### Proof of Lemma 6.24.
By the mean value theorem in time along the Lagrangian flow $(X(t),t)$ and
(6.83), we have that
$\displaystyle\left|\psi_{i,q}(X(t),t)-\psi_{i,q}(x_{0},t_{0})\right|$
$\displaystyle\leq|t-t_{0}|\left\|D_{t,q}\psi_{i,q}\right\|_{L^{\infty}}$
$\displaystyle\leq|t-t_{0}|\left\|D_{t,q-1}\psi_{i,q}\right\|_{L^{\infty}}+|t-t_{0}|\left\|u_{q}\cdot\nabla\psi_{i,q}\right\|_{L^{\infty}}.$
From Lemma 6.21, Lemma 6.13, Lemma 6.11, and (9.41), we have that
$\displaystyle\left\|D_{t,q-1}\psi_{i,q}\right\|_{L^{\infty}}+\left\|u_{q}\cdot\nabla\psi_{i,q}\right\|_{L^{\infty}}$
$\displaystyle\lesssim\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1}+\delta_{q}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1}^{i+1}\lambda_{q}\Gamma_{q}$
$\displaystyle\lesssim\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{i+2},$
and hence, under the working assumption on $|t-t_{0}|$ we obtain
$\displaystyle\left|\psi_{i,q}(X(x_{0},t_{0};t),t)-\psi_{i,q}(x_{0},t_{0})\right|\lesssim\Gamma_{q+1}^{-1},$
(6.105)
for some implicit constant $C>0$ which is independent of $q\geq 0$. From the
assumption of the lemma and (6.105) it follows that
$\displaystyle\psi_{i,q}(X(t),t)\geq\kappa-C\Gamma_{q+1}^{-1}\geq\nicefrac{{\kappa}}{{\sqrt{2}}}$
for all $q\geq 0$, since we have that $\kappa\geq\nicefrac{{1}}{{16}}$ and
$C\Gamma_{q+1}^{-1}\leq\nicefrac{{1}}{{100}}$, which holds independently of
$q$ once $\lambda_{0}$ is chosen sufficiently large. ∎
###### Corollary 6.25.
Suppose $(x,t)$ is such that $\psi^{2}_{i,q}(x,t)\geq\kappa^{2}$, where
$\kappa\in\left[\nicefrac{{1}}{{16}},1\right]$. For $t_{0}$ such that
$\left|t-t_{0}\right|\leq\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{i+4}\right)^{-1}$,
which is in particular satisfied for
$|t-t_{0}|\leq\tau_{q}\Gamma_{q+1}^{-i+4+\mathsf{c_{0}}}$, define $x_{0}$ to
satisfy
$x=X(x_{0},t_{0};t).$
That is, the forward flow $X$ of the velocity field $v_{\ell_{q}}$,
originating at $x_{0}$ at time $t_{0}$, reaches the point $x$ at time $t$.
Then we have
$\psi_{i,q}(x_{0},t_{0})\neq 0\,.$
###### Proof of Corollary 6.25.
We proceed by contradiction and suppose that $\psi_{i,q}(x_{0},t_{0})=0$.
Without loss of generality we can assume $t<t_{0}$. By continuity, there
exists a minimal time $t^{\prime}\in(t,t_{0}]$ such that for
$x^{\prime}=x^{\prime}(t^{\prime})$ defined by
$x=X(x^{\prime},t^{\prime};t),$
we have
$\psi_{i,q}(x^{\prime},t^{\prime})=0\,.$
By minimality and (6.19), there exists an $i^{\prime}\in\\{i-1,i+1\\}$ such
that
$\psi_{i^{\prime},q}(x^{\prime},t^{\prime})=1\,.$
Applying estimate (6.105) from the proof of Lemma 6.24, we obtain
$\displaystyle\left|\psi_{i^{\prime},q}\left(X(x^{\prime},t^{\prime};t),t\right)-\psi_{i^{\prime},q}(x^{\prime},t^{\prime})\right|=\left|\psi_{i^{\prime},q}(x,t)-\psi_{i^{\prime},q}(x^{\prime},t^{\prime})\right|\lesssim\Gamma_{q+1}^{-1}\,.$
(6.106)
Here we have used that
$|t^{\prime}-t|\leq|t_{0}-t|\leq\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{i+4}\right)^{-1}\leq\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{i^{\prime}+3}\right)^{-1}$,
so that Lemma 6.24 is applicable. Since
$\psi_{i^{\prime},q}(x^{\prime},t^{\prime})=1$, from (6.106) we see that
$\psi_{i^{\prime},q}(x,t)>0$, and so
$\psi^{2}_{i,q}(x,t)=1-\psi^{2}_{i^{\prime},q}(x,t)$. Then we obtain
$\displaystyle\psi^{2}_{i,q}(x,t)$
$\displaystyle=1-\psi_{i^{\prime},q}^{2}(x,t)$
$\displaystyle=\left(1+\psi_{i^{\prime},q}(x,t)\right)\left(1-\psi_{i^{\prime},q}(x,t)\right)$
$\displaystyle=\left(1+\psi_{i^{\prime},q}(x,t)\right)\left(\psi_{i^{\prime},q}(x^{\prime},t^{\prime})-\psi_{i^{\prime},q}(x,t)\right)$
$\displaystyle\lesssim\Gamma_{q+1}^{-1}$
which is a contradiction once $\lambda_{0}$ is chosen sufficiently large,
since we assumed that $\psi^{2}_{i,q}(x,t)\geq\kappa^{2}$ and
$\kappa\geq\nicefrac{{1}}{{16}}$. ∎
###### Definition 6.26.
We define $\Phi_{i,k,q}(x,t):=\Phi_{(i,k)}(x,t)$ to be the flows induced by
$v_{\ell_{q}}$ with initial datum at time $k{\tau_{q}}\Gamma_{q+1}^{-i}$ given
by the identity, i.e.
$\left\\{\begin{array}[]{l}(\partial_{t}+v_{\ell_{q}}\cdot\nabla)\Phi_{i,k,q}=0\\\
\Phi_{i,k,q}(x,k{\tau_{q}}\Gamma_{q+1}^{-i})=x\,.\end{array}\right.$ (6.107)
We will use $D\Phi_{(i,k)}$ to denote the gradient of $\Phi_{(i,k)}$ (which is
thus a matrix-valued function). The inverse of the matrix $D\Phi_{(i,k)}$ is
denoted by $\left(D\Phi_{(i,k)}\right)^{-1}$, in contrast to
$D\Phi_{(i,k)}^{-1}$, which is the gradient of the inverse map
$\Phi_{(i,k)}^{-1}$.
###### Corollary 6.27 (Deformation bounds).
For $k\in\mathbb{Z}$, $0\leq i\leq i_{\mathrm{max}}(q)$, $q\geq 0$, and $2\leq
N\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$, we have the following bounds
on the support of $\psi_{i,q}(x,t){\widetilde{\chi}_{i,k,q}(t)}$.
$\displaystyle\left\|D\Phi_{(i,k)}-{\mathrm{Id}}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\widetilde{\chi}_{i,k,q}))}$
$\displaystyle\lesssim\Gamma_{q+1}^{-1}$ (6.108)
$\displaystyle\left\|D^{N}\Phi_{(i,k)}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\widetilde{\chi}_{i,k,q}))}$
$\displaystyle\lesssim\Gamma_{q+1}^{-1}\mathcal{M}\left(N-1,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)$
(6.109)
$\displaystyle\left\|(D\Phi_{(i,k)})^{-1}-{\mathrm{Id}}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\widetilde{\chi}_{i,k,q}))}$
$\displaystyle\lesssim\Gamma_{q+1}^{-1}$ (6.110)
$\displaystyle\left\|D^{N-1}\left((D\Phi_{(i,k)})^{-1}\right)\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\widetilde{\chi}_{i,k,q}))}$
$\displaystyle\lesssim\Gamma_{q+1}^{-1}\mathcal{M}\left(N-1,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)$
(6.111)
$\displaystyle\left\|D^{N}\Phi^{-1}_{(i,k)}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\widetilde{\chi}_{i,k,q}))}$
$\displaystyle\lesssim\Gamma_{q+1}^{-1}\mathcal{M}\left(N-1,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)$
(6.112)
Furthermore, we have the following bounds for $1\leq
N+M\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$:
$\displaystyle\left\|D^{N-N^{\prime}}D_{t,q}^{M}D^{N^{\prime}+1}\Phi_{(i,k)}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\widetilde{\chi}_{i,k,q}))}$
$\displaystyle\leq\widetilde{\lambda}_{q}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
(6.113)
$\displaystyle\left\|D^{N-N^{\prime}}D_{t,q}^{M}D^{N^{\prime}}(D\Phi_{(i,k)})^{-1}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\widetilde{\chi}_{i,k,q}))}$
$\displaystyle\leq\widetilde{\lambda}_{q}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
(6.114)
for all $0\leq N^{\prime}\leq N$.
###### Proof of Corollary 6.27.
Let $t_{k}:=\tau_{q}\Gamma_{q+1}^{-i}k$. For $t$ on the support of
$\widetilde{\chi}_{i,k,q}$, we may assume from (6.102) that
$\left|t-t_{k}\right|\leq{\tau_{q}}\Gamma_{q+1}^{-i+\mathsf{c_{0}}}$.
Moreover, since the $\\{\psi_{i^{\prime},q}\\}_{i^{\prime}\geq 0}$ form a
partition of unity, we know that there exists $i^{\prime}$ such that
$\psi_{i^{\prime},q}^{2}(x,t)\geq\nicefrac{{1}}{{2}}$ and
$i^{\prime}\in\\{i-1,i,i+1\\}$. Thus, we have that
$\left|t-t_{k}\right|\leq{\tau_{q}}\Gamma_{q+1}^{-i^{\prime}+1+\mathsf{c_{0}}}$,
and Corollary 6.25 is applicable. For this purpose, let $x_{0}$ be defined by
$X(x_{0},t_{k};t)=x$, where $X$ is the forward flow of the velocity field
$v_{\ell_{q}}$, which equals the identity at time $t_{k}$. Corollary 6.25
guarantees that $(x_{0},t_{k})\in\mathrm{supp\,}(\psi_{i^{\prime},q})$.
The above argument shows that the flow $(X(x_{0},t_{k};t),t)$ remains in the
support of $\psi_{i^{\prime},q}$ for all $t$ such that
$|t-t_{k}|\leq\tau_{q}\Gamma_{q+1}^{-i+\mathsf{c_{0}}}$, where
$i^{\prime}\in\\{i-1,i,i+1\\}$. In turn, using estimate (6.60), this shows
that
$\displaystyle\sup_{|t-t_{k}|\leq{\tau_{q}}\Gamma_{q+1}^{-i+\mathsf{c_{0}}}}|Dv_{\ell_{q}}(X(x_{0},t_{k};t),t)|\lesssim\left\|Dv_{\ell_{q}}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i\pm,q}))}\lesssim\Gamma_{q+1}^{i+2}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}.$
To conclude, using (4) from Lemma A.1 and (9.39), we obtain
$\displaystyle\left\|{D\Phi_{(i,k)}}-{\mathrm{Id}}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\,\widetilde{\chi}_{i,k,q}))}\lesssim\tau_{q}\Gamma_{q+1}^{-i+\mathsf{c_{0}}}\Gamma_{q+1}^{i+2}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}\lesssim\Gamma_{q+1}^{-1}$
which implies the desired estimate in (6.108). Similarly, since the flow
$(X(x_{0},t_{k};t),t)$ remains in the support of $\psi_{i^{\prime},q}$ for all
$t$ such that $|t-t_{k}|\leq{\tau_{q}}\Gamma_{q+1}^{-i+\mathsf{c_{0}}}$, for
$N\geq 2$ the estimates in (3) from Lemma A.1 give that
$\displaystyle\left\|D^{N}\Phi_{(i,k)}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\,\widetilde{\chi}_{i,k,q}))}$
$\displaystyle\lesssim{\tau_{q}}\Gamma_{q+1}^{-i+\mathsf{c_{0}}}\left\|D^{N}v_{\ell_{q}}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i\pm,q}))}$
$\displaystyle\lesssim{\tau_{q}}\Gamma_{q+1}^{-i+\mathsf{c_{0}}}(\Gamma_{q+1}^{i+2}\delta_{q}^{\nicefrac{{1}}{{2}}})\widetilde{\lambda}_{q}\mathcal{M}\left(N-1,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right)$
$\displaystyle\lesssim\Gamma_{q+1}^{-1}\mathcal{M}\left(N-1,2\mathsf{N}_{\textnormal{ind,v}},\Gamma_{q}\lambda_{q},\widetilde{\lambda}_{q}\right).$
Here we have used the bound (6.60) with $M=0$ and $K=N-1$ up to
$N=\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$.
The first bound on the inverse matrix follows from the fact that matrix
inversion is a smooth function in a neighborhood of the identity and fixes the
identity. The second bound on the inverse matrix follows from the fact that
$\det D\Phi_{(i,k)}=1$, so that we have the formula
$\textnormal{cof }D\Phi_{(i,k)}^{T}=(D\Phi_{(i,k)})^{-1}.$
Then since the cofactor matrix is a $C^{\infty}$ function of the entries of
$D\Phi$, we can apply Lemma A.4 and the bound on $D^{N}\Phi_{(i,k)}$. Note
that in the application of Lemma A.4, we set $h=D\Phi_{(i,k)}-\mathrm{Id}$,
$\Gamma=\Gamma_{\psi}=1$, $\mathcal{C}_{h}=\Gamma_{q+1}^{-1}$, and the cost of
the spatial derivatives to be that given in (6.109). The final bound on the
inverse flow $\Phi_{(i,k)}^{-1}$ follows from the identity
$D^{N}\left(\Phi_{(i,k)}^{-1}\right)(x)=D^{N-1}\left(\left(D\Phi_{(i,k)}\right)^{-1}\left(\Phi^{-1}(x)\right)\right),$
(6.115)
the Faa di Bruno formula in Lemma A.4, induction on $N$, and the previously
demonstrated bounds.
The bound in (6.113) will be achieved by bounding
$D^{N-N^{\prime}}\left[D_{t,q}^{M},D^{N^{\prime}+1}\right]\Phi_{(i,k)}\,,$
which after using that $D_{t,q}\Phi_{(i,k)}=0$ will conclude the proof.
Towards this end, we apply Lemma A.14, specifically Remark A.16 and Remark
A.15, with $v=v_{\ell_{q}}$ and $f=\Phi_{(i,k)}$. The assumption (A.50)
(adjusted to fit Remark A.15) follows from (6.60) with
$N_{0}=\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$,
$\mathcal{C}_{v}=\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}$,
$\lambda_{v}=\widetilde{\lambda_{v}}=\widetilde{\lambda}_{q}$,
$\mu_{v}=\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1}$,
$\widetilde{\mu}_{v}=\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}$, and
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$. The assumption (A.51) follows with
$\mathcal{C}_{f}=\Gamma_{q+1}^{-1}$ from (6.109) and the fact that
$D_{t,q}\Phi_{(i,k)}=0$. The desired bound then follows from the conclusion
(A.56) from Remark A.16 after using $\Gamma_{q+1}^{-1}$ to absorb implicit
constants. The bound in (6.114) will follow again from Lemma A.5 after using
that $\left(D\Phi_{(i,k)}\right)^{-1}$ is a smooth function of $D\Phi_{(i,k)}$
in a neighborhood of the identity, which is guaranteed from (6.108). As
before, we set $\Gamma=\Gamma_{\psi}=1$ and
$\mathcal{C}_{h}=\Gamma_{q+1}^{-1}$ in the application of Lemma A.5. The
derivative costs are precisely those in (6.113). ∎
### 6.5 Stress estimates on the support of the new velocity cutoff functions
Before giving the definition of the stress cutoffs, we first note that we can
upgrade the $L^{1}$ bounds for
$\psi_{i,q-1}D^{n}D_{t,q-1}^{m}\mathring{R}_{\ell_{q}}$ available in (5.7), to
$L^{1}$ bounds for $\psi_{i,q}D^{n}D_{t,q}^{m}\mathring{R}_{\ell_{q}}$. We
claim that:
###### Lemma 6.28 ($L^{1}$ estimates for zeroth order stress).
Let $\mathring{R}_{\ell_{q}}$ be as defined in (5.1). For $q\geq 1$ and $0\leq
i\leq{i_{\rm max}}(q)$ we have the estimate
$\displaystyle\left\|D^{k}D_{t,q}^{m}\mathring{R}_{\ell_{q}}\right\|_{L^{1}(\mathrm{supp\,}(\psi_{i,q}))}\lesssim\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}\mathcal{M}\left(k,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q}\Gamma_{q},\tilde{\lambda}_{q}\right)\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)$
(6.116)
for all $k+m\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$.
###### Proof of Lemma 6.28.
The first step is to apply Lemma A.14, in fact Remark A.15, to the functions
$v=v_{\ell_{q-1}}$, $f=\mathring{R}_{\ell_{q}}$, with $p=1$, and on the domain
$\Omega=\mathrm{supp\,}(\psi_{i,q-1})$. The bound (A.50) holds in view of the
inductive assumption (3.23) with $q^{\prime}=q-1$, for the parameters
$\mathcal{C}_{v}=\Gamma_{q}^{i+1}\delta_{q-1}^{\nicefrac{{1}}{{2}}}$,
$\lambda_{v}=\widetilde{\lambda}_{v}=\widetilde{\lambda}_{q-1}$,
$\mu_{v}=\Gamma_{q}^{i-\mathsf{c_{0}}}\tau_{q-1}^{-1}$,
$\widetilde{\mu}_{v}=\Gamma_{q}^{-1}\widetilde{\tau}_{q-1}^{-1}$,
$N_{x}=2\mathsf{N}_{\textnormal{ind,v}}$,
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$, and for
$N_{\circ}=\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$. On the other hand, the
assumption (A.51) holds due to (5.7) and the fact that $\psi_{i\pm,q-1}\equiv
1$ on $\mathrm{supp\,}(\psi_{i,q-1})$, with the parameters
$\mathcal{C}_{f}=\Gamma_{q}^{-\mathsf{C_{R}}}\delta_{q+1}$,
$\lambda_{f}=\lambda_{q}$, $\widetilde{\lambda}_{f}=\widetilde{\lambda}_{q}$,
$N_{x}=2\mathsf{N}_{\textnormal{ind,v}}$,
$\mu_{f}=\Gamma_{q}^{i+3}\tau_{q-1}^{-1}$,
$\widetilde{\mu}_{f}=\widetilde{\tau}_{q-1}^{-1}$,
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$, and $N_{\circ}=2\mathsf{N}_{\rm
fin}$. We thus conclude from (A.54) that
$\displaystyle\left\|\left(\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q-1}^{\beta_{i}}\right)\mathring{R}_{\ell_{q}}\right\|_{L^{1}(\mathrm{supp\,}(\psi_{i,q-1}))}\lesssim\Gamma_{q}^{-\mathsf{C_{R}}}\delta_{q+1}\mathcal{M}\left(|\alpha|,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(|\beta|,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q}^{i+3}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)$
whenever $|\alpha|+|\beta|\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$. Here
we have used that $\widetilde{\lambda}_{q-1}\leq\lambda_{q}$ and that
$\Gamma_{q}^{i+1}\delta_{q-1}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q-1}\leq\Gamma_{q}^{i+3}\tau_{q-1}^{-1}\leq\widetilde{\tau}_{q-1}^{-1}$
(in view of (9.39), (9.43), and (3.18)). In particular, the definitions of
$\psi_{i,q}$ in (6.14) and of $\psi_{m,i_{m},q}$ in (6.11) imply that
$\displaystyle\left\|\left(\prod_{i=1}^{k}D^{\alpha_{i}}D_{t,q-1}^{\beta_{i}}\right)\mathring{R}_{\ell_{q}}\right\|_{L^{1}(\mathrm{supp\,}(\psi_{i,q}))}$
$\displaystyle\qquad\lesssim\Gamma_{q}^{-\mathsf{C_{R}}}\delta_{q+1}\mathcal{M}\left(|\alpha|,2\mathsf{N}_{\textnormal{ind,v}},\lambda_{q},\widetilde{\lambda}_{q}\right)\mathcal{M}\left(|\beta|,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1},\widetilde{\tau}_{q-1}^{-1}\right)$
(6.117)
for all $|\alpha|+|\beta|\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$.
The second step is to apply Lemma A.10 with $B=D_{t,q-1}$,
$A=u_{q}\cdot\nabla$, $v=u_{q}$, $f=\mathring{R}_{\ell_{q}}$, $p=1$, and
$\Omega=\mathrm{supp\,}(\psi_{i,q})$. In this case
$D^{k}(A+B)^{m}f=D^{k}D_{t,q}^{m}\mathring{R}_{\ell_{q}}$, which is exactly
the object that we need to estimate in (6.116). The assumption (A.40) holds
due to (6.57) with
$\mathcal{C}_{v}=\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}$,
$\lambda_{v}=\Gamma_{q}\lambda_{q}$,
$\widetilde{\lambda}_{v}=\widetilde{\lambda}_{q}$,
$N_{x}=2\mathsf{N}_{\textnormal{ind,v}}$,
$\mu_{v}=\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1}$,
$\widetilde{\mu}_{v}=\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}$,
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$, and
$N_{*}=\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$. The assumption (A.41) holds
due to (6.117) with the parameters
$\mathcal{C}_{f}=\Gamma_{q}^{-\mathsf{C_{R}}}\delta_{q+1}$,
$\lambda_{f}=\lambda_{q}$, $\widetilde{\lambda}_{f}=\widetilde{\lambda}_{q}$,
$N_{x}=2\mathsf{N}_{\textnormal{ind,v}}$,
$\mu_{f}=\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1}$,
$\widetilde{\mu}_{f}=\widetilde{\tau}_{q-1}^{-1}$,
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$, and
$N_{*}=\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}$. The bound (A.44) and the
parameter inequalities
$\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}\leq\Gamma_{q+1}^{i-\mathsf{c_{0}}-2}\tau_{q}^{-1}\leq\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}$
and
$\Gamma_{q+1}^{i+3}\tau_{q-1}^{-1}\leq\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1}$
(which hold due to (9.40), (9.39), (9.43), and (3.18)) then directly imply
(6.116), concluding the proof. ∎
###### Remark 6.29 ($L^{1}$ estimates for higher order stresses).
As discussed in Sections 2.4 and 2.7, in order to verify at level $q+1$ the
inductive assumptions in (3.13) for the new stress $\mathring{R}_{q+1}$, it
will be necessary to consider a sequence of intermediate (in terms of the cost
of a spatial derivative) objects $\mathring{R}_{q,n,p}$ indexed by $n$ and $p$ for
$1\leq n\leq{n_{\rm max}}$ and $1\leq p\leq{p_{\rm max}}$. For notational
convenience, when $n=0$ and $p=1$, we define
$\mathring{R}_{q,0,1}:=\mathring{R}_{\ell_{q}}$, and estimates on
$\mathring{R}_{q,0,1}$ are already provided by Lemma 6.28. When $n=0$ and $p\geq
2$, $\mathring{R}_{q,0,p}=0$. For $1\leq n\leq{n_{\rm max}}$ and $1\leq
p\leq{p_{\rm max}}$, the higher order stresses $\mathring{R}_{q,n,p}$ are
defined in Section 8.1, specifically in (8.7). Note that the definition of
$\mathring{R}_{q,n,p}$ is given as a finite sum of sub-objects
$\mathring{H}_{q,n,p}^{n^{\prime}}$ for $n^{\prime}\leq n-1$ and thus requires
induction on $n$. The definition of $\mathring{H}_{q,n,p}^{n^{\prime}}$ is
contained in Section 8.3, specifically in (8.35) and (8.52). Estimates on
$\mathring{H}_{q,n,p}^{n^{\prime}}$ on the support of $\psi_{i,q}$ are stated
in (7.15), (7.22), and (7.29) and proven in Section 8.6. For the time being,
we _assume_ that $\mathring{R}_{q,n,p}$ is well-defined and satisfies $L^{1}$
estimates similar to those alluded to in (2.19); more precisely, we assume
that
$\displaystyle\left\|D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}\right\|_{L^{1}(\mathrm{supp\,}\psi_{i,q})}\lesssim\delta_{q+1,n,p}\lambda_{q,n,p}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)$
(6.118)
for all $0\leq k+m\leq\mathsf{N}_{\rm fin,n}$. For the purpose of defining the
stress cutoff functions, the precise definitions of the $n$ and $p$-dependent
parameters $\delta_{q+1,n,p},\lambda_{q,n,p}$, $\mathsf{N}_{\rm fin,n}$, and
$\mathsf{c}_{\textnormal{n}}$ present in (6.118) are not relevant. Note
however that definitions for $\lambda_{q,n,p}$ for $n=0$ are given in (9.26),
while for $1\leq n\leq{n_{\rm max}}$ and $1\leq p\leq{p_{\rm max}}$, the
definitions are given in (9.29). Similarly, when $n=0$, we let
$\delta_{q+1,0,p}=\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}$ as is consistent
with (9.32), and when $1\leq n\leq{n_{\rm max}}$ and $1\leq p\leq{p_{\rm
max}}$, $\delta_{q+1,n,p}$ is defined in (9.34). Finally, note that there are
losses in the sharpness and order of the available derivative estimates in
(6.118) relative to (6.116). Specifically, the higher order estimates will
only be proven up to $\mathsf{N}_{\rm fin,n}$, which is a parameter that is
decreasing with respect to $n$ and defined in (9.37). For the moment it is
only important to note that $\mathsf{N}_{\rm fin,n}\gg
14\mathsf{N}_{\textnormal{ind,v}}$ for all $0\leq n\leq{n_{\rm max}}$, which
is necessary in order to establish (3.13) and (3.15) at level $q+1$.
Similarly, there is a loss in the cost of sharp material derivatives in
(6.118), as $\mathsf{c}_{\textnormal{n}}$ will be a parameter which is
decreasing with respect to $n$. When $n=0$, we set
$\mathsf{c}_{\textnormal{n}}=\mathsf{c_{0}}$ so that (6.116) is consistent
with (6.118). For $1\leq n\leq{n_{\rm max}}$, $\mathsf{c}_{\textnormal{n}}$ is
defined in (9.35).
### 6.6 Definition of the stress cutoff functions
For $q\geq 1$, $0\leq i\leq i_{\mathrm{max}}$, $0\leq n\leq n_{\mathrm{max}}$,
and $1\leq p\leq{p_{\rm max}}$, in analogy to the functions $h_{m,j_{m},q}$ in
(6.6), and keeping in mind the bound (6.118), we define
$\displaystyle g_{i,q,n,p}^{2}(x,t)=1+\sum_{k=0}^{\mathsf{N}_{\rm
cut,x}}\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\delta_{q+1,n,p}^{-2}(\Gamma_{q+1}\lambda_{q,n,p})^{-2k}(\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1})^{-2m}|D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}(x,t)|^{2}.$
(6.119)
With this notation, for $j\geq 1$ the stress cut-off functions are defined by
$\displaystyle\omega_{i,j,q,n,p}(x,t)=\psi_{0,q+1}\Big{(}\Gamma_{q+1}^{-2j}\,g_{i,q,n,p}(x,t)\Big{)}\,,$
(6.120)
while for $j=0$ we let
$\displaystyle\omega_{i,0,q,n,p}(x,t)=\widetilde{\psi}_{0,q+1}\Big{(}g_{i,q,n,p}(x,t)\Big{)}\,,$
(6.121)
where $\psi_{0,q+1}$ and $\widetilde{\psi}_{0,q+1}$ are as in Lemma 6.2. The
above defined cutoff functions $\omega_{i,j,q,n,p}$ will be shown to obey good
estimates on the support of the velocity cutoffs $\psi_{i,q}$ defined earlier.
### 6.7 Properties of the stress cutoff functions
#### 6.7.1 Partition of unity
An immediate consequence of (6.1) with $m=0$ is that for every fixed $i$, $n$, and
$p$, we have
$\displaystyle\sum_{j\geq 0}\omega_{i,j,q,n,p}^{2}=1$ (6.122)
on $\mathbb{T}^{3}\times\mathbb{R}$. Thus,
$\\{\omega_{i,j,q,n,p}^{2}\\}_{j\geq 0}$ is a partition of unity.
#### 6.7.2 $L^{\infty}$ estimates for the higher order stresses
We recall (cf. (6.4) and (6.5)) that the cutoff function $\psi_{0,q+1}$
appearing in the definition (6.120) satisfies different derivative bounds
according to the size of its argument. Accordingly, we introduce the following
notation.
###### Definition 6.30 (Left side of the cutoff function
$\omega_{i,j,q,n,p}$).
For $j\geq 1$ we say that
$\displaystyle(x,t)$
$\displaystyle\in\mathrm{supp\,}(\omega_{i,j,q,n,p}^{\mathsf{L}})\qquad\mbox{if}\qquad\nicefrac{{1}}{{4}}\leq\Gamma_{q+1}^{-2j}g_{i,q,n,p}(x,t)\leq
1\,.$ (6.123)
When $j=0$ we do not define the left side of the cutoff function
$\omega_{i,0,q,n,p}$.
Directly from the definition (6.119)–(6.121), the support properties of the
functions $\psi_{0,q+1}$ and $\widetilde{\psi}_{0,q+1}$ stated in Lemma 6.2,
and using Definition 6.30, it follows that:
###### Lemma 6.31.
For all $0\leq m\leq\mathsf{N}_{\rm cut,t}$, $0\leq k\leq\mathsf{N}_{\rm
cut,x}$, and $j\geq 0$, we have that
$\displaystyle{\mathbf{1}}_{\mathrm{supp\,}(\omega_{i,j,q,n,p})}|D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}(x,t)|$
$\displaystyle\leq\Gamma_{q+1}^{2(j+1)}\delta_{q+1,n,p}(\Gamma_{q+1}\lambda_{q,n,p})^{k}(\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1})^{m}\,.$
In the above estimate, if we replace
${\mathbf{1}}_{\mathrm{supp\,}(\omega_{i,j,q,n,p})}$ with
${\mathbf{1}}_{\mathrm{supp\,}(\omega_{i,j,q,n,p}^{\mathsf{L}})}$ (cf.
Definition 6.30), then the factor $\Gamma_{q+1}^{2(j+1)}$ may be sharpened to
$\Gamma_{q+1}^{2j}$. Moreover, if $j\geq 1$, then
$g_{i,q,n,p}(x,t)\geq(\nicefrac{{1}}{{4}})\Gamma_{q+1}^{2j}$.
Lemma 6.31 provides sharp $L^{\infty}$ bounds for the space and material
derivatives of $\mathring{R}_{q,n,p}$, at least when the number of space
derivatives is at most $\mathsf{N}_{\rm cut,x}$ and the number of material
derivatives is at most $\mathsf{N}_{\rm cut,t}$. If we are willing to pay a
Sobolev-embedding loss, then (6.118) implies lossy $L^{\infty}$ bounds for
large numbers of space and material derivatives.
###### Lemma 6.32 (Derivative bounds with Sobolev loss).
For $q\geq 1$, $n\geq 0$, and $0\leq i\leq i_{\mathrm{max}}$, we have that:
$\displaystyle\left\|D^{k}D^{m}_{t,q}\mathring{R}_{q,n,p}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q})}\lesssim\delta_{q+1,n,p}\lambda_{q,n,p}^{k+3}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+1}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)$
(6.124)
for all $k+m\leq\mathsf{N}_{\rm fin,n}-4$.
###### Proof of Lemma 6.32.
We apply Lemma A.3 to $f=\mathring{R}_{q,n,p}$, with $\psi_{i}=\psi_{i,q}$,
and with $p=1$. Assumption (A.16) holds in view of (6.36), with the parameter
choice
$\rho=\Gamma_{q}\widetilde{\lambda}_{q}<\Gamma_{q+1}\widetilde{\lambda}_{q}=\lambda_{q,0,1}\leq\lambda_{q,n,p}$,
where the inequalities follow immediately from (9.26)-(9.29). The assumption
(A.17) holds due to (6.118), with the parameter choices
$\mathcal{C}_{f}=\delta_{q+1,n,p}$,
$\lambda=\widetilde{\lambda}=\lambda_{q,n,p}$,
$\mu_{i}=\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}}\tau_{q}^{-1}$,
$\widetilde{\mu}_{i}=\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}$,
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$, and $N_{\circ}=\mathsf{N}_{\rm
fin,n}$. The Lemma now directly follows from (A.18b) with $p=1$. ∎
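Heuristically, and purely as an informal sketch (the rigorous version is the application of Lemma A.3 above), the three extra factors of $\lambda_{q,n,p}$ in (6.124) relative to (6.118) may be attributed to the Sobolev embedding $W^{3,1}(\mathbb{T}^{3})\hookrightarrow L^{\infty}(\mathbb{T}^{3})$: ignoring the localization to $\mathrm{supp\,}\psi_{i,q}$, and using that $\lambda=\widetilde{\lambda}=\lambda_{q,n,p}$ in (6.118),
$\displaystyle\left\|D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}\right\|_{L^{\infty}}\lesssim\sum_{|\alpha|\leq 3}\left\|D^{|\alpha|}D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}\right\|_{L^{1}}\lesssim\delta_{q+1,n,p}\lambda_{q,n,p}^{k+3}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right),$
while the extra factor of $\Gamma_{q+1}$ in the material derivative cost of (6.124) accounts for the localization to $\mathrm{supp\,}\psi_{i,q}$ carried out in Lemma A.3.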
We note that Lemmas 6.31 and 6.32 imply the following estimate:
###### Corollary 6.33 ($L^{\infty}$ bounds for the stress).
For $q\geq 0$, $0\leq i\leq{i_{\rm max}}$, $0\leq n\leq{n_{\rm max}}$, and
$1\leq p\leq{p_{\rm max}}$ we have
$\displaystyle\left\|D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q}\cap\mathrm{supp\,}\omega_{i,j,q,n,p})}$
$\displaystyle\qquad\lesssim\Gamma_{q+1}^{2(j+1)}\delta_{q+1,n,p}(\Gamma_{q+1}\lambda_{q,n,p})^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)$
(6.125)
for all $k+m\leq\mathsf{N}_{\rm fin,n}-4$. In the above estimate, if we
replace $\mathrm{supp\,}(\omega_{i,j,q,n,p})$ with
$\mathrm{supp\,}(\omega_{i,j,q,n,p}^{\mathsf{L}})$ (cf. Definition 6.30), then
the factor $\Gamma_{q+1}^{2(j+1)}$ may be sharpened to $\Gamma_{q+1}^{2j}$.
###### Proof of Corollary 6.33.
For $m\leq\mathsf{N}_{\rm cut,t}$ and $k\leq\mathsf{N}_{\rm cut,x}$, the bound
(6.125) is already contained in Lemma 6.31 (both for
$\mathrm{supp\,}(\omega_{i,j,q,n,p})$, and the improved bound for
$\mathrm{supp\,}(\omega_{i,j,q,n,p}^{\mathsf{L}})$). When either
$k>\mathsf{N}_{\rm cut,x}$ or $m>\mathsf{N}_{\rm cut,t}$, we appeal to
estimate (6.124) and the parameter bound
$\displaystyle\delta_{q+1,n,p}\lambda_{q,n,p}^{k+3}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+1}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)$
$\displaystyle\leq\left(\Gamma_{q+1}^{-k-\min\\{m,\mathsf{N}_{\textnormal{ind,t}}\\}}\lambda_{q,n,p}^{3}\right)\delta_{q+1,n,p}(\Gamma_{q+1}\lambda_{q,n,p})^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)$
$\displaystyle\leq\delta_{q+1,n,p}(\Gamma_{q+1}\lambda_{q,n,p})^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)\,.$
The second estimate in the above display is a consequence of the fact that,
since $\mathsf{N}_{\rm cut,x}\geq\mathsf{N}_{\rm cut,t}$, whenever
$k>\mathsf{N}_{\rm cut,x}$ or $m>\mathsf{N}_{\rm cut,t}$ we have
$\displaystyle\Gamma_{q+1}^{-k-\min\\{m,\mathsf{N}_{\textnormal{ind,t}}\\}}\lambda_{q,n,p}^{3}\leq\Gamma_{q+1}^{-\mathsf{N}_{\rm
cut,t}}\lambda_{q+1}^{3}\leq 1\,,$ (6.126)
once $\mathsf{N}_{\rm cut,t}$ (and hence $\mathsf{N}_{\rm cut,x}$) are chosen
large enough, as in (9.51). ∎
In the proof of Lemma 6.36 below, we shall require one more $L^{\infty}$ bound
for $\mathring{R}_{q,n,p}$, this time for iterates of space and material
derivatives. It is convenient to record this bound now, as it follows directly
from Corollary 6.33.
###### Corollary 6.34.
For $q\geq 0$, $0\leq i\leq{i_{\rm max}}$, $0\leq n\leq{n_{\rm max}}$, $1\leq
p\leq{p_{\rm max}}$, and $\alpha,\beta\in\mathbb{N}_{0}^{k}$ we have
$\displaystyle\left\|\left(\prod_{\ell=1}^{k}D^{\alpha_{\ell}}D_{t,q}^{\beta_{\ell}}\right)\mathring{R}_{q,n,p}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q}\cap\mathrm{supp\,}\omega_{i,j,q,n,p})}$
$\displaystyle\qquad\lesssim\Gamma_{q+1}^{2(j+1)}\delta_{q+1,n,p}(\Gamma_{q+1}\lambda_{q,n,p})^{|\alpha|}\mathcal{M}\left(|\beta|,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)$
(6.127)
for all $|\alpha|+|\beta|\leq\mathsf{N}_{\rm fin,n}-4$. In the above estimate,
if we replace $\mathrm{supp\,}(\omega_{i,j,q,n,p})$ with
$\mathrm{supp\,}(\omega_{i,j,q,n,p}^{\mathsf{L}})$ (cf. Definition 6.30), then
the factor $\Gamma_{q+1}^{2(j+1)}$ may be sharpened to $\Gamma_{q+1}^{2j}$.
###### Proof of Corollary 6.34.
The proof follows from Corollary 6.33 and Lemma A.14. The bounds corresponding
to $\mathrm{supp\,}\omega_{i,j,q,n,p}$ and
$\mathrm{supp\,}\omega_{i,j,q,n,p}^{\mathsf{L}}$ are identical (except for the
improvement $\Gamma_{q+1}^{2(j+1)}\mapsto\Gamma_{q+1}^{2j}$ in the latter
case), so we only give details for the former. Since
$D_{t,q}=\partial_{t}+v_{\ell_{q}}\cdot\nabla$, Lemma A.14 is applied with
$v=v_{\ell_{q}}$, $f=\mathring{R}_{q,n,p}$,
$\Omega=\mathrm{supp\,}\psi_{i,q}\cap\mathrm{supp\,}\omega_{i,j,q,n,p}$, and
$p=\infty$. In view of estimate (6.60) and the fact that
$\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}\geq\mathsf{N}_{\rm fin,n}$, the
assumption (A.50) holds with
$\mathcal{C}_{v}=\Gamma_{q+1}^{i+1}\delta_{q}^{\nicefrac{{1}}{{2}}}$,
$\lambda_{v}=\Gamma_{q}\lambda_{q}$,
$\widetilde{\lambda}_{v}=\widetilde{\lambda}_{q}$,
$N_{x}=2\mathsf{N}_{\textnormal{ind,v}}$,
$\mu_{v}=\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}}\tau_{q}^{-1}$,
$\widetilde{\mu}_{v}=\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}$, and
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$. On the other hand, the bound (6.125)
implies assumption (A.51) with
$\mathcal{C}_{f}=\Gamma_{q+1}^{2(j+1)}\delta_{q+1,n,p}$,
$\lambda_{f}=\widetilde{\lambda}_{f}=\Gamma_{q+1}\lambda_{q,n,p}$,
$\mu_{f}=\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1}$,
$\widetilde{\mu}_{f}=\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}$, and
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$. Since $\lambda_{v}\leq\lambda_{f}$,
$\widetilde{\lambda}_{v}\leq\widetilde{\lambda}_{f}$, $\mu_{v}\leq\mu_{f}$,
and $\widetilde{\mu}_{v}=\widetilde{\mu}_{f}$, we deduce from the bound (A.54)
(in fact, its version mentioned in Remark A.15) that (6.127) holds, thereby
concluding the proof. Here we are also implicitly using the parameter estimate
$\mathcal{C}_{v}\widetilde{\lambda}_{v}\leq\mu_{f}$, which holds due to
(9.39). ∎
#### 6.7.3 Maximal $j$ index in the stress cutoffs
###### Lemma 6.35 (Maximal $j$ index in the stress cutoffs).
Fix $q\geq 0$, $0\leq n\leq n_{\mathrm{max}}$, and $1\leq p\leq{p_{\rm max}}$.
There exists a ${j_{\rm max}}={j_{\rm max}}(q,n,p)\geq 1$, determined by
(6.128) below, which is bounded independently of $q$, $n$, and $p$ as in
(6.129), such that for any $0\leq i\leq{i_{\rm max}}(q)$, we have
$\displaystyle\psi_{i,q}\,\omega_{i,j,q,n,p}\equiv 0\qquad\mbox{for all}\qquad
j>j_{\mathrm{max}}.$
Moreover, the bound
$\displaystyle\Gamma_{q+1}^{2(j_{\mathrm{max}}-1)}\lesssim\lambda_{q,n,p}^{3}$
holds, with an implicit constant that is independent of $q$ and $n$.
###### Proof of Lemma 6.35.
We define $j_{\mathrm{max}}$ by
$\displaystyle{j_{\rm max}}={j_{\rm
max}}(q,n,p)=\frac{1}{2}\left\lceil\frac{\log(M_{b}\sqrt{8\mathsf{N}_{\rm
cut,x}\mathsf{N}_{\rm
cut,t}}\lambda_{q,n,p}^{3})}{\log(\Gamma_{q+1})}\right\rceil$ (6.128)
where $M_{b}$ is the implicit $q$, $n$, $p$, and $i$-independent constant in
(6.124); that is, we take the largest such constant among all values of $k$ and
$m$ with $k+m\leq\mathsf{N}_{\rm fin,n}-4$. To see that $j_{\mathrm{max}}$ may
be bounded independently of $q$, $n$, and $p$, we note that
$\lambda_{q,n,p}\leq\lambda_{q+1}$, and thus
$\displaystyle 2j_{\mathrm{max}}\leq 1+\frac{\log(M_{b}\sqrt{8\mathsf{N}_{\rm
cut,x}\mathsf{N}_{\rm cut,t}})+3\log(\lambda_{q+1})}{\log(\Gamma_{q+1})}\to
1+\frac{3b}{\varepsilon_{\Gamma}(b-1)}\quad\mbox{as}\quad q\to\infty.$
Thus, assuming that $a=\lambda_{0}$ is sufficiently large, we obtain that
$\displaystyle 2{j_{\rm max}}(q,n,p)\leq\frac{4b}{\varepsilon_{\Gamma}(b-1)}$
(6.129)
for all $q\geq 0$, $0\leq n\leq{n_{\rm max}}$, and $1\leq p\leq{p_{\rm max}}$.
To conclude the proof of the lemma, let $j>j_{\mathrm{max}}$, with
$j_{\mathrm{max}}$ as defined in (6.128), and assume for the sake of
contradiction that
$\mathrm{supp\,}(\psi_{i,q}\omega_{i,j,q,n,p})\neq\emptyset$; let $(x,t)$ be a
point in this set. In particular, $j\geq 1$. Then, by (6.119)–(6.120) and the
pigeonhole principle, there exist $0\leq k\leq\mathsf{N}_{\rm cut,x}$ and
$0\leq m\leq\mathsf{N}_{\rm cut,t}$ such that
$\displaystyle|D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}(x,t)|\geq\frac{\Gamma_{q+1}^{2j}}{\sqrt{8\mathsf{N}_{\rm
cut,x}\mathsf{N}_{\rm
cut,t}}}\delta_{q+1,n,p}(\Gamma_{q+1}\lambda_{q,n,p})^{k}(\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1})^{m}.$
On the other hand, from (6.124), we have that
$\displaystyle|D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}(x,t)|\leq
M_{b}\lambda_{q,n,p}^{3}\delta_{q+1,n,p}\lambda_{q,n,p}^{k}(\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+1}\tau_{q}^{-1})^{m}.$
The above two estimates imply that
$\displaystyle\Gamma_{q+1}^{2({j_{\rm max}}+1)}\leq\Gamma_{q+1}^{2j}\leq
M_{b}\sqrt{8\mathsf{N}_{\rm cut,x}\mathsf{N}_{\rm
cut,t}}\,\Gamma_{q+1}^{-k-m}\lambda_{q,n,p}^{3}\leq M_{b}\sqrt{8\mathsf{N}_{\rm
cut,x}\mathsf{N}_{\rm cut,t}}\,\lambda_{q,n,p}^{3},$
which contradicts the fact that $j>j_{\mathrm{max}}$, as defined in (6.128). ∎
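As a quick numerical sanity check of the uniform bound (6.129), one can evaluate the definition (6.128) for hypothetical parameter values (the choices of $b$, $\varepsilon_{\Gamma}$, $a$, $M_{b}$, $\mathsf{N}_{\rm cut,x}$, and $\mathsf{N}_{\rm cut,t}$ below are illustrative only; the actual values are fixed in Section 9), taking $\lambda_{q}=a^{(b^{q})}$, $\Gamma_{q+1}=(\lambda_{q+1}/\lambda_{q})^{\varepsilon_{\Gamma}}$, and bounding $\lambda_{q,n,p}\leq\lambda_{q+1}$ as in the proof:

```python
import math

# Illustrative parameter values only: the paper fixes b, eps_Gamma, a, M_b,
# N_cut,x, and N_cut,t elsewhere (Section 9). Here we merely test the shape
# of the bound (6.129) against the definition (6.128).
b, eps_G, a = 2.0, 0.5, 1.0e4
M_b, N_cut_x, N_cut_t = 100.0, 10, 10

C = M_b * math.sqrt(8 * N_cut_x * N_cut_t)
rhs_6_129 = 4 * b / (eps_G * (b - 1))        # right-hand side of (6.129)

for q in range(8):
    log_lam_q = (b ** q) * math.log(a)       # lambda_q = a^(b^q), in log form
    log_lam_q1 = b * log_lam_q               # lambda_{q+1}
    # Gamma_{q+1} = (lambda_{q+1}/lambda_q)^{eps_Gamma}, in log form
    log_Gamma = eps_G * (log_lam_q1 - log_lam_q)
    # 2*j_max from (6.128), with lambda_{q,n,p} replaced by its upper
    # bound lambda_{q+1}, as in the proof
    two_j_max = math.ceil((math.log(C) + 3 * log_lam_q1) / log_Gamma)
    assert two_j_max <= rhs_6_129, (q, two_j_max, rhs_6_129)
```

For these illustrative values the computed quantity $2{j_{\rm max}}$ quickly settles near $1+\nicefrac{3b}{\varepsilon_{\Gamma}(b-1)}$, well below the uniform bound $\nicefrac{4b}{\varepsilon_{\Gamma}(b-1)}$.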
#### 6.7.4 Bounds for space and material derivatives of the stress cutoffs
###### Lemma 6.36 (Derivative bounds for the stress cutoffs).
For $q\geq 0$, $0\leq n\leq{n_{\rm max}}$, $1\leq p\leq{p_{\rm max}}$, $0\leq
i\leq{i_{\rm max}}$, and $0\leq j\leq{j_{\rm max}}$, we have that
$\displaystyle\frac{{\mathbf{1}}_{\mathrm{supp\,}\psi_{i,q}}|D^{N}D_{t,q}^{M}\omega_{i,j,q,n,p}|}{\omega_{i,j,q,n,p}^{1-(N+M)/\mathsf{N}_{\rm
fin}}}\lesssim(\Gamma_{q+1}\lambda_{q,n,p})^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)$
(6.130)
for all $N+M\leq\mathsf{N}_{\rm fin,n}-\mathsf{N}_{\rm cut,x}-\mathsf{N}_{\rm
cut,t}-4$.
###### Remark 6.37.
Notice that the sharp derivative bounds in (6.130) are only up to
$\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm cut,t}$. In order to obtain
bounds up to $\mathsf{N}_{\textnormal{ind,t}}$, we may argue exactly as in the
string of inequalities which converted (6.83) into (6.84), resulting in the
bound
$\displaystyle\frac{{\mathbf{1}}_{\mathrm{supp\,}\psi_{i,q}}|D^{N}D_{t,q}^{M}\omega_{i,j,q,n,p}|}{\omega_{i,j,q,n,p}^{1-(N+M)/\mathsf{N}_{\rm
fin}}}\lesssim(\Gamma_{q+1}\lambda_{q,n,p})^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+3}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)\,.$
(6.131)
###### Proof of Lemma 6.36.
For simplicity, we only treat here the case $j\geq 1$. Indeed, for $j=0$ we
simply replace $\psi_{0,q+1}$ with $\widetilde{\psi}_{0,q+1}$, which by Lemma
6.2 has similar properties to $\psi_{0,q+1}$.
The goal is to apply the Faa di Bruno Lemma A.5 with $\psi=\psi_{0,q+1}$,
$\Gamma=\Gamma_{q+1}^{-j}$, $D_{t}=D_{t,q}$, and $h(x,t)=g_{i,q,n,p}(x,t)$, so
that $g=\omega_{i,j,q,n,p}$.
Because the cutoff function $\psi=\psi_{0,q+1}$ satisfies slightly different
estimates depending on whether we are in the case (6.4) or (6.5), assumption
(A.24) holds with $\Gamma_{\psi}=1$, and respectively
$\Gamma_{\psi}=\Gamma_{q+1}^{-1}$, depending on whether we work on the set
$\mathrm{supp\,}(\omega_{i,j,q,n,p}^{\mathsf{L}})$ or on the set
$\mathrm{supp\,}(\omega_{i,j,q,n,p})\setminus\mathrm{supp\,}(\omega_{i,j,q,n,p}^{\mathsf{L}})$
(cf. Definition 6.30). We have in fact encountered this same issue in the
proof of Lemmas 6.13 and 6.20. The slightly worse value of $\Gamma_{\psi}$ for
$(x,t)\in\mathrm{supp\,}(\omega_{i,j,q,n,p}^{\mathsf{L}})$ is however
precisely balanced out by the fact that in Corollary 6.34 the bound (6.127) is
improved by a factor of $\Gamma_{q+1}^{2}$ on
$\mathrm{supp\,}(\omega_{i,j,q,n,p}^{\mathsf{L}})$. Since in the end these two
factors of $\Gamma_{q+1}^{2}$ cancel out, as they did in Lemmas 6.13 and 6.20,
we only give the proof of the bound (6.130) for
$(x,t)\in\mathrm{supp\,}(\omega_{i,j,q,n,p})\setminus\mathrm{supp\,}(\omega_{i,j,q,n,p}^{\mathsf{L}})$,
which is equivalent to the condition that
$1<\Gamma_{q+1}^{-2j}g_{i,q,n,p}(x,t)\leq\Gamma_{q+1}^{2}$. Note moreover that
we do not perform any estimates for $(x,t)$ such that
$1<\Gamma_{q+1}^{-2j}g_{i,q,n,p}(x,t)<(\nicefrac{{1}}{{4}})\Gamma_{q+1}^{2}$
since in this region $\psi_{0,q+1}\equiv 1$ (see item 2(b) in Lemma 6.2) and
so its derivatives vanish. Therefore, for the remainder of the proof we
work with the subset of $\mathrm{supp\,}\omega_{i,j,q,n,p}$ on which we have
$\displaystyle(\nicefrac{{1}}{{4}})\Gamma_{q+1}^{2}\leq\Gamma_{q+1}^{-2j}g_{i,q,n,p}(x,t)\leq\Gamma_{q+1}^{2}\,.$
(6.132)
This ensures that assumption (A.24) of Lemma A.5 holds with
$\Gamma_{\psi}=\Gamma_{q+1}^{-1}$.
In order to verify condition (A.25), the main requirement is an $L^{\infty}$
bound for $D^{N}D_{t,q}^{M}g_{i,q,n,p}$ on the support of
$\psi_{i,q}\omega_{i,j,q,n,p}$. In this direction, we claim that for all
$(x,t)$ as in (6.132), we have
$\displaystyle{\mathbf{1}}_{\mathrm{supp\,}\psi_{i,q}}\left|D^{N}D_{t,q}^{M}g_{i,q,n,p}(x,t)\right|\lesssim\Gamma_{q+1}^{2j+2}(\Gamma_{q+1}\lambda_{q,n,p})^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)$
(6.133)
for all $N+M\leq\mathsf{N}_{\rm fin,n}-\mathsf{N}_{\rm cut,x}-\mathsf{N}_{\rm
cut,t}-4$. Thus, assumption (A.25) of Lemma A.5 holds with
$\mathcal{C}_{h}=\Gamma_{q+1}^{2j+2}$,
$\lambda=\widetilde{\lambda}=\Gamma_{q+1}\lambda_{q,n,p}$,
$\mu=\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1}$,
$\widetilde{\mu}=\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}$, and
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm cut,t}$. In particular,
we note that $(\Gamma_{\psi}\Gamma)^{-2}\mathcal{C}_{h}=1$, and estimate
(A.26) of Lemma A.5 directly implies (6.130).
Thus, in order to complete the proof of the lemma it remains to establish
estimate (6.133). As in the proof of Lemma 6.13, it is more convenient to
first estimate $D^{N}D_{t,q}^{M}(g_{i,q,n,p}(x,t)^{2})$, as its definition
(cf. (6.119)) makes it more amenable to the use of the Leibniz rule. Indeed,
for all $N+M\leq\mathsf{N}_{\rm fin,n}-\mathsf{N}_{\rm cut,x}-\mathsf{N}_{\rm
cut,t}-4$ we have that
$\displaystyle D^{N}D_{t,q}^{M}g_{i,q,n,p}^{2}$
$\displaystyle=\sum_{N^{\prime}=0}^{N}\sum_{M^{\prime}=0}^{M}{N\choose
N^{\prime}}{M\choose M^{\prime}}$
$\displaystyle\qquad\times\sum_{k=0}^{\mathsf{N}_{\rm
cut,x}}\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\frac{D^{N^{\prime}}D_{t,q}^{M^{\prime}}D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}\,D^{N-N^{\prime}}D_{t,q}^{M-M^{\prime}}D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}}{\delta_{q+1,n,p}^{2}(\Gamma_{q+1}\lambda_{q,n,p})^{2k}(\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1})^{2m}}.$
Combining the above display with estimate (6.127) and the fact that
$k+m+N+M\leq\mathsf{N}_{\rm fin,n}-4$, we deduce
$\displaystyle{\mathbf{1}}_{\mathrm{supp\,}\psi_{i,q}\cap\mathrm{supp\,}\omega_{i,j,q,n,p}}\left|D^{N}D_{t,q}^{M}g_{i,q,n,p}^{2}\right|$
$\displaystyle\lesssim\sum_{N^{\prime}=0}^{N}\sum_{M^{\prime}=0}^{M}\sum_{k=0}^{\mathsf{N}_{\rm
cut,x}}\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\frac{1}{\delta_{q+1,n,p}^{2}(\Gamma_{q+1}\lambda_{q,n,p})^{2k}(\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1})^{2m}}$
$\displaystyle\qquad\times\Gamma_{q+1}^{2(j+1)}\delta_{q+1,n,p}(\Gamma_{q+1}\lambda_{q,n,p})^{N^{\prime}+k}\mathcal{M}\left(M^{\prime}+m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)$
$\displaystyle\qquad\times\Gamma_{q+1}^{2(j+1)}\delta_{q+1,n,p}(\Gamma_{q+1}\lambda_{q,n,p})^{N-N^{\prime}+k}\mathcal{M}\left(M-M^{\prime}+m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)$
$\displaystyle\lesssim\Gamma_{q+1}^{4(j+1)}(\Gamma_{q+1}\lambda_{q,n,p})^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)\,.$
(6.134)
Lastly, we show that the bound (6.134), together with the fact that we work
with $(x,t)$ such that (6.132) holds, implies (6.133). This argument is the same as the one
found earlier in (6.45)–(6.47). We establish (6.133) inductively in $K$ for
$N+M\leq K$. We know from (6.132) that (6.133) holds for $K=0$, i.e., for
$N=M=0$. So let us assume by induction that (6.133) was previously established
for any pair $N^{\prime}+M^{\prime}\leq K-1$, and fix a new pair with $N+M=K$.
Similarly to (6.46), the Leibniz rule gives
$\displaystyle
D^{N}D_{t,q}^{M}(g_{i,q,n,p}^{2})-2g_{i,q,n,p}D^{N}D_{t,q}^{M}g_{i,q,n,p}$
$\displaystyle=\sum_{\begin{subarray}{c}0\leq N^{\prime}\leq N\\\ 0\leq
M^{\prime}\leq M\\\ 0<N^{\prime}+M^{\prime}<N+M\end{subarray}}{N\choose
N^{\prime}}{M\choose
M^{\prime}}D^{N^{\prime}}D_{t,q}^{M^{\prime}}g_{i,q,n,p}\,D^{N-N^{\prime}}D_{t,q}^{M-M^{\prime}}g_{i,q,n,p}\,.$
Since every term in the sum on the right side of the above display satisfies
$1\leq N^{\prime}+M^{\prime}\leq K-1$, these terms are bounded by our
inductive assumption, and we deduce that
$\displaystyle{\mathbf{1}}_{\mathrm{supp\,}\psi_{i,q}}\left|D^{N}D_{t,q}^{M}g_{i,q,n,p}\right|$
$\displaystyle\lesssim\frac{\left|D^{N}D_{t,q}^{M}(g_{i,q,n,p}^{2})\right|}{g_{i,q,n,p}}$
$\displaystyle+\frac{\Gamma_{q+1}^{2(2j+2)}(\Gamma_{q+1}\lambda_{q,n,p})^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-\mathsf{N}_{\rm
cut,t},\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\tilde{\tau}_{q}^{-1}\right)}{g_{i,q,n,p}}\,.$
Thus, (6.133) also holds for $N+M=K$ by combining the above display with
(6.132) (which implies $g_{i,q,n,p}\geq\Gamma_{q+1}^{2j+2}$), and with
estimate (6.134) (which gives the bounds for the derivatives of
$g_{i,q,n,p}^{2}$). This concludes the proof of (6.133) and thus of the Lemma.
∎
#### 6.7.5 $L^{r}$ norm of the stress cutoffs
###### Lemma 6.38.
Let $q\geq 0$. For $r\geq 1$ we have that
$\displaystyle\left\|\omega_{i,j,q,n,p}\right\|_{L^{r}(\mathrm{supp\,}\psi_{i\pm,q})}\lesssim\Gamma_{q+1}^{-\nicefrac{{2j}}{{r}}}$
(6.135)
holds for all $0\leq i\leq{i_{\rm max}}$, $0\leq j\leq{j_{\rm max}}$, $0\leq
n\leq{n_{\rm max}}$, and $1\leq p\leq{p_{\rm max}}$. The implicit constant is
independent of $i,j,q,n$ and $p$.
###### Proof of Lemma 6.38.
The argument is similar to the proof of (6.87). We begin with the case $r=1$.
The other cases $r\in(1,\infty]$ follow from the fact that
$\omega_{i,j,q,n,p}\leq 1$ and Lebesgue interpolation.
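To spell out the interpolation step: since $0\leq\omega_{i,j,q,n,p}\leq 1$ and $r\geq 1$, we have $\omega_{i,j,q,n,p}^{r}\leq\omega_{i,j,q,n,p}$ pointwise, so

```latex
\left\|\omega_{i,j,q,n,p}\right\|_{L^{r}(\mathrm{supp\,}\psi_{i\pm,q})}^{r}
=\int\omega_{i,j,q,n,p}^{r}\,{\mathbf{1}}_{\mathrm{supp\,}\psi_{i\pm,q}}
\leq\int\omega_{i,j,q,n,p}\,{\mathbf{1}}_{\mathrm{supp\,}\psi_{i\pm,q}}
=\left\|\omega_{i,j,q,n,p}\right\|_{L^{1}(\mathrm{supp\,}\psi_{i\pm,q})}
\lesssim\Gamma_{q+1}^{-2j},
```

and taking $r^{\rm th}$ roots gives (6.135) for $r\in(1,\infty)$; the case $r=\infty$ is immediate from $\omega_{i,j,q,n,p}\leq 1$.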
For $j=0$ we are done, since by definition $0\leq\omega_{i,j,q,n,p}\leq 1$; thus we consider only $j\geq 1$. Since $\psi_{i\pm 2,q}\equiv 1$ on $\mathrm{supp\,}(\psi_{i\pm,q})$, using Lemma 6.31 we see that for any $(x,t)\in\mathrm{supp\,}(\psi_{i\pm,q}\omega_{i,j,q,n,p})$ we have
$\displaystyle\psi_{i\pm 2,q}^{2}g_{i,q,n,p}^{2}$ $\displaystyle=\psi_{i\pm
2,q}^{2}+\sum_{k=0}^{\mathsf{N}_{\rm cut,x}}\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\frac{|\psi_{i\pm
2,q}D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}(x,t)|^{2}}{\delta_{q+1,n,p}^{2}(\Gamma_{q+1}\lambda_{q,n,p})^{2k}(\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1})^{2m}}\geq\frac{1}{16}\Gamma_{q+1}^{4j}.$
Using that $a+b\geq\sqrt{a^{2}+b^{2}}$ for $a,b\geq 0$, and using
$\Gamma_{q+1}^{4j}\geq 64$ for $j\geq 1$, we conclude that
$\displaystyle\sum_{k=0}^{\mathsf{N}_{\rm cut,x}}\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\frac{|\psi_{i\pm
2,q}D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}(x,t)|}{\delta_{q+1,n,p}(\Gamma_{q+1}\lambda_{q,n,p})^{k}(\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1})^{m}}\geq\frac{1}{16}\Gamma_{q+1}^{2j}.$
Therefore, using Chebyshev’s inequality and the inductive assumption (6.118),
we obtain
$\displaystyle\left|\mathrm{supp\,}(\psi_{i\pm,q}\omega_{i,j,q,n,p})\right|$
$\displaystyle\leq\left|\left\{(x,t)\colon\psi_{i\pm 2,q}g_{i,q,n,p}\geq(\nicefrac{{1}}{{16}})\Gamma_{q+1}^{2j}\right\}\right|$
$\displaystyle\leq\left|\left\{(x,t)\colon\sum_{k=0}^{\mathsf{N}_{\rm cut,x}}\sum_{m=0}^{\mathsf{N}_{\rm cut,t}}\frac{|\psi_{i\pm 2,q}D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}(x,t)|}{\delta_{q+1,n,p}(\Gamma_{q+1}\lambda_{q,n,p})^{k}(\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1})^{m}}\geq(\nicefrac{{1}}{{16}})\Gamma_{q+1}^{2j}\right\}\right|$
$\displaystyle\leq 16\Gamma_{q+1}^{-2j}\sum_{k=0}^{\mathsf{N}_{\rm
cut,x}}\sum_{m=0}^{\mathsf{N}_{\rm
cut,t}}\delta_{q+1,n,p}^{-1}(\Gamma_{q+1}\lambda_{q,n,p})^{-k}(\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}+2}\tau_{q}^{-1})^{-m}\left\|\psi_{i\pm
2,q}D^{k}D_{t,q}^{m}\mathring{R}_{q,n,p}\right\|_{L^{1}}$
$\displaystyle\lesssim 16\Gamma_{q+1}^{-2j}\sum_{k=0}^{\mathsf{N}_{\rm
cut,x}}\sum_{m=0}^{\mathsf{N}_{\rm cut,t}}\Gamma_{q+1}^{-k}$
$\displaystyle\lesssim\Gamma_{q+1}^{-2j}$
where the implicit constant only depends on $\mathsf{N}_{\rm cut,t}$. The
proof is concluded since the $L^{1}$ norm of a function with range in $[0,1]$
is bounded by the measure of its support. ∎
### 6.8 Definition and properties of the checkerboard cutoff functions
For $0\leq n\leq{n_{\rm max}}$, consider all the
$\frac{\mathbb{T}^{3}}{\lambda_{q,n,0}}$-periodic cells contained in
$\mathbb{T}^{3}$, of which there are $\lambda_{q,n,0}^{3}$. Index these cells
by integer triples $\vec{l}=(l,w,h)$ for $l,w,h\in\{0,\dots,\lambda_{q,n,0}-1\}$. Let $\mathcal{X}_{q,n,\vec{l}}$ be a
partition of unity adapted to this checkerboard of periodic cells which
satisfies
$\sum_{\vec{l}=(l,w,h)}\left(\mathcal{X}_{q,n,\vec{l}}\right)^{2}=1$ (6.136)
for any $q$ and $n$. Furthermore, for $\vec{l}=(l,w,h),\vec{l}^{*}=(l^{*},w^{*},h^{*})\in\{0,\dots,\lambda_{q,n,0}-1\}^{3}$
such that
$|l-l^{*}|\geq 2,\qquad|w-w^{*}|\geq 2,\qquad|h-h^{*}|\geq 2,$
we impose that
$\mathcal{X}_{q,n,\vec{l}}\mathcal{X}_{q,n,\vec{l}^{*}}=0.$ (6.137)
###### Definition 6.39 (Checkerboard Cutoff Function).
Given $q$, $0\leq n\leq{n_{\rm max}}$, $i\leq{i_{\rm max}}$, and
$k\in\mathbb{Z}$, we define
$\zeta_{q,i,k,n,\vec{l}}(x,t)=\mathcal{X}_{q,n,\vec{l}}\left(\Phi_{i,k,q}(x,t)\right).$
(6.138)
###### Lemma 6.40.
The cutoff functions $\left\{\zeta_{q,i,k,n,\vec{l}}\right\}_{\vec{l}}$
satisfy the following properties:
1. (1)
The material derivative $D_{t,q}\left(\zeta_{q,i,k,n,\vec{l}}\right)$
vanishes.
2. (2)
For each $t\in\mathbb{R}$ and all $x\in\mathbb{T}^{3}$,
$\sum_{\vec{l}=(l,w,h)}\left(\zeta_{q,i,k,n,\vec{l}}(x,t)\right)^{2}=1.$
(6.139)
3. (3)
We have the spatial derivative estimate for all
$m\leq\nicefrac{{3\mathsf{N}_{\rm fin}}}{{2}}+1$
$\left\|D^{m}\zeta_{q,i,k,n,\vec{l}}\right\|_{L^{\infty}\left(\mathrm{supp\,}\psi_{i,q}\widetilde{\chi}_{i,k,q}\right)}\lesssim\lambda_{q,n,0}^{m}.$
(6.140)
4. (4)
There exists an implicit dimensional constant independent of $q$, $n$, $k$,
$i$, and $\vec{l}$ such that for all
$(x,t)\in\mathrm{supp\,}\psi_{i,q}\widetilde{\chi}_{i,k,q}$,
$\textnormal{diam}\left(\mathrm{supp\,}\left(\zeta_{q,i,k,n,\vec{l}}(\cdot,t)\right)\right)\lesssim\left(\lambda_{q,n,0}\right)^{-1}.$
(6.141)
###### Proof of Lemma 6.40.
The proof of (1) is immediate, since $\zeta_{q,i,k,n,\vec{l}}$ is the composition of $\mathcal{X}_{q,n,\vec{l}}$ with the flow map $\Phi_{i,k,q}$. (6.139) follows from (1),
(6.136), and the fact that for each $t\in\mathbb{R}$, $\Phi_{i,k,q}(t,\cdot)$
is a diffeomorphism of $\mathbb{T}^{3}$. The spatial derivative estimate in
(6.140) follows from Lemma A.4, (6.109), and the parameter definitions in
(9.19), (9.26), and (9.29). The property in (6.141) follows from the
construction of the $\mathcal{X}_{q,n,\vec{l}}$ functions (which can be taken simply as a dilation by a factor of $\lambda_{q,n,0}$ of a $q$-independent partition of unity on $\mathbb{R}^{3}$) and (6.108). ∎
### 6.9 Definition of the cumulative cutoff function
Finally, combining the cutoff functions defined in Definition 6.6,
(6.120)–(6.121), and (6.96), we define the cumulative cutoff function by
$\displaystyle{\eta_{i,j,k,q,n,p,\vec{l}}(x,t)=\psi_{i,q}(x,t)\omega_{i,j,q,n,p}(x,t)\chi_{i,k,q}(t)\overline{\chi}_{q,n,p}(t)\zeta_{q,i,k,n,\vec{l}}(x,t).}$
Since the values of $q$ and $n$ are clear from the context, the values in
$\vec{l}$ are irrelevant in many arguments, and the time cutoffs
$\overline{\chi}_{q,n,p}$ are only used in Section 8.9, we may abbreviate the
above using any of
$\displaystyle\eta_{i,j,k,q,n,p,\vec{l}}\,(x,t)=\eta_{i,j,k,q,n,p}(x,t)=\eta_{(i,j,k)}(x,t)=\psi_{(i)}(x,t)\omega_{(i,j)}(x,t)\chi_{(i,k)}(t)\zeta_{(i,k)}(x,t).$
It follows from Lemma 6.8, (6.122), (6.94), and (6.139) that for every
${(q,n,p)}$ fixed, we have a partition of unity
$\displaystyle\sum_{i,j\geq
0}\sum_{k\in\mathbb{Z}}\sum_{\vec{l}}\eta_{i,j,k,q,n,p,\vec{l}}^{2}\,(x,t)=1.$
(6.142)
The sum in $i$ goes up to $i_{\mathrm{max}}$ (defined in (6.53)), while the
sum in $j$ goes up to $j_{\mathrm{max}}$ (defined in (6.128)). In analogy with
$\psi_{i\pm,q}$, we define
$\omega_{(i,j\pm)}(x,t):=\left(\omega_{(i,j-1)}^{2}(x,t)+\omega_{(i,j)}^{2}(x,t)+\omega_{(i,j+1)}^{2}(x,t)\right)^{\frac{1}{2}},$
(6.143)
which are cutoffs with the property that
$\omega_{(i,j\pm)}\equiv 1\textnormal{ on }\mathrm{supp\,}{(\omega_{(i,j)})}.$
(6.144)
We then define
$\eta_{(i\pm,j\pm,k\pm)}(x,t):=\psi_{i\pm,q}(x,t)\omega_{(i,j\pm)}(x,t)\widetilde{\chi}_{i,k,q}(t)\zeta_{q,i,k,n,\vec{l}}(x,t),$
(6.145)
which are cutoffs with the property that
$\eta_{(i\pm,j\pm,k\pm)}\equiv\zeta_{q,i,k,n,\vec{l}}\quad\textnormal{ on }\mathrm{supp\,}{\left(\psi_{(i)}\omega_{(i,j)}\chi_{(i,k)}\right)}.$ (6.146)
We conclude this section with estimates on the $L^{p}$ norms of the cumulative
cutoff function $\eta_{(i,j,k)}$.
###### Lemma 6.41.
For $r_{1},r_{2}\in[1,\infty]$ with $\frac{1}{r_{1}}+\frac{1}{r_{2}}=1$ we
have
$\sum_{\vec{l}}\left|\mathrm{supp\,}(\eta_{i,j,k,q,n,p,\vec{l}})\right|\lesssim\Gamma_{q+1}^{-2\left(\frac{i}{r_{1}}+\frac{j}{r_{2}}\right)+\frac{{\mathsf{C}_{b}}}{r_{1}}+2}$
(6.147)
###### Proof of Lemma 6.41.
Applying Lemma 6.23, Lemma 6.38, Hölder’s inequality, and interpolating yields
$\displaystyle\left|\mathrm{supp\,}(\psi_{i,q})\cap\mathrm{supp\,}(\omega_{i,j,q,n,p})\right|$
$\displaystyle\leq\left\|\psi_{i\pm,q}\omega_{(i,j\pm)}\right\|_{L^{1}}$
$\displaystyle\leq\left\|\psi_{i\pm,q}\right\|_{L^{r_{1}}}\left\|\omega_{(i,j\pm)}\right\|_{L^{r_{2}}}$
$\displaystyle\lesssim\Gamma_{q+1}^{-\frac{2(i-1)-{\mathsf{C}_{b}}}{r_{1}}-\frac{2(j-1)}{r_{2}}}.$
Using $\frac{1}{r_{1}}+\frac{1}{r_{2}}=1$ and (6.139), which gives that the
$\zeta_{q,i,k,n,\vec{l}}$ form a partition of unity, yields (6.147). ∎
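For the reader's convenience, the exponent arithmetic linking the last display to (6.147) is:

```latex
-\frac{2(i-1)-{\mathsf{C}_{b}}}{r_{1}}-\frac{2(j-1)}{r_{2}}
=-\frac{2i}{r_{1}}-\frac{2j}{r_{2}}
+\frac{{\mathsf{C}_{b}}}{r_{1}}+\frac{2}{r_{1}}+\frac{2}{r_{2}}
=-2\left(\frac{i}{r_{1}}+\frac{j}{r_{2}}\right)+\frac{{\mathsf{C}_{b}}}{r_{1}}+2,
```

where the last equality uses $\frac{1}{r_{1}}+\frac{1}{r_{2}}=1$; this is precisely the exponent claimed in (6.147).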
## 7 From $q$ to $q+1$: breaking down the main inductive estimates
The overarching goal of this section is to state several propositions which
decompose the verification of the main inductive assumptions (3.13) and (3.14)
for the perturbation $w_{q+1}$ and (3.15) for the stress $\mathring{R}_{q+1}$
into digestible components. We remind the reader, cf. Remark 6.1, that the
rest of the inductive estimates stated in Section 3.2.3 are proven in Section
6. We begin in Section 7.1 with Proposition 7.1, which simply translates the
main inductive assumptions into statements phrased at level $q+1$. At this
point, we then introduce in Section 7.2 a handful of notations which will be
necessary in order to state the propositions which form the constituent parts
of the proof of Proposition 7.1. The next three propositions (7.3, 7.4, and
7.5) are described and presented in Section 7.3. They are significantly more
detailed than Proposition 7.1, as they contain the precise estimates that will
be propagated throughout the construction and cancellation of the higher order
stresses $\mathring{R}_{q,{\widetilde{n}}}$. These three propositions will be
verified in Section 8.
### 7.1 Induction on $q$
The main claim of this section is an induction on $q$.
###### Proposition 7.1 (Inductive Step on $q$).
Given $v_{\ell_{q}}$, $\mathring{R}_{\ell_{q}}$, and
$\mathring{R}_{q}^{\textnormal{comm}}$ satisfying the Euler-Reynolds system
$\displaystyle\partial_{t}v_{\ell_{q}}+\mathrm{div\,}(v_{\ell_{q}}\otimes
v_{\ell_{q}})+\nabla p_{\ell_{q}}$
$\displaystyle=\mathrm{div\,}\mathring{R}_{\ell_{q}}+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}$
(7.1a) $\displaystyle\mathrm{div\,}v_{\ell_{q}}$ $\displaystyle=0$ (7.1b)
with $v_{\ell_{q}}$, $\mathring{R}_{\ell_{q}}$, and
$\mathring{R}_{q}^{\textnormal{comm}}$ satisfying the conclusions of Lemma 5.1
in addition to (3.12)-(3.25b), there exist $v_{q+1}=v_{\ell_{q}}+w_{q+1}$ and
$\mathring{R}_{q+1}$ which satisfy the following:
1. (1)
$v_{q+1}$ and $\mathring{R}_{q+1}$ solve the Euler-Reynolds system
$\displaystyle\partial_{t}v_{q+1}+\mathrm{div\,}(v_{q+1}\otimes v_{q+1})+\nabla p_{q+1}$ $\displaystyle=\mathrm{div\,}\mathring{R}_{q+1}$ (7.2a)
$\displaystyle\mathrm{div\,}v_{q+1}$ $\displaystyle=0.$ (7.2b)
2. (2)
For all $k,m\leq 7\mathsf{N}_{\textnormal{ind,v}}$,
$\left\|\psi_{i,q}D^{k}D_{t,q}^{m}w_{q+1}\right\|_{L^{2}}\leq\Gamma_{q+1}^{-1}\delta_{q+1}^{\frac{1}{2}}\lambda_{q+1}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-1},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right).$
(7.3)
Furthermore, we have that
$\displaystyle\mathrm{supp\,}_{t}(\mathring{R}_{q})\subset[T_{1},T_{2}]\quad\Rightarrow\quad\mathrm{supp\,}_{t}(w_{q+1})\subset\left[T_{1}-(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}})^{-1},T_{2}+(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}})^{-1}\right]\,.$
(7.4)
3. (3)
For all $k,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$,
$\left\|\psi_{i,q}D^{k}D_{t,q}^{m}\mathring{R}_{q+1}\right\|_{L^{1}}\leq\Gamma_{q+1}^{-\mathsf{C_{R}}}\delta_{q+2}\lambda_{q+1}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+1}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(7.5)
###### Remark 7.2.
In achieving the conclusions (7.2), (7.3), and (7.5), we have verified the
inductive assumptions (3.13)-(3.15) at level $q+1$. The inductive assumption (3.12) at levels $q^{\prime}<q+1$ follows from Lemma 5.1. The proof of
Proposition 7.1 will entail many estimates which are much more detailed than
(7.3) and (7.5), but for the time being we record only the basic estimates
which are direct translations of (3.13)-(3.15) at level $q+1$.
### 7.2 Notations
The proof of Proposition 7.1 will be achieved through an induction with
respect to $\widetilde{n}$, where $0\leq{\widetilde{n}}\leq{n_{\rm max}}$
corresponds to the addition of the perturbation $\displaystyle
w_{q+1,{\widetilde{n}}}=\sum\limits_{{\widetilde{p}}=1}^{{p_{\rm
max}}}w_{q+1,{\widetilde{n}},{\widetilde{p}}}$. The addition of each
perturbation $w_{q+1,{\widetilde{n}}}$ will move the minimum effective
frequency present in the stress terms to $\lambda_{q,{\widetilde{n}}+1,0}$.
This induction on ${\widetilde{n}}$ requires three sub-propositions: the base case ${\widetilde{n}}=0$; the inductive step from ${\widetilde{n}}-1$ to ${\widetilde{n}}$ for ${\widetilde{n}}\leq{n_{\rm max}}-1$; and the final step from ${n_{\rm max}}-1$ to ${n_{\rm max}}$. Throughout these propositions, we
shall employ the following notations.
1. (1)
$\boldsymbol{{\widetilde{n}}}$ - An integer taking values
$0\leq{\widetilde{n}}\leq{n_{\rm max}}$ over which induction is performed. At
every step in the induction, we add another component
$w_{q+1,{\widetilde{n}}}$ of the final perturbation
$w_{q+1}=\sum\limits_{{\widetilde{n}}=0}^{n_{\rm
max}}\sum\limits_{{\widetilde{p}}=1}^{p_{\rm
max}}w_{q+1,{\widetilde{n}},{\widetilde{p}}}.$
We emphasize that the use of ${\widetilde{n}}$ at various points in statements
and estimates means that we are _currently_ working on the inductive step at
level ${\widetilde{n}}$.
2. (2)
$\boldsymbol{n}$ - An integer taking values $1\leq n\leq{n_{\rm max}}$ which
correspond to the higher order stresses $\mathring{R}_{q,n}$. Occasionally, we
shall use the notation $\mathring{R}_{q,0}=\mathring{R}_{\ell_{q}}$ to
streamline an argument. We emphasize that $n$ will be used at various points
in statements and estimates to reference higher order objects in addition to
those at level ${\widetilde{n}}$, and so will satisfy the inequality
${\widetilde{n}}\leq n$.
3. (3)
$\boldsymbol{{\mathring{H}}_{q,n,p}^{n^{\prime}}}$ - The component of
$\mathring{R}_{q,n,p}$ originating from an error term produced by the addition
of $w_{q+1,n^{\prime}}$. The parameter $n^{\prime}$ will always be a
_subsidiary_ parameter used to reference objects created at or _below_ the
level ${\widetilde{n}}$ that we are currently working on, and so will satisfy
$n^{\prime}\leq{\widetilde{n}}$.
4. (4)
$\boldsymbol{\mathbb{P}_{[q,n,p]}}$ - We use the spatial Littlewood-Paley
projectors $\mathbb{P}_{[q,n,p]}$ defined by
$\displaystyle\mathbb{P}_{[q,n,p]}=\begin{cases}\mathbb{P}_{\geq\lambda_{q,{n_{\rm
max}},{p_{\rm max}}}}&\mbox{if }n={n_{\rm max}},p={p_{\rm max}}+1\\\
\mathbb{P}_{\left[\lambda_{q,n,p-1},\lambda_{q,n,p}\right)}&\mbox{if }1\leq
n\leq{n_{\rm max}},1\leq p\leq{p_{\rm max}}\end{cases}$ (7.6)
where $\mathbb{P}_{[\lambda_{1},\lambda_{2})}$ is defined in Section 9.4 as
$\mathbb{P}_{\geq\lambda_{1}}\mathbb{P}_{<\lambda_{2}}$. Note that for
$n={n_{\rm max}}$ and $p={p_{\rm max}}+1$, $\mathbb{P}_{[q,{n_{\rm
max}},{p_{\rm max}}+1]}$ projects onto _all_ frequencies larger than
$\lambda_{q,{n_{\rm max}},{p_{\rm max}}}=\lambda_{q,{n_{\rm max}}+1,0}$.
Errors which include the frequency projector $\mathbb{P}_{[q,{n_{\rm
max}},{p_{\rm max}}+1]}$ will be small enough to be absorbed into
$\mathring{R}_{q+1}$.
We shall frequently utilize sums of Littlewood-Paley projectors
$\mathbb{P}_{[q,n,p]}$ to decompose products of intermittent pipe flows
periodized to scale $\lambda_{q,{\widetilde{n}}}^{-1}$. These sums will be
written in terms of three parameters: $n$, $p$, and ${\widetilde{n}}$. As a
consequence of (7.6), (9.29), (9.23), and (9.22), we have that
$\lambda_{q,{\widetilde{n}}+1,0}\leq\lambda_{q,{\widetilde{n}}}$ for
$0\leq{\widetilde{n}}\leq{n_{\rm max}}$, so that
$\left(\sum_{n={\widetilde{n}}+1}^{{n_{\rm max}}}\sum_{p=1}^{{p_{\rm
max}}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}=\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}+1,0}}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}=\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}.$
(7.7)
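The identity (7.7) is a telescoping statement. Assuming, as suggested by (7.6) and the equality $\lambda_{q,{n_{\rm max}},{p_{\rm max}}}=\lambda_{q,{n_{\rm max}}+1,0}$ noted above, that consecutive frequency bands are contiguous (i.e. $\lambda_{q,n,{p_{\rm max}}}=\lambda_{q,n+1,0}$ for each $n$), the bands $[\lambda_{q,n,p-1},\lambda_{q,n,p})$ tile $[\lambda_{q,{\widetilde{n}}+1,0},\lambda_{q,{n_{\rm max}},{p_{\rm max}}})$, so that

```latex
\sum_{n={\widetilde{n}}+1}^{{n_{\rm max}}}\sum_{p=1}^{{p_{\rm max}}}
\mathbb{P}_{\left[\lambda_{q,n,p-1},\lambda_{q,n,p}\right)}
+\mathbb{P}_{\geq\lambda_{q,{n_{\rm max}},{p_{\rm max}}}}
=\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}+1,0}},
```

and composing with $\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}$ together with $\lambda_{q,{\widetilde{n}}+1,0}\leq\lambda_{q,{\widetilde{n}}}$ yields (7.7).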
A consequence of (7.7) is that for $\frac{\mathbb{T}^{3}}{\lambda_{q,{\widetilde{n}}}}$-periodic functions, where $0\leq{\widetilde{n}}\leq{n_{\rm max}}$,
$\displaystyle f$ $\displaystyle=\fint_{\mathbb{T}^{3}}f+\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}f$
$\displaystyle=\fint_{\mathbb{T}^{3}}f+\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\left(\sum_{n={\widetilde{n}}+1}^{{n_{\rm max}}}\sum_{p=1}^{{p_{\rm max}}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm max}}+1\right]}\right)f.$ (7.8)
We note that in the second equality in (7.8), such functions do not have active frequencies in between $\lambda_{q,{\widetilde{n}}+1,0}$ and $\lambda_{q,{\widetilde{n}}}$.
These equalities will be useful in the calculations in Section 8.3, and we
will recall their significance when we estimate the Type 1 errors in Section
8.6.
5. (5)
$\boldsymbol{\mathring{R}_{q+1}^{\widetilde{n}}}$ - Any stress term which
satisfies the estimates required of $\mathring{R}_{q+1}$ and which has already
been estimated at the ${\widetilde{n}}^{th}$ stage of the induction; that is,
error terms arising from the addition of $w_{q+1,n^{\prime}}$ for
$n^{\prime}\leq{\widetilde{n}}$. We _exclude_
$\mathring{R}_{q}^{\textnormal{comm}}$ from
$\mathring{R}_{q+1}^{\widetilde{n}}$, only absorbing it at the very end when
we define $\mathring{R}_{q+1}$. Thus
$\mathring{R}_{q+1}^{{\widetilde{n}}+1}=\mathring{R}_{q+1}^{{\widetilde{n}}}+\left(\textnormal{errors
coming from }w_{q+1,{\widetilde{n}}+1}\textnormal{ that also go into
}\mathring{R}_{q+1}\right)\,.$ (7.9)
### 7.3 Induction on $\widetilde{n}$
The first proposition asserts that there exists a perturbation $w_{q+1,0}$
which we add to $v_{\ell_{q}}$ so that $v_{q,0}:=v_{\ell_{q}}+w_{q+1,0}$
satisfies the following. First, $v_{q,0}$ solves the Euler-Reynolds system
with a right-hand side consisting of stresses $\mathring{R}_{q+1}^{0}$ and
$\mathring{H}_{q,n,p}^{0}$ which belong respectively to $\mathring{R}_{q+1}$
and $\mathring{R}_{q,n,p}$ for $1\leq n\leq{n_{\rm max}}$ and $1\leq
p\leq{p_{\rm max}}$. Secondly, $w_{q+1,0}$ satisfies estimates which in particular imply the inductive assumptions required of the velocity perturbation $w_{q+1}$ in (7.3) (this is checked in Remark 8.3). Thirdly,
$\mathring{R}_{q+1}^{0}$ satisfies the estimates required of
$\mathring{R}_{q+1}$ in the inductive assumption (6.118) (with an extra factor
of smallness). Finally, each $\mathring{H}_{q,n,p}^{0}$ satisfies the
inductive assumptions required of $\mathring{R}_{q,n,p}$ in (6.118).
###### Proposition 7.3 (Induction on ${\widetilde{n}}$: The Base Case
${\widetilde{n}}=0$).
Under the assumptions of Proposition 7.1 (equivalently the conclusions of
Lemma 5.1), there exist $\displaystyle
w_{q+1,0}=\sum\limits_{{\widetilde{p}}=1}^{p_{\rm
max}}w_{q+1,0,p}=w_{q+1,0,1}$, $\mathring{R}_{q+1}^{0}$, and
$\mathring{H}_{q,n,p}^{0}$ for $1\leq n\leq{n_{\rm max}}$ and $1\leq
p\leq{p_{\rm max}}$ such that the following hold.
1. (1)
$v_{q,0}:=v_{\ell_{q}}+w_{q+1,0}$ solves
$\displaystyle\partial_{t}v_{q,0}+\mathrm{div\,}(v_{q,0}\otimes
v_{q,0})+\nabla p_{q,0}$
$\displaystyle=\mathrm{div\,}\left(\mathring{R}_{q+1}^{0}\right)+\mathrm{div\,}\left(\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{{p_{\rm
max}}}\mathring{H}_{q,n,p}^{0}\right)+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}$
(7.10a) $\displaystyle\mathrm{div\,}v_{q,0}$ $\displaystyle=0.$ (7.10b)
2. (2)
For all $k+m\leq\mathsf{N}_{\textnormal{fin},0}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-2{\mathsf{N}_{\rm dec}}-9$ and
$1\leq{\widetilde{p}}\leq{p_{\rm max}}$ (although only $w_{q+1,0,1}$ is non-
zero)
$\displaystyle\left\|D^{k}D_{t,q}^{m}w_{q+1,0,{\widetilde{p}}}\right\|_{L^{2}\left(\mathrm{supp\,}\psi_{i,q}\right)}\lesssim\delta_{q+1,0,{\widetilde{p}}}^{\frac{1}{2}}\Gamma_{q+1}^{3+\frac{{\mathsf{C}_{b}}}{2}}\lambda_{q+1}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c_{0}}+4},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right).$
(7.11)
Furthermore, we have that
$\displaystyle\mathrm{supp\,}_{t}(\mathring{R}_{q})\subset[T_{1},T_{2}]\quad\Rightarrow\quad\mathrm{supp\,}_{t}(w_{q+1,0,{\widetilde{p}}})\subset\left[T_{1}-(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1})^{-1},T_{2}+(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1})^{-1}\right]\,.$
(7.12)
3. (3)
For all $k,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$,
$\left\|\psi_{i,q}D^{k}D_{t,q}^{m}\mathring{R}_{q+1}^{0}\right\|_{L^{1}}\lesssim\Gamma_{q+1}^{-\mathsf{C_{R}}}\Gamma_{q+1}^{-1}\delta_{q+2}\lambda_{q+1}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i+1},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right).$
(7.13)
Furthermore, we have that
$\mathrm{supp\,}_{t}\mathring{R}_{q+1}^{0}\subseteq\mathrm{supp\,}_{t}w_{q+1,0}\,.$
(7.14)
4. (4)
For all $k+m\leq\mathsf{N}_{\rm fin,n}$ and $1\leq n\leq{n_{\rm max}}$, $1\leq
p\leq{p_{\rm max}}$,
$\left\|D^{k}D_{t,q}^{m}\mathring{H}_{q,n,p}^{0}\right\|_{L^{1}\left(\mathrm{supp\,}\psi_{i,q}\right)}\lesssim\delta_{q+1,n,p}\lambda_{q,n,p}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right).$
(7.15)
Furthermore, we have that
$\mathrm{supp\,}_{t}\mathring{H}_{q,n,p}^{0}\subseteq\mathrm{supp\,}_{t}w_{q+1,0}\,.$
(7.16)
The second proposition assumes that perturbations $w_{q+1,n^{\prime}}$ have
been added for $n^{\prime}\leq{\widetilde{n}}-1$ while satisfying four
criteria. First,
$v_{q,{\widetilde{n}}-1}=v_{\ell_{q}}+\sum\limits_{n^{\prime}\leq{\widetilde{n}}-1}w_{q+1,n^{\prime}}$
solves an Euler-Reynolds system with stresses
$\mathring{R}_{q+1}^{{\widetilde{n}}-1}$ and
$\mathring{H}_{q,n,p}^{n^{\prime}}$. Secondly, the perturbations
$w_{q+1,n^{\prime}}$ satisfy the inductive assumptions required of $w_{q+1}$
in (7.3) for $n^{\prime}\leq{\widetilde{n}}-1$. Thirdly,
$\mathring{R}_{q+1}^{{\widetilde{n}}-1}$ satisfies the inductive assumption
(7.5) at level $q+1$. Finally, $\mathring{H}_{q,n,p}^{n^{\prime}}$ satisfies
the assumption (6.118) in the parameter regime ${\widetilde{n}}\leq
n\leq{n_{\rm max}}$, $n^{\prime}\leq{\widetilde{n}}-1$, $1\leq p\leq{p_{\rm
max}}$. The conclusion of the proposition replaces each ${\widetilde{n}}-1$ in
the assumptions with ${\widetilde{n}}$.
###### Proposition 7.4 (Induction on ${\widetilde{n}}$: From
${\widetilde{n}}-1$ to ${\widetilde{n}}$ for $1\leq{\widetilde{n}}\leq{n_{\rm
max}}-1$).
Let $1\leq{\widetilde{n}}\leq{n_{\rm max}}-1$ be given, and let
$v_{q,{\widetilde{n}}-1}:=v_{\ell_{q}}+\sum\limits_{n^{\prime}=0}^{{\widetilde{n}}-1}w_{q+1,n^{\prime}}=v_{\ell_{q}}+\sum\limits_{n^{\prime}=0}^{{\widetilde{n}}-1}\sum\limits_{p^{\prime}=1}^{p_{\rm
max}}w_{q+1,n^{\prime},p^{\prime}},$
$\mathring{R}_{q+1}^{{\widetilde{n}}-1}$, and
$\mathring{H}_{q,n,p}^{n^{\prime}}$ be given for
$n^{\prime}\leq{\widetilde{n}}-1$, ${\widetilde{n}}\leq n\leq{n_{\rm max}}$
and $1\leq p,p^{\prime}\leq{p_{\rm max}}$ such that the following are
satisfied.
1. (1)
${v_{q,{\widetilde{n}}-1}}$ solves:
$\displaystyle\partial_{t}v_{q,{\widetilde{n}}-1}$
$\displaystyle+\mathrm{div\,}(v_{q,{\widetilde{n}}-1}\otimes
v_{q,{\widetilde{n}}-1})+\nabla p_{q,{\widetilde{n}}-1}$
$\displaystyle=\mathrm{div\,}\left(\mathring{R}_{q+1}^{{\widetilde{n}}-1}\right)+\mathrm{div\,}\left(\displaystyle\sum\limits_{n={\widetilde{n}}}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\sum\limits_{n^{\prime}=0}^{{\widetilde{n}}-1}\mathring{H}_{q,n,p}^{n^{\prime}}\right)+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}$
(7.17a) $\displaystyle\mathrm{div\,}v_{q,{\widetilde{n}}-1}$
$\displaystyle=0\,.$ (7.17b)
2. (2)
For all
$k+m\leq\mathsf{N}_{\textnormal{fin},\textnormal{n}^{\prime}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-2{\mathsf{N}_{\rm dec}}-9$,
$n^{\prime}\leq{\widetilde{n}}-1$, and $1\leq p^{\prime}\leq{p_{\rm max}}$,
$\left\|D^{k}D_{t,q}^{m}w_{q+1,n^{\prime},p^{\prime}}\right\|_{L^{2}\left(\mathrm{supp\,}\psi_{i,q}\right)}\lesssim\delta_{q+1,n^{\prime},p^{\prime}}^{\frac{1}{2}}\Gamma_{q+1}^{3+\frac{{\mathsf{C}_{b}}}{2}}\lambda_{q+1}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}^{\prime}}+4},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right).$
(7.18)
Furthermore, we have that
$\displaystyle\mathrm{supp\,}_{t}(\mathring{R}_{q,n^{\prime},p^{\prime}})$
$\displaystyle\subset[T_{1,n^{\prime},p^{\prime}},T_{2,n^{\prime},p^{\prime}}]$
$\displaystyle\Rightarrow\mathrm{supp\,}_{t}(w_{q+1,n^{\prime},p^{\prime}})\subset\left[T_{1,n^{\prime},p^{\prime}}-(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1})^{-1},T_{2,n^{\prime},p^{\prime}}+(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1})^{-1}\right]\,.$
(7.19)
3. (3)
For all $k,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$,
$\left\|\psi_{i,q}D^{k}D_{t,q}^{m}\mathring{R}_{q+1}^{{\widetilde{n}}-1}\right\|_{L^{1}}\lesssim\Gamma_{q+1}^{-\mathsf{C_{R}}}\Gamma_{q+1}^{-1}\delta_{q+2}\lambda_{q+1}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+1}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right).$
(7.20)
Furthermore, we have that
$\mathrm{supp\,}_{t}\mathring{R}_{q+1}^{{\widetilde{n}}-1}\subseteq\bigcup_{n^{\prime}\leq{\widetilde{n}}-1}\mathrm{supp\,}_{t}w_{q+1,n^{\prime}}\,.$
(7.21)
4. (4)
For all $k+m\leq\mathsf{N}_{\rm fin,n}$, ${\widetilde{n}}\leq n\leq{n_{\rm
max}}$, $n^{\prime}\leq{\widetilde{n}}-1$, and $1\leq p\leq{p_{\rm max}}$,
$\left\|D^{k}D_{t,q}^{m}\mathring{H}_{q,n,p}^{n^{\prime}}\right\|_{L^{1}\left(\mathrm{supp\,}\psi_{i,q}\right)}\lesssim\delta_{q+1,n,p}\lambda_{q,n,p}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}}},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right).$
(7.22)
Furthermore, we have that
$\mathrm{supp\,}_{t}\mathring{H}_{q,n,p}^{n^{\prime}}\subseteq\mathrm{supp\,}_{t}w_{q+1,n^{\prime}}\,.$
(7.23)
Then there exists $w_{q+1,{\widetilde{n}}}$ such that (1)-(4) are satisfied
with ${\widetilde{n}}-1$ replaced with ${\widetilde{n}}$.
The final proposition considers the case $\widetilde{n}={n_{\rm max}}$ and
shows that, under assumptions analogous to those in Proposition 7.4, there
exists $w_{q+1,{n_{\rm max}}}$ such that all remaining errors after the
addition of $w_{q+1,{n_{\rm max}}}$ can be absorbed into $\mathring{R}_{q+1}$,
thus verifying the conclusions of Proposition 7.1.
###### Proposition 7.5 (Induction on ${\widetilde{n}}$: The Final Case
${\widetilde{n}}={n_{\rm max}}$).
Let
$v_{q,{n_{\rm max}}-1}:=v_{\ell_{q}}+\sum\limits_{n^{\prime}=0}^{{n_{\rm
max}}-1}w_{q+1,n^{\prime}}=v_{\ell_{q}}+\sum_{n^{\prime}=0}^{{n_{\rm
max}}-1}\sum_{p^{\prime}=1}^{p_{\rm max}}w_{q+1,n^{\prime},p^{\prime}},$
$\mathring{R}_{q+1}^{{n_{\rm max}}-1}$, and $\mathring{H}_{q,{n_{\rm
max}},p}^{n^{\prime}}$ be given for $n^{\prime}\leq{n_{\rm max}}-1$ and $1\leq
p,p^{\prime}\leq{p_{\rm max}}$ such that the following are satisfied.
1. (1)
${v_{q,{n_{\rm max}}-1}}$ solves:
$\displaystyle\partial_{t}v_{q,{n_{\rm max}}-1}$
$\displaystyle+\mathrm{div\,}(v_{q,{n_{\rm max}}-1}\otimes v_{q,{n_{\rm
max}}-1})+\nabla p_{q,{n_{\rm max}}-1}$
$\displaystyle=\mathrm{div\,}\left(\mathring{R}_{q+1}^{{n_{\rm
max}}-1}\right)+\mathrm{div\,}\left(\displaystyle\sum\limits_{n^{\prime}=0}^{{n_{\rm
max}}-1}\sum\limits_{p=1}^{p_{\rm max}}\mathring{H}^{n^{\prime}}_{q,{n_{\rm
max}},p}\right)+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}$ (7.24a)
$\displaystyle\mathrm{div\,}v_{q,{n_{\rm max}}-1}$ $\displaystyle=0\,.$
(7.24b)
2. (2)
For all
$k+m\leq\mathsf{N}_{\textnormal{fin},\textnormal{n}^{\prime}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-2{\mathsf{N}_{\rm dec}}-9$,
$n^{\prime}\leq{n_{\rm max}}-1$, and $1\leq p^{\prime}\leq{p_{\rm max}}$,
$\left\|D^{k}D_{t,q}^{m}w_{q+1,n^{\prime},p^{\prime}}\right\|_{L^{2}\left(\mathrm{supp\,}\psi_{i,q}\right)}\lesssim\delta_{q+1,n^{\prime},p^{\prime}}^{\frac{1}{2}}\Gamma_{q+1}^{3+\frac{{\mathsf{C}_{b}}}{2}}\lambda_{q+1}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}^{\prime}}+4},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right).$
(7.25)
Furthermore, we have that
$\displaystyle\mathrm{supp\,}_{t}(\mathring{R}_{q,n^{\prime},p^{\prime}})$
$\displaystyle\subset[T_{1,n^{\prime},p^{\prime}},T_{2,n^{\prime},p^{\prime}}]$
$\displaystyle\Rightarrow\quad\mathrm{supp\,}_{t}(w_{q+1,n^{\prime},p^{\prime}})\subset\left[T_{1,n^{\prime},p^{\prime}}-(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1})^{-1},T_{2,n^{\prime},p^{\prime}}+(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1})^{-1}\right]\,.$
(7.26)
3. (3)
For all $k,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$,
$\left\|\psi_{i,q}D^{k}D_{t,q}^{m}\mathring{R}_{q+1}^{{n_{\rm
max}}-1}\right\|_{L^{1}}\lesssim\Gamma_{q+1}^{-\mathsf{C_{R}}}\Gamma_{q+1}^{-1}\delta_{q+2}\lambda_{q+1}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+1}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right).$
(7.27)
Furthermore, we have that
$\mathrm{supp\,}_{t}\mathring{R}_{q+1}^{{n_{\rm
max}}-1}\subseteq\bigcup_{n^{\prime}\leq{n_{\rm
max}}-1}\mathrm{supp\,}_{t}w_{q+1,n^{\prime}}\,.$ (7.28)
4. (4)
For all
$k+m\leq\mathsf{N}_{\textnormal{fin},\textnormal{n}_{\textnormal{max}}}$,
$n^{\prime}\leq{n_{\rm max}}-1$, and $1\leq p\leq{p_{\rm max}}$
$\displaystyle\left\|D^{k}D_{t,q}^{m}\mathring{H}_{q,{n_{\rm max}},p}^{n^{\prime}}\right\|_{L^{1}\left(\mathrm{supp\,}\psi_{i,q}\right)}$
$\displaystyle\lesssim\delta_{q+1,{n_{\rm max}},p}\lambda_{q,{n_{\rm max}},p}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\textnormal{n}_{\textnormal{max}}}},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right).$
(7.29)
Furthermore, we have that
$\mathrm{supp\,}_{t}\mathring{H}_{q,n,p}^{n^{\prime}}\subseteq\mathrm{supp\,}_{t}w_{q+1,n^{\prime}}\,.$
(7.30)
Then there exist $w_{q+1,{n_{\rm max}}}$ and $\mathring{R}_{q+1}$ such that
$v_{q+1}:=v_{q,{n_{\rm max}}-1}+w_{q+1,{n_{\rm max}}}$ and
$\mathring{R}_{q+1}$ satisfy conclusions (7.2), (7.3), (7.4), and (7.5) from
Proposition 7.1.
## 8 Proving the main inductive estimates
Because the proofs of Propositions 7.3, 7.4, and 7.5 consist of multiple arguments with many similarities, we divide the proofs into sections corresponding to these arguments.313131This organization of the proof avoids having to alternate between the definitions of $w_{q+1,{\widetilde{n}},{\widetilde{p}}}$ and $\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}$ for all $1\leq{\widetilde{n}}\leq{n_{\rm max}}$ and $1\leq{\widetilde{p}}\leq{p_{\rm max}}$. We judge that it is wiser to define all the perturbations simultaneously under the assumptions of Propositions 7.3, 7.4, and 7.5; namely, we assume that each $\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}$ exists and satisfies the enumerated properties, some of which may not be verified until later. First, we define
$\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}$ and
$w_{q+1,{\widetilde{n}},{\widetilde{p}}}$ in Section 8.1 for each
$0\leq{\widetilde{n}}\leq{n_{\rm max}}$ and $1\leq{\widetilde{p}}\leq{p_{\rm
max}}$. Then, Section 8.2 collects estimates on
$w_{q+1,{\widetilde{n}},{\widetilde{p}}}$, thus verifying (7.11) and (7.12),
(7.18) and (7.19), and (7.25) and (7.26) at levels ${\widetilde{n}}=0$,
$1\leq{\widetilde{n}}\leq{n_{\rm max}}-1$, and ${\widetilde{n}}={n_{\rm
max}}$, respectively. Next, in Section 8.3 we separate out the different types
of error terms and write down the Euler-Reynolds system satisfied by
$v_{q,{\widetilde{n}}}$, which verifies (7.10), (7.17), and (7.24), again at
the respective values of ${\widetilde{n}}$.
The error estimates are then divided into five sections. We first estimate the
transport and Nash errors in Sections 8.4 and 8.5. Section 8.6 estimates the Type 1 oscillation errors (denoted by $\mathring{H}_{q,n,p}^{\widetilde{n}}$), which are obtained via Littlewood-Paley projectors $\mathbb{P}_{[q,n,p]}$. In the parameter regime $1\leq
n\leq{n_{\rm max}}$ and $1\leq p\leq{p_{\rm max}}$, Type 1 oscillation errors
will satisfy the estimates (7.15), (7.22), and (7.29) at respective parameter
values ${\widetilde{n}}=0$, $1\leq{\widetilde{n}}\leq{n_{\rm max}}-1$, and
${\widetilde{n}}={n_{\rm max}}$. Type 1 oscillation errors obtained from $\mathbb{P}_{[q,{n_{\rm max}},{p_{\rm max}}+1]}$ have a sufficiently high minimum frequency (from (7.6), specifically $\lambda_{q,{n_{\rm max}}+1,0}$, which by a large choice of ${n_{\rm max}}$ is very close to $\lambda_{q+1}$) to be absorbed into $\mathring{R}_{q+1}$. Then in Section 8.7, we use Proposition 4.8 to show that on the support of a checkerboard cutoff function, Type 2 oscillation errors vanish. The divergence corrector errors are estimated in Section 8.8. The divergence corrector, Nash, and transport errors will always be absorbed into $\mathring{R}_{q+1}$ and thus must again satisfy one of (7.13), (7.20), or (7.27). Finally, the conclusions (7.12), (7.14), (7.16), (7.19), (7.21), (7.23), (7.26), (7.28), and (7.30), all of which concern the time support, will be verified in Section 8.9.
### 8.1 Definition of $\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}$ and
$w_{q+1,{\widetilde{n}},{\widetilde{p}}}$
In this section we construct the perturbations $w_{q+1,{\widetilde{n}}}$.
Before doing so, we recall the significance of each parameter used to define
the perturbations.
(a)
$\xi$ is the vector direction of the axis of the pipe
(b)
$i$ quantifies the amplitude of the velocity field $v_{\ell_{q}}$ along which
the pipe will be flowed
(c)
$j$ quantifies the amplitude of the Reynolds stress
(d)
$k$ describes which time cut-off $\chi_{i,k,q}$ is active
(e)
$q+1$ is the stage of the overall convex integration scheme
(f)
${\widetilde{n}}$ and ${\widetilde{p}}$ signify which higher order stress
$\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}$ is being corrected, and
${\widetilde{n}}$ also denotes the intermittency parameter
$r_{q+1,{\widetilde{n}}}$
(g)
$\vec{l}=(l,w,h)$ is used to index the checkerboard cutoff functions. Recall
that the admissible values of $l$, $w$, and $h$ range from $0$ to
$\lambda_{q,{\widetilde{n}},0}-1$ and thus depend on ${\widetilde{n}}$.
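To orient the reader — this is only a schematic preview of the definitions (8.2) through (8.13) below, not a new object — every one of the indices above enters the construction through a single building block of the form

$\mathrm{curl\,}\left(a_{\xi,i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}}\,\nabla\Phi_{(i,k)}^{T}\,\mathbb{U}^{(i,j,k,{\widetilde{n}},{\widetilde{p}},\vec{l})}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\,,$

and the perturbation $w_{q+1,{\widetilde{n}}}$ is the sum of these blocks over all admissible values of the indices.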
#### 8.1.1 The case ${\widetilde{n}}=0$
To define $\displaystyle w_{q+1,0}=\sum_{{\widetilde{p}}=1}^{p_{\rm max}}w_{q+1,0,{\widetilde{p}}}=w_{q+1,0,1}$, we recall the notation
$\mathring{R}_{\ell_{q}}=\mathring{R}_{q,0}$ and set
$R_{q,0,1,j,i,k}=\nabla\Phi_{(i,k)}\left(\delta_{q+1,0,1}\Gamma^{2j+4}_{q+1}\mathrm{Id}-\mathring{R}_{q,0}\right)\nabla\Phi_{(i,k)}^{T}.$
(8.1)
For ${\widetilde{p}}\geq 2$, we set $R_{q,0,{\widetilde{p}},j,i,k}=0$. Fix values of $i$, $j$, and $k$. For each vector $\xi\in\Xi$ from Proposition 4.1, we define the coefficient function $a_{\xi,i,j,k,q,0,{\widetilde{p}},\vec{l}}$ by
$a_{\xi,i,j,k,q,0,{\widetilde{p}},\vec{l}}:=a_{\xi,i,j,k,q,0,{\widetilde{p}}}:=a_{(\xi)}=\delta_{q+1,0,{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\eta_{i,j,k,q,0,{\widetilde{p}},\vec{l}}\gamma_{\xi}\left(\frac{R_{q,0,{\widetilde{p}},j,i,k}}{\delta_{q+1,0,{\widetilde{p}}}\Gamma^{2j+4}_{q+1}}\right)\,.$
(8.2)
From Lemma 6.31, we see that on the support of $\eta_{(i,j,k)}$ we have
$|\mathring{R}_{q,0,{\widetilde{p}}}|\leq\Gamma_{q+1}^{2j+2}\delta_{q+1,0,{\widetilde{p}}}$,
and thus by estimate (6.108) from Corollary 6.27, for ${\widetilde{p}}=1$ we
have that
$\left|\frac{R_{q,0,{\widetilde{p}},j,i,k}}{\delta_{q+1,0,{\widetilde{p}}}\Gamma^{2j+4}_{q+1}}-\mathrm{Id}\right|\leq\Gamma_{q+1}^{-1}<\frac{1}{2}$
once $\lambda_{0}$ is sufficiently large. Thus we may apply Proposition 4.1.
The coefficient function $a_{(\xi)}$ is then multiplied by an intermittent
pipe flow
$\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,0}\circ\Phi_{(i,k)},$
where we have used the objects defined in Proposition 4.4 and the shorthand
notation
$\displaystyle\mathbb{W}_{\xi,q+1,0}=\mathbb{W}^{(i,j,k,0,\vec{l})}_{\xi,q+1,0}=\mathbb{W}_{\xi,q+1,0}^{s}=\mathbb{W}^{s}_{\xi,\lambda_{q+1},r_{q+1,0}}.$
(8.3)
The superscript $s=(i,j,k,0,\vec{l})$ indicates the placement of the intermittent pipe flow $\mathbb{W}^{(i,j,k,0,\vec{l})}_{\xi,q+1,0}$ (cf. (2) from Proposition 4.4), which depends on $i$, $j$, $k$, ${\widetilde{n}}=0$, and $\vec{l}$ and is only relevant in Section 8.7.323232Note that for
${\widetilde{p}}\geq 2$, $\delta_{q+1,0,{\widetilde{p}}}=0$, so there is no
need for the placement to depend on ${\widetilde{p}}$ in this case, as
$w_{q+1,0,{\widetilde{p}}}$ will uniformly vanish. To ease notation, we will
suppress the superscript except in Section 8.7. Furthermore, item 1 from
Proposition 4.4 gives that
$\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,0}\circ\Phi_{(i,k)}=\mathrm{curl\,}\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right).$
We can now write the principal part of the perturbation as
$w_{q+1,0}^{(p)}=\sum_{i,j,k,{\widetilde{p}}}\sum_{\vec{l}}\sum_{\xi}a_{(\xi)}\mathrm{curl\,}\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right):=\sum_{i,j,k,{\widetilde{p}}}\sum_{\vec{l}}\sum_{\xi}w_{(\xi)}.$
(8.4)
The notation $w_{(\xi)}$ implicitly encodes all indices and thus will be a
useful shorthand for the principal part of the perturbation. To make the
perturbation divergence free, we add
$w_{q+1,0}^{(c)}=\sum_{i,j,k,{\widetilde{p}}}\sum_{\vec{l}}\sum_{\xi}\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)=\sum_{i,j,k,{\widetilde{p}}}\sum_{\vec{l}}\sum_{\xi}w_{(\xi)}^{(c)}$
(8.5)
so that
$w_{q+1,0}=w_{q+1,0}^{(p)}+w_{q+1,0}^{(c)}=\sum_{i,j,k,{\widetilde{p}}}\sum_{\vec{l}}\sum_{\xi}\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)$
(8.6)
is divergence-free and mean-zero.
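The last identity follows from the standard vector calculus identities

$\mathrm{curl\,}(aG)=\nabla a\times G+a\,\mathrm{curl\,}G\,,\qquad\mathrm{div\,}\mathrm{curl\,}F=0\,,$

applied with $a=a_{(\xi)}$ and $G=\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}$: the first identity shows that (8.4) and (8.5) sum to the single curl in (8.6), and the second shows that the result is divergence-free. The mean-zero property holds because each component of a curl is a sum of derivatives of periodic functions and thus integrates to zero over the periodic spatial domain.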
#### 8.1.2 The case $1\leq{\widetilde{n}}\leq{n_{\rm max}}$
With $w_{q+1,0}$ in hand, we construct $\displaystyle w_{q+1,{\widetilde{n}}}=\sum_{{\widetilde{p}}=1}^{p_{\rm max}}w_{q+1,{\widetilde{n}},{\widetilde{p}}}$ for $1\leq{\widetilde{n}}\leq{n_{\rm max}}$. For $1\leq{\widetilde{p}}\leq{p_{\rm max}}$, we define
$\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}=\sum_{n^{\prime}\leq{\widetilde{n}}-1}\mathring{H}_{q,{\widetilde{n}},{\widetilde{p}}}^{n^{\prime}}.$
(8.7)
With this definition in hand, we set
$R_{q,{\widetilde{n}},{\widetilde{p}},j,i,k}=\nabla\Phi_{(i,k)}\left(\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma^{2j+4}_{q+1}\mathrm{Id}-\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}\right)\nabla\Phi_{(i,k)}^{T},$
(8.8)
and define the coefficient function
$a_{\xi,i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}}$ by
$a_{\xi,i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}}=a_{\xi,i,j,k,q,{\widetilde{n}},{\widetilde{p}}}=a_{(\xi)}=\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}}\gamma_{\xi}\left(\frac{R_{q,{\widetilde{n}},{\widetilde{p}},j,i,k}}{\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma^{2j+4}_{q+1}}\right)\,.$
(8.9)
By Lemma 6.31 and Corollary 6.27 as before,
$R_{q,{\widetilde{n}},{\widetilde{p}},j,i,k}/(\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma^{2j+4}_{q+1})$
lies in the domain of $\gamma_{\xi}$, as soon as $\lambda_{0}$ is sufficiently
large (similarly to the display below (8.2)). The coefficient function is
multiplied by an intermittent pipe flow
$\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}=\mathrm{curl\,}\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right),$
where we have used the shorthand notation
$\displaystyle\mathbb{W}_{\xi,q+1,{\widetilde{n}}}=\mathbb{W}^{i,j,k,{\widetilde{n}},{\widetilde{p}},\vec{l}}_{\xi,q+1,{\widetilde{n}}}=\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{s}=\mathbb{W}_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}^{s}\,.$
(8.10)
As before, the superscript $s=(i,j,k,{\widetilde{n}},{\widetilde{p}},\vec{l})$
refers to the placement of the pipe, depends on $i$, $j$, $k$,
${\widetilde{n}}$, ${\widetilde{p}}$, and $\vec{l}$, and will be chosen in
Section 8.7. Thus the principal part of the perturbation is defined by
$w_{q+1,{\widetilde{n}},{\widetilde{p}}}^{(p)}=\sum_{i,j,k}\sum_{\vec{l}}\sum_{\xi}a_{(\xi)}\mathrm{curl\,}\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)=\sum_{i,j,k}\sum_{\vec{l}}\sum_{\xi}w_{(\xi)}.$
(8.11)
As before, we add a corrector
$w_{q+1,{\widetilde{n}},{\widetilde{p}}}^{(c)}=\sum_{i,j,k}\sum_{\vec{l}}\sum_{\xi}\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)=\sum_{i,j,k}\sum_{\vec{l}}\sum_{\xi}w_{(\xi)}^{(c)},$
(8.12)
producing the divergence-free perturbation
$\displaystyle w_{q+1,{\widetilde{n}}}=\sum_{{\widetilde{p}}=1}^{p_{\rm
max}}w_{q+1,{\widetilde{n}},{\widetilde{p}}}$
$\displaystyle=\sum_{{\widetilde{p}}=1}^{p_{\rm
max}}\left(w_{q+1,{\widetilde{n}},{\widetilde{p}}}^{(p)}+w_{q+1,{\widetilde{n}},{\widetilde{p}}}^{(c)}\right)$
$\displaystyle=\sum_{i,j,k,{\widetilde{p}}}\sum_{\vec{l}}\sum_{\xi}\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\,.$
(8.13)
### 8.2 Estimates for $w_{q+1,{\widetilde{n}},{\widetilde{p}}}$
In this section, we verify (7.11), (7.18), and (7.25). We first estimate the
$L^{r}$ norms of the coefficient functions $a_{(\xi)}$. We have consolidated
the proofs for each value of ${\widetilde{n}}$ into the following lemma.
###### Lemma 8.1.
For $N+M\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-4$, $r\geq 1$, and $r_{1},r_{2}\in[1,\infty]$
with $\frac{1}{r_{1}}+\frac{1}{r_{2}}=1$, we have
$\displaystyle\left\|D^{N}D_{t,q}^{M}a_{\xi,i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}}\right\|_{L^{r}}$
$\displaystyle\qquad\lesssim\bigl{|}\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right).$
(8.14)
###### Proof of Lemma 8.1.
We begin by considering the case $r=\infty$. The general case $r\geq 1$ will
then follow from the size of the support of $a_{(\xi)}$. Recalling estimate
(6.125), we have that for all
$N+M\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-4$,
$\displaystyle\left\|D^{N}D_{t,q}^{M}\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}\right\|_{L^{\infty}(\mathrm{supp\,}\eta_{(i,j,k)})}$
$\displaystyle\lesssim\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma^{2j+2}_{q+1}\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+2},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right).$
From Corollary 6.27, we have that for all $N+M\leq\nicefrac{{3\mathsf{N}_{\rm
fin}}}{{2}}$,
$\displaystyle\left\|D^{N}D_{t,q}^{M}D\Phi_{(i,k)}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\chi_{i,k,q}))}$
$\displaystyle\leq\widetilde{\lambda}_{q}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right).$
Thus from the Leibniz rule and the definitions (8.8), (8.1), for
$N+M\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-4$,
$\displaystyle\left\|D^{N}D_{t,q}^{M}R_{q,{\widetilde{n}},{\widetilde{p}},j,i,k}\right\|_{L^{\infty}(\mathrm{supp\,}\eta_{(i,j,k)})}$
$\displaystyle\qquad\lesssim\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma^{2j+4}_{q+1}\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+2},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\,.$
(8.15)
The above estimates allow us to apply Lemma A.5 with $N=N^{\prime}$,
$M=M^{\prime}$ so that $N+M\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-4$,
$\psi=\gamma_{\xi}$ (which is allowable since by Proposition 4.1 we have that
$D^{B}\gamma_{\xi}$ is bounded uniformly with respect to $q$, and we have
checked in Section 8.1 that the argument of $\gamma_{\xi}$ remains strictly
within a ball of radius $\varepsilon$ of the identity), $\Gamma_{\psi}=1$,
$v=v_{\ell_{q}}$, $D_{t}=D_{t,q}$,
$h(x,t)=R_{q,{\widetilde{n}},{\widetilde{p}},j,i,k}(x,t)$,
$C_{h}=\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma_{q+1}^{2j+4}$,
$\lambda=\widetilde{\lambda}=\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\Gamma_{q+1}$,
$\mu=\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+2}$,
$\widetilde{\mu}=\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}$, and
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$. We obtain that for all
$N+M\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-4$,
$\displaystyle\left\|D^{N}D_{t,q}^{M}\gamma_{\xi}\left(\frac{R_{q,{\widetilde{n}},{\widetilde{p}},j,i,k}}{\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma^{2j+4}_{q+1}}\right)\right\|_{L^{\infty}(\mathrm{supp\,}\eta_{(i,j,k)})}$
$\displaystyle\lesssim\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+2},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\,.$
From the above bound, definitions (8.2) and (8.9), the Leibniz rule, estimates
(6.84), (6.97), (6.131), and Lemma 6.40, we obtain that for
$N+M\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-4$,333333The limit on the number of derivatives
comes from (6.131) and (8.15). The sharp cost of a material derivative comes
from (6.131).
$\displaystyle\left\|D^{N}D_{t,q}^{M}a_{(\xi)}\right\|_{L^{\infty}(\mathrm{supp\,}\eta_{(i,j,k)})}$
$\displaystyle\lesssim\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\sum_{\begin{subarray}{c}N^{\prime}+N^{\prime\prime}=N,\\\
M^{\prime}+M^{\prime\prime}=M\end{subarray}}\left\|D^{N^{\prime}}D_{t,q}^{M^{\prime}}\eta_{(i,j,k)}\right\|_{L^{\infty}}\left\|D^{N^{\prime\prime}}D_{t,q}^{M^{\prime\prime}}\gamma_{\xi}\left(\frac{R_{q,{\widetilde{n}},{\widetilde{p}},j,i,k}}{\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma^{2j+4}_{q+1}}\right)\right\|_{L^{\infty}(\mathrm{supp\,}\eta_{(i,j,k)})}$
$\displaystyle\lesssim\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\sum_{\begin{subarray}{c}N^{\prime}+N^{\prime\prime}=N,\\\
M^{\prime}+M^{\prime\prime}=M\end{subarray}}\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N^{\prime}}\mathcal{M}\left(M^{\prime},\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\times\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N^{\prime\prime}}\mathcal{M}\left(M^{\prime\prime},\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+2},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
$\displaystyle\lesssim\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\,.$
This concludes the proof of (8.14) when $r=\infty$. For general $r\geq 1$, we
just note that
$\mathrm{supp\,}(a_{(\xi)})\subseteq\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})$.
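Indeed, for any function $f$ supported in a set $S$ and any $r\geq 1$, Hölder's inequality gives

$\left\|f\right\|_{L^{r}}\leq|S|^{\frac{1}{r}}\left\|f\right\|_{L^{\infty}}\,,$

which converts the $L^{\infty}$ bound above into the stated $L^{r}$ bound with the factor $\bigl{|}\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}^{\frac{1}{r}}$.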
∎
An immediate consequence of Lemma 8.1 is that we have estimates for the
velocity increments themselves. These are summarized in the following
corollary.
###### Corollary 8.2.
For $N+M\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-2{\mathsf{N}_{\rm dec}}-8$ we have the estimate
$\displaystyle\left\|D^{N}D_{t,q}^{M}w_{(\xi)}\right\|_{L^{r}}$
$\displaystyle\lesssim\bigl{|}\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}{\left(r_{q+1,{\widetilde{n}}}\right)}^{\nicefrac{{2}}{{r}}-1}$
$\displaystyle\qquad\qquad\qquad\times\lambda_{q+1}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\,.$
(8.16)
For $N+M\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-2{\mathsf{N}_{\rm dec}}-9$ and
$(r,r_{1},r_{2})\in\left\\{(1,2,2),(2,\infty,1)\right\\}$, we have the
estimates
$\displaystyle\left\|D^{N}D_{t,q}^{M}w_{(\xi)}^{(c)}\right\|_{L^{r}}$
$\displaystyle\lesssim\frac{\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}}{\lambda_{q+1}}\bigl{|}\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}{\left(r_{q+1,{\widetilde{n}}}\right)}^{\nicefrac{{2}}{{r}}-1}$
$\displaystyle\qquad\qquad\qquad\times\lambda_{q+1}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
(8.17)
$\displaystyle\left\|D^{N}D_{t,q}^{M}w_{q+1,{\widetilde{n}},{\widetilde{p}}}\right\|_{L^{r}\left(\mathrm{supp\,}\psi_{i,q}\right)}$
$\displaystyle\lesssim\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{\frac{-2i+{\mathsf{C}_{b}}}{r_{1}r}+2+\frac{2}{r}}_{q+1}{\left(r_{q+1,{\widetilde{n}}}\right)}^{\nicefrac{{2}}{{r}}-1}$
$\displaystyle\qquad\qquad\qquad\times\lambda_{q+1}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+4},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\,.$
(8.18)
Finally, we have that
$\displaystyle\mathrm{supp\,}_{t}(\mathring{R}_{q})\subset[T_{1},T_{2}]\quad\Rightarrow\quad\mathrm{supp\,}_{t}(w_{q+1,{\widetilde{n}},{\widetilde{p}}})\subset\left[T_{1}-(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}})^{-1},T_{2}+(\lambda_{q}\delta_{q}^{\nicefrac{{1}}{{2}}})^{-1}\right]\,.$
(8.19)
###### Remark 8.3.
By choosing $r=2$, $r_{2}=1$, and $r_{1}=\infty$ in (8.18) and recalling that
(9.56) and (9.60b) give that
$\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\leq\Gamma_{q+1}^{-2}\delta_{q+1}^{\nicefrac{{1}}{{2}}},\qquad\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-2{\mathsf{N}_{\rm dec}}-9\geq
14\mathsf{N}_{\textnormal{ind,v}},$
we may sum over ${\widetilde{n}}$ and ${\widetilde{p}}$ in (8.18) and use the
extra negative factor of $\Gamma_{q+1}$ to absorb any implicit constants.
Finally, from (9.42), the cost of a sharp material derivative in (8.18) is sufficient to meet the bounds in (7.3). We have thus verified (7.11), (7.18), and (7.25) at levels ${\widetilde{n}}=0$, $1\leq{\widetilde{n}}<{n_{\rm max}}$, and ${\widetilde{n}}={n_{\rm max}}$, respectively, as well as (7.3).
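In more detail, since the number of admissible pairs $({\widetilde{n}},{\widetilde{p}})$ is at most $({n_{\rm max}}+1){p_{\rm max}}$, which is independent of $q$, the first inequality in the remark yields

$\sum_{{\widetilde{n}},{\widetilde{p}}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\lesssim({n_{\rm max}}+1){p_{\rm max}}\,\Gamma_{q+1}^{-2}\delta_{q+1}^{\nicefrac{{1}}{{2}}}\leq\Gamma_{q+1}^{-1}\delta_{q+1}^{\nicefrac{{1}}{{2}}}$

once $\lambda_{0}$ (and hence $\Gamma_{q+1}$) is sufficiently large; this is the absorption of implicit constants by the extra negative power of $\Gamma_{q+1}$ described above.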
###### Proof of Corollary 8.2.
Recalling the definition of $w_{(\xi)}$ from (8.4) and (8.13), we aim to prove
the first estimate by applying Remark A.9, with
$f=a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}$,
$\mathcal{C}_{f}=\bigl{|}\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}$,
$\Phi=\Phi_{(i,k)}$, $v=v_{\ell_{q}}$,
$\lambda=\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}$,
$\zeta=\widetilde{\zeta}=\lambda_{q+1}$,
$\mathcal{C}_{\varphi}=r_{q+1,{\widetilde{n}}}^{\nicefrac{{2}}{{r}}-1}$,
$\mu=\lambda_{q,{\widetilde{n}}}=\lambda_{q+1}r_{q+1,{\widetilde{n}}}$,
$\nu=\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3}$,
$\widetilde{\nu}=\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}$,
$g=\mathbb{W}_{\xi,q+1,{\widetilde{n}}}$,
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$, and
$N_{\circ}=\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-4$. From (8.14) and Corollary 6.27, we have that
for $N+M\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-4$,
$\displaystyle\left\|D^{N}D_{t,q}^{M}a_{(\xi)}\right\|_{L^{r}}\lesssim\bigl{|}\mathrm{supp\,}(\eta_{(i,j,k)})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
(8.20)
$\displaystyle\left\|D^{N}D_{t,q}^{M}(D\Phi_{(i,k)})^{-1}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\widetilde{\chi}_{i,k,q}))}\leq\widetilde{\lambda}_{q}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right),$
(8.21)
$\displaystyle{\left\|D^{N}\Phi_{(i,k)}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\widetilde{\chi}_{i,k,q}))}}\lesssim\Gamma_{q+1}^{-1}\widetilde{\lambda}_{q}^{N-1}$
(8.22)
$\displaystyle\left\|D^{N}\Phi^{-1}_{(i,k)}\right\|_{L^{\infty}(\mathrm{supp\,}(\psi_{i,q}\widetilde{\chi}_{i,k,q}))}\lesssim\Gamma_{q+1}^{-1}\widetilde{\lambda}_{q}^{N-1}$
(8.23)
showing that (A.30), (A.31), and (A.32) are satisfied. Recall that
$\mathbb{W}_{\xi,q+1,{\widetilde{n}}}$ is periodic to scale
$\displaystyle{\lambda_{q,{\widetilde{n}}}^{-1}=\left(\lambda_{q+1}r_{q+1,{\widetilde{n}}}\right)^{-1}=\left(\lambda_{q}^{\left(\frac{4}{5}\right)^{{\widetilde{n}}+1}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{{\widetilde{n}}+1}}\right)^{-1}}.$
By (9.48) and (9.60a), we have that for all $q$, ${\widetilde{n}}$, and
${\widetilde{p}}$,
$\displaystyle\lambda_{q+1}^{4}\leq\left(\frac{\lambda_{q,{\widetilde{n}}}}{2\pi\sqrt{3}\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}}\right)^{{\mathsf{N}_{\rm
dec}}},\qquad 2{\mathsf{N}_{\rm
dec}}+4\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5$ (8.24)
and so the assumptions (A.34) and (A.35) from Lemma A.5 are satisfied. From
the estimates in Proposition 4.4, we have that (A.33) is satisfied with
$\zeta=\widetilde{\zeta}=\lambda_{q+1}$. We may thus apply Lemma A.7 and Remark A.9 to obtain that for both choices of $(r,r_{1},r_{2})$ and
$N+M\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-2{\mathsf{N}_{\rm dec}}-8$,
$\displaystyle\left\|D^{N}\left(D_{t,q}^{M}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\right)\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\right\|_{L^{r}}$
$\displaystyle\lesssim\sum_{m=0}^{N}\bigl{|}\mathrm{supp\,}(\eta_{(i,j,k)})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N-m}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\left\|D^{m}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\right\|_{L^{r}}$
$\displaystyle\lesssim\sum_{m=0}^{N}\bigl{|}\mathrm{supp\,}(\eta_{(i,j,k)})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N-m}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\lambda_{q+1}^{m}\left(r_{q+1,{\widetilde{n}}}\right)^{\nicefrac{{2}}{{r}}-1}$
$\displaystyle\lesssim\bigl{|}\mathrm{supp\,}(\eta_{(i,j,k)})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\lambda_{q+1}^{N}\left(r_{q+1,{\widetilde{n}}}\right)^{\nicefrac{{2}}{{r}}-1}\,.$
Here we have used that ${\lambda_{q+1}\geq\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}}$ for all $0\leq{\widetilde{n}}\leq{n_{\rm max}}$ and $1\leq{\widetilde{p}}\leq{p_{\rm max}}$, and thus we have proven (8.16).
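To spell out the final inequality: the bound $\lambda_{q+1}\geq\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}$ controls each summand, since

$\sum_{m=0}^{N}\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N-m}\lambda_{q+1}^{m}\leq(N+1)\lambda_{q+1}^{N}\lesssim\lambda_{q+1}^{N}\,,$

where the implicit constant depends only on $N$, which is bounded in terms of the fixed parameters $\mathsf{N}_{\rm{fin,}{\widetilde{n}}}$, $\mathsf{N}_{\rm cut,t}$, $\mathsf{N}_{\rm cut,x}$, and ${\mathsf{N}_{\rm dec}}$.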
The argument for the corrector is similar, save for the fact that $D_{t,q}$
will land on $\nabla a_{(\xi)}$, and so we require an extra commutator
estimate from Lemma A.14, specifically Remark A.15. Note that
$D_{t,q}\Phi_{(i,k)}=0$ gives
$\displaystyle D_{t,q}^{M}w_{(\xi)}^{(c)}$
$\displaystyle=D_{t,q}^{M}\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\right)$
$\displaystyle=\sum_{M^{\prime}+M^{\prime\prime}=M}c(M^{\prime},M)\left(D_{t,q}^{M^{\prime}}\nabla
a_{(\xi)}\right)\times\left(\left(D_{t,q}^{M^{\prime\prime}}\nabla\Phi_{(i,k)}^{T}\right)\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right).$
Using (6.60) and (8.20) shows that (A.50) and (A.51) are satisfied with
$f=\nabla a_{(\xi)}$,
$\mathcal{C}_{f}=\bigl{|}\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}},$
$\mathcal{C}_{v}=\delta_{q}^{\frac{1}{2}}\Gamma_{q+1}^{i+1}$,
$\lambda_{v}=\widetilde{\lambda}_{v}=\widetilde{\lambda}_{q}$,
$\mu_{v}=\Gamma_{q+1}^{i-\mathsf{c_{0}}}\tau_{q}^{-1}$,
$N_{t}=\mathsf{N}_{\textnormal{ind,t}}$,
$\widetilde{\mu}_{v}=\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}$,
$\lambda_{f}=\widetilde{\lambda}_{f}=\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}$,
$\mu_{f}=\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3}$,
and $\widetilde{\mu}_{f}=\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}$. Applying
Lemma A.14 (estimate (A.54)) as before, we obtain that for
$N+M\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5$,
$\displaystyle\left\|D^{N}D_{t,q}^{M}\nabla
a_{(\xi)}\right\|_{L^{r}}\lesssim\bigl{|}\mathrm{supp\,}(\eta_{(i,j,k)})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}})^{N+1}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\,.$
(8.25)
In view of (8.21) and (8.24), we may apply Lemma A.7, Remark A.9, and the
estimates from Proposition 4.4 to obtain that for
$N+M\leq\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-2{\mathsf{N}_{\rm dec}}-9$
$\displaystyle\left\|D^{N}D_{t,q}^{M}\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\right)\right\|_{L^{r}}$
$\displaystyle\
\lesssim\sum_{m=0}^{N}\bigl{|}\mathrm{supp\,}(\eta_{(i,j,k)})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}^{N-m}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\left\|D^{m}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\right\|_{L^{r}}$
$\displaystyle\lesssim\sum_{m=0}^{N}\lambda_{q+1}^{m-1}\bigl{|}\mathrm{supp\,}(\eta_{(i,j,k)})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}^{N-m}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\left(r_{q+1,{\widetilde{n}}}\right)^{\frac{2}{r}-1}$
$\displaystyle\lesssim\frac{\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}}{\lambda_{q+1}}\lambda_{q+1}^{N}\bigl{|}\mathrm{supp\,}(\eta_{(i,j,k)})\bigr{|}^{\frac{1}{r}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\Gamma^{j+2}_{q+1}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\left(r_{q+1,{\widetilde{n}}}\right)^{\nicefrac{{2}}{{r}}-1}\,,$
(8.26)
proving (8.17).
The final estimate (8.18) follows from the first two after recalling that $\psi_{i,q}$ may overlap with $\psi_{i+1,q}$, so that on the support of $\psi_{i,q}$ we must appeal to (8.14) at level $i+1$. We then sum over $\vec{l}$ and appeal to the bound (6.147). Next, we sum over $j$, an index which we recall from Lemma 6.35 is bounded independently of $q$; the powers of $\Gamma_{q+1}^{j}$ cancel out since $rr_{2}=1$. We then sum over ${\widetilde{p}}$, which is likewise bounded independently of $q$. Finally, although the parameter $k$ is not bounded independently of $q$, it corresponds to a partition of unity, so the number of cutoff functions which may overlap at any fixed point _is_ finite and bounded independently of $q$. ∎
### 8.3 Identification of error terms
In this section, we identify the error terms arising from the addition of
$\displaystyle w_{q+1,{\widetilde{n}}}=\sum_{{\widetilde{p}}=1}^{p_{\rm
max}}w_{q+1,{\widetilde{n}},{\widetilde{p}}}$. After doing so, we can write
down the Euler-Reynolds system satisfied by $v_{q,{\widetilde{n}}}$, in turn
verifying at level ${\widetilde{n}}$ the conclusions (7.10), (7.17), and
(7.24) of Propositions 7.3, 7.4, and 7.5, respectively.
#### 8.3.1 The case ${\widetilde{n}}=0$
By the inductive assumption of Proposition 7.3, we have that
$\mathrm{div\,}v_{\ell_{q}}=0$, and
$\partial_{t}{v_{\ell_{q}}}+\mathrm{div\,}(v_{\ell_{q}}\otimes
v_{\ell_{q}})+\nabla
p_{\ell_{q}}=\mathrm{div\,}\mathring{R}_{\ell_{q}}+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}.$
Adding $w_{q+1,0}$ as defined in (8.6), we obtain that
$v_{q,0}:=v_{\ell_{q}}+w_{q+1,0}$ solves
$\displaystyle\partial_{t}v_{q,0}+\mathrm{div\,}(v_{q,0}\otimes
v_{q,0})+\nabla p_{\ell_{q}}$
$\displaystyle=(\partial_{t}+v_{\ell_{q}}\cdot\nabla)w_{q+1,0}+w_{q+1,0}\cdot\nabla
v_{\ell_{q}}+\mathrm{div\,}\left(w_{q+1,0}\otimes
w_{q+1,0}\right)+\mathrm{div\,}\mathring{R}_{\ell_{q}}+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}$
$\displaystyle:=\mathcal{T}_{0}+\mathcal{N}_{0}+\mathcal{O}_{0}+\mathrm{div\,}\mathring{R}_{\ell_{q}}+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}.$
(8.27)
For a fixed ${\widetilde{n}}$, throughout this section we will consider sums
over indices
$\displaystyle(\xi,i,j,k,\widetilde{p},\vec{l})$
where the direction vector $\xi$ takes on one of the finitely many values in
Proposition 4.4, $0\leq i\leq{i_{\rm max}}(q)$ indexes the velocity cutoffs
(there are finitely many such values, cf. (6.50)), $0\leq j\leq{j_{\rm
max}}(q,{\widetilde{n}},{\widetilde{p}})$ indexes the stress cutoffs (there
are finitely many such values, cf. (6.129)), the parameter $k$ indexes the
time cutoffs defined in (6.96) (the number of values of $k$ is $q$-dependent,
but this is irrelevant because they form a partition of unity cf. (6.94)), the
parameter $1\leq\widetilde{p}\leq{p_{\rm max}}$ indexes which component of
$\mathring{R}_{q+1,{\widetilde{n}},{\widetilde{p}}}$ we are working with
(there are finitely many such values, cf. (9.3)), and lastly, $\vec{l}$
indexes the checkerboard cutoffs from Definition 6.39 (again, the number of
such indices is $q$-dependent, but this is acceptable because they form a
partition of unity, cf. (6.139)). For brevity of notation, we denote sums over
such indices as
$\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\,.$
Moreover, we shall denote as
$\sum_{\neq\\{\xi,i,j,k,{\widetilde{p}},\vec{l}\\}}$ (8.28)
the double summation over indices $(\xi,i,j,k,\widetilde{p},\vec{l})$ and
$({\xi^{*}},{i^{*}},{j^{*}},{k^{*}},{p^{*}},\vec{l}^{*})$ which belong to the
set
$\left\\{\left((\xi,i,j,k,{\widetilde{p}},\vec{l}),({\xi^{*}},{i^{*}},{j^{*}},{k^{*}},{p^{*}},\vec{l}^{*})\right):\xi\neq{\xi^{*}}\lor
i\neq{i^{*}}\lor j\neq{j^{*}}\lor
k\neq{k^{*}}\lor{\widetilde{p}}\neq{p^{*}}\lor\vec{l}\neq\vec{l}^{*}\right\\},$
(8.29)
although we remind the reader that at the current stage ${\widetilde{n}}=0$,
the sum over ${\widetilde{p}}$ is superfluous since $w_{q+1,0}=w_{q+1,0,1}$.
For the sake of consistency between $w_{q+1,0}$ and $w_{q+1,{\widetilde{n}}}$
for $1\leq{\widetilde{n}}\leq{n_{\rm max}}$, we shall include the index
${\widetilde{p}}$ throughout this section. Expanding out the oscillation error
$\mathcal{O}_{0}$, we have that
$\displaystyle\mathcal{O}_{0}$
$\displaystyle=\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathrm{div\,}\left(\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)\otimes\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)\right)$
$\displaystyle+\sum_{\neq\\{\xi,i,j,k,{\widetilde{p}},\vec{l}\\}}\mathrm{div\,}\left(\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)\otimes\mathrm{curl\,}\left(a_{({\xi^{*}})}\nabla\Phi_{({i^{*}},{k^{*}})}^{T}\mathbb{U}_{{\xi^{*}},q+1,0}\circ\Phi_{({i^{*}},{k^{*}})}\right)\right)$
$\displaystyle:=\mathrm{div\,}\mathcal{O}_{0,1}+\mathrm{div\,}\mathcal{O}_{0,2}.$
(8.30)
In Section 8.7, we will show that $\mathcal{O}_{0,2}$ is a Type 2 oscillation
error so that
$\mathcal{O}_{0,2}=0.$
Recalling identity (4.14) and the notation (9.65), we further split
$\mathcal{O}_{0,1}$ as
$\displaystyle\mathrm{div\,}\mathcal{O}_{0,1}=\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathrm{div\,}\left(\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)\otimes\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)\right)$
$\displaystyle\qquad+2\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathrm{div\,}\left(\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)\otimes_{\mathrm{s}}\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)\right)\right)$
$\displaystyle\qquad+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathrm{div\,}\left(\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)\right)\otimes\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)\right)\right)$
$\displaystyle:=\mathrm{div\,}\left(\mathcal{O}_{0,1,1}+\mathcal{O}_{0,1,2}+\mathcal{O}_{0,1,3}\right).$
(8.31)
Aside from $\mathcal{O}_{0,1,1}$, each of these terms is a divergence
corrector error and will therefore be estimated in Section 8.8.
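To make the splitting above transparent, we record the elementary algebra behind it: writing, as a temporary shorthand (by identity (4.14) and the notation (9.65)),
$A_{(\xi)}:=a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,0}\circ\Phi_{(i,k)}\,,\qquad B_{(\xi)}:=\nabla a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)\,,$
so that $\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)=A_{(\xi)}+B_{(\xi)}$, the splitting (8.31) is nothing but the expansion
$\left(A_{(\xi)}+B_{(\xi)}\right)\otimes\left(A_{(\xi)}+B_{(\xi)}\right)=A_{(\xi)}\otimes A_{(\xi)}+2\,A_{(\xi)}\otimes_{\mathrm{s}}B_{(\xi)}+B_{(\xi)}\otimes B_{(\xi)}\,,$
with $\mathcal{O}_{0,1,1}$, $\mathcal{O}_{0,1,2}$, and $\mathcal{O}_{0,1,3}$ collecting the three terms in order.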
Recall by Propositions 4.3, 4.4, and by (8.3) that $\mathbb{W}_{\xi,q+1,0}$ is
periodized to scale
$\left(\lambda_{q+1}r_{q+1,0}\right)^{-1}=\lambda_{q,0}^{-1}$. Using the
definition of $\mathbb{P}_{[q,n,p]}$ and (7.8), we have that\footnote{The case
${\widetilde{n}}=0$ is exceptional in the sense that the minimum frequency of
$\mathbb{P}_{\geq\lambda_{q,0}}$ and the minimum frequency of
$\mathbb{P}_{[q,1,0]}$ are in fact both equal to
$\lambda_{q,0}=\lambda_{q,1,0}=\lambda_{q}^{\frac{4}{5}}\lambda_{q+1}^{\frac{1}{5}}$
from (9.27) and (9.22). For the sake of consistency with the
${\widetilde{n}}\geq 1$ cases, we will include the superfluous
$\mathbb{P}_{\geq\lambda_{q,0}}$ in the calculations in this section.}
$\displaystyle\mathbb{W}_{\xi,q+1,0}\otimes\mathbb{W}_{\xi,q+1,0}$
$\displaystyle=\fint_{\mathbb{T}^{3}}{\mathbb{W}_{\xi,q+1,0}\otimes\mathbb{W}_{\xi,q+1,0}}$
$\displaystyle+\mathbb{P}_{\geq\lambda_{q,0}}\left(\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)\left(\mathbb{W}_{\xi,q+1,0}\otimes\mathbb{W}_{\xi,q+1,0}\right)\,.$
Combining this observation with identity (4.15) from Proposition 4.4, and with
the definition of the $a_{(\xi)}$ in (8.2), we further split
$\mathcal{O}_{0,1,1}$ as
$\displaystyle\mathrm{div\,}\left(\mathcal{O}_{0,1,1}\right)=\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathrm{div\,}\left(a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\left(\fint_{\mathbb{T}^{3}}\mathbb{W}_{\xi,q+1,0}\otimes\mathbb{W}_{\xi,q+1,0}(\Phi_{(i,k)})\right)\nabla\Phi_{(i,k)}^{-T}\right)$
$\displaystyle+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathrm{div\,}\bigg{(}a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}$
$\displaystyle\qquad\qquad\times\mathbb{P}_{\geq\lambda_{q,0}}\left(\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)(\mathbb{W}\otimes\mathbb{W})_{\xi,q+1,0}(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}\bigg{)}$
$\displaystyle=\mathrm{div\,}\sum_{i,j,k,{\widetilde{p}},\vec{l}}\sum_{\xi}\delta_{q+1,0,{\widetilde{p}}}\Gamma_{q+1}^{2j+4}\eta_{(i,j,k)}^{2}\gamma_{\xi}^{2}\left(\frac{R_{q,0,{\widetilde{p}},j,i,k}}{\delta_{q+1,0,{\widetilde{p}}}\Gamma_{q+1}^{2j+4}}\right)\nabla\Phi_{(i,k)}^{-1}\left(\xi\otimes\xi\right)\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}$
$\displaystyle\qquad\times\mathbb{P}_{\geq\lambda_{q,0}}\left(\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)(\mathbb{W}\otimes\mathbb{W})_{\xi,q+1,0}(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}$
$\displaystyle\qquad\times\mathbb{P}_{\geq\lambda_{q,0}}\left(\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)(\mathbb{W}^{\theta}\mathbb{W}^{\gamma})_{\xi,q+1,0}(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\,.$
(8.32)
By Proposition 4.1, equation (4.1), and the definition (8.1), we may rewrite
the first term on the right side of the above display as
$\displaystyle\mathrm{div\,}\sum_{i,j,k,{\widetilde{p}},\vec{l}}\sum_{\xi}\delta_{q+1,0,{\widetilde{p}}}\Gamma_{q+1}^{2j+4}\eta_{(i,j,k)}^{2}\gamma_{\xi}^{2}\left(\frac{R_{q,0,{\widetilde{p}},j,i,k}}{\delta_{q+1,0,{\widetilde{p}}}\Gamma_{q+1}^{2j+4}}\right)\nabla\Phi_{(i,k)}^{-1}\left(\xi\otimes\xi\right)\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\qquad=\mathrm{div\,}\sum_{i,j,k,\vec{l}}\eta_{(i,j,k)}^{2}\left(\delta_{q+1,0,1}\Gamma_{q+1}^{2j+4}\mathrm{Id}-\mathring{R}_{\ell_{q}}\right)$
$\displaystyle\qquad=-\mathrm{div\,}\sum_{i,j,k,\vec{l}}\eta^{2}_{(i,j,k)}\mathring{R}_{\ell_{q}}+\nabla\left(\sum_{i,j,k,\vec{l}}\eta_{(i,j,k)}^{2}\delta_{q+1,0,1}\Gamma_{q+1}^{2j+4}\right)$
$\displaystyle\qquad:=-\mathrm{div\,}\left(\mathring{R}_{\ell_{q}}\right)+\nabla\pi$
(8.33)
In the last equality of the above display we have used the fact that by
(6.142) we have
$\mathring{R}_{\ell_{q}}=\sum_{i,j,k,\vec{l}}\eta_{(i,j,k)}^{2}\mathring{R}_{\ell_{q}}\,.$
(8.34)
We apply Proposition A.18 to the remaining two terms from (8.32) to define for
$1\leq n\leq{n_{\rm max}}$ and $1\leq p\leq{p_{\rm max}}$\footnote{Recall that
$\mathcal{H}$ is the local portion of the inverse divergence operator. The
pressure and the nonlocal portion will be accounted for shortly. We will check
in Section 8.6 that these errors are of the form required by the inverse
divergence operator, as well as check the associated estimates.}
$\displaystyle\mathring{H}_{q,n,p}^{0}$
$\displaystyle:=\mathcal{H}\bigg{(}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,0}}\mathbb{P}_{[q,n,p]}(\mathbb{W}_{\xi,q+1,0}\otimes\mathbb{W}_{\xi,q+1,0})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\qquad+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{\geq\lambda_{q,0}}\mathbb{P}_{[q,n,p]}(\mathbb{W}_{\xi,q+1,0}^{\theta}\mathbb{W}_{\xi,q+1,0}^{\gamma})(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\bigg{)}.$
(8.35)
The last terms from (8.32) with $\mathbb{P}_{[q,{n_{\rm max}},{p_{\rm
max}}+1]}$ will be absorbed into $\mathring{R}_{q+1}$, whereas the terms in
(8.35) correspond to the error terms in (7.15).
Before amalgamating the preceding calculations, we pause to calculate the
means of various terms to which the inverse divergence operator from
Proposition A.18 will be applied. Examining the equality
$\partial_{t}v_{q,0}+\mathrm{div\,}\left(v_{q,0}\otimes v_{q,0}\right)+\nabla
p_{\ell_{q}}=\mathcal{T}_{0}+\mathcal{N}_{0}+\mathcal{O}_{0}+\mathrm{div\,}\mathring{R}_{\ell_{q}}+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}$
(8.36)
and recalling the definitions of $\mathcal{T}_{0}$, $\mathcal{N}_{0}$, and
$\mathcal{O}_{0}$, we see immediately that every term can be written as the
divergence of a tensor except for $\partial_{t}v_{q,0}$ and $\mathcal{T}_{0}$.
Note however that $v_{q,0}=v_{\ell_{q}}+w_{q+1,0}$, that
$\int_{\mathbb{T}^{3}}\partial_{t}v_{\ell_{q}}=0$ (by integrating in space
(5.2)), and that $w_{q+1,0}$ is the curl of a vector field (cf. (8.13)). This
shows that $\int_{\mathbb{T}^{3}}\partial_{t}v_{q,0}=0$, and thus
$\int_{\mathbb{T}^{3}}\mathcal{T}_{0}=0$ as well. Therefore, we may use (A.72)
and (A.78) to write
$\mathcal{T}_{0}=\mathrm{div\,}\left(\left(\mathcal{H}+\mathcal{R}^{*}\right)\mathcal{T}_{0}\right)+\nabla
P.$
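For orientation, we note that the decomposition just applied to $\mathcal{T}_{0}$ is, schematically, the general fact (via (A.72) and (A.78)) that any smooth vector field $f$ on $\mathbb{T}^{3}$ with $\int_{\mathbb{T}^{3}}f=0$ admits a decomposition of the form
$f=\mathrm{div\,}\left(\left(\mathcal{H}+\mathcal{R}^{*}\right)f\right)+\nabla P\,,$
which is precisely why the verification that $\int_{\mathbb{T}^{3}}\mathcal{T}_{0}=0$ was needed before invoking the inverse divergence operator.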
We can now combine the calculations of (8.27), (8.30), (8.31), (8.32), (8.33),
(8.34), and (8.35), letting the notation $\nabla\pi$ change from line to line
to incorporate all the pressure terms, to write that
$\displaystyle\partial_{t}v_{q,0}+\mathrm{div\,}\left(v_{q,0}\otimes
v_{q,0}\right)+\nabla p_{\ell_{q}}$
$\displaystyle=\mathcal{T}_{0}+\mathcal{N}_{0}+\mathcal{O}_{0}+\mathrm{div\,}\mathring{R}_{\ell_{q}}+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}$
$\displaystyle=\mathcal{T}_{0}+\mathcal{N}_{0}+\mathrm{div\,}\left(\mathcal{O}_{0,1}\right)+\mathrm{div\,}\left(\mathcal{O}_{0,2}\right)+\mathrm{div\,}\mathring{R}_{\ell_{q}}+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}$
$\displaystyle=\mathcal{T}_{0}+\mathcal{N}_{0}+\mathrm{div\,}\left(\mathring{R}_{\ell_{q}}+\mathcal{O}_{0,1,1}\right)+\mathrm{div\,}\left(\mathcal{O}_{0,1,2}+\mathcal{O}_{0,1,3}+\mathcal{O}_{0,2}\right)+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}$
$\displaystyle=\mathcal{T}_{0}+\mathcal{N}_{0}+\nabla\pi$
$\displaystyle\quad+\mathrm{div\,}(\mathcal{H}+\mathcal{R}^{*})\bigg{[}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}$
$\displaystyle\qquad\qquad\times\mathbb{P}_{\geq\lambda_{q,0}}\left(\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)(\mathbb{W}_{\xi,q+1,0}\otimes\mathbb{W}_{\xi,q+1,0})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\quad+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}$
$\displaystyle\qquad\qquad\times\mathbb{P}_{\geq\lambda_{q,0}}\left(\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)(\mathbb{W}^{\theta}\mathbb{W}^{\gamma})_{\xi,q+1,0}(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\bigg{]}$
(8.37)
$\displaystyle\qquad+\mathrm{div\,}\left(\mathcal{O}_{0,1,2}+\mathcal{O}_{0,1,3}+\mathcal{O}_{0,2}\right)+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}$
$\displaystyle=\nabla\pi+\mathrm{div\,}\bigg{[}\underbrace{\left(\mathcal{H}+\mathcal{R}^{*}\right)(\mathcal{T}_{0})}_{\textnormal{transport}}+\underbrace{\left(\mathcal{H}+\mathcal{R}^{*}\right)(\mathcal{N}_{0})}_{\textnormal{Nash}}+\mathring{R}_{q}^{\textnormal{comm}}$
(8.38)
$\displaystyle\qquad+\underbrace{\left(\mathcal{H}+\mathcal{R}^{*}\right)\bigg{(}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{[q,{n_{\rm max}},{p_{\rm
max}}+1]}(\mathbb{W}_{\xi,q+1,0}\otimes\mathbb{W}_{\xi,q+1,0})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}}_{\textnormal{Type
1 - part of }\mathring{R}_{q+1}^{0}}$ (8.39)
$\displaystyle\qquad\qquad+\underbrace{\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{[q,{n_{\rm
max}},{p_{\rm
max}}+1]}(\mathbb{W}_{\xi,q+1,0}^{\theta}\mathbb{W}_{\xi,q+1,0}^{\gamma})(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}}_{\textnormal{Type
1 - part of }\mathring{R}_{q+1}^{0}}\bigg{)}$ (8.40)
$\displaystyle\quad+\underbrace{\mathcal{R}^{*}\bigg{(}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,0}}\left(\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}\right)(\mathbb{W}_{\xi,q+1,0}\otimes\mathbb{W}_{\xi,q+1,0})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}}_{\textnormal{Type
1 - part of }\mathring{R}_{q+1}^{0}}$ (8.41)
$\displaystyle\qquad+\underbrace{\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{\geq\lambda_{q,0}}\left(\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}\right)(\mathbb{W}_{\xi,q+1,0}^{\theta}\mathbb{W}_{\xi,q+1,0}^{\gamma})(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}}_{\textnormal{Type
1 - part of }\mathring{R}_{q+1}^{0}}\bigg{)}$ (8.42)
$\displaystyle\qquad\qquad+\underbrace{\mathcal{O}_{0,1,2}+\mathcal{O}_{0,1,3}}_{\textnormal{divergence
corrector}}+\underbrace{\mathcal{O}_{0,2}}_{\textnormal{Type 2}}\bigg{]}$
(8.43)
$\displaystyle\quad+\mathrm{div\,}\mathcal{H}\bigg{(}\underbrace{\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,0}}\left(\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}\right)(\mathbb{W}_{\xi,q+1,0}\otimes\mathbb{W}_{\xi,q+1,0})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}}_{\textnormal{Type
1 - }\mathring{H}_{q,n,p}^{0}}$ (8.44)
$\displaystyle\qquad+\underbrace{\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{\geq\lambda_{q,0}}\left(\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}\right)(\mathbb{W}_{\xi,q+1,0}^{\theta}\mathbb{W}_{\xi,q+1,0}^{\gamma})(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}}_{\textnormal{Type
1 - }\mathring{H}_{q,n,p}^{0}}\bigg{)}$ (8.45)
$\displaystyle:=\mathrm{div\,}(\mathring{R}_{q+1}^{0})+\mathrm{div\,}\left(\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{{p_{\rm
max}}}\mathring{H}_{q,n,p}^{0}\right)+\nabla\pi+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}},$
thus verifying (7.10) from Proposition 7.3 after condensing the labeled terms
into $\mathring{R}_{q+1}^{0}$ and using (8.35) on the pieces labeled
$\mathring{H}_{q,n,p}^{0}$.
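Schematically, and suppressing the pressure and the commutator stress, which are carried along separately, the condensation in the previous display reads
$\mathring{R}_{q+1}^{0}=\left(\mathcal{H}+\mathcal{R}^{*}\right)(\mathcal{T}_{0})+\left(\mathcal{H}+\mathcal{R}^{*}\right)(\mathcal{N}_{0})+\left(\textnormal{Type 1 terms from (8.39)--(8.42)}\right)+\mathcal{O}_{0,1,2}+\mathcal{O}_{0,1,3}+\mathcal{O}_{0,2}\,,$
while the local Type 1 pieces in (8.44)--(8.45) produce the terms $\mathring{H}_{q,n,p}^{0}$ via (8.35).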
#### 8.3.2 The case $1\leq{\widetilde{n}}\leq{n_{\rm max}}-1$
From (7.17), we assume that $v_{q,{\widetilde{n}}-1}$ is divergence-free and
is a solution to
$\displaystyle\partial_{t}v_{q,{\widetilde{n}}-1}+$
$\displaystyle\mathrm{div\,}\left(v_{q,{\widetilde{n}}-1}\otimes
v_{q,{\widetilde{n}}-1}\right)+\nabla p_{q,{\widetilde{n}}-1}$
$\displaystyle=\mathrm{div\,}\left(\mathring{R}_{q+1}^{{\widetilde{n}}-1}\right)+\mathrm{div\,}\left(\sum\limits_{n={\widetilde{n}}}^{{n_{\rm
max}}}\sum_{p=1}^{p_{\rm
max}}\sum\limits_{n^{\prime}=0}^{{\widetilde{n}}-1}\mathring{H}_{q,n,p}^{n^{\prime}}\right)+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}\,.$
Now using the definition of $\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}$
from (8.7) and adding $w_{q+1,{\widetilde{n}}}$ as defined in (8.13), we have
that
$v_{q,{\widetilde{n}}}:=v_{q,{\widetilde{n}}-1}+w_{q+1,{\widetilde{n}}}=v_{\ell_{q}}+\sum_{0\leq
n^{\prime}\leq{\widetilde{n}}-1}w_{q+1,n^{\prime}}+w_{q+1,{\widetilde{n}}}$
solves
$\displaystyle\partial_{t}v_{q,{\widetilde{n}}}+$
$\displaystyle\mathrm{div\,}\left(v_{q,{\widetilde{n}}}\otimes
v_{q,{\widetilde{n}}}\right)+\nabla p_{q,{\widetilde{n}}-1}$
$\displaystyle=\mathrm{div\,}\left(\mathring{R}_{q+1}^{{\widetilde{n}}-1}\right)+\mathrm{div\,}\left(\sum\limits_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{p_{\rm
max}}\sum\limits_{n^{\prime}=0}^{{\widetilde{n}}-1}\mathring{H}_{q,n,p}^{n^{\prime}}\right)+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}$
$\displaystyle\quad+(\partial_{t}+v_{\ell_{q}}\cdot\nabla)w_{q+1,{\widetilde{n}}}+w_{q+1,{\widetilde{n}}}\cdot\nabla
v_{\ell_{q}}$
$\displaystyle\quad+\sum_{n^{\prime}\leq{\widetilde{n}}-1}\mathrm{div\,}\left(w_{q+1,{\widetilde{n}}}\otimes
w_{q+1,n^{\prime}}+w_{q+1,n^{\prime}}\otimes w_{q+1,{\widetilde{n}}}\right)$
$\displaystyle\quad+\mathrm{div\,}\left(w_{q+1,{\widetilde{n}}}\otimes
w_{q+1,{\widetilde{n}}}+\sum_{{\widetilde{p}}=1}^{p_{\rm
max}}\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}\right).$ (8.46)
The first term on the right hand side is
$\mathring{R}_{q+1}^{{\widetilde{n}}-1}$, which satisfies the same estimates
as $\mathring{R}_{q+1}^{\widetilde{n}}$ by (7.20) and will thus be absorbed
into $\mathring{R}_{q+1}^{{\widetilde{n}}}$ (these estimates do not change in
${\widetilde{n}}$ save for implicit constants). The second term, save for the
fact that the sum is over $n^{\prime}\leq{\widetilde{n}}-1$ rather than
$n^{\prime}\leq{\widetilde{n}}$ and is therefore missing the terms
$\mathring{H}_{q,n,p}^{\widetilde{n}}$, matches (7.17) at level
${\widetilde{n}}$ (i.e. replacing every instance of ${\widetilde{n}}-1$ with
${\widetilde{n}}$). As before, we apply the inverse divergence operators from
Proposition A.18 to the transport and Nash errors to obtain
$(\partial_{t}+v_{\ell_{q}}\cdot\nabla)w_{q+1,{\widetilde{n}}}+w_{q+1,{\widetilde{n}}}\cdot\nabla
v_{\ell_{q}}=\mathrm{div\,}\left((\mathcal{H}+\mathcal{R}^{*})\left((\partial_{t}+v_{\ell_{q}}\cdot\nabla)w_{q+1,{\widetilde{n}}}+w_{q+1,{\widetilde{n}}}\cdot\nabla
v_{\ell_{q}}\right)\right)+\nabla\pi,$
and these errors are absorbed into $\mathring{R}_{q+1}^{\widetilde{n}}$ or the
new pressure. We will show in Section 8.7 that the interaction of
$w_{q+1,{\widetilde{n}}}$ with previous terms $w_{q+1,n^{\prime}}$ is a Type 2
oscillation error so that
$\sum_{n^{\prime}\leq{\widetilde{n}}-1}\left(w_{q+1,{\widetilde{n}}}\otimes
w_{q+1,n^{\prime}}+w_{q+1,n^{\prime}}\otimes
w_{q+1,{\widetilde{n}}}\right)=0.$ (8.47)
So to verify (7.17) at level ${\widetilde{n}}$, only the analysis of
$\mathrm{div\,}\left(w_{q+1,{\widetilde{n}}}\otimes
w_{q+1,{\widetilde{n}}}+\sum_{{\widetilde{p}}=1}^{p_{\rm
max}}\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}\right)$
remains. Reusing the notations from (8.28)\footnote{In a slight abuse of
notation, notice that the admissible values of $\vec{l}$ have changed, since
these parameters describe the checkerboard cutoff functions at scale
$\lambda_{q,{\widetilde{n}},1}^{-1}$ and thus depend on ${\widetilde{n}}$.} and
writing out the self-interaction of $w_{q+1,{\widetilde{n}}}$ yields
$\displaystyle\mathrm{div\,}\left(w_{q+1,{\widetilde{n}}}\otimes
w_{q+1,{\widetilde{n}}}\right)$
$\displaystyle=\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathrm{div\,}\left(\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\otimes\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\right)$
$\displaystyle+\sum_{\neq\\{\xi,i,j,k,{\widetilde{p}},\vec{l}\\}}\mathrm{div\,}\left(\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\otimes\mathrm{curl\,}\left(a_{({\xi^{*}})}\nabla\Phi_{({i^{*}},{k^{*}})}^{T}\mathbb{U}_{{\xi^{*}},q+1,{\widetilde{n}}}\circ\Phi_{({i^{*}},{k^{*}})}\right)\right)$
$\displaystyle:=\mathrm{div\,}\mathcal{O}_{{\widetilde{n}},1}+\mathrm{div\,}\mathcal{O}_{{\widetilde{n}},2}.$
(8.48)
As before, we will show that $\mathcal{O}_{{\widetilde{n}},2}$ is a Type 2
oscillation error so that
$\mathcal{O}_{{\widetilde{n}},2}=0.$
Splitting $\mathcal{O}_{{\widetilde{n}},1}$ gives
$\displaystyle\mathrm{div\,}\mathcal{O}_{{\widetilde{n}},1}=\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathrm{div\,}\left(\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\otimes\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\right)$
$\displaystyle\qquad+2\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathrm{div\,}\left(\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\otimes_{\mathrm{s}}\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\right)\right)$
$\displaystyle\qquad+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathrm{div\,}\left(\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\right)\otimes\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\right)\right)$
$\displaystyle:=\mathrm{div\,}\left(\mathcal{O}_{{\widetilde{n}},1,1}+\mathcal{O}_{{\widetilde{n}},1,2}+\mathcal{O}_{{\widetilde{n}},1,3}\right).$
(8.49)
The last two of these terms are again divergence corrector errors and will
therefore be absorbed into $\mathring{R}_{q+1}^{\widetilde{n}}$ and estimated
in Section 8.8. So the only terms remaining from (8.46) are
$\mathcal{O}_{{\widetilde{n}},1,1}$ and $\sum_{{\widetilde{p}}=1}^{p_{\rm
max}}\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}$, which are analyzed in
a fashion similar to the ${\widetilde{n}}=0$ case, save for the fact that
summation over ${\widetilde{p}}$ is now crucial.
Recall cf. (8.10) that $\mathbb{W}_{\xi,q+1,{\widetilde{n}}}$ is periodized to
scale
$\left(\lambda_{q+1}r_{q+1,{\widetilde{n}}}\right)^{-1}=\lambda_{q,{\widetilde{n}}}^{-1}$.
Using (7.8), we have that
$\displaystyle\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\otimes\mathbb{W}_{\xi,q+1,{\widetilde{n}}}$
$\displaystyle=\fint_{\mathbb{T}^{3}}{\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\otimes\mathbb{W}_{\xi,q+1,{\widetilde{n}}}}$
$\displaystyle+\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\left(\sum_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{{p_{\rm
max}}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)\left(\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\otimes\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\right).$
Combining this division with identity (4.15) from Proposition 4.4, we further
split $\mathcal{O}_{{\widetilde{n}},1,1}$ as
$\displaystyle\mathrm{div\,}\left(\mathcal{O}_{{\widetilde{n}},1,1}\right)=\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathrm{div\,}\left(a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\left(\fint_{\mathbb{T}^{3}}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\otimes\mathbb{W}_{\xi,q+1,{\widetilde{n}}}(\Phi_{(i,k)})\right)\nabla\Phi_{(i,k)}^{-T}\right)$
$\displaystyle+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathrm{div\,}\bigg{(}a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}$
$\displaystyle\qquad\qquad\times\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\left(\sum_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{{p_{\rm
max}}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)(\mathbb{W}\otimes\mathbb{W})_{\xi,q+1,{\widetilde{n}}}(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}\bigg{)}$
$\displaystyle=\mathrm{div\,}\sum_{i,j,k,{\widetilde{p}},\vec{l}}\sum_{\xi}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma_{q+1}^{2j+4}\eta_{(i,j,k)}^{2}\gamma_{\xi}^{2}\left(\frac{R_{q,{\widetilde{n}},{\widetilde{p}},j,i,k}}{\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma_{q+1}^{2j+4}}\right)\nabla\Phi_{(i,k)}^{-1}\left(\xi\otimes\xi\right)\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}$
$\displaystyle\qquad\qquad\times\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\left(\sum_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{{p_{\rm
max}}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)(\mathbb{W}\otimes\mathbb{W})_{\xi,q+1,{\widetilde{n}}}(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}$
$\displaystyle\qquad\qquad\times\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\left(\sum_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{{p_{\rm
max}}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)(\mathbb{W}^{\theta}\mathbb{W}^{\gamma})_{\xi,q+1,{\widetilde{n}}}(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\,.$
(8.50)
By Proposition 4.1, equation (4.1), and identity (8.8), we obtain that
$\displaystyle\mathrm{div\,}\sum_{i,j,k,{\widetilde{p}},\vec{l}}\sum_{\xi}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma_{q+1}^{2j+4}\eta_{(i,j,k)}^{2}\gamma_{\xi}^{2}\left(\frac{R_{q,{\widetilde{n}},{\widetilde{p}},j,i,k}}{\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma_{q+1}^{2j+4}}\right)\nabla\Phi_{(i,k)}^{-1}\left(\xi\otimes\xi\right)\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\qquad=\mathrm{div\,}\sum_{i,j,k,{\widetilde{p}},\vec{l}}\eta_{(i,j,k)}^{2}\left(\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma_{q+1}^{2j+4}\mathrm{Id}-\sum_{{\widetilde{p}}=1}^{p_{\rm
max}}\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}\right)$
$\displaystyle\qquad=-\mathrm{div\,}\sum_{i,j,k,\vec{l}}\sum_{{\widetilde{p}}=1}^{p_{\rm
max}}\eta^{2}_{(i,j,k)}\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}+\nabla\left(\sum_{i,j,k,\vec{l}}\eta_{(i,j,k)}^{2}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\Gamma_{q+1}^{2j+4}\right)$
$\displaystyle\qquad:=-\mathrm{div\,}\sum_{{\widetilde{p}}=1}^{p_{\rm
max}}\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}+\nabla\pi\,,$ (8.51)
where in the last equality we have appealed to (6.142). We can finally apply
Proposition A.18 to the remaining terms in (8.50) for ${\widetilde{n}}+1\leq
n\leq{n_{\rm max}}$ and $1\leq p\leq{p_{\rm max}}$ to define
$\displaystyle\mathring{H}_{q,n,p}^{\widetilde{n}}:=\mathcal{H}\bigg{(}\sum_{\xi,i,j,k,{\widetilde{p}}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\mathbb{P}_{[q,n,p]}(\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\otimes\mathbb{W}_{\xi,q+1,{\widetilde{n}}})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\qquad+\sum_{\xi,i,j,k,{\widetilde{p}}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\mathbb{P}_{[q,n,p]}(\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{\theta}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{\gamma})(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\bigg{)}.$
(8.52)
As before, the terms from (8.50) with $\mathbb{P}_{\left[q,{n_{\rm
max}},{p_{\rm max}}+1\right]}$ will be absorbed into
$\mathring{R}_{q+1}^{\widetilde{n}}$. We will show shortly that the terms
$\mathring{H}_{q,n,p}^{\widetilde{n}}$ in (8.52) are precisely the terms
needed to make (8.46) match (7.17) at level ${\widetilde{n}}$. As before, any
nonlocal inverse divergence terms will be absorbed into
$\mathring{R}_{q+1}^{\widetilde{n}}$.
Recall from (7.9) that $\mathring{R}_{q+1}^{\widetilde{n}}$ will include
$\mathring{R}_{q+1}^{{\widetilde{n}}-1}$ in addition to error terms arising
from the addition of $w_{q+1,{\widetilde{n}}}$ which are small enough to be
absorbed in $\mathring{R}_{q+1}$. Then to check (7.17), we return to (8.46)
and use (8.48), (8.49), (8.50), (8.51), and (8.52) to write
$\displaystyle\partial_{t}v_{q,{\widetilde{n}}}+\mathrm{div\,}\left(v_{q,{\widetilde{n}}}\otimes
v_{q,{\widetilde{n}}}\right)+\nabla p_{q,{\widetilde{n}}-1}$
$\displaystyle=\mathrm{div\,}\left(\sum\limits_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{p_{\rm
max}}\sum\limits_{n^{\prime}=0}^{{\widetilde{n}}-1}\mathring{H}_{q,n,p}^{n^{\prime}}\right)+\mathrm{div\,}\left(\mathring{R}_{q+1}^{{\widetilde{n}}-1}\right)+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}$
$\displaystyle\quad+(\partial_{t}+v_{\ell_{q}}\cdot\nabla)w_{q+1,{\widetilde{n}}}+w_{q+1,{\widetilde{n}}}\cdot\nabla
v_{\ell_{q}}$
$\displaystyle\quad+\sum_{n^{\prime}\leq{\widetilde{n}}-1}\mathrm{div\,}\left(w_{q+1,{\widetilde{n}}}\otimes
w_{q+1,n^{\prime}}+w_{q+1,n^{\prime}}\otimes w_{q+1,{\widetilde{n}}}\right)$
$\displaystyle\quad+\mathrm{div\,}\left(w_{q+1,{\widetilde{n}}}\otimes
w_{q+1,{\widetilde{n}}}+\sum_{{\widetilde{p}}=1}^{p_{\rm
max}}\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}\right)$
$\displaystyle=\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}+\mathrm{div\,}\left(\sum\limits_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{p_{\rm
max}}\sum\limits_{n^{\prime}=0}^{{\widetilde{n}}-1}\mathring{H}_{q,n,p}^{n^{\prime}}\right)+\mathrm{div\,}\bigg{(}\mathring{R}_{q+1}^{{\widetilde{n}}-1}+(\mathcal{H}+\mathcal{R}^{*})\left(\partial_{t}w_{q+1,{\widetilde{n}}}+v_{\ell_{q}}\cdot\nabla
w_{q+1,{\widetilde{n}}}\right)$
$\displaystyle\qquad+(\mathcal{H}+\mathcal{R}^{*})\left(w_{q+1,{\widetilde{n}}}\cdot\nabla
v_{\ell_{q}}\right)+\sum_{n^{\prime}\leq{\widetilde{n}}-1}\left(w_{q+1,{\widetilde{n}}}\otimes
w_{q+1,n^{\prime}}+w_{q+1,n^{\prime}}\otimes
w_{q+1,{\widetilde{n}}}\right)\bigg{)}$
$\displaystyle\qquad+\mathrm{div\,}\left(\mathcal{O}_{{\widetilde{n}},1,2}+\mathcal{O}_{{\widetilde{n}},1,3}+\mathcal{O}_{{\widetilde{n}},2}\right)+\nabla\pi+\mathrm{div\,}\left(\mathcal{O}_{{\widetilde{n}},1,1}+\sum_{{\widetilde{p}}=1}^{p_{\rm
max}}\mathring{R}_{q,{\widetilde{n}},{\widetilde{p}}}\right)$
$\displaystyle=\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}+\mathrm{div\,}\left(\sum\limits_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{p_{\rm
max}}\sum\limits_{n^{\prime}=0}^{{\widetilde{n}}-1}\mathring{H}_{q,n,p}^{n^{\prime}}\right)+\mathrm{div\,}\bigg{(}\mathring{R}_{q+1}^{{\widetilde{n}}-1}+(\mathcal{H}+\mathcal{R}^{*})\left(\partial_{t}w_{q+1,{\widetilde{n}}}+v_{\ell_{q}}\cdot\nabla
w_{q+1,{\widetilde{n}}}\right)$
$\displaystyle\quad+(\mathcal{H}+\mathcal{R}^{*})\left(w_{q+1,{\widetilde{n}}}\cdot\nabla
v_{\ell_{q}}\right)+\sum_{n^{\prime}\leq{\widetilde{n}}-1}\left(w_{q+1,{\widetilde{n}}}\otimes
w_{q+1,n^{\prime}}+w_{q+1,n^{\prime}}\otimes
w_{q+1,{\widetilde{n}}}\right)\bigg{)}$
$\displaystyle\quad+\mathrm{div\,}\left(\mathcal{O}_{{\widetilde{n}},1,2}+\mathcal{O}_{{\widetilde{n}},1,3}+\mathcal{O}_{{\widetilde{n}},2}\right)+\nabla\pi$
$\displaystyle\quad+\mathrm{div\,}\bigg{[}\left(\mathcal{H}+\mathcal{R}^{*}\right)\bigg{(}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}$
$\displaystyle\qquad\qquad\qquad\times\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\left(\sum_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{{p_{\rm
max}}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)(\mathbb{W}\otimes\mathbb{W})_{\xi,q+1,{\widetilde{n}}}(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\qquad\qquad+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}$
$\displaystyle\qquad\qquad\qquad\times\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\left(\sum_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{{p_{\rm
max}}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)(\mathbb{W}^{\theta}\mathbb{W}^{\gamma})_{\xi,q+1,{\widetilde{n}}}(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\bigg{)}\bigg{]}$
(8.53)
$\displaystyle=\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}+\mathrm{div\,}\left({\sum\limits_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{p_{\rm
max}}\sum\limits_{n^{\prime}=0}^{{\widetilde{n}}-1}\mathring{H}_{q,n,p}^{n^{\prime}}}\right)+\mathrm{div\,}\bigg{(}\mathring{R}_{q+1}^{{\widetilde{n}}-1}+\underbrace{(\mathcal{H}+\mathcal{R}^{*})\left(\partial_{t}w_{q+1,{\widetilde{n}}}+v_{\ell_{q}}\cdot\nabla
w_{q+1,{\widetilde{n}}}\right)}_{\textnormal{transport}}$ (8.54)
$\displaystyle\qquad+\underbrace{(\mathcal{H}+\mathcal{R}^{*})\left(w_{q+1,{\widetilde{n}}}\cdot\nabla
v_{\ell_{q}}\right)}_{\textnormal{Nash}}+\underbrace{\sum_{n^{\prime}\leq{\widetilde{n}}-1}\left(w_{q+1,{\widetilde{n}}}\otimes
w_{q+1,n^{\prime}}+w_{q+1,n^{\prime}}\otimes
w_{q+1,{\widetilde{n}}}\right)}_{\textnormal{Type 2}}\bigg{)}$ (8.55)
$\displaystyle\qquad+\mathrm{div\,}\bigg{(}\underbrace{\mathcal{O}_{{\widetilde{n}},1,2}+\mathcal{O}_{{\widetilde{n}},1,3}}_{\textnormal{divergence
corrector}}+\underbrace{\mathcal{O}_{{\widetilde{n}},2}}_{\textnormal{Type
2}}\bigg{)}+\nabla\pi$ (8.56)
$\displaystyle\qquad+\mathrm{div\,}\bigg{[}\underbrace{(\mathcal{H}+\mathcal{R}^{*})\bigg{(}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{[q,{n_{\rm max}},{p_{\rm
max}}+1]}(\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\otimes\mathbb{W}_{\xi,q+1,{\widetilde{n}}})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}}_{\textnormal{Type
1 - part of }\mathring{R}_{q+1}^{\widetilde{n}}}$ (8.57)
$\displaystyle\qquad\qquad+\underbrace{\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{[q,{n_{\rm
max}},{p_{\rm
max}}+1]}(\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{\theta}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{\gamma})(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\bigg{)}}_{\textnormal{Type
1 - part of }\mathring{R}_{q+1}^{\widetilde{n}}}$ (8.58)
$\displaystyle+\mathcal{R}^{*}\bigg{(}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}$
$\displaystyle\qquad\qquad\times\underbrace{\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\left(\sum_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{{p_{\rm
max}}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)(\mathbb{W}\otimes\mathbb{W})_{\xi,q+1,{\widetilde{n}}}(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}}_{\textnormal{Type
1 - part of }\mathring{R}_{q+1}^{\widetilde{n}}}$ (8.59)
$\displaystyle+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}$
$\displaystyle\qquad\qquad\times\underbrace{\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\left(\sum_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{{p_{\rm
max}}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\right)(\mathbb{W}^{\theta}\mathbb{W}^{\gamma})_{\xi,q+1,{\widetilde{n}}}(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}}_{\textnormal{Type
1 - part of }\mathring{R}_{q+1}^{\widetilde{n}}}\bigg{)}\bigg{]}$ (8.60)
$\displaystyle\quad+\mathrm{div\,}\underbrace{\mathcal{H}\bigg{[}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\left(\sum_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{{p_{\rm
max}}}\mathbb{P}_{[q,n,p]}\right)(\mathbb{W}\otimes\mathbb{W})_{\xi,q+1,{\widetilde{n}}}(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}}_{\textnormal{Type
1 - }\mathring{H}_{q,n,p}^{\widetilde{n}}}$ (8.61)
$\displaystyle\qquad+\underbrace{\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\left(\sum_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum_{p=1}^{{p_{\rm
max}}}\mathbb{P}_{[q,n,p]}\right)(\mathbb{W}^{\theta}\mathbb{W}^{\gamma})_{\xi,q+1,{\widetilde{n}}}(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\bigg{]}}_{\textnormal{Type
1 - }\mathring{H}_{q,n,p}^{\widetilde{n}}}$ (8.62)
$\displaystyle=\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}+\mathrm{div\,}\mathring{R}_{q+1}^{{\widetilde{n}}}+\mathrm{div\,}\sum\limits_{n={\widetilde{n}}+1}^{n_{\rm
max}}\sum_{p=1}^{p_{\rm
max}}\sum\limits_{n^{\prime}=0}^{\widetilde{n}}\mathring{H}_{q,n,p}^{n^{\prime}}+\nabla\pi,$
which concludes the proof after identifying the first seven lines (save for
the sums of $\mathring{H}_{q,n,p}^{n^{\prime}}$ terms) of the second-to-last
equality as $\mathring{R}_{q+1}^{\widetilde{n}}$ and using (8.52) to
incorporate the eighth and ninth lines into the new sums of
$\mathring{H}_{q,n,p}^{n^{\prime}}$ terms. Note that we have implicitly used in
the above equalities that
$\left(\partial_{t}+v_{\ell_{q}}\cdot\nabla\right)w_{q+1,{\widetilde{n}}}$ has
zero mean, which can be deduced in the same fashion as for the case
${\widetilde{n}}=0$.
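For completeness, the zero-mean claim admits the following one-line sketch (a heuristic restatement, assuming, as in the construction, that $w_{q+1,{\widetilde{n}}}$ is a curl, hence mean-free, and that $v_{\ell_{q}}$ is divergence-free):

```latex
% Componentwise, for each j:
\int_{\mathbb{T}^{3}}\left(\partial_{t}+v_{\ell_{q}}\cdot\nabla\right)w_{q+1,{\widetilde{n}}}^{j}\,dx
  =\frac{d}{dt}\underbrace{\int_{\mathbb{T}^{3}}w_{q+1,{\widetilde{n}}}^{j}\,dx}_{=0\text{ (curl)}}
  +\int_{\mathbb{T}^{3}}\mathrm{div\,}\bigl(v_{\ell_{q}}\,w_{q+1,{\widetilde{n}}}^{j}\bigr)\,dx
  =0\,.
```

Both terms vanish: the first because a periodic curl has zero mean at every time, the second because the integral of a divergence over $\mathbb{T}^{3}$ is zero.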
#### 8.3.3 The case ${\widetilde{n}}={n_{\rm max}}$
From (7.24), we assume that $v_{q,{n_{\rm max}}-1}$ is divergence-free and is
a solution to
$\displaystyle\partial_{t}v_{q,{n_{\rm max}}-1}+$
$\displaystyle\mathrm{div\,}\left(v_{q,{n_{\rm max}}-1}\otimes v_{q,{n_{\rm
max}}-1}\right)+\nabla p_{q,{n_{\rm max}}-1}$
$\displaystyle=\mathrm{div\,}\left(\mathring{R}_{q+1}^{{n_{\rm
max}}-1}\right)+\mathrm{div\,}\left(\sum\limits_{n^{\prime}=0}^{{n_{\rm
max}}-1}\sum_{p=1}^{p_{\rm max}}\mathring{H}_{q,{n_{\rm
max}},p}^{n^{\prime}}\right)+\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}\,.$
Now using the definition of $\mathring{R}_{q,{n_{\rm max}},p}$ from (8.7) and
adding $w_{q+1,{n_{\rm max}}}$ as defined in (8.13), we have that
$v_{q+1}:=v_{q,{n_{\rm max}}-1}+w_{q+1,{n_{\rm max}}}$ solves
$\displaystyle\partial_{t}v_{q+1}+$
$\displaystyle\mathrm{div\,}\left(v_{q+1}\otimes v_{q+1}\right)+\nabla
p_{q,{n_{\rm max}}-1}$
$\displaystyle=\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}+\mathrm{div\,}\left(\mathring{R}_{q+1}^{{n_{\rm
max}}-1}\right)+(\partial_{t}+v_{\ell_{q}}\cdot\nabla)w_{q+1,{n_{\rm
max}}}+w_{q+1,{n_{\rm max}}}\cdot\nabla v_{\ell_{q}}$
$\displaystyle\quad+\sum_{n^{\prime}\leq{n_{\rm
max}}-1}\mathrm{div\,}\left(w_{q+1,{n_{\rm max}}}\otimes
w_{q+1,n^{\prime}}+w_{q+1,n^{\prime}}\otimes w_{q+1,{n_{\rm max}}}\right)$
$\displaystyle\quad+\mathrm{div\,}\left(w_{q+1,{n_{\rm max}}}\otimes
w_{q+1,{n_{\rm max}}}+\sum_{p=1}^{p_{\rm max}}\mathring{R}_{q,{n_{\rm
max}},p}\right).$ (8.63)
We absorb the term $\mathrm{div\,}\left(\mathring{R}_{q+1}^{{n_{\rm
max}}-1}\right)$ into $\mathring{R}_{q+1}$ immediately. We will then show that
up to a pressure term,
$\left(\mathcal{H}+\mathcal{R}^{*}\right)\left(\left(\partial_{t}+v_{\ell_{q}}\cdot\nabla\right)w_{q+1,{n_{\rm
max}}}\right),\qquad\left(\mathcal{H}+\mathcal{R}^{*}\right)\left(w_{q+1,{n_{\rm
max}}}\cdot\nabla v_{\ell_{q}}\right)$
can be absorbed into $\mathring{R}_{q+1}$ in Sections 8.4 and 8.5,
respectively. We will show in Section 8.7 that the interaction of $w_{q+1,{n_{\rm
max}}}$ with previous perturbations $w_{q+1,n^{\prime}}$ will satisfy
$\sum_{n^{\prime}\leq{n_{\rm max}}-1}\left(w_{q+1,{n_{\rm max}}}\otimes
w_{q+1,n^{\prime}}+w_{q+1,n^{\prime}}\otimes w_{q+1,{n_{\rm max}}}\right)=0.$
(8.64)
Thus it remains to analyze
$\mathrm{div\,}\left(w_{q+1,{n_{\rm max}}}\otimes w_{q+1,{n_{\rm
max}}}+\sum_{p=1}^{p_{\rm max}}\mathring{R}_{q,{n_{\rm max}},p}\right)$
from (8.63). Reusing the notations from (8.28)–(8.29), we can write out the
self-interaction of $w_{q+1,{n_{\rm max}}}$ as
$\displaystyle\mathrm{div\,}$ $\displaystyle\left(w_{q+1,{n_{\rm max}}}\otimes
w_{q+1,{n_{\rm max}}}\right)$
$\displaystyle=\sum_{\xi,i,j,k,p,\vec{l}}\mathrm{div\,}\left(\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{n_{\rm max}}}\right)\otimes\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{n_{\rm max}}}\right)\right)$
$\displaystyle\qquad+\sum_{\neq\{\xi,i,j,k,p,\vec{l}\}}\mathrm{div\,}\left(\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{n_{\rm max}}}\right)\otimes\mathrm{curl\,}\left(a_{(\xi^{\prime})}\nabla\Phi_{(i^{\prime},k^{\prime})}^{T}\mathbb{U}_{\xi^{\prime},q+1,{n_{\rm max}}}\right)\right)$ $\displaystyle:=\mathrm{div\,}\mathcal{O}_{{n_{\rm
max}},1}+\mathrm{div\,}\mathcal{O}_{{n_{\rm max}},2}.$ (8.65)
As before, we will show in Section 8.7 that $\mathcal{O}_{{n_{\rm max}},2}$ is
a Type 2 oscillation error and so
$\mathcal{O}_{{n_{\rm max}},2}=0.$
Splitting $\mathcal{O}_{{n_{\rm max}},1}$ gives
$\displaystyle\mathrm{div\,}\mathcal{O}_{{n_{\rm
max}},1}=\sum_{\xi,i,j,k,p,\vec{l}}\mathrm{div\,}\left(\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,{n_{\rm
max}}}\circ\Phi_{(i,k)}\right)\otimes\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,{n_{\rm
max}}}\circ\Phi_{(i,k)}\right)\right)$
$\displaystyle\qquad+2\sum_{\xi,i,j,k,p,\vec{l}}\mathrm{div\,}\left(\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,{n_{\rm
max}}}\circ\Phi_{(i,k)}\right)\otimes_{\mathrm{s}}\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{n_{\rm
max}}}\circ\Phi_{(i,k)}\right)\right)\right)$
$\displaystyle\qquad+\sum_{\xi,i,j,k,p,\vec{l}}\mathrm{div\,}\left(\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{n_{\rm
max}}}\circ\Phi_{(i,k)}\right)\right)\otimes\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{n_{\rm
max}}}\circ\Phi_{(i,k)}\right)\right)\right)$
$\displaystyle:=\mathrm{div\,}\left(\mathcal{O}_{{n_{\rm
max}},1,1}+\mathcal{O}_{{n_{\rm max}},1,2}+\mathcal{O}_{{n_{\rm
max}},1,3}\right).$ (8.66)
The last two of these three terms are again divergence corrector errors and
will therefore be absorbed into $\mathring{R}_{q+1}$ and estimated in Section
8.8.
Recall (cf. (8.3)) that $\mathbb{W}_{\xi,q+1,{n_{\rm max}}}$ is periodized to
scale $\left(\lambda_{q+1}r_{q+1,{n_{\rm max}}}\right)^{-1}=\lambda_{q,{n_{\rm
max}}}^{-1}$. Combining this observation with (4.15) from Proposition 4.4 and
(7.8), we further split $\mathcal{O}_{{n_{\rm max}},1,1}$ as follows. (In this
case, $\mathbb{P}_{\geq\lambda_{q,{n_{\rm max}}}}$ has a greater minimum
frequency than $\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm max}}+1\right]}$,
cf. (9.28), (9.22), and (7.6). For the sake of consistency, we write
$\mathbb{P}_{\geq\lambda_{q,{n_{\rm max}}}}\mathbb{P}_{\left[q,{n_{\rm
max}},{p_{\rm max}}+1\right]}$ throughout this section.)
$\displaystyle\mathrm{div\,}\left(\mathcal{O}_{{n_{\rm
max}},1,1}\right)=\sum_{\xi,i,j,k,p,\vec{l}}\mathrm{div\,}\left(a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\left(\fint_{\mathbb{T}^{3}}\mathbb{W}_{\xi,q+1,{n_{\rm
max}}}\otimes\mathbb{W}_{\xi,q+1,{n_{\rm
max}}}(\Phi_{(i,k)})\right)\nabla\Phi_{(i,k)}^{-T}\right)$
$\displaystyle\quad+\sum_{\xi,i,j,k,p,\vec{l}}\mathrm{div\,}\left(a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,{n_{\rm
max}}}}\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}(\mathbb{W}_{\xi,q+1,{n_{\rm
max}}}\otimes\mathbb{W}_{\xi,q+1,{n_{\rm
max}}})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}\right)$
$\displaystyle=\mathrm{div\,}\sum_{i,j,k,p,\vec{l}}\sum_{\xi}\delta_{q+1,{n_{\rm
max}},p}\Gamma_{q+1}^{2j+4}\eta_{(i,j,k)}^{2}\gamma_{\xi}^{2}\left(\frac{R_{q,{n_{\rm
max}},p,j,i,k}}{\delta_{q+1,{n_{\rm
max}},p}\Gamma_{q+1}^{2j+4}}\right)\nabla\Phi_{(i,k)}^{-1}\left(\xi\otimes\xi\right)\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\quad+\sum_{\xi,i,j,k,p,\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,{n_{\rm
max}}}}\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}(\mathbb{W}_{\xi,q+1,{n_{\rm
max}}}\otimes\mathbb{W}_{\xi,q+1,{n_{\rm
max}}})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\quad+\sum_{\xi,i,j,k,p,\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{\geq\lambda_{q,{n_{\rm
max}}}}\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}(\mathbb{W}_{\xi,q+1,{n_{\rm
max}}}^{\theta}\mathbb{W}_{\xi,q+1,{n_{\rm
max}}}^{\gamma})(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}.$
(8.67)
By (4.1) from Proposition 4.1 and (8.8), we obtain that
$\displaystyle\mathrm{div\,}\sum_{i,j,k,p,\vec{l}}\sum_{\xi}\delta_{q+1,{n_{\rm
max}},p}\Gamma_{q+1}^{2j+4}\eta_{(i,j,k)}^{2}\gamma_{\xi}^{2}\left(\frac{R_{q,{n_{\rm
max}},p,j,i,k}}{\delta_{q+1,{n_{\rm
max}},p}\Gamma_{q+1}^{2j+4}}\right)\nabla\Phi_{(i,k)}^{-1}\left(\xi\otimes\xi\right)\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\qquad=\mathrm{div\,}\sum_{i,j,k,p,\vec{l}}\eta_{(i,j,k)}^{2}\left(\delta_{q+1,{n_{\rm
max}},p}\Gamma_{q+1}^{2j+4}\mathrm{Id}-\mathring{R}_{q,{n_{\rm
max}},p}\right)$
$\displaystyle\qquad=-\mathrm{div\,}\sum_{i,j,k,\vec{l}}\sum_{p=1}^{p_{\rm
max}}\eta^{2}_{(i,j,k)}\mathring{R}_{q,{n_{\rm
max}},p}+\nabla\left(\sum_{i,j,k,p,\vec{l}}\eta_{(i,j,k)}^{2}\delta_{q+1,{n_{\rm
max}},p}\Gamma_{q+1}^{2j+4}\right)$
$\displaystyle\qquad:=-\mathrm{div\,}\sum_{p=1}^{p_{\rm
max}}\mathring{R}_{q,{n_{\rm max}},p}+\nabla\pi\,,$ (8.68)
where in the last line we have used (6.142). We can apply Proposition A.18 to
the remaining two terms in (8.67) to produce the terms
$\displaystyle\left(\mathcal{H}+\mathcal{R}^{*}\right)\bigg{(}\sum_{\xi,i,j,k,p,\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,{n_{\rm
max}}}}\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}(\mathbb{W}\otimes\mathbb{W})_{\xi,q+1,{n_{\rm
max}}}(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\qquad+\sum_{\xi,i,j,k,p,\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{\geq\lambda_{q,{n_{\rm
max}}}}\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}(\mathbb{W}^{\theta}\mathbb{W}^{\gamma})_{\xi,q+1,{n_{\rm
max}}}(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\bigg{)},$
(8.69)
which will be absorbed into $\mathring{R}_{q+1}$ and estimated in Section 8.6.
Before combining the previous steps, we remind the reader that at this point,
$\mathring{R}_{q+1}$ will be fully defined, and will include
$\mathring{R}_{q+1}^{{n_{\rm max}}-1}$, all the error terms arising from the
addition of $w_{q+1,{n_{\rm max}}}$, and
$\mathring{R}_{q}^{\textnormal{comm}}$. Then from (8.63), (8.64), (8.65),
(8.66), (8.67), (8.68), and (8.69), we can finally write that
$\displaystyle\partial_{t}$ $\displaystyle
v_{q+1}+\mathrm{div\,}\left(v_{q+1}\otimes v_{q+1}\right)+\nabla p_{q,{n_{\rm
max}}-1}$
$\displaystyle=\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}+\mathrm{div\,}\left(\mathring{R}_{q+1}^{{n_{\rm
max}}-1}\right)+(\partial_{t}+v_{\ell_{q}}\cdot\nabla)w_{q+1,{n_{\rm
max}}}+w_{q+1,{n_{\rm max}}}\cdot\nabla v_{\ell_{q}}$
$\displaystyle\quad+\sum_{n^{\prime}\leq{n_{\rm
max}}-1}\mathrm{div\,}\left(w_{q+1,{n_{\rm max}}}\otimes
w_{q+1,n^{\prime}}+w_{q+1,n^{\prime}}\otimes w_{q+1,{n_{\rm max}}}\right)$
$\displaystyle\quad+\mathrm{div\,}\left(w_{q+1,{n_{\rm max}}}\otimes
w_{q+1,{n_{\rm max}}}+\sum_{p=1}^{p_{\rm max}}\mathring{R}_{q,{n_{\rm
max}},p}\right)$
$\displaystyle=\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}+\mathrm{div\,}\bigg{(}\mathring{R}_{q+1}^{{n_{\rm
max}}-1}+\left(\mathcal{H}+\mathcal{R}^{*}\right)\left(\partial_{t}w_{q+1,{n_{\rm
max}}}+v_{\ell_{q}}\cdot\nabla w_{q+1,{n_{\rm max}}}\right)$
$\displaystyle\qquad+(\mathcal{H}+\mathcal{R}^{*})\left(w_{q+1,{n_{\rm
max}}}\cdot\nabla v_{\ell_{q}}\right)+\sum_{n^{\prime}\leq{n_{\rm
max}}-1}\left(w_{q+1,{n_{\rm max}}}\otimes
w_{q+1,n^{\prime}}+w_{q+1,n^{\prime}}\otimes w_{q+1,{n_{\rm
max}}}\right)\bigg{)}$
$\displaystyle\qquad+\mathrm{div\,}\left(\mathcal{O}_{{n_{\rm
max}},1,1}+\mathcal{O}_{{n_{\rm max}},1,2}+\mathcal{O}_{{n_{\rm
max}},1,3}+\mathcal{O}_{{n_{\rm max}},2}+\sum_{p=1}^{p_{\rm
max}}\mathring{R}_{q,{n_{\rm max}},p}\right)+\nabla\pi$
$\displaystyle=\mathrm{div\,}\mathring{R}_{q}^{\textnormal{comm}}+\mathrm{div\,}\bigg{(}\mathring{R}_{q+1}^{{n_{\rm
max}}-1}+\underbrace{\left(\mathcal{H}+\mathcal{R}^{*}\right)\left(\partial_{t}w_{q+1,{n_{\rm
max}}}+v_{\ell_{q}}\cdot\nabla w_{q+1,{n_{\rm
max}}}\right)}_{\textnormal{transport}}$ (8.70)
$\displaystyle\qquad+\underbrace{(\mathcal{H}+\mathcal{R}^{*})\left(w_{q+1,{n_{\rm
max}}}\cdot\nabla
v_{\ell_{q}}\right)}_{\textnormal{Nash}}+\underbrace{\sum_{n^{\prime}\leq{n_{\rm
max}}-1}\left(w_{q+1,{n_{\rm max}}}\otimes
w_{q+1,n^{\prime}}+w_{q+1,n^{\prime}}\otimes w_{q+1,{n_{\rm
max}}}\right)}_{\textnormal{Type 2}}\bigg{)}$ (8.71)
$\displaystyle\qquad+\mathrm{div\,}\bigg{(}\underbrace{\mathcal{O}_{{n_{\rm
max}},1,2}+\mathcal{O}_{{n_{\rm max}},1,3}}_{\textnormal{divergence
corrector}}+\underbrace{\mathcal{O}_{{n_{\rm max}},2}}_{\textnormal{Type
2}}\bigg{)}+\nabla\pi$ (8.72)
$\displaystyle+\mathrm{div\,}\underbrace{\left(\mathcal{H}+\mathcal{R}^{*}\right)\bigg{(}\sum_{\xi,i,j,k,p,\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,{n_{\rm
max}}}}\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}(\mathbb{W}\otimes\mathbb{W})_{\xi,q+1,{n_{\rm
max}}}(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}}_{\textnormal{Type 1}}$ (8.73)
$\displaystyle\qquad\qquad+\underbrace{\sum_{\xi,i,j,k,p,\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{\geq\lambda_{q,{n_{\rm
max}}}}\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}(\mathbb{W}^{\theta}\mathbb{W}^{\gamma})_{\xi,q+1,{n_{\rm
max}}}(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\bigg{)}}_{\textnormal{Type
1}}$ (8.74) $\displaystyle=\mathrm{div\,}(\mathring{R}_{q+1})+\nabla\pi,$
concluding the proof after again noting that
$\left(\partial_{t}+v_{\ell_{q}}\cdot\nabla\right)w_{q+1,{n_{\rm max}}}$ has
zero mean.
### 8.4 Transport errors
###### Lemma 8.4.
For all $0\leq{\widetilde{n}}\leq{n_{\rm max}}$, the transport errors satisfy
$D_{t,q}w_{q+1,{\widetilde{n}}}=\partial_{t}w_{q+1,{\widetilde{n}}}+v_{\ell_{q}}\cdot\nabla
w_{q+1,{\widetilde{n}}}=\mathrm{div\,}\left(\mathcal{H}+\mathcal{R}^{*}\right)\left(\partial_{t}w_{q+1,{\widetilde{n}}}+v_{\ell_{q}}\cdot\nabla
w_{q+1,{\widetilde{n}}}\right)+\nabla p_{\widetilde{n}}$
with the estimates
$\displaystyle\left\|\psi_{i,q}D^{N}D_{t,q}^{M}\left(\left(\mathcal{H}+\mathcal{R}^{*}\right)\left(\partial_{t}w_{q+1,{\widetilde{n}}}+v_{\ell_{q}}\cdot\nabla
w_{q+1,{\widetilde{n}}}\right)\right)\right\|_{L^{1}}$
$\displaystyle\qquad\qquad\lesssim\delta_{q+2}\Gamma_{q+1}^{{-\mathsf{C_{R}}}-1}\lambda_{q+1}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i+1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
for all $N,M\leq 3\mathsf{N}_{\textnormal{ind,v}}$.
###### Proof of Lemma 8.4.
The transport errors are given in (8.38), (8.54), and (8.70). Writing out the
transport error, we have that
$\displaystyle\left(\partial_{t}+v_{\ell_{q}}\cdot\nabla\right)w_{q+1,{\widetilde{n}}}$
$\displaystyle=\left(\partial_{t}+v_{\ell_{q}}\cdot\nabla\right)\left(\sum_{i,j,k,{\widetilde{p}},\vec{l},\xi}\mathrm{curl\,}\left(a_{\xi,i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\right)$
$\displaystyle=\sum_{i,j,k,{\widetilde{p}},\vec{l},\xi}\left(\partial_{t}+v_{\ell_{q}}\cdot\nabla\right)\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\right)\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}$
$\displaystyle\qquad+\sum_{i,j,k,{\widetilde{p}},\vec{l},\xi}\left(\left(\partial_{t}+v_{\ell_{q}}\cdot\nabla\right)\nabla
a_{(\xi)}\right)\times\left(\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)$
$\displaystyle\qquad+\sum_{i,j,k,{\widetilde{p}},\vec{l},\xi}\nabla
a_{(\xi)}\times\left(\left(\left(\partial_{t}+v_{\ell_{q}}\cdot\nabla\right)\nabla\Phi_{(i,k)}^{T}\right)\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right).$
(8.75)
Because the last two terms arise from the addition of the corrector defined in
(8.5) and (8.12), and because the bounds for the corrector in (8.17) are
stronger than those for the principal part of the perturbation, we shall
completely estimate only the first term and simply indicate the set-up for the
second and third. Before applying Proposition A.18, recall that the inverse
divergence of (8.75) needs to be estimated on the support of a cutoff
$\psi_{i,q}$ in order to verify (7.13), (7.20), and (7.27). Recall from the
identification of the error terms (cf. (8.36) and
the subsequent argument) that for all ${\widetilde{n}}$,
$\left(\partial_{t}+v_{\ell_{q}}\cdot\nabla\right)w_{q+1,{\widetilde{n}}}$ has
zero mean. Thus, although each individual term in the final equality in (8.75)
may not have zero mean, we can safely apply $\mathcal{H}$ and
$\mathcal{R}^{*}$ to each term and estimate the outputs while ignoring the
last term in (A.78).
We will apply Proposition A.18, specifically Remark A.19, to each summand in
the first term on the right side of (8.75), with the following choices. We set
$v=v_{\ell_{q}}$, and $D_{t}=D_{t,q}=\partial_{t}+v_{\ell_{q}}\cdot\nabla$ as
usual. We set
$N_{*}=M_{*}=\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5\right)\rfloor$, with ${\mathsf{N}_{\rm dec}}$
and ${\mathsf{d}}$ satisfying (9.60a). We define
$G=(\partial_{t}+v_{\ell_{q}}\cdot\nabla)(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1})\xi,$
with $\lambda=\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}$,
$\nu=\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3}$,
$M_{t}=\mathsf{N}_{\textnormal{ind,t}}$,
$\widetilde{\nu}=\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}$, and
$\mathcal{C}_{G}=\bigl{|}\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}\delta_{q+1,{\widetilde{n}},1}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+j+5}\tau_{q}^{-1}\,,$
which is the correct amplitude in view of (8.14) with $r=1$ and
$r_{1}=r_{2}=2$, and (6.114). Thus, we have that
$\displaystyle\left\|D^{N}D_{t,q}^{M}G\right\|_{L^{1}}\lesssim\mathcal{C}_{G}\left(\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\Gamma_{q+1}\right)^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}}-1,\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right),$
(8.76)
for all
$N,M\leq\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5\right)\rfloor$ after using (9.42) and (9.52),
and so (A.66) is satisfied. We set $\Phi=\Phi_{(i,k)}$ and
$\lambda^{\prime}=\widetilde{\lambda}_{q}$. Appealing as usual to Corollary
6.27 and (6.60), we have that (A.67) and (A.68) are satisfied.
Referring to (1) from Proposition 4.4, we set
$\varrho=\varrho_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}$ and
$\vartheta=\vartheta_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}$. Setting
$\zeta=\lambda_{q+1}$, we have that (i) is satisfied. Setting
$\mu=\lambda_{q+1}r_{q+1,{\widetilde{n}}}=\lambda_{q,{\widetilde{n}}}$ and
referring to (2) from Proposition 4.4, we have that (ii) is satisfied. Setting
$\Lambda=\zeta=\lambda_{q+1}$ and $C_{*}=r_{q+1,{\widetilde{n}}}$ and
referring to (4.11) and (4.12) from Proposition 4.4, we have that (A.69) is
satisfied. (A.70) is immediate from the definitions. Referring to (9.48), we
have that (A.71) is satisfied. Thus, we conclude from (A.73) with
$\alpha_{\mathsf{R}}$ as in (9.53), that for
$N,M\leq\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5\right)\rfloor-{\mathsf{d}}$,
$\displaystyle\left\|D^{N}D_{t,q}^{M}\left(\mathcal{H}\left((\partial_{t}+v_{\ell_{q}}\cdot\nabla)(a_{(\xi)}\nabla\Phi_{(i,k)}^{-1})\xi\right)\right)\right\|_{L^{1}}=\left\|D^{N}D_{t,q}^{M}\left(\mathcal{H}\left(G\varrho\circ\Phi\right)\right)\right\|_{L^{1}}$
$\displaystyle\qquad\lesssim\bigl{|}\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}\delta_{q+1,{\widetilde{n}},1}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+j+6}\tau_{q}^{-1}r_{q+1,{\widetilde{n}}}\lambda_{q+1}^{-1}\lambda_{q+1}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\,,$
after appealing to (9.42). From (9.60c), these bounds are valid for all
$N,M\leq 3\mathsf{N}_{\textnormal{ind,v}}$. The bound obtained above is next
summed over $(i,j,k,{\widetilde{p}},{\widetilde{n}},\vec{l})$. First, we treat
the sum over $\vec{l}$. By noting that (6.147) with $r_{1}=2$ and $r_{2}=2$,
and (9.42) imply
$\sum_{\vec{l}}\bigl{|}\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+j+6}\leq\Gamma_{q+1}^{-2\left(\frac{i}{2}+\frac{j}{2}\right)+\frac{{\mathsf{C}_{b}}}{2}+2}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+j+6}=\Gamma_{q+1}^{\frac{{\mathsf{C}_{b}}}{2}+3}\,,$
we conclude that
$\displaystyle\left\|D^{N}D_{t,q}^{M}\left(\mathcal{H}\left(\partial_{t}w_{q+1,{\widetilde{n}}}+v_{\ell_{q}}\cdot\nabla
w_{q+1,{\widetilde{n}}}\right)\right)\right\|_{L^{1}\left(\mathrm{supp\,}\psi_{i,q}\right)}$
$\displaystyle\qquad\lesssim\sum_{i^{\prime}=i-1}^{i+1}\sum_{j,k,{\widetilde{p}},\xi}\Gamma_{q+1}^{\frac{{\mathsf{C}_{b}}}{2}+3}\delta_{q+1,{\widetilde{n}},1}^{\nicefrac{{1}}{{2}}}\tau_{q}^{-1}r_{q+1,{\widetilde{n}}}\lambda_{q+1}^{-1}\lambda_{q+1}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i^{\prime}},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
$\displaystyle\qquad\lesssim\Gamma_{q+1}^{4+\frac{{\mathsf{C}_{b}}}{2}}\delta_{q+1,{\widetilde{n}},1}^{\frac{1}{2}}\tau_{q}^{-1}r_{q+1,{\widetilde{n}}}\lambda_{q+1}^{-1}\lambda_{q+1}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i+1},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
$\displaystyle\qquad\lesssim\Gamma_{q+1}^{{-\mathsf{C_{R}}}-1}\delta_{q+2}\lambda_{q+1}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i+1},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
(8.77)
after also using (9.57).
To finish the proof for the first term in (8.75), we must provide a matching
estimate for the $\mathcal{R}^{*}$ portion. Following again the parameter
choices in Remark A.19, we set
$N_{\circ}=M_{\circ}=3\mathsf{N}_{\textnormal{ind,v}}$. As in the argument
from Lemma 8.6, we have that (A.75), (A.76), and (A.77) are satisfied, this
time with $\zeta=\lambda_{q+1}$. Thus we achieve the estimate in (A.79).
Summing over $\vec{l}$ loses a factor less than $\lambda_{q+1}^{3}$, while
summing over the other indices costs a constant independent of $q$. This
completes the estimate for the first term from (8.75).
For the second and third terms, we explain how to identify $G$ and $\varrho$
in order to give an idea of how to obtain similar estimates. Using (1) from
Proposition 4.4 and the vector calculus identity
$\mathrm{curl\,}\mathrm{curl\,}=\nabla\mathrm{div\,}-\Delta$, we obtain that
$\mathbb{U}_{\xi,q+1,{\widetilde{n}}}=\mathrm{curl\,}\left(\xi\lambda_{q+1}^{-2{\mathsf{d}}}\Delta^{{\mathsf{d}}-1}\left(\vartheta_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}\right)\right)=\lambda_{q+1}^{-2{\mathsf{d}}}\xi\times\nabla\left(\Delta^{{\mathsf{d}}-1}\left(\vartheta_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}\right)\right).$
(8.78)
With a little massaging, one can now rewrite the second and third terms in
(8.75) in the form $G\varrho\circ\Phi_{(i,k)}$. Since both terms have traded a
spatial derivative on $\mathbb{U}_{\xi,q+1,{\widetilde{n}}}$ for a spatial
derivative on $a_{(\xi)}$, inducing a gain, one can easily show that the
estimates for these terms will be even stronger than those for the first term.
Notice that we have set
$N_{*}=M_{*}=\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-7\right)\rfloor$ since we have lost a spatial
derivative on $a_{(\xi)}$. We omit the rest of the details. ∎
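As a heuristic (not part of the formal argument), the gain from trading a derivative can be quantified by comparing frequencies: a spatial derivative landing on $a_{(\xi)}$ costs at most $\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}$, whereas a derivative on the pipe flow costs $\lambda_{q+1}$, so that in terms of size $\mathbb{U}_{\xi,q+1,{\widetilde{n}}}$ is roughly $\lambda_{q+1}^{-1}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}$. The corrector terms are therefore smaller than the principal term by a factor of roughly

```latex
\frac{\|\nabla a_{(\xi)}\|\,\|\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\|}
     {\|a_{(\xi)}\|\,\|\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\|}
  \approx\frac{\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}}{\lambda_{q+1}}\ll 1\,,
```

which is consistent with the claim that the estimates for the second and third terms are stronger than those for the first.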
### 8.5 Nash errors
###### Lemma 8.5.
For all $0\leq{\widetilde{n}}\leq{n_{\rm max}}$, the Nash errors satisfy
$w_{q+1,{\widetilde{n}}}\cdot\nabla
v_{\ell_{q}}=\mathrm{div\,}\left(\left(\mathcal{H}+\mathcal{R}^{*}\right)\left(w_{q+1,{\widetilde{n}}}\cdot\nabla
v_{\ell_{q}}\right)\right)+\nabla p_{\widetilde{n}}$
with
$\displaystyle\left\|\psi_{i,q}D^{N}D_{t,q}^{M}\left(\left(\mathcal{H}+\mathcal{R}^{*}\right)\left(w_{q+1,{\widetilde{n}}}\cdot\nabla
v_{\ell_{q}}\right)\right)\right\|_{L^{1}}\lesssim\delta_{q+2}\Gamma_{q+1}^{{-\mathsf{C_{R}}}-1}\lambda_{q+1}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i+1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
for all $N,M\leq 3\mathsf{N}_{\textnormal{ind,v}}$.
###### Proof of Lemma 8.5.
The estimates are similar to those in Lemma 8.4. Writing out the Nash error,
we have that
$\displaystyle w_{q+1,{\widetilde{n}}}\cdot\nabla v_{\ell_{q}}$
$\displaystyle=\sum_{i-1\leq i^{\prime}\leq
i+1}\sum_{j,k,{\widetilde{p}},\vec{l},\xi}\mathrm{curl\,}\left(a_{\xi,i,j,k,q,{\widetilde{n}}}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)$
$\displaystyle=\left(\sum_{i,j,k,{\widetilde{p}},\vec{l},\xi}\nabla
a_{(\xi)}\times\left(\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\right)\cdot\nabla
v_{\ell_{q}}$
$\displaystyle\qquad+\left(\sum_{i,j,k,{\widetilde{p}},\vec{l},\xi}a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\cdot\nabla
v_{\ell_{q}}.$ (8.79)
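For the reader's convenience, the algebra behind this splitting is the scalar product rule for the curl combined with the standard pullback identity for curls under a volume-preserving flow map. Schematically (a sketch, with $\Phi$ satisfying $\det\nabla\Phi=1$, and suppressing subscripts),

```latex
\mathrm{curl\,}\bigl(a\,\nabla\Phi^{T}(\mathbb{U}\circ\Phi)\bigr)
  =\nabla a\times\bigl(\nabla\Phi^{T}(\mathbb{U}\circ\Phi)\bigr)
   +a\,\nabla\Phi^{-1}\bigl((\mathrm{curl\,}\mathbb{U})\circ\Phi\bigr)\,.
```

Applied with $a=a_{(\xi)}$, $\Phi=\Phi_{(i,k)}$, and $\mathbb{U}=\mathbb{U}_{\xi,q+1,{\widetilde{n}}}$, this produces the two sums in (8.79), with $\mathbb{W}_{\xi,q+1,{\widetilde{n}}}$ playing the role of $\mathrm{curl\,}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}$.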
Since the first term arises from the addition of the corrector defined in
(8.5) and (8.12), and since the bounds for the corrector in (8.17) are
stronger than those for the principal part of the perturbation, we
estimate only the second term in full and simply indicate the set-up
for the first. Before applying Proposition A.18, recall that the inverse
divergence of (8.75) needs to be estimated on the support of a cutoff
$\psi_{i,q}$ in order to verify (7.5), (7.13), and (7.20). Note that since
$w_{q+1,{\widetilde{n}}}$ is divergence-free, the Nash error can be written as
$\mathrm{div\,}\left(v_{\ell_{q}}\otimes w_{q+1,{\widetilde{n}}}\right)$ and so
has zero mean. Thus, although each individual term
in the final equality in (8.79) may not have zero mean, we can safely apply
$\mathcal{H}$ and $\mathcal{R}^{*}$ to each term and estimate the outputs
while ignoring the last term in (A.78).
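For completeness, the zero-mean claim is a one-line computation using only that $\mathrm{div\,}w_{q+1,{\widetilde{n}}}=0$ (repeated indices summed):

```latex
\left(w_{q+1,{\widetilde{n}}}\cdot\nabla v_{\ell_{q}}\right)^{i}
  =w_{q+1,{\widetilde{n}}}^{j}\,\partial_{j}v_{\ell_{q}}^{i}
  =\partial_{j}\bigl(v_{\ell_{q}}^{i}\,w_{q+1,{\widetilde{n}}}^{j}\bigr)
  =\Bigl(\mathrm{div\,}\bigl(v_{\ell_{q}}\otimes w_{q+1,{\widetilde{n}}}\bigr)\Bigr)^{i}\,,
```

so that integrating over $\mathbb{T}^{3}$ indeed gives zero mean.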
We will apply Proposition A.18 to the second term with the following choices.
We set $v=v_{\ell_{q}}$, and
$D_{t}=D_{t,q}=\partial_{t}+v_{\ell_{q}}\cdot\nabla$ as usual. We set
$N_{*}=M_{*}=\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,x}-\mathsf{N}_{\rm cut,t}-4\right)\rfloor$, with ${\mathsf{N}_{\rm dec}}$
and ${\mathsf{d}}$ satisfying (9.60a). We define
$G=a_{(\xi)}\nabla\Phi_{(i,k)}^{-1}\xi\cdot\nabla v_{\ell_{q}}$
and set
$\mathcal{C}_{G}=\bigl{|}\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}\delta_{q+1,{\widetilde{n}},1}^{\nicefrac{{1}}{{2}}}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+j+5}\tau_{q}^{-1}\,,$
$\lambda=\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}$,
$\nu=\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3}$,
$M_{t}=\mathsf{N}_{\textnormal{ind,t}}$, and
$\widetilde{\nu}=\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}$. From (8.14) with
$r=1$ and $r_{1}=r_{2}=2$, (6.114), and (6.60), we have that for
$N,M\leq\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,x}-\mathsf{N}_{\rm cut,t}-4\right)\rfloor$
$\displaystyle\left\|D^{N}D_{t,q}^{M}G\right\|_{L^{1}}\lesssim\mathcal{C}_{G}\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right),$
(8.80)
and so (A.66) is satisfied. Note that we have used (9.39) when converting the
$\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}$ to a
$\tau_{q}^{-1}$. Setting $\Phi=\Phi_{(i,k)}$ and
$\lambda^{\prime}=\widetilde{\lambda}_{q}$, we have that (A.67) and (A.68) are
satisfied as usual. The choices of $\varrho$, $\vartheta$, $\zeta$, $\mu$,
$\Lambda$, and $\mathcal{C}_{*}$ are identical to those of the transport error
(both terms contain $\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}$),
and so we have that (i)-(ii), (A.69), (A.70), and (A.71) are satisfied as
well. Since the bound (8.80) is identical to that of (8.76), we obtain an
estimate identical to (8.77). The argument for the $\mathcal{R}^{*}$ portion
follows analogously to that for the first term from the transport error.
Finally, after using (8.78) again, one may obtain similar estimates for the
first term in (8.79), concluding the proof. ∎
### 8.6 Type 1 oscillation errors
The Type 1 oscillation errors are defined in the three parameter regimes
${\widetilde{n}}=0$, $1\leq{\widetilde{n}}\leq{n_{\rm max}}-1$, and
${\widetilde{n}}={n_{\rm max}}$. In the case ${\widetilde{n}}=0$, Type 1
oscillation errors stem from the term identified in (8.37), which we recall is
$\displaystyle(\mathcal{H}+\mathcal{R}^{*})\bigg{(}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,0}}\Bigl{(}\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\Bigr{)}$
$\displaystyle\quad\qquad\qquad\qquad\times(\mathbb{W}_{\xi,q+1,0}\otimes\mathbb{W}_{\xi,q+1,0})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\qquad\qquad+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{\geq\lambda_{q,0}}\Bigl{(}\sum\limits_{n=1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\Bigr{)}$
$\displaystyle\quad\qquad\qquad\qquad\times(\mathbb{W}_{\xi,q+1,0}^{\theta}\mathbb{W}_{\xi,q+1,0}^{\gamma})(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\bigg{)}.$
(8.81)
This sum is divided into the terms identified in (8.39), (8.40), (8.41),
(8.42), (8.44), and (8.45). The errors defined in (8.44) and (8.45) are
$\mathring{H}_{q,n,p}^{0}$ errors and will be corrected by later perturbations
$w_{q+1,n,p}$, while the others will be immediately absorbed into
$\mathring{R}_{q+1}^{0}$.
In the case $1\leq{\widetilde{n}}\leq{n_{\rm max}}-1$, Type 1 oscillation
errors stem from the term identified in (8.53)
$\displaystyle\left(\mathcal{H}+\mathcal{R}^{*}\right)\Biggl{(}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\Bigl{(}\sum\limits_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\Bigr{)}$
$\displaystyle\quad\qquad\qquad\qquad\times(\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\otimes\mathbb{W}_{\xi,q+1,{\widetilde{n}}})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\qquad\qquad+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\Bigl{(}\sum\limits_{n={\widetilde{n}}+1}^{{n_{\rm
max}}}\sum\limits_{p=1}^{p_{\rm
max}}\mathbb{P}_{[q,n,p]}+\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\Bigr{)}$
$\displaystyle\quad\qquad\qquad\qquad\times(\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{\theta}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{\gamma})(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\Biggr{)}.$
(8.82)
This sum is divided into the terms identified in (8.57), (8.58), (8.59),
(8.60), (8.61), and (8.62). As before, the last two terms are
$\mathring{H}_{q,n,p}^{{\widetilde{n}}}$ errors and will be corrected by later
perturbations, while the others are absorbed into
$\mathring{R}_{q+1}^{\widetilde{n}}$.
In the case ${\widetilde{n}}={n_{\rm max}}$, Type 1 oscillation errors are
identified in (8.73) and (8.74) as
$\displaystyle{\left(\mathcal{H}+\mathcal{R}^{*}\right)\Bigg{(}\sum_{\xi,i,j,k,p,\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,{n_{\rm
max}}}}\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}(\mathbb{W}_{\xi,q+1,{n_{\rm
max}}}\otimes\mathbb{W}_{\xi,q+1,{n_{\rm
max}}})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}}$
$\displaystyle\quad+{\sum_{\xi,i,j,k,p,\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{\geq\lambda_{q,{n_{\rm
max}}}}\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}(\mathbb{W}_{\xi,q+1,{n_{\rm
max}}}^{\theta}\mathbb{W}_{\xi,q+1,{n_{\rm
max}}}^{\gamma})(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\Bigg{)}}.$
(8.83)
These errors are completely absorbed into $\mathring{R}_{q+1}$.
To prove the desired estimates on these error terms, we will first analyze a
single term of the form
$\displaystyle(\mathcal{H}+\mathcal{R}^{*})\Bigg{(}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\mathbb{P}_{[q,n,p]}(\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\otimes\mathbb{W}_{\xi,q+1,{\widetilde{n}}})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\qquad\qquad\qquad+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\alpha\theta}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\mathbb{P}_{[q,n,p]}(\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{\theta}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{\gamma})(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\zeta\gamma}\Bigg{)}$
$\displaystyle\qquad=:\left(\mathcal{H}+\mathcal{R}^{*}\right)\mathcal{O}_{{\widetilde{n}},{\widetilde{p}},n,p}\,.$
(8.84)
The estimates in Lemma 8.6 for this term on the support of a cutoff function
$\psi_{i,q}$ will depend on ${\widetilde{n}}$ and ${\widetilde{p}}$, which
range from $0\leq{\widetilde{n}}\leq{n_{\rm max}}$ and
$1\leq{\widetilde{p}}\leq{p_{\rm max}}$, respectively, and $n$ and $p$, which
range from ${\widetilde{n}}+1\leq n\leq{n_{\rm max}}$ and $1\leq p\leq{p_{\rm
max}}$, with the additional endpoint case $n={n_{\rm max}}$, $p={p_{\rm
max}}+1$. We then use this general estimate to specify in Remark 8.7 how the
terms corresponding to various values of $n$, ${\widetilde{n}}$, $p$, and
${\widetilde{p}}$ are absorbed into either higher order stresses
$\mathring{H}_{q,n,p}^{\widetilde{n}}$ or
$\mathring{R}_{q+1}^{\widetilde{n}}$, and eventually $\mathring{R}_{q+1}$.
###### Lemma 8.6.
The terms $\mathcal{O}_{{\widetilde{n}},{\widetilde{p}},n,p}$ defined in
(8.84) satisfy the following.
1. (1)
For the special case $n={n_{\rm max}}$, $p={p_{\rm max}}+1$ and all
$0\leq{\widetilde{n}}\leq{n_{\rm max}}$, $1\leq{\widetilde{p}}\leq{p_{\rm
max}}$, as well as for all cases $0\leq{\widetilde{n}}<n\leq{n_{\rm max}}$,
$1\leq p,{\widetilde{p}}\leq{p_{\rm max}}$, the nonlocal portion of the
inverse divergence satisfies
$\left\|D^{N}D_{t,q}^{M}\left(\mathcal{R}^{*}\mathcal{O}_{{\widetilde{n}},{\widetilde{p}},n,p}\right)\right\|_{L^{1}\left(\mathbb{T}^{3}\right)}\leq\frac{\delta_{q+2}}{\lambda_{q+1}}\lambda_{q+1}^{N}\tau_{q}^{-M}\,$
(8.85)
for all $N,M\leq 3\mathsf{N}_{\textnormal{ind,v}}$.
2. (2)
For $n={n_{\rm max}}$, $p={p_{\rm max}}+1$, all
$0\leq{\widetilde{n}}\leq{n_{\rm max}}$ and $1\leq{\widetilde{p}}\leq{p_{\rm
max}}$, the high frequency, local portion of the inverse divergence satisfies
$\displaystyle\left\|D^{N}D_{t,q}^{M}\left(\mathcal{H}\mathcal{O}_{{\widetilde{n}},{\widetilde{p}},{n_{\rm
max}},{p_{\rm
max}}+1}\right)\right\|_{L^{1}\left(\mathrm{supp\,}\psi_{i,q}\right)}$
$\displaystyle\qquad\qquad\lesssim\Gamma_{q+1}^{-\mathsf{C_{R}}}\Gamma_{q+1}^{-1}\delta_{q+2}\lambda_{q+1}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+4},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(8.86)
for all $N,M\leq 3\mathsf{N}_{\textnormal{ind,v}}$.
3. (3)
For $0\leq{\widetilde{n}}<n\leq{n_{\rm max}}$ and $1\leq
p,{\widetilde{p}}\leq{p_{\rm max}}$, the medium frequency, local portion of
the inverse divergence satisfies
$\displaystyle\left\|D^{N}D_{t,q}^{M}\left(\mathcal{H}\mathcal{O}_{{\widetilde{n}},{\widetilde{p}},n,p}\right)\right\|_{L^{1}\left(\mathrm{supp\,}\psi_{i,q}\right)}$
$\displaystyle\lesssim\delta_{q+1,n,p}\lambda_{q,n,p}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+4},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
(8.87)
for all $N+M\leq\mathsf{N}_{\rm fin,n}$.
###### Remark 8.7.
Note that after appealing to ${\widetilde{n}}\leq n-1$, (9.35), and (9.42),
(8.87) matches (7.15), (7.22), and (7.29), or equivalently (6.118). In
addition, after appealing again to ${\widetilde{n}}\leq n-1$, (9.35), and
(9.42), (8.85) and (8.86) are sufficient to meet (7.13), (7.20), and (7.27).
###### Proof of Lemma 8.6.
The first step is to use item (1) and (4.15) from Proposition 4.4 to rewrite
(8.84) as
$\displaystyle(\mathcal{H}+\mathcal{R}^{*})\Bigg{(}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\nabla
a_{(\xi)}^{2}\nabla\Phi_{(i,k)}^{-1}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\mathbb{P}_{[q,n,p]}(\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\otimes\mathbb{W}_{\xi,q+1,{\widetilde{n}}})(\Phi_{(i,k)})\nabla\Phi_{(i,k)}^{-T}$
$\displaystyle\qquad+\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}a_{(\xi)}^{2}(\nabla\Phi_{(i,k)}^{-1})_{\theta\alpha}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\mathbb{P}_{[q,n,p]}(\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{\theta}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{\gamma})(\Phi_{(i,k)})\partial_{\alpha}(\nabla\Phi_{(i,k)}^{-1})_{\gamma\kappa}\Bigg{)}$
$\displaystyle=\left(\mathcal{H}+\mathcal{R}^{*}\right)\Bigg{(}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\mathbb{P}_{[q,n,p]}\left(\left(\varrho_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}\right)^{2}\right)(\Phi_{(i,k)})$
$\displaystyle\qquad\times\bigg{(}\partial_{\alpha}a_{(\xi)}^{2}\left(\nabla\Phi_{(i,k)}^{-1}\right)_{\gamma\kappa}\xi^{\theta}\xi^{\gamma}\left(\nabla\Phi_{(i,k)}^{-T}\right)_{\theta\alpha}+a_{(\xi)}^{2}\left(\nabla\Phi_{(i,k)}^{-1}\right)_{\theta\alpha}\xi^{\theta}\xi^{\gamma}\partial_{\alpha}\left(\nabla\Phi_{(i,k)}^{-1}\right)_{\gamma\kappa}\bigg{)}\Bigg{)}.$
(8.88)
Next, we must identify the functions and the values of the parameters which
will be used in the application of Proposition A.18, specifically Remark A.19.
We first address the bounds required in (A.66), (A.67), and (A.68), which we
can treat simultaneously for items (1), (2), and (3). Afterwards, we split the
proof into two parts. First, we set $n={n_{\rm max}}$, $p={p_{\rm max}}+1$ and
prove (8.85) for _only_ these specific values of $n$ and $p$, as we
simultaneously prove (8.86). Next, we consider $n<{n_{\rm max}}$ and prove
(8.85) in the remaining cases, as we simultaneously prove (8.87).
Returning to (A.66), we will verify that this inequality holds with
$v=v_{\ell_{q}}$, $D_{t}=D_{t,q}=\partial_{t}+v_{\ell_{q}}\cdot\nabla$, and
$\displaystyle N_{*}=M_{*}=\lfloor\nicefrac{{N^{\sharp}}}{{2}}\rfloor$, where
$N^{\sharp}=\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5$. In order to verify the assumption
$N_{*}-{\mathsf{d}}\geq 2{\mathsf{N}_{\rm dec}}+4$, we use that
${\mathsf{N}_{\rm dec}}$ and ${\mathsf{d}}$ satisfy (9.60a), which gives that
$2{\mathsf{N}_{\rm
dec}}+4\leq\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5\right)-{\mathsf{d}}\rfloor\,.$ (8.89)
Denoting the $\kappa^{\textnormal{th}}$ component of the vector field $G$
below by $G_{\kappa}$, we _fix_ a value of $(\xi,i,j,k,{\widetilde{p}},\vec{l})$
and set
$\displaystyle
G_{\kappa}=\partial_{\alpha}a_{(\xi)}^{2}\left(\nabla\Phi_{(i,k)}^{-1}\right)_{\gamma\kappa}\xi^{\theta}\xi^{\gamma}\left(\nabla\Phi_{(i,k)}^{-T}\right)_{\alpha\theta}+a_{(\xi)}^{2}\left(\nabla\Phi_{(i,k)}^{-1}\right)_{\alpha\theta}\xi^{\theta}\xi^{\gamma}\partial_{\alpha}\left(\nabla\Phi_{(i,k)}^{-1}\right)_{\kappa\gamma}\,.$
(8.90)
We now establish (A.66)–(A.68) with the parameter choices
$\displaystyle\mathcal{C}_{G}=\bigl{|}\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}\Gamma^{2j-3-{\mathsf{C}_{b}}}_{q+1}\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}\widetilde{\lambda}_{q}\prod_{n^{\prime}\leq{\widetilde{n}}}\left(f_{q,n^{\prime}}\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\right)\,,$
(8.91)
$\lambda=\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\Gamma_{q+1}$,
$M_{t}=\mathsf{N}_{\textnormal{ind,t}}$,
$\nu=\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+4}$,
$\widetilde{\nu}=\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}$, and
$\lambda^{\prime}=\widetilde{\lambda}_{q}$. Applying Lemma 8.1 and estimate
(8.25) with $r=2$, $r_{2}=1$, $r_{1}=\infty$, and the bounds (6.113) and
(6.114), we see that
$\displaystyle\left\|D^{N}D_{t,q}^{M}\Bigl{(}\partial_{\alpha}a_{(\xi)}^{2}\left(\nabla\Phi_{(i,k)}^{-1}\right)_{\gamma\kappa}\xi^{\theta}\xi^{\gamma}\left(\nabla\Phi_{(i,k)}^{-T}\right)_{\alpha\theta}\Bigr{)}\right\|_{L^{1}}$
$\displaystyle\lesssim|\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}\Gamma^{2j+5}_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}})^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
$\displaystyle\lesssim|\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}\Gamma^{2j-2-{\mathsf{C}_{b}}}_{q+1}$
$\displaystyle\qquad\times\Gamma_{q+1}^{-1}\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}\widetilde{\lambda}_{q}\prod_{n^{\prime}\leq{\widetilde{n}}}\left(f_{q,n^{\prime}}\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\right)\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
(8.92)
holds for all
$N,M\leq\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5\right)\rfloor$. To achieve the last
inequality, we have used the definition of
$\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}$ in (9.34) and the definition of
$f_{q,{\widetilde{n}}}$ in (9.31) to rewrite
$\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\Gamma_{q+1}^{7+{\mathsf{C}_{b}}}=\Gamma_{q+1}^{-1}\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}\widetilde{\lambda}_{q}\prod_{n^{\prime}\leq{\widetilde{n}}}\left(f_{q,n^{\prime}}\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\right)\,.$
For the second half of $G_{\kappa}$, we can appeal to (6.113) and (6.114), and
use that
$\widetilde{\lambda}_{q}\leq\lambda_{q,{\widetilde{n}},{\widetilde{p}}}$ for
all ${\widetilde{n}}$ and ${\widetilde{p}}$ to deduce that for
$N,M\leq\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5\right)\rfloor$ we have
$\displaystyle\left\|D^{N}D_{t,q}^{M}\partial_{\alpha}\left(\nabla\Phi_{i,k}^{-1}\right)_{\gamma\kappa}\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i,q}\widetilde{\chi}_{i,k,q})}\leq\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N+1}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c_{0}}},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right).$
Combining these estimates shows that
$\displaystyle\left\|D^{N}D_{t,q}^{M}G_{\kappa}\right\|_{L^{1}}\lesssim\mathcal{C}_{G}\left(\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\right)^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+3},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\,$
(8.93)
for
$N,M\leq\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5\right)\rfloor$, showing that (A.66) has been
satisfied.
We set the flow in Proposition A.18 as $\Phi=\Phi_{i,k}$, which by definition
satisfies $D_{t,q}\Phi_{i,k}=0$. Appealing to (6.109) and (6.112), we have
that (A.67) is satisfied. From (6.60), the choice of $\nu$ from earlier, and
(9.39), we have that $Dv=Dv_{\ell_{q}}$ satisfies the bound (A.68).
Proof of item (2) and of item (1) when $n={n_{\rm max}}$, $p={p_{\rm max}}+1$.
We first assume that ${\widetilde{n}}<{n_{\rm max}}$. In this case, we have
that the minimum frequency $\lambda_{q,{n_{\rm max}}+1,0}$ of
$\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm max}}+1\right]}$ is larger than the
minimum frequency $\lambda_{q,{\widetilde{n}}}$ of
$\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}$ from (9.28) and (9.22). We
therefore can discard $\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}$ from
(8.88) and, with the goal of verifying (i)–(iii) of Proposition
A.18, we set
$\displaystyle\zeta=\lambda_{q,{n_{\rm max}}+1,0},$
$\displaystyle\qquad\mu=\lambda_{q,{\widetilde{n}}},\qquad\Lambda=\lambda_{q+1},$
(8.94)
and
$\displaystyle\varrho$ $\displaystyle=\mathbb{P}_{\left[q,{n_{\rm
max}},{p_{\rm
max}}+1\right]}\left(\left(\varrho_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}\right)^{2}\right),$
(8.95a) $\displaystyle\vartheta$ $\displaystyle=\lambda_{q,{n_{\rm
max}}+1,0}^{2{\mathsf{d}}}\Delta^{-{\mathsf{d}}}\mathbb{P}_{\left[q,{n_{\rm
max}},{p_{\rm
max}}+1\right]}\left(\varrho^{2}_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}\right)\,,$
(8.95b)
where we recall that $\varrho_{\xi,\lambda,r}$ is defined via Propositions 4.3
and 4.4. We then have immediately that
$\displaystyle\varrho$ $\displaystyle=\mathbb{P}_{\left[q,{n_{\rm
max}},{p_{\rm
max}}+1\right]}\left(\left(\varrho_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}\right)^{2}\right)$
$\displaystyle=\lambda_{q,{n_{\rm
max}}+1,0}^{-2{\mathsf{d}}}\Delta^{\mathsf{d}}\lambda_{q,{n_{\rm
max}}+1,0}^{2{\mathsf{d}}}\Delta^{-{\mathsf{d}}}\left(\mathbb{P}_{\left[q,{n_{\rm
max}},{p_{\rm
max}}+1\right]}\left(\varrho^{2}_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}\right)\right)$
$\displaystyle=\lambda_{q,{n_{\rm
max}}+1,0}^{-2{\mathsf{d}}}\Delta^{\mathsf{d}}\vartheta\,,$ (8.96)
and so (i) from Proposition A.18 is satisfied. By property (1) of Proposition
4.3, we have that the functions $\varrho$ and $\vartheta$ defined in (8.95)
are both periodic to scale
$\left(\lambda_{q+1}r_{q+1,{\widetilde{n}}}\right)^{-1}=\lambda_{q,{\widetilde{n}}}^{-1}$,
and so (ii) is satisfied. The estimates in (A.69) follow with
$\mathcal{C}_{*}=1$ from standard Littlewood-Paley arguments (see also the
discussion in part (b) of Remark A.21) and item (5) from Proposition 4.4. Note
that in the case $N=2{\mathsf{d}}$ in (A.69), the inequality is weakened by a
factor of $\lambda_{q+1}^{\alpha_{\mathsf{R}}}$, for an arbitrary
$\alpha_{\mathsf{R}}>0$; thus, (iii) is satisfied. At this stage let us fix a
value for this parameter $\alpha_{\mathsf{R}}$: we choose it to be
sufficiently small (with respect to $b$ and $\varepsilon_{\Gamma}$) to ensure
that the loss $\lambda_{q+1}^{\alpha_{\mathsf{R}}}$ may be absorbed by the
spare negative factor of $\Gamma_{q+1}$ in the definition of
$\mathcal{C}_{G}$, as is postulated in (9.53). From (9.19), (9.22), (9.26),
and (9.29), we have that
$\widetilde{\lambda}_{q}\leq\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\ll\lambda_{q,{\widetilde{n}}}\leq\lambda_{q,{n_{\rm
max}}+1,0}\leq\lambda_{q+1},$
and so (A.70) is satisfied. From (9.48) we have that
$\lambda_{q+1}^{4}\leq\left(\frac{\lambda_{q,{\widetilde{n}}}}{2\pi\sqrt{3}\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}}\right)^{\mathsf{N}_{\rm
dec}}$
if ${\mathsf{N}_{\rm dec}}$ is chosen large enough, and so (A.71) is
satisfied. Applying the estimate (A.73) with $\alpha$ as in (9.53), recalling
the value for $\mathcal{C}_{G}$ in (8.91), using (6.19) and (6.147) with
$r_{1}=\infty$ and $r_{2}=1$, we obtain that
$\displaystyle\left\|D^{N}D_{t,q}^{M}\left(\mathcal{H}\mathcal{O}_{{\widetilde{n}},{\widetilde{p}},{n_{\rm
max}},{p_{\rm
max}}+1}\right)\right\|_{L^{1}\left(\mathrm{supp\,}\psi_{i,q}\right)}$
$\displaystyle\quad\lesssim\sum_{i^{\prime}=i-1}^{i+1}\sum_{\xi,j,k,\vec{l}}\Lambda^{\alpha_{\mathsf{R}}}|\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}\Gamma^{2j-3-{\mathsf{C}_{b}}}_{q+1}\Gamma_{q}^{{-\mathsf{C_{R}}}}$
$\displaystyle\qquad\qquad\qquad\times\delta_{q+1}\widetilde{\lambda}_{q}\prod_{n^{\prime}\leq{\widetilde{n}}}\left(f_{q,n^{\prime}}\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\right)\mathcal{C}_{*}\zeta^{-1}\mathcal{M}\left(N,1,\zeta,\Lambda\right)\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)$
$\displaystyle\quad\lesssim\Gamma_{q+1}\Big{(}\Gamma_{q+1}^{-1}\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}\widetilde{\lambda}_{q}\prod_{n^{\prime}\leq{\widetilde{n}}}\big{(}f_{q,n^{\prime}}\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\big{)}\Big{)}\lambda_{q,{n_{\rm
max}}+1,0}^{-1}\lambda_{q+1}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+4},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
$\displaystyle\quad\lesssim\Gamma_{q+1}^{-\mathsf{C_{R}}}\Gamma_{q+1}^{-1}\delta_{q+2}\lambda_{q+1}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+4},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\,,$
(8.97)
for
$N,M\leq\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5\right)\rfloor-{\mathsf{d}}$. In the last
inequality, we have used the parameter estimate (9.54), which directly implies
$\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}\widetilde{\lambda}_{q}\prod_{n^{\prime}\leq{\widetilde{n}}}\left(f_{q,n^{\prime}}\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\right)\lambda_{q,{n_{\rm
max}}+1,0}^{-1}\leq\Gamma_{q+1}^{-\mathsf{C_{R}}}\Gamma_{q+1}^{-1}\delta_{q+2}\,.$
(8.98)
Then, after using (9.60c), which gives that for all ${\widetilde{n}}$ we have
$\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5\right)\rfloor-{\mathsf{d}}\geq
3\mathsf{N}_{\textnormal{ind,v}},$ (8.99)
we see that the range of derivatives allowed in (8.97) is exactly as needed in
(8.86), thereby proving this bound.
Continuing to follow the parameter choices in Remark A.19, we set
$N_{\circ}=M_{\circ}=3\mathsf{N}_{\textnormal{ind,v}}$, and as before
$N^{\sharp}=\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5$. From (9.60d), we have that the condition
$N_{\circ}\leq\nicefrac{{N^{\sharp}}}{{4}}$ is satisfied. The inequalities
(A.75) and (A.76) follow from the discussion in Remark A.19. The inequality in
(A.77) follows from (9.43), (9.55), the fact that
$\lambda=\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\leq\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{p_{\rm
max}}}$, and $\zeta=\lambda_{q,{n_{\rm max}}+1,0}>\lambda_{q,{n_{\rm
max}}-1}\geq\lambda_{q,{\widetilde{n}}}$, as in the discussion in Remark A.19.
Having satisfied these assumptions, we may now appeal to the estimate in (A.79),
which gives (8.85) for the case ${\widetilde{n}}<n={n_{\rm max}}$, $p={p_{\rm
max}}+1$, and any value of ${\widetilde{p}}$.
Recall that we began this case by assuming that ${\widetilde{n}}<{n_{\rm max}}$. In
the case ${\widetilde{n}}={n_{\rm max}}$ and $1\leq{\widetilde{p}}\leq{p_{\rm
max}}$, we have from (9.22) and (9.29) that $\lambda_{q,{n_{\rm
max}}}>\lambda_{q,{n_{\rm max}}+1,0}$, and so
$\mathbb{P}_{\left[q,{n_{\rm max}},{p_{\rm
max}}+1\right]}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}=\mathbb{P}_{\geq\lambda_{q,{n_{\rm
max}}}}\,.$
Then we can set $\zeta=\mu=\lambda_{q,{n_{\rm max}}}$. The only change is that
(8.98) becomes stronger, since $\lambda_{q,{n_{\rm max}}}>\lambda_{q,{n_{\rm
max}}+1,0}$, and so the desired estimates follow by arguing as before. We omit
further details.
Proof of item (3) and of item (1) when $p\neq{p_{\rm max}}+1$ and
$n\leq{n_{\rm max}}$. Note that in both of these cases we have
${\widetilde{n}}<n$. We first point out that we may assume that $n$ and $p$
are such that $\lambda_{q,{\widetilde{n}}}<\lambda_{q,n,p}$. If not, then
$\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\mathbb{P}_{[q,n,p]}=0$, and so
the estimate is trivially satisfied. We then set
$\displaystyle\zeta=\max\left\\{\lambda_{q,{\widetilde{n}}},\lambda_{q,n,p-1}\right\\},$
$\displaystyle\qquad\mu=\lambda_{q,{\widetilde{n}}},\qquad\Lambda=\lambda_{q,n,p},$
(8.100)
and
$\displaystyle\varrho$
$\displaystyle=\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\mathbb{P}_{[q,n,p]}\left(\left(\varrho_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}\right)^{2}\right),$
(8.101a) $\displaystyle\vartheta$
$\displaystyle=\zeta^{2{\mathsf{d}}}\Delta^{-{\mathsf{d}}}\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\mathbb{P}_{[q,n,p]}\left(\varrho^{2}_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}\right)\,.$
(8.101b)
We then have from the discussion in part (b) of Remark A.21 that
$\displaystyle\varrho$
$\displaystyle=\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\mathbb{P}_{[q,n,p]}\left(\varrho_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}^{2}\right)$
$\displaystyle=\zeta^{-2{\mathsf{d}}}\Delta^{\mathsf{d}}\zeta^{2{\mathsf{d}}}\Delta^{-{\mathsf{d}}}\left(\mathbb{P}_{\geq\lambda_{q,{\widetilde{n}}}}\mathbb{P}_{[q,n,p]}\left(\varrho^{2}_{\xi,\lambda_{q+1},r_{q+1,{\widetilde{n}}}}\right)\right),$
$\displaystyle=\zeta^{-2{\mathsf{d}}}\Delta^{\mathsf{d}}\vartheta\,,$ (8.102)
and so (i) from Proposition A.18 is satisfied. By property (1) of Proposition
4.3, $\varrho$ and $\vartheta$ are both periodic to scale
$\left(\lambda_{q+1}r_{q+1,{\widetilde{n}}}\right)^{-1}=\lambda_{q,{\widetilde{n}}}^{-1}$,
and so (ii) is satisfied. The estimates in (A.69) follow with
$\mathcal{C}_{*}=1$ from the discussion in part (b) of Remark A.21. Note that
in the case $N=2{\mathsf{d}}$ in (A.69), the inequality is weakened by a
factor of $\lambda_{q+1}^{\alpha_{\mathsf{R}}}$, and so (iii) is satisfied.
Here we again use $\alpha_{\mathsf{R}}$ as in (9.53), so this loss will be
absorbed using a factor of $\Gamma_{q+1}$. From (9.19), (9.26), (9.29) and
(9.22), and the assumption that $\lambda_{q,{\widetilde{n}}}<\lambda_{q,n,p}$,
we have that
$\widetilde{\lambda}_{q}\leq\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\ll\lambda_{q,{\widetilde{n}}}\leq\max\left\\{\lambda_{q,{\widetilde{n}}},\lambda_{q,n,p-1}\right\\}\leq\lambda_{q,n,p},$
and so, since $\Lambda\leq\lambda_{q+1}$, (A.70) is satisfied. From (9.48) we
have that
$\lambda_{q+1}^{4}\leq\left(\frac{\lambda_{q,{\widetilde{n}}}}{2\pi\sqrt{3}\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}}\right)^{\mathsf{N}_{\rm
dec}}\,,$
and so (A.71) is satisfied. Applying the estimate (A.73) for the parameter
range in Remark A.19, recalling that (8.90) includes the indicator function of
$\mathrm{supp\,}\left(\psi_{i,q}\right)$, recalling the definition of
$\mathcal{C}_{G}$ in (8.91), using (6.19) and (6.147) with $r_{1}=\infty$ and
$r_{2}=1$, and using $\zeta^{-1}\leq\lambda_{q,n,p-1}^{-1}$, we have that
$\displaystyle\left\|D^{N}D_{t,q}^{M}\left(\mathcal{H}\mathcal{O}_{{\widetilde{n}},{\widetilde{p}},n,p}\right)\right\|_{L^{1}\left(\mathrm{supp\,}\psi_{i,q}\right)}$
$\displaystyle\qquad\lesssim\sum_{i^{\prime}=i-1}^{i+1}\sum_{\xi,j,k,\vec{l}}\Lambda^{\alpha_{\mathsf{R}}}|\mathrm{supp\,}(\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}})\bigr{|}\Gamma^{2j-3-{\mathsf{C}_{b}}}_{q+1}\Gamma_{q}^{{-\mathsf{C_{R}}}}$
$\displaystyle\qquad\qquad\times\delta_{q+1}\widetilde{\lambda}_{q}\prod_{n^{\prime}\leq{\widetilde{n}}}\left(f_{q,n^{\prime}}\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\right)\mathcal{C}_{*}\zeta^{-1}\mathcal{M}\left(N,1,\zeta,\Lambda\right)\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)$
$\displaystyle\qquad\lesssim\Gamma_{q+1}\Gamma_{q+1}^{-1}\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}\widetilde{\lambda}_{q}\prod_{n^{\prime}\leq{\widetilde{n}}}\left(f_{q,n^{\prime}}\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\right)\lambda_{q,n,p-1}^{-1}\lambda_{q,n,p}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+4},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)$
$\displaystyle\qquad\lesssim\delta_{q+1,n,p}\lambda_{q,n,p}^{N}\mathcal{M}\left(M,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+4},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\,.$
(8.103)
In the last inequality, we have used that since ${\widetilde{n}}<n$, by (9.34)
we have
$\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}\widetilde{\lambda}_{q}\prod_{n^{\prime}\leq{\widetilde{n}}}\left(f_{q,n^{\prime}}\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\right)\lambda_{q,n,p-1}^{-1}\leq\delta_{q+1,n,p}$
(8.104)
The estimate (8.103) holds for all
$N,M\leq\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5\right)\rfloor-{\mathsf{d}}$. Then, after using
(9.61), which gives for all ${\widetilde{n}}<n$ that
$\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5\right)\rfloor-{\mathsf{d}}\geq\mathsf{N}_{\rm
fin,n},$ (8.105)
we have achieved (8.87).
Continuing to follow the parameter choices in Remark A.19, we set
$N_{\circ}=M_{\circ}=3\mathsf{N}_{\textnormal{ind,v}}$, and as before
$N^{\sharp}=\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-5$. From (9.60d), we have that the condition
$N_{\circ}\leq\nicefrac{{N^{\sharp}}}{{4}}$ is satisfied. The inequalities
(A.75) and (A.76) follow from the discussion in Remark A.19. The inequality in
(A.77) follows from (9.55) and the fact that
$\lambda=\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}\leq\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{p_{\rm
max}}}$ and
$\zeta=\max\\{\lambda_{q,{\widetilde{n}}},\lambda_{q,n,p-1}\\}\geq\lambda_{q,{\widetilde{n}}}$.
We then achieve the concluded estimate in (A.79), which gives (8.85) for the
case $p\neq{p_{\rm max}}+1$, $n\leq{n_{\rm max}}$ and any values of
${\widetilde{n}}$, ${\widetilde{p}}$ with ${\widetilde{n}}<n$. ∎
### 8.7 Type 2 oscillation errors
In order to show that the Type 2 errors (previously identified in (8.43),
(8.55), (8.56), (8.71), (8.72)) vanish, we will apply Proposition 4.8 on the
support of a specific cutoff function
$\eta=\eta_{i,j,k,q,n,p,\vec{l}}=\psi_{i,q}\chi_{i,k,q}\overline{\chi}_{q,n,p}\omega_{i,j,q,n,p}\zeta_{i,q,k,n,\vec{l}}\,.$
Before we may apply the proposition, we first estimate in Lemma 8.8 the number
of cutoff functions $\eta^{*}$ which may overlap with $\eta$, with an eye
towards keeping track of all the pipes that we will have to dodge in order to
successfully place pipes on $\eta$. The next three lemmas (Lemmas 8.9-8.11) are
technical in nature and are necessary in order to apply Lemma 4.7.
Specifically, we show that given $\eta$, $\eta^{*}$ and a fixed time $t^{*}$,
one may find a convex set which contains the intersection of the supports of
$\eta$ and $\eta^{*}$ at $t^{*}$. The time $t^{*}$ will be the time at which
the pipes on $\eta^{*}$ are _straight_, and combined with the convexity,
Lemma 4.7 may be applied. The upshot of this is that the pipes belonging to
$\eta^{*}$ only undergo mild deformations on the support of $\eta$. This
allows us to finally apply Proposition 4.8 to place pipes on $\eta$ which
dodge all pipes originating from overlapping cutoff functions $\eta^{*}$. We
remark that since $\overline{\chi}_{q,n,p}$ depends only on $n$ and $p$, which
are indices already encoded in $\omega_{i,j,q,n,p}$, throughout this section
we will suppress the dependence of the cumulative cutoff function $\eta$ on
$\overline{\chi}_{q,n,p}$ (defined in (6.104)), as it does not affect any of
the estimates.
#### 8.7.1 Preliminary estimates
###### Lemma 8.8 (Keeping Track of Overlap).
Given a cutoff function $\eta_{i,j,k,q,n,p,\vec{l}}$, consider the set of all
tuples $\left({i^{*}},{j^{*}},{k^{*}},{n^{*}},{p^{*}},\vec{l}^{*}\right)$ such
that the cutoff function
$\eta_{{i^{*}},{j^{*}},{k^{*}},q,{n^{*}},{p^{*}},\vec{l}^{*}}$ satisfies:
1. (1)
${n^{*}}\leq n$
2. (2)
There exists $(x,t)$ such that
$\eta_{i,j,k,q,n,p,\vec{l}}(x,t)\eta_{{i^{*}},{j^{*}},{k^{*}},q,{n^{*}},{p^{*}},\vec{l}^{*}}(x,t)\neq
0.$ (8.106)
Then the cardinality of the set of all such tuples is bounded above by
$\mathcal{C}_{\eta}\Gamma_{q+1}$, where the constant $\mathcal{C}_{\eta}$
depends only on ${n_{\rm max}}$, ${p_{\rm max}}$, ${j_{\rm max}}$, and
dimensional constants. In particular, due to (9.2), (9.3), and (6.129),
$\mathcal{C}_{\eta}$ is independent of $q$ and the values of the other
parameters indexing the cutoff functions.
###### Proof of Lemma 8.8.
Recall that the cutoff functions are defined by
$\eta_{i,j,k,q,n,p,\vec{l}}(x,t)=\psi_{i,q}(x,t)\chi_{i,k,q}(t)\overline{\chi}_{q,n,p}(t)\omega_{i,j,q,n,p}(x,t)\zeta_{i,q,k,n,\vec{l}}(x,t).$
(8.107)
As noted in the outline of this section, we will suppress the dependence on
$\overline{\chi}_{q,n,p}$, since the $n$ and $p$ indices are already accounted
for in $\omega_{i,j,q,n,p}$. The proof proceeds by first counting the number
of combinations $({i^{*}},{k^{*}})$ for which it is possible that there exists
$(x,t)$ such that
$\psi_{i,q}(x,t)\chi_{i,k,q}(t)\psi_{{i^{*}},q}(x,t)\chi_{{i^{*}},{k^{*}},q}(t)\neq
0.$ (8.108)
Next, for a given $({i^{*}},{k^{*}})$, we count the number of values of
$({j^{*}},{n^{*}},{p^{*}})$ such that there exists $(x,t)$ such that
$\omega_{i,j,q,n,p}(x,t)\omega_{{i^{*}},{j^{*}},q,{n^{*}},{p^{*}}}(x,t)\neq
0.$ (8.109)
Finally, for a given $({i^{*}},{k^{*}},{j^{*}},{n^{*}},{p^{*}})$, we count the
number of triples $(l^{*},w^{*},h^{*})$ such that ${n^{*}}\leq n$ and there
exists $(x,t)$ such that
$\zeta_{i,q,k,n,\vec{l}}(x,t)\zeta_{{i^{*}},q,{k^{*}},{n^{*}},\vec{l}^{*}}(x,t)\neq
0.$ (8.110)
Recalling the definition of $\chi_{i,k,q}$ from (6.96) and (6.98), we see that
$\psi_{i,q}\chi_{i,{k^{*}},q}$ may have non-empty overlap with
$\psi_{i,q}\chi_{i,k,q}$ if and only if ${k^{*}}\in\\{k-1,k,k+1\\}$. Next,
from (6.19), we have that only $\psi_{i-1,q}$ and $\psi_{i+1,q}$ may overlap
with $\psi_{i,q}$. Now, let $(x,t)\in\mathrm{supp\,}\psi_{i,q}\chi_{i,k,q}$ be
given such that there exists $k_{i-1}$ such that
$\psi_{i,q}(x,t)\chi_{i,k,q}(t)\psi_{i-1,q}(x,t)\chi_{i-1,k_{i-1},q}(t)\neq
0.$
From the definition of $\chi_{i-1,k_{i-1},q}$, it is immediate that the
diameter of the support of $\chi_{i-1,k_{i-1},q}$ is _larger_ than the
diameter of the support of $\chi_{i,k,q}$. It follows that there can be at
most three values of ${k^{*}}$ (one of which is $k_{i-1}$) such that
$\chi_{i-1,{k^{*}},q}$ has non-empty overlap with $\chi_{i,k,q}$. Finally, let
$(x,t)\in\mathrm{supp\,}\psi_{i,q}\chi_{i,k,q}$ be given such that there
exists $k_{i+1}$ such that
$\psi_{i,q}(x,t)\chi_{i,k,q}(t)\psi_{i+1,q}(x,t)\chi_{i+1,k_{i+1},q}(t)\neq
0.$
From the definition of $\chi_{i+1,{k^{*}},q}$, there exists a constant
$\mathcal{C}_{\chi}$ depending on $\chi$ but not $i$, $q$, or ${k^{*}}$ such
that for all $|k^{\prime}|\geq\mathcal{C}_{\chi}\Gamma_{q+1}$
$\chi_{i+1,k_{i+1}+k^{\prime},q}(t)\chi_{i,k,q}(t)=0$
for all $t\in\mathbb{R}$. Therefore, the number of ${k^{*}}$ such that
$\chi_{i+1,{k^{*}},q}$ may have non-empty overlap with $\chi_{i,k,q}$ is no
more than $2\mathcal{C}_{\chi}\Gamma_{q+1}+1$. In summary, the number of pairs
$({i^{*}},{k^{*}})$ such that (8.108) holds for some $(x,t)$ is bounded above
by
$3+3+2\mathcal{C}_{\chi}\Gamma_{q+1}+1\leq 3\mathcal{C}_{\chi}\Gamma_{q+1}$
(8.111)
if $\lambda_{0}$ is sufficiently large; here the constant $\mathcal{C}_{\chi}$ is
independent of $q$ and of the other parameters which index the cutoff functions.
Now let $({i^{*}},{k^{*}})$ be given such that
$\psi_{{i^{*}},q}\chi_{{i^{*}},{k^{*}},q}$ has nonempty overlap with
$\psi_{i,q}\chi_{i,k,q}$. Once values of ${n^{*}}$, ${p^{*}}$, and ${j^{*}}$
are chosen, these three parameters along with the value of ${i^{*}}$ uniquely
determine a stress cutoff function
$\omega_{{i^{*}},{j^{*}},q,{n^{*}},{p^{*}}}$. Since ${i^{*}}$ was fixed, we
may let ${j^{*}}$, ${n^{*}}$, and ${p^{*}}$ vary. Using that
${j^{*}}\leq{j_{\rm max}}\leq 4b/(\varepsilon_{\Gamma}(b-1))$ from (6.129),
${n^{*}}\leq{n_{\rm max}}$, ${p^{*}}\leq{p_{\rm max}}$ where ${n_{\rm max}}$,
and ${p_{\rm max}}$ are independent of $q$, the number of tuples
$({i^{*}},{k^{*}},{j^{*}},{n^{*}},{p^{*}})$ such that there exists $(x,t)$
with
$\psi_{i,q}(x,t)\chi_{i,k,q}(x,t)\omega_{i,j,q,n,p}(x,t)\psi_{{i^{*}},q}(x,t)\chi_{{i^{*}},{k^{*}},q}(x,t)\omega_{{i^{*}},{j^{*}},q,{n^{*}},{p^{*}}}(x,t)\neq
0$ (8.112)
is bounded by a dimensional constant multiplied by $\Gamma_{q+1}{n_{\rm
max}}{p_{\rm max}}4b/(\varepsilon_{\Gamma}(b-1))$.
Finally, fix a tuple $({i^{*}},{k^{*}},{j^{*}},{n^{*}},{p^{*}})$ such that
(8.112) holds at $(x,t)$. From (6.139), there exists
$\vec{l}^{*}=(l^{*},w^{*},h^{*})$ such that
$\zeta_{{i^{*}},q,{k^{*}},{n^{*}},\vec{l}^{*}}(x,t)\neq 0$. From (6.141),
(6.108), and the fact that ${n^{*}}\leq n$, there exists a dimensional
constant $\mathcal{C}_{\zeta}$ such that at most $\mathcal{C}_{\zeta}$ of the
checkerboard cutoffs neighboring
$\zeta_{{i^{*}},q,{k^{*}},{n^{*}},\vec{l}^{*}}$ can intersect the support of
$\zeta_{i,q,k,n,\vec{l}}$. Since all Lagrangian trajectories originating at
$(x,t)$ follow the same velocity field $v_{\ell_{q}}$ and the checkerboard
cutoffs are precomposed with Lagrangian flows, this property is preserved in
time. Thus we have shown that for each tuple
$({i^{*}},{k^{*}},{j^{*}},{n^{*}},{p^{*}})$, the number of associated tuples
$(l^{*},w^{*},h^{*})$ such that
$\zeta_{{i^{*}},q,{k^{*}},{n^{*}},\vec{l}^{*}}$ can have nonempty intersection
with $\zeta_{i,q,k,n,\vec{l}}$ is bounded by a dimensional constant
independent of $q$.
Combining the preceding arguments, we obtain that the number of cutoff
functions $\eta_{{i^{*}},{j^{*}},{k^{*}},q,{n^{*}},{p^{*}},\vec{l}^{*}}$ which
may overlap nontrivially with $\eta_{i,j,k,q,n,p,\vec{l}}$ is bounded by at
most a dimensional constant multiplied by $\Gamma_{q+1}{n_{\rm max}}{p_{\rm
max}}4b/(\varepsilon_{\Gamma}(b-1))$, finishing the proof. ∎
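In summary (a bookkeeping sketch collecting the three counts from the proof, in which $\mathcal{C}_{\chi}$ and $\mathcal{C}_{\zeta}$ are the constants introduced above), the bound factors as
$3\mathcal{C}_{\chi}\Gamma_{q+1}\cdot{j_{\rm max}}\,{n_{\rm max}}\,{p_{\rm max}}\cdot\mathcal{C}_{\zeta}=:\mathcal{C}_{\eta}\Gamma_{q+1}\,,$
where the first factor counts the pairs $({i^{*}},{k^{*}})$, the second the triples $({j^{*}},{n^{*}},{p^{*}})$, and the third the tuples $\vec{l}^{*}$; in particular $\mathcal{C}_{\eta}$ depends only on ${n_{\rm max}}$, ${p_{\rm max}}$, ${j_{\rm max}}$, and dimensional constants, as claimed.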
###### Lemma 8.9.
Let $(x,t),(y,t)\in\mathrm{supp\,}\psi_{i,q}$ be such that
$\psi_{i,q}^{2}(x,t)\geq\nicefrac{{1}}{{4}}$ and
$\psi_{i,q}^{2}(y,t)\leq\nicefrac{{1}}{{8}}$. Then there exists a geometric
constant $\mathcal{C}_{\ast}>1$ such that
$|x-y|\geq\mathcal{C}_{*}\left(\Gamma_{q}\lambda_{q}\right)^{-1}.$ (8.113)
###### Proof of Lemma 8.9.
Let $L(x,y)$ be the line segment connecting $x$ and $y$. From (6.36), we have
that for $z\in L(x,y)$ (in fact for all $z\in\mathbb{T}^{3}$),
$\left|\nabla\psi_{i,q}(z)\right|\lesssim\psi_{i,q}^{1-\frac{1}{\mathsf{N}_{\rm
fin}}}(z)\lambda_{q}\Gamma_{q}.$ (8.114)
Thus we can write
$\displaystyle\frac{1}{8}\leq\left|\psi_{i,q}^{2}(x,t)-\psi_{i,q}^{2}(y,t)\right|$
$\displaystyle\leq 2\left|\psi_{i,q}(x)-\psi_{i,q}(y)\right|$
$\displaystyle\leq
2\left|\int_{0}^{1}\nabla\psi_{i,q}(x+t(y-x))\cdot(y-x)\,dt\right|$
$\displaystyle\leq 2|x-y|\left\|\nabla\psi_{i,q}\right\|_{L^{\infty}}$
$\displaystyle\lesssim\Gamma_{q}\lambda_{q}|x-y|,$
and (8.113) follows. ∎
###### Lemma 8.10.
Consider cutoff functions
$\eta:=\eta_{i,j,k,q,n,p,\vec{l}}=\psi_{i,q}\chi_{i,k,q}\omega_{i,j,q,n,p}\zeta_{i,k,q,n,\vec{l}},$
$\eta^{*}:=\eta_{{i^{*}},{j^{*}},{k^{*}},q,{n^{*}},{p^{*}},\vec{l}^{*}}=\psi_{{i^{*}},q}\chi_{{i^{*}},{k^{*}},q}\omega_{{i^{*}},{j^{*}},q,{n^{*}},{p^{*}}}\zeta_{{i^{*}},{k^{*}},q,{n^{*}},\vec{l}^{*}},$
where $n^{*}\leq n$ and $\eta$ and $\eta^{*}$ overlap as in Lemma 8.8. Let
$t^{*}\in\mathrm{supp\,}\chi_{{i^{*}},{k^{*}},q}$ be given. Then there exists
a convex set $\Omega:=\Omega(\eta,\eta^{*},t^{*})$ (in fact a ball) of radius
$\lambda_{q,n,0}^{-1}\Gamma_{q+1}$ such that
$\left(\mathrm{supp\,}\zeta_{i,k,q,n,\vec{l}}\cap\\{t=t^{*}\\}\right)\subset\Omega\subset\mathrm{supp\,}\psi_{i\pm,q}.$
(8.115)
###### Proof of Lemma 8.10.
Let $(x,t_{0})\in\mathrm{supp\,}\left(\eta\eta^{*}\right)$. Then there exists
$i^{\prime}\in\\{i-1,i,i+1\\}$ such that
$\psi_{i^{\prime},q}^{2}(x,t_{0})\geq\frac{1}{2}$. Consider the flow $X(x,t)$
originating from $(x,t_{0})$. Then for any $t$ such that
$|t-t_{0}|\leq\tau_{q}\Gamma_{q+1}^{-i+5+\mathsf{c_{0}}}$, we can apply Lemma
6.24 to deduce that $\psi_{i^{\prime},q}^{2}(t,X(x,t))\geq\frac{1}{4}$. By the
definition of $\chi_{{i^{*}},{k^{*}},q}$, the fact that
${i^{*}}\in\\{i-1,i,i+1\\}$, the existence of
$(x,t_{0})\in\mathrm{supp\,}(\chi_{i,k,q}\chi_{{i^{*}},{k^{*}},q})$, and the
fact that $t^{*}\in\mathrm{supp\,}\chi_{{i^{*}},{k^{*}},q}$, we in particular
deduce that $\psi_{i^{\prime},q}^{2}(t^{*},X(x,t^{*}))\geq\frac{1}{4}$. Now,
let $y$ be such that
$|X(x,t^{*})-y|\leq\lambda_{q,n,0}^{-1}\Gamma_{q+1}\leq\widetilde{\lambda}_{q}^{-1}<\mathcal{C}_{*}\widetilde{\lambda}_{q}^{-1}$
for $\mathcal{C}_{*}$ given in (8.113), where we have used the definitions of
$\lambda_{q,n,0}$ in (9.26), (9.27), and (9.28). Then from Lemma 8.9, it
cannot be the case that $\psi_{i^{\prime},q}^{2}(t^{*},y)\leq\frac{1}{8}$, and
so
$\displaystyle
y\in\mathrm{supp\,}\psi_{i^{\prime},q}\cap\\{t=t^{*}\\}\subset\mathrm{supp\,}\psi_{i\pm,q}\cap\\{t=t^{*}\\}\,.$
(8.116)
Since $y$ is arbitrary, we conclude that the ball of radius
$\Gamma_{q+1}\lambda_{q,n,0}^{-1}$ centered at $X(x,t^{*})$ is contained in
$\mathrm{supp\,}\psi_{i\pm,q}\cap\\{t=t^{*}\\}$. We let
$\Omega(\eta,\eta^{*},t^{*})$ be precisely this ball (hence a convex set).
Since $D_{t,q}\zeta_{i,k,q,n,\vec{l}}=0$ and
$(x,t_{0})\in\mathrm{supp\,}\zeta_{i,k,q,n,\vec{l}}$, we have that
$X(x,t^{*})\in\mathrm{supp\,}\zeta_{i,k,q,n,\vec{l}}\cap\\{t=t^{*}\\}$. Then,
recalling that the support of $\zeta_{i,k,q,n,\vec{l}}$ must obey the diameter
bound in (6.141) on the support of $\widetilde{\chi}_{i,k,q}$, which contains
the support of $\chi_{{i^{*}},{k^{*}},q}$ by (6.103), we conclude that
$\mathrm{supp\,}\zeta_{i,k,q,n,\vec{l}}\cap\\{t=t^{*}\\}\subset\Omega\,.$
(8.117)
Combining (8.116) and (8.117) concludes the proof of the lemma. ∎
###### Lemma 8.11.
As in Lemma 8.8, consider cutoff functions
$\eta:=\eta_{i,j,k,q,n,p,\vec{l}}=\psi_{i,q}\chi_{i,k,q}\omega_{i,j,q,n,p}\zeta_{i,k,q,n,\vec{l}},$
$\eta^{*}:=\eta_{{i^{*}},{j^{*}},{k^{*}},q,{n^{*}},{p^{*}},\vec{l}^{*}}=\psi_{{i^{*}},q}\chi_{{i^{*}},{k^{*}},q}\omega_{{i^{*}},{j^{*}},q,{n^{*}},{p^{*}}}\zeta_{{i^{*}},{k^{*}},q,{n^{*}},\vec{l}^{*}}.$
Let $t^{*}\in\mathrm{supp\,}\chi_{{i^{*}},{k^{*}},q}$ be such that
$\Phi^{*}:=\Phi_{({i^{*}},{k^{*}})}$ is the identity at time $t^{*}$. Using
Lemma 8.10, define $\Omega:=\Omega(\eta,\eta^{*},t^{*})$. Define
$\Omega(t):=\Omega(\eta,\eta^{*},t^{*},t):=X(\Omega,t)$, where
$X(\cdot,t^{*})$ is the identity.
1. (1)
For $t\in\mathrm{supp\,}\chi_{i,k,q}$,
$\mathrm{supp\,}\eta(\cdot,t)\subset\Omega(t)\subset\mathrm{supp\,}\psi_{i\pm,q}.$
(8.118)
2. (2)
Let
$\mathbb{W}^{*}\circ\Phi^{*}:=\mathbb{W}_{{\xi^{*}},q+1,{n^{*}}}^{{i^{*}},{j^{*}},{k^{*}},{n^{*}},\vec{l}^{*}}\circ\Phi_{({i^{*}},{k^{*}})}$
be an intermittent pipe flow supported on $\eta^{*}$. Then there exists a
geometric constant $\mathcal{C}_{\textnormal{pipe}}$ such that
$\left(\mathrm{supp\,}\mathbb{W}^{*}\circ\Phi^{*}\cap\\{t=t^{*}\\}\cap\Omega\right)\subset\bigcup_{n=1}^{N}S_{n},$
where the sets $S_{n}$ are cylinders concentrated around line segments $A_{n}$
for $n\in\\{1,...,N\\}$ with
$N\leq\mathcal{C}_{\textnormal{pipe}}\left(\frac{\lambda_{q,n}}{\lambda_{q,n,0}\Gamma_{q+1}^{-1}}\right)^{2}.$
(8.119)
3. (3)
$\mathbb{W}^{*}\circ\Phi^{*}(\cdot,t)$ and the associated axes $A_{n}(t)$ and
sets $S_{n}(t)$ satisfy the conclusions of Lemma 4.7 on the set $\Omega(t)$
for $t\in\mathrm{supp\,}\chi_{i,k,q}$.
###### Proof of Lemma 8.11.
From the previous lemma, we have that for all $y\in\Omega$,
$\psi_{i\pm,q}^{2}(y,t^{*})\geq\nicefrac{{1}}{{8}}$. Applying Lemma 6.24, we
have that for all $t$ with
$|t-t^{*}|\leq\tau_{q}\Gamma_{q+1}^{-i+5+\mathsf{c_{0}}}$, the Lagrangian flow
originating from $(y,t^{*})$ has the property that
$\psi_{i\pm,q}^{2}(t,X(y,t))\geq\nicefrac{{1}}{{16}}\,.$ (8.120)
Recalling from (6.102) that the diameter of the support of
$\widetilde{\chi}_{{i^{*}},{k^{*}},q}$ is
$\tau_{q}\Gamma_{q+1}^{-{i^{*}}+\mathsf{c_{0}}}$ and that $i-1\leq{i^{*}}\leq
i+1$, we have that in particular the Lagrangian flow originating at
$(y,t^{*})$ satisfies (8.120) for all
$t\in\mathrm{supp\,}\widetilde{\chi}_{{i^{*}},{k^{*}},q}$. From (6.103),
(8.120) is then satisfied in particular for all
$t\in\mathrm{supp\,}\chi_{i,k,q}$, thus proving the second inclusion from
(8.118). To prove the first inclusion, we use (8.115), the definition of
$\Omega(t)$, and the equality $D_{t,q}\zeta_{i,k,q,n,\vec{l}}=0$ to deduce
that
$\mathrm{supp\,}\zeta_{i,k,q,n,\vec{l}}(\cdot,t)\subset\Omega(t),$
finishing the proof of (8.118).
To prove the second claim, recall that $\mathbb{W}^{*}\circ\Phi^{*}$ at
$t=t^{*}$ is periodic to scale $\lambda_{q,{n^{*}}}^{-1}$ for ${n^{*}}\leq n$,
and the diameter of $\Omega$ is $2\lambda_{q,n,0}^{-1}\Gamma_{q+1}$ (in fact
$\Omega$ is a ball). Considering the quotient of the respective diameters
squared, the claim then follows after absorbing the geometric constant
$n_{\xi}^{*}$ from Proposition 4.3 into $\mathcal{C}_{\textnormal{pipe}}$.
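Spelling out the quotient computation (a sketch, under the assumption that the pipes of $\mathbb{W}^{*}\circ\Phi^{*}$ at $t=t^{*}$ are arranged $\lambda_{q,{n^{*}}}^{-1}$-periodically and that $\lambda_{q,{n^{*}}}\leq\lambda_{q,n}$ for ${n^{*}}\leq n$), we have
$N\lesssim n_{\xi}^{*}\left(\frac{\mathrm{diam}\,\Omega}{\lambda_{q,{n^{*}}}^{-1}}\right)^{2}\leq n_{\xi}^{*}\left(2\lambda_{q,n,0}^{-1}\Gamma_{q+1}\lambda_{q,n}\right)^{2}\leq\mathcal{C}_{\textnormal{pipe}}\left(\frac{\lambda_{q,n}}{\lambda_{q,n,0}\Gamma_{q+1}^{-1}}\right)^{2}\,,$
where the final inequality absorbs $4n_{\xi}^{*}$ into $\mathcal{C}_{\textnormal{pipe}}$, matching (8.119).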
To see that we may apply Lemma 4.7, first note that $\Omega=\Omega(t^{*})$ is
convex by construction, and so the first assumption of Lemma 4.7 is met. We
choose $v=v_{\ell_{q}}$ and $X$ and $\Phi$ to be the associated backwards and
forwards flows originating from $t_{0}=t^{*}$. From (6.60), (8.118), and
(9.19), we have that for $t\in\mathrm{supp\,}\chi_{i,k,q}$ and
$x\in\Omega(t)$,
$\left|\nabla
v_{\ell_{q}}(x,t)\right|\lesssim\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}\Gamma_{q+1}^{i+2}=\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{i+7},$
(8.121)
and so (4.21) is satisfied with $C=i+7$. Recall again from (6.103) that
$\mathrm{supp\,}\widetilde{\chi}_{{i^{*}},{k^{*}},q}$ contains the support of
$\chi_{i,k,q}$, and that from (6.102) the support of
$\widetilde{\chi}_{{i^{*}},{k^{*}},q}$ has diameter
$\tau_{q}\Gamma_{q+1}^{-{i^{*}}+\mathsf{c_{0}}}$. We then use (9.39) and
(9.19) to write that for any
$t\in\mathrm{supp\,}\widetilde{\chi}_{{i^{*}},{k^{*}},q}$ we have
$\displaystyle|t-t^{*}|\leq\tau_{q}\Gamma_{q+1}^{-{i^{*}}+\mathsf{c_{0}}+1}$
$\displaystyle\leq\tau_{q}\Gamma_{q+1}^{-i+\mathsf{c_{0}}+2}$
$\displaystyle\leq\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}\Gamma_{q+1}^{\mathsf{c_{0}}+6}\right)^{-1}\Gamma_{q+1}^{-i+\mathsf{c_{0}}+2}$
$\displaystyle=\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{\mathsf{c_{0}}+11}\right)^{-1}\Gamma_{q+1}^{-i+\mathsf{c_{0}}+2}$
$\displaystyle\leq\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{i+9}\right)^{-1},$
so that (4.20) is satisfied since $C+2=i+9$. We can now apply Lemma 4.7,
concluding the proof of the lemma. ∎
#### 8.7.2 Applying Proposition 4.8
###### Lemma 8.12.
The Type 2 oscillation errors vanish. More specifically,
1. (1)
When ${\widetilde{n}}=0$, the Type 2 errors identified in (8.43) vanish.
2. (2)
When $1\leq{\widetilde{n}}\leq{n_{\rm max}}-1$, the Type 2 errors identified
in (8.55) and (8.56) vanish.
3. (3)
When ${\widetilde{n}}={n_{\rm max}}$, the Type 2 errors identified in (8.71)
and (8.72) vanish.
###### Proof of Lemma 8.12.
We first recall what the Type $2$ oscillation errors are. When
${\widetilde{n}}=0$, the errors identified in (8.43) can be written using
(8.30) as
$\mathcal{O}_{0,2}=\sum_{\neq\\{\xi,i,j,k,{\widetilde{p}},\vec{l}\\}}\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,0}\circ\Phi_{(i,k)}\right)\otimes\mathrm{curl\,}\left(a_{({\xi^{*}})}\nabla\Phi_{({i^{*}},{k^{*}})}^{T}\mathbb{U}_{{\xi^{*}},q+1,0}\circ\Phi_{({i^{*}},{k^{*}})}\right)\,,$
(8.122)
where the notation $\neq\\{\xi,i,j,k,{\widetilde{p}},\vec{l}\\}$ is defined in
(8.29) and denotes summation over all pairs of cutoff-function index tuples for
which at least one parameter differs between the two tuples. When
$1\leq{\widetilde{n}}\leq{n_{\rm max}}$, the Type 2 errors identified in
(8.55) and (8.71) can be written as
$\displaystyle
2\sum_{n^{\prime}\leq{\widetilde{n}}-1}w_{q+1,{\widetilde{n}}}\otimes_{\textnormal{s}}w_{q+1,n^{\prime}}=2$
$\displaystyle\sum_{{n^{*}}\leq{\widetilde{n}}-1}\sum_{\xi,i,j,k,{\widetilde{p}},\vec{l}}\sum_{{\xi^{*}},{i^{*}},{j^{*}},{k^{*}},{p^{*}},\vec{l}^{*}}\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)$
$\displaystyle\qquad\qquad\otimes_{\textnormal{s}}\mathrm{curl\,}\left(a_{({\xi^{*}})}\nabla\Phi_{({i^{*}},{k^{*}})}^{T}\mathbb{U}_{{\xi^{*}},q+1,{n^{*}}}\circ\Phi_{({i^{*}},{k^{*}})}\right)\,.$
(8.123)
When $1\leq{\widetilde{n}}\leq{n_{\rm max}}$, the Type 2 errors identified in
(8.56) and (8.72) can be written as
$\sum_{\neq\\{\xi,i,j,k,{\widetilde{p}},\vec{l}\\}}\mathrm{curl\,}\left(a_{(\xi)}\nabla\Phi_{(i,k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i,k)}\right)\otimes\mathrm{curl\,}\left(a_{({\xi^{*}})}\nabla\Phi_{({i^{*}},{k^{*}})}^{T}\mathbb{U}_{{\xi^{*}},q+1,{\widetilde{n}}}\circ\Phi_{({i^{*}},{k^{*}})}\right)\,,$
(8.124)
where the notation $\neq\\{\xi,i,j,k,{\widetilde{p}},\vec{l}\\}$ has been
reused from (8.29). To show that the errors defined in (8.122), (8.123), and
(8.124) vanish, it suffices to show the following. For pairs of cutoff
functions $\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}}$ and
$\eta_{{i^{*}},{j^{*}},{k^{*}},q,{n^{*}},{p^{*}},\vec{l}^{*}}$ satisfying the
two conditions in Lemma 8.8, and vectors $\xi,{\xi^{*}}\in\Xi$,
$\displaystyle\mathrm{supp\,}\left(\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{i,j,k,{\widetilde{n}},{\widetilde{p}},\vec{l}}\circ\Phi_{(i,k)}\right)\cap\mathrm{supp\,}\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}}$
$\displaystyle\qquad\qquad\qquad\cap\mathrm{supp\,}\left(\mathbb{W}_{{\xi^{*}},q+1,{n^{*}}}^{{i^{*}},{j^{*}},{k^{*}},{n^{*}},{p^{*}},\vec{l}^{*}}\circ\Phi_{({i^{*}},{k^{*}})}\right)\cap\mathrm{supp\,}\eta_{{i^{*}},{j^{*}},{k^{*}},q,{n^{*}},{p^{*}},\vec{l}^{*}}=\emptyset.$
(8.125)
The proof of this claim will proceed by fixing ${\widetilde{n}}$, using the
preliminary estimates, and applying Proposition 4.8.
Let ${\widetilde{n}}$ be fixed and assume that $w_{q+1,n^{\prime}}$ for
$n^{\prime}<{\widetilde{n}}$ has been defined (when ${\widetilde{n}}=0$, this
assumption is vacuous). In particular, placements have been chosen for all
intermittent pipe flows indexed by $n^{\prime}$. Now, consider all the cutoff
functions $\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}}$ utilized at
stage ${\widetilde{n}}$. Since the parameters indexing the cutoff functions
are countable, we may choose _any_ ordering of the tuples
$(i,j,k,{\widetilde{p}},\vec{l})$ at level ${\widetilde{n}}$. Combined with an
ordering of the direction vectors $\xi\in\Xi$, we thus have an ordering of the
cutoff functions $\eta_{i,j,k,q,{\widetilde{n}},{\widetilde{p}},\vec{l}}$ and
the associated intermittent pipe flows
$\mathbb{W}_{\xi,q+1,{\widetilde{n}}}^{i,j,k,{\widetilde{n}},{\widetilde{p}},\vec{l}}\circ\Phi_{(i,k)}$.
To ease notation, we will abbreviate the cutoff functions as $\eta_{z}$ and
the associated intermittent pipe flows as $(\mathbb{W}\circ\Phi)_{z}$, where
$z\in\mathbb{N}$ corresponds to the ordering. We will apply Proposition 4.8
inductively on $z$, with the goal of placing the pipe flow
$(\mathbb{W}\circ\Phi)_{z}$ so that the following two conditions hold:
$\mathrm{supp\,}(\mathbb{W}\circ\Phi)_{z^{\prime}}\cap\mathrm{supp\,}(\mathbb{W}\circ\Phi)_{z}\cap\mathrm{supp\,}\eta_{z}=\emptyset\,,$
(8.126)
for all $z^{\prime}<z$, and such that
$\mathrm{supp\,}w_{q+1,n^{\prime}}\cap\mathrm{supp\,}(\mathbb{W}\circ\Phi)_{z}\cap\mathrm{supp\,}\eta_{z}=\emptyset\,,$
(8.127)
for all $n^{\prime}<{\widetilde{n}}$. The first condition shows that all Type
2 errors such as (8.122) and (8.124) which arise from two sets of pipes both
indexed by ${\widetilde{n}}$ vanish, while the second condition shows that the
Type 2 errors which arise from pipes indexed by $n^{\prime}<{\widetilde{n}}$
interacting with pipes indexed by ${\widetilde{n}}$ vanish, such as (8.123).
Throughout the rest of the proof, $z^{\prime}$ will only ever denote an
integer less than $z$ such that $\eta_{z}$ and $\eta_{z^{\prime}}$ overlap.
Although we have suppressed the indices, note that $\eta_{z^{\prime}}$ and
$\eta_{z}$ both correspond to the index ${\widetilde{n}}$. Conversely, let
$\eta_{z^{\prime\prime}}$ denote a generic cutoff function indexed by
$n^{\prime}$ which overlaps with $\eta_{z}$. By Lemma 8.8, there exists a
geometric constant $\mathcal{C}_{\eta}$ such that the number of cutoff
functions $\eta_{z^{\prime}}$ or $\eta_{z^{\prime\prime}}$ which overlap with
$\eta_{z}$ is bounded above by $\mathcal{C}_{\eta}\Gamma_{q+1}$. Let
$t_{z^{\prime}}\in\mathrm{supp\,}\chi_{i_{z^{\prime}},k_{z^{\prime}},q}$ be
the time for which $\Phi_{i_{z^{\prime}},k_{z^{\prime}},q}$ is the identity,
and let $\Omega\left(\eta_{z},\eta_{z^{\prime}},t_{z^{\prime}}\right)$ be the
convex set constructed in Lemma 8.10, where we have set
$t^{*}=t_{z^{\prime}}$. Let
$\Omega\left(\eta_{z},\eta_{z^{\prime}},t_{z^{\prime}},t\right)$ denote the
image of $\Omega\left(\eta_{z},\eta_{z^{\prime}},t_{z^{\prime}}\right)$ under
the Lagrangian flow of $v_{\ell_{q}}$ normalized to be the identity at
$t_{z^{\prime}}$, as defined in Lemma 8.11.
$\mathrm{supp\,}(\mathbb{W}\circ\Phi)_{z^{\prime}}\cap\mathrm{supp\,}\Omega\left(\eta_{z},\eta_{z^{\prime}},t_{z^{\prime}}\right)\cap\\{t=t_{z^{\prime}}\\}$
(8.128)
is contained in the union of sets $S_{n}^{z^{\prime}}$ concentrated around
axes $A_{n}^{z^{\prime}}$ for
$n\leq\mathcal{C}_{\textnormal{pipe}}\Gamma_{q+1}^{2}\frac{\lambda_{q,{\widetilde{n}}}^{2}}{\lambda_{q,{\widetilde{n}},0}^{2}}\,,$
and the flowed axes $A_{n}^{z^{\prime}}$ and pipes of
$(\mathbb{W}\circ\Phi)_{z^{\prime}}$ satisfy the conclusions of Lemma 4.7.
Furthermore, substituting $z^{\prime\prime}$ for $z^{\prime}$ in the preceding
discussion, all the analogous definitions and conclusions can be made for
cutoff functions $\eta_{z^{\prime\prime}}$ and pipe flows
$(\mathbb{W}\circ\Phi)_{z^{\prime\prime}}$.
We will apply Proposition 4.8 with the following choices. Let $t_{z}$ be the
time at which the flow map $\Phi_{i,k,q}$ corresponding to $\eta_{z}$ is the
identity. Set
$\Omega=\left(\bigcup_{z^{\prime}<z}\Omega\left(\eta_{z},\eta_{z^{\prime}},t_{z^{\prime}},t_{z}\right)\right)\bigcup\left(\bigcup_{n^{\prime}<{\widetilde{n}}}\Omega\left(\eta_{z},\eta_{z^{\prime\prime}},t_{z^{\prime\prime}},t_{z}\right)\right)$
(8.129)
and set
$\displaystyle
r_{1}=\Gamma_{q+1}^{-1}\frac{\lambda_{q,{\widetilde{n}},0}}{\lambda_{q+1}}=\begin{cases}\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{{\widetilde{n}}-1}\cdot\frac{5}{6}}\Gamma_{q+1}^{-1}&\mbox{if
}{\widetilde{n}}\geq 2\\\
\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\frac{4}{5}}\Gamma_{q+1}^{-1}&\mbox{if
}{\widetilde{n}}=1\\\ \frac{\widetilde{\lambda}_{q}}{\lambda_{q+1}}&\mbox{if
}{\widetilde{n}}=0.\end{cases}$ (8.130)
We have used here the definitions of $\lambda_{q,{\widetilde{n}},0}$ given in
(9.27), (9.26), and (9.28). Note that by (8.118),
$\mathrm{supp\,}\eta_{z}(\cdot,t_{z})\subset\Omega\left(\eta_{z},\eta_{z^{\prime}},t_{z^{\prime}},t_{z}\right)$
for each $z^{\prime}<z$, with the analogous inclusion holding when
$z^{\prime}$ is replaced by $z^{\prime\prime}$. In particular, we have that
$\mathrm{supp\,}\eta_{z}(\cdot,t_{z})\subset\Omega$. Furthermore, we have
additionally from Lemma 8.11 that Lemma 4.7 may be applied on $\Omega(t)$ for
all $t\in\chi_{i,k,q}$. Thus, the diameter of
$\Omega(\eta_{z},\eta_{z^{\prime}},t_{z^{\prime}},t_{z})$ satisfies
$\textnormal{diam}\left(\Omega\left(\eta_{z},\eta_{z^{\prime}},t_{z^{\prime}},t_{z}\right)\right)\leq(1+\Gamma_{q+1}^{-1})\textnormal{diam}\left(\Omega(\eta_{z},\eta_{z^{\prime}},t_{z^{\prime}})\right)=2(1+\Gamma_{q+1}^{-1})\lambda_{q,{\widetilde{n}},0}^{-1}\Gamma_{q+1}$
(8.131)
Using that the diameter of the support of $\eta_{z}(\cdot,t_{z})$ is bounded
by a dimensional constant times $\lambda_{q,{\widetilde{n}},0}^{-1}$ from
(6.141) and recalling that
$\mathrm{supp\,}\eta_{z}(\cdot,t_{z})\subset\Omega\left(\eta_{z},\eta_{z^{\prime}},t_{z^{\prime}},t_{z}\right)$
with the analogous conclusion holding for $z^{\prime\prime}$, we have that
$\displaystyle\textnormal{diam}(\Omega)$ $\displaystyle\leq
4(1+\Gamma_{q+1}^{-1})\lambda_{q,{\widetilde{n}},0}^{-1}\Gamma_{q+1}+\Gamma_{q+1}\lambda_{q,{\widetilde{n}},0}^{-1}$
$\displaystyle\leq
6(1+\Gamma_{q+1}^{-1})\Gamma_{q+1}\left(\lambda_{q,{\widetilde{n}},0}\right)^{-1}$
$\displaystyle\leq 16(\lambda_{q+1}r_{1})^{-1}$
for each value of ${\widetilde{n}}$ from (8.130), and so (4.28) is satisfied.
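For concreteness, the last step in the chain of inequalities above is a sketch of the following arithmetic: the first equality in (8.130) gives $(\lambda_{q+1}r_{1})^{-1}=\Gamma_{q+1}\lambda_{q,{\widetilde{n}},0}^{-1}$ in every case, and so, using $\Gamma_{q+1}\geq 1$,
$6(1+\Gamma_{q+1}^{-1})\Gamma_{q+1}\lambda_{q,{\widetilde{n}},0}^{-1}\leq 12\,\Gamma_{q+1}\lambda_{q,{\widetilde{n}},0}^{-1}\leq 16(\lambda_{q+1}r_{1})^{-1}\,.$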
Now set
$\mathcal{C}_{A}=\mathcal{C}_{\textnormal{pipe}}\mathcal{C}_{\eta}\Gamma_{q+1},\qquad
r_{2}=r_{q+1,{\widetilde{n}}}\approx\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{{\widetilde{n}}+1}},$
where above we have appealed to (9.23) and (9.25). By (8.119) and Lemma 8.8,
the total number of pipes contained in $\Omega$ is no more than
$\mathcal{C}_{\textnormal{pipe}}\mathcal{C}_{\eta}\Gamma_{q+1}^{3}\frac{\lambda_{q,{\widetilde{n}}}^{2}}{\lambda_{q,{\widetilde{n}},0}^{2}}.$
Then we can write
$\displaystyle\mathcal{C}_{\textnormal{pipe}}\mathcal{C}_{\eta}\Gamma_{q+1}^{3}\frac{\lambda_{q,{\widetilde{n}}}^{2}}{\lambda_{q,{\widetilde{n}},0}^{2}}=\mathcal{C}_{A}\frac{r_{2}^{2}}{r_{1}^{2}},$
and so (4.29) is satisfied. Furthermore, the assumptions on the axes and the
neighborhoods of the axes required by Proposition 4.8 follow from Lemma 8.11,
which allows us to appeal to the conclusions of Lemma 4.7. Finally, from
(9.58a), we have that for ${\widetilde{n}}\geq 2$,
$\displaystyle C_{*}\mathcal{C}_{A}r_{2}^{4}\leq
16C_{*}\mathcal{C}_{\textnormal{pipe}}\mathcal{C}_{\eta}\Gamma_{q+1}\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{{\widetilde{n}}+1}\cdot
4}\leq\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{{\widetilde{n}}-1}\cdot\frac{5}{6}\cdot
3}\Gamma_{q+1}^{-3}=r_{1}^{3},$ (8.132)
showing that (4.31) is satisfied for ${\widetilde{n}}\geq 2$. In the cases
${\widetilde{n}}=0$ and ${\widetilde{n}}=1$, the desired inequalities follow
from (8.130) and (9.58b) and (9.58c), and so we have checked that (4.31) is
satisfied for all $0\leq{\widetilde{n}}\leq{n_{\rm max}}$. Then from the
conclusion (4.32) of Proposition 4.8, we have that on the support of $\Omega$,
which in particular contains the support of $\eta_{z}(\cdot,t_{z})$ from
(8.118), we can choose the support of $(\mathbb{W}\circ\Phi)_{z}$ to be
disjoint from the support of $(\mathbb{W}\circ\Phi)_{z^{\prime}}$ and
$(\mathbb{W}\circ\Phi)_{z^{\prime\prime}}$ for all overlapping
$z^{\prime\prime}$ and $z^{\prime}$. Then since
$D_{t,q}(\mathbb{W}\circ\Phi)_{z}=D_{t,q}(\mathbb{W}\circ\Phi)_{z^{\prime}}=D_{t,q}(\mathbb{W}\circ\Phi)_{z^{\prime\prime}}=0$,
(8.126) and (8.127) are satisfied, concluding the proof. ∎
### 8.8 Divergence corrector errors
###### Lemma 8.13.
For all $0\leq{\widetilde{n}}\leq{n_{\rm max}}$,
$1\leq{\widetilde{p}}\leq{p_{\rm max}}$, and $j\in\\{2,3\\}$, the divergence
corrector errors $\mathcal{O}_{{\widetilde{n}},1,j}$ satisfy
$\left\|\psi_{i,q}D^{k}D_{t,q}^{m}\mathcal{O}_{{\widetilde{n}},1,j}\right\|_{L^{1}}\lesssim\Gamma_{q+1}^{{-\mathsf{C_{R}}}-1}\delta_{q+2}\lambda_{q+1}^{k}\mathcal{M}\left(k,\mathsf{N}_{\textnormal{ind,t}},\Gamma_{q+1}^{i+1}\tau_{q}^{-1},\Gamma_{q+1}^{-1}\widetilde{\tau}_{q}^{-1}\right)$
for all $k,m\leq 3\mathsf{N}_{\textnormal{ind,v}}$.
###### Proof of Lemma 8.13.
The divergence corrector errors are given in (8.31), (8.49), and (8.66). The
estimates for $j\in\\{2,3\\}$ are similar, and so we shall only prove the
case $j=2$. Thus we estimate
$\left\|\psi_{i,q}D^{k}D_{t,q}^{m}\sum_{\xi,i^{\prime},j,k,{\widetilde{p}},\vec{l}}\left(\left(a_{(\xi)}\nabla\Phi_{(i^{\prime},k)}^{-1}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i^{\prime},k)}\right)\otimes\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i^{\prime},k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i^{\prime},k)}\right)\right)\right)\right\|_{L^{1}}.$
(8.133)
Recall that $\xi$ takes only six distinct values and that $j\leq{j_{\rm
max}}$, ${\widetilde{p}}\leq{p_{\rm max}}$ are bounded independently of $q$.
Furthermore, on the support of $\psi_{i,q}$, only $\psi_{i-1,q}$,
$\psi_{i,q}$, and $\psi_{i+1,q}$ are non-zero from (6.19). As a result, only
time cutoffs $\chi_{i-1,k,q}$, $\chi_{i,k,q}$, and $\chi_{i+1,k,q}$ may be
non-zero. Since for each $i$ the $\chi_{i,k,q}$’s form a partition of unity in
time in which at most two cutoff functions are non-zero at any fixed time, the
sum in (8.133) is, at every time, a finite sum whose number of non-zero terms
is bounded independently of $q$. Similarly, the sum over $\vec{l}$ forms a
partition of unity in which only finitely many cutoff functions overlap at any
fixed point in space and time. Therefore we may absorb the effects of $\xi$,
$j$, $k$, ${\widetilde{p}}$, and $\vec{l}$ into the implicit constant in the
inequality.
Using Hölder’s inequality and estimates (8.16) and (8.17) from Corollary 8.2
with $r=2$, $r_{2}=1$, and $r_{1}=\infty$, we have that for
$N,M\leq\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-2{\mathsf{N}_{\rm dec}}-9\right)\rfloor$,
$\displaystyle\sum_{\xi,i^{\prime},j,k,{\widetilde{p}},\vec{l}}\left\|\psi_{i,q}D^{k}D_{t,q}^{m}\left(\left(a_{(\xi)}\nabla\Phi_{(i^{\prime},k)}^{-1}\mathbb{W}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i^{\prime},k)}\right)\otimes\left(\nabla
a_{(\xi)}\times\left(\nabla\Phi_{(i^{\prime},k)}^{T}\mathbb{U}_{\xi,q+1,{\widetilde{n}}}\circ\Phi_{(i^{\prime},k)}\right)\right)\right)\right\|_{L^{1}}$
$\displaystyle\qquad\lesssim\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\lambda_{q+1}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i-\mathsf{c}_{\widetilde{\textnormal{n}}}+4},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right)\frac{\lambda_{q,{\widetilde{n}},{\widetilde{p}}}}{\lambda_{q+1}}$
$\displaystyle\qquad\lesssim\Gamma_{q+1}^{{-\mathsf{C_{R}}}-1}\delta_{q+2}\lambda_{q+1}^{k}\mathcal{M}\left(m,\mathsf{N}_{\textnormal{ind,t}},\tau_{q}^{-1}\Gamma_{q+1}^{i+1},\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}\right),$
which proves the desired estimate after recalling that for all
${\widetilde{n}}$,
$\displaystyle\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm{fin,}{\widetilde{n}}}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-2{\mathsf{N}_{\rm dec}}-9\right)\rfloor$
$\displaystyle\geq 3\mathsf{N}_{\textnormal{ind,v}}$
$\displaystyle\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\frac{\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}}{\lambda_{q+1}}$
$\displaystyle\leq\delta_{q+2}\Gamma_{q+1}^{{-\mathsf{C_{R}}}-1}$
$\displaystyle-\mathsf{c}_{\widetilde{\textnormal{n}}}+4$ $\displaystyle\leq
1\,,$
which follow from (9.60b), (9.34) and (9.54), and (9.42), respectively. ∎
### 8.9 Time support of perturbations and stresses
First, we prove (7.12). Indeed, appealing to (5.1), which defines
$\mathring{R}_{\ell_{q}}$ in terms of a mollifier applied to
$\mathring{R}_{q}$, (9.20), which defines the scale at which
$\mathring{R}_{q}$ is mollified, and (6.104), which ensures that the time
support of $w_{q+1,0}$ is only enlarged relative to the time support of
$\mathring{R}_{\ell_{q}}$ by
$2\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\Gamma_{q+1}^{2}\right)^{-1}$,
we achieve (7.12). To prove (7.14) and (7.16), first note that application of
the inverse divergence operators $\mathcal{H}$ and $\mathcal{R}^{*}$
_commutes_ with multiplication by $\overline{\chi}_{q,n,p}$. (This is simple
to check from the formula given in Proposition A.17 and the formula for the
standard nonlocal inverse divergence operator given in (A.100), both of which
involve only purely spatial operations, such as differentiation and
application of Fourier multipliers.) Then by the definition of
$\mathring{R}_{q+1}^{0}$ and $\mathring{H}_{q,n,p}^{0}$ in Section 8.3, we
achieve (7.14) and (7.16). Proving the inclusions in (7.19), (7.21), (7.23),
(7.26), (7.28), and (7.30), follows similarly from (6.104), the properties of
$\mathcal{H}$ and $\mathcal{R}^{*}$, and the definitions of
$\mathring{R}_{q+1}^{\widetilde{n}}$ and
$\mathring{H}_{q,n,p}^{\widetilde{n}}$ in Section 8.3. Finally, to see that
(7.4) follows from the inclusions already demonstrated, notice that the
threshold in (7.4) is weaker than any of the previous inclusions by a factor
of $\Gamma_{q+1}$, and so we may allow the time support of
$\mathring{R}_{q+1}^{\widetilde{n}}$ to expand slightly as ${\widetilde{n}}$
increases from $0$ to ${n_{\rm max}}$ while still meeting the desired
inclusion.
## 9 Parameters
The purpose of this section is to provide an exhaustive delineation of the
many parameters, inequalities, and notations which arise throughout the bulk
of the paper. In Section 9.1, we define the $q$-independent parameters _in
order_ , beginning with the regularity index $\beta$, and ending with the
number $a_{*}$, which will be used to absorb every implicit constant
throughout the paper. Then in Section 9.2, we define the parameters which
depend on $q$, as well as the parameters which depend in addition on $n$ and
$p$. The definitions of both the $q$-independent and $q$-dependent parameters
will appear rather arbitrary, but are justified in Section 9.3. This section
contains, in no particular order, consequences of the definitions made in the
previous two sections which are necessary to close the estimates in the proof.
Finally, Sections 9.4 and 9.5 contain the definitions of a few operators and
some notations that are used throughout the paper.
### 9.1 Definitions and hierarchy of the parameters
The parameters in our construction are chosen as follows:
1. (i)
Choose an arbitrary regularity parameter
$\beta\in[\nicefrac{{1}}{{3}},\nicefrac{{1}}{{2}})$. In light of [11, 43],
there is no reason to consider the regime $\beta<\nicefrac{{1}}{{3}}$.
2. (ii)
Choose $b\in(1,\nicefrac{{3}}{{2}})$ sufficiently small such that
$\displaystyle 2\beta b$ $\displaystyle<1\,.$ (9.1)
The heuristic reason for (9.1) is given in Section 2.4.1. Note that (9.1) and
the inequality $\beta<\nicefrac{{1}}{{2}}$ imply that
$\beta(2b+1)<\nicefrac{{3}}{{2}}$, which is a required inequality for the
heuristic estimate (2.22).
3. (iii)
With $\beta$ and $b$ chosen, we may now designate a number of parameters:
1. (a)
The parameter ${n_{\rm max}}$, which per Section 2.4.2 denotes the total
number of higher order stresses $\mathring{R}_{q,n}$ and thus primary
frequency divisions in between $\lambda_{q}$ and $\lambda_{q+1}$, is defined
as the smallest integer for which
$\displaystyle 1-2\beta b>\frac{5}{6}\left(\frac{4}{5}\right)^{{n_{\rm
max}}-1}\,.$ (9.2)
2. (b)
The parameter ${p_{\rm max}}$, which per Section 2.4.2 denotes the total
number of subdivided components $\mathring{R}_{q,n,p}$ of a higher order
stress $\mathring{R}_{q,n}$ and thus secondary frequency divisions in between
$\lambda_{q}$ and $\lambda_{q+1}$, is defined as the smallest integer for
which
$\displaystyle\frac{1}{{p_{\rm max}}}<\frac{1-2\beta b}{10}\,.$ (9.3)
3. (c)
The parameter ${\mathsf{C}_{b}}$ appearing in (3.21) is used to quantify the
$L^{1}$ norm of the velocity cutoff functions $\psi_{i,q}$. It is defined as
$\displaystyle{\mathsf{C}_{b}}=\frac{b+4}{b-1}\,.$ (9.4)
4. (d)
The exponent $\mathsf{C_{R}}$ is used in order to define a small parameter in
the estimate for the Reynolds stress, cf. (3.15). This parameter is then used
in the proof to absorb geometric constants in the construction. It is defined
as
$\displaystyle\mathsf{C_{R}}=4b+1\,.$ (9.5)
4. (iv)
The parameter $\mathsf{c_{0}}$, which is first introduced in (3.20) and
utilized in Sections 7 and 8 to control small losses in the sharp material
derivative estimates, is defined in terms of ${n_{\rm max}}$ as
$\displaystyle\mathsf{c_{0}}=4{n_{\rm max}}+5\,.$ (9.6)
5. (v)
The parameter $\varepsilon_{\Gamma}>0$, which is used in (9.18) to quantify
the _finest_ frequency scale between $\lambda_{q}$ and $\lambda_{q+1}$
utilized throughout the scheme, is defined as the greatest real number for
which the following inequalities hold
$\displaystyle\varepsilon_{\Gamma}\Big{(}7+\mathsf{C_{R}}+{n_{\rm
max}}(8+{\mathsf{C}_{b}})\Big{)}$ $\displaystyle<\frac{1-2\beta}{10}$ (9.7a)
$\displaystyle\varepsilon_{\Gamma}$
$\displaystyle<\frac{1}{100}\left(\frac{4}{5}\right)^{{n_{\rm max}}-1}$ (9.7b)
$\displaystyle\varepsilon_{\Gamma}$ $\displaystyle<\frac{b}{9(b-1)}$ (9.7c)
$\displaystyle 2b\varepsilon_{\Gamma}(\mathsf{c_{0}}+7)$
$\displaystyle<1-\beta\,.$ (9.7d)
6. (vi)
The parameter $\alpha_{\mathsf{R}}>0$ from the $L^{1}$ loss of the inverse
divergence operator is now defined as
$\displaystyle\alpha_{\mathsf{R}}=\frac{\varepsilon_{\Gamma}(b-1)}{2b}\,.$
(9.8)
7. (vii)
The parameters $\mathsf{N}_{\rm cut,t}$ and $\mathsf{N}_{\rm cut,x}$ are used
in Section 6 in order to define the velocity and stress cutoff functions.
$\mathsf{N}_{\rm cut,x}$ is the number of space derivatives which are embedded
into the definitions of these cutoff functions, while $\mathsf{N}_{\rm cut,t}$
is the number of material derivatives. See (6.6), (6.14), and (6.119). These
large parameters are chosen solely in terms of $b$ and $\varepsilon_{\Gamma}$
as
$\displaystyle\frac{1}{2}\mathsf{N}_{\rm cut,x}=\mathsf{N}_{\rm
cut,t}=\left\lceil\frac{3b}{\varepsilon_{\Gamma}(b-1)}+\frac{15b}{2}\right\rceil\,.$
(9.9)
8. (viii)
The parameter $\mathsf{N}_{\textnormal{ind,t}}$, which is the number of sharp
material derivatives propagated on stresses and velocities in Sections 3
through 8, is defined as
$\displaystyle\mathsf{N}_{\textnormal{ind,t}}=\left\lceil\frac{4}{\varepsilon_{\Gamma}(b-1)}\right\rceil\mathsf{N}_{\rm
cut,t}\,.$ (9.10)
9. (ix)
The parameter $\mathsf{N}_{\textnormal{ind,v}}$, whose primary role is to
quantify the number of sharp space derivatives propagated on the velocity
increments and stresses, cf. (3.12) and (3.15), is chosen as the smallest
integer for which we have the bounds
$\displaystyle
4b\mathsf{N}_{\textnormal{ind,t}}+8+b(\mathsf{C_{R}}+3)\varepsilon_{\Gamma}(b-1)+2\beta(b^{3}-1)$
$\displaystyle<\varepsilon_{\Gamma}(b-1)\mathsf{N}_{\textnormal{ind,v}}\,.$
(9.11)
10. (x)
The value of the decoupling parameter ${\mathsf{N}_{\rm dec}}$, which is used
in the $L^{p}$ decorrelation Lemma A.2, is chosen as the smallest integer for
which we have
$\displaystyle{\mathsf{N}_{\rm
dec}}\left(\frac{1}{30}\left(\frac{4}{5}\right)^{{n_{\rm
max}}}-\varepsilon_{\Gamma}\right)>\frac{4b}{b-1}\,.$ (9.12)
11. (xi)
The value of the parameter ${\mathsf{d}}$, which in essence is used in the
inverse divergence operator of Proposition A.18 to count the order of a
parametrix expansion, is chosen as the smallest integer for which we have
$\displaystyle({\mathsf{d}}-1)\left(\frac{1}{30}\left(\frac{4}{5}\right)^{{n_{\rm
max}}}-\varepsilon_{\Gamma}\right)>\frac{(12\mathsf{N}_{\textnormal{ind,v}}+7)b}{b-1}\,.$
(9.13)
12. (xii)
The value of $\mathsf{N}_{\rm fin}$, which is introduced in Section 3 and used
to quantify the highest order derivative estimates utilized throughout the
scheme is chosen as the smallest integer such that
$\displaystyle\frac{3}{2}\mathsf{N}_{\rm fin}>(2\mathsf{N}_{\rm
cut,t}+\mathsf{N}_{\rm
cut,x}+14\mathsf{N}_{\textnormal{ind,v}}+2{\mathsf{d}}+2{\mathsf{N}_{\rm
dec}}+12)2^{{n_{\rm max}}+1}\,.$ (9.14)
13. (xiii)
Having chosen all the previous parameters in items (i)–(xii), there exists a
sufficiently large parameter $a_{*}\geq 1$, which depends on all the
parameters listed above (which recursively means that $a_{*}=a_{*}(\beta,b)$),
and which allows us to choose $a$ to be an arbitrary number in the interval
$[a_{*},\infty)$. While we do not give a formula for $a_{*}$ explicitly, it is
chosen so that $a_{*}^{(b-1)\varepsilon_{\Gamma}}$ is at least twice as large
as all the implicit constants in the $\lesssim$ symbols throughout the
paper; note that these constants only depend on the parameters in items
(i)–(xii) — never on $q$ — which justifies the existence of $a_{*}$.
Having made the choices in items (i)–(xiii) above, we are now ready to define
the $q$-dependent parameters which appear in the proof.
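As a concrete illustration of items (i)–(iv), the short Python sketch below computes $n_{\rm max}$, ${p_{\rm max}}$, ${\mathsf{C}_{b}}$, $\mathsf{C_{R}}$, and $\mathsf{c_{0}}$ for one admissible choice $\beta=\nicefrac{1}{3}$, $b=1.01$; these numeric values are purely illustrative examples of the selection procedure, not claims about the values used in the paper.

```python
# One admissible choice of the regularity parameters from items (i)-(ii);
# the numeric values are illustrative, not the paper's specific choices.
beta, b = 1 / 3, 1.01
assert 2 * beta * b < 1                              # (9.1)

# (9.2): n_max is the smallest integer with 1 - 2*beta*b > (5/6)*(4/5)^(n_max-1)
n_max = 1
while 1 - 2 * beta * b <= (5 / 6) * (4 / 5) ** (n_max - 1):
    n_max += 1

# (9.3): p_max is the smallest integer with 1/p_max < (1 - 2*beta*b)/10
p_max = 1
while 1 / p_max >= (1 - 2 * beta * b) / 10:
    p_max += 1

C_b = (b + 4) / (b - 1)                              # (9.4)
C_R = 4 * b + 1                                      # (9.5)
c_0 = 4 * n_max + 5                                  # (9.6)
```

Note how taking $b$ close to $1$ forces $n_{\rm max}$ and ${p_{\rm max}}$ to grow only moderately, while ${\mathsf{C}_{b}}=(b+4)/(b-1)$ blows up; this is consistent with the fact that all later parameters are allowed to depend on $\beta$ and $b$.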
### 9.2 Definitions of the $q$-dependent parameters
#### 9.2.1 Parameters which depend on $q$
For $q\geq 0$, we define the fundamental frequency parameter used in this
paper as
$\displaystyle\lambda_{q}$
$\displaystyle=2^{\big{\lceil}(b^{q})\log_{2}a\big{\rceil}}\,.$ (9.15)
Definition (9.15) gives that $\lambda_{q}$ is an integer power of $2$, and
that we have the bounds
$\displaystyle a^{(b^{q})}\leq\lambda_{q}\leq
2a^{(b^{q})}\qquad\mbox{and}\qquad\frac{1}{3}\lambda_{q}^{b}\leq\lambda_{q+1}\leq
2\lambda_{q}^{b}$ (9.16)
for all $q\geq 0$. Throughout the paper the above two inequalities are used by
putting the factors of $\nicefrac{{1}}{{3}}$ and $2$ into the implicit
constants of $\lesssim$ symbols. In terms of $\lambda_{q}$, the fundamental
amplitude parameter used in the paper is
$\displaystyle\delta_{q}$
$\displaystyle=\lambda_{1}^{(b+1)\beta}\lambda_{q}^{-2\beta}\,.$ (9.17)
In terms of the parameter $\varepsilon_{\Gamma}$ from (9.7), we introduce a
parameter which is used repeatedly throughout the paper to mean “a tiny power
of the frequency parameter”:
$\displaystyle\Gamma_{q+1}$
$\displaystyle=\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{\varepsilon_{\Gamma}}\,.$
(9.18)
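For intuition, the definitions (9.15), (9.17), and (9.18) are easy to evaluate numerically. The following Python sketch uses small placeholder values of $a$, $b$, $\beta$, and $\varepsilon_{\Gamma}$ (chosen only for illustration; any admissible $a$ in the paper is far larger) and checks the bounds (9.16), together with the facts that $\delta_{q}$ is decreasing and $\Gamma_{q+1}>1$.

```python
import math

# Placeholder values for illustration only; the paper requires much larger a.
b, beta, eps_Gamma, a = 1.25, 0.4, 0.001, 100.0

def lam(q):
    # (9.15): lambda_q = 2^ceil(b^q * log2 a), an integer power of 2
    return 2.0 ** math.ceil((b ** q) * math.log2(a))

def delta(q):
    # (9.17): delta_q = lambda_1^{(b+1)beta} * lambda_q^{-2 beta}
    return lam(1) ** ((b + 1) * beta) * lam(q) ** (-2 * beta)

def Gamma(qp1):
    # (9.18): Gamma_{q+1} = (lambda_{q+1} / lambda_q)^{eps_Gamma}
    return (lam(qp1) / lam(qp1 - 1)) ** eps_Gamma

for q in range(6):
    # the bounds (9.16), including the absorbed factors 1/3 and 2
    assert a ** (b ** q) <= lam(q) <= 2 * a ** (b ** q)
    assert lam(q) ** b / 3 <= lam(q + 1) <= 2 * lam(q) ** b
    assert delta(q + 1) < delta(q) and Gamma(q + 1) > 1.0
```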
In order to cap off our derivative losses, we need to mollify in space and
time using the operators described in Section 9.4 below. This is done in terms
of the following space and time parameters:
$\displaystyle\widetilde{\lambda}_{q}$
$\displaystyle=\lambda_{q}\Gamma_{q+1}^{5}$ (9.19)
$\displaystyle\widetilde{\tau}_{q}^{-1}$
$\displaystyle=\tau_{q}^{-1}\widetilde{\lambda}_{q}^{3}\widetilde{\lambda}_{q+1}\,.$
(9.20)
While $\widetilde{\tau}_{q}$ is used for mollification and thus for rough
material derivative bounds, the fundamental time parameter used in the paper
for sharp material derivative bounds is
$\displaystyle\tau_{q}$
$\displaystyle=\left(\delta_{q}^{\nicefrac{{1}}{{2}}}\widetilde{\lambda}_{q}\Gamma_{q+1}^{\mathsf{c_{0}}+6}\right)^{-1}\,.$
(9.21)
Note that besides depending on the parameters introduced in (i)–(xiii), the
parameters introduced above only depend on $q$, but are independent of $n$ and
$p$.
#### 9.2.2 Parameters which depend also on $n$ and $p$
The rest of the parameters depend on $n\in\{0,\ldots,{n_{\rm max}}\}$ and on
$p\in\{0,\ldots,{p_{\rm max}}\}$. We start by defining the frequency
parameter $\lambda_{q,n}$ and the intermittency parameter $r_{q+1,n}$ by
$\displaystyle\lambda_{q,n}$
$\displaystyle=2^{\left\lceil\left(\frac{4}{5}\right)^{n+1}\log_{2}\lambda_{q}+\left(1-\left(\frac{4}{5}\right)^{n+1}\right)\log_{2}\lambda_{q+1}\right\rceil}$
(9.22) $\displaystyle r_{q+1,n}$
$\displaystyle=\frac{\lambda_{q,n}}{\lambda_{q+1}}$ (9.23)
for $0\leq n\leq{n_{\rm max}}$. In particular, (9.22) shows that
$\lambda_{q+1}r_{q+1,n}$ is an integer power of $2$, and we have the bound
$\displaystyle\lambda_{q}^{\left(\frac{4}{5}\right)^{n+1}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n+1}}\leq\lambda_{q,n}\leq
2\lambda_{q}^{\left(\frac{4}{5}\right)^{n+1}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n+1}}\,,$
(9.24)
while (9.23) implies that $r_{q+1,n}^{-1}$ is an integer power of $2$, and we
have the estimates
$\displaystyle\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{n+1}}\leq
r_{q+1,n}\leq
2\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{n+1}}\,.$
(9.25)
As with (9.16) we absorb the factors of $2$ in (9.24) and (9.25) into the
implicit constants in $\lesssim$ symbols.
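The interpolation (9.22) and the intermittency parameter (9.23) can likewise be sanity-checked numerically. The Python sketch below uses small placeholder values of $a$ and $b$ (illustrative only, not the paper's choices) to verify the bound (9.24), that $r_{q+1,n}\leq 1$, and that $\lambda_{q,n}$ increases toward $\lambda_{q+1}$ as $n$ grows.

```python
import math

# Placeholder values for illustration only.
b, a = 1.25, 100.0
n_max = 5

def lam(q):
    # (9.15): lambda_q = 2^ceil(b^q * log2 a)
    return 2.0 ** math.ceil((b ** q) * math.log2(a))

def lam_qn(q, n):
    # (9.22): interpolate between lambda_q and lambda_{q+1} with
    # weight (4/5)^(n+1), then round up to an integer power of 2
    t = (4 / 5) ** (n + 1)
    x = t * math.log2(lam(q)) + (1 - t) * math.log2(lam(q + 1))
    return 2.0 ** math.ceil(x)

def r_qn(q, n):
    # (9.23): intermittency parameter r_{q+1,n}
    return lam_qn(q, n) / lam(q + 1)

for q in range(4):
    for n in range(n_max + 1):
        t = (4 / 5) ** (n + 1)
        ref = lam(q) ** t * lam(q + 1) ** (1 - t)
        assert ref <= lam_qn(q, n) <= 2 * ref        # the bound (9.24)
        assert r_qn(q, n) <= 1.0
        if n < n_max:
            assert lam_qn(q, n) <= lam_qn(q, n + 1)  # moves toward lambda_{q+1}
```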
We also define the frequency parameters $\lambda_{q,n,p}$ by
$\displaystyle\lambda_{q,0,p}$
$\displaystyle=\Gamma_{q+1}\widetilde{\lambda}_{q}$ $\displaystyle n=0,0\leq
p\leq{p_{\rm max}}$ (9.26) $\displaystyle\lambda_{q,1,0}$
$\displaystyle=\lambda_{q}^{\frac{4}{5}}\lambda_{q+1}^{\frac{1}{5}}$
$\displaystyle n=1,p=0$ (9.27) $\displaystyle\lambda_{q,n,0}$
$\displaystyle=\lambda_{q}^{\left(\frac{4}{5}\right)^{n-1}\cdot\frac{5}{6}}\lambda_{q+1}^{1-\left(\frac{4}{5}\right)^{n-1}\cdot\frac{5}{6}}$
$\displaystyle 2\leq n\leq{n_{\rm max}}+1$ (9.28)
$\displaystyle\lambda_{q,n,p}$
$\displaystyle=\lambda_{q,n,0}^{1-\nicefrac{{p}}{{{p_{\rm
max}}}}}\lambda_{q,n+1,0}^{\nicefrac{{p}}{{{p_{\rm max}}}}}$ $\displaystyle
1\leq n\leq{n_{\rm max}},0\leq p\leq{p_{\rm max}}.$ (9.29)
For $0\leq n\leq{n_{\rm max}}$, we define
$\displaystyle f_{q,0}$ $\displaystyle=1$ $\displaystyle n=0$ (9.30)
$\displaystyle f_{q,n}$
$\displaystyle=\left(\frac{\lambda_{q,n+1,0}}{\lambda_{q,n,0}}\right)^{\nicefrac{{1}}{{{p_{\rm
max}}}}}$ $\displaystyle 1\leq n\leq{n_{\rm max}}.$ (9.31)
We define $\delta_{q+1,0,p}$ by
$\displaystyle\delta_{q+1,0,1}$
$\displaystyle=\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}$ $\displaystyle p=1$
(9.32) $\displaystyle\delta_{q+1,0,p}$ $\displaystyle=0$ $\displaystyle 2\leq
p\leq{p_{\rm max}}.$ (9.33)
When $1\leq n\leq{n_{\rm max}}$ and $1\leq p\leq{p_{\rm max}}$, we define
$\delta_{q+1,n,p}$ by
$\delta_{q+1,n,p}=\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}\cdot\left(\frac{\widetilde{\lambda}_{q}}{\lambda_{q,n,p-1}}\right)\cdot\prod_{n^{\prime}<n}\left(f_{q,n^{\prime}}\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\right)\,.$
(9.34)
We remark that by the definition of $\lambda_{q,1,0}$ given in (9.27), and
more generally $\lambda_{q,n,p}$ in (9.29), the fact that $n\geq 1$, and a
large choice of ${p_{\rm max}}$ which makes $f_{q,n}$ (defined in (9.31))
small, $\delta_{q+1,n,p}$ is significantly smaller than
$\Gamma_{q}^{-\mathsf{C_{R}}}\delta_{q+1}$.
For $1\leq n\leq{n_{\rm max}}$, we define $\mathsf{c}_{\textnormal{n}}$ in
terms of $\mathsf{c_{0}}$ by
$\displaystyle\mathsf{c}_{\textnormal{n}}$
$\displaystyle=\mathsf{c_{0}}-4n\,.$ (9.35)
For $n=0$, we set
$\mathsf{N}_{\textnormal{fin},0}=\frac{3}{2}\mathsf{N}_{\rm fin},$ (9.36)
while for $1\leq n\leq{n_{\rm max}}$, we define $\mathsf{N}_{\rm fin,n}$
inductively on $n$ by using (9.36) and the formula
$\mathsf{N}_{\rm
fin,n}=\left\lfloor\frac{1}{2}\left(\mathsf{N}_{\textnormal{fin},\textnormal{n}-1}-\mathsf{N}_{\rm
cut,t}-\mathsf{N}_{\rm cut,x}-6\right)-{\mathsf{d}}\right\rfloor\,.$ (9.37)
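To make the recursion concrete, the following Python sketch iterates (9.37) from (9.36) using placeholder values of $\mathsf{N}_{\rm fin}$, $\mathsf{N}_{\rm cut,t}$, $\mathsf{N}_{\rm cut,x}$, and ${\mathsf{d}}$ (chosen only for illustration, much smaller than the actual choices), and checks the lower bound $\mathsf{N}_{\rm fin,n}\geq 2^{-n}\mathsf{N}_{\textnormal{fin},0}-(2{\mathsf{d}}+\mathsf{N}_{\rm cut,t}+\mathsf{N}_{\rm cut,x}+8)$ quoted in Section 9.3.

```python
# Placeholder values for illustration only; the paper's values are far larger.
N_cut_t, N_cut_x, d = 10, 20, 5
N_fin = 10_000
N_fin_0 = 3 * N_fin // 2            # (9.36), with 3/2 * N_fin an integer here

def N_fin_n(n):
    # (9.37): N_fin,n = floor((N_fin,n-1 - N_cut,t - N_cut,x - 6)/2 - d),
    # iterated down from N_fin,0
    N = N_fin_0
    for _ in range(n):
        N = (N - N_cut_t - N_cut_x - 6) // 2 - d
    return N

for n in range(1, 7):
    assert N_fin_n(n) < N_fin_n(n - 1)   # the sequence is monotone decreasing
    # the lower bound quoted in Section 9.3
    assert N_fin_n(n) >= 2 ** (-n) * N_fin_0 - (2 * d + N_cut_t + N_cut_x + 8)
```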
### 9.3 Inequalities and consequences of the parameter definitions
The definitions made in the previous two sections have the following
consequences, which will be used frequently throughout the paper.
Due to (9.15) we have that
$\Gamma_{q+1}\geq(\nicefrac{{1}}{{2}})^{b\varepsilon_{\Gamma}}\lambda_{q}^{(b-1)\varepsilon_{\Gamma}}\geq(\nicefrac{{1}}{{2}})^{b\varepsilon_{\Gamma}}\lambda_{0}^{(b-1)\varepsilon_{\Gamma}}\geq(\nicefrac{{1}}{{2}})a_{*}^{(b-1)\varepsilon_{\Gamma}}$.
As was already mentioned in item (xiii), we have chosen $a_{*}$ to be
sufficiently large so that $a_{*}^{(b-1)\varepsilon_{\Gamma}}$ is at least
twice as large as all the implicit constants appearing in all $\lesssim$
symbols throughout the paper. Therefore, for any $q\geq 0$, we may use a
single power of $\Gamma_{q+1}$ to absorb any implicit constant in the paper:
an inequality of the type $A\lesssim B$ may be rewritten as
$A\leq\Gamma_{q+1}B$.
From (9.18), (9.19), and (9.7c), we have that
$\displaystyle\Gamma_{q+1}^{4}\widetilde{\lambda}_{q}\leq\lambda_{q+1}\,.$
(9.38)
From the definition (9.21) of $\tau_{q}$ and (9.35), which gives that
$\mathsf{c}_{\textnormal{n}}$ is decreasing with respect to $n$, we have that
for all $0\leq n\leq{n_{\rm max}}$
$\displaystyle\Gamma_{q+1}^{\mathsf{c}_{\textnormal{n}}+6}\delta_{q}^{\nicefrac{{1}}{{2}}}{\widetilde{\lambda}_{q}}$
$\displaystyle\leq\tau_{q}^{-1}\,.$ (9.39)
Using the definitions (9.17), (9.18), (9.19), and (9.21), writing out
everything in terms of $\lambda_{q-1}$, and appealing to (9.7d), we have that
$\displaystyle\tau_{q-1}^{-1}\Gamma_{q+1}^{3+\mathsf{c_{0}}}$
$\displaystyle\leq\tau_{q}^{-1}$ (9.40)
$\displaystyle\tau_{q-1}^{-1}\Gamma_{q+1}$
$\displaystyle\leq\delta_{q}^{\nicefrac{{1}}{{2}}}\lambda_{q}\,.$ (9.41)
From the definitions (9.6) of $\mathsf{c_{0}}$ and (9.35) of
$\mathsf{c}_{\textnormal{n}}$, we have that for all $0\leq n\leq{n_{\rm
max}}$,
$-\mathsf{c}_{\textnormal{n}}+4\leq-1.$ (9.42)
From the definition of $\widetilde{\tau}_{q}$, it is immediate that
$\displaystyle\tau_{q}^{-1}{\widetilde{\lambda}_{q}^{4}}$
$\displaystyle\leq{\widetilde{\tau}_{q}^{-1}}\leq\tau_{q}^{-1}{\widetilde{\lambda}_{q}^{3}}{\widetilde{\lambda}_{q+1}}\,.$
(9.43)
From (9.7d), the assumption that $\beta\geq\nicefrac{{1}}{{3}}$, and the
assumption $b\leq\nicefrac{{3}}{{2}}$, we can write everything out in terms of
$\lambda_{q}$ to deduce that
$\displaystyle\tau_{q}^{-1}\Gamma_{q+1}^{9}$
$\displaystyle\leq\tau_{q+1}^{-1}\,.$ (9.44)
From the definitions (9.22) and (9.26)–(9.29), for all $0\leq n\leq{n_{\rm
max}}$ and $0\leq p\leq{p_{\rm max}}$ we have
$\frac{\lambda_{q,n,p}}{\lambda_{q,n}}\ll 1\,.$
More precisely, when $n=0$ we have that
$\displaystyle\frac{\Gamma_{q+1}\lambda_{q,n,p}}{\lambda_{q,n}}=\frac{\Gamma_{q+1}^{2}\widetilde{\lambda}_{q}}{\lambda_{q,0}}=\frac{\Gamma_{q+1}^{7}\lambda_{q}}{\lambda_{q,0}}=\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{-\frac{1}{5}+7\varepsilon_{\Gamma}}$
(9.45)
while for $n\geq 1$ it holds that
$\displaystyle\frac{\Gamma_{q+1}\lambda_{q,n,p}}{\lambda_{q,n}}\leq\frac{\Gamma_{q+1}\lambda_{q,n+1,0}}{\lambda_{q,n}}=\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{(\frac{4}{5})^{n}(\frac{4}{5}-\frac{5}{6})+\varepsilon_{\Gamma}}\leq\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{-\frac{1}{30}(\frac{4}{5})^{{n_{\rm
max}}}+\varepsilon_{\Gamma}}$ (9.46)
as it is clear that the quotient on the left hand side is largest when
$n={n_{\rm max}}$. Note that due to (9.2) we have
$\frac{1}{30}\left(\frac{4}{5}\right)^{{n_{\rm
max}}}-\varepsilon_{\Gamma}<\frac{1-2\beta
b}{30}-\varepsilon_{\Gamma}\leq\frac{1}{5}-7\varepsilon_{\Gamma}$; here we
also used that $\varepsilon_{\Gamma}\leq\frac{1}{36}$, which handily follows
from (9.7b). Combining (9.45) and (9.46) we thus arrive at
$\displaystyle\frac{\Gamma_{q+1}\lambda_{q,n,p}}{\lambda_{q,n}}\leq\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{-\frac{1}{30}(\frac{4}{5})^{{n_{\rm
max}}}+\varepsilon_{\Gamma}}\leq\left(2\lambda_{q}^{b-1}\right)^{-\frac{1}{30}(\frac{4}{5})^{{n_{\rm
max}}}+\varepsilon_{\Gamma}}$ (9.47)
for all $0\leq n\leq{n_{\rm max}}$ and $0\leq p\leq{p_{\rm max}}$. Combining
the above estimate with our choice of ${\mathsf{N}_{\rm dec}}$ in (9.12), we
thus arrive at
$\displaystyle\lambda_{q+1}^{4}\leq\left(\frac{\lambda_{q,{\widetilde{n}}}}{2\pi\sqrt{3}\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{\widetilde{p}}}}\right)^{{\mathsf{N}_{\rm
dec}}}$ (9.48)
for all $0\leq{\widetilde{n}}\leq{n_{\rm max}}$ and
$1\leq{\widetilde{p}}\leq{p_{\rm max}}$.
Next, we list a few consequences of the fact that
$\mathsf{N}_{\textnormal{ind,v}}\gg\mathsf{N}_{\textnormal{ind,t}}$, as
specified in (9.11). First, we note from (9.43) that
$\displaystyle\widetilde{\tau}_{q-1}^{-1}\tau_{q-1}\leq\widetilde{\lambda}_{q-1}^{3}\widetilde{\lambda}_{q}\leq\lambda_{q}^{4}$
(9.49)
where in the second inequality we have used that
$\varepsilon_{\Gamma}\leq\frac{3}{20b}$. In turn, the above inequality
combined with (9.11) implies the following estimates, all of which are used
for the first time in Section 5:
$\displaystyle\lambda_{q-1}^{8}\Gamma_{q+1}^{1+\mathsf{C_{R}}}\frac{\delta_{q-1}}{\delta_{q+2}}\left(\widetilde{\tau}_{q-1}^{-1}\tau_{q-1}\right)^{\mathsf{N}_{\textnormal{ind,t}}}$
$\displaystyle\leq\Gamma_{q}^{\mathsf{N}_{\textnormal{ind,v}}-2}$ (9.50a)
$\displaystyle\widetilde{\lambda}_{q}^{2}\left(\tilde{\tau}_{q-1}^{-1}\tau_{q-1}\right)^{\mathsf{N}_{\textnormal{ind,t}}}$
$\displaystyle\leq\Gamma_{q+1}^{5\mathsf{N}_{\textnormal{ind,v}}}$ (9.50b)
$\displaystyle\lambda_{q-1}^{4}\delta_{q-1}^{\nicefrac{{1}}{{2}}}\Gamma_{q}^{2}\delta_{q}^{-\nicefrac{{1}}{{2}}}(\widetilde{\tau}_{q-1}^{-1}\tau_{q-1})^{\mathsf{N}_{\textnormal{ind,t}}}$
$\displaystyle\leq\Gamma_{q}^{\mathsf{N}_{\textnormal{ind,v}}}\,.$ (9.50c)
Next, as a consequence of our choice of $\mathsf{N}_{\rm cut,t}$ and
$\mathsf{N}_{\rm cut,x}$ in (9.9), we obtain the following bounds, which are
used in Section 6
$\displaystyle\widetilde{\lambda}_{q}^{\nicefrac{{3}}{{2}}}\Gamma_{q}^{-\mathsf{N}_{\rm
cut,t}}\leq\lambda_{q}^{3}\Gamma_{q}^{-\mathsf{N}_{\rm cut,t}}\leq 1$
(9.51)
for all $q\geq 0$. The fact that $\mathsf{N}_{\textnormal{ind,t}}$ is taken to
be much larger than $\mathsf{N}_{\rm cut,t}$, as expressed in (9.10), implies
when combined with (9.49) the following bound, which is also used in Section
6:
$\displaystyle\left(\tau_{q}\widetilde{\tau}_{q}^{-1}\right)^{\mathsf{N}_{\rm
cut}}\leq\lambda_{q+1}^{4\mathsf{N}_{\rm
cut}}\leq\Gamma_{q+1}^{\mathsf{N}_{\textnormal{ind,t}}}$ (9.52)
for all $q\geq 1$.
The parameter $\alpha_{\mathsf{R}}$ is chosen in (9.8) in order to ensure the
inequality
$\lambda_{q+1}^{\alpha_{\mathsf{R}}}\leq\Gamma_{q+1}$ (9.53)
for all $q\geq 0$. This fact is used in Section 8. Several other, much more
hideous, parameter inequalities are used in Section 8, and for the readers’
convenience we list them next. First, we claim that
$\Gamma_{q+1}\Gamma_{q}^{{-\mathsf{C_{R}}}}\delta_{q+1}\widetilde{\lambda}_{q}\prod_{n^{\prime}\leq{n_{\rm
max}}}\left(f_{q,n^{\prime}}\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\right)\lambda_{q,{n_{\rm
max}}+1,0}^{-1}\leq\Gamma_{q+1}^{-\mathsf{C_{R}}}\Gamma_{q+1}^{-1}\delta_{q+2}\,.$
(9.54)
In order to verify the above bound, we appeal to the choices made in (9.1),
(9.2), and (9.3), to the definitions (9.19), (9.27), (9.28), (9.31), and the
fact that ${\widetilde{n}}\leq{n_{\rm max}}$, to deduce that the left side of
(9.54) is bounded from above by
$\displaystyle\delta_{q+1}\Gamma_{q+1}^{6+{n_{\rm
max}}(8+{\mathsf{C}_{b}})}\frac{\lambda_{q}}{\lambda_{q,{n_{\rm
max}}+1,0}}\left(\frac{\lambda_{q,{n_{\rm
max}}+1,0}}{\lambda_{q,1,0}}\right)^{\frac{1}{{p_{\rm max}}}}$
$\displaystyle=\delta_{q+1}\Gamma_{q+1}^{6+{n_{\rm
max}}(8+{\mathsf{C}_{b}})}\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(1-(\frac{4}{5})^{{n_{\rm
max}}}\frac{5}{6}\right)}\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{\frac{1}{{p_{\rm
max}}}\left(\frac{4}{5}-(\frac{4}{5})^{{n_{\rm max}}}\frac{5}{6}\right)}$
$\displaystyle\leq\frac{\lambda_{q}\delta_{q+1}}{\lambda_{q+1}}\Gamma_{q+1}^{6+{n_{\rm
max}}(8+{\mathsf{C}_{b}})}\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{(1-2\beta
b)\frac{4}{5}}\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{\frac{1-2\beta
b}{10}\frac{4}{5}}$
$\displaystyle\leq\left(\Gamma_{q+1}^{-\mathsf{C_{R}}}\Gamma_{q+1}^{-1}\delta_{q+2}\right)\frac{\lambda_{q}\delta_{q+1}}{\lambda_{q+1}\delta_{q+2}}\Gamma_{q+1}^{7+\mathsf{C_{R}}+{n_{\rm
max}}(8+{\mathsf{C}_{b}})}\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{(1-2\beta
b)\frac{22}{25}}$
$\displaystyle\leq\left(\Gamma_{q+1}^{-\mathsf{C_{R}}}\Gamma_{q+1}^{-1}\delta_{q+2}\right)\Gamma_{q+1}^{7+\mathsf{C_{R}}+{n_{\rm
max}}(8+{\mathsf{C}_{b}})}\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{-(1-2\beta
b)\frac{3}{25}}\,.$
The proof of (9.54) is now completed by appealing to (9.7a), which ensures
that $\Gamma_{q+1}$ represents a sufficiently small power of
$\nicefrac{{\lambda_{q+1}}}{{\lambda_{q}}}$.
Next, we claim that due to our choice of ${\mathsf{d}}$, we have
$\Gamma_{q}^{-\mathsf{C_{R}}}\delta_{q+1}\widetilde{\lambda}_{q}\prod_{n^{\prime}\leq{n_{\rm
max}}}\left(f_{q,n^{\prime}}\Gamma_{q+1}^{8+{\mathsf{C}_{b}}}\right)\lambda_{q+1}\left(\frac{\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{p_{\rm
max}}}}{\lambda_{q,{\widetilde{n}}}}\right)^{{\mathsf{d}}-1}\left(\lambda_{q+1}^{4}\right)^{3\mathsf{N}_{\textnormal{ind,v}}}\leq\frac{\delta_{q+2}}{\lambda_{q+1}^{5}}.$
(9.55)
In order to verify the above bound we use the previously established estimate
(9.54) in conjunction with (9.47); after dropping the helpful factor of
$\Gamma_{q+1}^{-2-\mathsf{C_{R}}}$, we deduce that the left side of (9.55) is
bounded from above by
$\displaystyle\delta_{q+2}\lambda_{q,{n_{\rm
max}}+1,0}\lambda_{q+1}\left(\frac{\Gamma_{q+1}\lambda_{q,{\widetilde{n}},{p_{\rm
max}}}}{\lambda_{q,{\widetilde{n}}}}\right)^{{\mathsf{d}}-1}\left(\lambda_{q+1}^{4}\right)^{3\mathsf{N}_{\textnormal{ind,v}}}$
$\displaystyle\leq\frac{\delta_{q+2}}{\lambda_{q+1}^{5}}\lambda_{q+1}^{3}\left(2\lambda_{q}^{b-1}\right)^{-({\mathsf{d}}-1)\left(\frac{1}{30}(\frac{4}{5})^{{n_{\rm
max}}}-\varepsilon_{\Gamma}\right)}\lambda_{q+1}^{12\mathsf{N}_{\textnormal{ind,v}}}\,.$
The choice of ${\mathsf{d}}$ in (9.13) shows that the above estimate directly
implies (9.55).
The amplitudes of the higher order corrections $w_{q+1,n,p}$ must meet the
inductive assumptions stated in (3.13). In order to meet the satisfactory
bound in Remark 8.3, from (9.32)–(9.34), we deduce the bound
$\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}^{\nicefrac{{1}}{{2}}}\leq\Gamma_{q+1}^{-2}\delta_{q+1}^{\nicefrac{{1}}{{2}}}.$
(9.56)
Indeed, the case ${\widetilde{n}}=0$ follows from the definition of
$\mathsf{C_{R}}$ in (9.5), while the case ${\widetilde{n}}\geq 1$ is a
consequence of the definition (9.34), which implies that
$\delta_{q+1,{\widetilde{n}},{\widetilde{p}}}\leq\delta_{q+1,0,1}$ for any
${\widetilde{n}}\geq 1$ and any ${\widetilde{p}}\geq 1$.
Another parameter inequality which is necessary to estimate the transport and
Nash errors in Sections 8.4 and 8.5, is
$\Gamma_{q+1}^{4+\frac{{\mathsf{C}_{b}}}{2}}\delta_{q+1,{\widetilde{n}},1}^{\frac{1}{2}}\tau_{q}^{-1}r_{q+1,{\widetilde{n}}}\lambda_{q+1}^{-1}\leq\Gamma_{q+1}^{{-\mathsf{C_{R}}}-1}\delta_{q+2}$
(9.57)
for all $0\leq{\widetilde{n}}\leq{n_{\rm max}}$. When ${\widetilde{n}}=0$,
this inequality may be deduced by writing everything out in terms of
$\lambda_{q}$, appealing to the appropriate definitions, and then using that
$\beta<\nicefrac{{1}}{{2}}$ from item (i), (9.1), (9.4), (9.5), (9.6), (9.7b),
after which one arrives at
$\displaystyle\varepsilon_{\Gamma}\left(4+\frac{b-4}{2}+\frac{1}{2}{-\mathsf{C_{R}}}+\mathsf{c_{0}}+12\right)+\beta(2b+1)<\frac{1}{100}+\frac{3}{2}<\frac{9}{5}.$
It is clear there is quite a bit of room in the above inequality, and
similarly, (9.57) becomes _most_ restrictive when ${\widetilde{n}}={n_{\rm
max}}$. In this case, one may again write everything out in terms of
$\lambda_{q}$, move everything to the left hand side, and appeal to most of
the same referenced inequalities as before to see that
$\displaystyle\varepsilon_{\Gamma}\left(22+4{n_{\rm
max}}\right)+\beta(2b+1)-\frac{3}{2}$
$\displaystyle\leq\varepsilon_{\Gamma}\left(22+4{n_{\rm
max}}\right)+\beta-\frac{1}{2}<0\,,$
where in the last inequality we have instead appealed to (9.7a) rather than
(9.7b), proving (9.57) in the remaining cases $1\leq{\widetilde{n}}\leq{n_{\rm
max}}$.
Parameter inequalities which play a crucial role in showing that the
Oscillation 2 type errors vanish, see Section 8.7, are:
$\displaystyle
16C_{*}\mathcal{C}_{\textnormal{pipe}}\mathcal{C}_{\eta}\Gamma_{q+1}\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{{\widetilde{n}}+1}\cdot
4}$
$\displaystyle<\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{{\widetilde{n}}-1}\cdot\frac{5}{6}\cdot
3}\Gamma_{q+1}^{-3}\,,\qquad\mbox{for}\qquad{\widetilde{n}}\geq 2\,,$ (9.58a)
$\displaystyle
16C_{*}\mathcal{C}_{\textnormal{pipe}}\mathcal{C}_{\eta}\Gamma_{q+1}\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\frac{4}{5}\cdot
4}$
$\displaystyle<\left(\frac{\widetilde{\lambda}_{q}}{\lambda_{q+1}}\right)^{3}\,\,,$
(9.58b) $\displaystyle
16C_{*}\mathcal{C}_{\textnormal{pipe}}\mathcal{C}_{\eta}\Gamma_{q+1}^{4}\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\left(\frac{4}{5}\right)^{2}\cdot
4}$
$\displaystyle<\left(\frac{\lambda_{q}}{\lambda_{q+1}}\right)^{\frac{4}{5}\cdot
3}\,.$ (9.58c)
where $C_{*}$ is the geometric constant from Lemma 4.8–estimate (4.31),
$\mathcal{C}_{\textnormal{pipe}}$ is a geometric constant which appears in
Lemma 8.11–estimate (8.119), and $\mathcal{C}_{\eta}$ is the constant from
Lemma 8.8. In order to verify (9.58), we first note that
$C_{*}\mathcal{C}_{\textnormal{pipe}}\mathcal{C}_{\eta}\leq\Gamma_{q+1}$,
since $a_{*}$ was chosen to be sufficiently large. Inequality (9.58b) is then
an immediate consequence of the fact that $\nicefrac{{16}}{{5}}>3$. The bound
(9.58a) follows from
$\Gamma_{q+1}^{5}<\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{\left(\frac{4}{5}\right)^{{n_{\rm
max}}-1}\left(\frac{64}{25}-\frac{5}{2}\right)}\leq\left(\frac{\lambda_{q+1}}{\lambda_{q}}\right)^{\left(\frac{4}{5}\right)^{{\widetilde{n}}+1}\cdot
4-\left(\frac{4}{5}\right)^{{\widetilde{n}}-1}\cdot\frac{5}{6}\cdot 3}\,.$
(9.59)
The second inequality in the above display is a consequence of
${\widetilde{n}}\leq{n_{\rm max}}$, while the first one follows from (9.7b).
Finally, inequality (9.58c) is a consequence of the fact that
$\nicefrac{{64}}{{25}}-\nicefrac{{12}}{{5}}>\nicefrac{{64}}{{25}}-\nicefrac{{5}}{{2}}$
and the first inequality in (9.59), which bounds $\Gamma_{q+1}^{5}$.
We conclude this section by verifying a few inequalities concerning the
parameter $\mathsf{N}_{\rm fin,n}$, which counts the number of available
space-plus-material derivatives for the residual stress $\mathring{R}_{q,n}$.
For all $0\leq n\leq{n_{\rm max}}$ we require that
$\displaystyle\mathsf{N}_{\textnormal{ind,t}},\,2{\mathsf{N}_{\rm dec}}+4$ $\displaystyle\leq\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm fin,n}-\mathsf{N}_{\rm cut,t}-\mathsf{N}_{\rm cut,x}-5\right)\rfloor-{\mathsf{d}}\,,$ (9.60a)
$\displaystyle 14\mathsf{N}_{\textnormal{ind,v}}$ $\displaystyle\leq\mathsf{N}_{\rm fin,n}-\mathsf{N}_{\rm cut,t}-\mathsf{N}_{\rm cut,x}-2{\mathsf{N}_{\rm dec}}-9\,,$ (9.60b)
$\displaystyle 6\mathsf{N}_{\textnormal{ind,v}}$ $\displaystyle\leq\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm fin,n}-\mathsf{N}_{\rm cut,t}-\mathsf{N}_{\rm cut,x}-6\right)\rfloor-{\mathsf{d}}\,,$ (9.60c)
$\displaystyle 6\mathsf{N}_{\textnormal{ind,v}}$ $\displaystyle\leq\lfloor\nicefrac{{1}}{{4}}\left(\mathsf{N}_{\rm fin,n}-\mathsf{N}_{\rm cut,t}-\mathsf{N}_{\rm cut,x}-7\right)\rfloor\,.$ (9.60d)
Additionally, for
$0\leq{\widetilde{n}}<n\leq{n_{\rm max}}$, we require that
$\lfloor\nicefrac{{1}}{{2}}\left(\mathsf{N}_{\rm fin,{\widetilde{n}}}-\mathsf{N}_{\rm cut,t}-\mathsf{N}_{\rm cut,x}-6\right)\rfloor-{\mathsf{d}}\geq\mathsf{N}_{\rm fin,n}$ (9.61)
holds. The inequality (9.61) is a direct consequence of the recursive formula
(9.37) and of the fact that the sequence $\mathsf{N}_{\rm fin,n}$ is monotone
decreasing with respect to $n$. Using (9.36) and (9.37) one may show that
$\mathsf{N}_{\rm fin,n}\geq 2^{-n}\mathsf{N}_{\rm fin,0}-(2{\mathsf{d}}+\mathsf{N}_{\rm cut,t}+\mathsf{N}_{\rm cut,x}+8)\,.$
Since the bounds (9.60) are most restrictive for $n={n_{\rm max}}$, they now
readily follow from our choice (9.14).
### 9.4 Mollifiers and Fourier projectors
Let $\phi:\mathbb{R}\rightarrow\mathbb{R}$ be a $C^{\infty}$ function
compactly supported in the set $\\{\zeta:|\zeta|\leq 1\\}$ which in
addition satisfies
$\int\phi(\zeta)\,d\zeta=1,\qquad\int\phi(\zeta)\,\zeta^{n}\,d\zeta=0\quad\forall\,n=1,2,\ldots,\mathsf{N}_{\textnormal{ind,v}}.$ (9.62)
Let $\widetilde{\phi}:\mathbb{R}^{3}\rightarrow\mathbb{R}$ be defined by
$\widetilde{\phi}(x)=\phi(|x|)$. For $\lambda,\mu\in\mathbb{R}$, define
$\phi_{\lambda}^{(x)}(x)={\lambda}^{3}\,\widetilde{\phi}\left(\lambda x\right),\qquad\phi_{\mu}^{(t)}(t)=\mu\,\phi(\mu t).$ (9.63)
For $q\in\mathbb{N}$, we define the spatial and temporal convolution
operators
$\mathcal{P}_{q,x}:=\phi_{\widetilde{\lambda}_{q}}^{(x)}\ast,\qquad\mathcal{P}_{q,t}:=\phi_{\widetilde{\tau}_{q-1}^{-1}}^{(t)}\ast,\qquad\mathcal{P}_{q,x,t}:=\mathcal{P}_{q,x}\mathcal{P}_{q,t}.$
(9.64)
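As a concrete illustration of the normalization in (9.62), the following sketch (not from the paper; pure Python, midpoint quadrature) normalizes a standard compactly supported bump serving as a hypothetical stand-in for $\phi$. Note that for a plain bump only the odd moments vanish, by symmetry; arranging all moments up to $\mathsf{N}_{\textnormal{ind,v}}$ to vanish requires a further correction of the kernel.

```python
import math

def bump(z):
    # standard compactly supported C^infty bump, supported in |z| < 1
    return math.exp(-1.0 / (1.0 - z * z)) if abs(z) < 1.0 else 0.0

# midpoint quadrature on [-1, 1]
n = 20000
h = 2.0 / n
zs = [-1.0 + (k + 0.5) * h for k in range(n)]
raw = [bump(z) for z in zs]

c = sum(raw) * h                 # zeroth moment of the unnormalized bump
phi = [v / c for v in raw]       # normalized so that its integral is 1

moment0 = sum(phi) * h                               # = 1 by construction
moment1 = sum(p * z for p, z in zip(phi, zs)) * h    # = 0 by symmetry
```

Since the midpoint grid is symmetric about the origin, the first moment cancels to rounding error.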
We will use the notation $\mathbb{P}_{\leq\lambda}$ to denote the standard
(Littlewood-Paley) Fourier projection operators onto spatial frequencies which
are less than or equal to $\lambda$, $\mathbb{P}_{\geq\lambda}$ to denote the
standard Littlewood-Paley projection operators onto spatial frequencies which
are greater than or equal to $\lambda$, and the notation
$\mathbb{P}_{[\lambda_{1},\lambda_{2})}$
to denote the Fourier projection operator onto spatial frequencies $\xi$ such
that $\lambda_{1}\leq|\xi|<\lambda_{2}$. If $\lambda_{1}=\lambda_{2}$, we
adopt the convention that $\mathbb{P}_{[\lambda_{1},\lambda_{2})}f=0$ for any
$f$.
### 9.5 Notation
We collect here notation used throughout the paper:
$\displaystyle\mathcal{M}\left(n,N,\lambda,\Lambda\right)$ $\displaystyle=\lambda^{\min\\{n,N\\}}\Lambda^{\max\\{n-N,0\\}}$
$\displaystyle a\otimes_{\mathrm{s}}b$ $\displaystyle=\frac{1}{2}(a\otimes b+b\otimes a)$ (9.65)
$\displaystyle a\,\mathring{\otimes}_{\rm s}\,b$ $\displaystyle=\frac{1}{2}(a\,\mathring{\otimes}\,b+b\,\mathring{\otimes}\,a)$ (9.66)
$\displaystyle\mathrm{supp\,}_{t}f$ $\displaystyle=\overline{\\{t:f|_{\mathbb{T}^{3}\times\\{t\\}}\not\equiv 0\\}}$ (9.67)
We will repeatedly use the notation (introduced in (2.3) and (2.4), and in
Remark 3.2)
$\left\|f\right\|_{L^{p}}:=\left\|f\right\|_{L^{\infty}_{t}(L^{p}(\mathbb{T}^{3}))}\,.$
(9.68)
That is, all $L^{p}$ norms stand for $L^{p}$ norms in space, uniformly in
time. Similarly, when we wish to emphasize the dependence of an $L^{p}$ norm
on a set $\Omega\subset\mathbb{R}\times\mathbb{T}^{3}$, we write
$\left\|f\right\|_{L^{p}(\Omega)}:=\left\|{\mathbf{1}}_{\Omega}\;f\right\|_{L^{\infty}_{t}(L^{p}(\mathbb{T}^{3}))}\,.$
(9.69)
## Appendix A Useful lemmas
This appendix contains a collection of auxiliary lemmas which are used
throughout the paper:
* •
Section A.1 recalls the classical $C^{N}$ estimates for solutions of the
transport equation. This is for instance used in Section 6.4.
* •
Section A.2 gives the detailed construction of the basic cutoff functions
$\widetilde{\psi}_{m,q}$ and $\psi_{m,q}$, which are used in Section 6 to
construct the velocity and the stress cutoff functions.
* •
Section A.3 recalls the fundamental fact that the $L^{p}$ norm of the product
of a slowly oscillating function and a fast periodic function is essentially
bounded by the product of their $L^{p}$ norms.
* •
Section A.4 contains a version of the Sobolev inequality which takes into
account the support of the velocity cutoff functions.
* •
Section A.5 contains a number of consequences of the multivariate Faa di Bruno
formula. Most of the results here are used for bounding the space and material
derivatives of the cutoff functions in Section 6. We also present here, cf.
Lemma A.7, a version of the $L^{p}$ decorrelation lemma from Section A.3 in
which the fast periodic function is composed with a volume-preserving flow
map. Lemma A.7 plays a crucial role in estimating the $L^{2}$ norms of the
velocity increments in Section 8.2.
* •
Sections A.6 and A.7 contain a number of lemmas which allow us to go back and
forth between information for (arbitrarily) high order derivative bounds in
Eulerian and Lagrangian variables. These lemmas concerning sums of operators
and commutators with material derivatives are frequently used throughout the
paper to overcome the fact that material derivatives and spatial/temporal
derivatives do not commute.
* •
Section A.8 introduces in Proposition A.18 the inverse divergence operator
used in this paper. We call this operator “intermittency friendly” because it
is composed of a principal part which precisely maintains the spatial support
of the vector field it is applied to, plus a secondary part which is nonlocal,
but whose amplitude is incredibly small. It is here that the definition (4.10)
for the density of our pipe flows plays an important role, as the high order
${\mathsf{d}}$ of the Laplacian present in (4.10) allows us to perform a
parametric expansion which maintains (to leading order) the support of pipes,
and also takes into account deformations due to the flow map.
### A.1 Transport estimates
We shall require the following estimates for smooth solutions of transport
equations. For proofs we refer the reader to [8, Appendix D].
###### Lemma A.1 (Transport Estimates).
Consider the transport equation
$\partial_{t}f+u\cdot\nabla f=g,\qquad\qquad f|_{t_{0}}=f_{0}$
where $f,g:\mathbb{T}^{n}\rightarrow\mathbb{R}$ and
$u:\mathbb{T}^{n}\rightarrow\mathbb{R}^{n}$ are smooth functions. Let $X$ be
the flow of $u$, defined by
$\frac{d}{dt}X=u(X,t),\qquad X(x,t_{0})=x,$
and let $\Phi$ be the inverse of the flow of $X$, which is the identity at
time $t_{0}$. Then the following hold:
1. (1)
$\displaystyle\|f(t)\|_{C^{0}}\leq\|f_{0}\|_{C^{0}}+\int_{t_{0}}^{t}\|g(s)\|_{C^{0}}\,ds$
2. (2)
$\displaystyle\|Df(t)\|_{C^{0}}\leq\|Df_{0}\|_{C^{0}}e^{(t-t_{0})\|Du\|_{C^{0}}}+\int_{t_{0}}^{t}e^{(t-s)\|Du\|_{C^{0}}}\|Dg(s)\|_{C^{0}}\,ds$
3. (3)
For any $N\geq 2$, there exists a constant $C=C(N)$ such that
$\displaystyle\qquad\quad\|D^{N}f(t)\|_{C^{0}}\leq$
$\displaystyle\left(\|D^{N}f_{0}\|_{C^{0}}+C(t-t_{0})\|D^{N}u\|_{C^{0}}\|Df_{0}\|_{C^{0}}\right)e^{C(t-t_{0})\|Du\|_{C^{0}}}$
$\displaystyle\quad+\int_{t_{0}}^{t}e^{C(t-s)\|Du\|_{C^{0}}}\left(\|D^{N}g(s)\|_{C^{0}}+C(t-s)\|D^{N}u\|_{C^{0}}\|Dg(s)\|_{C^{0}}\right)\,ds$
4. (4)
$\displaystyle\|D\Phi(t)-\mathrm{Id}\|_{C^{0}}\leq
e^{(t-t_{0})\|Du\|_{C^{0}}}-1\leq(t-t_{0})\|Du\|_{C^{0}}e^{(t-t_{0})\|Du\|_{C^{0}}}$
5. (5)
For $N\geq 2$ and a constant $C=C(N)$,
$\|D^{N}\Phi(t)\|_{C^{0}}\leq
C(t-t_{0})\|D^{N}u\|_{C^{0}}e^{C(t-t_{0})\|Du\|_{C^{0}}}$
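For orientation, we sketch how the first estimate follows from the method of characteristics; this is a standard consistency check, not the reference proof from [8, Appendix D]:

```latex
% along the flow X the material derivative reduces to d/dt:
\frac{d}{dt}\, f(X(x,t),t)
  = \left(\partial_t f + u\cdot\nabla f\right)(X(x,t),t)
  = g(X(x,t),t)\,,
% integrating from t_0 to t and using f|_{t_0} = f_0:
f(X(x,t),t) = f_0(x) + \int_{t_0}^{t} g(X(x,s),s)\, ds\,.
```

Taking the supremum over $x$ and using that $X(\cdot,t)$ is a bijection of $\mathbb{T}^{n}$ yields estimate (1); the higher-order estimates are obtained by first differentiating the equation and then applying a Grönwall-type argument.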
### A.2 Proof of Lemma 6.2
We first consider the function
$\displaystyle f(x)=\begin{cases}0&\mbox{if }x\leq 0\\\
e^{-\frac{1}{x^{2}}}&\mbox{if }x>0.\end{cases}$ (A.1)
We claim that for all $0\leq N\leq\mathsf{N}_{\rm fin}$ and $x>0$,
$\frac{|D^{N}f(x)|}{(f(x))^{1-\frac{N}{\mathsf{N}_{\rm fin}}}}\lesssim 1.$
(A.2)
The proof of this is achieved in two steps; first, one can show by induction
that for all $0\leq N\leq\mathsf{N}_{\rm fin}$, there exist constants $K_{N}$
and $c_{k}$ for $0\leq k\leq K_{N}$ such that
$D^{N}\left(e^{-\frac{1}{x^{2}}}\right)=\sum\limits_{k=0}^{K_{N}}\frac{c_{k}}{x^{k}}e^{-\frac{1}{x^{2}}}.$
(A.3)
Next, one may also check that for any powers $p,q>0$,
$\lim_{x\rightarrow 0^{+}}e^{-\frac{q}{x^{2}}}\frac{1}{x^{p}}=0.$ (A.4)
Then for $1\leq N\leq\mathsf{N}_{\rm fin}$, we see that $0\leq
1-\frac{N}{\mathsf{N}_{\rm fin}}<1$, and so using (A.3), we have that the
left-hand side of (A.2) may be split into a finite linear combination of terms
of the form in (A.4), showing that (A.2) is valid.
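The limit (A.4) is easy to confirm numerically; the exponents $p=10$, $q=1$ in the sketch below are arbitrary illustrative choices, not values from the paper.

```python
import math

# illustrative exponents (any p, q > 0 behave the same way)
p, q = 10.0, 1.0

def h(x):
    # the quantity in (A.4): exp(-q/x^2) * x^(-p)
    return math.exp(-q / x ** 2) / x ** p

# near the origin, the exponential decay beats the polynomial blow-up
samples = [h(x) for x in (0.3, 0.2, 0.1)]
```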
We now glue together two versions of $f$ as follows with the goal of forming a
prototypical cutoff function $\psi$. First, let
$x_{0}=\sqrt{\frac{1}{\ln(2)}}$ so that $f(x_{0})=\frac{1}{2}$. Now consider
the function $\widetilde{f}(x)=f(2x_{0}-x)$, and set
$\displaystyle F(x)=\begin{cases}f(x)&\mbox{if }x\leq x_{0}\\\
1-f(2x_{0}-x)&\mbox{if }x>x_{0}.\end{cases}$ (A.5)
Then $F(x)$ is continuous everywhere, and $C^{\infty}$ everywhere except
$x_{0}$, where it is not necessarily differentiable. Furthermore, one can
check that by the definition of $F$ and (A.2), for all $0\leq
N\leq\mathsf{N}_{\rm fin}$,
$\frac{|D^{N}F(x)|}{(F(x))^{1-\frac{N}{\mathsf{N}_{\rm fin}}}}\lesssim 1\mbox{
for all
}0<x<x_{0},\qquad\frac{|D^{N}\left(1-(F(x))^{2}\right)^{\frac{1}{2}}|}{\left(1-(F(x))^{2}\right)^{\frac{1}{2}\left(1-\frac{N}{\mathsf{N}_{\rm
fin}}\right)}}\lesssim 1\mbox{ for all }x_{0}<x<2x_{0}.$ (A.6)
The latter inequality follows from noticing that for $x$ close to $2x_{0}$,
$\displaystyle\left(1-(F(x))^{2}\right)^{\frac{1}{2}}$
$\displaystyle=\left((1+F(x))(1-F(x))\right)^{\frac{1}{2}}=\left(1+F(x)\right)^{\frac{1}{2}}\left(f(2x_{0}-x)\right)^{\frac{1}{2}}\,.$
Since multiplying by a smooth function strictly larger than $1$, rescaling $f$
by a fixed parameter, and raising $f$ to a positive power preserves the
estimate (A.2) up to implicit constants (in fact raising $f$ to a power is
equivalent to rescaling), (A.6) is verified.
Towards the goal of adjusting $F$ to be differentiable at $x_{0}$, let $E$ be
the set $\left(\frac{x_{0}}{2},\frac{3x_{0}}{2}\right)$, and let $\phi$ be a
compactly supported $C^{\infty}$ mollifier such that the support of the mollified
characteristic function $\mathcal{X}_{E}\ast\phi(x)$ is contained in
$\left(\frac{x_{0}}{4},\frac{7x_{0}}{4}\right)$. Setting
$\psi(x)=\left(\mathcal{X}_{E}\ast\phi(x)\right)\phi\ast
F(x)+\left(1-\mathcal{X}_{E}\ast\phi(x)\right)F(x),$ (A.7)
one may check that $\psi$ is $C^{\infty}$ and has the following properties:
$\displaystyle\psi(x)$ $\displaystyle=0\mbox{ for }x\leq 0$ (A.8)
$\displaystyle 0<\psi(x)$ $\displaystyle<1\mbox{ for }0<x<2x_{0}$ (A.9)
$\displaystyle\psi(x)$ $\displaystyle=1\mbox{ for }x\geq 2x_{0}$ (A.10)
$\displaystyle\frac{|D^{N}\psi(x)|}{(\psi(x))^{1-\frac{N}{\mathsf{N}_{\rm
fin}}}}$ $\displaystyle\lesssim 1\mbox{ for all }0<x$ (A.11)
$\displaystyle\frac{|D^{N}\left(1-(\psi(x))^{2}\right)^{\frac{1}{2}}|}{\left(1-(\psi(x))^{2}\right)^{\frac{1}{2}\left(1-\frac{N}{\mathsf{N}_{\rm
fin}}\right)}}$ $\displaystyle\lesssim 1\mbox{ for all }0<x<2x_{0}.$ (A.12)
We can now build $\widetilde{\psi}_{m,q}$. By rescaling and translating $\psi$
and using (A.8)-(A.10), one can check that
$\widetilde{\psi}_{m,q}(x)=\psi\left(\frac{x-\Gamma_{q}^{2(m+1)}}{\frac{1}{2x_{0}}\left(\frac{1}{4}-1\right)\Gamma_{q}^{2(m+1)}}\right)$
(A.13)
satisfies all components of (1). Notice that this rescaling involves a factor
proportional to $\Gamma_{q}^{-2(m+1)}$. Then using (A.11) and the fact that
every derivative of $\widetilde{\psi}_{m,q}$ introduces another factor of
$\Gamma_{q}^{-2(m+1)}$, we have that (6.3) is satisfied.
We now outline how to construct $\psi_{m,q}(\Gamma_{q}^{-2(m+1)}y)$, which is
the first term in the series in (6.1) and will define $\psi_{m,q}(y)$. The
basic idea is that the region
$(\frac{1}{4}\Gamma_{q+1}^{2(m+1)},\Gamma_{q+1}^{2(m+1)})$ where
$\widetilde{\psi}_{m,q}$ decreases from $1$ to $0$ will be the region where
$\psi_{m,q}(\Gamma_{q+1}^{-2(m+1)}y)$ increases from $0$ to $1$, and
furthermore in order to satisfy (6.1), we have a formula for
$\psi_{m,q}(\Gamma_{q+1}^{-2(m+1)}y)$ for these $y$-values. Specifically, in
order to ensure (6.1) for
$y\in(\frac{1}{4}\Gamma_{q+1}^{2(m+1)},\Gamma_{q+1}^{2(m+1)})$, we define
$\psi_{m,q}^{2}\left(\Gamma_{q+1}^{-2(m+1)}y\right)=1-\widetilde{\psi}_{m,q}^{2}(y)$
in this range of $y$-values. Then by adjusting (A.12) to reflect the
rescalings present in the definition of $\widetilde{\psi}_{m,q}$ and
$\psi_{m,q}(\Gamma_{q}^{-2(m+1)}y)$, we have that for
$y\in\left(\frac{1}{4},1\right)$, $\psi_{m,q}$ is well-defined and (6.4)
holds. To define $\psi_{m,q}(\Gamma_{q}^{-2(m+1)}y)$ for
$y\in[\frac{1}{4}\Gamma_{q}^{4(m+1)},\Gamma_{q}^{4(m+1)}]$ and thus
$\psi_{m,q}\left(y\right)$ for
$y\in[\frac{1}{4}\Gamma_{q}^{2(m+1)},\Gamma_{q}^{2(m+1)}]$, we can use that
for $y\in[\frac{1}{4}\Gamma_{q}^{4(m+1)},\Gamma_{q}^{4(m+1)}]$, the rescaled
function $\psi_{m,q}(\Gamma_{q+1}^{-4(m+1)}y)$ (i.e. the term in (6.1) with
$i=2$) is now well-defined. Then we can set
$\psi_{m,q}^{2}\left(\Gamma_{q+1}^{-2(m+1)}y\right)=1-\psi^{2}_{m,q}\left(\Gamma_{q+1}^{-4(m+1)}y\right)$
so that $\psi_{m,q}$ is well-defined for
$y\in[\frac{1}{4}\Gamma_{q}^{2(m+1)},\Gamma_{q}^{2(m+1)}]$ and (6.1) holds in
this range of $y$-values. Appealing again to (A.11) and (A.12), we have that
(6.5) is satisfied in the claimed range of $y$-values. Finally, in the missing
interval $[1,\frac{1}{4}\Gamma_{q}^{2(m+1)}]$, we set $\psi_{m,q}\equiv 1$.
One can now check that (6.1) holds for all $y\geq 0$, and that (6.2) follows
from (1), (2), and (6.1), concluding the proof.
### A.3 $L^{p}$ decorrelation
The following lemma may be found in [13, Lemma 3.7].
###### Lemma A.2 ($L^{p}$ de-correlation estimate).
Fix integers ${\mathsf{N}_{\rm dec}}\geq 1$, $\mu\geq\lambda\geq 1$ and assume
that these integers obey
$\displaystyle\lambda^{{\mathsf{N}_{\rm
dec}}+4}\leq\left(\frac{\mu}{2\pi\sqrt{3}}\right)^{{\mathsf{N}_{\rm dec}}}\,.$
(A.14)
Let $p\in\\{1,2\\}$, and let $f$ be a $\mathbb{T}^{3}$-periodic function such
that
$\displaystyle\max_{0\leq N\leq{\mathsf{N}_{\rm
dec}}+4}\lambda^{-N}\|D^{N}f\|_{L^{p}}\leq\mathcal{C}_{f}$ (A.15)
for a constant $\mathcal{C}_{f}>0$. (For instance, if $f$ has frequency
support in the ball of radius $\lambda$ around the origin, we have that
$\mathcal{C}_{f}\approx\left\|f\right\|_{L^{p}}$.) Then, for any
$(\mathbb{T}/\mu)^{3}$-periodic function $g$, we have that
$\displaystyle\|fg\|_{L^{p}}\lesssim\mathcal{C}_{f}\|g\|_{L^{p}}\,,$
where the implicit constant is universal (in particular, independent of $\mu$
and $\lambda$).
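The decorrelation mechanism behind Lemma A.2 can be illustrated numerically: for a slowly oscillating profile and a fast periodic oscillation, mixed averages factor. The frequencies $1$ and $200$ below are arbitrary illustrative choices, and this sketch is a heuristic for the mechanism, not a proof of the lemma.

```python
import math

N = 100000
xs = [2 * math.pi * k / N for k in range(N)]
f = [math.cos(x) for x in xs]        # slowly oscillating (frequency 1)
g = [math.cos(200 * x) for x in xs]  # fast periodic (frequency 200)

def mean(v):
    return sum(v) / len(v)

# when the frequencies are well separated, the averages decouple:
lhs = mean([a * a * b * b for a, b in zip(f, g)])
rhs = mean([a * a for a in f]) * mean([b * b for b in g])
```

Here both sides equal $\nicefrac{{1}}{{4}}$ up to rounding, which is the discrete analogue of $\|fg\|_{L^{2}}\lesssim\mathcal{C}_{f}\|g\|_{L^{2}}$.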
### A.4 Sobolev inequality with cutoffs
###### Lemma A.3.
Let $0\leq\psi_{i}\leq 1$ be cutoff functions such that
$\psi_{i\pm}=(\psi_{i-1}^{2}+\psi_{i}^{2}+\psi_{i+1}^{2})^{\nicefrac{{1}}{{2}}}=1$
on $\mathrm{supp\,}(\psi_{i})$, and such that for some $\rho>0$ we have
$\displaystyle|D^{K}\psi_{i}(x)|\lesssim\psi_{i}^{1-K/\mathsf{N}_{\rm
fin}}(x)\rho^{K}$ (A.16)
for all $K\leq 4$. Fix parameters $p\in[1,\infty]$,
$0<\lambda\leq\widetilde{\lambda}$, $0<\mu_{i}\leq\widetilde{\mu}_{i}$,
$N_{x},N_{t}\geq 0$, and assume that the sequences $\\{\mu_{i}\\}_{i\geq 0}$
and $\\{\widetilde{\mu}_{i}\\}_{i\geq 0}$ are nondecreasing. Assume that there
exist $N_{*},M_{*}\geq 0$ such that the function
$f\colon\mathbb{T}^{3}\to\mathbb{R}$ obeys the estimate
$\displaystyle\left\|\psi_{i}D^{N}D_{t}^{M}f\right\|_{L^{p}}\lesssim\mathcal{C}_{f}\mathcal{M}\left(N,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(M,N_{t},\mu_{i},\widetilde{\mu}_{i}\right)$
(A.17)
for all $N\leq N_{*}$ and $M\leq M_{*}$. Then, we have that
$\displaystyle\left\|\psi_{i}^{2}D^{N}D_{t}^{M}f\right\|_{L^{\infty}}$
$\displaystyle\lesssim\mathcal{C}_{f}(\max\\{1,\rho,\widetilde{\lambda}\\})^{\nicefrac{{3}}{{p}}}\mathcal{M}\left(N,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(M,N_{t},\mu_{i},\widetilde{\mu}_{i}\right)$
(A.18a)
$\displaystyle\left\|D^{N}D_{t}^{M}f\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i})}$
$\displaystyle\lesssim\mathcal{C}_{f}(\max\\{1,\rho,\widetilde{\lambda}\\})^{\nicefrac{{3}}{{p}}}\mathcal{M}\left(N,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(M,N_{t},\mu_{i+1},\widetilde{\mu}_{i+1}\right)$
(A.18b)
for all $N\leq N_{*}-\lfloor\nicefrac{{3}}{{p}}\rfloor-1$ and $M\leq M_{*}$.
Lastly, if the inequality (A.17) holds for all $N+M\leq N_{\circ}$ for some
$N_{\circ}\geq 0$ (instead of $N\leq N_{*}$ and $M\leq M_{*}$), then the
bounds (A.18a) and (A.18b) hold for $N+M\leq
N_{\circ}-\lfloor\nicefrac{{3}}{{p}}\rfloor-1$.
###### Proof of Lemma A.3.
The proof uses that $\lfloor\nicefrac{{3}}{{p}}\rfloor+1>\nicefrac{{3}}{{p}}$
for all $p\in[1,\infty]$, and that $W^{s,p}\subset L^{\infty}$ for
$s>\nicefrac{{3}}{{p}}$. Moreover, the proof of (A.18a) is nearly identical to
that of (A.18b), and thus we only give the proof of (A.18b); furthermore, for
simplicity we only give the proof for $p=2$, as all the other Lebesgue
exponents are treated in the same way. By Gagliardo-Nirenberg-Sobolev
interpolation we have
$\displaystyle\left\|D^{N}D_{t}^{M}f\right\|_{L^{\infty}(\mathrm{supp\,}\psi_{i})}$
$\displaystyle\leq\left\|\psi_{i\pm}^{2}D^{N}D_{t}^{M}f\right\|_{L^{\infty}(\mathbb{T}^{3})}$
$\displaystyle\lesssim\left\|\psi_{i\pm}^{2}D^{N}D_{t}^{M}f\right\|_{L^{2}(\mathbb{T}^{3})}^{\nicefrac{{1}}{{4}}}\left\|\psi_{i\pm}^{2}D^{N}D_{t}^{M}f\right\|_{\dot{H}^{2}(\mathbb{T}^{3})}^{\nicefrac{{3}}{{4}}}+\left\|\psi_{i\pm}^{2}D^{N}D_{t}^{M}f\right\|_{L^{2}(\mathbb{T}^{3})}\,.$
Using (A.16), (A.17), and the monotonicity of the $\mu_{i}$ and
$\widetilde{\mu}_{i}$, we obtain
$\displaystyle\left\|\psi_{i\pm}^{2}D^{N}D_{t}^{M}f\right\|_{\dot{H}^{2}(\mathbb{T}^{3})}$
$\displaystyle\lesssim\left\|\psi_{i\pm}D^{N+2}D_{t}^{M}f\right\|_{L^{2}}+\left\|D\psi_{i\pm}\right\|_{L^{\infty}}\left\|\psi_{i\pm}D^{N+1}D_{t}^{M}f\right\|_{L^{2}}+\left\|\frac{D^{2}(\psi_{i\pm}^{2})}{\psi_{i\pm}}\right\|_{L^{\infty}}\left\|\psi_{i\pm}D^{N}D_{t}^{M}f\right\|_{L^{2}}$
$\displaystyle\lesssim\left\|\psi_{i\pm}D^{N+2}D_{t}^{M}f\right\|_{L^{2}}+\rho\left\|\psi_{i\pm}D^{N+1}D_{t}^{M}f\right\|_{L^{2}}+\rho^{2}\left\|\psi_{i\pm}D^{N}D_{t}^{M}f\right\|_{L^{2}}$
$\displaystyle\lesssim(\max\\{\widetilde{\lambda},\rho\\})^{2}\mathcal{C}_{f}\mathcal{M}\left(N,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(M,N_{t},\mu_{i+1},\widetilde{\mu}_{i+1}\right)\,,$
for all $N\leq N_{*}-2$ and $M\leq M_{*}$. In the second inequality above we
have used that $|D^{2}(\psi_{i\pm}^{2})|\lesssim\rho^{2}\psi_{i\pm}(x)$, which
follows from (A.16). Combining the above two displays proves (A.18b).
Note that for $p=1$ we require that
$|D^{4}(\psi_{i\pm}^{2})|\lesssim\rho^{4}\psi_{i\pm}(x)$, which also follows
from (A.16) since $\mathsf{N}_{\rm fin}\geq 4$, and this is why we have
assumed this inequality to hold for all $K\leq 4$.
Lastly, assume that (A.17) holds for all $N+M\leq N_{\circ}$, and fix any
$N^{\prime},M^{\prime}\geq 0$ such that $N^{\prime}+M^{\prime}\leq
N_{\circ}-\lfloor\nicefrac{{3}}{{p}}\rfloor-1$. Let
$N_{*}=N^{\prime}+\lfloor\nicefrac{{3}}{{p}}\rfloor+1$ and $M_{*}=M^{\prime}$.
Then (A.17) gives a bound for
$\|\psi_{i}D^{N^{\prime\prime}}D_{t}^{M^{\prime\prime}}f\|_{L^{p}}$ for all
$N^{\prime\prime}\leq N_{*}$ and $M^{\prime\prime}\leq M_{*}$. The bounds
(A.18a) and (A.18b) thus give an estimate for
$\|\psi_{i}^{2}D^{N^{\prime}}D_{t}^{M^{\prime}}f\|_{L^{\infty}}$, which concludes the
proof. ∎
### A.5 Consequences of the Faa di Bruno formula
We are using the following version of the multivariable Faa di Bruno formula
[25, Theorem 2.1]. Let $g=g(x_{1},\ldots,x_{d})=f(h(x_{1},\ldots,x_{d}))$,
where $f\colon\mathbb{R}^{m}\to\mathbb{R}$, and
$h\colon\mathbb{R}^{d}\to\mathbb{R}^{m}$ are $C^{n}$ smooth functions of their
respective variables. Let $\alpha\in{\mathbb{N}}_{0}^{d}$ be s.t.
$|\alpha|=n$, and let $\beta\in\mathbb{N}_{0}^{m}$ be such that
$1\leq|\beta|\leq n$. We then define
$\displaystyle p(\alpha,\beta)$
$\displaystyle=\Bigg{\\{}(k_{1},\ldots,k_{n};\ell_{1},\ldots,\ell_{n})\in(\mathbb{N}_{0}^{m})^{n}\times(\mathbb{N}_{0}^{d})^{n}\colon\exists
s\mbox{ with }1\leq s\leq n\mbox{ s.t. }$
$\displaystyle\qquad\qquad|k_{j}|,|\ell_{j}|>0\Leftrightarrow 1\leq j\leq
s,\,0\prec\ell_{1}\prec\ldots\prec\ell_{s},\sum_{j=1}^{s}k_{j}=\beta,\sum_{j=1}^{s}|k_{j}|\ell_{j}=\alpha\Bigg{\\}}.$
(A.19)
Then the multivariable Faa di Bruno formula states that we have the equality
$\displaystyle\partial^{\alpha}g(x)=\alpha!\sum_{|\beta|=1}^{n}(\partial^{\beta}f)(h(x))\sum_{p(\alpha,\beta)}\prod_{j=1}^{n}\frac{(\partial^{\ell_{j}}h(x))^{k_{j}}}{k_{j}!(\ell_{j}!)^{k_{j}}}.$
(A.20)
Note that in (A.19) we have that $k_{j}=0\in\mathbb{N}_{0}^{m}$ and
$\ell_{j}=0\in\mathbb{N}_{0}^{d}$ for $j\geq s+1$. Therefore, we could write the sums
and products with $j\in\\{1,\ldots,s\\}$ as sums and products for $j\in\\{1,\ldots,n\\}$.
Keeping in mind this convention, we importantly note that in (A.20) we can
have $|\ell_{j}|=0$ only if $|k_{j}|=0$, and in this case the entire term in
the product is equal to $1$. That is, the product in (A.20) only goes from $1$
to $s$, and in this case $|\ell_{j}|\geq 1$ for $j\in\\{1,\ldots,s\\}$. This
fact will be used frequently.
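As a sanity check on the scalar ($m=d=1$) instance of (A.20) at order two, where the formula reduces to $(f\circ h)''=f''(h)\,(h')^{2}+f'(h)\,h''$, one can compare against a finite difference; $f=\sin$ and $h(x)=x^{2}+1$ are arbitrary illustrative choices.

```python
import math

def h(x):
    return x * x + 1.0          # inner function, h'(x) = 2x, h''(x) = 2

def g(x):
    return math.sin(h(x))       # composition whose derivatives (A.20) expands

def second_diff(fn, x, eps=1e-4):
    # second-order central difference approximation of fn''(x)
    return (fn(x + eps) - 2.0 * fn(x) + fn(x - eps)) / eps ** 2

x0 = 0.7
# order-2 Faa di Bruno, scalar case: (f o h)'' = f''(h) (h')^2 + f'(h) h''
exact = -math.sin(h(x0)) * (2.0 * x0) ** 2 + math.cos(h(x0)) * 2.0
```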
For applications to cutoff functions we apply this formula for scalar
functions $h$, i.e. $m=1$, while for applications to the perturbation or
Reynolds stress sections we apply this formula for vector fields $h$, i.e.
$m=3$.
Since throughout this manuscript the number of derivatives that we need to
estimate is uniformly bounded (say by $\mathsf{N}_{\rm fin}$), we may ignore
the factorial terms in (A.20) and include them in the implicit constant of
$\lesssim$. Using this convention, we summarize in the following lemma a
useful consequence of the Faà di Bruno formula above.
###### Lemma A.4 (Faà di Bruno).
Fix $N\leq\mathsf{N}_{\rm fin}$. Let $\psi\colon[0,\infty)\to[0,1]$ be a
smooth function obeying
$\displaystyle|D^{B}\psi|\lesssim\Gamma_{\psi}^{-2B}\psi^{1-B/\mathsf{N}_{\rm
fin}}$ (A.21)
for all $B\leq N$, and some $\Gamma_{\psi}>0$. Let $\Gamma,\lambda,\Lambda>0$
and $N_{*}\leq N$. Furthermore, let
$h\colon\mathbb{T}^{3}\times\mathbb{R}\to\mathbb{R}$ and denote
$g(x)=\psi(\Gamma^{-2}h(x)).$
Assume the function $h$ obeys
$\displaystyle\left\|D^{B}h\right\|_{L^{\infty}(\mathrm{supp\,}g)}\lesssim\mathcal{C}_{h}\mathcal{M}\left(B,N_{*},\lambda,\Lambda\right)$
(A.22)
for all $B\leq N$, where the implicit constant is independent of
$\lambda,\Lambda,\Gamma,\mathcal{C}_{h}>0$. Then, we have that for all points
$(x,t)\in\mathrm{supp\,}h$, the bound
$\displaystyle\frac{|D^{N}g|}{g^{1-N/\mathsf{N}_{\rm
fin}}}\lesssim\mathcal{M}\left(N,N_{*},\lambda,\Lambda\right)\max\\{(\Gamma_{\psi}\Gamma)^{-2}\mathcal{C}_{h},(\Gamma_{\psi}\Gamma)^{-2N}\mathcal{C}_{h}^{N}\\}$
(A.23)
holds. If the $\psi^{1-B/\mathsf{N}_{\rm fin}}$ factor on the right side of
(A.21) is replaced by $1$, then the $g^{1-N/\mathsf{N}_{\rm fin}}$ factor on
the left side of (A.23) also has to be replaced by $1$.
###### Proof of Lemma A.4.
The goal is to apply (A.19)–(A.20) with $f(x)=\psi(\Gamma^{-2}x)$. For
$(x,t)\in\mathrm{supp\,}(g)$ we obtain from (3.9), (A.21), and (A.22) that
$\displaystyle\frac{|D^{N}g|}{g^{1-N/\mathsf{N}_{\rm fin}}}$
$\displaystyle\lesssim\sum_{B=1}^{N}\frac{|D^{B}\psi|}{\psi^{1-B/\mathsf{N}_{\rm
fin}}}\psi^{(N-B)/\mathsf{N}_{\rm
fin}}\Gamma^{-2B}\sum_{p(\alpha,B)}\prod_{j=1}^{n}\left\|\partial^{\ell_{j}}h\right\|_{L^{\infty}(\mathrm{supp\,}g)}^{k_{j}}$
$\displaystyle\lesssim\sum_{B=1}^{N}(\Gamma_{\psi}\Gamma)^{-2B}\sum_{p(\alpha,B)}\prod_{j=1}^{n}\left(\mathcal{C}_{h}\mathcal{M}\left(\ell_{j},N_{*},\lambda,\Lambda\right)\right)^{k_{j}}$
$\displaystyle\lesssim\sum_{B=1}^{N}(\Gamma_{\psi}\Gamma)^{-2B}\mathcal{C}_{h}^{B}\mathcal{M}\left(N,N_{*},\lambda,\Lambda\right)$
The conclusion of the lemma follows upon bounding the
geometric sum. ∎
Frequently in the paper, we need a version of Lemma A.4 which also deals with
mixed spatial and material derivatives. A convenient statement is:
###### Lemma A.5 (Mixed derivative Faà di Bruno).
Fix $N,M\in\mathbb{N}$ such that $N+M\leq\mathsf{N}_{\rm fin}$. Let
$\psi\colon[0,\infty)\to[0,1]$ be a smooth function obeying
$\displaystyle|D^{B}\psi|\lesssim\Gamma_{\psi}^{-2B}\psi^{1-B/\mathsf{N}_{\rm
fin}}$ (A.24)
for all $B\leq N$ and a constant $\Gamma_{\psi}>0$. Let $v$ be a fixed vector
field, and denote $D_{t}=\partial_{t}+v\cdot\nabla$, which is a scalar
differential operator. Let
$\Gamma,\lambda,\widetilde{\lambda},\mu,\widetilde{\mu}\geq 1$ and
$N_{x},N_{t}\leq N$. Furthermore, let
$h\colon\mathbb{T}^{3}\times\mathbb{R}\to\mathbb{R}$ and denote
$g(x,t)=\psi(\Gamma^{-2}h(x,t)).$
Assume the function $h$ obeys
$\displaystyle\left\|D^{N^{\prime}}D_{t}^{M^{\prime}}h\right\|_{L^{\infty}(\mathrm{supp\,}g)}\lesssim\mathcal{C}_{h}\mathcal{M}\left(N^{\prime},N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(M^{\prime},N_{t},\mu,\widetilde{\mu}\right)$
(A.25)
for all $N^{\prime}\leq N$, and $M^{\prime}\leq M$, where the implicit
constant is independent of
$\lambda,\widetilde{\lambda},\mu,\widetilde{\mu},\Gamma$, and
$\mathcal{C}_{h}$. Then, we have that for all points
$(x,t)\in\mathrm{supp\,}h$, the bound
$\displaystyle\frac{|D^{N}D_{t}^{M}g|}{g^{1-(N+M)/\mathsf{N}_{\rm
fin}}}\lesssim\mathcal{M}\left(N,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(M,N_{t},\mu,\widetilde{\mu}\right)\max\left\\{(\Gamma_{\psi}\Gamma)^{-2}\mathcal{C}_{h},((\Gamma_{\psi}\Gamma)^{-2}\mathcal{C}_{h})^{N+M}\right\\}$
(A.26)
holds. If the $\psi^{1-B/\mathsf{N}_{\rm fin}}$ factor on the right side of
(A.24) is replaced by $1$, then the $g^{1-(N+M)/\mathsf{N}_{\rm fin}}$ factor
on the left side of (A.26) also has to be replaced by $1$.
###### Proof of Lemma A.5.
Let $X(a,t)$ be the flow induced by the vector field $v$, with initial
condition $X(a,t_{0})=a$. Denote by $a=X^{-1}(x,t)$ the inverse of the map $X$. We
then note that
$\displaystyle D_{t}^{M}g(x,t)=\left(\partial_{t}^{M}((g\circ
X)(a,t))\right)|_{a=X^{-1}(x,t)}.$
We wish to apply the above with the function $g(x,t)=\psi(\Gamma^{-2}h(x,t))$.
We apply the Faa di Bruno formula (A.19)–(A.20) with the one dimensional
differential operator $\partial_{t}^{M}$ to the composition $g\circ X$, note
that $\partial_{t}^{\beta_{i}}(h(X(a,t),t))=(D_{t}^{\beta_{i}}h)(X(a,t),t)$,
and then evaluate the resulting expression at $a=X^{-1}(x,t)$, to obtain
$\displaystyle
D_{t}^{M}g(x,t)=M!\sum_{B=1}^{M}\Gamma^{-2B}\psi^{(B)}(\Gamma^{-2}h(x,t))\sum_{\begin{subarray}{c}\\{\kappa,\beta\in\mathbb{N}^{M}\colon\\\
|\kappa|=B,\kappa\cdot\beta=M\\}\end{subarray}}\prod_{i=1}^{M}\frac{\left((D_{t}^{\beta_{i}}h)(x,t)\right)^{\kappa_{i}}}{\kappa_{i}!(\beta_{i}!)^{\kappa_{i}}}.$
We now apply $D^{N}$ to the above expression, use the Leibniz rule, and then
appeal again to the Faa di Bruno formula (A.19)–(A.20), this time for spatial
derivatives. We obtain
$\displaystyle D^{N}D_{t}^{M}g(x,t)$
$\displaystyle=M!N!\sum_{B=1}^{M}\sum_{K=0}^{N}\sum_{B^{\prime}=0}^{K}\Gamma^{-2(B+B^{\prime})}\psi^{(B+B^{\prime})}(\Gamma^{-2}h(x,t))\sum_{p(K,B^{\prime})}\prod_{j=1}^{K}\frac{(D^{\ell_{j}}h(x,t))^{k_{j}}}{k_{j}!(\ell_{j}!)^{k_{j}}}$
$\displaystyle\quad\times\sum_{\begin{subarray}{c}\\{\alpha\in\mathbb{N}^{M}\colon\\\
|\alpha|=N-K\\}\end{subarray}}\sum_{\begin{subarray}{c}\\{\kappa,\beta\in\mathbb{N}^{M}\colon\\\
|\kappa|=B,\kappa\cdot\beta=M\\}\end{subarray}}\prod_{i=1}^{M}\frac{D^{\alpha_{i}}(((D_{t}^{\beta_{i}}h)(x,t))^{\kappa_{i}})}{\alpha_{i}!\kappa_{i}!(\beta_{i}!)^{\kappa_{i}}}.$
(A.27)
Upon dividing by $g^{1-(N+M)/\mathsf{N}_{\rm fin}}$ and noting that
$B+B^{\prime}\leq M+N$, from (A.24), identity (A.27), the Leibniz rule, and
assumption (A.25), we obtain
$\displaystyle\frac{|D^{N}D_{t}^{M}g|}{g^{1-(N+M)/\mathsf{N}_{\rm fin}}}$
$\displaystyle\lesssim\sum_{B=1}^{M}\sum_{K=0}^{N}\sum_{B^{\prime}=0}^{K}(\Gamma_{\psi}\Gamma)^{-2(B+B^{\prime})}\mathcal{C}_{h}^{B^{\prime}}\mathcal{M}\left(K,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{C}_{h}^{B}\mathcal{M}\left(N-K,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(M,N_{t},\mu,\widetilde{\mu}\right)$
$\displaystyle\lesssim\mathcal{M}\left(N,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(M,N_{t},\mu,\widetilde{\mu}\right)\sum_{B=1}^{M}\sum_{B^{\prime}=0}^{N}(\Gamma_{\psi}\Gamma)^{-2(B+B^{\prime})}\mathcal{C}_{h}^{B^{\prime}+B}$
from which (A.26) follows by summing the geometric series. ∎
###### Lemma A.6.
Given a smooth function $f\colon\mathbb{R}^{3}\times\mathbb{R}\to\mathbb{R}$,
suppose that for $\lambda\geq 1$ the vector field
$\Phi\colon\mathbb{R}^{3}\times\mathbb{R}\to\mathbb{R}^{3}$ satisfies the
estimate
$\displaystyle\left\|D^{N+1}\Phi\right\|_{L^{\infty}(\mathrm{supp\,}f)}$
$\displaystyle\lesssim\lambda^{N}$ (A.28)
for $0\leq N\leq N_{*}$. Then for any $1\leq N\leq N_{*}$ we have
$\displaystyle\left|D^{N}\left(f\circ\Phi\right)(x,t)\right|\lesssim$
$\displaystyle\sum_{m=1}^{N}\lambda^{N-m}\left|(D^{m}f)\circ\Phi(x,t)\right|$
(A.29)
and thus trivially we obtain
$\displaystyle\left|D^{N}\left(f\circ\Phi\right)(x,t)\right|\lesssim$
$\displaystyle\sum_{m=0}^{N}\lambda^{N-m}\left|(D^{m}f)\circ\Phi(x,t)\right|.$
for any $0\leq N\leq N_{*}$.
###### Proof of Lemma A.6.
Applying (A.20), noting that $|\ell_{j}|=0$ implies $|k_{j}|=0$, and
assumption (A.28), we have that for any multi index
$\alpha\in\mathbb{N}_{0}^{3}$ with $|\alpha|=N$,
$\displaystyle\left|\partial^{\alpha}\left(f\circ\Phi\right)(x,t)\right|$
$\displaystyle\lesssim\sum_{|\beta|=1}^{N}\left|((\partial^{\beta}f)\circ\Phi)(x,t)\right|\sum_{p(\alpha,\beta)}\prod_{j=1}^{N}\left|\left(\partial^{\ell_{j}}\Phi(x,t)\right)^{k_{j}}\right|$
$\displaystyle\lesssim\sum_{|\beta|=1}^{N}\left|(\partial^{\beta}f)\circ\Phi\right|\sum_{p(\alpha,\beta)}\prod_{j=1}^{N}\lambda^{({\left|\ell_{j}\right|-1})|k_{j}|}$
$\displaystyle\lesssim\sum_{m=1}^{N}\lambda^{N-m}\left|(D^{m}f)\circ\Phi\right|$
by the definition (A.19). Thus we obtain (A.29). ∎
In order to estimate the perturbation in $L^{p}$ spaces as well as terms
appearing in the Reynolds stress we will need the following abstract lemma,
which follows from Lemmas A.2 and A.6.
###### Lemma A.7.
Let $p\in\\{1,2\\}$, and fix integers $N_{*}\geq M_{*}\geq{\mathsf{N}_{\rm
dec}}\geq 1$. Let $f\colon\mathbb{R}^{3}\times\mathbb{R}\to\mathbb{R}$ be smooth, and
let $\Phi\colon\mathbb{R}^{3}\times\mathbb{R}\to\mathbb{R}^{3}$ be a vector
field advected by an incompressible velocity field $v$, i.e.
$D_{t}\Phi=(\partial_{t}+v\cdot\nabla)\Phi=0$. Denote by $\Phi^{-1}$ the
inverse of the flow $\Phi$, which is the identity at a time slice which
intersects the support of $f$. Assume that for some
$\lambda,\nu,\widetilde{\nu}\geq 1$ and $\mathcal{C}_{f}>0$ the function $f$
satisfies the estimates
$\displaystyle\left\|D^{N}D_{t}^{M}f\right\|_{L^{p}}$
$\displaystyle\lesssim\mathcal{C}_{f}\lambda^{N}\mathcal{M}\left(M,N_{t},\nu,\widetilde{\nu}\right)$
(A.30)
for all $N\leq N_{*}$ and $M\leq M_{*}$, and that $\Phi$ and $\Phi^{-1}$ are
bounded as
$\displaystyle\left\|D^{N+1}\Phi\right\|_{L^{\infty}(\mathrm{supp\,}f)}$
$\displaystyle\lesssim\lambda^{N}$ (A.31)
$\displaystyle\left\|D^{N+1}\Phi^{-1}\right\|_{L^{\infty}(\mathrm{supp\,}f)}$
$\displaystyle\lesssim\lambda^{N}$ (A.32)
for all $N\leq N_{*}$. Lastly, suppose that $\varphi$ is
$(\mathbb{T}/\mu)^{3}$-periodic, and that there exist parameters
$\widetilde{\zeta}\geq\zeta\geq\mu$ and $\mathcal{C}_{\varphi}>0$ such that
$\displaystyle\left\|D^{N}\varphi\right\|_{L^{p}}\lesssim\mathcal{C}_{\varphi}\mathcal{M}\left(N,N_{x},\zeta,\widetilde{\zeta}\right)\,$
(A.33)
for all $0\leq N\leq N_{*}$. If the parameters
$\lambda\leq\mu\leq\zeta\leq\widetilde{\zeta}$
satisfy
$\displaystyle\widetilde{\zeta}^{4}\leq\left(\frac{\mu}{2\pi\sqrt{3}\lambda}\right)^{{\mathsf{N}_{\rm
dec}}}\,,$ (A.34)
and we have
$2{\mathsf{N}_{\rm dec}}+4\leq N_{*}\,,$ (A.35)
then the bound
$\displaystyle\left\|D^{N}D_{t}^{M}\left(f\;\varphi\circ\Phi\right)\right\|_{L^{p}}$
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{C}_{\varphi}\mathcal{M}\left(N,N_{x},\zeta,\widetilde{\zeta}\right)\mathcal{M}\left(M,N_{t},\nu,\widetilde{\nu}\right)$
(A.36)
holds for $N\leq N_{*}$ and $M\leq M_{*}$.
###### Remark A.8.
We emphasize that (A.36) holds for the same range of $N$ and $M$ for which
(A.30) holds, as soon as $N_{*}$ is sufficiently large compared to
${\mathsf{N}_{\rm dec}}$ so that (A.35) holds.
###### Remark A.9.
We note that if estimate (A.30) is known to hold for $N+M\leq N_{\circ}$ for
some $N_{\circ}\geq 2{\mathsf{N}_{\rm dec}}+4$ (instead of, for $N\leq N_{*}$
and $M\leq M_{*}$), and if the bounds (A.31)–(A.32) hold for all $N\leq
N_{\circ}$, then it follows from the proof below that the bound (A.36) holds
for $N+M\leq N_{\circ}$ and $M\leq N_{\circ}-2{\mathsf{N}_{\rm dec}}-4$. The
only modification required to the proof (given below) is that instead of
considering the cases $N^{\prime}\leq N_{*}-{\mathsf{N}_{\rm dec}}-4$ and
$N^{\prime}>N_{*}-{\mathsf{N}_{\rm dec}}-4$, we now have to split according to
$N^{\prime}+M\leq N_{\circ}-{\mathsf{N}_{\rm dec}}-4$ and
$N^{\prime}+M>N_{\circ}-{\mathsf{N}_{\rm dec}}-4$. In the second case we use
that $N-N^{\prime\prime}\geq N_{\circ}-M-{\mathsf{N}_{\rm
dec}}-4\geq{\mathsf{N}_{\rm dec}}$, which holds exactly because $M\leq
N_{\circ}-2{\mathsf{N}_{\rm dec}}-4$.
###### Proof of Lemma A.7.
Since $D_{t}\Phi=0$ we have $D_{t}^{M}(\varphi\circ\Phi)=0$. Using that
$\mathrm{div\,}v\equiv 0$, so that $\Phi$ and $\Phi^{-1}$ preserve volume, and
Lemma A.6, which we may apply due to (A.31), we have
$\displaystyle\left\|D^{N}D_{t}^{M}\left(f\;\varphi\circ\Phi\right)\right\|_{L^{p}}$
$\displaystyle\lesssim\sum_{N^{\prime}=0}^{N}\left\|D^{N^{\prime}}D_{t}^{M}f\;D^{N-N^{\prime}}\left(\varphi\circ\Phi\right)\right\|_{L^{p}}$
$\displaystyle\lesssim\sum_{N^{\prime}=0}^{N}\sum_{N^{\prime\prime}=0}^{N-N^{\prime}}\lambda^{N-N^{\prime}-N^{\prime\prime}}\left\|D^{N^{\prime}}D_{t}^{M}f\;(D^{N^{\prime\prime}}\varphi)\circ\Phi\right\|_{L^{p}}$
$\displaystyle\lesssim\sum_{N^{\prime}=0}^{N}\sum_{N^{\prime\prime}=0}^{N-N^{\prime}}\lambda^{N-N^{\prime}-N^{\prime\prime}}\left\|\left(D^{N^{\prime}}D_{t}^{M}f\right)\circ\Phi^{-1}D^{N^{\prime\prime}}\varphi\right\|_{L^{p}}.$
(A.37)
In (A.37) let us first consider the case $N^{\prime}\leq
N_{*}-{\mathsf{N}_{\rm dec}}-4$, so that $N^{\prime}+M\leq
N_{*}+M_{*}-{\mathsf{N}_{\rm dec}}-4$. Under assumption (A.32) we may apply
Lemma A.6, and using (A.30) we have
$\displaystyle\left\|D^{n}\left((D^{N^{\prime}}D_{t}^{M}f)\circ(\Phi^{-1},t)\right)\right\|_{L^{p}}$
$\displaystyle\lesssim\sum_{n^{\prime}=0}^{n}\lambda^{n-n^{\prime}}\left\|(D^{n^{\prime}+N^{\prime}}D_{t}^{M}f)\circ\Phi^{-1}\right\|_{L^{p}}$
$\displaystyle\lesssim\mathcal{C}_{f}\sum_{n^{\prime}=0}^{n}\lambda^{n-n^{\prime}}\lambda^{n^{\prime}+N^{\prime}}\mathcal{M}\left(M,N_{t},\nu,\widetilde{\nu}\right)$
$\displaystyle\lesssim\left(\mathcal{C}_{f}\lambda^{N^{\prime}}\mathcal{M}\left(M,N_{t},\nu,\widetilde{\nu}\right)\right)\lambda^{n}\,,$
(A.38)
for all $n\leq{\mathsf{N}_{\rm dec}}+4$. This bound matches (A.15), with
$\mathcal{C}_{f}$ replaced by
$\mathcal{C}_{f}\lambda^{N^{\prime}}\mathcal{M}\left(M,N_{t},\nu,\widetilde{\nu}\right)$.
Since, like $\varphi$, the function $D^{N^{\prime\prime}}\varphi$ is
$(\mathbb{T}/\mu)^{3}$-periodic, due to (A.38), the fact that
$\lambda\leq\widetilde{\zeta}$, and assumption (A.34), we may apply Lemma A.2
to conclude
$\displaystyle\left\|\left(D^{N^{\prime}}D_{t}^{M}f\right)\circ\Phi^{-1}D^{N^{\prime\prime}}\varphi\right\|_{L^{p}}\lesssim\mathcal{C}_{f}\lambda^{N^{\prime}}\mathcal{M}\left(M,N_{t},\nu,\widetilde{\nu}\right)\left\|D^{N^{\prime\prime}}\varphi\right\|_{L^{p}}.$
Inserting this bound back into (A.37) and using (A.33) concludes the proof of
(A.36) for the values of $N^{\prime}$ considered in this case.
Next, let us consider the case $N^{\prime}>N_{*}-{\mathsf{N}_{\rm dec}}-4$.
Since $0\leq N^{\prime}\leq N$, in particular this means that
$N>N_{*}-{\mathsf{N}_{\rm dec}}-4$, and since $N^{\prime\prime}\leq
N-N^{\prime}$ we also obtain that $N-N^{\prime\prime}\geq
N^{\prime}>N_{*}-{\mathsf{N}_{\rm dec}}-4\geq{\mathsf{N}_{\rm dec}}$. Here we
have used (A.35). Then the Hölder inequality, the fact that $\Phi^{-1}$ is
volume preserving, the Sobolev embedding $W^{4,p}\subset L^{\infty}$, the
ordering $\widetilde{\zeta}\geq\zeta\geq\mu\geq 1$ and assumption (A.34),
imply that
$\displaystyle\lambda^{N-N^{\prime}-N^{\prime\prime}}\left\|\left(D^{N^{\prime}}D_{t}^{M}f\right)\circ\Phi^{-1}D^{N^{\prime\prime}}\varphi\right\|_{L^{p}}$
$\displaystyle\lesssim\lambda^{N-N^{\prime}-N^{\prime\prime}}\left\|D^{N^{\prime}}D_{t}^{M}f\right\|_{L^{p}}\left\|D^{N^{\prime\prime}}\varphi\right\|_{L^{\infty}}$
$\displaystyle\lesssim\lambda^{N-N^{\prime}-N^{\prime\prime}}\mathcal{C}_{f}\lambda^{N^{\prime}}\mathcal{M}\left(M,N_{t},\nu,\widetilde{\nu}\right)\mathcal{C}_{\varphi}\mathcal{M}\left(N^{\prime\prime}+4,N_{x},\zeta,\widetilde{\zeta}\right)$
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{C}_{\varphi}\mathcal{M}\left(N,N_{x},\zeta,\widetilde{\zeta}\right)\mathcal{M}\left(M,N_{t},\nu,\widetilde{\nu}\right)\widetilde{\zeta}^{4}\left(\frac{\lambda}{\zeta}\right)^{N-N^{\prime\prime}}$
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{C}_{\varphi}\mathcal{M}\left(N,N_{x},\zeta,\widetilde{\zeta}\right)\mathcal{M}\left(M,N_{t},\nu,\widetilde{\nu}\right)\widetilde{\zeta}^{4}\left(\frac{\lambda}{\mu}\right)^{{\mathsf{N}_{\rm
dec}}}$
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{C}_{\varphi}\mathcal{M}\left(N,N_{x},\zeta,\widetilde{\zeta}\right)\mathcal{M}\left(M,N_{t},\nu,\widetilde{\nu}\right)\,.$
Combining the above estimate with (A.37), we deduce that also for
$N^{\prime}>N_{*}-{\mathsf{N}_{\rm dec}}-4$, the bound (A.36) holds,
concluding the proof of the lemma. ∎
### A.6 Bounds for sums and iterates of operators
For two differential operators $A$ and $B$ we have the expansion
$\displaystyle(A+B)^{m}=\sum_{k=1}^{m}\sum_{\begin{subarray}{c}\alpha,\beta\in{\mathbb{N}}^{k}\\\
|\alpha|+|\beta|=m\end{subarray}}\left(\prod_{i=1}^{k}A^{\alpha_{i}}B^{\beta_{i}}\right).$
(A.39)
Clearly (A.39) simplifies if $[A,B]=0$. We frequently need to apply the
above formula with
$A=v\cdot\nabla,$
for some vector field $v$. The question we would like to address in this
section is the following: Assume that we have already established estimates on
$(\prod_{i}D^{\alpha_{i}}B^{\beta_{i}})v$, for $|\alpha|+|\beta|\leq m$. Can
we deduce estimates for the operator $(A+B)^{m}=(v\cdot\nabla+B)^{m}$? The
answer is “yes”, and is summarized in the following lemma:
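As a quick sanity check (an illustration only, not part of the argument), the expansion (A.39) can be verified for small $m$ with SymPy's noncommutative symbols: every word of length $m$ in $A$ and $B$ appears exactly once, grouped into blocks $A^{\alpha_{i}}B^{\beta_{i}}$.

```python
import sympy as sp

# Noncommuting symbols standing in for the operators A and B.
A, B = sp.symbols('A B', commutative=False)

# m = 2: the four words AA, AB, BA, BB.
ok2 = sp.expand((A + B)**2 - (A**2 + A*B + B*A + B**2)) == 0

# m = 3: all eight words of length 3.
ok3 = sp.expand((A + B)**3
                - (A**3 + A**2*B + A*B*A + B*A**2
                   + A*B**2 + B*A*B + B**2*A + B**3)) == 0
```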
###### Lemma A.10.
Fix $N_{x},N_{t},N_{*}\in\mathbb{N}$, let
$\Omega\subset\mathbb{T}^{3}\times\mathbb{R}$ be a space-time domain, and let $v$ be
a vector field. For $k\geq 1$ and $\alpha,\beta\in\mathbb{N}^{k}$ such that
$|\alpha|+|\beta|\leq N_{*}$, we assume that we have the bounds
$\displaystyle\left\|\left(\prod_{i=1}^{k}D^{\alpha_{i}}B^{\beta_{i}}\right)v\right\|_{L^{\infty}(\Omega)}\lesssim\mathcal{C}_{v}\mathcal{M}\left(|\alpha|,N_{x},\lambda_{v},\widetilde{\lambda}_{v}\right)\mathcal{M}\left(|\beta|,N_{t},\mu_{v},\widetilde{\mu}_{v}\right)$
(A.40)
for some $\mathcal{C}_{v}\geq 0$,
$1\leq\lambda_{v}\leq\widetilde{\lambda}_{v}$, and
$1\leq\mu_{v}\leq\widetilde{\mu}_{v}$. With the same notation and restrictions
on $|\alpha|,|\beta|$, let $f$ be a function which for some $p\in[1,\infty]$
obeys
$\displaystyle\left\|\left(\prod_{i=1}^{k}D^{\alpha_{i}}B^{\beta_{i}}\right)f\right\|_{L^{p}(\Omega)}\lesssim\mathcal{C}_{f}\mathcal{M}\left(|\alpha|,N_{x},\lambda_{f},\widetilde{\lambda}_{f}\right)\mathcal{M}\left(|\beta|,N_{t},\mu_{f},\widetilde{\mu}_{f}\right)$
(A.41)
for some $\mathcal{C}_{f}\geq 0$,
$1\leq\lambda_{f}\leq\widetilde{\lambda}_{f}$, and
$1\leq\mu_{f}\leq\widetilde{\mu}_{f}$. Denote
$\displaystyle\lambda=\max\\{\lambda_{f},\lambda_{v}\\},\quad\widetilde{\lambda}=\max\\{\widetilde{\lambda}_{f},\widetilde{\lambda}_{v}\\},\quad\mu=\max\\{\mu_{f},\mu_{v}\\},\quad\widetilde{\mu}=\max\\{\widetilde{\mu}_{f},\widetilde{\mu}_{v}\\}.$
Then, for
$A=v\cdot\nabla$
we have the bounds
$\displaystyle\left\|D^{n}\left(\prod_{i=1}^{k}A^{\alpha_{i}}B^{\beta_{i}}\right)f\right\|_{L^{p}(\Omega)}$
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{C}_{v}^{|\alpha|}\mathcal{M}\left(n+|\alpha|,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(|\beta|,N_{t},\mu,\widetilde{\mu}\right)$
(A.42)
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{M}\left(n,N_{x},\lambda,\widetilde{\lambda}\right)(\mathcal{C}_{v}\widetilde{\lambda})^{|\alpha|}\mathcal{M}\left(|\beta|,N_{t},\mu,\widetilde{\mu}\right)$
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{M}\left(n,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(|\alpha|+|\beta|,N_{t},\max\\{\mu,\mathcal{C}_{v}\widetilde{\lambda}\\},\max\\{\widetilde{\mu},\mathcal{C}_{v}\widetilde{\lambda}\\}\right)$
(A.43)
as long as $n+|\alpha|+|\beta|\leq N_{*}$. As a consequence, if $k=m$ then
(A.39) and (A.43) imply the bound
$\displaystyle\left\|D^{n}(A+B)^{m}f\right\|_{L^{p}(\Omega)}\lesssim\mathcal{C}_{f}\mathcal{M}\left(n,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(m,N_{t},\max\\{\mu,\mathcal{C}_{v}\widetilde{\lambda}\\},\max\\{\widetilde{\mu},\mathcal{C}_{v}\widetilde{\lambda}\\}\right)$
(A.44)
for $n+m\leq N_{*}$.
###### Remark A.11.
The previous lemma is typically applied with $v=u_{q}$ and $B=D_{t,q-1}$ in
order to obtain estimates for
$D^{n}(\prod_{i}D_{q}^{\alpha_{i}}D_{t,q-1}^{\beta_{i}})f$, and hence for
$D^{n}D_{q}^{m}f$. A less standard application of this lemma uses
$v=-v_{q-1}$ and $B=D_{t,q-1}$ in order to obtain estimates for time
derivatives via
$D^{n}\partial_{t}^{m}f=D^{n}(-v_{q-1}\cdot\nabla+D_{t,q-1})^{m}f$.
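The proof below rests on expanding powers of $v\cdot\nabla$ as in (A.45). As an illustration (not part of the proof), the case $n=2$ can be checked symbolically in one space dimension, where $(v\partial_{x})^{2}=v(\partial_{x}v)\partial_{x}+v^{2}\partial_{x}^{2}$, i.e. $f_{1,2}=v\,Dv$ and $f_{2,2}=v^{2}$.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
v = sp.Function('v')(x)

A = lambda g: v * sp.diff(g, x)   # A = v·∇ in one space dimension

# (A.45) with n = 2: (v·∇)^2 = f_{1,2} D + f_{2,2} D^2, where
# f_{1,2} = v Dv (one factor differentiated, |ζ| = 1) and f_{2,2} = v^2.
rhs = v * sp.diff(v, x) * sp.diff(f, x) + v**2 * sp.diff(f, x, 2)
ok = sp.simplify(A(A(f)) - rhs) == 0
```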
###### Proof of Lemma A.10.
We recall (6.54)–(6.55) and note that we may write (ignoring the way in which
tensors are contracted)
$\displaystyle
A^{n}=(v\cdot\nabla)^{n}=\sum_{j=1}^{n}f_{j,n}D^{j}\quad\mbox{where}\quad
f_{j,n}=\sum_{\begin{subarray}{c}\zeta\in\mathbb{N}^{n}\\\
|\zeta|=n-j\end{subarray}}c_{n,j,\zeta}\prod_{\ell=1}^{n}(D^{\zeta_{\ell}}v)$
(A.45)
where the $c_{n,j,\zeta}$ are certain combinatorial coefficients (tensors)
with the dependence given in the subindex, and $D^{a}$ represents
$\partial^{\alpha}$ for some multi-index $\alpha$ with $|\alpha|=a$. Inserting
(A.45) into the product of operators in (A.39), we see that
$\displaystyle D^{n}\prod_{i=1}^{k}A^{\alpha_{i}}B^{\beta_{i}}$
$\displaystyle=\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{k}\\\
1^{k}\leq\gamma\leq\alpha\end{subarray}}D^{n}\prod_{i=1}^{k}(f_{\gamma_{i},\alpha_{i}}D^{\gamma_{i}}B^{\beta_{i}})$
$\displaystyle=\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{k}\\\
1^{k}\leq\gamma\leq\alpha\end{subarray}}\sum_{\begin{subarray}{c}0\leq
n^{\prime}\leq n+|\gamma|\\\ 0\leq
m^{\prime}\leq|\beta|\end{subarray}}\sum_{\begin{subarray}{c}\delta,\kappa\in\mathbb{N}^{k}\\\
|\delta|=n+|\gamma|-n^{\prime}\\\
|\kappa|=|\beta|-m^{\prime}\end{subarray}}\left(\prod_{i=1}^{k}\left(\sum_{\begin{subarray}{c}\delta_{i}^{\prime},\kappa_{i}^{\prime}\in\mathbb{N}^{k}\\\
|\delta_{i}^{\prime}|=\delta_{i}\\\
|\kappa_{i}^{\prime}|=\kappa_{i}\end{subarray}}\widetilde{c}_{(\ldots)}\left(\prod_{\ell_{i}=1}^{k}D^{\delta_{i,\ell_{i}}^{\prime}}B^{\kappa_{i,\ell_{i}}^{\prime}}\right)f_{\gamma_{i},\alpha_{i}}\right)\right)$
$\displaystyle\qquad\qquad\times\left(\sum_{\begin{subarray}{c}\eta,\rho\in\mathbb{N}^{k}\\\
|\eta|=n^{\prime}\\\
|\rho|=m^{\prime}\end{subarray}}\bar{c}_{(\ldots)}\prod_{s=1}^{k}D^{\eta_{s}}B^{\rho_{s}}\right)$
(A.46)
where the $\widetilde{c}_{(\dots)},\bar{c}_{(\dots)}\geq 0$ are certain
combinatorial coefficients (tensors). Combining (A.39)–(A.46), we obtain that
$\displaystyle D^{n}\left(\prod_{i=1}^{k}A^{\alpha_{i}}B^{\beta_{i}}\right)f$
$\displaystyle=\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{k}\\\
1^{k}\leq\gamma\leq\alpha\end{subarray}}\sum_{\begin{subarray}{c}0\leq
n^{\prime}\leq n+|\gamma|\\\ 0\leq
m^{\prime}\leq|\beta|\end{subarray}}\sum_{\begin{subarray}{c}\delta,\kappa\in\mathbb{N}^{k}\\\
|\delta|=n+|\gamma|-n^{\prime}\\\
|\kappa|=|\beta|-m^{\prime}\end{subarray}}\left(\sum_{\begin{subarray}{c}\eta,\rho\in\mathbb{N}^{k}\\\
|\eta|=n^{\prime}\\\
|\rho|=m^{\prime}\end{subarray}}\bar{c}_{(\dots)}\left(\prod_{s=1}^{k}D^{\eta_{s}}B^{\rho_{s}}\right)f\right)$
$\displaystyle\
\times\left(\prod_{i=1}^{k}\left(\sum_{\begin{subarray}{c}\delta_{i}^{\prime},\kappa_{i}^{\prime}\in\mathbb{N}^{k}\\\
|\delta_{i}^{\prime}|=\delta_{i}\\\
|\kappa_{i}^{\prime}|=\kappa_{i}\end{subarray}}\widetilde{c}_{(\dots)}\left(\prod_{\ell_{i}=1}^{k}D^{\delta_{i,\ell_{i}}^{\prime}}B^{\kappa_{i,\ell_{i}}^{\prime}}\right)\left(\sum_{\begin{subarray}{c}\zeta_{i}\in\mathbb{N}^{\alpha_{i}}\\\
|\zeta_{i}|=\alpha_{i}-\gamma_{i}\end{subarray}}c_{(\dots)}\prod_{r_{i}=1}^{\alpha_{i}}(D^{\zeta_{i,r_{i}}}v)\right)\right)\right)$
(A.47)
where the $c_{(\dots)},\widetilde{c}_{(\dots)},\bar{c}_{(\dots)}\geq 0$ are
certain combinatorial coefficients (tensors) whose dependence is omitted for
simplicity (they may depend on all the parameters in the sums and products).
The above expansion combined with the Leibniz rule, the bound (3.9), and
assumptions (A.40)–(A.41), implies
$\displaystyle\left\|D^{n}\left(\prod_{i=1}^{k}A^{\alpha_{i}}B^{\beta_{i}}\right)f\right\|_{L^{p}(\Omega)}$
$\displaystyle\lesssim\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{k}\\\
1^{k}\leq\gamma\leq\alpha\end{subarray}}\sum_{\begin{subarray}{c}0\leq
n^{\prime}\leq n+|\gamma|\\\ 0\leq
m^{\prime}\leq|\beta|\end{subarray}}\sum_{\begin{subarray}{c}\delta,\kappa\in\mathbb{N}^{k}\\\
|\delta|=n+|\gamma|-n^{\prime}\\\
|\kappa|=|\beta|-m^{\prime}\end{subarray}}\left(\sum_{\begin{subarray}{c}\eta,\rho\in\mathbb{N}^{k}\\\
|\eta|=n^{\prime}\\\
|\rho|=m^{\prime}\end{subarray}}\left\|\left(\prod_{s=1}^{k}D^{\eta_{s}}B^{\rho_{s}}\right)f\right\|_{L^{p}(\Omega)}\right)$
$\displaystyle\quad\times\left(\prod_{i=1}^{k}\left(\sum_{\begin{subarray}{c}\zeta_{i}\in\mathbb{N}^{\alpha_{i}}\\\
|\zeta_{i}|=\alpha_{i}-\gamma_{i}\end{subarray}}\sum_{\begin{subarray}{c}\delta_{i}^{\prime},\kappa_{i}^{\prime}\in\mathbb{N}^{k}\\\
|\delta_{i}^{\prime}|=\delta_{i}\\\
|\kappa_{i}^{\prime}|=\kappa_{i}\end{subarray}}\left\|\left(\prod_{\ell_{i}=1}^{k}D^{\delta_{i,\ell_{i}}^{\prime}}B^{\kappa_{i,\ell_{i}}^{\prime}}\right)\left(\prod_{r_{i}=1}^{\alpha_{i}}(D^{\zeta_{i,r_{i}}}v)\right)\right\|_{L^{\infty}(\Omega)}\right)\right)$
$\displaystyle\lesssim\sum_{\begin{subarray}{c}\gamma\in\mathbb{N}^{k}\\\
1^{k}\leq\gamma\leq\alpha\end{subarray}}\sum_{\begin{subarray}{c}0\leq
n^{\prime}\leq n+|\gamma|\\\ 0\leq
m^{\prime}\leq|\beta|\end{subarray}}\sum_{\begin{subarray}{c}\delta,\kappa\in\mathbb{N}^{k}\\\
|\delta|=n+|\gamma|-n^{\prime}\\\
|\kappa|=|\beta|-m^{\prime}\end{subarray}}\left(\mathcal{C}_{f}\mathcal{M}\left(n^{\prime},N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(m^{\prime},N_{t},\mu,\widetilde{\mu}\right)\right)$
$\displaystyle\quad\times\left(\prod_{i=1}^{k}\mathcal{C}_{v}^{\alpha_{i}}\mathcal{M}\left(\alpha_{i}-\gamma_{i}+\delta_{i},N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(\kappa_{i},N_{t},\mu,\widetilde{\mu}\right)\right)$
$\displaystyle\lesssim\sum_{\begin{subarray}{c}0\leq
n^{\prime}\leq n+|\alpha|\\\ 0\leq
m^{\prime}\leq|\beta|\end{subarray}}\left(\mathcal{C}_{f}\mathcal{M}\left(n^{\prime},N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(m^{\prime},N_{t},\mu,\widetilde{\mu}\right)\right)$
$\displaystyle\quad\times\left(\mathcal{C}_{v}^{|\alpha|}\mathcal{M}\left(|\alpha|+n-n^{\prime},N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(|\beta|-m^{\prime},N_{t},\mu,\widetilde{\mu}\right)\right)$
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{C}_{v}^{|\alpha|}\mathcal{M}\left(|\alpha|+n,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(|\beta|,N_{t},\mu,\widetilde{\mu}\right)$
which is precisely the bound claimed in (A.42). Estimate (A.43) follows
immediately, while the bound (A.44) is a consequence of the above and (A.39).
∎
### A.7 Commutators with material derivatives
Let $D$ represent a pure spatial derivative and let
$D_{t}=\partial_{t}+v\cdot\nabla$
denote a material derivative along the smooth (incompressible) vector field
$v$. This vector field $v$ is fixed throughout this section. The question we
would like to address in this section is the following: Assume that for the
vector field $v$ we have $D^{a}D_{t}^{b}Dv$ estimates available. Can we then
bound the operator norm of $D_{t}^{b}D^{a}$ in terms of the operator norm of
$D^{a}D_{t}^{b}$?
Following Komatsu [47, Lemma 5.2], a useful ingredient for bounding
commutators of Eulerian and material derivatives is the following lemma. We
use the following commutator notation:
$\displaystyle(\mathrm{ad\,}D_{t})^{0}(D)$ $\displaystyle=D$
$\displaystyle(\mathrm{ad\,}D_{t})^{1}(D)$
$\displaystyle=[D_{t},D]=-Dv\cdot\nabla$
$\displaystyle(\mathrm{ad\,}D_{t})^{a}(D)=(\mathrm{ad\,}D_{t})((\mathrm{ad\,}D_{t})^{a-1}(D))$
$\displaystyle=[D_{t},(\mathrm{ad\,}D_{t})^{a-1}(D)]$
for all $a\geq 2$. Note that for any $a\geq 0$, $(\mathrm{ad\,}D_{t})^{a}(D)$
is a differential operator of order $1$.
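The identity $(\mathrm{ad\,}D_{t})^{1}(D)=[D_{t},D]=-Dv\cdot\nabla$ can be verified directly; the following one-dimensional SymPy computation is an illustration only.

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')(x, t)
v = sp.Function('v')(x, t)

Dt = lambda g: sp.diff(g, t) + v * sp.diff(g, x)   # material derivative
D = lambda g: sp.diff(g, x)                         # spatial derivative

# (ad D_t)^1(D) f = [D_t, D] f; in one dimension -Dv·∇ f = -v_x f_x.
ad1 = lambda g: Dt(D(g)) - D(Dt(g))
ok = sp.simplify(ad1(f) + sp.diff(v, x) * sp.diff(f, x)) == 0
```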
###### Lemma A.12.
Let $m,n\geq 0$. Then we have that the commutator of $D_{t}^{m}$ and $D^{n}$
is given by
$\displaystyle\left[D_{t}^{m},D^{n}\right]=\sum_{\\{\alpha\in\mathbb{N}^{n}\colon
1\leq|\alpha|\leq
m\\}}\frac{m!}{\alpha!(m-|\alpha|)!}\left(\prod_{\ell=1}^{n}(\mathrm{ad\,}D_{t})^{\alpha_{\ell}}(D)\right)D_{t}^{m-|\alpha|}.$
(A.48)
By the product in (A.48) we mean the product/composition of operators
$\prod_{\ell=1}^{n}(\mathrm{ad\,}D_{t})^{\alpha_{\ell}}(D)=(\mathrm{ad\,}D_{t})^{\alpha_{n}}(D)(\mathrm{ad\,}D_{t})^{\alpha_{n-1}}(D)\ldots(\mathrm{ad\,}D_{t})^{\alpha_{1}}(D)\,,$
so that on the right side of (A.48) we have a sum of differential operators of
order at most $n$.
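As an illustration of (A.48), the case $m=2$, $n=1$ reads $[D_{t}^{2},D]=2(\mathrm{ad\,}D_{t})(D)\,D_{t}+(\mathrm{ad\,}D_{t})^{2}(D)$, which can be checked symbolically in one space dimension (a sanity check only, not part of the proof).

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')(x, t)
v = sp.Function('v')(x, t)

Dt = lambda g: sp.diff(g, t) + v * sp.diff(g, x)   # material derivative
D = lambda g: sp.diff(g, x)                         # spatial derivative
ad1 = lambda g: Dt(D(g)) - D(Dt(g))                 # (ad D_t)^1(D)
ad2 = lambda g: Dt(ad1(g)) - ad1(Dt(g))             # (ad D_t)^2(D)

# (A.48) with m = 2, n = 1: coefficients 2!/(1!1!) = 2 and 2!/(2!0!) = 1.
lhs = Dt(Dt(D(f))) - D(Dt(Dt(f)))
rhs = 2 * ad1(Dt(f)) + ad2(f)
ok = sp.simplify(lhs - rhs) == 0
```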
For the above lemma to be useful, we need to be able to characterize the
operator $(\mathrm{ad\,}D_{t})^{a}(D)$.
###### Lemma A.13.
Let $a\in\mathbb{N}$. Then the order $1$ differential operator
$(\mathrm{ad\,}D_{t})^{a}(D)$ may be expressed as
$\displaystyle(\mathrm{ad\,}D_{t})^{a}(D)=\sum_{k=1}^{a}\sum_{\\{\beta\in\mathbb{N}^{k}\colon|\beta|=a-k\\}}c_{a,k,\beta}\prod_{j=1}^{k}(D_{t}^{\beta_{j}}Dv)\cdot\nabla$
(A.49)
where the $\prod$ in (A.49) denotes the product of matrices, $c_{a,k,\beta}$
are coefficients which depend only on $a,k$, $\beta$.
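For $a=2$, formula (A.49) produces the two terms $(D_{t}Dv)\cdot\nabla$ (from $k=1$) and $(Dv)^{2}\cdot\nabla$ (from $k=2$), with signs absorbed into the coefficients $c_{a,k,\beta}$. A one-dimensional SymPy check (an illustration only) confirms $(\mathrm{ad\,}D_{t})^{2}(D)=\big(v_{x}^{2}-D_{t}v_{x}\big)\partial_{x}$.

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')(x, t)
v = sp.Function('v')(x, t)

Dt = lambda g: sp.diff(g, t) + v * sp.diff(g, x)   # material derivative
D = lambda g: sp.diff(g, x)                         # spatial derivative
ad = lambda op: (lambda g: Dt(op(g)) - op(Dt(g)))  # ad D_t of an operator

ad1 = ad(D)
ad2 = ad(ad1)
# (A.49) with a = 2 in one dimension: -(D_t v_x) ∂_x + (v_x)^2 ∂_x.
rhs = (-Dt(sp.diff(v, x)) + sp.diff(v, x)**2) * sp.diff(f, x)
ok = sp.simplify(ad2(f) - rhs) == 0
```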
###### Proof of Lemma A.13.
When $a=1$ we know that $(\mathrm{ad\,}D_{t})(D)=-Dv\cdot\nabla$, so that the
lemma trivially holds. We proceed by induction on $a$. Using the fact that
$[D_{t},\nabla]=-Dv\cdot\nabla$, we obtain
$\displaystyle(\mathrm{ad\,}D_{t})^{a+1}(D)$
$\displaystyle=D_{t}\left(\sum_{k=1}^{a}\sum_{\beta\in\pi(k,a)}c_{a,k,\beta}\prod_{j=1}^{k}(D_{t}^{\beta_{j}}Dv)\right)\cdot\nabla+\sum_{k=1}^{a}\sum_{\beta\in\pi(k,a)}c_{a,k,\beta}\prod_{j=1}^{k}(D_{t}^{\beta_{j}}Dv)\cdot[D_{t},\nabla]$
$\displaystyle=D_{t}\left(\sum_{k=1}^{a}\sum_{\beta\in\pi(k,a)}c_{a,k,\beta}\prod_{j=1}^{k}(D_{t}^{\beta_{j}}Dv)\right)\cdot\nabla-\sum_{k=1}^{a}\sum_{\beta\in\pi(k,a)}c_{a,k,\beta}\prod_{j=1}^{k}(D_{t}^{\beta_{j}}Dv)Dv\cdot\nabla$
where we have denoted by
$\displaystyle\pi(k,a)=\left\\{\beta\in\mathbb{N}^{k}\colon|\beta|=a-k\right\\}$
the set of all ordered decompositions of $a-k$ into $k$ non-negative integers. For the
first term we use the Leibniz rule for $D_{t}$, so that for any
$\beta\in\pi(k,a)$, we obtain an element $\beta+e_{j}\in\pi(k,a+1)$, with
$e_{j}=(0,\ldots,0,1,0,\ldots,0)\in\mathbb{N}^{k}$, and the $1$ lies in the
$j^{th}$ coordinate. For $1\leq k\leq a$, this in fact lists all the elements
in $\pi(k,a+1)$. For the second sum, we identify $\beta\in\pi(k,a)$ with
$\beta\in\pi(k+1,a+1)$, upon padding it with a $0$ in the $k+1^{st}$ entry.
Changing variables $k+1\to k$, then recovers an element $\beta\in\pi(k,a+1)$,
including the case $k=a+1$, which was missing from the first sum. ∎
From Lemma A.12 and Lemma A.13 we deduce the following:
###### Lemma A.14.
Let $p\in[1,\infty]$. Fix $N_{x},N_{t},N_{*},M_{*}\in\mathbb{N}$, let $v$ be a
vector field, let $D_{t}=\partial_{t}+v\cdot\nabla$ be the associated material
derivative, and let $\Omega$ be a space-time domain. Assume that the vector
field $v$ obeys
$\displaystyle\left\|D^{N}D_{t}^{M}Dv\right\|_{L^{\infty}(\Omega)}\lesssim\mathcal{C}_{v}\mathcal{M}\left(N+1,N_{x},\lambda_{v},\widetilde{\lambda}_{v}\right)\mathcal{M}\left(M,N_{t},\mu_{v},\widetilde{\mu}_{v}\right)$
(A.50)
for $N\leq N_{*}$ and $M\leq M_{*}$. Moreover, let $f$ be a function which
obeys
$\displaystyle\left\|D^{N}D_{t}^{M}f\right\|_{L^{p}(\Omega)}\lesssim\mathcal{C}_{f}\mathcal{M}\left(N,N_{x},\lambda_{f},\widetilde{\lambda}_{f}\right)\mathcal{M}\left(M,N_{t},\mu_{f},\widetilde{\mu}_{f}\right)$
(A.51)
for all $N\leq N_{*}$ and $M\leq M_{*}$. Denote
$\displaystyle\lambda=\max\\{\lambda_{f},\lambda_{v}\\},\quad\widetilde{\lambda}=\max\\{\widetilde{\lambda}_{f},\widetilde{\lambda}_{v}\\},\quad\mu=\max\\{\mu_{f},\mu_{v}\\},\quad\widetilde{\mu}=\max\\{\widetilde{\mu}_{f},\widetilde{\mu}_{v}\\}.$
Let $m,n,\ell\geq 0$ be such that $n+\ell\leq N_{*}$ and $m\leq M_{*}$. Then,
we have that the commutator $[D_{t}^{m},D^{n}]$ is bounded as
$\displaystyle\left\|D^{\ell}\left[D_{t}^{m},D^{n}\right]f\right\|_{L^{p}(\Omega)}$
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{C}_{v}\widetilde{\lambda}_{v}\mathcal{M}\left(\ell+n,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(m-1,N_{t},\max\\{\mu,\mathcal{C}_{v}\widetilde{\lambda}_{v}\\},\max\\{\widetilde{\mu},\mathcal{C}_{v}\widetilde{\lambda}_{v}\\}\right)$
(A.52)
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{M}\left(\ell+n,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(m,N_{t},\max\\{\mu,\mathcal{C}_{v}\widetilde{\lambda}_{v}\\},\max\\{\widetilde{\mu},\mathcal{C}_{v}\widetilde{\lambda}_{v}\\}\right).$
(A.53)
Moreover, we have that for $k\geq 2$, and any $\alpha,\beta\in\mathbb{N}^{k}$
with $|\alpha|\leq N_{*}$ and $|\beta|\leq M_{*}$, the estimate
$\displaystyle\left\|\left(\prod_{i=1}^{k}D^{\alpha_{i}}D_{t}^{\beta_{i}}\right)f\right\|_{L^{p}(\Omega)}$
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{M}\left(|\alpha|,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(|\beta|,N_{t},\max\\{\mu,\mathcal{C}_{v}\widetilde{\lambda}_{v}\\},\max\\{\widetilde{\mu},\mathcal{C}_{v}\widetilde{\lambda}_{v}\\}\right)$
(A.54)
holds.
###### Remark A.15.
If instead of (A.50) and (A.51) holding for $N\leq N_{*}$ and $M\leq M_{*}$,
we know that both of these inequalities hold for all $N+M\leq N_{\circ}$ for
some $N_{\circ}\geq 1$, then the conclusions of the Lemma hold as follows: the
bounds (A.52) and (A.53) hold for $\ell+n+m\leq N_{\circ}$, while (A.54) holds
for $|\alpha|+|\beta|\leq N_{\circ}$. This fact follows immediately from the
proof of the Lemma, but may alternatively also be derived from its statement
(see also Lemma A.3).
###### Remark A.16.
In Lemma A.14, if the assumption (A.51) is replaced by
$\displaystyle\left\|D^{N}D_{t}^{M}f\right\|_{L^{p}(\Omega)}\lesssim\mathcal{C}_{f}\mathcal{M}\left(N-1,N_{x},\lambda_{f},\widetilde{\lambda}_{f}\right)\mathcal{M}\left(M,N_{t},\mu_{f},\widetilde{\mu}_{f}\right)\,,$
(A.55)
whenever $1\leq N\leq N_{*}$, then the conclusion (A.54) changes, and it
instead becomes
$\displaystyle\left\|\left(\prod_{i=1}^{k}D^{\alpha_{i}}D_{t}^{\beta_{i}}\right)f\right\|_{L^{p}(\Omega)}$
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{M}\left(|\alpha|-1,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(|\beta|,N_{t},\max\\{\mu,\mathcal{C}_{v}\widetilde{\lambda}_{v}\\},\max\\{\widetilde{\mu},\mathcal{C}_{v}\widetilde{\lambda}_{v}\\}\right)$
(A.56)
whenever $|\alpha|\geq 1$. This follows for instance by noting that the sum on
the second line of (A.61) only contains terms with $j\geq 1$, so that (A.55)
is not required when $N=0$.
###### Proof of Lemma A.14.
First, we deduce from (A.49) that for any $\alpha_{i}\geq 1$ and $1\leq i\leq
n$, we have
$\displaystyle(\mathrm{ad\,}D_{t})^{\alpha_{i}}(D)=\sum_{\kappa_{i}=1}^{\alpha_{i}}f_{\kappa_{i},\alpha_{i}}\cdot\nabla$
(A.57)
where the functions $f_{\kappa_{i},\alpha_{i}}$ are computed as
$\displaystyle
f_{\kappa_{i},\alpha_{i}}=\sum_{\\{\beta\in\mathbb{N}^{\kappa_{i}}\colon|\beta|=\alpha_{i}-\kappa_{i}\\}}c_{(\dots)}\prod_{j=1}^{\kappa_{i}}(D_{t}^{\beta_{j}}Dv)$
for suitable combinatorial coefficients (tensors) $c_{(\dots)}$ which depend
on $\kappa_{i},\alpha_{i}$, and $\beta$. In particular, in view of assumption
(A.50), and the Leibniz rule, we have that
$\displaystyle\left\|D^{\ell}f_{\kappa_{i},\alpha_{i}}\right\|_{L^{\infty}(\Omega)}\lesssim\mathcal{C}_{v}^{\kappa_{i}}\mathcal{M}\left(\kappa_{i}+\ell,N_{x},\lambda_{v},\widetilde{\lambda}_{v}\right)\mathcal{M}\left(\alpha_{i}-\kappa_{i},N_{t},\mu_{v},\widetilde{\mu}_{v}\right).$
(A.58)
Next, from (A.57) we deduce that for any $\alpha\in\mathbb{N}^{n}$ with
$|\alpha|\geq 1$, one may write
$\displaystyle\prod_{i=1}^{n}(\mathrm{ad\,}D_{t})^{\alpha_{i}}(D)=\sum_{j=1}^{n}g_{j,\alpha}D^{j}$
(A.59)
where
$\displaystyle g_{j,\alpha}=\sum_{\\{\kappa\in\mathbb{N}^{n}\colon
1^{n}\leq\kappa\leq\alpha\\}}\sum_{\\{\gamma\in\mathbb{N}^{n}\colon|\gamma|=n-j\\}}\widetilde{c}_{(\dots)}\prod_{i=1}^{n}D^{\gamma_{i}}f_{\kappa_{i},\alpha_{i}}.$
As a consequence of (A.58) we see that
$\displaystyle\left\|D^{\ell}g_{j,\alpha}\right\|_{L^{\infty}(\Omega)}\lesssim\sum_{|\kappa|=1}^{|\alpha|}\mathcal{C}_{v}^{|\kappa|}\mathcal{M}\left(\ell+n-j+|\kappa|,N_{x},\lambda_{v},\widetilde{\lambda}_{v}\right)\mathcal{M}\left(|\alpha|-|\kappa|,N_{t},\mu_{v},\widetilde{\mu}_{v}\right).$
(A.60)
From (A.48), assumption (A.51), identity (A.59), and bound (A.60), we see that
$\displaystyle\left\|D^{\ell}\left[D_{t}^{m},D^{n}\right]f\right\|_{L^{p}(\Omega)}$
$\displaystyle\lesssim\sum_{|\alpha|=1}^{m}\sum_{j=1}^{n}\left\|D^{\ell}\left(g_{j,\alpha}D^{j}D_{t}^{m-|\alpha|}\right)f\right\|_{L^{p}(\Omega)}$
$\displaystyle\lesssim\sum_{|\alpha|=1}^{m}\sum_{j=1}^{n}\left\|D^{\ell}g_{j,\alpha}\right\|_{L^{\infty}(\Omega)}\left\|D^{j}D_{t}^{m-|\alpha|}f\right\|_{L^{p}(\Omega)}+\left\|g_{j,\alpha}\right\|_{L^{\infty}(\Omega)}\left\|D^{\ell+j}D_{t}^{m-|\alpha|}f\right\|_{L^{p}(\Omega)}$
$\displaystyle\lesssim\sum_{k=1}^{m}\sum_{j=1}^{n}\mathcal{C}_{f}\mathcal{C}_{v}^{k}\mathcal{M}\left(\ell+n-j+k,N_{x},\lambda_{v},\widetilde{\lambda}_{v}\right)\mathcal{M}\left(j,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(m-k,N_{t},\mu,\widetilde{\mu}\right)$
$\displaystyle\qquad\qquad+\mathcal{C}_{f}\mathcal{C}_{v}^{k}\mathcal{M}\left(n-j+k,N_{x},\lambda_{v},\widetilde{\lambda}_{v}\right)\mathcal{M}\left(j+\ell,N_{x},\lambda,\widetilde{\lambda}\right)\mathcal{M}\left(m-k,N_{t},\mu,\widetilde{\mu}\right)$
$\displaystyle\lesssim\mathcal{C}_{f}\mathcal{M}\left(\ell+n,N_{x},\lambda,\widetilde{\lambda}\right)\sum_{k=1}^{m}(\mathcal{C}_{v}\widetilde{\lambda}_{v})^{k}\mathcal{M}\left(m-k,N_{t},\mu,\widetilde{\mu}\right)$
(A.61)
from which (A.53) follows directly.
In order to prove (A.54) we proceed by induction on $k$. For $k=1$ the
statement holds in view of (A.51). We assume that (A.54) holds for
$k^{\prime}\leq k-1$, and denote
$\displaystyle
P_{k^{\prime}}=\left(\prod_{i=1}^{k^{\prime}}D^{\alpha_{i}}D_{t}^{\beta_{i}}\right)f.$
With this notation we have
$\displaystyle P_{k}$
$\displaystyle=D^{\alpha_{k}}D_{t}^{\beta_{k}}D^{\alpha_{k-1}}D_{t}^{\beta_{k-1}}P_{k-2}$
$\displaystyle=D^{\alpha_{k}+\alpha_{k-1}}D_{t}^{\beta_{k}+\beta_{k-1}}P_{k-2}+D^{\alpha_{k}}\left[D_{t}^{\beta_{k}},D^{\alpha_{k-1}}\right]D_{t}^{\beta_{k-1}}P_{k-2}.$
Using (A.54) with $k-1$ gives the desired estimate for the first term above.
For the second term, we appeal to the commutator bound (A.53), applied to
$D_{t}^{\beta_{k-1}}P_{k-2}$, which obeys condition (A.51) in view of (A.54)
at level $k-1$. This concludes the proof of (A.54) at level $k$. ∎
### A.8 Intermittency-friendly inversion of the divergence
Given a vector field $G^{i}$, a zero mean periodic function $\varrho$ and an
incompressible flow $\Phi$, our goal in this section is to write
$G^{i}(x)\varrho(\Phi(x))$ as the divergence of a symmetric tensor.
###### Proposition A.17 (Inverse divergence iteration step).
Fix two zero-mean $\mathbb{T}^{3}$-periodic functions $\varrho$ and
$\vartheta$, which are related by $\varrho=\Delta\vartheta$. Let $\Phi$ be a
volume preserving transformation of $\mathbb{T}^{3}$, such that
$\left\|\nabla\Phi-\mathrm{Id}\right\|_{L^{\infty}(\mathbb{T}^{3})}\leq\nicefrac{{1}}{{2}}$.
Define the matrix $A=(\nabla\Phi)^{-1}$. Given a vector field $G^{i}$, we have
$\displaystyle
G^{i}\varrho\circ\Phi=\partial_{n}\mathring{R}^{in}+\partial_{i}P+E^{i}$
(A.62)
where the traceless symmetric stress $\mathring{R}^{in}$ is given by
$\displaystyle\mathring{R}^{in}$
$\displaystyle=\left(G^{i}A^{n}_{\ell}+G^{n}A^{i}_{\ell}-A^{i}_{k}A^{n}_{k}G^{p}\partial_{p}\Phi^{\ell}\right)(\partial_{\ell}\vartheta)\circ\Phi-P\delta_{in}\,,$
(A.63)
where the pressure term is given by
$\displaystyle P$
$\displaystyle=\left(2G^{n}A^{n}_{\ell}-A^{n}_{k}A^{n}_{k}G^{p}\partial_{p}\Phi^{\ell}\right)(\partial_{\ell}\vartheta)\circ\Phi\,,$
(A.64)
and the error term $E^{i}$ is given by
$\displaystyle E^{i}$
$\displaystyle=\left(\partial_{n}\left(G^{p}A^{i}_{k}A^{n}_{k}-G^{n}A^{i}_{k}A^{p}_{k}\right)\partial_{p}\Phi^{\ell}-\partial_{n}G^{i}A^{n}_{\ell}\right)(\partial_{\ell}\vartheta)\circ\Phi\,.$
(A.65)
###### Proof of Proposition A.17.
Note that by definition we have
$A^{k}_{\ell}\partial_{j}\Phi^{\ell}=\delta_{kj}$. Since $\Phi$ is volume
preserving, $\det(\nabla\Phi)=1$, and so each entry of the matrix $A$ equals
the corresponding cofactor of $\nabla\Phi$, which in three dimensions is a
quadratic function of entries of $\nabla\Phi$ given explicitly by
$A^{i}_{j}=\frac{1}{2}\varepsilon_{ipq}\varepsilon_{jk\ell}\partial_{k}\Phi^{p}\partial_{\ell}\Phi^{q}$.
In two dimensions $A$ is a linear map in $\nabla\Phi$. Moreover, since $\Phi$
is volume preserving, the Piola identity $\partial_{j}A^{j}_{i}=0$ holds for
every $i\in\\{1,2,3\\}$. The main identity that we use in the proof is that
for any scalar function $\varphi$ we have
$(\partial_{i}\varphi)\circ\Phi=A^{m}_{i}\partial_{m}(\varphi\circ\Phi)=\partial_{m}(A^{m}_{i}\varphi\circ\Phi)$.
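Both the Piola identity and the main identity $(\partial_{i}\varphi)\circ\Phi=\partial_{m}(A^{m}_{i}\,\varphi\circ\Phi)$ can be checked symbolically for a concrete volume-preserving map; the shear map and the scalar $\varphi$ below are hypothetical examples chosen only for illustration.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]

# A hypothetical volume-preserving shear map (illustration only).
Phi = sp.Matrix([x1 + sp.sin(x2), x2 + x3**2, x3])
J = Phi.jacobian(X)                 # (∇Φ)^ℓ_m = ∂_m Φ^ℓ
vol_ok = sp.simplify(J.det()) == 1  # det ∇Φ = 1
A = J.inv()                         # A^m_i, so that A^m_i ∂_m Φ^ℓ = δ_i^ℓ

# Piola identity: ∂_j A^j_i = 0 for each i.
piola_ok = all(
    sp.simplify(sum(sp.diff(A[j, i], X[j]) for j in range(3))) == 0
    for i in range(3))

# Main identity: (∂_i φ)∘Φ = ∂_m(A^m_i φ∘Φ) for a sample scalar φ.
y = sp.symbols('y1 y2 y3')
phi = y[0]**2 * y[1] + y[2]
sub = dict(zip(y, Phi))
phic = phi.subs(sub)                # φ∘Φ
key_ok = all(
    sp.simplify(sp.diff(phi, y[i]).subs(sub)
                - sum(sp.diff(A[m, i] * phic, X[m]) for m in range(3))) == 0
    for i in range(3))
```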
Starting from $\varrho=\Delta\vartheta$, we have
$\displaystyle G^{i}\varrho\circ\Phi$
$\displaystyle=G^{i}(\partial_{kk}\vartheta)\circ\Phi$
$\displaystyle=G^{i}A^{n}_{k}\partial_{n}(\partial_{k}\vartheta)\circ\Phi$
$\displaystyle=\partial_{n}\left(G^{i}A^{n}_{k}(\partial_{k}\vartheta)\circ\Phi\right)-\partial_{n}G^{i}A^{n}_{k}(\partial_{k}\vartheta)\circ\Phi$
$\displaystyle=\partial_{n}\left(G^{i}A^{n}_{k}(\partial_{k}\vartheta)\circ\Phi+G^{n}A^{i}_{k}(\partial_{k}\vartheta)\circ\Phi\right)-\partial_{n}\left(G^{n}A^{i}_{k}(\partial_{k}\vartheta)\circ\Phi\right)-\partial_{n}G^{i}A^{n}_{k}(\partial_{k}\vartheta)\circ\Phi\,.$
Next, we have
$\displaystyle\partial_{n}\left(G^{n}A^{i}_{k}(\partial_{k}\vartheta)\circ\Phi\right)$
$\displaystyle=\partial_{n}\left(G^{n}A^{i}_{k}A^{p}_{k}\partial_{p}(\vartheta\circ\Phi)\right)$
$\displaystyle=\partial_{p}\partial_{n}\left(G^{n}A^{i}_{k}A^{p}_{k}\vartheta\circ\Phi\right)-\partial_{n}\left(\partial_{p}(G^{n}A^{i}_{k}A^{p}_{k})\vartheta\circ\Phi\right)$
$\displaystyle=\partial_{p}\left(G^{n}A^{i}_{k}A^{p}_{k}\partial_{n}(\vartheta\circ\Phi)\right)+\partial_{p}\left(\partial_{n}(G^{n}A^{i}_{k}A^{p}_{k})\vartheta\circ\Phi\right)-\partial_{n}\left(\partial_{p}(G^{n}A^{i}_{k}A^{p}_{k})\vartheta\circ\Phi\right)$
$\displaystyle=\partial_{n}\left(G^{p}A^{i}_{k}A^{n}_{k}\partial_{p}(\vartheta\circ\Phi)\right)+\partial_{n}\left(\partial_{p}(G^{p}A^{i}_{k}A^{n}_{k})\vartheta\circ\Phi\right)-\partial_{n}\left(\partial_{p}(G^{n}A^{i}_{k}A^{p}_{k})\vartheta\circ\Phi\right)$
where in the last equality we have just switched the letters of summation $n$
and $p$. We further massage the last term in the above equality.
$\displaystyle\partial_{n}\left(\partial_{p}(G^{n}A^{i}_{k}A^{p}_{k})\vartheta\circ\Phi\right)$
$\displaystyle=\partial_{p}\left(G^{n}A^{i}_{k}A^{p}_{k}\right)\partial_{n}(\vartheta\circ\Phi)+\partial_{np}\left(G^{n}A^{i}_{k}A^{p}_{k}\right)\vartheta\circ\Phi$
$\displaystyle=\partial_{p}\left(G^{n}A^{i}_{k}A^{p}_{k}\right)\partial_{n}(\vartheta\circ\Phi)+\partial_{p}\left(\partial_{n}\left(G^{n}A^{i}_{k}A^{p}_{k}\right)\vartheta\circ\Phi\right)-\partial_{n}\left(G^{n}A^{i}_{k}A^{p}_{k}\right)\partial_{p}(\vartheta\circ\Phi)$
Combining the above three equalities, we arrive at
$\displaystyle G^{i}\varrho\circ\Phi$
$\displaystyle=\partial_{n}\left((G^{i}A^{n}_{k}+G^{n}A^{i}_{k})(\partial_{k}\vartheta)\circ\Phi-A^{i}_{k}A^{n}_{k}G^{p}\partial_{p}(\vartheta\circ\Phi)\right)$
$\displaystyle\quad+\partial_{n}\left(G^{p}A^{i}_{k}A^{n}_{k}-G^{n}A^{i}_{k}A^{p}_{k}\right)\partial_{p}(\vartheta\circ\Phi)-\partial_{n}G^{i}A^{n}_{k}(\partial_{k}\vartheta)\circ\Phi$
$\displaystyle=\partial_{n}\left((G^{i}A^{n}_{k}+G^{n}A^{i}_{k})(\partial_{k}\vartheta)\circ\Phi-A^{i}_{k}A^{n}_{k}G^{p}\partial_{p}\Phi^{\ell}(\partial_{\ell}\vartheta)\circ\Phi\right)$
$\displaystyle\quad+\partial_{n}\left(G^{p}A^{i}_{k}A^{n}_{k}-G^{n}A^{i}_{k}A^{p}_{k}\right)\partial_{p}\Phi^{\ell}(\partial_{\ell}\vartheta)\circ\Phi-\partial_{n}G^{i}A^{n}_{\ell}(\partial_{\ell}\vartheta)\circ\Phi$
In the last equality we have used the chain rule $\partial_{p}(\vartheta\circ\Phi)=\partial_{p}\Phi^{\ell}(\partial_{\ell}\vartheta)\circ\Phi$ and relabeled summation indices. Identities
(A.62)–(A.65) follow upon declaring that the trace part of the symmetric
stress is the pressure. ∎
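For a single step, in the normalization $\zeta=1$, the resulting decomposition of $G^{i}\varrho\circ\Phi$ into a divergence plus a lower-order error can be verified symbolically end to end. A minimal sympy sketch, in which the shear map $\Phi$, the vector field $G$, and the potential $\vartheta$ are hypothetical choices, and $A^{m}_{i}$ is realized as the inverse-Jacobian entry for which the chain rule above holds:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
Phi = (x + sp.sin(y), y, z)                  # hypothetical volume-preserving shear

J = sp.Matrix(3, 3, lambda m, p: sp.diff(Phi[p], X[m]))
Ainv = sp.simplify(J.inv())

def A(sup, sub):                             # A^{sup}_{sub} in the text's indexing
    return Ainv[sub, sup]

sub_map = dict(zip(X, Phi))
def comp(e):                                 # composition e o Phi
    return e.subs(sub_map, simultaneous=True)

def d(e, j):
    return sp.diff(e, X[j])

G = [z, x * y, sp.cos(y)]                    # hypothetical smooth vector field
theta = sp.sin(x) * sp.cos(y)
rho = sum(sp.diff(theta, v, 2) for v in X)   # rho = Delta theta

for i in range(3):
    lhs = G[i] * comp(rho)
    # symmetric stress and error term of the one-step identity (zeta = 1)
    R = [sum((G[i] * A(n, l) + G[n] * A(i, l)) * comp(d(theta, l)) for l in range(3))
         - sum(A(i, k) * A(n, k) * G[p] * d(Phi[l], p) * comp(d(theta, l))
               for k in range(3) for p in range(3) for l in range(3))
         for n in range(3)]
    G1 = [sum(d(G[p] * A(i, k) * A(n, k) - G[n] * A(i, k) * A(p, k), n) * d(Phi[l], p)
              for n in range(3) for p in range(3) for k in range(3))
          - sum(d(G[i], n) * A(n, l) for n in range(3))
          for l in range(3)]
    rhs = sum(d(R[n], n) for n in range(3)) + sum(G1[l] * comp(d(theta, l)) for l in range(3))
    assert sp.simplify(lhs - rhs) == 0
print("one-step inverse divergence identity verified")
```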
Proposition A.17 allows us to obtain the following result, which is the main
conclusion of this section.
###### Proposition A.18 (Inverse divergence with error term).
Fix an incompressible vector field $v$ and denote its material derivative by
$D_{t}=\partial_{t}+v\cdot\nabla$. Fix integers $N_{*}\geq M_{*}\geq 1$. Also
fix ${\mathsf{N}_{\rm dec}},{\mathsf{d}}\geq 1$ such that
$N_{*}-{\mathsf{d}}\geq 2{\mathsf{N}_{\rm dec}}+4$.
Let $G$ be a vector field and assume there exists a constant
$\mathcal{C}_{G}>0$ and parameters $\lambda,\nu,\widetilde{\nu}\geq 1$ such that
$\displaystyle\left\|D^{N}D_{t}^{M}G\right\|_{L^{1}}\lesssim\mathcal{C}_{G}\lambda^{N}\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)$
(A.66)
for all $N\leq N_{*}$ and $M\leq M_{*}$.
Let $\Phi$ be a volume preserving transformation of $\mathbb{T}^{3}$, such
that
$D_{t}\Phi=0\,\qquad\mbox{and}\qquad\left\|\nabla\Phi-\mathrm{Id}\right\|_{L^{\infty}(\mathrm{supp\,}G)}\leq\nicefrac{{1}}{{2}}\,.$
Denote by $\Phi^{-1}$ the inverse of the flow $\Phi$, which is the identity at
a time slice which intersects the support of $G$. Assume that the velocity
field $v$ and the flow functions $\Phi$ and $\Phi^{-1}$ satisfy the following
bounds
$\displaystyle\left\|D^{N+1}\Phi\right\|_{L^{\infty}(\mathrm{supp\,}G)}+\left\|D^{N+1}\Phi^{-1}\right\|_{L^{\infty}(\mathrm{supp\,}G)}$
$\displaystyle\lesssim\lambda^{\prime N}$ (A.67)
$\displaystyle\left\|D^{N}D_{t}^{M}Dv\right\|_{L^{\infty}(\mathrm{supp\,}G)}$
$\displaystyle\lesssim\nu\lambda^{\prime
N}\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)\,,$ (A.68)
for all $N\leq N_{*}$, $M\leq M_{*}$, and some $\lambda^{\prime}>0$.
Lastly, let $\varrho,\vartheta\colon\mathbb{T}^{3}\to\mathbb{R}$ be two zero
mean functions with the following properties:
1. (i)
with ${\mathsf{d}}$ as fixed above, there exists a parameter $\zeta\geq 1$ such that
$\varrho(x)=\zeta^{-2{\mathsf{d}}}\Delta^{\mathsf{d}}\vartheta(x)$
2. (ii)
there exists a parameter $\mu\geq 1$ such that $\varrho$ and $\vartheta$ are
$(\nicefrac{{\mathbb{T}}}{{\mu}})^{3}$-periodic
3. (iii)
there exist parameters $\Lambda\geq\zeta$ and $\mathcal{C}_{*}\geq 1$ such
that
$\displaystyle\left\|D^{N}\varrho\right\|_{L^{1}}\lesssim\mathcal{C}_{*}\Lambda^{N}\qquad\mbox{and}\qquad\left\|D^{N}\vartheta\right\|_{L^{1}}\lesssim\mathcal{C}_{*}\mathcal{M}\left(N,2{\mathsf{d}},\zeta,\Lambda\right)$
(A.69)
for all $0\leq N\leq\mathsf{N}_{\rm fin}$, except for the case
$N=2{\mathsf{d}}$ when the Calderón-Zygmund inequality fails. In this
exceptional case, the second inequality in (A.69) is allowed to be weaker by a
factor of $\Lambda^{\alpha}$, for an arbitrary $\alpha\in(0,1]$; that is, we
only require that
$\left\|D^{2{\mathsf{d}}}\vartheta\right\|_{L^{1}}\lesssim\mathcal{C}_{*}\Lambda^{\alpha}\zeta^{2{\mathsf{d}}}$.
If the above parameters satisfy
$\displaystyle\lambda^{\prime}\leq\lambda\ll\mu\leq\zeta\leq\Lambda\,,$ (A.70)
where by the second inequality in (A.70) we mean that
$\displaystyle\Lambda^{4}\left(\frac{\mu}{2\pi\sqrt{3}\lambda}\right)^{-{\mathsf{N}_{\rm
dec}}}\leq 1\,,$ (A.71)
then, we have that
$\displaystyle G\;\varrho\circ\Phi$
$\displaystyle=\mathrm{div\,}\mathring{R}+\nabla
P+E=:\mathrm{div\,}\left(\mathcal{H}\left(G\varrho\circ\Phi\right)\right)+\nabla
P+E\,,$ (A.72)
where the traceless symmetric stress
$\mathring{R}=\mathcal{H}(G\varrho\circ\Phi)$ and the scalar pressure $P$ are
supported in $\mathrm{supp\,}G$, and for any fixed $\alpha\in(0,1)$ they
satisfy
$\displaystyle\left\|D^{N}D_{t}^{M}\mathring{R}\right\|_{L^{1}}+\left\|D^{N}D_{t}^{M}P\right\|_{L^{1}}$
$\displaystyle\lesssim\Lambda^{\alpha}\mathcal{C}_{G}\mathcal{C}_{*}\zeta^{-1}\mathcal{M}\left(N,1,\zeta,\Lambda\right)\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)$
(A.73)
for all $N\leq N_{*}-{\mathsf{d}}$ and $M\leq M_{*}$. The implicit constants
depend on $N,M,\alpha$, but not on $G$, $\varrho$, or $\Phi$. Lastly, for $N\leq
N_{*}-{\mathsf{d}}$ and $M\leq M_{*}$ the error term $E$ in (A.72) satisfies
$\displaystyle\left\|D^{N}D_{t}^{M}E\right\|_{L^{1}}\lesssim\mathcal{C}_{G}\mathcal{C}_{*}\Lambda^{\alpha}\lambda^{\mathsf{d}}\zeta^{-{\mathsf{d}}}\Lambda^{N}\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)\,.$
(A.74)
We emphasize that the range of $M$ in (A.73) and (A.74) is exactly the same as
the one in (A.66), while the range of permissible values for $N$ shrank from
$N_{*}$ to $N_{*}-{\mathsf{d}}$.
Lastly, let $N_{\circ},M_{\circ}$ be integers such that $1\leq M_{\circ}\leq
N_{\circ}\leq M_{*}/2$. Assume that in addition to the bound (A.68) we have
the following global lossy estimates
$\displaystyle\left\|D^{N}\partial_{t}^{M}v\right\|_{L^{\infty}(\mathbb{T}^{3})}\lesssim\mathcal{C}_{v}\widetilde{\lambda}_{q}^{N}\widetilde{\tau}_{q}^{-M}$
(A.75)
for all $M\leq M_{\circ}$ and $N+M\leq N_{\circ}+M_{\circ}$, where
$\displaystyle\mathcal{C}_{v}\widetilde{\lambda}_{q}\lesssim\widetilde{\tau}_{q}^{-1},\qquad\mbox{and}\qquad\lambda^{\prime}\leq\widetilde{\lambda}_{q}\leq\Lambda\leq\lambda_{q+1}\,.$
(A.76)
If ${\mathsf{d}}$ is chosen large enough so that
$\displaystyle\mathcal{C}_{G}\mathcal{C}_{*}\Lambda\left(\frac{\lambda}{\zeta}\right)^{{\mathsf{d}}-1}\left(1+\frac{\max\\{\widetilde{\tau}_{q}^{-1},\widetilde{\nu},\mathcal{C}_{v}\Lambda\\}}{\tau_{q}^{-1}}\right)^{M_{\circ}}\leq\frac{\delta_{q+2}}{\lambda_{q+1}^{5}}\,,$
(A.77)
then we may write
$\displaystyle
E=\mathrm{div\,}\mathring{R}_{\mathrm{nonlocal}}+\fint_{\mathbb{T}^{3}}G\varrho\circ\Phi
dx=:\mathrm{div\,}\left(\mathcal{R}^{*}(G\varrho\circ\Phi)\right)+\fint_{\mathbb{T}^{3}}G\varrho\circ\Phi
dx\,,$ (A.78)
where $\mathring{R}_{\mathrm{nonlocal}}=\mathcal{R}^{*}(G\varrho\circ\Phi)$ is
a traceless symmetric stress which satisfies
$\displaystyle\left\|D^{N}D_{t}^{M}\mathring{R}_{\mathrm{nonlocal}}\right\|_{L^{1}}\leq\frac{\delta_{q+2}}{\lambda_{q+1}^{5}}\lambda_{q+1}^{N}\tau_{q}^{-M}$
(A.79)
for $N\leq N_{\circ}$ and $M\leq M_{\circ}$.
Before turning to the proof of Proposition A.18, let us make three remarks. First,
we highlight certain parameter values which will occur commonly in
applications of the proposition. Second, we comment on a technical aspect of
the application of the Proposition in Section 8.3. Finally, we comment on the
assumptions (i)–(iii) and (A.71) and (A.77) for the functions $\varrho$ and
$\vartheta$, which in applications are related to the transversal densities of
the pipe flows.
###### Remark A.19.
Frequently, $G$ will come with derivative bounds which are satisfied for
$N+M\leq N^{\sharp}$. In this case, we set
$N_{*}=M_{*}=\nicefrac{{N^{\sharp}}}{{2}}$, so that (A.66) is satisfied. The
bounds in (A.67) and (A.68) will be true (due to Corollary 6.27 and estimate
(6.60)) for much higher order derivatives than $\nicefrac{{N^{\sharp}}}{{2}}$,
and so we ignore them. The bounds in (A.69) are given by construction in
Proposition 4.4. Then the bounds (A.73) and (A.74) are satisfied for
$N\leq\nicefrac{{N^{\sharp}}}{{2}}-{\mathsf{d}}$ and
$M\leq\nicefrac{{N^{\sharp}}}{{2}}$, and in particular for
$N+M\leq\nicefrac{{N^{\sharp}}}{{2}}-{\mathsf{d}}$. In (A.75) we will then set
$N_{\circ}=M_{\circ}\leq\nicefrac{{N^{\sharp}}}{{4}}$, which in practice will
give $N_{\circ}=M_{\circ}=3\mathsf{N}_{\textnormal{ind,v}}$. Arguing in the
same way used to produce the bound (5.18) shows that for
$N+M\leq\mathsf{N}_{\rm fin}$,
$\left\|D^{N}\partial_{t}^{M}v_{\ell_{q}}\right\|_{L^{\infty}}\lesssim\left(\lambda_{q}^{4}\delta_{q}^{\nicefrac{{1}}{{2}}}\right)\widetilde{\lambda}_{q}^{N}\widetilde{\tau}_{q}^{-M}$
(A.80)
and so (A.75) is satisfied with
$\mathcal{C}_{v}=\lambda_{q}^{4}\delta_{q}^{\nicefrac{{1}}{{2}}}$ up to
$N+M\leq 2\mathsf{N}_{\rm fin}$ (which will in fact be far beyond anything
required for the inverse divergence). The inequalities in (A.76) follow from
(9.43), (9.39), and the definitions of
$\lambda^{\prime}=\widetilde{\lambda}_{q}$ and $\Lambda=\lambda_{q+1}$. In
applications, $\widetilde{\nu}=\widetilde{\tau}_{q}^{-1}\Gamma_{q+1}^{-1}$, so
that from (9.39) and (9.43), we have that
$\max\\{\widetilde{\tau}_{q}^{-1},\widetilde{\nu},\mathcal{C}_{v}\Lambda\\}\leq\tau_{q}^{-1}\widetilde{\lambda}_{q}^{3}\widetilde{\lambda}_{q+1}\leq\tau_{q}^{-1}\lambda_{q+1}^{4}\,,$
which holds as soon as $\varepsilon_{\Gamma}$ is taken to be sufficiently
small. Then, (A.77) will follow from (9.55). Finally, (A.79) will hold for all
$N,M\leq\nicefrac{{N^{\sharp}}}{{4}}$, which will be taken larger than
$3\mathsf{N}_{\textnormal{ind,v}}$. In summary, if (A.66) is known to hold for
$N+M\leq N^{\sharp}$, then (A.73) holds for
$N\leq\nicefrac{{N^{\sharp}}}{{2}}-{\mathsf{d}}$ and
$M\leq\nicefrac{{N^{\sharp}}}{{2}}$, while (A.79) is valid for
$N,M\leq\nicefrac{{N^{\sharp}}}{{4}}$.
###### Remark A.20.
In the identification of the error terms in Section 8.3, we apply Proposition
A.18 to write
$G\varrho\circ\Phi=\mathrm{div\,}\left(\mathcal{H}(G\varrho\circ\Phi)\right)+\nabla
P+\mathrm{div\,}\left(\mathcal{R}^{*}\left(G\varrho\circ\Phi\right)\right)+\fint_{\mathbb{T}^{3}}G\varrho\circ\Phi
dx.$
The estimates on $G$, $\varrho$, and $\Phi$, and hence on the right-hand side of
the above equality, will be checked in later sections. We emphasize that
$\mathcal{H}$ is a _local_ operator and is thus well-suited to working with
estimates on the support of a cutoff function. Conversely, $\mathcal{R}^{*}$
is non-local but will always produce extremely small errors which can be
absorbed into $\mathring{R}_{q+1}$ and for which the cutoff functions are not
relevant.
###### Remark A.21.
We consider examples of functions $\vartheta$ and $\varrho$ with which
Proposition A.18 is used.
1. (a)
This is the case corresponding to the density of a pipe flow. Recalling the
construction of pipe flows from Proposition 4.4, we let
$\varrho=\varrho_{\xi,\lambda,r}^{k}$ and
$\vartheta=\vartheta_{\xi,\lambda,r}^{k}$. Set $\zeta=\Lambda=\lambda$ (where
the $\lambda$ refers to Proposition 4.4, not the $\lambda$ from Proposition
A.18) and $\mu=\lambda r$. To verify (i), we appeal to item (1) from
Proposition 4.4 and our choice of $\Lambda$ and $\mu$. The periodicity
requirement in (ii) follows from item (2) from Proposition 4.4 and referring
back to item (1) from Proposition 4.3. Next, (A.69) is satisfied with
$\mathcal{C}_{\ast}=r$ using (4.11). Finally, (A.71) and (A.77) will follow
from a large choice of ${\mathsf{N}_{\rm dec}}$ and ${\mathsf{d}}$ and the fact
that our choice of $\lambda$ can always be related to $\zeta$ and $\mu$ by a
power strictly less than $1$ (see (9.48) and (9.55)).
2. (b)
This is the case corresponding to the Littlewood-Paley projection for the
square of the density of a pipe flow. Fix $1\leq\mu\leq\zeta<\Lambda$, and a
constant $\mathcal{C}_{*}>0$. Let $\eta(x)$ be any
$(\nicefrac{{\mathbb{T}}}{{\mu}})^{3}$-periodic function (which need not have
zero-mean), with
$\left\|\eta\right\|_{L^{p}(\mathbb{T}^{3})}\leq\mathcal{C}_{*}$. In
applications, we shall refer to (4.15) from Proposition 4.4 and set
$\eta=\left(\varrho_{\xi,\lambda,r}^{k}\right)^{2}$ and $\mu=\lambda r$. This
means that we may write $\eta(x)=\eta_{\mu}(\mu x)$ where $\eta_{\mu}$ is
$\mathbb{T}^{3}$-periodic, with
$\left\|\eta_{\mu}\right\|_{L^{1}(\mathbb{T}^{3})}\leq\mathcal{C}_{*}$.
Following (4.15) from Proposition 4.4 with $\lambda_{1}=\zeta$,
$\lambda_{2}=\Lambda$, we may define
$\varrho(x)=\left(\mathbb{P}_{[\zeta,\Lambda]}\eta\right)(x)=\left(\mathbb{P}_{\left[\frac{\zeta}{\mu},\frac{\Lambda}{\mu}\right]}\eta_{\mu}\right)(\mu
x)\,,$
a function which is $(\nicefrac{{\mathbb{T}}}{{\mu}})^{3}$-periodic and has
zero mean (since $\zeta\geq\mu>0$), and clearly
$\left\|D^{N}\varrho\right\|_{L^{1}(\mathbb{T}^{3})}\leq\mathcal{C}_{*}\Lambda^{N}.$
We now define the associated function $\vartheta$ by first defining the zero
mean $\mathbb{T}^{3}$-periodic function
$\vartheta_{\mu}=\left(\frac{\zeta}{\mu}\right)^{2{\mathsf{d}}}\Delta^{-{\mathsf{d}}}\mathbb{P}_{\left[\frac{\zeta}{\mu},\frac{\Lambda}{\mu}\right]}\eta_{\mu}\,,$
where the negative powers of the Laplacian are defined simply as a Fourier
multiplier (since the periodic function we apply it to has zero mean). Then we
let
$\vartheta(x)=\vartheta_{\mu}(\mu x)$
which has zero mean, is $(\nicefrac{{\mathbb{T}}}{{\mu}})^{3}$-periodic, and
clearly satisfies $\Delta^{\mathsf{d}}\vartheta=\zeta^{2{\mathsf{d}}}\varrho$,
as required. It only remains to estimate the $\dot{W}^{N,1}$ norms of
$\vartheta$, which up to paying a factor of $\mu^{N}$ is equivalent to
estimating the $\dot{W}^{N,1}$ norms of $\vartheta_{\mu}$. When $0\leq
N<2{\mathsf{d}}$, the operator
$\displaystyle
D^{N}\Delta^{-{\mathsf{d}}}\mathbb{P}_{\left[\frac{\zeta}{\mu},\frac{\Lambda}{\mu}\right]}$
is a bounded operator on $L^{1}$, whose operator norm is
$\lesssim(\nicefrac{{\zeta}}{{\mu}})^{N-2{\mathsf{d}}}$. This may be verified
via a standard Littlewood-Paley argument. The exceptional case
$N=2{\mathsf{d}}$ leads to a logarithmic loss since there are roughly
$\log(\Lambda/\mu)$-many Littlewood-Paley shells to estimate; we absorb this
loss into a factor of $\Lambda^{\alpha}$, with $\alpha>0$ arbitrarily small.
Since $\left\|\eta_{\mu}\right\|_{L^{1}}\leq\mathcal{C}_{*}$, the second
estimate in (iii) above clearly follows, at least when $N\leq 2{\mathsf{d}}$.
The case $N>2{\mathsf{d}}$ follows similarly, except that now
$D^{N}\Delta^{-{\mathsf{d}}}$ is a positive order operator, and thus the
$L^{1}$ operator norm of
$D^{N}\Delta^{-{\mathsf{d}}}\mathbb{P}_{\left[\frac{\zeta}{\mu},\frac{\Lambda}{\mu}\right]}$
is bounded by $\approx(\nicefrac{{\Lambda}}{{\mu}})^{N-2{\mathsf{d}}}$. We
remark that as in the previous case, (A.71) and (A.77) will follow from sufficiently large
choices of ${\mathsf{N}_{\rm dec}}$ and ${\mathsf{d}}$ and the fact that our
choice of $\lambda$ can always be related to $\zeta$ and $\mu$ by a power
strictly less than $1$.
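The construction in item (b) can be mimicked numerically in one dimension with FFTs. A minimal numpy sketch, with hypothetical parameters $\mu\leq\zeta\leq\Lambda$ and a toy density $\eta$, checking that the resulting $\vartheta$ has zero mean, is $(\nicefrac{{\mathbb{T}}}{{\mu}})$-periodic, and satisfies $\varrho=\zeta^{-2{\mathsf{d}}}\Delta^{\mathsf{d}}\vartheta$:

```python
import numpy as np

n, dd = 512, 3                         # grid size and the exponent d
mu, zeta, Lam = 4, 16, 64              # hypothetical parameters, mu <= zeta <= Lam
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)       # integer Fourier frequencies

# A (T/mu)-periodic sample "density": only modes that are multiples of mu
eta = 1.0 + np.cos(mu * x) + 0.5 * np.cos(8 * mu * x)

def project(f, lo, hi):
    """Sharp Fourier projection onto frequencies lo <= |k| <= hi."""
    fh = np.fft.fft(f)
    fh[(np.abs(k) < lo) | (np.abs(k) > hi)] = 0.0
    return np.real(np.fft.ifft(fh))

rho = project(eta, zeta, Lam)          # zero mean since zeta > 0

# theta = zeta^{2d} Delta^{-d} rho, via the Fourier multiplier (-|k|^2)^{-d}
rho_hat = np.fft.fft(rho)
mult = np.zeros(n)
nz = np.abs(k) > 0
mult[nz] = (-(k[nz] ** 2.0)) ** (-dd)
theta = np.real(np.fft.ifft(zeta ** (2 * dd) * mult * rho_hat))

# Check rho = zeta^{-2d} Delta^d theta, the zero mean, and the periodicity
lap_d_theta = np.real(np.fft.ifft((-(k ** 2.0)) ** dd * np.fft.fft(theta)))
assert np.allclose(zeta ** (-2 * dd) * lap_d_theta, rho)
assert abs(rho.mean()) < 1e-10
assert np.allclose(rho, np.roll(rho, n // mu))
print("construction of theta from rho verified")
```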
###### Proof of Proposition A.18.
Since $D_{t}\Phi\equiv 0$, we have that
$D^{N}D_{t}^{M}\nabla\Phi=D^{N}[D_{t}^{M},\nabla]\Phi$. We may now appeal to
Lemma A.14, more precisely, to Remark A.16. Let $\Omega=\mathrm{supp\,}G$, and
$f=\Phi$, so that (A.67) implies that (A.55) holds with $\mathcal{C}_{f}=1$,
$\lambda_{f}=\widetilde{\lambda}_{f}=\lambda^{\prime}$, and
$\mu_{f}=\widetilde{\mu}_{f}=1$ (in fact, whenever $M\geq 1$ we may replace
the right side of (A.55) by $0$). Moreover, (A.68) implies that (A.50) holds
with $\mathcal{C}_{v}=\nu/\lambda^{\prime}$,
$\lambda_{v}=\widetilde{\lambda}_{v}=\lambda^{\prime}$, $N_{t}=M_{t}$,
$\mu_{v}=\nu$ and $\widetilde{\mu}_{v}=\widetilde{\nu}$. We deduce from (A.56)
that
$\displaystyle\left\|D^{N^{\prime\prime}}D_{t}^{M}D^{N^{\prime}}D\Phi\right\|_{L^{\infty}(\mathrm{supp\,}G)}\lesssim\lambda^{\prime
N^{\prime}+N^{\prime\prime}}\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)$
(A.81)
whenever $N^{\prime}+N^{\prime\prime}\leq N_{*}$ and $M\leq M_{*}$. Similarly,
we use Lemma A.14 with $f=G$, so that due to (A.66) we know that (A.51) holds
with $\mathcal{C}_{f}=\mathcal{C}_{G}$,
$\lambda_{f}=\widetilde{\lambda}_{f}=\lambda$, $\mu_{f}=\nu$,
$\widetilde{\mu}_{f}=\widetilde{\nu}$, and $N_{t}=M_{t}$. With
$\Omega=\mathrm{supp\,}G$, since $\lambda^{\prime}\leq\lambda$, as before we
have that (A.68) implies that (A.50) holds with $\mathcal{C}_{v}=\nu/\lambda$,
$\lambda_{v}=\widetilde{\lambda}_{v}=\lambda$, $N_{t}=M_{t}$, $\mu_{v}=\nu$
and $\widetilde{\mu}_{v}=\widetilde{\nu}$. Therefore, from (A.54) we obtain
that
$\displaystyle\left\|D^{N^{\prime\prime}}D_{t}^{M}D^{N^{\prime}}G\right\|_{L^{1}}\lesssim\mathcal{C}_{G}\lambda^{N^{\prime}+N^{\prime\prime}}\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)$
(A.82)
whenever $N^{\prime}+N^{\prime\prime}\leq N_{*}$ and $M\leq M_{*}$. With
(A.81) and (A.82), we turn to the proof of (A.73).
Instead of defining $\mathring{R}$ and $P$ separately, we shall simply
construct a symmetric stress $R$ with a prescribed divergence, and use the
convention that $P=\frac{1}{3}\mathrm{tr\,}(R)$ and
$\mathring{R}=R-\frac{1}{3}\mathrm{tr\,}(R)\mathrm{Id}$. The construction is based on
iterating Proposition A.17 ${\mathsf{d}}$ times. For notational purposes, let
$\varrho_{(0)}=\varrho$, and for $1\leq k\leq{\mathsf{d}}$ we let
$\varrho_{(k)}=(\zeta^{-2}\Delta)^{{\mathsf{d}}-k}\vartheta$. Then
$\varrho_{(k-1)}=\zeta^{-2}\Delta\varrho_{(k)}$, and
$\varrho_{({\mathsf{d}})}=\vartheta$. We also define $G_{(0)}=G$.
Since $\varrho_{(0)}=\zeta^{-2}\Delta\varrho_{(1)}$, we deduce from Proposition
A.17, identities (A.62)–(A.65) that
$\displaystyle
G_{(0)}^{i}\varrho_{(0)}\circ\Phi=\partial_{n}R_{(0)}^{in}+G_{(1)}^{i\ell}(\zeta^{-1}\partial_{\ell}\varrho_{(1)})\circ\Phi$
(A.83)
where the symmetric stress $R_{(0)}$ is given by
$\displaystyle
R_{(0)}^{in}=\zeta^{-1}\underbrace{\left(G_{(0)}^{i}A^{n}_{\ell}+G_{(0)}^{n}A^{i}_{\ell}-A^{i}_{k}A^{n}_{k}G_{(0)}^{p}\partial_{p}\Phi^{\ell}\right)}_{=:S_{(0)}^{in\ell}}(\zeta^{-1}\partial_{\ell}\varrho_{(1)})\circ\Phi\,,$
(A.84)
the error terms are computed as
$\displaystyle
G_{(1)}^{i\ell}=\zeta^{-1}\left(\partial_{n}\left(G_{(0)}^{p}A^{i}_{k}A^{n}_{k}-G_{(0)}^{n}A^{i}_{k}A^{p}_{k}\right)\partial_{p}\Phi^{\ell}-\partial_{n}G_{(0)}^{i}A^{n}_{\ell}\right)\,,$
(A.85)
where as before we denote $(\nabla\Phi)^{-1}=A$. We first show that the
symmetric stress $R_{(0)}$ defined in (A.84) satisfies the estimate (A.73).
First, we note that the $\zeta^{-1}$ factor has already been accounted for
explicitly in (A.84). Second, we note that since $D_{t}\Phi=0$, material
derivatives may only land on the components of the $3$-tensor $S_{(0)}$.
Third, the function $\zeta^{-1}D\varrho_{(1)}$ has zero mean, is
$(\nicefrac{{\mathbb{T}}}{{\mu}})^{3}$ periodic, and satisfies
$\displaystyle\left\|D^{N}(\zeta^{-1}D\varrho_{(1)})\right\|_{L^{1}}\lesssim\mathcal{C}_{*}\mathcal{M}\left(N,1,\zeta,\Lambda\right)$
(A.86)
for $1\neq N\leq\mathsf{N}_{\rm fin}$, in view of (A.69). For $N=1$, the above
estimate incurs a logarithmic loss of $\Lambda$, which we can absorb with
$\Lambda^{\alpha}$ for any $\alpha>0$ to produce the estimate
$\displaystyle\left\|D(\zeta^{-1}D\varrho_{(1)})\right\|_{L^{1}}\lesssim\Lambda^{\alpha}\mathcal{C}_{*}\mathcal{M}\left(1,1,\zeta,\Lambda\right).$
(A.87)
The implicit constants depend on $\alpha$ and degenerate as $\alpha\rightarrow
0$. Fourth, the components of the $3$-tensor $S_{(0)}$ are sums of terms of
two kinds: $G_{(0)}\otimes A$ is a linear function of $G_{(0)}$ multiplied by
a homogeneous quadratic polynomial in $D\Phi$, while $G_{(0)}\otimes A\otimes
A\otimes D\Phi$ is a linear function of $G_{(0)}$ multiplied by a homogeneous
polynomial of degree $5$ in the entries of $D\Phi$. In particular, due to our
assumption (A.66) and the previously established bound (A.81), upon applying
the Leibniz rule and using that $\lambda^{\prime}\leq\lambda$, we obtain that
$\displaystyle\left\|D^{N}D_{t}^{M}S_{(0)}\right\|_{L^{1}}\lesssim\mathcal{C}_{G}\lambda^{N}\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)$
(A.88)
for $N\leq N_{*}$ and $M\leq M_{*}$. Having collected these estimates, the
$L^{1}$ norm of the space-material derivatives of $R_{(0)}$ is obtained from
Lemma A.7. As dictated by (A.84) we apply this lemma with
$f=\zeta^{-1}S_{(0)}$, and $\varphi=\zeta^{-1}\nabla\varrho_{(1)}$. Due to
(A.88), the bound (A.30) holds with
$\mathcal{C}_{f}=\mathcal{C}_{G}\zeta^{-1}$. Due to (A.67) and
$\lambda^{\prime}\leq\lambda$, the assumptions (A.31) and (A.32) are verified.
Next, due to (A.86) and (A.87), the assumption (A.33) is verified, with
$N_{x}=1$, $\widetilde{\zeta}=\Lambda$, and
$\mathcal{C}_{\varphi}=\mathcal{C}_{*}\Lambda^{\alpha}$. Lastly, assumption
(A.71) verifies the condition (A.34) of Lemma A.7. Thus, applying estimate
(A.36) we deduce that
$\displaystyle\left\|D^{N}D_{t}^{M}R_{(0)}\right\|_{L^{1}}\lesssim\mathcal{C}_{G}\mathcal{C}_{*}\Lambda^{\alpha}\zeta^{-1}\mathcal{M}\left(N,1,\zeta,\Lambda\right)\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)$
(A.89)
for all $N\leq N_{*}$ and $M\leq M_{*}$, which is precisely the bound stated
in (A.73). Here we have used that $N_{*}\geq 2{\mathsf{N}_{\rm dec}}+4$, which
was required due to (A.35).
Next we analyze the second term in (A.83). The point is that this term has the
same structure as what we started with; for every fixed $\ell\in\\{1,2,3\\}$,
we may replace $G_{(0)}^{i}$ by $G_{(1)}^{i\ell}$, and we replace
$\varrho_{(0)}$ with $\zeta^{-1}\partial_{\ell}\varrho_{(1)}$; the only
difference is that the bounds for this term are better. Indeed, from (A.85) we
see that the $2$-tensor $G_{(1)}$ is the sum of entries in
$\zeta^{-1}DG_{(0)}\otimes A$, $\zeta^{-1}DG_{(0)}\otimes A\otimes A\otimes
D\Phi$, and $\zeta^{-1}G_{(0)}\otimes DA\otimes A\otimes D\Phi$. Recalling
that the entries of $A$ are homogeneous quadratic polynomials in the entries
of $D\Phi$, from (A.81), (A.82), $\lambda^{\prime}\leq\lambda$, and the
Leibniz rule we deduce that
$\displaystyle\left\|D^{N^{\prime\prime}}D_{t}^{M}D^{N^{\prime}}G_{(1)}^{i\ell}\right\|_{L^{1}}\lesssim\mathcal{C}_{G}(\lambda\zeta^{-1})\lambda^{N^{\prime}+N^{\prime\prime}}\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)\,.$
(A.90)
for $N^{\prime}+N^{\prime\prime}\leq N_{*}-1$ and $M\leq M_{*}$. Comparing the
above estimate with (A.82), we notice that since $\lambda\zeta^{-1}\ll 1$,
the bounds for $G_{(1)}$ are indeed better than those for $G_{(0)}$; the only
caveat is that the bounds hold for one fewer spatial derivative. In order to
iterate Proposition A.17, for simplicity we ignore the $\ell$ index: since the
argument works in exactly the same way for all values of $\ell$, we write
$G_{(1)}^{i\ell}$ simply as $G_{(1)}^{i}$, and $\partial_{\ell}\varrho_{(1)}$
as $D\varrho_{(1)}$. We start by noting that
$\zeta^{-1}D\varrho_{(1)}=\zeta^{-2}\Delta(\zeta^{-1}D\varrho_{(2)})$. Thus,
using identities (A.62)–(A.65) we obtain that the second term in (A.83) may be
written as
$\displaystyle
G_{(1)}^{i}(\zeta^{-1}D\varrho_{(1)})\circ\Phi=\partial_{n}R_{(1)}^{in}+G_{(2)}^{i\ell}(\zeta^{-2}\partial_{\ell}D\varrho_{(2)})\circ\Phi$
(A.91)
where the symmetric stress $R_{(1)}$ is given by
$\displaystyle
R_{(1)}^{in}=\zeta^{-1}\underbrace{\left(G_{(1)}^{i}A^{n}_{\ell}+G_{(1)}^{n}A^{i}_{\ell}-A^{i}_{k}A^{n}_{k}G_{(1)}^{p}\partial_{p}\Phi^{\ell}\right)}_{=:S_{(1)}^{in\ell}}(\zeta^{-2}\partial_{\ell}D\varrho_{(2)})\circ\Phi\,,$
(A.92)
the error terms are computed as
$\displaystyle
G_{(2)}^{i\ell}=\zeta^{-1}\left(\partial_{n}\left(G_{(1)}^{p}A^{i}_{k}A^{n}_{k}-G_{(1)}^{n}A^{i}_{k}A^{p}_{k}\right)\partial_{p}\Phi^{\ell}-\partial_{n}G_{(1)}^{i}A^{n}_{\ell}\right)\,.$
(A.93)
We emphasize that by combining (A.85) with (A.92) and (A.93), we may compute
the $3$-tensor $S_{(1)}$ and the $2$-tensor $G_{(2)}$ explicitly in terms of
just space derivatives of $G$ and $D\Phi$. Using a similar argument to the one
which was used to prove (A.88), but by appealing to (A.90) instead of (A.82),
we deduce that for $N\leq N_{*}-1$ and $M\leq M_{*}$,
$\displaystyle\left\|D^{N}D_{t}^{M}S_{(1)}\right\|_{L^{1}}\lesssim\mathcal{C}_{G}(\lambda\zeta^{-1})\lambda^{N}\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)\,.$
(A.94)
Using the bound (A.94) and the estimate
$\displaystyle\left\|D^{N}(\zeta^{-2}\partial_{\ell}D\varrho_{(2)})\right\|_{L^{1}}\lesssim\mathcal{C}_{*}\mathcal{M}\left(N,2,\zeta,\Lambda\right)\,,$
which is a consequence of (A.69) — in the case $N=2$ as before we may weaken
the bound by a factor of $\Lambda^{\alpha}$ — we may deduce from Lemma A.7
that
$\displaystyle\left\|D^{N}D_{t}^{M}R_{(1)}\right\|_{L^{1}}\lesssim\mathcal{C}_{G}\mathcal{C}_{*}\Lambda^{\alpha}(\lambda\zeta^{-2})\mathcal{M}\left(N,2,\zeta,\Lambda\right)\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)$
(A.95)
for $N\leq N_{*}-1$ and $M\leq M_{*}$, which is an estimate that is even
better than (A.89), since $\lambda\ll\zeta\leq\Lambda$. This shows that the
first term in (A.91) satisfies the expected bound. The second term in (A.91)
may in turn be shown to satisfy
$\displaystyle\left\|D^{N^{\prime\prime}}D_{t}^{M}D^{N^{\prime}}G_{(2)}^{i\ell}\right\|_{L^{1}}\lesssim\mathcal{C}_{G}(\lambda^{2}\zeta^{-2})\lambda^{N^{\prime}+N^{\prime\prime}}\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)\,.$
(A.96)
for $N^{\prime}+N^{\prime\prime}\leq N_{*}-2$ and $M\leq M_{*}$, and it is
clear that this procedure may be iterated ${\mathsf{d}}$ times.
Without spelling out these details, the iteration procedure described above
produces
$\displaystyle
G_{(0)}\varrho_{(0)}\circ\Phi=\sum_{k=0}^{{\mathsf{d}}-1}\mathrm{div\,}R_{(k)}+\underbrace{G_{({\mathsf{d}})}\otimes(\zeta^{-{\mathsf{d}}}D^{\mathsf{d}}\vartheta)\circ\Phi}_{=:E}$
(A.97)
where each of the ${\mathsf{d}}$ symmetric stresses satisfies
$\displaystyle\left\|D^{N}D_{t}^{M}R_{(k)}\right\|_{L^{1}}\lesssim\mathcal{C}_{G}\mathcal{C}_{*}\Lambda^{\alpha}(\lambda^{k}\zeta^{-k+1})\mathcal{M}\left(N,1,\zeta,\Lambda\right)\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)\,,$
(A.98)
for $N\leq N_{*}-k$ and $M\leq M_{*}$. Each component of the error tensor
$G_{({\mathsf{d}})}$ in (A.97) is recursively computable solely in terms of
$G$ and $D\Phi$ and their spatial derivatives, and satisfies
$\displaystyle\left\|D^{N^{\prime\prime}}D_{t}^{M}D^{N^{\prime}}G_{({\mathsf{d}})}\right\|_{L^{1}}\lesssim\mathcal{C}_{G}(\lambda^{\mathsf{d}}\zeta^{-{\mathsf{d}}})\lambda^{N^{\prime}+N^{\prime\prime}}\mathcal{M}\left(M,M_{t},\nu,\widetilde{\nu}\right)$
(A.99)
for $N^{\prime}+N^{\prime\prime}\leq N_{*}-{\mathsf{d}}$ and $M\leq M_{*}$.
Lastly, since
$\left\|D^{N}(\zeta^{-{\mathsf{d}}}D^{\mathsf{d}}\vartheta)\right\|_{L^{1}}\lesssim\mathcal{C}_{*}\Lambda^{\alpha}\mathcal{M}\left(N,{\mathsf{d}},\zeta,\Lambda\right)$
and $D^{\mathsf{d}}\vartheta$ is
$(\nicefrac{{\mathbb{T}}}{{\mu}})^{3}$-periodic, a final application of Lemma
A.7 combined with (A.99) and the assumption that $N_{*}-{\mathsf{d}}\geq
2{\mathsf{N}_{\rm dec}}+4$, shows that estimate (A.74) holds.
Next, we turn to the proof of (A.78) and (A.79). Recall that $E$ is defined by
the second term in (A.97), and thus $\fint_{\mathbb{T}^{3}}G\varrho\circ\Phi
dx=\fint_{\mathbb{T}^{3}}Edx$. Using the standard nonlocal inverse-divergence
operator
$\displaystyle\mathcal{R}v=\Delta^{-1}\left(\nabla v+(\nabla
v)^{T}\right)-\frac{1}{2}\left(\mathrm{Id}+\nabla\nabla\Delta^{-1}\right)\Delta^{-1}\mathrm{div\,}v\,,$
(A.100)
we may define
$\displaystyle\mathring{R}_{\mathrm{nonlocal}}=\mathcal{R}E\,.$
By the definition of $\mathcal{R}$ we have that
$\mathring{R}_{\mathrm{nonlocal}}$ is traceless, symmetric, and satisfies
$\mathrm{div\,}\mathring{R}_{\mathrm{nonlocal}}=E-\fint_{\mathbb{T}^{3}}Edx$,
i.e. (A.78) holds. In the last equality we have used that $E$ and
$G\varrho\circ\Phi$ have the same mean, since the divergence terms in (A.97) integrate to zero. The idea here is very
simple: because ${\mathsf{d}}$ is very large, the gain of
$(\lambda\zeta^{-1})^{\mathsf{d}}$ present in the $E$ estimate (A.74) is so
strong, that we may simply convert $D$ and $D_{t}$ bounds on $E$ to (terrible)
$\partial_{t}$ bounds, which commute with $\mathcal{R}$, and we can still get
away with it.
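The defining properties of the nonlocal operator in (A.100), namely that $\mathcal{R}v$ is symmetric, traceless, and satisfies $\mathrm{div\,}\mathcal{R}v=v-\fint v$, can be checked symbolically on a single Fourier mode. A minimal sympy sketch, where the test field $v$ and the single-mode inverse Laplacian are toy choices, not from the text:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

def lap(e):
    return sum(sp.diff(e, v, 2) for v in X)

def inv_lap(e):
    # Minimal inverse Laplacian, valid only for single-frequency modes
    # with Delta e = -e (|k| = 1); enough for this toy check.
    if e == 0:
        return sp.S.Zero
    assert sp.simplify(lap(e) + e) == 0
    return -e

v = sp.Matrix([sp.sin(x), 0, 0])        # hypothetical mean-zero test field on T^3

grad_v = sp.Matrix(3, 3, lambda i, j: sp.diff(v[i], X[j]))
div_v = sum(sp.diff(v[i], X[i]) for i in range(3))

# R v = Delta^{-1}(grad v + (grad v)^T) - (1/2)(Id + grad grad Delta^{-1}) Delta^{-1} div v
term1 = (grad_v + grad_v.T).applyfunc(inv_lap)
q = inv_lap(div_v)                                   # Delta^{-1} div v
hess = sp.Matrix(3, 3, lambda i, j: sp.diff(inv_lap(q), X[i], X[j]))
R = term1 - sp.Rational(1, 2) * (q * sp.eye(3) + hess)

assert sp.simplify(R.trace()) == 0                   # traceless
assert R == R.T                                      # symmetric
div_R = sp.Matrix([sum(sp.diff(R[i, j], X[j]) for j in range(3)) for i in range(3)])
assert sp.simplify(div_R - v) == sp.zeros(3, 1)      # div R v = v (v has zero mean)
print("R v is symmetric, traceless, with div R v = v")
```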
Using the formulas (5.17a) and (5.17b) and the assumption (A.75), since $D$
and $\partial_{t}$ commute with $\mathcal{R}$, we deduce that for every $N\leq
N_{\circ}$ and $M\leq M_{\circ}$ we have
$\displaystyle\left\|D^{N}D_{t}^{M}\mathring{R}_{\mathrm{nonlocal}}\right\|_{L^{1}}$
$\displaystyle\lesssim\sum_{\begin{subarray}{c}M^{\prime}\leq M\\\
N^{\prime}+M^{\prime}\leq
N+M\end{subarray}}\sum_{K=0}^{M-M^{\prime}}\mathcal{C}_{v}^{K}\widetilde{\lambda}_{q}^{N-N^{\prime}+K}\widetilde{\tau}_{q}^{-(M-M^{\prime}-K)}\left\|D^{N^{\prime}}\partial_{t}^{M^{\prime}}\mathcal{R}E\right\|_{L^{1}}$
$\displaystyle\lesssim\sum_{\begin{subarray}{c}M^{\prime}\leq M\\\
N^{\prime}+M^{\prime}\leq
N+M\end{subarray}}\widetilde{\lambda}_{q}^{N-N^{\prime}}\widetilde{\tau}_{q}^{-(M-M^{\prime})}\left\|D^{N^{\prime}}\partial_{t}^{M^{\prime}}E\right\|_{L^{p}}$
(A.101)
for any $p\in(1,\nicefrac{{3}}{{2}})$, where in the last inequality we have
used that by assumption
$\mathcal{C}_{v}\widetilde{\lambda}_{q}\lesssim\widetilde{\tau}_{q}^{-1}$, and
that $\mathcal{R}\colon L^{p}(\mathbb{T}^{3})\to L^{1}(\mathbb{T}^{3})$ is a
bounded operator.
Our goal is to appeal to estimate (A.44) in Lemma A.10, with
$A=-v\cdot\nabla$, $B=D_{t}$ and $f=E$ in order to estimate the $L^{p}$ norm
of
$D^{N^{\prime}}\partial_{t}^{M^{\prime}}E=D^{N^{\prime}}(A+B)^{M^{\prime}}E$.
First, we claim that $v$ satisfies the lossy estimate
$\displaystyle\left\|D^{N}D_{t}^{M}v\right\|_{L^{\infty}}\lesssim\mathcal{C}_{v}\widetilde{\lambda}_{q}^{N}\widetilde{\tau}_{q}^{-M}$
(A.102)
for $M\leq M_{\circ}$ and $N+M\leq N_{\circ}+M_{\circ}$. This estimate does
not follow from (A.68), which only provides bounds for $Dv$, instead of $v$.
For this purpose, we apply Lemma A.10 with $f=v$, $B=\partial_{t}$,
$A=v\cdot\nabla$, and $p=\infty$. Using (A.75), and the fact that
$B=\partial_{t}$ and $D$ commute, we obtain that bounds (A.40) and (A.41) hold
with $\mathcal{C}_{f}=\mathcal{C}_{v}$,
$\lambda_{v}=\widetilde{\lambda}_{v}=\lambda_{f}=\widetilde{\lambda}_{f}=\widetilde{\lambda}_{q}$,
and
$\mu_{v}=\widetilde{\mu}_{v}=\mu_{f}=\widetilde{\mu}_{f}=\widetilde{\tau}_{q}^{-1}$.
Since $A+B=D_{t}$, we obtain from the bound (A.44) and our assumption
$\mathcal{C}_{v}\widetilde{\lambda}_{q}\lesssim\widetilde{\tau}_{q}^{-1}$ that
(A.102) holds.
Second, we claim that for any $k\geq 1$ we have
$\displaystyle\left\|\left(\prod_{i=1}^{k}D^{\alpha_{i}}D_{t}^{\beta_{i}}\right)v\right\|_{L^{\infty}(\mathrm{supp\,}G)}\lesssim\mathcal{C}_{v}\widetilde{\lambda}_{q}^{|\alpha|}(\max\\{\widetilde{\nu},\widetilde{\tau}_{q}^{-1}\\})^{|\beta|}$
(A.103)
whenever $|\beta|\leq M_{\circ}$ and $|\alpha|+|\beta|\leq
N_{\circ}+M_{\circ}$. To see this, we use Lemma A.14 with $f=v$, $p=\infty$,
and $\Omega=\mathrm{supp\,}G$. From (A.68) we have that (A.50) holds with
$\mathcal{C}_{v}=\nu/\lambda^{\prime}$,
$\lambda_{v}=\widetilde{\lambda}_{v}=\lambda^{\prime}$, $\mu_{v}=\nu$, and
$\widetilde{\mu}_{v}=\widetilde{\nu}$. On the other hand, from (A.102) we have
that (A.51) holds with $\mathcal{C}_{f}=\mathcal{C}_{v}$,
$\lambda_{f}=\widetilde{\lambda}_{f}=\widetilde{\lambda}_{q}$, and
$\mu_{f}=\widetilde{\mu}_{f}=\widetilde{\tau}_{q}^{-1}$. Since
$\widetilde{\lambda}_{q}\geq\lambda^{\prime}$, we deduce from (A.54) that
(A.103) holds.
Third, we claim that
$\displaystyle\left\|\left(\prod_{i=1}^{k}D^{\alpha_{i}}D_{t}^{\beta_{i}}\right)E\right\|_{L^{p}(\mathrm{supp\,}G)}\lesssim\mathcal{C}_{G}\mathcal{C}_{*}(\lambda\zeta^{-1})^{\mathsf{d}}\Lambda^{|\alpha|+1}\mathcal{M}\left(|\beta|,M_{t},\nu,\widetilde{\nu}\right)$
(A.104)
holds whenever $|\alpha|\leq N_{*}-\mathsf{d}$ and $|\beta|\leq M_{*}$. This estimate
again follows from Lemma A.14, this time with $f=E$, by appealing to the
previously established bound (A.74) and the Sobolev embedding
$W^{1,1}(\mathbb{T}^{3})\subset L^{p}(\mathbb{T}^{3})$ for
$p\in(1,\nicefrac{{3}}{{2}})$.
At last, we are in the position to apply Lemma A.10. The bound (A.103) implies
that assumption (A.40) holds with $B=D_{t}$,
$\lambda_{v}=\widetilde{\lambda}_{v}=\widetilde{\lambda}_{q}$, and
$\mu_{v}=\widetilde{\mu}_{v}=\max\{\widetilde{\tau}_{q}^{-1},\widetilde{\nu}\}$. The
bound (A.104) implies that assumption (A.41) of Lemma A.10 holds with
$\mathcal{C}_{f}=\mathcal{C}_{G}\mathcal{C}_{*}(\lambda\zeta^{-1})^{\mathsf{d}}\Lambda$,
$\lambda_{f}=\widetilde{\lambda}_{f}=\Lambda$, $\mu_{f}=\nu$, and
$\widetilde{\mu}_{f}=\widetilde{\nu}$. We may now use estimate (A.44), and the
assumption that $\Lambda\geq\widetilde{\lambda}_{q}$ to deduce that
$\displaystyle\left\|D^{N^{\prime}}\partial_{t}^{M^{\prime}}E\right\|_{L^{p}}\lesssim\mathcal{C}_{G}\mathcal{C}_{*}(\lambda\zeta^{-1})^{\mathsf{d}}\Lambda^{N^{\prime}+1}(\max\{\mathcal{C}_{v}\Lambda,\widetilde{\nu},\widetilde{\tau}_{q}^{-1}\})^{M^{\prime}}$
(A.105)
holds whenever $M^{\prime}\leq M_{\circ}$ and $N^{\prime}+M^{\prime}\leq
N_{\circ}+M_{\circ}$. Combining (A.101) and (A.105) we deduce that
$\displaystyle\left\|D^{N}D_{t}^{M}\mathring{R}_{\mathrm{nonlocal}}\right\|_{L^{1}}$
$\displaystyle\lesssim\mathcal{C}_{G}\mathcal{C}_{*}(\lambda\zeta^{-1})^{\mathsf{d}}\sum_{\begin{subarray}{c}M^{\prime}\leq M\\ N^{\prime}+M^{\prime}\leq N+M\end{subarray}}\widetilde{\lambda}_{q}^{N-N^{\prime}}\widetilde{\tau}_{q}^{-(M-M^{\prime})}\Lambda^{N^{\prime}+1}(\max\{\mathcal{C}_{v}\Lambda,\widetilde{\nu},\widetilde{\tau}_{q}^{-1}\})^{M^{\prime}}$
$\displaystyle\lesssim\mathcal{C}_{G}\mathcal{C}_{*}(\lambda\zeta^{-1})^{\mathsf{d}}\Lambda^{N+1}(\max\{\mathcal{C}_{v}\Lambda,\widetilde{\nu},\widetilde{\tau}_{q}^{-1}\})^{M}$
(A.106)
whenever $N\leq N_{\circ}$ and $M\leq M_{\circ}$. Estimate (A.79) follows by
appealing to the assumption (A.77), which ensures that the gain from
$(\lambda\zeta^{-1})^{{\mathsf{d}}-1}$ is already a sufficiently strong
amplitude gain, and we use the leftover factor of $\lambda\zeta^{-1}$ to
absorb implicit constants. ∎
# Predicting Autism Spectrum Disorder Using Machine Learning Classifiers
Koushik Chowdhury
MSc. Student
Saarland University
Saarbrücken, Germany
<EMAIL_ADDRESS>
Mir Ahmad Iraj
MSc. Student
American International University-Bangladesh
Dhaka, Bangladesh
<EMAIL_ADDRESS>
###### Abstract
The prevalence of Autism Spectrum Disorder (ASD) is constantly growing. Early
identification of ASD allows a person to stay safe and healthy through proper
care. Humans can hardly estimate the present condition and stage of ASD from
primary symptoms alone. It is therefore necessary to develop a method that
provides a reliable assessment of ASD. This paper evaluates several machine
learning classifiers on this task. Among them, the Support Vector Machine
(SVM) provides the best result, and among the SVM kernels tested, the Gaussian
radial basis kernel performs best. The proposed classifier achieves 95%
accuracy on a publicly available standard ASD dataset.
###### Index Terms:
ASD, SVM, Classifier, ROC, Accuracy.
## I INTRODUCTION
Analyzing data for a purpose such as prediction or performance measurement is
called data mining. It comprises several tasks, including association rule
mining, classification, prediction, and clustering. Researchers have adapted
data mining to many fields of study. In this paper, we predict autism spectrum
disorder (ASD) by applying data mining techniques, specifically the
Support Vector Machine (SVM).
Autism spectrum disorder (ASD) is a neurological and developmental disorder
that begins early in childhood and lasts throughout a person’s life. It
affects how a person acts and interacts with others. It is called a "spectrum"
disorder because people with ASD can have a range of symptoms [1]. People with
ASD might have problems talking with others, or they might not make eye
contact when spoken to. They may also have restricted interests and
repetitive behaviors. Research indicates that both genes and environment play
important roles; there is currently no standard treatment for ASD, but there
are many ways to increase a person’s ability to develop normally. Determining
whether a person is autistic requires assessing their behavior and symptoms.
Therefore, we need a strong dataset on which to apply a technique that
identifies autism spectrum disorder. However, very limited autism datasets
associated with clinical screening are available, and most of them are
genetic in nature.
A suitable autism spectrum disorder (ASD) dataset was assembled by combining
three data sets: ASD Screening Data for Child, ASD Screening Data for
Adolescent, and ASD Screening Data for Adult [1]. We merged these three sets
into one dataset for this work. The resulting dataset consists of ten
individual characteristics and ten behavioral features; the characteristics
are of type String, Number, or Boolean, and the ten behavioral questions are
binary. Using the Python programming language and libraries such as sklearn,
pandas, keras, numpy, and matplotlib, we obtained prediction results from SVM
in both graphical and numeric form.
## II Background Study
### II-A Literature Review
The use of modern machine learning and data mining technologies to explore
data efficiently is on the rise, and applications increasingly demand highly
accurate results.
Several researchers have used machine learning techniques to measure data as
accurately as possible, implementing datasets in several classifiers and
algorithms and reporting the resulting accuracy rates. For example, a 2016
study used machine learning algorithms for breast cancer risk prediction and
diagnosis [2]. The authors compared Support Vector Machine (SVM), Decision
Tree (C4.5), Naive Bayes (NB), and k-Nearest Neighbors (k-NN), implemented on
the Wisconsin Breast Cancer dataset [2]. Their experimental results showed
that SVM gave the highest accuracy with the lowest error rate; all experiments
were executed within a simulation environment using the WEKA data mining tool
[2].
Many similar studies exist on various topics and datasets. Another example is
Magnetic Resonance Imaging (MRI) stroke classification by Support Vector
Machine (SVM) [3]. The authors presented a method to classify brain MRI images
related to stroke: Gabor filters and histograms were used to extract features
from the images, and the features were classified using SVM with various
kernels. Their experimental results showed that the method achieves
satisfactory classification accuracy, with the linear kernel performing best.
From the research mentioned above, we gathered knowledge and related
information used in our work. We aim to improve the prediction of an
individual's ASD screening result through analysis with data mining
techniques.
### II-B Data Understanding
#### II-B1 Attributes
Our work uses a publicly available standard data set [4]. Scores such as
A1_Score and A2_Score are the results of a questionnaire survey by the Autism
Research Centre at the University of Cambridge, UK [5]. The dataset consists
of nine individual characteristics and ten behavioral features, listed below.
Attribute | Type
---|---
gender | String
ethnicity | String
jaundice | Boolean (yes or no)
PDD | Boolean (yes or no)
relation | String
country_of_res | String
did_the_qn_before | Boolean (yes or no)
age_desc | Integer
A1_Score | Binary (0, 1)
A2_Score | Binary (0, 1)
A3_Score | Binary (0, 1)
A4_Score | Binary (0, 1)
A5_Score | Binary (0, 1)
A6_Score | Binary (0, 1)
A7_Score | Binary (0, 1)
A8_Score | Binary (0, 1)
A9_Score | Binary (0, 1)
A10_Score | Binary (0, 1)
Class/ASD | Boolean (yes=1 or no=0)
#### II-B2 Missing Value
The dataset contains many missing values, and handling them was a major
challenge given our 19 variables. There are a few ways to handle missing
values: for example, replace them with averaged values, or delete the
instances that contain them. Since we have 19 variables, we decided to remove
the instances with missing values.
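The listwise-deletion choice can be sketched with pandas on a toy frame standing in for the real data; mean imputation is shown as the alternative mentioned above. (UCI-style files often mark missing entries with `?`, which would be mapped to `NaN` via `na_values` when reading.)

```python
import pandas as pd

# Toy frame with missing entries in place of the real data set.
df = pd.DataFrame({
    "gender": ["m", "f", None],
    "ethnicity": ["White", None, "Asian"],
    "A1_Score": [1, 0, 1],
})

# Option 1 (used in this work): drop every instance with a missing value.
clean = df.dropna()

# Option 2 (alternative): impute, e.g. fill numeric columns with their mean.
imputed = df.fillna(df.mean(numeric_only=True))

print(len(clean))  # only the first row is complete, so 1
```

Dropping rows is the simpler option and avoids biasing binary score columns with fractional imputed values, at the cost of discarding data.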
## III Approaches
### III-A Data Classification
Data classification proceeds in two steps. The first step, known as the
learning step, builds a description of a given set of classes from the
analysis of a set of data instances, each of which belongs to a predefined
class. In the second step, the data set is tested using various machine
learning techniques, and the classification accuracy, AUC value, precision,
recall, etc. are calculated. A model is then obtained that predicts future
outcomes based on the historical or recorded data. There are various machine
learning techniques for classification; those used in our work are listed in
the following subsection.
### III-B Classifiers
We applied the following machine learning classifiers to the data set. First,
we divided the data set into two parts: training (70%) and testing (30%).
After splitting the data set, we used each classifier to compute the
evaluation metrics.
* •
Naïve Bayes.
* •
K-Nearest Neighbor(kNN).
* •
Logistic Regression.
* •
Gradient Boosting.
* •
Support Vector Machine.
* •
Decision Tree.
* •
MLP Classifier.
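The split-and-fit procedure above can be sketched with scikit-learn. Synthetic data stands in for the preprocessed ASD set, and the classifier settings are illustrative defaults, not the exact configuration used in this work.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the preprocessed ASD data (19 encoded features).
X, y = make_classification(n_samples=600, n_features=19, random_state=0)

# 70% training / 30% testing split, as used in this work.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0)

classifiers = {
    "NB": GaussianNB(),
    "kNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "GB": GradientBoostingClassifier(),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}

# Fit each classifier on the training split and score it on the test split.
accuracy = {name: clf.fit(X_train, y_train).score(X_test, y_test)
            for name, clf in classifiers.items()}
```

On the real data, the categorical attributes (gender, ethnicity, etc.) would first need to be numerically encoded before fitting.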
### III-C Evaluation Metrics
We select the following evaluation metrics to evaluate our results. These
metrics, especially accuracy and AUC value, help us to find the best
classifier for the data set.
* •
Accuracy.
* •
AUC Value.
* •
Precision.
* •
Recall.
* •
F1 Score.
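For reference, accuracy, precision, recall, and F1 for the positive (yes = 1) class can all be computed from confusion-matrix counts; a minimal sketch:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for the positive (yes=1) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

acc, pre, rec, f1 = binary_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
# acc = 0.6; precision, recall and F1 are each 2/3 in this toy example
```

The per-class "no" rows in the tables below follow by treating 0 as the positive label instead.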
## IV Results and Findings
### IV-A Comparison between different classifiers
To determine which classifier performs better than the others, we compare
them.
#### IV-A1 Results
To compare the classifiers, we measured their accuracy rate, AUC value,
precision, recall, and F1-score, reported in Table I.
| NB | kNN | LR | GB | SVM | DT | MLP
---|---|---|---|---|---|---|---
Acc | 0.76 | 0.92 | 0.915 | 0.921 | 0.95 | 0.87 | 1
AUC | 0.78 | 0.81 | 0.90 | 0.88 | 0.87 | 0.86 | 0.67
Pre (no) | 0.86 | 0.87 | 0.93 | 0.89 | 0.89 | 0.90 | 1
Pre (yes) | 0.69 | 0.73 | 0.89 | 0.92 | 0.91 | 0.84 | 0.47
Rec (no) | 0.81 | 0.84 | 0.94 | 0.96 | 0.95 | 0.92 | 0.37
Rec (yes) | 0.76 | 0.78 | 0.87 | 0.80 | 0.79 | 0.82 | 1
F1 (no) | 0.83 | 0.86 | 0.93 | 0.93 | 0.92 | 0.91 | 0.54
F1 (yes) | 0.72 | 0.75 | 0.88 | 0.85 | 0.84 | 0.83 | 0.64
TABLE I: Classifiers Results
From Table I, Naïve Bayes provides the lowest accuracy, 0.76, while the MLP
classifier shows the highest, 1.00. However, the MLP classifier also has the
lowest AUC value, only 0.686, and its precision, recall, and F1-scores are not
close to those of the other classifiers: the MLP classifier overfits the data
set, likely because MLPs do not work well with small datasets. Setting the MLP
classifier aside, the Support Vector Machine (SVM) is the best performer for
this task, with the second-best accuracy rate of 0.950 (95%). Although
Logistic Regression provides the highest AUC value, 0.903, the Support Vector
Machine shows a decent overall result on all evaluation metrics. Comparing
these aspects, we found the Support Vector Machine to be well suited to this
kind of task.
#### IV-A2 ROC Curve
The ROC curve is a graphical diagram that plots the true positive rate against
the false positive rate of a binary classifier at varying decision thresholds.
Figure 1: ROC Curve of Classifiers
From figure 1, we can see four ROC curves for four classifiers, plotted
together with their AUC values. A perfect test has an AUC value of 1.0,
whereas random chance gives an AUC value of about 0.5. Here, the classifiers
have AUC values of 0.782 for Naïve Bayes, 0.809 for k-Nearest Neighbor, 0.866
for Decision Tree, and 0.878 for Gradient Boosted Trees; Gradient Boosting
gives the best AUC value among these four. This is the advantage of ROC curve
measurement: two or more tests can be compared visually and simultaneously in
one figure. Next we compare Logistic Regression, SVM, and the MLP classifier
in figure 2.
Figure 2: ROC Curve of Classifiers
Figure 2 shows three ROC curves plotted with their AUC values. Logistic
Regression provides an AUC of 0.903, the MLP classifier 0.686, and SVM 0.870.
In this ROC comparison, Logistic Regression gives the best result and the MLP
classifier the worst; SVM is slightly below Logistic Regression here but gives
the best appreciable result in accuracy.
### IV-B Findings
The MLP classifier provides 100% accuracy, but its AUC value, precision,
recall, and F1-score are very low, so it is not a good fit for our dataset.
Logistic Regression and SVM provide almost the same values and are well suited
to this kind of dataset. SVM works slightly better than Logistic Regression
when evaluated on accuracy, so it is the better choice for this kind of
dataset.
### IV-C Prediction using Support Vector Machine
The Support Vector Machine algorithm uses sets of mathematical functions known
as SVM kernels. A kernel takes data as input and transforms it into the
required form; different SVM variants use different types of kernel functions
[6]. The most used SVM kernels are the linear kernel, the polynomial kernel,
the Gaussian radial basis kernel, and the sigmoid kernel. The linear kernel
suits problems in which the dependent response relates linearly to the
independent variables; our data, with its two response classes (Yes and No),
does not fit this assumption well. We therefore carry out our experiments with
the remaining three most used SVM kernels.
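The kernel comparison can be sketched with scikit-learn's `SVC`, whose `"rbf"` kernel is the Gaussian radial basis kernel; synthetic data again stands in for the ASD set, and the hyperparameters are defaults rather than the exact settings of this work.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the ASD features.
X, y = make_classification(n_samples=400, n_features=19, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=1)

# Fit one SVM per kernel and record its test accuracy.
kernel_accuracy = {
    kernel: SVC(kernel=kernel).fit(X_train, y_train).score(X_test, y_test)
    for kernel in ("poly", "rbf", "sigmoid")  # 'rbf' = Gaussian radial basis
}
```

Kernel performance depends strongly on hyperparameters such as `C`, `gamma`, and (for `poly`) `degree`, so a fair comparison would tune these per kernel.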
#### IV-C1 SVM Kernel Results
We need to find which SVM kernel performs best with our dataset, so we compare
their accuracy rate, AUC value, precision, recall, and F1-score.
| Polynomial | Gaussian | Sigmoid
---|---|---|---
Acc | 0.9479 | 0.9503 | 0.4570
AUC | 0.8318 | 0.8703 | 0.4015
Pre (no) | 0.87 | 0.89 | 0.58
Pre (yes) | 0.81 | 0.91 | 0.22
Rec (no) | 0.90 | 0.95 | 0.60
Rec (yes) | 0.77 | 0.79 | 0.60
F1 (no) | 0.89 | 0.92 | 0.59
F1 (yes) | 0.79 | 0.84 | 0.59
TABLE II: SVM Kernel Results
From the comparison table above, the sigmoid SVM kernel provides a very poor
result, always below one half (0.5). The polynomial kernel and the Gaussian
radial basis kernel, on the other hand, perform well, with the Gaussian radial
basis kernel giving a better result than the polynomial kernel.
#### IV-C2 ROC Curves of SVM Kernels
Here the ROC curves of the SVM kernels are plotted with their AUC values. A
perfect test has an AUC value of 1.0, whereas random chance gives an AUC value
of about 0.5. From figure 3, the AUC value of the polynomial kernel is 0.832,
that of the Gaussian radial basis kernel is 0.870, and that of the sigmoid
kernel is 0.401. The sigmoid kernel therefore performs very poorly, while the
polynomial and Gaussian radial basis kernels perform similarly well.
Figure 3: ROC Curve of SVM Kernels
#### IV-C3 SVM Finding
The sigmoid kernel gives a very poor result for this dataset, whereas the
Gaussian radial basis kernel and the polynomial kernel perform similarly well.
The Gaussian radial basis kernel performs slightly better than the polynomial
kernel, so we conclude that the SVM with the Gaussian radial basis kernel is
the best-performing algorithm and classifier for this kind of dataset. The
following figure 4 shows a sample comparison of the actual results and the
results of the SVM kernels.
Figure 4: Actual Results vs Predicted Results (sample comparison)
## V Future Work
With the arrival of the massive cloud age, big data mining is a prominent
research trend. In the future, to achieve efficient and accurate
classification with data mining and machine learning, we need to work with
much larger amounts of data. We also suggest exploring deep learning instead
of traditional classifiers.
## VI Conclusion
This work investigates the efficiency of various machine learning classifiers
on a specific kind of dataset. We chose medical data to predict autism
spectrum disorder. To carry out the work smoothly, we mapped the features and
built a dataset free of missing values. Based on the experiments conducted in
this study, we found the Support Vector Machine (SVM) very suitable for this
dataset, so we chose SVM for deeper analysis. To obtain a more accurate
result, we implemented the most used SVM kernels and found that the Gaussian
radial basis kernel is the best-performing algorithm and classifier for this
kind of dataset. Dealing with such a medical dataset and finding the most
reliable classifier was a great challenge.
## References
* [1] Thabtah, F. (2019). Machine learning in autistic spectrum disorder behavioral research: A review and ways forward. Informatics for Health and Social Care, 44(3), 278-297.
* [2] Asri, H., Mousannif, H., Al Moatassime, H., & Noel, T. (2016). Using machine learning algorithms for breast cancer risk prediction and diagnosis. Procedia Computer Science, 83, 1064-1069.
* [3] A. S. Shanthi. Support Vector Machine for MRI stroke classification. International Journal on Computer Science and Engineering (IJCSE), ISSN: 0975-3397, April 2014.
* [4] Autism Screening Adult Data Set. Source by Fadi Fayez Thabtah, Department of Digital Technology, Manukau Institute of Technology, Auckland, New Zealand. https://archive.ics.uci.edu/ml/datasets/Autism+Screening+Adult
* [5] National Institue of Health Research. Autism Research Centre. http://docs.autismresearchcentre.com/tests/AQ10.pdf
* [6] Achirul Nanda, M., Boro Seminar, K., Nandika, D., & Maddu, A. (2018). A comparison study of kernel functions in the Support Vector Machine and its application for termite detection. Information, 9(5).
# First Star Formation in the Presence of Primordial Magnetic Fields
Daegene Koh
Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, SLAC National Accelerator Laboratory, Menlo Park, CA 94025
Tom Abel
Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, SLAC National Accelerator Laboratory, Menlo Park, CA 94025
Karsten Jedamzik
Laboratoire d’Univers et Particules de Montpellier, UMR5299-CNRS, Université de Montpellier, 34095 Montpellier, France
###### Abstract
It has been recently claimed that primordial magnetic fields could relieve the
cosmological Hubble tension. We consider the impact of such fields on the
formation of the first cosmological objects, mini-halos forming stars, for
present-day field strengths in the range of $2\times 10^{-12}$ - $2\times
10^{-10}$ G. These values correspond to initial ratios of the Alfvén velocity
to the speed of sound of $v_{a}/c_{s}\approx 0.03-3$. We find that when
$v_{a}/c_{s}\ll 1$, the effects are modest. However, when $v_{a}\sim c_{s}$,
the starting time of the gravitational collapse is delayed and the duration
extended as much as by $\Delta$z = 2.5 in redshift. When $v_{a}>c_{s}$, the
collapse is completely suppressed and the mini-halos continue to grow and are
unlikely to collapse until reaching the atomic cooling limit. Employing
current observational limits on primordial magnetic fields we conclude that
inflationary produced primordial magnetic fields could have a significant
impact on first star formation, whereas post-inflationary produced fields do
not.
stars: Population III — (cosmology:) dark ages, reionization, first stars —
stars: magnetic field — magnetohydrodynamics (MHD)
††journal: ApJL††software: Enzo (Bryan et al., 2014), YT (Turk et al., 2011)
## 1 Introduction
Magnetic fields are observed throughout the local Universe, in galaxies, in
clusters of galaxies, as well as very possibly in the extra-galactic medium.
Observations of TeV blazars (Neronov & Vovk, 2010) are most easily explained
by the existence of an almost volume filling magnetic field permeating the
space between galaxies. Recently it has also been shown by Jedamzik & Pogosian
(2020) that magnetic fields of $\sim 0.05\,$nG existing before the epoch of
recombination show good promise to alleviate the $4-5\sigma$ cosmic Hubble
tension and the $\sim 2\sigma$ cosmic $S_{8}$ tension within standard
$\Lambda$CDM. The Hubble tension is the mismatch between the inferred small
present day Hubble constant $H_{0}$ from observations of the cosmic microwave
background radiation by the Planck satellite when assuming $\Lambda$CDM
(Planck Collaboration et al., 2018), and a larger $H_{0}$ inferred by local
observations (Reid et al., 2019; Wong et al., 2019; Pesce et al., 2020). The
$S_{8}$ tension is the difference between the $\Lambda$CDM predicted current
matter fluctuation amplitude on a scale of 8 Mpc $h^{-1}$ and that observed
directly via weak lensing (Abbott et al., 2018; Asgari et al., 2020). It has
been shown that weak magnetic fields induce density fluctuations on small,
sub-Jeans comoving $\sim$ kpc scales before recombination (Jedamzik & Abel,
2013) which would indeed alter the prediction for $H_{0}$ and $S_{8}$ within
$\Lambda$CDM in a favorable way. Such fields necessarily would be of
primordial origin.
There are two conceptually different possibilities for the generation of
primordial magnetic fields (PMFs, hereafter), magnetogenesis during inflation
and post-inflationary generation (often referred to as "causal" scenarios), as
for example during a first-order electroweak phase transition. Though multiple
proposals exist (cf. Durrer & Neronov (2013); Subramanian (2016);
Vachaspati (2020) for reviews), there is no preferred candidate. Inflationary
scenarios have to lead to an approximate scale-invariant magnetic spectrum to
be successful, whereas most causal scenarios develop a very blue Batchelor
spectrum, with most magnetic power on small scales (Durrer & Caprini, 2003;
Saveliev et al., 2012). Stringent upper limits on PMFs from observations of
the cosmic microwave background radiation have been placed by Jedamzik &
Saveliev (2019) at $\lesssim 10^{-10}\,$G and $\lesssim 2\times
10^{-11}\,$G for inflationary and causal fields, respectively.
In this Letter, we entertain the idea that primordial magnetic fields indeed
existed and take steps to investigate their impact on first structure
formation. Since structure in $\Lambda$CDM is built bottom-up, marginal
differences in the formation of the first objects could have a significant
impact on all subsequent structure formation. Population III stars formed at
z $>$ 20 in mini-halos of mass $\sim 10^{5-6}M_{\odot}$ which gravitationally
collapse via $H_{2}$ cooling (Tegmark et al., 1997; Abel et al., 2002). Magnetic fields
are understood to impact present day star formation in a number of ways (McKee
& Ostriker, 2007). Although the exact nature of their impact on Pop III stars
is still unknown, there have been a number of somewhat idealized studies
exploring these effects (McKee et al., 2020), including reducing fragmentation
(Sharda et al., 2020), increasing ionization degree (Nakauchi et al., 2019),
and angular momentum transport (Machida & Doi, 2013). Any weak initial field
will be amplified by the small scale dynamo (Xu et al., 2008; Sur et al.,
2010; Schleicher et al., 2011; Turk et al., 2012), but tends to not strongly
alter the initial collapse forming primordial stars.
On the other hand, Sanati et al. (2020) considered the impact of PMFs on the
total matter spectrum and the subsequently formed dwarf galaxies. They ruled
out the highest strengths (i.e., $0.2-0.5\,$nG for inflationary fields) for a
number of reasons. First, the dwarf galaxies in these cases overproduce stars,
in contrast to local scaling relations. They also produce enough ionizing
photons to reionize the universe prior to z=9, contradicting numerous
other measurements. The properties of dwarf galaxies depend on the chemo-
dynamical environment they are embedded in. As a result, their
results cannot be fully interpreted without explicitly resolving the mini-halo
scales where the first objects were formed.
The latter is what we attempt here. In addition, we evolve the matter
perturbations and magnetic fields from the linear regime at high redshift,
considering the earliest collapsing object in a fairly large volume. This is
in contrast to Machida & Doi (2013); Nakauchi et al. (2019); McKee et al.
(2020); Sharda et al. (2020), which adopt non-linear initial conditions at
lower redshift. However, we stress that, unlike Sanati et al. (2020), we do not
take into account the additional power in the baryon density fluctuations
generated by the magnetic fields themselves (Wasserman, 1978; Kim et al.,
1996; Subramanian & Barrow, 1998). Like the above-mentioned works, our
study can therefore not reach definitive conclusions, but should add an important
element to the discussion. In the following section, we introduce our
simulation setup. Then, in section 3, we describe the results. Finally, we
discuss potential impact and caveats in section 4.
## 2 Simulation Setup
Our exploration involves a set of 5 cosmological simulations using the latest
public version of the adaptive mesh refinement ideal MHD simulation code Enzo
v2.6 (Bryan et al., 2014; Brummel-Smith et al., 2019). Our basic setup is
taken from Koh & Wise (2016) with a nested initial grid focusing on a single
mini-halo in a 250 $h^{-1}$ comoving kpc box. We consider the same physics as
with this earlier study including a nine-species non-equilibrium chemistry
model (Abel et al., 1997), a time-dependent Lyman-Werner background (Wise et
al., 2012), and a radiative self-shielding model (Wolcott-Green et al., 2011).
Our main focus is the evolution of the mini-halo prior to self-collapse; we
thus terminate the simulation when a maximal refinement level of 15 (a
spatial resolution of $\sim 400$ astronomical units) is reached and do not
follow the subsequent star formation and feedback processes.
In contrast to this earlier study, we modify the following parameters. First,
the Jeans refinement criterion is set to 32 zones per Jeans length. In
studies of magnetic field amplification by gravitational turbulence the
required minimum for this parameter is 30 to capture the small scale dynamo
(Federrath et al., 2011; Turk et al., 2012).
The central variable of interest is the initial magnetic field strength. We
seed the entire simulation domain with a magnetic field initially pointed in
the z-direction. To contextualize the values of the magnetic field strengths
chosen, we refer to the speed ratio $v_{a}/c_{s}$, where $v_{a}$ is the Alfvén
speed, and $c_{s}$ is the sound speed, and choose the following ratios for our
study
* •
0.03, 0.30, 0.66, 1.00, 3.00
For brevity, we will, in the rest of the Letter, refer to the ratios $<$1,
$\sim$1, $>$1 as sub-sonic, trans-sonic, and super-sonic ratios respectively.
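As a rough guide to the quantities behind these ratios, the two speeds can be evaluated in cgs units as follows. This is a minimal sketch, not the simulation code; the density and temperature inputs below are illustrative placeholders, not values from the runs.

```python
import math

K_B = 1.380649e-16   # Boltzmann constant, erg/K
M_H = 1.6726e-24     # hydrogen mass, g

def alfven_speed(b_gauss, rho):
    """Alfven speed v_a = B / sqrt(4*pi*rho) in an ideal MHD plasma (cgs)."""
    return b_gauss / math.sqrt(4.0 * math.pi * rho)

def sound_speed(temp_k, mu=1.22, gamma=5.0 / 3.0):
    """Adiabatic sound speed c_s = sqrt(gamma*k_B*T / (mu*m_H)) for neutral primordial gas."""
    return math.sqrt(gamma * K_B * temp_k / (mu * M_H))

# Illustrative values only: a ~microgauss proper field at an assumed density
# of 1e-23 g/cm^3, and cool gas at 200 K, give speeds of order 1 km/s each.
print(f"v_a = {alfven_speed(1.32e-6, 1e-23) / 1e5:.2f} km/s")
print(f"c_s = {sound_speed(200.0) / 1e5:.2f} km/s")
```

The point of the sketch is that both speeds are of the same order for the field strengths considered here, which is why the ratio is a natural control parameter.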
Figure 1: Density-weighted projection plots of density (top) and temperature
(bottom) of the different runs all spanning 1 kpc wide. From left to right,
the plots are of increasing Alfvén to sound speed ratios corresponding to
0.03, 0.30, 0.66, 1.00, and 3.00 respectively. Each plot is taken from the
final data output produced at the time of collapse except for the right-most
panel which was terminated at a comparable time with the trans-sonic (second
from the right) case at z=12.6.
A speed ratio of 1 is then equivalent to an initial proper B-field strength
$B = v_{a}\sqrt{4\pi\rho} = 1.32\times 10^{-6}$ G at z = 150, where $\rho$ is the
mean density in the box. This field strength corresponds to a comoving field
strength of $5.79\times 10^{-11}$ G. The full range of initial comoving field
strengths thus spans $1.75\times 10^{-12}$ G to $1.73\times 10^{-10}$ G. The comoving
region from which the $\sim 10^{6}M_{\odot}$ halo forms is approximately 10
kpc across and hence is the relevant scale over which we assume the magnetic
field to be initially uniform.
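The quoted numbers follow from simple flux-freezing scaling. A minimal check (not the paper's code, and assuming ideal-MHD flux freezing in a homogeneous expansion, so $B_{\rm com} = B/(1+z)^2$) converts the proper seed field to its comoving value and scales it linearly with the chosen speed ratios:

```python
Z_INIT = 150.0                 # redshift at which the field is seeded
B_PROPER_TRANSSONIC = 1.32e-6  # G, proper field for v_a/c_s = 1 (quoted in the text)

def comoving_field(b_proper, z):
    """Comoving field under flux freezing in a homogeneous expansion: B / (1+z)^2."""
    return b_proper / (1.0 + z) ** 2

b_com = comoving_field(B_PROPER_TRANSSONIC, Z_INIT)
print(f"comoving field at ratio 1: {b_com:.2e} G")  # ~5.79e-11 G

# v_a = B / sqrt(4*pi*rho) is linear in B at fixed density, so the seed field
# scales linearly with the chosen Alfven-to-sound-speed ratio.
for ratio in (0.03, 0.30, 0.66, 1.00, 3.00):
    print(f"ratio {ratio:4.2f}: {ratio * b_com:.2e} G comoving")
```

The endpoints of this loop reproduce the quoted range of $1.75\times 10^{-12}$ G to $1.73\times 10^{-10}$ G to rounding precision.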
## 3 Results
We analyze the numerical simulations based on full snapshots stored on disk
every 10 Myr of the evolution until the highest refinement level of 15
is reached for the first time, at which point the simulation is terminated and a
final data output is produced. The exception is the super-sonic ratio of 3.0
in which the strong B-field inhibited the collapse and we terminated the
simulation at z = 12.6.
### 3.1 Central Halo Inspection
Here we show plots comparing the different runs. Figure 1 shows projection
plots of density and temperature of the various runs, each panel spanning 1
kpc across. From left to right, the plots are in order of increasing
$v_{a}/c_{s}$ ratios, shown at the time when the highest refinement level is
reached, corresponding to collapse of the central region. The 0.03 and 0.30
sub-sonic ratio central halos collapse at z = 15, the 0.66 halo at z = 14, and
the trans-sonic halo at z = 12.7. In comparing the two sub-sonic cases of
ratios 0.03 and 0.30 where the collapse times are similar, we can see that the
extended filament protruding from the central object is less dense in the
greater ratio case. Also, the surrounding satellite gas clouds all have
noticeably reduced densities.
As the time of collapse is delayed at higher $v_{a}/c_{s}$ ratios, the central
region has more time to grow both in size as it merges with the nearby sub-
halos and in temperature. In particular, for the trans-sonic case where
$v_{a}\sim c_{s}$, we see an extended temperature cavity where the central
halo has collapsed, surrounded by highly heated gas, with temperatures
reaching $T\sim 10^{4}$ K. The nearby gas clouds to the left and below
the central halo in the plane of the plot in the leftmost panel have
completely merged with the central halo in the trans-sonic case, resulting in
significantly elevated temperatures. By the time these clouds merge in, the
central halo had already cooled to form a dense core. The infalling gas
then collides with the dense core and is scattered around it, producing the
extended sub-structure shown in the plot. In the super-sonic case, we see
continued heating of the central region of the halo while the density remains
quite low; cooling has not begun even at this late stage.
One significant trend is that the time of collapse is delayed as a function of
increasing speed ratio. The additional magnetic pressure heats the gas and
adds an additional barrier for the gravity to overcome to initiate self-
collapse. This trend is such that the trans-sonic halo already collapses at z
= 12.66, with the magnetic field contributing a delay of $\Delta$z
= 2.5. Following this trend, we estimate that the central halo in the highest
ratio run of 3.0 is unlikely to collapse even until z = 10 and may only
collapse once reaching the atomic cooling limit.
Figure 2 shows the temperature evolution of the highest density point as a
function of redshift for each of the different simulations. As the initial
$v_{a}/c_{s}$ increases, the peak temperature reached by this point also
increases. On the other hand, the minimum temperature reached by this densest
point is lowered by a few 10s of Kelvin as the ratio is increased. This
pattern does not hold true for the ratio of 3 as the object has yet to undergo
collapse, but we would expect it to follow the pattern once it does collapse.
This rise in temperature not only results in delayed collapse, but we also
observe that, once collapse takes place, it is progressively prolonged with
increasing $v_{a}/c_{s}$ ratio. In the case of the super-sonic ratio of 3, the
halo has not begun to cool even at the final data dump at z = 12.6.
Figure 2: Plot of temperature at the cell with the highest density as a
function of redshift for all simulations. In the super-sonic case, the central
halo has yet to collapse and is continually heating up as it grows in mass. As
the initial $v_{a}/c_{s}$ ratio increases, the peak temperature reached prior
to cooling is higher, the time to collapse is delayed, and the duration of the
cooling is extended. Figure 3: Radial profiles of density (upper left),
temperature (upper right), HII fraction (lower left), and $\mathrm{H_{2}}$
fraction (lower right) centered about the densest point in the central halo
for ratios of 0.03 (solid) and 1.00 (dashed). The trans-sonic case results in
a larger halo, which produces the more extended profile as seen in the density
and temperature plots. Embedded in the HII fraction plot is a projection plot
of the HII fraction weighted by density spanning 10pc centered around the
densest point which highlights the highly asymmetric nature of the collapse.
Figure 3 shows radial profiles centered around the central halo for a sub-
sonic ratio of 0.03 and a trans-sonic ratio of 1.00. As the overall collapse
is delayed, the halo is able to accrete more mass and thus we see a more
extended profile in the density plot (upper left). The temperature profile
(upper right) shows that the temperature in the core for the trans-sonic halo
is a few 10s of K cooler but with a more extended heated tail. Surrounding the
cool inner region is a hot gas which is heated to over 1000 degrees greater in
the trans-sonic halo. As the profile is spherically-averaged, the actual
temperatures in the heated region reach several thousand K. The neutral
$H_{2}$ fraction (lower right) shows a correspondingly extended profile for the
trans-sonic case, as the gas clouds that harbored the molecular cloud in the
sub-sonic cases have merged into the central object.
Inspecting the collapsed region in detail shows that the nature of the
collapse itself has changed drastically between the two cases. In the sub-
sonic cases, the central halo undergoes a mostly self-similar spherical
collapse. On the other hand, the collapse in the trans-sonic halo proceeds
along a particular axis along a filamentary structure. This results in a
vastly different substructure particularly noticeable in the HII fraction
profile. The ion fraction shows a pronounced increase, by a couple of orders of
magnitude, in the available ions in the cool region surrounding the central
heated core. Figure 3 also includes a projection of the ion fraction for the
trans-sonic case as weighted by density spanning 10 pc to show the asymmetric
nature of the collapse.
Figure 4: Phase plot of the comoving magnetic field strength against baryon
overdensity within a sphere of 1 kpc surrounding the densest point. The left
shows the sub-sonic ratio of 0.03 while the right shows 1.00. Red line
indicates the average field strength while the dotted line is a
$B\propto\rho^{2/3}$ reference trendline representing the ideal magnetized
spherical collapse. While the sub-sonic case suggests some turbulent
amplification, the trans-sonic halo is not even matching the compressional
amplification. This further suggests that the gravitational collapse is
occurring along the field lines.
### 3.2 Magnetic Field Evolution
We now present the behavior of the magnetic fields in our simulation.
Figure 4 shows phase diagrams of comoving magnetic field strength versus the
baryon overdensity in a sphere of radius 1 kpc surrounding the densest cell, for
the sub-sonic (0.03) and trans-sonic runs. The red line follows the mass-
weighted average magnetic field strengths. The dotted line shows
$B\propto\rho^{2/3}$, where $\rho$ is density, which is the scaling for the
magnetic field amplification in the case of an ideal spherical collapse, or
compressional amplification. In the sub-sonic scenario, there is still evidence
of small-scale-dynamo-driven amplification, shown by the red line growing
steeper than the dotted line, noting that it is muted relative to scenarios
where $v_{a}/c_{s}\ll 1$. On the other
hand, in the trans-sonic case, the collapse is not driving strong
amplification, and in fact the field is even less amplified than expected
during ideal compression. The latter implies that much of the collapse must be
occurring along the field lines, rather than squeezing lines together to drive
amplification. This is supported by the projection plot in Fig. 3, where the
collapse was no longer spherical and had a preferential axis. Furthermore, the
field is effectively saturated, having reached energies comparable to the
kinetic energy in the system. In fact, in this particular scenario, the
magnetic energy is comparable to the kinetic component over the entire domain.
Figure 5 shows a projection plot of the plasma $\beta$, which is the ratio of
the thermal pressure to the magnetic pressure for the trans-sonic case. The
magnetic fields are overplotted as streamlines and show that the initial
magnetic field which was initialized in the z-direction is still coherent
across the halo that is formed. Within the halo, the turbulent collapse of the
gas pulls the field lines with it and reorders them.
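Plasma $\beta$ as used here is the ratio of thermal to magnetic pressure. A minimal cgs evaluation (a sketch with illustrative input values, not numbers from the runs):

```python
import math

K_B = 1.380649e-16  # Boltzmann constant, erg/K

def plasma_beta(n_cm3, temp_k, b_gauss):
    """beta = P_thermal / P_magnetic = n*k_B*T / (B^2 / (8*pi)).

    beta < 1 means the magnetic pressure dominates the thermal pressure.
    """
    p_thermal = n_cm3 * K_B * temp_k
    p_magnetic = b_gauss ** 2 / (8.0 * math.pi)
    return p_thermal / p_magnetic

# Illustrative: n = 1 cm^-3, T = 100 K, B = 1 microgauss gives beta < 1,
# i.e. a magnetically dominated region.
print(f"beta = {plasma_beta(1.0, 100.0, 1e-6):.2f}")
```
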
Figure 5: Projection plot of plasma $\beta$ weighted by density centered
around the central object spanning 1 kpc across in the trans-sonic case. The
streamlines trace the magnetic fields and show the coherence of the initial
magnetic field oriented in the z-direction over the span of the figure. In a
large fraction of the total region, the magnetic pressure dominates over the
thermal pressure.
Figure 6 shows the $v_{a}/c_{s}$ ratio at the highest density point as a
function of redshift for all the simulations. There is a ceiling for this
ratio in the core of the central halo at $v_{a}/c_{s}\sim 10$. For the sub-
sonic cases, the magnetic field is rapidly amplified to approach this limit.
As the initial field strength is increased, the degree of amplification is
reduced overall and the ratio reaches a plateau. This corresponds again to the
saturation point.
Figure 6: Plot of the $v_{a}/c_{s}$ ratio at the cell with the highest density
as a function of redshift for all simulations. At sub-sonic ratios, the small-
scale dynamo driven amplification rapidly increases this ratio. As the initial
ratio is increased, the amplification is suppressed and the ratio near the
center of the halo stays near $v_{a}/c_{s}\sim 10$.
## 4 Conclusions and Discussion
In this Letter, we followed the collapse of the first mini-halos in
cosmological simulations in the presence of primordial magnetic fields of
comoving strengths in the range $\sim 2\times 10^{-12}-2\times 10^{-10}$G. We
find that when fields are of order $5\times 10^{-11}$ G or larger,
corresponding to the Alfvén speed being of the same order as or larger than the
sound speed at high redshift $z\gtrsim 100$, the formation of first
stars in such mini-halos is significantly impacted. In particular,
* •
With increasing initial magnetic field strength, the collapse of mini-halos
via $H_{2}$ cooling is progressively delayed in its final time of collapse and
extended in duration.
* •
Magnetic field amplification by the small-scale dynamo and by simple flux-
freezing during spherical collapse is increasingly reduced with increasing
field strength.
* •
At high B-field strengths, $H_{2}$-cooling induced collapse can potentially be
completely suppressed.
* •
Magnetic fields lead to asymmetric gravitational collapse and an elevated ion
population in the central few pcs of the mini-halo.
* •
For all magnetic field strengths investigated, the Alfvén-to-sound-speed
ratio in the center of the mini-halos saturates at $v_{a}/c_{s}\sim 10$, with
saturation occurring earlier for stronger initial fields.
Our findings could have profound implications for early universe structure
formation and beyond. First of all, the minimum collapse mass scale would be
greatly sensitive to this effect as the magnetic fields suppress collapse in
smaller mini-halos. This can impact the initial mass function of primordial
stars and the resulting chemical evolution of the universe. The presence of
larger pristine gas reservoirs by the time of collapse can result in radically
different star formation scenarios. It can also perhaps more readily
facilitate more exotic formation scenarios such as direct-collapse black
holes.
We caution, though, that our study has not included all relevant effects. Apart
from the neglect of ambipolar diffusion, a serious limitation is the neglect
of enhanced baryonic density perturbations induced by the magnetic fields
themselves. When taken into account, it may well be that, rather than being
delayed, star formation may occur earlier than in the no initial magnetic
field counterpart. This is due to the enhanced power on small scales making
mini-halos collapse earlier. However, we believe that most other effects found
in this study should remain and possibly lead to more massive first stars.
To place our findings into context, either inflationary produced PMFs or phase
transition produced PMFs could relieve the Hubble tension. In both cases a
pre-recombination field strength of $\sim 5\times 10^{-11}$G is required.
However, whereas in the former scenario this field strength is kept to the
present epoch, in the latter scenario fields are subject to further damping
down to $1\times 10^{-11}$G through the epoch of recombination. Coincidentally,
this is approximately the strength required to explain cluster magnetic fields
as entirely primordial. We may therefore conclude that only inflationary
produced PMFs may influence first structure formation, whereas phase
transition produced fields are too weak to have significant impact.
This work was performed using the open-source Enzo and YT codes, which are the
products of collaborative efforts of many independent scientists from
institutions around the world. Their commitment to open science has helped
make this work possible. This work was supported in part by the U.S.
Department of Energy SLAC Contract No. DE-AC02-76SF00515.
## References
* Abbott et al. (2018) Abbott, T. M. C., et al. 2018, Phys. Rev., D98, 043526, doi: 10.1103/PhysRevD.98.043526
* Abel et al. (1997) Abel, T., Anninos, P., Zhang, Y., & Norman, M. L. 1997, New A, 2, 181, doi: 10.1016/S1384-1076(97)00010-9
* Abel et al. (2002) Abel, T., Bryan, G. L., & Norman, M. L. 2002, Science, 295, 93, doi: 10.1126/science.295.5552.93
* Asgari et al. (2020) Asgari, M., Lin, C.-A., Joachimi, B., et al. 2020, arXiv e-prints, arXiv:2007.15633. https://arxiv.org/abs/2007.15633
* Brummel-Smith et al. (2019) Brummel-Smith, C., Bryan, G., Butsky, I., et al. 2019, The Journal of Open Source Software, 4, 1636, doi: 10.21105/joss.01636
* Bryan et al. (2014) Bryan, G. L., Norman, M. L., O’Shea, B. W., et al. 2014, ApJS, 211, 19, doi: 10.1088/0067-0049/211/2/19
* Durrer & Caprini (2003) Durrer, R., & Caprini, C. 2003, JCAP, 11, 010, doi: 10.1088/1475-7516/2003/11/010
* Durrer & Neronov (2013) Durrer, R., & Neronov, A. 2013, A&A Rev., 21, 62, doi: 10.1007/s00159-013-0062-7
* Federrath et al. (2011) Federrath, C., Sur, S., Schleicher, D. R. G., Banerjee, R., & Klessen, R. S. 2011, ApJ, 731, 62, doi: 10.1088/0004-637X/731/1/62
* Jedamzik & Abel (2013) Jedamzik, K., & Abel, T. 2013, JCAP, 10, 050, doi: 10.1088/1475-7516/2013/10/050
* Jedamzik & Pogosian (2020) Jedamzik, K., & Pogosian, L. 2020, Relieving the Hubble tension with primordial magnetic fields. https://arxiv.org/abs/2004.09487
* Jedamzik & Saveliev (2019) Jedamzik, K., & Saveliev, A. 2019, Phys. Rev. Lett., 123, 021301, doi: 10.1103/PhysRevLett.123.021301
* Kim et al. (1996) Kim, E.-j., Olinto, A., & Rosner, R. 1996, Astrophys. J., 468, 28, doi: 10.1086/177667
* Koh & Wise (2016) Koh, D., & Wise, J. H. 2016, MNRAS, 462, 81, doi: 10.1093/mnras/stw1673
* Machida & Doi (2013) Machida, M. N., & Doi, K. 2013, MNRAS, 435, 3283, doi: 10.1093/mnras/stt1524
* McKee & Ostriker (2007) McKee, C. F., & Ostriker, E. C. 2007, ARA&A, 45, 565, doi: 10.1146/annurev.astro.45.051806.110602
* McKee et al. (2020) McKee, C. F., Stacy, A., & Li, P. S. 2020, MNRAS, 496, 5528, doi: 10.1093/mnras/staa1903
* Nakauchi et al. (2019) Nakauchi, D., Omukai, K., & Susa, H. 2019, MNRAS, 488, 1846, doi: 10.1093/mnras/stz1799
* Neronov & Vovk (2010) Neronov, A., & Vovk, I. 2010, Science, 328, 73, doi: 10.1126/science.1184192
* Pesce et al. (2020) Pesce, D., et al. 2020, Astrophys. J., 891, L1, doi: 10.3847/2041-8213/ab75f0
* Planck Collaboration et al. (2018) Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2018, arXiv e-prints, arXiv:1807.06209. https://arxiv.org/abs/1807.06209
* Reid et al. (2019) Reid, M., Pesce, D., & Riess, A. 2019, Astrophys. J. Lett., 886, L27, doi: 10.3847/2041-8213/ab552d
* Sanati et al. (2020) Sanati, M., Revaz, Y., Schober, J., Kunze, K. E., & Jablonka, P. 2020, arXiv e-prints, arXiv:2005.05401. https://arxiv.org/abs/2005.05401
* Saveliev et al. (2012) Saveliev, A., Jedamzik, K., & Sigl, G. 2012, Phys. Rev. D, 86, 103010, doi: 10.1103/PhysRevD.86.103010
* Schleicher et al. (2011) Schleicher, D. R. G., Sur, S., Banerjee, R., et al. 2011, ArXiv e-prints
* Sharda et al. (2020) Sharda, P., Federrath, C., & Krumholz, M. R. 2020, MNRAS, doi: 10.1093/mnras/staa1926
* Subramanian (2016) Subramanian, K. 2016, Rept. Prog. Phys., 79, 076901, doi: 10.1088/0034-4885/79/7/076901
* Subramanian & Barrow (1998) Subramanian, K., & Barrow, J. D. 1998, Phys. Rev. Lett., 81, 3575, doi: 10.1103/PhysRevLett.81.3575
* Sur et al. (2010) Sur, S., Schleicher, D. R. G., Banerjee, R., Federrath, C., & Klessen, R. S. 2010, ApJ, 721, L134
* Tegmark et al. (1997) Tegmark, M., Silk, J., Rees, M. J., et al. 1997, ApJ, 474, 1, doi: 10.1086/303434
* Turk et al. (2012) Turk, M. J., Oishi, J. S., Abel, T., & Bryan, G. L. 2012, ApJ, 745, 154
* Turk et al. (2011) Turk, M. J., Smith, B. D., Oishi, J. S., et al. 2011, ApJS, 192, 9, doi: 10.1088/0067-0049/192/1/9
* Vachaspati (2020) Vachaspati, T. 2020. https://arxiv.org/abs/2010.10525
* Wasserman (1978) Wasserman, I. 1978, ApJ, 224, 337
* Wise et al. (2012) Wise, J. H., Turk, M. J., Norman, M. L., & Abel, T. 2012, ApJ, 745, 50, doi: 10.1088/0004-637X/745/1/50
* Wolcott-Green et al. (2011) Wolcott-Green, J., Haiman, Z., & Bryan, G. L. 2011, MNRAS, 418, 838, doi: 10.1111/j.1365-2966.2011.19538.x
* Wong et al. (2019) Wong, K. C., et al. 2019. https://arxiv.org/abs/1907.04869
* Xu et al. (2008) Xu, H., O’Shea, B. W., Collins, D. C., et al. 2008, ApJ, 688, L57, doi: 10.1086/595617
# Impact of initial mass functions on the dynamical channel of gravitational
wave sources
Long Wang, 1,2 Michiko S. Fujii,1 Ataru Tanikawa3
1Department of Astronomy, School of Science, The University of Tokyo, 7-3-1
Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
2RIKEN Center for Computational Science, 7-1-26 Minatojima-minami-machi, Chuo-
ku, Kobe, Hyogo 650-0047, Japan
3Department of Earth Science and Astronomy, The University of Tokyo, Japan
<EMAIL_ADDRESS>
(Accepted –. Received –; in original form –)
###### Abstract
Dynamically formed black hole (BH) binaries (BBHs) are important sources of
gravitational waves (GWs). Globular clusters (GCs) provide a major environment
to produce such BBHs, but the total mass of the known GCs is small compared to
that in the Galaxy; thus, the fraction of BBHs formed in GCs is also small.
However, this assumes that GCs contain a canonical initial mass function (IMF)
similar to that of field stars. This might not be true because several studies
suggest that extremely dense and metal-poor environments, where GCs may have
originated, can result in top-heavy IMFs. Although GCs with top-heavy IMFs may
have been easily disrupted or have become dark clusters, their contribution to
the GW sources can be significant. Using a high-performance and accurate $N$-body code, petar, we
investigate the effect of varying IMFs by carrying out four star-by-star
simulations of dense GCs with an initial mass of $5\times 10^{5}M_{\odot}$
and a half-mass radius of $2$ pc. We find that the BBH merger rate does not
monotonically correlate with the slope of IMFs. Due to a rapid expansion, top-
heavy IMFs lead to less efficient formation of merging BBHs. The formation
rate continuously decreases as the cluster expands because of the dynamical
heating caused by BHs. However, in star clusters with a top-heavier IMF, the
total number of BHs is larger, and therefore, the final contribution to
merging BBHs can still be larger than from clusters with the standard IMF if
the initial cluster mass and density are higher than those used in our model.
###### keywords:
methods: numerical – galaxies: star clusters: general – stars: black holes
Pubyear: 2019
## 1 Introduction
After LIGO/VIRGO detected gravitational wave (GW) events from mergers of
stellar-mass black holes (BHs) and neutron stars (NSs) (Abbott et al., 2019,
2020), many studies have investigated the origins of these events: isolated
binaries through common envelope evolution (e.g. Giacobbo & Mapelli, 2018;
Belczynski et al., 2020), through chemically homogeneous evolution (e.g.
Marchant et al., 2016; Mandel & de Mink, 2016) and stable mass transfer (e.g.
Kinugawa et al., 2014; Tanikawa et al., 2020), hierarchical stellar systems
(e.g. Antonini et al., 2014), open clusters (e.g. Ziosi et al., 2014; Kumamoto
et al., 2019; Di Carlo et al., 2019; Banerjee, 2020a), and galactic centers
(e.g. O’Leary et al., 2009). Dense stellar systems like globular clusters
(GCs) are considered to provide conducive environments to form GW progenitors
via few-body dynamical interactions (Portegies Zwart & McMillan, 2000; Downing
et al., 2010; Tanikawa, 2013; Bae et al., 2014; Rodriguez et al., 2016a, b;
Fujii et al., 2017; Askar et al., 2017; Park et al., 2017; Samsing et al.,
2018; Hong et al., 2020). Although this dynamical channel can efficiently
produce GW events, the total contribution seems to be less than the events
driven by the binary stellar evolution, because the total mass of GCs is a
small fraction of the total galactic stellar mass. However, it is assumed that
the initial mass function (IMF) of the GCs is the same as that of the field
stars, but it might not be true because we do not have sufficient
observational constraints on the IMF of the GCs yet.
Recently, several lines of observational evidence have indicated that extremely
dense star-forming regions may have top-heavy IMFs, such as the Arches cluster
in the Galactic center and 30 Doradus in the Large Magellanic Cloud, where the
heavy end of the IMF has an exponent $\alpha\approx-1.7$ to $-1.9$ (Lu et al., 2013;
Schneider et al., 2018; Hosek et al., 2019). Therefore, it is natural to
expect that the old and massive GCs in the Milky Way may also have had top-
heavy IMFs, since their birth environment was very different from the present-day
one. Zonoozi, Haghi, & Kroupa (2016) and Haghi et al. (2017) found
indirect evidence that a top-heavy IMF can explain the observed trend of
metallicity and mass-to-light ratio among the GCs in M31. On the galactic
scale, Zhang et al. (2018) found that for high-redshift ($z\sim 2-3$) star-
burst galaxies, a top-heavy (integrated) IMF is necessary to explain their
star formation rate.
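To illustrate what "top-heavy" means quantitatively, consider a single power-law IMF $dN/dm \propto m^{\alpha}$ over an assumed range $0.08$-$150\,M_{\odot}$ (a deliberate simplification for this sketch; realistic IMFs flatten at low masses). The fraction of total stellar mass born above $20\,M_{\odot}$ rises sharply as $\alpha$ flattens from the canonical Salpeter value $-2.35$ toward the observed $-1.7$:

```python
def mass_fraction_above(m_cut, alpha, m_lo=0.08, m_hi=150.0):
    """Fraction of total stellar mass above m_cut for dN/dm ∝ m^alpha.

    The mass integrand is m * m^alpha = m^(alpha+1), integrated analytically.
    Assumes alpha != -2 so the antiderivative m^(alpha+2)/(alpha+2) is valid.
    """
    p = alpha + 2.0
    integral = lambda a, b: (b ** p - a ** p) / p
    return integral(m_cut, m_hi) / integral(m_lo, m_hi)

for alpha in (-2.35, -1.9, -1.7):
    frac = mass_fraction_above(20.0, alpha)
    print(f"alpha = {alpha:5.2f}: {100 * frac:4.1f}% of mass above 20 Msun")
```

Under these assumptions the mass fraction in massive stars grows from under 10% at the Salpeter slope to roughly half at $\alpha=-1.7$, which is why top-heavy IMFs leave so many more BH progenitors.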
Meanwhile, there is a puzzle related to the phenomenon of multiple stellar
populations in GCs, called the “mass budget problem”. Observations show that
several GCs have more than half of their stars in enriched populations (Milone
et al., 2017). The stellar evolution models invoked to explain the formation of
element-enriched stars, especially the AGB scenario, cannot produce sufficient
materials to form such a large fraction of young populations (Bastian & Lardo,
2018, and references therein). A top-heavy IMF is a natural solution for this
problem (e.g. Wang et al., 2020).
With top-heavy IMFs, the strong wind mass loss from massive stars in the first
100 Myr significantly affects the density of the systems. Subsequently, the
massive stars leave a large number of BHs in the star clusters, and they will
also have a strong impact on the long-term evolution of the clusters. Indeed,
it has been shown that GCs with top-heavy IMFs expand and are disrupted much
more easily (Chatterjee, Rodriguez, & Rasio, 2017; Giersz et al., 2019; Wang,
2020; Weatherford et al., 2021).
Therefore, the dynamical evolution of GCs provides a strong constraint on the
shape of IMFs. By using semi-analytic models, Marks et al. (2012) suggested
that the shape of IMFs in GCs might depend on the metallicity and initial
cloud density. Their model, however, ignored the dynamical impact of BHs, and
thus, they overestimated the slopes of IMFs for observed GCs. By comparing
with scaled $N$-body models, Baumgardt & Sollima (2017) also argued that 35
observed dense GCs might not have top-heavy IMFs. However, GCs with top-heavy
IMFs might have existed in the past but have already disappeared or have
become dark clusters (Banerjee & Kroupa, 2011). Thus, they cannot be directly
observed today. Considering that a large number of BHs existed there, the
contribution of binary black hole (BBH) mergers might be significant.
Chatterjee, Rodriguez, & Rasio (2017), Giersz et al. (2019) and Weatherford et
al. (2021) have performed Monte-Carlo simulations of GCs to study the effect
of top-heavy IMFs on the survival of GCs and the BBH mergers. However, the
Monte-Carlo method has not been fully tested for the condition of top-heavy
IMFs, wherein a large fraction of BHs exists. Rodriguez et al. (2016) compared
the Monte-Carlo (the cmc code; e.g. Joshi, Rasio, & Portegies Zwart, 2000) and
the direct $N$-body (the nbody6++gpu code; Wang et al., 2015) methods for
the million-body Dragon simulations of GCs (Wang et al., 2016). In this
comparison, the cmc simulation shows a short-period oscillation of the BH core
radius that does not appear in the direct $N$-body model. Such a short-
period oscillation is connected to the formation of BH binaries and the
interaction between these binaries and other BHs in the core. Since the Dragon
models have a low density and only cover $<2$ initial half-mass relaxation
times, it is unclear whether, for more dense GCs, such a difference in core
evolution can lead to a different formation and evolution of BH binaries.
Besides, Rodriguez et al. (2018) showed that when the BH number is large, the
cmc simulations give a different core radius compared to that of the direct
$N$-body simulation. It is unclear whether such a large difference would also
arise in the case of top-heavy IMFs.
There are also studies using direct $N$-body methods, which have no
approximation on the gravitational interaction, unlike the case of the Monte-
Carlo method. Wang (2020) studied the effect of top-heavy IMFs on the survival
of star clusters, but used simplified two-component models without stellar
evolution. Haghi et al. (2020) carried out a group of $N$-body models, but
without a realistic number of stars as in GCs. These models properly
integrated individual stellar orbits and obtained the correct dynamical
evolution of GCs, but they cannot be used to study BBH mergers. Therefore, it is
necessary to carry out accurate star-by-star $N$-body simulations to study the
GCs with top-heavy IMFs.
In this work, we carry out four $N$-body simulations of GCs with a
sufficiently high mass ($5\times 10^{5}M_{\odot}$) and density (a half-mass
radius of 2 pc). IMFs with different slopes are used. In Section 2.1, we describe the
$N$-body tool, petar, used in this study. The initial conditions of models are
presented in Section 2.2. We describe the BBH mergers and the dynamical
evolution of the models in Section 3. Finally, we discuss our results and draw
conclusions in Sections 4 and 5.
## 2 Methods
### 2.1 Petar code
We use the petar code (Wang et al., 2020b) to develop the $N$-body models of
GCs. This is a hybrid code that combines the particle-tree particle-particle
(Oshino, Funato, & Makino, 2011) and slowdown algorithm regularization (Wang,
Nitadori & Makino, 2020a) methods. Such a combination reduces the computing
cost compared to the direct $N$-body method while maintaining sufficient
accuracy to deal with close encounters and few-body interactions. The code is
based on the fdps framework, which can achieve high performance with
multi-process parallel computing (Iwasawa et al., 2016, 2020; Namekata et al.,
2018).
The single and binary stellar evolution packages, sse/bse, are implemented in
petar (Hurley, Pols, & Tout, 2000; Hurley, Tout, & Pols, 2002). We adopt the
updated version of sse/bse from Banerjee et al. (2020b). The simulations use
the semi-empirical stellar wind prescriptions from Belczynski et al. (2010),
the “rapid” supernova model for the remnant formation and material fallback
from Fryer et al. (2012), along with the pulsation pair-instability supernova
(PPSN; Belczynski et al., 2016). In Figure 1, we show the relation between
zero-age main-sequence (ZAMS) masses and final masses of stars from $0.08$ to
$150$ $M_{\odot}$ for $Z=0.001$. With PPSN, massive stars with ZAMS
masses above approximately $100M_{\odot}$ end up as BHs with a final mass of
$40.5M_{\odot}$. This has a significant impact on the mass-ratio
distribution of BBHs, as described in Section 3. Meanwhile, the fallback
mechanism results in a zero natal kick velocity for massive BHs with ZAMS masses
$>30M_{\odot}$. They can be retained in the clusters after supernovae, while
low-mass BHs and most neutron stars gain high kick velocities and
immediately escape from the system.
Figure 1: The initial-final mass relation of stars obtained by applying the
stellar evolution model from sse. The orange points indicate the natal kick
velocities of neutron-star and black-hole remnants after the supernova. Due to
the fallback treatment, massive BHs receive no kick velocity.
### 2.2 Initial conditions
To specifically investigate how the shape of IMFs affects the formation and
evolution of BBHs in GCs, we develop four models by varying the $\alpha_{3}$
in the Kroupa (2001) IMF with a multi-component power-law shape:
$\displaystyle\xi(m)\propto m^{\alpha_{\mathrm{i}}}$ (1)
with $\alpha_{\mathrm{1}}=-1.3$ for $0.08\leq m/M_{\odot}<0.50$, $\alpha_{\mathrm{2}}=-2.3$ for $0.50\leq m/M_{\odot}<1.00$, and a free $\alpha_{\mathrm{3}}$ for $1.00\leq m/M_{\odot}<150.0$.
The value of $\alpha_{3}$ and corresponding model names are listed in Table 1.
Table 1: The names of the $N$-body models with the corresponding $\alpha_{3}$ of the IMFs and the number of stars ($N$).

Model | A1.5 | A1.7 | A2.0 | A2.3
---|---|---|---|---
$\alpha_{3}$ | -1.5 | -1.7 | -2.0 | -2.3
$N$ | 182306 | 312605 | 581582 | 854625
A2.3 has the canonical value of $\alpha_{3}$, while all the others correspond
to different degrees of top-heavy IMFs. The most top-heavy slope is chosen to
be $\alpha_{3}=-1.5$. The minimum and the maximum masses of stars in our models
are $0.08$ and $150$ $M_{\odot}$, respectively. For the metallicity, which
determines the stellar evolution (mass loss) of stars, we adopt a typical
value for GCs, that is, $Z=0.001$.
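As an illustration, the piecewise power-law IMF of Equation 1 can be drawn by inverse-transform sampling. The sketch below is our own minimal Python illustration, not part of petar; the break masses and the two fixed slopes follow Equation 1, while the per-segment normalisations are fixed by requiring $\xi(m)$ to be continuous at the break masses.

```python
import numpy as np

def sample_kroupa_imf(n, alpha3=-2.3, m_min=0.08, m_max=150.0, seed=1):
    """Draw n stellar masses from the piecewise power-law IMF of Eq. (1).

    Segments: alpha1 = -1.3 on [0.08, 0.5), alpha2 = -2.3 on [0.5, 1.0),
    and a free alpha3 on [1.0, 150.0].  Continuity of xi(m) at the break
    masses fixes the relative normalisations of the segments.
    """
    edges = np.array([m_min, 0.5, 1.0, m_max])
    alphas = np.array([-1.3, -2.3, alpha3])

    # Enforce continuity of coeff * m**alpha across the break masses.
    coeff = np.ones(3)
    for i in range(1, 3):
        coeff[i] = coeff[i - 1] * edges[i] ** (alphas[i - 1] - alphas[i])

    # Integral of coeff * m**alpha over each segment (valid for alpha != -1).
    seg = coeff * (edges[1:] ** (alphas + 1)
                   - edges[:-1] ** (alphas + 1)) / (alphas + 1)
    cdf = np.concatenate([[0.0], np.cumsum(seg)]) / seg.sum()

    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 1.0, n)
    k = np.clip(np.searchsorted(cdf, u, side="right") - 1, 0, 2)
    # Invert the cumulative distribution within the selected segment.
    u_loc = (u - cdf[k]) * seg.sum() / coeff[k]
    return (edges[k] ** (alphas[k] + 1)
            + (alphas[k] + 1) * u_loc) ** (1.0 / (alphas[k] + 1))
```

Making the IMF more top-heavy (e.g. `alpha3=-1.5`) raises the mean stellar mass, which is why the models in Table 1 have fewer stars for the same total mass.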
For all models, we adopt the Plummer profile to set up the positions and
velocities of individual stars. We fix the initial total mass to $5\times
10^{5}M_{\odot}$ and the initial half-mass radius, $r_{h,0}$, to 2 pc. Thus,
our models have a mass and density as high as those typically observed in GCs.
In fact, the density of our models is higher than that of the previous largest
Dragon GC models (Wang et al., 2016). Thus, our simulations are still time-
consuming even with petar.
To begin with, we assume that no primordial binaries are present in these
models. This simplifies the discussion, since we can focus purely on the
dynamical formation of BBHs; it also makes the $N$-body simulations easier to
perform. We do not apply a galactic potential either, but we remove
unbound stars once they reach more than 200 pc from the cluster centre.
## 3 Results
We do not intend to create realistic GC models with primordial binaries and a
galactic tidal field, but rather focus on a theoretical study of the BBHs in
GCs. Thus, given the available computing resources, we only evolve our models
up to $3$ Gyr instead of a Hubble time. However, this already covers
approximately $10$ initial half-mass relaxation times of the systems, and
thus, it is sufficient for our analysis.
### 3.1 BBH mergers
In our simulations, we do not model BBH mergers via the effect of general
relativity. Instead, we detect all BBHs that are potential GW mergers in the
post-process using the snapshots of the simulations. We apply the following
two steps to obtain the BBH mergers.
Firstly, we estimate the merger timescale for each binary by using the formula
provided by Peters (1964):
$\displaystyle t_{\mathrm{gw}}=\frac{12}{19}\frac{c_{0}^{4}}{\beta}\int_{0}^{e_{0}}{\frac{e^{29/19}[1+(121/304)e^{2}]^{1181/2299}}{(1-e^{2})^{3/2}}\,\mathrm{d}e}$ (2)

$\displaystyle c_{0}=\frac{a_{0}(1-e_{0}^{2})[1+(121/304)e_{0}^{2}]^{870/2299}}{e_{0}^{12/19}},\qquad\beta=\frac{64}{5}\frac{G^{3}m_{1}m_{2}(m_{1}+m_{2})}{c^{5}}.$
By calculating $t_{\mathrm{gw}}$ of BBHs in the snapshots, with a time
interval of $0.25-1$ Myr, we select the merger candidates which have an
actual (delayed) merging time $t_{\mathrm{merge}}\equiv
t_{\mathrm{gw}}+t<12$ Gyr, where $t$ is the physical time of the cluster when
$t_{\mathrm{gw}}$ is calculated.
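The first selection step can be sketched as follows. This is a minimal Python illustration of Equation 2, not the post-processing code used in the paper; the unit constants (AU, $M_{\odot}$, yr) are our own rough values, and the eccentricity integral is evaluated with a simple trapezoid rule.

```python
import numpy as np

# Rough constants in AU / Msun / yr units (illustrative values):
G = 4.0 * np.pi ** 2        # AU^3 Msun^-1 yr^-2
C_LIGHT = 63241.0           # speed of light in AU/yr

def t_gw(a0, e0, m1, m2, n_steps=20000):
    """Merger timescale of Peters (1964), Eq. (2), returned in Myr.

    a0: initial semi-major axis in AU; e0: initial eccentricity;
    m1, m2: component masses in Msun.
    """
    beta = 64.0 / 5.0 * G ** 3 * m1 * m2 * (m1 + m2) / C_LIGHT ** 5
    if e0 == 0.0:
        return a0 ** 4 / (4.0 * beta) / 1.0e6     # circular-orbit limit
    c0 = (a0 * (1.0 - e0 ** 2)
          * (1.0 + 121.0 / 304.0 * e0 ** 2) ** (870.0 / 2299.0)
          / e0 ** (12.0 / 19.0))
    e = np.linspace(1.0e-8, e0, n_steps)
    f = (e ** (29.0 / 19.0)
         * (1.0 + 121.0 / 304.0 * e ** 2) ** (1181.0 / 2299.0)
         / (1.0 - e ** 2) ** 1.5)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(e))
    return 12.0 / 19.0 * c0 ** 4 / beta * integral / 1.0e6
```

For small $e_0$ the result approaches the circular-orbit limit $a_0^{4}/(4\beta)$, while high eccentricity shortens the merger timescale dramatically.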
However, these candidates may not actually merge, because BBHs with high
eccentricities can be perturbed by their surrounding stars if they are still
in the cluster. The orbits of BBHs can also change dramatically after a strong
few-body interaction. If the eccentricity decreases, $t_{\mathrm{gw}}$ can
increase significantly. Thus, in the second step, we follow the evolution of
each candidate and check whether it escapes from the system or merges before
its $t_{\mathrm{gw}}$ increases. Some BBHs can undergo member exchange
after interactions with other BHs or BBHs (hereafter referred to as exchanged
BBHs). For example, after an interaction between a BBH (BH1, BH2) and a single
BH3, one member (BH2) of the BBH may be replaced by BH3. In such a case, we
first check whether BH1 and BH2 can merge before the exchange. If not, we check
whether the new BBH (BH1, BH3) can merge. If the merger occurs in either case,
we count it as one merger event.
The numbers of candidates, candidates without and with exchanged BBHs, and
confirmed mergers for each model are shown in Table 2. We notice that the
number of candidates does not depend monotonically on the $\alpha_{3}$ of the
IMFs. The more top-heavy the IMF is, the smaller $N_{\mathrm{cand}}$ tends to
be, except for the A1.5 model, which has a larger $N_{\mathrm{cand}}$ than the
A1.7 model. Besides, the number of exchanged BBHs is large in the A2.0 and
A2.3 models. We explain the reason in Section 3.2.
Table 2: The number of BBH merger candidates ($N_{\mathrm{cand}}$), the candidates without ($N_{\mathrm{cand,noex}}$) and with ($N_{\mathrm{cand,ex}}$) exchanged BBHs, mergers inside GCs ($N_{\mathrm{in}}$) and escaped mergers ($N_{\mathrm{out}}$) up to the 3 Gyr evolution of the GCs. The exchanged BBHs are counted multiple times in $N_{\mathrm{cand}}$.

Model | A1.5 | A1.7 | A2.0 | A2.3
---|---|---|---|---
$N_{\mathrm{cand}}$ | 20 | 10 | 30 | 43
$N_{\mathrm{cand,noex}}$ | 15 | 8 | 18 | 19
$N_{\mathrm{cand,ex}}$ | 2 | 1 | 5 | 11
$N_{\mathrm{in}}$ | 0 | 1 | 3 | 3
$N_{\mathrm{out}}$ | 3 | 3 | 3 | 4
Figure 2: The evolution of the semi-major axes of all BBHs in the four models.
The gray scale indicates the total masses of the BBHs. The yellow “x” symbols
represent the merger candidates. The red triangles and red crosses represent
escaped mergers and mergers inside GCs, respectively.
Figure 2 shows the evolution of the semi-major axis ($a$) of all BBHs detected
in the snapshots of our simulations. The candidates, inner and escaped BBH
mergers are shown together. In this figure, we can identify two types of BBHs,
the soft and hard ones separated by $a\approx 10^{3}$ AU. As explained by the
Heggie-Hills law (Heggie, 1975; Hills, 1975), soft binaries are easily
disrupted by close encounters, but hard binaries become tighter. This can be
seen in Figure 2: soft BBHs randomly appear and vanish, while hard binaries
continuously become harder, following a clear track of decreasing $a$.
Typically, only one or two hard BBHs exist at a time, because binary-binary
encounters tend to break the softer binaries. Once its $a$ becomes small
enough, a strong encounter can eject a hard BBH out of the GC, and it
disappears. Then, new hard BBHs are formed, and the same process is repeated.
From the A2.3 model, we can identify a clear trend that BBHs with large masses
prefer to form first. Once they escape, BBHs with lower masses form and escape
one by one. This feature results in a clear difference in the mass ratio
distribution of the BBH components depending on the IMF, as shown in Figure 3.
As more top-heavy IMFs have larger fractions of stars with ZAMS masses above
$100M_{\odot}$, more equal-mass BHs of $40.5M_{\odot}$ form as a result of the
PPSN. These most massive BHs form binaries first. Thus, the BBH mergers in
models with top-heavy IMFs tend to have $q$ closer to unity.
Figure 3: The cumulative distribution of mass ratio $q$ of two components in
BBH candidates (lines) and confirmed (triangles) mergers.
Another major feature shown in Figure 2 is the increase of the minimum
semi-major axis (maximum binding energy), especially in the A1.5 model. As a
result, it becomes more difficult to form tight BBHs in the later evolution of
the GCs. This is reflected in $N_{\mathrm{cand}}$ as a function of time: most
of the candidates form in the first 1000 Myr in the A1.5 model, and the
formation rate decreases significantly after 2000 Myr. In Section 3.3, we
explain why the minimum semi-major axis increases by analyzing the escape
velocities of the GCs. Meanwhile, the formation rate of tight BBHs is higher
when the IMF is more top-light, which also explains the larger
$N_{\mathrm{cand}}$ in those models.
Figure 4: The cumulative number of BBHs candidates (solid curves) and
confirmed ones (triangles) vs. the masses of BBHs.
In Figure 4, we show the cumulative distribution of the masses of BBH merger
candidates and confirmed ones. There are only a few confirmed mergers with
$m_{\mathrm{BBH}}<60M_{\odot}$ in the A2.3 model. In the A1.5 model, however,
most mergers have $m_{\mathrm{BBH}}=81M_{\odot}$, with the mass of each
component being $40.5M_{\odot}$ due to the PPSN. Thus, clusters with a more
top-heavy IMF form more massive mergers.
Here, the analysis includes all BBH mergers. However, the mergers that
occurred at an early time are not observable. In Figure 5, we show the
cumulative distribution of $t_{\mathrm{gw}}$ and $t_{\mathrm{merge}}$ for BBH
merger candidates and confirmed ones for all models. While mergers inside GCs
have a short merger time ($t_{\mathrm{gw}}<1$ Myr and $t_{\mathrm{merge}}<3$
Gyr), most escaped mergers have a relatively long merger time
($t_{\mathrm{gw}}>1$ Gyr and $t_{\mathrm{merge}}=1-10$ Gyr). Therefore, the
escaped mergers from all models with different IMFs can be detected today.
In our models with the age of 3 Gyr, there are only a few BBH merger
candidates with redshift $<1$ (within approximately 4 Gyr of lookback time).
Thus, we cannot provide a statistical analysis of the models directly.
However, if all mergers after 3 Gyr are included, we expect that the main
trend, i.e., top-heavy IMFs result in more massive and more $q\sim 1$ mergers,
becomes stronger. In the later evolution (after 3 Gyr), all models with
different IMFs will have more low-mass BBH mergers. However, the top-heavy
model has a lower escape velocity, and the BBH merger rate is lower than that
of the top-light model. Thus, more low-mass BBH mergers will appear in the
top-light model.
Figure 5: The cumulative number of BBH merger candidates (solid curves),
mergers inside GCs (crosses) and escaped mergers (triangles) vs.
$t_{\mathrm{gw}}$ (upper panel) and $t_{\mathrm{merge}}$ (lower panel).
### 3.2 Effect of stellar evolution on dynamics
The stellar evolution, especially the wind mass loss of massive stars in the
first $100$ Myr, has a significant impact on the later evolution of GCs (e.g.
Trani, Mapelli, & Bressan, 2014). In particular, the mass loss with a top-
heavy IMF is more intense due to the larger fraction of massive stars. This
significantly affects the central density of GCs. To investigate this, we
calculate the evolution of the core radii ($R_{\mathrm{c}}$) defined by
Casertano & Hut (1985) as
$R_{\mathrm{c}}=\sqrt{\frac{\sum_{i}\rho_{i}^{2}r_{i}^{2}}{\sum_{i}\rho_{i}^{2}}},$
(3)
where $\rho_{i}$ is the local density of object $i$ estimated by counting 6
nearest neighbors, and $r_{i}$ is the distance to the center of the system. As
shown in Figure 6, $R_{\mathrm{c}}$ in the first 100 Myr becomes larger with
more top-heavy IMFs. When most massive stars evolve to compact remnants, the
stellar-wind mass loss becomes weak. Then, $R_{\mathrm{c}}$ starts to shrink,
and finally, core collapse occurs. It must be noted here that the core is
dominated by BHs after mass segregation. (The theoretical $R_{\mathrm{c}}$
discussed in this work includes all objects in the system. This is not the
same as the core radius defined in observations, which is measured by fitting
the surface brightness profile: the core collapse actually occurs in the BH
subsystem, while the observed core radius is much more extended.) Figure 6
shows that the A1.5 model contracts the most during the core collapse,
probably due to the larger masses of its BHs and their longer sinking
distance. This causes the formation of the densest core, between $350-450$
Myr, compared to those of the other models. Such a high density might be the
reason why the A1.5 model has more BBH merger candidates than the A1.7 model:
Figure 2 shows that many of the BBH merger candidates appear around 400 Myr in
the A1.5 model, consistent with the feature of core collapse.
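Equation 3 can be evaluated directly from a snapshot. The sketch below is our own minimal illustration, not petar's implementation: the local density is taken proportional to $1/r_{\mathrm{nb}}^{3}$, with $r_{\mathrm{nb}}$ the distance to the 6th nearest neighbour, since constant prefactors cancel in the density-weighted ratio of Equation 3.

```python
import numpy as np

def core_radius(pos, n_nb=6):
    """Density-weighted core radius R_c of Casertano & Hut (1985), Eq. (3).

    pos : (N, 3) array of positions relative to the cluster centre.
    rho_i is taken proportional to 1 / r_nb**3, where r_nb is the
    distance to the n_nb-th nearest neighbour; the proportionality
    constant cancels in Eq. (3).  Brute-force O(N^2) neighbour search,
    adequate for a small sketch.
    """
    d2 = np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)
    # Distance to the n_nb-th nearest neighbour of each particle.
    r_nb = np.sqrt(np.partition(d2, n_nb - 1, axis=1)[:, n_nb - 1])
    rho = 1.0 / r_nb ** 3
    r = np.linalg.norm(pos, axis=1)
    return np.sqrt(np.sum(rho ** 2 * r ** 2) / np.sum(rho ** 2))
```

Because of the $\rho_{i}^{2}$ weighting, the result is dominated by the densest particles, so $R_{\mathrm{c}}$ traces the dense (BH-dominated) core rather than the extended halo.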
Figure 6: The evolution of core radii ($R_{\mathrm{c}}$) defined by Equation
3; averaged masses ($\langle m_{\mathrm{BH,c}}\rangle$) and numbers of BHs
($N_{\mathrm{BH,c}}$) inside $R_{\mathrm{c}}$. BBHs are counted as single
objects.
### 3.3 Expansion due to BH heating
Due to their larger masses compared to those of stars, BHs have a strong
impact on the long-term evolution of star clusters (Breen & Heggie, 2013). In
star clusters containing BHs, massive BHs sink into the cluster centre due to
dynamical friction (mass segregation) and form a dense subsystem.
Thereafter, the core collapse of the BH subsystem occurs and drives the
formation of BBHs, which heats the systems via few-body interactions. As a
result of the energy-balanced evolution, the halo composed of light stars
expands. When more BHs exist, the expansion is faster (see also Giersz et
al., 2019; Wang, 2020). This can be identified in Figure 7, in which the
evolution of the half-mass radii of non-BH objects ($R_{\mathrm{h,noBH}}$) and
BHs ($R_{\mathrm{h,BH}}$) are compared for all models.
In the first 300 Myr, $R_{\mathrm{h,BH}}$ decreases due to the mass
segregation, and later, it increases due to BH heating. Meanwhile,
$R_{\mathrm{h,noBH}}$ always increases. The stronger stellar-wind mass loss in
more top-heavy models results in larger $R_{\mathrm{h,BH}}$ and
$R_{\mathrm{h,noBH}}$ in a short time ($<100$ Myr). This is similar to the
evolution of the core radii shown in Figure 6. As a result of stronger BH
heating during the long-term evolution after 300 Myr, $R_{\mathrm{h,BH}}$ and
$R_{\mathrm{h,noBH}}$ increase faster in the more top-heavy models.
Figure 7: The evolution of half-mass radii of non-BH objects
($R_{\mathrm{h,noBH}}$; dashed curves) and BHs ($R_{\mathrm{h,BH}}$; solid
curves).
According to the Heggie-Hills law, the boundary of the semi-major axis between
soft and hard BBHs depends on the masses of the BHs and the local velocity
dispersion:
$a_{\mathrm{s/h}}=\frac{Gm_{1}m_{2}}{\langle m\rangle\sigma^{2}}.$ (4)
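Equation 4 amounts to a one-line function. A minimal sketch, with assumed units (masses in $M_{\odot}$, $\sigma$ in pc/Myr, result in pc; 1 pc is about $2.06\times10^{5}$ AU):

```python
# Gravitational constant in pc^3 Msun^-1 Myr^-2, so sigma is in pc/Myr.
G = 0.004499

def a_hard_soft(m1, m2, m_mean, sigma):
    """Hard/soft boundary of Eq. (4) (Heggie 1975; Hills 1975).

    m1, m2: binary component masses in Msun; m_mean: mean object mass
    in Msun; sigma: local velocity dispersion in pc/Myr.  Returns pc.
    """
    return G * m1 * m2 / (m_mean * sigma ** 2)
```

The boundary scales as $\sigma^{-2}$: a lower velocity dispersion (as in the expanded top-heavy models) pushes $a_{\mathrm{s/h}}$ to larger values, consistent with the hard-soft boundaries seen in Figure 2.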
We check the density ($\rho_{\mathrm{BH}}$) and the 3D velocity dispersion
($\sigma_{\mathrm{BH}}$) of the BHs within the $10\%$ Lagrangian radius of all
BHs ($R_{\mathrm{lagr,BH,10\%}}$) and within $R_{\mathrm{c}}$ of the BH
subsystem. The results are shown in Figure 8.
We find that $\rho_{\mathrm{BH}}$ inside $R_{\mathrm{lagr,BH,10\%}}$ varies as
the slope of the IMF changes, i.e., models with more top-heavy IMFs have lower
central densities. This is consistent with the behaviour of
$R_{\mathrm{h,BH}}$ (see Figure 6). In contrast, $\rho_{\mathrm{BH}}$ inside
$R_{\mathrm{c}}$ is very similar for all models, different from the behaviour
on a larger distance scale.
We can divide the models into two groups, A1.5/A1.7 and A2.0/A2.3, based on
the values of $\sigma_{\mathrm{BH}}$ within $R_{\mathrm{lagr,BH,10\%}}$ or
$R_{\mathrm{c}}$. There is a gap of $\sim 2$ pc/Myr in $\sigma_{\mathrm{BH}}$
between the two groups. Models with more top-heavy IMFs have smaller
$\sigma_{\mathrm{BH}}$ values, and thus, their $a_{\mathrm{s/h}}$ are larger
as explained by Equation 4. This is consistent with the hard-soft boundaries
shown in Figure 2.
As clusters expand, $\sigma_{\mathrm{BH}}$ continues to decrease for all
models, and $a_{\mathrm{s/h}}$ increases. This is identified in the A1.5 and
A1.7 models in Figure 2. In the A2.0 and A2.3 models, however, we can clearly
see that the masses of BBHs decrease with time. This balances the effect of
decreasing $\sigma_{\mathrm{BH}}$, and thus, the change in $a_{\mathrm{s/h}}$
is not obvious for these two models.
Figure 8: The evolution of the density of BHs ($\rho_{\mathrm{BH}}$) and the
3D velocity dispersion of BHs ($\sigma_{\mathrm{BH}}$) within the $10\%$
Lagrangian radius ($R_{\mathrm{lagr,BH,10\%}}$) and $R_{\mathrm{c}}$.
The key quantity that controls the minimum semi-major axis of binaries is the
central escape velocity of the star cluster. As the semi-major axis of a hard
binary shrinks, the binary gains a larger kinetic energy (a larger kick to its
centre-of-mass velocity) after each strong encounter, and such an encounter
eventually ejects the binary from the centre of the cluster. If the ejection
velocity is below the escape velocity, dynamical friction brings the binary
back to the centre; it can continue to encounter others and become harder,
until the next strong encounter ejects it again. Therefore, the escape
velocity of the star cluster limits the minimum semi-major axis of hard
binaries that can be reached via few-body encounters.
Based on the mechanism described above, we can estimate GW merger timescale
($t_{\mathrm{gw}}$) of an ejected BBH, which should be an indicator for
$t_{\mathrm{gw}}$ of BBHs in a star cluster. According to Equation 2 (Peters,
1964), $t_{\mathrm{gw}}\propto a_{\rm esc}^{4}/m_{1}^{3}$, where $a_{\rm esc}$
is the semi-major axis of an ejected BBH, and we assume $m_{2}=m_{1}$ for the
BBH. Since an ejected BBH has an internal velocity of $\sim v_{\mathrm{esc}}$,
$a_{\rm esc}=Gm_{1}/(2v_{\mathrm{esc}}^{2})$, and $t_{\mathrm{gw}}\propto
m_{1}/v_{\mathrm{esc}}^{8}$. Eventually, an ejected BBH has larger
$t_{\mathrm{gw}}$ when it contains heavier BHs, and when it is ejected from a
star cluster with a smaller escape velocity.
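The scaling $t_{\mathrm{gw}}\propto m_{1}/v_{\mathrm{esc}}^{8}$ can be checked numerically in the circular-orbit Peters limit. A minimal sketch with our own assumed unit constants (AU, $M_{\odot}$, yr; the absolute values do not matter for the scaling):

```python
import numpy as np

G = 4.0 * np.pi ** 2   # AU^3 Msun^-1 yr^-2
C_LIGHT = 63241.0      # speed of light in AU/yr

def t_gw_ejected(m1, v_esc):
    """t_gw (in yr) of an equal-mass BBH ejected with internal velocity ~v_esc.

    Uses the circular-orbit Peters (1964) limit t = a**4 / (4 * beta)
    with a_esc = G * m1 / (2 * v_esc**2) and m2 = m1; v_esc in AU/yr.
    """
    a_esc = G * m1 / (2.0 * v_esc ** 2)
    beta = 64.0 / 5.0 * G ** 3 * m1 * m1 * (2.0 * m1) / C_LIGHT ** 5
    return a_esc ** 4 / (4.0 * beta)
```

Doubling $m_{1}$ doubles $t_{\mathrm{gw}}$, while doubling $v_{\mathrm{esc}}$ reduces it by $2^{8}=256$, reproducing $t_{\mathrm{gw}}\propto m_{1}/v_{\mathrm{esc}}^{8}$.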
In Figure 9, we show the central escape velocity estimated for a Plummer model,
$v_{\mathrm{esc}}=\sqrt{2\times 1.305G\frac{M}{R_{\mathrm{h}}}},$ (5)
where $M$ is the total mass of the system. Due to the larger $R_{\mathrm{h}}$
in models with top-heavy IMFs, their $v_{\mathrm{esc}}$ is smaller. Moreover,
the BHs are heavier in models with more top-heavy IMFs. Therefore, it is more
difficult to form BBH mergers there, although the number of BHs is much
larger. Meanwhile, $v_{\mathrm{esc}}$ decreases as the cluster expands. This
explains why the minimum semi-major axis increases as shown in Figure 2,
especially for the A1.5 model, in which the masses of the BBHs maintain a
similar value up to 3 Gyr. In the case of the A2.3 model, the masses of the
BBHs decrease, while the minimum semi-major axis does not show a strong
increasing trend.
Since the models with top-heavy IMFs have a stronger expansion and maintain
heavier BHs inside the clusters, the efficiency of forming BBH mergers
decreases faster. Therefore, with the same initial total mass and size, the
BBH merger rate is actually lower in the GCs with top-heavy IMFs. If the
cluster with a top-heavy IMF were under a strong tidal field, the decrease of
$v_{\mathrm{esc}}$ would be faster, and therefore, it would be more difficult
for the cluster to generate merging BBHs. However, if the cluster were
initially denser and more massive, it could survive up to a Hubble time and
continue to generate merging BBHs throughout. In such a case, the cluster
with a top-heavy IMF contains a large number of BHs, and therefore, the total
contribution of BBH mergers can be significantly larger than that in a GC with
the same mass and size but a top-light IMF. Such ‘dark clusters’ have been
studied using Monte-Carlo simulations (Weatherford et al., 2021).
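Equation 5 can be sketched as a one-line function; the value of $G$ in pc-$M_{\odot}$-Myr units is our own rough constant, so that velocities come out in pc/Myr (roughly 1 km/s):

```python
import math

G = 0.004499  # gravitational constant in pc^3 Msun^-1 Myr^-2

def v_esc_plummer(M, r_h):
    """Central escape velocity of a Plummer model, Eq. (5).

    M: total mass in Msun; r_h: half-mass radius in pc.  The factor
    1.305 relates r_h to the Plummer scale radius.  Returns pc/Myr.
    """
    return math.sqrt(2.0 * 1.305 * G * M / r_h)
```

For the initial conditions of our models ($M=5\times10^{5}M_{\odot}$, $R_{\mathrm{h}}=2$ pc) this gives a central escape velocity of order $50$ pc/Myr, which then drops as $R_{\mathrm{h}}$ grows during the expansion.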
Figure 9: The evolution of the central escape velocity of the models.
### 3.4 Mass loss
The stellar wind and BH heating drive the mass loss of star clusters in the
early and later phases, respectively. Figure 10 (upper panel) shows the time
evolution of the total masses of BH and non-BH components in the four models.
After 100 Myr, the A1.5 model loses about $60\%$ of the initial mass due to
stellar-wind mass loss, while the A2.3 model loses only $30\%$. Therefore, the
impact of the stellar evolution is more significant in star clusters with more
top-heavy IMFs. Such strong mass loss and the subsequent BH heating drive the
fast expansion of the system, as shown in Figures 6 and 7. This is also
reported in previous studies (e.g. Chernoff & Weinberg, 1990; Banerjee &
Kroupa, 2011;
Chatterjee, Rodriguez, & Rasio, 2017; Giersz et al., 2019). Besides, after the
core collapse of BH subsystems, the few-body interactions between BBHs and
others start to eject BHs from the GCs, and thus, $M_{\mathrm{BH}}$ starts to
decrease after approximately 300 Myr. Since our models do not have a tidal
field but simply remove unbound stars beyond 200 pc, the A1.5 model still
survives and retains approximately $34\%$ of its initial mass at 3 Gyr. Under
realistic conditions, where the galactic tidal field plays a role, clusters
with top-heavy IMFs tend to dissolve much faster than those in our current
models (Wang, 2020).
Figure 10: The evolution of total masses of BH and non-BH components (upper
panel) and mass ratio between these two (lower panel).
Breen & Heggie (2013) and Wang (2020) have found that the mass fraction of BHs
in the system ($M_{\mathrm{BH}}/M$) evolves depending on the initial fraction
and the tidal field. Since BHs are centrally concentrated, they are not
directly affected by the tidal field. Thus, the tidal evaporation of BHs can
be neglected. As mentioned in Section 3.3, strong close encounters between
hard BBHs and intruders can eject BHs from the cluster. This is the major
scenario that causes the mass loss of BHs.
Meanwhile, light stars in the halo are truncated by the tidal field. When BH
heating occurs, this process is accelerated (Breen & Heggie, 2013; Giersz et
al., 2019; Wang, 2020). In the lower panel of Figure 10, we can identify the
two different evolution trends of $M_{\mathrm{BH}}/M$. In the A2.3 model,
$M_{\mathrm{BH}}/M$ decreases with time, and finally, most of the BHs will be
ejected from the clusters. The core collapse of light stars will occur, and a
GC with a dense core will appear in the observation.
In contrast, $M_{\mathrm{BH}}/M$ in the A1.5 and A1.7 models increases with
time. This means that the light stars will eventually evaporate from such
clusters and that dark clusters will form. In a strong tidal field, the mass
loss of light stars is faster, and this process would be accelerated. The A2.0
model is in the transition region.
With a two-component simplified model, Wang (2020) found that the mass loss
rate of BHs, $M_{\mathrm{BH}}(t)/M_{\mathrm{BH}}(0)$, simply depends on the
mass segregation time of heavy components ($t_{\mathrm{ms}}$) in isolated
clusters, but the dependence becomes complex if a tidal field exists. In our
models with stellar evolution and IMFs, even without a tidal field,
$M_{\mathrm{BH}}(t)/M_{\mathrm{BH}}(0)$ does not simply depend on
$t_{\mathrm{ms}}$ as shown in Figure 11. In a two-component model, the
definition of $t_{\mathrm{ms}}$ is as follows:
$t_{\mathrm{ms}}=\frac{m_{1}}{m_{2}}t_{\mathrm{rh,1}},$ (6)
where the relaxation time of the light component has the form (Spitzer & Hart,
1971)
$t_{\mathrm{rh,1}}=0.138\frac{N^{1/2}R_{\mathrm{h}}^{3/2}}{m_{1}^{1/2}G^{1/2}\ln\Lambda}.$
(7)
However, with a mass function, the precise definition of $t_{\mathrm{ms}}$ is
difficult to determine. Here, we use the averaged masses of the non-BH and BH
components as replacements for $m_{1}$ and $m_{2}$ in Equations 6 and 7.
Theoretical studies have shown that when multiple components exist, the
diffusion coefficients need to be properly averaged to obtain the correct
relaxation time (Spitzer & Hart, 1971; Antonini & Gieles, 2019; Wang, 2020).
Thus, using averaged masses is not an accurate way to calculate
$t_{\mathrm{rh,1}}$. In
particular, since the mass functions differ among the four models, we probably
need to introduce a correction factor $\psi$ to $t_{\mathrm{rh,1}}$, similar
to the two-component models of Wang (2020). Moreover, stellar evolution
introduces another complexity. Thus, it is reasonable that
$M_{\mathrm{BH}}(t)/M_{\mathrm{BH}}(0)$ does not simply depend on
$t_{\mathrm{ms}}$. Instead, Figure 11 shows that the mass loss rate of the BHs
is faster when the IMF is more top-heavy.
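Equations 6 and 7 can be sketched as follows. This is our own illustration, with an assumed Coulomb logarithm ($\ln\Lambda=10$) and the same pc-$M_{\odot}$-Myr unit convention as above:

```python
import math

G = 0.004499  # gravitational constant in pc^3 Msun^-1 Myr^-2

def t_rh_light(N, r_h, m1, ln_lambda=10.0):
    """Half-mass relaxation time of the light component, Eq. (7), in Myr.

    N: number of stars; r_h: half-mass radius in pc; m1: mass of one
    light star in Msun.  ln_lambda = 10 is an assumed Coulomb logarithm.
    """
    return 0.138 * math.sqrt(N) * r_h ** 1.5 / (math.sqrt(m1 * G) * ln_lambda)

def t_ms(m1, m2, N, r_h, ln_lambda=10.0):
    """Mass-segregation time of the heavy component, Eq. (6): (m1/m2) t_rh,1."""
    return (m1 / m2) * t_rh_light(N, r_h, m1, ln_lambda)
```

Because $m_{2}\gg m_{1}$ for BHs, $t_{\mathrm{ms}}$ is only a small fraction of the half-mass relaxation time, which is why the BH subsystem forms early in the cluster evolution.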
Figure 11: The evolution of the total mass of BHs, $M_{\mathrm{BH}}(t)$. The
mass is normalized by its value at $100$ Myr. Time is in units of
$t_{\mathrm{ms}}$ (Eq. 6).
### 3.5 Energy-balanced evolution
Breen & Heggie (2013) established a theory to describe the long-term evolution
of star clusters with BH subsystems. The key idea is based on the energy-
balanced evolution of star clusters (Hénon, 1961, 1975). After the BH
subsystem forms in the center of star clusters due to the mass segregation,
BHs drive the binary heating process and provide the energy to support the
whole system. Hénon’s principle suggests that the energy flux from the
centre (the BH subsystem) should balance the one required by the global
system. This can be described by (Eq. 1 in Breen & Heggie, 2013)
$\frac{E}{t_{\mathrm{rh}}}\approx
k\frac{E_{\mathrm{BH}}}{t_{\mathrm{rh,BH}}},$ (8)
where $E_{\mathrm{BH}}$ and $E$ are the total energy of the BH subsystem and
the global system respectively; $t_{\mathrm{rh}}$ and $t_{\mathrm{rh,BH}}$ are
the two-body relaxation times measured at $R_{\mathrm{h}}$ and
$R_{\mathrm{h,BH}}$, respectively.
This relation constrains the behaviour of BHs, i.e., the density of the BH
subsystem. The formation rate of BBHs and the escape rate of BHs are
controlled by the global system (light stars), and not the BH subsystem
itself. By extending the Breen & Heggie (2013) theory to top-heavy IMFs, Wang
(2020) found that when a large fraction of BHs exists, the energy balance is
different from the description of Equation 8. To properly measure the energy
balance between the central BH subsystem and the global system, the correction
factor $\psi$ is required to define the relaxation time more appropriately (as
discussed in Section 3.4). In Wang (2020), $\psi$ is defined as
$\psi=\frac{\sum_{k}n_{\mathrm{k}}m_{\mathrm{k}}^{2}/v_{\mathrm{k}}}{\langle
n\rangle\langle m\rangle^{2}/\langle v\rangle},$ (9)
where $n_{\mathrm{k}}$, $m_{\mathrm{k}}$ and $v_{\mathrm{k}}$ are the number
density, the mass of one object and the mean-square velocity of component
$k$, respectively; $\langle n\rangle$, $\langle m\rangle$ and $\langle
v\rangle$ represent the corresponding average values over all components.
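Read literally with unweighted component averages, Equation 9 can be sketched as below; how the averages are weighted in Wang (2020) may differ, so this is only an illustration:

```python
import numpy as np

def psi_factor(n, m, v):
    """Correction factor psi of Eq. (9), taken literally.

    n, m, v: per-component number densities, object masses, and mean
    square velocities.  <n>, <m>, <v> are plain (unweighted) averages
    over the components, as the text states.  For a single component,
    psi = 1 by construction.
    """
    n, m, v = map(np.asarray, (n, m, v))
    numerator = np.sum(n * m ** 2 / v)
    denominator = n.mean() * m.mean() ** 2 / v.mean()
    return numerator / denominator
```

A single-component system gives $\psi=1$; a multi-component system with massive, slow-moving BHs in the numerator departs from unity, which is what makes the uncorrected relaxation time misleading.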
Figure 1 in Wang (2020) shows that if $\psi$ is not included, $k$ in Equation
8 is not a constant, but depends on the individual and total masses of BHs.
With the correction factor $\psi$, the value of $k$ becomes constant for most
models with $M_{\mathrm{BH}}/M<0.4$. The $N$-body models in Wang (2020) are
low-mass clusters with only two mass components and no stellar evolution. With
more realistic models, we provide a similar analysis by treating BHs and non-
BHs as two components.
Figure 12 shows the evolution of energy flux rate (measured at
$R_{\mathrm{h}}$ and $R_{\mathrm{h,BH}}$) with and without the $\psi$ factor,
and the evolution of $\psi$ of all objects and BHs, respectively. With the
correction of $\psi$ to $t_{\mathrm{rh}}$ (the upper panel), the A2.0 and A2.3
models show the same ratio of energy flux, while the A1.5 and A1.7 models have
higher ratios. Without correction (the middle panel), however, there is no
common ratio among all four models. The lower panel shows the evolution of
$\psi$ measured at $R_{\mathrm{h}}$ and $R_{\mathrm{h,BH}}$. The value of
$\psi[R_{\mathrm{h,BH}}]$ initially increases for all models but it increases
more rapidly for the top-heavy models (the A1.5 and A1.7 models). Then, it
slightly decreases after 1 Gyr for the A2.0 and A2.3 models. The value of
The value of $\psi[R_{\mathrm{h}}]$ also increases in the beginning. For the
A2.3 model, it rises significantly at around 100 Myr and then decreases to a
value similar to $\psi[R_{\mathrm{h,BH}}]$. For the other models, it also
peaks at around 100 Myr but stays below $\psi[R_{\mathrm{h,BH}}]$ later on.
The analysis here ignores the internal $\psi$ factors of the BH and non-BH
components discussed in Section 3.4, but the result is consistent with that
reported by Wang (2020).
Figure 12: The evolution of energy flux rate (measured at $R_{\mathrm{h}}$ and
$R_{\mathrm{h,BH}}$) with (the upper panel) and without (the middle panel) the
correction factor, $\psi$, respectively. $\psi$ measured at $R_{\mathrm{h}}$
and $R_{\mathrm{h,BH}}$ are shown in the bottom panel.
## 4 Discussion
In this work, we did not include general-relativistic effects on BH orbits
during the simulations; instead, we accounted for them in the post-processing
analysis to detect BBH mergers. Thus, BBH mergers occurring in the short
interval between two snapshots are missed in our analysis. Such events can
occur in chaotic interactions of triple or quadruple systems. Kremer et al.
(2019) and Samsing et al. (2020) show that GW capture during resonant
encounters can contribute to BH mergers in GCs; our models also miss such
events. Therefore, the number of BBH mergers in our analysis is a lower limit.
Besides, we cannot detect hierarchical mergers, which can be sources of the
massive BHs detected by LIGO/VIRGO (Abbott et al., 2019, 2020). For the same
reason, our simulations do not produce intermediate-mass black holes (IMBHs),
which are discussed in Portegies Zwart et al. (2004), Giersz et al. (2015) and
Rizzuto et al. (2020), nor the possible tidal-encounter-driven BBH mergers
(Fernández & Kobayashi, 2019).
Without primordial binaries, these models probably underestimate the BBH
merger rates, since primordial binaries can lead to stellar-evolution-driven
formation of BBHs. The exchange of components via dynamical encounters between
primordial binaries and BHs can also generate BBHs. This channel is also
missed in our current simulations.
If a tidal field were included, the survival timescales of the A1.5 and A1.7
models could be much shorter, which would also affect the BBH merger rates. In
future work, we will develop new models that improve on all of these aspects.
Since we include PPSN in our simulations, the models with top-heavy IMFs have
a large fraction of equal-mass BBHs ($40.5M_{\odot}$). This could change if a
metallicity different from $Z=0.001$ or different stellar-evolution models for
massive stars were adopted. Thus, our mass-ratio distribution of BBHs cannot
represent all conditions. However, the trend that top-heavy IMFs tend to
produce BBHs with higher mass ratios should be general even if the
stellar-evolution models were changed.
Since dynamically driven BBH mergers have large initial eccentricities
compared to mergers via binary stellar evolution, the upcoming space-borne GW
detectors (LISA and TianQin) may detect high-eccentricity BBH mergers, which
can help distinguish their origins (e.g. Kremer et al., 2019; Liu et al.,
2020). The top-heavy IMFs result in more massive mergers of about
$80M_{\odot}$, which would be easier to detect with these GW detectors.
## 5 Conclusions
In this work, we carry out four star-by-star $N$-body simulations of GCs with
different IMFs, and initially, with $M=5\times 10^{5}M_{\odot}$ and
$R_{\mathrm{h}}=2$ pc. We find that the formation rate of BBH mergers depends
on the stellar evolution and dynamical process (core collapse of BH subsystems
and BH heating) in a complicated way. There is no monotonic correlation
between the slope of the IMF ($\alpha_{3}$) and the number of (potential) BBH
mergers (Figure 2, Table 2). The stronger stellar-wind mass loss in the first
100 Myr leads to a faster expansion of GCs with more top-heavy IMFs. Because
the escape velocity is lower in more top-heavy models, the BBH merger rate is
lower even though the number of BHs is much higher. However, the A1.5 model,
which shows a deeper core collapse of the BH subsystem after the expansion,
can produce a burst of BBH merger candidates at the moment of core collapse
(Figures 2 and 6).
During the long-term evolution, it is difficult to form BBH mergers in GCs
with more top-heavy IMFs because they expand faster. This trend can be
identified from the evolution of the minimum semi-major axis of hard BBHs
shown in Figure 2. Therefore, with the same initial mass and size, GCs with
more top-heavy IMFs less efficiently produce BBH mergers within the same time
interval of evolution.
However, GCs with top-heavy IMFs may maintain a high enough escape velocity
for long enough to retain BHs inside the clusters. As a result, the total
number of BBH mergers can be large in the case of top-heavy IMFs (Weatherford
et al., 2021). In other words, although their efficiency is low, high-density
GCs with top-heavy IMFs can continue producing BBH mergers over a longer time.
In GCs with top-light IMFs, however, the total number of BBH mergers is
limited by the total number of BHs, even though they are more efficient.
The comparison with the two-component models from Wang (2020) suggests that
the mass loss rate of BHs does not simply depend on the mass segregation time
($t_{\mathrm{ms}}$), probably because it is difficult to define an accurate
$t_{\mathrm{ms}}$ when a mass spectrum exists. However, the general trend of
energy balance (energy flux rate) is consistent with the result of Wang
(2020). We also identify two evolutionary trends of GCs: BHs escape faster in
GCs with the canonical Kroupa (2001) IMF ($\alpha_{3}=-2.3$), while light
stars are lost faster in the case of top-heavy IMFs (the A1.5 and A1.7
models). The former can eventually become dense GCs like those observed in the
Milky Way, while the latter become dark clusters with few or no stars. Since
observations can only detect luminous GCs, the contributions of BBH mergers
from dark clusters are ignored, but these contributions can be important, as
also discussed by Weatherford et al. (2021).
## Acknowledgments
L.W. thanks the financial support from JSPS International Research Fellow
(School of Science, The University of Tokyo). M.F. was supported by The
University of Tokyo Excellent Young Researcher Program. This work was
supported by JSPS KAKENHI Grant Numbers 17H06360 and 19H01933 and MEXT as
“Program for Promoting Researches on the Supercomputer Fugaku” (towards a
unified view of the universe: from large scale structures to planets,
revealing the formation history of the universe with large-scale simulations
and astronomical big data). Numerical computations were carried out on Cray
XC50 at Center for Computational Astrophysics, National Astronomical
Observatory of Japan.
## Data availability
The simulation data underlying this article are stored on Cray XC50. The data
were generated by the software petar, which is available in GitHub, at
https://github.com/lwang-astro/PeTar. The simulation data will be shared via
private communication with a reasonable request.
## References
* Abbott et al. (2019) Abbott B. P., et al., 2019, Physical Review X, 9, 031040
* Abbott et al. (2020) Abbott B. P., et al., 2020, arXiv, arXiv:2010.14527
* Antonini & Gieles (2019) Antonini F., Gieles M., 2019, arXiv, arXiv:1906.11855
* Antonini et al. (2014) Antonini F., Murray N., Mikkola S., 2014, ApJ, 781, 45
* Askar et al. (2017) Askar A., Szkudlarek M., Gondek-Rosińska D., Giersz M., Bulik T., 2017, MNRAS, 464, L36
* Bae et al. (2014) Bae Y.-B., Kim C., Lee H. M., 2014, MNRAS, 440, 2714
* Banerjee & Kroupa (2011) Banerjee S., Kroupa P., 2011, ApJL, 741, L12
* Banerjee (2020a) Banerjee S., 2020, arXiv, arXiv:2011.07000
* Banerjee et al. (2020b) Banerjee S., Belczynski K., Fryer C. L., Berczik P., Hurley J. R., Spurzem R., Wang L., 2020, A&A, 639, A41. doi:10.1051/0004-6361/201935332
* Bastian & Lardo (2018) Bastian N., Lardo C., 2018, ARA&A, 56, 83
* Baumgardt & Sollima (2017) Baumgardt H., Sollima S., 2017, MNRAS, 472, 744
* Belczynski et al. (2010) Belczynski K., Bulik T., Fryer C. L., Ruiter A., Valsecchi F., Vink J. S., Hurley J. R., 2010, ApJ, 714, 1217. doi:10.1088/0004-637X/714/2/1217
* Belczynski et al. (2016) Belczynski K., et al., 2016, A&A, 594, A97
* Belczynski et al. (2020) Belczynski K., et al., 2020, A&A, 636, A104
* Breen & Heggie (2013) Breen P. G., Heggie D. C., 2013, MNRAS, 432, 2779
* Casertano & Hut (1985) Casertano S., Hut P., 1985, ApJ, 298, 80
* Chatterjee, Rodriguez, & Rasio (2017) Chatterjee S., Rodriguez C. L., Rasio F. A., 2017, ApJ, 834, 68
* Chernoff & Weinberg (1990) Chernoff D. F., Weinberg M. D., 1990, ApJ, 351, 121. doi:10.1086/168451
* Di Carlo et al. (2019) Di Carlo U. N., Giacobbo N., Mapelli, M., Pasquato, M., Spera, M., Wang, L., Haardt, F., 2019, MNRAS, 487, 2947
* Downing et al. (2010) Downing J. M. B., Benacquista M. J., Giersz M., Spurzem R., 2010, MNRAS, 477, 1946
* Fernández & Kobayashi (2019) Fernández J. J., Kobayashi S., 2019, MNRAS, 487, 1200. doi:10.1093/mnras/stz1353
* Fryer et al. (2012) Fryer C. L., Belczynski K., Wiktorowicz G., Dominik M., Kalogera V., Holz D. E., 2012, ApJ, 749, 91. doi:10.1088/0004-637X/749/1/91
* Fujii et al. (2017) Fujii M. S., Tanikawa A., Makino J., 2017, PASJ, 69, 94
* Giacobbo & Mapelli (2018) Giacobbo N., Mapelli M., 2018, MNRAS, 480, 2011
* Giersz et al. (2019) Giersz M., Askar A., Wang L., Hypki A., Leveque A., Spurzem R., 2019, MNRAS, 487, 2412
* Giersz et al. (2015) Giersz M., Leigh N., Hypki A., Lützgendorf N., Askar A., 2015, MNRAS, 454, 3150
* Haghi et al. (2017) Haghi H., Khalaj P., Hasani Zonoozi A., Kroupa P., 2017, ApJ, 839, 60. doi:10.3847/1538-4357/aa6719
* Haghi et al. (2020) Haghi H., Safaei G., Zonoozi A. H., Kroupa P., 2020, ApJ, 904, 43. doi:10.3847/1538-4357/abbfb0
* Hénon (1961) Hénon M., 1961, AnAp, 24, 369
* Hénon (1975) Hénon M., 1975, IAUS, 133, IAUS…69
* Heggie (1975) Heggie D. C., 1975, MNRAS, 173, 729
* Hills (1975) Hills J. G., 1975, AJ, 80, 809
* Hurley, Pols, & Tout (2000) Hurley J. R., Pols O. R., Tout C. A., 2000, MNRAS, 315, 543
* Hurley, Tout, & Pols (2002) Hurley J. R., Tout C. A., Pols O. R., 2002, MNRAS, 329, 897
* Hong et al. (2020) Hong J., Askar A., Giersz M., Hypki A., Yoon S.-J., 2020, MNRAS, 498, 4287
* Hosek et al. (2019) Hosek M. W., et al., 2019, ApJ, 870, 44
* Iwasawa et al. (2016) Iwasawa M., Tanikawa A., Hosono N., Nitadori K., Muranushi T., Makino J., 2016, PASJ, 68, 54
* Iwasawa et al. (2020) Iwasawa M., Namekata D., Nitadori K., Nomura K., Wang L., Tsubouchi M., Makino J., 2020, PASJ, 72, 13
* Joshi, Rasio, & Portegies Zwart (2000) Joshi K. J., Rasio F. A., Portegies Zwart S., 2000, ApJ, 540, 969
* Kinugawa et al. (2014) Kinugawa T., Inayoshi K., Hotokezaka K., Nakauchi D., Nakamura T., 2014, MNRAS, 442, 2963. doi:10.1093/mnras/stu1022
* Kroupa (2001) Kroupa P., 2001, MNRAS, 322, 231
* Kremer et al. (2019) Kremer K., Rodriguez C. L., Amaro-Seoane P., Breivik K., Chatterjee S., Katz M. L., Larson S. L., et al., 2019, PhRvD, 99, 063003. doi:10.1103/PhysRevD.99.063003
* Kumamoto et al. (2019) Kumamoto J., Fujii M. S., Tanikawa A., 2019, MNRAS, 486, 3942
* Liu et al. (2020) Liu S., Hu Y.-M., Zhang J.-d., Mei J., 2020, PhRvD, 101, 103027. doi:10.1103/PhysRevD.101.103027
* Lu et al. (2013) Lu J. R., Do T., Ghez A. M., Morris M. R., Yelda S., Matthews K., 2013, ApJ, 764, 155
* Marks et al. (2012) Marks M., Kroupa P., Dabringhausen J., Pawlowski M. S., 2012, MNRAS, 422, 2246
* Mandel & de Mink (2016) Mandel, I., de Mink, S. E., 2016, MNRAS, 458, 2634
* Marchant et al. (2016) Marchant P., Langer, N., Podsiadlowski, P., Tauris, T. M., Moriya, T. J., 2016, A&A, 588, A50
* Milone et al. (2017) Milone A. P., et al., 2017, MNRAS, 464, 3636
* Namekata et al. (2018) Namekata D., Iwasawa M., Nitadori K., Tanikawa A., Muranushi T., Wang L., Hosono N., et al., 2018, PASJ, 70, 70. doi:10.1093/pasj/psy062
* O’Leary et al. (2009) O’Leary R. M., Kocsis, B., Loeb, A., 2009, MNRAS, 395, 2127
* Oshino, Funato, & Makino (2011) Oshino S., Funato Y., Makino J., 2011, PASJ, 63, 881
* Park et al. (2017) Park D., Kim, C., Lee, H. M., Bae Y.-B., Belczynski K., 2017, MNRAS, 469, 4665
* Peters (1964) Peters P. C., 1964, PhRv, 136, 1224
* Rizzuto et al. (2020) Rizzuto F. P., Naab T., Spurzem R., Giersz M., Ostriker J. P., Stone N. C., Wang L., et al., 2020, MNRAS.tmp. doi:10.1093/mnras/staa3634
* Portegies Zwart & McMillan (2000) Portegies Zwart S. F., McMillan S. L. W., 2000, ApJL, 528, L17
* Portegies Zwart et al. (2004) Portegies Zwart S. F., Baumgardt H., Hut P., Makino J., McMillan S. L. W., 2004, Natur, 428, 724. doi:10.1038/nature02448
* Rodriguez et al. (2016a) Rodriguez C. L., Chatterjee S., Rasio F. A., 2016, Physical Review D, 93, 084029
* Rodriguez et al. (2016b) Rodriguez C. L., Haster C.-J., Chatterjee S., Kalogera V., Rasio F. A., 2016, ApJL, 824, L8
* Rodriguez et al. (2016) Rodriguez C. L., Morscher M., Wang L., Chatterjee S., Rasio F. A., Spurzem R., 2016, MNRAS, 463, 2109
* Rodriguez et al. (2018) Rodriguez C. L., Pattabiraman B., Chatterjee S., Choudhary A., Liao W.-k., Morscher M., Rasio F. A., 2018, ComAC, 5, 5. doi:10.1186/s40668-018-0027-3
* Samsing et al. (2018) Samsing J., Askar A., Giersz M., 2018, ApJ, 855, 124
* Samsing et al. (2020) Samsing J., D’Orazio D. J., Kremer K., Rodriguez C. L., Askar A., 2020, PhRvD, 101, 123010. doi:10.1103/PhysRevD.101.123010
* Schneider et al. (2018) Schneider F. R. N., et al., 2018, Sci, 359, 69
* Spitzer & Hart (1971) Spitzer L., Hart M. H., 1971, ApJ, 164, 399
* Tanikawa (2013) Tanikawa, A., 2013, MNRAS, 435, 1358
* Tanikawa et al. (2020) Tanikawa, A., Susa, H., Yoshida, T., Trani, A. A., Kinugawa, T., 2020, arXiv, arXiv:2008.01890
* Trani, Mapelli, & Bressan (2014) Trani A. A., Mapelli M., Bressan A., 2014, MNRAS, 445, 1967. doi:10.1093/mnras/stu1898
* Wang et al. (2015) Wang L., Spurzem R., Aarseth S., Nitadori K., Berczik P., Kouwenhoven M. B. N., Naab T., 2015, MNRAS, 450, 4070
* Wang et al. (2016) Wang L., et al., 2016, MNRAS, 458, 1450
* Wang et al. (2020) Wang L., Kroupa P., Takahashi K., Jerabkova T., 2020, MNRAS, 491, 440
* Wang (2020) Wang L., 2020, MNRAS, 491, 2413
* Wang, Nitadori & Makino (2020a) Wang L., Nitadori K., Makino J., 2020, MNRAS, 493, 3398
* Wang et al. (2020b) Wang L., Iwasawa M., Nitadori K., Makino J., 2020, MNRAS, 497, 536
* Weatherford et al. (2021) Weatherford N. C., Fragione G., Kremer K., Chatterjee S., Ye C. S., Rodriguez C. L., Rasio F. A., 2021, arXiv, arXiv:2101.02217
* Zhang et al. (2018) Zhang Z.-Y., Romano D., Ivison R. J., Papadopoulos P. P., Matteucci F., 2018, Natur, 558, 260
* Ziosi et al. (2014) Ziosi B. M., Mapelli M., Branchesi M., Tormen G., 2014, MNRAS, 441, 3703
* Zonoozi, Haghi, & Kroupa (2016) Zonoozi A. H., Haghi H., Kroupa P., 2016, ApJ, 826, 89. doi:10.3847/0004-637X/826/1/89
# The art of coarse Stokes: Richardson extrapolation improves the accuracy and
efficiency of the method of regularized stokeslets
M. T. Gallagher1 and D. J. Smith2†
(1Centre for Systems Modelling and Quantitative Biomedicine, University of
Birmingham
2School of Mathematics, University of Birmingham
$\dagger$<EMAIL_ADDRESS>
###### Abstract
The method of regularised stokeslets is widely used in microscale biological
fluid dynamics due to its ease of implementation, natural treatment of complex
moving geometries, and removal of the need to integrate singular functions. The
standard implementation of the method is subject to high computational cost
due to the coupling of the linear system size to the numerical resolution
required to resolve the rapidly-varying regularised stokeslet kernel. Here we
show how Richardson extrapolation with coarse values of the regularisation
parameter is ideally-suited to reduce the quadrature error, hence dramatically
reducing the storage and solution costs without loss of accuracy. Numerical
experiments on the resistance and mobility problems in Stokes flow support the
analysis, confirming several orders of magnitude improvement in accuracy
and/or efficiency.
## 1 Introduction: the method of regularised stokeslets
Flow problems associated with flagellar propulsion of cells, cilia-driven
fluid transport, and synthetic microswimmers, are characterised by the
inertialess regime of approximately zero Reynolds number flow, described
mathematically – in Newtonian flow – by the Stokes flow equations,
$-\bm{\nabla}p+\mu\nabla^{2}\bm{u}=\bm{0},\quad\bm{\nabla}\cdot\bm{u}=0.$ (1)
Typically these conditions are associated with no-flux, no-penetration
conditions on complex-shaped moving boundaries modelling cell surfaces and
motile appendages. For a detailed introduction to the subject, see the recent
text [1]. A range of mathematical and computational techniques are available
to approach this problem; a computational method that has seen significant
uptake and development over the last two decades is the method of regularized
stokeslets, first described by Cortez [2] and subsequently elaborated for
three-dimensional flow [3, 4].
This technique can be viewed as a modification of the method of fundamental
solutions and/or the boundary integral method for Stokes flow [5], the basis
for which is the _stokeslet_ [6] or _Oseen tensor_ [7]:
$\displaystyle S_{jk}(\bm{x},\bm{y})$
$\displaystyle=\frac{\delta_{jk}}{|\bm{x}-\bm{y}|}+\frac{(x_{j}-y_{j})(x_{k}-y_{k})}{|\bm{x}-\bm{y}|^{3}},$
(2) $\displaystyle P_{k}(\bm{x},\bm{y})$
$\displaystyle=\frac{x_{k}-y_{k}}{|\bm{x}-\bm{y}|^{3}}.$ (3)
The pair of tensors $S_{jk},P_{k}$ provide the solutions
$\bm{u}=(8\pi\mu)^{-1}(S_{1k},S_{2k},S_{3k})$ and $p=(4\pi)^{-1}P_{k}$ to the
singularly-forced Stokes flow equations,
$\displaystyle-\bm{\nabla}p+\mu\nabla^{2}\bm{u}+\delta(\bm{x}-\bm{y})\hat{\bm{e}}_{k}$
$\displaystyle=\bm{0},$ (4) $\displaystyle\bm{\nabla}\cdot\bm{u}$
$\displaystyle=0,$ (5)
where $\delta(\bm{x})$ is the three-dimensional Dirac delta distribution and
$\hat{\bm{e}}_{k}$ is a unit basis vector pointing in the $k$-direction.
Equations (2)-(3) are singular when the field point $\bm{x}$ and source point
$\bm{y}$ coincide. To facilitate numerical computation, the method of
regularized stokeslets instead considers the Stokes flow equation with
spatially-smoothed point force,
$\displaystyle-\bm{\nabla}p+\mu\nabla^{2}\bm{u}+\phi_{\epsilon}(\bm{x}-\bm{y})\hat{\bm{e}}_{k}$
$\displaystyle=\bm{0},$ (6) $\displaystyle\bm{\nabla}\cdot\bm{u}$
$\displaystyle=0.$ (7)
where $\phi_{\epsilon}(\bm{x})$ is a family of “blob” functions approximating
$\delta(\bm{x})$ as $\epsilon\rightarrow 0$.
Several different choices for $\phi_{\epsilon}$ and associated regularised
stokeslets $S_{jk}^{\epsilon}$ have been studied; the most extensively-used
was presented in the original 3D formulation of Cortez and co-authors [3],
$\displaystyle\phi_{\epsilon}(\bm{x})$
$\displaystyle=\frac{15\epsilon^{4}}{8\pi(|\bm{x}|^{2}+\epsilon^{2})^{7/2}},$
(8) $\displaystyle P_{k}^{\epsilon}(\bm{x},\bm{y})$
$\displaystyle=\frac{x_{k}}{(|\bm{x}|^{2}+\epsilon^{2})^{5/2}}(2|\bm{x}|^{2}+5\epsilon^{2}),$
(9) $\displaystyle S_{jk}^{\epsilon}(\bm{x},\bm{y})$
$\displaystyle=\frac{\delta_{jk}(|\bm{x}|^{2}+2\epsilon^{2})+x_{j}x_{k}}{(|\bm{x}|^{2}+\epsilon^{2})^{3/2}}.$
(10)
Developments focussing on the use of alternative blob functions to improve
convergence include ref. [8] (near-field) and, more recently, ref. [9] (far-
field).
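As a quick numerical sanity check (our own illustration, not code from the original papers), the blob-regularised kernel of equation (10) can be compared against the singular stokeslet of equation (2): far from the source the two agree to $O(\epsilon^{2})$, while at $\bm{x}=\bm{y}$ the regularised version remains finite.

```python
import numpy as np

def stokeslet(x, y):
    """Singular Oseen tensor S_jk of Eq. (2)."""
    r = np.asarray(x, float) - np.asarray(y, float)
    d = np.linalg.norm(r)
    return np.eye(3) / d + np.outer(r, r) / d**3

def reg_stokeslet(x, y, eps):
    """Regularised stokeslet S^eps_jk of Eq. (10), written in terms of
    the displacement r = x - y (the published formula places the source
    at the origin)."""
    r = np.asarray(x, float) - np.asarray(y, float)
    r2 = r @ r
    return (np.eye(3) * (r2 + 2 * eps**2) + np.outer(r, r)) / (r2 + eps**2) ** 1.5
```

At coincident points the tensor evaluates to $(2/\epsilon)\,\delta_{jk}$, the finite diagonal value quoted in Section 1 as being on the order of $1/\epsilon$.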
The pressure $P_{k}^{\epsilon}(\bm{x},\bm{y})\sim P_{k}(\bm{x},\bm{y})$ and
velocity $S_{jk}^{\epsilon}(\bm{x},\bm{y})\sim S_{jk}(\bm{x},\bm{y})$ as
$\epsilon\rightarrow 0$; moreover the corresponding single layer boundary
integral equation is
$u_{j}(\bm{x})=-\frac{1}{8\pi\mu}\iint_{B}S_{jk}^{\epsilon}(\bm{x},\bm{y})f_{k}(\bm{y})dS_{\bm{y}}+O(\epsilon^{p}),$
(11)
where $p=1$ for $\bm{x}$ on or near $B$ and $p=2$ otherwise [3]. In equation
(11) and below, summation over repeated indices in $j=1,2,3$ or $k=1,2,3$ is
implied. The reduction to the single-layer potential is discussed by e.g. [5,
3, 10]; in brief this equation can describe flow due to motion of a rigid
body, or with suitable adjustment to $f_{k}$, the flow exterior to a body
which does not change volume. A feature common to both standard and
regularised stokeslet versions of the boundary integral equation is non-
uniqueness of the solution $f_{k}$. This non-uniqueness occurs due to
incompressibility of the stokeslet, i.e. provided the interior of $B$
maintains its volume, then $\iint_{B}S_{jk}n_{k}dS_{\bm{y}}=0$ so that if
$f_{k}$ is a solution of equation (11) then so is $f_{k}+an_{k}$ for any
constant $a$. From the perspective of the original partial differential
equation system, the non-uniqueness follows from the fact that the pressure
part of the solution to equations (1) with velocity-only boundary conditions
is determined only up to an additive constant. This issue is not dynamically
important, and moreover the discretised approximations to the system described
below result in invertible matrices.
Boundary integral methods have the major advantage of removing the need for a
volumetric mesh, which both reduces computational cost, and moreover avoids
the need for complex meshing and mesh movement. The key strength of the method
of regularised stokeslets is in enabling the boundary integral method to be
implemented in a particularly simple way: by replacing the integral by a
numerical quadrature rule $\\{\bm{x}[n],w[n],dS(\bm{x}[n])\\}$ (abscissae,
weight and surface metric), equation (11) may be approximated by,
$u_{j}(\bm{x}[m])\approx\frac{1}{8\pi\mu}\sum_{n=1}^{N}S_{jk}^{\epsilon}(\bm{x}[m],\bm{x}[n])f_{k}(\bm{x}[n])w[n]dS(\bm{x}[n]).$
(12)
Following standard terminology in numerical methods for integral equations, we will
refer to this as the _Nyström_ discretisation [11]. By allowing $m=1,\ldots,N$
and $j=1,2,3$, a dense system of $3N$ linear equations in $3N$ unknowns
$F_{k}[n]:=f_{k}(\bm{x}[n])w[n]dS(\bm{x}[n])$ is formed. The diagonal entries
when $j=k$ and $m=n$ are finite but numerically on the order of $1/\epsilon$,
leading (by the Gershgorin circle theorem) to a well-conditioned matrix
system.
The approach outlined above can be used to solve the _resistance problem_ in
Stokes flow, which involves prescribing a rigid body motion and calculating
the force distribution, and hence total force and moment on the body. Once the
force and moment associated with each of the six rigid body modes (unit
velocity translation in the $x_{j}$ direction, unit angular velocity rotation
about $x_{j}$-axis, for $j=1,2,3$) are calculated, the _grand resistance
matrix_ $A$ can be formed [5], which by linearity of the Stokes flow equations
relates the force $\bm{F}$ and moment $\bm{M}$ to the velocity $\bm{U}$ and
angular velocity $\bm{\Omega}$ for any rigid body motion;
$\begin{pmatrix}\bm{F}\\\
\bm{M}\end{pmatrix}=\underbrace{\begin{pmatrix}A_{FU}&A_{F\Omega}\\\
A_{MU}&A_{M\Omega}\end{pmatrix}}_{A}\begin{pmatrix}\bm{U}\\\
\bm{\Omega}\end{pmatrix}.$ (13)
For example, for a sphere of radius $a$ centred at the origin, the matrix
blocks are $A_{FU}=6\pi\mu aI$, $A_{F\Omega}=0=A_{MU}$ and
$A_{M\Omega}=8\pi\mu a^{3}I$ where $I$ is the $3\times 3$ identity matrix.
A closely-related problem is the two-step calculation of the flow field due to
a prescribed boundary motion; starting with prescribed surface velocities
$u_{j}(\bm{x}[m])$, first, the discrete force distribution $F_{k}[n]$ is found
by inversion of the Nyström matrix system; the velocity field at any point in
the fluid $\tilde{\bm{x}}$ can then be found through the summation,
$u_{j}(\tilde{\bm{x}})=\frac{1}{8\pi\mu}\sum_{n=1}^{N}S_{jk}^{\epsilon}(\tilde{\bm{x}},\bm{x}[n])F_{k}[n].$
(14)
The _mobility problem_ is formulated by prescribing the total force and moment
on the body (yielding $6$ scalar equations) and augmenting the system with
unknown velocity $\bm{U}$ and angular velocity $\bm{\Omega}$, which adds $6$
scalar unknowns, so that a $(3N+6)\times(3N+6)$ system is formed. At a given
time, these unknowns can be related to the evolution of the body trajectories
(in terms of position $\bm{x}_{0}$ and two basis vectors $\bm{b}^{(1)}$ and
$\bm{b}^{(2)}$), through a system of nine ordinary differential equations
$\dot{\bm{x}}_{0}=\bm{U}\left(\bm{x}_{0},\bm{b}^{(1)},\bm{b}^{(2)},t\right),\quad\dot{\bm{b}}^{(j)}=\bm{\Omega}\left(\bm{x}_{0},\bm{b}^{(1)},\bm{b}^{(2)},t\right)\times\bm{b}^{(j)},\quad
j=1,2,$ (15)
which can be solved using available packages such as MATLAB’s ode45.
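The nine-equation system (15) is easy to integrate once $\bm{U}$ and $\bm{\Omega}$ are available. The sketch below is our own: it uses a hand-rolled classical RK4 step as a stand-in for ode45, and feeds in a hypothetical prescribed motion (constant drift plus steady spin) in place of the velocities that, in a real mobility problem, would come from solving the $(3N+6)\times(3N+6)$ system at each step.

```python
import numpy as np

def rhs(t, y, velocity, angular_velocity):
    """Right-hand side of Eq. (15); y packs [x0, b1, b2]."""
    x0, b1, b2 = y[:3], y[3:6], y[6:9]
    U = velocity(x0, b1, b2, t)
    Om = angular_velocity(x0, b1, b2, t)
    return np.concatenate([U, np.cross(Om, b1), np.cross(Om, b2)])

def rk4(f, y, t0, t1, steps):
    """Classical fourth-order Runge-Kutta integrator (stand-in for ode45)."""
    dt, t = (t1 - t0) / steps, t0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return y

# Hypothetical test motion: unit drift in x plus exactly one revolution
# about the z-axis over t in [0, 1].
U0 = np.array([1.0, 0.0, 0.0])
Om0 = np.array([0.0, 0.0, 2 * np.pi])
y0 = np.array([0, 0, 0, 1, 0, 0, 0, 1, 0], dtype=float)
yf = rk4(lambda t, y: rhs(t, y, lambda *a: U0, lambda *a: Om0),
         y0, 0.0, 1.0, 1000)
```

After one full revolution the body frame $(\bm{b}^{(1)},\bm{b}^{(2)})$ should return to its initial orientation and the centre should have drifted to $(1,0,0)$, which makes a convenient correctness check for any integrator used here.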
Finally the _swimming problem_ further prescribes the motion of cilia or
flagella with respect to a body frame (typically a frame in which the cell
body is stationary), and often assumes zero total force and moment (neglecting
gravity and other forces such as charge), again resulting in a
$(3N+6)\times(3N+6)$ system. The key numerical features and challenges of the
method of regularised stokeslets are exhibited by the resistance and mobility
problems, which will therefore be our primary focus.
## 2 Convergence properties of the Nyström discretisation
Equation (12) is subject to the $O(\epsilon)$ regularisation error in the
boundary integral equation, and the discretisation error in the approximation
of the integral. The integrand consists of a product: the slowly-varying
traction $f_{k}(\bm{y})$ and the stokeslet kernel
$S_{jk}^{\epsilon}(\bm{x}[m],\bm{y})$ which is rapidly-varying when
$\bm{y}\approx\bm{x}[m]$. The error associated with discretisation of the
traction is at worst $O(h)$, where $h$ is the characteristic spacing between
points. The dominant error in the stokeslet kernel can be shown to be
$O(\epsilon^{-1}h^{2}),$ (16)
[see [12], _contained case_ , equation (2.7)].
Reducing the $O(\epsilon)$ regularisation error by reducing $\epsilon$
therefore increases the $O(\epsilon^{-1}h^{2})$ stokeslet quadrature error,
necessitating refinement of the discretisation length $h$. To reduce
$\epsilon$ by a factor of $R$ requires indicatively reducing $h$ by a factor
of $\sqrt{R}$, hence increasing the number of surface points and therefore
degrees of freedom $N$ by a factor of $R$. The cost of assembling the dense
linear system then increases by a factor of $R^{2}$, and the cost of a direct
linear solver by a factor of $R^{3}$. This calculation shows that, for
example, improving from a 10% relative error to a 1% relative error may
indicatively incur a cost increase of $1000$ times. There are several
approaches available already to address this issue, which involve a range of
computational complexities: the fast multipole method [13], boundary element
regularised stokeslet method [14], and the nearest-neighbour discretisation
[15] for example. In the next section we will describe and analyse a very
simple technique which alone, or potentially in combination with the above,
improves the order of the regularisation error, thereby enabling a coarser
$\epsilon$ and hence alleviating the quadrature error. We will then briefly
review an alternative ‘coarse’ approach, the nearest-neighbour method, a
benchmark with similar implementational simplicity. Numerical experiments will
be shown in the Results (§5), and we close with brief discussion (§6).
## 3 Richardson extrapolation in regularisation error
Consider the approximation of a physical quantity (e.g. moment on a rotating
body) which has exact value $M^{*}$. The value of this quantity calculated
with discretisation of size $h$ and regularisation parameter $\epsilon$ is
denoted,
$M(\epsilon,h)=M^{*}+E_{r}(\epsilon)+E_{d}(h;\epsilon),$ (17)
where $E_{r}(\epsilon)$ is the regularisation error associated with the
(undiscretised) integral equation, and $E_{d}(h;\epsilon)$ is the
discretisation error, which as indicated also has an indirect dependence on
$\epsilon$ via the quadrature.
Recall that:
$\displaystyle E_{r}\left(\epsilon\right)$ $\displaystyle=O(\epsilon),$ (18)
$\displaystyle E_{d}(h;\epsilon)$
$\displaystyle=E_{f}(h)+E_{q}(h;\epsilon)=O(h)+O(h^{2}/\epsilon),$ (19)
where $E_{f}(h)$ is the error associated with the force discretisation and
$E_{q}(h;\epsilon)$ is the quadrature error. The analysis below will focus on
the situation in which the regularisation parameter $\epsilon$ is not
excessively small, so that the quadrature error ($h^{2}/\epsilon$) is
subleading and hence the discretisation error has minimal dependence on
$\epsilon$, thus $E_{d}(h;\epsilon)\approx E_{d}(h;\epsilon_{0})$ for some
representative value $\epsilon_{0}$. Writing,
${M}(\epsilon;h)=M^{*}+E_{r}(\epsilon)+E_{d}(h;\epsilon_{0}),$ (20)
we may then expand,
${M}(\epsilon;h)=M^{*}+\epsilon
E_{r}^{\prime}(0)+\frac{\epsilon^{2}}{2}E_{r}^{\prime\prime}(0)+O(\epsilon^{3})+E_{d}(h;\epsilon_{0}).$
(21)
Evaluation of $M(\epsilon_{\ell},h)$ for three values of $\epsilon_{\ell}$ in
this range results in a linear system,
$\begin{pmatrix}{M}(\epsilon_{1},h)\\\ {M}(\epsilon_{2},h)\\\
{M}(\epsilon_{3},h)\\\
\end{pmatrix}=\underbrace{\begin{pmatrix}1&\epsilon_{1}&\epsilon_{1}^{2}\\\
1&\epsilon_{2}&\epsilon_{2}^{2}\\\
1&\epsilon_{3}&\epsilon_{3}^{2}\end{pmatrix}}_{B}\begin{pmatrix}M^{*}\\\
E_{r}^{\prime}(0)\\\
E_{r}^{\prime\prime}(0)/2\end{pmatrix}+\begin{pmatrix}E_{d}(h;\epsilon_{0})+O(\epsilon_{1}^{3})\\\
E_{d}(h;\epsilon_{0})+O(\epsilon_{2}^{3})\\\
E_{d}(h;\epsilon_{0})+O(\epsilon_{3}^{3})\end{pmatrix}.$ (22)
Applying the matrix inverse,
$B^{-1}\begin{pmatrix}{M}(\epsilon_{1},h)\\\ {M}(\epsilon_{2},h)\\\
{M}(\epsilon_{3},h)\\\ \end{pmatrix}=\begin{pmatrix}M^{*}\\\
E_{r}^{\prime}(0)\\\
E_{r}^{\prime\prime}(0)/2\end{pmatrix}+B^{-1}\begin{pmatrix}E_{d}(h;\epsilon_{0})+O(\epsilon_{1}^{3})\\\
E_{d}(h;\epsilon_{0})+O(\epsilon_{2}^{3})\\\
E_{d}(h;\epsilon_{0})+O(\epsilon_{3}^{3})\end{pmatrix}$ (23)
Hence the estimate,
$\widetilde{M}(\epsilon_{1},\epsilon_{2},\epsilon_{3};h):=\begin{pmatrix}1&0&0\end{pmatrix}B^{-1}\begin{pmatrix}{M}(\epsilon_{1},h)\\\
{M}(\epsilon_{2},h)\\\ {M}(\epsilon_{3},h)\\\ \end{pmatrix}$ (24)
provides an approximation to $M^{*}$ that has error
$E_{d}(h;\epsilon_{0})+O(\epsilon_{1}^{3}+\epsilon_{2}^{3}+\epsilon_{3}^{3}).$
(25)
This improvement in the order of accuracy comes at a small multiplicative cost
associated with solving the problem three times; however, as these are three
independent calculations, they are ideally placed to exploit parallel
computing architectures, reducing the additional computational cost.
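The extrapolation (24) is equivalent to fitting the quadratic error model (21) through the samples and reading off the constant term at $\epsilon=0$. A minimal sketch (ours, not from the paper):

```python
import numpy as np

def richardson_eps(eps_values, M_values):
    """Eq. (24): fit M(eps) = M* + c1*eps + c2*eps^2 through the
    samples and return the eps -> 0 limit M*. With three samples this
    inverts the Vandermonde matrix B exactly; with more it is a
    least-squares fit."""
    B = np.vander(np.asarray(eps_values, float), 3, increasing=True)
    coeffs, *_ = np.linalg.lstsq(B, np.asarray(M_values, float), rcond=None)
    return coeffs[0]

# Synthetic quantity with a purely quadratic-in-eps error: the linear
# and quadratic terms are removed exactly, recovering M* = 5.
eps = np.array([0.1, 0.2, 0.3])
M = 5.0 + 2.0 * eps + 3.0 * eps**2
```

Adding a cubic term `eps**3` to the synthetic data leaves a small residual, consistent with the $O(\epsilon^{3})$ error estimate (25).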
## 4 Comparison with the nearest-neighbour regularised stokeslet method
Before carrying out numerical experiments, we will briefly recap a different
strategy to address the $\epsilon$-dependence of the linear system size which
we have developed and described recently, in order to provide a benchmark with
similar implementational simplicity. The _nearest-neighbour_ version of the
regularised stokeslet method [16] aims to remove the $\epsilon$-dependence of
the linear system size. This change is achieved by separating the degrees of
freedom for traction from the quadrature by using two discretisations: a
‘coarse force’ set $\\{\bm{x}[1],\ldots,\bm{x}[N]\\}$ for the traction and a
finer set $\\{\bm{X}[1],\ldots,\bm{X}[Q]\\}$ for the quadrature. If these sets
are identical, the method reduces to the familiar Nyström discretisation. In
general, choosing $N<Q$ leverages the fact that the traction is more slowly-
varying than the near-field of the regularised stokeslet kernel. Discretising
the integral equation (11) on the fine set gives,
$\displaystyle u_{j}(\bm{x}[m])$
$\displaystyle=\sum_{q=1}^{Q}S_{jk}^{\epsilon}(\bm{x}[m],\bm{X}[q])f_{k}(\bm{X}[q])w[q]dS(\bm{X}[q]).$
(26)
Based on the observation that the traction $f_{k}(\bm{X}[q])$ and associated
weighting $w[q]dS(\bm{X}[q])$ are slowly-varying, the method employs degrees
of freedom $F_{k}[n]$ in the neighbourhood of each point of the coarse
discretisation, so that,
$w[q]dS(\bm{X}[q])f_{k}(\bm{X}[q])\approx\sum_{n=1}^{N}\nu[q,n]F_{k}[n],$ (27)
where $\nu[q,n]$ is a sparse matrix defined so that $\nu[q,n]=1$ if the
closest coarse point to $\bm{X}[q]$ is $\bm{x}[n]$, and $\nu[q,n]=0$
otherwise.
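A minimal construction of $\nu[q,n]$ (Python/NumPy; the array shapes are our own convention, and ties are broken arbitrarily rather than split between coarse points):

```python
import numpy as np

def nearest_neighbour_matrix(X_quad, x_force):
    """Build the Q x N matrix nu[q, n] of Eq. (27): nu[q, n] = 1 when
    x_force[n] is the closest coarse point to quadrature point X_quad[q],
    and 0 otherwise. Ties are broken arbitrarily by argmin; the text
    suggests splitting the weight between tied coarse points instead."""
    # pairwise squared distances between quadrature and force points
    d2 = np.sum((X_quad[:, None, :] - x_force[None, :, :])**2, axis=-1)
    nearest = np.argmin(d2, axis=1)   # index of the closest coarse point
    nu = np.zeros((X_quad.shape[0], x_force.shape[0]))
    nu[np.arange(X_quad.shape[0]), nearest] = 1.0
    return nu
```

By construction each row sums to $1$; a coarse point with an all-zero column corresponds to the singular case discussed below.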
A detail that was not addressed in our recent papers [15, 17, for example] is
that the closest coarse point to a given quadrature point may not be uniquely
defined. Moreover, it is occasionally possible that, for sufficiently
distorted discretisations, a coarse point may have no quadrature points
associated to it at all, resulting in a singular matrix. In the former case,
the weighting may be split between two or more coarse points, so that the sum
of each row of $\nu[q,n]$ is still equal to $1$. In the latter case, the
coarse point may be removed from the problem, or (better) the quadrature
discretisation refined.
The approximation (27) leads to the linear system,
$\displaystyle u_{j}(\bm{x}[m])$
$\displaystyle\approx\sum_{n=1}^{N}F_{k}[n]\sum_{q=1}^{Q}S_{jk}^{\epsilon}(\bm{x}[m],\bm{X}[q])\nu[q,n].$
(28)
The computational complexity of the system is given by the $3N\times 3Q$
function evaluations required to assemble the stokeslet matrix, followed by
the $O(N^{3})$ solution of the dense linear system (for direct methods).
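The contraction in (28) can be sketched as follows (Python/NumPy). The kernel $S^{\epsilon}_{jk}$ of Eq. (11) is not reproduced in this excerpt, so it is passed in as a callable, and the block layout of the assembled matrix is our own choice.

```python
import numpy as np

def assemble_nn_matrix(stokeslet, x_coll, X_quad, nu):
    """Assemble the nearest-neighbour matrix of Eq. (28).

    `stokeslet(x, X)` is assumed to return the 3x3 kernel
    S^eps_{jk}(x, X); its definition (Eq. (11)) is outside this excerpt.
    Returns a (3M x 3N) matrix with block entry
    A[j, m, k, n] = sum_q S^eps_{jk}(x[m], X[q]) nu[q, n].
    """
    M, N, Q = x_coll.shape[0], nu.shape[1], X_quad.shape[0]
    S = np.empty((3, 3, M, Q))
    for m in range(M):
        for q in range(Q):
            S[:, :, m, q] = stokeslet(x_coll[m], X_quad[q])
    # contract the quadrature index against nu (Eq. (28))
    A = np.einsum('jkmq,qn->jmkn', S, nu)
    return A.reshape(3 * M, 3 * N)
```

The double loop makes the $3N\times 3Q$ assembly cost explicit; in practice the kernel evaluations would be vectorised.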
The nearest-neighbour method is subject to similar $O(\epsilon)$
regularisation error and $O(h_{f})$ discretisation error (where $h_{f}$ is
characteristic of the force point spacing) as the Nyström method. Analysis of
the quadrature error associated with collocation [12] identifies two dominant
contributions:
1.
_Contained case_ : Quadrature centred about a force point which is also
contained in the quadrature set is subject to a dominant error term
$O(\epsilon^{-1}h_{q}^{2})$, where $h_{q}$ is the spacing of the quadrature
points; the Nyström method described above is a special case of this, with
$h_{q}=h$;
2.
_Disjoint case_ : Quadrature centred about a force point which is not
contained in the quadrature set is subject to a dominant error term $O((h_{q}/\delta)^{2}h_{q})$, where $\delta>0$ is the minimum distance between
the force point and quadrature points. This term does not appear in the
Nyström method error analysis. The term is written in this form because
$\delta$ is typically similar in size to $h_{q}$ for a given quadrature set,
so with a little care, $h_{q}/\delta$ behaves as a multiplicative constant.
For contained force and quadrature discretisations (i), the cost of quadrature
is still an important consideration. Reducing $\epsilon$ by a factor of $R$ necessitates reducing $h_{q}^{2}$ by a factor of $R$, and hence increasing the
number of quadrature points – and associated matrix assembly cost – by a
factor of $R$. Therefore any improvement to the order of convergence of the
regularisation error will result in a corresponding improvement in the
reduction of quadrature error.
However, when disjoint force and quadrature discretisations (ii) are employed, the nearest-neighbour method entirely decouples the degrees of freedom (tied only to $h_{f}$) from the regularisation parameter $\epsilon$ and the quadrature discretisation $h_{q}$. The
nearest-neighbour method therefore provides a relatively efficient and
accurate implementation of the regularised stokeslet method that, with minor
care in the construction of the discretisation sets, can be used as a
benchmark. In the following section we will assess the Richardson
extrapolation approach against analytic solutions for two examples of the
resistance problem, and against the nearest-neighbour method for an example of
the mobility problem.
## 5 Results
We now turn our attention to the application of Richardson extrapolation to a
series of model problems, comprising the calculation of:
(a) The grand resistance matrix for a unit sphere;
(b) The grand resistance matrix for a prolate spheroid; and
(c) The motion of a torus sedimenting under gravity.
For simulations (a) and (b), comparisons can be made to known exact solutions.
For each test case (a-c), we compare the results of simulations using both the
Nyström [Ny] and Nyström + Richardson [NyR] methods. For the latter, we choose
extrapolation points
$\left(\epsilon_{1},\epsilon_{2},\epsilon_{3}\right)=\left(\epsilon,\sqrt{2}\epsilon,2\epsilon\right)$.
The choice of extrapolation rule is discussed further in Appendix A.
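One convenient property of a fixed-ratio rule such as $\left(\epsilon,\sqrt{2}\epsilon,2\epsilon\right)$ is that, under the quadratic error expansion assumed in §3 (an assumption carried over from that discussion), the extrapolation weights depend only on the ratios and not on $\epsilon$ itself, so a single set of coefficients can be precomputed. A small check (Python/NumPy):

```python
import numpy as np

def extrapolation_weights(eps_vals):
    """Weights w with sum(w) = 1 and sum(w*eps) = sum(w*eps**2) = 0,
    cancelling the O(eps) and O(eps^2) error terms."""
    e = np.asarray(eps_vals, dtype=float)
    B = np.vander(e, 3, increasing=True)
    return np.linalg.solve(B.T, np.array([1.0, 0.0, 0.0]))

ratios = np.array([1.0, np.sqrt(2.0), 2.0])
w = extrapolation_weights(ratios)
# the linear and quadratic constraints are homogeneous, so scaling all
# three epsilons leaves the weights unchanged
w_scaled = extrapolation_weights(10.0 * ratios)
```

This scale invariance means the same three coefficients serve every $\epsilon$ in the sweep below.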
For each problem we use the minimum distance between any two force points in
the discretisation as our comparative lengthscale $h$. For the [NyR] method,
results are shown against the smallest value of the regularisation parameter
$\left(\epsilon_{1}\right)$ used in the calculation. Simulations are performed
with GPU acceleration (see [18]) using a Lenovo Thinkstation with an NVIDIA
Quadro RTX 5000 GPU. Each of the test problems that we consider is, however, easily within the capabilities of more modest hardware.
Figure 1: Relative error in calculating the grand resistance matrix for the unit sphere. (a) Sketch of the sphere discretisation (orange dots). (b) The number of scalar degrees of freedom used in calculations as $h$ is varied. (c) and (d) The relative error of the Nyström and Nyström + Richardson methods as $\epsilon$ and $h$ are varied. (e) and (f) The same data plotted for each $\epsilon$ as $h$ is varied.
### 5.1 The grand resistance matrix of a rigid sphere
Application of Stokes’ law gives the force exerted by the translation of the
unit sphere with velocity $\bm{U}=(1,0,0)$ as $\bm{F}=\left(6\pi,0,0\right)$,
and the moment exerted by the unit sphere with rotational velocity
$\bm{\Omega}=(1,0,0)$ as $\bm{M}=(8\pi,0,0)$. From this, the grand resistance
matrix $A$ can be constructed as in Equation (13). We solve Equation (12)
[Ny] and Equations (12) and (24) [NyR] for unit translations and rotations
about each axis to obtain the numerical approximation to $A$, $A^{\epsilon}$.
The relative error in the calculation is then given by the relation
$\text{relative error}=\frac{\|A-A^{\epsilon}\|_{2}}{\|A\|_{2}},$ (29)
where $\|A\|_{2}$ denotes the 2-norm ($\|A\|_{2}=\sup\limits_{x\neq
0}\|Ax\|_{2}/\|x\|_{2}$).
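Equation (29) translates directly into code; a minimal sketch (Python/NumPy), where `ord=2` gives the matrix 2-norm (largest singular value):

```python
import numpy as np

def relative_error(A_exact, A_approx):
    """Relative error of Eq. (29) in the matrix 2-norm."""
    return (np.linalg.norm(A_exact - A_approx, 2)
            / np.linalg.norm(A_exact, 2))
```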
The unit sphere is discretised by projecting onto the six faces of a cube (Figure 1(a)), with the number of scalar degrees of freedom (sDOF) plotted against the minimum spacing between points ($h$) in Figure 1(b). The relative error in calculating the grand resistance matrix as $h$ and $\epsilon$ are varied is shown in Figures 1(c) and 1(d) ([Ny] and [NyR] respectively). We report
results for an identical range of $\epsilon$ (and $h$) for both methods,
although as described in §3, the [NyR] method is specifically designed to
exploit larger values of $\epsilon$ for which the quadrature error is small,
so the [NyR] results with $\epsilon=0.1$–$0.4$ are most pertinent.
Figure 2: Relative error in calculating the grand resistance matrix for a prolate spheroid with major axis length $a=5$ and minor axis length $c=1$. (a) Sketch of the discretisation (orange dots). (b) The number of scalar degrees of freedom used in calculations as $h$ is varied. (c) and (d) The relative error of the Nyström and Nyström + Richardson methods as $\epsilon$ and $h$ are varied. (e) and (f) The same data plotted for each $\epsilon$ as $h$ is varied.
The [Ny] method is found to achieve $1\%$ relative error for a select number of parameter pairs ($\epsilon,h$). This is strongly dependent, however, on the ‘dip’ in error which appears as $h$ is decreased for a given $\epsilon$ (evident in Figure 1(e)) and is a consequence of the balance between the opposite-signed regularisation and quadrature errors; the small-$h$ plateau remains above $1\%$ error for each choice of $\epsilon$. In contrast, the [NyR] method is able to significantly reduce the error in the plateau (Figure 1(f)), resulting in sub-$1\%$ errors for $\epsilon$ as large as $0.2$. Indeed
with $\epsilon=0.2$, the range of values of $h$ capable of producing
acceptably accurate results extends from $h=0.00077$ to $h=0.0076$. As a
result of the reduction in regularisation error, brought about by the [NyR]
extrapolation, this method is able to achieve a minimum relative error of
$0.05\%$ compared to $0.6\%$ for the [Ny] method, and moreover accurate
performance no longer depends on a precise interplay between $h$ and
$\epsilon$. In the simulations we performed, the [NyR] method was able to
attain very accurate results ($0.1\%$ error) in $250$ seconds of walltime.
### 5.2 The grand resistance matrix of a rigid prolate spheroid
To assess the performance on a system involving a modest disparity of length
scales, the second model problem is the calculation of the grand resistance
matrix for a prolate spheroid of major axis length $5$ and minor axis length
$1$. Moreover, prolate spheroids are often used as models for both entire
microscopic swimming cells, and for their propulsive cilia and flagella, and
so provide an informative test geometry. The exact solution in the absence of
other bodies is well-known (see e.g. ref. [19]). Details of the discretisation
of the prolate spheroid are provided in Appendix B.1. A sketch of the discretisation and a plot of sDOF as $h$ is varied are shown in Figures 2(a) and 2(b).
Similarly to the case of the unit sphere, the [Ny] method is able to achieve a
minimum error of $0.8\%$ for the smallest $\epsilon$ in this study and a
specific choice of $h$ within the error dip (Figures 2(c) and 2(e)). For each choice
of $\epsilon$, the error plateau for small $h$ is at least $1\%$ relative
error. Relatively large $\epsilon=0.2,0.4$ yield error plateaus of $8.7\%$ and
$22\%$ respectively.
The [NyR] method also exhibits this dip phenomenon, however the reduction in
regularisation error provided by the Richardson extrapolation (Figures 2(d) and 2(f)) results in significantly reduced error plateaus of $0.059\%$ and $1.5\%$
($\epsilon=0.2$ and $0.4$ respectively), again being more robustly maintained
over a larger range of values of $h$. For this test problem, the [NyR] method
achieved $0.1\%$ error in $390$ seconds of walltime.
Figure 3: A torus, with central radius $R=2.5$ and tube radius $r=1$,
sedimenting under gravity. (a) Sketch of the Nyström discretisation (orange
dots). (b) Sketch of the [NEAREST] force (large orange dots) and quadrature
(small green dots) discretisations. (c) The number of scalar degrees of
freedom used in Nyström and Nyström + Richardson calculations as $h$ is varied.
(d) and (e) The $z$-position of the torus at $t=98.7$ calculated with the
Nyström and Nyström + Richardson methods as $\epsilon$ and $h$ are varied. (f)
and (g) The same data plotted for each $\epsilon$ as $h$ is varied, with a
dotted line showing the result using the nearest-neighbour method for
comparison. (h) and (i) The error in $z$-position at $t=98.7$ relative to the nearest-neighbour calculation. (j) and (k) The same data plotted for each
$\epsilon$ as $h$ is varied. The cross in (e, i) denotes a parameter
combination for which results could not be obtained due to near-singularity of
the linear system.
### 5.3 The motion of a torus sedimenting under gravity
As a final test case, we simulate the mobility problem of a torus sedimenting
under the action of gravity (for detailed setup and discretisation, see
Appendix B.2). In the absence of an exact solution to this problem, we compare the distance travelled in the vertical direction after the system (Equations (35)–(37)) is solved for $t\in[0,98.7]$. We compare the results
obtained with the [Ny] and [NyR] methods with those from a simulation using
the nearest-neighbour method ([NEAREST]) with a refined force discretisation,
disjoint force and quadrature discretisations and $\epsilon=10^{-6}$. Figures
3(a)–(c) show, respectively, the discretisations for the [Ny] and [NyR], and [NEAREST] methods, and the number of sDOF used in the [Ny] and [NyR] methods as $h$ is varied. For the [NEAREST] simulation, a highly-resolved system is
constructed with $14,667$ sDOF and $231,744$ quadrature points.
Figures 3(d) & (e), and 3(f) & (g) show the convergence in $z$-position of the torus at $t=98.7$ as both $\epsilon$ and $h$ are varied. The relative difference between these results and the [NEAREST] simulation is shown in Figures 3(h)–(k). The error behaves similarly to the previous cases: while [Ny] achieves
accurate results with specific combinations of $\epsilon$ and $h$, by contrast
[NyR] at relatively large values of $\epsilon=0.1$–$0.4$ attains
sub-$1\%$ error over an extended range of $h$ values.
As anticipated by the error analysis, the advantage of [NyR] appears in the
range of relatively coarse values, i.e. $\epsilon=0.1$–$0.4$. A solution could
not be obtained when $\epsilon=0.4$ and $h<0.039$, due to the matrix system
with ${\epsilon_{3}=2\epsilon=0.8}$ becoming close-to-singular. For the choice
of $\epsilon=0.4$, the [NyR] method attained an error of $0.7\%$ (compared to
the result using [NEAREST]) in $144$ seconds of walltime.
The results for the smallest choice of regularisation parameter,
$\epsilon=0.01$, are not converged with $h$, consistent with our analysis in
§3 focussing on moderate values of $\epsilon$ for which the quadrature error
is subleading.
## 6 Discussion
This manuscript considered the implementation of the regularised stokeslet method, a technique widely used in biological fluid dynamics for the computational solution of the Stokes flow equations. An inherent challenge is the strong dependence
of the degrees of freedom on the regularisation parameter $\epsilon$, which
necessitates an inverse-cubic relationship between the linear solver cost and
the regularisation parameter.
Here, we have investigated a simple modification of the widely-used Nyström
method, by employing Richardson extrapolation: performing calculations with three coarse values of $\epsilon$ and extrapolating to improve the order of the regularisation error. The method was compared with the
original Nyström approach on three test problems: calculating the grand
resistance matrices of the unit sphere and prolate spheroid, and simulating
the motion of a torus sedimenting under gravity.
Investigation of these model problems has highlighted two significant
phenomena, the first of which is well-known but is worth repeating: (1)
obtaining an acceptable level of error using the Nyström method is strongly
dependent on being within the region where the (opposite-signed)
regularisation and quadrature errors exhibit significant cancellation, a
phenomenon which has sensitive dependence on the discretisation $h$ as
$\epsilon$ is varied. (2) The improvement in the order of regularisation error
provided by Richardson extrapolation is able to significantly and robustly
reduce errors for simulations with (relatively) large choices of $\epsilon$,
enabling highly accurate results with relatively modest computational
resources. This advantage is (by design) only maintained for these coarse values of $\epsilon$, for which the quadrature error is subleading. Another
approach which improves the order of convergence of the (important) local
regularisation error is ref. [8], although the resulting regularised
stokeslets may not be exactly divergence-free.
As discussed above, there are several existing approaches to improving the
efficiency and accuracy of regularised stokeslet methods. The best approach in
terms of strict computational complexity is the use of _fast_ methods such as
the kernel independent fast multipole method, which enables the approximation
of the matrix-vector operation required for iterative solution of the linear
problem [13, 20], resulting in an $O(N\log N)$ method – although with somewhat
greater implementational complexity. Another formulation is to borrow from the
boundary element method developed for the standard singular stokeslet
formulation [14], which has been applied to systems such as embryonic left-right
symmetry breaking [21] and bacterial morphology [22]. The boundary element
approach decouples the quadrature from the traction discretisation and hence
degrees of freedom of the system, enabling larger problems to be solved,
although again at the expense of greater complexity through the need to
construct a true surface _mesh_, with a mapping between elements and nodes.
The nearest-neighbour discretisation [15] retains much of the simplicity of
the Nyström method, while separating the quadrature discretisation from the
degrees of freedom. Provided that the discretisations do not overlap, we still
find this method to be an optimal combination of simplicity and efficiency.
The Richardson approach does not remove the requirement that the regularisation parameter not exceed the length scales characterising the physical problem, for example the distance between objects. In this respect the nearest-
neighbour approach is advantageous because of its ability to accommodate
smaller values of the regularisation parameter.
In this work, we have focussed on demonstrating how a numerically simple
modification to the, already easy-to-implement, Nyström method can provide
excellent improvements by employing coarse values of the regularisation
parameter $\epsilon$. This approach can be considered complementary to the
nearest-neighbour method in its _coarse_ philosophy and style: both methods
are figuratively coarse in their simplicity, and literally coarse in their
approach of increasing numerical parameters: the Richardson approach allows increases in the regularisation parameter, while the nearest-neighbour approach allows increases in the force discretisation spacing $h_{f}$. Either method
enables more accurate results to be achieved with greater robustness, and for
lower computational cost. Moreover, both have the advantage of being
formulated in terms of basic linear algebra operations, and therefore can be
further improved through the use of GPU parallelisation with minimal
modifications [18]. The choice of which method to use is a matter of
preference; the Richardson approach has the advantage of being immediately
adoptable by any group with a working Nyström code, alongside the repeated
calculations being embarrassingly parallel; the nearest-neighbour approach has
the advantage of completely removing the dependence of the system size on
$\epsilon$.
Accessible algorithmic improvements such as these provide the improved ability
to solve a plethora of problems in very low Reynolds number hydrodynamics.
Potential application areas are varied, including microswimmers such as sperm
[23, 24], algae and bioconvection [25, 26, 27, 28, 29], mechanisms of
flagellar mechanics [30, 31], squirmers [32, 33] and bio-inspired swimmers
[34, 35, 36]. Stokeslet-based methods have been employed since the work of
Gray & Hancock [6] in the 1950s; they continue to provide ease of
implementation, efficiency, and most importantly physical insight into
biological systems.
## Acknowledgment
This work was supported by Engineering and Physical Sciences Research Council
(EPSRC) Award No. EP/N021096/1. MTG acknowledges support from EPSRC Centre
Grant EP/N014391/2. We thank Eamonn Gaffney and Kenta Ishimoto for valuable
discussion.
## Data accessibility
The code to produce the results in this report is contained within the
repositories: https://gitlab.com/meuriggallagher/the-art-of-coarse-stokes
(MATLAB code for Nyström and Richardson extrapolation) and
https://gitlab.com/meuriggallagher/NEAREST (MATLAB code for NEAREST and other
dependencies).
## Appendix A Choice of extrapolation parameters
As a check on the robustness of the results presented in this manuscript to
the choice of extrapolation parameters
$\left(\epsilon_{1},\epsilon_{2},\epsilon_{3}\right)$, we calculate the
relative error in calculating the grand resistance matrix for the unit sphere
(see Section 5) with the rules:
* $\left(\epsilon,\sqrt{2}\epsilon,2\epsilon\right)$, Figure 1;
* $\left(\epsilon,1.5\epsilon,2\epsilon\right)$, Figure 4(a);
* $\left(\epsilon,2\epsilon,3\epsilon\right)$, Figure 4(b);
* $\left(\epsilon,1.25\epsilon,1.5\epsilon\right)$, Figure 4(c);
* $\left(\epsilon,\sqrt{1.5}\epsilon,1.5\epsilon\right)$, Figure 4(d).
Visual comparison between Figures 1 and 4 shows that the improvement in
accuracy is relatively similar.
Figure 4: Relative error in calculating the grand resistance matrix for the
unit sphere with the Nyström + Richardson method for four choices of
extrapolation rule.
## Appendix B Further details of numerical experiments
### B.1 Discretisation of the prolate spheroid
The location of points on the prolate spheroid, aligned with the $x$-axis, can
be expressed in terms of the prolate spheroidal coordinates, as
$\displaystyle x=\alpha\cosh\mu\cos\nu,$ (30)
$\displaystyle y=\alpha\sinh\mu\sin\nu\cos\phi,$ (31)
$\displaystyle z=\alpha\sinh\mu\sin\nu\sin\phi,$ (32)
for $\nu\in\left[0,\pi\right]$, $\phi\in\left[0,2\pi\right]$, with
$\alpha=\sqrt{a^{2}-c^{2}},\qquad\mu=\operatorname{arccosh}{\frac{a}{\alpha}},$ (33)
where $a$ and $c$ are the major- and minor-axes lengths respectively. We first
discretise $\nu$ into $n$ uniformly spaced points, providing a discretisation in $x$ which is slightly more dense in regions of higher curvature. For each
choice of $\nu_{i}$ ($i\in[1,n]$) we discretise $\phi$ into $m_{i}$ linearly
spaced points, where the choice
$m_{i}=\left\lceil{\frac{2\pi\alpha\sinh\mu\sin\nu_{i}}{h}}\right\rceil,\quad
i\in\left[1,n\right],$ (34)
ensures that each ring is approximately evenly discretised with spacing $h$.
Here, $\lceil\cdot\rceil$ represents the ceiling function.
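The construction above can be sketched as follows (Python/NumPy). The handling of the poles, where $\sin\nu_{i}=0$, is our own guess (a single point per degenerate ring), since the paper does not specify it.

```python
import numpy as np

def discretise_prolate_spheroid(a, c, n, h):
    """Surface points on a prolate spheroid following Appendix B.1:
    n uniformly spaced values of nu in [0, pi], and on each ring
    m_i = ceil(2*pi*alpha*sinh(mu)*sin(nu_i) / h) points, Eq. (34)."""
    alpha = np.sqrt(a**2 - c**2)
    mu = np.arccosh(a / alpha)          # so that alpha*cosh(mu) = a
    points = []
    for nu_i in np.linspace(0.0, np.pi, n):
        m_i = int(np.ceil(2 * np.pi * alpha * np.sinh(mu)
                          * np.sin(nu_i) / h))
        # degenerate rings at the poles get a single point (our choice)
        for phi in np.linspace(0.0, 2 * np.pi, max(m_i, 1),
                               endpoint=False):
            points.append((alpha * np.cosh(mu) * np.cos(nu_i),
                           alpha * np.sinh(mu) * np.sin(nu_i) * np.cos(phi),
                           alpha * np.sinh(mu) * np.sin(nu_i) * np.sin(phi)))
    return np.array(points)
```

Since $\alpha\sinh\mu=c$ and $\alpha\cosh\mu=a$, every generated point satisfies $x^{2}/a^{2}+(y^{2}+z^{2})/c^{2}=1$.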
### B.2 A torus sedimenting under gravity
The equations of motion for a torus sedimenting under gravity are given by
$\displaystyle-U_{i}-\varepsilon_{ijk}\Omega_{j}\left(x_{k}-x_{0k}\right)-\frac{1}{8\pi}\iint\limits_{\partial D}S_{ij}^{\epsilon}\left(\bm{x},\bm{X}\right)f_{j}\left(\bm{X}\right)\mathrm{d}S_{\bm{X}}=0,\quad\forall\bm{x}\in\partial D,$ (35)
$\displaystyle\iint\limits_{\partial D}f_{i}\left(\bm{X}\right)\mathrm{d}S_{\bm{X}}=-1,$ (36)
$\displaystyle\iint\limits_{\partial D}\varepsilon_{ikj}X_{k}f_{j}\left(\bm{X}\right)\mathrm{d}S_{\bm{X}}=0,$ (37)
where repeated indices are summed over, $i\in\left[1,2,3\right]$, $\bm{U}$ and
$\bm{\Omega}$ are the translational and rotational velocities of the torus,
$\partial D$ defines the surface of the torus, the central- and tube-radii of
the torus are given by $R$ and $r$ respectively, and $\varepsilon_{ijk}$ is
the Levi-Civita symbol. The term on the right-hand side of Equation (36)
derives from the (dimensionless) effect of gravity. The motion of the torus
can be expressed as a system of 9 ordinary differential equations for the time
derivatives of the torus position $\bm{x_{0}}$ and basis vectors
$\bm{b}^{(1)}$ and $\bm{b}^{(2)}$ (after which
$\bm{b}^{(3)}=\bm{b}^{(1)}\times\bm{b}^{(2)}$). More details of how this
‘mobility problem’ is solved can be found in [15]. While this problem could be
further constrained by enforcing that the angular velocity is zero (due to the
symmetry of the torus), we focus on solving for the full rigid body motion.
The mobility problem is solved using the [Ny], [NyR] and [NEAREST] methods,
with results given in Section 5.3.
Points on the torus surface can be written as
$\displaystyle x=\left(R+r\cos\theta\right)\cos\phi,$ (38)
$\displaystyle y=\left(R+r\cos\theta\right)\sin\phi,$ (39)
$\displaystyle z=r\sin\theta,$ (40)
for $\theta,\ \phi\in\left[0,2\pi\right)$. We discretise $\theta$ into
$n=\left\lceil{2\pi r/h}\right\rceil$ linearly spaced points, ensuring points
on each ring are approximately evenly spaced with lengthscale $h$. For each
$\theta_{i}$ ($i\in[1,n]$) we discretise $\phi$ into $m_{i}$ linearly spaced
points via
$m_{i}=\left\lceil{\frac{2\pi\left(R+r\cos\theta_{i}\right)}{h}}\right\rceil,\quad
i\in\left[1,n\right],$ (41)
resulting in an approximately evenly spaced discretisation for the torus with
lengthscale $h$. For simulations with the [NEAREST] method, a fine quadrature
discretisation is created following the same process with lengthscale
$h_{q}=h/4$. To ensure disjoint force and quadrature discretisations in this
case, a filtering step is performed to remove any quadrature points which lie
within a distance $h_{q}/10$ from their nearest force point.
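The torus discretisation of Equations (38)–(41) can be sketched as (Python/NumPy):

```python
import numpy as np

def discretise_torus(R, r, h):
    """Surface points on a torus via Eqs. (38)-(41): theta discretised
    into n = ceil(2*pi*r/h) points and, on each ring,
    m_i = ceil(2*pi*(R + r*cos(theta_i))/h) values of phi."""
    n = int(np.ceil(2 * np.pi * r / h))
    points = []
    for theta in np.linspace(0.0, 2 * np.pi, n, endpoint=False):
        m_i = int(np.ceil(2 * np.pi * (R + r * np.cos(theta)) / h))
        for phi in np.linspace(0.0, 2 * np.pi, m_i, endpoint=False):
            points.append(((R + r * np.cos(theta)) * np.cos(phi),
                           (R + r * np.cos(theta)) * np.sin(phi),
                           r * np.sin(theta)))
    return np.array(points)
```

For a [NEAREST]-style quadrature set, the same routine would be called with lengthscale $h_{q}=h/4$, followed by the filtering step described above to keep the force and quadrature sets disjoint.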
## References
* [1] E. Lauga. The Fluid Dynamics of Cell Motility, volume 62. Cambridge University Press, 2020.
* [2] R. Cortez. The method of regularized Stokeslets. SIAM J. Sci. Comput., 23(4):1204–1225, 2001.
* [3] R. Cortez, L. Fauci, and A. Medovikov. The method of regularized Stokeslets in three dimensions: analysis, validation, and application to helical swimming. Phys. Fluids, 17(3):031504, 2005.
* [4] J. Ainley, S. Durkin, R. Embid, P. Boindala, and R. Cortez. The method of images for regularized Stokeslets. J. Comp. Phys., 227(9):4600–4616, 2008.
* [5] C. Pozrikidis. Boundary integral and singularity methods for linearized viscous flow. Cambridge University Press, 1992.
* [6] G.J. Hancock. The self-propulsion of microscopic organisms through liquids. Proc. R. Soc. Lond. A, 217(1128):96–121, 1953.
* [7] C.W. Oseen. Neuere Methoden und Ergebnisse in der Hydrodynamik. Leipzig: Akademische Verlagsgesellschaft m.b.H., 1927.
* [8] H. Nguyen and R. Cortez. Reduction of the regularization error of the method of regularized stokeslets for a rigid object immersed in a three-dimensional stokes flow. Commun. Comput. Phys., 15(1):126–152, 2014.
* [9] B. Zhao, E. Lauga, and L. Koens. Method of regularized stokeslets: Flow analysis and improvement of convergence. Phys. Rev. Fluids, 4(8):084104, 2019.
* [10] K. Ishimoto and E.A. Gaffney. Boundary element methods for particles and microswimmers in a linear viscoelastic fluid. J. Fluid Mech., 831:228–251, 2017.
* [11] E.J. Nyström. Über die praktische auflösung von integralgleichungen mit anwendungen auf randwertaufgaben. Acta Math., 54(1):185–204, 1930.
* [12] M.T. Gallagher, D. Choudhuri, and D.J. Smith. Sharp quadrature error bounds for the nearest-neighbor discretization of the regularized stokeslet boundary integral equation. SIAM J. Sci. Comput., 41(1):B139–B152, 2019.
* [13] M.W. Rostami and S.D. Olson. Kernel-independent fast multipole method within the framework of regularized Stokeslets. J. Fluid. Struct., 67:60–84, 2016.
* [14] D.J. Smith. A boundary element regularized Stokeslet method applied to cilia-and flagella-driven flow. Proc. R. Soc. Lond. Ser. A, 465(2112):3605–3626, 2009.
* [15] M.T. Gallagher and D.J. Smith. Meshfree and efficient modeling of swimming cells. Phys. Rev. Fluids, 3(5):053101, 2018.
* [16] D.J. Smith. A nearest-neighbour discretisation of the regularized stokeslet boundary integral equation. J. Comput. Phys., 358:88–102, 2018.
* [17] M.T. Gallagher, T.D. Montenegro-Johnson, and D.J. Smith. Simulations of particle tracking in the oligociliated mouse node and implications for left–right symmetry-breaking mechanics. Phil. Trans. R. Soc. Ser. B., 375(1792):20190161, 2020.
* [18] M.T. Gallagher and D.J. Smith. Passively parallel regularized stokeslets. Phil. Trans. R. Soc. A, 378(2179):20190528, 2020.
* [19] S. Kim and S.J. Karrila. Microhydrodynamics: Principles and Selected Applications. Butterworth-Heinemann, Boston and London, 2013.
* [20] M.W. Rostami and S.D. Olson. Fast algorithms for large dense matrices with applications to biofluids. J. Comput. Phys., 2019.
* [21] P. Sampaio, R.R. Ferreira, A. Guerrero, P. Pintado, B. Tavares, J. Amaro, A.A. Smith, T. Montenegro-Johnson, D.J. Smith, and S.S. Lopes. Left-right organizer flow dynamics: how much cilia activity reliably yields laterality? Dev. Cell, 29(6):716–728, 2014.
* [22] R. Schuech, T. Hoehfurtner, D.J. Smith, and S. Humphries. Motile curved bacteria are Pareto-optimal. Proc. Natl. Acad. Sci., 116(29):14440–14447, 2019.
* [23] R.D. Dresdner and D.F. Katz. Relationships of mammalian sperm motility and morphology to hydrodynamic aspects of cell function. Biol. Reprod., 25(5):920–930, 1981.
* [24] S.F. Schoeller and E.E. Keaveny. From flagellar undulations to collective motion: predicting the dynamics of sperm suspensions. J. R. Soc. Interface, 15(140):20170834, 2018.
* [25] N.A. Hill and T.J. Pedley. Bioconvection. Fluid Dyn. Res., 37(1-2):1, 2005.
* [26] R.E. Goldstein. Green algae as model organisms for biological fluid dynamics. Annu. Rev. Fluid Mech., 47:343–375, 2015.
* [27] T.J. Pedley, D.R. Brumley, and R.E. Goldstein. Squirmers with swirl: a model for Volvox swimming. J. Fluid Mech., 798:165–186, 2016.
* [28] A. Javadi, J. Arrieta, I. Tuval, and M. Polin. Photo-bioconvection: towards light control of flows in active suspensions. Phil. Trans. R. Soc. A, 378(2179):20190523, 2020.
* [29] M.A. Bees. Advances in bioconvection. Annu. Rev. Fluid Mech., 52:449–476, 2020.
* [30] C.V. Neal, A.L. Hall-McNair, J. Kirkman-Brown, D.J. Smith, and M.T. Gallagher. Doing more with less: The flagellar end piece enhances the propulsive effectiveness of human spermatozoa. Phys. Rev. Fluids, 5(7):073101, 2020.
* [31] K.Y. Wan. Synchrony and symmetry-breaking in active flagellar coordination. Phil. Trans. R. Soc. B, 375(1792):20190393, 2020.
* [32] J.R. Blake. Self propulsion due to oscillations on the surface of a cylinder at low Reynolds number. Bull. Austr. Math. Soc., 5(02):255–264, 1971.
* [33] K. Ishimoto. A spherical squirming swimmer in unsteady Stokes flow. J. Fluid Mech., 723:163–189, 2013.
* [34] H. Nguyen, R. Ortiz, R. Cortez, and L. Fauci. The action of waving cylindrical rings in a viscous fluid. J. Fluid Mech., 671:574–586, 2011.
* [35] J. Huang and L. Fauci. Interaction of toroidal swimmers in stokes flow. Phys. Rev. E, 95:043102, Apr 2017.
* [36] R.D. Baker, T. Montenegro-Johnson, A.D. Sediako, M.J. Thomson, A. Sen, E. Lauga, and I.S. Aranson. Shape-programmed 3d printed swimming microtori for the transport of passive and active agents. Nat. Commun., 10(1):1–10, 2019.
# The Cosmic Axion Background
Jeff A. Dror<EMAIL_ADDRESS>Department of Physics and Santa Cruz Institute
for Particle Physics, University of California, Santa Cruz, CA 95064, USA
Berkeley Center for Theoretical Physics, University of California, Berkeley,
CA 94720, USA Theory Group, Lawrence Berkeley National Laboratory, Berkeley,
CA 94720, USA Hitoshi Murayama<EMAIL_ADDRESS><EMAIL_ADDRESS>Berkeley Center for Theoretical Physics, University of California, Berkeley,
CA 94720, USA Theory Group, Lawrence Berkeley National Laboratory, Berkeley,
CA 94720, USA Kavli Institute for the Physics and Mathematics of the Universe
(WPI), University of Tokyo, Kashiwa 277-8583, Japan Nicholas L. Rodd
<EMAIL_ADDRESS>Berkeley Center for Theoretical Physics, University of
California, Berkeley, CA 94720, USA Theory Group, Lawrence Berkeley National
Laboratory, Berkeley, CA 94720, USA
###### Abstract
Existing searches for cosmic axion relics have relied heavily on the axion
being non-relativistic and making up dark matter. However, light axions can be
copiously produced in the early Universe and remain relativistic today,
thereby constituting a Cosmic axion Background (C$a$B). As prototypical
examples of axion sources, we consider thermal production, dark-matter decay,
parametric resonance, and topological defect decay. Each of these has a
characteristic frequency spectrum that can be searched for in axion direct
detection experiments. We focus on the axion-photon coupling and study the
sensitivity of current and future versions of ADMX, HAYSTAC, DMRadio, and
ABRACADABRA to a C$a$B, finding that the data collected in search of dark
matter can be repurposed to detect axion energy densities well below limits
set by measurements of the energy budget of the Universe. In this way, direct
detection of relativistic relics offers a powerful new opportunity to learn
about the early Universe and, potentially, discover the axion.
## I Introduction
Figure 1: A representative depiction of the landscape of the cosmic axion
background (C$a$B), showing the differential axion energy density, given in
(2), as a function of energy. The black dashed curves show four different
realizations of the C$a$B, corresponding to thermal production (with
$T_{a}=T_{0}$, the CMB temperature), a Gaussian distribution representative of
parametric-resonance production (with $\rho_{a}=\rho_{\gamma}$,
$\bar{\omega}=0.3~{}\mu{\rm eV}$, and $\sigma/\bar{\omega}=0.1$), dark-matter
decay ($\chi\to aa$), and cosmic-string production ($f_{a}=10^{15}$ GeV,
$T_{d}=10^{12}$ GeV). For the dark-matter decay distribution the parameters are set to values already accessible to ADMX, as justified later in this
work. In particular, we take $m_{\scriptscriptstyle{\rm DM}}\simeq
5.4~{}\mu{\rm eV}$ and $\tau\simeq 2\times 10^{3}\,t_{U}$, with $t_{U}$ the
age of the universe. While the thermal distribution will always peak roughly
where shown and the cosmic-string production is dominant at lower frequencies,
the parametric resonance and dark-matter decay signals can populate the full
energy range shown. In all cases we set the axion-photon coupling to
the largest allowed value consistent with star-emission bounds over this
energy range, $g_{a\gamma\gamma}^{\rm SE}=0.66\times 10^{-10}~{}{\rm
GeV}^{-1}$. The colored regions denote the sensitivity in this same space that
could be obtained by reanalyzing existing ADMX and HAYSTAC data, or with the
future sensitivities of DMRadio and MADMAX. In determining the sensitivities,
we have assumed that the C$a$B axion-photon coupling saturates star-emission
bounds. We show the region of parameter space where the C$a$B could partially
alleviate the Hubble tension, labelled $H_{0}$ Preferred. Finally, the gray
dotted line depicts the approximate boundary, to the left of which the C$a$B
has sufficient number densities to be treated as a classical wave.
The existence of an axion with mass well below the electroweak scale could
resolve the strong CP puzzle Peccei:1977hh ; Peccei:1977ur ; Weinberg:1977ma ;
Wilczek:1977pj , and is entirely in line with UV expectations given the
ubiquity of axions in string theory, where they arise from the deconstruction
of extra-dimensional gauge fields Svrcek:2006yi ; Arvanitaki:2009fg ;
Halverson:2019cmy . The discovery of cosmologies where such a particle
produced in the early Universe could constitute dark matter Preskill:1982cy ;
Abbott:1982af ; Dine:1982ah has motivated a broad program to detect non-
relativistic axions, and the development of instruments that will cover
enormous swaths of unexplored parameter space in the coming decades. Yet the
axion need not be dark matter, and the mere existence of an axion in the
spectrum implies the possibility that a relic population of these states was
produced in the early history of the Universe. Generically, such a population
could be relativistic — a characteristic feature of the axion is an
approximate shift symmetry, leading to a potential suppressed by powers of the
axion decay constant, $f_{a}$, and correspondingly the axion mass, $m_{a}$, is
expected to be small. Accordingly, the Universe may be awash in a sea of
relativistic axions, a Cosmic axion Background (C$a$B).
In this work we will broadly discuss the production and detection of such a
C$a$B. The possibility of a relativistic axion population is not new, and has
been discussed in several contexts, including axion contributions to $\Delta
N_{\rm eff}$ Baumann:2016wac , axions with keV energies motivated by the
prospect of moduli decaying into axions through Planck-suppressed higher
dimensional operators Conlon:2013isa ; Conlon:2013txa ; Cicoli:2014bfa ;
Cui:2017ytb , or constraints on the conversion of relativistic axions off
primordial magnetic fields in the early Universe Higaki:2013qka ;
Evoli:2016zhj . However, our focus here is to systematize the study of the
C$a$B and demonstrate the terrestrial detection prospects, thereby opening new
paths to discovery. In addition to outlining a number of distinct scenarios
where relativistic axions could be produced in the early, or even late,
Universe, we will demonstrate that such a population can leave a detectable
fingerprint in instruments designed to search for dark matter. Studies of an
additional relativistic component to the Universe are particularly relevant in
light of recent discrepancies in measurements of the Hubble constant between
the early ($H_{0}=67.4\pm 0.5~{}{\rm km}/{\rm s}/{\rm Mpc}$) and late
($H_{0}=73.3\pm 0.8~{}{\rm km}/{\rm s}/{\rm Mpc}$) Universe Verde:2019ivm . An
additional contribution to $N_{\rm eff}$ of around $0.4$ – which relativistic
axions could provide – may play a role in resolving the discrepancy, as they
can relax the uncertainties in the early Universe measurement, giving a value
of $66.3\pm 1.4$, which would reduce, although far from resolve, the tension
Aghanim:2018eyx . This provides an experimental target which we will denote by
“$H_{0}$ Preferred” throughout.
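For reference, the quoted $\Delta N_{\rm eff}\simeq 0.4$ can be translated into an axion energy density using the standard relation for an extra relativistic species present after $e^{+}e^{-}$ annihilation. The following Python sketch makes that conversion explicit; the relation itself is an assumption of the sketch, not stated in the text:

```python
# Energy density of relativistic axions expressed as an effective number
# of extra neutrino species. This uses the standard relation for an extra
# relativistic species present after e+ e- annihilation (an assumption of
# this sketch): Delta N_eff = (8/7) * (11/4)^(4/3) * rho_a / rho_gamma.

def delta_neff(rho_ratio):
    """Delta N_eff for a given axion-to-photon energy density ratio."""
    return (8.0 / 7.0) * (11.0 / 4.0) ** (4.0 / 3.0) * rho_ratio

def rho_ratio_for(delta_n):
    """Axion-to-photon density ratio that yields a target Delta N_eff."""
    return delta_n / ((8.0 / 7.0) * (11.0 / 4.0) ** (4.0 / 3.0))

# The Delta N_eff ~ 0.4 quoted above corresponds to rho_a ~ 0.09 rho_gamma:
print(round(rho_ratio_for(0.4), 3))  # 0.091
```

This gives a quick sense of how far below the CMB density an axion background can sit while still being cosmologically relevant.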
A simplified representation of the C$a$B landscape discussed in this work is
provided in Fig. 1. The black dashed curves show the differential axion energy
density, $\Omega_{a}(\omega)$ (a precise definition is provided below), as a
function of the energy, $\omega$, for the C$a$B variants discussed in this
work. The colored and shaded regions show the reach of two existing (solid
curves) and future (dotted curves) instruments in this same space. We will
explain this figure in more detail later in the introduction, but already we
emphasize that dark-matter searches will probe interesting C$a$B parameters,
particularly at lower frequencies. In Fig. 1, and throughout this work, we
will focus on the axion-photon coupling,
${\cal L}\supset-\frac{g_{a\gamma\gamma}}{4}a\tilde{F}_{\mu\nu}F^{\mu\nu}=g_{a\gamma\gamma}a\mathbf{E}\cdot\mathbf{B}\,.$ (1)
In general, the coupling of the axion to the Standard Model (SM) is highly
uncertain and there exist experiments targeting a number of different axion-SM
couplings (for a review, see e.g. Graham:2015ouw ; Irastorza:2018dyq ). While
we restrict our discussion to $g_{a\gamma\gamma}$, many aspects of the C$a$B
extend to more general couplings.
At present, there are two primary classes of searches for axion backgrounds
using the coupling in (1). The two strategies are broadly distinguished by
where the axions are produced: a relativistic population produced in the cores
of compact astrophysical objects or a non-relativistic dark-matter population.
For existing relativistic searches, the axions are produced in compact
objects, such as stars like the Sun, which act as a bright source of axions
with energies in the $\sim$keV range. Avoiding excess cooling of these objects
from axion emission already puts a stringent bound on $g_{a\gamma\gamma}$
Raffelt:1996wa with comparable limits obtained by directly searching for the
emitted axions in helioscopes Sikivie:1983ip or absorption in direct
detection experiments Moriyama:1995bz . Together these searches, which we
collectively refer to as “star-emission” bounds, are able to set strong bounds
on axions with $m_{a}\lesssim 1~{}{\rm keV}$, with the strongest limits across
this full range given by $g_{a\gamma\gamma}\lesssim 0.66\times 10^{-10}~{}{\rm
GeV}^{-1}$ as determined by the CAST helioscope Anastassopoulos:2017ftl and
observations of Horizontal Branch stars Ayala:2014pea ; Carenza:2020zil . For
$m_{a}\lesssim 10^{-10}~{}{\rm eV}$, these bounds can be strengthened by X-ray
searches from conversion of axions emitted by SN-1987A Payez:2014xsa
(assuming the supernova remnant is a proto-neutron star Bar:2019ifz ), NGC
1275 Reynolds:2019uqt , and super star clusters Dessert:2020lil , reaching
$g_{a\gamma\gamma}\lesssim 3.6\times 10^{-12}~{}{\rm GeV}^{-1}$. We
collectively denote these existing star-emission bounds by
$g_{a\gamma\gamma}^{\rm SE}$. These bounds are relevant because in this work we only consider axions with $m_{a}\ll 1~{}{\rm keV}$; the same axions constituting the cosmic background could therefore also be produced in these astrophysical objects, and must satisfy $g_{a\gamma\gamma}\leq g_{a\gamma\gamma}^{\rm SE}$.
Dark-matter searches instead look for non-relativistic axions with a much
larger local number density Sikivie:1983ip . Traditionally, axion dark matter
has been searched for in microwave cavity haloscopes Krauss:1985ub ;
Sikivie:1985yu . In the presence of a large magnetic field, axions in the
$1-50~{}\mu$eV mass range can resonantly excite the modes of an
$\mathcal{O}({\rm m})$ sized cavity (as $m_{a}^{-1}\sim\mu{\rm
eV}^{-1}\sim{\rm m}$). This detection principle underlies many of the
strongest existing bounds on axion dark matter, as determined by the ADMX
Asztalos:2003px ; Du:2018uak ; Braine:2019fqb and HAYSTAC Zhong:2018rsr
collaborations (see also Ref. Lee:2020cfj ), which already require dark-matter
axions in this mass range to have $g_{a\gamma\gamma}$ orders of magnitude
below $g_{a\gamma\gamma}^{\rm SE}$. Ideas are currently being developed to
extend the accessible axion dark-matter mass window to both higher and lower
values. For $m_{a}\leq 1~{}\mu{\rm eV}$, resonant conversion can still be
obtained when the axion power is read out through a high quality-factor
lumped-element resonator Sikivie:2013laa ; Chaudhuri:2014dla ; Kahn:2016aff ;
Silva-Feaver:2016qhh . A broadband readout of the signal in this mass range
has already been used to set limits comparable to $g_{a\gamma\gamma}^{\rm SE}$
by the ABRACADABRA Ouellet:2018beu ; Ouellet:2019tlz and SHAFT
Gramolin:2020ict instruments, and in the future DMRadio will aim to
significantly improve on these pathfinding results Chaudhuri:2014dla ; Silva-
Feaver:2016qhh ; SnowmassOuellet ; SnowmassChaudhuri . At higher masses the
MADMAX Collaboration will search for dark matter using a dielectric haloscope,
which searches for the electromagnetic emission that an axion generates at
dielectric boundaries in the presence of a magnetic field
TheMADMAXWorkingGroup:2016hpc ; Millar:2016cjp ; Ioannisian:2017srr . Other
proposed instruments searching for dark matter through the axion-photon
coupling include resonant frequency conversion in superconducting cavities
Berlin:2019ahk ; Lasenby:2019prg , looking for a phase difference in locked
lasers Liu:2018icu ; Obata:2018vvr , exciting quasi-degenerate modes in a
superconducting cavity Berlin:2020vrk , detection of small energy deposits in
crystals Marsh:2018dlj ; Trickle:2019ovy , and matching the axion mass to a
plasma frequency Lawson:2019brd ; Gelmini:2020kcu , although this list is far
from exhaustive. In summary, it is likely that in the coming decades the axion
dark matter hypothesis will either be confirmed, or required to satisfy
$g_{a\gamma\gamma}\ll g_{a\gamma\gamma}^{\rm SE}$ in the mass range $1~{}{\rm
neV}\lesssim m_{a}\lesssim 1~{}{\rm meV}$.
Let us now sketch how this progress in the search for axion dark matter can be
repurposed to search for the C$a$B. The detectable power deposited by an axion
population via (1) is naively $\propto g_{a\gamma\gamma}^{2}\rho_{a}$ up to
the details of the experimental readout. Taking the experimental factors as
constant between the two scenarios, we can obtain an estimate of the
sensitivity for a dark-matter instrument to the C$a$B by matching the power
between the two, i.e. $(g_{a\gamma\gamma}^{2}\rho_{a})_{{\rm C}a{\rm
B}}=(g_{a\gamma\gamma}^{2}\rho_{a})_{\scriptscriptstyle{\rm DM}}$. Assuming
axions fully constitute dark matter, astrophysical observations determine that
$\rho_{a}=\rho_{\scriptscriptstyle{\rm DM}}\simeq 0.4~{}{\rm GeV/cm}^{3}$, see
e.g. deSalas:2020hbh .111We take $\rho_{\scriptscriptstyle{\rm DM}}=0.4~{}{\rm
GeV/cm}^{3}$ throughout. The unknown parameter being searched for is then
$g_{a\gamma\gamma}$, which beyond $g_{a\gamma\gamma}\leq
g_{a\gamma\gamma}^{\rm SE}$ is a free parameter, although in specific
scenarios like the QCD axion sharper predictions are possible. Regardless, for
a given instrument we can project the reach in $g_{a\gamma\gamma}$ to
determine the associated reach in deposited axion power. For the C$a$B both
$g_{a\gamma\gamma}\leq g_{a\gamma\gamma}^{\rm SE}$ and $\rho_{a}$ are free
parameters. If the C$a$B is a relic of the early Universe, measurements of
$\Delta N_{\rm eff}$ further require the energy density to be less than that
of the Cosmic Microwave Background (CMB) Aghanim:2018eyx ,
$\rho_{a}\lesssim\rho_{\gamma}$, although the density may be predicted in
certain scenarios. This poses an immediate challenge: for equal
$g_{a\gamma\gamma}$, the power deposited by the C$a$B will be at least a
factor of $\rho_{\scriptscriptstyle{\rm DM}}/\rho_{\gamma}\simeq 10^{9}$
smaller. The situation is even more dire. The detectability of power deposited
by axion dark matter is enhanced by the exceptionally long coherence time of
the signal, originating from the narrow energy distribution associated with
non-relativistic dark matter in the Milky Way. Indeed, for dark matter we
expect $\Delta\omega/\omega\sim 10^{-6}$, whereas generically the C$a$B will
have a broad distribution in energy, $\Delta\omega/\omega\sim 1$. As we will
review, this typically enhances sensitivity to the dark matter signal by a
further three orders of magnitude relative to the C$a$B. Accordingly, for
equal $g_{a\gamma\gamma}$, a relativistic axion that is a relic of the early
Universe is at most a trillionth as detectable as dark matter.
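The power-matching estimate above can be made concrete with a short numerical sketch. All inputs here, including the example coupling reach, are illustrative rather than actual experimental sensitivities, and the $\sim 10^{3}$ coherence penalty is treated as a single multiplicative factor:

```python
# An order-of-magnitude version of the power-matching argument:
# (g^2 rho)_CaB = (g^2 rho)_DM, with the ~10^3 coherence penalty for a
# broad (dw/w ~ 1) spectrum included as one multiplicative factor. The
# coupling reach in the example below is illustrative, not a real limit.

RHO_DM = 0.4e9  # local dark-matter density in eV/cm^3 (0.4 GeV/cm^3)

def rho_cab_reach(g_dm_limit, g_cab, coherence_penalty=1e3):
    """Smallest detectable CaB energy density (eV/cm^3) for an instrument
    whose dark-matter coupling reach is g_dm_limit, assuming the CaB
    coupling is fixed at g_cab (both in the same units)."""
    return coherence_penalty * (g_dm_limit / g_cab) ** 2 * RHO_DM

# Example: a hypothetical dark-matter reach of 1e-16 GeV^-1, with the CaB
# coupling saturating the star-emission bound g_SE = 0.66e-10 GeV^-1:
g_se = 0.66e-10
print(rho_cab_reach(1e-16, g_se))  # ~0.9 eV/cm^3
```

An instrument probing couplings deep below $g_{a\gamma\gamma}^{\rm SE}$ for dark matter thus reaches C$a$B densities of order the CMB energy density, anticipating the sensitivities shown in Fig. 1.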
As we will show, the challenge is not insurmountable. Upcoming axion dark-matter instruments will have sufficient sensitivity that such a C$a$B will be
detectable. This is demonstrated in Fig. 1, where we recast the existing
results of ADMX and HAYSTAC and the expected future reach of DMRadio and
MADMAX onto the equivalent C$a$B parameter space, assuming
$g_{a\gamma\gamma}^{{\rm C}a{\rm B}}=g_{a\gamma\gamma}^{\rm SE}$. In detail,
we define $\Omega_{a}(\omega)$ as the relic density per unit log (angular)
frequency of the axion
$\Omega_{a}(\omega)=\frac{1}{\rho_{c}}\frac{d\rho_{a}}{d\ln\omega}\,,$ (2)
with $\rho_{c}=3M_{\rm Pl}^{2}H_{0}^{2}$ the critical density.222We take
$M_{\rm Pl}\simeq 2.4\times 10^{18}~{}{\rm GeV}$, the reduced Planck mass.
Fixing the coupling, we can recast the stated dark-matter sensitivity to a
sensitivity on $\rho_{a}$ and hence $\Omega_{a}(\omega)$. The figure
demonstrates that DMRadio will be sensitive to scenarios where
$\Omega_{a}(\omega)\lesssim 5\times 10^{-5}$, roughly corresponding to
$\rho_{a}<\rho_{\gamma}$ – a target cavity instruments may also reach in the
future – and further the $H_{0}$ preferred parameter space discussed earlier.
The sensitivity to such small energy densities suggests that the data
collected by axion direct detection experiments can be repurposed to probe a
range of cosmic sources of axions beyond non-relativistic dark matter. The
axion distribution can be narrow or broad and have a peak frequency over a
wide range of energies, depending on how and when they were produced, which
motivates the discussion in this work on mechanisms for generating the C$a$B.
In particular, we discuss a thermal axion background, emission from cosmic
strings, and production from a parametric resonance in the early Universe,
which is expected to produce a roughly Gaussian distribution, all of which are
shown in Fig. 1. For such cosmic relics, the axions will free-stream over long
distances and their spectrum ultimately depends on the cosmic history of the
Universe. In this sense, axion experiments looking for a stochastic axion
background are in close analogy with searches for a stochastic gravitational
wave background (though axions may have a much larger coupling).333For
reference, current pulsar timing arrays and laser interferometers have
sensitivity to gravitational wave backgrounds of relic densities,
$\Omega_{{\rm GW}}$, of ${\cal O}(10^{-10})$ Lentati:2015qwp ; Shannon:2015ect
; Arzoumanian:2018saf and ${\cal O}(10^{-7})$ LIGOScientific:2019vic ,
respectively.
While ADMX is close, no existing instrument is currently sensitive to
cosmological relics. This motivates scenarios where the C$a$B is produced in
the late Universe, where $\rho_{a}$ can be larger than $\rho_{\gamma}$, and in
particular we discuss dark matter decaying to two relativistic axions,
$\chi\to aa$, with $m_{a}\ll m_{\chi}/2$. The resulting spectrum of axions
receives two contributions: one from the decay of dark matter within the Milky
Way, which generates a sharp $\Delta\omega/\omega\sim 10^{-3}$ spectrum, and
the broader spectrum resulting from dark-matter decays throughout the
Universe. Both contributions can be seen in the spectrum shown in Fig. 1. As
we will show, dark-matter instruments can be repurposed into axion telescopes
to search for this dark-matter indirect-detection channel. Such searches can
further exploit the fact that the Milky Way signal will undergo a daily
modulation in microwave cavity instruments, as the relative direction of the
signal, primarily from the Galactic Center, and the experimental magnetic
field vary throughout the day. Indeed, we will show that ADMX is currently
sensitive to unexplored parameter space – a reanalysis of their existing data
may already reveal a signal of the C$a$B.
In the remainder of this work we will expand the above discussion as follows.
To begin with, in Sec. II, we introduce different possible C$a$B sources
focusing on thermal production, dark-matter decay, parametric-resonance
production, and emission from topological defects. Then, in Sec. III we study
the viability of detecting a relativistic axion background with instruments
designed to search for dark-matter axions through the axion-photon coupling.
As already mentioned, we focus on the axion-photon coupling, and further will
restrict our attention to the sensitivity with resonant cavity instruments
such as ADMX and HAYSTAC, and lumped-circuit readout approaches such as
DMRadio. Our analysis will justify the sensitivities shown for these
instruments in Fig. 1. We will not, however, return to carefully consider the
sensitivity of instruments focused on higher mass axion dark matter
$m_{a}\sim\omega>100~{}\mu{\rm eV}$, such as MADMAX. Detecting a relic C$a$B
requires sensitivity to $g_{a\gamma\gamma}$ many orders of magnitude below
$g_{a\gamma\gamma}^{\rm SE}$. This simply will not be achieved in any proposed
high-mass instrument.444That the arguably best-motivated C$a$B candidate
– a relativistic thermal relic – is expected to peak in this energy range
justifies considering dedicated experimental efforts, although we will not
pursue this in the present work. In Sec. IV we then combine the results to
determine projected limits on various C$a$B scenarios, and finally present our
outlook in Sec. V.
## II C$a$B Sources
We now turn to a discussion of specific production mechanisms for the C$a$B.
As mentioned already, axions can be produced in the early and late Universe,
and we will consider examples of both. In each scenario, our goal will be to
characterize the associated axion energy spectrum, which will be a central
ingredient when we come to detection. For this purpose we will again use
$\Omega_{a}(\omega)$ as defined in (2). We emphasize that the present
discussion is not intended to be an exhaustive consideration of all scenarios
from which a C$a$B could emerge, rather, we simply demonstrate that there are
many possibilities. Nonetheless, the analysis will reveal a common theme that
emerges across production mechanisms, in particular that the C$a$B will
generically be a broad distribution, $\Delta\omega/\omega\sim 1$. When
contrasted with the highly coherent signal predicted for dark matter, this
expectation will represent a fundamental difference when approaching searches
for relativistic axions.
### II.1 Thermal Relic
We begin by studying the simplest example of a C$a$B source, thermal
production during the early Universe. Early studies of thermal axion
production can be found in Turner:1986tb ; Chang:1993gm ; Masso:2002np ;
Hannestad:2005df ; Graf:2010tv with a more detailed analysis performed in
Salvio:2013iaa ; Ferreira:2018vjj ; Arias-Aragon:2020shv . Fundamentally, if
an axion was ever in thermal contact with the SM bath at high temperatures,
then a residual thermal population is expected to exist to the present day,
generating a C$a$B with the closest resemblance to the CMB. Indeed, a thermal
axion relic will also be described by a blackbody spectrum, so that
$\Omega_{a}(\omega)=\frac{1}{2\pi^{2}\rho_{c}}\frac{\omega^{4}}{e^{\omega/T_{a}}-1}\,,$ (3)
with a total energy density comparable to that of the CMB.
The above distribution is defined by a single parameter, the present day axion
temperature, $T_{a}$.555We note that $T_{a}$ is not a true temperature since
the axion is expected to have feeble self-interactions. Nevertheless, as for
the CMB, frequencies above the horizon size at thermal decoupling redshift
uniformly with the expansion of the Universe, implying that treating $T_{a}$
as an actual temperature is an excellent approximation. Remaining agnostic as
to the exact axion-SM interaction that brought the axion into thermal
equilibrium initially, at some temperature, $T_{d}$, the two will decouple. If
we assume that since axion freeze-out there have been no entropy dilutions
beyond those in the SM, and further that there was not an early period of
matter domination, then as entropy is approximately conserved, we can relate
the present and decoupling axion temperatures as follows,
$T_{a}\simeq T_{0}\left(\frac{g_{\ast,S}(T_{0})}{g_{\ast,S}(T_{d})}\right)^{1/3}\,.$ (4)
Here $T_{0}\simeq 2.7~{}{\rm K}$ is the present day CMB temperature and
$g_{\ast,S}(T)$ is the number of entropic degrees of freedom as a function of
temperature. Accordingly, we can specify the thermal axion in terms of $T_{a}$
or $T_{d}$. The spectrum for different decoupling temperatures is shown in
Fig. 2. Generically, a thermal distribution is associated with a number
density of axions of ${\cal O}(1-100~{}{\rm cm}^{-3})$ and a peak energy of
$\sim 10^{-4}~{}{\rm eV}$. Again, both are comparable to the CMB. As the
figure shows, $T_{d}\lesssim 1~{}{\rm MeV}$ is excluded by $\Delta N_{\rm
eff}$ measurements, which would include the case where $T_{a}=T_{0}$ as shown
in Fig. 1. Nevertheless, the range $1~{}{\rm MeV}\lesssim T_{d}\lesssim
1~{}{\rm GeV}$ is somewhat favored as a solution to the present $H_{0}$
tension, a possibility that was studied in detail in Ref. DEramo:2018vss .
Ultimately, $T_{d}$ is determined by the microphysics responsible for the
axion coming into thermal contact. Axions coupled to photons have a maximum
possible decoupling temperature, since processes such as $\gamma e\rightarrow ae$ will keep them in equilibrium. Equating this rate with the Hubble rate leads to
estimate of the decoupling temperature, $T_{d}\sim~{}{\rm TeV}\left(g^{{\rm
SE}}_{a\gamma\gamma}/g_{a\gamma\gamma}\right)^{2}$. We conclude that axions
saturating the star-emission bounds would have a decoupling temperature of
around a TeV, while additional interactions can keep the axion thermally
coupled at lower temperatures. This motivates a range of decoupling
temperatures. Lastly, we emphasize that a thermal abundance of axions will
always form as long as the temperature of the Universe was ever above the
decoupling temperature, making this population a robust prediction for any
theory without a low reheating temperature after inflation.
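Eqs. (3) and (4) can be evaluated numerically. The following Python sketch uses benchmark values for $g_{\ast,S}$ and the critical density that are assumptions of the example, not taken from the text:

```python
import math

# A numerical sketch of Eqs. (3) and (4): the present-day axion temperature
# for a given decoupling temperature, and the resulting blackbody spectrum
# Omega_a(omega). The g_{*,S} benchmarks and the critical density below are
# rough illustrative numbers.

T0_K = 2.725            # present CMB temperature in K
K_TO_EV = 8.617e-5      # Boltzmann constant in eV/K
T0 = T0_K * K_TO_EV     # ~2.35e-4 eV
G_S_TODAY = 3.91        # g_{*,S}(T_0): photons plus decoupled neutrinos

def axion_temperature(g_s_decoupling):
    """Present axion temperature T_a in eV from entropy conservation, Eq. (4)."""
    return T0 * (G_S_TODAY / g_s_decoupling) ** (1.0 / 3.0)

def omega_a(omega, t_a, rho_c=3.6e-11):
    """Blackbody relic density per log frequency, Eq. (3).
    omega and t_a in eV; rho_c ~ 3.6e-11 eV^4 is the critical density."""
    return omega**4 / (2 * math.pi**2 * rho_c * (math.exp(omega / t_a) - 1))

# Decoupling at T_d ~ 1 TeV, where all SM species contribute g_{*,S} ~ 106.75:
t_a = axion_temperature(106.75)
print(t_a)  # ~7.8e-5 eV, roughly a third of the CMB temperature
```

Scanning `omega_a` over $\omega$ for different `axion_temperature` inputs reproduces the family of thermal curves shown in Fig. 2.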
### II.2 Dark-Matter Decay
Dark matter need not be absolutely stable, and axions offer one possible decay
channel. Provided that the dark-matter mass is significantly larger than
$m_{a}$, then the axions produced through this process will be relativistic.
If these same axions have a sizable photon coupling then they are in principle
detectable in terrestrial experiments, opening up a new channel for the
indirect detection program. An important aspect of this scenario is that
since the dark-matter abundance is considerably larger than the CMB
($\rho_{\scriptscriptstyle{\rm DM}}/\rho_{\gamma}\simeq 10^{9}$), decaying
dark matter can result in a C$a$B energy density that is significantly larger
than allowed for a relic population, given bounds from $\Delta N_{\rm
eff}$.666This possibility was also noted in Ref. Cui:2017ytb as an
opportunity for generating keV scale relativistic axions which would be
detectable in axion helioscopes. Accordingly, in the short term decaying dark
matter represents the most accessible C$a$B candidate.
Currently, this scenario is only constrained indirectly as a consequence of
the fact that a significant fraction of dark matter decaying into radiation
would modify the expansion history of the Universe Gong:2008gi ;
Poulin:2016nat . Qualitatively, these bounds require the decay rate to be less
than the current Hubble rate $\Gamma\lesssim H_{0}$, so that the dark-matter
lifetime is longer than the age of the Universe, $\tau=1/\Gamma\gtrsim t_{U}$.
As dark matter decaying to a relativistic species would modify the expansion
history, this possibility has been suggested as a potential resolution to the
Hubble tension Vattis:2019efj (though this was later refuted Haridasu:2020xaa
). Given the existing tension in $H_{0}$ measurements between the early and
late Universe, the current bounds depend noticeably on which data set is used.
Recently, using only local measurements, the Dark Energy Survey constrained
the dark-matter lifetime to $\tau\gtrsim 50~{}{\rm Gyr}\sim 3.6\,t_{U}$
Chen:2020iwm and we consider this as our nominal bound. Although this value
is likely to be revised with developments on the Hubble tension, this will not
qualitatively impact our discussion.
The first goal of this section will be to describe the C$a$B that results from
decaying dark matter. We will then outline an explicit decaying dark-matter
model, and lastly, we discuss the feasibility of detecting axions arising from
the related mechanism of neutrino decays.
Figure 2: The spectrum of thermal axions for different decoupling
temperatures, $T_{d}$. The region excluded by measurements of $\Delta N_{\rm
eff}$ is shown, as well as the range where a contribution to $\Delta N_{\rm
eff}$ can partially alleviate the $H_{0}$ tension.
#### II.2.1 The Axion Spectrum from Decaying Dark Matter
Axions produced from dark-matter decay will have a spectrum that results from
two distinct sources: the decay of galactic dark matter within the Milky Way
and the contribution from decays of extragalactic dark matter throughout the
Universe. While in both cases the fundamental process will be dark matter,
which we denote $\chi$, decaying to axions, the resulting spectra will be
significantly different. Nonetheless, the contributions produce similar axion
abundances.
Consider first the extragalactic contribution resulting from dark matter
decaying to axions throughout the isotropic, homogeneous, and expanding
Universe. The number density of axions observed today produced per unit time
and per unit energy is given by the product of several factors. The first of
these is the number density of dark-matter particles at a given time $t$,
$\rho_{\scriptscriptstyle{\rm DM}}(t)/m_{\chi}$. We must also weight this by
the rate at which dark matter decays at this time, which is $\Gamma e^{-\Gamma
t}$ (we assume that the decay rate $\Gamma$ is constant through cosmic
history). Each decay is associated with a differential energy spectrum of the
emitted axions, $dN/d\omega^{\prime}$, normalized such that its integral over
all $\omega^{\prime}$ gives the number of emitted axions. As the emitted
axions are assumed to be relativistic, the axion energy as observed today will
be suppressed by a ratio of scale factors, $\omega=\omega^{\prime}a$, where
$a$ is the scale factor at time $t$ and we take $a_{0}=1$. Finally, the
present number density will be diluted as compared to the density emitted at
$t$, as the Universe is now larger by a factor of $1/a^{3}$. Combining these
factors and then integrating over all time from $t=0$ to the present
$t=t_{0}$, we obtain the total extragalactic differential number density as
$\frac{dn_{a}}{d\omega}=\int_{0}^{t_{0}}dt\,a^{3}\Gamma e^{-\Gamma t}\frac{\rho_{\scriptscriptstyle{\rm DM}}(t)}{m_{\chi}}\frac{dN}{d\omega^{\prime}}\bigg{|}_{\omega^{\prime}=\omega/a}\,.$ (5)
Changing integration variables to the scale factor, we can write this as
$\Omega_{a}(\omega)\simeq\frac{\Omega_{\scriptscriptstyle{\rm DM}}\,\omega^{2}}{m_{\chi}}\int_{0}^{1}\frac{da}{a}\,\frac{\Gamma e^{-\Gamma t(a)}}{H(a)}\frac{dN}{d\omega^{\prime}}\bigg{|}_{\omega^{\prime}=\omega/a}\,,$ (6)
with $\Omega_{\scriptscriptstyle{\rm DM}}\simeq 0.27$ the cosmological dark-
matter density and $t(a)$ the age of the Universe as a function of scale
factor, so that $t(1)=t_{U}$.
Figure 3: The spectrum of axions arising from a component of dark matter
decaying into axions through, $\chi\to aa$, for several dark-matter masses and
lifetimes. The extragalactic component gives a broad spectrum of axions due to
cosmological redshift while decays within the Milky Way produce a narrow
spectrum at half the dark-matter mass.
The above result is appropriate for a general dark-matter decay axion
spectrum. Throughout this work, however, we will specialize to the simple
example of a two-body decay, $\chi\to aa$, again assuming $m_{a}\ll
m_{\chi}/2$, so that the produced axions are relativistic. In this case, the
spectrum takes the following particularly simple form,
$\frac{dN}{d\omega^{\prime}}=2\delta(\omega^{\prime}-m_{\chi}/2)\,.$ (7)
Inserting this into the above gives,
$\Omega_{a}(\omega)\simeq\Omega_{\scriptscriptstyle{\rm DM}}\left(\frac{2\omega}{m_{\chi}}\right)^{2}\frac{e^{-t(2\omega/m_{\chi})/\tau}}{\tau\,H(2\omega/m_{\chi})}\Theta(m_{\chi}/2-\omega)\,,$ (8)
where $\Theta$ is the Heaviside step-function and we have exchanged the decay
rate for the lifetime. The axion energy spectrum as observed today is shown in
Fig. 3 for different dark matter lifetimes and masses. The sharp peak is
associated with galactic decays, described shortly, but the broad continuum
arises from the above expression. The redshifting of the axions produced
throughout the Universe smooths the sharp two-body spectrum into a continuum.
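As a numerical check of Eq. (8), the following sketch evaluates the extragalactic spectrum assuming a flat matter plus $\Lambda$ cosmology for $H(a)$ and $t(a)$; radiation is neglected, and the cosmological inputs are rough benchmark values rather than fits:

```python
import math

# Numerical sketch of the extragalactic spectrum, Eq. (8), for chi -> a a.
# A flat matter + Lambda cosmology is assumed (radiation neglected, which
# is adequate for late decays).

OMEGA_M, OMEGA_L, OMEGA_DM = 0.31, 0.69, 0.27
H0 = 2.2e-18          # Hubble constant in 1/s (~67.4 km/s/Mpc)
T_U = 4.34e17         # age of the Universe in s, consistent with H0 above

def hubble(a):
    return H0 * math.sqrt(OMEGA_M / a**3 + OMEGA_L)

def age(a, steps=2000):
    """t(a) = int_0^a da' / (a' H(a')), evaluated by the midpoint rule."""
    t, da = 0.0, a / steps
    for i in range(steps):
        ai = (i + 0.5) * da
        t += da / (ai * hubble(ai))
    return t

def omega_a_eg(x, tau):
    """Eq. (8) as a function of x = 2 omega / m_chi (the emission scale
    factor) and the dark-matter lifetime tau in seconds."""
    if not 0.0 < x < 1.0:
        return 0.0  # Heaviside step: no axions above omega = m_chi / 2
    return OMEGA_DM * x**2 * math.exp(-age(x) / tau) / (tau * hubble(x))

# Axions observed today at half the injection energy (emitted at a = 0.5),
# for the Fig. 1 benchmark lifetime tau ~ 2e3 t_U:
print(omega_a_eg(0.5, 2e3 * T_U))  # ~2e-5
```

Sweeping `x` from 0 to 1 traces out the broad continuum below the sharp galactic peak in Fig. 3.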
We now compute the axion abundance created within the Milky Way. For this, the
number density of axions per unit energy around Earth is determined by the
conventional indirect detection expression for decaying dark matter, and given
by777For $\tau\gg t_{U}$, we have $e^{-t_{U}/\tau}\simeq 1$, and so this
factor is commonly neglected in indirect detection analyses.
$\frac{dn_{a}}{d\omega}=\frac{e^{-t_{U}/\tau}}{4\pi m_{\chi}\tau}\frac{dN}{d\omega}\int ds\,d\Omega\,\rho_{\scriptscriptstyle{\rm DM}}(s,\Omega)\,.$ (9)
Note we have assumed the observable decays within the Milky Way occur at
$t=t_{U}$. The integral at the end of this expression is commonly referred to as the
$D$-factor in the indirect detection literature (for details see e.g.
Lisanti:2017qoz ). For a canonical Milky Way dark-matter profile, the
$D$-factor has a full sky integrated value of $D_{\scriptscriptstyle{\rm
MW}}\simeq 2.7\times 10^{32}~{}{\rm eV}/{\rm cm}^{2}\cdot{\rm sr}$.888To
obtain this value we assumed a canonical Navarro-Frenk-White profile
Navarro:1995iw ; Navarro:1996gj , took the Earth-Galactic Center distance as
8.127 kpc Abuter:2018drb , and a local dark-matter density of 0.4 GeV/cm3.
The expression in (9) again holds for a general spectrum. If we specialize to
the case of $\chi\to aa$, then the local axion energy density per unit log
frequency is approximately given by,
$\Omega_{a}(\omega)\simeq\frac{\omega^{2}e^{-t_{U}/\tau}}{2\pi m_{\chi}\tau\rho_{c}}\delta(\omega-m_{\chi}/2)D_{\scriptscriptstyle{\rm MW}}\,.$ (10)
Integrating (8) and (10), we can determine the total energy density for the
two contributions. This is maximized for $\tau\sim t_{U}$, where we have
$\rho_{a}^{\scriptscriptstyle{\rm MW}}\simeq 2\rho_{a}^{\scriptscriptstyle{\rm
EG}}\simeq 10^{3}\rho_{\gamma}$, so that, as claimed, the energy densities
from the two contributions are comparable, and a combined density larger than
the CMB can be obtained for a range of lifetimes
($\rho_{a}\lesssim\rho_{\gamma}$ for $\tau\gtrsim 10^{4}\,t_{U}$).
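As an illustrative numerical cross-check of the claim that $\rho_{a}^{\scriptscriptstyle{\rm MW}}\sim 10^{3}\rho_{\gamma}$ for $\tau\sim t_{U}$, the following short Python sketch integrates (10) over $d\ln\omega$, giving $\rho_{a}=e^{-t_{U}/\tau}D/(4\pi\tau c)$. The constants are the representative values quoted in the text; the factor of $c$ converts the lifetime to natural units.

```python
import math

C_CM_PER_S = 2.998e10   # speed of light, cm/s
T_U_S = 4.35e17         # age of the Universe, ~13.8 Gyr, in seconds
D_MW = 2.7e32           # full-sky D-factor, eV/cm^2 (value quoted in the text)
RHO_GAMMA = 0.26        # CMB photon energy density today, eV/cm^3

def rho_a_milky_way(tau_s):
    """Integrate Eq. (10) over dln(omega): rho_a = e^{-t_U/tau} D / (4 pi tau c)."""
    tau_cm = tau_s * C_CM_PER_S   # lifetime in cm (natural units, c = 1)
    return math.exp(-T_U_S / tau_s) * D_MW / (4.0 * math.pi * tau_cm)

ratio = rho_a_milky_way(T_U_S) / RHO_GAMMA
print(f"rho_a^MW / rho_gamma at tau = t_U: {ratio:.0f}")
```

For $\tau=t_{U}$ this returns a ratio of order $10^{3}$, consistent with the estimate above.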
The reason (10) is an approximation is that it assumes the observed axion
spectrum is the same as that in the dark-matter rest frame. While this is
often a reasonable approximation, axion experiments are often sensitive to
extremely narrow energy distributions — recall, for dark matter,
$\Delta\omega/\omega\sim 10^{-6}$. This motivates a more detailed
consideration of the axion energy distribution. There are two contributions
that will resolve the distribution in (10) to have a finite width: the
velocity dispersion of dark matter in the Milky Way and the finite velocity of
the Earth through the dark-matter halo. Both velocities result in a net motion
between the observer and source of axions, and therefore the axion energies
will be Doppler shifted by a factor of $v\sim 10^{-3}$, which is the magnitude
of both velocity components. In this work, we will simply replace
$\delta(\omega-m_{\chi}/2)$ in (10) with a Gaussian of width $10^{-3}$
centered at half the dark-matter mass. The actual distribution is more
complex, indeed it depends on the dark-matter distribution and varies across
the sky given the motion of the Earth in the halo frame (for further details,
see Ref. Speckhard:2015eva ). Nevertheless, the main aspect of the
distribution relevant for forecasting sensitivities is the width, and the
Gaussian approximation adequately accounts for this.
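The Gaussian replacement for $\delta(\omega-m_{\chi}/2)$ described above can be sketched as follows (the dark-matter mass here is a hypothetical example; the $10^{-3}$ relative width is the value adopted in the text):

```python
import math

def line_profile(omega, m_chi, rel_width=1e-3):
    """Gaussian approximation to the Milky Way decay line, unit-normalized.

    Replaces delta(omega - m_chi/2) in Eq. (10); rel_width ~ 1e-3 models
    the Doppler broadening from halo dispersion and the Earth's motion."""
    mu = 0.5 * m_chi                 # line centered at half the DM mass
    sigma = rel_width * mu           # absolute width
    z = (omega - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Numerical check that the profile integrates to unity over +/- 8 sigma.
m_chi = 1.0e-5                       # eV, hypothetical dark-matter mass
mu, sigma = 0.5 * m_chi, 1e-3 * 0.5 * m_chi
n = 4001
grid = [mu + (i / (n - 1) - 0.5) * 16 * sigma for i in range(n)]
dw = grid[1] - grid[0]
norm = sum(line_profile(w, m_chi) for w in grid) * dw
print(f"integral of profile: {norm:.4f}")
```

The unit normalization ensures the broadened spectrum carries the same total energy density as the original delta-function line.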
#### II.2.2 An Explicit Model: Decaying Scalar Dark Matter
Above we considered the axion abundance produced through dark-matter decay,
with all model dependence in the axion spectrum, $dN/d\omega$, and the
lifetime $\tau$. We now study a decaying dark-matter model which predicts a
detectable C$a$B, and generates the simple $\chi\to aa$ spectrum used above.
Consider a theory with a complex scalar field, $\Phi$, with potential,
$V(\Phi)=\lambda^{2}\left(\left|\Phi\right|^{2}-\frac{f_{a}^{2}}{2}\right)^{2}\,.$
(11)
The theory has a spontaneously broken U(1) and we identify the Goldstone boson
with the axion and the radial mode with dark matter, decomposing the field as
$\Phi=(\chi+f_{a})e^{ia/f_{a}}/\sqrt{2}$.
In the broken phase, the relevant axion dark-matter couplings are (the potential also contains terms which can mediate annihilation to axions, $\chi\chi\to aa$; for the masses considered in this work, this annihilation is completely subdominant to the decay),
$V(a,\chi)\supset\frac{1}{2}(2\lambda^{2}f_{a}^{2})\chi^{2}+\frac{1}{2}(2\lambda^{2}f_{a})\chi
a^{2}\,,$ (12)
from which we identify the dark-matter mass as $m_{\chi}=\sqrt{2}\lambda
f_{a}$. Further, the axion dark-matter coupling allows us to compute the rate
of dark-matter decay as,
$\Gamma_{\chi\to aa}=\frac{m_{\chi}^{3}}{16\pi f_{a}^{2}}\,,$ (13)
with corresponding axion spectrum as given in (7). In order for the lifetime $\Gamma_{\chi\to aa}^{-1}$ to be at most comparable to the age of the Universe, we then require $f_{a}$ to be well below the weak scale. This may seem hard to
reconcile given the stringent bounds on the axion-photon coupling,
$g_{a\gamma\gamma}^{\rm SE}\ll 1~{}{\rm TeV}^{-1}$, however, this can be
natural if the axion obtains its photon coupling through axion-axion or
photon-dark-photon mixing, and as we demonstrate in App. A this does not
require any elaborate model building. Nevertheless, this does require
forbidding any significant terms in the scalar potential that mix the SM
Higgs and $\Phi$. In generating $g_{a\gamma\gamma}$, $\chi$ may also obtain a
coupling directly to photons. Even though searches for $\chi\to\gamma\gamma$
are significantly more stringent than the axion searches discussed in this
work, these constraints are not significant in the parameter space of
interest, as we work in the limit of $f_{a}\ll g_{a\gamma\gamma}^{-1}$.
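The requirement on $f_{a}$ can be made quantitative by inverting (13) for a target lifetime. The sketch below does this for an illustrative dark-matter mass of our choosing; $\hbar$ converts the rate in eV to an inverse lifetime in seconds.

```python
import math

HBAR_EV_S = 6.582e-16   # hbar in eV*s
T_U_S = 4.35e17         # age of the Universe in seconds

def f_a_for_lifetime(m_chi_ev, tau_s):
    """Invert Gamma = m_chi^3 / (16 pi f_a^2), Eq. (13), for f_a
    given a target lifetime tau (all energies in eV)."""
    gamma_ev = HBAR_EV_S / tau_s
    return math.sqrt(m_chi_ev**3 / (16.0 * math.pi * gamma_ev))

m_chi = 1.0e-5                             # eV, hypothetical example mass
fa = f_a_for_lifetime(m_chi, T_U_S)
print(f"f_a for tau = t_U at m_chi = {m_chi:g} eV: {fa:.2e} eV")
```

For $m_{\chi}\sim 10^{-5}$ eV the result is $f_{a}\sim 10^{8}$ eV, indeed many orders of magnitude below the weak scale.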
#### II.2.3 Cosmic Neutrino Background Decay
A C$a$B may also be produced as a byproduct of neutrino decays of the cosmic
neutrino background (C$\nu$B). Flavor off-diagonal couplings of axions to
neutrinos can be the result of a global lepton number broken by multiple
scalar fields with the axion playing the role of a Majoron Gelmini:1980re or,
more generally, the familon Feng:1997tn . A generic axion can have a coupling
to neutrinos given by ($Q=Q^{\dagger}$),
${\cal L}\supset
Q_{ij}\frac{\partial_{\mu}a}{f_{a}}\nu^{\dagger}_{i}\bar{\sigma}^{\mu}\nu_{j}\,.$
(14)
Here we assume the neutrinos are Majorana and work with two-component fermion
notation. The neutrino decay rates were recently calculated in Ref.
Dror:2020fbh for $L_{i}-L_{j}$ gauge bosons in the high-energy limit and the
results can be translated to axions with the relation, $1/f_{a}\leftrightarrow
g_{X}/m_{X}$,
$\displaystyle\Gamma_{\nu_{i}\rightarrow\nu_{j}a}$
$\displaystyle=\frac{1}{16\pi m_{i}}\overline{\left|{\cal
M}\right|^{2}}\left(1-m_{j}^{2}/m_{i}^{2}\right)\,,$ (15)
$\displaystyle\overline{\left|{\cal M}\right|^{2}}$
$\displaystyle=\frac{1}{f_{a}^{2}}\left((m_{i}^{2}-m_{j}^{2})^{2}\text{Re}Q_{ij}^{2}+(m_{i}+m_{j})^{4}\text{Im}Q_{ij}^{2}\right)\,.$
Parametrically, $\Gamma_{\nu_{i}\to\nu_{j}a}\sim m_{\nu}^{3}Q^{2}/f_{a}^{2}$, and for the decay rate to be comparable to the Hubble rate while avoiding the star-emission
bounds requires $f_{a}\ll 1~{}{\rm TeV}$ while keeping $g_{a\gamma\gamma}\ll
1~{}{\rm TeV}^{-1}$. As mentioned previously, this can occur naturally for
axions that inherit a photon interaction through axion-axion or photon-dark
photon mixing, see also App. A.
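A minimal implementation of the decay rate in (15) is sketched below. The mass splitting, coupling, and decay constant are illustrative choices of ours, not fitted values; the point is only that lifetimes well below $t_{U}$ are easily obtained for $f_{a}\ll 1$ TeV.

```python
import math

HBAR_EV_S = 6.582e-16   # hbar in eV*s
T_U_S = 4.35e17         # age of the Universe in seconds

def gamma_nu_decay(m_i, m_j, f_a, re_q, im_q):
    """Gamma(nu_i -> nu_j a) from Eq. (15); all masses and f_a in eV.

    |M|^2 = [(m_i^2 - m_j^2)^2 Re(Q)^2 + (m_i + m_j)^4 Im(Q)^2] / f_a^2."""
    m2 = ((m_i**2 - m_j**2)**2 * re_q**2 + (m_i + m_j)**4 * im_q**2) / f_a**2
    return m2 * (1.0 - m_j**2 / m_i**2) / (16.0 * math.pi * m_i)

# Example point: m_i = 0.05 eV, m_j = 0.01 eV, f_a = 1 TeV, Q_ij = 1 (all hypothetical)
gamma = gamma_nu_decay(0.05, 0.01, 1.0e12, 1.0, 0.0)
tau_s = HBAR_EV_S / gamma
print(f"neutrino lifetime: {tau_s:.2e} s (t_U = {T_U_S:.2e} s)")
```

Such a point gives a lifetime far shorter than the age of the Universe, so essentially the entire C$\nu$B would have decayed by today.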
The strongest constraints on the neutrino lifetime are from observations of
neutrino free-streaming in the CMB Hannestad:2004qu . Current limits allow for
neutrino lifetimes well below the age of the Universe; indeed, a recent
reanalysis of the bounds in Ref. Barenboim:2020vrr found a conservative limit
that is on the order of several days. Accordingly there is a significant
possibility that neutrino decays populate the C$a$B. For decays while the
neutrinos are still relativistic, the axions are produced with an energy
comparable with the neutrino temperature, and hence this results in a spectrum
similar to the thermal background considered in Sec. II.1. Axions produced
from late-time neutrino decays – after neutrinos have become non-relativistic
– will have a peaked spectrum around the neutrino mass. In either event, the
resulting spectrum will be subject to the same challenge as the thermal
background, in that it is located at high frequencies where it is unlikely to
be observable in the near future due to the lack of sensitive experiments at
these energies. As such, we will not evaluate this case in detail, but note
should the thermal C$a$B become accessible, then likely so too would this
scenario.
### II.3 Parametric Resonance
A C$a$B can also be produced through the process of parametric resonance in
the early Universe Dolgov:1989us ; PhysRevD.42.2491 ; Kofman:1994rk ;
Kofman:1997yn . In order for this process to occur, the axion must be coupled
to a scalar field which is heavily displaced from its minimum after inflation.
This will occur by quantum fluctuations for any scalar field, unless it is
fixed to the origin by an effective mass larger than the Hubble scale at
inflation. When such a scalar field begins to oscillate about its minimum it
will produce axions with a Bose-enhanced rate that will typically deplete its
energy density within an e-fold. The characteristic axion energy as observed
today will have redshifted dramatically and can be much lower than the energy
of axions produced by perturbative decay of a scalar field. As the parametric-resonance phenomenon is a non-perturbative process that occurs out of
equilibrium, computing the spectrum in detail requires evolving multiple
scalar fields on the lattice. We will not attempt such a calculation here but
instead perform qualitative estimates. Earlier work on relativistic axion
production considered potential modifications to $\Delta N_{\rm eff}$ and
parallel production of gravitational waves Ema:2017krp . In this subsection we
review the dynamics of axion production and explore the parameter space
leading to a detectable axion background. We follow the notation and
discussion of Refs. Co:2017mop ; Dror:2018pdh which studied the prospect of
ultralight bosonic dark matter produced through parametric resonance.
We now focus on an explicit realization of the parametric-resonance phenomenon,
which can be achieved using the same model introduced in Sec. II.2. Recall,
there we had the axion arise from a global symmetry breaking complex scalar,
$\Phi$, with a radial mode $\chi$ playing the role of dark matter. Our
starting point will again be the potential given in (11) and as our initial
condition we take $\chi$ to have a large field value, $\chi_{i}\gg f_{a}$. At
early times we assume the second derivative of the potential with respect to
the field is greater than Hubble squared, $V^{\prime\prime}(\chi_{i})\gtrsim
H^{2}$, such that the scalar is stuck and the field redshifts as vacuum
energy. When $V^{\prime\prime}(\chi_{i})\sim H^{2}$ the field begins to
oscillate, resulting in exponential production of both the radial and axion
modes. Since $\chi_{i}\gg f_{a}$, $m_{\chi}^{2}\chi^{2}$ is small relative to
$\lambda\chi^{4}$ and can be neglected during the oscillations. This leads to
a broad resonance that rapidly depletes the energy density stored in the
original scalar field. Furthermore, since the axion energy at the time of
production is set by the effective mass of $\chi$, $m_{\chi}^{\rm
eff}(\chi)\equiv\lambda\chi$, it is independent of the temperature of the SM
bath and often considerably smaller. Subsequent redshift until today can lead
to relativistic axions over a wide range of energies, well below the
temperature of the CMB and potentially within reach of low-frequency axion
haloscopes.
We now estimate the abundance and energy spectrum of the axion and radial
mode. Axions are emitted with energy $\omega_{a}\sim m_{\chi}^{\rm
eff}(\chi_{i})$ during radiation domination at a temperature of oscillation,
$T_{\rm osc}\sim\sqrt{\Gamma M_{\rm Pl}}$, where $\Gamma$ is the rate at which the oscillation energy is converted to axions. For perturbative production, $\Gamma$ cannot be arbitrarily large,
in detail $\Gamma\lesssim m_{\chi}^{3}/f_{a}^{2}$. For parametric resonance,
the particle production will occur within a few oscillations, so
$\Gamma\sim\lambda\chi_{i}$. The characteristic axion energy, as measured
today, is redshifted using $T_{\rm osc}$ and given by,
$\displaystyle\bar{\omega}_{a}$ $\displaystyle\simeq m_{\chi}^{\rm
eff}(\chi_{i})\left(\frac{s(T_{0})}{s(T_{\rm osc})}\right)^{1/3}$ (16)
$\displaystyle\simeq 10^{-15}~{}{\rm eV}\left(\frac{m_{\chi}^{\rm
eff}(\chi_{i})}{1~{}{\rm MeV}}\right)^{1/2}\,,$
where $s(T)$ is the entropy of the SM bath. Accordingly, provided
$m_{\chi}^{\rm eff}(\chi_{i})\ll M_{\rm Pl}$, we have $\bar{\omega}_{a}\ll
T_{0}$, and the axion energy will be well below the cosmic photon temperature.
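The numerical scaling in (16) can be evaluated directly; the sketch below simply applies the quoted scaling $\bar{\omega}_{a}\simeq 10^{-15}~{\rm eV}\,(m_{\chi}^{\rm eff}/1~{\rm MeV})^{1/2}$ (the entropy ratio is folded into that prefactor) for two illustrative effective masses.

```python
T0_EV = 2.35e-4   # CMB photon temperature today, in eV

def omega_a_today(m_eff_ev):
    """Mean CaB energy today from parametric resonance, Eq. (16) scaling:
    omega_a ~ 1e-15 eV * (m_eff / 1 MeV)^{1/2}."""
    return 1.0e-15 * (m_eff_ev / 1.0e6) ** 0.5

for m_eff in (1.0e6, 1.0e12):   # 1 MeV and 1 TeV effective masses (examples)
    w = omega_a_today(m_eff)
    print(f"m_eff = {m_eff:.0e} eV -> omega_a ~ {w:.1e} eV (T0 = {T0_EV:.1e} eV)")
```

Even for a TeV-scale effective mass the present-day axion energy sits many orders of magnitude below $T_{0}$, as stated above.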
With an estimate for where the spectrum will peak, next we consider the $a$
and $\chi$ comoving number densities after the oscillations have concluded.
These can be parameterized as,
$Y_{a}=f\frac{\rho_{\chi,{\rm osc}}}{\bar{\omega}_{a}^{\prime}s(T_{\rm
osc})}\,,\hskip 8.5359ptY_{\chi}=(1-f)\frac{\rho_{\chi,{\rm
osc}}}{\bar{\omega}_{\chi}^{\prime}s(T_{\rm osc})}\,,$ (17)
where $f$ denotes the fraction of energy transferred to axions,
$\bar{\omega}_{a}^{\prime}$ and $\bar{\omega}_{\chi}^{\prime}$ are the mean
energies of each particle at the time of production, and $\rho_{\chi,{\rm
osc}}\simeq\frac{1}{4}\lambda^{2}\chi_{i}^{4}$ is the total energy density in
the radial direction prior to oscillations. Since the vacuum mass of the radial mode can be neglected in this limit, both $a$ and $\chi$ are produced with comparable energy densities, $f\simeq 1/2$, and comparable energies,
$\bar{\omega}_{a}^{\prime}\simeq\bar{\omega}_{\chi}^{\prime}\simeq\lambda\chi_{i}$.
As noted above, the energy is determined by the effective mass, which is
driven by the quartic. These determine the comoving number densities to be
$Y_{a}\simeq
Y_{\chi}\simeq\frac{0.01}{\lambda^{1/2}}\left(\frac{\chi_{i}}{M_{\rm
Pl}}\right)^{3/2}\,,$ (18)
so that the axion energy density today is fixed by the initial scalar field
value,
$\frac{\rho_{a}}{\rho_{c}}\simeq 3\times 10^{-7}\left(\frac{\chi_{i}}{M_{\rm
Pl}}\right)^{2}\,.$ (19)
We conclude for $\chi_{i}<M_{\rm Pl}$, parametric resonance produces a maximum
axion relic density a few orders of magnitude below that of the CMB.
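As a compact numerical restatement of (18) and (19), the sketch below evaluates the comoving yield and the relic density, and compares the maximal case $\chi_{i}=M_{\rm Pl}$ against the CMB density. The reduced Planck mass and $\Omega_{\gamma}$ are standard values, not from the text.

```python
M_PL = 2.4e27          # reduced Planck mass in eV
OMEGA_GAMMA = 5.4e-5   # CMB photon density today relative to rho_c

def yield_a(lam, chi_i):
    """Comoving axion yield Y_a from Eq. (18)."""
    return 0.01 / lam**0.5 * (chi_i / M_PL) ** 1.5

def omega_axion(chi_i):
    """rho_a / rho_c from Eq. (19); depends only on the initial field value."""
    return 3.0e-7 * (chi_i / M_PL) ** 2

w = omega_axion(M_PL)   # maximal case, chi_i = M_Pl
print(f"max rho_a/rho_c = {w:.1e}, i.e. {w / OMEGA_GAMMA:.1e} of the CMB")
```

The maximal relic density comes out two to three orders of magnitude below that of the CMB, as concluded above.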
The characteristic frequency of the axions may span many orders of magnitude,
and depends on the initial field value as well as the quartic. To explore the
parameter space it is helpful to focus on a specific case where $\chi$ makes
up dark matter. (Alternatively, if $\chi$ has significant interactions with the SM, it may transfer its entropy into the rest of the thermal bath and be cosmologically unobservable, as considered in Ref. Co:2017mop .) Requiring
$\chi$ to have the observed dark-matter abundance provides an additional
constraint,
$m_{\chi}Y_{\chi}\simeq
m_{\chi}\frac{0.01}{\lambda^{1/2}}\left(\frac{\chi_{i}}{M_{\rm
Pl}}\right)^{3/2}\sim 1~{}{\rm eV}\,.$ (20)
To study the resulting model space, we take the free parameters to be $\{m_{\chi},\,\chi_{i},\,\lambda\}$, one combination of which is restricted by the requirement of (20). Note that the vacuum expectation value is then a dependent parameter, $f_{a}=m_{\chi}/\sqrt{2}\lambda$.
This leads to a prediction for the axion energy today,
$\bar{\omega}_{a}\simeq 5\times 10^{-7}~{}{\rm
eV}\left(\frac{m_{\chi}}{1~{}{\rm eV}}\right)\left(\frac{\chi_{i}}{M_{\rm
Pl}}\right)^{2}\,.$ (21)
The relative spread in the axion spectrum must be determined using lattice
simulations though we expect it to be ${\cal O}(1)$. Simulations for a similar
theory have found the spectrum to be roughly a Gaussian with a relative width
of order unity Micha:2004bv . Here we have only included the influence of
redshift on the axion energy spectrum. It is known that there are additional
number-changing processes that tend to move the axion spectrum toward a
thermal distribution. These are slow and not expected to effectively
thermalize axions on a cosmological timescale, but may shift the peak axion
frequency by an order of magnitude Micha:2004bv .
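Equations (20) and (21) can be tied together in a few lines: fixing the $\chi$ relic abundance determines the quartic $\lambda$, while the mean C$a$B energy follows from (21) independently of $\lambda$. The example point below ($m_{\chi}=1$ eV, $\chi_{i}=M_{\rm Pl}$) is hypothetical.

```python
M_PL = 2.4e27   # reduced Planck mass, eV

def quartic_from_abundance(m_chi, chi_i):
    """Solve Eq. (20), m_chi * Y_chi ~ 1 eV, for lambda (masses in eV)."""
    return (0.01 * m_chi * (chi_i / M_PL) ** 1.5) ** 2

def omega_a_today(m_chi, chi_i):
    """Mean axion energy today from Eq. (21), in eV."""
    return 5.0e-7 * m_chi * (chi_i / M_PL) ** 2

m_chi, chi_i = 1.0, M_PL   # illustrative point
lam = quartic_from_abundance(m_chi, chi_i)
print(f"lambda ~ {lam:.0e}, omega_a ~ {omega_a_today(m_chi, chi_i):.1e} eV")
```

For this point the abundance condition gives $\lambda\sim 10^{-4}$ and a present-day mean axion energy $\bar{\omega}_{a}\sim 5\times 10^{-7}$ eV, matching the scaling in (21).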
Figure 4: The parameter space of parametric resonance producing axions and
$\chi$, where everywhere in the plot $\chi$ constitutes all of dark matter.
The constraints shown arise from dark-matter stability, warmness, efficient
parametric resonance, and isocurvature. Further, we show two contours of axion
energy density, and separately the mean C$a$B energy. The higher mean energy
of $\bar{\omega}_{a}=10^{-9}$ eV falls entirely in the region excluded by DM
stability; however, this constraint is removed if we no longer assume $\chi$
constitutes the dark matter of the Universe.
The parameter space of PR production, assuming $\chi$ makes up dark matter, has several constraints summarized below. (Other bounds on this scenario include requiring $\chi_{i}$ to be sub-Planckian, avoiding an epoch of early matter domination (for consistency), and perturbativity of the quartic. These bounds are subdominant to those we consider for the entire allowed region.)
1.
DM Unstable: As discussed in Sec. II.2, $\chi$ may also decay (perturbatively) into axions with a rate given by (13). Following the indirect detection discussion there, we use the nominal bound of $\tau\gtrsim 50~{}{\rm Gyr}\sim 3.6\,t_{U}$ Chen:2020iwm .
2.
Warmness: This bound arises from the requirement that $\chi$ are cold enough
to constitute dark matter today, roughly taken to be
$p_{\chi}(T)/m_{\chi}\lesssim 10^{-3}$ at recombination. In detail, we require
$\frac{p_{\chi}(T_{\rm eq})}{m_{\chi}}\simeq\frac{10^{-11}~{}{\rm
eV}}{m_{\chi}}\left(\frac{m_{\chi}(\chi_{i})}{1~{}{\rm
MeV}}\right)^{1/2}\lesssim 10^{-3}\,.$ (22)
3.
Inefficient PR: Parametric resonance is efficient at producing axions when the
initial field value is much larger than its vacuum value. Otherwise the
resonance is a narrow feature and is unable to convert all the energy density
in $\chi$ into field excitations. For this condition we take the rough bound,
$\chi_{i}\gtrsim 10f_{a}$.
4.
Isocurvature: During inflation we assume $\chi$ has an effective mass below
the Hubble scale at inflation, $\lambda\chi_{i}\lesssim H_{\rm inf}$ such that
fluctuations during inflation displace the field away from the minimum. These
isocurvature perturbations can be observed in the CMB, placing a bound
$\chi_{i}/H_{\rm inf}\gtrsim(\pi^{2}\beta{\cal P}_{R}(k_{\ast}))^{-1/2}$,
where $\beta\leq 0.011$ is the isocurvature fraction and ${\cal
P}_{R}(k_{\ast})\simeq 2.1\times 10^{-9}$ is the observed amplitude of the
curvature power spectrum at the pivot scale Akrami:2018odb . Combining these
results places a limit on the scalar quartic, $\lambda$.
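Three of the constraints above (DM stability, warmness, and efficient resonance) reduce to simple pass/fail checks for a parameter point $(m_{\chi},\,\chi_{i},\,\lambda)$; the isocurvature bound is omitted here since it also requires $H_{\rm inf}$. The example point is our own and was chosen to lie in the allowed region.

```python
import math

HBAR_EV_S = 6.582e-16   # hbar in eV*s
GYR_S = 3.156e16        # one Gyr in seconds
M_PL = 2.4e27           # reduced Planck mass, eV

def stable_enough(m_chi, f_a, tau_min_gyr=50.0):
    """DM stability: tau = 16 pi f_a^2 / m_chi^3 above ~50 Gyr (from Eq. 13)."""
    tau_s = HBAR_EV_S * 16.0 * math.pi * f_a**2 / m_chi**3
    return tau_s > tau_min_gyr * GYR_S

def cold_enough(m_chi, m_eff):
    """Warmness, Eq. (22): momentum-to-mass ratio at equality below 1e-3."""
    return (1.0e-11 / m_chi) * (m_eff / 1.0e6) ** 0.5 < 1.0e-3

def resonance_efficient(chi_i, f_a):
    """Broad resonance requires chi_i >> f_a; rough bound chi_i > 10 f_a."""
    return chi_i > 10.0 * f_a

# Hypothetical allowed point: m_chi = 1e-5 eV, lambda = 1e-20, chi_i = 0.01 M_Pl
m_chi, lam, chi_i = 1.0e-5, 1.0e-20, 0.01 * M_PL
f_a = m_chi / (math.sqrt(2.0) * lam)   # dependent parameter
m_eff = lam * chi_i                    # effective mass at oscillation onset
print(stable_enough(m_chi, f_a), cold_enough(m_chi, m_eff),
      resonance_efficient(chi_i, f_a))
```

All three checks pass for this point, illustrating the kind of scan used to produce Fig. 4.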
The parameter space of relativistic axions produced through parametric
resonance in light of these bounds is shown in Fig. 4, where we have fixed the
abundance of $\chi$ to match that of dark matter. We see that for $\chi$ to
constitute dark matter, we require $m_{\chi}\lesssim 1~{}{\rm keV}$ and the
condition of a detectable C$a$B further restricts $\chi$ to large initial
field values and smaller masses.
These results demonstrate that a consistent C$a$B produced through parametric
resonance can occur over an enormous range of frequencies. From (19),
detectability will be maximized for $\chi_{i}\sim M_{\rm Pl}$ (up to
consistency of the warmness criteria). The spectrum is then expected to be
roughly an $\mathcal{O}(1)$ width Gaussian Micha:2004bv , with peak frequency
determined from (21) as $\bar{\omega}_{a}\simeq 5\times 10^{-7}~{}{\rm
eV}\,(m_{\chi}/1~{}{\rm eV})$. In the scenario where $\chi$ constitutes dark
matter, this allows a mean energy as low as $\bar{\omega}_{a}\sim 10^{-28}$
eV, when $m_{\chi}\sim 10^{-22}$ eV is in the fuzzy dark-matter regime, and as
high as $5\times 10^{-12}$ eV when saturating the dark-matter stability
criteria, shown in Fig. 4. Removing the requirement that $\chi$ be dark
matter, the frequency range can then be extended even further, particularly to
higher frequencies which may be accessible by DMRadio, or even ADMX and
HAYSTAC.
### II.4 Topological Defect Decay
Figure 5: The energy density (left) and spectrum (right) of a C$a$B produced
from the emission of cosmic strings. The relic density is shown as a function
of the axion decay constant for different decoupling temperatures, $T_{d}$.
The spectrum is shown as a function of energy, however, the shape at high
frequencies is sensitive to the decoupling temperature, which we take to be
equal to $f_{a}$ (solid) or $f_{a}/10^{3}$ (dashed). In both figures we also
show bounds from $\Delta N_{\rm eff}$ and the region preferred to mildly
alleviate the Hubble tension. On the left, we further show the region excluded
by the maximum possible reheating scale of the Universe, derived by assuming
instantaneous reheating from the maximum scale of inflation.
A C$a$B may also be produced through the decay of topological defects. In this
section we study the abundance and energy spectrum of axions emitted from a
network of cosmic strings formed during a thermal phase transition. Axions
produced during the phase transition itself are in thermal contact with the SM
bath and will contribute to the thermal background discussed in Sec. II.1. We
focus on the axions produced after chemical decoupling, when the cosmic-string network has entered the regime in which there is a constant number of strings per Hubble volume (up to log-violations, which we also take into account), commonly referred to as the “scaling regime”. In computing the
spectrum we work in the limit $m_{a}\to 0$ where strings remain until late
times and there are no domain walls. If there is a finite mass, and the
domain-wall number is equal to 1, the network will quickly collapse when
$H\sim m_{a}$. This will produce an additional burst of axion production and a
sharp drop in the axion spectrum at a characteristic frequency. We do not
consider these effects but they may produce additional distinctive signals.
The spectrum of axions emitted by cosmic strings is still an active area of
discussion in the literature with the debate centered on whether the typical
axion energy emitted by a string is of order the inverse length or inverse
thickness of the string (in particular, see Refs. Gorghetto:2018myk ;
Gorghetto:2020qws and Buschmann:2019icd ; Dine:2020pds ). We estimate the
abundance and spectrum following numerical simulations done for the QCD axion
in Refs. Gorghetto:2018myk ; Gorghetto:2020qws , where the simulations suggest
that the spectrum is dominated by low-energy axions.
To begin with, the energy density of cosmic strings can be parameterized using
the average length of string within a Hubble length, $\xi$, and the energy of
that string, given by the product of its tension, $\mu_{\rm eff}$, and a
Hubble length, $1/H$. The total energy is then averaged over Hubble volume,
$1/H^{3}$. Following Ref. Gorghetto:2018myk we write this as,
$\rho_{s}=\frac{\xi(t)\mu_{\rm eff}(t)}{t^{2}}\,.$ (23)
This form is convenient since both $\xi$ and $\mu_{\rm eff}$ only evolve
logarithmically with time. Their evolution can be parameterized as
Gorghetto:2018myk
$\displaystyle\xi(t)$ $\displaystyle\simeq\alpha\ln\frac{m_{r}}{H}+\beta\,,$
(24) $\displaystyle\mu_{\rm eff}(t)$ $\displaystyle\simeq\pi
f_{a}^{2}\ln\frac{m_{r}\gamma}{H\sqrt{\xi}}\,.$
Here $m_{r}\sim f_{a}$ is the inverse string-core width, $\gamma$ is roughly a constant in
time which we will approximate as unity, $\alpha\simeq 0.24\pm 0.02$
Gorghetto:2020qws is also a constant, and finally we take $\beta\simeq 0$
since we are interested in late times, where the log term is the dominant
contribution.
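The parameterization in (23)-(24) and the ratio (28) translate directly into code. The sketch below evaluates $\rho_{s}/\rho_{\scriptscriptstyle{\rm SM}}$ at two values of $\ln(m_{r}/H)$, with $\gamma\simeq 1$ and $\beta\simeq 0$ as approximated in the text; the decay constant is an illustrative choice.

```python
import math

M_PL = 2.4e27   # reduced Planck mass, eV

def xi(log_mr_over_h, alpha=0.24, beta=0.0):
    """Average string length per Hubble volume, Eq. (24)."""
    return alpha * log_mr_over_h + beta

def mu_eff(f_a, log_mr_over_h, xi_val):
    """Effective string tension, Eq. (24), taking gamma ~ 1."""
    return math.pi * f_a**2 * (log_mr_over_h - 0.5 * math.log(xi_val))

def rho_ratio(f_a, log_mr_over_h):
    """rho_s / rho_SM from Eq. (28); only logarithmic time dependence."""
    x = xi(log_mr_over_h)
    return 4.0 * x * mu_eff(f_a, log_mr_over_h, x) / (3.0 * M_PL**2)

f_a = 1.0e19   # eV; illustrative decay constant
for log_ratio in (30.0, 60.0):   # ln(m_r/H) at two epochs
    print(f"ln(m_r/H) = {log_ratio}: rho_s/rho_SM = {rho_ratio(f_a, log_ratio):.2e}")
```

The ratio grows slowly (roughly as the square of the log), which is why the axion emission rate in (27) tracks the SM energy density up to logarithmic corrections.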
The rate of axion energy emission during the scaling regime per unit volume,
$\Gamma$, is given by the difference of the energy density of “free” strings
(strings without inter-commutation and radiation) and the energy density
stored in strings,
$\Gamma=\dot{\rho}^{\rm free}_{s}-\dot{\rho}_{s}\,.$ (25)
We assume that both energy densities are equal at an initial time, $t_{i}$.
The free energy density at a later time $t$ is then,
$\rho_{s}^{\rm free}=\frac{\xi(t_{i})\mu_{\rm eff}(t)}{t_{i}t}\,.$ (26)
This follows as $\rho_{s}^{\rm free}\propto t^{-1}$ and we require
$\rho_{s}^{\rm free}$ and $\rho_{s}$ to match at $t=t_{i}$. Inserting (23) and
(26) into (25) and working in the large log limit gives,
$\Gamma\simeq 2H(\rho_{s}/\rho_{\scriptscriptstyle{\rm
SM}})\rho_{\scriptscriptstyle{\rm SM}}\,,$ (27)
where $\rho_{\scriptscriptstyle{\rm SM}}$ is the total energy density in the
SM and the combination
$\frac{\rho_{s}}{\rho_{\scriptscriptstyle{\rm SM}}}=\frac{4\xi\mu_{\rm
eff}}{3M_{\rm Pl}^{2}}\,,$ (28)
has only a logarithmic time dependence. The relic density in axions today is
then given by,
$\displaystyle\frac{\rho_{a}}{\rho_{c}}$
$\displaystyle=\frac{1}{\rho_{c}}\int_{a_{d}}^{1}\frac{da}{a}a^{4}\frac{\Gamma(a)}{H}$
(29) $\displaystyle=\frac{8}{3M_{\rm
Pl}^{2}}\int_{a_{d}}^{1}\frac{da}{a}a^{4}\xi\mu_{\rm
eff}\frac{\rho_{\scriptscriptstyle{\rm SM}}}{\rho_{c}}\,,$
where $a_{d}$ is the scale factor at the time the network enters the scaling
regime. This expression applies during both radiation and matter domination,
however we note that the simulations to estimate $\xi$ were only performed for
radiation domination, and we focus on axions produced during this epoch. The
relic density for different decoupling temperatures is shown on the left of
Fig. 5. In addition to bounds on the energy density, cosmic strings have a
constraint on the maximum value of the decay constant. The energy scale of
inflation is given in terms of the tensor-to-scalar ratio as Baumann:2009ds ,
$V^{1/4}\sim 10^{16}~{}{\rm GeV}\left(\frac{r}{0.01}\right)^{1/4}\,.$ (30)
Using the upper bound on $r<0.056$ Akrami:2018odb and setting $V=3M_{\rm
Pl}^{2}H_{I}^{2}$ we can derive an upper bound on the Hubble scale of
inflation, $H_{I}<6\times 10^{13}~{}{\rm GeV}$. Relating $H_{I}$ to the reheat temperature of the Universe, $H_{I}\sim T^{2}_{\rm RH}/M_{\rm Pl}$, gives a maximum possible $T_{\rm RH}$. To have a thermal phase transition in the early Universe requires $T_{\rm RH}$ to be above the critical temperature for a phase transition, $\sim f_{a}$, putting an upper bound on the decay constant.
Lastly, we note that if the cosmic string network remains until recombination
there is an additional bound from the string energy density imprinted on the
cosmic microwave background Charnock:2016nzm . The spectrum of axions produced
after this epoch corresponds to frequencies $\omega\lesssim 10^{-31}~{}{\rm eV}$, and is not observable with the experiments considered in this work. In
presenting our axion spectrum and experimental projections, we assume the
network collapses before this time.
We now move on to calculate how this energy is distributed. The emission
spectrum of axions from strings has been a source of uncertainty in the
literature with the debate centered around whether axion emission is dominated
by coherent motion of the string producing axions with wavelength of order the
string length (“IR-dominated”) or by small loops and kinks along the string
producing axions with wavelength of order the string width (“UV-dominated”).
This has profound consequences for QCD axion dark matter as it predicts a
relic abundance produced from topological defect decay with an uncertainty of
a few orders of magnitude (see Refs. Gorghetto:2018myk ; Gorghetto:2020qws
when compared to Ref. Buschmann:2019icd ). Fundamentally, the enormous
separation of scales between the string length and its width makes this a
challenging problem to resolve. In either case, the spectrum can likely be
approximated by a power-law parameterized by a spectral index $q$ with a high
and low energy cut-off Gorghetto:2018myk ,
$F(x;x_{1},x_{2})=\begin{cases}{\cal N}x^{-q},&x_{1}<x<x_{2}\\ 0,&{\rm otherwise}\end{cases}$ (31)
where ${\cal N}\equiv(q-1)x_{1}^{q-1}/(1-(x_{1}/x_{2})^{q-1})$ normalizes $F$
such that the integral over all $x$ is unity. Here $x$ is the appropriately
normalized energy, while $x_{1}$ and $x_{2}$ are IR and UV cutoffs. In Fig. 6
we show $xF(x)$ for different values of $q$, demonstrating that for $q<1$ the
spectrum is UV-dominated while for $q>1$ the spectrum is IR-dominated. While
the debate is yet to be settled (in particular, see Ref. Dine:2020pds ), a
recent analysis Gorghetto:2020qws suggests that the spectrum is IR dominated
during the scaling regime with estimates for the IR and UV cutoffs of
$x_{1}\simeq 10$ and $x_{2}\simeq m_{r}/H$ respectively. Furthermore, the
authors of Ref. Gorghetto:2020qws find a best fit for the spectral index over
time of,
$q(t)\simeq 0.51+0.053\ln\frac{m_{r}}{H}\,,$ (32)
and we assume this form in our results.
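The normalized power law (31) and the fitted index (32) are straightforward to implement; the sketch below also verifies the normalization ${\cal N}$ numerically for an IR-dominated case. The cutoff values are the estimates quoted above; the choice $\ln(m_{r}/H)=60$ is illustrative.

```python
import math

def spectral_index(log_mr_over_h):
    """Fitted spectral index q(t) from Eq. (32)."""
    return 0.51 + 0.053 * log_mr_over_h

def F(x, x1, x2, q):
    """Eq. (31): power law between the IR and UV cutoffs, normalized to 1."""
    if not (x1 < x < x2):
        return 0.0
    norm = (q - 1.0) * x1 ** (q - 1.0) / (1.0 - (x1 / x2) ** (q - 1.0))
    return norm * x ** (-q)

# Check that F integrates to ~1 for an IR-dominated case (q > 1),
# using midpoint integration in log-space: int F dx = int x F(x) dln(x).
x1, x2 = 10.0, 1.0e6
q = spectral_index(60.0)   # q ~ 3.69, strongly IR-dominated
n = 50000
lx1, lx2 = math.log(x1), math.log(x2)
dl = (lx2 - lx1) / n
total = sum(
    math.exp(lx1 + (i + 0.5) * dl) * F(math.exp(lx1 + (i + 0.5) * dl), x1, x2, q)
    for i in range(n)
) * dl
print(f"q = {q:.2f}, integral of F = {total:.4f}")
```

For $q>1$ the integrand is dominated by $x$ near the IR cutoff $x_{1}$, which is the sense in which the spectrum is "IR-dominated" in Fig. 6.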
Given a spectral function $F(x)$, the axion spectrum as observed today is the
appropriately weighted time-integral of this expression,
$\Omega_{a}(\omega)=\frac{8\omega}{3M_{\rm
Pl}^{2}}\int_{a_{d}}^{1}\frac{da}{a}\frac{\xi\mu_{\rm
eff}}{H}\frac{\rho_{\scriptscriptstyle{\rm
SM}}}{\rho_{c}}a^{3}F\left(\omega/Ha\right)\,,$ (33)
where $\omega$ is the axion energy as measured today. The spectrum for
different $f_{a}$ is shown on the right of Fig. 5. At low energies the
spectrum is roughly a constant (a consequence of the network being in the
scaling regime) while at high energies the spectrum falls off as it relies on
producing axions with energies much larger than the Hubble scale from cosmic-
string oscillations. The frequency where the drop begins depends on the
decoupling temperature of the axion with the SM bath, $T_{d}$, with solid
lines denoting $T_{d}=f_{a}$ and dashed curves showing $T_{d}=f_{a}/10^{3}$.
In all cases, the abrupt change just below $\omega=10^{-22}~{}{\rm eV}$ is
associated with a drop in the cosmic string energy density after the QCD phase
transition, where the number of relativistic degrees of freedom in the SM
drops considerably.
As for all C$a$B candidates, in addition to the dependence on the energy
density and spectrum, axion detection from cosmic strings is sensitive to the
axion-SM coupling. For generic axions, the axion-photon coupling is
$g_{a\gamma\gamma}\lesssim\alpha/2\pi f_{a}$, which when combined with the
densities above would be challenging to observe. Accordingly, when we discuss
the experimental prospects, we will again consider $g_{a\gamma\gamma}$ larger
than this simplest expectation, which can be induced by mechanisms including
the clockwork Choi:2015fiu (see Refs. Agrawal:2017cmd and Dror:2020zru for
recent summaries of such mechanisms in the context of the QCD axion and
ultralight axion dark matter, respectively). Nonetheless, we note that if
experiments could probe the scenario where $g_{a\gamma\gamma}\sim\alpha/2\pi
f_{a}$, it would be possible to probe all value of $f_{a}$. This is because
the detectable C$a$B power is $\propto g_{a\gamma\gamma}^{2}\rho_{a}$, and for
cosmic strings we have $\rho_{a}\propto\mu_{\rm eff}\propto f_{a}^{2}$,
resulting in $g_{a\gamma\gamma}^{2}\rho_{a}$ being independent of $f_{a}$.
Figure 6: The function in (31), which determines the spectrum of axion
emission from cosmic strings for different spectral indices, $q$. If $q<1$ the
emission spectrum is dominated by axions with a wavelength of order the string
width while if $q>1$ the spectrum is dominated by modes of order the string
length. These two scenarios are referred to as UV and IR dominated,
respectively. We assume an IR dominated spectrum in this work as suggested in
Refs. Gorghetto:2018myk ; Gorghetto:2020qws .
## III Detecting the C$a$B
Having motivated the possibility of a local C$a$B, we now turn to the question
of how that population could be detected. We focus on detection at a few of
the many instruments constituting the burgeoning program to detect ultralight
dark matter. Our central conclusion will be that experiments designed with
axion dark matter in mind are generally also sensitive to a relativistic
population. Indeed, it is possible that ADMX has already collected a
detectable signal that would have been missed by an analysis focused on the
non-relativistic axion.
Qualitatively, detection of relativistic axions proceeds as for their non-relativistic counterparts. In both cases, the axion can be described as an oscillating classical wave, which through a coupling to the SM induces a detectable time-varying signal in, for example, electromagnetic waves or nuclear spins. (The classical wave description holds in the limit of a large number of states per de Broglie volume, $n_{a}\lambda_{\rm dB}^{3}\gg 1$. For dark-matter axions, using the mean expected dark-matter density and speed, this is satisfied for $m_{a}\lesssim 10~{}{\rm eV}$. For the C$a$B, we instead have $n_{a}\lambda_{\rm dB}^{3}\sim(\rho_{a}/\rho_{\gamma})(\bar{\omega}/1\,{\rm meV})^{-4}$. In the present work we will consider detection exclusively in scenarios with $\bar{\omega}\ll 1\,{\rm meV}$ and sufficient densities that classicality applies. Nevertheless, our description will not apply for arbitrarily large mean energies or small densities; the approximate boundary between the two regimes is shown in Fig. 1.) A central difference is the signal bandwidth. For dark matter,
the expectation is that the signal power will be deposited in an extremely
narrow range of frequencies centered around its unknown mass $m_{a}$. In
general, the oscillation frequency is set by the axion energy. For a non-
relativistic particle, the energy is $\omega\simeq m_{a}(1+v^{2}/2)$, and
given that we expect the local dark-matter speeds to vary over a
range $\Delta v\sim 10^{-3}$ and take a mean value $\bar{v}\sim 10^{-3}$,
axion dark matter carries a large quality factor of
$Q_{a}^{\scriptscriptstyle{\rm DM}}=\bar{\omega}/\Delta\omega\sim 10^{6}$,
where $\bar{\omega}$ is the average energy. For a C$a$B the expectation is
that the local axion field has a wide distribution of energies, such that
generically $Q_{a}^{\text{C$a$B}}\sim 1$. An exception is dark matter decaying
to axions within the Milky Way, where we expect $Q_{a}^{\text{C$a$B}}\sim
10^{3}$. Regardless, in either case $Q_{a}^{\text{C$a$B}}\ll
Q_{a}^{\scriptscriptstyle{\rm DM}}$, and this will represent a challenge to
detection. There are additional important differences between the relativistic
and dark-matter cases – for instance, the relativistic signal can exhibit a
unique daily-modulation signal even at a single detector – and we will explore
these as well. As is the case for the bulk of this work, we restrict our
attention to experiments focused on the axion coupling to electromagnetism,
$g_{a\gamma\gamma}$, although much of our formalism can be lifted for other SM
couplings.
We divide our discussion of C$a$B detection into five parts. Firstly, we outline several basic features of a relativistic axion population – its
expected amplitude and distribution in both time and frequency – applicable to
any detection strategy. We next use these results to sketch our expected
sensitivity to the C$a$B by comparing the experimentally detectable power
associated with the relativistic and dark-matter axion field. Having provided
a general sensitivity estimate sufficient for understanding Fig. 1, we then turn specifically to the axion-photon coupling, detailing axion electromagnetism with a focus on the differences in the relativistic case. Finally, we apply these lessons to existing axion dark-matter detection strategies, discussing representative examples of both broadband and resonant approaches. In the following section we will use these
results to set estimated limits on a number of different C$a$B scenarios
discussed in Sec. II.
### III.1 Properties of the Relativistic Axion
To start our discussion, we will outline general properties of the
relativistic axion field relevant to its detection. In particular, we treat
the C$a$B as the superposition of many non-interacting axion particles with
energies $\omega$ drawn from a probability distribution $p(\omega)$. (Formally we can define $p(\omega)=(1/n_{a})dn_{a}/d\omega$, as it is the differential number density that controls the probability of observing an axion at a given energy.) Within this framework, we will derive
the expected amplitude of the axion field $a$ – more specifically of $a^{2}$ –
in both the time and frequency domain, and further quantify the fluctuations
around the central value. Any experimental detection will involve a coupling
to the axion field, and therefore the measurements will inherit these average
values and fluctuations. We will consider a general energy distribution, and
show that our results contain the non-relativistic limit as a special case.
Indeed, our results are a direct generalization of the non-relativistic field,
which will allow us to bootstrap known dark-matter results to the C$a$B.
Figure 7: The local axion field for three different $p(\omega)$: a wide
Gaussian (left), the cosmic-string distribution (center), and the expected
dark-matter distribution (right). The broader $p(\omega)$ expected for the
C$a$B generates the additional structure seen for the relativistic axion
field. For each, the three curves represent distinct realizations of $a(t)$
computed directly from (34), with $N_{a}=10^{6}$. To aid the comparison, for
each distribution we choose parameters such that $\bar{\omega}\simeq 10$ neV.
For the two relativistic examples, we have $\rho_{a}\sim\rho_{\gamma}$,
whereas for dark matter we take $\rho_{a}=\rho_{\scriptscriptstyle{\rm DM}}$.
See text for additional details.
For the energies and densities considered in this work, the axion field will
always contain an enormous number of particles per de Broglie volume.
Consequently, the field can be described in terms of an emergent classical
wave. In this respect, the C$a$B directly mirrors non-relativistic dark matter
for $m_{a}\ll 10~{}{\rm eV}$, where the associated statistics were derived in
Ref. Foster:2017hbq , and we will generalize a number of results from that
reference. We imagine the classical axion wave as constructed from a large
number, $N_{a}$, of non-interacting waves,
$a(t)=\sqrt{\frac{2\rho_{a}}{N_{a}\bar{\omega}}}\sum_{i=1}^{N_{a}}\frac{1}{\sqrt{\omega_{i}}}\cos\left[\omega_{i}t+\phi_{i}\right].$
(34)
Each element of this sum is associated with a random variable $\omega_{i}$, an
energy drawn from $p(\omega)$. Beyond their energy, however, there is no
reason to imagine the various states are phase coherent, and this is ensured
by the uniform random variable $\phi_{i}\in[0,2\pi)$. The amplitude is fixed
by ensuring the field carries energy density $\rho_{a}$, which would be equal
to $\rho_{\scriptscriptstyle{\rm DM}}$ for non-relativistic dark matter. In
the discretized picture, $\bar{\omega}=N_{a}^{-1}\sum_{i}\omega_{i}$, but more
generally we take $\bar{\omega}=\int d\omega\,\omega p(\omega)$.
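As a concrete illustration (a minimal sketch, not code from the paper; the Gaussian $p(\omega)$ and all parameter values are illustrative choices), the superposition in (34) can be sampled numerically. Time-averaging $a^{2}$ over many oscillation periods then reproduces $\rho_{a}\langle 1/\omega\rangle/\bar{\omega}$, the mean derived in (36) below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters in natural units: rho_a = 1, mean energy ~ 1.
rho_a = 1.0
N_a = 2000  # number of superposed waves (the paper's Fig. 7 uses 1e6)

# Energies drawn from a wide, truncated-positive Gaussian p(omega),
# loosely mimicking a broad CaB spectrum; phases are uniform in [0, 2pi).
omega = rng.normal(1.0, 0.3, size=4 * N_a)
omega = omega[omega > 0][:N_a]
phi = rng.uniform(0.0, 2 * np.pi, size=N_a)
omega_bar = omega.mean()

def a(t):
    """One realization of the axion field, Eq. (34), at times t."""
    amp = np.sqrt(2 * rho_a / (N_a * omega_bar))
    return amp * np.sum(np.cos(np.outer(t, omega) + phi) / np.sqrt(omega), axis=1)

# Average a^2 at many random times and compare with rho_a <1/omega> / omega_bar.
t = rng.uniform(0.0, 1.0e5, size=5000)
a2_mean = np.mean(a(t) ** 2)
prediction = rho_a * np.mean(1.0 / omega) / omega_bar
```

With a few thousand waves the two quantities agree at the percent level, realization by realization.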
In principle, there is an additional contribution to the phase neglected in
(34): the spatial variation controlled by $-\mathbf{k}_{i}\cdot\mathbf{x}$,
where $\mathbf{k}_{i}$ is the particle momentum. If we imagine measuring the
axion field at a single point, this contribution is irrelevant at the level of
the phase, as we can always center our coordinates such that
$\mathbf{x}=0$. (If the axion field is measured at multiple spatially separated locations, however, the $\mathbf{k}\cdot\mathbf{x}$ contribution to the phase is physical, and can be used to perform interferometry on the wave Foster:2020fln .) Yet where this contribution can be relevant is through effects sensitive to
the spatial gradients of the axion field. As we will discuss in Sec. III.3,
whilst these gradients are usually neglected for a non-relativistic field,
they are parametrically important for a relativistic population. The amplitude
of these effects is fixed by the massive dispersion relation,
$|\mathbf{k}_{i}|=\sqrt{\omega_{i}^{2}-m_{a}^{2}}$, and therefore also
controlled by the energy distribution, $p(\omega)$. The direction of
$\mathbf{k}$, however, is not, and will itself be drawn from a distribution on
the celestial sphere. For instance, if the dark matter is made of axions, the
direction of $\mathbf{k}$ may point towards a dark-matter stream incident on
the Earth, or in the relativistic case the C$a$B would be biased towards the
center of the Milky Way if it originates from dark-matter decay. As shown in
Foster:2020fln , the angular distribution can be fully incorporated into the
description of the non-relativistic axion field, and the arguments there can
be generalized to relativistic axions. (In the non-relativistic case, it is convenient to express the energy and momentum both in terms of the particle velocity, $\mathbf{v}$. In Foster:2020fln , it was shown how the statistics of the axion field can be described in terms of $p(\mathbf{v})$ (often written $f(\mathbf{v})$), which includes directional information. This approach can be generalized to the relativistic case by describing the field in terms of $\mathbf{k}$ and $p(\mathbf{k})$, rather than the energy as we do in the text. We will not pursue this direction in the current work, however.) While the
directional distribution will be relevant for the fine details of the
relativistic signal – in particular as it relates to daily-modulation effects
unique to the relativistic axion, discussed in Sec. III.3 – it unnecessarily
complicates an estimate of the experimental reach, which is our focus.
To make progress in our description of the axion field, we re-organize the sum
in (34) such that states with nearby energy are combined. Specifically, we
partition the particles into sets, indexed by $j$, containing all those with
$\omega\in[\omega_{j},\,\omega_{j}+\Delta\omega]$, within which the states are
distinguished only by the random phase. Combining the particles within a given
energy cell then amounts to a random walk in the complex plane (see Ref.
Foster:2017hbq ), leaving
$a(t)=\sqrt{\frac{\rho_{a}}{\bar{\omega}}}\sum_{j}\alpha_{j}\sqrt{\frac{p(\omega_{j})\Delta\omega}{\omega_{j}}}\cos\left[\omega_{j}t+\phi_{j}\right]\,.$
(35)
The end stage of the walk is a new random phase $\phi_{j}$, with the distance
traveled dictated by the Rayleigh random variable $\alpha_{j}$, drawn from
$p(\alpha)=\alpha\,e^{-\alpha^{2}/2}$, and the density of states at that
energy, controlled by $p(\omega_{j})$. Through its dependence on $\alpha_{j}$
and $\phi_{j}$, $a(t)$ is itself a random variable. Although $\langle
a\rangle=0$, we expect $\langle a^{2}\rangle>0$; indeed, $a^{2}$ is an exponentially distributed random variable, with mean (approximating $\Delta\omega$ as differential)
$\langle
a^{2}\rangle=\frac{\rho_{a}}{\bar{\omega}}\int_{0}^{\infty}\frac{d\omega}{\omega}p(\omega)=\frac{\rho_{a}\langle
1/\omega\rangle}{\bar{\omega}}\,.$ (36)
For a non-relativistic axion, $\langle
1/\omega\rangle^{-1}\simeq\bar{\omega}\simeq m_{a}$, and we recover the
familiar dark-matter result, $\langle a^{2}\rangle=\rho_{a}/m_{a}^{2}$. For a
general energy distribution, however, there is no such simplification (to be
clear $\langle\bar{\omega}/\omega\rangle\neq 1$). To exemplify this point,
consider a $p(\omega)$ which is log flat over $[\omega_{1},\omega_{2}]$. If
$\omega_{1}\sim\omega_{2}$, we have $\langle\bar{\omega}/\omega\rangle\sim 1$.
However, if they are parametrically separated,
$\omega_{1}=\epsilon\,\omega_{2}$ for $\epsilon\ll 1$, then
$\langle\bar{\omega}/\omega\rangle\sim(\epsilon\ln^{2}\epsilon)^{-1}\gg 1$.
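The parametric claim above is straightforward to check numerically. The sketch below (illustrative, not from the paper) integrates a log-flat $p(\omega)$ on $[\epsilon\omega_{2},\omega_{2}]$ with $\epsilon=10^{-4}$ and compares $\bar{\omega}\langle 1/\omega\rangle$ against $(\epsilon\ln^{2}\epsilon)^{-1}$.

```python
import numpy as np

# Log-flat distribution p(omega) = 1/(omega * ln(1/eps)) on [eps*w2, w2].
eps, w2 = 1.0e-4, 1.0
omega = np.geomspace(eps * w2, w2, 200_001)
p = 1.0 / (omega * np.log(1.0 / eps))

def integ(y, x):
    """Trapezoidal rule (avoids the numpy trapz/trapezoid naming change)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

norm = integ(p, omega)                   # normalization check, ~1
omega_bar = integ(omega * p, omega)      # mean energy
inv_omega = integ(p / omega, omega)      # <1/omega>
product = omega_bar * inv_omega          # = <omega_bar / omega>

# The exact result is (1 - eps)^2 / (eps * ln(eps)^2); the parametric scaling
# quoted in the text simply drops the (1 - eps)^2 factor.
parametric = 1.0 / (eps * np.log(eps) ** 2)
```

For $\epsilon=10^{-4}$ the product is $\sim 10^{2}$, confirming that $\langle\bar{\omega}/\omega\rangle$ can be far from unity for a broad distribution.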
To complete our discussion of the relativistic axion in the time domain, in
Fig. 7 we show three realizations of the axion field, determined directly from
(34), for three different $p(\omega)$. In the left two figures we take
$m_{a}\ll\bar{\omega}$, in order to depict examples of the C$a$B, which are
then contrasted with the expected dark-matter axion on the right. On the left,
we take $p(\omega)$ to be a positive-definite normal distribution, which can
be considered an example of a C$a$B emerging from parametric-resonance
production. We take $\bar{\omega}=10$ neV, CMB energy density,
$\rho_{a}=\rho_{\gamma}$, and further set the distribution to be wide,
specifically $\sigma=\bar{\omega}$. The fact that a number of frequencies are
contributing is visible in the realizations. In the middle, we take an even
broader $p(\omega)$, corresponding to the C$a$B as predicted from cosmic-
string production. In detail, the distribution is determined by (33) with
$f_{a}=10^{15}$ GeV and $T_{d}=10^{12}$ GeV, such that
$\rho_{a}\sim\rho_{\gamma}$. Nevertheless, we only draw frequencies in a
restricted range of $\omega\in[5~{}{\rm neV},\,1~{}\mu{\rm eV}]$, over which
we have $\bar{\omega}\sim 10$ neV. The presence of both high and low-frequency
contributions in $p(\omega)$ can be seen in the realizations. Finally, on the
right, we show the conventional dark-matter axion scenario, with
$\bar{\omega}\simeq m_{a}=10$ neV, and small variations around this as
predicted by the standard halo model. The variations are not visible in the time domain, with the period of the realizations highly regular. The statistical nature of the amplitude discussed above can, however, still be seen.
While we can understand the time dependence of the C$a$B, its properties are
more transparent in the frequency domain. As such, consider the Fourier
transform of (35). We imagine making measurements of the axion field at a
frequency $f=1/\Delta t$ for a total integration time $T$, thereby collecting
a set of $N=T/\Delta t$ discrete measurements of the field, which we denote by
$\{a_{n}=a(n\Delta t)\}$. We then calculate the power spectral density
(PSD), which quantifies the power in the field at a given frequency, as
$\displaystyle S_{a}(\omega)=\frac{(\Delta
t)^{2}}{T}\left|\sum_{n=0}^{N-1}a_{n}e^{-i\omega n\Delta t}\right|^{2}.$ (37)
Technically $\omega$ is a discrete variable, given by $2\pi k/T$, with
$k=0,1,\ldots,N-1$ the relevant Fourier mode, although we will often assume a
sufficiently long integration time that we can approximate $\omega$ as
continuous. As in the time domain, the PSD is an exponentially distributed
random variable, and therefore specified entirely by its mean,
$\displaystyle\langle
S_{a}(\omega)\rangle=\frac{\pi\rho_{a}}{\bar{\omega}}\frac{p(\omega)}{\omega}\,.$
(38)
Once more, this result reduces to the correct dark-matter expression in the
non-relativistic limit. The energy of the non-relativistic wave is specified
by its speed, drawn from a distribution $f(v)$. Changing variables to
$v_{\omega}=\sqrt{2\omega/m_{a}-2}$, in the non-relativistic limit, we have
$\displaystyle\langle S_{a}^{\scriptscriptstyle{\rm
DM}}(\omega)\rangle=\frac{\pi\rho_{\scriptscriptstyle{\rm
DM}}}{m_{a}^{3}}\frac{f(v_{\omega})}{v_{\omega}}\,.$ (39)
This agrees with the dark-matter case in Foster:2017hbq , demonstrating that
the general expression in (38) contains the non-relativistic limit as a
special case.
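As a numerical cross-check (an illustrative sketch, with all parameters chosen for speed rather than realism), one can build a realization from (34), compute the PSD of (37), and compare the power integrated over a frequency band with the prediction of (38).

```python
import numpy as np

rng = np.random.default_rng(1)

# One realization of the field, Eq. (34), with a Gaussian p(omega).
rho_a, N_a, sigma = 1.0, 4000, 0.2
omega = np.abs(rng.normal(1.0, sigma, size=N_a))  # truncation is a 5-sigma effect
phi = rng.uniform(0.0, 2 * np.pi, size=N_a)
omega_bar = omega.mean()

dt, N = 0.5, 4096
t = dt * np.arange(N)
amp = np.sqrt(2 * rho_a / (N_a * omega_bar))
a_n = amp * np.sum(np.cos(np.outer(t, omega) + phi) / np.sqrt(omega), axis=1)

# PSD, Eq. (37), with signed angular frequencies for each Fourier bin.
T = N * dt
S = dt**2 / T * np.abs(np.fft.fft(a_n)) ** 2
omega_k = 2 * np.pi * np.fft.fftfreq(N, d=dt)

# Parseval: the total two-sided PSD equals T * <a^2> exactly.
parseval = S.sum() / (T * np.mean(a_n**2))

# Band-integrated power vs Eq. (38), <S> = pi*rho_a*p(omega)/(omega_bar*omega).
band = (omega_k > 0.6) & (omega_k < 1.4)
p_band = np.exp(-((omega_k[band] - 1.0) ** 2) / (2 * sigma**2)) / np.sqrt(
    2 * np.pi * sigma**2
)
S_pred = np.pi * rho_a / omega_bar * p_band / omega_k[band]
ratio = S[band].sum() / S_pred.sum()  # ~1, with O(10%) scatter per realization
```

Since each PSD bin is an exponential draw, the band-integrated comparison converges as the number of bins in the band grows.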
Nevertheless, the scaling in (38) is misleading. While it quantifies the power
distribution in fluctuations of the axion, experiments can only measure the
induced fluctuations in SM fields derivatively coupled to the axion.
Accordingly, it is more appropriate to consider the power in
$g_{a\scriptscriptstyle{\rm SM}}\partial a$, where $g_{a\scriptscriptstyle{\rm
SM}}$ is the axion-SM coupling. Taking $\partial a\sim\omega a$, we can
determine the parametrics of the accessible power by approximating $p(\omega)$
as a uniform distribution over a range of width $\bar{\omega}/Q_{a}$. Doing
so, the power scales as
$\langle S_{g\partial
a}(\bar{\omega})\rangle\sim\frac{g_{a\scriptscriptstyle{\rm
SM}}^{2}\rho_{a}Q_{a}}{\bar{\omega}}\,.$ (40)
### III.2 Rough Sensitivity
We will now use (40) to determine the parametric sensitivity of dark-matter
experiments to the C$a$B, leaving a detailed calculation of the sensitivities
to the following subsections. We begin with the following simple estimate:
assume the C$a$B can be detected if the power it deposits at $\bar{\omega}$
matches the power produced by the dark-matter axion at the sensitivity
threshold. For the moment, we assume that experiments extract power from a
relativistic and non-relativistic axion wave identically, although we will
later justify this assumption up to $\mathcal{O}(1)$ factors. In order to
compute the power matching, we assume optimistically that the coupling
saturates the existing bounds, $g_{a\scriptscriptstyle{\rm
SM}}=g_{a\scriptscriptstyle{\rm SM}}^{\rm SE}$, so that for a fixed $Q_{a}$
and $\bar{\omega}$ we can constrain $\rho_{a}$. For the dark-matter power, we
need the dark-matter equivalent of (40), which is obtained by setting
$\rho_{a}=\rho_{\scriptscriptstyle{\rm DM}}$,
$Q_{a}=Q_{a}^{\scriptscriptstyle{\rm DM}}\sim 10^{6}$, $\bar{\omega}\sim
m_{a}$, and fixing the coupling to an existing sensitivity threshold, denoted
by $g_{a\scriptscriptstyle{\rm SM}}^{\rm lim}$. Equating the powers at the
frequency $\bar{\omega}=m_{a}$, we expect sensitivity to an axion background
that constitutes the following fraction of the CMB energy density,
$\frac{\rho_{a}}{\rho_{\gamma}}=\frac{\rho_{\scriptscriptstyle{\rm
DM}}}{\rho_{\gamma}}\left(\frac{g_{a\scriptscriptstyle{\rm SM}}^{\rm
lim}}{g_{a\scriptscriptstyle{\rm SM}}^{\rm
SE}}\right)^{2}\frac{Q_{a}^{\scriptscriptstyle{\rm
DM}}}{Q_{a}^{{\text{C$a$B}}}}\hskip 14.22636pt{\rm(naive)}.$ (41)
This scaling is overly pessimistic. The C$a$B will deposit its power over a much wider range than dark matter, so there is more information available than can be gleaned by comparing power at a single frequency. In principle, our
sensitivity depends on how this additional information is obtained, either
through a broadband or resonant readout strategy, and so we will consider the
two cases separately. As we do so, however, we emphasize a fundamental
challenge: the broad nature of the signal will make it harder to distinguish
from backgrounds. There are handles: for instance, as we will show for the axion-photon coupling, the signal power will continue to scale quadratically with the magnetic field, and further, for the case of the C$a$B from dark-matter decay, there can be a unique daily-modulation signal. Beyond such
remarks, we will not attempt to determine the optimal analysis for a
relativistic signal here, although we note it will likely require a more
accurate characterization of the background than in dark-matter searches.
Indeed, both ADMX and HAYSTAC usually remove features much broader than those expected of dark matter, see for example Brubaker:2017rna ; Du:2018uak , which
raises the possibility that a signal of the C$a$B may already be hiding in
existing data, albeit in the most optimistic scenarios.
For a broadband readout of the axion power, we integrate over a range of
energies, and therefore at the level of the integrated signal power the
distribution $p(\omega)$ would seem irrelevant. Yet even when the entire
spectrum is resolved, the width of $p(\omega)$ still determines an important
physical property of the axion field: the coherence time. The coherence time
has a straightforward interpretation in the frequency domain. Recall that the
measurement time $T$ determines the frequency resolution of the associated
discrete Fourier transform, according to $\Delta\omega=2\pi/T$. For
sufficiently small $T$, the entire signal will fit within a single bin, and
the signal amplitude will be associated with one draw from the exponential
distribution as outlined in Sec. III.1. As $T$ is increased, eventually the
resolution will be sufficient to resolve the structure in $p(\omega)$. At this
stage the signal will occupy multiple bins, each of which will have an
independent exponential draw that then combine incoherently. The transition
between these two cases defines the coherence time, which we can quantify by
$2\pi/\tau=\sigma_{\omega}$, with $\sigma_{\omega}$ the width of $p(\omega)$.
Parametrically, we expect $\sigma_{\omega}\sim\bar{\omega}/Q_{a}$, so that
$\tau\sim 2\pi Q_{a}/\bar{\omega}$, or numerically,
$\tau\sim Q_{a}\left(\frac{1~{}{\rm neV}}{\bar{\omega}}\right)~{}\mu{\rm
s}\,.$ (42)
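The numerical estimate in (42) is just $\tau=2\pi Q_{a}\hbar/\bar{\omega}$ restored to SI units. A minimal sketch of the conversion:

```python
import math

HBAR_EV_S = 6.582e-16  # hbar in eV*s

def coherence_time(Q_a, omega_bar_eV):
    """Coherence time tau = 2*pi*Q_a/omega_bar, converted to seconds."""
    return 2 * math.pi * Q_a * HBAR_EV_S / omega_bar_eV

tau_cab = coherence_time(1, 1e-9)    # generic CaB at 1 neV: a few microseconds
tau_dm = coherence_time(1e6, 1e-9)   # dark-matter axion at the same energy: seconds
```

The generic C$a$B coherence time at neV energies is thus microseconds, shorter than the dark-matter case by the full ratio of quality factors.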
Consequently, the coherence time of the C$a$B will generally be short on the timescale of experimental measurements, implying that we will operate in the regime $T>\tau$ (which we assume throughout, so that the energy distributions can be resolved), where we expect the sensitivity to the signal power to be impeded by the increased background that enters when the signal is distributed over a broader range. As
the C$a$B has a coherence time that is smaller than for dark matter by a
factor of $Q_{a}^{\scriptscriptstyle{\rm DM}}/Q_{a}^{\text{C$a$B}}\gg 1$, the
background will be enhanced by this same scale, and subsequently there is a
reduction in signal power sensitivity of $\sqrt{Q_{a}^{\scriptscriptstyle{\rm
DM}}/Q_{a}^{\text{C$a$B}}}$. This leads to a refined estimate for the
broadband sensitivity of
$\frac{\rho_{a}}{\rho_{\gamma}}=\frac{\rho_{\scriptscriptstyle{\rm
DM}}}{\rho_{\gamma}}\left(\frac{g_{a\scriptscriptstyle{\rm SM}}^{\rm
lim}}{g_{a\scriptscriptstyle{\rm SM}}^{\rm
SE}}\right)^{2}\sqrt{\frac{Q_{a}^{\scriptscriptstyle{\rm
DM}}}{Q_{a}^{\text{C$a$B}}}}\hskip 14.22636pt{\rm(broadband)}.$ (43)
Turning to a resonant detection strategy, the estimate in (41) will be
modified by the experimental quality factor $Q$, associated with the cavity or
readout circuit of the instrument, in three ways. Firstly, the power recorded
by the resonator is controlled by ${\rm min}(Q,Q_{a})$, so that for
$Q<Q_{a}^{\scriptscriptstyle{\rm DM}}$, we have overestimated the deposited
dark-matter power. For the moment, we will assume that we are in this limit,
for instance ADMX and HAYSTAC currently operate with $Q\sim 10^{5}$ and $Q\sim
10^{4}$, respectively. We also expect that $Q_{a}^{\text{C$a$B}}\ll Q$, so
that for equal couplings and density, we expect the C$a$B power to be
suppressed by a factor of $Q/Q_{a}^{\text{C$a$B}}$, rather than
$Q_{a}^{\scriptscriptstyle{\rm DM}}/Q_{a}^{\text{C$a$B}}$ as assumed in (41).
The experimental quality factor will enter a second time in defining the
instrumental bandwidth of $\omega_{0}/Q$, where $\omega_{0}$ is the resonant
frequency. The bandwidth conventionally dictates the range over which the
signal can be analyzed. For dark matter, the signal is narrower than the
bandwidth by a factor of $Q_{a}^{\scriptscriptstyle{\rm DM}}/Q$. This implies
that when searching for dark matter, the background can be restricted to a
smaller range, suppressing its contribution by $Q_{a}^{\scriptscriptstyle{\rm
DM}}/Q$. For the C$a$B there is no such suppression – the signal extends over
the full bandwidth – so using the Dicke radiometer equation Dicke:1946glx ,
our sensitivity will suffer due to the increased background by a further
factor of $\sqrt{Q_{a}^{\scriptscriptstyle{\rm DM}}/Q}$, similar to the
broadband consideration. The third consideration is in the C$a$B’s favor. Its
broad nature implies that the signal will deposit power over many bandwidths
collected during a dark-matter search. At most the number of bins can be
$Q/Q_{a}^{\text{C$a$B}}$, producing an enhancement in the sensitivity of
$\sqrt{Q/Q_{a}^{\text{C$a$B}}}$. ($N$ additional measurements can be thought of as scaling the experimental measurement time $T\to NT$; assuming $T>\tau$, our sensitivity to the power will scale as $\sqrt{T}\to\sqrt{NT}$ Budker:2013hfa .) Taken together, these three factors modify (41) to
$\frac{\rho_{a}}{\rho_{\gamma}}=\frac{\rho_{\scriptscriptstyle{\rm
DM}}}{\rho_{\gamma}}\left(\frac{g_{a\scriptscriptstyle{\rm SM}}^{\rm
lim}}{g_{a\scriptscriptstyle{\rm SM}}^{\rm
SE}}\right)^{2}\sqrt{\frac{Q_{a}^{\scriptscriptstyle{\rm
DM}}}{Q_{a}^{\text{C$a$B}}}}\hskip 14.22636pt{\rm(resonant)}.$ (44)
As $Q$ has dropped out, we are left with the same parametric scaling as for
broadband detection.
Given an experimental limit on or sensitivity for dark matter, we can estimate
our sensitivity using either (43) or (44). For several specific instruments,
we have already shown the results in Fig. 1. We can also consider the expected
reach more generally. Taking $Q_{a}^{\scriptscriptstyle{\rm DM}}=10^{6}$ and
$Q_{a}^{\text{C$a$B}}=1$, all that remains is to fix the couplings. Focussing
on the axion-photon coupling, we have $g_{a\scriptscriptstyle{\rm
SM}}=g_{a\gamma\gamma}\sim\alpha/(2\pi f_{a})$. For the C$a$B, we take the
optimistic value of $g_{a\scriptscriptstyle{\rm SM}}^{\rm
SE}=g_{a\gamma\gamma}^{\rm SE}=0.66\times 10^{-10}~{}{\rm GeV}^{-1}$, whereas
for dark matter, we exploit the fact that a broad goal of the axion dark-
matter program is to probe $g_{a\gamma\gamma}$ values at the scale of the QCD
axion, which satisfies $m_{a}f_{a}\simeq m_{\pi}f_{\pi}$. Assuming this goal
is achieved across a wide range of masses, then the corresponding sensitivity
to a relativistic population is given by
$\frac{\rho_{a}}{\rho_{\gamma}}\simeq\left(\frac{\bar{\omega}}{1~{}\mu{\rm
eV}}\right)^{2}\,,$ (45)
so that for energies below $1~{}\mu{\rm eV}$, we could be sensitive to a C$a$B
with an energy density below that of the CMB. In what follows we will refine
this estimate.
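The estimate in (45) can be reproduced in a few lines. The sketch below is illustrative: the reference densities and the QCD-axion relation are rough order-of-magnitude inputs, not values quoted in the text. It implements (44) with $Q_{a}^{\scriptscriptstyle{\rm DM}}/Q_{a}^{\text{C$a$B}}=10^{6}$ and $g_{a\gamma\gamma}^{\rm lim}=\alpha/(2\pi f_{a})$ fixed by $m_{a}f_{a}\simeq m_{\pi}f_{\pi}$.

```python
import math

ALPHA = 1.0 / 137.0
M_PI_F_PI = 0.135 * 0.092            # GeV^2, m_pi * f_pi for the QCD axion band
G_SE = 0.66e-10                      # GeV^-1, stellar-emission bound on g_agg
RHO_DM_OVER_RHO_CMB = 0.4e9 / 0.26   # (0.4 GeV/cm^3) / (0.26 eV/cm^3), rough values
Q_RATIO = 1.0e6                      # Q_a^DM / Q_a^CaB

def cab_reach(omega_bar_eV):
    """Detectable rho_a/rho_gamma at mean energy omega_bar (in eV), per Eq. (44)."""
    m_a_GeV = omega_bar_eV * 1e-9
    f_a = M_PI_F_PI / m_a_GeV              # QCD-axion relation m_a f_a ~ m_pi f_pi
    g_lim = ALPHA / (2 * math.pi * f_a)    # coupling targeted by DM searches
    return RHO_DM_OVER_RHO_CMB * (g_lim / G_SE) ** 2 * math.sqrt(Q_RATIO)

r_1ueV = cab_reach(1e-6)   # O(1), consistent with Eq. (45)
r_2ueV = cab_reach(2e-6)   # quadratic in omega_bar: 4x larger
```

Since $g_{a\gamma\gamma}^{\rm lim}\propto m_{a}$ on the QCD line, the reach scales exactly as $\bar{\omega}^{2}$, reproducing (45) up to an $\mathcal{O}(1)$ factor.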
### III.3 Relativistic Axion E&M
We now specialize our discussion to detecting the C$a$B through a coupling to
electromagnetism. As is well known, the coupling in (1) leads to the following
classical equations of motion Sikivie:1983ip
$\displaystyle\nabla\cdot\mathbf{E}$ $\displaystyle=\rho-
g_{a\gamma\gamma}\mathbf{B}\cdot\nabla a\,,$ (46)
$\displaystyle\nabla\cdot\mathbf{B}$ $\displaystyle=0\,,$
$\displaystyle\nabla\times\mathbf{E}$
$\displaystyle=-\partial_{t}\mathbf{B}\,,$
$\displaystyle\nabla\times\mathbf{B}$
$\displaystyle=\partial_{t}\mathbf{E}+\mathbf{J}+g_{a\gamma\gamma}\left(\mathbf{B}\,\partial_{t}a-\mathbf{E}\times\nabla
a\right),$ $\displaystyle(\Box+m_{a}^{2})a$
$\displaystyle=g_{a\gamma\gamma}\mathbf{E}\cdot\mathbf{B}\,,$
with $\rho$ and $\mathbf{J}$ the charge and current densities, respectively.
For non-relativistic dark-matter axions the momentum is parametrically smaller
than the energy ($|\nabla a|\sim|\mathbf{k}|\,a\ll\omega\,a\sim|\partial_{t}a|$), and this justifies neglecting the two terms involving $\nabla a$. This leaves a single
modification to the Ampère-Maxwell equation, which by analogy enters as an
effective current $\mathbf{J}_{\rm
eff}=g_{a\gamma\gamma}\mathbf{B}\,\partial_{t}a$. The axion field thereby
converts magnetic field lines into oscillating currents, identifying large
magnetic fields as a central ingredient in the detection of axion dark matter.
For the C$a$B spatial gradients cannot be neglected, yielding two additional
sources in (46). The first of these is the generation of an additional
effective current, $\mathbf{J}_{\rm
eff}=-g_{a\gamma\gamma}\mathbf{E}\times\nabla a$. As our focus is on searching
for the C$a$B with existing axion dark-matter detectors, which rely on large
magnetic fields, this term will not be relevant. The second gradient term,
which provides a contribution to Gauss’ law, cannot be immediately discarded.
In detail, a relativistic axion field generates an effective charge density
$\rho_{\rm eff}=-g_{a\gamma\gamma}\mathbf{B}\cdot\nabla a$; again by analogy,
the axion converts magnetic field lines into oscillating lines of charge. This
effect is proportional to $\mathbf{B}\cdot\nabla
a\sim\mathbf{B}\cdot\mathbf{k}a$, and is therefore dependent on the incident
direction of the axion relative to the experimentally established magnetic
field. Nevertheless, we will see that for all the experiments we consider the
effective charge does not significantly contribute to the signal. Yet the
incident direction of the axion remains detectable: the relativistic field can
undergo appreciable spatial oscillations over the instrument, leading to an
interference pattern that depends on the incoming angle of the axion wave. We
will show explicitly how this effect arises for resonant cavity instruments.
For a true cosmological relic, the signal, like the CMB, will be almost
completely isotropic (up to $\sim 10^{-3}$ variations associated with our
peculiar velocity with respect to the Hubble flow), resulting in an
effectively time independent signal. In the scenario where the C$a$B arises
from dark-matter decay, the galactic component will be far from isotropic,
instead pointing preferentially towards the Galactic Center. Given that
decaying dark matter will emerge as a case that can be probed already by
existing datasets (as it can generate $\rho_{a}\gg\rho_{\gamma}$), this
modulation will be an important fingerprint of a genuine C$a$B signal.
The final equation in (46) allows for backreaction of the electromagnetic
fields on the axion itself. To determine when this is effective, consider a
particularly simple experimental configuration with
$\mathbf{B}\cdot\mathbf{k}=|\mathbf{E}|=0$, but with a DC magnetic field of
strength $B_{0}$. From Ampère-Maxwell, the axion will induce an AC electric
field oscillating parallel to the magnetic field, and with amplitude $E\sim
g_{a\gamma\gamma}B_{0}a$, generating a backreaction of
$g_{a\gamma\gamma}\mathbf{E}\cdot\mathbf{B}\sim
g_{a\gamma\gamma}^{2}B_{0}^{2}a$. For a relativistic field we take
$(\Box+m_{a}^{2})a\simeq\Box a$, and so the condition for backreaction to be
irrelevant is parametrically
$g_{a\gamma\gamma}^{2}B_{0}^{2}/\bar{\omega}^{2}\ll 1$, with
$\frac{g_{a\gamma\gamma}B_{0}}{\bar{\omega}}\sim
10^{-8}\bigg{(}\frac{g_{a\gamma\gamma}}{g_{a\gamma\gamma}^{\rm
SE}}\bigg{)}\bigg{(}\frac{1~{}{\rm
neV}}{\bar{\omega}}\bigg{)}\bigg{(}\frac{B_{0}}{1~{}{\rm T}}\bigg{)}\,.$ (47)
The size of the backreaction is negligible for the parameter space considered
in this work, though might be of phenomenological interest for experiments
looking for significantly lower frequency axions.
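The estimate in (47) amounts to a unit conversion. The sketch below (illustrative, using $1\,{\rm T}\simeq 195\,{\rm eV}^{2}$ in natural Heaviside-Lorentz units) reproduces the quoted $10^{-8}$ scaling.

```python
# Backreaction parameter g * B0 / omega_bar of Eq. (47), in natural units.
GEV2_PER_TESLA = 1.953e-16   # 1 tesla ~ 195.35 eV^2 = 1.9535e-16 GeV^2

g = 0.66e-10                 # GeV^-1, coupling at the stellar-emission bound
B0 = 1.0 * GEV2_PER_TESLA    # 1 T field, in GeV^2
omega_bar = 1e-9 * 1e-9      # 1 neV, in GeV

ratio = g * B0 / omega_bar   # ~1.3e-8, matching the 1e-8 scaling of Eq. (47)
```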
More generally, effects subleading in $g_{a\gamma\gamma}$ can be neglected.
This motivates studying the fields in (46) in powers of $g_{a\gamma\gamma}$
(see also Ouellet:2018nfr ),
$\displaystyle\mathbf{E}$
$\displaystyle=\mathbf{E}_{0}+\mathbf{E}_{a}+\mathcal{O}(g_{a\gamma\gamma}^{2})\,,$
(48) $\displaystyle\mathbf{B}$
$\displaystyle=\mathbf{B}_{0}+\mathbf{B}_{a}+\mathcal{O}(g_{a\gamma\gamma}^{2})\,.$
Fields carrying a subscript 0 are the dominant fields generated by the
experiment, for example a large static magnetic field in ADMX or HAYSTAC,
whereas a subscript $a$ denotes axion-induced effects, which are
$\mathcal{O}(g_{a\gamma\gamma})$. To simplify the discussion, we will assume
the large fields are DC (i.e. static), as is the case for many axion dark-
matter proposals, although not all, see e.g. Berlin:2019ahk ; Lasenby:2019prg
; Berlin:2020vrk . Under this assumption, the equations for the DC fields
reduce to those of electro- and magneto-statics, so that all the physics of
interest is contained in the equations for the axion-induced AC
fields (here and throughout, we will neglect all couplings to the detector and readout circuit for simplicity of the discussion; this assumption will not qualitatively impact our results, though we note that in detail these contributions can be important, see for instance Ref. Lasenby:2019hfz . We leave a detailed treatment of the C$a$B response including the full matter effects to future work, and thank Robert Lasenby for emphasizing the importance of this),
$\displaystyle\nabla\cdot\mathbf{E}_{a}$
$\displaystyle=-g_{a\gamma\gamma}\mathbf{B}_{0}\cdot\nabla a\,,$ (49)
$\displaystyle\nabla\cdot\mathbf{B}_{a}$ $\displaystyle=0\,,$
$\displaystyle\nabla\times\mathbf{E}_{a}$
$\displaystyle=-\partial_{t}\mathbf{B}_{a}\,,$
$\displaystyle\nabla\times\mathbf{B}_{a}$
$\displaystyle=\partial_{t}\mathbf{E}_{a}+g_{a\gamma\gamma}\left(\mathbf{B}_{0}\,\partial_{t}a-\mathbf{E}_{0}\times\nabla
a\right).$
We can separate the equations as follows,
$\displaystyle(\nabla^{2}-\partial_{t}^{2})\mathbf{E}_{a}=\;$ $\displaystyle
g_{a\gamma\gamma}\mathbf{B}_{0}\partial_{t}^{2}a-g_{a\gamma\gamma}(\mathbf{B}_{0}\cdot\nabla)\nabla
a$ $\displaystyle-\;$ $\displaystyle
g_{a\gamma\gamma}\mathbf{E}_{0}\times\nabla(\partial_{t}a)-g_{a\gamma\gamma}(\nabla
a\cdot\nabla)\mathbf{B}_{0}$ $\displaystyle-\;$ $\displaystyle
g_{a\gamma\gamma}\nabla a\times(\nabla\times\mathbf{B}_{0}),$ (50)
$\displaystyle(\nabla^{2}-\partial_{t}^{2})\mathbf{B}_{a}=\;$ $\displaystyle
g_{a\gamma\gamma}\mathbf{E}_{0}\nabla^{2}a-g_{a\gamma\gamma}(\mathbf{E}_{0}\cdot\nabla)\nabla
a$ $\displaystyle+\;$ $\displaystyle
g_{a\gamma\gamma}\mathbf{B}_{0}\times\nabla(\partial_{t}a)+g_{a\gamma\gamma}(\nabla
a\cdot\nabla)\mathbf{E}_{0}$ $\displaystyle-\;$ $\displaystyle
g_{a\gamma\gamma}(\partial_{t}a)(\nabla\times\mathbf{B}_{0})-g_{a\gamma\gamma}(\nabla\cdot\mathbf{E}_{0})\nabla
a\,.$
In the following subsections we will solve the equations in (50) for different
experimental configurations. Before doing so, we consider the equations
parametrically. Firstly, in the non-relativistic limit, we can drop all axion
gradients, and the equations reduce significantly,
$\displaystyle(\nabla^{2}-\partial_{t}^{2})\mathbf{E}_{a}=$ $\displaystyle
g_{a\gamma\gamma}\mathbf{B}_{0}\partial_{t}^{2}a\,,$ (51)
$\displaystyle(\nabla^{2}-\partial_{t}^{2})\mathbf{B}_{a}=$ $\displaystyle-
g_{a\gamma\gamma}(\partial_{t}a)(\nabla\times\mathbf{B}_{0}).$
There are two relevant spatial scales in the problem. The first is the
experimental size $L$, which through the boundary conditions dictates the
scale over which the primary fields vary. The second is the Compton wavelength
of the axion field, $\lambda_{a}\sim 1/\bar{\omega}$, or in the non-
relativistic case $\lambda_{a}\sim 1/m_{a}$. (Even in the non-relativistic case, the relevant spatial scale is the Compton wavelength, and not the distance over which the phase of the axion field itself varies, which is set by the coherence length. The rationale is that the axion field will drive oscillations in the electromagnetic fields, which, having a lightlike dispersion, will vary over a spatial scale set by the timescale of their oscillations.) Accordingly, in order to understand the relevance of various
terms in (51) qualitatively we can substitute $\nabla\to 1/L$ and
$\partial_{t}\to 1/\lambda_{a}$, and then determine the relevance of each term
for specific experiments. A more careful discussion of these scalings is
provided in Ref. Ouellet:2018nfr .
To begin with, resonant cavity instruments are designed with the principle that
$\lambda_{a}\sim L$, and therefore all terms are relevant. Experiments
searching for lighter dark matter, such as ABRACADABRA or DMRadio, have
$\lambda_{a}\gg L$, suppressing time derivatives with respect to spatial
gradients. In particular, (51) then implies that
$E_{a}\sim(L/\lambda_{a})B_{a}$, so that the induced electric fields are
parametrically suppressed with respect to the magnetic fields, a point that
has been widely discussed Goryachev:2018vjt ; Ouellet:2018nfr ; Kim:2018sci ;
Beutter:2018xfx ; Lasenby:2019hfz . In the high mass regime considered by, for
instance, MADMAX TheMADMAXWorkingGroup:2016hpc ; Brun:2019lyf , where
$\lambda_{a}\ll L$, we instead neglect the spatial gradients. Accordingly,
$B_{a}\sim(\lambda_{a}/L)E_{a}$, so that the dominant effect is now the
induced electric fields.
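The parametric substitutions above can be sketched numerically. The following is an illustrative sketch only (the function name and numbers are our own): substituting $\nabla\to 1/L$ and $\partial_{t}\to 1/\lambda_{a}$ in (51) and solving the wave equations leaves $E_{a}/B_{a}\sim L/\lambda_{a}$.

```python
# Illustrative sketch of the parametric scaling of (51): nabla -> 1/L and
# d/dt -> 1/lambda_a imply E_a/B_a ~ L/lambda_a (function name is our own).

def ratio_Ea_over_Ba(lambda_a, L):
    """Parametric estimate of the induced-field ratio |E_a|/|B_a|."""
    return L / lambda_a

# ABRACADABRA/DMRadio regime (lambda_a >> L): the electric field is suppressed.
print(ratio_Ea_over_Ba(lambda_a=100.0, L=1.0))       # 0.01
# MADMAX regime (lambda_a << L): the magnetic field is suppressed instead.
print(1.0 / ratio_Ea_over_Ba(lambda_a=0.01, L=1.0))  # B_a/E_a ~ 0.01
```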
A similar analysis can be performed in the relativistic case. We first
reinstate the terms containing gradients of the axion field, and then replace
those gradients with their parametric scaling of $1/\lambda_{a}$. For
simplicity, we consider DC field configurations that are purely magnetic.
Then, (50) reduces to
$\displaystyle(\nabla^{2}-\partial_{t}^{2})\mathbf{E}_{a}=g_{a\gamma\gamma}\mathbf{B}_{0}\partial_{t}^{2}a-g_{a\gamma\gamma}(\mathbf{B}_{0}\cdot\nabla)\nabla a-g_{a\gamma\gamma}(\nabla a\cdot\nabla)\mathbf{B}_{0}-g_{a\gamma\gamma}\nabla a\times(\nabla\times\mathbf{B}_{0}),$ (52)
$\displaystyle(\nabla^{2}-\partial_{t}^{2})\mathbf{B}_{a}=g_{a\gamma\gamma}\mathbf{B}_{0}\times\nabla(\partial_{t}a)-g_{a\gamma\gamma}(\partial_{t}a)(\nabla\times\mathbf{B}_{0}).$
Compared to (51), we see for $\mathbf{B}_{a}$ there is a single additional
term that depends on the relative angle between $\mathbf{B}_{0}$ and
$\mathbf{k}$. For a resonant cavity, once more all terms are in principle
relevant. In the low-frequency limit ($\lambda_{a}\gg L$), we have
$\displaystyle\nabla^{2}\mathbf{E}_{a}\simeq-g_{a\gamma\gamma}(\nabla a\cdot\nabla)\mathbf{B}_{0}-g_{a\gamma\gamma}\nabla a\times(\nabla\times\mathbf{B}_{0}),$
$\displaystyle\nabla^{2}\mathbf{B}_{a}\simeq-g_{a\gamma\gamma}(\partial_{t}a)(\nabla\times\mathbf{B}_{0}).$ (53)
We now see that $E_{a}\sim B_{a}$, so that the induced electric field is no
longer parametrically suppressed. Nevertheless, the origin of the two effects
is different. The AC magnetic field is generated by $\mathbf{J}_{\rm eff}$,
whereas the AC electric field originates from $\rho_{\rm eff}$. An identical
analysis in the high-frequency regime ($\lambda_{a}\ll L$), results in
$\displaystyle\partial_{t}^{2}\mathbf{E}_{a}\simeq g_{a\gamma\gamma}(\mathbf{B}_{0}\cdot\nabla)\nabla a-g_{a\gamma\gamma}\mathbf{B}_{0}\partial_{t}^{2}a\,,$ (54)
$\displaystyle\partial_{t}^{2}\mathbf{B}_{a}\simeq-g_{a\gamma\gamma}\mathbf{B}_{0}\times\nabla(\partial_{t}a),$
and again, neither field is suppressed.
Going forward, we will specialize to two specific scenarios from which we can
largely infer how the C$a$B could appear in experiments designed to search for
dark matter. In particular, we will consider broadband and resonant detection
for $\lambda_{a}\gg L$, and resonant detection in the cavity regime
$\lambda_{a}\sim L$. We will not consider the regime where $\lambda_{a}\ll L$,
relevant for experiments such as MADMAX. For a C$a$B, such experiments will
not be able to reach axion energy densities relevant for cosmic sources, given
the scaling in (45) (see also Fig. 1), though they may have promise in looking
for dark-matter decay. Nonetheless, this is the parameter range relevant for
the thermal C$a$B, and therefore it may be interesting to consider dedicated
experiments searching for such a background. We will not pursue this direction
here.
### III.4 Low-Frequency Detection ($\lambda_{a}\gg L$)
Armed with the expressions for the induced electric and magnetic fields, we
now compute the C$a$B sensitivity of axion dark-matter instruments focusing on
the frequencies well below a $\mu$eV ($\lambda_{a}\gg 1~{}{\rm m}$). For the
C$a$B, our estimated sensitivity in (45) suggests that such experiments are
ideal for probing cosmic relics, for which measurements of $\Delta N_{\rm
eff}$ bound $\rho_{a}<\rho_{\gamma}$. While existing instruments only have
sensitivity for dark-matter axions with a coupling comparable to the star-
emission bounds Ouellet:2018beu ; Ouellet:2019tlz ; Gramolin:2020ict ;
Crisosto:2019fcj , these results are paving the way for future experiments
that will probe the couplings predicted for the QCD axion, such as DMRadio
SnowmassOuellet ; SnowmassChaudhuri . In this mass range, both broadband and
resonant search strategies have been proposed. As such, in this section we
will consider both types of detection, and to be concrete envision a large
scale realization of the DMRadio (or equivalently ABRACADABRA)
instrument. (For the specific case of DMRadio, it will likely only be realized on large scales as a resonant instrument, given the advantages of a resonant approach for dark-matter searches Chaudhuri:2018rqn ; Chaudhuri:2019ntz .)
Our starting point is the geometry of DMRadio and ABRACADABRA: a large
toroidal magnet with field strength $B_{0}$. To understand the effects
generated by the C$a$B on such an instrument, we can use the equations of
axion electrodynamics in the $\lambda_{a}\gg L$ limit, as stated in (53). From
the second equation, we see that the axion field will convert the DC magnetic
field into an oscillating toroidal current, which will then induce an AC
magnetic field in the center of the torus. In the presence of an axion field,
a pickup loop placed in the center of the torus would see a varying magnetic
flux in a region where conventionally there should be none. This detection
principle is identical to the conventional strategy for detecting axion dark
matter with such an instrument; that the effect would be the same is clear
from the fact that the equation for $B_{a}$ in (53) does not involve any gradients
of the axion field. Even though the C$a$B modes have a significantly smaller
de Broglie wavelength than dark matter, there is no issue of this leading to
an incoherent effect across the instrument, as that would only occur for
$\lambda_{a}\lesssim L$, outside the range considered by these instruments.
There are, however, differences for the C$a$B detection from the conventional
dark-matter axion search. Firstly, the range of frequencies over which the AC
magnetic field will be excited is significantly larger than for dark matter,
as we have emphasized many times already. A second difference is that unlike
in the non-relativistic case, there is now an unsuppressed electric field
generated from the effective charge. Recalling that the relativistic axion
converts magnetic field lines into oscillating charge lines, the instrument
would behave like a torus of oscillating charge, inducing an axial AC electric
field near the center of the torus. Supposing the pickup loop used to search
for $B_{a}$ is perfectly perpendicular to this field, the above detection
scheme is unaffected. Nevertheless, the oscillating electric field is present,
and its detection could provide a confirmation of any magnetic field excess.
Turning to the actual detector response, integrating the effective current
over the torus, the flux induced in the pickup loop will be Kahn:2016aff
$\Phi_{\rm pickup}(t)=g_{a\gamma\gamma}B_{0}V_{B}\partial_{t}a(t)\,,$ (55)
where $B_{0}$ is the magnetic field at the inner radius of the torus and
$V_{B}$ is the magnetic field volume. In ABRACADABRA this flux is read out by
inductively coupling the pickup loop to a SQUID, which will observe a flux
$\Phi_{a}$ proportional to $\Phi_{\rm pickup}$, with a proportionality
constant $\beta$. For a 100 m3 instrument, $\beta\simeq 0.5\%$ Kahn:2016aff .
As discussed in Sec. III.1, this signal can be searched for by collecting a
time series data set of the SQUID flux, taking the discrete Fourier transform
of the measurements, and finally forming the PSD, $S_{\Phi}(\omega)$. Going
through these steps and recalling that the PSD of the C$a$B is exponentially
distributed, with mean given in (38), the result is that the power will be an
exponentially distributed quantity, with mean
$\displaystyle\langle S_{\Phi}(\omega)\rangle=g_{a\gamma\gamma}^{2}\beta^{2}B_{0}^{2}V_{B}^{2}\left\langle S_{\partial_{t}a}\right\rangle+\lambda_{B}(\omega)=g_{a\gamma\gamma}^{2}\rho_{a}\beta^{2}B_{0}^{2}V_{B}^{2}\frac{\pi\omega p(\omega)}{\bar{\omega}}+\lambda_{B}(\omega)\,,$ (56)
where we have introduced the time derivative of (38), $\left\langle
S_{\partial_{t}a}\right\rangle=\omega^{2}\left\langle S_{a}\right\rangle$.
Here $\lambda_{B}(\omega)$ is the contribution to the flux from background
sources, which in general will be frequency dependent. For a broadband
strategy, $\lambda_{B}$ is limited by noise within the SQUID, numerically
given by $\lambda_{B}\simeq 1.6\times 10^{5}~{}{\rm eV}^{-1}$. The steps
leading to this result parallel closely those in the non-relativistic case
considered in Foster:2017hbq , and we refer there for additional details. In
particular, recalling that in the non-relativistic limit
$p(\omega)=f(v_{\omega})/(m_{a}v_{\omega})$, (56) contains the dark-matter
result as a special case.
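The statistical claim used here, that each PSD bin is exponentially distributed so that its standard deviation equals its mean, can be checked with a toy simulation. This is a generic property of Gaussian random fields; the sketch below (our own construction, with arbitrary units) mimics the stochastic field with white Gaussian noise and histograms a single PSD bin over many realizations.

```python
import cmath, math, random

# Toy check that a PSD bin of a stochastic Gaussian field is exponentially
# distributed, for which the standard deviation equals the mean.
random.seed(0)
N, trials, k = 32, 2000, 5
samples = []
for _ in range(trials):
    x = [random.gauss(0.0, 1.0) for _ in range(N)]
    # Single-bin discrete Fourier transform at frequency index k.
    X = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
    samples.append(abs(X) ** 2 / N)  # PSD estimate for this realization

mean = sum(samples) / trials
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / trials)
print(mean, std)  # comparable values: std/mean ~ 1, as for an exponential
```

With unit-variance noise the mean PSD in each bin is 1, and the spread of the samples is of the same size, as expected for an exponential distribution.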
Importantly, the exponential nature of the PSD implies that we can exploit the
full likelihood framework of Foster:2017hbq . We can then analytically
determine the expected sensitivity to our signal through the use of the Asimov
data set Cowan:2010js , where instead of considering the distribution of our
sensitivity on a set of simulated data, we instead replace the data with its
asymptotic expectation. Doing so, for a given $g_{a\gamma\gamma}$, our
sensitivity to the C$a$B density is given by
$\displaystyle\frac{\rho_{a}}{\rho_{\gamma}}=\frac{1}{g_{a\gamma\gamma}^{2}\rho_{\gamma}}\frac{1}{\beta^{2}B_{0}^{2}V_{B}^{2}}\sqrt{\frac{2{\rm TS}}{T\pi}}\left[\int d\omega\,\left(\frac{\omega p(\omega)}{\bar{\omega}\lambda_{B}}\right)^{2}\right]^{-1/2}\,.$ (57)
Here $T$ is the data collection time, and ${\rm TS}$ is the test statistic
associated with the sensitivity threshold (for 95% expected limits ${\rm
TS}\simeq 2.71$, whereas for an $n$-$\sigma$ discovery, ${\rm TS}=n^{2}$ up to
the look elsewhere effect). To arrive at this result we assumed that
$T\gg\tau$, such that $p(\omega)$ is well resolved, and also that we are in
the background-dominated regime. Further, we have left the limits of the
frequency integration unspecified, but at most they span the range over which
the experiment has sensitivity.
The result in (57) is consistent with the simple estimate claimed in Sec.
III.2. To see this, we model the broad C$a$B as a log-uniform distribution
$p(\omega)=Q_{a}^{\text{C$a$B}}/\omega$ defined over a range
$\bar{\omega}/Q_{a}$. Then, treating $\lambda_{B}$ as independent of
frequency, our sensitivity can be written (dropping $\rho_{\gamma}$)
$\displaystyle\rho_{a}=\sqrt{\frac{2{\rm TS}}{\pi}}\left(\frac{\sqrt{\bar{\omega}/Q_{a}^{\text{C$a$B}}}}{g_{a\gamma\gamma}^{2}}\right)\left(\frac{\lambda_{B}}{\beta^{2}B_{0}^{2}V_{B}^{2}\sqrt{T}}\right)\,.$ (58)
For dark matter, we instead approximate the distribution as a uniform
$p(\omega)=Q_{a}^{\scriptscriptstyle{\rm DM}}/m_{a}$ over a narrow range,
which results in an identical expression but with $Q_{a}^{\text{C$a$B}}\to
Q_{a}^{\scriptscriptstyle{\rm DM}}$ and $\bar{\omega}\to m_{a}$. Taking
$m_{a}=\bar{\omega}$, the ratio of the two expressions yields (44).
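The comparison can be made concrete with a short numerical sketch of (58). All instrument numbers below are placeholders chosen only to exercise the formula (the function name is our own); with $\bar{\omega}=m_{a}$ every experimental factor cancels in the ratio, leaving $\sqrt{Q_{a}^{\scriptscriptstyle{\rm DM}}/Q_{a}^{\text{C$a$B}}}$.

```python
import math

# Sketch: broadband density sensitivity, eq. (58), in natural units with
# illustrative placeholder parameters. Taking the dark-matter case as
# Q_a -> Q_a^DM and omega_bar -> m_a, the ratio of sensitivities reduces
# to sqrt(Q_a^DM / Q_a^CaB).

def sensitivity_58(omega_bar, Q_a, g, lam_B, beta, B0, V_B, T, TS=2.71):
    return (math.sqrt(2 * TS / math.pi)
            * math.sqrt(omega_bar / Q_a) / g**2
            * lam_B / (beta**2 * B0**2 * V_B**2 * math.sqrt(T)))

common = dict(g=1e-12, lam_B=1.6e5, beta=5e-3, B0=1.0, V_B=1.0, T=1e7)
rho_cab = sensitivity_58(omega_bar=1e-9, Q_a=1.0, **common)  # broad CaB
rho_dm = sensitivity_58(omega_bar=1e-9, Q_a=1e6, **common)   # dark matter
print(rho_cab / rho_dm)  # sqrt(1e6) = 1000: CaB reach weaker by sqrt(Q_DM/Q_CaB)
```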
The sensitivity in (57) represents the quantitative result for the broadband
search; however, to provide additional intuition – especially for why the
width of $p(\omega)$ plays a fundamental role – we can rederive the result
qualitatively from a signal-to-noise ratio Budker:2013hfa ; Kahn:2016aff . The
signal strength is simply the average signal flux, which is given by (using
(36))
$|\Phi_{a}|=g_{a\gamma\gamma}|\partial_{t}a|\beta B_{0}V_{B}\sim
g_{a\gamma\gamma}\sqrt{\rho_{a}}\beta B_{0}V_{B}\,,$ (59)
whereas the flux noise is given by $\sqrt{\lambda_{B}}$. Given the signal and
background flux magnitudes, if we perform a measurement for a time $T$, the
naive expectation is our sensitivity will grow as $\sqrt{T}$, and indeed for a
time it will. Nevertheless, as described already, the C$a$B has a finite
coherence time $\tau\sim 2\pi Q_{a}/\bar{\omega}$, and for $T>\tau$ the signal
will no longer combine coherently, leading to the signal-to-noise only scaling
with the parametrically reduced $(T\tau)^{1/4}$ Budker:2013hfa . Assuming
$T>\tau$, then our sensitivity is given by
$|\Phi_{a}|(T\tau)^{1/4}/\sqrt{\lambda_{B}}=1$. Together, these scalings
provide
$\rho_{a}=\frac{1}{\sqrt{2\pi}}\left(\frac{\sqrt{\bar{\omega}/Q_{a}}}{g_{a\gamma\gamma}^{2}}\right)\left(\frac{\lambda_{B}}{\beta^{2}B_{0}^{2}V_{B}^{2}\sqrt{T}}\right)\,,$
(60)
which is parametrically identical to (58), and demonstrates that the
appearance of the width of $p(\omega)$ is a consequence of measuring the C$a$B
over times longer than the field is coherent.
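The crossover from coherent $\sqrt{T}$ growth to the $(T\tau)^{1/4}$ scaling can be illustrated with a toy function (our own construction, arbitrary units), which matches the two regimes continuously at $T=\tau$:

```python
def snr(phi_signal, lam_B, T, tau):
    """Signal-to-noise sketch from the scaling argument: coherent sqrt(T)
    growth for T < tau, then (T*tau)^(1/4) once the field decoheres."""
    if T <= tau:
        return phi_signal * T ** 0.5 / lam_B ** 0.5
    return phi_signal * (T * tau) ** 0.25 / lam_B ** 0.5

# Past the coherence time, SNR grows only as T^(1/4): a 16x longer
# measurement gains only a factor of 2.
s1 = snr(1.0, 1.0, T=16.0, tau=1.0)   # (16*1)^(1/4) = 2.0
s2 = snr(1.0, 1.0, T=256.0, tau=1.0)  # (256*1)^(1/4) = 4.0
print(s1, s2)
```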
The alternative qualitatively different readout strategy proposed in this
frequency range is resonant detection. For the toroidal geometry described
above, this can be achieved by reading out the pickup loop through a resonant
circuit, which for our purposes can be characterized by four parameters: the
quality factor $Q$, resonant frequency $\omega_{0}$, total circuit inductance
$L_{T}$, and thermal noise temperature $T_{0}$. As in the broadband case, we
can generalize the known dark-matter result (see e.g. Ref. Foster:2017hbq ) to
the C$a$B as follows,
$\displaystyle\frac{\rho_{a}}{\rho_{\gamma}}=\frac{1}{g_{a\gamma\gamma}^{2}\rho_{\gamma}}\frac{2L_{T}T_{0}}{Q\omega_{0}B_{0}^{2}V_{B}^{2}}\sqrt{\frac{2{\rm TS}}{T\pi}}\left[\int d\omega\left(\frac{\omega p(\omega)}{\bar{\omega}}\right)^{2}\right]^{-1/2},$ (61)
which, up to experimental factors, is identical to (57).
This expression determines the expected C$a$B energy density sensitivity for a
given set of experimental parameters. Yet we can recast this result in the
spirit of Sec. III.2, and forecast C$a$B sensitivity in terms of the expected
dark-matter reach. Indeed (57) holds equally well for dark matter, taking
$\rho_{a}=\rho_{\scriptscriptstyle{\rm DM}}$, and evaluating
$\int d\omega\left(\frac{\omega p(\omega)}{\bar{\omega}}\right)^{2}=\frac{1}{m_{a}}\int dv\frac{f(v)^{2}}{v}\simeq\frac{2}{3}\frac{Q_{a}^{\scriptscriptstyle{\rm DM}}}{m_{a}}\,,$ (62)
where in the final step we assumed $f(v)$ follows the canonical standard halo
model. For a resonant instrument, the frequency range is commonly restricted
to a single bandwidth of size $\Delta\omega=\omega_{0}/Q$ around $\omega_{0}$.
For $Q<Q_{a}^{\scriptscriptstyle{\rm DM}}$ this detail is irrelevant in the
dark-matter computation, as for $m_{a}=\omega_{0}$, then the dark-matter
distribution is contained entirely within the bandwidth. For the C$a$B
however, we will generically have $Q_{a}^{\text{C$a$B}}\ll Q$, and therefore
expect $p(\omega)$ to be constant over the range of integration, leading to
$\int d\omega\left(\frac{\omega
p(\omega)}{\bar{\omega}}\right)^{2}\simeq\frac{\omega_{0}}{Q}\left(\frac{\omega_{0}p(\omega_{0})}{\bar{\omega}}\right)^{2}\,.$
(63)
Having computed the result for both dark matter and relativistic axions, we
can take the ratio to determine the C$a$B sensitivity as a function of the
experimentally achieved dark-matter coupling, $g_{a\gamma\gamma}^{\rm lim}$,
to be
$\frac{\rho_{a}}{\rho_{\gamma}}=\sqrt{\frac{2}{3}}\frac{\rho_{\scriptscriptstyle{\rm
DM}}}{\rho_{\gamma}}\left(\frac{g_{a\gamma\gamma}^{\rm
lim}}{g_{a\gamma\gamma}^{\rm
SE}}\right)^{2}\left(\frac{\bar{\omega}}{m_{a}}\right)\left(\frac{\sqrt{Q_{a}^{\scriptscriptstyle{\rm
DM}}Q}}{m_{a}p(m_{a})}\right)\,.$ (64)
If we approximate the energy distribution as log flat, so
$p(m_{a})=Q_{a}/m_{a}$, take $\bar{\omega}=m_{a}$, and recall that the C$a$B
sensitivity can in principle be enhanced across at most $N\sim
Q/Q_{a}^{\text{C$a$B}}$ bandwidths, then this result reproduces the parametric
scaling given in (44).
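As a concrete check of the bookkeeping, (64) can be evaluated directly under these simplifications. The numbers below are assumptions chosen only to exercise the formula (a broad C$a$B with $Q_{a}^{\text{C$a$B}}\sim 1$, $Q_{a}^{\scriptscriptstyle{\rm DM}}\sim 10^{6}$, and a circuit $Q\sim 10^{4}$); the function name is our own.

```python
import math

# Illustrative evaluation of the resonant sensitivity ratio (64), with a
# log-flat distribution p(m_a) = Q_a^CaB / m_a and omega_bar = m_a.

def rho_ratio_64(rho_dm_over_gamma, g_lim_over_se, omega_bar, m_a,
                 Q_dm, Q, p_ma):
    return (math.sqrt(2.0 / 3.0) * rho_dm_over_gamma * g_lim_over_se**2
            * (omega_bar / m_a) * math.sqrt(Q_dm * Q) / (m_a * p_ma))

m_a = 1.0
Q_cab = 1.0           # broad CaB: Q_a^CaB ~ 1
p_ma = Q_cab / m_a    # log-flat distribution evaluated at omega = m_a
r = rho_ratio_64(rho_dm_over_gamma=1.0, g_lim_over_se=1.0,
                 omega_bar=m_a, m_a=m_a, Q_dm=1e6, Q=1e4, p_ma=p_ma)
print(r)  # sqrt(2/3) * sqrt(1e10) ~ 8.2e4
```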
### III.5 Resonant Cavity Detection ($\lambda_{a}\sim L$)
We next consider detection with a physical resonator, as pursued by both ADMX
and HAYSTAC, where a microwave cavity is constructed in order to resonantly
enhance power produced at a frequency tuned to the cavity dimension,
$\omega_{0}\sim 1/L$. The design principle for these instruments is to
optimize the search for the narrow spectral feature dark matter predicts when
its mass is such that $1/m_{a}\sim L\sim{\rm m}$. In particular, in the
presence of a large static magnetic field $\mathbf{B}_{0}$, an axion
background will source oscillating electromagnetic fields, thereby generating
potentially detectable power in the cavity. As we will show in this section,
this statement is true for both dark matter and the C$a$B. Importantly, the
existing reach of ADMX is already sufficient to probe open parameter space,
although a reanalysis of the data would be required, and the same will soon be
true of HAYSTAC.
Parametrically our final sensitivity will be identical to the resonant circuit
expression (64). There will, however, be important differences. According to
the expressions in (52), the C$a$B will source AC electric and magnetic fields
in the presence of a large $\mathbf{B}_{0}$, and if we work in a regime where
the DC field is spatially uniform, we have
$(\nabla^{2}-\partial_{t}^{2})\mathbf{E}_{a}=g_{a\gamma\gamma}\mathbf{B}_{0}\partial_{t}^{2}a\,,$
(65)
whilst $\mathbf{B}_{a}$ can be determined from Faraday’s law. This result is
identical to the case of dark matter: we have dropped the contribution from
the effective charge as it cannot excite resonant cavity modes. (We thank Asher Berlin and Kevin Zhou for teaching us this point. Further discussion can be found in, for example, Ref. Berlin:2022mia .) Unlike for a non-relativistic
axion, when $\lambda_{a}\sim L$ the axion wave can undergo ${\cal O}(1)$
spatial oscillations across the detector, which will induce a new daily
modulation effect we will explore. Taking a simplified picture of the C$a$B
where $a(t,\mathbf{x})\propto\cos(\omega t-\mathbf{k}\cdot\mathbf{x})$, we
will find an explicit dependence on the direction of $\mathbf{k}$. In
particular, when we compute the cavity form factor – which captures the
overlap between the axion source and the relevant mode of the cavity being
measured – a dependence on the direction of $\mathbf{k}$ and hence the C$a$B
will appear. For a cosmic relic, the C$a$B will be incident approximately
isotropically on the detector, and the effect will take on a sky-averaged, and
effectively time-independent, value. (If a detection is made, future experiments could further look for axion spatial correlations analogous to those present in the CMB.) However, for local sources of relativistic axions,
the distribution can be anisotropic. This will certainly be the case for dark-
matter decays in the Milky Way, where the flux predominantly originates from
the Galactic Center. In the rest frame of the Earth, where the orientation of
the cavity is time invariant, the direction of the Galactic Center will vary
throughout the day, leading to a daily variation in the cavity form factor.
Accordingly, in general the C$a$B power, which is proportional to the form
factor, will undergo $\mathcal{O}(1)$ variations throughout the day. These
daily modulations provide a novel handle that can be used to distinguish a
C$a$B with a local origin from potential backgrounds, although in the present
work we will not quantitatively calculate this effect.
Our focus is instead on determining the C$a$B sensitivity of these
instruments, which will require a determination of the power a relativistic
axion field deposits in a cylindrical cavity as used by both ADMX and HAYSTAC.
We do so by repeating the analogous non-relativistic computation Krauss:1985ub
; Sikivie:1985yu (see Brubaker:2018ebj for a recent review) while accounting
for the three important modifications in the relativistic case. Two of these
we have already discussed: the additional gradient term in (65) and the fact
that the C$a$B carries power over a much broader frequency range. The third
relativistic novelty arises from the fact the axion field need not be
spatially coherent across the instrument when $|\mathbf{k}|=\omega\sim 1/L$,
which can suppress the integrated power deposited over the cavity.
We begin with (65). We will solve this equation within a cylindrical cavity,
where a DC magnetic field $\mathbf{B}_{0}=B_{0}\hat{\mathbf{z}}$ has been
established along the cavity axis, and we will assume the field is spatially
uniform. The cavity will have a set of normal modes for the electric fields it
can support, $\mathbf{e}_{\ell mn}(\mathbf{x})$, indexed by three integers
$(\ell,m,n)$, and satisfying
$(\nabla^{2}+\omega_{\ell mn}^{2})\mathbf{e}_{\ell mn}(\mathbf{x})=0\,,$ (66)
with $\omega_{\ell mn}$ the resonant frequency of the mode. On dimensional
grounds we expect $\omega_{\ell mn}\sim 1/L$; however, the exact value will be
determined by the spatial variation of the modes. As these basis modes must
satisfy the electric boundary conditions, $\mathcal{O}(1)$ changes to the
resonant frequency can be achieved by modifying these boundary conditions, as
ADMX achieves by mechanically varying the location of tuning rods within the
instrument. We can normalize the basis modes as follows,
$\int d^{3}\mathbf{x}\,\mathbf{e}_{\ell
mn}\cdot\mathbf{e}_{\ell^{\prime}m^{\prime}n^{\prime}}^{*}=\delta_{\ell\ell^{\prime}}\delta_{mm^{\prime}}\delta_{nn^{\prime}}\,.$
(67)
The integral is performed over the cavity volume $V$, which reveals that the
modes carry dimension $\mathbf{e}\sim V^{-1/2}$.
As the orthonormal modes are a complete basis for the electric field in the
cavity, we can write $\mathbf{E}_{a}=\sum\alpha_{\ell mn}\mathbf{e}_{\ell mn}$
and solve for the coefficients. Using this expansion in (65), and transforming
to the frequency domain, we obtain
$\displaystyle\sum_{\ell,m,n}(\omega^{2}-\omega_{\ell mn}^{2})\alpha_{\ell mn}\mathbf{e}_{\ell mn}=-\omega^{2}B_{0}g_{a\gamma\gamma}a(\omega)\cos(\mathbf{k}\cdot\mathbf{x})\hat{\mathbf{z}}\,,$ (68)
where we have assumed a simple form for the axion's spatial dependence,
$a(\omega,\mathbf{x})=a(\omega)\cos(\mathbf{k}\cdot\mathbf{x})$. Using the
orthonormality condition, we can then isolate the electric field coefficients
as,
$\displaystyle\alpha_{\ell mn}=-\frac{g_{a\gamma\gamma}B_{0}\,a(\omega)\,\omega^{2}}{\omega^{2}-\omega_{\ell mn}^{2}}\int d^{3}\mathbf{x}\cos(\mathbf{k}\cdot\mathbf{x})\hat{\mathbf{z}}\cdot\mathbf{e}_{\ell mn}^{*}\,.$ (69)
The first line of this result identifies $\omega_{\ell mn}$ as the resonant
frequencies, with the apparent divergence a remnant of our idealized treatment
of the problem. In particular we have neglected the fact that the electric
field will penetrate the walls of the cavity and dissipate energy due to the
finite resistance of the material, which is the origin of the cavity quality
factor, $Q$. This dissipation can be accounted for as in the standard dark-
matter calculation. More novel is the second line of (69), and we define
$\kappa_{\ell mn}=\int
d^{3}\mathbf{x}\cos(\mathbf{k}\cdot\mathbf{x})\hat{\mathbf{z}}\cdot\mathbf{e}_{\ell
mn}^{*}\,,$ (70)
which we will use to define a generalized cavity form factor shortly, and note
dimensionally $\kappa_{\ell mn}\sim V^{1/2}$.
The energy density in the AC cavity fields for this mode is given by $U_{\ell
mn}=(|\mathbf{E}_{a}|^{2}+|\mathbf{B}_{a}|^{2})/2\simeq|\mathbf{E}_{a}|^{2}$,
as for frequencies near the resonant frequency we will have
$|\mathbf{B}_{a}|\simeq|\mathbf{E}_{a}|$ from Faraday’s law. Collecting our
expressions above, the energy density has the form
$U_{\ell mn}=\frac{1}{4}g_{a\gamma\gamma}^{2}B_{0}^{2}VC_{\ell mn}\omega_{\ell
mn}^{2}|a(\omega)|^{2}\mathcal{T}(\omega)\,.$ (71)
Here we have defined a transfer function, $\mathcal{T}(\omega)$, which
accounts for the resonant response of the circuit when dissipation is
included,
$\mathcal{T}(\omega)=\frac{1}{(\omega-\omega_{\ell mn})^{2}+(\omega_{\ell
mn}/2Q)^{2}}\,,$ (72)
which is sharply peaked around its maximum, $\mathcal{T}(\omega_{\ell
mn})=4Q^{2}/\omega_{\ell mn}^{2}$. The energy density is further expressed in
terms of a cavity form factor $C_{\ell mn}=|\kappa_{\ell mn}|^{2}/V$, where
the cavity volume has been introduced to ensure this is an intensive quantity. In
detail, we define a relativistic cavity form factor,
$C^{\text{C$a$B}}_{\ell mn}=\frac{1}{V}\left|\int
d^{3}\mathbf{x}\cos(\mathbf{k}\cdot\mathbf{x})\hat{\mathbf{z}}\cdot\mathbf{e}_{\ell
mn}^{*}\right|^{2},$ (73)
which can be contrasted with the conventional result used for dark matter
$C^{\scriptscriptstyle{\rm DM}}_{\ell mn}=\frac{1}{V}\left|\int
d^{3}\mathbf{x}\,\hat{\mathbf{z}}\cdot\mathbf{e}_{\ell mn}^{*}\right|^{2}.$
(74)
Let us discuss several details of these form factors, which parameterize the
overlap between the induced $\mathbf{E}_{a}$ and the static $\mathbf{B}_{0}$.
In both cases, numerically we have $C<1$. Further, we expect $C$ to be
$\mathcal{O}(1)$ for the lowest lying mode – higher modes correspond to basis
functions of shorter wavelength, which generically suppress the integral. In
the non-relativistic case, contributions to $\mathbf{e}$ perpendicular to
$\hat{\mathbf{z}}$ do not contribute, which identifies the cavity transverse
magnetic (TM) modes as relevant. We have yet to specify $\mathbf{e}$, however
for a cylindrical cavity they are well known (and our results are
qualitatively similar for other geometries). Taking the cylinder to have
height $L$, and radius $R\sim L$, we have
$(\mathbf{e}_{\ell 00})_{z}=\frac{1}{\sqrt{V}}\frac{J_{0}(\omega_{\ell
00}r)}{J_{1}(\omega_{\ell 00}R)}\,,$ (75)
where $J_{n}$ are Bessel functions of the first kind, and the resonant
frequency is given by $\omega_{\ell 00}=j_{0\ell}/R$, with $j_{0\ell}$ the
$\ell$th zero of $J_{0}$. Explicitly evaluating the cavity form factor, we
obtain
$C^{\scriptscriptstyle{\rm DM}}_{\ell 00}=\frac{4}{j_{0\ell}^{2}}\,.$ (76)
Numerically, $C_{100}\simeq 0.69$, and $C_{\ell 00}\sim C_{100}/\ell^{2}$, so
that the response of higher order modes is rapidly suppressed. Accordingly, it
is common to focus on the lowest lying mode, defining
$\omega_{0}=\omega_{100}$ and $C=C_{100}$.
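The values quoted above can be reproduced numerically without special-function libraries, using the integral representation $J_{0}(x)=\frac{1}{\pi}\int_{0}^{\pi}\cos(x\sin t)\,dt$ and bisecting for its zeros (a self-contained sketch; the helper names are ours):

```python
import math

# Reproduce C^DM_{l00} = 4 / j_{0l}^2, eq. (76), from scratch.

def J0(x, steps=2000):
    """Bessel function of the first kind, order zero, via the midpoint rule
    on its integral representation."""
    h = math.pi / steps
    return sum(math.cos(x * math.sin((i + 0.5) * h))
               for i in range(steps)) * h / math.pi

def bisect_zero(f, a, b, tol=1e-10):
    """Find a root of f in [a, b] by bisection (assumes a sign change)."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

j01 = bisect_zero(J0, 2.0, 3.0)  # first zero of J0, ~2.4048
j02 = bisect_zero(J0, 5.0, 6.0)  # second zero, ~5.5201
print(j01, 4 / j01**2)           # C_100 ~ 0.69
print(j02, 4 / j02**2)           # C_200 ~ 0.13: higher modes suppressed
```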
The relativistic cavity form factor is complicated by a dependence on the incident
angle of the axion through $\hat{\mathbf{n}}$ (where
$\mathbf{k}=\omega\hat{\mathbf{n}}$) and a frequency dependence through
$\mathbf{k}$. With a view to searching for the C$a$B through a repurposed
dark-matter search, we will only consider the response induced for the lowest
TM mode, although we note from (73) that in this case transverse electric
modes could also contribute. Combining (73) and (75) for $\ell=0$,
$\displaystyle C^{\text{C$a$B}}=C\left[\frac{\sin(\omega Lc_{\alpha})}{\omega Lc_{\alpha}}\frac{J_{0}(\omega Rs_{\alpha})}{1-(\omega Rs_{\alpha}/j_{01})^{2}}\right]^{2}\equiv CK(\omega,\alpha)\,,$ (77)
where we employ a shorthand $s_{\alpha}=\sin\alpha$ and
$c_{\alpha}=\cos\alpha$, with $\alpha={\rm
arccos}(\hat{\mathbf{n}}\cdot\hat{\mathbf{z}})$. The additional relativistic
novelty is $K(\omega,\alpha)$, which accounts for the incoherence of the axion
field over the experimental volume, and is shown in Fig. 8. As shown there,
for $\omega\ll\omega_{0}$ we have $K(\omega,\alpha)\to 1$, corresponding to
the limit where the axion is spatially coherent over the instrument, whereas
for $\omega\gg\omega_{0}$ instead $K(\omega,\alpha)\to 0$ as a result of
destructive interference across the cavity, and for $L\sim R$ the result is
only weakly dependent on $\alpha$. (Both ADMX and HAYSTAC have $L\sim 5R$; we will discuss this more realistic case shortly.) This factor effectively removes the contributions from frequencies with $\omega>\omega_{0}$, and we will
approximate it by a step function,
$K(\omega,\alpha)\simeq\Theta(\omega_{0}-\omega)$.
Figure 8: The suppression of relativistic axion power deposited in resonant
cavity instruments, as encoded in $K$, defined in (77). We fix the cavity
radius to be equal to the height, $R=L$, and demonstrate the approximate
insensitivity of the result to the relative angle of the incident axions and
the detector magnetic field,
$\alpha=\arccos(\hat{\mathbf{n}}\cdot\hat{\mathbf{z}})$. For $R\neq L$, a
large dependence on $\alpha$ can arise, generating a potential daily
modulation in the C$a$B signal. Note that this factor enters in the
relativistic case in addition to the transfer function in (72), which is
sharply peaked at $\omega_{0}$.
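The limiting behavior of $K(\omega,\alpha)$ can be sketched directly from (77). The snippet below is self-contained (it re-implements $J_{0}$ from its integral representation; $j_{01}$ is hardcoded to its known value, and the geometry and angle are illustrative):

```python
import math

# Sketch of the incoherence factor K(omega, alpha) of (77) for R = L.

def J0(x, steps=2000):
    h = math.pi / steps
    return sum(math.cos(x * math.sin((i + 0.5) * h))
               for i in range(steps)) * h / math.pi

def K(omega, alpha, L=1.0, R=1.0, j01=2.4048255577):
    s, c = math.sin(alpha), math.cos(alpha)
    # sinc factor along the cavity axis; guard the omega*L*c -> 0 limit.
    x = omega * L * c
    axial = math.sin(x) / x if abs(x) > 1e-12 else 1.0
    radial = J0(omega * R * s) / (1.0 - (omega * R * s / j01) ** 2)
    return (axial * radial) ** 2

omega0 = 2.4048255577  # resonant frequency j01/R for R = 1
for w in (0.1 * omega0, omega0, 10 * omega0):
    print(w / omega0, K(w, math.radians(45.0)))
# K -> 1 for omega << omega0 (spatially coherent axion), and K -> 0 for
# omega >> omega0 (destructive interference across the cavity).
```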
Having determined the modified form factor, we can return to determining the
measurable signal power, which is related to the cavity energy density by
$P=\omega_{0}U/Q$. Using (71), we have
$P_{a}(\omega)=g_{a\gamma\gamma}^{2}\frac{B_{0}^{2}VC\omega_{0}^{3}}{4Q}|a(\omega)|^{2}\mathcal{T}(\omega)\Theta(\omega_{0}-\omega)\,.$
(78)
The form of $|a(\omega)|^{2}$ has already been extensively discussed: it will
be an exponentially distributed variable with mean given in (38). Accordingly,
the power will also be exponentially distributed; however, the average total
power will be given by
$\displaystyle P_{a}^{\text{C$a$B}}=\frac{g_{a\gamma\gamma}^{2}\pi\rho_{a}}{\bar{\omega}}\frac{B_{0}^{2}VC\omega_{0}^{3}}{4Q}\int_{0}^{\omega_{0}}\frac{d\omega}{2\pi}\frac{p(\omega)}{\omega}\mathcal{T}(\omega)\simeq\frac{\pi}{8}g_{a\gamma\gamma}^{2}\rho_{a}p(\omega_{0})\frac{\omega_{0}}{\bar{\omega}}B_{0}^{2}VC\,,$ (79)
where in the final step we assumed that $Q_{a}^{\text{C$a$B}}\ll Q$, so that
$p(\omega)$ only varies slowly over the range where the transfer function has
appreciable support.
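The final step in (79) can be verified numerically: because the step function $\Theta(\omega_{0}-\omega)$ cuts the transfer function exactly at its peak, the frequency integral picks up half the full Lorentzian area, $\pi Q/\omega_{0}$ rather than $2\pi Q/\omega_{0}$, which is the origin of the $\pi/8$ prefactor. A self-contained check with illustrative parameters:

```python
import math

# Check that int_0^{omega0} T(omega) d(omega) -> pi*Q/omega0 for large Q,
# i.e. half the full Lorentzian area, since the cutoff sits at the peak.

def transfer(omega, omega0, Q):
    """Transfer function of (72)."""
    return 1.0 / ((omega - omega0) ** 2 + (omega0 / (2 * Q)) ** 2)

omega0, Q = 1.0, 1000.0
steps = 200000
h = omega0 / steps
integral = sum(transfer((i + 0.5) * h, omega0, Q) for i in range(steps)) * h
print(integral / (math.pi * Q / omega0))  # -> 1 for large Q
```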
In (79) we have an expression for the signal power the relativistic axion will
deposit in the cavity when analyzing a single resonant frequency. Combined
with an expected background contribution and a set of experimental parameters,
this result is sufficient to forecast the C$a$B sensitivity. Here, however, we
will instead use a matched power approach to obtain the projected reach. In
particular, existing ADMX limits are a combination of individual experimental
runs, so rather than combining these run-by-run, we will simply recast the
combined dark-matter $g_{a\gamma\gamma}$ limits. To do so, we require the
dark-matter analogue of (79), which is obtained by integrating over the full
frequency range, and taking a mean $|a(\omega)|^{2}$ as given in (39). Re-
evaluating the integral, we find
$P_{a}^{\scriptscriptstyle{\rm
DM}}\simeq\frac{1}{2}g_{a\gamma\gamma}^{2}B_{0}^{2}QVC\frac{\rho_{\scriptscriptstyle{\rm
DM}}}{m_{a}}\,.$ (80)
We cannot directly match C$a$B and dark-matter powers above, as comparing the
signal strength will only determine the sensitivity assuming the background is
equal in the two cases. However, as already discussed in Sec. III.2, it is
not, and the larger background for the C$a$B will reduce its sensitivity by
$\sqrt{Q_{a}^{\scriptscriptstyle{\rm DM}}/Q}$. Once this is accounted for, we
can combine (79) and (80) (evaluated at $\omega_{0}=m_{a}$), to obtain our
estimated sensitivity in this regime of
$\frac{\rho_{a}}{\rho_{\gamma}}=\frac{4}{\pi}\frac{\rho_{\scriptscriptstyle{\rm DM}}}{\rho_{\gamma}}\left(\frac{g_{a\gamma\gamma}^{\rm lim}}{g_{a\gamma\gamma}^{\rm SE}}\right)^{2}\left(\frac{\bar{\omega}}{m_{a}}\right)\left(\frac{\sqrt{Q_{a}^{\scriptscriptstyle{\rm DM}}Q}}{m_{a}p(m_{a})}\right)\,,$ (81)
which, as promised, is identical to (64) up to a numerical prefactor.
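The scaling of (81) can be packaged into a small helper function. The sketch below is illustrative only: all inputs are dimensionless placeholder combinations of our choosing, not measured values.

```python
import math

def rho_a_over_rho_gamma(rho_dm_over_rho_gamma, g_lim_over_g_se,
                         wbar_over_ma, q_a_dm, q, ma_times_p_ma):
    """Sketch of the recast sensitivity, Eq. (81).

    All arguments are dimensionless combinations of the quantities in the
    text; ma_times_p_ma stands for m_a * p(m_a). Values are placeholders
    supplied by the caller.
    """
    return (4.0 / math.pi) * rho_dm_over_rho_gamma * g_lim_over_g_se**2 \
        * wbar_over_ma * math.sqrt(q_a_dm * q) / ma_times_p_ma

# Improving the dark-matter coupling reach tenfold tightens rho_a a
# hundredfold, as the sensitivity scales with the coupling ratio squared.
weak = rho_a_over_rho_gamma(1.0, 1.0, 1.0, 1e6, 1e5, 1.0)
strong = rho_a_over_rho_gamma(1.0, 0.1, 1.0, 1e6, 1e5, 1.0)
```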
Several aspects of the above discussion are overly idealized (we thank
Jonathan Ouellet for this observation). In particular, we considered a
configuration where the scale at which the relativistic power suppression
occurs – as encoded in $K(\omega,\alpha)$ – matches the resonant frequency
scale, both set by the characteristic size of the cavity. In practice,
however, microwave cavity instruments introduce tuning rods in order to vary
the resonant frequency, which will shift $\omega_{0}$ by a factor of a few
above the characteristic value $\sim 1/L$. Yet as the relativistic suppression
occurs for $\omega\gtrsim 1/L$, when the axion field is no longer spatially
coherent across the cavity, $K(\omega,\alpha)$ as it appears in Fig. 8 will
remain qualitatively unchanged. Naively, combining the two effects would
suggest a significant reduction in sensitivity, as the modes that are
resonantly enhanced also experience the incoherent suppression. Nonetheless,
in an actual instrument the geometry is such that $L\neq R$. Once this is
accounted for, $K(\omega,\alpha)$ varies considerably with angle: for
$\alpha=90^{\circ}$ the suppression is postponed to higher frequencies, so
that the power can still be absorbed, and a significant daily-modulation
effect appears that can be exploited.
## IV Projected Limits
Having outlined various forms the C$a$B can take, and having determined the
experimental sensitivity to it, in this section we combine these results to
sketch projected sensitivities. As we will show, detecting the C$a$B will not
necessarily require dedicated instruments; instead, the rapid progress in the
search for axion dark matter will simultaneously open enormous swaths of
relativistic axion parameter space. The present discussion will not be
exhaustive. Instead we will show the estimated reach in three cases to
demonstrate various aspects of the detection schemes we have proposed, and the
interplay with specific C$a$B candidates. Firstly, we will discuss the reach
of the existing resonant cavity instruments HAYSTAC and ADMX for a simple
Gaussian $p(\omega)$ as predicted in the parametric resonance scenario,
showing that an order of magnitude improvement in the $g_{a\gamma\gamma}$
sensitivity of ADMX would translate to sensitivity to
$\rho_{a}<\rho_{\gamma}$, although this would still be short of the prediction
of parametric-resonance production discussed in Sec. II.3. Secondly, we
demonstrate that a large scale broadband ABRACADABRA style instrument could
probe relativistic axions originating from cosmic strings in the parameter
space where they could help alleviate the Hubble tension. Finally, we will
consider the case of most immediate interest: indirect detection of dark
matter decaying to axions. As this scenario allows $\rho_{a}>\rho_{\gamma}$,
we will see that ADMX is already sensitive to unexplored parameter space, and
the situation will improve dramatically with future instruments.
Recall that the C$a$B signal is determined by three quantities: $\rho_{a}$,
$g_{a\gamma\gamma}$, and $p(\omega)$ (equivalently, the form of
$\Omega_{a}(\omega)$ fixes $\rho_{a}$ and $p(\omega)$). As already discussed,
here we will take the approach of fixing $g_{a\gamma\gamma}$ to
$g_{a\gamma\gamma}^{\rm SE}$, the largest value consistent with star-emission
constraints. When considering detection with frequencies $\omega\gtrsim
10^{-9}$ eV, we take $g_{a\gamma\gamma}^{\rm SE}=0.66\times 10^{-10}~{}{\rm
GeV}^{-1}$ for consistency with CAST Anastassopoulos:2017ftl and Horizontal
Branch Ayala:2014pea ; Carenza:2020zil constraints. In the future, this limit
may be tightened by IAXO Vogel:2013bta (see also Refs. Mukherjee:2018oeb ;
Mukherjee:2019dsu for future searches using the CMB). Should these
experiments detect an axion signal, the same axion could be produced in the
early Universe, and would strongly motivate further searches for the C$a$B. At
lower frequencies, the bounds strengthen further, and we will adopt
$g_{a\gamma\gamma}^{\rm SE}=3.6\times 10^{-12}~{}{\rm GeV}^{-1}$ as determined
from super star clusters Dessert:2020lil .
### IV.1 Gaussian
To begin with, we consider searching for a Gaussian energy distribution in
existing resonant cavity instruments. Such a distribution can be motivated by
the parametric-resonance production mechanism reviewed in Sec. II.3; however,
here we can also envision it as providing an opportunity to explore our
formalism. In detail, we consider a positive definite Gaussian with mean
$\mu=\bar{\omega}$, and variable width $\sigma=\kappa\bar{\omega}$, where we
will explore several values of $\kappa$. In detail, we take
$p(\omega)=\frac{\sqrt{2/\pi}\,e^{-(\omega/\bar{\omega}-1)^{2}/2\kappa^{2}}}{\kappa\bar{\omega}\left[1-{\rm erf}\left(-1/(\sqrt{2}\kappa)\right)\right]}\Theta[\omega]\,.$ (82)
Fixing $g_{a\gamma\gamma}$, we can then determine a limit on $\rho_{a}$ for a
given $\kappa$.
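As a numerical sanity check of (82), the sketch below (our own function names; units with $\bar{\omega}=1$) verifies that the truncated Gaussian integrates to unity:

```python
import math
import numpy as np

def p_gaussian(omega, wbar, kappa):
    """Positive-definite Gaussian energy distribution of Eq. (82)."""
    norm = kappa * wbar * (1.0 - math.erf(-1.0 / (math.sqrt(2.0) * kappa)))
    gauss = np.sqrt(2.0 / np.pi) * np.exp(-(omega / wbar - 1.0) ** 2
                                          / (2.0 * kappa ** 2)) / norm
    # Theta(omega): the distribution vanishes for non-positive energies.
    return np.where(omega > 0.0, gauss, 0.0)

# Check unit normalization numerically for a broad width, kappa = 1.
omega = np.linspace(0.0, 20.0, 200_001)
p = p_gaussian(omega, wbar=1.0, kappa=1.0)
integral = float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(omega)))
```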
To do so, we will recast existing bounds on axion dark matter collected by the
ADMX and HAYSTAC instruments. ADMX is already probing the couplings predicted
for the QCD axion for $m_{a}\sim 2-3~{}\mu{\rm eV}$ Asztalos:2003px ;
Du:2018uak ; Braine:2019fqb . Given (45), we would therefore expect the
instrument to be on the verge of $\rho_{a}\sim\rho_{\gamma}$ sensitivities for
the C$a$B. HAYSTAC, on the other hand, is already within a factor of a few
of the QCD prediction for $m_{a}\sim 23-24~{}\mu{\rm eV}$ Zhong:2018rsr .
Here we take these existing limits and recast them using (81).
Our forecast sensitivity is provided in Fig. 9. In determining the plotted
sensitivities, we combined the single bandwidth sensitivities in (81) across
multiple bins, accounting for the spread of $p(\omega)$. In doing so we
assumed the frequency range scanned by the instruments was divided into bins
of width $\omega_{0}/Q$, taking $Q=10^{5}$ for ADMX and $10^{4}$ for HAYSTAC.
In detail, we compute a limit $\rho_{a,i}$ in each bin, indexed by $i$, and
then determine the combined limit as $\rho_{a}^{-2}=\sum\rho_{a,i}^{-2}$. Note
that in the event of the limit being identical across $N$ bins, this returns
the expected $\rho_{a}=\rho_{a,i}/\sqrt{N}$.
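This combination rule can be written as a one-line helper; as a check, $N$ identical bins reproduce the quoted $\rho_{a,i}/\sqrt{N}$ behavior (names and values below are our own placeholders):

```python
import numpy as np

def combine_limits(rho_limits):
    """Combine per-bin limits via rho_a^-2 = sum_i rho_{a,i}^-2."""
    rho = np.asarray(rho_limits, dtype=float)
    return 1.0 / np.sqrt(np.sum(rho ** -2))

# 16 bins, each individually limiting rho_a to 2.0 (placeholder value),
# combine to 2.0 / sqrt(16) = 0.5.
combined = combine_limits([2.0] * 16)
```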
Figure 9: Sensitivity for a Gaussian C$a$B, as predicted by the
parametric-resonance scenario, that can be obtained by searching for a relativistic axion
in data already collected by ADMX and HAYSTAC in the search for axion dark
matter. In particular we take $p(\omega)$ as in (82), and consider three
values of the width $\kappa$. If the C$a$B is a cosmological relic, then it
must have $\rho_{a}<\rho_{\gamma}$, which ADMX would reach with an order of
magnitude improvement in its $g_{a\gamma\gamma}$ reach.
As the figure demonstrates, at present neither instrument is sensitive to the
$\rho_{a}<\rho_{\gamma}$ required for a cosmological relic. Nonetheless, ADMX
is within two orders of magnitude of the relevant parameter space, which,
(81), would be achieved with an order of magnitude improvement in their
sensitivity to the dark matter $g_{a\gamma\gamma}$. A factor of 30 improvement
would allow ADMX to access the parameter space required to reduce the $H_{0}$
tension. The situation is more challenging for HAYSTAC, which would require at
least three orders of magnitude improvement in their coupling sensitivity to
reach $\rho_{a}\sim\rho_{\gamma}$, although widening the range of masses
considered by a factor $\alpha$ would also enhance their sensitivity by
$\sqrt{\alpha}$.
Note that in Fig. 9, the peak sensitivity depends on the width of the
distribution, $\kappa$, and is not always located within the range of masses
directly probed by ADMX and HAYSTAC for $\kappa>1$. To understand this, recall
from (40) that the signal power is determined by
$\rho_{a}/\bar{\omega}=n_{a}$, the number density. For a fixed $\rho_{a}$, we
can increase $n_{a}$ by decreasing $\bar{\omega}$, and still obtain a
constraint as long as $p(\omega)$ has support.
### IV.2 Cosmic Strings
Next we consider sensitivity to a cosmic-string origin of the C$a$B as
discussed in Sec. II.4. In this work we will use the specific results provided
in Refs. Gorghetto:2018myk ; Gorghetto:2020qws , as already discussed; however,
we note that further improvements in the predicted string spectrum will impact
the sensitivities we present. Regardless, as demonstrated in Fig. 5, the
cosmic-string spectrum is expected to be especially broad. As such, we will
use it as an example to forecast sensitivity with a futuristic broadband
instrument operating in the low-frequency regime, defined by $\lambda_{a}\gg
L$.
The spectrum and energy density of cosmic-string axions are determined
by the symmetry breaking scale, $f_{a}$, and the subsequent temperature
at which the string network enters the scaling regime, $T_{d}\leq f_{a}$. In
terms of their impact on the predicted C$a$B spectrum, $T_{d}<f_{a}$ provides
an effective cutoff on the spectrum at higher frequencies, whereas the energy
density in the spectrum is controlled by $\sim f_{a}^{2}$. Accordingly, for a
given $T_{d}$, we can construct our sensitivity to $f_{a}$ by determining
where a detectable axion power is produced. Recall our broadband sensitivity
to $\rho_{a}$ was given in (57) for an ABRACADABRA type instrument. Assuming a
frequency independent background, we can rearrange that result to obtain
$\int d\omega\,\left(\omega\frac{dn_{a}}{d\omega}\right)^{2}=\frac{2{\rm TS}}{T\pi}\left(\frac{\lambda_{B}}{\beta^{2}B_{0}^{2}V_{B}^{2}(g_{a\gamma\gamma}^{\rm SE})^{2}}\right)^{2}\,.$ (83)
For a fixed $g_{a\gamma\gamma}^{\rm SE}$, the left-hand side sets the
signal strength, so that this result determines our sensitivity. The
differential energy density can be determined from (33) as
$\frac{dn_{a}}{d\omega}=\frac{\rho_{c}}{\omega^{2}}\Omega_{a}(\omega)=\frac{8}{3M_{\rm Pl}^{2}\omega}\int_{a_{d}}^{1}da\,a^{2}\frac{\xi\mu_{\rm eff}}{H}\rho_{\scriptscriptstyle{\rm SM}}(a)F(\omega/Ha)\,.$ (84)
For a given $T_{d}$ and $f_{a}$, this provides all the ingredients for the
signal prediction, and what remains is to set the experimental parameters. For
this purpose we use the parameters adopted in the most optimistic scenario
provided in the original ABRACADABRA proposal Kahn:2016aff , which involved a
100 m$^{3}$ volume instrument with a 5 T magnetic field operating for a year. In
order to determine the expected 95% sensitivity we further take ${\rm
TS}=2.71$. Lastly, we need to specify the frequency range of the search, which
will enter in the terminals for the signal integral in (83). Here we take
$2\times 10^{-13}~{}{\rm eV}<\omega<10^{-7}~{}{\rm eV}$, where the lower limit
is set at 50 Hz, where the $1/f$ noise is expected to begin dominating, and
the upper limit is determined by the physical size of the instrument. Going to
such low frequencies requires us to take the enhanced star emission bounds of
$g_{a\gamma\gamma}^{\rm SE}=3.6\times 10^{-12}~{}{\rm GeV}^{-1}$.
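The rearranged condition (83) is straightforward to encode. In the sketch below, all inputs are placeholders to be supplied in one consistent (e.g. natural) unit system; it simply illustrates the $g_{a\gamma\gamma}^{-4}$ scaling of the smallest detectable signal:

```python
import math

def broadband_threshold(ts, t_int, lambda_b, beta, b0, v_b, g_se):
    """Right-hand side of Eq. (83): the smallest detectable value of
    int dw (w dn_a/dw)^2, for inputs in one consistent unit system.
    All argument values here are placeholders, not the quoted
    experimental parameters in physical units."""
    return (2.0 * ts / (t_int * math.pi)) * (
        lambda_b / (beta ** 2 * b0 ** 2 * v_b ** 2 * g_se ** 2)
    ) ** 2

# The threshold scales as g_se^-4: doubling the assumed coupling lowers
# the required signal by a factor of 16.
t_weak = broadband_threshold(2.71, 1.0, 1.0, 1.0, 5.0, 1.0, 1.0)
t_strong = broadband_threshold(2.71, 1.0, 1.0, 1.0, 5.0, 1.0, 2.0)
```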
The end result of the above discussion is the forecast sensitivity shown in
Fig. 10. In the same figure we also depict the parameter space where the
cosmic-string spectrum over this frequency range would be in tension with
$\Delta N_{\rm eff}$ bounds, and once more where the model could reduce the
Hubble tension. This entire parameter space would be covered by the large
scale broadband instrument. The sensitivity is dominated by the contribution
at the low-frequency end of the instrument, and the flattening of the
sensitivity at $T_{d}\sim 10^{9}$ GeV arises when the decoupling-induced
cutoff in the spectrum occurs within the experimental frequency range.
Figure 10: Sensitivity to the cosmic-string C$a$B at a future large scale
broadband instrument, as a function of the axion decay constant $f_{a}$ and
decoupling temperature $T_{d}$. In particular, we consider a 100 m$^{3}$ volume
ABRACADABRA type instrument, operating with a 5 T field for one year.
### IV.3 Dark-Matter Indirect Detection
Figure 11: Sensitivity to relativistic axions resulting from dark-matter decay
using existing (left) and future (right) resonant instruments. We emphasize
that the forecast sensitivities are likely to be obtained on significantly
different timescales for the various instruments. The $H_{0}$ exclusion region
is a result of the decay of dark matter to a relativistic species leading to
an observable modification of cosmology, and we show the constraints
$\tau\gtrsim 3.6\,t_{U}$ obtained by the DES collaboration Chen:2020iwm .
In the examples thus far, the relevant C$a$B parameter space will only be
reached in future instruments. This is a consequence of bounds from
observational cosmology limiting $\rho_{a}\lesssim\rho_{\gamma}$. However, if
the C$a$B is produced in the late Universe, $\rho_{a}>\rho_{\gamma}$ is
allowed, and in principle may already be detectable. Axion production from
dark-matter decay, discussed in Sec. II.2, is one such scenario. Searching for
these axions would open up a new channel in the broader search for dark-matter
indirect detection.
Recall that relativistic axions produced from dark-matter decay will receive a
contribution from both decays in the local Milky Way halo and extragalactic
dark matter. The latter will arrive approximately isotropically at the Earth,
whereas the former will preferentially originate from the Galactic Center
given its higher dark-matter density. For detection at resonant cavity
instruments, we reiterate that the local decays will be associated with an
observable daily modulation, as the signal is proportional to
$\sin^{4}\alpha$, with $\alpha$ the relative angle of the incident axions and
the cavity magnetic field. Effectively this search uses the resonant cavities
as dark-matter telescopes, although with peak sensitivity obtained when the
instrument is perpendicular to the source. Whilst this effect can be used as a
fingerprint of a genuine signal, for the sensitivity estimates we present here
we will simply take the sky averaged value.
Doing so, our sensitivity for resonant cavity instruments was provided in
(81), which we can re-express as
$\frac{dn_{a}}{d\omega}(m_{a})=\frac{4}{\pi}\rho_{\scriptscriptstyle{\rm DM}}\left(\frac{g_{a\gamma\gamma}^{\rm lim}}{g_{a\gamma\gamma}^{\rm SE}}\right)^{2}\left(\frac{\sqrt{Q_{a}^{\scriptscriptstyle{\rm DM}}Q}}{m_{a}^{2}}\right)\,,$ (85)
where the signal has been written in terms of the differential number density
evaluated at $\omega=m_{a}$. This number density is a combination of the local
and extragalactic densities given in (9) and (5), respectively. As discussed,
the local distribution will be broadened by the Doppler shifts arising from
both the finite velocity distribution of the dark matter, and also the Earth’s
motion relative to the halo, although here we will simply model the
distribution as a Gaussian with relative width $10^{-3}$. With these
distributions, for a given $m_{\scriptscriptstyle{\rm DM}}$, the flux is
dictated by the dark-matter lifetime $\tau$.
While a search for indirect detection with axions has not yet been performed,
there are constraints on dark matter decaying to a relativistic species from
cosmology, which roughly limit the lifetime to be longer than the age of the
Universe Poulin:2016nat ; Haridasu:2020xaa ; Chen:2020iwm . Repurposed axion
dark-matter searches will be able to do considerably better. To begin with, by
repeating the approach of combining experimental bandwidths as we did for the
Gaussian case above, we determine that ADMX is already sensitive to open
parameter space, as we show on the left of Fig. 11. The strongest sensitivity
is obtained when $m_{\scriptscriptstyle{\rm DM}}/2$ falls within the ADMX
search window of $2-3~{}\mu{\rm eV}$, as this corresponds to detecting the
local population of axions, which have energies peaking at
$m_{\scriptscriptstyle{\rm DM}}/2$. For higher dark-matter masses, the decay
can still be detected thanks to the redshifted extragalactic spectrum. We
emphasize once more that this figure shows the sensitivity ADMX can obtain to
this scenario using existing data. Even for local decays, the signal is
significantly broader than that expected of dark matter, and thus in existing
searches the effect would have been discarded as background.
The future prospects for this search are considerable, as we demonstrate on
the right of Fig. 11. Performing a naive forecast of ADMX and HAYSTAC by
assuming they improve their reach for $g_{a\gamma\gamma}$ by one and two
orders of magnitude, respectively, they will both be able to probe open
parameter space. At lower masses, were a future instrument such as DMRadio
able to reach its projected sensitivity to the QCD axion $g_{a\gamma\gamma}$
prediction from 0.5 neV to 0.8 $\mu$eV SnowmassOuellet ; SnowmassChaudhuri ,
their reach would be considerable. In particular, repurposing the low-
frequency resonant result in (64), and assuming DMRadio operates with
$Q=10^{6}$ (although their actual readout strategy will be more complex), we
see that the instrument will be sensitive to an enormous range of parameters.
We note that the future projections shown here may have different timelines
and do not necessarily represent a fair comparison between instruments.
## V Discussion
In this work, we considered the possibility of detecting a Cosmic axion
Background, an ultrarelativistic background of relic axions. Being naturally
light, axions produced in the early Universe are typically relativistic and
may exist over a large range of possible energies. The C$a$B energy spectrum
would carry imprints of the history of the Universe, in close
analogy with the program searching for a stochastic gravitational wave
background. In particular, we discuss sources sensitive to the reheat
temperature of the Universe (thermal production), the origin of dark matter
(dark-matter decay), inflation (parametric resonance), and early Universe
phase transitions (cosmic-string emission). Most of these production
mechanisms predict axions with energies in the range detectable by current and
future axion dark-matter experiments. The exception is the well-motivated
thermal C$a$B, where new ideas will be required to probe the predicted
energies and densities. For relativistic axion production before
recombination, probes of the expansion of the Universe limit the axion energy
density to be below that of the CMB. Moreover, recent measurements have found
a persistent discrepancy between early- and late-Universe determinations of
the Hubble constant. An additional source of radiation is the simplest way to
partially alleviate this tension, motivating a target value for the C$a$B
energy density, and we discuss the production of axions in light of this
target. If an axion is discovered
from star-emission searches, as hinted at by the recent XENON1T excess
Aprile:2020tmw , or by future experiments such as IAXO Vogel:2013bta , there
would additionally be a clear target coupling for the axion, greatly
increasing the urgency of conducting C$a$B searches.
For detection, we have focused on axions coupled to photons. Relativistic
axion relics contribute terms to Maxwell’s equations which are negligible for
axion dark matter. The influence of the new terms depends on the experimental
design and we study both resonant cavity experiments (e.g., ADMX, HAYSTAC) and
lumped-circuit readout instruments (e.g., ABRACADABRA and DMRadio). For
resonant cavities, the power deposited in the cavity becomes sensitive to the
axion propagation direction and will exhibit a daily modulation for non-
isotropic sources (such as dark-matter decay). For oscillating magnetic field
searches, we show that the power deposited is independent of the incoming
axion direction; however, oscillating electric fields develop within the
experiment that might be useful in confirming a positive signal. In either
case, we develop simple relations to estimate the prospective sensitivity of
axion experiments to a C$a$B. While present searches for dark-matter axions
can have sensitivity to a C$a$B, this typically requires a dedicated search
due to the presence of a broad energy spectrum. In particular, with a search
for a relatively broad axion energy distribution, current data from the ADMX
experiment may be able to discover dark matter decaying into axions, thereby
turning the instrument into an indirect-detection axion telescope. In the
future, axion detection experiments will be able to discover ambient axion
densities well below that of the CMB and probe a wide range of motivated
sources.
Throughout this work, we have largely neglected the influence of a prospective
axion mass, $m_{a}$. If, for a given axion source, $m_{a}$ becomes comparable
to the axion energy as the Universe expands, then axions would cluster in
galaxies. This would increase the local density, analogous to the C$\nu$B. If
axions become highly non-relativistic they make up a form of dark matter as
currently searched for by axion haloscopes. Alternatively, if the axion is
still mildly relativistic, it can have a relatively wide energy distribution
and still be overlooked by present analyses. The detection prospects of axions
with intermediate masses are an interesting question we leave for future work.
Experiments could also look for a C$a$B with other interaction terms, such as
the axion-nucleon coupling. Since the projections of experiments such as
CASPEr Budker:2013hfa have sensitivities well below star-emission bounds, they
will also be able to probe cosmological sources of relativistic axions. In
addition, one could consider other relativistic bosons such as dark photons.
We leave the study of their prospective sources and detection capabilities for
future work.
## Acknowledgments
We thank Yang Bai, Karl van Bibber, Keisuke Harigaya, and Alexander Leder for
useful discussions. We further acknowledge significant and insightful feedback
on a draft provided by Anson Hook, Yonatan Kahn, Robert Lasenby, and Jonathan
Ouellet. Finally, we thank Asher Berlin and Kevin Zhou for pointing out our
misuse of the effective charge in an earlier version of this work. JD is
supported in part by the DOE under contract DE-AC02-05CH11231 and in part by
the NSF CAREER grant PHY-1915852. HM is supported by the Director, Office of
Science, Office of High Energy Physics of the US Department of Energy under
the Contract No. DE-AC02-05CH11231, NSF grant PHY-1915314, the JSPS Grant-in-
Aid for Scientific Research JP17K05409, MEXT Grant-in-Aid for Scientific
Research on Innovative Areas JP15H05887, JP15K21733, by WPI, MEXT, Japan, and
Hamamatsu Photonics, KK. NLR is supported by the Miller Institute for Basic
Research in Science at the University of California, Berkeley. This work made
use of resources provided by the National Energy Research Scientific Computing
Center, a US Department of Energy Office of Science User Facility supported by
Contract No. DE-AC02-05CH11231.
## Appendix A Breaking the Axion-Photon Coupling Relation with Kinetic Mixing
The axion-photon coupling has a natural value related to its decay constant.
If the interaction arises from integrating out charged fermions, $f$, of
charge $Q_{f}$, then the coupling generated is
$g_{a\gamma\gamma}=\sum_{f}\frac{\alpha Q_{f}^{2}}{2\pi f_{a}}\,.$ (86)
In the absence of parametrically small charges, the expectation is that
$g_{a\gamma\gamma}\sim\alpha/2\pi f_{a}$. However, it is conceptually simple
to envision scenarios where the coupling is well below this scale either
through axion-axion Babu:1994id or photon-dark photon Daido:2018dmu kinetic
mixing (see also Refs. Agrawal:2017cmd ; Dror:2020zru for related
discussions).
To see this we first consider a kinetic mixing between an axion, $a$, and a
second axion, $b$. We take the second axion to have a decay constant $f_{b}$
that couples to electromagnetism, whereas $a$ does not couple to any charged
fields, so that it has no direct photon coupling. In detail, we take the
following Lagrangian,
${\cal L}\supset\frac{1}{2}(\partial_{\mu}a)^{2}+\frac{1}{2}(\partial_{\mu}b)^{2}+\varepsilon(\partial_{\mu}a)(\partial^{\mu}b)+\frac{\alpha}{8\pi f_{b}}bF_{\mu\nu}\tilde{F}^{\mu\nu}\,.$ (87)
To diagonalize the axion kinetic terms to leading order in $\varepsilon$, we
send $a\to a$, $b\to b-\varepsilon a$. Once axion masses are introduced, this
transformation leaves the rest of the Lagrangian approximately unchanged only
if $m_{a}\gg m_{b}$. With this shift the Lagrangian for $a$ can now be written
as,
${\cal L}\supset\frac{1}{2}(\partial_{\mu}a)^{2}-\frac{\alpha\varepsilon}{8\pi f_{b}}aF_{\mu\nu}\tilde{F}^{\mu\nu}\,.$ (88)
Importantly, the coupling between $a$ and electromagnetism is determined by
the free parameters $\varepsilon$ and $f_{b}$ — it can be small even if
$f_{a}$ is well below the weak scale. While axion-axion mixing breaks the
relationship between $g_{a\gamma\gamma}$ and $f_{a}$, it also requires an
additional light axion that may influence the phenomenology.
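The cancellation underlying this field redefinition can be checked symbolically. The following sketch (using sympy, with symbols of our choosing standing in for the field gradients) confirms that the kinetic cross term vanishes exactly, while $a$'s kinetic term is rescaled only at $O(\varepsilon^{2})$:

```python
import sympy as sp

# Represent the gradients of the two axion fields by symbols da, db.
da, db, eps = sp.symbols('da db epsilon')

# Kinetic sector of Eq. (87).
L = sp.Rational(1, 2) * da**2 + sp.Rational(1, 2) * db**2 + eps * da * db

# Field redefinition a -> a, b -> b - eps * a, i.e. db -> db - eps * da.
L_shifted = sp.expand(L.subs(db, db - eps * da))
poly = sp.Poly(L_shifted, da, db)

cross = poly.coeff_monomial(da * db)   # kinetic mixing after the shift
diag_a = poly.coeff_monomial(da**2)    # kinetic coefficient of a
```

The cross term cancels exactly, and the $a$ kinetic coefficient becomes $(1-\varepsilon^{2})/2$, consistent with the leading-order claim in the text.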
Alternatively, one can consider a kinetic mixing between the photon and a dark
photon. Consider the Lagrangian,
$\mathcal{L}\supset\frac{\alpha^{\prime}}{8\pi f_{a}}aF^{\prime}_{\mu\nu}\tilde{F}^{\prime\mu\nu}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{4}F^{\prime}_{\mu\nu}F^{\prime\mu\nu}-\frac{\epsilon}{4}F_{\mu\nu}F^{\prime\mu\nu}\,,$ (89)
where $\alpha^{\prime}$ is the dark gauge coupling and the photon-dark photon
mixing is set by $\epsilon$. If the dark photon, $A^{\prime}$, has a mass
below the photon plasma mass, then the photon-dark photon mixing can be
eliminated with the transformation $A^{\prime}\to A^{\prime}$, $A\to
A-\epsilon A^{\prime}$. This leaves the dark photon approximately massless,
but generates an axion-photon coupling,
$\mathcal{L}\supset\frac{\epsilon^{2}\alpha^{\prime}}{8\pi f_{a}}aF\tilde{F}\,.$ (90)
The coefficient can be arbitrarily small even for $f_{a}$ below the
electroweak scale, breaking its conventional relationship to
$g_{a\gamma\gamma}$. Interestingly, there may also be an opportunity to
discover additional light states in this scenario. If $f_{a}\lesssim 1~{}{\rm
TeV}$ then in the limit that the dark-photon mass is negligible, there are
millicharged particles in the spectrum. These particles can have a mass of, at
most, $\sim 4\pi f_{a}$. For $f_{a}\lesssim~{}100~{}{\rm eV}$ solar cooling
bounds on millicharged particles would require $\epsilon\lesssim 10^{-13}$
Vogel:2013raa and may place a bound on the parameter space.
## References
* (1) R. Peccei and H. R. Quinn, CP Conservation in the Presence of Instantons, Phys. Rev. Lett. 38 (1977) 1440–1443.
* (2) R. Peccei and H. R. Quinn, Constraints Imposed by CP Conservation in the Presence of Instantons, Phys. Rev. D 16 (1977) 1791–1797.
* (3) S. Weinberg, A New Light Boson?, Phys. Rev. Lett. 40 (1978) 223–226.
* (4) F. Wilczek, Problem of Strong $P$ and $T$ Invariance in the Presence of Instantons, Phys. Rev. Lett. 40 (1978) 279–282.
* (5) P. Svrcek and E. Witten, Axions In String Theory, JHEP 06 (2006) 051, [hep-th/0605206].
* (6) A. Arvanitaki, S. Dimopoulos, S. Dubovsky, N. Kaloper, and J. March-Russell, String Axiverse, Phys. Rev. D 81 (2010) 123530, [arXiv:0905.4720].
* (7) J. Halverson, C. Long, B. Nelson, and G. Salinas, Towards string theory expectations for photon couplings to axionlike particles, Phys. Rev. D 100 (2019), no. 10 106010, [arXiv:1909.05257].
* (8) J. Preskill, M. B. Wise, and F. Wilczek, Cosmology of the Invisible Axion, Phys. Lett. B 120 (1983) 127–132.
* (9) L. Abbott and P. Sikivie, A Cosmological Bound on the Invisible Axion, Phys. Lett. B 120 (1983) 133–136.
* (10) M. Dine and W. Fischler, The Not So Harmless Axion, Phys. Lett. B 120 (1983) 137–141.
* (11) D. Baumann, D. Green, and B. Wallisch, New Target for Cosmic Axion Searches, Phys. Rev. Lett. 117 (2016), no. 17 171301, [arXiv:1604.08614].
* (12) J. P. Conlon and M. C. D. Marsh, The Cosmophenomenology of Axionic Dark Radiation, JHEP 10 (2013) 214, [arXiv:1304.1804].
* (13) J. P. Conlon and M. D. Marsh, Excess Astrophysical Photons from a 0.1–1 keV Cosmic Axion Background, Phys. Rev. Lett. 111 (2013), no. 15 151301, [arXiv:1305.3603].
* (14) M. Cicoli, J. P. Conlon, M. C. D. Marsh, and M. Rummel, 3.55 keV photon line and its morphology from a 3.55 keV axionlike particle line, Phys. Rev. D 90 (2014) 023540, [arXiv:1403.2370].
* (15) Y. Cui, M. Pospelov, and J. Pradler, Signatures of Dark Radiation in Neutrino and Dark Matter Detectors, Phys. Rev. D 97 (2018), no. 10 103004, [arXiv:1711.04531].
* (16) T. Higaki, K. Nakayama, and F. Takahashi, Cosmological constraints on axionic dark radiation from axion-photon conversion in the early Universe, JCAP 09 (2013) 030, [arXiv:1306.6518].
* (17) C. Evoli, M. Leo, A. Mirizzi, and D. Montanino, Reionization during the dark ages from a cosmic axion background, JCAP 05 (2016) 006, [arXiv:1602.08433].
* (18) L. Verde, T. Treu, and A. Riess, Tensions between the Early and the Late Universe, Nature Astron. 3 (7, 2019) 891, [arXiv:1907.10625].
* (19) Planck Collaboration, N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641 (2020) A6, [arXiv:1807.06209].
* (20) P. W. Graham, I. G. Irastorza, S. K. Lamoreaux, A. Lindner, and K. A. van Bibber, Experimental Searches for the Axion and Axion-Like Particles, Ann. Rev. Nucl. Part. Sci. 65 (2015) 485–514, [arXiv:1602.00039].
* (21) I. G. Irastorza and J. Redondo, New experimental approaches in the search for axion-like particles, Prog. Part. Nucl. Phys. 102 (2018) 89–159, [arXiv:1801.08127].
* (22) G. Raffelt, Stars as laboratories for fundamental physics: The astrophysics of neutrinos, axions, and other weakly interacting particles. University of Chicago Press, 1996.
* (23) P. Sikivie, Experimental Tests of the Invisible Axion, Phys. Rev. Lett. 51 (1983) 1415–1417. [Erratum: Phys.Rev.Lett. 52, 695 (1984)].
* (24) S. Moriyama, A Proposal to search for a monochromatic component of solar axions using Fe-57, Phys. Rev. Lett. 75 (1995) 3222–3225, [hep-ph/9504318].
* (25) CAST Collaboration, V. Anastassopoulos et al., New CAST Limit on the Axion-Photon Interaction, Nature Phys. 13 (2017) 584–590, [arXiv:1705.02290].
* (26) A. Ayala, I. Domínguez, M. Giannotti, A. Mirizzi, and O. Straniero, Revisiting the bound on axion-photon coupling from Globular Clusters, Phys. Rev. Lett. 113 (2014), no. 19 191302, [arXiv:1406.6053].
* (27) P. Carenza, O. Straniero, B. Döbrich, M. Giannotti, G. Lucente, and A. Mirizzi, Constraints on the coupling with photons of heavy axion-like-particles from Globular Clusters, Phys. Lett. B 809 (2020) 135709, [arXiv:2004.08399].
* (28) A. Payez, C. Evoli, T. Fischer, M. Giannotti, A. Mirizzi, and A. Ringwald, Revisiting the SN1987A gamma-ray limit on ultralight axion-like particles, JCAP 02 (2015) 006, [arXiv:1410.3747].
* (29) N. Bar, K. Blum, and G. D’Amico, Is there a supernova bound on axions?, Phys. Rev. D 101 (2020), no. 12 123025, [arXiv:1907.05020].
* (30) C. S. Reynolds, M. D. Marsh, H. R. Russell, A. C. Fabian, R. Smith, F. Tombesi, and S. Veilleux, Astrophysical limits on very light axion-like particles from Chandra grating spectroscopy of NGC 1275, arXiv:1907.05475.
* (31) C. Dessert, J. W. Foster, and B. R. Safdi, X-ray Searches for Axions from Super Star Clusters, arXiv:2008.03305.
* (32) L. Krauss, J. Moody, F. Wilczek, and D. E. Morris, Calculations for Cosmic Axion Detection, Phys. Rev. Lett. 55 (1985) 1797.
* (33) P. Sikivie, Detection Rates for ’Invisible’ Axion Searches, Phys. Rev. D 32 (1985) 2988. [Erratum: Phys.Rev.D 36, 974 (1987)].
* (34) ADMX Collaboration, S. Asztalos et al., An Improved RF cavity search for halo axions, Phys. Rev. D 69 (2004) 011101, [astro-ph/0310042].
* (35) ADMX Collaboration, N. Du et al., A Search for Invisible Axion Dark Matter with the Axion Dark Matter Experiment, Phys. Rev. Lett. 120 (2018), no. 15 151301, [arXiv:1804.05750].
* (36) ADMX Collaboration, T. Braine et al., Extended Search for the Invisible Axion with the Axion Dark Matter Experiment, Phys. Rev. Lett. 124 (2020), no. 10 101303, [arXiv:1910.08638].
* (37) HAYSTAC Collaboration, L. Zhong et al., Results from phase 1 of the HAYSTAC microwave cavity axion experiment, Phys. Rev. D 97 (2018), no. 9 092001, [arXiv:1803.03690].
* (38) S. Lee, S. Ahn, J. Choi, B. Ko, and Y. Semertzidis, Axion Dark Matter Search around 6.7 $\mu$eV, Phys. Rev. Lett. 124 (2020), no. 10 101802, [arXiv:2001.05102].
* (39) P. Sikivie, N. Sullivan, and D. Tanner, Proposal for Axion Dark Matter Detection Using an LC Circuit, Phys. Rev. Lett. 112 (2014), no. 13 131301, [arXiv:1310.8545].
* (40) S. Chaudhuri, P. W. Graham, K. Irwin, J. Mardon, S. Rajendran, and Y. Zhao, Radio for hidden-photon dark matter detection, Phys. Rev. D 92 (2015), no. 7 075012, [arXiv:1411.7382].
* (41) Y. Kahn, B. R. Safdi, and J. Thaler, Broadband and Resonant Approaches to Axion Dark Matter Detection, Phys. Rev. Lett. 117 (2016), no. 14 141801, [arXiv:1602.01086].
* (42) M. Silva-Feaver et al., Design Overview of DM Radio Pathfinder Experiment, IEEE Trans. Appl. Supercond. 27 (2017), no. 4 1400204, [arXiv:1610.09344].
* (43) J. L. Ouellet et al., First Results from ABRACADABRA-10 cm: A Search for Sub-$\mu$eV Axion Dark Matter, Phys. Rev. Lett. 122 (2019), no. 12 121802, [arXiv:1810.12257].
* (44) J. L. Ouellet et al., Design and implementation of the ABRACADABRA-10 cm axion dark matter search, Phys. Rev. D 99 (2019), no. 5 052012, [arXiv:1901.10652].
* (45) A. V. Gramolin, D. Aybas, D. Johnson, J. Adam, and A. O. Sushkov, Search for axion-like dark matter with ferromagnets, arXiv:2003.03348.
* (46) J. L. Ouellet et al., Probing the QCD Axion with DMRadio-m3, Snowmass 2021 Letter of Interest CF2 (2020), no. 217. Available at https://www.snowmass21.org/docs/files/summaries/CF/SNOWMASS21-CF2_CF0-IF1_IF0_Ouellet-217.pdf.
* (47) S. Chaudhuri et al., DMRadio-GUT: Probing GUT-scale QCD Axion Dark Matter, Snowmass 2021 Letter of Interest CF2 (2020), no. 219. Available at https://www.snowmass21.org/docs/files/summaries/CF/SNOWMASS21-CF2_CF0-IF1_IF0_Saptarshi_Chaudhuri-219.pdf.
* (48) MADMAX Working Group Collaboration, A. Caldwell, G. Dvali, B. Majorovits, A. Millar, G. Raffelt, J. Redondo, O. Reimann, F. Simon, and F. Steffen, Dielectric Haloscopes: A New Way to Detect Axion Dark Matter, Phys. Rev. Lett. 118 (2017), no. 9 091801, [arXiv:1611.05865].
* (49) A. J. Millar, G. G. Raffelt, J. Redondo, and F. D. Steffen, Dielectric Haloscopes to Search for Axion Dark Matter: Theoretical Foundations, JCAP 01 (2017) 061, [arXiv:1612.07057].
* (50) A. N. Ioannisian, N. Kazarian, A. J. Millar, and G. G. Raffelt, Axion-photon conversion caused by dielectric interfaces: quantum field calculation, JCAP 09 (2017) 005, [arXiv:1707.00701].
* (51) A. Berlin, R. T. D’Agnolo, S. A. Ellis, C. Nantista, J. Neilson, P. Schuster, S. Tantawi, N. Toro, and K. Zhou, Axion Dark Matter Detection by Superconducting Resonant Frequency Conversion, arXiv:1912.11048.
* (52) R. Lasenby, Microwave cavity searches for low-frequency axion dark matter, Phys. Rev. D 102 (2020), no. 1 015008, [arXiv:1912.11056].
* (53) H. Liu, B. D. Elwood, M. Evans, and J. Thaler, Searching for Axion Dark Matter with Birefringent Cavities, Phys. Rev. D 100 (2019), no. 2 023548, [arXiv:1809.01656].
* (54) I. Obata, T. Fujita, and Y. Michimura, Optical Ring Cavity Search for Axion Dark Matter, Phys. Rev. Lett. 121 (2018), no. 16 161301, [arXiv:1805.11753].
* (55) A. Berlin, R. T. D’Agnolo, S. A. Ellis, and K. Zhou, Heterodyne Broadband Detection of Axion Dark Matter, arXiv:2007.15656.
* (56) D. J. Marsh, K.-C. Fong, E. W. Lentz, L. r. Smejkal, and M. N. Ali, Proposal to Detect Dark Matter using Axionic Topological Antiferromagnets, Phys. Rev. Lett. 123 (2019), no. 12 121601, [arXiv:1807.08810].
* (57) T. Trickle, Z. Zhang, and K. M. Zurek, Detecting Light Dark Matter with Magnons, Phys. Rev. Lett. 124 (2020), no. 20 201801, [arXiv:1905.13744].
* (58) M. Lawson, A. J. Millar, M. Pancaldi, E. Vitagliano, and F. Wilczek, Tunable axion plasma haloscopes, Phys. Rev. Lett. 123 (2019), no. 14 141802, [arXiv:1904.11872].
* (59) G. B. Gelmini, A. J. Millar, V. Takhistov, and E. Vitagliano, Probing dark photons with plasma haloscopes, Phys. Rev. D 102 (2020), no. 4 043003, [arXiv:2006.06836].
* (60) P. F. de Salas and A. Widmark, Dark matter local density determination: recent observations and future prospects, arXiv:2012.11477.
* (61) L. Lentati et al., European Pulsar Timing Array Limits On An Isotropic Stochastic Gravitational-Wave Background, Mon. Not. Roy. Astron. Soc. 453 (2015), no. 3 2576–2598, [arXiv:1504.03692].
* (62) R. Shannon et al., Gravitational waves from binary supermassive black holes missing in pulsar observations, Science 349 (2015), no. 6255 1522–1525, [arXiv:1509.07320].
* (63) NANOGRAV Collaboration, Z. Arzoumanian et al., The NANOGrav 11-year Data Set: Pulsar-timing Constraints On The Stochastic Gravitational-wave Background, Astrophys. J. 859 (2018), no. 1 47, [arXiv:1801.02617].
* (64) LIGO Scientific, Virgo Collaboration, B. Abbott et al., Search for the isotropic stochastic background using data from Advanced LIGO’s second observing run, Phys. Rev. D 100 (2019), no. 6 061101, [arXiv:1903.02886].
* (65) M. S. Turner, Thermal Production of Not SO Invisible Axions in the Early Universe, Phys. Rev. Lett. 59 (1987) 2489. [Erratum: Phys.Rev.Lett. 60, 1101 (1988)].
* (66) S. Chang and K. Choi, Hadronic axion window and the big bang nucleosynthesis, Phys. Lett. B 316 (1993) 51–56, [hep-ph/9306216].
* (67) E. Masso, F. Rota, and G. Zsembinszki, On axion thermalization in the early universe, Phys. Rev. D 66 (2002) 023004, [hep-ph/0203221].
* (68) S. Hannestad, A. Mirizzi, and G. Raffelt, New cosmological mass limit on thermal relic axions, JCAP 07 (2005) 002, [hep-ph/0504059].
* (69) P. Graf and F. D. Steffen, Thermal axion production in the primordial quark-gluon plasma, Phys. Rev. D 83 (2011) 075011, [arXiv:1008.4528].
* (70) A. Salvio, A. Strumia, and W. Xue, Thermal axion production, JCAP 01 (2014) 011, [arXiv:1310.6982].
* (71) R. Z. Ferreira and A. Notari, Observable Windows for the QCD Axion Through the Number of Relativistic Species, Phys. Rev. Lett. 120 (2018), no. 19 191301, [arXiv:1801.06090].
* (72) F. Arias-Aragon, F. D’Eramo, R. Z. Ferreira, L. Merlo, and A. Notari, Production of Thermal Axions across the ElectroWeak Phase Transition, arXiv:2012.04736.
* (73) F. D’Eramo, R. Z. Ferreira, A. Notari, and J. L. Bernal, Hot Axions and the $H_{0}$ tension, JCAP 1811 (2018) 014, [arXiv:1808.07430].
* (74) Y. Gong and X. Chen, Cosmological Constraints on Invisible Decay of Dark Matter, Phys. Rev. D 77 (2008) 103511, [arXiv:0802.2296].
* (75) V. Poulin, P. D. Serpico, and J. Lesgourgues, A fresh look at linear cosmological constraints on a decaying dark matter component, JCAP 08 (2016) 036, [arXiv:1606.02073].
* (76) K. Vattis, S. M. Koushiappas, and A. Loeb, Dark matter decaying in the late Universe can relieve the H0 tension, Phys. Rev. D 99 (2019), no. 12 121302, [arXiv:1903.06220].
* (77) B. S. Haridasu and M. Viel, Late-time decaying dark matter: constraints and implications for the $H_{0}$-tension, Mon. Not. Roy. Astron. Soc. 497 (2020), no. 2 1757–1764, [arXiv:2004.07709].
* (78) DES Collaboration, A. Chen et al., Constraints on Decaying Dark Matter with DES-Y1 and external data, arXiv:2011.04606.
* (79) M. Lisanti, S. Mishra-Sharma, N. L. Rodd, B. R. Safdi, and R. H. Wechsler, Mapping Extragalactic Dark Matter Annihilation with Galaxy Surveys: A Systematic Study of Stacked Group Searches, Phys. Rev. D 97 (2018), no. 6 063005, [arXiv:1709.00416].
* (80) J. F. Navarro, C. S. Frenk, and S. D. White, The Structure of cold dark matter halos, Astrophys. J. 462 (1996) 563–575, [astro-ph/9508025].
* (81) J. F. Navarro, C. S. Frenk, and S. D. White, A Universal density profile from hierarchical clustering, Astrophys. J. 490 (1997) 493–508, [astro-ph/9611107].
* (82) GRAVITY Collaboration, R. Abuter et al., Detection of the gravitational redshift in the orbit of the star S2 near the Galactic centre massive black hole, Astron. Astrophys. 615 (2018) L15, [arXiv:1807.09409].
* (83) E. G. Speckhard, K. C. Ng, J. F. Beacom, and R. Laha, Dark Matter Velocity Spectroscopy, Phys. Rev. Lett. 116 (2016), no. 3 031301, [arXiv:1507.04744].
* (84) G. B. Gelmini and M. Roncadelli, Left-Handed Neutrino Mass Scale and Spontaneously Broken Lepton Number, Phys. Lett. 99B (1981) 411–415.
* (85) J. L. Feng, T. Moroi, H. Murayama, and E. Schnapka, Third generation familons, b factories, and neutrino cosmology, Phys. Rev. D 57 (1998) 5875–5892, [hep-ph/9709411].
* (86) J. A. Dror, Discovering leptonic forces using nonconserved currents, Phys. Rev. D 101 (2020), no. 9 095013, [arXiv:2004.04750].
* (87) S. Hannestad, Structure formation with strongly interacting neutrinos - Implications for the cosmological neutrino mass bound, JCAP 02 (2005) 011, [astro-ph/0411475].
* (88) G. Barenboim, J. Z. Chen, S. Hannestad, I. M. Oldengott, T. Tram, and Y. Y. Wong, Invisible neutrino decay in precision cosmology, arXiv:2011.01502.
* (89) A. Dolgov and D. Kirilova, On particle creation by a time dependent scalar field, Sov. J. Nucl. Phys. 51 (1990) 172–177.
* (90) J. H. Traschen and R. H. Brandenberger, Particle production during out-of-equilibrium phase transitions, Phys. Rev. D 42 (Oct, 1990) 2491–2504.
* (91) L. Kofman, A. D. Linde, and A. A. Starobinsky, Reheating after inflation, Phys. Rev. Lett. 73 (1994) 3195–3198, [hep-th/9405187].
* (92) L. Kofman, A. D. Linde, and A. A. Starobinsky, Towards the theory of reheating after inflation, Phys. Rev. D 56 (1997) 3258–3295, [hep-ph/9704452].
* (93) Y. Ema and K. Nakayama, Explosive Axion Production from Saxion, Phys. Lett. B 776 (2018) 174–181, [arXiv:1710.02461].
* (94) R. T. Co, L. J. Hall, and K. Harigaya, QCD Axion Dark Matter with a Small Decay Constant, Phys. Rev. Lett. 120 (2018), no. 21 211602, [arXiv:1711.10486].
* (95) J. A. Dror, K. Harigaya, and V. Narayan, Parametric Resonance Production of Ultralight Vector Dark Matter, Phys. Rev. D 99 (2019), no. 3 035036, [arXiv:1810.07195].
* (96) R. Micha and I. I. Tkachev, Turbulent thermalization, Phys. Rev. D 70 (2004) 043538, [hep-ph/0403101].
* (97) Planck Collaboration, Y. Akrami et al., Planck 2018 results. X. Constraints on inflation, arXiv:1807.06211.
* (98) M. Gorghetto, E. Hardy, and G. Villadoro, Axions from Strings: the Attractive Solution, JHEP 07 (2018) 151, [arXiv:1806.04677].
* (99) M. Gorghetto, E. Hardy, and G. Villadoro, More Axions from Strings, arXiv:2007.04990.
* (100) M. Buschmann, J. W. Foster, and B. R. Safdi, Early-Universe Simulations of the Cosmological Axion, Phys. Rev. Lett. 124 (2020), no. 16 161103, [arXiv:1906.00967].
* (101) M. Dine, N. Fernandez, A. Ghalsasi, and H. H. Patel, Comments on Axions, Domain Walls, and Cosmic Strings, arXiv:2012.13065.
* (102) D. Baumann, Inflation, in Theoretical Advanced Study Institute in Elementary Particle Physics: Physics of the Large and the Small, pp. 523–686, 2011. arXiv:0907.5424.
* (103) T. Charnock, A. Avgoustidis, E. J. Copeland, and A. Moss, CMB constraints on cosmic strings and superstrings, Phys. Rev. D 93 (2016), no. 12 123503, [arXiv:1603.01275].
* (104) K. Choi and S. H. Im, Realizing the relaxion from multiple axions and its UV completion with high scale supersymmetry, JHEP 01 (2016) 149, [arXiv:1511.00132].
* (105) P. Agrawal, J. Fan, M. Reece, and L.-T. Wang, Experimental Targets for Photon Couplings of the QCD Axion, JHEP 02 (2018) 006, [arXiv:1709.06085].
* (106) J. A. Dror and J. M. Leedom, The Cosmological Tension of Ultralight Axion Dark Matter and its Solutions, arXiv:2008.02279.
* (107) J. W. Foster, N. L. Rodd, and B. R. Safdi, Revealing the Dark Matter Halo with Axion Direct Detection, Phys. Rev. D 97 (2018), no. 12 123006, [arXiv:1711.10489].
* (108) J. W. Foster, Y. Kahn, R. Nguyen, N. L. Rodd, and B. R. Safdi, Dark Matter Interferometry, arXiv:2009.14201.
* (109) B. Brubaker, L. Zhong, S. Lamoreaux, K. Lehnert, and K. van Bibber, HAYSTAC axion search analysis procedure, Phys. Rev. D 96 (2017), no. 12 123008, [arXiv:1706.08388].
* (110) R. Dicke, The Measurement of Thermal Radiation at Microwave Frequencies, Rev. Sci. Instrum. 17 (1946), no. 7 268–275.
* (111) D. Budker, P. W. Graham, M. Ledbetter, S. Rajendran, and A. Sushkov, Proposal for a Cosmic Axion Spin Precession Experiment (CASPEr), Phys. Rev. X 4 (2014), no. 2 021030, [arXiv:1306.6089].
* (112) J. Ouellet and Z. Bogorad, Solutions to Axion Electrodynamics in Various Geometries, Phys. Rev. D 99 (2019), no. 5 055010, [arXiv:1809.10709].
* (113) R. Lasenby, Parametrics of electromagnetic searches for axion dark matter, arXiv:1912.11467.
* (114) M. Goryachev, B. Mcallister, and M. E. Tobar, Axion Detection with Precision Frequency Metrology, Phys. Dark Univ. 26 (2019) 100345, [arXiv:1806.07141].
* (115) Y. Kim, D. Kim, J. Jung, J. Kim, Y. C. Shin, and Y. K. Semertzidis, Effective Approximation of Electromagnetism for Axion Haloscope Searches, Phys. Dark Univ. 26 (2019) 100362, [arXiv:1810.02459].
* (116) M. Beutter, A. Pargner, T. Schwetz, and E. Todarello, Axion-electrodynamics: a quantum field calculation, JCAP 02 (2019) 026, [arXiv:1812.05487].
* (117) MADMAX Collaboration, P. Brun et al., A new experimental approach to probe QCD axion dark matter in the mass range above 40 $\mu$eV, Eur. Phys. J. C 79 (2019), no. 3 186, [arXiv:1901.07401].
* (118) N. Crisosto, G. Rybka, P. Sikivie, N. Sullivan, D. Tanner, and J. Yang, ADMX SLIC: Results from a Superconducting LC Circuit Investigating Cold Axions, Phys. Rev. Lett. 124 (2020), no. 24 241101, [arXiv:1911.05772].
* (119) S. Chaudhuri, K. Irwin, P. W. Graham, and J. Mardon, Fundamental Limits of Electromagnetic Axion and Hidden-Photon Dark Matter Searches: Part I - The Quantum Limit, arXiv:1803.01627.
* (120) S. Chaudhuri, K. D. Irwin, P. W. Graham, and J. Mardon, Optimal Electromagnetic Searches for Axion and Hidden-Photon Dark Matter, arXiv:1904.05806.
* (121) G. Cowan, K. Cranmer, E. Gross, and O. Vitells, Asymptotic formulae for likelihood-based tests of new physics, Eur. Phys. J. C 71 (2011) 1554, [arXiv:1007.1727]. [Erratum: Eur.Phys.J.C 73, 2501 (2013)].
* (122) B. M. Brubaker, First results from the HAYSTAC axion search. PhD thesis, Yale U., 2017. arXiv:1801.00835.
* (123) J. Vogel et al., IAXO - The International Axion Observatory, in 8th Patras Workshop on Axions, WIMPs and WISPs, 2, 2013. arXiv:1302.3273.
* (124) S. Mukherjee, R. Khatri, and B. D. Wandelt, Polarized anisotropic spectral distortions of the CMB: Galactic and extragalactic constraints on photon-axion conversion, JCAP 04 (2018) 045, [arXiv:1801.09701].
* (125) S. Mukherjee, D. N. Spergel, R. Khatri, and B. D. Wandelt, A new probe of Axion-Like Particles: CMB polarization distortions due to cluster magnetic fields, JCAP 02 (2020) 032, [arXiv:1908.07534].
* (126) XENON Collaboration, E. Aprile et al., Excess electronic recoil events in XENON1T, Phys. Rev. D 102 (2020), no. 7 072004, [arXiv:2006.09721].
* (127) K. Babu, S. M. Barr, and D. Seckel, Axion dissipation through the mixing of Goldstone bosons, Phys. Lett. B 336 (1994) 213–220, [hep-ph/9406308].
* (128) R. Daido, F. Takahashi, and N. Yokozaki, Enhanced axion–photon coupling in GUT with hidden photon, Phys. Lett. B 780 (2018) 538–542, [arXiv:1801.10344].
* (129) H. Vogel and J. Redondo, Dark Radiation constraints on minicharged particles in models with a hidden photon, JCAP 02 (2014) 029, [arXiv:1311.2600].
|
# Analytic Estimates of the Achievable Precision on the Physical Properties of
Transiting Planets Using Purely Empirical Measurements
Romy Rodríguez Martínez (Department of Astronomy, The Ohio State University, 140 W. 18th Avenue, Columbus, OH 43210, USA)
Daniel J. Stevens (Eberly Research Fellow; Center for Exoplanets and Habitable Worlds, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802, USA; Department of Astronomy & Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802, USA)
B. Scott Gaudi (Department of Astronomy, The Ohio State University, 140 W. 18th Avenue, Columbus, OH 43210, USA)
Joseph G. Schulze (School of Earth Sciences, The Ohio State University, 125 South Oval Mall, Columbus, OH 43210, USA)
Wendy R. Panero (School of Earth Sciences, The Ohio State University, 125 South Oval Mall, Columbus, OH 43210, USA)
Jennifer A. Johnson (Department of Astronomy, The Ohio State University, 140 W. 18th Avenue, Columbus, OH 43210, USA; Center for Cosmology and AstroParticle Physics, The Ohio State University, 191 W. Woodruff Ave., Columbus, OH 43210, USA)
Ji Wang (Department of Astronomy, The Ohio State University, 140 W. 18th Avenue, Columbus, OH 43210, USA)
###### Abstract
We present analytic estimates of the fractional uncertainties on the mass,
radius, surface gravity, and density of a transiting planet, using only
empirical or semi-empirical measurements. We first express these parameters in
terms of transit photometry and radial velocity (RV) observables, as well as
the stellar radius $R_{\star}$, if required. In agreement with previous
results, we find that, assuming a circular orbit, the surface gravity of the
planet ($g_{p}$) depends only on empirical transit and RV parameters; namely,
the planet period $P$, the transit depth $\delta$, the RV semi-amplitude
$K_{\star}$, the transit duration $T$, and the ingress/egress duration $\tau$.
However, the planet mass and density depend on all these quantities, plus
$R_{\star}$. Thus, an inference about the planet mass, radius, and density
must rely upon an external constraint such as the stellar radius. For bright
stars, stellar radii can now be measured nearly empirically by using
measurements of the stellar bolometric flux, the effective temperature, and
the distance to the star via its parallax, with the extinction $A_{V}$ being
the only free parameter. For any given system, there is a hierarchy of
achievable precisions on the planetary parameters, such that the planetary
surface gravity is more accurately measured than the density, which in turn is
more accurately measured than the mass. We find that surface gravity provides
a strong constraint on the core mass fraction of terrestrial planets. This is
useful, given that the surface gravity may be one of the best measured
properties of a terrestrial planet.
methods: analytical — planetary systems — exoplanet composition
††software: EXOFASTv2 (Eastman, 2017; Eastman et al., 2019)
## 1 Introduction
The internal composition and structure of small, terrestrial planets is
generally difficult to characterize. As is well known, mass-radius
relationships alone do not constrain the internal composition of a planet
beyond a measurement of its bulk density. The internal structure is crucial,
as it determines the bulk physical properties of planets and provides valuable
insights into their formation, history, and present composition. Unterborn et
al. (2016) found that the core radius, the presence of light elements in the
core, and the existence of an upper mantle have the largest effects on the
final mass and radius of a terrestrial exoplanet. The final mass and radius in
turn directly determine the planet’s habitability. For example, the core mass
fraction affects the strength of a planet’s magnetic field, which shields it
against harmful radiation from the host star.
At present, we have $\sim$330 small planets ($<4R_{\oplus}$) with masses and radii constrained to better than 50% (based on data from the NASA Exoplanet Archive, https://exoplanetarchive.ipac.caltech.edu/). Such measurement
uncertainties are generally good enough to determine the general structure of
many exoplanets. However, for low-mass terrestrial planets with thin
atmospheres, planetary masses and radii must be measured to precisions better
than 20% and 10%, respectively, in order to constrain the core mass fraction
and structure (Dorn et al., 2015; Schulze et al., 2020).
However, high-precision measurements of low-mass exoplanets between
$1-4R_{\oplus}$ are challenging. Additionally, because of the large number of
individual discoveries, and because (to date) they have been mostly detected
around faint Kepler/K2 (Borucki et al., 2010; Howell et al., 2014) targets
(with typical Kepler and K2 magnitudes of $K\sim 15$ and $K\sim 12$,
respectively; Vanderburg et al. 2016), they are difficult to follow up with high-resolution radial velocity (RV) observations, and thus it is difficult to obtain precise masses and other fundamental physical properties. This has already begun to
change with the Transiting Exoplanet Survey Satellite mission (TESS; Ricker et
al. 2015), as its main science driver is to detect and measure masses and
radii for at least 50 small planets ($<4R_{\oplus}$) around bright stars. At
the time of writing, 24 such planets have already been confirmed, and almost all have masses and radii measured to better than $30\%$ (https://exoplanetarchive.ipac.caltech.edu/).
The discoveries of the TESS mission will also raise very important questions
in exoplanet science. The one that we address here relates to the achievable
precision with which we shall be able to constrain the fundamental parameters
of a transiting planet, such as its mass, density and surface gravity. Given
precise photometric and spectroscopic measurements of the host of a transiting
planet system, it is possible to measure the planet surface gravity with no
external constraints (Southworth et al., 2007). On the other hand, measuring
the mass or radius of a transiting planet requires some external constraint
(Seager & Mallén-Ornelas, 2003). Since, until very recently, it has only been
possible to measure the mass or radius of the closest isolated stars directly,
theoretical evolutionary tracks or empirical relations between stellar mass
and radius and other properties of the star have often been used (e.g., Torres
et al. 2010). However, these constraints typically assume that the star is
representative of the population of systems that were used to calibrate these
relations. In the case of theoretical evolutionary tracks, there may be
systematic errors due to uncertainties in the physics of stellar structure,
atmospheres, and evolution, or second-order properties of the star, such as
its detailed abundance distribution, which can manifest as irreducible
systematic uncertainties on the stellar parameters. For example, most
evolutionary tracks assume a fixed solar abundance pattern scaled to the iron
abundance [Fe/H] of the star, and thus, the same [$\alpha$/Fe] as the Sun. If
the host star has a significantly different [$\alpha$/Fe] than the Sun, that
will lead to incorrect inferences about the properties of the planet. By using
evolutionary tracks that assume a solar [$\alpha$/Fe], one might infer an
incorrect density and mass of the planet, and therefore an incorrect
core/mantle fraction.
Thus, a direct, empirical or nearly empirical measurement of the radius or
mass of the star that does not rely on assumptions that may not be valid is
needed (see Stassun et al. 2017 for a lengthier discussion on the merits and
benefits of using empirical or semi-empirical measurements to infer exoplanet
parameters). As has been demonstrated in numerous papers (see, e.g., Stevens
et al. 2017), with Gaia (Gaia Collaboration et al., 2018) parallaxes, coupled
with the availability of absolute broadband photometry from the near-UV to the
near-IR, it is now possible to measure the radii of bright ($V\lesssim 12$
mag) stars. This allows for direct measurements of the masses and radii of transiting planets and their host stars (and, indeed, any eclipsing single-lined spectroscopic binary) in a nearly empirical way. See
Stevens et al. (2018) for an initial exploration of the precisions with which
these measurements can be made.
In this paper, we build upon the work of Stevens et al. (2018) by also
assessing the precision with which the surface gravity $g_{p}$ of transiting
planets can be measured. Given that a measurement of $g_{p}$ only requires
measurements of direct observables from the transit photometry and radial
velocities without the need for external constraints, the precision on $g_{p}$
in principle improves with ever more data, assuming no systematic floor. Thus
we seek to address two questions. First, with what fractional precision can
$g_{p}$ be measured, and how does this compare to the fractional precision
with which the density or mass can be measured? Second, how useful is $g_{p}$
as a diagnostic of a terrestrial planet’s interior structure and, potentially,
habitability?
Answering these questions is quite important because the surface gravity of a
planet may be a more fundamental parameter than the radius and mass, at least
in addressing certain questions, such as its habitability (O’Neill & Lenardic,
2007; Valencia & O’Connell, 2009; van Heck & Tackley, 2011). For example, the
surface gravity, along with the equilibrium temperature and mean molecular
weight, determines the scale height of any extant atmosphere. If a planet’s
surface gravity provides more of a lever arm in determining certain aspects of
the planet’s interior or atmosphere, and if we can achieve a better precision
on the planet surface gravity measurement than the radius, then we can use
that to better constrain the composition of the planet and, ultimately, its
habitability. Thus, given the importance of the planetary surface gravity,
mass and radius in constraining the habitability of a planet, it is critical
to understand how well we can measure these properties.
Here we focus on the precision with which the surface gravity, density, mass,
and radius of a transiting planet can be measured essentially empirically. We
will employ methodologies that are similar to those used in Stevens et al.
(2018), and thus this work can be considered a companion paper to that one.
## 2 Analysis
We begin by deriving expressions for the surface gravity $g_{p}$, mass
$M_{p}$, density $\rho_{p}$, and radius $R_{p}$, of a transiting planet in
terms of observables from photometric and radial velocity observations, as
well as a constraint on the stellar radius $R_{\star}$ from a Gaia parallax
combined with a bolometric flux from a spectral energy distribution (SED)
(Stassun et al., 2017; Stevens et al., 2017, 2018).
### 2.1 Planet Surface Gravity
The planet surface gravity is defined as
$g_{p}=\frac{GM_{p}}{R_{p}^{2}}.$ (1)
The radial velocity semi-amplitude $K_{\star}$ can be expressed as
$\begin{split}K_{\star}=\left(\frac{2\pi
G}{P}\right)^{1/3}\frac{M_{p}\sin{i}}{(M_{\star}+M_{p})^{2/3}}\frac{1}{\sqrt{1-e^{2}}}\\\
\simeq 28.4~{}{\rm m~{}s}^{-1}\left(\frac{P}{\rm
yr}\right)^{-1/3}\frac{M_{p}\sin{i}}{M_{\rm
J}}\bigg{(}\frac{M_{\star}}{M_{\odot}}\bigg{)}^{-2/3}(1-e^{2})^{-1/2},\end{split}$
(2)
where $M_{\star}$ is the stellar mass, $P$ and $e$ are the planetary orbital
period and eccentricity, $M_{\rm J}$ is Jupiter’s mass, and $i$ is the
inclination angle of the orbit. In the second equality, we have assumed that
$M_{p}\ll M_{\star}$.
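As a sanity check on the numerical coefficient in Equation 2, the SI-units form of $K_{\star}$ can be evaluated for a Jupiter-mass planet on a circular one-year orbit around a solar-mass star. This sketch uses standard values of the physical constants; it is an illustration, not part of the paper's own code:

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30     # solar mass [kg]
M_jup = 1.898e27     # Jupiter mass [kg]
yr = 3.156e7         # one year [s]

def rv_semiamplitude(P, M_p, M_star, e=0.0, sin_i=1.0):
    """Equation 2 (first equality): RV semi-amplitude K_star in m/s."""
    return ((2 * math.pi * G / P) ** (1 / 3)
            * M_p * sin_i / (M_star + M_p) ** (2 / 3)
            / math.sqrt(1 - e ** 2))

K = rv_semiamplitude(P=yr, M_p=M_jup, M_star=M_sun)
print(f"K_star = {K:.1f} m/s")  # ~28.4 m/s, matching the coefficient in Eq. 2
```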
Using Newton’s version of Kepler’s third law and Equation 2, the surface
gravity can then be expressed as
$g_{p}=\frac{2\pi}{P}\frac{\sqrt{1-e^{2}}}{(R_{p}/a)^{2}}\frac{K_{\star}}{\sin{i}}.$
(3)
For the majority of the following analysis, we will assume circular orbits
($e=0$) for simplicity and drop the eccentricity dependence. This analysis
could be repeated for eccentric orbits, but the algebra is tedious and does
not lead to qualitatively new insights. The assumption of circular orbits thus
provides a qualitative expectation of the uncertainties on the planetary
parameters. Furthermore, in many cases it is justified because we expect many
of the systems to which this analysis is applicable will have very small
eccentricities. We also further assume that $\sin{i}=1$, which is
approximately true for transiting exoplanets. Under these assumptions, we have
that
$g_{p}=\frac{2\pi K_{\star}}{P}\left(\frac{a}{R_{p}}\right)^{2}.$ (4)
The semimajor axis scaled to the planet radius can be converted to the
semimajor axis scaled to the stellar radius by using the depth of the transit
$\delta\equiv(R_{p}/R_{\star})^{2}$, which is a direct observable:
$\frac{a}{R_{p}}=\frac{a}{R_{\star}}\left(\frac{R_{\star}}{R_{p}}\right)=\frac{a}{R_{\star}}\delta^{-1/2}.$
(5)
We can then rewrite the scaled semimajor axis $a/R_{\star}$ in terms of the
stellar density $\rho_{\star}$ (see e.g., Sandford & Kipping 2017 for a
precise derivation):
$\frac{a}{R_{\star}}=\left(\frac{GP^{2}}{3\pi}\right)^{1/3}\left(\rho_{\star}+k^{3}\rho_{p}\right)^{1/3},$
(6)
where $k\equiv R_{p}/R_{\star}$. Since typically $k\ll 1$,
$\frac{a}{R_{\star}}=\left(\frac{GP^{2}\rho_{\star}}{3\pi}\right)^{1/3}.$ (7)
Using Equation 5, we find
$\frac{a}{R_{p}}=\left(\frac{GP^{2}\rho_{\star}}{3\pi}\right)^{1/3}\delta^{-1/2}.$
(8)
From Equation 4, we can write
$g_{p}=\frac{2\pi K_{\star}}{P}\left(\frac{a}{R_{p}}\right)^{2}=\frac{2\pi
K_{\star}}{P}\left(\frac{a}{R_{\star}}\right)^{2}\delta^{-1}.$ (9)
Then $a/R_{\star}$ can be written in terms of observables as
$\frac{a}{R_{\star}}=\frac{P}{\pi}\frac{\delta^{1/4}}{\sqrt{T\tau}},$ (10)
where the observables are the orbital period $P$, the transit duration $T$ (full-width at half-maximum), the ingress/egress duration $\tau$, and the transit depth $\delta$.
Inserting this into Equation 9, we find that the planet surface gravity is
given in terms of pure observables as (Southworth et al. 2007)
$g_{p}=\frac{2K_{\star}P}{\pi T\tau}\delta^{-1/2}.$ (11)
Using linear propagation of uncertainties, assuming no covariances between the observable parameters (see Carter et al. 2008 for an exploration of the covariances between photometric and RV observable parameters), and adopting the aforementioned assumptions ($M_{p}\ll M_{\star}$, $\sin{i}\sim 1$, $e=0$, and $k\ll 1$), we can approximate the fractional uncertainty on the surface gravity as
$\begin{split}\bigg{(}\frac{\sigma_{g_{p}}}{g_{p}}\bigg{)}^{2}\approx\bigg{(}\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg{)}^{2}+\bigg{(}\frac{\sigma_{P}}{P}\bigg{)}^{2}+\bigg{(}\frac{\sigma_{T}}{T}\bigg{)}^{2}+\\\
\bigg{(}\frac{\sigma_{\tau}}{\tau}\bigg{)}^{2}+\frac{1}{4}\bigg{(}\frac{\sigma_{\delta}}{\delta}\bigg{)}^{2}.\end{split}$
(12)
### 2.2 Planet Mass
We now turn to the uncertainty on the planet mass. We can approach this
estimate two ways. First, we can start from Equation 2, again making the same
simplifying assumptions, and solve for $M_{p}$ in terms of observables. We
note that this method requires the intermediate step of deriving an expression
for the host star mass in terms of direct observables, using the fact that
$M_{\star}=\frac{4\pi}{3}\rho_{\star}R_{\star}^{3}$ (13)
and using Equations 5 and 10 to write $\rho_{\star}$ in terms of observables.
We find
$M_{\star}=\frac{4P}{\pi G}\delta^{3/4}(T\tau)^{-3/2}R_{\star}^{3}.$ (14)
Using this, we can then derive the planet mass in terms of observables as
$M_{p}=\frac{2}{\pi G}\frac{K_{\star}P}{T\tau}R_{\star}^{2}\delta^{1/2}.$ (15)
A more straightforward approach is to use the fact that we have already
derived the planet surface gravity in terms of observables. Starting from the
definition of surface gravity, we can write
$M_{p}=\frac{1}{G}g_{p}R_{p}^{2}=\frac{1}{G}g_{p}R_{\star}^{2}\delta.$ (16)
Using Equation 11, we arrive at the same expression as Equation 15.
Using Equation 15, we derive the fractional uncertainty on the planet mass in
terms of the fractional uncertainty in the observables, again assuming no
covariances and the simplifying assumptions stated before ($M_{P}\ll
M_{\star}$, $\sin{i}\sim 1$, $e=0$, and $k\ll 1$). We find
$\begin{split}\bigg{(}\frac{\sigma_{M_{p}}}{M_{p}}\bigg{)}^{2}\approx\bigg{(}\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg{)}^{2}+\bigg{(}\frac{\sigma_{P}}{P}\bigg{)}^{2}+\bigg{(}\frac{\sigma_{T}}{T}\bigg{)}^{2}+\\\
\bigg{(}\frac{\sigma_{\tau}}{\tau}\bigg{)}^{2}+\frac{1}{4}\bigg{(}\frac{\sigma_{\delta}}{\delta}\bigg{)}^{2}+4\bigg{(}\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg{)}^{2}.\end{split}$
(17)
### 2.3 Planet Density
We derive the planet density $\rho_{p}$ in terms of observables. The planet
density is given by
$\rho_{p}=\frac{3M_{p}}{4\pi R_{p}^{3}}=\frac{3M_{p}}{4\pi
R_{\star}^{3}}\delta^{-3/2}.$ (18)
We have already derived the mass of the planet in terms of observables in
Equation 15. Using this expression, we find
$\rho_{p}=\frac{3}{2\pi^{2}G}\frac{K_{\star}P}{T\tau\delta R_{\star}}.$ (19)
From this equation, we derive the fractional uncertainty on the planet density
in terms of the fractional uncertainty in the observables, again assuming no
covariances and the simplifying assumptions stated before. We find
$\begin{split}\bigg{(}\frac{\sigma_{\rho_{p}}}{\rho_{p}}\bigg{)}^{2}\approx\bigg{(}\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg{)}^{2}+\bigg{(}\frac{\sigma_{P}}{P}\bigg{)}^{2}+\bigg{(}\frac{\sigma_{T}}{T}\bigg{)}^{2}+\\\
\bigg{(}\frac{\sigma_{\tau}}{\tau}\bigg{)}^{2}+\bigg{(}\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg{)}^{2}+\bigg{(}\frac{\sigma_{\delta}}{\delta}\bigg{)}^{2}.\end{split}$
(20)
### 2.4 Planet Radius
Finally, the planet radius uncertainty can be trivially derived from the
definition of the transit depth $\delta$, assuming no limb darkening:
$\delta=\bigg{(}\frac{R_{p}}{R_{\star}}\bigg{)}^{2}.$ (21)
Then,
$R_{p}=\sqrt{\delta}R_{\star},$ (22)
and the fractional uncertainty on the planet radius is simply
$\bigg{(}\frac{\sigma_{R_{p}}}{R_{p}}\bigg{)}^{2}\approx\bigg{(}\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg{)}^{2}+\frac{1}{4}\bigg{(}\frac{\sigma_{\delta}}{\delta}\bigg{)}^{2}.$
(23)
We note that, by assuming that $\delta$ is a direct observable, we are
fundamentally assuming no limb darkening of the star. Of course, in reality
the presence of limb darkening means that the observed fractional depth of the
transit is not equal to $\delta$, and thus the uncertainty in $\delta$ is
larger than one would naively estimate assuming no limb darkening. However,
assuming that the limb darkening is small (as it is for observations in the
near-IR), or that it can be estimated a priori based on the properties of the
star, or that the photometry is sufficiently precise that both the limb
darkening and $\delta$ can be simultaneously constrained, the naive estimate
of the uncertainty on $\delta$ assuming no limb darkening will not be
significantly larger than that in the presence of limb darkening.
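The error budgets of Equations 12, 17, 20, and 23 can be collected into a single helper. A minimal sketch in Python; the input fractional uncertainties below are hypothetical, chosen only to illustrate how the four budgets compare when $\sigma_{\delta}/\delta\approx\sigma_{R_{\star}}/R_{\star}$:

```python
import math

def fractional_uncertainties(fK, fP, fT, ftau, fdelta, fR):
    """Fractional 1-sigma uncertainties on g_p, M_p, rho_p, and R_p from the
    fractional uncertainties of the observables (Equations 12, 17, 20, 23),
    assuming no covariances between the observables."""
    f_g   = math.sqrt(fK**2 + fP**2 + fT**2 + ftau**2 + 0.25 * fdelta**2)  # Eq 12
    f_M   = math.sqrt(f_g**2 + 4.0 * fR**2)        # Eq 17 = Eq 12 + 4 (fR)^2
    f_rho = math.sqrt(fK**2 + fP**2 + fT**2 + ftau**2 + fdelta**2 + fR**2)  # Eq 20
    f_R   = math.sqrt(fR**2 + 0.25 * fdelta**2)    # Eq 23
    return f_g, f_M, f_rho, f_R

# Illustrative (hypothetical) inputs: 3% on K_star, 0.001% on P, 1% on T,
# 5% on tau, 1% on delta, 1% on R_star.
f_g, f_M, f_rho, f_R = fractional_uncertainties(0.03, 1e-5, 0.01, 0.05, 0.01, 0.01)
```

With these inputs the hierarchy $\sigma_{M_{p}}/M_{p}>\sigma_{\rho_{p}}/\rho_{p}>\sigma_{g_{p}}/g_{p}>\sigma_{R_{p}}/R_{p}$ emerges directly from the quadrature sums.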
## 3 Comparing the Estimated Uncertainties on the Planet Mass, Density, and
Surface Gravity
Comparing the expressions for the fractional uncertainty on $g_{p}$, $M_{p}$,
and $\rho_{p}$ (Equations 12, 17, and 20, respectively), we can make some
broad observations on the precision with which it is possible to measure these
three planetary parameters.
First, comparing the uncertainties on $g_{p}$ and $M_{p}$, we note that the
only difference is that $\sigma_{M_{p}}/M_{p}$ requires the additional term
$4(\sigma_{R_{\star}}/R_{\star})^{2}$. Stevens et al. (2018) estimate that it
should be possible to infer the stellar radii of bright hosts ($G\lesssim 12$
mag) to an accuracy of order $1\%$ using the final Gaia data release
parallaxes, currently-available absolute broadband photometry, and
spectrophotometry from Gaia and the Spectro-Photometer for the History of the
Universe, Epoch of Reionization, and Ices Explorer (SPHEREx; Doré et al.
2018). The exact level of accuracy will depend on the stellar spectral type
and the final parallax precision. The uncertainty in $R_{\star}$ is likely to dominate
the error budget relative to the other terms, with the possible exception of
the uncertainty in $\tau$. We note that TESS is able to measure $\tau$ more
precisely than either Kepler or K2 were able to for systems with similar
physical parameters and noise properties, primarily because the TESS bandpass
is redder than that of Kepler, and thus the stellar limb darkening is smaller
and less degenerate with $\tau$. Overall, we generically expect the planetary
surface gravity to be measured to a smaller fractional uncertainty (i.e., more
precisely) than the planet mass.
We now turn to the uncertainty on planetary density. When comparing the
expressions for the uncertainty in $M_{p}$ to $\rho_{p}$, we note that the
uncertainty due to the depth enters in quadrature as
$(1/4)(\sigma_{\delta}/\delta)^{2}$ for $M_{p}$, whereas it enters as
$(\sigma_{\delta}/\delta)^{2}$ for $\rho_{p}$.
For large planets, the depth should be measurable to a precision of $\sim$1%
or better, particularly in the TESS bandpass, similar to the best expected
precision on $R_{\star}$. Thus, we expect $\sigma_{\delta}$ to be comparable
to $\sigma_{R_{\star}}$, and thus both should contribute at the $\sim$1% level
to $\sigma_{\rho_{p}}$. On the other hand, we expect $\sigma_{R_{\star}}$ to
dominate over the transit depth for $M_{p}$. Thus, for any given system, we
generally expect the following hierarchy:
$\sigma_{M_{p}}/M_{p}>\sigma_{\rho_{p}}/\rho_{p}>\sigma_{g_{p}}/g_{p}>\sigma_{R_{p}}/R_{p}$.
Similarly, there is a hierarchy in the precision with which the observed
parameters $T$, $P$, $K_{\star}$, $\delta$, $\tau$, and $R_{\star}$ are
measured. For the relatively small sample of planets confirmed from TESS so
far, we find that in general, the most precise observable parameter is the
orbital period, followed by the stellar radius, the transit depth, the RV
semi-amplitude, and the transit duration, such that:
$\sigma_{T}/T>\sigma_{K}/K>\sigma_{\delta}/\delta>\sigma_{R_{\star}}/R_{\star}>\sigma_{P}/P$.
The ingress/egress time $\tau$ is not always reported in discovery papers, so
we do not include it in this comparison. However, we generally expect that it
will be measured to a precision that is worse than that of $T$ (Carter et al.,
2008; Yee & Gaudi, 2008).
This hierarchy is in agreement with the findings of Carter et al. (2008) and
Yee & Gaudi (2008), who derived the following approximate relations for the
uncertainties in the parameters of a photometric transit (assuming no limb
darkening):
$\frac{\sigma_{\delta}}{\delta}\simeq Q^{-1}$ (24)
$\frac{\sigma_{T}}{T}\simeq Q^{-1}\sqrt{\frac{2\tau}{T}}$ (25)
$\frac{\sigma_{\tau}}{\tau}\simeq Q^{-1}\sqrt{\frac{6T}{\tau}},$ (26)
where $Q$ is the signal-to-noise ratio of the combined transits, defined as
$Q\equiv(N_{\rm tr}\Gamma_{\rm phot}T)^{1/2}\delta,$ (27)
where $N_{\rm tr}$ is the effective number of transits that were observed, and
$\Gamma_{\rm phot}$ is the photon collection rate. (Alternatively, assuming
all measurements have a fractional photometric uncertainty $\sigma_{\rm
phot}$, and there are $N$ measurements in transit, the total signal-to-noise
ratio can be defined as $Q\equiv\sqrt{N}(\delta/\sigma_{\rm phot})$.) We note
that Equation 27 implicitly assumes uncorrelated photometric uncertainties.
Since, in general, $\sigma_{\tau}>\sigma_{T}$, we have that
$\sigma_{\delta}/\delta<\sigma_{T}/T<\sigma_{\tau}/\tau$.
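The Carter et al. (2008) relations above can be sketched as a short helper; the values of $Q$, $T$, and $\tau$ below are illustrative assumptions, not values from the text:

```python
import math

def carter_uncertainties(Q, T, tau):
    """Approximate fractional uncertainties on the transit observables
    (Equations 24-26), assuming no limb darkening and uncorrelated noise."""
    f_delta = 1.0 / Q                          # Eq 24
    f_T = (1.0 / Q) * math.sqrt(2.0 * tau / T)  # Eq 25
    f_tau = (1.0 / Q) * math.sqrt(6.0 * T / tau)  # Eq 26
    return f_delta, f_T, f_tau

# Illustrative values: Q = 50 for the combined transits, T = 3 hr, tau = 0.3 hr.
f_delta, f_T, f_tau = carter_uncertainties(50.0, 3.0, 0.3)
```

Note that for any $T>\tau$ the ingress/egress time is always the worst-measured of the three, since $\sqrt{6T/\tau}>\sqrt{2\tau/T}$.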
In the above equations, we have ignored the uncertainty in the transit
midpoint $t_{c}$ as it does not enter into the expressions for the
uncertainties in $R_{p}$, $M_{p}$, $\rho_{p}$, or $g_{p}$. We also assumed
that the uncertainty in the baseline (out of transit) flux is negligible,
which is generally a good assumption, particularly for space-based missions
such as Kepler, K2, and TESS, where the majority of the measurements are taken
outside of transit.
We note that, particularly for small planets, when the limb darkening is
significant, and when only a handful of transits have been observed, $\tau$
may be poorly measured (i.e., precisions of $\gtrsim 5\%$), and therefore its
uncertainty will dominate the error budget. In such cases, it may be more
prudent to use additional external constraints, such as stellar isochrones, to
improve the overall parameters of the system (at the cost of losing the nearly
purely empirical nature of the inferences as assumed in the derivations
above). See Stevens et al. (2018) for additional discussion.
## 4 Validation of Our Analytic Estimates
Figure 1: Reported fractional uncertainties in $M_{p}$ (diamonds), $g_{p}$
(squares), $\rho_{p}$ (circles), and $R_{p}$ (triangles) versus our model-
independent analytic estimates for a variety of transiting planets. These
include a fiducial hot Jupiter (dark blue), Kepler-93b (pink), KELT-26b (sky
blue), KELT-26b without external constraints (green, open symbols), K2-106b
(brown), and HD 21749b (gold). For K2-106b and HD 21749b, arrows point from
the fractional uncertainties reported in the discovery papers to our
‘forensic’ estimates of the uncertainties that could be achieved had the
authors adopted only empirical constraints. The open symbols are systems for
which no external constraints were used. A dashed, gray one-to-one line is
plotted for reference.
We test the analytic expressions derived in Section 2 using four confirmed
exoplanets and one fiducial hot Jupiter simulated by Stevens et al. (2018).
The confirmed exoplanets are KELT-26b (Rodríguez Martínez et al., 2020), HD
21749b (Dragomir et al., 2019), K2-106b (Adams et al., 2017; Guenther et al.,
2017), and Kepler-93b (Ballard et al., 2014; Dressing et al., 2015). These
systems have masses and radii between $\sim$4–448 $M_{\oplus}$ and
$\sim$1.4–21 $R_{\oplus}$.
We estimated the expected analytic uncertainties on the planet parameters by
inserting the values of $T$, $P$, $K_{\star}$, $\delta$, $\tau$, $R_{\star}$,
and their respective uncertainties from the discovery papers into Equations
12, 17, 20, and 23. Then, we compared the analytic uncertainties to those
reported in the discovery papers, which were derived from MCMC analyses and
not using the analytic approximations presented here. For parameters with
asymmetric uncertainties, we took the average of the upper and lower bounds
and adopted that as the uncertainty.
We note that the discovery papers of all of the examples we present here
(except for KELT-26b) do not provide the transit duration $T$ (or the full
width at half maximum of the transit), but rather $T_{14}$, which is defined
as the time between the first and fourth contacts (see, e.g., Carter et
al., 2008). Since we are generally interested in $T$, we calculate it from the
given observables using
$T=T_{14}-\tau$ (28)
and we estimate its uncertainty with the relationship from Carter et al.
(2008) and Yee & Gaudi (2008):
$\frac{\sigma_{T}}{T}=\left(\sqrt{\frac{1}{3}}\frac{\tau}{T}\right)\frac{\sigma_{\tau}}{\tau}.$
(29)
We use Equations 28 and 29 to calculate the transit duration values and
uncertainties for all the systems for which only $T_{14}$ is given.
There are other ways of estimating the uncertainty in $T$, such as by
propagating the uncertainties on $T_{14}$ and $\tau$ through Equation 28, or
by assuming that the uncertainty in $T$ is approximately equal to that of
$T_{14}$. However, these
approaches overestimate the uncertainty on $T$ as compared to Equation 29
because they do not account for the covariance between the measurements of
$T_{14}$ and $\tau$. Therefore, we adopt the uncertainty in $T$ from Equation
29 for all the exoplanets referenced here.
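The conversions of Equations 28–30 can be sketched as follows; the numerical inputs are hypothetical, chosen only for illustration:

```python
import math

def duration_from_t14(T14, tau, f_tau):
    """Convert the first-to-fourth-contact duration T14 to the FWHM transit
    duration T (Equation 28) and estimate its fractional uncertainty,
    accounting for the covariance between T14 and tau (Equation 29)."""
    T = T14 - tau
    f_T = math.sqrt(1.0 / 3.0) * (tau / T) * f_tau
    return T, f_T

def depth_uncertainty_from_ratio(f_k):
    """Fractional uncertainty on delta from that of k = R_p/R_star (Eq 30)."""
    return 2.0 * f_k

# Illustrative (hypothetical) inputs: T14 = 3.0 hr, tau = 0.25 hr,
# 10% uncertainty on tau, 1% uncertainty on R_p/R_star.
T, f_T = duration_from_t14(3.0, 0.25, 0.10)
f_delta = depth_uncertainty_from_ratio(0.01)
```

Because of the $\tau/T$ prefactor in Equation 29, the resulting $\sigma_{T}/T$ is far smaller than $\sigma_{\tau}/\tau$, as expected.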
Finally, for systems where the transit depth $\delta$ is not provided, but
rather the planet-star radius ratio $R_{p}/R_{\star}$, we use linear
propagation of error to estimate $\sigma_{\delta}$, finding
$\bigg{(}\frac{\sigma_{\delta}}{\delta}\bigg{)}^{2}=4\bigg{(}\frac{\sigma_{R_{p}/R_{\star}}}{R_{p}/R_{\star}}\bigg{)}^{2}.$
(30)
We adopt the fractional uncertainties on $R_{\star}$ as reported in the
papers. In some cases these were derived using external constraints, such as
stellar models, and thus may be underestimates or overestimates of the
empirical uncertainty in $R_{\star}$ derived from the stellar SED and
parallax.
The fractional uncertainties calculated using our analytic approximations for
the five planets in our sample are listed in Table 1 and shown in Figure 1. As
is clear from Figure 1, our estimates are broadly in agreement with the
fractional uncertainties quoted in the discovery papers. However, we note that
the fractional uncertainties we predict for certain quantities are
systematically larger or smaller than those reported in the papers. After a
careful ‘forensic’ analysis, we have tracked down the reason for these
discrepancies. In the two most discrepant cases, it is because the authors
used external constraints on the properties of the host star (such as $T_{\rm
eff}$, [Fe/H], and $\log{g_{\star}}$) combined with stellar evolutionary
tracks, to place priors on the stellar parameters $R_{\star}$ and $M_{\star}$.
In one case (HD 21749), the resulting prior constraint on $\rho_{\star}$ is
tighter than the empirical constraint on the stellar density from the light
curve. In the other case, the adopted prior constraint on $\rho_{\star}$ is
weaker than the empirical constraint from the light curve, but the external
(weaker) constraints on $M_{\star}$ and $R_{\star}$ were nevertheless adopted,
rather than the (tighter) empirical constraints. In the remaining cases, either external
constraints were not assumed, and as a result their parameter uncertainties
agree well with our analytic estimates, or the external constraints were
negligible compared to the empirical constraints and thus the empirical
constraints dominated, again leading to agreement with our analytic estimates.
We ultimately conclude that our analytic estimates are reliable; however, we
describe in detail our forensic analysis of the systems for pedagogical
purposes. In the following subsections, we discuss each system in further
detail.
Before doing so, however, we stress that the advantage of empirical, model-
independent approximations like the ones presented here is that they do not
assume that the physical properties of any particular system are representative
of the systems used to calibrate the empirical models, or that the properties
of the systems necessarily agree with the theoretical predictions. For
example, theoretical models that make assumptions about the elemental
abundances of the host star may not apply to the particular system under
consideration. Therefore, although our empirical approach may lead to weaker
constraints on the parameters of the planets, we believe it leads to more
robust constraints on these parameters.
Table 1: Analytic and reported fractional uncertainties
Planet | Analytic | Literature | Reference
---|---|---|---
| $M_{p}$ | $g_{p}$ | $\rho_{p}$ | $R_{p}$ | $M_{p}$ | $g_{p}$ | $\rho_{p}$ | $R_{p}$ |
Fiducial HJ | 0.06 | 0.04 | 0.06 | 0.01 | 0.05 | 0.05 | 0.05 | 0.01 | Stevens et al. (2018)
KELT-26b | 0.34 | 0.34 | 0.34 | 0.03 | 0.33 | 0.37 | 0.35 | 0.03 | Rodríguez Martínez et al. (2020)
KELT-26b⋆ | 0.35 | 0.35 | 0.35 | 0.03 | 0.33 | 0.37 | 0.35 | 0.03 |
Kepler-93b | 0.17 | 0.16 | 0.17 | 0.01 | 0.17 | 0.16 | 0.17 | 0.01 | Ballard et al. (2014)
HD 21749b† | 0.26 | 0.25 | 0.26 | 0.05 | 0.09 | 0.16 | 0.21 | 0.06 | Dragomir et al. (2019)
| (0.09) | (0.14) | (0.18) | | | | | |
K2-106b† | 0.25 | 0.14 | 0.18 | 0.10 | 0.11 | 0.13 | 0.34 | 0.10 | Guenther et al. (2017)
| (0.11) | | (0.32) | | | | | |
Notes. The first four columns are the analytic uncertainties
in $M_{p}$ (Eqn 17), $g_{p}$ (Eqn 12), $\rho_{p}$ (Eqn 20), and $R_{p}$ (Eqn
23), while the next four are the uncertainties in those parameters reported in
the literature. Planets with a † were analyzed using external constraints from
stellar evolutionary models. KELT-26b⋆ is KELT-26 analyzed without external
constraints. The quantities in parentheses below HD 21749b and K2-106b are the
values we recover if we assume external constraints, as explained in Sections
4.4 and 4.5.
### 4.1 A Fiducial Hot Jupiter
Stevens et al. (2018) simulated the photometric time series and RV
measurements for a typical hot Jupiter ($M_{p}=M_{\rm J}$ and $R_{p}=R_{\rm
J}$) on a 3 day orbit transiting a G-type star using VARTOOLS (Hartman &
Bakos, 2016). They injected a Mandel-Agol transit model (Mandel & Agol, 2002)
into an (out-of-transit flux) normalized light curve, and simulated
measurement offsets by drawing from a Gaussian distribution with 1
millimagnitude dispersion. They furthermore assumed a cadence of 100 seconds.
They note that these noise properties are typical of a single ground-based
observation of a hot Jupiter from a small, $\sim$1 m telescope. For the RV
data, they simulated 20 evenly-spaced measurements, each with 10 m $\rm
s^{-1}$ precision (which they assumed was equal to the scatter, or ‘jitter’).
They then performed a joint photometric and RV fit to the simulated data using
EXOFASTv2 (Eastman, 2017; Eastman et al., 2019) to model and estimate the star
and planet’s properties. They simulated three different cases: a circular
($e=0$) orbit with an equatorial transit (i.e., an impact parameter of
$b=0$); an eccentric orbit with $e=0.5$ and $b=0$; and a circular orbit with
$b=0.75$. We
consider the parameters and uncertainties for the case of a circular orbit and
equatorial transit, for which our equations are most applicable, and use the
best-fit values and uncertainties from Table 1 in Stevens et al. (2018).
The fractional uncertainties in the planet mass, surface gravity, and
planetary bulk density quoted in Stevens et al. (2018) are all roughly 5%,
whereas the fractional uncertainty in the planet radius is 1.7%. These
uncertainties are in very good agreement with our analytic estimates, as
Figure 1 and Table 1 show.
### 4.2 A Real Hot Jupiter
KELT-26b is an inflated ultra hot Jupiter on a 3.34 day, polar orbit around an
early Am star characterized by Rodríguez Martínez et al. (2020). It has a mass
and radius of $M_{p}=1.41^{+0.43}_{-0.51}M_{\rm J}$ and
$R_{p}=1.940^{+0.060}_{-0.058}R_{\rm J}$, respectively. The photometry (which
included TESS data) and radial velocity data were jointly fit using EXOFASTv2,
and included an essentially empirical constraint on the radius of the star
from the spectral energy distribution and the Gaia Data Release 2 (DR2)
parallax, as well as theoretical constraints from the MESA Isochrones and
Stellar Tracks (MIST) stellar evolution models (Dotter, 2016; Choi et al.,
2016; Paxton et al., 2011, 2013, 2015). Therefore, unlike the fiducial hot
Jupiter discussed above, this system was modeled using both external empirical
constraints and external theoretical constraints. The uncertainties in the
planet parameters reported by Rodríguez Martínez et al. (2020) are $\sim$34%
for the mass, $\sim$33% for the surface gravity, $\sim$33% for the bulk
density, and 3.8% for the planet’s radius. These are very close to our
estimates of the fractional uncertainties of these parameters, implying that
the constraints from the MIST evolutionary tracks have little effect on the
inferred parameters of the system.
To test this hypothesis, we reanalyzed this system with EXOFASTv2 without
using the external theoretical constraints from the MIST isochrones, that is,
only using the spectral energy distribution of the star, its parallax from
Gaia DR2, and the light curves and radial velocities. The uncertainties from
this analysis are 35% for the planetary mass, surface gravity, and the
density, and 3.3% for the radius. These are consistent with the uncertainties
derived from the analysis using the MIST evolutionary tracks as constraints.
The fractional uncertainties from the original paper and the analysis without
constraints are shown in sky blue (with constraints) and green (without) in
Figure 1. We conclude that the inferred parameters of the system derived using
purely empirical constraints are as precise as (and likely more accurate than)
those inferred using theoretical evolutionary tracks. Therefore, at least for
systems similar to KELT-26, we see no need to invoke theoretical priors.
### 4.3 Kepler-93b
Kepler-93b is a terrestrial exoplanet on a 4.7 day period discovered by
Ballard et al. (2014). It has a mass and radius of $M_{p}=4.02\pm
0.68~{}M_{\oplus}$ and $R_{p}=1.483\pm 0.019~{}R_{\oplus}$. With a radius
uncertainty of only 1.2%, it is one of the most precisely characterized
exoplanets to date. Ballard et al. (2014) used asteroseismology to precisely
constrain the stellar density, and then used it as a prior in their MCMC
analysis, leading to the remarkably precise planet radius. Their analysis did
not use external constraints from stellar evolutionary models, however.
Dressing et al. (2015) revisited Kepler-93 and collected HARPS-N (Mayor et
al., 2003) spectra, which they combined with archival Keck/HIRES spectra to
improve upon the planet’s mass estimate. They thus reduced the uncertainty in
the mass of Kepler-93b from $\sim$40% (Ballard et al., 2014) to $\sim$17%. We
used the photometric parameters ($T$, $\tau$, and $\delta$) from Ballard et
al. (2014) and the semi-amplitude $K_{\star}$ from Dressing et al. (2015) to
test our analytic estimates. We compared our results to the reported
uncertainties in $M_{p}$, $g_{p}$ and $\rho_{p}$ from Dressing et al. (2015),
since they provide slightly more precise properties. The uncertainties in the
properties of Kepler-93b are all $\sim$17%, and 1.2% for the radius, which are
in excellent agreement with our analytic estimates, as shown in Figure 1 and
Table 1. Interestingly, this implies that the asteroseismological constraint
on $\rho_{\star}$ does not significantly improve the overall constraints on
the system.
### 4.4 HD 21749b
HD 21749b is a warm sub-Neptune on a 36 day orbit transiting a K4.5 dwarf
discovered by Dragomir et al. (2019). The planet has a radius of
$2.61^{+0.17}_{-0.16}R_{\oplus}$ determined from TESS data, and a mass of
$22.7^{+2.2}_{-1.9}M_{\oplus}$ constrained from high-precision, radial
velocity data from the HARPS spectrograph at the La Silla Observatory in
Chile. Dragomir et al. (2019) performed an SED fit combined with a parallax
from Gaia DR2 to constrain the host star’s radius to $R_{\star}=0.695\pm
0.030R_{\odot}$. They then used the Torres et al. (2010) relations to derive a
stellar mass of $M_{\star}=0.73\pm 0.07M_{\odot}$, although they do not
specify what values of $T_{\rm eff}$, [Fe/H], and $\log{g_{\star}}$ they adopt
as input into those equations, or from where they derive these values. We
assume they were determined from high-resolution stellar spectra. Finally,
they performed a joint fit of their data and constrained the planetary
parameters with the EXOFASTv2 modeling suite, using their inferred values of
$M_{\star}$ and $R_{\star}$ as priors.
When comparing our analytic approximations of the fractional uncertainties in
$M_{p}$, $g_{p}$, and $\rho_{p}$ to the uncertainties in the paper, we find
that our estimates are systematically larger than those of Dragomir et al.
(2019) by 34% ($M_{p}$), 60% ($g_{p}$), and 80% ($\rho_{p}$).
Understanding the nature of such discrepancies requires a closer examination
of the methods employed by Dragomir et al. (2019) as compared to ours. The
fundamental difference is that their uncertainties in the planetary properties
are dominated by their more precise a priori uncertainties on $M_{\star}$ and
$R_{\star}$ (and thus $\rho_{\star}$), rather than the empirically constrained
value of $\rho_{\star}$ from the light curve and radial velocity measurements.
On the other hand, we estimate the uncertainty on $\rho_{\star}$ directly from
observables (e.g., the light curve and the RV data).
Because their prior on $\rho_{\star}$ is more constraining than the value of
$\rho_{\star}$ one would obtain from the light curve, and because the inferred
planetary parameters critically hinge upon $\rho_{\star}$, this ultimately
leads to smaller uncertainties in the planetary parameters than we obtain
purely from the light curve observables.
To show why this is true, we begin by comparing their prior on $\rho_{\star}$
(the value they derive from their estimates of $M_{\star}$ and $R_{\star}$,
which we will denote $\rho_{\star,\rm prior}$) to the value of $\rho_{\star}$
derived from observables (denoted $\rho_{\star,\rm obs}$).
Their prior on $\rho_{\star}$ can be trivially calculated from
$\rho_{\star}=3M_{\star}/(4\pi R_{\star}^{3})$, and its uncertainty, through
propagation of error, is therefore simply
$\bigg{(}\frac{\sigma_{\rho_{\star,\rm prior}}}{\rho_{\star,\rm
prior}}\bigg{)}^{2}\approx\bigg{(}\frac{\sigma_{M_{\star}}}{M_{\star}}\bigg{)}^{2}+9\bigg{(}\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg{)}^{2}.$
(31)
Inserting the appropriate values from Dragomir et al. (2019) yields
$\rho_{\star,\rm prior}=3.07\pm 0.49$ g cm$^{-3}$. (We note that the value
reported in Section 3.1 of Dragomir et al. 2019 is $\rho_{\star}=3.09\pm 0.23$
g cm$^{-3}$, but after careful analysis, we believe that this value is
probably a typographical error, as it differs from the value we derive and
from the posterior value in Table 1 of that paper.) This represents a
fractional uncertainty of $\sigma_{\rho_{\star,\rm prior}}/\rho_{\star,\rm prior}=0.16$.
Now, combining Equations 7 and 10, we can express $\rho_{\star,\rm obs}$ and
its uncertainty in terms of transit observables as
$\rho_{\star,\rm
obs}=\bigg{(}\frac{3P}{G\pi^{2}}\bigg{)}\delta^{3/4}(T\tau)^{-3/2}.$ (32)
Therefore,
$\begin{split}\bigg{(}\frac{\sigma_{\rho_{\star,\rm obs}}}{\rho_{\star,\rm
obs}}\bigg{)}^{2}\approx\bigg{(}\frac{\sigma_{P}}{P}\bigg{)}^{2}+\frac{9}{16}\bigg{(}\frac{\sigma_{\delta}}{\delta}\bigg{)}^{2}+\frac{9}{4}\bigg{(}\frac{\sigma_{T}}{T}\bigg{)}^{2}+\\\
\frac{9}{4}\bigg{(}\frac{\sigma_{\tau}}{\tau}\bigg{)}^{2}.\end{split}$ (33)
Inserting the fractional uncertainties on $P$, $\delta$, $T$, and $\tau$ from
the discovery paper into Equation 33, we find $\sigma_{\rho_{\star,\rm
obs}}/{\rho_{\star,\rm obs}}=0.37$. This is larger and less constraining than
the fractional uncertainty in the prior on $\rho_{\star}$ from
Dragomir et al. (2019) by a factor of 2.3. Thus, we expect the prior on
$\rho_{\star}$ to dominate over the constraint from the light curve.
However, despite being considerably less constraining than the prior, the
empirical constraint on $\rho_{\star,\rm obs}$ can still influence the
posterior value if the central value is significantly different from the prior
value. Inserting the values of $P$, $\delta$, $T$, and $\tau$ into Equation 32,
we find a central value of $\rho_{\star,\rm obs}=5.56\pm 2.06$ g cm$^{-3}$. This
value is $(5.56-3.07)/2.06=1.2\sigma$ discrepant from the prior value. Thus,
there is a weak tension between the empirical and prior values of
$\rho_{\star}$ that should be explored.
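The prior-versus-empirical comparison above can be reproduced numerically. In the sketch below, the solar constants are assumed values (not from the text), the priors on $M_{\star}$ and $R_{\star}$ are those quoted from Dragomir et al. (2019), and the empirical density is the value quoted above:

```python
import math

# Solar values in CGS (assumed constants, not from the text)
M_SUN, R_SUN = 1.989e33, 6.957e10
RHO_SUN = 3.0 * M_SUN / (4.0 * math.pi * R_SUN**3)   # ~1.41 g cm^-3

# Priors adopted by Dragomir et al. (2019):
# M_star = 0.73 +/- 0.07 M_sun, R_star = 0.695 +/- 0.030 R_sun.
M, sM = 0.73, 0.07
R, sR = 0.695, 0.030

rho_prior = RHO_SUN * M / R**3                            # g cm^-3
f_rho_prior = math.sqrt((sM / M)**2 + 9.0 * (sR / R)**2)  # Equation 31
s_rho_prior = rho_prior * f_rho_prior                     # -> ~3.07 +/- 0.49

# Empirical light-curve value quoted in the text: 5.56 +/- 2.06 g cm^-3
rho_obs, s_rho_obs = 5.56, 2.06
tension = (rho_obs - rho_prior) / s_rho_obs               # -> ~1.2 sigma
```

This recovers both the prior density of $3.07\pm 0.49$ g cm$^{-3}$ and the $1.2\sigma$ tension quoted in the text.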
If we include the eccentricity in the expression for $\rho_{\star}$, we find
much closer agreement between $\rho_{\star,\rm obs}$ and $\rho_{\star,\rm
prior}$.
From Winn (2010), we can express the scaled semi-major axis as a function of
eccentricity as
$\frac{a}{R_{\star}}=\frac{P}{\pi}\frac{\delta^{1/4}}{\sqrt{T\tau}}\bigg{(}\frac{\sqrt{1-e^{2}}}{1+e\sin{\omega}}\bigg{)}.$
(34)
We can then combine this equation with Equation 7 to find the ratio between
the inferred $\rho_{\star}$ assuming a circular orbit ($\rho_{\star,{\rm
obs,c}}$) and that for an eccentric orbit ($\rho_{\star,{\rm obs,e}}$):
$\rho_{\star,{\rm obs,e}}=\rho_{\star,{\rm
obs,c}}\bigg{(}\frac{\sqrt{1-e^{2}}}{1+e\sin{\omega}}\bigg{)}^{3}.$ (35)
Inserting the values from the paper ($e=0.188$ and $\omega=98^{\circ}$) yields
$\rho_{\star,\rm obs}=5.56$ g cm$^{-3}\times 0.568=3.16$ g cm$^{-3}$, and
assuming the same fractional uncertainty as $\rho_{\star,\rm obs,c}$ of $0.37$
(which we discuss below), we get a value of $\rho_{\star,\rm obs,e}=3.16\pm
1.17$ g cm$^{-3}$, which is $\sim 0.1\sigma$ greater than the prior, and in much better
agreement than our estimate without including eccentricity. The reason why the
eccentricity significantly affects $\rho_{\star,\rm obs}$ in this case,
despite the fact that it is relatively small ($e=0.188$), is that for this
system, the argument of periastron is $\omega\simeq 90^{\circ}$, which implies
that the transit occurs near periastron, and thus the transit is shorter than
if the planet were on a circular orbit by a factor of
$\frac{T_{\rm e}}{T_{\rm c}}\simeq\frac{\sqrt{1-e^{2}}}{1+e}=0.827.$ (36)
Thus $\tau$ is shorter by the same factor. Since
$\rho_{\star}\propto(T\tau)^{-3/2}$, by assuming $e=0$, one overestimates the
density by a factor of
$\bigg{(}\frac{\sqrt{1-e^{2}}}{1+e}\bigg{)}^{-3}=0.565^{-1},$ (37)
approximately recovering the factor above.
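The eccentricity correction of Equations 35 and 36 can be checked numerically with the values quoted from the paper:

```python
import math

# Values from Dragomir et al. (2019): e = 0.188, omega = 98 degrees
e, omega = 0.188, math.radians(98.0)

# Equation 35: density ratio between the eccentric and circular solutions
factor = (math.sqrt(1.0 - e**2) / (1.0 + e * math.sin(omega)))**3  # -> ~0.568

# Equation 36: transit-duration ratio near periastron (sin omega ~ 1)
t_ratio = math.sqrt(1.0 - e**2) / (1.0 + e)                        # -> ~0.827

rho_obs_c = 5.56                 # circular-orbit value from the text, g cm^-3
rho_obs_e = rho_obs_c * factor   # -> ~3.16 g cm^-3
```

This reproduces both the duration ratio of $0.827$ and the corrected density of $\sim$3.16 g cm$^{-3}$.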
The eccentricity also affects the uncertainty in $\rho_{\star}$ in the
following way:
$\begin{split}\frac{\rho_{\star,{\rm obs,e}}}{\rho_{\star,{\rm
obs,c}}}\propto\bigg{(}\frac{\sqrt{1-e^{2}}}{1+e\sin{\omega}}\bigg{)}^{3}\simeq\bigg{(}1-\frac{3}{2}e^{2}\bigg{)}(1-3e\sin{\omega})\\\
\simeq 1-3e\sin{\omega},\end{split}$ (38)
where we have assumed that $e\ll 1$. Propagating the uncertainty leads to a
final value of $\rho_{\star,\rm obs,e}=3.16\pm 2.04$ g cm$^{-3}$, which is only
$\sim$0.04$\sigma$ greater than $\rho_{\star,\rm prior}$. Thus, the
eccentricity plays a significant role in the parameter uncertainties for this
system. The uncertainty in the prior constraint on $\rho_{\star}$ is a factor
of $\sim$1.7 times smaller than that derived from the data alone. We therefore
conclude that the prior adopted by Dragomir et al. (2019) dominates over the
empirical value of $\rho_{\star}$ from the data ($\rho_{\star,\rm obs}$). This
also explains why their final value and uncertainty in $\rho_{\star}$
($3.03^{+0.50}_{-0.47}$ g cm$^{-3}$) is so close to their prior
($\rho_{\star,\rm prior}=3.07\pm 0.49$ g cm$^{-3}$).
Assuming that the uncertainty on their priors for $M_{\star}$ and $R_{\star}$
indeed dominates the fractional uncertainty in the resulting planet
parameters, we can reproduce their uncertainties in $M_{p}$, $g_{p}$, and
$\rho_{p}$ using their prior to recover their reported fractional
uncertainties as follows.
For the surface gravity $g_{p}$, we have
$g_{p}=\frac{GM_{p}}{R_{p}^{2}},$ (39)
and
$M_{p}=\bigg{(}\frac{P}{2\pi
G}\bigg{)}^{1/3}M_{\star}^{2/3}(1-e^{2})^{1/2}K_{\star}$ (40)
while the planet radius can be expressed as
$R_{p}=\delta^{1/2}R_{\star}.$ (41)
Therefore,
$g_{p}=\bigg{(}\frac{P}{2\pi
G}\bigg{)}^{1/3}M_{\star}^{2/3}K_{\star}\delta^{-1}R_{\star}^{-2}(1-e^{2})^{1/2}.$
(42)
Instead of simplifying $g_{p}$ in terms of observables (as we have done in
Equation 11), we express it in terms of $M_{\star}$ and $R_{\star}$. Using
propagation of error, the uncertainty is
$\begin{split}\bigg{(}\frac{\sigma_{g_{p}}}{g_{p}}\bigg{)}^{2}\approx\frac{1}{9}\bigg{(}\frac{\sigma_{P}}{P}\bigg{)}^{2}+\frac{4}{9}\bigg{(}\frac{\sigma_{M_{\star}}}{M_{\star}}\bigg{)}^{2}+\bigg{(}\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg{)}^{2}+\\\
\bigg{(}\frac{\sigma_{\delta}}{\delta}\bigg{)}^{2}+4\bigg{(}\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg{)}^{2}.\end{split}$
(43)
Here we have assumed $(1-e^{2})^{1/2}\approx 1$ since the eccentricity is
small.
Inserting the appropriate values from Table 1 in Dragomir et al. (2019) in
Equations 42 and 43, we recover a fractional uncertainty in the surface
gravity of $\sigma_{g_{p}}/g_{p}=0.14$, which is 12.5% different from the
value reported in Dragomir et al. (2019), and thus agrees much better with
their results than our initial estimate.
For the planet’s mass, we start from Equation 40 and propagate its uncertainty
as
$\bigg{(}\frac{\sigma_{M_{p}}}{M_{p}}\bigg{)}^{2}\approx\frac{1}{9}\bigg{(}\frac{\sigma_{P}}{P}\bigg{)}^{2}+\frac{4}{9}\bigg{(}\frac{\sigma_{M_{\star}}}{M_{\star}}\bigg{)}^{2}+\bigg{(}\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg{)}^{2},$ (44)
implying a fractional uncertainty in the mass of $\sigma_{M_{p}}/M_{p}=0.093$,
which is only $\sim$0.3% discrepant from the uncertainty in the paper.
Finally, we replicate the analysis for the planet’s density, starting with
$\rho_{p}=\frac{3M_{p}}{4\pi R_{p}^{3}}=\frac{3M_{p}}{4\pi R_{\star}^{3}}\delta^{-3/2}.$ (45)
Therefore,
$\rho_{p}=\frac{3}{4\pi}\bigg{(}\frac{P}{2\pi G}\bigg{)}^{1/3}M_{\star}^{2/3}K_{\star}\delta^{-3/2}R_{\star}^{-3}(1-e^{2})^{1/2}.$ (46)
And the uncertainty in $\rho_{p}$ is thus
$\begin{split}\bigg{(}\frac{\sigma_{\rho_{p}}}{\rho_{p}}\bigg{)}^{2}\approx\frac{1}{9}\bigg{(}\frac{\sigma_{P}}{P}\bigg{)}^{2}+\frac{4}{9}\bigg{(}\frac{\sigma_{M_{\star}}}{M_{\star}}\bigg{)}^{2}+\bigg{(}\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg{)}^{2}+\\ \frac{9}{4}\bigg{(}\frac{\sigma_{\delta}}{\delta}\bigg{)}^{2}+9\bigg{(}\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg{)}^{2},\end{split}$ (47)
which leads to $\sigma_{\rho_{p}}/\rho_{p}=0.18$, while the paper reports
$\sigma_{\rho_{p}}/\rho_{p}=0.21$, which is a $\sim$14% difference.
In summary, we can roughly reproduce the uncertainties in Dragomir et al.
(2019) to better than 15% if we assume that such uncertainties are dominated
by the priors on the stellar mass and radius. In Figure 1, we plot both our
initial fractional uncertainties and the recovered uncertainties as pairs
connected by golden arrows that point in the direction of the ‘recovered’
uncertainties based on our forensic analysis.
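The three error-propagation expressions above (Equations 43, 44, and 47) are simple enough to encode directly. The sketch below, in Python, uses illustrative fractional uncertainties (not the actual Table 1 values from Dragomir et al. 2019) and shows that, with these inputs, a prior-dominated error budget yields $\sigma_{M_{p}}/M_{p}<\sigma_{g_{p}}/g_{p}<\sigma_{\rho_{p}}/\rho_{p}$:

```python
import math

# Fractional-uncertainty propagation, Eqs. 43, 44, and 47 (e ~ 0 assumed).
def frac_unc_gp(sP, sM, sK, sdelta, sR):
    return math.sqrt(sP**2 / 9 + 4 * sM**2 / 9 + sK**2 + sdelta**2 + 4 * sR**2)

def frac_unc_Mp(sP, sM, sK):
    return math.sqrt(sP**2 / 9 + 4 * sM**2 / 9 + sK**2)

def frac_unc_rhop(sP, sM, sK, sdelta, sR):
    return math.sqrt(sP**2 / 9 + 4 * sM**2 / 9 + sK**2 + 9 * sdelta**2 / 4 + 9 * sR**2)

# Illustrative inputs (NOT the Dragomir et al. 2019 values): a well-measured
# period, a 10% prior on M_star, 8% on K_star, 2% on delta, 5% on R_star.
sP, sM, sK, sdelta, sR = 0.0, 0.10, 0.08, 0.02, 0.05
print(frac_unc_Mp(sP, sM, sK))                # ~0.104
print(frac_unc_gp(sP, sM, sK, sdelta, sR))    # ~0.146
print(frac_unc_rhop(sP, sM, sK, sdelta, sR))  # ~0.185
```

The relative ordering of the three results depends on the input uncertainties; with prior-dominated stellar parameters, the extra powers of $R_{\star}$ in Equations 43 and 47 inflate the gravity and density uncertainties relative to the mass.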
### 4.5 K2-106b
K2-106b is the inner planet in a system of two transiting exoplanets
discovered by Adams et al. (2017) and later characterized by Guenther et al.
(2017). It is on an ultra-short, 0.57-day orbit around a G5V star. It has a
mass and radius of $8.36^{+0.96}_{-0.94}M_{\oplus}$ and $1.52\pm
0.16R_{\oplus}$, leading to a high bulk density of
$\rho_{p}=13.1^{+5.4}_{-3.6}$ g cm$^{-3}$. Guenther et al. (2017) used data from
the K2 mission combined with multiple radial velocity observations from the
High Dispersion Spectrograph (HDS; Noguchi et al. 2002), the Carnegie Planet
Finder Spectrograph (PFS; Crane et al. 2006), and the FIber-Fed Echelle
Spectrograph (FIES; Frandsen & Lindberg, 1999; Telting et al., 2014) to
confirm and analyze this system. They performed a multi-planet joint analysis
of the data using the code pyaneti (Barragán et al., 2017) and derived the
host star’s mass and radius using the PARSEC model isochrones and the
interface for Bayesian estimation of stellar parameters from da Silva et al.
(2006).
As with HD 21749b, we found large discrepancies ($\sim$50%) between our
analytic estimates of the uncertainties on the planetary mass and density and
the literature values for K2-106b. Unlike HD 21749b, however, the reason for
this discrepancy is that the uncertainty in the density of the host star, and
thus in the properties of the planet, is dominated by the data (the light
curve and radial velocities) rather than by the prior. To see why this is true,
we perform a similar analysis as in Section 4.4, and begin by comparing the
uncertainty in the density from the observables $\rho_{\star,\rm obs}$ to the
density from the prior $\rho_{\star,\rm prior}$.
First, we have that the uncertainty in $\rho_{\star}$ based purely on the
prior fractional uncertainties on $M_{\star}$ and $R_{\star}$ is given by
$\bigg{(}\frac{\sigma_{\rho_{\star,\rm prior}}}{\rho_{\star,\rm prior}}\bigg{)}^{2}\approx\bigg{(}\frac{\sigma_{M_{\star}}}{M_{\star}}\bigg{)}^{2}+9\bigg{(}\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg{)}^{2}.$ (48)
Inserting the values of $\sigma_{M_{\star}}/M_{\star}$ and
$\sigma_{R_{\star}}/R_{\star}$ from the paper, we derive a fractional
uncertainty on the density of the star from the prior of
$\sigma_{\rho_{\star,\rm prior}}/\rho_{\star,\rm prior}=0.31$. On the other hand,
using Equation 33, the fractional uncertainty in the stellar density from pure
observables $\rho_{\star,\rm obs}$ is $\sigma_{\rho_{\star,\rm
obs}}/\rho_{\star,\rm obs}=0.15$, a factor of $\sim$2 times smaller than the
fractional uncertainty on $\rho_{\star}$ estimated from the prior.
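As a quick check of Equation 48, the prior-based fractional density uncertainty quoted above can be reproduced from the 10% radius prior mentioned in the text together with an $\sim$8% mass prior (the mass-prior value here is an inferred, illustrative number, not one quoted from Guenther et al. 2017):

```python
import math

def frac_unc_rho_star_prior(s_M, s_R):
    # Eq. 48: (sigma_rho / rho)^2 ~ (sigma_M / M)^2 + 9 (sigma_R / R)^2
    return math.sqrt(s_M**2 + 9 * s_R**2)

# 10% prior on R_star (quoted); ~8% on M_star (assumed for illustration):
print(frac_unc_rho_star_prior(0.08, 0.10))  # ~0.31, vs ~0.15 from observables
```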
We can compute the uncertainty in the planetary mass assuming the fractional
uncertainty on $M_{\star}$ from the prior and the fractional uncertainty on
the measured semi-amplitude $K_{\star}$ using Equation 44:
$\bigg{(}\frac{\sigma_{M_{p}}}{M_{p}}\bigg{)}^{2}\approx\frac{4}{9}\bigg{(}\frac{\sigma_{M_{\star}}}{M_{\star}}\bigg{)}^{2}+\bigg{(}\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg{)}^{2},$ (49)
where we have assumed that $\sigma_{P}/P\ll 1$. We find
$\sigma_{M_{p}}/M_{p}=0.11$, which is only 1% different from the uncertainty
reported in Guenther et al. (2017).
Further, we infer that Guenther et al. (2017) estimated the density of the
planet by combining their estimate of the planet mass (adopting the prior
value of $M_{\star}$ along with the observed values of $K_{\star}$ and $P$)
with the planet radius (adopting the prior value of $R_{\star}$ and the
observed transit depth, and thus $R_{p}/R_{\star}$). They would then have
estimated the uncertainty in the planet density via
$\bigg{(}\frac{\sigma_{\rho_{p}}}{\rho_{p}}\bigg{)}^{2}\approx\bigg{(}\frac{\sigma_{M_{p}}}{M_{p}}\bigg{)}^{2}+9\bigg{(}\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg{)}^{2}+\frac{9}{4}\bigg{(}\frac{\sigma_{\delta}}{\delta}\bigg{)}^{2},$ (50)
again assuming that $\sigma_{P}/P\ll 1$. Substituting the values quoted in
Guenther et al. (2017) into the expression above, we find
$\sigma_{\rho_{p}}/\rho_{p}=0.32$, whereas they quote a fractional uncertainty
of $\sigma_{\rho_{p}}/\rho_{p}=0.34$, a $\sim$6% difference. On the other
hand, if we analytically estimate the fractional uncertainty on the density of
K2-106b using pure observables (Equation 20), but assume their reported value
and uncertainty on $R_{\star}$, we find $\sigma_{\rho_{p}}/\rho_{p}\sim 0.18$,
i.e., a factor of $\sim$2 times smaller.
In the case of the surface gravity of the planet, however, we find that our
analytic estimate and that reported in the paper differ by only $\sim 8\%$.
This is in part because the uncertainty in the stellar density from the
light curve dominates the uncertainty in the planet properties. In this case,
the light curve and radial velocity data tightly constrain the stellar
density, which implies that $M_{\star}$ is approximately proportional to
$R_{\star}^{3}$. This constraint on $\rho_{\star}$ causes the prior estimates
of the stellar mass, radius, and their uncertainties to cancel out in the
expression for the planet surface gravity:
$g_{p}\propto P^{1/3}K_{\star}M_{\star}^{2/3}\delta^{-1}R_{\star}^{-2}.$ (51)
Assuming $M_{\star}\propto\rho_{\star}R_{\star}^{3}$, and
$\rho_{\star}\sim~{}{\rm constant}$, we find
$g_{p}\propto P^{1/3}K_{\star}R_{\star}^{2}\delta^{-1}R_{\star}^{-2}=P^{1/3}K_{\star}\delta^{-1}.$ (52)
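The cancellation of $R_{\star}$ between Equations 51 and 52 can be verified numerically: holding $\rho_{\star}$ fixed and substituting $M_{\star}=\rho_{\star}R_{\star}^{3}$ makes the proportionality for $g_{p}$ independent of the stellar radius. A minimal sketch in arbitrary units:

```python
def g_p_propto(P, K_star, delta, M_star, R_star):
    # Proportionality of Eq. 51 (constants dropped, arbitrary units).
    return P**(1 / 3) * K_star * M_star**(2 / 3) / (delta * R_star**2)

rho_star = 1.3  # held constant, arbitrary units
vals = [g_p_propto(1.0, 1.0, 1.0, rho_star * R**3, R) for R in (0.5, 1.0, 2.0)]
# All three values agree: R_star cancels, recovering
# g_p proportional to P^(1/3) K_star / delta (Eq. 52).
print(vals)
```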
Equation 52 and Equation 11 do not agree because Equation 52 does not
include the full contributions of the uncertainties in the light curve
observables $P$, $\delta$, $T$, and $\tau$.
Figure 1 shows the fractional uncertainties in $M_{p}$, $g_{p}$, and
$\rho_{p}$ for K2-106b and brown arrows pointing from the original values we
estimate to the ‘recovered’ values.
We reanalyzed K2-106 with EXOFASTv2 to derive stellar and planetary properties
without using any of the MIST stellar tracks, the Yonsei-Yale stellar
evolutionary models (YY; Yi et al., 2001), or the Torres et al. (2010)
relationships that are built into EXOFASTv2. We first constrained the stellar
radius by fitting the star’s SED to stellar atmosphere models to infer the
extinction $A_{V}$ and bolometric flux, which when combined with its distance
from Gaia EDR3 (Gaia Collaboration et al., 2020) provides a (nearly empirical)
constraint on $R_{\star}$. We find a fractional uncertainty in $R_{\star}$ of
2.4%, while Guenther et al. (2017) derive a fractional uncertainty of 10%
using the measured values of $T_{\rm eff}$, $\log{g_{\star}}$, and [Fe/H] from
their HARPS and HARPS-N spectra, combined with constraints from the PARSEC
model isochrones (da Silva et al., 2006).
We used our estimate of the stellar radius to recalculate the fractional
uncertainties in $M_{p}$, $g_{p}$, $\rho_{p}$, and $R_{p}$ using our analytic
expressions, and the constraints on the empirical parameters
$P,~{}K_{\star},~{}T,~{}\tau$, and $\delta$ from Guenther et al. (2017). Our
derived fractional uncertainty in the planetary radius is 3.0%, whereas
Guenther et al. (2017) find 10%. (We note that when
$\sigma_{R_{p}/R_{\star}}\ll\sigma_{R_{\star}}/R_{\star}$, the fractional
uncertainty on the planetary radius is equal to the fractional uncertainty on
the radius of the star; Eqn. 22. While this is approximately the case given
the fractional uncertainty in $R_{\star}$ estimated by Guenther et al. (2017),
for our estimate the uncertainty in $R_{p}/R_{\star}$ of $1.9\%$ contributes
somewhat to our estimated fractional uncertainty in $R_{p}$.) Our derived
fractional uncertainty on the density of the planet is a factor of 2.3 times
smaller than that reported by Guenther et al. (2017). This is because the radius
of the star enters into their estimate of $\rho_{p}$ as $R_{\star}^{-3}$ (Eqn.
45), whereas our estimate of $\rho_{p}$ only depends linearly on $R_{\star}$
(Eqn. 19). We estimate an uncertainty in the planet mass of 15%, a bit larger
than that reported by Guenther et al. (2017), as it scales as the square of
the radius of the star (Eqn. 15). Finally, the planetary surface gravity
uncertainty that we estimate is 14%, almost the same as that estimated by
Guenther et al. (2017), as it does not depend directly on
$\sigma_{R_{\star}}/R_{\star}$.
We conclude that a careful reanalysis of the K2-106 system using purely
empirical constraints may well result in a significantly more precise
constraint on the density of K2-106b, which is already a strong candidate for
an exceptionally dense super-Earth.
## 5 Discussion
Figure 2: 1$\sigma$ and 2$\sigma$ mass-radius ellipses for K2-229b (Dai et
al., 2019). The red ellipses assume $\rm M_{p}$ and $\rm R_{p}$ are
uncorrelated random variables. The black ellipses are the result of
correlating $\rm M_{p}$ and $\rm R_{p}$ via the added constraint of surface
gravity. Planets whose masses and radii lie along the blue solid line
would have a constant core mass fraction of 0.565, whereas those that lie
along the blue dotted line would have a constant core mass fraction of 0.29.
Planets forming with iron abundances as expected from the K2-229 Fe/Mg ratio
will follow the blue dotted line.
Here we discuss the importance of achieving high-precision measurements of
planetary masses, surface gravities, densities and radii, and their overall
role in a planet’s habitability.
The mass and radius of a planet are arguably its most fundamental quantities.
The mass is a measure of how much matter a planet accreted during its
formation and is also tightly connected to its density and surface gravity,
which we discuss below. The mass also determines whether it can acquire and
retain a substantial primordial atmosphere. Atmospheres are essential for a
planet to maintain weather and thus life (see, e.g., Dohm & Maruyama, 2013).
In addition, the planetary core mass and radius (themselves a function of the
total mass) are related to the strength of a planet’s global magnetic field,
although the strength of the field does depend on other factors, such as the
rotation rate of the planet and other aspects of its interior. The presence of
a substantial planetary magnetic field is vital in shielding against harmful
electromagnetic radiation from the host star. This is especially true for
exoplanets orbiting M dwarfs, which are much more active than Sun-like stars.
Without a magnetic field to shield against magnetic phenomena such as flares
and Coronal Mass Ejections (CMEs), planets around such stars may undergo mass
loss and atmospheric erosion on relatively short timescales (see, e.g.,
Kielkopf et al. 2019). The initial mass may also determine whether planets
will have moons, a factor which has been hypothesized to play a role in the
habitability of a planet, as it does for the Earth. Some authors have even
proposed that Mars- and Earth-sized moons around giant planets may themselves
be habitable (see, e.g., Heller et al., 2014; Hill et al., 2018).
The mean density of a planet is also important as it is a first-order
approximation of its composition. Based on their density, we can classify
planets as predominantly rocky (typically Earth-sized and super Earths) or
gaseous (Neptune-sized and hot Jupiters). A reliable determination of the
density and structure of a planet helps to constrain its habitability.
Next, we briefly discuss a few aspects of the importance of knowledge of a
planet’s surface gravity. First, the surface gravity dictates the escape
velocity of the planet, as well as the planet’s atmospheric scale height, $h$,
defined as
$h=\frac{k_{b}T_{\rm eq}}{\mu g_{p}},$ (53)
where $k_{b}$ is the Boltzmann constant, $T_{\rm eq}$ is the planet
equilibrium temperature, and $\mu$ is the atmospheric mean molecular weight.
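For concreteness, Equation 53 evaluated for roughly Earth-like values ($T_{\rm eq}\approx 255$ K, $\mu\approx 29$ amu, $g\approx 9.8$ m s$^{-2}$; illustrative inputs, not from the source) gives a scale height of order 7-8 km:

```python
K_B = 1.380649e-23       # Boltzmann constant [J/K]
AMU = 1.66053906660e-27  # atomic mass unit [kg]

def scale_height_m(T_eq_K, mu_amu, g_p_ms2):
    # Eq. 53: h = k_b * T_eq / (mu * g_p), in meters.
    return K_B * T_eq_K / (mu_amu * AMU * g_p_ms2)

h = scale_height_m(255.0, 29.0, 9.81)  # roughly Earth-like inputs
print(h / 1e3)  # ~7.5 km
```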
The surface gravity is connected to mass loss events and the ability of a
terrestrial planet to retain a secondary atmosphere. Perhaps most importantly,
gravity may be a main driver of plate tectonics on a terrestrial planet. One
of the most fundamental questions about terrestrial or Earth-like planets is
whether they can have and sustain active plate tectonics or if they are in the
stagnant lid regime, like Mars or Venus (van Heck & Tackley, 2011). On Earth,
plate tectonics are deeply linked to habitability for several crucial reasons.
Plate tectonics regulate surface carbon abundance by transporting some $\rm
CO_{2}$ out of the atmosphere and into the interior, which helps maintain a
stable climate over long timescales (Sleep & Zahnle, 2001; Unterborn et al.,
2016). An excess of carbon dioxide can result in a runaway greenhouse effect,
as in the case of Venus. Plate tectonics also drive the formation of surface
features like mountains and volcanoes, and play an important role in sculpting
the topography of a rocky planet. Weathering can then carry nutrients from
mountains to the oceans, contributing to ocean biodiversity. Some
authors have argued that plate tectonics and dry land (such as continents)
maximize the opportunities for intelligent life to evolve (Dohm &
Maruyama, 2013).
However, the origin and mechanisms of plate tectonics are poorly understood on
Earth, and are even more so for exoplanets. The refereed literature on this
topic includes inconsistent conclusions regarding the conditions required for
plate tectonics, and in particular how the likelihood of plate tectonics
depends on the mass of the planet. For example, there is an ongoing debate
about whether plate tectonics are inevitable or unlikely on super Earths.
Valencia & O’Connell (2009) used a convection model and found that the
probability and ability of a planet to harbor plate tectonics increases with
planet size. On the other hand, O’Neill & Lenardic (2007) came to the opposite
conclusion, finding that plate tectonics are less likely on larger planets,
based on numerical simulations. The resolution to this debate will have
important consequences for our assessment of the likelihood of life on other
planets.
### 5.1 Surface gravity as a proxy for the core mass fraction
The surface gravity of a planet may also play an important role in
constraining other planetary parameters, like the core mass fraction. Here, we
considered K2-229b ($R_{p}=1.197^{+0.045}_{-0.048}~{}R_{\oplus}$ and
$M_{p}=2.49^{+0.42}_{-0.43}~{}M_{\oplus}$, Dai et al. 2019), a potential super
Mercury first discovered by Santerne et al. (2018). This planet has well
measured properties and the prospects for improving the precision of the
planet parameters are good given the brightness of the host star.
We calculate the core mass fraction of K2-229b as expected from the planet’s
mass and radius, $\rm CMF_{\rho}$, which is the mass of the iron core divided
by the total mass of the planet: $\rm CMF_{\rho}$ = $\rm M_{Fe}/M_{p}$. We
compare this to the CMF as expected from the refractory elemental abundances
of the host star, $\rm CMF_{star}$. This definition assumes that a rocky
planet’s mass is dominated by Fe and oxides of Si and Mg. Therefore, the
stellar Fe/Mg and Si/Mg fractions are reflected in the planet’s core mass
fraction. The mass and radius of K2-229b are consistent with a rocky planet
with a 0.57 core mass fraction (CMF), while the relative abundances of Mg, Si,
and Fe of the host star K2-229 (as reported in Santerne et al. 2018) predict a
core mass fraction of 0.29 (Schulze et al., 2020). Figure 2 shows mass-radius
(M-R) ellipses for K2-229b when the mass and radius are assumed to be
uncorrelated (red) and correlated via the added constraint of surface gravity
(black). While the planet is apparently enriched in iron, the enrichment is
only significant at the 2$\sigma$ level. The surface gravity, however, is
correlated with the mass and radius, reducing the uncertainty in $\rm
CMF_{\rho}$ (black): the M-R ellipse that includes the surface gravity
constraint reduces the uncertainty in the difference between the two CMF
measures. This arises because the
planet’s density and surface gravity only differ by one factor of $R_{p}$.
Because the black contours closely follow the line of constant 0.57 CMF, we
assert that surface gravity and planet radius may be a better proxy for core
mass fraction than mass and radius. Indeed, at the current uncertainties, we
calculate that the additional constraint of surface gravity reduces the
uncertainty in the $\rm CMF_{\rho}$ of K2-229b from 0.182 to 0.165. This is
important given that we have demonstrated that the surface gravity of a planet
is likely to be one of the most precisely measured properties of the planet.
Furthermore, the fractional precision of the surface gravity measurement can
be arbitrarily improved with additional data, at least to the point where
systematic errors begin to dominate.
## 6 Conclusions
One of the leading motivations of this paper was to answer the question:
“given photometric and RV observations of a given exoplanet system, can we
measure a planet’s surface gravity better than its mass?” At first glance, the
surface gravity depends on the mass itself, so it seems that the gravity
should always be less constrained. However, upon expressing the mass, gravity
and density as a function of photometric and RV parameters, we see that the
mass and density have an extra dependence on the stellar radius, which makes
the surface gravity generically easier to constrain to a given fractional
precision than the mass or density. When expressed in terms of pure
observables, a hierarchy in the precisions on the planet properties emerges,
such that the surface gravity is better constrained than the density, and the
latter is in turn better constrained than the mass. The surface gravity is a
crucial planetary property, as it dictates the scale height of a planet’s
atmosphere. It is also a potential driver of plate tectonics, and as we show
in this paper, can be an excellent proxy to constrain a planet’s core mass
fraction to better facilitate the discrimination of planet composition as
different from its host star. With current missions like TESS, we expect to
achieve high precisions in the photometric parameters. State-of-the-art RV
measurements can now reach precisions in the semi-amplitude of $<5\%$. As a
result, the uncertainties in the ingress/egress duration $\tau$ and the host
star radius $R_{\star}$ may be the limiting factors in constraining the
properties of low-mass terrestrial planets.
We would like to thank Andrew Collier Cameron for his suggestion that the
surface gravity of a transiting planet may be more well constrained than its
mass, radius, or density. R.R.M. and B.S.G. were supported by the Thomas
Jefferson Chair for Space Exploration endowment from the Ohio State
University. D.J.S. acknowledges funding support from the Eberly Research
Fellowship from The Pennsylvania State University Eberly College of Science.
The Center for Exoplanets and Habitable Worlds is supported by the
Pennsylvania State University, the Eberly College of Science, and the
Pennsylvania Space Grant Consortium. The results reported herein benefited
from collaborations and/or information exchange within NASA’s Nexus for
Exoplanet System Science (NExSS) research coordination network sponsored by
NASA’s Science Mission Directorate. J.G.S. acknowledges the support of The
Ohio State School of Earth Sciences through the Friends of Orton Hall research
grant. W.R.P. was supported by the National Science Foundation under Grant
No. EAR-1724693.
## References
* Adams et al. (2017) Adams, E. R., Jackson, B., Endl, M., et al. 2017, AJ, 153, 82
* Ballard et al. (2014) Ballard, S., Chaplin, W. J., Charbonneau, D., et al. 2014, ApJ, 790, 12
* Barragán et al. (2017) Barragán, O., Gandolfi, D., & Antoniciello, G. 2017, pyaneti: Multi-planet radial velocity and transit fitting, Astrophysics Source Code Library, ascl:1707.003
* Borucki et al. (2010) Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977
* Carter et al. (2008) Carter, J. A., Yee, J. C., Eastman, J., Gaudi, B. S., & Winn, J. N. 2008, ApJ, 689, 499
* Choi et al. (2016) Choi, J., Dotter, A., Conroy, C., et al. 2016, ApJ, 823, 102
* Crane et al. (2006) Crane, J. D., Shectman, S. A., & Butler, R. P. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6269, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. I. S. McLean & M. Iye, 626931
* da Silva et al. (2006) da Silva, L., Girardi, L., Pasquini, L., et al. 2006, A&A, 458, 609
* Dai et al. (2019) Dai, F., Masuda, K., Winn, J. N., & Zeng, L. 2019, ApJ, 883, 79
* Dohm & Maruyama (2013) Dohm, J., & Maruyama, S. 2013, in European Planetary Science Congress, EPSC2013–467
* Doré et al. (2018) Doré, O., Werner, M. W., Ashby, M. L. N., et al. 2018, arXiv e-prints, arXiv:1805.05489
* Dorn et al. (2015) Dorn, C., Khan, A., Heng, K., et al. 2015, A&A, 577, A83
* Dotter (2016) Dotter, A. 2016, ApJS, 222, 8
* Dragomir et al. (2019) Dragomir, D., Teske, J., Günther, M. N., et al. 2019, ApJ, 875, L7
* Dressing et al. (2015) Dressing, C. D., Charbonneau, D., Dumusque, X., et al. 2015, ApJ, 800, 135
* Eastman (2017) Eastman, J. 2017, EXOFASTv2: Generalized publication-quality exoplanet modeling code, Astrophysics Source Code Library, ascl:1710.003
* Eastman et al. (2019) Eastman, J. D., Rodriguez, J. E., Agol, E., et al. 2019, arXiv e-prints, arXiv:1907.09480
* Frandsen & Lindberg (1999) Frandsen, S., & Lindberg, B. 1999, in Astrophysics with the NOT, ed. H. Karttunen & V. Piirola, 71
* Gaia Collaboration et al. (2020) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2020, arXiv e-prints, arXiv:2012.01533
* Gaia Collaboration et al. (2018) —. 2018, A&A, 616, A1
* Guenther et al. (2017) Guenther, E. W., Barragán, O., Dai, F., et al. 2017, A&A, 608, A93
* Hartman & Bakos (2016) Hartman, J. D., & Bakos, G. Á. 2016, Astronomy and Computing, 17, 1
* Heller et al. (2014) Heller, R., Williams, D., Kipping, D., et al. 2014, Astrobiology, 14, 798
* Hill et al. (2018) Hill, M. L., Kane, S. R., Seperuelo Duarte, E., et al. 2018, ApJ, 860, 67
* Howell et al. (2014) Howell, S. B., Sobeck, C., Haas, M., et al. 2014, PASP, 126, 398
* Kielkopf et al. (2019) Kielkopf, J. F., Hart, R., Carter, B. D., & Marsden, S. C. 2019, MNRAS, 486, L31
* Mandel & Agol (2002) Mandel, K., & Agol, E. 2002, ApJ, 580, L171
* Mayor et al. (2003) Mayor, M., Pepe, F., Queloz, D., et al. 2003, The Messenger, 114, 20
* Noguchi et al. (2002) Noguchi, K., Aoki, W., Kawanomoto, S., et al. 2002, PASJ, 54, 855
* O’Neill & Lenardic (2007) O’Neill, C., & Lenardic, A. 2007, Geophys. Res. Lett., 34, L19204
* Paxton et al. (2011) Paxton, B., Bildsten, L., Dotter, A., et al. 2011, ApJS, 192, 3
* Paxton et al. (2013) Paxton, B., Cantiello, M., Arras, P., et al. 2013, ApJS, 208, 4
* Paxton et al. (2015) Paxton, B., Marchant, P., Schwab, J., et al. 2015, ApJS, 220, 15
* Ricker et al. (2015) Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003
* Rodríguez Martínez et al. (2020) Rodríguez Martínez, R., Gaudi, B. S., Rodriguez, J. E., et al. 2020, AJ, 160, 111
* Sandford & Kipping (2017) Sandford, E., & Kipping, D. 2017, AJ, 154, 228
* Santerne et al. (2018) Santerne, A., Brugger, B., Armstrong, D. J., et al. 2018, Nature Astronomy, 2, 393
* Schulze et al. (2020) Schulze, J. G., Wang, J., Johnson, J. A., Unterborn, C. T., & Panero, W. R. 2020, arXiv e-prints, arXiv:2011.08893
* Seager & Mallén-Ornelas (2003) Seager, S., & Mallén-Ornelas, G. 2003, ApJ, 585, 1038
* Sleep & Zahnle (2001) Sleep, N. H., & Zahnle, K. 2001, J. Geophys. Res., 106, 1373
* Southworth et al. (2007) Southworth, J., Wheatley, P. J., & Sams, G. 2007, MNRAS, 379, L11
* Stassun et al. (2017) Stassun, K. G., Collins, K. A., & Gaudi, B. S. 2017, AJ, 153, 136
* Stevens et al. (2018) Stevens, D. J., Gaudi, B. S., & Stassun, K. G. 2018, ApJ, 862, 53
* Stevens et al. (2017) Stevens, D. J., Stassun, K. G., & Gaudi, B. S. 2017, AJ, 154, 259
* Telting et al. (2014) Telting, J. H., Avila, G., Buchhave, L., et al. 2014, Astronomische Nachrichten, 335, 41
* Torres et al. (2010) Torres, G., Andersen, J., & Giménez, A. 2010, A&A Rev., 18, 67
* Unterborn et al. (2016) Unterborn, C. T., Dismukes, E. E., & Panero, W. R. 2016, ApJ, 819, 32
* Valencia & O’Connell (2009) Valencia, D., & O’Connell, R. J. 2009, Earth and Planetary Science Letters, 286, 492
* van Heck & Tackley (2011) van Heck, H. J., & Tackley, P. J. 2011, Earth and Planetary Science Letters, 310, 252
* Vanderburg et al. (2016) Vanderburg, A., Latham, D. W., Buchhave, L. A., et al. 2016, ApJS, 222, 14
* Winn (2010) Winn, J. N. 2010, arXiv e-prints, arXiv:1001.2010
* Yee & Gaudi (2008) Yee, J. C., & Gaudi, B. S. 2008, ApJ, 688, 616
* Yi et al. (2001) Yi, S., Demarque, P., Kim, Y.-C., et al. 2001, ApJS, 136, 417
1 Institute of Astrophysics, Foundation for Research and Technology-Hellas,
GR-71110 Heraklion, Greece
2 Department of Physics, and Institute for Theoretical and Computational
Physics, University of Crete, GR-70013 Heraklion, Greece
3 Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540, USA
4 Spitzer Fellow, Department of Astrophysical Sciences, Princeton University,
Princeton, NJ 08544, USA
5 Hubble Fellow, California Institute of Technology, MC350-17, 1200 East
California Boulevard, Pasadena, CA 91125, USA
6 Institute of Theoretical Astrophysics, University of Oslo, PO Box 1029
Blindern, 0315 Oslo, Norway
# Evidence for Line-of-Sight Frequency Decorrelation
of Polarized Dust Emission in Planck Data
V. Pelgrims<EMAIL_ADDRESS>, S. E. Clark 3, B. S. Hensley 4, G. V.
Panopoulou 5, V. Pavlidou 1,2, K. Tassis 1,2, H. K. Eriksen 6, I. K. Wehus 6
(Received December 23, 2020; accepted January 19, 2021)
If a single line of sight (LOS) intercepts multiple dust clouds having
different spectral energy distributions (SEDs) and magnetic field
orientations, then the frequency scaling of each of the Stokes $Q$ and $U$
parameters of the thermal dust emission may be different, a phenomenon we
refer to as LOS frequency decorrelation. We present first evidence for LOS
frequency decorrelation in Planck data by using independent measurements of
neutral hydrogen (Hi) emission to probe the 3D structure of the magnetized
ISM. We use Hi-based measurements of the number of clouds per LOS and the
magnetic field orientation in each cloud to select two sets of sightlines: (i)
a target sample of pixels that are likely to exhibit LOS frequency
decorrelation and (ii) a control sample of pixels that lack complex LOS
structure. We test the null hypothesis that LOS frequency decorrelation is not
detectable in Planck 353 and 217 GHz polarization data at high Galactic
latitudes. We find that the data reject the null hypothesis at high
significance, showing that the combined effect of polarization angle variation
with frequency and depolarization is detected in the target sample. This
detection is robust against choice of CMB map and map-making pipeline. The
observed change in polarization angle due to LOS frequency decorrelation is
detectable above the Planck noise level. The probability that the detected
effect is due to noise alone ranges from $5\times 10^{-2}$ to $4\times
10^{-7}$, depending on the CMB subtraction algorithm and treatment of residual
systematics; correcting for residual systematics consistently increases the
significance of the effect. Within the target sample, the LOS decorrelation
effect is stronger for sightlines with more misaligned magnetic fields, as
expected. With our sample, we estimate that an intrinsic variation of $\sim
15\%$ in the ratio of 353 to 217 GHz polarized emission between clouds is
sufficient to reproduce the measured effect. Our finding underlines the
importance of ongoing studies to map the three-dimensional structure of the
magnetized and dusty ISM that could ultimately help component separation
methods to account for frequency decorrelation effects in CMB polarization
studies.
###### Key Words.:
ISM: dust, magnetic fields – submillimeter: ISM – (cosmology) cosmic
background radiation – inflation – polarization
## 1 Introduction
Cosmic Microwave Background (CMB) polarization experiments have reached a
sensitivity sufficient to demonstrate that, even in the most diffuse regions
of the sky, cosmological signals of interest lie below the polarized emission
from Galactic foregrounds (BICEP2 Collaboration & Keck Array Collaboration,
2018; Planck Collaboration IV, 2020). In particular, the B-mode signature from
primordial gravitational waves (Kamionkowski & Kovetz, 2016), quantified by
the tensor-to-scalar ratio $r$, is now constrained to be at least $\sim$ten
times fainter than B-mode emission from Galactic dust at 150 GHz, even in the
diffuse BICEP/Keck region (BICEP2 Collaboration & Keck Array Collaboration,
2018). Next-generation experiments like the Simons Observatory (Ade et al.,
2019), CMB-S4 (Abazajian et al., 2016), and LiteBIRD (Suzuki et al., 2018)
seek constraints on $r$ that improve on current upper limits by an order of
magnitude or more and will thus require foreground mitigation to the percent
level or better.
One of the most challenging aspects of modeling dust foregrounds is that the
spectral energy distribution (SED) of dust emission is not uniform across the
sky. Variations in dust temperature and opacity law are now well-attested
across the Galaxy (e.g., Finkbeiner et al. 1999; Planck Collaboration XI 2014;
Meisner & Finkbeiner 2015; Planck Collaboration IV 2020; Irfan et al. 2019),
with evidence for correlations with gas velocity (Planck Collaboration XXIV
2011; Planck Collaboration XI 2014), strength of the ambient radiation field
(Planck Collaboration XXIX, 2016; Fanciullo et al., 2015), and location in the
Galactic disk (Schlafly et al., 2016).
Such variations greatly restrict the ability to use maps of dust emission at
one frequency to constrain dust emission at another frequency–i.e., two maps
at different frequencies differ by more than just an overall multiplicative
factor (frequency decorrelation). The three-dimensional (3D) structure of the
interstellar medium adds to the complexity of this problem (Tassis & Pavlidou,
2015). If a single line of sight (LOS) intercepts multiple dust clouds having
different SEDs and magnetic field orientations, then the frequency scaling of
each of the Stokes $Q$ and $U$ parameters may be different even in a single
pixel (LOS frequency decorrelation). Frequency decorrelation has already been
identified as a critical uncertainty in current $r$ constraints and will be
even more acute at higher sensitivities (BICEP2 Collaboration & Keck Array
Collaboration, 2018; CMB-S4 Collaboration, 2020).
Frequency decorrelation is often quantified at the power spectrum level
through the ratio $R_{\ell}^{BB}$ of the $BB$ cross-spectrum of two
frequencies at some multipole $\ell$ to the geometric mean of their auto-
spectra (Planck Collaboration L, 2017). Computing $R_{\ell}^{BB}$ over large
areas of the Planck polarization maps at 353 and 217 GHz, the channels with
the greatest sensitivity to polarized dust emission, has yielded only limits
of $R_{\ell}^{BB}\gtrsim 0.98$ (Sheehy & Slosar, 2018; Planck Collaboration
XI, 2020). While this limit suggests frequency decorrelation may not be a
limiting concern if $r\gtrsim 0.01$, Planck Collaboration XI (2020) caution
that the level of decorrelation may be variable across the sky with some
limited sky regions potentially having much greater values.
LOS frequency decorrelation can have a particularly pernicious effect on
parametric component separation methods working at the map level, especially
if the SEDs of Stokes $Q$ and $U$ are not modeled with independent parameters
(Poh & Dodelson, 2017; Ghosh et al., 2017; Puglisi et al., 2017; Hensley &
Bull, 2018; Martínez-Solaeche et al., 2018; CMB-S4 Collaboration, 2020). New
techniques employing moment decomposition (Chluba et al., 2017) have shown
promise for mitigating LOS averaging of dust SEDs in polarization at the
expense of additional parameters (Mangilli et al., 2019; Remazeilles et al.,
2020). Distortions of the SED from effects like LOS frequency decorrelation
are also important for power spectrum-based modeling of foregrounds. In
particular, Mangilli et al. (2019) have shown that ignoring effects like LOS
frequency decorrelation can bias $r$ determinations at consequential levels
for next-generation experiments even if a frequency decorrelation parameter is
used when fitting an ensemble of power spectra.
In this work, we focus on LOS frequency decorrelation, adopting a different
approach, based on the fact that regions of the sky where the effect is
expected to be important can be astrophysically identified using ancillary ISM
data. Specifically, we use Hi emission data to identify sightlines that are
potentially most susceptible to this effect. We combine information on the
discrete number of Hi clouds on each sightline (Panopoulou & Lenz, 2020) with
an estimate of the magnetic field orientation in each cloud inferred from the
morphology of linear Hi structures (Clark & Hensley, 2019). This entirely Hi-
based sample selection is agnostic to the Planck dust polarization data. We
then compare the difference in polarization angles at 353 and 217 GHz along
sightlines with and without an expected LOS frequency decorrelation effect,
finding that the Hi data indeed identify sightlines with more significant EVPA
rotation. This is the first detection of LOS frequency decorrelation with
Planck data and illustrates the power of ancillary data, such as Hi and
stellar polarization, to identify regions of the sky where the effect is most
pronounced.
This paper is organized as follows. In Sect. 2 we briefly review the
phenomenology of frequency decorrelation of polarization. In Sect. 3 we
describe the data sets that are used in the analysis. Section 4 presents the
sample selection, the statistical tools that are used and our handling of
biases and systematics. Section 5 presents our results. We discuss the
robustness of our findings, and present further supporting observational
evidence in Sect. 6. An estimate of the required SED variation to reproduce
the observed magnitude of LOS frequency decorrelation is presented in Sect. 7.
We discuss our findings in Sect. 8 and conclude in Sect. 9.
This paper demonstrates that the effect of LOS frequency decorrelation exists
at the pixel level and can be measured in the high-frequency polarization data
from Planck. It does not address whether the amplitude of the effect is large
enough at the sky-map level to affect any particular experiment’s search for
primordial B-modes.
## 2 Phenomenology of LOS frequency decorrelation
We seek to detect LOS frequency decorrelation between Planck polarization data
at 353 and 217 GHz, frequencies dominated by Galactic thermal dust emission
and the CMB. Given that the polarized intensity of the CMB and of thermal dust
emission feature different SEDs and that they are uncorrelated, their relative
contribution to the observed polarization signal depends on the frequency. A
change with frequency of the polarization position angle is therefore expected
even if the polarization pattern of emission from dust remains constant across
frequencies. Additionally, statistical and systematic errors induce scatter in
polarization position angles at each frequency. Therefore, a measured
difference in polarization direction (electric vector position angle, EVPA)
between frequencies cannot be immediately attributed to a LOS frequency
decorrelation induced by multiple dust polarized-emission components.
Similarly, when the EVPA difference between frequencies is computed for a
large statistical sample of different lines of sight, EVPA differences form a
distribution with a finite spread. The three sources of EVPA differences
mentioned above (noise, relative contributions of the CMB and the dust, and
SED difference between dust components) each contribute to the width of the
EVPA difference distribution. We wish to detect a signal that can be directly
attributed to frequency decorrelation of the dust polarized emission, in turn
originating in the 3D structure of interstellar clouds and their magnetic
field. Thus we have to construct a sample of lines of sight where dust
decorrelation is expected to be significant, and then test whether the EVPA
differences between frequencies are larger for that sample than for lines of
sight where we expect that dust decorrelation is subdominant to effects from
the CMB and noise.
The LOS frequency decorrelation of dust polarized emission is more likely to
be observed for a given LOS if the following three conditions are met (Tassis
& Pavlidou, 2015): (i) at least two clouds are present along the LOS and both
have a measurable emission contribution; (ii) the mean plane-of-sky magnetic
field orientations of the clouds differ by an angle $\gtrsim 60^{\circ}$;
(iii) the SEDs of the clouds are different. The first two conditions imply an
emission with polarized intensity weaker than the sum of polarized intensities
from individual clouds (LOS depolarization), and a modified polarization angle
as compared to the emission from the dominant cloud. The third condition
causes the polarization angle to be frequency dependent and is met if the dust
clouds have different temperature and/or different polarization spectral
index, e.g. if the dust grain properties differ between clouds.
In this work we rely on the fact that Hi column density correlates well with
dust in the diffuse ISM (e.g. Boulanger et al. 1996; Planck Collaboration XI
2014; Lenz et al. 2017) and use recent Hi datasets to infer whether or not the
aforementioned conditions are met.
## 3 Data sets
In order to identify lines of sight where the LOS frequency decorrelation
effect is most likely to be significant, we use two types of information that
can be extracted from Hi observations. The first is the number of clouds along
the LOS, obtained via a decomposition of Hi spectra by Panopoulou & Lenz
(2020). We use publicly available results
(https://doi.org/10.7910/DVN/8DA5LH) from this analysis to find sky pixels for
which multiple clouds contribute to
the dust emission signal in intensity. The second is the plane-of-sky magnetic
field orientation as a function of velocity, estimated via the morphology of
Hi emission by Clark & Hensley (2019). We use publicly
available results (https://doi.org/10.7910/DVN/P41KDE) from this analysis to
further constrain our pixel selection to lines of sight that contain clouds
with significantly misaligned magnetic fields, i.e. the magnetic fields of the
clouds form an angle with an absolute value between 60∘ and 90∘. These Hi
datasets allow us to define samples of sky pixels with which to study the sub-
millimeter polarized emission as measured by Planck. We concentrate on the
high-frequency Planck data, at 217 and 353 GHz, where thermal dust emission is
known to dominate the measured polarization signal. In this section we
describe the datasets that we use and the post-processing that we apply.
[Figure 1 colorbars: $\mathcal{N}_{c}$ ranging from $1$ to $4.62$ (top); $\Delta(\theta_{IVC},\theta_{LVC})$ ranging from $0^{\circ}$ to $90^{\circ}$ (second row); sky positions (bottom).]
Figure 1: Orthographic projections in Galactic coordinates. Longitude zero is
marked by the vertical thick lines. The Galactic poles are at the centers of
the disks. Galactic longitude increases counter-clockwise in the northern
hemisphere (left) and clockwise in the southern one (right). We show the map
of the effective number of clouds $\mathcal{N}_{c}$ (top) and the map of
$\Delta(\theta_{IVC},\theta_{LVC})$ used in ’Implementation 1’ (second row),
followed by the map of sky positions of the pixel samples from ’Implementation
1’ and ’Implementation 2’ (bottom). White pixels are in both the target1 and
target2 samples. Green pixels are target2 pixels not in target1, and purple
pixels are target1 pixels not in target2. Red pixels belong to the control
sample. Black pixels belong to the all sample but to neither control, target1,
nor target2.
### 3.1 Hi velocity components along the line of sight
If multiple components of dust lie along the line of sight, and have different
bulk kinematic properties, then the emission spectrum of the Hi line will show
multiple peaks at different velocities with respect to the observer. This
property of Hi emission was used by Panopoulou & Lenz (2020) to measure the
number of clouds along the LOS. The authors developed a method to identify the
number of peaks in Hi spectra and applied it to data from the Hi4PI survey
(HI4PI Collaboration, 2016) over the high Galactic latitude sky. The analysed
area covers the parts of the sky where Hi column density is well correlated
with far infrared dust emission, as defined by Lenz et al. (2017).
Panopoulou & Lenz (2020) decomposed each Hi spectrum into a set of Gaussian
components. The Gaussian parameters were grouped within HEALPix pixels of
$N_{\rm{side}}=128$ (termed ‘superpixels’), in order to construct a
probability distribution function (PDF) of the components’ centroid velocity.
The PDFs were smoothed at a velocity resolution of 5 km s$^{-1}$. Within each
superpixel, clouds were identified as kinematically distinct peaks in the PDF
of Gaussian centroid velocity. The Gaussian components belonging to each peak
were used to construct a velocity spectrum for each cloud. The published data
products include: (a) the column density of each cloud, $N_{{\rm HI}}$ and (b)
the first and second moments of each cloud’s spectrum ($v_{0}$, $\sigma_{0}$,
respectively).
In sightlines with multiple components, not all components will contribute
equally to the column density (and similarly to the total dust intensity).
Panopoulou & Lenz (2020) introduced a measure of the number of clouds per LOS
that takes into account the column densities of clouds, defined as:
$\mathcal{N}_{c}=\sum_{i}(N^{i}_{{\rm HI}})/N_{{\rm HI}}^{\rm max}$ (1)
where $N^{i}_{{\rm HI}}$ is the column density of the $i$-th cloud in the
superpixel and $N_{{\rm HI}}^{\rm{max}}$ is the column density of the cloud
with the highest $N_{{\rm HI}}$ in the superpixel. If the column density of a
single cloud dominates the total column density of a superpixel, then
$\mathcal{N}_{c}\sim 1$. If there are two clouds with equal column density,
then $\mathcal{N}_{c}=2$.
In this paper we use $\mathcal{N}_{c}$, a map of which is shown in Fig. 1
(top), to distinguish between sightlines whose dust emission is dominated by a
single component and those where multiple components might be contributing to
the signal. Panopoulou & Lenz (2020) have shown that $\mathcal{N}_{c}$ is
anticorrelated with the degree of linear polarization at 353 GHz, suggesting
that lines of sight where multiple components contribute to the polarization
signal exhibit larger LOS depolarization than the rest of the sky. However, a
simple selection on $\mathcal{N}_{c}$ alone does not imply a high ratio of
column densities between clouds; a value of $\mathcal{N}_{c}=1.5$ can be
achieved by two clouds or by an arbitrary number of clouds, the former case
being in general more likely to induce measurable LOS frequency decorrelation.
Thus in one variation of our pixel selection we consider a different metric
(see Sect. 4) involving the ratio of dominant cloud column densities,
$\mathcal{F}_{21}$, defined as follows: for pixels with at least two clouds
($\mathcal{N}_{c}>1$),
$\mathcal{F}_{21}=N_{{\rm HI}}^{\rm{max2}}/N_{{\rm HI}}^{\rm{max}},$ (2)
where $N_{{\rm HI}}^{\rm{max}}$ is the column density of the cloud with the
highest $N_{{\rm HI}}$, and $N_{{\rm HI}}^{\rm{max2}}$ is that of the cloud
with second-highest $N_{{\rm HI}}$. We use the cloud column densities provided
by Panopoulou & Lenz (2020).
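The two column-density metrics translate directly into a few lines of NumPy. The sketch below is an illustrative reimplementation (not the published Panopoulou & Lenz code), assuming the per-cloud $N_{\rm HI}$ values of a superpixel arrive as a 1-D array:

```python
import numpy as np

def n_c(column_densities):
    """Effective number of clouds (Eq. 1): the sum of per-cloud N_HI
    divided by the largest per-cloud N_HI in the superpixel."""
    n = np.asarray(column_densities, dtype=float)
    return n.sum() / n.max()

def f_21(column_densities):
    """Ratio of the second-highest to the highest cloud column density
    (Eq. 2); only defined for superpixels with at least two clouds."""
    n = np.sort(np.asarray(column_densities, dtype=float))
    return n[-2] / n[-1]
```

A single cloud gives $\mathcal{N}_{c}=1$, two equal clouds give $\mathcal{N}_{c}=2$ and $\mathcal{F}_{21}=1$, matching the limiting cases discussed above.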
### 3.2 Orientation of Hi structures
The morphology of Hi emission encodes properties of the ambient magnetic field
in two measurable ways. First, high-resolution Hi channel maps reveal thin,
linear structures that are well aligned with the magnetic field as traced by
starlight polarization (Clark et al., 2014) and polarized dust emission (Clark
et al., 2015; Martin et al., 2015). These magnetically aligned Hi structures
are associated with anisotropic cold Hi gas (McClure-Griffiths et al. 2006;
Clark et al. 2019; Peek & Clark 2019; Kalberla & Haud 2020; Murray et al.
2020). Second, the degree of alignment of linear Hi structures as a function
of LOS velocity traces LOS magnetic field tangling, and therefore the observed
dust polarization fraction (Clark, 2018).
These insights were synthesized into a formalism by Clark & Hensley (2019)
that defines 3D maps of the Stokes parameters of linear polarization. These
maps are based purely on the morphology of Hi emission. The distribution of
linear Hi emission as a function of orientation on the sky is quantified by
the Rolling Hough Transform (RHT; Clark et al., 2014). The RHT is applied to
discrete Hi velocity channels in an Hi data cube to calculate maps of
$R(v,\theta)$, the linear intensity as a function of line-of-sight velocity
$v$ and orientation $\theta$. $R(v,\theta)$ is normalized such that it can be
treated analogously to a probability distribution function for the orientation
of Hi in each pixel. The Hi-based Stokes parameters are then defined as:
$Q_{\mathrm{HI}}(v)=I_{\mathrm{HI}}(v)\sum_{\theta}R(v,\theta)\cos(2\theta)d\theta$ (3)
$U_{\mathrm{HI}}(v)=I_{\mathrm{HI}}(v)\sum_{\theta}R(v,\theta)\sin(2\theta)d\theta,$ (4)
where $I_{\mathrm{HI}}(v)$ is the Hi intensity as a function of LOS velocity.
Integrating $Q_{\mathrm{HI}}(v)$ and $U_{\mathrm{HI}}(v)$ over the velocity
dimension yields Hi-based Stokes $Q_{\mathrm{HI}}$ and $U_{\mathrm{HI}}$ maps
that reproduce the Planck 353 GHz $Q$ and $U$ maps with remarkable fidelity.
Clark & Hensley (2019) also demonstrate consistency with a tomographic
determination of the magnetic field orientation along one line of sight based
on measurements of optical starlight polarization and Gaia stellar distances
(Panopoulou et al., 2019).
We therefore use the Clark & Hensley (2019) maps as a probe of the local
magnetic field orientation as a function of LOS velocity. We use their Hi4PI-
based maps, which use a non-uniform LOS velocity bin size and cover the full
sky at the Hi4PI angular resolution of $16.2^{\prime}$ (see Clark & Hensley
(2019) for map details). To match the resolution and pixelization of the
$\mathcal{N}_{c}$ map, we apply a Gaussian filter to degrade the Clark & Hensley maps to
a uniform 30′ resolution, and use the healpy function ud_grade to bin the
smoothed maps to $N_{\rm{side}}=128$. We can use these 3D maps to measure the
Hi-based polarization angle in a specified velocity range by summing
$Q_{\mathrm{HI}}(v)$ and $U_{\mathrm{HI}}(v)$ over the desired velocity bins
and computing
$\theta_{\mathrm{HI}}=1/2\,\arctan(-U_{\mathrm{HI}},\,Q_{\mathrm{HI}})$, where
$\arctan$ is the 4-quadrant inverse tangent function here and throughout this
paper. In this paper we use $\theta$ to denote the position angle of Hi
structures and $\psi$ for polarization position angles.
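A compact sketch of how Eqs. 3-4 combine with the angle definition, assuming the RHT output $R(v,\theta)$ is available as a 2-D array of shape (velocity channels, orientation bins). This is an illustration of the formalism, not the Clark & Hensley pipeline:

```python
import numpy as np

def hi_stokes(I_v, R_vtheta, thetas, dtheta):
    """Hi-based Stokes parameters per velocity channel (Eqs. 3-4).
    I_v: (Nv,) Hi intensity; R_vtheta: (Nv, Ntheta) RHT linear
    intensity; thetas: (Ntheta,) orientation bin centers [rad]."""
    Q_v = I_v * np.sum(R_vtheta * np.cos(2 * thetas), axis=1) * dtheta
    U_v = I_v * np.sum(R_vtheta * np.sin(2 * thetas), axis=1) * dtheta
    return Q_v, U_v

def theta_hi(Q_v, U_v, vmask=None):
    """Hi orientation over a velocity range: sum the Stokes parameters
    over the selected channels, then theta = 1/2 arctan2(-U, Q)."""
    if vmask is None:
        vmask = np.ones_like(Q_v, dtype=bool)
    Q, U = Q_v[vmask].sum(), U_v[vmask].sum()
    return 0.5 * np.arctan2(-U, Q)
```

The optional `vmask` argument restricts the sum to a chosen velocity range, as done per cloud in Sect. 4.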
### 3.3 Polarization data from the Planck satellite
In this work we employ two full-sky sets of sub-millimeter polarization data,
both obtained by the Planck satellite. First we utilize the third data release
of the Planck collaboration (PR3). We use the 217 GHz single-frequency maps
and the 353 GHz single-frequency maps from the polarization-sensitive
bolometers only, as recommended in Planck Collaboration III (2020) and Planck
Collaboration XII (2020), which we downloaded from the Planck Legacy
Archive (http://pla.esac.esa.int; PLA).
Second, we use a more recent set of high-frequency polarization maps obtained
from Planck data but processed through the upgraded map-making algorithm
SRoll2 that corrects data for known residual systematics in Legacy maps down
to the detector noise level (Delouis et al. 2019). We use the full-dataset
Polarization Sensitive Bolometers SRoll2 polarization maps at frequency 353
and 217 GHz available at their
website (http://sroll20.ias.u-psud.fr/sroll20_data.html). We note that most of
the analysis presented in this paper was completed before the Npipe maps
became available (Planck Collaboration Int. LVII 2020). Analyzing this new set
of maps would require the implementation of a different analysis pipeline than
the one developed and used in this work because per-pixel block-diagonal
covariance matrices are not available. However, we note that preliminary
studies using Npipe maps yield results consistent with those obtained in this
paper, in the direction of the detection of LOS frequency decorrelation being
more significant than that obtained using PR3 maps.
We apply the same post-processing to both sets of polarization maps. We smooth
the $I$, $Q$, and $U$ maps to a resolution of 30′ in order to increase the
signal-to-noise ratio. We smooth the per-pixel block-diagonal polarization
covariance matrices following the analytical prescription in Appendix A of
Planck Collaboration XIX (2015). This formalism neglects correlations between
neighboring pixels, but takes into account the off-diagonal covariance between
the $Q$ and $U$ Stokes parameters. These terms can be substantial at high
Galactic latitudes.
When necessary, we propagate the observational uncertainties in our analysis
by making use of Monte Carlo (MC) realizations of correlated noise using a
Cholesky decomposition of the smoothed per-pixel block-diagonal covariance
matrix (see e.g. Appendix A of Planck Collaboration XIX (2015) or Appendix B
of Skalidis & Pelgrims (2019)). To assess the observational uncertainty on a
measurement, we repeat our analysis on those simulated Stokes parameters and
study the resulting per-pixel distribution. We validated this approach by
comparing to analytical estimates the uncertainties obtained for the polarized
intensity and the polarization position angle.
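A minimal version of such a Monte Carlo draw for a single pixel, assuming the smoothed $2\times 2$ $QU$ covariance block is given. This is illustrative only; the function name and interface are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_realizations(q, u, cov_qq, cov_qu, cov_uu, n_mc=1000):
    """Draw correlated (Q, U) noise realizations for one pixel from
    the 2x2 covariance block via Cholesky decomposition."""
    cov = np.array([[cov_qq, cov_qu], [cov_qu, cov_uu]])
    L = np.linalg.cholesky(cov)
    # Standard normal draws mapped to the target covariance: L z
    draws = rng.standard_normal((n_mc, 2)) @ L.T
    return q + draws[:, 0], u + draws[:, 1]
```

In a map-level analysis one would vectorize this over pixels; the per-pixel version shown here suffices to convey the Cholesky construction.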
Figure 2: Cartoon illustration of the pixel selection described in Sect. 4.1.
Left panel: Hi intensity spectrum of a representative pixel from our control
group. The control sample targets sightlines defined by a single Hi cloud,
parameterized by $\mathcal{N}_{c}=1$. Right panel: Hi intensity spectrum of a
representative pixel that is included in both target1 and target2. Pixels in
the target samples are selected to have multiple Hi clouds along the line of
sight, as parameterized by either $\mathcal{N}_{c}\geq 1.5$ (target1) or
$\mathcal{F}_{21}\geq 1/3$ (target2). Hi orientations are determined for two
clouds along each target line of sight by summing the Clark & Hensley (2019)
Hi-based Stokes parameters over the indicated velocity ranges, and we require
that the angles in these clouds differ by at least 60∘. Cloud orientations in
the target1 sample are determined from predefined IVC and LVC velocity ranges.
Cloud orientations in the target2 sample are determined from the $1\sigma$
velocity range around the two most prominent Hi clouds identified in
Panopoulou & Lenz (2020).
### 3.4 CMB polarization maps
We make use of the CMB polarization maps obtained from the four component-
separation algorithms used by the Planck Collaboration, and applied to the
third release of the Planck data: commander, nilc, sevem, and smica (Planck
Collaboration XII 2014, Planck Collaboration IX 2016, Planck Collaboration IV
2020, and references therein). We downloaded the CMB maps from the PLA and
smoothed them so that they all have an effective resolution corresponding to a
Gaussian beam with FWHM of 30′, just as we do with the single-frequency maps
used in this work.
## 4 Analysis Framework
### 4.1 Sample selection
In order to determine statistically if LOS frequency decorrelation is present
and measurable in the Planck high-frequency polarization data, we construct
astrophysically-selected samples of pixels on the sky based only on Hi data.
We distinguish between our samples using the labels all (all the pixels in the
high Galactic latitude LOS cloud decomposition of Panopoulou & Lenz, 2020);
control (pixels that should not exhibit LOS frequency decorrelation); and
target (pixels that are likely to exhibit large LOS frequency decorrelation).
According to Tassis & Pavlidou (2015), the degree of LOS decorrelation between
two frequencies depends on (a) how the ratio of polarized intensities
contributed by distinct components along a LOS changes between frequencies;
and (b) the degree of magnetic field misalignment between these contributing
components. The first factor above depends non-trivially on both the
temperature difference between components, and on the amount of emitting dust
(column density) in each. Our physical understanding of these dependencies
motivates our definition of control and target samples from Hi data:
- control: If the dust emission is strongly dominated by a single component
(cloud), no LOS frequency decorrelation is expected, regardless of the other
criteria above. For this reason, we construct our control sample using Hi data
to select pixels where a single component dominates the Hi emission (a proxy
for the emitting dust).
- target: For LOS frequency decorrelation to be significant, there must be (a)
more than one contributing component, and (b) a significant misalignment
($\gtrsim 60^{\circ}$) between the orientations of the plane-of-sky magnetic
fields that permeate the components. Both criteria are required for a pixel to
be included in the target sample. We do not attempt to use the Hi data to make
predictions about the shape of the dust SED.
The nature of the control sample allows for a simple selection criterion:
requiring that pixels contain a single cloud along the LOS. We therefore
select those pixels that have a column-density-weighted number of clouds (see
Sect. 3) equal to unity ($\mathcal{N}_{c}=1$). For the target sample, however,
there exist different ways in which these selection criteria can be
implemented in practice. For this reason, we have performed the analysis using
two distinct implementations of the sample selection, so as to ensure that our
particular choices do not qualitatively affect our results. Our selection
criteria are described below and are summarized in Table 1 and Fig. 2.
Table 1: Criteria to define the samples in Implementation 1 and Implementation 2.

| | Implementation 1 | Implementation 2
---|---|---
control | $\mathcal{N}_{c}=1$ | $\mathcal{N}_{c}=1$
target | $\mathcal{N}_{c}\geq 1.5$ | $\mathcal{F}_{21}\geq 1/3$
| $\Delta(\theta_{IVC},\theta_{LVC})\geq 60^{\circ}$ | $\Delta(\theta_{1},\theta_{2})\geq 60^{\circ}$
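The criteria of Table 1 reduce to simple boolean masks over per-pixel maps. The following sketch assumes the angle-difference maps are supplied in degrees; array and function names are ours, not from the released data products:

```python
import numpy as np

def sample_masks(Nc, F21, dtheta_impl1, dtheta_impl2):
    """Per-pixel boolean masks implementing the Table 1 criteria.
    Nc: effective number of clouds; F21: cloud column-density ratio;
    dtheta_impl1, dtheta_impl2: unsigned Hi orientation differences
    [deg] for Implementations 1 and 2, respectively."""
    control = Nc == 1
    target1 = (Nc >= 1.5) & (dtheta_impl1 >= 60.0)
    target2 = (Nc > 1) & (F21 >= 1.0 / 3.0) & (dtheta_impl2 >= 60.0)
    return control, target1, target2
```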
Implementation 1: The first criterion for constructing the target sample in
this implementation (hereafter target1) selects pixels for which
$\mathcal{N}_{c}\geq 1.5$.
This ensures that there is a significant contribution to the dust emission
signal in intensity that is not from the dominant component. The same will
hold for polarized intensity, with the exception of special cases where the
magnetic field in one of the clouds lies mainly along the line of sight (which
would result in very little, if any, polarized emission from the specific
cloud). While we cannot control for the unknown 3D geometry of the magnetic
field in each cloud, this unknown simply adds noise to the LOS frequency
decorrelation signal we are after – our selection of a statistically large
sample of pixels likely contains all possible relative orientations between
the 3D magnetic field of clouds along the same LOS.
In addition to the requirement that $\mathcal{N}_{c}\geq 1.5$, target1 pixels
must also satisfy a misalignment condition. To impose such a condition, we
first post-process the Clark & Hensley (2019) Hi-based Stokes parameter data
(provided in pre-defined discrete velocity bins) to obtain orientation
information on a per-cloud basis. We make use of the commonly used distinction
of high-latitude Hi clouds with respect to their velocity: Low Velocity Clouds
(LVC) are found in the range $-12\,{\rm{km\,s^{-1}}}\leq v_{0}\leq
10\,{\rm{km\,s^{-1}}}$ while Intermediate Velocity Clouds (IVC) are found in
the range $-70\,{\rm{km\,s^{-1}}}\leq v_{0}\leq-12\,{\rm{km\,s^{-1}}}$ or
$10\,{\rm{km\,s^{-1}}}\leq v_{0}\leq 70\,{\rm{km\,s^{-1}}}$ (where $v_{0}$ is
the cloud centroid velocity and the velocity ranges are defined as in
Panopoulou & Lenz 2020). These two classes of clouds are found to show
systematic differences in their dust properties, with IVCs, for example,
having higher dust temperatures than LVCs on average (e.g., Planck
Collaboration XXIV 2011; Planck Collaboration XI 2014; Panopoulou & Lenz
2020). Pixels in which the Hi orientation changes significantly between the
LVC and IVC range likely satisfy all necessary conditions for the LOS
frequency decorrelation effect: varying dust SED and magnetic field
orientation along the LOS (in addition to the requirement of
$\mathcal{N}_{c}\geq 1.5$).
For each pixel we thus compute the orientation of two ‘effective’ clouds: an
LVC and an IVC. For this we sum the Hi Stokes parameters within the LVC and
IVC velocity ranges separately, and then calculate a single Hi orientation
within the LVC range, $\theta_{LVC}$, and within the IVC range,
$\theta_{IVC}$. For a pixel to be included in the target1 sample, the
misalignment criterion requires that the angles $\theta_{LVC}$ and
$\theta_{IVC}$ differ by at least 60∘. The (unsigned) angle difference between
two angles expressed in radians is computed as
$\Delta(\xi_{1},\,\xi_{2})=\pi/2-|\pi/2-|\xi_{1}-\xi_{2}||$ (5)
where $\xi_{1,2}$ are position angles (either $\theta$’s or $\psi$’s) defined
in the range $\left[0,\pi\right)$ and where the consecutive absolute values
take into account the $\pi$ degeneracy of orientations.
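Eq. 5 translates directly into code; for instance, the following one-liner (an illustrative helper of ours):

```python
import numpy as np

def delta(xi1, xi2):
    """Unsigned difference between two position angles in [0, pi),
    accounting for the pi degeneracy of orientations (Eq. 5)."""
    return np.pi / 2 - np.abs(np.pi / 2 - np.abs(xi1 - xi2))
```

Note that angles just below $\pi$ and just above $0$ are correctly treated as nearly aligned, and the maximum returned value is $\pi/2$.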
Implementation 2: We modify the criteria for constructing the target sample in
order to test for the robustness of our results against sample selection.
First, we identify pixels with at least two significant Hi components by
requiring (a) that $\mathcal{N}_{c}>1$ and (b) the ratio of column densities
of the two main Hi components, $\mathcal{F}_{21}$, is high (see Eq. 2).
Specifically, candidate pixels for the target sample in this implementation
(hereafter target2) are selected so that the column density of the second most
prominent component is at least one third of that of the dominant component, i.e.
$\mathcal{F}_{21}\geq 1/3$. By using $\mathcal{F}_{21}$ instead of a higher
threshold in the value of $\mathcal{N}_{c}$ (as was done in Implementation 1)
we ensure that the dust emission signal (in intensity at least) arises mainly
from 2 clouds of comparable $N_{\rm{HI}}$, rather than a larger number of
low-$N_{\rm{HI}}$ clouds (as discussed in Sect. 3).
We also modify the construction of the per-cloud Hi orientation, compared to
Implementation 1. For each cloud, we consider the velocity range within
$v_{0}\pm\sigma_{0}$, where $v_{0}$ is the cloud centroid velocity and
$\sigma_{0}$ is the second moment of its spectrum. We sum the Hi Stokes
parameters of the Clark & Hensley maps within this velocity range creating
maps of per-cloud Stokes parameters, $Q_{\rm{HI}}^{\mathrm{cloud}}$ and
$U_{\rm{HI}}^{\mathrm{cloud}}$. For each pixel we use these per-cloud Stokes
parameters to calculate the Hi orientation of the highest-$N_{\rm{HI}}$ cloud,
$\theta_{1}$, and that of the second highest-$N_{\rm{HI}}$ cloud,
$\theta_{2}$. The target2 sample is constructed by requiring pixels to have
$\Delta(\theta_{1},\,\theta_{2})\geq 60^{\circ}$, in addition to the
aforementioned column-density-based criteria. This cloud-based definition of
the misalignment condition avoids relying on the predefined velocity ranges
for the LVC and IVC components.
#### Statistical properties of the samples:
The samples contain $N_{\rm{all}}=83374$, $N_{\rm{control}}=7328$,
$N_{\rm{target1}}=5059$, and $N_{\rm{target2}}=5755$ high-latitude pixels on a
HEALPix map (Górski et al. 2005) of $N_{\rm{side}}=128$. The pixels in target1
(target2) represent about 6.1% (6.9%) of the high-latitude sky defined by the
$\mathcal{N}_{c}$ data and about 2.6% (2.9%) of the full sky. target1 and
target2 have 2383 pixels in common. This overlap is to be expected, since
despite the different specific criteria, both Implementations 1 and 2 are
motivated by the same astrophysical requirements.
In Fig. 1 we show polar projections of the $\mathcal{N}_{c}$ map (top), the
difference of position angle between the IVC and LVC effective clouds (second
row), followed by the sky positions of the pixels of our control, target1, and
target2 samples (bottom). We note that there is a significant difference
between the locations of target and control pixels: the former are
preferentially found in the northern hemisphere (in both implementations),
while the latter are mostly found in the southern hemisphere. This uneven
distribution is inherited from the spatial distribution of $\mathcal{N}_{c}$.
As noted in Panopoulou & Lenz (2020), $\mathcal{N}_{c}$ is spatially
correlated with the column density of IVCs. The presence of these clouds
primarily in the northern hemisphere has already been noted in earlier
studies of Galactic Hi surveys (e.g. Danly 1989; Kuntz & Danly 1996), and is
tied to their astrophysical origin (e.g. Shapiro & Field 1976; Bregman 1980;
Wesselius & Fejes 1973; Heiles 1984; Verschuur 1993).
### 4.2 Statistical Methodology
We select pixels from the Planck 353 and 217 GHz polarization maps for each of
our three samples, and compute the signed-difference between the EVPAs
according to
$\Delta_{s}(\psi_{353},\psi_{217})=\frac{1}{2}\,\arctan\left(\sin\left[2\,(\psi_{353}-\psi_{217})\right],\,\cos\left[2\,(\psi_{353}-\psi_{217})\right]\right)$ (6)
where the EVPA at both frequencies is determined from the Stokes $Q_{\nu}$ and
$U_{\nu}$ according to $\psi_{\nu}=1/2\,\arctan(-U_{\nu},\,Q_{\nu})$ and has a
value in the range $[0^{\circ},\,180^{\circ})$.
$\Delta_{s}(\psi_{353},\psi_{217})$ is defined in the range
$[-90^{\circ},\,90^{\circ}]$. The subscript $s$ distinguishes this signed EVPA
difference from the unsigned position angle difference of Eq. 5; the two are
related through $\Delta(\xi_{1},\xi_{2})=|\Delta_{s}(\xi_{1},\xi_{2})|$.
We choose to use the signed angle difference rather than the unsigned version
because an ensemble of signed angle differences is centered on and symmetric
about zero in the absence of systematic offsets. For an ensemble of $N$
2-circular quantities $\{\xi_{1},\xi_{2},\dots,\xi_{N}\}$, the circular mean and the
circular standard deviation are defined as
$\left\langle\{\xi\}\right\rangle=\frac{1}{2}\,\arctan\left({\sum_{n=1}^{N}\sin(2\xi_{n})},\,{\sum_{n=1}^{N}\cos(2\xi_{n})}\right)$ (7)
and
$S(\{\xi\})=\sqrt{-\log\left[\left(\frac{1}{N}\sum_{n=1}^{N}\sin(2\xi_{n})\right)^{2}+\left(\frac{1}{N}\sum_{n=1}^{N}\cos(2\xi_{n})\right)^{2}\right]}\;.$
(8)
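Eqs. 7 and 8 can be sketched as follows (our naming; angles in degrees for the mean, and Eq. 8 returned as the dimensionless value later used for the test statistic):

```python
import numpy as np

def circular_mean_deg(xi_deg):
    """Circular mean of Eq. 7 for 180-degree-periodic quantities."""
    x = np.radians(np.asarray(xi_deg))
    return 0.5 * np.degrees(np.arctan2(np.sin(2 * x).sum(), np.cos(2 * x).sum()))

def circular_std(xi_deg):
    """Circular standard deviation of Eq. 8 (also the D statistic of Eq. 9
    when applied to an ensemble of signed EVPA differences)."""
    x = np.radians(np.asarray(xi_deg))
    R2 = np.mean(np.sin(2 * x))**2 + np.mean(np.cos(2 * x))**2
    # clip guards against floating-point R2 marginally above 1
    return np.sqrt(-np.log(np.clip(R2, 1e-300, 1.0)))
```

The doubling of the angles inside the trigonometric functions is what makes both statistics insensitive to the $180^{\circ}$ ambiguity of the EVPA.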
For a sample of pixels the distribution of $\Delta_{s}(\psi_{353},\psi_{217})$
is expected to have a circular mean close to zero and a finite circular
standard deviation. The latter encodes a decorrelation of EVPAs between
frequencies due to (i) uncorrelated noise at different frequencies; (ii) the
relative contribution of dust and CMB at the two frequencies; and (iii) LOS
frequency decorrelation due to the polarized intensity contribution from
distinct misaligned dust clouds with SEDs varying between frequencies (the
effect we are seeking to detect).
Because the target samples are selected to have a higher likelihood of large
LOS frequency decorrelation, we predict a larger circular standard deviation
for the target sample than for the control sample. Therefore we adopt the
spread of the distribution of polarization angle differences as our test
statistic:
$\mathcal{D}\equiv S(\{\Delta_{s}(\psi_{353},\psi_{217})\}),$ (9)
where a detection of LOS frequency decorrelation would correspond to a larger
$\mathcal{D}$ for the target sample than for control. Any inference of the
presence of LOS frequency decorrelation has to account for the other sources
of increased scatter in the distribution of
$\Delta_{s}(\psi_{353},\psi_{217})$, i.e. residual systematics, CMB
polarization, sampling uncertainties, and data noise, and must consider the
possibility that these properties differ between target and control.
We address the first two effects (residual systematics and CMB polarization)
by repeating our analysis on maps that are derived from the same raw Planck
data, but processed differently.
Figure 3: Normalized histogram of the difference between PR3 and SRoll2 maps
of the EVPA difference between 353 and 217 GHz
[$\Delta_{s}(\Delta_{s}(\psi_{353},\psi_{217})^{\rm{PR3}},\Delta_{s}(\psi_{353},\psi_{217})^{\rm{SRoll2}})$]
for sky pixels of all, control and target1. For most pixels, the results agree
within $\sim\pm 5^{\circ}$; however, pixels of target1 exhibit larger
differences between map versions, centered at $2.3^{\circ}$. This suggests
that the sky area covered by our target1 sample received more correction from
the systematic cleaning. A similar picture is obtained considering target2
instead of target1.
One plausible concern is that spatially correlated systematics in PR3 maps
affect target and control differently, resulting in a false-positive detection
of LOS frequency decorrelation. To exclude this possibility, we repeat our
analysis using the improved version of Planck HFI polarization maps obtained
from the SRoll2 map-making algorithm that better corrects for known residual
systematics down to the detector noise level (Delouis et al. 2019). The
difference between the $\Delta_{s}(\psi_{353},\psi_{217})$ distributions
computed from the PR3 and SRoll2 maps particularized to our samples is shown
in Fig. 3. We find that this difference distribution is offset from 0 for the
target samples, indicating that the region of sky containing the target pixels
differed systematically between the PR3 and SRoll2 maps; a conclusion also
reached from inspection of Fig. 7 of Delouis et al. (2019).
Figure 4: Histograms of debiased polarized intensity $\hat{P}$ at 217 GHz
(left) and 353 GHz (right) (Plaszczynski et al. 2014) of all (black), control
(blue) and target1 (orange). Dark gray (gray) shaded areas mark 68 (95) percent of
the CMB contribution to the polarized intensity as inferred by smica for a
FWHM beam of 30′ and for the $\mathcal{N}_{c}$ footprint. The CMB contribution
is negligible at 353 GHz but not at 217 GHz, especially for pixels of low
$\hat{P}_{217}$. Histograms correspond to PR3 polarization maps with no CMB
subtraction.
A second plausible concern is that the contribution of the CMB to the
polarized intensity changes between 353 GHz (where it is largely negligible)
and 217 GHz (where it might be considerable, especially for pixels with low
217 GHz polarized intensity). This would result in measurable decorrelation of
the total emission between 353 GHz and 217 GHz in pixels of low 217 GHz
polarized intensity. This is an especially worrisome possibility because
target pixels are selected for their misaligned LOS magnetic field structure,
and are thus expected to have systematically lower dust polarized intensity in
both frequencies. This is indeed the case, as demonstrated by histograms of
the polarized intensities at 353 and 217 GHz in Fig. 4. To exclude the
possibility of detecting CMB-induced frequency decorrelation and incorrectly
attributing it to frequency decorrelation induced by misaligned magnetic
fields in distinct dust components, we perform our analysis on maps from which
the CMB contribution has been subtracted. To control against differences
between component separation algorithms, we repeat the analysis on maps
obtained using four different algorithms: commander, nilc, sevem, and smica
(Planck Collaboration XII 2014; Planck Collaboration IX 2016; Planck
Collaboration IV 2020).
The two remaining effects (sampling uncertainties and data noise) are
statistical, and we deal with them through the formulation and statistical
testing of two null hypotheses, discussed below. Both null hypotheses express
the same physical conclusion: no LOS frequency decorrelation is detectable in
Planck data. Rejection of these null hypotheses, consistent across different
maps and implementations of the target sample, will constitute evidence for
the presence of frequency decorrelation induced by multiple dust components
permeated by misaligned magnetic fields along selected lines of sight.
We quantify the per-pixel multi-frequency data noise by propagating the
observational uncertainties on the individual Stokes parameters at the two
frequencies to the measurement of the EVPA difference
($\Delta_{s}(\psi_{353}^{i},\psi_{217}^{i})$). For pixel $i$ we thus define
the multi-frequency data noise as
${\sigma_{\Delta_{s}}}^{i}\equiv S(\{{\Delta_{s}(\psi_{353}^{i},\psi_{217}^{i})}\})$ (10)
where the ensemble $\{{\Delta_{s}(\psi_{353}^{i},\psi_{217}^{i})}\}$ is
obtained by computing the EVPA difference on 10,000 MC simulations of
noise-correlated Stokes parameters at each frequency. Therefore, in computing
Eq. 10, the sum in Eq. 8 runs over noise realizations, rather than over sample
pixels as in Eq. 9.
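The per-pixel Monte Carlo behind Eq. 10 can be sketched as below. This is a simplified illustration with our own function names; it assumes the $(Q,\,U)$ noise covariance of each pixel is available as a $2\times 2$ matrix, which is a simplification of the full Planck noise description:

```python
import numpy as np

def sigma_delta_s(q353, u353, q217, u217, cov353, cov217,
                  n_mc=10_000, seed=0):
    """Eq. 10: circular std of the signed EVPA difference over n_mc
    draws of noise-correlated Stokes parameters at each frequency."""
    rng = np.random.default_rng(seed)
    qu_a = rng.multivariate_normal([q353, u353], cov353, size=n_mc)
    qu_b = rng.multivariate_normal([q217, u217], cov217, size=n_mc)
    psi_a = 0.5 * np.arctan2(-qu_a[:, 1], qu_a[:, 0])
    psi_b = 0.5 * np.arctan2(-qu_b[:, 1], qu_b[:, 0])
    d2 = 2.0 * (psi_a - psi_b)  # doubled signed difference, radians
    R2 = np.mean(np.sin(d2))**2 + np.mean(np.cos(d2))**2
    # clip guards against floating-point R2 marginally above 1
    return np.sqrt(-np.log(np.clip(R2, 1e-300, 1.0)))
```

Noisier pixels, i.e. those with lower polarized intensity relative to the Stokes noise, yield larger ${\sigma_{\Delta_{s}}}^{i}$, which is the anti-correlation visible in Fig. 5.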
### 4.3 Null hypotheses
Null Hypothesis I:
“$\mathcal{D}_{\texttt{target}}-\mathcal{D}_{\texttt{control}}\leq 0$.” The
selection of the target and control samples is astrophysical and “agnostic” to
other sources of frequency decorrelation. Once the CMB is subtracted, residual
systematics are corrected, and sample size is accounted for, any significant
difference in $\mathcal{D}$ between the two samples should therefore have an
astrophysical explanation. There are two astrophysical reasons why
$\mathcal{D}$ would differ in these samples. First, LOS frequency
decorrelation (the effect we are looking for) induces an EVPA change between
217 and 353 GHz in target. This directly increases $\mathcal{D}$ in target
compared to control. Second, LOS frequency decorrelation results in
depolarization in target pixels. This increases $\mathcal{D}$ indirectly in
target compared to control, since a lower polarization fraction leads to a
lower polarized intensity and thus a higher level of noise (e.g. see Fig. 5).
This difference is also attributable to the effect we are looking for. The
fact that target pixels are more depolarized than control pixels reflects the
anti-correlation between $\mathcal{N}_{c}$ and $p_{353}$ already found in
Panopoulou & Lenz (2020). The dissimilarity of $p_{353}$ in the two samples is
shown in the left panel of Fig. 6. The misalignment criterion used to select
target pixels means that these lines of sight experience more LOS
depolarization. The polarized intensity and multi-frequency polarization angle
uncertainty are anti-correlated (Fig. 5). Thus the preferentially depolarized
target pixels have systematically higher $\sigma_{\Delta_{s}}^{i}$ (Fig. 6).
We have confirmed that there is no systematic difference in the distribution
of total intensity between the target and control samples at either frequency.
Therefore, we conclude that, once we have accounted for sample size, any
deviation of $\mathcal{D}_{\texttt{target}}-\mathcal{D}_{\texttt{control}}$
from zero that persists across all PR3/SRoll2 CMB-subtracted maps should be
astrophysical in origin; if the direction of such a deviation is
$\mathcal{D}_{\texttt{target}}-\mathcal{D}_{\texttt{control}}>0$, this would
constitute evidence for LOS frequency decorrelation. In practice, we will
calculate and report: the best-guess value
$\mathcal{D}_{\texttt{target}}-\mathcal{D}_{\texttt{control}}$; its
uncertainty, calculated from the individual uncertainties in
$\mathcal{D}_{\texttt{target}}$ and $\mathcal{D}_{\texttt{control}}$; the
p-value of the null hypothesis,
$\mathcal{D}_{\texttt{target}}-\mathcal{D}_{\texttt{control}}\leq 0$. If we
found the p-value to be improbably low, this would reject Null Hypothesis I
and constitute evidence for LOS frequency decorrelation (from a combination of
depolarization and direct EVPA change) caused by misaligned magnetic fields in
distinct dust components.
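In code, the Null Hypothesis I test reduces to bootstrapping $\mathcal{D}$ for each sample and combining the (near-Gaussian) bootstrap spreads in quadrature. A sketch under those assumptions (our naming; inputs are the per-sample ensembles of signed EVPA differences in degrees):

```python
import numpy as np
from math import erfc, sqrt

def D_stat(ds_deg):
    """D of Eq. 9: circular std of signed EVPA differences (degrees in)."""
    x = np.radians(2.0 * np.asarray(ds_deg))
    R2 = np.mean(np.sin(x))**2 + np.mean(np.cos(x))**2
    return np.sqrt(-np.log(np.clip(R2, 1e-300, 1.0)))

def null_I(ds_target, ds_control, n_boot=2000, seed=0):
    """Bootstrap D for target and control; return (diff, sigma, p-value)
    for the one-sided null hypothesis D_target - D_control <= 0."""
    rng = np.random.default_rng(seed)
    bt = [D_stat(rng.choice(ds_target, len(ds_target))) for _ in range(n_boot)]
    bc = [D_stat(rng.choice(ds_control, len(ds_control))) for _ in range(n_boot)]
    diff = np.mean(bt) - np.mean(bc)
    sigma = sqrt(np.std(bt)**2 + np.std(bc)**2)  # quadrature combination
    return diff, sigma, 0.5 * erfc(diff / (sigma * sqrt(2)))  # Gaussian tail
```

The Gaussian tail probability is justified here only because the bootstrap distributions of $\mathcal{D}$ are observed to be very nearly Gaussian, as noted in Sect. 5.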
Figure 5: Two-dimensional normalized histograms of the uncertainties in EVPA
differences (${\sigma_{\Delta_{s}}}^{i}$) and debiased polarized intensity at
353 GHz ($\hat{P}_{353}$) for the control sample (top) and the target1 sample
(bottom), using PR3 maps, with no CMB subtraction. Both histograms are
normalized and bounded to the same color scale. The two quantities are
correlated: target1 has noisier EVPA differences than control, because of the
lower polarized intensities in its pixels.
Null Hypothesis II: “The observed target sample is a coincidental high-noise
draw from the same parent sample as control.” The physical consequence of this
hypothesis is that any excess of $\mathcal{D}_{\texttt{target}}$ over
$\mathcal{D}_{\texttt{control}}$ is entirely due to target being smaller and
noisier (i.e., having pixels featuring larger uncertainties in
$\Delta_{s}(\psi_{353}^{i},\psi_{217}^{i})$ than control; see Fig. 5 and Fig.
6); any direct EVPA change between 217 and 353 GHz because of LOS magnetic-
field misalignment is below the noise level of Planck data. To test this
hypothesis, we will generate draws from control that are as small and as noisy
as target, and we will compare them with the observed target, using the
$\mathcal{D}$ test statistic. Clearly, these “target-like” Monte-Carlo-
generated draws will not include any EVPA change between 353 and 217 GHz due
to LOS-frequency decorrelation, since all control pixels feature only a single
cloud along that line of sight. To match the noise properties of target, we
weight the probability of choosing a specific pixel $j$ by its value of
${\sigma_{\Delta_{s}}}^{j}$, according to the distribution of
$\{{\sigma_{\Delta_{s}}}^{i}\}$ in target (see Fig. 7). We then construct
the distribution of $\mathcal{D}$ in these simulated target-like LOS-
decorrelation–free draws, hereafter referred to as target-like MC, and
calculate and report the one-sided p-value of drawing the observed
$\mathcal{D}_{\texttt{target}}$ from that distribution (i.e., the probability
that $\mathcal{D}\geq\mathcal{D}_{\texttt{target}}$ in that distribution). If
the observed $\mathcal{D}_{\texttt{target}}$ is improbably high compared to
typical values in target-like MC (i.e. if the p-value is improbably low), this
will reject Null Hypothesis II and constitute evidence for EVPA change due to
LOS-induced frequency decorrelation in excess of any increased noise in highly
depolarized pixels.
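The noise-matched resampling behind Null Hypothesis II can be sketched as follows. The weighting scheme shown (a histogram density ratio in ${\sigma_{\Delta_{s}}}$) is one simple way to match the target noise distribution and is our illustrative choice, not necessarily the exact weighting used in the analysis:

```python
import numpy as np

def D_stat(ds_deg):
    """Circular std (Eq. 8) of signed EVPA differences in degrees."""
    x = np.radians(2.0 * np.asarray(ds_deg))
    R2 = np.mean(np.sin(x))**2 + np.mean(np.cos(x))**2
    return np.sqrt(-np.log(np.clip(R2, 1e-300, 1.0)))

def null_II(ds_control, sig_control, ds_target, sig_target,
            n_mc=1000, seed=0):
    """Draw |target|-sized subsamples of control, weighting each control
    pixel so the resampled sigma_Delta_s distribution matches target's;
    return the MC D values, the observed D_target, and the one-sided
    p-value P(D >= D_target) under the target-like MC distribution."""
    rng = np.random.default_rng(seed)
    edges = np.histogram_bin_edges(
        np.concatenate([sig_control, sig_target]), bins=20)
    p_t, _ = np.histogram(sig_target, edges, density=True)
    p_c, _ = np.histogram(sig_control, edges, density=True)
    k = np.clip(np.digitize(sig_control, edges) - 1, 0, len(p_t) - 1)
    w = p_t[k] / np.maximum(p_c[k], 1e-12)  # density-ratio weights
    w /= w.sum()
    n = len(ds_target)
    d_mc = np.array([D_stat(rng.choice(ds_control, n, p=w))
                     for _ in range(n_mc)])
    d_obs = D_stat(ds_target)
    return d_mc, d_obs, np.mean(d_mc >= d_obs)
```

Because the MC draws contain only single-cloud sightlines, any excess of the observed $\mathcal{D}_{\texttt{target}}$ over this distribution is decorrelation beyond what noise alone produces.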
Figure 6: Histograms of polarization fraction at 353 GHz (left) and per-pixel
inter-frequency uncertainty (${\sigma_{\Delta_{s}}}^{i}$, Eq. 10) (right), for
all (black), control (blue) and target1 (orange). Histograms correspond to PR3
polarization maps with no CMB subtraction. The target1 sample is distinctly
less polarized (left) and noisier (right) than all and control.
Figure 7: Effectiveness of weighted resampling in producing target-like MC
draws from control with noise properties matched to target: means and standard
deviations per bin of normalized histograms of ${\sigma_{\Delta_{s}}}^{i}$ for
target1-like MC samples (blue), overplotted on the distribution of those
uncertainties for the observed target1 sample (orange). The shaded blue area
marks plus and minus one standard deviation around the mean calculated in each
bin from 10,000 target1-like MC draws obtained through weighted bootstrap
resampling of control; these correspond to sampling uncertainties. The
continuous blue line marks the mean in each bin. Very similar results are
obtained for the target2 sample and for all combinations of polarization maps
and CMB estimates.
Figure 8: Normalized histograms of $\Delta_{s}(\psi_{353},\psi_{217})$ for the
all, control and target1 samples in black, blue and orange, respectively. CMB
has been subtracted from the PR3 maps using smica. The shaded area results
from the propagation of observational uncertainties in $Q_{\nu}$ and $U_{\nu}$
down to the computation of $\Delta_{s}(\psi_{353},\psi_{217})$. The shaded
areas mark plus and minus one standard deviation around the means obtained in
each bin of width $2^{\circ}$ through the MC simulations. Continuous lines
show the means of the three samples.
## 5 Detection of LOS Frequency Decorrelation
Figure 8 shows the normalized distributions of
$\Delta_{s}(\psi_{353},\psi_{217})$, the signed difference of EVPAs between
353 and 217 GHz frequency bands, for the all, control, and target1 samples in
CMB-subtracted PR3 polarization maps. The distributions for control and all
are similar, while the distribution for target1 differs noticeably from the
other two by being much less peaked around zero and much more spread out.
To test Null Hypothesis I, we calculate $\mathcal{D}$, the spread of the
distribution of EVPA differences, for target and control, for both
implementations of target, both sets of Planck polarization maps, and all CMB
estimates from the four component separation algorithms. We also calculate
uncertainties of $\mathcal{D}$ through unweighted bootstrapping for each of
these cases. The left panel of Fig. 9 shows $\mathcal{D}_{\texttt{control}}$
and $\mathcal{D}_{\texttt{target1}}$, with their respective uncertainties, for
the PR3 map, from which the smica CMB estimate has been subtracted. It is
obvious that $\mathcal{D}_{\texttt{target1}}$ is very significantly larger
than $\mathcal{D}_{\texttt{control}}$, so we expect that Null Hypothesis I is
rejected at very high significance, providing clear evidence for the presence
of LOS frequency decorrelation in Planck data. Since the distributions of the
$\mathcal{D}_{\texttt{control}}$ and $\mathcal{D}_{\texttt{target}}$ obtained
from the bootstrapped samples are very nearly Gaussian, the mean of their
difference will be the difference of their means, and the uncertainty of their
difference can be obtained from their individual uncertainties added in
quadrature. These values are given in Table 2, together with the one-sided
p-value of Null Hypothesis I
(“$\mathcal{D}_{\texttt{target}}-\mathcal{D}_{\texttt{control}}\leq 0$”).
Indeed, Null Hypothesis I is very strongly rejected for both sets of maps (PR3
vs SRoll2), all CMB subtraction algorithms, and both target implementations.
Table 2: Testing Null Hypothesis I. Difference of the $\mathcal{D}$ values
computed for control and target, with its uncertainty computed from the
sampling uncertainties in $\mathcal{D}_{\texttt{target}}$ and
$\mathcal{D}_{\texttt{control}}$, in turn obtained through unweighted
bootstrapping. The one-sided p-value gives the probability that
$\mathcal{D}_{\texttt{target}}\leq\mathcal{D}_{\texttt{control}}$. Results are
given both for PR3 and SRoll2 polarization maps; for removal of the CMB
polarization as estimated from the different Legacy component separation
methods, as well as for no CMB removal; and for our two implementations of the
target pixel selection, presented in Sect. 4.1.

| CMB Removal | Impl. 1, PR3 diff. | p-value | Impl. 1, SRoll2 diff. | p-value | Impl. 2, PR3 diff. | p-value | Impl. 2, SRoll2 diff. | p-value |
|---|---|---|---|---|---|---|---|---|
| None | $0.22\pm 0.02$ | $7\times 10^{-34}$ | $0.28\pm 0.02$ | $2\times 10^{-48}$ | $0.19\pm 0.017$ | $4\times 10^{-29}$ | $0.25\pm 0.018$ | $6\times 10^{-44}$ |
| commander | $0.20\pm 0.02$ | $7\times 10^{-34}$ | $0.24\pm 0.02$ | $4\times 10^{-48}$ | $0.17\pm 0.015$ | $1\times 10^{-28}$ | $0.21\pm 0.016$ | $1\times 10^{-40}$ |
| nilc | $0.20\pm 0.02$ | $2\times 10^{-35}$ | $0.26\pm 0.02$ | $2\times 10^{-54}$ | $0.18\pm 0.015$ | $4\times 10^{-32}$ | $0.23\pm 0.016$ | $6\times 10^{-49}$ |
| sevem | $0.19\pm 0.02$ | $3\times 10^{-33}$ | $0.24\pm 0.02$ | $7\times 10^{-46}$ | $0.17\pm 0.015$ | $2\times 10^{-29}$ | $0.21\pm 0.016$ | $1\times 10^{-40}$ |
| smica | $0.20\pm 0.02$ | $4\times 10^{-36}$ | $0.25\pm 0.02$ | $3\times 10^{-49}$ | $0.18\pm 0.015$ | $9\times 10^{-32}$ | $0.22\pm 0.016$ | $5\times 10^{-45}$ |
Table 3: Testing Null Hypothesis II. Summary statistics of the $\mathcal{D}$
values computed for weighted subsamples of control with levels of EVPA
difference uncertainties (${\sigma_{\Delta_{s}}}^{i}$) matching those of
target and of size equal to that of target (referred to as target-like MC),
compared to the observed $\mathcal{D}$ value of target. The probability that
$\mathcal{D}_{\texttt{target}}$ arises as a random realization of a
${\sigma_{\Delta_{s}}}^{i}$-matched control subsample of size equal to that of
target is quantified by a p-value for each case studied. Results are given
both for PR3 and SRoll2 polarization maps, when removing the CMB polarization
as estimated from the different Legacy component separation methods as well as
for no CMB removal, and for both implementations of the target pixel selection
presented in Sect. 4.1.

| CMB Removal | Quantity | Impl. 1, PR3 | Impl. 1, SRoll2 | Impl. 2, PR3 | Impl. 2, SRoll2 |
|---|---|---|---|---|---|
| None | $\mathcal{D}_{\texttt{target-like MC}}$ | $1.161\pm 0.014$ | $1.182\pm 0.015$ | $1.152\pm 0.013$ | $1.174\pm 0.013$ |
| | $\mathcal{D}_{\texttt{target}}$ | $1.224$ | $1.283$ | $1.195$ | $1.253$ |
| | p-value | $5\times 10^{-6}$ | $4\times 10^{-12}$ | $6\times 10^{-4}$ | $4\times 10^{-9}$ |
| commander | $\mathcal{D}_{\texttt{target-like MC}}$ | $1.036\pm 0.013$ | $1.069\pm 0.013$ | $1.019\pm 0.012$ | $1.047\pm 0.012$ |
| | $\mathcal{D}_{\texttt{target}}$ | $1.067$ | $1.113$ | $1.040$ | $1.084$ |
| | p-value | $7\times 10^{-3}$ | $4\times 10^{-4}$ | $4\times 10^{-2}$ | $10^{-3}$ |
| nilc | $\mathcal{D}_{\texttt{target-like MC}}$ | $1.031\pm 0.013$ | $1.063\pm 0.013$ | $1.014\pm 0.012$ | $1.043\pm 0.012$ |
| | $\mathcal{D}_{\texttt{target}}$ | $1.071$ | $1.128$ | $1.047$ | $1.100$ |
| | p-value | $10^{-3}$ | $4\times 10^{-7}$ | $3\times 10^{-3}$ | $2\times 10^{-6}$ |
| sevem | $\mathcal{D}_{\texttt{target-like MC}}$ | $1.024\pm 0.013$ | $1.056\pm 0.013$ | $1.010\pm 0.012$ | $1.037\pm 0.012$ |
| | $\mathcal{D}_{\texttt{target}}$ | $1.052$ | $1.095$ | $1.029$ | $1.069$ |
| | p-value | $2\times 10^{-2}$ | $2\times 10^{-3}$ | $5\times 10^{-2}$ | $4\times 10^{-3}$ |
| smica | $\mathcal{D}_{\texttt{target-like MC}}$ | $1.052\pm 0.013$ | $1.084\pm 0.014$ | $1.034\pm 0.012$ | $1.037\pm 0.012$ |
| | $\mathcal{D}_{\texttt{target}}$ | $1.084$ | $1.129$ | $1.059$ | $1.106$ |
| | p-value | $7\times 10^{-3}$ | $5\times 10^{-4}$ | $2\times 10^{-2}$ | $4\times 10^{-4}$ |
Figure 9: (Left) Rejecting Null Hypothesis I. Summary statistics of
$\mathcal{D}$ values obtained through 10,000 bootstrap resampling of control
(cyan) and target1 (orange) samples. The means and one-standard-deviation
ranges are represented by the thick vertical lines and shaded areas,
respectively. (Right)
Rejecting Null Hypothesis II. The blue histogram shows the distribution of
$\mathcal{D}$ values obtained through 10,000 resampling of control with
weights that guarantee the same level of EVPA difference uncertainties in the
resampled samples as in target1. The shaded blue distribution is a Gaussian
fit to the histogram. The vertical orange arrow indicates the $\mathcal{D}$
value computed for the observed full target1 sample. The examples shown in
both panels make use of the PR3 polarization maps from which we have
subtracted the smica CMB estimate. Results are consistent with all other
implementations as shown in Tables 2 and 3.
Figure 10: Summary statistics of $\mathcal{D}$ distributions for target1-like
simulations obtained from control while subtracting different CMB estimates
from the PR3 polarization maps (left) and SRoll2 polarization maps (right),
compared to the $\mathcal{D}$ value of target1. This illustrates part of the
information given in Table 3.
We have thus established that target has statistically greater polarization
angle differences between frequencies than control, and that this is not an
effect of increased CMB contribution in target pixels, nor an artifact of the
CMB estimate produced by any specific component separation algorithm, nor an
artifact of spatially correlated residual systematics.
We now proceed to test whether this excess decorrelation is also significant
beyond what would be justified by the increased noise level of target compared
to control (i.e., test whether Null Hypothesis II is also rejected). For each
case considered, we generate 10,000 target-like MC draws through noise-
weighted sub-sampling from control, as described in the previous section; we
calculate the $\mathcal{D}$ test-statistic for each; we construct the
distribution of $\mathcal{D}$; and we compute the one-sided p-value that
describes the probability that the $\mathcal{D}$ measured for target could be
measured for a random pixel sample that is as small as target, as noisy as
target, but completely free of LOS decorrelation according to the best current
knowledge of the 3D magnetized ISM.
One example of this process is visually represented in the right panel of Fig.
9 for the case of Implementation I of target and PR3 maps from which the smica
CMB estimate has been subtracted. It is clear that the observed target is
highly decorrelated, even compared to comparably high-noise draws from
control. Summary statistics for the $\mathcal{D}$ distributions and p-values
obtained from all samples in both our implementations are reported in Table 3,
while a visual representation of these results for all combinations of
maps/CMB subtraction algorithms and for Implementation I of target is shown in
Fig. 10. For comparison, both in Table 3 and in Fig. 10, we also provide the
results of our analysis on maps without any subtraction of the CMB
contribution. Indeed, the p-value of Null Hypothesis II is low for both sets
of maps (PR3 vs SRoll2), both implementations of target, and all CMB estimate
subtractions, with p-values ranging from $4\times 10^{-3}$ to $5\times
10^{-2}$ for PR3 maps, and from $4\times 10^{-7}$ to $4\times 10^{-3}$ for
SRoll2 maps.
Null Hypothesis II is systematically rejected at a higher significance for
SRoll2 maps than for PR3 maps, if all other features of the analysis remain
the same. The most straightforward way to interpret this trend is that
residual systematics in PR3 maps act as an additional source of noise; the
SRoll2 corrections for these systematics reduce the noise, and the LOS
frequency decorrelation stands out more. Additionally, the significance of the
effect is always higher when the CMB has not been subtracted, confirming that
indeed the CMB makes a distinct contribution to the difference between 353 and
217 GHz EVPAs, and that difference is more pronounced in the lower-polarized-
intensity pixels of target.
The robustness of the low p-value of Null Hypothesis II across maps, CMB-
subtraction algorithms, and target selections gives us confidence that the
effect is real, and that LOS frequency decorrelation due to multiple dust
components is present in Planck data and detectable above the noise level – as
long as one knows where in the sky to look for it.
## 6 Validation
In this section we discuss additional validation tests, both statistical and
physical, to increase our confidence that we have in fact detected LOS-induced
frequency decorrelation in Planck data.
### 6.1 Sky distribution of target and control pixels
Pixels of target and control sample largely disjoint parts of the sky (see
Sect. 4.1). It is thus conceivable that their difference in observed
$\mathcal{D}$ might stem from different local properties, and most notably
different instrumental noise or systematic properties of the data. In
principle, our test of Hypothesis II, where the observed
$\mathcal{D}_{\texttt{target}}$ is compared to that of noise-matched
subsamples of control, should take into account the difference in noise
properties; and the comparison between PR3 and SRoll2 maps is performed
exactly to evaluate the impact of the residual systematics. Nevertheless, we
performed two additional tests to verify that some additional, hidden, spatial
correlation bias is not generating a false-positive detection of LOS frequency
decorrelation.
First, we repeated our analysis inside two sky patches that contain intermixed
target1 and control pixels, and that are sufficiently small so that
instrumental systematics would not vary considerably within each patch. These
sky patches were defined as regions with an angular radius of $15^{\circ}$,
centered on
$(l,\,b)=(70^{\circ},\,50^{\circ})$ in the North, and
$(l,\,b)=(-110^{\circ},\,-50^{\circ})$ in the South. These regions were
visually identified and are indicated by green and magenta outlines,
respectively, in Fig. 11. These patches as a whole contain 3352 (north) and
3353 (south) pixels. Of those, in the northern (southern) patch, 202 (162) are
target1 pixels, and 214 (461) are control pixels. The noise properties within
each patch are overall consistent between samples (Fig. 12), unlike the full
target and control samples (Fig. 6). We found that for all combinations of
maps, CMB subtraction algorithms, and target sample implementations,
$\mathcal{D}_{\texttt{target}}$ is larger than
$\mathcal{D}_{\texttt{control}}$. The sample sizes are now too small for Null
Hypothesis II to be rejected through the weighted-resampling analysis
discussed in Sect. 4.3; we have, however, verified through sub-sampling of the
full target1 and control samples that the behavior of both the distribution
of $\mathcal{D}_{\texttt{target-like MC}}$ and the observed
$\mathcal{D}_{\texttt{target}}$ in these sky patches is consistent with what
we would expect given the local noise properties and the decrease in sample
size.
Figure 11: Map showing the location of sky pixels belonging to the target1
(white) and control (orange) samples. Black pixels are those in all but in
neither target1 nor control. The gray areas are pixels where
$\mathcal{N}_{c}$ has not been determined (see Panopoulou & Lenz 2020). The
locations of the northern and southern sky patches, studied in order to
investigate the effect of target and control sampling different sky regions,
are shown with the green and magenta circles, respectively.
Figure 12: Normalized histograms of ${\sigma_{\Delta_{s}}}^{i}$ as measured on
PR3 maps in the northern (left) and southern (right) sky patches for the
different sub-samples of target1 (in orange), control (in blue) and all (in
black).
Second, having observed that noise properties differ systematically between
northern and southern hemispheres, we repeated our analysis in the northern
hemisphere alone. We chose the northern hemisphere because it contains more
target pixels: lines of sight intersecting multiple, misaligned clouds are
evidently more common in the northern Galactic sky. We found that, despite the
modest decrease in sample size for target, in this case the significance with
which Null Hypothesis II is rejected in fact increases (p-value decreases),
because in general pixels in the north are less noisy.
### 6.2 Projected Rayleigh Statistic
In order to strengthen our analysis and confirm that our results do not depend
critically on our choice of $\mathcal{D}$ as our test statistic, we have
repeated our analysis using the Projected Rayleigh Statistic (PRS) to quantify
the degree of alignment of EVPA between frequencies. The PRS ($Z_{x}$) is
computed as (e.g., Jow et al. 2018):
$Z_{x}=\frac{1}{N}\sum_{i=1}^{N}{\cos{(2\xi_{i})}}$ (11)
where $\xi_{i}$ is defined in the range $\left[-\pi/2,\,\pi/2\right]$, so that
$Z_{x}$ takes values between -1 and 1. Computing the PRS for a sample of
signed difference angles $\Delta_{s}(\psi_{353},\psi_{217})$ defined in Eq. 6,
we can quantify the level of alignment of EVPA between 353 and 217 GHz. We
expect $Z_{x}$ to be smaller for samples with statistically larger EVPA
differences. We reproduced the analysis presented in Sect. 4 using the PRS in
place of the circular standard deviation ($S$), in order to quantify the
degree of alignment/misalignment in our samples and quantitatively compare
them. We found that the significance with which our hypotheses are rejected in
each case is generally consistent, with no strong dependence on the choice of
test statistic.
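Eq. 11 is a one-liner; a minimal sketch (our naming, input in radians as in the text):

```python
import numpy as np

def prs(xi_rad):
    """Projected Rayleigh Statistic of Eq. 11: mean of cos(2 xi), in [-1, 1].
    Values near 1 indicate EVPAs aligned between frequencies; near -1,
    systematically perpendicular; near 0, random relative orientations."""
    return float(np.mean(np.cos(2.0 * np.asarray(xi_rad))))
```

Applied to $\{\Delta_{s}(\psi_{353},\psi_{217})\}$, smaller $Z_{x}$ values correspond to larger EVPA differences, so the PRS probes the same effect as $\mathcal{D}$ from the opposite direction.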
### 6.3 $\mathcal{D}$ versus $\Delta(\theta_{LVC},\theta_{IVC})$
According to the simplest two-cloud model (Tassis & Pavlidou 2015), if the
EVPA differences between frequencies are due to SED differences and magnetic
field misalignment between the dust clouds, then for an ensemble of sky pixels
we expect to see (i) a decrease of degree of polarization and (ii) an increase
of LOS frequency decorrelation (which we quantify using $\mathcal{D}$) as
$\Delta(\theta_{LVC},\theta_{IVC})$ increases.
To test this simple scenario, we consider all lines of sight showing a
sufficient degree of complexity in terms of number of clouds, namely
$\mathcal{N}_{c}>1.5$. We bin the sky pixels according to their
$\Delta(\theta_{LVC},\theta_{IVC})$ values as measured from Hi orientation
data in the scheme of Implementation I. Then, for each bin, we examine the
distribution of $p_{353}$ and compute the $\mathcal{D}$ statistic. As
expected, for increasing $\Delta(\theta_{LVC},\theta_{IVC})$, we observe a
small but systematic decrease of degree of polarization and a clear rise of
$\mathcal{D}$ values. The latter is shown in Fig. 13 for the PR3 polarization
maps from which the smica CMB has been subtracted. We obtain similar
conclusions when we use other combinations of polarization maps and removed
CMB estimates, as well as when we consider the Hi orientation as in
Implementation II of the selection of target pixels (i.e., at the peak of the
two dominant clouds) rather than the scheme used in Implementation I.
In the simple two-cloud model of Tassis & Pavlidou (2015), LOS decorrelation
is expected to be more pronounced towards lines of sight where the magnetic
fields of the clouds form an angle of $60^{\circ}$ or more. As noted by these
authors, smaller angle differences can also result in LOS decorrelation, but
at a lower level. LOS decorrelation is therefore not expected to abruptly
appear at some large misalignment angle, but should qualitatively match the
observed smooth trend in Fig. 13. A more quantitative comparison of this
observation with analytic models should take into account a number of factors.
First, in general, lines of sight might be composed of more than two dust
clouds that contribute to the polarized signal. Second, changes in the
spectral index of the dust SED (and not simply the dust temperature, as
assumed in the Tassis & Pavlidou (2015) model) can alter the frequency
dependence of the dust emission EVPA for a given misalignment angle. Finally,
the difference between Hi filament orientation and the plane-of-the-sky (POS)
magnetic field orientation shows an intrinsic astrophysical scatter, which
should also be taken into account as an extra source of uncertainty. Such
detailed comparisons with models will require further work beyond that
presented in this paper.
We note that in our analysis we have not optimized our cutoff in
$\Delta(\theta_{LVC},\theta_{IVC})$ for the selection of our target pixels;
rather, we adopted $60^{\circ}$ based on our a priori physical expectations.
Had we decreased the cutoff to $\Delta(\theta_{LVC},\theta_{IVC})\geq
45^{\circ}$, the size of the target sample, and hence the significance with
which we have detected LOS frequency decorrelation, would have increased, as
subsequent analysis confirms.
Figure 13: Increase of the spread of EVPA differences between 353 and 217 GHz
as a function of offset angle between Hi structures from integration in LVC
and IVC ranges (Implementation I). All sky pixels with $\mathcal{N}_{c}>1.5$ are
binned according to their $\Delta(\theta_{IVC},\theta_{LVC})$ values and the
$\mathcal{D}$ statistic is computed for each subsample with observational
uncertainties propagated. The error bars in each bin represent the $1\sigma$
value of $\mathcal{D}$ from a bootstrap resampling of the data $10^{3}$ times
per bin.
### 6.4 A case study using starlight polarization
In this paper we have used Hi morphology as an indirect probe of the direction
of magnetic fields in individual clouds. Starlight polarization, induced by
the same dust grains that produce polarized emission, is a more direct probe
of the dust polarization position angle. Currently available starlight
polarization measurements are sparse, but large-scale starlight polarization
surveys like Pasiphae (Tassis et al. 2018) are planned for the near future.
Nevertheless, data do exist in a small sky patch that we can use for a proof-
of-principle analysis using starlight polarization instead of Hi data.
Figure 14: The case of tomography region of Panopoulou et al. (2019). Top
panel: Map of EVPA differences computed from 353 and 217 GHz polarization maps
from Planck. The 2-cloud and 1-cloud sight lines are marked respectively by
orange and blue crosses at North-East and South-West of the map center. The
circles have 16′ radius and mark the beams within which starlight polarization
data have been taken and studied by Panopoulou et al. (2019). Bottom panel:
Histograms of EVPA differences computed through 10,000 MC simulations to
propagate observational uncertainties on ($Q_{\nu},\,U_{\nu}$). The 2-cloud
LOS histogram is shown in orange, the 1-cloud LOS in blue. The vertical lines,
with corresponding colors, show the EVPA differences from the data.
Panopoulou et al. (2019) used starlight polarization data from the RoboPol
polarimeter (Ramaprakash et al., 2019) to study a sky region where several
Galactic dust components are present along the LOS. Based on these stellar
polarization data, the authors inferred the number of dust clouds and the POS
orientation of the magnetic field permeating them for two nearby observing
beams of 16′ radius. The two beams were pre-selected based on Hi data as
likely to harbor two dust clouds (2-cloud LOS) and one dust cloud (1-cloud
LOS), respectively.
The authors demonstrated that the two clouds exhibit significant differences
in terms of column density and polarization properties, and that their mean
POS magnetic field orientations differ by about $60^{\circ}$. In principle,
the different SEDs in those significantly misaligned clouds could lead to a
measurable effect of LOS frequency decorrelation in Planck data towards the
2-cloud LOS. However, if the effect is weak it could be hidden in the noise,
as suggested in Sect. 6.3 of Panopoulou et al. (2019) based on a set of
polarization maps from the second Planck data release.
Here, we investigate further the polarization data for those particular lines
of sight. We retrieve the polarized emission at 217 and 353 GHz measured by
Planck towards the sky region of interest (see Sect. 3) smoothed to a 16′ FWHM
beam, and we compute the signed difference of EVPA in each pixel (see Eq. 6 in
Sect. 4). We thus obtain the EVPA-difference map presented in Fig. 14 (top)
where we highlight the two LOS studied in Panopoulou et al. (2019).
Interestingly, the 2-cloud region at the center, which is known to feature a
complex magnetized ISM structure with at least two dust components (Panopoulou
et al. 2019, Clark & Hensley 2019), displays a higher EVPA difference than the
nearby 1-cloud region.
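The signed EVPA difference used in this map can be sketched as follows. This is an illustrative sketch, assuming the paper's Eq. 6 amounts to a difference of polarization angles wrapped to the interval [-90°, 90°); sign conventions for Stokes $U$ (IAU versus HEALPix) are glossed over here.

```python
import numpy as np

def evpa(Q, U):
    """Polarization angle psi = 0.5 * arctan2(U, Q), in degrees."""
    return 0.5 * np.degrees(np.arctan2(U, Q))

def signed_evpa_diff(Q1, U1, Q2, U2):
    """Signed EVPA difference between two frequency maps, wrapped to
    [-90, 90) degrees to respect the 180-degree ambiguity of
    polarization angles (in the spirit of the paper's Eq. 6)."""
    d = evpa(Q1, U1) - evpa(Q2, U2)
    return (d + 90.0) % 180.0 - 90.0
```

Applied pixel by pixel to the 353 and 217 GHz $(Q, U)$ maps smoothed to a common beam, this yields an EVPA-difference map of the kind shown in Fig. 14 (top).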
We quantify the level of uncertainty of the EVPA difference induced by the
observational uncertainties on the Stokes $Q_{\nu}$ and $U_{\nu}$
($\nu=\left\\{217,\,353\right\\}$) through MC simulations (see Sect. 3 for
details). For each MC draw, we compute the EVPA difference and build the
histograms shown in the bottom panel of Fig. 14. Even when accounting for
Planck noise, the 2-cloud LOS deviates significantly from zero EVPA difference
in the two frequencies, suggesting a LOS frequency decorrelation of the
polarization data. In contrast, the distribution corresponding to the 1-cloud
LOS is compatible with zero EVPA difference in the two frequencies (no LOS
frequency decorrelation). Although less significant, the offset from zero of
the EVPA difference for the 2-cloud LOS survives the subtraction of the CMB
estimates. This is reported in Table 4.
Table 4: EVPA frequency differences in degrees for the 2-cloud and 1-cloud LOSs from PR3 frequency maps and with subtraction of commander and smica CMB estimates. The means and 1$\sigma$ intervals are computed through 10,000 MC simulations to propagate the observational uncertainties. 68% of the draws fall within the quoted uncertainty about the mean.
CMB Removal | 1-cloud LOS | 2-cloud LOS
---|---|---
None | $3.30\pm 2.54$ [∘] | $7.98\pm 3.10$ [∘]
commander | $2.91\pm 2.47$ [∘] | $6.44\pm 3.27$ [∘]
smica | $3.06\pm 2.48$ [∘] | $7.33\pm 3.24$ [∘]
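The Monte Carlo propagation behind Table 4 can be sketched as follows. This is a simplified illustration: `sigma` is a single hypothetical noise level applied to all four Stokes values, whereas the real analysis uses the per-pixel observational covariances.

```python
import numpy as np

def mc_evpa_diff(Q217, U217, Q353, U353, sigma, n_draws=10_000, rng=0):
    """Propagate Gaussian observational noise on (Q_nu, U_nu) into the
    EVPA difference between 353 and 217 GHz by Monte Carlo. Returns the
    mean and standard deviation of the draws, i.e. the kind of mean and
    1-sigma interval quoted in Table 4."""
    rng = np.random.default_rng(rng)

    def psi(Q, U):
        return 0.5 * np.degrees(np.arctan2(U, Q))

    d = []
    for _ in range(n_draws):
        p353 = psi(Q353 + rng.normal(0, sigma), U353 + rng.normal(0, sigma))
        p217 = psi(Q217 + rng.normal(0, sigma), U217 + rng.normal(0, sigma))
        d.append((p353 - p217 + 90.0) % 180.0 - 90.0)  # wrap to [-90, 90)
    d = np.asarray(d)
    return d.mean(), d.std()
```

A distribution whose mean is offset from zero by more than its width, as for the 2-cloud LOS, then indicates a frequency difference that survives the noise.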
This tentative result demonstrates how starlight polarization data can be used
to identify sky pixels that experience LOS-induced frequency decorrelation.
## 7 Estimation of required SED variation
Frequency decorrelation of dust emission is, ultimately, the result of spatial
variations of the dust SED. The detection of rotation of the dust polarization
angle between frequencies is evidence for variation of the dust SED along the
line of sight. We can therefore use the observed magnitude of this effect to
estimate the intrinsic variability of the dust SED.
Let us divide the line of sight into $N$ clouds such that the $i$-th cloud has
column density $N_{\rm HI}^{i}$. Then the observed Stokes parameters of the
polarized dust emission at a frequency $\nu$ are given by (e.g., Hensley et
al., 2019):
$\displaystyle Q_{\nu}=\sum_{i}m_{p}N_{\rm HI}^{i}\delta_{\rm DG}^{i}f^{i}\kappa_{\nu}^{i}B_{\nu}\left(T_{d}^{i}\right)\cos^{2}\gamma_{i}\cos\left(2\psi_{i}\right)$ (12)
$\displaystyle U_{\nu}=\sum_{i}m_{p}N_{\rm HI}^{i}\delta_{\rm DG}^{i}f^{i}\kappa_{\nu}^{i}B_{\nu}\left(T_{d}^{i}\right)\cos^{2}\gamma_{i}\sin\left(2\psi_{i}\right)\;,$ (13)
where $f^{i}$, $\delta_{\rm DG}^{i}$, $\kappa_{\nu}^{i}$, $T_{d}^{i}$,
$\gamma_{i}$, and $\psi_{i}$ are the alignment fraction, dust-to-gas mass
ratio, polarized opacity at frequency $\nu$, dust temperature, angle between
the magnetic field and the plane of the sky, and polarization angle of the
$i$th cloud, respectively, and $m_{p}$ is the proton mass. When there are
multiple clouds along the line of sight, Eqs. 12 and 13 make clear that the
ratio $U_{\nu}/Q_{\nu}$, and thus the polarization angle $\psi_{\nu}$, is
generally not constant with frequency.
For a single cloud, the ratios of $Q_{\nu}$ and $U_{\nu}$ at 217 and 353 GHz
are given by
$\left(\frac{Q_{217}}{Q_{353}}\right)_{i}=\left(\frac{U_{217}}{U_{353}}\right)_{i}=\frac{B_{217}\left(T_{d}^{i}\right)\kappa_{217}^{i}}{B_{353}\left(T_{d}^{i}\right)\kappa_{353}^{i}}\;.$ (14)
If dust everywhere had the same temperature and same opacity law, then this
ratio would be constant across the sky and $\psi_{\nu}$ would be constant with
frequency. Since this is inconsistent with what is observed, let us assume
that this quantity has a mean value $\alpha$ and that cloud-to-cloud
variations are described by a parameter $\rho$ having mean zero, i.e.,
$\left(\frac{Q_{217}}{Q_{353}}\right)_{i}=\left(\frac{U_{217}}{U_{353}}\right)_{i}\equiv\alpha\left(1+\rho_{i}\right)\;.$
(15)
A modified blackbody having $T_{d}=19.6$ K and $\beta=1.55$, typical
parameters for high-latitude dust (Planck Collaboration XI, 2020), has
$\alpha=0.21$, though our analysis is not sensitive to the value of $\alpha$.
$\sigma_{\rho}$ quantifies the intrinsic variation in the dust SED between 217
and 353 GHz, regardless of whether those variations arise from temperature,
composition, or other effects. Modeling $\rho$ as Gaussian distributed with
mean zero and variance $\sigma_{\rho}^{2}$, we seek the value of
$\sigma_{\rho}$ that can account for the enhanced dispersion of polarization
angles on multi-cloud sightlines (Fig. 8).
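The quoted value of $\alpha$ can be reproduced with a quick modified-blackbody computation. This is a sketch assuming a single power-law opacity $\kappa_{\nu}\propto\nu^{\beta}$; the helper names are ours.

```python
import numpy as np

h, k = 6.62607015e-34, 1.380649e-23  # Planck and Boltzmann constants (SI)

def planck_ratio(nu1, nu2, T):
    """Blackbody ratio B_nu1(T) / B_nu2(T)."""
    x1, x2 = h * nu1 / (k * T), h * nu2 / (k * T)
    return (nu1 / nu2) ** 3 * np.expm1(x2) / np.expm1(x1)

def alpha(T_d=19.6, beta=1.55, nu1=217e9, nu2=353e9):
    """Mean 217/353 GHz ratio of Eq. 14 for a modified blackbody with
    kappa_nu proportional to nu**beta. With the typical high-latitude
    parameters T_d = 19.6 K, beta = 1.55 this gives alpha ~ 0.21."""
    return planck_ratio(nu1, nu2, T_d) * (nu1 / nu2) ** beta
```

Evaluating `alpha()` with the defaults recovers the quoted $\alpha=0.21$ to two decimal places.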
To estimate the effect of $\sigma_{\rho}$ on the dispersion in polarization
angles, we use the Hi maps to constrain both the line-of-sight distribution of
clouds and their relative orientations. To simplify the analysis we consider
the data from our Implementation II (Sect. 4). For each sightline we thus
consider only the two dominant clouds (in Hi column density) as identified by
Panopoulou & Lenz (2020) and create maps of per-cloud Stokes parameter by
integrating the Hi-based $Q$ and $U$ maps of Clark & Hensley (2019) in the
velocity range within $v_{0}\pm\sigma_{0}$, where $v_{0}$ is the cloud
centroid velocity and $\sigma_{0}$ is the second moment of its spectrum (see
Sect. 3). Then, we estimate $\psi_{353}$ on each sightline as:
$\hat{\psi}_{353}=\frac{1}{2}\arctan\left(\frac{U_{\rm HI}^{1}\cos^{2}\gamma_{1}+U_{\rm HI}^{2}\cos^{2}\gamma_{2}}{Q_{\rm HI}^{1}\cos^{2}\gamma_{1}+Q_{\rm HI}^{2}\cos^{2}\gamma_{2}}\right)\;,$ (16)
where $Q_{\rm HI}$ and $U_{\rm HI}$ are given by Eqs. 3 and 4, respectively,
and the superscripts denote integration over clouds 1 and 2. The angles
between the magnetic field and the plane of the sky $\gamma_{1}$ and
$\gamma_{2}$ are unknown, and so we draw $\sin\gamma$ uniformly from the
interval $[-1,1]$ for each; $\gamma=0$ when the magnetic field is in the plane
of the sky. This equation does not explicitly model variations in the 353 GHz
dust emissivity per H atom, although marginalizing over different values of
$\gamma_{1}$ and $\gamma_{2}$ achieves a similar effect numerically. Rather,
since we are interested only in the variability of the polarized dust SED
between 353 and 217 GHz, we model such effects through the $\rho_{1}$ and
$\rho_{2}$ parameters when computing the 217 GHz polarization angle only.
Using Eq. 15, $\psi_{217}$ on each sightline can be modeled as
$\hat{\psi}_{217}=\frac{1}{2}\arctan\left(\frac{U_{\rm HI}^{1}\left(1+\rho_{1}\right)\cos^{2}\gamma_{1}+U_{\rm HI}^{2}\left(1+\rho_{2}\right)\cos^{2}\gamma_{2}}{Q_{\rm HI}^{1}\left(1+\rho_{1}\right)\cos^{2}\gamma_{1}+Q_{\rm HI}^{2}\left(1+\rho_{2}\right)\cos^{2}\gamma_{2}}\right)\;.$ (17)
On each sightline, $\rho_{1}$ and $\rho_{2}$ are drawn from a Gaussian
distribution of mean zero and variance $\sigma_{\rho}^{2}$. Then for each
sightline we can compute
$\Delta_{s}\left(\hat{\psi}_{353},\hat{\psi}_{217}\right)$ (Eq. 6) and finally
the dispersion $\mathcal{D}$ (Eq. 9) over all target2 sightlines as a function
of $\sigma_{\rho}$.
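Putting Eqs. 16 and 17 together, the forward model can be sketched per $\sigma_{\rho}$ value. This is an illustrative sketch with our own helper names; `Q1, U1, Q2, U2` stand for the per-cloud Hi-based Stokes maps, and the dispersion is taken as a plain standard deviation rather than the exact Eq. 9 statistic.

```python
import numpy as np

def model_D(Q1, U1, Q2, U2, sigma_rho, rng=0):
    """Forward model of Eqs. 16-17. Per sightline: draw sin(gamma)
    uniformly in [-1, 1] for each of the two clouds and rho_1, rho_2
    from N(0, sigma_rho^2); form psi_353 and psi_217; return the
    dispersion of the signed EVPA differences over all sightlines."""
    rng = np.random.default_rng(rng)
    n = len(Q1)
    cos2g1 = 1.0 - rng.uniform(-1, 1, n) ** 2   # cos^2(gamma) = 1 - sin^2
    cos2g2 = 1.0 - rng.uniform(-1, 1, n) ** 2
    r1 = 1.0 + rng.normal(0, sigma_rho, n)       # (1 + rho_1)
    r2 = 1.0 + rng.normal(0, sigma_rho, n)       # (1 + rho_2)
    psi353 = 0.5 * np.degrees(np.arctan2(U1 * cos2g1 + U2 * cos2g2,
                                         Q1 * cos2g1 + Q2 * cos2g2))
    psi217 = 0.5 * np.degrees(np.arctan2(U1 * r1 * cos2g1 + U2 * r2 * cos2g2,
                                         Q1 * r1 * cos2g1 + Q2 * r2 * cos2g2))
    d = (psi353 - psi217 + 90.0) % 180.0 - 90.0  # Eq. 6-style wrapping
    return np.std(d)
```

Scanning `sigma_rho` and comparing the returned dispersion with the observed $\mathcal{D}_{\rm LOS}$ band is the essence of the constraint shown in Fig. 15.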
If the difference in dispersion between the target and control samples is
attributed entirely to varying dust SEDs, then we can estimate
$\mathcal{D}_{\rm LOS}\simeq\sqrt{\mathcal{D}\left({\rm
target}\right)^{2}-\mathcal{D}\left({\rm
control}\right)^{2}}=0.23^{+0.05}_{-0.06}$ (18)
from the resampling analysis presented in Sect. 4 and Table 3. This range is
indicated by the horizontal band in Fig. 15.
The $\pm 1\sigma$ range of $\mathcal{D}$ over 1000 simulations for each value
of $\sigma_{\rho}$ is presented in Fig. 15. We see that $\sigma_{\rho}=0.15$
matches the observed enhancement in dispersion between the target and control
samples. This is consistent with the dust SEDs varying in the ratio of 217 to
353 GHz polarized intensity at the level of 15% from cloud to cloud over the
region analyzed. As we model contributions from only the two most dominant
clouds on each sightline, we may be slightly overestimating the true
dispersion.
Figure 15: The dispersion $\mathcal{D}_{\rm LOS}$ (Eq. 18) resulting solely
from variations in the dust SEDs between two clouds along the line of sight in
the target2 sample. We quantify the level of SED variation by the parameter
$\sigma_{\rho}$ (Eq. 15), finding that $\sigma_{\rho}=0.15$ can account for
the excess dispersion in the target sample. Thus, we estimate that the ratio
of 217 to 353 GHz polarized intensity is varying at roughly the 15% level from
cloud to cloud. The blue shaded regions indicate the observed range of
$\mathcal{D}$ estimated in Sect. 4 and the $\pm 1\sigma$ confidence interval
from 1000 realizations of $\gamma_{1}$, $\gamma_{2}$, $\rho_{1}$, and
$\rho_{2}$ in each pixel. The red shaded region is the resultant constraint on
$\sigma_{\rho}$.
## 8 Discussion
In this paper we report on the detection of the effect of LOS-induced
frequency decorrelation – the combined effect of varying dust SEDs and
magnetic field orientations along the LOS – in Planck polarization data. This
detection was made possible by the use of Hi datasets, which allowed us to
construct our target and control samples a priori, in an astrophysically
motivated way. The consistency of the results between our two implementations
and between the different sets of polarization maps and CMB estimates
reinforces our confidence that our finding is robust. Our analysis has
additionally shown that the significance of the effect becomes higher when we
use maps cleaned from residual systematics that were present in Planck PR3
polarization maps.
We emphasize that we have not in any way optimized our analysis choices to
maximize the significance with which the effect is detected. Rather, whenever
a choice had to be made, we made it based on astrophysical arguments. There
are several examples where different choices in our analysis would have, in
fact, increased the significance of the detection of the effect (decreased the
p-value of Null Hypotheses I and II). These include:
(a) Definition of target: defining target as the union of target1 and target2
increases the significance.
(b) Cutoff in Hi orientation misalignment: changing the misalignment
requirement for inclusion in target from $\geq 60^{\circ}$ to $\geq
45^{\circ}$ increases the significance.
(c) Localization: restricting our analysis to the northern hemisphere
increases the significance.
This work finds evidence for LOS frequency decorrelation, and does not
directly address the question of decorrelation in the dust power spectra. Our
findings show that frequency decorrelation of the dust polarization signal is
not an effect that is uniform throughout the sky since the change in
polarization pattern is more severe for sightlines that pass through more
convoluted magnetized ISM, and that those particular sightlines are
distributed unevenly on the sky (see the bottom map in Fig. 1). This may have
implications for power-spectrum-based estimates, as future work will clarify.
From a CMB perspective, it would be interesting to estimate the level of LOS-
induced frequency decorrelation using cross-power spectra (as in, e.g., Planck
Collaboration XXX 2016; Planck Collaboration XI 2020) on maps that have been
corrected for residual systematics (Delouis et al. 2019; Planck Collaboration
Int. LVII 2020) and in sky regions that are dominated by pixels comprising our
target samples. Such an analysis is beyond the scope of this paper.
Our Implementation I of the target pixel selection focused on the distinction
between LVCs and IVCs, based on the physical expectation that IVCs might
feature different dust SEDs than LVCs, due to differences in temperature
and/or dust grain properties (Planck Collaboration XXIV 2011; Planck
Collaboration XI 2014). Our Implementation II imposed no such constraint on
the velocities of the identified distinct peaks in Hi emission. This enables
us to use target2 to test a posteriori whether IVC-LVC cloud pairs exhibit a
stronger LOS frequency decorrelation effect than LVC-LVC pairs.
Concentrating on the two dominant clouds (the ones corresponding to the two
highest-Hi-column-density components), we split pixels in target2 in two
groups: pixels where both dominant clouds have a velocity centroid in the LVC
range (513 pixels), and pixels dominated by LVC-IVC pairs (5242 pixels), with
LVC and IVC ranges defined as in Sect. 4.1. We infer the relative strength of
LOS frequency decorrelation in the two groups through a uniform, unweighted
resampling analysis of each subset of pixels, with $N_{\rm{Boot}}=500$. The
$\mathcal{D}$ values obtained for PR3 polarization maps and smica CMB
subtraction are $\mathcal{D}_{\rm{IVC-LVC}}$ = $1.05\pm 0.04$ and
$\mathcal{D}_{\rm{LVC-LVC}}$ = $1.13\pm 0.05$. The $\mathcal{D}$ statistic is
thus found to be higher for LVC-LVC pairs than for IVC-LVC pairs, although the
two values are consistent within sampling uncertainties. The same trend is
observed for all combinations of polarization maps and subtracted CMB
estimate. As a result, it appears that the LOS frequency decorrelation induced
by dust clouds does not only involve lines of sight passing through IVCs. And,
on the contrary, significantly misaligned LVCs may be a substantial source of
LOS frequency decorrelation.
To improve CMB dust polarization foreground modeling and subtraction,
accounting for LOS frequency decorrelation, observables that can provide
insight on the 3D structure of the magnetized ISM will play a critical role.
Such observables include Hi data (as we have done in this paper) and starlight
polarization. Starlight polarization originates in dichroic absorption by the
same dust grains that produce polarized emission, and thus traces the same
physical processes of grain alignment with the magnetic field, but for the
line of sight between observer and star. Large-scale starlight polarization
surveys like Pasiphae (Tassis et al. 2018) will thus soon provide an
independent, direct probe of dust grain orientations in individual clouds.
## 9 Conclusions
That the SEDs of the dust clouds vary to some extent between different parts
of the Galaxy is certain. That there are in general multiple dust clouds along
a large fraction of lines of sight is certain. That the magnetic field of the
Galaxy is not uniform and may vary along the LOS is certain. Consequently,
decorrelation between polarized dust emission at different frequencies, both
in the plane of the sky and along the LOS, must be present to some extent. The
relevant question is whether the magnitude of this frequency decorrelation
effect is high enough to be detected by an instrument of given specifications.
In this work, we pursue a new approach that specifically targets LOS frequency
decorrelation. Physically, we expect that LOS frequency decorrelation does not
occur at a uniform level throughout the sky, but rather should be more severe
where the orientations of the magnetic field permeating different dust clouds
superposed along the LOS are strongly misaligned. Therefore, we used Hi
velocity and orientation data to select pixels that are most likely to exhibit
significant LOS frequency decorrelation induced by multiple dust SED
components. Each of these target sightlines has an Hi emission structure
consistent with multiple LOS clouds with misaligned magnetic fields. We
compare these to a control sample of sightlines that contain a single Hi
cloud. The use of Hi allows us to distinguish these two sets of pixels using
data that are entirely independent of polarization measurements. The pixels
that maximize the likelihood of showing a LOS frequency decorrelation signal
are distributed highly unevenly on the sky.
We quantify LOS decorrelation using the dispersion of inter-frequency EVPA
differences. We find that this dispersion is larger for our target sample than
for the control sample in Planck data. We detect the LOS frequency
decorrelation effect at a level above the Planck noise (see Fig. 9). We have
confirmed that our finding is robust to inhomogeneous noise levels in the
data, residual systematics, CMB contamination, and the specifics of sky pixel
selection. We found that trends in the polarization data closely follow the
phenomenology expected from the simplest modeling of the effect (Fig. 13).
Additionally, relying on a model-independent approach, we estimated that an
intrinsic variability of the dust SED of $\sim 15\%$ can lead to the observed
magnitude of the effect that we measured from polarization maps at 353 and 217
GHz (see Fig. 15). Finally, we demonstrated that LOS superposition of both
LVC-LVC and LVC-IVC pairs of clouds contributes to the signal detection.
In this study we have presented the first detection of LOS frequency
decorrelation in the Planck data. This detection was made possible thanks to
the use of ancillary datasets, Hi emission data and starlight polarization
data, that allow us to identify sky regions that are potentially most
susceptible to this effect.
###### Acknowledgements.
We thank Vincent Guillet and Aris Tritsis for insightful discussions. We thank
our anonymous referee for her/his report. This work has received funding from
the European Research Council (ERC) under the European Union’s Horizon 2020
research and innovation programme under grant agreement Nos. 771282, 772253,
and 819478. G. V. P. acknowledges support by NASA through the NASA Hubble
Fellowship grant HST-HF2-51444.001-A awarded by the Space Telescope Science
Institute, which is operated by the Association of Universities for Research
in Astronomy, Incorporated, under NASA contract NAS5-26555. S. E. C.
acknowledges support by the Friends of the Institute for Advanced Study
Membership. V. P. acknowledges support from the Foundation of Research and
Technology - Hellas Synergy Grants Program through project MagMASim, jointly
implemented by the Institute of Astrophysics and the Institute of Applied and
Computational Mathematics and by the Hellenic Foundation for Research and
Innovation (H.F.R.I.) under the “First Call for H.F.R.I. Research Projects to
support Faculty members and Researchers and the procurement of high-cost
research equipment grant” (Project 1552 CIRCE). We acknowledge the use of data
from the Planck/ESA mission, downloaded from the Planck Legacy Archive, and of
the Legacy Archive for Microwave Background Data Analysis (LAMBDA). Support
for LAMBDA is provided by the NASA Office of Space Science. Some of the
results in this paper have been derived using the HEALPix (Górski et al. 2005)
package. This work is partially based on publicly released data from the HI4PI
survey which combines the Effelsberg–Bonn HI Survey (EBHIS) in the northern
hemisphere with the Galactic All-Sky Survey (GASS) in the southern hemisphere.
## References
* Abazajian et al. (2016) Abazajian, K. N., Adshead, P., Ahmed, Z., et al. 2016, arXiv e-prints, arXiv:1610.02743
* Ade et al. (2019) Ade, P., Aguirre, J., Ahmed, Z., et al. 2019, J. Cosmology Astropart. Phys., 2019, 056
* BICEP2 Collaboration & Keck Array Collaboration (2018) BICEP2 Collaboration & Keck Array Collaboration. 2018, Phys. Rev. Lett., 121, 221301
* Boulanger et al. (1996) Boulanger, F., Abergel, A., Bernard, J. P., et al. 1996, A&A, 312, 256
* Bregman (1980) Bregman, J. N. 1980, ApJ, 236, 577
* Chluba et al. (2017) Chluba, J., Hill, J. C., & Abitbol, M. H. 2017, MNRAS, 472, 1195
* Clark (2018) Clark, S. E. 2018, ApJ, 857, L10
* Clark & Hensley (2019) Clark, S. E. & Hensley, B. S. 2019, ApJ, 887, 136
* Clark et al. (2015) Clark, S. E., Hill, J. C., Peek, J. E. G., Putman, M. E., & Babler, B. L. 2015, Physical Review Letters, 115, 241302
* Clark et al. (2019) Clark, S. E., Peek, J. E. G., & Miville-Deschênes, M.-A. 2019, ApJ, 874, 171
* Clark et al. (2014) Clark, S. E., Peek, J. E. G., & Putman, M. E. 2014, ApJ, 789, 82
* CMB-S4 Collaboration (2020) CMB-S4 Collaboration. 2020, arXiv e-prints, arXiv:2008.12619
* Danly (1989) Danly, L. 1989, ApJ, 342, 785
* Delouis et al. (2019) Delouis, J. M., Pagano, L., Mottet, S., Puget, J. L., & Vibert, L. 2019, A&A, 629, A38
* Fanciullo et al. (2015) Fanciullo, L., Guillet, V., Aniano, G., et al. 2015, A&A, 580, A136
* Finkbeiner et al. (1999) Finkbeiner, D. P., Davis, M., & Schlegel, D. J. 1999, ApJ, 524, 867
* Ghosh et al. (2017) Ghosh, T., Boulanger, F., Martin, P. G., et al. 2017, A&A, 601, A71
* Górski et al. (2005) Górski, K. M., Hivon, E., Banday, A. J., et al. 2005, ApJ, 622, 759
* Heiles (1984) Heiles, C. 1984, ApJS, 55, 585
* Hensley & Bull (2018) Hensley, B. S. & Bull, P. 2018, ApJ, 853, 127
* Hensley et al. (2019) Hensley, B. S., Zhang, C., & Bock, J. J. 2019, ApJ, 887, 159
* HI4PI Collaboration (2016) HI4PI Collaboration. 2016, A&A, 594, A116
* Irfan et al. (2019) Irfan, M. O., Bobin, J., Miville-Deschênes, M.-A., & Grenier, I. 2019, A&A, 623, A21
* Jow et al. (2018) Jow, D. L., Hill, R., Scott, D., et al. 2018, MNRAS, 474, 1018
* Kalberla & Haud (2020) Kalberla, P. M. W. & Haud, U. 2020, arXiv e-prints, arXiv:2003.01454
* Kamionkowski & Kovetz (2016) Kamionkowski, M. & Kovetz, E. D. 2016, ARA&A, 54, 227
* Kuntz & Danly (1996) Kuntz, K. D. & Danly, L. 1996, ApJ, 457, 703
* Lenz et al. (2017) Lenz, D., Hensley, B. S., & Doré, O. 2017, ApJ, 846, 38
* Mangilli et al. (2019) Mangilli, A., Aumont, J., Rotti, A., et al. 2019, arXiv e-prints, arXiv:1912.09567
* Martin et al. (2015) Martin, P. G., Blagrave, K. P. M., Lockman, F. J., et al. 2015, ApJ, 809, 153
* Martínez-Solaeche et al. (2018) Martínez-Solaeche, G., Karakci, A., & Delabrouille, J. 2018, MNRAS, 476, 1310
* McClure-Griffiths et al. (2006) McClure-Griffiths, N. M., Dickey, J. M., Gaensler, B. M., Green, A. J., & Haverkorn, M. 2006, ApJ, 652, 1339
* Meisner & Finkbeiner (2015) Meisner, A. M. & Finkbeiner, D. P. 2015, ApJ, 798, 88
* Murray et al. (2020) Murray, C. E., Peek, J. E. G., & Kim, C.-G. 2020, ApJ, 899, 15
* Panopoulou & Lenz (2020) Panopoulou, G. V. & Lenz, D. 2020, ApJ, 902, 120
* Panopoulou et al. (2019) Panopoulou, G. V., Tassis, K., Skalidis, R., et al. 2019, ApJ, 872, 56
* Peek & Clark (2019) Peek, J. E. G. & Clark, S. E. 2019, ApJ, 886, L13
* Planck Collaboration III (2020) Planck Collaboration III. 2020, A&A, 641, A3
* Planck Collaboration Int. LVII (2020) Planck Collaboration Int. LVII. 2020, A&A, 643, A42
* Planck Collaboration IV (2020) Planck Collaboration IV. 2020, A&A, 641, A4
* Planck Collaboration IX (2016) Planck Collaboration IX. 2016, A&A, 594, A9
* Planck Collaboration L (2017) Planck Collaboration L. 2017, A&A, 599, A51
* Planck Collaboration XI (2014) Planck Collaboration XI. 2014, A&A, 571, A11
* Planck Collaboration XI (2020) Planck Collaboration XI. 2020, A&A, 641, A11
* Planck Collaboration XII (2014) Planck Collaboration XII. 2014, A&A, 571, A12
* Planck Collaboration XII (2020) Planck Collaboration XII. 2020, A&A, 641, A12
* Planck Collaboration XIX (2015) Planck Collaboration XIX. 2015, A&A, 576, A104
* Planck Collaboration XXIV (2011) Planck Collaboration XXIV. 2011, A&A, 536, A24
* Planck Collaboration XXIX (2016) Planck Collaboration XXIX. 2016, A&A, 586, A132
* Planck Collaboration XXX (2016) Planck Collaboration XXX. 2016, A&A, 586, A133
* Plaszczynski et al. (2014) Plaszczynski, S., Montier, L., Levrier, F., & Tristram, M. 2014, MNRAS, 439, 4048
* Poh & Dodelson (2017) Poh, J. & Dodelson, S. 2017, Phys. Rev. D, 95, 103511
* Puglisi et al. (2017) Puglisi, G., Fabbian, G., & Baccigalupi, C. 2017, MNRAS, 469, 2982
* Ramaprakash et al. (2019) Ramaprakash, A. N., Rajarshi, C. V., Das, H. K., et al. 2019, MNRAS, 485, 2355
* Remazeilles et al. (2020) Remazeilles, M., Rotti, A., & Chluba, J. 2020, arXiv e-prints, arXiv:2006.08628
* Schlafly et al. (2016) Schlafly, E. F., Meisner, A. M., Stutz, A. M., et al. 2016, ApJ, 821, 78
* Shapiro & Field (1976) Shapiro, P. R. & Field, G. B. 1976, ApJ, 205, 762
* Sheehy & Slosar (2018) Sheehy, C. & Slosar, A. 2018, Phys. Rev. D, 97, 043522
* Skalidis & Pelgrims (2019) Skalidis, R. & Pelgrims, V. 2019, A&A, 631, L11
* Suzuki et al. (2018) Suzuki, A., Ade, P. A. R., Akiba, Y., et al. 2018, Journal of Low Temperature Physics, 193, 1048
* Tassis & Pavlidou (2015) Tassis, K. & Pavlidou, V. 2015, MNRAS, 451, L90
* Tassis et al. (2018) Tassis, K., Ramaprakash, A. N., Readhead, A. C. S., et al. 2018, arXiv e-prints, arXiv:1810.05652
* Verschuur (1993) Verschuur, G. L. 1993, ApJ, 409, 205
* Wesselius & Fejes (1973) Wesselius, P. R. & Fejes, I. 1973, A&A, 24, 15
# Censorship of Online Encyclopedias: Implications for NLP Models
Eddie Yang <EMAIL_ADDRESS>, University of California, San Diego, 9500 Gilman Dr, La Jolla, California 92093 and Margaret E. Roberts <EMAIL_ADDRESS>, University of California, San Diego, 9500 Gilman Dr, La Jolla, California 92093
(2021)
###### Abstract.
While artificial intelligence provides the backbone for many tools people use
around the world, recent work has brought to attention that the algorithms
powering AI are not free of politics, stereotypes, and bias. While most work
in this area has focused on the ways in which AI can exacerbate existing
inequalities and discrimination, very little work has studied how governments
actively shape training data. We describe how censorship has affected the
development of Wikipedia corpuses, text data which are regularly used as pre-
trained inputs into NLP algorithms. We show that word embeddings trained on
Baidu Baike, an online Chinese encyclopedia, have very different associations
between adjectives and a range of concepts about democracy, freedom,
collective action, equality, and people and historical events in China than
embeddings trained on its regularly blocked but uncensored counterpart,
Chinese language Wikipedia.
We examine the implications of these discrepancies by studying their use in
downstream AI applications. Our paper shows how government repression,
censorship, and self-censorship may impact training data and the applications
that draw from them.
word embeddings, censorship, training data, machine learning
journalyear: 2021
copyright: rightsretained
conference: Conference on Fairness, Accountability, and Transparency; March 3–10, 2021; Virtual Event, Canada
booktitle: Conference on Fairness, Accountability, and Transparency (FAccT ’21), March 3–10, 2021, Virtual Event, Canada
doi: 10.1145/3442188.3445916
isbn: 978-1-4503-8309-7/21/03
ccs: Computing methodologies — Supervised learning by classification
ccs: Information systems — Content analysis and feature selection
ccs: Social and professional topics — Political speech
## 1. Introduction
Natural language processing (NLP) as a branch of artificial intelligence
provides the basis for many tools people around the world use daily. NLP
impacts how firms provide products to users, content individuals receive
through search and social media, and how individuals interact with news and
emails. Despite the growing importance of NLP algorithms in shaping our lives,
scholars, policymakers, and the business community have recently raised the
alarm about how gender and racial biases may be baked into these algorithms.
Because they are trained on human data, the algorithms themselves can
replicate implicit and explicit human biases and aggravate discrimination
(Sweeney, 2013; Bolukbasi et al., 2016; Caliskan et al., 2017). Additionally,
training data that over-represents a subset of the population may do a worse
job at predicting outcomes for other groups in the population (Dressel and
Farid, 2018). When these algorithms are used in real world applications, they
can perpetuate inequalities and cause real harm.
While most of the work in this area has focused on bias and discrimination, we
bring to light another way in which NLP may be affected by the institutions
that shape the data it is trained on. We describe how censorship has
affected the development of online encyclopedia corpuses that are often used
as training data for NLP algorithms. The Chinese government has regularly
blocked Chinese language Wikipedia from operating in China, and mainland
Chinese Internet users are more likely to use an alternative Wikipedia-like
website, Baidu Baike. The institution of censorship has weakened Chinese
language Wikipedia, which is now several times smaller than Baidu Baike, and
made Baidu Baike - which is subject to pre-censorship - an attractive source
of training data. Using methods from the literature on gender discrimination
in word embeddings, we show that Chinese word embeddings trained with the same
method but separately on these two corpuses reflect the political censorship
of these corpuses, treating the concepts of democracy, freedom, collective
action, equality, people and historical events in China significantly
differently.
After establishing that these two corpuses reflect different word
associations, we demonstrate the potential real-world impact of training data
politics by using the two sets of word embeddings in a transfer learning task
to classify the sentiment of news headlines. We find that models trained on
the same data but using different pre-trained word embeddings make
significantly different predictions of the valence of headlines containing
words pertaining to freedom, democracy, elections, collective action, social
control, political figures, the CCP, and historical events. These results
suggest that censorship could have downstream effects on AI applications,
which merit future research and investigation.
Our paper proceeds as follows. We first describe the background of how
Wikipedia corpuses came to be used as training data for word embeddings and
how censorship impacts these corpuses. Second, we describe our results of how
word associations from Wikipedia and Baidu Baike word embeddings differ on
concepts that pertain to democracy, equality, freedom, collective action and
historical people and events in China. Last, we show that these embeddings
have downstream implications for AI models using a sentiment prediction task.
## 2\. Pre-Trained Word Embeddings and Wikipedia Corpuses
NLP algorithms rely on numerical representations of text as a basis for
modeling the relationship between that text and an outcome. Many NLP
algorithms use “word embeddings” to represent text, where each word in a
corpus is represented as a k-dimensional vector that encodes the relationship
between that word and other words through the distance between them in
k-dimensional space. Words that frequently co-occur are closer in space.
Popular algorithms such as Glove (Pennington et al., 2014) and Word2Vec
(Mikolov et al., 2013) are used to estimate embeddings for any given corpus of
text. The word embeddings are then used as numerical representations of input
texts, which are then related through a statistical classifier to an outcome.
In comparison to other numerical representations of text, word embeddings are
useful because they communicate the relationships between words. The bag-of-
words representation of text, which represents each word as simply being
included or not included in the text, does not encode the relationship between
words – each word is equidistant from every other. Word embeddings, on the
other hand, communicate to the model which words tend to co-occur, thus
providing the model with the information that words like “purse” and “handbag”
are more likely substitutes than “purse” and “airplane”.
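To make this contrast concrete, here is a small sketch (toy 3-dimensional vectors with hypothetical values; real embeddings are much higher-dimensional) showing that one-hot bag-of-words vectors place every distinct word pair at the same distance, while embedding vectors place likely substitutes closer together:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-dimensional "embeddings" (hypothetical values, for illustration only).
emb = {
    "purse":    np.array([0.9, 0.8, 0.1]),
    "handbag":  np.array([0.8, 0.9, 0.2]),
    "airplane": np.array([0.1, 0.2, 0.9]),
}

# One-hot (bag-of-words) vectors: every pair of distinct words is equidistant.
vocab = list(emb)
onehot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

assert cosine(onehot["purse"], onehot["handbag"]) == cosine(onehot["purse"], onehot["airplane"])
# Embeddings, by contrast, place likely substitutes closer together.
assert cosine(emb["purse"], emb["handbag"]) > cosine(emb["purse"], emb["airplane"])
```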
Word embeddings are also useful because they can be pre-trained on large
corpuses of text like Wikipedia or Common Crawl, and these pre-trained
embeddings can then be used as an initial layer in applications that may have
less training data. Pre-trained word embeddings have been shown to achieve
higher accuracy faster (Qi et al., 2018). While training on large corpuses is
expensive, companies and research groups have made available pre-trained word
embeddings – typically on large corpuses like Wikipedia or Common Crawl – that
can then be downloaded and used in any application in that language.111For
example, Facebook provides word embeddings in 294 languages trained on
Wikipedia (https://fasttext.cc/docs/en/pretrained-vectors.html) (Bojanowski et
al., 2017).
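Pre-trained vectors of this kind are commonly distributed in the textual word2vec format: a `vocab_size dim` header followed by one `word v1 ... vk` line per word. A minimal numpy-only reader (our own helper, not any particular library's API) can be sketched as:

```python
import io
import numpy as np

def load_word2vec_text(fh):
    """Minimal reader for the textual word2vec format used by many
    pre-trained embedding releases: a 'vocab_size dim' header followed
    by one 'word v1 ... vk' line per word."""
    n, dim = map(int, fh.readline().split())
    vecs = {}
    for _ in range(n):
        parts = fh.readline().rstrip("\n").split(" ")
        vecs[parts[0]] = np.array(parts[1:1 + dim], dtype=float)
    return vecs

# Tiny in-memory stand-in for a downloaded .vec file (hypothetical values).
sample = io.StringIO("2 3\nking 0.1 0.2 0.3\nqueen 0.1 0.2 0.4\n")
vectors = load_word2vec_text(sample)
assert set(vectors) == {"king", "queen"} and vectors["king"].shape == (3,)
```

In practice one would point such a reader (or a library loader) at the downloaded `.vec` file rather than an in-memory string.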
The motivation behind using pre-trained word embeddings is that they can
reflect how words are commonly used in a particular language. Indeed, Spirling
and Rodriguez (2019) show that pre-trained word embeddings do surprisingly
well on a “Turing test” where human coders often cannot distinguish between
close words produced by the embeddings and those produced by other humans. To
this end, Wikipedia corpuses are commonly selected to train word embeddings
because they are user-generated, open-source, cover a wide range of topics,
and are very large.222A Google Scholar search of “pre-trained word embeddings”
and Wikipedia returns over 2,000 search results as of January 2021.
At the same time as pre-trained embeddings have become popular for computer
scientists in achieving better performance for NLP tasks, some scholars have
pointed to potential harms these embeddings could create by encoding existing
biases into the representation. The primary concern is that embeddings
replicate existing human biases and stereotypes in language and using them in
downstream applications can perpetuate these biases (see Sun et al. (2019a)
for a review). Caliskan et al. (2017) show that word embeddings reflect human
biases, in that associations of words in trained word embeddings mirror
implicit association tests. Using simple analogies within word embeddings,
Bolukbasi et al. (2016), Garg et al. (2018), and Manzini et al. (2019) show
that word embeddings can encode racial and gender stereotypes. While these
word associations can be of interest to social science researchers, they may
cause harm if used in downstream tasks (Barocas and Selbst, 2016;
Papakyriakopoulos et al., 2020).
More generally, research in machine learning has been criticized for not
paying enough attention to the origin of training datasets and the social
processes that generate them (Geiger et al., 2020). Imbalances in the content
of training data have been shown to create differential error rates across
groups in areas ranging from computer vision to speech recognition (Tatman,
2017; Torralba and Efros, 2011). Some scholars have argued that training
datasets should be representative of the population that the algorithm is
applied to (Shankar et al., 2017).
## 3\. Censorship of Chinese Language Wikipedia and Implications for Chinese
Language NLP
We consider another mechanism through which institutional and societal forces
impact the corpuses that are used to train word embeddings: government
censorship. While we use the example of online encyclopedias and word
embeddings to make our point, its implications are much more general.
Government censorship of social media, news, and websites directly affects
large corpuses of text by blocking users’ access, deleting individual
messages, adding content through propaganda, or inducing self-censorship
through intimidation and laws (Deibert et al., 2008; Morozov, 2011; MacKinnon,
2012; King et al., 2013, 2017; Sanovich et al., 2018; Roberts, 2018).
While Wikipedia’s global reach makes it an attractive corpus for training
models in many different languages, Wikipedia has also been periodically
censored by many governments, including Iran, China, Uzbekistan, and Turkey
(Clark et al., 2017). China has had the most extensive and long-lasting
censorship of Wikipedia. Chinese language Wikipedia has been blocked
intermittently ever since it was first established in 2001. Since May 19,
2015, all of Chinese language Wikipedia has been blocked by the Great Firewall
of China (Welinder and Black, 2015; Oberhaus, 2017). More recently, not just
Chinese language Wikipedia, but all language versions of Wikipedia have been
blocked from mainland China (Wik, 2019).
Censorship has weakened Chinese language Wikipedia by decreasing the size of
its audience. Pan and Roberts (2019) estimate that the block of Chinese
language Wikipedia in 2015 decreased page views of the website by around 3
million views per day. Zhang and Zhu (2011) use the 2005 block of Wikipedia to
show that the block decreased views of Chinese language Wikipedia, which in
turn decreased user contributions to Wikipedia not only from blocked users in
mainland China, but also from unblocked users who had fewer incentives to
contribute after the block. While mainland Chinese Internet users can access
Chinese language Wikipedia with a Virtual Private Network (VPN), evidence
suggests that very few do (Chen and Yang, 2019; Roberts, 2018).
Censorship of Chinese language Wikipedia has strengthened its unblocked
substitute, Baidu Baike. A Wikipedia-like website, Baidu Baike boasted 16
million entries as of 2019, 16 times more than Chinese language Wikipedia
(Zhang, 2019). Yet, as with all companies operating in China, Baidu
Baike is subject to internal censorship that impacts whether and how certain
entries are written. While edits to Chinese language Wikipedia pages are
posted immediately, any edits to Baidu Baike pages go through pre-publication
review. While editors of Wikipedia can be anonymous, editors of Baidu Baike
must register their real names. Additional scrutiny is given to sensitive
pages, such as national leaders, political figures, political information, and
the military, where Baidu Baike regulations stipulate that only government
media outlets such as _Xinhua_ and _People’s Daily_ can be used as
sources.333See instructions at:
https://baike.baidu.com/item/%E7%99%BE%E5%BA%A6%E7%99%BE%E7%A7%91%EF%BC%9A%E5%8F%82%E8%80%83%E8%B5%84%E6%96%99.
Pre-censorship of Baidu Baike affects the types of pages available on Baidu
Baike and the way these pages are written. While it’s impossible to know
without an internal list the extent to which missing pages in Baidu Baike are
a direct result of government censorship, a substantial list of historical
events covered on Chinese language Wikipedia including “June 4th Incident” and
“Democracy Wall” and well-known activists such as Chen Guangcheng and
Wu’erkaixi have no Baidu Baike page (Ng, 2013). For example, when we attempted
to create entries on Baidu Baike such as “June Fourth Movement” or
“Wu’erkaixi,” we were automatically returned an error.
Perhaps because of the size difference between the two corpuses, researchers
developing cutting-edge Chinese language NLP models are increasingly drawing on
the Baidu Baike corpus (Sun et al., 2019b; Wei et al., 2019). Baidu Baike word
embeddings have been shown to perform better on certain tasks (Li et al.,
2018). Here, we assess the downstream implications of this choice on the
representation of democratic concepts, social control, and historical events
and figures. First, we follow Caliskan et al. (2017) to compare the distance
between these concepts and a list of adjectives and sentiment words. Then, we
show the downstream consequences of the choice of corpus on a predictive task
of the sentiment of headlines.
## 4\. Distance from Democracy: Comparison Between Baidu Baike and Wikipedia
Embeddings
In this section, we consider the differences in word associations among word
embeddings trained with Chinese language Wikipedia and Baidu Baike. We use
word embeddings made available by Li et al.
(2018).444https://github.com/Embedding/Chinese-Word-Vectors Li et al. (2018)
train 300-dimensional word embeddings on both Baidu Baike and Chinese language
Wikipedia using the same algorithm, Word2Vec (Mikolov et al., 2013). For a
benchmark, we also compare these two sets of embeddings to embeddings trained
on articles from the _People’s Daily_ from 1947-2016, the Chinese government’s
mouthpiece.555Also trained by Li et al. (2018) and made available at
https://github.com/Embedding/Chinese-Word-Vectors.
To evaluate word associations, we follow Caliskan et al. (2017) and Rodman
(2019) to compare the distance between a set of target words and attribute
words to establish their relationships in each embedding space. Figure 1 gives
a simplified graphical representation of the evaluation procedure in a
2-dimensional space. In this simple example, we might be interested in the
position of a target word – a concept we are interested in – relative to a
positive attribute word and a negative attribute word. For example, we can
evaluate whether democratic concepts are represented more positively or
negatively by comparing the angle between the vector for the target word
“Democracy” (in black) and a positive attribute word “Stability” as well as a
negative attribute word “Chaos” (both in blue).
2-dimensional word embeddings with “democracy” as the target word and “chaos”
and “stability” as attribute words
Figure 1. Example of Word Embedding Comparison
In Figure 1, “Democracy” in word embedding A has a more positive connotation
than in word embedding B, because the relative position of the word
“Democracy” in embedding A with respect to the positive attribute word
“Stability” and the negative attribute word “Chaos” is closer to the positive
attribute word than “Democracy” is in embedding B. To minimize the
particularities of a single word and hence the variability of the result, we
repeat this evaluation procedure across multiple target words representing the
same concept (e.g. democracy) and compare them with multiple attribute words.
In the next sections, we explain how we select target words, attribute words,
how we pre-process the embedding space, and our results.
### 4.1. Identifying Target Words
We begin by delineating the categories of interest. In general, there are two
broad categories we are interested in: 1) democratic political concepts and
ideas and 2) known targets of propaganda. Based on past work, we know entries
that fall under these categories have been the target of content control on
Baidu Baike (Ng, 2013). Additionally, the first category captures ideas that we
think are normatively desirable but discouraged in China. The second category
captures the extent to which the embeddings are consistent with propaganda.
For the first category, we include
1. Democratic values, in particular freedom and equality of rights.
2. Procedures of democracy, in particular features pertaining to elections.
3. Channels for voicing preferences in the form of collective actions such as protests and petitions.
For the second category, we include
1. Social control, especially concepts related to repression and surveillance.
2. The Chinese Communist Party (CCP) and related features.
3. Significant historical events in China that involved the CCP, such as the Cultural Revolution.
4. Important figures who are extolled by the CCP.
5. Figures who are denounced by the CCP, such as political dissenters.
For each of these categories, we do not want to select only one target word of
interest, but rather a group of related words that all cover the same concept.
We select a group of target words that “represent” this category as follows:
1. For categories other than historical events and negative figures, we first select a Chinese word that most closely represents the category of interest.666We asked three Chinese speakers to independently come up with the representative words and had them agree on a single word for each category. This step was done before analysis was performed. For example, for the category of procedures of democracy, the Chinese word “election” is selected.
2. We then calculate the cosine similarity of the representative word with all other words from the word embedding spaces (Wikipedia & Baidu Baike).
3. From each corpus, we select the 50 words that are closest to the representative word (words with the highest cosine similarity).
4. Of the 100 words closest to the representative word for each category, we include all words that could be thought to be synonymous with or a subset of the more general category. We drop those that are domain specific; for example, of the words for the category of procedures of democracy, we dropped the word “Japanese Diet”, which is specific to the Japanese political system.
5. For categories on historical events and negative figures, we simply used the name of the person or of the historical event.
The full list of words for each category is presented in Appendix D.
We opt for the data-driven approach in (3) and (4) to select target words in
order to limit researcher degrees of freedom. Furthermore, the selection of
representative words in (1) and the pruning of synonyms in (4) were done by
three native Chinese speakers to ensure the selected words provide good
coverage of how the categories of interest are discussed in the Chinese
context.
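Steps (2) and (3) above amount to a nearest-neighbor query by cosine similarity. A minimal sketch (toy 2-dimensional vectors with hypothetical values; the function name is our own):

```python
import numpy as np

def nearest_neighbors(word, embeddings, k=50):
    """Return the k words most cosine-similar to `word` (excluding itself)."""
    target = embeddings[word]
    def cos(v):
        return np.dot(target, v) / (np.linalg.norm(target) * np.linalg.norm(v))
    scored = [(w, cos(v)) for w, v in embeddings.items() if w != word]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [w for w, _ in scored[:k]]

# Toy embedding space (hypothetical values).
emb = {
    "election": np.array([1.0, 0.0]),
    "vote":     np.array([0.9, 0.1]),
    "ballot":   np.array([0.8, 0.2]),
    "airplane": np.array([0.0, 1.0]),
}
assert nearest_neighbors("election", emb, k=2) == ["vote", "ballot"]
```

In the paper's setting the same query would be run once per representative word in each of the two embedding spaces, with k=50.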
### 4.2. Selecting Attribute Words
We use two strategies for selecting attribute words. First, we draw on the
literature on propaganda in China to select a set of positive and negative
words that would be consistent with what we know about CCP propaganda
narratives. As scholars of propaganda have pointed out, the CCP has actively
tried to promote the image of itself and China’s political system as stable
and prosperous, while characterizing Western democratic systems as chaotic and
in economic decline (Brady, 2015; Zhang and Boukes, 2019; Eco, 2016).
Therefore, for our first set of words, which we call “Propaganda Attributes
Words,” positive words include synonyms of stability and prosperity, while
negative attribute words include synonyms of chaos, decline, and instability.
The full list for the set of propaganda attribute words is presented in
Appendix E.
For the second set of words, we are interested in whether the target words are
more generally evaluated differently between the two corpuses. To test this,
we make use of a dictionary of evaluative words specifically designed for
Chinese natural language processing (Wang and Ku, 2016). The dictionary codes
whether an evaluative word is positive, negative, or neutral. We follow the
preprocessing instructions by Wang and Ku (2016) by dropping all neutral words
and only using the list of positive and negative evaluative words. A sample of
the set of evaluative words is presented in Appendix F. For subsequent
discussions, we refer to this list of attribute words as the “Evaluative
Attribute Words.”
### 4.3. Pre-processing Word Embedding Spaces
There are two notable challenges when comparing different word embeddings.
One, word embeddings produced by stochastic algorithms such as Word2Vec will
embed words in non-aligned spaces defined by different basis vectors. This
precludes naive comparison of word distances across distinct corpuses
(Hamilton et al., 2016; Rodman, 2019). If the centroids of the two word
embeddings are different, then using cosine similarity (i.e. the cosine of the
angle between two vectors) to compare word associations across different
corpuses can yield uninterpretable results. Figure 2 presents a simplified
example of this problem. One word embedding, by virtue of being further away
from the origin, yields a smaller angle between the two vectors, even though
the relative positions of the two vectors in the two word embeddings are the
same.
To solve this problem, we standardize the basis vectors of each word embedding
by subtracting the mean and dividing by the standard deviation of each basis
vector, so that every word embedding is centered at the origin with unit
standard deviation along each dimension.
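A minimal sketch of this standardization, treating an embedding as an n-words-by-k-dimensions matrix (the function name is our own):

```python
import numpy as np

def standardize(matrix):
    """Center each basis dimension (column) at zero and scale it to unit
    standard deviation, so embeddings from different corpuses share a
    common origin before cosine comparisons."""
    return (matrix - matrix.mean(axis=0)) / matrix.std(axis=0)

rng = np.random.default_rng(0)
E = rng.normal(loc=5.0, scale=3.0, size=(100, 4))  # an off-center embedding
Z = standardize(E)
assert np.allclose(Z.mean(axis=0), 0.0, atol=1e-10)
assert np.allclose(Z.std(axis=0), 1.0, atol=1e-10)
```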
An example of nonalignment between two word embeddings
Figure 2. Nonalignment between Two Word Embeddings
Another problem is that word embeddings trained on different corpuses can have
different vocabularies. This precludes us from comparing words that appear in
one word embedding but are not present in the other word embedding. Because of
this, we only keep the intersection of the vocabularies of word embeddings. As
a result, six target words were dropped in the comparison between Wikipedia-
and Baidu Baike-trained word embeddings and five target words were dropped in
the comparison between Wikipedia- and _People’s Daily_ -trained word
embeddings.
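The vocabulary-intersection step can be sketched as follows (a hypothetical helper; in practice the dropped words are the six and five target words reported above):

```python
def shared_vocabulary(emb_a, emb_b, words):
    """Keep only target words present in both embedding vocabularies;
    report the rest as dropped."""
    kept = [w for w in words if w in emb_a and w in emb_b]
    dropped = [w for w in words if w not in kept]
    return kept, dropped

# Toy vocabularies (hypothetical values).
emb_a = {"freedom": [0.1], "election": [0.2]}
emb_b = {"freedom": [0.3]}
kept, dropped = shared_vocabulary(emb_a, emb_b, ["freedom", "election"])
assert kept == ["freedom"] and dropped == ["election"]
```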
### 4.4. Expectations
We expect ideas that are normatively appealing but discouraged in China to be
portrayed more negatively in Baidu Baike. We expect figures who are denounced
by the CCP to be portrayed more negatively in Baidu Baike. On the other hand,
we expect categories that are targets of positive propaganda to be portrayed
more positively in Baidu Baike. Overall, we expect that censorship and
curation of Baidu Baike will mean that the words we are interested in will be
treated similarly in Baidu Baike and state media outlet _The People’s Daily_.
A summary of our theoretical expectations is presented in Table 1 below.
Table 1. Theoretical Expectations
Category | Sign
---|---
Freedom | $-$
Democracy | $-$
Election | $-$
Collective Action | $-$
Negative Figures | $-$
Social Control | $+$
Surveillance | $+$
CCP | $+$
Historical Events | $+$
Positive Figures | $+$
_Note:_ Negative sign indicates Baidu Baike and _People’s Daily_ are less
favorable than Wikipedia and positive sign indicates that Baidu Baike and
_People’s Daily_ are more favorable than Wikipedia.
### 4.5. Limitations
Through this design, we test whether there are differences between word
embeddings trained on Chinese language Wikipedia and those trained on Baidu
Baike in topics where there is evidence of censorship on Baidu Baike. While we
think the evidence we produce is suggestive that censorship impacts the
placement of the word embeddings, we cannot isolate the effect of censorship
outside of other differences that may exist between Baidu Baike and Chinese
language Wikipedia. Isolating the effect of censorship is difficult in part
because censorship’s influence is pervasive, affecting the content not only
through pre-publication review, but also likely through the propensity for
individuals to become editors and the information that they have and are
willing to contribute. This makes it very difficult to establish a
counterfactual of what the content on Baidu Baike would have looked like
without censorship. We believe Chinese language Wikipedia is the closest
approximation to this counterfactual.
### 4.6. Results
Following Caliskan et al. (2017), we use a randomization test with one-sided
p-value to compare how words in each category are represented in Wikipedia,
Baidu Baike and _People’s Daily_.
Formally, let $X_{i}$, $i\in\{a,b\}$, be the set of word vectors for the target
words from embeddings $a$ and $b$ respectively. Let $A_{i}$ and $B_{i}$,
$i\in\{a,b\}$, be the two sets of word vectors for the attribute words, with
$A$ being the set of positive attributes and $B$ being the set of negative
attributes. The subscript $i$ again denotes the embedding that the word vectors
are from. Let $\cos(\vec{p},\vec{q})$ denote the cosine of the angle between
vectors $\vec{p}$ and $\vec{q}$. The test statistic is
$S(X,A,B)=\sum_{x\in X_{a}}s(x,A_{a},B_{a})-\sum_{x\in X_{b}}s(x,A_{b},B_{b})$
where
$s(t,A,B)=\mbox{mean}_{p\in A}\cos(\vec{t},\vec{p})-\mbox{mean}_{q\in
B}\cos(\vec{t},\vec{q})$
Let $\Omega$ denote the set of all possible randomization realizations of the
assignment of each target word's vectors to embeddings $a$ and $b$. The
one-sided p-value of the permutation test is
$\mbox{Pr}_{\omega\in\Omega}[S_{\omega}(X,A,B)>S(X,A,B)]$
We present the effect size of the difference in word associations across word
embeddings, defined as
$\frac{\mbox{mean}_{x\in X_{a}}s(x,A_{a},B_{a})-\mbox{mean}_{x\in
X_{b}}s(x,A_{b},B_{b})}{\mbox{std.dev}_{x\in X_{a}\cup X_{b}}s(x,A_{i},B_{i})}$
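The association score, permutation p-value, and effect size can be sketched as follows. This is a minimal sketch, assuming the per-word association scores $s(x, A_i, B_i)$ have already been computed for each target word in each embedding; the helper names and toy scores are our own:

```python
import numpy as np

def cos(u, v):
    """Cosine of the angle between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(t, A, B):
    """s(t, A, B): mean cosine with positive attributes A minus mean
    cosine with negative attributes B."""
    return np.mean([cos(t, a) for a in A]) - np.mean([cos(t, b) for b in B])

def permutation_pvalue(scores_a, scores_b, n_perm=2000, seed=0):
    """One-sided p-value: fraction of random reassignments of each target
    word's two association scores that exceed the observed statistic."""
    rng = np.random.default_rng(seed)
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    observed = scores_a.sum() - scores_b.sum()
    count = 0
    for _ in range(n_perm):
        flip = rng.integers(0, 2, size=len(scores_a)).astype(bool)
        perm_a = np.where(flip, scores_b, scores_a)
        perm_b = np.where(flip, scores_a, scores_b)
        count += (perm_a.sum() - perm_b.sum()) > observed
    return count / n_perm

def effect_size(scores_a, scores_b):
    """Standardized mean difference in association scores."""
    pooled = np.concatenate([scores_a, scores_b])
    return (np.mean(scores_a) - np.mean(scores_b)) / np.std(pooled, ddof=1)

# Toy association scores (hypothetical values) for five target words.
sa = [0.30, 0.25, 0.40, 0.35, 0.28]      # e.g. embedding a
sb = [-0.10, -0.05, 0.00, -0.20, -0.15]  # e.g. embedding b
assert permutation_pvalue(sa, sb) < 0.05
assert effect_size(sa, sb) > 0.8
```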
Conventional cutoffs for small, medium, and large effect sizes are $0.2$,
$0.5$, and $0.8$, respectively. The comparisons between Wikipedia and Baidu
Baike word embeddings and between Wikipedia and _People’s Daily_ word
embeddings are presented in Table 2 and Table 3 respectively.
Table 2. Wikipedia vs. Baidu Baike
| Propaganda Attributes | Evaluative Attributes
---|---|---
| effect size | p-value | effect size | p-value
Freedom | -0.62 | 0.01 | 0.06 | 0.60
Democracy | -0.50 | 0.05 | -0.56 | 0.03
Election | -0.27 | 0.13 | -0.33 | 0.05
Collective Action | -0.66 | 0.00 | -0.09 | 0.34
Negative Figures | -0.91 | 0.00 | 0.50 | 0.99
Social Control | 0.70 | 0.04 | 0.68 | 0.01
Surveillance | 0.09 | 0.32 | 0.73 | 0.00
CCP | 1.05 | 0.02 | 1.39 | 0.00
Historical Events | 0.14 | 0.19 | 0.27 | 0.01
Positive Figures | 0.59 | 0.00 | 1.17 | 0.00
Table 3. Wikipedia vs. _People’s Daily_
| Propaganda Attributes | Evaluative Attributes
---|---|---
| effect size | p-value | effect size | p-value
Freedom | -0.29 | 0.11 | -0.51 | 0.01
Democracy | -0.40 | 0.09 | -0.97 | 0.00
Election | -0.43 | 0.04 | -0.91 | 0.00
Collective Action | -0.81 | 0.00 | -0.10 | 0.34
Negative Figures | 0.44 | 0.91 | -0.06 | 0.41
Social Control | 0.82 | 0.01 | 0.58 | 0.03
Surveillance | 0.31 | 0.06 | 0.84 | 0.00
CCP | 1.39 | 0.00 | 1.22 | 0.00
Historical Events | 0.29 | 0.08 | 0.22 | 0.04
Positive Figures | 1.51 | 0.00 | 1.29 | 0.00
Across most categories and for both sets of attribute words, the differences
in word embeddings are in line with our theoretical expectations. Table 2
indicates that for categories Freedom, Democracy, Election, Collective Action,
and Negative Figures, word embeddings trained with Baidu Baike display a more
negative connotation than embeddings trained with Wikipedia. For categories
Social Control, Surveillance, CCP, and Historical Events, word embeddings
trained with Baidu Baike display a more positive connotation than embeddings
trained with Wikipedia. The effect sizes indicate substantial differences for
target words that are related to democracy and those that are targets of
propaganda. This is consistent across both sets of attribute words and across
the two comparisons. In Table 3 we show that the effect sizes when comparing
Wikipedia and Baidu Baike are similar to comparing Wikipedia with the
government publication _The People’s Daily_.
While most categories accord with our expectations, one in particular deserves
further explanation. Negative figures, including activists and dissidents whom
the CCP denounces, are significantly more associated with negative words
on Baidu Baike and _People’s Daily_ in only one instance and even have a positive
effect size comparing Baidu Baike to Wikipedia in Table 2. It is likely that
because of censorship there is very little information about these figures in
the Baidu Baike and _People’s Daily_ corpuses, so their word embeddings do not
show strong relationships with the attribute words. To examine this, we used
Google Search to count the number of pages on Chinese language Wikipedia and
Baidu Baike that link to each negative figure. Out of 18 negative figures,
Chinese language Wikipedia has more page links to two thirds of them, even
though Chinese language Wikipedia is 16 times smaller. Therefore, the
uncertainty around the result we have for negative figures may be a result of
lack of information about these individuals in Baidu Baike.
## 5\. Application: Sentiment Analysis of News Headlines
In this section, we demonstrate that the differences we detected in word
embeddings have a tangible effect on downstream machine learning tasks. To do
this, we make use of the pre-trained word embeddings on each of the different
corpuses as inputs in a larger machine learning model that automatically
labels the sentiment polarity of news headlines. We chose the automated
classification of news headlines because machine learning based on news
headlines is used in recommendation systems for social media news feeds and
news aggregators, as well as for analysts using automated classification of
news to make stock price and economic predictions.777For example, EquBot
https://equbot.com/. We show that using the pre-trained word embeddings from
Baidu Baike and Chinese language Wikipedia with identical training data
produces sentiment predictions for news headlines that differ systematically
across our categories of interest.
### 5.1. Data and Method
We imagine a scenario where the task is to label the sentiment of news
headlines where the model is trained on a large, general sample of news
headlines. We then examine the performance of this model on an oversample of
headlines that include our target words. This allows us to evaluate how a
general news sentiment classifier performs on words that are politically
valenced in China, varying the origin of the pre-trained embeddings, but
holding constant the sentiment labels in the training and test sets.
For the training set, we randomly select 5,000 headlines from the TNEWS
dataset. The TNEWS dataset contains 73,360 Chinese news headlines of various
categories.888For more details about the TNEWS dataset, see Appendix. It is
part of the Chinese Language Understanding Evaluation (CLUE) Benchmark and is
widely used as the training data for Chinese news classification models. For
each of the randomly selected 5,000 headlines, we label the headline as
positive, negative, or neutral in line with the general sentiment of the
headline. For our training set from the TNEWS dataset, we have 1,861 headlines
with positive sentiment, 781 with negative sentiment, and 2,342 with neutral
sentiment.99916 duplicated news headlines are dropped, resulting in 4,984
headlines in total.
For the test set, we collect Chinese news headlines that contain any of our
target words from Google News. For each of the target words, we collect up to
100 news headlines. Because some target words yield only a handful of news
headlines, we collected 12,669 news headlines in total, out of 182 target
words. Data collection was done in July and August of 2020. Using the exact
same coding scheme as the training set, we label these headlines as positive,
negative, or neutral. The test set contains 5,291 headlines with positive
sentiment, 3,913 with negative sentiment, and 3,424 with neutral
sentiment.10101041 duplicated news headlines are dropped, resulting in 12,628
headlines in total.
We preprocess the news headlines by removing punctuation, numbers, special
characters, the names of the news agency (if they appear on the headline), and
duplicated headlines. To convert the news headlines into input for machine
learning models, we first use a Chinese word segmentation tool to segment each
news headline into a sequence of words. We then look up the word embedding for
each word in the sequence. Following a conventional approach, we take the
average of the pre-trained word embeddings of the words in a given news
headline to represent each headline. Any word that does not have a
corresponding word embedding in the Word2Vec models is dropped. This leaves us
with three different representations of the headlines: one for Baidu Baike,
one for Chinese language Wikipedia, and one for the _People’s Daily_.
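The averaging step can be sketched as follows (tokens are assumed to come from a Chinese word segmenter; the helper name and toy 2-dimensional vectors are our own, while real vectors are 300-dimensional):

```python
import numpy as np

def headline_vector(tokens, embeddings, dim=300):
    """Average the pre-trained vectors of a headline's segmented tokens;
    tokens missing from the embedding vocabulary are dropped. Returns a
    zero vector if no token is covered."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Toy 2-d embedding (hypothetical values).
emb = {"经济": np.array([1.0, 0.0]), "增长": np.array([0.0, 1.0])}
v = headline_vector(["经济", "增长", "<unk>"], emb, dim=2)
assert np.allclose(v, [0.5, 0.5])
```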
With each of these three different representations of the text based on
different pre-trained embeddings, we train three machine learning models –
Naive Bayes (NB), support vector machines (SVM) and TextCNN (Kim, 2014). For
each model, we use identical training labels from the TNEWS dataset. (Because headlines with neutral labels are noisier, and given the difficulty of training a three-class classifier with limited training data, we report results in the main text based on models trained with only positive and negative headlines. Results with neutral headlines included are reported in the Appendix; our substantive conclusions are largely intact.) This yields a total of nine models, three for each set of pre-trained word
embeddings. Each trained model is then used to predict sentiment labels on the
test set. Because of the stochastic nature of TextCNN, the TextCNN results are
averaged over 10 runs for each model.
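Averaging predictions over the stochastic TextCNN runs is straightforward; the sketch below operates on numeric per-headline scores, and `average_over_runs` is a hypothetical helper name of ours:

```python
def average_over_runs(run_predictions):
    """Average per-headline sentiment scores over repeated stochastic runs.

    run_predictions: list of runs, each run a list of scores (one score
    per test-set headline).
    """
    n_runs = len(run_predictions)
    n_items = len(run_predictions[0])
    return [sum(run[i] for run in run_predictions) / n_runs
            for i in range(n_items)]

# Two toy runs over three headlines.
avg = average_over_runs([[1.0, 0.0, 1.0], [0.0, 0.0, 1.0]])  # [0.5, 0.0, 1.0]
```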
We compare different trained models of the same architecture (NB, SVM, or
TextCNN) by looking at the mis-classifications for each category of target
words. Intuitively, a model that is pre-disposed to associate more positive
words with a certain category of headlines will have more false-positives
(e.g. negative headlines mis-classified as positive), whereas a model that is
pre-disposed to associate more negative words with a certain category of
headlines will have more false-negatives (e.g. positive headlines mis-
classified as negative).
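The directional mis-classification counts just described can be computed directly from the paired human and model labels; `fp_fn_counts` below is an illustrative helper of ours, not the paper's code:

```python
def fp_fn_counts(human_labels, predicted_labels):
    """Count mis-classifications in each direction for one set of headlines.

    A false positive here is a headline humans labeled negative that the
    model predicted positive; a false negative is the reverse.
    """
    fp = sum(1 for h, p in zip(human_labels, predicted_labels)
             if h == "negative" and p == "positive")
    fn = sum(1 for h, p in zip(human_labels, predicted_labels)
             if h == "positive" and p == "negative")
    return fp, fn

human = ["positive", "negative", "negative", "positive"]
pred = ["negative", "positive", "negative", "positive"]
counts = fp_fn_counts(human, pred)  # (1, 1)
```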
Because the overall mis-classification rate may differ for headlines of
different target words, we use a linear mixed effects model to compare the
different embeddings, allowing headlines of different target words to have
different intercepts. More formally, let $L_{ij}$ be a list of $N$ human-
labeled sentiment scores for headlines containing target word $i$ in category
$j$. Let $\hat{L}_{ij}^{a}$ and $\hat{L}_{ij}^{b}$ be the predicted sentiment
scores from model $a$ and $b$ for the same headlines. We estimate the linear
mixed effects model for each category $j$ of news headlines by
(1) $\displaystyle y_{j}=\alpha_{ij}+X_{j}\beta_{j}+\epsilon_{j}$
where the outcome variable $y_{j}$ is a $2N\times 1$ vector of differences in classifications against human labels, $\big(\begin{smallmatrix}\hat{L}_{j}^{a}-L_{j}\\ \hat{L}_{j}^{b}-L_{j}\end{smallmatrix}\big)$. $\alpha_{ij}$ is a $2N\times 1$ vector of random intercepts corresponding to headlines of each target word $i$ in category $j$. $X_{j}$ is an indicator variable for model $a$ (as opposed to model $b$), and $\beta_{j}$ is the coefficient of interest.
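The data layout fed into this model can be sketched in plain Python. The estimation itself would be done with a mixed-model routine (e.g. statsmodels' `MixedLM` or R's lme4) and is not shown here; sentiment is coded as numeric scores, and the function name and toy data are ours:

```python
def stack_for_mixed_model(L, L_hat_a, L_hat_b, target_word_ids):
    """Build the 2N outcome vector y = (L_hat_a - L, L_hat_b - L), the model
    indicator X (1 for model a, 0 for model b), and the target-word grouping
    used for the random intercepts."""
    assert len(L) == len(L_hat_a) == len(L_hat_b) == len(target_word_ids)
    y = ([a - l for a, l in zip(L_hat_a, L)]
         + [b - l for b, l in zip(L_hat_b, L)])
    X = [1] * len(L) + [0] * len(L)
    groups = target_word_ids + target_word_ids
    return y, X, groups

# Three headlines covering two target words; sentiment coded +1 / -1.
y, X, groups = stack_for_mixed_model(
    L=[1, -1, 1], L_hat_a=[1, 1, 1], L_hat_b=[-1, -1, 1],
    target_word_ids=["自由", "自由", "民主"])
# y = [0, 2, 0, -2, 0, 0], X = [1, 1, 1, 0, 0, 0]
```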
### 5.2. Results
Before turning to the results of the impact of pre-trained embeddings on the
predicted classifications of the model, we report the overall accuracy of each
of the models on the test set in Table 4. Overall, TextCNN performs the best of the three model architectures. However, within each architecture no set of pre-trained word embeddings performs better than the others; they all perform quite similarly.
Table 4. Model Accuracy in Test Set
| Model | Accuracy
---|---|---
Naive Bayes | |
| Baidu Baike | 76.83
| Wikipedia | 76.29
SVM | |
| Baidu Baike | 77.12
| Wikipedia | 76.68
TextCNN | |
| Baidu Baike | 82.84
| Wikipedia | 81.60
Even though the selection of pre-trained embeddings does not seem to impact
overall accuracy, the pre-trained embeddings do influence the false positive
and false negative rates of different categories of headlines. In Table 5 we
show the comparison of Baidu Baike and Wikipedia, where Baidu Baike is model
$a$ and Wikipedia is model $b$. This means $X_{j}$ from Equation 1 is 1 for category $j$ if the model was trained with Baidu Baike word embeddings and 0 if it was trained with Wikipedia embeddings. A negative coefficient indicates that on average Baidu Baike
rates this category more negatively than Wikipedia. A positive coefficient
indicates that on average Baidu Baike rates this category as more positive
than Wikipedia.
Table 5. Baidu Baike vs. Wikipedia
| Naive Bayes | SVM | TextCNN
---|---|---|---
| estimate | p-value | estimate | p-value | estimate | p-value
Freedom | -0.13 | 0.00 | -0.06 | 0.00 | -0.04 | 0.04
Democracy | -0.08 | 0.00 | -0.05 | 0.04 | -0.04 | 0.06
Election | -0.11 | 0.00 | -0.06 | 0.03 | -0.02 | 0.48
Collective Action | -0.13 | 0.00 | -0.07 | 0.00 | -0.05 | 0.01
Negative Figures | -0.04 | 0.03 | 0.00 | 0.96 | -0.01 | 0.54
Social Control | 0.03 | 0.12 | 0.00 | 0.93 | 0.03 | 0.13
Surveillance | -0.01 | 0.68 | -0.01 | 0.80 | 0.00 | 0.91
CCP | 0.03 | 0.21 | 0.01 | 0.65 | 0.03 | 0.05
Historical Events | -0.04 | 0.04 | 0.01 | 0.75 | -0.02 | 0.26
Positive Figures | 0.06 | 0.00 | 0.06 | 0.00 | 0.06 | 0.00
The results are largely consistent with what we found in Section 4.
Overwhelmingly, Wikipedia predicts headlines that contain target words in the
categories of freedom, democracy, election, and collective action to be more
positive. In contrast, Baidu Baike predicts headlines that contain target
words of figures that the CCP views positively to be more positive. The
exceptions to our expectations are the categories of social control,
surveillance, CCP, and historical events, where we cannot reject the null of
no difference between the two corpuses, although they do not go against our
expectations. We find similar results for the comparison between _People’s
Daily_ and Chinese language Wikipedia, in Table 6.
Table 6. _People’s Daily_ vs. Wikipedia
| Naive Bayes | SVM | TextCNN
---|---|---|---
| estimate | p-value | estimate | p-value | estimate | p-value
Freedom | -0.22 | 0.00 | -0.08 | 0.00 | -0.12 | 0.00
Democracy | -0.14 | 0.00 | -0.06 | 0.02 | -0.07 | 0.00
Election | -0.13 | 0.00 | -0.01 | 0.62 | -0.04 | 0.12
Collective Action | -0.19 | 0.00 | -0.05 | 0.05 | -0.06 | 0.00
Negative Figures | 0.01 | 0.78 | 0.01 | 0.72 | -0.05 | 0.01
Social Control | 0.05 | 0.00 | 0.01 | 0.66 | 0.01 | 0.63
Surveillance | -0.04 | 0.11 | -0.02 | 0.34 | -0.03 | 0.22
CCP | 0.07 | 0.00 | 0.00 | 0.82 | 0.02 | 0.24
Historical Events | -0.01 | 0.77 | 0.02 | 0.29 | -0.01 | 0.44
Positive Figures | 0.13 | 0.00 | 0.04 | 0.00 | 0.06 | 0.00
To provide intuition, Figure 3 shows examples of headlines labeled differently by the model trained with Baidu Baike pre-trained embeddings and the model trained with Chinese language Wikipedia embeddings in our test set. The model trained with Baidu Baike pre-trained word embeddings labeled "Tsai Ing-wen: Hope Hong Kong Can Enjoy Democracy as Taiwan Does" as negative, while the Wikipedia model and human coders labeled this headline as positive. The difference in these predictions does not stem from the training data or the model, both of which are identical. Instead, the associations encoded in the pre-trained word embeddings drive these differences.
Example 1: 蔡英文: 盼台湾享有的民主自由香港也可以有 (Tsai Ing-wen: Hope Hong Kong Can Enjoy Democracy as Taiwan Does). Baidu Baike Label: -; Wikipedia Label: +; Human Label: +
Example 2: 封杀文化席卷欧美 自由反被自由误? (Cancel Culture Spreading through the Western World, Is It the Fault of Freedom?). Baidu Baike Label: -; Wikipedia Label: +; Human Label: -
Example 3: 共产暴政录: 抗美援朝真相 (Communist Tyranny: The Truth about Chinese Involvement in the Korean War). Baidu Baike Label: +; Wikipedia Label: -; Human Label: -
Example 4: 香港《国安法》:中国驻港部队司令强硬表态维稳 (Hong Kong Security Law: PLA Hong Kong Garrison Commander Takes Tough Stance in Support of Stability Maintenance). Baidu Baike Label: +; Wikipedia Label: -; Human Label: -
Figure 3. Examples of Headlines Labeled Differently By Naive Bayes Models Trained with Baidu Baike and Wikipedia
## 6\. Conclusion
The extensive use of censorship in China means that the Chinese government is
in the dominant position to shape the political content of large Chinese
language corpuses. Even though corpuses like Chinese language Wikipedia exist
outside of the Great Firewall, they are significantly weakened by censorship,
as shown by the smaller size of Chinese language Wikipedia in comparison to
Baidu Baike. While more work would need to be done to understand how these discrepancies affect users of any particular application, we showed in this
paper that political differences reflective of censorship exist between two of
the corpuses commonly used to train Chinese language NLP. While our work
focuses on word embeddings, the discrepancies we uncovered likely affect other
pre-trained NLP models as well, such as BERT (Devlin et al., 2018) and ERNIE
(Sun et al., 2019b). Furthermore, these political differences present a
pathway through which political censorship can have downstream effects on
applications that may not themselves be political but that rely on NLP, from
predictive text and article recommendation systems to social media news feeds
and algorithms that flag disinformation.
The literature in computer science has taken on the problem of bias in
training data by looking for ways to de-bias it – for example, through data
augmentation (Zhao et al., 2018), de-biasing word embeddings (Bolukbasi et
al., 2016), and adversarial learning (Zhang et al., 2018). (Although methods for de-biasing have also been shown to often be inadequate; see Gonen and Goldberg, 2019; Blodgett et al., 2020.) However, it is unclear how to think
about de-biasing attitudes toward democracy, freedom, surveillance, and social
control. What does unbiased look like in these circumstances, and how would
one test it? The only way we can think about an unbiased training set in this
circumstance is one where certain ideas are not automatically precluded from
being included in any given corpus. But knowing what perspectives have been
omitted is difficult to determine and correct after the fact.
###### Acknowledgements.
This work is partially supported by the National Science Foundation under Grant No. 1738411 (https://www.nsf.gov/awardsearch/showAward?AWD_ID=1738411). Thanks to Guanwei Hu, Yucong Li, and Zoey Jialu Xu for
their excellent research assistance. We thank Michelle Torres, Allan Dafoe,
and Jeffrey Ding for their helpful comments on this work.
## References
* Eco (2016) 2016. No News is Bad News. _The Economist_ (2016). https://www.economist.com/china/2016/02/04/no-news-is-bad-news
* Wik (2019) 2019. Wikipedia blocked in China in All Languages. _BBC News_ (2019). https://www.bbc.com/news/technology-48269608
* Barocas and Selbst (2016) Solon Barocas and Andrew D Selbst. 2016. Big data’s disparate impact. _Calif. L. Rev._ 104 (2016), 671.
* Blodgett et al. (2020) Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (Technology) is Power: A Critical Survey of “Bias” in NLP. _arXiv preprint arXiv:2005.14050_ (2020).
* Bojanowski et al. (2017) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. _Transactions of the Association for Computational Linguistics_ 5 (2017), 135–146.
* Bolukbasi et al. (2016) Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In _Advances in neural information processing systems_. 4349–4357.
* Brady (2015) Anne-Marie Brady. 2015. Authoritarianism Goes Global (II): China’s Foreign Propaganda Machine. _Journal of Democracy_ 26, 4 (2015), 51–59.
* Caliskan et al. (2017) Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. _Science_ 356, 6334 (2017), 183–186.
* Chen and Yang (2019) Yuyu Chen and David Y. Yang. 2019. The Impact of Media Censorship: 1984 or Brave New World? _American Economic Review_ 109, 6 (2019).
* Clark et al. (2017) Justin Clark, Robert Faris, and Rebekah Heacock Jones. 2017. Analyzing Accessibility of Wikipedia Projects Around the World. _Berkman Klein Center Research Publication_ 2017-4 (2017).
* Deibert et al. (2008) Ronald Deibert, John Palfrey, Rafal Rohozinski, Jonathan Zittrain, and Janice Gross Stein. 2008. _Access Denied: The Practice and Policy of Global Internet Filtering_. MIT Press, Cambridge.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_ (2018).
* Dressel and Farid (2018) Julia Dressel and Hany Farid. 2018. The accuracy, fairness, and limits of predicting recidivism. _Science Advances_ 4, 1 (2018), eaao5580.
* Garg et al. (2018) Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. _Proceedings of the National Academy of Sciences_ 115, 16 (2018), E3635–E3644.
* Geiger et al. (2020) R Stuart Geiger, Kevin Yu, Yanlai Yang, Mindy Dai, Jie Qiu, Rebekah Tang, and Jenny Huang. 2020. Garbage in, garbage out? Do machine learning application papers in social computing report where human-labeled training data comes from?. In _Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency_. 325–336.
* Gonen and Goldberg (2019) Hila Gonen and Yoav Goldberg. 2019. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. 609–614.
* Hamilton et al. (2016) William L Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. 1489–1501.
* Kim (2014) Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_. 1746–1751.
* King et al. (2013) Gary King, Jennifer Pan, and Margaret E Roberts. 2013. How censorship in China allows government criticism but silences collective expression. _American Political Science Review_ (2013), 326–343.
* King et al. (2017) Gary King, Jennifer Pan, and Margaret E Roberts. 2017. How the Chinese government fabricates social media posts for strategic distraction, not engaged argument. _American Political Science Review_ 111, 3 (2017), 484–501.
* Li et al. (2018) Shen Li, Zhe Zhao, Renfen Hu, Wensi Li, Tao Liu, and Xiaoyong Du. 2018. Analogical reasoning on chinese morphological and semantic relations. _arXiv preprint arXiv:1805.06504_ (2018).
* MacKinnon (2012) Rebecca MacKinnon. 2012. _Consent of the Networked: The Worldwide Struggle For Internet Freedom_. Basic Books, New York.
* Manzini et al. (2019) Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. 615–621.
* Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In _Advances in neural information processing systems_. 3111–3119.
* Morozov (2011) Evgeny Morozov. 2011. _The Net Delusion: The Dark Side of Internet Freedom_. PublicAffairs, New York.
* Ng (2013) Jason Ng. 2013. Who’s the Boss? The difficulties of identifying censorship in an environment with distributed oversight A large-scale comparison of Wikipedia China with Hudong and Baidu Baike. _Citizen Lab_ (2013). https://citizenlab.ca/2013/08/a-large-scale-comparison-of-wikipedia-china-with-hudong-and-baidu-baike/
* Oberhaus (2017) Daniel Oberhaus. 2017. Wikipedia’s Switch to HTTPS Has Successfully Fought Government Censorship. _Motherboard_ (2017). https://bit.ly/2T5aEWm
* Pan and Roberts (2019) Jennifer Pan and Margaret E. Roberts. 2019. Censorship’s Effect on Incidental Exposure to Information: Evidence from Wikipedia. _SAGE Open_ (2019).
* Papakyriakopoulos et al. (2020) Orestis Papakyriakopoulos, Simon Hegelich, Juan Carlos Medina Serrano, and Fabienne Marco. 2020. Bias in word embeddings. In _Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency_. 446–457.
* Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In _Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)_. 1532–1543.
* Qi et al. (2018) Ye Qi, Devendra Singh Sachan, Matthieu Felix, Sarguna Janani Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? _arXiv preprint arXiv:1804.06323_ (2018).
* Roberts (2018) Margaret E. Roberts. 2018. _Censored: Distraction and Diversion Inside China’s Great Firewall_. Princeton University Press, Princeton.
* Rodman (2019) Emma Rodman. 2019. A Timely Intervention: Tracking the Changing Meanings of Political Concepts with Word Vectors. _Political Analysis_ (2019), 1–25.
* Sanovich et al. (2018) Sergey Sanovich, Denis Stukal, and Joshua A Tucker. 2018. Turning the virtual tables: Government strategies for addressing online opposition with an application to Russia. _Comparative Politics_ 50, 3 (2018), 435–482.
* Shankar et al. (2017) Shreya Shankar, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, and D Sculley. 2017. No Classification without Representation: Assessing Geodiversity Issues in Open Data Sets for the Developing World. _stat_ 1050 (2017), 22.
* Spirling and Rodriguez (2019) Arthur Spirling and P Rodriguez. 2019. Word embeddings: What works, what doesn’t, and how to tell the difference for applied research. (2019).
* Sun et al. (2019a) Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019a. Mitigating Gender Bias in Natural Language Processing: Literature Review. _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ (2019). https://doi.org/10.18653/v1/p19-1159
* Sun et al. (2019b) Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019b. Ernie: Enhanced representation through knowledge integration. _arXiv preprint arXiv:1904.09223_ (2019).
* Sweeney (2013) Latanya Sweeney. 2013. Discrimination in online ad delivery. _Queue_ 11, 3 (2013), 10–29.
* Tatman (2017) Rachael Tatman. 2017. Gender and dialect bias in YouTube’s automatic captions. In _Proceedings of the First ACL Workshop on Ethics in Natural Language Processing_. 53–59.
* Torralba and Efros (2011) Antonio Torralba and Alexei A Efros. 2011. Unbiased look at dataset bias. In _CVPR 2011_. IEEE, 1521–1528.
* Wang and Ku (2016) Shih-Ming Wang and Lun-Wei Ku. 2016. ANTUSD: A large Chinese sentiment dictionary. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16)_. 2697–2702.
* Wei et al. (2019) Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen, and Qun Liu. 2019. NEZHA: Neural contextualized representation for chinese language understanding. _arXiv preprint arXiv:1909.00204_ (2019).
* Welinder and Black (2015) Yana Welinder, Victoria Baranetsky, and Brandon Black. 2015. Securing Access to Wikimedia Sites with HTTPS. _Wikimedia Blog_ (2015). https://diff.wikimedia.org/2015/06/12/securing-wikimedia-sites-with-https/
* Zhang et al. (2018) Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In _Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society_. 335–340.
* Zhang (2019) Jane Zhang. 2019. How Baidu built an encyclopedia with 16 times more Chinese entries than Wikipedia. _South China Morning Post_ (2019). https://www.scmp.com/tech/big-tech/article/3038402/how-baidu-baike-has-faced-against-wikipedia-build-worlds-largest
* Zhang and Boukes (2019) Xiaodong Zhang and Mark Boukes. 2019. How China’s flagship news program frames “the West”: Foreign news coverage of CCTV’s Xinwen Lianbo before and during Xi Jinping’s presidency. _Chinese Journal of Communication_ 12, 4 (2019), 414–430.
* Zhang and Zhu (2011) Xiaoquan Michael Zhang and Feng Zhu. 2011. Group size and incentives to contribute: A natural experiment at Chinese Wikipedia. _American Economic Review_ 101, 4 (2011), 1601–15.
* Zhao et al. (2018) Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_. 15–20.
## Appendix A Additional Sentiment Analysis Results
### A.1. Model Accuracy on Validation Set
In training the TextCNN models, we held out $20\%$ of our training set as a
validation set. The validation set was used to assess the quality of the
models during training. The model with the best accuracy on the validation set in each run was selected as the output model. Table A.1 reports the average accuracy (over 10 runs) of the models on the validation sets.
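Selecting the output model by validation accuracy amounts to an argmax over the accuracies recorded during a run. A minimal sketch with invented accuracy numbers (the helper name is ours):

```python
def best_checkpoint(val_accuracies):
    """Return the index of the checkpoint with the highest validation accuracy."""
    return max(range(len(val_accuracies)), key=lambda i: val_accuracies[i])

# Validation accuracy after each epoch of one (hypothetical) run.
accs = [0.81, 0.89, 0.90, 0.88]
best = best_checkpoint(accs)  # 2
```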
Table A.1. Model Accuracy on Validation Sets
2-class | |
---|---|---
| Baidu Baike | 90.29
| Wikipedia | 89.65
| People’s Daily | 92.64
3-class | |
| Baidu Baike | 67.44
| Wikipedia | 66.07
| People’s Daily | 67.80
_Note:_ “2-class” classification means that the training and validation sets
contain only negative and positive headlines. “3-class” classification
additionally has neutral headlines included.
### A.2. Sentiment Analysis Results with Neutral Headlines Included
Table A.2. Model Accuracy
| Model | Accuracy
---|---|---
Naive Bayes | |
| Baidu Baike | 56.42
| Wikipedia | 55.63
| People’s Daily | 57.79
SVM | |
| Baidu Baike | 55.53
| Wikipedia | 55.29
| People’s Daily | 54.71
TextCNN | |
| Baidu Baike | 61.71
| Wikipedia | 60.89
| People’s Daily | 58.55
Table A.3. Wikipedia vs. Baidu Baike
| Naive Bayes | SVM | TextCNN
---|---|---|---
| estimate | p-value | estimate | p-value | estimate | p-value
Freedom | -0.11 | 0.00 | -0.06 | 0.00 | -0.03 | 0.12
Democracy | -0.08 | 0.00 | -0.04 | 0.04 | -0.02 | 0.23
Election | -0.09 | 0.00 | 0.00 | 0.87 | -0.01 | 0.62
Collective Action | -0.10 | 0.00 | -0.06 | 0.00 | 0.00 | 0.89
Negative Figures | -0.05 | 0.00 | -0.01 | 0.47 | 0.03 | 0.02
Social Control | 0.01 | 0.59 | 0.03 | 0.08 | 0.03 | 0.04
Surveillance | -0.06 | 0.00 | -0.05 | 0.00 | 0.01 | 0.51
CCP | 0.05 | 0.00 | 0.03 | 0.01 | 0.04 | 0.01
Historical Events | -0.04 | 0.02 | -0.01 | 0.66 | 0.02 | 0.05
Positive Figures | 0.08 | 0.00 | 0.07 | 0.00 | 0.08 | 0.00
Table A.4. Wikipedia vs. People’s Daily
| Naive Bayes | SVM | TextCNN
---|---|---|---
| estimate | p-value | estimate | p-value | estimate | p-value
Freedom | -0.17 | 0.00 | -0.07 | 0.00 | -0.05 | 0.01
Democracy | -0.13 | 0.00 | -0.07 | 0.00 | -0.06 | 0.00
Election | -0.13 | 0.00 | 0.00 | 0.93 | -0.01 | 0.53
Collective Action | -0.15 | 0.00 | -0.06 | 0.00 | -0.02 | 0.22
Negative Figures | -0.02 | 0.17 | 0.00 | 0.96 | 0.01 | 0.32
Social Control | 0.05 | 0.00 | 0.02 | 0.22 | 0.00 | 0.97
Surveillance | -0.01 | 0.61 | -0.04 | 0.02 | -0.01 | 0.56
CCP | 0.04 | 0.01 | 0.04 | 0.00 | 0.03 | 0.02
Historical Events | -0.01 | 0.53 | 0.00 | 0.78 | 0.03 | 0.00
Positive Figures | 0.10 | 0.00 | 0.06 | 0.00 | 0.10 | 0.00
### A.3. Sentiment Analysis Results Comparing Baidu Baike and People’s Daily
Table A.5 reports the results comparing models trained on Baidu Baike and those trained on People’s Daily, where Baidu Baike is model $a$ and People’s Daily is model $b$. A positive coefficient means that on average the People’s Daily model rates a given category more positively than the Baidu Baike model.
Table A.6 reports results from the same comparison, but with headlines with neutral labels included in the training and test sets.
Table A.5. Baidu Baike vs. People’s Daily (2-class)
| Naive Bayes | SVM | TextCNN
---|---|---|---
| estimate | p-value | estimate | p-value | estimate | p-value
Freedom | -0.09 | 0.00 | -0.02 | 0.48 | -0.07 | 0.00
Democracy | -0.05 | 0.05 | -0.01 | 0.68 | -0.02 | 0.29
Election | -0.03 | 0.31 | 0.04 | 0.08 | -0.02 | 0.36
Collective Action | -0.06 | 0.01 | 0.02 | 0.28 | -0.01 | 0.57
Negative Figures | 0.05 | 0.02 | 0.01 | 0.69 | -0.04 | 0.04
Social Control | 0.03 | 0.09 | 0.01 | 0.72 | -0.02 | 0.27
Surveillance | -0.03 | 0.25 | -0.02 | 0.49 | -0.02 | 0.24
CCP | 0.04 | 0.04 | 0.00 | 0.82 | -0.01 | 0.33
Historical Events | 0.04 | 0.07 | 0.01 | 0.46 | 0.01 | 0.72
Positive Figures | 0.07 | 0.00 | -0.01 | 0.35 | 0.00 | 0.92
Table A.6. Baidu Baike vs. People’s Daily (3-class)
| Naive Bayes | SVM | TextCNN
---|---|---|---
| estimate | p-value | estimate | p-value | estimate | p-value
Freedom | -0.07 | 0.00 | -0.01 | 0.64 | -0.02 | 0.21
Democracy | -0.06 | 0.01 | -0.03 | 0.17 | -0.04 | 0.04
Election | -0.04 | 0.07 | 0.00 | 0.93 | 0.00 | 0.88
Collective Action | -0.06 | 0.00 | 0.00 | 0.84 | -0.02 | 0.26
Negative Figures | 0.03 | 0.07 | 0.01 | 0.44 | -0.02 | 0.20
Social Control | 0.04 | 0.02 | -0.01 | 0.59 | -0.03 | 0.04
Surveillance | 0.05 | 0.00 | 0.01 | 0.55 | -0.02 | 0.20
CCP | -0.01 | 0.63 | 0.01 | 0.46 | 0.00 | 0.73
Historical Events | 0.03 | 0.06 | 0.00 | 0.88 | 0.01 | 0.36
Positive Figures | 0.02 | 0.01 | -0.01 | 0.34 | 0.01 | 0.12
## Appendix B Further Details on the TNEWS Dataset
The TNEWS dataset comprises 73,360 Chinese news headlines from Toutiao, a Chinese news and information content platform. The dataset contains news
headlines from 15 categories: story, culture, entertainment, sports, finance,
house, car, education, technology, military, travel, world, stock, agriculture
and gaming.
The TNEWS dataset is part of the Chinese Language Understanding Evaluation
(CLUE) Benchmark, which serves as a common repository of datasets used to test
the accuracy of trained models. (For an equivalent of CLUE in English, see
GLUE: https://gluebenchmark.com/). Because the length of a news headline is
usually short, the TNEWS dataset is widely used as either training or testing
data for machine learning models that tackle short-text classification tasks.
Given that the downstream task we are interested in is the classification of
news headlines, the TNEWS dataset serves as the ideal source of data in our
case.
The TNEWS dataset is split into a training set (53,360 headlines), a
validation set (10,000 headlines) and a test set (10,100 headlines). For our
purpose, we pooled the three sets and randomly selected 5,000 news headlines
from the pooled set. Because the news headlines are not labeled according to
sentiment in the dataset, we manually labeled the sentiment of the headlines
in our selected subset. Each headline is labeled by two independent coders who are native Chinese speakers, and any conflicts in labeling are resolved.
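The pooling, sampling, and duplicate-dropping steps can be sketched as follows; the function name, seed, and toy headlines are ours, and a real run would draw 5,000 headlines from the pooled 73,360:

```python
import random

def pool_and_sample(train, val, test, n, seed=0):
    """Pool the dataset splits, draw a random subset of n headlines, and
    drop exact duplicates while preserving order."""
    pooled = train + val + test
    rng = random.Random(seed)
    sample = rng.sample(pooled, n)
    seen, unique = set(), []
    for h in sample:
        if h not in seen:
            seen.add(h)
            unique.append(h)
    return unique

# Toy splits with one headline duplicated across splits.
subset = pool_and_sample(["a", "b"], ["c", "a"], ["d"], n=5)
```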
## Appendix C List of Target Words
Freedom (自由) = {自由 (freedom), 言论自由 (freedom of speech), 集会自由 (freedom of
assembly), 新闻自由 (freedom of the press), 结社自由 (freedom of association), 自由权
(right to freedom), 民主自由 (democracy and freedom), 自由言论 (free speech), 创作自由
(creative freedom), 婚姻自主 (marital autonomy), 自由民主 (freedom and democracy),
自由市场 (free market), 自决 (self-determination), 自决权 (right to self-
determination), 生而自由 (born free), 自由自在 (free), 自由选择 (freedom of choice), 自由思想
(freedom of thought), 公民自由 (civil liberties), 自由竞争 (free competition), 宗教自由
(freedom of religion), 自由价格 (free price)}
Election (选举) = {选举 (election), 直接选举 (direct election), 议会选举 (parliamentary
election), 间接选举 (indirect election), 直选 (direct election), 换届选举 (general
election), 民选 (democratically elected), 投票选举 (voting), 全民公决 (referendum), 总统大选
(presidential election), 大选 (election), 普选 (universal suffrage), 全民投票
(referendum), 民主选举 (democratic election)}
Democracy (民主) = {民主 (democracy), 自由民主 (freedom and democracy), 民主自由
(democracy and freedom), 民主制度 (democratic system), 民主化 (democratization),
社会民主主义 (social democracy), 民主运动 (democratic movement), 民主主义 (democracy) , 民主改革
(democratic reform), 民主制 (democratic system), 民主选举 (demoratic election), 民主权力
(democratic rights), 多党制 (multi-party system), 民主法制 (democracy and rule of
law), 民主权利 (democratic rights)}
Social Control (维稳) = {维稳 (social control), 处突 (emergency handling), 社会治安
(public security), 反恐怖 (counter-terrorism), 公安工作 (police work), 预防犯罪 (crime
prevention), 收容审查 (arrest and investigation), 治安工作 (public security work), 大排查
(inspections), 扫黄打非 (combating pornography and illegal publications), 接访
(petition reception), 反邪教 (anti-cult)}
Surveillance (监控) = {监控 (surveillance), 监测 (monitor), 监视 (surveillance), 管控
(control), 监看 (monitor), 监视系统 (surveillance system), 截听 (tapping), 监控中心
(surveillance center), 情报服务 (intelligence service), 排查 (inspection), 监视器
(surveillance equipment), 情报搜集 (intelligence collection), 间谍卫星 (reconnaissance
satellite), 管理网络 (internet control), 监控器 (surveillance equipment), 监控站
(surveillance center), 监控室 (surveillance center), 数据采集 (data collection)}
Collective Action (抗议) = {抗议 (protest), 示威 (demonstration), 示威游行
(demonstration; march), 示威抗议 (demonstration; protest), 游行示威 (demonstration;
march), 静坐示威 (sit-in), 绝食抗议 (hunger strike), 请愿 (petition), 示威运动
(demonstration), 游行 (demonstration; march), 罢教 (strike), 静坐 (sit-in), 集会游行
(demonstration; assembly), 罢课 (strike), 签名运动 (signature campaign)}
Positive Figures (党和国家) = {毛泽东 (Mao Zedong), 江泽民 (Jiang Zemin), 胡锦涛 (Hu Jintao), 习近平 (Xi Jinping), 周恩来 (Zhou Enlai), 朱镕基 (Zhu Rongji), 温家宝 (Wen
Jiabao), 李克强 (Li Keqiang), 邓小平 (Deng Xiaoping), 曾庆红 (Zeng Qinghong), 华国锋 (Hua
Guofeng), 李鹏 (Li Peng), 杨尚昆 (Yang Shangkun), 谷牧 (Gu Mu), 吴邦国 (Wu Bangguo), 李岚清
(Li Lanqing), 纪登奎 (Ji Dengkui), 乔石 (Qiao Shi), 邹家华 (Zou Jiahua), 李瑞环 (Li
Ruihuan), 俞正声 (Yu Zhengsheng), 张高丽 (Zhang Gaoli), 田纪云 (Tian Jiyun), 回良玉 (Hui
Liangyu), 李源潮 (Li Yuanchao), 贾庆林 (Jia Qinglin), 姚依林 (Yao Yilin), 张立昌 (Zhang
Lichang), 尉健行 (Wei Jianxing), 姜春云 (Jiang Chunyun), 李铁映 (Li Tieying), 王兆国 (Wang
Zhaoguo), 罗干 (Luo Gan), 刘靖基 (Liu Jingji), 杨汝岱 (Yang Rudai), 王光英 (Wang
Guangying), 彭佩云 (Peng Peiyun), 刘云山 (Liu Yunshan), 丁关根 (Ding Guangen), 彭真 (Peng
Zhen), 胡启立 (Hu Qili), 曾培炎 (Zeng Peiyan), 何东昌 (He Dongchang)}
Negative Figures = {林彪 (Lin Biao), 王洪文 (Wang Hongwen), 张春桥 (Zhang Chunqiao),
江青 (Jiang Qing), 姚文元 (Yao Wenyuan), 刘晓波 (Liu Xiaobo), 丹增嘉措 (Tenzin Gyatso), 李洪志 (Li Hongzhi), 陈水扁 (Chen Shui-bian), 黄之锋 (Joshua Wong), 黎智英 (Jimmy Lai),
艾未未 (Ai Weiwei), 李登辉 (Lee Teng-hui), 李柱铭 (Martin Lee), 何俊仁 (Albert Ho), 陈方安生
(Anson Chan), 达赖 (Dalai Lama), 陈光诚 (Chen Guangcheng), 滕彪 (Teng Biao), 魏京生 (Wei
Jingsheng), 鲍彤 (Bao Tong)}
CCP (中国共产党) = {党中央 (central committee), 中国共产党 (CCP), 党支部 (party branch), 中共中央
(central committee), 共青团 (CCP youth league), 共青团中央 (youth league central
committee), 党委 (party committee), 中央党校 (central party school)}
Historical Events = {抗日战争 (Anti-Japanese War), 解放战争 (China’s War of
Liberation), 抗美援朝 (the War to resist U.S. Aggression and Aid Korea), 改革开放
(Reform and Opening up), 香港回归 (Hong Kong reunification), 长征 (Long March), 三大战役
(Three Great Battles in the Second Civil War), 秋收起义 (Autumn Harvest Uprising),
南昌起义 (Nanchang Uprising), 澳门回归 (Transfer of sovereignty over Macau), 志愿军
(Volunteer Army), 土地改革 (Land Reform), 六四 (June Fourth Movement), 遵义会议 (Zunyi
Conference), 九二南巡 (Deng’s Southern Tour in 1992), 广州起义 (Guangzhou Uprising),
西藏和平解放 (Annexation of Tibet), 井冈山会师 (Jinggangshan Huishi), 百团大战 (Hundred
Regiments Offensive), 文革 (Cultural Revolution), 文化大革命 (Cultural Revolution),
大跃进 (Great Leap Forward), 四人帮 (Gang of Four), 解放农奴 (Serfs Emancipation)}
## Appendix D Lists of Propaganda Attribute Words
Positive Adjectives = {稳定, 繁荣, 富强, 平稳, 幸福, 振兴, 发展, 兴旺, 昌盛, 强盛, 稳当, 安定, 局势稳定,
安定团结, 长治久安, 安居乐业}
Negative Adjectives = {动荡, 衰落, 震荡, 贫瘠, 不幸, 衰退, 萧条, 败落, 没落, 衰败, 摇摆, 不稳, 时局动荡,
颠沛流离, 动荡不安, 民不聊生}
## Appendix E Examples of Evaluative Attribute Words
Positive Evaluative = {情投意合, 精选, 严格遵守, 最根本, 确有必要, 重镇, 直接接管, 收获, 思想性, 均需参加,
可用于, 当你落后, 同意接受, 居冠, 感化, 完美演出, 急欲, 多元地理环境, 形影不离的朋友, 一举击败, …}
Negative Evaluative = {金融波动, 科以, 畸型, 向..开枪, 破碎家庭, 撬动, 头皮发麻, 颠覆, 迟疑, 血淋淋地, 驱赶,
干的好事, 责骂不休, 生硬, 沖蚀, 拉回, 走失的家畜, 燃眉之急, 喷溅, 违反, …}
For the full list of evaluative words from the augmented NTU sentiment
dictionary (ANTUSD), see https://academiasinicanlplab.github.io/#resources.
|
# A Novel Genetic Algorithm with Hierarchical Evaluation Strategy for
Hyperparameter Optimisation of Graph Neural Networks
Yingfang Yuan2, Wenjun Wang2, George M. Coghill, Wei Pang1
Y.F. Yuan, W.J. Wang and W. Pang are with the School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK. G.M. Coghill is with the School of Natural and Computer Sciences, University of Aberdeen, Aberdeen, AB24 3UE, UK. Manuscript received XX XX, 2020; revised XX, XX, 20XX.
2 Equal contribution (co-first authors). 1 Corresponding author: Wei Pang (email:
<EMAIL_ADDRESS>
###### Abstract
Graph representations of structured data can facilitate the extraction of
stereoscopic features, and they have demonstrated excellent ability when working
with deep learning systems, the so-called Graph Neural Networks (GNNs).
Choosing a promising architecture for constructing a GNN can be cast as
a hyperparameter optimisation problem, a very challenging task due to the size
of the underlying search space and the high computational cost of evaluating
candidate GNNs. To address this issue, this research presents a novel genetic
algorithm with a hierarchical evaluation strategy (HESGA), which combines the
full evaluation of GNNs with a fast evaluation approach. In full evaluation,
a GNN is represented by a set of hyperparameter values and trained on a
specified dataset, and the root mean square error (RMSE) is used to measure the
quality of the GNN represented by that set of hyperparameter values (for
regression problems). In the proposed fast evaluation, training is interrupted
at an early stage, and the difference of RMSE values between the starting and
interrupted epochs is used as a fast score, which indicates the potential of
the GNN under consideration. To coordinate both types of evaluation, the
proposed hierarchical strategy uses fast evaluation at a lower level to
recommend candidates to a higher level, where full evaluation acts as a final
assessor to maintain a group of elite individuals. To validate the
effectiveness of HESGA, we apply it to optimise two types of deep graph neural
networks. The experimental results on three benchmark datasets demonstrate its
advantages over Bayesian hyperparameter optimisation.
###### Index Terms:
Graph Neural Network, Hyperparameter Optimization, Genetic Algorithm,
Hierarchical Evaluation Strategy, Difference of RMSEs.
## I Introduction
Graphs can be used to represent features of structured data. Deep learning
equipped with graph models, the so-called graph deep learning approaches, have
recently been used to predict molecular and polymer properties [1], and
tremendous success has been achieved in comparison to the traditional
approaches based on semantic SMILES strings [2] only. Among many types of
graph deep learning systems, Graph Neural Networks (GNNs)
succeed in deep learning with promising performance and scalability [3].
Generally, a GNN models a set of objects (nodes) as well as their connections
(edges) in the form of topological graphs using stereoscopic features [4],
which is distinct from traditional vector-based machine learning systems.
Thus, GNNs are good at solving graph-related problems, and they can deal with
complex real-world systems in an end-to-end manner [5]. Technically, GNNs can
operate directly on graphs, while in molecular and polymer property prediction
problems, there is a common representation transfer module which bridges the
gap between SMILES strings and graphs [6]. Fed with these graphs, GNNs can
learn to approximate the desirable properties of molecules or polymers in
various user-specific scenarios or applications.
As with most machine learning approaches, GNNs also need a set of
hyperparameters to shape their architectures; examples of hyperparameters
include the numbers of convolutional layers, filters (kernels), fully connected
nodes and training epochs. These hyperparameters affect the training and
learning performance: a good configuration of hyperparameters for a GNN
will lead to effective training and accurate predictions, while a poor
configuration will lead to poor results. Therefore, hyperparameter
optimisation (HPO) for GNN architectures is vital. Recently, Nunes et al. [7]
compared reinforcement learning based and evolutionary algorithm based
methods for optimising GNN architectures. Moreover, GraphNAS [8] employs a
recurrent network trained with the policy gradient to explore network
architectures. However, in the context of GNNs, HPO research is still
growing [7].
On the other hand, compared with traditional machine learning methods, most
deep learning models, including GNNs, have more sophisticated architectures and
are more time-consuming to train. This means HPO for GNNs is indeed a very
expensive task: each trial of a configuration of hyperparameters
has to complete the full training process to evaluate the quality of that
configuration. Existing HPO methods include grid search [9], random search
[10], Gaussian and Bayesian methods [11], as well as evolutionary approaches
[12] [13] [14]; however, most of these suffer from expensive computational
cost.
To address the expensive HPO problem, in this research we develop a
novel genetic algorithm (GA) with two evaluation methods: full and fast
evaluation. In the full evaluation, a GNN is trained on a specified
dataset given a set of hyperparameter values, and the root mean square error
(RMSE) on the validation set is taken as the full score of this
solution. The proposed fast evaluation approach employs the difference of
RMSEs between the early stage and the beginning of training as the fitness
score, to approximate the performance of the GNN when fully trained. A
hierarchical evaluation strategy named HES is also proposed for coordinating
these two evaluation methods, in which the fast evaluation operates at a lower
level to recommend candidates, and the full evaluation then acts as
a final assessor to maintain a group of elite individuals. Together, the above
procedures and operations constitute the proposed algorithm, termed HESGA.
To assess the effectiveness of the proposed HESGA, we carried out experiments
on three public molecule datasets: ESOL [15], FreeSolv [16], and Lipophilicity
[17], which involve the prediction of three properties: molecular solubility,
hydration free energy, and lipophilicity, respectively. Each dataset has all
its molecules represented by SMILES strings, which can be used to construct
molecular graphs. These constructed graphs were then used as input to GNNs for
the predictive tasks. In this research, we apply HESGA to optimise the
hyperparameters of two types of Graph Neural Networks, Graph Convolution (GC)
[6] and Message Passing Neural Network (MPNN) [18], to improve their learning
performance in terms of RMSE. The promising results compared with benchmarks
[1] show that HESGA has advantages in achieving solutions as good as or better
than those of Bayesian based HPO while requiring less computational cost than
the original genetic algorithm.
The main contributions of this research are as follows:
1. We proposed a fast approach for evaluating GNNs by using the difference of
RMSEs between the early training stage and the very beginning of the training.
2. We proposed a novel hierarchical evaluation strategy used together with GA
(HESGA) for hyperparameter optimisation.
3. We conducted systematic experiments on three benchmark datasets (ESOL,
FreeSolv, and Lipophilicity) to assess the performance of HESGA on GC and MPNN
models as opposed to the Bayesian method.
The rest of this paper is organised as follows. Section II introduces relevant
work and methods for HPO. In Section III, the details of HESGA are presented.
The experiments are reported and the results are analysed in Section IV.
Further discussions are provided in Section V.
Finally, Section VI concludes the paper and explores some directions for future
work.
## II Background and Relevant Methods
### II-A Grid Search and Random Search
Grid search and random search are the two most commonly used approaches for HPO.
The grid-based method discretises the hyperparameter space using a grid layout
and tests each point (representing a configuration of hyperparameters) in the
grid. As grid search is performed in an exhaustive manner to evaluate all grid
points, its cost is determined by the resolution of the pre-specified grid
layout. In contrast, random search is driven by a set of pre-defined
probability distributions (e.g. the uniform distribution), from which a
number of points are sampled for trials. It is noted that random search is in
general more practical and efficient than grid search for HPO of neural
networks given the same computational budget [19].
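To make the contrast concrete, the following is a minimal sketch of random search over a hypothetical hyperparameter space; the space, the distributions, and the toy objective are all illustrative stand-ins, not taken from the benchmarks used later in this paper:

```python
import random

# Hypothetical search space for illustration; each hyperparameter is drawn
# from a simple pre-defined distribution, as in random search.
SPACE = {
    "batch_size":    lambda rng: rng.choice([32, 64, 128, 256]),
    "n_filters":     lambda rng: rng.choice([32, 64, 128, 256]),
    "learning_rate": lambda rng: rng.uniform(1e-4, 1.6e-3),
}

def random_search(objective, n_trials, seed=0):
    """Sample n_trials configurations and keep the one with the lowest loss."""
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: draw(rng) for name, draw in SPACE.items()}
        loss = objective(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

# Toy objective standing in for a full GNN training-plus-validation run.
toy = lambda cfg: abs(cfg["learning_rate"] - 9e-4) + abs(cfg["batch_size"] - 32) / 256
best, loss = random_search(toy, n_trials=50)
```

Grid search would instead enumerate every point of a fixed lattice over `SPACE`, so its cost grows with the grid resolution rather than with a chosen trial budget.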
### II-B Bayesian and Gaussian Approaches
Bayesian optimisation can be used to suggest the probability distributions
mentioned in random search. It is assumed that the performance of the learning
model is correlated with its hyperparameters. Thus, a higher probability is
given to sets of hyperparameter values with better performance [11], which
means that they will be allocated more chances to be sampled further. After
sufficient iterations, a probability distribution function
similar to the maximum likelihood function can be learned by Bayesian
approaches [20], random forests [21] and other surrogate models. As a result,
the computational cost of model validation is saved in this way.
Gaussian processes are suitable for approximating the distribution of
evaluation results because of their flexibility and tractability [22]. The
combination of Bayesian optimisation with Gaussian processes outperforms human
expert-level optimisation in many problems [11]. For example, FABOLAS [23] is
proposed to accelerate Bayesian optimisation of hyperparameters on large
datasets, and this method benefits from sub-sampling. Another successful case
is a method which combines Bayesian optimisation and Hyperband, and it
possesses the features of simplicity, efficiency, robustness and flexibility [24].
### II-C Evolutionary Computation
In recent years, evolutionary algorithms (EAs) have demonstrated advantages in
solving large-scale, highly non-linear and expensive optimisation problems
[25] [26]. HPO for GNNs is usually expensive [27], as each feasible
architecture must be evaluated. Thus, using EAs to solve HPO problems has been
explored due to their excellent search ability [28].
When using EAs, the representation of solutions (encoding) is a key issue, for
which directed acyclic graphs [29] and binary representations [30] have
demonstrated their advantages. Given a good representation of the
hyperparameters, an EA will generate a population of individuals as potential
solutions, each of which is evaluated by a fitness function. In most cases the
fitness function is the objective function, which in our context means first
fully training a GNN with the specified hyperparameters and then evaluating its
learning performance (in terms of RMSE) as the fitness value. Thereafter,
individuals are selected as parents by a selection method (e.g. roulette
selection) based on their fitness values. Through evolutionary iterations,
those GNNs with higher fitness values are more likely to be maintained in the
population, and fitter solutions have more chance to produce offspring. In the
end, the best individual is selected as the final GNN model.
There are two main issues in evolutionary computation: convergence of the
algorithm and diversity of the population. To make the evolutionary search
converge faster, researchers have proposed many methods, including the
modification of evolutionary operators [31], the use of an elite archive [32]
or ensembles [33] to increase the chance of selecting better parents, and
niching methods [34] for local exploitation [35]. In terms of population
diversity, some approaches have been designed for escaping local optima and
improving exploration performance [36]. Regarding these two issues, in this
research we propose a novel GA with an elite archive to improve convergence,
and a mating selection strategy which allows one parent to be selected from
the whole population to increase diversity.
## III Genetic Algorithm with Hierarchical Evaluation Strategy
In many real-world applications GNNs suffer from expensive computational costs,
so HPO for GNNs is a challenging task, particularly in cases with a huge
hyperparameter search space. Moreover, a GA maintains a population of
individuals (solutions) during the search, which means that in one generation
the computational cost may involve evaluating all GNN models in the
population. To address this issue, a surrogate model with a lower evaluation
cost [37] or a faster evaluation method [38] can be considered. However, there
is no guarantee that the fitness values generated by such methods reliably
approximate those obtained from the original evaluation function, and
therefore the HPO results based on such methods may be poor. A good compromise
is to combine the original and fast evaluation strategies in a GA, in which
case we can achieve a trade-off between performance and computational cost.
In the rest of this section we first introduce the solution encoding,
and then present the following two detailed processes: (1) fast
evaluation using the difference of RMSEs and (2) the hierarchical evaluation
strategy. Next, the full HESGA is presented with a scalable module for fast
evaluation. Finally, the settings for HESGA are presented.
### III-A Solution Encoding
Take as an example four of the hyperparameters mentioned in the benchmark
problems: batch size ($s_{b}$), the number of filters in the convolution layer
($n_{f}$), learning rate ($r_{l}$), and the number of fully connected nodes
($n_{n}$). A binary encoding for these four hyperparameters is shown in Table
I.
TABLE I: Encoding for Hyperparameters and Solutions

| | $s_{b}$ | $n_{f}$ | $r_{l}$ | $n_{n}$ |
---|---|---|---|---
| Binary encoding | [0 0 0]$\sim$[1 1 1] | [0 0 0]$\sim$[1 1 1] | [0 0 0 0]$\sim$[1 1 1 1] | [0 0 0]$\sim$[1 1 1] |
| Decoded integer range | 1$\sim$8 | 1$\sim$8 | 1$\sim$16 | 1$\sim$8 |
| Resolution (step increment) | 32 | 32 | 0.0001 | 64 |
| Full hyperparameter range | 32$\sim$256 | 32$\sim$256 | 0.0001$\sim$0.0016 | 64$\sim$512 |
In Table I, three 3-bit binary strings are used to represent the parameters
$s_{b}$, $n_{f}$, and $n_{n}$, with resolutions of 32, 32, and 64,
respectively, according to the benchmark problems. A 4-bit binary string is
used to represent the learning rate ($r_{l}$) with a resolution (step
increment) of 0.0001. Thus, the feasible ranges are $[32\sim 256]$ for batch
size, $[32\sim 256]$ for the number of filters, $[0.0001\sim 0.0016]$ for
learning rate, and $[64\sim 512]$ for the number of fully connected nodes.
Note that the binary string “$000$” corresponds to the decimal integer $0$,
but $0$ is not a valid value for any of the hyperparameters; we therefore
shift the mapping from binary to decimal by adding 1, so that $000$ is mapped
to the decimal integer $1$, $001$ to $2$, and $111$ to $8$. According to
Table I, an example of encoding a solution is shown in Fig. 1.
Figure 1: An Example of Solution Encoding
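The decoding just described (read each binary field as a decimal integer, shift by +1, and scale by its resolution from Table I) can be sketched as follows; the field names are illustrative:

```python
# Decoding sketch for the scheme above: each binary field is read as a
# decimal integer, shifted by +1 (so "000" maps to 1), and scaled by its
# resolution from Table I.
FIELDS = [                    # (name, number of bits, resolution)
    ("batch_size",    3, 32),
    ("n_filters",     3, 32),
    ("learning_rate", 4, 0.0001),
    ("n_fc_nodes",    3, 64),
]

def decode(bits):
    """Decode a 13-bit chromosome into hyperparameter values."""
    assert len(bits) == sum(n for _, n, _ in FIELDS)
    values, pos = {}, 0
    for name, n, res in FIELDS:
        raw = int("".join(map(str, bits[pos:pos + n])), 2)  # 0 .. 2^n - 1
        values[name] = (raw + 1) * res
        pos += n
    return values

# [0,0,0] decodes to 1 * 32 = 32; the trailing [1,1,1] to 8 * 64 = 512.
hp = decode([0, 0, 0,  1, 1, 1,  0, 0, 0, 0,  1, 1, 1])
```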
With the encoding strategy specified, an EA is able to perform an effective
search for the optimal individual in the hyperparameter space. Indeed, the
performance of an EA is affected by many factors, such as population size,
maximum number of generations, operators for producing offspring, and the
population maintenance strategy. However, we believe that with common
parameter settings, an EA will reach the optimal solution with fewer
evaluations than grid search and random search [39].
### III-B Full Evaluation and Fast Evaluation
Regarding full evaluation, a GNN is first represented by a set of
hyperparameter values and then trained on a specified dataset. At the end of
training, the trained GNN will be validated on another specified dataset, and
the RMSE of the validation will be used to measure the quality of the set of
hyperparameter values as full evaluation.
There are already several approaches to developing fast evaluations, such as
partial training [38] [40] and incomplete training [41] [42]. Partial
training with a sub-dataset is good at tackling big datasets and complicated
models. However, when the dataset is not very big, e.g. the FreeSolv dataset
[16] with only $642$ data points, partial training seems inappropriate due
to the lack of data points for training, whereas incomplete training with an
early-stop policy may be helpful for processing such datasets.
Based on these ideas, a fast evaluation method using the difference of
validation RMSEs between the early stage and the very beginning of training
is introduced. In Equation (1) below, $F(t)$ stands for the fitness value at
epoch $t$ during GNN training.
$\Delta F(1,t)=F(1)-F(t).$ (1)
In the above, $\Delta F(1,t)$ is defined as the difference of fitness between
the $1^{st}$ and the $t^{th}$ epoch. In our experiments, RMSE was used as the
fitness evaluation metric, so $\Delta F(1,t)$ approximates the rate of
decrease in RMSE. As a heuristic, individuals in the population with larger
$\Delta F(1,t)$ values are more likely to achieve a smaller RMSE at the end of
their training. We note that this may not always be the case, but we use it as
an approximation of the final fitness value in order to reduce the cost of
evaluating GNNs. We also note that $t$ will be far smaller than the number of
epochs needed for full training, so $\Delta F(1,t)$ can also be called the
fitness difference in the early training stage.
By using this fitness difference, we can offer a fast evaluation of all
individuals in the population according to their performance in the early
training stage. However, there is a key issue that needs to be addressed: how
to choose the argument $t$. Since some training algorithms terminate training
at a fixed maximum number of epochs, while others use a more adaptive
termination criterion, we cannot set a fixed argument $t$ for the fast
evaluation. Instead, $10\%\sim 20\%$ of the maximum number of epochs is
proposed, which means the fast evaluation only consumes approximately
$10\%\sim 20\%$ of the computational cost of the full evaluation.
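A minimal sketch of this fast score, assuming the validation RMSE is recorded once per epoch; the default fraction of 15% is an assumed value inside the 10%–20% band suggested above, and the two curves are invented for illustration:

```python
def fast_score(rmse_curve, frac=0.15):
    """Difference-of-RMSE fast fitness, Delta F(1, t) = F(1) - F(t), with t
    set to a fraction of the full training budget (frac is an assumed
    default inside the 10%-20% band)."""
    t = max(1, int(len(rmse_curve) * frac))
    return rmse_curve[0] - rmse_curve[t]

# Two invented validation-RMSE curves: a fast learner and a slow one.
fast_learner = [2.0, 1.2, 0.9, 0.8, 0.75, 0.72, 0.70, 0.69, 0.68, 0.68]
slow_learner = [2.0, 1.9, 1.8, 1.7, 1.60, 1.50, 1.40, 1.30, 1.20, 1.10]
assert fast_score(fast_learner) > fast_score(slow_learner)
```

The larger score of the first curve reflects the heuristic above: the individual whose RMSE drops faster in the early epochs is treated as the more promising one.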
### III-C Hierarchical Evaluation Strategy
The fast evaluation only suggests individuals which have a high probability of
achieving better results after full training; it cannot guarantee that this is
always the case. Thus, a hierarchical structure including both fast and full
evaluations is designed, as shown in Fig. 2.
Figure 2: Hierarchical Evaluation Strategy
In Fig. 2, after population initialisation, all individuals are assessed
by the full evaluation method in Step (1), and in Step (2), those with higher
fitness values are selected and sent to the elite archive. In
Steps (3) and (4), parents A and B are selected by the roulette
method from the elite archive and the whole population, respectively. A new
population is generated in Step (5) to replace the old one. In Step (6), the
individuals in the new population do not undergo full evaluation; instead,
they are assessed by the fast evaluation method, and a small number of
candidates with better fitness values are selected in Step (7). These
candidates are then assessed by the full evaluation method in Step (8), and in
Step (9) they update the elite archive, depending on whether they are better
than some of the individuals already in it. Next, Steps (3) and (4)
are repeated to generate new offspring, and the whole process runs
iteratively until the termination criteria are met.
### III-D Full HESGA and Parameter Settings
The pseudo code of HESGA and the parameter settings are shown in Algorithm 1.
Algorithm 1 HESGA
1: Initialise a population of $n_{pop}$ individuals of dimension $d_{indi}$
2: $gen=0$; set $maxgen$, $r_{e}=0.1$, $r_{c}=0.1$, $p_{c}=0.8$, $p_{m}=0.2$,
$ev_{fast}=0$, $ev_{full}=0$
3: Evaluate the population by full evaluation and update the elite archive;
$ev_{full}+=n_{pop}$
4: while $gen<maxgen$ do
5: select Parents A and B from the elite archive and the whole population,
respectively, to generate $n_{pop}$ new offspring
6: apply fast evaluation to the new population, then select the $n_{pop}\times r_{c}$
better individuals to enter the candidate group; $ev_{fast}+=n_{pop}$
7: apply full evaluation to the candidate group and update the elite archive;
$ev_{full}+=n_{pop}\times r_{c}$
8: save the best individual of the elite archive; $gen++$
9: end while
10: Output the final GNN model decoded from the best individual in the elite
archive
In Algorithm 1, $n_{pop}$ is the size of the population; $d_{indi}$ is the
dimension of a solution, which depends on the resolution as mentioned in
Section III-A; ${gen}$ is the counter for generations; ${maxgen}$ is the
maximum number of generations allowed in one execution; $r_{e}$ and $r_{c}$
are the proportions for the elite archive and the candidate group; $p_{c}$ and
$p_{m}$ are the probabilities of crossover and mutation; and $ev_{fast}$ and
$ev_{full}$ are the counters for the number of fast and full evaluations. In
Line 3, the initial population is evaluated by the full evaluation method to
select elites, which are sent to the elite archive. From Line 4 to Line 9, the
loop is executed until the termination conditions are met. In Line 10, the
final GNN model decoded from the best individual in the elite archive is the
output.
In each loop (Lines $4\sim 9$), HESGA first assesses the new offspring by fast
evaluation; the better candidates selected via fast evaluation then undergo
the full evaluation process as in Fig. 2, and the elite archive is updated by
the better candidates. This hierarchical evaluation strategy offers a
pre-selection mechanism based on the proposed fast evaluation method and could
save around $80\%\sim 90\%$ of the computational cost. On the other hand, the
full evaluation approach acts as a final assessor, which ensures that the
population always moves in the right direction towards the objective.
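The loop of Algorithm 1 can be sketched in runnable form as follows. This is an illustrative skeleton only, not the authors' implementation: toy evaluator functions stand in for the expensive GNN training runs, and the two evaluators are assumed to return fitness values to maximise (e.g. a negated final RMSE and the $\Delta F$ score):

```python
import random

def hesga(full_eval, fast_eval, n_bits=13, n_pop=20, maxgen=10,
          r_e=0.1, r_c=0.1, p_c=0.8, p_m=0.2, seed=1):
    """Minimal sketch of Algorithm 1; full_eval / fast_eval return fitness
    values to maximise, standing in for expensive GNN training runs."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_pop)]
    n_elite = max(1, int(n_pop * r_e))
    # Line 3: full evaluation of the initial population fills the elite archive.
    archive = sorted(pop, key=full_eval, reverse=True)[:n_elite]
    for _ in range(maxgen):
        # Line 5: parent A from the archive, parent B from the whole population.
        children = []
        while len(children) < n_pop:
            a, b = rng.choice(archive)[:], rng.choice(pop)[:]
            if rng.random() < p_c:                    # one-point crossover
                p = rng.randrange(1, n_bits)
                a, b = a[:p] + b[p:], b[:p] + a[p:]
            for child in (a, b):
                if rng.random() < p_m:                # bit-flip mutation
                    child[rng.randrange(n_bits)] ^= 1
            children += [a, b]
        pop = children[:n_pop]
        # Lines 6-7: fast evaluation pre-selects candidates; only those
        # candidates pay the cost of a full evaluation.
        cands = sorted(pop, key=fast_eval, reverse=True)[:max(1, int(n_pop * r_c))]
        archive = sorted(archive + cands, key=full_eval, reverse=True)[:n_elite]
    return archive[0]

# Toy check: maximise the number of 1-bits, with sum() acting as both the
# full and the fast evaluator.
best = hesga(full_eval=sum, fast_eval=sum)
```

Note how the per-generation cost of `full_eval` is only $n_{pop}\times r_{c}$ calls, while `fast_eval` is applied to the whole population, mirroring the cost accounting of Lines 6 and 7.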
### III-E Evolutionary Operators and Other Settings
We use the classical binary crossover and mutation operators as in [43] [44]
[45] [46], and their mechanisms are demonstrated by the example shown in Fig.
3. In Fig. 3, the position parameter $p$ in both crossover and mutation is a
randomly generated integer in the range of $(1,\textit{len})$, where $len$ is
the solution length (i.e. the number of bits in the binary string).
Figure 3: Binary Crossover and Mutation
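These two operators can be sketched as follows; this is a minimal illustration of the mechanism in Fig. 3, not the authors' code:

```python
import random

# One-point binary crossover and bit-flip mutation, as in Fig. 3; the
# position p is drawn uniformly from 1 .. len-1.
def crossover(parent_a, parent_b, rng=random):
    p = rng.randrange(1, len(parent_a))
    return parent_a[:p] + parent_b[p:], parent_b[:p] + parent_a[p:]

def mutate(individual, rng=random):
    child = individual[:]
    child[rng.randrange(len(child))] ^= 1  # flip a single bit
    return child

c1, c2 = crossover([0] * 13, [1] * 13)
m = mutate([0] * 13)
```

Whatever cut point is drawn, the two children jointly carry exactly the genes of the two parents; mutation changes exactly one bit.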
The maximum number of generations and the population size are set according to
the specific problems that HESGA aims to solve. As for population maintenance,
the elite archive is maintained by fitness sorting, while the population
itself does not need a maintenance policy. When a better candidate
successfully enters the elite archive, the worst individual in the archive is
discarded.
## IV Experiments
In this section, the performance of HESGA will be experimentally investigated
on several datasets mentioned in Section I, and we use two types of deep graph
neural architectures, Graph Convolution (GC) [6] and Message Passing Neural
Network (MPNN) [18] to assess the performance of HESGA. Section IV-A shows the
advantage and disadvantage of the traditional GA for HPO compared with the
default parameter settings. Section IV-B presents the results obtained from
optimising the GC model with the proposed HESGA compared to the Gaussian HPO
method on three datasets, i.e. ESOL [15], FreeSolv [16] and Lipophilicity
[17]. Section IV-C reports the performance of HESGA on MPNN model. All
experiments are performed on a PC with an Intel(R) Core i5-8300 CPU, 8GB memory,
and a GeForce GTX 1050 GPU.
### IV-A Advantage and Disadvantage of the Traditional GA
As a case study of using a GA to optimise hyperparameters, we use a
traditional GA to optimise the GC model and run it on the FreeSolv dataset. In
this experiment, three parameters are optimised by the GA: batch size
($s_{b}$), the number of execution epochs ($n_{e}$), and learning rate
($r_{l}$). The hyperparameters found by the GA are $s_{b}=32$, $n_{e}=240$ and
$r_{l}=0.0015$; the default hyperparameters pre-set in GC are $s_{b}=128$,
$n_{e}=100$ and $r_{l}=0.0005$. These two configurations were each used to run
GC 30 times independently. The average RMSEs of training, validation and test,
as well as their standard deviations, are plotted in Fig. 4. More details
about the distribution of validation RMSEs are presented in Section V-A.
Figure 4: A comparison between GC with optimised hyperparameters and GC with
default hyperparameters
We carried out a $t$-test on the RMSE results obtained from GC with optimised
hyperparameters and GC with default hyperparameters, and it shows that the two
groups of RMSEs do not have the same mean value at a significance level of 5%,
for training, validation and test, respectively. Thus, the GA-based
hyperparameter optimisation approach significantly improves the learning
performance of GC with respect to RMSE.
The disadvantage of the traditional GA for HPO is its intolerable
computational cost, especially for highly expensive problems. So, as mentioned
in Section III, we propose HESGA, which contains a fast evaluation strategy
for candidate selection.
### IV-B Experimental Results of HESGA Added on GC Model
To further investigate the performance of the proposed HESGA, in this section
we apply HESGA to optimise the GC model, and this combination is tested on the
ESOL, FreeSolv and Lipophilicity datasets. We record RMSE values for
comparison with GC under Bayesian hyperparameter optimisation (BHO). Each
experiment was executed for 30 independent trials to obtain statistical
results. In Tables II$\sim$IV, $n_{f},n_{n},s_{b},n_{e},l_{r}$ stand for the
number of filters, the number of fully connected nodes, batch size, the
maximum number of epochs, and learning rate, respectively. The symbols M_ and
Std_ denote the mean and standard deviation of the corresponding RMSE,
respectively. $h$ is the indicator of the $t$-test: $h=1$ indicates that the
hypothesis that the two groups have equal means is rejected at the default
significance level of $5\%$, i.e. the two groups of samples have significantly
different means. In detail, the $t$ statistic is computed from the M_RMSE of
GC+HESGA minus that of GC+BHO, so a negative $t$ value indicates the former is
better, while a positive $t$ value indicates the former is worse.
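The $t$ statistic used here can be reproduced with the standard pooled two-sample formula. The sample values below are invented for illustration, since the paper does not list the raw per-trial RMSEs:

```python
from math import sqrt

def two_sample_t(xs, ys):
    """Pooled (equal-variance) two-sample t statistic; a negative value
    means mean(xs) < mean(ys)."""
    n, m = len(xs), len(ys)
    mx, my = sum(xs) / n, sum(ys) / m
    vx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    vy = sum((y - my) ** 2 for y in ys) / (m - 1)
    sp2 = ((n - 1) * vx + (m - 1) * vy) / (n + m - 2)  # pooled variance
    return (mx - my) / sqrt(sp2 * (1 / n + 1 / m))

# Invented RMSE samples (the paper's raw per-trial RMSEs are not listed).
hesga_rmse = [0.86, 0.90, 0.88, 0.92, 0.89]
bho_rmse   = [1.02, 1.08, 1.05, 1.01, 1.09]
t_stat = two_sample_t(hesga_rmse, bho_rmse)
```

The resulting $t$ is compared against the critical value for $n+m-2$ degrees of freedom at the chosen significance level to decide $h$.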
TABLE II: The Results on the ESOL Dataset

| ESOL | Hyperparameters | Training Results | Validation Results | Test Results |
---|---|---|---|---
| GC + BHO | $n_{f}$ = 128, $n_{n}$ = 256, $s_{b}$ = 128, $r_{l}$ = 0.0005 | M_RMSE 0.43, Std_RMSE 0.20 | M_RMSE 1.05, Std_RMSE 0.15 | M_RMSE 0.97, Std_RMSE 0.01 |
| GC + HESGA | $n_{f}$ = 192, $n_{n}$ = 448, $s_{b}$ = 32, $r_{l}$ = 0.0009 | M_RMSE 0.34, Std_RMSE 0.07 | M_RMSE 0.89, Std_RMSE 0.04 | M_RMSE 0.89, Std_RMSE 0.04 |
| $t$-test ($\alpha$ = 5%) | | $t$ = -2.436, $h$ = 1 (equal-mean hypothesis rejected) | $t$ = -5.624, $h$ = 1 (rejected) | $t$ = -9.708, $h$ = 1 (rejected) |
Table II shows the very good performance of HESGA on the ESOL dataset compared
to the BHO approach: our average RMSE (M_RMSE) values are all significantly
less than those of BHO. Moreover, the hyperparameters obtained by HESGA
produced more stable RMSE values over the 30 independent trials (i.e. smaller
standard deviations) on both the training and validation datasets.
TABLE III: The Results on the FreeSolv Dataset (1)

| FreeSolv | Hyperparameters | Training Results | Validation Results | Test Results |
---|---|---|---|---
| GC + BHO | $n_{f}$ = 128, $n_{n}$ = 256, $s_{b}$ = 128, $r_{l}$ = 0.0005 | M_RMSE 0.31, Std_RMSE 0.09 | M_RMSE 1.35, Std_RMSE 0.15 | M_RMSE 1.40, Std_RMSE 0.16 |
| GC + HESGA | $n_{f}$ = 192, $n_{n}$ = 512, $s_{b}$ = 32, $r_{l}$ = 0.0012 | M_RMSE 0.63, Std_RMSE 0.12 | M_RMSE 1.29, Std_RMSE 0.13 | M_RMSE 1.21, Std_RMSE 0.12 |
| $t$-test ($\alpha$ = 5%) | | $t$ = 12.031, $h$ = 1 (equal-mean hypothesis rejected) | $t$ = -2.117, $h$ = 1 (rejected) | $t$ = -5.184, $h$ = 1 (rejected) |
Regarding the FreeSolv dataset, as shown in Table III, on the training dataset
our M_RMSE is worse than that obtained from GC+BHO; however, on the validation
and test datasets, our method is slightly better than GC+BHO. It should be
noted that, in terms of validation, these two results are very similar: the
critical $t$-values for d.f. (degrees of freedom) of 30 and 60 are 2.042 and
2.000, while the $t$ value we obtained is -2.117, which is nearly on the
boundary of acceptance. It is also noted that, as we do not know the exact
size of the RMSE sample group in the reference paper [1], we assume it was in
the range of 1$\sim$30; thus d.f. of 30 and 60 are both considered in this
work.
TABLE IV: The Results on the Lipophilicity Dataset

| Lipophilicity | Hyperparameters | Training Results | Validation Results | Test Results |
---|---|---|---|---
| GC + BHO | $n_{f}$ = 128, $n_{n}$ = 256, $s_{b}$ = 128, $r_{l}$ = 0.0005 | M_RMSE 0.471, Std_RMSE 0.001 | M_RMSE 0.678, Std_RMSE 0.04 | M_RMSE 0.655, Std_RMSE 0.036 |
| GC + HESGA | $n_{f}$ = 160, $n_{n}$ = 192, $s_{b}$ = 64, $r_{l}$ = 0.0013 | M_RMSE 0.24, Std_RMSE 0.02 | M_RMSE 0.68, Std_RMSE 0.02 | M_RMSE 0.67, Std_RMSE 0.02 |
| $t$-test ($\alpha$ = 5%) | | $t$ = -59.840, $h$ = 1 (equal-mean hypothesis rejected) | $t$ = 0.745, $h$ = 0 (accepted) | $t$ = 1.816, $h$ = 0 (accepted) |
In tackling the Lipophilicity dataset, the results in Table IV show that the
proposed approach is far better on the training dataset, and no worse than
GC+BHO on the validation and test datasets. As the M_RMSE on the training set
is much less than those on the validation and test datasets, the proposed
HESGA might have an over-fitting issue, which reduces its performance on the
validation and test datasets. Moreover, the Lipophilicity dataset is the
biggest of the three (more than 4,000 SMILES entries), so it involves more
computational operations than the other datasets, which makes the execution
very time-consuming.
### IV-C Experimental Results of HESGA to optimise MPNN Models
As MPNN models are more time-consuming to train than GC, we only carried out
experiments on the FreeSolv dataset; the detailed results are shown in Table V.
TABLE V: The Results on the FreeSolv Dataset (2)

| FreeSolv | Hyperparameters | Training Results | Validation Results | Test Results |
---|---|---|---|---
| MPNN + BHO | T = 2, M = 5, $s_{b}$ = 16, $r_{l}$ = 0.001 | M_RMSE 0.31, Std_RMSE 0.05 | M_RMSE 1.20, Std_RMSE 0.02 | M_RMSE 1.15, Std_RMSE 0.12 |
| MPNN + HESGA | T = 1, M = 10, $s_{b}$ = 8, $r_{l}$ = 0.0012 | M_RMSE 0.70, Std_RMSE 0.13 | M_RMSE 1.15, Std_RMSE 0.15 | M_RMSE 1.09, Std_RMSE 0.14 |
| $t$-test ($\alpha$ = 10%) | | $t$ = 14.693, $h$ = 1 (equal-mean hypothesis rejected) | $t$ = -1.835, $h$ = 1 (rejected) | $t$ = -1.842, $h$ = 1 (rejected) |
As shown in Table V, we carried out experiments applying BHO and HESGA to
optimise MPNN models. In terms of validation and test results, there is no
significant difference between the two sample groups at the 5% significance
level; however, at the two-tailed significance level of 10%, the equal mean
hypothesis was rejected, which indicates that our algorithm is slightly
better. Moreover, on the training dataset the compared algorithm (MPNN + BHO)
is far better than ours, which suggests a potential overfitting issue in that
approach.
Overall, there appear to be some cases of overfitting in the experiments
(Tables II$\sim$V). The experimental results show that all RMSEs on the
training datasets are lower than those on validation and test. In particular,
the RMSE on the validation and test datasets is around two to four times that
on the training set for GC + BHO and MPNN + BHO on the FreeSolv dataset. As a
result, overfitting might lead to poorer model performance on the
validation/test datasets. For example, in Table V, the training loss of
MPNN + BHO is just 50% of that of MPNN + HESGA, but the losses of MPNN + BHO on
the validation and test datasets are worse than those of MPNN + HESGA.
## V Further Discussions
### V-A The Distributions of RMSEs
Given the same set of hyperparameter values for a GNN model, the training
results may still differ from run to run, even for the same split of the
datasets, mainly because in each training process the weight vectors of the
neural network are randomly initialised. As a result, the GNN may produce
variable RMSEs when used as the full evaluation function for the GA, which
increases the uncertainty in evaluating the individuals in the GA population.
Fig. 5 shows the distribution of RMSE results on the validation set under two
given hyperparameter settings (one with the default parameters and the other
with the hyperparameters optimised by HESGA).
(a) with the hyperparameter solution optimized
(b) with the default parameters pre-set in GC
Figure 5: The Distribution of RMSE Results on the Validation Set by the
Hyperparameters Optimized and the Default Hyperparameters Pre-set in GC
As shown in Fig. 5, the RMSE values vary considerably over 30 independent
trials. One method to alleviate this negative effect is to use the average
RMSE of several runs of the GNN (e.g. 3 runs per trial); however, this makes
the computational cost 3 times more expensive than before. This is another
reason why we need to develop a fast evaluation strategy for the GA.
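The averaging remedy described above can be sketched as follows; `train_and_score` is a hypothetical stand-in for one full GNN training run that returns a validation RMSE, and is not part of the paper's code:

```python
import statistics

def averaged_rmse(train_and_score, hyperparams, n_repeats=3):
    """Evaluate one hyperparameter setting several times and average the RMSE.

    `train_and_score` is a hypothetical callable that trains a GNN with the
    given hyperparameters and returns the validation RMSE; repeated runs
    differ only in the random weight initialisation. The averaging reduces
    the evaluation noise at `n_repeats` times the computational cost.
    """
    scores = [train_and_score(hyperparams) for _ in range(n_repeats)]
    return statistics.mean(scores)
```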
### V-B Solution Resolution and Feasible Searching Space
As presented in Section III-A, with a higher resolution of hyperparameters,
we have to deal with more feasible solution points. On the one hand, a lower
resolution alleviates the computational cost by reducing the number of
feasible solutions, but it is more likely to miss high-quality solutions. On
the other hand, a higher resolution of the hyperparameter space incurs heavier
computational overheads, but it is more likely to identify a better set of
hyperparameters. For comparison, we set up a series of experiments on the
FreeSolv dataset with varying resolutions of 8, 16, 32, and 64 for encoding
the batch size and the number of filters. In Table VI, we list the number of
feasible solutions for each resolution.
Figure 6: The RMSE Results of HESGA + GC on FreeSolv Dataset with Varying
Resolutions of Hyperparameters
As shown in Fig. 6, HESGA with a lower resolution (bigger step increment)
cannot find the optimal solutions found with a higher resolution (smaller step
increment). This is mainly because the grid generated at the lower resolution
is too coarse for this problem. On the other hand, a resolution of 16 is
acceptable for this problem, as an even higher resolution such as 8 does not
gain much improvement in the RMSE values. However, choosing an appropriate
resolution is a problem-specific issue; given abundant computational
resources, we recommend using as high a resolution as possible to achieve
better performance for the GNN.
TABLE VI: Solution Number under Different Resolutions
Resolution (step increment) | 8 | 16 | 32 | 64
---|---|---|---|---
Size of binary solution | 19 | 17 | 15 | 13
Solution number | 524288 | 131072 | 32768 | 8192
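The counts in Table VI follow a simple pattern: halving the step increment adds one encoding bit to each of the two encoded hyperparameters (batch size and number of filters). A small sketch, where the base of 13 bits at step 64 is read off the table rather than derived:

```python
def solution_space(step, base_step=64, base_bits=13, n_encoded=2):
    """Bits of the binary solution and number of feasible solutions for a
    given step increment.

    Assumes (consistent with Table VI) that each halving of the step
    doubles the grid density of each of the `n_encoded` hyperparameters,
    adding one bit per variable per halving.
    """
    halvings = 0
    s = base_step
    while s > step:
        s //= 2
        halvings += 1
    bits = base_bits + n_encoded * halvings
    return bits, 2 ** bits
```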
### V-C Computational Cost
Three processes affect the computational cost of the algorithms used in our
experiments: 1) the full evaluation of GC, 2) the fast evaluation of GC, and
3) HESGA itself.
#### V-C1 Full Evaluation of GC
Here we take the GC model as an example. As the GC model first transforms a
SMILES representation into a molecular fingerprint, suppose the fingerprints
have depth $R$ and length $L$, $N$ atoms are used in a molecular
convolutional net [6], and $F$ features (filters) are used. In this case, in
each layer the computational cost of the feedforward and backpropagation
processes can be estimated as $O(\textit{RNFL}+\textit{RNF}^{2})$ [6]. For
simplicity, $O_{GC}$ stands for $O(\textit{RNFL}+\textit{RNF}^{2})$, which
denotes the cost of a GC model with one layer and one epoch of training.
#### V-C2 Fast Evaluation of GC
As mentioned above, approximately 10%$\sim$20% of the maximum number of
epochs is used to obtain the fast evaluation score, so its cost is
10%$\sim$20% of that of the full evaluation. For $n_{e}$ epochs, a full
evaluation costs approximately $n_{e}\times O_{GC}$; if the fast evaluation
uses a proportion $p_{f}$ of the total epochs, its cost is $p_{f}\times
n_{e}\times O_{GC}$.
#### V-C3 Total Cost of HESGA
Suppose we have a GC with one convolution layer, a population of solutions of
size $n_{pop}$, each individual trained for at most $n_{e}$ epochs, a
proportion $r_{c}$ of candidates promoted to full evaluation, and a maximum
generation number $maxgen$. The detailed cost of HESGA, based on Algorithm 1,
is as follows:
Lines 1$\sim$3: the full evaluation of the whole population costs
approximately $n_{pop}\times n_{e}\times O_{GC}$.
Lines 4$\sim$9: in one generation, the fast evaluation of the whole new
offspring costs $n_{pop}\times p_{f}\times n_{e}\times O_{GC}$, and the full
evaluation of the candidates costs $n_{pop}\times r_{c}\times n_{e}\times
O_{GC}$, so the total cost per generation is $(p_{f}+r_{c})\times
n_{pop}\times n_{e}\times O_{GC}$. Therefore, for $maxgen$ generations, the
cost is $(p_{f}+r_{c})\times n_{pop}\times n_{e}\times O_{GC}\times maxgen$.
Note that the cost of the sorting and counting operations can be ignored
compared with $O_{GC}$.
As a result, HESGA costs approximately $[(p_{f}+r_{c})\times
maxgen+1]\times n_{pop}\times n_{e}\times O_{GC}$. As an example, suppose
$n_{pop}=10$, $maxgen=10$, $p_{f}=0.1$, $r_{c}=0.1$, and $n_{e}=100$. In this
case, running HESGA once is equivalent to running 3,000 single-layer,
single-epoch GC trainings ($O_{GC}$) in terms of computational cost.
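The closed-form estimate above can be checked with a few lines; the function name is ours, not part of the paper:

```python
def hesga_cost_units(n_pop, maxgen, p_f, r_c, n_e):
    """Approximate HESGA cost in units of O_GC (one GC layer, one epoch),
    following the estimate [(p_f + r_c) * maxgen + 1] * n_pop * n_e:
    one initial full evaluation of the population, plus per-generation
    fast evaluations (p_f) and candidate full evaluations (r_c)."""
    return ((p_f + r_c) * maxgen + 1) * n_pop * n_e
```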
### V-D The Scalability of HESGA
We argue that HESGA possesses scalability across different problems and
datasets. For example, for a large dataset, the fast evaluation approach can
be replaced by any reasonable alternative, such as partial training on data
points randomly sampled from the whole dataset, as in [42]. When historical
datasets are available, a fast surrogate model could be built and trained on
them to approximate the results of complete training. No matter what type of
fast evaluation approach is used, HESGA remains a good mechanism for
combining fast and full evaluation to achieve a trade-off between solution
quality and computational cost.
## VI Conclusion and Future Work
In this research, we proposed HESGA, a novel GA equipped with a hierarchical
evaluation strategy combining full and fast evaluation methods, to address the
expensive HPO problem for GNNs. Experiments were carried out on three
representative datasets for material property prediction (ESOL, FreeSolv, and
Lipophilicity) by applying HESGA to optimise the hyperparameters of GC and
MPNN models, two types of graph deep neural networks commonly used in material
design and discovery. The results show that HESGA can outperform BHO when
optimising GC models, while achieving comparable performance to Bayesian
approaches when optimising MPNN models. In Section V, we also analysed the
uncertainty and distributions of the RMSE results, the learning performance
with respect to the resolution of the hyperparameter search space, the
computational cost, and the scalability of HESGA.
In the future, we would like to investigate the following two aspects:
##### Dealing with the over-fitting issue in the experiments
This issue was observed in both the Bayesian approaches and our HESGA. In our
experiments, the number of epochs ($n_{e}$) is not included as a
hyperparameter, which might be one reason for overfitting. For example,
overtraining might bias HPO towards a model perfectly fitted to the training
dataset that performs poorly on the validation and test datasets. Therefore,
we would like to investigate how to incorporate more hyperparameters into the
search space, or to monitor overfitting and introduce a penalty term in the
evaluation functions.
##### Bi-objective Optimization
As mentioned in Section V-C, hyperparameters such as the number of epochs
($n_{e}$) and the number of filters ($n_{f}$) are selected to be optimized,
and this affects the computational cost of HESGA. The RMSE might improve
while the cost increases when we increase $n_{e}$ and $n_{f}$; in this case a
balance between performance and cost needs to be considered. In our future
work, we will treat this balance as a bi-objective optimization problem,
where a Pareto-optimal front (PF) [47] is expected to offer more options of
GNN models considering the trade-off between performance and cost.
## VII Acknowledgement
This research is supported by the Engineering and Physical Sciences Research
Council (EPSRC) funded Project on New Industrial Systems: Manufacturing
Immortality (EP/R020957/1). The authors are also grateful to the Manufacturing
Immortality consortium.
## References
* [1] Z. Wu, B. Ramsundar, E. N. Feinberg, J. Gomes, C. Geniesse, A. S. Pappu, K. Leswing, and V. Pande, “Moleculenet: a benchmark for molecular machine learning,” _Chemical science_ , vol. 9, no. 2, pp. 513–530, 2018.
* [2] D. Weininger, “Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules,” _Journal of chemical information and computer sciences_ , vol. 28, no. 1, pp. 31–36, 1988.
* [3] Q. Long, Y. Jin, G. Song, Y. Li, and W. Lin, “Graph structural-topic neural network,” in _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, 2020, pp. 1065–1073.
* [4] J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun, “Graph neural networks: A review of methods and applications,” _arXiv preprint arXiv:1812.08434_ , 2018.
* [5] H. Cai, V. W. Zheng, and K. C.-C. Chang, “A comprehensive survey of graph embedding: Problems, techniques, and applications,” _IEEE Transactions on Knowledge and Data Engineering_ , vol. 30, no. 9, pp. 1616–1637, 2018.
* [6] D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams, “Convolutional networks on graphs for learning molecular fingerprints,” in _Advances in neural information processing systems_ , 2015, pp. 2224–2232.
* [7] M. Nunes and G. L. Pappa, “Neural architecture search in graph neural networks,” in _Brazilian Conference on Intelligent Systems_. Springer, 2020, pp. 302–317.
* [8] Y. Gao, H. Yang, P. Zhang, C. Zhou, and Y. Hu, “Graph neural architecture search,” in _IJCAI_ , vol. 20, 2020, pp. 1403–1409.
* [9] M. Claesen and B. De Moor, “Hyperparameter search in machine learning,” _arXiv preprint arXiv:1502.02127_ , 2015.
* [10] M. Schumer and K. Steiglitz, “Adaptive step size random search,” _IEEE Transactions on Automatic Control_ , vol. 13, no. 3, pp. 270–276, 1968.
* [11] J. Snoek, H. Larochelle, and R. P. Adams, “Practical bayesian optimization of machine learning algorithms,” in _Advances in neural information processing systems_ , 2012, pp. 2951–2959.
* [12] C. Di Francescomarino, M. Dumas, M. Federici, C. Ghidini, F. M. Maggi, W. Rizzi, and L. Simonetto, “Genetic algorithms for hyperparameter optimization in predictive business process monitoring,” _Information Systems_ , vol. 74, pp. 67–83, 2018.
* [13] D. Orive, G. Sorrosal, C. E. Borges, C. Martín, and A. Alonso-Vicario, “Evolutionary algorithms for hyperparameter tuning on neural networks models,” in _Proceedings of the 26th european modeling & simulation symposium. Burdeos, France_, 2014, pp. 402–409.
* [14] X. Xiao, M. Yan, S. Basodi, C. Ji, and Y. Pan, “Efficient hyperparameter optimization in deep learning using a variable length genetic algorithm,” _arXiv preprint arXiv:2006.12703_ , 2020.
* [15] J. S. Delaney, “Esol: estimating aqueous solubility directly from molecular structure,” _Journal of chemical information and computer sciences_ , vol. 44, no. 3, pp. 1000–1005, 2004.
* [16] D. L. Mobley and J. P. Guthrie, “Freesolv: a database of experimental and calculated hydration free energies, with input files,” _Journal of computer-aided molecular design_ , vol. 28, no. 7, pp. 711–720, 2014.
* [17] M. Wenlock and N. Tomkinson, “Experimental in vitro DMPK and physicochemical data on a set of publicly disclosed compounds,” 2015. [Online]. Available: https://doi.org/10.6019/chembl3301361
* [18] O. Vinyals, S. Bengio, and M. Kudlur, “Order matters: Sequence to sequence for sets,” _arXiv preprint arXiv:1511.06391_ , 2015.
* [19] J. Bergstra and Y. Bengio, “Random search for hyper-parameter optimization,” _The Journal of Machine Learning Research_ , vol. 13, no. 1, pp. 281–305, 2012.
* [20] M. Wistuba, N. Schilling, and L. Schmidt-Thieme, “Scalable gaussian process-based transfer surrogates for hyperparameter optimization,” _Machine Learning_ , vol. 107, no. 1, pp. 43–78, 2018.
* [21] K. Eggensperger, M. Feurer, F. Hutter, J. Bergstra, J. Snoek, H. Hoos, and K. Leyton-Brown, “Towards an empirical foundation for assessing bayesian optimization of hyperparameters,” in _NIPS workshop on Bayesian Optimization in Theory and Practice_ , vol. 10, 2013, p. 3.
* [22] J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, M. Patwary, M. Prabhat, and R. Adams, “Scalable bayesian optimization using deep neural networks,” in _International conference on machine learning_ , 2015, pp. 2171–2180.
* [23] A. Klein, S. Falkner, S. Bartels, P. Hennig, and F. Hutter, “Fast bayesian optimization of machine learning hyperparameters on large datasets,” in _Artificial Intelligence and Statistics_. PMLR, 2017, pp. 528–536.
* [24] S. Falkner, A. Klein, and F. Hutter, “Bohb: Robust and efficient hyperparameter optimization at scale,” _arXiv preprint arXiv:1807.01774_ , 2018.
* [25] K. Deb, _Multi-objective optimization using evolutionary algorithms_. John Wiley & Sons, 2001, vol. 16.
* [26] C. A. C. Coello, G. B. Lamont, D. A. Van Veldhuizen _et al._ , _Evolutionary algorithms for solving multi-objective problems_. Springer, 2007, vol. 5.
* [27] L. Ma, J. Cui, and B. Yang, “Deep neural architecture search with deep graph bayesian optimization,” in _2019 IEEE/WIC/ACM International Conference on Web Intelligence (WI)_ , 2019, pp. 500–507.
* [28] S. R. Young, D. C. Rose, T. P. Karnowski, S.-H. Lim, and R. M. Patton, “Optimizing deep learning hyper-parameters through an evolutionary algorithm,” in _Proceedings of the Workshop on Machine Learning in High-Performance Computing Environments_ , 2015, pp. 1–5.
* [29] M. Suganuma, S. Shirakawa, and T. Nagao, “A genetic programming approach to designing convolutional neural network architectures,” in _Proceedings of the genetic and evolutionary computation conference_ , 2017, pp. 497–504.
* [30] L. Xie and A. Yuille, “Genetic cnn,” in _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_ , Oct 2017.
* [31] Q. Zhu, Q. Lin, Z. Du, Z. Liang, W. Wang, Z. Zhu, J. Chen, P. Huang, and Z. Ming, “A novel adaptive hybrid crossover operator for multiobjective evolutionary algorithm,” _Information Sciences_ , vol. 345, pp. 177–198, 2016.
* [32] Q. Zhu, Q. Lin, W. Chen, K.-C. Wong, C. A. C. Coello, J. Li, J. Chen, and J. Zhang, “An external archive-guided multiobjective particle swarm optimization algorithm,” _IEEE transactions on cybernetics_ , vol. 47, no. 9, pp. 2794–2808, 2017.
* [33] W. Wang, S. Yang, Q. Lin, Q. Zhang, K.-C. Wong, C. A. C. Coello, and J. Chen, “An effective ensemble framework for multiobjective optimization,” _IEEE Transactions on Evolutionary Computation_ , vol. 23, no. 4, pp. 645–659, 2018.
* [34] Q. Lin, Z. Liu, Q. Yan, Z. Du, C. A. C. Coello, Z. Liang, W. Wang, and J. Chen, “Adaptive composite operator selection and parameter control for multiobjective evolutionary algorithm,” _Information Sciences_ , vol. 339, pp. 332–352, 2016.
* [35] M. Li, S. Yang, and X. Liu, “Pareto or non-pareto: Bi-criterion evolution in multiobjective optimization,” _IEEE Transactions on Evolutionary Computation_ , vol. 20, no. 5, pp. 645–665, 2015.
* [36] S. Yang, W. Wang, Q. Lin, and J. Chen, “A novel pso-de co-evolutionary algorithm based on decomposition framework,” in _International conference on smart computing and communication_. Springer, 2016, pp. 381–389.
* [37] T. Chugh, C. Sun, H. Wang, and Y. Jin, “Surrogate-assisted evolutionary optimization of large problems,” in _High-Performance Simulation-Based Optimization_. Springer, 2020, pp. 165–187.
* [38] L. Frachon, W. Pang, and G. M. Coghill, “Immunecs: Neural committee search by an artificial immune system,” _arXiv preprint arXiv:1911.07729_ , 2019.
* [39] E. Bochinski, T. Senst, and T. Sikora, “Hyper-parameter optimization for convolutional neural network committees based on evolutionary algorithms,” in _2017 IEEE International Conference on Image Processing (ICIP)_ , 2017, pp. 3924–3928.
* [40] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, “Learning transferable architectures for scalable image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 8697–8710.
* [41] A. Zela, A. Klein, S. Falkner, and F. Hutter, “Towards automated deep learning: Efficient joint neural architecture and hyperparameter search,” _arXiv preprint arXiv:1807.06906_ , 2018.
* [42] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le, “Regularized evolution for image classifier architecture search,” in _Proceedings of the aaai conference on artificial intelligence_ , vol. 33, 2019, pp. 4780–4789.
* [43] K. Deb, R. B. Agrawal _et al._ , “Simulated binary crossover for continuous search space,” _Complex systems_ , vol. 9, no. 2, pp. 115–148, 1995.
* [44] S. M. Lim, A. B. M. Sultan, M. N. Sulaiman, A. Mustapha, and K. Leong, “Crossover and mutation operators of genetic algorithms,” _International Journal of Machine Learning and Computing_ , vol. 7, no. 1, pp. 9–12, 2017.
* [45] M. Mitchell, “An introduction to genetic algorithms mit press,” _Cambridge, Massachusetts. London, England_ , vol. 1996, 1996.
* [46] D. Whitley, “A genetic algorithm tutorial,” _Statistics and computing_ , vol. 4, no. 2, pp. 65–85, 1994.
* [47] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: Nsga-ii,” _IEEE transactions on evolutionary computation_ , vol. 6, no. 2, pp. 182–197, 2002.
# F3ORNITS: a flexible variable step size non-iterative co-simulation method
handling subsystems with hybrid advanced capabilities
Yohan ÉGUILLON1 , Bruno LACABANNE1 and Damien TROMEUR-DERVOUT2
1Siemens Industry Software, Roanne, France
2Institut Camille Jordan, Université de Lyon, UMR5208 CNRS-U.Lyon1,
Villeurbanne, France
{yohan.eguillon<EMAIL_ADDRESS>damien.tromeur-dervout@univ-lyon1.fr
https://orcid.org/0000-0002-9386-4646 https://orcid.org/0000-0003-1790-3663
https://orcid.org/0000-0002-0118-8100
###### Abstract
This paper introduces the F3ORNITS non-iterative co-simulation algorithm, in
which F3 stands for the $3$ flexible aspects of the method: flexible
polynomial order representation of coupling variables, a flexible time-stepper
applying variable co-simulation step size rules on the subsystems allowing it,
and a flexible scheduler orchestrating the meeting times among the subsystems,
capable of asynchronousness when subsystems' constraints require it. The
motivation of the F3ORNITS method is to accept any kind of co-simulation
model, including any kind of subsystem, regardless of its available
capabilities. Indeed, one of the major problems in industry is that subsystems
usually have constraints or lack advanced capabilities, making it impossible
to implement most advanced co-simulation algorithms on them. The method makes
it possible to preserve the dynamics of the coupling constraints when
necessary, to avoid breaking $C^{1}$ smoothness at communication times, and to
adapt the co-simulation step size in a way that is robust both to
zero-crossing variables (contrary to classical relative error-based criteria)
and to jumps. Two test cases are presented to illustrate the robustness of the
F3ORNITS method as well as its higher accuracy than the non-iterative Jacobi
coupling algorithm (the most commonly used method in industry) for a smaller
number of co-simulation steps.
## 1 INTRODUCTION
Co-simulation consists in simulating a modular model, that is to say a model
composed of several dynamical subsystems connected together. This is usually
motivated by the need to simulate a model with multiphysical parts. Designing
a subsystem representing the physics of a given field (electricity, mechanics,
fluids, thermodynamics, etc.) allows the use of a specific and adapted solver
for this field, or even modelling with specific third-party software. For
industrial applications, a modular model is preferred because subsystem
providers can focus on one part of the global system without taking the rest
into account. Nonetheless, gathering different subsystems is not
straightforward: a monolithic simulation of the equations of the global system
cannot be retrieved. Co-simulation is the field investigating ways to solve
such systems, based on regular data communications between the subsystems,
which are solved separately.
The co-simulation method (or co-simulation algorithm) is the rule used to run
the simulation on such modular systems. It namely deals with the determination
of the data communication times, the way the inputs each subsystem should use
at each step are computed, the way the outputs are used, and so on. Many
co-simulation algorithms have been established [Kübler and Schiehlen, 2000]
[Arnold and Günther, 2001] [Gu and Asada, 2004] [Bartel et al., 2013]
[Sicklinger et al., 2014] [Busch, 2016] and studied [Li et al., 2014]
[Schweizer et al., 2016], with various complexities of implementation,
subsystem capability requirements, or physical field-specific principles (see
also the recent state of the art on co-simulation in [Gomes et al., 2018]). In
some co-simulation algorithms, a time interval may need to be integrated more
than once on one or more subsystems. This is called iterative co-simulation.
The subsystems that need to do so must have the capability to replay a
simulation over this time interval with different data (time to reach, input
values, …). Depending on the model provider, this capability, so-called
"rollback", is not always available on every subsystem. When it is available,
an iterative algorithm such as [Éguillon et al., 2019] can be used.
Nevertheless, if at least one subsystem cannot roll back, a non-iterative
method has to be chosen. The rollback capability is not available in most
industrial models. As the F3ORNITS method presented in this paper is
non-iterative, it can be applied to configurations with rollback-less
subsystems.
Besides rollback, other platform-dependent capabilities may make a given
co-simulation method unusable on certain subsystems. Among them are the
ability to accept inputs with $n$th order time-derivatives and the ability to
provide the time-derivatives of the outputs at the end of a macro-step. The
motivation of the F3ORNITS method is to accept any kind of modular model,
including any kind of subsystem, regardless of its available capabilities. The
first consequence of this specification is that F3ORNITS is non-iterative (in
order to accept rollback-less subsystems) and asynchronous (in order to accept
modular models with subsystems with an imposed step, even when several
subsystems have imposed steps that are not multiples of one another). Such a
method is presented in figure 1. F3ORNITS also represents the inputs as
time-dependent polynomials (when supported by the subsystems concerned), and
adapts the communication step size of the subsystems which can handle variable
communication step sizes. This both improves accuracy when frequent exchanges
are needed and saves time when the coupling variables can be represented with
high enough accuracy.
Figure 1: Visualization of the behavior of a non-iterative asynchronous co-
simulation method on $2$ subsystems
This algorithm is based on a variable order polynomial representation of every
input (the polynomial degree may differ for each input variable at a given
time), determined by an a posteriori criterion and redefined at each
communication step. A smoother version can be triggered by interpolating on
the extrapolated values, as done in [Busch, 2019] with the so-called
"EXTRIPOL" technique. This technique avoids non-physical jumps in the coupling
variables at each communication time, and may help the subsystems' solvers
restart faster after a discontinuity, since the $C^{1}$ smoothness of the
input variables can be guaranteed. The F3ORNITS method adapts this smoothing
to flexible order polynomials, variable step size subsystems, and
asynchronous cases.
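As an illustration of the smoothing idea (not the exact EXTRIPOL formula), a $C^{1}$ input segment can be built as a cubic Hermite polynomial joining the last reached value and slope to the value and slope that the extrapolation polynomial predicts at the next communication time:

```python
def c1_input(t0, t1, v0, d0, v1, d1):
    """Cubic Hermite segment joining (t0, v0) with slope d0 to (t1, v1)
    with slope d1, guaranteeing C^1 continuity of the input at t0.

    In an EXTRIPOL-like scheme, v1 and d1 would be the value and derivative
    of the extrapolation polynomial at t1; this construction is an
    illustration of the interpolate-on-extrapolated-values idea, not the
    method's exact formula.
    """
    h = t1 - t0
    def u(t):
        s = (t - t0) / h  # normalised time in [0, 1]
        h00 = 2*s**3 - 3*s**2 + 1
        h10 = s**3 - 2*s**2 + s
        h01 = -2*s**3 + 3*s**2
        h11 = s**3 - s**2
        return h00*v0 + h10*h*d0 + h01*v1 + h11*h*d1
    return u
```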
The paper is structured as follows. Section 2 presents the mathematical
formalism used to develop the co-simulation algorithm: general notations, a
detailed account of subsystem topologies, and the common polynomial techniques
used in the F3ORNITS method. Section 3 presents the F3ORNITS algorithm: on the
one hand, the way time-dependent inputs are determined; on the other hand, the
way the step size determination is handled. Section 4 gives the results of the
F3ORNITS method both on a controlled speed model and on the classical linear
two-mass oscillator test case [Schweizer et al., 2016] [Éguillon et al.,
2019]. A comparison of the F3ORNITS algorithm with different options and with
the basic non-iterative Jacobi method (the most basic non-iterative
co-simulation technique, with fixed step size) is also provided. The
conclusion is given in section 5.
## 2 MATHEMATICAL FORMALISM AND MOTIVATIONS
A subsystem that communicates with other subsystems to form a co-simulation
configuration will be represented by its equations. As the context is time
integration, these are time differential equations. First of all, we present
the general form of a monolithic system, in other words a closed system
(with neither inputs nor outputs) represented by a $1$st order differential
equation, which covers, among other things, ODEs, DAEs and IDEs as follows:
$\left\\{\begin{array}[]{lll}F(\frac{d}{dt}x,x,t)&=&0\\\
x(t^{[\text{init}]})&=&x^{[\text{init}]}\\\ \end{array}\right.$ (1)
where
$\begin{array}[]{l}n_{st}\in\mathbb{N}^{*},\
x^{[\text{init}]}\in\mathbb{R}^{n_{st}},\
t\in[t^{[\text{init}]},t^{[\text{end}]}],\\\
{[}t^{[\text{init}]},t^{[\text{end}]}]\subset\mathbb{R}\text{ so that
}t^{[\text{end}]}-t^{[\text{init}]}\in\mathbb{R}_{+}^{*},\\\
F:\mathbb{R}^{n_{st}}\times\mathbb{R}^{n_{st}}\times[t^{[\text{init}]},t^{[\text{end}]}]\rightarrow\mathbb{R}^{n_{st}},\\\
\end{array}$ (2)
are given, and
$x:[t^{[\text{init}]},t^{[\text{end}]}]\rightarrow\mathbb{R}^{n_{st}}$ (3)
is the solution state vector whose $n_{st}$ components are called the state
variables.
This paper will only cover the ODE case, i.e. equations of the form (4). The
terms "equation", "equations", and "differential equations" will refer to the
ODE of a system throughout this document.
$\frac{d}{dt}x=f(t,x)\\\ $ (4)
where
$f:[t^{[\text{init}]},t^{[\text{end}]}]\times\mathbb{R}^{n_{st}}\rightarrow\mathbb{R}^{n_{st}}$
(5)
### 2.1 Framework and notations
In a co-simulation context, the principle of linking the derivatives of the
states to the states themselves (and potentially to time) is the same as in
the monolithic case, yet the inputs and outputs also have to be considered.
Let $n_{sys}\in\mathbb{N}^{*}$ be the number of subsystems. Note that the case
$n_{sys}=1$ corresponds to a monolithic system. The cases considered here are
connected subsystems, that is to say $n_{sys}\geqslant 2$ subsystems that need
to exchange data with one another: each input of each subsystem has to be fed
by an output of another subsystem. A subsystem will be referenced by its
subscript index $k\in[\\![1,n_{sys}]\\!]$, so that subsystem-dependent
functions or variables carry an index indicating the subsystem they are
attached to.
Considering the inputs and the outputs, the co-simulation version of (4) - (5)
for subsystem $k\in[\\![1,n_{sys}]\\!]$ is:
$\left\\{\begin{array}[]{lll}\displaystyle{\frac{d}{dt}}x_{k}&=&f_{k}(t,x_{k},u_{k})\\\
y_{k}&=&g_{k}(t,x_{k},u_{k})\\\ \end{array}\right.$ (6)
where
$\begin{array}[]{l}x_{k}\in\mathbb{R}^{n_{st,k}},\
u_{k}\in\mathbb{R}^{n_{in,k}},\ y_{k}\in\mathbb{R}^{n_{out,k}}\\\
f_{k}:[t^{[\text{init}]},t^{[\text{end}]}]\times\mathbb{R}^{n_{st,k}}\times\mathbb{R}^{n_{in,k}}\rightarrow\mathbb{R}^{n_{st,k}}\\\
g_{k}:[t^{[\text{init}]},t^{[\text{end}]}]\times\mathbb{R}^{n_{st,k}}\times\mathbb{R}^{n_{in,k}}\rightarrow\mathbb{R}^{n_{out,k}}\\\
\end{array}$ (7)
Equations (6) - (7) represent a given subsystem. They are the minimal data
required to entirely characterize any subsystem, yet they do not define the
whole co-simulation configuration (the connections are missing and can be
represented by extra data; see the link function in [Éguillon et al., 2019]).
To be rigorous, we should write $x$ as a function
$x:[t^{[\text{init}]},t^{[\text{end}]}]\rightarrow\mathbb{R}^{n_{st}}$
(and similarly for $y$ and $u$), as it is a time-dependent vector. That being
said, we will keep using the $x$, $y$ and $u$ notations as if they were simple
vectors.
This representation enables us to evaluate derivatives at a precise point that
has been reached. It is compatible with methods that need only the subsystems'
equations, such as the Decoupled Implicit Euler Method and the Decoupled
Backward Differentiation Formulas in [Skelboe, 1992].
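A minimal sketch of such a subsystem, in the spirit of equations (6) - (7), assuming an explicit Euler micro-solver for illustration (a real co-simulation slave would embed its own internal solver behind the same interface):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subsystem:
    """Sketch of one co-simulation subsystem (S_k).

    `f` maps (t, x, u) to dx/dt and `g` maps (t, x, u) to the outputs y,
    mirroring equations (6)-(7). `step` advances the state over one
    macro-step [t0, t1[ using a fixed number of explicit Euler micro-steps,
    with the input u given as a time-dependent callable.
    """
    f: Callable  # (t, x, u) -> dx/dt
    g: Callable  # (t, x, u) -> y
    x: float     # scalar state for simplicity; a vector in general

    def step(self, t0, t1, u, n_micro=100):
        h = (t1 - t0) / n_micro
        t = t0
        for _ in range(n_micro):
            self.x = self.x + h * self.f(t, self.x, u(t))
            t += h
        return self.g(t1, self.x, u(t1))
```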
When $n_{in,k}$ and/or $n_{out,k}$ are $0$ for a given subsystem
$k\in[\\![1,n_{sys}]\\!]$, we will tag the topology of the subsystem $(S_{k})$
with a special name: NI, NO, NINO, or IO depending on the case (see figure 2).
This will be useful for treating different behaviors when scheduling.
Figure 2: The four different topologies of subsystems, depending on the number
of their coupling variables
As a co-simulation implies communications at discrete times, let’s introduce
discrete notations. Let $n\in\mathbb{N}$ be the time index. Subsystem
$(S_{l})$ (with $l\in[\\![1,n_{sys}]\\!]$) communicates at times
$t_{l}^{[0]},t_{l}^{[1]},t_{l}^{[2]},...$. At these times, the $n_{out,l}$
outputs of $(S_{l})$ are known: the value of the $j$th output (with
$j\in[\\![1,n_{out,l}]\\!]$) of subsystem $(S_{l})$ at time $t_{l}^{[n]}$ will
be written $y_{l,j}^{[n]}$.
Let $(S_{k})$ be a subsystem with $n_{in,k}>0$. The $i$th input is given by
$u_{k,i}^{[n]}$ on macro-step $[t_{k}^{[n]},t_{k}^{[n+1]}[$.
Please note that $\forall n\in\mathbb{N}$:
$\forall l\in[\\![1,n_{sys}]\\!],\forall
j\in[\\![1,n_{out,l}]\\!],y_{l,j}^{[n]}\in\mathbb{R}$ whereas
$\forall k\in[\\![1,n_{sys}]\\!],\forall i\in[\\![1,n_{in,k}]\\!],$
$u_{k,i}^{[n]}:[t_{k}^{[n]},t_{k}^{[n+1]}[\rightarrow\mathbb{R}$.
### 2.2 Polynomial calibration
For $q\in\mathbb{N}^{*}$, let $\mathbb{A}(q)\subsetneq\mathbb{R}^{q}$ be the
set of elements whose coordinates are pairwise distinct. In other words:
$\begin{array}[]{l}\mathbb{A}(q)=\big{\\{}(t^{[r]})_{r\in[\\![1,q]\\!]}\in\mathbb{R}^{q}\
\big{|}\\\ \hskip 14.22636pt\forall(r_{1},r_{2})\in[\\![1,q]\\!]^{2},\
r_{1}\neq r_{2}\Rightarrow t^{[r_{1}]}\neq t^{[r_{2}]}\big{\\}}\end{array}$
(8)
For a given set of $q$ points $(t^{[r]},z^{[r]})_{r\in[\\![1,q]\\!]}$ whose
abscissas satisfy $(t^{[r]})_{r\in[\\![1,q]\\!]}\in\mathbb{A}(q)$, we define
the two following polynomials:
$\Omega_{q-1}^{Ex}:\left\\{\begin{array}[]{lcl}\mathbb{A}(q)\times\mathbb{R}^{q}\times\mathbb{R}&\rightarrow&\mathbb{R}\\\
\left(\begin{array}[]{c}(t^{[r]})_{r\in[\\![1,q]\\!]}\\\
(z^{[r]})_{r\in[\\![1,q]\\!]}\\\
t\end{array}\right)&\mapsto&\Omega_{q-1}^{Ex}\left(t\right)\\\
\end{array}\right.$ (9)
$\Omega_{q-2}^{CLS}:\left\\{\begin{array}[]{lcl}\mathbb{A}(q)\times\mathbb{R}^{q}\times\mathbb{R}&\rightarrow&\mathbb{R}\\\
\left(\begin{array}[]{c}(t^{[r]})_{r\in[\\![1,q]\\!]}\\\
(z^{[r]})_{r\in[\\![1,q]\\!]}\\\
t\end{array}\right)&\mapsto&\Omega_{q-2}^{CLS}\left(t\right)\\\
\end{array}\right.$ (10)
respectively called the extrapolation and the constrained least squares
polynomials111For the sake of readability, we will sometimes write only the
last variable of $\Omega_{q-1}^{Ex}$ and $\Omega_{q-2}^{CLS}$..
These polynomials are defined so as to have
$\Omega_{q-1}^{Ex}\in\mathbb{R}_{q-1}[t]$ and
$\Omega_{q-2}^{CLS}\in\mathbb{R}_{q-2}[t]$, where $\forall p\in\mathbb{N},\
\mathbb{R}_{p}[t]$ is the set of polynomials of the variable $t$ with
coefficients in $\mathbb{R}$ and with a degree less than or equal to $p$. We have:
$\forall r\in[\\![1,q]\\!],\Omega_{q-1}^{Ex}(t^{[r]})=z^{[r]}$ (11)
and
$\begin{array}[]{c}\Omega_{q-2}^{CLS}:t\mapsto\sum_{i=0}^{q-2}a_{i}t^{i}\
\text{where}\\\ (a_{i})_{\begin{subarray}{c}\\\
i\in[\\![0,q-2]\\!]\end{subarray}}=\\!\\!\\!\\!\\!\\!\\!\underset{\begin{subarray}{c}(\bar{a}_{i})_{i\in[\\![0,q-2]\\!]}\\\
\Omega_{q-2}^{CLS}(t^{[1]})=z^{[1]}\end{subarray}}{\arg\min\vspace{-0.1cm}}\\!\\!\\!\\!\\!\\!\Big{\\{}\underset{r=1}{\overset{q}{\sum}}\\!\\!\Big{(}\\!\\!z^{[r]}\\!-\\!\\!\underset{i=0}{\overset{q-2}{\sum}}\\!\bar{a}_{i}(t^{[r]})^{i}\\!\Big{)}^{2}\Big{\\}}\end{array}$
(12)
In practice, the $z$ variables will be either inputs $u$ or outputs $y$, and
the index will correspond to a time index where $z^{[1]}$ is the latest one, so
that the constraint in the least squares (12) is the equality on the most
recent point.
When we consider one of these polynomials of degree $p$ (either
extrapolation on $p+1$ points or constrained least squares on $p+2$ points)
without specifying which one is used, we will use the generic notation
$\Omega_{p}$.
The coefficients of $\Omega_{p}^{Ex}$ can be computed by several methods
(Lagrange polynomials, Newton’s formula, barycentric approach [Berrut and
Trefethen, 2004]), and the coefficients of $\Omega_{p}^{CLS}$ can be obtained
with a constrained linear model (formula $(1.4.11)$, page $22$ of [Amemiya,
1985]).
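As a concrete illustration, both calibrations can be sketched in a few lines of Python with NumPy. This is our own sketch, not the paper’s implementation: the function names and the convention that the most recent point comes first are assumptions. The extrapolation polynomial is an exact fit on the $q$ points, while the CLS polynomial enforces the equality constraint at the most recent point through a change of variable before an ordinary least-squares solve.

```python
import numpy as np

def calibrate_extrapolation(ts, zs):
    """Degree (q-1) interpolating polynomial through q points, as in (9), (11).
    With deg = q - 1 and pairwise distinct abscissas, polyfit interpolates exactly."""
    ts, zs = np.asarray(ts, float), np.asarray(zs, float)
    return np.poly1d(np.polyfit(ts, zs, deg=len(ts) - 1))

def calibrate_cls(ts, zs):
    """Degree (q-2) constrained least-squares polynomial on q points, as in (10), (12).
    Convention (ours): the most recent point is (ts[0], zs[0]).
    Writing p(t) = zs[0] + sum_{i>=1} b_i (t - ts[0])^i enforces the equality
    constraint p(ts[0]) = zs[0] by construction; the b_i then come from a plain
    least-squares solve."""
    ts, zs = np.asarray(ts, float), np.asarray(zs, float)
    q = len(ts)
    s = ts - ts[0]                                   # shifted abscissas, s[0] == 0
    A = np.vander(s, q - 1, increasing=True)[:, 1:]  # columns s, s^2, ..., s^(q-2)
    b, *_ = np.linalg.lstsq(A, zs - zs[0], rcond=None)
    def omega(t):
        st = np.asarray(t, float) - ts[0]
        return zs[0] + sum(bi * st ** (i + 1) for i, bi in enumerate(b))
    return omega
```

For instance, on the three points $(0,1)$, $(1,2)$, $(2,5)$ with $t=0$ taken as the most recent abscissa, the extrapolation polynomial is the exact quadratic through all three points, whereas the CLS polynomial is the best-fitting line pinned to $(0,1)$.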
Finally, we define the Hermite interpolation polynomial $\mathcal{H}$ in the
specific case with two points and first order derivatives:
$\mathcal{H}:\left\\{\begin{array}[]{lcl}\mathbb{R}^{2}\times\mathbb{R}^{2}\times\mathbb{R}^{2}\times\mathbb{R}&\rightarrow&\mathbb{R}\\\
(\left(\begin{subarray}{c}t^{[1]}\\\
t^{[2]}\end{subarray}\right),\left(\begin{subarray}{c}z^{[1]}\\\
z^{[2]}\end{subarray}\right),\left(\begin{subarray}{c}\dot{z}^{[1]}\\\
\dot{z}^{[2]}\end{subarray}\right),t)&\mapsto&\mathcal{H}(t)\end{array}\right.$
(13)
where
$\begin{array}[]{l}\mathcal{H}\in\mathbb{R}_{3}[t]\ \text{and}\ \forall
r\in[\\![1,2]\\!],\\\ \mathcal{H}(t^{[r]})=z^{[r]}\ \text{and}\
\textstyle{\frac{d\mathcal{H}}{dt}}(t^{[r]})=\dot{z}^{[r]}\\\ \end{array}$
(14)
The coefficients of $\mathcal{H}$ can be computed with the square of the
Lagrange polynomial basis or by using divided differences [Hildebrand, 1956].
Hermite interpolation will be used for smoothness enhancement in 3.1.4.
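A minimal sketch of this two-point, first-order Hermite interpolation (eqs. (13)-(14)), using the standard cubic Hermite basis rather than the squared Lagrange basis or divided differences; the function name is ours:

```python
def hermite2(t1, t2, z1, z2, dz1, dz2):
    """Cubic H with H(t_r) = z_r and H'(t_r) = dz_r for r = 1, 2 (eq. (14)).
    Built from the standard cubic Hermite basis on s = (t - t1)/(t2 - t1)."""
    h = t2 - t1
    def H(t):
        s = (t - t1) / h
        h00 = 2 * s**3 - 3 * s**2 + 1   # weight of z1
        h10 = s**3 - 2 * s**2 + s       # weight of h * dz1
        h01 = -2 * s**3 + 3 * s**2      # weight of z2
        h11 = s**3 - s**2               # weight of h * dz2
        return h00 * z1 + h10 * h * dz1 + h01 * z2 + h11 * h * dz2
    return H
```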
## 3 F3ORNITS ALGORITHM
We introduce here the F3ORNITS method, standing for Flexible Order
Representation of New Inputs, including flexible Time-stepper (with variable
step size, when applicable) and flexible Scheduler (asynchronous-capable, when
applicable).
The method stems from the desire to keep the dynamical behavior of the
coupling variables, which zero order hold (ZOH) does not do. At a given
communication time, the outputs of the past data exchange will be reused in
order to fit polynomial estimations for the future (the upcoming step). This
is done in several co-simulation methods [Kübler and Schiehlen, 2000] [Busch,
2016], yet usually the polynomial order is decided in advance. F3ORNITS has a
flexible order and will decide on an order for every coupling variable at each
communication time. Moreover, we will focus on the way to use these
estimations in an asynchronous context (as it is not always possible to pass
information at the same simulation time from one system to another). The
error, depending on the polynomial order, will also be used to decide the
evolution of the macro-step size (see figure 1). Regarding subsystems with
limited capabilities for holding time-dependent inputs, strategies are
proposed to take advantage of the variable order thanks to an adaptation of
the data to the capabilities of such subsystems. The latter strategies fulfill
the requirement of handling any modular model, regardless of the capabilities
missing in each subsystem.
Finally, the time-stepping strategy (including the time-stepper itself and the
scheduler) will be presented. The time-stepper is based on the error made by
the estimation described in 3.1. The normalization of this error has a strong
impact on the time-stepping criterion, so several normalization methods will
be described and a new one will be introduced: the damped amplitude
normalization method. The scheduler is also part of the F3ORNITS method: it
runs once the time-stepper has produced an estimation of the upcoming step
sizes, yet it will not be fully detailed here for the sake of space.
The smoothness enhancement [Busch, 2019] [Dronka and Rauh, 2006] will also be
presented as it is compliant with the F3ORNITS method. Nevertheless, we
adapted it to the context of a flexible order and variable step size method.
The motivation is similar to the one in [Éguillon et al., 2019]. In the case
where subsystems do not have sufficient capabilities (up to $3$rd order
polynomial inputs), the F3ORNITS method can still run without smoothness
enhancement.
### 3.1 Flexible polynomial inputs
We arbitrarily set the maximum degree for polynomial inputs to:
$M=2$ (15)
Let $(m_{k})_{k\in[\\![1,n_{sys}]\\!]}$ be the maximum degrees of the polynomial
inputs supported by each subsystem. As we want to support every kind of
subsystem, we cannot assume anything about $m_{k}$ (we only know that $\forall
k\in[\\![1,n_{sys}]\\!],m_{k}\geqslant 0$).
Let’s define the effective maximum degree for each subsystem, by adding the
constraint (15):
$\forall k\in[\\![1,n_{sys}]\\!],\ M_{k}:=\min(M,m_{k})$ (16)
The time-dependent inputs that will be generated will always satisfy (17), and
for each degree in these ranges, the maximum supported degree for each
subsystem will never be exceeded thanks to (16).
$\forall k\in[\\![1,n_{sys}]\\!],\ \forall n\in\mathbb{N},\
u_{k}^{[n]}\in\left(\mathbb{R}_{M_{k}}[t]\right)^{n_{in,k}}$ (17)
The determination of the order to use for polynomial inputs is quite
straightforward. However, as this order may be different for each variable and
as it is determined in the subsystem holding the corresponding variable as an
output, some complications due to the asynchronousness may appear on
subsystems having an input connected to this variable.
In order to clarify the process and to deal with properly defined mathematical
concepts, we will split this explanation into three parts:
To begin with, we will define the order function for every output variable and
see how the values of this function are found. Then, we will define the
estimated output variables based on order function. Finally, we will see how
these estimated output variables are used from the connected input’s
perspective.
#### 3.1.1 Order function
Let’s consider a subsystem $(S_{l})$ with $l\in[\\![1,n_{sys}]\\!]$. Let’s
assume we have already done $n$ macro-steps with $n\in\mathbb{N}^{*}$, that
is, at least one.
For $j\in[\\![1,n_{out,l}]\\!]$ and $q\in[\\![0,\min\left(M,n-1\right)]\\!]$,
we define ${}^{(q)}err_{l,j}^{[n+1]}$ the following way:
$^{(q)}err_{l,j}^{[n+1]}=\left|y_{l,j}^{[n+1]}-\Omega_{q}^{Ex}\left(t_{l}^{[n+1]}\right)\right|$
(18)
where $\Omega_{q}^{Ex}$ is calibrated on $(t_{l}^{[n-r]})_{r\in[\\![0,q]\\!]}$
and $(y_{l,j}^{[n-r]})_{r\in[\\![0,q]\\!]}$ as explained in (9) and (11). In
other words the value obtained at $t_{l}^{[n+1]}$ is compared to the
extrapolation estimation based on the $(q+1)$ exchanged values before
$t_{l}^{[n+1]}$ (excluded), that is to say at
$t_{l}^{[n-q]},t_{l}^{[n-q+1]},t_{l}^{[n-q+2]},...,t_{l}^{[n]}$ as shown on
figure 3.
We now define the order function $p_{l}$ for subsystem $(S_{l})$, which is a
vectorial function for which each coordinate corresponds to one output of the
subsystem $(S_{l})$. It is a step function defining the ”best order” of
extrapolation for each macro-step, based on the previous macro-step. This
includes a delay that makes it possible to integrate the subsystems
simultaneously.
$p_{l}:\left\\{\begin{array}[]{lcl}[t^{[\text{init}]},t^{[\text{end}]}[&\rightarrow&[\\![0,M]\\!]^{n_{out,l}}\\\
t&\mapsto&\left(p_{l,j}(t)\right)_{j\in[\\![1,n_{out,l}]\\!]}\\\
\end{array}\right.$ (19)
For $j\in[\\![1,n_{out,l}]\\!]$, the $p_{l,j}$ functions are defined this way:
$\begin{array}[]{l}p_{l,j}:\left\\{\begin{array}[]{lcl}[t^{[\text{init}]},t^{[\text{end}]}[&\rightarrow&[\\![0,M]\\!]\\\
t&\mapsto&p_{l,j}(t)\ \text{with:}\\\ \end{array}\right.\\\
p_{l,j}(t)=\displaystyle{\sum_{n=1}^{n_{\max}}}\left(\mathds{1}_{[t_{l}^{[n]},t_{l}^{[n+1]}[}(t)\\!\\!\\!\\!\\!\\!\\!\underset{q\in[\\![0,\min(M,n-1)]\\!]}{\arg\min}\\!\\!\left\\{{}^{q}err_{l,j}^{[n]}\right\\}\right)\end{array}$
(20)
where $n_{\max}$ satisfies $t_{l}^{[n_{\max}]}=t^{[\text{end}]}$.
An illustration of the ”$\arg\min$” choice in (20) is presented on figure 3,
and the order function itself can be visualized on its plot in figure 4.
Figure 3: Order determination
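The error computation (18) and the arg-min choice of (20) can be sketched as follows. This is an illustrative Python sketch with our own conventions (the history is ordered oldest first, and the newest value plays the role of $y_{l,j}^{[n+1]}$), not the paper’s code:

```python
import numpy as np

def best_order(ts, ys, M=2):
    """Pick the extrapolation order for the next step (eqs. (18), (20)).
    ts, ys: past communication times/values, oldest first; the newest point
    (ts[-1], ys[-1]) plays the role of (t^[n+1], y^[n+1]).
    For each candidate order q, a degree-q polynomial is calibrated on the
    q+1 points preceding the newest one and evaluated at the newest time;
    the order with the smallest extrapolation error wins."""
    n_hist = len(ts) - 1                 # points available before the newest one
    errors = []
    for q in range(min(M, n_hist - 1) + 1):
        t_cal, y_cal = ts[-(q + 2):-1], ys[-(q + 2):-1]
        coeffs = np.polyfit(t_cal, y_cal, deg=q)   # exact fit: q+1 points, degree q
        errors.append(abs(ys[-1] - np.polyval(coeffs, ts[-1])))
    return int(np.argmin(errors))
```

On a parabolic history the quadratic extrapolation wins, while on a constant history order $0$ already gives a null error.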
#### 3.1.2 Estimated outputs
We still consider the subsystem $(S_{l})$ and its $n_{out,l}$ outputs. Let’s
assume the step $[t_{l}^{[n]},t_{l}^{[n+1]}[$ has been computed. Therefore,
for all $j\in[\\![1,n_{out,l}]\\!]$, the value of $p_{l,j}(t)$ for $t$ in
$[t_{l}^{[n+1]},t_{l}^{[n+2]}[$ is known.
We can now determine the estimated outputs
$\hat{y}_{l}^{[n+1]}\in\left(\mathbb{R}_{q}[t]\right)^{n_{out,l}},\
t\in[t_{l}^{[n+1]},t_{l}^{[n+2]}[$. We have two choices for the way this
estimation is made: Extrapolation mode and Constrained Least Squares (CLS)
mode.
For the sake of genericity, we will define the abstraction of this choice
$\Omega_{q}$ introduced in 2.2.
Estimated outputs are defined the following way on the step
$[t_{l}^{[n+1]},t_{l}^{[n+2]}[$:
$\hat{y}_{l}^{[n+1]}:\left\\{\begin{array}[]{lcl}[t_{l}^{[n+1]},t_{l}^{[n+2]}[&\rightarrow&\mathbb{R}^{n_{out,k}}\\\
t&\mapsto&\left(\hat{y}_{l,j}^{[n+1]}(t)\right)_{j\in[\\![1,n_{out,l}]\\!]}\\\
\end{array}\right.$ (21)
where
$\hat{y}_{l,j}^{[n+1]}:\left\\{\begin{array}[]{lcl}[t_{l}^{[n+1]},t_{l}^{[n+2]}[&\rightarrow&\mathbb{R}\\\
t&\mapsto&{\Omega_{q}}_{\big{|}_{\big{[}t_{l}^{[n+1]},t_{l}^{[n+2]}\big{[}}}\left(t\right)\\\
\end{array}\right.$ (22)
where $\Omega_{q}$ is calibrated on
$(t_{l}^{[n+1-r]},y_{l,j}^{[n+1-r]})_{r\in[\\![0,p]\\!]}$ in extrapolation
mode (see (11)), on
$(t_{l}^{[n+1-r]},y_{l,j}^{[n+1-r]})_{r\in[\\![0,p+1]\\!]}$ in CLS mode (see
(12)), and where $q:=p_{l,j}(t_{l}^{[n+1]})$ so that
$\deg(\hat{y}_{l,j}^{[n+1]})=p_{l,j}(t_{l}^{[n+1]})$ (order determined
previously is respected).
Let $\breve{y}_{l,j}^{[n+1]}$ be the extension of the polynomial
$\hat{y}_{l,j}^{[n+1]}$ on the whole $\mathbb{R}$ domain.
The degree of $\breve{y}_{l,j}^{[n+1]}$ is given by $p_{l,j}(t_{l}^{[n+1]})$
as shown on figure 4.
Figure 4: Order function and corresponding output estimation with
corresponding orders
As in (22), for the rest of 3.1.2, we will use the following notation (for the
sake of readability):
$q=p_{l,j}(t_{l}^{[n+1]})$ (23)
The definition of $\Omega_{q}$ varies depending on the mode. In
”Extrapolation” mode, the oldest point, $(t_{l}^{[n-q]},\ y_{l,j}^{[n-q]})$,
is forgotten (that is, not taken into account in the extrapolation). This
point, taken into account to choose the order $q$ but not used in the
extrapolation calibration, is represented as striped in figure 5.
On the other hand, the idea of the ”CLS” mode is to take the point described
above into account in the estimation of the output. One idea could be to forget
the most recent point, $(t_{l}^{[n+1]},y_{l,j}^{[n+1]})$, but this would leave
the value given by the subsystem’s integrator unused and would introduce a
delay in the coupling process. Thus, the strategy is to take into account all
the $(q\\!+\\!1)$ points that have been used in the determination of the chosen
order, and the most recent point as well. We thus have $(q\\!+\\!2)$ points to
adjust a polynomial of degree at most $q$: an extrapolation process cannot be
used, but the ”best fitting” polynomial can be found for the CLS criterion
(12). Please note that removing the constraint
$\Omega_{q-2}^{CLS}(t^{[1]})=z^{[1]}$ in (12) corresponds to the relaxation
technique on the past referred to as ”method 1” in [Li et al., 2020] in the
particular case of $q=0$.
Figure 5: Extrapolation mode vs. CLS mode, on step
$[t_{l}^{[n+1]},t_{l}^{[n+2]}[$ once step $[t_{l}^{[n]},t_{l}^{[n+1]}[$ is
done (so that $p_{l,j}(t^{[n+1]})$ could be computed)
#### 3.1.3 Estimated inputs
From the inputs’ perspective, the order function of the connected output is
used. Let’s consider a subsystem $(S_{k})$ with $k\in[\\![1,n_{sys}]\\!]$,
which has $n_{in,k}>0$ inputs. As the case $n_{in,k}=0$ should not be
excluded, let’s simply state that nothing is done from the inputs’ perspective
in this case (because there is no input). From here and for the whole section
3.1.3, we will consider $n_{in,k}\in\mathbb{N}^{*}$.
We will consider the input $i$ with $i\in[\\![1,n_{in,k}]\\!]$ and, to
properly consider the connected output, we take $l\in[\\![1,n_{sys}]\\!]$ and
$j\in[\\![1,n_{out,l}]\\!]$ so that input $i$ of subsystem $k$ is fed by
output $j$ of subsystem $l$ (with the link function notation of [Éguillon et
al., 2019], we would write $(l,j)=L(k,i)$).
As asynchronousness should be supported, special care must be taken when
the step $[t_{k}^{[n]},t_{k}^{[n+1]}[$ does not fit into one single definition
of $\hat{y}_{l,j}^{[m]}$ for an $m\in\mathbb{N}$. In this case, we will use:
$u_{k,i}^{[n]}:\left\\{\begin{array}[]{lcl}[t_{k}^{[n]},t_{k}^{[n+1]}[&\rightarrow&\mathbb{R}\\\
t&\mapsto&\breve{y}_{l,j}^{[m]}(t)\end{array}\right.$ (24)
where $\breve{y}$ is defined in 3.1.2 and where
$m=\max\left\\{m\in\mathbb{N}\ \big{|}\ t_{l}^{[m]}\leqslant
t_{k}^{[n]}\right\\}$ (25)
Equations (24) and (25) can be visualized on figure 6.
Figure 6: Estimated input using corresponding extended estimated output
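The selection of $m$ in (25) is a simple ”latest output step starting no later than $t_{k}^{[n]}$” lookup; a sketch (the function name is ours) using a binary search over the sorted output communication times:

```python
import bisect

def pick_output_step(t_out_times, t_k_n):
    """Index m of eq. (25): the latest output step of (S_l) starting no later
    than the input step start t_k^[n] (asynchronous case).
    t_out_times must be sorted increasingly."""
    return bisect.bisect_right(t_out_times, t_k_n) - 1
```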
Some subsystems cannot hold polynomial inputs, and some others can, but not
necessarily at every order.
As the F3ORNITS method only requires holding inputs up to order $2$ (except
when smoothness enhancement is triggered (see 3.1.4), which is a particular
case), we will consider $9$ cases represented in the table below.
Figure 7: Alternatives to decrease polynomial input degree for subsystems with
limited capabilities
#### 3.1.4 Smoothness enhancement
Smoothness enhancement can be triggered to enable $C^{1}$ inputs. This mode
will only be applicable on subsystems supporting at least $3$rd order
polynomial inputs.
Let’s consider a system $(S_{k})$ with $k\in[\\![1,n_{sys}]\\!]$. Let’s
consider the step $[t_{k}^{[n]},t_{k}^{[n+1]}[$ with $n\in\mathbb{N}^{*}$. We
will also consider only the input $i$ of this system, with
$i\in[\\![1,n_{in,k}]\\!]$. As the smoothness enhancement process should be
applied on every input separately, it will be detailed only on $u_{k,i}$ here.
The idea is to guarantee that $C^{1}$ smoothness is not broken at time
$t^{[n]}$. In other words, we will remove the jump at the communication time.
Moreover, as several consecutive steps are concerned, the $C^{1}$ smoothness
won’t be broken on the whole time interval (union of the steps).
Regardless of the degree of the polynomial input, we will extend it to a third
order polynomial using Hermite interpolation, as shown on figure 8.
Figure 8: Redefinition of the time-dependent input on a co-simulation step in
the case of smoothness enhancement
As $(S_{k})$ reached the time $t_{k}^{[n]}$, we know the expression of the
input $u_{k,i}^{[n-1]}$ that has been used for $[t^{[n-1]},t^{[n]}[$, so we
can compute the left constraints.
$\begin{array}[]{c}u^{[n],left}:=\underset{\begin{subarray}{c}t\rightarrow
t^{[n]}\\\ t<t_{[n]}\end{subarray}}{\lim\ \ }u^{[n-1]}(t)\\\
\dot{u}^{[n],left}:=\underset{\begin{subarray}{c}t\rightarrow t^{[n]}\\\
t<t_{[n]}\end{subarray}}{\lim\ \ }\dot{u}^{[n-1]}(t)\end{array}$ (26)
Moreover, before performing the simulation on the step $[t^{[n]},t^{[n+1]}[$,
we compute the polynomial input (of degree $0$, $1$, or $2$, computed by
extrapolation or by CLS) $u_{k,i}^{[n]}$. We thus compute the right
constraints.
$\begin{array}[]{c}u^{[n],right}:=u^{[n+1],left}:=\underset{\begin{subarray}{c}t\rightarrow
t^{[n+1]}\\\ t<t_{[n+1]}\end{subarray}}{\lim\ \ }u^{[n]}(t)\\\
\dot{u}^{[n],right}:=\dot{u}^{[n+1],left}:=\underset{\begin{subarray}{c}t\rightarrow
t^{[n+1]}\\\ t<t_{[n+1]}\end{subarray}}{\lim\ \ }\dot{u}^{[n]}(t)\end{array}$
(27)
Finally, instead of using $u_{k,i}^{[n]}$ on $[t^{[n]},t^{[n+1]}[$, we will
use the ”smooth” version of it:
$^{smooth}u_{k,i}^{[n]}:t\mapsto\mathcal{H}\Big{(}\left(\begin{subarray}{c}t^{[n]}\\\
t^{[n+1]}\end{subarray}\right),\left(\begin{subarray}{c}u^{[n],left}\\\
u^{[n],right}\end{subarray}\right),\left(\begin{subarray}{c}\dot{u}^{[n],left}\\\
\dot{u}^{[n],right}\end{subarray}\right),t\Big{)}$ (28)
where $\mathcal{H}$ denotes the Hermite interpolation polynomial described in
section 2.2.
Using Hermite interpolation on values known by extrapolation is sometimes
referred to as ”extrapolated interpolation” [Busch, 2019] [Dronka and Rauh,
2006].
### 3.2 Flexible time management
The other major aspect of F3ORNITS algorithm (the first one being the flexible
polynomial inputs) is the time management, which includes a time stepper and a
scheduler.
The time-stepper defines the next communication time after a macro-step is
finished. The scheduler ensures the coherence of the rendez-vous times of all
subsystems based on their connections, topologies and constraints.
#### 3.2.1 Time-stepper
Let’s consider a system $(S_{l}),l\in[\\![1,n_{sys}]\\!]$ that is either IO or
NI (see figure 2). We have: $n_{out,l}>0$. Let’s consider an output
$y_{l,j},j\in[\\![1,n_{out,l}]\\!]$ of $(S_{l})$.
The aim of the time-stepper is to determine $t_{l}^{[n+2]}$ once the step
$[t_{l}^{[n]},t_{l}^{[n+1]}[$ has been computed.
For the sake of readability, we will use the notation
$p:=p_{l,j}(t_{l}^{[n]})$ (different from $q$ in (23) as here we focus on the
lastly computed step, and not the upcoming one). Let’s introduce the macro-
step size and the dilatation coefficient, respectively (29) and (30).
$\delta t_{l}^{[n]}=t_{l}^{[n+1]}-t_{l}^{[n]}$ (29)
$\rho_{l}^{[n+1]}=\nicefrac{{\delta t_{l}^{[n+1]}}}{{\delta t_{l}^{[n]}}}$
(30)
They will be detailed later in this subsection, but for now we only need to
know that the dilatation coefficients are bounded:
$\exists(\rho_{\min},\rho_{\max})\in\mathbb{R}_{+}^{*},\forall
l\in[\\![1,n_{sys}]\\!],\forall
n\in\mathbb{N},\rho_{l}^{[n]}\in[\rho_{\min},\rho_{\max}]$.
In extrapolation mode, there exists
$\zeta_{t}\in\Big{[}t_{l}^{[n-p]},t_{l}^{[n+1]}\Big{]}$ such that
$\begin{array}[]{l}y_{l,j}^{[n+1]}-\breve{y}_{l,j}^{[n]}(t_{l}^{[n+1]})\\\
=\underbrace{\displaystyle{\frac{1}{(p+1)!}}\displaystyle{\frac{d^{(p+1)}y_{l,j}}{dt^{(p+1)}}}(\zeta_{t})}_{\text{independent
of }\delta
t_{l}^{[n]}}\displaystyle{\prod}_{r=0}^{p}(t_{l}^{[n+1]}-t_{l}^{[n-r]})\\\
=c_{1}\cdot\displaystyle{\prod}_{r=0}^{p}\Big{(}\sum_{s=0}^{r}\delta
t_{l}^{[n-s]}\Big{)}\\\ =c_{1}\cdot\displaystyle{\prod}_{r=0}^{p}\Big{(}\delta
t_{l}^{[n]}\Big{(}1+\sum_{s=1}^{r}\Big{(}\prod_{c=1}^{s}\textstyle{\frac{1}{\underbrace{\rho_{l}^{[n+1-c]}}_{\in[\rho_{\min},\rho_{\max}]}}}\Big{)}\Big{)}\Big{)}\\\
\end{array}$
$\begin{array}[]{l}\leqslant
c_{1}\cdot\displaystyle{\prod}_{r=0}^{p}\Big{(}\delta
t_{l}^{[n]}\Big{(}1+\sum_{s=1}^{r}\Big{(}\prod_{c=1}^{s}\textstyle{\frac{1}{\rho_{\min}}}\Big{)}\Big{)}\Big{)}\\\
\leqslant c_{1}\cdot\Big{(}\delta
t_{l}^{[n]}\Big{)}^{p+1}\underbrace{\displaystyle{\prod}_{r=0}^{p}\Big{(}\sum_{s=0}^{r}\Big{(}\textstyle{\frac{1}{\rho_{\min}}}\Big{)}^{s}\Big{)}}_{\text{independent
of }\delta t_{l}^{[n]}}\\\ \leqslant c_{1}\cdot\Big{(}\delta
t_{l}^{[n]}\Big{)}^{p+1}\cdot c_{2}\\\ \end{array}$ (31)
where $c_{1}$ and $c_{2}$ are constants independent of $\delta t_{l}^{[n]}$.
The final expression in (31) shows that the error is of order $(p+1)$ in
the macro-step size. Analogously, we will consider that the error is of the
same order in CLS mode: $(p+1)$ for a polynomial of degree $p$
(generated with a constrained least-squares fitting on $(p+2)$ points).
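The $(p+1)$ error order of (31) can be checked numerically. The sketch below (names are ours; the test functions are chosen so that the leading derivative does not vanish at the evaluation point) extrapolates a degree-$p$ fit one step ahead and verifies that halving the step divides the error by roughly $2^{p+1}$:

```python
import numpy as np

def extrap_error(dt, p, f):
    """Error of a degree-p polynomial calibrated on p+1 equally spaced samples
    of f, extrapolated one step ahead (the setting of eq. (31))."""
    ts = np.arange(p + 1) * dt
    coeffs = np.polyfit(ts, f(ts), deg=p)   # p+1 points, degree p: exact fit
    t_next = (p + 1) * dt
    return abs(f(t_next) - np.polyval(coeffs, t_next))
```

Halving $\delta t$ should roughly divide the error by $4$ for $p=1$ and by $8$ for $p=2$.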
The time-stepper uses this known error and error order to adapt the step size
accordingly. We use a formula similar to the one in [Schierz et al., 2012]
(also mentioned in [Gomes et al., 2018]) to define a dilatation coefficient
candidate per output $j\in[\\![1,n_{out,l}]\\!]$.
$\rho_{l,j}^{[n+1]}:=\sqrt[p+1]{\frac{1}{error_{l,j}^{[n+1]}}}$ (32)
In (32), the $error$ term is expected to be an error normalized both by the
values of the concerned variable and by a given tolerance; the latter
determines the error threshold above which the step size is expected to
decrease.
Given such a tolerance $tol_{rel}$ and an absolute tolerance $tol_{abs}$, a
first approach, referred to as Magnitude, relies on the order of magnitude of
the variable at the moment of the communication. In that case, the error is
defined as follows.
$^{Magn.}error_{l,j}^{[n+1]}:=\frac{\left|y_{l,j}^{[n+1]}-\breve{y}_{l,j}^{[n]}(t_{l}^{[n+1]})\right|}{tol_{abs}+tol_{rel}\cdot
y_{l,j}^{[n+1]}}$ (33)
The problem with this approach is that when values of $y_{l,j}$ are close to
zero, it tends to produce a large error (and the step size will then be
reduced). This is particularly problematic for variables with a large order of
magnitude that periodically cross zero, such as sinusoids.
An approach that might reduce this effect is to normalize the error by the
amplitude (observed since the beginning of the simulation) instead of the
order of magnitude. This approach will be referred to as Amplitude and
defines the error as follows.
$\begin{array}[]{l}{}^{Ampl.}error_{l,j}^{[n+1]}:=\\\ \\\
\lx@intercol\hfil\displaystyle{\frac{\left|y_{l,j}^{[n+1]}-\breve{y}_{l,j}^{[n]}(t_{l}^{[n+1]})\right|}{tol_{abs}+tol_{rel}\\!\left(\\!\underset{m\in[\\![0,n+1]\\!]}{\max}\\!\\!\left(y_{l,j}^{[m]}\right)-\\!\\!\\!\\!\underset{m\in[\\![0,n+1]\\!]}{\min}\\!\\!\left(y_{l,j}^{[m]}\right)\\!\\!\right)}}\lx@intercol\end{array}$
(34)
The problem with this approach is that when the values of $y_{l,j}$ undergo a
large jump at a single moment (e.g. at initialization), it tends to produce
artificially small errors. Therefore, large step sizes will be produced by the
time-stepper, and the accuracy may dwindle.
This problem can be solved by damping the amplitude, in order to
progressively erase the effects of jumps while keeping the local amplitude of
the variable $y_{l,j}$. The error produced with this principle will be
referred to as the Damped Amplitude strategy, and is defined as follows.
$\begin{array}[]{l}\\!\\!\\!\\!\\!\\!\\!\\!\\!{}^{\begin{subarray}{c}Damped\\\
Ampl.\end{subarray}}error_{l,j}^{[n+1]}:=\\\ \\\ \lx@intercol\hfil\hskip
28.45274pt\displaystyle{\frac{\left|y_{l,j}^{[n+1]}-\breve{y}_{l,j}^{[n]}(t_{l}^{[n+1]})\right|}{tol_{abs}+tol_{rel}\\!\left(\\!{}^{\begin{subarray}{c}damp\\\
max\end{subarray}}\\!\\!y_{l,j}^{[n+1]}-^{\begin{subarray}{c}damp\\\
min\end{subarray}}\\!\\!y_{l,j}^{[n+1]}\\!\\!\right)}}\lx@intercol\end{array}$
(35)
Expression (35) refers to the damped minimal and damped maximal sequences,
which are recursively defined as follows.
$\left\\{\begin{array}[]{lcl}\alpha^{[0]}&=&0\\\
^{\begin{subarray}{c}damp\\\
max\end{subarray}}\\!\\!y_{l,j}^{[0]}&=&y_{l,j}^{[0]}\\\
^{\begin{subarray}{c}damp\\\
min\end{subarray}}\\!\\!y_{l,j}^{[0]}&=&y_{l,j}^{[0]}\\\
\alpha^{[m]}&=&{}^{\begin{subarray}{c}damp\\\
max\end{subarray}}\\!\\!y_{l,j}^{[m]}-^{\begin{subarray}{c}damp\\\
min\end{subarray}}\\!\\!y_{l,j}^{[m]}\\\
^{\begin{subarray}{c}damp\\\
max\end{subarray}}\\!\\!y_{l,j}^{[m]}&=&\max\\!\Big\\{y_{l,j}^{[m]},^{\begin{subarray}{c}damp\\\
max\end{subarray}}\\!\\!y_{l,j}^{[m-1]}\\!-\textstyle{\frac{\nu\cdot\delta
t_{l}^{[m-1]}}{2}}\cdot\alpha^{[m-1]}\Big\\}\\\ ^{\begin{subarray}{c}damp\\\
min\end{subarray}}\\!\\!y_{l,j}^{[m]}&=&\min\\!\Big\\{y_{l,j}^{[m]},^{\begin{subarray}{c}damp\\\
min\end{subarray}}\\!\\!y_{l,j}^{[m-1]}\\!+\textstyle{\frac{\nu\cdot\delta
t_{l}^{[m-1]}}{2}}\cdot\alpha^{[m-1]}\Big\\}\\\ \end{array}\right.$
(36)
where $\forall n\in\mathbb{N},\ \delta t_{l}^{[n]}=t_{l}^{[n+1]}-t_{l}^{[n]}$
denotes the step size.
The damping coefficient $\nu\geqslant 0$ has to be chosen. The greater it
is, the faster an event such as a jump will be ”forgotten”; the smaller
it is, the closer ${}^{\begin{subarray}{c}Damped\\\
Ampl.\end{subarray}}error_{l,j}^{[n+1]}$ will be to
${}^{Ampl.}error_{l,j}^{[n+1]}$.
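The recursion (36) is straightforward to implement; a sketch (names are ours) returning the damped envelope whose width normalizes the error in (35):

```python
def damped_amplitude(ys, dts, nu):
    """Damped min/max sequences of eq. (36). ys: communicated values
    y^[0..n]; dts: step sizes delta t^[0..n-1]; nu >= 0: damping coefficient.
    Returns (damp_min, damp_max) at the last index."""
    d_max = d_min = ys[0]
    for m in range(1, len(ys)):
        alpha = d_max - d_min                  # previous damped amplitude
        decay = nu * dts[m - 1] / 2.0 * alpha
        d_max = max(ys[m], d_max - decay)      # envelope relaxes toward the data
        d_min = min(ys[m], d_min + decay)
    return d_min, d_max
```

With $\nu=0$ this degenerates into the plain running min/max of the Amplitude strategy; with $\nu>0$ an initial jump is progressively forgotten.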
We can now provide a proper definition of the subsystem’s dilatation ratio
mentioned in (30) as the safest $\rho$ among the candidates (32).
$\rho_{l}^{[n+1]}:=\underset{j\in[\\![1,n_{out,l}]\\!]}{\min}\left(\rho_{l,j}^{[n+1]}\right)$
(37)
With this dilatation ratio, we obtain an estimation of the next communication
time for subsystem $(S_{l})$:
$\begin{array}[]{lcl}\delta t_{l}^{[n]}&=&t_{l}^{[n+1]}-t_{l}^{[n]}\\\
\widetilde{\delta t}_{l}^{[n+1]}&=&\rho_{l}^{[n+1]}\cdot\delta t_{l}^{[n]}\\\
\tilde{t}_{l}^{[n+2]}&=&t_{l}^{[n+1]}+\widetilde{\delta
t}_{l}^{[n+1]}\end{array}$ (38)
This estimation temporarily solves the problem introduced at the
beginning of 3.2.1: the determination of $t_{l}^{[n+2]}$ once
$[t_{l}^{[n]},t_{l}^{[n+1]}[$ is computed. Nonetheless, it is only an
estimation, as the scheduler presented in 3.2.2 may modify it and determine
the $t_{l}^{[n+2]}$ actually used.
Remark 1 (ratio bounds): In practice, minimal and maximal values for the
dilatation ratio (37) can be defined in order to avoid unsafe extreme step
size reductions or increases. In section 4, the dilatation coefficient is
projected onto the interval $[10\%,105\%]$.
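Putting (32), (37), (38) and the clamping of Remark 1 together, one time-stepper update can be sketched as follows (names are ours; the errors are assumed already normalized by one of the three strategies above):

```python
def next_communication_time(t_n, t_np1, errors, p_orders,
                            rho_min=0.10, rho_max=1.05):
    """One time-stepper update (eqs. (32), (37), (38)) with the ratio
    clamping of Remark 1. errors[j]: normalized error of output j on the last
    step; p_orders[j]: its polynomial order. The subsystem ratio is the safest
    (smallest) candidate, projected onto [rho_min, rho_max]."""
    candidates = [(1.0 / e) ** (1.0 / (p + 1.0))       # eq. (32) per output
                  for e, p in zip(errors, p_orders)]
    rho = min(candidates)                              # eq. (37)
    rho = min(max(rho, rho_min), rho_max)              # Remark 1: projection
    dt = t_np1 - t_n
    return t_np1 + rho * dt                            # eq. (38): estimated t^[n+2]
```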
Remark 2 (subsystems without outputs): The above procedure only works for
subsystems having at least one output variable. For the other subsystems (NO
and NINO), we will use the following rule.
$\begin{array}[]{l}\forall l\in\left\\{l\in[\\![1,n_{sys}]\\!]\ |\
n_{out,l}=0\right\\},\\\ \hskip 56.9055pt\forall
n\in\mathbb{N}\cap[2,+\infty[,\
\tilde{t}_{l}^{[n]}=t^{[\text{end}]}\end{array}$ (39)
Remark 3 (co-simulation start): The rules presented above do not allow setting
the first two times. We will obviously set the first one with $\forall
l\in[\\![1,n_{sys}]\\!],\tilde{t}_{l}^{[0]}=t^{[\text{init}]}$ and the next
one by using an initial step $\delta t_{l}^{[0]}$ given as a co-simulation
parameter for every subsystem. This initial step size will have a limited
influence on the whole co-simulation. We then set
$\tilde{t}_{l}^{[1]}=t^{[\text{init}]}+\delta t_{l}^{[0]}$ for each subsystem.
#### 3.2.2 Scheduler
The scheduler is intended to adjust the communication times of the subsystems
according to the times they reached individually. It is based on topological
data (an adjacency matrix representing the connections between the subsystems
and the set of constraints regarding step sizes on each subsystem) and can
handle asynchronousness in a way that makes it robust to co-simulations
involving subsystems with imposed step sizes. Several stages happen
sequentially once every subsystem has produced an estimated next communication
time (see 3.2.1). Among them, some are dedicated to avoiding the use of a
variable beyond the time it is supposed to be used according to the
time-stepper (trying to avoid phenomena like the one happening on figure 6),
and others act as optimizations, as they avoid communicating data when this
has no effect (NI systems feeding a NO subsystem with a large imposed step
size, for instance).
The stages of the scheduler will not be detailed here for the sake of space.
It can be seen as a black box producing the effective next communication times
once the time-stepper has produced the estimated ones.
In other words, once $\forall l\in[\\![1,n_{sys}]\\!],\
[t_{l}^{[n_{l}]},t_{l}^{[n_{l}+1]}[$ is computed ($n_{l}$ might differ between
subsystems, as some subsystems may be idle while others keep on going, so they
might perform a different number of steps), the time-stepper has produced
$(\tilde{t}_{l}^{[n_{l}+2]})_{l\in[\\![1,n_{sys}]\\!]}$, and the scheduler can
be seen as the black box $Sch$ below.
$(t_{l}^{[n_{l}+2]})_{l\in[\\![1,n_{sys}]\\!]}:=Sch\left(\begin{array}[]{c}(t_{l}^{[n_{l}+1]})_{l\in[\\![1,n_{sys}]\\!]}\\\
(\tilde{t}_{l}^{[n_{l}+2]})_{l\in[\\![1,n_{sys}]\\!]}\end{array}\right)$ (40)
## 4 RESULTS AND BEHAVIOR ON TWO TEST CASES
Two test cases will be presented in this section. Both of them have been made
with Simcenter Amesim.
The first model illustrates the importance of keeping the dynamics of the
coupling variables, which is precisely what the polynomial inputs of the
F3ORNITS method achieve. The second model is a variation of the
2-masses-springs-dampers case [Éguillon et al., 2019] [Busch, 2016], and
performance will be measured on several co-simulations.
The most widely used co-simulation method is the non-iterative Jacobi
(parallel) fixed-step algorithm with zero-order-hold inputs. Therefore, the
comparisons will be made between the latter (referred to as ”NI Jacobi”) and
the F3ORNITS algorithm.
### 4.1 Car with controlled speed
The model consists of two subsystems: one corresponds to a $1000$ kg car
(simplified as a mass) moving on a $1$D axis (straight road), modelled with
Newton’s second law, which takes a force as input and gives its position as
output; the other corresponds to a controller, producing a force and using
the car position as input.
Figure 9: Test case 1: car with controlled speed - Subsystems and monolithic
reference in Simcenter Amesim
As can be seen on figure 9, the car subsystem adds a random perturbation to
the input force (this may be seen as a 1D wind, for instance). Moreover,
during the first $10$ s of the (co-)simulation, the controller outputs a force
that is predetermined and that can be seen on figure 10.
Figure 10: Preset output force from the controller on $[0,10[$
At $t=10$ s, the output of the controller subsystem becomes the output of a "P"
controller based on the vehicle speed, designed to make the
vehicle reach (or maintain) a target speed of $16$ m/s. The velocity needed
by the "P" controller is computed from the input position, using an explicit
equation $\dot{x}=v$ and a constraint equation $x-u=0$, where $v$ denotes the
computed speed, $x$ a hidden variable representing the position, and $u$ the
position input of the controller subsystem.
Due to the zero-order hold, the NI Jacobi method does not allow $(S_{2})$ to
properly retrieve the vehicle speed. Indeed, on every co-simulation step, the
input position is constant and the DAE produces a null speed (see
figure 11, in particular the zoom on $[13,13.2]$).
Figure 11: Vehicle speed computed in controller $(S_{2})$ using its input
(vehicle position) with a ZOH input
The consequence is that the controller applies force to push the
vehicle in order to increase the speed (attempting to make it reach $16$ m/s)
without ever realizing that the vehicle already has a non-null velocity.
Therefore, the velocity keeps on increasing indefinitely (curve with
triangles on figure 12). Conversely, F3ORNITS represents the dynamics of the
coupling variables (force and vehicle position) when needed (this is the
"flexible order" part presented in 3.1). The velocity computed from such a
position input is more reliable, and the controller $(S_{2})$ sends a proper
force to make the vehicle maintain its target velocity.
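This failure mode can be reproduced with a toy sketch (ours, not the authors' model; the P gain `kp` and the step sizes are hypothetical): differentiating a ZOH position input inside a macro step yields zero, while a first-order (linear) input recovers the actual speed.

```python
# Why a zero-order-hold (ZOH) position input collapses the controller's
# internal speed estimate to zero, while a linear input recovers it.
kp, v_target = 500.0, 16.0  # hypothetical P gain and target speed
H = 0.1    # macro (co-simulation) step size
h = 0.001  # internal micro step of the controller subsystem

def controller_step(x_samples, hold):
    """One macro step of (S2): estimate the speed from the held position input."""
    x_old, x_new = x_samples
    if hold == "zoh":
        u = lambda tau: x_old                             # input frozen on [0, H[
    else:  # linear input: keeps the dynamics of the coupling variable
        u = lambda tau: x_old + (x_new - x_old) * tau / H
    # the DAE (dx/dt = v, x - u = 0) effectively differentiates the input:
    v_est = (u(h) - u(0.0)) / h
    return kp * (v_target - v_est), v_est
```

With the car actually moving at 10 m/s (positions 100 m and 101 m at two successive macro times), the ZOH estimate is 0 m/s, so the controller overreacts exactly as described above.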
The effective vehicle velocity inside $(S_{1})$ over time is
presented in figure 12 for the three cases of (co-)simulation.
Figure 12: Vehicle speed in $(S_{1})$ in the monolithic reference and in co-
simulations with respectively the NI Jacobi (ZOH) method and the F3ORNITS
algorithm
### 4.2 Two masses, springs and dampers
The model consists of two masses coupled through force, displacement and velocity,
as shown in figure 13. It is a variant of the test cases presented in
[Éguillon et al., 2019] or in [Busch, 2016], in the sense that, after a
transition time (set to the middle of the simulation time interval), the
behavior of one of the models changes. This generates a discontinuity in the
coupling variables and different behaviors before and after.
$\begin{array}{lll}c_{1}=1\ \text{kN/m}&\ d_{1}=1\ \text{kN/(m/s)}&\ x_{1}(t^{[\text{init}]})=-1\ \text{m}\\\ c_{2}=1\ \text{kN/m}&\ d_{2}=0\ \text{kN/(m/s)}&\ x_{2}(t^{[\text{init}]})=0\ \text{m}\\\ c_{3}=1\ \text{kN/m}&\ d_{3}=1\ \text{kN/(m/s)}&\ v_{1}(t^{[\text{init}]})=0\ \text{m/s}\\\ m_{1}=1000\ \text{kg}&\ m_{2}=1000\ \text{kg}&\ v_{2}(t^{[\text{init}]})=0\ \text{m/s}\\\ \multicolumn{3}{c}{[t^{[\text{init}]},t^{[\text{end}]}]=[0\ \text{s},\ 200\ \text{s}]}\end{array}$
Figure 13: Two masses model with coupling on force, displacement and velocity.
Behavior of the right mass changes after $100$s
First of all, for the sake of performance and accuracy comparisons, the model
has been co-simulated using the NI Jacobi method with five fixed step size
values: $0.01$ s, $0.05$ s, $0.1$ s, $0.2$ s and $0.4$ s. Then, the F3ORNITS method was
used with $\delta t_{1}^{[0]}=\delta t_{2}^{[0]}=0.01$ s (also used as the minimal
step size value), for different strategies regarding polynomial calibration
(extrapolation / CLS, see 2.2), smoothness enhancement (disabled / make
interfaces $C^{1}$, see 3.1.4) and error normalization (w.r.t. amplitude /
magnitude / damped amplitude with $\nu=5\%$, see 3.2.1).
Figure 14: Evolution of $y_{1,1}$ (coupling variable corresponding to the
position of the left mass $x_{1}$) and its damped bounds sequences as defined
in (36)
Figure 15: RMSE/#steps trade-off (on $x_{1}$ and $v_{1}$) for various
sets of parameters of the F3ORNITS algorithm on the 2 masses model
All co-simulations run instantly (measured elapsed time: $0.0$ s), but a good
performance indicator can be the number of steps performed. Indeed, the main
cost of a co-simulation run is the frequent restart of the subsystems’ solvers
after each discontinuity introduced by a communication time. On the other
hand, bigger co-simulation steps are expected to produce inaccurate results:
the larger a co-simulation step is, the older the input values being used may be.
The precision criterion will be the RMSE (root mean square error) compared to
the monolithic reference.
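A minimal sketch of this criterion follows; the exact normalization is not spelled out in this section, so normalizing by the RMS value of the reference signal (to obtain the percentages of table 1) is our assumption:

```python
import math

def rmse_percent(cosim, reference):
    """RMSE of a co-simulated signal w.r.t. the monolithic reference,
    expressed as a percentage of the reference's RMS value (one plausible
    normalization; the paper's exact one may differ)."""
    n = len(reference)
    err = math.sqrt(sum((c - r) ** 2 for c, r in zip(cosim, reference)) / n)
    ref = math.sqrt(sum(r ** 2 for r in reference) / n)
    return 100.0 * err / ref
```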
The error normalization criterion in the time-stepper has a strong impact on
the results: as expected, ${}^{Magn.}error$ generates larger error estimates and
smaller co-simulation steps. The total number of steps is bigger than with the
other normalization methods, but the results are also more accurate. The
trade-off is: results are either more accurate than with NI Jacobi for the
same total number of steps, or obtained in fewer steps than NI Jacobi to
achieve the same accuracy (see the right-hand dashed circled sets of points on
figure 15).
In order to reduce the effect of artificial step size reduction when coupling
variables cross zero, the ${}^{Ampl.}error$ normalization method may be
used in the time-stepper. The steps are larger, and the total numbers of steps
are the smallest ones in every case (compare the $3$rd subtable of table 1 to
the $2$nd and the $4$th ones). However, as the model is damped
($d_{1}=d_{3}>0$), the initial highest and lowest values of the variables are
not forgotten throughout the whole co-simulation (this can be observed on the
blue curve of figure 14). The trade-off is still better than NI Jacobi in
every case.
In order to improve the accuracy of the results obtained using
${}^{Ampl.}error$, the damped amplitude alternative for error normalization
has been proposed in subsubsection 3.2.1. Indeed, damping the amplitude with a
factor $\nu=5\%$ allows the coupling to progressively forget the initial high
values, as the minimum and maximum values will "follow" the order of magnitude
of the variables’ amplitude. The ${}^{\begin{subarray}{c}damp\\\
max\end{subarray}}\\!\\!y_{1,1}^{[n]}$ and ${}^{\begin{subarray}{c}damp\\\
min\end{subarray}}\\!\\!y_{1,1}^{[n]}$ sequences can be seen on figure 14,
$y_{1,1}$ being the coupling variable associated with $x_{1}$.
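The damped bound sequences can be sketched as follows. This is a qualitative reconstruction of ours (the exact recurrence is given in (36), which is not reproduced in this section): at each step, the running max and min are relaxed toward the current sample by a factor $\nu$, so early extreme values are progressively forgotten.

```python
# Hedged sketch of "damped amplitude" bounds: the running max/min used to
# normalize the error decay toward the current value with rate nu, instead
# of remembering the all-time extremes forever.
def damped_bounds(samples, nu=0.05):
    dmax = dmin = samples[0]
    bounds = []
    for y in samples:
        dmax = max(y, dmax - nu * (dmax - y))  # decays toward y, never below y
        dmin = min(y, dmin + nu * (y - dmin))  # rises toward y, never above y
        bounds.append((dmin, dmax))
    return bounds
```

An early spike is captured immediately by the upper bound and then decays geometrically, which is the "following" behavior visible on figure 14; $\nu=0$ would recover the classical (never-forgetting) amplitude.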
Table 1 compiles the results of figure 15 and shows the precise RMSE values.
Among others, it shows that the polynomial input calibration method (either
extrapolation $\Omega_{q}^{Ex}$ or CLS fitting $\Omega_{q}^{CLS}$)
has a minor effect on the final results of the two masses test case.
Nonetheless, it can be observed in this table that the smoothness enhancement
(presented in 3.1.4) slightly damages accuracy, except when the error is
normalized using ${}^{Magn.}error$. The reason for this is that the
value of the inputs is not the one given as output by the connected subsystem
at the beginning of the step. In other words, this phenomenon (which can be
observed at $t_{k}^{[n]}$ on figure 8) generates a small extra error at the
beginning of every co-simulation step, but the value of the input quickly
adapts to match the one it would have had without the smoothness enhancement.
The smaller the steps are, the less this phenomenon can be observed: that is
the reason why the subtable regarding ${}^{Magn.}error$ does not show it. The
benefit of smoothness enhancement is mainly the possibility of restarting the
solvers inside the subsystems faster, due to the $C^{1}$ smoothness of the
input variables. The consequence would be a faster co-simulation run for
the same number of co-simulation steps.
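One way to realize such a $C^{1}$ input, assuming $3$rd-order polynomial inputs as required by the method, is a cubic Hermite calibration that reuses the previous input's end value and slope. This is our reconstruction, not necessarily the authors' exact formula from 3.1.4:

```python
# C1 smoothness enhancement sketch: instead of jumping to the newly received
# output value at a communication time, the cubic input on [0, H] starts from
# the previous input's end value and slope, so the input stays C1 across
# macro steps (at the price of a small extra error at the step's beginning).
def c1_input(prev_val, prev_slope, target_val, target_slope, H):
    """Cubic Hermite polynomial matching value/slope at both ends of [0, H]."""
    def u(tau):
        s = tau / H
        h00 = 2 * s**3 - 3 * s**2 + 1
        h10 = s**3 - 2 * s**2 + s
        h01 = -2 * s**3 + 3 * s**2
        h11 = s**3 - s**2
        return (h00 * prev_val + H * h10 * prev_slope
                + h01 * target_val + H * h11 * target_slope)
    return u
```

By construction, $u(0)$ and $u'(0)$ match the previous input's end state while $u(H)$ reaches the target value, which is why rough discontinuities no longer occur at every communication time.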
Table 1: Results on the 2 masses test model - comparing the number of co-simulation
steps and the RMSE on $2$ state variables
NI Jacobi (ZOH)
---
$\delta t$ | #steps | $\begin{subarray}{c}\text{rmse}\\\ \text{on}\ x_{1}\end{subarray}$ | $\begin{subarray}{c}\text{rmse}\\\ \text{on}\ v_{1}\end{subarray}$
$0.01$ | $20000$ | $0.030\%$ | $0.074\%$
$0.05$ | $4000$ | $0.197\%$ | $0.462\%$
$0.1$ | $2000$ | $0.412\%$ | $0.981\%$
$0.2$ | $1000$ | $0.772\%$ | $2.044\%$
$0.4$ | $500$ | $1.928\%$ | $5.194\%$
F3ORNITS
$\\!\Omega_{q}\\!$ | Smoothness | #steps | $\begin{subarray}{c}\text{rmse}\\\ \text{on}\ x_{1}\end{subarray}$ | $\begin{subarray}{c}\text{rmse}\\\ \text{on}\ v_{1}\end{subarray}$
time-stepper using ${}^{Magn.}error$
$\\!\Omega^{Ex}_{q}\\!$ | disabled | $1935$ | $0.017\%$ | $0.024\%$
enhanced | $1848$ | $0.014\%$ | $0.019\%$
$\\!\Omega^{CLS}_{q}\\!$ | disabled | $1768$ | $0.017\%$ | $0.024\%$
enhanced | $1786$ | $0.017\%$ | $0.024\%$
time-stepper using ${}^{Ampl.}error$
$\\!\Omega^{Ex}_{q}\\!$ | disabled | $669$ | $0.083\%$ | $0.183\%$
enhanced | $674$ | $0.122\%$ | $0.269\%$
$\\!\Omega^{CLS}_{q}\\!$ | disabled | $636$ | $0.083\%$ | $0.188\%$
enhanced | $672$ | $0.104\%$ | $0.227\%$
time-stepper using damped amplitude error
$\\!\Omega^{Ex}_{q}\\!$ | disabled | $989$ | $0.019\%$ | $0.030\%$
enhanced | $1007$ | $0.024\%$ | $0.048\%$
$\\!\Omega^{CLS}_{q}\\!$ | disabled | $1071$ | $0.020\%$ | $0.037\%$
enhanced | $911$ | $0.025\%$ | $0.048\%$
## 5 CONCLUSIONS
The F3ORNITS coupling algorithm used with a time-stepping criterion based on
the local error estimation normalized with regard to magnitude gives a better
error/#steps trade-off that the non-iterative Jacobi method. The number of co-
simulation steps can even be reduced by tuning the error normalization method,
and the involved coefficient (refered to as $\nu$ damping coefficient). In
other words, a safe approach could be to start with a co-simulation with
F3ORNITS using the error normalized with regard to the order of magnitude and,
if the total number of steps is not satisfactory, the strategy can be to use
the damped amplitude with a decent $\nu$ coefficient and to progressively
decrease it ($\nu=0\%$ corresponds to the classical amplitude approach).
On the second test case, with the error normalized with regard to amplitude damped with
a coefficient $\nu=5\%$ (shapes in the middle dashed circles on figure 15), we
can achieve, with approximately $20$ times fewer co-simulation steps (and as
many avoided discontinuities), an accuracy similar to the one obtained with
the non-iterative Jacobi method (or even slightly more accurate results, as the non-
iterative Jacobi method does not reach an RMSE lower than $0.03\%$ for $x_{1}$
even with $20\,000$ steps). From another point of view, a similar number of
communication times produces an error $38$ times smaller with F3ORNITS than
with the non-iterative Jacobi method. Moreover, the co-simulation step size does
not need to be chosen in advance, as it is automatically adapted by F3ORNITS
on the subsystems allowing it.
The robustness of the method comes from its initial motivation:
F3ORNITS can be applied regardless of the subsystems and their capabilities.
It is therefore possible to integrate the method into an industrial product that
must handle subsystems coming from a wide range of platforms.
The CLS approach, designed to avoid forgetting the data used for
calibration, does not show any significant difference compared to the
classical extrapolation approach on the second test case, yet its behavior on
a wider set of models remains to be studied.
Regarding the enhancements, we can point out a faster restart of the embedded
solvers inside the subsystems when $C^{1}$ smoothness of the inputs is guaranteed
at the communication times (presented in 3.1.4): rough discontinuities do not
necessarily occur at every communication time. This can only be considered for
subsystems supporting $3$rd order polynomial inputs.
## REFERENCES
* Amemiya, 1985 Amemiya, T. (1985). Advanced econometrics. Cambridge, Mass.: Harvard University Press.
* Arnold and Günther, 2001 Arnold, M. and Günther, M. (2001). Preconditioned dynamic iteration for coupled differential-algebraic systems. BIT, 41(1):1–25.
* Bartel et al., 2013 Bartel, A., Brunk, M., Günther, M., and Schöps, S. (2013). Dynamic iteration for coupled problems of electronic circuits and distributed devices. SIAM J. Sci. Comp., 35(2):315–335.
* Berrut and Trefethen, 2004 Berrut, J.-P. and Trefethen, L. N. (2004). Barycentric Lagrange interpolation. SIAM Review, 46(3):501–517.
* Busch, 2016 Busch, M. (2016). Continuous approximation techniques for co-simulation methods: Analysis of numerical stability and local error. ZAMM-Journal of Applied Mathematics and Mechanics/Zeitschrift für Angewandte Mathematik und Mechanik, 96(9):1061–1081.
* Busch, 2019 Busch, M. (2019). Performance Improvement of Explicit Co-simulation Methods Through Continuous Extrapolation. In IUTAM Symposium on solver-coupling and co-simulation, volume 35 of IUTAM Bookseries, pages 57–80. IUTAM. IUTAM Symposium on Solver-Coupling and Co-Simulation, Darmstadt, Germany, September 18-20, 2017.
* Dronka and Rauh, 2006 Dronka, S. and Rauh, J. (2006). Co-simulation-interface for user-force-elements. In Proceedings of SIMPACK User Meeting, Baden-Baden.
* Éguillon et al., 2019 Éguillon, Y., Lacabanne, B., and Tromeur-Dervout, D. (2019). IFOSMONDI: a generic co-simulation approach combining iterative methods for coupling constraints and polynomial interpolation for interfaces smoothness. In Science, S. and Publications, T., editors, Proceedings of the 9th International Conference on Simulation and Modeling Methodologies, Technologies and Applications, pages 176–186. INSTICC.
* Gomes et al., 2018 Gomes, C., Thule, C., Broman, D., Larsen, P. G., and Vangheluwe, H. (2018). Co-simulation: a survey. ACM Computing Surveys (CSUR), 51(3):1–33.
* Gu and Asada, 2004 Gu, B. and Asada, H. H. (2004). Co-Simulation of Algebraically Coupled Dynamic Subsystems Without Disclosure of Proprietary Subsystem Models. Journal of Dynamic Systems, Measurement, and Control, 126(1):1.
* Hildebrand, 1956 Hildebrand, F.-B. (1956). Introduction to Numerical Analysis. Dover Publications, 2nd edition.
* Kübler and Schiehlen, 2000 Kübler, R. and Schiehlen, W. (2000). Two methods of simulator coupling. Mathematical and Computer Modelling of Dynamical Systems, 6(2):93–113.
* Li et al., 2014 Li, P., Meyer, T., Lu, D., and Schweizer, B. (2014). Numerical stability of explicit and implicit co-simulation methods. J, 10(5):051007.
* Li et al., 2020 Li, P., Yuan, Q., Lu, D., Meyer, T., and Schweizer, B. (2020). Improved explicit co-simulation methods incorporating relaxation techniques. Archive of applied mechanics, 90:17–46.
* Schierz et al., 2012 Schierz, T., Arnold, M., and Clauß, C. (2012). Co-simulation with communication step size control in an fmi compatible master algorithm. pages 205–214.
* Schweizer et al., 2016 Schweizer, B., Li, P., and Lu, D. (2016). Implicit co-simulation methods: Stability and convergence analysis for solver coupling approaches with algebraic constraints. ZAMM Zeitschrift fur Angewandte Mathematik und Mechanik, 96(8):986–1012.
* Sicklinger et al., 2014 Sicklinger, S., Belsky, V., Engelman, B., Elmqvist, H., Olsson, H., Wüchner, R., and Bletzinger, K.-U. (2014). Interface Jacobian-based Co-Simulation. International Journal for Numerical Methods in Engineering, 98:418–444.
* Skelboe, 1992 Skelboe, S. (1992). Methods for Parallel Integration of Stiff Systems of ODEs. BIT Numerical Mathematics, 32(4):689–701.
# Piezoelectric Energy Harvesting: a Systematic Review of Reviews
Jafar Ghazanfarian, Mechanical Engineering Department, Faculty of Engineering,
University of Zanjan, P.O. Box 45195-313, Zanjan, Iran.
Mohammad Mostafa Mohammadi, Mechanical Engineering Department, Faculty of
Engineering, University of Zanjan, P.O. Box 45195-313, Zanjan, Iran.
Kenji Uchino, International Center for Actuators and Transducers, The
Pennsylvania State University, University Park, PA, 16802, USA
###### Abstract
In the last decade, explosive attention has been paid to piezoelectric
harvesters due to their flexibility in design and the increasing need for small-
scale energy generation. As a result, various review papers have been
presented by many researchers to cover different aspects of piezoelectric-
based energy harvesting, including piezo-materials, modeling approaches, and
design points for various applications. Most of these papers tried to shed
light on recent progress in related interdisciplinary fields, and to pave
the road for future prospects of development of such technologies. However,
there are some missing parts, overlaps, or even some contradictions in the
review papers. In the present review of review articles, recommendations for
future research directions suggested by the review papers have been
systematically summed up under one umbrella. In the final section, topics for
missing review papers, concluding remarks on outlooks and possible research
topics, and strategy-misleading contents are presented. The review
papers have been evaluated based on merits and subcategories, and authors’
choice papers have been presented for each section based on clear
classification criteria.
## Highlights
* 1.
A comparative overview of reviews in the map of piezo-energy harvesting is
presented.
* 2.
An extensive description of research lines for future research is provided.
* 3.
Classification of reviews is presented based on subcategories and merits.
* 4.
Authors’ choice papers are presented for each section.
## Keywords
Energy harvesting; piezoelectric; energy conversion; renewable energies;
micro-electro-mechanical systems
###### Contents
1. 1 Introduction
2. 2 Reviews with non-focused topics
3. 3 Design and fabrication
1. 3.1 Materials
2. 3.2 Structure
3. 3.3 MEMS/NEMS-based devices
4. 3.4 Modeling approaches
4. 4 Applications
1. 4.1 Vibration
2. 4.2 Biological sources
3. 4.3 Fluids
4. 4.4 Ambient waste energy sources
5. 5 Challenges and the roadmap for future research
## 1 Introduction
Due to recent developments in portable and wearable electronics, wireless
electronic systems, implantable medical devices, energy-autonomous systems,
monitoring systems, and MEMS/NEMS-based devices, small-scale
energy generation may lead to a revolution in the development of compact power
technologies.
Figure 1 presents the output power density variation versus the actual motor
power for 2000 commercial electromagnetic motors. Electromagnetic motors are
superior for the production of power levels higher than 100 W. However,
because the efficiency drops significantly below 100 W, piezoelectric
devices, whose power density is insensitive to their size, will replace battery-
operated small portable electronic equipment below the 50 W level. It is not
logical to compare energy harvesting systems with the MW power level.
Hence, it is necessary for researchers to determine their original piezo-
harvesting target, which should basically be the replacement of compact
batteries, one of the toxic wastes in the sustainable society [1].
Figure 1: Comparison of the specific power with respect to the power [1].
Dutoit et al. [2] provided a comparison based on the output power
density, and indicated that the power densities of fixed-energy-density
sources drop extensively after just 1 year of operation, so these sources need
maintenance and repair when possible. Designing an effective power normalization
scheme, strain cancelation due to multiple input vibration components,
optimizing the minimum vibration level required for positive energy
harvesting, and prototype testing to eliminate the proof mass are among
their suggestions for future work.
Advantages of piezoelectric energy harvesting include a simple structure
without several additional components, no need for moving parts or mechanical
constraints, environmental friendliness and ecological safety,
portability, coupled operation with other renewable energies, no need for an
external voltage source, compatibility with MEMS, easy fabrication with
microelectronic devices, reasonable output power density, cost effectiveness,
and scalability. Hence, piezo-materials are an excellent candidate to replace
batteries with a short lifespan for powering macro- to nanoscale electronic
devices.
Piezo-materials can extract power directly from structural vibrations or other
environmental mechanical waste energy sources in infrastructures (bridges,
buildings), biomedical systems, health care and medicine, and they can be used
for transducers, actuators, and surface acoustic wave device operation. Some
disadvantages of the piezo-harvesters are high output impedance, producing
relatively high output voltages at low electrical current, and rather large
mechanical impedance.
The number of review papers on piezoelectric energy harvesting has
increased extensively in the recent decade. Due to the tremendous number of
published review papers in this field, finding an appropriate review paper
has become challenging. On the other hand, there are lots of overlaps,
similarities, missing parts, and sometimes contradictions between different
reviews. Therefore, the main motivation of the present paper is to present a
systematic review of the review papers on piezoelectric energy harvesting. We
tried to summarize all deficits, advantages, and missing parts of the existing
review papers on piezo-energy harvesting systems.
An extensive search among database sources identified 91 review papers in
diverse applications related to piezoelectric energy harvesting. As will
be demonstrated later, these papers present different concluding remarks
for the area of usage, materials, design approaches, and mathematical models.
We performed a very detailed search procedure with several keywords
and search engines to cover all published review papers, and to find the
review papers without "piezo" directly mentioned in the title.
The statistics of publications during the two recent decades, excluding conference
papers, extracted using the keyword "piezo AND energy harvesting" from SCOPUS,
are shown in Fig. 2. The results from SCOPUS included an overall number of
4435 documents, containing 874 open access papers, 130 book chapters, and 36
books. The National Natural Science Foundation of China, the Fundamental
Research Funds for the Central Universities, and the National Research
Foundation of Korea were the most frequent funding sponsors. The most common
subject areas were engineering, material sciences, physics and astronomy,
chemistry, and energy. An extrapolation shown in the figure anticipates the
publication of about 2500 articles per year during the coming three years.
Figure 2: Statistics and future estimation of publications on piezoelectric
energy harvesting.
Due to the interdisciplinary nature of piezoelectric energy harvesting, prediction
of the behavior of piezo-generators is related to different thermo-electro-
mechanical sciences as well as material engineering. We have illustrated a
systematic map of various aspects of piezo-energy harvesting in Fig. 3.
Different branches of connected sciences and applications include fabrication
methods, hybrid systems, performance evaluation, size, utilization methods,
configurations, modeling aspects, economical points, energy sources,
optimization, design of an electric interface, and selection of proper
materials. All sub-branches in the figure will be discussed in subsections of
the present paper.
Figure 3: Strategic map of piezoelectric energy harvesting design aspects,
modeling approaches, and applications.
A review article is not a mere omnibus of a paper collection; it should
critique or praise each paper. The evaluation of the review
papers and their contribution to the field is presented based on the
following criteria:
1. 1.
Having a solid evaluation philosophy by the reviewer.
2. 2.
Presenting non-general future research directions in the summary/conclusion of
the paper.
3. 3.
Paying attention to the critical design aspects such as electromechanical
coupling factor or actual resonance frequency.
4. 4.
Many papers report the harvested energy around the resonance range, though
the typical noise vibration is in a much lower frequency range; the
researchers measure the amplified resonance response (even at frequencies
higher than 1 kHz).
5. 5.
If the harvested energy is lower than 1 mW, which is lower than the
electric energy required to operate a typical energy harvesting circuit with a
DC/DC converter (typically around 2-3 mW), it is somewhat difficult to describe
the system as an energy harvesting device.
6. 6.
The complete energy flow or exact efficiency from the input mechanical noise
energy to the final electrical energy in a rechargeable battery via the
piezoelectric transducer is an important part of the review from the
applicational/industrial viewpoint.
7. 7.
Number of sub-fields covered in the review paper.
8. 8.
The review papers may provide enough theoretical background on piezoelectric
energy harvesting, practical material selection, device design optimization,
and energy harvesting electric circuits to help readers avoid the "Google
syndrome" [3].
The scoring strategy is as follows: 1 point for the number of conclusions
reported, 1 point for the number of sub-categories covered, 2 points for
paying attention to the merits, and 1 point for reporting the minimum required
energy output level. Details of the scores for each part are presented in the
tables inside brackets. Reviews with scores between 0-1, 1-2, 2-3,
3-4, and 4-5 are labeled E to A, respectively. It should be noted that the
value of the minimum required output should be clearly addressed among the concluding
remarks, conclusions, future directions, abstract, or introduction.
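The scoring scheme can be summarized as a small function. This is our reconstruction: the names are ours, and the normalization constants (9 conclusions, 18 sub-categories, 4 merits, 0.5 point per merit) are inferred from the bracketed scores in the tables below.

```python
# Hedged reconstruction of the review-grading scheme: up to 1 point for
# conclusions, 1 for sub-categories, 2 for merits, 1 for reporting a minimum
# required output level; totals in [0,1[, [1,2[, ..., [4,5] map to E..A.
N_CONCLUSIONS, N_SUBCATS, N_MERITS = 9, 18, 4

def score(n_cons, n_subcats, n_merits, reports_min_output):
    s = (n_cons / N_CONCLUSIONS
         + n_subcats / N_SUBCATS
         + 2.0 * n_merits / N_MERITS
         + (1.0 if reports_min_output else 0.0))
    grade = "EDCBA"[min(int(s), 4)]
    return round(s, 2), grade

# e.g. a review with 6 conclusions, 8 sub-categories, all 4 merits, and a
# reported minimum output (cf. the first row of Tab. 1):
print(score(6, 8, 4, True))  # → (4.11, 'A')
```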
The outline of the paper is as follows. In the first section, the focus is on
the reviews about the design process, structure, material considerations, size
effects, and the mathematical modeling challenges. In the second part of the
article, the main theme is the evaluation of applications of piezo-
harvesters. The most common applications include vibrational energy sources,
fluid-based harvesters, scavenging energy from ambient waste energies, and
energy harnessing in biological applications. In the last section, a summary
of future challenges, research directions, and missing review topics is
presented.
## 2 Reviews with non-focused topics
The papers discussed in this section are general review articles without
a specific focal point. Safaei et al. [4] presented a review of energy
harvesting using piezoelectric materials from 2008 to 2018. This article is
an update of their previous review [5], and covers lead-free piezo-materials,
piezoelectric single crystals, high-temperature piezoelectricity,
piezoelectric nanocomposites, piezoelectric foams, nonlinear and broadband
transducers, and micro-electro-mechanical transducers. They also discussed
several types of piezoelectric transducers, the mathematical modeling, energy
conditioning circuitry, and applications such as fluids, windmill-style
harvesters, flutter-style harvesters, from human body, wearable devices,
implantable devices, animal-based systems, infrastructure, vehicles, and
multifunctional/multi-source energy harvesting. Several useful illustrations
have been presented in the paper, which sum up different technologies in a
unified framework. However, their brief recommendations for future horizons in
the field, including fabrication of piezoelectric nanofibers, piezoelectric
thin films, printable piezoelectric materials, exploiting internal resonance
of structures, and development of metamaterials and metastructures may be
extended to cover other aspects presented in Tab. 1.
Anton and Sodano [5] reviewed some general topics published between 2003 and
2006, discussing efficiency improvement, configurations, circuitry and
methods of power storage, implantable and wearable power supplies, harvesting
from ambient fluid flows, micro-electro-mechanical systems, and
self-powered sensors, without a clear classification. They described the
future directions as the development of a complete self-powered device that
includes a combination of power harvester, storage, and application circuitry.
Also, they declared that the enhancement of energy generation and storage
methods, along with decreasing the power requirements of electronic devices, may
be a prime target. Taware and Deshmukh [6] briefly reviewed a number of
studies in the field of piezoelectric energy harvesting. They mentioned
advantages and disadvantages of some piezoelectric materials. They
explained the cantilever-based piezoelectric energy harvesters, their related
design points, and mathematical modeling. Khaligh et al. [7] addressed the
piezoelectric and electromagnetic generators suitable for human-powered and
vibration-based devices, including resonant, rotational, and hybrid devices.
Brief information has been presented about the hybrid generators driven by
an imbalanced rotor, which needs more in-depth investigation in future reviews.
Sharma and Baredar [8] analyzed the current methods to harvest energy from
vibration using a piezoelectric setup in the low-frequency range by analyzing
piezoelectric material properties based on modeling and experimental
investigations. They indicated that the disadvantages of piezo-harvesters
are depolarization, sudden breaking of the piezo layer due to high brittleness and
a poor coupling coefficient, poor adhesive properties of the PVDF material, and
the lower electromagnetic coupling coefficient of PZT. They discussed that the
design of high-efficiency energy harvesters, the invention of new energy
harvesting designs by exploring non-linear benefits, and the design of portable
compact-size systems with integrated functions are forthcoming challenges.
Mateu and Moll [9] presented an overview of several methods to design an
energy harvesting device for microelectronics depending on the type of
available energy. They summarized the power consumption of microelectronic
devices and explained the working principles of piezoelectric, electrostatic,
magnetic induction, and electromagnetic radiation-based generators. Calio et
al. [10] reviewed the material properties of about 19 piezo-materials, the
piezo-harvester operating modes, resonant/non-resonant operation, the optimal
shape of the beam, frequency tuning, rotational device configurations,
power density and bandwidth, and the conditioning circuitry. They tried to
present a selection guide for piezoelectric materials based on the power
output and the operating modes. They concluded that the resonant $d_{33}$
cantilever beam needs to be optimized and that the $d_{15}$ harvester is still too
complex to fabricate but has great potential. This paper may be a good
suggestion for beginners starting research in the field of piezoelectric
energy harvesting. Batra et al. [11] reviewed mathematical modeling and
constitutive equations for piezo-materials, lumped parameter modeling,
mechanisms of piezoelectric energy conversion, and operating principles of
piezoelectric energy harvesters. Sun et al. [12] made a review of applications
of piezoelectric harvesters; however, they put everything in a nutshell, and such
topics need closer consideration. A very short review paper exists [13]
that mainly focuses on some points about the history of the piezoelectric
effect, piezo-materials, and applications like harvesting from footsteps and
roads.
Table 1: Overall evaluation of review papers written about non-focused topics
on piezoelectric energy harvesting. ”Cons.” stands for conclusions. Numbers in
brackets are scores for each item.
Conclusions: 1 Efficiency/performance improvement, 2 frequency tuning, 3
safety issues, 4 costs, hybrid harvesters, 5 non-linear models, 6 battery
replacement, 7 miniaturization, 8 steady operation, 9 more efficient
materials.
Merits: 1: electromechanical coupling factor, 2: realistic resonance, 3:
energy flow, 4: paying attention to the range of output power.
Sub-categories: 1: microscale, 2: electrostatic, 3: magnetic induction, 4:
electromagnetic radiation, 5: thermal energy, 6: circuit, 7: wearable device,
8: ambient fluid flow, 9: sensors, 10: material, 11: human, 12: vibration, 13:
hybrid device, 14: modelling, 15: material, 16: road and shoe, 17: fluids, 18:
animals.
# Cons. | Minimum required output | # Refs. | Merits | Sub-categories | Ref. | Grade | Highlights
---|---|---|---|---|---|---|---
6 (0.67) | $\mu W$ to $mW$ (1) | 478 | 1, 2, 3, 4 (2.00) | 1, 6, 11, 14, 15, 16, 17, 18 (0.44) | Safaei et al. [4] | A | High-temperature devices, metamaterials
5 (0.56) | $\mu W$ (1) | 90 | 1, 3, 4 (1.5) | 1, 6, 7, 8, 9, 10 (0.33) | Anton and Sodani [5] | B | -
5 (0.56) | 375$\mu W$ (1) | 14 | 2, 4 (1.0) | 11, 12, 14 (0.17) | Taware and Deshmukh [6] | C | -
3 (0.33) | $\mu W$ to $mW$ (1) | 54 | 1, 4 (1.0) | 2, 11, 12, 13 (0.22) | Khaligh et al. [7] | C | -
6 (0.67) | \- (0) | 70 | 1, 2, 4 (1.5) | 14, 15 (0.11) | Sharma and Baredar [8] | C | Depolarization, sudden breaking of piezo layer due to high brittleness
4 (0.44) | 100$\mu W$ (1) | 33 | 4 (0.5) | 1, 2, 3, 4, 5, 6 (0.33) | Mateu and Moll [9] | C | A discussion on power consumption of microelectronic devices
4 (0.44) | \- (0) | 153 | 1, 2, 4 (1.5) | 6, 11, 14, 15 (0.22) | Calio et al. [10] | C | Optimal shapes, buckling
1 (0.11) | $\mu W$ to $mW$ (1) | 95 | 4 (0.5) | 11, 12, 14, 15, 16 (0.28) | Batra et al. [11] | D | -
3 (0.33) | 1.3$mW$ (1) | 16 | 4 (0.5) | -(0) | Sun et al. [12] | D | Comparison with energy from wind, solar, geothermal, coal, oil and gas
1 (0.11) | \- (0) | 13 | -(0) | 14, 15, 16 (0.17) | Sharma et al. [13] | E | Historical points
Although most of the aforementioned general review papers have more or less
similar titles, their scientific depth and the number of reviewed items
differ. Some papers, like Ref. [10], have focused on design strategies for
piezoelectric energy harvesters and try to present a guide for the
selection of piezoelectric materials for harvesters. Moreover, almost all the
mentioned reviews suffer from weak classifications stemming from the generality
of their topics.
The results of evaluation of generally-written review papers on piezoelectric
energy harvesting have been presented in Table 1. The table contains different
sub-categories, the range of output power, the number of reviewed articles,
the merits, general conclusions, and some other extra descriptions. The grade
for each paper has been computed based on the number of merits, the number of
subcategories, the number of concluding remarks, and declaration of minimum
required output power.
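The bracketed scores in Table 1 are consistent with a simple normalization of the raw counts. Below is a minimal Python sketch of that inferred scheme, assuming each conclusion counts 1/9 (nine listed conclusion items), each merit contributes 0.5 points, each sub-category counts 1/18 (eighteen listed sub-categories), and declaring a minimum required output counts as a binary flag; the function name and exact weights are our reading of the table, not quoted from the reviewed papers.

```python
# Sketch of the Table 1 scoring scheme, inferred from the bracketed values:
# conclusions normalized by the 9 listed conclusion items, 0.5 points per
# merit, sub-categories normalized by the 18 listed sub-categories, and a
# 0/1 flag for declaring the minimum required output power.
# These weights are our reading of the table, not stated explicitly.

def table1_scores(n_conclusions, n_merits, n_subcats, declares_min_power):
    return {
        "conclusions": round(n_conclusions / 9, 2),
        "merits": round(0.5 * n_merits, 2),
        "subcategories": round(n_subcats / 18, 2),
        "min_power": 1 if declares_min_power else 0,
    }

# First row (Safaei et al. [4]): 6 conclusions, 4 merits, 8 sub-categories,
# and a declared output range.
print(table1_scores(6, 4, 8, True))
# → {'conclusions': 0.67, 'merits': 2.0, 'subcategories': 0.44, 'min_power': 1}
```

Applied to the other rows (e.g. Anton and Sodani [5] with five conclusions, three merits, and six sub-categories), the same weights reproduce the tabulated 0.56, 1.5, and 0.33.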
## 3 Design and fabrication
### 3.1 Materials
The choice of suitable piezoelectric material is a critical step in designing
energy harvesters [14]. Thus, many of the review papers in the field of energy
harvesting have more or less addressed the piezoelectric materials. Different
performance metrics have been selected for comparing piezoelectric materials
on diverse applications. In actuating and sensing applications, the
piezoelectric strain and piezoelectric voltage constants are appropriate
criteria. However, the electromechanical coupling factor, power density,
mechanical stiffness, mechanical strength, manufacturability, and quality
factor are the most important factors for energy harvesting. Also the
operating temperature is important in material selection [15].
Li et al. [16] divided the piezoelectric materials into four categories
(ceramics, single crystals, polymers, and composites) based on their structure
characteristics. They described the general properties of these four piezo-
material categories, and compared some of the most important candidate
materials from these categories in terms of the piezoelectric strain constant d,
the piezoelectric voltage constant g, the electromechanical coupling factor k,
the mechanical quality factor Q, and the dielectric constant e. They commented that
piezoelectric ceramics and single crystals have much better piezoelectric
properties than piezoelectric polymers, which is due to the strong polarizations
in their crystalline structures. On the other hand, piezoelectric ceramics and
single crystals are more rigid and brittle than piezoelectric polymers. Both
piezoelectric properties and mechanical properties are important in selection
of a certain piezoelectric material for a specific piezoelectric harvesting
application. Other important parameters in selecting the suitable materials
are the application frequency, the available volume, and the form in which
mechanical energy is fed into the system. In order to harvest the maximum amount
of energy, the piezoelectric energy harvester should operate at its resonance
frequency. However, in many cases such as low frequency applications, it is
impractical to match the resonance frequency of the piezoelectric with the
input frequency of the host structure. They demonstrated that for low
frequency applications in off-resonance conditions the piezoelectric element
can be approximated as a parallel plate capacitor and for harvesting more
electric energy the product of piezoelectric strain constant and piezoelectric
voltage constant should be high. On the other hand, for near-resonance
conditions, the optimum output power of the harvester is independent of
piezoelectric properties of piezo-element but the maximum output voltage
depends on piezoelectric strain constant. It is obvious that the selection of a
suitable piezo-material for a piezo-harvester depends on the working conditions,
which makes the selection of piezo-materials more complex. The article did not
specify the minimum required power output for piezoelectric energy
harvesters. Also, the energy densities of piezoelectric materials were not
reported. The focus is on macroscale piezo-materials, and micro- and nano-scale
materials were not covered.
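The off-resonance criterion above can be made concrete with the textbook capacitor estimate: a piezo element under a stress amplitude T stores an electric energy density of roughly u = (1/2)·d·g·T², so the product d·g is the figure of merit. The sketch below uses illustrative, handbook-style magnitudes for d33 and g33; these are assumptions for illustration, not data from [16].

```python
# Off-resonance estimate: a stressed piezo element behaves like a charged
# parallel-plate capacitor with electric energy density
#   u = 0.5 * d * g * T^2   [J/m^3]
# d: piezoelectric strain constant (C/N), g: voltage constant (V*m/N),
# T: applied stress amplitude (Pa). The material constants below are
# illustrative handbook-style magnitudes, not values from the cited review.

def energy_density(d, g, stress):
    """Recoverable electric energy per unit volume, J/m^3."""
    return 0.5 * d * g * stress**2

T = 1e6  # 1 MPa stress amplitude
u_pzt5h = energy_density(593e-12, 19.7e-3, T)  # soft PZT ceramic (d33, g33)
u_pvdf = energy_density(33e-12, 330e-3, T)     # piezo polymer (magnitudes)

print(f"PZT-5H: {u_pzt5h:.2f} J/m^3, PVDF: {u_pvdf:.2f} J/m^3")
```

Although PVDF's strain constant is an order of magnitude smaller than that of PZT, its large voltage constant keeps the d·g product, and hence the off-resonance energy density, comparable, which is exactly why the product, rather than d alone, is the selection criterion in off-resonance conditions.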
Narita and Fox [17] reviewed three categories including piezoelectric
ceramics/polymers, magnetostrictive alloys, and magnetoelectric multiferroic
composites. Their review included a description of the properties of PZT, PVDF, and ZnO.
They compared some of the piezoelectric materials based on their piezoelectric
coefficients (Fig. 4). Also, they remarked some advantages and disadvantages
of traditional piezoelectric ceramics, piezoelectric polymers, and composites.
They focused on characterization, fabrication, modeling, simulation,
durability and reliability of piezo-devices. Based on their analysis, the
future directions include the device size reduction to make them suitable for
nanotechnology, optimization, and developing accurate multi-scale
computational methods to link atomic, domain, grain, and macroscale behaviors.
Investigation of temperature-dependent properties, development of materials
and structures capable of withstanding prolonged cyclic loading, duration of
electro-magneto-mechanical properties, and fracture/fatigue studies are other
recommendations for future research. The review did not report some of the
important mechanical and piezoelectric properties of the piezo-materials, such as
the electromechanical coupling factor, the quality factor, the mechanical strength,
and the mechanical stiffness; the materials were compared only on the basis of their
piezoelectric coefficients and the output power of the energy harvesters.
Figure 4: Piezoelectric coefficient range for some of piezoelectric materials
[17].
Safaei et al. [4] reviewed the recent progress in the field of piezoelectric
ceramics like soft and hard PZTs, piezoelectric polymers including PVDF,
piezoelectric single crystals, lead-free piezoelectrics, high temperature
piezoelectrics, piezoelectric nanocomposites, and also piezoelectric foams.
They reported the piezoelectric coefficient, and the maximum output voltage
for some of these materials without describing the geometry of the
piezoelectric harvester. Brittleness of PZTs and existence of health risks in
PZT ceramics due to the toxicity of lead are the most important challenges of
using PZTs, which motivates the development of lead-free flexible and high-
performance piezoelectric materials. They concluded that the need for
enhancement of electromechanical, thermal, and biocompatible properties has
led to the introduction of new piezoelectric materials including new lead-free
piezoelectrics, high-temperature piezoelectrics, piezoelectric foams, and
piezoelectric nanocomposites. The paper explains many piezo-materials;
however, there is no systematic comparison between the piezoelectric
materials in terms of piezoelectric and mechanical properties. It seems that
the main goal is only to report recent progress in the field. Also, the
minimum required output power for piezoelectric harvesters was not
specified.
Zaarour et al. [18] summarized the energy harvesting technologies developed
based on piezoelectric polymeric fibers, inorganic piezoelectric fibers, and
inorganic nanowires. The paper contains a review of piezoelectric fibers and
nanowires with respect to the peak voltage, the peak current, the active area,
and their advantages, without describing the working conditions and mechanical
structure of the related piezoelectric energy harvester. Maybe due to lack of
available data on properties of nano-scale piezoelectric materials, there is
not any comparison between the selected materials in terms of their
piezoelectric and mechanical properties. The reported output powers are in the
microwatt range, which is not sufficient for powering real electronic systems
and circuits. They concluded that standardizing the performance of piezo-
nanogenerators, developing effective packaging technology for nano-
piezo-harvesters, commercializing products for harsh environments, finding a
suitable approach to enhance the electrical outputs, and improving the
durability and output stability are some future horizons.
Yuan et al. [19] introduced the dielectric electroactive polymers as promising
replacements for conventional piezoelectric materials. Electroactive polymers
are lightweight, flexible, ductile, and inexpensive to manufacture, with a high
strength-to-weight ratio and low mechanical impedance, and they can endure large strains. The
dielectric polymers need high voltage to realize energy cycles that may lead
to the breakdown of the device. Piezoelectric materials are employed in energy
harvesters because of their compact configuration and compatibility. However,
these materials have inherent limitations including aging, depolarization, and
brittleness. In comparison, electrostrictive polymers are promising candidates
to replace piezoelectric materials in vibration energy harvesting cases. The
challenge in design of electroactive polymer energy harvesters is to develop
systems that are capable of ensuring a constant initial voltage on the polymer
at small cost.
There are some other review papers which have focused on specific issues in the
field of piezoelectric materials. Piezoelectric polymers were reviewed in several
papers, such as the review by Mishra et al. [20]. High-temperature single
crystals are the subject of Priya's paper, which made a comparative study of the
main high-temperature piezoelectric single crystals. Bio-piezoelectric
materials were described by Liu et al. [21], who also reviewed micro- and
nano-fabrication techniques for micro/nano-scale energy harvesters. Useful
information on micro/nano-scale piezoelectric materials may be found in Gosavi
et al. [14]. They defined a systematic roadmap to select the piezoelectric
materials for micro and nanoscale energy harvesters. They pointed out that the
ZnO thin film is the most widely used structure in micro and nanoscale
harvesters, and can be economically synthesized in arbitrary sizes and shapes.
A detailed comparison between traditional macro materials and new micro/nano
piezoelectric materials in terms of dielectric, mechanical and piezoelectric
properties was performed by Bowen et al. [22]. They mentioned some points
about high-temperature harvesting related to the Curie temperature, light
harvesting into chemical or electrical energy, and optimization algorithms.
Their investigation contains parameters like pyroelectric coefficient
(harvesting from temperature fluctuations), the electro-mechanical coupling,
the mechanical quality factor, the constant-strain relative permittivity, the
constant-stress relative permittivity, the piezoelectric coefficient, and the
elastic constant of piezoelectric materials. For high-strain applications,
they suggested polymeric or composite-based systems. Their suggested future
directions are the understanding and development of new materials, gaining a
strong scientific underpinning of the technology, and reliable measurements.
Most of the review papers tried to compare the piezoelectric materials and
draw a roadmap for selecting an appropriate material for energy harvesters.
However, the choice of material is strictly dependent on the type of energy
harvester, its working conditions, the cost level, and the accessibility and ease
of fabrication/synthesis of the piezoelectric material. For example, Ullah Khan
and Ahmad [23], who reviewed vibrational energy harvesters utilizing
bridge oscillations, pointed out that the main selection criteria for
piezoelectric vibrational energy harvesting are the dielectric constant, the
Curie temperature, and the modulus of elasticity of the material.
Piezoelectric materials with a high elastic modulus can be an
appropriate choice for high-acceleration vibrations. However,
piezoelectric materials like lead lanthanum zirconate titanate, which has a high
dielectric constant, will perform very well in low-acceleration
vibrational environments. Also, due to the ease of in situ fabrication of
lead zirconate titanate (PZT) with the sol-gel technique, and its easy integration
with other microfabrication processes, PZT has been largely utilized in
most such applications.
As another example, we can point out the selection of a desirable
piezoelectric material for walking energy harvesting applications. Based on a
review performed by Maghsoudi Nia et al. [24], this application needs an
incombustible, chemically resistant, low-price material, which should be
unbreakable under harsh conditions. The mentioned criteria have made PVDF more
suitable than PZT for most piezoelectric harnessing from walking. Most
of the review papers have contented themselves with reporting some
electromechanical properties of piezoelectric materials, and provide little
information on the accessibility, relative cost, chemical properties, ease of
fabrication, and suitable working conditions of different piezoelectrics. The
lack of such information indicates the need for further research and also the
necessity for making more comprehensive and application-based reviews on
piezoelectric materials.
The results of evaluation of the review papers on piezoelectric materials have
been presented in Table 2. The table also contains different sub-categories,
the range of output power, the number of reviewed articles, the merits,
general conclusions, and some other extra descriptions. The rank of each paper
has been computed based on the number of merits, the number of subcategories,
the number of concluding remarks, and a clear emphasis on the value of the minimum
required output power. As indicated by Table 2, except for a few papers like [16],
the reviews suffer from a lack of reported data on the mechanical properties of
piezoelectric materials, their fabrication methods, and other figures of merit in
material selection. Also, except for the paper [17], which pointed out the energy
needed to power electronic devices, the papers have neglected the minimum
required energy for an energy harvester.
Table 2: Overall evaluation of review papers written on materials in
piezoelectric energy harvesting. The numbers in brackets denote the scores for
each item. ”Cons.” stands for conclusions.
Conclusions: Efficiency/performance improvement, cost reduction, lead free
materials, increasing life time, endurance, size reduction and
manufacturability.
Merits: 1: piezoelectric coefficients, 2: coupling factors, 3:
manufacturability, 4: mechanical strength, 5: guidelines for material
selection, 6: paying attention to the minimum required output power (1 mW), 7:
energy density, 8: stiffness, 9: quality factor
Sub-categories: 1- Micro and nano materials: 1-1 Piezoelectric micro/macro
fibers, 1-2 Polymer nano fibers, 1-3 Ceramic nano-fibers, 1-4 Piezoelectric
nano wires, 1-5 Micro/nano fibers/wires composites.
2- macro scale materials: 2-1- Piezoelectric polymers (PVDF, Pu, P(VDF-TrFE),
cellular PP), 2-2-piezoelectric ceramics (PZT, PMM-PT, PMN-PZT …), 2-3-
piezoelectric single crystals (Quartz …), 2-4- piezoelectric foams (PDMS
piezoelectric, PET/EVA/PET piezoelectret, FEP piezoelectric), 2-5-
piezoelectric powders, 2-6- piezoelectric composites (PVDF with Nanofillers,
Non-Piezoelectric Polymer with BaTiO3), 2-7-bio materials.
# Cons. | Minimum required output | # Refs. | Merits | Sub-categories | Ref. | Grade | Highlights
---|---|---|---|---|---|---|---
6 (0.75) | $\mu W$ to $mW$ (0) | 120 | 1, 2, 3, 5, 6, 8, 9 (1.75) | 2-1, 2-2, 2-3, 2-6 (0.33) | Li et al. [16] | C | 1 the current state of research on piezoelectric energy harvesting devices for low frequency (0-100 Hz) applications and the methods that have been developed to improve the power outputs of the piezoelectric energy harvesters have been reviewed. 2 The selection of the appropriate piezoelectric material for a specific application and methods to optimize the design of the piezoelectric energy harvester were discussed.
6 (0.75) | $\mu W$ to $nW$ (0) | 478 | 1, 3, 6, 7 (1.15) | 1, 2-2, 2-3, 2-4, 2-6 (0.75) | Safaei et al. [4] | C | Reporting the recent advances in the field of piezoelectric materials. Reviewing some novel piezoelectric materials like piezoelectric foam and high-temperature materials
9 (1) | $\mu W$ (0) | 173 | 3, 4, 5 (0.75) | 1-1 to 1-5 (0.42) | Zaarour et al. [18] | C | 1 Manufacturing methods of nano fibers and wires, 2 Mentioning output voltage and currents of nano/micro materials, 3 Comparison of nano/micro materials based on maximum voltage, current, and active area
3 (0.4) | $\mu W$ (0) | 446 | 1, 3, 5 (0.75) | 1-2, 1-3, 2-1, 2-2, 2-3, 2-4 (0.35) | Liu et al. [21] | D | 1 Reporting recent progresses in the field of piezoelectric materials, 2- description of fabrication techniques of lots of piezoelectric materials in energy harvesting applications, 3- explaining the main frequency bandwidth broadening techniques, 4- Classifying piezoelectric materials, fabrication techniques, and frequency bandwidth broadening techniques.
6 (0.75) | $\mu W$ to $mW$ (0) | 175 | 1, 3, 6 (0.75) | 1-1, 2-1, 2-2, 2-3 (0.33) | Narita and Fox [17] | D | 1 Reporting the harvested power of PZT-based PEHs with different structures, 2 Reporting the recent advances in the field of PEHs made of PVDF and polymer-based composite piezoelectrics. Comparing the output power of some of the piezoelectric energy harvesters.
2 (0.25) | $mW$ | 50 | 1, 4, 7, 8 (1.15) | 1, 2 (0.25) | Yuan et al. [19] | D | 1 Introducing electrostrictive and dielectric electro-active polymers, 2 performance comparison of PZT, PVDF, DEAPs, and electrostrictive polymers. Describing the industrial challenges for dielectric electro-active polymers.
4 (0.57) | $\mu W$ (0) | 158 | 1, 2, 3 (0.75) | 2-1, 2-2, 2-3, 2-6 (0.33) | Mishra et al. [20] | D | The article basically aimed at exploring the basic theory behind the piezoelectric behavior of polymeric and composite systems and comparing the important types of piezoelectric polymers and composites. The article described the piezoelectric properties of many of the piezo-polymers and polymer composites.
6 (0.75) | $\mu W$ (0) | 216 | 1, 2, 9 (0.75) | 2-1, 2-2, 2-3 (0.25) | Bowen et al. [22] | D | Reviewing some recent topics like piezoelectric light harvesting, pyroelectric-based harvesting, and nanoscale pyroelectric systems
3 (0.4) | $\mu W$ (0) | 24 | 2, 6, 9 (0.75) | 2-1, 2-3 (0.25) | Lefeuvre [25] | D | 1 Figure of merit for energy conversion efficiency, 2 figure of merit for piezoelectric materials, 3 comparing the one, two and three stage electric power interfaces
2 (0.25) | $\mu W$ to $mW$ (0) | 16 | 1, 2, 5, 8 (1) | 2 (0.1) | Mukherjee and Datta [26] | D | 1 Effect of load resistance on the output power of PEHs, 2 Selection criteria for piezoelectric ceramics
### 3.2 Structure
All piezoelectric energy harvesters include a mechanical part (or transduction
part) to convert the input mechanical energy into the electric charges in the
piezoelectric element, and an electric part that keeps the electric charges
and converts them into a suitable form of electric output like direct voltage.
Design of the mechanical part of a piezoelectric energy harvester usually
includes the determination of its size, configuration, working modes, and
selection of appropriate materials to enhance its performance characteristics
like the output electric energy, the conversion efficiency and the working
bandwidth. The size of the piezoelectric energy harvester may vary from micro
and nanoscale (lower than 0.01$cm^{3}$) to macroscale (75$cm^{3}$) [2].
Based on the literature, piezoelectric energy harvesters can be classified
from different viewpoints. From the viewpoint of operating frequency, they
may be categorized into two main classes: the resonant-type devices that
operate at or near their resonance frequency, and non-resonant systems that do
not depend on any specific frequency. The piezoelectric energy harvesters may
harvest energy from motions in a unique direction or from multi-directions.
Accordingly, they may be single-directional or multi-directional harvesters.
Also, they have a single or several vibration modes (multi-modal harvesters).
From the viewpoint of governing dynamic models, the piezoelectric harvesters
may be linear or non-linear [27]. As indicated in Fig. 3, their configuration
can be classified as cantilever type, stack type, cymbal type, circular
diaphragm type, or the shell and film types.
Uchino [28] started his review by mentioning the historical background of the
piezoelectric energy harvesting, and explaining several important
misconceptions. He reviewed the different design approaches followed by
mechanical, electrical, and MEMS engineers. He remarked that there are three
major phases associated with piezoelectric energy harvesting: (i) mechanical-
mechanical energy transfer, (ii) mechanical-electrical energy transduction,
and (iii) electrical-electrical energy transfer to accumulate the energy into
a rechargeable battery. Fig. 5 represents these three major phases. In order
to provide comprehensive strategies on how to improve the efficiency of the
harvesting system, a step-by-step detailed energy flow analysis is essential.
It was mentioned that the five important figures of merit in piezoelectrics
are the piezoelectric strain constant $d$, the piezoelectric voltage constant
$g$, the electromechanical coupling factor $k$, the mechanical quality factor
$Q_{m}$, and the acoustic impedance $Z$. Also, the energy transfer rates for
the piezoelectric energy harvesting systems with typical stiff cymbals and
flexible piezoelectric transducers were evaluated for three aforementioned
phases/steps. Moreover, a hybrid energy harvesting device that operates under
either magnetic and/or mechanical noise was introduced. It was concluded that
remote signal transmission, energy accumulation in rechargeable batteries,
discovering a way to combine nano-devices in parallel, and enhancing
the energy density in medical applications are future
research fields. It was declared that a clear future perspective for NEMS and
MEMS piezoelectric harvesters is missing due to their low energy levels (on
the order of pW to nW); a genius idea is still needed to combine
thousands of nano-devices in parallel and synchronously in phase. A description of
performance improvement techniques for non-resonant and resonant energy
harvesters is felt missing in this article.
Figure 5: Three major phases associated with piezoelectric energy harvesting
[28].
Priya [29] classified the energy harvesting approaches in two categories (1)
power harvesting for sensor networks using MEMS/thin/thick film approach, and
(2) power harvesting for electronic devices using a bulk approach. His review
article covered the latter category in more detail. He listed almost all the
energy sources available in the surroundings which may be used for energy
harvesting, and commented that the selection of an energy harvester as
compared to other alternatives such as batteries depends on two main factors:
cost effectiveness and reliability. Also, he reported the daily average power
consumption of a wearable device and of common household devices. Next, a
comparison of the energy densities of the three types of mechanical-to-
electrical energy converters, including electrostatic, electromagnetic, and
piezoelectric, was performed. The results are represented in Fig. 6. He
concluded that piezoelectric converters are a prominent choice for mechanical-to-
electric energy conversion because their energy density is three times higher
than that of the electrostatic and electromagnetic types. He reviewed piezo-
harvesters appropriate for lightweight flexible systems with easy mounting,
large response, and low-frequency operation, namely the low-profile piezo-
transducers in on/off-resonance conditions. A good discussion on piezoelectric
polymers, energy storage circuits, and microscale piezo-harvesting devices is
available in the article. He mentioned that the electrical power generated by
a piezoelectric energy harvester is inversely proportional to the damping
ratio, which should be minimized through proper selection of the material and
design. He also summarized the conditions leading to the appearance of
maximum efficiency in low-profile piezoelectric energy harvesters. An
interesting part of the paper is the description of the piezoelectric material
selection procedure for on/off-resonance conditions. However, a description
of performance improvement techniques for enhancing the system frequency
response is felt missing in this article.
Figure 6: Comparison of the energy density for the three types of mechanical
to electrical energy converters [29].
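The inverse dependence of output power on the damping ratio mentioned by Priya follows from the standard lumped model of a base-excited resonant harvester (the Williams–Yates form): at resonance, P = m·ζe·A²/(4·ωn·(ζe + ζm)²), which is maximized when the electrical damping matches the mechanical damping. A minimal sketch with illustrative parameter values (assumptions for illustration, not taken from [29]):

```python
# Electrical power of a base-excited harvester driven at resonance,
# in the standard lumped (Williams-Yates) model:
#   P = m * ze * A^2 / (4 * wn * (ze + zm)^2)
# m: proof mass (kg), ze/zm: electrical/mechanical damping ratios,
# wn: natural frequency (rad/s), accel: base acceleration amplitude (m/s^2).
# All numeric values below are illustrative assumptions.

import math

def power_at_resonance(m, ze, zm, wn, accel):
    """Average electrical power (W) extracted at resonance."""
    return m * ze * accel**2 / (4.0 * wn * (ze + zm) ** 2)

m, wn, A = 1e-3, 2 * math.pi * 100.0, 9.81  # 1 g proof mass, 100 Hz, 1 g accel

p_matched = power_at_resonance(m, 0.01, 0.01, wn, A)  # ze = zm (optimal split)
p_damped = power_at_resonance(m, 0.02, 0.02, wn, A)   # total damping doubled
print(f"matched: {p_matched*1e3:.2f} mW, doubled damping: {p_damped*1e3:.2f} mW")
```

With matched damping the expression reduces to P = m·A²/(16·ζ·ωn), so doubling the damping ratio halves the harvested power; this is the inverse proportionality noted in the review.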
Yang et al. [30] commented that, from the perspective of applications, the
output power of the harvester and its operational frequency bandwidth are the
two metrics most useful to product development engineers. They explained the
materials selection procedure for piezoelectric energy harvesters in off-
resonance conditions and remarked on why PZTs are still the most popular
piezoelectric materials for energy harvesters. They stated that linear
resonant harvesters are not suitable for harvesting energy from broadband or
frequency-varying excitations, and in this condition nonlinear energy
harvesters have been proven to be able to exhibit a broadband performance.
Therefore, researchers have explored monostable, bistable, and tristable
systems and developed some frequency tuning approaches, such as multi-
cantilever structures, bistable composite plate designs, and passive and
active stiffness-tuning technologies. On the nonlinear energy harvesters they
remarked that maintaining the nonlinear harvesters in the high-energy
oscillation states, especially under weak excitations, is a difficult task.
In particular, with zero initial conditions, nonlinear harvesters usually follow
low-energy orbits, which results in small-amplitude voltage responses.
Thus, maintaining a nonlinear PHE in the high-energy states is a critical
problem, which is possible with active and passive control. Efficiently
transferring and storing the generated broadband or random electric energy is
another critical problem for nonlinear PHEs. Moreover, they reviewed the
different design strategies, the optimization techniques, and the harvesting
piezo-materials in applications like shoes, pacemakers, tire pressure
monitoring systems, bridge and building monitoring. They declared that high
energy conversion efficiency, ease of implementation, and miniaturization are
the main advantages of such systems. However, the authors state that enhancing
the energy efficiency of piezo-based harvesters is still an open challenge.
They also made a systematic performance comparison on some of the energy
harvesters. They pointed out that a considerable gap exists between the
achieved performance and the expected performance. Therefore, in situ testing,
applying more realistic excitations, system-level investigations on piezo-
harvesters integrated with the power conditioning circuits, energy storage
elements, sensors, and control circuits need to be investigated. This article
has focused on the mechanical part of energy harvesters; subjects like the
electric interface circuits of the harvesters and their energy flow analysis
have not been addressed.
There are some other review papers which have focused on several issues in the
field of design of piezoelectric harvesters. Performance improvement
techniques for PHEs and design optimization methods are hot topics covered by
the reviews [31], [16], and [27]. Manual and autonomous tuning systems for widening
the operating frequency bandwidth and the future plans in this field were
discussed by Ibrahim and Vahied [32]. A good review of PEH configurations such
as cantilever beam, discs, cymbals, diaphragms, circular diaphragms, shell-
type, and ribbon geometries may be found in [16]. Talib et al. [33] explained
effective strategies and the key factors to enhance the performance of
piezoelectric energy harvesters operating at low frequencies, including
selection of the piezoelectric material, optimization of the shape, size,
structure, and development of multi-modal, nonlinear, multi-directional, and
hybrid energy harvesting systems. This review paper is suitable for
beginners who want to get acquainted with piezoelectric materials and some
designs of piezoelectric energy harvesters. They concluded that the recent
developments are inclined towards generation of more power from low-frequency
and low-amplitude ambient vibrations with reduced required piezoelectric
material. Adding a single-DOF system in the form of an extension beam or a
spring to the piezoelectric beam is notable advice for enhancing the power
output. They showed that the multi-modal energy harvester exhibits a broader
bandwidth when its multiple resonance peaks get closer.
Brenes et al. [34] provided an overview of existing energy harvesting circuits
and techniques for piezoelectric energy scavenging to distinguish between
existing solutions that look similar but differ in practice. Such categorization
is helpful for pondering the advantages and drawbacks of each available item.
Their review is unique since they have classified the piezo-systems based on
adaptive/non-adaptive control strategies, topologies, architectures, and
techniques on the one hand, and electromechanical models on the other hand.
The best system has been introduced with respect to the optimized power
efficiency, the design complexity, the strength of coupling, the multi-stage
load adaptation, and the vibration frequency.
Issues like AC-DC conversion mechanism, the passive and active rectifications,
the start-up issues, the harvester-specific interactions, the voltage
conditioning, the DC-DC charge pumps, the power regulation, and the impedance
matching were discussed by Szarka et al. [35] and Dell’Anna et al. [36]. Non-
linear electronic interfaces for energy harvesting from mechanical vibrations
were reviewed in [37].
Tables 3 and 4 gather together the results of evaluation of the review papers
written on design methods and the power interface considerations,
respectively. The tables also contain different sub-categories, the range of
output power, the number of reviewed articles, the merits, general
conclusions, and some other extra descriptions. The rank of each paper has
been computed based on the number of merits, the number of subcategories, the
number of concluding remarks, and clear emphasizing on value of minimum
required output power.
The results of evaluation of review papers on design of piezoelectric energy
harvesting have been presented in Table 3. The table also contains different
sub-categories, the range of output power, the number of reviewed articles,
the merits, general conclusions, and some other extra descriptions. The grade
for each paper has been computed based on the number of merits, the number of
subcategories, the number of concluding remarks, and declaration of minimum
required output power. Table 3 is designed to evaluate the review papers about
design of PHEs. The merits that are selected as the necessary considerations
in the field of design of PHEs are 1: reporting the output power of PHEs, 2:
reporting the coupling factors and operational modes, 3: including
mathematical models, 4: attending to the motivating frequencies of PHEs, 5:
Attending to the mechanical and electrical energy conversion efficiencies.
Quantitative evaluation of the papers was performed based on the number of
merits which have been followed by the article, number of sub-categories which
were covered in the review and the number of conclusions. according to the
table, unless the first papers, other papers have neglected some merits like
the minimum required output power for the harvesters and energy flows analysis
of them, also most of the papers have not reviewed some of the issues in the
field as reported by paper ”*”.
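The grading scheme used in Tables 3-6 can be sketched in code. This is a speculative reconstruction: the tables list four per-criterion sub-scores (the parenthesized numbers), but the text does not state how they are combined, so the summation and the letter-grade thresholds below are assumptions chosen to match most, though not all, of the tabulated grades.

```python
# Hypothetical reconstruction of the grading scheme of Tables 3-6.
# The four parenthesized sub-scores are assumed to be summed into a
# total, which is then mapped to a letter grade via assumed thresholds;
# neither the weights nor the thresholds are stated in the text.

def grade(conclusions_score, min_output_score, merits_score, subcats_score):
    """Combine the four parenthesized sub-scores into a letter grade."""
    total = conclusions_score + min_output_score + merits_score + subcats_score
    # Illustrative thresholds only; they reproduce most table entries.
    if total >= 4.0:
        return "A"
    if total >= 2.9:
        return "B"
    if total >= 2.0:
        return "C"
    return "D"

# Example: Uchino [28] in Table 3 has sub-scores 1, 1, 2, and 0.5,
# giving a total of 4.5.
print(grade(1, 1, 2, 0.5))  # prints "A"
```

With the same assumed thresholds, Li et al. [16] (0.7 + 0 + 2 + 0.32 = 3.02) lands in grade B, matching Table 3.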
Table 3: Overall evaluation of review papers written on design of
piezoelectric energy harvesters. The numbers in brackets denote the number of
non-general future lines. ”Cons.” stands for conclusions.
Conclusions: Efficiency/performance improvement, necessity of frequency
bandwidth broadening, necessity of optimizations, increasing life time and
endurance, size reduction and manufacturability, importance of electric
interface circuits, the necessity of material properties improvement.
Merits: 1: reporting the output power of PHEs, 2: coupling factors and
operational mode, 3: including mathematical models, 4: matching the resonance
frequency of PHEs with motivating frequencies, 5: paying attention to the
energy conversion efficiencies
Sub-categories: Performance improvement: 1- frequency tuning approaches
(1-1- manual tuning, 1-2- autonomous tuning methods), 2- multi-frequency
systems, 3- nonlinear systems, 4- frequency up-conversion approach, 5- systems
with free moving mass, 6- bi-directional and three-directional systems, 7-
amplification techniques, 8- material selection criteria, 9- energy conversion
efficiency, 10- low-profile piezoelectric harvesters, 11- geometric
optimization, 12- mathematical modeling of PHEs. Design improvements for: 13-
piezoelectric cantilevers, 14- piezoelectric cymbals, 15- piezoelectric stack
configurations, 16- electrode optimization, 17- performance quantification and
comparison strategies, 18- electronic interface circuits for PHEs, 19- hybrid
energy harvesting mechanisms.
# Cons. | Minimum required output | # Refs. | Merits | Sub-categories | Ref. | Grade | Highlights
---|---|---|---|---|---|---|---
10(1) | $mW$ (1) | 35 | 1, 2, 3, 4, 5 (2) | 8, 9, 11, 13, 14, 15, 17, 18, 19 (0.5) | Uchino [28] | A | 1: describing the historical background of piezoelectric energy harvesting, 2: commenting on several misconceptions by current researchers, 3: step-by-step detailed energy flow analysis in energy harvesting systems, 4: describing the key to dramatic enhancement in the efficiency, 5: important comments on the useful/non-useful output power level for the harvesters
6(0.85) | $mW$ (1) | 75 | 1, 2, 3, 4, 5 (2) | 9, 8, 10, 12 (0.2) | Priya [29] | A | Describing the material selection criteria in on- and off-resonance conditions, describing the factors which affect the conversion efficiency of PHEs, introduction of some low-profile PHEs for realizing self-powered sensor nodes
5(0.7) | $\mu W$ to $mW$ (1) | 338 | 1, 2, 3, 4, 5 (2) | 3, 11, 12, 13, 14, 15, 16, 17 (0.36) | Yang et al. [30] | A | Analysis of different designs, nonlinear methods, optimization techniques, and materials for increasing performance. Introducing a set of metrics for the end users of PHEs for comparison of performance of PHEs
5(0.7) | $\mu W$ to $mW$ (0) | 120 | 1, 2, 3, 4, 5 (2) | 1, 4, 8, 12, 14, 18 (0.32) | Li et al. [16] | B | Commenting on the biggest challenges for PHEs, describing the most important limitations of piezoelectric materials
4(0.75) | $\mu W$ to $mW$ (0) | 446 | 1, 2, 3, 4, 5 (2) | 2, 3, 4, 6, 12, 13, 14, 15, 19 (0.52) | Liu et al. [21] | B | Various key aspects to improve the overall performance of a PEH device are discussed. A classification of performance improvement approaches has been performed.
3(0.42) | $\mu W$ (0) | 149 | 1, 3, 4, 5 (1.6) | 1, 2, 3, 4, 5, 6 (0.3) | Maamer et al. [27] | C | Proposing a new generic categorization approach based on the improvement aspect of the harvester, which includes techniques for widening the operating frequency, conceiving a non-resonant system, and multidirectional harvesters. Evaluating the applicability of the performance improvement techniques under different conditions and their compatibility with MEMS technology
5(0.7) | $mW$ (0) | 105 | 1, 3, 4 (1.2) | 1, 2, 3, 4, 5, 6, 7 (0.36) | Yildirim et al. [31] | C | New classification of performance enhancement techniques, comparison of numerous performance enhancement techniques.
8(1) | $\mu W$ to $mW(0)$ | 66 | 1, 3, 4 (1.2) | 1-1, 1-2 (0.1) | Ibrahim and Vahied [32] | C | Classifying, reviewing, and comparing the different manual and autonomous tuning methods; the challenge of energy consumption by self-tuning structures
4(0.6) | $\mu W$ to $mW$ (0) | 135 | 1, 2, 4, 5 (1.6) | 3, 6, 8, 11 (0.25) | Talib et al. [33] | C | They commented that the anticipated performance of a piezoelectric harvester can be attained by achieving the trade-off between output power and bandwidth.
Table 4: Overall evaluation of review papers written on power interfaces in
piezoelectric energy harvesters. The numbers in brackets denote the number of
non-general future lines. ”Cons.” stands for conclusions.
Conclusions: Efficiency/performance improvement, necessity of considering
interactions between the mechanical harvester and the power electronics,
necessity of optimizations, importance of electric interface circuits
Merits: 1: Energy flow analysis of PHEs, 2: Practical implementation of
electronic interfaces, 3: including mathematical models, 4: paying attention
to electrical impedance matching, 5: paying attention to the energy
consumption of electric interfaces, 6: analysis of energy conversion
efficiency.
Sub-categories: 1: Three Phases in Energy Harvesting Process, 2: mechanical-
electrical energy transduction, 3: energy flow analysis, 4: electrical-to-
electrical energy transfer, 5: DC-DC converters and conversions, 6: electric
impedance matching, 7: electromechanical models of PHEs, 8: requirements for
power electronics, 9: AC-DC conversion with voltage conditioning, 10: DC-DC
conversion with voltage conditioning, 11: Power regulation, 12: conversion
efficiency of PHEs, 13: rectification approaches: (13-1 resonant PEH
rectifiers, 13-2- series synchronized switch harvesting on inductor (S-SSHI),
13-3- synchronized switching and discharging to a storage capacitor through an
inductor (SSDCI) rectifier, 13-4- synchronous electric charge extraction
(SECE), 13-5- synchronized switch harvesting on inductor magnetic rectifier
(MR-SSHI), 13-6- hybrid SSHI, 13-7- adaptive synchronized switch harvesting
(ASSH), 13-8- enhanced synchronized switch harvesting (ESSH), 13-9 MPPT-based
PEH Rectifiers), 14: performance of rectification approaches, 15: autonomous
switch control in resonant PEH rectifiers, 16: switching techniques, 17:
parallel SSHI, 18: load decoupling interfaces, 19: non-adaptive MPPT control,
20: the tunable OSECE technique, 21: characteristics of existing adaptive
control strategies, 22: two-stage load adaptation FB technique, 23: two-stage
load adaptation shunt rectifier technique, 24: PS SECE technique, 25: tunable
SECE technique, 26: tunable USECE technique, 27: N-SECE technique, 28: FTSECE
technique, 29: HB and FB 3-stage load adaptation technique, 30: tunable SCSECE
technique, 31: four-stage topology: the SSH architecture, 32: parallel SSHI
(p-SSHI) technique, 33: series SSHI (s-SSHI), DSSH and ESSH techniques, 34:
technical guidelines for the choice of an adequate circuit.
# Cons. | Minimum required output | # Refs. | Merits | Sub-categories | Ref. | Grade | Highlights
---|---|---|---|---|---|---|---
9(1) | $mW$ (1) | 35 | 1, 2, 3, 4, 5, 6 (2) | 1, 2, 3, 4, 5, 6, 7, 8, 12 (0.27) | Uchino [28] | A | Mentioning minimum acceptable output power for harvesters, energy flow analysis for cymbal type transducer, describing the electric impedance matching technique
3(0.5) | \- (0) | 109 | 1, 2, 3, 4, 5, 6 (2) | 12 to 34 (1) | Brenes et al. [34] | B | Comparison of the conditions for electric tuning techniques to maximize the power flow from an external vibration source to an electrical load; description of necessary conditions for Maximum Power Point Tracking (MPPT)
8(1) | $\mu W$ (0) | 113 | 2, 3, 4, 5, 6 (1.66) | 1, 2, 4, 5, 8, 9, 10, 11, 13 (0.27) | Szarka et al. [35] | B | Overview of power management techniques that aim to maximize the extracted power of PHEs; describing the requirements for power electronics; reviewing various power conditioning techniques and comparing them in terms of complexity, efficiency, quiescent power consumption, and startup behavior
6(0.75) | -(0) | 113 | 2, 3, 4, 5, 6 (1.66) | 7, 12, 13 (all items), 14, 15 (0.15) | Dell’Anna et al. [36] | C | 1: Almost all the rectification techniques employed in PEH systems were discussed and compared, emphasizing the advantages and disadvantages of each approach. 2: Introducing the seven criteria used to evaluate the performance of a harvesting interface
1(0.2) | \- (0) | 64 | 2, 3, 5, 6 (1.33) | 16, 17, 18, 13-2, 13-3 (0.15) | Guyomar and Lallart [37] | D | 1: review of nonlinear electronic interfaces for energy harvesting from mechanical vibrations, 2: comparative analysis of various switching techniques in terms of efficiency, performance under several excitation conditions, complexity of implementation
### 3.3 MEMS/NEMS-based devices
A large number of reviews on piezo-harvesters have been devoted to the field
of MEMS/NEMS piezoelectric harvesters. Micro- and nanoscale energy harvesters
may be useful in the future for easy powering or charging of mobile
electronics, even in remote areas, without the need for large power storage
elements. MEMS-type devices include cantilevers, cymbals, and stacks, whereas
NEMS-type devices are wires, rods, fibers, belts, and tubes. Generation of
output electric current using piezoelectric energy harvesters faces many
limitations and difficulties. Some of these limitations are low output power,
high electric impedance, crack propagation in most piezoelectric materials due
to overloading, frequency matching of the harvester with vibrational energy
sources, and fabrication/integration of piezoelectrics at the micro/nanoscale
[40].
Kim et al. [41] commented that for the elimination of chemical batteries and
complex wiring in microsystems, a fully assembled energy harvester with the
size of a US quarter dollar coin should be able to generate about 100$\mu$W of
continuous power from ambient vibrations. In addition, the cost of the device
should be sufficiently low. The article has addressed two important questions:
”how can one achieve self-powering when the power required is much larger than
what can be achieved by MEMS-scale piezoelectric harvesters?” and ”what is the
best mechanism for converting mechanical energy into electrical energy at
mm$^3$ dimensions?”. They also commented that, for harvesting power robustly,
the resonance bandwidth of piezoelectric cantilevers should be wide enough to
accommodate the uncertain variance of ambient vibrations. Thus, the resonance
bandwidth is a significant characteristic for capturing a sufficient amount of
energy and should be accounted for in determining the performance of energy
harvesters. MEMS technology is a cost-effective
fabrication technology for PHEs if it can meet the requirements for power
density and bandwidth. Three major aspects to make the MEMS PEHs appropriate
for use in real applications are the final cost of the PEH, the normalized
power density, and the operational frequency range (including the bandwidth
and center frequency). They added that piezoelectric MEMS energy harvesters
mostly have a unimorph cantilever configuration (Fig. 7). The proof mass (M)
in Fig. 7 is used to adjust the resonant frequency to the available
environmental frequency, normally below 100 Hz. Recently, integrated MEMS
energy harvesters have been developed; when comparing MEMS PEHs, some
essential metrics such as the active area and volume of the PEH, the resonant
frequency, the harvested power, and the power density per volume or area
should be considered. They reviewed the challenges of piezo-harvesters,
including the need for high power density and wide operational bandwidth,
non-linear resonating beams for wide-bandwidth resonance, and improvements in
materials and structural design. They
concluded that the epitaxial growth and grain texturing of the piezo-
materials, the embedded medical systems, the lead-free piezoelectric MEMS-
based materials, and materials with giant piezoelectric coefficient are active
research fields. They presented an extensive comparison of thin-film piezo-
systems from various sources and concluded that the state-of-the-art power
density is still about one order of magnitude smaller than what is needed for
practical applications.
Figure 7: Unimorph structure of piezoelectric energy harvester that has one
piezo-layer and a proof mass [41].
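The role of the proof mass in Fig. 7 can be made concrete with the standard lumped estimate for a cantilever's fundamental resonance (a textbook approximation, not taken from [41]): with a tip-bending stiffness $k \approx 3EI/L^{3}$ and an effective beam mass of roughly $0.24\,m_b$, the resonant frequency is

```latex
% Fundamental resonance of a tip-mass-loaded cantilever
% (lumped-parameter approximation; E: Young's modulus, I: area
% moment of inertia, L: beam length, M: proof mass, m_b: beam mass):
f_r \approx \frac{1}{2\pi}\sqrt{\frac{3EI/L^{3}}{M + 0.24\,m_b}}
```

Increasing the proof mass $M$ therefore lowers $f_r$, which is how MEMS designs reach ambient frequencies below 100 Hz despite their small size.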
Toprak and Tigli [42] conducted a review on piezoelectric harvesters based on
their size (nanoscale, microscale, mesoscale, macroscale). They also presented
an interesting statistic: the number of publications on piezoelectric
harvesting between 2009 and 2014 is more than twice the sum of publications
about electromagnetic and electrostatic systems. They commented that the
inherent reciprocal conversion capability is an important advantage of
piezoelectric energy harvesters, allowing them simpler architectures than
their electromagnetic and electrostatic counterparts. They declared that
bio-compatibility, compatibility with CMOS technology, rectification and
storage losses, and enhancing the operational bandwidth are the most
challenging issues for such systems. A discussion of the validity of the
classical constitutive relations for piezo-materials at the nanoscale, and
attention to the minimum required power output of PEHs, are missing from the
paper.
Todaro et al. [43] reviewed the current status of the MEMS-based energy
harvesters using piezoelectric thin films, and highlighted approaches and
strategies. They commented that such harvesters are compact and cost-effective
especially for harvesting energy from environmental vibrations. They believe
that the two main challenges in achieving high-performance devices are
increasing the amount of generated power and widening the frequency bandwidth. They
also introduced the theoretical principles and the main figures of merit of
energy conversion in piezoelectric thin films. They compared most important
thin film piezo-materials based on the introduced figure of merit. Their
recommendations for future research are developing proper materials, new
device architectures and strategies involving bimorph and multimorph designs
exploited for bandwidth and power density improvements, progressing in
synthesis and growth technologies for lead-free high quality piezoelectrics,
employing new flexible materials with tailored mechanical properties for
larger displacement and lower frequencies, and taking advantage of non-linear
effects to obtain a wider bandwidth and a higher efficiency. Specifying the
minimum required output power and attending to the mechanical and electrical
energy conversion efficiencies are missing from this review paper.
Dutoit et al. focused on design considerations for piezoelectric-based energy
harvesters for MEMS-scale sensors. They stated that the power consumption of
tens to hundreds of $\mu$W is predicted for sensor nodes, and nowadays a
milli-scale commercial node has an average power consumption of 6–300 $\mu$W
[2]. With the reduction of power requirements for sensor nodes, the
application of piezoelectric energy harvesters has become viable. They stated
that the power
or energy sources can be divided into two groups: sources with a fixed energy
density (e.g. batteries) and sources with a fixed power density (normally
ambient energy harvesters). They suggested that the following information be
made available in research papers to facilitate a relative comparison of PEH
devices: device size, the maximum tip displacement at maximum power output,
the mechanical damping ratio, the electrical load, the device mass, and the
input vibration characteristics. Also, in this paper a fully coupled
electromechanical model was developed to analyze the response of a
piezoelectric energy harvester, and the difference in optimization strategies
of PEHs under on-resonance and off-resonance conditions was noted.
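The reporting checklist suggested by Dutoit et al. could be captured as a simple record type; this is an illustrative sketch, not part of the original paper, and every field name and unit below is hypothetical:

```python
from dataclasses import dataclass

# Hypothetical record type collecting the information Dutoit et al.
# suggest every PEH paper should report, so that devices can be
# compared on a like-for-like basis. Field names/units are illustrative.

@dataclass
class PEHReport:
    device_size_mm3: float          # overall device volume
    max_tip_displacement_mm: float  # at maximum power output
    mechanical_damping_ratio: float
    electrical_load_ohm: float
    device_mass_g: float
    input_acceleration_ms2: float   # input vibration amplitude
    input_frequency_hz: float       # input vibration frequency
    output_power_uw: float

    def power_density_uw_per_mm3(self) -> float:
        """Normalized power density, a common figure of merit."""
        return self.output_power_uw / self.device_size_mm3

# Example usage with made-up numbers:
r = PEHReport(27.0, 0.5, 0.02, 100e3, 0.8, 2.5, 120.0, 60.0)
print(round(r.power_density_uw_per_mm3(), 3))  # prints 2.222
```

Standardizing on such a record makes the relative comparison of PEH devices that the authors call for straightforward to automate.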
Other review papers on MEMS PEHs have focused on several issues including ZnO
nanorods and flexible substrates, and ZnO-based nano-devices [44], comparison
of existing piezoelectric micro generators (including the impact coupled, the
resonant and human-powered devices, and the cantilever-based setup) with
electromagnetic and electrostatic mechanisms [45], the description of micro
and nano device fabrication techniques, performance metrics, and device
characterization [14], hybrid electromagnetic-piezoelectric and
triboelectric/piezoelectric MEMS-based harvesters and their advantages [46],
ZnO nanostructure-based photovoltaics, piezoelectric nano-generators, and the
hybrid energy harvesting approach [47], reporting the benefits,
capacities, applications, challenges, and constraints of micro-power
harvesting methods using thermoelectric, thermophotovoltaic, piezoelectric,
and microbial fuel cell [40], nanostructured polymer-based piezoelectric and
triboelectric materials as flexible, lightweight, easy/cheap to fabricate,
being lead-free, biocompatible, and robust harvesters [48], theoretical and
experimental characterization methods for predicting and determining the
potential output of nano wire-based nanogenerators [49], reviewing the
research progress in the field of piezoelectric nanogenerators and describing
their working mechanism, modeling, and structural design [50], discussing the
impact of composition, orientation, and microstructures on piezoelectric
properties of perovskite thin films like PbZr1-xTixO3 (PZT) in applications
such as low-voltage radio frequency MEMS switches and resonators, actuators
for millimeter-scale robotics, droplet ejectors, energy harvesters for
unattended sensors, and medical imaging transducers [51].
Table 5 presents details of evaluation of reviews on micro/nanoscale energy
harvesting. In summary, almost all review articles discussed some great
challenges in the development of MEMS/NEMS-based piezoelectric harvesters such as
the limited bandwidth and low output power. On the other hand, there are some
competitive technologies like electromagnetic, thermoelectric, and
electrostatic energy harvesting that can be employed for scavenging the
environmental waste energy. Most of the comparative review papers have focused
on the output power and coupling coefficient of the harvesting systems, while
other important features such as the lifetime, the capability of working in
harsh environmental conditions, the cost level, commercial accessibility, and
the technology readiness level (TRL) need deeper consideration.
Table 5: Overall evaluation of review papers written on MEMS piezoelectric
energy harvesters. The numbers in brackets denote the number of non-general
future lines. ”Cons.” stands for conclusions.
Conclusions: Efficiency/performance improvement, necessity of frequency
bandwidth broadening, necessity of optimizations, increasing life time and
endurance, size reduction and manufacturability, importance of electric
interface circuits, the necessity of material properties improvement.
Merits: 1: reporting the output power or power density of MEMS/NEMS PHEs, 2:
coupling factors and operational mode, 3: describing the fabrication
techniques, 4: matching the resonance frequency of PHEs with motivating
frequencies, 5: paying attention to the minimum required output power, 6: CMOS
compatibility, 7: energy flow analysis.
Sub-categories: 1- Micro/ nano scale materials (1-1- Grain textured and
epitaxial piezoelectric films, 1-2- Lead-free piezoelectric films, 1-3-
Aluminum nitride piezoelectric film, 1-4- piezoelectric nano-polymers, 1-5-
Polymer-Ceramic nanocomposite nano generators (NG), 1-6- Electrospun P(VDF-
TrFE) nanofiber hybrid NGs, 1-7- Nylon nanowire-based piezoelectric NG, 1-8-
Template-grown poly-L-lactic acid, 1-9- Electrospun poly-L-lactic acid
nanofibers, 1-10- ZnO-polymer nanocomposite piezoelectric NG, 1-11- ZnO nano-
rods, 1-12- Nano wires, 1-13- Nanowire-Composites, 1-14- PZT thin films, 1-15-
piezo-polymer thin films, 1-16- piezoelectric electroactive polymers), 2-
Nonlinear resonance-based energy harvesting structures, 3- energy conversion
efficiency, 4- figure of merit for MEMS PHEs, 5- Material synthesis and
deposition (5-1- solution phase synthesis, 5-2- thin film deposition, 5-3-
growth of polymer-based nanowires), 6- Modes of operations for MEMS PHEs, 7-
design configurations for MEMS PHEs (7-1- cantilever based piezoelectric
generators, 7-2- other types of piezoelectric generators), 8- Microscale
PHEs, 9- Substrate and electrode and their impact on performance, 10- MEMS
device performance parameters, 11- Characterization of MEMS PHEs, 12- MEMS
hybrid harvesters (12-1- Architectures of hybrid harvesters, 12-2- Mathematical
models of PZT hybrid harvesters, 12-3- PZT/tribo-electric hybrid harvester),
13- Nano-scale PHEs (13-1 working principles, 13-2- design, fabrication and
implementation of nanogenerators, 13-3- Hybrid nano-generators, 13-4- nano-rod
arrays, 13-5- flexible nano generators, 13-6- ZnO nano-PHEs, 13-7-
applications of nano-generators, 13-8- Flexoelectric enhancement at the
nanometer scale, 13-9- Characterization of piezoelectric potential from
piezoelectric NWs, 13-10- prototypes of nano generators, 13-11- Prediction of
the power output from piezoelectric NWs, 13-12- vertically aligned nanowire
arrays and their fabrication, 13-13- laterally aligned nanowire arrays and
their fabrication), 14- Impact coupled devices, 15- Human powered
piezoelectric generation, 16- Evolving technology of miniature power
harvesters, 17- Positive prospects of micro-scale electricity harvesters, 18-
Challenges and constraints of minute-scale energy harvesters, 19- CMOS
compatibility, 20- biocompatibility, 21- bandwidth of PHEs, 22- figure of
merit for PHEs, 23- piezoelectric thin films, 24- screening effect in PHEs,
25- Energy harvesting by piezoelectric thin films.
# Cons. | Minimum required output | # Refs. | Merits | Sub-categories | Ref. | Grade | Highlights
---|---|---|---|---|---|---|---
6(0.85) | $\mu W$ (1) | 89 | 1, 2, 3, 4, 5, 6, 7 (2) | 1 to 4 (0.25) | Kim et al. [41] | A | Describing figures of merit for MEMS PHEs, mentioning the key attributes for MEMS PHEs, describing the minimum acceptable power density for MEMS PHEs
4(0.57) | $\mu W$ (0) | 95 | 1, 3, 4 (0.6) | 1-11, 13-4, 13-5, 13-6 (0.15) | Briscoe and Dunn [44] | C | 1: This review has summarized the work to date on nanostructured piezoelectric energy harvesters. 2: They stated that in order to satisfy the needs of real power delivery, devices need to maximize the rate of change of any strain delivered into a system in order to increase the polarization developed by the functional layers, and improve the coupling of the device to the environment.
4(0.57) | $\mu W$ to $mW(0)$ | 123 | 1, 2, 3, 4, 5, 6 (1.71) | 8, 13, 18, 19, 20, 21 (0.25) | Toprak and Tigli [42] | C | 1: They commented that the size-based classification provides a reliable and effective basis to study various piezoelectric energy harvesters. 2: They discussed the most prominent challenges in piezoelectric energy harvesting and the studies focusing on these challenges.
4(0.57) | $\mu W$ to $mW(0)$ | 145 | 1, 2, 3, 4, 5, 6 (1.71) | 8, 18, 22, 23 (0.15) | Todaro et al. [43] | C | 1: The paper has reviewed the current status of MEMS energy harvesters based on piezoelectric thin films. 2: The paper has highlighted approaches/strategies to face the two main challenges to be addressed for high-performance devices, namely generated power and frequency bandwidth. 3: A comparison of the performance of many MEMS energy harvesters has been performed.
5(0.71) | $mW$ (0) | 34 | 1, 2, 3, 4, 5 (1.4) | 12 (12-1 to 12-3) (0.1) | Salim et al. [46] | C | Elaborating on hybrid energy harvesters, reporting recent literature on such harvesters with different architectures, models, and results, and comparing present hybrid PHEs in terms of output power.
6(0.85) | $\mu W$ to $mW(0)$ | 74 | 1, 2, 4, 5 (1.15) | 6, 7, 8, 10, 25 (0.2) | Dutoit et al. [2] | C | Commenting on the necessary information for comparing different PHEs. Pointing out the difference between dominant damping components at the micro- vs. macro-scale. Developing a fully coupled electromechanical model for analyzing the response of PHEs with a cantilever configuration.
7(1) | $\mu W$ to $mW(0)$ | 108 | 1, 3, 4, 5 (1.15) | 1-4 to 1-10, 5-3 (0.1) | Jing and Kar-Narayan [48] | C | 1: Discussing the growth of nanomaterials including nanowires of polymers of polyvinylidene fluoride and its co-polymers, Nylon-11, and poly-lactic acid for scalable piezoelectric and triboelectric nanogenerator applications. 2: discussing design and performance of polymer-ceramic nanocomposite.
6(0.85) | $\mu W$ to $mW(0)$ | 115 | 1, 2, 4, 5 (1.15) | 7-1, 7-2, 14, 15 (0.2) | Beeby [45] | C | Characterization and comparison of piezoelectric, electromagnetic and electrostatic MEMS generators
5(0.71) | $\mu W$ (0) | 140 | 1, 3, 4, 5 (1.15) | 1-2, 1-15, 1-16 (0.15) | Asif Khan [52] | C | 1: The review has covered the available material forms and applications of piezoelectric thin films. 2: The electromechanical properties and performances of piezoelectric films have been compared and their suitability for particular applications were reported. 3: Control over the growth of the piezoelectric thin films and lead-free compositions of thin films can lead to good environmental stability and responses, coupled with higher piezoelectric coupling coefficients.
3(0.4) | \- (0) | 75 | 1, 3, 4, 5 (1.15) | 1-15, 25 (0.1) | Muralt et al. [51] | D | The article has reviewed the impact of composition, orientation, and microstructure on the piezoelectric properties of perovskite thin films. The authors described useful power levels for MEMS PHEs.
5(0.71) | $\mu W$ (0) | 78 | 1, 2, 3 (0.85) | 1-12, 1-13, 13-7, 13-12, 13-13, 18 (0.25) | Wang et al. [50] | D | The working mechanism, modeling, and structural design of piezoelectric nanogenerators were discussed. Integration of nanogenerators for high output power sources, the structural design for increasing the energy harvesting efficiency in different conditions, and the development of practicable integrated self-powered systems with improved stability and reliability are the critical issues in the field. A classification of nano-generators based on their design and working modes was performed.
6(0.85) | $mW$ (0) | 112 | 1, 4, 5 (0.85) | 3, 16, 17, 18 (0.15) | Selvan and Ali [40] | D | The capabilities and efficiencies of four micro-power harvesting methods, including thermoelectric, thermo-photovoltaic, piezoelectric, and microbial fuel cell renewable power generators, are thoroughly reviewed and reported
4(0.57) | $\mu W$ (0) | 69 | 1, 2, 4 (0.85) | 1-12, 13-8 to 13-11, 18 (0.15) | Wang [49] | D | 1: theoretical calculations and experimental characterization methods for predicting or determining the piezoelectric potential output of NWs were reviewed. 2: numerical calculation of the energy output from NW-based NGs. 3: Integration of a large number of ZnO NWs was demonstrated as an effective pathway for improving the output power.
4(0.57) | -(0) | 80 | 2, 3 (0.57) | 5 to 11 (0.3) | Gosavi and Balpande [14] | D | Description of some of the synthesis and deposition techniques and performance parameters for MEMS PHEs
5(0.71) | -(0) | 100 | 1, 3 (0.57) | 13 (13-1 to 13-3) (0.05) | Kumar and Kim [47] | D | 1: Describing the mechanism of power generation behavior of nano-generators fabricated from ZnO nanostructures, 2: describing an innovative and important hybrid approach based on ZnO nano-structures.
### 3.4 Modeling approaches
Some review papers have focused on the modeling of PHEs to clarify the
physical bases behind piezoelectric energy harvesting. Only a few review
papers have focused entirely on the evaluation of different modeling
approaches for piezoelectric energy harvesting.
Erturk and Inman investigated mechanical [55] and mathematical [56] aspects of
the cantilevered piezoelectric energy harvesters to avoid reuse of simple and
incorrect older models in literature. They reviewed the general solution of
the base excitation problem for transverse and longitudinal vibrations of a
cantilevered Euler-Bernoulli beam. They proved that the classical single-
degree-of-freedom (SDOF) predictions may yield highly inaccurate results and
are appropriate only for high tip-mass-to-beam-mass ratios. Damping due to
internal friction (Kelvin-Voigt damping), damping related to the fluid medium,
the base excitation as a forcing function, and the backward piezoelectric
coupling in the beam equation are among the modeling parameters. Modeling of
the energy conversion efficiency is missing from the article.
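For reference, the lumped-parameter electromechanical model that underlies these SDOF discussions is commonly written as a coupled pair of equations (a representative textbook form, with $m$, $c$, $k$ the effective mass, damping, and stiffness, $\theta$ the electromechanical coupling, $C_p$ the piezo capacitance, $R_l$ the load resistance, $x$ the relative tip displacement, $v$ the voltage, and $y_b$ the base motion):

```latex
% Coupled lumped-parameter equations of a base-excited piezoelectric
% harvester with a resistive load:
m\ddot{x} + c\dot{x} + kx + \theta v = -m\ddot{y}_b, \qquad
C_p\dot{v} + \frac{v}{R_l} = \theta\dot{x}
```

The backward coupling term $\theta v$ in the mechanical equation is precisely the effect whose omission Erturk and Inman criticize in the older simplified models.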
Zhao et al. [57] compared different modeling approaches for harvesting the
wind energy, including the single-degree-of-freedom, the single-mode and
multi-mode Euler-Bernoulli distributed-parameter models (ignored in Ref.
[56]). They concluded that the distributed-parameter model has a more rational
representation of aerodynamic forces, while the SDOF model more precisely
predicts the cut-in wind speed and the electro-aeroelastic behavior. In
addition, they performed a parametric study on the effect of the load
resistance, wind exposure area, mass of the bluff body, and the length of the
piezoelectric sheet on the cut-in wind speed as well as the output power level
of the GPEH. Again, modeling of the energy conversion efficiency is missing
from the article.
Wei and Jing [58] presented a state-of-the-art review of theory, modeling, and
realization of the piezoelectric, electromagnetic, and electrostatic energy
harvesters. The linear inertia-based theory and the non-linear models have
been described for the three mentioned vibration-to-electricity converters.
They investigated some characteristics of piezo-harvesters such as immunity to
external/internal electromagnetic waves, simple structure, depolarization,
brittleness of the bulk piezo-layer, the poor coupling in piezo-films, and the
poor adhesion to the electrode materials. Development of
new piezoelectric materials, creation of new energy harvesting configurations
by exploring the non-linear benefits, and design of efficient energy
harvesting interface circuits are among their suggestions as future prospects.
They concluded that the non-linearity is an important and effective parameter
in terms of performance enhancement. Theoretical modeling of the non-linear
systems with keeping reliability and stability is a challenging task. The
reviewed models have not been compared in the paper.
Table 6 sums up the results of evaluation of the review papers written about
the modelling approaches. The table also contains different sub-categories,
the range of output power, the number of reviewed articles, the merits,
general conclusions, and some other extra descriptions. The rank of each paper
has been computed based on the number of merits, the number of subcategories,
the number of concluding remarks, and clear emphasizing on Svalue of minimum
required output power.
Table 6: Overall evaluation of review papers written on modeling of
piezoelectric energy harvesters. The numbers in brackets denote the number of
non-general future lines. "Cons." stands for conclusions.
Conclusions: Efficiency/performance improvement, necessity of frequency
bandwidth broadening, necessity of optimizations, increasing life time and
endurance, size reduction and manufacturability, necessity to improve the
accuracy of PHE models.
Merits: 1: giving the mathematical background of the models, 2: considering
the energy losses, 3: taking into account the resonance and off-resonance
conditions, 4: efficiency modeling, 5: mentioning the constraints and
limitations of the models, 6: mentioning the assumptions followed by the
models, 7: comparison of the existing models.
Sub-categories: 1- Energy conversion in PHEs with linear models, 2- Energy
conversion in PHEs with nonlinear models, 3- Modelling efficiency, 4-
Modelling cantilever PHEs (4-1- SDOF models 4-2- distributed parameter
modeling), 5- modeling the aeroelastic energy harvesting, (5-1- Flutter in
airfoil sections, 5-2- vortex-induced vibrations in circular cylinders, 5-3-
Galloping in prismatic structures, 5-4- VIV-/cylinder-based aeroelastic energy
harvesters, 5-5- Galloping-based aeroelastic energy harvesters,5-6- Wake
galloping, 5-7- SDOF models 5-8- Euler-Bernoulli distributed parameter model).
# Cons. | Minimum required output | # Refs. | Merits | Sub-categories | Ref. | Grade | Highlights
---|---|---|---|---|---|---|---
6(1) | $mW$ (0) | 21 | 1, 2, 3, 4, 5, 6, 7 (2) | 4-1 to 4-3 (0.2) | Erturk and Inman [55, 56] | B | Issues of the correct formulation for piezoelectric coupling, correct physical modeling, use of low fidelity models, incorrect base motion
6(1) | $mW$ (0) | 48 | 1, 3, 4, 5, 6, 7 (1.7) | 5-7 to 5-8 (0.2) | Zhao et al. [57] | C | Comparing the performance of the modeling methods for GPEH, including the SDOF model, and single mode and multimode Euler-Bernoulli distributed parameter models.
5(0.85) | $\mu W$ (0) | 204 | 1, 2, 4 (0.85) | 1, 2, 3, 4 (0.8) | Wei and Jing [58] | C | 1: reviewing the energy conversion efficiency of some of the conversion mechanisms, 2: describing several configuration designs for PHEs such as cantilever structures and uniform membrane structures.
6(1) | $mW$ (0) | 201 | 3-5 (0.6) | 5-1- to 5-6 (0.2) | Abdelkefi [59] | D | Qualitative and quantitative comparisons between existing flow-induced vibrations energy harvesters, describing some of the limitations of existing models and recommending some improvement for future
## 4 Applications
### 4.1 Vibration
Vibration is the most common source of energy for piezoelectric harvesters,
since the input is already mechanical and no conversion of the input energy is
needed before piezo-materials can produce electricity. Also, its abundance,
accessibility, and ubiquity in the environment, in addition to the multiple
possible transduction types, have made it attractive for energy harvesting
applications. The response of piezoelectric materials to the applied
vibrations depends on their electromechanical properties, such as the natural
frequency, geometry, electromechanical coefficients, and damping
characteristics. The design strategies for such harvesters, performance
enhancement methodologies, the behavior of the energy harvesters in harsh
environments, their fatigue life and failure modes, and the conditioning
electric circuits are some of the important issues that should be addressed in
review papers.
Kim et al. [60] summarized the key ideas behind the performance evaluation of
the piezoelectric energy harvesters based on vibration, classifications,
materials, and the mathematical modeling of vibrational energy harvesting
devices. They listed 17 important electro-mechanical characteristics of
PZT-5H, PZT-8, and PVDF, and described various configurations such as the
cantilever type, the cymbal type, the stack type, and the shell type. They
advised that the future opportunities for research are development of high
coupling coefficient of piezoelectric materials, giving the ability to sustain
under harsh vibrations and shocks, development of flexible and resilient
piezoelectric materials, and designing efficient electronic circuitry for
energy harvesters. Siddique et al. [61] provided a literature review on
vibration-based micropower generation using electromagnetic and piezoelectric
transduction systems and hybrid configurations. They reported some performance
characteristics of the piezoelectric energy harvesters with different
materials and configurations. They claimed that most recent research has been
devoted to modifying the generator size and shape and to introducing a power
conditioning circuit to widen the frequency bandwidth of the system. Further
research topics are the development of MEMS-based energy harvesters from
renewable resources and making miniature electric devices more reliable.
Figure 8 presents three schematic views of microscale piezo-generators
designed for vibration-based energy harvesting applications.
Sodano et al. [62], as one of the earliest reviewers of the field, discussed
the future goals that must be achieved for power harvesting systems to find
their way towards the everyday use, and to generate sufficient energy to power
the necessary electronic devices. They mentioned that the major limitations in
the field of power harvesting revolve around the fact that the power generated
by the piezoelectric energy harvesters is far too small to power most
electronic devices. Increasing the amount of energy generation, developing
innovative methods of accumulating the energy, use of rechargeable batteries,
optimization of the power flow from a piezoelectric setup, minimizing the
circuit losses, identifying the location of power harvesting and the
excitation range, and proper tuning of the power harvesting device are their
predictions for the future of vibration-based piezo harvesters.
Figure 8: (a) Geometry and position of neutral axis of piezocomposite composed
of layers of carbon/epoxy, PZT ceramic and glass/epoxy [60], (b) a MEMS-based
piezo-generator in 3-3 mode [62], (c) schematic diagram of cross sectional
view of a fabricated vibration-based micro power generator [61].
Saadon and Sidek [63] presented a brief discussion of vibration-based MEMS
piezoelectric energy harvesters. They summarized various harvester designs
and reviewed experimental results from the three years preceding the paper's
publication. They focused on the working modes and
maximum output power of the MEMS piezoelectric energy harvesters. Harb [64]
reviewed a brief history of all energy harvesting methods including the
vibration-based, the electromagnetic-based, the thermal or radioactive-based,
pressure gradient-based, the solar and light-based, biological, and micro-
water flow systems. However, they advised that the different types of
vibration are the most available and highest-power sources. Review papers
like the one presented by Zhu et al. [65] are the result of the explosive
utilization of vibration-based micro-generators in powering wireless sensor
networks. They presented an overall review of the principles and operating
strategies to increase the operational frequency range of vibration-based
micro-generators. Harne and Wang [66] reported
the major efforts and findings about common analytical frameworks and
principal results for bi-stable electromechanical dynamics, and a wide variety
of bi-stable energy harvesters. Based on their discussion, the remaining
challenges of such systems are maintaining high-energy orbits, operation under
stochastic vibratory conditions, designing the coupled bi-stable harvesters,
and defining proper performance metrics.
In summary, different configurations of piezoelectric cantilevers, their
power output, and the performance enhancement strategies have been well
covered by the review papers. However, a systematic comparison of different
configurations of piezoelectric energy harvesters, as well as their ability to
sustain harsh vibrations and shocks, their fatigue life, and their cost and
accessibility, have not been considered by the reviews. Table 7 presents the
results of evaluation of the piezo-electric energy harvesters from vibrational
sources. The table also contains different sub-categories, the range of output
power, the number of reviewed articles, the merits, general conclusions, and
some other extra descriptions. The rank of each paper has been computed based
on the number of merits, the number of subcategories, the number of concluding
remarks, and clear emphasis on the value of the minimum required output power.
Table 7: Overall evaluation of review papers written on piezoelectric energy
harvesting from vibration sources. "Cons." stands for conclusions.
Conclusions: 1: Efficiency/performance improvement, 2: frequency tuning, 3:
safety issues, 4: costs, 5: hybrid harvesters, 6: non-linear models, 7:
battery replacement, 8: miniaturization, 9: steady operation, 10: more
efficient materials, 11: stochastic modeling.
Merits: 1: electromechanical coupling factor, 2: realistic resonance, 3:
energy flow, 4: range of output.
Sub-categories: 1: circuits, 2: type of materials, 3: modeling, 4: noise
level, 5: wearable, 6: frequency range, 7: MEMS
# Cons. | Minimum required output | # Refs. | Merits | Sub-categories | Ref. | Grade | Highlights
---|---|---|---|---|---|---|---
3 (0.27) | $\mu W$ to $mW$ (1) | 93 | 1-4 (2.0) | 1, 2, 3, 5 (0.57) | Kim et al. [60] | B | Comparison with electrostatic and electromagnetic energy conversions
6 (0.55) | $\mu W$ to $mW$ (1) | 145 | 1-4 (2.0) | 7 (0.14) | Siddique et al. [61] | B | Comparison with electromagnetic and electrostatic
6 (0.55) | 0.17$\mu W$ (1) | 35 | 2-4 (1.5) | 1, 3, 4, 5 (0.57) | Sodano et al. [62] | B | Insufficient output power
3 (0.27) | 60$\mu W$(1) | 23 | 2, 4 (1.0) | 1, 7 (0.29) | Saadon and Sidek [63] | C | Inadequate output power
\- (0.0) | 2.46$mW$ (1) | 56 | 3, 4 (1.0) | 1, 6 (0.29) | Harb [64] | C | From thermal sources, RF sources, CMOS devices, power management sources
7 (0.64) | \- (0) | 50 | 2, 3 (1.0) | 6 (0.14) | Zhu et al. [65] | D | Focused on frequency tuning
5 (0.46) | \- (0) | 84 | 1, 2 (1.0) | 2, 3 (0.29) | Harne and Wang [66] | D | Focused on bistable systems, stochastic vibrations
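The parenthesized values in the table rows suggest how the grading scheme described above normalizes each component. The following sketch is a hypothetical reconstruction, not a formula stated in the text: conclusions and sub-categories appear to be counted as fractions of the legend totals, merits scaled to a maximum of 2, and the minimum required output power treated as a 0/1 flag.

```python
# Hypothetical reconstruction of the paper-grading components in Tables 6-9.
# Assumption: normalization inferred from the parenthesized values in Table 7,
# not given explicitly in the text.

def grade_components(n_cons, total_cons, n_merits, total_merits,
                     n_subcats, total_subcats, output_stated):
    """Return the four normalized components of the overall score."""
    cons = n_cons / total_cons              # fraction of listed conclusions, max 1.0
    merit = 2.0 * n_merits / total_merits   # merits scaled to a maximum of 2.0
    subcat = n_subcats / total_subcats      # fraction of listed sub-categories, max 1.0
    output = 1.0 if output_stated else 0.0  # minimum required output power stated?
    return cons, merit, subcat, output

# Kim et al. [60] in Table 7: 3 of the 11 conclusions, all 4 merits,
# 4 of the 7 sub-categories, and the output range is stated.
cons, merit, subcat, output = grade_components(3, 11, 4, 4, 4, 7, True)
print(round(cons, 2), round(merit, 1), round(subcat, 2), output)  # 0.27 2.0 0.57 1.0
```

These four values reproduce the parenthesized entries of the Kim et al. [60] row; how they are combined into the final letter grade is not specified in the text.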
### 4.2 Biological sources
Biomechanical energy harvesting provides an important alternative to
electrical energy for portable electronic devices. Hwang et al. [67] addressed
the developments of flexible piezoelectric energy-harvesting devices by using
high-quality perovskite thin film and innovative flexible fabrication
processes. In addition, the energy harvesting devices with thick and rigid
substrates are unsuitable for responding to the movements of internal organs
and muscles. They commented that the electric power harvested from the bending
motion of a flexible thin film is sufficient to stimulate heart muscles. Also,
easy bendability, higher conversion efficiency, enhanced sensing capability at
the nanoscale, self-energy generation, and real-time diagnosis/therapy
capabilities are among the advantages of such systems. Ali et al. [68] discussed
the possibilities of utilizing the piezo-based energy conversion from the
source of muscle relaxation and contraction, the body movement, the blood
circulation, the lung and cardiac motion in applications such as pacemakers,
blood pressure sensors, cardiac sensors, pulse sensors, deep brain
simulations, biomimetic artificial hair cells, active pressure sensors, and
active strain sensors. The piezoelectric materials containing nanowires,
nanorods, nanotubes, nanoparticles, thin films, the lead-based ceramics, the
lead-free ceramics, the polymer-based materials, the textured polycrystalline
materials, and the biological piezo-materials have been evaluated. They
proposed several challenging problems such as the flexibility to fit into the
shape of an organ, the proper management of power, selection of a media for
the electrical connection, enhancing the biological safety, designing the
interface between the body tissue and the implanted piezo-material, efficient
encapsulation, further miniaturization, and conducting related experiments on
small/large animal and human cases.
Surmenev et al. [69] described novel techniques in fabrication of hybrid
piezoelectric polymer-based materials for biomedical energy harvesting
applications such as detection of motion rate of humans, degradation of
organic pollutants, and sterilization of bacteria. They described the
different methods that can be employed for the improvement of the
piezoelectric response of polymeric materials and scaffolds. They also
reviewed biomedical devices and sensors based on hybrid piezo-composites.
Similar to most other reviews, increasing the performance is one of the
proposed future works. Others are the alignment of nanofiller particles inside the
piezopolymer matrix, developing common standards for consistently quantifying
and evaluating the performance of various types of piezoelectric materials,
and investigation of the structural parameters.
The internal charging of implantable medical devices (IMD) is another
important biological application of piezoelectric energy harvesting. Extending
the lifespan of IMDs and minimizing their size have become the main challenges
for their development. For such devices, energy from body movement, muscle
contraction/relaxation, cardiac/lung motions, and the blood circulation is
used for powering medical devices. Zheng et al. [70] presented an overall
review of the piezoelectric energy devices in comparison to the triboelectric
harvesters with the source of body movement, muscle contraction/relaxation,
cardiac/lung motions, and the blood circulation. They proposed that future
opportunities are fabrication of intelligent, flexible, stretchable, and fully
biodegradable self-powered medical systems for monitoring biological signals,
in vivo and in vitro treatment of various diseases, optimization of the output
performance, obtaining higher sensitivity, elasticity, durability and
biocompatibility, biodegradable transient electronics, intelligent control of
dynamic properties in vivo, improving the operating lifetimes, and the
absorption efficiency. Mhetre et al. [71] gave a brief review of micro energy
harvesting techniques and methods from the limb movement for drug delivery
purposes, dental applications, and the body heat recovery using the
piezoelectric transducers. They just announced that the main challenge is to
enhance the energy output using proper electronic circuit designs. Much more
research is required to harvest energy from other biological parameters such
as the body temperature and respiration. An average amount of energy used by
the body is $1.07\times 10^{7}J$ per day. This amount of energy is equivalent
to approximately 800AA (2500mAh) batteries with the total weight of about 20
kg. This considerable amounts of human energy opens the road of development of
energy harvesting technologies for powering electronic devices [72]. Riemer
and Shapiro [72] investigated the amount of electricity that can be generated
from motion of various parts of the body such as heel strike, ankle, knee,
hip, shoulder, elbow, arm, leg, the center of mass vertical motion, and the
body heat emission, using the piezo-harvesters and electrical induction
generators. They claimed that such technologies are appropriate for
third-world countries, which is to some extent doubtful given their low
performance and high fabrication cost.
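The battery-equivalence figures quoted above from Ref. [72] can be checked with a quick back-of-the-envelope calculation. The cell voltage and per-cell mass below are assumptions (typical alkaline AA values), not numbers given in the text.

```python
# Sanity check of the AA-battery equivalence of daily human energy use (Ref. [72]).
# Assumptions: 2500 mAh AA cell at a nominal 1.5 V, roughly 25 g per cell.

daily_energy_J = 1.07e7                    # energy used by the body per day
cell_energy_J = 2.5 * 1.5 * 3600           # 2.5 Ah x 1.5 V x 3600 s/h = 13.5 kJ
n_cells = daily_energy_J / cell_energy_J   # ~793, i.e. "approximately 800" cells
mass_kg = n_cells * 0.025                  # ~19.8 kg, i.e. "about 20 kg"
print(round(n_cells), round(mass_kg, 1))
```

Both results agree with the figures quoted in the text.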
In addition to biocompatibility problems, the main challenges in development
of these types of energy harvesters are constructing a device that can harvest
as much energy as possible with minimal interference with the natural function
of the body. Also, the device should not increase the amount of energy
required by a person to perform his/her activities. Especially for IMDs, the
lifetime and efficient power output of the energy harvesters are of utmost
importance. Figure 9 illustrates the magnitude of harvestable energy sources from
the human body organs. Similar values can be predicted more or less from
organs of animals in related applications.
Figure 9: Available sources of energy from the human body organs. Data items
1-4 are from Ref. [10] and items 4-13 from Ref. [72]. The results are
illustrated on Reza Abbasi's "Prince Muhammad-Beik" drawing (1620, public
domain).
Xin et al. [73] reviewed shoes-equipped piezoelectric energy harvesters. They
described advantages and limitations of the current and newly developing
piezoelectric materials, including the flat plate type, the arch type, the
cantilever type, the nanocomposite-based, the photosensitive-based, and the
hybrid piezoelectric-semiconductors technologies. They announced that
enhancing the coupling coefficient of the piezoelectric materials and
optimizing the structure of the energy harvester and the energy storing
circuit require further investigation.
The reviewed articles about the biological applications have focused on
highlighting new materials and structures of biological energy harvesters and
their power output. The bio-compatibility, the interference of the device with
the biological organ, reliability of the device along with its lifetime and
economic issues are open topics in the field. Table 8 summarizes different
highlights and descriptions of the review articles related to the biological
applications. The grade of each paper has been computed based on the number of
merits, the number of subcategories, the number of concluding remarks, and
clear emphasis on the value of the minimum required output power.
Table 8: Overall evaluation of review papers written on piezoelectric energy
harvesting from biological applications. "Cons." stands for conclusions.
Conclusions: 1: Efficiency/performance improvement, 2: safety issues, 3:
costs, 4: hybrid harvesters, 5: non-linear models, 6: battery replacement, 7:
miniaturization, 8: steady operation, 9: efficient (flexible, stretchable,
bio-compatible) materials, 10: self-powered, 11: being wearable, 12: control
systems.
Merits: 1: electromechanical coupling factor, 2: realistic resonance, 3:
energy flow, 4: range of output.
Sub-categories: 1: organ motion, 2: heel strike, 3: ankle, 4: Knee, 5: hip, 6:
center of mass, 7: arms, 8: muscles, 9: cardiac/lung motion, 10: blood
circulation, 11: heat emission, 12: drug delivery, 13: dental cases, 14: thin
films, 15: artificial hair cell, 16: biosensors.
# Cons. | Minimum required output | # Refs. | Merits | Sub-categories | Ref. | Grade | Highlights
---|---|---|---|---|---|---|---
5 (0.42) | 250 V, 8.7 $\mu$A (1) | 71 | 1, 3, 4 (1.5) | 1, 9, 14, 15 (0.25) | Hwang et al. [67] | B | Focused on thin films
9 (0.75) | 11V, 283$\mu A$ (1) | 240 | 1, 4 (1.0) | 1, 8, 9, 10, 16 (0.31) | Ali et al. [68] | B | -
8 (0.67) | 1$\mu F$, 20$V$, 50s (1) | 235 | 1, 4 (1.0) | 1, 16 (0.13) | Surmenev et al. [69] | C | Lead-free polymer-based, size-dependent effects, insufficient output power of piezoelectric polymers and their copolymers
7 (0.58) | 11$mWcm^{-3}$ (1) | 107 | 1, 4 (1.0) | 8, 9, 10 (0.19) | Zheng et al. [70] | C | Comparison with triboelectric
1 (0.08) | mW(1) | 29 | 4 (0.5) | 12, 13 (0.13) | Mhetre et al. [71] | C | -
4 (0.33) | 2W (1) | 38 | 4 (0.5) | 1, 2, 3, 4, 5, 6, 7, 11 (0.5) | Riemer and Shapiro [72] | C | Comparison with electrical induction generators and electroactive polymers
4 (0.33) | -(0) | 53 | 1 (0.5) | 1 (0.06) | Xin et al. [73] | E | Shoes-equipped harvesters, geometry classification
### 4.3 Fluids
Wang et al. [74] categorized the fluid-induced vibrations for the purpose of
energy harvesting into four categories based on different vibration
mechanisms: the vortex induced vibration, galloping, fluttering, and
buffeting. They discussed the vortex-induced vibrations and buffeting (as
forced vibration cases), galloping and flutter (as limit-cycle vibration
items) using electromagnetic, piezoelectric, electrostatic, dielectric, and
triboelectric methods along with the corresponding numerical and experimental
endeavors. They presented a fruitful summary of the current research status on
flow-induced vibration hydro/aero energy harvesters. It is concluded that the
flow pattern around bluff bodies, the size limitations, estimation of costs of
equipment, the maintenance costs, the lifespan, protection of equipment in the
case of extreme weather, possible environmental impacts, non-linear
modeling, intelligent regulating elements such as artificial neural
networks, implementation of hybrid multi-purpose energy harvesters, and
development of new materials need to be further studied. Figure 10 presents
four classes of energy harvesting: vortex-induced vibrations, buffeting,
galloping, and fluttering, from vibration mechanisms corresponding to fluid
flows [74].
Figure 10: Different classes of energy harvesting categories from flow-induced
vibrations [74].
Truitt and Mahmoodi [75] reviewed wind-based energy harvesting from
flow-induced vibrations of bluff bodies and aeroelastic instabilities
(fluttering and galloping). They presented an overall study of energy
generation density and the peak power outputs versus the bandwidth. After a
brief review of dynamics of piezoelectric energy harvesting, theories and
principles, energy densities and output powers, they concluded that the
balance of efficiency-cost-manufacturability is the future horizon of the
topic. They suggested the use of PVDFs in fluid excitation applications due to
their increased flexibility over PZTs. They concluded that the fluttering- and
galloping-based methods generate a higher output power, but with a narrower
frequency bandwidth in comparison to the vortex induced methods. Also, the
final vision for energy harvesting may be active energy harvesting in which
the system dynamics can actively change in real-time to meet changing
environmental dynamics. Viet et al. [76] compared three energy harvesting
methods, including electrostatic, electromagnetic, and piezoelectric
technologies to indicate the advantages of piezoelectric harvesting in power
generation, transmission, structural installation, and economic costs.
Then, they reviewed different design methodologies of harvesting energy from
ocean waves. Effects of longitudinal, bending, and shear couplings have been
discussed. It is concluded that due to higher energy generation density,
higher voltage generation capability, simpler configuration, and more economic
benefits, the piezoelectric technology is superior to the other methods. Elahi
et al. [77] studied the fluid-structure interaction-based, the human-based,
and the vibration-based energy harvesting mechanisms by qualitatively and
quantitatively analyzing the existing piezoelectric mechanisms. They reviewed
the vortex-induced vibration, fluttering, galloping, and the human-related
structures. They commented that a significant amount of research has been
conducted on aeroelastic energy harvesters, but aerodynamic models can be
improved by taking into account steady, quasi-steady, and unsteady
aerodynamics. McCarthy et al. [78] reviewed the research done on piezoelectric
energy harvesting based on fluttering. They introduced the mathematical terms
needed to define the performance of the fluttering harvester. They discussed
effects of the Strouhal number as a function of the Reynolds number, the wind
characteristics, and formation of the atmospheric boundary layer (ABL). They
declared that ultra-low power densities, the long return period of
investments, and the quantification and alleviation of fatigue damage are the
main challenges for fluttering energy harvesting. In their opinion,
determining the fatigue life and proper performance metrics for a
piezoelectric flutter harvester, as well as weather and precipitation effects,
are active research fields.
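The Strouhal-number dependence mentioned above sets the forcing frequency seen by such a harvester: the vortex shedding frequency behind a bluff body follows $f = St \cdot U / D$. The numbers below are illustrative assumptions, not values from the text: $St \approx 0.2$, typical of circular cylinders over a wide range of Reynolds numbers, a wind speed of 5 m/s, and a 20 mm characteristic dimension.

```python
# Illustrative use of the Strouhal relation f = St * U / D for the vortex
# shedding frequency that forces a flutter/VIV harvester.
# Assumptions (not from the text): St ~ 0.2 (circular cylinder, broad Re range).

def shedding_frequency_hz(strouhal, wind_speed_m_s, char_length_m):
    """Vortex shedding frequency from the Strouhal relation."""
    return strouhal * wind_speed_m_s / char_length_m

f = shedding_frequency_hz(0.2, 5.0, 0.02)
print(f)  # 50.0 Hz
```

For efficient harvesting, the device's resonance should sit near this shedding frequency, which is why the wind-speed dependence of $St$ (via the Reynolds number) matters for the cut-in behavior discussed above.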
Hamlehdar et al. [79] presented a review of energy harvesting from fluid
flows. Despite the general topic of the paper, the piezo-energy harvesting
from blood as a liquid has been ignored. They have performed a literature
review on energy production from vortex induced vibration, the Karman vortex
street, the flutter induced motion, galloping, and the waves with water and
air as working fluids. Also, there is a short discussion on modeling
challenges. The results of the review conducted by Wong et al. [80] imply
that piezoelectric energy harvesting from raindrops has advantages
such as a simple structure, easier fabrication, a reduced number of components,
and direct conversion of vibrational energy to electrical charge. They stated
that the main challenge in this field is to design and optimize the raindrop
harvester for outdoor uses, being resistant against sunlight, wind, the impact
force of larger drops, being waterproof, showing appropriate sensitivity to
drops, supplying constant-rate energy over long periods of time, and
optimizing the power efficiency. Chua et al. [81] reviewed different types of
the raindrop kinetic energy piezoelectric harvester, including the bridge-
structure, the cantilever structure with the impact point near the free end,
the cantilever structure with six impact points at various surface locations,
the cantilever structure with impact point at the center, the PVDF membrane or
the PZT edge-anchored plate, and the collecting diaphragm cantilevers. Also,
they presented a brief summary of characteristics of hybrid harvesters. It is
stated that the best parameter to compare different harvesters is the
efficiency rather than the output peak power. Then based on this criterion, it
is found that the cantilever-type and the bridge-type energy harvesters made
of PZT are the best choices. This is to some extent in contrast to the
recommendations of Wong et al. [80].
Table 9 presents the details and highlights of review papers on fluid-based
piezo-energy harvesting. The grade of each paper has been computed based on
the number of merits, the number of subcategories, the number of concluding
remarks, and clear emphasis on the value of the minimum required output power.
Table 9: Overall evaluation of review papers written on piezoelectric energy
harvesting from fluids. The numbers in parentheses denote the number of
non-general future lines. "Cons." stands for conclusions.
Conclusions: 1: Efficiency/performance improvement, 2: frequency tuning, 3:
safety issues, 4: costs, 5: hybrid harvesters, 6: non-linear models, 7:
battery replacement, 8: miniaturization, 9: steady operation, 10: more
efficient materials.
Merits: 1: electromechanical coupling factor, 2: realistic resonance, 3:
energy flow, 4: range of output
Sub-categories: 1: water waves, 2: galloping, 3: fluttering, 4: buffeting, 5:
modelling, 6: wind’s vortex street, 7: instabilities, 8: raindrop, 9:
mechanical design.
# Cons. | Minimum required output | # Refs. | Merits | Sub-categories | Ref. | Grade | Highlights
---|---|---|---|---|---|---|---
9 (0.9) | $>$0.0289mW (1) | 125 | 1, 2, 3, 4 (2.0) | 1, 2, 3, 4 (0.44) | Wang et al. [74] | A | Internet of things, machine learning tools
4 (0.4) | 115mW(1) | 62 | 1, 2, 3, 4 (2.0) | 2, 3, 6, 7 (0.44) | Truitt and Mahmoodi [75] | B | Active control theory
4 (0.4) | 116$\mu W/cm^{3}$ (1) | 96 | 1, 2, 4 (1.5) | 1, 9 (0.22) | Viet et al. [76] | B | -
5 (0.5) | nW to mW (1) | 256 | 1, 4 (1.0) | 2, 3, 6 (0.33) | Elahi et al. [77] | C | Human-based sources
5 (0.5) | 440$\mu W/cm^{3}$ (1) | 96 | 2, 4 (1.0) | 3, 6, 9 (0.33) | McCarthy et al. [78] | C | Noise level, Atmospheric boundary layer, Fatigue life
5 (0.5) | $>$1$\mu$W(1) | 199 | 4 (0.5) | 1, 2, 5, 6, 7 (0.56) | Hamlehdar et al. [79] | C | Biomimetic design
4 (0.4) | -(0) | 87 | 1, 3, 4 (1.5) | 8 (0.11) | Wong et al. [80] | C | Size effects
4 (0.4) | $\mu W$(1) | 73 | 4 (0.5) | 8 (0.11) | Chua et al. [81] | C | Circuit design, Hydrophilic surface
### 4.4 Ambient waste energy sources
Guo and Lu [82] discussed recent advances in application of thermoelectric and
piezoelectric energy harvesting technologies from pavements. They found that
a pipe system cooperating with a thermoelectric generator is superior
in terms of cost effectiveness and electricity output to piezoelectric
transducers (fabricated with PZT). Based on their recommendations, the impact
of the mentioned energy harvesting facilities to pavement performance, life
cycle assessments, optimization with respect to traffic conditions and solar
radiation, and the change in vehicle fuel consumption due to additional
vehicle vibration or resistance should be evaluated in future works. Duarte
and Ferreira [83] presented a comparative study of photovoltaic,
thermoelectric, electromagnetic, hydraulic, pneumatic, electromechanical, and
piezoelectric harvesting technologies. Evaluation parameters are the
conversion efficiency, the maximum generated power, the installation method,
and their TRL. They declared that the essential economic data of products are
not yet available. Wang et al. [84] illustrated applications of the
photovoltaic cells, solar collectors, geothermal, thermoelectric,
electromagnetic, and piezoelectric energy extraction systems from bridges and
roads in terms of energy output, benefit-cost ratio, and the technology
readiness level. Based on their conclusions, the grade of support of the
piezoelectric harvesters by governments is low to medium, while the solar and
geothermal systems are strongly being supported. Pillai and Deenadayalan [85]
presented a review of acoustic energy harvesting methods and of
piezoelectricity as a promising technology in this category due to its being
sensitive and efficient at high-frequency excitations. They declared that
optimization of the resonator and the coupling of the thermo-acoustic engine
to the acoustic-electricity conversion transducer are open research fields.
Khan and Izhar
[86] reviewed the recent developments in the field of electromagnetic- and
piezoelectric-based acoustic energy harvesting. They reported sound pressure
levels of various ambient acoustic energy sources. A set of useful data about
the sound pressure level (dB) and the frequency of various acoustic energy
sources have been reported. They declared that researchers were focusing on
enhancing the performance of the piezoelectric membrane through novel
fabrications and optimized geometrical configurations. Duarte and Ferreira
[87] made a comparative study on road pavement energy harvesting technologies.
They compared existing technologies based on the installed power (per area or
volume), the conversion efficiency, and the power density. Also, they classified
the harvesting technologies based on their TRL (technology readiness levels)
values. It is demonstrated that the piezoelectric technology is at high TRL
grades. However, it delivers an insufficient energy production rate with poor
economic characteristics.
Also, some of the previously discussed papers have devoted a part of their
review to piezo-based energy harvesting from waste energies. The performance of the
electromagnetic- and piezoelectric-based vibration energy harvesters for
energy production from bridges has been evaluated by Khan and Ahmad [23]. They
have expressed that the majority of current harvesters are based on the
electromagnetic effect, although piezo-materials are commercially available
and easy to develop. The resonant frequency is a critical parameter in such
narrow-band, low-frequency applications, which gives the electromagnetic
systems an advantage. Maghsoudi Nia et al. [24] presented different
technologies of converting the kinetic energy of the human body during walking
to electricity by locating a harvesting system on the body or inserting a
harvester in the floor. In contrast to the results of Guo and Lu [82], they
recommended the piezoelectric harvester as the better choice for such
applications, owing to its simplicity and flexibility, despite its lower power
output. Yildirim et al. [31] reviewed amplification techniques, resonance
tuning methods, and non-linear oscillations in applications involving the
ambient vibration harvesting, based on piezoelectric, electrostatic, and
electromagnetic conversion methods. Al-Yafeai et al. [38] reviewed
methodologies to convert the dissipated energy in the suspension dampers of a
car to electricity, along with discussing the mathematical car models and
respective experimental setups. The disadvantages of the piezo-generator in
comparison with other methods are poor coupling, high output impedance, charge
leakage, and low output current; its advantages are a simple structure, no
need for external voltage sources or mechanical constraints, compatibility
with MEMS-based devices, high output power, and a wide frequency range. They
also presented a review of design considerations for energy harvesting from
the car suspension system, including
different piezo-materials, various mathematical modeling, the power
dissipation, the number of degree-of-freedoms, the road input, the location of
the piezo-system, and the electronic circuit. Dagdeviren et al. [39]
highlighted essential mechanical-to-electrical conversion processes and the
key design considerations of flexible and stretchable piezoelectric energy
harvesters appropriate for the soft tissues of the human body, smart robots,
and metrology tools. They declared that the development outlook for such
devices lies in their designs and fabrication techniques.
Table 10 presents details and highlights of the review papers on ambient and
waste energy piezo-harvesting methods. The grade of each paper has been
computed based on the number of merits, the number of subcategories, the
number of concluding remarks, and a clear emphasis on the value of the minimum
required output power.
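The grading can be read as a weighted sum of the four criteria, with the per-criterion weights shown in parentheses in the rows of Table 10. As a minimal, hypothetical sketch: the letter-grade thresholds below are assumptions chosen to reproduce the grades listed in the table, not values stated in the text.

```python
# Hypothetical sketch of the Table 10 grading scheme. The four inputs are the
# weighted criterion scores shown in parentheses in each table row; the
# letter-grade cutoffs are ASSUMED, chosen only to match the listed C/D/E grades.

def grade(cons_score, output_score, merits_score, subcat_score):
    """Sum the four weighted criterion scores and map the total to a letter grade."""
    total = cons_score + output_score + merits_score + subcat_score
    if total >= 2.5:   # assumed cutoff
        return "C"
    if total >= 1.0:   # assumed cutoff
        return "D"
    return "E"

# Example row, Wang et al. [84]: 0.4 + 1 + 1.0 + 0.4 = 2.8
print(grade(0.4, 1, 1.0, 0.4))  # C
```

With these assumed cutoffs the function reproduces every grade in Table 10, e.g. Pillai and Deenadayalan [85] (0.3 + 0 + 1.0 + 0.4 = 1.7) maps to D.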
Table 10: Overall evaluation of review papers written on piezoelectric energy
harvesting from waste energies. "Cons." stands for conclusions.
Conclusions: 1: Efficiency/performance improvement and optimization, 2: safety
issues, 3: costs, 4: hybrid harvesters, 5: non-linear models, 6: battery
replacement, 7: miniaturization, 8: steady operation, 9: efficient materials,
10: control systems.
Merits: 1: electromechanical coupling factor, 2: realistic resonance, 3:
energy flow, 4: range of output
Sub-categories: 1: acoustic energy, 2: modelling, 3: road pavement, 4:
railway, 5: bridge.
# Cons. | Minimum required output | # Refs. | Merits | Sub-categories | Ref. | Grade | Highlights
---|---|---|---|---|---|---|---
4 (0.4) | 241 Wh/y (1) | 120 | 3, 4 (1.0) | 3, 5 (0.4) | Wang et al. [84] | C | Comparison with photovoltaic cell, solar collector, geothermal, thermoelectric, electromagnetic devices; fatigue failure and life-cycle
4 (0.4) | 100 mW (1) | 65 | 3, 4 (1.0) | 2, 3 (0.4) | Guo and Lu [82] | C | Comparison with thermoelectrics
4 (0.4) | 10-100 W (1) | 34 | 3, 4 (1.0) | 4 (0.2) | Duarte and Ferreira [83] | C | Comparison with electromagnetic devices, TRL level presented
3 (0.3) | -(0) | 80 | 2, 4 (1.0) | 1, 2 (0.4) | Pillai and Deenadayalan [85] | D | Comparison with thermo-acoustics
2 (0.2) | -(0) | 54 | 2, 4 (1.0) | 1 (0.2) | Khan and Izhar [86] | D | Comparison with electromagnetics
2 (0.2) | -(0) | 97 | 4 (0.5) | 3 (0.2) | Duarte and Ferreira [87] | E | Comparison with solar, thermoelectric, electromagnetic devices
## 5 Challenges and the roadmap for future research
Table 11 illustrates the number of published review papers on each field, the
year of the first and the last published review paper, and the research fund
sources. It is obvious that energy harvesting from ambient energies, MEMS/NEMS
and fluid-based harvesting, and material considerations, respectively, have
the highest rates of publication of review papers. The numbers in brackets
indicate the number of funded review papers. Although it is predictable that
some supporters prefer to remain anonymous, about 46% of the papers have been
supported by a non-university organization. The last column of the table lists
the organizations and respective countries that have provided full or partial
financial support to the review papers on piezo-materials.
It is expected that the forthcoming review papers focus on specialized topics.
However, they may still contain some degree of generality. Due to the
multidisciplinary nature of the field, it is vital to publish comprehensive
reviews on detailed aspects of piezoelectric harvesters. Publication of review
papers on general topics is no longer very welcome. The rate of publication of
review papers on biological topics is lower than expected. Given the rapid
progress of piezoelectricity in biomedical engineering, an increase in the
number of reviews in related fields is inevitable. We suggest that researchers
present state-of-the-art articles on specific topics
including progress in piezoelectric materials, new applications of
piezoelectric energy harvesters, and new developments in MEMS and NEMS
piezoelectric harvesters.
The results of comparative research on energy harvesters for the railway
demonstrated that, even in macroscale energy harvesting, piezoelectric energy
harvesters are not very successful compared with other harvesting
technologies. This situation may be worse for micro- and nanoscale harvesters.
We predict that single (non-hybrid) piezoelectric energy harvesters will be
the right choice only in some specific applications for which other harvesting
systems have inherent limitations. Thus, there is an essential need for fair
comparisons of all types of energy harvesters for specific applications. On
the other hand, the number of publications on piezoelectric energy harvesters
keeps growing. It should be noted that the real world selects the energy
harvesting systems with higher performance and lower cost.
Based on the data listed in Table 11, three types of research lines have been
detected:
1. Pioneering topics that are still under consideration: general reviews (2005-2019), the design key points (2005-2020), the material-related studies (2009-2019), the MEMS-based devices (2006-2019);
2. Pioneering topics without any recent publication of review papers: the modeling approaches (2008-2017), the vibration-based harvesters (2004-2015), sensors and actuators (2007-2016);
3. The newly developed topics: fluids (2013-2020), ambient waste energy (2014-2020), the biological applications (2011-2019).
Table 11: Statistics of review papers published on different topics related
to piezoelectric energy harvesting. The numbers in brackets demonstrate the
number of funded review papers in each field.
# | Topic | # reviews | Period | # reviews per year | Non-university research fund sources
---|---|---|---|---|---
1 | General | 8(4) | 2005-2019 | 0.53 | National Science Foundation (USA), National Natural Science Foundation (China), Spanish Ministry of Science and Technology and the Regional European Development Funds (European Union), NanoBioTouch European project/Telecom Italia/Scuola Superiore SantAnna (Italy).
2 | Design | 15(5) | 2005-2020 | 0.94 | Texas ARP (USA), U.S. Department of Energy Wind and Water Power Technologies Office (USA), Ministry of Higher Education (Malaysia), Natural Science and Engineering Research Council (Canada), National Natural Science Foundation (China)/EU Erasmus+ project/Bevilgning
3 | Material | 11(7) | 2009-2019 | 1.00 | M/s Bharat Electronics Limited (India), National Nature Science Foundation (China), Office of Basic Energy Sciences, Department of Energy (USA)/Center for Integrated Smart Sensors funded by the Korea Ministry of Science (Korea), National Natural Science Foundation (China)/Shanghai Municipal Education Commission and Shanghai Education Development Foundation (China), European Research Council/European Metrology Research Programme/ UK National Measurement System, National Natural Science Foundation (China), China scholarship Council/China Ministry of Education/Institute of sound and vibration
4 | Modeling | 5(3) | 2008-2017 | 0.5 | Air Force Office of Scientific Research (USA), Air Force Office of Scientific Research (USA), a NSFC project of China
5 | Vibration | 8(1) | 2004-2015 | 0.67 | Energy Efficiency & Resources of the Korea Institute of Energy Technology Evaluation/Creative Research Initiatives
6 | Biology | 6(5) | 2011-2019 | 0.67 | Russian Science Foundation/Alexander von Humboldt Foundation/European Commission, National Key R&D Project from Minister of Science and Technology (China), Basic Science Research Program (Korea)/Center for Integrated Smart Sensors as Global Frontier Project, R&D Center for Green Patrol Technologies through the R&D for Global Top Environmental Technologies program funded by the Korean Ministry of Environment, Paul Ivanier Center for Robotics and Manufacturing Research/Pearlstone Center for Aeronautics Research
7 | Sensors | 5(4) | 2007-2016 | 0.5 | Spanish Ministry of Education and Science, NSSEFF/fellowship/ NSF/ Ben Franklin Technology Partners/the Center for Dielectric Studies/ARO/DARPA/the Materials Research Institute/U.S. Army Research Laboratory, Converging Research Center Program by the Ministry of Education Science and Technology (Korea), Basic Science Research Program through the National Research Foundation of Korea
8 | MEMS/NEMS | 15(7) | 2006-2019 | 1.07 | National Science Foundation (China), the Basic Science Research Program, through the National Research Foundation of Korea, European Research Council, Ministry of Education (Malaysia), Office of Basic Energy Sciences Department of Energy (USA), International Research and Development Program of the National Research Foundation of Korea
9 | Fluids | 8(3) | 2013-2020 | 1.00 | Ministry of Higher Education (Malaysia), National Natural Science Foundation (China), Australian Research Council/FCSTPty Ltd
10 | Ambient | 11(4) | 2014-2020 | 1.57 | Center for Advanced Infrastructure and Transportation (USA), Portuguese Foundation of Science and Technology, European Regional Development Fund, National Natural Science Foundation of China
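The "# reviews per year" column of Table 11 appears to be the review count divided by the inclusive length of the publication period. A minimal sketch that reproduces the listed values, assuming inclusive year counting (this counting convention is inferred from the table, not stated in the text):

```python
# Sketch of the "# reviews per year" statistic in Table 11.
# Assumption: the period is counted inclusively (e.g. 2005-2019 spans 15 years).

def reviews_per_year(n_reviews, first_year, last_year):
    """Average publication rate over an inclusive year range, rounded to 2 dp."""
    return round(n_reviews / (last_year - first_year + 1), 2)

# Checked against rows of Table 11:
print(reviews_per_year(8, 2005, 2019))   # General:   0.53  (8/15)
print(reviews_per_year(15, 2005, 2020))  # Design:    0.94  (15/16)
print(reviews_per_year(15, 2006, 2019))  # MEMS/NEMS: 1.07  (15/14)
```

Dividing by the exclusive span (last minus first) would give 0.57 for the first row, so the inclusive convention is the one that matches the table.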
The missing topics and the concluding future research topics, which need
closer investigation to demonstrate their state of the art, are:
1. Development of hybrid multi-purpose energy generators to completely harness energy of any kind and with any characteristics, combining the piezo-, pyro-, tribo-, flexo-, thermo-, and photoelectric technologies.
2. Investigation of the mathematical models and the analytical and numerical solution techniques, especially in nanoscale geometries where classical continuum mechanics principles fail, or in stochastic and non-linear situations. Some modified constitutive relations may need to be developed in non-continuum regimes. Also, the second-law analysis of such systems from the thermodynamic viewpoint is a missing topic. Ab initio first-principles simulations with an atomistic nature are another challenging aspect of nanoscale piezo-harvesters. Extending open-source codes such as OpenFOAM and LAMMPS to include solvers involving the piezoelectric effect may be another future research topic.
3. Application of piezo-materials in energy saving or reducing the energy demand of a system, rather than generation of energy, requires a comprehensive review. An example of such energy reduction is decaying disturbances and delaying the transition to turbulence using piezo-actuators placed on the surface of bluff bodies.
4. Due to the multi-physics nature of the piezoelectric effect, it is highly recommended to prepare review papers on optimization methods or machine-learning-related topics.
5. Commercialization of piezo-based harvesters and enhancing the technology readiness level need serious attention. Perhaps the next decade will be the decade of extensive commercialization of piezo-harvesters.
6. Plenty of patents have been published in recent years; some review papers should be devoted to investigating the patents presented in the field.
7. Focused reviews are needed on vibration-based piezo-harvesters of the last four years, the development of piezotronics, and the design of complete self-powered autonomous systems.
8. The overall design of devices including all parts, integrating the whole device in thin films, accumulation in rechargeable batteries, and taking into account the energy consumption needed to store the harvested energy.
9. Optimization of device architecture and size-reducing configurations for portable applications and flexible, wearable, compact, embedded, implantable devices.
10. In situ prototype testing and design of harvesters coupled with the environment and realistic applications, to cope with sunlight in outdoor applications, naturally occurring stochastic vibrations, wind speed variation, dust, noise, the flexibility required to fit the shape of human organs, and waterproofness.
11. Quantification of figures of merit for piezo-material properties, such as energy transformation or conversion efficiency, and standardization of the performance of piezo-based devices.
12. Reducing the maintenance cost, enhancing the lifespan, improving the performance, analysis of government supports, the cost-benefit balance, and investigation of piezo-harvesting from an energy policy viewpoint.
13. Thermal design of piezo-systems, including temperature-dependent properties and high-temperature harvesting limitations.
14. Fabrication of new piezo-materials with non-linear behavior, larger displacements, lower frequencies, wider operation bandwidths, and frequency self-adaptation capabilities.
15. Use of meta-materials and non-toxic, biocompatible, printable, lead-free, and high-piezoelectric-coefficient materials and nanofibers.
16. Improving the design of the electrical circuitry and managing rectification and storage losses.
17. Modification of structural designs, including fracture-fatigue studies, to increase the reliability, stability, and durability of the device.
18. Design of efficient control techniques.
19. Extending the application of piezo-materials to novel fields such as the internet of things.
20. Paying close attention to the use of the unimorph design for a high energy-harvesting rate, obtaining realistic resonance data in order to reach compactness, investigating energy outputs much lower than 1 mW, and step-by-step reporting of the successive energy flow or efficiency from the input mechanical energy to the final electric energy in a rechargeable battery.
21. Focusing on applications involving the elimination, restriction, and replacement of toxic materials and environmental pollution.
22. Development of designs exhibiting the highest electromechanical coupling factor.
23. Considering mechanical impedance matching, electromechanical transduction, electrical impedance matching, and the priority of these factors.
24. Development of other applications as energy harvesting devices with low energy demand.
25. Designing a grid of nano-devices (thousands) or thick films (10 to 30 microns) to generate a minimum of 1 mW of power (the electric energy required to operate a typical energy harvesting circuit with a DC/DC converter).
26. General development directions may include remote signal transmission and energy saving in rechargeable batteries.
However, research on piezoelectric energy harvesting is not yet mature, and
many interdisciplinary active research fields are currently available. It
should be mentioned that the progress of small-scale devices with very low
power needs is tightly tied to a revolution in the design of efficient high-
output-power piezoelectric energy harvesters. It is recommended to rely on
fundamental principles in order to obtain unique designs in future research.
## Acknowledgment
This research was supported by the Iran National Science Foundation (Grant
number 98017606).
## References
* [1] Uchino, K., Micromechatronics, CRC Press, 2019.
* [2] Dutoit, N. E., Wardle, B. L., Kim, S. G. (2005). Design considerations for MEMS-scale piezoelectric mechanical vibration energy harvesters. Integrated ferroelectrics, 71(1), 121-160.
* [3] Uchino, K., Essentials of Piezoelectric Energy Harvesting, World Scientific Publishing, 2021.
* [4] Safaei, M., Sodano, H. A., Anton, S. R. (2019). A review of energy harvesting using piezoelectric materials: state-of-the-art a decade later (2008-2018). Smart Materials and Structures, 28(11), 113001.
* [5] Anton, S. R., Sodano, H. A. (2007). A review of power harvesting using piezoelectric materials (2003-2006). Smart Materials and Structures, 16(3), R1.
* [6] Taware, S. M., Deshmukh, S. P. (2013). A review of energy harvesting from piezoelectric materials. IOSR Journal of Mechanical and Civil Engineering (IOSR-JMCE), 43-50.
* [7] Khaligh, A., Zeng, P., Zheng, C. (2009). Kinetic energy harvesting using piezoelectric and electromagnetic technologies-state of the art. IEEE transactions on industrial electronics, 57(3), 850-860.
* [8] Sharma, P. K., Baredar, P. V. (2019). Analysis on piezoelectric energy harvesting small scale device: a review. Journal of King Saud University-Science, 31(4), 869-877.
* [9] Mateu, L., Moll, F. (2005, June). Review of energy harvesting techniques and applications for microelectronics. In VLSI Circuits and Systems II (Vol. 5837, pp. 359-373). International Society for Optics and Photonics.
* [10] Calio, R., Rongala, U. B., Camboni, D., Milazzo, M., Stefanini, C., De Petris, G., Oddo, C. M. (2014). Piezoelectric energy harvesting solutions. Sensors, 14(3), 4755-4790.
* [11] Batra, A. K., Alomari, A., Chilvery, A. K., Bandyopadhyay, A., Grover, K. (2016). Piezoelectric power harvesting devices: An overview. Advanced Science, Engineering and Medicine, 8(1), 1-12.
* [12] Sun, C. H., Shang, G. Q., Tao, Y. Y., Li, Z. R. (2012). A review on application of piezoelectric energy harvesting technology. In Advanced Materials Research (Vol. 516, pp. 1481-1484). Trans Tech Publications Ltd.
* [13] Sharma, S., Sharma, N. C., Upadhayay, A., Hore, D. (2018). An Innovative Strategy of Energy Generation using Piezoelectric Materials: A Review. ADBU Journal of Engineering Technology, 7.
* [14] Gosavi, S. K., Balpande, S. S. (2019). A Comprehensive Review of Micro and Nano Scale Piezoelectric Energy Harvesters. Sensor Letters, 17(3), 180-195.
* [15] Bedekar, V., Oliver, J., Zhang, S., Priya, S. (2009). Comparative study of energy harvesting from high temperature piezoelectric single crystals. Japanese Journal of Applied Physics, 48(9R), 091406.
* [16] Li, H., Tian, C., Deng, Z. D. (2014). Energy harvesting from low frequency applications using piezoelectric materials. Applied physics reviews, 1(4), 041301.
* [17] Narita, F., Fox, M. (2018). A review on piezoelectric, magnetostrictive, and magnetoelectric materials and device technologies for energy harvesting applications. Advanced Engineering Materials, 20(5), 1700743.
* [18] Zaarour, B., Zhu, L., Huang, C., Jin, X., Alghafari, H., Fang, J., Lin, T. (2019). A review on piezoelectric fibers and nanowires for energy harvesting. Journal of Industrial Textiles, 1528083719870197.
* [19] Yuan, X., Changgeng, S., Yan, G., Zhenghong, Z. (2016, September). Application review of dielectric electroactive polymers (DEAPs) and piezoelectric materials for vibration energy harvesting. In Journal of Physics: Conference Series (Vol. 744, No. 1, p. 012077). IOP Publishing.
* [20] Mishra, S., Unnikrishnan, L., Nayak, S. K., Mohanty, S. (2019). Advances in piezoelectric polymer composites for energy harvesting applications: A systematic review. Macromolecular Materials and Engineering, 304(1), 1800463.
* [21] Liu, H., Zhong, J., Lee, C., Lee, S. W., Lin, L. (2018). A comprehensive review on piezoelectric energy harvesting technology: Materials, mechanisms, and applications. Applied Physics Reviews, 5(4), 041306.
* [22] Bowen, C. R., Kim, H. A., Weaver, P. M., Dunn, S. (2014). Piezoelectric and ferroelectric materials and structures for energy harvesting applications. Energy & Environmental Science, 7(1), 25-44.
* [23] Khan, F. U., Ahmad, I. (2016). Review of energy harvesters utilizing bridge vibrations. Shock and Vibration, 2016.
* [24] Nia, E. M., Zawawi, N. A. W. A., Singh, B. S. M. (2017) A review of walking energy harvesting using piezoelectric materials. In IOP Conference Series: Materials Science and Engineering 291(1), 012026.
* [25] Lefeuvre, E., Sebald, G., Guyomar, D., Lallart, M., Richard, C. (2009). Materials, structures and power interfaces for efficient piezoelectric energy harvesting. Journal of electroceramics, 22(1-3), 171-179.
* [26] Mukherjee, A., Datta, U. (2010, December). Comparative study of piezoelectric materials properties for green energy harvesting from vibration. In 2010 Annual IEEE India Conference (INDICON) (pp. 1-4). IEEE.
* [27] Maamer, B., Boughamoura, A., El-Bab, A. M. F., Francis, L. A., Tounsi, F. (2019). A review on design improvements and techniques for mechanical energy harvesting using piezoelectric and electromagnetic schemes. Energy Conversion and Management, 199, 111973.
* [28] Uchino, K. (2018). Piezoelectric energy harvesting systems: Essentials to successful developments. Energy Technology, 6(5), 829-848.
* [29] Priya, S. (2007). Advances in energy harvesting using low profile piezoelectric transducers. Journal of Electroceramics, 19(1), 167-184.
* [30] Yang, Z., Zhou, S., Zu, J., Inman, D. (2018). High-performance piezoelectric energy harvesters and their applications. Joule, 2(4), 642-69
* [31] Yildirim, T., Ghayesh, M. H., Li, W., Alici, G. (2017). A review on performance enhancement techniques for ambient vibration energy harvesters. Renewable and Sustainable Energy Reviews, 71, 435-449.
* [32] Ibrahim, S. W., Ali, W. G. (2012). A review on frequency tuning methods for piezoelectric energy harvesting systems. Journal of Renewable and Sustainable Energy, 4(6), 062703.
* [33] Talib, N. H. H. A., Salleh, H., Youn, B. D., Resali, M. S. M. (2019). Comprehensive Review on Effective Strategies and Key Factors for High Performance Piezoelectric Energy Harvester at Low Frequency. International Journal of Automotive and Mechanical Engineering, 16(4), 7181-7210.
* [34] Brenes, A., Morel, A., Juillard, J., Lefeuvre, E., Badel, A. (2020). Maximum power point of piezoelectric energy harvesters: a review of optimality condition for electrical tuning. Smart Materials and Structures, 29(3), 033001.
* [35] Szarka, G. D., Stark, B. H., Burrow, S. G. (2011). Review of power conditioning for kinetic energy harvesting systems. IEEE Transactions on Power Electronics, 27(2), 803-815.
* [36] Dell’Anna, F. G., Dong, T., Li, P., Wen, Y., Yang, Z., Casu, M. R., … Berg, Y. (2018). State-of-the-art power management circuits for piezoelectric energy harvesters. IEEE Circuits and Systems Magazine, 18(3), 27-48.
* [37] Guyomar, D., Lallart, M. (2011). Recent progress in piezoelectric conversion and energy harvesting using nonlinear electronic interfaces and issues in small scale implementation. Micromachines, 2(2), 274-294.
* [38] Al-Yafeai, D., Darabseh, T., Mourad, A. H. I. (2020). A State-Of-The-Art Review of Car Suspension-Based Piezoelectric Energy Harvesting Systems. Energies, 13(9), 2336.
* [39] Dagdeviren, C., Joe, P., Tuzman, O. L., Park, K. I., Lee, K. J., Shi, Y., … Rogers, J. A. (2016). Recent progress in flexible and stretchable piezoelectric devices for mechanical energy harvesting, sensing and actuation. Extreme Mechanics Letters, 9, 269-281.
* [40] Selvan, K. V., Ali, M. S. M. (2016). Micro-scale energy harvesting devices: Review of methodological performances in the last decade. Renewable and Sustainable Energy Reviews, 54, 1035-1047.
* [41] Kim, S. G., Priya, S., Kanno, I. (2012). Piezoelectric MEMS for energy harvesting. MRS bulletin, 37(11), 1039-1050.
* [42] Toprak, A., Tigli, O. (2014). Piezoelectric energy harvesting: State-of-the-art and challenges. Applied Physics Reviews, 1(3), 031104.
* [43] Todaro, M. T., Guido, F., Mastronardi, V., Desmaele, D., Epifani, G., Algieri, L., De Vittorio, M. (2017). Piezoelectric MEMS vibrational energy harvesters: Advances and outlook. Microelectronic Engineering, 183, 23-36.
* [44] Briscoe, J., Dunn, S. (2015). Piezoelectric nanogenerators: a review of nanostructured piezoelectric energy harvesters. Nano Energy, 14, 15-29.
* [45] Beeby, S. P., Tudor, M. J., White, N. M. (2006). Energy harvesting vibration sources for microsystems applications. Measurement science and technology, 17(12), R175.
* [46] Salim, M., Aljibori, H. S. S., Salim, D., Khir, M. H. M., Kherbeet, A. S. (2015). A review of vibration-based MEMS hybrid energy harvesters. Journal of Mechanical Science and Technology, 29(11), 5021-5034
* [47] Kumar, B., Kim, S. W. (2012). Energy harvesting based on semiconducting piezoelectric ZnO nanostructures. Nano Energy, 1(3), 342-355.
* [48] Jing, Q., Kar-Narayan, S. (2018). Nanostructured polymer-based piezoelectric and triboelectric materials and devices for energy harvesting applications. Journal of Physics D: Applied Physics, 51(30), 303001.
* [49] Wang, X. (2012). Piezoelectric nanogenerators: Harvesting ambient mechanical energy at the nanometer scale. Nano Energy, 1(1), 13-24.
* [50] Wang, Z., Pan, X., He, Y., Hu, Y., Gu, H., Wang, Y. (2015). Piezoelectric nanowires in energy harvesting applications. Advances in Materials Science and Engineering, 2015.
* [51] Muralt, P., Polcawich, R. G., Trolier-McKinstry, S. (2009). Piezoelectric thin films for sensors, actuators, and energy harvesting. MRS bulletin, 34(9), 658-664.
* [52] Khan, A., Abas, Z., Kim, H. S., Oh, I. K. (2016). Piezoelectric thin films: an integrated review of transducers and energy harvesting. Smart Materials and Structures, 25(5), 053002.
* [53] Cook-Chennault, K. A., Thambi, N., Sastry, A. M. (2008). Powering MEMS portable devices: a review of non-regenerative and regenerative power supply systems with special emphasis on piezoelectric energy harvesting systems. Smart materials and structures, 17(4), 043001.
* [54] Kang, M. G., Jung, W. S., Kang, C. Y., Yoon, S. J. (2016, March). Recent progress on PZT based piezoelectric energy harvesting technologies. In Actuators (Vol. 5, No. 1, p. 5). Multidisciplinary Digital Publishing Institute.
* [55] Erturk, A., Inman, D. J. (2008). On mechanical modeling of cantilevered piezoelectric vibration energy harvesters. Journal of intelligent material systems and structures, 19(11), 1311-1325.
* [56] Erturk, A., Inman, D. J. (2008). Issues in mathematical modeling of piezoelectric energy harvesters. Smart Materials and Structures, 17(6), 065016.
* [57] Zhao, L., Tang, L., Yang, Y. (2013). Comparison of modeling methods and parametric study for a piezoelectric wind energy harvester. Smart materials and Structures, 22(12), 125003.
* [58] Wei, C., Jing, X. (2017). A comprehensive review on vibration energy harvesting: Modelling and realization. Renewable and Sustainable Energy Reviews, 74, 1-18.
* [59] Abdelkefi, A. (2016). Aeroelastic energy harvesting: A review. International Journal of Engineering Science, 100, 112-135.
* [60] Kim, H. S., Kim, J. H., Kim, J. (2011). A review of piezoelectric energy harvesting based on vibration. International journal of precision engineering and manufacturing, 12(6), 1129-1141.
* [61] Siddique, A. R. M., Mahmud, S., Van Heyst, B. (2015). A comprehensive review on vibration based micro power generators using electromagnetic and piezoelectric transducer mechanisms. Energy Conversion and Management, 106, 728-747.
* [62] Sodano, H. A., Inman, D. J., Park, G. (2004). A review of power harvesting from vibration using piezoelectric materials. Shock and Vibration Digest, 36(3), 197-206.
* [63] Saadon, S., Sidek, O. (2011). A review of vibration-based MEMS piezoelectric energy harvesters. Energy Conversion and Management, 52(1), 500-504.
* [64] Harb, A. (2011). Energy harvesting: State-of-the-art. Renewable Energy, 36(10), 2641-2654.
* [65] Zhu, D., Tudor, M. J., Beeby, S. P. (2009). Strategies for increasing the operating frequency range of vibration energy harvesters: a review. Measurement Science and Technology, 21(2), 022001.
* [66] Harne, R. L., Wang, K. W. (2013). A review of the recent research on vibration energy harvesting via bistable systems. Smart Materials and Structures, 22(2), 023001.
* [67] Hwang, G. T., Byun, M., Jeong, C. K., Lee, K. J. (2015). Flexible piezoelectric thin-film energy harvesters and nanosensors for biomedical applications. Advanced healthcare materials, 4(5), 646-658.
* [68] Ali, F., Raza, W., Li, X., Gul, H., Kim, K. H. (2019). Piezoelectric energy harvesters for biomedical applications. Nano Energy, 57, 879-902.
* [69] Surmenev, R. A., Orlova, T., Chernozem, R. V., Ivanova, A. A., Bartasyte, A., Mathur, S., Surmeneva, M. A. (2019). Hybrid lead-free polymer-based scaffolds with improved piezoelectric response for biomedical energy-harvesting applications: A review. Nano Energy, 62, 475-506.
* [70] Zheng, Q., Shi, B., Li, Z., Wang, Z. L. (2017). Recent progress on piezoelectric and triboelectric energy harvesters in biomedical systems. Advanced Science, 4(7), 1700029.
# $k$-Neighbor Based Curriculum Sampling for Sequence Prediction
First Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
Second Author
Affiliation / Address line 1
Affiliation / Address line 2
Affiliation / Address line 3
email@domain
###### Abstract
Multi-step ahead prediction in language models is challenging due to the
discrepancy between training and test time processes. At test time, a sequence
predictor is required to make predictions given past predictions as the input,
instead of the past targets that are provided during training. This
difference, known as exposure bias, can lead to the compounding of errors
along a generated sequence at test time. To improve generalization in neural
language models and address compounding errors, we propose Nearest-Neighbor
Replacement Sampling – a curriculum learning-based method that gradually
changes an initially deterministic teacher policy to a stochastic policy. A
token at a given time-step is replaced with a sampled nearest neighbor of the
past target with a truncated probability proportional to the cosine similarity
between the original word and its top $k$ most similar words. This allows the
learner to explore alternatives when the current policy provided by the
teacher is sub-optimal or difficult to learn from. The proposed method is
straightforward, online, and requires little additional memory. We
report our findings on two language modelling benchmarks and find that the
proposed method further improves performance when used in conjunction with
scheduled sampling.
## 1 Introduction
Language modeling is a foundational challenge in natural language processing
(NLP) and is also important for many related sequential tasks such as machine
translation, speech recognition, image captioning and question answering
[mikolov2010recurrent, sutskever2014sequence]. It involves predicting the next
token given past tokens in a sequence using a parametric model,
$f_{\theta}(\cdot)$, parameterized by $\theta$. This is distinctly different
from the standard supervised learning that assumes the input distribution to
be i.i.d., which leads to a non-convex objective even when the loss is convex
given the inputs.
In supervised learning, the standard modeling procedure (in the context of RL
we will refer to tokens $x$ as actions $a$, models $f$ as policies $\pi$, and
predictions $\hat{y}$ as $\pi(s^{\prime}|a,s)$) involves training a policy
$\hat{\pi}$ to perform actions given an expert policy $\pi^{*}$ at each time
step $t\in T$. This approach is also known as teacher forcing
[williams1989learning].
However, using the past targets $y_{t-n}$ and current input $x_{t}$ to predict
$y_{t+1}$ at training time does not emulate the process at test time where the
model is instead required to use its own past predictions $\hat{y}_{t-n}$ to
generate the sequence. Sequence predictors can suffer from this discrepancy
since they are not trained to use past predictions. This leads to a problem
where errors compound along a generated sequence at test time, in the worst
case leading to errors quadratic in $T$. Therefore, learning from high
dimensional and sparsely structured patterns at training time to generate
sequences from the same distribution at test time is challenging for neural
language models (NLMs). Beam search (BS) is often used to address this
challenge by greedily searching a subset of the most probable trajectories.
However, BS is not suited to the continuous state spaces in NLM, furthermore
it requires the use of dynamic programming over large discrete states in the
size of the vocabulary. Hence, BS as a method for mitigating compounding errors
at training time is too slow, not to mention that state transitions are
influenced by long-term dependencies (BS depth is typically small). In
such cases, learning from a teacher policy with full-supervision throughout
training can be sub-optimal.
We argue that in cases where the learner finds it difficult to learn from the
teacher policy alone, sampling alternative inputs that are similar to the
original input under a suitable curriculum can lead to improved out-of-sample
performance, and can be considered as an exploration strategy. Moreover,
interpolating this scheme with past predictions further improves performance.
We are motivated by the past findings that using outputs other than that
provided by an expert can be beneficial for generation tasks
[hinton2015distilling, norouzi2016reward].
In this paper, we propose a curriculum learning-based method, whereby the
teacher policy is augmented by sampling neighboring word vectors within a
chosen radius with probability assigned via the normalized cosine similarities
between each of the $k$ neighbors and a corresponding input $x_{t}$. By using
pretrained word embeddings as the basis for choosing the neighborhood for each
word, we only require to store an embedding matrix $\mat{E}\in\R^{|V|\times
d}$ where $|V|$ is the vocabulary size and the $d$ is the dimensionality of
the embedding space. In prediction-based word embeddings, typically $d\ll|V|$,
which results in a significantly smaller memory footprint when using $\mat{E}$
than using a transition probability matrix computed from a co-occurrence
matrix in $\N^{|V|\times|V|}$. During training we monotonically increase the
replacement probability using a number of simple non-parametric functions. We
show that performance can be further improved when Nearest Neighbor
Replacement Sampling (NNRS) is used in conjunction with scheduled sampling
[bengio2015scheduled], a standard technique that also addresses the
compounding error problem.
## 2 Related Work
The most relevant prior work to ours is scheduled sampling
[SS; bengio2015scheduled], which also uses an online sampling strategy to
reduce compounding errors. To alleviate this problem, SS alternates between
$\hat{y}_{t-1}$ and $y_{t-1}$ using a sampling schedule, whereby the
probability of using $\hat{y}_{t-1}$ instead of $y_{t-1}$ increases throughout
training, allowing the learner to generate multi-step ahead predictions by
improving the models’ robustness with respect to its own prediction errors.
Dataset Aggregation [DAgger; ross2011reduction] finds a stationary
deterministic policy by allowing the model to first make predictions at test
time and then queries the teacher as to what actions it could have taken given
the observed errors the model made on the validation data. DAgger attempts to
address compounding errors by initially using an expert policy to generate a
set of sequences. These samples are added to the dataset $\mathcal{D}$ and a
new policy is learned, which subsequently generates more samples appended to
$\mathcal{D}$. This process is repeated until convergence and the resultant
policy $\pi$ that best emulates the expert policy is selected. In the initial
stages of learning, a modified policy is considered
$\pi_{i}=\beta_{i}\pi^{*}+(1-\beta_{i})\hat{\pi}_{i}$, where the expert
$\pi^{*}$ is used for a portion of the time (referred to as a mixture expert
policy) so that the generated trajectories in the initial stages of training
by $\hat{\pi}_{i}$ at time $i$ do not diverge to irrelevant states.
Likewise, Mixed Incremental Cross-Entropy Reinforce
[MIXER; ranzato2015sequence] applies REINFORCE for text generation. MIXER also
uses a mixed expert policy (originally inspired by DAgger), whereby
incremental learning is used with REINFORCE and cross-entropy (CE). The main
difference between MIXER and SS is that the latter relies on past target
tokens in prediction, whereas the former uses REINFORCE to determine whether
the predictions lead to a sufficiently good return from a task-specific score. In
our proposed method, we instead dynamically interpolate between the model’s
past predictions, past targets and the additional intermediate step of using
past target neighbors and use individual schedules for the past prediction and
the past targets’ $k$-neighbors.
## 3 Methodology
### 3.1 Model Description
In this paper, we consider sequence modelling tasks where the training set
contains input-output pairs $(X,Y)$, where $X=x_{1},x_{2},\ldots,x_{T}$ is an
input sequence of length $T$ and $Y=y_{1},y_{2},\ldots,y_{T}$ is its
corresponding output sequence of the same length. We use the compact notation
$x_{1:T}$ to denote the sequence $X=x_{1},x_{2},\ldots,x_{T}$. In language
modelling, the task is to predict the next token in the sequence, $x_{t}$,
given $x_{1:t-1}$. For this reason, language modelling can be seen as a
special (unsupervised) instance of the structure prediction problem where
$Y=x_{2:T}$ and the input sequence is defined as $X=x_{1:T-1}$. Different
models have been proposed in the literature for sequence-to-sequence
prediction problem such as variants of recurrent neural networks [[,
RNNs;]]sundermeyer2012lstm and Transformers [vaswani2017attention]. We model
these methods collectively as first computing a representation $\vec{h}_{t}$
for the sequence $x_{1:t-1}$ using some (recurrent in the case of RNNs and
attention-weighted sum in the case of Transformers) function,
$f(x_{1:t-1};\vec{\theta})$, parameterised by $\vec{\theta}$. Next, the
prediction, $\hat{y}_{t}$, of the next target output, $y_{t}$, is modelled as
a classification problem where a probability distribution, $p(\cdot)$, over
the possible output labels is computed (e.g., using $\mathtt{softmax}$
function) and the label with the maximum probability is selected as
$\hat{y}_{t}$. Then, the parameters $\vec{\theta}$ of the generator can be
learned such that the log-likelihood of the output sequence $Y$ given by (1)
is maximized.
$\displaystyle\frac{1}{T}\sum_{t=1}^{T}\log p(y_{t}|y_{1:t-1},X;\vec{\theta})$
(1)
As a concrete example of this setting, let us consider NLM using an RNN. The
recurrent function in the neural network updates its hidden state vector at
each time step. Language modelling can be seen as a classification problem
where each word in the output vocabulary, $V$, is a candidate label, and we
must select the most probable output label (word) as the next prediction.
Given the hidden state vector $\vec{h}_{t-1}$, after observing the sequence
$x_{1:t-1}$, we can compute the probability of each word $w\in V$ using
$\sigma(\vec{h}_{t-1}\T\vec{\theta}_{w})$, where $\vec{\theta}_{w}$ is a
parameter vector (embedding) corresponding to $w$ and $\sigma$ is the
$\mathtt{softmax}$ function.
Finally, the log-likelihood of the given corpus is computed using (1) and
stochastic gradient descent (SGD) (or a variant) can be used to find the
optimal model parameters for the generator.
Although training can be conducted using the target outputs $Y$, note that
they are _not_ available during test time. Therefore, the model uses its
current prediction $\hat{y}_{t}$ with hidden state $\vec{h}_{t}$ to predict
$p(\hat{y}_{t+1}|\hat{y}_{t},h_{t};\vec{\theta})$. As already discussed, this
can lead to an accumulation of errors. The current prediction $\hat{y}_{t}$
can be obtained by either acting greedily and selecting the most probable word
or by sampling from the distribution $p(y|\vec{h}_{t};\vec{\theta})$,
calibrated using the $\mathtt{softmax}$ output.
### 3.2 Addressing Compounding Errors
Teacher forcing is when a sequence predictor is given full supervision during
training, which is the most popular way to train NLMs. However, this does not
reflect sequence generation at test time since targets are not provided and
the model has to rely on its own past predictions to generate the next word.
SS addresses this by alternating between $y_{t-1}$ and $\hat{y}_{t-1}$ at
training time to encourage the model to improve multi-step ahead prediction
used at test time. The trade-off is controlled with probability
$\epsilon_{i}\in[0,1]$ at the $i$-th epoch such that with probability
$(1-\epsilon_{i})$ the model chooses $\hat{y}_{t-1}$. Herein, we denote the
sampling rate as $\epsilon$ for the schedule corresponding to SS and that for
NNRS by $\gamma$.
The $\epsilon_{i}$ coefficients are incrementally decreased during training as
a curriculum learning strategy where the model uses more true targets at the
beginning of training and gradually changes to using its own predictions
during learning.
There are three ways in which $\epsilon_{i}$ is set: (1) a linear decay, (2)
an exponential decay, or (3) an inverse sigmoid decay. However, as noted in
previous work [huszar2015not], SS has the limitation that it can lead to
learning an incorrect conditional distribution. Additionally, it is assumed
that the teacher is optimal when using full supervision. Next, we describe our
proposed method that addresses these criticisms while also being sufficiently
modular to be used in conjunction with other sampling-based techniques for
mitigating exposure bias.
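For concreteness, the three decay options above can be sketched as follows. This is a minimal sketch in Python; the constants `k`, `c`, and `eps_min` are hypothetical tuning parameters, not values fixed by the paper.

```python
import math

# Sketch of the three SS decay schedules for the teacher-forcing
# probability epsilon_i at epoch i. Constants are illustrative only.

def linear_decay(i, k=1.0, c=0.01, eps_min=0.0):
    # epsilon_i decreases linearly with the epoch, floored at eps_min
    return max(eps_min, k - c * i)

def exponential_decay(i, k=0.95):
    # epsilon_i = k**i, for k < 1
    return k ** i

def inverse_sigmoid_decay(i, k=10.0):
    # epsilon_i = k / (k + exp(i / k))
    return k / (k + math.exp(i / k))
```

All three start near 1 (mostly true targets) and decay toward 0 (mostly model predictions) as training progresses.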
### 3.3 Nearest-Neighbor Replacement Sampling
#### Defining Neighbors via Embedding Similarity:
Suppose we have $d$-dimensional pretrained embeddings for words $w\in V$,
arranged as rows in an embedding matrix $\mat{E}\in\mathbb{R}^{|V|\times d}$.
We denote the embedding of word $w$ by $\vec{w}\in\mathbb{R}^{d}$. For each
$w\in V$, we measure its similarity to all other words $w^{\prime}\in
V-\\{w\\}$ using the cosine similarity, $\cos(w,w^{\prime})$, given by (3.3).
$\displaystyle\cos(\vec{w},\vec{w}^{\prime})=\frac{\vec{w}\cdot\vec{w}^{\prime}}{\|\vec{w}\|\,\|\vec{w}^{\prime}\|}$ (3.3)
We then define the neighborhood similarity matrix, $\mat{N}\in\R^{|V|\times
k}$, containing similarities of the top $k$ most similar words $w^{\prime}$ to
$w$ according to $\cos(\vec{w},\vec{w}^{\prime})$ for all $w\in V$. We denote
$\mat{N}_{w}$ as the row of neighbor embedding similarities corresponding to
word $w$. Once $\mat{N}$ is computed, we convert each row to a probability
distribution in order to sample the neighbors, which we denote as
$\tilde{\mat{N}}$ where for a given row corresponding to $w$,
$\sum_{i=1}^{k}\tilde{\mat{N}}_{w,i}=1$. Hence, a neighbor $w^{\prime}$ in the
$i$-th position of $\tilde{\mat{N}}_{w,i}$ is associated with a probability
$p(w^{\prime}|w;k,\tau)$ given by (2), where the sampling probabilities are defined
using the $\mathtt{softmax}$ function to normalize the cosine similarities
between pretrained word embeddings.
$\displaystyle
p(w^{\prime}|w;k,\tau)=\frac{\exp(\cos(\vec{w},\vec{w}^{\prime})/\tau)}{\sum_{\vec{u}\in\mat{N}_{w}}\exp(\cos(\vec{u},\vec{w})/\tau)}$
(2)
Here the temperature $\tau$ controls the “peakiness” of the distribution,
lower $\tau$ corresponding to much higher sampling probability for the closest
neighbor and far smaller probabilities for the $k$-th furthest neighbor.
Furthermore, a $w$ that occurs at $y_{t-1}$, can instead be represented as a
weighted average (i.e centroid) of its neighbor embeddings $\tilde{\vec{w}}$
as shown in (3) and subsequently passed as input at $x_{t}$.
$\displaystyle\tilde{\vec{w}}=\frac{1}{k}\sum_{i=1}^{k}p(w^{\prime}_{i}|w,k)\vec{w}^{\prime}_{i}$
(3)
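The centroid in (3) reduces to a probability-weighted mean of the neighbor vectors, scaled by $1/k$; a minimal sketch (the helper name is ours):

```python
import numpy as np

def neighbor_centroid(neighbor_embs, p):
    """Eq. (3) sketch: (1/k) * sum_i p(w'_i | w, k) * w'_i, where
    neighbor_embs is a (k, d) matrix of the k neighbor embeddings
    and p is their (k,) sampling distribution."""
    k = len(p)
    return (p[:, None] * neighbor_embs).sum(axis=0) / k

# Toy usage: two 3-dimensional neighbors with probabilities 0.75 / 0.25.
nbrs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
p = np.array([0.75, 0.25])
w_tilde = neighbor_centroid(nbrs, p)
```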
In our experiments, $k\approx\log_{2}(|V|)$ which is sample efficient and
speeds up sampling at run time when compared to using the entire vocabulary of
size $|V|$. Note that, in the case where $|V|$ is very large (e.g.,
$|V|>10^{4}$), efficient $k$-NN search can be performed using KD-Tree, Metric
Tree or Cover Tree algorithms [kibriya2007empirical], which have been applied
in a similar context [bollegala2017think]. In the best case, computation can
be reduced from $\mathcal{O}(n^{2})$ to $\mathcal{O}(n\log n)$ at the expense
of some accuracy. However, this was not required as $|V|\ll 10^{4}$ for the
datasets we report results on.
During training, samples from top $k$-neighbors are drawn
$\tilde{w}\sim\tilde{\mat{N}}_{w}$, while $\tilde{\mat{N}}_{w}$ can change at
the end of each epoch $i\in\Gamma$ based on the validation perplexity $\ell$
by updating $\tau$. This has the effect of shifting the probability mass for
the $k$-neighbors sampling probability distribution, proportional to the
increase or decrease in $\ell$ from epoch $i$ to $i+1$.
At the start of training the model is conservative, only sampling the nearest
neighbors. As the model begins to learn and minimize the loss on the
validation set, it gradually explores more distant $k$ neighbors. If replacing
neighbors that have smaller cosine similarity has a similar effect on the
validation loss to neighbors with higher cosine similarity, then $\tau$ is
increased and the model carries out more exploration. In the opposite case,
where validation loss decreases, $\tau$ becomes smaller and the model will
only choose the closest of the $k$-neighbors for replacement. Concretely,
$\ell_{i}$ is the validation set loss at epoch $i$, and $\ell_{*}$ is the best
validation performance up until epoch $i$. When $\ell_{i}\geq\ell_{*}$,
$\tau$ is increased using the update rule shown in (4), where $\tau_{i}$
controls the temperature in (2) at epoch $i$ and
$\varepsilon_{i-1}=(2^{\tau_{i-1}}-1)$.
$\displaystyle\vspace{-3mm}\tau_{i}:=\begin{cases}\tau_{i-1}+|\tau_{i-1}-\varepsilon_{i-1}|,&\ell_{i}-\ell_{*}\geq
0,\\\ \tau_{i-1}-|\tau_{i-1}-\varepsilon_{i-1}|,&\ell_{i}-\ell_{*}<0\\\
\end{cases}$ (4)
Because this procedure depends on $\ell$, the sampling process is directly
controlled by the performance metric of interest. $\tau\in[0.5,10]$ is used to
prevent (2) becoming too narrow or completely uniform.
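A sketch of this update with the clipping to $[0.5, 10]$ (the function name is ours, and the reading of the rule is hedged):

```python
def update_temperature(tau_prev, loss_i, loss_best, tau_min=0.5, tau_max=10.0):
    """Raise tau (more exploration) when validation loss did not improve,
    lower it otherwise, per Eq. (4); varepsilon_{i-1} = 2**tau_{i-1} - 1."""
    eps = 2.0 ** tau_prev - 1.0
    step = abs(tau_prev - eps)
    tau = tau_prev + step if loss_i - loss_best >= 0 else tau_prev - step
    return min(max(tau, tau_min), tau_max)  # keep tau in [0.5, 10]
```

For example, with `tau_prev = 2.0` the step size is `|2 - 3| = 1`, so a non-improving epoch moves the temperature to 3.0 and an improving one moves it to 1.0.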
This approach can be considered as an auxiliary task optimized for the
expected reward. However, we require a curriculum for NNRS to control the
amount of replacement sampling, as $\tau$ would be increased very early on in
training because large loss reductions are made where the model is initialized
at random. Thus, changes in $\tau$ have a larger effect on the overall
sampling process as the chosen schedule monotonically increases the
replacement sampling rate over training epochs.
(5) shows the procedure for choosing whether to use past target $y_{t-1}$,
past prediction $\hat{y}_{t-1}$ or a sampled neighbor of $y_{t-1}$ and assign
it to the next input $x_{t}$.
$\displaystyle
x_{t}=\begin{cases}\tilde{y}_{t-1}\sim\mathbb{B}(\tilde{\mat{N}}_{y},\hat{y}),&(\epsilon>\xi_{\text{ss}})\hskip
2.70003pt\&\hskip 2.70003pt(\gamma>\xi_{\text{nnrs}})\\\
\hat{y}_{t-1},&(\epsilon>\xi_{\text{ss}})\hskip 2.70003pt\&\hskip
2.70003pt(\gamma<\xi_{\text{nnrs}})\\\
y_{t-1},&(\epsilon<\xi_{\text{ss}})\hskip 2.70003pt\&\hskip
2.70003pt(\gamma>\xi_{\text{nnrs}})\\\ \end{cases}$ (5)
We start by sampling uniformly at random
$\xi_{\text{ss}}\sim\mathrm{Uniform}(0,1)$ and
$\xi_{\text{nnrs}}\sim\mathrm{Uniform}(0,1)$ for each sample $Y$ within a
training mini-batch $B^{i}$ in $\mathbf{B}_{tr}=\\{B^{1},B^{2},\dots,B^{K}\\}$.
Here, $\xi_{\text{ss}}$ is used as a threshold to decide which
$y\in Y$ are used for SS and $\xi_{\text{nnrs}}$ sets a threshold for NNRS
when both are used in conjunction. When both $\epsilon$ and $\gamma$ choose
the same tokens for sampling, we choose one at random with equal probability.
This point becomes more relevant towards the end of training when the schedule
outputs a high sampling rate. For faster inference, we fix the sampled indices
corresponding to $y\in Y$ for all samples in $Y$. This means NNRS can be
performed in parallel on all mini-batch samples in $\mathbf{B}_{tr}$. Below,
we have described the use of NNRS in conjunction with another popular sample-
based method to demonstrate its modularity. However, NNRS can also be used
standalone as we see later in our experiments. In fact, NNRS can be expanded
upon to learn which neighbors to sample as we describe next.
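Before moving on, the per-token choice in (5) can be read concretely as follows. Function and helper names are ours; `sample_nbr` stands in for a draw from $\tilde{\mat{N}}_{y_{t-1}}$.

```python
import random

def choose_input(y_prev, y_hat_prev, sample_nbr, eps, gamma, rng=random):
    """Sketch of rule (5): with thresholds xi_ss, xi_nnrs ~ Uniform(0, 1),
    pick a sampled neighbor of the past target (NNRS), the past
    prediction (SS), or the past target itself (teacher forcing)."""
    xi_ss, xi_nnrs = rng.random(), rng.random()
    if eps > xi_ss and gamma > xi_nnrs:
        return sample_nbr(y_prev)   # neighbor replacement
    if eps > xi_ss:
        return y_hat_prev           # scheduled sampling
    return y_prev                   # teacher forcing
```

With `eps = gamma = 0` this always reduces to teacher forcing, and with `eps = 1, gamma = 0` it always uses the model's past prediction.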
#### Gumbel Softmax Neighbor Sampling:
An alternative to updating the temperature via the incremental update rule in
(4) is to use a straight-through estimator that allows us to differentiate
through drawn samples $p(w^{\prime}|w;k,\tau)$. This can be achieved using the
Gumbel-Softmax [GS; jang2016categorical] trick. For NNRS, we refer to this
as Gumbel-Softmax Neighbor Sampling (GSNS). The GS allows us to sample and
backpropagate through $\tilde{\mat{N}}$ by reparameterizing the sampling
process as $\mathbf{d}\tilde{\mat{N}}_{w}/\mathbf{d}\alpha$, where $\alpha$ is
a multinomial distribution. (6) shows the GS: with componentwise Gumbel noise
added to the original distribution $p(\tilde{\mat{N}}_{w})$ for $w$, we find
the $\kappa\in\\{1,\dots,k\\}$ that maximizes
$G_{w}^{\kappa}:=\log\alpha_{w}^{\kappa}-\log(-\log U_{w}^{\kappa})$ and then
set $D_{w}^{\kappa}=1$ and the remaining elements $D_{w}^{\neg\kappa}=0$ (i.e.,
a one-hot vector). Here, $U_{w}^{\kappa}\sim\text{Uniform}(0,1)$ and
$\alpha_{w}^{\kappa}$ is drawn from the discrete distribution
$D\sim\text{Discrete}_{w}(\alpha)$. We then sample $p(w^{\prime}|w;k,\tau)$ as
in (6) after updating $\nabla_{p(\mat{N}_{w})}\ell_{i}$.
$\displaystyle
p(w^{\prime}|w;k,\tau)=\frac{\exp((\log\alpha_{w}^{\kappa}+G_{w}^{\kappa})/\tau)}{\sum_{i=1}^{k}\exp((\log\alpha_{w}^{i}+G_{w}^{i})/\tau)}$
(6)
This allows us to draw samples from a Gumbel distribution while performing
gradient updates on $p(\mat{N}_{w})$. In contrast to NNRS, which performs
updates according to the validation perplexity at the end of the of each
training epoch, we instead update the neighbor distribution throughout
training. However, similar to before, the curriculum still controls the amount
of GSNS updates that are performed. Therefore the gradient updates are only
for the tokens, which are chosen for GSNS according to the curriculum
schedule. We test $\tau=0.5$ as a constant during training and set an upper
bound of $\tau=10$, which corresponds to a heavy dampening of (6). A small
$\tau$ early in training avoids high variance in gradient updates, even though
we would expect the curriculum to use NNRS less early on.
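A minimal NumPy sketch of the GS sample in (6) over a word's $k$ neighbors; the straight-through one-hot step is our hedged reading of the $D_{w}^{\kappa}$ construction.

```python
import numpy as np

def gumbel_softmax_neighbor(log_alpha, tau=0.5, rng=None, hard=True):
    """Eq. (6) sketch: add Gumbel(0, 1) noise to the k unnormalized
    neighbor log-probabilities log_alpha, temperature-soften, and
    optionally return the argmax as a one-hot (straight-through)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(low=1e-12, high=1.0, size=log_alpha.shape)
    g = -np.log(-np.log(u))                 # Gumbel(0, 1) noise
    y = np.exp((log_alpha + g) / tau)
    y = y / y.sum()                         # soft sample over k neighbors
    if hard:                                # D_w: one-hot at the max
        d = np.zeros_like(y)
        d[np.argmax(y)] = 1.0
        return d
    return y

# Soft sample over three neighbors with probabilities 0.7 / 0.2 / 0.1.
soft = gumbel_softmax_neighbor(np.log(np.array([0.7, 0.2, 0.1])), tau=0.5,
                               rng=np.random.default_rng(1), hard=False)
```

The soft output is a valid distribution (useful for gradient updates), while the hard output is the one-hot vector $D_{w}$ used to pick a single neighbor.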
We expect the output to be less sensitive to perturbations in the input
because the input is locally bounded by the space occupied by the $k$ nearest
neighbors of the target $\tilde{y}_{t}$. Likewise, $\tilde{y}_{t}$ can be
considered as emulating the problem of compounding errors, since the
conditional probability $p(y_{t+1}|x_{1:t},\tilde{y}_{t};\theta)$ is
conditioned on the sampled neighbor $\tilde{y}_{t}$ instead of the true target
$y_{t}$. In cases where the model finds it difficult to transition from using
$y_{t-1}\to\hat{y}_{t-1}$, interpolating with the neighborhood samples
$\tilde{y}$ can provide a smoother policy ($y\to\tilde{y}\to\hat{y}$). This
smoothing assigns some mass to unseen transitions (similar to Laplacian
smoothing), bounded by $k$ neighbors, which is directly proportional to
transition probabilities. We now consider curriculum schedules to
monotonically increase both $\epsilon$ and $\gamma$, corresponding to the
sampling rates for SS and NNRS respectively, and aim to
identify schedules that help mitigate compounding errors by controlling the
amount of exploration of neighbors throughout training.
#### Objective Definition:
We note that, in SS, the objective is not a strictly proper scoring rule
(i.e., the maximum of the function is unique) since the objective depends on
the model's own distribution. However, in NNRS, we do not use the model's
distribution but instead neighbors of past targets, which are not subject to
change throughout training. Therefore, the cost is strictly proper and
proportional to the $k$ neighbors' centroids.
In the case of a static sampling rate for $\epsilon$ and $\gamma$, we can
define the log-likelihood loss with SS and NNRS in terms of KL-divergences, as
shown in (12). $P_{x_{t}}$ and $Q_{x_{t}}$ are the marginal distributions for
input token $x_{t}$, while $P_{x_{t}|x_{t-1}=h}$ and $Q_{x_{t}|x_{t-1}=h}$ are
the conditional distributions. SGD with cosine annealing
[reddi2018convergence] of the learning rate is also considered as the
optimizer for training the model.
$\displaystyle\begin{gathered}D_{\text{SS-
NNRS}}[P||Q]=\text{KL}[P_{x_{t-1}}||Q_{x_{t-1}}]+\\\
(1-\epsilon)\mathbb{E}_{h\sim
Q_{x_{t-1}}}\text{KL}[P_{x_{t-1}}||Q_{x_{t}|x_{t-1}=h}]+\\\
\epsilon\underbrace{\mathbb{E}_{h\sim
P_{x_{t-1}}}\text{KL}[P_{x_{t}|x_{t-1}}||Q_{x_{t}|x_{t-1}}]}_{\text{SS}}+\\\
(1-\gamma)\mathbb{E}_{h\sim
Q_{x_{t-1}}}\text{KL}[P_{x_{t+1}}||Q_{x_{t}|x_{t-1}=h}]+\\\
\gamma\underbrace{\mathbb{E}_{h\sim
P_{x_{t-1}}}\text{KL}[P_{x_{t}|x_{t-1}}||Q_{x_{t}|x_{t-1}}]}_{\text{$k$-NN
replacement sampling}}\end{gathered}$ (12)
In practice, we need only use the CE loss when using NNRS. The aforementioned
KL analysis suggests that when using the CE loss, the scoring function is
proper, unlike the SS objective. At this point, we have defined the whole
process to carry out NNRS and summarize the training with NNRS in Algorithm 1.
Input: Training epochs $\Gamma$. Sentence length $T$ for training mini-batch
training data
$\mathbf{B}_{\text{tr}}=\\{B_{\text{tr}}^{1},B_{\text{tr}}^{2},\dots,B_{\text{tr}}^{K}\\}$
and validation data
$\mathbf{B}_{\text{val}}=\\{B_{\text{val}}^{1},B_{\text{val}}^{2},\dots,B_{\text{val}}^{R}\\}$.
An RNN $f_{\theta}(\cdot,\cdot)$ parameterized by $\theta$. Curriculum
schedule functions $g_{\text{ss}}(\cdot),g_{\text{nnrs}}(\cdot)$.
Initialize the RNN parameters $\theta$ at random
Define k-NN embedding similarity for vocabulary $V$ using (3.3) and normalize
to a probability distribution $p(\tilde{\mat{N}})$ using (2)
Set $\tau=0.1,i=0,\xi=0$, $\ell_{*}=|V|$
foreach _$i\in\Gamma$_ do
Set
$\epsilon=g_{\text{ss}}(i),\gamma_{\text{nnrs}}=g_{\text{nnrs}}(i),\ell_{i}=0$
foreach _$B^{k}\in\mathbf{B}_{tr}$_ do
for _step $t=1\to T_{k}$_ do
Sample $\xi_{\text{ss}}\sim\mathbb{U}(0,1)$ and
$\xi_{\text{nnrs}}\sim\mathbb{U}(0,1)$
Sample $\vec{x}_{t}$ using (5) and input
$\hat{\vec{y}}_{t},h_{t}=f_{\theta}(\vec{x}_{t},\vec{h}_{t-1})$
Compute cross-entropy loss
$\ell_{\text{tr}}=\mathcal{L}_{\text{ce}}(\hat{\mat{Y}},\mat{Y})$
Update $\theta$
$\theta\from\eta\theta+(1-\eta)\nabla_{\theta}\log\ell_{\text{tr}}$
Compute $\ell_{i}$ by repeating above on $\mathbf{B}_{\text{val}}$ without
updating $\theta$
Update $\ell_{*}$ if $\ell_{i}<\ell_{*}$
if using the Gumbel-Softmax
Update $\mat{N}:=\beta\mat{N}-(1-\beta)\nabla_{\tilde{\mat{N}}}\ell_{*}$
else
Update $\mat{N}$ by changing $\tau$ based on the difference
between $\ell_{*}$ and $\ell_{i}$ using (4)
Renormalize $p(\mat{N})$ using the $\mathtt{softmax}$
Algorithm 1 NNRS-based Training
## 4 Experiments
Configuration | Parameter Setting | WikiText-2 | Penn-Treebank
---|---|---|---
| Linear | S-Shaped Curve | Exponential Increase | Static | Linear | S-Shaped Curve | Exponential Increase | Static
| $\epsilon_{s}$ | $\epsilon_{e}$ | $\gamma_{s}$ | $\gamma_{e}$ | Valid | Test | Valid | Test | Valid | Test | Valid | Test | Valid | Test | Valid | Test | Valid | Test | Valid | Test
No Sampling | - | - | - | - | 140.28 | 128.78 | 140.28 | 128.78 | 140.28 | 128.78 | 140.28 | 128.78 | 76.25 | 71.81 | 76.25 | 71.81 | 76.25 | 71.81 | 76.25 | 71.81
TPRS-1 | 0 | 0 | 0 | 0.2 | _143.78_ | _131.18_ | _137.69_ | _127.05_ | 137.63 | 136.88 | 136.31 | 126.49 | _81.42_ | _76.58_ | _83.40_ | _81.22_ | 77.56 | 73.02 | _81.45_ | _80.36_
TPRS-2 | 0 | 0 | 0 | 0.3 | 154.93 | 144.02 | 146.92 | 137.07 | _137.11_ | _125.41_ | _137.10_ | _125.95_ | 96.91 | 94.30 | 88.79 | 86.17 | _77.63_ | _72.80_ | 91.73 | 84.48
TPRS-3 | 0 | 0 | 0 | 0.5 | 159.11 | 148.41 | 148.10 | 139.58 | 138.46 | 127.97 | 138.53 | 129.87 | 97.62 | 94.78 | 88.46 | 86.05 | 76.60 | 72.77 | 98.31 | 95.23
NNRS-1 | 0 | 0 | 0 | 0.2 | _142.53_ | _130.45_ | _137.08_ | _126.82_ | 136.81 | 136.11 | 135.06 | 136.02 | _80.91_ | _76.70_ | _83.17_ | _79.62_ | _76.00_ | _72.19_ | _82.23_ | _80.03_
NNRS-2 | 0 | 0 | 0 | 0.3 | 154.50 | 143.13 | 149.69 | 136.98 | _136.83_ | _125.53_ | 136.34 | 125.84 | 96.66 | 93.18 | 88.68 | 85.29 | 77.12 | 73.05 | 92.12 | 84.29
NNRS-3 | 0 | 0 | 0 | 0.5 | 158.30 | 147.13 | 148.02 | 137.34 | 137.56 | 127.61 | _132.62_ | _121.50_ | 96.95 | 93.15 | 88.21 | 84.47 | 75.91 | 72.46 | 97.55 | 94.79
SS-1 | 0 | 0.2 | 0 | 0 | 139.31 | 128.50 | 143.82 | 131.08 | 137.72 | 126.46 | 135.55 | 122.61 | 83.63 | 80.03 | _82.33_ | _79.02_ | 76.71 | 73.32 | 74.50 | 70.20
SS-2 | 0 | 0.3 | 0 | 0 | 137.78 | 126.00 | 138.39 | 126.80 | _135.89_ | _125.03_ | 131.29 | 121.52 | _94.74_ | _79.82_ | 84.42 | 80.03 | 76.43 | 73.46 | _74.56_ | _70.25_
SS-3 | 0 | 0.5 | 0 | 0 | _135.14_ | _124.29_ | _136.88_ | _125.41_ | 136.98 | 125.96 | _131.28_ | _121.51_ | 92.48 | 88.92 | 85.87 | 82.37 | 76.41 | 73.15 | 74.11 | 69.48
SS-4 | 0 | 0.8 | 0 | 0 | 140.97 | 130.00 | 141.13 | 129.99 | 138.09 | 127.23 | 134.32 | 122.86 | 92.94 | 88.85 | 87.29 | 84.27 | _76.22_ | _72.98_ | 74.02 | 70.23
SS-NNRS-1 | 0 | 0.2 | 0 | 0.2 | 139.32 | 128.50 | _141.57_ | _129.67_ | 137.73 | 126.47 | 135.55 | 122.60 | _83.62_ | _80.03_ | _81.99_ | _77.86_ | 76.71 | 73.31 | 74.50 | 70.20
SS-NNRS-2 | 0 | 0.3 | 0 | 0.3 | 137.78 | 126.01 | 147.34 | 135.61 | 136.02 | 125.73 | 137.73 | 126.47 | 95.39 | 92.18 | 87.97 | 84.64 | 75.43 | 72.46 | 74.64 | 70.43
SS-NNRS-3 | 0 | 0.5 | 0 | 0.2 | _134.79_ | _123.90_ | 149.00 | 137.97 | _135.82_ | _124.72_ | _130.95_ | _120.76_ | 95.90 | 92.96 | 88.56 | 84.96 | _74.83_ | _70.95_ | _72.89_ | 69.06
SS-NNRS-4 | 0.2 | 0.5 | 0.2 | 0.5 | 150.97 | 138.59 | 146.01 | 134.67 | 135.88 | 126.02 | 123.84 | 121.98 | 96.78 | 93.20 | 88.56 | 87.06 | 76.41 | 73.15 | 74.42 | 69.88
SS-NNRS-5 | 0 | 0.5 | 0 | 0.5 | 136.22 | 125.70 | 151.39 | 138.44 | 137.02 | 125.84 | 132.17 | 121.86 | 97.12 | 93.90 | 90.37 | 86.25 | 76.02 | 72.31 | 74.08 | 70.79
SS-NNRS-6 | 0 | 0.8 | 0 | 0.2 | 148.96 | 130.01 | 141.14 | 129.99 | 138.09 | 127.24 | 134.32 | 122.89 | 95.91 | 92.87 | 81.89 | 78.54 | 74.74 | 70.64 | 74.02 | 70.23
SS-NNRS-7 | 0.2 | 0.8 | 0.2 | 0.5 | 155.17 | 148.21 | 149.58 | 137.83 | 135.45 | 124.82 | 131.09 | 121.43 | 96.77 | 93.64 | 88.27 | 85.04 | 76.35 | 72.74 | 74.43 | 70.12
Table 1: Test perplexity for a 2-hidden-layer LSTM [hochreiter1997long] using
Transition Probability Sampling (TPRS), SS and Nearest-Neighbor Replacement
Sampling (NNRS) with linear, s-shaped curve and exponential sampling functions
Figure 1: Perplexity on WikiText2: SS and NNRS (best viewed in color)
### 4.1 Experimental Results
Figure 1 shows the perplexity scores on WikiText-2 for the different
parameter settings for $\epsilon$ and $\gamma$ over 40 training epochs. Solid
lines indicate validation perplexity scores throughout training and dashed
horizontal lines indicate corresponding test perplexity score. We find that
most learning is carried out after 20 epochs given that the learning rate is
initially high ($\alpha=20$) and annealed using cosine annealing. Based on the
validation set performance, we found the optimal settings to be $\epsilon=0.5$
and $\gamma=0.2$ with a static probability sampling rate. color=yellow, size=,
caption=2do, color=yellow, size=, caption=2do, todo: color=yellow, size=,
caption=2do, Did we describe what is “static probability sampling rate”?
Additionally, using an exponential sampling rate outperforms the linear and
sigmoid functions. It appears that this is because the exponential function
carries out the majority of sampling late in the training when the model has
reached learning capacity from the teacher policy.
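The schedules compared above can be made concrete with a small sketch. The function below is ours, not the paper's implementation; the sharpness constant `k` is an assumed parameter. It maps a training step to a sampling probability between the start and end thresholds, and under the exponential curve most of the increase indeed happens late in training.

```python
import math

def schedule(i, total, start, end, kind="linear", k=5.0):
    """Map training step i in [0, total] to a sampling probability
    between start and end, following the named curve."""
    t = i / total  # normalized training progress in [0, 1]
    if kind == "static":
        return end
    if kind == "linear":
        frac = t
    elif kind == "sigmoid":
        # s-shaped curve, rescaled so frac(0) = 0 and frac(1) = 1
        raw = lambda x: 1.0 / (1.0 + math.exp(-k * (x - 0.5)))
        frac = (raw(t) - raw(0.0)) / (raw(1.0) - raw(0.0))
    elif kind == "exponential":
        # most of the increase happens late in training
        frac = (math.exp(k * t) - 1.0) / (math.exp(k) - 1.0)
    else:
        raise ValueError(kind)
    return start + (end - start) * frac
```

For example, with `start=0` and `end=0.5`, the linear curve reaches 0.25 at the midpoint of training, while the exponential curve is still close to zero there.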
For Penn Treebank, static and exponential settings for both SS and NNRS show
the best performance and converge quicker. In all cases, convergence for all
functions behaves as expected for $\{\epsilon,\gamma\}<0.3$. We also see the
same trend as with WikiText-2, where the exponential function allows for
higher sampling rates than the linear and sigmoid functions. Again, this
suggests that NNRS is most effective near convergence, as the sampling
probability increases exponentially while the validation perplexity begins to
plateau over epochs.
#### Schedule Parameter Grid Search Results
Table 1 shows the results of the model with varying $\epsilon$ and $\gamma$
upper and lower thresholds using linear, s-curve, exponential and static
sampling strategies for all tested datasets. Here, subscript $s$ in
$\epsilon_{s},\gamma_{s}$ denotes the initial probabilities at $i=0$ and
subscript $e$ in $\epsilon_{e},\gamma_{e}$ denotes the end probabilities at
$i=\Gamma$. Bolded results are the best within each group of configurations
sharing a schedule type (e.g., SS-1 to SS-4 for linear), and shaded cells
correspond to the best performing configuration and schedule for either
WikiText-2 or PTB. These experiments report results using the simple update
rule described in (4).
Eval. | Model | MLE | TPRS | NNRS | SS | SS-NNRS | MLE | TPRS | NNRS | SS | SS-NNRS
---|---|---|---|---|---|---|---|---|---|---|---
BLEU-4 | | | | | | | | | | | | | | | | | | | | |
LSTM | 7.84 | 8.75 | 7.14 | 6.88 | 7.38 | 7.83 | 10.06 | 9.61 | 7.72 | 7.88 | 6.11 | 6.62 | 5.74 | 5.31 | _4.49_ | _4.31_ | 6.02 | 5.66 | 5.03 | 4.79
GRU | 8.22 | 9.53 | 7.33 | 7.41 | _7.88_ | _7.23_ | 9.78 | 9.90 | 7.27 | 7.50 | 6.03 | 6.31 | 5.45 | 5.03 | 4.79 | 4.67 | 5.73 | 5.50 | 5.26 | 4.61
Highway | 9.13 | 9.26 | 8.04 | 7.93 | 8.29 | 9.14 | 11.11 | 10.51 | 8.73 | 8.18 | 6.84 | 7.04 | 6.34 | 6.13 | 5.77 | 5.59 | 5.63 | 5.10 | 6.08 | 5.28
WMD | | | | | | | | | | | | | | | | | | | | |
LSTM | 0.41 | 0.40 | 0.48 | 0.41 | 0.35 | 0.30 | 0.34 | 0.32 | _0.29_ | _0.28_ | 0.51 | 0.46 | 0.44 | 0.36 | 0.41 | 0.36 | 0.43 | 0.42 | _0.36_ | _0.34_
GRU | 0.43 | 0.42 | 0.47 | 0.36 | 0.38 | 0.34 | 0.39 | 0.33 | 0.36 | 0.31 | 0.53 | 0.49 | 0.43 | 0.34 | 0.42 | 0.38 | 0.40 | 0.39 | _0.38_ | _0.35_
Highway | 0.48 | 0.45 | 0.51 | 0.53 | 0.52 | 0.54 | 0.59 | 0.31 | 0.36 | 0.36 | 0.53 | 0.51 | 0.48 | 0.40 | 0.39 | 0.39 | 0.42 | 0.40 | _0.40_ | _0.38_
Table 2: Self-BLEU4 and Self-WMD scores on PTB (left) and WikiText-2 (right);
each method column reports validation | test scores
NNRS and SS used in conjunction yield the best performance in most cases for
both datasets. The best performance for both WikiText-2 and PTB is found with
$\epsilon\in[0,0.5]$ and $\gamma\in[0,0.2]$, and slightly improves over using
SS alone. For PTB, $\gamma_{e}=0.2$ performs best for the linear and sigmoid
functions, and $\gamma=0.5$ for the exponential and static sampling rates;
overall, a constant sampling rate performs well. The normalized exponential
schedule is better than the sigmoid and linear schedules for both $\epsilon$
and $\gamma$ on both datasets. This appears to be because the majority of
neighbor replacements occur towards the end of training, when the learner has
already reached the full capacity of what can be learned from the teacher
policy; such a schedule approximately follows the inverse of the normalized
validation perplexity. This coincides with the theoretical guarantees of using
an exponential decay schedule provided in DAgger [ross2010efficient].
Overall, interpolating between NNRS and SS produces the best performance on
average, and a low constant sampling rate works best on both datasets.
Controlling the temperature $\tau$ based on the validation perplexity allows
all $k$-neighbors to be explored. Compared to using no sampling, our approach
yields an 8 point test perplexity decrease on WikiText-2 and a 2.75 point
decrease on Penn Treebank. In comparison to Transition Probability Replacement
Sampling (TPRS), NNRS performs slightly better while requiring less memory to
store embeddings.
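As a rough sketch of the replacement step (function and variable names are ours; the paper does not specify these implementation details), a target token is swapped for one of its $k$ nearest embedding neighbors with probability $\gamma$, where the neighbor is drawn from a temperature-scaled softmax over similarities; a higher $\tau$ spreads probability mass over all $k$ neighbors.

```python
import math
import random

def nnrs_replace(token, neighbors, similarities, gamma, tau, rng=random):
    """With probability gamma, replace `token` with one of its k nearest
    neighbors, sampled from a softmax over cosine similarities with
    temperature tau; higher tau spreads mass over all k neighbors."""
    if rng.random() >= gamma or not neighbors:
        return token
    # temperature-scaled softmax over the k neighbor similarities
    logits = [s / tau for s in similarities]
    m = max(logits)  # subtract the max for numerical stability
    weights = [math.exp(l - m) for l in logits]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(neighbors, weights=probs, k=1)[0]
```

For example, `nnrs_replace("car", ["automobile", "vehicle", "truck"], [0.9, 0.8, 0.7], gamma=0.2, tau=1.0)` keeps `"car"` about 80% of the time and otherwise draws one of the three neighbors.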
### 4.2 Evaluation
Table 2 shows the self-BLEU [zhu2018texygen] scores, where higher scores
correspond to less diversity in the predictions. Similarly, we also use self-
WMD to measure semantic diversity, computed by averaging the average Word
Mover's Distance (WMD) between $\ell_{2}$-normalized embeddings (the same
GoogleNews pretrained $\mathtt{skipgram}$ embeddings [mikolov2013distributed]
that we used to define neighbors) of predicted words and target words in the
test set. The raw WMD scores lie in $[-1,1]$, as cosine similarity is used as
the ground metric; after computing WMD, the scores are normalized to the range
$[0,1]$. Hence, low self-WMD scores closer to 0 correspond to high diversity.
In contrast, in Table 3, high self-BLEU and self-WMD signify high quality.
Apart from improving over baselines in terms of perplexity, we identify what
tradeoff is incurred between text generation quality and diversity for our
proposed method. For the MLE results, $\gamma_{s}=\gamma_{e}=0$ and
$\epsilon_{s}=\epsilon_{e}=0$; for TPRS and NNRS, $\gamma_{s}=0,\gamma_{e}=0.2$
with a static sampling rate; for SS, $\epsilon_{s}=0,\epsilon_{e}=0.5$; and for
SS-NNRS we combine the previous $\gamma$ and $\epsilon$ settings. These are not
tuned here but chosen post hoc as the settings that led to the lowest test
perplexity on average across models on both PTB and WikiText-2.
#### Text Generation Diversity
[zhu2018texygen] measured the diversity of text generation by computing BLEU
scores of each generated sample either against a reference sample or against
the rest of the generated samples (the latter referred to as self-BLEU). We
carry out the latter on both validation and test sets for the best performing
models on PTB and WikiText-2 in Table 2. To speed up the computation of self-
BLEU4, we reshuffle mini-batches of the generated samples against the
generated text and take the average BLEU-4 score over 20 reshuffles. Hence,
each generated sample is randomly compared to at least 20 other generated
samples.
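The reshuffling procedure can be sketched as follows. Here `bleu4` is a minimal unsmoothed sentence-level BLEU-4 and `self_bleu` pairs each sample with a random partner per reshuffle; the names and details are ours, not the paper's exact implementation.

```python
import math
import random
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    """Sentence-level BLEU-4 with brevity penalty (unsmoothed sketch)."""
    precisions = []
    for n in range(1, 5):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # tiny floor avoids log(0)
    bp = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

def self_bleu(samples, reshuffles=20, rng=random):
    """Average BLEU-4 of each sample against a randomly paired other sample,
    repeated over several reshuffles; higher means less diverse."""
    scores = []
    for _ in range(reshuffles):
        partners = samples[:]
        rng.shuffle(partners)
        for cand, ref in zip(samples, partners):
            if cand is not ref:  # skip a sample paired with itself
                scores.append(bleu4(cand, ref))
    return sum(scores) / max(len(scores), 1)
```

A corpus of identical samples gives a self-BLEU of 1 (no diversity), while unrelated samples push the score toward 0.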
We also propose a variant of this procedure that instead uses the average Word
Mover's Distance (WMD) [kusner2015word] between (pretrained)
$\ell_{2}$-normalized embeddings of predicted sentences, rather than word
overlap (e.g., BLEU), to measure diversity, omitting out-of-vocabulary terms
that inflate similarity scores. We refer to this as Self-WMD. Results are
shown in Table 2 for $k$=$\log_{2}|V|$ as before. We find that self-WMD in
particular shows an increase in diversity for NNRS when compared to standard
maximum likelihood training. Although coherently evaluating diversity is
difficult, this at least suggests that NNRS generates sentences that are
semantically similar according to the pretrained GoogleNews skipgram
embeddings.
| Model | MLE | TPRS | NNRS | SS | SS-NNRS
---|---|---|---|---|---|---
BLEU4 | | | | | | | | | | |
LSTM | 7.87 | 8.28 | 9.24 | 8.16 | 11.81 | 11.26 | 10.93 | 10.53 | _11.62_ | _11.20_
GRU | 9.39 | 8.58 | 9.49 | 10.67 | 11.35 | 11.04 | 10.98 | 11.60 | 12.14 | 11.79
Highway | 9.03 | 8.56 | 9.72 | 9.40 | 10.81 | 10.37 | 11.24 | 11.96 | _13.75_ | _14.03_
WMD | | | | | | | | | | |
LSTM | 0.72 | 0.84 | 0.85 | 0.93 | 0.91 | 0.88 | 0.89 | 0.91 | _0.95_ | _0.93_
GRU | 0.72 | 0.72 | 0.67 | 0.63 | 0.70 | 0.69 | 0.70 | 0.69 | _0.82_ | _0.84_
Highway | 0.70 | 0.69 | 0.74 | 0.72 | 0.78 | 0.76 | 0.72 | 0.75 | _0.79_ | _0.80_
Table 3: WikiText-2 BLEU-4 & WMD quality scores (each method column reports
validation | test)
#### Text Generation Quality
Following prior work by [yu2017seqgan], we compute BLEU between the predictions
and the corresponding targets, as shown in Table 3. Similar to the diversity
evaluation, we also report WMD between the pretrained embeddings of predicted
and target sentences. Typically, there is some tradeoff between text diversity
and text quality when using $n$-gram overlap based evaluations. We find that
SS-NNRS leads to slightly more diverse sentences according to BLEU-4, but when
measuring sentence-level semantic similarity using WMD, the improvement is
more evident, as WMD takes into account the similarity among the $k$ local
neighbors used in the NNRS method. For larger $k$, diversity is increased at
the expense of generation quality, while $k$=0 corresponds to standard maximum
likelihood training.
### 4.3 Closing Remarks
In closing, we summarize our findings based on the above LM experiments on PTB
and WikiText-2. First, SS-NNRS with $\epsilon=0\to 0.5$ and $\gamma=0\to 0.2$
has shown the best performance on both PTB and WikiText-2. From this, we
conclude that it is in general better to begin training in the standard
maximum likelihood regime, so that the model is robust enough before it starts
learning to predict future tokens from synonymous neighboring tokens. Second,
NNRS improves over SS when each is used in isolation, most notably with a
linear schedule. We posit that this is because transitioning to the model's
own past predictions in the intermediate stages of training is too difficult
for the model to adapt to, compared to transitioning to neighbors of the past
targets, which are semantically closer than the predicted tokens at that stage
of training. Third, sampling neighbors using the whole transition probability
matrix leads to poor results even with relatively low sampling rates.
Therefore, sampling in a lower-dimensional embedding space is not only more
efficient during training but also leads to better performance in the
practical setting with a fixed number of epochs. Lastly, larger test
perplexity reductions are found for WikiText-2 when using NNRS. Unlike PTB,
WikiText-2 does not have a reduced vocabulary, and therefore rare terms are
kept. Hence, NNRS is particularly useful for rare words, even more so when the
dataset is relatively small, so that co-occurrence counts involving rare terms
are too low for the transition probabilities to be accurate.
## 5 Conclusion
We presented a sampling strategy for curriculum learning to mitigate
compounding errors in neural sequence models. Consistent performance
improvements are made over standard maximum likelihood training, particularly
for sampling functions that generate monotonically increasing sampling rates,
rising as the validation perplexity begins to plateau. This is empirically
demonstrated by comparing a standard 2-hidden-layer LSTM (with identical
hyperparameter settings) using (1) no sampling strategy, (2) a baseline
transition probability replacement sampling, (3) the proposed nearest neighbor
replacement sampling (NNRS) technique, (4) scheduled sampling and (5) a
combination of (3) and (4). We identified the best sampling probability bounds
and schedules for each dataset and conclude that, overall, an exponential
schedule and a static sampling probability outperform sigmoid and linear
schedules. Concretely, a schedule with a high sampling rate too early in
training leads to performance degradation, whereas sampling with high
probability towards the end of training can improve generalization. Lastly,
test set perplexity improves on both datasets when using NNRS.
# Tighter Expected Generalization Error Bounds via Wasserstein Distance
Borja Rodríguez-Gálvez
KTH Royal Institute of Technology, Stockholm, Sweden
<EMAIL_ADDRESS>
Germán Bassi
Ericsson Research, Stockholm, Sweden
<EMAIL_ADDRESS>
Ragnar Thobaben
KTH Royal Institute of Technology, Stockholm, Sweden
<EMAIL_ADDRESS>
Mikael Skoglund
KTH Royal Institute of Technology, Stockholm, Sweden
<EMAIL_ADDRESS>
###### Abstract
This work presents several expected generalization error bounds based on the
Wasserstein distance. More specifically, it introduces full-dataset, single-
letter, and random-subset bounds, and their analogues in the randomized
subsample setting from Steinke and Zakynthinou [1]. Moreover, when the loss
function is bounded and the geometry of the space is ignored by the choice of
the metric in the Wasserstein distance, these bounds recover from below (and
thus, are tighter than) current bounds based on the relative entropy. In
particular, they generate new, non-vacuous bounds based on the relative
entropy. Therefore, these results can be seen as a bridge between works that
account for the geometry of the hypothesis space and those based on the
relative entropy, which is agnostic to such geometry. Furthermore, it is shown
how to produce various new bounds based on different information measures
(e.g., the lautum information or several $f$-divergences) based on these
bounds and how to derive similar bounds with respect to the backward channel
using the presented proof techniques.
## 1 Introduction
A _learning algorithm_ is a mechanism that takes a _dataset_
$s=(z_{1},\ldots,z_{n})$ of $n$ samples $z_{i}\in\mathcal{Z}$ taken i.i.d.
from a distribution $P_{Z}$ as an input, and produces a hypothesis
$w\in\mathcal{W}$ by means of the conditional probability distribution
$P_{W|S}$.
The ability of a hypothesis $w$ to characterize a sample $z$ is described by
the loss function $\ell(w,z)\in\mathbb{R}$. More precisely, a hypothesis $w$
describes well the samples from a population $P_{Z}$ when its _population
risk_ , i.e., $\smash{\mathscr{L}_{P_{Z}}(w)\triangleq\mathbb{E}[\ell(w,Z)]}$,
is low. However, the distribution $P_{Z}$ is often not available and the
_empirical risk_ on the dataset $s$, i.e.,
$\smash{\mathscr{L}_{s}(w)\triangleq\frac{1}{n}\sum_{i=1}^{n}\ell(w,z_{i})}$,
is considered as a proxy. Therefore, it is of interest to study the
discrepancy between the population and empirical risks, which is defined as
the _generalization error_ :
$\smash{\textnormal{gen}(w,s)\triangleq\mathscr{L}_{P_{Z}}(w)-\mathscr{L}_{s}(w).}$
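As a toy illustration of these definitions (the distribution, loss, and hypothesis below are our choices, not from the paper), the population risk can be computed exactly on a small finite support and compared with the empirical risk on a sampled dataset:

```python
import random

def gen_gap(loss, w, population_risk, sample):
    """gen(w, s) = population risk minus empirical risk on the sample s."""
    empirical = sum(loss(w, z) for z in sample) / len(sample)
    return population_risk - empirical

# Toy example: Z ~ Uniform{0,1,2,3}, squared-error loss, hypothesis w = 1.5.
loss = lambda w, z: (w - z) ** 2
pop = sum(loss(1.5, z) for z in [0, 1, 2, 3]) / 4  # exact E[loss(w, Z)]

random.seed(0)
s = [random.choice([0, 1, 2, 3]) for _ in range(50)]  # i.i.d. dataset of n = 50
gap = gen_gap(loss, 1.5, pop, s)
```

Here `pop` equals 1.25 exactly, and `gap` fluctuates around zero depending on the drawn sample, which is precisely the quantity the bounds below control in expectation.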
Classical approaches bound the generalization error in expectation and in
probability (PAC Bayes) either by studying the complexity and the geometry of
the hypothesis’ space $\mathcal{W}$ or by exploring properties of the learning
algorithm itself; see, e.g., [2, 3] for an overview.
More recently, the relationship (or amount of information) between the
generated hypothesis and the training dataset has been used as an indicator of
the generalization performance. In [4], based on [5], it is shown that the
expected generalization error, i.e.,
$\smash{\overline{\textnormal{gen}}(W,S)\triangleq\mathbb{E}[\textnormal{gen}(W,S)]}$,
is bounded from above by a function that depends on the mutual information
between the hypothesis $W$ and the dataset $S$ with which it is trained, i.e.,
$I(W;S)$. However, this bound becomes vacuous when $I(W;S)\to\infty$, which
occurs for example when $W$ and $S$ are separately continuous and $W$ is a
deterministic function of $S$. To address this issue, it is shown in [6] that
the generalization error is also bounded by a function on the dependency
between the hypothesis and individual samples, $I(W;Z_{i})$, which is usually
finite due to the smoothing effect of marginalization. Following this line of
work, in [7], the authors present data-dependent bounds based on the
relationship between the hypothesis and random subsets of the data, i.e.,
$D_{\textnormal{KL}}(P_{W|s}\>\|\>P_{W|s_{j^{c}}})$ where $j\subseteq[n]$.
After that, a more structured setting is introduced in [1], studying instead
the relationship between the hypothesis and the _identity_ of the samples. The
authors consider a super-sample of $2n$ i.i.d. instances $\tilde{z}_{i}$ from
$P_{Z}$, i.e., $\tilde{s}=(\tilde{z}_{1},\ldots,\tilde{z}_{2n})$. This super
sample is used to construct the dataset $s$ by choosing between the samples
$\tilde{z}_{i}$ and $\tilde{z}_{i+n}$ using a Bernoulli random variable
$U_{i}$ with probability $\frac{1}{2}$, i.e., $z_{i}=\tilde{z}_{i+u_{i}n}$. In
this paper, the two settings are referred to as the _standard_ and
_randomized-subsample_ settings. (In [8] the latter is called the random-
subset setting; however, this may cause confusion with the random-subset
bounds in the present work.) In the randomized-subsample setting, the
_empirical generalization error_ is defined as the difference between the
empirical risk on the samples from $\tilde{s}$ not used to obtain the
hypothesis, i.e., $\bar{s}=\tilde{s}\setminus s$, and the empirical risk on
the dataset $s$, i.e.,
$\smash{\widehat{\textnormal{gen}}(w,\tilde{s},u)\triangleq\mathscr{L}_{\bar{s}}(w)-\mathscr{L}_{s}(w)=\frac{1}{n}\sum\nolimits_{i=1}^{n}\big{(}\ell(w,\tilde{z}_{i+(1-u_{i})n})-\ell(w,\tilde{z}_{i+u_{i}n})\big{)}},$
where $u$ is the sequence of $n$ i.i.d. Bernoulli trial outcomes $u_{i}$. The
expected value of the empirical and the (standard) generalization errors
coincide, i.e.,
$\mathbb{E}[\widehat{\textnormal{gen}}(W,\tilde{S},U)]=\overline{\textnormal{gen}}(W,S)$.
Also, the expected generalization error is controlled by the conditional
mutual information between the hypothesis $W$ and the Bernoulli trials $U$,
given the super sample $\tilde{S}$ [1], i.e., $I(W;U|\tilde{S})$, by the
individual conditional mutual information [9], i.e.,
$I(W;U_{i}|\tilde{Z}_{i},\tilde{Z}_{i+n})$, and by the “disintegrated” mutual
information with a subset $U_{J}$ of the Bernoulli trials [10], i.e.,
$D_{\textnormal{KL}}(P_{\smash{W|\tilde{S},U}}\>\|\>P_{\smash{W|\tilde{S},U_{J^{c}}}})$.
A highlight of this setting is that these conditional notions of information
are always finite [1] and smaller than their “unconditional” counterparts
[10], e.g., $I(W;U|\tilde{S})\leq I(W;S)$ and $I(W;U|\tilde{S})\leq n\log(2)$.
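The randomized-subsample construction can be sketched directly from the definitions above (0-based indexing; the loss and values below are toy choices of ours, not from the paper):

```python
def empirical_gen(loss, w, supersample, u):
    """Empirical generalization error in the randomized-subsample setting:
    risk on the held-out half of the supersample minus risk on the dataset,
    where z_i = z~_{i + u_i * n} is selected by the Bernoulli trials u."""
    n = len(u)
    total = 0.0
    for i in range(n):
        held_out = supersample[i + (1 - u[i]) * n]  # sample NOT used for training
        used = supersample[i + u[i] * n]            # sample in the dataset s
        total += loss(w, held_out) - loss(w, used)
    return total / n
```

Note that flipping every Bernoulli trial exchanges the roles of the two halves and negates the empirical generalization error.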
Some steps towards unifying these results are taken in [8], where the authors
develop a framework that makes it possible to recover the expected
generalization error bounds based on the mutual information $I(W;S)$ and the
conditional mutual information $I(W;U|\tilde{S})$. Then, the aforementioned
framework is further exploited in [9] to recover the single-sample and the
random-subsets bounds, which are based on $I(W;Z_{i})$,
$D_{\textnormal{KL}}(P_{W|S}\>\|\>P_{W|S_{J^{c}}})$, and
$D_{\textnormal{KL}}(P_{\smash{W|\tilde{S},U}}\>\|\>P_{\smash{W|\tilde{S},U_{J^{c}}}})$,
and to generate new individual conditional mutual information bounds, i.e.,
$\smash{I(W;U_{i}|\tilde{Z}_{i},\tilde{Z}_{i+n})}$. Finally, in [11, 12],
other systematic ways to recover some of the said bounds and obtain similar
new ones are studied.
In parallel, there were some attempts to bridge the gap between employing the
geometry and complexity of the hypothesis space and the relationship between
the hypothesis and the training samples. In [13], the authors bound
$\overline{\textnormal{gen}}(W,S)$ with a function of weighted dependencies
between the dataset and increasingly finer quantizations of the hypothesis,
i.e., $\smash{\\{2^{-k/2}I([W]_{k};S)\\}_{k}}$, which can be finite even if
$I(W;S)\to\infty$. This result stems from a clever usage of the chaining
technique [14, Theorem 5.24], and a comparison with this kind of approach is
given in Appendix A. Later, in [15] and [16], it is shown that the expected
generalization error is bounded from above by a function of the Wasserstein
distance between the hypothesis distribution after observing the dataset
$P_{W|S}$ and its prior $P_{W}$, i.e., $\mathbb{W}_{p}(P_{W|S},P_{W})$, and by
a function of the Wasserstein distance between the hypothesis distribution
after observing a single sample $P_{W|Z_{i}}$ and its prior $P_{W}$, i.e.,
$\mathbb{W}_{p}(P_{W|Z_{i}},P_{W})$, which are finite when a suitable metric
is chosen but are difficult to evaluate. Concurrently, in [17] it is shown
that a similar result holds, if the metric is the Minkowski distance, for the
distribution of the data $P_{S}$ and the backward channel $P_{S|W}$, i.e.,
$\mathbb{W}_{p,\|\cdot\|}^{p}(P_{S|W},P_{S})$.
The main contributions of this paper are the following:
* •
It introduces new, tighter single-letter and random-subset Wasserstein
distance bounds for the standard and randomized-subsample settings (Theorems
3.1, 3.1, 3.2, and 3.2).
* •
It shows that when the loss is bounded and the geometry of the space is
ignored, these bounds recover from below (and thus are tighter than) the
current relative entropy and mutual information bounds on both the standard
and randomized-subsample settings. In fact, they are also tighter when the
loss is additionally subgaussian or under certain milder conditions on the
geometry. However, these results are deferred to Appendix B to expose the main
ideas more clearly. Moreover, Corollaries 3.1 and 3.1 overcome the issue of
potentially vacuous relative entropy bounds on the standard setting.
* •
It introduces new bounds based on the backward channel, which are analogous to
those based on the forward channel and more general than previous results in
[17].
* •
It shows how to generate new bounds based on a variety of information
measures, e.g., the lautum information or several $f$-divergences like the
Hellinger distance or the $\chi^{2}$-divergence, thus making the
characterization of the generalization more flexible.
## 2 Preliminaries
### 2.1 Notation
Random variables $X$ are written in capital letters, their realizations $x$ in
lower-case letters, their set of outcomes $\mathcal{X}$ in calligraphic
letters, and their Borel $\sigma$-algebras $\mathscr{X}$ in script-style
letters. Moreover, the probability distribution of a random variable $X$ is
written as $P_{X}:\mathscr{X}\to[0,1]$. Hence, the random variable $X$ or the
probability distribution $P_{X}$ induce the probability space
$(\mathcal{X},\mathscr{X},P_{X})$. When more than one random variable is
considered, e.g., $X$ and $Y$, their joint distribution is written as
$P_{X,Y}:\mathscr{X}\otimes\mathscr{Y}\to[0,1]$ and their product distribution
as $P_{X}\otimes P_{Y}:\mathscr{X}\otimes\mathscr{Y}\to[0,1]$. Moreover, the
conditional probability distribution of $Y$ given $X$ is written as
$P_{Y|X}:\mathscr{Y}\otimes\mathcal{X}\to[0,1]$ and defines a probability
distribution $P_{Y|X=x}$ (or $P_{Y|x}$ for brevity) over $\mathcal{Y}$ for
each element $x\in\mathcal{X}$. Finally, there is an abuse of notation writing
$P_{X,Y}=P_{Y|X}\times P_{X}$ since
$\smash{P_{X,Y}(B)=\int\big{(}\int\chi_{B}\big{(}(x,y)\big{)}dP_{Y|X=x}(y)\big{)}dP_{X}(x)}$
for all $B\in\mathscr{X}\otimes\mathscr{Y}$, where $\chi_{B}$ is the
characteristic function of the set $B$. The natural logarithm is $\log$.
### 2.2 Necessary definitions, remarks, claims, and lemmas
###### Definition 1.
Let $\rho:\mathcal{X}\times\mathcal{X}\to\mathbb{R}_{+}$ be a metric. A space
$(\mathcal{X},\rho)$ is Polish if it is complete and separable. Throughout it
is assumed that all Polish spaces $(\mathcal{X},\rho)$ are equipped with the
Borel $\sigma$-algebra $\mathscr{X}$ generated by $\rho$. When there is no
ambiguity, both the metric space $(\mathcal{X},\rho)$ and the generated
measurable space $(\mathcal{X},\mathscr{X})$ are written as $\mathcal{X}$.
###### Definition 2.
Let $(\mathcal{X},\rho)$ be a Polish metric space and let $p\in[1,\infty)$.
Then, the _Wasserstein distance_ of order $p$ between two probability
distributions $P$ and $Q$ on $\mathcal{X}$ is
$\mathbb{W}_{p}(P,Q)\triangleq\Big{(}\inf_{R\in\Pi(P,Q)}\int_{\mathcal{X}\times\mathcal{X}}\rho(x,y)^{p}dR(x,y)\Big{)}^{1/p},$
where $\Pi(P,Q)$ is the set of all couplings $R$ of $P$ and $Q$, i.e., all
joint distributions on $\mathcal{X}\times\mathcal{X}$ with marginals $P$ and
$Q$, that is, $P(B)=R(B,\mathcal{X})$ and $Q(B)=R(\mathcal{X},B)$ for all
$B\in\mathscr{X}$.
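In one dimension with $\rho(x,y)=|x-y|$, the order-1 Wasserstein distance reduces to the integral of the absolute difference of the CDFs, which gives a simple way to compute it for discrete distributions. The sketch below is for intuition only; in general the infimum over couplings must be solved.

```python
def wasserstein1_1d(support, p, q):
    """Order-1 Wasserstein distance between two distributions p and q on a
    sorted 1-D support, via the CDF formula W_1 = integral |F_P - F_Q| dx
    (valid only in one dimension with rho(x, y) = |x - y|)."""
    cdf_p = cdf_q = 0.0
    dist = 0.0
    for i in range(len(support) - 1):
        cdf_p += p[i]
        cdf_q += q[i]
        # accumulate |F_P - F_Q| over the interval to the next support point
        dist += abs(cdf_p - cdf_q) * (support[i + 1] - support[i])
    return dist
```

For instance, moving a point mass from 0 to 3 costs exactly 3, the distance the mass travels.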
###### Remark 1.
Hölder’s inequality implies that $\mathbb{W}_{p}\leq\mathbb{W}_{q}$ for all
$p\leq q$ [18, Remark 6.6]. Hence, since this work is centered on upper bounds,
the focus is on $\mathbb{W}\triangleq\mathbb{W}_{1}$.
###### Definition 3.
A function $f:\mathcal{X}\to\mathbb{R}$ is said to be $L$-Lipschitz under the
metric $\rho$, or simply $f\in L\textnormal{-Lip}(\rho)$, if $|f(x)-f(y)|\leq
L\rho(x,y)$ for all $x,y\in\mathcal{X}$.
###### Lemma 1 (Kantorovich-Rubinstein duality [18, Remark 6.5]).
Let $\mathcal{P}_{1}(\mathcal{X})$ be the space of probability distributions
on $\mathcal{X}$ with a finite first moment. Then, for any two distributions
$P$ and $Q$ in $\mathcal{P}_{1}(\mathcal{X})$
$\mathbb{W}(P,Q)=\sup_{f\in
1\textnormal{-Lip}(\rho)}\Big{\\{}\int\nolimits_{\mathcal{X}}f(x)dP(x)-\int\nolimits_{\mathcal{X}}f(x)dQ(x)\Big{\\}}.$
(KR duality)
###### Definition 4.
The total variation between two probability distributions $P$ and $Q$ on
$\mathcal{X}$ is
$\smash{\textnormal{{TV}}(P,Q)\triangleq\sup\nolimits_{A\in\mathscr{X}}\big{\\{}P(A)-Q(A)\\}.}$
###### Definition 5.
The discrete metric is $\rho_{\textnormal{H}}(x,y)\triangleq\mathbbm{1}[x\neq
y]$, where $\mathbbm{1}$ is the indicator function.
###### Remark 2.
A bounded function $f:\mathcal{X}\to[a,b]$ is $(b-a)$-Lipschitz under the
discrete metric $\rho_{\textnormal{H}}$.
###### Remark 3.
The Wasserstein distance of order 1 is dominated by the total variation. For
instance, if $P$ and $Q$ are two distributions on $\mathcal{X}$ then
$\mathbb{W}(P,Q)\leq d_{\rho}(\mathcal{X})\textnormal{{TV}}(P,Q)$, where
$d_{\rho}(\mathcal{X})$ is the diameter of $\mathcal{X}$. In particular, when
the discrete metric is considered $\mathbb{W}(P,Q)=\textnormal{{TV}}(P,Q)$
[18, Theorem 6.15].
###### Lemma 2 (Pinsker’s and Bretagnolle–Huber’s (BH) inequalities).
Let $P$ and $Q$ be two probability distributions on $\mathcal{X}$ and define
$\Psi(x)\triangleq\sqrt{\min\\{x/2,1-\exp(-x)\\}}$, then [19, Theorem 6.5] and
[20, Proof of Lemma 2.1] state that
$\displaystyle\smash{\textnormal{{TV}}(P,Q)\leq\Psi(D_{\textnormal{KL}}(P\>\|\>Q)).}$
## 3 Expected Generalization Error Bounds
This section presents our main results. First, in §3.1 and §3.2, single-letter
and random-subset bounds based on the Wasserstein distance are introduced for
the studied settings. These subsections also show how these bounds are tighter
than current bounds based on the Wasserstein distance and the relative
entropy. Moreover, an example where these bounds outperform current bounds is
provided. Then, in §3.3 it is shown how to obtain analogous bounds to those in
§3.1 and §3.2 for the backward channel. Finally, §3.4 shows how the presented
results lead to a rich set of new bounds based on different information
measures. All complete proofs and technical details are deferred to the
appendix.
### 3.1 Standard setting
In [15, Theorem 2], the authors show that the expected generalization error is
bounded from above by the Wasserstein distance between the forward channel
distribution $P_{W|S}$ and the marginal distribution of the hypothesis
$P_{W}$. More specifically, when the loss function $\ell$ is $L$-Lipschitz
under a metric $\rho$ for all $z\in\mathcal{Z}$ and the hypothesis space
$\mathcal{W}$ is Polish, then
$\smash{\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq
L\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W})\big{]}=L\int_{\mathcal{Z}^{n}}\mathbb{W}(P_{W|S=s},P_{W})dP_{Z}^{\otimes{n}}(s)}.$
(1)
This bound considers both the geometry of the hypothesis space by means of the
metric $\rho$ and the dependence between the hypothesis and the dataset via
the discrepancy between the forward channel $P_{W|S}$ and the marginal
$P_{W}$. Nonetheless, it is not clear how it relates to other results that are
agnostic to the geometry of the space. For instance, when the loss function
$\ell$ is bounded in $[a,b]$ and the geometry is ignored (i.e., the discrete
metric is considered), then
$\smash{\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq(b-a)\mathbb{E}\big{[}\texttt{TV}(P_{W|S},P_{W})\big{]}\leq(b-a)}\Psi(I(W;S)),$
where the inequalities follow from Remark 2, Lemma 2, and Jensen’s inequality
(note that $\Psi(x)$, defined in Lemma 2, is concave in $x$). This result
compares unfavorably with other results employing the mutual information,
e.g., [4, Theorem 1], where the bound has a decaying factor of $1/\sqrt{n}$.
Nonetheless, it is possible to find a single-letter version of [15, Theorem 2]
using a similar strategy to [6, Proposition 1] and [9, Propositions 1 and 3],
which generalizes [16, Theorem 1] to algorithms that may consider the ordering
of the samples. More concretely, the expected generalization error is
controlled by a function of the Wasserstein distance of the hypothesis’
distribution before and after observing a _single sample_ $Z_{i}$, i.e.,
$\mathbb{W}(P_{W|Z_{i}},P_{W})$.
Suppose that the loss function $\ell$ is $L$-Lipschitz for all
$z\in\mathcal{Z}$ and that the hypothesis space $\mathcal{W}$ is Polish. Then,
$\smash{\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{L}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{W|Z_{i}},P_{W})\big{]}.}$
Moreover, when the loss function is bounded and the geometry of the space is
ignored by considering the discrete metric, this single-letter result can
improve upon current relative entropy and mutual information bounds.
Under the conditions of Theorem 3.1, if the loss $\ell$ is bounded in $[a,b]$,
then
$\smash{\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{b-a}{n}\sum\nolimits_{i=1}^{n}\mathbb{E}\big{[}\textnormal{{TV}}(P_{W|Z_{i}},P_{W})\big{]}\leq\frac{b-a}{n}\sum\nolimits_{i=1}^{n}\mathbb{E}\big{[}\Psi\big{(}D_{\textnormal{KL}}(P_{W|Z_{i}}\>\|\>P_{W})\big{)}\big{]}}.$
Corollary 3.1 improves upon [6, Proposition 1] in two different ways. First,
it pulls the expectation with respect to the samples $P_{Z_{i}}$ outside of
the concave square root, thus strengthening that result via Jensen’s
inequality. Second, the addition of the BH inequality ensures that heavily
influential samples (high $I(W;Z_{i})$) do not contribute too negatively to
the bound, which is ensured to be non-vacuous. Moreover, contrary to (1), a
further application of Jensen’s inequality and [6, Proposition 2] indicates
that Corollary 3.1 compares favorably with [4, Theorem 1], exhibiting the
decaying factor of $1/\sqrt{n}$,
$\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{b-a}{n}\smash{\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{W|Z_{i}},P_{W})\big{]}\leq(b-a)\Psi\Big{(}\frac{I(W;S)}{n}\Big{)}\leq\sqrt{\frac{(b-a)^{2}I(W;S)}{2n}}}.$
(2)
It is also possible to obtain a random-subset version of [15, Theorem 2] using
a similar strategy to [9, Propositions 2 and 4]. This kind of bound, rather
than looking at how knowing a _single sample_ $Z_{i}$ modifies the hypothesis
distribution, i.e., $\mathbb{W}(P_{W|Z_{i}},P_{W})$, looks at how the
knowledge of a set of samples $S_{J}$ alters the hypothesis distribution when
all the other samples, $S_{J^{c}}$, used to obtain the hypothesis are known
too, i.e., $\mathbb{W}(P_{W|S},P_{W|S_{J^{c}}})$.
Suppose that the loss function $\ell$ is $L$-Lipschitz for all
$z\in\mathcal{Z}$ and that the hypothesis space $\mathcal{W}$ is Polish. Let
$J$ be a uniformly random subset of $[n]$ such that $|J|=m$, and that is
independent of $W$ and $S$. Let also $R$ be a random variable independent of
$S$ and $J$. Then,
$\displaystyle\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq L\mathbb{E}\big{[}\mathbb{W}(P_{W|S,R},P_{W|S_{J^{c}},R})\big{]}\quad\textnormal{and}$
$\displaystyle\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{L}{m}\mathbb{E}\bigg{[}\sum_{i\in J}\mathbb{E}\big{[}\mathbb{W}(P_{W|S_{J^{c}}\cup Z_{i},R},P_{W|S_{J^{c}},R})\mid J\big{]}\bigg{]}.$
In particular, when $m=1$, the two equations from Theorem 3.1 reduce to [21,
Lemma 3]
$\smash{\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq
L\mathbb{E}\big{[}\mathbb{W}(P_{W|S,R},P_{W|S^{-J},R})\big{]}},$
where $\smash{S^{-J}=S\setminus Z_{J}}$, i.e., the whole dataset except sample
$Z_{J}$. Moreover, if the loss is bounded and the geometry is ignored, Theorem
3.1 improves upon the tightest bounds in terms of the relative entropy of
random subsets, cf. [7, Theorem 2.5]. Under the conditions of Theorem 3.1, if
the loss is bounded in $[a,b]$, then
$\smash{\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq(b-a)\mathbb{E}\big{[}\textnormal{{TV}}(P_{W|S,R},P_{W|S^{-J},R})\big{]}\leq\frac{b-a}{n}\sum_{j=1}^{n}\mathbb{E}\big{[}\Psi\big{(}D_{\textnormal{KL}}(P_{W|S,R}\>\|\>P_{W|S^{-j},R})\big{)}\big{]}.}$
These data-dependent bounds characterize well the expected generalization
error of the Langevin dynamics (LD) and stochastic gradient Langevin dynamics
(SGLD) algorithms [7, Theorems 3.1 and 3.3], where $R$ is an artificial random
variable used to encode some knowledge necessary to characterize the
hypothesis distribution, such as the batch indices of SGLD. In particular,
Corollary 3.1 improves upon [7, Theorem 2.5] by tightening the elements of the
expectation with respect to $J$ for which the divergence is large ($\gtrapprox
1.6$).
It is possible to prove that Theorem 3.1 is tighter than [15, Theorem 1]. This
follows by studying the KR dual representation of the Wasserstein distance and
noting that the conditional distribution $P_{W|Z_{i}}$ is a smoothed version
of the forward channel, i.e., $P_{W|Z_{i}}=\mathbb{E}[P_{W|S}|Z_{i}]$.
Comparisons with Theorem 2 are also possible using similar arguments and the
triangle inequality. These results are informally summarized below and
presented with more details and the proofs in Appendix D.1.
Consider the standard setting. Then, for all $j\subseteq[n]$ and all $i\in j$:
$\displaystyle\mathbb{E}\big{[}\mathbb{W}(P_{W|Z_{i}},P_{W})\big{]}\leq\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W})\big{]}$, where $j=[n]$ ($\Longrightarrow$ single-letter bound $\leq$ [15, Theorem 1]),
$\displaystyle\mathbb{E}\big{[}\mathbb{W}(P_{W|Z_{i}},P_{W})\big{]}\leq\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W|S_{j^{c}}})\big{]}$ ($\Longrightarrow$ single-letter bound $\leq$ random-subset bound), and
$\displaystyle\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W|S_{j^{c}}})\big{]}\leq 2\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W})\big{]}$ ($\Longrightarrow$ random-subset bound $\leq$ $2\cdot$[15, Theorem 1]).
The following example showcases a situation where the presented bounds
outperform the current known bounds based on the Wasserstein distance and the
mutual information.
###### Example 1 (Gaussian location model).
Consider the problem of estimating the mean $\mu$ of a $d$-dimensional
Gaussian distribution with known covariance matrix $\sigma^{2}I_{d}$. Further
consider that there are $n$ samples $S=(Z_{1},\ldots,Z_{n})$ available, the
loss is measured with the Euclidean distance $\ell(w,z)=\lVert w-z\rVert_{2}$,
and the estimation is their empirical mean $W=\frac{1}{n}\sum_{i=1}^{n}Z_{i}$.
Figure 1: Expected generalization error and generalization error bounds for
the Gaussian location model with $\mathcal{N}(\mu,1)$ (left) and
$\mathcal{N}(\mu,I_{250})$ (right). See Appendix E for the details.
In this example, the expected generalization error can be calculated exactly
(see Appendix E):
$\overline{\textnormal{gen}}(W,S)=\sqrt{\frac{2\sigma^{2}}{n}}\Big{(}\sqrt{n+1}-\sqrt{n-1}\Big{)}\frac{\Gamma\big{(}\frac{d+1}{2}\big{)}}{\Gamma\big{(}\frac{d}{2}\big{)}}\in\mathcal{O}\bigg{(}\frac{\sqrt{\sigma^{2}d}}{n}\bigg{)}.$
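As a quick check of the stated rates, the closed form above can be evaluated directly. The helper name `gen_error` below is our own, and `math.lgamma` is used so the Gamma ratio stays numerically stable for larger $d$:

```python
import math

# Direct evaluation (a sketch) of the closed-form expected generalization
# error for the Gaussian location model stated above.
def gen_error(n, d, sigma2):
    # math.lgamma keeps the ratio Gamma((d+1)/2) / Gamma(d/2) stable for
    # moderately large d, where math.gamma itself would overflow.
    log_ratio = math.lgamma((d + 1) / 2) - math.lgamma(d / 2)
    return (math.sqrt(2 * sigma2 / n)
            * (math.sqrt(n + 1) - math.sqrt(n - 1))
            * math.exp(log_ratio))

# sqrt(n+1) - sqrt(n-1) ~ 1/sqrt(n), so the error decays as 1/n:
# doubling n should roughly halve it ...
g1, g2 = gen_error(1000, 250, 1.0), gen_error(2000, 250, 1.0)
assert 1.9 < g1 / g2 < 2.1
# ... and Gamma((d+1)/2)/Gamma(d/2) ~ sqrt(d/2), so quadrupling d doubles it.
h1, h2 = gen_error(1000, 100, 1.0), gen_error(1000, 400, 1.0)
assert 1.9 < h2 / h1 < 2.1
```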
As discussed in [6], the bound from [4] is not applicable in this setting
since $I(W;S)\to\infty$ and since $\ell(w,Z)$ is not subgaussian given that
$\textnormal{Var}[\ell(w,Z)]\to\infty$ as $\lVert w\rVert_{2}\to\infty$. When
$d=1$, the loss $\ell(W,Z)$ is $1$-subgaussian and the individual sample
mutual information (ISMI) bound from [6] produces a bound in
$\smash{\mathcal{O}\big{(}\sqrt{\sigma^{2}/n}\big{)}}$, which decreases slower
than the true generalization error, see Figure 1. This happens since the bound
grows as the square root of $I(W;Z_{i})$, which is in $\mathcal{O}(1/n)$.
In this scenario, the loss is 1-Lipschitz under
$\smash{\rho(w,w^{\prime})=\lVert w-w^{\prime}\rVert_{2}}$, and thus the
bounds based on the Wasserstein distance are applicable. Applying the bound
from [15] yields a bound in
$\smash{\mathcal{O}\big{(}\sqrt{\sigma^{2}d/n}\big{)}}$, which decreases at
the same sub-optimal rate as the ISMI bound. However, both the single-letter
and random-subset Wasserstein distance bounds presented above produce
bounds in $\smash{\mathcal{O}\big{(}\sqrt{\sigma^{2}d}/n\big{)}}$, which
decrease at the same rate as the true generalization error (see Figure 1).
#### 3.1.1 Outline of the proofs
Similarly to [15, 16], the proofs of the theorems in this section are based on
operating with $\overline{\textnormal{gen}}(W,S)$ until an expression of the
type $\mathbb{E}[f(X^{\prime},Y)-f(X,Y)]$ is reached, where $X^{\prime}$ is an
independent copy of $X$ such that $P_{X^{\prime},Y}=P_{X}\otimes P_{Y}$, and
then applying the KR duality. For example, in Theorem 3.1 such an expression
is achieved with $X=W$, $Y=Z_{i}$, and $f=\ell$. To arrive at these
expressions, the proofs of the two theorems above operate with
$\overline{\textnormal{gen}}(W,S)$ in different forms. More precisely,
* (Th. 3.1, single-letter bound)
Since the samples $Z_{i}$ are independent and the expectation is a linear
operator, the proof follows working with the quantity
$\overline{\textnormal{gen}}(W,S)=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[\ell(W^{\prime},Z_{i})-\ell(W,Z_{i})]$.
* (Th. 3.1, random-subset bound)
Note that $\mathbb{E}[\mathscr{L}_{s_{J}}(w)]=\mathscr{L}_{s}(w)$, where $J$
is a uniformly random subset of $[n]$ of size $m$ and $s_{J}$ is the subset of
$s$ indexed by $J$. This equality follows since there are $\binom{n}{m}$
subsets of size $m$ and each sample $z_{i}$ belongs to only $\binom{n-1}{m-1}$
of them. Hence,
$\smash{\mathbb{E}[\mathscr{L}_{s_{J}}(w)]=\frac{1}{\binom{n}{m}}\sum\nolimits_{j\in\mathcal{J}}\frac{1}{m}\sum_{i\in
j}\ell(w,z_{i})=\frac{1}{n}\sum_{i=1}^{n}\ell(w,z_{i})=\mathscr{L}_{s}(w).}$
Then, the proof follows working with the quantity
$\overline{\textnormal{gen}}(W,S)=\mathbb{E}[\mathscr{L}_{S_{J}}(W^{\prime})-\mathscr{L}_{S_{J}}(W)]$.
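The counting identity $\mathbb{E}[\mathscr{L}_{s_{J}}(w)]=\mathscr{L}_{s}(w)$ behind the second bullet can be verified by brute force over all $\binom{n}{m}$ subsets; the sketch below uses random stand-ins for the per-sample losses $\ell(w,z_{i})$:

```python
import itertools
import math
import random

# Brute-force check (a sketch) of the counting identity from the proof
# outline: the average of the subset empirical risks L_{s_J}(w) over all
# size-m subsets J of [n] equals the full empirical risk L_s(w), since each
# sample belongs to C(n-1, m-1) of the C(n, m) subsets.
random.seed(0)
n, m = 6, 3
losses = [random.random() for _ in range(n)]  # stand-ins for ell(w, z_i)

full_risk = sum(losses) / n
subset_risks = [sum(losses[i] for i in j) / m
                for j in itertools.combinations(range(n), m)]

assert len(subset_risks) == math.comb(n, m)  # C(6, 3) = 20 subsets here
assert abs(sum(subset_risks) / len(subset_risks) - full_risk) < 1e-12
```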
### 3.2 Randomized-subsample setting
In the randomized-subsample setting the focus shifts from studying the impact
of the samples on the hypothesis distribution to the impact of the samples’
identities on the hypothesis distribution. For example, the analogous result
to (1) is
$\smash{\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq
2L\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S}}})\big{]}.}$
(3)
Similarly to the standard setting, considering the discrete metric and
applying Pinsker’s and Jensen’s inequalities leads to a less favorable bound
than current bounds based on the mutual information [1, Theorem 5.1] since it
does not explicitly decrease as $1/\sqrt{n}$. However, the bound still admits
a tighter (see Appendix D.1) single-letter version.
Suppose that the loss function $\ell$ is $L$-Lipschitz for all
$z\in\mathcal{Z}$ and that the hypothesis space $\mathcal{W}$ is Polish and
let $\tilde{S}_{i}\triangleq(\tilde{Z}_{i},\tilde{Z}_{i+n})$. Then,
$\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{2L}{n}\smash{\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S}_{i},U_{i}}},P_{\smash{W|\tilde{S}_{i}}})\big{]}.}$
As in the standard setting, when the loss function is bounded and the geometry
of the space is ignored, this result improves upon current single-letter
bounds based on the mutual information [9, 12] by pulling the expectation with
respect to the samples $P_{\smash{\tilde{S}_{i}}}$ out of the square root.
Here, the BH inequality is not considered since
$D_{\textnormal{KL}}(P_{\smash{W|\tilde{S}_{i},U_{i}}}\>\|\>P_{\smash{W|\tilde{S}_{i}}})\leq\log(2)$;
see Appendix F for the details. Under the conditions of Theorem 3.2, if the
loss $\ell$ is bounded in $[a,b]$, then
$\displaystyle\big{|}\overline{\textnormal{gen}}(W,S)\big{|}$
$\displaystyle\leq\frac{2(b-a)}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\textnormal{{TV}}(P_{\smash{W|\tilde{S}_{i},U_{i}}},P_{\smash{W|\tilde{S}_{i}}})\big{]}$
$\displaystyle\leq\frac{b-a}{n}\smash{\sum_{i=1}^{n}\mathbb{E}\Big{[}\sqrt{2D_{\textnormal{KL}}(P_{\smash{W|\tilde{S}_{i},U_{i}}}\>\|\>P_{\smash{W|\tilde{S}_{i}}})}\Big{]}}.$
In this setting, Corollary 3.2 also decreases at a $1/\sqrt{n}$ rate and is
tighter than [1, Theorem 5.1].
$\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{2(b-a)}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{Z}_{i},\tilde{Z}_{i+n},U_{i}}},P_{\smash{W|\tilde{Z}_{i},\tilde{Z}_{i+n}}})\big{]}\leq\sqrt{\frac{2(b-a)^{2}I(W;U|\tilde{S})}{n}}.$
Finally, the randomized-subsample setting also accepts random-subset bounds.
These bounds study how the knowledge of the _identities of a set of samples_
that were used for training, $U_{J}$, alters the hypothesis distribution when
all the other identities, $U_{J^{c}}$, and all the samples, $\tilde{S}$, are
known. Suppose that the loss function $\ell$ is $L$-Lipschitz for all
$z\in\mathcal{Z}$ and that the hypothesis space $\mathcal{W}$ is Polish. Let
$J$ be a uniformly random subset of $[n]$ such that $|J|=m$, and that is
independent of $W$, $\tilde{S}$, and $U$. Let also $R$ be a random variable
independent of $\tilde{S}$, $U$, and $J$. Then,
$\displaystyle\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq 2L\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S},U,R}},P_{\smash{W|\tilde{S},U_{J^{c}},R}})\big{]}\quad\textnormal{and}$
$\displaystyle\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{2L}{m}\mathbb{E}\Big{[}\sum_{i\in J}\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S},U_{J^{c}}\cup U_{i},R}},P_{\smash{W|\tilde{S},U_{J^{c}},R}})\mid J\big{]}\Big{]}.$
Although these bounds are weaker than Theorem 3.2 (see Appendix D.1), their
data-dependent nature may lead to more tractable and sharper bounds in
practice. For example, when the discrete metric is considered, Theorem 3.2
recovers from below current random-subset bounds based on the relative
entropy, which are used to obtain some of the tightest bounds for LD and SGLD
[9, 10]. Under the conditions of Theorem 3.2, for $m=1$, if the loss is bounded
in $[a,b]$, then
$\displaystyle\big{|}\overline{\textnormal{gen}}(W,S)\big{|}$
$\displaystyle\leq
2(b-a)\mathbb{E}\big{[}\textnormal{{TV}}(P_{\smash{W|\tilde{S},U,R}},P_{\smash{W|\tilde{S},U^{-J},R}})\big{]}$
$\displaystyle\leq\frac{b-a}{n}\smash{\sum_{j=1}^{n}}\,\mathbb{E}\Big{[}\sqrt{2D_{\textnormal{KL}}(P_{\smash{W|\tilde{S},U,R}}\>\|\>P_{\smash{W|\tilde{S},U^{-j},R}})}\Big{]}.$
#### 3.2.1 Outline of the proofs
The proofs of the results in this section are similar to those of the standard
setting, hence their similar expressions. However, instead of operating with
the expected generalization error in the form of
$\mathbb{E}[\textnormal{gen}(W,S)]$ they operate with
$\mathbb{E}[\widehat{\textnormal{gen}}(W,\tilde{S},U)]$.
There are two issues that complicate the application of the KR duality as in
the previous proofs. To see them, consider
$\widehat{\textnormal{gen}}(W,\tilde{S},U)$:
* •
Both $\mathscr{L}_{\bar{S}}(W)$ and $\mathscr{L}_{S}(W)$ depend on $P_{U}$.
Hence, considering a copy $W^{\prime}$ of $W$ such that
$P_{\smash{W^{\prime},\tilde{S},U}}=P_{\smash{W,\tilde{S}}}\otimes P_{U}$ does
not help since
$\mathbb{E}[\widehat{\textnormal{gen}}(W,\tilde{S},U)]\neq\mathbb{E}[\mathscr{L}_{\bar{S}}(W^{\prime})-\mathscr{L}_{S}(W)]$.
* •
Even if
$\mathbb{E}[\widehat{\textnormal{gen}}(W,\tilde{S},U)]=\mathbb{E}[\mathscr{L}_{\bar{S}}(W^{\prime})-\mathscr{L}_{S}(W)]$
were true, for some fixed $\tilde{s}$ and $u$, the functions
$\mathscr{L}_{\bar{s}}(w)$ and $\mathscr{L}_{s}(w)$ on $w$ are different, and
thus the KR duality cannot be invoked.
Nonetheless, these two issues are resolved by considering instead
$\smash{\widehat{\textnormal{gen}}(W,\tilde{S},U)=\mathscr{L}_{\bar{S}}(W)-\mathscr{L}_{S}(W)-\mathbb{E}[\mathscr{L}_{\bar{S}}(W^{\prime})-\mathscr{L}_{S}(W^{\prime})],}$
where $W^{\prime}$ is an independent copy of $W$ such that
$P_{\smash{W^{\prime},\tilde{S},U}}=P_{\smash{W,\tilde{S}}}\otimes P_{U}$.
Hence, the equality
$\mathbb{E}[\mathscr{L}_{\bar{S}}(W^{\prime})-\mathscr{L}_{S}(W^{\prime})]=0$
and the triangle inequality $|x+y|\leq|x|+|y|$ lead to the upper bound
$\smash{\big{|}\mathbb{E}[\widehat{\textnormal{gen}}(W,\tilde{S},U)]\big{|}\leq\big{|}\mathbb{E}[\mathscr{L}_{\bar{S}}(W^{\prime})-\mathscr{L}_{\bar{S}}(W)]\big{|}+\big{|}\mathbb{E}[\mathscr{L}_{S}(W^{\prime})-\mathscr{L}_{S}(W)]\big{|},}$
where the KR duality can be applied to each of the terms, albeit at the
expense of an extra factor of 2.
### 3.3 Backward channel
In [17], the authors study the characterization of the expected generalization
error in terms of the discrepancy between the data distribution $P_{S}$ and
the backward channel distribution $P_{S|W}$ motivated by its connection to
rate–distortion theory; see, e.g., [19, Chapters 25–57] or [22, Chapter 10]. An
approach formalizing this intuitive connection is given in Appendix G and
different angles, based on chaining mutual information [13] and compression,
are found in [11, Section 5] and [23].
More concretely, they proved that the generalization error is bounded from
above by the discrepancy of these distributions, where the discrepancy is
measured by the Wasserstein distance of order $p$ with the Minkowski distance
of order $p$ as a metric, i.e., $\rho(x,y)=\lVert x-y\rVert_{p}$. Namely,
$\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{L}{n^{1/p}}\mathbb{E}[\mathbb{W}_{p,\lvert\cdot\rvert}^{p}(P_{S},P_{S|W})]^{1/p}.$
Similarly, the results from §3.1 and §3.2 can be replicated by considering the
backward channel instead of the forward channel, e.g., $P_{S|W}$ instead of
$P_{W|S}$ in (1) and $P_{Z_{i}|W}$ instead of $P_{W|Z_{i}}$ in Theorem 3.1.
However, in this case, the loss $\ell$ would be required to be Lipschitz with
respect to the sample space $\mathcal{Z}$ and not the hypothesis space
$\mathcal{W}$, i.e., Lipschitz for every fixed $w\in\mathcal{W}$, thus
exploiting the geometry of the samples’ space rather than that of the
hypotheses.
As an example, noting that
$\overline{\textnormal{gen}}(W,S)=\mathbb{E}[\mathscr{L}_{S^{\prime}}(W)-\mathscr{L}_{S}(W)]$,
where $S^{\prime}$ is an independent copy of $S$ such that
$P_{W,S^{\prime}}=P_{W}\otimes P_{S}$, produces the bound
$\smash{\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq
L\mathbb{E}[\mathbb{W}(P_{S},P_{S|W})]}.$
Compared to [17], these results (i) are valid for any metric $\rho$ as long as
the loss $\ell$ is Lipschitz under $\rho$; (ii) have single-letter and
random-subset versions; and (iii) have variants in both the standard and
randomized-subsample settings.
### 3.4 Other information measures
The bounds obtained in §3.1 and §3.2 may be manipulated to produce a variety
of new bounds based on common information measures. For example, once the
discrete metric is assumed and since the total variation is symmetric,
applying Pinsker’s inequality with the distributions in the opposite order to
Corollaries 3.1, 3.1, 3.2, and 3.2 and further applying Jensen’s inequality
yields bounds based on the lautum information L [24]. For instance, a
corollary of Theorem 3.2 is
$\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\smash{\frac{b-a}{n}}\sum\nolimits_{i=1}^{n}\Psi\big{(}\textnormal{{L}}(W;Z_{i})\big{)}.$
Similarly, several new bounds based on different $f$-_divergences_ [19,
Chapter 7] may be obtained employing the _joint range strategy_ once the
discrete metric is assumed. As an example, some corollaries of Theorem 3.1
based on the Hellinger distance H and the $\chi^{2}$-divergence (see Appendix
H for a tighter and more general version of (6)) are
$\displaystyle\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{L}{2n}\sum\nolimits_{i=1}^{n}\mathbb{E}\bigg{[}\texttt{H}(P_{W|Z_{i}},P_{W})\sqrt{4-\texttt{H}^{2}(P_{W|Z_{i}},P_{W})}\bigg{]},$
(4)
$\displaystyle\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{L}{\sqrt{2}n}\sum\nolimits_{i=1}^{n}\mathbb{E}\bigg{[}\sqrt{\log\big{(}1+\chi^{2}(P_{W|Z_{i}},P_{W})\big{)}}\bigg{]},\textnormal{ and}$
(5)
$\displaystyle\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{L}{2n}\sum\nolimits_{i=1}^{n}\mathbb{E}\bigg{[}\sqrt{\chi^{2}(P_{W|Z_{i}},P_{W})}\bigg{]}.$
(6)
### 3.5 Final remarks on the generality of the results
Due to Bobkov–Götze’s theorem [14, Theorem 4.8], the relative entropy results
still hold when the loss is both Lipschitz and subgaussian. Hence, the
presented Wasserstein distance bounds are tighter than [4, 6, 7, 9, 10] in a
more general setting. Moreover, the total variation results also hold for any
metric with the added factor of $d_{\rho}(\mathcal{W})$ as per Remark 3. These
results were omitted in the main text for clarity of exposition, but are
included in Appendix B.
Therefore, only when the loss is not Lipschitz but is subgaussian or has a
bounded cumulant function, or when $\mathcal{W}$ is not Polish, are the bounds
from [6, 7, 10, 12] preferred. On the other hand, some common loss functions
such as the cross-entropy, the hinge loss, the Huber loss, or any $L_{p}$ norm
are Lipschitz [25, 26] under an appropriate metric $\rho$; see Appendix A for a
discussion of the role of the metric and the space geometry in the presented
bounds.
## 4 Discussion
This paper introduced several expected generalization error bounds based on
the Wasserstein distance. In particular, these are full-dataset, single-
letter, and random-subset bounds on both the standard and the randomized-
subsample settings. When the Wasserstein distance ignores the geometry of the
hypothesis space and the loss is bounded, the presented bounds are tighter and
recover from below the current bounds based on the relative entropy and the
mutual information [4, 6, 7, 9, 10]; see also Appendix B for stronger, more
general statements. Furthermore, the obtained total-variation and relative-
entropy bounds in the standard setting are ensured to be non-vacuous, i.e.,
smaller than or equal to the trivial bound, thus resolving the issue of
potentially vacuous relative-entropy and mutual-information bounds in the
standard setting. Interestingly, the results for the randomized-subsample
setting are tighter than their analogues in the standard setting only if their
Wasserstein distance (or total variation) is twice as small.
Moreover, the techniques employed to obtain these bounds can also be used to
obtain analogous bounds considering the backward channel and the samples’
space geometry, aiming to facilitate connections between the generalization
error characterization and rate–distortion theory, as suggested by Lopez and
Jog [17]. Nonetheless, when the backward channel can be characterized, these
bounds are interesting in their own right. Finally, the presented bounds may
be used to generate a variety of new bounds in terms of, e.g., the lautum
information or $f$-divergences like the total variation, the relative entropy,
the Hellinger distance, or the $\chi^{2}$-divergence.
### 4.1 Limitations and future work
##### PAC-Bayes bounds
PAC-Bayes bounds ensure that $\smash{\mathbb{E}[\textnormal{gen}(W,S)\mid
S]\geq\alpha(\beta^{-1})}$ with probability no greater than $\beta\in(0,1)$.
Similarly, single-draw PAC-Bayes bounds ensure that
$\textnormal{gen}(W,S)\allowbreak\geq\smash{\alpha(\beta^{-1})}$ with
probability no greater than $\beta\in(0,1)$. These concentration bounds are of
high probability when the dependency on $\smash{\beta^{-1}}$ is logarithmic,
i.e., $\log(1/\beta)$. See [27, 2] for an overview.
The bounds from this work may be used to obtain single-draw PAC-Bayes bounds
applying Markov’s inequality [22, Problem 3.1] directly. For instance,
employing it in Theorem 3.1 implies that
$P_{W,S}\Big{(}\textnormal{gen}(W,S)\geq\smash{\frac{L}{\beta
n}\sum\nolimits_{i=1}^{n}}\mathbb{E}[\mathbb{W}(P_{W|Z_{i}},P_{W})]\Big{)}\leq\beta,$
for all $\beta\in(0,1)$. However, this is not a high-probability bound since
the dependency on $\smash{\beta^{-1}}$ is linear. Hence, high-probability
concentration bounds based on the Wasserstein distance and the total variation
are an avenue for future research. As an example, [28, 29, 30, 31] provide high-
probability single-draw PAC-Bayes bounds based on, respectively, max-
information, differential privacy, $\alpha$-mutual information, and uniform
stability. Similarly, high-probability PAC-Bayes bounds based on the relative
entropy and the hypothesis’ space geometry are given in [8] and [32, 33],
respectively.
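The Markov step above can also be illustrated empirically. The following Monte Carlo sketch, with an exponential random variable as a stand-in for the non-negative quantity being bounded, verifies the guarantee $P(X\geq\mathbb{E}[X]/\beta)\leq\beta$ that underlies the displayed bound:

```python
import random

# Monte Carlo illustration (a sketch) of the Markov step behind the
# single-draw bound: for a non-negative random variable X,
# P(X >= E[X] / beta) <= beta. An Exp(1) variable is an illustrative stand-in.
random.seed(1)
samples = [random.expovariate(1.0) for _ in range(200_000)]
mean = sum(samples) / len(samples)  # close to E[X] = 1

for beta in (0.5, 0.1, 0.01):
    tail = sum(x >= mean / beta for x in samples) / len(samples)
    assert tail <= beta  # Markov's guarantee holds (with room to spare here)
```

The guarantee is tight only for very skewed distributions; here the actual tail is far below $\beta$, which is consistent with the bound being valid yet only linear in $1/\beta$.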
##### New bounds to specific algorithms
The Wasserstein distance is difficult to characterize and/or estimate.
Nonetheless, some of the bounds that can be obtained from it, e.g., mutual-
information and relative-entropy bounds, have been used to obtain analytical
bounds on specific algorithms, e.g., Langevin dynamics and stochastic gradient
Langevin dynamics [6, 7, 9, 10]. Some of these results can be readily
tightened with Corollaries 3.1, 3.1, and 3.2. Hence, deriving new analytical
bounds for specific algorithms based on the presented results is also a topic
for further research.
##### Connections to stability and privacy measures
A learning algorithm is said to be stable if a small change in the input
dataset produces a small variation in the output hypothesis. There are various
attempts at quantifying this notion such as uniform stability [34], where the
variation in the output hypothesis is seen in terms of the loss, and
differential privacy (DP) [28], where this variation is seen in terms of the
hypothesis distribution. These notions are tied to the generalization
capability of an algorithm, i.e., the less a hypothesis depends on the
specifics of the data samples, the better it will generalize, and hence there
are works obtaining generalization bounds based on stability, see e.g., [29,
31]. In particular, there are some works that, assuming some stability notion
such as DP, bound from above the relative entropy and the mutual information
appearing in some of the bounds that can be derived from the results presented
in this work, hence also tying stability and generalization, cf. [1, 35, 36].
Therefore, a future line of research is to investigate how different notions
of stability can be combined with the measures of similarity between
distributions employed in this work to characterize the generalization error.
## Funding
This work was funded in part by the Swedish research council under contract
2019-03606.
## References
* Steinke and Zakynthinou [2020] T. Steinke and L. Zakynthinou, “Reasoning about generalization via conditional mutual information,” in _Conference on Learning Theory_ , ser. Proceedings of Machine Learning Research, vol. 125, Jul. 2020, pp. 3437–3452.
* Shalev-Shwartz and Ben-David [2014] S. Shalev-Shwartz and S. Ben-David, _Understanding machine learning: From theory to algorithms_. Cambridge university press, 2014.
* Vapnik [2013] V. Vapnik, _The Nature of Statistical Learning Theory_ , ser. Information Science and Statistics. Springer Science & Business Media, 2013.
* Xu and Raginsky [2017] A. Xu and M. Raginsky, “Information-theoretic analysis of generalization capability of learning algorithms,” in _Advances in Neural Information Processing Systems_ , 2017, pp. 2524–2533.
* Russo and Zou [2020] D. Russo and J. Zou, “How much does your data exploration overfit? Controlling bias via information usage,” _IEEE Transactions on Information Theory_ , vol. 66, no. 1, pp. 302–323, Jan. 2020.
* Bu et al. [2020a] Y. Bu, S. Zou, and V. V. Veeravalli, “Tightening mutual information based bounds on generalization error,” _IEEE Journal on Selected Areas in Information Theory_ , vol. 1, no. 1, pp. 121–130, May 2020.
* Negrea et al. [2019] J. Negrea, M. Haghifam, G. K. Dziugaite, A. Khisti, and D. M. Roy, “Information-theoretic generalization bounds for SGLD via data-dependent estimates,” in _Advances in Neural Information Processing Systems_ , 2019, pp. 11 015–11 025.
* Hellström and Durisi [2020] F. Hellström and G. Durisi, “Generalization bounds via information density and conditional information density,” _IEEE Journal on Selected Areas in Information Theory_ , vol. 1, no. 3, pp. 824–839, Nov. 2020.
* Rodríguez-Gálvez et al. [2020] B. Rodríguez-Gálvez, G. Bassi, R. Thobaben, and M. Skoglund, “On random subset generalization error bounds and the stochastic gradient Langevin dynamics algorithm,” in _IEEE Information Theory Workshop (ITW)_. IEEE, 2020.
* Haghifam et al. [2020] M. Haghifam, J. Negrea, A. Khisti, D. M. Roy, and G. K. Dziugaite, “Sharpened generalization bounds based on conditional mutual information and an application to noisy, iterative algorithms,” in _Advances in Neural Information Processing Systems_ , 2020.
* Hafez-Kolahi et al. [2020] H. Hafez-Kolahi, Z. Golgooni, S. Kasaei, and M. Soleymani, “Conditioning and processing: Techniques to improve information-theoretic generalization bounds,” _Advances in Neural Information Processing Systems_ , vol. 33, 2020.
* Zhou et al. [2020] R. Zhou, C. Tian, and T. Liu, “Individually conditional individual mutual information bound on generalization error,” _arXiv preprint arXiv:2012.09922_ , 2020.
* Asadi et al. [2018] A. Asadi, E. Abbe, and S. Verdú, “Chaining mutual information and tightening generalization bounds,” in _Advances in Neural Information Processing Systems_ , 2018, pp. 7234–7243.
* van Handel [2014] R. van Handel, “Probability in high dimension,” Princeton University, NJ, Tech. Rep., 2014.
* Wang et al. [2019] H. Wang, M. Diaz, J. C. S. Santos Filho, and F. P. Calmon, “An information-theoretic view of generalization via Wasserstein distance,” in _2019 IEEE International Symposium on Information Theory (ISIT)_. IEEE, 2019, pp. 577–581.
* Zhang et al. [2018] J. Zhang, T. Liu, and D. Tao, “An optimal transport view on generalization,” _arXiv preprint arXiv:1811.03270_ , 2018.
* Lopez and Jog [2018] A. T. Lopez and V. Jog, “Generalization error bounds using Wasserstein distances,” in _2018 IEEE Information Theory Workshop (ITW)_. IEEE, 2018, pp. 1–5.
* Villani [2008] C. Villani, _Optimal Transport: Old and New_ , ser. Grundlehren der mathematischen Wissenschaften. Springer Science & Business Media, 2008, vol. 338.
* Polyanskiy and Wu [2017] Y. Polyanskiy and Y. Wu, “Lecture notes on Information Theory,” _MIT (6.441), UIUC (ECE 563), Yale (STAT 664)_ , 2017.
* Bretagnolle and Huber [1978] J. Bretagnolle and C. Huber, “Estimation des densités: risque minimax,” in _Séminaire de Probabilités XII_. Springer, 1978, pp. 342–363, in French.
* Raginsky et al. [2016] M. Raginsky, A. Rakhlin, M. Tsao, Y. Wu, and A. Xu, “Information-theoretic analysis of stability and bias of learning algorithms,” in _2016 IEEE Information Theory Workshop (ITW)_. IEEE, 2016, pp. 26–30.
* Cover and Thomas [2006] T. M. Cover and J. A. Thomas, _Elements of Information Theory_ , 2nd ed. John Wiley & Sons, 2006.
* Bu et al. [2020b] Y. Bu, W. Gao, S. Zou, and V. Veeravalli, “Information-theoretic understanding of population risk improvement with model compression,” _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 34, no. 04, pp. 3300–3307, Apr. 2020.
* Palomar and Verdú [2008] D. P. Palomar and S. Verdú, “Lautum information,” _IEEE Transactions on Information Theory_ , vol. 54, no. 3, pp. 964–975, 2008.
* Steinwart and Christmann [2008] I. Steinwart and A. Christmann, _Support vector machines_. Springer Science & Business Media, 2008.
* Gao and Pavel [2018] B. Gao and L. Pavel, “On the properties of the softmax function with application in game theory and reinforcement learning,” 2018.
* Guedj [2019] B. Guedj, “A primer on PAC-Bayesian learning,” _arXiv preprint arXiv:1901.05353_ , 2019.
* Dwork et al. [2014] C. Dwork, A. Roth _et al._ , “The algorithmic foundations of differential privacy.” _Foundations and Trends in Theoretical Computer Science_ , vol. 9, no. 3-4, pp. 211–407, 2014.
* Dwork et al. [2015] C. Dwork, V. Feldman, M. Hardt, T. Pitassi, O. Reingold, and A. Roth, “Generalization in adaptive data analysis and holdout reuse,” in _Advances in Neural Information Processing Systems_ , 2015, pp. 2350–2358.
* Esposito et al. [2021] A. R. Esposito, M. Gastpar, and I. Issa, “Generalization error bounds via Rényi-, $f$-divergences and maximal leakage,” _IEEE Transactions on Information Theory_ , vol. 67, no. 8, pp. 4986–5004, 2021.
* Bousquet et al. [2020] O. Bousquet, Y. Klochkov, and N. Zhivotovskiy, “Sharper bounds for uniformly stable algorithms,” in _Conference on Learning Theory_ , ser. Proceedings of Machine Learning Research, 2020, pp. 610–626.
* Audibert and Bousquet [2003] J.-Y. Audibert and O. Bousquet, “PAC-Bayesian generic chaining,” in _NIPS_. Citeseer, 2003, pp. 1125–1132.
* Audibert and Bousquet [2007] J.-Y. Audibert and O. Bousquet, “Combining PAC-Bayesian and generic chaining bounds,” _Journal of Machine Learning Research_ , vol. 8, no. 4, 2007.
* Bousquet and Elisseeff [2002] O. Bousquet and A. Elisseeff, “Stability and generalization,” _Journal of machine learning research_ , vol. 2, no. Mar, pp. 499–526, 2002.
* Bun and Steinke [2016] M. Bun and T. Steinke, “Concentrated differential privacy: Simplifications, extensions, and lower bounds,” in _Theory of Cryptography Conference_. Springer, 2016, pp. 635–658.
* Rodríguez-Gálvez et al. [2021] B. Rodríguez-Gálvez, G. Bassi, and M. Skoglund, “Upper bounds on the generalization error of private algorithms for discrete data,” _IEEE Transactions on Information Theory_ , vol. 67, no. 11, pp. 7362–7379, 2021.
* Haussler [1995] D. Haussler, “Sphere packing numbers for subsets of the Boolean n-cube with bounded Vapnik–Chervonenkis dimension,” _Journal of Combinatorial Theory, Series A_ , vol. 69, no. 2, pp. 217–232, 1995.
* Feldman and Steinke [2018] V. Feldman and T. Steinke, “Calibrating noise to variance in adaptive data analysis,” in _Conference on Learning Theory_ , ser. Proceedings of Machine Learning Research, 2018, pp. 535–544. [Online]. Available: http://proceedings.mlr.press/v75/feldman18a.html
* McDonald and Weiss [2013] J. N. McDonald and N. A. Weiss, _A Course in Real Analysis_ , 2nd ed. Cambridge, Massachusetts: Elsevier, 2013.
* Gray [2012] R. M. Gray, _Source coding theory_ , ser. Engineering and Computer Science. Springer Science & Business Media, 2012, vol. 83.
* Wu [2017] Y. Wu, “Lecture notes on information-theoretic methods for high-dimensional statistics,” _Lecture Notes for ECE598YW (UIUC)_ , vol. 16, 2017.
* Popoviciu [1935] T. Popoviciu, “Sur les équations algébriques ayant toutes leurs racines réelles,” _Mathematica_ , vol. 9, pp. 129–145, 1935.
* Harremoës and Vajda [2011] P. Harremoës and I. Vajda, “On pairs of $f$-divergences and their joint range,” _IEEE Transactions on Information Theory_ , vol. 57, no. 6, pp. 3230–3235, 2011.
## Appendix A A short note on geometry and generalization error bounds
### A.1 Chaining- and Wasserstein-based bounds
Ever since it was shown that some infinite-dimensional hypothesis spaces
were PAC-learnable thanks to the VC dimension [2, Chapter 6], there has been
an interest in studying the role of the complexity and geometry of the
hypothesis space in determining the generalization of an algorithm, e.g., [37].
Some of the most interesting results come from the theory of random processes.
Adapted to our notation, this theory considers the set of random variables (or
random process) $\\{\textnormal{gen}(w,S)\\}_{w\in\mathcal{W}}$. Then, this
theory bounds the generalization error using the $\epsilon$-covering number of
the hypothesis space $\mathcal{N}(\mathcal{W},\rho,\epsilon)$ with some
requirements on the smoothness of the hypothesis space $\mathcal{W}$ under a
metric $\rho$. Here, the geometry of the space captures its complexity via the
$\epsilon$-covering number, which is defined as the cardinality of the minimum
set $\mathcal{N}$ such that for every hypothesis $w$ in $\mathcal{W}$, there is
an element of the set $x\in\mathcal{N}$ such that $\rho(x,w)\leq\epsilon$. The
two main techniques from this line of work are:
* •
The Lipschitz maximal inequality [14, Lemma 5.7]. Here, the smoothness
condition is to require the process to be Lipschitz; that is, that there is a
random variable $C$ such that for all $w,w^{\prime}\in\mathcal{W}$
$\lvert\textnormal{gen}(w,S)-\textnormal{gen}(w^{\prime},S)\rvert\leq
C\rho(w,w^{\prime}).$
Then, with the additional condition that $\textnormal{gen}(w,S)$ should be
$\sigma$-subgaussian under $P_{S}$ for all $w\in\mathcal{W}$, this inequality
bounds the generalization error as follows:
$\mathbb{E}[\textnormal{gen}(W,S)]\leq\inf_{\epsilon\in\mathbb{R}}\big{\\{}\epsilon\mathbb{E}[C]+\sqrt{2\sigma^{2}\log\mathcal{N}(\mathcal{W},\rho,\epsilon)}\big{\\}}.$
Note how this technique creates a tension between the fineness $\epsilon$ of
the covering and the smoothness $\mathbb{E}[C]$ of the process.
* •
Dudley’s chaining technique [14, Theorem 5.24]. Here, the smoothness condition
is to require the process to be subgaussian; that is, for all
$w,w^{\prime}\in\mathcal{W}$ and all $\lambda\geq 0$,
$\log\mathbb{E}\Big{[}e^{\lambda\big{(}\textnormal{gen}(w,S)-\textnormal{gen}(w^{\prime},S)\big{)}}\Big{]}\leq\frac{\lambda^{2}\rho\big{(}w,w^{\prime}\big{)}^{2}}{2}$
and $\mathbb{E}[\textnormal{gen}(w,S)]=0$. Then, with the additional condition
that the process is separable, this inequality bounds the generalization error
as follows:
$\mathbb{E}[\textnormal{gen}(W,S)]\leq
6\sum_{k\in\mathbb{Z}}2^{-k}\sqrt{\log\mathcal{N}(\mathcal{W},\rho,2^{-k})}.$
Note that the subgaussian requirement is a relaxation of the Lipschitz
requirement. Accordingly, the bound is now expressed as a sum in which each
finer covering number of the space is weighted by the fineness ($2^{-k}$) of
that covering. Further refined bounds based on this technique can be found in
[32, 33].
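The two chaining-style quantities above can be illustrated numerically. The following is a minimal sketch (not part of the original analysis) that greedily upper-bounds the $\epsilon$-covering number of a discretized unit circle and evaluates a truncated Dudley-type sum over dyadic scales $2^{-k}$; the grid size and scale range are illustrative choices.

```python
import math

def covering_number(points, rho, eps):
    # Greedy construction: keep a point as a center only if it is more
    # than eps away from all existing centers. The resulting centers
    # cover every point within eps, so their count upper-bounds
    # N(W, rho, eps).
    centers = []
    for p in points:
        if all(rho(p, c) > eps for c in centers):
            centers.append(p)
    return len(centers)

# Hypothesis space: a fine discretization of the unit circle in R^2,
# with rho the Euclidean distance (illustrative choice).
n_grid = 360
W = [(math.cos(2 * math.pi * k / n_grid), math.sin(2 * math.pi * k / n_grid))
     for k in range(n_grid)]
rho = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])

# Coarser coverings need fewer centers than finer ones.
N_coarse = covering_number(W, rho, 0.5)
N_fine = covering_number(W, rho, 0.05)

# Dudley-type sum: each dyadic scale 2^{-k} is weighted by
# sqrt(log N(W, rho, 2^{-k})); the range of k is truncated for the demo.
dudley = 6 * sum(
    2 ** (-k) * math.sqrt(math.log(max(covering_number(W, rho, 2 ** (-k)), 1)))
    for k in range(-1, 8))
```

The tension noted above is visible here: shrinking $\epsilon$ inflates the covering number, while enlarging it leaves the $\epsilon\mathbb{E}[C]$ term dominant.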
The first work that incorporated the relationship between the hypothesis and
the training samples into the analysis using random processes was [13]. There, the
authors combined the chaining technique with [4, Lemma 1] to derive a formula
which bounds the generalization error by a weighted average of the mutual
information between the dataset and increasingly finer quantizations $W_{k}$
of the hypothesis. More precisely, they proved that
$\mathbb{E}[\textnormal{gen}(W,S)]\leq
3\sqrt{2}\sum_{k\in\mathbb{Z}}2^{-k}\sqrt{I(W_{k};S)}.$
Therefore, in [13], the geometry of the hypothesis space $\mathcal{W}$ under
the metric $\rho$ is expressed as the amount of information that the
quantizations of the hypothesis under $\rho$ contain about the dataset $S$.
On the other hand, in [15] and this paper, a stronger smoothness requirement
is considered; namely, that the loss function is $L$-Lipschitz, i.e.,
$|\ell(w,z)-\ell(w^{\prime},z)|\leq L\rho(w,w^{\prime})$ for all
$w,w^{\prime}\in\mathcal{W}$ and all $z\in\mathcal{Z}$. This is a stronger
statement since the subgaussian assumption can be viewed as an “in-
probability” version of the Lipschitz assumption. In return, this stronger
assumption allows the generalization error to be bounded by the minimum cost
of transporting the distribution of the hypothesis after observing some
samples to its marginal distribution, e.g.,
$\mathbb{W}(P_{W|Z_{i}},P_{W})$. Here the metric $\rho$ quantifies how far
apart, on average, the realizations of the two distributions are under the
best possible coupling.
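The coupling interpretation of $\mathbb{W}$ can be illustrated with a toy sketch (the distributions below are assumptions for the demo, not the paper's setting): on the real line the optimal coupling is the monotone matching, so the empirical $\mathbb{W}_{1}$ between a hypothetical marginal $P_{W}=\mathcal{N}(0,1)$ and a shifted posterior $P_{W|z}=\mathcal{N}(0.3,1)$ reduces to comparing sorted samples.

```python
import random

def w1_empirical(xs, ys):
    # Empirical 1-Wasserstein distance between equal-size 1-D samples:
    # on the real line the optimal coupling is the monotone (sorted)
    # matching, so W1 is the mean absolute difference of sorted samples.
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

random.seed(0)
# Assumed toy distributions: marginal P_W = N(0, 1) and a hypothetical
# posterior P_{W|z} = N(0.3, 1) after observing one sample.
marginal = [random.gauss(0.0, 1.0) for _ in range(5000)]
posterior = [random.gauss(0.3, 1.0) for _ in range(5000)]

w = w1_empirical(marginal, posterior)  # close to the true W1, i.e. the shift 0.3
```

For two Gaussians with equal variance the true $\mathbb{W}_{1}$ is exactly the distance between the means, which the sorted-matching estimate approximates.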
An interesting property of the presented bounds is that they have the
flexibility to consider either the hypothesis or the sample space (backward
channel). For instance, for [13, Example 1], the presented bounds using the
backward channel are tighter than the bounds arising from the chaining
technique.
In this example, the authors consider the canonical Gaussian process. The
hypothesis space is $\mathcal{W}=\\{w\in\mathbb{R}^{2}:\lVert
w\rVert_{2}=1\\}$, the samples are $Z_{i}\sim\mathcal{N}(0,I_{2})$, the loss
function is $\ell(w,z)=-w^{T}z$, and the hypothesis is selected with the
empirical risk minimization (ERM) algorithm, i.e.,
$w^{\star}=\text{arg}\min_{w\in\mathcal{W}}\big{\\{}\frac{1}{n}\sum_{i=1}^{n}\ell(w,z_{i})\big{\\}}$.
In this setting, by Cauchy–Schwarz we see that the loss $\ell(w,\cdot)$ is
1-Lipschitz for all $w\in\mathcal{W}$; i.e.,
$|-w^{T}z+w^{T}z^{\prime}|\leq\lVert w\rVert_{2}\lVert
z-z^{\prime}\rVert_{2}\leq\lVert z-z^{\prime}\rVert_{2}$ for all
$z,z^{\prime}\in\mathcal{Z}$. Therefore, the backward channel equivalent of
Theorem 3.1 holds. Moreover, the function is 1-subgaussian under $P_{Z}$ for
all $w\in\mathcal{W}$. Therefore, as shown in Appendix B.1, the presented bound
is tighter than [23], which is shown to be tighter than [13] in this setting
(see [23, Section IV-B]).
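The canonical Gaussian example above admits a short Monte Carlo sketch (a check, not part of the source). Here the ERM on the unit circle has the closed form $w^{\star}=\bar{z}/\lVert\bar{z}\rVert$, its population risk is $0$ while its empirical risk is $-\lVert\bar{z}\rVert$, so the generalization gap equals $\lVert\bar{z}\rVert$ with expectation $\sqrt{\pi/(2n)}$; the sample size and trial count are arbitrary.

```python
import math
import random

random.seed(1)

def erm_gen_gap(n):
    # One draw of the generalization gap for the canonical Gaussian
    # example: W = {w in R^2 : ||w|| = 1}, Z ~ N(0, I_2),
    # loss l(w, z) = -w^T z. The ERM is w* = zbar / ||zbar||, with
    # empirical risk -||zbar|| and population risk 0, so the gap is ||zbar||.
    zx = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
    zy = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
    return math.hypot(zx, zy)

n, trials = 100, 2000                  # arbitrary demo sizes
mc = sum(erm_gen_gap(n) for _ in range(trials)) / trials
exact = math.sqrt(math.pi / (2 * n))   # E||zbar||, zbar ~ N(0, I_2 / n)
```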
### A.2 Choice of the metric
The presented bounds in Sections 3.1, 3.2, and 3.3 are valid for any metric
$\rho$ under which the loss is Lipschitz. The Lipschitz property is a property
of the hypothesis space $\mathcal{W}$ (forward channel bounds) or the sample
space $\mathcal{Z}$ (backward channel bounds) and _not_ of the algorithm. That
is, if two different algorithms operate on the same sample space and produce
the same hypothesis space, then they can be characterized with the same
metric.
The choice of the metric can be decisive for a tight analysis of the presented
bounds, and a loss function may be Lipschitz under several metrics. For
example, a bounded loss function represented as a norm is Lipschitz with
respect to both that norm and the discrete metric. Nonetheless, in many
situations the metric of choice becomes apparent from the loss function. For
example, consider the forward channel bounds, samples of the type $z=(x,y)$,
and the following two common supervised tasks:
* •
Regression. If a norm is used as the loss function $\ell(w,z)=\lVert
w-y\rVert$, then such a norm is also a good choice for a metric since by the
reverse triangle inequality the loss is 1-Lipschitz under that metric:
$\big{\lvert}\lVert w-y\rVert-\lVert w^{\prime}-y\rVert\big{\rvert}\leq\lVert
w-w^{\prime}\rVert$ for all $w,w^{\prime}\in\mathcal{W}$.
* •
Classification. If the 0-1 loss is used as the loss function
$\ell(w,z)=\text{Ind}(w\neq y)$, then the discrete metric is a good choice
since the loss is also 1-Lipschitz under this metric: $\lvert\text{Ind}(w\neq
y)-\text{Ind}(w^{\prime}\neq y)\big{\rvert}\leq\text{Ind}(w\neq w^{\prime})$.
Similarly, for the backward channel bounds, it is known that the logistic
loss, the softmax loss (a result that can be derived from [26, Proposition 3]
and the $L_{1}$–$L_{2}$ inequality), the Hinge loss, and many distance-based
losses like norms, the Huber, $\epsilon$-insensitive, and pinball losses, are
Lipschitz under the $L_{1}$ norm metric $\rho(z,z^{\prime})=\lvert
z-z^{\prime}\rvert$ [25, 26].
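The two Lipschitz claims for the supervised tasks above can be checked mechanically. The sketch below (illustrative values only) spot-checks the reverse triangle inequality for a norm-based regression loss and exhaustively checks the 0-1 loss against the discrete metric on a small label set.

```python
import random

random.seed(2)

def discrete(a, b):
    # The discrete metric: 0 if equal, 1 otherwise.
    return 0.0 if a == b else 1.0

# Regression: l(w, (x, y)) = |w - y| is 1-Lipschitz in w under |.|,
# by the reverse triangle inequality (random spot checks).
for _ in range(1000):
    w, wp, y = (random.uniform(-5.0, 5.0) for _ in range(3))
    assert abs(abs(w - y) - abs(wp - y)) <= abs(w - wp) + 1e-12

# Classification: the 0-1 loss l(w, (x, y)) = Ind(w != y) is
# 1-Lipschitz under the discrete metric (exhaustive check on a toy
# label set).
labels = [0, 1, 2]
for w in labels:
    for wp in labels:
        for y in labels:
            assert abs(discrete(w, y) - discrete(wp, y)) <= discrete(w, wp)

ok = True  # reached only if every check above passed
```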
## Appendix B Generality of the results
The main text only presents total variation and relative entropy bounds for
bounded losses. This is to clarify the (geometrical) relationship between the
bounds based on the Wasserstein distance and those based on the relative
entropy. That is, that the former recover the latter when the geometry of the
hypothesis space is ignored.
Nonetheless, these bounds hold more generally for any metric under certain
mild modifications. More concretely, the relative entropy bounds hold when the
loss is also subgaussian. (A random variable $X$ is said to be
$\sigma$-subgaussian if
$\log\mathbb{E}[\exp\lambda(X-\mathbb{E}[X])]\leq\frac{\lambda^{2}\sigma^{2}}{2}$
for all $\lambda\in\mathbb{R}$; a function $f:\mathcal{X}\to\mathbb{R}$ is
said to be $\sigma$-subgaussian under $P$ if $f(X)$ is $\sigma$-subgaussian
and $X\sim P$.) Furthermore, at the cost of an extra factor of
$d_{\rho}(\mathcal{W})$, the total variation bounds still hold for any metric.
### B.1 Extension of the relative entropy bounds to subgaussian losses
Consider a Polish space $(\mathcal{X},\rho)$ and a probability distribution
$P$ on $\mathcal{X}$ with a finite first moment. Then, the Bobkov–Götze’s
theorem [14, Theorem 4.8] says that the following statements are equivalent:
* •
$f$ is $\sigma$-subgaussian under $P$ _for every_ 1-Lipschitz function
$f:\mathcal{X}\to\mathbb{R}$; and,
* •
$\mathbb{W}(Q,P)\leq\sqrt{2\sigma^{2}D_{\textnormal{KL}}(Q\>\|\>P)}$ for all
$Q$ on $\mathcal{X}$.
Therefore, the results from Corollaries 3.1, 3.1, 3.2, and 3.2 are valid in a
more general setting, namely when the loss function $\ell$ is both
$L$-Lipschitz and $\sigma$-subgaussian for all $z\in\mathcal{Z}$. To realize
this, note that if $X\sim P$ is $L$-Lipschitz and $\sigma$-subgaussian, then
$X/L$ is $1$-Lipschitz and $(\sigma/L)$-subgaussian. Hence, if
$|X-\mathbb{E}[X]|\leq L\mathbb{W}(Q,P)$, then it follows that
$|X-\mathbb{E}[X]|\leq
L\sqrt{2(\sigma/L)^{2}D_{\textnormal{KL}}(Q\>\|\>P)}=\sqrt{2\sigma^{2}D_{\textnormal{KL}}(Q\>\|\>P)}$.
As an example, assume that the loss function $\ell$ is $L$-Lipschitz and
$\sigma$-subgaussian under $P_{W}$ for all $z\in\mathcal{Z}$ and that
$\mathcal{W}$ is Polish. Then,
$\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{L}{n}\smash{\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{W|Z_{i}},P_{W})\big{]}\leq\frac{1}{n}\sum_{i=1}^{n}\sqrt{2\sigma^{2}I(W;Z_{i})}\leq\sqrt{\frac{2\sigma^{2}I(W;S)}{n}}}$
is a corollary of Theorem 3.1 due to Bobkov–Götze’s theorem. As in the
bounded-loss case, this equation shows that Theorem 3.1 is tighter than [6,
Proposition 2] and [4, Theorem 1] and exhibits a decaying factor of
$1/\sqrt{n}$. Moreover, this result encompasses the case where the loss is
bounded in $[a,b]$ since, if a random variable is bounded in $[a,b]$, it is
$(b-a)/2$-subgaussian.
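The last fact, that a random variable bounded in $[a,b]$ is $(b-a)/2$-subgaussian (Hoeffding's lemma), can be sanity-checked numerically on the extremal case: a Bernoulli$(1/2)$ variable on $[0,1]$, whose centered MGF is $\cosh(\lambda/2)$. The grid of $\lambda$ values below is an arbitrary choice.

```python
import math

# Hoeffding's lemma: a random variable bounded in [a, b] is
# (b - a)/2-subgaussian. Check the MGF bound on the extremal case
# X ~ Bernoulli(1/2) on [0, 1], whose centered MGF is cosh(lambda / 2).
a, b, p = 0.0, 1.0, 0.5
sigma = (b - a) / 2
for lam in [x / 10 for x in range(-50, 51)]:      # arbitrary lambda grid
    mgf = p * math.exp(lam * (1.0 - p)) + (1.0 - p) * math.exp(lam * (0.0 - p))
    # Subgaussian condition: log E[exp(lam (X - E[X]))] <= lam^2 sigma^2 / 2.
    assert math.log(mgf) <= lam ** 2 * sigma ** 2 / 2 + 1e-12
check = True  # reached only if the bound held on the whole grid
```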
Note, however, that in this case the subgaussianity constant differs from the
constant in, e.g., [4], where the loss $\ell$ was assumed to be
$\nu$-subgaussian under $P_{Z}$ for all $w\in\mathcal{W}$. Considering the
bounds based on the backward channel (§3.3), these constants are exactly the
same, since assuming that the loss function $\ell$ is $L$-Lipschitz and
$\nu$-subgaussian under $P_{Z}$ for all $w\in\mathcal{W}$ and that
$\mathcal{Z}$ is Polish, means that
$\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{L}{n}\smash{\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{Z_{i}|W},P_{Z_{i}})\big{]}\leq\frac{1}{n}\sum_{i=1}^{n}\sqrt{2\nu^{2}I(W;Z_{i})}\leq\sqrt{\frac{2\nu^{2}I(W;S)}{n}}},$
which is always tighter than [4, 6]. As mentioned above, when the loss is
bounded the subgaussianity constant is the same under any distribution.
### B.2 Extension to the total variation bounds for any metric
As per Remark 3, the results based on the total variation also hold for any
metric with the extra factor $d_{\rho}(\mathcal{W})$, where
$d_{\rho}(\mathcal{W})$ is the diameter of the hypothesis space $\mathcal{W}$
under the metric $\rho$. For instance, Corollary 3.1 results in
$\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{Ld_{\rho}(\mathcal{W})}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\textnormal{{TV}}(P_{W|Z_{i}},P_{W})\big{]}\leq\frac{Ld_{\rho}(\mathcal{W})}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\Psi\big{(}D_{\textnormal{KL}}(P_{W|Z_{i}}\>\|\>P_{W})\big{)}\big{]}.$
Note that, since the results based on the total variation hold for any metric,
all the derived results based on different information measures from §3.4 hold
too.
The extra term can be arbitrarily large as, for example, when the hypothesis
is the weights of a neural network and metric $\rho$ is the $\ell_{2}$ norm
$\lVert\cdot\rVert_{2}$. Nonetheless, this term can still be small and
relevant for practical settings. For instance, consider again that the
hypothesis is the weights of a neural network. However, consider now that the
metric $\rho$ is the infinity norm $\lVert\cdot\rVert_{\infty}$ and that each
weight is enforced to be smaller than some small constant $C$. Then, the
diameter of the space $d_{\lVert\cdot\rVert_{\infty}}(\mathcal{W})$ is (at
most) equal to $C$.
## Appendix C Proofs of the theorems from Section 3
### C.1 Proof of Theorem 3.1
Note that
$\mathbb{E}[\mathscr{L}_{P_{Z}}(W)]=\int_{\mathcal{W}\times\mathcal{Z}}\ell(w,z)d(P_{W}\otimes
P_{Z})(w,z)$. If $W^{\prime}$ is an independent copy of $W$ such that
$P_{W^{\prime},Z_{i}}=P_{W}\otimes P_{Z}$ for all $i\in[n]$, then
$\displaystyle\big{|}\overline{\textnormal{gen}}(W,S)\big{|}$
$\displaystyle=\Big{|}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\ell(W^{\prime},Z_{i})-\ell(W,Z_{i})\big{]}\Big{|}$
$\displaystyle=\Big{|}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\Big{[}\mathbb{E}\big{[}\ell(W^{\prime},Z_{i})-\ell(W,Z_{i})\mid
Z_{i}\big{]}\Big{]}\Big{|}$
$\displaystyle\leq\frac{L}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{W|Z_{i}},P_{W})\big{]},$
where the last inequality stems from the KR duality and the Lipschitzness of
$\ell$ for all $z\in\mathcal{Z}$. The absolute value is removed since $\rho$
is a metric.
### C.2 Proof of Theorem 3.1
Consider the quantity
$\textnormal{gen}_{j}(w,s_{j})\triangleq\mathscr{L}_{P_{Z}}(w)-\mathscr{L}_{s_{j}}(w),$
where $\mathscr{L}_{s_{j}}(w)=\frac{1}{m}\sum_{i\in j}\ell(w,z_{i})$. Then, note
that
$\mathbb{E}[\textnormal{gen}(W,S)]=\mathbb{E}[\textnormal{gen}_{J}(W,S_{J})]$
if $\mathscr{L}_{s}(w)=\mathbb{E}[\mathscr{L}_{s_{J}}(w)]$. This last equality
follows since $J$ is uniformly distributed, there are $\binom{n}{m}$ possible
subsets of size $m$ in $[n]$, and each sample $z_{i}$ belongs to
$\binom{n-1}{m-1}$ of those subsets; hence
$\displaystyle\mathbb{E}[\mathscr{L}_{s_{J}}(w)]$
$\displaystyle=\frac{1}{\binom{n}{m}}\sum_{j\in\mathcal{J}}\frac{1}{m}\sum_{i\in
j}\ell(w,z_{i})=\frac{1}{n}\sum_{i=1}^{n}\ell(w,z_{i})=\mathscr{L}_{s}(w).$
(7)
Subsequently, the two bounds from the theorem are obtained after bounding the
innermost expectation on the right hand side of the following equation:
$\displaystyle\big{|}\overline{\textnormal{gen}}(W,S)\big{|}=\big{|}\mathbb{E}[\textnormal{gen}_{J}(W,S_{J})]\big{|}\leq\mathbb{E}\big{[}\big{|}\mathbb{E}[\textnormal{gen}_{J}(W,S_{J})\mid
J,S_{J^{c}},R]\big{|}\big{]},$
where the last step follows from Jensen’s inequality.
1. (a)
Consider, until stated otherwise, that all random objects and expectations are
conditioned on a fixed $j$, $s_{j^{c}}$, and $r$. Then note that
$\mathbb{E}[\mathscr{L}_{P_{Z}}(W)]=\mathbb{E}[\mathscr{L}_{S_{j}}(W^{\prime})]$,
where $W^{\prime}$ is an independent copy of $W$ such that
$P_{W^{\prime},S_{j}|s_{j^{c}},r}=P_{W|s_{j^{c}},r}\otimes P_{S_{j}}$.
Therefore
$\big{|}\mathbb{E}[\textnormal{gen}_{j}(W,S_{j})]\big{|}\leq\big{|}\mathbb{E}[\mathscr{L}_{S_{j}}(W^{\prime})-\mathscr{L}_{S_{j}}(W)]\big{|}\leq
L\mathbb{E}[\mathbb{W}(P_{W|S_{j},s_{j^{c}},r},P_{W|s_{j^{c}},r})],$
where the last inequality stems from the KR duality. Finally, taking the
expectation with respect to $P_{J,S_{J^{c}},R}$ in both sides of the equation
completes the proof.
2. (b)
Consider again, until stated otherwise, that all random objects and
expectations are conditioned on a fixed $j$, $s_{j^{c}}$, and $r$. Then,
similarly to the proof of Theorem 3.1
$\displaystyle\big{|}\mathbb{E}[\textnormal{gen}_{j}(W,S_{j})]\big{|}$
$\displaystyle\leq\Big{|}\frac{1}{m}\sum_{i\in
j}\mathbb{E}[\ell(W^{\prime},Z_{i})-\ell(W,Z_{i})]\Big{|}$
$\displaystyle\leq\frac{L}{m}\sum_{i\in
j}\mathbb{E}[\mathbb{W}(P_{W|Z_{i},s_{j^{c}},r},P_{W|s_{j^{c}},r})],$
where the last inequality stems from the KR duality and $W^{\prime}$ is an
independent copy of $W$ such that
$P_{W^{\prime},Z_{i}|s_{j^{c}},r}=P_{W^{\prime}|s_{j^{c}},r}\otimes P_{Z_{i}}$
for all $i\in j$. Finally, taking the expectation with respect to
$P_{J,S_{J^{c}},R}$ in both sides of the equation completes the proof.
### C.3 Proof of Equation 3
Consider an independent copy $W^{\prime}$ of $W$ such that
$P_{\smash{W^{\prime},\tilde{S},U}}=P_{\smash{W^{\prime},\tilde{S}}}\otimes
P_{U}$. Then,
$\mathbb{E}[\mathscr{L}_{S}(W^{\prime})]=\mathbb{E}[\mathscr{L}_{\bar{S}}(W^{\prime})]$,
and therefore
$\widehat{\textnormal{gen}}(w,\tilde{s},u)=\mathscr{L}_{\bar{s}}(w)-\mathscr{L}_{s}(w)-\mathbb{E}[\mathscr{L}_{\bar{S}}(W^{\prime})-\mathscr{L}_{S}(W^{\prime})].$
Then, re-arranging the expectation of the above expression and using the fact
that $|x+y|\leq|x|+|y|$ results in
$\displaystyle\big{|}\mathbb{E}[\widehat{\textnormal{gen}}(W,\tilde{S},U)]\big{|}$
$\displaystyle\leq\big{|}\mathbb{E}[\mathscr{L}_{\bar{S}}(W^{\prime})-\mathscr{L}_{\bar{S}}(W)]\big{|}+\big{|}\mathbb{E}[\mathscr{L}_{S}(W^{\prime})-\mathscr{L}_{S}(W)]\big{|}$
$\displaystyle=\big{|}\mathbb{E}\big{[}\mathbb{E}[\mathscr{L}_{\bar{S}}(W^{\prime})-\mathscr{L}_{\bar{S}}(W)\mid\tilde{S},U]\big{]}\big{|}+\big{|}\mathbb{E}\big{[}\mathbb{E}[\mathscr{L}_{S}(W^{\prime})-\mathscr{L}_{S}(W)\mid\tilde{S},U]\big{]}\big{|}$
$\displaystyle\leq
2L\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S}}})],$
where the last step stems from the KR duality. Finally, noting that
$\mathbb{E}[\widehat{\textnormal{gen}}(W,\tilde{S},U)]=\overline{\textnormal{gen}}(W,S)$
completes the proof.
### C.4 Proof of Theorem 3.2
Similarly to the proof of (1), consider an independent copy $W^{\prime}$ of
$W$ such that
$P_{\smash{W^{\prime},\tilde{S}_{i},U_{i}}}=P_{\smash{W^{\prime},\tilde{S}_{i}}}\otimes
P_{U_{i}}$. Then
$\mathbb{E}[\ell(W^{\prime},\bar{Z}_{i})]=\mathbb{E}[\ell(W^{\prime},Z_{i})]$,
where $Z_{i}=\tilde{Z}_{i+U_{i}n}$, $\bar{Z}_{i}=\tilde{Z}_{i+(1-U_{i})n}$,
and $\tilde{S}_{i}=(Z_{i},\bar{Z}_{i})$. Therefore,
$\widehat{\textnormal{gen}}(w,\tilde{s},u)=\frac{1}{n}\sum_{i=1}^{n}\Big{(}\ell(w,\bar{z}_{i})-\ell(w,z_{i})-\mathbb{E}[\ell(W^{\prime},\bar{Z}_{i})-\ell(W^{\prime},Z_{i})]\Big{)}.$
Then, re-arranging the expectation of the above expression and using the fact
that $\big{|}\sum_{i=1}^{n}x_{i}\big{|}\leq\sum_{i=1}^{n}|x_{i}|$ results in
$\displaystyle\big{|}\widehat{\textnormal{gen}}$
$\displaystyle(W,\tilde{S},U)\big{|}\leq\frac{1}{n}\sum_{i=1}^{n}\Big{(}\big{|}\mathbb{E}[\ell(W^{\prime},\bar{Z}_{i})-\ell(W,\bar{Z}_{i})]\big{|}+\big{|}\mathbb{E}[\ell(W^{\prime},Z_{i})-\ell(W,Z_{i})]\big{|}\Big{)}$
$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}\Big{(}\big{|}\mathbb{E}\big{[}\mathbb{E}[\ell(W^{\prime},\bar{Z}_{i})-\ell(W,\bar{Z}_{i})\mid\tilde{S}_{i},U_{i}]\big{]}\big{|}+\big{|}\mathbb{E}\big{[}\mathbb{E}[\ell(W^{\prime},Z_{i})-\ell(W,Z_{i})\mid\tilde{S}_{i},U_{i}]\big{]}\big{|}\Big{)}$
$\displaystyle\leq\frac{2L}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S}_{i},U_{i}}},P_{\smash{W|\tilde{S}_{i}}})\big{]},$
where the last step stems from the KR duality. Finally, noting that
$\mathbb{E}[\widehat{\textnormal{gen}}(W,\tilde{S},U)]=\overline{\textnormal{gen}}(W,S)$
completes the proof.
### C.5 Proof of Theorem 3.2
Similarly to the proof of Theorem 3.1, consider the quantity
$\widehat{\textnormal{gen}}_{j}(w,\tilde{s}_{j},u_{j})\triangleq\mathscr{L}_{\bar{s}_{j}}(w)-\mathscr{L}_{s_{j}}(w),$
where $\tilde{s}_{j}=(s_{j},\bar{s}_{j})$ and $s_{j}$ and $\bar{s}_{j}$ are
the subsets of $s$ and $\bar{s}$, respectively, indexed by $j$. Then, note
that
$\mathbb{E}[\widehat{\textnormal{gen}}(W,\tilde{S},U)]=\mathbb{E}[\widehat{\textnormal{gen}}_{J}(W,\tilde{S}_{J},U_{J})]$
since, as shown in (7), the equalities
$\mathbb{E}[\mathscr{L}_{s_{j}}(w)]=\mathscr{L}_{s}(w)$ and
$\mathbb{E}[\mathscr{L}_{\bar{s}_{j}}(w)]=\mathscr{L}_{\bar{s}}(w)$ hold.
Then, the two bounds from the theorem are obtained after bounding the
innermost expectation on the right hand side of the following equation:
$\big{|}\mathbb{E}[\widehat{\textnormal{gen}}(W,\tilde{S},U)]\big{|}=\big{|}\mathbb{E}[\widehat{\textnormal{gen}}_{J}(W,\tilde{S}_{J},U_{J})]\big{|}\leq\mathbb{E}\big{[}\big{|}\mathbb{E}[\widehat{\textnormal{gen}}_{J}(W,\tilde{S}_{J},U_{J})\mid
J,\tilde{S},U_{J^{c}},R]\big{|}\big{]},$
where the last step follows from Jensen’s inequality.
1. (a)
Consider, until stated otherwise, that all random objects and expectations are
conditioned on a fixed $j$, $\tilde{s}$, $u_{j^{c}}$, and $r$. Then note that
$\mathbb{E}[\mathscr{L}_{\bar{s}_{j}}(W^{\prime})]=\mathbb{E}[\mathscr{L}_{s_{j}}(W^{\prime})]$,
where $W^{\prime}$ is an independent copy of $W$ such that
$P_{W^{\prime},U_{j}|\tilde{s},u_{j^{c}},r}=P_{W^{\prime}|\tilde{s},u_{j^{c}},r}\otimes
P_{U_{j}}$. Therefore,
$\widehat{\textnormal{gen}}_{j}(w,\tilde{s}_{j},u_{j})=\mathscr{L}_{\bar{s}_{j}}(w)-\mathscr{L}_{s_{j}}(w)-\mathbb{E}[\mathscr{L}_{\bar{s}_{j}}(W^{\prime})-\mathscr{L}_{s_{j}}(W^{\prime})].$
Then, re-arranging the expectation of the above expression and using the fact
that $|x+y|\leq|x|+|y|$ results in
$\displaystyle\big{|}\widehat{\textnormal{gen}}_{j}(W,\tilde{s}_{j},U_{j})\big{|}\leq\big{|}\mathbb{E}[\mathscr{L}_{\bar{s}_{j}}(W^{\prime})-\mathscr{L}_{\bar{s}_{j}}(W)]\big{|}+\big{|}\mathbb{E}[\mathscr{L}_{s_{j}}(W^{\prime})-\mathscr{L}_{s_{j}}(W)]\big{|}$
$\displaystyle=\big{|}\mathbb{E}\big{[}\mathbb{E}[\mathscr{L}_{\bar{s}_{j}}(W^{\prime})-\mathscr{L}_{\bar{s}_{j}}(W)\mid
U_{j}]\big{]}\big{|}+\big{|}\mathbb{E}\big{[}\mathbb{E}[\mathscr{L}_{s_{j}}(W^{\prime})-\mathscr{L}_{s_{j}}(W)\mid
U_{j}]\big{]}\big{|}$ $\displaystyle\leq
2L\mathbb{E}[\mathbb{W}(P_{W|U_{j},\tilde{s},u_{j^{c}},r},P_{W|\tilde{s},u_{j^{c}},r})],$
where the last inequality stems from the KR duality. Finally, taking the
expectation with respect to $P_{\smash{J,\tilde{S},U_{J^{c}},R}}$ in both
sides of the equation completes the proof.
2. (b)
Consider again, until stated otherwise, that all random objects and
expectations are conditioned on a fixed $j$, $\tilde{s}$, $u_{j^{c}}$, and
$r$. Then, similarly to the proof of Theorem 3.2
$\displaystyle\big{|}\widehat{\textnormal{gen}_{j}}(W,\tilde{s},U_{j})\big{|}\leq\frac{1}{m}\sum_{i\in
j}\Big{(}\big{|}\mathbb{E}[\ell(W^{\prime},\bar{Z}_{i})-\ell(W,\bar{Z}_{i})]\big{|}+\big{|}\mathbb{E}[\ell(W^{\prime},Z_{i})-\ell(W,Z_{i})]\big{|}\Big{)}$
$\displaystyle=\frac{1}{m}\sum_{i\in
j}\Big{(}\big{|}\mathbb{E}\big{[}\mathbb{E}[\ell(W^{\prime},\bar{Z}_{i})-\ell(W,\bar{Z}_{i})\mid
U_{i}]\big{]}\big{|}+\big{|}\mathbb{E}\big{[}\mathbb{E}[\ell(W^{\prime},Z_{i})-\ell(W,Z_{i})\mid
U_{i}]\big{]}\big{|}\Big{)}$ $\displaystyle\leq\frac{2L}{m}\sum_{i\in
j}\mathbb{E}\big{[}\mathbb{W}(P_{W|U_{i},\tilde{s},u_{j^{c}},r},P_{W|\tilde{s},u_{j^{c}},r})\big{]},$
where the last inequality stems from the KR duality. Finally, taking the
expectation with respect to $P_{\smash{J,\tilde{S},U_{J^{c}},R}}$ in both
sides of the equation completes the proof.
## Appendix D Comparison of the bounds
In this section of the appendix, the full-dataset, single-letter, and random-
subset bounds are compared. First, recall that the bounds based on the
Wasserstein distance are tighter than the respective bounds based on the
relative entropy and, under the conditions specified in Appendix B, than
those based on the mutual information. The comparisons in terms of the
Wasserstein distance and of the mutual information are found in §D.1 and
§D.2, respectively.
When the bounds are compared in terms of either the mutual information or the
Wasserstein distance, the individual-sample bounds are shown to be tighter
than the full-dataset bounds. Therefore, in both cases the theory backs the
idea that the individual forward channels $P_{W|Z_{i}}$, which are smoothed
versions of the full-dataset forward channel $P_{W|S}$, are closer to the
hypothesis marginal distribution $P_{W}$.
Then, both cases also agree that the individual-sample bounds are tighter than
the random-subset bounds. Even though the proof for the Wasserstein-distance-based
bounds also follows from a smoothing argument, better insight is gained
from the proof for the mutual information, which is based on the fact that
$I(W;Z_{i})\leq I(W;Z_{i}|S_{\mathcal{A}})$, where $\mathcal{A}$ is a subset
of $[n]$ where $i$ is not included. This inequality holds since the knowledge
of the samples $S_{\mathcal{A}}$ provides information about $Z_{i}$ through
$W$. More precisely,
$\displaystyle I(W;Z_{i})$
$\displaystyle\stackrel{{\scriptstyle\mathclap{(a)}}}{{\leq}}I(W;Z_{i})+\smash{\overbrace{I(S_{\mathcal{A}};Z_{i}|W)}^{\textnormal{extra
information}}}$
$\displaystyle\stackrel{{\scriptstyle\mathclap{(b)}}}{{=}}I(W,S_{\mathcal{A}};Z_{i})$
$\displaystyle\stackrel{{\scriptstyle\mathclap{(c)}}}{{=}}I(W;Z_{i}|S_{\mathcal{A}})+I(Z_{i};S_{\mathcal{A}})$
$\displaystyle\stackrel{{\scriptstyle\mathclap{(d)}}}{{=}}I(W;Z_{i}|S_{\mathcal{A}}),$
where $(a)$ is due to the non-negativity of the mutual information, $(b)$ and
$(c)$ follow from the chain rule, and $(d)$ stems from the fact that $Z_{i}$
and $S_{\mathcal{A}}$ are independent.
Finally, the comparison of the random-subset and full-dataset bounds behaves
differently when the Wasserstein distance or the mutual information is
employed. The analysis using the Wasserstein distance suggests that the
random-subset bounds are sharper than the full-dataset bounds with an extra
factor of two, namely
$\mathbb{E}[\mathbb{W}(P_{W|S},P_{W|S_{J^{c}}})]\leq
2\mathbb{E}[\mathbb{W}(P_{W|S},P_{W})].$
On the other hand, the analysis using the mutual information indicates that
the random-subset bounds are looser than the full dataset bounds.
This discrepancy might help explain why, in practice, the random-subset
bounds from [7, 10, 9], which take the expectation of the square root of the
relative entropy, yield tighter characterizations of the generalization error
than the full-dataset bounds from [4, 1] using the mutual information. To be
precise, it suggests that the loss incurred by a further application of
Jensen’s inequality to bring the expectations inside the square root of the
bounds from [7, 10, 9] is large; in other words, that
$D_{\textnormal{KL}}(P_{W|S}\>\|\>P_{W|S_{J}})$ has high variance in
practical settings.
A summary of these comparisons is shown in Figure 2. Note that the mutual
information bounds are written after Jensen’s inequality is applied in order
to allow comparisons between them. However, the relationships between the
Wasserstein and mutual information based bounds still hold when the
expectations of the relative entropy are outside of the square root.
Figure 2: Summary of the comparison between the current and presented bounds
based on the Wasserstein distance and the mutual information.
### D.1 Comparison of the Wasserstein distance based bounds
#### D.1.1 Standard setting
Consider the standard setting. Then,
$\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{W|Z_{i}},P_{W})\big{]}\leq\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W})\big{]}.$
###### Proof.
The proposition follows by noting that, for all $i\in[n]$,
$\mathbb{E}\big{[}\mathbb{W}(P_{W|Z_{i}},P_{W})\big{]}\leq\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W})\big{]},$
(8)
which is a stronger statement than the original. More precisely, it can be
shown that, for all $i\in[n]$,
$\mathbb{E}\bigg{[}\sup_{f\in
1\textnormal{-Lip}(\rho)}\Big{\\{}\mathbb{E}[f(W)\mid
Z_{i}]-\mathbb{E}[f(W)]\Big{\\}}\bigg{]}\leq\mathbb{E}\bigg{[}\sup_{f\in
1\textnormal{-Lip}(\rho)}\Big{\\{}\mathbb{E}[f(W)\mid
S]-\mathbb{E}[f(W)]\Big{\\}}\bigg{]},$
which is equivalent to (8) due to the KR duality.
After writing $\mathbb{E}\big{[}\sup_{f\in
1\textnormal{-Lip}(\rho)}\big{\\{}\mathbb{E}[f(W)\mid
S]-\mathbb{E}[f(W)]\big{\\}}\big{]}$ in integral form, the result is shown as
follows:
$\displaystyle\int_{\mathcal{Z}^{n}}$ $\displaystyle\sup_{f\in
1\textnormal{-Lip}(\rho)}\bigg{\\{}\int_{\mathcal{W}}f(w)dP_{W|s}(w)-\int_{\mathcal{W}}f(w)dP_{W}(w)\bigg{\\}}dP_{S}(s)$
$\displaystyle\stackrel{{\scriptstyle\mathclap{(a)}}}{{\geq}}\int_{\mathcal{Z}}\sup_{f\in
1\textnormal{-Lip}(\rho)}\bigg{\\{}\int_{\mathcal{Z}^{n-1}}\bigg{(}\int_{\mathcal{W}}f(w)dP_{W|s}(w)-\int_{\mathcal{W}}f(w)dP_{W}(w)\bigg{)}dP_{Z}^{\otimes
n-1}(s^{-i})\bigg{\\}}dP_{Z}(z_{i})$
$\displaystyle\stackrel{{\scriptstyle\mathclap{(b)}}}{{=}}\int_{\mathcal{Z}}\sup_{f\in
1\textnormal{-Lip}(\rho)}\bigg{\\{}\int_{\mathcal{W}}f(w)dP_{W|z_{i}}(w)-\int_{\mathcal{W}}f(w)dP_{W}(w)\bigg{\\}}dP_{Z}(z_{i}),$
where $s^{-i}=(z_{1},\ldots,z_{i-1},z_{i+1},\ldots,z_{n})$, $(a)$ is due to
the fact that $\sup_{g}\mathbb{E}[g(X)]\leq\mathbb{E}[\sup_{g}g(X)]$, and
$(b)$ follows from Fubini–Tonelli’s theorem and the fact that
$P_{W|Z_{i}}=\mathbb{E}[P_{W|S}\mid Z_{i}]$.
Finally, noting that $(b)$ is the integral form of
$\mathbb{E}\big{[}\sup_{f\in
1\textnormal{-Lip}(\rho)}\big{\\{}\mathbb{E}[f(W)\mid
Z_{i}]-\mathbb{E}[f(W)]\big{\\}}\big{]}$ concludes the proof. ∎
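The inequality just proven can be sanity-checked numerically. The following sketch (not part of the paper; the construction is purely illustrative) uses a toy discrete instance with $n=2$ fair bits, $W$ the empirical mean, and the exact one-dimensional identity $\mathbb{W}(P,Q)=\int|F_{P}(x)-F_{Q}(x)|dx$ for finitely supported laws:

```python
from itertools import product

def w1(p, q):
    # Exact 1-Wasserstein distance between finitely supported distributions
    # on the real line: integral of |F_p(x) - F_q(x)| over x.
    xs = sorted(set(p) | set(q))
    total, fp, fq = 0.0, 0.0, 0.0
    for a, b in zip(xs, xs[1:]):
        fp += p.get(a, 0.0)
        fq += q.get(a, 0.0)
        total += abs(fp - fq) * (b - a)
    return total

n = 2
samples = list(product((0, 1), repeat=n))       # each outcome has probability 1/4
p_w = {}                                        # marginal of W = mean(S)
for s in samples:
    w = sum(s) / n
    p_w[w] = p_w.get(w, 0.0) + 1 / len(samples)
lhs = 0.0                                       # (1/n) sum_i E[W(P_{W|Z_i}, P_W)]
for i in range(n):
    for z in (0, 1):
        match = [s for s in samples if s[i] == z]
        cond = {}
        for s in match:
            w = sum(s) / n
            cond[w] = cond.get(w, 0.0) + 1 / len(match)
        lhs += 0.5 * w1(cond, p_w) / n          # P(Z_i = z) = 1/2
rhs = sum(w1({sum(s) / n: 1.0}, p_w)            # E[W(P_{W|S}, P_W)]; delta laws
          for s in samples) / len(samples)
print(lhs, rhs)
```

Here the left-hand side evaluates to $0.25$ and the right-hand side to $0.375$, consistent with the proposition.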
###### Proposition.
Consider the standard setting. Consider also a uniformly random subset of
indices $J\subseteq[n]$ of size $m$. Then,
$\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{W|Z_{i}},P_{W})\big{]}\leq\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W|S_{J^{c}}})\big{]}.$
###### Proof.
The proof of this proposition closely follows that of the previous proposition. First
note that the statement may be written as
$\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{W|Z_{i}},P_{W})\big{]}\leq\frac{1}{\binom{n}{m}}\sum_{j\in\mathcal{J}}\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W|S_{j^{c}}})\big{]},$
where the expectation with respect to $P_{J}$ has been written explicitly.
Then, this result follows by noting that, for all $i\in[n]$ and all
$j\subseteq[n]$ such that $i\in j$
$\mathbb{E}\big{[}\mathbb{W}(P_{W|Z_{i}},P_{W})\big{]}\leq\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W|S_{j^{c}}})\big{]},$
(9)
which is a stronger statement than the original. This is stronger since one
can, without loss of generality, consider the samples $Z_{i}$ ordered so that
the sequence $\\{\mathbb{E}[\mathbb{W}(P_{W|Z_{i}},P_{W})]\\}_{i\in[n]}$ is
decreasing. Then, $\mathbb{E}[\mathbb{W}(P_{W|Z_{1}},P_{W})]$ is smaller than
$\mathbb{E}[\mathbb{W}(P_{W|S},P_{W|S_{j^{c}}})]$ for the $\binom{n-1}{m-1}$
sets $j$ in which sample $1$ appears,
$\mathbb{E}[\mathbb{W}(P_{W|Z_{2}},P_{W})]$ is smaller than
$\mathbb{E}[\mathbb{W}(P_{W|S},P_{W|S_{j^{c}}})]$ for the sets $j$ in which
sample $2$ appears and sample $1$ does not, and so on.
More precisely, it can be shown that, for all $i\in[n]$ and all
$j\subseteq[n]$ such that $i\in j$,
$\mathbb{E}\bigg{[}\sup_{f\in
1\textnormal{-Lip}(\rho)}\Big{\\{}\mathbb{E}[f(W)\mid
Z_{i}]-\mathbb{E}[f(W)]\Big{\\}}\bigg{]}\leq\mathbb{E}\bigg{[}\sup_{f\in
1\textnormal{-Lip}(\rho)}\Big{\\{}\mathbb{E}[f(W)\mid S]-\mathbb{E}[f(W)\mid
S_{j^{c}}]\Big{\\}}\bigg{]},$
which is equivalent to (9) due to the KR duality.
After writing $\mathbb{E}\big{[}\sup_{f\in
1\textnormal{-Lip}(\rho)}\big{\\{}\mathbb{E}[f(W)\mid S]-\mathbb{E}[f(W)\mid
S_{j^{c}}]\big{\\}}\big{]}$ in integral form, the result is shown as follows:
$\displaystyle\int_{\mathcal{Z}^{n}}\sup_{f\in
1\textnormal{-Lip}(\rho)}\bigg{\\{}\int_{\mathcal{W}}f(w)dP_{W|s}(w)-\int_{\mathcal{W}}f(w)dP_{W|s_{j^{c}}}(w)\bigg{\\}}dP_{S}(s)$
$\displaystyle\stackrel{{\scriptstyle\mathclap{(a)}}}{{\geq}}\int_{\mathcal{Z}}\sup_{f\in
1\textnormal{-Lip}(\rho)}\bigg{\\{}\int_{\mathcal{Z}^{n-1}}\bigg{(}\int_{\mathcal{W}}f(w)dP_{W|s}(w)-\int_{\mathcal{W}}f(w)dP_{W|s_{j^{c}}}(w)\bigg{)}dP_{Z}^{\otimes
n-1}(s^{-i})\bigg{\\}}dP_{Z}(z_{i})$
$\displaystyle\stackrel{{\scriptstyle\mathclap{(b)}}}{{=}}\int_{\mathcal{Z}}\sup_{f\in
1\textnormal{-Lip}(\rho)}\bigg{\\{}\int_{\mathcal{W}}f(w)dP_{W|z_{i}}(w)-\int_{\mathcal{W}}f(w)dP_{W}(w)\bigg{\\}}dP_{Z}(z_{i}),$
where $(a)$ stems from the fact that
$\sup_{g}\mathbb{E}[g(X)]\leq\mathbb{E}[\sup_{g}g(X)]$, and $(b)$ follows from
Fubini–Tonelli’s theorem and the fact that $P_{W|Z_{i}}=\mathbb{E}[P_{W|S}\mid
Z_{i}]$ and $P_{W}=\mathbb{E}[P_{W|S_{j^{c}}}\mid Z_{i}]$, since $i\in j$ and
therefore $i\not\in j^{c}$.
Finally, noting that $(b)$ is the integral form of
$\mathbb{E}\big{[}\sup_{f\in
1\textnormal{-Lip}(\rho)}\big{\\{}\mathbb{E}[f(W)\mid
Z_{i}]-\mathbb{E}[f(W)]\big{\\}}\big{]}$ concludes the proof. ∎
###### Proposition.
Consider the standard setting. Consider also a uniformly random subset of
indices $J\subseteq[n]$ of size $m$. Then,
$\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W|S_{J^{c}}})\big{]}\leq
2\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W})\big{]}.$
###### Proof.
An application of the triangle inequality on Wasserstein distances [18,
Chapter 6] states that, for all $j\subseteq[n]$ such that $|j|=m$ and all
$s\in\mathcal{Z}^{n}$
$\mathbb{W}(P_{W|s},P_{W|s_{j^{c}}})\leq\mathbb{W}(P_{W|s},P_{W})+\mathbb{W}(P_{W|s_{j^{c}}},P_{W}).$
(10)
Then, the inequality
$\mathbb{E}[\mathbb{W}(P_{W|S_{j^{c}}},P_{W})]\leq\mathbb{E}[\mathbb{W}(P_{W|S},P_{W})]$
holds by the same arguments as the first proposition of Appendix D.1.1. That is, writing the
Wasserstein distance in its KR dual form and noting that the integral of a
supremum is greater than the supremum of the integral and that
$P_{W|S_{j^{c}}}=\mathbb{E}[P_{W|S}\mid S_{j^{c}}]$ for all $j\subseteq[n]$.
Hence, taking expectations on both sides of (10) results in
$\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W|S_{J^{c}}})\big{]}\leq
2\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W})\big{]},$
which completes the proof. ∎
#### D.1.2 Randomized-subsample setting
###### Proposition.
Consider the randomized-subsample setting. Then,
$\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S}_{i},U_{i}}},P_{\smash{W|\tilde{S}_{i}}})\big{]}\leq\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S}}})\big{]},$
where $\tilde{S}_{i}$ is defined in Theorem 3.2.
###### Proof.
The proposition follows by noting that, for all $i\in[n]$,
$\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S}_{i},U_{i}}},P_{\smash{W|\tilde{S}_{i}}})]\leq\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S}}})],$
(11)
which is a stronger statement than the original. Then, Equation (11) follows
by the same arguments as the first two propositions of Appendix D.1.1. That is, writing the
Wasserstein distance in its KR dual form and noting that the integral of a
supremum is greater than the supremum of the integral and that
$P_{\smash{W|\tilde{S}_{i},U_{i}}}=\mathbb{E}[P_{\smash{W|\tilde{S},U}}\mid\tilde{S}_{i},U_{i}]$
and
$P_{\smash{W|\tilde{S}_{i}}}=\mathbb{E}[P_{\smash{W|\tilde{S}}}\mid\tilde{S}_{i},U_{i}]$.
∎
###### Proposition.
Consider the randomized-subsample setting. Consider also a uniformly random
subset of indices $J\subseteq[n]$ of size $m$. Then,
$\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S}_{i},U_{i}}},P_{\smash{W|\tilde{S}_{i}}})\big{]}\leq\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S},U_{J^{c}}}})\big{]}.$
###### Proof.
Note that the statement of the proposition may be written as
$\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S}_{i},U_{i}}},P_{\smash{W|\tilde{S}_{i}}})\big{]}\leq\frac{1}{\binom{n}{m}}\sum_{j\in\mathcal{J}}\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S},U_{j^{c}}}})\big{]},$
where the expectation with respect to $P_{J}$ has been written explicitly.
Then, this result follows by noting that, for all $i\in[n]$ and all
$j\subseteq[n]$ such that $i\in j$
$\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S}_{i},U_{i}}},P_{\smash{W|\tilde{S}_{i}}})\big{]}\leq\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S},U_{j^{c}}}})\big{]},$
(12)
which is a stronger statement than the original. Similarly to the second
proposition of Appendix D.1.1, this is stronger since one can, without loss of generality, consider
the tuple of pairs of samples and their deciding index
$(\tilde{S}_{i},U_{i})=\big{(}(\tilde{Z}_{i},\tilde{Z}_{i+n}),U_{i}\big{)}$
ordered so that the sequence
$\\{\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S}_{i},U_{i}}},P_{\smash{W|\tilde{S}_{i}}})]\\}_{i\in[n]}$
is decreasing. Then,
$\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S}_{1},U_{1}}},P_{\smash{W|\tilde{S}_{1}}})]$
is smaller than
$\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S},U_{j^{c}}}})]$
for the $\binom{n-1}{m-1}$ sets $j$ in which the tuple $1$ appears,
$\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S}_{2},U_{2}}},P_{\smash{W|\tilde{S}_{2}}})]$
is smaller than
$\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S},U_{j^{c}}}})]$
for the sets $j$ in which tuple $2$ appears and tuple $1$ does not, and so on.
Then, Equation (12) follows by the same arguments as the first two propositions
of Appendix D.1.1. That is, writing the Wasserstein distance in its KR dual form and
noting that the integral of a supremum is greater than the supremum of the
integral and that
$P_{\smash{W|\tilde{S}_{i},U_{i}}}=\mathbb{E}[P_{\smash{W|\tilde{S},U}}\mid\tilde{S}_{i},U_{i}]$
and
$P_{\smash{W|\tilde{S}_{i}}}=\mathbb{E}[P_{\smash{W|\tilde{S},U_{j^{c}}}}\mid\tilde{S}_{i},U_{i}]$,
since $i\in j$ and therefore $i\not\in j^{c}$. ∎
###### Proposition.
Consider the randomized-subsample setting. Consider also a uniformly random
subset of indices $J\subseteq[n]$ of size $m$. Then,
$\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S},U_{J^{c}}}})\big{]}\leq
2\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S}}})\big{]}.$
###### Proof.
An application of the triangle inequality on Wasserstein distances [18,
Chapter 6] states that, for all $j\subseteq[n]$ such that $|j|=m$, all
$\tilde{s}\in\mathcal{Z}^{2n}$, and all $u\in\\{0,1\\}^{n}$,
$\mathbb{W}(P_{W|\tilde{s},u},P_{W|\tilde{s},u_{j^{c}}})\leq\mathbb{W}(P_{W|\tilde{s},u},P_{W|\tilde{s}})+\mathbb{W}(P_{W|\tilde{s},u_{j^{c}}},P_{W|\tilde{s}}).$
(13)
Then, the inequality
$\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S},U_{j^{c}}}},P_{\smash{W|\tilde{S}}})]\leq\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S}}})]$
holds by the same arguments as the first proposition of Appendix D.1.1. That is, writing the
Wasserstein distance in its KR dual form and noting that the integral of a
supremum is greater than the supremum of the integral and that
$P_{\smash{W|\tilde{S},U_{j^{c}}}}=\mathbb{E}[P_{\smash{W|\tilde{S},U}}\mid\tilde{S},U_{j^{c}}]$
for all $j\subseteq[n]$. Hence, taking expectations on both sides of (13)
results in
$\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S},U_{J^{c}}}})\big{]}\leq
2\mathbb{E}\big{[}\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S}}})\big{]},$
which completes the proof. ∎
#### D.1.3 Comparison between the settings
###### Proposition.
Consider the standard and the randomized-subsample settings. Consider also a
uniformly random subset of indices $J\subseteq[n]$ of size $m$. Then,
$\displaystyle\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S}}})]$
$\displaystyle\leq 2\mathbb{E}[\mathbb{W}(P_{W|S},P_{W})],$
$\displaystyle\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S}_{i},U_{i}}},P_{\smash{W|\tilde{S}_{i}}})]$
$\displaystyle\leq 2\mathbb{E}[\mathbb{W}(P_{W|Z_{i}},P_{W})],\textnormal{
and}$
$\displaystyle\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S},U_{J^{c}}}})]$
$\displaystyle\leq 2\mathbb{E}[\mathbb{W}(P_{W|S},P_{W|S_{J^{c}}})].$
###### Proof.
The proofs of the three statements are analogous. Therefore only the proof of
the first statement is explicitly written.
An application of the triangle inequality on Wasserstein distances [18,
Chapter 6] states that, for all $\tilde{s}\in\mathcal{Z}^{2n}$ and all
$u\in\\{0,1\\}^{n}$ such that $s$ is obtained from $\tilde{s}$ and $u$ as
explained in the introduction,
$\mathbb{W}(P_{W|\tilde{s},u},P_{W|\tilde{s}})\leq\mathbb{W}(P_{W|\tilde{s},u},P_{W})+\mathbb{W}(P_{W|\tilde{s}},P_{W}).$
(14)
Then, the inequality
$\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S}}},P_{W})]\leq\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W}})]$
holds by the same arguments as the first proposition of Appendix D.1.1. That is, writing the
Wasserstein distance in its KR dual form and noting that the integral of a
supremum is greater than the supremum of the integral and that
$P_{\smash{W|\tilde{S}}}=\mathbb{E}[P_{\smash{W|\tilde{S},U}}\mid\tilde{S}]$.
Hence, taking expectations on both sides of (14) and noting that
$P_{\smash{W|\tilde{S},U}}=P_{W|S}$ almost surely results in
$\mathbb{E}[\mathbb{W}(P_{\smash{W|\tilde{S},U}},P_{\smash{W|\tilde{S}}})]\leq
2\mathbb{E}[\mathbb{W}(P_{W|S},P_{W})],$
which completes the proof. ∎
### D.2 Comparison of the mutual information based bounds
#### D.2.1 Standard setting
In the standard setting, after a further application of Jensen’s inequality,
the bounds derived from Corollary 3.1 or [6, Proposition 1] are the tightest,
followed by [4, Theorem 1] and the bounds derived from Corollary 3.1 when
$|J|=1$ or [7, Theorem 2.4] if the arbitrary random variable $R$ is not
considered. This follows since by [6, Proposition 2] and [38, Lemma 3.7] or
[9, Lemma 2]
$\sum_{i=1}^{n}I(W;Z_{i})\leq I(W;S)\leq\sum_{i=1}^{n}I(W;Z_{i}|S^{-i}),$
where $S^{-i}=S\setminus Z_{i}$. More generally, with trivial modifications to
the proof from [9, Lemma 2], it can be shown that
$\sum_{i=1}^{n}I(W;Z_{i})\leq I(W;S)\leq\frac{n}{m}\,\mathbb{E}[I(W;S_{J}|S_{J^{c}})],$
noting that, if for each $j\subseteq[n]$ of size $m$ the elements of $S_{j}$
are ordered as $Z_{k_{1}},\ldots,Z_{k_{m}}$, then
$I(W;S)=\sum_{i=1}^{n}I(W;Z_{i}|S^{i-1})\leq\frac{n}{m}\cdot\frac{1}{\binom{n}{m}}\sum_{j\in\mathcal{J}}\sum_{\iota=1}^{m}I(W;Z_{k_{\iota}}|S_{j^{c}},S_{j}^{k_{\iota-1}}),$
since every time that $i=k_{\iota}$ then $S^{i-1}\subseteq(S_{j^{c}}\cup
S_{j}^{k_{\iota-1}})$, where $S_{j}^{k_{\iota-1}}$ are the first $\iota-1$
elements of $S_{j}$, and each index $i\in[n]$ belongs to
$\binom{n-1}{m-1}=\frac{m}{n}\binom{n}{m}$ of the subsets $j\in\mathcal{J}$.
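These mutual-information chains can be probed on a toy discrete example. The sketch below (an illustration, not from the paper) takes $W=Z_{1}\oplus Z_{2}$ for two fair bits, for which $\sum_{i}I(W;Z_{i})=0$ while $I(W;S)=\log 2$ and $\sum_{i}I(W;Z_{i}|S^{-i})=2\log 2$, so both inequalities of the first chain are strict:

```python
import math
from itertools import product

def mi(space, f, g):
    # I(f(omega); g(omega)) in nats over a finite probability space {omega: prob}
    pf, pg, pfg = {}, {}, {}
    for o, p in space.items():
        a, b = f(o), g(o)
        pf[a] = pf.get(a, 0.0) + p
        pg[b] = pg.get(b, 0.0) + p
        pfg[a, b] = pfg.get((a, b), 0.0) + p
    return sum(p * math.log(p / (pf[a] * pg[b]))
               for (a, b), p in pfg.items() if p > 0)

space = {s: 0.25 for s in product((0, 1), repeat=2)}   # two fair bits
w = lambda s: s[0] ^ s[1]                              # W = Z_1 XOR Z_2
i_single = [mi(space, w, lambda s, i=i: s[i]) for i in range(2)]
i_full = mi(space, w, lambda s: s)
# Chain rule: I(W;Z_i|S^{-i}) = I(W;(Z_i, S^{-i})) - I(W;S^{-i})
i_cond = [i_full - mi(space, w, lambda s, i=i: s[1 - i]) for i in range(2)]
print(sum(i_single), i_full, sum(i_cond))
```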
#### D.2.2 Randomized-subsample setting
With a similar argument to the one for the standard setting, after a further
application of Jensen’s inequality, the bounds derived from [10, Theorem 3.4]
are tighter than [1, Theorem 5.1], which in turn are tighter than the bounds
derived from Corollary 3.2 when $|J|=1$ or [10, Theorem 3.7] if the arbitrary
random variable $R$ is not considered, since
$\sum_{i=1}^{n}I(W;U_{i}|\tilde{S})\leq
I(W;U|\tilde{S})\leq\frac{n}{m}\,\mathbb{E}[I(W;U_{J}|\tilde{S},U_{J^{c}})].$
Furthermore, the bounds derived from Corollary 3.2 or [9, Proposition 3] are
the tightest as dictated by [9, Lemma 3] or [12, Lemma 2]. This way, the
relationship between the conditional mutual information terms is
$\sum_{i=1}^{n}I(W;U_{i}|\tilde{Z}_{i},\tilde{Z}_{i+n})\leq\sum_{i=1}^{n}I(W;U_{i}|\tilde{S})\leq
I(W;U|\tilde{S})\leq\frac{n}{m}\,\mathbb{E}[I(W;U_{J}|\tilde{S},U_{J^{c}})].$
#### D.2.3 Comparison between the settings
Similarly, one may note that $I(W;U|\tilde{S})\leq I(W;S)$ by [10],
$I(W;U_{i}|\tilde{Z}_{i},\tilde{Z}_{i+n})\leq I(W;Z_{i})$, and
$I(W;U_{j}|\tilde{S})\leq I(W;U_{j}|\tilde{S},U_{j^{c}})\leq
I(W;S_{j}|S_{j^{c}})$ for any subset of indices $j\subseteq[n]$. Nonetheless,
the additional factor of two in the bounds of the randomized-subsample setting
makes the comparison between the bounds of the different settings harder.
An attempt at this comparison is given in [8], where it is noted that, since
$(U,\tilde{S})\leftrightarrow S\leftrightarrow W$ form a Markov chain and $S$
is a deterministic function of $\tilde{S}$ and $U$, then
$I(W;S)=I(W;\tilde{S})+I(W;U|\tilde{S})$, and hence the bound from the
randomized-subsample setting [1, Theorem 5.1] is tighter than the one from the
standard setting [4, Theorem 1] if $3I(W;U|\tilde{S})\leq I(W;\tilde{S})$.
There are similar requirements for the single-letter and random-subset bounds,
namely:
* •
The bound derived from Corollary 3.2 or [9, Proposition 3] is tighter than [6,
Proposition 1] if $3I(W;U_{i}|\tilde{Z}_{i},\tilde{Z}_{i+n})\leq
I(W;\tilde{Z}_{i},\tilde{Z}_{i+n})$.
* •
The bound derived from Corollary 3.2 or [10, Theorem 3.7] is tighter than [7,
Theorem 2.4] if $3I(W;U_{j}|\tilde{S},U_{j^{c}})\leq
I(W;\tilde{S}_{j}|S_{j^{c}})$.
###### Remark 4.
Note that, sometimes, seemingly looser bounds can lead to tighter or more
tractable bounds for specific algorithms. For instance, the random-subset
bounds from [7, 9, 10] lead to tighter bounds for Langevin dynamics and
stochastic gradient Langevin dynamics than the single-letter bounds from [6].
## Appendix E Derivations for the Gaussian location model example
The problem considered in the example is the estimation of the mean $\mu$ of a
$d$-dimensional Gaussian distribution with known covariance matrix
$\sigma^{2}I_{d}$. Furthermore, there are $n$ samples $S=(Z_{1},\ldots,Z_{n})$
available, the loss is measured with the Euclidean distance $\ell(w,z)=\lVert
w-z\rVert_{2}$, and the estimation is their empirical mean
$W=\frac{1}{n}\sum_{i=1}^{n}Z_{i}$.
To calculate the expected generalization error and derive different bounds, it
is convenient to know how the random variables are distributed. For example,
in this setting $P_{Z}=\mathcal{N}(\mu,\sigma^{2}I_{d})$,
$P_{W}=\mathcal{N}\Big{(}\mu,\frac{\sigma^{2}}{n}I_{d}\Big{)}$,
$P_{W|Z_{i}}=\mathcal{N}\Big{(}\frac{(n-1)\mu+Z_{i}}{n},\frac{\sigma^{2}(n-1)}{n^{2}}I_{d}\Big{)}$,
$P_{W|S^{-j}}=\mathcal{N}\Big{(}\frac{\mu}{n}+\frac{1}{n}\sum_{i\neq
j}Z_{i},\frac{\sigma^{2}}{n^{2}}I_{d}\Big{)}$, and
$P_{W|S}=\delta\Big{(}\frac{1}{n}\sum_{i=1}^{n}{Z_{i}}\Big{)}$. Another
important feature of this problem is that the loss function is $1$-Lipschitz
under $\rho(w,w^{\prime})=\lVert w-w^{\prime}\rVert_{2}$.
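As a quick sanity check of these conditional laws (an illustration, not part of the paper), note that given $Z_{i}=z_{i}$ the hypothesis $W$ averages $z_{i}$ with $n-1$ fresh samples, so for $d=1$ its conditional mean is $((n-1)\mu+z_{i})/n$ and its conditional variance is $(n-1)\sigma^{2}/n^{2}$. A short Monte Carlo run confirms this:

```python
import random

random.seed(1)
mu, sigma, n, z_i = 2.0, 1.0, 5, 0.7   # arbitrary illustrative values, d = 1
draws = []
for _ in range(200_000):
    rest = [random.gauss(mu, sigma) for _ in range(n - 1)]  # Z_j for j != i
    draws.append((z_i + sum(rest)) / n)                     # W given Z_i = z_i
mean = sum(draws) / len(draws)
var = sum((w - mean) ** 2 for w in draws) / len(draws)
print(mean, var)  # approx ((n-1)*mu + z_i)/n and (n-1)*sigma**2/n**2
```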
### E.1 Expected generalization error
In order to derive an exact expression of the generalization error, it is
suitable to write it in the following explicit form:
$\overline{\textnormal{gen}}(W,S)=\mathbb{E}[\ell(W,Z)]-\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[\ell(W,Z_{i})],$
where $Z\sim P_{Z}$ is independent of $W$. Then, the two terms can be
evaluated independently.
The first term is equivalent to
$\mathbb{E}[\ell(W,Z)]=\mathbb{E}[\lVert
W-Z\rVert_{2}]=\sqrt{2\sigma^{2}\Big{(}1+\frac{1}{n}\Big{)}}\frac{\Gamma\big{(}\frac{d+1}{2}\big{)}}{\Gamma\big{(}\frac{d}{2}\big{)}},$
where the first equality follows from the definition of the loss function. The
second equality follows from noting that
$(W-Z)\sim\mathcal{N}\big{(}0,\sigma^{2}\big{(}1+\frac{1}{n}\big{)}I_{d}\big{)}$
and therefore $\lVert
W-Z\rVert_{2}=\sqrt{\sigma^{2}\big{(}1+\frac{1}{n}\big{)}}X$, where $X$ is
distributed according to the chi distribution with $d$ degrees of freedom.
Similarly, the summands of the second term are equivalent to
$\mathbb{E}[\ell(W,Z_{i})]=\mathbb{E}[\lVert
W-Z_{i}\rVert_{2}]=\sqrt{2\sigma^{2}\Big{(}1-\frac{1}{n}\Big{)}}\frac{\Gamma\big{(}\frac{d+1}{2}\big{)}}{\Gamma\big{(}\frac{d}{2}\big{)}},$
where as before the first equality follows from the definition of the loss
function. The second equality follows from noting that
$(W-Z_{i})\sim\mathcal{N}\big{(}0,\sigma^{2}\big{(}1-\frac{1}{n}\big{)}I_{d}\big{)}$
and therefore $\lVert
W-Z_{i}\rVert_{2}=\sqrt{\sigma^{2}\big{(}1-\frac{1}{n}\big{)}}X$, where $X$ is
distributed according to the chi distribution with $d$ degrees of freedom. In
this case, $W$ and $Z_{i}$ are not independent random variables. In fact,
$(W,Z_{i})$ is normally distributed with covariance matrix
$\begin{pmatrix}\frac{\sigma^{2}}{n}I_{d}&\frac{\sigma^{2}}{n}I_{d}\\\
\frac{\sigma^{2}}{n}I_{d}&\sigma^{2}I_{d}\end{pmatrix},$
from which the distribution of $W-Z_{i}$ is deduced.
Finally, subtracting both terms results in
$\overline{\textnormal{gen}}(W,S)=\sqrt{\frac{2\sigma^{2}}{n}}\Big{(}\sqrt{n+1}-\sqrt{n-1}\Big{)}\frac{\Gamma\big{(}\frac{d+1}{2}\big{)}}{\Gamma\big{(}\frac{d}{2}\big{)}}\leq\frac{\sqrt{2\sigma^{2}d}}{n},$
where the inequality follows from the following two bounds: (i)
$\sqrt{n+1}-\sqrt{n-1}\leq\sqrt{\frac{2}{n}}$, which is obtained by
multiplying and dividing by $\sqrt{n+1}+\sqrt{n-1}$ and noting that
$\sqrt{n+1}+\sqrt{n-1}\geq\sqrt{2n}$, and (ii) the upper bound of the ratio of
gamma functions by $\sqrt{\frac{d}{2}}$ using the series expansion at
$d\to\infty$.
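The exact expression above can be verified by simulation. The sketch below (illustrative; $d=1$, $\sigma=1$, and $n=4$ are arbitrary choices) estimates $\overline{\textnormal{gen}}(W,S)$ by Monte Carlo and compares it with the closed form:

```python
import math, random

random.seed(0)
sigma, n, trials = 1.0, 4, 100_000
acc = 0.0
for _ in range(trials):
    zs = [random.gauss(0.0, sigma) for _ in range(n)]  # training samples, mu = 0
    w = sum(zs) / n                                    # empirical-mean hypothesis
    z_fresh = random.gauss(0.0, sigma)                 # fresh test sample
    acc += abs(w - z_fresh) - sum(abs(w - z) for z in zs) / n
mc_gen = acc / trials
exact_gen = (math.sqrt(2 * sigma**2 / n) * (math.sqrt(n + 1) - math.sqrt(n - 1))
             * math.gamma(1.0) / math.gamma(0.5))      # closed form with d = 1
print(mc_gen, exact_gen)
```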
### E.2 Wasserstein distance bound
The bound from [15] can be calculated exactly since $P_{W|S}$ is a delta
distribution, that is
$\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W})\big{]}=\mathbb{E}\bigg{[}\Big{\lVert}\frac{1}{n}\sum_{i=1}^{n}Z_{i}-\frac{1}{n}\sum_{i=1}^{n}Z_{i}^{\prime}\Big{\rVert}_{2}\bigg{]}=\sqrt{\frac{4\sigma^{2}}{n}}\frac{\Gamma\big{(}\frac{d+1}{2}\big{)}}{\Gamma\big{(}\frac{d}{2}\big{)}}\leq\sqrt{\frac{2\sigma^{2}d}{n}}$
where $Z_{i}^{\prime}\sim P_{Z}$ are independent copies of $Z_{i}$. Hence, the
difference is distributed as a normal distribution with mean 0 and covariance
$\frac{2\sigma^{2}}{n}I_{d}$, which means that the norm is
$\sqrt{\frac{2\sigma^{2}}{n}}X$, where $X$ is a chi random variable with $d$
degrees of freedom.
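The gamma-function ratio $\Gamma\big(\frac{d+1}{2}\big)/\Gamma\big(\frac{d}{2}\big)$ and its bound $\sqrt{d/2}$ recur throughout this appendix; a short numerical check (illustrative, not from the paper):

```python
import math

# Ratio of gamma functions and its bound sqrt(d/2), used throughout Appendix E
ds = range(1, 200)
ratios = [math.gamma((d + 1) / 2) / math.gamma(d / 2) for d in ds]
slack = max(r - math.sqrt(d / 2) for r, d in zip(ratios, ds))
# For d = 1, sqrt(2) times the ratio is the mean of a chi(1) variable, E|N(0,1)|
chi1_mean = math.sqrt(2) * ratios[0]
print(slack, chi1_mean)
```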
### E.3 Individual sample Wasserstein distance bound
An exact calculation of the bound from Theorem 3.1 is cumbersome. However, the
Wasserstein distance of order one can be bounded from above by the Wasserstein
distance of order two (Remark 1), which has a closed form expression for
Gaussian distributions. More specifically,
$\mathbb{E}\big{[}\mathbb{W}(P_{W|Z_{i}},P_{W})\big{]}\leq\mathbb{E}\big{[}\mathbb{W}_{2}(P_{W|Z_{i}},P_{W})\big{]}\leq\frac{\sqrt{2\sigma^{2}}}{n}\frac{\Gamma\big{(}\frac{d+1}{2}\big{)}}{\Gamma\big{(}\frac{d}{2}\big{)}}+\sqrt{\frac{\sigma^{2}d}{n^{3}}}\leq\frac{\sqrt{\sigma^{2}d}}{n}+\sqrt{\frac{\sigma^{2}d}{n^{3}}}.$
The second inequality follows from the closed-form expression for the squared
Wasserstein distance of order 2, namely
$\mathbb{W}_{2}(P_{W|Z_{i}},P_{W})^{2}=\frac{1}{n^{2}}\lVert\mu-
Z_{i}\rVert_{2}^{2}+\frac{\sigma^{2}d}{n}\Big{(}1+\frac{n-1}{n}-2\sqrt{\frac{n-1}{n}}\Big{)},$
where the term $(1+\frac{n-1}{n}-2\sqrt{\frac{n-1}{n}})$ is a perfect square
that is bounded from above by $\frac{1}{n^{2}}$. Then the expression results
from employing the inequality $\sqrt{x+y}\leq\sqrt{x}+\sqrt{y}$ and noting
that $\lVert\mu-Z_{i}\rVert_{2}=\sigma X$, where $X$ is a chi distributed
random variable with $d$ degrees of freedom.
### E.4 Random subset Wasserstein distance bound
As in E.2, since $P_{W|S}$ is a delta distribution, the bound from Theorem 3.1
can be calculated exactly. In particular, the bound assuming that $|J|=1$ is
$\mathbb{E}\big{[}\mathbb{W}(P_{W|S},P_{W|S^{-J}})\big{]}=\mathbb{E}\bigg{[}\Big{\lVert}\frac{1}{n}\sum_{i=1}^{n}Z_{i}-\Big{(}\frac{Z_{J}^{\prime}}{n}+\frac{1}{n}\sum_{i\neq
J}Z_{i}\Big{)}\Big{\rVert}_{2}\bigg{]}=\frac{\sqrt{4\sigma^{2}}}{n}\frac{\Gamma\big{(}\frac{d+1}{2}\big{)}}{\Gamma\big{(}\frac{d}{2}\big{)}}\leq\frac{\sqrt{2\sigma^{2}d}}{n},$
where $Z_{J}^{\prime}\sim P_{Z}$ is an independent copy of $Z_{J}$. Hence the
norm is $\frac{\sqrt{2\sigma^{2}}}{n}X$, where $X$ is a chi random variable
with $d$ degrees of freedom.
### E.5 Individual sample mutual information bound
The individual sample mutual information is
$I(W;Z_{i})=\frac{d}{2}\log\big{(}\frac{n}{n-1}\big{)}$ for all $i\in[n]$ [6].
Nonetheless, in order to employ the bound from [6], the loss function
$\ell(W,Z)$ needs to have a cumulant generating function $\Lambda(\lambda)$
bounded from above by a convex function $\psi(\lambda)$ such that
$\psi(0)=\psi^{\prime}(0)=0$ for all $\lambda\in(-b,0]$ for some
$b\in\mathbb{R}_{+}$, where $Z\sim P_{Z}$ is independent of $W$.
The loss function $\ell(W,Z)=\lVert W-Z\rVert_{2}$ is distributed as
$\sqrt{\sigma^{2}(1+\frac{1}{n})}X$, where $X$ is a chi random variable with
$d$ degrees of freedom. The moment generating function $M(\lambda)$ of such a
chi random variable is
$M(\lambda)=\bar{M}\Big{(}\frac{d}{2},\frac{1}{2},\frac{\lambda^{2}}{2}\Big{)}+\frac{\lambda\sqrt{2}\Gamma\big{(}\frac{d+1}{2}\big{)}}{\Gamma\big{(}\frac{d}{2}\big{)}}\bar{M}\Big{(}\frac{d+1}{2},\frac{3}{2},\frac{\lambda^{2}}{2}\Big{)},$
where $\bar{M}$ is Kummer's confluent hypergeometric function.
The expression of this moment generating function is too convoluted to study
for $d>1$. Nonetheless, for $d=1$ it has a closed form expression, namely
$M(\lambda)=e^{\frac{\lambda^{2}}{2}}\Big{(}1+\textnormal{erf}\Big{(}\frac{\lambda}{\sqrt{2}}\Big{)}\Big{)}.$
Therefore, the cumulant generating function is
$\Lambda(\lambda)=\frac{\lambda^{2}}{2}+\log(1+\textnormal{erf}(\frac{\lambda}{\sqrt{2}}))$,
which is bounded from above by the convex function
$\psi(\lambda)=\frac{\lambda^{2}}{2}$ for all $\lambda\in(-\infty,0]$. Hence,
the bound from [6] can be applied yielding
$\overline{\textnormal{gen}}(W,S)\leq\frac{1}{n}\sum_{i=1}^{n}\sqrt{2\sigma^{2}\Big{(}1+\frac{1}{n}\Big{)}I(W;Z_{i})}\leq\sqrt{\sigma^{2}\Big{(}1+\frac{1}{n}\Big{)}\log\Big{(}\frac{n}{n-1}\Big{)}}\leq\sqrt{\frac{2\sigma^{2}}{n-1}},$
where the last inequality stems from noting that
$\frac{n}{n-1}=1+\frac{1}{n-1}$, the fact that $\log(1+x)\leq x$, and bounding
$(1+\frac{1}{n})$ from above by $2$.
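Both ingredients of this bound can be verified numerically for $d=1$: the mutual information $I(W;Z_{i})=\frac{1}{2}\log\big(\frac{n}{n-1}\big)$ via the closed-form Gaussian relative entropy (using that $W\mid Z_{i}$ has variance $\sigma^{2}(n-1)/n^{2}$ and that the squared mean shift averages to $\sigma^{2}/n^{2}$), and the bound $\Lambda(\lambda)\leq\frac{\lambda^{2}}{2}$ on $(-\infty,0]$. A sketch (illustrative, not from the paper):

```python
import math

sigma, max_dev = 0.8, 0.0
for n in range(2, 30):
    s1 = sigma * math.sqrt(n - 1) / n   # std of P_{W|Z_i} (d = 1)
    s2 = sigma / math.sqrt(n)           # std of P_W
    # E_{Z_i}[D_KL(P_{W|Z_i} || P_W)] with the Gaussian KL in closed form
    mi = math.log(s2 / s1) + (s1**2 + sigma**2 / n**2) / (2 * s2**2) - 0.5
    max_dev = max(max_dev, abs(mi - 0.5 * math.log(n / (n - 1))))
# Lambda(lam) - lam^2/2 = log(1 + erf(lam/sqrt(2))) <= 0 for lam <= 0
cgf_gap = max(math.log(1 + math.erf(lam / math.sqrt(2)))
              for lam in [-5 + 0.1 * k for k in range(51)])
print(max_dev, cgf_gap)
```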
## Appendix F Randomized-subsample setting and the BH inequality
In Corollaries 3.2 and 3.2, the immediate bound that stems from the use of the
BH inequality is not included. The reason for this is that the relative
entropies
$D_{\textnormal{KL}}(P_{\smash{W|\tilde{Z}_{i},\tilde{Z}_{i+n},U_{i}}}\>\|\>P_{\smash{W|\tilde{Z}_{i},\tilde{Z}_{i+n}}})$
and
$D_{\textnormal{KL}}(P_{\smash{W|\tilde{S},U,R}}\>\|\>P_{\smash{W|\tilde{S},U_{J^{c}},R}})$,
when $|J|=1$, are never greater than $\log(2)$ as shown in Lemma 3 below.
Hence, the range of these relative entropies is inside the range where
Pinsker’s inequality is tighter than the BH inequality.
###### Lemma 3.
Let $P_{X|A,B}$ be a conditional probability distribution on $\mathcal{X}$,
where $B$ is a Bernoulli random variable with parameter $1/2$ and
$A\in\mathcal{A}$ is a random variable independent of $B$. Let also
$P_{X|A}=\mathbb{E}[P_{X|A,B}\mid A]$. Then,
$D_{\textnormal{KL}}(P_{X|A,B}\>\|\>P_{X|A})\leq\log(2)$.
###### Proof.
In this situation $P_{X|A}$ dominates $P_{X|A,B}$ and $P_{X|A,(1-B)}$, that is
$P_{X|A,B}\ll P_{X|A}$ and $P_{X|A,(1-B)}\ll P_{X|A}$, since
$P_{X|A}=(P_{X|A,B}+P_{X|A,(1-B)})/2$. Therefore,
$\displaystyle D_{\textnormal{KL}}(P_{X|A,B}\>\|\>P_{X|A})$
$\displaystyle=\mathbb{E}\Bigg{[}\log\Bigg{(}\frac{dP_{X|A,B}}{d\big{(}\frac{1}{2}P_{X|A,B}+\frac{1}{2}P_{X|A,(1-B)}\big{)}}\Bigg{)}\
\Bigg{|}\ A,B\Bigg{]}$
$\displaystyle\stackrel{{\scriptstyle\mathclap{(a)}}}{{=}}-\mathbb{E}\Bigg{[}\log\Bigg{(}\frac{d\big{(}\frac{1}{2}P_{X|A,B}+\frac{1}{2}P_{X|A,(1-B)}\big{)}}{dP_{X|A,B}}\Bigg{)}\
\Bigg{|}\ A,B\Bigg{]}$
$\displaystyle\stackrel{{\scriptstyle\mathclap{(b)}}}{{=}}\log(2)-\mathbb{E}\bigg{[}\log\bigg{(}1+\frac{dP_{X|A,(1-B)}}{dP_{X|A,B}}\bigg{)}\
\bigg{|}\ A,B\bigg{]}$
$\displaystyle\stackrel{{\scriptstyle\mathclap{(c)}}}{{\leq}}\log(2),$
where $(a)$ stems from [39, Exercise 9.27], $(b)$ follows from the linearity
$P_{X|A,B}$-a.e. of the Radon–Nikodym derivative, and $(c)$ is due to the fact
that $\log(1+x)\geq 0$ for all $x\geq 0$ and the fact that
$dP_{X|A,(1-B)}/dP_{X|A,B}$ is always non-negative. Also, steps $(a)$, $(b)$, and
$(c)$ are possible since the expectation integrates over the support of
$P_{X|A,B}$, avoiding the problems of absolute continuity in $(c)$ and
absorbing the $P_{X|A,B}$-a.e. properties for $(a)$ and $(b)$. ∎
###### Remark 5.
Lemma 3 can be easily extended to the case where $B$ is a sequence of $k$
Bernoulli random variables $B_{i}$, noting that
$P_{X|A}=2^{-k}\sum_{j=1}^{2^{k}}P_{X|A,\mathcal{B}_{j}}$, where $\mathcal{B}$
are all the $2^{k}$ random sequences $\mathcal{B}_{j}$ where the $i$-th
element can be either $B_{i}$ or $(1-B_{i})$. In that case, we have that
$D_{\textnormal{KL}}(P_{X|A,B}\>\|\>P_{X|A})\leq k\log(2)$.
Then, note that
$D_{\textnormal{KL}}(P_{\smash{W|\tilde{Z}_{i},\tilde{Z}_{i+n},U_{i}}}\>\|\>P_{\smash{W|\tilde{Z}_{i},\tilde{Z}_{i+n}}})\leq\log(2)$
if $A=(\tilde{Z}_{i},\tilde{Z}_{i+n})$, $B=U_{i}$, and $X=W$. Similarly, for
$|J|=1$, note that
$D_{\textnormal{KL}}(P_{\smash{W|\tilde{S},U,R}}\>\|\>P_{\smash{W|\tilde{S},U_{J^{c}},R}})\leq\log(2)$
if $A=(\tilde{S},U_{J^{c}},R)$, $B=U_{J}$, and $X=W$.
###### Remark 6.
In Corollary 3.2, when $|J|>2$, it is not guaranteed that the inequality
obtained from Pinsker’s inequality is tighter than the one obtained with the
BH inequality. For instance, as per Remark 5,
$D_{\textnormal{KL}}(P_{\smash{W|\tilde{S},U,R}}\>\|\>P_{\smash{W|\tilde{S},U_{J^{c}},R}})$
could be as large as $|J|\log(2)$, which is already larger than $1.6$ for
$|J|=3$. Hence, for $|J|>2$, one should also consider the BH inequality if one
desires the tightest bound.
However, the bound derived from the BH inequality was not included since this
kind of bound is usually employed for $|J|=1$, e.g., [10, Theorem 4.2] and
[9, Proposition 6]. Moreover, after applying Jensen’s inequality, it is shown
that the derived mutual-information bounds are the tightest when $|J|=1$ [10,
Corollary 3.3].
## Appendix G Rate-distortion theory and generalization
### G.1 Rate–distortion theory
Rate–distortion theory [22, 19] deals with the problem of determining the
minimum number of bits, given by the _rate_ $R$, that should be employed
to characterize a signal $X$ by $Y$ so that this signal can later be recovered
with an expected _distortion_ lower than $\delta$. Formally, given a signal
$X$ with distribution $P_{X}$ and a distortion measure $d$, the
_rate–distortion_ function $R(\delta)$ finds the optimal encoding distribution
$P_{Y|X}^{\star}$, i.e., the channel $P_{Y|X}$ that generates a representation
$Y$ with the minimum amount of bits $R(\delta)$ and an expected distortion
lower than $\delta$. Namely,
$R(\delta)=\inf_{P_{Y|X}:\mathbb{E}[d(X,Y)]\leq\delta}I(X;Y).$
A celebrated result in rate–distortion theory is its duality. More precisely,
instead of looking for the channel $P_{Y|X}$ that most compresses a signal $X$
with a limited distortion $\delta$, one can look for the channel $P_{Y|X}$
that least distorts the signal $X$ with a limited budget of bits $r$. That is,
one can solve the _distortion–rate_ function
$D(r)=\inf_{P_{Y|X}:I(X;Y)\leq r}\mathbb{E}[d(X,Y)].$
Then, the duality theorem states that $R(\delta)=D^{-1}(\delta)$ and
$D(r)=R^{-1}(r)$ [40, Lemma 4.1.2], or, in words, that the inverse of the
rate–distortion function is the distortion–rate function, and vice versa.
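For discrete sources, points of this curve can be computed with the Blahut–Arimoto algorithm (not discussed in the paper; included here only as an illustration). For a Bernoulli($1/2$) source with Hamming distortion, the computed point must satisfy the known closed form $R(\delta)=\log 2-h_{b}(\delta)$ in nats:

```python
import math

def blahut_arimoto(p_x, dist, slope, iters=200):
    # One point of the rate-distortion curve of a discrete memoryless source,
    # parametrized by the Lagrangian slope (slope < 0): alternate the two
    # Blahut-Arimoto updates for the channel q(y|x) and the output law q(y).
    nx, ny = len(p_x), len(dist[0])
    q_y = [1.0 / ny] * ny
    for _ in range(iters):
        q_yx = [[q_y[y] * math.exp(slope * dist[x][y]) for y in range(ny)]
                for x in range(nx)]
        for x in range(nx):
            z = sum(q_yx[x])
            q_yx[x] = [v / z for v in q_yx[x]]
        q_y = [sum(p_x[x] * q_yx[x][y] for x in range(nx)) for y in range(ny)]
    rate = sum(p_x[x] * q_yx[x][y] * math.log(q_yx[x][y] / q_y[y])
               for x in range(nx) for y in range(ny) if q_yx[x][y] > 0)
    distortion = sum(p_x[x] * q_yx[x][y] * dist[x][y]
                     for x in range(nx) for y in range(ny))
    return rate, distortion

rate, delta = blahut_arimoto([0.5, 0.5], [[0, 1], [1, 0]], slope=-2.0)
h_b = -delta * math.log(delta) - (1 - delta) * math.log(1 - delta)
print(rate, delta)  # rate should equal log(2) - h_b(delta)
```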
###### Remark 7.
Rate–distortion theory is a well-studied field and the rate–distortion and
distortion–rate functions have many more interesting properties and analytical
solutions and bounds for particular (and common) cases.
##### Single-letterization
Sometimes, if $X$ is a sequence of signals $X=(X_{1},\ldots,X_{n})$, solving
the rate–distortion function is challenging. Assume that the signals $X_{i}$
are independent and the distortion $d$ is separable, i.e.,
$\mathbb{E}[d(X,Y)]=\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[d(X_{i},Y_{i})]$.
Then, a simpler task is to solve the single-letter version of the
rate–distortion function. Namely, for any $i\in[n]$, to solve
$R_{i}(\delta_{i})=\inf_{P_{Y_{i}|X_{i}}:\mathbb{E}[d(X_{i},Y_{i})]\leq\delta}I(X_{i};Y_{i}).$
Then, for any $i$, the equality $nR_{i}(\delta)=R(\delta)$ [19, Theorem 26.1]
holds. Hence, the channel $P_{Y|X}=P_{Y_{i}|X_{i}}^{\otimes n}$ can be used as
a proxy for $P_{Y|X}^{\star}$.
##### Backward-channel
Sometimes, working with the backward channel $P_{X|Y}$ is convenient to derive
an analytical solution to the rate–distortion function for a particular input
signal distribution, e.g., Bernoulli or Gaussian [19, Chapter 27.1]; to derive
an analytical bound for certain distribution families, e.g., bounded variance;
or to derive an analytical bound for certain distortion families, e.g.,
difference, additive, or autoregressive distortions [40, Sections 4.3, 4.6,
and 4.7].
### G.2 Connection to the generalization error
When designing a learning algorithm $P_{W|S}$, the aim is an algorithm that
attains a low accuracy error (or risk) while generalizing well. This informal
sentiment can be posed as a constrained optimization problem: to design an
algorithm $P_{W|S}$ that has the minimum possible expected generalization
error $G(\epsilon)$ while maintaining a population risk lower than $\epsilon$.
Namely, one may consider the _generalization–risk_ function to be
$G(\epsilon)=\inf_{P_{W|S}:\mathbb{E}[L_{S}(W)]\leq\epsilon}\mathbb{E}[\textnormal{gen}(W,S)],$
and select the algorithm $P_{W|S}^{\star}$ that solves it.
Furthermore, the expected generalization error increases with $I(W;S)$ as
shown in [4, Theorem 1] or (2). Therefore, one may instead consider the
_information-generalization–risk_ function
$G^{\blacktriangle}(\epsilon)=\inf_{P_{W|S}:\mathbb{E}_{P_{W,S}}[L_{S}(W)]\leq\epsilon}I(W;S),$
and use it as a surrogate of the generalization–risk function to choose the
algorithm $P_{W|S}^{\blacktriangle}$ that solves
$G^{\blacktriangle}(\epsilon)$ as a proxy for $P_{W|S}^{\star}$. Therefore,
the powerful rate–distortion theory may be employed to select a sensible
learning algorithm or, at least, to better understand the trade-off between
generalization and risk.
###### Remark 8.
There are several issues to be considered when applying rate–distortion theory
to the information-generalization–risk function. For instance, usually a
hypothesis is not separable, i.e., $W\neq(W_{1},\ldots,W_{n})$, and hence the
single-letterization of the problem must be carried out, obtaining $P_{W|Z_{i}}$
instead of $P_{W_{i}|Z_{i}}$, which is inconvenient since then obtaining
$P_{W|S}$ from $P_{W|Z_{i}}$ is cumbersome. Nonetheless, the aim of this
section is just to give some intuition about the connection between
rate–distortion and generalization, and a proper formal framework is beyond
the objective of this paper. The reader is referred to [11] and [23] for other
connections between these two concepts.
## Appendix H Additional remarks on the chi-squared based bounds
As mentioned in §3.4, the presented bounds in terms of the total variation
result in bounds based on the $\chi^{2}$-divergence and other $f$-divergences
employing the joint range strategy [19, Chapter 7]. In order to do so, the
loss function $\ell$ is required to be $L$-Lipschitz for all $z\in\mathcal{Z}$
under the discrete metric, or, in other words, to be bounded in a range
$[c,c+L]$ for some $c\in\mathbb{R}$.
###### Claim 1.
If a function $f$ is $L$-Lipschitz under the discrete metric, it is bounded in
$[c,c+L]$ for some $c\in\mathbb{R}$.
###### Proof.
If $|f(x)-f(y)|\leq L\rho_{\textnormal{H}}(x,y)$ for all $x,y\in\mathcal{X}$
then $|f(x)-f(y)|\leq L$. This holds if and only if $f:\mathcal{X}\to[c,c+L]$
for some $c\in\mathbb{R}$. ∎
As an example, Equation (6) is obtained as a corollary of Theorem 3.1.
However, note that Theorems 3.1, 3.1, 3.2, and 3.2, and Equations (1) and (3)
can be replicated using the variational representation of the
$\chi^{2}$-divergence [41, Example 6.4], which states that for all
distributions $P,Q$ over $\mathcal{X}$
$\chi^{2}(P,Q)=\sup_{f:\mathcal{X}\to\mathbb{R}}\Bigg\{\frac{\big(\mathbb{E}[f(X)]-\mathbb{E}[f(Y)]\big)^{2}}{\textnormal{Var}[f(Y)]}\Bigg\},$
where $X\sim P$, $Y\sim Q$, and the supremum is taken over all functions $f$
with finite expectation with respect to $P$ and $Q$ and finite variance with
respect to $Q$. Using this tool instead of the KR duality, Theorem 3.1 would
result in
$\big{|}\overline{\textnormal{gen}}(W,S)\big{|}\leq\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\bigg{[}\sqrt{\textnormal{Var}\big{[}\ell(W^{\prime},Z_{i})\big{]}\
\chi^{2}(P_{W|Z_{i}},P_{W})}\bigg{]},$ (15)
where $W^{\prime}$ is an independent copy of $W$ such that
$P_{W,Z_{i}}=P_{W}\otimes P_{Z_{i}}$ for all $i\in[n]$. Instead of requiring
that the loss $\ell$ is $L$-Lipschitz under the discrete metric for all
$z\in\mathcal{Z}$, Equation (15) requires that the function $\ell(w,z)$ has
finite expectation with respect to $P_{W|Z_{i}=z}$ and $P_{W}$ and finite
variance with respect to $P_{W}$ for all $z\in\mathcal{Z}$. Note that this is
a weaker requirement since Lipschitzness under the discrete metric implies
boundedness (Claim 1) and Popoviciu’s inequality states that a bounded random
variable has finite variance.
In particular, under the assumption that $\ell$ is bounded with a range
$[c,L+c]$, one may use Popoviciu’s inequality [42], i.e.,
$\textnormal{Var}[\ell(W^{\prime},Z_{i})]\leq L^{2}/4$, to recover the
corollaries of Theorems 3.1, 3.1, 3.2, and 3.2 using the discrete metric and
the joint range. As an example, using Popoviciu’s inequality in combination
with (15) recovers (6). Hence, the bounds obtained through the variational
representation of the $\chi^{2}$-divergence are both tighter and more general
than those obtained after applying the joint range strategy as in [43, §IV-B]
to the total variation bounds obtained through the KR duality. Compared to the
bound from (5), obtained after applying first Pinsker’s inequality and then
the joint range strategy, Equation (15) becomes looser as soon as
$\chi^{2}(P_{W|Z_{i}},P_{W})\geq\frac{-1}{\alpha}\Big{(}\texttt{W}\big{(}-\alpha
e^{-\alpha}\big{)}+\alpha\Big{)},$
where W denotes the Lambert or product-log function and
$\alpha=\textnormal{Var}[\ell(W^{\prime},Z_{i})]/2$. In the extreme case where
$\textnormal{Var}[\ell(W^{\prime},Z_{i})]=L^{2}/4$, Equation (5) is tighter
than (15) as soon as $\chi^{2}(P_{W|Z_{i}},P_{W})\gtrapprox 2.51$, a
restriction that becomes more favorable to (15) as the variance decreases.
More precisely, the bound obtained from the variational representation of the
$\chi^{2}$-divergence is tighter than (5) in the range where both bounds are
non-vacuous, i.e., when
$\textnormal{Var}[\ell(W^{\prime},Z_{i})]\leq(e^{2}-1)^{-1}L^{2}$.
F. Patricia Medina, Yeshiva University. Email: <EMAIL_ADDRESS>
Randy Paffenroth, Worcester Polytechnic Institute. Email: <EMAIL_ADDRESS>
# Machine Learning in LiDAR 3D point clouds
F. Patricia Medina Randy Paffenroth
###### Abstract
LiDAR point clouds contain measurements of complicated natural scenes and can
be used to update digital elevation models, monitor glaciers, detect faults
and measure uplift, take forest inventory, detect shoreline and beach volume
changes, analyze landslide risk, and map habitats and urban development, among
other applications. A very important application is the classification of the
3D cloud into elementary classes. For example, it can be used to differentiate
between vegetation, man-made structures and water. Our goal is to present a
preliminary comparison study for classification of 3D point cloud LiDAR data
that includes several types of feature engineering. In particular, we
demonstrate that providing context by augmenting each point in the LiDAR point
cloud with information about its neighboring points can improve the
performance of downstream learning algorithms. We also experiment with several
dimension reduction strategies, ranging from Principal Component Analysis
(PCA) to neural network based auto-encoders, and demonstrate how they affect
classification performance in LiDAR point clouds. For instance, we observe
that combining feature engineering with a dimension reduction method such as
PCA improves the accuracy of the classification with respect to a
straightforward classification on the raw data.
## 1 Introduction
LiDAR point clouds contain measurements of complicated natural scenes and can
be used to update digital elevation models, monitor glaciers, detect faults
and measure uplift, take forest inventory, detect shoreline and beach volume
changes, analyze landslide risk, and map habitats and urban development, among
other applications. A very important application is the classification of the
3D cloud into elementary classes. For example, it can be used to differentiate
between vegetation, man-made structures and water.
This paper describes results from using several classification frameworks in
3D LiDAR point clouds. We present a preliminary comparison study for
classification of 3D point cloud LiDAR data. We experiment with several types
of feature engineering by augmenting each point in the LiDAR point cloud with
information about its neighboring points and also with dimension reduction
strategies, ranging from Principal Component Analysis (PCA) to neural network
based auto-encoders, and demonstrate how they affect classification
performance in LiDAR point clouds. We present $F_{1}$ scores, accuracy and
error rates for each of the experiments to exhibit the improvement in
classification performance. Two of our proposed frameworks showed a
substantial improvement in error rates.
LiDAR is an active optical sensor that transmits laser beams towards a target
while moving through specific survey routes. The reflection of the laser from
the target is detected and analyzed by receivers in the LiDAR sensor. These
receivers record the precise time from when the laser pulse leaves the system
to when it returns in order to calculate the range distance between the sensor
and the target, combined with positional information from GPS (Global
Positioning System) and an INS (inertial navigation system). These distance measurements are
transformed to measurements of actual three-dimensional points of the
reflective target in object space. See inbook and Mather2004 for a technical
treatment of remote sensing.
Deep learning for 3D point clouds has received a lot of attention due to its
applicability to various domains such as computer vision, autonomous driving
and robotics. The most common tasks performed are 3D shape classification
shape , 3D object detection and tracking tracking , and 3D point cloud
segmentation segmentation . Key challenges in this domain include the high
dimensionality and the unstructured nature of 3D point clouds. In the case of
3D shape classification, recent methods include: projection-based networks
(multi-view representation and volumetric representation) Volumetric01 ;
Volumetric02 and point-based networks (point-wise MLP networks, convolution-based
networks, graph-based networks and others) graphbased . See unknown for
a comprehensive survey in deep learning for 3D point clouds. This paper
describes results from different classification frameworks in 3D LiDAR point
clouds in relevant classes of a natural scene. Note that our goal is to
classify point by point instead of performing shape classification and we
develop a preliminary framework to gain understanding of the performance of
specific combinations of algorithms applied to a specific LiDAR point cloud
dataset.
Our framework includes engineering new features from existent ones, possible
non-linear dimensionality reduction (auto-encoders), linear dimensionality
reduction (PCA) and finally the use of a feed-forward neural network
classifier. The outputs of these preprocessing steps are then used as training
data for a number of classifications algorithms including random forest and
k-nearest neighbor classifiers.
LiDAR stands for light detection and ranging and it is an optical remote
sensing technique that uses laser light to densely sample the surface of the
earth, producing highly accurate $x,\,y$ and $z$ measurements. The resulting
mass point cloud data sets can be managed, visualized, analyzed and shared
using ArcGIS arcgis . The collection vehicle of LiDAR data might be an
aircraft, helicopter, vehicle or tripod. (See Fig.1)
Figure 1: The profile belonging to a series of terrain profiles is measured in
the cross track direction of an airborne platform. The image was recreated
from figure 1.5 (b), p. 8 in Mather2004 . The figure first appeared in a
previous paper by one of the authors (see medina2019heuristic ).
LiDAR can be applied, for instance, to update digital elevation models,
monitor glaciers, detect faults and measure uplift, take forest inventory,
detect shorelines, measure beach volume changes, analyze landslide risk, and
map habitats and urban development Mather2004 ; inbook .
3D LiDAR point clouds have many applications in the Geosciences. A very
important application is the classification of the 3D cloud into elementary
classes. For example, it can be used to differentiate between vegetation, man-
made structures and water. Alternatively, only two classes such as ground and
non-ground could be used. Another useful classification is based on the
heterogeneity of surfaces. For instance, we might be interested in classifying
the point cloud of a reservoir into classes such as gravel, sand and rock. The
design of algorithms for classification of this data using a multi-scale
intrinsic dimensionality approach is of great interest to different scientific
communities. See the work in BRODU and BaIzMcNeSh:2012 for classification of
a natural scene using support vector machines. We also refer the interested
reader to medina2019heuristic , which performs multi-scale testing of a
multi-manifold hypothesis, using LiDAR data as a case study and computing
intrinsic dimension.
The paper is organized as follows. First, in section 2 the attributes of LiDAR
data are described. In section 2, we provide the formal classification code
for each class in Table 1 . In section 3 we describe the construction of the
neighbor matrix, which is a way of generating a new data frame using the
original features of the nearest neighbors of each design point. Next, in
section 4, we briefly describe the machine learning frameworks used in our
experiments and define the metric uses in our experiments. Three of the
frameworks the construction of a neighbor matrix as a way of feature
engineering. Two of the latter frameworks include linear dimension reduction
(PCA) or non-linear dimension reduction (auto-encoder.) In Section 5, we
describe the experiments, give a more detailed description of each
classification framework, and provide a summary of the $F_{1}$ scores in Table
3. Section 6 summarizes the results and proposes some future research
directions.
## 2 The data
LiDAR points can be classified into a number of categories including bare
earth or ground, top of canopy, and water (see Table 1). The
different classes are defined using numeric integer codes in the LAS files.
Classification codes were defined by the American Society for Photogrammetry
and Remote Sensing (ASPRS) for LAS formats. In the most recent version,
eighteen classes are defined, including:
0 | Never classified |
---|---|---
1 | Unassigned |
2 | Ground | $\xleftarrow{\hskip 85.35826pt}$
3 | Low vegetation |
4 | Medium vegetation |
5 | High vegetation |
6 | Building |
7 | Noise | $\xleftarrow{\hskip 85.35826pt}$
8 | Model key/ Reserved |
9 | Water | $\xleftarrow{\hskip 85.35826pt}$
10 | Rail | $\xleftarrow{\hskip 85.35826pt}$
11 | Road surface |
$\vdots$ | $\vdots$ |
17 | Bridge deck | $\xleftarrow{\hskip 85.35826pt}$
18 | High noise | $\xleftarrow{\hskip 85.35826pt}$
Table 1: Classification codes. The arrows point towards the six classes (ground, noise, water, rail, bridge deck and high noise) that we use for our experiments in the LiDAR data set graphed in Fig. 2. See https://desktop.arcgis.com/en/arcmap/10.3/manage-data/las-dataset/lidar-point-classification.html for a complete class code list.

Class code | Number of points
---|---
2 | 1578495
17 | 1151888
18 | 611201
9 | 226283
10 | 21261
7 | 462
Table 2: Number of points used per class. See Table 1 for class codes.
In our experiments, we use a publicly available LiDAR data set (USGS
Explorer) from a location close to the JFK airport. We used the visualization
tool from LAStools to graph the scene by intensity (see Fig. 2.) The data
consists of $5.790384\times 10^{6}$ points. We work with six classes (see
codes in Table 1.) The unassigned class does not provide any useful
information for training the learning algorithm, so we decided to consider the
six remaining classes. Note that noise points are the ones which typically
have a detrimental impact on data visualization and analysis. For example,
returns from high-flying birds and scattered pulses that encountered cloud
cover, smog haze, water bodies, and highly reflective mirrors can distort the
z-range of the points surrounding that location.
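The class selection described above can be sketched as follows. This is an illustrative toy, not the code used in the paper; the data-frame column names are our own assumptions.

```python
import pandas as pd

# Six classes kept for the experiments (codes from Table 1)
KEEP = [2, 7, 9, 10, 17, 18]   # ground, noise, water, rail, bridge deck, high noise

# Toy stand-in for the LiDAR data frame; column names are illustrative
df = pd.DataFrame({"classification": [1, 2, 7, 9, 1, 17],
                   "intensity": [10, 20, 30, 40, 50, 60]})
subset = df[df["classification"].isin(KEEP)]   # drop unassigned (code 1) points
```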
Figure 2: 3D LiDAR point cloud graphed by intensity for a location close to
the JFK airport, NY.
We included a snapshot of the satellite view from Google maps in Fig.3. The
geographical information in LiDAR is given in UTM.
Figure 3: Google map googlemaps satellite image of the location associated
with the 3D point cloud near the JFK airport, NY. Coordinates:
$40^{\circ}38^{\prime}38.6"N$, $73^{\circ}44^{\prime}46.9"W$, Rockaway Blvd,
Rosedale, NY 11422 See Fig.2. Link to the exact location:
https://goo.gl/maps/aWa47Gxzb5wuYNu76
The following attributes along with the position $(x,y,z)$ are maintained for
each recorded laser pulse. We stress that we are working with airborne LiDAR
data and not terrestrial LiDAR (TLS.)
1. 1.
Intensity. The LiDAR sensors capture the intensity of each return. The
intensity value is a measure of the return signal strength: the peak amplitude
of return pulses as they are reflected back from the target to the detector of
the LiDAR system.
2. 2.
Return number. An emitted laser pulse can have up to five returns depending on
the features it is reflected from and the capabilities of the laser scanner
used to collect the data. The first return will be flagged as return number
one, the second as return number two, and so on. (See Fig.4) Note that for TLS
we only have one return so this attribute would not be used in that case.
3. 3.
Number of returns. The number of returns is the total number of returns for a
given pulse. Laser pulses emitted from a LiDAR system reflect from objects
both on and above the ground surface: vegetation, buildings, bridges, and so
on. One emitted laser pulse can return to the LiDAR sensor as one or many
returns. Any emitted laser pulse that encounters multiple reflection surfaces
as it travels toward the ground is split into as many returns as there are
reflective surfaces. (See Fig.4)
4. 4.
Point classification. Every LiDAR point that is post-processed can have a
classification that defines the type of object that has reflected the laser
pulse. LiDAR points can be classified into a number of categories including
bare earth or ground, top of canopy, and water. The different classes are
defined using numeric integer codes in the LAS files.
Airborne LiDAR data is usually collected into surface data products at the
local and regional level. The data is collected and post-processed by very
specialized and expensive software that is not available to the general
public. One of the attributes produced in the post-processing phase is
“classification”. Many users are not able to extract classes directly from the
LiDAR point cloud due to the lack of access to such commercial software. This
classification is not always to be trusted, and machine learning algorithms
for automated classification would simplify this task for users and reduce
costs. (See ramamurthy2015geometric .)
5. 5.
Edge of flight line. The points will be symbolized based on a value of 0 or 1.
Points flagged at the edge of the flight line will be given a value of 1, and
all other points will be given a value of 0.
6. 6.
RGB. LiDAR data can be attributed with RGB (red, green, and blue) bands. This
attribution often comes from imagery collected at the same time as the LiDAR
survey.
7. 7.
GPS time. The GPS time stamp at which the laser point was emitted from the
aircraft. The time is in GPS seconds of the week.
8. 8.
Scan angle. The scan angle is a value in degrees between -90 and +90. At 0
degrees, the laser pulse is directly below the aircraft at nadir. At -90
degrees, the laser pulse is to the left side of the aircraft, while at +90,
the laser pulse is to the right side of the aircraft in the direction of
flight. Most LiDAR systems are currently less than $\pm$30 degrees.
9. 9.
Scan direction. The scan direction is the direction the laser scanning mirror
was traveling at the time of the output laser pulse. A value of 1 is a
positive scan direction, and a value of 0 is a negative scan direction. A
positive value indicates the scanner is moving from the left side to the right
side of the in-track flight direction, and a negative value is the opposite.
In all of our experiments we keep only a total of seven attributes: x, y, z,
intensity, scan angle, number of returns, number of this return. Note that
RGB values can be obtained from satellite map images such as Google maps. We
decided not to perform the data integration step to include these values,
since we prefer to work with only the original LiDAR data set (see Fig.2).
Figure 4: A pulse can be reflected off a tree’s trunk, branches and foliage as
well as reflected off the ground. The image is recreated from a figure on
p. 7 in inbook .
## 3 Feature engineering: nearest neighbor matrix
We uniformly select $s$ examples out of the original data. For each LiDAR data
point (example) we consider $k$ nearest neighbors based on spatial coordinates
$(x_{i},y_{i},z_{i})$ and create a new example which is in higher dimensions.
The new example we generated includes all the features of all neighbors (not
only the spatial features.)
More precisely, let $F_{n(0)}^{(i)}$ be the set of $N$ features associated
with the $i$th example (the first three features are spatial). Now let
$F_{n(j)}^{(i)}$ be the set of $N$ features associated with the $j$th nearest
neighbor of the $i$th example. So if we consider the first $k$ nearest
neighbors (computed with respect to the spatial features), we end up with the
following sets of features associated with the $i$th example:
$F_{n(0)}^{(i)},F_{n(1)}^{(i)},\ldots,F_{n(k)}^{(i)},$ (1)
where $i=1,\ldots,s.$ Here $F_{n(j)}^{(i)}\in\mathbb{R}^{1\times N}$ for each
$j=1,\ldots,k.$
We concatenate the features in (1) and obtain rows
$\left[\begin{array}[]{c|c|c|c}F_{n(0)}^{(i)}&F_{n(1)}^{(i)}&\ldots&F_{n(k)}^{(i)}\end{array}\right]\in\mathbb{R}^{1\times(k+1)\cdot
N}$ (2)
for each $i=1,\ldots,s.$ We then put all the rows together and get what we
call the neighbor matrix in (3)
$\left[\begin{array}[]{c|c|c|c}F_{n(0)}^{(1)}&F_{n(1)}^{(1)}&\ldots&F_{n(k)}^{(1)}\\\
F_{n(0)}^{(2)}&F_{n(1)}^{(2)}&\ldots&F_{n(k)}^{(2)}\\\
\vdots&\vdots&\vdots&\vdots\\\
F_{n(0)}^{(s)}&F_{n(1)}^{(s)}&\ldots&F_{n(k)}^{(s)}\\\
\end{array}\right]\in\mathbb{R}^{s\times(k+1)\cdot N}$ (3)
We illustrate how to obtain the second row of the neighbor matrix in Fig. 5.
Figure 5: Forming the second row by concatenating the features of the 3
nearest neighbors of the second example in the original data frame. The
neighbors are computed with respect to the spatial coordinates $(x,y,z)$ of
the design point. We are working with the list of features presented in (1)
for $i=2$ and $k=3.$ See also the second row of the matrix in (3). Observe
that if the original data has $N=7$ features, the neighbor matrix has
$(3+1)\times 7=28$ features.
Observe that in Fig. 5, $F_{n(1)}^{(2)}$ can also be a design point
$F_{n(0)}^{(4)}$ and it could share nearest neighbors with the design point
$F_{n(0)}^{(2)}.$ In our experiments described in section 5, we chose
$s=100,000$ to construct the neighbor matrix.
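The construction above can be sketched with scikit-learn's nearest-neighbor search. This is a minimal sketch, not the paper's code; the function and variable names are our own, and we assume the first three columns of the data array are the spatial coordinates $(x,y,z)$, as in the text.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighbor_matrix(X, k, s=None):
    """Assemble the neighbor matrix of Eq. (3).

    X : (n, N) array whose first three columns are the spatial coordinates
        (x, y, z); the remaining columns are the other LiDAR attributes.
    k : number of nearest neighbors (computed on (x, y, z) only).
    s : number of uniformly sub-sampled design points (default: all points).
    Returns an (s, (k + 1) * N) array: each row concatenates the features
    of a design point and of its k nearest neighbors, as in Eq. (2).
    """
    n, N = X.shape
    idx = np.arange(n) if s is None else np.linspace(0, n - 1, s).astype(int)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X[:, :3])
    # neigh[i, 0] is the design point itself, neigh[i, 1:] its k neighbors
    _, neigh = nn.kneighbors(X[idx, :3])
    return X[neigh].reshape(len(idx), (k + 1) * N)

# Toy example: 100 points with N = 7 attributes and k = 3 neighbors
X = np.random.default_rng(0).normal(size=(100, 7))
M = neighbor_matrix(X, k=3)   # shape (100, 28), as in Fig. 5
```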
## 4 Machine learning frameworks
Two of our frameworks use the neighbor matrix described in section 3 as input.
We design a machine learning algorithm for our neighbor matrix. We summarize
the steps for the frameworks with a dimension reduction step. First, perform
dimensionality reduction using either PCA (for a linear projection) or an
auto-encoder. If using PCA, then use the projected features as the predictors
for our learning algorithm (classifier). If using an auto-encoder, then use
the inner layer as the predictor for our classifier. Last, provide the
projected (labeled) training sample to a classifier. We use K-nearest neighbor
(KNN), random forest (RF and RF-Ens), and feed-forward neural network (NN)
classifiers.
The metric that we use to measure precision of our algorithm is given by
$PRE_{micro}=\dfrac{\sum_{j=1}^{N}TP_{j}}{\sum_{j=1}^{N}TP_{j}+\sum_{j=1}^{N}FP_{j}},$
(4)
(known as the micro average), where $TP_{j}$ means true positive on the $j$th
class and $FP_{j}$ means false positive on the $j$th class.
The recall (or sensitivity) is given by
$Recall=\dfrac{\sum_{j=1}^{N}TP_{j}}{\sum_{j=1}^{N}TP_{j}+\sum_{j=1}^{N}FN_{j}},$
(5)
where $FN_{j}$ means false negative on the $j$th class.
We provide the
$F_{1}\mbox{ score }=2\dfrac{PRE_{micro}\cdot Recall}{PRE_{micro}+Recall}.$
(6)
Using the $F_{1}$-scores as metric, the learning algorithm including the auto-
encoder to perform dimensionality reduction performs better than the one that
feeds the classifier with the projected features resulting from performing
PCA.
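Equations (4)-(6) can be sketched directly from their definitions. The function below is a toy check of the micro-averaged metrics, not the paper's code; the small label arrays are illustrative.

```python
import numpy as np

def micro_scores(y_true, y_pred, classes):
    """Micro-averaged precision (4), recall (5) and F1 score (6)."""
    tp = sum(np.sum((y_pred == c) & (y_true == c)) for c in classes)
    fp = sum(np.sum((y_pred == c) & (y_true != c)) for c in classes)
    fn = sum(np.sum((y_pred != c) & (y_true == c)) for c in classes)
    pre = tp / (tp + fp)            # PRE_micro, Eq. (4)
    rec = tp / (tp + fn)            # Recall, Eq. (5)
    return pre, rec, 2 * pre * rec / (pre + rec)

# Toy labels using three of the class codes from Table 1
y_true = np.array([2, 2, 9, 9, 17])
y_pred = np.array([2, 9, 9, 9, 17])
pre, rec, f1 = micro_scores(y_true, y_pred, classes=[2, 9, 17])
```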
We use a K-fold cross validation score with the $F_{1}$ scores. The general
idea is to randomly divide the data into $K$ equal-size parts. We leave out
part $k$, fit the model to the other $K-1$ parts (combined), and then obtain
predictions for the left-out $k$th part. This is done in turn for each part
$k=1,2,\ldots,K$, and then the results are combined. See hastie_09_elements-
of.statistical-learning for a more detailed description of re-sampling
methods. Fig. 6 illustrates the 5-fold re-sampling procedure.
The scores in Table 3 are the
$\mbox{ mean of the CV score }\pm 2\times\mbox{standard deviations of the CV
score},$ (7)
where CV scores means the 5-fold cross validation score for $F_{1}$ scores.
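The 5-fold CV score of Eq. (7) can be reproduced with scikit-learn. This is a sketch on synthetic data; the dataset, classifier settings, and feature counts are illustrative stand-ins for the LiDAR features, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the LiDAR features (7 attributes, 3 classes)
X, y = make_classification(n_samples=500, n_features=7, n_informative=5,
                           n_classes=3, random_state=0)
clf = RandomForestClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1_micro")
summary = f"{scores.mean():.4f} (+/- {2 * scores.std():.4f})"   # as in Eq. (7)
```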
Figure 6: 5-fold CV example for $s$ data points: $p_{1},p_{2},\ldots,p_{s}$.
Each randomly selected fifth is used as a validation set (shown in purple),
and the remainder as a training set (shown in orange). The $F_{1}$ score is
computed for each split and then the mean of the $F_{1}$ scores is computed.
The CV scores are calculated as in (7). Such scores for the experiments
described in Section 5 are summarized in Table 3. The figure is a recreation of a graph from
hastie_09_elements-of.statistical-learning , p. 181.
We used TensorFlow (an open source software library for numerical computation
using data flow graphs, see tensorflow ) to build the auto-encoder. The rest
of the scripts are in Python using Sci-kit Learn scikit and Pandas pandas
libraries.
In all experiments in Section 5, we performed the final classification stage
with K-nearest neighbors (KNN), random forest (RF), an ensemble of random
forests (RF-Ens) and a feed-forward neural network (NN). We
standardized and normalized the input data for all of our experiments.
### 4.1 Dimension reduction
We chose PCA among the unsupervised linear methods and an auto-encoder as an
unsupervised non-linear method to perform dimension reduction. Recall that we
inserted a dimension reduction stage in some of our frameworks (see Section 5
for experiment descriptions including dimension reduction methods.)
PCA is one of the most popular unsupervised learning techniques and it
performs linear dimensionality reduction that preserves as much of the
variance in the data as possible after embedding the data into a linear
subspace of lower dimension. The interested reader can find a detailed
exposition in hastie_09_elements-of.statistical-learning .
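As a sketch of this step (with random data in place of the standardized LiDAR features; the variable names are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

# Random stand-in for the 7 standardized LiDAR attributes
X = np.random.default_rng(0).normal(size=(1000, 7))
pca = PCA(n_components=5).fit(X)           # 5 components, as in Section 5
X_reduced = pca.transform(X)               # projected features for the classifier
explained = pca.explained_variance_ratio_.sum()
```

The `explained_variance_ratio_` attribute is what guides the choice of the number of components in Section 5.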
Deep auto-encoders are feed-forward neural networks with an odd number of
hidden layers and shared weights between the left and right layers
10.5555/2207825 . The input data $X$ (input layer) and the output data
$\hat{X}$ (output layer) have $d^{(0)}$ nodes (the dimension of the layer.)
More precisely, auto-encoders learn a non-linear map from the input to itself
through a pair of encoding and decoding phases Zhou-Paffenroth2017
$\hat{X}=D(E(X)),$ (8)
where $E$ maps the input layer $X\in\mathbb{R}^{d(0)}$ to the “most” hidden
layer (encodes the input data) in a non-linear fashion, $D$ is a non-linear
map from the “most” hidden layer to the output layer (decodes the “most”
hidden layer), and $\hat{X}$ is the recovered version of the input data. In a
5-layer auto-encoder, $E(X)\in\mathbb{R}^{d^{(3)}}$. An auto-encoder therefore
solves the optimization problem:
$\operatorname*{argmin}_{E,\,D}{\|X-D(E(X))\|}_{2}^{2}.$ (9)
We are motivated to include deep auto-encoders (or multilayer auto-encoders)
in our experiments, since they have been demonstrated to be effective at
discovering non-linear features across problem domains.
In Fig.7, we show a 5-layer auto-encoder (a neural network with five hidden
layers.) We denote the dimension of the $i$th layer by $d^{(i)}.$ The encoder
is the composition of the first three inner layers. The third inner layer (the
most hidden layer) is the output of the encoder and its dimension is
$d^{(3)}$. In two of our experiments, we use this third layer to reduce the
dimension of the input data $X$. The input layers $X$ can be either the raw
data or the neighbor matrix described in Section 3.
Figure 7: 5-layer auto-encoder diagram. The input layer has dimension
$d^{(0)}$; the five inner layers have dimensions
$d^{(1)},\,d^{(2)},\,d^{(3)},\,d^{(4)}$ and $d^{(5)}$, respectively. The outer
layer $\hat{X}$ has dimension $d^{(6)}=d^{(0)}$ since this is an auto-encoder.
The 5th hidden layer has dimension $d^{(5)}=d^{(1)}$ and the 4th hidden layer
has dimension $d^{(4)}=d^{(2)}$. The 3rd layer is the most inner layer, with
dimension $d^{(3)}$, which is the reduced dimension we use in some of the
frameworks for classification.
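The paper's auto-encoder was built in TensorFlow (Section 4). As a framework-free illustration of Eqs. (8)-(9), here is a forward pass through a mirrored 7-6-5-5-5-6-7 architecture (our reading of the raw-data dimensions in Section 5). The weights are random and untrained; actual training would minimize the reconstruction loss by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
dims = [7, 6, 5, 5, 5, 6, 7]            # d0..d6, mirrored around the bottleneck

# One weight matrix and bias vector per layer
Ws = [rng.normal(0, 0.1, size=(a, b)) for a, b in zip(dims[:-1], dims[1:])]
bs = [np.zeros(b) for b in dims[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    """Return (E(X), X_hat): the bottleneck code and the reconstruction."""
    h, code = X, None
    for i, (W, b) in enumerate(zip(Ws, bs)):
        h = h @ W + b
        if i < len(Ws) - 1:             # keep a linear output layer
            h = sigmoid(h)
        if i == 2:                      # third hidden layer = bottleneck E(X)
            code = h
    return code, h

X = rng.normal(size=(10, 7))
code, X_hat = forward(X)                # Eq. (8): X_hat = D(E(X))
loss = np.mean((X - X_hat) ** 2)        # objective of Eq. (9)
```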
## 5 Classification experiments
We include a more granular description of each of the frameworks described in
Section 4 that we used in our experiments.
We have three frameworks consisting of two stages. The first two include
stage 1: perform dimension reduction on the raw data with a linear
unsupervised method (PCA) or a non-linear unsupervised method (the most inner
layer of an auto-encoder); stage 2: feed the classifier with the new
predictors resulting from the dimension reduction. The third two-stage
framework includes stage 1: neighbor matrix assembly; stage 2: feed the
classifiers with the newly generated data whose features come from the
neighbor matrix.
The two frameworks with three stages include stage 1: construction of the
neighbor matrix; stage 2: perform dimension reduction on the neighbor matrix
with a linear unsupervised method (PCA) or a non-linear unsupervised method
(the most inner layer of an auto-encoder); stage 3: feed the classifiers with
the new predictors resulting from the dimension reduction.
We consider two classifiers, K-nearest neighbors and Random Forest for 6
classes (ground, bridge deck, high noise, water, rail and noise). We choose
$k=15$ as the number of nearest neighbors for the construction of the
neighbor matrix described in Section 3.
We use 100,000 sub-sampled points for assembling the neighbor matrix. We chose
the latter sub-sample equally spaced according to the order of the original
LiDAR data set. We apply two processing steps to the training and testing
sets: standardization and normalization.
Step 1.
Standardization of each feature. Compute the mean and standard deviation for
the training set and the testing set. Each transformed data set has mean 0 and
standard deviation 1.
Step 2.
Normalization of transformed data sets from Step 1. Re-scaling the training
set and testing set to have norm 1. That is, apply the map
$x\mapsto\dfrac{x}{{\|x\|}_{2}},$
where ${\|\cdot\|}_{2}$ is the Euclidean norm. The map sends the data points
to points on the unit sphere.
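Steps 1 and 2 can be sketched with scikit-learn. Following the text, each set is standardized with its own statistics (a common alternative convention would fit the scaler on the training set only); the function and variable names are ours.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, Normalizer

def preprocess(A):
    """Step 1: standardize each feature (mean 0, std 1);
    Step 2: re-scale each point to unit Euclidean norm."""
    A_std = StandardScaler().fit_transform(A)     # per-set statistics, as in the text
    return Normalizer(norm="l2").transform(A_std)

rng = np.random.default_rng(0)
X_train = preprocess(rng.normal(5.0, 2.0, size=(200, 7)))
X_test = preprocess(rng.normal(5.0, 2.0, size=(50, 7)))
```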
When the dimension reduction stage is inserted in the basic classification
framework, we used the explained variance to choose the number of components
for PCA and the number of nodes of the inner layer in the auto-encoder. We
described the auto-encoder layer terminology in Section 4.1 and, in
particular, Fig. 7 to ease understanding. We have two cases, depending on
whether we have the neighbor matrix construction stage:
1. 1.
If the framework does not include the neighbor matrix construction stage, we
use 5 components for the PCA and a 5-dimensional inner layer of a 5-layer
auto-encoder. For the 5-layer auto-encoder, the input layer dimension is
$d^{(0)}=7$ (input features: x, y, z, intensity, scan angle, number of
returns, number of this return). The first hidden layer has dimension
$d^{(1)}=6$, the second inner layer has dimension $d^{(2)}=5$ and
the most inner layer also has dimension $d^{(3)}=5.$ The layer dimensions
included in the decoder are $d^{(4)}=5,\,d^{(5)}=7$ and $d^{(6)}=7=d^{(0)}.$
2.
If the framework includes the neighbor matrix (see (3) and Fig. 5), we use 40
components for the PCA and a 40-dimensional inner layer of a 5-layer auto-
encoder to perform non-linear dimension reduction. For the 5-layer auto-
encoder, the input layer dimension is $d^{(0)}=8(k+1)$, where $k$ is the
number of nearest neighbors used to assemble the neighbor matrix; we chose
$k=15$ in our experiments. The first inner layer has dimension
$d^{(1)}=7(k+1)$, the second layer has dimension $d^{(2)}=5(k+1)$ and the most
inner layer has dimension $d^{(3)}=40$ (also the dimension of $E(X)$, where $X$
is the input data). The layer dimensions included in the decoder are
$d^{(4)}=5(k+1),\,d^{(5)}=7(k+1)$ and $d^{(6)}=8(k+1)=d^{(0)}.$
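The two layer schedules above can be summarized in a small helper (a sketch that simply mirrors the dimensions listed in the two cases):

```python
def autoencoder_dims(k=None):
    """Layer dimensions d^(0)..d^(6) of the 5-layer auto-encoder:
    k=None for the 7 raw input features (case 1), or the number of
    nearest neighbors when a neighbor matrix is the input (case 2)."""
    if k is None:
        return [7, 6, 5, 5, 5, 7, 7]
    d0 = 8 * (k + 1)
    return [d0, 7 * (k + 1), 5 * (k + 1), 40, 5 * (k + 1), 7 * (k + 1), d0]
```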
The following parameters were used in the auto-encoder implementation: a
learning rate of 0.01, 200,000 epochs and a batch size of 1,000.
In all experiments, the feed-forward neural network classifier architecture
consists of an input layer made of the new predictors obtained after
dimensionality reduction and two hidden layers (the first hidden layer has
dimension 20, the second hidden layer has dimension 15).
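An equivalent architecture can be sketched with scikit-learn; the solver defaults, iteration budget and synthetic data here are illustrative assumptions, not the configuration used in the experiments:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Two hidden layers of sizes 20 and 15, as described above; the input layer
# is whatever predictors remain after the dimension reduction stage.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                # e.g. 5 PCA components
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # toy binary labels
clf = MLPClassifier(hidden_layer_sizes=(20, 15), max_iter=1000,
                    random_state=0).fit(X, y)
print(clf.score(X, y))
```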
| KNN | RF | RF-Ens | NN
---|---|---|---|---
Raw | 0.8670 (+/- 0.0004) | 0.8701 (+/- 0.0007) | 0.8564 (+/- 0.0019) | 0.8241 (+/- 0.0018)
PCA | 0.8399 (+/- 0.0002) | 0.8384 (+/- 0.0010) | 0.8212 (+/- 0.0011) | 0.7791 (+/- 0.0069)
Enc | 0.8223 (+/- 0.0004) | 0.8160 (+/- 0.0003) | 0.7902 (+/- 0.0041) | 0.6331 (+/- 0.0110)
Neig+PCA | 0.8291 (+/- 0.0032) | 0.8445 (+/- 0.0029) | 0.8361 (+/- 0.0031) | 0.9748 (+/- 0.0042)
Neig+Enc | 0.7366 (+/- 0.0045) | 0.7816 (+/- 0.0044) | 0.7700 (+/- 0.0049) | 0.6770 (+/- 0.0059)
Neig | 0.8303 (+/- 0.0025) | 0.9497 (+/- 0.0101) | 0.9499 (+/- 0.0118) | 0.9792 (+/- 0.0044)
Table 3: 5-fold cross validation of $F_{1}$ scores for different classification frameworks; number of classes = 6; Raw = standardized and normalized raw data (includes the pre-processing step); Enc = encoder (using the inner layer of the auto-encoder for dimension reduction); the PCA and Enc inputs have already been standardized and normalized.
 | accuracy | error rate
---|---|---
RF Raw | 0.8844 | 0.1156
RF Enc | 0.7637 | 0.2363
KNN PCA | 0.8416 | 0.1584
KNN Enc | 0.8391 | 0.1609
NN Neigh | 0.9847 | 0.0153
NN Neigh + PCA | 0.9770 | 0.0230
Table 4: Accuracy and error rates associated to the best f1 scores presented
on Table 3
We explain each of the experiments included in Table 3. We are using the
following classifiers: K-nearest neighbor (KNN), random forest (RF), an
ensemble of 20 random forests of maximum depth 20 (RF-Ens), and a feed-forward
neural network with two hidden layers (NN). The first hidden layer of NN has
dimension 20 and the second hidden layer has dimension 15.
We describe the framework associated with each row of Table 3 in Table 5.
Experiment 1 | “Raw” | The standardized and normalized raw data is directly used as input for each
---|---|---
| | of the classifiers mentioned above (KNN, RF, RF-Ens, NN.)
Experiment 2 | “PCA” | The input is the standardized and normalized raw data.
| | We first insert the linear dimension reduction stage
| | by performing PCA with 5 components.
| | We feed each of the classifiers with the new predictors obtained by
| | projecting into the subspace generated by the 5 principal components.
Experiment 3 | “Enc” | The input is the standardized and normalized raw data.
| | We first insert the non-linear dimension reduction stage by using
| | the most inner layer (the third one) of the 5-layer auto-encoder.
| | The dimension of the most inner layer is $d^{(3)}=5$.
| | We feed each of the classifiers with the new predictors obtained by projecting into
| | the manifold generated by the encoder, $E(X).$
Experiment 4 | “Neigh + PCA” | The input is the standardized and normalized neighbor matrix
| | (assembled with 100,000 examples.)
| | We first insert the linear dimension reduction stage by performing
| | PCA with 40 components.
| | We feed each of the classifiers with the new predictors obtained by projecting into
| | the subspace generated by the 40 principal components.
Experiment 5 | “Neigh + Enc” | The input is the standardized and normalized neighbor matrix
| | (assembled with 100,000 examples.)
| | We first insert the non-linear dimension reduction stage by using
| | the most inner layer (the third one) of the 5-layer auto-encoder.
| | The dimension of the most inner layer is $d^{(3)}=40$.
| | We feed each of the classifiers with the new predictors obtained by projecting into
| | the manifold generated by the encoder, $E(X).$
Experiment 6 | “Neigh” | The standardized and normalized neighbor matrix
| | (assembled with 100,000 examples) is directly used as
| | input for the classifiers mentioned above (KNN, RF, RF-Ens, NN.)
Table 5: Description of the experiments. The cross-validated $F_{1}$ scores for
these experiments are presented in Table 3.
We defined the $F_{1}$ metric in 6. Table 3 shows the 5-fold cross validated
scores as described in Section 4.
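The cross-validated scores in Table 3 can be reproduced in form (not in value: the data here are synthetic stand-ins) with scikit-learn's `cross_val_score`; `f1_macro` is our assumption for how the multi-class $F_{1}$ is averaged:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))              # stand-in for the LiDAR predictors
y = rng.integers(0, 6, size=200)           # 6 classes, as in the experiments

scores = cross_val_score(RandomForestClassifier(n_estimators=20, random_state=0),
                         X, y, cv=5, scoring="f1_macro")
print(f"{scores.mean():.4f} (+/- {scores.std():.4f})")
```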
In Table 3, the highest 5-CV $F_{1}$ scores are observed when using the
neighbor matrix with the random forest, the ensemble of random forests and the
neural network. We also observe a high score (0.9748) when combining a
neighbor matrix (previously standardized and normalized), performing PCA and
then using the feed-forward neural network classifier.
We also note that using the neighbor matrix as input and reducing its
dimension with the inner layer of the auto-encoder does not perform as well as
the combination of the neighbor matrix and PCA. On the other hand, observe
that for the classifiers KNN and RF, using raw data as input and then reducing
the dimension with the encoder gives similar results to using the neighbor
matrix as input and reducing the dimension with PCA.
Table 4 includes the accuracy and error rates for the best $F_{1}$ scores
observed in Table 3. Notice that the error rate corresponding to the neighbor
matrix, with and without inserting PCA in the framework, is at least six times
smaller than the error rate corresponding to the rest of the methods.
We include the confusion matrices corresponding to the highest $F_{1}$ score
for each case in Figures 8–13.
Figure 8: Confusion matrix corresponding to the random forest classifier with raw data as input.
Figure 9: Confusion matrix corresponding to the random forest classifier with new predictors originated from the inner layer of the auto-encoder as input.
Figure 10: Confusion matrix corresponding to the k-nearest neighbor classifier with predictors originated from PCA as input.
Figure 11: Confusion matrix corresponding to the k-nearest neighbor classifier with predictors originated from the inner layer of the auto-encoder as input.
Figure 12: Confusion matrix corresponding to the feed-forward neural network classifier with the neighborhood matrix as input.
Figure 13: Confusion matrix corresponding to the feed-forward neural network classifier with the new predictors originated from PCA applied to the neighbor matrix as input.
## 6 Summary and Future Research Directions
We performed a comparison of various classification techniques using linear
dimension reduction (PCA) and non-linear dimension reduction (auto-encoder).
The best results ($F_{1}$ scores) were obtained by using the neighbor matrix
as input, reducing the dimension of the new data frame using PCA, and using a
feed-forward neural network as the classifier. Moreover, applying a feed-
forward neural network classifier to the neighbor matrix, with and without
inserting the PCA step, shows a great improvement in the error rates with
respect to the other frameworks. Improving the performance of a classifying
framework to differentiate elementary classes such as vegetation, water and
ground will help to automate processes in applications such as habitat
mapping and elevation models, among others.
The research effort revealed a number of potential future research directions:
* •
Exploiting intrinsic dimension techniques at different scales to generate more
features. In this way, the algorithm will have more information on the
geometry of the data to perform better classification of the classes. See
(9) for work on estimation of intrinsic dimension using local
PCA and (8) for a multi-scale classification example using support vector
machines. (14) and (13) provide a maximum likelihood framework for
intrinsic dimension estimation.
* •
Determine relationships between encoder-decoders and product coefficient
representations of measures.
* •
Analyze a larger forestry data set with trees and classes such as trunk, ground
and leaves. This is linked to an important application related to climate
change. See (25) for definitions and theories of indirect and direct methods
to estimate the leaf area index (LAI) in terrestrial LiDAR, which is relevant
to understanding the gas-vegetation exchange phenomenon.
* •
Modify the architecture of the auto-encoder by adding more layers and/or
changing the dimension of the inner layers. Compare the accuracy using this
new preprocessing step with the one resulting from PCA.
* •
Perform shape analysis by combining the results from this paper with the
current state-of-the-art shape analysis techniques. The application would use
shape recognition in forestry data, where the recognition of leaf shapes
would be of great interest for practitioners.
###### Acknowledgements.
This research is supported by the Azure Microsoft AI for Earth grant. Many
thanks to Monika Moskal (WU), Jonathan Batchelor (WU) and Zheng Guang (NU) for
sharing their expertise in the technical aspects of LiDAR data acquisition and
for encouraging the pursuit of the future directions for applications in
forestry. We gratefully acknowledge Linda Ness for encouraging further
discussions on manifold learning for LiDAR data in the _Women in Data Science
and Mathematics Research Collaboration Workshop (WiSDM)_ , July 17-21, 2017,
at the _Institute for Computational and Experimental Research in Mathematics
(ICERM)_. The workshop was partially supported by grant number NSF-HRD
1500481-AWM ADVANCE and co-sponsored by Brown’s Data Science Initiative.
## References
* (1) https://www.esri.com/en-us/arcgis/products/.
* (2) http://lastools.org/.
* (3) http://maps.google.com.
* (4) M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
* (5) Y. S. Abu-Mostafa, M. Magdon-Ismail, and H.-T. Lin, Learning From Data, AMLBook, 2012.
* (6) N. Anantrasirichai, C. Canagarajah, D. Redmill, and D. Bull, Volumetric representation for sparse multi-views, 11 2006, pp. 1221 – 1224.
* (7) D. Bassu, R. Izmailov, A. McIntosh, L. Ness, and D. Shallcross, Centralized multi-scale singular vector decomposition for feature construction in lidar image classification problems, in IEEE Applied Imagery and Pattern Recognition Workshop (AIPR), IEEE, 2012.
* (8) N. Brodu and D. Lague, 3d terrestrial lidar data classification of complex natural scenes using a multi-scale dimensionality criterion: Applications in geomorphology, ISPRS Journal of Photogrammetry and Remote Sensing, 68 (2012), pp. 121 – 134.
* (9) K. Fukunaga, 15 intrinsic dimensionality extraction, 1982.
* (10) Y. Guo, H. Wang, Q. Hu, H. Liu, L. Liu, and M. Bennamoun, Deep learning for 3d point clouds: A survey, 12 2019.
* (11) T. Hastie, R. Tibshirani, and J. Friedman, The elements of statistical learning: data mining, inference and prediction, Springer, 2 ed., 2009.
* (12) Y. Jung, S.-W. Seo, and S.-W. Kim, Curb detection and tracking in low-resolution 3d point clouds based on optimization framework, IEEE Transactions on Intelligent Transportation Systems, PP (2019), pp. 1–16.
* (13) E. Levina and P. Bickel, Maximum likelihood estimation of intrinsic dimension, in Advances in Neural Information Processing Systems (NIPS), vol. 17, MIT Press, 2005, pp. 777–784.
* (14) E. Levina and P. J. Bickel, Maximum Likelihood Estimation of Intrinsic Dimension., in NIPS, 2004.
* (15) P. M. Mather, Computer Processing of Remotely-Sensed Images: An Introduction, John Wiley & Sons, Inc., USA, 2004.
* (16) W. McKinney, Data structures for statistical computing in python, in Proceedings of the 9th Python in Science Conference, S. van der Walt and J. Millman, eds., 2010, pp. 51 – 56.
* (17) F. P. Medina, L. Ness, M. Weber, and K. Y. Djima, Heuristic framework for multiscale testing of the multi-manifold hypothesis, in Research in Data Science, Springer, 2019, pp. 47–80.
* (18) J. Neil, C. Storlie, and A. Brugh, Graph-based network anomaly detection, (2010).
* (19) A. Nguyen and B. Le, 3d point cloud segmentation: A survey, 11 2013, pp. 225–230.
* (20) V.-T. Nguyen, T.-T. Tran, V.-T. Cao, and D. Laurendeau, 3d point cloud registration based on the vector field representation, 11 2013.
* (21) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, Scikit-learn: Machine learning in Python, Journal of Machine Learning Research, 12 (2011), pp. 2825–2830.
* (22) G. Petrie and C. Toth, Introduction to Laser Ranging, Profiling, and Scanning, 11 2008, pp. 1–28.
* (23) R. Ramamurthy, K. Harding, X. Du, V. Lucas, Y. Liao, R. Paul, and T. Jia, Geometric and topological feature extraction of linear segments from 2d cross-section data of 3d point clouds, in Dimensional Optical Metrology and Inspection for Practical Applications IV, vol. 9489, International Society for Optics and Photonics, 2015, p. 948905.
* (24) R. Schnabel, R. Wahl, R. Wessel, and R. Klein, Shape recognition in 3d point-clouds, (2012).
* (25) G. Zheng and L. Moskal, Retrieving leaf area index (lai) using remote sensing: Theories, methods and sensors, Sensors (Basel, Switzerland), 9 (2009), pp. 2719–45.
* (26) C. Zhou and R. C. Paffenroth, Anomaly detection with robust deep autoencoders, in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, New York, NY, USA, 2017, ACM, pp. 665–674.
# The Effect of Pressure Fluctuations on the Shapes of Thinning Liquid
Curtains
Bridget M. Torsey School of Mathematical Sciences, Rochester Institute of
Technology, Rochester, NY 14623, USA Steven J. Weinstein School of
Mathematical Sciences, Rochester Institute of Technology, Rochester, NY 14623,
USA Department of Chemical Engineering, Rochester Institute of Technology,
Rochester, NY 14623, USA David S. Ross School of Mathematical Sciences,
Rochester Institute of Technology, Rochester, NY 14623, USA Nathaniel S.
Barlow School of Mathematical Sciences, Rochester Institute of Technology,
Rochester, NY 14623, USA
###### Abstract
We consider the time-dependent response of a gravitationally-thinning inviscid
liquid sheet (a coating curtain) leaving a vertical slot to sinusoidal ambient
pressure disturbances. The theoretical investigation employs the hyperbolic
partial differential equation developed by [27]. The response of the curtain
is characterized by the slot Weber number, $W_{e_{0}}=\rho qV/2\sigma$, where
$V$ is the speed of the curtain at the slot, $q$ is the volumetric flow rate
per unit width, $\sigma$ is the surface tension, and $\rho$ is the fluid
density. Flow disturbances travel along characteristics with speeds relative
to the curtain of $\pm\sqrt{uV/W_{e_{0}}}$, where $u=\sqrt{V^{2}+2gx}$ is the
curtain speed at a distance $x$ downstream from the slot. When the flow is
subcritical ($W_{e_{0}}<1$), upstream traveling disturbances near the slot
affect the curtain centerline, and the slope of the curtain centerline at the
slot oscillates with an amplitude that is a function of $W_{e_{0}}$. In
contrast, all disturbances travel downstream in supercritical curtains
($W_{e_{0}}>1$) and the slope of the curtain at the slot is vertical. Here, we
specifically examine the curtain response under supercritical and subcritical
flow conditions near $W_{e_{0}}=1$ to deduce whether there is a substantial
change in the overall shape and magnitude of the curtain responses. Despite
the local differences in the curtain solution near the slot, we find that
subcritical and supercritical curtains have similar responses for all imposed
sinusoidal frequencies.
## I Introduction
Curtain coating is a common industrial process that uses wide, thin planar
liquid sheets (curtains) to deposit uniform thin films on moving substrates
[28]. In one of its simplest configurations, a curtain leaves an inverted slot
die and thins under the influence of gravity as it falls, see figure 1.
Curtains are subjected to ambient disturbances that deflect them; these
deflections can cause nonuniform liquid coatings and imperfections in final
dried products. Flow disturbances are often examined using linear theory
because coated product quality is sensitive to even small thickness
variations. Curtains have significant surface area in contact with the
surrounding air along their long and wide faces, so understanding how they
respond to pressure disturbances is of practical importance. In this study we
focus on the effects of sinusoidal time-varying pressure disturbances on the
shapes and deflections of curtains.
The response of liquid curtains to pressure disturbances has been well-studied
both experimentally and theoretically. Much of that work has focused on the
related problem of water bells, i.e. radially-symmetric liquid sheets (among
the many references, see [14]; [16]; [20], [4], [19]). In aggregate, this work
shows that agreement with experiment is obtained when models use the
approximation of inviscid flow with small variations in thickness about the
curved centerline of the water bell; that is, the curtain is gradually
thinning in the direction of flow. These studies also demonstrate that small
pressure differences across the surface of a liquid sheet can have a large
impact on the shape of water bells.
Curtains have been similarly studied with respect to ambient pressure
disturbances. In previous work, [27] justify a potential flow approximation to
the flow in a curtain, and derive an equation that governs deflections of a
curtain’s centerline, $y=F(x)$ shown in figure 1, given as:
$\left(\frac{\partial}{\partial t}+g\frac{\partial}{\partial
u}\right)^{2}F-\frac{2\sigma g^{2}}{\rho q}\frac{\partial}{\partial
u}\left(\frac{1}{u}\frac{\partial F}{\partial
u}\right)=\frac{(M_{B}-M_{A})u}{\rho q}$ (1)
where:
$u=\sqrt{V^{2}+2gx}$ (2)
and the local thickness of the curtain, $h$, is expressed as:
$h(x)=\frac{q}{u}.$ (3)
In (1) - (3), $t$ is time, $u$ is the curtain speed at position $x$, $V$ is
the curtain speed at the slot, $g$ is the acceleration of gravity, $q$ is the
volumetric flow rate, $\sigma$ is the surface tension (which we take to be
constant), $\rho$ is the liquid density, and $M_{A}$ and $M_{B}$ are the
respective pressures on the front and back faces of the curtain as shown in
figure 1. [27] derive equation (1) as follows. The equation for the
velocity field and thickness of the undisturbed time-independent curtain
(centerline $y=0$ for all $x$) is determined via an asymptotic expansion in
the small parameter $\epsilon=gq/V^{3}$. The limit $\epsilon\rightarrow 0$
corresponds physically to a gradually-thinning liquid curtain. The resulting
lowest-order flow in the curtain is plug at each location $x$ according to
equation (1.2), where velocity variations across the curtain thickness (i.e.
the $y$-direction in figure 1) are of $\mathcal{O}(\epsilon^{2})$. The time-
dependent potential flow equations are then linearized for small perturbations
about the asymptotic steady-state equations (the base flow). Since the base
flow is approximate, terms to $\mathcal{O}(\epsilon^{2})$ are required to
preserve accuracy in the linearization. The resulting time-dependent equation
(1) is valid for curtain deflections such that $F<<h$. In the limit taken,
pressure disturbances do not affect the local thickness of the curtain to
leading order, so the resulting curtain response is captured by the deflection
of its centerline [27].
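Equations (2) and (3) are straightforward to evaluate; the following is a minimal numerical sketch (the parameter values are illustrative, not taken from the cited experiments):

```python
import numpy as np

def base_flow(x, V, q, g=9.81):
    """Curtain speed u = sqrt(V^2 + 2 g x), eq. (2), and local
    thickness h = q / u, eq. (3), at distance x below the slot."""
    u = np.sqrt(V**2 + 2.0 * g * x)
    return u, q / u

x = np.linspace(0.0, 0.1, 5)          # meters below the slot
u, h = base_flow(x, V=1.0, q=1e-4)    # illustrative V [m/s], q [m^2/s]
```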
When examined under conditions of small deflection, the equation of [10] is
identical to (1) at steady state. Equations (2) and (3) are valid regardless
of the magnitude of deflection provided that the curtain is long and thin.
[27] show via dominant balance that the inviscid result, equation (2), is
consistent with Taylor’s equation (derived in the Appendix of [3]) for large
$x$, even when viscosity is included. Predictions of (1) and (2) agree well
with experiments under steady-state ([10]) and transient ([5]) conditions. In
the latter study, [5] demonstrate that predictions agree with experiments when
the initial velocity is adjusted to account for the effect of an entrance
region through extrapolation to the slot location at $x=0$. This result agrees
with the empirical equation of [3] for an undisturbed vertical curtain. Note
that equations (1) and (2) are strictly valid in a region displaced
downstream from the slot because the loss in viscous traction from the slot
leads to a flow rearrangement not captured in these equations ([6], [26];
[23]; [11]). Other studies have used the slender curtain/inviscid approach to
examine the effect of nonlinear dynamics ([22]) and stationary wave formation
([8]).
Closely related to the current work is the nappe oscillation configuration,
which is used to model observed perturbations in liquid sheets formed as water
flows over waterfalls, dams, and weirs. In such a configuration, pressure
disturbances are affected by the curtain motion via an enclosure that includes
the curtain as one of its long and wide sides; the other side of the curtain
is maintained at atmospheric pressure. As a result, the motion of the curtain
affects the volume of the enclosed region, which affects the pressure in that
region and provides a restoring force. There have been many experimental
studies of this phenomenon (see for example, [2]; [24]; [18]), and observed
oscillation frequencies measured in the curtain correlate with those measured
in the enclosure itself. Theoretical analyses follow the modeling approach
used to obtain equations (1) to (3) and predict the natural frequencies of
the system in the absence of surface tension ([25], [9]); these provide a more
precise model of the curtain dynamics than the simplified theory provided by
[2]. The more recent analysis of [12] incorporates surface tension and
predicts natural frequencies that agree favorably with the cited experiments
of [2] and [24].
The governing equation, (1), is a second-order hyperbolic partial
differential equation (PDE). The two sets of characteristics associated with
the PDE carry information that moves relative to the curtain at speeds of
$\pm\sqrt{uV/W_{e_{0}}}$. It is established in hyperbolic PDE theory that the
number of constraints specified along a boundary must be equal to the number
of characteristics that emanate from that boundary at each point [17]. The
directions of the characteristics, and therefore the associated boundary
conditions, are determined by the Weber number at the slot, given by
$W_{e_{0}}=\rho qV/2\sigma$. In supercritical curtains ($W_{e_{0}}>1$), both
sets of characteristics leave the slot and are oriented downstream. Thus, two
conditions must be specified along this boundary. This constraint placement
corresponds to the physics of supercritical flows, in which the momentum flux
is greater than the surface tension at every location, so any disturbances are
washed downstream. [10] establish experimentally that supercritical curtains
leave the slot vertically under steady conditions, confirming that points
downstream do not influence upstream locations. In a subcritical curtain
($W_{e_{0}}<1$), there is a region in which one set of characteristics is
oriented upstream and the other set is oriented downstream. This region begins
at the slot and ends at the critical point, the point in the curtain at which
the local Weber number, $W_{e}=\rho qu/2\sigma$, equals 1. Downstream of the
critical point, both sets of characteristics are oriented downstream.
Therefore, if $W_{e_{0}}<1$, only one set of characteristics leaves the slot,
so only one condition is applied along this boundary. Disturbances in this
region of the curtain move upstream and downstream along the characteristics.
[10] demonstrate experimentally that when a constant pressure drop is applied
across a curtain containing such a region, the curtain takes on an angle at
the slot. They show theoretically that this angle can be predicted by removing
a singularity in the governing equation that arises at the location where
$W_{e}=1$, the precise location where the direction of characteristics
changes. As demonstrated in the appendix of [10], a subcritical water bell
issuing from an annular slit and having an applied pressure difference takes
on an angle at the slit. Their analytical solution, obtained under conditions
of negligible gravity, accounts for the earlier qualitative experimental
observations of [1]. [21] confirms this conclusion in a later examination of
this problem. [4] examine water bells with applied pressure differences that
undergo a subcritical-to-supercritical transition, both experimentally and
theoretically. They predict water bell shapes that agree well with experiment.
As noted previously, the water bell equations use the same thin-film-modeling
assumptions that [10] used to study the planar configuration, so these water
bell results also may be used to confirm the gradually-thinning and inviscid
assumptions used in the development of equations (1) to (3).
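Setting the local Weber number $W_{e}=\rho qu/2\sigma$ equal to 1 locates the critical point explicitly; a small sketch with illustrative water-like fluid properties (not values from the cited experiments):

```python
import numpy as np

def critical_point(V, q, sigma, rho, g=9.81):
    """Distance x_c below the slot at which We = rho*q*u/(2*sigma) = 1,
    using u = sqrt(V^2 + 2 g x). Meaningful only for subcritical
    curtains (We0 < 1), for which x_c > 0."""
    u_c = 2.0 * sigma / (rho * q)          # speed at which We = 1
    return (u_c**2 - V**2) / (2.0 * g)

# illustrative values chosen so that We0 = rho*q*V/(2*sigma) < 1
rho, sigma, q, V = 1000.0, 0.07, 1e-4, 0.5
We0 = rho * q * V / (2.0 * sigma)
x_c = critical_point(V, q, sigma, rho)
```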
Figure 1: Side view schematic of a liquid exiting a slot of height $h_{0}$ and
falling under the influence of gravity while subjected to ambient gas
pressures $M_{A}$ and $M_{B}$ on its sides. The curtain is assumed to be
infinite and invariant in the $z$ direction, oriented out of the figure. The
centerline of the curtain and its local thickness are denoted as $y=F(x)$ and
$h(x)$, respectively. The governing equations (1)-(3) are valid for
curtain deflections $F<<h$, so dimensions in the figure are chosen for clarity
and are not drawn to scale.
With this background, we now return to the nappe configuration analysis of
[12] cited above. There, the equation governing the curtain motion caused by
the applied pressure is the same as that of equation (1.1) although the
pressure disturbances $M_{A}-M_{B}$ are expressed in terms motion of the
curtain shape itself through the enclosure volume. Despite this mathematical
difference, the characterization of subcritical and supercritical flows
discussed above in terms of $W_{e_{0}}$ applies. [12] examine curtain flows
with slot Weber numbers near one. As we discussed above, when the flow is
subcritical ($W_{e_{0}}<1$), there exists a location in the curtain at which
$W_{e}=1$, below which the flow becomes supercritical. As in the steady
problem of [10], the time-dependent equation is singular at the point at which
$W_{e}=1$. Because (1) is of second order in the spatial dimension, the
general solution of the initial-boundary value problem has two degrees of
freedom that allow us to satisfy spatial constraints. In supercritical cases
we use those degrees of freedom to specify two conditions: that the curtain
centerline is not displaced at the slot exit, and that the curtain leaves the
slot with its centerline aligned with that of the slot. In subcritical cases
we still use one degree of freedom to enforce the condition that the curtain
centerline is not displaced at the slot exit, but we use the second degree of
freedom to enforce the condition that the solution be smooth at the singular
point. This switch in the application of the second degree of freedom
corresponds to the physical facts that subcritical curtains leave the slot at
non-zero, and sometimes varying, angles, and that the transition from
subcritical to supercritical flow in such curtains is smooth. In this paper we
refer to subcritical curtains as those for which $W_{e_{0}}<1$ and for which
there exists a location where the flow transitions from subcritical to
supercritical. This definition is necessary, as [10] show experimentally that
it is possible for curtains to remain subcritical over their entire lengths.
This observation is also consistent with experimental findings of [7], who
show that long subcritical curtains can persist without rupture. We define
supercritical curtains as those for which $W_{e_{0}}>1$ and thus the flow is
supercritical over the entire domain. For supercritical curtains, [12] and
[10] both find that the slope of the curtain at the slot is vertical.
[12] report a distinct increase in dominant oscillation frequency as the slot
Weber number is decreased from supercritical to subcritical. It is not clear
from the investigation whether this effect is a result of an inherent
susceptibility of the curtain to pressure disturbances because of the
oscillation of the curtain centerline slope at the slot exit, or is a result
of the pressure coupling present in a nappe configuration. In this paper we
study the imposition of a pressure drop across the curtain that is sinusoidal
in time and is not coupled to the curtain motion. This configuration is itself
practically relevant to curtain coating processes. We specifically examine the
response of the curtain as the flow is reduced from supercritical to
subcritical, and we determine whether the corresponding change in the boundary
conditions at the slot exit leads to a substantial change in the overall shape
and magnitude of the curtain response. This provides a comparison of the
capacities of subcritical and supercritical curtains to resist pressure
disturbances.
## II Theory
The dimensionless form of (1) is
$\frac{\partial^{2}\bar{F}}{\partial\bar{t}^{2}}+2\bar{u}\frac{\partial^{2}\bar{F}}{\partial\bar{x}\partial\bar{t}}+\bar{u}\frac{\partial}{\partial\bar{x}}\left[\left(\bar{u}-\frac{1}{W_{e_{0}}}\right)\frac{\partial\bar{F}}{\partial\bar{x}}\right]=\bar{u}(\bar{P}_{2}-\bar{P}_{1}).$
(4)
Here, $\bar{P}_{2}-\bar{P}_{1}$ is the pressure difference across the curtain,
whose displacement from its centerline is $\bar{F}(\bar{x},\bar{t})$ [27]. The
dimensionless version of (2) is
$\bar{u}=\sqrt{1+2\bar{x}},$ (5)
where dimensionless variables are defined as:
$\bar{F}=\frac{F}{h_{0}}\qquad\bar{u}=\frac{u}{V}\qquad\bar{x}=\frac{xg}{V^{2}}\qquad\bar{t}=\frac{tg}{V}$ (6)
$\bar{P}_{2}-\bar{P}_{1}=\frac{M_{B}-M_{A}}{\rho\epsilon^{2}V^{2}}\qquad\epsilon=\frac{gq}{V^{3}}\qquad W_{e_{0}}=\frac{\rho qV}{2\sigma}.$ (7)
In (6) and (7), $h_{0}$ is the slot height, and other variables have been defined
in section 1 and shown schematically in figure 1. We then convert the
coordinate system from Eulerian to Lagrangian by making the following variable
change:
$\xi=\bar{u}-1.$ (8)
In order to study the response of a curtain, we consider an applied sinusoidal
pressure disturbance of the form $\bar{P}_{2}-\bar{P}_{1}=\beta
e^{i\bar{\omega}\bar{t}}$ such that (4) is written, using coordinate
transformation (8), as:
$\frac{\partial^{2}\bar{F}}{\partial\bar{t}^{2}}+2\frac{\partial^{2}\bar{F}}{\partial\xi\partial\bar{t}}+\frac{\partial}{\partial\xi}\left(\left(1-\frac{1}{(\xi+1)W_{e_{0}}}\right)\frac{\partial\bar{F}}{\partial\xi}\right)=(\xi+1)\beta
e^{i\bar{\omega}\bar{t}}$ (9)
where $\bar{\omega}$ and $\beta$ are given by
$\bar{\omega}=\frac{\omega V}{g}\qquad\beta=\frac{\alpha
e^{i\theta_{R}}}{\rho\epsilon^{2}V^{2}}.$ (10)
Here, $\omega$ is the angular frequency and $\alpha$ and $\theta_{R}$ are the
magnitude and reference phase of the pressure disturbance, respectively. The
reference phase, $\theta_{R}$, is adjusted to provide clarity in the
presentation of results to follow (see discussion in Section 3). It is
understood that only the real part of $\bar{F}$ is taken as the actual
solution. The periodic solution for $\bar{F}$ is of the form
$\bar{F}(\xi,\bar{t})=\beta\bar{H}(\xi)e^{i\bar{\omega}\bar{t}}$ (11)
where $\bar{H}(\xi)$ is to be determined. We obtain a second-order ordinary
differential equation (ODE) by substituting (11) into (9):
$\left((\xi+1)^{2}-\frac{(\xi+1)}{W_{e_{0}}}\right)\frac{d^{2}\bar{H}}{d\xi^{2}}+\left(2i(\xi+1)^{2}\bar{\omega}+\frac{1}{W_{e_{0}}}\right)\frac{d\bar{H}}{d\xi}-(\xi+1)^{2}\bar{\omega}^{2}\bar{H}=(\xi+1)^{3}.$
(12)
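As a sanity check on this substitution, the reduction from (9) to (12) can be verified symbolically. The sketch below (using SymPy, with symbol names of our own choosing; this is an illustration, not the authors' code) confirms that multiplying the reduced form of (9) by $(\xi+1)^{2}$ reproduces (12) term by term:

```python
import sympy as sp

# Illustrative symbol names (not from the paper): xi, w (omega-bar), We.
xi, w, We = sp.symbols('xi omega We', positive=True)
H = sp.Function('H')(xi)

# Left-hand side of (9) with F = beta*H(xi)*exp(i*omega*t) substituted,
# after dividing out the common factor beta*exp(i*omega*t):
lhs9 = (-w**2 * H + 2*sp.I*w*sp.diff(H, xi)
        + sp.diff((1 - 1/((xi + 1)*We)) * sp.diff(H, xi), xi))

# ODE (12), written as (left-hand side) - (right-hand side):
ode12 = (((xi + 1)**2 - (xi + 1)/We) * sp.diff(H, xi, 2)
         + (2*sp.I*(xi + 1)**2*w + 1/We) * sp.diff(H, xi)
         - (xi + 1)**2 * w**2 * H - (xi + 1)**3)

# Multiplying the reduced (9) by (xi+1)^2 and subtracting its forcing
# term should reproduce (12) exactly:
residual = sp.simplify((xi + 1)**2 * (lhs9 - (xi + 1)) - ode12)
print(residual)  # 0
```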
A second change of coordinates is implemented to simplify calculations, given
by:
$z=\xi-c\qquad c=\frac{1}{W_{e_{0}}}-1.$ (13)
In these coordinates, the critical point is $z=0$ and (12) becomes
$z(z+c+1)\frac{d^{2}\bar{H}}{dz^{2}}+(2i(z+c+1)^{2}\bar{\omega}+c+1)\frac{d\bar{H}}{dz}-(z+c+1)^{2}\bar{\omega}^{2}\bar{H}=(z+c+1)^{3}.$
(14)
As discussed in section 1, there are two characteristics that are oriented
downstream from the slot in supercritical curtains ($W_{e_{0}}>1$), and the
appropriate constraints to apply here are
$\bar{H}(-c)=0$ (15)
$\frac{d\bar{H}}{dz}(-c)=0.$ (16)
Note that the subcritical case ($W_{e_{0}}<1$) corresponds to $c>0$. Condition
(15) sets the curtain centerline to be that of the slot, and (16) specifies
that the curtain be vertical there. We can see that for supercritical
curtains, (14), along with the conditions stated in (15) and (16), constitute
a well-posed initial value problem [15]. Supercritical solutions for this
system may be obtained over the entire domain using a fourth-order accurate
Runge-Kutta marching scheme.
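The supercritical marching solution described above can be sketched as follows. This is a minimal illustration under assumed parameter values, not the authors' code; the function names, step count, and curtain end point `zL` are our own choices. The second-order ODE (14) is rewritten as a first-order complex system and advanced with classical RK4 from the slot $z=-c$ using conditions (15) and (16):

```python
import numpy as np

def rhs(z, y, c, w):
    """Right-hand side of (14) as a first-order system.
    y = [H, H'] (complex); returns [H', H'']."""
    H, dH = y
    a = z + c + 1.0
    d2H = (a**2 * w**2 * H
           - (2j * a**2 * w + c + 1.0) * dH
           + a**3) / (z * a)
    return np.array([dH, d2H])

def solve_supercritical(We0, w, zL, n=2000):
    """March (14) from the slot z = -c to z = zL with classical RK4,
    starting from the initial conditions (15)-(16): H(-c) = 0, H'(-c) = 0."""
    c = 1.0 / We0 - 1.0            # c < 0 for supercritical curtains
    z = -c                         # slot location in the shifted coordinate
    y = np.array([0.0 + 0j, 0.0 + 0j])
    h = (zL - z) / n
    for _ in range(n):
        k1 = rhs(z, y, c, w)
        k2 = rhs(z + h/2, y + h/2 * k1, c, w)
        k3 = rhs(z + h/2, y + h/2 * k2, c, w)
        k4 = rhs(z + h, y + h * k3, c, w)
        y = y + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
        z += h
    return y[0]                    # H at the end of the curtain

H_end = solve_supercritical(We0=1.1, w=0.1, zL=2.0)
```

For supercritical conditions the coefficient $z(z+c+1)$ never vanishes on the integration path, so the march is well behaved over the whole domain.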
For subcritical curtains ($W_{e_{0}}<1$), only one characteristic is oriented
downstream from the slot, so we only apply (15). Furthermore, the coefficient
of the highest order term in (14) goes to zero at the critical point ($z=0$),
and therefore the ODE is singular ([13]). An implication of this singularity
is that any marching scheme will become infinitely stiff as $z$ approaches 0.
Thus, we use a power series centered at $z=0$ to determine the solution from
the slot ($z=-c$) to a point downstream from the transition point ($z=c$).
Then, using the value and derivative of the power series solution at $z=c$ as
initial conditions, we use the Runge-Kutta scheme mentioned above to complete
the solution from $z=c$ to the end of the curtain ($z=z_{L}$); the flow is
supercritical below the critical point, allowing us to apply a marching scheme
in this way. We present the derivation of the power series in appendix A.1. In
the power series, enough terms are used to achieve machine precision.
## III Results and Discussion
Figures 2-9 provide results for subcritical and supercritical curtains
corresponding to the configuration shown in figure 1. Note that these figures
have been rotated counter-clockwise 90 degrees, such that the gravitational
force points right, along the $\bar{x}$-axis. In all of the figures, the
location of the unperturbed curtain centerline, $\bar{y}=0$, is shown as a
dashed line and the shapes that the curtains adopt at 1/4 increments of the
periods of motion are represented by the solid lines. For clarity in the
presentation of results, the reference phase of the pressure disturbance,
$\theta_{R}$ in (2.7b), is chosen to be
$\theta_{R}=\arctan\left(\frac{\text{Im}(\bar{H}(\bar{x}_{L}))}{\text{Re}(\bar{H}(\bar{x}_{L}))}\right),$
(17)
where Im and Re denote the imaginary and real parts of the complex argument.
Note that the reference phase merely adjusts the time at which the curtain
adopts a given shape, but does not affect the sequence of shapes predicted
through its period of oscillation. Equation (3.1) ensures that the bottom of
the curtain, located at $\bar{x}=\bar{x}_{L}$, sweeps through the same
locations as it moves forward and backward throughout an oscillation cycle;
this makes it easier to interpret the curtain response.
Figures 2 and 3 provide curtain responses for $W_{e_{0}}=1.1$ (supercritical)
and $W_{e_{0}}=0.9$ (subcritical) conditions both with $\bar{\omega}=0.1$. The
insets in these figures show the magnifications of the curtain shapes near the
slot exit. As described earlier, the centerline slope of supercritical
curtains is always zero at the slot exit, while the slope of subcritical
curtains changes throughout the oscillation cycle. Despite this difference in
behavior, figures 2 and 3 show that the overall shape of the curtain is quite
similar for subcritical and supercritical curtains, and the magnitudes of the
two responses are comparable. Figures 4 and 5 ($\bar{\omega}=0.5$) and
figures 6 and 7 ($\bar{\omega}=2$) compare $W_{e_{0}}=1.1$ and $W_{e_{0}}=0.9$
curtain responses. Again, the magnitude and shapes of the curtain responses
are similar for supercritical and subcritical flows. Figures 8 and 9 provide
additional results for $W_{e_{0}}=1.3$ and $W_{e_{0}}=0.7$, respectively, both
with $\bar{\omega}=2$. Even though the range of Weber numbers has extended
further into the supercritical and subcritical regimes compared with figures 6
and 7, there is no marked change in curtain shape or magnitude. Additionally,
we have solved Equation 2.11 for a selection of subcritical and supercritical
slot Weber numbers ($W_{e_{0}}$) sweeping over a wide range of frequencies.
The trends shown in Figures 2-7 are maintained over all frequencies and slot
Weber numbers surveyed. That is, the number of spatial oscillations within a
given curtain length increases, and the maximum magnitude of curtain responses
decreases with increasing frequency.
The latter observation is relevant as it mathematically eliminates the
possibility of forcing the curtain via a pressure disturbance in such a way
that natural curtain frequencies are excited. If resonance were to arise,
there would be a large amplification in the vicinity of a given imposed
frequency. At the precise resonant frequency, the assumed form of the forced
solution (2.8) would become invalid, necessitating a secular time dependence
(such as $\bar{t}e^{i\bar{\omega}\bar{t}}$). The monotonic decrease in curtain deflection
with increasing frequency (as seen in figures 2 through 9) demonstrates that
such behavior does not occur. This mathematical result has a physical basis.
In supercritical curtains, disturbances are washed downstream, so the
reinforcement of repeated disturbances that characterizes resonance is not
possible. In curtains that are subcritical near the slot but turn
supercritical downstream, one might expect that some reinforcement is
possible, because disturbances in the subcritical region propagate upstream to
the slot. However, these reflect at the slot and are washed downstream.
Figure 10 provides the maximum curtain angle at the slot as a function of
$W_{e_{0}}$ for various frequencies. Note that the curtain takes on a range of
angles at the slot as it cycles through its motion. The maximum angle of the
response at the slot immediately drops to zero as the slot Weber number
increases past $W_{e_{0}}=1$, in accordance with a subcritical to
supercritical transition.
Despite the marked difference in curtain motion at the slot quantified by
figure 10, the subcritical and supercritical curtain responses shown in
figures 2-9 are not appreciably different at a given frequency. These results
indicate that there is not a significant change in a curtain’s sensitivity to
pressure disturbances in the subcritical to supercritical transition. The
abrupt frequency shift observed by [12] in this transition is thus attributed
to the coupling between the pressure disturbances and curtain motion that
arises in the nappe configuration.
## IV Conclusion
We consider the time-dependent response of a gravitationally-thinning liquid
curtain that is subjected to sinusoidal ambient pressure disturbances under
subcritical and supercritical conditions. Consistent with previous studies
under both steady state and transient conditions, we find that the centerline
slope of the curtain is not the same as that of the slot under subcritical
conditions. This is a direct consequence of the flow of information
propagating along characteristics in the governing hyperbolic PDE. We examine
whether the increased susceptibility of the curtain to pressure disturbances
arises because of the oscillation in curtain centerline slope at the slot exit
for subcritical curtains. Our investigation shows there are no abrupt
differences in the responses of subcritical and supercritical curtains for
Weber numbers near 1 for all imposed sinusoidal frequencies. Experimental
confirmation of these results is needed in future studies.
Figure 2: The location of the curtain centerline, $\text{Re}(\bar{F})$, as a
function of distance down the curtain, $\bar{x}$, when it is subjected to a
sinusoidal pressure disturbance. The inset is a magnification of the region
near the slot. Here, the flow is supercritical ($W_{e_{0}}=1.1$) and
$\bar{\omega}=0.1$, $\beta=1$, and $\theta_{R}=-0.4048$ radians (see (3.1)).
The dashed line denotes the centerline of the unperturbed curtain. Figure 3:
The location of the curtain centerline, $\text{Re}(\bar{F})$, as a function of
distance down the curtain, $\bar{x}$, when it is subjected to a sinusoidal
pressure disturbance. The inset is a magnification of the region near the
slot. Here, the flow is subcritical ($W_{e_{0}}=0.9$) and $\bar{\omega}=0.1$,
$\beta=1$, and $\theta_{R}=-0.4366$ radians (see (3.1)). The dashed line
denotes the centerline of the unperturbed curtain. Figure 4: The location of
the curtain centerline, $\text{Re}(\bar{F})$, as a function of distance down
the curtain, $\bar{x}$, when it is subjected to a sinusoidal pressure
disturbance. The inset is a magnification of the region near the slot. Here,
the flow is supercritical ($W_{e_{0}}=1.1$) and $\bar{\omega}=0.5$, $\beta=1$,
and $\theta_{R}=-1.7794$ radians (see (3.1)). The dashed line denotes the
centerline of the unperturbed curtain. Figure 5: The location of the curtain
centerline, $\text{Re}(\bar{F})$, as a function of distance down the curtain,
$\bar{x}$, when it is subjected to a sinusoidal pressure disturbance. The
inset is a magnification of the region near the slot. Here, the flow is
subcritical ($W_{e_{0}}=0.9$) and $\bar{\omega}=0.5$, $\beta=1$, and
$\theta_{R}=-1.7924$ radians (see (3.1)). The dashed line denotes the
centerline of the unperturbed curtain. Figure 6: The location of the curtain
centerline, $\text{Re}(\bar{F})$, as a function of distance down the curtain,
$\bar{x}$, when it is subjected to a sinusoidal pressure disturbance. The
inset is a magnification of the region near the slot. Here, the flow is
supercritical ($W_{e_{0}}=1.1$) and $\bar{\omega}=2$, $\beta=1$, and
$\theta_{R}=-2.8491$ radians (see (3.1)). The dashed line denotes the
centerline of the unperturbed curtain. Figure 7: The location of the curtain
centerline, $\text{Re}(\bar{F})$, as a function of distance down the curtain,
$\bar{x}$, when it is subjected to a sinusoidal pressure disturbance. The
inset is a magnification of the region near the slot. Here, the flow is
subcritical ($W_{e_{0}}=0.9$) and $\bar{\omega}=2$, $\beta=1$, and
$\theta_{R}=-2.9888$ radians (see (3.1)). The dashed line denotes the
centerline of the unperturbed curtain. Figure 8: The location of the curtain
centerline, $\text{Re}(\bar{F})$, as a function of distance down the curtain,
$\bar{x}$, when it is subjected to a sinusoidal pressure disturbance. The
inset is a magnification of the region near the slot. Here, the flow is
supercritical ($W_{e_{0}}=1.3$) and $\bar{\omega}=2$, $\beta=1$, and
$\theta_{R}=-2.7377$ radians (see (3.1)). The dashed line denotes the
centerline of the unperturbed curtain. Figure 9: The location of the curtain
centerline, $\text{Re}(\bar{F})$, as a function of distance down the curtain,
$\bar{x}$, when it is subjected to a sinusoidal pressure disturbance. The
inset is a magnification of the region near the slot. Here, the flow is
subcritical ($W_{e_{0}}=0.7$) and $\bar{\omega}=2$, $\beta=1$, and
$\theta_{R}=3.1252$ radians (see (3.1)). The dashed line denotes the
centerline of the unperturbed curtain. Figure 10: The maximum angle at the
slot, $\theta_{0,\text{max}}=\max_{t}\left[\arctan(\text{Re}(\partial\bar{F}/\partial\bar{x}))\right]_{\bar{x}=0}$,
as a function of slot Weber number, $W_{e_{0}}$, for a range of dimensionless
disturbance frequencies, $\bar{\omega}$, obtained from the solution of (14)
with $\beta=1$. The maximum angle at the slot, $\theta_{0,\text{max}}$, is taken here
to correspond with the dimensionless curtains shown in figures 2-9 (as well as
additional cases).
## Appendix A
### A.1 Power Series
For subcritical cases ($W_{e_{0}}<1$), the power series
$\bar{H}(z)=\sum\limits_{j=0}^{\infty}(\lambda_{j}+i\Gamma_{j})z^{j}$,
centered at $z=0$ (the critical point), is a solution of (14) on the interval
$-b<z<b$, if the following conditions are met, where $b=c+1$ and
$Q_{j}=[\lambda_{j},\Gamma_{j}]^{T}$:
$Q_{1}=\left(A_{1}Q_{0}+\left[\begin{array}{c}b^{4}\\ -2\bar{\omega}b^{5}\end{array}\right]\right)s_{1}$ (18)
$Q_{2}=\left(A_{2}Q_{1}+B_{2}Q_{0}+\left[\begin{array}{c}12b^{3}\\ -12b^{4}\bar{\omega}\end{array}\right]\right)s_{2}$ (19)
$Q_{3}=\left(A_{3}Q_{2}+B_{3}Q_{1}+C_{3}Q_{0}+\left[\begin{array}{c}27b^{2}\\ -18b^{3}\bar{\omega}\end{array}\right]\right)s_{3}$ (20)
$Q_{4}=\left(A_{4}Q_{3}+B_{4}Q_{2}+C_{4}Q_{1}+\left[\begin{array}{c}16b\\ -8b^{2}\bar{\omega}\end{array}\right]\right)s_{4}$ (21)
And for all $j>3$
$Q_{j+1}=(A_{j+1}Q_{j}+B_{j+1}Q_{j-1}+C_{j+1}Q_{j-2})s_{j+1}$ (22)
where:
$A_{j}=\left[\begin{array}{cc}A_{11j}&A_{12j}\\ A_{21j}&A_{22j}\end{array}\right]$ (23)
$A_{11j}=b(j+1)^{2}(b^{2}\bar{\omega}^{2}-j(j-1))-8b^{3}\bar{\omega}^{2}j(j+1)$
(24)
$A_{12j}=2\bar{\omega}b^{2}(2j(j+1)^{2}+(j+1)(b^{2}\bar{\omega}^{2}-j(j-1)))$
(25)
$A_{21j}=-2\bar{\omega}b^{2}(2j(j+1)^{2}+(j+1)(b^{2}\bar{\omega}^{2}-j(j-1)))$
(26)
$A_{22j}=b(j+1)^{2}(b^{2}\bar{\omega}^{2}-j(j-1))-8b^{3}\bar{\omega}^{2}j(j+1)$
(27)
$B_{j}=\left[\begin{array}{cc}B_{11j}&B_{12j}\\ B_{21j}&B_{22j}\end{array}\right]$ (28)
$B_{11j}=2b^{2}\bar{\omega}^{2}(j+1)(3-j)$ (29)
$B_{12j}=2b\bar{\omega}(j+1)(2b^{2}\bar{\omega}^{2}+(j+1)(j-1))$ (30)
$B_{21j}=-2b\bar{\omega}(j+1)(2b^{2}\bar{\omega}^{2}+(j+1)(j-1))$ (31)
$B_{22j}=2b^{2}\bar{\omega}^{2}(j+1)(3-j)$ (32)
$C_{j}=\left[\begin{array}{cc}C_{11j}&C_{12j}\\ C_{21j}&C_{22j}\end{array}\right]$ (33)
$C_{11j}=b\bar{\omega}^{2}(j+1)^{2}$ (34)
$C_{12j}=2b^{2}\bar{\omega}^{3}(j+1)$ (35)
$C_{21j}=-2b^{2}\bar{\omega}^{3}(j+1)$ (36)
$C_{22j}=b\bar{\omega}^{2}(j+1)^{2}$ (37)
$s_{j}=\frac{1}{b^{2}j^{4}+4b^{4}\bar{\omega}^{2}j^{2}}$ (38)
If $\lambda_{0}$ and $\Gamma_{0}$ are known, the remaining coefficients are
determined, so we have a two-parameter family of solutions. In the cases
considered in this paper, the two parameters are determined by the Dirichlet
boundary condition given by Equation (2.13); since $\bar{H}$ is complex, this
single condition fixes both real parameters.
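Once the coefficients $Q_{j}$ are generated by the recurrence above, evaluating the series to machine precision (as done in the main text) amounts to summing terms until they no longer change the partial sum. A minimal sketch follows; the function name is our own, and the coefficient source is any iterable of $(\lambda_{j},\Gamma_{j})$ pairs, which in practice would come from (18)-(22):

```python
import numpy as np

def eval_series(coeffs, z, tol=np.finfo(float).eps):
    """Evaluate H(z) = sum_j (lambda_j + i*Gamma_j) z^j, truncating once a
    term's magnitude falls below machine precision relative to the partial
    sum. coeffs is an iterable of (lambda_j, Gamma_j) pairs."""
    total, zj = 0.0 + 0j, 1.0
    for lam, gam in coeffs:
        term = (lam + 1j * gam) * zj
        total += term
        if abs(term) < tol * max(abs(total), 1.0):
            break                  # further terms are below machine precision
        zj *= z                    # advance z^j for the next term
    return total
```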
### A.2 Power Series Convergence Proof
Here we prove that the power series
$\bar{H}(z)=\sum\limits_{j=0}^{\infty}(\lambda_{j}+i\Gamma_{j})z^{j}$
converges for $|z|<b$. Let $T_{j}=\|Q_{j}\|$. It follows from Equation A5 and
the triangle inequality that
$T_{j+1}\leq\|A_{j}s_{j}\|T_{j}+\|B_{j}s_{j}\|T_{j-1}+\|C_{j}s_{j}\|T_{j-2}$
(39)
The diagonal elements of $A_{j}$ are fourth-order polynomials in $j$ with
leading coefficient $b$, the off-diagonal elements are third-order polynomials
in $j$, and $s_{j}$ is a fourth-order polynomial with leading coefficient
$b^{2}$. Thus the matrix $A_{j}s_{j}$ approaches $\frac{1}{b}I$, where $I$ is
the identity matrix, as $j$ goes to infinity. All of the elements in the
matrices $B_{j}$ and $C_{j}$ are polynomials of order less than four, so
$B_{j}s_{j}$ and $C_{j}s_{j}$ approach the zero matrix as $j$ approaches
infinity. Thus for every $r>0$, there exists an integer $K$, such that for all
$j>K$:
$\|A_{j}s_{j}\|<\frac{1}{b}+r$ (40) $\|B_{j}s_{j}\|<\frac{r}{b}$ (41)
$\|C_{j}s_{j}\|<\frac{r}{b^{2}}$ (42)
If $\gamma>\left(\frac{1}{b}+3r\right)$, then:
$\gamma>\frac{1}{b}+r+\frac{r}{\gamma b}+\frac{r}{\gamma^{2}b^{2}}$ (43)
Let $q>0$ be such that $T_{j}<q\gamma^{j}$, for all $j<K+3$.
Then,
$q\gamma^{j}>q\gamma^{j-1}(\frac{1}{b}+r)+q\gamma^{j-2}\frac{r}{b}+q\gamma^{j-3}\frac{r}{b^{2}}>(\frac{1}{b}+r)T_{j-1}+\frac{r}{b}T_{j-2}+\frac{r}{b^{2}}T_{j-3}\geq
T_{j}$ (44)
Therefore by induction, $T_{j}<q\gamma^{j}$ for all $j$. So
$\sum\limits_{j=0}^{\infty}T_{j}|z^{j}|$ is dominated by the geometric series
$\sum\limits_{j=0}^{\infty}q\gamma^{j}|z^{j}|$ and this converges for
$|z|<\frac{1}{\gamma}$. Therefore, $\sum\limits_{j=0}^{\infty}Q_{j}z^{j}$
converges absolutely for $|z|<\frac{1}{\gamma}$. Because the bound holds for
every $r>0$, with $\gamma$ taken arbitrarily close to $\frac{1}{b}$, we can
take $r$ as small as we like and establish that $b$ is the radius of
convergence.
## References
* [1] M.H.I. Baird and J.F. Davidson. Annular jets-I: Fluid dynamics. Chemical Engineering Science, 17(6):467–472, 1962.
* [2] A.M. Binnie. Resonating waterfalls. Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences, 339(1619):435–449, 1974.
* [3] D.R. Brown. A study of the behaviour of a thin sheet of moving liquid. Journal of Fluid Mechanics, 10(2):297–305, 1961.
* [4] P. Brunet, Christophe C., and L Laurent. Transonic liquid bells. Physics of Fluids, 16(7):2668–2678, 2004.
* [5] A. Clarke, S.J. Weinstein, A.G. Moon, and E.A. Simister. Time-dependent equations governing the shape of a two-dimensional liquid curtain, Part 2: Experiment. Physics of Fluids, 9(12):3637–3644, December 1997.
* [6] N. S. Clarke. Two-dimensional flow under gravity in a jet of viscous liquid. Journal of Fluid Mechanics, 31(3):481–500, 1968.
* [7] L. de Luca. Experimental investigation of the global instability of plane sheet flows. Journal of Fluid Mechanics, 399:355–376, 1999.
* [8] L. De Luca and M. Costa. Stationary waves on plane liquid sheets falling vertically. European Journal of Mechanics B/Fluids, 16(1):75–88, 1997.
* [9] F. De Rosa, M. Girfoglio, and L. de Luca. Global dynamics analysis of nappe oscillation. Physics of Fluids, 26(12):122109, 2014.
* [10] D.S. Finnicum, S.J. Weinstein, and K.J. Ruschak. The effect of applied pressure on the shape of a two-dimensional liquid curtain falling under the influence of gravity. Journal of Fluid Mechanics, 255:647–665, October 1993.
* [11] G.C. Georgiou, T.C. Papanastasiou, and J.O. Wilkes. Laminar Newtonian jets at high Reynolds number and high surface tension. 1988.
* [12] M. Girfoglio, F. De Rosa, G. Coppola, and L. De Luca. Unsteady critical liquid sheet flows. Journal of Fluid Mechanics, 821:219–247, June 2017.
* [13] F.B. Hildebrand. Advanced Calculus for Applications. Textbook Publishers, 2003.
* [14] F.L. Hopwood. Water bells. Proceedings of the Physical Society. Section B, 65(1):2–5, January 1952.
* [15] F. John. Partial Differential Equations. Applied Mathematical Sciences. Springer New York, 2012.
* [16] G.N. Lance and R.L. Perry. Water bells. 66(12):1067–1072, December 1953.
* [17] P.D. Lax. Hyperbolic Partial Differential Equations. Courant lecture notes in mathematics. Courant Institute of Mathematical Sciences, 2006.
* [18] H. Mori, T. Nagamine, R. Ito, and Y. Sato. Mechanism of self-excited vibration of a falling water sheet. Nihon Kikai Gakkai Ronbunshu, C Hen/Transactions of the Japan Society of Mechanical Engineers, Part C, 78(792):2720–2732, 2012.
* [19] M. Paramati and M.S. Tirumkudulu. Open water bells. Physics of Fluids, 28(3):032105, 2016.
* [20] J.I. Ramos. Liquid curtains—I. Fluid mechanics. Chemical Engineering Science, 43(12):3171–3184, 1988.
* [21] J.I. Ramos. Analysis of annular liquid membranes and their singularities. Meccanica, 32(4):279–293, 1997.
* [22] J.I. Ramos. Oscillatory dynamics of inviscid planar liquid sheets. Applied Mathematics and Computation, 143(1):109–144, October 2003.
* [23] K.J. Ruschak. A method for incorporating free boundaries with surface tension in finite element fluid-flow simulators. International Journal for Numerical Methods in Engineering, 15(5):639–648, 1980.
* [24] Y. Sato, S. Miura, T. Nagamine, S. Morii, and S. Ohkubo. Behavior of a falling water sheet. Journal of Environment and Engineering, 2(2):394–406, 2007.
* [25] P. Schmid and D.S. Henningson. On the stability of a falling liquid curtain. Journal of Fluid Mechanics, 463(july):163–171, 2002.
* [26] J.P.K Tillett. On the laminar flow in a free jet of liquid at high reynolds numbers. Journal of Fluid Mechanics, 32(2):273–292, 1968.
* [27] S.J. Weinstein, A. Clarke, A.G. Moon, and E.A. Simister. Time-dependent equations governing the shape of a two-dimensional liquid curtain, Part 1: Theory. Physics of Fluids, 9(12):3625–3636, December 1997.
* [28] S.J. Weinstein and K.J. Ruschak. Coating flows. Annual Review of Fluid Mechanics, 36(1):29–53, 2004.
# Theory of Mind for
Deep Reinforcement Learning in Hanabi
Andrew Fuchs
Naval Information Warfare Center - Pacific
<EMAIL_ADDRESS>
&Michael Walton
Naval Information Warfare Center - Pacific
<EMAIL_ADDRESS>
&Theresa Chadwick
Naval Information Warfare Center - Pacific
<EMAIL_ADDRESS>
&Doug Lange
Naval Information Warfare Center - Pacific
<EMAIL_ADDRESS>
###### Abstract
The partially observable card game Hanabi has recently been proposed as a new
AI challenge problem due to its dependence on implicit communication
conventions and apparent necessity of theory of mind reasoning for efficient
play. In this work, we propose a mechanism for imbuing Reinforcement Learning
agents with a theory of mind to discover efficient cooperative strategies in
Hanabi. The primary contributions of this work are threefold: First, a formal
definition of a computationally tractable mechanism for computing hand
probabilities in Hanabi. Second, an extension to conventional Deep
Reinforcement Learning that introduces reasoning over finitely nested theory
of mind belief hierarchies. Finally, an intrinsic reward mechanism enabled by
theory of mind that incentivizes agents to share strategically relevant
private knowledge with their teammates. We demonstrate the utility of our
algorithm against Rainbow, a state-of-the-art Reinforcement Learning agent.
## 1 Introduction
In order to make decisions that effectively achieve their goals, humans must
base their choices on the behavior of other agents whose actions may impact
their respective outcomes. To navigate this complexity, humans utilize a
theory of mind: the ability of an agent to ascribe beliefs, preferences and
intentions to other agents with which they interact in a shared environment.
Such a theory of mind enables humans to reason recursively over nested belief
states: “I believe you believe I believe,” and to make inferences about the
intent or behavior of others based on these beliefs. For instance, when told the
story, “A man walks into a room, looks around, then leaves,” human subjects
often ascribe the man’s behavior to his mental state and intention, that is,
“The man entered the room to look for something and forgot what it was”
Baron-Cohen et al. (1995). The ability to comprehend our world and one another
in terms of Theory of Mind (ToM) is fundamental to our everyday experience;
however, relatively few works in reinforcement learning (RL) have successfully
combined such capabilities into multi-agent decision making.
## 2 Background
### 2.1 Theory of Mind
In Byom and Mutlu (2013), the authors identify two relevant tools humans use
in order to infer each others’ mental states: interpreting the motivation
governing the actions of others (intent) and inferences over shared context
(beliefs). These two aspects are particularly salient to Hanabi; although
partial shared context is provided by the fully observable cards, crucial
context exists within the minds of the players. For instance, if a particular
player would like to estimate the likelihood that another player plays their
Red 2 card, they must model not only that player’s decision-making process but
also their belief state. This raises the question: How can the players
efficiently reach agreement on conventions of play and grounded context?
As proposed in Bard et al. (2019), we argue that ToM reasoning will be crucial
for Hanabi and other related multi-agent tasks, particularly in ad-hoc
settings which necessitate few-shot inference of other agents’ beliefs,
decision making processes, and hinting strategies. We begin to tackle this
challenging domain by proposing a method to address the representation of
nested belief in deep reinforcement learning and expand on this approach by
deriving a mechanism for achieving coherent hinting strategies.
### 2.2 Hanabi
There are several aspects of the Hanabi game which make it worthy of
investigation. First, it is an imperfect information game with players that
have asymmetric knowledge about the environment state. Further, Hanabi is a
cooperative game which requires coordination, credit assignment between
agents, and learning grounded communication conventions for sharing
information.
In Hanabi, agents communicate via direct information by sharing hints, as well
as through the “play,” “discard,” and “draw” actions they may choose to take. Hints in
Hanabi are restricted to only true hints and require that all cards matching
the given hint be noted when the hint is given. Additionally, the set of valid
hints restricts communication in such a way that complete information about a
card cannot be achieved through a single hint, unless the game state provides
additional context that forces the hint to explicitly define the card (e.g.
only one card of a particular color remains, so a hint of that color would
necessarily define the card).
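The way a single hint constrains card identities can be sketched with per-card possibility masks. This is our own illustrative model, not the Hanabi environment's actual API; the function and variable names are assumptions. It encodes the two rules above: hints are truthful, and every card matching the hint is touched:

```python
import numpy as np

COLORS, RANKS = 5, 5  # standard Hanabi: 5 colors, ranks 1-5

def new_hand_knowledge(hand_size):
    """One boolean (color, rank) possibility mask per card in hand:
    True means the card could still be that color/rank."""
    return [np.ones((COLORS, RANKS), dtype=bool) for _ in range(hand_size)]

def apply_color_hint(knowledge, touched, color):
    """Hanabi hints are truthful and complete: every card of the hinted
    color is touched, so untouched cards cannot be that color."""
    for idx, mask in enumerate(knowledge):
        if idx in touched:
            keep = np.zeros_like(mask)
            keep[color, :] = mask[color, :]
            knowledge[idx] = keep          # touched card: only this color
        else:
            mask[color, :] = False         # untouched card: not this color
    return knowledge

# Example: a color-1 hint touching cards 0 and 2 of a four-card hand.
hand = new_hand_knowledge(4)
hand = apply_color_hint(hand, touched={0, 2}, color=1)
```

Note that even a touched card is only pinned to one color here; its rank remains unknown, which is exactly the incomplete-information property described above.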
We limited the scope of this paper to the self-play Sample Limited posing of
the problem. Under the Sample Limited regime, agents’ total experience
(interactions with the environment) is limited to 100 million timesteps.
### 2.3 Related Work
#### 2.3.1 Rule-based Methods
Bard et al. (2019) note several heuristic agents for playing Hanabi [HatBot
Cox et al. (2015), SmartBot O’Dwyer (2018), WTFWThat Wu (2018)], which, though
performant, are reliant on hand crafted rules and communication conventions.
#### 2.3.2 Reinforcement Learning Methods
Closest and most relevant to our method is Policy Belief Iteration (Pb-It)
Tian et al. (2018), which proposes a method for learning implicit
communication protocols between agents in the card game bridge. More
specifically, the agents need to convey information about their hands to their
partner via bidding actions. Pb-It learns a belief update function which
predicts the other player’s private information given their action. Their
method also includes an auxiliary reward that incentivizes agents to
communicate with bids that provide the largest reduction in uncertainty for
their partner’s belief. However, the belief update and policy functions in Pb-
It must have access to private information of other players during training,
which compromises the practicality of the method. Our algorithm differs
crucially in that we are able to exploit a similar reward incentive without
violating the rules of the game during training by explicitly sharing private
information. Our algorithm uses approximations to other agents’ belief
distributions as a proxy for this ground truth information.
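The informativeness incentive described above can be sketched as an entropy-reduction reward computed on an approximated partner belief. This is our illustration of the idea, not the paper's exact reward; the distributions below are made up:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a discrete belief distribution."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + eps))

def intrinsic_reward(belief_before, belief_after):
    """Reward the hinter for the uncertainty reduction its hint causes in
    an *approximated* partner belief (no private information is read)."""
    return entropy(belief_before) - entropy(belief_after)

before = np.full(4, 0.25)                # partner equally unsure among 4 candidates
after = np.array([0.7, 0.1, 0.1, 0.1])   # hint concentrated the belief
r = intrinsic_reward(before, after)      # positive: the hint was informative
```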
Actor-Critic-Hanabi-Agent (ACHA) Mnih et al. (2016) is an actor-critic model
parameterized with deep neural networks. ACHA uses the Importance Weighted
Actor-Learner to avoid stale gradients and population-based training for
hyperparameter optimization.
Rainbow Hessel et al. (2018) combines several state of the art DQN techniques,
such as Double DQN, Noisy Networks, Prioritized experience replay, and
Distributional RL. For our experiments, we train an implementation of Rainbow
provided by Bard et al. (2019) as a comparative baseline.
Bayesian Action Decoder (BAD) Foerster et al. (2018) uses a Bayesian belief
update conditioned on the acting agent’s policy. In BAD, all agents utilize a
public belief which incorporates all card-related common knowledge.
### 2.4 Problem Statement
Partially observable cooperative multi-agent decision making problems with
stochastic state transitions are commonly modeled as Decentralized POMDPs
(Dec-POMDPs). A common alternative which more naturally expresses ToM concepts
of belief, preference, and desire is the Interactive POMDP (I-POMDP) framework
Doshi and Gmytrasiewicz (2004). In this work, we propose a simplified ToM
framework which embeds k-nested beliefs present in I-POMDPs in the belief
state of a Dec-POMDP.
A Dec-POMDP is defined by $\langle
S,\\{A_{i}\\},T,R,\\{\Omega_{i}\\},O,\gamma\rangle$. Where $S$ defines the
global state, $\\{A_{i}\\}$ denotes a set of agent action spaces for each
agent $i$. The state transition function $T$ defines the probability of next
state observations given a current state and agent action, $T:S\times
A\rightarrow\Delta(S)$ where $A=\times_{i}A_{i}$ and $\Delta(S)$ defines a set
of probability distributions over $S$. $\Omega_{i}$ is the set of possible
observations for agent $i$ with $O:S\times A\rightarrow\Delta(\Omega)$ where
$\Omega=\times_{i}\Omega_{i}$. The reward function, $R:S\times
A\rightarrow\mathbb{R}$, defines the global objective the team of agents seek
to maximize. Lastly, $\gamma\in[0,1)$ is a discount factor, intuitively
encoding how an agent should trade off between achieving near-term versus long
term rewards. Given a Dec-POMDP and a set of possible joint policies
$\pi=\bigcup_{i}\pi_{i}$, our objective is to obtain an optimal joint policy
which maximizes expected cumulative reward, such that
$\pi^{*}=\arg\max_{\pi}\mathbb{E}[\Sigma_{t}\gamma^{t}R(s_{t},a_{t})\,|\,\pi]$.
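The quantity inside the expectation can be made concrete with a small sketch of the discounted-return computation (a toy illustration; the reward sequence is made up):

```python
def discounted_return(rewards, gamma):
    """Compute sum_t gamma^t * r_t, the per-episode quantity inside the
    expectation in the Dec-POMDP objective."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

# A made-up reward sequence from one episode under some joint policy:
episode_rewards = [1.0, 0.0, 1.0, 1.0]
g = discounted_return(episode_rewards, gamma=0.5)  # 1 + 0 + 0.25 + 0.125 = 1.375
```

Averaging this quantity over many episodes gives a Monte-Carlo estimate of the objective a joint policy is evaluated against.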
Absent from the Dec-POMDP literature is a formal statement of agents’ beliefs
about each other’s mental states. An alternative modeling framework which
augments the POMDP with nested reasoning over beliefs and agents’ decision
making process may be found in the Interactive POMDP (I-POMDP) [e.g. Doshi and
Gmytrasiewicz (2004) and Karkus et al. (2017)]. The I-POMDP is similar to a Dec-POMDP;
however, it introduces the notion of a set of interactive states
$IS_{i}=S\times M_{j}$, where for each agent $i$ there is defined a set of
possible models $M_{j}$ for other agents $j$ in the system. A rigorous
treatment of I-POMDPs and solution methods for planning problems may be found
in Doshi and Gmytrasiewicz (2004). Due to the recursive formulation of
I-POMDPs, agents’ beliefs and intentional models may be infinitely nested;
making belief updates and optimality proofs intractable. Alternatively,
finitely nested I-POMDPs introduce a parameter $k$ called the strategy level
which limits the depth of the recursion. For notational compactness and
without loss of generality, we consider a two-player setting with players $i$
and $j$. Agent $i$’s 0-th level belief $b_{i}^{0}$ are probability
distributions over the physical state $S$. Its 0-th level type $\Theta_{i,0}$
is a tuple containing its 0-th level belief and agent models $M_{j,0}$.1110-th
level types are therefore POMDPs, as the other agents’ actions are folded into
$T$, $O$ and $R$ as noise (as in naive single-agent RL applied to multi-agent
settings) An agent’s first level beliefs are probability distributions over
the physical state and the 0-th level models of the other agent. Its first
level model consists of types up to level 1, second level beliefs are defined
in terms of first level models, and so on and so forth.
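The alternating-superscript convention for nested beliefs described above can be generated mechanically; a toy sketch (the function name is ours, not the paper's):

```python
def belief_superscript(k, self_agent="i", other_agent="j"):
    """Label for a depth-k nested belief b^{ij...}_k: agents alternate,
    starting from the reasoning agent, and the label has length k + 1
    (e.g. depth 2 gives "iji": "I believe you believe I believe")."""
    return "".join(self_agent if d % 2 == 0 else other_agent
                   for d in range(k + 1))
```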
We propose a simple extension to Dec-POMDPs to leverage ToM reasoning
motivated by the I-POMDP depth finitely nested belief model. For each agent
$i$, we define a ToM belief set $B^{i}=S_{0}\times S_{1}\times...\times S_{k}$
which defines the set of possible worlds our agent may exist in; including at
each depth $k$ other agents’ perspectives defined over the corresponding state
space. Therefore, a belief at depth $k$ is given by
$b_{k}^{ij...}=\Delta(S_{k})$, such that $|ij...|=k+1$. For clarity, we denote
the relationship between agents with a superscript on the belief indicating
the direction of the relationship. For even $k$, the relationship is
self-reflective, that is, $i$ models their own belief through recursion on
themselves and $j$; for odd $k$, the relationship models other players. For
instance, $b^{iji}_{2}$ is "I believe you believe I believe". We model $i$’s
level $k$ utility function $Q_{k}^{i}$ associated with a particular belief
state as $Q_{k}^{i}:b_{0}...\times b_{k}\rightarrow\mathbb{R}^{|A_{i}|}$.
Hence, we are able to obtain a tractable ToM architecture similar to I-POMDPs,
yet simple enough for model-free value function approximation.
## 3 Methods
We propose the use of nested theory of mind (ToM) belief levels to generate a
mechanism for performing and incentivizing communication for MARL agents in
the Hanabi environment. These belief levels not only allow agents to estimate
the portion of the state that is not fully observable, but also supply a
method of simulating the same belief estimate for other agents. The ability to
simulate a belief level for other agents allows each to recursively reason
about the other agents’ ToM beliefs. This recursive reasoning can be taken to
arbitrary depth. Our method includes two levels of belief; we argue this is
sufficient for efficient gameplay in Hanabi and analogous to human reasoning
on this task. We demonstrate that ToM-based agents will more effectively learn
the mechanics of the game to play through information-maximising hint
conventions and belief update strategies.
### 3.1 Nested Beliefs in Hanabi ToM
Given the nature of the Hanabi game, ToM agents must first estimate a belief
regarding their own cards, which we denote $b_{0}^{i}$ for player $i$. The
$b_{0}^{i}$ distribution provides the probability of a hand of cards for
player $i$ given the observed cards for other players, hints received, cards
in the discard pile, and cards on the firework piles. The information provided
to the agents allows for a reduction in the sample space by eliminating cards
which violate the hints observed as well as any cards that have been exhausted
through gameplay. With this reduced sample space, we can calculate a hand
probability based on sampling without replacement, noted in equation 3.
The following definitions outline the characteristics of our belief levels.
Let $F=\\{R,G,B,W,Y\\}$ and $V=\\{1,2,3,4,5\\}$ denote the sets of possible
colors and ranks respectively for cards in the game. Then the possible
knowledge for any given card is defined by $H_{F}=F\cup\\{\varnothing\\}$ and
$H_{V}=V\cup\\{\varnothing\\}$, where $\varnothing$ denotes no hint given.
Therefore, for each player $i\in[1,N]$, their cards and hand knowledge are
defined by $C^{i}=(c_{1},\dots c_{\eta}),c_{k}\in F\times V,\eta\in[4,5]$ and
$H^{i}=(h_{1},\dots,h_{\eta}),h_{k}\in H_{F}\times H_{V},\eta\in[4,5]$,
respectively. Given the observed hands and hints regarding their own cards,
players can generate their knowledge set and the unique set of hints:
$\displaystyle K^{i}$ $\displaystyle=\\{(c_{k},h_{k}):k\in[1,\eta],c_{k}\in C^{i},h_{k}\in H^{i}\\}$ (1)
$\displaystyle H^{*}$ $\displaystyle=\\{h:h=h_{k}\textrm{ for some }k\in[1,\eta]\\}$ (2)
The knowledge set and hint set provide the cases required to calculate a
hand’s probability given the current game state. Given $K^{i}$ and $H^{*}$,
players estimate the probabilities for belief level 0 and belief level 1
using:
$\displaystyle b_{0}^{i}$ $\displaystyle\sim
P(C^{i}|H^{i},\eta):=\frac{\prod_{(c,h)\in
K^{i}}(n_{\nu}(c|h))_{\delta^{i}_{c}(h)}}{\prod_{h\in
H^{*}}({|\nu(h)|})_{\lambda^{i}(h)}}$ (3) $\displaystyle\hat{b}_{0}^{i}$
$\displaystyle:=\frac{\prod_{(c,h)\in K^{i}}n_{\nu}(c|h)}{\prod_{h\in
H^{*}}{|\nu(h)|}}$ (4) $\displaystyle\hat{b}_{1}^{ij}$
$\displaystyle:=\frac{1}{Z}\sum_{C^{i}\sim\hat{b}_{0}^{i}}P(C^{i}|H^{i},\nu)\hat{b}_{0}^{j}$
(5) $\displaystyle\hat{b}_{0}^{j}$
$\displaystyle:=P(C^{j}|H^{j},C^{i},\hat{\nu}^{j})$ (6)
where:
$\displaystyle\nu$ $\displaystyle=\textrm{multiset}(F\times V,n_{\nu}(c))$
$\displaystyle\delta^{i}_{c}(h)$ $\displaystyle=\textrm{multiplicity of }(c,h)\textrm{ for hand }C^{i}$
$\displaystyle\lambda^{i}(h)$ $\displaystyle=\textrm{multiplicity of }h\textrm{ for hints }H^{i}$
$\displaystyle\nu(h)$ $\displaystyle=\textrm{sub-multiset of }\nu\textrm{ given hint }h$
$\displaystyle n_{\nu}(c|h)$ $\displaystyle=\textrm{multiplicity of card }c\textrm{ in }\nu(h)$
$\displaystyle n_{C^{i}}(c)$ $\displaystyle=\textrm{multiplicity of card }c\textrm{ in hand }C^{i}$
$\displaystyle|\nu(h)|$ $\displaystyle=\sum_{c\in Supp(\nu(h))}n_{\nu}(c|h)$
$\displaystyle\hat{\nu}^{j}$ $\displaystyle=\\{F\times V,\hat{n}_{\nu}\\}$
$\displaystyle\hat{n}_{\nu}(c)$ $\displaystyle=n_{\nu}(c)+n_{C^{i}}(c)-n_{\hat{C}^{i}}(c),\quad(\hat{C}^{i}\textrm{ is sampled, }C^{i}\textrm{ is observed by }j)$
For clarity, a working example of the above (equations 3 - 6) is provided in
Appendix A.2. For tractability and to avoid enumerating all possible hands, we
estimate each player’s belief $b_{0}^{i}$ using the estimated belief defined
by equation 4, which allows us to represent the card probabilities as
independent samples with replacement when generating sample hands.
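Equations (3) and (4) can be checked with a small exact-arithmetic sketch (all names are ours; the test values mirror the hint counts of the Appendix A.2 example, with a repeated R3 in the hand so the without-replacement terms come into play):

```python
from fractions import Fraction
from collections import Counter

def hand_prob(hand, hints, count, pool_size, replacement=False):
    """Sketch of equations (3) and (4): probability of a hand given per-hint
    card counts. count[(c, h)] is n_nu(c|h), the multiplicity of card c in the
    pool nu(h); pool_size[h] is |nu(h)|. Without replacement, repeated draws
    under the same hint decrement both numerator and denominator, reproducing
    the falling factorials of equation (3)."""
    drawn_card = Counter()   # draws of (c, h) so far
    drawn_hint = Counter()   # draws under hint h so far
    p = Fraction(1)
    for c, h in zip(hand, hints):
        num = count[(c, h)] - (0 if replacement else drawn_card[(c, h)])
        den = pool_size[h] - (0 if replacement else drawn_hint[h])
        p *= Fraction(num, den)
        drawn_card[(c, h)] += 1
        drawn_hint[h] += 1
    return p
```

With counts like those of the Appendix A.2 example, a hand containing a repeated R3 yields 1/225 without replacement and 8/1125 with replacement.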
We defined two methods for sampling hands for player $i$ when calculating
$b_{1}^{ij}$. In the first case, we generate a maximum a posteriori hand
$C_{MAP}=\operatorname*{argmax}_{C^{i}}P(C^{i}|H^{i})$, which is then used to
augment the observed player hands from player $j$’s perspective. The augmented
player hand is used to generate $\hat{\nu}^{j}$, which is then used in the
calculation of $b_{0}^{j}$. The resulting $b_{0}^{j}$ is then used as
$b_{1}^{ij}$. This heuristic approximation is used to mimic a particular
interpretation of human play: humans may reason about one another’s beliefs
with respect to their most likely hand, rather than integrate over all
possible hands weighted by $b_{0}^{i}$.
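Because the with-replacement approximation of equation (4) factorizes over hand slots, the MAP hand can be read off per slot; a sketch under that assumption (the true argmax of equation (3) need not factorize):

```python
def map_hand(card_dists):
    """C_MAP sketch: under the independent with-replacement approximation of
    equation (4), the joint argmax factorizes, so the MAP hand is the per-slot
    argmax of each card's marginal distribution.
    card_dists: one dict {card: probability} per hand slot."""
    return [max(dist, key=dist.get) for dist in card_dists]
```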
In the alternative case where we do not use the $C_{MAP}$ method, hands are
sampled proportional to equation 4 and then selected given
$b_{0}^{i}(C^{i})>0$ until the number of samples matches the requested number
of hands. The resulting sampled hands are then used as input for equations 5
and 6, which estimate the level-0 belief for other agents based on player
$i$’s estimated hand, and generate a weighted average based on probability of
$C^{i}\sim b_{0}^{i}$. The resulting weighted average is normalized in
equation 5 and used to represent player $i$’s estimate of the 0-level belief
for agent $j$.
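The sample-and-average procedure of equation (5) can be sketched as follows (all callables are placeholders for the paper's components, not its actual code):

```python
def mc_first_level_belief(sample_hand, weight, level0_belief_of_j, n_samples=100):
    """Monte-Carlo sketch of equation (5): sample hands C^i from agent i's
    own-hand belief, compute j's level-0 belief given each sampled hand, and
    return the probability-weighted, normalized average."""
    acc, z = None, 0.0
    for _ in range(n_samples):
        hand = sample_hand()                # C^i ~ b_0^i
        w = weight(hand)                    # P(C^i | H^i, nu)
        b_j = level0_belief_of_j(hand)      # b_0^j given the augmented view
        if acc is None:
            acc = [0.0] * len(b_j)
        for k, v in enumerate(b_j):
            acc[k] += w * v
        z += w
    return [v / z for v in acc]             # the 1/Z normalization
```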
### 3.2 Incentivising Efficient Hints
Given that each agent has access to belief level $b_{1}^{ij}$ by equations 4
and 6, it is natural to consider how this approximation may be further
leveraged to encourage Hanabi agents to make maximally informative hint
actions. Since each hand $C^{j}$ is in the observation of every other agent
$i$, each agent may independently compute $D(C^{j}~{}||~{}b_{1}^{ij})$, where
$D$ is some divergence measure between the true hand and $i$’s approximation
to $j$’s belief. Partially motivated by Tian et al. (2018) we encourage our
agents to minimize this quantity by providing the intrinsic reward
$r^{i}_{c,t}$ given by:
$\displaystyle r_{c,t}^{i}$ $\displaystyle=\max_{j\neq
i}\sum_{k=1}^{\eta}\bigl{[}{W_{p}(\psi(C_{t}^{j}[k]),\hat{b}_{1;t}^{ij})}-W_{p}(\psi(C_{t+1}^{j}[k]),\hat{b}_{1;t+1}^{ij})\bigr{]}$
(7)
where $W_{p}(\psi(C_{t}^{j}[k]),\hat{b}_{1;t}^{ij})$ is the Wasserstein metric
and $\psi(C_{t}^{j}[k])$ is the one-hot encoding of the cards in player $j$’s
hand. In our experiments we use $p=2$ and weight the intrinsic communication
reward by a hyperparameter $\beta$, which roughly trades off between sharing
information via hint actions and engaging in more aggressive play and discard
strategies. We hypothesize that using this reward term usefully biases the
agents’ value function towards more conservative play and helps infer the
effectiveness of hint actions. This is accomplished by determining how
incorrect our estimated beliefs for other players are by comparing them to
their ground truth hands. As such, we can directly measure how much a given
hint shifts the distribution toward the ground truth, and determine whether a
belief differs enough from the truth to warrant performing a hint action
rather than a play action.
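A minimal sketch of the per-slot movement term inside equation (7), using the 1-Wasserstein distance over a fixed card ordering so the example is self-contained (the paper uses $W_p$ with $p=2$; all names are ours):

```python
def w1_discrete(p, q):
    """1-Wasserstein distance between two distributions on the same ordered
    finite support: the L1 distance between their CDFs."""
    gap, total = 0.0, 0.0
    for pi, qi in zip(p, q):
        gap += pi - qi
        total += abs(gap)
    return total

def hint_movement(true_onehot, belief_before, belief_after):
    """One summand of equation (7) for a single card slot: positive when the
    transition (e.g. receiving a hint) moves the belief toward the true card."""
    return (w1_discrete(true_onehot, belief_before)
            - w1_discrete(true_onehot, belief_after))
```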
### 3.3 ToM Rainbow Agent
Own-hand beliefs $b_{0}^{i}$ were represented as flattened vectors
$b_{0}^{i}\in\mathbf{R}^{\eta\times|F\times V|}$ which replace the partial
card knowledge encoding component of the Hanabi environment’s default
observation vector. In experiments which reason over first-order beliefs, both
belief levels $\\{b_{0}^{i},b_{1}^{ij}\\}$ were concatenated. The resultant
modified observation vector is roughly the same dimension as the default in
Hanabi. Our models do not use or require observation history stacking, since
$b_{0}^{i}$ is a sufficient statistic for any other agent $j$’s private
knowledge of $i$’s hand given the most recent hint and $i$’s observation.
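The flattening step described above can be sketched directly (the function name and shapes are ours, chosen to match the two-player hand size $\eta=5$ over $|F\times V|=25$ card types):

```python
def belief_observation(b0, eta=5, n_card_types=25):
    """Sketch of the modified observation component: flatten the
    eta x |F x V| own-hand belief into one vector, replacing the default
    binary card-knowledge encoding of the Hanabi observation."""
    assert len(b0) == eta and all(len(slot) == n_card_types for slot in b0)
    return [p for slot in b0 for p in slot]
```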
Figure 1: ToM architecture for Hanabi. Fully colored cards denote the state of
the game board and pastel colored hands indicate agents’ belief over
unobserved parts of the game state and one another’s beliefs, given their
observations.
Computing $b_{1}^{ij}$ requires marginalizing over the agent’s own-hand belief
$b_{0}^{i}$. We have proposed two mechanisms for mitigating this cost, the
latter of which we explore empirically. The first is to obtain a Monte-Carlo
approximation of $b_{0}^{j}$ by iteratively sampling from agent $i$’s own-hand
belief, computing $b_{0}^{j}$ given each sample, and weighting the associated
probabilities by the probability of the sampled hand under agent $i$’s current
own-hand belief, as in equation 5.
We observed that the empirical runtime cost of the MC variant of our algorithm
scales roughly linearly in the number of hands sampled from $b_{0}^{i}$.
Small sample sizes produced high-variance estimates of $b_{1}^{ij}$ and
empirically led to low scores. In order to yield a more computationally
tractable algorithm for estimating $b_{1}^{ij}$, we instead compute this
approximation given the current maximum a posteriori hand under agent $i$’s
own-hand belief, $C_{MAP}$.
All agents including the baseline were trained using the value-based Rainbow
algorithm and shared all associated hyperparameters. Unless otherwise noted,
we train with $\gamma=0.99$, $V_{max}=25$, an update horizon of one, and 51
atoms to approximate the value function distribution. We train our models
using RMSProp Tieleman and Hinton (2012) with a learning rate of $0.0025$,
decay of $0.95$, momentum of $0.0$, and $\epsilon=10^{-6}$.
### 3.4 Experiments
We first assess total scores and sample efficiency of our method against a
baseline Rainbow agent. As described, these experiments share hyperparameters
with the code associated with the Hanabi Rainbow baseline in Bard et al.
(2019). All models were trained in the two player sample limited version of
Hanabi for 20k iterations of 500 episodes each. We present final evaluation of
all agent scores at $10^{8}$ total timesteps (individual interactions with the
environment) to conform with the sample-limited regime guidelines.
Intuitively, our intrinsic reward mechanism should increase the frequency of
hint actions that are informative under the current policy and state of the
game board. We explore whether this is in fact the case by first assessing the
sample efficiency and final game score achieved by ToM depth-1 agents trained
with our hint reward weighted by a range of $\beta$ values, where
$\beta=(0,2,7,25)$.
## 4 Results
Our full method shows improved sample efficiency during the early stages of
training as well as better evaluation scores over the baseline. Our model’s
performance in self-play after being trained also demonstrates slightly
reduced variance against the baseline. In ablation experiments, we found that
agents that implemented belief level 1 were at least as performant as belief
level 0 (in terms of median scores) and outperformed belief level 0 in the
three and four player cases. Both methods show a larger margin of improvement
over the baseline Rainbow agent in settings with a greater number of players,
with an improvement of two points in the five player setting. It is interesting to
note that simply providing the nested belief representation to our agents did
not improve their utility above the baseline model. However, in combination
with the proposed intrinsic reward, agents score above our baseline model and
achieve scores exceeding those of the best sample limited agent reported in
Bard et al. (2019).
Figure 2: Average score per iteration, 2 and 3 player games in sample-limited self-play.

Agent | 2P | 3P | 4P | 5P
---|---|---|---|---
Rainbow | 21 / 20.64 (.22) | 19 / 18.71 (.20) | 18 / 18.0 (.17) | 17 / 15.26 (.18)
ToM $b_{0}^{i}$ | 22 / 21.55 (.07) | 19 / 19.78 (.07) | 18 / 17.40 (.09) | 19 / 18.87 (.07)
ToM $b_{1}^{ij}$ | 22 / 21.43 (.08) | 20 / 19.76 (.08) | 19 / 19.13 (.09) | 19 / 18.49 (.06)

Table 1: Results for the three agents: Rainbow, ToM belief level 0 $b_{0}^{i}$
and belief level 1 $b_{1}^{ij}$. Each cell shows the median score of trained
agents over 1000 episodes of self-play, followed by the mean score and
(standard error of the mean). Rainbow scores are the best agent results from
Bard et al. (2019); ToM agents are the results of our proposed agents.

Figure 3: Score distributions from best Rainbow vs ToM agents. Score
distributions were generated by running each agent for 1000 episodes in
self-play after training.
We next report the impact of our intrinsic communication reward for various
values of $\beta$ which trades off between the weight placed on the
environmental reward, which is always the final score of the game when it
terminates, versus the intrinsic reward. We observe that $\beta=2$ achieves
higher scores than other alternatives. This performance increase is fairly
robust to a wide range of choices for this hyperparameter. However, in Hanabi,
we observe performance falls off sharply for $\beta>10$. We speculate that in
such situations, agents trained with too large a $\beta$ will exhaust their
communication tokens attempting to fully reveal one another’s hands as quickly
as possible while ignoring the task of building the required firework decks
and winning the game.
Figure 4: Effect of intrinsic communication reward on expected return. (left)
Training curves for different choices of $\beta$. (right) Resulting scores
after training for the same values of $\beta$
## 5 Discussion
We have proposed an approach to achieving theory of mind reasoning over nested
beliefs using reinforcement learning for Hanabi agents. We have defined the
necessary hint-conditional posterior distribution over player hands and
proposed two efficient algorithms for constructing approximations to other
players’ belief about their own hands. With these tools we are able to
incentivize efficient hint actions using a unique intrinsic reward term
defined over the other agent’s belief.
Our approach shows promise for further investigation in both Hanabi and other
ToM reasoning tasks. In future work, it will be interesting to empirically
assess the scalability of our algorithm to deeper belief nesting. We
conjecture that the intrinsic reward mechanism elicits changes in the action
distribution that cannot be clearly intuited from measuring extrinsic reward.
Also, we will explore methods for characterizing the strategies learned by our
method to more efficiently identify the grounded hint conventions RL agents
may be learning from the inclusion of nested ToM beliefs.
## References
* Bard et al. [2019] Nolan Bard, Jakob N. Foerster, Sarath Chandar, Neil Burch, Marc Lanctot, H. Francis Song, Emilio Parisotto, Vincent Dumoulin, Subhodeep Moitra, Edward Hughes, Iain Dunning, Shibl Mourad, Hugo Larochelle, Marc G. Bellemare, and Michael Bowling. The Hanabi challenge: A new frontier for AI research, 2019.
* Baron-Cohen et al. [1995] Simon Baron-Cohen, Leda Cosmides, and John Tooby. _Mindblindness: An Essay on Autism and Theory of Mind_. A Bradford Book, 1995. ISBN 0262023849.
* Byom and Mutlu [2013] Lindsey J. Byom and Bilge Mutlu. Theory of mind: mechanisms, methods, and new directions. _Frontiers in Human Neuroscience_ , 7, 2013. doi: 10.3389/fnhum.2013.00413. URL https://doi.org/10.3389/fnhum.2013.00413.
* Cox et al. [2015] Christopher Cox, Jessica De Silva, Philip Deorsey, Franklin HJ Kenter, Troy Retter, and Josh Tobin. How to make the perfect fireworks display: Two strategies for hanabi. _Mathematics Magazine_ , 88(5):323–336, 2015.
* Doshi and Gmytrasiewicz [2004] Prashant Doshi and Piotr J. Gmytrasiewicz. A framework for sequential planning in multi-agent settings. _ArXiv_ , abs/1109.2135, 2004.
* Foerster et al. [2018] Jakob N Foerster, Francis Song, Edward Hughes, Neil Burch, Iain Dunning, Shimon Whiteson, Matthew Botvinick, and Michael Bowling. Bayesian action decoder for deep multi-agent reinforcement learning. _arXiv preprint arXiv:1811.01458_ , 2018.
* Hessel et al. [2018] Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. In _Thirty-Second AAAI Conference on Artificial Intelligence_ , 2018.
* Karkus et al. [2017] Péter Karkus, David Hsu, and Wee Sun Lee. Qmdp-net: Deep learning for planning under partial observability. _ArXiv_ , abs/1703.06692, 2017.
* Mnih et al. [2016] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In _International conference on machine learning_ , pages 1928–1937, 2016.
* O’Dwyer [2018] A. O’Dwyer. Github - quuxplusone/hanabi: Framework for writing bots that play hanabi, 2018. URL https://github.com/Quuxplusone/Hanabi.
* Tian et al. [2018] Zheng Tian, Shihao Zou, Tim Warr, Lisheng Wu, and Jun Wang. Learning multi-agent implicit communication through actions: A case study in contract bridge, a collaborative imperfect-information game. _ArXiv_ , abs/1810.04444, 2018.
* Tieleman and Hinton [2012] T. Tieleman and G. Hinton. Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
* Wu [2018] J. Wu. Github - wuthefwasthat/hanabi.rs: Hanabi simulation in rust. https://github.com/WuTheFWasThat/hanabi.rs, 2018.
## Appendix
### A.1 Proof that Equation 3 Sums to 1
###### Theorem 1.
$\displaystyle\sum\limits_{C^{i}=(c_{1},c_{2},...c_{\eta})\in\bigtimes\limits_{\eta,h\in
H^{i}}\nu(h)}P(C^{i}|H^{i},\eta)=1$
###### Proof.
Assume there are $s$ unique hints, so $|H^{*}|=s$ and $r$ unique hint-card
combinations, so $|K^{i}|=r$.
$\sum\limits_{C^{i}=(c_{1},c_{2},...c_{\eta})\in\bigtimes\limits_{\eta,h\in
H^{i}}\nu(h)}P(C^{i}|H^{i},\eta)$
$=\sum\limits_{C^{i}\in\bigtimes\limits_{\eta,h\in
H^{i}}\nu(h)}\frac{\prod\limits_{(c,h)\in
K^{i}}(n_{\nu}(c|h))_{\delta^{i}_{c}(h)}}{\prod\limits_{h\in
H^{*}}({|\nu(h)|})_{\lambda^{i}(h)}}$
$=\frac{{\sum\limits_{{C}^{i}\in\bigtimes\limits_{\eta,h\in
H^{i}}\nu(h)}}\hskip
2.84526pt\prod\limits_{(c,h)\in{K}^{i}}(n_{\nu}(c|h))_{\delta^{i}_{c}(h)}}{\prod\limits_{h\in
H^{*}}(|\nu(h)|)_{\lambda^{i}(h)}}$
$=\frac{\sum\limits_{{C}^{i}\in\bigtimes\limits_{\eta,h\in H^{i}}\nu(h)}\hskip
2.84526pt\prod\limits_{(c,h)\in{K}^{i}}(n_{\nu}(c|h))_{\delta_{c}^{i}(h)}}{\prod\limits_{h\in
H^{*}}\left(\sum\limits_{c\in
Supp(\nu(h))}n_{\nu}(c|h)\right)_{\lambda^{i}(h)}}$
$=\frac{\sum\limits_{{C}^{i}\in\bigtimes\limits_{\eta,h\in
H^{i}}\nu(h)}\left[(n_{\nu}(c_{1}|h_{1}))_{\delta_{c}^{i}(h_{1})}\cdots(n_{\nu}(c_{k}|h_{r}))_{\delta_{c}^{i}(h_{r})}\right]}{\left(\sum\limits_{c\in
Supp(\nu(h_{1}))}n_{\nu}(c|h_{1})\right)_{\lambda^{i}(h_{1})}\cdots\left(\sum\limits_{c\in
Supp(\nu(h_{s}))}n_{\nu}(c|h_{s})\right)_{\lambda^{i}(h_{s})}}$
Factor out the hint $h_{r}$ from the sum over all possible card hands on the
numerator, which consequently requires factoring out all $c\in C^{i}$
associated with that hint. Those card-hint combinations can be expressed by
the set $K^{*}\subseteq K^{i}\ni h_{k}=h_{r}$. Without loss of generality,
assume $h_{r}=h_{s}$, thus
$\sum\limits_{c}\delta^{i}_{c}(h_{r})=\lambda^{i}(h_{r})=\lambda^{i}(h_{s})$.
$=\frac{\sum\limits_{{C}^{i}\in\bigtimes\limits_{\eta-\lambda^{i}(h_{r}),h\in
H^{i}}\nu(h)}\left[(n_{\nu}(c_{1}|h_{1}))_{\delta_{c}^{i}(h_{1})}\cdots(n_{\nu}(c_{r-\lambda^{i}(h_{s})}|h_{k-\lambda^{i}(h_{s})}))_{\delta_{c}^{i}(h_{r-\lambda^{i}(h_{s})})}\right]\cancel{\left(\sum\limits_{c\in
Supp(\nu(h_{r}))}n_{\nu}(c|h_{r})\right)_{\sum\limits_{c}\delta_{c}^{i}(h_{r})}}}{\left[\left(\sum\limits_{c\in
Supp(\nu(h_{1}))}n_{\nu}(c|h_{1})\right)_{\lambda^{i}(h_{1})}\hskip
2.84526pt\cdots\hskip 2.84526pt\cancel{\left(\sum\limits_{c\in
Supp(\nu(h_{s}))}n_{\nu}(c|h_{s})\right)_{\lambda^{i}(h_{s})}}\right]}$
Continue on for all unique hints.
⋮
$=1$
$\therefore\sum\limits_{C^{i}=(c_{1},c_{2},...c_{\eta})\in\bigtimes\limits_{\eta,h\in
H^{i}}\nu(h)}P(C^{i}|H^{i},\eta)=1$
∎
### A.2 Equations 3 - 6 Example
1. Given hints: [R, G, R, 1, 2]
2. Valid cards and counts given each hint, i.e. $n_{\nu}(c|h)$ (any card not listed has count 0):
  * Card1: [R3: 2, R4: 2, R5: 1]
  * Card2: [G3: 2, G4: 2, G5: 1]
  * Card3: [R3: 2, R4: 2, R5: 1]
  * Card4: [Y1: 3, W1: 3, B1: 3]
  * Card5: [Y2: 2, W2: 2, B2: 2]
3. Calculating $b_{0}^{i}$ using samples without replacement (equation 3): $b_{0}^{i}\sim P([R3,G3,R3,W1,W2]|[R,G,R,1,2],5)=(2/5)(2/5)((2-1)/(5-1))(3/9)(2/6)=1/225$
4. Calculating $\hat{b}_{0}^{i}$ using samples with replacement (equation 4): $\hat{b}_{0}^{i}=(2/5)(2/5)(2/5)(3/9)(2/6)=8/1125$
5. Calculating $\hat{b}_{1}^{ij}$ follows the same method as $b_{0}^{i}$, but first relies on modifying the counts to approximate agent $j$’s perspective. This is done by using a hand sampled from $b_{0}^{i}$ as the cards observed for player $i$’s hand from $j$’s perspective. Additionally, the cards observed in $j$’s hand from $i$’s perspective are added back to the set of unobserved cards. This gives all the necessary information for repeating equation 3 from agent $j$’s perspective.
6. Equation 6 is equivalent to equation 4, but with the same updated counts as defined in the previous step to simulate $j$’s perspective.
# Rikudo is NP-complete
Viet-Ha Nguyen Univ. Côte d’Azur, CNRS, Inria, I3S, Sophia Antipolis, France
Kévin Perrot Univ. Côte d’Azur, CNRS, Inria, I3S, Sophia Antipolis, France
Aix-Marseille Univ., Univ. de Toulon, CNRS, LIS, Marseille, France
###### Abstract
Rikudo is a number-placement puzzle, where the player is asked to complete a
Hamiltonian path on a hexagonal grid, given some clues (numbers already placed
and edges of the path). We prove that the game is complete for
${\mathsf{NP}}$, even if the puzzle has no hole. When all odd numbers are
placed it is in ${\mathsf{P}}$, whereas it is still ${\mathsf{NP}}$-hard when
all numbers of the form $3k+1$ are placed.
## 1 Introduction
We discovered Rikudo in the game column of the local newspaper Le Progrès. It
was created a few years ago by two French designers, Paul and Xavier, who
wanted to offer a new challenge to Sudoku lovers [1]. The study of the
computational complexity of games is quite a tradition now [2, 4, 5, 6, 10,
11, 12, 13, 15, 16, 17], and we could not resist stating these results. Even
if they are not highly technical, some are not trivial.
The game consists of placing numbers from $1$ to $n$ on the cells of a
hexagonal grid, so that the sequence forms a Hamiltonian path (successive
numbers must be placed on adjacent cells). Furthermore, some numbers are
already placed, and some adjacencies are given so that the path must follow
them.
In Section 2 we present the theoretical modelization of Rikudo and the
problems of solving such puzzles, which are proven to be ${\mathsf{NP}}$-hard
in Section 3. Given that the reductions do not make use of numbers already
placed on the grid, Section 4 discusses the complexity under the additional
constraint that some fixed fraction $\alpha$ of the numbers are already
placed, or that all numbers of the form $kx+1$ are already placed for some
fixed integer $k$.
## 2 Model
As for Sudoku, every Rikudo instance one finds in newspapers is solvable. In
order to model it as a decision problem in complexity theory, we have to
create some negative instances (i.e. games that do not admit any solution),
and also to create games of arbitrary size. The original game can be played at
http://www.rikudo.fr/.
Let us consider the hexagonal grid with pointy orientation111Let us recall
that hexagonal grids follow one of two orientations: pointy or flat., and
denote $\mathcal{C}$ the set of cells. Given a subset
$\tau\subset\mathcal{C}$, we associate the loopless graph $G_{\tau}$ on vertex
set $\tau$ with adjacency relation corresponding to pairs of cells sharing an
edge. We say that $\tau$ is connected whenever $G_{\tau}$ is connected. A
Rikudo game is a connected finite subset of cells $\tau$ and, with $n=|\tau|$
and $[n]=\\{1,\dots,n\\}$,
* $\bullet$
a partial injective map $m:\tau\to[n]$ (numbers already placed),
* $\bullet$
a subset of the adjacency relation $p\subseteq{\sim}$ (given adjacencies), where $c\sim c^{\prime}$ denotes that cells $c$ and $c^{\prime}$ share an edge.
A solution is a bijective map $s:\tau\to[n]$ such that, with symmetric binary
relation ${\leftrightsquigarrow_{s}}$ on $\mathcal{C}$ defined as
$c\,{\leftrightsquigarrow_{s}}\,c^{\prime}\iff|s(c)-s(c^{\prime})|=1$,
* $\bullet$
${\leftrightsquigarrow_{s}}\subseteq{\sim}$ (the sequence of numbers forms a Hamiltonian path in $G_{\tau}$),
* $\bullet$
$s(c)=m(c)$ for all cells $c$ in the domain of $m$ (respect given numbers),
* $\bullet$
$p\subseteq{\leftrightsquigarrow_{s}}$ (respect given adjacencies).
See some examples on Figure 1.
Figure 1: Two examples of Rikudo games. Player is asked to complete a
Hamiltonian path of numbers (${\leftrightsquigarrow_{s}}$) on a subset of
cells ($\tau$) from the hexagonal grid (from $1$ to $n=61$ on the left, from
$1$ to $n=85$ on the righ), given some numbers already placed ($m$), following
edge-to-edge adjacencies (), and respecting the ones indicated by a circle
($p$). Solutions in Appendix A.
The main problems we are interested in are:
Rikudo
Input: a game $(\tau,m,p)$. Question: does it admit a solution?
Rikudo without holes
Input: a game $(\tau,m,p)$ such that $G_{\mathcal{C}\setminus\tau}$ is
connected. Question: does it admit a solution?
Observe that both problems are trivially in ${\mathsf{NP}}$.
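The membership claim amounts to a polynomial-time certificate check; a sketch with an assumed axial-coordinate encoding of the pointy hexagonal grid (encoding and names are ours, not the paper's):

```python
# Axial coordinates (q, r) on a pointy hexagonal grid: each cell has six
# neighbours at these offsets.
HEX_OFFSETS = {(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)}

def adjacent(a, b):
    return (b[0] - a[0], b[1] - a[1]) in HEX_OFFSETS

def is_solution(tau, m, p, s):
    """Polynomial-time verifier for a Rikudo certificate s: tau -> [n].
    Checks that s is bijective onto [n], that consecutive numbers sit on
    adjacent cells (a Hamiltonian path in G_tau), and that the clues m
    (placed numbers) and p (forced adjacencies) are respected."""
    n = len(tau)
    if set(s) != set(tau) or sorted(s.values()) != list(range(1, n + 1)):
        return False
    if any(s[c] != v for c, v in m.items()):          # respect given numbers
        return False
    cell_of = {v: c for c, v in s.items()}
    if any(not adjacent(cell_of[k], cell_of[k + 1]) for k in range(1, n)):
        return False
    return all(abs(s[a] - s[b]) == 1 for a, b in p)   # respect given adjacencies
```

Since the certificate has polynomial size and each check is a linear scan, both problems sit in ${\mathsf{NP}}$ as stated.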
## 3 ${\mathsf{NP}}$-hardness
Both reductions will be made from a closely related problem, namely of
deciding the existence of a Hamiltonian cycle in a hexagonal grid graph. In
order to avoid ambiguity, we call the six “vertices” of a hexagonal cell, its
six corners. A hexagonal grid graph is given by a vertex set that is a subset
of corners from a unit side length regular hexagonal tiling of the plane, and
whose edge set connects vertices that are one unit apart. The problem is
called Hamiltonian circuits in hexagonal grid graphs (HCH), known to be
${\mathsf{NP}}$-complete [8]. Without loss of generality we will consider that
the instances of HCH have only vertices of degree two and three (no vertex of
degree one).
###### Theorem 1.
Rikudo is ${\mathsf{NP}}$-hard, even when $p=\emptyset$ and $m$ has domain
${\mathtt{dom}}(m)=\\{1,n\\}$.
###### Proof.
The reduction from HCH is almost trivial, by noting that any hexagonal grid
graph $H$ equals some $G_{\tau}$. Note that the vertices of $H$ correspond to
corners of the hexagonal cells, whereas the vertices of $G_{\tau}$ correspond
to the cells themselves. To align two such graphs, one may consider the graph
$H$ from the hexagonal grid with flat orientation, scale up $H$ by a factor of
$\sqrt{3}$, and align its vertices with the center of cells in the pointy
hexagonal grid to play Rikudo.
Figure 2: Reduction from HCH (graph in blue) to Rikudo, with $n=43$.
The reduction (see Figure 2) then simply consists in the game $(\tau,m,p)$
with $\tau$ the set of cells hosting a vertex of $H$, $p=\emptyset$, and $m$
defined on only two cells $t,t^{\prime}\in\tau$, chosen such that
$t\sim t^{\prime}$ and $t$
has degree $2$ in $G_{\tau}$ (such a cell always exists), as $m(t)=1$ and
$m(t^{\prime})=n$ where $n$ is the number of vertices in $H$. We have
$H=G_{\tau}$ and the Hamiltonian path of Rikudo is enforced to be a cycle by
$m$, hence the HCH and Rikudo instances are identical. ∎
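The construction can be summarised in a few lines of code. A minimal sketch, assuming the vertices of $H$ are given directly as grid cells together with their adjacency lists (the names `reduction_from_hch`, `H_cells` and `adj` are ours):

```python
def reduction_from_hch(H_cells, adj):
    """Theorem 1 reduction sketch: H_cells are the grid cells hosting the
    vertices of the hexagonal grid graph H, and adj maps each cell to its
    neighbours in H.  Returns a Rikudo game (tau, m, p) with p empty and
    dom(m) = {1, n}."""
    tau = set(H_cells)
    n = len(tau)
    # pick a cell t of degree two (such a cell always exists) and a
    # neighbour t' of t, then force the path to start and end there
    t = next(c for c in H_cells if len(adj[c]) == 2)
    t_prime = adj[t][0]
    m = {t: 1, t_prime: n}
    p = set()  # no enforced-adjacency (diamond) constraints
    return tau, m, p
```

Since $H=G_{\tau}$, a solution of this game is exactly a Hamiltonian cycle of $H$.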
The restriction on $p$ and $m$ in Theorem 1 motivates the consideration of
Rikudo without holes, which seems closer to the spirit of the “real” Rikudo
game and requires more advanced constructions.
###### Theorem 2.
Rikudo without holes is ${\mathsf{NP}}$-hard.
###### Proof.
Let $H$ be an instance of HCH. Without loss of generality we can consider that
the vertices of $H$ have degree two or three (hexagonal grid graphs have
degree at most three, and if $H$ has a vertex of degree one then it is
trivially impossible to have a Hamiltonian cycle). We construct the game
$(\tau,m,p)$ as follows (see Figure 3). First, take $H$ with flat orientation,
scale it up by a factor of $2\sqrt{7}$, rotate it counterclockwise by an angle
of $\arctan\frac{1}{3\sqrt{3}}$, and align its vertices with the common
corners of three cells in the pointy hexagonal grid to play Rikudo. We have
that each vertex $v$ of $H$ is at the corner of three cells
$c_{1}(v),c_{2}(v),c_{3}(v)$ of the game, and that the middle of each edge $e$
of $H$ is at the center of a cell $c(e)$ of the game. Let $c_{4}(v)$
(respectively $c_{5}(v)$; $c_{6}(v)$) denote the cell adjacent to both
$c_{1}(v),c_{2}(v)$ (respectively $c_{2}(v),c_{3}(v)$; $c_{3}(v),c_{1}(v)$)
which is not $c_{3}(v)$ (respectively $c_{1}(v)$; $c_{2}(v)$). For all $v$ the
six cells $c_{1}(v),\dots,c_{6}(v)$ form an upward or downward triangle.
Consider the set of cells $\tau^{\prime}$ obtained by:
* $\bullet$
for each vertex $v$ of $H$, add
$c_{1}(v),c_{2}(v),c_{3}(v),c_{4}(v),c_{5}(v),c_{6}(v)$ to $\tau^{\prime}$,
and call this subset of six cells a vertex gadget;
* $\bullet$
for each edge $e$ of $H$, add the cell $c(e)$ to $\tau^{\prime}$, and call
this cell an edge gadget.
The cells of each finite connected component of
$G_{\mathcal{C}\setminus\tau^{\prime}}$ form a hole of $\tau^{\prime}$.
The set $\tau$ is obtained from $\tau^{\prime}$ by adding all the holes of
$\tau^{\prime}$. This ensures that $G_{\mathcal{C}\setminus\tau}$ is
connected.
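Computing the holes amounts to a flood fill on the complement of $\tau^{\prime}$: pad the bounding box by one ring of cells, flood from the border, and keep the complement cells that are never reached. A hedged sketch, assuming axial coordinates for the hexagonal cells (the function name and the neighbour convention are ours):

```python
from collections import deque

# Axial-coordinate neighbours of a hexagonal cell (one common convention).
HEX_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def holes(tau_prime):
    """Return the cells of all finite connected components of the
    complement of tau_prime, i.e. the holes of tau_prime."""
    qs = [q for q, r in tau_prime]
    rs = [r for q, r in tau_prime]
    # Pad the bounding box by one ring: every cell of the infinite
    # complement component inside the box can then reach the border.
    q0, q1 = min(qs) - 1, max(qs) + 1
    r0, r1 = min(rs) - 1, max(rs) + 1
    box = {(q, r) for q in range(q0, q1 + 1) for r in range(r0, r1 + 1)}
    outside = box - set(tau_prime)
    # Flood-fill from the border: everything reached belongs to the
    # single infinite component.
    border = {c for c in outside if c[0] in (q0, q1) or c[1] in (r0, r1)}
    seen = set(border)
    queue = deque(border)
    while queue:
        q, r = queue.popleft()
        for dq, dr in HEX_DIRS:
            n = (q + dq, r + dr)
            if n in outside and n not in seen:
                seen.add(n)
                queue.append(n)
    return outside - seen  # complement cells that never reach infinity
```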
Figure 3: Reduction from HCH (graph in blue) to Rikudo without holes. Vertex
gadgets are hatched in green, edge gadgets are dotted in blue, holes are
filled in grey (with the absent vertex and edge gadgets slightly marked) and
the Hamiltonian paths of holes are highlighted in purple. In this example,
$n=272$.
The set of adjacencies $p$ is built from a Hamiltonian path on each hole of
$\tau^{\prime}$, which requires some additional definitions (see Figure 4).
The holes of $\tau^{\prime}$ are made of three types of cells: v-cells which
may have been part of a vertex gadget, e-cells which may have been part of an
edge gadget, and h-cells the remaining cells. For a letter
$\text{x}\in\\{\text{v},\text{h}\\}$ we call x-component a connected component
of x-cells in $G_{\tau^{\prime\prime}}$. Note that each v-component has 6
cells, and each h-component has 13 cells. Given a hole $\tau^{\prime\prime}$
of $\tau^{\prime}$, we define the graph $H_{\tau^{\prime\prime}}$ whose
vertices are the v- and h-components of $G_{\tau^{\prime\prime}}$, and such
that two components are adjacent when two of their cells are. Remark that
$H_{\tau^{\prime\prime}}$ is a bipartite connected graph, since we have not
considered e-cells. Now we attach each e-cell to a v-component vertex of
$H_{\tau^{\prime\prime}}$ adjacent to it, according to some map $\vartheta$
from the e-cells to the v-components of $G_{\tau^{\prime\prime}}$. For any
vertex $v$ of $H$ and the associated vertex gadget, we call each of the
following couples of cells an access: $(c_{1}(v),c_{6}(v))$,
$(c_{2}(v),c_{4}(v))$, $(c_{3}(v),c_{5}(v))$. We associate to
$\tau^{\prime\prime}$ a set of adjacent accesses, which are the vertex
gadget accesses whose cells are adjacent to cells of $\tau^{\prime\prime}$.
Among these, the canonical access of $\tau^{\prime\prime}$ is the maximal one
according to some fixed direction (e.g. the leftmost in our figures). Accesses
will be used to plug partial paths together, and we extend their definition to
the v-components of $\tau^{\prime\prime}$. Finally, given
$\tau^{\prime\prime}$ and an adjacent access $\alpha$, let
$T_{\tau^{\prime\prime}}^{\alpha}$ be a rooted spanning tree of
$H_{\tau^{\prime\prime}}$, whose root is the h-component of
$\tau^{\prime\prime}$ adjacent to $\alpha$ (observe that there is a unique
such h-component). Given a vertex $t$ of $H_{\tau^{\prime\prime}}$ which is a
h-component, let $\tau^{\prime\prime}[t]$ be the subset of
$\tau^{\prime\prime}$ corresponding to the subtree of
$T_{\tau^{\prime\prime}}^{\alpha}$ rooted at $t$. We build the Hamiltonian
path $P_{\tau^{\prime\prime}}^{\alpha}$ along
$T_{\tau^{\prime\prime}}^{\alpha}$, from one of the access cells of $\alpha$ to
the other, recursively as follows.
* $\bullet$
For the root $r$ of $T_{\tau^{\prime\prime}}^{\alpha}$, which is a
h-component, build the h-basepath depicted on the left of Figure 5, rotated in
order to fit the access $\alpha$.
* $\bullet$
For each child $s$ of $r$, which is a v-component, build the v-basepath
depicted on the right of Figure 5, including its associated e-cells $t$ such
that $\vartheta(t)=s$.
* $\bullet$
For each child $t$ of $s$, consider the access $\beta$ of $s$ to which $t$ is
adjacent, and build recursively a Hamiltonian path
$P_{\tau^{\prime\prime}[t]}^{\beta}$ along the subtree
$T_{\tau^{\prime\prime}[t]}^{\beta}$.
* $\bullet$
Finally, plug the pieces together at accesses, by operating the flips
illustrated on Figure 6. Remark that by the construction of h-basepath and
v-basepath, these flips are always possible.
Now consider, for each hole $\tau^{\prime\prime}$ of $\tau^{\prime}$, the path
$P_{\tau^{\prime\prime}}^{\alpha}$ with $\alpha$ the canonical access of
$\tau^{\prime\prime}$, and add the adjacencies of this path to $p$.
Figure 4: Example construction of $p$ from a Hamiltonian path for one hole
$\tau^{\prime\prime}$. The function $\vartheta$ is illustrated on each e-cell
$t$ with an arrow pointing towards the v-component $\vartheta(t)$, v- e- and
h-cells have slightly marked patterns, v- and h-components hold an orange node
of $H_{\tau^{\prime\prime}}$, edges of a spanning tree
$T_{\tau^{\prime\prime}}^{\alpha}$ are drawn in orange with the canonical
access $\alpha$ hatched in red. Top: the h- and v-basepaths before the flips.
Middle: the Hamiltonian path $P_{\tau^{\prime\prime}}^{\alpha}$ obtained after
the flips. Bottom: the corresponding adjacencies in $p$.
Figure 5: Left: in purple the h-basepath according to the (canonical) access
hatched in red, other accesses are hatched in pink. Right: in purple the
v-basepaths according to the number of e-cells attached to the v-component,
the three accesses are hatched in three different colors.
Figure 6: Flips operated in order to connect h-basepath and v-basepath (on a
downward triangle without e-cell in this example), the h-component cells are
filled in grey and the v-component access is hatched in cyan. The location of
the flip is highlighted. Left: from a h-basepath parent to a v-basepath child.
Right: from a v-basepath parent to a h-basepath child.
For $m$, choose an access $(t,t^{\prime})$ of some vertex gadget corresponding
to a vertex of $H$ of degree two, such that $t,t^{\prime}$ do not appear in
$p$, and set $m(t)=1$, $m(t^{\prime})=n$ with $n$ the total number of cells in
$\tau$.
If a cell belongs to two elements of $p$, i.e. has two enforced adjacencies,
then any Hamiltonian path from $1$ to $n$ crosses this cell along these
adjacencies. Thus, as a consequence of the definition of $p$, the game
$(\tau,m,p)$ is equivalent (in terms of decision) to the game
$(\tau^{\prime},m^{\prime},p^{\prime})$, where $m^{\prime}$ is the same as $m$
except that $m^{\prime}(t^{\prime})=n^{\prime}$ with $n^{\prime}$ the total number of
cells in $\tau^{\prime}$, and where $p^{\prime}$ are the couples of canonical
access cells in vertex gadgets for each hole of $\tau^{\prime}$ (see Figure
7).
Figure 7: Game $(\tau^{\prime},m^{\prime},p^{\prime})$ equivalent to
$(\tau,m,p)$ from Figure 3.
Figure 8: A solution to the game $(\tau^{\prime},m^{\prime},p^{\prime})$ with
only the Hamiltonian path from $1$ to $n$ depicted in purple, and the
corresponding Hamiltonian cycle on $G$ in red.
Now let us argue that $H$ has a Hamiltonian cycle if and only if the game
$(\tau^{\prime},m^{\prime},p^{\prime})$ has a solution. If $H$ has a
Hamiltonian cycle, then it is straightforward to construct a solution to the
game $(\tau^{\prime},m^{\prime},p^{\prime})$ respecting the adjacencies given
by $p^{\prime}$, as shown on Figure 8, by starting from the vertex gadget hosting
numbers $1,n$ and following the Hamiltonian cycle on $H$: for each vertex
gadget, construct a Hamiltonian path from one edge gadget to the other, and
include the third edge gadget in the path if it has not been included so far
(recall that the vertices of $H$ have degree two or three); the vertex gadget
hosting numbers $1,n$ has a special pattern (it always corresponds to a vertex
of $H$ of degree two). Note that the use of canonical accesses ensures that
each vertex gadget has at most one access in $p^{\prime}$. All the
possibilities are presented on Figure 9.
Figure 9: Hamiltonian path for all the possibilities of vertex and edge
gadgets combination (up to rotation), in order to build a solution to the
instance $(\tau^{\prime},m^{\prime},p^{\prime})$ from a Hamiltonian cycle on
$H$.
If the game $(\tau^{\prime},m^{\prime},p^{\prime})$ has a solution $s$, then
the order of vertex gadgets induced by ${\leftrightsquigarrow_{s}}$ gives a
Hamiltonian path in $H$:
* $\bullet$
it is only possible to go from one vertex gadget to another vertex gadget by
going over an edge gadget, hence adjacencies of $H$ are respected,
* $\bullet$
an edge gadget consists of only one cell, thus we cannot use an edge of
$H$ twice, and the degree of every vertex in $H$ is two or three; it follows
that we cannot visit a vertex of $H$ twice.
Since a solution to the game includes all cells (i.e. all vertex gadgets) and
goes back to the starting vertex gadget thanks to the positions of $1$ and $n$
given by $m^{\prime}$, we conclude that the corresponding cycle on $H$ is
Hamiltonian. ∎
###### Remark 1.
It is a result of [14] that a h-component corresponds to the only finite,
2-connected, linearly convex subgraph of the Archimedean triangular tiling
(the graph obtained from the hexagonal grid cells with their adjacencies)
which does not have a Hamiltonian cycle; but in our construction we only need
Hamiltonian paths on h-components.
###### Remark 2.
With almost trivial adaptations of the proof of Theorem 2, one can also obtain
the ${\mathsf{NP}}$-hardness when the shape of the game is a hexagon (as on
the left of Figure 1), and with one missing cell at the center, such as in
“real” Rikudo games.
## 4 Rikudo with already placed numbers
The hardness proofs presented in Section 3 reduce from Hamiltonian cycle
problems, and we (almost) do not use the already placed numbers given by $m$
(intuitively, because it would have required to have some prior knowledge on
the path). It is therefore natural to ask whether Rikudo games may become
easier to solve when it is imposed that some numbers are initially placed by
$m$. Given that the reduction leading to Theorem 2 lets $m$ have domain
${\mathtt{dom}}(m)=\\{1,n\\}$, we straightforwardly have that the following
variant is still ${\mathsf{NP}}$-hard.
$\alpha$-Rikudo without holes (for some real constant $0<\alpha\leq 1$) Input:
a game $(\tau,m,p)$ such that $|{\mathtt{dom}}(m)|\geq\alpha n$ with
$n=|\tau|$. Question: does it admit a solution?
Indeed, one can construct a game for some $n^{\prime}$ and then do some
padding, by placing with $m$ all the numbers from $n^{\prime}+1$ until
$n=\lceil\frac{n^{\prime}}{1-\alpha}\rceil$, so that at least $\alpha n$
numbers are placed.
The following question is then of particular interest.
1-over-$k$-Rikudo (for some integer constant $k\geq 2$) Input: a game
$(\tau,m,p)$ such that ${\mathtt{dom}}(m)=\\{xk+1\mid
x\in\mathbb{N}\\}\cap[n]$ with $n=|\tau|$. Question: does it admit a solution?
In words, it imposes that $m$ places the numbers $1,k+1,2k+1,3k+1,\dots$ i.e.
one number every $k$ numbers. The total fraction
$\frac{|{\mathtt{dom}}(m)|}{n}$ approaches $\frac{1}{k}$, but contrary to
$\alpha$-Rikudo the repartition of numbers is constrained. In the case $k=2$
all odd numbers are already placed and the problem turns out to be in
${\mathsf{P}}$ (Theorem 3), whereas we prove that it is ${\mathsf{NP}}$-hard
for $k=3$ (Theorem 4).
###### Theorem 3.
1-over-2-Rikudo is in ${\mathsf{P}}$.
###### Proof.
The holes do not matter in this simple reduction to 2-SAT. Given a game
$(\tau,m,p)$ with all odd numbers placed, it is an easy observation that for
any pair of already placed integers $x,x+2$ there are at most two possible
positions for the number $x+1$. The reduction goes as follows: start by
placing all (even) numbers having a unique possible position until no such
number exists (it corresponds to performing unit propagation). Then for all
remaining (even) numbers $x$, create two variables $v_{x}^{1}$ and $v_{x}^{2}$
corresponding to the two possible positions. Construct a formula having the
following clauses:
* $\bullet$
$v_{x}^{1}\vee v_{x}^{2}$ for each remaining (even) number $x$,
* $\bullet$
$\neg v_{x}^{i}\vee\neg v_{y}^{j}$ for each pair of variables corresponding to
the same position.
The variable creation ensures that the numbers form a path, the first set of
clauses ensures that all remaining numbers are placed, and the second set of
clauses ensures that no two numbers are placed at the same position. ∎
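The clause construction above can be sketched as follows; the unit-propagation step and the actual polynomial-time 2-SAT solving (e.g. via strongly connected components of the implication graph) are omitted, and all names are ours:

```python
def two_sat_clauses(candidates):
    """candidates: dict mapping each remaining even number x to its two
    possible positions [p1, p2] (after unit propagation).  Returns 2-SAT
    clauses over literals ((x, i), polarity), where variable (x, i) means
    'number x goes to its i-th candidate position'."""
    clauses = []
    # each remaining number is placed at one of its two candidate positions
    for x in candidates:
        clauses.append([((x, 0), True), ((x, 1), True)])
    # no two numbers share a position
    occupants = {}
    for x, (p1, p2) in candidates.items():
        occupants.setdefault(p1, []).append((x, 0))
        occupants.setdefault(p2, []).append((x, 1))
    for lits in occupants.values():
        for a in range(len(lits)):
            for b in range(a + 1, len(lits)):
                clauses.append([(lits[a], False), (lits[b], False)])
    return clauses
```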
###### Theorem 4.
1-over-3-Rikudo is ${\mathsf{NP}}$-hard.
###### Proof.
We present a reduction from the problem Planar 1-in-3-SAT (clauses of size
three, satisfied by exactly one literal), which is proven to be
${\mathsf{NP}}$-hard in [3]. A formula $\phi$ in conjunctive normal form (CNF)
is planar when so is the bipartite graph $G_{\phi}$ having one vertex for each
variable of $\phi$, one vertex for each clause of $\phi$, and an edge between
a variable $x_{i}$ and a clause $c_{j}$ whenever $x_{i}$ appears in $c_{j}$.
From a planar 3-CNF formula $\phi$ (checking planarity can be done in linear
time [7]), we slightly modify the graph $G_{\phi}$ while preserving planarity:
each variable vertex $x_{i}$ is replaced by a binary tree (called variable
tree) having as many leaves as occurrences of $x_{i}$ in $\phi$, and each of
these leaves is connected to one clause in which $x_{i}$ appears. We also add
a negation vertex between variable $x_{i}$ and clause $c_{j}$ if $x_{i}$
appears as $\neg x_{i}$ in $c_{j}$. We consider a planar embedding of this new
graph $G_{\phi}$ into the graph underlying a flat hexagonal grid (i.e.
vertices of $G_{\phi}$ are cells of the grid, and edges follow cell to cell
adjacencies). Such an embedding can easily be computed in polynomial time (see
[9, 18] for reference, but naive greedy methods are enough for our purpose).
Finally, for technical reasons to be explained later in this proof, we scale
up the obtained graph on the hexagonal grid by a factor of two. See Figure 10
for an example.
Figure 10: Example embedding of the planar formula $\phi=(x_{1}\vee\neg
x_{2}\vee x_{3})\wedge(x_{2}\vee\neg x_{3}\vee x_{4})$ and its graph
$G_{\phi}$ into a flat hexagonal grid. Variables in blue (choice-cells),
binary trees in purple (duplicate-cells), negations in red (negation-cells),
clauses in orange (clause-cells), edges in green (wire-cells).
A Rikudo game is obtained by replacing each cell of the embedding of
$G_{\phi}$ by a macrocell for the game, depending on the content of this cell.
We distinguish five types of cells, and five corresponding types of
macrocells. The game macrocells are formally defined on a subset of cells
forming a flat hexagon of side length $9$, which are assembled side by side to
form a Rikudo game. On the sides, one row of cells from adjacent macrocells is
merged. Illustrations are presented on Figure 11.
###### Remark.
When we refer to a solution of a macrocell, we mean to place all the numbers
between the endpoints of its subpath(s) of numbers (macrocells have one, two
or three subpaths). Furthermore, a solution must place numbers on all game
cells within the macrocell (i.e. except possibly on its sides; precisions are
given in the next remark on input bits). Indeed, if this is not the case then
cells left empty within a macrocell will be left empty in the solution
obtained by the assembly of macrocells’ solutions.
* $\bullet$
The root of each variable tree is called a choice-cell, its macrocell admits
two kinds of solutions corresponding to true and false (one bit). In each
solution one of the two cells on the top side hosts a number, and the other
one does not: in the true solutions the left cell contains a number and the
right cell does not, whereas in the false solutions the left cell does not
contain a number and the right cell does. Remark that this is the convention
for an output bit, for an input bit it is reversed.
###### Remark.
More generally, the input and output bits of a macrocell are defined as
follows. Let us call number-cell a cell where a number is already placed (i.e.
in ${\mathtt{dom}}(m)$), and game-cell a cell where the player will place a
number (i.e. in $\tau\setminus{\mathtt{dom}}(m)$).
Consider the row of cells on an output side. In the clockwise order, it has:
one number-cell, one game-cell $A$, one number-cell, and one game-cell $B$. A
true (respectively false) output bit corresponds to cell $A$ (respectively
$B$) containing a number from this macrocell, whereas cell $B$ (respectively
$A$) does not.
Symmetrically, consider the row of cells on an input side. In the clockwise
order, it has: one game-cell $B$, one number-cell, one game-cell $A$, and one
number-cell. A true (respectively false) input bit corresponds to cell $A$
(respectively $B$) containing a number from the adjacent macrocell, whereas
cell $B$ (respectively $A$) does not. As a consequence, a solution to this
macrocell must place a number in cell $B$ (respectively $A$), but not in cell
$A$ (respectively $B$).
Also observe that the merge of an input side with an output side matches the
position of game-cells and number-cells. The assembly of macrocells will be
made precise just after the presentation of all macrocells.
* $\bullet$
An internal node of a variable tree is called a duplicate-cell, given an input
bit on the bottom side its macrocell admits only solutions which copy this bit
to the top left and top right sides (it has one input and two outputs). In a
solution each variable tree therefore has the same bit on all its leaves.
* $\bullet$
A negation vertex is called a negation-cell, given an input bit on the bottom
side its macrocell admits only solutions which flip this bit to the top side
(it has one input and one output).
* $\bullet$
A clause vertex is called a clause-cell, it has a solution if and only if
exactly one of the three input bits on its sides is true (it has three
inputs).
* $\bullet$
All other non-empty cells are called wire-cells, given an input bit on one
side its macrocell admits only solutions which copy this bit to the other side
(it has one input and one output).
These claims can easily be verified by hand on the figures from Appendix B,
for the interested reader. Macrocells can be rotated (but not reflected, since
this corrupts the subpaths merging). Note that a bit of information reads
differently as an input and as an output of a macrocell, and that the subpath
endpoints are placed accordingly so that an output merges with an input. Macrocells
are directed as the graph $G_{\phi}$ from Figure 10.
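Abstracting away the geometry, the bit logic the macrocells implement is exactly the 1-in-3 evaluation of $\phi$: choice-macrocells pick an assignment, duplicate- and wire-macrocells copy bits, negation-macrocells flip them, and each clause-macrocell admits a solution iff exactly one incoming bit is true. A sketch of this abstract check (the function name and literal encoding are ours):

```python
def gadgets_admit_solution(clauses, assignment):
    """Abstract macrocell logic: clauses is a list of 3-tuples of
    literals, a literal being (variable, negated).  Returns True iff
    every clause-macrocell receives exactly one true input bit."""
    for clause in clauses:
        true_bits = sum(
            assignment[var] != negated  # negation-macrocell flips the wire bit
            for var, negated in clause
        )
        if true_bits != 1:  # clause-macrocell: exactly one true input
            return False
    return True
```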
Figure 11: Macrocell types, with the rows merged
with the adjacent macrocell hatched in red (input/output). Distinct subpaths of
numbers are marked with $x$, $x^{\prime}$ and $x^{\prime\prime}$ starting at
$0$, $0^{\prime}$ and $0^{\prime\prime}$ respectively. Choice-macrocell
(output on the top), duplicate-macrocell (one input on the bottom and two
outputs), negation-macrocell (input on the bottom and output on top), clause-
macrocell (three inputs), straight wire-macrocell (input on the bottom and
output on top), and four bending wire-macrocells assembled (from input on left
to output on right).
Let us now make precise how macrocells are assembled, so that the
obtained game may have a solution forming only one long path from $1$ to $n$
(made by connecting all macrocells’ subpaths together). One bit of information
is transported by two subpaths of numbers. The game path will simply follow
the Eulerian path on the doubling tree (the doubling tree consists in
replacing each edge of the tree by two copies of itself, hence the Eulerian
path corresponds to walking along the contour of the spanning tree) associated
to a spanning tree of $G_{\phi}$. The numbers already placed on two adjacent
macrocells are identified at the merged row of cells on the common side: we
can shift the numbers on each subpath of one macrocell to match the other, and
reverse some subpath of numbers to be coherent relative to the
increasing/decreasing order (in the walk on the spanning tree the orders of
the subpaths of a bit are typically opposite). One last macrocell is required
for wire-cells corresponding to edges not part of the spanning tree: a cutter-
macrocell replaces one of the straight wire-cells on these edges (hence the
factor-two scaling, which creates a straight wire-cell on every edge). See the left
of Figure 13, the cutter-macrocell copies the bit from the bottom side to the
top side, but the two subpaths are shunted differently. Finally we choose some
start (number $1$) and stop (number $n$) on a remaining straight wire-cell, as
presented in the start-macrocell on the right of Figure 12. Figure 13 presents
a spanning tree and the way subpaths of already placed numbers are assembled.
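The Eulerian path on the doubling tree is simply the contour (depth-first) traversal of the spanning tree, returning to each node after visiting a child. A sketch (names ours):

```python
def contour_walk(tree, root):
    """Walk along the contour of a rooted spanning tree, i.e. the
    Eulerian path of the doubling tree (every edge doubled): each edge
    is traversed once in each direction."""
    order = []

    def visit(u, parent):
        order.append(u)
        for v in tree[u]:
            if v != parent:
                visit(v, u)
                order.append(u)  # come back up along the doubled edge

    visit(root, None)
    return order
```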
Figure 12: Cutter-macrocell (left, input on the bottom and output on top) and
start-macrocell (right, input on the bottom and output on top, the start is
$1$ and stop is $12^{\prime}$, the three subpaths may be shifted
independently), replacing straight wire-cells.
Figure 13: Spanning tree and the way subpaths of already placed numbers are
assembled, for the graph $G_{\phi}$ from Figure 10. Blue arrows follow the
increasing order of numbers, with the start-macrocell and cutter-macrocells
(there is only one in this example) highlighted in red. Starting from number
$1$ on the start-macrocell, we follow the blue arrows to assign subsequent
number sequences to the macrocells’ subpaths, leading to a unique path from
$1$ to $n$ (if all macrocells admit a solution).
The construction is finished. If $\phi$ has a solution then we choose a
satisfying assignment for the bits of the choice-macrocells, which are copied
by duplicate-macrocells and transported by wire-macrocells to the clause-
macrocells, each of these latter having a solution (and the whole play forms a
path from $1$ to $n$ as detailed in the previous paragraph). If $\phi$ has no
solution then for any assignment for the bits of the choice-macrocells, these
bits are copied by duplicate-macrocells and transported by wire-macrocells to
the clause-macrocells, and at least one of the clause-macrocells does not have
exactly one true among the three bits on its input sides, hence the game has
no solution. ∎
We strongly believe that 1-over-$k$-Rikudo is also ${\mathsf{NP}}$-hard for
any $k\geq 4$ based on analogous constructions. Also remark that the double-
wire logics from the proof of Theorem 4 may allow the simulation of arbitrary
computation in Rikudo games (we have negation-macrocells to perform cross-
over, and clause-macrocells use some disjunction and conjunction of bits). The
game has some more variants (e.g. with indications in some cells of the parity
of the number to be placed), and it would be interesting to study how these
new features may or may not embed complexity, i.e. in some sense whether more
clues make the game easier or not.
## Acknowledgments
The authors are thankful to Papicri for bringing those games almost daily
during France COVID-19 first lockdown.
## References
* [1] Rikudo: dossier de presse. http://www.rikudo.fr/v2.0/dossier-presse/. Accessed December 1st, 2020.
* [2] J.-F. Baffier, M.-K. Chiu, Y. Diez, M. Korman, V. Mitsou, A. van Renssen, M. Roeloffzen, and Y. Uno. Hanabi is NP-complete, Even for Cheaters who Look at Their Cards. In Proceedings of FUN’2016, volume 49 of LIPIcs, pages 4:1–4:17, 2016.
* [3] M. E. Dyer and A. M. Frieze. Planar 3DM is NP-complete. Journal of Algorithms, 7(2):174–184, 1986.
* [4] S. Even and R. E. Tarjan. A combinatorial problem which is complete in polynomial space. In Proceedings of STOC’1975, pages 66–71, 1975.
* [5] A. S. Fraenkel, M. R. Garey, D. S. Johnson, T. Schaefer, and Y. Yesha. The complexity of checkers on an N times N board. In Proceedings of SFCS’1978, pages 55–64, 1978.
* [6] L. Gualà, S. Leucci, and E. Natale. Bejeweled, Candy Crush and other match-three games are (NP-)hard. In Proceedings of IEEE CIG’2014, pages 1–8, 2014.
* [7] J. Hopcroft and R. Tarjan. Efficient planarity testing. Journal of the ACM, 21(4):549–568, 1974.
* [8] K. Islam, H. Meijer, Y. Núñez, D. Rappaport, and H. Xiao. Hamilton circuits in hexagonal grid graphs. In Proceedings of CCCG’2007, pages 85–88, 2007.
* [9] G. Kant. Hexagonal grid drawings. In Proceedings of WG’92, volume 657 of LNCS, pages 263–276, 1993.
* [10] R. Kaye. Minesweeper is NP-complete. The Mathematical Intelligencer, 22(2):9–15, 2000.
* [11] M. Lampis and V. Mitsou. The Computational Complexity of the Game of Set and Its Theoretical Applications. In Proceedings of LATIN’2014, volume 8392 of LNCS, pages 24–34, 2014.
* [12] D. Lichtenstein and M. Sipser. GO Is Polynomial-Space Hard. Journal of the ACM, 27(2):393–401, 1980.
* [13] V.-H. Nguyen, K. Perrot, and M. Vallet. NP-completeness of the game Kingdomino. Theoretical Computer Science, 822:23–35, 2020.
* [14] J. R. Reay and T. Zamfirescu. Hamiltonian Cycles in T-Graphs. Discrete & Computational Geometry, 24:497–502, 2000.
* [15] S. Reisch. Gobang ist PSPACE-vollständig. Acta Informatica, 13:59–66, 1980.
* [16] S. Reisch. Hex ist PSPACE-vollständig. Acta Informatica, 15:167–191, 1981.
* [17] A. Scott, U. Stege, and I. van Rooij. Minesweeper May Not Be NP-Complete but Is Hard Nonetheless. The Mathematical Intelligencer, 33(4):5–17, 2011.
* [18] R. Tamassia. On embedding a graph in the grid with the minimum number of bends. SIAM Journal on Computing, 16:421–444, 1987.
## Appendix A Solutions
Figure 14: Solutions to the games from Figure 1.
## Appendix B Macrocells from the proof of Theorem 4
Precisions on macrocell solutions, and on the input and output bits, are
given in the two remarks within the proof of Theorem 4.
Figure 15: Choice-macrocells have two kinds of solutions when placing the
numbers from $0$ to $36$. On the top side, if the left cell hosts a number and
the right cell does not then the output bit is true, and if the left cell does
not host a number and the right cell does then the output bit is false.
Figure 16: Given an input bit on the bottom side, duplicate-macrocells have
only solutions for placing the numbers from $0$ to $24$, from $0^{\prime}$ to
$33^{\prime}$ and from $0^{\prime\prime}$ to $42^{\prime\prime}$, which copy
the input bit as output bits to the two other sides. The true and false input
bits consists in a cell on the bottom side which already hosts a number from
the adjacent macrocell.
Figure 17: Given an input bit on the bottom side, negation-macrocells have
only solutions for placing the numbers from $0$ to $33$ and from $0^{\prime}$
to $27^{\prime}$, which flip this bit as an output bit to the top side. The
true and false input bits consist of a cell on the bottom side which already
hosts a number from the adjacent macrocell.
Figure 18: Given three input bits on its sides, the clause-macrocell has a
solution if and only if exactly one bit is true. The six pictures cover all
the possibilities: when a bit is not indicated it means that regardless of its
value the macrocell has no solution (because it already has the two other
input bits set to true).
Figure 19: Given an input bit on the bottom side, each wire-macrocell has
only solutions which copy this bit as an output bit to the other side. The
true and false input bits consist of a cell on the bottom side which already
hosts a number from the adjacent macrocell.
Affiliations:
1. Centre for Theoretical Atomic, Molecular, and Optical Physics, Queen’s University Belfast, Belfast BT7 1NN, United Kingdom. Email: <EMAIL_ADDRESS>
2. Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy, Max-Born-Straße 2A, Berlin 12489, Germany
3. Department of Physics, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom
4. PASTEUR, Département de chimie, Ecole Normale Supérieure, PSL University, Sorbonne Université, CNRS, 75005 Paris, France
5. Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain
6. Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom
# Dialogue on analytical and ab initio methods in attoscience
Gregory S. J. Armstrong (1), Margarita A. Khokhlova (2,3), Marie Labeye (4),
Andrew S. Maxwell (5,6), Emilio Pisanty (2,5), Marco Ruberti (3)
###### Abstract
The perceived dichotomy between analytical and _ab initio_ approaches to
theory in attosecond science is often seen as a source of tension and
misconceptions. This Topical Review compiles the discussions held during a
round-table panel at the ‘Quantum Battles in Attoscience’ CECAM virtual
workshop, to explore the sources of tension and attempt to dispel them. We
survey the main theoretical tools of attoscience—covering both analytical and
numerical methods—and we examine common misconceptions, including the
relationship between _ab initio_ approaches and the broader numerical methods,
as well as the role of numerical methods in ‘analytical’ techniques. We also
evaluate the relative advantages and disadvantages of analytical as well as
numerical and _ab initio_ methods, together with their role in scientific
discovery, told through the case studies of two representative attosecond
processes: non-sequential double ionisation and resonant high-harmonic
generation. We present the discussion in the form of a dialogue between two
hypothetical theoreticians, a numericist and an analytician, who introduce and
challenge the broader opinions expressed in the attoscience community.
Accepted Manuscript for Eur. Phys. J. D 75, 209 (2021), available as
arXiv:2101.09335 under CC BY.
###### pacs:
31.15.A- (Ab initio calculations: electronic structure of atoms and molecules);
32.80.Wr (Multiphoton ionization and excitation); 42.50.Hz (Strong-field
excitation of optical transitions in quantum systems)
## Introduction
Modern developments in laser technologies have kick-started the attosecond
revolution, which formed the field of attoscience, dealing with dynamics on
the attosecond (${10}^{-18}\text{\,}\mathrm{s}$) timescale brabec2000 ;
krausz2009 ; calegari2016 . Attosecond science was born with the study of
above-threshold ionisation (ATI) and high-order harmonic generation (HHG)
driven by strong laser pulses. As it has matured over the past three decades,
attoscience has given us access to phenomena which were previously thought to
be inaccessible—including the motion of valence electrons in atoms
Goulielmakis2010Aug , charge oscillations in molecules Calegari2014 , as well
as the direct observation of the electric-field oscillations of a laser pulse
Goulielmakis2004direct —and it has also spurred advances in ultrafast pulse
generation which have opened a completely new window into the dynamics of
matter.
The meteoric progress of attoscience has been fuelled, on the one hand, by
formidable experimental efforts and, on the other, by a matching leap in our
theoretical capabilities. These theoretical advances
have come in a wide variety, forming two opposing families of analytical and
numerical approaches. While these two families generally work together, the
dichotomy between analytical and numerical methods is sometimes perceived as a
source of tension within the attoscience community.
In this paper we present an exploration of this dichotomy, which collects the
arguments presented in the panel discussion ‘Quantum Battle 3 – Numerical vs
Analytical Methods’ held during the online conference ‘Quantum Battles in
Attoscience’ battle . Our main purpose is to resolve the tension caused by
this dichotomy, by identifying the critical tension points, developing the
different viewpoints involved, and finding a common ground between them.
This process forms a natural dialogue between the analytical and numerical
perspectives. We delegate this dialogue to two hypothetical ‘combatants’—
* Analycia Hi, I’m Analycia Formuloff, and I am an attoscience theorist working with analytical approaches.
* Numerio Hello, my name is Numerio Codeman, and I’m a computational scientist working on _ab initio_ methods.
—who will voice the different views expressed during the panel discussion.
We follow the dialogue between Analycia and Numerio through three main
questions. First, in Section 1, we explore the scope and nature of analytical
and numerical methods, including the interchangeability of the terms
‘numerical’ and ‘ _ab initio_ ’. We then analyse, in Section 2, the relative
advantages and disadvantages of the two approaches, using non-sequential
double ionisation (NSDI) as a case study. Finally, in Section 3, we examine
their roles in scientific discovery, via the case study of resonant HHG. In
addition, in Section 4, we present some extra discussion points, as well as
our combatants’ responses to the questions raised by audience members, and a
summary of the responses to several polls taken during the live session.
## 1 ‘ _Ab initio_ ’ and analytical methods
A constructive discussion is always based on a good knowledge of the subject.
To this end, in this section we tackle the subtleties in the definitions of ‘
_ab initio_ ’, ‘numerical’ and ‘analytical’ methods: we detail their
differences, and we present a rough classification of the various theoretical
methods used in attosecond science. We first concentrate on Numerio’s
speciality, _ab initio_ methods, then we move to Analycia’s forte, analytical
theories. For each combatant, we first introduce their theoretical approach,
and then list the main methods in the corresponding toolset. After these
presentations, Numerio and Analycia discuss the friction points they have with
each other’s methods.
### 1.1 _Ab initio_ and numerical methods
In its dictionary sense, _ab initio_ is Latin for ‘from the beginning’. Thus,
a theoretical method can be defined to be _ab initio_ when it tackles the
description of a certain physical process starting from first principles,
i.e., using the most fundamental laws of nature that—according to our best
understanding—govern the physics of the phenomena that we aim to describe.
Within an _ab initio_ framework, the inputs of the theoretical calculation
should be limited to only well-known physical constants, with any interactions
kept as fundamental as possible. This means that no additional simplifications
or assumptions may be made on top of what we believe are the established laws
of nature. In other words, the specific aspects of the physical process of
interest need to be approached without using specially-tailored models.
_We now bring our combatants to the stage, to discuss the consequences of this
definition._
* Analycia This is an extremely stringent definition, which will substantially limit the number of methods that can be classified in the _ab initio_ category. But, more importantly, it just delays the real question: what does ‘fundamental’ mean in this context?
Numerio The answer to this question is, in essence, a choice of ‘reference
frame’, within theory-space, which will frame our work. This choice is tightly
connected to the physical regime that we want to describe. We know that
attoscience, as part of atomic, molecular and optical physics, is ultimately
grounded on the Standard Model of elementary interactions in particle physics,
which gives—in principle—the ‘true’ fundamental laws. However, much of this
framework is largely irrelevant at the energies that concern us. Instead, we
are only interested in the quantum mechanics of electrons and atomic nuclei
interacting with each other and with light, and this gives us the freedom to
restrict ourselves to quantum electrodynamics (QED), or to its ‘friendlier’
face as the theory of light-matter interaction Cohen-Tannoudji1997 .
Analycia What does this mean, precisely? If QED is the right framework, that
means we must retain a fully relativistic approach as well as a full
quantisation of the electromagnetic field.
Numerio For most problems in attoscience, this would be overkill, as
relativistic effects are rarely relevant. Instead, it is generally acceptable
to work in the context of non-relativistic quantum mechanics, and to introduce
relativistic terms into the Hamiltonian at the required level of
approximation. These are the basic laws responsible for ‘a large part of
physics and the whole of chemistry’, as recognised by Dirac as early as 1929
dirac1929 .
Analycia I can see how this is appropriate, so long as spin-orbit and inner-
core effects are correctly accounted for. However, what about field
quantisation?
Numerio We normally deal with strong-field settings where laser pulses are in
coherent states comprising many trillions of photons, which means that a
classical description for electromagnetic radiation is suitable.
Analycia That typically works well, yes, but it is also important to keep in
mind that it can blind you to deep questions that lie outside of that
framework Lewenstein2020quantum . In any case, though: as ‘fundamental’, would
you be satisfied with a single-electron solution of the Schrödinger equation?
Numerio No, this would not be appropriate—cutting down to a single electron is
generally going too far. While this can be very convenient for numerical
reasons, restricting the dynamics to a single particle invariably requires
adjusting the interactions to account for the effect of the other electrons in
the system, via the introduction of a model potential. This approximation can
be validated using a number of techniques which can make it very solid, but it
always entails a semi-empirical step and, as such, it rules out the ‘ _ab
initio_ ’ label in its strict sense.
Analycia That leaves a many-body Hamiltonian of formidable complexity.
Numerio It does! Let me show you how it can be handled.
##### The ‘ _ab initio_ ’ toolset
The complexity of simulating the time-dependent Schrödinger equation (TDSE)
with the multi-electron Hamiltonian of atomic and molecular systems can be
tamed using a wide variety of approaches. Most of these are inherited from the
field of quantum chemistry, and they differ from each other in the level of
approximation they employ and in the ranges of applicability where they are
accurate.
However, every approach in this space must face a challenging trade-off
between accuracy in capturing the relevant many-body effects, on the one hand,
and the computational cost that it requires, on the other. The key difficulty
here is the handling of electron-correlation effects, which are difficult to
manage at full rigour. Because of this, many methods adopt an ‘intermediate’
approach, which allows for lower computational expense, while at the same time
limiting the accuracy of the physical description.
Numerical methods thus form a hierarchy, schematised in Fig. 1, with rising
accuracy as more electron correlation effects are included:
* •
Single-Active-Electron (SAE) approaches are the simplest numerical approaches
to the TDSE scrinzi2014 , though they are only _ab initio_ for atomic
hydrogen, and require model potentials to mimic larger systems. Nevertheless,
they can be used effectively to tackle problems where electron correlation
effects do not play a role, and their relative simplicity has allowed multiple
user-ready software packages that offer this functionality in strong-field
settings Bauer2006 ; Tulsky2020 ; Patchkovskii2016 ; Fritzsche2019 .
* •
Density Functional Theory (DFT) allows an effective single-particle
description Ullrich2012 , widely considered as _ab initio_ , which still
includes electron correlation effects through the use of a suitable ‘exchange-
correlation functional’ Kohn1965 . Within attoscience, examples of approaches
which specifically target attosecond molecular ionisation dynamics include
real-time time-dependent (TD)-DFT DeGiovannini2013 ; Wopperer2017 ;
DeGiovannini2018 and time-dependent first-order perturbation theory static-
exchange DFT Calegari2014 ; Lara-Astasio2018 . More broadly, TD-DFT approaches
are robust enough that they appear in several user-ready software packages
Tancogne2020 ; Noda2019 ; Apra2020 ; Garcia2020Siesta suitable for
attoscience. (For a detailed examination of the status and shortcomings of
DFT and TD-DFT, see the talks of N. Maitra, A. Schild and K. Lopata at Ref.
BIRS-CMO-Workshop.)
* •
Non-equilibrium Green’s function theory also allows one to describe the many-
body problem from first principles by using effectively-single-particle
approaches Perfetto2018 ; Perfetto2018JPCL .
* •
Quantum-chemistry approaches go beyond the SAE approximation and DFT to
include, directly, the effects of electron correlation szabo1996 . The
starting point for this is generally the Hartree-Fock (HF) mean-field
approach, though this is rarely sufficient on its own. Because of this,
quantum-chemistry methods climb the ladder all the way to the full
Configuration Interaction (CI) limit, a complete description of electron
correlation (which is generally so computationally-intensive that it is out of
reach in practice).
Most of the standard approaches of quantum chemistry were developed to
describe bound states of molecular systems szabo1996 ; Schirmer2018 , and they
have also proven to be highly successful for modelling band structures in
solid-state systems Evarestov2012 . Nevertheless, they often require
significant extensions to work well in attoscience, particularly regarding how
the ionisation continuum is handled. Recent examples of these extensions
include _ab initio_ methods based on the algebraic diagrammatic construction
(ADC) Ruberti2014JCP ; Simpson2016 ; Averbukh2018 ; Ruberti2018PCCP ;
ruberti2018 and its restricted-correlation-space extension (RCS-ADC)
Ruberti2019 ; Ruberti2019PCCP ; Ruberti2021 ; schwickert2020 , multi-reference
configuration interaction (MRCI) majety2015hacc ; Majety2015PRL , and multi-
configuration time-dependent Hartree Beck2000 and Hartree-Fock sukiasyan2009
methods, as well as restricted-active-space self-consistent-field (RAS-SCF)
Marante2017 ; Marante2017PRA ; Klinker2018 approaches.
* •
Basis-set development is another crucial element of the numerical
implementation work for _ab initio_ methods in attoscience, since the physics
accessible to the method, as well as its computational cost, are often
determined by the basis set in use. Recent work on basis sets includes the
development of B-spline functions, both on their own Ruberti2014JCP ;
Simpson2016 ; Averbukh2018 ; Ruberti2018PCCP ; ruberti2018 ; Ruberti2019 ;
Ruberti2019PCCP ; Ruberti2021 ; schwickert2020 ; Toffoli2016 , and in hybrid
combinations with Gaussian-type orbitals (GTOs) Marante2017 ; Marante2017PRA ;
Klinker2018 , as well as finite-difference approaches clarke2018 ; brown2020 ;
Benda2020 , finite-element discrete-variable-representation functions
Miyagi2013 ; Miyagi2014 , grid-based methods sato2013 ; Greenman2010 , and
simple plane-waves NguyenDang2014 .
A more extensive (though still non-exhaustive) list of methods is shown in
Table 1. Here we focus on methods for the description of ultrafast electron
dynamics, happening on the attosecond time-scale. For numerical methods
tackling the (slower) nuclear motion in attoscience, we refer the reader to
Ref. Vrakking2018 .
Figure 1: Schematic representation of the hierarchy of methods to describe
electron correlation.
Fully _ab initio_ methods
---
Time-Dependent B-spline Algebraic Diagrammatic Construction | Ruberti2014JCP ; Simpson2016 ; Averbukh2018 ; Ruberti2018PCCP ; ruberti2018 | TD B-spline ADC
Time-Dependent B-spline Restricted-Correlation-Space ADC | Ruberti2019 ; Ruberti2019PCCP ; Ruberti2021 ; schwickert2020 | TD B-spline RCS-ADC
Multi-Reference Configuration Interaction: TRECX | trecx ; majety2015hacc ; Majety2015PRL | MRCI: TRECX
Restricted-Active-Space Self-Consistent-Field: XCHEM | Marante2017 ; Marante2017PRA ; Klinker2018 | RAS-SCF: XCHEM
Multi-Configuration Time-Dependent Hartree-Fock | Carette2013 ; Sawada2016 | MCTDHF
Time-Dependent Complete-Active-Space Self-Consistent-Field | sato2013 | TD CAS-SCF
Time-Dependent Restricted-Active-Space Self-Consistent-Field | Miyagi2013 ; Miyagi2014 | TD RAS-SCF
Time-Dependent Configuration Interaction Singles | Greenman2010 ; Toffoli2016 ; NguyenDang2014 | TD CIS
$R$-matrix with time dependence | clarke2018 ; brown2020 ; Benda2020 | RMT
Real-time Non-Equilibrium Green’s Function | Perfetto2018 ; Perfetto2018JPCL | Real-time NEGF
DFT/hybrid/non-_ab initio_ methods
Real-time Time-Dependent Density Functional Theory | DeGiovannini2013 ; Wopperer2017 ; DeGiovannini2018 | Real-time TDDFT
Time-Dependent First-Order Perturbation Theory static-exchange DFT | Calegari2014 ; Lara-Astasio2018 |
Single-Active Electron Time-Dependent Schrödinger Equation | Bauer2006 ; Tulsky2020 ; Patchkovskii2016 ; Fritzsche2019 | SAE TDSE
Table 1: Rough survey of numerical methods for attoscience and strong-field
physics.
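As an illustration of the simplest rung of this hierarchy, the following sketch propagates a one-dimensional single-active-electron TDSE with a standard split-operator FFT scheme. It is a minimal, self-contained example in atomic units; the soft-core potential, grid, and time step are illustrative choices, not taken from any of the packages cited above.

```python
import numpy as np

# Minimal 1D split-operator TDSE propagation (single active electron,
# atomic units). All parameters below are illustrative choices.
N, L = 1024, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)      # momentum grid for the FFT

V = -1.0/np.sqrt(x**2 + 2.0)             # soft-core Coulomb potential
dt = 0.05
expV = np.exp(-0.5j*V*dt)                # half-step potential propagator
expT = np.exp(-0.5j*k**2*dt)             # full-step kinetic propagator

# Initial state: a normalised Gaussian wave packet near the origin.
psi = np.exp(-x**2/4.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

for _ in range(200):                     # field-free propagation
    psi = expV*psi
    psi = np.fft.ifft(expT*np.fft.fft(psi))
    psi = expV*psi

norm = np.sum(np.abs(psi)**2)*dx         # unitary evolution preserves norm
```

Adding a laser coupling amounts to making `V` time-dependent (e.g. adding $x\,E(t)$ in the length gauge) and rebuilding `expV` at each step; the propagator itself stays unitary, which is the main practical appeal of the scheme.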
### 1.2 Analytical methods
The use of analytical methods to describe strong-field phenomena has a long-
storied pedigree dating back to the 1960s keldysh1965 , before laser sources
could reach sufficient intensities to drive quantum systems beyond the lowest
order of perturbation theory. As a general rule, analytical methods are
approaches for which the governing equations can be solved directly, under
suitable approximations, and the solutions can be written down in ‘exact’ or
‘closed’ form.
_We return to our combatants on the stage, where Numerio is dissatisfied with
this definition._
* Numerio That just seems like it is kicking the can down the road. What does ‘closed form’ mean?
Analycia As it turns out, when the term ‘closed form’ is placed under
examination, its precise meaning turns out to be rather elusive and ultimately
quite ambiguous berry2007 ; Borwein2013closed . That is to say: which ‘forms’
does the term ‘closed form’ actually include? Which ones does it exclude? Does
it stop at elementary functions, i.e., exponentials and logarithms? Or must it
cover special functions, like the Bessel functions? And if we do intend to
include special functions as part of the toolbox of analytical methods, which
special functions should be included? Do hypergeometric functions or Meijer
$G$ functions make the cut? What about newly-minted functions expressly
defined to encapsulate some hard numerical problem? (As one example of this,
take the recent proof of the integrability of the Rabi model of quantum optics
Braak2011 —should the functions defined in that work be considered special
functions?) More importantly: what does it really mean for a function to be a
‘special’ function?
Numerio But hasn’t this question been answered long ago?
Analycia Well, this is the kind of question where one could hope that we could
look to the mathematicians to provide an answer—say, by supplying an objective
classification of functions, from elementary through exponentials to the
‘higher’ transcendentals—as to where this class should stop. Unfortunately,
however, when such objective classifications are attempted, they run into a
bog of vague answers and incomplete taxonomies which leave out large classes
of useful functions. Ultimately, as Paul Turán put it andrews1999special ,
‘special’ functions are simply useful functions: they are a shared language
that we use to encapsulate and communicate concepts and patterns berry2007 ,
and their boundaries (and with it, the boundaries of analytical methods in
general) are subjective and a product of tradition and consensus.
Numerio This distinction seems like trivial semantics to me, to be honest.
Analycia At first glance, yes, but it is important to keep in mind that, as a
rule, special functions like the Bessel functions are defined as the solutions
of hard problems—canonical solutions of ordinary differential equations,
integrals that cannot be expressed in elementary terms, non-summable power
series—and when it comes to evaluating them in practice, they generally
require at least some degree of numerical calculation. In this regard, then,
what is to stop us from packaging up one of the numerical problems that face
us, be it a full TDSE simulation or one of its modular components, calling it
a special function, and declaring that methods that use it are ‘analytical’?
Numerio That sounds rather absurd.
Analycia Indeed it does, at face value, but it is not all that far from how
special functions are actually defined: as the solution of a tough
differential equation, or by re-christening an integral that cannot be
evaluated in elementary terms as an ‘integral representation’. More
importantly, it encodes a serious question—what happens
when the ‘back end’ of analytical methods involves more numerical calculations
than the TDSE simulations they were intended to replace?
Numerio They should give the job to me, of course!
##### The analytical toolset
These issues aside, the analytical methods of strong-field physics, as
traditionally understood in the field, form a fairly well-defined set. This
set can be further subdivided into three main classes:
* •
Fully quantum models, which retain the full coherence of the quantum-
mechanical framework. These frameworks date back to key conceptual leaps in
the early days of laser-matter interaction keldysh1965 ; Faisal1973 ;
Reiss1980 ; perelomov1966ionization ; ammosov1986tunnel , but they also
include applications of more standard perturbation-theory tools.
The central method in this category is known as the Strong-Field Approximation
(SFA) keldysh1965 ; Faisal1973 ; Reiss1980 (see Ref. Amini2019 for a recent
review), which builds on the solvability of the field-driven free-particle
problem, used to great effect for HHG Lewenstein1994Mar . The SFA is more
properly a family of related methods galstyan2016 with the key commonality of
taking the driving laser field as the dominant factor after the electron has
been released; in its fully-quantum version, it produces observables in the
form of highly-oscillatory time integrals.
* •
Semiclassical models, which bridge the gap between the full quantum
description and the classical realm by incorporating recognizable trajectory
language but still keeping the quantum coherence of the different pathways
involved. The paradigmatic example is the quantum-orbit version of the SFA
Popruzhenko2014 , obtained by applying saddle-point approximations to the
SFA’s time integrals, which results in trajectory-centred models analogous to
Feynman path integrals salieres2001feynman where the particles’ positions are
generally evaluated over complex-valued times Popruzhenko2014 ; ivanov2014
and are often complex-valued themselves torlina2012 ; Pisanty2016 ;
Pisanty2017 ; torlina2017 .
As a general structure, the relationship between semiclassical methods and the
full TDSE is the same as between ray optics and wave optics for light, a
correspondence that can be made rigorous as an eikonal limit Smirnova2008 .
This has the caveat that optical tunnelling requires evanescent waves in the
classically-forbidden region, with an optical counterpart in the use of
complex rays for evanescent light einziger1982 . The presence of these complex
values complicates the analysis, but it also presents its own opportunities
for insight Pisanty2020 .
The recent development of analytical methods has centred on correcting the SFA
to account in various ways for the interaction of the photoelectron with its
parent ion, from straightforward rescattering milosevic2007intensity through
fuller Coulomb corrections Popruzhenko2008 ; torlina2012 to explicit path-
integral formulations Maxwell2017 , which now span a wide family of approaches
figueira2020 . On the other hand, it is important to keep in mind that there
are also multiple techniques, such as semiclassical propagators Zagoya2014 ,
which are independent of the SFA.
* •
Fully classical models, which can retain a small core of quantum features
(most often, the tunnelling probability obtained from tunnelling theories
ammosov1986tunnel ) but which generally treat all the particle dynamics using
classical trajectories. This includes the paradigmatic Simple Man’s Model
Corkum1993 ; Kulander1993 for HHG, but it also covers much more elaborate
methods, often of a statistical kind, that look at classical trajectory
ensembles to understand the dynamics Panfili2001 , and in particular the
Classical Trajectory Monte Carlo dimitriou2004origin ‘shotgun’ approach to
predicting photoelectron momentum and energy spectra.
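The Simple Man’s Model mentioned above lends itself to a compact numerical illustration: scanning classical birth phases in a monochromatic field recovers the well-known maximum return energy of about $3.17\,U_p$, the classical HHG cutoff. This is a sketch in atomic units with illustrative field parameters.

```python
import numpy as np

# Simple Man's Model (atomic units): an electron is born at rest at time
# t0 in E(t) = E0*cos(w*t) and evolves classically, ignoring the ionic
# potential. The trajectory is known in closed form; we locate the first
# return to x = 0 and record the return kinetic energy in units of the
# ponderomotive energy Up = E0^2/(4 w^2). Parameters are illustrative.
E0, w = 0.05, 0.057
Up = E0**2 / (4*w**2)
T = 2*np.pi / w                                  # optical period

def first_return_energy(t0, dt=0.01, window=1.5):
    """Kinetic energy at first return to x = 0, or None if no return."""
    t = np.arange(t0, t0 + window*T, dt)
    v = -(E0/w)*(np.sin(w*t) - np.sin(w*t0))
    x = (E0/w**2)*(np.cos(w*t) - np.cos(w*t0)) + (E0/w)*np.sin(w*t0)*(t - t0)
    cross = np.where(x[1:]*x[:-1] < 0)[0]        # sign change -> x = 0
    if cross.size == 0:
        return None
    i = cross[0]
    frac = x[i] / (x[i] - x[i+1])                # linear interpolation
    vr = v[i] + frac*(v[i+1] - v[i])
    return 0.5*vr**2

ratios = []
for t0 in np.linspace(0.02/w, 1.5/w, 400):       # scan birth phases
    K = first_return_energy(t0)
    if K is not None:
        ratios.append(K / Up)

max_ratio = max(ratios)                          # classical HHG cutoff
```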
These methods are summarised in Table 2.
Fully quantum models
---
Lowest-order perturbation theory | semianalytic | Matrix elements must be taken from quantum chemistry
Strong field approximation Lewenstein1994Mar | semianalytic | Intensive numerical integration
Semiclassical models
Quantum Orbit models Popruzhenko2014 ; figueira2020phases | semianalytic | Saddle-point equations must be solved numerically
‘Improved’ SFA milosevic2007intensity | semianalytic | Saddle-point equations must be solved numerically
Coulomb-Corrected SFA Popruzhenko2008 | semianalytic | Significant numerical integration
Path-integral approaches Maxwell2017 | not analytic | Numerical solutions for equations of motion
Semiclassical propagators Zagoya2014 | not analytic | Numerical solutions for equations of motion
Fully classical models
Simple Man’s Model Corkum1993 ; Kulander1993 | semianalytic | Trajectories are only analytical for a few driving fields
Classical Ensemble Models Panfili2001 | not analytic | Equations of motion must be solved numerically
Classical Trajectory Monte Carlo dimitriou2004origin | not analytic | Equations of motion must be solved numerically
Table 2: Rough survey of analytical methods of strong-field physics and
attoscience.
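The ‘semianalytic’ entries in Table 2, where saddle-point equations must be solved numerically, can be illustrated with a short sketch: for a monochromatic field, the SFA ionisation-time equation $(p + A(t_s))^2/2 + I_p = 0$ is solved for a complex time $t_s$ by Newton iteration. All parameters are illustrative (atomic units), and the initial guess is hypothetical.

```python
import cmath

# SFA saddle-point equation for a monochromatic field, solved for the
# complex ionisation time ts by Newton iteration (atomic units).
Ip, E0, w = 0.5, 0.05, 0.057          # illustrative Ip and ~800 nm field
A0 = E0/w                             # vector-potential amplitude
p = 0.1                               # final drift momentum

A  = lambda t: -A0*cmath.sin(w*t)     # A(t) for E(t) = E0*cos(w*t)
f  = lambda t: 0.5*(p + A(t))**2 + Ip # saddle-point equation f(ts) = 0
df = lambda t: -(p + A(t))*A0*w*cmath.cos(w*t)

ts = 2.0 + 15.0j                      # guess in the upper half-plane
for _ in range(50):                   # Newton iteration on analytic f
    ts = ts - f(ts)/df(ts)

residual = abs(f(ts))                 # should be ~0 at a saddle point
```

The converged `ts` is genuinely complex (its imaginary part encodes the tunnelling step), which is the numerical core of the quantum-orbit methods discussed above.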
### 1.3 Hybrid methods
In addition to the purely-numerical and purely-analytical approaches discussed
above, it is also possible to use hybrid approaches, which involve nontrivial
analytical manipulations coupled with numerical approaches that incur
significant computational expense.
This class of methods can include relatively simple variations on standard
themes, such as multi-channel SFA approaches that include transition
amplitudes and dipole transition matrix elements derived from quantum
chemistry Mairesse2010 , but it also includes long-standing pillars like
Molecular Dynamics and other rate-equation approaches—which use _ab initio_
potential-energy surfaces and cross-sections, but discard the quantum
coherence—that are now being applied within attoscience Shi2019 . Beyond these
simpler cases, there is also a wide variety of novel and creative methods,
such as the classical-ensemble back-propagation of TDSE results employed
recently to analyse tunnelling times Ni2016 , which hold significant promise
for the future of attoscience.
### 1.4 Friction points
_Now with a complete set of basic definitions, our combatants Analycia and
Numerio turn their discussion to more specific aspects of analytical and _ab
initio_ methods._
#### 1.4.1 Numerical $\neq$ _ab initio_
* Analycia It seems to me that the classification of numerical and _ab initio_ methods, as presented in the ‘ _ab initio_ toolset’, is out of step with the definitions as originally stated. Are you using the terms ‘ _ab initio_ ’ and ‘numerical’ interchangeably?
Numerio You are right. It is important to emphasize the difference between
numerical methods and _ab initio_ methods. Both classes share and benefit from
the development and application of ‘computational thinking’, but strictly
speaking the latter category is a subset of the former. On the other hand, in
the literature, the two terms are often used interchangeably.
Analycia That may be, but then that is a problem with the literature. There
are many methods on the toolset that are very far from _ab initio_ as you
defined it.
The clearest examples of this are the methods based on the SAE approximation
Abusamha2010 ; Kukk2013 ; Saito2004 ; Gozem2015 ; Labeye2018 ;
vandenWildenberg2019 . This approach neglects, in an extremely crude way, the
two-body nature of the Coulomb electrostatic repulsion between the different
electrons, which is often called ‘electron correlation’. Should these methods
really be called ‘ _ab initio_ ’?
Numerio Most of these methods try—with varying degrees of success—to correct
for the neglect of electron correlations by introducing various
parameterisations of effective one-particle Hamiltonians. However, these
constructions are for the most part semi-empirical, and as such they introduce
significant physics beyond the fundamental laws, and definitely cannot be
called _ab initio_ methods.
Analycia It is good to see that laid out clearly. In a similar vein, what
about DFT and TD-DFT? I notice that many of the openly-available DFT packages
explicitly market themselves as being ‘ _ab initio_ ’ approaches Tancogne2020
; Noda2019 ; Apra2020 .
Numerio DFT is a rigorously _ab initio_ method, and it takes its validity from
strict theorems (originally for static systems Hohenberg1964 ; Kohn1965 and
subsequently extended to time-dependent ones runge1984 ; vanleeuwen1998 ;
vanleeuwen1999 ) that show that the complexity of the full multi-electron
wavefunction can be reduced to single-electron quantities. In brief, there
exists an ‘exchange-correlation functional’ that allows us to get multi-
electron rigour while calculating only single-electron densities.
Analycia That may be the case in the ideal world of mathematicians, but it
does not work in the real world. The formal DFT and TDDFT frameworks only work
if one knows what the exchange-correlation functional actually is, as well as
the functionals for any observables such as photoelectron spectra. In
practice, however, we can only guess at what those might be. I have a deep
respect for DFT and TDDFT: for large classes of systems it is our only viable
tool, and there is a large body of science which validates the functionals it
employs. Nevertheless, the methods for validating the ‘F’ in DFT are
semi-empirical, and do not have rigour in the full _ab initio_ sense.
Numerio Yes, those are fair points. However, it is worth noting that there
also exists a rigorous method to construct approximate parameterised
functionals. This is based on introducing parameters whose value can be fixed
by requiring them to satisfy the known exact properties of the functional.
These parameters are of universal nature in the sense that once they have been
determined, they are kept fixed for all systems to be calculated. Having said
this, in practice, when the DFT Hamiltonian ends up in the form of a semi-
empirical parameterisation Tong1997 ; Stener2005 ; Toffoli2013 , then this
takes it out of the _ab initio_ class. (It is worth noting that ‘_ab initio_’
can take different meanings in different fields. There are multiple contexts,
such as the study of condensed matter and other extended systems, where
wavefunction methods are not feasible, and DFT methods are the closest
approach to the _ab initio_ ideal as we have defined it here; most
descriptions of DFT as _ab initio_ appear in such contexts.)
Analycia So, are there _any_ numerical methods which truly satisfy the _ab
initio_ definition?
Numerio Yes, there are. Most of the approaches based on quantum chemistry
possess potentially full _ab initio_ rigour. In practice, of course, for some
applications this full potential is not needed, and the degree of electron
correlation in the calculation can be restricted in order to reduce the
calculation time. However, even in those cases, there is still an _ab initio_
method underlying the computation.
That said, even within an _ab initio_ method, it is common to introduce semi-
empirical parametrisations of the Hamiltonian. This happens most often when we
cannot describe (or do not need to) every term in the Hamiltonian to an _ab
initio_ standard.
Analycia What kind of interactions would this approach apply to?
Numerio The introduction of pseudo-potentials can be used, for example, to
model the effect of core electrons in an atom or molecule. Another common case
is the effect of spin-orbit interactions in a semi-relativistic regime. This
can be seen as a non-_ab initio_ description of certain degrees of freedom or
interactions whose effect is not dominant within a given physical process or
regime. Sometimes this has a limited scope, but it can also extend out to what
we have described as ‘hybrid’ methods (such as Molecular Dynamics
simulations), which are not fully _ab initio_ but which nevertheless maintain
a very strong _ab initio_ identity.
Analycia This does not really paint a picture of a ‘single class’ of _ab
initio_ methods: instead, you have depicted a continuum of methods, which goes
smoothly from a full accounting of electron correlation down to restricted
numerical simulations which operate under substantial approximations.
Numerio I agree, and if you press me I should be able to organise these
methods on a spectrum, between approaches which are fully _ab initio_ and
techniques which are simply numerical approaches.
#### 1.4.2 Analytical methods generally involve computation
_Having conceded that _ab initio_ methods span a rather large continuum,
Numerio strikes back at just how ‘analytical’ the analytical approaches really
are._
* Numerio Since you are so keen to hold _ab initio_ methods to the ‘golden standard’ of the definition, it is only fair that we do the same for analytical methods. Many of the methods you have listed look rather heavy on the numerics to me, particularly on the fully-quantum side. To pick on something, perturbation theory is certainly purely analytical on its own, but those models often require accurate matrix elements for the transitions they describe, and those can only be obtained from quantum chemistry, often at great expense.
Analycia Yes, that is true—
Numerio interrupts Analycia
Numerio And is that not also the case even for the ‘stars’ of the show? The
SFA, in its time-integrated version, produces integrals which are highly
oscillatory, and this generally implies a significant computational cost.
Analycia I agree, the SFA and related methods often involve a large fraction
of numerical effort. Even for the quantum-orbit version, the key stages in the
calculation—the actual solution of the saddle-point equations—rely completely
on numerical methods. On the other hand, of course, this is typically at a
much lower computational cost than most TDSE simulations.
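The ‘actual solution of the saddle-point equations’ that Analycia mentions can be made concrete with a short sketch. The toy example below (all parameter values are illustrative and are not taken from any cited work) solves the SFA ionisation-time equation for a monochromatic field, seeding a complex Newton iteration with the closed-form monochromatic root, as a general-purpose solver for arbitrary pulse shapes would:

```python
import numpy as np

# Toy SFA saddle-point equation (atomic units) for a monochromatic
# field with vector potential A(t) = A0*cos(w*t):
#     (p + A(ts))^2 / 2 + Ip = 0
# Its solutions ts are complex 'ionisation times'. Parameters are
# illustrative (hydrogen-like Ip, ~800 nm field), not from any
# cited calculation.
Ip = 0.5       # ionisation potential
A0 = 1.3       # vector-potential amplitude
w = 0.057      # laser frequency
p = 0.3        # canonical momentum along the polarisation axis

def f(ts):
    return 0.5 * (p + A0 * np.cos(w * ts)) ** 2 + Ip

def df(ts):
    return -(p + A0 * np.cos(w * ts)) * A0 * w * np.sin(w * ts)

# For this monochromatic field the root is known in closed form; for
# a realistic pulse no such formula exists, so we refine a (here
# deliberately perturbed) guess with a complex Newton iteration.
ts = np.arccos((-p + 1j * np.sqrt(2 * Ip)) / A0) / w + (0.1 + 0.1j)
for _ in range(30):
    ts = ts - f(ts) / df(ts)

print(ts, abs(f(ts)))  # complex ionisation time, residual ~ 0
```

This mirrors the structure of quantum-orbit calculations: the physics is analytical (the saddle-point action), but the key numbers come out of a numerical root finder.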
Numerio For most methods, that is quite clear. However, this lower
computational cost is much less clear for some of the more recent approaches
that implement Coulomb corrections on the SFA. The analytical complexity of
those methods can get very high—does that not come together with a higher
computational cost?
Analycia To be honest, the computational expense of some of the more complex
Coulomb-corrected approaches to the SFA (in particular those that utilise
ensembles of quantum trajectories) can, in fact, exceed that of some of the
simpler single-electron TDSE simulations.
Numerio I also notice that you have classified several classical trajectory
methods as ‘analytical’, including statistical ensemble and Monte Carlo
approaches that often involve substantial computational expense in
aggregating millions of trajectories. More importantly, this goes beyond the
raw numbers—the Newtonian equations of motion are only solvable in closed form
for field-driven free particles. As soon as any sort of atomic or molecular
potential is included, one must turn to numerical integration.
Analycia Yes, that is also correct. The character of the method is different
to a direct simulation of the TDSE, but the numerical component cannot be
denied.
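This last point can be illustrated in a few lines (all values are illustrative, in atomic units): for a field-driven free electron the Newtonian trajectory has a closed form, but as soon as a soft-core atomic potential is added the same equation of motion must be integrated numerically, here with a fixed-step RK4 scheme.

```python
import numpy as np

# Illustrative 1D electron (atomic units) in a linearly polarised
# field E(t) = E0*cos(w*t), optionally with a soft-core potential
# V(x) = -1/sqrt(x^2 + a^2). All parameter values are illustrative.
E0, w, a = 0.06, 0.057, 1.0

def accel(x, t, with_potential):
    acc = -E0 * np.cos(w * t)              # field force on the electron
    if with_potential:
        acc += -x / (x**2 + a**2) ** 1.5   # soft-core Coulomb attraction
    return acc

def rk4_trajectory(x0, v0, t_end, dt, with_potential):
    """Fixed-step RK4 integration of x'' = accel(x, t)."""
    x, v = x0, v0
    n = int(round(t_end / dt))
    for i in range(n):
        t = i * dt
        k1x, k1v = v, accel(x, t, with_potential)
        k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x, t + 0.5*dt, with_potential)
        k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x, t + 0.5*dt, with_potential)
        k4x, k4v = v + dt*k3v, accel(x + dt*k3x, t + dt, with_potential)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    return x, v

# Free-particle case: closed form x(t) = x0 + (E0/w**2)*(cos(w*t) - 1)
t_end, dt, x0 = 100.0, 0.01, 5.0
x_num, _ = rk4_trajectory(x0, 0.0, t_end, dt, with_potential=False)
x_ref = x0 + (E0 / w**2) * (np.cos(w * t_end) - 1.0)
print(abs(x_num - x_ref))  # RK4 reproduces the closed form

# With the potential there is no closed form; one can only integrate.
x_pot, v_pot = rk4_trajectory(x0, 0.0, t_end, dt, with_potential=True)
```

Ensemble and Monte Carlo trajectory methods run loops of exactly this kind over millions of initial conditions, which is where their computational expense accumulates.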
Numerio So, similarly to the continuum of ‘ _ab-initio_ -ness’ we agreed upon
earlier, what you are saying is that for analytical methods there is also a
continuous spectrum between fully analytical and exclusively numerical.
Analycia Yes, I suppose I am. We should then be able to place the theoretical
methods of attoscience on a two-dimensional spectrum depending on how much
they have an analytical and _ab initio_ character.
_Our two theorists sit down to chart the methods they have discussed so far,
and report their findings in Fig. 2._
Figure 2: Rough spectrum of the theoretical methods of attoscience, ranked by
their analytical (horizontal) and _ab initio_ (vertical) character.
#### 1.4.3 Quantitative vs qualitative insights
* Numerio It seems we have completely eliminated the dichotomy that we started with between analytical and _ab initio_ methods.
Analycia So it seems, at least on the surface, but there is still a clear
difference between the two approaches. In this regard, I would like to make a
somewhat contentious claim: it is more important to distinguish methods
according to whether the insights we can obtain from them are of a more
quantitative nature or of a more qualitative one. It seems to me that it is
the spectrum between _those_ two extremes that carries more value.
Numerio Speaking of ‘qualitative methods’ is certainly unusual in the physical
sciences, and to me it feels like it carries some negative connotations.
Analycia Perhaps this is because that phrase has been mistakenly associated
too tightly with biological and social sciences, and physical scientists
sometimes want to distance themselves from that perception? If that is so,
then it is important to work to de-stigmatise that classification.
In any case, though, I am curious to know whether the attoscience community
agrees that this is a more important distinction.
The audience response to this poll is presented in Table 16, in Section 4
below.
#### 1.4.4 Analytical $\neq$ approximate
* Numerio You mentioned above that some of the ‘analytical’ methods of attoscience can involve substantial computational expense. What is the point of performing such computations, for approaches that can only ever be approximate?
Analycia This is a common misconception. ‘Analytical’ does not necessarily
mean ‘approximate’, and there are problems where analytical approaches can be
fully exact. Much of this list is limited to the canonical examples (the
particle in a box, the harmonic oscillator, the hydrogen atom, the free
particle driven by an electromagnetic field), but it is important to emphasise
that it also covers perturbation theory, which is exact in the regimes where
it holds. And, in that sense, it includes the exact solutions for single- and
few-photon ionisation and excitation processes, which are crucial to large
sections of attoscience, particularly when it comes to matter interacting with
XUV light.
Numerio With the caveat we discussed above, surely? The perturbation-theory
calculations are exact in their own right, but their domain of applicability
without numerical calculations is extremely limited.
#### 1.4.5 _Ab initio_ $\neq$ exact
* Analycia One striking aspect which is implicit in your description of _ab initio_ methods, and in how they are handled in the broader literature, is the implication that any _ab initio_ method is automatically exact.
Numerio No, that is inaccurate. The two descriptors are distinct and they
should not be considered as synonyms.
Analycia Perhaps the term is used to somehow overvalue the results of a
numerical simulation? It is easy to fall into the trap of thinking that a
result obtained in an _ab initio_ fashion is automatically quantitatively
accurate, but that is a misconception. The clearest examples of this
difference are simulations that work in reduced dimensionality, e.g. 1D or 2D,
but, more generally, plenty of _ab initio_ approaches make full use of
approximations when they are necessary.
Numerio That is true, and if a method’s approximations cannot be lifted then
it does not really fit the definition of _ab initio_. However, it is common to
use ‘lighter’, more flexible numerical methods—which use approximations to
reduce the cost—for more intensive investigations, while still benchmarking
them against an orthodox, fully _ab initio_ approach, and then we can be
confident in the accuracy of the more flexible methods.
Analycia But that is no different to how we benchmark analytical methods. What
is it about _ab initio_ approaches that singles them out as the ‘gold
standard’, then?
Numerio I would say that the key feature is the existence of a systematic way
to improve the accuracy which does not rely on any empirical fittings or
parametrisations, as the central part of the numerical convergence of the
method. When this is present, we can expect to get a description at the same
level of accuracy for the same physical observables, even if we change the
system in question, and we can also estimate the error we make in a systematic
way. Under these conditions, then, the _ab initio_ methods can achieve
fidelities that are so high that they can be considered to be fully exact.
(Having said this, it is important to notice a difference between
wavefunction-based and functional-based _ab initio_ methods. Although
non-semiempirical functionals can, in principle, be systematically improved,
the nature of this improvement will generally be highly system-specific and,
in contrast to wavefunction methods, a better description is not always
guaranteed for a particular system of interest.)
#### 1.4.6 The choice of basis set
* Analycia You mentioned that a key part of ‘the _ab initio_ toolset’ is the development of suitable basis sets. This sounds odd to me: any two basis sets should be equivalent, so long as they are both complete—which, on the _ab initio_ side, corresponds to numerical convergence.
Numerio That is formally true, but it is not very useful in practice. The
basis set used to implement an _ab initio_ method—to formulate and solve the
Schrödinger equation—is a crucial factor in the numerical aspects, and it
determines the level of accuracy of the calculations as well as the
computational cost required to reach convergence to a stable solution that
captures the full physics of your problem.
More broadly, this is a source of approximation (and thus an entry point for
errors), as well as a powerful ally in the search for new physics. In short,
the choice of basis set largely determines the subspace of the solutions that
we can reasonably explore, and this in turn influences the physics that can be
investigated with the method.
Analycia You mentioned that many of the _ab initio_ approaches in attoscience
have their roots in quantum chemistry, and I understand that quantum chemists
have worked very hard at optimising basis sets for their work. Why can’t those
sets be used in attoscience?
Numerio The basis sets most commonly used in traditional quantum chemistry,
particularly Gaussian-type orbitals (GTOs), have indeed been highly optimised
by tailored fitting procedures over many years and, as a result, they have
enabled the flourishing of _ab initio_ quantum chemical methods szabo1996 .
There, the driving goal is to have accurate and fast numerical convergence for
the physical quantities that interest quantum chemists, such as ground-state
energies and electric polarizabilities.
Analycia Ah—and these goals do not align well with attoscience?
Numerio Exactly. These basis sets are generally poorly suited to describe free
electrons in the continuum. As such, traditional basis sets struggle when
describing molecular ionisation over a wide range of photoelectron kinetic
energies Ruberti2013 ; Ruberti2014 . By extension, this limits our ability to
describe general attosecond and strong-field physics.
Analycia So this is where the attoscience-specific development of basis sets
comes in, then.
Numerio Yes. For attoscience the key requirement is an accurate description of
wavefunctions with oscillatory behaviour far away from the parent molecular
region, and this drives the development when existing basis sets are
insufficient.
Analycia So what determines the choice of basis set in any given situation?
Numerio This depends on a number of factors—some down to numerical convenience
in the specific implementation, but also, often, determined by the physics
that the method seeks to describe. Within any particular _ab initio_
framework, the use of new basis sets allows us to explore different parts of
the Hilbert space of the system under investigation, and to look for new and
interesting solutions there.
Analycia This sounds reasonable enough, but it also speaks against the strict
definition of ‘ _ab initio_ ’ as you formulated it, which requires us not to
input any physics beyond the fundamental interactions. To the extent that the
basis-set choice determines the subspace where solutions will play out, that
represents an additional input about the physics which is built directly into
the code. This can then limit the reach of the method; one clear example of
this is the elimination of double-ionisation effects if a continuum with
doubly-ionised states is not included in the basis set. Given these
limitations, can we ever truly reach the _ab initio_ ideal?
Numerio When phrased in those terms, then I agree that it is an ideal, but
there is also no denying the practical reality that has been achieved in
describing the full complexity of quantum mechanics as regards attoscience.
And, I would argue, the methods we have available do offer systematic ways to
ensure convergence in a controlled fashion, and we can very well say that we
are approaching physics in an _ab initio_ way.
## 2 Advantages and disadvantages of analytical and numerical methods
In the previous section we developed, through our combatants Numerio and
Analycia, a framework that allows us to place the theoretical methods of
attoscience in a continuous spectrum: from analytical to numerical, and from
_ab initio_ to approximate, as well as from methods that offer qualitative
insights to ones whose output is most valuable in its quantitative aspects. In
this section we move on, to focus on the strengths and weaknesses of methods
across the theoretical spectrum established in Fig. 2. This analysis is
crucial, as it enables an impartial evaluation of different methods, which in
turn allows attoscientists to use the most suitable tools for the job at hand.
Understanding the advantages and disadvantages of different methods, as well
as their successes and shortcomings, allows us to highlight the most efficient
one—or the most effective combination—for the chosen application, and it is an
important guide in the development of hybrid methods.
### 2.1 Fundamental strengths and weaknesses
_Continuing the conversation, Analycia and Numerio each make a case for their
respective methods, and attempt to scrutinise the shortcomings of each other’s
favoured methods._
* Numerio The main advantage that has struck me in recent years is the impressive progress in the application of numerical methods to problems of increasing complexity. A number of problems which were once well beyond our reach are now possible. This has been achieved both through the development and refinement of efficient computational methods, as well as the increasing availability of high-performance computing (HPC) platforms. Such methods can also act as benchmarks against which to test the validity of simpler, smaller-scale, or more approximate methods. Their other clear advantage is their generality, which enables their application to a variety of physical problems.
Analycia True, but despite these advantages, you must admit there can be a
heavy price to pay. As you mention, application of these methods can require
large-scale HPC resources, and such calculations can be extremely time-
consuming, even if optimised codes and efficient numerical methods are used.
It may not be possible to perform a large number of such calculations, which
then makes it infeasible to perform scans over laser parameters that are often
crucial to understand the physics. Additionally, an inherent difficulty in
many methods is the rapid increase in required numerical effort with the
number of degrees of freedom of the target system. This often restricts
methods, for instance, to the treatment of one active electron, or to
linearly-polarised laser fields. Releasing these restrictions, and others,
then incurs a significant computational cost.
Analytical methods, however, are not encumbered with many of the difficulties
encountered by numerical methods. Their inherent approximations afford them a
large speed advantage as well as a high degree of modularity. These qualities
allow them to provide an intuitive physical picture of the complex dynamics.
They may also avoid the unfavourable scaling properties with which numerical
methods can be saddled, allowing them to explore a more expansive parameter
space. This, coupled with the understanding they provide, can be used to
direct more resource-expensive numerical or experimental approaches.
Numerio Yes, I’m aware that analytical methods provide a number of advantages,
but the price tag is the required level of approximation that enables
analyticity. Approximation is a double-edged sword: ideally we would only
discard unnecessary details in order to highlight the important processes,
but, most commonly, we also end up discarding important details, and this may
imply that some physical processes are not accurately captured. Approximations
also often carry more restrictive regimes of validity, and this makes them
less general than _ab initio_ approaches. So, despite the advantage they may
have with regard to scaling properties, they can also be rather restricted in
some respects. This can often come in the form of rather unrealistic
assumptions, such as the assumption of monochromatic laser pulses.
_Unable to find common ground on their favoured methods, Numerio and Analycia
decide to look at the specific example of NSDI. _
### 2.2 In context: non-sequential double ionisation
_This case study on NSDI will explore the impact that the characteristics of
various methods can have in understanding a physical process. However, before
we rejoin our debating combatants, we will present a few of the key concepts
of NSDI._
Figure 3: Singly- and doubly-charged helium ion yields as a function of laser
intensity, at a wavelength of $\lambda = 780$ nm, showing the ‘knee’ structure
associated with the transition between NSDI and sequential double ionisation
Walker1994 . (Reprinted with permission from Ref. Walker1994 . © 1994 by the
American Physical Society.)
Figure 4: Schematic of the two main mechanisms in NSDI MaxwellThesis2019 .
Panel (a) shows electron impact ionisation (EI), where the recolliding
electron mediates direct double ionisation. Panel (b) shows the recollision
excitation with subsequent ionisation (RESI) mechanism, where the recolliding
electron excites the bound electron, which is subsequently released by the
field.
NSDI has been studied using a wide variety of analytical and numerical
methods. These include both classical and quantum approaches, solving the
Newtonian equations of motion and the TDSE. This range of methods is a
testament to the difficulty in modelling this process, and thus makes it an
ideal case study.
##### What is NSDI?
Put simply, NSDI is a correlated double ionisation process, where the
recollision of a photoelectron with its parent ion leads to the ionisation of
a second electron. Historically, NSDI was discovered as an anomaly, where the
experimental ionisation rate did not agree with analytical computations for
sequential double ionisation for lower laser intensities, giving rise to the
famous ‘knee’ structure (see Fig. 3) LHuillier1983 ; Agostini1984 ;
Agostini1985 ; Fittinghoff1992 ; Kondo1993 ; Walker1994 . Originally, there
was contention over the precise mechanism, but over time the three-step model
Kuchiev1987 ; Corkum1993 ; Schafer1993 involving the laser-driven recollision
was accepted. The three steps of this model are (i) strong-field ionisation of
one electron, (ii) propagation of this electron in the continuum, and (iii)
laser-driven recollision and the release of two electrons. This classical
description is based on strong approximations, and it is generally considered
to be an analytical method (although, as we discussed in Section 1.4.2, it
generally relies on some numerical computations). In particular, the
exploitation of classical trajectories gives it the intuitive descriptive
power of an analytical method.
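Much of the three-step model’s intuitive power comes from the fact that step (ii) is solvable in closed form; the famous recollision cutoff near $3.17U_{p}$ follows from a few lines of classical mechanics. A minimal sketch (illustrative field parameters, atomic units):

```python
import numpy as np

# Steps (ii)-(iii) of the three-step model: an electron born at rest
# at the origin at time t0 in a field E(t) = E0*cos(w*t) follows a
# closed-form free trajectory; its kinetic energy on returning to the
# core peaks near the classical cutoff of ~3.17 Up. The field
# parameters below are illustrative only.
E0, w = 0.06, 0.057
Up = E0**2 / (4 * w**2)        # ponderomotive energy
T = 2 * np.pi / w              # optical period

def return_energy(t0):
    t = np.linspace(t0 + 1e-6, t0 + 1.5 * T, 4000)
    v = -(E0 / w) * (np.sin(w * t) - np.sin(w * t0))
    x = (E0 / w**2) * (np.cos(w * t) - np.cos(w * t0)) \
        + (E0 / w) * np.sin(w * t0) * (t - t0)
    crossings = np.where(x[:-1] * x[1:] < 0)[0]  # sign change: back at the core
    if len(crossings) == 0:
        return 0.0                               # this electron never returns
    return 0.5 * v[crossings[0]] ** 2

# Scan birth times over half a cycle and take the largest return energy
cutoff = max(return_energy(t0) for t0 in np.linspace(0.0, 0.5 * T, 400)) / Up
print(cutoff)  # close to the classical value 3.17
```

Note that even here the final step (locating the return and scanning over birth times) is done numerically, in line with the caveat of Section 1.4.2.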
Within the three-step model, two main mechanisms have been identified for
NSDI. The first is electron impact (EI) ionisation, where the returning
electron has enough energy to release the second electron, leading to
simultaneous emission of both electrons, as depicted in Fig. 4(a). The
alternative mechanism is recollision with subsequent ionisation (RESI), which
occurs when the returning electron only has enough energy to excite the second
electron (but not remove it directly), and this second electron is
subsequently released by the strong field, leading to a delay between the
ionisation of the first and second electron, as shown in Fig. 4(b). The
separation of these mechanisms is best expressed by semi-analytic models based
on the SFA Becker1994 ; Becker1996 , where the mechanisms can be represented
as Feynman diagrams and linked to rescattering events Becker1999 .
##### The NSDI Toolset
Here we summarise some of the methods that are available to model NSDI. For
detailed reviews on these methods see Refs. becker2011 ; figueira2011 .
* •
Three-step model: This simple and intuitive classical description neglects the
Coulomb potential and quantum effects Corkum1993 ; Schafer1993 ; Kuchiev1987 .
Nonetheless, this formulation has become the accepted mechanistic picture of NSDI.
* •
Classical Models: These can be split into those with some quantum ingredients
like a tunnelling rate Ye2008 ; Emmanouilidou2008 ; Emmanouilidou2009 ;
Brabec1996 ; Chen2000 and those that are fully classical, so that ionisation
only occurs by overcoming a potential barrier Panfili2001 ; Haan2008 ;
Haan2008PRL ; Ho2005 ; Haan2002 ; Panfili2002 ; Goreslavskii2001 ;
Popruzhenko2002 . The electron dynamics are approximated by classical
trajectories, which permits a clear and intuitive description. The
contributions of classes of trajectory can be analysed, which is crucial in
tracing the origin of certain physical processes. However, the model neglects
quantum phenomena such as interference maxwell2016 ; Hao2014 .
* •
Semi-classical SFA: The Coulomb potential is neglected but the dynamics can be
understood via intuitive quantum orbits, and the different mechanisms can
easily be separated Becker1994 ; Becker1996 ; Becker1999 ; Becker2000 ;
Goreslavskii2001 ; Popruzhenko2002 ; Quan2009 ; Shaaran2010 ; Shaaran2012 ;
figueira2012 ; Maxwell2015 ; maxwell2016 . This also allows quantum effects
such as tunnelling and interference to be included, with interference effects
in NSDI being predicted Maxwell2015 ; maxwell2016 ; Hao2014 and measured
Liao2017 fairly recently.
* •
Reduced-dimensionality TDSE simulations: Solution of the TDSE assuming that a
particular aspect of the motion can be restricted to the laser polarisation
axis. One-dimensional treatments restrict the entire electron motion to this
axis lein2000 , and two-dimensional treatments restrict the centre of mass
ruiz2006 , while treating electron correlation in full dimensionality. Similar
approximations are made in other methods, such as the multi-configurational
time-dependent Hartree method sukiasyan2009 , which treats NSDI with the
assumption of planar electron motion.
* •
_Ab initio_ full dimensional TDSE simulation: Full quantum mechanical
treatment of a two-electron atom through direct solution of the time-dependent
close-coupling equations smyth1998 ; parker2001 ; pindzola1998 ; pindzola2007
; colgan2012 ; parker2006 . Such methods are computationally intensive,
although efficiency improvements have been made in recent years. To date,
these methods have not been extended to treat molecules or atoms other than
helium.
_We rejoin our two debating attoscientists, whose discussion has now moved on
to the specifics of different analytic and _ab initio_ methods in NSDI. The
discussion begins with a debate on the positive and negative aspects of a
direct _ab initio_ approach._
#### 2.2.1 Full-dimensional numerical solution of the TDSE
* Numerio As a numericist, I often feel that there is no substitute for solving the TDSE in its full dimensionality. In the context of NSDI, this is a daunting computational task, involving solution of many coupled radial equations—often thousands—on a two-dimensional grid. The first code development to do this began in the late 1990s smyth1998 ; parker2001 ; pindzola1998 and, by 2006, calculations could be carried out for double ionisation of helium at 390 nm parker2006 . However, these calculations typically required enormous computational resources—an entire supercomputer, in fact—using all 16,000 cores available at that time on the UK’s national high-end computing platform (HECToR).
Following the literature over the next few years, I noted the development of a
number of similar approaches feist2008 ; feist2009 ; hu2010 ; nepstad2010 ;
hu2013 ; djiokap2012 ; djiokap2015 ; donsa2019 ; donsa2019prl . In NSDI
applications in particular, I was struck by the significant progress made in
reducing the scale of such calculations by the tsurff method scrinzi2012 ;
zielinski2016 ; trecx ; zhu2020 . This approach allowed calculations for
double ionisation of helium at 800 nm to be carried out using only 128 CPUs
for around 10 days zielinski2016 . Fig. 5 shows a recent highlight of this
work, the two-electron momentum distribution for helium at 780 nm
zielinski2016 . The calculation successfully displayed the expected minimum in
the distribution when both electrons attain equal momenta greater than
$2U_{p}$. Watching these
developments unfold over the past 15 years, it has become clear to me that
even a daunting problem such as this is well within our grasp, and should be
attempted.
Figure 5: Two-electron momentum distribution for double ionisation of helium
at 780 nm, calculated using the tsurff method zielinski2016 . (Reprinted with
permission from Ref. zielinski2016 . © 2016 by the American Physical Society.)
Analycia These are quite intensive calculations, so my first question would
be: is it always worth it? Calculations should not only be feasible—they
should also be justifiable. The large scale of each single calculation can be
a very limiting factor, since you may need further computations, perhaps to
perform intensity averaging, or to scan over a particular laser parameter.
Here you may encounter additional hurdles, since it is well known that the
computational cost can scale very unfavourably with certain laser parameters,
particularly wavelength. Even with the efficiency savings that you mention,
the method may struggle to perform calculations at longer wavelengths, or in
sufficient quantity to scan over experimental uncertainties.
Secondly, it’s true that significant progress has made these large-scale
calculations more tractable. However, this does not necessarily mean that the
results will be easy to analyse. Disentangling the complex web of physical
processes included in such calculations can be very difficult. This requires
tools and switches within the method, for example to evaluate the role of
certain interactions, and thereby aid your understanding. Even with such
analysis tools at hand, gaining strong physical insight may be an arduous
procedure, involving further large-scale calculations, and these may not even
be guaranteed to provide the insights you desire.
Numerio Absolutely, you have highlighted the main difficulties with _ab
initio_ methods that I have encountered. The scale of the calculations can
impose a limit on their scope, and their complexity can obscure
interpretation. On the other hand, simpler methods avoid these difficulties,
but they rely on approximations which need to be justified. For me, the ideal
tool would be a method with qualities representing the best of both worlds — a
method where many small-scale but accurate TDSE calculations could be carried
out to provide detailed interpretation. Although this is feasible in some
fields, it is not currently possible in the context of NSDI. However, equipped with an
arsenal of _ab initio_ methods, there is an opportunity to benchmark simpler
methods which fall short of a full _ab initio_ treatment. If their
approximations can be validated by such comparisons, then their interpretive
power will be valuable.
Analycia I think now we are beginning to agree.
_The debate above highlights that both calculation and interpretation are
important. Often, an _ab initio_ approach can provide a calculation, but
detailed interpretation may require analytical techniques. To discuss these
techniques further, the debate now moves to focus on the merits of analytical
methods used to study NSDI._
#### 2.2.2 Analytical approaches
Figure 6: Comparison of experimental data (upper row) Kubel2014 (reproduced
under a CC BY license) with theoretical focal-averaged distributions (lower
row) selected from maxwell2016 (reprinted with permission; © 2016 by the
American Physical Society). The left and right columns present 16 fs and
30 fs laser pulse lengths, respectively, with $\lambda = 800$ nm
($\omega = 0.057$ a.u.) and $I = 10^{14}$ W/cm$^2$ ($U_{p} = 0.22$ a.u.).
Specific features associated with quantum interference are marked by polygons
in both upper and lower panels. It was necessary to account for interference
effects in the theoretical results to get this agreement.
* Analycia You see, in my experience working on NSDI, descriptive power is often enabled by the high degree of modularity that analytical methods possess. This modularity may be harnessed to determine the physical origin of an effect by switching certain interactions on and off. Like intermediate-rigour numerical methods, the light computational demand means that large sets of individual calculations may be carried out where necessary.
A good example of the power of modularity in analytical models is the use of
interference in SFA models for NSDI to match experimental results maxwell2016
; Hao2014 . In Fig. 6 we see experimental results Kubel2014 for two pulse
lengths. The lower panels show the results of the SFA model maxwell2016 that
uses a superposition of different excited states in the RESI mechanism of
NSDI. Including interference leads to a good match, which provides strong
evidence for interference effects in NSDI. This was only possible because
interference effects could be switched on and off, thereby allowing analysis
of the different shapes and structures within the distribution. (The
consequences of this difficulty were discussed in detail in Battle 2 of the
Quantum Battles in Attoscience conference battle2 , reported in a companion
paper in this Special Issue Amini2020 .) Each of these shapes could then be
directly attributed to different excited states, which demonstrates the power
of the modularity of analytical methods in providing an intuitive
understanding of the physics.
Figure 7: Photoelectron spectra for NSDI in helium driven by a wavelength of
800 nm and intensity of $4.5 \times 10^{14}$ W/cm$^2$ Staudte2007 , showing
(a) the correlated momentum distribution, with a detail (b) shown with
superimposed results from a classical electron scattering model, as well as
(c) the electron energy spectrum of He$^{2+}$ and He$^{+}$. (Reprinted with
permission from Ref. Staudte2007 . © 2007 by the American Physical Society.)
Numerio The interpretive power is certainly valuable, and the availability of
switches such as these is often the key to a good physical understanding. My
main concern, however, is that the approximations may affect the accuracy of
the results. In particular, the SFA neglects the Coulomb potential, and it is
known that this influences the famous finger-like structure in NSDI seen in
Fig. 7, causing a suppression of two-electron ejection with equal momenta.
Furthermore, we would expect a host of other Coulomb effects just as there are
in single electron ionisation figueira2020phases . Thus, care must be taken
with the conclusions that you draw from such an analytical model. As I said
earlier, many numerical methods may not afford this degree of modularity, but
it would strengthen my confidence in the conclusions if an _ab initio_ method
also observed these effects. In this way, a numerical method could be guided
by analytical predictions to assess the accuracy of certain approximations.
Analycia This is a fair point, but the considerable speed advantage means that
you can often do additional checks and analysis to get around this problem.
The SFA model presented could be solved in five minutes on a desktop computer,
whereas, as you mentioned, _ab initio_ models will take days on hundreds of
cores. The fast SFA calculations can then account for additional factors such
as focal volume averaging, even though it increases the overall runtime by a
factor of ten or more. It can also perform scans through intensity and
frequency in a timely manner. Such scans can provide important insights, for
example in Fig. 8, where the contributions of various excited states are
monitored as a function of laser intensity and frequency. Their relative
contributions then explain the shapes appearing in various regions of the
momentum distribution. The extra analysis can increase the overall runtime by
factors of 100–1000, which is still perfectly manageable for the SFA, but
would be out of the question for most _ab initio_ methods.
Furthermore, there is always a place for analytical methods in performing
computationally inexpensive initial investigations, which then provide the
evidence needed to commit to using more expensive _ab initio_ or experimental
efforts. In recent work on interference in NSDI, motivated by predictions of
SFA models, experimental work was done to investigate interference effects
quan2017 .
Figure 8: Scan over intensity ($U_{p}$) and frequency ($\omega$) that
attributed the shapes found in Kubel2014 to preferential excitation of states
with different orbital angular momenta $l$ (s-, p- and d-states) in the RESI
process for different pulse lengths maxwell2016 . The contributions of
s-states are displayed in (a), (d) and (g), those of p-states in (b), (e) and
(h), and those of d-states in (c), (f) and (i). (Reprinted with permission
from Ref. maxwell2016 . © 2016 by the American Physical Society.)
Numerio Yes, I agree in _some_ cases the extra analysis is beneficial.
However, _ab initio_ methods are still much more generalised than their
analytical counterparts. Take Fig. 4, where many different processes
contribute, including both the RESI and EI mechanisms, together with
sequential double ionisation. The presented SFA model includes only the RESI
mechanism.
Analycia There are two sides to this: it is nice to be able to clearly
separate EI and RESI in the SFA, but it is true that it introduces a lack of
flexibility.
With the goal of reaching some kind of agreement, I would posit that the
benefits of both types of models outweigh the negatives. In the case of
classical and semi-classical models, they have clearly led to huge leaps in
understanding for the mechanisms of NSDI. Furthermore, I would add that NSDI
in particular is a good candidate for hybrid models. Strongly correlated
dynamics and multi-electron effects are well-suited to an _ab initio_
approach, while the main ionisation dynamics are well-described by semi-
classical models.
That said, I would also like to know how the broader community feels about
this.
The audience response to this poll is presented as Poll 2 in Table 16, in
Section 4 below.
_Within the context of NSDI, our combatants have discussed the merits and
drawbacks of their respective approaches, and have begun to appreciate the
computational and interpretational qualities that analytical and ab initio
methods contribute. In the following section, we focus on how progress in
scientific discovery can be aided by both types of method._
## 3 Scientific discovery
The seed of a scientific discovery can be planted in the form of a bump or a
dip on a smooth curve of experimental data, as a whimsical term in the
denominator of some equation, or as a quirky splash in numerical results. In
other words, a scientific discovery can be triggered by experimental results
or theoretical ones, either analytical or numerical. Once something new has
been spotted, it has to be examined and explained by each of these branches of
research before it becomes a real, full-grown discovery, and in the end all of
them must agree.
In some cases, the initiating site is analytical and the others come next, as
in the case of optical tunnelling ionisation, predicted in 1965 keldysh1965
before its much later experimental observation in 1989 Corkum1989Mar
. (Optical tunnelling ionisation also gives rise to the question of the
tunnelling time, which was discussed in depth during Battle 1 of the Quantum
Battles in Attoscience conference battle1 , reported in a companion paper in
this Special Issue Hofmann2021 .) Sometimes, the role of a trigger is played by
numerical calculations, as for coherent multi-channel strong-field ionisation
Rohringer2009May , which was shortly followed by its experimental validation
Goulielmakis2010Aug . It can even be a little bit of both, as in the first
description of the RABBITT scheme in 1990 veniard_phase_1990 , or in single-
photon laser-enabled Auger decay (sp-LEAD), which was predicted in 2013
Cooper2013Aug , first observed in 2017 Iablonskyi2017Aug , and further
characterised in You2019 . There are also theoretical predictions—both
analytical, like molecular Auger interferometry Khokhlova2019Jun , and
numerical, like HHG in topological solids Bauer2018 ; Silva2019Dec ;
Chacon2020 —which have already sparked experimental efforts to confirm them,
but which are still waiting for their observations to come. On the other side,
we have discoveries which arise from experimental observations and are then
explained theoretically, such as NSDI, which was discussed in detail in
Section 2.
In this section we tell another story of scientific discovery in attoscience,
through the case study of resonant HHG, which also starts from recorded
experimental data.
### 3.1 Experimental kick-off
By the year 2000, HHG was already a full-grown discovery. It had been observed
mcpherson_studies_1987 ; ferray_mutliple_1988 and theoretically modelled
Kuchiev1987 ; Corkum1993 ; krause_hihg-order_1992 ; Lewenstein1994Mar a
decade previously. After this breakthrough, many features of HHG were under
active investigation, both experimentally and theoretically. In particular,
resonances in the HHG spectrum had been extensively studied since the 1990s.
Some structures in the HHG spectra of atomic gases had been very early on
attributed to single-atom resonances lhuillier_coherence_1992 ; balcou_phase-
matching_1993 . Then more recent measurements toma_resonance-enhanced_1999 and
theoretical works figueira2002Jan ; Taieb2003Sep explained these structures
by multiphoton resonances with bound excited states, linked to the enhancement
of specific electron trajectories that recollide multiple times with the
ionic core.
Figure 9: First observations of resonant HHG. (a) Figure taken from
Ganeev2006Jun : high-order harmonic spectra from (1) indium and (2) silver
plumes. (Reprinted with permission from Ref. Ganeev2006Jun . © The Optical
Society.) (b) Figure taken from Gilbertson2008Sep : spectra of the harmonic
supercontinuum generated with double optical gating for different values of
the input pulse duration in a helium target gas. (Reprinted from Ref.
Gilbertson2008Sep , with the permission of AIP Publishing.) (c) Figure taken
from Shiner2011Jun : top, the raw HHG spectrum from xenon at an intensity of
$1.9\times 10^{14}\,\mathrm{W/cm^{2}}$; bottom, the experimental HHG spectrum
divided by the krypton wave packet (blue) and the relativistic random-phase
approximation (RRPA) calculation of the xenon photoionisation cross-section
from kutzner_extended_1989 (green). The red and green symbols are PICS
measurements from fahlman_xe_1984 and becker_subshell_1989 respectively, each
weighted using the anisotropy parameter calculated in kutzner_extended_1989 .
In this context, Ganeev et al. Ganeev2006Jun first measured, in 2006, a
strong enhancement of a single harmonic, by two orders of magnitude, in the
HHG spectrum of plasma plumes. Their result is shown in Fig. 12(a). At that
time, they attributed this resonance to the multiple recolliding electron
trajectories that had previously been observed and modelled in atoms
toma_resonance-enhanced_1999 ; Taieb2003Sep , and they related these
trajectories to multiphoton resonances with excited states.
Then, in 2008, when studying the spectra of single attosecond pulses generated
in noble gases, Gilbertson et al. measured for the first time a strong
enhancement in the HHG spectrum of helium Gilbertson2008Sep , as shown in Fig.
12(b). Since they employed single attosecond pulses, they recorded continuous
spectra, which allowed them to see the enhancement clearly; it would otherwise
fall between two harmonics if observed in an attosecond pulse train.
They did not give any explanation for this enhancement, as it was not their
main focus, but they observed that it appears at the energy of the $2s2p$
autoionising state (AIS) of helium.
Then in 2011, Shiner et al. measured a strong enhancement at
$100\,\mathrm{eV}$ in the HHG spectrum of xenon gas Shiner2011Jun ;
Schmidt2012Mar , shown in Fig. 12(c). The experimental HHG spectrum is
displayed in the upper panel. From that spectrum the authors extracted the
photoionisation cross-section (PICS) by first dividing by the spectrum of
krypton (obtained under the same conditions), and then multiplying by the
photoionisation cross-section of krypton from Ref. huang_theoretical_1981 .
The obtained experimental PICS is shown as a blue line in the lower panel. The
green curve is the photoionisation cross-section of xenon from Ref.
kutzner_extended_1989 . The very good agreement between the two curves,
combined with the qualitative agreement with a toy model including only the
$4d$ and $5p$ states of xenon, allowed the authors to relate the enhancement
at $100\,\mathrm{eV}$ to the giant resonance of xenon.
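The extraction procedure used by Shiner et al. is, at its core, a simple ratio: dividing by the krypton spectrum cancels the common returning-wave-packet factor, and multiplying by krypton's known cross-section restores absolute units. A minimal sketch of this arithmetic (the numbers are invented placeholders, not the measured spectra):

```python
import numpy as np

def extract_pics(s_target, s_ref, pics_ref):
    """Infer a photoionisation cross-section (PICS) from an HHG spectrum:
    divide by the spectrum of a reference gas recorded under the same laser
    conditions (cancelling the common wave-packet factor), then multiply by
    the reference gas's known PICS."""
    return np.asarray(s_target) / np.asarray(s_ref) * np.asarray(pics_ref)

# Invented placeholder numbers: at a harmonic where xenon emits 4x the
# krypton signal, and krypton's known PICS is 2 Mb, the inferred xenon
# PICS is 8 Mb.
pics_xe = extract_pics([4.0], [1.0], [2.0])
```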
Thus, by the late 2000s, there were observations of resonant enhancement
features in HHG from plasma plumes as well as few- and many-electron rare-gas
atoms—but no solid theoretical explanation.
_We now hand again the stage to our theoretical acquaintances, who have begun
discussing the ingredients required for a theoretical model for resonant HHG
and will guide us through the rest of the story._
### 3.2 Building the model
Numerio An explanation of the observed process demands the creation of a model, and this requires a thorough analysis of the experimental data that reveal the phenomenon, to distinguish its essential features. The essential feature of resonant HHG, common to all observations independently of the medium (gaseous or plasma), is the enhancement of one high harmonic or of a group of them. This does not sound like a lot to start with. However, it already hints that the desired explanation has no connection to propagation effects, which restricts the model to an account of the single-particle response only.
Analycia There have been a number of attempts to create a model describing
resonant HHG. One group of theories is based on bound-bound transitions
Gaarde2001Jun ; figueira2002Jan ; Taieb2003Sep ; Ishikawa2003Jul , but these
cannot be applied to plateau harmonics Ganeev2006Jun because of the crucial
role played by the free-electron motion. Another group of theories mentions a
connection of the multi-electron excited states to the enhanced yield of
harmonics Milosevic2007Aug ; Frolov2009Jun ; Frolov2010Aug . In particular,
the enhancement of high harmonics generated in xenon Shiner2011Jun was
associated Frolov2009Jun with the region of the well-known ‘giant’ dipole
resonance in the photoionisation (photorecombination) cross-section of xenon
atoms.
Numerio This sounds closer to the ingredients that are likely required to
explain the phenomenon. Does this not get us closer to resolving the puzzle?
Analycia Indeed it does! After revealing a similar correspondence between
experimental HHG enhancements Gilbertson2008Sep ; Ganeev2006Jun and
transitions with high oscillator strengths between the ground state and AIS of
the generating ions Chan1991Jul ; Duffy2001Mar , the model of resonant HHG was
forged in the form of the ‘four-step model’ Strelkov2010Mar .
Numerio It seems like there should be a close similarity with the common
three-step model for HHG, should there not?
Analycia Of course! The four-step model Strelkov2010Mar extends the three-
step model Corkum1993 ; Schafer1993 ; Kuchiev1987 to include the resonant
harmonic emission along with the ‘classic’, nonresonant one. The first two
steps of the four-step model—(i) tunnelling ionisation, and (ii) free-electron
motion—repeat those of its forerunner. Then, if the energy of the electron
returning to the parent ion is close to that of the ground–AIS transition, the
third step of the three-step model splits into two: (iii) electron capture
into the AIS, and (iv) relaxation from the AIS down to the ground state,
accompanied by the XUV emission.
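The first two steps inherited from the three-step model are purely classical and easy to sketch. Assuming, for illustration, a monochromatic field and the standard "simple-man" approximations (electron born at rest, no Coulomb force), the return kinetic energy that decides whether capture into the AIS is energetically possible can be computed as follows; the famous $3.17\,U_p$ maximum emerges from a scan over birth phases.

```python
import numpy as np

def return_energy_up(phase0, n=4000):
    """Classical 'simple-man' return kinetic energy, in units of Up, for an
    electron born at rest at phase phase0 of the field E(t) = cos(t)
    (scaled units with E0 = omega = 1, so Up = 1/4). Returns None if the
    trajectory does not revisit the core within one optical cycle."""
    t0 = phase0
    t = np.linspace(t0, t0 + 2.0 * np.pi, n)
    # Newton's equation for charge -1: v = -(sin t - sin t0),
    # x = cos t - cos t0 + (t - t0) sin t0, with x(t0) = 0 at the core.
    x = np.cos(t) - np.cos(t0) + (t - t0) * np.sin(t0)
    crossings = np.where(np.diff(np.sign(x[1:])) != 0)[0]
    if len(crossings) == 0:
        return None
    tr = t[crossings[0] + 1]                 # first return time
    v = -(np.sin(tr) - np.sin(t0))           # return velocity
    return 0.5 * v**2 / 0.25                 # kinetic energy / Up

# Scan birth phases after the field crest; the maximum approaches 3.17 Up.
phases = np.linspace(0.05, 1.5, 200)
energies = [return_energy_up(p) for p in phases]
emax = max(e for e in energies if e is not None)
```

In the four-step picture, resonant emission requires this classical return energy (plus the ionisation potential) to land near the ground–AIS transition energy.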
Numerio Sure, that seems like a possible chain of events, but how can it lead
to a higher emission probability if it requires an extra step?
Analycia You are right that a substitution of one step by two of them should
intuitively cause a decrease in probability, but the combination of higher
probability for electron capture into the AIS (corresponding to the looser
localisation of the AIS), together with the high oscillator strength of the
transition between the AIS and the ground state, results in an increase of the
resonant harmonic yield by several orders of magnitude. Perhaps you could
argue this is similar to how NSDI may dominate over sequential double
ionisation despite having more steps, as discussed in Section 2.
_By this point a convincing model has been suggested; however, it is far from
an end of the story of ‘scientific discovery of resonant HHG’, and a series of
hurdles still has to be surmounted._
### 3.3 Challenging the model: numerical calculations
Numerio Alright, this model sounds physically reasonable enough, but we still need some actual proof that it describes the experiment properly. Small-scale numerical simulations were of great help in that matter. When building the four-step model, Strelkov also performed TDSE simulations at the SAE level and compared them with several experimental results for singly-ionised indium and tin, as shown in Fig. 13. The very good agreement shows that a single active electron is able to model the process accurately.
Analycia Sure, that is an important result, but in the paper, Strelkov also
made an analytical estimate of the enhancement using the oscillator strength
and lifetime of the resonant transition. The result of this estimate is shown
in Fig. 13 as blue squares for several singly-ionised atoms. The good
agreement both with experiment and with the TDSE calculations marks another
step in the confirmation of the four-step model.
Numerio Indeed, that was quite convincing already, but all these
considerations were time independent. When building a model in attosecond
science, it is often useful to have a dynamical point of view on the process
under study. Tudorovskaya and Lein investigated resonant HHG and the four-step
model using time-frequency analysis tudorovskaya_high-order_2011 . They solved
the SAE TDSE for 1D model potentials with a shape resonance that models an
AIS. They were able to reproduce an enhancement of more than two orders of
magnitude at the harmonic order corresponding to the shape resonance. Their
time-frequency analysis confirmed that the harmonic emission at resonance
starts when the electron returns to the ionic core. More interestingly, it
shows that the duration of the emission at resonance is much longer than the
emission duration at the other harmonic orders. More precisely, the emission
duration at resonance corresponds to the shape resonance lifetime, indicating
that the electron gets trapped in the resonance and emits from there, thus
validating the four-step model.
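Time-frequency analysis of this kind is typically a windowed Fourier (Gabor) transform of the dipole signal. The sketch below, on a toy signal with a short nonresonant burst plus a long-lived resonant tail (all parameters invented for illustration), reproduces the qualitative signature described above: emission at the resonance frequency persists for a time set by the resonance lifetime, far longer than the nonresonant burst.

```python
import numpy as np

def gabor(signal, t, t_centers, omega, sigma):
    """Windowed Fourier (Gabor) transform used in time-frequency analysis:
    |G(tc, w)| = | sum_t d(t) exp(-(t - tc)^2 / (2 sigma^2)) exp(-i w t) dt |."""
    dt = t[1] - t[0]
    out = np.empty(len(t_centers))
    for i, tc in enumerate(t_centers):
        window = np.exp(-((t - tc) ** 2) / (2.0 * sigma**2))
        out[i] = np.abs(np.sum(signal * window * np.exp(-1j * omega * t)) * dt)
    return out

# Toy dipole: a short nonresonant burst at frequency 5 around t = 10, plus a
# long-lived 'resonant' tail at frequency 3 with lifetime 20 (all invented).
t = np.linspace(0.0, 100.0, 20001)
d = np.exp(-((t - 10.0) ** 2) / 2.0) * np.cos(5.0 * t) \
    + 0.3 * (t > 10.0) * np.exp(-np.maximum(t - 10.0, 0.0) / 20.0) * np.cos(3.0 * t)
tc = np.linspace(0.0, 100.0, 201)
burst = gabor(d, t, tc, omega=5.0, sigma=2.0)  # emission confined near t = 10
res = gabor(d, t, tc, omega=3.0, sigma=2.0)    # emission persists ~ lifetime
```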
_After this convincing achievement, the model seems to be validated,
especially from Numerio’s point of view. But for Analycia the story is not
finished yet._
Figure 10: Comparison of experimental measurements
Ganeev2006Jun ; ganeev_harmonic_2006 ; suzuki_anomalous_2006 ;
suzuki_intense_2007 ; ganeev_strong_2007 ; ganeev_systematic_2007 with
analytical theory and single-electron TDSE simulations Strelkov2010Mar for
the enhancement factor in resonant HHG in plasma-plume ions, as reported in
Ref. Strelkov2010Mar . (Adapted with permission from Ref. Strelkov2010Mar .
© 2010 by the American Physical Society.)
### 3.4 Generalisation: analytical theory
Numerio Perfect, we now have the model of resonant HHG in our arsenal, which allows us to conduct a qualitative analysis and to make qualitative predictions. Moreover, we also possess quantitative answers based on SAE TDSE solutions for a number of generating particles in given laser fields. So I believe we have all we wanted, then?
Analycia Not so fast! Even though there is a tool providing us with a
quantitative answer, it cannot easily be re-applied to a different generating
system or to slightly different field parameters; in other words, it lacks
generality. This creates a strong demand for a computationally cheap and more
flexible tool.
Numerio Do you have some concrete solutions in mind?
Analycia This theoretical demand has been satisfied by the introduction of the
analytical theory of resonant HHG Strelkov2014May . The analytical theory is
built on two pillars: Lewenstein’s SFA-based theory Lewenstein1994Mar
(conventional for HHG), and Fano’s theory Fano1961Dec , which guides the
treatment of AISs originating from configuration interaction.
Numerio I understand, each of these theories is indeed very successful in
describing the two physical processes at hand. But how do you combine them to
reproduce the experimental observations?
Analycia The resonant HHG theory delivers the answer—the spectrum of the
dipole moment of the system—as a product of the spectrum of the nonresonant
dipole moment and a Fano-like factor. The nonresonant dipole moment is the
same as in the well-known Lewenstein theory, which captures the field
configuration and the major characteristics of the ground state of the
generating particle. On the other hand, the Fano-like factor encodes the
resonance, and depends on the AIS’s features: its energy and its energy width,
as well as the dipole matrix element for the transition between the AIS and
the ground state.
As a result, the harmonic spectrum in the resonant case is identical to the
nonresonant one far from the resonance, while in the vicinity of the resonance
it acquires a Lorentzian-like profile due to the Fano-like factor (see Fig.
14). This Fano-like profile around the resonance carries the information about
two major properties of the resonant harmonics—their behaviour in amplitude
and phase—which result in an enhancement and an emission time delay of the
resonant harmonics, respectively.
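A minimal numerical sketch of such a factor can be written with the textbook Fano parameterisation $F(E) = (\epsilon + q)/(\epsilon + i)$, $\epsilon = (E - E_{\mathrm{res}})/(\Gamma/2)$; whether this matches the exact factor of Strelkov2014May is left open here, and the resonance parameters below are illustrative. Far from the resonance $|F|^2 \to 1$, while near it the amplitude is enhanced (by roughly $q^2$) and the phase varies rapidly, which are exactly the two handles discussed next.

```python
import numpy as np

def fano_factor(energy, e_res, gamma, q):
    """Fano-like complex factor multiplying the nonresonant HHG dipole
    spectrum near an AIS: F = (eps + q)/(eps + i), with
    eps = (E - E_res)/(Gamma/2). Far from the resonance F -> 1, so the
    nonresonant spectrum is recovered; near it both the amplitude and the
    phase of the harmonics are modified."""
    eps = (energy - e_res) / (gamma / 2.0)
    return (eps + q) / (eps + 1j)

# Illustrative resonance parameters (not those of a specific ion):
E = np.linspace(20.0, 30.0, 1001)          # photon energy grid, eV
F = fano_factor(E, e_res=25.0, gamma=0.1, q=10.0)
enhancement = np.abs(F) ** 2               # peaks at roughly q^2 near E_res
phase = np.angle(F)                        # rapid variation across E_res
```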
Numerio Ok, I agree this analytical theory provides a much more general
picture of the process. But is it really that useful? I mean, what could we do
with resonant HHG?
Analycia The two features of resonant harmonics, amplitude and phase, provide
us with extra handles for improving the generation of attosecond pulses (an
intensity boost and an elongation of the duration), and they also provide an
opportunity to study the structure of the AIS using the harmonic spectrum.
_With a robust framework in place, our scientists discuss the final obstacles
faced by the theory._
Figure 11: Squared absolute value (red) and phase of the Fano-like factor
calculated analytically (solid lines) and numerically within SAE TDSE
(symbols) for HHG in a tin plasma plume Strelkov2014May . (Reprinted with
permission from Ref. Strelkov2014May . © 2014 by the American Physical
Society.)
### 3.5 Closure: _ab initio_ calculations
Analycia Although the model and the analytical theory of resonant HHG agree with the results of numerical TDSE calculations in the SAE approximation, this theory encountered significant resistance, both at conferences and in peer review, insofar as the model potential used in these calculations is artificial and does not reflect the fully multi-electron nature of AISs. What is your opinion regarding this issue?
Numerio I would say that, on the one hand, this is an instance of a broader
discussion regarding the role, advantages and disadvantages of the use of
model potentials in numerical calculations. On the other hand, however, this
limitation can be addressed using fully-_ab initio_ calculations, eliminating
this final uncertainty.
Recent first-principles calculations of resonant HHG by manganese atoms and
ions Wahyutama2019Jun show the characteristic enhancement observed earlier in
the energy region around a group of AISs.
Analycia Finally! These results close the remaining questions in the
theoretical understanding and description of resonant HHG, and open a wide
front of study into the applications of this process, equipped with a full
toolset: analytical theory as well as numerical (SAE and _ab initio_)
calculations.
Numerio I agree, we are not always on great terms, but we really made a nice
team on this one!
The audience opinion on the necessity of combining different approaches is
presented as Poll 3 in Table 16 in Section 4 below.
_After this constructive exchange, the two agree to work more tightly together
from now on._
As we discuss below in Section 4.5 as a response to Audience Question 1, we
are not always necessarily after discoveries in our field, but also after
finding and solving interesting problems. Nonetheless, any scientific
production, or creative activity, requires a confrontation of different points
of view before it can be considered scientific. We argue here that this
confrontation is all the more efficient and constructive when it involves all
the different aspects of scientific work: experimental, analytical, numerical,
and _ab initio_. As we have seen at the start of this section, the initial
trigger can be pushed by any of them, but the actual scientific progress
generally happens afterwards, when they collaborate.
## 4 Discussions
The dialogue between proponents of analytical and _ab initio_ approaches, as
we have followed it so far, opens a number of additional questions for deeper
examination. We now turn to these more specific points, as well as our
(combatants’) responses to the questions raised by audience members during the
talk.
During the online conference battle , in addition to the talk, several
questions were directed to the audience in the form of polls, both over the
Zoom platform and to a wider public over Twitter. We present a summary of the
results of these polls in Table 16.
_Our combatants return to the stage to resolve several still-itching questions
that remain from their conversation._
### 4.1 Is approximation a strength or a weakness?
_The degree of approximation made by a particular method is a typical source
of contention between numericists and analyticists. Here, Numerio and Analycia
discuss how they feel approximation should be characterised._
Analycia It has been said that approximation is a downside of analytical methods. However, I would like to argue—perhaps somewhat provocatively—that approximation is more of a strength. Approximation is what drives the interpretation—the qualitative picture—of a physical process as constructed by an analytical model. If you can remove all that is unnecessary, and still achieve reasonable agreement with _ab initio_ simulations or with experimental results, then this is when you actually start to gain some real understanding and interpretation of physical processes.
In other words, I do not think any method, analytical or numerical, is
scientifically useful by itself. Science stems from the comparison and
interplay of different methods, and particularly of different levels of
approximations.
Numerio I tend to agree that, as an ideal, this is where approximations can
really bring clarity to the table. However, in practice, most of the time when
we approximate we end up dropping some of the things that we would like to
retain. In that sense, approximation is both a blessing and a curse: it
simplifies the picture so we can better understand it, but we generally lose
out on some of the physics we want to describe. There is rarely a ‘happy
medium’ where approximation is purely a strength.
Having said that, though, I should also point out that these benefits and
disadvantages of approximation are equally applicable to numerical methods. If
an approximate calculation matches experiment or an _ab initio_ simulation in
full rigour, then we can be confident that we have captured the physics.
Analycia (eagerly) Wait! I think I see what you mean—there is no demand that
this approximate calculation needs to be analytical?
Numerio Yes. Moreover, approximation also mitigates some of the problems in
numerical methods regarding the complexity of interpretation. An overly-
complex method may yield information that is simply too fine-grained to be
analysed easily, but an approximate numerical method can strip away much of
that complexity by focusing on a suitable subspace of solutions, and if it
correctly matches the rigorous outcome then we can be confident that we
understand the physics.
_Numerio and Analycia agree to treat approximation as both a strength and a
weakness, and as a vital way to obtain new perspectives on physics, and move
on to the question of modularity in ab initio methods._
### 4.2 Modularity in _ab initio_ methods
Numerio It is often thought that _ab initio_ methods do not provide the level of modularity that analytical methods can. However, even if the solutions provided by _ab initio_ methods are numerical, the Hamiltonian typically comprises a set of analytical terms. By switching these terms on and off, we can gain insight into their role in a particular aspect of the physics in question.
To give one example of this, in Fig. 12 I show an intensity scan of the degree
of coherence of the remaining ion in strong-field ionisation of CO${}_{2}$. To
gain physical insight, we can deactivate a number of interactions, and then
compare the result to the ‘true’ coherence. This comparison shows how the
interplay of different mechanisms contributes, in a non-trivial way, to the
total coherence.
Figure 12: Modularity in _ab initio_ calculations: the quantum coherence
between the $\sigma_{g}$ and $\sigma_{u}$ ionic states of CO${}_{2}$, as a
function of laser intensity, can be examined by switching different couplings
on and off, providing a valuable window into which effects are most essential.
Plotted from unpublished data obtained during the initial phase of the
research reported in ruberti2018 .
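The switching procedure can be caricatured with a two-level toy model: each Hamiltonian term is an analytical object that can be toggled, and the resulting coherence compared with the "full" case. This is only a cartoon of the idea, with invented numbers, not the actual calculation behind Fig. 12.

```python
import numpy as np

def coherence_trace(include_coupling=True, include_stark=True,
                    steps=2000, dt=0.01):
    """Toy two-level cartoon of 'switching Hamiltonian terms on and off':
    evolve a superposition of two ionic states and track the coherence
    |rho_01| over time. A field-induced coupling and a Stark-like shift can
    be toggled individually, mimicking the modularity described for
    ab initio codes. All numbers are illustrative."""
    e0, e1 = 0.0, 0.5                        # bare state energies (arb. units)
    v = 0.05 if include_coupling else 0.0    # field-induced coupling term
    s = 0.10 if include_stark else 0.0       # Stark-like shift of state 1
    H = np.array([[e0, v], [v, e1 + s]])
    w, U = np.linalg.eigh(H)
    prop = U @ np.diag(np.exp(-1j * w * dt)) @ U.conj().T  # exp(-i H dt)
    psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2) # superposition
    coh = []
    for _ in range(steps):
        psi = prop @ psi
        coh.append(abs(psi[0] * np.conj(psi[1])))
    return min(coh), max(coh)

full = coherence_trace()                               # coherence oscillates
no_coupling = coherence_trace(include_coupling=False)  # coherence frozen
```

Comparing the two runs immediately attributes the coherence dynamics to the coupling term, which is the kind of assignment the dialogue describes.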
Analycia This is certainly a good demonstration of modularity within an _ab
initio_ method! However, would you say this is a typical example? I imagine
that many _ab initio_ methods would struggle to match this level of
modularity. In this case, each individual calculation must come at a
reasonably small cost in terms of computational time and resources, so that
many calculations can be carried out. For some methods, the massive scale of
individual calculations means that this level of modularity cannot be
afforded.
More broadly, in numerical methods you cannot always do this type of switching
procedure, particularly when it comes to spatial or momentum interference
patterns battle2 ; Amini2020 . It is extremely rare to come across numerical
methods that are able to split between these, and provide clear assignments to
the different channels that are interfering. So sometimes, yes, you can switch
interactions on and off and assign things in a modular way, but _ab initio_
methods are often limited in the degree to which they can do this.
Numerio Yes, the degree of modularity often depends on the problem at hand.
Activating and deactivating Hamiltonian terms provides insight in certain
problems, but others will not be aided by this procedure. I suppose the ideal
situation would be to have a method which can solve a given problem using
reasonable computational resources, while keeping enough modularity to provide
the required physical understanding.
In that regard, one of the strongest tools is the use of approximations
specifically tailored to the situation—which is one clear instance of
approximation being a strength, as we have just agreed.
### 4.3 Are _both_ analytical and numerical methods required in scientific
discovery?
_Numerio and Analycia, agreeing that analytical and ab initio methods are not
always used in equal measure, turn to discuss the impact that this has on
knowledge and discovery._
Numerio We have presented a case for analytical and numerical methods working best for scientific discovery when they are used in equal measure. However, is this always necessary? Take Fig. 13, where the analytical model matches experiment just as well as the TDSE model. My natural inclination in a case like this would be to carry out a large, multi-electron calculation for this problem—but since the analytical model has described the experiment so well, would it be worthwhile? In some cases, like this one, analytical methods can stand on their own, while in other cases they will not get you very far and you need to really crank the handle of big codes.
Analycia I agree—often one method will dominate, and it may be because it
works better or for historical reasons. However, while you can prove a model
wrong when its results do not match experiments, it does not work the other
way around: you can never prove that a physical model is right. The agreement
of different theoretical approaches is therefore all the more precious in that
regard. In addition to what you said, analytical and _ab initio_ methods are
two different powerful tools, which lead to differences in our understanding
and interpretations. In different situations one method is more useful than
the other for advancing knowledge.
Numerio Yes, the methods we use will affect our understanding, but maybe we
should not be too hung up on this. We should mostly be driven by the discovery
of new knowledge. For instance, when we explore different systems such as
high-harmonic generation in liquids (e.g. Flettner2003 ; Luu2018 ) or in
solids (e.g. Ghimire2011 ; Vampa2017 ), we start from the knowledge we had in
the gas phase, push its limits, and extend this knowledge. Whether this
knowledge originates from analytical, _ab initio_, or experimental studies is
ultimately not so important.
Analycia I can see what you are getting at, but in practice we cannot ignore
the biases that different methods imprint on our knowledge. Let us see what
our community thinks about this.
The audience responses to this question are presented as Poll 3 in Table 16.
Numerio It seems it is a mixed bag, with most hedging their bets in how often
each method should be used.
Analycia We should take this result with a pinch of salt, but perhaps we can
agree that it means the answer is very contextual. The physical processes we
study should be attacked by exploring all of the approaches we have at hand.
Numerio Yes, but we should also prioritise these methods by their range of
applicability as well as the level of insight they provide.
_Our combatants decide that they will each include more methods in their
arsenal, and work together, to aid the process of scientific discovery.
However, Numerio still has one final bone to pick._
Poll 1 (Twitter link): The distinction of methods that provide quantitative vs qualitative insights is more useful in practice than the analytical and _ab initio_ distinction?

| Platform | Agree | Disagree | Sample size |
|---|---|---|---|
| Zoom | 84% | 16% | 38 |
| Twitter | 100% | 0% | 10 |

Poll 2 (Twitter link): Would you feel more confident in planning an experiment based on guidance from analytical theory, _ab initio_ simulations or a hybrid model?

| Platform | Analytical | _Ab initio_ | Hybrid | Sample size |
|---|---|---|---|---|
| Zoom | 19% | 38% | 43% | 42 |
| Twitter | 29% | 57% | 14% | 7 |

Poll 3 (Twitter link): Combining analytical, numerical, _ab initio_ and experimental methods in equal measure is the best route to discovery?

| Platform | Always | Mostly | Sometimes | Never | Sample size |
|---|---|---|---|---|---|
| Zoom | 15% | 41% | 44% | 0% | 46 |
| Twitter | 18% | 36% | 36% | 9% | 11 |

Poll 4 (Twitter link): What do you agree with more: (1) Computing (and quantum computing power) will advance so much we will not need analytical methods anymore. (2) We do not actually need such computational power to understand physics, as our understanding is more closely represented by analytical models.

| Platform | Agree with (1) | Agree with (2) | Agree with neither | Sample size |
|---|---|---|---|---|
| Zoom | 11% | 34% | 55% | 43 |
| Twitter | 50% | 50% | 0% | 2 |

Table 3: Audience polls taken over the Zoom and Twitter platforms during the
presentation. (Footnote 16: As part of the audience response to this poll,
J. Tennyson argued that this is a rhetorical question with the wording placing
the velvet on the answer ‘yes’, which should be considered when interpreting
the poll results.)
### 4.4 The role of increasing computational power
_The consequences of increasing computational power are a recurring theme in
the development of modern physics, mentioned time and time again. Numerio
turns to its role in attoscience._
* Numerio One aspect that was very apparent as we looked at the evolution of numerical and _ab initio_ methods in attoscience is that, even given the considerable challenges initially faced by the field, these methods have achieved many tasks that would have seemed completely impossible even a scant few years ago.
Going out on a limb, I would even claim that these improvements will continue
and accelerate, particularly once quantum computers become available, and that
these advancements will drastically reduce the need for analytical methods, or
even—despite their advantages, which we discussed earlier—eliminate it
altogether. And, I wonder, does our community agree with this?
The responses to this poll are shown as poll 4 in Table 3.
Analycia I find it quite interesting that you should use a phrasing of the
form ‘computing power will make analytical theory obsolete’—because of how
_old_ that idea is. That concept dates back a full six decades berry2007 , to
when electronic computers were first being developed in the 1960s (to replace
_human_ computers). Within that context, it is understandable that people got
the impression that analytical theory—with its emphasis on special functions,
integral transforms and asymptotic methods—would be displaced by raw
computation.
However, over the past sixty years, time and time again the facts have
demonstrated the opposite: we now place a _higher_ value on special functions
and asymptotic methods than we did back then. Of course, it is possible that
at least some of the analytical methods of attoscience will be displaced by
raw simulations, at least of the single-electron TDSE, but whenever this
narrative starts to look appealing, it is important to take the long view and
keep this historical context in mind.
### 4.5 Audience Questions and Comments
Over the course of the panel discussion battle , questions and comments were
raised by the audience that helped challenge and develop the arguments
fielded by the combatants. We present them here, voicing our answers through
Numerio and Analycia, and referencing answers already given above.
* Reinhard Dörner Is our field really after discoveries? Is it not more about finding and solving interesting puzzles?
This question was motivated by the distinction made by Thomas Kuhn in his
famous book _The structure of scientific revolutions_ Kuhn2012 . Therein, he
argued that the times where the most progress is steadily made are times of
‘normal science’, where what scientists do is best described as solving
riddles with the tools of the paradigm they are working in Doerner2020 .
Numerio I agree that we are not looking for new fundamental laws. Here it is
instructive to connect back to the definition of ‘ _ab initio_ ’, particularly
to remind ourselves that we have fixed the fundamental, theoretical ‘reference
frame’. In attosecond physics, we are not yet looking for new fundamental
laws: we already have established fundamental laws, the ‘rules of the game’,
and we are looking for new solutions to the fundamental quantum-mechanical
equations of motion. The space of solutions is potentially infinite, as is the
amount of new physical phenomena yet to be described. In our case, we are
interested in understanding the physics of atoms and molecules, driven by
light-matter interactions, in new and unexplored regimes—and I would agree
that this can be described as finding and solving interesting puzzles.
Analycia I would disagree: the fact that something is not ‘fundamental’ does
not stop it from being a discovery. If nothing else, that viewpoint completely
disregards discoveries made in other sciences which are not ‘fundamental’. I
would say that it is still discovery if it is new knowledge.
Numerio Perhaps this is a matter of terminology: to me, speaking of
‘discovery’ entails finding new laws or entirely novel particles or
dimensions, which do not occur in attoscience. In our domain, the basic rules
are already set, and we are solving a puzzle which is as interesting as it is
difficult. There are many different ways of arranging the pieces of this
puzzle, with each one representing, in principle, a different physical
scenario that we can tackle with our theoretical methods, be they _ab initio_
, analytical, or hybrid. That said, we know only a limited set of such
scenarios and I agree that, when we find a new one, it can also be seen as a
discovery.
Analycia Yes, I see what you mean—but there is not always such a clear split
between rules and scenarios, i.e., between laws of physics and their
solutions. There is a level where we only have the fundamental laws, but there
are also higher levels of understanding and abstraction where the behaviour of
a set of solutions can become a ‘rule’, a law of physics, in itself. And, I
would argue, our role in attoscience is to discover these laws. However, I do
agree that our work mostly takes place within the fixed paradigm of a single
set of fundamental laws.
_The conclusion that Analycia and Numerio take away from this is that a
solution to a problem may still be interesting and useful, irrespective of
whether it is called a discovery._
* Tom Meltzer Is the range of applicability of a model not more important than whether it is _ab initio_, numerical, analytical, semiclassical, and so on?
Numerio I agree that this is an important aspect of any method. That said, I
would also say that an even more important lens is whether the model gives us
insights of a qualitative or quantitative nature, as we argued in more detail
in Section 1.4.3.
Analycia I have a similar view on this. Here it is important to remark that
one of the big reasons why, I would argue, we should move away from the
‘analytical-versus-_ab initio_ ’ view is that, ultimately, it is impossible to
have a method which is truly _ab initio_. We discussed some of this in detail
in Section 1.4.6, regarding the impact that the choice of basis set has on a
method: to the extent that we must supply physical insight into that choice,
it takes the method away from the _ab initio_ ideal.
However, I would go beyond that, since there are many other ways in which the
base assumptions of how we phrase the problem—which often go unquestioned—can
affect the physics. The most obvious example in attoscience is macroscopic
effects coming from the propagation of light inside our sample, but there are
also other, more esoteric aspects—say, the appearance of collective quantum
effects such as superfluorescence Mercadier2019 ; Rohringer2020qbattles , or
effects coming from field quantisation Lewenstein2020quantum —which are ruled
out by the basic framing, and this takes us away from ever reaching the _ab
initio_ ideal.
Numerio Those are fair points, but there is also a danger of throwing out the
baby with the bathwater here, in discarding the valuable work done in pursuit
of that ideal. In that regard, I would argue that a better definition for ‘
_ab initio_ ’ could be ‘models with approximations that have well-defined
error bounds for an explicit parameter range, such that any neglected physics
will lie within these bounds’. We know what physics gets neglected, and we
should be able to quantify the neglected effects well enough to know they are
not relevant (as well as which types of questions become inaccessible); for
any additional sources of error, like the choice of basis set, the error must
be quantifiable.
This is, I would say, where the range of applicability of the model matters
most, as it dictates whether those sources of error are quantifiable and
negligible—what one could call ‘allowed’ approximations—or whether we have
strayed into a regime with unquantified approximations. This is then a major
component that determines where we can place our method on the
qualitative-versus-quantitative spectrum.
_In summary, our combatants agree that the range of applicability is a central
aspect to consider, and one that is essential for reshaping ideas on what is
‘_ab initio_’. However, they also assert that the characterisation of whether
a model provides qualitative or quantitative insights is the most important
feature to consider._
* (Anonymous) Can the single-configuration time-dependent Hartree-Fock method be used effectively to study multi-electron effects on atomic and molecular systems?
Numerio The method you mention (TDHF) can definitely be used to describe
multi-electron effects. It is equivalent, in its linear-response
reformulation, to the Random Phase Approximation with exchange (RPAX) method,
which has been widely used (mainly by the condensed-matter community) and
which can provide accurate molecular excitation energies and transition
moments. Using TDHF in its full time-dependent form to study non-perturbative
dynamics, beyond a perturbative approach, is certainly possible.
Analycia That sounds quite complicated for limited gain. Is it really worth
it?
Numerio This is a good method, but we also have available several multi-
configuration versions, including MCTDHF and TD CAS- and RAS-SCF, which are
generally more effective. This makes TDHF, in my opinion, a computationally
cheaper alternative that should be considered for large systems that are not
amenable to the full multi-configurational treatment.
* Jens Biegert An experiment is like doing an _ab initio_ simulation in the sense that one can change the boundary conditions, but it does not necessarily allow you to disentangle what happens. However, analytic, semi-analytical and hybrid methods do allow insight.
Numerio It is true that _ab initio_ calculations are often characterised as
the theoretical analogue of experiments, and that analytical methods drill
down into the insightful details. However, in my experience this is
a mischaracterisation, as I am aware of many instances where _ab initio_
methods were able to disentangle a variety of physical interactions, by virtue
of their modular properties.
Analycia Oh, really? I would be interested to hear more, as this is an area
where I always felt we analyticists held an advantage.
_Given the level of interest in this topic, the combatants broadened its scope
into the discussion given in Sec. 4.2._
## Author contributions
All the authors were involved in the preparation of the manuscript. All the
authors have read and approved the final manuscript.
## Author ORCID iDs
1. Gregory S. J. Armstrong: 0000-0001-5949-2626
2. Margarita A. Khokhlova: 0000-0002-5687-487X
3. Andrew S. Maxwell: 0000-0002-6503-4661
4. Emilio Pisanty: 0000-0003-0598-8524
5. Marco Ruberti: 0000-0003-0424-3643
## Acknowledgements
We (the authors, and our combatants Analycia and Numerio) are deeply grateful
to the organisers of the Quantum Battles in Attoscience conference as well as
to the audience members for their active participation, and we are especially
grateful to the Battle’s ‘referee’, Stefanie Gräfe, for her insightful and
even-handed moderation during the panel discussion. We also thank S.D.
Bartlett, T. Rudolph and R.W. Spekkens Bartlett2006 as well as G. Galilei
Galilei1632 for inspiration for the format of this paper.
GSJA acknowledges funding from the UK Engineering and Physical Sciences
Research Council (EPSRC) under grant EP/T019530/1. MAK acknowledges funding
from the Alexander von Humboldt Foundation. ASM acknowledges grant
EP/P510270/1 funded by the UK EPSRC. ASM and EP acknowledge support from ERC
AdG NOQIA, Spanish Ministry of Economy and Competitiveness (“Severo Ochoa”
program for Centres of Excellence in R&D (CEX2019-000910-S), Plan National
FIDEUA PID2019-106901GB-I00/10.13039/501100011033, FPI), Fundació Privada
Cellex, Fundació Mir-Puig, and from Generalitat de Catalunya (AGAUR Grant No.
2017 SGR 1341, CERCA program, QuantumCAT U16-011424, co-funded by the ERDF
Operational Program of Catalonia 2014-2020), MINECO-EU QUANTERA MAQS (funded
by State Research Agency (AEI) PCI2019-111828-2/10.13039/501100011033), EU
Horizon 2020 FET-OPEN OPTOLogic (Grant No 899794), Marie Sklodowska-Curie
grant STRETCH No. 101029393, and the National Science Centre, Poland-Symfonia
Grant No. 2016/20/W/ST4/00314. MR acknowledges funding from the EPSRC/DSTL
MURI grant EP/N018680/1.
## References
* (1) T. Brabec and F. Krausz. Intense few-cycle laser fields: Frontiers of nonlinear optics. _Rev. Mod. Phys._ 72 no. 2, pp. 545–591 (2000).
* (2) F. Krausz and M. Ivanov. Attosecond physics. _Rev. Mod. Phys._ 81 no. 1, pp. 163–234 (2009). NRC eprint.
* (3) F. Calegari, G. Sansone, S. Stagira, C. Vozzi and M. Nisoli. Advances in attosecond science. _J. Phys. B: At. Mol. Opt. Phys._ 49 no. 6, p. 062001 (2016).
* (4) E. Goulielmakis, Z.-H. Loh, A. Wirth et al. Real-time observation of valence electron motion. _Nature_ 466, pp. 739–743 (2010). Author eprint.
* (5) F. Calegari, D. Ayuso, A. Trabattoni et al. Ultrafast electron dynamics in phenylalanine initiated by attosecond pulses. _Science_ 346 no. 6207, pp. 336–339 (2014). UAM eprint.
* (6) E. Goulielmakis, M. Uiberacker, R. Kienberger et al. Direct measurement of light waves. _Science_ 305 no. 5688, pp. 1267–1269 (2004). Author eprint.
* (7) G. S. J. Armstrong, M. A. Khokhlova, M. Labeye et al. Quantum Battle 3 – Numerical vs Analytical Methods. _Quantum Battles in Attoscience_ online conference (1-3 July 2020). youtu.be/VJnFfHVDym4.
* (8) C. Cohen-Tannoudji, J. Dupont-Roc and G. Grynberg. _Photons and Atoms: Introduction to Quantum Electrodynamics_ (Wiley, 1997).
* (9) P. A. M. Dirac. Quantum mechanics of many-electron systems. _Proc. Roy. Soc. Lond., Ser. A_ 123 no. 792, pp. 714–733 (1929). JSTOR:95222.
* (10) M. Lewenstein, M. F. Ciappina, E. Pisanty et al. The quantum nature of light in high harmonic generation. arXiv:2008.10221 (2020).
* (11) A. Scrinzi. Time-dependent Schrödinger equation. In T. Schultz and M. Vrakking (eds.), _Attosecond and XUV Physics: Ultrafast Dynamics and Spectroscopy_ , pp. 257–292 (Wiley-VCH, Weinheim, 2014).
* (12) D. Bauer and P. Koval. Qprop: A Schrödinger-solver for intense laser–atom interaction. _Comput. Phys. Commun._ 174 no. 5, pp. 396–421 (2006). arXiv:physics/0507089.
* (13) V. Tulsky and D. Bauer. Qprop with faster calculation of photoelectron spectra. _Comput. Phys. Commun._ 251, p. 107098 (2020). arXiv:1907.08595.
* (14) S. Patchkovskii and H. G. Muller. Simple, accurate, and efficient implementation of 1-electron atomic time-dependent Schrödinger equation in spherical coordinates. _Comput. Phys. Commun._ 199, pp. 153–169 (2016). CORE eprint.
* (15) S. Fritzsche. A fresh computational approach to atomic structures, processes and cascades. _Comput. Phys. Commun._ 240, pp. 1–14 (2019). GSI eprint.
* (16) C. Ullrich. _Time-dependent density functional theory: concepts and applications_ (Oxford University Press, 2012).
* (17) W. Kohn and L. J. Sham. Self-consistent equations including exchange and correlation effects. _Phys. Rev._ 140, pp. A1133–A1138 (1965).
* (18) U. De Giovannini, G. Brunetto, A. Castro, J. Walkenhorst and A. Rubio. Simulating pump–probe photoelectron and absorption spectroscopy on the attosecond timescale with time-dependent density functional theory. _ChemPhysChem_ 14, pp. 1363–1376 (2013). arXiv:1301.1958.
* (19) P. Wopperer, U. De Giovannini and A. Rubio. Efficient and accurate modeling of electron photoemission in nanostructures with TDDFT. _Eur. Phys. J. B_ 90, p. 1307 (2017). arXiv:1608.02818.
* (20) U. De Giovannini and A. Castro. Real-time and real-space time-dependent density-functional theory approach to attosecond dynamics. In M. Vrakking and F. Lepine (eds.), _Attosecond Molecular Dynamics_ , vol. 13 of _Theoretical and Computational Chemistry series_ , pp. 424–461 (Royal Society Chemistry, Cambridge, 2018).
* (21) M. M. Lara-Astiaso, M. Galli, A. Trabattoni et al. Attosecond pump–probe spectroscopy of charge dynamics in tryptophan. _J. Phys. Chem. Lett._ 9, pp. 4570–4577 (2018). DESY eprint.
* (22) N. Tancogne-Dejean, M. J. T. Oliveira, X. Andrade et al. Octopus, a computational framework for exploring light-driven phenomena and quantum dynamics in extended and finite systems. _J. Chem. Phys._ 152 no. 12, p. 124119 (2020).
* (23) M. Noda, S. A. Sato, Y. Hirokawa et al. SALMON: Scalable Ab-initio Light–Matter simulator for Optics and Nanoscience. _Comput. Phys. Commun._ 235, pp. 356–365 (2019). arXiv:1804.01404.
* (24) E. Aprà, E. J. Bylaska, W. A. de Jong et al. NWChem: Past, present, and future. _J. Chem. Phys._ 152 no. 18, p. 184102 (2020).
* (25) A. García, N. Papior, A. Akhtar et al. Siesta: Recent developments and applications. _J. Chem. Phys._ 152 no. 20, p. 204108 (2020). arXiv:2006.01270.
* (26) E. Lorin, A. Bandrauk and B. Sanders (eds.). BIRS-CMO Workshop ‘Mathematical and Numerical Methods for Time-Dependent Quantum Mechanics – from Dynamics to Quantum Information’. Oaxaca de Juárez, México, 13-18 August 2017. Video proceedings available at https://www.birs.ca/events/2017/5-day-workshops/17w5010.
* (27) E. Perfetto and G. Stefanucci. CHEERS: a tool for correlated hole-electron evolution from real-time simulations. _J. Phys: Condens. Matter_ 30, p. 465901 (2018). arXiv:1810.03322.
* (28) E. Perfetto, D. Sangalli, A. Marini and G. Stefanucci. Ultrafast charge migration in XUV photoexcited phenylalanine: A first-principles study based on real-time nonequilibrium Green’s functions. _J. Phys. Chem. Lett._ 9, pp. 1353–1358 (2018). arXiv:1803.05681.
* (29) A. Szabo and N. S. Ostlund. _Modern quantum chemistry: Introduction to advanced electronic structure theory_ (Dover Publications, Mineola, N.Y., 1996).
* (30) J. Schirmer. _Many-Body Methods for Atoms, Molecules and Clusters_ , vol. 94 of _Lecture Notes in Chemistry_ (Springer, 2018).
* (31) R. Evarestov. _Quantum Chemistry of Solids: LCAO Treatment of Crystals and Nanostructures_ , vol. 153 of _Springer Series in Solid-State Sciences_ (Springer, 2012).
* (32) M. Ruberti, V. Averbukh and P. Decleva. B-spline algebraic diagrammatic construction: Application to photoionization cross-sections and high-order harmonic generation. _J. Chem. Phys._ 141, p. 164126 (2014). ICL eprint.
* (33) E. Simpson, A. Sanchez-Gonzalez, D. Austin et al. Polarisation response of delay dependent absorption modulation in strong field dressed helium atoms probed near threshold. _New J. Phys._ 18, p. 083032 (2016).
* (34) V. Averbukh and M. Ruberti. First-principles many-electron dynamics using the B-spline algebraic diagrammatic construction approach. In M. Vrakking and F. Lepine (eds.), _Attosecond Molecular Dynamics_ , vol. 13 of _Theoretical and Computational Chemistry series_ , pp. 68–102 (Royal Society Chemistry, Cambridge, 2018).
* (35) M. Ruberti, P. Decleva and V. Averbukh. Multi-channel dynamics in high harmonic generation of aligned CO2: _ab initio_ analysis with time-dependent B-spline algebraic diagrammatic construction. _Phys. Chem. Chem. Phys._ 20, pp. 8311–8325 (2018). ICL eprint.
* (36) M. Ruberti, P. Decleva and V. Averbukh. Full ab initio many-electron simulation of attosecond molecular pump–probe spectroscopy. _J. Chem. Theory Comput._ 14 no. 10, pp. 4991–5000 (2018).
* (37) M. Ruberti. Restricted correlation space B-spline ADC approach to molecular ionization: Theory and applications to total photoionization cross-sections. _J. Chem. Theory Comput._ 15 no. 6, pp. 3635–3653 (2019).
* (38) M. Ruberti. Onset of ionic coherence and charge dynamics in attosecond molecular ionization. _Phys. Chem. Chem. Phys._ 21, pp. 17584–17604 (2019).
* (39) M. Ruberti. Quantum electronic coherences by attosecond transient absorption spectroscopy: _ab initio_ B-spline RCS-ADC study. _Faraday Discuss._ 228 no. 0, pp. 286–311 (2021).
* (40) D. Schwickert, M. Ruberti, P. Kolorenč et al. Electronic quantum coherence in glycine probed with femtosecond X-rays. arXiv:2012.04852 (2020).
* (41) V. P. Majety, A. Zielinski and A. Scrinzi. Photoionization of few electron systems: a hybrid coupled channels approach. _New J. Phys._ 17 no. 6, p. 063002 (2015).
* (42) V. P. Majety and A. Scrinzi. Dynamic exchange in the strong field ionization of molecules. _Phys. Rev. Lett._ 115, p. 103002 (2015). arXiv:1505.03349.
* (43) M. H. Beck, A. Jäckle, G. A. Worth and H.-D. Meyer. The multiconfiguration time-dependent Hartree (MCTDH) method: a highly efficient algorithm for propagating wavepackets. _Phys. Rep._ 324 no. 1, pp. 1–105 (2000). UH eprint.
* (44) S. Sukiasyan, C. McDonald, C. Van Vlack et al. Correlated few-electron dynamics in intense laser fields. _Chem. Phys._ 366 no. 1, pp. 37–45 (2009).
* (45) C. Marante, M. Klinker, I. Corral et al. Hybrid-basis close-coupling interface to quantum chemistry packages for the treatment of ionization problems. _J. Chem. Theory Comput._ 13 no. 2, p. 499 (2017).
* (46) C. Marante, M. Klinker, T. Kjellsson et al. Photoionization using the xchem approach: Total and partial cross sections of ne and resonance parameters above the 2s2 2p5 threshold. _Phys. Rev. A_ 96, p. 022507 (2017).
* (47) M. Klinker, C. Marante, L. Argenti, J. Gonzalez-Vazquez and F. Martin. Electron correlation in the ionization continuum of molecules: Photoionization of N2 in the vicinity of the hopfield series of autoionizing states. _J. Phys. Chem. Lett._ 9, p. 756 (2018). UAM eprint.
* (48) D. Toffoli and P. Decleva. A multichannel least-squares B-spline approach to molecular photoionization: Theory, implementation, and applications within the configuration-interaction singles approximation. _J. Chem. Theory Comput._ 12, p. 4996 (2016).
* (49) D. D. A. Clarke, G. S. J. Armstrong, A. C. Brown and H. W. van der Hart. $R$-matrix-with-time-dependence theory for ultrafast atomic processes in arbitrary light fields. _Phys. Rev. A_ 98 no. 5, p. 053442 (2018).
* (50) A. C. Brown, G. S. J. Armstrong, J. Benda et al. RMT: R-matrix with time-dependence. Solving the semi-relativistic, time-dependent Schrödinger equation for general, multielectron atoms and molecules in intense, ultrashort, arbitrarily polarized laser pulses. _Comput. Phys. Commun._ 250, p. 107062 (2020). QUB eprint.
* (51) J. Benda, J. Gorfinkiel, Z. Mašín et al. Perturbative and nonperturbative photoionization of H2 and H2O using the molecular $R$-matrix-with-time method. _Phys. Rev. A_ 102, p. 052826 (2020). QUB eprint.
* (52) H. Miyagi and L. B. Madsen. Time-dependent restricted-active-space self-consistent-field theory for laser-driven many-electron dynamics. _Phys. Rev. A_ 87, p. 062511 (2013). arXiv:1304.5904.
* (53) H. Miyagi and L. B. Madsen. Time-dependent restricted-active-space self-consistent-field theory for laser-driven many-electron dynamics. II. Extended formulation and numerical analysis. _Phys. Rev. A_ 89, p. 063416 (2014). arXiv:1405.5380.
* (54) T. Sato and K. L. Ishikawa. Time-dependent complete-active-space self-consistent-field method for multielectron dynamics in intense laser fields. _Phys. Rev. A_ 88 no. 2, p. 023402 (2013). arXiv:1304.5835.
* (55) L. Greenman, P. J. Ho, S. Pabst et al. Implementation of the time-dependent configuration-interaction singles method for atomic strong-field processes. _Phys. Rev. A_ 82, p. 023406 (2010). DESY eprint.
* (56) T.-T. Nguyen-Dang, E. Couture-Bienvenue, J. Viau-Trudel and A. Sainjon. Time-dependent quantum chemistry of laser driven many-electron molecules. _J. Chem. Phys._ 141, p. 244116 (2014).
* (57) M. Vrakking and F. Lepine (eds.). _Attosecond Molecular Dynamics_ , vol. 13 of _Theoretical and Computational Chemistry series_ (Royal Society Chemistry, Cambridge, 2018).
* (58) A. Scrinzi. URL https://trecx.physik.lmu.de.
* (59) T. Carette, J. Dahlström, L. Argenti and E. Lindroth. Multiconfigurational Hartree-Fock close-coupling ansatz: Application to the argon photoionization cross section and delays. _Phys. Rev. A_ 87, p. 023420 (2013). UAM eprint.
* (60) R. Sawada, T. Sato and K. L. Ishikawa. Implementation of the multiconfiguration time-dependent Hartree-Fock method for general molecules on a multiresolution Cartesian grid. _Phys. Rev. A_ 93, p. 023434 (2016).
* (61) L. Keldysh. Ionization in the field of a strong electromagnetic wave. _Sov. Phys. JETP_ 20 no. 5, p. 1307 (1965). [Zh. Eksp. Teor. Fiz. 47 no. 5, p. 1945 (1965)].
* (62) M. Berry. Why are special functions special?. _Phys. Today_ 54 no. 4, p. 11 (2007). Author eprint.
* (63) J. Borwein and R. Crandall. Closed forms: what they are and why we care. _Not. Am. Math. Soc._ 60, pp. 50–65 (2013).
* (64) D. Braak. Integrability of the Rabi model. _Phys. Rev. Lett._ 107 no. 10, p. 100401 (2011). arXiv:1103.2461.
* (65) G. Andrews, R. Askey and R. Roy. _Special Functions_ , vol. 71 of _Encyclopedia of Mathematics and its Applications_ (Cambridge University Press, Cambridge, 1999).
* (66) F. H. M. Faisal. Multiple absorption of laser photons by atoms. _J. Phys. B: At. Mol. Phys._ 6 no. 4, p. L89 (1973).
* (67) H. Reiss. Effect of an intense electromagnetic field on a weakly bound system. _Phys. Rev. A_ 22 no. 5, pp. 1786–1813 (1980).
* (68) A. Perelomov, V. Popov and M. Terent’ev. Ionization of atoms in an alternating electric field. _Sov. Phys. JETP_ 23 no. 5, p. 924 (1966). [Zh. Eksp. Teor. Fiz. 50 no. 5, p. 1393 (1966)].
* (69) M. Ammosov, N. Delone and V. Krainov. Tunnel ionization of complex atoms and of atomic ions in an alternating electromagnetic field. _Sov. Phys. JETP_ 64 no. 6, p. 1191 (1986). [Zh. Eksp. Teor. Fiz. 91 no. 6, p. 2008 (1986)].
* (70) K. Amini, J. Biegert, F. Calegari et al. Symphony on strong field approximation. _Rep. Prog. Phys._ 82 no. 11, p. 116001 (2019). arXiv:1812.11447.
* (71) M. Lewenstein, Ph. Balcou, M. Yu. Ivanov, A. L’Huillier and P. B. Corkum. Theory of high-harmonic generation by low-frequency laser fields. _Phys. Rev. A_ 49 no. 3, pp. 2117–2132 (1994).
* (72) A. Galstyan, O. Chuluunbaatar, A. Hamido et al. Reformulation of the strong-field approximation for light-matter interactions. _Phys. Rev. A_ 93 no. 2, p. 023422 (2016). arXiv:1512.00681.
* (73) S. V. Popruzhenko. Keldysh theory of strong field ionization: history, applications, difficulties and perspectives. _J. Phys. B: At. Mol. Opt. Phys._ 47 no. 20, p. 204001 (2014).
* (74) P. Salières, B. Carré, L. Le Déroff et al. Feynman’s path-integral approach for intense-laser-atom interactions. _Science_ 292 no. 5518, pp. 902–905 (2001).
* (75) M. Ivanov and O. Smirnova. Multielectron high harmonic generation: simple man on a complex plane. In T. Schultz and M. Vrakking (eds.), _Attosecond and XUV Physics: Ultrafast Dynamics and Spectroscopy_ , pp. 201–256 (Wiley-VCH, Weinheim, 2014). arXiv:1304.2413.
* (76) L. Torlina and O. Smirnova. Time-dependent analytical $R$-matrix approach for strong-field dynamics. I. One-electron systems. _Phys. Rev. A_ 86 no. 4, p. 043408 (2012).
* (77) E. Pisanty Alatorre. _Electron dynamics in complex time and complex space_. PhD thesis, Imperial College London (2016).
* (78) E. Pisanty and A. Jiménez-Galán. Strong-field approximation in a rotating frame: High-order harmonic emission from $p$ states in bicircular fields. _Phys. Rev. A_ 96 no. 6, p. 063401 (2017). arXiv:1709.00397.
* (79) L. Torlina and O. Smirnova. Coulomb time delays in high harmonic generation. _New J. Phys._ 19 no. 2, p. 023012 (2017).
* (80) O. Smirnova, M. Spanner and M. Ivanov. Analytical solutions for strong field-driven atomic and molecular one- and two-electron continua and applications to strong-field problems. _Phys. Rev. A_ 77 no. 3, p. 033407 (2008). NRC eprint.
# Pulse Index Modulation
Sultan Aldirmaz-Colak, Erdogan Aydin, Yasin Celik, Yusuf Acar, and Ertugrul
Basar S. Aldırmaz-Çolak is with the Department of Electronics and
Communication Engineering, Kocaeli University, Kocaeli 41380, Turkey (e-mail:
sultan.aldirmaz@kocaeli.edu.tr)E. Aydın is with the Department of Electrical
and Electronics Engineering, Istanbul Medeniyet University, Istanbul 34857,
Turkey (e-mail: erdogan.aydin@medeniyet.edu.tr)Y. Celik is with the Department
of Electrical and Electronics Engineering, Aksaray University, Aksaray 68100,
Turkey (e-mail: yasincelik@aksaray.edu.tr)Y. Acar is with the Group of
Electronic Warfare Systems, STM Defense Technologies Engineering, Inc., 06510,
Ankara, Turkey (e-mail: yusuf.acar@stm.com.tr)E. Basar is with the CoreLab,
Department of Electrical and Electronics Engineering, Koç University, Istanbul
34450, Turkey (e-mail: ebasar@ku.edu.tr)Manuscript received April 19, 2021;
revised August 26, 2021.
###### Abstract
Emerging systems such as Internet-of-things (IoT) and machine-to-machine (M2M)
communications impose strict requirements on the power consumption of the
equipment used and on the complexity of the transceiver design. As a result,
multiple-input multiple-output (MIMO) solutions might not be directly suitable
for these systems due to their high complexity, inter-antenna synchronization
(IAS) requirement, and severe inter-antenna interference (IAI) problems. To
overcome these problems, we propose two novel index modulation (IM) schemes,
namely pulse index modulation (PIM) and generalized PIM (GPIM), for
single-input single-output (SISO) systems. The proposed models use well-
localized and orthogonal Hermite-Gaussian pulses for data transmission and
provide high spectral efficiency owing to the Hermite-Gaussian pulse indices.
Moreover, analytical derivations and computer simulations show that the
proposed PIM and GPIM systems achieve better error performance and
considerable signal-to-noise ratio (SNR) gains compared to existing spatial
modulation (SM), quadrature SM (QSM), and traditional $M$-ary systems.
###### Index Terms:
Hermite-Gaussian pulses, Internet-of-things (IoT), index modulation (IM),
machine-to-machine (M2M), single-input single-output (SISO).
## I Introduction
Internet-of-things (IoT) and machine-to-machine (M2M) communications are
emerging technologies supported by 5G. According to recent reports, in 2020
the number of IoT connections exceeded the number of non-IoT ones. By 2025,
more than 30 billion IoT connections are expected, so the efficient use of the
spectrum comes to the fore in system design [1]. IoT and M2M systems require
low-power equipment and low complexity. Therefore, single-input single-output
(SISO) solutions are one step ahead of their multiple-input multiple-output
(MIMO) counterparts.
MIMO transmission provides transmitter (Tx) and receiver (Rx) diversity gain
and increases the data rate [2]. However, inter-antenna synchronization (IAS)
and inter-antenna interference (IAI) are major drawbacks of MIMO schemes. In
2008, Mesleh et al. proposed a novel transmission scheme, namely spatial
modulation (SM), to overcome the aforementioned problems of MIMO systems [3].
SM has attracted considerable attention from researchers. In this technique,
only one antenna among all transmit antennas is activated during each symbol
duration. Incoming bits are separated into index bits, which determine the
active antenna index, and modulated bits, which constitute symbols. Hence,
information is conveyed not only by symbols but also by active antenna
indices. Since only one antenna is active at any symbol time, IAI is avoided
and the IAS requirement is removed. However, it is stated in [4, 5] that
antenna switching in each symbol duration reduces the spectral efficiency (SE)
in practice. Moreover, although only one RF chain is sufficient in SM,
multiple antennas are still needed.

To further exploit indexing mechanisms, the concept of SM has been generalized
to other resources of communication systems, and index modulation (IM) has
emerged. For instance, Basar et al. proposed orthogonal frequency division
multiplexing with index modulation (OFDM-IM), which provides not only higher
SE but also improved performance compared to classical OFDM [6]. In OFDM-IM,
subcarriers are divided into groups, and in each group only a few subcarriers
are activated according to index bits to convey modulated symbols. To further
improve the SE of OFDM-IM, Mao et al. proposed dual-mode OFDM-IM [7], which,
unlike OFDM-IM, utilizes all subcarriers for symbol transmission. These
techniques quickly became popular, and many variants have been proposed in a
very short time [8, 9]. More recently, IM has been utilized to select
orthogonal codes in code-division multiple access (CDMA) communication [10].
Both OFDM and CDMA are utilized for wideband communication; however, their
complexities are relatively high. In particular, SM and general MIMO schemes
require the estimation of all channels between each Tx-Rx antenna pair, so
their receiver complexity and channel estimation overhead are quite high.
Unlike these traditional techniques, in this letter we focus on novel IM-based
low-complexity transceiver designs with single antennas and high SE for
applications such as IoT and M2M.
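For concreteness, the SM bit split described above can be sketched in a few lines. The mapping below is purely illustrative: the antenna count, modulation order, and binary labeling are assumptions for the example and are not taken from [3] (Gray labeling and the actual constellation are omitted).

```python
import math

def sm_map(bits, n_tx=4, mod_order=4):
    """Split an incoming bit block into antenna-index bits and symbol bits,
    as in spatial modulation (illustrative natural-binary labeling)."""
    k_idx = int(math.log2(n_tx))       # bits selecting the active antenna
    k_sym = int(math.log2(mod_order))  # bits selecting the M-ary symbol
    assert len(bits) == k_idx + k_sym
    antenna = int("".join(map(str, bits[:k_idx])), 2)  # active antenna index
    symbol = int("".join(map(str, bits[k_idx:])), 2)   # constellation index
    return antenna, symbol

# 4 Tx antennas, QPSK: 2 index bits + 2 symbol bits = 4 bits per channel use
print(sm_map([1, 0, 1, 1]))  # → (2, 3): antenna 2, symbol 3
```

Only the antenna indexed by the first bit group radiates during the symbol interval, which is why neither IAS nor IAI arises.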
Hermite-Gaussian pulses are widely used in ultra-wideband communication [11,
12]. In [11], the authors proposed a spectrum-efficient communication system
that uses a summation of binary phase-shift keying (BPSK) modulated Hermite-
Gaussian pulses of different orders. Since these pulses are orthogonal to each
other, transmitting a linear combination of these pulses, each carrying a
different symbol, provides higher SE.
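The orthonormality that this superposition relies on can be checked numerically. The sketch below uses the Hermite-Gaussian normalization defined later in Section II; the time grid and the range of orders are illustrative choices, not values from the letter.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

def hermite_gaussian(v, t):
    """Hermite-Gaussian pulse of order v:
    psi_v(t) = 2^(1/4)/sqrt(2^v v!) * H_v(sqrt(2*pi) t) * exp(-pi t^2)."""
    coeffs = np.zeros(v + 1)
    coeffs[v] = 1.0  # selects the physicists' Hermite polynomial H_v
    return (2 ** 0.25 / np.sqrt(2 ** v * factorial(v))
            * hermval(np.sqrt(2 * np.pi) * t, coeffs)
            * np.exp(-np.pi * t ** 2))

t = np.linspace(-6, 6, 4001)
dt = t[1] - t[0]
# Gram matrix of inner products <psi_m, psi_n> for orders 0..3:
# ~1 on the diagonal, ~0 off the diagonal
G = np.array([[np.sum(hermite_gaussian(m, t) * hermite_gaussian(n, t)) * dt
               for n in range(4)] for m in range(4)])
print(np.round(G, 3))  # ≈ the 4x4 identity matrix
```

Because the Gram matrix is (numerically) the identity, symbols riding on different pulse orders can be separated at the receiver by simple correlation.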
In this letter, we propose two novel IM schemes that, according to the
incoming information bits, activate certain Hermite-Gaussian pulse shapes for
transmission instead of antenna indices. Unlike SM and OFDM-IM systems, the
proposed pulse index modulation (PIM) and generalized PIM (GPIM) schemes
require neither multiple antennas nor multiple subcarriers, and their
transmitters have relatively low complexity. Thus, they are suitable for M2M
and IoT applications. The main contributions of the letter are summarized as
follows:
* •
We introduce two novel PIM schemes for SISO systems. Similar to SM, index bits
are used as an extra dimension to convey data bits besides conventional
constellation mapping.
* •
We propose a low complexity detector that requires only one Tx-Rx channel
state information (CSI) estimation. Since the overhead of channel estimation
is low, more data can be transmitted during the coherence time.
* •
To increase the SE of the PIM technique, more than one pulse can be sent
together owing to orthogonality property of Hermite-Gaussian pulses.
Therefore, we introduce the GPIM scheme for SISO systems.
* •
We also obtain the average bit error probability (ABEP) for the maximum
likelihood (ML) detector. ABEP results match well with the simulation results.
The remainder of this letter is organized as follows. In Section II, we
introduce the system model of PIM and GPIM schemes. In Section III,
performance analysis of the proposed two schemes are presented. Simulation
results and performance comparisons are given in Section IV. Finally, the
letter is concluded in Section V.
Notation: Throughout the letter, scalar values are italicized,
vectors/matrices are presented by bold lower/upper case symbols. The transpose
and the conjugate transpose are denoted by $(\cdot)^{T}$ and $(\cdot)^{H}$.
$\lfloor.\rfloor$, $||.||$, and $C(.,.)$ represent the floor operation,
Euclidean norm, and Binomial coefficient, respectively.
$\mathcal{CN}(0,\sigma^{2})\ $ represents the complex Gaussian distribution
with zero mean and variance $\sigma^{2}$ and $\textbf{I}_{n}$ is the $n\times
n$ identity matrix. Last, $\frac{\partial}{\partial t}$ represents partial
derivative.
Figure 1: Different Hermite-Gaussian pulses (a) with $L=127$ in time domain
(b) bandwidth comparison with SRRC in frequency domain.
## II System Model of Pulse Index Modulation
We constitute a set of Hermite-Gaussian functions $\psi_{v}(t)$ that span the
Hilbert space. These functions are known for their ability to be highly
localized in time and frequency domains. They are defined by a Hermite
polynomial modulated with a Gaussian function as
$\psi_{v}(t)=\frac{2^{1/4}}{\sqrt{2^{v}v!}}H_{v}(\sqrt{2\pi}t)e^{-\pi t^{2}},$
(1)
where $t$ represents time index, $v$ is the order of Hermite-Gaussian
function, and $H_{v}(t)$ is the Hermite polynomial series that is expressed as
$H_{v}(t)=(-1)^{v}e^{t^{2}}\frac{\partial^{v}}{\partial t^{v}}e^{-t^{2}}$.
The first few Hermite polynomials, for $v=0,1,2,3$, are:
$H_{0}(t)=1,\ H_{1}(t)=2t,\ H_{2}(t)=4t^{2}-2,\ H_{3}(t)=8t^{3}-12t.$ (2)
One of the important properties of Hermite-Gaussian functions is their mutual
orthogonality, which can be expressed as
$\int_{-\infty}^{\infty}{{\psi_{m}}(t)}{\psi_{n}}(t)dt=0$ for $m\neq n$ [13].
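Definition (1) and the orthogonality property can be checked numerically. With the normalization in (1) the pulses are in fact orthonormal; the following Python sketch (assuming NumPy is available) verifies this on a fine grid:

```python
import numpy as np
from math import factorial

def hermite_gaussian(v, t):
    """Hermite-Gaussian pulse psi_v(t) as defined in Eq. (1)."""
    # physicists' Hermite polynomial H_v evaluated at sqrt(2*pi)*t
    coeffs = np.zeros(v + 1)
    coeffs[v] = 1.0
    Hv = np.polynomial.hermite.hermval(np.sqrt(2 * np.pi) * t, coeffs)
    return 2 ** 0.25 / np.sqrt(2 ** v * factorial(v)) * Hv * np.exp(-np.pi * t ** 2)

# verify orthonormality numerically on a fine, symmetric grid
t = np.linspace(-6, 6, 4001)
dt = t[1] - t[0]
pulses = np.array([hermite_gaussian(v, t) for v in range(4)])
gram = pulses @ pulses.T * dt   # Gram matrix of pairwise inner products
assert np.allclose(gram, np.eye(4), atol=1e-5)
```

The fast Gaussian decay makes the Riemann sum on $[-6,6]$ an excellent approximation of the integral over the whole real line.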
The Hermite-Gaussian pulses $\psi_{0}(t)$, $\psi_{1}(t)$, $\psi_{2}(t)$, and
$\psi_{3}(t)$ are shown in the time domain in Fig. 1(a) for $v=0,1,2,3$. As
seen from Fig. 1(a), as the order of the Hermite-Gaussian pulse increases, the
oscillation of the pulse also increases. The frequency-domain representations
of these Hermite-Gaussian pulses and of the square root raised cosine (SRRC)
pulse with different roll-off factors ($\beta$) are given in Fig. 1(b). We can
make two important observations from Fig. 1(b). First, the bandwidth of the
Hermite-Gaussian pulses increases with the order, when the first-null
bandwidth is considered. Second, the bandwidths of the zeroth- and first-order
Hermite-Gaussian pulses are narrower than that of SRRC, while the bandwidths
of the second- and third-order Hermite-Gaussian pulses are wider. In other
words, as can be seen from Fig. 1(b), the bandwidth usage of the proposed
scheme is relatively higher than that of SRRC with $\beta=1$.
Figure 2: The transceiver block diagram of the PIM scheme for $k=1$. Figure 3:
The transceiver block diagram of the GPIM scheme for $k=2$.
In the following sections, to simplify the presentation, only discrete signal
samples will be used. The discrete representation of the Hermite-Gaussian
pulses is obtained from the continuous pulses by sampling at the Nyquist rate
($\psi_{j}(t)=\psi_{j}[lT_{s}]$, where $l=0,1,\dots,L$ is an integer sample
index, $L$ is the number of samples of the $j^{th}$ Hermite-Gaussian pulse,
and $T_{s}$ denotes the sampling interval). Thus, each pulse is represented by
a vector of $L$ samples,
$\mbox{\boldmath$\psi$}_{j}=[\psi_{j,1}\ \psi_{j,2}\dots\psi_{j,L}]^{T}$.
### II-A PIM Transmission Model ($k=1$)
The transceiver block diagram of the proposed PIM scheme for $k=1$ is
represented in Fig. 2 (at the top of the next page), where $k$ is the number
of selected pulses. First, the incoming bit sequence b of size
$\big{(}1\times p_{\text{PIM}}\big{)}$ is split between the pulse index
selector and the $M$-ary signal constellation blocks. The first
$p_{1}=\left\lfloor{\log_{2}{n\choose k}}\right\rfloor$ bits determine the
active pulse shape, where $n$ denotes the total number of pulses, and the
remaining $p_{2}=\log_{2}(M)$ bits determine the modulated symbol according to
the modulation scheme, where $M$ denotes the modulation order. In the PIM model,
only one pulse is active for transmission. Thus, the baseband PIM-based pulse
signal to be transmitted can be expressed as
$\textbf{x}=s_{i}\mbox{\boldmath$\psi$}_{j},$ (3)
where $s_{i}$ and $\mbox{\boldmath$\psi$}_{j}$ represent modulated $i^{th}$
symbol and $j^{th}$ selected Hermite-Gaussian pulse vector, respectively, and
$i\in\\{1,2,\ldots,M\\}$ and $j\in\\{1,2,\ldots,2^{p_{1}}\\}$. For example,
analytical expressions of the possible transmitted symbols for BPSK modulation
in the time domain are given in Table I with $s_{i}=\pm 1$. The number of
transmitted bits per channel use (bpcu) can be calculated as
$p_{\text{PIM}}=p_{1}+p_{2}$. An example of the pulse index mapping rule for
$p_{1}=2$ bits is given by Table II. In this case, the PIM scheme for BPSK
($n=4$, $k=1$, and $p_{1}$ = 2) transmits $p_{\text{PIM}}=p_{1}+p_{2}=3$ bits.
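A minimal sketch of the PIM mapper for $k=1$ ($n=4$, BPSK) follows. The bit ordering and the bit-to-symbol convention (bit 1 → $+1$, bit 0 → $-1$, matching the GPIM example given later in the letter) are assumptions, as the letter does not fix them explicitly:

```python
import numpy as np
from math import factorial

def psi(v, t):
    """Hermite-Gaussian pulse of Eq. (1), sampled on the grid t."""
    c = np.zeros(v + 1)
    c[v] = 1.0
    H = np.polynomial.hermite.hermval(np.sqrt(2 * np.pi) * t, c)
    return 2 ** 0.25 / np.sqrt(2 ** v * factorial(v)) * H * np.exp(-np.pi * t ** 2)

t = np.linspace(-3, 3, 61)                       # 61 samples per pulse, as in Sec. IV
PULSES = np.array([psi(v, t) for v in range(4)]) # n = 4 candidate pulse shapes

def pim_modulate(bits):
    """PIM (k=1): first p1=2 index bits pick the pulse (Table II),
    the last bit picks the BPSK symbol; x = s_i * psi_j per Eq. (3)."""
    j = 2 * bits[0] + bits[1]            # pulse index per Table II
    s = 1.0 if bits[2] == 1 else -1.0    # assumed BPSK map: 1 -> +1, 0 -> -1
    return s * PULSES[j]

x = pim_modulate([1, 0, 1])              # index bits {1,0} select psi_2, s = +1
assert np.allclose(x, PULSES[2])
```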
### II-B GPIM Transmission Model ($k\geq 2$)
To increase the transmitted bpcu of the aforementioned method, we generalize
it to $k\geq 2$. In this letter, to simplify the analysis, we assume $k=2$.
The transmitter block diagram of this scheme and a look-up table, which maps
the index bits to transmitted pulses, are given in Fig. 3 and Table II,
respectively. Similar to the first method, the incoming bit sequence b of
size $\big{(}1\times p_{\text{GPIM}}\big{)}$ is split between the pulse index
selector and the $M$-ary signal constellation blocks. The first
$p_{1}=\left\lfloor{\log_{2}{n\choose k}}\right\rfloor$ bits determine the
active pulse shapes as given in the last column of Table II ($k=2$), and the
remaining $p_{2}=k\log_{2}(M)$ bits determine the modulated symbols according
to the modulation scheme, so that $p_{\text{GPIM}}=p_{1}+p_{2}$. In Table II,
we assume $n=4$ and $k=2$ for the GPIM scheme. Note that there are
$C(4,2)=6$ possible pulse-shape pairs, of which we use only four.
TABLE I: Possible Pulses and Their Analytical Expressions for PIM. Possible Pulses for BPSK | Analytical Expressions
---|---
$\pm\psi_{0}(t)$ | $\pm 2^{1/4}e^{-\pi t^{2}}$
$\pm\psi_{1}(t)$ | $\pm 2^{1/4}(2\sqrt{\pi})te^{-\pi t^{2}}$
$\pm\psi_{2}(t)$ | $\pm\frac{2^{1/4}}{2\sqrt{2}}(8\pi t^{2}-2)e^{-\pi t^{2}}$
$\pm\psi_{3}(t)$ | $\pm 2^{1/4}\left(\frac{4\pi\sqrt{2\pi}t^{3}-3\sqrt{2\pi}t}{\sqrt{3}}\right)e^{-\pi t^{2}}$
TABLE II: A reference look-up table for $n=4$; $k\in\\{1,2\\}$, and $p_{1}$ = 2.

Index Bits | Selected Pulse for PIM scheme ($k=1$) | Selected Pulses for Generalized PIM scheme ($k=2$)
---|---|---
$\\{0,0\\}$ | $\mbox{\boldmath$\psi$}_{0}$ | $\\{\mbox{\boldmath$\psi$}_{0},\mbox{\boldmath$\psi$}_{1}\\}$
$\\{0,1\\}$ | $\mbox{\boldmath$\psi$}_{1}$ | $\\{\mbox{\boldmath$\psi$}_{0},\mbox{\boldmath$\psi$}_{2}\\}$
$\\{1,0\\}$ | $\mbox{\boldmath$\psi$}_{2}$ | $\\{\mbox{\boldmath$\psi$}_{1},\mbox{\boldmath$\psi$}_{2}\\}$
$\\{1,1\\}$ | $\mbox{\boldmath$\psi$}_{3}$ | $\\{\mbox{\boldmath$\psi$}_{1},\mbox{\boldmath$\psi$}_{3}\\}$
Then, the baseband GPIM based pulse signal to be transmitted is expressed as
$\textbf{x}=\frac{1}{\sqrt{k}}(s_{i}\mbox{\boldmath$\psi$}_{j}+s_{q}\mbox{\boldmath$\psi$}_{\ell}),$
(4)
where $\frac{1}{\sqrt{k}}$ is a normalization coefficient used to make the
total symbol energy $E_{s}=1$. $s_{i}$ and $s_{q}$ denote $i^{th}$ and
$q^{th}$ modulated symbols respectively, $\mbox{\boldmath$\psi$}_{j}$ and
$\mbox{\boldmath$\psi$}_{\ell}$ represent the selected Hermite-Gaussian pulses
according to index bits, respectively, where $i,q\in\\{1,2,\ldots,M\\}$ and
$j,\ell\in\\{1,2,\ldots,2^{p_{1}}\\}$. For example, if the information bit
block is given as $\textbf{b}=[0\ 0\ 1\ 0]$, the selected Hermite-Gaussian
pulses are zeroth order and the first order ones according to the first two
bits, and selected BPSK symbols for the third and the fourth bits are
$s_{i}=1$ and $s_{q}=-1$. Thus the transmitted signal can be expressed as
$\textbf{x}=\frac{1}{\sqrt{2}}\big{(}\mbox{\boldmath$\psi$}_{0}-\mbox{\boldmath$\psi$}_{1}\big{)}$,
if BPSK modulation is applied. Since the Hermite-Gaussian pulses are
orthogonal to each other, the resulting modulation can be thought of as
quadrature phase shift-keying (QPSK).
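The worked example above ($\textbf{b}=[0\ 0\ 1\ 0]$ giving $\textbf{x}=(\mbox{\boldmath$\psi$}_{0}-\mbox{\boldmath$\psi$}_{1})/\sqrt{2}$) can be reproduced directly from Eq. (4) and Table II. The sketch below assumes the bit-to-symbol convention implied by that example (bit 1 → $+1$, bit 0 → $-1$):

```python
import numpy as np
from math import factorial

def psi(v, t):
    """Hermite-Gaussian pulse of Eq. (1), sampled on the grid t."""
    c = np.zeros(v + 1)
    c[v] = 1.0
    H = np.polynomial.hermite.hermval(np.sqrt(2 * np.pi) * t, c)
    return 2 ** 0.25 / np.sqrt(2 ** v * factorial(v)) * H * np.exp(-np.pi * t ** 2)

t = np.linspace(-3, 3, 61)
PULSES = np.array([psi(v, t) for v in range(4)])

# Table II, k = 2: index bits -> pair of active pulse shapes
PAIRS = {(0, 0): (0, 1), (0, 1): (0, 2), (1, 0): (1, 2), (1, 1): (1, 3)}

def gpim_modulate(bits, k=2):
    """GPIM per Eq. (4): x = (s_i*psi_j + s_q*psi_l)/sqrt(k)."""
    j, l = PAIRS[(bits[0], bits[1])]
    s_i = 1.0 if bits[2] == 1 else -1.0
    s_q = 1.0 if bits[3] == 1 else -1.0
    return (s_i * PULSES[j] + s_q * PULSES[l]) / np.sqrt(k)

# the example from the text: b = [0 0 1 0]  ->  x = (psi_0 - psi_1)/sqrt(2)
x = gpim_modulate([0, 0, 1, 0])
assert np.allclose(x, (PULSES[0] - PULSES[1]) / np.sqrt(2))
```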
TABLE III: Possible Pulses and Their Analytical Expressions for GPIM. Possible Pulses | Analytical Expressions
---|---
$\pm\psi_{0}(t)\pm\psi_{1}(t)$ | $2^{1/4}(1\pm 2\sqrt{\pi}t)e^{-\pi t^{2}}$
$\pm\psi_{0}(t)\pm\psi_{2}(t)$ | $2^{1/4}(1\pm\frac{4\pi t^{2}-1}{\sqrt{2}})e^{-\pi t^{2}}$
$\pm\psi_{1}(t)\pm\psi_{2}(t)$ | $2^{1/4}(2\sqrt{\pi}t\pm\frac{4\pi t^{2}-1}{\sqrt{2}})e^{-\pi t^{2}}$
$\pm\psi_{1}(t)\pm\psi_{3}(t)$ | $2^{1/4}(2\sqrt{\pi}t\pm\frac{4\pi\sqrt{2\pi}t^{3}-3\sqrt{2\pi}t}{\sqrt{3}})e^{-\pi t^{2}}$
Analytical expressions of the possible transmitted pulses for BPSK modulation
in the time domain are given in Table III for GPIM. The SE of the GPIM scheme
can be calculated as $p_{\text{GPIM}}=\left\lfloor{\log_{2}{n\choose
k}}\right\rfloor+k\log_{2}(M)$ bpcu. Without loss of generality, we use four
different Hermite-Gaussian functions of orders $v=0,1,2$, and $3$ for
practical considerations and for simplicity ($n=4$, $M=2$). Thus, the SE of
the GPIM scheme equals $p_{\text{GPIM}}=4$ bpcu, while the classical SISO
system with $M=2$ achieves only $1$ bpcu.
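The SE formula can be evaluated with a one-line helper; the values below reproduce the bpcu figures quoted in the text:

```python
from math import comb, floor, log2

def bpcu(n, k, M):
    """Spectral efficiency: p = floor(log2 C(n,k)) + k*log2(M) bits per channel use."""
    return floor(log2(comb(n, k))) + k * int(log2(M))

assert bpcu(4, 1, 2) == 3   # PIM with BPSK (Sec. II-A)
assert bpcu(4, 2, 2) == 4   # GPIM with BPSK, as stated above
```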
#### II-B1 ML Detection of PIM and GPIM Schemes
The vector representation of the received baseband signal for PIM and GPIM
schemes can be expressed, respectively as follows:
$\mathbf{r}_{\text{PIM}}=\mathbf{x}h+\mathbf{n}=s_{i}\mbox{\boldmath$\psi$}_{j}h+\mathbf{n},$ (5)
$\mathbf{r}_{\text{GPIM}}=\mathbf{x}h+\mathbf{n}=\frac{1}{\sqrt{k}}\big{(}s_{i}\mbox{\boldmath$\psi$}_{j}+s_{q}\mbox{\boldmath$\psi$}_{\ell}\big{)}h+\mathbf{n},$ (6)
where $\textbf{n}\in\mathbb{C}^{L\times 1}$ is the noise vector distributed as
$\mathcal{CN}(0,\frac{N_{0}}{2}\textbf{I}_{L})$, and $h$ represents the
complex Rayleigh fading channel coefficient.
The ML detectors for PIM and GPIM schemes can be expressed respectively as
follows:
$\Big{(}\hat{s}_{i},\hat{j}\Big{)}=\underset{i,j}{\mathrm{arg\,max}}\ \mathrm{Pr}\big{(}\textbf{r}_{\text{PIM}}|s_{i},\mbox{\boldmath$\psi$}_{j}\big{)}=\underset{i,j}{\mathrm{arg\,min}}\Big{|}\Big{|}\textbf{r}_{\text{PIM}}-hs_{i}\mbox{\boldmath$\psi$}_{j}\Big{|}\Big{|}^{2},$ (7)
$\Big{(}\hat{s}_{i},\hat{s}_{q},\hat{j},\hat{\ell}\Big{)}=\underset{i,j,q,\ell}{\mathrm{arg\,max}}\ \mathrm{Pr}\big{(}\textbf{r}_{\text{GPIM}}|s_{i},s_{q},\mbox{\boldmath$\psi$}_{j},\mbox{\boldmath$\psi$}_{\ell}\big{)}=\underset{i,j,q,\ell}{\mathrm{arg\,min}}\Big{|}\Big{|}\textbf{r}_{\text{GPIM}}-\frac{h}{\sqrt{k}}\big{(}s_{i}\mbox{\boldmath$\psi$}_{j}+s_{q}\mbox{\boldmath$\psi$}_{\ell}\big{)}\Big{|}\Big{|}^{2},$ (8)
where $i,q\in\\{1,2,\ldots,M\\}$ and $j,\ell\in\\{1,2,\ldots,2^{p_{1}}\\}$.
Finally, using the detected $(\hat{s}_{i},\hat{s}_{q},\hat{j},\hat{\ell})$
values, the originally transmitted bit sequence $\hat{\mathbf{b}}$ is
reconstructed at the receiver with the help of the index to bits mapping
technique as shown at the receiver block of the PIM and GPIM systems.
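A minimal brute-force implementation of the PIM detector (7), i.e., an exhaustive search over all (symbol, pulse) hypotheses, might look as follows; the channel value and noise level are arbitrary demonstration choices:

```python
import numpy as np
from math import factorial

def psi(v, t):
    """Hermite-Gaussian pulse of Eq. (1), sampled on the grid t."""
    c = np.zeros(v + 1)
    c[v] = 1.0
    H = np.polynomial.hermite.hermval(np.sqrt(2 * np.pi) * t, c)
    return 2 ** 0.25 / np.sqrt(2 ** v * factorial(v)) * H * np.exp(-np.pi * t ** 2)

t = np.linspace(-3, 3, 61)
PULSES = np.array([psi(v, t) for v in range(4)])
SYMBOLS = np.array([1.0, -1.0])        # BPSK alphabet

def ml_detect_pim(r, h):
    """Exhaustive ML search of Eq. (7): argmin over (i, j) of ||r - h*s_i*psi_j||^2."""
    best, best_metric = None, np.inf
    for i, s in enumerate(SYMBOLS):
        for j in range(len(PULSES)):
            metric = np.linalg.norm(r - h * s * PULSES[j]) ** 2
            if metric < best_metric:
                best, best_metric = (i, j), metric
    return best

rng = np.random.default_rng(0)
h = 0.7 - 0.4j                          # example channel draw (CN(0,1) in practice)
r = h * SYMBOLS[1] * PULSES[3] + 0.01 * rng.standard_normal(61)  # noisy observation
assert ml_detect_pim(r, h) == (1, 3)    # recovers the transmitted (symbol, pulse)
```

Since only the single coefficient $h$ enters the metric, the detector indeed needs just one Tx-Rx CSI estimate, as claimed in the contributions.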
## III Performance Analysis
In this section, we analyze the ABEP performance of the PIM and GPIM schemes.
Accordingly, using the well-known union bounding technique as in [14], the
expression of ABEP $\mathbb{P}$ for proposed two schemes can be given as
follows:
$\mathbb{P}\leq\frac{1}{2^{\tilde{p}}}\sum_{d=1}^{2^{\tilde{p}}}\sum_{z=1}^{2^{\tilde{p}}}\frac{\mathbb{P}_{\text{e}}\Big{(}\mbox{\boldmath$\xi$}_{d}\to\hat{\mbox{\boldmath$\xi$}}_{z}\Big{)}N(d,z)}{\tilde{p}},$
(9)
where $\tilde{p}=p_{1}+p_{2}$ is the total number of bits carried by the
active pulse indices and the modulated symbols, and $N(d,z)$ is the number of
bit errors between the vectors $\mbox{\boldmath$\xi$}_{d}$ and
$\hat{\mbox{\boldmath$\xi$}}_{z}$.
$\mathbb{P}_{\text{e}}\big{(}\mbox{\boldmath$\xi$}_{d}\to\hat{\mbox{\boldmath$\xi$}}_{z}\big{)}$
is the average pairwise error probability (APEP) of deciding
$\hat{\mbox{\boldmath$\xi$}}_{z}$ given that $\mbox{\boldmath$\xi$}_{d}$ is
transmitted, and it can be expressed as
$\mathbb{P}_{\text{e}}\Big{(}\mbox{\boldmath$\xi$}_{d}\to\hat{\mbox{\boldmath$\xi$}}_{z}\Big{)}=\frac{1}{2}\Bigg{(}1-\sqrt{\frac{\sigma^{2}_{k,\alpha}}{1+\sigma^{2}_{k,\alpha}}}\Bigg{)},$
(10)
where $k\in\\{1,2\\}$; for $k=1$ and $k=2$, the ABEPs of the PIM and GPIM
schemes are obtained, respectively. For the PIM and GPIM schemes,
$\sigma^{2}_{k,\alpha}$ is given by:
$\sigma^{2}_{1,\alpha}=\begin{cases}\frac{E_{s}}{2N_{0}}\sigma_{h}^{2}\big{(}|{s}_{i}|^{2}+|{\hat{s}}_{{i}}|^{2}\big{)}&\text{if }\mbox{\boldmath$\psi$}_{j}\neq\mbox{\boldmath$\psi$}_{\hat{j}}\\ \frac{E_{s}}{2N_{0}}\sigma_{h}^{2}|s_{{i}}-{\hat{s}}_{{i}}|^{2}&\text{if }\mbox{\boldmath$\psi$}_{j}=\mbox{\boldmath$\psi$}_{\hat{j}}\end{cases}$ (11)
$\sigma^{2}_{2,\alpha}=\begin{cases}\frac{E_{s}}{2N_{0}}\sigma_{h}^{2}\big{(}|{s}_{i}|^{2}+|{\hat{s}}_{{i}}|^{2}+|{s}_{q}|^{2}+|{\hat{s}}_{{q}}|^{2}\big{)}&\text{if }\mbox{\boldmath$\psi$}_{j}\neq\mbox{\boldmath$\psi$}_{\hat{j}},\ \mbox{\boldmath$\psi$}_{\ell}\neq\mbox{\boldmath$\psi$}_{\hat{\ell}}\\ \frac{E_{s}}{2N_{0}}\sigma_{h}^{2}\big{(}|{s}_{i}-{\hat{s}}_{{i}}|^{2}+|{s}_{q}|^{2}+|{\hat{s}}_{{q}}|^{2}\big{)}&\text{if }\mbox{\boldmath$\psi$}_{j}=\mbox{\boldmath$\psi$}_{\hat{j}},\ \mbox{\boldmath$\psi$}_{\ell}\neq\mbox{\boldmath$\psi$}_{\hat{\ell}}\\ \frac{E_{s}}{2N_{0}}\sigma_{h}^{2}\big{(}|{s}_{i}|^{2}+|{\hat{s}}_{{i}}|^{2}+|{s}_{q}-{\hat{s}}_{{q}}|^{2}\big{)}&\text{if }\mbox{\boldmath$\psi$}_{j}\neq\mbox{\boldmath$\psi$}_{\hat{j}},\ \mbox{\boldmath$\psi$}_{\ell}=\mbox{\boldmath$\psi$}_{\hat{\ell}}\\ \frac{E_{s}}{2N_{0}}\sigma_{h}^{2}\big{(}|{s}_{i}-{\hat{s}}_{{i}}|^{2}+|{s}_{q}-{\hat{s}}_{{q}}|^{2}\big{)}&\text{if }\mbox{\boldmath$\psi$}_{j}=\mbox{\boldmath$\psi$}_{\hat{j}},\ \mbox{\boldmath$\psi$}_{\ell}=\mbox{\boldmath$\psi$}_{\hat{\ell}}\end{cases}$ (12)
where $\hat{s}_{{i}}$ and $\hat{s}_{{q}}$ are estimates of ${s}_{i}$ and
${s}_{q}$, respectively, and $\sigma_{h}^{2}$ is the variance of the Rayleigh
fading channel coefficient, with $\sigma_{h}^{2}=1$.
Consequently, by substituting (11) and (10) into (9), we obtain the ABEP for
PIM system, and similarly, by substituting (12) and (10) into (9), we obtain
the ABEP for GPIM system.
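The union bound (9), combined with the APEP (10) and the variance expression (11), can be evaluated numerically. The sketch below does so for PIM with BPSK ($n=4$, $p_{1}=2$, $E_{s}=\sigma_{h}^{2}=1$); the assumption that the bit labels of the pulse indices follow the natural binary mapping of Table II is ours:

```python
import numpy as np
from itertools import product

def abep_pim_bpsk(EsN0, p1=2):
    """Union-bound ABEP of Eq. (9) for PIM with BPSK (k=1), using Eqs. (10)-(11)."""
    symbols = {0: -1.0, 1: 1.0}
    hyps = list(product(range(2 ** p1), [0, 1]))   # (pulse index, symbol bit)
    p_tilde = p1 + 1
    total = 0.0
    for (j, bi) in hyps:
        for (jh, bih) in hyps:
            if (j, bi) == (jh, bih):
                continue                            # N(d,d) = 0 contributes nothing
            s, sh = symbols[bi], symbols[bih]
            if j != jh:                             # different pulse shapes, Eq. (11) top
                sigma2 = EsN0 / 2 * (abs(s) ** 2 + abs(sh) ** 2)
            else:                                   # same pulse, Eq. (11) bottom
                sigma2 = EsN0 / 2 * abs(s - sh) ** 2
            apep = 0.5 * (1 - np.sqrt(sigma2 / (1 + sigma2)))   # Eq. (10)
            nbits = bin(j ^ jh).count('1') + (bi != bih)        # N(d, z)
            total += apep * nbits
    return total / (2 ** p_tilde * p_tilde)        # Eq. (9)

# the bound decreases with SNR, as expected
assert abep_pim_bpsk(100.0) < abep_pim_bpsk(10.0) < 0.5
```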
Figure 4: BER performance curves of the PIM scheme with PSK modulation for
various $M$ values ($n=4$, $k=1$).
## IV Simulation Results
To demonstrate the improved performance of the proposed techniques, the bit
error rate (BER) of PIM systems is evaluated with different system setups. SM,
quadrature SM (QSM), traditional $M$PSK/QAM schemes are selected as
benchmarks. The SNR used in computer simulations herein is defined as
$E_{s}/N_{0}$ where $E_{s}$ is energy per symbol and $N_{0}$ is the noise
power. At the receiver, ML detector is used for all systems. Each Hermite-
Gaussian pulse consists of $127$ samples. Since the head and tail of the blows
contain a large number of zero-value samples, we truncate their edges. Thus,
each pulse includes $61$ samples. All simulations are performed over
frequency-flat Rayleigh fading channels. We assume that the channel is
constant during one symbol duration, and the CSI is perfectly known at the
receiver.
The theoretical and simulation average BER performance curves of the PIM
scheme with $M$-PSK, $M=4,8,16,32$, $n=4$, and $k=1$ are presented for
$p_{\text{PIM}}=4,5,6$, and $7$ bits in Fig. 4. Here, the PIM technique
transmits $4,5,6$, and $7$ bits using $2$ bits via the active pulse indices
and $2,3,4$, and $5$ bits via the transmitted symbols, respectively. As can be
seen from Fig. 4, the analytical results match the simulation results well,
particularly at high modulation orders.
The average BER performance curves of the PIM, GPIM, and benchmark schemes are
shown in Fig. 5 for $M$-PSK/QAM at $6$ bpcu. The GPIM technique carries $2$
bits with the active pulse indices and $4$ bits with the transmitted symbols;
the PIM scheme transmits $2$ bits with the active pulse indices and $4$ bits
with the transmitted symbol; the QSM technique carries either $4$ bits with
antenna indices and $2$ bits with the transmitted symbol or $2$ bits with
antenna indices and $4$ bits with the transmitted symbol. In the SM technique,
$2$ bits are transmitted in antenna indices and $4$ bits are transmitted with
symbols. In PSK/QAM, all $6$ bits are carried on a modulated symbol with $M=64$.
As seen from Fig. 5, the GPIM scheme provides better performance, with
approximately $1$ dB SNR gain compared to the PIM system when QAM is used.
Also, the analytical and simulation results match well. The proposed GPIM and
PIM schemes also have better BER performance than the SM, QSM, and traditional
QAM schemes.
Fig. 6 presents average BER performance curves of GPIM, QSM, SM, and
traditional QAM schemes for (a) 8 bpcu and (b) 10 bpcu. For Fig. 6 (a), GPIM
carries $2$ bits with active pulse indices and $6$ bits with the transmitted
symbol; the QSM technique carries $4$ bits with antenna indices and $4$ bits
with the transmitted symbol; the SM scheme carries $3$ bits with antenna
indices and $5$ bits with the transmitted symbol. The corresponding values in
Fig. 6(b) are $2$ and $8$ bits for GPIM; $6$ and $4$ bits for QSM; $3$ and
$7$ bits for SM, respectively. In $M$-QAM, all $8$ and $10$ bits are
carried on a modulated symbol with $M=256$ and $M=1024$, respectively. We can
see from Fig. 6 that GPIM scheme has a considerable SNR gain compared to SM
and QSM schemes for the same bpcu. At BER $=10^{-2}$, the proposed scheme
requires almost 18 dB less power compared to SM and QSM schemes for $M=16$
case.
Figure 5: Performance comparisons of GPIM, PIM, QSM, SM, and traditional
PSK/QAM systems for $6$ bpcu. Figure 6: Performance comparisons of GPIM, QSM,
SM, and traditional QAM systems for (a) $8$ bpcu (b) $10$ bpcu.
## V Conclusions
We have proposed two new IM schemes, namely PIM and GPIM, which exploit the
indices of Hermite-Gaussian pulses for SISO systems. These methods are
suitable for systems that need low complexity owing to their SISO structure.
For this reason, we think that our schemes can be utilized especially in M2M
and IoT applications. Analytical expressions for the average BER of the PIM
and GPIM systems have been derived, and their superiority has been demonstrated.
## References
* [1] _State of the IoT 2020: 12 billion IoT connections, surpassing non-IoT for the first time_ , Accessed on: Jan. 3, 2021. [Online]. Available:, https://iot-analytics.com/state-of-the-iot-2020-12-billion-iot-connections-surpassing-non-iot-for-the-first-time/ .
* [2] E. Telatar, “Capacity of multi-antenna Gaussian channels,” _European transactions on telecommunications_ , vol. 10, no. 6, pp. 585–595, 1999.
* [3] R. Y. Mesleh, H. Haas, S. Sinanovic, C. W. Ahn, and S. Yun, “Spatial modulation,” _IEEE Trans. Veh. Technol._ , vol. 57, no. 4, pp. 2228–2241, 2008.
* [4] C. A. F. da Rocha, B. F. Uchôa-Filho, and D. Le Ruyet, “Study of the impact of pulse shaping on the performance of spatial modulation,” in _Proc. 2017 Int. Symp. on Wireless Commun. Systems (ISWCS)_. IEEE, 2017, pp. 303–307.
* [5] K. Ishibashi and S. Sugiura, “Effects of antenna switching on band-limited spatial modulation,” _IEEE Wireless Commun. Lett._ , vol. 3, no. 4, pp. 345–348, 2014.
* [6] E. Basar, Ü. Aygölü, E. Panayırcı, and H. V. Poor, “Orthogonal frequency division multiplexing with index modulation,” _IEEE Trans. Signal Process_ , vol. 61, no. 22, pp. 5536–5549, 2013.
* [7] T. Mao, Z. Wang, Q. Wang, S. Chen, and L. Hanzo, “Dual-mode index modulation aided OFDM,” _IEEE Access_ , vol. 5, pp. 50–60, 2016.
* [8] C.-C. Cheng, H. Sari, S. Sezginer, and Y. T. Su, “Enhanced spatial modulation with multiple signal constellations,” _IEEE Trans. Commun._ , vol. 63, no. 6, pp. 2237–2248, 2015.
* [9] S. Aldirmaz-Colak, Y. Acar, and E. Basar, “Adaptive dual-mode OFDM with index modulation,” _Physical Commun._ , vol. 30, pp. 15–25, 2018.
* [10] G. Kaddoum, M. F. Ahmed, and Y. Nijsure, “Code index modulation: A high data rate and energy efficient communication system,” _IEEE Commun. Lett._ , vol. 19, no. 2, pp. 175–178, 2015.
* [11] S. Aldirmaz, A. Serbes, and L. Durak-Ata, “Spectrally efficient OFDMA lattice structure via toroidal waveforms on the time-frequency plane,” _EURASIP J. Advances Signal Process._ , vol. 2010, no. 1, p. 684097, 2010.
* [12] T. Kurt, G. K. Kurt, and A. Yongaçoglu, “Throughput enhancement in multi-carrier systems employing overlapping Weyl-Heisenberg frames,” in _Proc. IEEE Int. Conf. Commun._ IEEE, 2009, pp. 1–6.
* [13] H. M. Ozaktas, Z. Zalevsky, and M. A. Kutay, _The Fractional Fourier Transform with Applications in Optics and Signal Processing_. New York: John Wiley & Sons, 2001.
* [14] J. G. Proakis, _Digital Commun._ , 5th ed. New York: McGraw-Hill, 2008.
# Beurling-Type Density Criteria for
System Identification
Verner Vlačić1, Céline Aubel2, and Helmut Bölcskei1
1ETH Zurich, Switzerland
2Swiss National Bank, Zurich, Switzerland
Email<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
This paper addresses the problem of identifying a linear time-varying (LTV)
system characterized by a (possibly infinite) discrete set of delay-Doppler
shifts without a lattice (or other “geometry-discretizing”) constraint on the
support set. Concretely, we show that a class of such LTV systems is
identifiable whenever the upper uniform Beurling density of the delay-Doppler
support sets, measured “uniformly over the class”, is strictly less than
$1/2$. The proof of this result reveals an interesting relation between LTV
system identification and interpolation in the Bargmann-Fock space. Moreover,
we show that this density condition is also necessary for classes of systems
invariant under time-frequency shifts and closed under a natural topology on
the support sets. We furthermore show that identifiability guarantees robust
recovery of the delay-Doppler support set, as well as the weights of the
individual delay-Doppler shifts, both in the sense of asymptotically vanishing
reconstruction error for vanishing measurement error.
## I Introduction
Identification of deterministic linear time-varying (LTV) systems has been a
topic of long-standing interest, dating back to the seminal work by Kailath
[1] and Bello [2], and has seen significant renewed interest during the past
decade [3, 4, 5, 6]. This general problem occurs in many fields of engineering
and science. Concrete examples include system identification in control theory
and practice, the measurement of dispersive communication channels, and radar
imaging. The formal problem statement is as follows. We wish to identify the
LTV system $\mathcal{H}$ from its response
$(\mathcal{H}x)(t)\vcentcolon=\int_{\mathbb{R}^{2}}S_{\mathcal{H}}(\tau,\nu)\,x(t-\tau)\,e^{2\pi
i\nu t}\,\mathrm{d}\tau\mathrm{d}\nu,\quad\forall t\in\mathbb{R},$ (1)
to a probing signal $x(t)$, with $S_{\mathcal{H}}(\tau,\nu)$ denoting the
spreading function associated with $\mathcal{H}$. Specifically, we consider
$\mathcal{H}$ to be identifiable if there exists an $x$ such that knowledge of
$\mathcal{H}x$ allows us to determine $S_{\mathcal{H}}$. The representation
theorem [7, Thm. 14.3.5] states that a large class of continuous linear
operators can be represented as in (1).
Kailath [1] showed that an LTV system with spreading function supported on a
rectangle centered at the origin of the $(\tau,\nu)$-plane is identifiable if
the area of the rectangle is at most $1$. This result was later extended by
Bello to arbitrarily fragmented spreading function support regions with the
support area measured collectively over all supporting pieces [2]. Necessity
of the Kailath-Bello condition was established in [3, 8] through elegant
functional-analytic arguments. However, all these results require the support
region of $S_{\mathcal{H}}(\tau,\nu)$ to be known prior to identification, a
condition that is very restrictive and often impossible to realize in
practice. More recently, it was demonstrated in [4] that identifiability in
the more general case considered by Bello [2] is possible without prior
knowledge of the spreading function support region, again as long as its area
(measured collectively over all supporting pieces) is upper-bounded by $1$.
This is surprising as it says that there is no price to be paid for not
knowing the spreading function’s support region in advance. The underlying
insight has strong conceptual ties to the theory of spectrum-blind sampling of
sparse multi-band signals [9, 10, 11, 12].
The situation is fundamentally different when the spreading function is
discrete according to
$(\mathcal{H}x)(t)\vcentcolon=\sum_{m\in\mathbb{N}}\alpha_{m}\,x(t-\tau_{m})\,e^{2\pi
i\nu_{m}t},\quad\forall t\in\mathbb{R},$ (2)
where $(\tau_{m},\nu_{m})\in\mathbb{R}^{2}$ are delay-Doppler shift parameters
and $\alpha_{m}$ are the corresponding complex weights, for $m\in\mathbb{N}$.
Here, the (discrete) spreading function can be supported on unbounded subsets
of the $(\tau,\nu)$-plane with the identifiability condition on the support
area of the spreading function replaced by a density condition on the support
set
$\operatorname{supp}(\mathcal{H})\vcentcolon=\\{(\tau_{m},\nu_{m}):m\in\mathbb{N}\\}$.
Specifically, for $\mathcal{H}$ supported on rectangular lattices according to
$\operatorname{supp}(\mathcal{H})=a^{-1}\mathbb{Z}\times b^{-1}\mathbb{Z}$,
Kozek and Pfander established that $\mathcal{H}$ is identifiable if and only
if $ab\leqslant 1$ [3]. In [13] a necessary condition for identifiability of a
set of Hilbert-Schmidt operators defined analogously to (2) is given; this
condition is expressed in terms of the Beurling density of the support set,
but the time-frequency pairs $(\tau_{m},\nu_{m})$ are assumed to be confined
to a lattice. Now, in practice the discrete spreading function will not be
supported on a lattice as the parameters $\tau_{m},\nu_{m}$ correspond to time
delays and frequency shifts induced, e.g. in wireless communication, by the
propagation environment. It is hence of interest to understand the limits on
identifiability in the absence of “geometry-discretizing” assumptions—such as
a lattice constraint—on $\operatorname{supp}(\mathcal{H})$. Resolving this
problem is the aim of the present paper.
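As a small illustration of the discrete-spreading input-output relation (2), the sketch below applies a finite set of delay-Doppler shifts to a probing signal; the Gaussian probing signal and the tap values are arbitrary choices for the demonstration, not part of the paper's setup:

```python
import numpy as np

def apply_delay_doppler(x, taus, nus, alphas, t):
    """Discrete-spreading LTV response of Eq. (2):
       (Hx)(t) = sum_m alpha_m * x(t - tau_m) * exp(2*pi*i*nu_m*t)."""
    out = np.zeros_like(t, dtype=complex)
    for tau, nu, a in zip(taus, nus, alphas):
        out += a * x(t - tau) * np.exp(2j * np.pi * nu * t)
    return out

# probing with a Gaussian pulse and two delay-Doppler taps
x = lambda t: np.exp(-np.pi * t ** 2)
t = np.linspace(-5, 5, 2001)
y = apply_delay_doppler(x, taus=[0.0, 1.3], nus=[0.0, 0.4],
                        alphas=[1.0, 0.5 - 0.2j], t=t)

# sanity check: a single unshifted unit tap returns the probing signal itself
y0 = apply_delay_doppler(x, taus=[0.0], nus=[0.0], alphas=[1.0], t=t)
assert np.allclose(y0, x(t))
```

Identifying $\mathcal{H}_{\mu}$ then amounts to recovering the taps $(\tau_{m},\nu_{m},\alpha_{m})$ from samples of `y`, which is exactly the measure-recovery problem formalized in (3) and (4).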
### I-A Fundamental limits on identifiability
The purpose of this paper is twofold. First, we establish fundamental limits
on the stable identifiability of $\mathcal{H}$ in (2) in terms of
$\operatorname{supp}(\mathcal{H})$ and $\\{\alpha_{m}\\}_{m\in\mathbb{N}}$.
Our approach is based on the following insight. Defining the discrete complex
measure
$\mu\vcentcolon=\sum_{m\in\mathbb{N}}\alpha_{m}\,\delta_{\tau_{m},\nu_{m}}$ on
$\mathbb{R}^{2}$, where $\delta_{\tau_{m},\nu_{m}}$ denotes the Dirac point
measure with mass at $(\tau_{m},\nu_{m})$, the input-output relation (2) can
be formally rewritten as
$\quad(\mathcal{H}_{\mu}x)(t)=\int_{\mathbb{R}^{2}}x(t-\tau)e^{2\pi i\nu
t}\,\mathrm{d}{\mu}(\tau,\nu),\quad t\in\mathbb{R},$ (3)
where we use throughout $\mathcal{H}_{\mu}$ instead of $\mathcal{H}$ for
concreteness. Identifying the system $\mathcal{H}_{\mu}$ thus amounts to
reconstructing the discrete measure $\mu$ from $\mathcal{H}_{\mu}x$. More
specifically, we wish to find necessary and sufficient conditions on classes
$\mathscr{H}$ of measures guaranteeing stable identifiability bounds of the
form
$d_{\text{r}}(\mu,\mu^{\prime})\leqslant
d_{\text{m}}(\mathcal{H}_{\mu}x,\mathcal{H}_{\mu^{\prime}}x),\quad\text{for
all }\mu,\mu^{\prime}\in\mathscr{H},$ (4)
for appropriate reconstruction and measurement metrics $d_{\text{r}}$ and
$d_{\text{m}}$, where $\mu$ is the ground truth measure to be recovered and
$\mu^{\prime}$ is the estimated measure. The class $\mathscr{H}$ can be
thought of as modelling the prior information available about the measure
$\mu$ facilitating its identification by restricting the set of potential
estimated measures $\mu^{\prime}$. In particular, the smaller the class
$\mathscr{H}$, the “easier” it should be to satisfy (4). In addition to the
class $\mathscr{H}$ of measures itself, the existence of a bound of the form
(4) depends on the choice of the probing signal $x$, so we will later speak of
_identifiability by $x$_.
This formulation reveals an interesting connection to the super-resolution
problem as studied by Donoho [14], where the goal is to recover a discrete
complex measure on $\mathbb{R}$, i.e., a weighted Dirac train, from low-pass
measurements. The problem at hand, albeit formally similar, differs in several
important aspects. First, we want to identify a measure $\mu$ on
$\mathbb{R}^{2}$, i.e., a measure on a _two-dimensional_ set, from
observations in _one_ parameter, namely $(\mathcal{H}_{\mu}x)(t)$,
$t\in\mathbb{R}$. Next, the low-pass observations in [14] are replaced by
short-time Fourier transform-type observations, where the probing signal $x$
appears as the window function. While super-resolution from STFT-measurements
was considered in [15], the underlying measure to be identified in [15] is, as
in [14], on $\mathbb{R}$. Finally, [14] assumes that the support set of the
measure under consideration is confined to an a priori fixed lattice. While
such a strong structural assumption allows for the reconstruction metric
$d_{\text{r}}$ to take a simple and intuitive form, it unfortunately bars
taking into account the geometric properties of the support sets considered.
By contrast, the general definition of stable identifiability (see Definition
1) analogous to [14] will pave the way for a theory of support recovery
without a lattice assumption, as discussed in the next subsection.
These differences make for very different technical challenges. Nevertheless,
we can follow the spirit of Donoho’s work [14], who established necessary and
sufficient conditions for stable identifiability in the classical super-
resolution problem. Donoho’s conditions are expressed in terms of the uniform
Beurling density of the measure’s (one-dimensional) support set and are
derived using density theorems for interpolation in the Bernstein and Paley-
Wiener spaces [16] and for the balayage of Fourier-Stieltjes transforms [17].
We will, likewise, establish a sufficient condition guaranteeing stable
identifiability for classes of measures whose supports have density less than
1/2 “uniformly over the class $\mathscr{H}$” (formally introduced in
Definition 2). In addition, we show that this is also a necessary condition
for classes of measures invariant under time-frequency shifts and closed under
a natural topology on the support sets. We will see below that these
requirements are not very restrictive as we present several examples of
identifiable and non-identifiable classes of measures. The proofs of these
results are based on the density theorem for interpolation in the Bargmann-
Fock space [18, 19, 20, 21], as well as several results about Riesz sequences
from [22].
### I-B Robust recovery of the delay-Doppler support set
The second goal of the paper is to address the implications of the
identifiability condition on the recovery of the discrete measure $\mu$.
Concretely, suppose that we want to recover a fixed measure
$\mu\vcentcolon=\sum_{m\in\mathbb{N}}\alpha_{m}\,\delta_{\tau_{m},\,\nu_{m}}$
from a known class of measures $\mathscr{H}$ assumed to be stably identifiable
(in the sense of (4)) with respect to a probing signal $x$, and let
$\\{\mu_{n}\\}_{n\in\mathbb{N}}\subset\mathscr{H}$ be a sequence of “estimated
candidate measures”
$\mu_{n}\vcentcolon=\sum_{m\in\mathbb{N}}\alpha_{m}^{(n)}\,\delta_{\tau_{m}^{(n)},\,\nu_{m}^{(n)}}$
for the recovery of $\mu$. We will show that, under a mild regularity
condition on $x$, the stable identifiability condition (4) on $\mathscr{H}$
guarantees that
$\mathcal{H}_{\mu_{n}}x\to\mathcal{H}_{\mu}x\quad\implies\quad\operatorname{supp}(\mu_{n})\to\operatorname{supp}(\mu)\quad\text{and}\quad\\{\alpha_{m}^{(n)}\\}_{m\in\mathbb{N}}\to\\{\alpha_{m}\\}_{m\in\mathbb{N}},$
(5)
as $n\to\infty$, where the topologies in which these limits take place will be
specified in due course. In words, this result says that the better the
measurements $\mathcal{H}_{\mu_{n}}x$ match the true measurement
$\mathcal{H}_{\mu}x$, the closer the estimated measures $\mu_{n}$ are to the
ground truth $\mu$. This, in particular, shows that “measurement matching” is
sufficient for recovery within stably identifiable classes $\mathscr{H}$,
i.e., any algorithm that generates a sequence of measures
$\\{\mu_{n}\\}_{n\in\mathbb{N}}\subset\mathscr{H}$ satisfying
$\mathcal{H}_{\mu_{n}}x\to\mathcal{H}_{\mu}x$ will succeed in recovering
$\mu\in\mathscr{H}$. Crucially, we do not assume that the support sets
$\operatorname{supp}(\mu)$ and
$\operatorname{supp}(\mu_{n})$, for $n\in\mathbb{N}$, are
confined to a lattice (or any other a priori fixed discrete set). To the best
of our knowledge, this is the first known LTV system identification result on
the robust recovery of the discrete support set of the measure, instead of its
weights only.
Notation. We write $B_{R}(a)$ for the closed ball in $\mathbb{C}$ of radius
$R$ centered at $a$, and denote its boundary by $\partial B_{R}(a)$. For a set
$S\subset\mathbb{C}$, we let $\mathds{1}_{S}:\mathbb{C}\to\mathbb{R}$ be the
indicator function of $S$, taking on the value $1$ on $S$ and $0$ elsewhere.
We will identify $\mathbb{C}$ with $\mathbb{R}^{2}$ whenever appropriate and
convenient.
We say that a set $\Lambda\subset\mathbb{C}$ is discrete if, for all
$\lambda\in\Lambda$, one can find a $\delta>0$ such that
$|\lambda-\lambda^{\prime}|>\delta$, for all
$\lambda^{\prime}\in\Lambda\\!\setminus\\{\lambda\\}$. Following the
terminology employed in [22, §2.2], we say that a set
$\Lambda\subset\mathbb{C}$ is relatively separated if
$\mathrm{rel}(\Lambda)\vcentcolon=\sup{\\{\\#(\Lambda\cap
B_{1}(x)):x\in\mathbb{C}\\}}<\infty.$
Further, we say that $\Lambda$ is separated (usually referred to as uniformly
discrete in the literature), if
$\mathrm{sep}(\Lambda)\vcentcolon=\inf\\{|\lambda-\lambda^{\prime}|\colon\lambda,\lambda^{\prime}\in\Lambda,\lambda\neq\lambda^{\prime}\\}>0.$
Finally, for two separated sets
$\Lambda_{1},\Lambda_{2}\subset\mathbb{R}^{2}$, we define their mutual
separation according to
$\mathrm{ms}(\Lambda_{1},\Lambda_{2})\vcentcolon=\inf_{\begin{subarray}{c}\lambda_{1}\in\Lambda_{1},\lambda_{2}\in\Lambda_{2}\\\
\lambda_{1}\neq\lambda_{2}\end{subarray}}{|\lambda_{1}-\lambda_{2}|}.$ (6)
Note that points that are elements of both $\Lambda_{1}$ and $\Lambda_{2}$ are
excluded from consideration in the expression for mutual separation.
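For finite point sets, the three separation quantities just defined can be computed directly. The following sketch (ours, not from the paper) illustrates them; the supremum in $\mathrm{rel}$ over all of $\mathbb{C}$ is approximated by a user-supplied finite list of centers:

```python
import numpy as np

def sep(points):
    """sep(Lambda): smallest pairwise distance; +inf for fewer than 2 points."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 2:
        return np.inf
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return d[~np.eye(len(pts), dtype=bool)].min()

def rel(points, centers):
    """rel(Lambda): sup over x of #(Lambda ∩ B_1(x)); the sup over all of C is
    approximated here by a finite list of candidate centers."""
    pts = np.asarray(points, dtype=float)
    return max(int((np.linalg.norm(pts - np.asarray(c, dtype=float), axis=1) <= 1.0).sum())
               for c in centers)

def ms(L1, L2):
    """ms(Lambda_1, Lambda_2) as in (6); common points (distance 0) are excluded."""
    d = np.linalg.norm(np.asarray(L1, dtype=float)[:, None, :]
                       - np.asarray(L2, dtype=float)[None, :, :], axis=-1)
    d = d[d > 0]
    return d.min() if d.size else np.inf
```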
For a Banach space $\mathcal{B}$, we write
$\left\|\cdot\right\|_{\mathcal{B}}$, $\mathcal{B}^{*}$, and
$\left\langle\cdot,\cdot\right\rangle_{\mathcal{B}\times\mathcal{B}^{*}}$ to
denote the norm, the topological dual of $\mathcal{B}$, and the dual pairing
on $\mathcal{B}$, respectively. Throughout the paper we use $p$ and $q$ to
denote conjugate indices in $[1,\infty]$ such that $1/p+1/q=1$. We write
$\mathscr{M}^{p}$ for the vector space of all complex Radon measures on
$\mathbb{C}$ of the form
$\mu=\sum_{\lambda\in\Lambda}\alpha_{\lambda}\delta_{\lambda}$, where
$\Lambda$ is a relatively separated discrete subset of $\mathbb{R}^{2}$,
$\\{\alpha_{\lambda}\\}_{\lambda\in\Lambda}$ is a sequence in $\mathbb{C}$,
and the norm
$\|\mu\|_{p}\vcentcolon=\begin{cases}\left(\sum_{\lambda\in\Lambda}\left|\alpha_{\lambda}\right|^{p}\right)^{1/p},&\quad\text{if
}p\in[1,\infty)\\\
\sup_{\lambda\in\Lambda}\left|\alpha_{\lambda}\right|,&\quad\text{if
}p=\infty\\\ \end{cases}$
is finite. For such measures we define
$\operatorname{supp}{(\mu)}\vcentcolon=\\{\lambda\in\Lambda:\alpha_{\lambda}\neq
0\\}$. Furthermore, for $s>0$, we let
$\mathscr{M}_{s}^{p}=\\{\mu\in\mathscr{M}^{p}:\mathrm{sep}(\operatorname{supp}(\mu))\geqslant
s\\}$.
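A measure in $\mathscr{M}^{p}$ is just a weighted point set, so $\|\mu\|_{p}$ and $\operatorname{supp}(\mu)$ have a direct computational reading. A minimal sketch (the dict representation is our own convention, not from the paper):

```python
import numpy as np

# mu = sum_lambda alpha_lambda delta_lambda, stored as {(tau, nu): alpha}
def measure_norm(mu, p):
    """||mu||_p: the ell^p norm of the weight sequence, as defined above."""
    a = np.abs(np.array(list(mu.values()), dtype=complex))
    return float(a.max()) if p == np.inf else float((a ** p).sum() ** (1.0 / p))

def support(mu):
    """supp(mu): the atoms carrying a nonzero weight."""
    return {lam for lam, alpha in mu.items() if alpha != 0}
```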
For a complex number $\lambda=\tau+i\nu$ (or the corresponding point
$(\tau,\nu)\in\mathbb{R}^{2}$), we write
$(\mathcal{M}_{\nu}x)(t)\vcentcolon=e^{2\pi i\nu t}x(t)$ for the modulation
operator, $(\mathcal{T}_{\tau}x)(t)\vcentcolon=x(t-\tau)$ for the translation
operator, and $\pi(\lambda)=\mathcal{M}_{\nu}\mathcal{T}_{\tau}$ for the
combined time-frequency shift operator. Recall that, for a nonzero Schwartz
test function $\varphi\in\mathcal{S}(\mathbb{R})$, the short-time Fourier
transform (STFT) with respect to the window function $\varphi$ is the map
$\mathcal{V}_{\varphi}$ taking Schwartz distributions on $\mathbb{R}$ to
complex-valued functions on $\mathbb{C}$ according to
$(\mathcal{V}_{\varphi}x)(\lambda)=\langle
x,\pi(\lambda)\varphi\rangle_{\mathcal{S}^{\prime}(\mathbb{R})\times\mathcal{S}(\mathbb{R})},\quad\text{for
}x\in\mathcal{S}^{\prime}(\mathbb{R}),\lambda\in\mathbb{C},$
where $\mathcal{S}^{\prime}(\mathbb{R})$ denotes the set of tempered
distributions on $\mathbb{R}$. We take $\varphi(t)=2^{\frac{1}{4}}e^{-\pi
t^{2}}$ to be the $L^{2}$-normalized gaussian and, following [7], we write
$M^{p}_{m}(\mathbb{R})=\left\\{x\in\mathcal{S}^{\prime}(\mathbb{R}):\|x\|_{M^{p}_{m}(\mathbb{R})}\vcentcolon=\left(\int_{\mathbb{C}}|(\mathcal{V}_{\varphi}x)(\lambda)|^{p}m(\lambda)^{p}\mathrm{d}\lambda\right)^{1/p}<\infty\right\\},$
for the weighted modulation space on $\mathbb{R}$ of index $p$ and weight
function $m:\mathbb{C}\to\mathbb{R}_{\geqslant 0}$. When $m\equiv 1$, we write
$M^{p}(\mathbb{R})$ for the unweighted modulation space. We remark that
$\varphi$ has the convenient property of being its own Fourier transform,
i.e., $\widehat{\varphi}=\varphi$. According to [7, Thm. 11.3.5, Thm. 11.3.6],
$M^{p}(\mathbb{R})$ is a Banach space, and, for $p\in[1,\infty)$, its dual
space can be identified with $M^{q}(\mathbb{R})$ via the dual pairing
$\langle f,g\rangle_{M^{p}(\mathbb{R})\times
M^{q}(\mathbb{R})}=\langle\mathcal{V}_{\varphi}f,\mathcal{V}_{\varphi}g\rangle_{L^{p}(\mathbb{C})\times
L^{q}(\mathbb{C})},\quad\text{for }f\in M^{p}(\mathbb{R}),g\in
M^{q}(\mathbb{R}).$ (7)
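As a numerical sanity check (ours, not the paper's), the STFT of $\varphi$ with respect to itself can be evaluated by quadrature; for this window a routine Gaussian computation gives the classical identity $|(\mathcal{V}_{\varphi}\varphi)(\tau+i\nu)|=e^{-\pi(\tau^{2}+\nu^{2})/2}$:

```python
import numpy as np

def phi(t):
    """The L^2-normalized gaussian from the text."""
    return 2 ** 0.25 * np.exp(-np.pi * t ** 2)

def stft(x_fn, tau, nu, T=8.0, n=4001):
    """<x, pi(lambda) phi> = int x(t) conj(e^{2 pi i nu t} phi(t - tau)) dt,
    approximated by a Riemann sum on [-T, T] (accurate since phi decays fast)."""
    t = np.linspace(-T, T, n)
    integrand = x_fn(t) * np.conj(np.exp(2j * np.pi * nu * t) * phi(t - tau))
    return integrand.sum() * (t[1] - t[0])
```

At $\lambda=0$ this recovers $\|\varphi\|_{L^{2}}^{2}=1$, confirming the normalization.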
Finally, for real-valued functions $f$ and $g$ of several variables
$p_{1},\dots,p_{n}$ (which may be real or complex numbers, or even functions),
and a non-negative integer $m\leqslant n$, we write
$f\lesssim_{\,p_{1},\dots,p_{m}}g$ if there exists a non-negative function
$C=C(p_{1},\dots,p_{m})$ such that $f\leqslant C\,g$, as well as
$f\asymp_{\,p_{1},\dots,p_{m}}g$ if $f\lesssim_{\,p_{1},\dots,p_{m}}g$ and
$g\lesssim_{\,p_{1},\dots,p_{m}}f$. We use the notation $f\lesssim g$ only if
$C$ is a universal constant, i.e., if it is independent of all of the
$p_{1},\dots,p_{n}$.
## II Contributions
### II-A Operators and identifiability
In order to formalize our definition of identifiability (4), we first need to
make sense of the integral (3). Concretely, we consider only probing signals
$x$ in the modulation space $M^{1}(\mathbb{R})$ (also referred to in the
literature as $\mathcal{S}_{0}$, the Feichtinger algebra) and, for a measure
$\mu\in\mathscr{M}^{p}$, we interpret (3) as a linear operator
$\mathcal{H}_{\mu}\colon M^{1}(\mathbb{R})\rightarrow M^{p}(\mathbb{R})$ given
by
$\displaystyle\mathcal{H}_{\mu}x$
$\displaystyle=\int_{\mathbb{R}^{2}}x(\cdot-\tau)e^{2\pi
i\nu\,\cdot}\,\mathrm{d}{\mu}(\tau,\nu)\vcentcolon=\sum_{\lambda\in\mathrm{supp}(\mu)}\mu(\\{\lambda\\})\,\pi(\lambda)x.$
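The sum defining $\mathcal{H}_{\mu}x$ can be evaluated pointwise on a time grid; a minimal sketch (ours; the dict representation of $\mu$ is our own convention):

```python
import numpy as np

def phi(t):
    # L^2-normalized gaussian used as the probing signal
    return 2 ** 0.25 * np.exp(-np.pi * t ** 2)

def H_mu(mu, x_fn, t):
    """(H_mu x)(t) = sum over atoms of mu({lambda}) e^{2 pi i nu t} x(t - tau),
    i.e., a weighted superposition of time-frequency shifts pi(lambda) x."""
    out = np.zeros_like(t, dtype=complex)
    for (tau, nu), alpha in mu.items():
        out += alpha * np.exp(2j * np.pi * nu * t) * x_fn(t - tau)
    return out
```

For $\mu=\delta_{(0,0)}$ this reduces to the probing signal itself.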
The convergence of this sum in the Banach space $M^{p}(\mathbb{R})$ is
guaranteed by the following proposition whose proof can be found in the
appendix.
###### Proposition 1.
Let $\Lambda$ be a relatively separated subset of $\mathbb{C}$, and let
$p\in[1,\infty)$. Then
1. (i)
$\mathcal{H}_{\Lambda}:\ell^{p}(\Lambda)\times M^{1}(\mathbb{R})\to
M^{p}(\mathbb{R})$ given by
$\mathcal{H}_{\Lambda}(\alpha,x)\vcentcolon=\sum_{\lambda\in\Lambda}\alpha_{\lambda}\pi(\lambda)x,\quad\text{for
all }\alpha\in\ell^{p}(\Lambda),\;x\in M^{1}(\mathbb{R}),$
is a well-defined continuous bilinear operator, in the sense that the sum
converges unconditionally in the norm of $M^{p}(\mathbb{R})$. Moreover, this
operator is bounded according to
$\|\mathcal{H}_{\Lambda}(\alpha,x)\|_{M^{p}(\mathbb{R})}\lesssim\mathrm{rel}(\Lambda)\,\|\alpha\|_{\ell^{p}}\|x\|_{M^{1}(\mathbb{R})},$
for all $\alpha\in\ell^{p}(\Lambda)$, $x\in M^{1}(\mathbb{R})$.
2. (ii)
For a fixed $x\in M^{1}(\mathbb{R})$, the adjoint operator
$\big{(}\mathcal{H}_{\Lambda}(\cdot,x)\big{)}^{*}:M^{q}(\mathbb{R})\to\ell^{q}$
of the map $\ell^{p}\ni\alpha\mapsto\mathcal{H}_{\Lambda}(\alpha,x)$ is given
by
$\big{(}\mathcal{H}_{\Lambda}(\cdot,x)\big{)}^{*}(y)=\\{\langle
y,\pi(\lambda)x\rangle_{M^{q}(\mathbb{R})\times
M^{p}(\mathbb{R})}\\}_{\lambda\in\Lambda},\quad\text{for }y\in
M^{q}(\mathbb{R}).$ (8)
Next, for a measure
$\mu=\sum_{\lambda\in\Lambda}\alpha_{\lambda}\delta_{\lambda}\in\mathscr{M}^{p}$,
define $\mathcal{H}_{\mu}:M^{1}(\mathbb{R})\to M^{p}(\mathbb{R})$ by
$\mathcal{H}_{\mu}(x)=\mathcal{H}_{\operatorname{supp}(\mu)}(\alpha,x)$. Then,
1. (iii)
for every $\mu\in\mathscr{M}^{p}$,
$\|\mu\|_{\ell^{\infty}}\lesssim\|\mathcal{H}_{\mu}\|_{M^{1}(\mathbb{R})\to
M^{p}(\mathbb{R})}.$ (9)
As a consequence of item (iii) of Proposition 1, we have
$\|\mu_{1}-\mu_{2}\|_{\ell^{\infty}}\lesssim\|\mathcal{H}_{\mu_{1}-\mu_{2}}\|_{M^{1}(\mathbb{R})\to
M^{p}(\mathbb{R})}=\|\mathcal{H}_{\mu_{1}}-\mathcal{H}_{\mu_{2}}\|_{M^{1}(\mathbb{R})\to
M^{p}(\mathbb{R})},$
and therefore $\mu_{1}=\mu_{2}$ whenever
$\mathcal{H}_{\mu_{1}}=\mathcal{H}_{\mu_{2}}$. In other words, the measures in
$\mathscr{M}^{p}$ are completely characterized by their action on
$M^{1}(\mathbb{R})$, and thus there is a one-to-one correspondence between the
measures in $\mathscr{M}^{p}$ and the operators
$\\{\mathcal{H}_{\mu}:\mu\in\mathscr{M}^{p}\\}$. Note that this property is
necessary for there to be any hope of recovering a measure $\mu$ from a
measurement $\mathcal{H}_{\mu}x$ with respect to a _single_ probing signal
$x$.
We are now ready to state our definition of stable identifiability:
###### Definition 1 (Stable identifiability).
Let $p\in[1,\infty)$. We say that a class of measures
$\mathscr{H}\subset\mathscr{M}^{p}$ is stably identifiable by a probing signal
$x\in M^{1}(\mathbb{R})$ if there exist constants $C_{1},C_{2}>0$ (that may
depend on $p$ and $x$) such that
$C_{1}\left(\mathrm{ms}(\Lambda_{1},\Lambda_{2})\wedge
1\right)\left\|\mu_{1}-\mu_{2}\right\|_{p}\leqslant\left\|\mathcal{H}_{\mu_{1}}x-\mathcal{H}_{\mu_{2}}x\right\|_{M^{p}(\mathbb{R})}\leqslant
C_{2}\left\|\mu_{1}-\mu_{2}\right\|_{p},$ (10)
for all $\mu_{1},\mu_{2}\in\mathscr{H}$, where
$\Lambda_{j}\vcentcolon=\operatorname{supp}(\mu_{j})$, $j\in\\{1,2\\}$.
The significance of the term $\mathrm{ms}(\Lambda_{1},\Lambda_{2})$ in (10)
becomes apparent when we consider classes $\mathscr{H}$ that contain measures
with potentially arbitrarily close supports. For a concrete example, consider
the class $\mathscr{H}=\\{\mu\in\mathscr{M}^{p}:\\#(\mathrm{supp}(\mu))=1\\}$
of single time-frequency shifts. This class contains the measures
$\mu=\delta_{(0,0)}$ and $\mu_{\epsilon}=\delta_{(0,\epsilon)}$, for all
$\epsilon>0$. Let $x\in M^{1}(\mathbb{R})$ be a probing signal satisfying the
time-localization constraint $t\,x(t)\in L^{2}(\mathrm{d}t)$, but otherwise
arbitrary. Then
$\mathrm{ms}(\operatorname{supp}(\mu),\operatorname{supp}(\mu_{\epsilon}))=\epsilon$,
and
$\epsilon^{-1}(\mathcal{H}_{\mu}x-\mathcal{H}_{\mu_{\epsilon}}x)=\epsilon^{-1}(1-e^{2\pi
i\epsilon\,\cdot})\,x\to-(2\pi i\,\cdot)x$
in $L^{2}(\mathbb{R})=M^{2}(\mathbb{R})$ as $\epsilon\to 0$, and hence
$\|\mathcal{H}_{\mu}x-\mathcal{H}_{\mu_{\epsilon}}x\|_{M^{2}(\mathbb{R})}/\mathrm{ms}(\operatorname{supp}(\mu),\operatorname{supp}(\mu_{\epsilon}))\asymp\|t\,x(t)\|_{L^{2}(\mathrm{d}t)}>0,$
(11)
as $\epsilon\to 0$. On the other hand, $\|\mu-\mu_{\epsilon}\|_{2}=\sqrt{2}$
is bounded away from $0$ as $\epsilon\to 0$. Thus, if the class $\mathscr{H}$
is to be identifiable, the lower bound in (4) needs to decay at least linearly
with $\mathrm{ms}(\operatorname{supp}(\mu_{1}),\operatorname{supp}(\mu_{2}))$,
for $\mu_{1},\mu_{2}\in\mathscr{H}$. In contrast to (11), one could have
another class $\mathscr{K}$ containing measures $\mu^{\prime}$ and
$\mu_{\epsilon}^{\prime}$, for $\epsilon>0$, so that
$\|\mathcal{H}_{\mu^{\prime}}x-\mathcal{H}_{\mu^{\prime}_{\epsilon}}x\|_{M^{2}(\mathbb{R})}/\mathrm{ms}(\operatorname{supp}(\mu^{\prime}),\operatorname{supp}(\mu^{\prime}_{\epsilon}))\to
0$, as $\epsilon\to 0$, i.e.,
$\|\mathcal{H}_{\mu^{\prime}}x-\mathcal{H}_{\mu^{\prime}_{\epsilon}}x\|_{M^{2}(\mathbb{R})}$
decays superlinearly with
$\mathrm{ms}(\operatorname{supp}(\mu^{\prime}),\operatorname{supp}(\mu^{\prime}_{\epsilon}))$.
Classes such as $\mathscr{K}$ are not covered by our theory, and we hence
exclude them from our definition of stable identifiability. In summary,
Definition 1 says that we consider a class of measures to be stably
identifiable if the decay of
$\|\mathcal{H}_{\mu_{1}}x-\mathcal{H}_{\mu_{2}}x\|_{M^{2}(\mathbb{R})}$ as
$\mathrm{ms}(\operatorname{supp}(\mu_{1}),\operatorname{supp}(\mu_{2}))\to 0$
is not faster than linear. This property will turn out to be crucial later
when we discuss robust recovery (specifically, in the proofs of Theorems 3 and
4).
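The linear decay in the example above can be reproduced numerically: with $x=\varphi$, the ratio $\|\mathcal{H}_{\mu}\varphi-\mathcal{H}_{\mu_{\epsilon}}\varphi\|_{L^{2}}/\epsilon$ approaches $\|2\pi t\,\varphi(t)\|_{L^{2}(\mathrm{d}t)}$, which a routine Gaussian-moment computation (ours, not from the paper) evaluates to $\sqrt{\pi}$:

```python
import numpy as np

t = np.linspace(-8.0, 8.0, 4001)
dt = t[1] - t[0]
phi = 2 ** 0.25 * np.exp(-np.pi * t ** 2)   # sampled L^2-normalized gaussian

def l2_norm(f):
    # Riemann-sum approximation of the L^2(dt) norm
    return float(np.sqrt((np.abs(f) ** 2).sum() * dt))

def ratio(eps):
    """||H_mu phi - H_{mu_eps} phi||_{L^2} / eps for mu = delta_{(0,0)} and
    mu_eps = delta_{(0,eps)}, i.e., ||(1 - e^{2 pi i eps t}) phi||_{L^2} / eps."""
    return l2_norm((1.0 - np.exp(2j * np.pi * eps * t)) * phi) / eps

limit = l2_norm(2.0 * np.pi * t * phi)   # the limiting value ||2 pi t phi||_{L^2}
```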
### II-B A necessary and sufficient condition for identifiability
As already mentioned in the introduction, our necessary and sufficient
condition for identifiability will be expressed in terms of the density of
support sets measured uniformly over the class of measures under
consideration. Concretely, we have the following definition:
###### Definition 2 (Upper Beurling class density).
Let $\mathcal{L}$ be a collection of relatively separated sets in
$\mathbb{R}^{2}$, and, for $R>0$, define
$[0,R]^{2}=[0,R]\times[0,R]\subset\mathbb{R}^{2}$. For
$\Lambda\in\mathcal{L}$, let $n^{+}(\Lambda,[0,R]^{2})$ be the largest number
of points of $\Lambda$ contained in any translate of $[0,R]^{2}$ in the plane.
We then define the upper Beurling class density of $\mathcal{L}$ according to
$\mathcal{D}^{+}(\mathcal{L})=\limsup_{R\to\infty}\sup_{\Lambda\in\mathcal{L}}\frac{n^{+}(\Lambda,[0,R]^{2})}{R^{2}}.$
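For a concrete feel (our illustration, with the supremum over translates approximated by a finite shift list), the window counts $n^{+}(\Lambda,[0,R]^{2})$ for the square lattice $\gamma\mathbb{Z}^{2}$ satisfy $n^{+}/R^{2}\approx\gamma^{-2}$ for large $R$, so Definition 2 assigns the one-element class $\\{\gamma\mathbb{Z}^{2}\\}$ the density $\gamma^{-2}$:

```python
import numpy as np

def n_plus(points, R, shifts):
    """Largest number of points in a translate [0,R]^2 + s; the sup over all
    translates is approximated here by the finite list `shifts`."""
    pts = np.asarray(points, dtype=float)
    best = 0
    for sx, sy in shifts:
        inside = ((pts[:, 0] >= sx) & (pts[:, 0] <= sx + R)
                  & (pts[:, 1] >= sy) & (pts[:, 1] <= sy + R))
        best = max(best, int(inside.sum()))
    return best

gamma = 2.0
coords = np.arange(-40, 41) * gamma            # finite patch of gamma * Z^2
lattice = [(x, y) for x in coords for y in coords]
density_estimate = n_plus(lattice, 40.0, [(0.0, 0.0)]) / 40.0 ** 2
```

The finite-$R$ estimate overshoots $\gamma^{-2}$ slightly due to boundary points, consistent with the $\limsup$ in the definition.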
We are now ready to state the first main result of the paper.
###### Theorem 1 (A sufficient condition for identifiability).
Let $p\in(1,\infty)$ and $s>0$, let $\mathscr{H}\subset\mathscr{M}_{s}^{p}$ be
a class of measures, and set
$\mathcal{L}=\\{\operatorname{supp}(\mu):\mu\in\mathscr{H}\\}$. Suppose that
$\mathcal{D}^{+}(\mathcal{L})<\frac{1}{2}\,$. Then the class $\mathscr{H}$ is
identifiable by the standard gaussian $\varphi(t)=2^{\frac{1}{4}}e^{-\pi
t^{2}}$, $\varphi\in M^{1}(\mathbb{R})$.
Crucially, the support sets $\operatorname{supp}(\mu)\in\mathcal{L}$ in
Theorem 1 are not assumed to be subsets of a lattice or any other a priori
fixed subset of $\mathbb{R}^{2}$. In particular, one allows $\mathscr{H}$ to
contain measures $\mu_{1}$ and $\mu_{2}$ with arbitrarily small
$\mathrm{ms}(\operatorname{supp}(\mu_{1}),\operatorname{supp}(\mu_{2}))$.
Note that a subclass $\mathscr{H}^{\prime}$ of an identifiable class
$\mathscr{H}$ is trivially identifiable, and accordingly the upper Beurling
class density of the supports of measures in $\mathscr{H}^{\prime}$ does not
exceed that of the support sets corresponding to $\mathscr{H}$. The
sufficiency result in Theorem 1 is therefore “compatible” with the inclusion
relation on classes. By contrast, the “non-identifiability” of a class
$\mathscr{H}\subset\mathscr{M}^{p}_{s}$ (i.e., the nonexistence of a probing
signal in $M^{1}(\mathbb{R})$ by which the class would be identifiable) can
only be meaningfully assessed in terms of the Beurling density
$\mathcal{D}^{+}(\\{\operatorname{supp}(\mu):\mu\in\mathscr{H}\\})$ for
sufficiently rich classes of measures. For example, one can construct
arbitrarily large finite subsets $\mathscr{H}$ of $\mathscr{M}^{p}_{s}$ with
arbitrarily large
$\mathcal{D}^{+}(\\{\operatorname{supp}(\mu):\mu\in\mathscr{H}\\})$, and yet
$\mathscr{H}$ will be identifiable (e.g. by the standard gaussian, using the
property that distinct time-frequency shifts of a gaussian are linearly
independent). A converse statement to Theorem 1 can hence be meaningfully
formulated only for classes $\mathscr{H}$ that are “sufficiently rich” in a
suitable sense. In the present paper we will do this for classes of measures
that are subspaces of $\mathscr{M}^{p}$, with support sets that are closed
under limits with respect to weak convergence and invariant under time-
frequency shifts.
Before providing the precise definition of these classes of measures, we need
to introduce the notion of weak convergence for subsets of $\mathbb{C}$.
Concretely, we say that a sequence of separated subsets
$\\{\Lambda_{n}\\}_{n\in\mathbb{N}}$ converges weakly to
$\Lambda\subset\mathbb{C}$, and write $\Lambda_{n}\xrightarrow[]{w}\Lambda$,
if
$\mathrm{dist}\big{(}(\Lambda_{n}\cap B_{R}(z))\cup\partial
B_{R}(z),(\Lambda\cap B_{R}(z))\cup\partial B_{R}(z)\big{)}\to 0\quad\text{as
}n\to\infty,$ (12)
for all $R>0$ and $z\in\mathbb{C}$, where $\mathrm{dist}$ denotes the
Hausdorff metric on the subsets of $\mathbb{C}$. We are now ready to formalize
the type of classes covered by our necessity result.
###### Definition 3 (Regular $\mathscr{H}(\mathcal{L})^{p}$ classes).
Let $p\in(1,\infty)$ and $s>0$, and let $\mathcal{L}$ be a collection of
separated subsets of $\mathbb{C}$.
1. (i)
We say that $\mathcal{L}$ is _closed and shift-invariant_ (CSI) if it is
closed under limits with respect to weak convergence, and
$\Lambda+z\vcentcolon=\\{\lambda+z:\lambda\in\Lambda\\}\in\mathcal{L}$, for
all $\Lambda\in\mathcal{L}$ and $z\in\mathbb{C}$.
2. (ii)
We define a class of measures
$\mathscr{H}(\mathcal{L})^{p}\subset\mathscr{M}^{p}$ according to
$\mathscr{H}(\mathcal{L})^{p}=\Bigg{\\{}\sum_{\lambda\in\Lambda}\alpha_{\lambda}\delta_{\lambda}:\Lambda\in\mathcal{L},\alpha\in\ell^{p}(\Lambda)\Bigg{\\}}.$
We call $\mathscr{H}(\mathcal{L})^{p}$ _$s$-regular_ if $\mathcal{L}$ is CSI
and $\mathrm{sep}(\Lambda)\geqslant s$, for all $\Lambda\in\mathcal{L}$.
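The weak convergence of support sets in (12), which underpins the CSI condition, can be probed numerically for finite sets. In this sketch (ours, not from the paper) the boundary circle $\partial B_{R}(z)$ is discretized, which perturbs the Hausdorff distance only by the mesh of the discretization:

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite nonempty point sets in C."""
    A, B = np.asarray(A, dtype=complex), np.asarray(B, dtype=complex)
    d = np.abs(A[:, None] - B[None, :])
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

def dist_in_ball(L1, L2, z, R, n_bdry=720):
    """The quantity in (12): Hausdorff distance between the two sets clipped
    to B_R(z), each augmented by (a discretization of) the circle dB_R(z)."""
    bdry = z + R * np.exp(2j * np.pi * np.arange(n_bdry) / n_bdry)
    def clip(L):
        L = np.asarray(L, dtype=complex)
        return np.concatenate([L[np.abs(L - z) <= R], bdry])
    return hausdorff(clip(L1), clip(L2))
```

For $\Lambda_{n}=\\{1/n\\}$ and $\Lambda=\\{0\\}$ the distance is $1/n$ in every ball around the origin, consistent with $\Lambda_{n}\xrightarrow{w}\Lambda$.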
Even though the conditions in Definition 3 are rather technical, they are not
overly restrictive, as evidenced by several examples of $s$-regular classes
provided in §II-D. We are now ready to state our second main result, which is
a necessary condition for identifiability of $s$-regular classes and as such
constitutes a partial converse to Theorem 1.
###### Theorem 2 (A necessary condition for identifiability of $s$-regular
classes).
Let $p\in(1,\infty)$ and $s>0$, and let
$\mathscr{H}(\mathcal{L})^{p}\subset\mathscr{M}^{p}$ be an $s$-regular class.
If there exists an $x\in M^{1}(\mathbb{R})$ such that
$\mathscr{H}(\mathcal{L})^{p}$ is identifiable by $x$, then
$\mathcal{D}^{+}(\mathcal{L})<\frac{1}{2}$.
### II-C Identifiability and robust recovery
In this subsection we formalize the claim (5) made in the introduction under
the assumption that $x\in M^{1}_{m}(\mathbb{R})$, with the weight function
$m(z)=1+|z|$, for $z\in\mathbb{C}$. Informally, this assumption imposes
faster-than-linear decay on $x$ in both the time and frequency domains. Note
that the $L^{2}$-normalized gaussian $\varphi$ is in $M^{1}_{m}(\mathbb{R})$,
as its STFT decays exponentially (by virtue of
$\varphi\in\mathcal{S}(\mathbb{R})$ and [7, Thm. 11.2.5]).
We begin by defining the weak-* topology on $\mathscr{M}_{s}^{p}$, for
$p\in(1,\infty)$. Concretely, for $\mu\in\mathscr{M}_{s}^{p}$ and a sequence
$\\{\mu_{n}\\}_{n\in\mathbb{N}}\subset\mathscr{M}_{s}^{p}$, we say that
$\\{\mu_{n}\\}_{n\in\mathbb{N}}$ converges to $\mu$ in the weak-* topology of
$\mathscr{M}_{s}^{p}$, and write $\mu_{n}\xrightarrow[]{w^{*}}\mu$, if
$\lim_{n\to\infty}\int_{\mathbb{C}}\overline{f}\mathrm{d}\mu_{n}=\int_{\mathbb{C}}\overline{f}\mathrm{d}\mu,$
(13)
for all continuous $f:\mathbb{C}\to\mathbb{C}$ such that
$\lim_{|z|\to\infty}f(z)=0$ and
$\Big{\|}\sup_{\begin{subarray}{c}y\in\mathbb{C},|y|\leqslant
1\end{subarray}}|f(z+y)|\Big{\|}_{L^{q}(\mathrm{d}z)}<\infty.$
This definition corresponds to convergence in the weak-* topology on the
Wiener amalgam space $W(\mathcal{M},L^{p})$, which will be defined and treated
systematically in §III. In order to formalize (5), it will be helpful to first
state the following weak-* recovery result for $s$-regular classes:
###### Theorem 3 (Weak-* Recovery Theorem).
Let $p\in(1,\infty)$ and $s>0$, and let
$\mathscr{H}(\mathcal{L})^{p}\subset\mathscr{M}_{s}^{p}$ be an $s$-regular
class. Assume furthermore that $\mathscr{H}(\mathcal{L})^{p}$ is identifiable
by a probing signal $x\in M^{1}_{m}(\mathbb{R})$, where $m(z)=1+|z|$. Then
* (i)
if $\mu,\widetilde{\mu}\in\mathscr{H}(\mathcal{L})^{p}$ are such that
$\mathcal{H}_{\widetilde{\mu}}\,x=\mathcal{H}_{\mu}x$, then
$\widetilde{\mu}=\mu$.
* (ii)
Let $\mu\in\mathscr{H}(\mathcal{L})^{p}$ and let
$\\{\mu_{n}\\}_{n\in\mathbb{N}}$ be a sequence in
$\mathscr{H}(\mathcal{L})^{p}$. Then
$\mathcal{H}_{\mu_{n}}x\to\mathcal{H}_{\mu}x$ in the weak-* topology of
$M^{p}(\mathbb{R})$ if and only if $\mu_{n}\xrightarrow[]{w^{*}}\mu$ in
$\mathscr{M}_{s}^{p}$.
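A toy illustration (ours, not from the paper) of the weak-* pairing in (13): for a Gaussian test function $f$, the measures $\mu_{n}=\delta_{(0,1/n)}$ pair ever closer to $\mu=\delta_{(0,0)}$, while an atom far from the origin is essentially invisible to the pairing, previewing why weak-* convergence alone is a weak recovery guarantee:

```python
import numpy as np

def pair(f, mu):
    """int conj(f) d mu for a finite atomic measure {(tau, nu): alpha}."""
    return sum(alpha * np.conj(f(tau, nu)) for (tau, nu), alpha in mu.items())

f = lambda tau, nu: np.exp(-(tau ** 2 + nu ** 2))   # continuous, vanishing at infinity

target = pair(f, {(0.0, 0.0): 1.0})
gaps = [abs(pair(f, {(0.0, 1.0 / n): 1.0}) - target) for n in (1, 10, 100)]
# a spurious atom far from the origin contributes essentially nothing:
spurious_gap = abs(pair(f, {(0.0, 0.0): 1.0, (100.0, 0.0): 1.0}) - target)
```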
The proof of Theorem 3 relies crucially on the fact that the decay of the
lower bound in (10) as a function of
$\mathrm{ms}(\operatorname{supp}(\mu_{1}),\operatorname{supp}(\mu_{2}))$ is
not faster than linear.
Note that item (i) of Theorem 3 guarantees perfect recovery of measures in
$\mathscr{H}(\mathcal{L})^{p}$ under perfect measurement matching. However,
this does not go a long way towards establishing (5) as item (ii) of the
theorem deals with convergence in weak-* topologies “only”. To illustrate that
a stronger form of convergence is needed, consider the $1/2$-regular class
$\mathscr{H}(\mathcal{L})^{2}$, where
$\mathcal{L}=\\{\Lambda\subset\mathbb{C}:\mathrm{sep}(\Lambda)\geqslant
1/2,\\#(\Lambda)\leqslant 2\\}$. In this class
$\delta_{0,0}+\delta_{n,0}\xrightarrow[]{w^{*}}\delta_{0,0}$ as $n\to\infty$
(where $\delta_{\tau,\nu}$ is again the Dirac point measure with mass at
$(\tau,\nu)$), and so, if one were to rely on the weak-* convergence guarantee
only, one could argue that $\\{\delta_{0,0}+\delta_{n,0}\\}_{n\in\mathbb{N}}$
recovers $\delta_{0,0}$. This sequence does, indeed, capture the component
$\delta_{0,0}$, but it also features the nonvanishing spurious component
$\delta_{n,0}$. Similarly, on the measurement side of (5), taking $x=\varphi$
as the probing signal would yield $\varphi+\varphi(\,\cdot-n)\to\varphi$ in
the weak-* topology of $L^{2}$, but not in the norm topology. We can thus hope
that upgrading from weak-* convergence to norm convergence on the measurement
side of (5) might imply a stronger form of convergence of the sequence of
candidate measures to the target measure. The following theorem establishes
that this is, indeed, the case for $s$-regular classes
$\mathscr{H}(\mathcal{L})^{p}$. Concretely, convergence of the measurements in
norm implies that the candidate measures $\mu_{n}$ approximate arbitrarily large
finite sections of the target measure $\mu$ and do not have any spurious
components.
###### Theorem 4 (Robust Recovery Theorem).
Let $p\in(1,\infty)$ and $s>0$, and let
$\mathscr{H}(\mathcal{L})^{p}\subset\mathscr{M}_{s}^{p}$ be an $s$-regular
class. Assume furthermore that $\mathscr{H}(\mathcal{L})^{p}$ is identifiable
by a probing signal $x\in M^{1}_{m}(\mathbb{R})$, where $m(z)=1+|z|$, and let
$C_{1}$ and $C_{2}$ be the corresponding constants such that (10) is
fulfilled. Fix a $\mu\in\mathscr{H}(\mathcal{L})^{p}$, write
$\Lambda=\operatorname{supp}(\mu)$, and let $\\{\mu_{n}\\}_{n\in\mathbb{N}}$
be a sequence in $\mathscr{H}(\mathcal{L})^{p}$ such that
$\|\mathcal{H}_{\mu_{n}}x-\mathcal{H}_{\mu}x\|_{M^{p}(\mathbb{R})}\to 0$ as
$n\to\infty$.
Then, for every $\epsilon>0$ and every finite subset $\widetilde{\Lambda}$ of
$\Lambda$ such that
$\|\mu-\mu\mathds{1}_{\widetilde{\Lambda}}\|_{p}<\epsilon$, there is an
$N\in\mathbb{N}$ so that, for all $n\geqslant N$, the measures $\mu_{n}$ take
the form
$\mu_{n}=\sum_{{\lambda}\in\tilde{\Lambda}}\alpha^{(n)}_{\lambda}\delta_{\lambda+\bm{\varepsilon}_{n}(\lambda)}+\rho_{n},$
where $|\bm{\varepsilon}_{n}(\lambda)|\leqslant\epsilon$ and
$|\alpha^{(n)}_{\lambda}-\alpha_{\lambda}|\leqslant\epsilon$, for all
$\lambda\in\widetilde{\Lambda}$, and
$\|\rho_{n}\|_{p}\leqslant\frac{4C_{2}}{C_{1}(s\wedge 1)}\,\epsilon\,$.
One can view
$\sum_{{\lambda}\in\tilde{\Lambda}}\alpha^{(n)}_{\lambda}\delta_{\lambda+\bm{\varepsilon}_{n}(\lambda)}$
as the “successfully recovered finite section” of $\mu$, which approximates
both the time-frequency shifts and their weights within $\epsilon$ error,
whereas $\rho_{n}$ is the “spurious” component, whose norm is also
proportional to $\epsilon$. The constant of proportionality
$4\,(C_{2}/C_{1})\cdot(s\wedge 1)^{-1}$ in the bound on $\|\rho_{n}\|_{p}$ can be
interpreted as a “condition number”, indicating that the spurious component is
more difficult to suppress when the ratio of identifiability constants
$C_{2}/C_{1}$ is large, or when the separation $s$ of the measures under
consideration is excessively small, which agrees with our intuition on the
behavior of the “difficult cases”.
### II-D Examples of identifiable and non-identifiable $s$-regular classes
Finally, we present several explicit families of $s$-regular classes and
discuss their identifiability in view of Theorems 1 and 2. Let
$p\in(1,\infty)$, $s>0$, $N\in\mathbb{N}$, $\theta>0$, and $R>0$, and define
the sets
$\displaystyle\mathcal{L}_{s}^{\text{sep}}$
$\displaystyle=\\{\Lambda\subset\mathbb{R}^{2}:\mathrm{sep}(\Lambda)\geqslant
s\\},$ $\displaystyle\mathcal{L}_{s,N}^{\text{fin}}$
$\displaystyle=\\{\Lambda\subset\mathbb{R}^{2}:\mathrm{sep}(\Lambda)\geqslant
s,\\#(\Lambda)\leqslant N\\},\text{ and}$
$\displaystyle\mathcal{L}_{s,\theta,R}^{\text{Ray}}$
$\displaystyle=\\{\Lambda\subset\mathbb{R}^{2}:\mathrm{sep}(\Lambda)\geqslant
s,\,n^{+}\\!\left(\Lambda,(0,R)^{2}\right)\leqslant\theta R^{2}\\}.$
We call the corresponding sets
$\mathscr{H}({\mathcal{L}_{s}^{\text{sep}}})^{p}$,
$\mathscr{H}({\mathcal{L}_{s,N}^{\text{fin}}})^{p}$, and
$\mathscr{H}({\mathcal{L}_{s,\theta,R}^{\text{Ray}}})^{p}$, the
$\ell^{2}$-separated, finite, and Rayleigh classes, respectively. The
following proposition shows that these classes are $s$-regular.
###### Proposition 2.
Let $s>0$, $N\in\mathbb{N}$, $\theta>0$, and $R>0$. Then the collections
$\mathcal{L}_{s}^{\text{sep}}$, $\mathcal{L}_{s,N}^{\text{fin}}$, and
$\mathcal{L}_{s,\theta,R}^{\text{Ray}}$ are CSI and so the corresponding
$\ell^{2}$-separated, finite, and Rayleigh classes are $s$-regular.
Theorems 1 and 2 can be used to obtain the following identifiability results
for these classes.
###### Corollary 5 (Finite class).
Let $p\in(1,\infty)$, $s>0$, and $N\in\mathbb{N}$. Then the class
$\mathscr{H}(\mathcal{L}_{s,N}^{\text{fin}})^{p}$ is identifiable by the gaussian
$\varphi(t)=2^{\frac{1}{4}}e^{-\pi t^{2}}$.
###### Corollary 6 ($\ell^{2}$-separated class).
Let $p\in(1,\infty)$ and $s>0$. Then,
* (i)
if $s>2\cdot 3^{-\frac{1}{4}}$,
$\mathscr{H}({\mathcal{L}_{s}^{\text{sep}}})^{p}$ is stably identifiable by
$\varphi$, and
* (ii)
if $s\leqslant 2\cdot 3^{-\frac{1}{4}}$,
$\mathscr{H}({\mathcal{L}_{s}^{\text{sep}}})^{p}$ is not stably identifiable
by any probing signal.
###### Corollary 7 (Rayleigh class).
Let $p\in(1,\infty)$ and $s\in(0,\theta^{-1/2})$. Then,
* (i)
if $\theta<\frac{1}{2}$,
$\mathscr{H}({\mathcal{L}_{s,\theta,R}^{\text{Ray}}})^{p}$ is stably
identifiable by $\varphi$, for all $R>0$, and
* (ii)
if $\theta>\frac{1}{2}$, there exists an $R_{0}>0$ such that
$\mathscr{H}({\mathcal{L}_{s,\theta,R}^{\text{Ray}}})^{p}$ is not stably
identifiable by any probing signal, for all $R\geqslant R_{0}$.
One could also consider the class $\mathscr{H}(\\{\Xi\\})^{p}$ for a fixed
lattice $\Xi=A(\mathbb{Z}\times\mathbb{Z})+b$, where $A\in\mathbb{R}^{2\times
2}$ and $b\in\mathbb{R}^{2}$, in which case $\mathscr{H}(\\{\Xi\\})^{p}$ is
$\mathrm{sep}(\Xi)$-separated, stably identifiable by $\varphi$ if
$\det(A)>1$, and not stably identifiable by any probing signal if
$\det(A)\leqslant 1$.
## III Lattices, Beurling densities, and Wiener amalgam spaces
In this section we introduce various technical tools used throughout the
paper. We begin with square lattices in $\mathbb{C}$ and write
$\Omega_{\gamma}=\\{\omega_{m,n}=\gamma(m+in)\\}_{m,n\in\mathbb{Z}}$ for the
square lattice in $\mathbb{C}$ of mesh size $\gamma>0$. Whenever we identify
$\mathbb{C}$ with $\mathbb{R}^{2}$, $\Omega_{\gamma}$ is equivalently given by
$\\{(\gamma m,\gamma n):m,n\in\mathbb{Z}\\}$. Next, we define the (standard)
upper Beurling density, which is analogous to our Definition 2, but is defined
for individual subsets of $\mathbb{R}^{2}$, instead of classes of subsets.
###### Definition 4 (Upper Beurling density, [17, p. 346][23, p. 47]).
Let $\Lambda$ be a relatively separated set in $\mathbb{R}^{2}$, and, for
$R>0$, let $[0,R]^{2}\subset\mathbb{R}^{2}$. Let $n^{+}(\Lambda,[0,R]^{2})$ be
the largest number of points of $\Lambda$ contained in any translate of
$[0,R]^{2}$. We then define
$D^{+}(\Lambda)\vcentcolon=\limsup_{R\to\infty}\frac{n^{+}(\Lambda,[0,R]^{2})}{R^{2}},$
and we call this quantity the upper (standard) Beurling density of $\Lambda$.
The following three lemmas, whose proofs can be found in the appendix, relate
the lattices $\Omega_{\gamma}$, the upper Beurling class density, and the
standard Beurling density.
###### Lemma 8.
Let $\mathcal{L}$ be a collection of relatively separated sets in
$\mathbb{R}^{2}$, and suppose that $\mathcal{D}^{+}(\mathcal{L})<\infty$.
Then,
1. (i)
for every $\theta>\mathcal{D}^{+}(\mathcal{L})$, there exists an $R_{0}>0$
such that
$n^{+}(\Lambda,(0,R)^{2})\leqslant\theta R^{2},$
for all $\Lambda\in\mathcal{L}$ and $R\geqslant R_{0}$, and
2. (ii)
$\mathcal{D}^{+}(\mathcal{L})\geqslant\sup_{\Lambda\in\mathcal{L}}D^{+}(\Lambda)$.
###### Definition 5.
Let $\Lambda$ be a non-empty relatively separated subset of $\mathbb{C}$, and
let $\gamma>0$ and $R>0$. We say that $\Lambda$ is $R$-uniformly close to
$\Omega_{\gamma}=\\{\omega_{m,n}=\gamma(m+in)\\}_{m,n\in\mathbb{Z}}\,$ if
there exists an enumeration $\\{\lambda_{m,n}\\}_{(m,n)\in\mathcal{I}}$ of
$\Lambda$ (with index set $\mathcal{I}\subset\mathbb{Z}\times\mathbb{Z}$) such
that $|\lambda_{m,n}-\omega_{m,n}|\leqslant R$, for all $(m,n)\in\mathcal{I}$.
###### Lemma 9.
Let $\Lambda$ be a non-empty discrete set in $\mathbb{C}$, and let $\theta>0$,
$\gamma>0$, and $R>0$. If $\gamma^{-2}>\theta$ and
$n^{+}(\Lambda,(0,R)^{2})\leqslant\theta R^{2}$, then there exists an
$R^{\prime}=R^{\prime}(\theta,\gamma,R)>0$ such that $\Lambda$ is
$R^{\prime}$-uniformly close to $\Omega_{\gamma}$.
###### Lemma 10.
Let $\mathcal{L}$ be a set of relatively separated subsets of $\mathbb{C}$,
and let $\gamma>0$ and $R>0$. If $\Lambda$ is $R$-uniformly close to
$\Omega_{\gamma}$, for all $\Lambda\in\mathcal{L}$, then
$\mathcal{D}^{+}(\mathcal{L})\leqslant\gamma^{-2}$.
We conclude this section by formalizing Wiener amalgam spaces [24, 25] on
$\mathbb{C}$ and relating them to weak-* convergence on $\mathscr{M}_{s}^{p}$
defined in (13). We adopt most of our terminology from [22]. Let
$\mathcal{D}(\mathbb{C})$ be the test space of smooth compactly supported
functions on $\mathbb{C}$, with its usual inductive limit topology and the
corresponding topological dual $\mathcal{D}^{\prime}(\mathbb{C})$, called the
space of distributions. Let $\mathcal{B}$ be a Banach space that admits a
continuous embedding into $\mathcal{D}^{\prime}(\mathbb{C})$. Furthermore, fix
a non-negative compactly supported continuous function
$\psi\in\mathcal{D}(\mathbb{C})$ forming a partition of unity, i.e.,
$\sum_{z\in\mathbb{Z}^{2}}\psi(\,\cdot-z)=1$, and let
$m:\mathbb{C}\to\mathbb{R}_{\geqslant 0}$ be a weight function of the form
$m(z)=(1+|z|)^{r}$, for some $r\geqslant 0$. Then, for $p\in[1,\infty]$, the
Wiener amalgam space $W(\mathcal{B},L^{p}_{m})$ is defined as
$W(\mathcal{B},L^{p}_{m})=\left\\{f\in\mathcal{D}^{\prime}(\mathbb{C}):\|f\|_{W(\mathcal{B},L^{p}_{m})}\vcentcolon=\Big{\|}\|f\,\overline{\psi}(\,\cdot-z)\|_{\mathcal{B}}\,m(z)\Big{\|}_{L^{p}(\mathrm{d}z)}<\infty\right\\}.$
The definition of $W(\mathcal{B},L^{p}_{m})$ is independent of the choice of
$\psi$, and different $\psi$ define equivalent norms on
$W(\mathcal{B},L^{p}_{m})$. Informally, $W(\mathcal{B},L^{p}_{m})$ is the
space of distributions (i.e., generalized functions) on $\mathbb{C}$ that are
“locally in $\mathcal{B}$” and “globally in $L^{p}_{m}$”.
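To make the "locally in $\mathcal{B}$, globally in $L^{p}_{m}$" heuristic concrete, here is a numerical sketch (not from the paper; the tent-function partition of unity and all grid parameters are ad hoc choices) approximating $\|f\|_{W(L^{\infty},L^{1})}$ for the Gaussian $f(w)=e^{-|w|^{2}}$ on $\mathbb{C}\cong\mathbb{R}^{2}$, exploiting that both $f$ and $\psi(w)=\mathrm{tri}(w_{1})\mathrm{tri}(w_{2})$ factor over the two coordinates:

```python
import numpy as np

tri = lambda u: np.maximum(0.0, 1.0 - np.abs(u))   # 1-D tent; coordinate
                                                   # products sum to 1 over Z^2
def g(x, ugrid):
    # 1-D local-sup profile: sup_u e^{-u^2} * tri(u - x), evaluated on a grid
    return np.max(np.exp(-ugrid**2) * tri(ugrid[None, :] - x[:, None]), axis=1)

x = np.arange(-6.0, 6.0, 0.01)      # "global" variable of the outer L^1 norm
u = np.arange(-7.0, 7.0, 0.005)     # "local" variable of the inner sup norm
G = g(x, u)
# By separability, ||f||_{W(L^inf, L^1)} = (integral of G)^2 for f = e^{-|w|^2}.
wnorm = (np.sum(G) * 0.01) ** 2
print(wnorm)   # finite: f is locally bounded and globally integrable
```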
Next, we claim that $\mathscr{M}_{s}^{p}\subset W(\mathcal{M},L^{p})$, for
$p\in(1,\infty]$, where $\mathcal{M}$ is the space of regular complex-valued
Borel measures on $\mathbb{C}$ with the total variation norm. To see this, let
$r>0$ be such that $\operatorname{supp}(\psi)\subset B_{r}(0)$. Now, for a
measure
$\mu=\sum_{\lambda\in\Lambda}\alpha_{\lambda}\delta_{\lambda}\in\mathscr{M}_{s}^{p}$
denote
$|\mu|^{p}\vcentcolon=\sum_{\lambda\in\Lambda}|\alpha_{\lambda}|^{p}\,\delta_{\lambda}$.
Then Hölder’s inequality yields
$\displaystyle\|\mu\,\overline{\psi}(\,\cdot-z)\|_{\mathcal{M}}$
$\displaystyle\leqslant\sum_{\lambda\in\Lambda\cap
B_{r}(z)}|\alpha_{\lambda}|\leqslant\Bigg{(}\sum_{\lambda\in\Lambda\cap
B_{r}(z)}1^{q}\Bigg{)}^{1/q}\Bigg{(}\sum_{\lambda\in\Lambda\cap
B_{r}(z)}|\alpha_{\lambda}|^{p}\Bigg{)}^{1/p}$
$\displaystyle\lesssim_{\,\psi}\left(\mathrm{sep}(\Lambda)^{-2}\right)^{1/q}\,\Bigg{(}\int_{\mathbb{C}}\mathds{1}_{\\{|y-z|\leqslant
r\\}}\,\mathrm{d}|\mu|^{p}(y)\Bigg{)}^{1/p},\quad\text{for }z\in\mathbb{C},$
where the last inequality follows since one can pack at most on the order of
$r^{2}/(\mathrm{sep}(\Lambda)/2)^{2}$ disjoint disks of radius
$\mathrm{sep}(\Lambda)/2$ in $B_{r}(z)$. Therefore, as
$\mathrm{sep}(\Lambda)\geqslant s$, Tonelli's theorem yields
$\displaystyle\Big{\|}\|\mu\,\overline{\psi}(\,\cdot-z)\|_{\mathcal{M}}\Big{\|}_{L^{p}(\mathrm{d}z)}^{p}$
$\displaystyle\lesssim_{\,\psi,s}\int_{\mathbb{C}}\Bigg{[}\int_{\mathbb{C}}\mathds{1}_{\\{|z-y|\leqslant
r\\}}\,\mathrm{d}|\mu|^{p}\Bigg{]}\mathrm{d}z$
$\displaystyle=\int_{\mathbb{C}}\underbrace{\int_{\mathbb{C}}\mathds{1}_{\\{|z-y|\leqslant
r\\}}\,\mathrm{d}z}_{=\pi r^{2}}\,\mathrm{d}|\mu|^{p}=\pi
r^{2}\|\mu\|_{p}^{p}<\infty,$
and so $\mu\in W(\mathcal{M},L^{p})$. As $\mu\in\mathscr{M}_{s}^{p}$ was
arbitrary, we have therefore shown that
$\|\mu\|_{W(\mathcal{M},L^{p})}\lesssim_{\,\psi,p,s}\|\mu\|_{p},\quad\text{for
all }\mu\in\mathscr{M}_{s}^{p},$ (14)
which establishes $\mathscr{M}_{s}^{p}\subset W(\mathcal{M},L^{p})$.
Now, by the Riesz-Markov-Kakutani representation theorem [26, Thm. 6.19],
$\mathcal{M}$ can be identified with the topological dual $C_{0}^{*}$ of
$C_{0}=\\{f\in L^{\infty}(\mathbb{C}):f\text{
continuous},\;\lim_{|z|\to\infty}|f(z)|=0\\},$
via the pairing
$\langle\mu,f\rangle=\int_{\mathbb{C}}\overline{f}\mathrm{d}\mu$. Therefore,
by [25, Thm. 2.8], we have that $|f(y)|\mathds{1}_{B_{r}(z)}(y)$ is integrable
w.r.t. the product measure $\mathrm{d}|\mu|(y)\times\mathrm{d}z$ on
$\mathbb{C}\times\mathbb{C}$, for $\mu\in W(\mathcal{M},L^{p})$ and $f\in
W(C_{0},L^{q})$, and $W(\mathcal{M},L^{p})$ can be identified with the
topological dual of $W(C_{0},L^{q})$ via the dual pairing
$\langle\mu,f\rangle\vcentcolon=\int_{\mathbb{C}}\left[\int_{\mathbb{C}}\overline{f}\,\mathds{1}_{B_{r}(z)}\mathrm{d}\mu\right]\mathrm{d}z.$
An application of Fubini’s theorem hence yields
$\displaystyle\langle\mu,f\rangle$
$\displaystyle=\int_{\mathbb{C}}\left[\int_{\mathbb{C}}\overline{f}(y)\,\mathds{1}_{B_{r}(z)}(y)\mathrm{d}\mu(y)\right]\;\mathrm{d}z=\int_{\mathbb{C}}\overline{f}(y)\int_{\mathbb{C}}\mathds{1}_{\\{|z-y|\leqslant
r\\}}\,\mathrm{d}z\;\mathrm{d}\mu(y)=\pi r^{2}\int\overline{f}\mathrm{d}\mu.$
Thus, as $\pi r^{2}$ is a constant depending only on the choice of $\psi$
through $\operatorname{supp}(\psi)\subset B_{r}(0)$, one can instead use the
following simpler dual pairing to effect the correspondence between
$W(\mathcal{M},L^{p})$ and $W(C_{0},L^{q})^{*}$:
$\langle\mu,f\rangle=\int\overline{f}\mathrm{d}\mu=\sum_{\lambda\in\operatorname{supp}(\mu)}\overline{f(\lambda)}\mu(\\{\lambda\\}),$
for $\mu\in W(\mathcal{M},L^{p})$ and $f\in W(C_{0},L^{q})$. Therefore,
definition (13) of weak-* convergence in $\mathscr{M}_{s}^{p}$ corresponds
precisely to convergence in the weak-* topology on $W(\mathcal{M},L^{p})$
(i.e., the weak topology generated by $W(C_{0},L^{q})$).
Finally, in the special case of weak convergence of subsets of
$\mathbb{R}^{2}$ defined in (12), following [22, p. 398], we have that, if
$\inf_{n\in\mathbb{N}}\mathrm{sep}(\Lambda_{n})>0$, then weak convergence of
subsets $\Lambda_{n}\xrightarrow[]{w}\Lambda$ is equivalent to
$\sum_{\lambda\in\Lambda_{n}}\delta_{\lambda}\to\sum_{\lambda\in\Lambda}\delta_{\lambda}$
in the weak-* topology $W(C_{0},L^{1})$.
## IV Proof of Theorem 1
As already mentioned in the introduction, the proof of Theorem 1 relies on the
theory of interpolation of entire functions. The idea for the proof is based
on [27, Thm. 1], where the lower bound (analogous to the left-hand side of
(10)), however, depends in a non-explicit manner on the supports of the
individual measures in the identifiability condition. As our goal is to obtain
an explicit lower bound, namely, a constant multiple of the minimum separation
of the supports, our theorem needs to be stated in terms of the class density
(according to Definition 2) instead of simply considering the standard
Beurling density (according to Definition 4) of the supports of the individual
measures in the class. This difference will also require us to delve deeper
into the interpolation theory underlying the proof of [27, Thm. 1].
We begin our exposition of the required technical tools by defining the
Weierstrass $\sigma_{\gamma}$-function associated with
$\Omega_{\gamma}=\\{\omega_{m,n}=\gamma(m+in)\\}_{m,n\in\mathbb{Z}}\,$:
$\sigma_{\gamma}(z)=z\prod_{(m,n)\in\mathbb{Z}^{2}\setminus\\{(0,0)\\}}\left(1-\frac{z}{\omega_{m,n}}\right)\exp{\left(\frac{z}{\omega_{m,n}}+\frac{1}{2}\frac{z^{2}}{\omega_{m,n}^{2}}\right)},\quad
z\in\mathbb{C}.$
We will need several basic facts about this function, which can be found in
[28] along with a more detailed account of its properties. Concretely, we note
that the infinite product in the definition of $\sigma_{\gamma}$ converges
absolutely uniformly on compact subsets of $\mathbb{C}$, and therefore defines
an entire function. Moreover, $\sigma_{\gamma}$ satisfies the following growth
estimate:
###### Lemma 11 ([28, Cor. 1.21]).
We have
$|\sigma_{\gamma}(z)|e^{-\frac{\pi}{2}\gamma^{-2}|z|^{2}}\asymp_{\,\gamma}d(z,\Omega_{\gamma})$,
where $d(z,\Omega_{\gamma})=\min\\{|z-\omega|:\omega\in\Omega_{\gamma}\\}$
denotes the Euclidean distance from $z$ to the lattice $\Omega_{\gamma}$.
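As a quick numerical sanity check of the zero set (a sketch, not from the paper; the truncation level `N` is an arbitrary choice), one can evaluate a truncated version of the product defining $\sigma_{\gamma}$: it vanishes exactly at the lattice points retained in the truncation and is nonzero off the lattice:

```python
import cmath

def sigma_trunc(z, gamma=1.0, N=20):
    # Truncated Weierstrass sigma-product over the indices |m|, |n| <= N.
    res = complex(z)
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if (m, n) == (0, 0):
                continue
            w = gamma * (m + 1j * n)
            res *= (1 - z / w) * cmath.exp(z / w + 0.5 * (z / w) ** 2)
    return res

print(sigma_trunc(1 + 1j))            # lattice point: the factor (1 - z/w) kills it
print(abs(sigma_trunc(0.5 + 0.5j)))   # nonzero off the lattice
```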
In order to enable working with measures $\mu$ whose supports are not subsets
of lattices, we will need to perturb the zeros of the Weierstrass
$\sigma_{\gamma}$-function. We will do so following [28] and [20, p. 109].
Concretely, let $\mathcal{I}\subset\mathbb{Z}\times\mathbb{Z}$ be an index set
with $(0,0)\in\mathcal{I}$, and let
$\Lambda=\\{\lambda_{m,n}\\}_{(m,n)\in\mathcal{I}}$ be a discrete subset of
$\mathbb{C}$ with $\lambda_{m,n}\neq 0$ for
$(m,n)\in\mathcal{I}\setminus\\{(0,0)\\}$. We now define the modified
Weierstrass function associated with $\Lambda$ by
$g_{\Lambda}(z)=(z-\lambda_{0,0})\,\prod_{(m,n)\in\mathcal{I}\setminus\\{(0,0)\\}}\left(1-\frac{z}{\lambda_{m,n}}\right)\exp{\left(\frac{z}{\lambda_{m,n}}+\frac{1}{2}\frac{z^{2}}{\omega_{m,n}^{2}}\right)},\quad
z\in\mathbb{C}.$ (15)
According to [28, Lem. 4.21], provided there exist $\gamma>0$ and $R>0$ such
that $\Lambda$ is $R$-uniformly close to $\Omega_{\gamma}$, expression (15)
converges uniformly on compact subsets of $\mathbb{C}$ to an entire function
with zero set $\Lambda$. The proof of Theorem 1 relies on constructing and
controlling the growth of an entire function interpolating a sequence of
values $\\{\beta_{\lambda}\\}_{\lambda\in\Lambda}$ at the points of
$\Lambda=\operatorname{supp}(\mu_{1})\cup\operatorname{supp}(\mu_{2})$, where
$\mu_{1},\mu_{2}\in\mathscr{M}_{s}^{p}$ are the measures for which (10) is to
be established. This will be accomplished by means of “basis functions” that
interpolate the one-hot sequences
$\\{\mathds{1}_{\\{\lambda=\lambda^{\prime}\\}}\\}_{\lambda\in\Lambda}$, for
$\lambda^{\prime}\in\Lambda$. The following lemma furnishes a prototype for
these basis functions, obtained by “dividing out” a zero of the modified
Weierstrass function associated with $\Lambda$, as well as a growth bound
reminiscent of [28] and [20, §2.2], with the crucial difference that our bound
makes the dependence on the mutual separation of
$\operatorname{supp}(\mu_{1})$ and $\operatorname{supp}(\mu_{2})$ explicit.
The proof of the lemma largely follows [28], the only difference being that we
need to take the specific form
$\Lambda=\operatorname{supp}(\mu_{1})\cup\operatorname{supp}(\mu_{2})$ of
$\Lambda$ into account, carrying out the calculations more explicitly to
extract the dependence on the mutual separation of
$\operatorname{supp}(\mu_{1})$ and $\operatorname{supp}(\mu_{2})$.
###### Lemma 12.
Let $\Lambda=\\{\lambda_{m,n}\\}_{(m,n)\in\mathcal{I}}$ be a relatively
separated subset of $\mathbb{C}$ with $\lambda_{0,0}=0$. Furthermore, let
$\rho$, $s$, $\theta$, $\gamma$, and $R$ be positive real numbers, and set
$\Omega_{\gamma}=\\{\omega_{m,n}=\gamma(m+in)\\}_{m,n\in\mathbb{Z}}\,$. Define
$\mathcal{I}_{s}=\\{(m,n)\in\mathcal{I}:|\lambda_{m,n}|\leqslant\frac{s}{2}\\}$
and suppose that
1. (i)
$\\#(\mathcal{I}_{s})\leqslant 2$, and, if
$\mathcal{I}_{s}=\\{(0,0),(m^{\prime},n^{\prime})\\}$, then
$|\lambda_{m^{\prime},n^{\prime}}|\geqslant\rho\,$,
2. (ii)
$n^{+}(\Lambda,(0,R^{\prime})^{2})\leqslant\theta (R^{\prime})^{2}$, for all
$R^{\prime}\geqslant R$, and
3. (iii)
$|\lambda_{m,n}-\omega_{m,n}|\leqslant R$, for all $(m,n)\in\mathcal{I}$.
Now, let $g_{\Lambda}$ be given by (15) and define
$\widetilde{g}_{\Lambda}:\mathbb{C}\to\mathbb{C}$ according to
$\widetilde{g}_{\Lambda}(z)=\frac{g_{\Lambda}(z)}{z}\prod_{(m,n)\in\mathcal{I}_{s}}\exp\left(\frac{z}{\omega_{m,n}}-\frac{z}{\lambda_{m,n}}\right)\prod_{(m,n)\in\mathbb{Z}^{2}\setminus\mathcal{I}}\left(1-\frac{z}{\omega_{m,n}}\right)\exp{\left(\frac{z}{\omega_{m,n}}+\frac{1}{2}\frac{z^{2}}{\omega_{m,n}^{2}}\right)}.$
(16)
Then
* (a)
$\widetilde{g}_{\Lambda}(0)=1$ and $\widetilde{g}_{\Lambda}(\lambda_{m,n})=0$,
for $(m,n)\in\mathcal{I}\setminus\\{(0,0)\\}$, and
* (b)
there exist constants $C>0$ and $c>0$ depending only on $s$, $\theta$,
$\gamma$, and $R$ such that
$|\widetilde{g}_{\Lambda}(z)|e^{-\frac{\pi}{2}\gamma^{-2}|z|^{2}}\leqslant
C(\rho\wedge 1)^{-1}e^{c|z|\log{|z|}},\quad\text{for all }z\in\mathbb{C}.$
(17)
The proof of Lemma 12 can be found in the appendix.
The next preparatory step towards the proof of Theorem 1 is to relate Gabor
systems generated by $\varphi(t)=2^{\frac{1}{4}}e^{-\pi t^{2}}$ with entire
functions of suitably bounded growth by means of the Bargmann transform.
Concretely, we will work with a definition of the Bargmann transform
consistent with [29] in order to facilitate arguments involving the isometry
property between modulation spaces and Bargmann-Fock spaces introduced next.
For conjugate indices $p,q\in[1,\infty]$, the Bargmann-Fock space
$\mathcal{F}^{p}(\mathbb{C})$ is defined as the set of all entire functions
$F$ for which $\left\|F\right\|_{\mathcal{F}^{p}(\mathbb{C})}<\infty$, where
$\left\|F\right\|_{\mathcal{F}^{p}(\mathbb{C})}\vcentcolon=\left(\int_{\mathbb{C}}\left|F\left(z\right)\right|^{p}e^{-p\pi\left|z\right|^{2}/2}\mathrm{d}z\right)^{1/p},\quad\text{for
}p\in[1,\infty),$
and
$\left\|F\right\|_{\mathcal{F}^{\infty}(\mathbb{C})}\vcentcolon=\sup_{z\in\mathbb{C}}{\left|F(z)\right|e^{-\pi\left|z\right|^{2}/2}}.$
The Bargmann transform is now defined as the linear map $\textfrak{B}\hskip
1.0pt\colon M^{p}(\mathbb{R})\to\mathcal{F}^{p}(\mathbb{C})$ given by
$(\textfrak{B}\hskip 1.0ptf)(z)=2^{\frac{1}{4}}e^{-\pi
z^{2}/2}\int_{\mathbb{R}}e^{2\pi tz-\pi t^{2}}f(t)\mathrm{d}t,\quad
z\in\mathbb{C}.$
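As a sanity check for this definition (a sketch with ad hoc quadrature parameters, not part of the paper; the helper `bargmann` is ours), one can verify numerically the standard fact that the Gaussian $\varphi(t)=2^{\frac{1}{4}}e^{-\pi t^{2}}$ is mapped to the constant function $1$:

```python
import numpy as np

def bargmann(f, z, T=8.0, n=20001):
    # Riemann-sum quadrature of the Bargmann transform over [-T, T]; the
    # Gaussian decay of the integrand makes the truncation error negligible.
    t = np.linspace(-T, T, n)
    dt = t[1] - t[0]
    vals = np.exp(2 * np.pi * t * z - np.pi * t**2) * f(t)
    return 2**0.25 * np.exp(-np.pi * z**2 / 2) * np.sum(vals) * dt

phi = lambda t: 2**0.25 * np.exp(-np.pi * t**2)
for z in (0.0, 1.0 + 0.5j, -0.3 + 2.0j):
    print(bargmann(phi, z))  # each value is numerically 1
```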
According to [29, §1.4], the Bargmann transform is an isometric isomorphism
between the Banach spaces $M^{p}(\mathbb{R})$ and
$\mathcal{F}^{p}(\mathbb{C})$, i.e., it is bijective and
$\left\|\textfrak{B}\hskip
1.0ptf\right\|_{\mathcal{F}^{p}(\mathbb{C})}=\|f\|_{M^{p}(\mathbb{R})},\quad\text{for
all }f\in M^{p}(\mathbb{R}).$ (18)
Following [30], when $p\in[1,\infty)$, the topological dual of
$\mathcal{F}^{p}(\mathbb{C})$ can be identified with
$\mathcal{F}^{q}(\mathbb{C})$ via the pairing
$\left\langle
F,G\right\rangle_{\mathcal{F}^{p}(\mathbb{C})\times\mathcal{F}^{q}(\mathbb{C})}=\int_{\mathbb{C}}F(z)\overline{G(z)}e^{-\pi\left|z\right|^{2}}\mathrm{d}z,$
for $F\in\mathcal{F}^{p}(\mathbb{C})$ and $G\in\mathcal{F}^{q}(\mathbb{C})$.
The following lemma is a generalization (from $L^{2}(\mathbb{R})$ to
$M^{q}(\mathbb{R})$) of the standard identity [7, Prop. 3.4.1] relating the
Bargmann transform with time-frequency shifts of the Gaussian
$\varphi(t)=2^{\frac{1}{4}}e^{-\pi t^{2}}$.
###### Lemma 13.
Let $q\in[1,\infty)$. Then, for every $y\in M^{q}(\mathbb{R})$ and
$\lambda=\tau+i\nu\in\mathbb{C}$, we have
$\langle y,\pi(\lambda)\varphi\rangle_{M^{q}(\mathbb{R})\times
M^{p}(\mathbb{R})}=e^{-\pi i\tau\nu}e^{-\pi|\lambda|^{2}/2}(\textfrak{B}\hskip
1.0pty)(\bar{\lambda}).$
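A numerical spot check of this identity for $y=\varphi$ (a sketch, assuming the standard time-frequency shift convention $\pi(\tau+i\nu)x(t)=e^{2\pi i\nu t}x(t-\tau)$ and a pairing that conjugates its second argument; since the Bargmann transform of $\varphi$ is the constant function $1$, the right-hand side becomes explicit):

```python
import numpy as np

tau, nu = 0.7, -0.4                       # lambda = tau + i*nu
t = np.linspace(-10.0, 10.0, 40001)
dt = t[1] - t[0]
phi = lambda u: 2**0.25 * np.exp(-np.pi * u**2)

# <y, pi(lambda)phi> with y = phi; the pairing conjugates the second slot
lhs = np.sum(phi(t) * np.conj(np.exp(2j * np.pi * nu * t) * phi(t - tau))) * dt
# Right-hand side of Lemma 13 with (B phi)(conj(lambda)) = 1
rhs = np.exp(-1j * np.pi * tau * nu) * np.exp(-np.pi * (tau**2 + nu**2) / 2)
print(abs(lhs - rhs))  # numerically zero
```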
Before finally embarking on the proof of Theorem 1, we state the following two
lemmas about abstract Banach spaces and Wiener amalgam spaces that will
facilitate the application of the more specialized theory of Bargmann-Fock
spaces. Their proofs can be found in the appendix.
###### Lemma 14.
Let $\mathcal{A}:X\to Y$ be a continuous linear operator between Banach spaces
$X$ and $Y$. We then have the following:
* (i)
If $\mathcal{A}$ is bounded below (i.e. there exists a $c>0$ such that
$\|\mathcal{A}x\|\geqslant c\|x\|$, for all $x\in X$), then the adjoint
$\mathcal{A}^{*}:Y^{*}\to X^{*}$ is surjective.
* (ii)
Suppose that there exists a constant $a>0$ such that, for every $f\in X^{*}$,
there is a $g\in Y^{*}$ with $\mathcal{A}^{*}g=f$ and
$a\|g\|_{Y^{*}}\leqslant\|f\|_{X^{*}}$. Then $\mathcal{A}$ is bounded from
below by $a$.
###### Lemma 15.
Let $p\in[1,\infty)$, let $\Lambda\subset\mathbb{R}^{2}$ be a separated set,
and set $s=\mathrm{sep}(\Lambda)>0$. Then
$\Big{\|}\sum_{\lambda\in\Lambda}\alpha_{\lambda}\,f(\,\cdot-\lambda)\Big{\|}_{L^{p}(\mathbb{R}^{2})}\lesssim_{\,p,s}\|f\|_{W(L^{\infty},L^{1})}\|\alpha\|_{\ell^{p}(\Lambda)},$
for all $\\{\alpha_{\lambda}\\}_{\lambda\in\Lambda}\subset\ell^{p}(\Lambda)$
and $f\in W(L^{\infty},L^{1})$.
###### Proof of Theorem 1.
Fix $\mu_{1},\mu_{2}\in\mathscr{H}$ and let
$\Lambda_{j}=\operatorname{supp}(\mu_{j})$, for $j\in\\{1,2\\}$, and
$\Lambda=\Lambda_{1}\cup\Lambda_{2}$. We can then write
$\mu_{1}-\mu_{2}=\sum_{\lambda\in\Lambda}\alpha_{\lambda}\delta_{\lambda}$,
where $\alpha\in\ell^{p}(\Lambda)$, so that
$\|\mu_{1}-\mu_{2}\|_{p}=\|\alpha\|_{\ell^{p}}$ and
$\mathcal{H}_{\mu_{1}}\varphi-\mathcal{H}_{\mu_{2}}\varphi=\mathcal{H}_{\Lambda}(\alpha,\varphi)$,
for $\mathcal{H}_{\Lambda}(\,\cdot\,,\varphi):\ell^{p}(\Lambda)\to
M^{p}(\mathbb{R})$ as defined in the statement of Proposition 1. With this,
(10) is equivalent to
$C_{1}\big{(}\mathrm{ms}(\Lambda_{1},\Lambda_{2})\wedge
1\big{)}\|\alpha\|_{\ell^{p}}\leqslant\|\mathcal{H}_{\Lambda}(\alpha,\varphi)\|_{M^{p}(\mathbb{R})}\leqslant
C_{2}\|\alpha\|_{\ell^{p}},$ (19)
and hence it suffices to find constants $C_{1}=C_{1}(p,\mathscr{H},\varphi)>0$
and $C_{2}=C_{2}(p,\mathscr{H},\varphi)>0$ such that (19) holds. To this end,
first note that by item (i) of Proposition 1 we have
$\|\mathcal{H}_{\Lambda}(\alpha,\varphi)\|_{M^{p}(\mathbb{R})}\lesssim_{\,\varphi}\mathrm{rel}(\Lambda)\|\alpha\|_{p}.$
Furthermore, as $\mu_{1},\mu_{2}\in\mathscr{M}_{s}^{p}$, we get
$\mathrm{rel}(\Lambda_{1}\cup\Lambda_{2})\leqslant\mathrm{rel}(\Lambda_{1})+\mathrm{rel}(\Lambda_{2})\lesssim
s^{-2},$
and so the upper bound in (19) holds for some $C_{2}>0$ depending on $\varphi$
and $s$, as desired.
We proceed to establish the lower bound in (19). Note that this bound holds
trivially if $\mathrm{ms}(\Lambda_{1},\Lambda_{2})=0$ or
$\Lambda_{1}=\Lambda_{2}=\varnothing$, so suppose w.l.o.g. that
$\mathrm{ms}(\Lambda_{1},\Lambda_{2})>0$ and $\Lambda_{1}\neq\varnothing$.
Then, in particular, $\Lambda\neq\varnothing$. Now, as
$\mathcal{H}_{\Lambda}(\,\cdot\,,\varphi):\ell^{p}(\Lambda)\to
M^{p}(\mathbb{R})$ is a continuous linear operator between Banach spaces,
Lemma 14 implies that it suffices to find a
$C_{1}=C_{1}(p,\mathscr{H},\varphi)>0$ such that the following statement
holds:
(P1) _For every $\beta\in\ell^{q}(\Lambda)$, there exists a $y\in
M^{q}(\mathbb{R})$ such that
$\left(\mathcal{H}_{\Lambda}(\cdot,\varphi)\right)^{*}(y)=\beta$ and_
$C_{1}\big{(}\mathrm{ms}(\Lambda_{1},\Lambda_{2})\wedge
1\big{)}\|y\|_{M^{q}(\mathbb{R})}\leqslant\|\beta\|_{\ell^{q}}.$
By item (ii) of Proposition 1 and Lemma 13, we have the following expression
for $\left(\mathcal{H}_{\Lambda}(\cdot,\varphi)\right)^{*}$ in terms of the
Bargmann transform:
$\left(\mathcal{H}_{\Lambda}(\cdot,\varphi)\right)^{*}(y)=\\{e^{-\pi
i\tau\nu}e^{-\pi|\lambda|^{2}/2}(\textfrak{B}\hskip
1.0pty)(\overline{\lambda})\\}_{\lambda=\tau+i\nu\,\in\Lambda},\quad y\in
M^{q}(\mathbb{R}).$ (20)
Thus, as the Bargmann transform is an isometric isomorphism between
$M^{p}(\mathbb{R})$ and $\mathcal{F}^{p}(\mathbb{C})$, and the map
$\\{\beta_{\lambda}\\}_{\lambda=\tau+i\nu\,\in\Lambda}\mapsto\\{\beta_{\lambda}e^{-\pi
i\tau\nu}\\}_{\lambda=\tau+i\nu\,\in\Lambda}$ is an isometric isomorphism on
$\ell^{q}(\Lambda)$, the statement (P1) is equivalent to the following
statement about interpolation:
(P2) _For every $\beta\in\ell^{q}(\Lambda)$, there exists an
$F\in\mathcal{F}^{q}(\mathbb{C})$ such that
$e^{-\pi|\lambda|^{2}/2}F(\overline{\lambda})=\beta_{\lambda}$, for all
$\lambda\in\Lambda$, and_
$C_{1}\big{(}\mathrm{ms}(\Lambda_{1},\Lambda_{2})\wedge
1\big{)}\|F\|_{\mathcal{F}^{q}(\mathbb{C})}\leqslant\|\beta\|_{\ell^{q}}.$
(21)
To prove (P2), we will make use of the interpolation basis functions provided
by Lemma 12. To this end, fix $\theta>0$ and $\gamma>0$ such that
$2\mathcal{D}^{+}(\mathcal{L})<2\theta<\gamma^{-2}<1$, and let
$\beta\in\ell^{q}(\Lambda)$ be arbitrary. Then, by Lemma 8, there exists an
$R_{0}>0$ (depending only on $\mathscr{H}$) such that
$n^{+}(\Lambda_{j},(0,R)^{2})\leqslant\theta R^{2}$, for $j\in\\{1,2\\}$ and
$R\geqslant R_{0}$. Now, for each $\lambda\in\Lambda$, define the set
$\widetilde{\Lambda}_{\lambda}=\\{\overline{\lambda^{\prime}}-\overline{\lambda}:\lambda^{\prime}\in\Lambda\\}.$
We will seek to apply Lemma 12 to each of the sets
$\widetilde{\Lambda}_{\lambda}$ as $\lambda$ ranges over $\Lambda$. To this
end, first note that
$n^{+}(\widetilde{\Lambda}_{\lambda},(0,R)^{2})=n^{+}(\Lambda,(0,R)^{2})\leqslant
n^{+}(\Lambda_{1},(0,R)^{2})+n^{+}(\Lambda_{2},(0,R)^{2})\leqslant 2\theta
R^{2},$
for all $R\geqslant R_{0}$. Therefore, as $\gamma<(2\theta)^{-1/2}$, it
follows by Lemma 9 that there exists an
$R^{\prime}=R^{\prime}(\theta,\gamma,R_{0})$ such that
$\widetilde{\Lambda}_{\lambda}$ is $R^{\prime}$-uniformly close to
$\Omega_{\gamma}=\\{\omega_{m,n}=\gamma(m+in):m,n\in\mathbb{Z}\\}$. In
particular, there exists an enumeration
$\widetilde{\Lambda}_{\lambda}=\\{\widetilde{\lambda}_{m,n}\\}_{(m,n)\in\widetilde{\mathcal{I}}}$
such that $|\widetilde{\lambda}_{m,n}-\omega_{m,n}|\leqslant R^{\prime}$, for
all $(m,n)\in\widetilde{\mathcal{I}}$. Note that
$0\in\widetilde{\Lambda}_{\lambda}$ by definition of
$\widetilde{\Lambda}_{\lambda}$. In order to apply Lemma 12 we need to
additionally ensure that we work with an enumeration of
$\widetilde{\Lambda}_{\lambda}=\\{{\lambda}_{m,n}\\}_{(m,n)\in{\mathcal{I}}}$
(possibly different from the enumeration
$\widetilde{\Lambda}_{\lambda}=\\{\widetilde{\lambda}_{m,n}\\}_{(m,n)\in\widetilde{\mathcal{I}}}$)
that satisfies ${\lambda}_{0,0}=0$. To this end, let
$(m_{0},n_{0})\in\widetilde{\mathcal{I}}$ be the index such that
$\widetilde{\lambda}_{m_{0},n_{0}}=0$, and define $\mathcal{I}$ and
$\\{{\lambda}_{m,n}\\}_{(m,n)\in{\mathcal{I}}}$ as follows:
* –
If $(0,0)\notin\widetilde{\mathcal{I}}$, set
$\mathcal{I}=\big{(}\widetilde{\mathcal{I}}\setminus\\{(m_{0},n_{0})\\}\big{)}\cup\\{(0,0)\\}$,
and let
$\lambda_{m,n}=\begin{cases}0,&\text{if }(m,n)=(0,0),\\\
\widetilde{\lambda}_{m,n},\,&\text{if
}(m,n)\in\mathcal{I}\setminus\\{(0,0)\\}\end{cases}.$
* –
If $(0,0)\in\widetilde{\mathcal{I}}$, set
$\mathcal{I}=\widetilde{\mathcal{I}}$, and let
$\lambda_{m,n}=\begin{cases}0,&\text{if }(m,n)=(0,0),\\\
\widetilde{\lambda}_{0,0},\,&\text{if }(m,n)=(m_{0},n_{0}),\\\
\widetilde{\lambda}_{m,n},\,&\text{if
}(m,n)\in\mathcal{I}\setminus\\{(0,0),(m_{0},n_{0})\\}\end{cases}.$
The new enumeration
$\widetilde{\Lambda}_{\lambda}=\\{{\lambda}_{m,n}\\}_{(m,n)\in{\mathcal{I}}}$
satisfies $|\lambda_{0,0}-\omega_{0,0}|=0$ and
$|\lambda_{m_{0},n_{0}}-\omega_{m_{0},n_{0}}|\leqslant|\widetilde{\lambda}_{0,0}|+|\widetilde{\lambda}_{m_{0},n_{0}}-\omega_{m_{0},n_{0}}|\leqslant
2R^{\prime}$, and thus we have $|{\lambda}_{m,n}-\omega_{m,n}|\leqslant
2R^{\prime}$, for all $(m,n)\in{\mathcal{I}}$. The set
$\widetilde{\Lambda}_{\lambda}$ therefore satisfies the assumptions of Lemma
12 with
$\rho\vcentcolon=\mathrm{ms}(\Lambda_{1},\Lambda_{2})\wedge\frac{s}{2}$, $s$,
$\theta$, $\gamma$, and $R\vcentcolon=R_{0}\vee(2R^{\prime})$, and so the
function
$g_{-\lambda}\vcentcolon=\widetilde{g}_{\widetilde{\Lambda}_{\lambda}}(\,\cdot-\overline{\lambda})$,
where $\widetilde{g}_{\widetilde{\Lambda}_{\lambda}}$ is defined according to
(16), satisfies
$g_{-\lambda}(\overline{\lambda^{\prime}})=\begin{cases}1,\,&\text{if
}\lambda^{\prime}=\lambda\\\ 0,\,&\text{if
}\lambda^{\prime}\neq\lambda\end{cases},\quad\text{for all
}\lambda^{\prime}\in\Lambda,$ (22)
and
$|g_{-\lambda}(z)|\leqslant C(\rho\wedge
1)^{-1}e^{\frac{\pi}{2}\gamma^{-2}|z-\overline{\lambda}|^{2}+c|z-\overline{\lambda}|\log{|z-\overline{\lambda}|}},\quad\text{for
all }z\in\mathbb{C},$ (23)
where $c>0$ and $C>0$ depend on $s$, $\theta$, $\gamma$, and $R$. Moreover, as
$\lambda$ was arbitrary, (22) and (23) hold for all $\lambda\in\Lambda$. Next,
following [20, p. 112], we consider the interpolation function
$F(z)=\sum_{\lambda\in\Lambda}\beta_{\lambda}\;e^{\pi\lambda
z-\frac{\pi}{2}|\lambda|^{2}}\;g_{-\lambda}(z).$ (24)
To see that $F$ is an element of $\mathcal{F}^{q}(\mathbb{C})$, observe that
$\displaystyle|F(z)|e^{-\frac{\pi}{2}|z|^{2}}$
$\displaystyle\leqslant\sum_{\lambda\in\Lambda}|\beta_{\lambda}|e^{-\frac{\pi}{2}|z-\overline{\lambda}|^{2}}|g_{-\lambda}(z)|$
$\displaystyle\leqslant C(\rho\wedge
1)^{-1}\sum_{\lambda\in\Lambda}|\beta_{\lambda}|e^{-\frac{\pi}{2}(1-\gamma^{-2})|z-\overline{\lambda}|^{2}+c|z-\overline{\lambda}|\log{|z-\overline{\lambda}|}}$
$\displaystyle=C(\rho\wedge
1)^{-1}\sum_{j\in\\{1,2\\}}\sum_{\lambda\in\Lambda_{j}}|\beta_{\lambda}|f(z-\overline{\lambda}),\quad\text{for
all }z\in\mathbb{C},$
where
$f(z)=\exp{\big{[}-\frac{\pi}{2}(1-\gamma^{-2})|z|^{2}+c|z|\log{|z|}\big{]}}$.
Now, as $\gamma^{-2}<1$, we have that $f$ decays exponentially, and so $f\in
W(L^{\infty},L^{1})$. Lemma 15 thus yields
$\Big{\|}\sum_{\lambda\in\Lambda_{j}}|\beta_{\lambda}|f(\cdot-\overline{\lambda})\Big{\|}_{L^{q}(\mathbb{C})}\lesssim_{\,p,s}\|f\|_{W(L^{\infty},L^{1})}\|\\{\beta_{\lambda}\\}_{\lambda\in\Lambda_{j}}\|_{\ell^{q}},\quad\text{for
}j\in\\{1,2\\},$
and so
$\|F\|_{\mathcal{F}^{q}(\mathbb{C})}=\big{\|}F(\cdot)e^{-\frac{\pi}{2}\left|\cdot\right|^{2}}\big{\|}_{L^{q}(\mathbb{C})}\leqslant
C(\rho\wedge
1)^{-1}\Big{\|}\sum_{j\in\\{1,2\\}}\sum_{\lambda\in\Lambda_{j}}|\beta_{\lambda}|f(\cdot-\overline{\lambda})\Big{\|}_{L^{q}(\mathbb{C})}\lesssim_{\,p,s,\gamma}C(\rho\wedge
1)^{-1}\|\beta\|_{\ell^{q}}.$ (25)
Now, recall that
$\rho\vcentcolon=\mathrm{ms}(\Lambda_{1},\Lambda_{2})\wedge\frac{s}{2}$, and
so $\mathrm{ms}(\Lambda_{1},\Lambda_{2})\wedge 1\lesssim_{\,s}\rho\wedge 1$.
This together with (25) establishes (21) with some $C_{1}>0$ depending on $s$,
$\theta$, $\gamma$, $R_{0}$, and $R^{\prime}$. As these quantities ultimately
depend only on $s$ and $\mathscr{H}$, so does $C_{1}$. Finally, (24) and the
basis interpolation property (22) together yield
$F(\overline{\lambda})=e^{\pi|\lambda|^{2}/2}\beta_{\lambda}$, for all
$\lambda\in\Lambda$. We have thus established (P2), thereby concluding the
proof of the theorem. ∎
## V Proof of Theorem 2
In the proof of Theorem 2 we will make use of the following results from [22],
as well as a combinatorial lemma about squares in the plane, whose proof can
be found in the appendix.
###### Theorem 16 (Non-uniform Balian-Low Theorem, [22, Cor. 1.2]).
Let $\Lambda$ be a relatively separated subset of $\mathbb{R}^{2}$ and $x\in
M^{1}(\mathbb{R})$. If $\\{\pi(\lambda)x\\}_{\lambda\in\Lambda}$ is a Riesz
sequence, i.e.
$\|\sum_{\lambda\in\Lambda}c_{\lambda}\pi(\lambda)x\|_{L^{2}(\mathbb{R})}\asymp_{\,\Lambda,x}\|c\|_{\ell^{2}(\Lambda)}$,
for all $c\in\ell^{2}(\Lambda)$, then $D^{+}(\Lambda)<1$.
###### Theorem 17 ([22, Thm. 3.2]).
Let $\Lambda$ be a relatively separated subset of $\mathbb{R}^{2}$ and $x\in
M^{1}(\mathbb{R})$. Then
$\|\sum_{\lambda\in\Lambda}c_{\lambda}\pi(\lambda)x\|_{M^{p}(\mathbb{R})}\asymp_{\,\Lambda,x}\|c\|_{\ell^{p}(\Lambda)}$,
$c\in\ell^{p}(\Lambda)$, holds for some $p\in[1,\infty]$ if and only if it
holds for all $p\in[1,\infty]$.
###### Lemma 18 ([22, Lem. 4.5]).
Let $\\{\Lambda_{n}\\}_{n\in\mathbb{N}}$ be a sequence of relatively separated
subsets of $\mathbb{R}^{2}$. If
$\sup_{n\in\mathbb{N}}\mathrm{rel}(\Lambda_{n})<\infty$, then there exists a
subsequence $\\{\Lambda_{n_{k}}\\}_{k\in\mathbb{N}}$ that converges weakly to
a relatively separated set.
###### Lemma 19.
Let $Y\subset\mathbb{R}^{2}$, $n\in\mathbb{N}$, and suppose that $K_{n}$ is a
square in the plane of side length $\sqrt{2}(2^{n}+1)$ such that
$\\#(K_{n}\cap Y)\geqslant 2^{2n}+1$. Then there exist squares
$K_{0},K_{1},\dots,K_{n-1}$ so that, for every $j\in\\{0,1,\dots,n-1\\}$,
* (i)
$K_{j}\subset K_{j+1}$, $K_{j}$ has sides of length $\sqrt{2}(2^{j}+1)$
parallel to the sides of $K_{j+1}$, and $K_{j}$ and $K_{j+1}$ share a corner,
* (ii)
$\\#(K_{j}\cap Y)\geqslant 2^{2j}+1$.
We call a sequence $(K_{0},K_{1},\dots,K_{n})$ satisfying (i) and (ii) a
sequence of nested squares.
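The pigeonhole mechanism presumably underlying Lemma 19 can be exercised numerically (a sketch, not the appendix proof; the helper names are ours): the four corner subsquares of side $\sqrt{2}(2^{j}+1)$ cover a square of side $\sqrt{2}(2^{j+1}+1)$, so the best corner retains at least a quarter of the points, and $\lceil(2^{2(j+1)}+1)/4\rceil=2^{2j}+1$:

```python
import math, random

def count_in(sq, pts):
    # sq = (x0, y0, side); half-open square [x0, x0+side) x [y0, y0+side)
    x0, y0, s = sq
    return sum(1 for (x, y) in pts if x0 <= x < x0 + s and y0 <= y < y0 + s)

def nested_squares(K_top, n, pts):
    # Descend from K_top: at each level keep the corner subsquare of side
    # sqrt(2)*(2^(j-1)+1) holding the most points; since the four corners
    # cover the parent, the winner keeps at least a quarter of its points.
    seq = [K_top]
    for j in range(n, 0, -1):
        x0, y0, s = seq[-1]
        sc = math.sqrt(2) * (2 ** (j - 1) + 1)
        corners = [(x0, y0, sc), (x0 + s - sc, y0, sc),
                   (x0, y0 + s - sc, sc), (x0 + s - sc, y0 + s - sc, sc)]
        seq.append(max(corners, key=lambda c: count_in(c, pts)))
    return list(reversed(seq))          # seq[j] has side sqrt(2)*(2^j+1)

random.seed(0)
n = 5
side = math.sqrt(2) * (2 ** n + 1)
pts = [(random.uniform(0, side), random.uniform(0, side))
       for _ in range(2 ** (2 * n) + 1)]
seq = nested_squares((0.0, 0.0, side), n, pts)
print([count_in(K, pts) >= 2 ** (2 * j) + 1 for j, K in enumerate(seq)])  # all True
```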
###### Proof of Theorem 2.
We argue by contradiction, so suppose that $\mathscr{H}(\mathcal{L})^{p}$ is
identifiable, but $\mathcal{D}^{+}(\mathcal{L})\geqslant\frac{1}{2}$. Define
$\gamma_{n}=\sqrt{2}(1+2^{-n})$ and $R_{n}=2(2^{n}+1)$, for $n\in\mathbb{N}$.
It then follows by Lemma 10 that, for every $n\in\mathbb{N}$, there exists a
$\widetilde{\mu}_{n}\in\mathscr{H}(\mathcal{L})^{p}$ such that
$\operatorname{supp}(\widetilde{\mu}_{n})$ is not $R_{n}$-uniformly close to
$\Omega_{\gamma_{n}}$. Indeed, if this were not the case for some
$n\in\mathbb{N}$, we would have
$\mathcal{D}^{+}(\mathcal{L})\leqslant\gamma_{n}^{-2}<\frac{1}{2}$,
contradicting our assumption that
$\mathcal{D}^{+}(\mathcal{L})\geqslant\frac{1}{2}$. Fix such a
$\widetilde{\mu}_{n}$ for each $n$.
Now, for a fixed $n\in\mathbb{N}$, define the sets
$S_{k,\ell}=\left[\sqrt{2}(2^{n}+1)k,\sqrt{2}(2^{n}+1)(k+1)\right)\times\left[\sqrt{2}(2^{n}+1)\ell,\sqrt{2}(2^{n}+1)(\ell+1)\right)\subset\mathbb{R}^{2},$
for $(k,\ell)\in\mathbb{Z}^{2}$, forming a partition of the plane into squares
of side length $\sqrt{2}(2^{n}+1)$. As every $S_{k,\ell}$ consists of exactly
$2^{2n}$ fundamental cells of the lattice $\Omega_{\gamma_{n}}$ and the
diagonal of $S_{k,\ell}$ has length $R_{n}$, there must exist a pair
$(k_{n},\ell_{n})\in\mathbb{Z}^{2}$ such that
$\\#(S_{k_{n},\ell_{n}}\cap\operatorname{supp}(\widetilde{\mu}_{n}))\geqslant
2^{2n}+1$, for otherwise $\operatorname{supp}(\widetilde{\mu}_{n})$ would be
$R_{n}$-uniformly close to $\Omega_{\gamma_{n}}$, contradicting our choice of
$\widetilde{\mu}_{n}$.
We set $\widetilde{K}^{n}_{n}=S_{k_{n},\ell_{n}}$ and apply Lemma 19 with
$\widetilde{K}^{n}_{n}$ and $Y=\operatorname{supp}(\widetilde{\mu}_{n})$ to
obtain a sequence
$(\widetilde{K}_{0}^{n},\widetilde{K}_{1}^{n},\dots,\widetilde{K}_{n}^{n})$ of
nested squares. Next, let $\lambda_{n}$ be the center of
$\widetilde{K}^{n}_{0}$ and note that, as $\mathcal{L}$ is shift-invariant by
assumption, there exists a measure $\mu_{n}\in\mathscr{H}(\mathcal{L})^{p}$
with support $\operatorname{supp}(\widetilde{\mu}_{n})-\lambda_{n}$.
Therefore, setting $K_{j}^{n}=\widetilde{K}_{j}^{n}-\lambda_{n}$, we have that
$(K_{0}^{n},K_{1}^{n},\dots,K_{n}^{n})$ is a sequence of nested squares,
$\\#(K_{j}^{n}\cap\operatorname{supp}(\mu_{n}))\geqslant 2^{2j}+1$, and
$K_{0}^{n}=\big{[}-\sqrt{2},\sqrt{2}\big{)}\times\big{[}-\sqrt{2},\sqrt{2}\big{)}$,
for all $n\in\mathbb{N}$ and $j\in\\{0,1,\dots,n\\}$. We next need to verify
the following auxiliary claim.
Claim: Let $r\in\mathbb{N}$ and suppose that
$\\{\mu_{n_{k}^{r}}\\}_{k\in\mathbb{N}}$ is a subsequence of
$\\{\mu_{n}\\}_{n\in\mathbb{N}}$ such that
$K_{j}^{n_{j}^{r}}=K_{j}^{n_{k}^{r}}$, for all $j\in\\{1,2,\dots,r\\}$ and all
$k\geqslant j$. Then there exists a further subsequence
$\\{\mu_{n_{k}^{r+1}}\\}_{k\in\mathbb{N}}$ such that
$K_{r+1}^{n_{r+1}^{r+1}}=K_{r+1}^{n_{k}^{r+1}}$, for all $k\geqslant r+1$.
Proof of Claim: Let $\mathscr{K}$ be the set of squares $K^{\prime}\supset
K_{r}^{n_{r}^{r}}$ of side length $\sqrt{2}(2^{r+1}+1)$ such that $K^{\prime}$
and $K_{r}^{n_{r}^{r}}$ have parallel sides and share a corner. As
$(K_{0}^{n_{k}^{r}},K_{1}^{n_{k}^{r}},\dots,K_{r}^{n_{k}^{r}},K_{r+1}^{n_{k}^{r}})$
is a sequence of nested squares, for all $k\geqslant r+1$, we have that
$K_{r+1}^{n_{k}^{r}}\in\mathscr{K}$, for all $k\geqslant r+1$. But
$\\#\mathscr{K}=4$, and therefore at least one element of $\mathscr{K}$
appears infinitely often in the sequence
$\\{K_{r+1}^{n_{k}^{r}}\\}_{k\geqslant r+1}$. We can therefore extract a
subsequence $\\{\mu_{n_{k}^{r+1}}\\}_{k\in\mathbb{N}}$ of
$\\{\mu_{n_{k}^{r}}\\}_{k\in\mathbb{N}}$ such that
$K_{r+1}^{n_{r+1}^{r+1}}=K_{r+1}^{n_{k}^{r+1}}$, for all $k\geqslant r+1$,
establishing the claim.
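The extraction step in the Claim is a plain pigeonhole argument over the four candidate corner squares; the toy sketch below (illustrative only, with an ad-hoc finite stand-in for the infinite sequence) mirrors it:

```python
from collections import Counter

# Toy model: corner_of[k] records which of the 4 corner squares K_{r+1}^{n_k^r}
# equals, for successive indices k (a finite prefix standing in for the infinite case).
corner_of = [3, 0, 2, 0, 1, 0, 3, 0, 0, 2, 0, 0]  # values in {0, 1, 2, 3}

# At least one of the 4 values occurs most often (infinitely often, in the infinite case).
most_common_corner, _ = Counter(corner_of).most_common(1)[0]

# Keep exactly the indices landing in that corner square: the desired subsequence.
subsequence = [k for k, c in enumerate(corner_of) if c == most_common_corner]

assert all(corner_of[k] == most_common_corner for k in subsequence)
assert len(subsequence) * 4 >= len(corner_of)  # pigeonhole: at least a quarter survive
```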
Now, as $K_{0}^{n}=K_{0}^{0}$, for all $n\in\mathbb{N}$, we can apply a
diagonalization argument together with the Claim to construct a subsequence
$\\{\mu_{n_{k}}\\}_{k\in\mathbb{N}}$ of $\\{\mu_{n}\\}_{n\in\mathbb{N}}$ such
that $K_{j}^{n_{j}}=K_{j}^{n_{k}}$ for all $j\in\mathbb{N}$ and all
$k\geqslant j$. Next, as $\mathscr{H}(\mathcal{L})^{p}$ is $s$-regular, we
have $\inf_{\Lambda\in\mathcal{L}}\mathrm{sep}(\Lambda)\geqslant s>0$, and so
$\sup_{k\in\mathbb{N}}\mathrm{rel}(\operatorname{supp}(\mu_{n_{k}}))\lesssim
s^{-1}<\infty$. Therefore, by passing to a further subsequence of
$\\{\mu_{n_{k}}\\}_{k\in\mathbb{N}}$ if necessary, Lemma 18 implies the
existence of a set $\Lambda^{*}\subset\mathbb{R}^{2}$ such that
$\operatorname{supp}(\mu_{n_{k}})\xrightarrow[]{w}\Lambda^{*}$ as
$k\to\infty$. Then, as
$\\#(K_{j}^{n_{j}}\cap\operatorname{supp}(\mu_{n_{j}}))\geqslant 2^{2j}+1$, we
have
$n^{+}\Big{(}\Lambda^{*},[0,\sqrt{2}(2^{j}+1)+2]^{2}\Big{)}\geqslant\\#\Big{(}\Lambda^{*}\cap(K_{j}^{n_{j}}+B_{1}(0))\Big{)}\geqslant
2^{2j}+1,$
for all sufficiently large $j\in\mathbb{N}$, and so
$D^{+}(\Lambda^{*})\geqslant\limsup_{j\to\infty}\frac{2^{2j}+1}{\left(\sqrt{2}(2^{j}+1)+2\right)^{2}}=\frac{1}{2}.$
(26)
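The value $\frac{1}{2}$ of the limit in (26) can be checked numerically; the sketch below (an aside, not part of the argument) also confirms that the ratio increases monotonically toward it:

```python
import math

def ratio(j: int) -> float:
    # The quantity inside the limsup in (26): point count over area of the enlarged square.
    return (2 ** (2 * j) + 1) / (math.sqrt(2) * (2 ** j + 1) + 2) ** 2

# Numerator ~ 2^{2j} and denominator ~ 2 * 2^{2j}, so the ratio climbs toward 1/2.
values = [ratio(j) for j in range(1, 31)]
assert all(a < b for a, b in zip(values, values[1:]))  # monotonically increasing
assert abs(values[-1] - 0.5) < 1e-8                    # close to the limit 1/2
```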
Now, as $\mathcal{L}$ is CSI, we have $\Lambda^{*}\in\mathcal{L}$ and
$\Lambda^{*}+(\frac{s}{2},0)\in\mathcal{L}$. Moreover, as
$\mathrm{sep}(\operatorname{supp}(\mu_{n_{k}}))\geqslant s$, for all
$k\in\mathbb{N}$, we have $\mathrm{sep}(\Lambda^{*})\geqslant s$, and so
$\mathrm{ms}(\Lambda^{*},\Lambda^{*}+(\frac{s}{2},0))\geqslant\frac{s}{2}$.
Therefore, as we assumed $\mathscr{H}(\mathcal{L})^{p}$ to be identifiable,
there exist $C_{1},C_{2}>0$ depending on $p$ and $\mathcal{L}$ and a probing
signal $x\in M^{1}(\mathbb{R})$ such that
$C_{1}\left(\frac{s}{2}\wedge
1\right)\left\|\nu\right\|_{p}\leqslant\left\|\sum_{\lambda\in\Lambda}\nu_{\lambda}\pi(\lambda)x\right\|_{M^{p}(\mathbb{R})}\leqslant
C_{2}\left\|\nu\right\|_{p},$
for all $\nu\in\mathscr{M}^{p}$ supported on
$\Lambda\vcentcolon=\Lambda^{*}\cup\left(\Lambda^{*}+(\frac{s}{2},0)\right)$,
and so by Theorems 17 and 16 we must have $D^{+}(\Lambda)<1$. On the other
hand, (26) implies
$D^{+}(\Lambda)=2D^{+}(\Lambda^{*})\geqslant 1,$
which stands in contradiction to $D^{+}(\Lambda)<1$. Our initial assumption
must hence be false, concluding the proof of the theorem. ∎
## VI Proofs of Theorems 3 and 4
We start with the following proposition that quantifies the behavior of Riesz
sums of time-frequency shifts of the probing signal $x$ under perturbation of
the individual time-frequency shifts. We do so under a mild condition on the
time-frequency spread of the probing signal. Concretely, $x$ will be assumed
to be an element of the weighted modulation space
$M^{1}_{m}(\mathbb{R})=\\{f\in\mathcal{S}^{\prime}:\mathcal{V}_{\varphi}f\in
L^{1}_{m}(\mathbb{R})\\}$.
###### Proposition 3.
Let $p\in[1,\infty)$, $\Lambda\subset\mathbb{C}$ a separated set, and let
$\alpha\in\ell^{p}(\Lambda)$. Suppose that $x\in M^{1}_{m}(\mathbb{R})$ with
the weight function $m(z)=1+|z|$. Then
* (i)
there exists a $\Phi\in W(L^{\infty},L^{1})$ depending only on $x$ such that
$|\mathcal{V}_{\varphi}(x-\pi(\epsilon)x)|(u,v)\leqslant|\epsilon|\Phi(u,v),$
for all $(u,v)\in\mathbb{R}^{2}$ and $\epsilon\in\mathbb{C}$ with
$|\epsilon|\leqslant 1$.
* (ii)
$\Big{\|}\sum_{\lambda\in\Lambda}\alpha_{\lambda}\pi(\lambda)x-\sum_{\lambda\in\Lambda}\alpha_{\lambda}e^{2\pi
i\mathrm{Re}(\lambda)\mathrm{Im}(\varepsilon_{\lambda})}\;\pi(\lambda+\varepsilon_{\lambda})x\Big{\|}_{M^{p}(\mathbb{R})}\lesssim_{\,p,s,x}\|\bm{\varepsilon}\|_{\ell^{\infty}(\Lambda)}\|\alpha\|_{\ell^{p}(\Lambda)},$
for all
$\bm{\varepsilon}=\\{\varepsilon_{\lambda}\\}_{\lambda\in\Lambda}\in\ell^{\infty}(\Lambda)$
such that $\|\bm{\varepsilon}\|_{\ell^{\infty}(\Lambda)}\leqslant 1$.
###### Proof.
(i) Fix an $\epsilon\in\mathbb{C}$ with $|\epsilon|\leqslant 1$. We split
$|\mathcal{V}_{\varphi}(x-\pi(\epsilon)x)|\leqslant|\mathcal{V}_{\varphi}(x-\mathcal{M}_{\mathrm{Im}(\epsilon)}x)|+|\mathcal{V}_{\varphi}\mathcal{M}_{\mathrm{Im}(\epsilon)}(x-\mathcal{T}_{\mathrm{Re}(\epsilon)}x)|$
(27)
and bound each term on the right-hand side separately, beginning with the
second term. To this end, we first define the auxiliary quantity
$F(u,v)=(\mathcal{V}_{\varphi}x)(u,v)e^{2\pi iuv}$. Then, for all
$(u,v)\in\mathbb{R}^{2}$ and $\tau\in\mathbb{R}$, we have
$\displaystyle|\mathcal{V}_{\varphi}(x-\mathcal{T}_{\tau}x)|(u,v)$
$\displaystyle=|(\mathcal{V}_{\varphi}x)(u,v)-e^{-2\pi i\tau
v}(\mathcal{V}_{\varphi}x)(u-\tau,v)|=|F(u,v)-F(u-\tau,v)|$
$\displaystyle=\left|\int_{-\tau}^{0}(\partial_{u}F)(u+r,v)\,\mathrm{d}r\right|\leqslant|\tau|\sup_{r\in[-\tau,0]}|(\partial_{u}F)(u+r,v)|.$
Therefore, for all $(u,v)\in\mathbb{R}^{2}$,
$\displaystyle|\mathcal{V}_{\varphi}\mathcal{M}_{\mathrm{Im}(\epsilon)}(x-\mathcal{T}_{\mathrm{Re}(\epsilon)}x)|(u,v)$
$\displaystyle=|\mathcal{V}_{\varphi}(x-\mathcal{T}_{\mathrm{Re}(\epsilon)}x)|(u,v-\mathrm{Im}(\epsilon))$
$\displaystyle\leqslant|\mathrm{Re}(\epsilon)|\sup_{r\in[-\mathrm{Re}(\epsilon),0]}|(\partial_{u}F)(u+r,v-\mathrm{Im}(\epsilon))|$
$\displaystyle\leqslant|\epsilon|\;\cdot\sup_{\|(r_{1},r_{2})\|_{\infty}\leqslant
1}|(\partial_{u}F)(u+r_{1},v+r_{2})|,$ (28)
where we used the assumption $|\epsilon|\leqslant 1$. Next, recalling the
definition of $F$, we have
$\displaystyle|(\partial_{u}F)(u,v)|$
$\displaystyle=\left|\left(\partial_{u}(\mathcal{V}_{\varphi}x)(u,v)\right)e^{2\pi
iuv}+(\mathcal{V}_{\varphi}x)(u,v)\cdot\partial_{u}e^{2\pi iuv}\right|$
$\displaystyle=\left|\partial_{u}\langle
x,\mathcal{M}_{v}\mathcal{T}_{u}\varphi\rangle+2\pi iv\langle
x,\mathcal{M}_{v}\mathcal{T}_{u}\varphi\rangle\right|$
$\displaystyle\leqslant|(\mathcal{V}_{\varphi^{\prime}}x)(u,v)|+2\pi|v||(\mathcal{V}_{\varphi}x)(u,v)|,$
(29)
where interchanging $\partial_{u}$ with the inner product in the last step is
justified due to $\mathcal{V}_{\varphi^{\prime}}x\in
L^{1}_{m}(\mathbb{R}^{2})$ (which follows from [7, Prop. 12.1.2]). Therefore,
(28) and (29) together yield
$|\mathcal{V}_{\varphi}\mathcal{M}_{\mathrm{Im}(\epsilon)}(x-\mathcal{T}_{\mathrm{Re}(\epsilon)}x)|(u,v)\leqslant|\epsilon|\;\cdot\sup_{\|(r_{1},r_{2})\|_{\infty}\leqslant
1}\Psi(u+r_{1},v+r_{2}),$ (30)
for all $(u,v)\in\mathbb{R}^{2}$, where we set
$\Psi(u,v)\vcentcolon=|(\mathcal{V}_{\varphi^{\prime}}x)(u,v)|+2\pi\left(|u|+|v|\right)|(\mathcal{V}_{\varphi}x)(u,v)|.$
We bound the first term in (27) in a similar manner, this time using another
auxiliary quantity, namely
$G(u,v)=(\mathcal{V}_{\varphi}\widehat{x})(u,v)e^{2\pi iuv}$. Then, using the
fundamental identity of time-frequency analysis [7, eq. (3.10)]
$(\mathcal{V}_{f}g)(u,v)=e^{-2\pi
iuv}(\mathcal{V}_{\widehat{f}}\widehat{g})(v,-u),\qquad
f\in\mathcal{S},g\in\mathcal{S}^{\prime},(u,v)\in\mathbb{R}^{2},$
and the fact that $\widehat{\varphi}=\varphi$, we obtain
$\displaystyle|\mathcal{V}_{\varphi}(x-\mathcal{M}_{\mathrm{Im}(\epsilon)}x)|(u,v)$
$\displaystyle=|\mathcal{V}_{\widehat{\varphi}}(\widehat{x}-\mathcal{T}_{\mathrm{Im}(\epsilon)}\widehat{x})|(v,-u)$
$\displaystyle\leqslant|\epsilon|\;\cdot\sup_{\|(r_{1},r_{2})\|_{\infty}\leqslant
1}|(\partial_{u}G)(v+r_{1},-(u+r_{2}))|,$ (31)
and
$\displaystyle|(\partial_{u}G)(v,-u)|$
$\displaystyle\leqslant|(\mathcal{V}_{\varphi^{\prime}}\widehat{x})(v,-u)|+2\pi|u||(\mathcal{V}_{\varphi}\widehat{x})(v,-u)|$
(32)
$\displaystyle\leqslant|(\mathcal{V}_{i\widehat{\varphi^{\prime}}}\,\widehat{x})(v,-u)|+2\pi|u||(\mathcal{V}_{\widehat{\varphi}}\,\widehat{x})(v,-u)|$
(33)
$\displaystyle=|(\mathcal{V}_{\varphi^{\prime}}x)(u,v)|+2\pi|u||(\mathcal{V}_{\varphi}x)(u,v)|$
(34)
for all $(u,v)\in\mathbb{R}^{2}$, where (32) is obtained analogously to (29),
in (33) we used $\varphi^{\prime}=i\widehat{\varphi^{\prime}}$, and in (34) we
again used the fundamental identity of time-frequency analysis.
Combining (31) and (34) thus yields
$|\mathcal{V}_{\varphi}(x-\mathcal{M}_{\mathrm{Im}(\epsilon)}x)|(u,v)\leqslant|\epsilon|\;\cdot\sup_{\|(r_{1},r_{2})\|_{\infty}\leqslant
1}\Psi(u+r_{1},v+r_{2}),$ (35)
and so (30) and (35) together give
$|\mathcal{V}_{\varphi}(x-\pi(\epsilon)x)|(u,v)\leqslant
2|\epsilon|\;\cdot\sup_{\|(r_{1},r_{2})\|_{\infty}\leqslant
1}\Psi(u+r_{1},v+r_{2}).$
Therefore, in order to complete the proof of item (i), it suffices to take
$\Phi(u,v)=\sup_{\|(r_{1},r_{2})\|_{\infty}\leqslant
1}2\Psi(u+r_{1},v+r_{2}),$
and show that $\Phi\in W(L^{\infty},L^{1})$. In fact, as
$\displaystyle\|\Phi\|_{W(L^{\infty},L^{1})}$
$\displaystyle\lesssim\int_{\mathbb{R}^{2}}\sup_{\|(y_{1},y_{2})\|_{\infty}\leqslant
1}\Phi(u+y_{1},v+y_{2})\,\mathrm{d}u\mathrm{d}v\leqslant
2\int_{\mathbb{R}^{2}}\sup_{\|(r_{1},r_{2})\|_{\infty}\leqslant
2}\Psi(u+r_{1},v+r_{2})\,\mathrm{d}u\mathrm{d}v$
$\displaystyle\lesssim\|\Psi\|_{W(L^{\infty},L^{1})},$
it suffices to establish that $\Psi\in W(L^{\infty},L^{1})$. To this end, note
that
$\|\mathcal{V}_{\varphi^{\prime}}x\|_{L^{1}(\mathbb{R}^{2})}\asymp\|x\|_{M^{1}(\mathbb{R})}$
(see [7, Prop. 11.4.2]), and so by [7, Prop. 12.1.11] we have
$\mathcal{V}_{\varphi^{\prime}}x\in W(L^{\infty},L^{1})$. Next, as $\varphi\in
M^{1}_{m}(\mathbb{R})$ and $x\in M^{1}_{m}(\mathbb{R})$, we have by [7, Prop.
12.1.11] that $\mathcal{V}_{\varphi}x\in W(L^{\infty},L^{1}_{m})$. Therefore,
using $|u|+|v|\lesssim 1+|u+iv|=m(u+iv)$, we have
$\|\Psi\|_{W(L^{\infty},L^{1})}\lesssim\|\mathcal{V}_{\varphi^{\prime}}x\|_{W(L^{\infty},L^{1})}+\|\mathcal{V}_{\varphi}x\|_{W(L^{\infty},L^{1}_{m})}<\infty,$
as desired.
(ii) Recalling the definition of $\|\cdot\|_{M^{p}(\mathbb{R})}$, we have
$\displaystyle\Big{\|}{\sum_{\lambda\in\Lambda}\alpha_{\lambda}\pi(\lambda)x-\sum_{\lambda\in\Lambda}\alpha_{\lambda}e^{2\pi
i\mathrm{Re}(\lambda)\mathrm{Im}(\varepsilon_{\lambda})}\;\pi(\lambda+\varepsilon_{\lambda})x}\Big{\|}_{M^{p}(\mathbb{R})}$
$\displaystyle=\;$
$\displaystyle\Big{\|}{\sum_{\lambda\in\Lambda}\alpha_{\lambda}\mathcal{V}_{\varphi}(\pi(\lambda)x)-\sum_{\lambda\in\Lambda}\alpha_{\lambda}e^{2\pi
i\mathrm{Re}(\lambda)\mathrm{Im}(\varepsilon_{\lambda})}\;\mathcal{V}_{\varphi}(\pi(\lambda+\varepsilon_{\lambda})x)}\Big{\|}_{L^{p}(\mathbb{R}^{2})}$
$\displaystyle=\;$
$\displaystyle\Big{\|}{\sum_{\lambda\in\Lambda}\alpha_{\lambda}\,\mathcal{V}_{\varphi}(\pi(\lambda)(x-\pi(\varepsilon_{\lambda})x))}\Big{\|}_{L^{p}(\mathbb{R}^{2})},$
where we used the commutation relation
$\pi(\lambda+\varepsilon_{\lambda})=e^{-2\pi
i\mathrm{Re}(\lambda)\mathrm{Im}(\varepsilon_{\lambda})}\pi(\lambda)\pi(\varepsilon_{\lambda})$.
Now, by item (i) we have
$\left|\mathcal{V}_{\varphi}(\pi(\lambda)(x-\pi(\varepsilon_{\lambda})x))\right|=\left|\mathcal{V}_{\varphi}(x-\pi(\varepsilon_{\lambda})x)\right|(\,\cdot-\lambda)\leqslant\|\bm{\varepsilon}\|_{\ell^{\infty}(\Lambda)}\Phi(\,\cdot-\lambda)$
pointwise, for all $\lambda\in\Lambda$, and so, by Lemma 15, we find that
$\Big{\|}{\sum_{\lambda\in\Lambda}\alpha_{\lambda}\,\mathcal{V}_{\varphi}(\pi(\lambda)(x-\pi(\varepsilon_{\lambda})x))}\Big{\|}_{L^{p}(\mathbb{R}^{2})}\leqslant\Big{\|}{\sum_{\lambda\in\Lambda}|\alpha_{\lambda}|\|\bm{\varepsilon}\|_{\ell^{\infty}(\Lambda)}\Phi(\,\cdot-\lambda)}\Big{\|}_{L^{p}(\mathbb{R}^{2})}\lesssim_{\,p,s,x}\|\bm{\varepsilon}\|_{\ell^{\infty}(\Lambda)}\|\alpha\|_{\ell^{p}(\Lambda)}.$
This establishes (ii) and completes the proof. ∎
We next show that weak-* convergence of measures
$\mu_{n}\in\mathscr{M}^{p}_{s}$ implies weak-* convergence of the measurements
$\mathcal{H}_{\mu_{n}}x\in M^{p}(\mathbb{R})$.
###### Proposition 4.
Let $p\in(1,\infty)$, $x\in M^{1}_{m}(\mathbb{R})$ with the weight function
$m(z)=1+|z|$, and let
$\\{\mu_{n}\\}_{n\in\mathbb{N}}\subset\mathscr{M}^{p}_{s}$ be a sequence
converging to some $\mu\in\mathscr{M}^{p}_{s}$ in the weak-* topology
$W(C_{0},L^{q})$. Then $\mathcal{H}_{\mu_{n}}x\to\mathcal{H}_{\mu}x$ in the
weak-* topology of $M^{p}(\mathbb{R})$.
###### Proof.
As $p\in(1,\infty)$, $M^{p}(\mathbb{R})$ is reflexive and so its weak and
weak-* topologies coincide [7, Thm. 11.3.6]. Due to the dual pairing (7), this
topology is generated by the linear functionals
$\langle\,\cdot\,,y\rangle_{M^{p}(\mathbb{R})\times M^{q}(\mathbb{R})}$, for
$y\in M^{q}(\mathbb{R})$, and so we have to show that
$\lim_{n\to\infty}\langle\mathcal{H}_{\mu_{n}}x,y\rangle_{M^{p}(\mathbb{R})\times
M^{q}(\mathbb{R})}=\langle\mathcal{H}_{\mu}x,y\rangle_{M^{p}(\mathbb{R})\times
M^{q}(\mathbb{R})},\quad\text{for all }y\in M^{q}(\mathbb{R}).$ (36)
Now, for $y\in M^{q}(\mathbb{R})$, set $f_{y}(\lambda)\vcentcolon=\langle
y,\pi(\lambda)x\rangle_{M^{q}(\mathbb{R})\times M^{p}(\mathbb{R})}$, for
$\lambda=\tau+i\nu$. If we show that $f_{y}\in C_{0}$, we will then have
$\langle\mathcal{H}_{\mu_{n}}x,y\rangle_{M^{p}(\mathbb{R})\times
M^{q}(\mathbb{R})}=\sum_{\lambda\in\operatorname{supp}(\mu_{n})}\mu_{n}(\\{\lambda\\})\langle\pi(\lambda)x,y\rangle_{M^{p}(\mathbb{R})\times
M^{q}(\mathbb{R})}=\langle\mu_{n},f_{y}\rangle_{W(\mathcal{M},L^{p})\times
W(C_{0},L^{q})},$ (37)
since the dual pairing is continuous in its first argument, and so, as
$\mu_{n}\xrightarrow[]{w^{*}}\mu$ by assumption, (37) will imply (36).
Therefore, in order to complete the proof it suffices to show that $f_{y}\in
C_{0}$, for all $y\in M^{q}(\mathbb{R})$.
To this end, fix an arbitrary $y\in M^{q}(\mathbb{R})$ and note that then
$\mathcal{V}_{x}\,y\in W(L^{\infty},L^{q})$ by [7, Thm. 12.2.1]. On the other
hand, Hölder’s inequality yields
$\displaystyle\left|f_{y}(\lambda)\right|=\left|\langle
y,\pi(\lambda)x\rangle_{M^{q}(\mathbb{R})\times M^{p}(\mathbb{R})}\right|$
$\displaystyle\leqslant\int_{\mathbb{R}^{2}}\left|(\mathcal{V}_{\varphi}y)(u,v)\right|\left|(\mathcal{V}_{\varphi}\pi(\lambda)x)(u,v)\right|\,\mathrm{d}u\mathrm{d}v$
$\displaystyle\leqslant\int_{\mathbb{R}^{2}}\left|(\mathcal{V}_{\varphi}y)(u,v)\right|\left|(\mathcal{V}_{\varphi}x)(u-\tau,v-\nu)\right|\,\mathrm{d}u\mathrm{d}v$
$\displaystyle=(|\mathcal{V}_{\varphi}y|\ast|\mathcal{V}_{\varphi}x(-\,\cdot\,)|)(\tau,\nu).$
(38)
Next, as $x\in M^{1}(\mathbb{R})$, we have $\mathcal{V}_{\varphi}x\in
W(L^{\infty},L^{1})$ by [7, Thm. 12.2.1], which, together with
$\mathcal{V}_{\varphi}y\in L^{q}(\mathbb{R}^{2})$ and (38), implies by [7,
Thm. 11.1.5] that $f_{y}\in W(L^{\infty},L^{q})$. This, in particular, shows
that $f_{y}(z)\to 0$ as $|z|\to\infty$. Consider now arbitrary
$\lambda\in\mathbb{C}$ and $\epsilon\in\mathbb{C}$ with $|\epsilon|\leqslant
1$. Then, by applying item (i) of Proposition 3 with $x$ replaced by
$\pi(\lambda)x$, we find a $\Phi_{\lambda}\in W(L^{\infty},L^{1})\subset
W(L^{\infty},L^{p})$ depending only on $x$ and $\lambda$ such that
$|\mathcal{V}_{\varphi}(\pi(\lambda)x-\pi(\epsilon)\pi(\lambda)x)|\leqslant|\epsilon|\cdot\Phi_{\lambda}$
pointwise. We thus have
$\displaystyle\|\pi(\lambda)x-\pi(\lambda+\epsilon)x\|_{M^{p}(\mathbb{R})}$
$\displaystyle=\|\pi(\lambda)x-e^{2\pi
i\mathrm{Re}(\epsilon)\mathrm{Im}(\lambda)}\pi(\epsilon)\pi(\lambda)x\|_{M^{p}(\mathbb{R})}$
$\displaystyle\leqslant\|\pi(\lambda)x\|_{M^{p}(\mathbb{R})}\left|1-e^{2\pi
i\mathrm{Re}(\epsilon)\mathrm{Im}(\lambda)}\right|+\|\pi(\lambda)x-\pi(\epsilon)\pi(\lambda)x\|_{M^{p}(\mathbb{R})}$
$\displaystyle\leqslant\|\pi(\lambda)x\|_{M^{p}(\mathbb{R})}\left|1-e^{2\pi
i\mathrm{Re}(\epsilon)\mathrm{Im}(\lambda)}\right|+|\epsilon|\cdot\|\Phi_{\lambda}\|_{L^{p}(\mathbb{R}^{2})},$
and so $\lim_{\epsilon\to
0}\|\pi(\lambda+\epsilon)x-\pi(\lambda)x\|_{M^{p}(\mathbb{R})}=0$. Therefore,
by the continuity of the dual pairing in the second argument, we get
$f_{y}(\lambda+\epsilon)=\langle
y,\pi(\lambda+\epsilon)x\rangle_{M^{q}(\mathbb{R})\times
M^{p}(\mathbb{R})}\to\langle y,\pi(\lambda)x\rangle_{M^{q}(\mathbb{R})\times
M^{p}(\mathbb{R})}=f_{y}(\lambda)\quad\text{as }\epsilon\to 0,$
and so, as $\lambda$ was arbitrary, we deduce that $f_{y}$ is continuous. We
have hence established that $f_{y}\in C_{0}$, completing the proof. ∎
We are now ready to prove Theorems 3 and 4. In addition to Propositions 3 and
4, we will need the Banach-Alaoglu theorem as well as the inequality (14).
###### Proof of Theorem 3.
(i) Let $\mu,\widetilde{\mu}\in\mathscr{H}(\mathcal{L})^{p}$ be such that
$\mathcal{H}_{\widetilde{\mu}}\,x=\mathcal{H}_{\mu}x$, write
${\mu}=\sum_{\lambda\in\Lambda}\alpha_{\lambda}\delta_{{\lambda}}$,
$\widetilde{\mu}=\sum_{\widetilde{\lambda}\in\widetilde{\Lambda}}\alpha_{\widetilde{\lambda}}\delta_{\widetilde{\lambda}}$,
and for $R>0$ define
$\displaystyle\delta_{R}$
$\displaystyle=\min\left\\{|\lambda-\widetilde{\lambda}|:\lambda\in\Lambda\cap
B_{R}(0),\;\widetilde{\lambda}\in\widetilde{\Lambda}\setminus\Lambda\right\\}\wedge\frac{s}{2}\wedge
1,\quad\text{and}$ $\displaystyle\widetilde{\Lambda}_{R}$
$\displaystyle=\\{\widetilde{\lambda}\in\widetilde{\Lambda}\setminus\Lambda:|\widetilde{\lambda}|>R+\delta_{R},\,d(\widetilde{\lambda},\Lambda)\leqslant\delta_{R}\\}.$
Informally, $\delta_{R}$ is the distance between $\Lambda$ and
$\widetilde{\Lambda}$ restricted to the disk $B_{R}(0)$ (not counting the
points in $\Lambda\cap\widetilde{\Lambda}$), and $\widetilde{\Lambda}_{R}$ is
the part of the support of $\widetilde{\mu}$ which is at least $R+\delta_{R}$
away from the origin and everywhere within $\delta_{R}$ of $\Lambda$. Fix an
$R>0$, and, for $\widetilde{\lambda}\in\widetilde{\Lambda}_{R}$, write
$\lambda(\widetilde{\lambda})$ for the point of $\Lambda$ such that
$|\widetilde{\lambda}-\lambda(\widetilde{\lambda})|\leqslant\delta_{R}$. Note
that this point is unique, as $\delta_{R}\leqslant s/2$. Next, define the
measures
$\displaystyle\widetilde{\mu}_{R}^{1}$
$\displaystyle=\sum_{\widetilde{\lambda}\in\widetilde{\Lambda}_{R}}\alpha_{\,\widetilde{\lambda}}\;e^{2\pi
i\,\mathrm{Im}(\lambda(\widetilde{\lambda})-\widetilde{\lambda})\mathrm{Re}(\widetilde{\lambda})}\delta_{\lambda(\widetilde{\lambda})},\quad\text{and}$
$\displaystyle\widetilde{\mu}_{R}^{2}$
$\displaystyle=\sum_{\widetilde{\lambda}\in\widetilde{\Lambda}\setminus\widetilde{\Lambda}_{R}}\alpha_{\,\widetilde{\lambda}}\,\delta_{\,\widetilde{\lambda}}$
and note that $\widetilde{\mu}_{R}^{1}$ is the measure obtained by “shifting
the support” of the restricted measure
$\widetilde{\mu}\mathds{1}_{\widetilde{\Lambda}_{R}}$ onto $\Lambda$, and
$\widetilde{\mu}_{R}^{2}$ is the remaining part of $\widetilde{\mu}$.
Next, we have
$\mathrm{ms}(\operatorname{supp}(\mu-\widetilde{\mu}^{1}_{R}),\operatorname{supp}(\widetilde{\mu}^{2}_{R}))\geqslant\delta_{R}$.
Now, as
$\operatorname{supp}(\mu-\widetilde{\mu}_{R}^{1})\subset\operatorname{supp}(\mu)$
and
$\operatorname{supp}(\widetilde{\mu}_{R}^{2})\subset\operatorname{supp}(\widetilde{\mu})$,
we have that $\mu-\widetilde{\mu}_{R}^{1}$ and $\widetilde{\mu}_{R}^{2}$ are
elements of $\mathscr{H}(\mathcal{L})^{p}$, and so the identifiability
condition (10) can be applied to the measures $\mu-\widetilde{\mu}_{R}^{1}$
and $\widetilde{\mu}_{R}^{2}$, yielding
$\displaystyle C_{1}(\delta_{R}\wedge
1)\|\mu-\widetilde{\mu}_{R}^{1}-\widetilde{\mu}_{R}^{2}\|_{p}$
$\displaystyle\leqslant\|\mathcal{H}_{\mu}x-\mathcal{H}_{\widetilde{\mu}_{R}^{1}}x-\mathcal{H}_{\widetilde{\mu}_{R}^{2}}x\|_{M^{p}(\mathbb{R})}$
$\displaystyle=\|\mathcal{H}_{\widetilde{\mu}}x-\mathcal{H}_{\widetilde{\mu}_{R}^{1}}x-\mathcal{H}_{\widetilde{\mu}_{R}^{2}}x\|_{M^{p}(\mathbb{R})}$
$\displaystyle=\Big{\|}{\sum_{\widetilde{\lambda}\in\widetilde{\Lambda}_{R}}\alpha_{\,\widetilde{\lambda}}\,\pi(\widetilde{\lambda})x-\sum_{\widetilde{\lambda}\in\widetilde{\Lambda}_{R}}\alpha_{\,\widetilde{\lambda}}\,e^{2\pi
i\,\mathrm{Im}(\lambda(\widetilde{\lambda})-\widetilde{\lambda})\mathrm{Re}(\widetilde{\lambda})}\pi({\lambda(\widetilde{\lambda})})x}\Big{\|}_{M^{p}(\mathbb{R})}$
$\displaystyle\lesssim_{\,p,s,x}\,\delta_{R}\cdot\|\\{\alpha_{\lambda}\\}_{\lambda\in\widetilde{\Lambda}_{R}}\|_{\ell^{p}},$
where in the last step we used item (ii) of Proposition 3 with
$\bm{\varepsilon}=\\{\lambda(\widetilde{\lambda})-\widetilde{\lambda}\\}_{\widetilde{\lambda}\in\widetilde{\Lambda}_{R}}$,
noting that
$|\lambda(\widetilde{\lambda})-\widetilde{\lambda}|\leqslant\delta_{R}\leqslant
1$, for all $\widetilde{\lambda}\in\widetilde{\Lambda}_{R}$. Therefore, by
dividing both sides by $\delta_{R}$, we obtain
$\|\mu-\widetilde{\mu}_{R}^{1}-\widetilde{\mu}_{R}^{2}\|_{p}\lesssim_{\,p,\mathscr{H}(\mathcal{L})^{p},x}\|\\{\alpha_{\lambda}\\}_{\lambda\in\widetilde{\Lambda}_{R}}\|_{\ell^{p}}.$
Now, as $R>0$ was arbitrary and
$\|\\{\alpha_{\lambda}\\}_{\lambda\in\widetilde{\Lambda}_{R}}\|_{\ell^{p}}\to
0$ as $R\to\infty$, we deduce that
$\|\mu-\widetilde{\mu}_{R}^{1}-\widetilde{\mu}_{R}^{2}\|_{p}\to 0$ as
$R\to\infty$. Moreover, as
$\|(\mu-\widetilde{\mu})\mathds{1}_{B_{R}(0)}\|_{p}\leqslant\|\mu-\widetilde{\mu}_{R}^{1}-\widetilde{\mu}_{R}^{2}\|_{p}$,
we obtain $\|(\mu-\widetilde{\mu})\mathds{1}_{B_{R}(0)}\|_{p}\to 0$ as
$R\to\infty$, which implies $\widetilde{\mu}=\mu$ and hence completes the
proof of (i).
(ii) The “if” direction follows immediately by Proposition 4. To show the
“only if” direction, suppose that
$\mathcal{H}_{\mu_{n}}x\to\mathcal{H}_{\mu}x$ in the weak-* topology of
$M^{p}(\mathbb{R})$. It then suffices to establish that every subsequence of
$\\{\mu_{n}\\}_{n\in\mathbb{N}}$ has a further subsequence that converges to
$\mu$ in the weak-* topology of $\mathscr{M}_{s}^{p}$. To this end, fix an
arbitrary subsequence $\\{\mu_{n_{k}}\\}_{k\in\mathbb{N}}$ of
$\\{\mu_{n}\\}_{n\in\mathbb{N}}$ and let $\Lambda=\operatorname{supp}(\mu)$
and $\Lambda_{k}=\operatorname{supp}(\mu_{n_{k}})$. We then have
$\sup_{k\in\mathbb{N}}\mathrm{rel}(\Lambda_{k})\lesssim s^{-2}<\infty$, and so by [22,
Lem. 4.5], $\\{\Lambda_{k}\\}_{k\in\mathbb{N}}$ has a subsequence
$\\{\Lambda_{k_{\ell}}\\}_{\ell\in\mathbb{N}}$ that converges weakly to a
relatively separated set $\widetilde{\Lambda}$. Note that, as
$\mathrm{sep}(\Lambda_{k})\geqslant s$, for all $k\in\mathbb{N}$, we also have
$\mathrm{sep}(\widetilde{\Lambda})\geqslant s$. Now, using (14) and the
identifiability condition (10), we have
$\sup_{\ell\in\mathbb{N}}\|\mu_{n_{k_{\ell}}}\|_{W(\mathcal{M},L^{p})}\lesssim_{\,p,s}\sup_{\ell\in\mathbb{N}}\|\mu_{n_{k_{\ell}}}\|_{p}\lesssim_{\,p,\mathscr{H}(\mathcal{L})^{p},x}\;\sup_{\ell\in\mathbb{N}}\|\mathcal{H}_{\mu_{n_{k_{\ell}}}}x\|_{M^{p}(\mathbb{R})}<\infty,$
and therefore, by the Banach-Alaoglu theorem [31, Thm. 3.15],
$\\{\mu_{n_{k_{\ell}}}\\}_{\ell\in\mathbb{N}}$ has a subsequence, which we
will w.l.o.g. also denote by $\\{\mu_{n_{k_{\ell}}}\\}_{\ell\in\mathbb{N}}$ to
lighten notation, such that
$\mu_{n_{k_{\ell}}}\xrightarrow[]{w^{*}}\widetilde{\mu}$, for some
$\widetilde{\mu}\in W(\mathcal{M},L^{p})$. Note that then
$\operatorname{supp}(\widetilde{\mu})\subset\widetilde{\Lambda}$ as
$\Lambda_{k_{\ell}}\xrightarrow[]{w}\widetilde{\Lambda}$, and so
$\widetilde{\mu}\in\mathscr{H}(\mathcal{L})^{p}$. It remains to show that
$\widetilde{\mu}=\mu$. To this end, note that by Proposition 4 we have
$\mathcal{H}_{\mu_{n_{k_{\ell}}}}x\to\mathcal{H}_{\widetilde{\mu}}x$ as $\ell\to\infty$ in the weak-*
topology of $M^{p}(\mathbb{R})$, and so by uniqueness of weak-* limits we
deduce that $\mathcal{H}_{\widetilde{\mu}}x=\mathcal{H}_{{\mu}}x$. From this
it follows by item (i) of the theorem that $\widetilde{\mu}=\mu$, which
finishes the proof. ∎
###### Proof of Theorem 4.
Fix $\epsilon>0$ and a finite subset
$\widetilde{\Lambda}\subset\Lambda$ such that
$\|\mu-\widetilde{\mu}\|_{p}<\epsilon$, where we set
$\widetilde{\mu}=\mu\mathds{1}_{\widetilde{\Lambda}}$, and write
$\widetilde{\mu}=\sum_{\lambda\in\widetilde{\Lambda}}\alpha_{\lambda}\delta_{\lambda}$.
Then
$\|\mathcal{H}_{\widetilde{\mu}}x-\mathcal{H}_{\mu}x\|_{M^{p}(\mathbb{R})}<C_{2}\epsilon$
by the identifiability condition. Next, as
$\mathcal{H}_{\mu_{n}}x\to\mathcal{H}_{\mu}x$ in norm, this convergence also
holds in the weak-* topology, and so by Theorem 3 we have
$\mu_{n}\xrightarrow[]{w^{*}}\mu$. We can therefore decompose each $\mu_{n}$
according to $\mu_{n}=\nu_{n}+\rho_{n}$, where
$\nu_{n}=\sum_{\lambda\in{\widetilde{\Lambda}}}\alpha_{\lambda}^{(n)}\delta_{\lambda+\bm{\varepsilon}_{n}(\lambda)},$
and $\lim_{n\to\infty}\alpha_{\lambda}^{(n)}=\alpha_{\lambda}$,
$\lim_{n\to\infty}|\bm{\varepsilon}_{n}(\lambda)|=0$, for every
$\lambda\in\widetilde{\Lambda}$. Now, for every $n\in\mathbb{N}$, define the
“shifted” measure
$\widetilde{\nu}_{n}=\sum_{\lambda\in{\widetilde{\Lambda}}}\alpha_{\lambda}^{(n)}e^{-2\pi
i\,\mathrm{Re}(\lambda)\mathrm{Im}(\bm{\varepsilon}_{n}(\lambda))}\,\delta_{\lambda}.$
Then, as $\widetilde{\Lambda}$ is finite, we have
$\lim_{n\to\infty}\|\bm{\varepsilon}_{n}\|_{\ell^{\infty}({\widetilde{\Lambda}})}=0$
and
$\sup_{n\in\mathbb{N}}\|\alpha^{(n)}\|_{\ell^{p}({\widetilde{\Lambda}})}<\infty$,
which together with item (ii) of Proposition 3 yields
$\displaystyle\|\mathcal{H}_{\widetilde{\nu}_{n}}x-\mathcal{H}_{\nu_{n}}x\|_{M^{p}(\mathbb{R})}$
$\displaystyle=\Big{\|}{\sum_{\lambda\in{\widetilde{\Lambda}}}\alpha_{\lambda}^{(n)}e^{-2\pi
i\,\mathrm{Re}(\lambda)\mathrm{Im}(\bm{\varepsilon}_{n}(\lambda))}\pi(\lambda)x-\sum_{\lambda\in{\widetilde{\Lambda}}}\alpha_{\lambda}^{(n)}\pi(\lambda+\bm{\varepsilon}_{n}(\lambda))x}\Big{\|}_{M^{p}(\mathbb{R})}$
$\displaystyle\lesssim_{\,p,s,x}\|\bm{\varepsilon}_{n}\|_{\ell^{\infty}({\widetilde{\Lambda}})}\|\alpha^{(n)}\|_{\ell^{p}({\widetilde{\Lambda}})}\to
0\quad\text{as }n\to\infty.$ (39)
To bound $\|\rho_{n}\|_{p}$, we employ the identifiability condition (10) with
the measures $\rho_{n}$ and $\widetilde{\mu}-\widetilde{\nu}_{n}$. Concretely,
we note that
$\operatorname{supp}(\rho_{n})\subset\operatorname{supp}(\mu_{n})$ and
$\operatorname{supp}(\widetilde{\mu}-\widetilde{\nu}_{n})\subset\operatorname{supp}(\widetilde{\mu})\subset\operatorname{supp}(\mu)$,
and so $\rho_{n}$ and $\widetilde{\mu}-\widetilde{\nu}_{n}$ are indeed
elements of $\mathscr{H}(\mathcal{L})^{p}$. Now,
$\mathrm{ms}(\operatorname{supp}(\widetilde{\mu}-\widetilde{\nu}_{n}),\operatorname{supp}(\rho_{n}))\geqslant
s-\|\bm{\varepsilon}_{n}\|_{\ell^{\infty}(\widetilde{\Lambda})}>s/2$,
for sufficiently large $n$, and so, by combining (39),
$\|\mathcal{H}_{\widetilde{\mu}}x-\mathcal{H}_{\mu}x\|_{M^{p}(\mathbb{R})}<C_{2}\epsilon$,
and the assumption
$\lim_{n\to\infty}\|\mathcal{H}_{\mu_{n}}x-\mathcal{H}_{\mu}x\|_{M^{p}(\mathbb{R})}=0$,
we get
$\displaystyle C_{1}\left(\frac{s}{2}\wedge
1\right)\|\widetilde{\mu}-\widetilde{\nu}_{n}-\rho_{n}\|_{p}$
$\displaystyle\leqslant\|\mathcal{H}_{\widetilde{\mu}}x-\mathcal{H}_{\widetilde{\nu}_{n}}x-\mathcal{H}_{\rho_{n}}x\|_{M^{p}(\mathbb{R})}$
$\displaystyle=\|\mathcal{H}_{\widetilde{\mu}}x-\mathcal{H}_{\widetilde{\nu}_{n}}x-\mathcal{H}_{\mu_{n}}x+\mathcal{H}_{\nu_{n}}x\|_{M^{p}(\mathbb{R})}$
$\displaystyle\leqslant\|\mathcal{H}_{\widetilde{\mu}}x-\mathcal{H}_{\mu}x\|_{M^{p}(\mathbb{R})}+\|\mathcal{H}_{\mu}x-\mathcal{H}_{\mu_{n}}x\|_{M^{p}(\mathbb{R})}+\|\mathcal{H}_{\nu_{n}}x-\mathcal{H}_{\widetilde{\nu}_{n}}x\|_{M^{p}(\mathbb{R})}$
$\displaystyle<2C_{2}\epsilon$ (40)
for large enough $n$. On the other hand,
$\|\widetilde{\mu}-\widetilde{\nu}_{n}-\rho_{n}\|_{p}\geqslant\|\rho_{n}\|_{p}$,
and so
$\|\rho_{n}\|_{p}\leqslant\frac{2C_{2}}{C_{1}(s/2\wedge
1)}\epsilon\leqslant\frac{4C_{2}}{C_{1}(s\wedge 1)}\epsilon,$
for all sufficiently large $n$. This concludes the proof. ∎
## VII Proofs of Proposition 2 and Corollaries 5, 6, and 7
###### Proof of Proposition 2.
Let $\mathcal{L}$ be any of the classes $\mathcal{L}_{s}^{\text{sep}}$,
$\mathcal{L}_{s,N}^{\text{fin}}$, and $\mathcal{L}_{s,\theta,R}^{\text{Ray}}$.
It then follows directly from the definitions of these sets that
$\Lambda+z\vcentcolon=\\{\lambda+z:\lambda\in\Lambda\\}\in\mathcal{L}$, for
all $\Lambda\in\mathcal{L}$ and $z\in\mathbb{R}^{2}$, so it remains to verify
closure under weak convergence. To this end, let
$\\{\Lambda_{n}\\}_{n\in\mathbb{N}}$ be a sequence in $\mathcal{L}$ such that
$\Lambda_{n}\xrightarrow[]{w}\Lambda$ as $n\to\infty$, for some
$\Lambda\subset\mathbb{R}^{2}$. In all three cases we have that
$\inf_{n\in\mathbb{N}}\mathrm{sep}(\Lambda_{n})\geqslant s$ implies
$\mathrm{sep}(\Lambda)\geqslant s$; for $\mathcal{L}_{s}^{\text{sep}}$ this
already establishes the CSI property.
Suppose now that $\mathcal{L}=\mathcal{L}_{s,N}^{\text{fin}}$. It then
suffices to show that $\\#\Lambda\leqslant N$. To this end, let
$\widetilde{\Lambda}$ be an arbitrary finite subset of $\Lambda$. Then, for
each $\lambda\in\widetilde{\Lambda}$ and all sufficiently large $n$, there
exists a $\lambda_{n}\in\Lambda_{n}$ such that
$|\lambda-\lambda_{n}|<(\mathrm{sep}(\widetilde{\Lambda})\wedge s)/2$; as these
$\lambda_{n}$ are pairwise distinct, we
deduce that, for all sufficiently large $n$, there exists a
$\widetilde{\Lambda}_{n}\subset\Lambda_{n}$ such that
$\\#\widetilde{\Lambda}_{n}=\\#\widetilde{\Lambda}$. But
$\\#\Lambda_{n}\leqslant N$, for all $n\in\mathbb{N}$, by definition of the
class $\mathcal{L}_{s,N}^{\text{fin}}$, so we must have
$\\#\widetilde{\Lambda}\leqslant N$. Now, as $\widetilde{\Lambda}$ was
arbitrary, we must have $\\#\Lambda\leqslant N$, and so
$\Lambda\in\mathcal{L}_{s,N}^{\text{fin}}$. This establishes that
$\mathcal{L}_{s,N}^{\text{fin}}$ is CSI.
For $\mathcal{L}=\mathcal{L}_{s,\theta,R}^{\text{Ray}}$, fix an arbitrary
translate $K^{\circ}_{x,y}=(x,x+R)\times(y,y+R)$ of $(0,R)^{2}$, and consider
a point $\lambda\in\Lambda\cap K^{\circ}_{x,y}$. Then, as $K^{\circ}_{x,y}$ is
open, we have that for all sufficiently large $n$ there exists a
$\lambda_{n}\in\Lambda_{n}$ such that $|\lambda-\lambda_{n}|<s/2$ and
$\lambda_{n}\in K^{\circ}_{x,y}$. Therefore, as $\lambda$ was arbitrary, and
$\Lambda\cap K^{\circ}_{x,y}$ is finite, we have $\\#(\Lambda\cap
K^{\circ}_{x,y})\leqslant\\#(\Lambda_{n}\cap K^{\circ}_{x,y})\leqslant\theta
R^{2}$, for sufficiently large $n$. Thus, as $K^{\circ}_{x,y}$ was arbitrary,
we obtain $n^{+}\left(\Lambda,(0,R)^{2}\right)\leqslant\theta R^{2}$, and so
$\Lambda\in\mathcal{L}_{s,\theta,R}^{\text{Ray}}$. This establishes that
$\mathcal{L}_{s,\theta,R}^{\text{Ray}}$ is CSI and thereby completes the
proof. ∎
###### Proof of Corollary 5.
The proof is effected by verifying the conditions of Theorem 1. As
${\inf_{\Lambda\in\mathcal{L}_{s,N}^{\text{fin}}}}\,\mathrm{sep}(\Lambda)\geqslant s>0$,
by definition of $\mathcal{L}_{s,N}^{\text{fin}}$, it suffices to show that
$\mathcal{D}^{+}(\mathcal{L}_{s,N}^{\text{fin}})<\frac{1}{2}\,$. To this end,
fix a
$\Lambda=\\{\lambda_{1},\dots,\lambda_{n}\\}\in\mathcal{L}_{s,N}^{\text{fin}}$,
where $n\leqslant N$. Set $R=2\sqrt{2}\lceil\frac{N+1}{2}\rceil$, and for
$k\in\\{1,\dots,n\\}$ let
$S_{k}=\\{\xi\in\Omega_{2}:|\xi-\lambda_{k}|\leqslant R\\}$, where we recall
that $\Omega_{2}=\\{2(m+in):m,n\in\mathbb{Z}\\}$. Now, for every
$k\in\\{1,\dots,n\\}$, choose a $\xi_{k}\in
S_{k}\setminus\\{\xi_{1},\dots,\xi_{k-1}\\}$. Note that this is possible as
$\\#S_{k}\geqslant N$, for all $k$. We thus have
$|\lambda_{k}-\xi_{k}|\leqslant R$ for all $k$, and so, as $\Lambda$ was
arbitrary, we have that $\Lambda$ is $R$-uniformly close to $\Omega_{2}$, for
all $\Lambda\in\mathcal{L}_{s,N}^{\text{fin}}$. Lemma 10 therefore implies
$\mathcal{D}^{+}(\mathcal{L}_{s,N}^{\text{fin}})\leqslant 2^{-2}<\frac{1}{2}$,
completing the proof. ∎
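As an informal numerical sanity check of the covering step above (not part of the proof), the following Python sketch counts the points of $\Omega_{2}=\\{2(m+in):m,n\in\mathbb{Z}\\}$ within distance $R=2\sqrt{2}\lceil\frac{N+1}{2}\rceil$ of a grid of probe centres and confirms $\\#S_{k}\geqslant N$ for small $N$; the probe grid is an arbitrary choice:

```python
import math

def lattice_points_within(center, R):
    """Count points of Omega_2 = {2(m+in)} within distance R of center."""
    cx, cy = center
    M = int(R // 2) + 2
    m0, n0 = round(cx / 2), round(cy / 2)
    return sum(1 for m in range(m0 - M, m0 + M + 1)
                 for n in range(n0 - M, n0 + M + 1)
                 if math.hypot(2 * m - cx, 2 * n - cy) <= R)

# R = 2*sqrt(2)*ceil((N+1)/2) should guarantee at least N points of
# Omega_2 in every disk of radius R, i.e. #S_k >= N
for N in range(1, 16):
    R = 2 * math.sqrt(2) * math.ceil((N + 1) / 2)
    worst = min(lattice_points_within((2 * x / 10, 2 * y / 10), R)
                for x in range(10) for y in range(10))
    assert worst >= N, (N, worst)
```

This matches the argument in the proof: the disk of radius $R$ contains an inscribed square of side $R\sqrt{2}=4\lceil\frac{N+1}{2}\rceil$, which already holds at least $(N+1)^{2}\geqslant N$ points of $\Omega_{2}$.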
###### Proof of Corollary 6.
First assume that $s>2\cdot 3^{-\frac{1}{4}}$. In view of Theorem 1, it again
suffices to verify that
$\mathcal{D}^{+}({\mathcal{L}_{s}^{\text{sep}}})<\frac{1}{2}\,$. To do this,
we will need a special case of the plane packing inequality by Folkman and
Graham [32]:
_Let $K$ be a compact convex set. Then any subset of $K$ any two of whose
points are at least $1$ apart (in the Euclidean metric) has cardinality at
most $\frac{2}{\sqrt{3}}\mathrm{Area}(K)+\frac{1}{2}\mathrm{Per}(K)+1$._
For our purposes $K$ will be a square of side length $R$, to be specified
later. By scaling, we see that any subset of $[0,R]^{2}$ any two of whose
points are at least $s$ apart has cardinality at most
$\frac{2}{\sqrt{3}}\frac{R^{2}}{s^{2}}+\frac{2R}{s}+1$. Let
$\theta\in\big{(}\frac{2}{\sqrt{3}}s^{-2},\frac{1}{2}\big{)}$,
$\gamma\in(\sqrt{2},\theta^{-1/2})$, and $R>0$ such that
$2(sR)^{-1}+R^{-2}\leqslant\theta-\frac{2}{\sqrt{3}}s^{-2}$. Then, for all
$\Lambda\in{\mathcal{L}_{s}^{\text{sep}}}$, we have
$n^{+}(\Lambda,(0,R)^{2})\leqslant\frac{2}{\sqrt{3}}\frac{R^{2}}{s^{2}}+\frac{2R}{s}+1\leqslant\left(\frac{2}{\sqrt{3}}s^{-2}+2(sR)^{-1}+R^{-2}\right)R^{2}\leqslant\theta
R^{2},$
and so, by Lemma 9, there exists an $R^{\prime}=R^{\prime}(\theta,\gamma,R)>0$
such that $\Lambda$ is $R^{\prime}$-uniformly close to $\Omega_{\gamma}$.
Therefore, as $\Lambda$ was arbitrary, it follows by Lemma 10 that
$\mathcal{D}^{+}({\mathcal{L}_{s}^{\text{sep}}})\leqslant\gamma^{-2}<\frac{1}{2}$,
and so $\mathscr{H}({\mathcal{L}_{s}^{\text{sep}}})^{p}$ is identifiable by
the probing signal $\varphi$.
Now consider the case $s\leqslant 2\cdot 3^{-\frac{1}{4}}$ and let $r=2\cdot
3^{-\frac{1}{4}}$,
$\Lambda^{\prime}=\big{\\{}rm(1,0)+rn\big{(}\frac{1}{2},\frac{\sqrt{3}}{2}\big{)}:m,n\in\mathbb{Z}\big{\\}}\in{\mathcal{L}_{s}^{\text{sep}}}$.
Then
$D^{+}(\Lambda^{\prime})=\left(\frac{\sqrt{3}}{2}r^{2}\right)^{-1}=\frac{1}{2}$,
and so by Lemma 8 we have
$\mathcal{D}^{+}({\mathcal{L}_{s}^{\text{sep}}})\geqslant
D^{+}(\Lambda^{\prime})=\frac{1}{2}$. It thus follows by Theorem 2 that
$\mathscr{H}({\mathcal{L}_{s}^{\text{sep}}})^{p}$ is not identifiable. ∎
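The two density computations above are easy to confirm numerically. The following Python sketch (an informal check, not part of the proof) verifies that the fundamental cell of the hexagonal lattice $\Lambda^{\prime}$ with $r=2\cdot 3^{-1/4}$ has area exactly $2$, so that $D^{+}(\Lambda^{\prime})=\frac{1}{2}$, and that a direct count of $\Lambda^{\prime}\cap[0,R]^{2}$ respects the scaled Folkman–Graham bound; the value $R=200$ is an arbitrary choice:

```python
import math

r = 2 * 3 ** (-0.25)                    # minimal separation of Lambda'
cell_area = (math.sqrt(3) / 2) * r * r  # area of the fundamental cell
assert abs(cell_area - 2) < 1e-12       # hence D+(Lambda') = 1/2

# direct count of Lambda' = {r*m*(1,0) + r*n*(1/2, sqrt(3)/2)} in [0, R]^2
R = 200.0
count = 0
M = int(R / r) + 3
for m in range(-M, 2 * M):
    for n in range(0, 2 * M):
        x = r * (m + n / 2)
        y = r * n * math.sqrt(3) / 2
        if 0 <= x <= R and 0 <= y <= R:
            count += 1

# empirical density approaches D+(Lambda') = 1/2
assert abs(count / R ** 2 - 0.5) < 0.05
# Folkman-Graham: an r-separated subset of [0,R]^2 has at most this many points
assert count <= (2 / math.sqrt(3)) * (R / r) ** 2 + 2 * R / r + 1
```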
###### Proof of Corollary 7.
Consider first $\theta<\frac{1}{2}$. Fix $\gamma\in(\sqrt{2},\theta^{-1/2})$,
and let $\Lambda\in{\mathcal{L}_{s,\theta,R}^{\text{Ray}}}$. Then, by Lemma 9,
there exists an $R^{\prime}=R^{\prime}(\theta,\gamma,R)>0$ such that $\Lambda$
is $R^{\prime}$-uniformly close to $\Omega_{\gamma}$. Therefore, as $\Lambda$
was arbitrary, it follows by Lemma 10 that
$\mathcal{D}^{+}({\mathcal{L}_{s,\theta,R}^{\text{Ray}}})\leqslant\gamma^{-2}<\frac{1}{2}$,
and so Theorem 1 implies that
$\mathscr{H}({\mathcal{L}_{s,\theta,R}^{\text{Ray}}})^{p}$ is identifiable by
the probing signal $\varphi$.
For $\theta>\frac{1}{2}$, suppose by way of contradiction that there exists a
sequence $\\{R_{n}\\}_{n\in\mathbb{N}}$ of positive numbers such that
$\lim_{n\to\infty}R_{n}=\infty$ and
$\mathscr{H}({\mathcal{L}_{s,\theta,R_{n}}^{\text{Ray}}})^{p}$ is
identifiable, for all $n\in\mathbb{N}$. Let
$\\{\gamma_{n}\\}_{n\in\mathbb{N}}$ be a sequence of positive numbers such
that $\gamma_{n}>\theta^{-1/2}>s$ and
$\gamma_{n}^{-2}+4\left(R_{n}^{-1}\gamma_{n}^{-1}+R_{n}^{-2}\right)\leqslant\theta,$
for all $n\in\mathbb{N}$, and $\lim_{n\to\infty}\gamma_{n}=\theta^{-1/2}$. We
then have
$n^{+}(\Omega_{\gamma_{n}},(0,R_{n})^{2})\leqslant\left(R_{n}\gamma_{n}^{-1}\right)^{2}+4\left(R_{n}\gamma_{n}^{-1}+1\right)\leqslant\theta
R_{n}^{2},$
where the second term takes into account the points of $\Omega_{\gamma_{n}}$
along the four sides of the square $(0,R_{n})^{2}$. Therefore
$\Omega_{\gamma_{n}}\in\mathscr{H}({\mathcal{L}_{s,\theta,R_{n}}^{\text{Ray}}})^{p}$,
for all $n\in\mathbb{N}$, and so, as $\lim_{n\to\infty}\gamma_{n}^{-2}=\theta$
and $\theta>\frac{1}{2}$, it follows by Lemma 8 that
$\mathcal{D}^{+}({\mathcal{L}_{s,\theta,R_{n}}^{\text{Ray}}})\geqslant
D^{+}(\Omega_{\gamma_{n}})=\gamma_{n}^{-2}>\frac{1}{2}$, for all sufficiently
large $n$. This stands in contradiction to Theorem 2, completing the proof. ∎
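The existence of the sequence $\\{\gamma_{n}\\}$ used above can be illustrated numerically. The following Python sketch (an informal check, not part of the proof) takes the arbitrary sample values $\theta=0.6$ and $R=1000$, picks $\gamma$ slightly above $\theta^{-1/2}$, and verifies both the margin condition and the resulting count bound $n^{+}(\Omega_{\gamma},(0,R)^{2})\leqslant\theta R^{2}$:

```python
import math

theta, R = 0.6, 1000.0
gamma = theta ** -0.5 + 0.01      # just above theta^{-1/2}
# margin condition: gamma^{-2} + 4*(1/(R*gamma) + 1/R^2) <= theta
assert gamma ** -2 + 4 * (1 / (R * gamma) + 1 / R ** 2) <= theta

def axis_count(x0):
    """Points of the grid gamma*Z in the open interval (x0, x0 + R)."""
    lo = math.floor(x0 / gamma) - 1
    hi = math.ceil((x0 + R) / gamma) + 1
    return sum(1 for m in range(lo, hi + 1) if x0 < gamma * m < x0 + R)

# n+(Omega_gamma, (0,R)^2) factorises over the two coordinates
worst = max(axis_count(t * gamma / 7) for t in range(7)) ** 2
assert worst <= theta * R ** 2
```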
## Acknowledgments
The authors would like to thank D. Stotz and R. Heckel for inspiring
discussions and H. G. Feichtinger for his comments on an earlier version of
the manuscript.
## References
* [1] T. Kailath, “Measurements on time-variant communication channels,” _IRE Trans. Inf. Theory_ , vol. 8, no. 5, pp. 229–236, Sep. 1962.
* [2] P. A. Bello, “Measurement of random time-variant linear channels,” _IEEE Trans. Inf. Theory_ , vol. 15, no. 4, pp. 469–475, Jul. 1969.
* [3] W. Kozek and G. E. Pfander, “Identification of operators with bandlimited symbols,” _SIAM J. Math. Anal._ , vol. 37, no. 3, pp. 867–888, Jan. 2005.
* [4] R. Heckel and H. Bölcskei, “Identification of sparse linear operators,” _IEEE Trans. Inf. Theory_ , vol. 59, no. 12, pp. 7985–8000, Dec. 2013.
* [5] W. U. Bajwa, K. Gedalyahu, and Y. C. Eldar, “Identification of parametric underspread linear systems and super-resolution radar,” _IEEE Trans. Sig. Proc._ , vol. 59, no. 6, pp. 2548–2561, June 2011.
* [6] R. Heckel, V. I. Morgenshtern, and M. Soltanolkotabi, “Super-resolution radar,” _Information and Inference: A Journal of the IMA_ , vol. 5, no. 1, pp. 22–57, 2015.
* [7] K. H. Gröchenig, _Foundations of Time-Frequency Analysis_ , ser. Appl. Numer. Harmonic Anal., J. J. Benedetto, Ed. Boston, MA, USA: Birkhäuser, 2000.
* [8] G. E. Pfander and D. F. Walnut, “Measurement of time-variant linear channels,” _IEEE Trans. Inf. Theory_ , vol. 52, no. 11, pp. 4808–4820, 2006.
* [9] P. Feng and Y. Bresler, “Spectrum-blind minimum-rate sampling and reconstruction of multiband signals,” in _Proc. of IEEE Int. Conf. Acoust. Speech Sig. Proc. (ICASSP)_ , vol. 3, May 1996, pp. 1688–1691.
* [10] P. Feng, “Universal minimum-rate sampling and spectrum-blind reconstruction for multiband signals,” Ph.D. dissertation, Univ. Illinois, Urbana-Champaign, 1997.
* [11] Y. Lu and M. Do, “A theory for sampling signals from a union of subspaces,” _IEEE Trans. Signal Process._ , vol. 56, pp. 2334–2345, 2008.
* [12] M. Mishali and Y. C. Eldar, “Blind multiband signal reconstruction: Compressed sensing for analog signals,” _IEEE Trans. Signal Process._ , vol. 57, no. 3, pp. 993–1009, 2009.
* [13] N. Grip, G. E. Pfander, and P. Rashkov, “A time-frequency density criterion for operator identification,” _Sampling Theory in Signal and Image Processing_ , vol. 12, no. 1, pp. 1–19, Jan. 2013.
* [14] D. L. Donoho, “Super-resolution via sparsity constraints,” _SIAM J. Math. Anal._ , vol. 23, no. 5, pp. 1303–1331, Sep. 1992.
* [15] C. Aubel, D. Stotz, and H. Bölcskei, “A theory of super-resolution from short-time Fourier transform measurements,” _Journal of Fourier Analysis and Applications_ , vol. 24, no. 3, pp. 45–107, 2017.
* [16] A. Beurling, “V. Interpolation for an interval on R1. 1. A density theorem. Mittag-Leffler Lectures on Harmonic Analysis,” in _The Collected Works of Arne Beurling: Volume 2, Harmonic Analysis_ , L. Carleson, P. Malliavin, J. Neuberger, and J. Werner, Eds. Boston, MA, USA: Birkhäuser, 1989, pp. 351–359.
* [17] ——, “Balayage of Fourier-Stieltjes transforms,” in _The Collected Works of Arne Beurling: Volume 2, Harmonic Analysis_ , L. Carleson, P. Malliavin, J. Neuberger, and J. Werner, Eds. Boston, MA, USA: Birkhäuser, 1989, pp. 341–350.
* [18] K. Seip, “Density theorems for sampling and interpolation in the Bargmann-Fock space,” _Bulletin of the AMS_ , vol. 26, no. 2, pp. 322–328, Apr. 1992.
* [19] ——, “Density theorems for sampling and interpolation in the Bargmann-Fock space I,” _J. Reine und Angew. Math._ , vol. 429, pp. 91–106, 1992.
* [20] K. Seip and R. Wallstén, “Density theorems for sampling and interpolation in the Bargmann-Fock space II,” _J. Reine und Angew. Math._ , vol. 429, pp. 107–113, 1992.
* [21] S. Brekke and K. Seip, “Density theorems for sampling and interpolation in the Bargmann-Fock space III,” _Mathematica Scandinavica_ , vol. 73, pp. 112–126, 1993.
* [22] K. H. Gröchenig, J. Ortega-Cerdà, and J. L. Romero, “Deformation of Gabor systems,” _Advances in Mathematics_ , vol. 277, pp. 388–425, 2015.
* [23] H. J. Landau, “Necessary density conditions for sampling and interpolation of certain entire functions,” _Acta Mathematica_ , vol. 117, no. 1, pp. 37–52, Jul. 1966.
* [24] H. G. Feichtinger, “Banach convolution algebras of Wiener type,” _Proc. Conf. on Functions, Series, Operators_ , vol. 35, pp. 509–524, 1980.
* [25] H. G. Feichtinger and P. Gröbner, “Banach spaces of distributions defined by decomposition methods, I,” _Math. Nachr._ , no. 123, pp. 97–120, 1985.
* [26] W. Rudin, _Real and Complex Analysis_ , 3rd ed., ser. Higher Mathematics. McGraw-Hill, 1987.
* [27] C. Aubel and H. Bölcskei, “Density criteria for the identification of linear time-varying systems,” in _Proceedings of IEEE International Symposium on Information Theory (ISIT)_ , Hong Kong, China, June 2015, pp. 2568–2572.
* [28] K. Zhu, _Analysis on Fock Spaces_ , ser. Graduate Texts in Mathematics. Springer-Verlag, 2012.
* [29] K. H. Gröchenig and D. F. Walnut, “A Riesz basis for Bargmann-Fock space related to sampling and interpolation,” _Arkiv för Matematik_ , vol. 30, no. 1-2, pp. 283–295, 1992.
* [30] W. Gryc and T. Kemp, “Duality in Segal-Bargmann spaces,” _Journal of Fourier Analysis_ , vol. 261, pp. 1591–1623, 2011.
* [31] W. Rudin, _Functional Analysis_ , 2nd ed. McGraw-Hill, 1991.
* [32] J. Folkman and R. Graham, “A packing inequality for compact convex subsets of the plane,” _Canad. Math. Bull._ , vol. 12, pp. 745–752, 1969.
## Appendix: proofs of auxiliary results
###### Proof of Proposition 1.
Note that $\mathcal{H}_{\Lambda}$ is, in fact, the synthesis operator treated
in [22], and so items (i) and (ii) follow immediately from [22, §2.5]. We
proceed to establish (iii). To this end, recall that
$\varphi(t)=2^{\frac{1}{4}}e^{-\pi t^{2}}$, fix an arbitrary
$(x,\omega)\in\Lambda$, and let $\widetilde{\Lambda}$ be a finite subset of
$\Lambda$ such that $(x,\omega)\in\widetilde{\Lambda}$. Then, for
$a,b\in\mathbb{R}$ and $(\tau,\nu)\in\widetilde{\Lambda}$, we have
$\displaystyle\langle\mathcal{M}_{\nu}\mathcal{T}_{\tau}\,\mathcal{M}_{-\omega-b}\mathcal{T}_{a}\varphi,\mathcal{T}_{x+a}\mathcal{M}_{-b}\varphi\rangle_{M^{q}(\mathbb{R})\times
M^{p}(\mathbb{R})}$ $\displaystyle=$
$\displaystyle\langle\varphi,\mathcal{T}_{-a}\,\mathcal{M}_{\omega+b}\mathcal{T}_{-\tau}\mathcal{M}_{-\nu}\mathcal{T}_{x+a}\mathcal{M}_{-b}\varphi\rangle_{L^{2}(\mathbb{R})}$
$\displaystyle=$ $\displaystyle e^{-2\pi
i(x+a)b}\langle\varphi,\mathcal{T}_{-a}\,\mathcal{M}_{\omega+b}\mathcal{T}_{-\tau}\mathcal{M}_{-\nu-b}\mathcal{T}_{x+a}\varphi\rangle_{L^{2}(\mathbb{R})}$
$\displaystyle=$ $\displaystyle e^{-2\pi i(x+a)b}e^{2\pi
i\tau(\nu+b)}\langle\varphi,\mathcal{T}_{-a}\,\mathcal{M}_{(\omega+b)+(-\nu-b)}\mathcal{T}_{(x+a)-\tau}\varphi\rangle_{L^{2}(\mathbb{R})}$
$\displaystyle=$ $\displaystyle e^{-2\pi i(x+a)b}e^{2\pi i\tau(\nu+b)}e^{-2\pi
ia(\omega-\nu)}\langle\varphi,\mathcal{M}_{\omega-\nu}\mathcal{T}_{(x+a)-\tau-a}\varphi\rangle_{L^{2}(\mathbb{R})}$
$\displaystyle=$ $\displaystyle e^{-2\pi iab}e^{2\pi i\tau\nu}e^{-2\pi
ia(\omega-\nu)-2\pi
ib(x-\tau)}(\mathcal{V}_{\varphi}\varphi)(x-\tau,\omega-\nu),$
and therefore
$\displaystyle e^{2\pi
iab}\left\langle\mathcal{H}_{{\widetilde{\Lambda}}}\,(\alpha,\mathcal{M}_{-\omega-b}\mathcal{T}_{a}\varphi),\mathcal{T}_{x+a}\mathcal{M}_{-b}\varphi\right\rangle_{M^{q}(\mathbb{R})\times
M^{p}(\mathbb{R})}$ (41)
$\displaystyle\qquad\qquad\qquad=\sum_{(\tau,\nu)\in\widetilde{\Lambda}}\alpha_{\tau,\nu}\,e^{2\pi
i\tau\nu}e^{-2\pi ia(\omega-\nu)-2\pi
ib(x-\tau)}(\mathcal{V}_{\varphi}\varphi)(x-\tau,\omega-\nu).$
Now, let $\epsilon>0$ be arbitrary. We multiply both sides of (41) by
$\epsilon\,e^{-\pi\epsilon(a^{2}+b^{2})}$ and integrate over
$(a,b)\in\mathbb{R}^{2}$. Then, as the Fourier transform of
$\sqrt{\epsilon}\varphi(\cdot\,\sqrt{\epsilon})$ is
$\varphi(\cdot/\sqrt{\epsilon})$, the integral of the right-hand side equals
$\sum_{(\tau,\nu)\in\widetilde{\Lambda}}\alpha_{\tau,\nu}e^{2\pi
i\tau\nu}e^{-\pi\left[(\omega-\nu)^{2}+(x-\tau)^{2}\right]\epsilon^{-1}}(\mathcal{V}_{\varphi}\varphi)(x-\tau,\omega-\nu),$
and the integral of the left-hand side satisfies
$\displaystyle\left|\int_{\mathbb{R}^{2}}\epsilon
e^{-\pi\epsilon(a^{2}+b^{2})}e^{2\pi
iab}\left\langle\mathcal{H}_{{\widetilde{\Lambda}}}\,(\alpha,\mathcal{M}_{-\omega-b}\mathcal{T}_{a}\varphi),\mathcal{T}_{x+a}\mathcal{M}_{-b}\varphi\right\rangle_{M^{q}(\mathbb{R})\times
M^{p}(\mathbb{R})}\,\mathrm{d}a\,\mathrm{d}b\;\right|$
$\displaystyle\leqslant$ $\displaystyle\int_{\mathbb{R}^{2}}\epsilon
e^{-\pi\epsilon(a^{2}+b^{2})}\left|\left\langle\mathcal{H}_{{\widetilde{\Lambda}}}\,(\alpha,\mathcal{M}_{-\omega-b}\mathcal{T}_{a}\varphi),\mathcal{T}_{x+a}\mathcal{M}_{-b}\varphi\right\rangle_{M^{q}(\mathbb{R})\times
M^{p}(\mathbb{R})}\right|\,\mathrm{d}a\,\mathrm{d}b\;$
$\displaystyle\leqslant$ $\displaystyle\int_{\mathbb{R}^{2}}\epsilon
e^{-\pi\epsilon(a^{2}+b^{2})}\|\mathcal{H}_{\widetilde{\Lambda}}(\alpha,\cdot)\|\|\mathcal{M}_{-\omega-b}\mathcal{T}_{a}\varphi\|_{M^{1}(\mathbb{R})}\|\mathcal{T}_{x+a}\mathcal{M}_{-b}\varphi\|_{M^{q}(\mathbb{R})}\,\mathrm{d}a\,\mathrm{d}b$
$\displaystyle\leqslant$
$\displaystyle\|\mathcal{H}_{\widetilde{\Lambda}}(\alpha,\cdot)\|\|\varphi\|_{M^{1}(\mathbb{R})}\|\varphi\|_{M^{q}(\mathbb{R})}.$
We hence deduce that
$\left|\sum_{(\tau,\nu)\in\widetilde{\Lambda}}\alpha_{\tau,\nu}e^{2\pi
i\tau\nu}e^{-\pi\left[(\omega-\nu)^{2}+(x-\tau)^{2}\right]\epsilon^{-1}}(\mathcal{V}_{\varphi}\varphi)(x-\tau,\omega-\nu)\right|\leqslant\|\mathcal{H}_{\widetilde{\Lambda}}(\alpha,\cdot)\|\|\varphi\|_{M^{1}(\mathbb{R})}\|\varphi\|_{M^{q}(\mathbb{R})},$
and so, upon letting $\epsilon\to 0$, we obtain
$\left|\alpha_{x,\omega}e^{2\pi
ix\omega}(\mathcal{V}_{\varphi}\varphi)(0,0)\right|\leqslant\|\mathcal{H}_{\widetilde{\Lambda}}(\alpha,\cdot)\|\|\varphi\|_{M^{1}(\mathbb{R})}\|\varphi\|_{M^{q}(\mathbb{R})}.$
Next, note that we can write
$\mathcal{H}_{{\widetilde{\Lambda}}}=\mathcal{H}_{\Lambda}-\mathcal{H}_{{\Lambda\setminus\widetilde{\Lambda}}}$,
and so, by item (i) of the proposition, there exists a universal constant
$C>0$ such that
$\|\mathcal{H}_{{\widetilde{\Lambda}}}(\alpha,\cdot)\|\leqslant\|\mathcal{H}_{\Lambda}(\alpha,\cdot)\|+C\,\|\\{\alpha_{\lambda}\\}_{\lambda\in\Lambda\setminus\widetilde{\Lambda}}\|_{\ell^{p}}.$
Therefore, since
$(\mathcal{V}_{\varphi}\varphi)(0,0)=\|\varphi\|_{L^{2}(\mathbb{R})}^{2}=1$,
we have
$|\alpha_{x,\omega}|\leqslant\left(\|\mathcal{H}_{\Lambda}(\alpha,\cdot)\|+C\,\|\\{\alpha_{\lambda}\\}_{\lambda\in\Lambda\setminus\widetilde{\Lambda}}\|_{\ell^{p}}\right)\|\varphi\|_{M^{1}(\mathbb{R})}\|\varphi\|_{M^{q}(\mathbb{R})}.$
The term
$\|\\{\alpha_{\lambda}\\}_{\lambda\in\Lambda\setminus\widetilde{\Lambda}}\|_{\ell^{p}}$
can now be made arbitrarily small by choosing a sufficiently large
$\widetilde{\Lambda}$, and hence we deduce that
$|\alpha_{x,\omega}|\leqslant\|\mathcal{H}_{\Lambda}(\alpha,\cdot)\|\|\varphi\|_{M^{1}(\mathbb{R})}\|\varphi\|_{M^{q}(\mathbb{R})}.$
As $(x,\omega)\in\Lambda$ was arbitrary, we obtain
$\|\alpha\|_{\ell^{\infty}}=\sup_{(x,\omega)\in\Lambda}|\alpha_{x,\omega}|\leqslant\|\mathcal{H}_{\Lambda}(\alpha,\cdot)\|\|\varphi\|_{M^{1}(\mathbb{R})}\|\varphi\|_{M^{q}(\mathbb{R})},$
establishing (9). ∎
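The normalization $(\mathcal{V}_{\varphi}\varphi)(0,0)=\|\varphi\|_{L^{2}(\mathbb{R})}^{2}=1$ used above is easy to confirm numerically. The following Python sketch (an informal check) approximates $\int|\varphi(t)|^{2}\,\mathrm{d}t$ for $\varphi(t)=2^{1/4}e^{-\pi t^{2}}$ by a Riemann sum; the step size and truncation range are arbitrary choices:

```python
import math

dt = 1e-4
# |phi(t)|^2 = sqrt(2) * exp(-2*pi*t^2); integrate over [-10, 10]
norm_sq = sum(math.sqrt(2) * math.exp(-2 * math.pi * (k * dt) ** 2)
              for k in range(-100_000, 100_001)) * dt
assert abs(norm_sq - 1) < 1e-6
```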
###### Proof of Lemma 8.
Item (i) is a direct consequence of Definition 2 and the fact that
$n^{+}(\Lambda,(0,R)^{2})\allowbreak\leqslant n^{+}(\Lambda,[0,R]^{2})$. To
show item (ii), let $\epsilon>0$ be arbitrary. Then, by Definition 2, we have
$\frac{n^{+}(\Lambda,[0,R]^{2})}{R^{2}}\leqslant\mathcal{D}^{+}(\mathcal{L})+\epsilon$
for every $\Lambda\in\mathcal{L}$ and sufficiently large $R$, and thus
$D^{+}(\Lambda)=\limsup_{R\to\infty}\frac{n^{+}(\Lambda,[0,R]^{2})}{R^{2}}\leqslant\mathcal{D}^{+}(\mathcal{L})+\epsilon.$
(42)
Now, as (42) holds for every $\Lambda\in\mathcal{L}$, we deduce that
$\sup_{\Lambda\in\mathcal{L}}D^{+}(\Lambda)\leqslant\mathcal{D}^{+}(\mathcal{L})+\epsilon$,
and hence, as $\epsilon>0$ was arbitrary, we obtain
$\sup_{\Lambda\in\mathcal{L}}D^{+}(\Lambda)\leqslant\mathcal{D}^{+}(\mathcal{L})$.
∎
###### Proof of Lemma 9.
We identify $\mathbb{C}$ with $\mathbb{R}^{2}$ for ease of exposition. For a
positive integer $q$, define the set $\mathcal{S}_{q}$ according to
$\mathcal{S}_{q}=\\{[qkR,q(k+1)R]\times[q\ell
R,q(\ell+1)R]\,:\,k,\ell\in\mathbb{Z}\\},$
and note that $\mathcal{S}_{q}$ is a collection of squares in $\mathbb{R}^{2}$
of side length $qR$ tessellating the plane. A simple counting argument now
yields, for all $K\in\mathcal{S}_{q}$,
$\\#(\Omega_{\gamma}\cap
K)\geqslant(qR\gamma^{-1})^{2}-4\left(qR\gamma^{-1}+1\right),$
where the subtracted term accounts for the $\lceil qR\gamma^{-1}\rceil$ points
adjacent to each of the edges of the square $K$, but which might not be inside
it. On the other hand, as every element of $\mathcal{S}_{q}$ can be covered by
$(q+1)^{2}$ translates of $(0,R)^{2}$, using
$n^{+}(\Lambda,(0,R)^{2})\leqslant\theta R^{2}$ we have $\\#(\Lambda\cap
K)\leqslant(q+1)^{2}\cdot\theta R^{2}=((q+1)R\theta^{1/2})^{2}$ for all
$K\in\mathcal{S}_{q}$. Therefore, as $\gamma^{-1}>\theta^{1/2}$ by assumption,
there exists a positive integer $q^{\prime}=q^{\prime}(\theta,R,\gamma)$ such
that
$(q^{\prime}R\gamma^{-1})^{2}-4\left(q^{\prime}R\gamma^{-1}+1\right)\geqslant\big{(}(q^{\prime}+1)R\theta^{1/2}\big{)}^{2}.$
We thus have $\\#(\Omega_{\gamma}\cap K)\geqslant\\#(\Lambda\cap K)$, for all
$K\in\mathcal{S}_{q^{\prime}}$, and can therefore enumerate
$\Lambda=\\{\lambda_{m,n}\\}_{(m,n)\in\mathcal{I}}$ so that, for every
$(m,n)\in\mathcal{I}$, $\lambda_{m,n}$ and $\omega_{m,n}$ are contained in the
same square $K_{m,n}\in\mathcal{S}_{q^{\prime}}$. Setting
$R^{\prime}=\sqrt{2}q^{\prime}R$ to be the length of the diagonal of the
squares in $\mathcal{S}_{q^{\prime}}$ now yields
$|\lambda_{m,n}-\omega_{m,n}|\leqslant R^{\prime}$, for all
$(m,n)\in\mathcal{I}$, as desired. ∎
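The counting bound $\\#(\Omega_{\gamma}\cap K)\geqslant(qR\gamma^{-1})^{2}-4(qR\gamma^{-1}+1)$ used above can be spot-checked numerically. The following Python sketch (an informal check, not part of the proof) counts grid points in translated squares of side $qR$; the sample values $\gamma=1.5$, $R=3$ and the probe offsets are arbitrary choices:

```python
import math

gamma, R = 1.5, 3.0

def square_count(side, x0, y0):
    """Points of gamma*(Z+iZ) in the closed square [x0,x0+side] x [y0,y0+side]."""
    def axis(a):
        lo, hi = math.floor(a / gamma) - 1, math.ceil((a + side) / gamma) + 1
        return sum(1 for m in range(lo, hi + 1) if a <= gamma * m <= a + side)
    return axis(x0) * axis(y0)

for q in [1, 2, 5, 10]:
    L = q * R / gamma                 # L = q * R * gamma^{-1}
    lower = L ** 2 - 4 * (L + 1)      # claimed lower bound on the count
    for t in range(8):                # several translates of the square K
        assert square_count(q * R, t * gamma / 8, t * gamma / 8) >= lower
```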
###### Proof of Lemma 10.
We again identify $\mathbb{C}$ with $\mathbb{R}^{2}$ for ease of exposition.
Note that we have $n^{+}(\Lambda,[0,R^{\prime}]^{2})\leqslant
n^{+}(\Omega_{\gamma},[0,R^{\prime}+2R]^{2})$, for all $R^{\prime}>0$ and
$\Lambda\in\mathcal{L}$, by the uniform closeness assumption, and so
$\mathcal{D}^{+}(\mathcal{L})=\limsup_{R^{\prime}\to\infty}\sup_{\Lambda\in\mathcal{L}}\frac{n^{+}(\Lambda,[0,R^{\prime}]^{2})}{R^{\prime
2}}\leqslant\limsup_{R^{\prime}\to\infty}\frac{n^{+}(\Omega_{\gamma},[0,R^{\prime}+2R]^{2})}{(R^{\prime}+2R)^{2}}\left(1+\frac{2R}{R^{\prime}}\right)^{2}=\gamma^{-2}.$
∎
###### Proof of Lemma 12.
Note that the terms $z-\lambda_{0,0}$ and $z$ in the defining expressions of
$g_{\Lambda}$ and $\widetilde{g}_{\Lambda}$ cancel owing to $\lambda_{0,0}=0$,
so the interpolation property (a) follows immediately. We may hence proceed to
establishing statement (b). To this end, we begin by observing that the
assumptions (i), (ii), and (iii) remain valid and the conclusion of the lemma
unchanged if we replace $\rho$ by $\rho\wedge 1$ and $R$ by $R\vee
1\vee(3\gamma)$, so we may assume w.l.o.g. that $\rho\leqslant 1$ and
$R\geqslant 1\vee(3\gamma)$. Now, set
$\mathcal{I}^{\prime}\vcentcolon=\mathcal{I}\setminus\\{(0,0)\\}$ and write
$\widetilde{g}_{\Lambda}(z)e^{-\frac{\pi}{2}\gamma^{-2}|z|^{2}}=\frac{\sigma_{\gamma}(z)e^{-\frac{\pi}{2}\gamma^{-2}|z|^{2}}}{d(z,\Omega_{\gamma})}\,\frac{d(z,\Lambda)}{z}\,h(z),$
(43)
where $d$ is the Euclidean distance from a point to a subset of $\mathbb{C}$
and
$h(z)=\frac{d(z,\Omega_{\gamma})}{d(z,\Lambda)}\prod_{(m,n)\in\mathcal{I}_{s}}\exp\left(\frac{z}{\omega_{m,n}}-\frac{z}{\lambda_{m,n}}\right)\prod_{(m,n)\in\mathcal{I}^{\prime}}\frac{(1-z/\lambda_{m,n})\exp(z/\lambda_{m,n})}{(1-z/\omega_{m,n})\exp(z/\omega_{m,n})}.$
Note that $|{d(z,\Lambda)}/{z}|\leqslant 1$ owing to $0\in\Lambda$, and by
Lemma 11 we have
$|\sigma_{\gamma}(z)|e^{-\frac{\pi}{2}\gamma^{-2}|z|^{2}}/d(z,\Omega_{\gamma})\lesssim_{\,\gamma}\\!1$,
so in order to complete the proof it suffices to establish
$|h(z)|\lesssim_{\,s,\theta,\gamma,R}\rho^{-1}e^{c|z|\log{|z|}},\quad\forall
z\in\mathbb{C},$ (44)
for some $c=c(s,\theta,\gamma,R)>0$. This will be effected by bounding various
auxiliary quantities associated with $h$. To this end, we begin by bounding
$h_{\mathrm{aux}}(z)\vcentcolon=\prod_{(m,n)\in
A(z)}\frac{(1-z/\lambda_{m,n})\exp(z/\lambda_{m,n})}{(1-z/\omega_{m,n})\exp(z/\omega_{m,n})},$
from above, where $A(z)\subset\mathcal{I}^{\prime}$ is given by
$A(z)=\left\\{(m,n)\in\mathcal{I}^{\prime}:|\lambda_{m,n}|>2|z|\vee
6R\right\\}.$
To this end, we first establish the following basic bounds valid for all
$z\in\mathbb{C}$ and $(m,n)\in A(z)$:
1. (A1)
$|\omega_{m,n}|>5R$,
${|\omega_{m,n}|/|\lambda_{m,n}|}\in\left({5/6,5/4}\right)$,
2. (A2)
$|z/\omega_{m,n}|<\frac{4}{5}\mathds{1}_{\\{|z|\leqslant
4R\\}}+\frac{4}{7}\mathds{1}_{\\{|z|>4R\\}}$, and
3. (A3)
$|z/\lambda_{m,n}|<1/2$.
The inequality in (A1) follows from
$|\omega_{m,n}|\geqslant|\lambda_{m,n}|-R>6R-R=5R$, and the upper and lower
bounds on $|\omega_{m,n}|/|\lambda_{m,n}|$ are due to
$|\omega_{m,n}|\geqslant|\lambda_{m,n}|-R>\frac{5}{6}|\lambda_{m,n}|$ and
$|\lambda_{m,n}|\geqslant|\omega_{m,n}|-R>\frac{4}{5}|\omega_{m,n}|$. To show
(A2), consider the cases $|z|\leqslant 4R$ and $|z|>4R$ separately. If
$|z|\leqslant 4R$, then $|z/\omega_{m,n}|<4R/5R=4/5$, whereas if $|z|>4R$,
then $|\omega_{m,n}|\geqslant|\lambda_{m,n}|-R>2|z|-R>\frac{7}{4}|z|$ and so
$|z/\omega_{m,n}|<4/7$. Finally, (A3) follows directly by the definition of
$A(z)$.
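Bounds (A1)–(A3) can also be verified by random sampling. The following Python sketch (an informal check, not part of the proof) draws $\lambda_{m,n}$ with $|\lambda_{m,n}|>2|z|\vee 6R$ and $\omega_{m,n}$ within distance $R$ of $\lambda_{m,n}$, and tests all three bounds; the value $R=1$ and the sampling ranges are arbitrary choices:

```python
import cmath, math, random

def satisfies_A1_A2_A3(z, lam, omega, R):
    """Check the three basic bounds (A1)-(A3) for (m,n) in A(z)."""
    a1 = abs(omega) > 5 * R and 5 / 6 < abs(omega) / abs(lam) < 5 / 4
    limit = 4 / 5 if abs(z) <= 4 * R else 4 / 7
    a2 = abs(z / omega) < limit
    a3 = abs(z / lam) < 1 / 2
    return a1 and a2 and a3

random.seed(0)
R = 1.0
for _ in range(1000):
    z = complex(random.uniform(-10, 10), random.uniform(-10, 10))
    # |lam| > 2|z| v 6R, and omega within distance R of lam
    radius = max(2 * abs(z), 6 * R) + random.uniform(1e-3, 20)
    lam = radius * cmath.exp(1j * random.uniform(0, 2 * math.pi))
    omega = lam + (random.uniform(0, R)
                   * cmath.exp(1j * random.uniform(0, 2 * math.pi)))
    assert satisfies_A1_A2_A3(z, lam, omega, R)
```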
Now, using (A2), we have
$\displaystyle|h_{\mathrm{aux}}(z)|$ $\displaystyle\leqslant\prod_{(m,n)\in
A(z)}\frac{|1-z/\lambda_{m,n}||\exp(z/\lambda_{m,n})|}{|1-z/\omega_{m,n}||\exp(z/\omega_{m,n})|}$
$\displaystyle\leqslant\Bigg{(}\mathds{1}_{\\{|z|>4R\\}}\;+\mathds{1}_{\\{|z|\leqslant
4R\\}}\prod_{\begin{subarray}{c}(m,n)\in A(z)\\\
|\omega_{m,n}|\leqslant\frac{7}{4}|z|\leqslant
7R\end{subarray}}\frac{\left(1+\frac{1}{2}\right)e^{\frac{1}{2}}}{\left(1-\frac{4}{5}\right)e^{-\frac{4}{5}}}\Bigg{)}\prod_{\begin{subarray}{c}(m,n)\in
A(z)\\\
|\omega_{m,n}|>\frac{7}{4}|z|\end{subarray}}\frac{|1-z/\lambda_{m,n}||\exp(z/\lambda_{m,n})|}{|1-z/\omega_{m,n}||\exp(z/\omega_{m,n})|}$
$\displaystyle\lesssim_{\,\gamma,R}\Bigg{|}\prod_{(m,n)\in\widetilde{A}(z)}\frac{(1-z/\lambda_{m,n})\exp(z/\lambda_{m,n})}{(1-z/\omega_{m,n})\exp(z/\omega_{m,n})}\Bigg{|},\quad\forall
z\in\mathbb{C},$ (45)
where we set
$\widetilde{A}(z)=\\{(m,n)\in
A(z):|\omega_{m,n}|>{\textstyle\frac{7}{4}}|z|\\}.$
Next, for $z\in\mathbb{C}$ and $(m,n)\in A(z)$, it follows by (A2) and (A3) that
$1-{z/\omega_{m,n}}$ and $1-{z/\lambda_{m,n}}$ lie in the domain
$\mathbb{C}\setminus\mathbb{R}_{\leqslant 0}$ of the complex logarithm, which
we denote by $\mathrm{Log}$. We can thus write
$\displaystyle\prod_{(m,n)\in\widetilde{A}(z)}\frac{(1-z/\lambda_{m,n})\exp(z/\lambda_{m,n})}{(1-z/\omega_{m,n})\exp(z/\omega_{m,n})}$
$\displaystyle=\,$
$\displaystyle\prod_{(m,n)\in\widetilde{A}(z)}\exp{\left[\mathrm{Log}\left(1-\frac{z}{\lambda_{m,n}}\right)+\frac{z}{\lambda_{m,n}}-\mathrm{Log}\left(1-\frac{z}{\omega_{m,n}}\right)-\frac{z}{\omega_{m,n}}\right]}$
$\displaystyle=\,$
$\displaystyle\prod_{(m,n)\in\widetilde{A}(z)}\exp{\left[-\sum_{k=2}^{\infty}\frac{1}{k}\left(\frac{z}{\lambda_{m,n}}\right)^{k}+\sum_{k=2}^{\infty}\frac{1}{k}\left(\frac{z}{\omega_{m,n}}\right)^{k}\right]}$
$\displaystyle=\,$
$\displaystyle\exp{\Bigg{[}\sum_{(m,n)\in\widetilde{A}(z)}\sum_{k=2}^{\infty}\frac{z^{k}}{k}\left(\frac{1}{\omega_{m,n}^{k}}-\frac{1}{\lambda_{m,n}^{k}}\right)\Bigg{]}},$
which together with (45) yields
$|h_{\mathrm{aux}}(z)|\lesssim_{\,\gamma,R}\exp\Bigg{[}\sum_{(m,n)\in\widetilde{A}(z)}\sum_{k=2}^{\infty}\frac{|z|^{k}}{k}\left|\frac{1}{\omega_{m,n}^{k}}-\frac{1}{\lambda_{m,n}^{k}}\right|\Bigg{]},\quad\forall
z\in\mathbb{C}.$ (46)
Using (A1), we further have
$\displaystyle\left|\frac{1}{\omega_{m,n}^{k}}-\frac{1}{\lambda_{m,n}^{k}}\right|$
$\displaystyle=\frac{\left|\lambda_{m,n}-\omega_{m,n}\right|\sum_{\ell=0}^{k-1}|\lambda_{m,n}|^{k-1-\ell}|\omega_{m,n}|^{\ell}}{\left|\omega_{m,n}\right|^{k}\left|\lambda_{m,n}\right|^{k}}$
$\displaystyle\leqslant\frac{R\cdot|\omega_{m,n}|^{k-1}\sum_{\ell=0}^{k-1}\left(\frac{6}{5}\right)^{\ell}}{\left|\omega_{m,n}\right|^{2k}\left(\frac{4}{5}\right)^{k}}$
$\displaystyle\leqslant 5R\cdot{1.5}^{k}\left|\omega_{m,n}\right|^{-(k+1)},$
(47)
for all $z\in\mathbb{C}$, $(m,n)\in A(z)$, and all $k\in\mathbb{N}$. Next,
defining $c_{\mathrm{aux}}(z,R)\vcentcolon=\frac{7|z|}{4}\vee 5R$, (A1) and
(A2) imply that
$\widetilde{A}(z)\subset\\{(m,n)\in\mathbb{Z}^{2}:|\omega_{m,n}|>c_{\mathrm{aux}}(z,R)\\}$,
for all $z\in\mathbb{C}$, and so (46) and (47) together yield
$\displaystyle|h_{\mathrm{aux}}(z)|$
$\displaystyle\leqslant\exp\Bigg{[}\sum_{\begin{subarray}{c}(m,n)\in\widetilde{A}(z)\end{subarray}}\sum_{k=2}^{\infty}{\frac{5R|z|^{k}}{k}}\cdot
1.5^{k}|\omega_{m,n}|^{-(k+1)}\Bigg{]}$
$\displaystyle\leqslant\exp\Bigg{[}\sum_{\begin{subarray}{c}(m,n)\in\mathbb{Z}^{2}\\\
|\omega_{m,n}|>c_{\mathrm{aux}}(z,R)\end{subarray}}\quad\sum_{k=2}^{\infty}{\frac{5R|z|^{k}}{k}}\cdot
1.5^{k}|\omega_{m,n}|^{-(k+1)}\Bigg{]}$
$\displaystyle=\exp\Bigg{[}\sum_{k=2}^{\infty}{\frac{5R|z|^{k}}{k}}\cdot
1.5^{k}\cdot\sum_{\begin{subarray}{c}(m,n)\in\mathbb{Z}^{2}\\\
|\omega_{m,n}|>c_{\mathrm{aux}}(z,R)\end{subarray}}|\omega_{m,n}|^{-(k+1)}\Bigg{]}.$
(48)
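Inequality (47) can likewise be spot-checked by random sampling. The following Python sketch (an informal check, not part of the proof) draws pairs with $|\lambda_{m,n}|>6R$ and $|\lambda_{m,n}-\omega_{m,n}|\leqslant R$ and tests the bound for several $k$; $R=1$ and the sampling ranges are arbitrary choices:

```python
import cmath, math, random

random.seed(1)
R = 1.0

def bound_47_holds(lam, omega, k):
    """|omega^{-k} - lam^{-k}| <= 5R * 1.5^k * |omega|^{-(k+1)}."""
    lhs = abs(1 / omega ** k - 1 / lam ** k)
    rhs = 5 * R * 1.5 ** k * abs(omega) ** -(k + 1)
    return lhs <= rhs

for _ in range(500):
    lam = ((6 * R + random.uniform(1e-3, 30))
           * cmath.exp(1j * random.uniform(0, 2 * math.pi)))
    omega = lam + (random.uniform(0, R)
                   * cmath.exp(1j * random.uniform(0, 2 * math.pi)))
    for k in range(2, 12):
        assert bound_47_holds(lam, omega, k)
```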
Recall that $R\geqslant 3\gamma$, and so
$c_{\mathrm{aux}}(z,R)=\frac{7|z|}{4}\vee
5R\geqslant\frac{7|z|}{4}\vee\frac{20\gamma}{\sqrt{2}}$ implies
$c_{\mathrm{aux}}(z,R)-\gamma/\sqrt{2}\geqslant
0.95\,c_{\mathrm{aux}}(z,R)\geqslant 1.6\,|z|.$ (49)
Now, write $K_{\gamma}$ for the square of side length $\gamma$ centered at
$(0,0)$ and use $R\geqslant 3\gamma$ and (49) to obtain the following bound
$\displaystyle\sum_{|\omega_{m,n}|>c_{\mathrm{aux}}(z,R)}|\omega_{m,n}|^{-(k+1)}$
$\displaystyle\leqslant\sum_{|\omega_{m,n}|>c_{\mathrm{aux}}(z,R)}\left(\frac{|\omega_{m,n}|+\frac{\gamma}{\sqrt{2}}}{|\omega_{m,n}|}\right)^{k+1}\int_{(m\gamma,n\gamma)+K_{\gamma}}|w|^{-(k+1)}\left|\mathrm{d}w\right|$
(50)
$\displaystyle\leqslant\left(1+\frac{\gamma}{5R\sqrt{2}}\right)^{k+1}\int_{|w|\geqslant
c_{\mathrm{aux}}(z,R)-\frac{\gamma}{\sqrt{2}}}|w|^{-(k+1)}\left|\mathrm{d}w\right|$
$\displaystyle\leqslant\left(1+\frac{1}{15\sqrt{2}}\right)^{k+1}\frac{2\pi}{k-1}\left(c_{\mathrm{aux}}(z,R)-\gamma/\sqrt{2}\right)^{1-k}$
$\displaystyle\leqslant 1.05^{k+1}\cdot 2\pi\left(1.6|z|\right)^{1-k}$
$\displaystyle<11\cdot 0.66^{k}|z|^{1-k},$ (51)
for all $z\in\mathbb{C}\setminus\\{0\\}$ and $k\geqslant 2$, where we used
$1+\frac{1}{15\sqrt{2}}<1.05$. We now use (51) in (48) to obtain
$\displaystyle|h_{\mathrm{aux}}(z)|$
$\displaystyle\lesssim_{\,\gamma,R}\exp\Bigg{[}\sum_{k=2}^{\infty}\frac{5R|z|^{k}}{k}\cdot
1.5^{k}\cdot 11\cdot 0.66^{k}|z|^{1-k}\Bigg{]}$
$\displaystyle\leqslant\exp{\Bigg{[}R\,|z|\cdot\frac{55}{2}\sum_{k=2}^{\infty}0.99^{k}\Bigg{]}}$
$\displaystyle\leqslant e^{2700R\,|z|},\quad\forall z\in\mathbb{C}.$ (52)
We are now ready to bound $h$, and we do so by treating the cases $|z|>3R$ and
$|z|\leqslant 3R$ separately.
Case $|z|>3R\,$: We analyze $h(z)$ as a product
$h(z)=\prod_{k=1}^{5}h_{k}(z)$, where
$\displaystyle h_{1}(z)$
$\displaystyle=\exp\Bigg{[}z\sum_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\setminus\mathcal{I}_{s}\\\
|\lambda_{m,n}|\leqslant
2|z|\end{subarray}}\left(\frac{1}{\lambda_{m,n}}-\frac{1}{\omega_{m,n}}\right)\Bigg{]},$
$\displaystyle h_{2}(z)$
$\displaystyle=\frac{d(z,\Omega_{\gamma})}{d(z,\Lambda)}\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|\leqslant
2R\end{subarray}}\frac{1-z/\lambda_{m,n}}{1-z/\omega_{m,n}},$ $\displaystyle
h_{3}(z)$
$\displaystyle=\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|>2R\\\ |\omega_{m,n}|\leqslant
2R\end{subarray}}\frac{1-z/\lambda_{m,n}}{1-z/\omega_{m,n}},$ $\displaystyle
h_{4}(z)$
$\displaystyle=\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|>2R\\\ |\omega_{m,n}|>2R\\\ |\lambda_{m,n}|\leqslant
2|z|\end{subarray}}\frac{1-z/\lambda_{m,n}}{1-z/\omega_{m,n}},\quad\text{and}$
$\displaystyle h_{5}(z)$
$\displaystyle=\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}|>2|z|\end{subarray}}\frac{(1-z/\lambda_{m,n})\exp(z/\lambda_{m,n})}{(1-z/\omega_{m,n})\exp(z/\omega_{m,n})},$
and bound the functions $h_{j}$ in order.
Bounding $h_{1}$: Note that $|\lambda_{m,n}|>3R$ implies $|\omega_{m,n}|>2R$,
and hence
$|\lambda_{m,n}|\geqslant|\omega_{m,n}|-R>\frac{1}{2}|\omega_{m,n}|$. We thus
have the following bound for
$(m,n)\in\mathcal{I}^{\prime}\setminus\mathcal{I}_{s}$:
$\displaystyle\left|\frac{1}{\lambda_{m,n}}-\frac{1}{\omega_{m,n}}\right|$
$\displaystyle\leqslant\mathds{1}_{\\{|\lambda_{m,n}|\leqslant
3R\\}}(2s^{-1}+\gamma^{-1})+\mathds{1}_{\\{|\lambda_{m,n}|>3R\\}}\frac{|\lambda_{m,n}-\omega_{m,n}|}{|\omega_{m,n}||\lambda_{m,n}|}$
$\displaystyle\leqslant\mathds{1}_{\\{|\lambda_{m,n}|\leqslant
3R\\}}(2s^{-1}+\gamma^{-1})+2R|\omega_{m,n}|^{-2},$
and therefore, as $\\#\\{(m,n)\in\mathcal{I}^{\prime}:|\lambda_{m,n}|\leqslant
3R\\}\leqslant 9\theta R^{2}$,
$\displaystyle|h_{1}(z)|$
$\displaystyle\leqslant\exp\Bigg{[}|z|\sum_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\setminus\mathcal{I}_{s}\\\
|\lambda_{m,n}|\leqslant
2|z|\end{subarray}}\left|\frac{1}{\lambda_{m,n}}-\frac{1}{\omega_{m,n}}\right|\Bigg{]}$
$\displaystyle\leqslant\exp\Bigg{[}9\theta
R^{2}(2s^{-1}+\gamma^{-1})|z|+2R|z|\cdot\sum_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\omega_{m,n}|\leqslant
2|z|+R\end{subarray}}\left|\omega_{m,n}\right|^{-2}\Bigg{]}.$ (53)
Recalling that $\log|z|>\log(3R)>\log(3)>0$, we can bound in a manner similar
to (51) to obtain
$\displaystyle\sum_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\omega_{m,n}|\leqslant 2|z|+R\end{subarray}}\left|\omega_{m,n}\right|^{-2}$
$\displaystyle\leqslant\left(\frac{\gamma+\frac{\gamma}{\sqrt{2}}}{\gamma}\right)^{2}\int_{\frac{\gamma}{\sqrt{2}}\leqslant|w|\leqslant
2|z|+R+\frac{\gamma}{\sqrt{2}}}|w|^{-2}\left|\mathrm{d}w\right|$
$\displaystyle\lesssim_{\,\gamma,R}\log|z|.$ (54)
Using this in (53) thus yields
$|h_{1}(z)|\leqslant e^{c_{1}|z|\log{|z|}},$ (55)
for some $c_{1}=c_{1}(s,\theta,\gamma,R)>0$ and all $z\in\mathbb{C}$ such that
$|z|>3R$.
Bounding $h_{2}$: We write
$|h_{2}|=h_{2}^{\mathrm{num}}/h_{2}^{\mathrm{den}}$, where
$h_{2}^{\mathrm{num}}(z)=\frac{1}{d(z,\Lambda)}\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|\leqslant
2R\end{subarray}}\frac{|\lambda_{m,n}-z|}{|\lambda_{m,n}|},\quad\text{and}\quad
h_{2}^{\mathrm{den}}(z)=\frac{1}{d(z,\Omega_{\gamma})}\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|\leqslant
2R\end{subarray}}\frac{|\omega_{m,n}-z|}{|\omega_{m,n}|}.$
In order to bound $h_{2}^{\mathrm{num}}$, we observe that one of the following
two circumstances occurs:
* –
The distance from $z$ to $\Lambda$ is minimized at $\lambda_{0,0}$, and so
$d(z,\Lambda)=|z|>3R>1$.
* –
The distance from $z$ to $\Lambda$ is minimized at a point $\lambda_{m,n}$,
where $(m,n)\in\mathcal{I}^{\prime}$, and so the term $d(z,\Lambda)$ cancels
with one of the factors $|\lambda_{m,n}-z|$ in the product over
$\\{(m,n)\in\mathcal{I}^{\prime}:|\lambda_{m,n}-z|\leqslant 2R\\}$.
These facts lead to the following bound
$h_{2}^{\mathrm{num}}(z)\leqslant\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|\leqslant 2R\end{subarray}}\frac{|\lambda_{m,n}-z|\vee
1}{|z|-|\lambda_{m,n}-z|}\leqslant 2^{16\theta R^{2}}.$ (56)
For $h_{2}^{\mathrm{den}}$, we similarly observe that either
$(\mathrm{Re}\,z,\mathrm{Im}\,z)\in[-\frac{\gamma}{2},\frac{\gamma}{2}]^{2}$,
or $d(z,\Omega_{\gamma})$ cancels with a factor $|\omega_{m,n}-z|$ in the
product over $\\{(m,n)\in\mathcal{I}^{\prime}:|\lambda_{m,n}-z|\leqslant
2R\\}$. In either case the numerators of the terms remaining in the product
satisfy $|\omega_{m,n}-z|\geqslant\frac{\gamma}{2}$, and we thus have
$h_{2}^{\mathrm{den}}(z)\geqslant\left(\\!\frac{\gamma}{\sqrt{2}}\wedge{1}\\!\right)\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|\leqslant 2R\end{subarray}}\frac{(\gamma/2)\wedge
1}{|\omega_{m,n}-\lambda_{m,n}|+|\lambda_{m,n}-z|+|z|}\geqslant\left(\\!\frac{\gamma}{\sqrt{2}}\wedge
1\\!\right)\left(\frac{(\gamma/2)\wedge 1}{3R+|z|}\right)^{16\theta R^{2}}.$
(57)
The inequalities (56) and (57) together yield
$|h_{2}(z)|\lesssim_{\,\theta,\gamma,R}|z|^{16\theta
R^{2}}\lesssim_{\,\theta,R}e^{|z|}.$ (58)
Bounding $h_{3}$: Recall that $|\lambda_{m,n}|>\frac{s}{2}$ for all
$(m,n)\in\mathcal{I}^{\prime}\setminus\mathcal{I}_{s}$, and there is at most
one $(m^{\prime},n^{\prime})\in\mathcal{I}_{s}\setminus\\{(0,0)\\}$, and for
this $(m^{\prime},n^{\prime})$ we have
$|\lambda_{m^{\prime},n^{\prime}}|\geqslant\rho$, by assumption (i). We thus
get
$\displaystyle|h_{3}(z)|$
$\displaystyle=\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|>2R\\\ |\omega_{m,n}|\leqslant
2R\end{subarray}}\frac{|1-z/\lambda_{m,n}|}{|1-z/\omega_{m,n}|}\leqslant\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|>2R\\\ |\omega_{m,n}|\leqslant
2R\end{subarray}}\frac{|\omega_{m,n}|}{|\lambda_{m,n}|}\frac{|\lambda_{m,n}-z|}{|\lambda_{m,n}-z|-R}$
$\displaystyle\leqslant\rho^{-1}\prod_{|\omega_{m,n}|\leqslant
2R}\frac{2R}{(s/2)\wedge
1}\,\frac{2R}{2R-R}\leqslant\rho^{-1}\left(\frac{4R}{(s/2)\wedge
1}\right)^{16\gamma^{-2}R^{2}}$
$\displaystyle\lesssim_{\,s,\gamma,R}\rho^{-1}.$ (59)
Bounding $h_{4}$: We write
$|h_{4}|=h_{4}^{\mathrm{num}}/h_{4}^{\mathrm{den}}$, where
$h_{4}^{\mathrm{num}}(z)=\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|>2R\\\ |\omega_{m,n}|>2R\\\ |\lambda_{m,n}|\leqslant
2|z|\end{subarray}}\left|1-\frac{\omega_{m,n}-\lambda_{m,n}}{\omega_{m,n}-z}\right|,\quad\text{and}\quad
h_{4}^{\mathrm{den}}(z)=\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|>2R\\\ |\omega_{m,n}|>2R\\\ |\lambda_{m,n}|\leqslant
2|z|\end{subarray}}\left|1-\frac{\omega_{m,n}-\lambda_{m,n}}{\omega_{m,n}}\right|.$
Now, fix a $z\in\mathbb{C}$ with $|z|>3R$ and write
$z=z^{\prime}+\omega_{k,\ell}$, where $(k,\ell)\in\mathbb{Z}^{2}$ and
$(\mathrm{Re}\,z^{\prime},\mathrm{Im}\,z^{\prime})\in[-\frac{\gamma}{2},\frac{\gamma}{2}]^{2}$.
Then, as
$\\{(m,n)\in\mathcal{I}^{\prime}:|\lambda_{m,n}-z|\\!>\\!2R,|\lambda_{m,n}|\leqslant
2|z|\\}\subset\\{(m,n)\in\mathbb{Z}^{2}:|\omega_{m,n}-z|\\!>\\!R,|\omega_{m,n}-z|\leqslant
3|z|+R\\},$
and $\Omega_{\gamma}-\omega_{k,\ell}=\Omega_{\gamma}$, we have the following
bound
$\displaystyle|h_{4}^{\mathrm{num}}(z)|$
$\displaystyle\leqslant\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|>2R\\\ |\omega_{m,n}|>2R\\\ |\lambda_{m,n}|\leqslant
2|z|\end{subarray}}\left(1+\frac{|\omega_{m,n}-\lambda_{m,n}|}{|\omega_{m,n}-z|}\right)\leqslant\prod_{\begin{subarray}{c}(m,n)\in\mathbb{Z}^{2}\\\
R<|\omega_{m,n}-z|\leqslant
3|z|+R\end{subarray}}\left(1+\frac{R}{|\omega_{m,n}-z|}\right)$
$\displaystyle=\prod_{\begin{subarray}{c}(m,n)\in\mathbb{Z}^{2}\\\
R<|\omega_{m,n}-\omega_{k,\ell}-z^{\prime}|\leqslant
3|z|+R\end{subarray}}\left(1+\frac{R}{|\omega_{m,n}-\omega_{k,\ell}-z^{\prime}|}\right)$
$\displaystyle\leqslant\exp\Bigg{[}\sum_{\begin{subarray}{c}(m,n)\in\mathbb{Z}^{2}\\\
R<|\omega_{m,n}-z^{\prime}|\leqslant
3|z|+R\end{subarray}}\frac{R}{|\omega_{m,n}-z^{\prime}|}\Bigg{]},$ (60)
where in the last inequality we used $\log(1+x)\leqslant x$ for $x\geqslant
0$. Now, $|\omega_{m,n}-z^{\prime}|>R$ and $R\geqslant 3\gamma$ imply
$|\omega_{m,n}|>R-\frac{\gamma}{\sqrt{2}}>\sqrt{2}\gamma$ and so
$|\omega_{m,n}-z^{\prime}|\geqslant|\omega_{m,n}|-\frac{\gamma}{\sqrt{2}}>\frac{1}{2}|\omega_{m,n}|$.
We therefore have
$\displaystyle\sum_{R<|\omega_{m,n}-z^{\prime}|\leqslant
3|z|+R}\frac{1}{|\omega_{m,n}-z^{\prime}|}$
$\displaystyle\leqslant\sum_{R-\frac{\gamma}{\sqrt{2}}<|\omega_{m,n}|\leqslant
3|z|+R+\frac{\gamma}{\sqrt{2}}}\frac{2}{|\omega_{m,n}|}$
$\displaystyle\leqslant\sum_{R-\frac{\gamma}{\sqrt{2}}<|\omega_{m,n}|\leqslant
3|z|+R+\frac{\gamma}{\sqrt{2}}}2\frac{|\omega_{m,n}|+\frac{\gamma}{\sqrt{2}}}{|\omega_{m,n}|}\int_{(m\gamma,n\gamma)+K_{\gamma}}|w|^{-1}|\mathrm{d}w|$
$\displaystyle\lesssim_{\,\gamma,R}\frac{R}{R-\frac{\gamma}{\sqrt{2}}}\int_{R-\sqrt{2}\gamma<|w|\leqslant
3|z|+R+\sqrt{2}\gamma}|w|^{-1}|\mathrm{d}w|$
$\displaystyle\lesssim_{\,\gamma,R}|z|.$ (61)
As $z$ was arbitrary, (61) holds for all $z\in\mathbb{C}$ with $|z|>3R$. Using
(61) in (60) thus yields
$|h_{4}^{\mathrm{num}}(z)|\leqslant e^{c_{4}^{\mathrm{num}}|z|}$ (62)
for some $c_{4}^{\mathrm{num}}=c_{4}^{\mathrm{num}}(\gamma,R)>0$ and all
$z\in\mathbb{C}$ with $|z|>3R$.
The quantity $h_{4}^{\mathrm{den}}$ is bounded from below in a similar
fashion:
$\displaystyle|h_{4}^{\mathrm{den}}(z)|$
$\displaystyle=\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}-z|>2R\\\ |\omega_{m,n}|>2R\\\ |\lambda_{m,n}|\leqslant
2|z|\end{subarray}}\left|1-\frac{\omega_{m,n}-\lambda_{m,n}}{\omega_{m,n}}\right|\geqslant\prod_{\begin{subarray}{c}(m,n)\in\mathbb{Z}^{2}\\\
2R<|\omega_{m,n}|\leqslant
2|z|+R\end{subarray}}\left(1-\frac{R}{|\omega_{m,n}|}\right),$
$\displaystyle\geqslant\exp\Bigg{[}\sum_{2R<|\omega_{m,n}|\leqslant
2|z|+R}-\frac{2R}{|\omega_{m,n}|}\Bigg{]},$ (63)
where in the last inequality we used $\log(1+x)\geqslant 2x$ for
$x\in\left[-\frac{1}{2},0\right]$. Another integral bound yields
$\sum_{2R<|\omega_{m,n}|\leqslant
2|z|+R}\frac{1}{|\omega_{m,n}|}\lesssim_{\,\gamma,R}|z|,$
which together with (63) gives
$|h_{4}^{\mathrm{den}}(z)|\geqslant e^{-c_{4}^{\mathrm{den}}|z|},$ (64)
for some $c_{4}^{\mathrm{den}}=c_{4}^{\mathrm{den}}(\gamma,R)>0$ and all
$z\in\mathbb{C}$ with $|z|>3R$. Combining (62) and (64) thus yields
$|h_{4}(z)|\leqslant e^{(c_{4}^{\mathrm{num}}+c_{4}^{\mathrm{den}})|z|},$ (65)
for all $z\in\mathbb{C}$ with $|z|>3R$.
Bounding $h_{5}$: Note that $|\lambda_{m,n}|>2|z|$ implies
$|\lambda_{m,n}|>2|z|>6R$, and so
$\\{(m,n)\in\mathcal{I}^{\prime}:|\lambda_{m,n}|>2|z|\\}=A(z)$. We thus have
$h_{5}=h_{\mathrm{aux}}$, which satisfies (52).
Bounding $h$: We combine (55), (58), (59), (65), and (52) to obtain
$|h(z)|\lesssim_{\,s,\theta,\gamma,R}\rho^{-1}e^{(1+c_{4}^{\mathrm{num}}+c_{4}^{\mathrm{den}}+1100R)|z|+c_{1}|z|\log|z|}\leqslant\rho^{-1}e^{d|z|\log|z|},$
(66)
for some $d=d(s,\theta,\gamma,R)>0$ and all $z\in\mathbb{C}$ with $|z|>3R$.
This completes the derivation of the desired upper bound on $h$ in the case
$|z|>3R$.
Case $|z|\leqslant 3R$: We write $h(z)=h_{6}(z)h_{7}(z)$, where
$h_{6}(z)=\frac{d(z,\Omega_{\gamma})}{d(z,\Lambda)}\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}|\leqslant
6R\end{subarray}}\frac{1-z/\lambda_{m,n}}{1-z/\omega_{m,n}},\quad\text{and}\quad
h_{7}(z)=\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}|>6R\end{subarray}}\frac{(1-z/\lambda_{m,n})\exp(z/\lambda_{m,n})}{(1-z/\omega_{m,n})\exp(z/\omega_{m,n})}.$
Note that $|\lambda_{m,n}|>6R$ and $|z|\leqslant 3R$ together imply
$|\lambda_{m,n}|>2|z|$, and so $h_{7}=h_{\mathrm{aux}}$. Hence it only remains
to bound $h_{6}$. To this end, write
$|h_{6}|=h_{6}^{\mathrm{num}}/h_{6}^{\mathrm{den}}$, where
$h_{6}^{\mathrm{num}}(z)=\frac{z}{d(z,\Lambda)}\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}|\leqslant
6R\end{subarray}}\frac{\lambda_{m,n}-z}{\lambda_{m,n}}\quad\text{and}\quad
h_{6}^{\mathrm{den}}(z)=\frac{z}{d(z,\Omega_{\gamma})}\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}|\leqslant
6R\end{subarray}}\frac{\omega_{m,n}-z}{\omega_{m,n}}.$
Now, the term $d(z,\Lambda)$ cancels with either $z$ or one of the factors
$\lambda_{m,n}-z$, and similarly, $d(z,\Omega_{\gamma})$ cancels with either
$z$ or one of the factors $\omega_{m,n}-z$. In either case the numerators of
the terms remaining in the product satisfy
$|\omega_{m,n}-z|\geqslant\frac{\gamma}{2}$. We again recall that
$|\lambda_{m,n}|>\frac{s}{2}$ for all
$(m,n)\in\mathcal{I}^{\prime}\setminus\mathcal{I}_{s}$, and there is at most
one $(m^{\prime},n^{\prime})\in\mathcal{I}_{s}\setminus\\{(0,0)\\}$, and for
this $(m^{\prime},n^{\prime})$ we have
$|\lambda_{m^{\prime},n^{\prime}}|\geqslant\rho$. These observations together
yield the following bounds:
$\displaystyle|h_{6}^{\mathrm{num}}(z)|$ $\displaystyle\leqslant
3R\rho^{-1}\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}|\leqslant 6R\end{subarray}}\frac{(|\lambda_{m,n}|+|z|)\vee
1}{(s/2)\wedge 1}\leqslant 3R\rho^{-1}\left(\frac{(9R)\vee 1}{(s/2)\wedge
1}\right)^{144\theta R^{2}}$ $\displaystyle|h_{6}^{\mathrm{den}}(z)|$
$\displaystyle\geqslant\prod_{\begin{subarray}{c}(m,n)\in\mathcal{I}^{\prime}\\\
|\lambda_{m,n}|\leqslant 6R\end{subarray}}\frac{(\gamma/2)\wedge
1}{|\omega_{m,n}-\lambda_{m,n}|+|\lambda_{m,n}|}\geqslant\left(\frac{(\gamma/2)\wedge
1}{7R}\right)^{144\theta R^{2}},\quad\text{for }z\in\mathbb{C}\text{ s.t.
}|z|\leqslant 3R.$
Therefore $|h_{6}(z)|\lesssim_{\,s,\theta,\gamma,R}\rho^{-1}$, for $z$ with
$|z|\leqslant 3R$, which together with (52) gives
$|h(z)|\lesssim_{\,s,\theta,\gamma,R}\rho^{-1}e^{1100R|z|},\quad\text{for
}z\in\mathbb{C}\text{ s.t. }|z|\leqslant 3R.$ (67)
The inequalities (66) and (67) can now be combined to yield the bound (44),
concluding the proof.
∎
###### Proof of Lemma 13.
Consider first the case when $y\in\mathcal{S}(\mathbb{R})$ is a Schwartz
function. We then have $y\in L^{2}(\mathbb{R})$, and thus
$\displaystyle\left\langle
y,\pi(\lambda)\varphi\right\rangle_{M^{q}(\mathbb{R})\times
M^{p}(\mathbb{R})}$
$\displaystyle=\left\langle\mathcal{V}_{\varphi}y,\mathcal{V}_{\varphi}\pi(\lambda)\varphi\right\rangle_{L^{q}(\mathbb{R}^{2})\times
L^{p}(\mathbb{R}^{2})}$
$\displaystyle=\iint_{\mathbb{R}^{2}}(\mathcal{V}_{\varphi}y)(s,\xi)\;\overline{(\mathcal{V}_{\varphi}\pi(\lambda)\varphi)(s,\xi)}\;\mathrm{d}s\mathrm{d}\xi$
$\displaystyle=\int_{\mathbb{R}}y(t)\overline{\left(\pi(\lambda)\varphi\right)(t)}\;\mathrm{d}t$
(68) $\displaystyle=(\mathcal{V}_{\varphi}y)(\lambda)$ $\displaystyle=e^{-\pi
i\tau\nu}e^{-\pi|\lambda|^{2}/2}(\textfrak{B}\hskip 1.0pty)(\bar{\lambda}),$
(69)
where (68) follows from [7, Thm. 3.2.1], and (69) is [7, Prop. 3.4.1].
Now take an arbitrary $y\in M^{q}(\mathbb{R})$. As $\mathcal{S}(\mathbb{R})$
is dense in $M^{q}(\mathbb{R})$ for $q\in[1,\infty)$ (see [7, Prop. 11.3.4]),
we can take a sequence
$\\{y_{n}\\}_{n=1}^{\infty}\subset\mathcal{S}(\mathbb{R})$ such that $y_{n}\to
y$ in $M^{q}(\mathbb{R})$. The calculation above thus shows that
$\langle y_{n},\pi(\lambda)\varphi\rangle_{M^{q}(\mathbb{R})\times
M^{p}(\mathbb{R})}=e^{-\pi i\tau\nu}e^{-\pi|\lambda|^{2}/2}(\textfrak{B}\hskip
1.0pty_{n})(\bar{\lambda}),\quad\forall n\in\mathbb{N}.$ (70)
Furthermore, as the dual pairing is continuous, we have
$\langle y_{n},\pi(\lambda)\varphi\rangle_{M^{q}(\mathbb{R})\times
M^{p}(\mathbb{R})}\to\langle
y,\pi(\lambda)\varphi\rangle_{M^{q}(\mathbb{R})\times
M^{p}(\mathbb{R})}\quad\text{as }n\to\infty.$
On the other hand, by the isometry property (18) we also have
$\|\textfrak{B}\hskip 1.0pty_{n}-\textfrak{B}\hskip
1.0pty\|_{\mathcal{F}^{q}(\mathbb{C})}=\|y_{n}-y\|_{M^{q}(\mathbb{R})}\to 0$
as $n\to\infty$. Thus, as the evaluation functional $F\mapsto
F(\overline{\lambda})$ is continuous on $\mathcal{F}^{q}(\mathbb{C})$ (see
[28, Lem. 2.32]), we obtain $(\textfrak{B}\hskip
1.0pty_{n})(\overline{\lambda})\to(\textfrak{B}\hskip
1.0pty)(\overline{\lambda})$, which together with (70) and (69) establishes
the claim of the lemma. ∎
###### Proof of Lemma 14.
(i) Suppose that $\mathcal{A}$ is bounded below. Then the operator
$\tilde{\mathcal{A}}:X\to\mathrm{Im}(\mathcal{A})$ given by
$\tilde{\mathcal{A}}(x)=\mathcal{A}(x)$, for $x\in X$, is a continuous map
between Banach spaces, and has a continuous inverse. In other words,
$\tilde{\mathcal{A}}$ is an isomorphism between Banach spaces. Thus
$\tilde{\mathcal{A}}^{*}:(\mathrm{Im}(\mathcal{A}))^{*}\to X^{*}$ is also an
isomorphism between Banach spaces, and so, by the inverse mapping theorem [31,
Cor. 2.12], so is
$(\tilde{\mathcal{A}}^{*})^{-1}:X^{*}\to(\mathrm{Im}(\mathcal{A}))^{*}$.
Consider now an arbitrary $f\in X^{*}$, and set
$h=(\tilde{\mathcal{A}}^{*})^{-1}f$. As $h$ is a continuous linear functional
on $\mathrm{Im}(\mathcal{A})\subset Y$, it follows by the Hahn-Banach theorem
[31, Thm. 3.6] that $h$ can be extended to a continuous linear functional
$h_{Y}$ defined on $Y$. Now, since
$h_{Y}\\!\mid_{\mathrm{Im}(\mathcal{A})}=h$, we have
$\displaystyle\langle\mathcal{A}^{*}h_{Y},x\rangle$ $\displaystyle=\langle
h_{Y},\underbrace{\mathcal{A}x}_{\in\mathrm{Im}(\mathcal{A})}\rangle=\langle
h,\mathcal{A}x\rangle=\langle h,\tilde{\mathcal{A}}x\rangle=\langle\tilde{\mathcal{A}}^{*}h,x\rangle=\langle
f,x\rangle\quad\text{for }x\in X,$
and thus, as $x\in X$ was arbitrary, we deduce that $\mathcal{A}^{*}h_{Y}=f$.
Finally, since $f$ was arbitrary, we have that $\mathcal{A}^{*}$ is
surjective.
(ii) Let $f$ be an arbitrary element of $X^{*}$ with $\|f\|=1$, and let $g\in
Y^{*}$ be such that $\mathcal{A}^{*}g=f$ and $a\|g\|\leqslant 1$. Note that
then $g\neq 0$, and so $g/\|g\|$ is a well-defined element of $Y^{*}$ of unit
norm. Therefore,
$\|\mathcal{A}x\|\geqslant\left|\left<\mathcal{A}x,\frac{g}{\|g\|}\right>\right|=\|g\|^{-1}\left|\langle
x,\mathcal{A}^{*}g\rangle\right|\geqslant a\left|\langle x,f\rangle\right|,$
for all $x\in X$. Taking the supremum of the right-hand side over $f\in X^{*}$
and using the fact that $\sup_{f\in X^{*},\|f\|=1}|\langle x,f\rangle|=\|x\|$
yields $\|\mathcal{A}x\|\geqslant a\|x\|$, as desired. ∎
###### Proof of Lemma 15.
For $m,n\in\mathbb{Z}$ write
$\textstyle
K_{m,n}=\left[\frac{s}{\sqrt{2}}\left(m-\frac{1}{2}\right),\frac{s}{\sqrt{2}}\left(m+\frac{1}{2}\right)\right)\times\left[\frac{s}{\sqrt{2}}\left(n-\frac{1}{2}\right),\frac{s}{\sqrt{2}}\left(n+\frac{1}{2}\right)\right),$
and, for $\lambda\in\Lambda$, let $(m_{\lambda},n_{\lambda})$ be the (unique)
element of $\mathbb{Z}^{2}$ such that $\lambda\in K_{m_{\lambda},n_{\lambda}}$. Note that, as
$\mathrm{sep}(\Lambda)=s$, every $K_{m,n}$ contains at most one element of
$\Lambda$. Next, define the functions
$A(z)=\frac{2}{s}\sum_{\lambda\in\Lambda}|\theta_{\lambda}|\mathds{1}_{K_{m_{\lambda},n_{\lambda}}}(z)\quad\text{and}\quad\tilde{f}(z)=\max_{|w|\leqslant
s}|f(z+w)|.$
We then have
$\sum_{\lambda\in\Lambda}|\theta_{\lambda}||f(z-\lambda)|\leqslant\sum_{\lambda\in\Lambda}|\theta_{\lambda}|\;\tilde{f}\left(z-{s(m_{\lambda}+in_{\lambda})}/{\sqrt{2}}\right)\leqslant\int_{\mathbb{C}}A(w)\tilde{f}(z-w)|\mathrm{d}w|=(A\ast\tilde{f})(z),$
for all $z\in\mathbb{C}$. Therefore, using [7, Prop. 11.1.3], we obtain
$\|A\ast\tilde{f}\|_{L^{q}(\mathbb{C})}\leqslant\|\tilde{f}\|_{L^{1}(\mathbb{C})}\|A\|_{L^{q}(\mathbb{C})}\lesssim
s^{-2/p}\|f\|_{W(L^{\infty},L^{1})}\|\theta\|_{\ell^{p}(\Lambda)},$
where the last inequality follows by computing the norm of $A$ explicitly. ∎
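The final step above rests on Young's convolution inequality, $\|A\ast\tilde{f}\|_{L^{q}}\leqslant\|\tilde{f}\|_{L^{1}}\|A\|_{L^{q}}$. A discrete analogue of this estimate can be sanity-checked numerically; the sketch below is illustrative only and plays no role in the proof.

```python
import numpy as np

# Discrete Young's inequality: ||a * b||_q <= ||a||_1 ||b||_q,
# where * denotes full discrete convolution.  Checked on random sequences.
rng = np.random.default_rng(3)
q = 2.0
for _ in range(200):
    a = rng.standard_normal(7)
    b = rng.standard_normal(9)
    conv = np.convolve(a, b)
    lhs = np.sum(np.abs(conv) ** q) ** (1 / q)
    rhs = np.sum(np.abs(a)) * np.sum(np.abs(b) ** q) ** (1 / q)
    assert lhs <= rhs + 1e-12
```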
###### Proof of Lemma 19.
Note that it suffices to prove the claim for $j=n-1$, as the general statement
then follows by induction. To this end, divide $K_{n}$ into four disjoint
squares of side length $\sqrt{2}\left(2^{n-1}+\frac{1}{2}\right)$. By the
pigeonhole principle, one of these squares must contain at least
$2^{2(n-1)}+1$ points of $K_{n}\cap Y$. Denote this square by $K^{\prime}$,
and let $K_{n-1}$ be the square which contains $K^{\prime}$ and satisfies
property (i) in the statement of the Lemma. Then $\\#(K_{n-1}\cap
Y)\geqslant\\#(K^{\prime}\cap Y)\geqslant 2^{2(n-1)}+1$, and so $K_{n-1}$
satisfies (ii), as desired. ∎
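The pigeonhole count used above — splitting $2^{2n}+1$ points among four squares leaves one square with at least $\lceil(2^{2n}+1)/4\rceil=2^{2(n-1)}+1$ of them — can be verified directly; the short check below is illustrative only.

```python
import math

# ceil((2^{2n} + 1) / 4) = 2^{2(n-1)} + 1, since 2^{2n} + 1 = 4 * 2^{2(n-1)} + 1.
for n in range(1, 16):
    assert math.ceil((2 ** (2 * n) + 1) / 4) == 2 ** (2 * (n - 1)) + 1
```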
# On the Local Linear Rate of Consensus on the Stiefel Manifold
Shixiang Chen1, Alfredo Garcia1, Mingyi Hong2 and Shahin Shahrampour1 1The Wm
Michael Barnes ’64 Department of Industrial and Systems Engineering, Texas A&M
University, College Station, TX 77843. Email addresses<EMAIL_ADDRESS>(S.
Chen<EMAIL_ADDRESS>(A. Garcia<EMAIL_ADDRESS>(S.
Shahrampour).2The Department of Electrical and Computer Engineering,
University of Minnesota, Minneapolis, MN 55455. Email address<EMAIL_ADDRESS>(M. Hong).
###### Abstract
We study the convergence properties of the Riemannian gradient method for
the consensus problem (for an undirected connected graph) over the Stiefel
manifold. The Stiefel manifold is a non-convex set and the standard notion of
averaging in the Euclidean space does not work for this problem. We propose
Distributed Riemannian Consensus on Stiefel Manifold (DRCS) and prove that it
enjoys a local linear convergence rate to global consensus. More importantly,
this local rate asymptotically scales with the second largest singular value
of the communication matrix, which is on par with the well-known rate in the
Euclidean space. To the best of our knowledge, this is the first work showing
the equality of the two rates. The main technical challenges include (i)
developing a Riemannian restricted secant inequality for convergence analysis,
and (ii) identifying the conditions (e.g., suitable step-size and
initialization) under which the algorithm always stays in the local region.
## I Introduction
Consensus and coordination have been a major topic of interest in the control
community for the last three decades. The consensus problem in the Euclidean
space is well-studied, but perhaps less well-known is consensus on the Stiefel
manifold $\mathrm{St}(d,r):=\\{x\in\mathbb{R}^{d\times r}:x^{\top}x=I_{r}\\}$,
which is a non-convex set. This problem has recently attracted significant
attention [1, 2, 3] due to its applications to synchronization in planetary
scale sensor networks[4], modeling of collective motion in flocks[5],
synchronization of quantum bits[6], and the Kuramoto models [7, 2]. We refer
the reader to [1, 2] for more applications of this framework.
In general, the optimization problem of consensus on a Riemannian manifold
$\mathcal{M}$ can be written as
$\displaystyle\min\phi(\mathbf{x}):=\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}a_{ij}\mbox{\rm
dist}^{2}(x_{i},x_{j})$ (I.1) $\displaystyle\mathrm{s.t.}\quad
x_{i}\in\mathcal{M},\ i=1,\ldots,N,$
where $\mbox{\rm dist}(\cdot,\cdot)$ is a distance function, $a_{ij}\geq 0$ is
a constant associated with the underlying undirected, connected graph, and
$\mathbf{x}^{\top}:=(x_{1}^{\top}\ x_{2}^{\top}\ \ldots\ x_{N}^{\top})$. The
consensus problem is also closely related to the center of mass problem on
$\mathcal{M}$[8]. To achieve consensus, one needs to solve the problem (I.1)
to obtain a global optimal point. The Riemannian gradient method (RGM)[9, 10]
is a natural choice. When $\mathcal{M}=\mathrm{St}(d,r)$, which is embedded in
the Euclidean space, it is more convenient to use the Euclidean distance for
both computation and analysis purposes. For example, if the distance function
in (I.1) is the geodesic distance, the Riemannian gradient of
$\phi(\mathbf{x})$ in (I.1) is the logarithm mapping, which does not have a
closed-form solution on $\mathrm{St}(d,r)$ for $1<r<d$, and thus, iterative
methods for computing the Stiefel logarithm were proposed in [11, 12]. Moreover,
the geodesic distance is not globally smooth.
In this paper, we discuss the convergence of RGM for solving the consensus
problem on the Stiefel manifold using the squared Frobenius norm distance.
This problem, which has been discussed in [13, 2], can be formulated as follows:
$\displaystyle\min\varphi^{t}(\mathbf{x}):=\frac{1}{4}\sum_{i=1}^{N}\sum_{j=1}^{N}W^{t}_{ij}\|x_{i}-x_{j}\|_{\text{F}}^{2}$
(C-St) $\displaystyle\mathrm{s.t.}\quad x_{i}\in\mathrm{St}(d,r),\
i=1,\ldots,N,$
where the superscript $t\geq 1$ is an integer used to denote the $t$-th power
of a doubly stochastic matrix $W$. Note that $t$ is introduced here to provide
flexibility for our algorithm design and analysis, and computing $W^{t}_{ij}$
basically corresponds to performing $t$ steps of communication on the tangent
space, on which we elaborate in Algorithm 1.
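As a quick numerical illustration of (C-St) and of the role of $t$ (not part of the paper's development), the sketch below evaluates $\varphi^{t}$ with a hypothetical 3-node doubly stochastic matrix $W$; the objective vanishes exactly at consensus configurations.

```python
import numpy as np

# A hypothetical symmetric, doubly stochastic communication matrix W
# for 3 nodes (any matrix satisfying Assumption 1 would do).
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])

def consensus_objective(xs, W, t=1):
    """phi^t(x) = (1/4) sum_{i,j} (W^t)_{ij} ||x_i - x_j||_F^2, as in (C-St)."""
    Wt = np.linalg.matrix_power(W, t)
    n = len(xs)
    return 0.25 * sum(Wt[i, j] * np.linalg.norm(xs[i] - xs[j], 'fro') ** 2
                      for i in range(n) for j in range(n))

rng = np.random.default_rng(1)
# Random points on St(4, 2) via the QR factorization of Gaussian matrices.
xs = [np.linalg.qr(rng.standard_normal((4, 2)))[0] for _ in range(3)]

assert consensus_objective([xs[0]] * 3, W, t=2) < 1e-12  # zero at consensus
assert consensus_objective(xs, W, t=1) >= 0.0            # nonnegative in general
```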
It is well-known that for a generic smooth optimization problem over a
Riemannian manifold, RGM globally converges to first-order critical points
with a sub-linear rate [9, 10]. In this paper, we focus on applying RGM to
(C-St), and we call the resulting algorithm Distributed Riemannian Consensus
on Stiefel Manifold (DRCS). We prove that for DRCS this sub-linear rate can be
improved. In particular, we provide the first analysis showing that a
discrete-time retraction-based RGM applied to problem (C-St) converges
Q-linearly111A sequence $\\{a_{k}\\}$ is said to converge Q-linearly to $a$ if
there exists $\rho\in(0,1)$ such that
$\lim_{k\rightarrow\infty}\frac{|a_{k+1}-a|}{|a_{k}-a|}=\rho$. in a local
region of the global optimal set. Furthermore, we show that the size of the
local region and the linear rate are both dependent on the connectivity of the
graph capturing the network structure. Our main technical contributions are as
follows:
1. 1.
We develop and draw upon three second-order approximation properties
(P1)-(P2)-(P3) in Lemmas 1, 2 and 3, which are crucial for linking the
Riemannian and Euclidean convergence analyses.
2. 2.
We focus on identifying a suitable step-size for DRCS, which can guarantee
global convergence and local convergence. This is proved by showing a new
descent lemma in Lemma 4.
3. 3.
We show that a surrogate of local strong convexity holds for problem
(C-St). It is called the Restricted Secant Inequality (RSI), derived in
Proposition 4. In the Euclidean space, RSI was proposed in [14] to study the
convergence rate of the gradient method. The benefit of RSI is that we do not
need to take second-order information into account, and that the linear
rate can be proved easily, as for Euclidean algorithms. Proposition 4 can be
thought of as a Riemannian version of the Euclidean RSI.
4. 4.
Let $\mathcal{X}^{*}$ denote the optimal solution set for the problem (C-St).
It is easy to see that the following holds:
$\mathcal{X}^{*}:=\\{\mathbf{x}\in\mathrm{St}(d,r)^{N}:x_{1}=x_{2}=\ldots=x_{N}\\}.$
(I.2)
After establishing the RSI, we prove the local Q-linear consensus rate of
$\mathrm{dist}(\mathbf{x}_{k},\mathcal{X}^{*})$ for DRCS, where
$\mathrm{dist}(\mathbf{x}_{k},\mathcal{X}^{*})$ is the Euclidean distance
between $\mathbf{x}_{k}$ and the consensus set $\mathcal{X}^{*}$. We show that
the convergence rate asymptotically scales with the second largest singular
value of $W$, which is the same as its counterpart in the Euclidean space. We
characterize two local regions for such convergence in Theorem 2, and for the
larger region we require multi-step consensus.
### I-A Related Literature
As general Riemannian manifolds are nonlinear and the problem (I.1) is
non-convex, consensus on a manifold is considered a more difficult problem
than that in the Euclidean space. The first-order critical points are not
always in $\mathcal{X}^{*}$. Consensus on Riemannian manifolds has been
studied in several papers. We can broadly divide their approaches into
intrinsic and extrinsic, which we describe next.
The intrinsic approach relies only on the intrinsic properties of the
manifold, such as geodesic distances, exponential and logarithm maps,
etc. For example, the discrete-time RGM for manifolds with bounded curvature
is studied in [15]. The work [16] studies the stochastic RGM and applies it
to solve the consensus problem on the manifold of symmetric positive definite
matrices. The authors of [16] show that the intrinsic approach outperforms
the extrinsic method, i.e., the gossip algorithm [17].
The extrinsic approach is based on a specific embedding of the manifold in
Euclidean space. In [13], RGM is also studied for solving the consensus
problem over the special orthogonal group $\mathrm{SO}(d)$ and the
Grassmannian. However, it is only shown that RGM converges to a critical
point. To achieve the global consensus, a synchronization algorithm on the
tangent space is presented in [13, Section 7]. But it requires communicating
an extra variable.
The main challenge of consensus on manifolds is that the optimization problem
is non-convex. Previous results show that the global consensus is graph
dependent, e.g., the global consensus is achievable on equally weighted
complete graph for $\mathrm{SO}(d)$ and Grassmannian [13]. In [15], it is also
shown that any first-order critical point is a global optimum for a tree
graph on a manifold with bounded curvature. For general connected undirected
graphs, the survey paper [18] summarizes three solutions to achieve almost
global consensus on the circle (i.e., $d=2$ and $r=1$): potential
reshaping[7], the gossip algorithm[19] and dynamic consensus[13]. However,
such procedures could degrade the convergence speed. For example, the gossip
algorithm could be arbitrarily slow and the dynamic consensus is only
asymptotically convergent.
Specific to the Stiefel manifold, most of the previous work on consensus
over $\mathrm{St}(d,r)$ concerns local convergence. For example, the results of
[15] show that, firstly, any critical point in the region
$\mathcal{S}:=\\{\mathbf{x}:\exists y\in\mathcal{M}\ \mathrm{s.t.}\
\max_{i}d_{g}(x_{i},y)<r^{*}\\}$ is a global optimal point, where
$d_{g}(\cdot,\cdot)$ is the geodesic distance and $r^{*}$ is an absolute
constant with respect to the manifold. Also, the region $\mathcal{S}$ is
convex222An open subset $s\subset\mathcal{M}$ is convex if it contains all
shortest paths between any two points of $s$. Secondly, RGM is shown to
achieve consensus locally. Specifically, if the initial point $\mathbf{x}_{0}$
satisfies
$\mathbf{x}_{0}\in\mathcal{S}_{\text{conv}}:=\\{\mathbf{x}:\phi(\mathbf{x})<\frac{(r^{*})^{2}}{2\,\mathrm{dia}(\mathcal{G})}\\}$,
where $\mathrm{dia}(\mathcal{G})$ is the diameter of the graph $\mathcal{G}$, then RGM
converges to global optimal point. However, the region
$\mathcal{S}_{\text{conv}}$ is much smaller compared with $\mathcal{S}$ since
$\mathbf{x}\in\mathcal{S}_{\text{conv}}$ implies that
$\sum_{j=1}^{N}a_{ij}d_{g}^{2}(x_{i},x_{j})\leq
2\phi(\mathbf{x})\leq(r^{*})^{2}/\mathrm{dia}(\mathcal{G})$. The difficulty of showing
the consensus region to be $\mathcal{S}$ lies in preserving the iterates in
$\mathcal{S}$. To theoretically guarantee this, the sectional curvature of the
manifold should be constant and non-negative (e.g., the sphere), or the
graph $\mathcal{G}$ should have a linear structure.
Recently, the authors of [1, 2] show that one can achieve almost global
consensus for problem (C-St) whenever $r\leq\frac{2}{3}d-1$. More
specifically, all second-order critical points are global optima, and thus,
the measure of stable manifold of saddle points is zero. This can be proved by
showing that the Riemannian Hessian at all saddle points has negative
curvature, i.e., the strict saddle property in [20] holds true. Therefore, if
we randomly initialize the RGM, it will almost always converge to the global
optimal point [20, 2]. Additionally, [2] also conjectures that the strict
saddle property holds for $d\geq 3$ and $r\leq d-2$. The scenarios $r=d-1$ and
$r=d$ correspond to the multiply connected case
($\mathrm{St}(d,d-1)\cong\mathrm{SO}(d)$) and the disconnected case
($\mathrm{St}(d,d)\cong\mathrm{O}(d)$), respectively, which yield multi-
stable systems [21].
However, none of the aforementioned work discusses the local linear rate of
RGM on $\mathrm{St}(d,r)$ with $r>1$. One way to prove the linear rate is to
show that the Riemannian Hessian is positive definite [9] near a consensus
point, but the Riemannian Hessian is degenerate at all consensus points (see
Section V). The linear rate of consensus can be established by
reparameterization on the circle [7] or computing the generalized Lyapunov-
type numbers on the sphere[22], but it is not known how to generalize them to
$r>1$. Thanks to the recent advancements in non-convex optimization[20, 23,
24] and optimization over Stiefel manifold [25, 26, 9, 10, 27, 28, 29], we
study the local landscape of (C-St) by an extrinsic approach and tackle the
problem using a Riemannian-type RSI.
## II Preliminaries
### II-A Outline of the Paper and Notation
The rest of the paper is organized as follows. Section III describes the
algorithm and challenges. Section IV presents the global convergence results.
Section V develops the Riemannian RSI and the local linear rate. Section VI
demonstrates the numerical experiments. APPENDIX provides the proofs of all
technical results.
Starting from this section, we use $\mathcal{M}=\mathrm{St}(d,r)$ for brevity.
We also use the following notation:
* •
$\mathcal{G}=(\mathcal{V},\mathcal{E})$: the undirected graph with
$|\mathcal{V}|=N$ nodes.
* •
$A=[a_{ij}]$: the adjacency matrix of graph $\mathcal{G}$.
* •
$\mathbf{x}$: the collection of all local variables $x_{i}$ by stacking them,
i.e., $\mathbf{x}^{\top}=(x_{1}^{\top}\ x_{2}^{\top}\ \ldots\ x_{N}^{\top})$.
* •
$\mathcal{M}^{N}=\mathcal{M}\times\ldots\times\mathcal{M}$: the $N-$fold
Cartesian product.
* •
$[N]:=\\{1,2,\ldots,N\\}$. For $\mathbf{x}\in({\mathbb{R}^{d\times r}})^{N}$,
the $i$-th block of $\mathbf{x}$: $[\mathbf{x}]_{i}=x_{i}$.
* •
$\nabla\varphi^{t}(\mathbf{x})$: Euclidean gradient;
$\nabla\varphi^{t}_{i}(\mathbf{x}):=[\nabla\varphi^{t}(\mathbf{x})]_{i}$: the
$i$-th block of $\nabla\varphi^{t}(\mathbf{x})$.
* •
$\mathrm{T}_{x}\mathcal{M}$: the tangent space of $\mathrm{St}(d,r)$ at point
$x$.
* •
$N_{x}\mathcal{M}$: the normal space of $\mathrm{St}(d,r)$ at point $x$.
* •
$\mathrm{Tr}(\cdot)$: the trace; $\left\langle
x,y\right\rangle=\mathrm{Tr}(x^{\top}y)$ : the inner product on
$\mathrm{T}_{x}\mathcal{M}$ is induced from the Euclidean inner product.
* •
$\mathrm{grad}\varphi^{t}(\mathbf{x})$: Riemannian gradient;
$\mathrm{grad}\varphi_{i}^{t}(\mathbf{x}):=[\mathrm{grad}\varphi^{t}(\mathbf{x})]_{i}$:
the $i$-th block of $\mathrm{grad}\varphi^{t}(\mathbf{x})$.
* •
$\|\cdot\|_{\text{F}}$: the Frobenius norm; $\|\cdot\|_{2}$: the operator
norm.
* •
$\mathcal{P}_{C}$: the orthogonal projection onto a closed set $C$.
* •
$I_{r}$: the $r\times r$ identity matrix.
* •
$\textbf{1}_{N}\in\mathbb{R}^{N}$: the vector of all ones;
$J:=\frac{1}{N}\textbf{1}_{N}\textbf{1}_{N}^{\top}$.
###### Definition 1 (Consensus).
Consensus is the configuration where $x_{i}=x_{j}\in\mathcal{M}$ for all
$i,j\in[N]$.
### II-B Network Setting
To represent the network, we use a graph $\mathcal{G}$ that satisfies the
following assumption.
###### Assumption 1.
We assume that the undirected graph $\mathcal{G}$ is connected and the
corresponding communication matrix $W$ is doubly stochastic, i.e.,
* •
$W=W^{\top}$.
* •
$W_{ij}\geq 0$ and $1>W_{ii}>0$.
* •
Eigenvalues of $W$ lie in $(-1,1]$. The second largest singular value
$\sigma_{2}$ of $W$ lies in $[0,1)$.
It is easy to see that any power of the matrix $W$ is also doubly stochastic
and symmetric. Moreover, the second largest singular value of $W^{t}$ is
$\sigma_{2}^{t}$.
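As a concrete check of Assumption 1 and the claim about $W^{t}$, the following sketch builds a Metropolis-type weight matrix on a ring graph (an illustrative choice of ours, not prescribed by the paper) and verifies the properties numerically.

```python
import numpy as np

def ring_metropolis(N):
    """Doubly stochastic communication matrix for an N-node ring graph."""
    W = np.zeros((N, N))
    for i in range(N):
        W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1.0 / 3.0  # two neighbors
        W[i, i] = 1.0 / 3.0                                # positive self-weight
    return W

N, t = 8, 3
W = ring_metropolis(N)

# Assumption 1: symmetric, row sums one, diagonal strictly inside (0, 1).
assert np.allclose(W, W.T)
assert np.allclose(W.sum(axis=1), 1.0)
assert np.all((0 < np.diag(W)) & (np.diag(W) < 1))

# The second largest singular value of W^t equals sigma_2^t.
sigma2 = np.linalg.svd(W, compute_uv=False)[1]
sigma2_t = np.linalg.svd(np.linalg.matrix_power(W, t), compute_uv=False)[1]
assert np.isclose(sigma2_t, sigma2 ** t)
```

Any other connected topology with symmetric, doubly stochastic weights would pass the same checks.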
### II-C Optimality Condition
We first introduce some preliminaries about optimization on a Riemannian
manifold. Let us consider the following optimization problem over a matrix
manifold $\mathcal{M}$
$\min f(x)\quad\mathrm{s.t.}\quad x\in\mathcal{M}.$ (II.1)
The Riemannian gradient $\mathrm{grad}f(x)$ is defined by the unique tangent
vector satisfying $\left\langle\mathrm{grad}f(x),\xi\right\rangle=Df(x)[\xi]$
for all $\xi\in\mathrm{T}_{x}\mathcal{M}$, where $D$ means the differential of
$f$ and $Df(x)[\xi]$ means the directional derivative along $\xi$. Since we
use the metric on the tangent space $\mathrm{T}_{x}\mathcal{M}$ induced from
the Euclidean inner product $\left\langle\cdot,\cdot\right\rangle$, the
Riemannian gradient $\mathrm{grad}f(x)$ on $\mathrm{St}(d,r)$ is given by
$\mathrm{grad}f(x)=\mathcal{P}_{\mathrm{T}_{x}\mathcal{M}}(\nabla f(x))$,
where $\mathcal{P}_{\mathrm{T}_{x}\mathcal{M}}$ is the orthogonal projection
onto $\mathrm{T}_{x}\mathcal{M}$. More specifically, we have
$\mathcal{P}_{\mathrm{T}_{x}\mathcal{M}}(y)=y-\frac{1}{2}x(x^{\top}y+y^{\top}x),$
for any $y\in\mathbb{R}^{d\times r}$ (see [25, 9]), and
$\mathcal{P}_{N_{x}\mathcal{M}}(y)=\frac{1}{2}x(x^{\top}y+y^{\top}x).$
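The two projection formulas above can be checked directly; a minimal numpy sketch (the helper names are ours):

```python
import numpy as np

def proj_tangent(x, y):
    """Orthogonal projection of y onto the tangent space of St(d, r) at x."""
    return y - 0.5 * x @ (x.T @ y + y.T @ x)

def proj_normal(x, y):
    """Orthogonal projection of y onto the normal space at x."""
    return 0.5 * x @ (x.T @ y + y.T @ x)

rng = np.random.default_rng(0)
d, r = 6, 2
x, _ = np.linalg.qr(rng.standard_normal((d, r)))   # a point on St(d, r)
y = rng.standard_normal((d, r))

xi = proj_tangent(x, y)
# Tangent-space condition: x^T xi + xi^T x = 0.
assert np.allclose(x.T @ xi + xi.T @ x, 0.0)
# The two projections decompose y and are mutually orthogonal.
assert np.allclose(xi + proj_normal(x, y), y)
assert np.isclose(np.trace(xi.T @ proj_normal(x, y)), 0.0)
```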
Under the Euclidean metric, the Riemannian Hessian denoted by
$\mathrm{Hess}f(x)$ is given by
$\mathrm{Hess}f(x)[\xi]=\mathcal{P}_{\mathrm{T}_{x}\mathcal{M}}(D(x\mapsto\mathcal{P}_{\mathrm{T}_{x}\mathcal{M}}\nabla
f(x))[\xi])$ for any $\xi\in\mathrm{T}_{x}\mathcal{M}$, i.e., the projection
differential of the Riemannian gradient[9, 10]. We refer to [30] for how to
compute
$\mathcal{P}_{\mathrm{T}_{x}\mathcal{M}}(D(x\mapsto\mathcal{P}_{\mathrm{T}_{x}\mathcal{M}}\nabla
f(x))[\xi])$ on $\mathrm{St}(d,r)$. The necessary optimality condition of
problem (II.1) is given as follows.
###### Proposition 1.
([31, 10]) Let $x\in\mathcal{M}$ be a local optimum for (II.1). If $f$ is
differentiable at $x$, then $\mathrm{grad}f(x)=0$. Furthermore, if f is twice
differentiable at $x$, then $\mathrm{Hess}f(x)\succcurlyeq 0$.
A point $x$ is a first-order critical point (or simply a critical point) if
$\mathrm{grad}f(x)=0$; it is called a second-order critical point if
$\mathrm{grad}f(x)=0$ and $\mathrm{Hess}f(x)\succcurlyeq 0$.
The concept of a retraction [9], which is a first-order approximation of the
exponential mapping and can be more amenable to computation, is given as
follows.
###### Definition 2.
[9, Definition 4.1.1] A retraction on a differentiable manifold $\mathcal{M}$
is a smooth mapping $\mathrm{Retr}$ from the tangent bundle
$\mathrm{T}\mathcal{M}$ onto $\mathcal{M}$ satisfying the following two
conditions (here $\mathrm{Retr}_{x}$ denotes the restriction of
$\mathrm{Retr}$ onto $\mathrm{T}_{x}\mathcal{M}$):
1. 1.
$\mathrm{Retr}_{x}(0)=x,\forall x\in\mathcal{M}$, where $0$ denotes the zero
element of $\mathrm{T}_{x}\mathcal{M}$.
2. 2.
For any $x\in\mathcal{M}$, it holds that
$\lim_{\mathrm{T}_{x}\mathcal{M}\ni\xi\rightarrow
0}\frac{\|\mathrm{Retr}_{x}(\xi)-(x+\xi)\|_{F}}{\|\xi\|_{F}}=0.$
## III The Proposed Algorithm
The discrete-time RGM applied to solve problem (C-St) is described in
Algorithm 1. We name it Distributed Riemannian Consensus on Stiefel
manifold (DRCS). The goal of this paper is to study the local (Q-linear) rate
of DRCS for solving problem (C-St).
Algorithm 1 Distributed Riemannian Consensus on Stiefel manifold (DRCS)
1:Input: random initial point $\mathbf{x}_{0}\in\mathrm{St}(d,r)^{N}$,
stepsize $0<\alpha<2/L_{t}$ and an integer $t\geq 1$.
2:for $k=0,1,\ldots$ do$\triangleright$ For each node $i\in[N]$, in parallel
3: Compute
$\nabla\varphi_{i}^{1}(\mathbf{x}_{k})=x_{i,k}-\sum_{j=1}^{N}W_{ij}x_{j,k}$.
4: for $l=2,\ldots,t$ do$\triangleright$ Multi-step consensus
5:
$\nabla\varphi_{i}^{l}(\mathbf{x}_{k})=\nabla\varphi_{i}^{1}(\mathbf{x}_{k})+\sum_{j=1}^{N}W_{ij}\nabla\varphi_{j}^{l-1}(\mathbf{x}_{k})$
6: end for
7: Update
$x_{i,k+1}=\mathrm{Retr}_{x_{i,k}}\left(-\alpha\mathcal{P}_{\mathrm{T}_{x_{i,k}}\mathcal{M}}\left(\nabla\varphi_{i}^{t}(\mathbf{x}_{k})\right)\right)$
(III.1)
8:end for
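For concreteness, Algorithm 1 can be simulated in a single process as follows. The sketch assumes the polar retraction of (III.11) and a Metropolis-type ring graph for $W$, both illustrative choices of ours.

```python
import numpy as np

def proj_tangent(x, y):
    """Project y onto the tangent space of St(d, r) at x."""
    return y - 0.5 * x @ (x.T @ y + y.T @ x)

def polar_retraction(x, xi):
    """Polar-decomposition retraction, computed via the polar factor of x + xi."""
    u, _, vt = np.linalg.svd(x + xi, full_matrices=False)
    return u @ vt

def phi_t(X, W, t):
    """Objective of (C-St): (1/4) sum_ij (W^t)_ij ||x_i - x_j||_F^2."""
    Wt = np.linalg.matrix_power(W, t)
    N = len(X)
    return 0.25 * sum(Wt[i, j] * np.linalg.norm(X[i] - X[j]) ** 2
                      for i in range(N) for j in range(N))

def drcs_step(X, W, alpha, t):
    """One outer iteration of Algorithm 1, all nodes at once."""
    G1 = X - np.einsum('ij,jdr->idr', W, X)       # line 3: grad phi^1 blocks
    g = G1
    for _ in range(t - 1):                        # lines 4-6: multi-step consensus
        g = G1 + np.einsum('ij,jdr->idr', W, g)   # grad phi^l recursion
    return np.stack([polar_retraction(X[i], -alpha * proj_tangent(X[i], g[i]))
                     for i in range(len(X))])     # line 7: retraction update

rng = np.random.default_rng(1)
N, d, r, t, alpha = 6, 5, 1, 2, 0.1
W = np.zeros((N, N))                              # Metropolis ring weights
for i in range(N):
    W[i, i] = W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1.0 / 3.0

X0 = np.stack([np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(N)])
X = X0
for _ in range(100):
    X = drcs_step(X, W, alpha, t)

assert all(np.allclose(X[i].T @ X[i], np.eye(r)) for i in range(N))  # feasible
assert phi_t(X, W, t) < phi_t(X0, W, t)           # objective decreased
```

The conservative stepsize $\alpha=0.1$ is chosen to sit safely inside the range guaranteeing descent; larger values up to the bound in the input line also work in practice.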
We remark that the DRCS algorithm is similar in spirit to the Riemannian
consensus algorithm in [15], but we use a retraction instead of the
exponential map. In [15], the geodesic distance is used in (I.1) for the
Grassmannian manifold and the special orthogonal group, and only a sub-linear
rate was shown (using one-step communication). Given an integer $t\geq 1$, the
iteration (III.1) in Algorithm 1 is a Riemannian gradient descent step with
stepsize $\alpha$: the algorithm moves along the negative Riemannian gradient
direction in the tangent space, then applies the retraction
$\mathrm{Retr}_{\mathbf{x}_{k}}$ to guarantee feasibility.
Also notice that $\|x\|_{\text{F}}^{2}=r$ holds for any
$x\in\mathrm{St}(d,r)$, so (C-St) is equivalent to
$\displaystyle\max
h^{t}(\mathbf{x}):=\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}W^{t}_{ij}\left\langle
x_{i},x_{j}\right\rangle$ (III.2) $\displaystyle\mathrm{s.t.}\quad
x_{i}\in\mathrm{St}(d,r),\ \forall i\in[N].$
DRCS can also be seen as applying Riemannian gradient ascent to solve (III.2).
That is, (III.1) is equivalent to
$x_{i,k+1}=\mathrm{Retr}_{x_{i,k}}\left(\alpha\mathcal{P}_{\mathrm{T}_{x_{i,k}}\mathcal{M}}(\sum_{j=1}^{N}W_{ij}^{t}x_{j,k})\right).$
(III.3)
The term
$\mathcal{P}_{\mathrm{T}_{x_{i,k}}\mathcal{M}}(\sum_{j=1}^{N}W_{ij}^{t}x_{j,k})$
can be viewed as performing $t$ steps of Euclidean consensus on the tangent
space $\mathrm{T}_{x_{i,k}}\mathcal{M}$.
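Since $\mathcal{P}_{\mathrm{T}_{x_{i}}\mathcal{M}}(x_{i})=0$, the descent direction in (III.1) and the ascent direction in (III.3) coincide; a quick numerical check (the helper names and graph weights are our illustrative choices):

```python
import numpy as np

def proj_tangent(x, y):
    return y - 0.5 * x @ (x.T @ y + y.T @ x)

rng = np.random.default_rng(2)
N, d, r, t = 5, 4, 2, 3
W = np.zeros((N, N))                       # Metropolis ring weights
for i in range(N):
    W[i, i] = W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1.0 / 3.0
Wt = np.linalg.matrix_power(W, t)
X = [np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(N)]

for i in range(N):
    mix = sum(Wt[i, j] * X[j] for j in range(N))   # t-step weighted average
    grad_i = proj_tangent(X[i], X[i] - mix)        # Riemannian gradient block
    # P_T(x_i) = 0, so -P_T(grad phi^t_i) = P_T(sum_j (W^t)_ij x_j).
    assert np.allclose(-grad_i, proj_tangent(X[i], mix))
```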
Although multi-step consensus requires more communications at each iteration,
it reduces the outer loop iteration number since $\sigma_{2}^{t}$ scales
better than $\sigma_{2}$. For a large $t$, the corresponding graph of
$W^{t}\approx\frac{1}{N}\mathbf{1}_{N}\mathbf{1}_{N}^{\top}$ is approximately
the complete graph. We emphasize here that multi-step consensus does not make
the convergence analysis trivial, since we do not require $t$ to be too large.
For the Euclidean case, [32] also discusses the advantages of multi-step
consensus for the decentralized gradient method.
### III-A Consensus in Euclidean Space: A Revisit
Let us briefly review the consensus with convex constraint in the Euclidean
space (C-E)[33], which will give us some insights to study the convergence
rate of DRCS. The optimization problem can be written as follows
$\displaystyle\min\varphi^{t}(\mathbf{x}):=\frac{1}{4}\sum_{i=1}^{N}\sum_{j=1}^{N}W^{t}_{ij}\|x_{i}-x_{j}\|_{\text{F}}^{2}$
(C-E) $\displaystyle\mathrm{s.t.}\quad x_{i}\in\mathcal{C},\ i=1,\ldots,N,$
where $\mathcal{C}$ is a closed convex set in the Euclidean space. Then, the
iteration is given by [34]
$x_{i,k+1}=\mathcal{P}_{\mathcal{C}}\left(\sum_{j=1}^{N}W_{ij}x_{j,k}\right)\quad\forall
i\in[N],$
with the corresponding matrix form being as follows
$\mathbf{x}_{k+1}=\mathcal{P}_{\mathcal{C}^{N}}\left((W\otimes
I_{d})\mathbf{x}_{k}\right),$ (EuC)
where $\mathcal{C}^{N}=\mathcal{C}\times\cdots\times\mathcal{C}.$ Different
forms of (EuC) are discussed in [35]. Let us denote the Euclidean mean via
$\hat{x}:=\frac{1}{N}\sum_{i=1}^{N}x_{i}\ \text{and}\
\hat{\mathbf{x}}:=\mathbf{1}_{N}\otimes\hat{x}.$ (III.4)
We have
$\displaystyle\|\mathbf{x}_{k}-\hat{\mathbf{x}}_{k}\|_{\text{F}}$
$\displaystyle\leq\|\mathbf{x}_{k}-\hat{\mathbf{x}}_{k-1}\|_{\text{F}}$
(III.5) $\displaystyle=\|\mathcal{P}_{\mathcal{C}^{N}}\left((W\otimes
I_{d})\mathbf{x}_{k-1}\right)-\hat{\mathbf{x}}_{k-1}\|_{\text{F}}$
$\displaystyle\leq\|[(W-J)\otimes
I_{d}](\mathbf{x}_{k-1}-\hat{\mathbf{x}}_{k-1})\|_{\text{F}}$
$\displaystyle\leq\sigma_{2}\|\mathbf{x}_{k-1}-\hat{\mathbf{x}}_{k-1}\|_{\text{F}},$
where the second inequality follows from the non-expansiveness of
$\mathcal{P}_{\mathcal{C}}$. Therefore, the Q-linear rate of (EuC) is equal to
$\sigma_{2}$. On the other hand, the iteration (EuC) is the same as applying
projected gradient descent (PGD) method to solve the problem (C-E). That is,
we have
$\mathbf{x}_{k+1}=\mathcal{P}_{\mathcal{C}^{N}}\left((W\otimes
I_{d})\mathbf{x}_{k}\right)=\mathcal{P}_{\mathcal{C}^{N}}\left(\mathbf{x}_{k}-\alpha_{e}\nabla\varphi(\mathbf{x}_{k})\right),$
(III.6)
with stepsize $\alpha_{e}=1$. Let us take a look at how to show the linear
rate of PGD using standard convex optimization analysis. We have the Euclidean
gradient $\nabla\varphi(\mathbf{x})=\mathbf{x}-(W\otimes I_{d})\mathbf{x}$.
Though the Hessian matrix $\nabla^{2}\varphi(\mathbf{x})=(I_{N}-W)\otimes
I_{d}$ is degenerate, it is positive definite when restricted to the
orthogonal complement of $\mathcal{E}^{*}$, where
$\mathcal{E}^{*}:=\mathbf{1}_{N}\otimes\mathbb{R}^{d\times r}$ is the optimal
set of problem (C-E). Simply speaking, $I_{N}-W$ is positive definite on the
orthogonal complement of $\mathrm{span}(\mathbf{1}_{N})$ in
$\mathbb{R}^{N}$. Note that
$\hat{\mathbf{x}}=\mathcal{P}_{\mathcal{E}^{*}}\mathbf{x}$, so
$\mathbf{x}-\hat{\mathbf{x}}$ is orthogonal to $\mathcal{E}^{*}$. Following
the proof of linear rate for strongly convex functions[36, Theorem 2.1.15],
one needs the inequality in [36, Theorem 2.1.12], specialized to our problem
as follows
$\displaystyle\quad\left\langle\mathbf{x}-\hat{\mathbf{x}},\nabla\varphi(\mathbf{x})\right\rangle$
(III.7)
$\displaystyle=\left\langle\mathbf{x}-\hat{\mathbf{x}},(I_{N}-W)\otimes
I_{d}(\mathbf{x}-\hat{\mathbf{x}})\right\rangle$ $\displaystyle\geq\frac{\mu
L}{\mu+L}\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}+\frac{1}{\mu+L}\|\nabla\varphi(\mathbf{x})\|_{\text{F}}^{2}.$
The constants are given by
$\mu:=1-\lambda_{2}(W)\quad\text{and}\quad L:=1-\lambda_{N}(W),$
where $\lambda_{2}(W)$ and $\lambda_{N}(W)$ are the second largest and the
smallest eigenvalues of $W$, respectively. This inequality can be obtained
using the eigenvalue decomposition of $I_{N}-W$. We
provide the proof in the Appendix, and we call (III.7) “restricted secant
inequality”. With this, if $\alpha_{e}=\frac{2}{\mu+L}$, we get
$\|\mathbf{x}_{k}-\hat{\mathbf{x}}_{k}\|_{\text{F}}\leq(\frac{L-\mu}{L+\mu})^{k}\|\mathbf{x}_{0}-\hat{\mathbf{x}}_{0}\|_{\text{F}}.$
It can be shown by simple calculations that
$\frac{L-\mu}{L+\mu}\leq\sigma_{2}$. This suggests that PGD can achieve a
faster convergence rate with $\alpha_{e}=\frac{2}{\mu+L}$. When
$\alpha_{e}=1$, the rate of $\sigma_{2}$ can be shown by combining (III.7)
with
$L\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}\geq\|\nabla\varphi(\mathbf{x})\|_{\text{F}}\geq\mu\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}$.
The proof is provided in the Appendix.
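The per-step contraction by $\sigma_{2}$ and the inequality (III.7) can both be verified numerically. In the sketch below we take $\mathcal{C}$ to be the nonnegative orthant (an illustrative choice of ours, with $\mathcal{P}_{\mathcal{C}}$ being componentwise clipping) and a Metropolis-type ring graph for $W$.

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 8, 3
W = np.zeros((N, N))                       # Metropolis ring weights
for i in range(N):
    W[i, i] = W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1.0 / 3.0
sigma2 = np.linalg.svd(W, compute_uv=False)[1]

# (EuC) with C the nonnegative orthant, so P_C is componentwise clipping.
x = rng.random((N, d))                     # feasible start
for _ in range(20):
    err_prev = np.linalg.norm(x - x.mean(axis=0))
    x = np.clip(W @ x, 0.0, None)
    err = np.linalg.norm(x - x.mean(axis=0))
    assert err <= sigma2 * err_prev + 1e-12      # per-step contraction by sigma_2

# Restricted secant inequality (III.7) for phi with t = 1.
lam = np.sort(np.linalg.eigvalsh(W))
mu, L = 1.0 - lam[-2], 1.0 - lam[0]
z = rng.standard_normal((N, d))
z -= z.mean(axis=0)                        # z = x - xhat, orthogonal to consensus set
g = z - W @ z                              # gradient, since (I - W) xhat = 0
lhs = np.sum(z * g)
rhs = (mu * L / (mu + L)) * np.linalg.norm(z) ** 2 \
      + np.linalg.norm(g) ** 2 / (mu + L)
assert lhs >= rhs - 1e-10
```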
### III-B Consensus on Stiefel Manifold: Challenges and Insights
As we have seen, unlike the (EuC) iteration with a convex constraint [34], in
DRCS the projection onto a convex set is replaced with a retraction operator,
and the Euclidean gradient is substituted by the Riemannian gradient. The
standard results [9, 10] on RGM already show a global sub-linear rate for
DRCS. However, to obtain the local Q-linear rate, we need to exploit the
specific problem structure. Analyzing DRCS poses two main challenges.
First, due to the non-linearity of $\mathrm{St}(d,r)$, the Euclidean mean
$\hat{x}$ in (III.4) is infeasible, so we need an average point defined on the
manifold. The second challenge comes from the non-convexity of
$\mathrm{St}(d,r)$: previous work such as [34] usually handles convex
constraints in Euclidean space, relying on the non-expansiveness of the
projection onto the convex set.
To resolve these issues, we use the so-called induced arithmetic mean (IAM)
[13] of $x_{1},\ldots,x_{N}$ over $\mathrm{St}(d,r)$, defined by
$\displaystyle\bar{x}$ $\displaystyle:=\mathop{\rm
argmin}_{y\in\mathrm{St}(d,r)}\sum_{i=1}^{N}\|y-x_{i}\|_{\text{F}}^{2}$
$\displaystyle=\mathop{\rm argmax}_{y\in\mathrm{St}(d,r)}\langle
y,\sum_{i=1}^{N}x_{i}\rangle=\mathcal{P_{\mathrm{St}}}(\hat{x}),$ (IAM)
where $\mathcal{P}_{\mathrm{St}}(\cdot)$ is the orthogonal projection onto
$\mathrm{St}(d,r)$. Different from the Euclidean mean notation, we define
$\bar{x}_{k}=\mathcal{P_{\mathrm{St}}}(\hat{x}_{k})\quad\text{and}\quad\bar{\mathbf{x}}_{k}=\mathbf{1}_{N}\otimes\bar{x}_{k}$
(III.8)
to denote IAM of $x_{1,k},\ldots,x_{N,k}$. The IAM is the orthogonal
projection of the Euclidean mean onto $\mathrm{St}(d,r)$, and
$\bar{\mathbf{x}}$ is also the projection of $\mathbf{x}$ onto the optimal set
$\mathcal{X}^{*}$ defined in (I.2). The distance between $\mathbf{x}$ and
$\mathcal{X}^{*}$ is given by
$\mathrm{dist}^{2}(\mathbf{x},\mathcal{X}^{*})=\min_{y\in\mathrm{St}(d,r)}\frac{1}{N}\sum_{i=1}^{N}\|y-x_{i}\|_{\text{F}}^{2}=\frac{1}{N}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}.$
The terminology IAM is derived from [37], where the IAM on $\mathrm{SO}(3)$ is
called the projected arithmetic mean. The IAM is different from the Fréchet
mean [8, 38, 15] (or the Karcher mean [39, 40]). We use the IAM since it
adapts more easily to the Euclidean linear structure and is computationally
convenient. Furthermore, we define the $l_{F,\infty}$ distance between
$\mathbf{x}$ and $\bar{\mathbf{x}}$ as
$\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F},\infty}=\max_{i\in[N]}\|x_{i}-\bar{x}\|_{\text{F}}.$
($l_{F,\infty}$)
Let us first build the connection between the Euclidean mean and IAM in the
following lemma.
###### Lemma 1.
For any $\mathbf{x}\in\mathrm{St}(d,r)^{N}$, let
$\hat{x}=\frac{1}{N}\sum_{i=1}^{N}x_{i}$ be the Euclidean mean and denote
$\hat{\mathbf{x}}=\mathbf{1}_{N}\otimes\hat{x}$ defined in (III.4). Similarly,
let $\bar{\mathbf{x}}=\mathbf{1}_{N}\otimes\bar{x}$, where $\bar{x}$ is the
IAM defined in (IAM). We have
$\frac{1}{2}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}\leq\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}\leq\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}.$
(III.9)
Moreover, if $\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}\leq N/2$, one has
$\|\bar{x}-\hat{x}\|_{\text{F}}\leq\frac{2\sqrt{r}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}}{N},$
(P1)
and
$\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}\geq\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}-\frac{4r\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{4}}{{N}}.$
(III.10)
The inequality (III.9) is tight, since we have
$\frac{1}{2}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}=\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}=Nr$
when $\sum_{i=1}^{N}x_{i}=0$ and
$\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}=\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}$
when $x_{1}=x_{2}=\ldots=x_{N}$. The inequality (P1) shows that the distance
between the Euclidean mean and the IAM is quadratic in
$\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}$, so the two means rapidly
coincide as $\mathbf{x}$ approaches $\bar{\mathbf{x}}$.
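The IAM is computable from a thin SVD of the Euclidean mean; the sketch below (assuming $\hat{x}$ has full column rank) also checks the sandwich inequality (III.9):

```python
import numpy as np

def iam(X):
    """Induced arithmetic mean: P_St(xhat) via the polar factor of the mean."""
    u, s, vt = np.linalg.svd(X.mean(axis=0), full_matrices=False)
    assert s[-1] > 1e-12              # assume xhat has full column rank
    return u @ vt

rng = np.random.default_rng(4)
N, d, r = 7, 6, 2
X = np.stack([np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(N)])
xhat = X.mean(axis=0)
xbar = iam(X)
assert np.allclose(xbar.T @ xbar, np.eye(r))      # xbar lies on St(d, r)

e_hat = np.linalg.norm(X - xhat) ** 2
e_bar = np.linalg.norm(X - xbar) ** 2
# Sandwich inequality (III.9).
assert 0.5 * e_bar <= e_hat + 1e-10
assert e_hat <= e_bar + 1e-10
```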
To deal with the non-convexity of $\mathrm{St}(d,r)$, we use the nice
properties of second-order retractions. The following second-order property of
a retraction, stated in Lemma 2, is crucial for linking optimization methods
between the Euclidean space and the matrix manifold. It means that
$\mathrm{Retr}_{x}(\xi)=x+\xi+\mathcal{O}(\|\xi\|_{\text{F}}^{2})$, that is,
$\mathrm{Retr}_{x}(\xi)$ is locally a good approximation to $x+\xi$. This
property has been used to analyze many algorithms (see e.g., [10, 28, 29]). In
this paper, we only use the polar decomposition based retraction to present a
simple proof. The polar decomposition is given by
$\displaystyle\mathrm{Retr}_{x}(\xi)=(x+\xi)(I_{r}+\xi^{\top}\xi)^{-1/2},$
(III.11)
which is also the orthogonal projection of $x+\xi$ onto $\mathrm{St}(d,r)$.
The following property (III.12) also holds for the polar retraction, which can
be seen as a non-expansiveness property.
###### Lemma 2.
[10, 27] Let $\mathrm{Retr}$ be a second-order retraction over
$\mathrm{St}(d,r)$. We then have
$\displaystyle\|\mathrm{Retr}_{x}(\xi)-(x+\xi)\|_{\text{F}}\leq
M\|\xi\|_{\text{F}}^{2},$ (P2) $\displaystyle\forall
x\in\mathrm{St}(d,r),\quad\forall\xi\in\mathrm{T}_{x}\mathcal{M}.$
Moreover, if the retraction is the polar retraction, then for all
$x\in\mathrm{St}(d,r)$ and $\xi\in\mathrm{T}_{x}\mathcal{M}$, the
following inequality holds for any $y\in\mathrm{St}(d,r)$ [29, Lemma 1]:
$\|\mathrm{Retr}_{x}(\xi)-y\|_{\text{F}}\leq\|x+\xi-y\|_{\text{F}}.$ (III.12)
###### Remark 1.
The constant $M$ in (P2) depends on the retraction. [10] established (P2) for
all $\xi$. If $\xi$ is uniformly bounded [27], then we have a constant bound
for $M$, which is independent of the dimension. For example, [27, Appendix E]
shows that $M=1$ for the polar retraction if $\|\xi\|_{\text{F}}\leq 1$,
$M=\sqrt{10}/4$ for the QR decomposition [9] if $\|\xi\|_{\text{F}}\leq 1/2$,
and $M=4$ for the Cayley transform [41] if $\|\xi\|_{\text{F}}\leq 1/2$. The
uniform bound $\|\xi\|_{\text{F}}\leq 1$ is satisfied automatically under mild
assumptions. We remark that the inequality (III.12) helps simplify some of our
analysis. If we do not use the polar retraction, (P2) implies
$\|\mathrm{Retr}_{x}(\xi)-y\|_{\text{F}}\leq\|x+\xi-y\|_{\text{F}}+M\|\xi\|_{\text{F}}^{2},$
(III.13)
where the second-order term $M\|\xi\|_{\text{F}}^{2}$ changes the suitable
step-size range in most of our analysis.
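Both (P2) with $M=1$ and the non-expansiveness (III.12) can be checked numerically for the polar retraction; a minimal sketch (the helper names are ours):

```python
import numpy as np

def polar_retraction(x, xi):
    """(x + xi)(I + xi^T xi)^(-1/2) via the polar factor, cf. (III.11)."""
    u, _, vt = np.linalg.svd(x + xi, full_matrices=False)
    return u @ vt

def proj_tangent(x, y):
    return y - 0.5 * x @ (x.T @ y + y.T @ x)

rng = np.random.default_rng(5)
d, r = 6, 3
x = np.linalg.qr(rng.standard_normal((d, r)))[0]
xi = proj_tangent(x, rng.standard_normal((d, r)))
xi *= 0.9 / np.linalg.norm(xi)                    # enforce ||xi||_F <= 1

# (P2) with M = 1 for the polar retraction when ||xi||_F <= 1.
assert (np.linalg.norm(polar_retraction(x, xi) - (x + xi))
        <= np.linalg.norm(xi) ** 2)

# (III.12): non-expansiveness toward any feasible y.
y = np.linalg.qr(rng.standard_normal((d, r)))[0]
assert (np.linalg.norm(polar_retraction(x, xi) - y)
        <= np.linalg.norm(x + xi - y) + 1e-12)
```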
We now show the relation between $\nabla\varphi^{t}(\mathbf{x})$ and
$\mathrm{grad}\varphi^{t}(\mathbf{x})$. Denoting by
$\mathcal{P}_{N_{x}\mathcal{M}}$ the orthogonal projection onto the normal
space $N_{x}\mathcal{M}$, a useful property of the projection
$\mathcal{P}_{\mathrm{T}_{x}\mathcal{M}}(x-y)$ for any
$y\in\mathrm{St}(d,r)$ [29, Section 6] is that
$\displaystyle\quad\mathcal{P}_{\mathrm{T}_{x}\mathcal{M}}(x-y)=x-y-\mathcal{P}_{N_{x}\mathcal{M}}(x-y)$
(P3) $\displaystyle=x-y-\frac{1}{2}x\left((x-y)^{\top}x+x^{\top}(x-y)\right)$
$\displaystyle=x-y-\frac{1}{2}x(x-y)^{\top}(x-y),$
where we used $x^{\top}x=y^{\top}y=I_{r}$. This property implies that
$\mathcal{P}_{\mathrm{T}_{x}\mathcal{M}}(x-y)=x-y+\mathcal{O}(\|y-x\|_{\text{F}}^{2}).$
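The identity (P3) is easy to verify numerically for two random Stiefel points:

```python
import numpy as np

rng = np.random.default_rng(6)
d, r = 5, 2
x = np.linalg.qr(rng.standard_normal((d, r)))[0]
y = np.linalg.qr(rng.standard_normal((d, r)))[0]

# Tangent projection of x - y at x.
proj = (x - y) - 0.5 * x @ (x.T @ (x - y) + (x - y).T @ x)
# (P3): the normal component reduces to a quadratic term in x - y.
assert np.allclose(proj, (x - y) - 0.5 * x @ ((x - y).T @ (x - y)))
```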
The relationship (P3) implies the following lemma.
###### Lemma 3.
For any $\mathbf{x},\mathbf{y}\in\mathrm{St}(d,r)^{N}$, we have
$\displaystyle\quad\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle=\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle+$
(III.14)
$\displaystyle\frac{1}{4}\sum_{i=1}^{N}\langle\sum_{j=1}^{N}W^{t}_{ij}(x_{i}-x_{j})^{\top}(x_{i}-x_{j}),(y_{i}-x_{i})^{\top}(y_{i}-x_{i})\rangle$
$\displaystyle\geq\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle.$
Lemma 3 directly yields a descent lemma on the Stiefel manifold, similar to
the Euclidean-type inequality [36], which helps identify the stepsize for
global convergence. The stepsize $\alpha$ will be determined by the constant
$L_{t}$ in Lemma 4 and the constant $M$ in Lemma 2. Lemma 4 is developed from
a so-called Riemannian inequality in [29], which is used to analyze a class of
Riemannian subgradient methods. For the function $\varphi^{t}(\mathbf{x})$, we
obtain a tighter estimate of $L_{t}$.
###### Lemma 4 (Descent lemma).
For the function $\varphi^{t}(\mathbf{x})$ defined in (C-St), we have
$\displaystyle\quad\varphi^{t}(\mathbf{y})-\left[\varphi^{t}(\mathbf{x})+\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle\right]$
(III.15)
$\displaystyle\leq\frac{L_{t}}{2}\|\mathbf{y}-\mathbf{x}\|_{\text{F}}^{2},\quad\forall\mathbf{x},\mathbf{y}\in\mathrm{St}(d,r)^{N},$
where $L_{t}=1-\lambda_{N}(W^{t})$ and $\lambda_{N}(W)$ is the smallest
eigenvalue of $W$.
We remark that a closely related inequality is the restricted Lipschitz-type
gradient presented in [10, Lemma 4], which is defined via the pull-back
function $g(\xi):=\varphi^{t}(\mathrm{Retr}_{\mathbf{x}}(\xi))$, whose
Lipschitz constant $\tilde{L}$ depends on the retraction and on the Lipschitz
constant of the Euclidean gradient. Also, the stepsize of RGM in [10] depends
on the norm of the Euclidean gradient. Our inequality does not rely on the
retraction, which could be of independent interest. One could also consider
the following Lipschitz inequality (e.g., see [42])
$\displaystyle\varphi^{t}(\mathbf{y})\leq\varphi^{t}(\mathbf{x})+\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}),\mathrm{Exp}_{\mathbf{x}}^{-1}\mathbf{y}\right\rangle+\frac{L_{g}}{2}d_{g}^{2}(\mathbf{x},\mathbf{y})$
(III.16)
where $\mathrm{Exp}_{\mathbf{x}}^{-1}\mathbf{y}$ is the logarithm map and
$d_{g}(\mathbf{x},\mathbf{y})$ is the geodesic distance. Since the logarithm
map and the geodesic distance bring computational and conceptual difficulties,
we use the form (III.15) for simplicity. In fact, $L_{t}$ and $L_{g}$ coincide
for problem (C-St).
By now, we have obtained three second-order properties, (P1), (P2), and (P3),
in Lemmas 1, 2, and 3. These lemmas help us resolve the non-linearity issue
and lead to a Riemannian restricted secant inequality analogous to (III.7).
Before that, in the next section we show the global convergence of Algorithm 1
with a tight estimate of the stepsize $\alpha$.
## IV The Global Convergence Analysis
We first consider the convergence of the sequence $\\{\mathbf{x}_{k}\\}$
generated by Algorithm 1. We build on the results of [43, 44, 27] to
provide a necessary and sufficient condition for the optimality of critical
points (Proposition 2). The main results on the local rate are presented in
Section V.
###### Definition 3 (Łojasiewicz inequality).
We say that $\mathbf{x}\in\mathcal{M}^{N}$ satisfies the Łojasiewicz
inequality for the projected gradient $\mathrm{grad}f(\mathbf{x})$ if there
exist $\Delta>0$, $\Lambda>0$, and $\theta\in(0,1/2]$ such that for all
$\mathbf{y}\in\mathcal{M}^{N}$ with
$\|\mathbf{y}-\mathbf{x}\|_{\text{F}}<\Delta$, it holds that
$\left|f(\mathbf{y})-f(\mathbf{x})\right|^{1-\theta}\leq\Lambda\|\mathrm{grad}f(\mathbf{x})\|_{\text{F}}.$
(Ł)
Since $\varphi^{t}(\mathbf{x})$ is real analytic, and the Stiefel manifold is
a compact real-analytic submanifold, it is well known that a Łojasiewicz
inequality holds at each critical point of problem (C-St) [44]. Therefore, we
know that the sequence $\\{\mathbf{x}_{k}\\}$ converges to a single critical
point with a properly chosen $\alpha$. The exponent $\theta$ determines the
local convergence rate. Later we will show a similar gradient-dominance
inequality in Proposition 3.
###### Lemma 5.
Let
$G:=\max_{\mathbf{x}\in\mathcal{M}^{N}}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}$.
Given any $t\geq 1$ and $\alpha\in(0,\frac{1}{MG+L_{t}/2})$, where $M$ is the
constant in Lemma 2 and $L_{t}$ is the Lipschitz constant in Lemma 4, the
sequence $\\{\mathbf{x}_{k}\\}$ generated by Algorithm 1 converges to a
critical point of problem (C-St) sub-linearly. Furthermore, if some critical
point is a limit point of $\\{\mathbf{x}_{k}\\}$ and has exponent $\theta=1/2$
in (Ł), $\\{\varphi^{t}(\mathbf{x}_{k})\\}$ converges to $0$ Q-linearly and
the sequence $\\{\mathbf{x}_{k}\\}$ converges to the critical point
R-linearly (a sequence $\\{a_{k}\\}$ is said to converge R-linearly to $a$ if
there exists a sequence $\\{\varepsilon_{k}\\}$ such that
$|a_{k}-a|\leq\varepsilon_{k}$ and $\\{\varepsilon_{k}\\}$ converges
Q-linearly to $0$).
The proof follows [44, Section 2.3] and [10], but here we use the descent
lemma (Lemma 4). It is provided in Appendix.
###### Remark 2.
The upper bound $\frac{2}{2MG+L_{t}}$ on the stepsize $\alpha$ is determined
by the Lipschitz constant and the retraction constant, and it is the same as
that of [10]. In comparison, for the result in [15] the stepsize upper bound
using the exponential map is determined only by $2/L_{g}^{\prime}$. In the
proof, we notice that
$\alpha<2/(2M\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}+L_{t})$
suffices for convergence. Since
$\lim_{k\rightarrow\infty}\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}=0$,
the upper bound eventually approaches $2/L_{t}$.
Lemma 5 establishes convergence to a critical point, but we are more
interested in convergence to the consensus configuration. It is shown in [2]
that all second-order critical points of problem (C-St) are global optima
whenever $r\leq\frac{2}{3}d-1$. Therefore, DRCS is guaranteed to almost always
converge to the optimal set $\mathcal{X}^{*}$ [20].
###### Lemma 6.
[2] When $r\leq\frac{2}{3}d-1$, all second-order critical points of problem
(C-St) are global optima. That is, the Riemannian Hessian at all saddle points
has strictly negative eigenvalues.
The following theorem is a discrete-time version of [2, Theorem 4]. It builds
on Lemma 5 and [20, Theorem 2, Corollary 6] and suggests that with random
initialization, sequence $\\{\mathbf{x}_{k}\\}$ of Algorithm 1 almost always
converges to the consensus configuration.
###### Theorem 1.
When $r\leq\frac{2}{3}d-1$, let $\alpha\in(0,C_{\mathcal{M},\varphi^{t}})$,
where
$C_{\mathcal{M},\varphi^{t}}:=\min\\{\frac{\hat{r}}{G},\frac{1}{\hat{B}},\frac{2}{2MG+L_{t}}\\}$,
$\hat{r}$ and $\hat{B}$ are two constants related to the retraction (defined
in [20, Prop. 9]). Let $\mathbf{x}_{0}$ be a random initial point of Algorithm
1. Then the set $\\{\mathbf{x}_{0}\in\mathrm{St}(d,r)^{N}:\mathbf{x}_{k}\
\textit{converges to a point of }\mathcal{X}^{*}\\}$ has measure $1$.
Theorem 1 states the almost sure convergence to consensus when
$r\leq\frac{2}{3}d-1$. For any $d,r$, when the local agents are close enough
to each other, any first-order critical point is a global optimum.
###### Proposition 2.
Suppose that $\mathbf{x}$ is a first-order critical point of problem (C-St).
Then, $\mathbf{x}$ is a global optimal point if and only if there exists some
$y\in\mathbb{R}^{d\times r}$ (with $\|y\|_{2}\leq 1$) such that $\left\langle
x_{i},y\right\rangle>r-1$ for all $i=1,\ldots,N$. Moreover, if we choose $y$
as the IAM of $\mathbf{x}$, then $\mathbf{x}$ is a global optimal point if and
only if
$\mathbf{x}\in{\mathcal{L}}:=\\{\mathbf{x}:\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F},\infty}<\sqrt{2}\\}.$
When $r=1$, the region $\mathcal{L}$ coincides with
$\mathcal{S}:=\\{\mathbf{x}:\exists y\in\mathcal{M}\ \mathrm{s.t.}\
\max_{i}d_{g}(x_{i},y)<r^{*}\\}$ defined in [15], where
$r^{*}:=\frac{1}{2}\min\\{\mathrm{inj}\mathcal{M},\frac{\pi}{\sqrt{\Delta}}\\}$,
$\mathrm{inj}\mathcal{M}$ is the injectivity radius, and $\Delta$ is the upper
bound of the sectional curvature of $\mathcal{M}$. Specifically, on the sphere
$\mathrm{S}^{d-1}$, the arc length $2r^{*}=\pi$ corresponds to the hemisphere,
which is the largest convex set on $\mathrm{S}^{d-1}$. Geometrically, this
means that no $x_{i}$ can be the antipode of any $x_{j}$, which is known as
the cut locus [8]. However, $\mathrm{inj}\mathcal{M}$ is unknown for the
general case $r>1$. In [7, 15, 22], it was shown that the continuous
Riemannian gradient flow starting in $\mathcal{L}$ converges to
$\mathcal{X}^{*}$ on the sphere $\mathrm{S}^{d-1}$, and the convergence rate
is linear [7, 22]. However, it is still unclear whether an algorithm
initialized in $\mathcal{L}$ can achieve global consensus when $r>1$. The main
challenge is that the vanilla gradient method cannot guarantee that the
sequence stays in $\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F},\infty}<\sqrt{2}$.
Hence, [15] needs to assume
$\mathbf{x}_{0}\in\mathcal{S}_{\text{conv}}:=\\{\phi(\mathbf{x})<\frac{(r^{*})^{2}}{2\,\mathrm{dia}(\mathcal{G})}\\}$,
where $\phi(\mathbf{x})$ is the objective in (I.1) with $\mathrm{dist}$ being
the geodesic distance and $\mathrm{dia}(\mathcal{G})$ is the diameter of the
graph $\mathcal{G}$; but $\mathcal{S}_{\text{conv}}$ is smaller than
$\mathcal{S}$. Here, we present the same result on $\mathrm{S}^{d-1}$ with a
different proof, since we work with the Euclidean distance; we cannot
generalize the proof to $r>1$.
###### Lemma 7.
Let $r=1$ and assume that there exists a $y\in\mathrm{St}(d,1)$ such that the
initial point $\mathbf{x}_{0}$ satisfies $\left\langle
x_{i,0},y\right\rangle\geq\delta,\quad\forall i\in[N]$ for some $\delta>0$.
Then, the sequence $\\{\mathbf{x}_{k}\\}$ generated by Algorithm 1 with
$\alpha\leq 1$ and $t\geq 1$ satisfies
$\left\langle x_{i,k},y\right\rangle\geq\delta,\quad\forall i\in[N],\ \forall
k\geq 0.$ (IV.1)
## V Local Linear Convergence
As we see in Proposition 2, the region $\mathcal{L}$ characterizes the local
landscape of (C-St). Typically, a local linear rate can be obtained for RGM if
the Riemannian Hessian is non-singular at global optimal points. The
Riemannian Hessian of $\varphi^{t}(\mathbf{x})$ is a linear operator. For any
tangent vector $\eta^{\top}=[\eta_{1}^{\top},\ldots,\eta_{N}^{\top}]$, we have
[30]
$\displaystyle\left\langle\eta,\mathrm{Hess}\varphi^{t}(\mathbf{x})[\eta]\right\rangle=\|\eta\|_{\text{F}}^{2}-\sum_{i=1}^{N}\sum_{j=1}^{N}W^{t}_{ij}\left\langle\eta_{i},\eta_{j}\right\rangle-\sum_{i=1}^{N}\langle\eta_{i},\eta_{i}(\frac{1}{2}[\nabla\varphi^{t}_{i}(\mathbf{x})^{\top}x_{i}+x_{i}^{\top}\nabla\varphi^{t}_{i}(\mathbf{x})])\rangle.$
(V.1)
Following [2], if we let $x_{1}=\ldots=x_{N}$ and
$\eta_{i}=\mathcal{P}_{\mathrm{T}_{x_{i}}\mathcal{M}}\xi$ for any
$\xi\in\mathbb{R}^{d\times r}$, (V.1) reads
$0=\sum_{i=1}^{N}\left\langle\eta_{i},\mathrm{Hess}\varphi^{t}_{i}(\mathbf{x})[\eta_{i}]\right\rangle$.
Therefore, similarly to the Euclidean case, the Riemannian Hessian at any
consensus point has a zero eigenvalue. This motivates us to consider an
alternative to strong convexity. Luckily, more relaxed conditions (than strong
convexity) exist for Euclidean problems.
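The zero eigenvalue at consensus can be observed directly from (V.1): at a consensus point the gradient blocks vanish, so the last term is zero and the quadratic form evaluates to zero on the all-equal tangent direction. A sketch (the ring-graph weights are our illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(7)
N, d, r, t = 5, 4, 2, 2
W = np.zeros((N, N))                       # Metropolis ring weights
for i in range(N):
    W[i, i] = W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1.0 / 3.0
Wt = np.linalg.matrix_power(W, t)

x = np.linalg.qr(rng.standard_normal((d, r)))[0]    # consensus point: x_i = x
xi = rng.standard_normal((d, r))
eta = xi - 0.5 * x @ (x.T @ xi + xi.T @ x)          # eta_i = P_T xi, identical blocks
X = np.stack([x] * N)
eta_s = np.stack([eta] * N)
G = X - np.einsum('ij,jdr->idr', Wt, X)             # Euclidean gradient blocks

# Quadratic form (V.1); the last term vanishes since G = 0 at consensus.
quad = (np.linalg.norm(eta_s) ** 2
        - sum(Wt[i, j] * np.sum(eta_s[i] * eta_s[j])
              for i in range(N) for j in range(N))
        - sum(np.sum(eta_s[i] * (eta_s[i] @ (0.5 * (G[i].T @ X[i] + X[i].T @ G[i]))))
              for i in range(N)))
assert np.allclose(G, 0.0)                          # grad phi^t = 0 at consensus
assert np.isclose(quad, 0.0)                        # zero-eigenvalue direction
```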
To exploit this, in the next subsection, we will generalize the inequality
(III.7) to its Riemannian version as follows
$\displaystyle\quad\left\langle\mathbf{x}-\mathcal{P}_{\mathrm{T}_{\mathbf{x}}\mathcal{M}^{N}}\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle\geq
c_{d}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}+c_{g}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2},$
(V.2)
where $c_{d}>0,c_{g}>0$ and $\mathbf{x}$ is in some neighborhood of
$\mathcal{X}^{*}$. Note that for the Riemannian problem (C-St), we need to
substitute the Euclidean gradient with the Riemannian gradient. Moreover, the IAM
$\bar{\mathbf{x}}$ should be mapped into the tangent space
$\mathrm{T}_{\mathbf{x}}\mathcal{M}^{N}$. One can use the inverse of
exponential map $\mathrm{Exp}^{-1}_{\mathbf{x}}(\bar{\mathbf{x}})$. However,
the map $\mathrm{Exp}^{-1}_{\mathbf{x}}(\bar{\mathbf{x}})$ is difficult to
compute. Note that $\mathrm{Exp}_{\mathbf{x}}$ is a local diffeomorphism. By
the inverse function theorem, we have
$\mathrm{Exp}^{-1}_{\mathbf{x}}(\bar{\mathbf{x}})=\bar{\mathbf{x}}-\mathbf{x}+\mathcal{O}(\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2})$.
Using the property in (P3), we know that
$\mathcal{P}_{\mathrm{T}_{\mathbf{x}}\mathcal{M}^{N}}(\bar{\mathbf{x}}-\mathbf{x})$
is a second-order approximation to
$\mathrm{Exp}^{-1}_{\mathbf{x}}(\bar{\mathbf{x}})$. As such, we directly
project $\bar{\mathbf{x}}$ onto the tangent space of $\mathbf{x}$. Note that
this is not the inverse of any retraction. Moreover, since
$\displaystyle\left\langle\mathbf{x}-\mathcal{P}_{\mathrm{T}_{\mathbf{x}}\mathcal{M}^{N}}\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle=\left\langle\mathbf{x}-\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle,$
we will investigate the following formal definition of RSI
$\left\langle\mathbf{x}-\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle\geq
c_{d}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}+c_{g}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}.$
(RSI)
To establish the (RSI), we first show the quadratic growth (QG) property of
$\varphi^{t}(\mathbf{x})$ (Lemma 8). In the Euclidean space, especially for
convex problems, the QG condition is equivalent to RSI as well as to the
Łojasiewicz inequality with $\theta=1/2$ [45]. To the best of our knowledge, QG
cannot be used directly to establish the linear rate of GD; one usually needs
to show its equivalence to the Luo-Tseng [46] error bound inequality
(ERB) [47]. However, for nonconvex problems, RSI is usually stronger than QG.
We will discuss more about this later.
###### Lemma 8 (Quadratic growth).
For any $t\geq 1$ and $\mathbf{x}\in\mathcal{M}^{N}$, we have
$\varphi^{t}(\mathbf{x})-\varphi^{t}(\bar{\mathbf{x}})\geq\frac{\mu_{t}}{2}\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}\geq\frac{\mu_{t}}{4}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2},$
(QG)
where the constant is given by
$\mu_{t}:=1-\lambda_{2}(W^{t}).$
Here, $\lambda_{2}(W^{t})$ denotes the second largest eigenvalue of $W^{t}$, and
$L_{t}$ is the Lipschitz constant given in Lemma 4. Moreover, if
$\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}\leq\frac{N}{8r}$, we have
$\varphi^{t}(\mathbf{x})-\varphi^{t}(\bar{\mathbf{x}})\geq\frac{\mu_{t}}{2}(1-\frac{4r}{N}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2})\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}.$
(QG’)
The second inequality (QG’) is a local quadratic growth property, which is
tighter than (QG).
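Since (QG) is spectral in nature ($\mu_{t}=1-\lambda_{2}(W^{t})$ and $\varphi^{t}$ vanishes at consensus), it can be sanity-checked numerically. The sketch below, in Python with NumPy, uses a lazy ring-graph weight matrix (as in Section VI) and the polar factor of the Euclidean mean as the IAM; the helper names (`polar`, `ring_w`, `phi_t`) are ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def polar(a):
    """Polar factor of a: the nearest point on St(d, r) to a."""
    u, _, vt = np.linalg.svd(a, full_matrices=False)
    return u @ vt

def ring_w(n):
    """Doubly stochastic 1/3-weight matrix of a ring graph (as in Section VI)."""
    w = np.zeros((n, n))
    for i in range(n):
        w[i, i] = w[i, (i + 1) % n] = w[i, (i - 1) % n] = 1.0 / 3
    return w

N, d, r, t = 8, 5, 2, 2
Wt = np.linalg.matrix_power(ring_w(N), t)
mu_t = 1.0 - np.sort(np.linalg.eigvalsh(Wt))[-2]      # mu_t = 1 - lambda_2(W^t)

def phi_t(x):
    """phi^t(x) = (1/4) sum_ij W^t_ij ||x_i - x_j||_F^2 (zero at consensus)."""
    return 0.25 * sum(Wt[i, j] * np.sum((x[i] - x[j]) ** 2)
                      for i in range(N) for j in range(N))

qg_holds = True
for _ in range(20):
    x = np.array([polar(rng.standard_normal((d, r))) for _ in range(N)])
    xbar = polar(x.mean(axis=0))                      # induced arithmetic mean (IAM)
    qg_holds &= phi_t(x) >= mu_t / 4 * np.sum((x - xbar) ** 2) - 1e-12
```

Over random trials the second inequality of (QG), $\varphi^{t}(\mathbf{x})\geq\frac{\mu_{t}}{4}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}$, holds as expected.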
### V-A Restricted Secant Inequality
In this section, we discuss how to establish (RSI). We will derive RSI in the
following forms
$\left\langle\mathbf{x}-\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle\geq
c_{d}^{\prime}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2},\quad
c_{d}^{\prime}>0$ (RSI-1)
and
$\left\langle\mathbf{x}-\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle\geq
c_{g}^{\prime}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}\quad
c_{g}^{\prime}>0.$ (RSI-2)
Then, (RSI) can be obtained by any convex combination of (RSI-1) and (RSI-2).
To proceed with the analysis, we define, for $i\in[N]$,
$\displaystyle p_{i}:=\frac{1}{2}(x_{i}-\bar{x})^{\top}(x_{i}-\bar{x}),$ (V.3)
and
$\displaystyle
q_{i}:=\frac{1}{2}\sum_{j=1}^{N}W_{ij}^{t}(x_{i}-x_{j})^{\top}(x_{i}-x_{j}).$
(V.4)
Let $\mathbf{y}=\bar{\mathbf{x}}$ in (III.14). We get
$\displaystyle\quad\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}),\mathbf{x}-\bar{\mathbf{x}}\right\rangle$
$\displaystyle=\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathbf{x}-\bar{\mathbf{x}}\right\rangle-\sum_{i=1}^{N}\left\langle
p_{i},q_{i}\right\rangle$ (V.5)
$\displaystyle=2\varphi^{t}(\mathbf{x})-\sum_{i=1}^{N}\left\langle
p_{i},q_{i}\right\rangle,$
where in the last equation we used the following two identities:
$2\varphi^{t}(\mathbf{x})=\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathbf{x}\right\rangle$
(see the proof of Lemma 8 in the Appendix) and
$\left\langle\nabla\varphi^{t}(\mathbf{x}),\bar{\mathbf{x}}\right\rangle=0.$
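The decomposition (V.5) is an exact algebraic identity, which makes it a convenient unit test for any implementation of the Riemannian gradient. The following Python/NumPy sketch (helper names ours) checks it on random Stiefel points with a ring-graph $W$ and $t=1$.

```python
import numpy as np

rng = np.random.default_rng(1)

def polar(a):
    u, _, vt = np.linalg.svd(a, full_matrices=False)
    return u @ vt

def tangent_proj(xi, g):
    """Project g onto the tangent space of St(d, r) at xi."""
    return g - xi @ (xi.T @ g + g.T @ xi) / 2

N, d, r = 6, 4, 2
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = W[i, (i + 1) % N] = W[i, (i - 1) % N] = 1.0 / 3

x = np.array([polar(rng.standard_normal((d, r))) for _ in range(N)])
xbar = polar(x.mean(axis=0))                                  # IAM

egrad = np.array([x[i] - np.tensordot(W[i], x, axes=1) for i in range(N)])
rgrad = np.array([tangent_proj(x[i], egrad[i]) for i in range(N)])

phi = 0.25 * sum(W[i, j] * np.sum((x[i] - x[j]) ** 2)
                 for i in range(N) for j in range(N))
p = [0.5 * (x[i] - xbar).T @ (x[i] - xbar) for i in range(N)]        # (V.3)
q = [0.5 * sum(W[i, j] * (x[i] - x[j]).T @ (x[i] - x[j])
               for j in range(N)) for i in range(N)]                 # (V.4)

lhs = np.sum(rgrad * (x - xbar))                  # <grad phi^t(x), x - xbar>
rhs = 2 * phi - sum(np.sum(p[i] * q[i]) for i in range(N))           # (V.5)
gap = abs(lhs - rhs)
```

The two sides agree to machine precision, confirming (V.5) and, implicitly, the tangent-space projection formula.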
The term $\sum_{i=1}^{N}\left\langle p_{i},q_{i}\right\rangle$ is
non-negative, so substituting (V.5) into (RSI) shows that RSI is
stronger than QG. Moreover, by the Cauchy-Schwarz inequality, we have
$\displaystyle\sum_{i=1}^{N}\left\langle
p_{i},q_{i}\right\rangle\leq\max_{i\in[N]}\|p_{i}\|_{\text{F}}\cdot
2\varphi^{t}(\mathbf{x})\leq\varphi^{t}(\mathbf{x})\cdot\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F},\infty}^{2}.$
(V.6)
Hence, we see that if
$\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F},\infty}<\sqrt{2}$, we have
$\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}),\mathbf{x}-\bar{\mathbf{x}}\right\rangle>0$,
which implies that the direction $-\mathrm{grad}\varphi^{t}(\mathbf{x})$ is
positively correlated with the direction $\bar{\mathbf{x}}-\mathbf{x}$.
However, it seems difficult to guarantee
$\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F},\infty}<\sqrt{2}$ since
$\bar{\mathbf{x}}_{k}$ is not fixed. We will see in Lemma 13 that multi-step
consensus can help us circumvent this problem. Moreover, note that
$\displaystyle\sum_{i=1}^{N}\left\langle
p_{i},q_{i}\right\rangle\leq\varphi^{t}(\mathbf{x})\cdot\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2},$
(V.7)
so we can also establish (RSI-1) when $\varphi^{t}(\mathbf{x})=O(\mu_{t})$, as
we will see in Lemma 9.
To conclude, the two inequalities (V.6) and (V.7) correspond to two
neighborhoods of $\mathcal{X}^{*}$: $\mathcal{N}_{R,t}$ and $\mathcal{N}_{l,t}$, which
are defined in the sequel. First, we define
$\mathcal{N}_{R,t}:=\mathcal{N}_{1,t}\cap\mathcal{N}_{2,t},$ (V.8)
where
$\displaystyle\mathcal{N}_{1,t}:$
$\displaystyle=\\{\mathbf{x}:\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}\leq
N\delta_{1,t}^{2}\\}$ (V.9) $\displaystyle\mathcal{N}_{2,t}:$
$\displaystyle=\\{\mathbf{x}:\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F},\infty}\leq\delta_{2,t}\\}$
(V.10)
and $\delta_{1,t},\delta_{2,t}$ satisfy
$\delta_{1,t}\leq\frac{1}{5\sqrt{r}}\delta_{2,t}\quad\textit{and}\quad\delta_{2,t}\leq\frac{1}{6}.$
(V.11)
Secondly, the region $\mathcal{N}_{l,t}$ is given by
$\mathcal{N}_{l,t}:=\\{\mathbf{x}:\varphi^{t}(\mathbf{x})\leq\frac{\mu_{t}}{4}\\}\cap\\{\mathbf{x}:\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}\leq
N\delta_{3,t}^{2}\\},$ (V.12)
where $\delta_{3,t}$ satisfies
$\delta_{3,t}\leq\min\\{\frac{1}{\sqrt{N}},\frac{1}{4\sqrt{r}}\\}.$ (V.13)
According to Proposition 2, the radius of $\mathcal{N}_{2,t}$ cannot be larger
than $\sqrt{2}$, which is a property of the manifold, while $\mathcal{N}_{l,t}$ is
determined by the connectivity of the network: the stronger the connectivity,
the larger the region. The inequality (RSI-1) is formally established in the
following lemma.
###### Lemma 9.
Let $\mu_{t}$ be the constant given in Lemma 8 and $t\geq 1$.
1. 1.
Suppose $\mathbf{x}\in\mathcal{N}_{R,t}$, where $\mathcal{N}_{R,t}$ is defined
by (V.8). There exists a constant $\gamma_{R,t}>0$:
$\gamma_{R,t}:=(1-4r\delta_{1,t}^{2})(1-\frac{\delta_{2,t}^{2}}{2})\mu_{t}\geq\frac{\mu_{t}}{2},$
such that the following holds:
$\displaystyle\left\langle\mathbf{x}-\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle\geq\gamma_{R,t}\|\bar{\mathbf{x}}-\mathbf{x}\|_{\text{F}}^{2}.$
(V.14)
2. 2.
For $\mathbf{x}\in\mathcal{N}_{l,t}$, where $\mathcal{N}_{l,t}$ is defined by
(V.12), we also have (RSI-1), in which
$c^{\prime}_{d}=\gamma_{l,t}:=\mu_{t}(1-4r\delta_{3,t}^{2})-\varphi^{t}(\mathbf{x})\geq\frac{\mu_{t}}{2}.$
###### Remark 3.
We obtain $\gamma_{R,t}$ and $\gamma_{l,t}$ by combining (QG’) with (V.6) and
(V.7), respectively. For (V.14),
$(1-4r\delta_{1,t}^{2})(1-\frac{\delta_{2,t}^{2}}{2})\geq\frac{1}{2}$ suffices
to guarantee the lower bound. However, we impose (V.11) to guarantee
$\mathbf{x}_{k}\in\mathcal{N}_{2,t}$ for all $k\geq 0$. Moreover, we find that
by combining (QG) with (V.6), one can also get (RSI-1) without the constraint
$\mathcal{N}_{1,t}$, but the coefficient will be smaller. For simplicity, we
only show the results that stay in $\mathcal{N}_{1,t}\cap\mathcal{N}_{2,t}$.
Similarly for $\mathcal{N}_{l,t}$, $\delta_{3,t}\leq\frac{1}{4\sqrt{r}}$ is
enough to ensure RSI. We impose $\delta_{3,t}\leq 1/\sqrt{N}$ to get
Proposition 4 which is useful to ensure $\mathbf{x}_{k}\in\mathcal{N}_{l,t}$.
In fact, $\delta_{3,t}\leq 1/\sqrt{N}$ does not shrink the region since
$\varphi^{t}(\mathbf{x})\leq\mu_{t}$ implies a small region by Lemma 8. Also,
since $\delta_{3,t}\leq 1/\sqrt{N}$, it is clear that $\mathcal{N}_{l,t}$ is
smaller than $\mathcal{N}_{R,t}$ when $N$ is large enough.
Lemma 9 also implies that the following error bound inequality holds for
$\mathbf{x}\in\mathcal{N}_{R,t}$ or $\mathbf{x}\in\mathcal{N}_{l,t}$:
$\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}\leq\frac{2}{\mu_{t}}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}.$
(ERB)
This inequality is a generalization of the Luo-Tseng error bound [46] for
problems in Euclidean space. In [45], the following holds for smooth
nonconvex problems:
$\text{RSI}\Rightarrow\text{ERB}\Leftrightarrow\text{\L{}ojasiewicz inequality
with }\theta=1/2\Rightarrow\text{QG}.$
However, in Euclidean space and for convex problems, they are all equivalent.
RSI can be used to show the Q-linear rate of
$\mathrm{dist}(\mathbf{x},\mathcal{X}^{*})$, and ERB can be used to establish
the Q-linear rate of the objective value and the R-linear rate of
$\mathrm{dist}(\mathbf{x},\mathcal{X}^{*})$. Moreover, under mild assumptions
QG and ERB are shown to be equivalent for second-order critical points for
Euclidean nonconvex problems [48]. Some other error bound inequalities have
also been obtained over the Stiefel or oblique manifolds. For example, Liu
et al. [27] established an error bound inequality at any first-order critical
point for the eigenvector problem, and [49, 50] gave two types of error bound
inequality for the phase synchronization problem. Our proof of Lemma 9 relies
mainly on the double stochasticity of $W^{t}$ and the properties of the IAM, and
is thus fundamentally different from previous works. Another similar form of RSI
is the Riemannian regularity condition proposed in [51] for minimizing
nonsmooth problems over the Stiefel manifold.
Following the same argument as in [27], the error bound inequality (ERB) implies
a growth inequality similar to the Łojasiewicz inequality. However, the
neighborhoods $\mathcal{N}_{R,t}$ and $\mathcal{N}_{l,t}$ are relative to the
set $\mathcal{X}^{*}$, which is different from Definition 3. It can be
used to show the Q-linear rate of $\\{\varphi^{t}(\mathbf{x}_{k})\\}$ only if
$\mathbf{x}_{k}\in\mathcal{N}_{R,t}$ or $\mathbf{x}_{k}\in\mathcal{N}_{l,t}$
can be guaranteed.
###### Proposition 3.
For any $\mathbf{x}\in\mathcal{N}_{R,t}$ or $\mathbf{x}\in\mathcal{N}_{l,t}$
it holds that
$\varphi^{t}(\mathbf{x})\leq\frac{3}{2\mu_{t}}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}.$
(V.15)
We will need the following bounds on $\mathrm{grad}\varphi^{t}(\mathbf{x})$,
which follow from the Lipschitz smoothness of $\varphi^{t}(\mathbf{x})$ shown
in Lemma 4; they will be helpful for establishing (RSI-2).
###### Lemma 10.
For any $\mathbf{x}\in\mathrm{St}(d,r)^{N}$, it follows that
$\displaystyle\|\sum_{i=1}^{N}\mathrm{grad}\varphi^{t}_{i}(\mathbf{x})\|_{\text{F}}\leq
L_{t}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}$ (V.16)
and
$\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}\leq
2L_{t}\cdot\varphi^{t}(\mathbf{x}),$ (V.17)
where $L_{t}$ is the Lipschitz constant given in Lemma 4. Moreover, suppose
$\mathbf{x}\in\mathcal{N}_{2,t}$, where $\mathcal{N}_{2,t}$ is defined by
(V.10). We then have
$\max_{i\in[N]}\|\mathrm{grad}\varphi^{t}_{i}(\mathbf{x})\|_{\text{F}}\leq
2\delta_{2,t}.$ (V.18)
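Inequality (V.17) follows from the non-expansiveness of the tangent projection together with the spectral bound $\|\nabla\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}\leq 2L_{t}\varphi^{t}(\mathbf{x})$, and can be checked numerically. A Python/NumPy sketch (helper names ours), using $L_{t}=1-\lambda_{N}(W^{t})$ as in Lemma 4 with a ring-graph $W$ and $t=1$:

```python
import numpy as np

rng = np.random.default_rng(2)

def polar(a):
    u, _, vt = np.linalg.svd(a, full_matrices=False)
    return u @ vt

def tangent_proj(xi, g):
    return g - xi @ (xi.T @ g + g.T @ xi) / 2

N, d, r = 7, 5, 2
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = W[i, (i + 1) % N] = W[i, (i - 1) % N] = 1.0 / 3

L_t = 1.0 - np.linalg.eigvalsh(W)[0]             # L_t = 1 - lambda_min(W^t), t = 1

ok = True
for _ in range(20):
    x = np.array([polar(rng.standard_normal((d, r))) for _ in range(N)])
    egrad = np.array([x[i] - np.tensordot(W[i], x, axes=1) for i in range(N)])
    rgrad = np.array([tangent_proj(x[i], egrad[i]) for i in range(N)])
    phi = 0.25 * sum(W[i, j] * np.sum((x[i] - x[j]) ** 2)
                     for i in range(N) for j in range(N))
    ok &= np.sum(rgrad ** 2) <= 2 * L_t * phi + 1e-12    # (V.17)
```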
Next, we are going to show (RSI-2). The two RSIs are crucial for showing that
$\mathbf{x}_{k}\in\mathcal{N}_{1,t}$ or $\mathbf{x}_{k}\in\mathcal{N}_{l,t}$
with stepsize $\alpha=\mathcal{O}(\frac{1}{L_{t}})$. This holds naturally for
convex problems in Euclidean space [14], but only locally for problem (C-St).
###### Proposition 4 (Restricted secant inequality).
The following two inequalities hold for $\mathbf{x}\in\mathcal{N}_{R,t}$ and
$\mathbf{x}\in\mathcal{N}_{l,t}$
$\left\langle\mathbf{x}-\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle\geq\frac{\Phi}{2L_{t}}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2},$
(V.19)
and
$\displaystyle\left\langle\mathbf{x}-\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle\geq\nu\cdot\frac{\Phi}{2L_{t}}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}+(1-\nu)\gamma_{t}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2},$
(RSI-I)
for any $\nu\in[0,1]$, where $\gamma_{t}$ and $\Phi>1$ are constants related
to $\mathbf{x}$, which are given by
$\gamma_{t}:=\left\\{\begin{matrix}\gamma_{R,t},&\mathbf{x}\in\mathcal{N}_{R,t}\\\
\gamma_{l,t},&\mathbf{x}\in\mathcal{N}_{l,t},\end{matrix}\right.$ (V.20)
$\Phi:=\left\\{\begin{matrix}2-\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F},\infty}^{2},&\mathbf{x}\in\mathcal{N}_{R,t}\\\
2-\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2},&\mathbf{x}\in\mathcal{N}_{l,t}.\end{matrix}\right.$
(V.21)
### V-B Local Rate of Consensus
Endowed with the RSI condition, we can now solve the problem (C-St). The main
difficulty now is to show that $\mathbf{x}_{k}\in\mathcal{N}_{2,t}$. In the
literature, there has been some work discussing how to bound the infinity
norm for Euclidean gradient descent (e.g., [23, 52]), which is called
implicit regularization [23]. This is often related to a certain incoherence
condition under specific statistical models. However, to solve (C-St) we use
the Riemannian gradient method, and we need to verify this property for DRCS.
We have the following bound in (V.22) for the total variation distance between
any row of $W^{t}$ and the uniform distribution.
###### Lemma 11.
Given any $\mathbf{x}\in\mathcal{N}_{2,t}$, where $\mathcal{N}_{2,t}$ is
defined in (V.10), if
$t\geq\lceil\log_{\sigma_{2}}(\frac{1}{2\sqrt{N}})\rceil$, we have
$\max_{i\in[N]}\|\sum_{j=1}^{N}(W^{t}_{ij}-1/N)x_{j}\|_{\text{F}}\leq\frac{\delta_{2,t}}{2}.$
(V.22)
The lower bound $\lceil\log_{\sigma_{2}}(\frac{1}{2\sqrt{N}})\rceil$ may not
be a small number. For example, when $W$ is the lazy Metropolis matrix of a
regular connected graph, $\sigma_{2}$ usually scales as
$1-\mathcal{O}(\frac{1}{N^{2}})$ [53, Remark 2] and
$\log_{\sigma_{2}}(\frac{1}{2\sqrt{N}})=\mathcal{O}(N^{2}\log N)$, whereas
for a star graph this can be $\mathcal{O}(\log N)$. It would be interesting,
as future work, to see under what conditions (V.22) holds for $t=1$. Here, we
require this condition to ensure that the algorithm stays in a proper
local neighborhood.
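To get a feel for the size of this lower bound, one can compute $\lceil\log_{\sigma_{2}}(\frac{1}{2\sqrt{N}})\rceil$ directly; the Python/NumPy sketch below does so for the 1/3-weight ring matrix of Section VI, taking $\sigma_{2}$ as the second largest eigenvalue modulus of $W$. This is an illustration of the scaling only, not a construction from the paper.

```python
import numpy as np

def ring_w(n):
    """1/3-weight doubly stochastic matrix of a ring graph (Section VI)."""
    w = np.zeros((n, n))
    for i in range(n):
        w[i, i] = w[i, (i + 1) % n] = w[i, (i - 1) % n] = 1.0 / 3
    return w

def min_rounds(w):
    """Smallest integer t with t >= log_{sigma_2}(1 / (2 sqrt(N)))."""
    n = w.shape[0]
    sigma2 = np.sort(np.abs(np.linalg.eigvalsh(w)))[-2]   # second largest modulus
    return int(np.ceil(np.log(1.0 / (2.0 * np.sqrt(n))) / np.log(sigma2)))

# The required number of consensus rounds grows rapidly with N on a ring.
rounds = {n: min_rounds(ring_w(n)) for n in (10, 20, 40)}
```

On a ring, doubling $N$ roughly quadruples the required number of rounds, consistent with the $\mathcal{O}(N^{2}\log N)$ scaling noted above.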
Following a perturbation lemma of the polar decomposition [54, Theorem 2.4],
we get the following technical lemma which will be useful to bound the
Euclidean distance between two consecutive points $\bar{x}_{k}$ and
$\bar{x}_{k+1}$.
###### Lemma 12.
Suppose $\mathbf{x},\mathbf{y}\in\mathcal{N}_{1,t}$. Then we have
$\|\bar{x}-\bar{y}\|_{\text{F}}\leq\frac{1}{1-2\delta_{1,t}^{2}}\|\hat{x}-\hat{y}\|_{\text{F}},$
where $\bar{x}$ and $\bar{y}$ are the IAM of $x_{1},\ldots,x_{N}$ and
$y_{1},\ldots,y_{N}$, respectively.
Now, we are ready to prove that $\mathbf{x}_{k}$ always stays in
$\mathcal{N}_{R,t}=\mathcal{N}_{1,t}\cap\mathcal{N}_{2,t}$ if the stepsize
$\alpha$ satisfies $0\leq\alpha\leq\min\\{\frac{1}{L_{t}},1,\frac{1}{M}\\}$
and $t\geq\lceil\log_{\sigma_{2}}(\frac{1}{2\sqrt{N}})\rceil$. The upper bounds
$\frac{1}{M}$ and $1$ come from showing $\mathbf{x}_{k}\in\mathcal{N}_{2,t}$.
###### Lemma 13 (Stay in $\mathcal{N}_{R,t}$).
Let $\mathbf{x}_{k}\in\mathcal{N}_{R,t}$,
$0\leq\alpha\leq\min\\{\frac{\Phi}{L_{t}},1,\frac{1}{M}\\}$ and
$t\geq\lceil\log_{\sigma_{2}}(\frac{1}{2\sqrt{N}})\rceil$, where the radius of
$\mathcal{N}_{R,t}$ is given by (V.11) and $M$ is given in Lemma 2. We then
have $\mathbf{x}_{k+1}\in\mathcal{N}_{R,t}.$
From the above result, we see that the stepsize is upper bounded by
$\frac{\Phi}{L_{t}}$ and $\frac{1}{M}$, and they reflect the role of the
network and the manifold. The condition $\alpha\leq 1/L_{t}$ guarantees that
$\mathbf{x}_{k}\in\mathcal{N}_{1,t}$ and $\alpha\leq\min\\{1,1/M\\}$ ensures
that $\mathbf{x}_{k}\in\mathcal{N}_{2,t}.$ As we mentioned in Remark 1, we
have $M=1$ in (P2) for the polar retraction if
$\alpha\|\mathrm{grad}\varphi^{t}(x_{i,k})\|_{\text{F}}\leq 1$. By our choice
of $\alpha\leq 1$ and $\mathbf{x}_{k}\in\mathcal{N}_{R,t}$, we indeed have
$\alpha\|\mathrm{grad}\varphi^{t}(x_{i,k})\|_{\text{F}}\leq 2\delta_{2,t}\leq
1$ according to Lemma 10. However, we do not plan to remove the term
$\frac{1}{M}$. Note that if we use other retractions, the bound will be
slightly worse due to larger $M$ and the extra second-order term in (III.13).
Now, we are ready to establish the local Q-linear convergence rate of
Algorithm 1.
###### Theorem 2.
Suppose Assumption 1 holds. (1) Let $\nu\in(0,1)$ and the stepsize $\alpha$ satisfy
$0<\alpha\leq\min\\{\frac{\nu\Phi}{L_{t}},1,\frac{1}{M}\\}$ and
$t\geq\lceil\log_{\sigma_{2}}(\frac{1}{2\sqrt{N}})\rceil$. The sequence
$\\{\mathbf{x}_{k}\\}$ of Algorithm 1 achieves consensus linearly if the
initialization satisfies $\mathbf{x}_{0}\in\mathcal{N}_{R,t}$, defined by (V.8)
with radii satisfying (V.11). That is, we have $\mathbf{x}_{k}\in\mathcal{N}_{R,t}$ for all $k\geq
0$ and
$\displaystyle\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2}\leq(1-2\alpha(1-\nu)\gamma_{t})^{k}\|\mathbf{x}_{0}-\bar{\mathbf{x}}_{0}\|_{\text{F}}^{2}.$
(V.23)
Moreover, if $\alpha\leq\frac{1}{2MG+L_{t}}$, $\bar{\mathbf{x}}_{k}$ also
converges to a single point.
(2). If $\mathbf{x}_{0}\in\mathcal{N}_{l,t}$ and
$\alpha\leq\min\\{\frac{2}{L_{t}+2MG},\frac{\Phi}{2L_{t}}\\}$, one has (V.23)
for any $t\geq 1$.
Combining Theorem 2 with Lemma 5 and Theorem 1, we conclude the following
results. When
$\alpha<\min\\{C_{\mathcal{M},\varphi^{t}},\frac{2}{L_{t}+2MG},\frac{\nu\Phi}{L_{t}},1\\}$
and $r\leq\frac{2}{3}d-1$, we know that with random initialization,
$\\{\mathbf{x}_{k}\\}$ first converges sub-linearly and then linearly for
any $t\geq 1$. In practice, we find that any $\alpha\in(0,2/L_{t})$ guarantees
global convergence. This could be explained as follows. When
$\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}\rightarrow 0$, then
$\varphi^{t}(\mathbf{x}_{k})\rightarrow 0$. We then have
$\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}\rightarrow 0$ (by
(V.17)),
$\max_{i\in[N]}\|\mathrm{grad}\varphi^{t}_{i}(\mathbf{x})\|_{\text{F}}\rightarrow
0$ and $\Phi\rightarrow 2$. Combined with Remark 2 and the discussion after
Lemma 13, we deduce that the upper bound of $\alpha$ is asymptotically
$\min\\{C_{\mathcal{M},\varphi^{t}},\frac{2\nu}{L_{t}},1\\}$. Finally, we also
have $\gamma_{t}\rightarrow\mu_{t}$ for both cases of Lemma 9. If we let
$\nu=1/2$ and $\alpha=1$ is admissible, then this implies a linear rate of
$(1-\mu_{t})^{1/2}$, which could be worse than the rate $\sigma_{2}^{t}$ of
Euclidean consensus. We discuss in the next subsection how to obtain this
rate.
### V-C Asymptotic Rate
To get the rate of $\sigma_{2}^{t}$, we need to ensure
$c_{d}=\frac{\mu_{t}L_{t}}{\mu_{t}+L_{t}}$ and $c_{g}=\frac{1}{\mu_{t}+L_{t}}$
in (RSI). We will combine Lemma 8 with Lemma 3 to show this asymptotically for
any $\mathbf{x}\in\mathcal{N}_{l,t}$. Firstly, by (V.5) we have
$\displaystyle\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}),\mathbf{x}-\bar{\mathbf{x}}\right\rangle=\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathbf{x}-\hat{\mathbf{x}}\right\rangle-\sum_{i=1}^{N}\left\langle
p_{i},q_{i}\right\rangle,$ (V.24)
where $p_{i}$ and $q_{i}$ are given in (V.3)-(V.4). Using (III.7) and (III.10)
yields
$\displaystyle\quad\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathbf{x}-\hat{\mathbf{x}}\right\rangle$
(V.25)
$\displaystyle\geq\frac{\mu_{t}L_{t}}{\mu_{t}+L_{t}}\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}+\frac{1}{\mu_{t}+L_{t}}\|\nabla\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}$
$\displaystyle\geq\frac{\mu_{t}L_{t}}{\mu_{t}+L_{t}}(1-\frac{4r}{N}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2})\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}$
$\displaystyle\quad+\frac{1}{\mu_{t}+L_{t}}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2},$
where we also used
$\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}\leq\|\nabla\varphi^{t}(\mathbf{x})\|_{\text{F}}$
by the non-expansiveness of
$\mathcal{P}_{\mathrm{T}_{\mathbf{x}}\mathcal{M}^{N}}$. Substituting (V.25)
into (V.24) and noting (V.7), we get
$\displaystyle\quad\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}),\mathbf{x}-\bar{\mathbf{x}}\right\rangle$
$\displaystyle\geq\frac{\mu_{t}L_{t}}{\mu_{t}+L_{t}}(1-\frac{4r}{N}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}-\frac{\mu_{t}+L_{t}}{\mu_{t}L_{t}}\varphi^{t}(\mathbf{x}))\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}+\frac{1}{\mu_{t}+L_{t}}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}.$
Therefore, when $\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}\rightarrow 0$, we
have $\varphi^{t}(\mathbf{x})\rightarrow 0$ by Lemma 4. We get
$c_{d}=\frac{\mu_{t}L_{t}}{\mu_{t}+L_{t}}(1-\frac{4r}{N}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}-\frac{\mu_{t}+L_{t}}{\mu_{t}L_{t}}\varphi^{t}(\mathbf{x}))\rightarrow\frac{\mu_{t}L_{t}}{\mu_{t}+L_{t}}.$
By the same arguments as in Theorem 2, we get the asymptotic rate
$\frac{L_{t}-\mu_{t}}{L_{t}+\mu_{t}}$ with $\alpha=\frac{2}{L_{t}+\mu_{t}}$,
and $\frac{L_{t}-\mu_{t}}{L_{t}+\mu_{t}}\leq\sigma_{2}^{t}$. Also, using
similar arguments as in (VII.3), we can get the rate $\sigma_{2}^{t}$ with
$\alpha=1$, as in the Euclidean case, by noting that (ERB) asymptotically becomes
$\mu_{t}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}\leq\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}$.
## VI Numerical experiment
We test the stepsize on a ring graph. The matrix $W$ is given as follows:
$\displaystyle W=\begin{pmatrix}1/3&1/3&&&&1/3\\ 1/3&1/3&1/3&&&\\ &1/3&1/3&\ddots&&\\ &&\ddots&\ddots&1/3&\\ &&&1/3&1/3&1/3\\ 1/3&&&&1/3&1/3\end{pmatrix}.$
Figure 1: Numerical results for $N=30,d=5,r=2$. Panels: (a) $W$: gradient; (b) $W$: distance; (c) $(W+I_{N})/2$: gradient; (d) $(W+I_{N})/2$: distance.
We ran Algorithm 1 with four choices of stepsize, $1/L$, $2/(L+\mu)$, $2/L$,
and $1$; all runs are stopped when
$\frac{1}{N}\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2}\leq 2\times
10^{-16}$. In Fig. 1 (a)(b), we have $L=1-\lambda_{\min}=\frac{4}{3}$. For
Fig. 1 (c)(d), the doubly stochastic matrix is given by $(W+I_{N})/2$ and we
have $L=1-\lambda_{\min}=\frac{2}{3}$. The left column shows
$\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}$ and the right column
shows the distance
$\frac{1}{N}\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2}$, both in
log scale. We see that Algorithm 1 with $\alpha=2/L$ does not converge to a
critical point. In both cases, $\alpha=2/(\mu+L)$ produces the fastest
convergence. The black line shows the convergence of multi-step consensus with
$t=10$ and $\alpha=1$, and the remaining lines are for $t=1$. Its convergence
rate is about 10 times that of the green line.
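Algorithm 1 itself is not reproduced in this excerpt; the following Python/NumPy sketch shows what one DRCS-style iteration looks like under our assumptions (Riemannian gradient of $\varphi^{t}$ with the polar retraction, ring-graph $W$, $t=1$, near-consensus initialization). It is meant only to illustrate the local linear consensus behavior of Theorem 2; all helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(3)

def polar(a):
    u, _, vt = np.linalg.svd(a, full_matrices=False)
    return u @ vt

def ring_w(n):
    w = np.zeros((n, n))
    for i in range(n):
        w[i, i] = w[i, (i + 1) % n] = w[i, (i - 1) % n] = 1.0 / 3
    return w

def step(x, w, alpha):
    """One Riemannian gradient step on phi^t with the polar retraction."""
    new = np.empty_like(x)
    for i in range(x.shape[0]):
        egrad = x[i] - np.tensordot(w[i], x, axes=1)    # x_i - sum_j W_ij x_j
        rgrad = egrad - x[i] @ (x[i].T @ egrad + egrad.T @ x[i]) / 2
        new[i] = polar(x[i] - alpha * rgrad)
    return new

def cons_err(x):
    xbar = polar(x.mean(axis=0))                        # IAM
    return np.sum((x - xbar) ** 2)

N, d, r = 6, 5, 2
W = ring_w(N)
alpha = 1.0 / (1.0 - np.linalg.eigvalsh(W)[0])          # alpha = 1 / L

# Near-consensus initialization (the local regime of Section V).
base = polar(rng.standard_normal((d, r)))
x = np.array([polar(base + 0.05 * rng.standard_normal((d, r))) for _ in range(N)])

e0 = cons_err(x)
for _ in range(1000):
    x = step(x, W, alpha)
e1 = cons_err(x)
```

Starting near consensus, the consensus error $\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2}$ decays linearly to numerical zero, as Theorem 2 predicts.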
## VII Conclusion
In this paper, we provided the global and local convergence analysis of DRCS,
a distributed method for consensus on the Stiefel manifold. We showed that the
convergence rate asymptotically matches the Euclidean counterpart, which
scales with the second largest singular value of the communication matrix. The
main technical contribution is to generalize the Euclidean restricted secant
inequality to the Riemannian version. In future work, we would like to
study whether the iterates remain in the region $\mathcal{N}_{2,t}$ without
multi-step consensus, and to estimate the stepsize constant
$C_{\mathcal{M},\varphi^{t}}$.
## APPENDIX
###### Proof of inequality (III.7).
Without loss of generality, we assume $d=r=1$. Let $U_{1},U_{2},\ldots,U_{N}$
be the orthonormal eigenvectors of $I_{N}-W$, corresponding to the eigenvalues
$0=\lambda_{1}<\lambda_{2}\leq\ldots\leq\lambda_{N}$. Then, we have that
$\mathbf{x}-\hat{\mathbf{x}}=\sum_{i=1}^{N}c_{i}U_{i}$. Since
$\mathbf{x}-\hat{\mathbf{x}}$ is orthogonal to $\mathrm{span}\\{U_{1}\\}$, we
have $c_{1}=0$. Note that
$\nabla\varphi(\mathbf{x})=(I_{N}-W)\mathbf{x}=(I_{N}-W)(\mathbf{x}-\hat{\mathbf{x}})$.
We get
$\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}=\sum_{i=2}^{N}c_{i}^{2}\quad\text{and}\quad\|\nabla\varphi(\mathbf{x})\|_{\text{F}}^{2}=\sum_{i=2}^{N}c_{i}^{2}\lambda_{i}^{2}.$
(VII.1)
Then, (III.7) reads
$\displaystyle\quad\left\langle\mathbf{x}-\hat{\mathbf{x}},\nabla\varphi(\mathbf{x})\right\rangle=\left\langle\mathbf{x}-\hat{\mathbf{x}},(I_{N}-W)(\mathbf{x}-\hat{\mathbf{x}})\right\rangle$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}=\left\langle\sum_{i=2}^{N}c_{i}U_{i},\sum_{i=2}^{N}c_{i}\lambda_{i}U_{i}\right\rangle$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}=\sum_{i=2}^{N}c_{i}^{2}\lambda_{i}\geq\frac{1}{L+\mu}\sum_{i=2}^{N}(\mu
Lc_{i}^{2}+c_{i}^{2}\lambda_{i}^{2})$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}=\frac{\mu
L}{\mu+L}\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}+\frac{1}{\mu+L}\|\nabla\varphi(\mathbf{x})\|_{\text{F}}^{2},$
(VII.2)
where the inequality follows since $\mu=\lambda_{2}$ and $L=\lambda_{N}$. ∎
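Since the proof above is purely spectral, (III.7) is easy to confirm numerically in the $d=r=1$ case. A Python/NumPy sketch with a ring-graph $W$ (our choice of test matrix):

```python
import numpy as np

rng = np.random.default_rng(4)

n = 8
w = np.zeros((n, n))
for i in range(n):
    w[i, i] = w[i, (i + 1) % n] = w[i, (i - 1) % n] = 1.0 / 3

lam = np.sort(np.linalg.eigvalsh(np.eye(n) - w))   # 0 = lam_1 < lam_2 <= ... <= lam_N
mu, L = lam[1], lam[-1]

ok = True
for _ in range(100):
    x = rng.standard_normal(n)
    dev = x - x.mean()                              # x - xhat
    g = (np.eye(n) - w) @ x                         # Euclidean gradient of phi
    lhs = dev @ g
    rhs = mu * L / (mu + L) * dev @ dev + g @ g / (mu + L)
    ok &= lhs >= rhs - 1e-10
```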
###### Proof of linear rate of PGD with $\alpha_{e}=1$.
Firstly, one can easily verify
$L\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}\geq\|\nabla\varphi(\mathbf{x})\|_{\text{F}}\geq\mu\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}$
using (VII.1). We have
$\displaystyle\quad\|\mathbf{x}_{k+1}-\hat{\mathbf{x}}_{k+1}\|_{\text{F}}^{2}\leq\|\mathbf{x}_{k+1}-\hat{\mathbf{x}}_{k}\|_{\text{F}}^{2}$
(VII.3)
$\displaystyle\leq\|\mathbf{x}_{k}-\hat{\mathbf{x}}_{k}\|_{\text{F}}^{2}+\|\nabla\varphi(\mathbf{x}_{k})\|_{\text{F}}^{2}-2\left\langle\nabla\varphi(\mathbf{x}_{k}),\mathbf{x}_{k}-\hat{\mathbf{x}}_{k}\right\rangle$
$\displaystyle\overset{\text{(III.7)}}{\leq}(1-\frac{2\mu
L}{\mu+L})\|\mathbf{x}_{k}-\hat{\mathbf{x}}_{k}\|_{\text{F}}^{2}+(1-\frac{2}{\mu+L})\|\nabla\varphi(\mathbf{x}_{k})\|_{\text{F}}^{2}.$
If $\frac{2}{\mu+L}\geq 1$, i.e., $\lambda_{2}(W)+\lambda_{N}(W)\geq 0$, this
implies $\sigma_{2}=\lambda_{2}(W).$ Combining
$\|\nabla\varphi(\mathbf{x})\|_{\text{F}}\geq\mu\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}$
with (VII.3) yields
$\displaystyle\quad\|\mathbf{x}_{k+1}-\hat{\mathbf{x}}_{k}\|_{\text{F}}^{2}$
$\displaystyle\leq(1-\frac{2\mu
L}{\mu+L}-\mu^{2}+\frac{2\mu^{2}}{L+\mu})\|\mathbf{x}_{k}-\hat{\mathbf{x}}_{k}\|_{\text{F}}^{2}$
$\displaystyle=(1-\mu)^{2}\|\mathbf{x}_{k}-\hat{\mathbf{x}}_{k}\|_{\text{F}}^{2}=\sigma_{2}^{2}\|\mathbf{x}_{k}-\hat{\mathbf{x}}_{k}\|_{\text{F}}^{2}.$
If $\frac{2}{\mu+L}<1$, i.e., $\lambda_{2}(W)+\lambda_{N}(W)<0$, then
$\sigma_{2}=-\lambda_{N}(W).$ Combining
$\|\nabla\varphi(\mathbf{x})\|_{\text{F}}\leq
L\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}$ with (VII.3) implies
$\displaystyle\quad\|\mathbf{x}_{k+1}-\hat{\mathbf{x}}_{k}\|_{\text{F}}^{2}$
$\displaystyle\leq(1-\frac{2\mu
L}{\mu+L}-L^{2}+\frac{2L^{2}}{L+\mu})\|\mathbf{x}_{k}-\hat{\mathbf{x}}_{k}\|_{\text{F}}^{2}$
$\displaystyle=(1-L)^{2}\|\mathbf{x}_{k}-\hat{\mathbf{x}}_{k}\|_{\text{F}}^{2}=\sigma_{2}^{2}\|\mathbf{x}_{k}-\hat{\mathbf{x}}_{k}\|_{\text{F}}^{2}.$
∎
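With $\alpha_{e}=1$ the PGD step reduces to plain averaging, $\mathbf{x}_{k+1}=W\mathbf{x}_{k}$, so the per-step contraction of the deviation by $\sigma_{2}$ can be observed directly. A Python/NumPy sketch (our test setup, $d=r=1$):

```python
import numpy as np

rng = np.random.default_rng(5)

n = 8
w = np.zeros((n, n))
for i in range(n):
    w[i, i] = w[i, (i + 1) % n] = w[i, (i - 1) % n] = 1.0 / 3

eig = np.linalg.eigvalsh(w)                      # ascending; eig[-1] = 1
sigma2 = max(abs(eig[-2]), abs(eig[0]))          # second largest eigenvalue modulus

def dev(v):
    return np.linalg.norm(v - v.mean())          # ||x - xhat||

x = rng.standard_normal(n)
contracts = True
for _ in range(50):
    x_next = w @ x                               # PGD step with alpha_e = 1
    contracts &= dev(x_next) <= sigma2 * dev(x) + 1e-12
    x = x_next
```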
###### Proof of Lemma 1.
Note that
$\displaystyle\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}$
$\displaystyle=\sum_{i=1}^{N}\|x_{i}-\hat{x}\|_{\text{F}}^{2}=N(r-\|\hat{x}\|_{\text{F}}^{2})$
(VII.4)
$\displaystyle=N(\sqrt{r}+\|\hat{x}\|_{\text{F}})(\sqrt{r}-\|\hat{x}\|_{\text{F}})$
$\displaystyle\leq 2N({r}-\sqrt{r}\|\hat{x}\|_{\text{F}}),$
where the inequality is due to $\|\hat{x}\|_{\text{F}}\leq\sqrt{r}$. Since
$\bar{x}=\mathcal{P}_{\mathrm{St}}(\hat{x})=uv^{\top},$ (VII.5)
where $usv^{\top}=\hat{x}$ is the singular value decomposition, we get
$\displaystyle\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}$
$\displaystyle=\sum_{i=1}^{N}(2r-2\left\langle x_{i},\bar{x}\right\rangle)$
(VII.6)
$\displaystyle=2N(r-\left\langle\hat{x},\bar{x}\right\rangle)=2N(r-\|\hat{x}\|_{*}),$
where $\|\cdot\|_{*}$ is the trace norm. Let
$\hat{\sigma}_{1}\geq\ldots\geq\hat{\sigma}_{r}$ be the singular values of
$\hat{x}$. It is clear that $\hat{\sigma}_{1}\leq 1$ since
$\|\hat{x}\|_{2}\leq\frac{1}{N}\sum_{i=1}^{N}\|x_{i}\|_{2}\leq 1$. The
inequality
$\|\hat{x}\|_{*}=\sum_{i=1}^{r}\hat{\sigma}_{i}\leq\sqrt{r}\sqrt{\sum_{i=1}^{r}\hat{\sigma}_{i}^{2}}=\sqrt{r}\|\hat{x}\|_{\text{F}}$,
together with (VII.4) and (VII.6) imply that
$\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}\leq\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}.$
Next, we also have
$\|\hat{x}\|_{*}=\sum_{i=1}^{r}\hat{\sigma}_{i}\geq\sum_{i=1}^{r}\hat{\sigma}_{i}^{2}=\|\hat{x}\|_{\text{F}}^{2}$.
This yields
$\frac{1}{2}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}=N(r-\|\hat{x}\|_{*})\leq
N(r-\|\hat{x}\|_{\text{F}}^{2})=\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2},$
which proves (III.9).
By utilizing the fact
$\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}\leq\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}$,
we have
$\displaystyle\sqrt{r}\sqrt{\sum_{i=1}^{r}\hat{\sigma}_{i}^{2}}=\sqrt{r}\|\hat{x}\|_{\text{F}}\geq\|\hat{x}\|_{\text{F}}^{2}=r-\frac{1}{N}\|\hat{\mathbf{x}}-\mathbf{x}\|_{\text{F}}^{2}\geq
r-\frac{1}{N}\|\bar{\mathbf{x}}-\mathbf{x}\|_{\text{F}}^{2},$ (VII.7)
where we used
$\|\hat{x}\|_{\text{F}}=\|\frac{1}{N}\sum_{i=1}^{N}x_{i}\|_{\text{F}}\leq\sqrt{r}.$
If $\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}\leq N/2$ (by assumption),
we can square both sides of the above and note that $\hat{\sigma}_{i}^{2}\leq 1$ for
$i\in[r-1]$ to get
$\hat{\sigma}_{r}^{2}\geq
1-2\frac{\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}}{N}+\frac{\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{4}}{N^{2}r}\geq
1-2\frac{\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}}{N}.$
Then, we have
$\hat{\sigma}_{r}\geq\sqrt{1-2\frac{\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}}{N}}\geq
1-2\frac{\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}}{N},$ (VII.8)
where we use $\sqrt{1-s}\geq 1-s$ for any $1\geq s\geq 0$. Recall that
$\bar{x}=\mathcal{P}_{\mathrm{St}}(\hat{x})=uv^{\top}$. Hence, it follows that
$\displaystyle\quad\|\hat{x}-\bar{x}\|_{\text{F}}^{2}=r-2\left\langle\hat{x},\bar{x}\right\rangle+\|\hat{x}\|_{\text{F}}^{2}$
$\displaystyle=r-2\sum_{i=1}^{r}\hat{\sigma}_{i}+\sum_{i=1}^{r}\hat{\sigma}_{i}^{2}=\sum_{i=1}^{r}(1-\hat{\sigma}_{i})^{2}\leq\frac{4r\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{4}}{N^{2}}.$
Hence, we have proved (P1). Finally,
$\displaystyle\quad\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}=\sum_{i=1}^{N}\left\langle
x_{i}-\hat{x},x_{i}-\hat{x}\right\rangle$
$\displaystyle=\sum_{i=1}^{N}\left\langle
x_{i}-\hat{x},x_{i}-\bar{x}\right\rangle+\sum_{i=1}^{N}\left\langle
x_{i}-\hat{x},\bar{x}-\hat{x}\right\rangle$
$\displaystyle=\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}+\sum_{i=1}^{N}\left\langle\bar{x}-\hat{x},x_{i}-\bar{x}\right\rangle$
$\displaystyle=\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}-N\|\bar{x}-\hat{x}\|_{\text{F}}^{2}$
$\displaystyle\overset{\text{(P1)}}{\geq}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}-\frac{4r\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{4}}{{N}},$
where we used $\sum_{i=1}^{N}\left\langle
x_{i}-\hat{x},\bar{x}-\hat{x}\right\rangle=0$ in the third line. ∎
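The norm relations just proved, (III.9) and (P1), can be sanity-checked numerically. The sketch below (Python/NumPy, helper names ours) samples near-consensus points so that the condition $\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}\leq N/2$ of (P1) holds.

```python
import numpy as np

rng = np.random.default_rng(6)

def polar(a):
    u, _, vt = np.linalg.svd(a, full_matrices=False)
    return u @ vt

N, d, r = 10, 6, 3
ok = True
for _ in range(50):
    base = polar(rng.standard_normal((d, r)))
    x = np.array([polar(base + 0.1 * rng.standard_normal((d, r))) for _ in range(N)])
    xhat = x.mean(axis=0)                        # Euclidean mean
    xbar = polar(xhat)                           # IAM
    d_hat = np.sum((x - xhat) ** 2)
    d_bar = np.sum((x - xbar) ** 2)
    ok &= 0.5 * d_bar - 1e-12 <= d_hat <= d_bar + 1e-12          # (III.9)
    if d_bar <= N / 2:
        ok &= np.sum((xhat - xbar) ** 2) <= 4 * r * d_bar**2 / N**2 + 1e-12  # (P1)
```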
###### Proof of Lemma 3.
It follows that
$\displaystyle\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle$
$\displaystyle=$
$\displaystyle\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathcal{P}_{\mathrm{T}_{\mathbf{x}}\mathcal{M}^{N}}(\mathbf{y}-\mathbf{x})\right\rangle$
$\displaystyle=$
$\displaystyle\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle-\sum_{i=1}^{N}\left\langle\nabla\varphi^{t}_{i}(\mathbf{x}),\mathcal{P}_{N_{x_{i}}\mathcal{M}}(y_{i}-x_{i})\right\rangle$
$\displaystyle=$
$\displaystyle\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle+\frac{1}{4}\sum_{i=1}^{N}\left\langle\nabla\varphi^{t}_{i}(\mathbf{x})^{\top}x_{i}+x_{i}^{\top}\nabla\varphi^{t}_{i}(\mathbf{x}),(y_{i}-x_{i})^{\top}(y_{i}-x_{i})\right\rangle.$
Since
$\frac{1}{2}[\nabla\varphi^{t}_{i}(\mathbf{x})^{\top}x_{i}+x_{i}^{\top}\nabla\varphi^{t}_{i}(\mathbf{x})]=\frac{1}{2}\sum_{j=1}^{N}W^{t}_{ij}(x_{i}-x_{j})^{\top}(x_{i}-x_{j})$
is positive semi-definite, we get
$\displaystyle\sum_{i=1}^{N}\left\langle\nabla\varphi^{t}_{i}(\mathbf{x}),\frac{1}{2}x_{i}(y_{i}-x_{i})^{\top}(y_{i}-x_{i})\right\rangle\geq
0.$ (VII.9)
Therefore, we get
$\displaystyle\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle$
$\displaystyle\leq\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle.$
(VII.10)
∎
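The positive semi-definiteness claimed before (VII.9) holds because the matrix is a nonnegative combination of Gram matrices $(x_i-x_j)^{\top}(x_i-x_j)$. A minimal numerical check (random points and weights, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, r = 5, 4, 2
x = rng.standard_normal((N, d, r))
W = rng.random((N, N))
W = (W + W.T) / 2                      # nonnegative symmetric weights (illustrative)

i = 0
# (1/2) * sum_j W_ij (x_i - x_j)^T (x_i - x_j): a weighted sum of Gram matrices
S = 0.5 * sum(W[i, j] * (x[i] - x[j]).T @ (x[i] - x[j]) for j in range(N))
min_eig = np.linalg.eigvalsh(S).min()  # should be >= 0 up to roundoff
```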
###### Proof of Lemma 4.
The largest eigenvalue of
$\nabla^{2}\varphi^{t}(\mathbf{x})=(I_{N}-W^{t})\otimes I_{d}$ is
$L_{\phi}=1-\lambda_{N}(W^{t})$ in Euclidean space, where $\lambda_{N}(W^{t})$
denotes the smallest eigenvalue of $W^{t}$. For any
$\mathbf{x},\mathbf{y}\in(\mathbb{R}^{d\times r})^{N}$, it follows that [36]
$\varphi^{t}(\mathbf{y})-\left[\varphi^{t}(\mathbf{x})+\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle\right]\leq\frac{L_{\phi}}{2}\|\mathbf{y}-\mathbf{x}\|_{\text{F}}^{2}.$
(VII.11)
Together with (III.14), this implies that
$\displaystyle\varphi^{t}(\mathbf{y})-\left[\varphi^{t}(\mathbf{x})+\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}),\mathbf{y}-\mathbf{x}\right\rangle\right]\leq\frac{L_{\phi}}{2}\|\mathbf{x}-\mathbf{y}\|_{\text{F}}^{2}.$
(VII.12)
The proof is completed. ∎
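The identity $L_{\phi}=1-\lambda_{N}(W^{t})$ follows from the fact that the eigenvalues of a Kronecker product are products of eigenvalues. A numerical check with a symmetric doubly stochastic $W$ built from a path-graph Laplacian (an illustrative choice, not the paper's $W^{t}$):

```python
import numpy as np

N, d = 4, 3
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # path-graph adjacency
L = np.diag(A.sum(axis=1)) - A                                # graph Laplacian
W = np.eye(N) - 0.3 * L                                       # symmetric, doubly stochastic

H = np.kron(np.eye(N) - W, np.eye(d))                         # Hessian (I_N - W) ⊗ I_d
L_phi = np.linalg.eigvalsh(H).max()                           # largest eigenvalue
target = 1 - np.linalg.eigvalsh(W).min()                      # 1 - λ_N(W)
```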
###### Proof of Lemma 5.
The proof follows [27, Theorem 3]. We only need to verify the following three
properties:
1. (A1).
(Sufficient descent) There exists a constant $\kappa>0$ and sufficiently large
$K_{1}$ such that for $k\geq K_{1}$,
$\displaystyle\varphi^{t}(\mathbf{x}_{k+1})-\varphi^{t}(\mathbf{x}_{k})\leq-\kappa\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}\cdot\|\mathbf{x}_{k}-\mathbf{x}_{k+1}\|_{\text{F}}.$
2. (A2).
(Stationarity) There exists an index $K_{2}>0$ such that for $k\geq K_{2}$,
$\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}=0\Rightarrow\mathbf{x}_{k}=\mathbf{x}_{k+1}.$
3. (A3).
(Safeguard) There exist a constant $C_{3}>0$ and an index $K_{3}>0$ such that
for $k\geq K_{3}$
$\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}\leq
C_{3}\|\mathbf{x}_{k}-\mathbf{x}_{k+1}\|_{\text{F}}.$
The main difference is that we use Lemma 4 to derive the sufficient descent
property (A1), which we verify first. Using (III.15) of Lemma 4, one has
$\displaystyle\varphi^{t}(\mathbf{x}_{k+1})\leq\varphi^{t}(\mathbf{x}_{k})+\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}_{k}),\mathbf{x}_{k+1}-\mathbf{x}_{k}\right\rangle+\frac{L_{t}}{2}\|\mathbf{x}_{k}-\mathbf{x}_{k+1}\|_{\text{F}}^{2}.$
We first bound the inner-product term:
$\displaystyle\quad\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}_{k}),\mathbf{x}_{k+1}-\mathbf{x}_{k}\right\rangle$
$\displaystyle=\sum_{i=1}^{N}\left\langle\mathrm{grad}\varphi^{t}_{i}(\mathbf{x}_{k}),x_{i,k+1}-x_{i,k}\right\rangle$
$\displaystyle=\sum_{i=1}^{N}\left\langle\mathrm{grad}\varphi^{t}_{i}(\mathbf{x}_{k}),\mathrm{Retr}_{x_{i,k}}(-\alpha\mathrm{grad}\varphi^{t}_{i}(\mathbf{x}_{k}))-x_{i,k}\right\rangle$
$\displaystyle\stackrel{{\scriptstyle\eqref{ineq:ret_second-
order}}}{{\leq}}(M\alpha^{2}\cdot\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}-\alpha)\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}^{2}$
and
$\|\mathbf{x}_{k+1}-\mathbf{x}_{k}\|_{\text{F}}^{2}\stackrel{{\scriptstyle\eqref{ineq:ret_nonexpansive}}}{{\leq}}\alpha^{2}\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}^{2}.$
We now get
$\displaystyle\varphi^{t}(\mathbf{x}_{k+1})\leq\varphi^{t}(\mathbf{x}_{k})+[(MG_{k}+\frac{L_{t}}{2})\alpha^{2}-\alpha]\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}^{2},$
where $G_{k}=\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}$.
Therefore, for any $\beta\in(0,1)$, if
$\alpha<\bar{\alpha}_{k}:=\frac{1-\beta}{MG_{k}+L_{t}/2}$, we have
$\varphi^{t}(\mathbf{x}_{k+1})\leq\varphi^{t}(\mathbf{x}_{k})-\alpha\beta\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}^{2}.$
(VII.13)
Since $\bar{\alpha}_{k}\geq\frac{1-\beta}{MG+L_{t}/2}$, the stepsize
$\alpha<\bar{\alpha}_{k}$ is well defined. Again, by
$\|\mathbf{x}_{k+1}-\mathbf{x}_{k}\|_{\text{F}}^{2}\stackrel{{\scriptstyle\eqref{ineq:ret_nonexpansive}}}{{\leq}}\alpha^{2}\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}^{2},$
we get the sufficient decrease condition in (A1) for any $k\geq 0$ with
$\kappa=\beta$
$\displaystyle\varphi^{t}(\mathbf{x}_{k+1})\leq\varphi^{t}(\mathbf{x}_{k})-\beta\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}\cdot\|\mathbf{x}_{k}-\mathbf{x}_{k+1}\|_{\text{F}}.$
(VII.14)
The condition (A2) is automatically satisfied by the iteration of Algorithm 1.
For (A3), the argument is the same as that of [27, Theorem 3]. By (VII.13), we
have
$\sum_{k=0}^{\infty}\alpha\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}^{2}\leq\varphi^{t}(\mathbf{x}_{0})-\inf\varphi^{t}(\mathbf{x})<\infty$,
which implies
$\lim_{k\rightarrow\infty}\alpha\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}^{2}=0.$
So, since $\alpha>0$ is fixed, there exists $K_{3}>0$ such that
$\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}$ is sufficiently
small for all $k\geq K_{3}$. Using the second-order property of the retraction
$\mathrm{Retr}_{x}(\xi)=x+\xi+\mathcal{O}(\|\xi\|_{\text{F}}^{2})$, we have
the property (A3).
By [44, Theorem 2.3], (A1)-(A2) together with (Ł) imply convergence to a
critical point. With (A3), the convergence rate is sub-linear
if $\theta<1/2$ and linear if $\theta=1/2$. ∎
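For intuition, here is a minimal sketch of the descent behaviour for the sphere case $r=1$ (an illustrative re-implementation, not the paper's Algorithm 1; the weight matrix `W`, stepsize `alpha`, and sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, alpha = 4, 3, 0.2
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
W = np.eye(N) - 0.3 * (np.diag(A.sum(axis=1)) - A)   # symmetric doubly stochastic (assumed)

x = rng.standard_normal((N, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)        # points on the unit sphere

def phi(x):
    # varphi(x) = (1/4) sum_{i,j} W_ij ||x_i - x_j||^2
    return 0.25 * sum(W[i, j] * np.sum((x[i] - x[j]) ** 2)
                      for i in range(N) for j in range(N))

vals = [phi(x)]
for _ in range(50):
    egrad = x - W @ x                                            # row i: x_i - sum_j W_ij x_j
    rgrad = egrad - np.sum(egrad * x, axis=1, keepdims=True) * x # project onto tangent spaces
    x = x - alpha * rgrad
    x /= np.linalg.norm(x, axis=1, keepdims=True)                # retraction: renormalize
    vals.append(phi(x))
```

With a small enough stepsize, the potential decreases monotonically along the iterates, which is exactly the sufficient-descent property (A1) in action.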
###### Proof of Proposition 2.
Let $B:=W\otimes I_{d}$. The necessity is trivial by letting
$y=[B\mathbf{x}]_{i}$ if $x_{1}=x_{2}=\ldots=x_{N}$. Now, if $\mathbf{x}$ is a
first-order critical point, then it follows from Proposition 1 that
$\displaystyle\mathrm{grad}\varphi^{t}_{i}(\mathbf{x})=\nabla\varphi^{t}_{i}(\mathbf{x})-\frac{1}{2}x_{i}(x_{i}^{\top}\nabla\varphi^{t}_{i}(\mathbf{x})+\nabla\varphi^{t}_{i}(\mathbf{x})^{\top}x_{i})$
$\displaystyle=(I_{d}-\frac{1}{2}x_{i}x_{i}^{\top})(\nabla\varphi^{t}_{i}(\mathbf{x})-x_{i}\nabla\varphi^{t}_{i}(\mathbf{x})^{\top}x_{i})=0,\quad\forall
i\in[N].$
Since $I_{d}-\frac{1}{2}x_{i}x_{i}^{\top}$ is invertible, one has
$[B\mathbf{x}]_{i}-x_{i}([B\mathbf{x}]_{i}^{\top}x_{i})=0,\quad\forall
i\in[N].$ (VII.15)
Multiplying both sides by $x_{i}^{\top}$ yields
$x_{i}^{\top}[B\mathbf{x}]_{i}=[B\mathbf{x}]_{i}^{\top}x_{i},\quad\forall
i\in[N].$ (VII.16)
For the sufficiency, let
$\Gamma_{i}:=\sum_{j=1}^{N}W_{ij}(x_{j}^{\top}x_{i})$, $i\in[N].$ From
(VII.15), we get
$x_{i}\Gamma_{i}=\sum_{j=1}^{N}W_{ij}x_{j},\quad\forall i\in[N].$ (VII.17)
Summing the above over $i\in[N]$ yields
$\sum_{i=1}^{N}x_{i}\Gamma_{i}=\sum_{i=1}^{N}x_{i}$. Taking inner product with
$y$ on both sides gives $\sum_{i=1}^{N}\left\langle
y,x_{i}(I_{r}-\Gamma_{i})\right\rangle=0$. Note that $I_{r}-\Gamma_{i}$ is
symmetric for all $i$ due to (VII.16). It is also positive semi-definite.
Since $\left\langle x_{i},y\right\rangle>r-1$ for all $i$, we get that
$\Omega_{i}:=\frac{1}{2}(x_{i}^{\top}y+y^{\top}x_{i})$ is positive definite.
Then, it follows that
$\displaystyle\left\langle
y,x_{i}(I_{r}-\Gamma_{i})\right\rangle=\mathrm{Tr}(\Omega_{i}^{1/2}(I_{r}-\Gamma_{i})\Omega_{i}^{1/2})\geq
0.$
Hence, the equation $\sum_{i=1}^{N}\left\langle
y,x_{i}(I_{r}-\Gamma_{i})\right\rangle=0$ implies that $\Gamma_{i}=I_{r}$ for all $i$,
which in turn implies $x_{1}=x_{2}=\ldots=x_{N}$ by (VII.17).
Furthermore, suppose $y=\bar{x}$, the IAM of $\mathbf{x}$. The
condition ${d}_{2,\infty}(\mathbf{x},\mathcal{X}^{*})<\sqrt{2}$ means that
$\|\bar{x}-x_{i}\|_{\text{F}}^{2}<2$, or equivalently, $\left\langle
y,x_{i}\right\rangle>r-1$ for all $i\in[N]$. ∎
###### Proof of Lemma 7.
We prove it by induction. Suppose (IV.1) holds for some $k$. For $k+1$, we
first have
$\displaystyle\left\langle
x_{i,k}-\alpha\mathrm{grad}\varphi_{i}(\mathbf{x}_{k}),y\right\rangle$
$\displaystyle=$ $\displaystyle\langle
x_{i,k}-\frac{\alpha}{2}x_{i,k}\sum_{j=1}^{N}W_{ij}(x_{j,k}^{\top}x_{i,k}+x_{i,k}^{\top}x_{j,k}),y\rangle+\alpha\sum_{j=1}^{N}W_{ij}\left\langle
x_{j,k},y\right\rangle$ $\displaystyle=$
$\displaystyle\frac{\alpha}{2}\sum_{j=1}^{N}W_{ij}\|x_{i,k}-x_{j,k}\|_{\text{F}}^{2}\cdot\left\langle
x_{i,k},y\right\rangle+(1-\alpha)\left\langle
x_{i,k},y\right\rangle+\alpha\sum_{j=1}^{N}W_{ij}\left\langle
x_{j,k},y\right\rangle$ $\displaystyle\geq$
$\displaystyle\delta\frac{\alpha^{2}}{2}\sum_{j=1}^{N}W_{ij}\|x_{i,k}-x_{j,k}\|_{\text{F}}^{2}+\delta.$
The last inequality follows from $\alpha\leq 1$. Then, since
$x_{i,k+1}=\frac{x_{i,k}-\alpha\mathrm{grad}\varphi_{i}(\mathbf{x}_{k})}{\sqrt{1+\alpha^{2}\|\mathrm{grad}\varphi_{i}(\mathbf{x}_{k})\|_{\text{F}}^{2}}}$
(due to (III.11)), we get
$\displaystyle\left\langle x_{i,k+1},y\right\rangle=$
$\displaystyle\frac{\left\langle
x_{i,k}-\alpha\mathrm{grad}\varphi_{i}(\mathbf{x}_{k}),y\right\rangle}{\sqrt{1+\alpha^{2}\|\mathrm{grad}\varphi_{i}(\mathbf{x}_{k})\|_{\text{F}}^{2}}}$
$\displaystyle\geq$ $\displaystyle\frac{\left\langle
x_{i,k}-\alpha\mathrm{grad}\varphi_{i}(\mathbf{x}_{k}),y\right\rangle}{1+\frac{\alpha^{2}}{2}\|\mathrm{grad}\varphi_{i}(\mathbf{x}_{k})\|_{\text{F}}^{2}}$
(VII.18) $\displaystyle\geq$ $\displaystyle\frac{\left\langle
x_{i,k}-\alpha\mathrm{grad}\varphi_{i}(\mathbf{x}_{k}),y\right\rangle}{1+\frac{\alpha^{2}}{2}\sum_{j=1}^{N}W_{ij}\|x_{i,k}-x_{j,k}\|_{\text{F}}^{2}}\geq\delta,$
(VII.19)
where we used $\sqrt{1+z^{2}}\leq 1+\frac{1}{2}z^{2}$ for any $z\geq 0$ in
(VII.18) and
$\|\mathrm{grad}\varphi_{i}(\mathbf{x}_{k})\|_{\text{F}}^{2}\leq\|\nabla\varphi_{i}(\mathbf{x}_{k})\|_{\text{F}}^{2}\leq\sum_{j=1}^{N}W_{ij}\|x_{i,k}-x_{j,k}\|_{\text{F}}^{2}$
in (VII.19). ∎
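Two ingredients of the proof can be checked directly: the scalar bound $\sqrt{1+z^{2}}\leq 1+\frac{1}{2}z^{2}$ used in (VII.18), and the fact that the normalization retraction in (III.11) stays on the manifold (shown here for the sphere case $r=1$; the sizes and stepsize are illustrative):

```python
import numpy as np

# Scalar bound: sqrt(1 + z^2) <= 1 + z^2 / 2 for z >= 0
z = np.linspace(0.0, 10.0, 1001)
bound_ok = np.all(np.sqrt(1 + z ** 2) <= 1 + z ** 2 / 2 + 1e-12)

# Normalization retraction for r = 1: x_{k+1} = (x - a*g) / sqrt(1 + a^2 ||g||^2)
rng = np.random.default_rng(3)
x = rng.standard_normal(5)
x /= np.linalg.norm(x)
g = rng.standard_normal(5)
g -= (g @ x) * x                       # make g tangent: g ⟂ x
a = 0.7
x_new = (x - a * g) / np.sqrt(1 + a ** 2 * (g @ g))
unit_norm = np.linalg.norm(x_new)      # equals 1, since ||x - a g||^2 = 1 + a^2 ||g||^2
```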
###### Proof of Lemma 8.
We rewrite the objective $\varphi^{t}(\mathbf{x})$ as follows
$\displaystyle
2\varphi^{t}(\mathbf{x})=\sum_{i=1}^{N}\|x_{i}\|_{\text{F}}^{2}-\sum_{i=1,j=1}^{N}W^{t}_{ij}\left\langle
x_{i},x_{j}\right\rangle$ $\displaystyle=$ $\displaystyle\sum_{i=1}^{N}\langle
x_{i},x_{i}-\sum_{j=1}^{N}W^{t}_{ij}x_{j}\rangle$ $\displaystyle=$
$\displaystyle\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathbf{x}\right\rangle.$
(VII.20)
Noting that
$\left\langle\nabla\varphi^{t}(\mathbf{x}),\hat{\mathbf{x}}\right\rangle=0$,
we get
$\displaystyle 2\varphi^{t}(\mathbf{x})$
$\displaystyle=\left\langle\nabla\varphi^{t}(\mathbf{x}),\mathbf{x}-\hat{\mathbf{x}}\right\rangle$
$\displaystyle\stackrel{{\scriptstyle\eqref{restricted strong
convexity}}}{{\geq}}\frac{\mu_{t}L_{t}}{\mu_{t}+L_{t}}\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2}+\frac{1}{\mu_{t}+L_{t}}\|\nabla\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}$
$\displaystyle\geq\mu_{t}\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}^{2},$
where the last inequality follows from
$\|\nabla\varphi^{t}(\mathbf{x})\|_{\text{F}}\geq\mu_{t}\|\mathbf{x}-\hat{\mathbf{x}}\|_{\text{F}}$.
The conclusions are obtained by using Lemma 1.
∎
###### Proof of Lemma 9.
(1). Combining (V.5) with (V.6), we get
$\displaystyle\left\langle\mathbf{x}-\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle\geq\varphi^{t}(\mathbf{x})\cdot(2-\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F},\infty}^{2}).$
(VII.21)
Since $\mathbf{x}\in\mathcal{N}_{R,t}$, invoking (QG’) in Lemma 8, we get
$\left\langle\mathbf{x}-\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle\geq(1-4r\delta_{1,t}^{2})(1-\frac{\delta_{2,t}^{2}}{2})\mu_{t}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2},$
and using the conditions in (V.11) completes the proof.
(2). For $\mathbf{x}\in\mathcal{N}_{l,t},$ combining (V.5), (V.7) and (QG’)
yields
$\displaystyle\quad\left\langle\mathbf{x}-\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle$
$\displaystyle\geq[\mu_{t}(1-4r\delta_{3,t}^{2})-\varphi^{t}(\mathbf{x})]\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}$
$\displaystyle\geq{\frac{1}{2}\mu_{t}}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2},$
where we used the conditions in (V.13).
∎
###### Proof of Proposition 3.
By (V.5), we get
$\displaystyle\quad 2\varphi^{t}(\mathbf{x})$
$\displaystyle=\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}),\mathbf{x}-\bar{\mathbf{x}}\right\rangle+\sum_{i=1}^{N}\left\langle
p_{i},q_{i}\right\rangle$ (VII.22)
$\displaystyle\stackrel{{\scriptstyle\eqref{ineq:error
bound}}}{{\leq}}\frac{2}{\mu_{t}}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}+\sum_{i=1}^{N}\left\langle
p_{i},q_{i}\right\rangle.$
If $\mathbf{x}\in\mathcal{N}_{R,t},$ we use (V.6) to get
$(2-\delta_{2,t}^{2})\varphi^{t}(\mathbf{x})\leq\frac{2}{\mu_{t}}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}.$
If $\mathbf{x}\in\mathcal{N}_{l,t},$ we use (V.7) to get
$2\varphi^{t}(\mathbf{x})\leq\frac{2}{\mu_{t}}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}+\frac{\mu_{t}}{4}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}\stackrel{{\scriptstyle\eqref{ineq:error
bound}}}{{\leq}}\frac{3}{\mu_{t}}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}.$
We conclude the proof by noting $\delta_{2,t}\leq 1/6.$ ∎
###### Proof of Lemma 10.
First, using (P3) we have
$\mathrm{grad}\varphi^{t}_{i}(\mathbf{x})=x_{i}-\sum_{j=1}^{N}W^{t}_{ij}x_{j}-\frac{1}{2}x_{i}\sum_{j=1}^{N}W_{ij}^{t}(x_{i}-x_{j})^{\top}(x_{i}-x_{j}).$
(VII.23)
Since
$\sum_{i=1}^{N}\nabla\varphi^{t}_{i}(\mathbf{x})=\sum_{i=1}^{N}(x_{i}-\sum_{j=1}^{N}W^{t}_{ij}x_{j})=0$,
we have
$\displaystyle\|\sum_{i=1}^{N}\mathrm{grad}\varphi^{t}_{i}(\mathbf{x})\|_{\text{F}}=\frac{1}{2}\|\sum_{i=1}^{N}x_{i}\sum_{j=1}^{N}W_{ij}^{t}(x_{i}-x_{j})^{\top}(x_{i}-x_{j})\|_{\text{F}}$
$\displaystyle\leq\frac{1}{2}\sum_{i=1}^{N}\|\sum_{j=1}^{N}W_{ij}^{t}(x_{i}-x_{j})^{\top}(x_{i}-x_{j})\|_{\text{F}}$
$\displaystyle\leq\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}W_{ij}^{t}\|x_{i}-x_{j}\|_{\text{F}}^{2}=2\varphi^{t}(\mathbf{x})\leq
L_{t}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2},$
where the last inequality follows from (III.15). Moreover, it is clear that in
the embedded Euclidean space we have
$\displaystyle 0$
$\displaystyle\leq\varphi^{t}(\mathbf{x}-\frac{1}{L_{t}}\nabla\varphi^{t}(\mathbf{x}))$
$\displaystyle\stackrel{{\scriptstyle\eqref{ineq:lip_smooth_E}}}{{\leq}}\varphi^{t}(\mathbf{x})+\langle\nabla\varphi^{t}(\mathbf{x}),-\frac{1}{L_{t}}\nabla\varphi^{t}(\mathbf{x})\rangle+\frac{1}{2L_{t}}\|\nabla\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}$
$\displaystyle=\varphi^{t}(\mathbf{x})-\frac{1}{2L_{t}}\|\nabla\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}.$
Since
$\mathrm{grad}\varphi^{t}_{i}(\mathbf{x})=\mathcal{P}_{\mathrm{T}_{x_{i}}\mathcal{M}}(\nabla\varphi^{t}_{i}(\mathbf{x}))$,
we get
$\displaystyle\quad\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}\leq\|\nabla\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}\leq
2L_{t}\cdot\varphi^{t}(\mathbf{x}).$
Finally, it follows from $\mathbf{x}\in\mathcal{N}_{2,t}$ that
$\displaystyle\|\mathrm{grad}\varphi^{t}_{i}(\mathbf{x})\|_{\text{F}}\leq\|\sum_{j=1}^{N}W_{ij}^{t}(x_{j}-x_{i})\|_{\text{F}}\leq
2\delta_{2,t}.$
∎
###### Proof of Proposition 4.
First, we prove it for $\mathbf{x}\in\mathcal{N}_{R,t}$. It follows from (V.5)
and (V.6) that
$\displaystyle\begin{aligned}
\left\langle\mathbf{x}-\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle\geq\Phi_{R}\cdot\varphi^{t}(\mathbf{x}).\end{aligned}$
Combining with (V.17), we get
$\left\langle\mathbf{x}-\bar{\mathbf{x}},\mathrm{grad}\varphi^{t}(\mathbf{x})\right\rangle\geq\frac{\Phi_{R}}{2L_{t}}\|\mathrm{grad}\varphi^{t}(\mathbf{x})\|_{\text{F}}^{2}.$
Secondly, for $\mathbf{x}\in\mathcal{N}_{l,t}$, we have the similar arguments
by combining (V.5) with (V.7). Furthermore, if
$\mathbf{x}\in\mathcal{N}_{R,t}$ or $\mathbf{x}\in\mathcal{N}_{l,t}$, we
notice that (RSI-I) is the convex combination of (V.19) and (V.14). ∎
###### Proof of Lemma 11.
Note that $W^{t}$ is doubly stochastic with $\sigma_{2}^{t}$ as the second
largest singular value. As $\mathbf{x}\in\mathcal{N}_{2,t}$, it follows that
$\|x_{i}-\bar{x}\|_{\text{F}}\leq\delta_{2,t}$ for all $i\in[N]$. We then have
$\displaystyle\quad\max_{i\in[N]}\|\sum_{j=1}^{N}(W^{t}_{ij}-1/N)x_{j}\|_{\text{F}}$
$\displaystyle=\max_{i\in[N]}\|\sum_{j=1}^{N}(W^{t}_{ij}-1/N)(x_{j}-\bar{x})\|_{\text{F}}$
$\displaystyle\leq\max_{i\in[N]}\sum_{j=1}^{N}|W_{ij}^{t}-1/N|\delta_{2,t}\leq\sqrt{N}\sigma_{2}^{t}\delta_{2,t},$
where the last inequality follows from the bound on the total variation
distance between any row of $W^{t}$ and $\frac{1}{N}\textbf{1}_{N}^{\top}$
[55, Prop. 3], [56, Sec. 1.1.2]. The conclusion is obtained by setting
$t\geq\lceil\log_{\sigma_{2}}(\frac{1}{2\sqrt{N}})\rceil$. ∎
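The mixing bound used in the last inequality can also be verified numerically for a small symmetric doubly stochastic $W$ (the path-graph construction below is an illustrative assumption):

```python
import numpy as np

N = 4
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # path-graph adjacency
W = np.eye(N) - 0.3 * (np.diag(A.sum(axis=1)) - A)             # symmetric doubly stochastic

sigma2 = np.linalg.svd(W, compute_uv=False)[1]                  # second largest singular value
gaps_ok = True
for t in range(1, 7):
    Wt = np.linalg.matrix_power(W, t)
    worst_row = np.abs(Wt - 1.0 / N).sum(axis=1).max()          # max_i sum_j |W^t_ij - 1/N|
    gaps_ok &= worst_row <= np.sqrt(N) * sigma2 ** t + 1e-12
```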
###### Proof of Lemma 12.
Let $\hat{x}=\frac{1}{N}\sum_{i=1}^{N}x_{i}$ and
$\hat{y}=\frac{1}{N}\sum_{i=1}^{N}y_{i}$ be the Euclidean average points of
$\mathbf{x}$ and $\mathbf{y}$. Then, $\bar{x}$ and $\bar{y}$ are the
(generalized) polar factors [54] of $\hat{x}$ and $\hat{y}$, respectively. We
have
$\sigma_{r}(\hat{x})\stackrel{{\scriptstyle\eqref{ineq:reg_condition-2-3}}}{{\geq}}1-2\frac{\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}}{N}\stackrel{{\scriptstyle(i)}}{{\geq}}1-2\delta_{1,t}^{2}>0,$
where $(i)$ follows from $\mathbf{x}\in\mathcal{N}_{1,t}$. Similarly, we have
$\sigma_{r}(\hat{y})\geq 1-2\delta_{1,t}^{2}$ since
$\mathbf{y}\in\mathcal{N}_{1,t}.$
Then, it follows from [54, Theorem 2.4] that
$\|\bar{y}-\bar{x}\|_{\text{F}}\leq\frac{2}{\sigma_{r}(\hat{x})+\sigma_{r}(\hat{y})}\|\hat{y}-\hat{x}\|_{\text{F}}\leq\frac{1}{1-2\delta_{1,t}^{2}}\|\hat{x}-\hat{y}\|_{\text{F}}.$
The proof is completed. ∎
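The perturbation bound from [54, Theorem 2.4] can be probed numerically; `polar_factor` below computes the polar factor via the thin SVD, and the near-orthonormal random inputs are illustrative assumptions:

```python
import numpy as np

def polar_factor(m):
    """Polar factor U V^T from the thin SVD m = U S V^T."""
    U, _, Vt = np.linalg.svd(m, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(4)
d, r = 6, 2
xh = np.linalg.qr(rng.standard_normal((d, r)))[0] + 0.1 * rng.standard_normal((d, r))
yh = xh + 0.05 * rng.standard_normal((d, r))       # a nearby perturbation of xh

sx = np.linalg.svd(xh, compute_uv=False)[-1]       # sigma_r(\hat{x})
sy = np.linalg.svd(yh, compute_uv=False)[-1]       # sigma_r(\hat{y})
lhs = np.linalg.norm(polar_factor(yh) - polar_factor(xh))
rhs = 2.0 / (sx + sy) * np.linalg.norm(yh - xh)    # the [54, Thm 2.4] bound
```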
We use Lemma 12 to prove the following lemma.
###### Lemma 14.
Suppose that $\mathbf{x}_{k}\in\mathcal{N}_{R,t}$, $\mathbf{x}_{k+1}\in\mathcal{N}_{1,t}$,
and
$x_{i,k+1}=\mathrm{Retr}_{x_{i,k}}(-\alpha\mathrm{grad}\varphi^{t}_{i}(\mathbf{x}_{k}))$,
where $\delta_{1,t}$ and $\delta_{2,t}$ are given by (V.11). Then,
$\displaystyle\|\bar{x}_{k}-\bar{x}_{k+1}\|_{\text{F}}\leq\frac{L_{t}}{1-2\delta_{1,t}^{2}}\frac{\alpha+2M\alpha^{2}L_{t}}{N}\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2}.$
###### Proof.
From Lemma 2 and Lemma 10, we have
$\displaystyle\quad\|\hat{x}_{k}-\hat{x}_{k+1}\|_{\text{F}}$
$\displaystyle\leq\|\hat{x}_{k}-\frac{\alpha}{N}\sum_{i=1}^{N}\mathrm{grad}\varphi^{t}_{i}(\mathbf{x}_{k})-\hat{x}_{k+1}\|_{\text{F}}+\|\frac{\alpha}{N}\sum_{i=1}^{N}\mathrm{grad}\varphi^{t}_{i}(\mathbf{x}_{k})\|_{\text{F}}$
$\displaystyle\stackrel{{\scriptstyle\eqref{ineq:ret_second-
order}}}{{\leq}}\frac{M}{N}\sum_{i=1}^{N}\|\alpha\mathrm{grad}\varphi^{t}_{i}(\mathbf{x}_{k})\|_{\text{F}}^{2}+\alpha\|\frac{1}{N}\sum_{i=1}^{N}\mathrm{grad}\varphi^{t}_{i}(\mathbf{x}_{k})\|_{\text{F}}$
$\displaystyle\stackrel{{\scriptstyle\eqref{ineq:bound-of-sum-
gradh}}}{{\leq}}\frac{2L_{t}^{2}M\alpha^{2}+L_{t}\alpha}{N}\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2}.$
Therefore, it follows from Lemma 12 that
$\displaystyle\|\bar{x}_{k}-\bar{x}_{k+1}\|_{\text{F}}\leq\frac{1}{1-2\delta_{1,t}^{2}}\cdot\|\hat{x}_{k}-\hat{x}_{k+1}\|_{\text{F}}$
$\displaystyle\leq\frac{L_{t}}{1-2\delta_{1,t}^{2}}\frac{\alpha+2M\alpha^{2}L_{t}}{N}\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2}.$
∎
###### Proof of Lemma 13.
First, we verify that $\mathbf{x}_{k+1}\in\mathcal{N}_{1,t}$. Since
$\mathbf{x}_{k}\in\mathcal{N}_{R,t}$, it follows from Lemma 9 that
$\displaystyle\quad\|\mathbf{x}_{k+1}-\bar{\mathbf{x}}_{k+1}\|_{\text{F}}^{2}\leq\|\mathbf{x}_{k+1}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2}$
(VII.24)
$\displaystyle\leq\sum_{i=1}^{N}\|x_{i,k}-\alpha\mathrm{grad}\varphi^{t}_{i}(\mathbf{x}_{k})-\bar{x}_{k}\|_{\text{F}}^{2}$
$\displaystyle=\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2}-2\alpha\left\langle\mathrm{grad}\varphi^{t}(\mathbf{x}_{k}),\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\right\rangle+\|\alpha\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}^{2}$
$\displaystyle\stackrel{{\scriptstyle\eqref{ineq: lower bound by gradient and
distance}}}{{\leq}}\left(1-2\alpha(1-\nu)\gamma_{R,t}\right)\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2}+\left(\alpha^{2}-\frac{\alpha\nu\Phi}{L_{t}}\right)\|\mathrm{grad}\varphi^{t}(\mathbf{x}_{k})\|_{\text{F}}^{2},$
for any $\nu\in[0,1]$, where the last inequality holds by noting $\Phi\geq 1$
for $\mathbf{x}\in\mathcal{N}_{R,t}$. By letting $\nu=1$ and
$\alpha\leq\frac{\Phi}{L_{t}}$, we get
$\|\mathbf{x}_{k+1}-\bar{\mathbf{x}}_{k+1}\|_{\text{F}}^{2}\leq\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2},$
(VII.25)
and thus $\mathbf{x}_{k+1}\in\mathcal{N}_{1,t}$.
Next, let us verify $\mathbf{x}_{k+1}\in\mathcal{N}_{2,t}$. For each
$i\in[N]$, one has
$\displaystyle\|x_{i,k+1}-\bar{x}_{k}\|_{\text{F}}$
$\displaystyle\stackrel{{\scriptstyle\eqref{ineq:ret_nonexpansive}}}{{\leq}}$
$\displaystyle\|x_{i,k}-\alpha\mathrm{grad}\varphi^{t}_{i}(\mathbf{x}_{k})-\bar{x}_{k}\|_{\text{F}}$
$\displaystyle\stackrel{{\scriptstyle\eqref{rewrite}}}{{=}}$
$\displaystyle\|(1-\alpha)(x_{i,k}-\bar{x}_{k})+\alpha(\hat{x}_{k}-\bar{x}_{k})+\alpha\sum_{j=1}^{N}{W}_{ij}^{t}(x_{j,k}-\hat{x}_{k})+\frac{\alpha}{2}x_{i,k}\sum_{j=1}^{N}W_{ij}^{t}(x_{i,k}-x_{j,k})^{\top}(x_{i,k}-x_{j,k})\|_{\text{F}}$
$\displaystyle\leq$
$\displaystyle(1-\alpha)\delta_{2,t}+\alpha\|\hat{x}_{k}-\bar{x}_{k}\|_{\text{F}}+\alpha\|\sum_{j=1}^{N}({W}_{ij}^{t}-\frac{1}{N})x_{j,k}\|_{\text{F}}+\frac{1}{2}\|{\alpha}\sum_{j=1}^{N}W_{ij}^{t}(x_{i,k}-x_{j,k})^{\top}(x_{i,k}-x_{j,k})\|_{\text{F}}$
$\displaystyle\stackrel{{\scriptstyle\eqref{key}}}{{\leq}}$
$\displaystyle(1-\alpha)\delta_{2,t}+2\alpha\delta_{1,t}^{2}\sqrt{r}+\alpha\|\sum_{j=1}^{N}({W}_{ij}^{t}-\frac{1}{N})x_{j,k}\|_{\text{F}}+2{\alpha}\delta_{2,t}^{2}$
$\displaystyle\stackrel{{\scriptstyle\eqref{lem:bound_of_2_infty}}}{{\leq}}$
$\displaystyle(1-\frac{\alpha}{2})\delta_{2,t}+2\alpha\delta_{1,t}^{2}\sqrt{r}+2{\alpha}\delta_{2,t}^{2}.$
Next, invoking Lemma 14, we get
$\displaystyle\|\bar{x}_{k}-\bar{x}_{k+1}\|_{\text{F}}\leq
L_{t}\cdot\frac{2M\alpha^{2}L_{t}+\alpha}{N(1-2\delta_{1,t}^{2})}\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2}\leq\frac{10\alpha\delta_{1,t}^{2}}{1-2\delta_{1,t}^{2}},$
where the last inequality follows from $\alpha\leq\frac{1}{M}$ and $L_{t}\leq
2$. Therefore, using the conditions on $\delta_{1,t}$ and $\delta_{2,t}$ in
(V.11) gives
$\displaystyle\|x_{i,k+1}-\bar{x}_{k+1}\|_{\text{F}}\leq\|x_{i,k+1}-\bar{x}_{k}\|_{\text{F}}+\|\bar{x}_{k}-\bar{x}_{k+1}\|_{\text{F}}$
$\displaystyle\leq$
$\displaystyle(1-\frac{\alpha}{2})\delta_{2,t}+2\alpha\delta_{1,t}^{2}\sqrt{r}+2{\alpha}\delta_{2,t}^{2}+\frac{10}{1-2\delta_{1,t}^{2}}\alpha\delta_{1,t}^{2}\leq\delta_{2,t}.$
The proof is completed. ∎
###### Proof of Theorem 2.
(1). Since $0<\alpha\leq\min\{1,\frac{\Phi}{L_{t}},\frac{1}{M}\}$, Lemma 13
gives $\mathbf{x}_{k}\in\mathcal{N}_{R,t}$ for all $k\geq 0$. By
choosing any $\nu\in(0,1)$ and $\alpha\leq\frac{\nu\Phi}{L_{t}}$, we get from
(VII.24) that
$\|\mathbf{x}_{k+1}-\bar{\mathbf{x}}_{k+1}\|_{\text{F}}^{2}\leq(1-2\alpha(1-\nu)\gamma_{R,t})\|\mathbf{x}_{k}-\bar{\mathbf{x}}_{k}\|_{\text{F}}^{2}.$
(VII.26)
Hence, $\mathbf{x}_{k}$ converges to the optimal set $\mathcal{X}^{*}$
Q-linearly. Furthermore, if $\alpha\leq\frac{2}{2MG+L_{t}}$, it follows from
Lemma 5 that the limit point of $\mathbf{x}_{k}$ is unique. Hence,
$\bar{\mathbf{x}}_{k}$ also converges to a single point.
(2). If $\mathbf{x}_{k}\in\mathcal{N}_{l,t}$, we have the constant
$\Phi=2-\frac{1}{2}\|\mathbf{x}-\bar{\mathbf{x}}\|_{\text{F}}^{2}>1$ in
Proposition 4. So, for $\alpha\leq\frac{1}{L_{t}+2MG}\leq\frac{\Phi}{L_{t}}$, we
have $\mathbf{x}_{k+1}\in\mathcal{N}_{l,t}$ by using the sufficient decrease
inequality (VII.13). The rest of the proof follows the same argument as in (1). ∎
## References
* [1] J. Markdahl, J. Thunberg, and J. Goncalves, “Almost global consensus on the $n$-sphere,” IEEE Transactions on Automatic Control, vol. 63, no. 6, pp. 1664–1675, 2017.
* [2] J. Markdahl, J. Thunberg, and J. Goncalves, “High-dimensional Kuramoto models on Stiefel manifolds synchronize complex networks almost globally,” Automatica, vol. 113, p. 108736, 2020.
* [3] J. Markdahl, “A geometric obstruction to almost global synchronization on Riemannian manifolds,” arXiv preprint arXiv:1808.00862, 2018.
* [4] D. A. Paley, “Stabilization of collective motion on a sphere,” Automatica, vol. 45, no. 1, pp. 212–216, 2009.
* [5] S. Al-Abri, W. Wu, and F. Zhang, “A gradient-free three-dimensional source seeking strategy with robustness analysis,” IEEE Transactions on Automatic Control, vol. 64, no. 8, pp. 3439–3446, 2018.
* [6] M. Lohe, “Quantum synchronization over quantum networks,” Journal of Physics A: Mathematical and Theoretical, vol. 43, no. 46, p. 465301, 2010.
* [7] A. Sarlette and R. Sepulchre, “Synchronization on the circle,” arXiv preprint arXiv:0901.2408, 2009.
* [8] B. Afsari, “Riemannian $l^{p}$ center of mass: existence, uniqueness, and convexity,” Proceedings of the American Mathematical Society, vol. 139, no. 2, pp. 655–673, 2011.
* [9] P.-A. Absil, R. Mahony, and R. Sepulchre, Optimization algorithms on matrix manifolds. Princeton University Press, 2009.
* [10] N. Boumal, P.-A. Absil, and C. Cartis, “Global rates of convergence for nonconvex optimization on manifolds,” IMA Journal of Numerical Analysis, vol. 39, no. 1, pp. 1–33, 2019.
* [11] Q. Rentmeesters, Algorithms for data fitting on some common homogeneous spaces. PhD thesis, Université Catholique de Louvain, Louvain, Belgium, 2013.
* [12] R. Zimmermann, “A matrix-algebraic algorithm for the Riemannian logarithm on the Stiefel manifold under the canonical metric,” SIAM Journal on Matrix Analysis and Applications, vol. 38, no. 2, pp. 322–342, 2017.
* [13] A. Sarlette and R. Sepulchre, “Consensus optimization on manifolds,” SIAM Journal on Control and Optimization, vol. 48, no. 1, pp. 56–76, 2009.
* [14] H. Zhang and W. Yin, “Gradient methods for convex minimization: better rates under weaker conditions,” arXiv preprint arXiv:1303.4645, 2013.
* [15] R. Tron, B. Afsari, and R. Vidal, “Riemannian consensus for manifolds with bounded curvature,” IEEE Transactions on Automatic Control, vol. 58, no. 4, pp. 921–934, 2012.
* [16] S. Bonnabel, “Stochastic gradient descent on Riemannian manifolds,” IEEE Transactions on Automatic Control, vol. 58, no. 9, pp. 2217–2229, 2013.
* [17] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, “Randomized gossip algorithms,” IEEE transactions on information theory, vol. 52, no. 6, pp. 2508–2530, 2006.
* [18] R. Sepulchre, “Consensus on nonlinear spaces,” Annual reviews in control, vol. 35, no. 1, pp. 56–64, 2011.
* [19] A. Sarlette, S. E. Tuna, V. D. Blondel, and R. Sepulchre, “Global synchronization on the circle,” IFAC Proceedings Volumes, vol. 41, no. 2, pp. 9045–9050, 2008.
* [20] J. D. Lee, I. Panageas, G. Piliouras, M. Simchowitz, M. I. Jordan, and B. Recht, “First-order methods almost always avoid strict saddle points,” Math. Program., vol. 176, p. 311–337, 2019.
* [21] J. Markdahl, “Synchronization on Riemannian manifolds: Multiply connected implies multistable,” arXiv preprint arXiv:1906.07452, 2019.
* [22] C. Lageman and Z. Sun, “Consensus on spheres: Convergence analysis and perturbation theory,” in 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 19–24, IEEE, 2016.
* [23] C. Ma, K. Wang, Y. Chi, and Y. Chen, “Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution,” Foundations of Computational Mathematics, 2019.
* [24] N. Boumal, “Nonconvex phase synchronization,” SIAM Journal on Optimization, vol. 26, no. 4, pp. 2355–2377, 2016.
* [25] A. Edelman, T. A. Arias, and S. T. Smith, “The geometry of algorithms with orthogonality constraints,” SIAM journal on Matrix Analysis and Applications, vol. 20, no. 2, pp. 303–353, 1998.
* [26] T. E. Abrudan, J. Eriksson, and V. Koivunen, “Steepest descent algorithms for optimization under unitary matrix constraint,” IEEE Transactions on Signal Processing, vol. 56, no. 3, pp. 1134–1147, 2008.
* [27] H. Liu, A. M.-C. So, and W. Wu, “Quadratic optimization with orthogonality constraint: Explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods,” Mathematical Programming Series A, vol. 178, no. 1-2, pp. 215–262, 2019.
* [28] S. Chen, S. Ma, A. Man-Cho So, and T. Zhang, “Proximal gradient method for nonsmooth optimization over the Stiefel manifold,” SIAM Journal on Optimization, vol. 30, no. 1, pp. 210–239, 2020.
* [29] X. Li, S. Chen, Z. Deng, Q. Qu, Z. Zhu, and A. M. C. So, “Nonsmooth optimization over Stiefel manifold: Riemannian subgradient methods,” arXiv preprint arXiv:1911.05047, 2019.
* [30] P.-A. Absil, R. Mahony, and J. Trumpf, “An extrinsic look at the Riemannian Hessian,” in International Conference on Geometric Science of Information, pp. 361–368, Springer, 2013.
* [31] W. H. Yang, L.-H. Zhang, and R. Song, “Optimality conditions for the nonlinear programming problems on Riemannian manifolds,” Pacific J. Optimization, vol. 10, no. 2, pp. 415–434, 2014.
* [32] A. S. Berahas, R. Bollapragada, N. S. Keskar, and E. Wei, “Balancing communication and computation in distributed optimization,” IEEE Transactions on Automatic Control, vol. 64, no. 8, pp. 3141–3155, 2018.
* [33] J. Tsitsiklis, Problems in decentralized decision making and computation. PhD thesis, MIT, 1984.
* [34] A. Nedic, A. Ozdaglar, and P. A. Parrilo, “Constrained consensus and optimization in multi-agent networks,” IEEE Transactions on Automatic Control, vol. 55, no. 4, pp. 922–938, 2010.
* [35] A. Nedić, A. Olshevsky, and M. G. Rabbat, “Network topology and communication-computation tradeoffs in decentralized optimization,” Proceedings of the IEEE, vol. 106, no. 5, pp. 953–976, 2018.
* [36] Y. Nesterov, Introductory lectures on convex optimization: A basic course, vol. 87. Springer Science & Business Media, 2013.
* [37] M. Moakher, “Means and averaging in the group of rotations,” SIAM Journal on Matrix Analysis and Applications, vol. 24, no. 1, pp. 1–16, 2002.
* [38] B. Afsari, R. Tron, and R. Vidal, “On the convergence of gradient descent for finding the Riemannian center of mass,” SIAM Journal on Control and Optimization, vol. 51, no. 3, pp. 2230–2260, 2013.
* [39] K. Grove and H. Karcher, “How to conjugate $C^{1}$-close group actions,” Mathematische Zeitschrift, vol. 132, no. 1, pp. 11–20, 1973.
* [40] H. Karcher, “Riemannian center of mass and mollifier smoothing,” Communications on pure and applied mathematics, vol. 30, no. 5, pp. 509–541, 1977.
* [41] Z. Wen and W. Yin, “A feasible method for optimization with orthogonality constraints,” Mathematical Programming, vol. 142, no. 1-2, pp. 397–434, 2013.
* [42] H. Zhang and S. Sra, “First-order methods for geodesically convex optimization,” in Conference on Learning Theory, pp. 1617–1638, 2016.
* [43] P.-A. Absil, R. Mahony, and B. Andrews, “Convergence of the iterates of descent methods for analytic cost functions,” SIAM Journal on Optimization, vol. 16, no. 2, pp. 531–547, 2005.
* [44] R. Schneider and A. Uschmajew, “Convergence results for projected line-search methods on varieties of low-rank matrices via Łojasiewicz inequality,” SIAM Journal on Optimization, vol. 25, no. 1, pp. 622–646, 2015.
* [45] H. Karimi, J. Nutini, and M. Schmidt, “Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition,” in Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 795–811, Springer, 2016.
* [46] Z.-Q. Luo and P. Tseng, “Error bounds and convergence analysis of feasible descent methods: a general approach,” Annals of Operations Research, vol. 46, no. 1, pp. 157–178, 1993.
* [47] D. Drusvyatskiy and A. S. Lewis, “Error bounds, quadratic growth, and linear convergence of proximal methods,” Mathematics of Operations Research, vol. 43, no. 3, pp. 919–948, 2018.
* [48] M.-C. Yue, Z. Zhou, and A. Man-Cho So, “On the quadratic convergence of the cubic regularization method under a local error bound condition,” SIAM Journal on Optimization, vol. 29, no. 1, pp. 904–932, 2019.
* [49] H. Liu, M.-C. Yue, and A. M.-C. So, “On the estimation performance and convergence rate of the generalized power method for phase synchronization,” SIAM J. Optim., vol. 27, no. 4, pp. 2426–2446, 2017.
* [50] S. Chen, First-Order Algorithms for Structured Optimization: Convergence, Complexity and Applications. PhD thesis, The Chinese University of Hong Kong (Hong Kong), 2019.
* [51] Z. Zhu, T. Ding, D. Robinson, M. Tsakiris, and R. Vidal, “A linearly convergent method for non-smooth non-convex optimization on the grassmannian with applications to robust subspace and dictionary learning,” in Advances in Neural Information Processing Systems, pp. 9442–9452, 2019.
* [52] Y. Zhong and N. Boumal, “Near-optimal bounds for phase synchronization,” SIAM Journal on Optimization, vol. 28, no. 2, pp. 989–1016, 2018.
* [53] A. Nedic, A. Olshevsky, and W. Shi, “Achieving geometric convergence for distributed optimization over time-varying graphs,” SIAM Journal on Optimization, vol. 27, no. 4, pp. 2597–2633, 2017.
* [54] W. Li and W. Sun, “Perturbation bounds of unitary and subunitary polar factors,” SIAM journal on matrix analysis and applications, vol. 23, no. 4, pp. 1183–1193, 2002.
* [55] P. Diaconis and D. Stroock, “Geometric bounds for eigenvalues of markov chains,” The Annals of Applied Probability, pp. 36–61, 1991.
* [56] S. Boyd, P. Diaconis, and L. Xiao, “Fastest mixing markov chain on a graph,” SIAM review, vol. 46, no. 4, pp. 667–689, 2004.
|
# “I Choose Assistive Devices That Save My Face”
A Study on Perceptions of Accessibility and Assistive Technology Use Conducted
in China
Franklin Mingzhe Li, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, <EMAIL_ADDRESS>; Di Laura Chen, University of Toronto, Toronto, Ontario, Canada, <EMAIL_ADDRESS>; Mingming Fan, Rochester Institute of Technology, Rochester, New York, USA, <EMAIL_ADDRESS>; and Khai N. Truong, University of Toronto, Toronto, Ontario, Canada, <EMAIL_ADDRESS>
(2021)
###### Abstract.
Despite the potential benefits of assistive technologies (ATs) for people with
various disabilities, only around 7% of Chinese with disabilities have had an
opportunity to use ATs. Even for those who have used ATs, the abandonment rate
was high. Although China has the world’s largest population with disabilities,
prior research exploring how ATs are used and perceived, and why ATs are
abandoned, has been conducted primarily in North America and Europe. In this
paper, we present an interview study conducted in China with 26 people with
various disabilities to understand their practices, challenges, perceptions,
and misperceptions of using ATs. From the study, we learned about factors that
influence AT adoption practices (e.g., misuse of accessible infrastructure,
issues with replicating existing commercial ATs), challenges using ATs in
social interactions (e.g., Chinese stigma), and misperceptions about ATs
(e.g., ATs should overcome inaccessible social infrastructures). Informed by
the findings, we derive a set of design considerations to bridge the existing
gaps in AT design (e.g., manual vs. electronic ATs) and to improve ATs’ social
acceptability in China.
Assistive technology, Accessibility, People with disabilities, Qualitative
study, Interview, China, Misperceptions
journalyear: 2021; copyright: rightsretained; conference: CHI Conference on Human Factors in Computing Systems, May 8–13, 2021, Yokohama, Japan; booktitle: CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan; doi: 10.1145/3411764.3445321; isbn: 978-1-4503-8096-6/21/05; ccs: Human-centered computing, Empirical studies in accessibility
## 1\. Introduction
China has the world’s largest population with disabilities (83 million) (Hays,
2011), which is twice as large as that of the US (Federation, 2014; Taylor,
2017). From many Chinese people’s perspective, having a disability is linked
to past wrongdoings (Campbell and Uren, 2011), and they view disability as a
problem that needs to be “fixed” or pitied (Susanne et al., 2018), which
creates a barrier in social interactions between people with disabilities and
the general public (Campbell and Uren, 2011). Prior work has reported that
Chinese people with disabilities are largely invisible from the public in both
urban and rural areas (Dai, 2017), and have limited education and presence in
the workplace. Such a large population with disabilities also severely lacks
the care offered by trained professionals. For example, China has only 1/185
as many physiotherapists per person as Europe (Hays, 2011). Given the large
number of people with disabilities and the serious shortage of trained
professionals that can offer help to people with disabilities in China,
assistive technologies are viewed as an appealing solution to assist people
with disabilities. However, only about 7% of the Chinese with disabilities
have had an opportunity to use ATs (the office of the second national sample
survey of people with disability, 2007).
The practices and challenges surrounding AT and design considerations for
improving the social acceptability and adoption of ATs have been the subject
of many research studies in the past (e.g., (Shinohara and Wobbrock, 2011;
Profita et al., 2016; McNaney et al., 2014; Boiani et al., 2019; Asghar et
al., 2019b; Asghar et al., 2019a)). However, these studies were conducted
primarily in North America (Shinohara and Wobbrock, 2011; Profita et al.,
2016; McNaney et al., 2014) and Europe (Boiani et al., 2019; Asghar et al.,
2019b; Asghar et al., 2019a). Given the large population with disabilities in
China, the shortage of trained professional caregivers, the low usage rate of
ATs, and different cultural contexts from North America or Europe, it is
important to examine the practices and challenges surrounding AT acceptability
and adoption within China specifically. In this work, we sought to answer the
following two overarching research questions (RQs):
* RQ1: What are the practices and challenges surrounding AT use by people with
various disabilities?
* RQ2: What are the design factors that influence the adoption and social
acceptability of ATs?
To answer the research questions, we conducted a semi-structured interview
study with 26 participants with various disabilities: eight with visual
impairments, eight with hearing loss, eight with motor impairments, and two
with cerebral palsy. From the study, we articulated current problems with AT
adoption (e.g., misuse of accessible infrastructure, issues with replicating
existing commercial ATs) and challenges of using ATs in social interactions
(e.g., Chinese stigma). We then revealed existing misperceptions about ATs
(e.g., ATs should overcome inaccessible social infrastructures, ATs with more
functionalities are better) and compared our findings with misperceptions
about ATs in North America (Shinohara and Wobbrock, 2011). Informed by the
findings, we further showed a set of design considerations for improving the
social acceptability of ATs, and bridging the existing gaps in AT design
(e.g., manual vs. electronic ATs, mainstream technologies vs. ATs). By
conducting a study similar to previous research but with participants in China
specifically, we contribute an understanding of the practices and challenges
surrounding AT use by people with various disabilities and provide associated
AT design recommendations for the Chinese context.
## 2\. BACKGROUND AND RELATED WORK
### 2.1. People with Disabilities in China
Despite China’s rapid urbanization, the majority of people with disabilities
still reside in rural areas (Susanne et al., 2018). Although China has the
largest population with disabilities, people with disabilities are rarely seen
in public spaces (Dai, 2017). Chinese with disabilities also have limited
education compared to other countries (Hays, 2011). In 2016, nearly 20% of
Chinese with disabilities were either illiterate or had no schooling (Susanne
et al., 2018). Furthermore, Kim et al. compared special education between
China and the United States from national educational statistics (Kim et al.,
2019; of Education of the People’s Republic of China, 2018; of Education,
2018). Around 48% of people with disabilities in China went to special schools
instead of studying in regular schools with able-bodied students in 2017 (Kim
et al., 2019; of Education of the People’s Republic of China, 2018). In
contrast, this number was less than 3% for people with disabilities in the
United States (Kim et al., 2019; of Education, 2018).
In terms of employment, China adopted multiple methods, such as employment by
proportion, concentrated employment, and non-profit job allocation to support
the employment of people with disabilities (Guo, 2014). However, only 28% of
Chinese with disabilities were working in 2017 (Susanne et al., 2018). Due to
the existence of prejudices in the workplace, many people from the general
public assume that people with disabilities cannot productively contribute to
the economic growth or the society, and should have specialized career paths
(e.g., visually impaired individuals trained to be massagers) that often
separate them from the general public (Susanne et al., 2018).
People with disabilities in China, for a long time, were referred to as “can
fei,” a combination of two characters meaning “incomplete or deficient” and
“useless.” Starting from the 1990s, people started using the word “can ji,”
changing the latter character to one meaning “disease or sickness.” However,
the term “can ji” implies that people with disabilities have some kind of
incurable ailment that renders them abnormal. Unfortunately, this term is
still widely used today even though the term “can zhang” has been suggested
(replacing the second character with one meaning “obstacle or barrier”) (Dai,
2017). Some Chinese parents still consider having a child with disabilities as
linked to wrongdoings in the past (Campbell and Uren, 2011). In sum, it is not
uncommon that many Chinese people still hold a stigmatized view that
disability is a problem to be “fixed” or pitied. In our work, we explore how
the traditional Chinese stigma affects the AT adoption in social interactions.
### 2.2. AT Adoption
Given the large number of people with disabilities and the shortage of trained
professionals that could offer help to people with disabilities in China, ATs
can be an appealing solution to help people with disabilities improve daily
functioning, enable a person to successfully live at home and in the
community, and enhance independence (Scherer, 1996). According to the national
sample survey of people with disability in China, only 7% of the population
with disabilities in China has ever used an AT (the office of the second
national sample survey of people with disability, 2007). Within this 7% of AT
users, over 20% abandoned their ATs (Wang et al., 2009). As a
result, it is important to understand how people with disabilities use ATs,
why they abandon certain ATs, and the challenges that they encounter when
using ATs. To investigate the reasons for AT abandonment, prior research
explored the social and personal factors that influence AT adoption and usage
(Phillips and Zhao, 1993; Deibel, 2013; Kane et al., 2009; Kintsch and
DePaula, 2002; Parette and Scherer, 2004; Shinohara and Wobbrock, 2011). To
improve the ATs’ adoption rate, it is important to involve AT users in the
entire design process (Phillips and Zhao, 1993; Riemer-Reiss and Wacker, 2000)
or empower them to “DIY” their own AT devices (Hurst and Tobias, 2011). In
terms of the AT design, Riemer-Reiss and Wacker (Riemer-Reiss and Wacker,
2000) conducted a survey study with 115 individuals with various disabilities
and concluded that ATs must meet an important functional need to improve the
adoption rate, similar to what Kintsch and DePaula found (Kintsch and DePaula,
2002). Other factors, including frustration tolerance, minimized
stigmatization, and willingness to incorporate ATs into daily routine, could
help to reduce technology abandonment (Kintsch and DePaula, 2002). Deibel
(Deibel, 2013) further presented a generalized heuristic model for
understanding various factors that influence the adoption and usage of ATs,
such as device necessity, task motivation, physical effort, and cognitive
effort. Different environments, such as workplaces or social interactions,
also affect the choice of ATs (Carmien and Fischer, 2008; Shinohara and
Wobbrock, 2011; Wahidin et al., 2018). In our work, we examine AT adoption
in China through factors that affect AT choices and the unique needs for AT
customization.
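To give a rough sense of scale, the figures reported above (83 million people with disabilities, roughly 7% AT usage, and over 20% abandonment among AT users) imply millions of affected people. A minimal back-of-envelope sketch, where the population and rate figures come from the text and the derived counts are illustrative estimates, not reported survey statistics:

```python
# Back-of-envelope estimate from figures reported in the text.
# The derived counts are illustrative, not survey results.
population_with_disabilities = 83_000_000  # reported for China
at_usage_rate = 0.07        # ~7% have ever used an AT
abandonment_rate = 0.20     # over 20% of AT users abandoned their ATs

at_users = population_with_disabilities * at_usage_rate
abandoned = at_users * abandonment_rate

print(f"Estimated AT users:     {at_users:,.0f}")   # roughly 5.8 million
print(f"Estimated abandonment:  {abandoned:,.0f}")  # roughly 1.2 million
```

Even under these coarse assumptions, the abandonment figure alone exceeds a million people, which motivates studying why ATs are abandoned.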
Mainstream technologies, such as mobile devices, have been explored for
accessibility purposes (e.g., (Fan et al., 2020; Li et al., 2020; Bigham et
al., 2010; Guo et al., 2016; Kane et al., 2009, 2008; Kianpisheh et al., 2019;
Li et al., 2017; Li et al., 2019)). However, challenges and concerns
surrounding the adoption of mainstream technologies still exist (Carrington et
al., 2015; Kane et al., 2008, 2009; Rodrigues et al., 2015; Profita, 2016).
Kane et al. (Kane et al., 2009) conducted a qualitative two-method study with
20 participants with visual and motor impairments to examine how people
select, adapt, and use mobile devices in their daily lives. The study provided
guidelines to design more accessible mobile devices (e.g., increasing
configurability and contextual adaptation). Furthermore, the evolving needs of
users should also be considered to increase the adoption of ATs (Rodrigues et
al., 2015).
AT adoption conditions also vary in countries with different levels of income
(Eide and Øderud, 2009). People who live in low-income countries may have
limited access to ATs (Eide and Øderud, 2009) and lack sufficient knowledge
and research on ATs (May-Teerink, 1999). For example, Rodrigues et al.
(Rodrigues et al., 2015) examined smartphone adoption in Western countries and
found that some people with visual impairments continue to use their old
feature phones with the availability of other ATs. In contrast, people with
visual impairments in Bangalore started to switch to smartphones (Pal et al.,
2017) due to the lack of existing ATs or old feature phones. In our research,
we discuss the similarities and differences of ATs and mainstream technologies
adoption for people with various disabilities between China and other
countries.
### 2.3. Social Acceptability of ATs
Researchers found that a lack of consideration of users’ opinions is one key
factor in AT abandonment, in addition to the poor performance of ATs and the
change in user needs (Phillips and Zhao, 1993). Users’ opinions and
preferences of ATs might change based on social contexts (Shinohara and
Tenenberg, 2009). For example, depending on social contexts, people with
disabilities may feel either self-conscious or self-confident when using ATs
(Shinohara and Wobbrock, 2016). To understand how social contexts affect AT
use, Shinohara and Wobbrock (Shinohara and Wobbrock, 2011) conducted an
interview study and found existing misperceptions that pervaded AT use: ATs
could functionally eliminate a disability, and people with disabilities would
be helpless without their ATs. These findings inspired later research to take
social interactions into AT design consideration (Profita et al., 2016). To
reduce the misperceptions surrounding ATs, researchers proposed participatory
design (Lindsay et al., 2012), design for social accessibility (Shinohara and
Wobbrock, 2016; Shinohara et al., 2018), and collaborative accessibility
(Bennett et al., 2018; Branham and Kane, 2015; Zeng and Weber, 2015).
Moreover, to reduce unwanted attention surrounding ATs, prior research has
also advocated integrating accessibility features into mainstream technologies
(Shinohara and Wobbrock, 2011; Naftali and Findlater, 2014). In sum, social
acceptability has been demonstrated to be important for AT design. However,
prior research was mostly conducted in Western cultures, and cultural
background (Ripat and Woodgate, 2011), level of education (Kaye et al., 2008)
and access to information services (Rubin and White-Means, 2001) may affect AT
adoption. China has a different social and cultural context for people with
disabilities. However, there is a lack of exploration on how social
interactions affect the uses of ATs in China. Although some past works have
explored AT adoption in developing countries from the design and the user’s
perspectives (Pal et al., 2017), this research did little to investigate how
the social context affects AT adoption and the existing misperceptions of
using ATs in these countries. Thus, in this work, we examine whether and to
what extent prior reported causes and solutions of social acceptability of ATs
apply to the practices and challenges with AT use in China.
Table 1. Participants’ demographic information.

Participant | Disability | Age | Gender | Occupation | Assistive Technology
---|---|---|---|---|---
1 | Born with low vision, lost sight 10 years ago | 34 | M | Massager | white cane, PC screen reader, smartphone screen reader, magnifier, Huawei smartphone, iPhone, slate and stylus, radio, blind poker cards
2 | Cerebral palsy | 19 | M | Student | standing bed, wheelchair, walking frame, iPhone
3 | Spina bifida | 36 | F | Self-employed | car, iPhone, electric tricycle, sporting wheelchair, crutches, wheelchair trailer
4 | Deaf | 36 | F | Community centre staff | iPhone, Xiaomi smartphone, voice-to-text software, lighting doorbell, artificial cochlea, SIEMENS hearing aid
5 | Congenitally blind | 44 | M | Massager | e-reader, smartphone screen reader, slate and stylus, white cane, iPhone
6 | Upper-extremity amputations, no forearm | 61 | M | Retired | mechanical artificial arm, electric artificial arm, motorcycle, PC, smartphone
7 | Lost sight due to medical accident 15 years ago, totally blind | 35 | F | Software dealer | iPhone, Android phones, Nokia phones, PC screen reader, radio, smart home appliances, smartphone screen reader
8 | Deaf | 30 | M | Unemployed | hearing aid, artificial cochlea, vibration band, OPPO smartphone
9 | Motor impairment due to spinal cord injury 13 years ago | 40 | M | Information technology | wheelchair, crutches, iPhone, computer, wheelchair trailer, extended clamp
10 | Spinal cord injury due to medical accident 7 years ago | 25 | F | Call center telephone operator | wheelchair, wheelchair trailer, Huawei smartphone, extended clamp
11 | Motor impairment caused by polio | 60 | M | Lottery service | crutches, motorcycle, electric tricycle, sporting wheelchair, iPhone
12 | Congenitally blind | 42 | M | Massager | white cane, book reader, PC screen reader, smartphone screen reader, slate and stylus, Xiaomi smartphone
13 | Motor impairment caused by polio | 29 | M | Call center telephone operator | crutches, wheelchair, hand-propelled tricycle, electric tricycle, smartphone
14 | Deaf | 44 | M | Website operator | voice-to-text software (Shenghuo, Xinsheng, Luyinbao), smart watch, iPhone, hearing aid, iPad, PC, lighting doorbell
15 | Cerebral palsy | 50 | F | Vegetables sales at grocery store | standing bed, wheelchair, crutches, walking frame, iPad
16 | Deaf | 44 | F | Unemployed | hearing aid, lighting doorbell, voice-to-text software, Android smartphone
17 | Congenitally blind | 42 | F | Massage instructor | slate and stylus, talking watch, white cane, book reader (Dushulang), smartphone screen reader, Huawei smartphone, PC screen reader, APP (Didi)
18 | Congenitally blind | 41 | M | Massager | white cane, smartphone screen reader, navigation APP (Baidu Map), slate and stylus, Braille board, Braille book, e-reader, Xiaomi smartphone
19 | Low vision | 45 | M | Massager | monocular, magnifier, radio, PC screen reader, white cane, e-reader, iPhone, smartphone screen reader
20 | Motor impairment caused by polio | 43 | M | Information technology | hand-propelled tricycle, extended clamp, motorcycle, car, crutches, smartphone
21 | Spina bifida | 31 | F | Video editor | wheelchair, electric tricycle, smartphone, APP
22 | Deaf | 28 | F | Unemployed | hearing aid, iPhone, iPad, voice-to-text software
23 | Congenitally blind | 44 | M | Massager | Sunshine screen reader, iPhone, white cane
24 | Deaf after 2 years old | 30 | M | Teacher | hearing aid, Huawei smartphone, voice-to-text software, APP
25 | Deaf | 44 | M | Community centre staff | hearing aid, lighting doorbell, vibration alarm clock, smartphone, voice-to-text software
26 | Hearing loss | 40 | F | Teacher | hearing aid, iPhone, voice-to-text software
## 3\. METHOD
We recruited 26 people with various disabilities through local disability
communities and conducted semi-structured interviews with them to understand
their practices and challenges with using ATs. Interview sessions took place
at various local disability communities and lasted approximately 60 - 90
minutes. Interviews were audio-recorded, transcribed, and translated for
further analysis. The whole recruitment and study procedure was approved by
the institutional review board (IRB).
### 3.1. Participants
Previous research has provided an understanding of the perceptions and
misperceptions associated with ATs. These findings were uncovered through
studies focused mostly on people with visual impairments. However, Phillips
and Zhao (Phillips and Zhao, 1993) found that mobility-related ATs had the
highest rate of abandonment. Thus, to better understand the practices and
challenges of AT use for people with various disabilities, we interviewed
eight participants with visual impairments, eight participants with hearing
impairments, eight participants with motor impairments, and two participants
with cerebral palsy. To recruit participants, the researchers reached out to
the China Disabled Persons’ Federation (CDPF), the largest government
authorized organization for Chinese with disabilities, to distribute the study
advertisement to people with disabilities. All participants were registered
population with disabilities in the CDPF. To understand the practices and
existing challenges of using ATs, we recruited participants who had
experiences with ATs. Participants were between 19 and 61 years old (mean =
38.7, SD = 9.7). Table 1 shows detailed information regarding their age,
gender, disability, occupation, and the list of ATs that they have used.
Participants were compensated 100 CNY after completing the interview.
### 3.2. Procedure
To understand the perceptions of ATs, we adapted the questionnaire from
Shinohara and Wobbrock (Shinohara and Wobbrock, 2011) and extended it to learn
about the participants’ perceptions of mainstream technology (e.g., what types
of mainstream technologies do they use? What do they like about them? What
mainstream technologies do they want to try the most, but are not currently
accessible?), and the differences between mainstream technology and AT (e.g.,
how does mainstream technology help you compared to AT under different
circumstances?).
We first asked participants about their demographic information, the condition
of their disabilities, and their experiences with ATs. We then asked them to
compare their previous ATs with the current ones that they are using, if they
have used multiple versions of functionally similar ATs. We then asked them to
share their experiences and feelings when they use their ATs in social and
work contexts. Furthermore, we asked participants to talk about any
misperceptions about ATs that they had encountered from the general public,
any previous misperceptions that they had themselves, and if they feel self-
empowered or self-conscious while using ATs. Additionally, we asked
participants what ATs they would like to have in the future, what they think
are the most important factors of successful ATs, and their perspectives on
how ATs compare with mainstream technologies.
### 3.3. Analysis Method
All interviews were conducted in Mandarin by the first author, who is a native
Chinese speaker. We audio-recorded all interviews with participants and
transcribed the recordings verbatim. We then translated the transcripts into
English for analysis. Two coders independently performed open-coding (Corbin
and Strauss, 1990) on the transcripts. In the open-coding process, the initial
inter-rater reliability between the two independent coders was 0.85 (Cohen’s
Kappa). Then, the coders met and discussed their codes. When there
was a conflict, they explained their rationale for their code to each other
and discussed to resolve the conflict. Eventually, they reached a consensus
and consolidated the list of codes. Afterward, they performed affinity
diagramming (Hartson and Pyla, 2012) to group the codes and identify the
themes emerging from the groups of codes. Overall, we established 10 themes
and 21 codes. The results introduced in the next section are organized based
on these themes, and we clustered the themes that overlap.
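Cohen’s Kappa corrects raw coder agreement for agreement expected by chance. A minimal sketch of the computation behind the reliability figure above (the function and the toy labels are illustrative; they are not the study’s actual coding data):

```python
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Cohen's Kappa for two coders' categorical labels on the same items."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed agreement: fraction of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example with two hypothetical codes; not the study's data.
a = ["stigma", "stigma", "cost", "cost"]
b = ["stigma", "cost", "cost", "cost"]
print(round(cohen_kappa(a, b), 2))  # 0.5
```

A Kappa of 0.85, as reported here, is conventionally read as strong agreement, which supports resolving the remaining conflicts by discussion.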
## 4\. RESULTS
In this section, we present the AT adoption practices reported by our
participants, the challenges they encountered with using ATs in social
interactions, and common misperceptions they faced about disabilities and ATs.
### 4.1. AT Adoption
#### 4.1.1. Choice of ATs
In the interview, we found that participants’ choices of AT are affected by
many factors, including self-esteem, limited resources for consultation,
advice from people with a similar disability, limited space, social
infrastructure, and subsidy coverage. First, participants felt that ATs
signaled the severity of disabilities and thus tended to choose to use
particular ATs that could signal to others their self-care abilities with
minimal assistance from ATs. For example, participants preferred crutches over
wheelchairs because the former indicates that they could still walk to some
extent. This behavior relates to the concept of “face” in Chinese culture,
which is an individual’s contingent self-esteem (Hwang, 2006), and affects
their decisions on whether to adopt certain ATs. We found that some
participants with motor impairments used their crutches until they were warned
by their doctors to stop using them due to the risk of spinal scoliosis caused
by their body weight. P20 commented on the reasons for continuing to use
crutches until 35 years old before switching to the wheelchair:
> “…You may ask me why I do not just use an electric wheelchair instead of
> crutches every day; the key reason is that I want to show other people the
> ability of my abled body parts—especially the upper body. I do not want
> other people to think that I am a useless person, I want other people to
> still think that although I lost the control of my legs, I still have the
> ability to move with my arms and shoulders. Using my crutches helps me to
> save face when I walk on the street. This is the reason why I continued
> using crutches until 35 years of age when my doctor told me that I was too
> heavy to use the crutches…”
In China, maintaining “face” means that “shameful” family affairs cannot be
disclosed to outsiders. Due to some negative perceptions on disabilities, the
family of a child with disabilities may be reluctant to seek supportive
services (Ravindran and Myers, 2012). From our study, we found that most
participants sought information about what skills they could learn and what
ATs they could use by primarily consulting family members and friends around
them. However, their family and friends often do not have the same
disabilities and cannot offer accurate information about what people with
certain disabilities could learn or what ATs would be the best fit for them,
which limits their understandings of what ATs they could or should use.
Consequently, 88% of participants reported that they did not know what they
could learn or do when they were young, and their primary information source
was limited to their family members and friends. In our study, we found that
this circumstance in China delayed the process of learning what people who
acquired a certain disability later in their life could use to become more
independent. We also discovered that some people would start with a very
pessimistic view on how to live independently once they acquired a certain
disability. “It took me years to know that I could still work in IT and live
independently. There lack professional consultants that I could ask when I
just had the accident, and it really made me feel desperate initially”, said
P9. Therefore, it would be beneficial to have consultants who could provide
precise and customized advice to people who just acquired a disability
regarding what ATs they could use and career planning.
“I did not know that I could use smartphones until I met a person with a
similar impairment who used the smartphone every day”, continued P9. When
people saw that others with the same disability could do certain things, they
started to realize that they can use certain ATs too. Participants mentioned
that seeing and knowing what people with a similar disability can do gives
them the confidence to also learn to use particular ATs. P20 commented that
knowing someone with an even more severe motor impairment who can drive a car
inspired him to learn how to drive:
> “…When I did not know what I can do, everything was hard for me. However, I
> realized later that there are so many things I could do. This made lots of
> changes; I started to become more optimistic about everything. I wish I knew
> what I could do at the beginning. After I found that other people with even
> more severe motor impairments could drive cars, I bought mine, and I own two
> cars now…”
The surrounding space that participants interacted with also impacted their
decisions on what ATs to use. We found that most of the home environments our
participants lived in required special accessible modifications to be fully
accessible. One potential reason is that many Chinese people live in
apartments or condos due to the high density of the population, which is
different from people who live in many areas in North America (Grumbine,
2018). Seven participants also mentioned that they have to live in older
apartments for the lower rent, but these apartments are not accessible or
required further modifications. For some participants, their bathroom size was
too small to roll a wheelchair in. P11 mentioned the problem of his bathroom
and kitchen:
> “…Most of the apartments were not initially aimed to be designed for people
> with disabilities. I found that nearly all of them need some modification,
> such as increasing the door size or reducing the height of the kitchen
> stove. Even so, I still cannot move around with my wheelchair in the
> bathroom. This forced me to use the crutches while I was using the bathroom.
> But it is more dangerous…”
In addition to the home environment, participants also revealed that
inaccessible public places usually forced people with disabilities to use
specific ATs. Ten participants mentioned that they are used to visiting
specific accessible public places in the city, even though it may require a
long commute. However, they often have to visit unfamiliar places for various
reasons, where the accessibility of the new environment is unknown to them
beforehand—such as traveling in a different city, hanging out with friends, or
visiting a customer. “I always bring my crutches with me when I visit
unfamiliar places because they either do not have an elevator or the washroom
is not accessible”, said P11. Even if participants tried to always visit
accessible places, there are still constraints which forced them to stay at
inaccessible places. For example, P9 complained about the washroom he
experienced at work:
> “…The washroom in my office building is not accessible to me. The door is
> not even as wide as my wheelchair. More importantly, there is a small step
> in each unit of the washroom. It forced me to use crutches instead of my
> wheelchair. Using the washroom at work is the hardest task for me every
> day…”
Figure 1. Misuse of accessibility infrastructures by members of the general
public. Left: the tactile paving was blocked by the car. Right: the ramp was
blocked by locked electrical bikes.
There are two pictures. The left picture shows a white car that parked on the
tactile paving. The right picture shows a ramp and the lower side of the ramp
is blocked by several blue electrical bikes.
In our study, we also found that social infrastructures that are supposed to
help people with disabilities engage in social interactions, such as the
tactile paving and the wheelchair ramp, are widely constructed in public.
However, participants reported that the misuse of accessibility
infrastructures by members of the general public posed even more safety
concerns than without any infrastructures, which forced people to use certain
ATs. For example, people locked their bikes on the ramp or placed random
objects on the tactile paving (Fig. 1).
Finally, financial conditions varied among people with disabilities and
affected their choice of ATs. Most participants mentioned that they could not
get their desired ATs because of financial concerns. Other than general
financial conditions, some participants who have insurance or subsidy to buy
ATs explicitly mentioned that the subsidy coverage is often restricted to
certain brands or models of ATs, which may not necessarily match their needs.
P8 commented on this:
> “…My current hearing aid is really ugly and always have a loud noise that
> annoys me all the time, I know there is a [brand of] hearing aid that is
> much lighter and better designed than my current one, but it is too
> expensive, and my insurance does not cover the cost for that brand…”
Beyond the subsidy to buy ATs, we found that the high cost of the maintenance
fee is not covered by the subsidy, which also affects the choice of ATs.
Similar to what Armstrong et al. (Armstrong et al., 2007) found in
Afghanistan, even if people obtained certain ATs through subsidy or donation,
the AT maintenance problem and the lack of replacement parts still affect the
adoption of ATs in developing countries. “I can use my subsidy to buy my
electronic artificial arms, but it is too expensive to repair it, and my
subsidy does not cover the maintenance cost. That is one of the reasons for
using my mechanical artificial arms now”, said P6.
#### 4.1.2. “DIY” and Customized ATs
We found that most participants used ATs that were constructed by themselves,
family, or friends (Fig. 2). From the interview, we observed two main
practices of “do-it-yourself” (DIY) ATs: replicating existing commercial ATs
and modifying inaccessible mainstream technologies. Unlike the purpose of
“DIY” ATs in North America (Buehler et al., 2015; Hurst and Tobias, 2011),
which leveraged fabrication tools to create new customized ATs or to make AT
functional attachments, many of the participants’ customized ATs aimed to
replicate existing commercial ATs (Fig. 2(a,b)). Different from using 3D
printers or other fabrication tools in North America, most of the customized
ATs that we learned about were created through handcrafted or traditional
ways. For example, crutches or canes were crafted from wooden sticks (Fig.
2(a)). The key reasons behind it are financial concerns, lack of AT designers,
and the knowledge gap about the available ATs and how to use them. P13
commented on a hand-propelled tricycle made by his father:
Figure 2. (a,b) Self-made crutches which lack careful design and engineering
considerations. (c) Accessibility modifications made to a car by a third-party
company for a participant with motor impairment. (d) Self-modified brake and
accelerator of the motorcycle of P6 (no forearm).
> “…When I was young, I used to craft my ‘crutches’ from the wood found in the
> forest in our rural village. One of the key reasons was the high cost of a
> wheelchair or crutches. Later, my dad modified and made a hand-propelled
> tricycle for me due to the slow movement of my ‘crutches.’ Other than
> financial considerations, none of my friends or family have the same
> impairment as me, which made it hard for me to know what assistive devices
> suited me better…”
Although “DIY” ATs were functional to some extent and were often economical as
reported by participants, these “DIY” ATs typically lacked careful design and
engineering considerations, and therefore often posed health risks. For
example, using crutches with an inappropriate length over time can cause
lumbar spine distortion or periarthritis. P11 commented on his “DIY” crutches:
> “…When I was young, there did not exist anything called assistive
> technologies; all I used was just wooden crutches crafted by my parents. The
> left crutch was slightly shorter than the right one. It made my left
> shoulder feel really painful when I used it for a long time. Later, my
> doctor told me that I had periarthritis. Even now, it has never recovered…”
Beyond constructing ATs from scratch, people also modified some inaccessible
mainstream technologies (Fig. 2(c,d))—such as cars and motorcycles—to make
them accessible. Participants reported that most automobile companies in China
do not support any accessibility modifications. As a result, they
had to ask their friends or other third-party companies to modify their
vehicles. Such modifications, however, are often done by non-professionals and
pose safety concerns for both users and the general public. Fig. 2(c) shows
the modification of a participant’s car by adding hand controllers for the
brake and the accelerator. Similarly, P6 asked one of his friends to add a
foot-controlled brake and accelerator onto his motorcycle to overcome the loss
of his forearms. Although these modifications functionally allowed people to
use these devices, the poor quality of the modifications and the lack of
engineering considerations could potentially be dangerous for people who use
them. P6 commented on his experiences and safety concerns of his motorcycle:
> “…I like my motorcycle; it allowed me to visit different places. However, it
> took me lots of effort to get it modified. Initially, I visited the original
> motorcycle company, and they told me that they did not offer any type of
> accessible modifications to their products. Then, I talked to one of my
> friends, who was a mechanical technician. He then modified my motorcycle and
> added the red button to allow me to accelerate with my foot. I think the
> design and the placement of the brake and accelerator need to be improved. I
> got injured in the past when I tried to reach the foot accelerator, which
> might be too high for me, and it caused the whole motorcycle to become
> unbalanced. I am glad the speed was not too fast, but it still made me fall
> off the motorcycle and bruise my leg…”
### 4.2. Challenges with Using ATs in Social Interactions
#### 4.2.1. Stigmatization
Participants discussed the negative impact of stigmatization of using ATs,
which could have been caused by Chinese traditions, infrastructures, and the
knowledge gap from the general public. In traditional Chinese culture, the
Buddhist belief in karma fostered negative perspectives on disability: it is
regarded as punishment for the past-life sins of people with disabilities or
their parents (Du, 2017). These negative perspectives produce social stigma
toward people with disabilities in social interactions. Currently, traditional Chinese
stigmatizing terms are still being used to refer to people with disabilities.
For example, “long zi” refers to people who are deaf or hard of hearing, “xia
zi” refers to people with visual impairments, and “que zi” refers to people
with motor impairments. These traditional terms were used throughout Chinese
history and have derogatory connotations and sound humiliating to people with
disabilities. 96% of participants recalled past experiences of being referred
to by these terms. P7 described her feelings when she heard a
conversation between a mom and a son beside her about her blindness:
> “…I was walking on the street, and I did not say anything or ask anyone for
> help. Maybe I was walking a little bit slow. There was a mom and a son
> beside me; the son asked his mom about why I walked slow. I heard the
> whisper from the mom: ‘she is a ‘xia zi,’ she cannot see us.’ I was really
> depressed because ‘xia zi’ sounds like I did something wrong and I am a
> useless person…”
Participants mentioned the frequent use of these derogatory terms in slang and
public shows, which makes it difficult to eliminate them from general-public
discourse. Beyond being called derogatory terms, participants
recalled being stigmatized by others who call attention to and limit their AT
usages. P13 described his embarrassment of being called out by the subway
station general announcement to turn off some functions of his wheelchair
trailers throughout the whole subway station:
> “…I remember that I got called out in the subway, and it made me feel really
> embarrassed. A staff announced on the public speaker: ‘the person with the
> electric wheelchair, please turn off your automatic functionalities when you
> are on the train.’ This made all other people stare at me, and I felt really
> self-conscious…”
Allocating designated areas for people with disabilities may be well-intentioned
on the part of the general public. However, such settings may make
people with disabilities more self-conscious. For example, in movie theatres,
people with motor impairments are limited to sitting or parking their wheelchairs in
a front area where everyone else can see them. Participants typically wanted
to blend in when in public settings and did not
want to be called out or draw attention. P20 commented on this:
> “…I like that some places now have accessible areas for people with
> disabilities. However, it is still restricted to a certain area. These areas
> are either at the front or at the entrance. I found that it caused a lot of
> attention…”
The existing stigmatization from the general public affected people with
disabilities in social interactions and also influenced their choices of using
their AT devices.
#### 4.2.2. Employment challenges with using ATs
In general, we found that our participants encountered various challenges from
employment, such as transportation to work, the inaccessible working
environment, and unwanted attention. In our study, 65% of our participants
chose to work where the majority of employees have similar disabilities. For
example, P5 and P18 worked at a massage clinic called “mang ren an mo” in
Chinese, which means “blind massage.” All the employees in that clinic are
people with visual impairments. Another example is a cafe where P16 used to
work, where all employees are people with hearing loss. This clustering
effect reduced some of the concerns of unwanted attention or the inaccessible
working environment for people with disabilities. However, it may also reduce
the interactions between people with disabilities and the general public,
which may further cause misperceptions.
In China, the State Taxation Administration (Administration, 2007) has the
policy of tax reduction if a company employs over a certain number of people
with disabilities. However, our participants mentioned that they still have a
hard time finding a job where the majority of employees do not have the same
disability. We found that participants with various disabilities had some
difficulties using their ATs while working due to two potential problems:
feeling self-conscious due to their unique office workstation setup, and
incompatibility of accessibility features on work devices. P9 talked about his
concerns about the incompatibility between his wheelchair and the office desk:
> “…My ATs do not affect me much during work. As far as I know, for most
> people with motor impairments, like me, our work mostly relies on our upper
> body. The only part that made me uncomfortable is the height of my table at
> work. As you know, most of the wheelchairs are not capable of adjusting the
> height. And we are shorter than other people who sit on a normal chair, that
> made me really uncomfortable, and my manager bought me a new desk with a
> lower height. However, it made me look different in the company; I felt
> self-conscious when other people walked by my desk…”
P7 reported the problems of using different screen readers that made her
customers lose patience:
> “…My work required me to use my computer to record customer and product
> information. At the first time, I was trying to use my screen reader on the
> company’s computer. Since my screen reader only supported an old Windows
> system and was not compatible with the system that my company used, I ended
> up using another screen reader from another software company. Different
> computer systems and screen readers really delayed my work, and my customer
> complained to my manager about that…”
#### 4.2.3. Knowledge Gap on ATs
In this section, we further elaborate on the existing knowledge gap between
people with disabilities and the general public, and the associated
consequences (e.g., misperception, unwanted help). In China, around 48% of
people with disabilities went to specialized schools instead of regular
schools; however, this number is less than 3% in the US (Kim et al., 2019;
Ministry of Education of the People’s Republic of China, 2018; Department of
Education, 2018). Most
of our participants with congenital disabilities went to specialized schools
for professional skills training, such as massage. This separation of
schooling caused mutual misperceptions between people with disabilities and
the general public. Participants mentioned that the general public’s lack of
knowledge on accessibility sometimes posed threats and dangers to people with
disabilities. For example, the misuse of tactile paving is dangerous to people
with visual impairments and prevents them from walking safely on the street.
Due to the separation of education between people with disabilities and the
general public, the general public lacks understanding of how to offer help
appropriately. Therefore, some people offered unwanted help, which may
lead to safety concerns. For example, some people pushed the wheelchairs from
behind without being asked to do so, which can be very dangerous. P20
mentioned his experiences when he was on the street with his wheelchair:
> “…first, I want to say that I am delighted that other people offered me
> help. However, they lacked some basic knowledge. For example, many
> wheelchairs do not require someone to push from the back. If you push
> someone with a wheelchair, they may feel uncomfortable, and it might be
> dangerous when there is a bump…”
In addition, some people may choose to lift a wheelchair onto a bus without
asking for the user’s permission. This can negatively impact the wheelchair
user’s dignity. “I know other people wish to help me,
but being lifted in public made me feel so bad and embarrassed,” said P20. It
suggests that although many ATs are viewed primarily as personal devices, some
ATs also have a “social” aspect that invites people to offer help. For
example, wheelchairs are often used by people with disabilities and their
caregivers in many social settings, such as hospitals and airport terminals.
Thus, when designing ATs, the social aspect of ATs should be considered so
that their designs offer clear affordance and signals to invite proper use.
### 4.3. Misperceptions about Disability and ATs
Previous research has identified several misperceptions under a North American
context: ATs could functionally eliminate disabilities, and people with
disabilities would be helpless without their ATs (Shinohara and Wobbrock,
2011). In this section, we present the similarity of the misperceptions and
new findings from this study (e.g., “ATs should overcome inaccessible social
infrastructures” and “ATs are symbols of permanent disabilities”).
#### 4.3.1. “ATs could functionally eliminate the disability” and “people
with disabilities would be helpless without their ATs”
Similar to Shinohara and Wobbrock’s findings (Shinohara and Wobbrock, 2011),
the misperception “ATs could functionally eliminate the disability” also
exists in China for ATs used by people with motor impairments, hearing
impairments, visual impairments, and cerebral palsy. For example, participants
pointed out that other people thought wheelchairs could allow people with
motor impairments to move freely. However, even with the wheelchair, there are
still many obstacles, such as having a hard time entering narrow spaces and
climbing stairs. Notably, many participants with hearing impairment mentioned
misperceptions related to the hearing aid—that it allows them to hear the
sound clearly. However, they claimed that most of the hearing aids could only
allow them to recognize whether there is a sound or not. P14 felt annoyed when
other people tried to speak close to his ear very loudly and repeat multiple
times:
> “…Having a hearing aid does not mean that I can hear the conversations of
> other people. I can use the hearing aid to locate the sound source, but not
> understand what a person is talking about. I felt extremely annoyed when
> some people tried to repeat something to me multiple times and with
> increasing volume! All I can hear is the noise! I just could not understand,
> and it annoyed me…”
Furthermore, participants mentioned that some people have the misperception
that cutting-edge healthcare systems should completely eliminate disabilities.
According to P2, “When I was having my lunch at the school cafe, I heard some
students whispering: ‘our healthcare system is way more advanced than 30 years
ago, why are there still people with disabilities?’”. P2 continued: “because
people with disabilities are largely not visible in public due to inaccessible
infrastructures, the general public have fewer opportunities to learn about
and understand our lives.” This situation made the misperception hard to
resolve.
Beyond the misperception that “ATs could functionally eliminate the
disability,” our findings agree with another misperception that “people with
disabilities would be helpless without their ATs” (Shinohara and Wobbrock,
2011), which was found in the Western context. Our participants with visual
impairments mentioned their experiences of attracting unwanted attention when
using their mainstream devices. “They were surprised that I could use iPhone X
to call an Uber! Some people said I could not do anything without my cane.
However, I can use my iPhone to do lots of things without me physically being
there”, said P7. Moreover, we found that people with disabilities tend to rely
on specialized ATs less often than before due to the introduction of
mainstream technologies with accessibility features. Our participants are
already using their smart devices to order food delivery, shop online, contact
friends and enjoy entertainment without the need for ATs, such as canes or
wheelchairs.
#### 4.3.2. ATs should overcome inaccessible social infrastructures
Figure 3. (Left) Local subway system has a gap of over 100 mm between the
train and the platform which can decrease the accessibility of the subway
system to wheelchair users; (Right) the entrance gate of a park blocks
wheelchair users.
Making infrastructures universally designed or modified (Iwarsson and Ståhl,
2003) for accessibility purposes would help eliminate the need for designing
ATs, which are often secondary solutions to inaccessible infrastructure. For
example, public transit systems in North America, such as the Toronto subway,
were built with accessibility in mind: the horizontal gap between the subway
train and the platform is required to be less than 89 mm (Ross, 2017). In
contrast, some subways in China
were not built with accessibility in mind, having gaps of over 100 mm between
the train and the platform (Fig. 3 Left). As P11 expressed, “I hate the local
subway; my wheelchair just cannot move through the gap. When I talk to people,
most of them focused on how I should modify my wheelchair rather than how to
make the subway system fully accessible.”
Furthermore, some participants commented that the general public in China
thinks modifying social infrastructures will take more effort than modifying
individual ATs to adapt to social infrastructures. In particular, they
mentioned that many old buildings were not designed to be accessible when they
were built, and the building structure may need to be changed to make them
accessible (Section 4.1.1). “Some people think that individuals with
disabilities should compromise or even sacrifice for the society”, said P9.
Overcoming different social structure barriers by modifying ATs might
eventually make the ATs more complicated.
The design of certain park entrances is another example of the challenge
involved in overcoming inaccessible social infrastructures. For instance, the
S-shaped metal gates of the park entrance (Fig. 3 Right) were intended to block
vehicles or bicycles from entering, but also accidentally blocked out people
with disabilities who use wheelchairs.
#### 4.3.3. ATs are symbols of permanent disabilities
We found that the general public tends to assume that people who use ATs have
permanent disabilities. This is a misperception because many AT users have
temporary disabilities: people with broken legs or other lower-body injuries
may require a wheelchair only until they recover.
Furthermore, participants reported that people thought that if they use
certain ATs, they would need it for the rest of their lives. P15 commented on
this:
> “…I use my standing bed and walking frame every day to recover from my
> disability. Some people thought I might need them for the rest of my life.
> However, once my condition gets better from rehabilitation, I may switch to
> new ATs that give less assistance…”
This misperception points to the importance of designing for people’s
practical needs. “People with temporary impairments do not need to think about
independence, but we do”, commented P20.
#### 4.3.4. ATs with more functionalities are better
Participants reported that the general public often thinks that ATs with more
functionalities are better for people with disabilities. For example,
participants with motor impairments mentioned that they were asked why they
did not just use a multi-functional electric wheelchair. As we introduced
before, people might prefer using certain ATs over others because of the
following factors: signaling to others their remaining abilities and financial
considerations (Section 4.1.1). P20 mentioned the importance of considering
practical needs in the ATs design:
> “…I found that so many existing ATs try to add as many fancy functions as
> possible. Do we really need them? These manufacturers really need to think
> from the user’s perspective…”
## 5. Discussion
In the results section, we described 1) the unique findings of AT adoption
practices employed by people with various disabilities in China (e.g., factors
that affect the choice of ATs, “DIY” ATs), 2) challenges with using ATs in
social interactions (e.g., stigmatization and employment challenges), and 3)
existing misperceptions about disabilities and ATs in China (e.g., ATs should
overcome inaccessible social infrastructures). Based on our findings, we
present the following questions for researchers and designers to consider
pertaining to challenges surrounding AT adoption and design.
### 5.1. How Can Manual and Electronic ATs Be Designed Differently to Help
People with Disabilities?
From the interview, we found that there are roles for both manual and
electronic ATs, and people with disabilities need both of them in their lives
(Section 4.1.1). Some participants mentioned the existing misperception among
the general public in China that electronic ATs are always preferable to
manual ATs. We found that both manual and electronic ATs have
their own situational uses. Electronic ATs may have the benefits of faster
movement speed and improved safety. However, people with various disabilities
chose to also use manual ATs. Potential reasons include financial concerns,
exercise, and signaling to others their remaining abilities to maintain “face”
as discussed in section 4.1.1.
When designing electronic ATs, it is essential to ensure that the appearance
of ATs can communicate to others the abilities of their users. For example,
a wheelchair could be designed with foldable components that support users in
standing up briefly if they are able to, and that signal this affordance
clearly to others. It is also important to design electronic ATs to encourage
people with disabilities to exercise as much as possible. For example,
potential electric wheelchairs could be designed to force the users to roll
the wheelchair for more than a certain number of strokes before they can turn
on the automatic function.
In our interview, we found that participants with motor impairments preferred
using manual wheelchairs over electric wheelchairs. However, a recent article
revealed that the general public might view a manual wheelchair as a more
stigmatizing AT than an electric wheelchair in Norway (Boiani et al., 2019).
Boiani et al. (Boiani et al., 2019) found that the general public has more
negative perceptions towards manual wheelchairs in terms of comfortability,
aesthetics, and enjoyability. Therefore, designers should take feedback from
both people with motor impairments and the general public into consideration,
such as using participatory design or co-design methods.
In terms of manual ATs, we found that each device is mainly used for a single
situation or purpose. For example, the cane is required only when people with
visual impairments need to walk outside. Unlike manual ATs, electronic ATs are
more centralized: smart devices are multi-functional.
People with visual impairments used to carry a book reader and a radio to
acquire information, and a slate and stylus to share their own thoughts with
other people. As smartphones became more accessible, people with visual
impairments can simply install screen readers on their smartphones to
accomplish the tasks that they may have needed several devices to accomplish
before.
In our study, participants with motor impairments mentioned the misperception
that “more functions are always better” in ATs design. For example, P9
mentioned the experiences of unwanted pushing from the back, and he removed
the handle at the back of the wheelchair. This problem does not only exist in
China; Low (Low, 2019) reported that a wheelchair user from the UK put spikes
on the handles to prevent the unwanted pushing. Furthermore, some participants
commented on the bulky design of some multi-functional ATs. We can conclude
that multi-functional ATs are not always better than the ones with a single
function. AT designers should consider whether add-on functions will cause
unwanted interactions. In contrast, we found that most participants with
visual impairments preferred having more functions on
their canes, such as integrating the navigation function on the cane. We see
that people with different disabilities may have varying preferences on
functionalities of different ATs.
### 5.2. How Should Customization Be Integrated into AT Design under the
Chinese Context?
In China, people with disabilities generally lack support from trained
professionals (Hays, 2011); it is hard for them to find a professional AT
designer for ATs that suit their special needs. As 3D printing and quick
prototyping technologies are being increasingly used in many aspects of our
daily lives, these techniques might allow people with disabilities to design
their ATs by themselves. As the costs for personal fabrication technologies
keep decreasing, it might be possible to consider how to enable people with
disabilities to leverage technologies to customize their ATs (Reichinger et
al., 2018; Buehler et al., 2014; Kane and Bigham, 2014; Baldwin et al., 2017),
especially how to enable these fabrication technologies in rural areas where
AT users lack access to technologies. Furthermore, collaborative design of ATs
by empowering people with disabilities to design for themselves may reduce
concerns of functionalities and aesthetics (Branham and Kane, 2015).
Participants commonly mentioned that their ATs lacked design considerations of
where and when these devices are used. This includes using ATs indoors,
outdoors, at daytime, at night, and by people with various conditions of
disability. To incorporate ATs into daily routines to improve AT adoption
(Kintsch and DePaula, 2002), the AT designers should take where the user may
use the ATs into consideration. For example, users might use crutches outside
when the ground is slippery with ice. Beyond different locations, users may
also use the same AT during the daytime and at night. P18 mentioned that he
was hit by bicycles several times when he used his cane at night. Although
some of the canes have reflective strips, it may not be enough for people to
use canes in dark environments.
Therefore, it is important to take the user’s contextual information into
consideration while designing ATs. In future work, it would be interesting to
create a set of design metrics on how different contextual information may
affect design decisions. Once we collect the users’ individual AT usage
contexts, we could customize their ATs based on the usages.
### 5.3. How to Help People with Disabilities Improve Their Understandings of
Disabilities, Available ATs, and Career Opportunities?
We observed the practice of consulting family members and friends on what ATs
to use and the benefits of knowing what other people with a similar disability
are using (Section 4.1.1). There are now over 83 million Chinese people with
disabilities (Hays, 2011), but online resources with related information are
lacking. Current online information sources tend to provide general
information, such as what a person with spinal cord injuries should and could
do. However, different people may have different severities of disability. In
the interview, 19 participants mentioned that seeing what their peers use is
the most common approach to learn about what ATs are available and how to use
them. These participants all commented that it is tremendously helpful for
them to learn what they can do and how they can use certain ATs from a larger
community where people with similar disabilities are setting examples. In
China, there are very few online platforms for the population with
disabilities to acquire information related to their disabilities. More
importantly, most of these platforms (in China, [n.d.]) are operated by the
government and are often designed to increase the public’s knowledge about
disabilities rather than to support interactions among users. Interactive
platforms, such as StackOverflow and Github, allow people with common
interests but different levels of expertise to interact and share their
experiences and questions, which benefit the growth of users. On the other
hand, we also see such communities for people with visual impairments
emerging, such as on Reddit (Reddit, [n.d.]). Creating channels for people
with disabilities to know what their peers do could potentially open new
opportunities for them and make them feel self-empowered and encouraged by the
excellent performance of their peers. At the same time, due to widespread
stigma, it is also challenging to design such online communities so that
people with different levels of disabilities would feel comfortable to share
their experiences and questions without being judged.
In addition to government support, the disability movement and Disabled
Persons’ self-help Organizations (DPOs) have recently begun to emerge in China
(Zhang, 2017). In addition to supporting DPOs in promoting social and policy
changes, our findings also show that it is worth considering building online
platforms that allow people with disabilities to share their successful
stories and what they can do with their peers so that the community can
inspire each other and allow them to understand potential opportunities that
they would otherwise have no access to. More research should be conducted to
understand the features, functions, and the interaction mechanisms that such
platforms should provide.
Emerging types of social media may also provide more opportunities and
resources to share information on ATs and career development among people with
various disabilities. Recently, live-streaming platforms have been explored to
share knowledge (Lu et al., 2018) and even promote intangible cultural
heritage (Lu et al., 2019). Live-streaming has fewer physical constraints,
which may pose fewer mobility challenges for people with disabilities, and the
monetary gifts that the live-streaming audience sends may generate additional
income to cover their daily costs. In the future, it is worth exploring ways
to leverage existing live-streaming social media platforms or create new live-
streaming social media platforms for people with disabilities to share their
stories and to potentially reduce misperceptions held by people with
disabilities and the general public.
### 5.4. How Can Mainstream and Emerging Technologies Be Leveraged to Improve
AT User Experience?
As smart IoT devices play a more prominent role for people with disabilities
in China, participants have started to reduce or even replace their traditional
ATs. For example, participants with visual impairments commented that they
used the cane less often after they could order food delivery and shop online
on their smartphones. Our findings confirm Shinohara and Wobbrock’s
(Shinohara and Wobbrock, 2011) prediction from 2011 that people with
disabilities would increasingly use mainstream technologies with
accessibility features. In Shinohara and Wobbrock’s (Shinohara and Wobbrock,
2011) study, more than half of the participants did not use smartphones.
However, we found that all of our participants now use different kinds of
smartphones for daily purposes.
Participants with different disabilities mentioned the frequent uses of
various smart devices, such as smart speakers, smartphones, smart curtains,
and smart rice cookers. We found that participants mostly used smart devices
to complete Instrumental Activities of Daily Living (IADL) (Staff, 2018),
such as shopping, paying bills, and cooking, by reducing the effort of
movement, searching, and physical actions. Most participants expressed strong
interest in trying new smart IoT technologies. This generates opportunities
and concerns for designers to consider when designing accessible smart IoT
devices. We found that participants have an open mind about new
technologies and innovations. For example, participants with visual and motor
impairments showed strong interest in self-driving vehicles. The enjoyment of
smart technologies and the openness toward emerging technologies among people
with disabilities in this study differ from the findings of recent research
in Bangalore, India, which showed that people with disabilities there adopted
smartphones over feature phones not by choice but because of the growing
market share of smartphones (Pal et al., 2017). It would be interesting to
examine what causes this difference in attitude
toward smart technologies among people with disabilities in two countries in
the future.
Although smart devices helped people with disabilities throughout IADL, there
are also concerns related to these smart devices. We found that most smart
devices our participants used are commercial products designed for the general
public without any accessibility modifications. Our participants reported that
they rely heavily on these smart devices every day, and it would be better if
these products were more accessible when completing an activity that involves
multiple actions, instead of only a single action. For example, P7 complained
about the complex actions of cooking rice every day:
> “…I am happy that my rice cooker allows me to start cooking and receive
> notifications through my phone. However, the hard task for me is to find my
> uncooked rice, measure the amount, and add the appropriate amount of water
> before cooking. If all of these procedures could be automatic, it would save
> me lots of time and effort…”
In our study, we found that most participants rely on accessibility features,
such as screen readers or voice-to-text software, to use mainstream
technologies. However, most participants mentioned that slow updates to these
accessibility features hindered their use of mainstream technologies.
Specifically, participants with visual impairments complained that the third-
party applications’ fast updates are not always compatible with their screen
readers and can render their screen readers useless. This problem becomes more
critical as some people depend on these mainstream technologies for most of
their daily activities. Currently, although people with visual impairments can
use certain apps to communicate with their friends and shop online, many apps
(e.g., games, augmented reality, and virtual reality) are still not accessible
to certain groups of people. Future work should develop mechanisms to detect
the accessibility of software and libraries, especially when the software and
libraries are updated.
## 6\. LIMITATIONS
In this work, we studied practices and challenges surrounding AT use as well
as the perceptions and misperceptions of ATs from the perspectives of people
with disabilities in China. Most of our participants were from the same
province in China. Thus, our participant sample may not generalize to the 83
million people with disabilities across all of China. However, we do think
that our work provides an understanding of these matters from a specific
context in China, while existing research on these issues has been conducted
primarily in North America and Europe. An additional limitation is that we
did not study the perceptions and misperceptions of ATs held by people who
do not have disabilities. These two complementary perspectives
could provide a more holistic picture to understand the issues that prevent
ATs from being socially acceptable. Toward this end, it is worth exploring the
perspectives of people without disabilities about ATs and contrasting the
findings with those of our study and prior research to better understand
how to make ATs more socially acceptable and how to make people with
disabilities feel more included.
We intentionally interviewed people with a wide range of disabilities to cover
a broader range of practices and challenges that they have encountered when
using ATs. Despite the broader coverage of the types of disabilities, we still
have not yet covered all possible disabilities and different severity levels
of the disabilities, such as people with cognitive disabilities and the
different severity levels of physical disabilities. The particular type of
disability and its severity level could shape the way in which people with
such a disability use and perceive their ATs. As a result, future work should
extend our current study to cover a broader range of disabilities and levels
of severity to examine whether and to what extent the findings still hold or
need to be extended. Furthermore, due to recruitment constraints, we
were only able to recruit participants who had experience with ATs. However,
some people with disabilities might have never used ATs in the first place,
and it is important to explore their reasons in future research.
This study revealed the practices and challenges of AT use and perceptions
and misperceptions surrounding ATs in China to some extent. Although we have
contrasted our findings with related work that was conducted in North America
or other regions, we have not conducted a thorough comparative study to
systematically compare the similarities and differences of these issues
between China and other countries. Thus, even though the similarities and
differences found through our study shed light on this line of research, they
are by no means exhaustive. Future work should conduct comparative studies to
more systematically compare the similarities and differences in the use,
perception, and misperceptions of ATs across countries and cultures.
## 7\. CONCLUSION
We have presented a semi-structured interview study conducted in China with 26
people who have various disabilities to understand the practices and
challenges surrounding the use of ATs and their perceptions and expectations
of ATs. We found that participants with disabilities choose AT devices and
“DIY” their ATs to signal to others their remaining abilities while also
considering functional and financial constraints. Our study further identified
the challenges that participants encountered with using ATs in social
interactions. It is not uncommon for the general public to misuse
infrastructures that are designed to help people with disabilities, use
traditional stigmatizing terms, offer unwanted help, or even pose safety
threats. For the few who worked, their working environments also
unintentionally posed challenges. Lastly, we also reported the misperceptions
that people with disabilities felt others hold about their use of ATs.
Specifically, our findings confirmed that a misperception previously found in
a North American context (Shinohara and Wobbrock, 2011) also persists in China
today: that ATs can functionally eliminate a disability. Moreover, we found
additional misperceptions: ATs should overcome inaccessible social
infrastructures; ATs are symbols of permanent disabilities; and ATs with more
functionalities are better.
Based on these findings, we recommend that designers and researchers take the
following aspects into consideration when designing ATs: consider the
advantages of both manual and electronic designs; consider both multi-
functional and single-functional designs; understand users’ personalization
needs and use emerging prototyping technologies (e.g., 3D printing, emerging
fabrication) to better integrate aesthetics and customization into AT design;
consider in-situ use of ATs and make ATs (both hardware and software) easy to
update. Additionally, we found the following two issues that the general
public needs to address: avoid misuse of the infrastructures designed for
people with disabilities; and learn to offer help to people with disabilities
appropriately by considering the “social” aspects of ATs. Furthermore, we
offer the following design consideration to reduce misperceptions that people
with disabilities hold about themselves: build online and offline platforms
dedicated to people with disabilities, such as discussion forums and live-
streaming platforms, that encourage them to communicate and share knowledge
and skills, so that others with similar disabilities can be aware of and
encouraged by what they can actually learn and do.
###### Acknowledgements.
We would like to thank Hebei Disabled Persons’ Federation and Shijiazhuang Shi
Disabled Persons’ Federation, in particular the following people, for their
help in recruitment: Jian Gao, Hongtao Zhen, Yunfeng Shi, Jie Zheng, Yang Xue
and Anshi Yu. This work was supported in part by the Natural Sciences and
Engineering Research Council of Canada (RGPIN-2016-06326).
## References
* Administration (2007) State Taxation Administration. 2007\. The notice of having tax reduction by employing people with disabilities. http://www.chinatax.gov.cn/n810341/n810765/n812176/n812778/c1194284/content.html?wscckey=0c6f683a502e4ff6_1599689299. (Accessed on 09/09/2020).
* Armstrong et al. (2007) William Armstrong, Kim D Reisinger, and William K Smith. 2007\. Evaluation of CIR-whirlwind wheelchair and service provision in Afghanistan. _Disability and rehabilitation_ 29, 11-12 (2007), 935–948.
* Asghar et al. (2019a) Salman Asghar, George Edward Torrens, and Robert Harland. 2019a. Cultural influences on perception of disability and disabled people: a comparison of opinions from students in the United Kingdom (UK) Pakistan (PAK) about a generic wheelchair using a semantic differential scale. _Disability and Rehabilitation: Assistive Technology_ (2019), 1–13.
* Asghar et al. (2019b) Salman Asghar, George Edward Torrens, Hassan Iftikhar, Ruth Welsh, and Robert Harland. 2019b. The influence of social context on the perception of assistive technology: using a semantic differential scale to compare young adults’ views from the United Kingdom and Pakistan. _Disability and Rehabilitation: Assistive Technology_ (2019), 1–14.
* Baldwin et al. (2017) Mark S Baldwin, Gillian R Hayes, Oliver L Haimson, Jennifer Mankoff, and Scott E Hudson. 2017\. The tangible desktop: a multimodal approach to nonvisual computing. _ACM Transactions on Accessible Computing (TACCESS)_ 10, 3 (2017), 9\.
* Bennett et al. (2018) Cynthia L Bennett, Erin Brady, and Stacy M Branham. 2018\. Interdependence as a frame for assistive technology research and design. In _Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility_. ACM, 161–173.
* Bigham et al. (2010) Jeffrey P Bigham, Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller, Robert C Miller, Robin Miller, Aubrey Tatarowicz, Brandyn White, Samual White, et al. 2010\. VizWiz: nearly real-time answers to visual questions. In _Proceedings of the 23nd annual ACM symposium on User interface software and technology_. 333–342.
* Boiani et al. (2019) Josieli Aparecida Marques Boiani, Sara Raquel Martins Barili, Fausto Orsi Medola, and Frode Eika Sandnes. 2019\. On the non-disabled perceptions of four common mobility devices in Norway: a comparative study based on semantic differentials. _Technology and Disability_ 31, 1-2 (2019), 15–25.
* Branham and Kane (2015) Stacy M Branham and Shaun K Kane. 2015. Collaborative accessibility: How blind and sighted companions co-create accessible home spaces. In _Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems_. ACM, 2373–2382.
* Buehler et al. (2015) Erin Buehler, Stacy Branham, Abdullah Ali, Jeremy J Chang, Megan Kelly Hofmann, Amy Hurst, and Shaun K Kane. 2015. Sharing is caring: Assistive technology designs on thingiverse. In _Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems_. ACM, 525–534.
* Buehler et al. (2014) Erin Buehler, Shaun K Kane, and Amy Hurst. 2014. ABC and 3D: opportunities and obstacles to 3D printing in special education environments. In _Proceedings of the 16th international ACM SIGACCESS conference on Computers & accessibility_. ACM, 107–114.
* Campbell and Uren (2011) Anne Campbell and Marie Uren. 2011. “The Invisibles”… Disability in China in the 21st Century. _International Journal of Special Education_ 26, 1 (2011), 12–24.
* Carmien and Fischer (2008) Stefan Parry Carmien and Gerhard Fischer. 2008. Design, adoption, and assessment of a socio-technical environment supporting independence for persons with cognitive disabilities. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_. 597–606.
* Carrington et al. (2015) Patrick Carrington, Kevin Chang, Helena Mentis, and Amy Hurst. 2015\. “But, I don’t take steps”: Examining the Inaccessibility of Fitness Trackers for Wheelchair Athletes. In _Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility_. 193–201.
* Corbin and Strauss (1990) Juliet M Corbin and Anselm Strauss. 1990. Grounded theory research: Procedures, canons, and evaluative criteria. _Qualitative sociology_ 13, 1 (1990), 3–21.
* Dai (2017) Wangyun Dai. 2017\. Invisible Millions: China’s Unnoticed Disabled People. https://www.sixthtone.com/news/1001285/invisible-millions-chinas-unnoticed-disabled-people. (Accessed on 12/11/2019).
* Deibel (2013) Katherine Deibel. 2013\. A convenient heuristic model for understanding assistive technology adoption. In _Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility_. 1–2.
* Du (2017) Wenqi Du. 2017. Chinese Cultural Perspectives and Interventions on Disabilities. https://lendblog.ahslabs.uic.edu/2017/05/29/chinese-cultural/. (Accessed on 01/14/2020).
* Eide and Øderud (2009) Arne H Eide and Tone Øderud. 2009. Assistive technology in low-income countries. In _Disability & international development_. Springer, 149–160.
* Fan et al. (2020) Mingming Fan, Zhen Li, and Franklin Mingzhe Li. 2020\. Eyelid Gestures on Mobile Devices for People with Motor Impairments. In _The 22nd International ACM SIGACCESS Conference on Computers and Accessibility_ (Virtual Event, Greece) _(ASSETS ’20)_. Association for Computing Machinery, New York, NY, USA, Article 15, 8 pages. https://doi.org/10.1145/3373625.3416987
* Federation (2014) China Disabled Persons’ Federation. 2014\. _Communique On Major Statistics Of the Second China National Sample Survey on Disability_. https://archive.is/20141013223328/http://www.cdpf.org.cn/english/contactus/content/2008-04/14/content_84989.htm
* Grumbine (2018) John Grumbine. 2018\. What percentage of the American population lives apartments, and what percent in houses? https://www.quora.com/What-percentage-of-the-American-population-lives-apartments-and-what-percent-in-houses. (Accessed on 12/12/2019).
* Guo et al. (2016) Anhong Guo, Xiang’Anthony’ Chen, Haoran Qi, Samuel White, Suman Ghosh, Chieko Asakawa, and Jeffrey P Bigham. 2016. Vizlens: A robust and interactive screen reader for interfaces in the real world. In _Proceedings of the 29th Annual Symposium on User Interface Software and Technology_. 651–664.
* Guo (2014) Chun-ning Guo. 2014\. The development of the policy of persons with disabilities in China. _China Journal of Social Work_ 7, 2 (2014), 202–207.
* Hartson and Pyla (2012) Rex Hartson and Pardha S Pyla. 2012. _The UX Book: Process and guidelines for ensuring a quality user experience_. Elsevier.
* Hays (2011) Jeffrey Hays. 2011\. Handicapped people in China. http://factsanddetails.com/china/cat13/sub83/item1906.html
* Hurst and Tobias (2011) Amy Hurst and Jasmine Tobias. 2011. Empowering individuals with do-it-yourself assistive technology. In _The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility_. 11–18.
* Hwang (2006) Kwang-Kuo Hwang. 2006\. Moral face and social face: Contingent self-esteem in Confucian society. _International Journal of Psychology_ 41, 4 (2006), 276–281.
* in China ([n.d.]) Disability in China. [n.d.]. ChinaDP website. http://www.chinadp.net.cn/. (Accessed on 11/11/2019).
* Iwarsson and Ståhl (2003) Susanne Iwarsson and Agnetha Ståhl. 2003. Accessibility, usability and universal design-positioning and definition of concepts describing person-environment relationships. _Disability and rehabilitation_ 25, 2 (2003), 57–66.
* Kane and Bigham (2014) Shaun K Kane and Jeffrey P Bigham. 2014. Tracking@ stemxcomet: teaching programming to blind students via 3D printing, crisis management, and twitter. In _Proceedings of the 45th ACM technical symposium on Computer science education_. ACM, 247–252.
* Kane et al. (2008) Shaun K Kane, Jeffrey P Bigham, and Jacob O Wobbrock. 2008\. Slide rule: making mobile touch screens accessible to blind people using multi-touch interaction techniques. In _Proceedings of the 10th international ACM SIGACCESS conference on Computers and accessibility_. 73–80.
* Kane et al. (2009) Shaun K Kane, Chandrika Jayant, Jacob O Wobbrock, and Richard E Ladner. 2009. Freedom to roam: a study of mobile device adoption and accessibility for people with visual and motor disabilities. In _Proceedings of the 11th international ACM SIGACCESS conference on Computers and accessibility_. 115–122.
* Kaye et al. (2008) H Stephen Kaye, Patricia Yeager, and Myisha Reed. 2008\. Disparities in usage of assistive technology among people with disabilities. _Assistive Technology_ 20, 4 (2008), 194–203.
* Kianpisheh et al. (2019) Mohammad Kianpisheh, Franklin Mingzhe Li, and Khai N Truong. 2019. Face Recognition Assistant for People with Visual Impairments. _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies_ 3, 3 (2019), 1–24.
* Kim et al. (2019) Eunjoo Kim, Jie Zhang, and Xiaoke Sun. 2019. Comparison of Special Education in the United States, Korea, and China. _International Journal of Special Education_ 33, 4 (2019), 796–814.
* Kintsch and DePaula (2002) Anja Kintsch and Rogerio DePaula. 2002. A framework for the adoption of assistive technology. _SWAAAC 2002: Supporting learning through assistive technology_ (2002), 1–10.
* Li et al. (2019) Franklin Mingzhe Li, Di Laura Chen, Mingming Fan, and Khai N Truong. 2019. FMT: A Wearable Camera-Based Object Tracking Memory Aid for Older Adults. _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies_ 3, 3 (2019), 1–25.
* Li et al. (2017) Mingzhe Li, Mingming Fan, and Khai N Truong. 2017. BrailleSketch: A Gesture-based Text Input Method for People with Visual Impairments. In _Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility_. ACM, 12–21.
* Li et al. (2020) Zhen Li, Mingming Fan, Ying Han, and Khai N. Truong. 2020\. iWink: Exploring Eyelid Gestures on Mobile Devices. In _Proceedings of the 1st International Workshop on Human-Centric Multimedia Analysis_ (Seattle, WA, USA) _(HuMA’20)_. Association for Computing Machinery, New York, NY, USA, 83–89. https://doi.org/10.1145/3422852.3423479
* Lindsay et al. (2012) Stephen Lindsay, Katie Brittain, Daniel Jackson, Cassim Ladha, Karim Ladha, and Patrick Olivier. 2012\. Empathy, participatory design and people with dementia. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_. ACM, 521–530.
* Low (2019) Harry Low. 2019\. Spikes - and other ways disabled people combat unwanted touching. https://www.bbc.com/news/disability-49584591?fbclid=IwAR0iQluQZKWb6wwRNKPahK3KRt141FSQ1gHkaNqvaD3q0XJrUNICtpo4bM8. (Accessed on 11/07/2019).
* Lu et al. (2019) Zhicong Lu, Michelle Annett, Mingming Fan, and Daniel Wigdor. 2019. I feel it is my responsibility to stream: Streaming and Engaging with Intangible Cultural Heritage through Livestreaming. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_. ACM, 229.
* Lu et al. (2018) Zhicong Lu, Haijun Xia, Seongkook Heo, and Daniel Wigdor. 2018\. You watch, you give, and you engage: a study of live streaming practices in China. In _Proceedings of the 2018 CHI conference on human factors in computing systems_. 1–13.
* May-Teerink (1999) Teresa May-Teerink. 1999\. A survey of rehabilitative services and people coping with physical disabilities in Uganda, East Africa. _International journal of rehabilitation research. Internationale Zeitschrift fur Rehabilitationsforschung. Revue internationale de recherches de readaptation_ 22, 4 (1999), 311–316.
* McNaney et al. (2014) Roisin McNaney, John Vines, Daniel Roggen, Madeline Balaam, Pengfei Zhang, Ivan Poliakov, and Patrick Olivier. 2014. Exploring the Acceptability of Google Glass As an Everyday Assistive Device for People with Parkinson’s. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Toronto, Ontario, Canada) _(CHI ’14)_. ACM, New York, NY, USA, 2551–2554. https://doi.org/10.1145/2556288.2557092
* Naftali and Findlater (2014) Maia Naftali and Leah Findlater. 2014. Accessibility in context: understanding the truly mobile experience of smartphone users with motor impairments. In _Proceedings of the 16th international ACM SIGACCESS conference on Computers & accessibility_. ACM, 209–216.
* of Education (2018) U.S. Department of Education. 2018\. _Digest of education statistics, 2016_. https://nces.ed.gov/programs/digest/d16/ch_2.asp
* of Education of the People’s Republic of China (2018) Ministry of Education of the People’s Republic of China. 2018. National educational statistics in 2017. (2018). http://www.moe.gov.cn/jyb_sjzl/sjzl_fztjgb/201807/t20180719_343508.html
* Pal et al. (2017) Joyojeet Pal, Anandhi Viswanathan, Priyank Chandra, Anisha Nazareth, Vaishnav Kameswaran, Hariharan Subramonyam, Aditya Johri, Mark S Ackerman, and Sile O’Modhrain. 2017\. Agency in assistive technology adoption: visual impairment and smartphone use in Bangalore. In _Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems_. 5929–5940.
* Parette and Scherer (2004) Phil Parette and Marcia Scherer. 2004. Assistive technology use and stigma. _Education and Training in Developmental Disabilities_ (2004), 217–226.
* Phillips and Zhao (1993) Betsy Phillips and Hongxin Zhao. 1993. Predictors of assistive technology abandonment. _Assistive technology_ 5, 1 (1993), 36–45.
* Profita et al. (2016) Halley Profita, Reem Albaghli, Leah Findlater, Paul Jaeger, and Shaun K Kane. 2016. The AT effect: how disability affects the perceived social acceptability of head-mounted display use. In _proceedings of the 2016 CHI conference on human factors in computing systems_. ACM, 4884–4895.
* Profita (2016) Halley P Profita. 2016\. Designing wearable computing technology for acceptability and accessibility. _ACM SIGACCESS Accessibility and Computing_ 114 (2016), 44–48.
* Ravindran and Myers (2012) Neeraja Ravindran and Barbara J Myers. 2012. Cultural influences on perceptions of health, illness, and disability: A review and focus on autism. _Journal of Child and Family Studies_ 21, 2 (2012), 311–319.
* Reddit ([n.d.]) Reddit. [n.d.]. Accessibility community in reddit. https://www.reddit.com/r/accessibility/. (Accessed on 11/11/2019).
* Reichinger et al. (2018) Andreas Reichinger, Helena Garcia Carrizosa, Joanna Wood, Svenja Schröder, Christian Löw, Laura Rosalia Luidolt, Maria Schimkowitsch, Anton Fuhrmann, Stefan Maierhofer, and Werner Purgathofer. 2018\. Pictures in your mind: using interactive gesture-controlled reliefs to explore art. _ACM Transactions on Accessible Computing (TACCESS)_ 11, 1 (2018), 2\.
* Riemer-Reiss and Wacker (2000) Marti L Riemer-Reiss and Robbyn R Wacker. 2000. Factors associated with assistive technology discontinuance among individuals with disabilities. _Journal of Rehabilitation_ 66, 3 (2000).
* Ripat and Woodgate (2011) Jacquie Ripat and Roberta Woodgate. 2011. The intersection of culture, disability and assistive technology. _Disability and Rehabilitation: Assistive Technology_ 6, 2 (2011), 87–96.
* Rodrigues et al. (2015) André Rodrigues, Kyle Montague, Hugo Nicolau, and Tiago Guerreiro. 2015. Getting smartphones to talkback: Understanding the smartphone adoption process of blind users. In _Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility_. 23–32.
* Ross (2017) James Ross. 2017\. Gap Between Subway Trains and Platforms. https://www.ttc.ca/About_the_TTC/Commission_reports_and_information/Commission_meetings/2017/November_13/Reports/11_Gap_Between_Subway_Trains_and_Platforms.pdf. (Accessed on 01/21/2020).
* Rubin and White-Means (2001) Rose M Rubin and Shelley I White-Means. 2001. Race, disability and assistive devices: Sociodemographics or discrimination. _International Journal of Social Economics_ (2001).
* Scherer (1996) Marcia J Scherer. 1996\. Outcomes of assistive technology use on quality of life. _Disability and rehabilitation_ 18, 9 (1996), 439–448.
* Shinohara and Tenenberg (2009) Kristen Shinohara and Josh Tenenberg. 2009. A blind person’s interactions with technology. _Commun. ACM_ 52, 8 (2009), 58–66.
* Shinohara and Wobbrock (2011) Kristen Shinohara and Jacob O Wobbrock. 2011. In the shadow of misperception: assistive technology use and social interactions. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_. ACM, 705–714.
* Shinohara and Wobbrock (2016) Kristen Shinohara and Jacob O Wobbrock. 2016. Self-conscious or self-confident? A diary study conceptualizing the social accessibility of assistive technology. _ACM Transactions on Accessible Computing (TACCESS)_ 8, 2 (2016), 5\.
* Shinohara et al. (2018) Kristen Shinohara, Jacob O Wobbrock, and Wanda Pratt. 2018\. Incorporating social factors in accessible design. In _Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility_. ACM, 149–160.
* Staff (2018) Healthwise Staff. 2018\. Learning About Instrumental Activities of Daily Living (IADLs). https://myhealth.alberta.ca/health/AfterCareInformation/pages/conditions.aspx?HwId=abk6308. (Accessed on 01/14/2020).
* Susanne et al. (2018) M Susanne et al. 2018\. Disability in the Workplace in China: Situation Assessment. (2018).
* Taylor (2017) Danielle Taylor. 2017\. Disability Statistics from the U.S. Census Bureau in 2017/2018. (2017).
* the office of the second national sample survey of people with disability (2007) the office of the second national sample survey of people with disability. 2007. _the Second National Sample Survey of People with Disability_. Hua Xia Chu Ban She. https://books.google.ca/books?id=R4TsnQAACAAJ
* Wahidin et al. (2018) Herman Wahidin, Jenny Waycott, and Steven Baker. 2018\. The challenges in adopting assistive technologies in the workplace for people with visual impairments. In _Proceedings of the 30th Australian Conference on Computer-Human Interaction_. 432–442.
* Wang et al. (2009) Yunping Wang et al. 2009\. the Documentary of the Service of Assistive Technology in Midwest China. _Disability of China_ 10 (2009).
* Zeng and Weber (2015) Limin Zeng and Gerhard Weber. 2015. A pilot study of collaborative accessibility: How blind people find an entrance. In _Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services_. ACM, 347–356.
* Zhang (2017) Chao Zhang. 2017\. ‘Nothing about us without us’: the emerging disability movement and advocacy in China. _Disability & Society_ 32, 7 (2017), 1096–1101. https://doi.org/10.1080/09687599.2017.1321229 arXiv:https://doi.org/10.1080/09687599.2017.1321229
# A graph-based formalism for surface codes and twists
Rahul Sarkar1 and Theodore J. Yoder2
1Institute for Computational & Mathematical Engineering, Stanford University, Stanford, CA, USA. <EMAIL_ADDRESS>
2IBM T.J. Watson Research Center, Yorktown Heights, NY, USA. <EMAIL_ADDRESS>
###### Abstract
Twist defects in surface codes can be used to encode more logical qubits,
improve the code rate, and implement logical gates. In this work we provide a
rigorous formalism for constructing surface codes with twists generalizing the
well-defined homological formalism introduced by Kitaev for describing CSS
surface codes. In particular, we associate a surface code to _any_ graph $G$
embedded on _any_ 2D-manifold, in such a way that (1) qubits are associated to
the vertices of the graph, (2) stabilizers are associated to faces, (3) twist
defects are associated to odd-degree vertices. In this way, we are able to
reproduce the variety of surface codes, with and without twists, in the
literature and produce some new examples. We also calculate and bound various
code properties such as the rate and distance in terms of topological graph
properties such as genus, systole, and face-width.
###### Contents
1 Introduction
2 Preliminaries
  2.1 Graph embeddings
  2.2 Oriented rotation systems
  2.3 General rotation systems
  2.4 Dual graph and checkerboardability
  2.5 Homology of graph embeddings
  2.6 Covering maps and contractible loops
3 Majorana and qubit surface codes from rotation systems
  3.1 Majorana operators and codes
  3.2 Majorana surface codes defined on embedded graphs
  3.3 From Majorana surface codes to qubit surface codes
  3.4 Relation to homological surface codes
4 Locating logical operators
  4.1 Checkerboardability with defects
  4.2 Characterizing logical operators as cycles in the decoding graph
  4.3 Doubled graphs
  4.4 Face-width as a lower-bound on code distance
  4.5 Logical operators from trails in the original graph
5 Code examples
  5.1 Square lattice toric codes
  5.2 Rotated toric codes
  5.3 New hyperbolic codes
  5.4 Two more ways to generalize the triangle code
6 Open problems
A Graph embeddings from oriented rotation systems
B Paulis with arbitrary commutation patterns and CALs
  B.1 Basic definitions
  B.2 Constructing a list of Paulis given desired commutation relations
  B.3 Properties of cyclically anticommuting lists of Paulis
C Equivalence of Majorana and qubit surface codes
D A more general family of cyclic qubit stabilizer codes
  D.1 A four parameter cyclic code family
  D.2 A two parameter cyclic code family
E Embedding of the medial graph
F Using GAP to find normal subgroups
## 1 Introduction
Since their introduction [1], surface codes have been an essential tool in the
quantum engineer’s fight against noise, combining a high threshold with
relatively simple 2-dimensional qubit connectivity. These codes also exemplify
a surprising connection between quantum error-correction and topology. In
effect, one associates a CSS quantum code to a graph embedded on a
2-dimensional manifold by assigning qubits to edges and stabilizers to
vertices and faces of the graph. The homology of the surface guarantees that
the stabilizers commute and that non-trivial logical operators correspond to
homologically non-trivial cycles in the graph or its dual.
However, the simple 2-dimensional connectivity comes with a drawback. Like all
2-dimensional stabilizer codes, surface codes are limited by the Bravyi-
Poulin-Terhal bound [2], which says that a code family that uses $N$ qubits to
encode $K$ qubits with distance $D$ must satisfy $KD^{2}\leq cN$ for some
constant $c$. For instance, the rotated surface code [3] achieves $c=1$ in the
plane, while the square-lattice toric code [1] achieves $c=1$ on the torus.
Motivated in part by improving the constant $c$, twist defects have been
introduced to surface codes [4, 5]. Fundamentally, twist defects are local
regions where Pauli errors can create (or destroy) unpaired excitations of
both types, $X$ and $Z$, say, whereas elsewhere in the code the parities of
these excitations are conserved. However, twist defects appear in several
quantitatively different forms. For example, sometimes weight-five stabilizers
are involved [4, 5], sometimes only weight-four as in the usual surface code
[6], and sometimes twists are embodied by a combination of weight-two and
weight-six stabilizers [7]. Qualitatively, these defects behave the same, with
$M$ defects adding $(M-2)/2$ logical qubits to the code and with non-trivial
logical operators forming paths between pairs of defects.
Clever placement and implementation of twist defects can lead to savings in
qubit count by producing codes with $N=cKD^{2}$ for relatively small constant
$c$. For instance, in the plane, $c=3/4$ [6] and $c=1/2$ [8] have been
achieved for surface codes, and $c=3/8$ for color codes [8]. However, there is
not a rigorous formalism for constructing surface codes with twists comparable
to the well-defined homological formalism introduced by Kitaev for describing
CSS surface codes [1]. Such a formalism would be helpful for looking for
better surface codes with perhaps even smaller constants $c$.
Our main goal in this work is to introduce such a formalism, capable of
describing surface codes with and without twist defects in a unified manner.
Again we associate a code to an embedded graph, but find it most natural to do
this in a different way than the homological formalism described above.
Instead of placing qubits on edges, we place them on vertices. Only faces of
the graph support stabilizers in our description, and odd-degree vertices act
as the twist defects. Placing qubits on vertices is not a substantially new
idea (for instance, appearing for the degree-4 case in [3]), but we go beyond
prior works in applying the construction to arbitrary embedded graphs,
yielding new codes with improved parameters.
Let us briefly summarize the main results of each section. Section 2
rigorously defines embedded graphs and related notation used throughout the
paper. Our description of choice for an embedded graph is via a rotation
system. Effectively, this is a combinatoric description of the embedded graph,
which abstracts away unnecessary details about how exactly the graph is drawn
on the surface. What remains is just the adjacency of the graph components
(e.g. vertices, edges, and faces). The rotation system description is very
similar to the formalisms used for hyperbolic codes in, for example, [9, 10]
but slightly more general as it applies to non-regular graphs. Although the
rotation system description may be overkill for individual codes, for which a
picture may suffice, we believe that creating a more comprehensive theory of
surface codes deserves such a precise framework. Having the rotation system
description is also convenient for computer-assisted searches for codes, as we
do for hyperbolic manifolds in Section 5.3. We identify a key property of
embedded graphs called checkerboardability, which means one can two-color
faces of the graph such that adjacent faces are differently colored.
In Section 3, we give our surface code construction. We actually provide two
equivalent descriptions, one using Majoranas to encode qubits and one using
qubits to encode qubits. The Majorana description is similar to that used to
describe surface codes in [11, 12, 13]. We place two Majoranas on each edge
and a single Majorana at each odd-degree vertex. Stabilizers, products of
Majoranas, are associated to faces and vertices of the graph. Identifying
qubit subspaces at the vertices turns the Majorana code into a qubit surface
code with qubits on the vertices, as we described above. A significant result
in this section is the computation of the number of encoded qubits in Theorem
3.3. This depends on the genus of the surface, its orientability, the number
of odd-degree vertices, and the checkerboardability of the surface. Finally,
we note that, when the graph is checkerboardable, Kitaev’s homological
construction, applied to a different but related graph, can be used to
describe the same code.
Our focus in Section 4 is on finding logical operators and bounding the code
distance. A key idea here is to derive a “decoding” graph from the original
embedded graph. Vertices in the decoding graph represent stabilizers, edges
represent Pauli errors, and cycles represent logical operators. In principle,
one can perform perfect matching in the decoding graph to recover from errors,
though for non-checkerboardable codes this may not succeed in achieving the
code distance. Like Kitaev’s homological construction, we wish to associate
homologically non-trivial cycles to non-trivial logical operators. This is
easily done in the checkerboardable case of our construction. However, in the
non-checkerboardable case, the decoding graph might not embed in the same
manifold as the original graph. In this case, we instead find a way to embed
the decoding graph in a manifold of potentially higher genus in such a way
that this correspondence of homological and logical non-triviality is
maintained. Bounds on the code distance follow in terms of lengths of cycles
in the decoding graph.
Finally, in Section 5, we provide several code families that serve as examples of
our construction. The first family, square-lattice codes on the torus [11],
serves as a familiar introduction. The second family rotates the square-
lattice on the torus to get codes like those in [14]. Included in this case is
a family of cyclic codes that contains the famous 5-qubit quantum code as a
member. The third family consists of hyperbolic codes defined using regular
tilings of high genus surfaces. Because we are using our qubit-on-vertices
definition of surface codes, this leads to new codes, different from those in,
for instance [15, 16]. In these first examples, we take pains to calculate the
code distances formally. A final section of examples consists of a planar code
family generalizing the triangle code [6] and an embedding of stellated codes
[8] on higher genus surfaces. In these cases, we do not formally prove the
code distances but provide conjectures. We believe that stellated codes on
higher genus surfaces provide the smallest known constant $c$, approaching
$1/4$, for codes with stabilizers that are mostly weight four and vanishingly
many stabilizers of weight five.
## 2 Preliminaries
In this section, we first provide an introduction to the combinatorial
description of graph embeddings (also called maps) on closed surfaces. We
refer to this description as a rotation system, following terminology in [17,
18, 19]. We recall that a closed surface is a 2-dimensional, compact, and
connected topological manifold without boundary (see [20, 21] for an
introduction to topological manifolds), and henceforth we simply say a
manifold to mean a closed surface (unless specified otherwise). All manifolds
admit a unique smooth structure up to diffeomorphisms [22], and by the closed
surface classification theorem [23, 20, 24], each manifold is homeomorphic to
a space of one of three types — (a) Type I: $\mathbb{S}^{2}$, (b) Type II: a
connected sum of one or more copies of $\mathbb{T}^{2}$, (c) Type III: a
connected sum of one or more copies of $\mathbb{RP}^{2}$. This greatly
simplifies the picture when we physically think of “drawing” a graph on a
manifold, more formally captured using the concept of graph embeddings. A rotation system is simply a combinatorial description of any such graph embedding. To garner the most intuition, we believe that rotation systems are
most clearly presented in two parts, by first defining them for the case of
embeddings in orientable manifolds (Type I and II), and then generalizing them
to include embeddings in non-orientable manifolds (Type III). To a large
extent, the content of this section follows [17, 25, 26]. Graph theoretic
terminology appearing in this section is borrowed from [27, 28]. We also
introduce the concept of graph checkerboardability in this section, and
briefly discuss the notion of $\mathbb{F}_{2}$-homology of a graph embedding.
### 2.1 Graph embeddings
Let us first cover graphs and their embeddings. A graph $G(V,E)$ is a
collection of vertices $V$ and edges $E$. We will mostly deal with finite
graphs, i.e. $1\leq|V|<\infty$, and $1\leq|E|<\infty$, and explicitly state
when this assumption is violated. The entry $a_{ve}$ of the vertex-edge adjacency matrix $a\in\mathbb{F}_{2}^{|V|\times|E|}$ indicates whether a vertex $v$ and an edge $e$ are adjacent ($a_{ve}=1$) or not ($a_{ve}=0$). An
edge adjacent to a single vertex is a loop. If an edge is not a loop, then it
is adjacent to exactly two vertices. The degree of a vertex $v$ is denoted
$\text{deg}(v)$, and it is the number of edges adjacent to it, with loops
counted with multiplicity two. We assume in this section that $G$ is a
connected graph.
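These conventions can be illustrated with a toy sketch (the two-vertex graph and all names below are our own illustration, not from the paper): degrees count loops with multiplicity two, and the vertex-edge adjacency matrix is read off directly from an edge list.

```python
# Illustrative example (our own): a graph with one ordinary edge and one loop.
V = ["u", "v"]
E = [("u", "v"), ("v", "v")]  # the second edge is a loop at v

def degree(vertex, edges):
    """Number of edge endpoints at `vertex`; a loop contributes two."""
    return sum((a == vertex) + (b == vertex) for a, b in edges)

# Vertex-edge adjacency matrix over F_2: a[v][e] = 1 iff v and e are adjacent.
a = [[1 if vtx in edge else 0 for edge in E] for vtx in V]

assert degree("u", E) == 1
assert degree("v", E) == 3   # one ordinary endpoint plus a loop counted twice
assert a == [[1, 0], [1, 1]]
```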
A graph embedding requires that we actually draw the graph on a manifold
$\mathcal{M}$. There may be multiple ways to do this even for a single graph
and a fixed manifold. Generally, this drawing procedure is described by a
graph embedding map $\Gamma:V\cup E\rightarrow\mathcal{M}$ that assigns unique
points in $\mathcal{M}$ to each $v\in V$, and arcs in $\mathcal{M}$ to each
$e\in E$. If $e\in E$ is not a loop, and is adjacent to distinct vertices $v_{1}$ and $v_{2}$, then the arc assigned to it must be the image of a homeomorphism
$\gamma:[0,1]\rightarrow\mathcal{M}$, with endpoints $\gamma(0)=\Gamma(v_{1})$
and $\gamma(1)=\Gamma(v_{2})$. If $e$ is a loop adjacent to a vertex $v$, then
the arc assigned to it is the image of a homeomorphism
$\gamma:\mathbb{S}^{1}\rightarrow\mathcal{M}$, with both endpoints given by
$\gamma(0)=\Gamma(v)$ (here $\mathbb{S}^{1}$ is parameterized in polar
coordinates). Moreover, two arcs may not intersect except possibly at their
endpoints.
The set $\mathcal{F}=\mathcal{M}\setminus\Gamma(E)$ is a collection of regions
called the faces of the embedding, and two points $p,q\in\mathcal{F}$ belong
to the same face if and only if there exists a continuous map
$\gamma^{\prime}:[0,1]\rightarrow\mathcal{F}$ such that
$\gamma^{\prime}(0)=p$, and $\gamma^{\prime}(1)=q$. We say that an embedding
of $G$ is a 2-cell embedding if and only if every face is homeomorphic to the
open disc $\mathcal{B}(0,1)=\\{(x,y)\in\mathbb{R}^{2}:x^{2}+y^{2}<1\\}$. If
the closure of every face is homeomorphic to the closed disc
$\overline{\mathcal{B}(0,1)}$, and the set difference of the closure of the
face and the face itself is homeomorphic to the circle $\mathbb{S}^{1}$, we
say that it is a closed 2-cell embedding or a strong embedding. Every
connected graph admits a 2-cell embedding in some orientable manifold (see
Theorem A.16. in [19]), and also in some non-orientable manifold (see Theorem
3.4.3 in [28]). One can see a graph (a square lattice) embedded in the torus
in Fig. 1(a).
### 2.2 Oriented rotation systems
The goal in this subsection is to establish an equivalence between graphs
embedded in orientable manifolds and a combinatorial object called an oriented
rotation system. It is easier to see this equivalence in one direction — start
with an embedded graph in an orientable manifold and develop the oriented
rotation system description. This direction we sketch here. A more formal
argument and the other direction are covered in Appendix A.
The description of a graph embedding in the previous section requires a
variety of homeomorphisms whose exact details, topologically speaking, do not
matter. The oriented rotation system description dispenses with those details
by breaking the embedded graph down into half-edges, two for every edge, and
describing vertices, edges, and faces of the graph embedding as sets of these
half-edges. The precise definition is as follows.
###### Definition 2.1.
An oriented rotation system is a triple $R_{O}=(H_{O},\nu,\epsilon)$, where $H_{O}$ is a finite set of half-edges (sometimes called darts), and $\nu:H_{O}\rightarrow H_{O}$ and $\epsilon:H_{O}\rightarrow H_{O}$ are two permutations satisfying the additional properties
1. (i)
$\epsilon$ is a fixed-point-free involution, meaning $\epsilon^{2}=\text{Id}$,
and $\epsilon h\neq h$ for all $h\in H_{O}$.
2. (ii)
The free group $\langle\nu,\epsilon\rangle$ generated by $\nu$ and $\epsilon$,
acts transitively on $H_{O}$. This means that for all $h,h^{\prime}\in H_{O}$,
one can find $\kappa\in\langle\nu,\epsilon\rangle$ such that $\kappa
h=h^{\prime}$.
We now define the three sets
$\begin{split}V&=\\{v\subseteq H_{O}:v\text{ is an orbit of the free group
}\langle\nu\rangle\\},\\\ E&=\\{e\subseteq H_{O}:e\text{ is an orbit of the
free group }\langle\epsilon\rangle\\},\\\ F&=\\{f\subseteq H_{O}:f\text{ is an
orbit of the free group }\langle\nu\epsilon\rangle\\}.\end{split}$ (1)
We call elements of these sets the vertices, edges, and faces of the oriented
rotation system, respectively. So, condition (i) ensures edges are sets of
exactly two half-edges, and thus $|H_{O}|$ is even, while condition (ii)
ensures the graph is connected.
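Definition 2.1 can be checked mechanically. The sketch below (our own illustration; permutations are stored as Python dicts) recovers vertices, edges, and faces as orbits for a one-vertex, two-loop graph, which we take to be the standard bouquet embedding on the torus.

```python
def orbits(elements, perms):
    """Orbits of the group generated by the permutations `perms` (dicts)."""
    seen, result = set(), []
    for x0 in elements:
        if x0 in seen:
            continue
        orbit, stack = {x0}, [x0]
        while stack:
            x = stack.pop()
            for p in perms:
                if p[x] not in orbit:
                    orbit.add(p[x]); stack.append(p[x])
        seen |= orbit
        result.append(frozenset(orbit))
    return result

# One vertex with two loops, embedded on the torus (half-edges 0..3):
H_O = [0, 1, 2, 3]
nu  = {0: 1, 1: 2, 2: 3, 3: 0}              # cyclic order around the vertex
eps = {0: 2, 2: 0, 1: 3, 3: 1}              # pairs half-edges into edges
nu_eps = {h: nu[eps[h]] for h in H_O}       # the face permutation

V = orbits(H_O, [nu])       # 1 vertex
E = orbits(H_O, [eps])      # 2 edges
F = orbits(H_O, [nu_eps])   # 1 face
assert (len(V), len(E), len(F)) == (1, 2, 1)
assert len(V) - len(E) + len(F) == 0        # Euler characteristic of the torus
```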
Thus given an embedded graph $G(V,E)$ in an oriented manifold $\mathcal{M}$,
to get the oriented rotation system $(H_{O},\nu,\epsilon)$ corresponding to
it, we first create a set of half-edges $H_{O}$ with each edge in $E$
contributing two half-edges. Let $e\in E$ with corresponding half-edges
$h_{e},h^{\prime}_{e}\in H_{O}$. The permutation $\epsilon$ is defined so that
$\epsilon h_{e}=h^{\prime}_{e}$, and $\epsilon h^{\prime}_{e}=h_{e}$ for every
$e\in E$, and note that property (i) of Definition 2.1 is satisfied. If $e$ is
adjacent to vertices $v,v^{\prime}\in V$ ($v=v^{\prime}$ if $e$ is a loop), we
assign $h_{e}$ and $h^{\prime}_{e}$ to $v$ and $v^{\prime}$ respectively. Thus
each vertex $v$ gets assigned exactly $\text{deg}(v)$ half-edges, and
conversely each half-edge in $H_{O}$ is assigned to exactly one vertex in $V$.
This partitions $H_{O}$ into disjoint subsets indexed by the vertices. Now
choose a smoothly varying outward normal on $\mathcal{M}$, which is possible
since it is orientable, fix a vertex $v\in V$, and let $(H_{O})_{v}$ be the
half-edges assigned to it. We can enumerate the edges adjacent to $v$ in
counterclockwise order with respect to the outward normal, where an edge
appears twice successively in this enumeration if it is a loop. Then replacing
each edge in this ordering by the corresponding half-edge that it contributes
to $v$, gives a counterclockwise ordering of $(H_{O})_{v}$. The permutation
$\nu$ for $(H_{O})_{v}$ is defined to be the cyclic permutation given by this
counterclockwise ordering, and repeating this at every vertex completely
specifies $\nu$. It follows easily that property (ii) of Definition 2.1 is
satisfied if $G$ is connected. This completes the construction of the oriented
rotation system $(H_{O},\nu,\epsilon)$. See Fig. 1(a) for an example. However,
the description we just provided fails to capture graph embeddings in non-
orientable manifolds, because in that case we cannot choose a smoothly varying
outward normal.
Figure 1: (a) A graph embedded onto the torus and its oriented rotation
system. An orbit of $\langle\nu\rangle$, an orbit of $\langle\epsilon\rangle$,
and two orbits of $\langle\nu\epsilon\rangle$ are shown. These correspond to
an edge, a vertex, and two faces of the graph embedding. (b) We show a graph
embedded on the projective plane. A general rotation system is defined by a
set of flags (gray triangles) and three permutations: $\lambda$ swaps flags
along the same side of an edge, $\rho$ swaps them within a face adjacent to a
vertex, and $\tau$ swaps them across an edge.
### 2.3 General rotation systems
The rotation system description can be generalized to cover non-orientable
manifolds as well. To do so, we must effectively introduce an orientation to
half-edges. Thus, instead of half-edges, we use a set of flags $H$. Fig. 1(b)
makes it clear how flags can be visualized in the embedding, with two flags
per half-edge, one on either “side”. The formal definition of a general
rotation system is as follows.
###### Definition 2.2.
A general rotation system is a quadruple $R=(H,\lambda,\rho,\tau)$, where $H$
is a finite set of flags, and $\lambda,\rho,\tau$ are permutations on $H$
satisfying the properties
1. (i)
$\lambda,\rho$, and $\tau$ are fixed-point-free involutions.
2. (ii)
$\lambda\tau=\tau\lambda$, or equivalently, $\lambda\tau$ is an involution.
3. (iii)
the monodromy group $M(R)=\langle\lambda,\rho,\tau\rangle$ acts transitively
on $H$.
The involutions $\lambda,\rho,\tau$ permute flags as shown in Fig. 1(b). We
define vertices, edges, and faces as orbits of $\langle\rho,\tau\rangle$,
$\langle\lambda,\tau\rangle$, and $\langle\rho,\lambda\rangle$ respectively.
From properties (i) and (ii) of Definition 2.2, orbits of
$\langle\lambda,\tau\rangle$ have the form $\\{h,\lambda h,\tau h,\lambda\tau
h:h\in H\\}$, i.e. are exactly of size four, and so $|H|\equiv 0\pmod{4}$.
Moreover, one deduces from property (i) that the orbits of
$\langle\rho,\lambda\rangle$ and $\langle\rho,\tau\rangle$ have even sizes,
with the canonical form of an orbit of size $2n+2$ given by $\\{h,\lambda
h,\rho\lambda h,\lambda\rho\lambda h,\dots,(\lambda\rho)^{n}\lambda h:h\in
H\\}$, and $\\{h,\tau h,\rho\tau h,\tau\rho\tau h,\dots(\tau\rho)^{n}\tau
h:h\in H\\}$ respectively. The sets of vertices, edges, and faces, which we
denote $V$, $E$, and $F$ respectively, define a graph embedding and satisfy
Euler’s formula
$\chi=|V|-|E|+|F|,$ (2)
where $\chi$ is the Euler characteristic of the manifold, i.e. if $g$ is the
manifold’s genus (orientable or non-orientable), then $\chi=2-2g$ if the
manifold is orientable, and $\chi=2-g$ if it is non-orientable. Note that
property (iii) ensures that $G$ is a connected graph. We will write $G(V,E,F)$
to denote such an embedded graph, as compared to $G(V,E)$ which only denotes
the graph without the embedding.
The orbits of the free group $\langle\tau\rangle$ are exactly the half-edges,
and additionally we define _sectors_ as the orbits of the free group
$\langle\rho\rangle$. As $\tau$ and $\rho$ are fixed-point-free involutions,
these orbits are just two flags each. We denote a half-edge as
$[h]_{\tau}=\\{h,\tau h\\}$, and a sector as $[h]_{\rho}=\\{h,\rho h\\}$.
Definition 2.2 generalizes Definition 2.1. Suppose we have an oriented
rotation system $R_{O}=(H_{O},\nu,\epsilon)$. There is a corresponding general
rotation system $R=(H,\lambda,\rho,\tau)$ where $H=H_{O}\times\\{1,-1\\}$ and
for $(h,i)\in H$,
$\lambda(h,i)=(\epsilon h,-i),\quad\rho(h,i)=(\nu^{i}h,-i),\quad\tau(h,i)=(h,-i).$ (3)
This definition ensures that the general rotation system represents the same
embedded graph as the given oriented rotation system.
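The doubling of Eq. (3) is straightforward to implement. In this sketch (our own; the function name `to_general` and the dict encoding are illustrative), we double the torus bouquet from before and check the properties of Definition 2.2 together with the orbit counts.

```python
def orbits(el, perms):
    """Orbits of the group generated by the permutations in `perms`."""
    seen, res = set(), []
    for x0 in el:
        if x0 in seen:
            continue
        orb, st = {x0}, [x0]
        while st:
            x = st.pop()
            for p in perms:
                if p[x] not in orb:
                    orb.add(p[x]); st.append(p[x])
        seen |= orb
        res.append(frozenset(orb))
    return res

def to_general(H_O, nu, eps):
    """Double an oriented rotation system into flags H_O x {+1,-1}, Eq. (3)."""
    nu_inv = {v: k for k, v in nu.items()}
    H = [(h, i) for h in H_O for i in (1, -1)]
    lam = {(h, i): (eps[h], -i) for (h, i) in H}
    rho = {(h, i): ((nu if i == 1 else nu_inv)[h], -i) for (h, i) in H}
    tau = {(h, i): (h, -i) for (h, i) in H}
    return H, lam, rho, tau

# One-vertex, two-loop torus embedding from before:
H, lam, rho, tau = to_general([0, 1, 2, 3],
                              {0: 1, 1: 2, 2: 3, 3: 0},
                              {0: 2, 2: 0, 1: 3, 3: 1})

# Properties of Definition 2.2:
assert all(lam[lam[h]] == h and lam[h] != h for h in H)   # (i) for lambda
assert all(rho[rho[h]] == h and tau[tau[h]] == h for h in H)
assert all(lam[tau[h]] == tau[lam[h]] for h in H)         # (ii) commute

# Same embedded graph: orbit counts agree with the oriented description.
V = orbits(H, [rho, tau]); E = orbits(H, [lam, tau]); F = orbits(H, [rho, lam])
assert (len(V), len(E), len(F)) == (1, 2, 1)              # chi = 0, the torus
```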
A general rotation system $(H,\lambda,\rho,\tau)$ describes a graph embedded
in an orientable manifold if and only if $H$ can be partitioned into two sets
$H_{\pm 1}$ such that $\lambda$, $\rho$, and $\tau$ each map elements of
either set to elements of the other set. Otherwise, the graph is embedded in a
non-orientable manifold. If the manifold is orientable, there are at least two
oriented rotation systems describing the same graph embedding, on the same set
of half-edges. Take $H_{O}=H/\tau$ and let $[h]_{\tau}=\\{h,\tau h\\}\in
H_{O}$ be a generic half-edge. Define $\nu[h]_{\tau}=[\rho\tau h]_{\tau}$,
$\nu^{\prime}[h]_{\tau}=[\tau\rho h]_{\tau}$, and
$\epsilon[h]_{\tau}=[\lambda\tau h]_{\tau}$. Then, both $(H_{O},\nu,\epsilon)$
and $(H_{O},\nu^{\prime},\epsilon)$ are oriented rotation systems for the same
embedded graph.
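This bipartition criterion reduces to a 2-coloring check. The sketch below (our own; the second example is a candidate one-vertex, one-loop rotation system that we take, as an assumption for illustration, to describe a one-sided loop on the projective plane) attempts to 2-color the flags so that all three involutions swap colors.

```python
def is_orientable(H, lam, rho, tau):
    """True iff the flags 2-color so that lam, rho, tau all swap colors."""
    color = {}
    for start in H:
        if start in color:
            continue
        color[start] = 1
        stack = [start]
        while stack:
            h = stack.pop()
            for p in (lam, rho, tau):
                if p[h] not in color:
                    color[p[h]] = -color[h]; stack.append(p[h])
                elif color[p[h]] != -color[h]:
                    return False
    return True

# Torus bouquet flags, built as in Eq. (3); orientable by construction:
nu, eps = {0: 1, 1: 2, 2: 3, 3: 0}, {0: 2, 2: 0, 1: 3, 3: 1}
nu_inv = {v: k for k, v in nu.items()}
H = [(h, i) for h in [0, 1, 2, 3] for i in (1, -1)]
lam = {(h, i): (eps[h], -i) for (h, i) in H}
rho = {(h, i): ((nu if i == 1 else nu_inv)[h], -i) for (h, i) in H}
tau = {(h, i): (h, -i) for (h, i) in H}
assert is_orientable(H, lam, rho, tau)

# Candidate one-vertex, one-loop system (assumed projective-plane example):
# |V| - |E| + |F| = 1 - 1 + 1 = 1, and no consistent 2-coloring exists.
H2 = [0, 1, 2, 3]
tau2 = {0: 1, 1: 0, 2: 3, 3: 2}
lam2 = {0: 2, 2: 0, 1: 3, 3: 1}
rho2 = {0: 3, 3: 0, 1: 2, 2: 1}
assert not is_orientable(H2, lam2, rho2, tau2)
```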
Adjacency is an important concept for graphs and their components. Using the
general rotation system description, flags are the primitives, and we say a
flag is adjacent to a vertex, edge, or face $x\in V\cup E\cup F$, if it is
contained in $x$. We define adjacency matrices in terms of flags. The vertex-
flag adjacency matrix $A\in\mathbb{F}_{2}^{|V|\times|H|}$ has $A_{vh}=1$ if
and only if $h\in v$. Similarly define an edge-flag adjacency matrix
$B\in\mathbb{F}_{2}^{|E|\times|H|}$, and a face-flag adjacency matrix
$C\in\mathbb{F}_{2}^{|F|\times|H|}$. Although flags are primitives, we can
also define adjacency of the more typical graph components. For example, we
say that a vertex $v\in V$ and an edge $e\in E$ are adjacent if $v\cap
e\neq\emptyset$, and $a_{ve}=1$ if and only if $v$ and $e$ are adjacent.
Likewise, adjacency of edges and faces, and of vertices and faces can be
defined by non-trivial intersection. Note that each edge in an embedded graph
is adjacent to at least one and at most two faces. The degree of a vertex $v$
is half the number of flags it is adjacent to, $\text{deg}(v)=|v|/2$, and this
definition coincides with the one in Subsection 2.1. We also say that two
vertices (resp. two faces) are adjacent, if there exists an edge adjacent to
both vertices (resp. both faces).
### 2.4 Dual graph and checkerboardability
If an embedded graph $G(V,E,F)$ is described by the general rotation system
$(H,\lambda,\rho,\tau)$, then the embedded dual graph
$\overline{G}(\overline{V},\overline{E},\overline{F})$ of $G$ is defined by
the general rotation system $(H,\tau,\rho,\lambda)$, i.e. by simply exchanging
the permutations $\lambda$ and $\tau$. This implies that $\overline{G}$ has a
vertex (resp. face) for every face (resp. vertex) of $G$, while
$|E|=|\overline{E}|$, and we conclude that the Euler characteristic of $G$ and
$\overline{G}$ are the same. Note also that the graph $\overline{G}$ is
connected (by properties of a general rotation system). Moreover
$\overline{G}$ defines an embedding in an orientable manifold if and only if
$G$ does too — thus the graphs $G$ and $\overline{G}$ are embedded in the same
manifold. The edges of $\overline{G}$ have a natural interpretation: for every
edge $e\in E$ adjacent to faces $f,f^{\prime}\in F$, not necessarily distinct,
there is an edge $\overline{e}\in\overline{E}$ between the vertices of
$\overline{G}$ corresponding to $f$ and $f^{\prime}$ (in particular if
$f=f^{\prime}$ then $\overline{e}$ is a loop). The dual of the graph
$\overline{G}$ is isomorphic to the graph $G$. There are several more
modifications of a graph $G$ that we use in the paper. In Table 1, we list
these and the sections where they are defined.
Name | Original | Dual | Medial | Decoding | Doubled | Face-vertex
---|---|---|---|---|---|---
Notation | $G$ | $\overline{G}$ | $\widetilde{G}$ | $G_{\text{dec}}$ | $G^{2}$ | $G_{\text{fv}}$
Section | 2.1 | 2.4 | 3.4 | 4.2 | 4.3 | 4.4
Table 1: Embedded graph $G$ and the derived graphs used in this paper.
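The dual construction is literally a swap of $\lambda$ and $\tau$ in code. A sketch (our own, reusing the flag encoding of Eq. (3)) confirming that vertices and faces exchange roles:

```python
def orbits(el, perms):
    """Orbits of the group generated by the permutations in `perms`."""
    seen, res = set(), []
    for x0 in el:
        if x0 in seen:
            continue
        orb, st = {x0}, [x0]
        while st:
            x = st.pop()
            for p in perms:
                if p[x] not in orb:
                    orb.add(p[x]); st.append(p[x])
        seen |= orb
        res.append(frozenset(orb))
    return res

def VEF(H, lam, rho, tau):
    """Vertices, edges, faces of a general rotation system."""
    return orbits(H, [rho, tau]), orbits(H, [lam, tau]), orbits(H, [rho, lam])

# Torus bouquet flags, built as in Eq. (3):
nu, eps = {0: 1, 1: 2, 2: 3, 3: 0}, {0: 2, 2: 0, 1: 3, 3: 1}
nu_inv = {v: k for k, v in nu.items()}
H = [(h, i) for h in [0, 1, 2, 3] for i in (1, -1)]
lam = {(h, i): (eps[h], -i) for (h, i) in H}
rho = {(h, i): ((nu if i == 1 else nu_inv)[h], -i) for (h, i) in H}
tau = {(h, i): (h, -i) for (h, i) in H}

V, E, F = VEF(H, lam, rho, tau)
Vd, Ed, Fd = VEF(H, tau, rho, lam)   # the dual: exchange lambda and tau
assert (len(Vd), len(Ed), len(Fd)) == (len(F), len(E), len(V))
```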
Before proceeding any further, given a graph embedding $G(V,E,F)$, we define
the face-edge adjacency matrix
$\Phi=\frac{1}{2}CB^{\top}\in\mathbb{F}_{2}^{|F|\times|E|}$ (with this matrix
multiplication performed over the integers, then reduced modulo two), which
captures the adjacency of the faces and edges of $G$. It is worth pointing out
the structure of the matrix $\Phi$. First note that each column of $B^{\top}$
contains exactly four non-zeros, as each edge $e\in E$ is a set of four flags.
Now there are two cases: (i) the two sides of $e$ belong to distinct faces
$f,f^{\prime}\in F$, and (ii) $e$ is surrounded by a single face on both
sides, i.e. $e$ is a subset of that face. In the former case,
$\Phi_{fe}=\Phi_{f^{\prime}e}=1$ are the only non-zero entries of the column
of $\Phi$ corresponding to $e$, while in the latter case $\Phi_{fe}=0$ for all
$f\in F$.
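The structure of $\Phi$ can be reproduced directly from flag orbits, since $(CB^{\top})_{fe}$ counts the flags shared by face $f$ and edge $e$. A sketch (our own; the two-vertex, two-edge sphere embedding is an illustrative choice whose two faces each border both edges):

```python
def orbits(el, perms):
    """Orbits of the group generated by the permutations in `perms`."""
    seen, res = set(), []
    for x0 in el:
        if x0 in seen:
            continue
        orb, st = {x0}, [x0]
        while st:
            x = st.pop()
            for p in perms:
                if p[x] not in orb:
                    orb.add(p[x]); st.append(p[x])
        seen |= orb
        res.append(frozenset(orb))
    return res

# Two vertices joined by two parallel edges on the sphere,
# with flags built from the oriented system as in Eq. (3):
nu, eps = {0: 1, 1: 0, 2: 3, 3: 2}, {0: 2, 2: 0, 1: 3, 3: 1}
nu_inv = {v: k for k, v in nu.items()}
H = [(h, i) for h in [0, 1, 2, 3] for i in (1, -1)]
lam = {(h, i): (eps[h], -i) for (h, i) in H}
rho = {(h, i): ((nu if i == 1 else nu_inv)[h], -i) for (h, i) in H}
tau = {(h, i): (h, -i) for (h, i) in H}

E = orbits(H, [lam, tau])                    # 2 edges (4 flags each)
F = orbits(H, [rho, lam])                    # 2 faces (4 flags each)
# Phi[f][e] = (len(f & e) // 2) mod 2, matching (1/2) C B^T over F_2:
Phi = [[(len(f & e) // 2) % 2 for e in E] for f in F]
assert Phi == [[1, 1], [1, 1]]               # each edge borders both faces
```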
We now discuss an important property of embedded graphs from the perspective
of quantum codes, called checkerboardability.
###### Definition 2.3.
A graph embedding $G$ is checkerboardable if the dual graph $\overline{G}$ is
bipartite.
An equivalent and more intuitive way of stating this definition is the following lemma.
###### Lemma 2.1.
A graph embedding $G(V,E,F)$ described by a general rotation system
$(H,\lambda,\rho,\tau)$, is checkerboardable if and only if the faces of $G$
can be two-colored, such that for every flag $h\in H$, $h$ and $\tau h$ are
differently colored.
* Proof.
One direction of this equivalence is clear. Suppose $\overline{G}$ is
bipartite, and we color the two disjoint vertex sets of $\overline{G}$,
corresponding to the bipartition differently. Using the correspondence between
the faces of $G$ and vertices of $\overline{G}$, we get a two-coloring of $F$,
and hence of the flags $H$. Now pick any flag $h\in H$, and let $h\in e\in E$.
Then $h,\tau h$ must belong to distinct faces $f,f^{\prime}\in F$
respectively, otherwise if $e$ is adjacent to a single face then
$\overline{G}$ will contain a loop (as mentioned above), and hence cannot be
bipartite. Since $f$ and $f^{\prime}$ are both adjacent to $e$, they must be
differently colored. For the converse, suppose we are given a two-coloring of
$F$ (which now induces a coloring of the vertices of $\overline{G}$) such that
$h$, and $\tau h$ are differently colored for all $h\in H$. Consider
$\overline{G}(\overline{V},\overline{E})$, and partition the vertices as
$\overline{V}=\overline{V}_{w}\sqcup\overline{V}_{b}$, where each disjoint
partition contains all vertices of $\overline{G}$ of a single color. Assume
that $\overline{G}$ is not bipartite with this bipartition. If $\overline{G}$
has a loop then there exists $e\in E$ adjacent to a single face. If
$\overline{G}$ has no loop, then either $\overline{V}_{w}$ or
$\overline{V}_{b}$ is not an independent set; so there exist distinct faces
$f,f^{\prime}\in F$ of the same color, adjacent to an edge $e\in E$. In both
cases, all four flags of $e$ have the same color which is a contradiction.
Thus $\overline{G}$ is bipartite. ∎
There are numerous other equivalent statements of checkerboardability. We give
two more below that we find useful. Note that the only permutation out of
$\lambda$, $\rho$, $\tau$ that moves flags across edges of the graph is
$\tau$. Therefore using Lemma 2.1, we have the following:
###### Lemma 2.2.
A graph embedding $G(V,E,F)$ described by the general rotation system
$R=(H,\lambda,\rho,\tau)$, is checkerboardable if and only if we can partition
$H$ into two disjoint sets $H_{w}$ and $H_{b}$ such that (i) $\lambda$ and
$\rho$ map both sets to themselves, and (ii) $\tau$ maps elements of either
set to the other set.
* Proof.
Suppose first that $G$ is checkerboardable. Then there is a two-coloring of
the faces of $F$ such that for every $h\in H$, $h$ and $\tau h$ are
differently colored. Defining $H_{w}$ and $H_{b}$ to be the sets of flags of
each color then shows that these sets have the required properties (i) and
(ii), proving the only if direction. Now assume that $H$ can be partitioned
into disjoint sets $H_{w}$ and $H_{b}$ satisfying the two properties. If $f\in
F$, by property (i) all flags of $f$ must belong to either $H_{w}$ or $H_{b}$.
Let us color all flags of $H_{w}$ and $H_{b}$ black and white respectively,
which induces a two-coloring of all the faces of $G$. Property (ii) now
implies that for every $h\in H$, $h$ and $\tau h$ are differently colored,
proving that $G$ is checkerboardable. ∎
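Lemma 2.2 doubles as an algorithm: attempt the flag 2-coloring in which $\lambda$ and $\rho$ preserve the color while $\tau$ swaps it, and report failure on any inconsistency. A sketch (our own helper names) applied to a checkerboardable and a non-checkerboardable embedding:

```python
def is_checkerboardable(H, lam, rho, tau):
    """Lemma 2.2 criterion: 2-color flags so that lam and rho preserve
    the color and tau swaps it."""
    color = {}
    for start in H:
        if start in color:
            continue
        color[start] = 1
        stack = [start]
        while stack:
            h = stack.pop()
            for p, s in ((lam, 1), (rho, 1), (tau, -1)):
                if p[h] not in color:
                    color[p[h]] = s * color[h]; stack.append(p[h])
                elif color[p[h]] != s * color[h]:
                    return False
    return True

def flags_from_oriented(H_O, nu, eps):
    """Flags of Eq. (3) from an oriented rotation system."""
    nu_inv = {v: k for k, v in nu.items()}
    H = [(h, i) for h in H_O for i in (1, -1)]
    lam = {(h, i): (eps[h], -i) for (h, i) in H}
    rho = {(h, i): ((nu if i == 1 else nu_inv)[h], -i) for (h, i) in H}
    tau = {(h, i): (h, -i) for (h, i) in H}
    return H, lam, rho, tau

# Two vertices joined by two parallel edges on the sphere: two faces,
# each edge bordering both, so the embedding is checkerboardable.
sphere = flags_from_oriented([0, 1, 2, 3],
                             {0: 1, 1: 0, 2: 3, 3: 2},
                             {0: 2, 2: 0, 1: 3, 3: 1})
# One vertex with two loops on the torus: a single face, so it is not.
torus = flags_from_oriented([0, 1, 2, 3],
                            {0: 1, 1: 2, 2: 3, 3: 0},
                            {0: 2, 2: 0, 1: 3, 3: 1})
assert is_checkerboardable(*sphere)
assert not is_checkerboardable(*torus)
```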
In this paper we adopt the notation that $\vec{1}$ and $\vec{0}$ are the row
vectors of all 1s and all 0s respectively. The arguments above show that in a
checkerboard coloring, each edge is adjacent to exactly one face of each color
(white and black, say). Thus, checkerboardability implies the existence of a
vector $x\in\mathbb{F}_{2}^{|F|}$ (e.g. $x_{f}=1$ if and only if face $f$ is
colored white) such that $x\Phi=\vec{1}$. In fact we also have that
$(\vec{1}-x)\Phi=\vec{1}$, and these are the only two vectors with this
property. This is summarized in the following lemma.
###### Lemma 2.3.
Let $G(V,E,F)$ be a graph embedding described by the general rotation system
$(H,\lambda,\rho,\tau)$, and let $\Phi$ be its face-edge adjacency matrix.
Then
1. (a)
$G$ is checkerboardable if there exists $x\in\mathbb{F}_{2}^{|F|}$ such that
$x\Phi=\vec{1}$.
2. (b)
If $G$ is checkerboardable then the set
$\\{x\in\mathbb{F}_{2}^{|F|}:x\Phi=\vec{1}\\}$ has exactly two distinct
elements $x$ and $x^{\prime}$ (different from $\vec{0}$ and $\vec{1}$), which
satisfy $x+x^{\prime}=\vec{1}$. Thus the partitioning of the flags into the
two disjoint sets as described in Lemma 2.2 is unique.
* Proof.
1. (a)
Suppose there exists $x$ such that $x\Phi=\vec{1}$, and define $F_{b}=\\{f\in
F:x_{f}=1\\}$, and $F_{w}=F\setminus F_{b}$. Color the faces in $F_{b}$ and
$F_{w}$ differently. Assume for contradiction that $h\in H$ is a flag such
that $h$ and $\tau h$ have the same color, and let $h\in e\in E$. Since $h$
and $\lambda h$ belong to the same face, this implies that all four flags of
$e$ have the same color. But this implies $\Phi_{fe}=0$ for all $f\in F$ (by
properties of $\Phi$ discussed above), and so $(x\Phi)_{e}=0$, which is
a contradiction.
2. (b)
Assume $G$ is checkerboardable, which also implies that every column of $\Phi$
has exactly two non-zeros (or equivalently each edge $e\in E$ is adjacent to
two distinct faces), and so $\vec{1}\Phi=\vec{0}$. We already know from the
discussion in the paragraph before this lemma that there exists
$y\in\mathbb{F}_{2}^{|F|}$ such that $y\Phi=\vec{1}$. Thus we also conclude
that $(\vec{1}-y)\Phi=\vec{1}$. Now suppose there is a vector
$x\in\mathbb{F}_{2}^{|F|}$ such that $x\Phi=\vec{1}$, and define a disjoint
partition $F=F_{w}\sqcup F_{b}$ of the faces of $G$, as in the proof of (a).
This induces a disjoint vertex partition
$\overline{V}=\overline{V}_{w}\sqcup\overline{V}_{b}$ of the dual graph
$\overline{G}(\overline{V},\overline{E})$. Then combining the arguments of the
proof of (a), and the proof of the converse part of Lemma 2.1 shows that
$\overline{G}$ is bipartite. Since $\overline{G}$ is connected, the
bipartition is unique, and so $x$ is equal to either $y$ or $\vec{1}-y$. This
proves the first part, and the second part follows from it easily.
∎
A consequence of Lemma 2.3(b) is that the checkerboard coloring of a
checkerboardable graph is essentially unique up to a permutation of the
colors. One easy way a graph can fail to be checkerboardable is if it contains
an odd-degree vertex $v$. Then, $(\rho\tau)^{\text{deg}(v)}$ would map $h\in
v$ to itself, but also because $\tau$ is applied an odd number of times, it
would map $h$ from one set, say $H_{w}$, to the other $H_{b}$ (where $H_{w}$
and $H_{b}$ are as in Lemma 2.2), a contradiction. If $g=0$ (the graph is
planar, i.e. embedded on the 2-sphere $\mathbb{S}^{2}$), we also have the
well known converse: a planar graph embedding is checkerboardable if all
vertices have even degree.
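The test in Lemma 2.3(a) amounts to a linear solve over $\mathbb{F}_{2}$. The following sketch (illustrative only, not code from this paper) decides checkerboardability by solving $x\Phi=\vec{1}$ with Gaussian elimination; the two small face-edge matrices are assumed toy embeddings, a 4-cycle on the sphere and a single edge on the sphere.

```python
# Test checkerboardability by solving x Φ = 1 over GF(2) (Lemma 2.3(a)).
# Rows of Φ are faces, columns are edges; Φ[f][e] = 1 iff edge e is
# adjacent to face f on exactly one side.

def solve_gf2(A, b):
    """Return one solution x of x A = b over GF(2), or None if none exists."""
    # Work with the transposed system: one augmented row per edge.
    rows = [list(col) + [bi] for col, bi in zip(zip(*A), b)]
    n = len(A)                      # number of unknowns (faces)
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [u ^ w for u, w in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    # Inconsistent if a fully eliminated row retains right-hand side 1.
    if any(row[-1] for row in rows[r:]):
        return None
    x = [0] * n
    for i, c in enumerate(pivots):
        x[c] = rows[i][-1]
    return x

# 4-cycle on the sphere: two faces, every edge borders both faces once.
phi_cycle = [[1, 1, 1, 1],
             [1, 1, 1, 1]]
print(solve_gf2(phi_cycle, [1, 1, 1, 1]))   # → [1, 0]: a 2-coloring exists

# Single edge on the sphere: the one face touches the edge on both sides,
# so Φ = [[0]] and no coloring exists.
print(solve_gf2([[0]], [1]))                # → None
```

Any returned solution $x$ directly encodes the set $F_{b}$ of black faces, as in the proof of Lemma 2.3(a).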
### 2.5 Homology of graph embeddings
One topological invariant that is important for this paper (and for other
quantum codes such as homological codes) is that of $\mathbb{F}_{2}$-homology.
We introduce this concept briefly here, and refer the reader to [29] for a
more comprehensive treatment of homology. To do this in the context of
embedded graphs, we will first introduce some graph theoretic terminology.
Recall first that a trail $t$ in a graph is a sequence of vertices and edges,
$t=(v_{0},e_{1},v_{1},e_{2},\dots,v_{\ell-1},e_{\ell},v_{\ell}),$ (4)
where edge $e_{i}$ is adjacent to vertices $v_{i-1}$ and $v_{i}$, and each
edge is distinct. Trails can be open (if $v_{0}\neq v_{\ell}$) or closed (if
$v_{0}=v_{\ell}$). Open trails have endpoints $v_{0}$ and $v_{\ell}$. Closed
trails are also called cycles. Including the vertices in the specification of
a trail is standard but superfluous — a trail $t$ can be represented by a
vector $t\in\mathbb{F}_{2}^{|E|}$ such that $t_{i}=1$ if and only if the
$i^{\text{th}}$ edge of the graph is in the trail. A _path_ is a trail without
repeated vertices, except the first and last if it is a closed trail. The set
of trails generates a group $\mathcal{T}(G)$ with the group operation the
symmetric difference of the trails or, equivalently, the addition modulo two
of the vectors $y\in\mathbb{F}_{2}^{|E|}$ representing the trails. Because
each edge is a trail, $\mathcal{T}(G)$ is isomorphic to the vector space
$\mathbb{F}_{2}^{|E|}$, and a generating set of $\mathcal{T}(G)$ (or
equivalently a basis for $\mathbb{F}_{2}^{|E|}$) can be constructed to consist
only of paths.
Now let $G(V,E,F)$ be an embedded graph on a manifold, either orientable or
non-orientable. Two subgroups of $\mathcal{T}(G)$ are particularly important
in the definition of $\mathbb{F}_{2}$-homology: the subgroup $\mathcal{Z}(G)$
of all the cycles of $G$, and the subgroup $\mathcal{B}(G)$ generated by the
cycles of $G$ that are boundaries of faces of the embedded graph. Formally
$\begin{split}\mathcal{Z}(G)&=\left\langle c:c\text{ is a cycle of
}G\right\rangle,\\\ \mathcal{B}(G)&=\left\langle c:c\text{ is a cycle of
}G,\;\exists f\in F\text{ such that }\forall\;\text{edges }e\in
c,\;\Phi_{fe}=1\right\rangle.\end{split}$ (5)
We call cycles in $\mathcal{B}(G)$ homologically trivial and cycles in
$\mathcal{Z}(G)$ but not in $\mathcal{B}(G)$ homologically non-trivial. The
homological systole of the graph, $\mathrm{hsys}\left({G}\right)$, is the
length of the shortest homologically non-trivial cycle. The first homology
group over $\mathbb{F}_{2}$ is the quotient group (or, equivalently, the
quotient vector space)
$H_{1}(G,\mathbb{F}_{2})=\frac{\mathcal{Z}(G)}{\mathcal{B}(G)}.$ (6)
It turns out that for two different graphs $G,G^{\prime}$ embedded in the same
manifold $\mathcal{M}$, $H_{1}(G,\mathbb{F}_{2})\cong
H_{1}(G^{\prime},\mathbb{F}_{2})$. Thus in the rest of the paper, we will
simply write $H_{1}(\mathcal{M})$ for any given graph embedding $G(V,E,F)$ in
$\mathcal{M}$.
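The dimension of the group in Eq. (6) can be computed from ranks over $\mathbb{F}_{2}$: $\dim\mathcal{Z}(G)=|E|-\operatorname{rank}(\partial)$ for the vertex-edge incidence matrix $\partial$, and $\dim\mathcal{B}(G)=\operatorname{rank}(\Phi)$, since the face boundaries are the rows of $\Phi$. A minimal sketch, with two assumed toy embeddings (a 4-cycle on the sphere, and the standard one-vertex, two-loop embedding of the torus):

```python
# Compute dim H1(M, F2) = dim Z(G) - dim B(G) from a graph embedding,
# using GF(2) ranks.

def gf2_rank(rows):
    rows = [list(r) for r in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for c in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def h1_dim(incidence, phi):
    n_edges = len(incidence[0])
    dim_cycles = n_edges - gf2_rank(incidence)   # dim Z(G)
    dim_boundaries = gf2_rank(phi)               # dim B(G)
    return dim_cycles - dim_boundaries

# 4-cycle embedded on the sphere: H1 is trivial.
inc_sphere = [[1, 0, 0, 1], [1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]]
phi_sphere = [[1, 1, 1, 1], [1, 1, 1, 1]]
print(h1_dim(inc_sphere, phi_sphere))   # → 0

# One vertex with two loops on the torus (the square with opposite sides
# identified): loops contribute 0 mod 2 to the incidence matrix, and the
# single face meets each edge twice, so Φ = 0.
print(h1_dim([[0, 0]], [[0, 0]]))       # → 2
```

The torus example returns $2=2g$, matching $H_{1}(\mathbb{T}^{2},\mathbb{F}_{2})\cong\mathbb{F}_{2}^{2}$.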
### 2.6 Covering maps and contractible loops
A cycle drawn on a manifold $\mathcal{M}$ is contractible if it can be
continuously deformed to a point and non-contractible otherwise. Homological
non-triviality implies non-contractibility [30]. However, the converse is not
true: consider, for instance, a cycle winding twice around a torus, which is
non-contractible but homologically trivial over $\mathbb{F}_{2}$. Similar to the homological case,
we let $\mathrm{sys}\left({G}\right)$ denote the length of the shortest non-
contractible cycle in a graph $G$ embedded on $\mathcal{M}$ and note
$\mathrm{hsys}\left({G}\right)\geq\mathrm{sys}\left({G}\right)$. A manifold is
simply-connected if all cycles are contractible.
We say a connected manifold $\mathcal{M}^{\prime}$ is a cover of $\mathcal{M}$
if there is a map $\pi:\mathcal{M}^{\prime}\rightarrow\mathcal{M}$ such that
each open disk $U\subseteq\mathcal{M}$ has a pre-image $\pi^{-1}(U)$ that is a
union of disjoint open disks, each mapped homeomorphically onto $U$ by $\pi$
[31]. We say $\pi$ is a covering map. An $l$-fold cover is one in which
$\pi^{-1}(U)$ is a union of $l$ open disks for all $U$. A 2-fold cover is
also called a double cover, a 3-fold cover a triple cover, and so on.
A universal cover $\mathcal{U}$ of a manifold $\mathcal{M}$ is a cover that
covers all other covers of $\mathcal{M}$. Equivalently, the universal cover of
$\mathcal{M}$ is the unique simply-connected cover of $\mathcal{M}$ (see for
instance Chapter III.4 of [31]). Compact manifolds are universally covered by
either the sphere (if $\mathcal{M}$ is the sphere or projective plane) or the
plane (otherwise).
A curve drawn in $\mathcal{M}^{\prime}$ can be projected down to $\mathcal{M}$
by applying $\pi$. Conversely, a curve $\gamma:[0,1]\rightarrow\mathcal{M}$
can be _lifted_ to $\mathcal{M}^{\prime}$. Roughly, one constructs a lift by
applying $\pi^{-1}$ to $\gamma$. However, since $\pi^{-1}$ is one-to-many, a
lift of $\gamma$ is defined more precisely as any compact, continuous curve
in $\mathcal{M}^{\prime}$ that maps to $\gamma$ under $\pi$.
An interesting connection presents itself between contractibility and the
universal cover. A cycle in $\mathcal{M}$ is contractible if and only if its
lift into the universal cover $\mathcal{U}$ is also a cycle. Non-contractible
curves, on the other hand, lift to curves with two distinct endpoints [30].
## 3 Majorana and qubit surface codes from rotation systems
In this section, we define our framework for constructing qubit surface codes
from rotation systems. Majorana surface codes appear as a useful intermediary
in the construction. So, building on the last section’s introduction to
rotation systems, the logical progression of this section is roughly
$\begin{array}[]{ccccc}\text{Rotation system \textbackslash graph
embedding}&\longrightarrow&\text{Majorana surface
code}&\longrightarrow&\text{Qubit surface code}\\\
\text{(Section\leavevmode\nobreak\ \ref{sec:rotation-
systems})}&&\text{(Section\leavevmode\nobreak\
\ref{subsec:maj_codes_on_graphs})}&&\text{(Section\leavevmode\nobreak\
\ref{subsec:qub_surface_codes})}\end{array}.$ (7)
Before diving into surface codes, Section 3.1 gives a quick introduction to
Majorana fermions and Majorana fermion codes in general. Afterward, in Section
3.4, we show how our framework for qubit surface codes generalizes the
standard homological definition [1], in which qubits are placed on edges,
while vertices and faces support stabilizers. We assume the reader has some
familiarity with the Pauli group; Appendix B contains a brief recap.
### 3.1 Majorana operators and codes
The Majorana operators $\\{\gamma_{0},\gamma_{1},\dots,\gamma_{m-1}\\}$, for
$m$ even, are linear Hermitian operators acting on the fermionic Fock space
$\mathcal{H}_{m/2}=\\{|{\vec{b}}\rangle:\vec{b}\in\mathbb{F}_{2}^{m/2}\\}$, or
equivalently the $m/2$-qubit complex Hilbert space, satisfying
$\displaystyle\gamma_{i}^{2}=I,\quad\gamma_{i}\gamma_{j}=-\gamma_{j}\gamma_{i},\quad\forall\;0\leq
i<j\leq m-1.$ (8)
Eq. (8) ensures that each $\gamma_{i}$ is distinct and different from $I$. The
total number of Majorana operators is even because two Majoranas correspond to
each fermion in the system. We define a group $\mathcal{J}_{m}$ consisting of
finite products of the Majorana operators $\gamma_{i}$, and phase factor
$i=\sqrt{-1}$. By Eq. (8), this group is finite with size
$|\mathcal{J}_{m}|=2^{m+2}$. Elements of $\mathcal{J}_{m}$ either commute or
anticommute. We indicate an element of $\mathcal{J}_{m}$ uniquely by
$\eta\gamma_{a}$ where $\eta\in\\{\pm 1,\pm i\\}$, $a\in\mathbb{F}_{2}^{m}$,
and $\gamma_{a}=\prod_{i:a_{i}=1}\gamma_{i}$ (this product is ordered so that
the $\gamma_{i}$ with smaller indices are on the left, and the empty product
is defined to be equal to $I$), and the uniqueness of this representation
follows from Eq. (8). We define the support of $\eta\gamma_{a}$ to be the set
$\mathrm{supp}\left({\eta\gamma_{a}}\right):=\\{i:a_{i}=1\\}$, and its weight
$|\eta\gamma_{a}|:=|\mathrm{supp}\left({\eta\gamma_{a}}\right)|=|a|$, where
$|a|$ is the Hamming weight of $a$. The commutation of the elements of
$\mathcal{J}_{m}$ is now easily expressed as
$\gamma_{a}\gamma_{b}=(-1)^{|a||b|+a\cdot
b}\gamma_{b}\gamma_{a}=(-1)^{\xi(a,b)}\gamma_{a+b},\;\;\xi(a,b)=\sum_{i:b_{i}=1}|\\{j>i:a_{j}=1\\}|,$
(9)
where it is understood that the operations “$\cdot$” and “$+$” are performed over
$\mathbb{F}_{2}$ (the quantity $|a||b|$ is computed over integers and reduced
modulo 2). These vectors are assumed to have context-appropriate length.
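Equation (9) can be checked mechanically. The sketch below (an illustration of ours, not code from the paper) represents a monomial $\gamma_{a}$ by the bit tuple $a$, implements the normal-ordered product via $\xi(a,b)$, and verifies the commutation sign $(-1)^{|a||b|+a\cdot b}$ exhaustively on four modes:

```python
# Multiply normal-ordered Majorana monomials (Eq. (9)):
# γ_a γ_b = (-1)^{ξ(a,b)} γ_{a+b}, with commutation sign (-1)^{|a||b| + a·b}.
from itertools import product

def xi(a, b):
    # ξ(a, b) = Σ_{i: b_i = 1} #{j > i : a_j = 1}
    return sum(sum(a[j] for j in range(i + 1, len(a)))
               for i in range(len(b)) if b[i])

def multiply(a, b):
    """Return (sign, c) with γ_a γ_b = sign * γ_c, sign in {+1, -1}."""
    sign = (-1) ** xi(a, b)
    c = tuple(x ^ y for x, y in zip(a, b))
    return sign, c

# Check γ_a γ_b = (-1)^{|a||b| + a·b} γ_b γ_a for all a, b on 4 modes.
for a, b in product(product((0, 1), repeat=4), repeat=2):
    s_ab, c_ab = multiply(a, b)
    s_ba, c_ba = multiply(b, a)
    dot = sum(x & y for x, y in zip(a, b)) % 2
    assert c_ab == c_ba
    assert s_ab == (-1) ** (sum(a) * sum(b) + dot) * s_ba
print("commutation relation of Eq. (9) verified on 4 modes")
```

For example, $\gamma_{1}\gamma_{0}$ gives $\xi=1$ and hence $-\gamma_{0}\gamma_{1}$, as expected from Eq. (8).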
Particular choices of Majorana operators can be made by associating them with
appropriately chosen Pauli operators on $m/2$ qubits. This can be done in a
variety of ways. A particularly famous example is the Jordan-Wigner
transformation, which makes the association
$\displaystyle\text{JW}(\gamma_{2k})=X_{k}\prod_{i=0}^{k-1}Z_{i},\quad\text{JW}(\gamma_{2k+1})=Y_{k}\prod_{i=0}^{k-1}Z_{i},$
(10)
for all $0\leq k\leq m/2-1$, where $X_{i},Y_{i},Z_{i}$ are the Paulis acting
on qubit $i$. One can check that the Paulis associated with the Majoranas obey
Eq. (8).
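The check mentioned above can be carried out numerically on a small instance. The following dense-matrix sketch (our own, and impractical beyond a handful of qubits) builds the four Jordan-Wigner Majoranas on two qubits and verifies Eq. (8):

```python
# Verify that the Jordan-Wigner Majoranas of Eq. (10) satisfy Eq. (8)
# for m = 4 Majoranas (2 qubits), using dense matrices.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_majorana(i, n_qubits):
    """γ_{2k} = X_k Π_{j<k} Z_j, γ_{2k+1} = Y_k Π_{j<k} Z_j."""
    k, parity = divmod(i, 2)
    ops = [Z] * k + [X if parity == 0 else Y] + [I2] * (n_qubits - k - 1)
    return kron_all(ops)

n_qubits, m = 2, 4
gammas = [jw_majorana(i, n_qubits) for i in range(m)]
eye = np.eye(2 ** n_qubits)
for i in range(m):
    assert np.allclose(gammas[i] @ gammas[i], eye)        # γ_i² = I
    for j in range(i + 1, m):
        anti = gammas[i] @ gammas[j] + gammas[j] @ gammas[i]
        assert np.allclose(anti, 0)                        # anticommutation
print("Jordan-Wigner Majoranas satisfy Eq. (8)")
```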
Majorana fermion codes (see e.g. [32, 33]) are created by specifying a
subgroup $\mathcal{S}\leq\mathcal{J}_{m}$ to be a stabilizer. The
corresponding codespace is defined as usual to be the $+1$-eigenspace of all
stabilizers, a subspace of $\mathcal{H}_{m/2}$. We require the stabilizer be
chosen so that
1. (i)
$\mathcal{S}$ does not contain $-I$.
2. (ii)
Each $\gamma\in\mathcal{S}$ has even weight.
The first condition arises because we wish the codespace to be non-empty, and
the second condition is imposed because we want the stabilizer operators to be
physical, preserving fermion parity in the system. In fact, the first
condition also implies that $\mathcal{S}$ is Abelian and that all elements of
$\mathcal{S}$ are Hermitian. We note this fact and a few others in the
following lemma.
###### Lemma 3.1.
Let $\mathcal{S}$ be a subgroup of $\mathcal{J}_{m}$, and let
$\mathcal{I}\subseteq\mathcal{J}_{m}$ be non-empty, such that elements of
$\mathcal{I}$ commute and are Hermitian. Then $\langle\mathcal{I}\rangle$ is
Abelian and Hermitian, and moreover the following holds:
1. (a)
There exists a set $\mathcal{I}^{\prime}$ formed by multiplying each element
of $\mathcal{I}$ by either $1$ or $-1$, such that
$-I\not\in\langle\mathcal{I}^{\prime}\rangle$.
2. (b)
If $-I\not\in\mathcal{S}$, then $\mathcal{S}$ is Abelian and Hermitian.
Conversely, if $\mathcal{S}$ is Abelian and Hermitian, then either
$-I\not\in\mathcal{S}$, or $\mathcal{S}=\langle\mathcal{S}^{\prime},-I\rangle$
for some subgroup $\mathcal{S}^{\prime}$ of $\mathcal{S}$ with
$-I\not\in\mathcal{S}^{\prime}$.
* Proof.
In this proof $\eta,\eta^{\prime},\eta^{\prime\prime}\in\\{\pm 1,\pm i\\}$,
and $a,a^{\prime},b\in\mathbb{F}_{2}^{m}$. We also note some useful facts for
this proof which follow easily from Eqs. (8,9): if $x\in\mathcal{J}_{m}$ then
(i) $x^{-1}=x^{{\dagger}}$, (ii) $x$ is either Hermitian or skew-Hermitian,
and (iii) $x^{2}\in\\{I,-I\\}$, with $x^{2}=I$ if and only if $x=x^{\dagger}$.
Also note that for any non-empty $\mathcal{J}\subseteq\mathcal{J}_{m}$ whose
elements commute and are Hermitian, if $x\in\mathcal{J}$, then by (i) and
(iii) $x=x^{\dagger}=x^{-1}$, so all elements in $\langle\mathcal{J}\rangle$
are products of elements of $\mathcal{J}$. That $\langle\mathcal{I}\rangle$ is
Abelian is clear since any element of $\langle\mathcal{I}\rangle$ is a product
of the elements in $\mathcal{I}$. To see that $\langle\mathcal{I}\rangle$ is
Hermitian, note that if $\langle\mathcal{I}\rangle\ni
x=\prod_{y\in\mathcal{I}^{\prime}}y$ for some
$\mathcal{I}^{\prime}\subseteq\mathcal{I}$, then
$x^{2}=\prod_{y\in\mathcal{I}^{\prime}}y^{2}=I$, and we conclude using (iii).
1. (a)
Let $|\mathcal{I}|=k\geq 1$. We prove the result by induction on
$|\mathcal{I}|$. If $\mathcal{I}=\\{x\\}$ we choose
$\mathcal{I}^{\prime}=\\{x\\}$ if $x\neq-I$, else
$\mathcal{I}^{\prime}=\\{I\\}$, proving the base case. Now suppose the result
is true for all $\mathcal{I}$ such that $1\leq|\mathcal{I}|\leq r<k$. If
$|\mathcal{I}|=r+1$, pick any $x\in\mathcal{I}$ and let
$\mathcal{I}_{1}=\mathcal{I}\setminus\\{x\\}$. By the inductive hypothesis,
there exists $\mathcal{I}^{\prime}_{1}$ formed by multiplying every element of
$\mathcal{I}_{1}$ by either $1$ or $-1$, such that
$-I\not\in\langle\mathcal{I}^{\prime}_{1}\rangle$. If
$x\in\langle\mathcal{I}^{\prime}_{1}\rangle$ (resp.
$-x\in\langle\mathcal{I}^{\prime}_{1}\rangle$), choose
$\mathcal{I}^{\prime}=\mathcal{I}^{\prime}_{1}\cup\\{x\\}$ (resp.
$\mathcal{I}^{\prime}=\mathcal{I}^{\prime}_{1}\cup\\{-x\\}$) and we are done.
If $x,-x\not\in\langle\mathcal{I}^{\prime}_{1}\rangle$, choose
$\mathcal{I}^{\prime}=\mathcal{I}^{\prime}_{1}\cup\\{x\\}$. Now for
contradiction, assume $-I\in\langle\mathcal{I}^{\prime}\rangle$, so we have
$-I=zx$ for some $z\in\langle\mathcal{I}^{\prime}_{1}\rangle$, using
properties of $\mathcal{I}$ (commuting and Hermitian) and (iii). Letting
$z=\eta\gamma_{a}$, and $x=\eta^{\prime}\gamma_{a^{\prime}}$ then gives
$-I=\eta\eta^{\prime}(-1)^{\xi(a,a^{\prime})}\gamma_{a+a^{\prime}}$ using Eq.
(9), and so $a=a^{\prime}$. Combining with $z^{2}=x^{2}=I$ now gives
$\eta=-\eta^{\prime}$, or equivalently $z=-x$ which is a contradiction.
2. (b)
First suppose $-I\not\in\mathcal{S}$. Note that if $x\in\mathcal{S}$ is not
Hermitian, then $\mathcal{S}\ni x^{2}=-I$ by (iii). Now suppose $\mathcal{S}$
is non-Abelian. Then there exists
$\eta\gamma_{a},\;\eta^{\prime}\gamma_{a^{\prime}}\in\mathcal{S}$, such that
$\gamma_{a}\gamma_{a^{\prime}}=-\gamma_{a^{\prime}}\gamma_{a}$. But this
implies that $\pm\eta^{\prime\prime}\gamma_{b}\in\mathcal{S}$, where
$\eta^{\prime\prime}\gamma_{b}=\eta\eta^{\prime}\gamma_{a}\gamma_{a^{\prime}}$,
and so both $\pm(\eta^{\prime\prime})^{2}\gamma_{b}^{2}\in\mathcal{S}$. But
one of these must be $-I$, as $\gamma_{b}^{2}=\pm I$, giving a contradiction.
For the converse assume that $-I\in\mathcal{S}$ for Hermitian and Abelian
$\mathcal{S}$. Consider the cosets of $\langle-I\rangle$ in $\mathcal{S}$, and
let $\mathcal{I}$ be the set formed by picking exactly one element from each
coset. By (a), then there exists $\mathcal{I}^{\prime}$ such that
$-I\not\in\langle\mathcal{I}^{\prime}\rangle$, and
$\mathcal{S}=\langle\langle\mathcal{I}^{\prime}\rangle,-I\rangle$.
∎
Because of Pauli representations of Majoranas like Eq. (10), it is clear that
a Majorana fermion code on $m$ Majoranas corresponds to a stabilizer code on
$m/2$ qubits. Applying the Jordan-Wigner transformation, Eq. (10), to
$\mathcal{S}$ converts it to an Abelian group of Pauli operators on $m/2$
qubits, which has size upper bounded by $2^{m/2}$. Therefore,
$|\mathcal{S}|\leq 2^{m/2}$. Thus, if $\mathcal{S}$ is generated by $m/2-k$
independent operators, then the code encodes $k$ qubits (or $2k$ Majoranas).
The centralizer $\mathcal{C}(\mathcal{S})$ of $\mathcal{S}$ is the subset of
$\mathcal{J}_{m}$ that commutes with all elements of $\mathcal{S}$. The
distance $d$ of the Majorana code is the minimum weight of an element of
$\mathcal{C}(\mathcal{S})$ that is not (up to factors of $i$) in
$\mathcal{S}$, i.e.
$d=\min\\{|\gamma|:\gamma\in\mathcal{C}(\mathcal{S})\setminus\langle\\{iI\\}\cup\mathcal{S}\rangle\\}$.
Consider, for example, a simple Majorana code, generated by just one
stabilizer:
$\mathcal{S}_{\text{even}}=\langle
i^{m/2}\gamma_{0}\gamma_{1}\dots\gamma_{m-1}\rangle.$ (11)
The phase $i^{m/2}$ guarantees this stabilizer squares to $I$ and not $-I$.
This code encodes $k=(m/2-1)$ qubits into $m$ Majoranas with distance $2$,
because the centralizer consists of all $\gamma_{a}$ with even weight. The
code $\mathcal{S}_{\text{even}}$ will play an important role in our converting
Majorana codes defined on graphs to qubit stabilizer codes defined on graphs.
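The properties of $\mathcal{S}_{\text{even}}$ can be verified numerically for $m=4$. The sketch below (an assumed illustration using the Jordan-Wigner matrices of Eq. (10)) checks that the stabilizer of Eq. (11) squares to $I$ and that a monomial $\gamma_{a}$ commutes with it exactly when $|a|$ is even, consistent with the distance-$2$ claim:

```python
# Check the single-stabilizer code of Eq. (11) for m = 4.
import numpy as np
from itertools import product
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def jw_majorana(i, n_qubits):
    # γ_{2k} = X_k Π_{j<k} Z_j, γ_{2k+1} = Y_k Π_{j<k} Z_j (Eq. (10)).
    k, parity = divmod(i, 2)
    ops = [Z] * k + [X if parity == 0 else Y] + [I2] * (n_qubits - k - 1)
    return reduce(np.kron, ops)

m, n_qubits = 4, 2
g = [jw_majorana(i, n_qubits) for i in range(m)]

# S = i^{m/2} γ_0 γ_1 γ_2 γ_3, Eq. (11).
S = (1j) ** (m // 2) * g[0] @ g[1] @ g[2] @ g[3]
assert np.allclose(S @ S, np.eye(2 ** n_qubits))   # squares to I, not -I

# γ_a commutes with S if and only if |a| is even.
for a in product((0, 1), repeat=m):
    ga = reduce(np.matmul,
                [g[i] for i in range(m) if a[i]],
                np.eye(2 ** n_qubits, dtype=complex))
    commutes = np.allclose(S @ ga, ga @ S)
    assert commutes == (sum(a) % 2 == 0)
print("S_even squares to I; its centralizer is the even-weight monomials")
```

On two qubits $S$ works out to $Z\otimes Z$, so the weight-2 element $\gamma_{0}\gamma_{1}$ is a logical operator, confirming $d=2$.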
### 3.2 Majorana surface codes defined on embedded graphs
Given an embedded graph $G$ given by a rotation system $R$, we define a
Majorana stabilizer code $\mathcal{S}(R)$ (or equivalently $\mathcal{S}(G)$)
by placing Majorana operators on each half-edge and each odd-degree vertex,
and associating stabilizers to each vertex and face of the embedded graph. The
formal definition is as follows.
###### Definition 3.1.
Given a rotation system $R=(H,\lambda,\rho,\tau)$ and the associated embedded
graph $G=(V,E,F)$ with $M$ odd-degree vertices, associate a single Majorana
$\gamma_{[h]_{\tau}}$ to each half-edge $[h]_{\tau}=\\{h,\tau h\\}\in H/\tau$
and a single Majorana $\bar{\gamma}_{v}$ to each odd-degree vertex $v\in
V$. Now to every vertex $v\in V$, associate a vertex stabilizer $S_{v}$ by
$S_{v}=\bigg{\\{}\begin{array}[]{ll}i^{\text{deg}(v)/2}\left(\prod_{[h]_{\tau}\subseteq
v}\gamma_{[h]_{\tau}}\right),&\text{deg}(v)\text{ is even}\\\
i^{(\text{deg}(v)+1)/2}\left(\prod_{[h]_{\tau}\subseteq
v}\gamma_{[h]_{\tau}}\right)\bar{\gamma}_{v},&\text{deg}(v)\text{ is
odd}\end{array}$ (12)
and to every face $f\in F$, a face stabilizer $S_{f}$ by
$S_{f}=i^{|\\{h\in f:\tau h\not\in f\\}|/2}\prod_{\\{h\in f:\tau h\not\in
f\\}}\gamma_{[h]_{\tau}}.$ (13)
The order of Majoranas in these products is important. To clarify this, we
give each Majorana a unique label from $\\{0,1,\dots,m-1\\}$ satisfying
certain rules, where $m=2|E|+M$, and then the products are sorted by ascending
order in these labels. When $G$ is not checkerboardable, the labeling can be
arbitrary as long as it ensures that any two Majoranas on the same edge, e.g.
$\gamma_{[h]_{\tau}}$ and $\gamma_{[\lambda h]_{\tau}}$, have successive
labels. When $G$ is checkerboardable, the labeling should additionally ensure
that for all $0\leq j\leq m/2-1$, the Majoranas corresponding to labels $2j+1$
and $2j+2$ (evaluating the labels modulo $m$) belong to half-edges which are
subsets of the same vertex. We denote the group generated by all $S_{v}$ and
$S_{f}$ as $\mathcal{S}(R)$ (or equivalently $\mathcal{S}(G)$). The Majorana
code associated with $R$ (or $G$) has the stabilizer group $\mathcal{S}(R)$.
We note a few easy consequences of this definition. First, notice that the
total number of Majoranas is even since $M$ is, which is true because
$2|E|=\sum_{v\in V}\text{deg}(v)$. Second, notice that the vertex and the face
stabilizers have even weight, are Hermitian (since they square to $I$), and
commute. The commutation follows from Eq. (9): supports of $S_{v}$ and
$S_{v^{\prime}}$ do not overlap for distinct $v,v^{\prime}\in V$; supports of
$S_{f}$ and $S_{f^{\prime}}$ for distinct $f,f^{\prime}\in F$ overlap if and
only if they share adjacent edges, and therefore share an even number of
Majoranas; and supports of $S_{v}$ and $S_{f}$ for $v\in V$ and $f\in F$
overlap only if $v$ and $f$ are both adjacent to one or more sectors, in which
case they share an even number of Majoranas (note that this holds even if $v$
and $f$ are both adjacent to an edge $e$ that is not adjacent to any other
face).
It turns out that the Majorana labeling scheme used in Definition 3.1 ensures
that the stabilizer group $\mathcal{S}(R)$ has the property
$-I\not\in\mathcal{S}(R)$, which is formally proved below in Lemma 3.2(g), and
since each stabilizer has even weight, so does every element of
$\mathcal{S}(R)$; thus $\mathcal{S}(R)$ satisfies the two properties of a
Majorana fermion code. If a different Majorana labeling were used, or if the
products in Eqs. (12) and (13) were ordered differently, then the first
property would not necessarily hold; but in that case, by Lemma
3.1(a) one can multiply the stabilizers by either $1$ or $-1$ and get a new
set of stabilizers for which the property would still be true. By Lemma
3.1(b), $\mathcal{S}(R)$ is Abelian and Hermitian. Two examples of Majorana
surface code stabilizers are shown in Fig. 2.
We note briefly that a labeling scheme satisfying the demands of Definition
3.1 exists in the checkerboardable case (the non-checkerboardable case is
trivial). If $G$ is checkerboardable it has no odd-degree vertices, so there
exists an Euler cycle $(e_{0},e_{1},\dots,e_{|E|-1})$ that uses all the edges
in $E$. Majoranas can be labeled in the order they are encountered by
following this cycle.
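An Euler cycle of this kind can be found with Hierholzer's algorithm. A minimal sketch, assuming a multigraph given as adjacency lists with integer edge ids (the 4-cycle example below is our own):

```python
# Hierholzer's algorithm: returns the edge ids of an Euler cycle, in
# traversal order, for a connected graph with all vertex degrees even.
# Majoranas can then be labeled in this order (Definition 3.1).

def euler_cycle(adj, start):
    """adj: {vertex: [(neighbor, edge_id), ...]}; returns edge ids in order."""
    adj = {v: list(nbrs) for v, nbrs in adj.items()}   # local copy
    used = set()
    stack = [(start, None)]   # (vertex, edge used to reach it)
    cycle = []
    while stack:
        v, e_in = stack[-1]
        # Discard entries for edges already traversed from the other end.
        while adj[v] and adj[v][-1][1] in used:
            adj[v].pop()
        if adj[v]:
            w, eid = adj[v].pop()
            used.add(eid)
            stack.append((w, eid))
        else:
            stack.pop()
            if e_in is not None:
                cycle.append(e_in)
    cycle.reverse()
    return cycle

# 4-cycle v0-v1-v2-v3, all degrees even; edges labeled 0..3.
adj = {0: [(1, 0), (3, 3)], 1: [(0, 0), (2, 1)],
       2: [(1, 1), (3, 2)], 3: [(2, 2), (0, 3)]}
print(euler_cycle(adj, 0))   # → [3, 2, 1, 0]: each edge exactly once
```

Each edge contributes two Majoranas, so the cycle order yields exactly the successive-label condition required by Definition 3.1.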
Figure 2: (a) An example section of a Majorana surface code. Majorana
operators are placed on each half-edge (blue circles) and at each odd-degree
vertex (green circles). The stabilizer associated to an even-degree vertex is
the product of Majoranas around that vertex. The stabilizer associated to an
odd-degree vertex is the product of Majoranas around that vertex and the
Majorana located at that vertex. Gray lines cordon off the Majoranas involved
in these stabilizers. The stabilizer associated to the pentagonal face is the
product of the ten blue Majoranas on edges around that face. (b) A Majorana
surface code of reference [34]. In our framework, it arises from a graph
triangulating the torus. Here opposite sides of the hexagon are identified
(directionally, as indicated by arrows) along with Majoranas on those edges. A
total of 72 Majoranas survive this identification. All stabilizers associated
to both vertices and faces are the product of six Majoranas.
The next lemma is useful to understand the dependence between the stabilizers
of the Majorana surface code.
###### Lemma 3.2.
The stabilizers of the Majorana surface code for the embedded graph $G(V,E,F)$
satisfy
1. (a)
There is no non-empty subset $V^{\prime}\subseteq V$ such that $\prod_{v\in
V^{\prime}}S_{v}=\pm I$.
2. (b)
There is no non-empty, proper subset $V^{\prime}\subset V$ and no subset
$F^{\prime}\subseteq F$ such that $\prod_{v\in V^{\prime}}S_{v}=\pm\prod_{f\in
F^{\prime}}S_{f}$. If there is any odd-degree vertex, then the statement holds
for all non-empty subsets $V^{\prime}\subseteq V$.
3. (c)
If $\prod_{v\in V}S_{v}=\prod_{f\in F^{\prime}}S_{f}$ for some subset
$F^{\prime}\subseteq F$, then $G$ is checkerboardable. If $G$ is
checkerboardable then $\prod_{v\in V}S_{v}=\prod_{f\in
F^{\prime}}S_{f}=\prod_{f\in F\setminus F^{\prime}}S_{f}$ for a non-empty
proper subset $F^{\prime}\subset F$, which is determined uniquely up to taking
complements.
4. (d)
There is no non-empty, proper subset $F^{\prime}\subset F$ such that
$\prod_{f\in F^{\prime}}S_{f}=\pm I$.
5. (e)
$\prod_{f\in F}S_{f}=I$.
6. (f)
$|F|=1$ if and only if there is $f\in F$ such that $S_{f}=I$.
7. (g)
$-I\not\in\mathcal{S}(G)$.
* Proof.
We start with an observation that is needed for proofs of (c) and (g). For any
subset $F^{\prime}\subseteq F$ and any Majorana associated to a half-edge
$\gamma_{[h]_{\tau}}$, the product $\prod_{f\in F^{\prime}}S_{f}$ either
contains both $\gamma_{[h]_{\tau}}$ and $\gamma_{[\lambda h]_{\tau}}$, or
neither of them. Thus the Majoranas appearing in the product $\prod_{f\in
F^{\prime}}S_{f}$ can be grouped in pairs labeled by edges, which allows us to
identify the product with a vector $y\in\mathbb{F}_{2}^{|E|}$, with $y_{e}=1$
for $e=\\{h,\lambda h,\tau h,\lambda\tau h\\}\in E$, if and only if both
$\gamma_{[h]_{\tau}}$ and $\gamma_{[\lambda h]_{\tau}}$ appear in the product.
It follows that $y=x\Phi$, where we recall that
$\Phi\in\mathbb{F}_{2}^{|F|\times|E|}$ is the face-edge adjacency matrix of
the graph embedding, and $x\in\mathbb{F}_{2}^{|F|}$ satisfying $x_{f}=1$ if
and only if $f\in F^{\prime}$. We now return to the proof.
1. (a)
This follows because all the vertex stabilizers are independent from each
other, as they have disjoint supports.
2. (b)
First assume that $V^{\prime}\subset V$ is a non-empty proper subset. We find
an edge $e\in E$ with one endpoint in $V^{\prime}$ and one in $V\setminus
V^{\prime}$ (we use connectivity of $G$ here), and notice that since face
stabilizers either include or exclude both Majoranas on an edge together,
there is no way to take a product of face stabilizers to include only one
Majorana out of the two on edge $e$. This proves the first part. Now assume
that there is at least one odd-degree vertex $v$, and suppose $\prod_{v\in
V^{\prime}}S_{v}=\prod_{f\in F^{\prime}}S_{f}$ for some non-empty subset
$V^{\prime}\subseteq V$ and some subset $F^{\prime}\subseteq F$. Then by the
first part, $V^{\prime}=V$, so the Majorana $\bar{\gamma}_{v}$ is present in
the product $\prod_{v\in V^{\prime}}S_{v}$, which is a contradiction since no
face stabilizer contains $\bar{\gamma}_{v}$.
3. (c)
Suppose $\prod_{v\in V}S_{v}=\prod_{f\in F^{\prime}}S_{f}$ for some subset
$F^{\prime}\subseteq F$. This cannot happen if there is any odd-degree vertex,
as the product $\prod_{f\in F^{\prime}}S_{f}$ cannot contain any Majorana
associated to an odd-degree vertex. On the other hand, $\prod_{v\in V}S_{v}$
contains every Majorana associated to a half-edge, and so we can identify
$\prod_{f\in F^{\prime}}S_{f}$ with $\vec{1}\in\mathbb{F}_{2}^{|E|}$. Thus we
have $x\Phi=\vec{1}$ where $x\in\mathbb{F}_{2}^{|F|}$ satisfying $x_{f}=1$ if
and only if $f\in F^{\prime}$, so by Lemma 2.3(a) we conclude that $G$ is
checkerboardable. Now suppose that $G$ is checkerboardable. So $G$ has no odd-
degree vertices, and by Lemma 2.3(b) there exist exactly two distinct vectors
$x,x^{\prime}\in\mathbb{F}_{2}^{|F|}$ (different from $\vec{0}$ and $\vec{1}$)
such that $x\Phi=x^{\prime}\Phi=\vec{1}$, and $x+x^{\prime}=\vec{1}$. Taking
$F^{\prime}=\\{f\in F:x_{f}=1\\}$ we obtain a non-empty proper subset for
which these properties imply $\prod_{v\in V}S_{v}=\eta\prod_{f\in
F^{\prime}}S_{f}=\eta^{\prime}\prod_{f\in F\setminus F^{\prime}}S_{f}$ where
$\eta,\eta^{\prime}\in\\{\pm 1,\pm i\\}$. Now $\eta,\eta^{\prime}\neq\pm i$ as
$\mathcal{S}(G)$ is Hermitian by Lemma 3.1, so it remains to show that
$\eta,\eta^{\prime}\neq-1$. We defer this remaining part to the proof of (g).
4. (d)
This is similar to (b), but for the dual graph embedding $\overline{G}$. Find
a face $f^{\prime}\in F^{\prime}$ that shares an adjacent edge $e\in E$ with a
face $f\not\in F^{\prime}$ (this uses connectivity of the dual graph), and
notice that the Majoranas on edge $e$ appear just once in the product, which
can therefore not be identity.
5. (e)
Note that each Majorana $\gamma_{[h]_{\tau}}$ is included in either two face
stabilizers (if $h$ and $\tau h$ are in different faces) or none (if $h$ and
$\tau h$ are in the same face). Moreover, Majoranas associated to odd-degree
vertices are not included in any face stabilizer. Also note that for any flag
$h\in H$, $i\gamma_{[h]_{\tau}}\gamma_{[\lambda h]_{\tau}}$ commutes with all
Majoranas different from $\gamma_{[h]_{\tau}}$ and $\gamma_{[\lambda
h]_{\tau}}$, and squares to $I$. Using these facts, a straightforward
calculation shows that $\prod_{f\in F}S_{f}=I$.
6. (f)
This follows directly from (d) and (e).
7. (g)
Suppose this is false. Then $-I=\prod_{v\in V}S_{v}\prod_{f\in
F^{\prime}}S_{f}$ for some non-empty proper subset $F^{\prime}\subset F$, as
all other cases are ruled out by (a), (b), (d), (e), or equivalently
$\prod_{v\in V}S_{v}=-\prod_{f\in F^{\prime}}S_{f}$. By the same
identification argument used in (c), we conclude that $G$ is checkerboardable.
We will now show that in fact $\prod_{v\in V}S_{v}=\prod_{f\in
F^{\prime}}S_{f}$, which will complete the proof. Note that this also finishes
the remaining part of the proof of (c), because if either $\eta=-1$ or
$\eta^{\prime}=-1$, then $-I\in\mathcal{S}(G)$.
Since $G$ is checkerboardable, it has no odd-degree vertices, and each edge in
$E$ is adjacent to exactly one face in $F^{\prime}$. Thus let us identify the
Majoranas by their labels: define $\gamma_{j}:=\gamma_{[h]_{\tau}}$ if
$\gamma_{[h]_{\tau}}$ has label $j$. Let $\tilde{v}\in V$ be the vertex that
contains the Majoranas $\gamma_{0}$ and $\gamma_{2|E|-1}$ in its vertex
stabilizer. Then for each $v\in V\setminus\tilde{v}$, one has
$S_{v}=\prod_{j\in I_{v}}(i\gamma_{2j+1}\gamma_{2j+2})$, and
$S_{\tilde{v}}=(i\gamma_{0})\left(\prod_{j\in
I_{\tilde{v}}}i\gamma_{2j+1}\gamma_{2j+2}\right)\gamma_{2|E|-1}$, where for
every $v\in V$, $I_{v}\subseteq\\{0,\dots,|E|-2\\}$ contains those integers
$j$ such that $\gamma_{2j+1}$ is associated to a half-edge that is a subset of
$v$. Noticing that $(i\gamma_{2j+1}\gamma_{2j+2})$ commutes with all Majoranas
different from $\gamma_{2j+1}$ and $\gamma_{2j+2}$, we immediately get
$\prod_{v\in V}S_{v}=i^{|E|}\gamma_{0}\gamma_{1}\dots\gamma_{2|E|-1}$.
Similarly, each face stabilizer can be written as $S_{f}=\prod_{j\in
I_{f}}i\gamma_{2j}\gamma_{2j+1}$ where $I_{f}\subseteq\\{0,\dots,|E|-1\\}$
contains those integers $j$ for which $\gamma_{2j}$ belongs to an edge
adjacent to $f$. It follows that $\prod_{f\in
F^{\prime}}S_{f}=i^{|E|}\gamma_{0}\gamma_{1}\dots\gamma_{2|E|-1}$, completing
the proof.
∎
A consequence of Lemma 3.2(g) is that, since $\mathcal{S}(R)$ is Abelian and
Hermitian, a non-empty subset of the stabilizers is dependent if and only if
the product of its elements is $I$. Let us determine the number
of qubits encoded by this Majorana code. There are $|V|+|F|$ stabilizers as
defined. However, not all these stabilizers are independent. If the total
number of independent stabilizers is $|V|+|F|-\alpha$ for a non-negative
integer $\alpha$, then the number of encoded qubits is
$K=(2|E|+M)/2-\left(|V|+|F|-\alpha\right)=M/2-\chi+\alpha,$ (14)
using Eq. (2). The remaining work is in determining $\alpha$ by counting
dependencies in the stabilizers, which can be done easily using Lemma 3.2.
###### Theorem 3.3.
A Majorana surface code encodes
$K=\left\\{\begin{array}[]{ll}2g&,\text{ orientable}\\\ g&,\text{ non-
orientable}\end{array}\right\\}+\left\\{\begin{array}[]{ll}0&,\text{
checkerboardable}\\\ (M-2)/2&,\text{ not
checkerboardable}\end{array}\right\\}$ (15)
qubits, where conditions in brackets are properties of the rotation system
$(H,\lambda,\rho,\tau)$, or equivalently the graph embedding, defining the
code.
* Proof.
We will argue that (i) $\alpha\in\\{1,2\\}$, and (ii) $\alpha=2$ if and only
if the graph is checkerboardable. Once we have done so, and because a
checkerboardable graph does not contain any odd-degree vertices, we can use
Eq. (14) to complete the proof.
To prove (i), let $\mathcal{I}=\\{S_{v}:v\in V\\}\cup\\{S_{f}:f\in F\\}$ be
the set of stabilizers, so $\langle\mathcal{I}\rangle=\mathcal{S}(R)$. By
Lemma 3.2(e), one can always remove a single face stabilizer $S_{f}$ so that
$\langle\mathcal{I}\setminus S_{f}\rangle=\mathcal{S}(R)$, which implies
$\alpha\geq 1$. If $\mathcal{I}\setminus S_{f}$ is not independent, by Lemma
3.2(a,b,d) it must be that $\prod_{v\in V}S_{v}=\prod_{f\in F^{\prime}}S_{f}$
for some non-empty subset $F^{\prime}\subseteq F\setminus f$. If this happens
then we remove a vertex stabilizer $S_{v}$ arbitrarily after which Lemma
3.2(b) guarantees that the set $\mathcal{I}\setminus\\{S_{f},S_{v}\\}$ is
independent. Thus $\alpha\leq 2$. To prove (ii), by the preceding arguments
$\alpha=2$ if and only if $\prod_{v\in V}S_{v}=\prod_{f\in F^{\prime}}S_{f}$
for some non-empty proper subset $F^{\prime}\subset F$, and the latter is true
if and only if the graph is checkerboardable by Lemma 3.2(c). ∎
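As a quick sanity check on Theorem 3.3, the case split can be tabulated directly. The sketch below is ours, not code from the paper; the function name and the $(g,\text{orientable},\text{checkerboardable},M)$ parametrization are our own, assuming $g$ is the genus and $M$ the number of odd-degree vertices (so checkerboardable embeddings have $M=0$).

```python
def encoded_qubits(g, orientable, checkerboardable, M):
    """Number of logical qubits K of a Majorana surface code (Theorem 3.3).

    g: genus of the surface; M: number of odd-degree vertices.
    A checkerboardable embedding necessarily has M = 0.
    """
    K = 2 * g if orientable else g
    if not checkerboardable:
        K += (M - 2) // 2
    return K

# Toric code: orientable, genus one, checkerboardable -> K = 2
print(encoded_qubits(1, True, True, 0))   # 2
# A checkerboardable code on the projective plane (non-orientable, g = 1)
print(encoded_qubits(1, False, True, 0))  # 1
```

The toric-code case $K=2$ matches, for instance, the $\llbracket 24,2,4\rrbracket$ example of Fig. 4(b).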
We also point out that if the graph defining the Majorana surface code has
vertices with degree one or degree two, then there are vertex stabilizers
$S_{v}$ that are the product of just two modes. These two modes and the
stabilizer that is their product can be removed from the code without
affecting the number of encoded qubits or code distance. What may be affected
is the degree of protection arising from superselection rules [32], e.g.
Kitaev’s 1D chain [35] has many such weight-two stabilizers. Nevertheless, our
main goal is to convert these Majorana surface codes to qubit codes where
superselection is not a relevant protection. Thus, we assume from now on that
our graphs have no vertices of degree less than three.
### 3.3 From Majorana surface codes to qubit surface codes
In this subsection we replace Majorana operators with Paulis to make qubit
surface codes from the Majorana surface codes of the previous section. The
main idea is to recognize that at each even (resp. odd) degree vertex $v$,
because of the vertex stabilizer $S_{v}$, there is a copy of the code
$\mathcal{S}_{\text{even}}$, Eq. (11), defined on $\text{deg}(v)$ (resp.
$\text{deg}(v)+1$) distinct Majorana modes and encoding $(\text{deg}(v)-2)/2$
(resp. $(\text{deg}(v)-1)/2$) qubits. Therefore, to define the qubit code
given an embedded graph $G=(V,E,F)$, the first step is to place this many
qubits at each vertex. We already assumed the graph has vertices of degree at
least three, so there is at least one qubit at each vertex.
Now, consider a single vertex $v$. By definition, Paulis that act only on the
qubits at $v$ can be associated to logical operators of the code
$\mathcal{S}_{\text{even}}$ located at $v$. We are most interested in the
Paulis associated to the sector operators
$q_{[h]_{\rho}}=i\gamma_{[h]_{\tau}}\gamma_{[\rho h]_{\tau}}$ for each flag
$h\in v$. That is, $q_{[h]_{\rho}}$ is the product of the two Majoranas on
half-edges adjacent to the sector. We are interested particularly in
$q_{[h]_{\rho}}$ because the face stabilizers of the Majorana code can be
alternatively written as the product of these sector operators,
$S_{f}=\pm\prod_{[h]_{\rho}\subseteq f}q_{[h]_{\rho}}.$ (16)
Therefore, choosing Paulis $q_{[h]_{\rho}}$ will also give $S_{f}$ in terms of
Paulis.
Let us list the sector operators $\\{q_{[h]_{\rho}},q_{[\tau\rho
h]_{\rho}},\dots,q_{[(\tau\rho)^{\text{deg}(v)-1}h]_{\rho}}\\}$ starting from
some arbitrary $h\in v$. Notice that these operators have very specific
commutation rules, with adjacent (and the first and last elements also
considered adjacent) elements anticommuting with one another, and all other
pairs commuting. This leads us to study the following lists of Paulis.
###### Definition 3.2.
A cyclically anticommuting list (CAL) acting on $n$ qubits is a list of
$n$-qubit Paulis $\\{p_{0},p_{1},\dots,p_{\ell-1}\\}$, in which for distinct
$i$ and $j$, $p_{i}$ and $p_{j}$ anticommute if and only if $i=j\pm
1\mod\ell$. A CAL of length $\ell\geq 1$ is called extremal if there does not
exist a CAL of the same length acting on fewer qubits.
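Definition 3.2 is easy to check mechanically in the binary symplectic representation of Paulis. The sketch below is a minimal illustration of ours, not code from the paper: a Pauli (up to phase) is an $(x,z)$ pair of bit-tuples, and two Paulis anticommute exactly when their symplectic product is $1$.

```python
from itertools import combinations

# An n-qubit Pauli up to phase is a pair of bit-tuples (x, z):
# X on qubit i sets x[i], Z on qubit i sets z[i], Y sets both.
def anticommute(p, q):
    (px, pz), (qx, qz) = p, q
    s = sum(a & b for a, b in zip(px, qz)) + sum(a & b for a, b in zip(pz, qx))
    return s % 2 == 1

def is_cal(paulis):
    """Check Definition 3.2: p_i, p_j anticommute iff i = j +- 1 mod l."""
    l = len(paulis)
    for i, j in combinations(range(l), 2):
        adjacent = (j - i) % l in (1, l - 1)
        if anticommute(paulis[i], paulis[j]) != adjacent:
            return False
    return True

X, Y, Z = ((1,), (0,)), ((1,), (1,)), ((0,), (1,))
print(is_cal([X, Y, Z]))      # True: extremal CAL of length 3 on 1 qubit
print(is_cal([X, Z, X, Z]))   # True: extremal CAL of length 4 on 1 qubit
print(is_cal([X, Y, Z, X]))   # False: first and last entries commute
```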
Clearly, the list of sector operators $\\{q_{[h]_{\rho}},q_{[\tau\rho
h]_{\rho}},\dots,q_{[(\tau\rho)^{\text{deg}(v)-1}h]_{\rho}}\\}$ is a CAL of
length $\text{deg}(v)$ acting on $(\text{deg}(v)-2)/2$ qubits if $v$ has even
degree or acting on $(\text{deg}(v)-1)/2$ qubits if $v$ has odd degree. Let us
establish existence of CALs with a construction.
###### Theorem 3.4.
A CAL of length $\ell\geq 3$ acting on $n$ qubits in which the Paulis have
weight at most two exists if and only if
$n\geq\bigg{\\{}\begin{array}[]{ll}(\ell-2)/2,&\ell\text{ even},\\\
(\ell-1)/2,&\ell\text{ odd}.\end{array}$ (17)
* Proof.
We construct extremal CALs with length $\ell$ acting on $n$ qubits saturating
the lower bound in Eq. (17). To get non-extremal CALs, more qubits can always
be added by acting on them with identity, or just Pauli $Z$, for instance, so
long as the commutation relations are unaffected. We note that extremal CALs
of length three and four, acting on a single qubit, are $\\{X,Y,Z\\}$ and
$\\{X,Z,X,Z\\}$. To construct extremal CALs of longer length, we define a
composition operation $\circ$ that takes a CAL of length $\ell$ (even) and a
CAL of length $\ell^{\prime}$ (either even or odd) to a CAL of length
$\ell+\ell^{\prime}-2$:
$\\{p_{0},\dots,p_{\ell-1}\\}\circ\\{p^{\prime}_{0},\dots,p^{\prime}_{\ell^{\prime}-1}\\}:=\\{p_{0},\dots,p_{\ell/2-2},p_{\ell/2-1}p^{\prime}_{0},p^{\prime}_{1},p^{\prime}_{2},\dots,p^{\prime}_{\ell^{\prime}-2},p^{\prime}_{\ell^{\prime}-1}p_{\ell/2},p_{\ell/2+1}\dots
p_{\ell-1}\\},$ (18)
where the $p_{j}$s and $p^{\prime}_{j}$s represent Paulis that act on $n$ and
$n^{\prime}$ qubits, respectively. Thus the CAL on the right acts on
$n+n^{\prime}$ qubits. Repeated application of this composition suffices to
make CALs of any length out of the length three and four CALs given above.
Composition preserves extremality — if $n$ and $n^{\prime}$ are minimal qubit
counts for CALs of lengths $\ell$ and $\ell^{\prime}$, then
$n+n^{\prime}=\bigg{\\{}\begin{array}[]{ll}(\ell+\ell^{\prime}-4)/2,&\ell^{\prime}\text{
even},\\\ (\ell+\ell^{\prime}-3)/2,&\ell^{\prime}\text{ odd}\end{array}$ (19)
is the minimum number of qubits needed for a length $\ell+\ell^{\prime}-2$
CAL. A graphical depiction of this composition operation is shown in Fig. 3.
It should be clear that the construction results in CALs consisting of Paulis
of weight just one or two.
Figure 3: One way to create a CAL of length $\ell$ is to compose CALs of
lengths three or four together. In (a), we compose three CALs with length four
(right) to make a CAL with length eight (left) acting on three qubits (red
circles). In (b), we use a similar composition of two length-four CALs and one
length-three CAL to make a length-seven CAL. In terms of surface codes defined
on graphs, this composition operation means that we can always decompose
higher degree vertices into vertices with degrees just three or four. Still,
we generally will include higher degree vertices in our drawings to highlight
symmetries when possible.
There are no CALs of length $\ell$ acting on fewer qubits than in Eq. (17).
Suppose, by way of contradiction, that a CAL
$\\{p_{0},p_{1},\dots,p_{\ell-1}\\}$ acting on $n$ qubits has length
$\ell>2n+2$. Then, there is a set of Paulis
$\left\\{a_{j}=\prod_{k=0}^{j}p_{k}:j\in\\{0,1,\dots,\ell-2\\}\right\\}$ (20)
with size $>2n+1$ in which all Paulis anticommute with one another. However,
this contradicts known results on the maximum size of such anticommuting sets
[36]. ∎
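The composition of Eq. (18) can likewise be sketched in the symplectic representation, where products up to phase are bitwise XOR. The function names and representation below are ours; composing the length-four and length-three extremal CALs from the proof yields a length-five CAL on two qubits, as the theorem predicts.

```python
from itertools import combinations

# Paulis up to phase as pairs of bit-tuples (x, z); XOR of pairs is the
# product up to phase, and the symplectic form detects anticommutation.
def mul(p, q):
    return tuple(tuple(a ^ b for a, b in zip(p[i], q[i])) for i in (0, 1))

def anticommute(p, q):
    return (sum(a & b for a, b in zip(p[0], q[1]))
            + sum(a & b for a, b in zip(p[1], q[0]))) % 2 == 1

def is_cal(ps):
    return all(anticommute(ps[i], ps[j]) == ((j - i) % len(ps) in (1, len(ps) - 1))
               for i, j in combinations(range(len(ps)), 2))

def pad(p, left, right):
    return tuple(tuple([0] * left + list(c) + [0] * right) for c in p)

def compose(cal1, n1, cal2, n2):
    """Eq. (18): a CAL of even length l on n1 qubits composed with a CAL
    of length l' on n2 qubits gives a CAL of length l + l' - 2."""
    l = len(cal1)
    assert l % 2 == 0
    a = [pad(p, 0, n2) for p in cal1]    # cal1 acts on the first n1 qubits
    b = [pad(p, n1, 0) for p in cal2]    # cal2 acts on the last n2 qubits
    return (a[:l // 2 - 1] + [mul(a[l // 2 - 1], b[0])] + b[1:-1]
            + [mul(b[-1], a[l // 2])] + a[l // 2 + 1:])

X, Y, Z = ((1,), (0,)), ((1,), (1,)), ((0,), (1,))
cal5 = compose([X, Z, X, Z], 1, [X, Y, Z], 1)   # length 4 + 3 - 2 = 5
print(len(cal5), is_cal(cal5))                  # 5 True
```

The resulting length-five CAL acts on two qubits, saturating the odd-length bound $(\ell-1)/2$ of Eq. (17).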
CALs can be thoroughly studied in their own right. The interested reader can
find much more discussion of CALs in Appendix B. For instance, we show some
structural properties of extremal CALs, such as the following lemmas which we
find useful (see Corollary B.10 and Corollary B.11).
###### Lemma 3.5.
If $\\{p_{0},p_{1},\dots,p_{\ell-1}\\}$ is an extremal CAL of length $\ell\geq
3$ acting on $n$ qubits, then taking products of the $p_{i}$ generates the
entire $n$-qubit Pauli group (up to global phases).
###### Lemma 3.6.
Suppose $\\{p_{0},p_{1},\dots,p_{\ell-1}\\}$ is an extremal CAL of length
$\ell\geq 3$. Then $\prod_{j=0}^{\ell-1}p_{j}\propto I$ and, if $\ell$ is
even, $\prod_{j=0}^{\ell/2-1}p_{2j}\propto I$ and
$\prod_{j=0}^{\ell/2-1}p_{2j+1}\propto I$. Moreover, these are the only
products that are proportional to identity.
These lemmas lead to the following simple corollary.
###### Corollary 3.7.
Let $\mathcal{C}=\\{p_{0},p_{1},\dots,p_{\ell-1}\\}$ be an extremal CAL of
length $\ell\geq 3$ acting on $n$ qubits. For any Pauli $q$ acting on $n$
qubits, define the (row) vector $C_{q}\in\mathbb{F}_{2}^{\ell}$ whose
$j^{\text{th}}$ element is $1$ if and only if $q$ and $p_{j}$ anticommute. If
$\ell$ is odd define
$M^{\top}=\left(\begin{smallmatrix}1&1&\dots&1&1\end{smallmatrix}\right)\in\mathbb{F}_{2}^{1\times\ell}$
and if $\ell$ is even $M^{\top}=\left(\begin{smallmatrix}1&1&\dots&1&1\\\
1&0&\dots&1&0\end{smallmatrix}\right)\in\mathbb{F}_{2}^{2\times\ell}$. Then
$C_{q}M=\vec{0}$. Moreover, if $x\in\mathbb{F}_{2}^{\ell}$ satisfies
$xM=\vec{0}$, then there exists a (unique, up to phase) Pauli $q$ such that
$x=C_{q}$. Finally, $C_{q}=\vec{0}$ if and only if $q\propto I$.
* Proof.
Lemma 3.6 implies that the last one (two) Paulis of a CAL are not independent
from the others if $\ell$ is odd (even) and that these are the only
dependencies. These dependencies result in $C_{q}M=0$. Lemma 3.5 implies that
the first $\ell-1$ ($\ell-2$) independent Paulis of the CAL generate the
entire $n$-qubit Pauli group. Therefore, we are guaranteed the existence of a
(unique, up to phase) Pauli $q$ satisfying the commutation relations expressed
by the first $\ell-1$ ($\ell-2$) bits of $x$ and, since $xM=\vec{0}$, the
final one (two) bits of $x$ must be consistent with those commutations.
Finally, $C_{q}=\vec{0}$ implies $q$ commutes with the entire Pauli group,
which is possible if and only if $q\propto I$. ∎
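Corollary 3.7 can be illustrated on the smallest extremal CAL, $\\{X,Y,Z\\}$ with $\ell=3$. The sketch below is ours (Paulis as $(x,z)$ bit-tuples up to phase); it enumerates the vectors $C_{q}$ and confirms they are exactly the kernel of $M$, i.e. the even-weight vectors, with only $q\propto I$ giving $\vec{0}$.

```python
def omega(p, q):
    """Symplectic product over F_2: 1 iff the Paulis p and q anticommute."""
    (px, pz), (qx, qz) = p, q
    return (sum(a & b for a, b in zip(px, qz))
            + sum(a & b for a, b in zip(pz, qx))) % 2

I, X, Y, Z = ((0,), (0,)), ((1,), (0,)), ((1,), (1,)), ((0,), (1,))
cal = [X, Y, Z]   # extremal CAL of odd length l = 3 on n = 1 qubit

# C_q, the anticommutation pattern of q against the CAL, for every Pauli q
vectors = sorted(tuple(omega(q, p) for p in cal) for q in (I, X, Y, Z))
# For odd l, M is the all-ones column, so C_q M = 0 means C_q has even
# weight; every even-weight vector of F_2^3 appears exactly once, and
# only q = I gives the zero vector.
print(vectors)   # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```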
With CALs forming the basis for the logical space of the Majorana code
$\mathcal{S}_{\text{even}}$ we can complete our construction of qubit surface
codes.
###### Definition 3.3.
Suppose we have an embedded graph $G$ described by some rotation system
$R=(H,\lambda,\rho,\tau)$ and every vertex of $G$ has degree at least three.
At vertex $v$, place $N_{v}=\lceil(\text{deg}(v)-2)/2\rceil$ qubits, and
associate a Hermitian Pauli $q_{[h]_{\rho}}$ to each sector $[h]_{\rho}$
around that vertex. These Paulis $\\{q_{[h]_{\rho}},q_{[\tau\rho
h]_{\rho}},\dots,q_{[(\tau\rho)^{\text{deg}(v)-1}h]_{\rho}}\\}$ should form an
extremal CAL of length $\text{deg}(v)$ acting on the $N_{v}$ qubits at $v$. To
every face $f$ of the graph we associate a stabilizer, the product of all
Paulis associated to sectors $[h]_{\rho}\subseteq f$.
By the discussion starting this subsection, this definition ensures the
codespaces of the Majorana code associated to $G$ (Definition 3.1) and the
qubit surface code associated to $G$ (Definition 3.3) are the same, both
encoding $K$ qubits from Theorem 3.3, leading to the following corollary,
which is formally proved in Appendix C.
###### Corollary 3.8.
A qubit surface code encodes
$K=\left\\{\begin{array}[]{ll}2g&,\text{ orientable}\\\ g&,\text{ non-
orientable}\end{array}\right\\}+\left\\{\begin{array}[]{ll}0&,\text{
checkerboardable}\\\ (M-2)/2&,\text{ not
checkerboardable}\end{array}\right\\}$ (21)
qubits, where conditions in brackets are properties of the rotation system
$(H,\lambda,\rho,\tau)$, or equivalently the graph embedding, defining the
code.
Fig. 4, the qubit versions of the Majorana codes from Fig. 2, should help in
understanding the correspondence of Majorana and qubit surface codes. Further
examples of qubit surface codes fitting our definition and appearing in prior
literature are shown in Fig. 5.
Notice that while a Majorana surface code has stabilizers associated to both
vertices and faces, a qubit surface code only has stabilizers associated to
faces. This is because we introduced just the right number of qubits at each
vertex to automatically enforce the vertex stabilizers of the Majorana surface
code. Alternatively, we introduced just enough physical qubits to span the
codespaces of the Majorana codes $\mathcal{S}_{\text{even}}$ that exist at
each vertex. This fact can also be illustrated with a simple qubit count –
with one qubit for every two Majoranas (e.g. using the Jordan-Wigner
transformation, Eq. (10)) we would have $(2|E|+M)/2=|E|+M/2$ qubits (recall
$M$ is the number of odd-degree vertices), but instead we have used
$N=\sum_{v\in V}N_{v}=M/2+\sum_{v\in
V}\left(\text{deg}(v)-2\right)/2=|E|-|V|+M/2$ (22)
qubits. Each of the $|V|$ “missing” qubits is a degree of freedom eliminated
by enforcing a stabilizer $S_{v}$.
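The qubit count of Eq. (22) is easy to verify numerically from a degree sequence alone. This small sketch is ours, not from the text; it simply checks that $\sum_{v}N_{v}$ agrees with $|E|-|V|+M/2$.

```python
from math import ceil

def qubit_count(degrees):
    """Check the two expressions in Eq. (22) against each other for a
    graph with the given vertex degree sequence (all degrees >= 3)."""
    assert all(d >= 3 for d in degrees)
    V = len(degrees)
    E = sum(degrees) // 2                        # handshake lemma
    M = sum(d % 2 for d in degrees)              # odd-degree vertices
    N = sum(ceil((d - 2) / 2) for d in degrees)  # sum over v of N_v
    assert N == E - V + M // 2
    return N

# a 3x3 square-lattice graph on the torus: 9 vertices, all of degree 4
print(qubit_count([4] * 9))   # 9
```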
For a stabilizer code with stabilizer group $\mathcal{S}$, the _logical
operators_ are the elements of the centralizer $\mathcal{C}(\mathcal{S})$,
i.e. the group of all Pauli operators that commute with all elements of
$\mathcal{S}$. The centralizer includes the stabilizer group itself, and so we
refer to elements of $\mathcal{S}$ as trivial logical operators as they apply
the logical identity to encoded qubits. The code distance $D$ of a stabilizer
code is the minimum weight of an element of
$\mathcal{C}(\mathcal{S})\setminus\langle\\{iI\\}\cup\mathcal{S}\rangle$. We
use the notation $\llbracket N,K,D\rrbracket$ to concisely present the code
parameters of a stabilizer code.
Figure 4: Qubit versions of the Majorana codes in Fig. 2. Place a single qubit
on vertices with degrees three or four and a pair of qubits on vertices with
degrees five or six. Around each vertex write a cyclically anticommuting list
of Paulis (acting on qubits at that vertex). Each face represents a stabilizer
defined as the product of all Paulis written within that face. Note that (b)
depicts a toric code with $X^{\otimes 4}$ and $Z^{\otimes 4}$ stabilizers but
on a lattice different from Kitaev’s square lattice [1]. In this case, the
code is $\llbracket 24,2,4\rrbracket$. The number of encoded qubits is clear
by Corollary 3.8 as this graph embedding is checkerboardable. Figure 5: (a)
The triangular surface code [6], (b) the rotated surface code [3] and (c) a
stellated surface code [8] with even greater symmetry. Each code fits in our
framework – in this case, by Definition 3.3 applied to these planar graphs. In
(b), we show explicitly the assignment of CALs that gives the familiar rotated
surface code. Notice that the outer face also gets assigned a stabilizer.
Because of Lemma 3.2(e), the product of stabilizers on all non-outer faces is
proportional to the stabilizer on the outer face.
To further demonstrate Definition 3.3, in Fig. 6 we show a family of
topological codes that includes the well-known $\llbracket 5,1,3\rrbracket$
code as its smallest member. Also included as a subset are the cyclic codes
given in Example 11 and Figure 3 of [37]. The general construction gives
cyclic codes defined on the torus. Consider the typical fundamental square for
the torus, i.e. a unit square with opposite sides identified, and draw the
lines $y=bx/a$ and $y=-ax/b$, where we assume $b>a\geq 1$ are integers and
$\text{gcd}(a,b)=1$. These two lines intersect at $N=a^{2}+b^{2}$ points
within the fundamental square, including the single point at
$(0,0)=(0,1)=(1,0)=(1,1)$. Interpret these intersections as vertices of a
4-regular graph with edges the line segments between them. With correct choice
of CALs, so that each face stabilizer has two $X$s and two $Z$s (see Fig. 6
for the convention we use), and with qubits labeled in a sequence along the
line $y=bx/a$ (moving outwards from the origin in the direction of increasing
$x$), the surface code defined by this graph and Definition 3.3 is cyclic — an
overcomplete generating set of the stabilizer group consists of $Z\otimes
X\otimes I^{\otimes s}\otimes X\otimes Z\otimes I^{\otimes(N-s-4)}$ and its
cyclic permutations, where $s=\lceil Nt/b\rceil-2$ and $t$ is the minimal
positive integer such that $(ta+1)/b$ is an integer. (The extended Euclidean
algorithm on $(a,b)$ can be used to determine $t$: a Bézout pair $(x,y)$ of
$(a,b)$ is a pair of integers such that $xa+yb=\text{gcd}(a,b)$, and
$(x,y)=(-t,(ta+1)/b)$ is the unique Bézout pair of $(a,b)$ in which $-b\leq
x\leq 0$.) We have the following parameters for these codes.
Figure 6: (a) The smallest error-correcting quantum code, the $\llbracket
5,1,3\rrbracket$ code, is a surface code defined by an embedding of the
5-vertex complete graph $K_{5}$ in the torus. There are different embeddings
of $K_{5}$ in the torus, but no other gives a 5-qubit code with distance more
than two. Parts (b-f) show members of the cyclic toric code family with more
qubits. See Theorem 3.9 and the paragraph above it for the code definition and
parameters. A demonstrative CAL is written around one vertex in each graph.
The codes are cyclic when this same CAL is orientated similarly around each
vertex.
###### Theorem 3.9.
The cyclic toric code with integer parameters $a$ and $b$, where without loss
of generality $\text{gcd}(a,b)=1$ and $b>a\geq 1$, is a $\llbracket
N=a^{2}+b^{2},K,D\rrbracket$ code, with
1. 1.
If $N$ is odd, $K=1$ and $D=a+b$,
2. 2.
If $N$ is even, $K=2$ and $D=b$.
The odd and even cases differ in the number of encoded qubits because of
checkerboardability (the latter case being checkerboardable and the former
not). Note that a consequence of the theorem is that the code distance $D$ is
always odd. The codes in Example 11 and Figure 3 of [37] are cyclic toric
codes with $a=b-1$.
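The quantities $N$, $t$ and $s$ in the cyclic stabilizer description above can be computed directly. In the sketch below (our function name, not from the paper), the extended Euclidean step is delegated to Python's three-argument `pow`, which returns the modular inverse of $a$ modulo $b$.

```python
from math import ceil, gcd

def cyclic_toric_params(a, b):
    """For the cyclic toric code with gcd(a, b) = 1 and b > a >= 1, return
    (N, t, s): t is the minimal positive integer with (t*a + 1)/b an
    integer, found from the modular inverse of a mod b (extended Euclid,
    here via Python's pow), and s = ceil(N*t/b) - 2 as in the text."""
    assert b > a >= 1 and gcd(a, b) == 1
    N = a * a + b * b
    t = (-pow(a, -1, b)) % b       # t * a = -1 (mod b), so 0 < t < b
    s = ceil(N * t / b) - 2
    return N, t, s

# the [[5,1,3]] code: a = 1, b = 2 gives N = 5, t = 1, s = 1, so the
# cyclic stabilizer reads Z X I X Z
print(cyclic_toric_params(1, 2))   # (5, 1, 1)
```

The output stabilizer $ZXIXZ$ is indeed a cyclic shift of the familiar $XZZXI$ generator of the five-qubit code.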
We delay the proof of this theorem, in particular the code distances, until
Section 5.2, after we have developed sufficient tools in Section 4. We point
out that in two situations, the cyclic toric codes achieve
$N=\frac{1}{2}KD^{2}+O(D)$ – when $a=b-1=(D-1)/2$ and when $b=D$ is odd with
$a=1$. Moreover, $1/2$ is the best constant achievable for the cyclic toric
codes.
To conclude this section, we point out a corollary on the structure of qubit
surface codes that follows from the proof of Theorem 3.4, and in particular
the vertex decomposition demonstrated in Fig. 3. We use this corollary later
to justify considering only graphs of low degree.
###### Corollary 3.10.
For any graph $G$, there is a choice of CALs such that the surface code
associated with $G$ is the same as the code associated with another graph
$G^{\prime}$ that only has vertices with degrees three and four.
### 3.4 Relation to homological surface codes
Finally, we want to point out the relation of Definition 3.3 with the well-
known homological construction of surface codes [1, 38, 39]. In the
homological construction, a single qubit is placed on each edge of a graph
embedding $G=(V,E,F)$. Stabilizers correspond to each vertex $v\in V$ and face
$f\in F$:
$S^{\prime}_{v}=\prod_{e\in v}X_{e},\quad S^{\prime}_{f}=\prod_{e\in f}Z_{e},$
(23)
where $e\in v$ and $e\in f$ are shorthand for saying edge $e$ is adjacent to
the vertex $v$ or face $f$, and of course $X_{e}$, $Z_{e}$ are Paulis on the
qubit on that edge.
The _medial graph_ $\widetilde{G}$ of an embedded graph $G=(V,E,F)$ has a
vertex for each edge of $G$ and connects them with edges such that vertices
and faces of $G$ form faces of $\widetilde{G}$. See some examples in Fig. 7.
It should be clear that the homological surface code defined on graph $G$ is
exactly the same code as our surface code, Definition 3.3, defined on the
medial graph $\widetilde{G}$. Thus, Definition 3.3 includes all homological
codes.
We can formally define the medial graph using rotation systems.
###### Definition 3.4.
The _medial rotation system_
$\widetilde{R}=(\widetilde{H},\widetilde{\lambda},\widetilde{\rho},\widetilde{\tau})$
of a rotation system $R=(H,\lambda,\rho,\tau)$ is defined by setting
$\widetilde{H}=H\times\\{1,-1\\}$ and for $(h,j)\in\widetilde{H}$ letting
$\widetilde{\tau}(h,j)=(h,-j)$, $\widetilde{\lambda}(h,j)=(\rho(h),j)$, and
$\widetilde{\rho}(h,j)=\bigg{\\{}\begin{array}[]{ll}(\lambda(h),j),&\text{if
}j=-1\\\ (\tau(h),j),&\text{if }j=1\end{array}.$ (24)
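Definition 3.4 transcribes almost verbatim into code on permutations stored as dicts. The sketch below is ours; the four-flag example at the bottom uses placeholder involutions chosen only to exercise the construction, not a rotation system drawn from the paper.

```python
def medial_rotation_system(H, lam, rho, tau):
    """Definition 3.4, transcribed directly: flags of the medial system
    are pairs (h, j) with j in {+1, -1}; lam, rho, tau are dicts on H."""
    Hm = [(h, j) for h in H for j in (1, -1)]
    tau_m = {(h, j): (h, -j) for (h, j) in Hm}
    lam_m = {(h, j): (rho[h], j) for (h, j) in Hm}
    rho_m = {(h, j): (lam[h], j) if j == -1 else (tau[h], j)
             for (h, j) in Hm}
    # sanity check: each medial map permutes the medial flag set
    for m in (tau_m, lam_m, rho_m):
        assert sorted(m.values()) == sorted(Hm)
    return Hm, lam_m, rho_m, tau_m

# toy system on four flags (placeholder involutions, for illustration only)
H = [0, 1, 2, 3]
lam = {0: 1, 1: 0, 2: 3, 3: 2}
rho = {0: 2, 2: 0, 1: 3, 3: 1}
tau = {0: 3, 3: 0, 1: 2, 2: 1}
Hm, lam_m, rho_m, tau_m = medial_rotation_system(H, lam, rho, tau)
print(len(Hm), all(tau_m[tau_m[f]] == f for f in Hm))   # 8 True
```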
Some facts about medial graphs become apparent (we prove these in Appendix E).
First, the medial graph embeds into the same manifold as the original graph.
Second, medial graphs are also always 4-valent (every vertex is degree four).
Third, medial graphs are always checkerboardable – e.g. a natural way to color
faces in $\widetilde{G}$ is to color black faces that correspond to vertices
of $G$ and color white faces corresponding to faces of $G$. In fact, an
embedded graph is the medial graph of another embedded graph if and only if it
is 4-valent and checkerboardable. As a result, there are codes resulting from
Definition 3.3 that cannot be described as homological codes.
The reader may also refer to [3] where checkerboardable, 4-valent graphs (i.e.
medial graphs) are used to define homological surface codes directly and to
[40] where this connection between homological codes and medial graphs is also
pointed out.
Figure 7: Graphs (gray edges with small, black vertices) on the torus (a) and
projective plane, (b) and (c), are drawn with their medial graphs (black edges
and large, red vertices). In (a), we get a $\llbracket 8,2,2\rrbracket$ toric
code. In (b), we get Shor’s $\llbracket 9,1,3\rrbracket$ code [41] with the
gray graph coming from [39]. In (c), we show another $\llbracket
9,1,3\rrbracket$ code from [39].
Of course, forays have already been made beyond the strict homological surface
code construction. We have already seen some examples in the forms of the
triangular, rotated, and stellated codes, Fig. 5. Our goal however is a more
rigorous framework that generalizes these specific instances.
One way in which the homological surface code construction can be rigorously
generalized to produce more general planar codes is described by Freedman and
Meyer [39]. Let us describe this construction using the language of medial
graphs. Start with a homological surface code defined on an improper embedding
of a graph in orientable surface of genus $g$. By an improper embedding, we
mean one face is not an open disk but instead a $g$-punctured disk, e.g. if
$g=1$ the $g$-punctured face is a cylinder. Now, form the medial graph of this
improperly embedded graph444We defined the medial graph of an embedded graph.
In this case, the graph is not properly embedded. When we construct the medial
graph here, we do so by drawing proper, disk-like faces around each vertex..
The medial graph will also be improperly embedded, but it is still 4-valent
and checkerboardable. Because of the redundancy in stabilizers (see proof of
Theorem 3.3), we can delete, i.e. remove from the stabilizer group, any single
black as well any single white face without changing the code. Thus, delete
the face that is $g$-punctured (say it is colored white), a black face sharing
an adjacent edge, and any edges shared between them. The result is a planar
graph with some degree-3 vertices (because of the deletion of edges). Examples
producing the rotated surface code and the Bravyi-Kitaev planar code [38] are
shown in Fig. 8.
Figure 8: Improper embeddings of graphs (gray), and their medial graphs
(black), on the torus. In both pictures, the face (of the medial graph
embedding) intersecting the top and bottom frames is not an open disk but
instead a cylinder. Deleting this face, the face intersecting the left and
right frames, and their shared adjacent edges (highlighted) results in
familiar planar codes. In (a) we find the $\llbracket 9,1,3\rrbracket$ rotated
surface code and in (b) Bravyi and Kitaev’s planar code [38], in this instance
$\llbracket 18,1,3\rrbracket$.
Notably, this improper embedding procedure does not produce non-
checkerboardable graph embeddings in surfaces of genus $g>0$, so it does
not produce the cyclic toric codes in Fig. 6(a,c,d,f) for instance. We have
also not found a way to produce stellated codes with odd symmetry, e.g. parts
(a) and (c) of Fig. 5, using the improper embedding technique, the apparent
obstacle being the presence of an odd degree vertex in the center of the
planar graph, i.e. not adjacent to the outer face. This appears an obstacle
because the outer face and the odd degree vertices are effectively created by
deleting faces and their shared adjacent edges from the improper embedding,
with the result being that any odd degree vertices are adjacent to the outer
face. Hence, we find this construction involving improper embeddings and face
deletions to be more complicated than our construction while also producing
fewer codes.
## 4 Locating logical operators
In this section, we discuss the structure of logical operators in the
generalized surface codes of Definition 3.3. One of our main findings is that
logical operators are exactly the homologically non-trivial cycles in a
related graph, called the decoding graph. This is probably not a huge surprise
to someone familiar with homological codes, but there are some technical
hurdles to overcome in proving it in our generalized setting, including the
fact that the decoding graph for non-checkerboardable codes is not generally
embeddable in the same manifold $\mathcal{M}$ as the original graph. Without a
graph embedding, there is no notion of homology, so finding an appropriate
embedding of the decoding graph is essential to the proof.
In Section 4.1 we generalize checkerboardability to include “defects”, useful
in the subsequent sections. In Section 4.2 we define the decoding graph, show
that cycles in the decoding graph correspond to logical operators of the code
(both trivial and non-trivial), and prove it embeds in $\mathcal{M}$ in the
checkerboardable case. In Section 4.3, we define a (generally higher-genus)
manifold in which we can embed the decoding graph, and prove that homological
non-triviality in this new manifold equates with logical non-triviality.
Finally, in Section 4.5, we discuss how some logical operators (including all
trivial ones) can be represented by paths in the original graph.
### 4.1 Checkerboardability with defects
In Definition 2.3 we defined what it means for a graph embedding to be
checkerboardable. In this section, we generalize this notion to include
defects. Informally put, a defect is a set of edges that can be deleted from
the graph embedding to make it checkerboardable. Defects have a few uses in
finding or identifying logical operators in surface codes which are covered in
the subsequent subsections.
To define checkerboardability with defects more rigorously, we use the matrix
$\Phi\in\mathbb{F}_{2}^{|F|\times|E|}$ from Section 2.4, which encodes the
edges belonging to each face of the embedding.
###### Definition 4.1.
We say a graph embedding $G=(V,E,F)$ is checkerboardable with defect
$\delta\in\mathbb{F}_{2}^{|E|}$ if there is a vector
$x\in\mathbb{F}_{2}^{|F|}$ such that $x\Phi=\vec{1}+\delta$.
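Deciding Definition 4.1 is a linear-algebra problem over $\mathbb{F}_{2}$: solve $x\Phi=\vec{1}+\delta$. The sketch below (our function name, not from the paper) does so by Gaussian elimination on the transposed system; the greedy Algorithm 1 given later in this section is the more practical route.

```python
def checkerboardable_with_defect(Phi, delta):
    """Definition 4.1: decide whether x * Phi = 1 + delta has a solution
    over F_2, returning one solution or None. Phi is a list of |F| rows,
    each a list of |E| bits; delta is a list of |E| bits."""
    F, E = len(Phi), len(Phi[0])
    b = [1 ^ d for d in delta]
    # augmented matrix of the transposed system Phi^T x^T = b^T
    M = [[Phi[r][c] for r in range(F)] + [b[c]] for c in range(E)]
    pivots, r = [], 0
    for c in range(F):                       # eliminate column by column
        pivot = next((i for i in range(r, E) if M[i][c]), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(E):
            if i != r and M[i][c]:
                M[i] = [u ^ v for u, v in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    if any(M[i][-1] for i in range(r, E)):   # inconsistent system
        return None
    x = [0] * F
    for i, c in enumerate(pivots):
        x[c] = M[i][-1]
    return x

# a 2-cycle on the sphere: two faces, each bounded by both edges
Phi = [[1, 1], [1, 1]]
print(checkerboardable_with_defect(Phi, [0, 0]))   # [1, 0]: checkerboardable
print(checkerboardable_with_defect(Phi, [1, 0]))   # None: not a valid defect
```

The rejected candidate $\delta=(1,0)$ is consistent with Lemma 4.3: both vertices of the 2-cycle have even degree but would touch an odd number of defect edges.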
Clearly, being checkerboardable according to Definition 2.3 is equivalent to
being checkerboardable with the trivial defect $\delta=\vec{0}$. The analogous
lemma to Lemma 2.2 is as follows.
###### Lemma 4.1.
$G$, described by rotation system $R=(H,\lambda,\rho,\tau)$, is
checkerboardable with a defect if and only if we can partition $H$ into two
sets $H_{w}$ and $H_{b}$ such that $\lambda$ and $\rho$ map both sets to
themselves, while $\tau$ maps elements of either set to the other set,
_except_ when those flags are part of an edge in the defect.
A given embedded graph does not have a unique defect. However, different
choices of defect are related by the addition of rows of $\Phi$. That is, by
simple application of the definition, the following is true.
###### Lemma 4.2.
Suppose a graph embedding $G=(V,E,F)$ is checkerboardable with defect
$\delta_{1}$. Then, $G$ is checkerboardable with defect $\delta_{2}$ as well
if and only if there exists $x$ such that $x\Phi=\delta_{1}+\delta_{2}$.
Moreover, a defect cannot be just any subset of edges: it must be a sum of
certain _trails_ in the graph.
Recall that $\mathcal{T}(G)\cong\mathbb{F}_{2}^{|E|}$ is the group of all
trails in the graph. A subgroup of $\mathcal{T}(G)$ is generated by just the
closed trails and trails whose endpoints are odd-degree vertices. We call this
subgroup $\mathcal{T}_{0}(G)$.
###### Lemma 4.3.
Suppose the graph embedding $G=(V,E,F)$ is checkerboardable with defect
$\delta\in\mathbb{F}_{2}^{|E|}$. Then, $\delta\in\mathcal{T}_{0}(G)$. Let
$a_{v}=\sum_{e\ni v}\delta_{e}$ evaluated over $\mathbb{F}_{2}$ indicate the
parity of the number of edges that are adjacent to $v\in V$ and in the defect.
Then, $a_{v}=1$ if and only if $\deg(v)$ is odd.
* Proof.
The way that $\delta$ can fail to be in $\mathcal{T}_{0}(G)$ is if $a_{v}=1$
for some even-degree vertex $v$. However, if this were the case, it would
imply that $G$ is not checkerboardable with defect $\delta$ because it fails
to be so locally around vertex $v$. We would need to two-color faces around
$v$ such that faces on opposite sides of an edge $e\ni v$ are different colors
if $\delta_{e}=0$ and the same color if $\delta_{e}=1$. This is impossible for
an even-degree vertex with $a_{v}=1$. This shows
$\delta\in\mathcal{T}_{0}(G)$. The remaining claim, that $a_{v}=1$ for odd-
degree vertices $v$ follows from a similar argument of colorability in the
local neighborhood of $v$. ∎
Finally, we point out that for any graph embedding $G$ it is efficient
(polynomial time in graph size) to perform two related tasks: (1) determine
some $\delta\in\mathbb{F}_{2}^{|E|}$ such that $G$ is checkerboardable with
defect $\delta$ and (2) given a candidate $\delta\in\mathbb{F}_{2}^{|E|}$
determine whether $G$ is checkerboardable with defect $\delta$. Algorithm 1
performs both tasks via a greedy strategy.
The inputs are, first, a representation of the faces in the graph embedding
(to be concrete, we use the binary vectors $\phi_{i}\in\mathbb{F}_{2}^{|E|}$,
$i=1,2\dots,|F|$, that are the rows of $\Phi$) and, second, a candidate defect
$\delta$. The outputs are a list of faces colored black, a list of those
colored white, and a defect. The algorithm starts by arbitrarily coloring one
face and continues by coloring faces that neighbor colored faces appropriately
(the same color if they neighbor across a defect edge and differently
otherwise). If ever a face cannot be colored either color without
contradiction, edges are added or removed from the defect to make it work.
It is clear this algorithm solves task (1). It also solves task (2): the
algorithm guarantees that the output defect $\gamma$ is the same as the input
$\delta$ if and only if $\delta$ is a defect. In the course
of the algorithm, each edge is considered at most once in the while loop, the
body of which takes $O(\max_{i}|\phi_{i}|)$ time using appropriate data
structures,555In particular, one needs adjacency oracles that take a part of
the graph embedding (e.g. a face) and return other adjacent parts (e.g. the
adjacent edges). These oracles or similar are standard in the field of
computational graph theory – for instance, the “doubly-connected edge list”
data structure used in [42]. leading to a total time complexity of just
$O(|E|\max_{i}|\phi_{i}|)$ for both tasks (1) and (2).
Algorithm 1 Returns checkerboard colors for each face and a list of edges in the defect
1: procedure Checkerboard($\\{\phi_{1},\phi_{2},\dots,\phi_{|F|}\\}\in\left(\mathbb{F}_{2}^{|E|}\right)^{|F|}$, $\delta\in\mathbb{F}_{2}^{|E|}$)
2: Set $B\leftarrow\\{1\\}$, $W\leftarrow\emptyset$ # Sets of faces colored black and white
3: Set $\gamma\leftarrow\delta$ # This is a record of the defect edges (vector in $\mathbb{F}_{2}^{|E|}$)
4: Set $b\leftarrow\phi_{1}$ # This is a list of “boundary” edges of the faces already colored (vector in $\mathbb{F}_{2}^{|E|}$)
5: For any edge $e$ that is internal to a face (same face on both sides), set $\gamma_{e}\leftarrow 1$.
6: While $b$ is not $\vec{0}$:
7: Let $e$ be the first edge such that $b_{e}=1$
8: If $e$ is adjacent to a face $f\not\in B\cup W$ and a face $g\in B\cup W$:
9: If $g\in W$ and $\gamma_{e}=0$, append $f$ to $B$.
10: If $g\in B$ and $\gamma_{e}=1$, append $f$ to $B$.
11: If $f\not\in B$, append $f$ to $W$.
12: Set $b_{e^{\prime}}=1$ for any $e^{\prime}$ that borders both $f$ and a face $f^{\prime}\not\in B\cup W$.
13: else:
14: If the two faces adjacent to $e$ are both in $B$ or both in $W$, set $\gamma_{e}\leftarrow 1$. Else, set $\gamma_{e}\leftarrow 0$.
15: Set $b_{e}=0$
16: Return $B,W,\gamma$
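The pseudocode above can be sketched in Python as follows. The face representation is ours: instead of the $\phi_{i}$ vectors we pass each face as a list of its edges with multiplicity, so that edges internal to a face (step 5) remain visible.

```python
from collections import deque

def checkerboard(faces, delta):
    """A sketch of Algorithm 1. faces[i] lists the edges on face i with
    multiplicity (an edge with the same face on both sides appears twice);
    delta is the candidate defect as a set of edge indices. Returns the
    black faces, the white faces, and the (possibly repaired) defect."""
    gamma = set(delta)
    sides = {}                                  # edge -> the two faces on it
    for i, f in enumerate(faces):
        for e in f:
            sides.setdefault(e, []).append(i)
    for e, (f, g) in sides.items():             # step 5: internal edges
        if f == g:
            gamma.add(e)
    color = {0: 'B'}                            # step 2: first face is black
    queue = deque(faces[0])                     # step 4: its boundary edges
    while queue:                                # step 6
        e = queue.popleft()
        f, g = sides[e]
        if f in color and g in color:           # step 14: repair the defect
            if (color[f] == color[g]) != (e in gamma):
                gamma ^= {e}
            continue
        if g in color:                          # make f the colored face
            f, g = g, f
        same = e in gamma                       # steps 9-11
        color[g] = color[f] if same else ('W' if color[f] == 'B' else 'B')
        queue.extend(faces[g])                  # step 12
    B = [i for i in color if color[i] == 'B']
    W = [i for i in color if color[i] == 'W']
    return B, W, sorted(gamma)

# two faces glued along two edges (a 2-cycle on the sphere)
print(checkerboard([[0, 1], [0, 1]], set()))    # ([0], [1], [])
print(checkerboard([[0, 1], [0, 1]], {0}))      # ([0, 1], [], [0, 1])
```

In the second call the input $\delta=\\{0\\}$ is not a valid defect, so the algorithm repairs it to $\\{0,1\\}$, under which both faces may share one color.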
### 4.2 Characterizing logical operators as cycles in the decoding graph
Recall that $\mathcal{C}(\mathcal{S})$ is the group of logical operators of
the stabilizer code with stabilizer $\mathcal{S}$. We characterize
$\mathcal{C}(\mathcal{S})$ for our surface codes as the cycles in a related
graph, referred to here as the decoding graph.
###### Definition 4.2.
Given an embedded graph $G=(V,E,F)$, an associated decoding graph
$G_{\text{dec}}$ is a (unembedded) graph with vertices $V_{\text{dec}}$ of
three types: (1) a vertex $u_{f}$ associated to each face of $G$, (2) a vertex
$w^{(0)}_{v}$ associated to each odd-degree vertex of $G$, and (3) two
vertices $w^{(1)}_{v}$ and $w^{(-1)}_{v}$ associated to each even-degree
vertex of $G$. For each sector $[h]_{\rho}$ of $G$, there is an edge
$e_{[h]_{\rho}}$ in $G_{\text{dec}}$. Choose $v\in V$ and $f\in F$ such that
$h\in v$ and $h\in f$. If $v$ is odd-degree, $e_{[h]_{\rho}}$ connects
vertices $w^{(0)}_{v}$ and $u_{f}$. If $v$ is even-degree, $e_{[h]_{\rho}}$
connects vertices $u_{f}$ to $w^{(j)}_{v}$ where $j$ is such that the edges
associated to the adjacent sectors, i.e. $e_{[\tau h]_{\rho}}$ and
$e_{[\tau\rho h]_{\rho}}$, are incident to $w^{(-j)}_{v}$. All decoding graphs
of $G$ are equivalent up to relabeling vertices $w^{(1)}_{v}$ and
$w^{(-1)}_{v}$, so we just speak of _the_ decoding graph of $G$.
In Fig. 9, we draw the decoding graph locally around vertices of small degree.
Our definition differs slightly from the decoding graphs commonly used to
decode surface codes (e.g. [43]) in that we introduce vertices at the vertices
of $G$ as well as at the faces. One reason we do this is so we can make precise
statements about embedding the decoding graph or its connected components into
various manifolds. A consequence of this decision however is that the decoding
graph’s systole is doubled. The reader knowledgeable in topological quantum
error correction should not be concerned by the appearance of a factor of
$1/2$ when we calculate the code distance in terms of this systole.
Consider now the problem of relating cycles in $G_{\text{dec}}$ to elements of
$\mathcal{C}(\mathcal{S})$. Notice that the decoding graph is bipartite, with
vertices associated to faces of $G$ (type 1 in the Definition) on one side and
vertices associated to vertices of $G$ (types 2 and 3) on the other.
Therefore, any cycle in $G_{\text{dec}}$ has even length and can be broken into
two-edge paths of consecutive edges $(e_{[h]_{\rho}},e_{[k]_{\rho}})$, where $h,k$ are
flags in the same vertex $v$ but not in the same sector. We let
$\mathcal{T}_{2}(G_{\text{dec}})\leq\mathcal{T}(G_{\text{dec}})$ denote the
group of two-edge paths of $G_{\text{dec}}$.
Define a map
$\sigma:\mathcal{T}_{2}(G_{\text{dec}})\rightarrow\mathcal{C}(\mathcal{S})$
that translates each two-edge path $(e_{[h]_{\rho}},e_{[k]_{\rho}})$ to a
Pauli acting on qubits at the vertex $v$. This Pauli is chosen to anti-commute
with only two elements, $q_{[h]_{\rho}}$ and $q_{[k]_{\rho}}$, of the CAL
associated with $v$ that defines the surface code on $G$ (see Definition 3.3).
This Pauli exists and is unique (up to phase) by Corollary 3.7.
Now we state some consequences of Corollary 3.7 for $\sigma$. First, if we
ignore phases on the Paulis, $\sigma$ is actually surjective – every Pauli up
to phase is represented by some collection of two-edge paths in
$G_{\text{dec}}$ – and, second, $\sigma$ is a homomorphism – for all
$t_{1},t_{2}\in\mathcal{T}_{2}(G_{\text{dec}})$, we have
$\sigma(t_{1}+t_{2})=\sigma(t_{1})\sigma(t_{2})$. Finally, $\sigma(t)\propto
I$ if and only if $t$ is the empty set.
Figure 9: The decoding graph (red/blue) pictured locally around vertices of
degrees 3 through 6. In odd-degree cases, all vertices in the surrounding
faces connect to the central vertex. In the even-degree cases, there are two
distinct central vertices, red and blue (not shown), and faces connect to
these alternately.
By surjectivity of $\sigma$, if we have a logical operator
$l\in\mathcal{C}(\mathcal{S})$, then it can be represented by a set of edges
$E_{l}\subseteq E_{\text{dec}}$ in the decoding graph. Because the logical
operator must commute with all stabilizers $S_{f}$, the number of edges from
$E_{l}$ incident to any vertex $u_{f}$ of the decoding graph (i.e. a face in
the original graph) must be even. Therefore, $E_{l}$ is a sum of cycles in the
decoding graph. The converse is also clearly true – any cycle $c$ in the
decoding graph maps to a logical operator
$\sigma(c)\in\mathcal{C}(\mathcal{S})$. Thus, we have a characterization of
the logical operators of our codes.
###### Theorem 4.4.
Let $G=(V,E,F)$ be an embedded graph. Cycles in the decoding graph of $G$
(Definition 4.2) represent all logical operators of the qubit surface code
corresponding to $G$ (Definition 3.3) via the map $\sigma$.
Figure 10: Example cycles in a decoding graph. The original graph has black
edges, and stitched edges are part of a (particular choice of) defect. Faces
are checkerboardable with this defect (the outer face should be colored black
but is not shown). The entire decoding graph is not shown, but cycles labeled
(a), (b), (c), (d), (e), and (f) illustrate some of it. Cycles (a), (b), (c)
in green represent trivial logical operators (i.e. stabilizers), while cycles
(d) and (e) in red and cycle (f) in blue represent non-trivial logical
operators. We know they are non-trivial because (d) and (e) anticommute with
(f). Cycles (a), (e), and (f) connect to the vertex in the outer face (not
shown) via their dangling edges.
See Fig. 10 for some examples of cycles in a decoding graph and their
correspondence with logical operators.
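The parity argument behind Theorem 4.4 is easy to check computationally: a set of edges is a sum (symmetric difference) of cycles precisely when every vertex touches an even number of those edges. A small sketch, with illustrative `(u, v)` edge pairs over arbitrary vertex labels:

```python
# Check whether an edge set is a sum of cycles, i.e. every vertex has even
# degree in the edge set. Edges are (u, v) pairs of hashable vertex labels.
from collections import Counter

def is_cycle_sum(edge_set):
    degree = Counter()
    for u, v in edge_set:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())
```

A triangle passes the test; an open path fails it, mirroring the even-incidence condition at the face vertices $u_{f}$.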
In checkerboardable codes, we can say even more about locating nontrivial
logical operators by relating them to the homology of the surface. For this to
work, we must embed the decoding graph of the checkerboardable code into the
same surface.
###### Lemma 4.5.
Suppose $G$ is a checkerboardable graph embedded in a 2-manifold
$\mathcal{M}$. Then the decoding graph $G_{\text{dec}}$ consists of exactly
two connected components, each of which can be embedded in the manifold
$\mathcal{M}$.
* Proof.
Suppose $G$ is described by the rotation system $(H,\lambda,\rho,\tau)$ and,
because it is checkerboardable, there is a partition $H=H_{w}\sqcup H_{b}$ as
in Lemma 2.2. Of course, each vertex of $G$ is necessarily even-degree. It
should be clear from Fig. 9 that the decoding graph $G_{\text{dec}}$ has two
connected components, one corresponding to faces that are subsets of $H_{w}$
and another corresponding to faces that are subsets of $H_{b}$.
Let $G_{\text{dec},w}$ and $G_{\text{dec},b}$ be the connected components of
$G_{\text{dec}}$. These are described by rotation systems
$R_{\text{dec},w}=(H_{w}\times\\{\pm
1\\},\lambda_{\text{dec}},\rho_{\text{dec}},\tau_{\text{dec}})$ and
$R_{\text{dec},b}=(H_{b}\times\\{\pm
1\\},\lambda_{\text{dec}},\rho_{\text{dec}},\tau_{\text{dec}})$, where
$\displaystyle\lambda_{\text{dec}}(h,j)=(h,-j),$ (25)

$\displaystyle\rho_{\text{dec}}(h,j)=\begin{cases}(\tau\rho\tau(h),j),&j=-1\\ (\lambda(h),j),&j=1\end{cases},$ (28)

$\displaystyle\tau_{\text{dec}}(h,j)=(\rho(h),j).$ (29)
This provides an explicit embedding of the components of $G_{\text{dec}}$.
We do still need to show that they are indeed embedded in the manifold
$\mathcal{M}$ and not some other one. Suppose $G$ has sets of vertices, edges,
faces $V$, $E$, $F=F_{w}\sqcup F_{b}$ (with this partition due to
checkerboardability) and $G_{\text{dec},w}$ has sets $V_{\text{dec}}$,
$E_{\text{dec}}$, and $F_{\text{dec}}$. One can check from the rotation
system, Eqs. (25-29), that $|V_{\text{dec}}|=|F_{w}|+|V|$,
$|E_{\text{dec}}|=\frac{1}{2}\sum_{v\in V}\text{deg}(v)=|E|$, and
$|F_{\text{dec}}|=|F_{b}|$. This implies the Euler characteristics of $R$ and
$R_{\text{dec},w}$ are the same.
We also need to show that $R_{\text{dec},w}$ is orientable if and only if $R$
is. Note $R$ being orientable means one can partition $H$ into two sets
$H_{\pm 1}$ such that $\lambda$, $\rho$, $\tau$ applied to an element of one
set maps it to an element of the other set. Define $H^{\prime}_{\pm
1}\subseteq H^{\prime}:=H_{w}\times\\{\pm 1\\}$ such that $(h,j)\in
H^{\prime}_{k}$ if and only if $h\in H_{jk}$. These sets clearly partition
$H^{\prime}$, and $\lambda_{\text{dec}}$, $\rho_{\text{dec}}$, and
$\tau_{\text{dec}}$ all map either set to the other. So $R_{\text{dec}}$ is
orientable. The other direction – if $R_{\text{dec}}$ is orientable, then $R$
is – follows a very similar argument that we leave for the reader. ∎
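The Euler-characteristic comparison in this proof can be checked mechanically. The sketch below counts vertices, edges, and faces of a rotation system as orbits of pairs of involutions, assuming the common convention that these are the orbits of $\langle\rho,\tau\rangle$, $\langle\lambda,\tau\rangle$, and $\langle\lambda,\rho\rangle$ respectively; the involutions are passed as dicts on the flag set.

```python
# Count orbits of the group generated by the given permutations (dicts) acting
# on the flag set, and from these compute the Euler characteristic V - E + F.

def count_orbits(flags, *perms):
    seen, orbits = set(), 0
    for start in flags:
        if start in seen:
            continue
        orbits += 1
        stack = [start]
        while stack:                      # flood-fill one orbit
            h = stack.pop()
            if h in seen:
                continue
            seen.add(h)
            stack.extend(p[h] for p in perms)
    return orbits

def euler_characteristic(flags, lam, rho, tau):
    V = count_orbits(flags, rho, tau)     # vertices
    E = count_orbits(flags, lam, tau)     # edges
    F = count_orbits(flags, lam, rho)     # faces
    return V - E + F
```

For instance, two parallel edges between two vertices on the sphere (eight flags, two vertices, two edges, two faces) give $\chi = 2-2+2 = 2$, as expected.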
Here we comment on the case where a checkerboardable graph $G$ has only
degree-4 vertices. In this case, the decoding graph has degree-2 vertices
associated to the vertices of $G$. Suppose we remove these vertices, joining
the edges incident to them into one edge. It is not hard to see that this
vertex removal makes the connected components of $G_{\text{dec}}$ into graphs
$Q$ and $\overline{Q}$ (duals of one another) where the medial graph
$\widetilde{Q}$ is equal to $G$. Therefore, in this checkerboardable and
4-valent case, the operation of building the decoding graph essentially
inverts the operation of building the medial graph. Then $Q$ is a graph that
defines a homological code (see Section 3.4) equivalent to the code we have
defined on $G$.
With the decoding graph embedded, we can relate homological and logical
nontriviality in the checkerboardable case.
###### Theorem 4.6.
For checkerboardable graph $G$, a cycle $c$ in $G_{\text{dec}}$ is
homologically nontrivial if and only if $\sigma(c)$ is a nontrivial logical
operator of the surface code associated with $G$.
* Proof.
The key fact to use is that a Pauli $p$ is a stabilizer generator associated
to a face in $G$ if and only if $p\propto\sigma(c)$ for some facial cycle $c$
in $G_{\text{dec}}$. In Fig. 11, we show this correspondence explicitly. We
use this fact to complete the proof.
Figure 11: Picturing the correspondence between a face $f$ of checkerboardable
graph $G$ (thick, black edges) and a face $f^{\prime}$ of a connected
component of its decoding graph $G_{\text{dec}}$ (blue edges). Notation comes
from the proof of Lemma 4.5. Flags of $G$ and $G_{\text{dec}}$ are outlined by
thin lines. Flags of $G$ making up $f$ include $h$, $\rho h$, $\lambda\rho h$,
and $\rho\lambda\rho h$. These each relate to two flags in the corresponding
face $f^{\prime}$. For instance, $h\in f$ becomes $(\tau h,\pm 1)\in
f^{\prime}$. If $f$ is colored black in $G$, then $f^{\prime}$ is a face of
$G_{\text{dec},w}$.
Let $c$ be a homologically trivial cycle in $G_{\text{dec}}$. We need to show
$\sigma(c)$ is a stabilizer. Since $G_{\text{dec}}$ has two connected
components, $c=c_{w}+c_{b}$ where $c_{w}$ is a cycle in $G_{\text{dec},w}$ and
$c_{b}$ a cycle in $G_{\text{dec},b}$. Since
$\sigma(c)=\sigma(c_{w})\sigma(c_{b})$, we just show $\sigma(c_{w})$ is a
stabilizer with an analogous argument for $\sigma(c_{b})$. Since $c_{w}$ is
homologically trivial it is a sum of facial cycles
$c_{w}=c_{1}+c_{2}+\dots+c_{l}$ and
$\sigma(c_{w})=\sigma(c_{1})\sigma(c_{2})\dots\sigma(c_{l})$. By the fact in
the previous paragraph, $\sigma(c_{w})$ is indeed a stabilizer.
Likewise, if Pauli $p$ is a stabilizer, then it is the product of stabilizer
generators associated to faces of $G$, or $p=p_{1}p_{2}\dots p_{l}$. By the
same fact, there are facial cycles $c_{1},c_{2},\dots,c_{l}$ in $G_{\text{dec}}$
such that $\sigma(c_{j})=p_{j}$ and therefore a homologically trivial cycle
$c=c_{1}+c_{2}+\dots+c_{l}$ so that $\sigma(c)=p$. ∎
In a slight abuse of notation, we let
$\mathrm{hsys}\left({G_{\text{dec}}}\right)$ be the smaller of the homological
systoles of the two connected components of $G_{\text{dec}}$ embedded as in
Lemma 4.5.
###### Corollary 4.7.
Suppose $G$ is a checkerboardable, embedded graph and $D$ is the code distance
of the surface code defined by $G$. Then,
$\frac{1}{2}\mathrm{hsys}\left({G_{\text{dec}}}\right)\leq D$ and if $G$ is
4-valent, $\frac{1}{2}\mathrm{hsys}\left({G_{\text{dec}}}\right)=D$.
* Proof.
For the proof of the bound, note that any non-trivial logical operator with
weight $D$ is represented by a homologically non-trivial cycle $c$ confined to
a single connected component of the decoding graph $G_{\text{dec}}$. Suppose
$c$ visits $n\leq D$ vertices of $G_{\text{dec}}$ that are associated to
vertices of $G$. There is a homologically non-trivial sub-cycle of $c$, call
it $c^{\prime}$, that visits each of the $n$ vertices at most once and so has
at most $2n$ edges. Since $\mathrm{hsys}\left({G_{\text{dec}}}\right)$ lower
bounds the length of $c^{\prime}$,
$\mathrm{hsys}\left({G_{\text{dec}}}\right)\leq 2n\leq 2D$.
Now suppose $G$ is 4-valent. The shortest homologically non-trivial cycle $c$
in $G_{\text{dec}}$ is confined to just one connected component. By 4-valency,
there is only one qubit at each vertex, and each two-edge sub-path in $c$ is
mapped to a single-qubit Pauli by $\sigma$. Since $c$ has length
$\mathrm{hsys}\left({G_{\text{dec}}}\right)$, the weight of the non-trivial
logical Pauli it represents is at most
$\frac{1}{2}\mathrm{hsys}\left({G_{\text{dec}}}\right)$, which therefore is an
upper bound on $D$. ∎
The equality statement in this corollary does not hold for graphs with higher
degree vertices because, in that case, two-edge paths in the decoding graph
may represent Paulis of weight greater than one. However, if one uses the CAL
construction in Theorem 3.4 to assign Paulis to sectors, one can effectively
decompose these higher-degree vertices into degree-4 vertices (see Corollary
3.10) and apply Corollary 4.7 to the resulting 4-valent graph.
### 4.3 Doubled graphs
Our goal in this section is to find some manifold to embed $G_{\text{dec}}$
into when the original graph $G$ is not checkerboardable. In general, this is
a manifold with genus larger than the original surface. In particular, for any
non-checkerboardable graph $G$ embedded on a manifold $\mathcal{M}$, we show
that there is a checkerboardable graph $G^{2}$, referred to as a _doubled_
graph, embedded on manifold $\mathcal{M}^{2}$ such that $G_{\text{dec}}$ is
isomorphic to any one of the two connected components of $G^{2}_{\text{dec}}$.
A similar doubled construction is given by Barkeshli and Freedman [44] in the
context of topological phases.
Figure 12: A doubled graph for the rotated surface code of Fig. 5(b) is
embedded in the torus (six-sided blue border).

Figure 13: A doubled graph for the triangle code of Fig. 5(a) is embedded in
the torus (six-sided blue border).
Informally, the doubled manifold $\mathcal{M}^{2}$ is created by taking two
copies of $\mathcal{M}$, cutting along the edges of a defect of the embedded
graph $G$, and gluing the two copies together. This gluing is done in such a
way that in crossing from one copy of $\mathcal{M}$ to the other, a path ends
up at the same place in the destination manifold as it would have in the
original manifold if it were still intact. This means, for instance, that a
defect trail between two odd degree vertices is cut open into a disk and glued
with its corresponding disk on the other copy. In contrast, cycles in the
defect that are homologically non-trivial result in the handles or cross-caps
of the surface being cut apart before they are glued with their copies. We
provide a formal construction of the doubled graph in terms of rotation
systems and a construction of the doubled manifold subsequently. We comment
that the doubled graph $G^{2}$ depends on the choice of defect, but the
topology of $\mathcal{M}^{2}$ does not.
###### Definition 4.3.
The doubled rotation system
$R^{2}=(H^{\prime},\lambda^{\prime},\rho^{\prime},\tau^{\prime})$ of a
rotation system $R=(H,\lambda,\rho,\tau)$ with defect $\delta$ is defined by
setting $H^{\prime}=H\times\\{1,-1\\}$ and for $(h,j)\in H^{\prime}$ letting
$\lambda^{\prime}(h,j)=(\lambda(h),j)$, $\rho^{\prime}(h,j)=(\rho(h),j)$, and
$\tau^{\prime}(h,j)=\begin{cases}(\tau(h),-j),&\exists e\in\delta\text{ such that }h\in e\\ (\tau(h),j),&\text{otherwise}\end{cases}.$ (30)
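Definition 4.3 is directly executable. In the sketch below, the involutions are dicts on the flag set and `edge_of` is a hypothetical map from each flag to its edge, used only to test membership in the defect.

```python
# Build the doubled rotation system of Definition 4.3 on H x {1, -1}.
# lam, rho, tau are involutions given as dicts; `defect` is a set of edges and
# `edge_of` maps each flag to its edge (an assumed interface, not the paper's).

def double_rotation_system(H, lam, rho, tau, edge_of, defect):
    H2 = [(h, j) for h in H for j in (1, -1)]
    lam2 = {(h, j): (lam[h], j) for (h, j) in H2}
    rho2 = {(h, j): (rho[h], j) for (h, j) in H2}
    # tau' crosses to the other copy exactly when the edge lies in the defect.
    tau2 = {(h, j): (tau[h], -j if edge_of[h] in defect else j)
            for (h, j) in H2}
    return H2, lam2, rho2, tau2
```

Applied to the two-parallel-edge sphere example with one edge declared a defect, $\tau^{\prime}$ swaps copies only on that edge, as Eq. (30) prescribes.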
One can study orbits of $\langle\lambda^{\prime},\rho^{\prime}\rangle$ to
relate faces of $G^{2}$ with faces of $G$. Since neither $\lambda^{\prime}$
nor $\rho^{\prime}$ acting on $(h,j)\in H^{\prime}$ flips the sign of $j$,
there are two faces in $G^{2}$ for every face of $G$, one made up of flags
with $j=1$ and one with flags $j=-1$. It turns out we can also checkerboard
$G^{2}$ such that these two faces are oppositely colored.
###### Lemma 4.8.
For any embedded graph $G$, the doubled graph $G^{2}$ is checkerboardable.
* Proof.
Let $\delta$ be the defect used to define $G^{2}$ from $G$, and let
$R=(H,\lambda,\rho,\tau)$ be the rotation system of $G$. As $G$ is
checkerboardable with defect $\delta$, there is a partition of $H=H_{w}\sqcup
H_{b}$, where $\lambda$ and $\rho$ map either set to itself and $\tau$ maps an
element $h$ from either set to the other _except_ when $h$ is in an edge
belonging to the defect (Lemma 4.1).
Now we can partition $H^{\prime}=H\times\\{1,-1\\}$, the set of flags for the
doubled rotation system
$R^{2}=(H^{\prime},\lambda^{\prime},\rho^{\prime},\tau^{\prime})$, into two
sets
$\displaystyle H^{\prime}_{w}$ $\displaystyle=\\{(h,j):h\in H_{w},j=1\text{ or
}h\in H_{b},j=-1\\},$ (31) $\displaystyle H^{\prime}_{b}$
$\displaystyle=\\{(h,j):h\in H_{b},j=1\text{ or }h\in H_{w},j=-1\\}.$ (32)
One can check that $\lambda^{\prime}$ and $\rho^{\prime}$ map either set to
itself and $\tau^{\prime}$ maps between the sets. Thus, $G^{2}$ is
checkerboardable. ∎
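Eqs. (31)-(32) translate directly to code; the following sketch builds the checkerboard partition of the doubled flag set from the color classes of the original flags.

```python
# Given the color classes H_w, H_b of the original flags, produce the
# checkerboard partition of the doubled flag set per Eqs. (31)-(32).

def doubled_partition(H_w, H_b):
    Hw2 = {(h, 1) for h in H_w} | {(h, -1) for h in H_b}
    Hb2 = {(h, 1) for h in H_b} | {(h, -1) for h in H_w}
    return Hw2, Hb2
```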
Just as there are two faces in $G^{2}$ for every face in $G$, it follows from
Definition 4.3 that $G^{2}$ has two edges for every edge of $G$, two
degree-$\text{deg}(v)$ vertices for every even-degree vertex $v$ of $G$, and
one degree-$2\text{deg}(v)$ vertex for every odd-degree vertex $v$ of $G$.
This counting allows us to establish the genus of $G^{2}$.
###### Theorem 4.9.
Let $G$ be a non-checkerboardable graph containing $M$ odd-degree vertices.
1. If $G$ embeds into a genus $g$, orientable manifold, then $G^{2}$ embeds into
a genus $2g+(M-2)/2$, orientable manifold.
2. Suppose $G$ embeds into a genus $g$, non-orientable manifold.
   (a) If $G^{2}$ embeds into an orientable manifold, its genus is $g+(M-2)/2$.
   (b) If $G^{2}$ embeds into a non-orientable manifold, its genus is
   $2\left(g+(M-2)/2\right)$.
* Proof.
An implicit part of this theorem is that if $G$ is orientable, then $G^{2}$ is
orientable. Represent $G$ and $G^{2}$ with rotation systems as in
Definition 4.3. Then, $G$ being orientable means there is a partition of
$H=H_{+1}\sqcup H_{-1}$ such that $\lambda$, $\rho$, and $\tau$ all map
elements of either set $H_{\pm 1}$ to elements of the other. Clearly,
$H^{\prime}=H^{\prime}_{+1}\sqcup H^{\prime}_{-1}$ can also be partitioned so
that $(h,j)\in H^{\prime}_{k}$ if and only if $h\in H_{k}$. As required for
orientability, $\lambda^{\prime}$, $\rho^{\prime}$, and $\tau^{\prime}$ all
map elements of either set $H^{\prime}_{\pm 1}$ to elements of the other.
The genus of $G^{2}$ can be obtained by counting. As discussed previously, the
doubled graph has $|F^{\prime}|=2|F|$ faces, $|E^{\prime}|=2|E|$ edges, and
$|V^{\prime}|=2|V|-M$ vertices where $|F|$, $|E|$, and $|V|$ are the face,
edge, and vertex counts of $G$. The conclusions follow. ∎
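The counting at the end of this proof gives the genus of $G^{2}$ in one line, using $\chi = 2-2g$ for orientable manifolds and $\chi = 2-g$ for non-orientable ones. A sketch (the test cases below, a tetrahedron on the sphere and a two-vertex multigraph on the torus, are ours):

```python
# From the vertex/edge/face counts of G and its number M of odd-degree
# vertices, compute the Euler characteristic of the doubled graph G^2
# (|V'| = 2|V| - M, |E'| = 2|E|, |F'| = 2|F|) and the resulting genus.

def doubled_genus(V, E, F, M, orientable):
    chi2 = (2 * V - M) - 2 * E + 2 * F      # Euler characteristic of G^2
    return (2 - chi2) // 2 if orientable else 2 - chi2
```

The tetrahedron graph ($V=4$, $E=6$, $F=4$, all four vertices of odd degree 3) sits on the sphere ($g=0$), and the formula gives doubled genus $2\cdot 0+(4-2)/2=1$, a torus, matching case 1 of the theorem.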
Because $G^{2}$ is checkerboardable, its decoding graph $G^{2}_{\text{dec}}$
has two connected components, each embedding in the same manifold as $G^{2}$.
###### Theorem 4.10.
Either connected component of $G^{2}_{\text{dec}}$ is isomorphic to
$G_{\text{dec}}$. Moreover, homologically non-trivial cycles in
$G^{2}_{\text{dec}}$ correspond to non-trivial logical operators of the
surface code associated to $G$. If that code has distance $D$, then
$\frac{1}{4}\mathrm{hsys}\left({G^{2}_{\text{dec}}}\right)\leq
D\leq\frac{1}{2}\mathrm{hsys}\left({G^{2}_{\text{dec}}}\right)$.
* Proof.
We describe the isomorphism by describing how vertices of $G_{\text{dec}}$ and
$G^{2}_{\text{dec},w}$ (one of the two connected components of
$G^{2}_{\text{dec}}$) are mapped to each other. There are three maps that
should be composed in the right way, the map from $G$ to $G_{\text{dec}}$, the
map from $G$ to $G^{2}$, and the map from $G^{2}$ to $G^{2}_{\text{dec},w}$.
First, note $G^{2}_{\text{dec},w}$ has a single vertex for each white face of
$G^{2}$ (which is checkerboardable). Also, $G^{2}_{\text{dec},w}$ has a single
vertex for each vertex of $G^{2}$. Next, recall $G_{\text{dec}}$ has a vertex
for each face of $G$, two vertices for each even-degree vertex of $G$, and one
vertex for each odd-degree vertex of $G$. Finally, $G^{2}$ has two
(differently colored) faces for each face of $G$, two vertices for each
even-degree vertex of $G$, and one vertex for each odd-degree vertex of $G$. With
the vertices of $G_{\text{dec}}$ and $G^{2}_{\text{dec},w}$ associated to one
another by the implied map, vertex-adjacency is also preserved, showing the
isomorphism.
We argued in Lemma 4.8 that every face of $G$ corresponds to two oppositely
colored faces of $G^{2}$. Since $G^{2}_{\text{dec},w}$ has faces corresponding
to black faces of $G^{2}$, these actually represent all the faces of $G$.
Therefore, a cycle in $G^{2}_{\text{dec},w}$ is homologically non-trivial if
and only if it represents a non-trivial logical operator of the code. The upper bound
on the code distance follows immediately.
The lower bound is a result of two vertices in $G^{2}$ representing each
even-degree vertex of $G$. If a homologically non-trivial cycle in
$G^{2}_{\text{dec},w}$ crosses both of these vertices, it may represent only
one qubit of support in the corresponding logical operator. Thus, we have an
additional factor of $1/2$ compared to the upper bound. ∎
With the theory above, we have seen how the doubled manifold effectively
describes the homology of nontrivial logical operators in a higher genus
surface. One consequence for surface codes is that for every non-
checkerboardable code defined by graph $G$ with parameters $\llbracket
N,K,D\rrbracket$, there is a checkerboardable code defined by graph $G^{2}$
with parameters $\llbracket 2N,2K,D^{\prime}\rrbracket$ where $D^{\prime}\geq
D$. The checkerboardable code has the same rate and at least as good a code
distance, but with potentially higher weight stabilizers and larger genus.
Thus, non-checkerboardable codes can offer improvements over checkerboardable
codes, since stabilizer weight and genus are relevant parameters for
practical implementations.
### 4.4 Face-width as a lower-bound on code distance
The _face-width_ $\text{fw}(G)$ of an embedded graph $G$ (with genus $g>0$) is
the minimum number of times any non-contractible cycle drawn on the manifold
intersects the graph [30]. This can also be defined as the length of the
shortest non-contractible cycle in a related graph, the face-vertex graph
$G_{\text{fv}}$.
###### Definition 4.4.
The face-vertex graph $G_{\text{fv}}$ of an embedded graph $G$ is an embedded
graph possessing a vertex $w_{v}$ for each vertex $v$ of $G$ and a vertex
$u_{f}$ for each face $f$ of $G$. An edge is drawn between $w_{v}$ and $u_{f}$
if and only if $v$ and $f$ are adjacent in $G$. To specify the embedding, if
$G$ has rotation system $(H,\lambda,\rho,\tau)$, the face-vertex graph has
rotation system
$(H_{\text{fv}},\lambda_{\text{fv}},\rho_{\text{fv}},\tau_{\text{fv}})$, where
$H_{\text{fv}}=H\times\\{1,-1\\}$ and
$\displaystyle\lambda_{\text{fv}}(h,j)=(h,-j),\qquad\rho_{\text{fv}}(h,j)=\begin{cases}(\tau(h),j),&j=-1\\ (\lambda(h),j),&j=1\end{cases},\qquad\tau_{\text{fv}}(h,j)=(\rho(h),j).$ (35)
We point out some easily proved facts about the face-vertex graph. First, it
is bipartite since vertices of type $w_{v}$ are only ever connected to
vertices of type $u_{f}$. Second, the face-vertex graph is the dual of the
medial graph (Definition 3.4), $\overline{\widetilde{G}}=G_{\text{fv}}$.
Third, the face-vertex graph is embedded in the same manifold as $G$. Fourth,
if we recall that a graph $G$ and its dual $\overline{G}$ have isomorphic
medial graphs, we also find that $G$ and $\overline{G}$ have isomorphic face-
vertex graphs. Finally, the face-vertex graph is the decoding graph (see Def.
4.2) with the vertices $w^{(\pm 1)}_{v}$ merged at each even-degree vertex
$v$.
Because $G_{\text{fv}}$ is bipartite,
$\mathrm{sys}\left({G_{\text{fv}}}\right)$ is even. Any cycle drawn on the
manifold is homeomorphic to a cycle in the face-vertex graph. Hence, it is
easy to see that
$\text{fw}(G)=\frac{1}{2}\mathrm{sys}\left({G_{\text{fv}}}\right)$. This is
the most convenient definition of face-width for our purposes.
The rest of this subsection is devoted to proving the following.
###### Theorem 4.11.
Suppose $G$ is an embedded graph with only even-degree vertices and genus
$g>0$. Let $D$ be the code distance of the surface code associated to $G$.
Then $\text{fw}(G)\leq D$.
To prove this theorem, we need a simple lemma, which we prove separately.
###### Lemma 4.12.
Suppose $G$ is a non-checkerboardable graph with only even-degree vertices
embedded in manifold $\mathcal{M}$ with genus $g>0$. Then the doubled graph
$G^{2}$ is embedded in a manifold $\mathcal{M}^{2}$ that is a double cover of
$\mathcal{M}$.
* Proof.
To create the manifold $\mathcal{M}^{2}$ from the rotation system description
in Def. 4.3 (whose notation we import here), we glue the flags together
according to $\lambda^{\prime}$, $\rho^{\prime}$, $\tau^{\prime}$. Suppose we
map flags in $H^{\prime}=H\times\\{1,-1\\}$ to flags in $H$, i.e. $\Pi(h,j)=h$
for $h\in H$ and $j=\pm 1$. This map extends to a map
$\pi:\mathcal{M}^{2}\rightarrow\mathcal{M}$ by simply mapping points in the
flag $(h,j)$ to points in flag $h$ in homeomorphic fashion, i.e. if both
$(h,j)$ and $h$ are viewed as identically-sized triangular patches of surface
with boundaries associated to the $\lambda$, $\rho$, and $\tau$ involutions,
then map the former to the latter point-by-point. Note that
$\Pi\circ\lambda^{\prime}=\lambda\circ\Pi$,
$\Pi\circ\rho^{\prime}=\rho\circ\Pi$, and
$\Pi\circ\tau^{\prime}=\tau\circ\Pi$. This establishes that if two flags are
glued together in $\mathcal{M}^{2}$, their images are glued together in
$\mathcal{M}$.
We should check that for every sufficiently small open disk $U$, the preimage
$\pi^{-1}(U)$ is the union of two disjoint open disks, thus proving $\pi$ is a
double cover. This is clearly true for open
disks contained entirely within a single flag $h$. We also look at open disks
that straddle two flags (across a flag’s edge) and open disks that straddle
more than two flags (because they contain a flag’s corner).
Suppose an open disk straddles two flags $h,\lambda h\in H$. Applying
$\Pi^{-1}$ to this pair of flags, we get either $(h,1)$ and
$\lambda^{\prime}(h,1)=(\lambda h,1)$ or $(h,-1)$ and
$\lambda^{\prime}(h,-1)=(\lambda h,-1)$. Thus, $\pi^{-1}$ gives two open
disks, each straddling one of these pairs of flags in $\mathcal{M}^{2}$.
The same argument holds for flags $h,\rho h$. Slightly more interesting is the
case of flags $h,\tau h$. In this case, if the edge of the graph $G$
containing $h$ is in the defect, $\Pi^{-1}$ gives either $(h,1)$ and $(\tau
h,-1)$ or $(h,-1)$ and $(\tau h,1)$. If the edge is not in the defect, then
the result is $(h,1)$ and $(\tau h,1)$ or $(h,-1)$ and $(\tau h,-1)$.
Finally, corners of the flags correspond to vertices, edges, and faces of the
graphs. An open disk at the corner $\\{h,\rho h,\lambda h,\rho\lambda h\\}$, a
face of the graph, is the most straightforward case since no defect is
involved. Here $\Pi^{-1}$ maps the face of $G$ to two faces of $G^{2}$, namely
$\\{(h,1),(\rho h,1),(\lambda h,1),(\rho\lambda h,1)\\}$ and $\\{(h,-1),(\rho
h,-1),(\lambda h,-1),(\rho\lambda h,-1)\\}$. Likewise, it is not hard to see
that an edge (containing $h$) in $G$ maps to two edges (the one containing
$(h,1)$ and the one containing $(h,-1)$) in $G^{2}$, whether or not that edge
is part of the defect. For the case of a vertex $v$ in $G$, one must recall
the fact that an even number of edges incident to $v$ are part of the defect,
Lemma 4.3. This is the only place where we are using that $G$ has only
even-degree vertices. Suppose $h\in v$. The vertices in $G^{2}$ that map to $v$
under $\Pi$ are the one containing $(h,1)$ and the one containing $(h,-1)$,
and these are distinct because of the aforementioned fact. ∎
* Proof of Theorem 4.11.
If $G$ is checkerboardable, then notice that
$\text{fw}(G)=\frac{1}{2}\text{sys}(G_{\text{fv}})\leq\frac{1}{2}\text{sys}(G_{\text{dec}})\leq\frac{1}{2}\text{hsys}(G_{\text{dec}})=D$,
with the final equality following from Corollary 4.7.
Thus, we are left with the non-checkerboardable case. Lemma 4.12 says the
doubled manifold $\mathcal{M}^{2}$ is a double cover of the manifold
$\mathcal{M}$ on which $G$ is embedded. This means that the universal cover of
$\mathcal{M}$ (which is the plane $\mathcal{U}=\mathbb{R}^{2}$) is also the
universal cover of $\mathcal{M}^{2}$. This implies the existence of three
covering maps, the projections
$\pi_{21}:\mathcal{M}^{2}\rightarrow\mathcal{M}$,
$\pi_{U2}:\mathcal{U}\rightarrow\mathcal{M}^{2}$, and
$\pi_{U1}:\mathcal{U}\rightarrow\mathcal{M}$ such that
$\pi_{U1}=\pi_{21}\pi_{U2}$.
Now, choose a logical operator represented by some homologically non-trivial
cycle $c_{2}$ (in the decoding graph $G^{2}_{\text{dec}}$) drawn on
$\mathcal{M}^{2}$. Any lift $\pi_{U2}^{-1}(c_{2})$ is necessarily a non-closed
curve in $\mathcal{U}$. We also see that $c_{1}=\pi_{21}(c_{2})$, which is
easily seen to be homeomorphic to a cycle in $G_{\text{fv}}$, also lifts to
$\pi_{U2}^{-1}(c_{2})=\pi_{U1}^{-1}\pi_{21}(c_{2})$, and therefore $c_{1}$ is
non-contractible.
Finally, there must be some simple sub-cycle of $c_{1}$, call it
$c_{1}^{\prime}$, that is also non-contractible. It visits exactly
$\frac{1}{2}\text{len}(c_{1}^{\prime})$ vertices of $G$, each of which
contributes at least one qubit to the support of the logical operator it
represents, so its Pauli weight $w$ satisfies
$w\geq\frac{1}{2}\text{len}(c_{1}^{\prime})\geq\text{fw}(G)$. Since our
arguments apply to all choices of $c_{2}$ (representing all non-trivial
logical operators), the distance of the code is also lower bounded by
$\text{fw}(G)$. ∎
### 4.5 Logical operators from trails in the original graph
In this section, we provide another upper bound on the code distance of
surface codes, complementing the bounds in Theorem 4.10. This bound comes from
identifying logical operators with certain trails in the graph $G$ that
defines the surface code. Conveniently, we do not have to work with any
derived graph, such as the decoding graph, to obtain these logical operators.
Recall that $\mathcal{T}_{0}(G)$ is the subgroup of $\mathcal{T}(G)$ that is
generated by closed trails and open trails whose endpoints are odd-degree
vertices. To each trail in $\mathcal{T}_{0}(G)$ we assign a logical operator,
or equivalently an element of $\mathcal{C}(\mathcal{S})$. We define a
homomorphism $\omega:\mathcal{T}_{0}(G)\rightarrow\mathcal{C}(\mathcal{S})$.
It is easy to define $\omega$ using the Majorana code picture from Section
3.2. Let $t\in\mathcal{T}_{0}(G)$. If $t$ is a closed trail, then we define
$\omega(t)$ to be the product of all Majoranas on the edges in $t$. If $t$ is
an open trail, its endpoints are at odd degree vertices by definition, and we
define $\omega(t)$ to be the product of all Majoranas on the edges in $t$ and
the two Majoranas at the odd-degree endpoints. By the arguments in Section
3.3, these products of Majoranas are equivalent to Paulis in the qubit surface
code. We do not concern ourselves with the sign of the Pauli, and so the
products can be taken in arbitrary order. Moreover, the Pauli $\omega(t)$
commutes with all stabilizers by construction and so is in
$\mathcal{C}(\mathcal{S})$.
What is the weight of the Pauli $\omega(t)$? If $t$ does not visit a vertex,
clearly $\omega(t)$ is not supported on qubits at that vertex. Alternatively,
if $t$ traverses all edges adjacent to a vertex, then $\omega(t)$ is also not
supported on qubits at that vertex. These are the only vertices lacking
support – $\omega(t)$ is supported on qubits at all other vertices. If the
underlying graph has only degree 3 or 4 vertices (higher degree vertices may
be decomposed with Corollary 3.10), then the weight of $\omega(t)$ is equal to
the number of vertices visited exactly once by $t$. Paths by definition visit
vertices at most once, so that for a path $p\in\mathcal{T}_{0}(G)$, the weight
of $\omega(p)$ is exactly the number of vertices visited by $p$. From this
discussion, we can find the kernel of $\omega$ (for our usual assumption of a
connected graph $G$).
$\ker\omega=\\{b\in\mathcal{T}_{0}(G):\omega(b)=I\\}=\\{\vec{0},\vec{1}\\}.$
(36)
This means the map $\omega$ is 2-to-1, ignoring phases on Paulis in
$\mathcal{C}(\mathcal{S})$.
When is $\omega(t)$ a non-trivial logical operator? It turns out we can
efficiently check this using the notion of checkerboardability and Algorithm
1. With some abuse of notation, we call $t$ non-trivial if $\omega(t)$ is non-
trivial.
###### Theorem 4.13.
Let $t\in\mathcal{T}_{0}(G)$. Then $\omega(t)\in\mathcal{C}(\mathcal{S})$ is a
stabilizer if and only if $G$ is checkerboardable with defect $t$ or with
defect $\vec{1}+t$.
* Proof.
We start with the reverse direction. We are assured the existence of
$x\in\mathbb{F}_{2}^{|F|}$ such that $x\Phi=\vec{1}+t$ or such that $x\Phi=t$.
Let $\Phi_{i}$ denote the $i^{\text{th}}$ row of $\Phi$. Because $\omega$ is a
homomorphism and $\omega(\vec{1})=I$, we have
$\displaystyle\omega(t)=\omega(x\Phi)=\omega\left(\sum_{i:x_{i}=1}\Phi_{i}\right)=\prod_{i:x_{i}=1}\omega\left(\Phi_{i}\right),$
(37)
which is a stabilizer because each $\omega(\Phi_{i})$ is the stabilizer
$S_{i}$ in Eq. (13) associated to the $i^{\text{th}}$ face.
For the forward direction of the theorem, we start with
$\omega(t)=\prod_{i:x_{i}=1}\omega(\Phi_{i})$ for some
$x\in\mathbb{F}_{2}^{|F|}$, since the faces generate the stabilizer group by
definition. By Eq. (37) once again, we get $\omega(t)=\omega(x\Phi)$. This
implies $t=x\Phi+b$ where $b\in\ker\omega$. By Eq. (36), we have
$\vec{1}+t=x\Phi$ or $t=x\Phi$ or, equivalently, $G$ is checkerboardable with
defect $t$ or with defect $\vec{1}+t$. ∎
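Checkerboardability with a given defect is a linear question over $\mathbb{F}_{2}$: does $x\Phi=t$ (or $x\Phi=\vec{1}+t$) have a solution? A small sketch of the test in Theorem 4.13 using a plain Gaussian-elimination membership check ($\Phi$ given as a list of 0/1 rows; the function names are illustrative):

```python
def gf2_solvable(rows, target):
    # Decide whether `target` lies in the GF(2) row space of `rows`,
    # i.e. whether x @ rows == target has a solution over GF(2).
    # Vectors are represented as lists of 0/1.
    rows = [r[:] for r in rows]
    t = target[:]
    ncols = len(t)
    for col in range(ncols):
        piv = next((r for r in rows if r[col] == 1), None)
        if piv is None:
            continue
        rows.remove(piv)
        # eliminate this column from the remaining rows and the target
        for r in rows:
            if r[col]:
                for j in range(ncols):
                    r[j] ^= piv[j]
        if t[col]:
            for j in range(ncols):
                t[j] ^= piv[j]
    return all(v == 0 for v in t)

def is_stabilizer(phi, t):
    # Theorem 4.13: omega(t) is a stabilizer iff G is checkerboardable
    # with defect t or with defect 1 + t (all-ones vector plus t).
    ones_plus_t = [1 ^ v for v in t]
    return gf2_solvable(phi, t) or gf2_solvable(phi, ones_plus_t)
```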
To summarize the above, for any graph $G$ embedded in a 2-manifold defining
surface code with stabilizer $\mathcal{S}$, we have shown
$\mathcal{S}\leq\omega(\mathcal{T}_{0}(G))\leq\mathcal{C}(\mathcal{S}).$ (38)
We now point out a strengthening of this result when $G$ is planar (i.e.
$g=0$).
###### Theorem 4.14.
Let $G$ be a planar graph and $\mathcal{S}$ be the stabilizer of the surface
code defined on $G$ according to Definition 3.3. Then
$\mathcal{S}\leq\omega(\mathcal{T}_{0}(G))=\mathcal{C}(\mathcal{S})$ (ignoring
the phases of Paulis).
* Proof.
If $G$ contains no odd-degree vertices, then $K=0$ by Theorem 3.3 and so
$\mathcal{S}=\mathcal{C}(\mathcal{S})=\omega(\mathcal{T}_{0}(G))$.
Otherwise, $G$ contains an even number of odd-degree vertices $M\geq 2$.
Construct a spanning tree of $G$ rooted at one of these odd-degree vertices
$v$. There is a path $p_{w}$ from $v$ to any other odd-degree vertex $w$
contained within the spanning tree, and
$\omega(p_{w})\in\mathcal{C}(\mathcal{S})$. Notice that for $w\neq
w^{\prime}$, $\omega(p_{w})$ anticommutes with $\omega(p_{w^{\prime}})$:
written as products of Majoranas, $p_{w}$ and $p_{w^{\prime}}$ have odd
overlap, since each shared edge contributes two Majoranas (an even number)
while the Majorana located at the common odd-degree vertex $v$ is shared
exactly once. Thus, we have a set of $M-1$ logical Paulis
$\mathcal{B}=\\{\omega(p_{w}):\text{odd-degree vertex }w\neq v\\}$ that
mutually anticommute. This implies, because the maximum size of an
anticommuting set of $K$-qubit Paulis is $2K+1$ [36], that $K\geq(M-2)/2$.
However, $K=(M-2)/2$ is exactly the number of encoded qubits by Theorem 3.3,
meaning $\mathcal{B}$ is a basis of all logical operators. In turn, this
implies
$\mathcal{C}(\mathcal{S})=\mathcal{S}\mathcal{B}\leq\omega(\mathcal{T}_{0}(G))$,
which completes the proof. ∎
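The odd-overlap criterion used in this proof is a parity count: two products of an even number of Majorana modes anticommute exactly when their supports share an odd number of modes. A minimal sketch (modes labelled by arbitrary hashable identifiers):

```python
def anticommute(modes_a, modes_b):
    # Two products of Majorana operators, each containing an even number of
    # distinct modes, anticommute iff they share an odd number of modes:
    # commuting one product past the other picks up (-1) per shared mode
    # once the (even) total mode counts cancel the remaining signs.
    assert len(set(modes_a)) % 2 == 0 and len(set(modes_b)) % 2 == 0
    return len(set(modes_a) & set(modes_b)) % 2 == 1
```

Two paths sharing one edge (two Majoranas) plus the Majorana at the common root vertex overlap in three modes, hence anticommute.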
The rest of this section concerns the actual calculation of an element of
$\mathcal{T}_{0}(G)$ that visits the fewest vertices. First, note
that there are bases of $\mathcal{T}_{0}(G)$ that consist only of trails that
do not visit any vertex more than once, i.e. bases of paths. Let the length of
a path be the number of vertices it visits. The total length of a path basis
is the sum of lengths of all trails in the basis. This implies the existence
of a path basis that minimizes the total length of the basis, called a minimum
path basis. Such a minimum path basis must also contain the shortest non-
trivial path (with non-triviality determined by Theorem 4.13). If it did not,
we could replace some element of the basis with this shortest non-trivial
path, obtaining another path basis with lower total weight.
This discussion of minimum path bases is mirrored for bases of
$\mathcal{Z}(G)$, the cycle space of the graph. Likewise, there is a minimum
simple cycle basis that consists of simple cycles (cycles without repeated
vertices) and minimizes their combined length. Polynomial time algorithms for
finding a minimum simple cycle basis have been known since Horton’s algorithm
[45], which has time complexity $O(|E|^{3}|V|)$ for a general graph $G=(V,E)$.
Better algorithms now exist, including a $O(|E|^{\Omega})$ time Monte-Carlo
algorithm [46] (where $\Omega$ is the exponent of matrix multiplication) and
$O\left(|E|^{2}|V|\right)$ [47] and
$O\left(|E|^{2}|V|/\log|V|+|E||V|^{2}\right)$ [48] deterministic algorithms.
We can use these algorithms to also find a minimum path basis and efficiently
place an upper bound on the code distance.
###### Theorem 4.15.
For embedded graph $G$, let $J$ be the number of vertices in a non-trivial
path $p\in\mathcal{T}_{0}(G)$ visiting the fewest vertices. Then the surface
code defined by graph $G$ has distance $D\leq J$ and $J$ can be calculated in
polynomial time in the size of graph $G$.
* Proof.
Since $\omega(p)$ is a non-trivial logical operator with weight $J$, clearly
$J$ upper bounds the code distance.
It remains to show $J$ is efficient to calculate. Construct a graph
$G^{\prime}$ from $G$ by adding edges between all pairs of odd-degree
vertices, a total of $M(M-1)/2$ new edges if there are $M$ odd-degree
vertices. Call the set of new edges $E_{o}$. Find the minimum cycle basis $B$
of $G^{\prime}$ using one of the efficient algorithms referenced above, e.g.
[45]. Each cycle $b\in B$ that traverses an edge outside $E_{o}$ corresponds
to a path $q\in\mathcal{T}_{0}(G)$. In particular, $q$ is formed from $b$ by
removing all edges from $E_{o}$. The length of $b$ is the number of vertices
visited by $q$. Thus, by replacing $b$ with $q$ we can turn the minimum simple
cycle basis $B$ into a minimum path basis $B_{0}$. To calculate $J$, we just
need to determine whether each path in $B_{0}$ is non-trivial using Theorem
4.13 and take the one visiting the fewest vertices. ∎
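The conversion step in the proof, stripping the auxiliary edges $E_{o}$ from a basis cycle of $G^{\prime}$, can be sketched as follows (edge lists of vertex pairs; the helper name is illustrative):

```python
def cycle_to_path(cycle_edges, aux_edges):
    # Strip the auxiliary edges E_o (added between odd-degree vertices)
    # from a cycle of the augmented graph G', leaving a path of the
    # original graph G, as in the proof of Theorem 4.15. When the cycle
    # uses exactly one auxiliary edge, the number of vertices the path
    # visits equals the length of the original cycle.
    aux = {frozenset(e) for e in aux_edges}
    path = [e for e in cycle_edges if frozenset(e) not in aux]
    visited = {v for e in path for v in e}
    return path, len(visited)
```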
Finally, we point out that the upper bound from Theorem 4.15 and the bounds
from Theorem 4.10 are not saturated in general. An example is shown in Fig.
14.
Figure 14: An example graph $G$ in which the bounds in Theorems 4.10 and 4.15
are not saturated. The graph is largely a square lattice on the torus, but
with obstructions along one row that force minimum weight logical operators to
go around. We checkerboard the graph with stitched edges indicating the
defect. The code encodes one logical qubit. The shortest non-trivial cycles in
the decoding graph $G_{\text{dec}}$ have length $L=8$; an example is shown in
blue (and purple where it overlaps the red cycle). The shortest non-trivial
cycles in the original graph $G$ also visit $J=8$ vertices. Thus, Theorem 4.15
proves $4\leq D\leq 8$. However, the distance of the code actually satisfies
$D\leq 7$, as demonstrated by the red cycle in the decoding graph (which has
length $10$). Also, $D\geq 5$ follows from Lemma 5.1 and arguments similar to
those in Section 5.1.
## 5 Code examples
In this section, we present five example code families. In the first three
subsections we consider square lattice toric codes, rotated toric codes (with
a subfamily of cyclic toric codes first mentioned in Section 3.3), and
hyperbolic codes. For these codes we make some rigorous arguments about code
parameters (especially the code distance) using the theorems from Section 4.
In the final subsection, we present two more code families that generalize
stellated codes [8].
### 5.1 Square lattice toric codes
As the name suggests, this family of codes is created from simple square
lattices on the torus. Up to local Cliffords, these are the codes defined by
Wen [11]. Because of checkerboardability properties, the dimensions of the
lattice are relevant to $K$ as well as $D$. We let the lattice be $m\times n$,
i.e. it takes $m$ steps to traverse the torus in one direction and $n$ to
traverse the other direction. See Fig. 15 for three examples.
Figure 15: Three examples of square lattice toric codes. (a) An
even$\times$even code can be checkerboarded and encodes two logical qubits. We
show two cycles, red and blue, in the decoding graph representing two
commuting, non-trivial logical operators, e.g. $\bar{X}_{1}$ and
$\bar{Z}_{2}$. (b) An odd$\times$odd code can be checkerboarded with a defect
(stitched edges) and encodes one logical qubit. We show two cycles, red and
blue (with purple edges and vertices where they overlap), representing two
anti-commuting logical operators, e.g. $\bar{X}$ and $\bar{Z}$. (c) An
odd$\times$even code is also checkerboardable with a defect and encodes one
logical qubit.
Note that a square lattice toric code is only checkerboardable if both $m$ and
$n$ are even. Therefore,
$K=\bigg{\\{}\begin{array}[]{ll}2,&m,n\text{ are even},\\\
1,&\text{otherwise}.\end{array}$ (39)
Since they are checkerboardable and four-valent, the $K=2$ codes can also be
described by the homological code model (see Section 3.4), while the $K=1$
codes cannot.
What is the code distance of an $m\times n$ square lattice toric code? It will
surprise few to learn that it is
$D=\min(m,n).$ (40)
However, we would like to prove this. We divide the proof into cases –
even$\times$even, odd$\times$even, odd$\times$odd – though the proof is
similar for each case. The general idea is to assume that there is a logical
operator $l$ that has weight less than $\min(m,n)$, then show that $l$
commutes with all logical operators of the code, so it must be trivial (i.e.
a stabilizer, an element of the center of the logical group). Thus, $D$ is at least
$\min(m,n)$. Separately, we can show there is some non-trivial logical
operator with that weight.
The odd$\times$even case is slightly simpler so we start there. Say $m$
(number of rows) is odd and $n$ (number of columns) is even. There is one
logical qubit. Any column $c$ is a cycle, so $\omega(c)$ is a logical
operator. Theorem 4.13 tells us $\omega(c)$ is a non-trivial logical operator,
but we can also see this a different way. Let $r$ be any row and $r^{\prime}$
be any cycle in the decoding graph such that the support of the logical
operator $\sigma(r^{\prime})$ is only within row $r$. Then
$\sigma(r^{\prime})$ and $\omega(c)$ anticommute and so are both non-trivial
logical operators. Also, $\omega(c)$, $\sigma(r^{\prime})$, and the
stabilizers generate the entire group of logical operators, since there is
only one encoded qubit. Fig. 15(c) shows an example of these two logical
operators for $m=3$, $n=6$. Note, we can find such anticommuting pairs for any
column $c$ and row $r$. Therefore, if logical operator $l$ has weight less
than $\min(m,n)$, there is both a column $c$ and a row $r$ where it is not
supported. Thus, it commutes with the entire logical group and must be
trivial.
In the even$\times$even case, the only complication is that there are two
logical qubits. Non-trivial logical operators can still be confined to a
single column $c$ and a single row $r$. There are two different cycles
$c^{\prime}$ and $c^{\prime\prime}$ in the decoding graph such that support of
$\sigma(c^{\prime})$ and $\sigma(c^{\prime\prime})$ lie within column $c$.
Likewise, there are two different cycles $r^{\prime}$ and $r^{\prime\prime}$.
See Fig. 15(a). By their commutation relations, we note that
$\sigma(c^{\prime}),\sigma(c^{\prime\prime}),\sigma(r^{\prime}),\sigma(r^{\prime\prime}),$
and stabilizers generate the entire group of logical operators. Thus, as
before, a logical operator $l$ with weight less than $\min(m,n)$ will not have
support in one column and row, thus commutes with all logical operators, and
so must be trivial.
Finally, in the odd$\times$odd case, there is one logical qubit. Any column
$c_{1}$ is a closed trail and $\omega(c_{1})$ a non-trivial logical operator.
However, the complementary logical operator is not confined to a single row.
Instead, given any row $r$ _and_ column $c_{2}$, there is a cycle $z$ in the
decoding graph such that $\sigma(z)$ has support only in $r$ and $c_{2}$.
Moreover, $\omega(c_{1})$ and $\sigma(z)$ anticommute no matter the row and
columns chosen. Set $c_{1}=c_{2}=c$. See Fig. 15(b). Now the same argument
from before can be applied: if logical operator $l$ has weight less than
$\min(m,n)$, then it is not supported in some column $c$ and some row $r$, but
then it commutes with all logical operators and is trivial.
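Putting Eqs. (39) and (40) together gives closed-form parameters for the $m\times n$ square lattice toric code; a small sketch (with $N=mn$, one qubit per vertex):

```python
def square_toric_params(m, n):
    # Code parameters [[N, K, D]] of the m x n square lattice toric code
    # (one qubit per vertex): N = m*n, K from Eq. (39), D from Eq. (40).
    N = m * n
    K = 2 if (m % 2 == 0 and n % 2 == 0) else 1
    D = min(m, n)
    return N, K, D
```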
The preceding proof technique can be summarized in a lemma that is potentially
useful for lower bounding the code distance of any stabilizer code. If $l$ is
a logical Pauli operator of stabilizer code with stabilizer group
$\mathcal{S}$, we let $D(l)$ be the minimum weight of an element of the coset
$l\mathcal{S}$.
###### Lemma 5.1.
Let $l$ be a logical Pauli operator for stabilizer code $\mathcal{S}$ on $N$
qubits and suppose there exists positive integer $\mu$ and collections
$C_{1},C_{2},\dots,C_{\mu}\subseteq P(\\{0,1,\dots,N-1\\})$ (where $P(S)$
denotes the power set of $S$) such that
1. 1.
For all $C_{i}$ and all distinct $c,c^{\prime}\in C_{i}$, $c\cap
c^{\prime}=\emptyset$,
2. 2.
For all choices of $c_{i}\in C_{i}$, there exists $l_{0}\in l\mathcal{S}$,
with $\mathrm{supp}\left({l_{0}}\right)\subseteq\bigcup_{i=1}^{\mu}c_{i}$.
Then, for all logical Paulis $l^{\prime}$ that anticommute with $l$,
$D(l^{\prime})\geq\min_{i}|C_{i}|$.
* Proof.
Since logical operators $l$ and $l^{\prime}$ anticommute, every element of
$l\mathcal{S}$ anticommutes with every element of $l^{\prime}\mathcal{S}$.
Suppose, by way of contradiction, there is $l^{\prime}_{0}\in l^{\prime}S$
with weight $|l^{\prime}_{0}|<\min_{i}|C_{i}|$. For all $i$, because the sets
in each $C_{i}$ are disjoint, there must be a set $c_{i}\in C_{i}$ such that
$\mathrm{supp}\left({l^{\prime}_{0}}\right)\cap c_{i}=\emptyset$. However,
some $l_{0}\in l\mathcal{S}$ exists such that
$\mathrm{supp}\left({l_{0}}\right)\subseteq\bigcup_{i=1}^{\mu}c_{i}$.
Therefore,
$\mathrm{supp}\left({l_{0}}\right)\cap\mathrm{supp}\left({l^{\prime}_{0}}\right)=\emptyset$.
So $l_{0}$ and $l^{\prime}_{0}$ do not anticommute, a contradiction. ∎
The total distance of the code is
$D=\min_{l\in\mathcal{C}(\mathcal{S})\setminus\mathcal{S}}D(l)$. To lower
bound the code distance it is sufficient to fix a basis of logical operators
for a code, e.g. $\\{\bar{X}_{i},\bar{Z}_{i}\\}$, and find appropriate
collections satisfying the conditions of the lemma for each element of the
basis.
For any square lattice surface code, we need only two collections, $C_{1}$ the
set of all columns and $C_{2}$ the set of all rows. With these collections,
_every_ non-trivial logical operator of the code satisfies the second
condition of the lemma (see the constructions of these operators in Fig. 15),
and we immediately get the lower bound $D\geq\min(m,n)$.
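The way the lemma is applied here is a pigeonhole argument: a support of weight below $\min(m,n)$ necessarily avoids a whole row and a whole column. A sketch of that check (qubits indexed by (row, column) coordinates; the helper name is illustrative):

```python
def misses_full_row_and_column(support, m, n):
    # True if the given support (a set of (row, col) qubit coordinates on
    # an m x n lattice) avoids at least one whole row and one whole column,
    # the situation in which Lemma 5.1 forces the operator to be trivial.
    # Any support of weight < min(m, n) passes this check by pigeonhole.
    touched_rows = {r for r, _ in support}
    touched_cols = {c for _, c in support}
    return len(touched_rows) < m and len(touched_cols) < n
```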
One thing to note when applying the lemma is that, because of the conditions
on the collections $C_{i}$, $N$ must be at least $D^{2}/\mu$. Thus, the lemma
is unlikely to be useful for codes with $N$ scaling better than a constant
times $D^{2}$. Luckily, our codes must scale this way due to the Bravyi-
Poulin-Terhal bound [2]. For instance, symmetric square lattice surface codes
with $m=n=D$ satisfy $N=KD^{2}$ if $D$ is odd and $N=\frac{1}{2}KD^{2}$ if $D$
is even. Because of the factor of $1/2$, the latter family has twice the code
rate of the rotated surface code, Fig. 5(b).
### 5.2 Rotated toric codes
In this subsection we focus on a generalization of toric codes introduced by
Kovalev and Pryadko [14] and prove Theorem 3.9. These codes are defined by two
vectors, $L_{1}=(a_{1},b_{1})$ and $L_{2}=(a_{2},b_{2})$, where $a_{i},b_{i}$
are integers. Consider the infinite square lattice, and equate points
$x,y\in\mathbb{R}^{2}$ if
$x-y\in S(L_{1},L_{2}):=\\{m_{1}L_{1}+m_{2}L_{2}:m_{1},m_{2}\in\mathbb{Z}\\}.$
(41)
The vertices and edges of the infinite square lattice now map, using this
equivalence, to vertices and edges of a finite graph $G(L_{1},L_{2})$ embedded
on the torus. With single qubits at each vertex and stabilizers associated to
faces, $G(L_{1},L_{2})$ defines a surface code via Def. 3.3. For instance, if
$b_{1}=0$ and $a_{2}=0$, these codes are just the square lattice toric codes
discussed in the previous section. Referring back to Section 2.6, the infinite
square lattice is acting as the universal cover of the graph $G(L_{1},L_{2})$
on the torus, with the covering map described by equating points as in Eq. (41).
Some of these rotated toric codes are equivalent despite being defined by
different vectors $L_{1}$ and $L_{2}$. The sets $S(L_{1},L_{2})$ and
$S(L_{1}^{\prime},L_{2}^{\prime})$ are equal if and only if
$L_{i}^{\prime}=g_{i1}L_{1}+g_{i2}L_{2}$ for some integer-valued matrix $g$
with $\det(g)=\pm 1$ [14]. In these cases, the codes associated to graphs
$G(L_{1},L_{2})$ and $G(L_{1}^{\prime},L_{2}^{\prime})$ are also equivalent.
Code parameters $N$ and $K$ are relatively obvious given our general
framework. First, $N$ can be calculated as the area of the parallelogram with
sides $L_{1}$ and $L_{2}$, or [14]
$N=|L_{1}\times L_{2}|=|a_{1}b_{2}-b_{1}a_{2}|.$ (42)
As also pointed out in [14], checkerboardability of the graph $G(L_{1},L_{2})$
depends on the parities of $\|L_{1}\|_{1}=a_{1}+b_{1}$ and
$\|L_{2}\|_{1}=a_{2}+b_{2}$. A checkerboard coloring of the infinite square
lattice can be mapped to a checkerboard coloring of the finite graph
$G(L_{1},L_{2})$ if and only if, for all points $P\in\mathbb{R}^{2}$, the
faces at points $P$, $P+L_{1}$, and $P+L_{2}$ are colored the same. Moreover,
those points are colored the same if and only if $\|L_{1}\|_{1}$ and
$\|L_{2}\|_{1}$ are both even integers. Thus, we have
$K=\bigg{\\{}\begin{array}[]{ll}1,&\|L_{1}\|_{1}\text{ or }\|L_{2}\|_{1}\text{
is odd,}\\\ 2,&\|L_{1}\|_{1}\text{ and }\|L_{2}\|_{1}\text{ are
even.}\end{array}$ (43)
With regards to distance $D$, we first relate the (non)-triviality of logical
operators to the topological (non)-triviality of cycles in the decoding graph.
To do this for non-checkerboardable codes, we need to use the doubled graph of
Section 4.3. Even the doubled graph does not resolve the issue that cycle
length in the non-checkerboardable code’s decoding graph is not necessarily
the Pauli weight (reflected in the gap in the upper and lower bounds of
Theorem 4.10), and the next step is to resolve this.
Consider first the checkerboardable rotated toric codes as also considered in
[14]. In this case, the faces of $G(L_{1},L_{2})$ are two-colorable, black and
white. The decoding graph resolves into two connected components which are
duals of one another: the black component has vertices associated to only
black faces and the white component has vertices associated to only white
faces. As both these connected components embed naturally onto the torus, with
qubits associated to edges and stabilizers to vertices and faces, the
homological description of toric codes applies [1]. We conclude that a cycle
in the decoding graph is non-trivial if and only if it is homologically non-
trivial, or equivalently, when mapped back to the infinite square lattice, it
connects a point $P$ with the point $P+m_{1}L_{1}+m_{2}L_{2}$ where at least
one of $m_{1}$ or $m_{2}$ is odd. Edges in the decoding graph follow the
vectors $(1,1)$, $(1,-1)$, $(-1,1)$, or $(-1,-1)$. Therefore, to get from $P$
to $P+m_{1}L_{1}+m_{2}L_{2}$ takes $\|m_{1}L_{1}+m_{2}L_{2}\|_{\infty}$ steps
in the decoding graph. We conclude the code distance is [14]
$D=\min_{\begin{subarray}{c}m_{1},m_{2}\in\mathbb{Z}\\\
(m_{1},m_{2})\neq(0,0)\end{subarray}}\|m_{1}L_{1}+m_{2}L_{2}\|_{\infty}.$ (44)
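Eqs. (42), (43), and (44) can be evaluated directly. Since Eq. (44) minimizes over a lattice, a brute-force search over a small window of coefficients suffices for modest $L_{1},L_{2}$; the window size below is a heuristic, not a proven bound, and the distance is reported only in the checkerboardable case where Eq. (44) applies:

```python
def rotated_toric_params(L1, L2, window=10):
    # N and K for the rotated toric code defined by integer vectors L1, L2
    # (Eqs. (42) and (43)); D via Eq. (44) when the code is
    # checkerboardable (K = 2). The coefficient window is a heuristic
    # search range, adequate for small vectors only.
    a1, b1 = L1
    a2, b2 = L2
    N = abs(a1 * b2 - b1 * a2)
    K = 2 if (a1 + b1) % 2 == 0 and (a2 + b2) % 2 == 0 else 1
    D = None
    if K == 2:
        D = min(
            max(abs(m1 * a1 + m2 * a2), abs(m1 * b1 + m2 * b2))
            for m1 in range(-window, window + 1)
            for m2 in range(-window, window + 1)
            if (m1, m2) != (0, 0)
        )
    return N, K, D
```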
We divide the non-checkerboardable case into two sub-cases. For now, assume
$\|L_{1}\|_{1}$ is odd but $\|L_{2}\|_{1}$ is even. We claim that the decoding
graph of $G(L_{1},L_{2})$, call it $G_{\text{dec}}$, is isomorphic to either
connected component of the decoding graph of the doubled graph
$G(2L_{1},L_{2})$, a checkerboardable code. If one believes that
$G(2L_{1},L_{2})$ is legitimately a doubled graph of $G(L_{1},L_{2})$, then
this isomorphism follows from Theorem 4.10. However, we can be more explicit
in the argument for this example.
Therefore, let us show that $G_{\text{dec}}$ and the black component
$G^{\prime}_{\text{dec}}$ of the decoding graph of $G(2L_{1},L_{2})$ are
isomorphic. Showing it for the white component is analogous. To establish some
coordinates, we imagine that graph $G(L_{1},L_{2})$ lies within the
parallelogram $P$ with corners $(0,0)$, $L_{1}$, $L_{1}+L_{2}$, and $L_{2}$.
Similarly, the graph $G(2L_{1},L_{2})$ lies within the parallelogram
$P^{\prime}$ with corners $(0,0)$, $2L_{1}$, $2L_{1}+L_{2}$, and $L_{2}$. See
Fig. 16(a).
Figure 16: Regular tessellations of the infinite square lattice. Inside a
single cell of a tessellation, the subgraph of the infinite square lattice
defines a toric code. In fact, both images show two tessellations, one
outlined in blue and one outlined in yellow. The blue tessellations have
periodicity vectors $L_{1}$ and $L_{2}$ and define non-checkerboardable
rotated toric codes. As argued in the text, the decoding graphs of these non-
checkerboardable codes are isomorphic to a connected component of the decoding
graphs of the corresponding checkerboardable codes defined by the yellow
tessellations. Some paths (blue and red, purple where they overlap) in the
yellow codes’ decoding graphs are shown, which, when mapped to a single cell
of the blue tessellation, are anticommuting logical operators.
Consider the graphs $G_{\text{dec}}$ and $G^{\prime}_{\text{dec}}$ as sets
of points making up their edges and vertices. The idea is that any point $p$
in $G^{\prime}_{\text{dec}}$ that is also in the parallelogram
$P^{\prime}\setminus P$ should be mapped to $p-L_{1}$. This places it within
$P$. It is easily seen that this map takes the points of
$G^{\prime}_{\text{dec}}$ to the points of $G_{\text{dec}}$ in one-to-one
fashion and is a graph isomorphism.
The other crucial property of the map is that any cycle in $G_{\text{dec}}$
that corresponds to a single face stabilizer maps to a cycle in
$G^{\prime}_{\text{dec}}$ corresponding to a single face stabilizer, and vice
versa. In this way, we know that all trivial (both topologically and
logically) cycles in $G^{\prime}_{\text{dec}}$ map to (logically) trivial
cycles in $G_{\text{dec}}$. The cycle space of $G^{\prime}_{\text{dec}}$ is
generated by these trivial cycles as well as two non-trivial cycles (because
it is embedded on a torus). When mapped to $G_{\text{dec}}$, these two non-
trivial cycles must be anticommuting logical operators of the code.
Now we turn to the other sub-case of non-checkerboardable codes. Say both
$\|L_{1}\|_{1}$ and $\|L_{2}\|_{1}$ are odd. To come up with a related
checkerboardable code, we have to alter the rotated toric code construction
slightly. Let $G_{E}(L_{1},L_{2})$ be the graph defined by equating points
$x,y\in\mathbb{R}^{2}$ in the infinite square lattice such that
$x-y\in
S_{E}(L_{1},L_{2}):=\\{m_{1}L_{1}+m_{2}L_{2}:m_{1},m_{2}\in\mathbb{Z},m_{1}+m_{2}\text{
is even}\\}.$ (45)
This corresponds to tiling the plane with offset parallelograms, or hexagons
with opposite sides identified, see Fig. 16(b). Clearly,
$G_{E}(L^{\prime}_{1},L^{\prime}_{2})$ is always a checkerboardable code for
any $L^{\prime}_{1}$ and $L^{\prime}_{2}$. We claim that either connected
component of the decoding graph of $G_{E}(L_{1},L_{2})$ is isomorphic to the
decoding graph of $G(L_{1},L_{2})$.
Again, let $G_{\text{dec}}$ be the decoding graph of $G(L_{1},L_{2})$ and
$G^{\prime}_{\text{dec}}$ be the black connected component of the decoding
graph of $G_{E}(L_{1},L_{2})$. As before, we can imagine $G(L_{1},L_{2})$
lying within the parallelogram $P$ and $G_{E}(L_{1},L_{2})$ lying within the
parallelogram $P^{\prime}$. The isomorphism is also the same map. Each point
$p$ in $P^{\prime}\setminus P$ and in $G^{\prime}_{\text{dec}}$ gets mapped to
a point $p-L_{1}$ in $P$. Once again this maps trivial cycles to trivial
cycles. The two remaining generators of the cycle space of
$G^{\prime}_{\text{dec}}$, when mapped to $G_{\text{dec}}$, are the
anticommuting logical operators of the non-checkerboardable code.
With this setup, we finish the proof of Theorem 3.9.
* Proof of Theorem 3.9.
The nontrivial part of Theorem 3.9 that remains is the proof of the distance.
Here we work in the special case of cyclic codes, $L_{1}=(a,b)$ and
$L_{2}=(-b,a)$ with $b>a>0$ and $\text{gcd}(a,b)=1$. For checkerboardable
codes, those in
case (2) of the theorem, Eq. (44) applies and simplifies to $D=\max(a,b)=b$,
as claimed.
The non-checkerboardable case, in which both $\|L_{1}\|_{1}$ and
$\|L_{2}\|_{1}$ are odd, makes up the bulk of this proof. Above, we
characterized the nontrivial logical operators as topologically nontrivial
cycles in the decoding graph of a related checkerboardable code. We will take
this a step further and change coordinates to align with the decoding graph
itself. This amounts to rotating and re-scaling $L_{1}$ and $L_{2}$.
$\displaystyle L_{1}^{\text{dec}}=\frac{1}{2}\left(\begin{array}[]{cc}1&1\\\ -1&1\end{array}\right)L_{1}=\left(\frac{1}{2}(b+a),\frac{1}{2}(b-a)\right),$ (48)
$\displaystyle L_{2}^{\text{dec}}=\frac{1}{2}\left(\begin{array}[]{cc}1&1\\\ -1&1\end{array}\right)L_{2}=\left(-\frac{1}{2}(b-a),\frac{1}{2}(b+a)\right).$ (51)
Again, points $x,y\in\mathbb{R}^{2}$ are equated if $x-y\in
S_{E}(L_{1}^{\text{dec}},L_{2}^{\text{dec}})$. It is now the decoding graph,
rather than the doubled graph, that is represented by applying this
equivalence to the square lattice.
We look at topologically nontrivial paths in the decoding graph. By symmetry,
we just consider two cases: (1) paths from the origin $O=(0,0)$ to
$2L_{1}^{\text{dec}}$ and (2) paths from $O$ to
$L_{1}^{\text{dec}}+L_{2}^{\text{dec}}$. The shortest path from point
$A\in\mathbb{R}^{2}$ to $B\in\mathbb{R}^{2}$ clearly has length equal to
$\|B-A\|_{1}$. The complication is that, due to the graph isomorphism between
non-checkerboardable and checkerboardable decoding graphs, edges located at
points $e_{1},e_{2}\in\mathbb{R}^{2}$ act on the same qubit (one acts with
Pauli $X$ say, and the other with Pauli $Z$) if
$e_{2}-e_{1}\in
T(L_{1}^{\text{dec}},L_{2}^{\text{dec}}):=\\{m_{1}L_{1}^{\text{dec}}+m_{2}L_{2}^{\text{dec}}:m_{1},m_{2}\in\mathbb{Z},m_{1}+m_{2}\text{
is odd}\\}.$ (52)
Thus, the Pauli weight of a path is at most $\|B-A\|_{1}$ but may be less. Eq.
(52) implies that if $e_{1}$ is horizontal then $e_{2}$ is vertical and vice-
versa. So if a path has, say, more horizontal edges than vertical edges, at
most each vertical edge may be acting on the same qubit as a horizontal edge,
and thus the Pauli weight is at least $\|B-A\|_{\infty}$. In summary, if we
let $|P(A,B)|$ denote the Pauli weight of a minimum weight path from $A$ to
$B$, we have shown
$\|B-A\|_{\infty}\leq|P(A,B)|\leq\|B-A\|_{1}.$ (53)
Moreover, because adding an edge to a path never decreases the Pauli weight,
there is a path with minimum Pauli weight that also has minimum length,
namely, length $\|B-A\|_{1}$.
For case (1), we argue that the path with lowest Pauli weight has weight
$|P(O,2L_{1}^{\text{dec}})|=\|2L_{1}^{\text{dec}}\|_{\infty}=a+b$, matching
the lower bound. We need only demonstrate an explicit path from $O$ to
$2L_{1}^{\text{dec}}=(a+b,b-a)$ with this weight. We can describe this with a
string of symbols $N$ and $E$ representing moving north and east,
respectively. So, for instance, $NENE=(NE)^{2}$ would mean moving north, then
east, then north, then east. The path with minimum Pauli weight is then
$({\color[rgb]{1,0,0}NE})^{\frac{1}{2}(b-a-1)}{\color[rgb]{1,0,0}N}E^{a}({\color[rgb]{0,0,1}EN})^{\frac{1}{2}(b-a-1)}{\color[rgb]{0,0,1}E}E^{a}.$
(54)
Edges of opposite orientation act on the same qubit if they are separated in
this sequence by $\frac{1}{2}(b+a-1)$ $E$ edges and $\frac{1}{2}(b-a-1)$ $N$
edges. For instance, the red edges in the sequence (54) act on the same $b-a$
qubits as the blue edges. Thus, while the sequence’s length is $2b$, its Pauli
weight is $2b-(b-a)=a+b$, as claimed. See Fig. 17 for an example.
Figure 17: Illustrating the proof of Theorem 3.9 with a specific example. We
show a path from $O$ to $O+2L_{1}^{\text{dec}}$ consisting of red, black, and
blue edges. Each red edge can be paired with a blue edge that is located a
vector $L_{1}^{\text{dec}}$ away and acts on the same qubit. The dashed lines
show a path from $O$ to $O+L_{1}^{\text{dec}}+L_{2}^{\text{dec}}$ with minimal
Pauli weight in which no two edges act on the same qubit. The logical
operators represented by the solid and dashed paths anticommute.
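The pairing count behind the weight claim for the sequence (54) can be checked numerically: build the move string, place every edge at its midpoint, and count north/east pairs separated by $\pm L_{1}^{\text{dec}}$ (the only elements of $T$ relevant to this particular path). A sketch for the non-checkerboardable cyclic case, $b>a>0$ with $a+b$ odd:

```python
from fractions import Fraction

def path_pauli_weight(a, b):
    # Pauli weight of the explicit path of Eq. (54) from O to
    # 2*L1_dec = (a+b, b-a), for b > a > 0 with a + b odd. Edges of
    # opposite orientation separated by +/- L1_dec act on the same qubit;
    # for this path each north edge pairs with at most one east edge, so
    # the weight is (number of edges) - (number of such pairs).
    half = Fraction(1, 2)
    L1_dec = (half * (b + a), half * (b - a))
    k = (b - a - 1) // 2
    moves = "NE" * k + "N" + "E" * a + "EN" * k + "E" + "E" * a  # Eq. (54)
    x = y = 0
    mids = []  # (edge midpoint, orientation) for each step of the path
    for mv in moves:
        dx, dy = (1, 0) if mv == "E" else (0, 1)
        mids.append(((x + half * dx, y + half * dy), mv))
        x, y = x + dx, y + dy
    assert (x, y) == (a + b, b - a)  # the path ends at 2*L1_dec
    pairs = 0
    for p, o1 in mids:
        for q, o2 in mids:
            if o1 == "N" and o2 == "E":
                diff = (q[0] - p[0], q[1] - p[1])
                if diff in (L1_dec, (-L1_dec[0], -L1_dec[1])):
                    pairs += 1
    return len(moves) - pairs
```

In each tested case the weight comes out to $a+b$, matching the claim that the $2b$-step sequence contains $b-a$ coinciding qubit pairs.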
For case (2), we argue that the path with lowest Pauli weight has weight
$|P(O,L_{1}^{\text{dec}}+L_{2}^{\text{dec}})|=\|L_{1}^{\text{dec}}+L_{2}^{\text{dec}}\|_{1}=a+b$,
matching the upper bound on path weight. Therefore, we must reason that, in
going from $O$ to $L_{1}^{\text{dec}}+L_{2}^{\text{dec}}=(a,b)$ one can take
no advantage of edges acting on the same qubit. According to the comment just
below Eq. (53), we can restrict our investigation to minimum length paths. We
argue that, for any $t\in T(L_{1}^{\text{dec}},L_{2}^{\text{dec}})$ and any
edge $e$, no minimum length path contains both $e$ and $e+t$. If we show this
claim for $t$, it implies the claim for $-t$. So assume
$t=m_{1}L_{1}^{\text{dec}}+m_{2}L_{2}^{\text{dec}}=(t_{x},t_{y})$ and that,
without loss of generality, $m_{1}>m_{2}$. Note, because $t$ associates edges
of opposite orientation, both $t_{x}$ and $t_{y}$ are half-integers and so
necessarily nonzero.
We introduce three easily-verified conditions that preclude any shortest path
from containing both edge $e$ and $e+t$: (A) $|t_{x}|>a$, (B) $t_{x}t_{y}<0$,
or (C) $|t_{y}|>b$. If any of these conditions holds, the shortest paths are
restricted in this way and the shortest path's length equals its Pauli weight.
We show all $t\in T(L_{1}^{\text{dec}},L_{2}^{\text{dec}})$ fall into one of
these three cases.
First, assume $m_{1}>0$. Then, because $m_{1}>m_{2}$ and $b>a>0$,
$t_{x}>m_{1}a\geq a$. This falls into case (A). Next, assume $m_{2}<m_{1}\leq
0$. We find $t_{y}\leq-a/2<0$. If $t_{x}>0$, case (B) applies and we are done.
So, suppose $t_{x}<0$. Then $(m_{1}+m_{2})<-(m_{1}-m_{2})b/a$, which implies
$|t_{y}|/b>(m_{1}-m_{2})(a^{2}+b^{2})/2ab>1$, and we have case (C). ∎
### 5.3 New hyperbolic codes
In this section, we discuss how one can construct hyperbolic quantum codes
using Definition 3.3. We restrict to regular tilings of hyperbolic space, and
as a result the set of codes we create here is incomparable with the set of
regular homological codes created in, for instance, [10, 15, 16]. Codes in our
set that are not in the regular homological set include non-checkerboardable
codes, as well as some checkerboardable codes defined on graphs with even
vertex degrees larger than four. Codes in the homological set not included in
our set include those with face (e.g. $Z$-type) and vertex (e.g. $X$-type)
stabilizers of different weight. In principle, using our code construction on
irregular tilings would reproduce all homological hyperbolic codes and more,
but irregular graphs are more difficult to study systematically than regular
graphs, and so we do not consider them here.
The automorphism group $\text{Aut}(R)$ of a rotation system
$R=(H,\lambda,\rho,\tau)$ is the group of all permutations of $H$ that commute
with $\lambda$, $\rho$, and $\tau$ [25]. Because of the transitivity of the
monodromy group $M(R)$ on $H$, knowing where $\kappa\in\text{Aut}(R)$ sends a
single flag $h\in H$ is enough to fix the action of $\kappa$ on all of $H$.
Therefore, the largest the automorphism group can be is $|H|$. If
$|\text{Aut}(R)|=|H|$, then the rotation system is regular, and it represents
a regular graph, i.e. all vertices have the same degree, say $n$, and all
faces have the same number of adjacent edges, say $m$. These graphs are called
$(m,n)$-regular. A simple calculation using Eq. (2) shows
$\frac{1}{2}\chi=\left(\frac{1}{m}+\frac{1}{n}-\frac{1}{2}\right)|E|.$ (55)
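As a sketch of ours (not code from the paper), Eq. (55) is easy to check numerically; the helper name below is our own:

```python
from fractions import Fraction

def euler_characteristic(m, n, num_edges):
    """Euler characteristic of an (m, n)-regular tiling with |E| edges,
    via chi = 2 (1/m + 1/n - 1/2) |E|, i.e. Eq. (55)."""
    chi = 2 * (Fraction(1, m) + Fraction(1, n) - Fraction(1, 2)) * num_edges
    if chi.denominator != 1:
        # No (m, n)-regular tiling exists with this edge count.
        raise ValueError("inconsistent (m, n, |E|) combination")
    return int(chi)

# The (4,4) square tiling is flat (chi = 0, a torus) for any edge count,
# while a (5,4)-regular tiling is hyperbolic (chi < 0):
print(euler_characteristic(4, 4, 100))  # 0
print(euler_characteristic(5, 4, 40))   # -4
```
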
All half-edges in regular rotation systems are equivalent up to automorphism.
Therefore, it is sensible that there is a description without an explicit set
of half-edges $H$. This description is realized by first specifying a group by
generators and relations:
$\mathcal{F}=\langle\lambda,\rho,\tau|\lambda^{2}=\rho^{2}=\tau^{2}=(\lambda\tau)^{2}=(\lambda\rho)^{m}=(\rho\tau)^{n}=1\rangle.$
(56)
However, only if $\chi>0$, or equivalently, $1/m+1/n>1/2$, is the group
$\mathcal{F}$ finite. Additional identities are necessary to compactify
infinite graphs. Compactified regular graphs are in one-to-one correspondence
with finite-index normal subgroups of $\mathcal{F}$; the corresponding groups
are presented by imposing additional relations $r_{i}$ on the
generators, e.g.
$\mathcal{F}_{c}=\langle\lambda,\rho,\tau|\lambda^{2}=\rho^{2}=\tau^{2}=(\lambda\tau)^{2}=(\lambda\rho)^{m}=(\rho\tau)^{n}=1\text{
and }r_{i}(\lambda,\rho,\tau)=1,\forall i\rangle.$ (57)
To find normal subgroups of $\mathcal{F}_{c}$, one can use a computer algebra
package such as GAP [49, 50]. Appendix F gives an example of using GAP for
this purpose. We tabulate some small examples of non-checkerboardable
hyperbolic codes in Table 2 and draw some explicit examples in Fig. 18.
Tiling | $N$ | $K$ | $D$ | Generators | Orientable?
---|---|---|---|---|---
$(5,4)$ | 20 | 5 | 4 | $(\rho\tau\rho\lambda)^{4},(\rho\lambda\tau)^{5}$ | No
$(6,4)$ | 6 | 3 | 2 | $(\rho\lambda\tau)^{3}$ | No
$(6,4)$ | 15 | 6 | 2 | $(\rho\tau\rho\lambda)^{3},(\rho\lambda\tau)^{5}$ | No
$(6,4)$ | 24 | 9 | 3 | $(\rho\lambda\tau\rho\lambda)^{3},(\rho\tau\rho\lambda)^{4},(\rho\lambda\tau)^{6}$ | No
$(6,4)$ | 30 | 11 | 3 | $(\rho\tau\rho\lambda)^{3}$ | Yes
Table 2: Some very small hyperbolic codes on non-checkerboardable graphs. The
$(5,4)$ example in particular uses fewer qubits than any distance four
hyperbolic code in [16].

Figure 18: (a) A $\llbracket 20,5,4\rrbracket$
hyperbolic code from a $(5,4)$-regular tiling of a genus six non-orientable
manifold. (b) A $\llbracket 32,10,4\rrbracket$ hyperbolic code from a
$(4,6)$-regular tiling of a genus ten non-orientable manifold using the
relations
$(\lambda\tau\rho\tau\rho)^{3}=(\tau\rho\lambda\rho)^{4}=(\rho\lambda\tau)^{6}=1$.
As indicated by letters, edges are identified to compactify the tilings. Non-
primed lettered edges are matched orientably, i.e. one edge is directed
clockwise, the other counterclockwise. Primed lettered edges are matched non-
orientably, i.e. both edges directed clockwise. In (a), example actions of the
generator permutations are shown as well. In (b), because the code distance
depends on the choice and orientation of the CALs, we label each face with the
Pauli that should act on each vertex of the face.
Of course, non-checkerboardable codes like those in Table 2 and Fig. 18(a) are
not equivalent to homological hyperbolic codes. What is less obvious is that
even some checkerboardable codes, like the $\llbracket 32,10,4\rrbracket$ code
shown in Fig. 18(b), are not equivalent to homological hyperbolic codes on
_regular_ tilings (though they are equivalent to a homological code on some
irregular tiling). To see this, suppose we define a surface code in our
framework on an $(m,n)$-regular tiling whose vertex degree $n>4$ is even.
Since its vertices do not all have degree four, this graph
is not a medial graph of any other embedded graph. However, we should also
show that even after we decompose vertices into degree-four vertices using
Fig. 3 the graph is still not a medial graph. Suppose it is, so that after the
vertex decomposition we have a $(m^{\prime},4)$-regular tiling. If before the
decomposition we have $V$ vertices and $F$ faces, after the decomposition
there are $V^{\prime}=V+(n-4)V/2=(n-2)V/2$ vertices and $F$ faces. By
regularity of the tilings, $F=nV/m=4V^{\prime}/m^{\prime}$. This implies
$m^{\prime}=(2n-4)m/n$, which must be an integer. When $(2n-4)m/n$ is not an
integer, it is not possible to end up with a medial graph after the vertex
decomposition. An example is when $m=4$ and $n=6$, where $(2n-4)m/n=16/3$, as
for the code in Fig. 18(b).
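The integrality condition is easy to tabulate; the following sketch (our own illustration, with a hypothetical helper name) evaluates $m^{\prime}=(2n-4)m/n$:

```python
from fractions import Fraction

def medial_face_size(m, n):
    """Candidate face size m' = (2n - 4) m / n of the (m', 4)-regular
    tiling obtained by decomposing the degree-n vertices of an
    (m, n)-regular tiling into degree-four vertices.  Returns None when
    m' is not an integer, so no medial graph can result."""
    mp = Fraction((2 * n - 4) * m, n)
    return int(mp) if mp.denominator == 1 else None

print(medial_face_size(4, 6))  # None: 16/3 is not an integer
print(medial_face_size(6, 6))  # 8
```
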
Finally, we point out that the face-width lower bound on code distance,
Theorem 4.11, is applicable to the hyperbolic codes in this section. Using it
and explicitly finding logical operators with that weight, we can verify the
distances of the non-checkerboardable codes in Table 2. The $\llbracket
32,10,4\rrbracket$ code in Fig. 18(b), being checkerboardable, has distance
lower bounded by the length of the shortest homologically non-trivial cycle in
its decoding graph, Corollary 4.7. This gives $3\leq D$, or, in other words,
any non-trivial logical operator acts on qubits at at least three different
vertices. However,
we argue it must act on both qubits at at least one of these vertices, and
thus be at least weight four. Focus on the $X$-type logical operators; the
$Z$-type are analogous. Note each vertex is surrounded by three $Z$-type
faces, $IZ$, $ZI$, and $ZZ$. Thus, the homologically non-trivial cycle with
length three must visit a face of each type. At the vertex where it passes
from the $IZ$ to the $ZI$ type face, it is supported on both qubits, namely as
$XX$. Therefore, the minimum weight logical operator is at least weight four,
and it is easy enough to find one with this weight.
### 5.4 Two more ways to generalize the triangle code
In [6], the authors presented the family of triangle codes; the distance 5
version is pictured here in Fig. 5(a). This family has been generalized by
Kesselring et al. to the stellated codes [8]. Stellated codes feature an
improved constant in the relation $N=cKD^{2}$, namely $c=s/(2s-2)$ for odd
integers $s$ specifying the degree of symmetry ($s=3$ for the triangle code,
$s=5$ for the code in Fig. 5(c), etc.), and so $c$ approaches $1/2$ as
$s\rightarrow\infty$.
One curiosity in the stellated codes is that the qubit density in the center
becomes large as $s$ grows. One might wonder if there is a family of planar
codes for which $c$ approaches $1/2$ that does not have this singularity.
There is indeed such a family of codes, provided one accepts a conjecture on
its code distance.
We begin the construction of this family by imagining “gluing” square surface
code patches, like that pictured in Fig. 5(b), together. For instance, one can
glue three distance $D=3$ square surface code patches together to obtain the
$D=5$ triangle code, Fig. 5(a), and five distance $D=3$ patches to obtain the
code in Fig. 5(c). To ensure that the qubit density is bounded by a constant
throughout the resulting graph, we should not glue too many patches together
around a single vertex. We glue at most four patches around a vertex, and the
maximum vertex degree in the graph is six, leading to at most two qubits at a
vertex.
Twist defects, the odd-degree vertices in the resulting graph, should be
spaced apart by graph distance $D-1$ to get a distance $D$ code. Therefore, we
imagine each square patch is $(D+1)/2$-by-$(D+1)/2$ qubits in size, and that
the only odd-degree vertices are located at two corners of the patch,
diagonally opposite one another. If we glue the patches together accordingly,
we get codes as in Fig. 19. We refer to this family as circle-packing codes,
because of the way in which odd-degree vertices are closely packed. Because
the addition of a square patch of roughly $D^{2}/4$ qubits adds one additional
twist, and two additional twists are required to add a logical qubit, we have
$N\rightarrow KD^{2}/2$ for these codes as $K$ gets large.
Figure 19: Circle-packing codes encoding $K=1,2,3,$ and $4$ qubits in order of
size. The packed circles in the background act as a coordinate system, and are
not part of the graphs or codes. In the top left, we show the triangle code
and the entire square lattices of qubits making it up. In other drawings, the
lattice interiors are abstracted away. Odd-degree vertices, or twist defects,
are shown as yellow circles. One can continue attaching square patches,
following the pattern established in these first four examples, to grow the
code down and to the right and obtain a family where
$N=\frac{1}{4}(2K+1)D^{2}$. Therefore, $c=(2K+1)/4K$ approaches $1/2$ as $K$
increases.
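A quick check of the stated scaling (an illustration of ours, not code from the paper): with $N=\frac{1}{4}(2K+1)D^{2}$, the constant $c=N/(KD^{2})=(2K+1)/4K$ tends to $1/2$ as $K$ grows.

```python
from fractions import Fraction

def circle_packing_c(K):
    """c = N / (K D^2) = (2K + 1) / (4K) for the circle-packing family."""
    return Fraction(2 * K + 1, 4 * K)

print(circle_packing_c(1))    # 3/4, the triangle code
print(circle_packing_c(100))  # 201/400, already close to 1/2
```
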
A seemingly strange feature of the circle-packing codes is how they do not
fill a 2-D area but are rather grown outward in one direction. However, one
can easily see that filling a 2-D area with the same pattern of square patches
results in a worse constant $c$. Suppose we have glued $F$ square patches
together, filling a large 2-D area, so that the perimeter is much smaller than
the volume. Then, because we can neglect the perimeter, there are $2F/3$ odd-
degree vertices, implying $K\approx F/3$ and $N\approx F(D/2)^{2}=3KD^{2}/4$.
We note explicitly that we did not prove that the distance of the circle-
packing codes is actually $D$, unlike the more elaborate proofs in Secs. 5.1
and 5.2. We expect this is the code distance due to comparison with the
similar codes in [6] and [8] and the confidence with which those authors state
the code distances (though one might say they are also lacking rigorous
proofs).
A second way we can generalize the stellated codes is by embedding them in
higher genus ($g>0$) surfaces. By Corollary 3.8, a larger genus can lead to
more encoded qubits, assuming any reduction in the number of odd-degree
vertices does not outweigh the effect. Our higher genus embeddings double the
number of encoded qubits with small reductions in both $N$ and $D$.
Figure 20: Two examples of stellated codes embedded in higher genus surfaces,
(a) a $\llbracket 13,2,4\rrbracket$ code with $s=3$ and $t=2$ and (b) a
$\llbracket 23,4,4\rrbracket$ code with $s=5$ and $t=2$. They are drawn within
a $2s$-gon with opposite sides identified. The $\llbracket 13,2,4\rrbracket$
code actually has optimal distance for a quantum code with $N=13$ and $K=2$
[51].
The idea of the higher genus embedding is to draw the stellated code with odd
symmetry parameter $s$ into a polygon with $2s$ sides and identify opposite
sides of the polygon to make an orientable manifold. In fact, this manifold is
a torus with $g=(s-1)/2$ holes. This embedding results in two odd-degree
vertices with degree $s$, a single vertex with degree $2s$, and all other
vertices having degree four. Suppose the graph distance from the code’s center
to the boundary is $t$. Then, $N=st^{2}+s-2$, $K=s-1$, and we conjecture
$D=2t$. This leads to
$N=\frac{1}{4}(K+1)D^{2}+K-1=\frac{1}{4}KD^{2}+O(D^{2}+K)$, so that if both
$K$ and $D$ are large, we have $c=1/4$.
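The parameter counting above can be verified symbolically; this sketch (our own, with the distance $D=2t$ taken as conjectured) reproduces the two codes of Fig. 20:

```python
def stellated_genus_params(s, t):
    """[[N, K, D]] of a stellated code with odd symmetry parameter s
    embedded in a genus-(s-1)/2 surface, with center-to-boundary graph
    distance t.  D = 2t is the conjectured distance."""
    N, K, D = s * t * t + s - 2, s - 1, 2 * t
    # Consistency with N = (K + 1) D^2 / 4 + K - 1:
    assert 4 * N == (K + 1) * D * D + 4 * (K - 1)
    return N, K, D

print(stellated_genus_params(3, 2))  # (13, 2, 4)
print(stellated_genus_params(5, 2))  # (23, 4, 4)
```
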
We believe the code distance is $2t$ because the code distance of a stellated
code in the plane is $2t+1$, the length of some logical operators that cross
the code from one boundary to another. In the higher genus case, the qubits on
the boundary are identified, reducing the weight of such operators by one. It
is easy to see that $2t$ is indeed an upper bound on $D$ by finding a logical
operator (one crossing the code, as described) with that weight.
## 6 Open problems
In this work, we took a rigorous look at qubit surface codes defined from
embedded graphs. We related them to Majorana surface codes, calculated the
number of encoded qubits, and bounded the code distance in general with more
specific cases worked out exactly.
There remain some interesting questions, however. First, it would be very nice
to be able to calculate (efficiently) the code distance of an arbitrary
surface code defined by Definition 3.3. For instance, we illustrated a prickly
example in Fig. 14, and the codes in Section 5.4 lack rigorous distance
proofs. If the distance cannot be efficiently calculated exactly, can a
multiplicative approximation be found better than the factor $1/2$
approximation of Theorem 4.10? Inapproximability (see e.g. [52]) in complexity
theory might be applicable if indeed the exact distance calculation is NP-
hard.
A second interesting problem is to create a graph-based formalism for color
code twists as described by Kesselring et al. [8]. By this we mean that one
should be able to define twisted color codes on arbitrary embedded graphs,
where the variety of twists correspond to (local) features of the graph, just
as we found that surface code twists arise at odd-degree vertices. For
instance, while untwisted color codes are defined from graphs with
3-colorable, even-degree faces, a twist may arise in a more general graph
where a face has odd degree.
A third direction would be to literally add a third direction: can a lattice-
based formalism be used to describe the variety of surface codes and twist
defects in 3-dimensions and beyond? Here things may in general get messy due
to the lack of classification theorems like those for 1D and 2D codes [53,
54]. Still, we do not need to demand that the framework describe every 3D code
by a 3D lattice, just that every 3D lattice gives rise to some 3D code. We
probably would want some known family, like the toric code, to arise from the
construction on the cubic lattice, and that something interesting (better code
parameters, fractal logical operators, etc.) happen for other lattices.
Finally, probably the most practical open problem is to find lower bounds on
the constant $c$ in the relation $N=cKD^{2}$ [2] that holds for 2-dimensional
codes. Note that $c(w,\mathcal{M})$ may depend on both the maximum stabilizer
weight $w$ and the manifold $\mathcal{M}$. Therefore, a concrete question is,
for instance, limited to stabilizers of at most weight five in the plane
$\mathbb{R}^{2}$, what is the smallest possible value of
$c(5,\mathbb{R}^{2})$? We believe the best known is $c(5,\mathbb{R}^{2})$
approaching $1/2$, see Figure 19. If one allows weight-six stabilizers, as in
the color codes of [8], then $c(6,\mathbb{R}^{2})$ can approach $1/4$. One can
generally concatenate two 2-dimensional codes (e.g. surface codes) with the
$\llbracket 4,2,2\rrbracket$ code to get a color code with $N^{\prime}=4N$,
$K^{\prime}=2K$, $D^{\prime}=2D$, and $w^{\prime}=2w$ [55]. Therefore
$c(2w,\mathcal{M})\leq c(w,\mathcal{M})/2$ for any $w$ and $\mathcal{M}$.
## Acknowledgements
We are grateful to a number of people for helpful discussions, including
Sergey Bravyi, Andrew Cross, Carollan Helinski, Tomas Jochym-O’Connor, Alex
Kubica, Andrew Landahl, and Guanyu Zhu. We would especially like to thank
Leonid Pryadko and Alexey Kovalev for sharing ideas that led to the distance
proof for cyclic toric codes. Rahul Sarkar was partially supported by the
Schlumberger Innovation Fellowship. Ted Yoder was partially supported by the
IBM Research Frontiers Institute.
## References
* [1] A Yu Kitaev. Fault-tolerant quantum computation by anyons. Annals of Physics, 303(1):2–30, 2003.
* [2] Sergey Bravyi, David Poulin, and Barbara Terhal. Tradeoffs for reliable quantum information storage in 2d systems. Physical review letters, 104(5):050503, 2010.
* [3] H Bombin and Miguel A Martin-Delgado. Optimal resources for topological two-dimensional stabilizer codes: Comparative study. Physical Review A, 76(1):012305, 2007.
* [4] Héctor Bombín. Topological order with a twist: Ising anyons from an abelian model. Physical review letters, 105(3):030403, 2010.
* [5] Alexei Kitaev and Liang Kong. Models for gapped boundaries and domain walls. Communications in Mathematical Physics, 313(2):351–373, 2012.
* [6] Theodore J Yoder and Isaac H Kim. The surface code with a twist. Quantum, 1:2, 2017.
* [7] Anirudh Krishna and David Poulin. Topological wormholes: Nonlocal defects on the toric code. Physical Review Research, 2(2):023116, 2020.
* [8] Markus S Kesselring, Fernando Pastawski, Jens Eisert, and Benjamin J Brown. The boundaries and twist defects of the color code and their applications to topological quantum computation. Quantum, 2:101, 2018.
* [9] Nicolas Delfosse. Tradeoffs for reliable quantum information storage in surface codes and color codes. In 2013 IEEE International Symposium on Information Theory, pages 917–921. IEEE, 2013.
  * [10] Nikolas P Breuckmann and Barbara M Terhal. Constructions and noise threshold of hyperbolic surface codes. IEEE Transactions on Information Theory, 62(6):3731–3744, 2016.
* [11] Xiao-Gang Wen. Quantum orders in an exact soluble model. Physical review letters, 90(1):016803, 2003.
* [12] Alexei Kitaev. Anyons in an exactly solved model and beyond. Annals of Physics, 321(1):2–111, 2006.
* [13] Sergey Bravyi, Matthias Englbrecht, Robert König, and Nolan Peard. Correcting coherent errors with surface codes. npj Quantum Information, 4(1):1–6, 2018.
* [14] Alexey A Kovalev and Leonid P Pryadko. Improved quantum hypergraph-product ldpc codes. In 2012 IEEE International Symposium on Information Theory Proceedings, pages 348–352. IEEE, 2012.
* [15] Nikolas P Breuckmann, Christophe Vuillot, Earl Campbell, Anirudh Krishna, and Barbara M Terhal. Hyperbolic and semi-hyperbolic surface codes for quantum storage. Quantum Science and Technology, 2(3):035007, 2017.
* [16] Nikolas P Breuckmann. Homological quantum codes beyond the toric code. PhD thesis, RWTH Aachen University, 2017.
* [17] Saul Stahl. The embeddings of a graph—a survey. Journal of Graph Theory, 2(4):275–298, 1978.
* [18] Saul Stahl. Generalized embedding schemes. Journal of Graph Theory, 2(1):41–52, 1978.
* [19] Marcus Schaefer. Crossing numbers of graphs. CRC Press, 2018.
* [20] John Lee. Introduction to topological manifolds, volume 202. Springer Science & Business Media, 2010.
* [21] John Lee. Introduction to smooth manifolds, volume 218. Springer Science & Business Media, 2012.
* [22] Tibor Radó. Über den begriff der riemannschen fläche. Acta Litt. Sci. Szeged, 2(101-121):10, 1925.
* [23] Herbert Seifert. Seifert and Threlfall, A textbook of topology, volume 89. Elsevier, 1980.
* [24] George K Francis and Jeffrey R Weeks. Conway’s zip proof. The American mathematical monthly, 106(5):393–399, 1999.
* [25] Roman Nedela and Martin Škoviera. Regular maps on surfaces with large planar width. European Journal of Combinatorics, 22(2):243–262, 2001.
* [26] Jozef Širáň. Triangle group representations and their applications to graphs and maps. Discrete Mathematics, 229(1-3):341–358, 2001.
* [27] Reinhard Diestel. Graph theory. Springer-Verlag New York, Incorporated, 2000.
* [28] Jonathan L Gross and Thomas W Tucker. Topological graph theory. Courier Corporation, 2001.
* [29] Allen Hatcher. Algebraic Topology. Cambridge University Press, 2002.
* [30] Sergio Cabello and Bojan Mohar. Finding shortest non-separating and non-contractible cycles for topologically embedded graphs. Discrete & Computational Geometry, 37(2):213–235, 2007.
* [31] Glen E Bredon. Topology and geometry, volume 139. Springer Science & Business Media, 2013.
* [32] Sergey Bravyi, Barbara M Terhal, and Bernhard Leemhuis. Majorana fermion codes. New Journal of Physics, 12(8):083039, 2010.
* [33] Sagar Vijay and Liang Fu. Quantum error correction for complex and Majorana fermion qubits. arXiv preprint arXiv:1703.00459, 2017.
* [34] Sagar Vijay, Timothy H Hsieh, and Liang Fu. Majorana fermion surface code for universal quantum computation. Physical Review X, 5(4):041038, 2015.
* [35] A Yu Kitaev. Unpaired Majorana fermions in quantum wires. Physics-Uspekhi, 44(10S):131, 2001.
* [36] Rahul Sarkar and Ewout Van Den Berg. On sets of commuting and anticommuting Paulis. arXiv preprint arXiv:1909.08123, 2019.
* [37] Alexey A Kovalev, Ilya Dumer, and Leonid P Pryadko. Low-complexity quantum codes designed via codeword-stabilized framework. arXiv preprint arXiv:1108.5490, 2011.
* [38] Sergey B Bravyi and A Yu Kitaev. Quantum codes on a lattice with boundary. arXiv preprint quant-ph/9811052, 1998.
* [39] Michael H Freedman and David A Meyer. Projective plane and planar quantum codes. Foundations of Computational Mathematics, 1(3):325–332, 2001.
* [40] Jonas T Anderson. Homological stabilizer codes. Annals of Physics, 330:1–22, 2013.
* [41] Peter W Shor. Scheme for reducing decoherence in quantum computer memory. Physical review A, 52(4):R2493, 1995.
* [42] Mark De Berg, Marc Van Kreveld, Mark Overmars, and Otfried Schwarzkopf. Computational geometry. In Computational geometry, pages 1–17. Springer, 1997.
* [43] Eric Dennis, Alexei Kitaev, Andrew Landahl, and John Preskill. Topological quantum memory. Journal of Mathematical Physics, 43(9):4452–4505, 2002.
* [44] Maissam Barkeshli and Michael Freedman. Modular transformations through sequences of topological charge projections. Physical Review B, 94(16):165108, 2016.
* [45] Joseph Douglas Horton. A polynomial-time algorithm to find the shortest cycle basis of a graph. SIAM Journal on Computing, 16(2):358–366, 1987.
  * [46] Edoardo Amaldi, Claudio Iuliano, Tomasz Jurkiewicz, Kurt Mehlhorn, and Romeo Rizzi. Breaking the $O(m^{2}n)$ barrier for minimum cycle bases. In European Symposium on Algorithms, pages 301–312. Springer, 2009.
* [47] Telikepalli Kavitha, Kurt Mehlhorn, Dimitrios Michail, and Katarzyna Paluch. A faster algorithm for minimum cycle basis of graphs. In International Colloquium on Automata, Languages, and Programming, pages 846–857. Springer, 2004.
* [48] Kurt Mehlhorn and Dimitrios Michail. Minimum cycle bases: Faster and simpler. ACM Transactions on Algorithms (TALG), 6(1):8, 2009.
  * [49] The GAP Group. GAP – Groups, Algorithms, and Programming, Version 4.11.0, 2020.
* [50] Friedrich Rober. LINS – a GAP package, 2020. github.com/FriedrichRober/LINS.
* [51] Markus Grassl. Bounds on the minimum distance of linear codes and quantum codes. Online available at http://www.codetables.de, 2007. Accessed on 2020-11-03.
* [52] Luca Trevisan. Inapproximability of combinatorial optimization problems. arXiv preprint cs/0409043, 2004.
* [53] Héctor Bombín. Structure of 2d topological stabilizer codes. Communications in Mathematical Physics, 327(2):387–432, 2014.
* [54] Jeongwan Haah. Algebraic methods for quantum codes on lattices. Revista colombiana de matematicas, 50(2):299–349, 2016.
* [55] Ben Criger and Barbara Terhal. Noise thresholds for the [[4, 2, 2]]-concatenated toric code. arXiv preprint arXiv:1604.04062, 2016.
* [56] Daniel E. Gottesman. Stabilizer Codes and Quantum Error Correction. Dissertation. PhD thesis, California Institute of Technology, 1997. http://resolver.caltech.edu/CaltechETD:etd-07162004-113028.
* [57] Eugene P Wigner and Pascual Jordan. Über das paulische äquivalenzverbot. Z. Phys, 47:631, 1928.
* [58] Pavel Hrubeš. On families of anticommuting matrices. Linear Algebra and its Applications, 493:494–507, 2016.
* [59] Henry S Warren. Hacker’s delight. Pearson Education, 2013.
## Appendix A Graph embeddings from oriented rotation systems
We begin with an oriented rotation system $R_{O}=(H_{O},\nu,\epsilon)$ and
show how to obtain the graph embedding that it corresponds to. Recall that the
rotation system gives rise to a set of vertices $V$,
edges $E$, and faces $F$, which are orbits of $\langle\nu\rangle$,
$\langle\epsilon\rangle$ and $\langle\nu\epsilon\rangle$ respectively. The
graph specified by $R_{O}$, consists of vertices and edges labeled by the
elements of $V$ and $E$ respectively (in a bijective fashion), so we can use
the notation $G(V,E)$ to denote the graph without any ambiguity. The vertex-
edge adjacency matrix of $G$ satisfies $a_{ve}=1$ if and only if $v\cap
e\neq\emptyset$, for any $v\in V$ and $e\in E$. As mentioned in Section 2.2,
property (ii) of Definition 2.1 guarantees that $G$ is connected.
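As a sketch of this recipe (our own illustration, not code from the paper), the vertices, edges, and faces of an oriented rotation system can be extracted as permutation orbits; the toy example below is a triangle embedded in the sphere:

```python
def orbits(perm, elements):
    """Orbits of a permutation (given as a dict) acting on `elements`."""
    seen, out = set(), []
    for h in elements:
        if h in seen:
            continue
        orb, x = [], h
        while x not in seen:
            seen.add(x)
            orb.append(x)
            x = perm[x]
        out.append(frozenset(orb))
    return out

def compose(p, q):
    """Permutation composition p o q, i.e. h -> p[q[h]]."""
    return {h: p[q[h]] for h in q}

# A triangle: half-edges 0..5, edge i = {2i, 2i+1}.
H = range(6)
eps = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}  # edge involution
nu = {1: 2, 2: 1, 3: 4, 4: 3, 5: 0, 0: 5}   # rotations at the 3 vertices

V = orbits(nu, H)
E = orbits(eps, H)
F = orbits(compose(nu, eps), H)  # faces: orbits of nu.eps
chi = len(V) - len(E) + len(F)
print(len(V), len(E), len(F), chi)  # 3 3 2 2, i.e. a sphere
```
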
In addition to specifying the graph connectivity, $R_{O}$ also specifies an
embedding of $G$, where the faces of the graph embedding are in bijection with
the elements of $F$ — this justifies the use of the notation $G(V,E,F)$ to
denote the graph embedding. We will now briefly sketch how to obtain this
embedding starting from $R_{O}$. For this we establish some additional
notation: for every half-edge $h\in H_{O}$, let $[h]_{\nu}\in V$ be the unique
vertex that contains $h$ and, likewise, let $[h]_{\epsilon}\in E$ be the
unique edge that contains $h$. Adjacency of vertices, edges and faces for
oriented rotation systems are defined by non-trivial intersection (just as for
general rotation systems); for example, a vertex $v$ and a face $f$ are
adjacent if and only if $v\cap f\neq\emptyset$.
The construction starts by assigning an open disc
$\mathcal{B}_{f}(0,1)\subseteq\mathbb{R}^{2}$ to each face $f\in F$ of
$R_{O}$. For a face $f$, let us order the half-edges in it as
$\\{h_{0},\dots,h_{|f|-1}\\}$, where $h_{i}=(\nu\epsilon)^{i}h$ for any $h\in
f$ chosen arbitrarily. Now for the closed disc
$\overline{\mathcal{B}_{f}(0,1)}$, first let $S^{1}_{f}$ denote the boundary
circle parameterized in polar coordinates, and place $|f|$ distinct points
$0=\theta_{0}<\dots<\theta_{|f|-1}<2\pi$ on $S^{1}_{f}$. Let
$\omega_{0},\dots,\omega_{|f|-1}$ denote the resulting open segments, defined
as $\omega_{i}=(\theta_{i},\theta_{i+1})$ for $0\leq i\leq|f|-2$, and
$\omega_{|f|-1}=(\theta_{|f|-1},2\pi)$, and then define a marking function
$M_{f}:\bigcup_{i=0}^{|f|-1}\\{\theta_{i},\omega_{i}\\}\rightarrow V\cup
H_{O}$ as
$M_{f}(x)=\begin{cases}[h_{i}]_{\nu}&\;\;\text{if }x=\theta_{i}\\\
h_{i}&\;\;\text{if }x=\omega_{i}.\end{cases}$ (58)
The combined effect of the marking functions for all the faces is that each
half-edge $h$ gets associated with exactly one open segment of a boundary
circle $S^{1}_{f}$ for some face $f$, while each vertex $v$ gets associated
with exactly $|v|$ points in different boundary circles. Finally, we take the
disjoint union space $\bigsqcup_{f\in F}\overline{\mathcal{B}_{f}(0,1)}$ and
define an identification rule $\sim$ for points on the boundary circles as
follows:
1. (i)
For every half-edge $h\in H_{O}$, if $(\theta_{a},\theta_{b})$ and
$(\theta^{\prime}_{a},\theta^{\prime}_{b})$ are the open segments associated
with $h$ and $\epsilon h$ respectively, identify these segments using the map
$(\theta_{a},\theta_{b})\ni x\mapsto
x(\theta^{\prime}_{b}-\theta^{\prime}_{a})/(\theta_{a}-\theta_{b})+(\theta_{a}\theta^{\prime}_{a}-\theta_{b}\theta^{\prime}_{b})/(\theta_{a}-\theta_{b})\in(\theta^{\prime}_{a},\theta^{\prime}_{b})$.
2. (ii)
If points $\theta,\theta^{\prime}$ are associated with vertices $v,v^{\prime}$
respectively, then they are identified if and only if $v=v^{\prime}$.
The quotient space $\mathcal{M}=\bigsqcup_{f\in
F}\overline{\mathcal{B}_{f}(0,1)}/\sim$ that results under this identification
is an orientable manifold (we omit the topological details), and $G(V,E)$ is
2-cell embedded in it. The graph embedding map $\Gamma$ is obtained as
follows: (a) if a vertex $v\in V$ is associated with points
$\\{\theta_{1},\dots,\theta_{|v|}\\}$, then $\Gamma(v)$ equals the equivalence
class of these points under $\sim$, (b) if a half-edge $h$ is associated with
an open segment $(\theta_{a},\theta_{b})$ then for the corresponding edge
$e=\\{h,\epsilon h\\}$, $\Gamma(e)$ equals the closure in $\mathcal{M}$ of the
equivalence class of $(\theta_{a},\theta_{b})$ under $\sim$.
## Appendix B Paulis with arbitrary commutation patterns and CALs
In this appendix we first do a very brief review of the Pauli group and recall
some of its basic properties. We then analyze the problem of finding a list of
Paulis given a desired commutation pattern. As a special case, we study the
case of cyclically anticommuting list of Paulis in some detail, which is a key
tool in the construction of the qubit surface codes of Section 3.3.
### B.1 Basic definitions
A multiset $M$ is a set with repeated elements. The multiplicity $\nu(m)$ of
an element $m\in M$ is defined as the number of occurrences of $m$ in $M$. For
a set $A$, if $m\in M$ implies $m\in A$, then we say $M\subseteq A$. If
$M^{\prime}$ is another multiset, then we say $M^{\prime}\subseteq M$ if and
only if the following condition holds: for each $m\in M^{\prime}$ with
multiplicity $\nu^{\prime}(m)$, it must hold that $m\in M$ and
$\nu^{\prime}(m)\leq\nu(m)$. If $M^{\prime}\subseteq M$ and $M^{\prime}\neq M$
then we write $M^{\prime}\subset M$. An ordered multiset is a list.
The Pauli group $\mathcal{P}_{1}$ on one qubit is the 16 element group
consisting of the $2\times 2$ identity matrix $I_{2}$, and the Pauli matrices
$X=\begin{bmatrix}0&1\\\ 1&0\end{bmatrix},\;\;Y=\begin{bmatrix}0&-i\\\
i&0\end{bmatrix},\;\;\text{and }Z=\begin{bmatrix}1&0\\\
0&-1\end{bmatrix},\;\;$ (59)
together with all possible phase factors of $\pm 1$ and $\pm i$. The $n$-Pauli
group $\mathcal{P}_{n}$ on $n$ qubits ($n\geq 1$), with qubits numbered
$0,\dots,n-1$, is now defined to be the group
$\mathcal{P}_{n}=\\{\eta\;(\sigma_{0}\otimes\dots\otimes\sigma_{n-1}):\eta\in\\{\pm
1,\pm i\\},\sigma_{j}\in\\{I_{2},X,Y,Z\\}\;\forall\;j\\},$ (60)
where $\otimes$ denotes the Kronecker product of matrices. The support of a
Pauli $p=\eta\;(\sigma_{0}\otimes\dots\otimes\sigma_{n-1})\in\mathcal{P}_{n}$,
which we define as $\mathrm{supp}\left({p}\right):=\\{0\leq j\leq
n-1:\sigma_{j}\neq I_{2}\\}$, indicates all those qubits where it acts by
either $X$, $Y$ or $Z$. For brevity we will often simply list the Pauli
matrices acting on the support of a Pauli, with a subscript per matrix
indicating the qubit where it acts. For example, $p=iX_{0}Y_{2}Z_{3}$
indicates that $p$ acts on qubits $0$, $2$ and $3$ with $X$, $Y$ and $Z$
respectively, and with $I_{2}$ on all remaining qubits. Another equivalent
notation for representing $p$ (frequently used in Appendix D) is to drop the
$\otimes$ symbol, which gives a string of length $n$ with a phase factor in
front. This is called the string representation of Paulis. In a string
representation, since there is never any chance for confusion, $I_{2}$ is
replaced by $I$ to simplify the notation. For example if $n=5$, the Pauli
$iX_{0}Y_{2}Z_{3}$ has the string representation $iXIYZI$. Moreover one can
optionally combine identical letters appearing consecutively in a string
representation. For example, for $n=5$, if $XXIIZ$ is the
string representation of a Pauli, then it is also equivalently written in
compressed form as $X^{\otimes 2}I^{\otimes 2}Z$. We define the abelian group
$\mathcal{\hat{P}}_{n}$ to be the quotient group $\mathcal{P}_{n}/\langle
iI\rangle$, i.e. if $\hat{p}\in\mathcal{\hat{P}}_{n}$, then it is an
equivalence class of the form $\hat{p}=\\{p,-p,ip,-ip\\}=\langle iI,p\rangle$
for some $p\in\mathcal{P}_{n}$, where $I$ is the identity element of
$\mathcal{P}_{n}$, the $2^{n}\times 2^{n}$ identity matrix. In this appendix
and also in the next, if $p\in\mathcal{P}_{n}$, then $[p]$ will refer to the
equivalence class of $p$ in $\mathcal{\hat{P}}_{n}$, and the support of $[p]$
is defined to be the support of $p$. The weight of any element of
$\mathcal{P}_{n}$ or $\mathcal{\hat{P}}_{n}$ is defined to be the number of
qubits in its support. For any $[p]\in\mathcal{\hat{P}}_{n}$, where
$p=\eta\;(\sigma_{0}\otimes\dots\otimes\sigma_{n-1})\in\mathcal{P}_{n}$, we
define the restriction of $p$ (resp. $[p]$) to qubit $j$ for any $0\leq j\leq
n-1$, to be $\sigma_{j}\in\mathcal{P}_{1}$ (resp.
$[\sigma_{j}]\in\mathcal{\hat{P}}_{1}$). An element $p\in\mathcal{P}_{n}$
(resp. $\hat{p}\in\mathcal{\hat{P}}_{n}$) is called $X$-type if and only if
the restriction of $p$ (resp. $\hat{p}$) to qubit $j$ is either $I_{2}$ or $X$
(resp. $[I_{2}]$ or $[X]$), for all $0\leq j\leq n-1$. We similarly define
$Y$-type and $Z$-type elements of $\mathcal{P}_{n}$ and
$\mathcal{\hat{P}}_{n}$.
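For concreteness, the string representation and the derived notions of support, weight and $X$-type can be sketched in a few lines of Python. The helper names below are our own, not from the text, and phases are ignored:

```python
# Hypothetical helpers for the string representation of a Pauli (phase ignored).

def support(pauli_str):
    """Qubits on which the Pauli acts non-trivially, e.g. "XIYZI" -> [0, 2, 3]."""
    return [i for i, s in enumerate(pauli_str) if s != "I"]

def weight(pauli_str):
    """Number of qubits in the support."""
    return len(support(pauli_str))

def is_x_type(pauli_str):
    """True iff every tensor factor is I or X."""
    return all(s in "IX" for s in pauli_str)
```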
$\mathcal{P}_{n}$ is a non-abelian group of size $4^{n+1}$, and every element
of $\mathcal{P}_{n}$ other than the phase multiples of the identity is
traceless, while we have
$|\mathcal{\hat{P}}_{n}|=4^{n}$. Any two elements $p,q\in\mathcal{P}_{n}$
either commute or anticommute, and we will use the notations $[p,q]=0$ and
$\\{p,q\\}=0$, to denote these two cases respectively. Note that $\\{p,q\\}$
can also refer to the set containing the elements $p$ and $q$, but this
difference will either be clear from context, or we will explain the usage
whenever there is a chance for confusion. For convenience, we define a
function
$\text{Com}:\mathcal{P}_{n}\times\mathcal{P}_{n}\rightarrow\mathbb{F}_{2}$,
that will be useful in some places, as
$\text{Com}(p,q)=\frac{1}{2}\left(1-\mathrm{Tr}\left({pqp^{\dagger}q^{\dagger}}\right)/2^{n}\right),$
(61)
which satisfies $\text{Com}(p,q)=0$ if and only if $[p,q]=0$; in particular,
Com is a symmetric function of its arguments. Two elements
$\hat{p},\hat{q}\in\mathcal{\hat{P}}_{n}$ will be said to commute (resp.
anticommute) if and only if for any chosen representatives $p\in\hat{p}$ and
$q\in\hat{q}$, $p$ and $q$ commute (resp. anticommute), and we will use the
same notation as above to express commutativity. (This notion of commutativity
is different from the one induced by the group operation on
$\mathcal{\hat{P}}_{n}$: elements of $\mathcal{\hat{P}}_{n}$ always commute
under the group operation.) For any non-empty, ordered multiset
$\mathcal{H}=\\{\hat{p}_{1},\dots,\hat{p}_{\ell}\\}\subseteq\mathcal{\hat{P}}_{n}$
such that $|\mathcal{H}|=\ell$, we define following [36], the commutativity
map with respect to $\mathcal{H}$, to be the function
$\text{Com}_{\mathcal{H}}:\mathcal{\hat{P}}_{n}\rightarrow\mathbb{F}_{2}^{\ell}$,
$(\text{Com}_{\mathcal{H}}(\hat{q}))_{i}=\text{Com}(q,p_{i}),\text{ where
}\hat{q}=[q],\;\hat{p}_{i}=[p_{i}],\;\;\;\forall\;1\leq i\leq\ell,$ (62)
and we say that $\hat{q}\in\mathcal{\hat{P}}_{n}$ generates the commutativity
pattern $\text{Com}_{\mathcal{H}}(\hat{q})$ with respect to $\mathcal{H}$.
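The function Com of Eq. (61) can be evaluated numerically by building the Paulis as explicit $2^{n}\times 2^{n}$ matrices. A short sketch (the helper names are our own):

```python
import numpy as np

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*factors):
    """Kronecker product of a sequence of matrices."""
    out = np.array([[1.0 + 0j]])
    for f in factors:
        out = np.kron(out, f)
    return out

def com(p, q):
    """Eq. (61): Com(p, q) = (1 - Tr(p q p^dag q^dag) / 2^n) / 2, valued in F2."""
    n = int(np.log2(p.shape[0]))
    val = 0.5 * (1 - np.trace(p @ q @ p.conj().T @ q.conj().T).real / 2 ** n)
    return int(round(val))
```

For instance, $X\otimes I$ and $Z\otimes I$ anticommute (Com $=1$) while $X\otimes X$ and $Z\otimes Z$ commute (Com $=0$); the commutativity map of Eq. (62) is then just the vector of Com values against each element of $\mathcal{H}$.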
Commutativity of Paulis can also be represented in yet another equivalent
form. Since the phase factors of two Paulis $p,q\in\mathcal{P}_{n}$ do not
influence their commutativity, it is often convenient to represent a Pauli
$p=\eta\;(\sigma_{0}\otimes\dots\otimes\sigma_{n-1})$ in symplectic notation
[56] ignoring the phase factor $\eta$, that is as a binary (row) vector
$v_{p}\in\mathbb{F}_{2}^{2n}$ such that for $0\leq i\leq n-1$, $(v_{p})_{i}=1$
if and only if $\sigma_{i}\in\\{X,Y\\}$, and for $n\leq i\leq 2n-1$,
$(v_{p})_{i}=1$ if and only if $\sigma_{i-n}\in\\{Y,Z\\}$. As binary vectors,
Paulis $p$ and $q$ commute if and only if $v_{p}\Lambda v_{q}^{\top}=0$ where
$\Lambda=\left(\begin{smallmatrix}0&I_{n}\\\
I_{n}&0\end{smallmatrix}\right)\in\mathbb{F}_{2}^{2n\times 2n}$ written using
$n\times n$ blocks [56], $I_{n}$ is the $n\times n$ identity matrix, and all
operations are performed over $\mathbb{F}_{2}$.
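The symplectic encoding and the commutation test $v_{p}\Lambda v_{q}^{\top}=0$ can be sketched directly from the convention above (helper names are our own; Paulis are given as strings with phases ignored):

```python
import numpy as np

# x-bit flags membership in {X, Y}; z-bit flags membership in {Y, Z}.
PAULI_BITS = {"I": (0, 0), "X": (1, 0), "Y": (1, 1), "Z": (0, 1)}

def symplectic(pauli_str):
    """Binary row vector v_p in F2^{2n} for a Pauli string (phase ignored)."""
    x = [PAULI_BITS[s][0] for s in pauli_str]
    z = [PAULI_BITS[s][1] for s in pauli_str]
    return np.array(x + z, dtype=np.uint8)

def commute(p_str, q_str):
    """True iff p and q commute, i.e. v_p Lambda v_q^T = 0 over F2."""
    n = len(p_str)
    vp, vq = symplectic(p_str), symplectic(q_str)
    return (int(vp[:n] @ vq[n:]) + int(vp[n:] @ vq[:n])) % 2 == 0
```

Equivalently, two Paulis commute exactly when they clash (different non-identity factors) on an even number of qubits, which the bilinear form $v_{p}\Lambda v_{q}^{\top}$ counts mod 2.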
A list $\mathcal{H}=\\{p_{0},\dots,p_{\ell-1}\\}\subseteq\mathcal{P}_{n}$ is
called anticommuting (resp. commuting) if and only if $\\{p_{i},p_{j}\\}=0$,
for all $i\neq j$ (resp. $[p_{i},p_{j}]=0$ for all $i,j$). Recall from
Definition 3.2 that a CAL on $n$ qubits is a list
$\mathcal{C}=\\{p_{0},\dots,p_{\ell-1}\\}\subseteq\mathcal{P}_{n}$, with the
property that for distinct $i$ and $j$, $\\{p_{i},p_{j}\\}=0$ if and only if
$j=(i\pm 1)\mod\ell$. Singleton lists and the empty list are considered to be
anticommuting lists and CALs by convention. Similarly we also say that a list
$\mathcal{H}\subseteq\mathcal{\hat{P}}_{n}$ is anticommuting (resp. commuting)
if the resulting list of $\mathcal{P}_{n}$ obtained by replacing each element
in $\mathcal{H}$ by exactly one of its representatives (which can be chosen
arbitrarily) is anticommuting (resp. commuting). We define a CAL of
$\mathcal{\hat{P}}_{n}$ analogously to be a list
$\mathcal{C}\subseteq\mathcal{\hat{P}}_{n}$, such that the list obtained by
replacing each element of $\mathcal{C}$ by a representative is a CAL in
$\mathcal{P}_{n}$, and it is called extremal if it has length $\ell\geq 1$,
and there is no CAL of same length on fewer qubits.
We define the product of all elements in a list
$\mathcal{H}\subseteq\mathcal{P}_{n}$ to be
$\prod\mathcal{H}:=\prod_{p\in\mathcal{H}}p$ (this product is ordered).
Product of multisets of $\mathcal{\hat{P}}_{n}$ is defined in a similar
fashion, except the ordering does not matter as all elements of
$\mathcal{\hat{P}}_{n}$ commute under the group operation. By convention the
product of the empty set is defined to be the identity element
$I\in\mathcal{P}_{n}$ (resp. $[I]\in\mathcal{\hat{P}}_{n}$). Given a multiset
$\mathcal{H}=\\{p_{0},\dots,p_{\ell-1}\\}\subseteq\mathcal{P}_{n}$, the
symplectic representation of Paulis allows us to define its dimension. Writing
each Pauli in the symplectic notation gives a matrix
$H\in\mathbb{F}_{2}^{\ell\times 2n}$, each row of which represents a Pauli in
$\mathcal{H}$, and we define
$\dim(\mathcal{H}):=\mathrm{rank}\left({H}\right)$. In words,
$\dim(\mathcal{H})$ is the size of the smallest subset of $\mathcal{H}$ that
generates all its elements up to phase factors — thus it satisfies
$\dim(\mathcal{H})=\min\\{|\mathcal{H}^{\prime}|:\mathcal{H}^{\prime}\subseteq\mathcal{H},\langle\mathcal{H}^{\prime},iI\rangle=\langle\mathcal{H},iI\rangle\\}$.
We say $\mathcal{H}$ is independent if and only if
$\dim(\mathcal{H})=|\mathcal{H}|$. Dimension and independence of multisets of
$\mathcal{\hat{P}}_{n}$ are now analogously defined in terms of the dimension
and independence of the multiset of $\mathcal{P}_{n}$, obtained by choosing
representatives. Finally, if $\hat{p}\in\mathcal{\hat{P}}_{n}$, and
$\hat{q}\in\mathcal{\hat{P}}_{m}$, the Kronecker product of $\hat{p}$ and
$\hat{q}$, is defined as $\hat{p}\otimes\hat{q}=[p\otimes
q]\in\mathcal{\hat{P}}_{n+m}$, where $p\in\hat{p}$, and $q\in\hat{q}$.
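The dimension of a multiset of Paulis is thus just a rank computation over $\mathbb{F}_{2}$. A minimal sketch (our own bit-packing helper), applied to the dependent triple $XI$, $IZ$, $XZ$ (the third is the product of the first two up to phase):

```python
def gf2_rank(vectors):
    """Rank over F2 of a list of 0/1 vectors, by elimination on bit-packed rows."""
    rows = [int("".join(str(b) for b in v), 2) for v in vectors]
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank
```

For the triple above, written as symplectic rows on $n=2$ qubits, the rank is $2<3$, so the multiset is not independent.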
### B.2 Constructing a list of Paulis given desired commutation relations
Suppose $C\in\mathbb{F}_{2}^{\ell\times\ell}$ is a matrix specifying the
commutation relations for an unknown list of Paulis
$\\{p_{0},\dots,p_{\ell-1}\\}\subseteq\mathcal{P}_{n}$ (not necessarily all
distinct), that is $C_{ij}=\text{Com}(p_{i},p_{j})$. Hence, $C$ must be
symmetric and must have zero diagonal (i.e. $C_{ii}=0$ for all $i$). In this
subsection, we answer the following questions:
1. (i)
From $C$ alone, can we find Paulis $p_{i}$ that obey the specified commutation
relations?
2. (ii)
What is the minimum number of qubits $n$ required to achieve $C$?
The key to answering these questions is to simultaneously row and column
reduce the matrix $C$. We start by analyzing a symmetric rank-revealing
factorization algorithm, which is important in the sequel. We make use of the
elementary row operations $E^{(h,k)}\in\mathbb{F}_{2}^{\ell\times\ell}$,
defined by $E^{(h,k)}_{ij}=1$ if and only if $i=j$, or $(i,j)=(h,k)$. If
$h\neq k$, when a matrix is multiplied by $E^{(h,k)}$ on the left, its
$k^{\text{th}}$ row is added to its $h^{\text{th}}$ row, while when multiplied
by $\left(E^{(h,k)}\right)^{\top}$ on the right, its $k^{\text{th}}$ column is
added to its $h^{\text{th}}$ column. We will refer to the latter as elementary
column operations. Note also that $E^{(h,k)}$ is self-inverse. If $h=k$,
$E^{(h,h)}$ equals the identity matrix. Now consider Algorithm 2, where all
operations occur over the field $\mathbb{F}_{2}$ (note that Algorithm 2 may be
of independent interest elsewhere).
Algorithm 2
1: procedure LBLdecompose($C$) $\triangleright$ $C\in\mathbb{F}_{2}^{\ell\times\ell}$ is symmetric with zero diagonal
2:  Set $B\leftarrow C$ and $L\leftarrow I$
3:  For $i=1$ to $\ell-1$:
4:   Find the smallest index $j_{i}$ such that $B_{ij_{i}}=1$. If none exists, set $j_{i}\leftarrow i$.
5:   While there exists $k>i$ such that $B_{kj_{i}}=1$:
6:    $B\leftarrow E^{(k,i)}B\left(E^{(k,i)}\right)^{\top}$ $\triangleright$ Set $B_{kj_{i}}=B_{j_{i}k}=0$
7:    $L\leftarrow LE^{(k,i)}$
8:  Return $B$ and $L$
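A direct transcription of Algorithm 2 into Python (a sketch: indices are 0-based here, and `numpy` arrays over $\{0,1\}$ stand in for $\mathbb{F}_{2}$ matrices):

```python
import numpy as np

def lbl_decompose(C):
    """Algorithm 2: factor a symmetric, zero-diagonal C as C = L B L^T over F2.

    Returns (B, L) with L invertible lower-triangular and B symmetric,
    zero-diagonal, with at most a single 1 in every row and column.
    """
    C = np.asarray(C, dtype=np.uint8) % 2
    ell = C.shape[0]
    B, L = C.copy(), np.eye(ell, dtype=np.uint8)
    for i in range(ell - 1):
        ones = np.flatnonzero(B[i])
        j = ones[0] if len(ones) else i   # smallest j with B[i, j] = 1, else i
        for k in range(i + 1, ell):
            if B[k, j]:                   # clear B[k, j] and B[j, k]
                B[k, :] ^= B[i, :]        # left-multiply by E^(k,i)
                B[:, k] ^= B[:, i]        # right-multiply by E^(k,i)^T
                L[:, i] ^= L[:, k]        # L <- L E^(k,i)
    return B, L
```

Each `E^(k,i)` update touches one row, one column of $B$ and one column of $L$, so it costs $O(\ell)$, and there are $O(\ell^{2})$ of them, consistent with the $O(\ell^{3})$ bound stated in Lemma B.1.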
###### Lemma B.1.
Let $C\in\mathbb{F}_{2}^{\ell\times\ell}$ with $\ell\geq 1$, such that
$C=C^{\top}$ and $C_{ii}=0$ for all $i$, and consider Algorithm 2. Let
$B^{(0)}=C$, and for $i\geq 1$, let $B^{(i)}$ denote the matrix $B$ after the
end of iteration $i$ of the for-loop. For $i\geq 0$, let $B^{(i,k)}$ denote
the upper-left $k\times k$ block of $B^{(i)}$, i.e. all rows and columns
numbered $1$ to $k$. Then for all $1\leq i\leq\ell-1$
1. (a)
$B^{(i)}$ is symmetric, has zero diagonal, and $B^{(i,i+1)}=B^{(r,i+1)}$ for
all $r\geq i$. Moreover the upper-left $i\times i$ block of $B^{(i-1)}$ is
unchanged during for-loop iteration $i$.
2. (b)
If $j_{i}\neq i$, then $B^{(r)}_{is}=B^{(r)}_{si}=0$ and
$B^{(r)}_{ij_{i}}=B^{(r)}_{j_{i}i}=1$ for all $r\geq i-1$ and $s<j_{i}$. If
$j_{i}=i$, then $B^{(r)}_{is}=B^{(r)}_{si}=0$ for all $r\geq i-1$ and $1\leq
s\leq\ell$.
3. (c)
If $j_{i}\neq i$, then $B^{(r)}_{sj_{i}}=B^{(r)}_{j_{i}s}=0$, for all $r\geq
i$ and $s>i$.
4. (d)
If $B^{(i,i)}_{rs}=1$, then it is the only non-zero element of the
$r^{\text{th}}$ and $s^{\text{th}}$ rows and columns of $B^{(i)}$.
5. (e)
Each row and column of $B^{(i,i+1)}$ contains at most a single $1$.
6. (f)
$j_{j_{i}}=i$.
The outputs $L,B\in\mathbb{F}_{2}^{\ell\times\ell}$ of the algorithm satisfy
$C=LBL^{\top}$, where $L$ is an invertible lower-triangular matrix, $B$ is
symmetric and has zero diagonal, and each row and column of $B$ contains at
most a single $1$. The algorithm runs in $O(\ell^{3})$ time.
* Proof.
Note that by construction, throughout the algorithm we have that $L$ is lower
triangular and invertible (since the $E^{(k,i)}$ matrix inside the while loop
is lower triangular and invertible), and also that $C=LBL^{\top}$. The claims
about the output $B$ of the algorithm follow as $B=B^{(\ell-1)}$ and from (a)
and (e). It is also clear that the worst-case running time of the algorithm is
$O(\ell^{3})$, provided we exploit the sparsity of $E^{(k,i)}$ when performing
the matrix multiplications inside the while loop (a naive dense multiplication
would instead give an $O(\ell^{5})$ algorithm). Note also that the symmetry of
$B^{(i-1)}$ dictates that in the case $j_{i}=i$, the while loop exits
immediately.
We now prove (a)-(f).
1. (a)
Let $\tilde{L}$ denote the matrix $L$ after the end of for-loop iteration $i$
of Algorithm 2. Then $C=\tilde{L}B^{(i)}\tilde{L}^{\top}$ and so $B^{(i)}$ is
symmetric, for all $1\leq i\leq\ell-1$. $B^{(i)}$ has zero diagonal because
for all $1\leq r\leq\ell$
$B^{(i)}_{rr}=\sum_{xy}\tilde{L}^{-1}_{rx}C_{xy}\tilde{L}^{-1}_{ry}=\sum_{x}\tilde{L}^{-1}_{rx}C_{xx}+\sum_{x<y}\tilde{L}^{-1}_{rx}\tilde{L}^{-1}_{ry}(C_{xy}+C_{yx})=0.$
(63)
Now note that during any for-loop iteration $i$, in each iteration of the
while loop, multiplying $E^{(k,i)}$ on the left does not change any row of $B$
with index $1$ to $i$, while multiplying by $\left(E^{(k,i)}\right)^{\top}$ on
the right does not change any column of $E^{(k,i)}B$ with index $1$ to $i$. It
immediately follows that the upper-left $i\times i$ block of $B^{(i-1)}$ is
unchanged during for-loop iteration $i$. We also easily deduce from this that
$B^{(i,i+1)}=B^{(r,i+1)}$ for all $r\geq i$.
2. (b)
First notice that from the beginning of for-loop iteration $i$, all rows
numbered $1$ to $i$ are unaffected by elementary row operations alone. If
$j_{i}=i$, it means that at the start of for-loop iteration $i$, the
$i^{\text{th}}$ row of $B^{(i-1)}$ is zero. Any elementary column operations
will not change this row subsequently. If instead $j_{i}\neq i$, then at the
start of for-loop iteration $i$ we have $B^{(i-1)}_{is}=0$ for all $s<j_{i}$,
and $B^{(i-1)}_{ij_{i}}=1$. In this case, any subsequent elementary column
operations will not change columns $1$ to $j_{i}$ of the $i^{\text{th}}$ row.
The conclusion now follows by symmetry using (a).
For parts (c)-(f), we will first prove a claim. We claim that for all $1\leq
i\leq\ell-1$, Algorithm 2 maintains the following invariants after for-loop
iteration $i$:
1. (i)
For all $1\leq k\leq i$, $B^{(i)}_{sj_{k}}=B^{(i)}_{j_{k}s}=0$, for all $s>k$.
2. (ii)
For every $1\leq r,s\leq i$ such that $B^{(i)}_{rs}=1$, the only non-zero
element of the $r^{\text{th}}$ and $s^{\text{th}}$ rows and columns of
$B^{(i)}$ are $B^{(i)}_{rs}=B^{(i)}_{sr}=1$. If in addition $r=i$, then the
$i^{\text{th}}$ row and column of $B^{(i-1)}$ do not change during the
iteration, while if $1\leq r,s<i$, then the $r^{\text{th}}$ row and the
$s^{\text{th}}$ column of $B^{(i-1)}$ do not change during iteration $i$.
3. (iii)
If $j_{i}\leq i$, then $j_{j_{i}}=i$.
Proof of claim. It is helpful to make an observation about what happens during
for-loop iteration $i$, assuming $j_{i}\neq i$. One then easily checks that in
an iteration of the while loop, the effect of the update $B\leftarrow
E^{(k,i)}B\left(E^{(k,i)}\right)^{\top}$ on the $j_{i}^{\text{th}}$ column of
$B$ is to set the $(k,j_{i})$ entry to zero without affecting any other
entries, for some $k>i$. By symmetry the only effect on the
$j_{i}^{\text{th}}$ row of $B$ is to set the $(j_{i},k)$ entry to zero. Thus
at the end of for-loop iteration $i$, the $j_{i}^{\text{th}}$ column of
$B^{(i)}$ satisfies $B^{(i)}_{sj_{i}}=0$ for all $s>i$, and
$B^{(i)}_{ij_{i}}=1$.
Returning to the proof of the claim, notice that for $i=1$ (i)-(iii) are
clearly true: if $j_{1}\neq 1$ then (i) holds by the observation in the
previous paragraph, and if $j_{1}=1$ then it is true by (b), (ii) is vacuous
as $B^{(1,1)}_{11}=0$ by (a), and for (iii) note that $j_{1}\leq 1$ implies
$j_{1}=1$, thus $j_{j_{1}}=j_{1}=1$. Now for the inductive hypothesis, assume
that the invariants are true after for-loop iteration $i-1$, for some
$1<i\leq\ell-1$, and we want to show that they are also true after iteration
$i$. We do this separately in three steps.
Step 1: Let us first show that (iii) holds. If $j_{i}=i$ then it clearly
holds, so assume $j_{i}\leq i-1$. Now at the start of for-loop iteration $i$
we have by symmetry $B^{(i-1)}_{j_{i}i}=1$. Assume for contradiction that
$j_{j_{i}}\neq i$, and consider the situation during for-loop iteration
$j_{i}$. If $j_{j_{i}}>i$, then by (b) we would get $B^{(i-1)}_{j_{i}i}=0$, so
it must be the case that $j_{j_{i}}<i$. But then we get
$B^{(i-1)}_{j_{i}j_{j_{i}}}=1$, again by (b). Thus the $j_{i}^{\text{th}}$ row
of $B^{(i-1)}$ contains two non-zero elements, which contradicts (ii) of the
inductive hypothesis.
Step 2: We now show that (ii) holds after for-loop iteration $i$, first in the
limited setting $1\leq r,s<i$. By symmetry, it suffices to prove that whenever
$B^{(i)}_{rs}=1$, it is the only non-zero element of the $r^{\text{th}}$ row
of $B^{(i-1)}$ and that this row does not change during the iteration; one can
then repeat the same argument for the element $B^{(i)}_{sr}=1$ in the
$s^{\text{th}}$ row. Assuming such an element $B^{(i)}_{rs}=1$ exists, we have
from (a) that $B^{(i-1)}_{rs}=1$, and by (ii) of the inductive hypothesis it
is the only non-zero element of the $r^{\text{th}}$ row of $B^{(i-1)}$. Now
for every while loop iteration within for-loop iteration $i$, the
$r^{\text{th}}$ row of $B^{(i-1)}$ is not modified by elementary row
operations alone, by the same argument as in the proof of (b), or by
elementary column operations as $B^{(i-1)}_{ri}=0$ (this continues to be true
by (a) during the iteration). Thus the $r^{\text{th}}$ row of $B^{(i-1)}$ is
unchanged proving this case.
It remains to prove that (ii) also holds for any non-zero element in the
$i^{\text{th}}$ row or column of $B^{(i,i)}$. For this consider the
$i^{\text{th}}$ row of $B^{(i)}$. If $j_{i}\geq i$, then we are again done
using (b) as the $i^{\text{th}}$ row and column of $B^{(i,i)}$ is zero, so
suppose $j_{i}<i$. We will show that in this case the only non-zero element of
the $i^{\text{th}}$ row and $j_{i}^{\text{th}}$ column of $B^{(i)}$ is
$B^{(i)}_{ij_{i}}=1$, and the $i^{\text{th}}$ row of $B^{(i-1)}$ is unchanged
during the for-loop, and then we are done by symmetry. We now have a series of
consequences. By invariant (iii), which we have already proved above to hold
after for-loop iteration $i$, we have $j_{j_{i}}=i$. Applying (i) of the
inductive hypothesis then gives $B^{(i-1)}_{si}=0$ for all $s>j_{i}$, and
using symmetry of $B^{(i-1)}$ we conclude that the only non-zero element of
the $i^{\text{th}}$ row of $B^{(i-1)}$ is $B^{(i-1)}_{ij_{i}}=1$. Next using
exactly the same argument as in the last paragraph (for the case $1\leq
r,s<i$), we conclude that the $i^{\text{th}}$ row of $B^{(i-1)}$ is unchanged
during for-loop iteration $i$. Now consider the $j_{i}^{\text{th}}$ column of
$B^{(i-1)}$. By (ii) of the inductive hypothesis and (a) we first have
$B^{(i-1)}_{sj_{i}}=B^{(i)}_{sj_{i}}=0$ for all $s<i$, and by the observation
stated above we also have $B^{(i)}_{sj_{i}}=0$ for all $s>i$. This proves that
the only non-zero element in the $j_{i}^{\text{th}}$ column of $B^{(i)}$ is
$B^{(i)}_{ij_{i}}=1$.
Step 3: Finally we show that (i) holds. If $k=i$, then the case $j_{i}\neq i$
is already true by the observation made above, while the case $j_{i}=i$ is
true by (b). So fix some $1\leq k<i$. If $j_{k}\leq i$ then by (b) we have
$B^{(i)}_{k,j_{k}}=1$, so we are done because invariant (ii) holds after
iteration $i$. If $j_{i}=i$, then $B^{(i-1)}=B^{(i)}$ and we are done using
(i) of the inductive hypothesis. So the interesting case is when $j_{i}\neq i$
and $j_{k}>i$. Now there are two subcases $j_{i}>k$ and $j_{i}<k$ which need
to be handled separately (note that the case $j_{i}=k$ is ruled out by (b)).
First suppose $j_{i}>k$, and consider the $j_{k}^{\text{th}}$ column of
$B^{(i-1)}$. Since by (i) of the inductive hypothesis we have
$B^{(i-1)}_{ij_{k}}=B^{(i-1)}_{j_{i}j_{k}}=0$, elementary row operations in
the while loops during for-loop iteration $i$ do not change this column (this
uses $B^{(i-1)}_{ij_{k}}=0$), while elementary column operations never act on
the column as $B^{(i-1)}_{j_{k}j_{i}}=0$ (this uses symmetry and
$B^{(i-1)}_{j_{i}j_{k}}=0$). Thus the $j_{k}^{\text{th}}$ columns are the same
for both $B^{(i-1)}$ and $B^{(i)}$, and this case is proved. Finally consider
the case $j_{i}<k$. Notice that since $j_{i}<k<i$ and we have proved that
(iii) holds after for-loop iteration $i$, we have $j_{j_{i}}=i$, and then by
(i) of the induction hypothesis we get $B^{(i-1)}_{si}=0$ for all $s>j_{i}$.
Moreover since we also proved that (ii) holds, we know that the
$i^{\text{th}}$ column of $B^{(i-1)}$ does not change during the for-loop. Now
again consider the $j_{k}^{\text{th}}$ column of $B^{(i-1)}$, focusing only on
the rows numbered $k$ to $\ell$. As before, this part of the column is
unchanged by elementary row operations during for-loop iteration $i$, so we
only need to argue that elementary column operations also do not affect it.
But this immediately follows from the pattern of zeros in the $i^{\text{th}}$
column of $B^{(i-1)}$, which finishes the proof of the claim.
We now finish up the proof of the lemma.
1. (c)
This follows directly from (i) of the above claim.
2. (d)
This is a sub-part of invariant (ii) of the claim.
3. (e)
Suppose this is not true, and let $k=i+1$. Then by (ii) of the claim and
symmetry of $B^{(i)}$, we must have at least two non-zero elements in the
$k^{\text{th}}$ row of $B^{(i,k)}$, one of which must be $B^{(i,k)}_{kj_{k}}$,
and let the other non-zero element be $B^{(i,k)}_{kh}$, and note that
$j_{k}<h<k$ (by definition of $j_{k}$). But by invariant (iii) applied to the
$k^{\text{th}}$ for-loop, we also have $j_{j_{k}}=k$. Since $j_{k}\leq i$, we
can now apply (i) of the claim, but this gives a contradiction as
$B^{(i,k)}_{hk}=1$ by symmetry.
4. (f)
If $j_{i}\leq i$, then this is just the statement of (iii) of the claim. So
let $j_{i}>i$, and suppose $j_{j_{i}}\neq i$. Then after for-loop iteration
$j_{i}$, we have $B^{(j_{i})}_{j_{j_{i}}j_{i}}=1=B^{(j_{i})}_{ij_{i}}$, by (b)
and (i) of the claim. But this contradicts invariant (ii).
∎
From the properties of $B$ in the above lemma, we deduce the following
additional simple lemma.
###### Lemma B.2.
Let $B\in\mathbb{F}_{2}^{\ell\times\ell}$ be a symmetric matrix with zero
diagonal and at most a single 1 in each row and column. Let
$|B|=\sum_{ij}B_{ij}$ be the total number of ones. Then $|B|$ is even and
$\mathrm{rank}\left({B}\right)=|B|$. In particular
$\mathrm{rank}\left({C}\right)=|B|$ is even.
* Proof.
It is clear that $|B|$ is even due to the symmetry of $B$ and its zero
diagonal. Since each row has at most a single $1$, $B$ has exactly $|B|$
non-zero rows, all of which are independent since each column of $B$ has at
most a single $1$. This shows $\mathrm{rank}\left({B}\right)=|B|$. Finally,
since
$C=LBL^{\top}$ and $L$ is invertible,
$\mathrm{rank}\left({C}\right)=\mathrm{rank}\left({B}\right)$. ∎
Let us apply this linear algebra to our study of Paulis. Let
$P\in\mathbb{F}_{2}^{\ell\times 2n}$ be the matrix representing the list of
Paulis $\\{p_{0},p_{1},\dots,p_{\ell-1}\\}$ (which we are looking for) in
symplectic notation. Therefore, $\\{p_{0},p_{1},\dots,p_{\ell-1}\\}$ satisfies
the commutation relations specified by $C$ if and only if $P\Lambda
P^{\top}=C$. If $C=0$, it suffices to just use $1$ qubit and we can take
$p_{0}=\dots=p_{\ell-1}=I_{2}$. So let us assume that $C\neq 0$. Let us
suppose that we decompose $C=LBL^{\top}$ as in Lemma B.1. As we now explain,
we can easily create a list of Paulis $\\{q_{0},q_{1},\dots,q_{\ell-1}\\}$
with commutation relations given by $B$, or equivalently a matrix
$Q\in\mathbb{F}_{2}^{\ell\times 2n}$ such that $Q\Lambda Q^{\top}=B$. For each
pair $(i,j)$ such that $i<j$ and $B_{ij}=1$, we introduce a qubit (call it
qubit $(i,j)$), and let $q_{i}=X_{(i,j)}$ and $q_{j}=Z_{(i,j)}$ act on that
qubit. For all zero rows $i$ of $B$, we set $q_{i}=I_{2}$. We have thus used
$n=\mathrm{rank}\left({B}\right)/2=\mathrm{rank}\left({C}\right)/2$ qubits to
make the Paulis $\\{q_{0},q_{1},\dots,q_{\ell-1}\\}$, and one easily verifies
that this list of Paulis satisfy the commutation relations specified by $B$.
Now $Q\Lambda Q^{\top}=B$ implies $(LQ)\Lambda(LQ)^{\top}=LBL^{\top}=C$, so
$P=LQ$ represents a list of Paulis satisfying the commutation relations
specified by $C$. It is also easy to check that
$\mathrm{rank}\left({P}\right)=\mathrm{rank}\left({LQ}\right)=\mathrm{rank}\left({Q}\right)=2n=\mathrm{rank}\left({C}\right)$.
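The construction of this paragraph can be sketched end to end; the function names below are our own, `lbl_decompose` is a minimal re-implementation of Algorithm 2, and the result lists the Paulis as symplectic row vectors:

```python
import numpy as np

def lbl_decompose(C):
    """Minimal Algorithm 2: factor C = L B L^T over F2."""
    ell = C.shape[0]
    B, L = C.copy() % 2, np.eye(ell, dtype=np.uint8)
    for i in range(ell - 1):
        ones = np.flatnonzero(B[i])
        j = ones[0] if len(ones) else i
        for k in range(i + 1, ell):
            if B[k, j]:
                B[k, :] ^= B[i, :]
                B[:, k] ^= B[:, i]
                L[:, i] ^= L[:, k]
    return B, L

def paulis_from_commutations(C):
    """Return P in F2^{ell x 2n} with P Lambda P^T = C, on n = max(1, rank(C)/2) qubits."""
    ell = C.shape[0]
    B, L = lbl_decompose(C)
    # One fresh qubit per matched pair (i, j) of B: q_i = X, q_j = Z on that qubit.
    pairs = [(i, j) for i in range(ell) for j in range(i + 1, ell) if B[i, j]]
    n = max(1, len(pairs))
    Q = np.zeros((ell, 2 * n), dtype=np.uint8)
    for qubit, (i, j) in enumerate(pairs):
        Q[i, qubit] = 1          # X component of q_i on this qubit
        Q[j, n + qubit] = 1      # Z component of q_j on this qubit
    return (L @ Q) % 2           # P = L Q

def symplectic_form(n):
    """Lambda = [[0, I_n], [I_n, 0]] over F2."""
    O, I = np.zeros((n, n), dtype=np.uint8), np.eye(n, dtype=np.uint8)
    return np.block([[O, I], [I, O]])
```

For example, feeding in the $3\times 3$ all-anticommuting $C$ returns the symplectic rows of $X$, $Z$ and (up to phase) $Y$ on a single qubit.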
This discussion leads to the following theorem.
###### Theorem B.3.
Let $C\in\mathbb{F}_{2}^{\ell\times\ell}$ be symmetric with zero diagonal. A
list of $n$-qubit Paulis
$\mathcal{C}=\\{p_{0},p_{1},\dots,p_{\ell-1}\\}\subseteq\mathcal{P}_{n}$ such
that $C_{ij}=\text{Com}(p_{i},p_{j})$ for all $0\leq i,j\leq\ell-1$, exists if
and only if $n\geq\max(1,\mathrm{rank}\left({C}\right)/2)$. Moreover, when
$\mathcal{C}$ exists, $\dim(\mathcal{C})\geq\mathrm{rank}\left({C}\right)$,
with equality if $n=\mathrm{rank}\left({C}\right)/2$.
* Proof.
If $C=0$, then as argued above $n=1$ is the minimum number of needed qubits to
achieve the commutation relations specified by $C$; thus the first statement
is true. So assume $C\neq 0$, which implies $\mathrm{rank}\left({C}\right)\geq
2$ by Lemma B.2. The discussion in the previous paragraph shows the
construction of $\mathcal{C}$ when $n=\mathrm{rank}\left({C}\right)/2$, and
this clearly suffices as if $n>\mathrm{rank}\left({C}\right)/2$, then one can
simply act by $I_{2}$ on the extra qubits. We argue the converse, i.e. for the
minimality of $n$, by contradiction. Let $P\in\mathbb{F}_{2}^{\ell\times 2n}$
be the matrix such that $P\Lambda P^{\top}=C$ with
$n<\mathrm{rank}\left({C}\right)/2$. Let $C=LBL^{\top}$ from Lemma B.1. Then
$L^{-1}P$ represents a list of Paulis with commutation relations given by $B$.
But the structure of $B$ implies the existence of
$\mathrm{rank}\left({B}\right)/2=\mathrm{rank}\left({C}\right)/2$
anticommuting pairs of Paulis (i.e. the Paulis can be grouped in pairs that
anticommute with each other but commute with all other Paulis in the list),
which is known to be impossible on fewer than
$\mathrm{rank}\left({C}\right)/2$ qubits. (Ignoring phases, the size of the
group generated by $n$ anticommuting pairs of Paulis is $4^{n}$, since every
paired Pauli is necessarily independent; therefore the Paulis must act on at
least $n$ qubits.)
Now suppose $\mathcal{C}\subseteq\mathcal{P}_{n}$ exists satisfying the
commutation relations specified by $C$, where we already know
$n\geq\max(1,\mathrm{rank}\left({C}\right)/2)$. Representing the Paulis in
$\mathcal{C}$ by a matrix $P\in\mathbb{F}_{2}^{\ell\times 2n}$ again gives
$P\Lambda P^{\top}=C$. Thus
$\mathrm{rank}\left({C}\right)\leq\mathrm{rank}\left({P}\right)=\dim(\mathcal{C})$.
If in addition $2n=\mathrm{rank}\left({C}\right)$, then
$\mathrm{rank}\left({P}\right)\leq\min(\ell,\mathrm{rank}\left({C}\right))\leq\mathrm{rank}\left({C}\right)$,
which implies $\dim(\mathcal{C})=\mathrm{rank}\left({C}\right)$. ∎
We demonstrate the use of Theorem B.3 with a couple of examples, the first of
which is well known, and the second one concerns CALs, which will be discussed
much more thoroughly in the next subsection.
###### Example 1.
Suppose
$C=\left(\begin{array}[]{ccccc}0&1&1&&1\\\ 1&0&1&\dots&1\\\ 1&1&0&&1\\\
&\vdots&&\ddots&\vdots\\\
1&1&1&\dots&0\end{array}\right)\in\mathbb{F}_{2}^{\ell\times\ell},\;\;\ell\geq
2,$ (64)
so that we are looking for a list of anticommuting Paulis. It is well known
that
$\mathrm{rank}\left({C}\right)/2=\begin{cases}(\ell-1)/2,&\ell\text{ odd}\\\
\ell/2,&\ell\text{ even}.\end{cases}$ (65)
Then, the construction from Theorem B.3 with
$n=\mathrm{rank}\left({C}\right)/2$, returns the Jordan-Wigner [57] list —
i.e. for $h\in\\{0,1,\dots,2\lfloor\ell/2\rfloor-1\\}$,
$p_{h}=\begin{cases}X_{h/2}\prod_{i=0}^{h/2-1}Y_{i},&h\text{ even}\\\
Z_{(h-1)/2}\prod_{i=0}^{(h-3)/2}Y_{i},&h\text{ odd},\end{cases}$ (66)
and, if $\ell$ is odd, the final Pauli $p_{\ell-1}=\prod_{i=0}^{n-1}Y_{i}$.
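The list of Eq. (66) is easy to generate and sanity-check programmatically. The sketch below (helper names are our own) builds the Paulis as strings and verifies pairwise anticommutation by counting clashing tensor factors:

```python
def jordan_wigner(ell):
    """Pauli strings for Eq. (66): ell mutually anticommuting Paulis on rank(C)/2 qubits."""
    n = (ell - 1) // 2 if ell % 2 else ell // 2
    paulis = []
    for h in range(2 * (ell // 2)):
        # Even h: Y...Y X on qubits 0..h/2; odd h: Y...Y Z on qubits 0..(h-1)/2.
        core = "Y" * (h // 2) + "X" if h % 2 == 0 else "Y" * ((h - 1) // 2) + "Z"
        paulis.append(core.ljust(n, "I"))
    if ell % 2:                      # the final Pauli p_{ell-1} = Y on every qubit
        paulis.append("Y" * n)
    return paulis

def anticommute(p, q):
    """True iff Pauli strings p and q anticommute (odd number of clashing factors)."""
    clashes = sum(1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b)
    return clashes % 2 == 1
```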
###### Example 2.
Suppose
$C=\left(\begin{array}[]{ccccc}0&1&0&&1\\\ 1&0&1&\dots&0\\\ 0&1&0&&0\\\
&\vdots&&\ddots&\vdots\\\
1&0&0&\dots&0\end{array}\right)\in\mathbb{F}_{2}^{\ell\times\ell},\;\;\ell\geq
3,$ (67)
so that we are looking for a cyclically anticommuting list of Paulis. We note
that the rank of $C$ satisfies
$\mathrm{rank}\left({C}\right)/2=\begin{cases}(\ell-1)/2,&\ell\text{ odd}\\\
(\ell-2)/2,&\ell\text{ even}.\end{cases}$ (68)
The construction from Theorem B.3, again with
$n=\mathrm{rank}\left({C}\right)/2$ qubits, now gives the list
$\begin{split}\left\\{X_{0},\ Z_{0},\ X_{0}X_{1},\ Z_{1},\ \dots,\
X_{n-2}X_{n-1},\ Z_{n-1},\
Y_{n-1}\prod_{i=0}^{n-2}Z_{i}\right\\},&\quad\ell\text{ odd},\\\
\left\\{X_{0},\ Z_{0},\ X_{0}X_{1},\ Z_{1},\ \dots,\ X_{n-2}X_{n-1},\
Z_{n-1},\ X_{n-1},\ \prod_{i=0}^{n-1}Z_{i}\right\\},&\quad\ell\text{
even}.\end{split}$ (69)
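As with Example 1, the even-$\ell$ list of Eq. (69) can be generated and checked against the cyclic commutation pattern (a sketch; helper names are our own):

```python
def cal_even(ell):
    """Pauli strings for the even-ell list of Eq. (69), on n = (ell - 2)/2 qubits."""
    n = (ell - 2) // 2
    paulis = ["X" + "I" * (n - 1), "Z" + "I" * (n - 1)]
    for i in range(1, n):
        paulis.append("I" * (i - 1) + "XX" + "I" * (n - i - 1))  # X_{i-1} X_i
        paulis.append("I" * i + "Z" + "I" * (n - i - 1))         # Z_i
    paulis.append("I" * (n - 1) + "X")                           # X_{n-1}
    paulis.append("Z" * n)                                       # Z_0 ... Z_{n-1}
    return paulis

def anticommute(p, q):
    """True iff Pauli strings p and q anticommute (odd number of clashing factors)."""
    return sum(1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b) % 2 == 1
```

The check below confirms the CAL condition: consecutive elements (cyclically, including the first and last) anticommute, and all other pairs commute.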
By Theorem B.3, $\dim(\mathcal{C})=2n$ in the preceding examples, which
ensures that $\langle\mathcal{C},iI\rangle=\mathcal{P}_{n}$. The construction
can, however, be generalized to take $\dim(\mathcal{C})$ as an input
parameter.
###### Theorem B.4.
Suppose $C\in\mathbb{F}_{2}^{\ell\times\ell}$ is symmetric and has zero
diagonal, and let $k\in\mathbb{Z}$ such that
$\mathrm{rank}\left({C}\right)\leq k\leq\ell$. A list of Paulis
$\mathcal{C}=\\{p_{0},p_{1},\dots,p_{\ell-1}\\}\subseteq\mathcal{P}_{n}$
satisfying (a) $C_{ij}=\text{Com}(p_{i},p_{j})$ for all $0\leq i,j\leq\ell-1$,
and (b) $\dim(\mathcal{C})=k$ exists if and only if
$n\geq\max(1,k-\mathrm{rank}\left({C}\right)/2)$.
* Proof.
First consider the case $C=0$: as argued in Theorem B.3, we need $n\geq 1$. It
is also well known that on $n$ qubits, the maximum size of an independent
commuting list of Paulis is $n$, so $k\leq n$, and this also
proves the converse for this case. For the rest of the proof assume $C\neq 0$,
which implies $\mathrm{rank}\left({C}\right)\geq 2$ by Lemma B.2. Let
$[\ell]=\\{0,1,\dots,\ell-1\\}$. We begin the same way as Theorem B.3,
decomposing $C=LBL^{\top}$ and creating a set of Paulis $q_{i}=X_{(i,j)}$,
$q_{j}=Z_{(i,j)}$ for all pairs $(i,j)$ such that $i<j$ and $B_{ij}=1$. Let
$\mathcal{I}\subseteq[\ell]$ be the indices of the rows in $B$ that are all
zero, and partition $\mathcal{I}$ into two sets $\mathcal{I}_{1}$ and
$\mathcal{I}_{2}$ such that
$|\mathcal{I}_{1}|=k-\mathrm{rank}\left({C}\right)$. For each element
$i\in\mathcal{I}_{1}$ we introduce a qubit (labeled by $i$) and set
$q_{i}=Z_{i}$. We now have $n=k-\mathrm{rank}\left({C}\right)/2$ qubits. We
form $Q\in\mathbb{F}_{2}^{\ell\times 2n}$ representing the Pauli list
$\\{q_{0},q_{1},\dots,q_{\ell-1}\\}$ as symplectic vectors and calculate
$P=LQ$ representing the desired list
$\mathcal{C}=\\{p_{0},p_{1},\dots,p_{\ell-1}\\}$. Note $Q\Lambda Q^{\top}=B$
by construction and so $P\Lambda P^{\top}=LBL^{\top}=C$. Also note that since
$\dim(\\{q_{0},q_{1},\dots,q_{\ell-1}\\})=\mathrm{rank}\left({Q}\right)=k$ by
construction and $L$ is invertible,
$\dim(\mathcal{C})=\mathrm{rank}\left({P}\right)=k$. This shows the existence
of $\mathcal{C}$ when $n=k-\mathrm{rank}\left({C}\right)/2$, and also covers
the case $n\geq k-\mathrm{rank}\left({C}\right)/2$, since one can act on the
extra qubits by identity.
It remains to prove the converse. This argument by contradiction is analogous
to that in Theorem B.3. If $P\in\mathbb{F}_{2}^{\ell\times 2n^{\prime}}$
represents the list of Paulis on $n^{\prime}<n$ qubits satisfying $P\Lambda
P^{\top}=C$ and
$\mathrm{rank}\left({P}\right)=k\geq\mathrm{rank}\left({C}\right)$, then
$Q=L^{-1}P$ represents the list of $n^{\prime}$-qubit Paulis such that
$Q\Lambda Q^{\top}=B$ and $\mathrm{rank}\left({Q}\right)=k$. However, the
structure of $B$ implies that there is an independent and fully commuting
subset of $\\{q_{0},q_{1},\dots,q_{\ell-1}\\}$ with size
$k-\mathrm{rank}\left({C}\right)/2$ which is again known to be impossible on
$n^{\prime}<k-\mathrm{rank}\left({C}\right)/2$ qubits [36]. ∎
### B.3 Properties of cyclically anticommuting lists of Paulis
We will work exclusively in this subsection with the group
$\mathcal{\hat{P}}_{n}$, to not have to keep track of the phase factors of the
Paulis. CALs of $\mathcal{P}_{n}$ were introduced in Definition 3.2, and CALs
of $\mathcal{\hat{P}}_{n}$ were defined in Section B.1. Here we present some
more properties of CALs of $\mathcal{\hat{P}}_{n}$, that we know of currently.
Translating these results to CALs of $\mathcal{P}_{n}$ is obvious. We recall
the commutation relations that a CAL satisfies: if
$\mathcal{C}=\\{\hat{p}_{0},\hat{p}_{1},\dots,\hat{p}_{k-1}\\}\subseteq\mathcal{\hat{P}}_{n}$
is a CAL, then for distinct $i$ and $j$
$\begin{cases}\\{\hat{p}_{i},\hat{p}_{j}\\}=0,&\text{if}\;\;i=j\pm 1\mod k\\\
[\hat{p}_{i},\hat{p}_{j}]=0,&\text{otherwise}.\end{cases}$ (70)
This subsection is organized as follows: Lemmas B.5 and B.6 point out the
relationship between CALs and anticommuting lists, and show that all elements
of a CAL must be unique if the CAL has length greater than four; Theorem B.8
through Corollary B.11 address the question of dimension and independence of a
CAL; and finally Lemma B.13 through Theorem B.15 look at whether it is
possible to create a CAL where the representative of each element of the CAL
is a Kronecker product of only $X$ and $Z$ Pauli matrices, and possibly the
$2\times 2$ identity matrix $I_{2}$.
Recall from the proof of Theorem 3.4 that on $n=1$ qubit,
$\mathcal{C}=\\{[X],[Z],[X],[Z]\\}$ is a CAL. Clearly this has repeating
elements. A natural question is whether it is possible to have CALs of longer
length with repeating elements. The next lemma completely addresses this
question.
###### Lemma B.5.
Let $\mathcal{C}\subseteq\mathcal{\hat{P}}_{n}$ be a CAL, for $n\geq 1$. If
$\mathcal{C}$ has repeating elements, then $\mathcal{C}$ has one of the
following forms
$\mathcal{C}=\begin{cases}\\{\hat{p},\hat{q},\hat{p},\hat{q}\\}&\;\;\;\;\text{
with }\\{\hat{p},\hat{q}\\}=0,\\\
\\{\hat{p},\hat{q},\hat{p},\hat{s}\\}&\;\;\;\;\text{ with
}\\{\hat{p},\hat{q}\\}=\\{\hat{p},\hat{s}\\}=[\hat{q},\hat{s}]=0,\\\
\\{\hat{p},\hat{q},\hat{s},\hat{q}\\}&\;\;\;\;\text{ with
}\\{\hat{p},\hat{q}\\}=\\{\hat{q},\hat{s}\\}=[\hat{p},\hat{s}]=0,\end{cases}$
(71)
where $\hat{p},\hat{q},\hat{s}\in\mathcal{\hat{P}}_{n}$ are distinct. The
last two cases can only occur if $n>1$.
* Proof.
First notice that if $\mathcal{C}$ has any of the forms in Eq. (71), then the
commutation relations imply that $\mathcal{C}$ is indeed a CAL. Now we claim
that if $\mathcal{C}$ has at least one repeating element, then
$|\mathcal{C}|=4$. We first prove this claim. If $|\mathcal{C}|\leq 1$, then
there can obviously be no repeating element. If $2\leq|\mathcal{C}|\leq 3$,
then $\mathcal{C}$ is an anticommuting list, and so again there can be no
repeating element. Now suppose that $|\mathcal{C}|\geq 5$, and
$\mathcal{C}=\\{\hat{p}_{0},\dots,\hat{p}_{k-1}\\}$ with repeating elements
$\hat{p}_{\ell}=\hat{p}_{\ell^{\prime}}$, for some $0\leq\ell<\ell^{\prime}\leq k-1$.
Notice first that $\ell^{\prime}\notin\\{\ell+1,\;(\ell-1)\mod k\\}$, because
that would imply that $\hat{p}_{\ell}$ anticommutes with itself. Then there
must exist an element $\hat{p}_{m}\in\mathcal{C}$,
$m\in\\{\ell+1,\;(\ell-1)\mod k\\}$, such that
$\\{\hat{p}_{\ell},\hat{p}_{m}\\}=0$, and
$[\hat{p}_{\ell^{\prime}},\hat{p}_{m}]=0$. The existence of $\hat{p}_{m}$
follows from the fact that $\mathcal{C}$ is a CAL, and $|\mathcal{C}|\geq 5$.
But this is a contradiction, since it implies that $\hat{p}_{m}$ both commutes
and anticommutes with the same element. Hence a CAL with a repeating element
satisfies $|\mathcal{C}|<5$, and so $|\mathcal{C}|=4$. This proves the claim.
Using the claim, we can suppose that
$\mathcal{C}=\\{\hat{p},\hat{q},\hat{r},\hat{s}\\}$, for
$\hat{p},\hat{q},\hat{r},\hat{s}\in\mathcal{\hat{P}}_{n}$, where these elements
need not all be distinct, and the list satisfies the CAL commutation relations in Eq. (70).
We already know that $\hat{p}\neq\hat{q}$, $\hat{q}\neq\hat{r}$,
$\hat{r}\neq\hat{s}$, and $\hat{s}\neq\hat{p}$, because otherwise Eq. (70) is
violated. Now there are two cases: (i) $\hat{p}=\hat{r}$, and (ii)
$\hat{p}\neq\hat{r}$. Assuming the first case $\hat{p}=\hat{r}$, we have two
subcases: either $\hat{q}=\hat{s}$ which gives that
$\mathcal{C}=\\{\hat{p},\hat{q},\hat{p},\hat{q}\\}$, or $\hat{q}\neq\hat{s}$
which gives $\mathcal{C}=\\{\hat{p},\hat{q},\hat{p},\hat{s}\\}$, and both are
thus of the forms in Eq. (71). Next assume the second case
$\hat{p}\neq\hat{r}$, and thus all $\hat{p},\hat{q}$, and $\hat{r}$ are
distinct. Since there is at least one repeating element in $\mathcal{C}$, this
means that $\hat{s}$ equals either $\hat{p},\hat{q},$ or $\hat{r}$. But we
already argued above that $\hat{p}\neq\hat{s}$ and $\hat{r}\neq\hat{s}$, and
so in fact $\hat{q}=\hat{s}$. This gives that
$\mathcal{C}=\\{\hat{p},\hat{q},\hat{r},\hat{q}\\}$, which is the third form
in Eq. (71). The proof is finished by noting that when $n=1$, only the first
case in Eq. (71) can happen. ∎
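To make the last two forms of Eq. (71) concrete, the following sketch (an illustrative aside; the symplectic $\mathbb{F}_{2}$ encoding of phase-free Paulis and the 2-qubit representatives are our own choices) exhibits an instance of the second and of the third form on $n=2$ qubits.

```python
import numpy as np

def commutes(p, q):
    """Symplectic (x|z) encoding: commute iff x_p.z_q + z_p.x_q = 0 mod 2."""
    n = len(p) // 2
    return (np.dot(p[:n], q[n:]) + np.dot(p[n:], q[:n])) % 2 == 0

def is_cal(C):
    """Cyclic neighbours anticommute, all other pairs commute (Eq. (70))."""
    k = len(C)
    return all(
        commutes(C[i], C[j]) != ((j - i) % k in (1, k - 1))
        for i in range(k) for j in range(i + 1, k)
    )

# 2-qubit symplectic vectors (x0 x1 | z0 z1):
XI = np.array([1, 0, 0, 0])   # X tensor I
ZI = np.array([0, 0, 1, 0])   # Z tensor I
ZX = np.array([0, 1, 1, 0])   # Z tensor X
XX = np.array([1, 1, 0, 0])   # X tensor X

# Second form {p, q, p, s}: {p,q} = {p,s} = 0 and [q,s] = 0.
print(is_cal([XI, ZI, XI, ZX]))  # True
# Third form {p, q, s, q}: {p,q} = {q,s} = 0 and [p,s] = 0.
print(is_cal([XI, ZI, XX, ZI]))  # True
```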
The next lemma concerns the maximum possible length of a CAL on $n$ qubits.
The arguments in the proof of parts (b) and (e) of the lemma already appear in
the proof of Theorem 3.4, when the length of the CAL is at least three. We
record the lemma here for completeness, as it covers the case when the length
is less than three. Also part (c) shows how to get a CAL from an anticommuting
list of $\mathcal{\hat{P}}_{n}$.
###### Lemma B.6.
Let
$\mathcal{C}=\\{\hat{p}_{0},\dots,\hat{p}_{\ell-1}\\}\subseteq\mathcal{\hat{P}}_{n}$
be a CAL, and $|\mathcal{C}|\geq 1$.
1. (a)
The list
$\mathcal{C}^{\prime}=\\{\hat{p}^{\prime}_{0},\dots,\hat{p}^{\prime}_{\ell-1}\\}$
defined as $\hat{p}^{\prime}_{i}=\hat{p}_{j}$ with $j=(i+r)\mod\ell$, is also
a CAL for all translations $r\in\mathbb{Z}$.
2. (b)
If $|\mathcal{C}|=1$, let $\mathcal{A}=\emptyset$, otherwise let
$\mathcal{A}=\\{\hat{q}_{0},\dots,\hat{q}_{\ell-2}\\}$ defined as
$\hat{q}_{i}=\prod_{j=0}^{i}\hat{p}_{j}$, for all $0\leq i\leq\ell-2$. Then
$\mathcal{A}$ is an anticommuting list. Thus an anticommuting list of size
$|\mathcal{C}|-1$ exists.
3. (c)
Let
$\mathcal{A}=\\{\hat{q}_{0},\dots,\hat{q}_{k-1}\\}\subseteq\mathcal{\hat{P}}_{n}$
be an anticommuting list, and $|\mathcal{A}|\geq 2$. If we define
$\mathcal{D}=\\{\hat{d}_{0},\dots,\hat{d}_{k}\\}$ by
$\hat{d}_{0}=\hat{q}_{0}$, $\hat{d}_{k}=\hat{q}_{k-1}$, and
$\hat{d}_{i+1}=\hat{q}_{i}\hat{q}_{i+1}$ for all $0\leq i\leq k-2$, then
$\mathcal{D}$ is a CAL.
4. (d)
$|\mathcal{C}|\leq 2n+2$. Moreover, there exists a CAL on $n$ qubits of every
length $0\leq\ell^{\prime}\leq 2n+2$.
5. (e)
The minimum number $n$ of qubits needed to create a CAL of length
$\ell^{\prime}$ is
$n=\begin{cases}1&\;\text{ if }\ell^{\prime}\leq 2,\\\
(\ell^{\prime}-1)/2&\;\text{ if }\ell^{\prime}>2\text{ and odd},\\\
(\ell^{\prime}-2)/2&\;\text{ if }\ell^{\prime}>2\text{ and even}.\end{cases}$
(72)
6. (f)
$\mathcal{C}=\\{[I]\\}$ if and only if $[I]\in\mathcal{C}$.
* Proof.
1. (a)
This is easily checked and follows from the commutation relations of the
elements of $\mathcal{C}$.
2. (b)
If $|\mathcal{C}|=1$, $\mathcal{A}=\emptyset$ is anticommuting by convention.
Otherwise take any two distinct elements
$\hat{q}_{i},\hat{q}_{j}\in\mathcal{A}$, and without loss of generality assume
that $i<j$, and thus $\hat{q}_{j}=\hat{q}_{i}\prod_{k=i+1}^{j}\hat{p}_{k}$.
From the commutation relations of CAL, we have that
$\\{\hat{p}_{i},\hat{p}_{i+1}\\}=0$, $[\hat{p}_{i},\hat{p}_{k^{\prime}}]=0$
for all $i+1<k^{\prime}\leq j$, and $[\hat{p}_{k},\hat{p}_{k^{\prime}}]=0$ for
all $0\leq k<i$, and $i+1\leq k^{\prime}\leq j$. Then
$\hat{q}_{i}\hat{q}_{j}=\hat{q}_{i}(\hat{q}_{i}\prod_{k=i+1}^{j}\hat{p}_{k})=-(\hat{q}_{i}\prod_{k=i+1}^{j}\hat{p}_{k})\hat{q}_{i}=-\hat{q}_{j}\hat{q}_{i}$.
Thus $\mathcal{A}$ is an anticommuting list of size $|\mathcal{C}|-1$.
3. (c)
As in (b), one easily checks that the elements of $\mathcal{D}$ satisfy the
CAL commutation relations of Eq. (70).
4. (d)
For the sake of contradiction, assume that $|\mathcal{C}|\geq 2n+3$. Then by
(b) there exists an anticommuting list $\mathcal{A}$ of size at least $2n+2$.
But it is a well-known fact that the largest size of an anticommuting list of
$\mathcal{\hat{P}}_{n}$ is $2n+1$ (see for example [36] and Proposition 9 in
[58]), which is a contradiction. For the last part, if $\ell^{\prime}\leq 2$,
then choose any anticommuting list of size $\ell^{\prime}$. If
$\ell^{\prime}>2$, choose an anticommuting list of size $\ell^{\prime}-1$, and
then apply (c).
5. (e)
The case $\ell^{\prime}\geq 3$ follows from (d) (and has been proved
previously). For the case $\ell^{\prime}\leq 2$, note that the set $\\{[X]\\}$
is a CAL of length 1, and the set $\\{[X],[Y]\\}$ is a CAL of length 2.
6. (f)
Suppose $[I]\in\mathcal{C}$ and $\mathcal{C}\neq\\{[I]\\}$. Then $\mathcal{C}$
must contain an element $\hat{p}$ that anticommutes with $[I]$ which is
impossible.
∎
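Since multiplication in $\mathcal{\hat{P}}_{n}$ corresponds to XOR of symplectic vectors over $\mathbb{F}_{2}$, the constructions in parts (b) and (c) can be sketched directly; the code below is an illustrative aside with an encoding and function names of our own, recovering the 1-qubit CAL $\\{[X],[Z],[X],[Z]\\}$ from the anticommuting list $\\{[X],[Y],[Z]\\}$ and vice versa.

```python
import numpy as np

def mul(*ps):
    """Phase-free Pauli product = XOR of symplectic (x|z) vectors."""
    out = np.zeros_like(ps[0])
    for p in ps:
        out = (out + p) % 2
    return out

def cal_to_anticommuting(C):
    """Lemma B.6(b): q_i = p_0 * ... * p_i for 0 <= i <= |C| - 2."""
    return [mul(*C[: i + 1]) for i in range(len(C) - 1)]

def anticommuting_to_cal(A):
    """Lemma B.6(c): d_0 = q_0, d_{i+1} = q_i q_{i+1}, d_k = q_{k-1}."""
    return [A[0]] + [mul(A[i], A[i + 1]) for i in range(len(A) - 1)] + [A[-1]]

# 1-qubit encodings X = (1|0), Y = (1|1), Z = (0|1).
X, Y, Z = np.array([1, 0]), np.array([1, 1]), np.array([0, 1])

# (c): {X, Y, Z} -> {X, XY, YZ, Z} = {X, Z, X, Z} up to phase.
D = anticommuting_to_cal([X, Y, Z])
print([list(d) for d in D])  # [[1, 0], [0, 1], [1, 0], [0, 1]]

# (b): the CAL {X, Z, X, Z} yields the anticommuting list {X, Y, Z}.
A = cal_to_anticommuting([X, Z, X, Z])
print([list(q) for q in A])  # [[1, 0], [1, 1], [0, 1]]
```

Note that the two constructions here shrink or grow the list by one element, matching the size bookkeeping in parts (b) and (c).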
It is clear from the above lemma that extremal CALs of length $\ell\geq 3$
correspond precisely to the last two cases of Eq. (72), while extremal CALs of
length $1\leq\ell\leq 2$ correspond to the first case. We now turn to the
question of the dimension of a CAL. For the next few proofs, it will be useful
to have the following result, which is proved in Section 4 of [36]. It uses the
notion of commutativity maps in parts (e)-(g), that we defined in Section B.1.
###### Lemma B.7.
Let
$\mathcal{A}=\\{\hat{p}_{0},\ldots,\hat{p}_{\ell-1}\\}\subseteq\mathcal{\hat{P}}_{n}$
be an anticommuting list, and let $\text{Com}_{\mathcal{A}}$ denote the
commutativity map with respect to $\mathcal{A}$. Then
1. (a)
$\prod\mathcal{A}\neq[I]$ implies $\mathcal{A}$ is an independent generator of
the subgroup $\langle\mathcal{A}\rangle$ of order $2^{\ell}$.
2. (b)
For any non-empty list $\mathcal{H}\subset\mathcal{A}$, we have
$\prod\mathcal{H}\neq[I]$, so $\mathcal{H}$ is independent.
3. (c)
$\prod\mathcal{A}=[I]$ implies $\ell$ is odd.
4. (d)
Suppose $\mathcal{A}$ is non-empty, independent and $\ell$ is even. Then for
every $v\in\mathbb{F}_{2}^{\ell}$, there exists a unique element
$\hat{p}\in\langle\mathcal{A}\rangle$ such that
$\text{Com}_{\mathcal{A}}(\hat{p})=v$.
5. (e)
Suppose $\mathcal{A}$ is independent, $\mathcal{A}\neq\\{[I]\\}$, and $\ell$
is odd. Then for every $v\in\mathbb{F}_{2}^{\ell}$ containing an odd number of
zeros, there exist exactly two elements
$\hat{p},\hat{q}\in\langle\mathcal{A}\rangle$ such that
$\text{Com}_{\mathcal{A}}(\hat{p})=\text{Com}_{\mathcal{A}}(\hat{q})=v$.
6. (f)
Suppose $\mathcal{A}$ is independent, $\mathcal{A}\neq\\{[I]\\}$, and $\ell$
is odd. Then each coset $\mathcal{H}$ of $\langle\mathcal{A}\rangle$ in
$\mathcal{\hat{P}}_{n}$ has the property that either for all
$\hat{q}\in\mathcal{H}$, $\text{Com}_{\mathcal{A}}(\hat{q})$ has an odd number
of zeros, or for all $\hat{q}\in\mathcal{H}$,
$\text{Com}_{\mathcal{A}}(\hat{q})$ has an even number of zeros. Moreover,
the numbers of cosets of the two types are equal.
We now answer the question of the dimension of a CAL. The next two theorems
cover the cases of odd and even CAL length, respectively, and the corollary
after them specializes these results to extremal CALs. It is important to
remember for the next proofs that, by Lemma B.6(f), whenever
$[I]\not\in\mathcal{C}\subseteq\mathcal{\hat{P}}_{n}$, the dimension of a CAL
$\mathcal{C}$ is the minimum number of elements of $\mathcal{C}$ that generate
$\langle\mathcal{C}\rangle$.
###### Theorem B.8.
Let $\mathcal{C}\subseteq\mathcal{\hat{P}}_{n}$ be a CAL, and suppose
$|\mathcal{C}|$ is odd.
1. (a)
The dimension of $\mathcal{C}$ satisfies
$\dim(\mathcal{C})=\begin{cases}|\mathcal{C}|\;\;&\text{ if
}\prod\mathcal{C}\neq[I],\\\ |\mathcal{C}|-1\;\;&\text{ if
}\prod\mathcal{C}=[I].\end{cases}$ (73)
2. (b)
If $|\mathcal{C}|\geq 3$, it holds that $\mathcal{C}\setminus\\{\hat{p}\\}$ is
independent for every $\hat{p}\in\mathcal{C}$.
3. (c)
For all non-empty multisets $\mathcal{H}\subset\mathcal{C}$, we have
$\prod\mathcal{H}\neq[I]$.
* Proof.
In this proof let $\mathcal{C}=\\{\hat{p}_{0},\dots,\hat{p}_{2k}\\}$ where
$k\geq 0$, so $|\mathcal{C}|=2k+1$. The case $\mathcal{C}=\\{[I]\\}$ satisfies
part (a), as $\dim(\\{[I]\\})=0$, and is obvious for parts (b)-(c). So
for the rest of the proof assume that $\mathcal{C}\neq\\{[I]\\}$.
1. (a)
Since $\dim(\mathcal{C})\leq|\mathcal{C}|$, it is easily seen that proving Eq.
(73) is equivalent to proving the following two conditions:
1. (i)
$\dim(\mathcal{C})\geq|\mathcal{C}|-1$,
2. (ii)
$\dim(\mathcal{C})=|\mathcal{C}|-1$ if and only if $\prod\mathcal{C}=[I]$.
If $|\mathcal{C}|=1$, then $\dim(\mathcal{C})=1$, and
$\prod\mathcal{C}\neq[I]$, and so (i) and (ii) are satisfied. Now assume
$k\geq 1$. Construct the anticommuting list
$\mathcal{A}=\\{\hat{q}_{0},\dots,\hat{q}_{2k-1}\\}$ similarly as in Lemma
B.6(b). Notice that because $|\mathcal{A}|$ is even, it follows from Lemma
B.7(c) that $\prod\mathcal{A}\neq[I]$, and so $\mathcal{A}$ is an independent
generator of $\langle\mathcal{A}\rangle$ by Lemma B.7(a). We also have
$\langle\mathcal{C}\setminus\\{\hat{p}_{2k}\\}\rangle=\langle\mathcal{A}\rangle$,
which then implies that $\mathcal{C}\setminus\\{\hat{p}_{2k}\\}$ is
independent, because $|\mathcal{C}\setminus\\{\hat{p}_{2k}\\}|=2k$ and
$|\langle\mathcal{C}\setminus\\{\hat{p}_{2k}\\}\rangle|=|\langle\mathcal{A}\rangle|=2^{2k}$.
It follows that $\dim(\mathcal{C})\geq|\mathcal{C}|-1$, finishing the proof of
(i).
For the proof of (ii), first assume that $\prod\mathcal{C}=[I]$. But this
implies that $\dim(\mathcal{C})<|\mathcal{C}|$, and hence combining with (i)
this gives us that $\dim(\mathcal{C})=|\mathcal{C}|-1$. To prove the converse,
for contradiction assume that $\prod\mathcal{C}\neq[I]$, which is equivalent
to the assumption that
$\hat{p}_{2k}\neq\prod(\mathcal{C}\setminus\\{\hat{p}_{2k}\\})=\hat{q}_{2k-1}$.
We have already proved in the previous paragraph that $\mathcal{A}$ is an
independent list, and it can also be checked that
$\\{\hat{p}_{2k},\hat{q}_{i}\\}=0$, for all $0\leq i\leq 2k-2$, and
$[\hat{p}_{2k},\hat{q}_{2k-1}]=0$. Since $|\mathcal{A}|$ is even, it follows
from Lemma B.7(d) that $\hat{q}_{2k-1}$ is the unique element in
$\langle\mathcal{A}\rangle$ generating the commutativity pattern
$\text{Com}_{\mathcal{A}}(\hat{p}_{2k})$ with respect to $\mathcal{A}$. But
$\hat{p}_{2k}\neq\hat{q}_{2k-1}$ by assumption, implying that
$\hat{p}_{2k}\notin\langle\mathcal{A}\rangle=\langle\mathcal{C}\setminus\\{\hat{p}_{2k}\\}\rangle$.
So $\mathcal{C}$ is an independent list, because we also proved in the
previous paragraph that $\mathcal{C}\setminus\\{\hat{p}_{2k}\\}$ is
independent, and it follows that $\dim(\mathcal{C})=|\mathcal{C}|$, which is a
contradiction.
2. (b)
We have already proved in (a) that $\mathcal{C}\setminus\\{\hat{p}_{2k}\\}$ is
independent. The fact that $\mathcal{C}\setminus\\{\hat{p}_{i}\\}$ is also
independent for all $\hat{p}_{i}\in\mathcal{C}$ then follows by translating
the elements of $\mathcal{C}$ to get a relabeling of the CAL
$\mathcal{C}=\\{\hat{p}^{\prime}_{0},\dots,\hat{p}^{\prime}_{2k}\\}$ by Lemma
B.6(a), where $\hat{p}^{\prime}_{j}=\hat{p}_{i+j-2k}$ and indices are
evaluated modulo $2k+1$.
3. (c)
If $|\mathcal{C}|=1$, there is nothing to prove. So let
$\mathcal{H}\subset\mathcal{C}$ be a non-empty multiset, and
$|\mathcal{C}|\geq 3$. Then there exists $\hat{p}\in\mathcal{C}$ with
$\hat{p}\notin\mathcal{H}$. So by (b),
$\mathcal{C}\setminus\\{\hat{p}\\}$ is an independent multiset. Since
$\mathcal{H}\subseteq\mathcal{C}\setminus\\{\hat{p}\\}$,
$\prod\mathcal{H}\neq[I]$.
∎
###### Theorem B.9.
Let $\mathcal{C}=\\{\hat{p}_{\ell}:0\leq\ell\leq
2k-1\\}\subseteq\mathcal{\hat{P}}_{n}$ be a non-empty CAL, with
$|\mathcal{C}|$ even.
1. (a)
Define the lists $\mathcal{C}_{\text{odd}}=\\{\hat{p}_{2\ell+1}:0\leq\ell\leq
k-1\\}$, and $\mathcal{C}_{\text{even}}=\\{\hat{p}_{2\ell}:0\leq\ell\leq
k-1\\}$. The dimension of $\mathcal{C}$ satisfies
$\dim(\mathcal{C})=\begin{cases}|\mathcal{C}|\;\;&\text{ if
}\prod\mathcal{C}_{\text{odd}}\neq[I]\neq\prod\mathcal{C}_{\text{even}},\text{
and }\prod\mathcal{C}_{\text{odd}}\neq\prod\mathcal{C}_{\text{even}},\\\
|\mathcal{C}|-1\;\;&\text{ if
}\prod\mathcal{C}_{\text{odd}}\neq[I]\neq\prod\mathcal{C}_{\text{even}},\text{
and }\prod\mathcal{C}_{\text{odd}}=\prod\mathcal{C}_{\text{even}},\\\
|\mathcal{C}|-1\;\;&\text{ if either
}\prod\mathcal{C}_{\text{odd}}=[I]\neq\prod\mathcal{C}_{\text{even}},\text{ or
}\prod\mathcal{C}_{\text{odd}}\neq[I]=\prod\mathcal{C}_{\text{even}},\\\
|\mathcal{C}|-2\;\;&\text{ if
}\prod\mathcal{C}_{\text{odd}}=\prod\mathcal{C}_{\text{even}}=[I].\end{cases}$
(74)
In particular, if $|\mathcal{C}|=2$, then $\dim(\mathcal{C})=2$, while if
$|\mathcal{C}|=2n\geq 4$, then $\dim(\mathcal{C})<2n$.
2. (b)
For all non-empty multisets $\mathcal{H}\subset\mathcal{C}$ such that
$\mathcal{H}\neq\mathcal{C}_{\text{odd}}$ and
$\mathcal{H}\neq\mathcal{C}_{\text{even}}$, we have $\prod\mathcal{H}\neq[I]$.
3. (c)
If $|\mathcal{C}|\geq 4$, it holds that
$\mathcal{C}\setminus\\{\hat{p},\hat{q}\\}$ is an independent multiset for all
$\hat{p}\in\mathcal{C}_{\text{odd}}$, and
$\hat{q}\in\mathcal{C}_{\text{even}}$.
* Proof.
1. (a)
If $|\mathcal{C}|=2$, then $\mathcal{C}$ is an anticommuting list of two
elements, and hence by Lemma B.7(a),(d), we have that $\mathcal{C}$ is
independent. Thus $\dim(\mathcal{C})=2$,
$\prod\mathcal{C}_{\text{odd}}\neq[I]\neq\prod\mathcal{C}_{\text{even}}$, and
also $\prod\mathcal{C}_{\text{odd}}\neq\prod\mathcal{C}_{\text{even}}$, which
proves the statement for $|\mathcal{C}|=2$. For the rest of the proof assume
$|\mathcal{C}|\geq 4$. Construct the anticommuting list
$\mathcal{A}=\\{\hat{q}_{0},\dots,\hat{q}_{2k-2}\\}$ similarly as in Lemma
B.6(b). Then $\mathcal{A}\setminus\\{\hat{q}_{2k-2}\\}$ is independent by
Lemma B.7(c), and since it is easily checked that
$\langle\mathcal{A}\setminus\\{\hat{q}_{2k-2}\\}\rangle=\langle\mathcal{C}\setminus\\{\hat{p}_{2k-2},\hat{p}_{2k-1}\\}\rangle$,
we have that $\dim(\mathcal{C})\geq|\mathcal{C}|-2$.
First suppose that
$\prod\mathcal{C}_{\text{odd}}=\prod\mathcal{C}_{\text{even}}=[I]$. This
implies that $\dim(\mathcal{C}_{\text{odd}})\leq|\mathcal{C}_{\text{odd}}|-1$,
and $\dim(\mathcal{C}_{\text{even}})\leq|\mathcal{C}_{\text{even}}|-1$, and so
$\dim(\mathcal{C})\leq|\mathcal{C}|-2$. This proves that
$\dim(\mathcal{C})=|\mathcal{C}|-2$ in this case.
Next suppose that either
$\prod\mathcal{C}_{\text{odd}}=[I]\neq\prod\mathcal{C}_{\text{even}},\text{ or
}\prod\mathcal{C}_{\text{odd}}\neq[I]=\prod\mathcal{C}_{\text{even}}$. In the
former case we have that
$\dim(\mathcal{C}_{\text{odd}})\leq|\mathcal{C}_{\text{odd}}|-1$, while in the
latter case we have that
$\dim(\mathcal{C}_{\text{even}})\leq|\mathcal{C}_{\text{even}}|-1$, and so in
both cases $\dim(\mathcal{C})\leq|\mathcal{C}|-1$. Let us now assume without
loss of generality that
$\prod\mathcal{C}_{\text{odd}}=[I]\neq\prod\mathcal{C}_{\text{even}}$, because
the other case reduces to this one by translating $\mathcal{C}$ by one
element, and relabeling the elements as in the proof of Theorem B.8(b). Then a
simple computation shows that
$\prod\mathcal{A}=\prod\mathcal{C}_{\text{even}}$, and so
$\prod\mathcal{A}\neq[I]$ by assumption, which implies that $\mathcal{A}$ is
an independent list by Lemma B.7(a). But as
$\langle\mathcal{A}\rangle=\langle\mathcal{C}\setminus\\{\hat{p}_{2k-1}\\}\rangle$,
this implies that $\dim(\mathcal{C})\geq|\mathcal{C}|-1$, and so in fact
$\dim(\mathcal{C})=|\mathcal{C}|-1$.
Now assume that
$\prod\mathcal{C}_{\text{odd}}\neq[I]\neq\prod\mathcal{C}_{\text{even}}$, and
$\prod\mathcal{C}_{\text{odd}}=\prod\mathcal{C}_{\text{even}}$. Because
$\prod\mathcal{C}_{\text{even}}\neq[I]$, the reasoning in the previous
paragraph gives us that $\dim(\mathcal{C})\geq|\mathcal{C}|-1$ because
$\mathcal{A}$ is independent, while we also have that
$\prod\mathcal{C}=(\prod\mathcal{C}_{\text{odd}})(\prod\mathcal{C}_{\text{even}})=[I]$
and thus $\dim(\mathcal{C})\leq|\mathcal{C}|-1$, which means that
$\dim(\mathcal{C})=|\mathcal{C}|-1$.
Finally assume that
$\prod\mathcal{C}_{\text{odd}}\neq[I]\neq\prod\mathcal{C}_{\text{even}}$, and
$\prod\mathcal{C}_{\text{odd}}\neq\prod\mathcal{C}_{\text{even}}$. As in the
last paragraph, we still have that $\mathcal{A}$ is an independent
anticommuting list. One can now verify that $\hat{p}_{2k-1}$, $\hat{q}_{2k-2}$
and $\prod(\mathcal{A}\setminus\hat{q}_{2k-2})$ generate the same
commutativity pattern with respect to $\mathcal{A}$, i.e. if $\hat{p}$ is
either $\hat{p}_{2k-1}$, $\hat{q}_{2k-2}$, or
$\prod(\mathcal{A}\setminus\hat{q}_{2k-2})$, then
$[\hat{p},\hat{q}_{2k-2}]=0$, and $\\{\hat{p},\hat{q}_{i}\\}=0$, for all
$0\leq i\leq 2k-3$. Now notice that
$\hat{p}_{2k-1}\hat{q}_{2k-2}=\prod\mathcal{C}\neq[I]$ as
$\prod\mathcal{C}_{\text{odd}}\neq\prod\mathcal{C}_{\text{even}}$, and also
$\hat{p}_{2k-1}\prod(\mathcal{A}\setminus\hat{q}_{2k-2})=\prod\mathcal{C}_{\text{odd}}\neq[I]$
by assumption, and so an application of Lemma B.7(e) implies that
$\hat{p}_{2k-1}\notin\langle\mathcal{A}\rangle$. This implies that in this
case $\dim(\mathcal{C})=|\mathcal{C}|$.
It remains to show that if $|\mathcal{C}|=2n\geq 4$, then
$\dim(\mathcal{C})<2n$. From what has already been proved above, it suffices
to show that the conditions
$\prod\mathcal{C}_{\text{odd}}\neq[I]\neq\prod\mathcal{C}_{\text{even}}$, and
$\prod\mathcal{C}_{\text{odd}}\neq\prod\mathcal{C}_{\text{even}}$ cannot both
be true. However, for contradiction assume that they in fact both hold. Then
the discussion in the last paragraph applies, and so
$\hat{p}_{2n-1}\notin\langle\mathcal{A}\rangle$, and $\dim(\mathcal{A})=2n-1$.
This also implies that there are exactly two cosets of
$\langle\mathcal{A}\rangle$ in $\mathcal{\hat{P}}_{n}$, and so
$\hat{p}_{2n-1}$ must be in the coset different from
$\langle\mathcal{A}\rangle$. But this is a contradiction by an application of
Lemma B.7(f), which rules out the existence of any element in the other coset
generating the same commutativity pattern as $\hat{p}_{2n-1}$ with respect to
the elements of $\mathcal{A}$.
2. (b)
The statement is vacuous for $|\mathcal{C}|=2$ as there are no non-empty
proper multisets satisfying the conditions; so assume $|\mathcal{C}|\geq 4$.
We prove this by contradiction: suppose
$\mathcal{H}\subset\mathcal{C}$ is a non-empty multiset,
$\mathcal{C}_{\text{even}}\neq\mathcal{H}\neq\mathcal{C}_{\text{odd}}$, and
$\prod\mathcal{H}=[I]$. This means $\prod\mathcal{H}$ commutes with all
elements of $\mathcal{C}$. Now there are two cases: (i)
$\hat{p}_{0}\in\mathcal{H}$, and (ii) $\hat{p}_{0}\not\in\mathcal{H}$. Since
$\prod\mathcal{H}$ commutes with $\hat{p}_{1}$, it must be that in the first
case $\hat{p}_{2}\in\mathcal{H}$, and in the second case
$\hat{p}_{2}\not\in\mathcal{H}$. Continuing this argument iteratively for
$\hat{p}_{3},\dots,\hat{p}_{2k-3}$ then shows that (i) and (ii) are equivalent
to the conditions $\mathcal{C}_{\text{even}}\subseteq\mathcal{H}$ and
$\mathcal{C}_{\text{even}}\cap\mathcal{H}=\emptyset$ respectively. Similarly,
since $\prod\mathcal{H}$ commutes with all elements of
$\mathcal{C}_{\text{even}}$, one again concludes that exactly one of the two
cases is true: either $\mathcal{C}_{\text{odd}}\subseteq\mathcal{H}$ or
$\mathcal{C}_{\text{odd}}\cap\mathcal{H}=\emptyset$. Combining we have four
cases: (i)
$\mathcal{C}_{\text{odd}}\cup\mathcal{C}_{\text{even}}\subseteq\mathcal{H}$,
(ii)
$(\mathcal{C}_{\text{odd}}\cup\mathcal{C}_{\text{even}})\cap\mathcal{H}=\emptyset$,
(iii) $\mathcal{H}=\mathcal{C}_{\text{odd}}$, and (iv)
$\mathcal{H}=\mathcal{C}_{\text{even}}$, all of which are impossible by
assumption.
3. (c)
For contradiction assume that $\mathcal{C}\setminus\\{\hat{p},\hat{q}\\}$ is
not independent. Then there exists a multiset
$\mathcal{H}\subseteq\mathcal{C}\setminus\\{\hat{p},\hat{q}\\}$ such that
$\prod\mathcal{H}=[I]$. But this is a contradiction by (b).
∎
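As an aside, for a CAL not containing $[I]$ the dimension is simply the $\mathbb{F}_{2}$-rank of the symplectic vectors of its elements, so Eq. (74) can be spot-checked numerically; the sketch below (the encoding and the helper `gf2_rank` are ours) verifies two 1-qubit cases.

```python
import numpy as np

def gf2_rank(vectors):
    """Rank of a list of F_2 row vectors via Gaussian elimination.
    For a CAL C with [I] not in C, dim(C) equals this rank of the
    symplectic (x|z) vectors of its elements."""
    m = np.array(vectors, dtype=np.uint8) % 2
    rank = 0
    for c in range(m.shape[1]):
        pivots = [r for r in range(rank, m.shape[0]) if m[r, c]]
        if not pivots:
            continue
        m[[rank, pivots[0]]] = m[[pivots[0], rank]]   # swap pivot row up
        for r in range(m.shape[0]):
            if r != rank and m[r, c]:
                m[r] ^= m[rank]                        # clear column c
        rank += 1
    return rank

# 1-qubit symplectic vectors (x|z):
X, Y, Z = [1, 0], [1, 1], [0, 1]

# C = {X, Z, X, Z}: C_odd = {Z, Z} and C_even = {X, X} both multiply to [I],
# so the last case of Eq. (74) predicts dim(C) = |C| - 2 = 2.
print(gf2_rank([X, Z, X, Z]))  # 2

# C = {X, Y}: the first case of Eq. (74), so dim(C) = |C| = 2.
print(gf2_rank([X, Y]))  # 2
```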
###### Corollary B.10.
Let $\mathcal{C}\subseteq\mathcal{\hat{P}}_{n}$ be an extremal CAL of length
$|\mathcal{C}|\geq 3$. Then the dimension of $\mathcal{C}$ satisfies
$\dim(\mathcal{C})=\begin{cases}|\mathcal{C}|-1\;\;&\text{ if
}|\mathcal{C}|\text{ odd},\\\ |\mathcal{C}|-2\;\;&\text{ if
}|\mathcal{C}|\text{ even}.\end{cases}$ (75)
Thus $\mathcal{C}$ generates the group $\mathcal{\hat{P}}_{n}$.
* Proof.
First assume that $|\mathcal{C}|$ is odd. Then from Theorem B.8 we have
$\dim(\mathcal{C})\geq|\mathcal{C}|-1$. But because $\mathcal{C}$ is an
extremal CAL we also have that $\dim(\mathcal{C})\leq|\mathcal{C}|-1$, because
the number of qubits used is $(|\mathcal{C}|-1)/2$. Now assume that
$|\mathcal{C}|$ is even. In this case Theorem B.9 gives that
$\dim(\mathcal{C})\geq|\mathcal{C}|-2$, while again as $\mathcal{C}$ is an
extremal CAL, it must also be true that
$\dim(\mathcal{C})\leq|\mathcal{C}|-2$. In both cases $\dim(\mathcal{C})=2n$,
and so $\langle\mathcal{C}\rangle=\mathcal{\hat{P}}_{n}$. This proves the
corollary. ∎
As a consequence of the previous lemmas, we get the next corollary that states
which multisets of an extremal CAL of length at least three, can possibly
multiply to the identity.
###### Corollary B.11.
Let $\mathcal{C}\subseteq\mathcal{\hat{P}}_{n}$ be an extremal CAL of length
$|\mathcal{C}|\geq 3$. Then the following statements are true.
1. (a)
Let $|\mathcal{C}|$ be odd. If $\mathcal{H}\subseteq\mathcal{C}$ is a non-
empty multiset, then $\prod\mathcal{H}=[I]$ if and only if
$\mathcal{H}=\mathcal{C}$.
2. (b)
Let $|\mathcal{C}|$ be even. If $\mathcal{H}\subseteq\mathcal{C}$ is a non-
empty multiset, then $\prod\mathcal{H}=[I]$ if and only if
$\mathcal{H}=\mathcal{C}_{\text{odd}}$ or
$\mathcal{H}=\mathcal{C}_{\text{even}}$, where $\mathcal{C}_{\text{odd}}$ and
$\mathcal{C}_{\text{even}}$ are defined as in Theorem B.9.
* Proof.
1. (a)
As $\mathcal{C}$ is an extremal CAL, it is not independent by Corollary B.10.
The result then follows directly from Theorem B.8(a),(c).
2. (b)
As $\mathcal{C}$ is an extremal CAL, $\dim(\mathcal{C})=|\mathcal{C}|-2$ by
Corollary B.10. The result follows by Theorem B.9(a),(b).
∎
In the final part of this subsection, we turn to the question of whether it is
possible to create CALs of $\mathcal{\hat{P}}_{n}$, such that the
representatives of each element in the CAL are Kronecker products of $I_{2}$
and the Pauli matrices $X$ and $Z$ only, and not of $Y$. To motivate this
question, note that on $n=1$ qubit, one has the CALs of lengths $\ell=1$, $2$,
and $4$ given by $\\{[X]\\}$, $\\{[X],[Z]\\}$, and $\\{[X],[Z],[X],[Z]\\}$
respectively, satisfying this property. But there is no CAL of length $\ell=3$
with this property. Also notice that if the CAL has length $\ell\geq 2$, then
it is not possible that all its element representatives are Kronecker products
of $I_{2}$ and just one other Pauli matrix, for example $X$. This is because
all elements of the CAL constructed this way will commute with one another. So
we definitely need to use at least two of the Pauli matrices, for example $X$
and $Z$.
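The 1-qubit observations above are small enough to confirm by exhaustive search; the following sketch (an illustrative aside with our own symplectic encoding, where $Y$-free on one qubit means each element is $I$, $X$ or $Z$) verifies that no $Y$-free CAL of length three exists on one qubit, while $Y$-free CALs of lengths $1$, $2$ and $4$ do.

```python
from itertools import product
import numpy as np

def commutes(p, q):
    """Symplectic (x|z) encoding: commute iff x_p.z_q + z_p.x_q = 0 mod 2."""
    n = len(p) // 2
    return (np.dot(p[:n], q[n:]) + np.dot(p[n:], q[:n])) % 2 == 0

def is_cal(C):
    """Cyclic neighbours anticommute, all other pairs commute (Eq. (70))."""
    k = len(C)
    return all(
        commutes(C[i], C[j]) != ((j - i) % k in (1, k - 1))
        for i in range(k) for j in range(i + 1, k)
    )

# Y-free phase-free Paulis on 1 qubit: I, X, Z as symplectic vectors.
y_free = [np.array(v) for v in ([0, 0], [1, 0], [0, 1])]

# Brute force: no Y-free triple forms a CAL of length 3 on one qubit ...
found = [c for c in product(y_free, repeat=3) if is_cal(list(c))]
print(len(found))  # 0

# ... while Y-free CALs of lengths 1, 2 and 4 do exist.
X, Z = np.array([1, 0]), np.array([0, 1])
print(is_cal([X]), is_cal([X, Z]), is_cal([X, Z, X, Z]))  # True True True
```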
For what follows, we need a couple of simple lemmas. The next result is well-
known (see for instance Chapter 5 in [59]).
###### Lemma B.12.
For any $v\in\mathbb{F}_{2}^{p}$, let $\text{Ham}(v)=\sum_{j}v_{j}$ denote its
Hamming weight. Then if $u,v\in\mathbb{F}_{2}^{p}$ have even Hamming weight,
so does $w=u+v$ (where the addition is over $\mathbb{F}_{2}$). Thus any finite
sum of even-Hamming-weight vectors also has even Hamming weight.
* Proof.
Let $J_{u}\subseteq\\{0,\dots,p-1\\}$ be the set of indices such that
$u_{j}=1$ if and only if $j\in J_{u}$, and similarly let
$J_{v}\subseteq\\{0,\dots,p-1\\}$ be the set of indices such that $v_{j}=1$ if
and only if $j\in J_{v}$. Since addition is over $\mathbb{F}_{2}$, this
implies that $w_{j}=1$ if and only if $j\in J_{w}=(J_{u}\cup
J_{v})\setminus(J_{u}\cap J_{v})$. But $|J_{w}|=|J_{u}|+|J_{v}|-2|J_{u}\cap
J_{v}|$, and we also have that $|J_{u}|=\text{Ham}(u)$, and
$|J_{v}|=\text{Ham}(v)$, which are even by assumption. So $|J_{w}|$ is even,
and since $|J_{w}|=\text{Ham}(w)$, this proves the first part of the lemma.
The last part now follows by induction. ∎
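As a quick illustration of Lemma B.12 (an aside; the code is ours), the following exhaustively checks the parity statement on $\mathbb{F}_{2}^{4}$.

```python
from itertools import product

def ham(v):
    """Hamming weight of a binary vector."""
    return sum(v)

def add(u, v):
    """Componentwise addition over F_2."""
    return [(a + b) % 2 for a, b in zip(u, v)]

# Exhaustive check on F_2^4: whenever Ham(u) and Ham(v) are even,
# so is Ham(u + v).
vecs = list(product((0, 1), repeat=4))
even = [v for v in vecs if ham(v) % 2 == 0]
print(all(ham(add(u, v)) % 2 == 0 for u in even for v in even))  # True
```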
We also give a construction that allows us to take a CAL and create a CAL of
smaller length on the same number of qubits, or a longer CAL on more qubits,
as given in the lemma below. The first part of this construction will be used
later.
###### Lemma B.13.
Let
$\mathcal{C}=\\{\hat{p}_{0},\dots,\hat{p}_{k-1}\\}\subseteq\mathcal{\hat{P}}_{n}$
be a CAL of length $k\geq 2$, and define the lists
$\mathcal{C}_{\text{odd}}=\\{\hat{p}_{2\ell+1}:0\leq\ell\leq\lfloor
k/2\rfloor-1\\}$ and
$\mathcal{C}_{\text{even}}=\\{\hat{p}_{2\ell}:0\leq\ell\leq\lceil
k/2\rceil-1\\}$. Consider the following constructions:
1. (i)
CAL expansion — If $k$ is odd, let $\hat{r}=[Z]$, $\hat{s}=[X]$, and if $k$ is
even, let $\hat{r}=[X]$, $\hat{s}=[Z]$. Define the list
$\mathcal{C}^{\prime}=\\{\hat{p}^{\prime}_{0},\dots,\hat{p}^{\prime}_{k+1}\\}\subseteq\mathcal{\hat{P}}_{n+1}$
by $\hat{p}^{\prime}_{\ell}=\hat{p}_{\ell}\otimes[I_{2}]$ for all
$0\leq\ell\leq k-3$, $\hat{p}^{\prime}_{k-2}=\hat{p}_{k-2}\otimes\hat{r}$,
$\hat{p}^{\prime}_{k-1}=[I]\otimes\hat{s}$,
$\hat{p}^{\prime}_{k}=[I]\otimes\hat{r}$, and
$\hat{p}^{\prime}_{k+1}=\hat{p}_{k-1}\otimes\hat{s}$, where
$[I]\in\mathcal{\hat{P}}_{n}$ is the identity element of
$\mathcal{\hat{P}}_{n}$. Also define the lists
$\mathcal{C}^{\prime}_{\text{odd}}=\\{\hat{p}^{\prime}_{2\ell+1}:0\leq\ell\leq\lfloor
k/2\rfloor\\}$ and
$\mathcal{C}^{\prime}_{\text{even}}=\\{\hat{p}^{\prime}_{2\ell}:0\leq\ell\leq\lceil
k/2\rceil\\}$.
2. (ii)
CAL contraction — Let $\alpha\geq 0$ be any integer such that
$\beta:=k-2\alpha\geq 2$. Define the list
$\tilde{\mathcal{C}}=\\{\hat{q}^{\prime}_{0},\dots,\hat{q}^{\prime}_{\beta-1}\\}\subseteq\mathcal{\hat{P}}_{n}$
by $\hat{q}^{\prime}_{\ell}=\hat{p}_{\ell}$ for all $0\leq\ell\leq\beta-3$,
$\hat{q}^{\prime}_{\beta-2}=\prod_{\ell=0}^{\alpha}\hat{p}_{\beta-2+2\ell}$,
and
$\hat{q}^{\prime}_{\beta-1}=\prod_{\ell=0}^{\alpha}\hat{p}_{\beta-1+2\ell}$,
and also define the lists
$\tilde{\mathcal{C}}_{\text{odd}}=\\{\hat{q}^{\prime}_{2\ell+1}:0\leq\ell\leq\lfloor\beta/2\rfloor-1\\}$
and
$\tilde{\mathcal{C}}_{\text{even}}=\\{\hat{q}^{\prime}_{2\ell}:0\leq\ell\leq\lceil\beta/2\rceil-1\\}$.
Then we have the following:
1. (a)
$\prod\mathcal{C}^{\prime}_{\text{odd}}=\prod\mathcal{C}_{\text{odd}}\otimes[I_{2}]$,
$\prod\mathcal{C}^{\prime}_{\text{even}}=\prod\mathcal{C}_{\text{even}}\otimes[I_{2}]$,
and $\prod\mathcal{C}^{\prime}=\prod\mathcal{C}\otimes[I_{2}]$. If all
elements of $\mathcal{C}_{\text{odd}}$ (resp. $\mathcal{C}_{\text{even}}$) are
of $Z$-type (resp. $X$-type), then so are all elements of
$\mathcal{C}^{\prime}_{\text{odd}}$ (resp.
$\mathcal{C}^{\prime}_{\text{even}}$).
2. (b)
If $k\geq 3$, then $\mathcal{C}^{\prime}$ is a CAL of length $k+2$, and
$\dim(\mathcal{C}^{\prime})=\dim(\mathcal{C})$.
3. (c)
$\prod\tilde{\mathcal{C}}_{\text{odd}}=\prod\mathcal{C}_{\text{odd}}$,
$\prod\tilde{\mathcal{C}}_{\text{even}}=\prod\mathcal{C}_{\text{even}}$, and
$\prod\tilde{\mathcal{C}}=\prod\mathcal{C}$. If all elements of
$\mathcal{C}_{\text{odd}}$ (resp. $\mathcal{C}_{\text{even}}$) are of $Z$-type
(resp. $X$-type), then so are all elements of
$\tilde{\mathcal{C}}_{\text{odd}}$ (resp.
$\tilde{\mathcal{C}}_{\text{even}}$).
4. (d)
If $\beta\geq 3$, then $\tilde{\mathcal{C}}$ is a CAL of length $\beta$, and
$\dim(\tilde{\mathcal{C}})=\dim(\mathcal{C})$.
* Proof.
1. (a)
It is easy to verify that by construction, if all elements of
$\mathcal{C}_{\text{odd}}$ are $Z$-type, then the same is true for
$\mathcal{C}^{\prime}_{\text{odd}}$, and if all elements of
$\mathcal{C}_{\text{even}}$ are $X$-type, then the same holds for all elements
of $\mathcal{C}^{\prime}_{\text{even}}$. Now, for odd $k$ (the even case is
analogous),
$\prod\mathcal{C}^{\prime}_{\text{odd}}=\prod_{\ell=0}^{\lfloor k/2\rfloor}\hat{p}^{\prime}_{2\ell+1}=\left(\prod_{\ell=0}^{\lfloor k/2\rfloor-2}\hat{p}_{2\ell+1}\otimes[I_{2}]\right)\left(\hat{p}_{2\lfloor k/2\rfloor-1}\otimes[Z]\right)\left([I]\otimes[Z]\right)=\left(\prod_{\ell=0}^{\lfloor k/2\rfloor-1}\hat{p}_{2\ell+1}\right)\otimes[I_{2}]$,
which shows
$\prod\mathcal{C}^{\prime}_{\text{odd}}=\prod\mathcal{C}_{\text{odd}}\otimes[I_{2}]$.
A similar calculation also shows that
$\prod\mathcal{C}^{\prime}_{\text{even}}=\prod\mathcal{C}_{\text{even}}\otimes[I_{2}]$,
and from both these we also conclude that
$\prod\mathcal{C}^{\prime}=\prod\mathcal{C}\otimes[I_{2}]$.
2. (b)
It is straightforward to verify that when $k\geq 3$, and since $\mathcal{C}$
is a CAL, the elements of $\mathcal{C}^{\prime}$ satisfy the CAL commutation
relations (Eq. (70)). Now by (a), we have that
$\prod\mathcal{C}^{\prime}=[I]\in\mathcal{\hat{P}}_{n+1}$ if and only if
$\prod\mathcal{C}=[I]\in\mathcal{\hat{P}}_{n}$. So when $k$ is odd, we have
from Theorem B.8(a) that $\dim(\mathcal{C}^{\prime})=\dim(\mathcal{C})$. When
$k$ is even, it follows similarly using (a) the following: (i)
$\prod\mathcal{C}_{\text{odd}}=[I]$ if and only if
$\prod\mathcal{C}^{\prime}_{\text{odd}}=[I]$, (ii)
$\prod\mathcal{C}_{\text{even}}=[I]$ if and only if
$\prod\mathcal{C}^{\prime}_{\text{even}}=[I]$, and (iii)
$\prod\mathcal{C}_{\text{odd}}=\prod\mathcal{C}_{\text{even}}$ if and only if
$\prod\mathcal{C}^{\prime}_{\text{odd}}=\prod\mathcal{C}^{\prime}_{\text{even}}$.
Theorem B.9 now implies that $\dim(\mathcal{C}^{\prime})=\dim(\mathcal{C})$.
3. (c)
This also follows by construction: we have
$\prod\tilde{\mathcal{C}}_{\text{odd}}=\left(\prod_{\ell=0}^{\lfloor\beta/2\rfloor-2}\hat{p}_{2\ell+1}\right)\left(\prod_{\ell=0}^{\alpha}\hat{p}_{2\lfloor\beta/2\rfloor-1+2\ell}\right)=\prod\mathcal{C}_{\text{odd}}$,
and
$\prod\tilde{\mathcal{C}}_{\text{even}}=\left(\prod_{\ell=0}^{\lceil\beta/2\rceil-2}\hat{p}_{2\ell}\right)\left(\prod_{\ell=0}^{\alpha}\hat{p}_{2\lceil\beta/2\rceil-2+2\ell}\right)=\prod\mathcal{C}_{\text{even}}$,
and multiplying them together gives
$\prod\tilde{\mathcal{C}}=\prod\mathcal{C}$. The last part is a simple
observation.
4. (d)
If $\beta\geq 3$, then one easily verifies that $\tilde{\mathcal{C}}$ is a
CAL. The proof of the last part is similar to (b), after noting that (c)
implies (i)
$\prod\mathcal{C}_{\text{odd}}=\prod\tilde{\mathcal{C}}_{\text{odd}}$, and
(ii) $\prod\mathcal{C}_{\text{even}}=\prod\tilde{\mathcal{C}}_{\text{even}}$.
∎
We are now in a position to discuss the case of odd-length CALs, which is more
interesting than the even-length case. It turns out that to construct an
odd-length CAL of length at least three that is not an independent set, one
must always use all three Pauli matrices; otherwise one can avoid using the
Pauli matrix $Y$. This is proved in the next theorem.
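Before stating the theorem, the commutation bookkeeping can be made concrete. The following numerical sketch represents elements of $\mathcal{\hat{P}}_{n}$ by symplectic vectors over $\mathbb{F}_{2}$; it assumes (since Eq. (70) is not reproduced here) that the CAL relations require cyclically adjacent elements to anticommute and all other pairs to commute, consistent with property (iii) in the proof of Theorem B.14 below:

```python
# Phase-free Pauli on n qubits as a pair of F2 vectors (x, z):
# x[j] = 1 iff qubit j carries X or Y; z[j] = 1 iff it carries Z or Y.
SYMPLECTIC = {"I": (0, 0), "X": (1, 0), "Y": (1, 1), "Z": (0, 1)}

def pauli(s):
    """Convert a string such as 'XZI' to its symplectic representation."""
    return (tuple(SYMPLECTIC[c][0] for c in s),
            tuple(SYMPLECTIC[c][1] for c in s))

def com(p, q):
    """Com(p, q) over F2: 0 if the classes commute, 1 if they anticommute."""
    (xp, zp), (xq, zq) = p, q
    return (sum(a * b for a, b in zip(xp, zq))
            + sum(a * b for a, b in zip(zp, xq))) % 2

def is_cal(ps):
    """Assumed CAL relations: cyclically adjacent elements anticommute,
    all other pairs commute."""
    k = len(ps)
    return all(com(ps[r], ps[s]) == int(s - r == 1 or (r == 0 and s == k - 1))
               for r in range(k) for s in range(r + 1, k))

# The length-3 CAL from Theorem B.14(b): every pair is cyclically adjacent.
C3 = [pauli(s) for s in ["XI", "ZZ", "ZX"]]
assert is_cal(C3)
```

This representation also makes $\prod\mathcal{C}$ trivial to compute, since the phase-free product is simply the XOR of the symplectic vectors.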
###### Theorem B.14.
For all $n\geq 1$, and $1\leq k\leq 2n+1$ such that $k$ is odd, we have the
following:
1. (a)
If $3\leq k\leq 2n+1$, then it is not possible to construct a CAL
$\mathcal{C}\subseteq\mathcal{\hat{P}}_{n}$, with $|\mathcal{C}|=k$, and
$\prod\mathcal{C}=[I]$, such that the representatives of each element of
$\mathcal{C}$ are Kronecker products not involving the Pauli matrix $Y$. If
$k=1$, then $\mathcal{C}=\\{[I]\\}$ is a CAL whose element representatives are
Kronecker products not involving any Pauli matrix.
2. (b)
If $1\leq k\leq 2n-1$, then it is always possible to construct a CAL
$\mathcal{C}\subseteq\mathcal{\hat{P}}_{n}$, with $|\mathcal{C}|=k$, and
$\prod\mathcal{C}\neq[I]$, such that the representatives of each element of
$\mathcal{C}$ are Kronecker products not involving the Pauli matrix $Y$.
* Proof.
1. (a)
The case $k=1$ is obvious, so fix $n\geq 1$, $3\leq k\leq 2n+1$, such that $k$
is odd. The proof is by contradiction. Suppose there exists a CAL
$\mathcal{C}=\\{\hat{p}_{0},\dots,\hat{p}_{k-1}\\}\subseteq\mathcal{\hat{P}}_{n}$
satisfying $|\mathcal{C}|=k$, and $\prod\mathcal{C}=[I]$, such that the
representatives of each element of $\mathcal{C}$ are Kronecker products not
involving $Y$. Let $\hat{p}_{\ell}^{(j)}$ denote the restriction of
$\hat{p}_{\ell}$ to qubit $j$, and let us also define the lists
$\mathcal{C}^{(0)},\dots,\mathcal{C}^{(n-1)}\subseteq\mathcal{\hat{P}}_{1}$ by
$\mathcal{C}^{(j)}=\\{\hat{p}_{\ell}^{(j)}:0\leq\ell\leq k-1\\}$, for all
$0\leq j\leq n-1$. We observe that $\prod\mathcal{C}=[I]$ implies
$\prod\mathcal{C}^{(j)}=[I]$ for all $j$ (here the latter $[I]$ denotes the
identity element of $\mathcal{\hat{P}}_{1}$). Now by assumption we have that
$[Y]\notin\mathcal{C}^{(j)}$, and so $[X]$ and $[Z]$ must occur an even number
of times in each $\mathcal{C}^{(j)}$. Define the matrices
$M,M^{(0)},\dots,M^{(n-1)}\in\mathbb{F}_{2}^{k\times k}$ as
$\begin{split}M(r,s)&=\begin{cases}0&\;\;\;\;\text{if
}[\hat{p}_{r},\hat{p}_{s}]=0\\\ 1&\;\;\;\;\text{if
}\\{\hat{p}_{r},\hat{p}_{s}\\}=0\end{cases}\\\
M^{(j)}(r,s)&=\begin{cases}0&\;\;\;\;\text{if
}[\hat{p}_{r}^{(j)},\hat{p}_{s}^{(j)}]=0\\\ 1&\;\;\;\;\text{if
}\\{\hat{p}_{r}^{(j)},\hat{p}_{s}^{(j)}\\}=0\end{cases}\end{split}$ (76)
for all $0\leq r,s\leq k-1$, and $0\leq j\leq n-1$. These matrices have the
following properties:
1. (i)
$M,M^{(0)},\dots,M^{(n-1)}$ are symmetric, and have zeros on the diagonal.
2. (ii)
The matrices satisfy the identity $M=\sum_{j=0}^{n-1}M^{(j)}$, where addition
is performed over $\mathbb{F}_{2}$. This is because $\hat{p}_{r}$ and
$\hat{p}_{s}$ commute if and only if their restrictions $\hat{p}_{r}^{(j)}$
and $\hat{p}_{s}^{(j)}$ for qubits $0\leq j\leq n-1$, anticommute an even
number of times.
3. (iii)
The upper triangular part of $M$ contains exactly $k$ non-zeros. This is
simply a consequence of the commutation relations Eq. (70).
4. (iv)
For all $0\leq j\leq n-1$, since the multiset $\mathcal{C}^{(j)}$ contains an
even number of occurrences of each of $[X]$ and $[Z]$, it follows from Eq. (76) that the number of
non-zeros in $M^{(j)}$ is a multiple of $8$. As $M^{(j)}$ is symmetric with
zeros on the diagonal by property (i), we have that the number of non-zeros in
the upper triangular part of $M^{(j)}$ is a multiple of $4$.
By an application of Lemma B.12, property (iv) above implies that the number
of non-zeros in the upper triangular part of $\sum_{j=0}^{n-1}M^{(j)}$ is
even. But by property (ii) this sum equals $M$, whose upper triangular part
contains exactly $k$ non-zeros by property (iii), and $k$ is odd by
assumption. This gives the desired contradiction.
2. (b)
First notice that if there exists a CAL
$\mathcal{C}=\\{\hat{p}_{\ell}:0\leq\ell\leq
k-1\\}\subseteq\mathcal{\hat{P}}_{n}$ of odd length $k$ and
$\prod\mathcal{C}\neq[I]$, then it must necessarily satisfy $1\leq k\leq 2n-1$,
by Lemma B.6(d) and Corollary B.11(a), and this also implies the existence of
a CAL of length $k$ with the required properties on $n^{\prime}>n$ qubits,
because one can simply form the new CAL
$\mathcal{C}^{\prime}=\\{\hat{p}_{\ell}\otimes[I]:0\leq\ell\leq k-1\\}$, where
$[I]\in\mathcal{\hat{P}}_{n^{\prime}-n}$ is the identity element on
$n^{\prime}-n$ qubits. Thus for each odd $k\geq 1$, it suffices to find a CAL
$\mathcal{C}_{k}$ of length $k$, with the required properties on $(k+1)/2$
qubits. If $k=1$, then $\mathcal{C}_{1}=\\{[X]\\}$ is such a CAL. If $k=3$,
then $\mathcal{C}_{3}=\\{[X]\otimes[I_{2}],[Z]\otimes[Z],[Z]\otimes[X]\\}$ is
a CAL with the desired properties. One can now iteratively use the CAL
expansion construction in Lemma B.13, to obtain $\mathcal{C}_{k+2}$ from
$\mathcal{C}_{k}$ for all odd $k\geq 3$. The construction ensures that if the
restriction of each element of $\mathcal{C}_{k}$ to qubit $j$ is not $[Y]$,
for all $0\leq j\leq(k-1)/2$, then the same is true for each element of
$\mathcal{C}_{k+2}$ for all qubit indices $0\leq j\leq(k+1)/2$.
∎
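The matrix bookkeeping in the proof of part (a) can be checked numerically. The sketch below builds $M$ and the $M^{(j)}$ of Eq. (76) for the $Y$-free length-3 CAL from part (b); since its product is not $[I]$, properties (ii) and (iii) hold but no contradiction arises (the per-qubit counts need not be multiples of 4):

```python
# Symplectic data for single-qubit Pauli letters; Com is the parity of
# per-qubit anticommutations.
S = {"I": (0, 0), "X": (1, 0), "Y": (1, 1), "Z": (0, 1)}

def com1(a, b):
    """1 iff the single-qubit letters a and b anticommute."""
    return (S[a][0] * S[b][1] + S[a][1] * S[b][0]) % 2

def com(p, q):
    return sum(com1(a, b) for a, b in zip(p, q)) % 2

cal = ["XI", "ZZ", "ZX"]           # Y-free length-3 CAL from part (b)
k, n = len(cal), len(cal[0])

# The matrices of Eq. (76): M from full commutation, M^(j) per qubit.
M = [[com(cal[r], cal[s]) for s in range(k)] for r in range(k)]
Mj = [[[com1(cal[r][j], cal[s][j]) for s in range(k)] for r in range(k)]
      for j in range(n)]

# Property (ii): M = sum_j M^(j) over F2.
assert all(M[r][s] == sum(Mj[j][r][s] for j in range(n)) % 2
           for r in range(k) for s in range(k))

# Property (iii): the upper triangular part of M has exactly k non-zeros.
upper = sum(M[r][s] for r in range(k) for s in range(r + 1, k))
assert upper == k
```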
The final theorem of this section shows that for even-length CALs, one can
always avoid using the Pauli matrix $Y$.
###### Theorem B.15.
For all $n\geq 1$, and for all integers $k$ satisfying $2\leq 2k\leq 2n+2$,
there exists a CAL $\mathcal{C}\subseteq\mathcal{\hat{P}}_{n}$, with
$|\mathcal{C}|=2k$, such that the representatives of each element of
$\mathcal{C}$ are Kronecker products not involving the Pauli matrix $Y$.
Moreover defining $\mathcal{C}_{\text{odd}}$ and $\mathcal{C}_{\text{even}}$
according to Theorem B.9, one can choose $\mathcal{C}$ to satisfy any of the
cases
1. (i)
$\prod\mathcal{C}_{\text{odd}}\neq[I]\neq\prod\mathcal{C}_{\text{even}}$, and
$\prod\mathcal{C}_{\text{odd}}\neq\prod\mathcal{C}_{\text{even}}$
2. (ii)
$\prod\mathcal{C}_{\text{odd}}\neq[I]\neq\prod\mathcal{C}_{\text{even}}$ and
$\prod\mathcal{C}_{\text{odd}}=\prod\mathcal{C}_{\text{even}}$
3. (iii)
$\prod\mathcal{C}_{\text{odd}}=[I]\neq\prod\mathcal{C}_{\text{even}}$
4. (iv)
$\prod\mathcal{C}_{\text{odd}}\neq[I]=\prod\mathcal{C}_{\text{even}}$
5. (v)
$\prod\mathcal{C}_{\text{odd}}=\prod\mathcal{C}_{\text{even}}=[I]$
if the case is allowed by Theorem B.9(a). Except for case (ii), one can also
choose all elements of $\mathcal{C}_{\text{odd}}$ to be of $Z$-type, and all
elements of $\mathcal{C}_{\text{even}}$ to be of $X$-type.
* Proof.
As in the proof of Theorem B.14(b), notice that if we prove this theorem for
any fixed values of $k$ and $n$, then we have also proved the theorem for the
same fixed $k$ and for all $n^{\prime}>n$. First consider CALs of length
$2k=2$. Then by Theorem B.9(a) and its proof, we know that only case (i) is
possible. In this case the CAL $\mathcal{C}=\\{[X],[Z]\\}$ has the required
properties for $n=1$, so this proves the case $2k=2$. For the rest of the
proof, assume $2k\geq 4$. Also notice that case (ii) must indeed be excluded
from the last statement of the theorem: if all elements of
$\mathcal{C}_{\text{odd}}$ were of $Z$-type and all elements of
$\mathcal{C}_{\text{even}}$ of $X$-type, then the only way to have
$\prod\mathcal{C}_{\text{odd}}=\prod\mathcal{C}_{\text{even}}$ would be
$\prod\mathcal{C}_{\text{odd}}=\prod\mathcal{C}_{\text{even}}=[I]$,
contradicting the requirement of case (ii).
We now adopt the notation $\mathcal{C}_{2k,n}\subseteq\mathcal{\hat{P}}_{n}$
to denote a CAL of length $2k$. Note that a CAL $\mathcal{C}_{2k,n}$
satisfying the properties of the theorem in cases (i)-(v) can exist only if
$n\geq n_{2k}$, where $n_{2k}=k+1$ for case (i), $n_{2k}=k$ for cases
(ii)-(iv), and $n_{2k}=k-1$ for case (v), all following
from Lemma B.6(d) and Theorem B.9(a). We first exhibit examples of CALs
$\mathcal{C}_{4,n_{4}}$ for each case: (i)
$\mathcal{C}_{4,3}=\\{[X_{1}X_{2}],[Z_{2}],[X_{2}X_{3}],[Z_{1}Z_{3}]\\}$, (ii)
$\mathcal{C}_{4,2}=\\{[X_{1}],[Z_{1}],[X_{1}X_{2}],[Z_{1}X_{2}]\\}$, (iii)
$\mathcal{C}_{4,2}=\\{[X_{1}],[Z_{1}],[X_{1}X_{2}],[Z_{1}]\\}$, (iv)
$\mathcal{C}_{4,2}=\\{[X_{1}],[Z_{1}],[X_{1}],[Z_{1}Z_{2}]\\}$, and (v)
$\mathcal{C}_{4,1}=\\{[X],[Z],[X],[Z]\\}$, for the cases (i)-(v) respectively.
Now for each case we use the CAL expansion construction of Lemma B.13(i) to
iteratively construct the CALs $\mathcal{C}_{2k,n_{2k}}$ for all $k\geq 2$,
starting from $\mathcal{C}_{4,n_{4}}$. These CALs have the required properties
in each case by Lemma B.13(a), since $\mathcal{C}_{4,n_{4}}$ has them as well.
The theorem is then proved by the first sentence of the previous paragraph. ∎
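The five example length-4 CALs exhibited in the proof, together with the product conditions of cases (i)-(v), can be verified mechanically. A sketch, under the same assumed CAL relations (cyclically adjacent elements anticommute, all other pairs commute); the Pauli strings transcribe the subscripted operators, with $I$ on unused qubits:

```python
S = {"I": (0, 0), "X": (1, 0), "Y": (1, 1), "Z": (0, 1)}

def com(p, q):
    """Com over F2 for Pauli strings (parity of per-qubit anticommutations)."""
    return sum((S[a][0] * S[b][1] + S[a][1] * S[b][0]) % 2
               for a, b in zip(p, q)) % 2

def is_cal(ps):
    """Assumed CAL relations: cyclic neighbours anticommute, others commute."""
    k = len(ps)
    return all(com(ps[r], ps[s]) == int(s - r == 1 or (r == 0 and s == k - 1))
               for r in range(k) for s in range(r + 1, k))

def product(ps):
    """Phase-free product of Pauli strings: XOR of the symplectic vectors."""
    n = len(ps[0])
    return (tuple(sum(S[p[j]][0] for p in ps) % 2 for j in range(n)),
            tuple(sum(S[p[j]][1] for p in ps) % 2 for j in range(n)))

examples = {   # transcriptions of the CALs in the proof, I on unused qubits
    "i":   ["XXI", "IZI", "IXX", "ZIZ"],
    "ii":  ["XI", "ZI", "XX", "ZX"],
    "iii": ["XI", "ZI", "XX", "ZI"],
    "iv":  ["XI", "ZI", "XI", "ZZ"],
    "v":   ["X", "Z", "X", "Z"],
}
cases = {      # the product conditions (i)-(v) of the theorem
    "i":   lambda o, e, i: o != i and e != i and o != e,
    "ii":  lambda o, e, i: o != i and e != i and o == e,
    "iii": lambda o, e, i: o == i and e != i,
    "iv":  lambda o, e, i: o != i and e == i,
    "v":   lambda o, e, i: o == i and e == i,
}
for label, cal in examples.items():
    n = len(cal[0])
    ident = (tuple([0] * n), tuple([0] * n))
    # C_odd collects the elements with odd index, C_even those with even index.
    odd, even = product(cal[1::2]), product(cal[0::2])
    assert is_cal(cal) and cases[label](odd, even, ident), label
```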
## Appendix C Equivalence of Majorana and qubit surface codes
The goal of this section is to prove the equivalence between the Majorana
surface code, defined on $m=M+2|E|$ Majoranas (where $m$ is even), and the
qubit surface code, defined on $N=M/2+|E|-|V|$ qubits, as explained in Section
3.3 (the quantities $M,|E|,|V|$ are defined in Section 3.2). The proof relies
on some general facts that we also present here. We will use some of the
notations and definitions introduced in Section B.1. Recall from Section 3.1
that the Jordan-Wigner transformation identifies the Pauli group
$\mathcal{P}_{m/2}$ with the Majorana group $\mathcal{J}_{m}$, in a way that
preserves commutation relations. To remain consistent with the notation used
for Paulis, define
$\text{Com}:\mathcal{J}_{m}\times\mathcal{J}_{m}\rightarrow\mathbb{F}_{2}$ as
$\text{Com}(\gamma,\gamma^{\prime})=\text{Com}(\text{JW}(\gamma),\text{JW}(\gamma^{\prime}))$,
where $\gamma,\gamma^{\prime}\in\mathcal{J}_{m}$, and $\text{JW}(\cdot)$ is
the Jordan-Wigner map (the phase factors of the elements of $\mathcal{J}_{m}$
have been absorbed into $\gamma,\gamma^{\prime}$ and we continue to do so for
the rest of this section; this is in contrast to the notation in Section 3.1,
where the phase factors were made explicit). Due to the Jordan-Wigner
identification, Lemma 3.1 is also valid for the Pauli group. To simplify
notation, let us define $r:=m/2$.
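As an aside, the commutation-preserving identification $\text{JW}$ can be illustrated concretely. The sketch below uses one common Jordan-Wigner convention, $\gamma_{2j}=Z_{0}\cdots Z_{j-1}X_{j}$ and $\gamma_{2j+1}=Z_{0}\cdots Z_{j-1}Y_{j}$; this is an assumption for illustration, since the exact convention of Section 3.1 is not reproduced here, but $\text{Com}$ is insensitive to phase conventions. Under it, distinct Majorana generators pairwise anticommute:

```python
# Phase-free symplectic representation of Pauli strings, and Com over F2.
S = {"I": (0, 0), "X": (1, 0), "Y": (1, 1), "Z": (0, 1)}

def com(p, q):
    return sum((S[a][0] * S[b][1] + S[a][1] * S[b][0]) % 2
               for a, b in zip(p, q)) % 2

def jw(a, r):
    """Jordan-Wigner image of Majorana generator gamma_a on r qubits
    (one common convention; the paper's own convention may differ by
    phases, which Com does not see)."""
    j, parity = divmod(a, 2)
    letter = "X" if parity == 0 else "Y"
    return "Z" * j + letter + "I" * (r - j - 1)

r = 3                      # r = m/2 qubits, m = 6 Majoranas
m = 2 * r
gammas = [jw(a, r) for a in range(m)]

# Distinct Majorana generators anticommute; each commutes with itself.
for a in range(m):
    for b in range(m):
        assert com(gammas[a], gammas[b]) == (0 if a == b else 1)
```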
Recall that a stabilizer group $\mathcal{S}$ on $r$ qubits is a subgroup of
$\mathcal{P}_{r}$ satisfying $-I\not\in\mathcal{S}$, which implies that
$p^{2}=I$ for all $p\in\mathcal{S}$, and $\mathcal{S}$ is a commuting
subgroup. Let $\mathcal{C}(\mathcal{S})$ be the centralizer of $\mathcal{S}$.
If $\dim(\mathcal{S})=k$, for some $0\leq k\leq r$, then
$\dim(\mathcal{C}(\mathcal{S}))=2r-k$. Let $\mathcal{S}^{\prime}$ be another
stabilizer group of $\mathcal{P}_{r}$ such that
$\mathcal{S}\subseteq\mathcal{S}^{\prime}$. Then by the definition of the
centralizer we have $\mathcal{S}^{\prime}\subseteq\mathcal{C}(\mathcal{S})$.
Now suppose that $\kappa:\mathcal{C}(\mathcal{S})\rightarrow\mathcal{P}_{N}$
is a homomorphism. Notice that $\pm iI\not\in\kappa(\mathcal{S}^{\prime})$,
because if otherwise $\kappa(p)=\pm iI$ for some $p\in\mathcal{S}^{\prime}$,
then $\kappa(I)=\kappa(p^{2})=(\kappa(p))^{2}=-I$, contradicting $\kappa(I)=I$. We say that
$\kappa$ is commutation preserving if
$\text{Com}(p,q)=\text{Com}(\kappa(p),\kappa(q))$ for all
$p,q\in\mathcal{C}(\mathcal{S})$. Let
$\pi:\mathcal{P}_{N}\rightarrow\mathcal{\hat{P}}_{N}$ be the projection map
taking each element of $\mathcal{P}_{N}$ to its equivalence class in
$\mathcal{\hat{P}}_{N}$. It is easily checked that $\pi$ is a surjective
homomorphism, so $\kappa$ descends to a homomorphism
$\hat{\kappa}:=\pi\circ\kappa:\mathcal{C}(\mathcal{S})\rightarrow\mathcal{\hat{P}}_{N}$.
The homomorphism $\pi$ is commutation preserving in the sense that for all
$p,q\in\mathcal{P}_{N}$, $\text{Com}(p,q)=\text{Com}([p],[q])$. Thus if
$\kappa$ is commutation preserving also, then so is $\hat{\kappa}$, as for all
$p,q\in\mathcal{C}(\mathcal{S})$,
$\text{Com}(p,q)=\text{Com}(\hat{\kappa}(p),\hat{\kappa}(q))$. Moreover, if
$\kappa$ is surjective, it follows by surjectivity of $\pi$ that
$\hat{\kappa}$ is surjective. Below in Lemma C.1, we prove a result addressing
the converse direction.
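The passage from $\kappa$ to $\hat{\kappa}=\pi\circ\kappa$ can be made concrete with a small sketch: representing an element of $\mathcal{P}_{N}$ in the normal form $i^{\alpha}X^{x}Z^{z}$ (a standard choice of representatives, assumed here rather than taken from the text), $\pi$ simply forgets $\alpha$, multiplication descends to classes, and $\text{Com}$ depends only on the symplectic data:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pauli:
    """Element of P_N in the normal form i^phase * X^x * Z^z (an assumed
    normal form for illustration, not the paper's notation)."""
    phase: int          # exponent of i, taken mod 4
    x: tuple            # F2 vector
    z: tuple            # F2 vector

    def __mul__(self, other):
        # Moving Z^z past X^x picks up (-1)^(z . x), i.e. i^(2 z.x).
        t = sum(a * b for a, b in zip(self.z, other.x)) % 2
        return Pauli((self.phase + other.phase + 2 * t) % 4,
                     tuple((a + b) % 2 for a, b in zip(self.x, other.x)),
                     tuple((a + b) % 2 for a, b in zip(self.z, other.z)))

def pi(p):
    """Projection P_N -> hat-P_N: forget the phase."""
    return (p.x, p.z)

def com(p, q):
    """Com depends only on the symplectic data, hence is pi-invariant."""
    return (sum(a * b for a, b in zip(p.x, q.z))
            + sum(a * b for a, b in zip(p.z, q.x))) % 2

X1 = Pauli(0, (1, 0), (0, 0))
Z1 = Pauli(0, (0, 0), (1, 0))
# pi is a homomorphism: the class of a product is the product of classes.
assert pi(X1 * Z1) == pi(Z1 * X1)          # same class despite differing phases
assert (X1 * Z1).phase != (Z1 * X1).phase  # X1 Z1 = -Z1 X1 in P_N
assert com(X1, Z1) == 1
```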
###### Lemma C.1.
Let $\mathcal{S}$ be a stabilizer subgroup of $\mathcal{P}_{r}$, and let
$\mu:\mathcal{C}(\mathcal{S})\rightarrow\mathcal{\hat{P}}_{N}$ be a
surjective, commutation preserving homomorphism. Then there exists a
surjective homomorphism
$\kappa:\mathcal{C}(\mathcal{S})\rightarrow\mathcal{P}_{N}$ such that
$\hat{\kappa}=\mu$, and $\kappa(\mathcal{S})$ is a stabilizer subgroup of
$\mathcal{P}_{N}$. Moreover, for any such $\kappa$, if $\mathcal{S}^{\prime}$
is a stabilizer subgroup of $\mathcal{P}_{r}$ satisfying
$\mathcal{S}\subseteq\mathcal{S}^{\prime}$, the following holds:
1. (a)
$\kappa$ is commutation preserving.
2. (b)
$\kappa(p)\not\in\langle iI\rangle$ for all
$p\in\mathcal{C}(\mathcal{S})\setminus\langle iI,\mathcal{S}\rangle$.
3. (c)
The kernel of $\kappa$ is $\mathcal{S}$, and $\kappa(iI)=\pm iI$.
4. (d)
$\kappa(\mathcal{S}^{\prime})$ is a stabilizer subgroup of $\mathcal{P}_{N}$.
5. (e)
$\kappa(\mathcal{C}(\mathcal{S}^{\prime}))=\mathcal{C}(\kappa(\mathcal{S}^{\prime}))$.
6. (f)
$\mathcal{T}/\mathcal{S}\cong\kappa(\mathcal{T})$, and
$\dim(\kappa(\mathcal{T}))=\dim(\mathcal{T})-\dim(\mathcal{S})$, where
$\mathcal{T}$ is either $\mathcal{S}^{\prime}$ or
$\mathcal{C}(\mathcal{S}^{\prime})$. The stabilizer codes given by the
stabilizer groups $\mathcal{S}^{\prime}$ in $\mathcal{P}_{r}$, and
$\kappa(\mathcal{S}^{\prime})$ in $\mathcal{P}_{N}$ encode the same number of
logical qubits.
* Proof.
For the proof, we first note some useful facts. Since $\mu$ is surjective and
commutation preserving, we have $\mu(iI)=[I]$. This follows because if
$\mu(iI)\neq[I]$, then by surjectivity there exists
$p\in\mathcal{C}(\mathcal{S})$ such that $\text{Com}(\mu(p),\mu(iI))=1$, while
$\text{Com}(p,iI)=0$ giving a contradiction. Also recall that for any
$p\in\mathcal{C}(\mathcal{S})\setminus\langle iI,\mathcal{S}\rangle$, there
exists $q\in\mathcal{C}(\mathcal{S})\setminus\langle iI,\mathcal{S}\rangle$
such that $\text{Com}(p,q)=1$. Now let $\dim(\mathcal{S})=k\geq 0$. Then there
exists an independent list
$\mathcal{G}=\\{p_{1},\dots,p_{k},p^{\prime}_{1},\dots,p^{\prime}_{2r-2k}\\}$
of commuting, Hermitian elements of $\mathcal{P}_{r}$, such that
$\mathcal{S}=\langle p_{1},\dots,p_{k}\rangle$, and
$\mathcal{C}(\mathcal{S})=\langle iI,\mathcal{G}\rangle$. Thus every element
$p\in\mathcal{C}(\mathcal{S})$ can be uniquely represented in the form
$p=(iI)^{\alpha_{p}}\prod_{y\in\mathcal{H}_{p}}y$, where
$\alpha_{p}\in\\{0,1,2,3\\}$, and $\mathcal{H}_{p}\subseteq\mathcal{G}$ are
uniquely determined by $p$. The elements in this product representation appear
in the same order as in the list $\mathcal{G}$, with the convention that the
product of the empty subset is $I$.
We first show the existence of $\kappa$. Let us define a map
$\psi:\mathcal{G}\rightarrow\mathcal{P}_{N}$ by letting $\psi(x)$ be any
arbitrarily chosen Hermitian element in the equivalence class $\mu(x)$ for
each $x\in\mathcal{G}$, and consider the list
$\mathcal{T}=\\{\psi(p_{1}),\dots,\psi(p_{k})\\}$. Since $\mathcal{S}$ is a
commuting subgroup and $\mu$ is a homomorphism, it follows that
$\langle\mathcal{T}\rangle$ is a commuting subgroup. If $k\geq 1$, then
$\mathcal{T}$ is non-empty, and so by Lemma 3.1 we conclude
$\langle\mathcal{T}\rangle$ is Hermitian, and the elements of $\mathcal{T}$
can be multiplied by $\pm 1$ to obtain a new list
$\tilde{\mathcal{T}}=\\{\tilde{\psi}(p_{1}),\dots,\tilde{\psi}(p_{k})\\}$ such
that $-I\not\in\langle\tilde{\mathcal{T}}\rangle$. Next define the map
$\kappa:\mathcal{G}\cup\\{iI\\}\rightarrow\mathcal{P}_{N}$ as follows
$\kappa(x)=\begin{cases}\tilde{\psi}(x)&\;\;\;\text{if}\;\;x=p_{1},\dots,p_{k}\\\
\psi(x)&\;\;\;\text{if}\;\;x=p^{\prime}_{1},\dots,p^{\prime}_{2r-2k}\\\
iI&\;\;\;\text{if}\;\;x=iI,\end{cases}$ (77)
and we claim that $\kappa$ extends uniquely to a homomorphism
$\kappa:\mathcal{C}(\mathcal{S})\rightarrow\mathcal{P}_{N}$. For any such
extension, if
$x=(iI)^{\alpha_{x}}\prod_{y\in\mathcal{H}_{x}}y\in\mathcal{C}(\mathcal{S})$,
then $\kappa(x)=(iI)^{\alpha_{x}}\prod_{y\in\mathcal{H}_{x}}\kappa(y)$,
showing uniqueness. To prove existence, for any
$x=(iI)^{\alpha_{x}}\prod_{y\in\mathcal{H}_{x}}y\in\mathcal{C}(\mathcal{S})$,
define
$\kappa(x):=(\kappa(iI))^{\alpha_{x}}\prod_{y\in\mathcal{H}_{x}}\kappa(y)$.
This definition satisfies Eq. (77), so we need to verify that $\kappa$ is a
homomorphism. Let $x,x^{\prime}\in\mathcal{C}(\mathcal{S})$ be given by
$x=(iI)^{\alpha_{x}}\prod_{y\in\mathcal{H}_{x}}y$, and
$x^{\prime}=(iI)^{\alpha_{x^{\prime}}}\prod_{y\in\mathcal{H}_{x^{\prime}}}y$.
Then
$x^{\prime\prime}:=xx^{\prime}=\left((iI)^{\alpha_{x}}\prod_{y\in\mathcal{H}_{x}}y\right)\;\left((iI)^{\alpha_{x^{\prime}}}\prod_{y\in\mathcal{H}_{x^{\prime}}}y\right)=(iI)^{\alpha_{x^{\prime\prime}}}\prod_{y\in\mathcal{H}_{x^{\prime\prime}}}y$,
where
$\mathcal{H}_{x^{\prime\prime}}=(\mathcal{H}_{x}\cup\mathcal{H}_{x^{\prime}})\setminus(\mathcal{H}_{x}\cap\mathcal{H}_{x^{\prime}})$,
and $\alpha_{x^{\prime\prime}}=(\alpha_{x}+\alpha_{x^{\prime}}+2t)\mod 4$,
with $t$ determined by the commutativity of the elements of $\mathcal{H}_{x}$
and $\mathcal{H}_{x^{\prime}}$. Now from the definition of $\kappa$, we get
$\kappa(xx^{\prime})=(iI)^{\alpha_{x^{\prime\prime}}}\prod_{y\in\mathcal{H}_{x^{\prime\prime}}}\kappa(y)$.
We also observe that since $\mu$ is commutation preserving, Eq. (77) implies
$\text{Com}(x_{1},x_{2})=\text{Com}(\kappa(x_{1}),\kappa(x_{2}))$, for all
$x_{1},x_{2}\in\mathcal{G}\cup\\{iI\\}$. This gives
$\kappa(x)\kappa(x^{\prime})=\left((iI)^{\alpha_{x}}\prod_{y\in\mathcal{H}_{x}}\kappa(y)\right)\;\left((iI)^{\alpha_{x^{\prime}}}\prod_{y\in\mathcal{H}_{x^{\prime}}}\kappa(y)\right)=(iI)^{\alpha_{x^{\prime\prime}}}\prod_{y\in\mathcal{H}_{x^{\prime\prime}}}\kappa(y)$,
where in the last equality we have used that all elements of
$\kappa(\mathcal{G})$ square to $I$ (as they are Hermitian by choice). This
shows $\kappa(xx^{\prime})=\kappa(x)\kappa(x^{\prime})$ proving the claim.
Next observe that by construction
$\kappa(\mathcal{S})=\langle\tilde{\mathcal{T}}\rangle$, and so
$\kappa(\mathcal{S})$ is a stabilizer subgroup of $\mathcal{P}_{N}$. We
moreover have $\hat{\kappa}=\mu$, because for any
$x=(iI)^{\alpha_{x}}\prod_{y\in\mathcal{H}_{x}}y\in\mathcal{C}(\mathcal{S})$,
we have
$\hat{\kappa}(x)=\prod_{y\in\mathcal{H}_{x}}\hat{\kappa}(y)=\prod_{y\in\mathcal{H}_{x}}\mu(y)=\mu(\prod_{y\in\mathcal{H}_{x}}y)=\mu(x)$,
where the last equality uses $\mu(iI)=[I]$. Finally the condition
$\kappa(iI)=iI$ in Eq. (77), together with the fact that $\mu$ is surjective,
and $\hat{\kappa}=\mu$, imply that $\kappa$ is surjective.
We now prove parts (a)-(f) of the lemma, assuming the existence of $\kappa$.
1. (a)
For any two elements $p,q\in\mathcal{C}(\mathcal{S})$, since
$\hat{\kappa}=\mu$ and both $\mu$ and $\pi$ are commutation preserving
homomorphisms, we have
$\text{Com}(p,q)=\text{Com}(\hat{\kappa}(p),\hat{\kappa}(q))=\text{Com}(\kappa(p),\kappa(q))$.
2. (b)
If $p\in\mathcal{C}(\mathcal{S})\setminus\langle iI,\mathcal{S}\rangle$, then
there exists $q\in\mathcal{C}(\mathcal{S})\setminus\langle
iI,\mathcal{S}\rangle$ such that $\text{Com}(p,q)=1$. As $\kappa$ is
commutation preserving by (a), we also have
$\text{Com}(\kappa(p),\kappa(q))=1$, which is impossible if
$\kappa(p)\in\langle iI\rangle$.
3. (c)
The center of $\mathcal{C}(\mathcal{S})$ is $\langle iI,\mathcal{S}\rangle$,
and the center of $\mathcal{P}_{N}$ is $\langle iI\rangle$. Since $\kappa$ is
a surjective homomorphism, this implies $\kappa(\langle
iI,\mathcal{S}\rangle)\subseteq\langle iI\rangle$. Now fix any
$p\in\mathcal{S}$. We have already argued in the paragraph preceding the lemma
that $\kappa(p)\neq\pm iI$. Since $\kappa(\mathcal{S})$ is a stabilizer group,
we also have $\kappa(p)\neq-I$. This shows $\kappa(\mathcal{S})=\langle
I\rangle$ (equivalently $\mathcal{S}$ is in the kernel of $\kappa$). Now by
the surjectivity of $\kappa$ and (b), it follows that $\kappa(\langle
iI,\mathcal{S}\rangle)=\langle iI\rangle$, and since $\mu(iI)=[I]$, we also
have $\kappa(iI)\in\langle iI\rangle$. If $\kappa(iI)=\pm I$, then
$\kappa(\langle iI,\mathcal{S}\rangle)\subseteq\langle-I\rangle$, which is a
contradiction. This proves $\kappa(iI)=\pm iI$. In each of these two possible
cases, since $\kappa(\mathcal{S})=\\{I\\}$, we have for any $p\in\langle
iI,\mathcal{S}\rangle\setminus\mathcal{S}$, $\kappa(p)\neq I$, and this proves
that the kernel of $\kappa$ is exactly $\mathcal{S}$.
4. (d)
It suffices to show that $-I\not\in\kappa(\mathcal{S}^{\prime})$, as then
Lemma 3.1(b) implies $\kappa(\mathcal{S}^{\prime})$ is also commuting and
Hermitian. If $\mathcal{S}^{\prime}=\mathcal{S}$ then
$\kappa(\mathcal{S}^{\prime})=\\{I\\}$ by (c). Now assume
$\mathcal{S}^{\prime}\neq\mathcal{S}$, and for contradiction, assume there
exists $p\in\mathcal{S}^{\prime}$ such that $\kappa(p)=-I$. Then by (c),
$p\in\mathcal{S}^{\prime}\setminus\mathcal{S}$, and since
$\mathcal{S}^{\prime}$ is a stabilizer group it must also hold that
$p\not\in\langle iI,\mathcal{S}\rangle$. But this contradicts (b).
5. (e)
As $\kappa$ is a homomorphism, we have
$\kappa(\mathcal{C}(\mathcal{S}^{\prime}))\subseteq\mathcal{C}(\kappa(\mathcal{S}^{\prime}))$.
We want to show
$\mathcal{C}(\kappa(\mathcal{S}^{\prime}))\subseteq\kappa(\mathcal{C}(\mathcal{S}^{\prime}))$.
Pick any $p\in\mathcal{C}(\kappa(\mathcal{S}^{\prime}))$. By surjectivity of
$\kappa$, there exists $q\in\mathcal{C}(\mathcal{S})$ such that $\kappa(q)=p$.
Then for all $x\in\mathcal{S}^{\prime}$, since $\kappa$ is commutation
preserving, we have $\text{Com}(q,x)=\text{Com}(p,\kappa(x))=0$, which implies
$q\in\mathcal{C}(\mathcal{S}^{\prime})$.
6. (f)
Since $\mathcal{S}\subseteq\mathcal{S}^{\prime}$, we have
$\mathcal{S}\subseteq\mathcal{S}^{\prime}\subseteq\mathcal{C}(\mathcal{S}^{\prime})\subseteq\mathcal{C}(\mathcal{S})$.
Let $\mathcal{T}$ be either $\mathcal{S}^{\prime}$ or
$\mathcal{C}(\mathcal{S}^{\prime})$, and let
$\kappa_{\mathcal{T}}:\mathcal{T}\rightarrow\kappa(\mathcal{T})$ be the
restriction of $\kappa$ to $\mathcal{T}$. Then $\kappa_{\mathcal{T}}$ is
surjective, and the kernel of $\kappa_{\mathcal{T}}$ is $\mathcal{S}$. Thus we
have the isomorphism $\mathcal{T}/\mathcal{S}\cong\kappa(\mathcal{T})$, which
implies
$\dim(\kappa(\mathcal{T}))=\dim(\mathcal{T}/\mathcal{S})=\dim(\mathcal{T})-\dim(\mathcal{S})$.
Using this, we easily obtain the equality
$\dim(\kappa(\mathcal{C}(\mathcal{S}^{\prime})))-\dim(\kappa(\mathcal{S}^{\prime}))=\dim(\mathcal{C}(\mathcal{S}^{\prime}))-\dim(\mathcal{S}^{\prime})$,
where the left (resp. right) hand side of the equation denotes twice the
number of encoded qubits by the stabilizer code specified by
$\kappa(\mathcal{S}^{\prime})$ (resp. $\mathcal{S}^{\prime}$).
∎
Similarly, a homomorphism $\mu:\mathcal{G}\rightarrow\mathcal{\hat{P}}_{N}$,
where $\mathcal{G}$ is a subgroup of $\mathcal{J}_{m}$, is called commutation
preserving if for all $\gamma,\gamma^{\prime}\in\mathcal{G}$,
$\text{Com}(\mu(\gamma),\mu(\gamma^{\prime}))=\text{Com}(\gamma,\gamma^{\prime})$.
A simple fact is the following: if $\mathcal{T}\subseteq\mathcal{G}$ such that
$\langle\mathcal{T}\rangle=\mathcal{G}$, and
$\mu:\mathcal{G}\rightarrow\mathcal{\hat{P}}_{N}$ is a homomorphism such that
$\text{Com}(\gamma,\gamma^{\prime})=\text{Com}(\mu(\gamma),\mu(\gamma^{\prime}))$
for all $\gamma,\gamma^{\prime}\in\mathcal{T}$, then $\mu$ is commutation
preserving on $\mathcal{G}$. To see this, if
$\gamma,\gamma^{\prime}\in\mathcal{G}$, then there exist
$\gamma_{1},\dots,\gamma_{k_{\gamma}},\gamma^{\prime}_{1},\dots,\gamma^{\prime}_{k_{\gamma^{\prime}}}\in\mathcal{T}$
such that $\gamma=\prod_{s=1}^{k_{\gamma}}\gamma_{s}$, and
$\gamma^{\prime}=\prod_{t=1}^{k_{\gamma^{\prime}}}\gamma^{\prime}_{t}$. Then
$\text{Com}(\mu(\gamma),\mu(\gamma^{\prime}))=\text{Com}(\prod_{s=1}^{k_{\gamma}}\mu(\gamma_{s}),\prod_{t=1}^{k_{\gamma^{\prime}}}\mu(\gamma^{\prime}_{t}))=\sum_{s=1}^{k_{\gamma}}\sum_{t=1}^{k_{\gamma^{\prime}}}\text{Com}(\mu(\gamma_{s}),\mu(\gamma^{\prime}_{t}))=\sum_{s=1}^{k_{\gamma}}\sum_{t=1}^{k_{\gamma^{\prime}}}\text{Com}(\gamma_{s},\gamma^{\prime}_{t})=\text{Com}(\gamma,\gamma^{\prime})$,
where the first equality is true since $\mu$ is a homomorphism, and the sums
are evaluated over $\mathbb{F}_{2}$.
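This bilinearity can be checked directly in a small sketch: with phase-free products represented as XORs of symplectic vectors, $\text{Com}$ of two products equals the double sum of pairwise $\text{Com}$ values over $\mathbb{F}_{2}$ (the generating set below is an arbitrary illustration, not taken from the text):

```python
import itertools

# Pauli strings as symplectic vectors over F2; phases never enter Com.
S = {"I": (0, 0), "X": (1, 0), "Y": (1, 1), "Z": (0, 1)}

def vec(p):
    return ([S[c][0] for c in p], [S[c][1] for c in p])

def com_vec(u, v):
    (x1, z1), (x2, z2) = u, v
    return (sum(a * b for a, b in zip(x1, z2))
            + sum(a * b for a, b in zip(z1, x2))) % 2

def com(p, q):
    return com_vec(vec(p), vec(q))

def prod_vec(ps):
    """Phase-free product of Pauli strings: XOR of their vectors."""
    n = len(ps[0])
    return ([sum(S[p[j]][0] for p in ps) % 2 for j in range(n)],
            [sum(S[p[j]][1] for p in ps) % 2 for j in range(n)])

# Com(prod(A), prod(B)) = double sum of pairwise Com values over F2, so
# checking commutation preservation on a generating set T determines it
# on all of <T>.
gens = ["XZI", "IXZ", "ZIX"]
for A in itertools.combinations(gens, 2):
    for B in itertools.combinations(gens, 2):
        assert (com_vec(prod_vec(A), prod_vec(B))
                == sum(com(a, b) for a in A for b in B) % 2)
```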
Let us discuss a consequence of Majorana and qubit partitioning on
$\mathcal{J}_{m}$ and $\mathcal{\hat{P}}_{N}$, respectively. For
$m,N\geq\ell\geq 1$, let $J_{1},\dots,J_{\ell}$ (resp.
$J^{\prime}_{1},\dots,J^{\prime}_{\ell}$) be a partition of $m$ Majoranas
(resp. $N$ qubits), and denote by $\mathcal{J}_{m,J_{k}}$ (resp.
$\mathcal{\hat{P}}_{N,J^{\prime}_{k}}$) the subgroup of $\mathcal{J}_{m}$
(resp. $\mathcal{\hat{P}}_{N}$) formed by those elements of $\mathcal{J}_{m}$
(resp. $\mathcal{\hat{P}}_{N}$) whose support is contained in $J_{k}$ (resp.
$J^{\prime}_{k}$). We then have
$\mathcal{J}_{m}=\langle\mathcal{J}_{m,J_{1}},\dots,\mathcal{J}_{m,J_{\ell}}\rangle$
and the direct sum
$\mathcal{\hat{P}}_{N}=\bigoplus_{k=1}^{\ell}\mathcal{\hat{P}}_{N,J^{\prime}_{k}}$.
Recall that a stabilizer group $\mathcal{S}$ of $\mathcal{J}_{m}$ satisfies
$-I\not\in\mathcal{S}$, and is such that all elements of $\mathcal{S}$ have even
weight. Now suppose we are given stabilizer groups
$\mathcal{S}_{1},\dots,\mathcal{S}_{\ell}$ of $\mathcal{J}_{m}$, such that for
each $k$, $\mathcal{S}_{k}\subseteq\mathcal{J}_{m,J_{k}}$, and moreover
$\mathcal{S}=\langle\mathcal{S}_{1},\dots,\mathcal{S}_{\ell}\rangle$ is a
stabilizer group. Then the centralizer of $\mathcal{S}$ satisfies
$\mathcal{C}(\mathcal{S})=\langle\mathcal{C}(\mathcal{S}_{1})\cap\mathcal{J}_{m,J_{1}},\dots,\mathcal{C}(\mathcal{S}_{\ell})\cap\mathcal{J}_{m,J_{\ell}}\rangle$.
To prove this, note that for any fixed $k$, if
$\gamma\in\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}$, then from
the support of $\gamma$ we deduce using Eq. (9) that
$\gamma\in\mathcal{C}(\mathcal{S})$, showing
$\langle\mathcal{C}(\mathcal{S}_{1})\cap\mathcal{J}_{m,J_{1}},\dots,\mathcal{C}(\mathcal{S}_{\ell})\cap\mathcal{J}_{m,J_{\ell}}\rangle\subseteq\mathcal{C}(\mathcal{S})$.
Next let $\gamma\in\mathcal{C}(\mathcal{S})$. As $\gamma$ is an element of
$\mathcal{J}_{m}$, we have a decomposition
$\gamma=\prod_{k=1}^{\ell}\gamma_{k}$, where for each $k$ we have
$\gamma_{k}\in\mathcal{J}_{m,J_{k}}$. Fixing a $k$, we note that for all
$\gamma^{\prime}\in\mathcal{S}_{k}$, $\text{Com}(\gamma,\gamma^{\prime})=0$
implies $\text{Com}(\gamma_{k},\gamma^{\prime})=0$ (this uses
$\text{Com}(\gamma_{k^{\prime}},\gamma^{\prime})=0$ for all $k^{\prime}\neq k$
from support conditions), and this shows
$\mathcal{C}(\mathcal{S})\subseteq\langle\mathcal{C}(\mathcal{S}_{1})\cap\mathcal{J}_{m,J_{1}},\dots,\mathcal{C}(\mathcal{S}_{\ell})\cap\mathcal{J}_{m,J_{\ell}}\rangle$.
This discussion is useful in proving the following lemma:
###### Lemma C.2.
Suppose we are given
1. (i)
$m,N\geq\ell\geq 1$, $J_{1},\dots,J_{\ell}$ is a partition of $m$ Majoranas,
and $J^{\prime}_{1},\dots,J^{\prime}_{\ell}$ is a partition of $N$ qubits.
2. (ii)
$\mathcal{S}_{1},\dots,\mathcal{S}_{\ell}$ are stabilizer groups of
$\mathcal{J}_{m}$, satisfying $\mathcal{S}_{k}\subseteq\mathcal{J}_{m,J_{k}}$
for all $k$, and
$\mathcal{S}=\langle\mathcal{S}_{1},\dots,\mathcal{S}_{\ell}\rangle$ is a
stabilizer group.
3. (iii)
For all $k$, and for all
$\gamma\in\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}$, $\gamma$ has
even weight.
4. (iv)
$\mu:\mathcal{C}(\mathcal{S})\rightarrow\mathcal{\hat{P}}_{N}$ is a map with
the properties: (a)
$\mu(\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}})\subseteq\hat{\mathcal{P}}_{N,J^{\prime}_{k}}$
for each $k$, and (b) if $\gamma=\prod_{k=1}^{\ell}\gamma_{k}$ with each
$\gamma_{k}\in\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}$, then
$\mu(\gamma)=\prod_{k=1}^{\ell}\mu(\gamma_{k})$.
Denote by $\mu_{k}$ the restriction of $\mu$ to
$\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}$. Then the following
hold:
1. (a)
Two distinct $\mu$ cannot give rise to the same family
$\\{\mu_{k}\\}_{k=1}^{\ell}$.
2. (b)
If $\ell>1$, then for each $k$, we have $\mu_{k}(iI)=[I]$, and
$\mu_{k}(\gamma)=\mu_{k}(\gamma^{\prime})$ whenever
$\gamma,\gamma^{\prime}\in\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}$,
and $\gamma,\gamma^{\prime}$ are equivalent up to phase factors. Moreover, if
we are given a family of maps
$\mu_{k}:\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}\rightarrow\hat{\mathcal{P}}_{N,J^{\prime}_{k}}$
for $1\leq k\leq\ell$, satisfying these two conditions, then there exists a
corresponding $\mu$ satisfying (iv), and it is unique.
3. (c)
$\mu$ is a surjective, commutation preserving homomorphism if and only if
$\mu_{k}:\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}\rightarrow\hat{\mathcal{P}}_{N,J^{\prime}_{k}}$
is a surjective, commutation preserving homomorphism, for each $k$.
* Proof.
1. (a)
Since
$\mathcal{C}(\mathcal{S})=\langle\mathcal{C}(\mathcal{S}_{1})\cap\mathcal{J}_{m,J_{1}},\dots,\mathcal{C}(\mathcal{S}_{\ell})\cap\mathcal{J}_{m,J_{\ell}}\rangle$,
it implies that if $\gamma\in\mathcal{C}(\mathcal{S})$, then
$\gamma=\prod_{k=1}^{\ell}\gamma_{k}$ with each
$\gamma_{k}\in\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}$. Then by
property (b) of (iv), we get
$\mu(\gamma)=\prod_{k=1}^{\ell}\mu_{k}(\gamma_{k})$ showing that distinct
$\mu$ cannot give rise to the same family $\\{\mu_{k}\\}_{k=1}^{\ell}$.
2. (b)
If there exists $k$ such that $\mu_{k}(iI)\neq[I]$, then property (a) of (iv)
does not hold. Now suppose there exists $k$ and
$\gamma_{k},\gamma^{\prime}_{k}\in\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}$,
such that $\gamma_{k},\gamma^{\prime}_{k}$ are distinct and equivalent up to
phase, and $\mu_{k}(\gamma_{k})\neq\mu_{k}(\gamma^{\prime}_{k})$. Consider any
$\gamma\in\mathcal{C}(\mathcal{S})$ given by
$\gamma=\prod_{s=1}^{\ell}\gamma_{s}$, where for all $s\neq k$,
$\gamma_{s}\in\mathcal{C}(\mathcal{S}_{s})\cap\mathcal{J}_{m,J_{s}}$ are
chosen arbitrarily. By adjusting the phases, one also has
$\gamma=\prod_{s=1}^{\ell}\gamma^{\prime}_{s}$ with
$\gamma_{s},\gamma^{\prime}_{s}$ equivalent up to phase for all $s$. Then by
property (b) of (iv) we get
$\prod_{s=1}^{\ell}\mu_{s}(\gamma_{s})=\prod_{s=1}^{\ell}\mu_{s}(\gamma^{\prime}_{s})$,
and the support conditions imply
$\mu_{k}(\gamma_{k})=\mu_{k}(\gamma^{\prime}_{k})$ giving a contradiction. To
prove the second part, assume we are given a family
$\\{\mu_{k}\\}_{k=1}^{\ell}$ satisfying the two conditions of the first part.
Take any $\gamma\in\mathcal{C}(\mathcal{S})$, and suppose
$\gamma=\prod_{k=1}^{\ell}\gamma_{k}$ and
$\gamma=\prod_{k=1}^{\ell}\gamma^{\prime}_{k}$ are two different
representations of $\gamma$, where
$\gamma_{k},\gamma^{\prime}_{k}\in\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}$
for all $k$. From support conditions, we conclude that
$\gamma_{k},\gamma^{\prime}_{k}$ are equivalent up to phase for each $k$. Thus
$\prod_{k=1}^{\ell}\mu_{k}(\gamma_{k})=\prod_{k=1}^{\ell}\mu_{k}(\gamma^{\prime}_{k})$,
and so we can define $\mu(\gamma):=\prod_{k=1}^{\ell}\mu_{k}(\gamma_{k})$
unambiguously. This definition of $\mu$ satisfies (iv), and it is unique by
(a).
3. (c)
First assume that $\mu$ is a surjective, commutation preserving homomorphism,
and fix any $k$. Then if
$\gamma,\gamma^{\prime}\in\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}$,
we have
$\text{Com}(\mu_{k}(\gamma),\mu_{k}(\gamma^{\prime}))=\text{Com}(\mu(\gamma),\mu(\gamma^{\prime}))=\text{Com}(\gamma,\gamma^{\prime})$,
and also
$\mu_{k}(\gamma\gamma^{\prime})=\mu(\gamma\gamma^{\prime})=\mu(\gamma)\mu(\gamma^{\prime})=\mu_{k}(\gamma)\mu_{k}(\gamma^{\prime})$,
showing that $\mu_{k}$ is a commutation preserving homomorphism. Next let
$[q]\in\hat{\mathcal{P}}_{N,J^{\prime}_{k}}$. By surjectivity of $\mu$, we
have $\gamma\in\mathcal{C}(\mathcal{S})$ such that $\mu(\gamma)=[q]$. Since we
can write $\gamma=\prod_{s=1}^{\ell}\gamma_{s}$ with
$\gamma_{s}\in\mathcal{C}(\mathcal{S}_{s})\cap\mathcal{J}_{m,J_{s}}$ for each
$s$, it implies $\mu(\gamma)=\prod_{s=1}^{\ell}\mu(\gamma_{s})=[q]$. But from
property (a) of (iv), it must be that $\mu(\gamma_{s})=[I]$ for all $s\neq k$,
and so $\mu(\gamma_{k})=[q]$ showing that $\mu_{k}$ is surjective. For the
converse direction, suppose
$\mu_{k}:\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}\rightarrow\hat{\mathcal{P}}_{N,J^{\prime}_{k}}$
is a surjective, commutation preserving homomorphism for all $k$. Take
$\gamma,\gamma^{\prime}\in\mathcal{C}(\mathcal{S})$, and we can write
$\gamma=\prod_{k=1}^{\ell}\gamma_{k}$, and
$\gamma^{\prime}=\prod_{k=1}^{\ell}\gamma^{\prime}_{k}$, with
$\gamma_{k},\gamma^{\prime}_{k}\in\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}$
for each $k$, and thus
$\gamma\gamma^{\prime}=\pm\prod_{k=1}^{\ell}\gamma_{k}\gamma^{\prime}_{k}$. It
follows that
$\mu(\gamma\gamma^{\prime})=\mu(\pm\gamma_{1}\gamma^{\prime}_{1})\prod_{k=2}^{\ell}\mu(\gamma_{k}\gamma^{\prime}_{k})=\prod_{k=1}^{\ell}\mu_{k}(\gamma_{k}\gamma^{\prime}_{k})=\prod_{k=1}^{\ell}\mu_{k}(\gamma_{k})\mu_{k}(\gamma^{\prime}_{k})$,
where the first equality is by property (b) of (iv), and the second equality
uses that $\mu_{1}(iI)=[I]$ as $\mu_{1}$ is a surjective, commutation
preserving homomorphism (see the first paragraph of the proof of Lemma C.1 for
similar reasoning). Similarly,
$\mu(\gamma)=\prod_{k=1}^{\ell}\mu_{k}(\gamma_{k})$ and
$\mu(\gamma^{\prime})=\prod_{k=1}^{\ell}\mu_{k}(\gamma^{\prime}_{k})$, from
which we get $\mu(\gamma)\mu(\gamma^{\prime})=\mu(\gamma\gamma^{\prime})$. We
also have
$\text{Com}(\mu(\gamma),\mu(\gamma^{\prime}))=\sum_{k=1}^{\ell}\text{Com}(\mu_{k}(\gamma_{k}),\mu_{k}(\gamma^{\prime}_{k}))=\sum_{k=1}^{\ell}\text{Com}(\gamma_{k},\gamma^{\prime}_{k})=\text{Com}(\gamma,\gamma^{\prime})$,
where the last equality is by (iii). This shows that $\mu$ is a commutation
preserving homomorphism. To prove $\mu$ is surjective, let
$[q]\in\mathcal{\hat{P}}_{N}$. Then by the direct sum representation of
$\mathcal{\hat{P}}_{N}$, one can write $[q]=\prod_{k=1}^{\ell}[q_{k}]$ with
$[q_{k}]\in\hat{\mathcal{P}}_{N,J^{\prime}_{k}}$ for all $k$. By surjectivity
of $\mu_{k}$, we have that for all $k$, there exists
$\gamma_{k}\in\mathcal{C}(\mathcal{S}_{k})\cap\mathcal{J}_{m,J_{k}}$
satisfying $\mu_{k}(\gamma_{k})=[q_{k}]$. Defining
$\gamma=\prod_{k=1}^{\ell}\gamma_{k}$, gives
$\mu(\gamma)=\prod_{k=1}^{\ell}\mu_{k}(\gamma_{k})=\prod_{k=1}^{\ell}[q_{k}]=[q]$.
∎
Using the lemmas and facts presented above in this section, it is now easy to
show that the Majorana and the qubit surface codes of Section 3.2 and Section
3.3 respectively, encode the same number of qubits. Recall that the Majorana
code is defined by the stabilizer group
$\mathcal{S}(G)\subseteq\mathcal{J}_{m}$, generated by the vertex stabilizers
$S_{v}$ and the face stabilizers $S_{f}$, for all $v\in V$ and $f\in F$, as
defined by Eqs. (12), (13). We partition the Majoranas into $|V|$ subsets
$\\{J_{v}\\}_{v\in V}$, with each subset indexed by a vertex $v\in V$. If $v$
is an even degree vertex, then
$J_{v}=\\{\gamma_{[h]_{\tau}}:[h]_{\tau}\subseteq v\\}$, and if $v$ is an odd
degree vertex, then $J_{v}=\\{\gamma_{[h]_{\tau}}:[h]_{\tau}\subseteq
v\\}\cup\\{\bar{\gamma}_{v}\\}$. Similarly, we also partition the $N$ qubits
into $|V|$ subsets $\\{J^{\prime}_{v}\\}_{v\in V}$, where $J^{\prime}_{v}$
consists of the $N_{v}$ qubits placed at $v$ for the qubit surface code (see
Definition 3.3). Using this, we complete the proof of Corollary 3.8 below.
* Proof of Corollary 3.8.
Let $\mathcal{S}_{v}$ be the stabilizer group generated by $S_{v}$, and note
that $\mathcal{S}_{v}\subseteq\mathcal{J}_{m,J_{v}}$ for all $v$. Let
$\mathcal{S}_{\text{maj}}=\langle\\{\mathcal{S}_{v}:v\in V\\}\rangle$, and it
is a stabilizer group as $\mathcal{S}_{\text{maj}}\subseteq\mathcal{S}(G)$.
Define $m_{v}=|J_{v}|$ and note that $m_{v}$ is even. If
$\gamma\in\mathcal{C}(\mathcal{S}_{v})\cap\mathcal{J}_{m,J_{v}}$ then
$|\mathrm{supp}\left({\gamma}\right)|$ is even, as $\gamma$ must commute with
$S_{v}$. Thus conditions (i)-(iii) of Lemma C.2 are satisfied (with
$\ell=|V|$). Our goal is to define a family of surjective, commutation
preserving homomorphisms
$\mu_{v}:\mathcal{C}(\mathcal{S}_{v})\cap\mathcal{J}_{m,J_{v}}\rightarrow\hat{\mathcal{P}}_{N,J^{\prime}_{v}}$,
for each vertex $v$. Fixing a $v\in V$, note that any such $\mu_{v}$ must
satisfy $\mu_{v}(iI)=[I]$ (one argues similarly as for $\mu_{1}$ in the proof
of the converse part of Lemma C.2(c)), and this ensures
$\mu_{v}(\gamma)=\mu_{v}(\gamma^{\prime})$ whenever $\gamma,\gamma^{\prime}$
are equivalent up to phase. Thus the two conditions in Lemma C.2(b) are
satisfied by $\mu_{v}$, so given such a family $\\{\mu_{v}\\}_{v\in V}$ one
concludes that there exists a unique map
$\mu:\mathcal{C}(\mathcal{S}_{\text{maj}})\rightarrow\mathcal{\hat{P}}_{N}$
that gives rise to $\\{\mu_{v}\\}_{v\in V}$. This ensures $\mu$ satisfies
condition (iv) of Lemma C.2, and then Lemma C.2(c) allows us to conclude that
$\mu$ is a surjective, commutation preserving homomorphism.
Starting with any half-edge $h\in v$, assign the Majoranas
$\gamma_{[h]_{\tau}},\gamma_{[\tau\rho
h]_{\tau}},\dots,\gamma_{[(\tau\rho)^{\text{deg}(v)-1}h]_{\tau}}$ in $J_{v}$
the labels $1,\dots,\text{deg}(v)$ respectively, and if $v$ is an odd-degree
vertex, assign the Majorana $\bar{\gamma}_{v}$ the label $\text{deg}(v)+1$
(note that this is independent of the labeling scheme discussed in Section
3.2). Also consider the list of sector operators
$\mathcal{T}_{v}=\\{q_{[h]_{\rho}},q_{[\tau\rho
h]_{\rho}},\dots,q_{[(\tau\rho)^{\text{deg}(v)-1}h]_{\rho}}\\}\subseteq\mathcal{C}(\mathcal{S}_{v})\cap\mathcal{J}_{m,J_{v}}$,
and let
$\chi_{v}:\mathcal{T}_{v}\rightarrow\mathcal{\hat{P}}_{N,J^{\prime}_{v}}$ be
the map that assigns each sector operator to an extremal CAL on $N_{v}$ qubits
(as discussed in Section 3.3). Let
$\Xi_{v}\in\mathbb{F}_{2}^{(\text{deg}(v)+1)\times m_{v}}$ be the matrix whose
last row is all ones, and the top $\text{deg}(v)$ rows are defined by
$(\Xi_{v})_{ij}=1$ if and only if the sector operator
$q_{[(\tau\rho)^{i-1}h]_{\rho}}$ contains the Majorana with label $j$. Let
$\mathcal{V}_{v}$ denote the row space of $\Xi_{v}$, and since all rows of
$\Xi_{v}$ are binary vectors of even Hamming weight, Lemma B.12 implies that
$\mathcal{V}_{v}\subseteq\\{x\in\mathbb{F}_{2}^{m_{v}}:\text{Ham}(x)\text{ is
even}\\}$. Also define
$\mathcal{N}_{v}=\\{y\in\mathbb{F}_{2}^{\text{deg}(v)+1}:y\Xi_{v}=\vec{0}\in\mathbb{F}_{2}^{m_{v}}\\}$.
We want to construct $\mu_{v}$ as an extension of $\chi_{v}$. Since elements
of $\mathcal{C}(\mathcal{S}_{v})\cap\mathcal{J}_{m,J_{v}}$ which are
equivalent up to phase, must get mapped to the same element of
$\hat{\mathcal{P}}_{N,J^{\prime}_{v}}$, we may represent any
$\gamma\in\mathcal{C}(\mathcal{S}_{v})\cap\mathcal{J}_{m,J_{v}}$ by a row
vector $x\in\mathbb{F}_{2}^{m_{v}}$ of even Hamming weight, where $x_{j}=1$ if
and only if $\gamma$ contains the Majorana with label $j$. Given any such $x$,
let $y\in\mathbb{F}_{2}^{\text{deg}(v)+1}$ be a solution of $x=y\Xi_{v}$. Then
for any $\gamma$ represented by $x$ we define
$\mu_{v}(\gamma)=\prod_{j:y_{j}=1}\chi_{v}(q_{[(\tau\rho)^{j-1}h]_{\rho}}).$
(78)
To show that $\mu_{v}$ is a well defined map, we need to check (i) for all
$x\in\mathbb{F}_{2}^{m_{v}}$ of even Hamming weight, the equation $x=y\Xi_{v}$
has a solution $y$, and (ii) if multiple solutions exist for the same $x$,
then Eq. (78) does not depend on the solution $y$ picked. Observe that
$\mathrm{rank}\left({\Xi_{v}}\right)=\text{deg}(v)-1$ if $\text{deg}(v)$ is
even, and $\mathrm{rank}\left({\Xi_{v}}\right)=\text{deg}(v)$ if
$\text{deg}(v)$ is odd. Thus $\mathrm{rank}\left({\Xi_{v}}\right)=m_{v}-1$ in
both cases, and we conclude that
$|\mathcal{V}_{v}|=2^{m_{v}-1}=|\\{x\in\mathbb{F}_{2}^{m_{v}}:\text{Ham}(x)\text{
is even}\\}|$. This proves (i). To prove (ii), let
$\vec{1}\in\mathbb{F}_{2}^{\text{deg}(v)}$ be the all-ones row vector, and we
consider the $\text{deg}(v)$ odd or even cases separately. First suppose
$\text{deg}(v)$ is odd. Then
$\mathcal{N}_{v}=\text{span}\\{\begin{bmatrix}\vec{1}&0\end{bmatrix}\\}$, and
thus if $y\neq y^{\prime}$ satisfy $x=y\Xi_{v}=y^{\prime}\Xi_{v}$, then
$y-y^{\prime}=\begin{bmatrix}\vec{1}&0\end{bmatrix}$. Since
$\prod_{j=1}^{\text{deg}(v)}\chi_{v}(q_{[(\tau\rho)^{j-1}h]_{\rho}})=[I]$ by
Corollary B.11(a), it follows that Eq. (78) gives the same $\mu_{v}(\gamma)$
whether we choose $y$ or $y^{\prime}$. Next suppose $\text{deg}(v)$ is even.
Then
$\mathcal{N}_{v}=\text{span}\\{\begin{bmatrix}\vec{1}&0\end{bmatrix},\;\begin{bmatrix}z&1\end{bmatrix}\\}$,
where $z\in\mathbb{F}_{2}^{\text{deg}(v)}$ satisfies $z_{j}=1$ if and only if
$j$ is even. In this case, both
$\prod_{j=1}^{\text{deg}(v)}\chi_{v}(q_{[(\tau\rho)^{j-1}h]_{\rho}})=[I]$ and
$\prod_{j=1}^{\text{deg}(v)/2}\chi_{v}(q_{[(\tau\rho)^{2j-1}h]_{\rho}})=[I]$
by Corollary B.11(b), and so again Eq. (78) is well defined. We also see that
$\mu_{v}$ is an extension of $\chi_{v}$. For any fixed $j$, consider the
sector operator $q_{[(\tau\rho)^{j-1}h]_{\rho}}$ and let $x$ be its
representation. Then a solution to $x=y\Xi_{v}$ is given by $y$ satisfying
$y_{j^{\prime}}=1$ if and only if $j^{\prime}=j$, and thus by Eq. (78) we get
$\mu_{v}(q_{[(\tau\rho)^{j-1}h]_{\rho}})=\chi_{v}(q_{[(\tau\rho)^{j-1}h]_{\rho}})$.
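Solving $x=y\Xi_{v}$ over $\mathbb{F}_{2}$ is a small Gaussian-elimination task. The sketch below is illustrative only (our own representation: vectors as 0/1 lists, `Xi` standing in for $\Xi_{v}$), not the paper's construction; it returns one solution $y$ whenever $x$ lies in the row space, which by the rank argument above covers every even-weight $x$.

```python
# Illustrative F_2 solver for x = y * Xi (row-vector convention), used
# conceptually when evaluating Eq. (78). Names and data layout are ours.

def solve_f2(Xi, x):
    """Return y with sum_i y_i*Xi[i] == x (mod 2), or None if x is not
    in the row space of Xi."""
    m = len(Xi)
    pivots = {}                          # pivot column -> (row, combo)

    def reduce(vec, combo):
        # Left-to-right elimination; each stored row leads at its pivot
        # column, so earlier columns are never re-polluted.
        for j in range(len(vec)):
            if vec[j] and j in pivots:
                prow, pcombo = pivots[j]
                vec = [(a + b) % 2 for a, b in zip(vec, prow)]
                combo = [(a + b) % 2 for a, b in zip(combo, pcombo)]
        return vec, combo

    for i, row in enumerate(Xi):
        unit = [1 if k == i else 0 for k in range(m)]
        vec, combo = reduce(list(row), unit)
        if any(vec):
            pivots[vec.index(1)] = (vec, combo)

    vec, y = reduce(list(x), [0] * m)
    return y if not any(vec) else None

# Toy Xi with even-weight rows and an all-ones last row (rank 3 = m - 1).
Xi = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 1, 1, 1]]
print(solve_f2(Xi, [1, 0, 0, 1]))     # [1, 1, 1, 0]
print(solve_f2(Xi, [1, 0, 0, 0]))     # None (odd weight: not in row space)
```

Tracking the combination vector alongside each reduced row is what lets the solver report $y$ itself rather than only deciding solvability.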
We now show that the map $\mu_{v}$ is a surjective, commutation preserving
homomorphism. Let
$\gamma,\gamma^{\prime}\in\mathcal{C}(\mathcal{S}_{v})\cap\mathcal{J}_{m,J_{v}}$
be represented by $x,x^{\prime}\in\mathbb{F}_{2}^{m_{v}}$ respectively, and
suppose $y,y^{\prime}$ satisfy $x=y\Xi_{v}$, $x^{\prime}=y^{\prime}\Xi_{v}$.
Then we have $(x+x^{\prime})=(y+y^{\prime})\Xi_{v}$, and
$\gamma\gamma^{\prime}$ is represented by $x+x^{\prime}$. From Eq. (78) we
conclude
$\mu_{v}(\gamma\gamma^{\prime})=\mu_{v}(\gamma)\mu_{v}(\gamma^{\prime})$
proving $\mu_{v}$ is a homomorphism. Since the CAL
$\\{\chi_{v}(q_{[h]_{\rho}}),\dots,\chi_{v}(q_{[(\tau\rho)^{\text{deg}(v)-1}h]_{\rho}})\\}$
generates $\mathcal{\hat{P}}_{N,J^{\prime}_{v}}$ by Corollary B.10, $\mu_{v}$
being a homomorphism also implies it is surjective. Next, the fact that
$x=y\Xi_{v}$ always has a solution for any $x\in\mathbb{F}_{2}^{m_{v}}$ of
even Hamming weight implies
$\mathcal{C}(\mathcal{S}_{v})\cap\mathcal{J}_{m,J_{v}}=\langle
iI,S_{v},q_{[h]_{\rho}},\dots,q_{[(\tau\rho)^{\text{deg}(v)-1}h]_{\rho}}\rangle$,
and from Eq. (78) we have $\mu_{v}(iI)=\mu_{v}(S_{v})=[I]$. Thus we see that
$\mu_{v}$ is commutation preserving on a generating set of
$\mathcal{C}(\mathcal{S}_{v})\cap\mathcal{J}_{m,J_{v}}$, and by the same
argument as in the paragraph below the proof of Lemma C.1 we conclude that
$\mu_{v}$ is commutation preserving.
As mentioned in the first paragraph of the proof, this finishes the
construction of $\mu$ whose restriction to
$\mathcal{C}(\mathcal{S}_{v})\cap\mathcal{J}_{m,J_{v}}$ is $\mu_{v}$ for each
$v$. Now after identifying $\mathcal{J}_{m}$ with $\mathcal{P}_{m/2}$ using
the Jordan-Wigner map (which is a commutation preserving isomorphism), we can
apply Lemma C.1 and conclude the existence of a surjective homomorphism
$\kappa:\mathcal{C}(\mathcal{S}_{\text{maj}})\rightarrow\mathcal{P}_{N}$ such
that $\hat{\kappa}=\mu$, and by Lemma C.1(f) we conclude that the stabilizer
codes $\mathcal{S}(G)$ and $\kappa(\mathcal{S}(G))$ encode the same number of
logical qubits. It remains to show that $\mu(\mathcal{S}(G))$ equals the
stabilizer group of the qubit surface code. By Definition 3.3 and Eq. (16), for
each $f\in F$ there is a stabilizer of the qubit surface code given by
$\prod_{[h]_{\rho}\subseteq f}\chi_{v([h]_{\rho})}(q_{[h]_{\rho}})$, where for
each sector $[h]_{\rho}$, $v([h]_{\rho})$ is the unique vertex containing the
sector. But by Eq. (78) this is equal to $\mu(S_{f})$, and the conclusion
follows as $\mu(S_{v})=[I]$ for all $v\in V$. ∎
## Appendix D A more general family of cyclic qubit stabilizer codes
In this section we show that the cyclic toric code introduced in Section 3.3
is part of a much larger 2-parameter family of cyclic qubit
stabilizer codes, which in turn are special cases of a 4-parameter cyclic code
family. We will prove a few facts about these code families. The results here
also provide a completely different proof of the number of encoded qubits of
the cyclic toric code (see Theorem 3.9).
### D.1 A four parameter cyclic code family
We begin by choosing integers $N,p,q,r$ satisfying the conditions (i)
$p,q,r\geq 1$, (ii) $r,q,p+r$ all distinct, and (iii) $\max\\{q,p+r\\}+1\leq
N$, which ensures $N\geq 4$. We will always assume in this section that
$N,p,q,r$ satisfy these conditions. Now consider a Pauli string
$a_{0}a_{1}\dots a_{N-1}$ of length $N$ such that
$a_{k}=\begin{cases}Z&\;\;\text{if }k\in\\{0,q\\},\\\ X&\;\;\text{if
}k\in\\{r,p+r\\},\\\ I_{2}&\;\;\text{otherwise}.\end{cases}$ (79)
Let $S_{i}$ denote the Pauli string in Eq. (79) cyclically shifted by $i$
indices to the right. Thus $S_{0}$ is the unshifted Pauli string. Let
$\mathcal{S}=\\{S_{i}:0\leq i\leq N-1\\}$ denote the set of all cyclically
shifted Pauli strings, and $\langle\mathcal{S}\rangle$ be the subgroup
generated by $\mathcal{S}$. It will be convenient to associate subsets of
$\mathcal{N}:=\\{0,\dots,N-1\\}$ with subsets of $\mathcal{S}$, so we
introduce the map $\Gamma:2^{\mathcal{N}}\rightarrow 2^{\mathcal{S}}$ defined
as $\Gamma(\mathcal{X})=\\{S_{x}:x\in\mathcal{X}\\}$, for all $\mathcal{X}\in
2^{\mathcal{N}}$.
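For concreteness, the construction of Eq. (79) and its cyclic shifts can be sketched as follows. The string representation over $\{I,X,Z\}$ and the function names are our own illustration (phases are ignored throughout).

```python
# Build S_0 of Eq. (79) as a string over {'I','X','Z'} (phases ignored),
# and S_i as its cyclic right-shift by i positions.

def s0(N, p, q, r):
    a = ['I'] * N
    a[0] = 'Z'
    a[q] = 'Z'
    a[r] = 'X'
    a[p + r] = 'X'          # conditions (i)-(iii) keep all indices < N
    return ''.join(a)

def shift(s, i):
    # element at index k moves to index k + i (mod N)
    i %= len(s)
    return s[-i:] + s[:-i]

# N=8, p=2, q=4, r=1: here r < p+r < q, i.e. subfamily B (and q = p+2r).
print(s0(8, 2, 4, 1))            # ZXIXZIII
print(shift(s0(8, 2, 4, 1), 1))  # IZXIXZII
```

A full shift by $N$ returns the original string, matching the cyclicity used repeatedly in the proofs below.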
If all the cyclic shifts of this Pauli string commute with one another, they
can be used to define a stabilizer group with each shifted Pauli string
defining a stabilizer up to a phase factor of $1$ or $-1$ (see Lemma 3.1(a)
for the equivalent statement about the Majorana group, which also applies for
the Pauli group due to identification of the two groups via the Jordan-Wigner
transform). The quantum stabilizer code that results will be called a
4-parameter $(N,p,q,r)$ cyclic code. Depending on the values of $r,q$, and
$p+r$, there are three distinct subfamilies of these codes. The explicit form
of the Pauli string $S_{0}$ corresponding to these three subfamilies are given
below:
$S_{0}=\begin{cases}ZI^{\otimes r-1}XI^{\otimes q-r-1}ZI^{\otimes
p+r-q-1}XI^{\otimes N-p-r-1}\;\;\;\;&\text{if
}r<q<p+r,\;\;\;\;\;\;\;\;\text{(subfamily A)}\\\ ZI^{\otimes r-1}XI^{\otimes
p-1}XI^{\otimes q-p-r-1}ZI^{\otimes N-q-1}\;\;\;\;&\text{if
}r<p+r<q,\;\;\;\;\;\;\;\;\text{(subfamily B)}\\\ ZI^{\otimes q-1}ZI^{\otimes
r-q-1}XI^{\otimes p-1}XI^{\otimes N-p-r-1}\;\;\;\;&\text{if
}q<r<p+r.\;\;\;\;\;\;\;\;\;\text{(subfamily C)}\end{cases}$ (80)
We would like to understand the conditions on the values of $N,p,q,r$ under
which all cyclic shifts of $S_{0}$ for each subfamily in Eq. (80) commute with
one another, which is necessary to define a valid stabilizer group. The
following lemma is important towards achieving this goal.
###### Lemma D.1.
For any Pauli string defined in Eq. (79), $\langle\mathcal{S}\rangle$ is a
commuting subgroup if and only if the set $\\{S_{0},S_{q},S_{r},S_{p+r}\\}$ is
pairwise commuting.
* Proof.
In this proof all qubit indices will be evaluated modulo $N$. If
$\langle\mathcal{S}\rangle$ is a commuting subgroup, the set
$\\{S_{0},S_{q},S_{r},S_{p+r}\\}$ is pairwise commuting because it is a subset
of $\langle\mathcal{S}\rangle$. For the other direction, assume that
$\\{S_{0},S_{q},S_{r},S_{p+r}\\}$ is a pairwise commuting set. To show that
$\langle\mathcal{S}\rangle$ is a commuting subgroup, it suffices to show that
$\mathcal{S}$ is pairwise commuting, which is in turn equivalent to showing
that $S_{0}$ commutes with each element in $\mathcal{S}$, because of
cyclicity. For contradiction, assume that this is not true, so there exists
$S_{\ell}\in\mathcal{S}$ that anticommutes with $S_{0}$. This is only possible
if there exists a qubit index $0\leq n\leq N-1$, such that both $S_{0}$ and
$S_{\ell}$ contain a Pauli different from $I_{2}$ (not necessarily the same)
at that qubit, and so this implies $\ell\in\\{\pm p,\pm q,\pm
r,\pm(p+r),\pm(q-r),\pm(p+r-q)\\}$. Now for any index $\ell^{\prime}$, it
follows using cyclicity that $S_{0}$ commutes with $S_{\ell^{\prime}}$ if and
only if $S_{0}$ commutes with $S_{-\ell^{\prime}}$, because the relative
shifts in both cases are the same. Also by assumption $S_{0}$ commutes with
each of $S_{q}$, $S_{r}$, and $S_{p+r}$. Thus we deduce that
$\ell\in\\{p,q-r,p+r-q\\}$. Using cyclicity again, it also follows by the same
reasoning that (i) $S_{0}$ and $S_{p}$ are commuting if and only if $S_{r}$
and $S_{p+r}$ are commuting, (ii) $S_{0}$ and $S_{q-r}$ are commuting if and
only if $S_{r}$ and $S_{q}$ are commuting, and (iii) $S_{0}$ and $S_{p+r-q}$
are commuting if and only if $S_{q}$ and $S_{p+r}$ are commuting. But this is
a contradiction as $S_{q}$, $S_{r}$ and $S_{p+r}$ are pairwise commuting by
assumption. ∎
Because of Lemma D.1, given any values of $N,p,q,r$, one only needs to check
whether all six pairs $\\{S_{0},S_{q}\\}$, $\\{S_{0},S_{r}\\}$,
$\\{S_{0},S_{p+r}\\}$, $\\{S_{q},S_{r}\\}$, $\\{S_{q},S_{p+r}\\}$ and
$\\{S_{r},S_{p+r}\\}$ are commuting or not. Each of these checks has
complexity $O(1)$, independent of $N$, because each Pauli string in
$\\{S_{0},S_{q},S_{r},S_{p+r}\\}$ has bounded support, and so determining
whether $\langle\mathcal{S}\rangle$ is a commuting subgroup or not also has
complexity $O(1)$ for any input values of $N,p,q,r$. We will say that a
quadruple $(N,p,q,r)$ is consistent if and only if $\langle\mathcal{S}\rangle$
is a commuting subgroup. For example, the following special case is of
particular interest.
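The six-pair test suggested by Lemma D.1 is easy to sketch in code. The dense string representation below is our own (phases ignored); two Pauli strings over $\{I,X,Z\}$ anticommute exactly when the number of positions holding an $X$ in one and a $Z$ in the other is odd.

```python
# Consistency test for a quadruple (N, p, q, r) via Lemma D.1: it
# suffices to check that {S_0, S_q, S_r, S_{p+r}} pairwise commute.
from itertools import combinations

def s0(N, p, q, r):
    # S_0 of Eq. (79) as a string over {'I','X','Z'}, phases ignored.
    a = ['I'] * N
    a[0] = 'Z'; a[q % N] = 'Z'; a[r % N] = 'X'; a[(p + r) % N] = 'X'
    return ''.join(a)

def shift(s, i):
    # cyclic right-shift by i
    i %= len(s)
    return s[-i:] + s[:-i]

def commute(u, v):
    # a "clash" is a position where one string has X and the other Z
    return sum(a != 'I' and b != 'I' and a != b for a, b in zip(u, v)) % 2 == 0

def consistent(N, p, q, r):
    strings = [shift(s0(N, p, q, r), i) for i in (0, q, r, p + r)]
    return all(commute(u, v) for u, v in combinations(strings, 2))

print(consistent(8, 2, 4, 1))   # True  (q = p + 2r, cf. Lemma D.2)
print(consistent(9, 2, 5, 1))   # False
```

Each call touches only four bounded-support strings, matching the $O(1)$ claim once strings are stored sparsely; the dense representation here is just for readability.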
###### Lemma D.2.
A quadruple $(N,p,q,r)$ satisfying $r<p+r<q$ is consistent if $q=p+2r$, and
corresponds to a cyclic code of subfamily B.
* Proof.
In this proof all qubit indices will be implicitly assumed to be modulo $N$.
If we prove that the quadruple is consistent then we know from Eq. (80) that
the cyclic code belongs to subfamily B. First note that the Pauli string
$S_{0}$ has the form $ZI^{\otimes r-1}XI^{\otimes p-1}XI^{\otimes
r-1}ZI^{\otimes N-p-2r-1}$. Let $\begin{bmatrix}a&b\end{bmatrix}$ be the
symplectic representation of $S_{0}$ and
$\begin{bmatrix}a^{\prime}&b^{\prime}\end{bmatrix}$ be the symplectic
representation of $S_{m}$, for some $0\leq m\leq N-1$, where
$a,a^{\prime},b,b^{\prime}\in\mathbb{F}_{2}^{N}$. Then $S_{0}$ and $S_{m}$
commute if and only if $(a\cdot b^{\prime}+a^{\prime}\cdot b)=0$, where the
left hand side is evaluated over $\mathbb{F}_{2}$. Now note that $a_{i}=1$ if
and only if $i\in\\{r,p+r\\}$, $b_{i}=1$ if and only if $i\in\\{0,p+2r\\}$,
$a^{\prime}_{i}=1$ if and only if $i\in\\{m+r,m+p+r\\}$, and
$b^{\prime}_{i}=1$ if and only if $i\in\\{m,m+p+2r\\}$. Then
$\begin{split}a\cdot b^{\prime}+a^{\prime}\cdot
b=&\left(\delta_{r,m}+\delta_{r,m+p+2r}+\delta_{p+r,m}+\delta_{p+r,m+p+2r}\right)\\\
&+\left(\delta_{m+r,0}+\delta_{m+r,p+2r}+\delta_{m+p+r,0}+\delta_{m+p+r,p+2r}\right)\\\
=&\left(\delta_{r,m}+\delta_{0,m+p+r}+\delta_{p+r,m}+\delta_{0,m+r}\right)+\left(\delta_{m+r,0}+\delta_{m,p+r}+\delta_{m+p+r,0}+\delta_{m,r}\right)\\\
=&\left(\delta_{r,m}+\delta_{m,r}\right)+\left(\delta_{0,m+p+r}+\delta_{m+p+r,0}\right)+\left(\delta_{p+r,m}+\delta_{m,p+r}\right)+\left(\delta_{0,m+r}+\delta_{m+r,0}\right)\\\
=&\;0.\end{split}$ (81)
Since $m$ is arbitrary this proves that $S_{0}$ and $S_{m}$ commute for all
$0\leq m\leq N-1$, and using cyclicity we then conclude that $(N,p,q,r)$ is
consistent. ∎
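The computation in Eq. (81) can be spot-checked numerically. The sketch below (our own index-set representation) evaluates the symplectic form $a\cdot b^{\prime}+a^{\prime}\cdot b$ for every shift $m$ and confirms it vanishes whenever $q=p+2r$.

```python
# S_m has Z's at {m, m+q} and X's at {m+r, m+p+r} (indices mod N).
# S_0 and S_m commute iff |X(0) ∩ Z(m)| + |X(m) ∩ Z(0)| is even,
# mirroring the delta-function computation in Eq. (81).

def commutes(m, N, p, q, r):
    z0, x0 = {0, q % N}, {r % N, (p + r) % N}
    zm = {m % N, (m + q) % N}
    xm = {(m + r) % N, (m + p + r) % N}
    return (len(x0 & zm) + len(xm & z0)) % 2 == 0

for (N, p, r) in [(8, 2, 1), (12, 2, 3), (15, 4, 2)]:
    q = p + 2 * r                      # the Lemma D.2 condition
    assert all(commutes(m, N, p, q, r) for m in range(N))
print("all shifts commute in every q = p + 2r instance tested")
```

By contrast, a quadruple violating the lemma's condition, such as $(N,p,q,r)=(9,2,5,1)$, already fails at shift $m=1$.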
Given a consistent quadruple $(N,p,q,r)$, we would like to know the number of
qubits encoded by the corresponding cyclic code. The answer to this question
is completely provided by the next theorem, which we state below.
###### Theorem D.3.
For any 4-parameter $(N,p,q,r)$ cyclic code, the number of encoded qubits
satisfies
$K=\frac{\gcd(p,N)\;\gcd(q,N)}{\gcd(\operatorname{lcm}(p,q),N)}$ (82)
and is independent of $r$.
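Eq. (82) is directly computable. The snippet below (ours) evaluates $K$ for a given code; `lcm` is defined explicitly rather than assuming `math.lcm`, which requires Python 3.9+.

```python
# Number of encoded qubits of the (N, p, q, r) cyclic code, per Eq. (82).
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def encoded_qubits(N, p, q):
    # r does not enter the formula.
    return gcd(p, N) * gcd(q, N) // gcd(lcm(p, q), N)

print(encoded_qubits(8, 2, 4))    # 2
print(encoded_qubits(12, 2, 8))   # 2
```

The integer division is exact: by Lemma D.5 below, the quotient is the smallest positive generator of a subgroup of $\mathbb{Z}/N\mathbb{Z}$.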
We delay the proof of this theorem in order to first present a few general
facts on which it depends. Below we start by proving Lemma D.4, which is then
used to prove Lemma D.5, the latter being the most important result that we
will need. We then prove Lemma D.6, which is a structure lemma that tells us
exactly which subsets of the stabilizers of the 4-parameter $(N,p,q,r)$ cyclic
code are independent. Combined with Lemma D.5, this immediately proves Theorem
D.3. In the proofs of Lemma D.6 and Theorem D.3, we use the notation
introduced in the paragraph below Eq. (79), that associates subsets of
$\\{0,\dots,N-1\\}$ to subsets of stabilizers for the 4-parameter $(N,p,q,r)$
cyclic code.
###### Lemma D.4.
Let $a,b,m\geq 1$ be positive integers, and $\alpha,\beta,\gamma$ be real
numbers. Then
1. (a)
$\min(\gamma-\min(\alpha,\gamma),\gamma-\min(\beta,\gamma))=\gamma-\min(\gamma,\alpha+\beta-\min(\alpha,\beta)),$
(83)
2. (b)
$\gcd\left(\frac{m}{\gcd(a,m)},\frac{m}{\gcd(b,m)}\right)=\frac{m}{\gcd(\operatorname{lcm}(a,b),m)}.$
(84)
* Proof.
1. (a)
Without loss of generality assume that $\alpha\geq\beta$. Then
$\alpha+\beta-\min(\alpha,\beta)=\alpha$, so the expression on the right-hand
side of Eq. (83) simplifies to $\gamma-\min(\gamma,\alpha)$. First consider
the case $\gamma\leq\alpha$. We have $\gamma-\min(\gamma,\alpha)=0$, while
$\min(\gamma-\min(\alpha,\gamma),\gamma-\min(\beta,\gamma))=\min(0,\gamma-\min(\beta,\gamma))=0$,
since $\gamma-\min(\beta,\gamma)\geq 0$. Next consider the case
$\gamma>\alpha\geq\beta$. Then $\gamma-\min(\gamma,\alpha)=\gamma-\alpha$, and
$\min(\gamma-\min(\alpha,\gamma),\gamma-\min(\beta,\gamma))=\min(\gamma-\alpha,\gamma-\beta)=\gamma-\alpha$.
This proves Eq. (83) in both cases.
2. (b)
Let $\\{p_{i}\\}_{i\geq 1}$ be the enumeration of all the primes in ascending
order. Then there exists an $n$ such that $a,b,m$ have the following unique
representations
$a=\prod_{i=1}^{n}p_{i}^{\alpha_{i}},\;b=\prod_{i=1}^{n}p_{i}^{\beta_{i}},\;m=\prod_{i=1}^{n}p_{i}^{\gamma_{i}},$
(85)
where $\alpha_{i},\beta_{i},\gamma_{i}\in\\{0,1,2,\dots\\}$ for all $1\leq
i\leq n$. Using Eq. (85) it then easily follows that
$\begin{split}\gcd\left(\frac{m}{\gcd(a,m)},\frac{m}{\gcd(b,m)}\right)&=\gcd\left(\prod_{i=1}^{n}p_{i}^{\gamma_{i}-\min(\alpha_{i},\gamma_{i})},\prod_{i=1}^{n}p_{i}^{\gamma_{i}-\min(\beta_{i},\gamma_{i})}\right)\\\
&=\prod_{i=1}^{n}p_{i}^{\min(\gamma_{i}-\min(\alpha_{i},\gamma_{i}),\gamma_{i}-\min(\beta_{i},\gamma_{i}))},\end{split}$
(86)
and
$\frac{m}{\gcd(\operatorname{lcm}(a,b),m)}=\frac{\prod\limits_{i=1}^{n}p_{i}^{\gamma_{i}}}{\gcd\left(\prod\limits_{i=1}^{n}p_{i}^{\alpha_{i}+\beta_{i}-\min(\alpha_{i},\beta_{i})},\prod\limits_{i=1}^{n}p_{i}^{\gamma_{i}}\right)}=\prod_{i=1}^{n}p_{i}^{\gamma_{i}-\min(\gamma_{i},\alpha_{i}+\beta_{i}-\min(\alpha_{i},\beta_{i}))},$
(87)
and thus by (a) these two expressions are equal.
∎
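The identity in part (b) is also easy to sanity-check exhaustively over a small range. This is a quick numerical confirmation of ours, not part of the proof.

```python
# Exhaustive small-range check of Eq. (84):
# gcd(m/gcd(a,m), m/gcd(b,m)) == m/gcd(lcm(a,b), m).
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

for a in range(1, 25):
    for b in range(1, 25):
        for m in range(1, 25):
            lhs = gcd(m // gcd(a, m), m // gcd(b, m))
            rhs = m // gcd(lcm(a, b), m)
            assert lhs == rhs, (a, b, m)
print("Eq. (84) holds for all 1 <= a, b, m < 25")
```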
###### Lemma D.5.
Let $a,b,m\geq 1$ be positive integers, and let “$\boldsymbol{+}$” denote
addition modulo $m$. Then the set of integers
$\mathcal{G}=\\{k_{1}a\boldsymbol{+}k_{2}b:k_{1},k_{2}\in\mathbb{Z}\\}$ is a
cyclic subgroup of $(\mathbb{Z}/m\mathbb{Z},\boldsymbol{+})$. The order of
$\mathcal{G}$ satisfies
$|\mathcal{G}|=\frac{m\gcd(\operatorname{lcm}(a,b),m)}{\gcd(a,m)\;\gcd(b,m)}.$
(88)
The smallest positive integer that generates $\mathcal{G}$ is
$\gcd(a,m)\gcd(b,m)/\gcd(\operatorname{lcm}(a,b),m)$.
* Proof.
$\mathcal{G}$ is closed under $\boldsymbol{+}$, and for every
$x\in\mathcal{G}$, $-x\bmod m\in\mathcal{G}$, and so each element has an
additive inverse. Thus $\mathcal{G}$ is a subgroup of
$(\mathbb{Z}/m\mathbb{Z},\boldsymbol{+})$ since
$\mathcal{G}\subseteq\mathbb{Z}/m\mathbb{Z}$, and because
$(\mathbb{Z}/m\mathbb{Z},\boldsymbol{+})$ is a cyclic group, it follows that
$\mathcal{G}$ is cyclic.
We now derive the expression for $|\mathcal{G}|$ in Eq. (88), which we break
up into a few steps.
1. (i)
Step 1: We first construct two subsets of $\mathcal{G}$:
$\mathcal{G}_{a}=\\{k_{1}a\bmod m:k_{1}\in\mathbb{Z}\\}$, and
$\mathcal{G}_{b}=\\{k_{2}b\bmod m:k_{2}\in\mathbb{Z}\\}$. Using the same
argument as for $\mathcal{G}$, we have that $\mathcal{G}_{a}$ and
$\mathcal{G}_{b}$ are cyclic subgroups of $\mathcal{G}$ under
$\boldsymbol{+}$, hence also of $(\mathbb{Z}/m\mathbb{Z},\boldsymbol{+})$.
Since $\operatorname{lcm}(a,m)$ is the smallest positive multiple of $a$
divisible by $m$, and noting that $\operatorname{lcm}(a,m)/a=m/\gcd(a,m)$, it
follows that $\mathcal{G}_{a}=\\{k_{1}a\bmod m:0\leq k_{1}\leq
m/\gcd(a,m)-1\\}$. Moreover for any two distinct integers $0\leq
k_{1},k^{\prime}_{1}\leq m/\gcd(a,m)-1$, we have $k_{1}a\not\equiv
k^{\prime}_{1}a\pmod{m}$ because otherwise $|k_{1}-k^{\prime}_{1}|a$ is a
positive multiple of $a$ strictly smaller than $\operatorname{lcm}(a,m)$ that
is also divisible by $m$. Thus $|\mathcal{G}_{a}|=m/\gcd(a,m)$, and similarly
$|\mathcal{G}_{b}|=m/\gcd(b,m)$.
2. (ii)
Step 2: Our next goal is to show that $\mathcal{G}_{a}=\\{k_{1}\gcd(a,m):0\leq
k_{1}\leq m/\gcd(a,m)-1\\}$. If $\gcd(a,m)=m$ then the result is clearly true,
so assume that $\gcd(a,m)<m$. By Bézout’s identity $\gcd(a,m)=xa+ym$ for some
integers $x,y$, i.e. $\gcd(a,m)\equiv xa\pmod{m}$, and so
$\gcd(a,m)\in\mathcal{G}_{a}$. But this implies that $\\{k_{1}\gcd(a,m):0\leq
k_{1}\leq m/\gcd(a,m)-1\\}\subseteq\mathcal{G}_{a}$, since $\mathcal{G}_{a}$
forms a group under $\boldsymbol{+}$. But all the elements in this subset are
distinct and the number of elements in it equals $m/\gcd(a,m)$, which is the
order of $\mathcal{G}_{a}$, and so we have in fact proved that
$\mathcal{G}_{a}=\\{k_{1}\gcd(a,m):0\leq k_{1}\leq m/\gcd(a,m)-1\\}$, and
analogously $\mathcal{G}_{b}=\\{k_{2}\gcd(b,m):0\leq k_{2}\leq
m/\gcd(b,m)-1\\}$. This also shows that $\gcd(a,m)$ and $\gcd(b,m)$ are the
smallest positive integers that cyclically generate $\mathcal{G}_{a}$ and
$\mathcal{G}_{b}$ respectively.
3. (iii)
Step 3: We now want to locate the smallest non-zero integer $p$ in
$\mathcal{G}_{a}$ that is also in $\mathcal{G}_{b}$. For this we consider two
subsets: $\mathcal{G}_{a}\cap\mathcal{G}_{b}$ and
$\mathcal{G}^{\prime}:=\\{k_{3}\operatorname{lcm}(a,b)\bmod
m:k_{3}\in\mathbb{Z}\\}$. First note that $\mathcal{G}_{a}\cap\mathcal{G}_{b}$
forms a subgroup of both $\mathcal{G}_{a}$ and $\mathcal{G}_{b}$ under
$\boldsymbol{+}$, because it is the intersection of two subgroups; hence
Lagrange’s theorem implies
$|\mathcal{G}_{a}\cap\mathcal{G}_{b}|\leq\gcd(|\mathcal{G}_{a}|,|\mathcal{G}_{b}|)$.
Also since each element in $\mathcal{G}_{a}\cap\mathcal{G}_{b}$ is a multiple
of both $a$ and $b$ modulo $m$, it follows that
$\mathcal{G}^{\prime}\subseteq\mathcal{G}_{a}\cap\mathcal{G}_{b}$, and so
$|\mathcal{G}_{a}\cap\mathcal{G}_{b}|\geq|\mathcal{G}^{\prime}|$. Next,
reasoning similarly as in Steps 1 and 2 for $\mathcal{G}_{a}$ and
$\mathcal{G}_{b}$, we conclude that $\mathcal{G}^{\prime}$ is a cyclic group
under $\boldsymbol{+}$,
$\mathcal{G}^{\prime}=\\{k_{3}\gcd(\operatorname{lcm}(a,b),m):0\leq k_{3}\leq
m/\gcd(\operatorname{lcm}(a,b),m)-1\\}$, and
$|\mathcal{G}^{\prime}|=m/\gcd(\operatorname{lcm}(a,b),m)$. But applying Lemma
D.4(b) gives that
$m/\gcd(\operatorname{lcm}(a,b),m)=\gcd(m/\gcd(a,m),m/\gcd(b,m))=\gcd(|\mathcal{G}_{a}|,|\mathcal{G}_{b}|)$.
We have thus proved that
$\mathcal{G}^{\prime}=\mathcal{G}_{a}\cap\mathcal{G}_{b}$, and it follows that
$p=\gcd(\operatorname{lcm}(a,b),m)$.
4. (iv)
Step 4: We now give an equivalent characterization of $\mathcal{G}$ that will
help us count the number of elements in it. Let $q=p/\gcd(a,m)$, where $p$ is
from Step 3, and notice that $q$ is an integer because
$\begin{split}\gcd(\operatorname{lcm}(a,b),m)&=\gcd\left(\gcd(a,m)\frac{a}{\gcd(a,m)}\frac{b}{\gcd(a,b)},\gcd(a,m)\frac{m}{\gcd(a,m)}\right)\\\
&=\gcd(a,m)\gcd\left(\frac{a}{\gcd(a,m)}\frac{b}{\gcd(a,b)},\frac{m}{\gcd(a,m)}\right),\end{split}$
(89)
and moreover $m$ is a multiple of $q$, because $p$ divides $m$. Now construct
the set
$\mathcal{G}^{\prime\prime}=\bigcup_{k_{1}=0}^{q-1}\left(k_{1}\gcd(a,m)\boldsymbol{+}\mathcal{G}_{b}\right).$
(90)
We want to prove that $\mathcal{G}^{\prime\prime}=\mathcal{G}$. Clearly we
have that $\mathcal{G}^{\prime\prime}\subseteq\mathcal{G}$, so we only need to
show that $\mathcal{G}\subseteq\mathcal{G}^{\prime\prime}$. If
$r\in\mathcal{G}$ then there exist $k_{1},k_{2}\in\mathbb{Z}$ such that
$r=k_{1}a\boldsymbol{+}k_{2}b$. But $k_{1}a\boldsymbol{+}k_{2}b=(k_{1}a\bmod
m)\boldsymbol{+}(k_{2}b\bmod m)$, and so $r=x\boldsymbol{+}y$, for some
$x\in\mathcal{G}_{a}$ and $y\in\mathcal{G}_{b}$. Note that any element
$x\in\mathcal{G}_{a}$ can be expressed as $x=x^{\prime}+z$ for some
$z\in\mathcal{G}_{a}\cap\mathcal{G}_{b}$ and
$x^{\prime}\in\\{k_{1}\gcd(a,m):0\leq k_{1}\leq q-1\\}$, and thus we have
$r=x^{\prime}\boldsymbol{+}y\boldsymbol{+}z=x^{\prime}\boldsymbol{+}y^{\prime}$
for some $y^{\prime}\in\mathcal{G}_{b}$, since both $y,z\in\mathcal{G}_{b}$.
This proves $\mathcal{G}\subseteq\mathcal{G}^{\prime\prime}$.
5. (v)
Step 5: We now want to show that the union appearing in Eq. (90) is a disjoint
union. Note that this would imply
$|\mathcal{G}|=\sum_{k_{1}=0}^{q-1}|k_{1}\gcd(a,m)\boldsymbol{+}\mathcal{G}_{b}|=\sum_{k_{1}=0}^{q-1}|\mathcal{G}_{b}|=q|\mathcal{G}_{b}|$
by Step 4, thus proving Eq. (88). So for the sake of contradiction, assume
that there exist distinct integers $0\leq k_{1}<k^{\prime}_{1}\leq q-1$ such
that
$\left(k_{1}\gcd(a,m)\boldsymbol{+}\mathcal{G}_{b}\right)\cap\left(k^{\prime}_{1}\gcd(a,m)\boldsymbol{+}\mathcal{G}_{b}\right)\neq\emptyset$.
Then there exist $b,b^{\prime}\in\mathcal{G}_{b}$ such that
$k_{1}\gcd(a,m)\boldsymbol{+}b=k^{\prime}_{1}\gcd(a,m)\boldsymbol{+}b^{\prime}$,
and so $(k^{\prime}_{1}-k_{1})\gcd(a,m)\equiv(b-b^{\prime})\pmod{m}$ which
implies $(k^{\prime}_{1}-k_{1})\gcd(a,m)\in\mathcal{G}_{b}$. But
$k^{\prime}_{1}-k_{1}<q$ and so $(k^{\prime}_{1}-k_{1})\gcd(a,m)<p$, and since
$p$ is the smallest non-zero element in $\mathcal{G}_{a}\cap\mathcal{G}_{b}$
by Step 3, this implies $k^{\prime}_{1}=k_{1}$ which is a contradiction.
For the last part, let $s$ be the smallest non-zero integer in $\mathcal{G}$;
so $\\{ks:0\leq k\leq\lceil m/s\rceil-1\\}\subseteq\mathcal{G}$ because
$\mathcal{G}$ is a group. Then $s$ divides $m$, because otherwise $t=s\lceil
m/s\rceil\bmod m\in\mathcal{G}$ would satisfy $0\neq t<s$, contradicting the
minimality of $s$. Suppose $u\in\mathcal{G}$ and
$u\notin\\{ks:0\leq k\leq m/s-1\\}$. Then $0\neq u-s\lfloor
u/s\rfloor\in\mathcal{G}$ and $u-s\lfloor u/s\rfloor<s$ which is a
contradiction. This proves that $\mathcal{G}=\\{ks:0\leq k\leq m/s-1\\}$, and
so $m/s=|\mathcal{G}|$, or equivalently
$s=\gcd(a,m)\gcd(b,m)/\gcd(\operatorname{lcm}(a,b),m)$. Thus $s$ is the
smallest positive integer that cyclically generates $\mathcal{G}$. ∎
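Both conclusions of Lemma D.5 can be confirmed by brute force on small instances. The enumeration below is our own check, not part of the argument.

```python
# Enumerate G = {k1*a + k2*b mod m} directly and compare its order and
# smallest positive generator with the expressions in Lemma D.5.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

for a in range(1, 12):
    for b in range(1, 12):
        for m in range(2, 12):
            G = {(k1 * a + k2 * b) % m for k1 in range(m) for k2 in range(m)}
            order = m * gcd(lcm(a, b), m) // (gcd(a, m) * gcd(b, m))
            assert len(G) == order                     # Eq. (88)
            if order > 1:
                s = gcd(a, m) * gcd(b, m) // gcd(lcm(a, b), m)
                assert min(x for x in G if x > 0) == s # smallest generator
print("Lemma D.5 verified on all small instances")
```

Ranging $k_{1},k_{2}$ over $\{0,\dots,m-1\}$ suffices, since $k_{1}a\bmod m$ is periodic in $k_{1}$ with period dividing $m$.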
###### Lemma D.6.
Let “$\boldsymbol{+}$” denote addition modulo $N$,
$\mathcal{G}=\\{k_{1}p\boldsymbol{+}k_{2}q:k_{1},k_{2}\in\mathbb{Z}\\}$,
$\mathcal{G}_{p}=\\{kp\boldsymbol{+}0:k\in\mathbb{Z}\\}$, and
$\mathcal{G}_{q}=\\{kq\boldsymbol{+}0:k\in\mathbb{Z}\\}$. If
$\mathcal{A}\subseteq\mathcal{N}$ and $s\in\mathbb{Z}$, denote
$\mathcal{A}\boldsymbol{+}s:=\\{x\boldsymbol{+}s:x\in\mathcal{A}\\}$. Then for
any quadruple $(N,p,q,r)$ (not necessarily consistent), and for any non-empty
$\mathcal{H}\subseteq\mathcal{N}$, the following holds:
1. (a)
$\Gamma(\mathcal{H})$ multiplies to an $X$-type Pauli string if and only if
$\mathcal{H}$ is a union of the cosets of $\mathcal{G}_{q}$ in
$(\mathbb{Z}/N\mathbb{Z},\boldsymbol{+})$.
2. (b)
$\Gamma(\mathcal{H})$ multiplies to a $Z$-type Pauli string if and only if
$\mathcal{H}$ is a union of the cosets of $\mathcal{G}_{p}$ in
$(\mathbb{Z}/N\mathbb{Z},\boldsymbol{+})$.
3. (c)
$\Gamma(\mathcal{H})$ multiplies to $I$ (up to phase factors) if and only if
$\mathcal{H}$ is a union of the cosets of $\mathcal{G}$ in
$(\mathbb{Z}/N\mathbb{Z},\boldsymbol{+})$.
* Proof.
In this proof all qubit indices and cyclic shifts will be implicitly assumed
to be modulo $N$, and products of Pauli strings in $\mathcal{S}$ will be
considered modulo phase factors. Note that we have already proved
$\mathcal{G}$, $\mathcal{G}_{p}$, and $\mathcal{G}_{q}$ are subgroups of
$(\mathbb{Z}/N\mathbb{Z},\boldsymbol{+})$ under $\boldsymbol{+}$ by Lemma D.5,
so cosets of $\mathcal{G}$, $\mathcal{G}_{p}$, and $\mathcal{G}_{q}$ are well-
defined. It will also be useful to remember the following fact: if
$\mathcal{T}\subseteq\mathcal{S}$, and $\mathcal{T}^{\prime}$ is obtained by
cyclically shifting each Pauli string in $\mathcal{T}$ by some integer $k$,
then $\prod\mathcal{T}^{\prime}$ is obtained by cyclically shifting
$\prod\mathcal{T}$ by $k$.
We next make the following claims, which we prove at the end of the proof: (i)
$\prod\Gamma(\mathcal{G}_{q})$ is an $X$-type Pauli string, and
$\prod\Gamma(\mathcal{G}_{p})$ is a $Z$-type Pauli string, (ii) if
$\prod\Gamma(\mathcal{H}^{\prime})$ is an $X$-type Pauli string for any
$\mathcal{H}^{\prime}\subseteq\mathcal{N}$ such that
$x\in\mathcal{H}^{\prime}$, then
$\mathcal{G}_{q}\boldsymbol{+}x\subseteq\mathcal{H}^{\prime}$, (iii) if
$\prod\Gamma(\mathcal{H}^{\prime})$ is a $Z$-type Pauli string for any
$\mathcal{H}^{\prime}\subseteq\mathcal{N}$ such that
$x\in\mathcal{H}^{\prime}$, then
$\mathcal{G}_{p}\boldsymbol{+}x\subseteq\mathcal{H}^{\prime}$, and (iv) if
$\prod\Gamma(\mathcal{H}^{\prime})$ equals $I$ (up to phase factors) for any
$\mathcal{H}^{\prime}\subseteq\mathcal{N}$ such that
$x\in\mathcal{H}^{\prime}$, then
$\mathcal{G}\boldsymbol{+}x\subseteq\mathcal{H}^{\prime}$.
1. (a)
Suppose $\mathcal{H}$ is a union of the cosets of $\mathcal{G}_{q}$ in
$(\mathbb{Z}/N\mathbb{Z},\boldsymbol{+})$. By claim (i) and the fact mentioned
at the beginning of the proof, if $\mathcal{H}^{\prime}$ is a coset of
$\mathcal{G}_{q}$ in $(\mathbb{Z}/N\mathbb{Z},\boldsymbol{+})$, then
$\prod\Gamma(\mathcal{H}^{\prime})$ is an $X$-type Pauli string, since
$\mathcal{H}^{\prime}=\mathcal{G}_{q}\boldsymbol{+}s$ for some
$s\in\mathcal{N}$. Thus $\prod\Gamma(\mathcal{H})$ is also an $X$-type Pauli
string. For the converse, suppose for contradiction that $\mathcal{H}$ is
non-empty, $\prod\Gamma(\mathcal{H})$ is an $X$-type Pauli string, and $\mathcal{H}$
is not a union of the cosets of $\mathcal{G}_{q}$. Then there exists an
integer $x$ belonging to some coset $\mathcal{H}^{\prime}$ of
$\mathcal{G}_{q}$ such that $x\in\mathcal{H}$ and
$\mathcal{H}^{\prime}\not\subseteq\mathcal{H}$. So
$\mathcal{H}^{\prime}=\mathcal{G}_{q}\boldsymbol{+}x$, which gives a
contradiction by claim (ii).
2. (b)
This follows from (a) by performing Clifford transformations on every qubit
interchanging $X$ and $Z$, and using cyclicity.
3. (c)
Assume $\mathcal{H}$ is a union of the cosets of $\mathcal{G}$ in
$(\mathbb{Z}/N\mathbb{Z},\boldsymbol{+})$. Since $\mathcal{G}_{q}$ (resp.
$\mathcal{G}_{p}$) is a subgroup of $\mathcal{G}$, every coset of $\mathcal{G}$
is a union of the cosets of $\mathcal{G}_{q}$ (resp. $\mathcal{G}_{p}$) in
$(\mathbb{Z}/N\mathbb{Z},\boldsymbol{+})$, and hence so is $\mathcal{H}$. Thus
by (a) and (b), $\prod\Gamma(\mathcal{H})$ is both an $X$-type and a $Z$-type
Pauli string, and
so it must equal $I$ up to phase factors. To prove the converse, suppose for
contradiction that $\mathcal{H}$ is non-empty, $\prod\Gamma(\mathcal{H})$
equals $I$ (up to phase factors), and $\mathcal{H}$ is not a union of the
cosets of $\mathcal{G}$. Then similar to the proof of the converse in (a),
there exists an integer $x$ belonging to some coset $\mathcal{H}^{\prime}$ of
$\mathcal{G}$ such that $x\in\mathcal{H}$ and
$\mathcal{H}^{\prime}\not\subseteq\mathcal{H}$. This implies
$\mathcal{H}^{\prime}=\mathcal{G}\boldsymbol{+}x$, but this contradicts claim
(iv).
We now prove the claims made above.
1. (i)
We observe that for any qubit index $n\in\mathcal{N}$, there are exactly two
Pauli strings $S_{n}$ and $S_{n-q}$ containing $Z$ at index $n$, exactly two
other Pauli strings $S_{n-r}$ and $S_{n-p-r}$ containing $X$ at index $n$,
while all others have $I_{2}$ at index $n$ (note that $S_{n}$, $S_{n-q}$,
$S_{n-r}$, and $S_{n-p-r}$ are distinct by assumptions on $p,q,r$). Since the
relative shift of $S_{n}$ and $S_{n-q}$ is $q$, and the relative shift of
$S_{n-r}$ and $S_{n-p-r}$ is $p$, it follows from the group structure of
$\mathcal{G}_{q}$ and $\mathcal{G}_{p}$ that $S_{n}\in\Gamma(\mathcal{G}_{q})$
if and only if $S_{n-q}\in\Gamma(\mathcal{G}_{q})$, and
$S_{n-r}\in\Gamma(\mathcal{G}_{p})$ if and only if
$S_{n-p-r}\in\Gamma(\mathcal{G}_{p})$. Thus $\prod\Gamma(\mathcal{G}_{q})$ has
either $X$ or $I_{2}$ at index $n$, and $\prod\Gamma(\mathcal{G}_{p})$ has
either $Z$ or $I_{2}$ at index $n$.
2. (ii)
By cyclicity, it suffices to prove this for the special case $x=0$. So suppose
$0\in\mathcal{H}^{\prime}\subseteq\mathcal{N}$,
$\prod\Gamma(\mathcal{H}^{\prime})$ is an $X$-type Pauli string, and for
contradiction also assume $\mathcal{G}_{q}\not\subseteq\mathcal{H}^{\prime}$.
Noting that $\mathcal{G}_{q}=\\{kq\bmod N:0\leq k\leq N/\gcd(q,N)-1\\}$,
there exists $0<k^{\prime}\leq N/\gcd(q,N)-1$ such that $k^{\prime}q\bmod
N\notin\mathcal{H}^{\prime}$ and $k^{\prime\prime}q\bmod
N\in\mathcal{H}^{\prime}$ for all $0\leq k^{\prime\prime}<k^{\prime}$. Thus
the Pauli string $S_{(k^{\prime}-1)q}\in\Gamma(\mathcal{H}^{\prime})$, which
contains $Z$ at qubit index $k^{\prime}q$. The only other Pauli string that
has $Z$ at the same index is $S_{k^{\prime}q}$, but
$S_{k^{\prime}q}\notin\Gamma(\mathcal{H}^{\prime})$. Thus
$\prod\Gamma(\mathcal{H}^{\prime})$ is not $X$-type, giving a contradiction.
3. (iii)
This follows from claim (ii) by cyclicity, i.e. cyclically shifting every
element of $\mathcal{S}$ by $r$ units to the left, and then performing
Clifford transformations on every qubit swapping $X$ and $Z$.
4. (iv)
Let $\mathcal{H}^{\prime}$ be as in the claim. Since
$\prod\Gamma(\mathcal{H}^{\prime})$ is an $X$-type Pauli string, we conclude by
claim (ii) that $\mathcal{G}_{q}\subseteq\mathcal{H}^{\prime}$. Next fix any
$q^{\prime}\in\mathcal{G}_{q}$, and consider the set
$\mathcal{G}_{p}\boldsymbol{+}q^{\prime}=\\{kp\boldsymbol{+}q^{\prime}:0\leq
k\leq N/\gcd(p,N)-1\\}$. Since $\prod\Gamma(\mathcal{H}^{\prime})$ is a
$Z$-type Pauli string, by claim (iii) we have
$\mathcal{G}_{p}\boldsymbol{+}q^{\prime}\subseteq\mathcal{H}^{\prime}$. As
$q^{\prime}$ is arbitrary, this proves
$\mathcal{G}=\bigcup_{q^{\prime}\in\mathcal{G}_{q}}(\mathcal{G}_{p}\boldsymbol{+}q^{\prime})\subseteq\mathcal{H}^{\prime}$.
∎
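Lemma D.6 can be checked by brute force for small parameters. The Python sketch below uses an arbitrarily chosen quadruple (an assumption made for illustration) and encodes each stabilizer $S_n$ by its $Z$-support $\\{n, n+q\\}$ and $X$-support $\\{n+r, n+p+r\\}$ (mod $N$), following the proof of claim (i); multiplying stabilizers up to phase is then a symmetric difference of supports, and a product is $X$-type exactly when its $Z$-support is empty.

```python
from itertools import combinations

def supports(n, N, p, q, r):
    # Z-support and X-support of the stabilizer S_n, as sets of qubit indices.
    return {n % N, (n + q) % N}, {(n + r) % N, (n + p + r) % N}

def product_supports(H, N, p, q, r):
    # Supports of prod Gamma(H), up to phase: symmetric difference (XOR).
    Z, X = set(), set()
    for n in H:
        z, x = supports(n, N, p, q, r)
        Z ^= z
        X ^= x
    return Z, X

N, p, q, r = 8, 2, 4, 1  # sample quadruple, chosen for illustration
Gq = {(k * q) % N for k in range(N)}
cosets = {frozenset((g + x) % N for g in Gq) for x in range(N)}

for size in range(1, N + 1):
    for H in combinations(range(N), size):
        Z, X = product_supports(H, N, p, q, r)
        x_type = not Z  # no Z left anywhere
        union_of_cosets = set(H) == set().union(*(c for c in cosets if c <= set(H)))
        assert x_type == union_of_cosets  # Lemma D.6(a)
print("Lemma D.6(a) verified for (N, p, q, r) =", (N, p, q, r))
```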
* Proof of Theorem D.3.
Let $\mathcal{G}$ be defined as in Lemma D.6. For each coset $\mathcal{H}$ of
$\mathcal{G}$ in $(\mathbb{Z}/N\mathbb{Z},\boldsymbol{+})$, we can remove
exactly one index from $\mathcal{H}$ and take the union of the resulting sets
to get an index set $\mathcal{H}^{\prime}$ whose stabilizers
$\Gamma(\mathcal{H}^{\prime})$ still generate $\langle\mathcal{S}\rangle$ by
Lemma D.6(c). Since the number of cosets of
$\mathcal{G}$ is $N/|\mathcal{G}|$, it follows by Lemma D.5 that
$|\mathcal{H}^{\prime}|=N-\gcd(p,N)\gcd(q,N)/\gcd(\operatorname{lcm}(p,q),N)$.
Moreover $\Gamma(\mathcal{H}^{\prime})$ is independent as it has no subset
that multiplies to $I$, again by Lemma D.6(c), and so the number of
independent generators of $\langle\mathcal{S}\rangle$ equals
$|\mathcal{H}^{\prime}|$. The proof is complete noting that
$K=N-|\mathcal{H}^{\prime}|$. ∎
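Theorem D.3 can be cross-checked against an independent computation of $K$: the number of independent generators equals the GF(2) rank of the binary symplectic matrix of $\mathcal{S}$, so $K = N - \operatorname{rank}$. A Python sketch follows; the support convention matches the proof of Lemma D.6, and the tested quadruples are restricted (an assumption) to $r=1$ with $0<r<p+r<q<N$ so that the distinctness conditions on $p,q,r$ hold.

```python
from math import gcd

def rank_gf2(rows):
    # Rank over GF(2) of rows given as integer bitmasks.
    pivots = {}  # leading-bit position -> reduced row
    for row in rows:
        while row:
            h = row.bit_length() - 1
            if h not in pivots:
                pivots[h] = row
                break
            row ^= pivots[h]
    return len(pivots)

def stabilizer_rows(N, p, q, r):
    # Binary symplectic (x | z) bitmask of each S_n:
    # Z at indices n, n+q and X at indices n+r, n+p+r (mod N).
    rows = []
    for n in range(N):
        x = (1 << ((n + r) % N)) | (1 << ((n + p + r) % N))
        z = (1 << (n % N)) | (1 << ((n + q) % N))
        rows.append((x << N) | z)
    return rows

def encoded_qubits(N, p, q, r):
    return N - rank_gf2(stabilizer_rows(N, p, q, r))

def lcm(a, b):
    return a * b // gcd(a, b)

for N in range(4, 16):
    for p in range(1, N - 2):
        for q in range(p + 2, N):  # ensures r < p + r < q < N with r = 1
            predicted = gcd(p, N) * gcd(q, N) // gcd(lcm(p, q), N)
            assert encoded_qubits(N, p, q, 1) == predicted  # Theorem D.3
print("Theorem D.3 verified for 4 <= N <= 15")
```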
Combining the results of Lemma D.5, Lemma D.6(c), and Theorem D.3 we also have
the following corollary, which allows us to extract an independent subset of
stabilizers that generate $\langle\mathcal{S}\rangle$ from the knowledge of
$N$ and $K$ alone.
###### Corollary D.7.
If a 4-parameter $(N,p,q,r)$ cyclic code encodes $K$ qubits, then $K\geq 1$
and divides $N$. Moreover a non-empty subset of $\mathcal{S}$ multiplies to
$I$ (up to phase factors) if and only if it is a non-empty union of the
subsets of $\mathcal{S}$ corresponding to the cosets of $\mathcal{G}$ in
$(\mathbb{Z}/N\mathbb{Z},\boldsymbol{+})$, where $\mathcal{G}=\\{kK:0\leq
k\leq N/K-1\\}$.
* Proof.
The second part is a mere restatement of Lemma D.6(c), combining the result
proved in the last paragraph of the proof of Lemma D.5, and the expression of
$K$ in Theorem D.3. That $K\geq 1$ and $K$ divides $N$ follows because
$\gcd(p,N)\gcd(q,N)/\gcd(\operatorname{lcm}(p,q),N)$ divides $N$, the proof of
which is already contained in the proof of Lemma D.5. ∎
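The divisibility claim ($K\geq 1$ and $K$ divides $N$) is also easy to confirm numerically; a quick Python sweep over the formula of Theorem D.3:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

for N in range(2, 60):
    for p in range(1, N):
        for q in range(1, N):
            num = gcd(p, N) * gcd(q, N)
            den = gcd(lcm(p, q), N)
            assert num % den == 0          # K is an integer
            K = num // den
            assert K >= 1 and N % K == 0   # K >= 1 and K divides N
print("K >= 1 and K | N checked for all 2 <= N < 60")
```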
As the final result of this subsection, we prove in Lemma D.9 an interesting
property of the subgroup $\langle\mathcal{S}\rangle$ for any quadruple
$(N,p,q,r)$, which finds use in the next subsection. The lemma studies the
membership of the Pauli strings $X^{\otimes N}$ and $Z^{\otimes N}$ in
$\langle\mathcal{S}\rangle$, where membership is decided up to phase factors.
Thus we will say for instance that $X^{\otimes N}$ belongs to the subgroup
$\langle\mathcal{S}\rangle$ if and only if $\\{\pm X^{\otimes N},\pm
iX^{\otimes N}\\}\cap\langle\mathcal{S}\rangle\neq\emptyset$. For the proof,
we need the following fact (Lemma D.8) that we first prove. Lemma D.9 is
stated and proved after that.
###### Lemma D.8.
Let $1\leq a,m\in\mathbb{Z}$, $\mathcal{M}=\\{x\in\mathbb{Z}:0\leq x\leq
m-1\\}$, and “$\boldsymbol{+}$” denote addition modulo $m$. For any
$\mathcal{A}\subseteq\mathcal{M}$, denote
$\mathcal{A}\boldsymbol{+}a=\\{x\boldsymbol{+}a:x\in\mathcal{A}\\}$. A
partition $\mathcal{M}=\mathcal{M}_{1}\sqcup\mathcal{M}_{2}$ satisfying
$\mathcal{M}_{2}=\mathcal{M}_{1}\boldsymbol{+}a$ exists if and only if
$m/\gcd(a,m)=\operatorname{lcm}(a,m)/a\equiv 0\pmod{2}$.
* Proof.
Let $\mathcal{G}_{a}=\\{ka\boldsymbol{+}0:k\in\mathbb{Z}\\}$ denote an orbit
of the function that adds $a$ modulo $m$, i.e. $f(x)=x\boldsymbol{+}a$. Each
orbit $\mathcal{G}_{a}\boldsymbol{+}x$ has size $\operatorname{lcm}(a,m)/a$,
and for $x=0,1,\dots,\gcd(a,m)-1$ they are all disjoint. We may write
$\mathcal{M}=\bigsqcup_{x=0}^{\gcd(a,m)-1}\left(\mathcal{G}_{a}\boldsymbol{+}x\right)$.
Suppose $\operatorname{lcm}(a,m)/a\equiv 0\pmod{2}$ so the orbits have even
size. Create the sets $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$ as follows. For
each $\mathcal{G}_{a}\boldsymbol{+}x$, place $x\boldsymbol{+}ka$ in
$\mathcal{M}_{1}$ if $k$ is even and $x\boldsymbol{+}ka$ in $\mathcal{M}_{2}$
if $k$ is odd. Because the orbits have even size, $\mathcal{M}_{1}$ and
$\mathcal{M}_{2}$ thus defined are disjoint. Moreover,
$\mathcal{M}_{2}=\mathcal{M}_{1}\boldsymbol{+}a$.
Now suppose $\mathcal{M}=\mathcal{M}_{1}\sqcup\mathcal{M}_{2}$ where
$\mathcal{M}_{2}=\mathcal{M}_{1}\boldsymbol{+}a$. Since $x\mapsto
x\boldsymbol{+}a$ is a bijection, we also have
$\mathcal{M}_{1}=\mathcal{M}_{2}\boldsymbol{+}a$, and so by induction, if
$x\in\mathcal{M}_{1}$, then $x\boldsymbol{+}ka$ is in $\mathcal{M}_{1}$ if $k$
is even and in $\mathcal{M}_{2}$ if $k$ is odd. Notice $x\equiv
x\boldsymbol{+}(\operatorname{lcm}(a,m)/a)a\pmod{m}$. If
$\operatorname{lcm}(a,m)/a$ were odd then $x$ would be in both
$\mathcal{M}_{1}$ and $\mathcal{M}_{2}$, a contradiction with those sets being
disjoint. So $\operatorname{lcm}(a,m)/a$ is even. ∎
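Lemma D.8 lends itself to a direct brute-force check for small $m$; the Python sketch below enumerates candidate halves $\mathcal{M}_1$ and compares with the parity condition.

```python
from math import gcd
from itertools import combinations

def lcm(a, b):
    return a * b // gcd(a, b)

def partition_exists(a, m):
    # Is there M1 with M = M1 (disjoint union) M2 and M2 = M1 + a (mod m)?
    if m % 2:
        return False  # |M2| = |M1| forces m to be even
    for M1 in combinations(range(m), m // 2):
        M2 = {(x + a) % m for x in M1}
        if not M2 & set(M1) and M2 | set(M1) == set(range(m)):
            return True
    return False

for m in range(1, 11):
    for a in range(1, 2 * m + 1):
        assert partition_exists(a, m) == (lcm(a, m) // a % 2 == 0)  # Lemma D.8
print("Lemma D.8 verified for all m <= 10")
```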
###### Lemma D.9.
For any quadruple $(N,p,q,r)$, the following holds:
1. (a)
$Z^{\otimes N}$ belongs to $\langle\mathcal{S}\rangle$ if and only if
$\gcd(p,N)/\gcd(q,\gcd(p,N))\equiv 0\pmod{2}$.
2. (b)
$X^{\otimes N}$ belongs to $\langle\mathcal{S}\rangle$ if and only if
$\gcd(q,N)/\gcd(p,\gcd(q,N))\equiv 0\pmod{2}$.
* Proof.
We only prove (a), as (b) follows from (a) by cyclicity, i.e. cyclically
shifting every element of $\mathcal{S}$ by $r$ units to the left, and then
performing Clifford transformations on every qubit swapping $X$ and $Z$. For
the proof, we introduce some notation. Let $m=\gcd(p,N)$,
$\mathcal{M}=\\{x\in\mathbb{Z}:0\leq x\leq m-1\\}$, “$\boldsymbol{+}$” denote
addition modulo $N$, and “$+_{m}$” denote addition modulo $m$. For any
$\mathcal{A}\subseteq\mathbb{Z}$ and $a\in\mathbb{Z}$, denote
$\mathcal{A}+_{m}a=\\{x+_{m}a:x\in\mathcal{A}\\}$ and
$\mathcal{A}\boldsymbol{+}a=\\{x\boldsymbol{+}a:x\in\mathcal{A}\\}$. To prove
(a), it suffices to show $Z^{\otimes N}$ belongs to
$\langle\mathcal{S}\rangle$ if and only if there exists a partition
$\mathcal{M}=\mathcal{M}_{1}\sqcup\mathcal{M}_{2}$ with
$\mathcal{M}_{2}=\mathcal{M}_{1}+_{m}q$, because by Lemma D.8 the latter is
true if and only if $m/\gcd(q,m)\equiv 0\pmod{2}$. In the proof, we consider
two Pauli strings equal if and only if they are the same up to phase factors.
Let $\mathcal{G}=\\{k_{1}p\boldsymbol{+}k_{2}q:k_{1},k_{2}\in\mathbb{Z}\\}$,
$\mathcal{G}_{p}=\\{kp\boldsymbol{+}0:k\in\mathbb{Z}\\}$, and recall that
$\mathcal{G}_{p}$ has $m$ distinct cosets
$\\{\mathcal{G}_{p}\boldsymbol{+}x\\}_{x=0}^{m-1}$. Notice when $q\equiv
0\pmod{m}$, we have $q\in\mathcal{G}_{p}$ (since $m\in\mathcal{G}_{p}$ by Step
2 of the proof of Lemma D.5) implying $\mathcal{G}=\mathcal{G}_{p}$.
Let $x\in\mathcal{M}$. Observe that $x=x+_{m}q$ if and only if $q\equiv
0\pmod{m}$, so the cosets $\mathcal{G}_{p}\boldsymbol{+}x$ and
$\mathcal{G}_{p}\boldsymbol{+}(x+_{m}q)$ are different if and only if
$q\not\equiv 0\pmod{m}$. We claim
$\prod\Gamma(\mathcal{G}_{p}\boldsymbol{+}x)$ is a $Z$-type Pauli string with
$Z$ only at qubit indices
$(\mathcal{G}_{p}\boldsymbol{+}x)\sqcup(\mathcal{G}_{p}\boldsymbol{+}(x+_{m}q))$
if $q\not\equiv 0\pmod{m}$, and
$\prod\Gamma(\mathcal{G}_{p}\boldsymbol{+}x)=I$ if $q\equiv 0\pmod{m}$. The
case $q\equiv 0\pmod{m}$ follows by Lemma D.6(c) as
$\mathcal{G}=\mathcal{G}_{p}$. So now assume $q\not\equiv 0\pmod{m}$. Lemma
D.6(b) implies $\prod\Gamma(\mathcal{G}_{p}\boldsymbol{+}x)$ is a $Z$-type
Pauli string. For each $y\in\mathcal{G}_{p}\boldsymbol{+}x$, the qubit indices
where the Pauli string $S_{y}$ contributes $Z$ are $y$ and $y\boldsymbol{+}q$,
where $y\boldsymbol{+}q\in\mathcal{G}_{p}\boldsymbol{+}(x+_{m}q)$. Thus we
conclude using the disjointness of $\mathcal{G}_{p}\boldsymbol{+}x$ and
$\mathcal{G}_{p}\boldsymbol{+}(x+_{m}q)$, that
$\prod\Gamma(\mathcal{G}_{p}\boldsymbol{+}x)$ has $Z$ at qubit indices
$\\{y:y\in\mathcal{G}_{p}\boldsymbol{+}x\\}\sqcup\\{y\boldsymbol{+}q:y\in\mathcal{G}_{p}\boldsymbol{+}x\\}=(\mathcal{G}_{p}\boldsymbol{+}x)\sqcup(\mathcal{G}_{p}\boldsymbol{+}(x+_{m}q))$.
Assume that there exists a partition
$\mathcal{M}=\mathcal{M}_{1}\sqcup\mathcal{M}_{2}$ such that
$\mathcal{M}_{2}=\mathcal{M}_{1}+_{m}q$. Lemma D.8 implies that
$m/\gcd(q,m)\equiv 0\pmod{2}$, and so $q\not\equiv 0\pmod{m}$. Let
$\mathcal{H}=\bigsqcup_{x\in\mathcal{M}_{1}}(\mathcal{G}_{p}\boldsymbol{+}x)$.
By the claim, if $x\in\mathcal{M}_{1}$, then
$\prod\Gamma(\mathcal{G}_{p}\boldsymbol{+}x)$ has $Z$ at qubit indices
$(\mathcal{G}_{p}\boldsymbol{+}x)\sqcup(\mathcal{G}_{p}\boldsymbol{+}(x+_{m}q))$.
Moreover for distinct $x,x^{\prime}\in\mathcal{M}_{1}$, the conditions on the
partitions $\mathcal{M}_{1},\mathcal{M}_{2}$ imply the cosets
$\mathcal{G}_{p}\boldsymbol{+}x$, $\mathcal{G}_{p}\boldsymbol{+}x^{\prime}$,
$\mathcal{G}_{p}\boldsymbol{+}(x+_{m}q)$ and
$\mathcal{G}_{p}\boldsymbol{+}(x^{\prime}+_{m}q)$ are distinct. This means
that there is no cancellation in the Pauli $Z$s of
$\prod\Gamma(\mathcal{G}_{p}\boldsymbol{+}x)$ and
$\prod\Gamma(\mathcal{G}_{p}\boldsymbol{+}x^{\prime})$. Since
$\mathcal{M}=\mathcal{M}_{1}\sqcup\mathcal{M}_{2}$, we conclude
$\prod\Gamma(\mathcal{H})$ has $Z$ at qubit indices
$\bigsqcup_{x\in\mathcal{M}}(\mathcal{G}_{p}\boldsymbol{+}x)=\mathcal{N}$.
Now assume $Z^{\otimes N}$ belongs to $\langle\mathcal{S}\rangle$. By Lemma
D.6(b), $Z^{\otimes N}=\prod\Gamma(\mathcal{H})$, where
$\mathcal{H}=\bigsqcup_{x\in\mathcal{M}_{1}}(\mathcal{G}_{p}\boldsymbol{+}x)$
with $\emptyset\neq\mathcal{M}_{1}\subseteq\mathcal{M}$. Then $q\not\equiv
0\pmod{m}$ by the claim above, because otherwise
$\prod\Gamma(\mathcal{G}_{p}\boldsymbol{+}x)=I$ for each
$x\in\mathcal{M}_{1}$, contradicting $\prod\Gamma(\mathcal{H})=Z^{\otimes N}$.
Define $\mathcal{M}_{2}:=\mathcal{M}_{1}+_{m}q\subseteq\mathcal{M}$. The claim
also implies that the set of qubit indices where $\prod\Gamma(\mathcal{H})$
contains $Z$ is a subset of
$\bigsqcup_{x\in\mathcal{M}_{1}\cup\mathcal{M}_{2}}(\mathcal{G}_{p}\boldsymbol{+}x)$.
Thus we already have $\mathcal{M}=\mathcal{M}_{1}\cup\mathcal{M}_{2}$ using
$\prod\Gamma(\mathcal{H})=Z^{\otimes N}$. We want to show $\mathcal{M}_{1}$
and $\mathcal{M}_{2}$ are disjoint. For contradiction, suppose there exists
$x\in\mathcal{M}_{1}\cap\mathcal{M}_{2}$. Then there exists
$x^{\prime}\in\mathcal{M}_{1}$ satisfying $x^{\prime}+_{m}q=x$. The Pauli
string $\prod\Gamma(\mathcal{G}_{p}\boldsymbol{+}x^{\prime})$ has $Z$ at qubit
indices
$(\mathcal{G}_{p}\boldsymbol{+}x^{\prime})\sqcup(\mathcal{G}_{p}\boldsymbol{+}(x^{\prime}+_{m}q))$,
while $\prod\Gamma(\mathcal{G}_{p}\boldsymbol{+}x)$ has $Z$ at qubit indices
$(\mathcal{G}_{p}\boldsymbol{+}x)\sqcup(\mathcal{G}_{p}\boldsymbol{+}(x+_{m}q))$.
Thus there is cancellation of Pauli Zs at the qubit indices in the coset
$\mathcal{G}_{p}\boldsymbol{+}x$ in the product $\prod\Gamma(\mathcal{H})$.
This is a contradiction to $Z^{\otimes N}=\prod\Gamma(\mathcal{H})$, so we
have proved $\mathcal{M}=\mathcal{M}_{1}\sqcup\mathcal{M}_{2}$. ∎
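Lemma D.9(a) can be tested by deciding membership of $Z^{\otimes N}$ in $\langle\mathcal{S}\rangle$ (up to phase) as a GF(2) span-membership question on binary symplectic vectors. A Python sketch, restricted (an assumption, as before) to quadruples with $r=1$ and $0<r<p+r<q<N$:

```python
from math import gcd

def rank_gf2(rows):
    # Rank over GF(2) of rows given as integer bitmasks.
    pivots = {}
    for row in rows:
        while row:
            h = row.bit_length() - 1
            if h not in pivots:
                pivots[h] = row
                break
            row ^= pivots[h]
    return len(pivots)

def stabilizer_rows(N, p, q, r):
    # Symplectic (x | z) bitmasks: S_n has Z at n, n+q and X at n+r, n+p+r.
    return [((1 << ((n + r) % N)) | (1 << ((n + p + r) % N))) << N
            | (1 << (n % N)) | (1 << ((n + q) % N)) for n in range(N)]

def z_all_in_group(N, p, q, r):
    # Z^{otimes N} (x-part 0, z-part all ones) lies in the span iff adding it
    # does not increase the GF(2) rank.
    rows = stabilizer_rows(N, p, q, r)
    return rank_gf2(rows + [(1 << N) - 1]) == rank_gf2(rows)

for N in range(5, 14):
    for p in range(1, N - 2):
        for q in range(p + 2, N):
            m = gcd(p, N)
            predicted = (m // gcd(q, m)) % 2 == 0
            assert z_all_in_group(N, p, q, 1) == predicted  # Lemma D.9(a)
print("Lemma D.9(a) verified for 5 <= N <= 13")
```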
### D.2 A two parameter cyclic code family
Choose two integers $s,t\geq 0$. Setting $r=1,\;p=s+1,\;q=s+3$,
and $N=s+t+4$ in the 4-parameter $(N,p,q,r)$ cyclic code family of the last
subsection gives rise to a 2-parameter $(s,t)$ cyclic code family whose
stabilizers are cyclic shifts of $ZXI^{\otimes s}XZI^{\otimes t}$, up to phase
factors. Notice that in this case $q=p+2r$, and $r<p+r<q$, and so by Lemma D.2
the quadruple $(N,p,q,r)$ is consistent. We have computed the $\llbracket
N,K,D\rrbracket$ values for this 2-parameter family for all $0\leq s,t\leq 9$,
which we provide in Table 3.
$s$\$t$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
---|---|---|---|---|---|---|---|---|---|---
0 | 4,1,2 | 5,1,3 | 6,1,2 | 7,1,3 | 8,1,3 | 9,1,3 | 10,1,3 | 11,1,3 | 12,1,3 | 13,1,3
1 | — | 6,2,2 | 7,1,3 | 8,2,2 | 9,1,3 | 10,2,3 | 11,1,3 | 12,2,3 | 13,1,3 | 14,2,3
2 | — | — | 8,1,2 | 9,1,3 | 10,1,2 | 11,1,3 | 12,1,3 | 13,1,3 | 14,1,4 | 15,1,3
3 | — | — | — | 10,2,2 | 11,1,3 | 12,2,2 | 13,1,5 | 14,2,3 | 15,1,3 | 16,2,3
4 | — | — | — | — | 12,1,2 | 13,1,3 | 14,1,2 | 15,1,3 | 16,1,4 | 17,1,3
5 | — | — | — | — | — | 14,2,2 | 15,1,3 | 16,2,2 | 17,1,5 | 18,2,3
6 | — | — | — | — | — | — | 16,1,2 | 17,1,3 | 18,1,2 | 19,1,5
7 | — | — | — | — | — | — | — | 18,2,2 | 19,1,3 | 20,2,2
8 | — | — | — | — | — | — | — | — | 20,1,2 | 21,1,3
9 | — | — | — | — | — | — | — | — | — | 22,2,2
Table 3: The code parameters $\llbracket N,K,D\rrbracket$ of cyclic quantum
codes with stabilizer group generated by $ZXI^{\otimes s}XZI^{\otimes t}$ and
its cyclic shifts. Interchanging $s$ and $t$ does not change the code
parameters (see Lemma D.10), so we show just the cases $t\geq s$.
One should note that Table 3 is symmetric about the diagonal – so
interchanging $s$ and $t$ does not change the code parameters $\llbracket
N,K,D\rrbracket$. This is proved in the next lemma.
###### Lemma D.10.
For any 2-parameter $(s,t)$ cyclic code, the $\llbracket N,K,D\rrbracket$ code
parameters are invariant upon interchange of $s$ and $t$.
* Proof.
The cyclic code generated by cyclic shifts of $ZXI^{\otimes s}XZI^{\otimes t}$
is the same as that generated by cyclic shifts of $XZI^{\otimes t}ZXI^{\otimes
s}$. Now we perform a Clifford transformation on every qubit that interchanges
$X$ and $Z$ (note that Clifford transformations do not change commutativity of
Pauli strings). This results in a new cyclic code generated by cyclic shifts
of $ZXI^{\otimes t}XZI^{\otimes s}$. Clifford transformations are group
isomorphisms of the Pauli group, thus $N$ and $K$ are unchanged. Moreover
single qubit Clifford transformations don’t change the distance of the code,
and hence the same is true for their compositions. Thus $D$ is also unchanged
under interchange of $s$ and $t$. ∎
The number of encoded qubits for the $(s,t)$ cyclic code family is given by
the following theorem.
###### Theorem D.11.
For any 2-parameter $(s,t)$ cyclic code, the number of encoded qubits
satisfies
$K=\begin{cases}2&\;\;\text{if}\;\;s\text{ and }t\text{ are odd},\\\
1&\;\;\text{otherwise}.\end{cases}$ (91)
If $K=1$, the only non-empty subset of $\mathcal{S}$ that multiplies to $I$ up
to phase factors is $\mathcal{S}$. If $K=2$, there are two disjoint non-empty
subsets $\mathcal{S}_{\text{even}}$ and
$\mathcal{S}\setminus\mathcal{S}_{\text{even}}$ of $\mathcal{S}$, each of
which multiplies to $I$ up to phase factors, where
$\mathcal{S}_{\text{even}}=\\{S_{2n}:0\leq n\leq N/2-1\\}$.
* Proof.
In this proof we will use the fact that for any integers $a,b,c\geq 1$, if
$\gcd(a,b)=1$ then $\gcd(ab,c)=\gcd(a,c)\gcd(b,c)$, which easily follows by
considering the prime factorizations of $a,b,c$. Also the variables $x$ and
$y$ appearing in this proof are always non-negative integers. We start by
noting that when $s+1$ is odd, $\gcd(s+1,s+3)=1$ which implies that
$\operatorname{lcm}(s+1,s+3)=(s+1)(s+3)$, while if $s+1$ is even, then
$\gcd(s+1,s+3)=2$ and so $\operatorname{lcm}(s+1,s+3)=(s+1)(s+3)/2$. Applying
Theorem D.3 then gives
$K=\begin{cases}\gcd(s+1,N)\;\gcd(s+3,N)/\gcd((s+1)(s+3),N)&\;\;\text{if}\;\;s+1\text{
is odd},\\\
\gcd(s+1,N)\;\gcd(s+3,N)/\gcd\left(\frac{(s+1)(s+3)}{2},N\right)&\;\;\text{otherwise}.\end{cases}$
(92)
In the first case ($s+1$ odd), by the fact mentioned above, we have
$\gcd((s+1)(s+3),N)=\gcd(s+1,N)\gcd(s+3,N)$, and so $K=1$. Now assume that
$s+1=2x$, i.e. is even, and so $(s+1)(s+3)/2=2x(x+1)$. Then there are two
subcases which we consider separately below.
1. (i)
$t$ is odd: In this case $N=s+t+4$ is even, and so suppose that $N=2y$. Then
we have $\gcd(s+1,N)=\gcd(2x,2y)=2\gcd(x,y)$,
$\gcd(s+3,N)=\gcd(2x+2,2y)=2\gcd(x+1,y)$, and
$\gcd((s+1)(s+3)/2,N)=\gcd(2x(x+1),2y)=2\gcd(x(x+1),y)$. But $\gcd(x,x+1)=1$
and so $\gcd(x(x+1),y)=\gcd(x,y)\gcd((x+1),y)$. Plugging these into Eq. (92)
gives $K=2$.
2. (ii)
$t$ is even: In this case $N$ is odd, and so $\gcd(2x,N)=\gcd(x,N)$,
$\gcd(2(x+1),N)=\gcd(x+1,N)$, and
$\gcd(2x(x+1),N)=\gcd(x(x+1),N)=\gcd(x,N)\gcd(x+1,N)$. Eq. (92) then gives
$K=1$.
We have thus proved Eq. (91). The last part of the theorem now follows
directly from Corollary D.7. ∎
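Eq. (91) can also be confirmed directly from the $K$ formula of Theorem D.3; a short Python sweep:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

for s in range(30):
    for t in range(30):
        N, p, q = s + t + 4, s + 1, s + 3
        K = gcd(p, N) * gcd(q, N) // gcd(lcm(p, q), N)  # Theorem D.3
        assert K == (2 if s % 2 == 1 and t % 2 == 1 else 1)  # Eq. (91)
print("Eq. (91) verified for 0 <= s, t < 30")
```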
Theorem D.11 suggests that the case when $s$ and $t$ are both odd is special.
We note another curious property that is also true for this case in the next
lemma, which will find a use in the proof of Theorem D.13.
###### Lemma D.12.
For any 2-parameter $(s,t)$ cyclic code with both $s$ and $t$ odd, let
$\mathcal{S}_{\text{even}}=\\{S_{2n}:0\leq n\leq N/2-1\\}$ and
$\mathcal{S}_{\text{odd}}=\mathcal{S}\setminus\mathcal{S}_{\text{even}}$. Fix
any qubit index $0\leq n\leq N-1$. Then
1. (a)
If $n$ is even, $\mathcal{S}_{\text{even}}$ contains both stabilizers that
contain $Z$ at index $n$, and $\mathcal{S}_{\text{odd}}$ contains both
stabilizers that contain $X$ at index $n$.
2. (b)
If $n$ is odd, $\mathcal{S}_{\text{even}}$ contains both stabilizers that
contain $X$ at index $n$, and $\mathcal{S}_{\text{odd}}$ contains both
stabilizers that contain $Z$ at index $n$.
3. (c)
For all subsets $\mathcal{S}^{\prime}\subset\mathcal{S}_{\text{even}}$ and
$\mathcal{S}^{\prime\prime}\subset\mathcal{S}_{\text{odd}}$, the products
$\prod\mathcal{S}^{\prime}$ and $\prod\mathcal{S}^{\prime\prime}$ do not
contain $Y$ at any qubit index.
* Proof.
All qubit indices will be assumed to be modulo $N$ in this proof. Since $s,t$
are odd, first note that $N=s+t+4$ is even, $s+3$ is even, and $s+2$ is odd.
Now for any qubit index $0\leq n\leq N-1$, the two stabilizers containing $Z$
at index $n$ are $S_{n}$ and $S_{n-s-3}$, and the two stabilizers containing
$X$ at index $n$ are $S_{n-1}$ and $S_{n-s-2}$ (note that these four
stabilizers are distinct). If $n$ is even, we have $(n-s-3)\bmod N$ is even,
$(n-1)\bmod N$ is odd, and $(n-s-2)\bmod N$ is odd; while if $n$ is odd, we
have $(n-s-3)\bmod N$ is odd, $(n-1)\bmod N$ is even, and $(n-s-2)\bmod N$ is
even. We have thus proved (a) and (b). For (c), notice that in order to obtain
$Y$ at any fixed qubit index $n$, one needs to at least multiply a stabilizer
containing $X$ at index $n$, and another stabilizer containing $Z$ at index
$n$. But by (a) and (b) this is impossible for subsets of
$\mathcal{S}_{\text{even}}$ or $\mathcal{S}_{\text{odd}}$. ∎
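The parity structure behind Lemma D.12 is easy to verify directly: for $s,t$ odd, every stabilizer with even starting index has its $Z$s at even indices and its $X$s at odd indices. A Python sketch (using the support convention $Z$ at $n,n+q$ and $X$ at $n+r,n+p+r$ from Appendix D.1):

```python
for s in range(1, 10, 2):
    for t in range(1, 10, 2):          # s and t odd
        N, p, q, r = s + t + 4, s + 1, s + 3, 1
        for n in range(0, N, 2):       # stabilizers S_n in S_even
            z_idx = {n % N, (n + q) % N}
            x_idx = {(n + r) % N, (n + p + r) % N}
            assert all(i % 2 == 0 for i in z_idx)  # Z only at even indices
            assert all(i % 2 == 1 for i in x_idx)  # X only at odd indices
print("Parity structure of S_even verified for odd s, t < 10")
```

Since products up to phase only take symmetric differences of supports, any product of a subset of $\mathcal{S}_{\text{even}}$ can have $Z$ only at even indices and $X$ only at odd ones, so no $Y$ can appear; a cyclic shift by one unit gives the same for $\mathcal{S}_{\text{odd}}$.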
The next theorem states the conditions that determine the membership of the
Pauli strings $X^{\otimes N}$, $Y^{\otimes N}$ and $Z^{\otimes N}$ in the
stabilizer group $\langle\mathcal{S}\rangle$ for the $(s,t)$ cyclic code
family. As with Lemma D.9, we will only determine membership up to phase
factors.
###### Theorem D.13.
For any 2-parameter $(s,t)$ cyclic code, we have the following:
1. (a)
$Z^{\otimes N}$ belongs to $\langle\mathcal{S}\rangle$ if and only if $s\equiv
3\pmod{4}$ and $t\equiv 1\pmod{4}$.
2. (b)
$X^{\otimes N}$ belongs to $\langle\mathcal{S}\rangle$ if and only if $s\equiv
1\pmod{4}$ and $t\equiv 3\pmod{4}$.
3. (c)
At most one of $X^{\otimes N}$, $Y^{\otimes N}$, or $Z^{\otimes N}$ can belong
to $\langle\mathcal{S}\rangle$.
4. (d)
$Y^{\otimes N}$ belongs to $\langle\mathcal{S}\rangle$ if and only if $s\equiv
0\pmod{2}$ and $t\equiv 0\pmod{2}$.
* Proof.
1. (a)
Define $\alpha=(s+3)\bmod\gcd(s+1,N)$. Note that
$\gcd(s+3,\gcd(s+1,N))=\gcd(\alpha,\gcd(s+1,N))$, and it is useful to also
observe that $\gcd(s+1,N)/\gcd(s+3,\gcd(s+1,N))\equiv 0\pmod{2}$ implies
$\alpha\neq 0$. By Lemma D.9(a), then $Z^{\otimes N}$ belongs to
$\langle\mathcal{S}\rangle$ if and only if
$\gcd(s+1,N)/\gcd(\alpha,\gcd(s+1,N))\equiv 0\pmod{2}$. We proceed to check
the various cases. The variables $x$ and $y$ appearing in this proof are
always non-negative integers.
1. (i)
At least one of $s$ or $t$ is even: First assume that both $s,t$ are even.
Then $N$ is even, while $s+1$ is odd, and so $\gcd(s+1,N)$ is odd. Now assume
exactly one of $s,t$ is even. Then $N$ is odd, so again $\gcd(s+1,N)$ is odd.
Now if $\gcd(s+1,N)=1$ we have $\alpha=0$, while if $\gcd(s+1,N)\geq 3$ then
$\alpha=2$. In the latter case this implies that
$\gcd(\alpha,\gcd(s+1,N))=\gcd(2,\gcd(s+1,N))=1$, and so
$\gcd(s+1,N)/\gcd(\alpha,\gcd(s+1,N))=\gcd(s+1,N)\equiv 1\pmod{2}$. Thus
$Z^{\otimes N}$ does not belong to $\langle\mathcal{S}\rangle$ in these cases.
2. (ii)
$s,t\equiv 1\pmod{4}$; $s,t\equiv 3\pmod{4}$; $s\equiv 1\pmod{4}\text{ and
}t\equiv 3\pmod{4}$: We handle these three cases together. So assume that
$s=4x+1,t=4y+1$ in the first case, $s=4x+3,t=4y+3$ in the second, and
$s=4x+1,t=4y+3$ in the third. The values of $N$ are $4x+4y+6$, $4x+4y+10$, and
$4x+4y+8$ for the three cases respectively. It then follows that
$\gcd(s+1,N)=\begin{cases}2\gcd(2x+1,2y+2)&\;\;\text{if}\;\;s=4x+1,t=4y+1,\\\
2\gcd(2x+2,2y+3)&\;\;\text{if}\;\;s=4x+3,t=4y+3,\\\
2\gcd(2x+1,2y+3)&\;\;\text{if}\;\;s=4x+1,t=4y+3.\end{cases}$ (93)
In each of the above cases $\gcd(s+1,N)$ is twice an odd number, so either
$\gcd(s+1,N)=2$ or $\gcd(s+1,N)\geq 6$. If $\gcd(s+1,N)=2$ we have
$\alpha=(s+3)\bmod 2=0$, since $s+3$ is even in all three cases; so
$Z^{\otimes N}$ does not belong to $\langle\mathcal{S}\rangle$. If
$\gcd(s+1,N)\geq 6$, then first we have $\alpha=(s+3)\bmod\gcd(s+1,N)=2$, and
so $\gcd(\alpha,\gcd(s+1,N))=\gcd(2,\gcd(s+1,N))=2$ in all three cases. It
follows that $\gcd(s+1,N)/\gcd(\alpha,\gcd(s+1,N))=\gcd(s+1,N)/2\equiv
1\pmod{2}$ in each case, and $Z^{\otimes N}$ does not belong to
$\langle\mathcal{S}\rangle$.
3. (iii)
$s\equiv 3\pmod{4}\text{ and }t\equiv 1\pmod{4}$: Finally suppose that
$s=4x+3,t=4y+1$; so $N=4(x+y+2)$ and
$\gcd(s+1,N)=\gcd(4x+4,4x+4y+8)=\gcd(4x+4,4y+4)=4\gcd(x+1,y+1)$ is a multiple
of $4$. This first implies that $\alpha=2$, and so
$\gcd(\alpha,\gcd(s+1,N))=\gcd(2,4\gcd(x+1,y+1))=2$. This in turn implies that
$\gcd(s+1,N)/\gcd(\alpha,\gcd(s+1,N))=4\gcd(x+1,y+1)/2=2\gcd(x+1,y+1)\equiv
0\pmod{2}$. Thus $Z^{\otimes N}$ belongs to $\langle\mathcal{S}\rangle$.
2. (b)
Let $\mathcal{S}^{\prime}$ be the set of stabilizers obtained by performing
the Clifford transformation that interchanges $X$ and $Z$ at every qubit, for
each stabilizer in $\mathcal{S}$. Thus it suffices to show that $Z^{\otimes
N}$ belongs to $\langle\mathcal{S}^{\prime}\rangle$ if and only if $s\equiv
1\pmod{4}$ and $t\equiv 3\pmod{4}$. Now the stabilizers in
$\mathcal{S}^{\prime}$ are generated by cyclic shifts of the Pauli string
$ZXI^{\otimes t}XZI^{\otimes s}$. The result then follows from (a).
3. (c)
For the sake of contradiction assume that any two of $X^{\otimes N}$,
$Y^{\otimes N}$, or $Z^{\otimes N}$ belong to $\langle\mathcal{S}\rangle$.
Then their product also belongs to $\langle\mathcal{S}\rangle$, and so in fact
all three $X^{\otimes N}$, $Y^{\otimes N}$, and $Z^{\otimes N}$ belong to
$\langle\mathcal{S}\rangle$. But by (a) and (b) $X^{\otimes N}$ and
$Z^{\otimes N}$ cannot belong to $\langle\mathcal{S}\rangle$ simultaneously,
which is a contradiction.
4. (d)
We analyze the different cases separately.
1. (i)
$s,t$ are even: In this case $N=s+t+4$ is even. Then for the subset
$\mathcal{S}_{\text{even}}=\\{S_{2n}:0\leq n\leq N/2-1\\}$ it is easily
calculated that
$\prod\mathcal{S}_{\text{even}}=\prod(\mathcal{S}\setminus\mathcal{S}_{\text{even}})=Y^{\otimes
N}$ up to phase factors.
2. (ii)
Exactly one of $s,t$ is even: In this case $N=s+t+4$ is odd, and so the Pauli
strings $X^{\otimes N}$, $Y^{\otimes N}$, and $Z^{\otimes N}$ are pairwise
anticommuting. Noticing that each stabilizer in $\mathcal{S}$ also commutes
with each of $X^{\otimes N}$, $Y^{\otimes N}$, and $Z^{\otimes N}$, we then
conclude that none of the Pauli strings $X^{\otimes N}$, $Y^{\otimes N}$, or
$Z^{\otimes N}$ belong to $\langle\mathcal{S}\rangle$.
3. (iii)
$s,t$ are odd: If $s\equiv 3\pmod{4}\text{ and }t\equiv 1\pmod{4}$, then
$Z^{\otimes N}$ belongs to $\langle\mathcal{S}\rangle$ by (a); while if
$s\equiv 1\pmod{4}$ and $t\equiv 3\pmod{4}$, then $X^{\otimes N}$ belongs to
$\langle\mathcal{S}\rangle$ by (b). In both cases $Y^{\otimes N}$ does not
belong to $\langle\mathcal{S}\rangle$ by (c). We are now left with two cases:
either $s,t\equiv 1\pmod{4}$, or $s,t\equiv 3\pmod{4}$, both satisfying
$N=s+t+4\equiv 2\pmod{4}$. We handle these cases together and show below that
$Y^{\otimes N}$ does not belong to $\langle\mathcal{S}\rangle$.
The proof is by contradiction, so let us suppose that $Y^{\otimes N}$ belongs
to $\langle\mathcal{S}\rangle$, and $\prod\mathcal{S}^{\prime}=Y^{\otimes N}$
up to phase factors, for some subset
$\mathcal{S}^{\prime}\subseteq\mathcal{S}$. Since both $s$ and $t$ are odd,
consider the subsets $\mathcal{S}_{\text{even}}$ and
$\mathcal{S}_{\text{odd}}$ defined in Lemma D.12, and let
$\mathcal{S}_{\text{even}}^{{}^{\prime}}=\mathcal{S}^{\prime}\cap\mathcal{S}_{\text{even}}$,
and
$\mathcal{S}_{\text{odd}}^{{}^{\prime}}=\mathcal{S}^{\prime}\cap\mathcal{S}_{\text{odd}}$.
It must then be true that the products
$\prod\mathcal{S}_{\text{even}}^{{}^{\prime}}$ and
$\prod\mathcal{S}_{\text{odd}}^{{}^{\prime}}$ have the property that for any
fixed qubit index $n$, exactly one of them contains $X$ and the other one
contains $Z$ at index $n$ (this follows from Lemma D.12(c) because otherwise
one cannot get $Y$ at index $n$). Thus both
$\prod\mathcal{S}_{\text{even}}^{{}^{\prime}}$ and
$\prod\mathcal{S}_{\text{odd}}^{{}^{\prime}}$ don’t contain $I_{2}$ at any
qubit index. Moreover any two distinct Pauli strings
$S_{i},S_{j}\in\mathcal{S}_{\text{even}}^{{}^{\prime}}$ must have disjoint
supports, because otherwise by Lemma D.12(a),(b) the product
$\prod\mathcal{S}_{\text{even}}^{{}^{\prime}}$ will contain $I_{2}$ at every
qubit index in the intersection of the supports of $S_{i}$ and $S_{j}$. Since
each stabilizer is supported at exactly four qubit indices this implies that
$N\equiv 0\pmod{4}$, which is a contradiction.
∎
Finally, we can find the distance of the two-parameter cyclic toric codes when
$s$ is close to $t$, the first three diagonals of Table 3.
###### Theorem D.14.
Consider a two-parameter $(s,t)$ cyclic toric code. If $t=s$ or $t=s+2$, its
code distance is two. If $t=s+1$, then its code distance is three.
* Proof.
Letting $D$ indicate the code distance, in all cases $D>1$ because no Pauli
with weight one commutes with all stabilizers.
In the $t=s$ case, we note that both $ZI^{\otimes s+1}XI^{\otimes s+1}$ and
$Y^{\otimes s+2}I^{\otimes s+2}$ commute with all stabilizers. They also
anticommute with each other. Thus, they are nontrivial logical operators and
$D=2$.
In the $t=s+1$ case, we note $YI^{\otimes s+1}XXI^{\otimes s+1}$ and
$X^{\otimes N}$ are nontrivial logical operators and therefore $D\leq 3$. We
must also show there is no weight-two Pauli commuting with all stabilizers. A
weight-two Pauli has the form $T=PI^{\otimes a}QI^{\otimes N-a-2}$ for single-
qubit Paulis $P,Q\in\\{X,Y,Z\\}$. If $a<s+1$ or $N-a-2=2s-a+3<s+1$, then $T$
clearly must anticommute with some stabilizer. This leaves two cases $a=s+1$
and $a=s+2$. In the former case, commuting with all stabilizers requires that
$Q=X$ and also that $P\otimes Q$ commutes with $Z\otimes Z$ and with $Z\otimes
X$, which is clearly impossible. In the latter case, the analogous argument
shows $P=X$ and $P\otimes Q$ commutes with $Z\otimes Z$ and with $X\otimes Z$,
again a contradiction.
In the $t=s+2$ case, we note that $XI^{\otimes s+2}XI^{\otimes s+2}$ and
$Y^{\otimes s+2}ZXI^{\otimes s+2}$ are a pair of anticommuting logical
operators, implying $D=2$. ∎
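The anticommutation claims above are easy to check mechanically: two Pauli strings anticommute exactly when they differ at an odd number of positions where both are non-identity. A short Python sketch (not from the paper) verifying the claimed pairs for the $t=s$ and $t=s+2$ cases:

```python
# Two Pauli strings anticommute iff the number of positions where both
# are non-identity and different is odd (symplectic criterion).
def anticommute(a: str, b: str) -> bool:
    """a, b: strings over I, X, Y, Z of equal length."""
    assert len(a) == len(b)
    clashes = sum(1 for p, q in zip(a, b) if p != "I" and q != "I" and p != q)
    return clashes % 2 == 1

for s in range(1, 6):
    # t = s case (N = 2s + 4): Z I^{s+1} X I^{s+1} vs Y^{s+2} I^{s+2}
    assert anticommute("Z" + "I" * (s + 1) + "X" + "I" * (s + 1),
                       "Y" * (s + 2) + "I" * (s + 2))
    # t = s + 2 case (N = 2s + 6): X I^{s+2} X I^{s+2} vs Y^{s+2} Z X I^{s+2}
    assert anticommute("X" + "I" * (s + 2) + "X" + "I" * (s + 2),
                       "Y" * (s + 2) + "ZX" + "I" * (s + 2))
```

Checking commutation with the stabilizers themselves would additionally require the explicit stabilizer set, which is not reproduced here.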
## Appendix E Embedding of the medial graph
The goal here is to prove some properties of medial graphs claimed in Section
3.4. See Definition 3.4 for the definition of a medial rotation system and, by
extension, a medial graph.
###### Lemma E.1.
The rotation system $R=(H,\lambda,\rho,\tau)$ and its medial rotation system
$\widetilde{R}=(\widetilde{H},\widetilde{\lambda},\widetilde{\rho},\widetilde{\tau})$
embed in the same manifold.
* Proof.
We show that both rotation systems have the same orientability and the same
genus.
For orientability, we show $R$ is orientable if and only if $\widetilde{R}$
is. Start by assuming $R$ is orientable. Therefore, we can partition $H$ into
two sets $H_{\pm 1}$ such that $\lambda$, $\rho$, $\tau$ applied to any
element of $H_{k}$ takes it to an element of $H_{-k}$. Define two sets
$\widetilde{H}_{\pm 1}\subseteq\widetilde{H}=H\times\\{-1,1\\}$ such that
$(h,j)\in\widetilde{H}_{k}$ if and only if $h\in H_{jk}$. These two sets
clearly partition $\widetilde{H}$. By definition, if
$(h,j)\in\widetilde{H}_{k}$ then
$\widetilde{\tau}(h,j)=(h,-j)\in\widetilde{H}_{-k}$. Also, by using the
orientability of $R$,
$\widetilde{\lambda}(h,j)=(\rho(h),j)\in\widetilde{H}_{-k}$ and
$\widetilde{\rho}(h,j)$ is either $(\lambda(h),j)$ or $(\tau(h),j)$ both of
which are in $\widetilde{H}_{-k}$. We conclude that $\widetilde{R}$ is
orientable. The other direction – if $\widetilde{R}$ is orientable, then so is
$R$ – follows the same argument in reverse.
The genus question boils down to counting the numbers of vertices, edges, and
faces. Let $V,E,F$ and $\widetilde{V},\widetilde{E},\widetilde{F}$ be the sets
of vertices, edges, faces for $R$ and $\widetilde{R}$. Then, we note
$\displaystyle|\widetilde{V}|$ $\displaystyle=|E|,$ (94)
$\displaystyle|\widetilde{E}|$ $\displaystyle=\sum_{v\in
V}\text{deg}(v)=2|E|,$ (95) $\displaystyle|\widetilde{F}|$
$\displaystyle=|V|+|F|.$ (96)
Therefore,
$\widetilde{\chi}=|\widetilde{V}|-|\widetilde{E}|+|\widetilde{F}|=\chi$ and
thus the genera are the same. ∎
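The counting argument can be condensed into a few lines: given $(|V|,|E|,|F|)$ for $R$, Eqs. (94)-(96) determine the medial counts, and the Euler characteristic is unchanged. A minimal sketch (the cube-in-the-sphere example is ours, not the paper's):

```python
# Medial counts from Eqs. (94)-(96): |V~| = |E|, |E~| = 2|E|, |F~| = |V| + |F|.
def medial_counts(V: int, E: int, F: int) -> tuple:
    return E, 2 * E, V + F

def euler(V: int, E: int, F: int) -> int:
    return V - E + F

# Example: the cube graph embedded in the sphere, (V, E, F) = (8, 12, 6)
Vm, Em, Fm = medial_counts(8, 12, 6)
assert euler(Vm, Em, Fm) == euler(8, 12, 6) == 2  # sphere: chi = 2
```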
###### Lemma E.2.
Let $R^{\prime}=(H^{\prime},\lambda^{\prime},\rho^{\prime},\tau^{\prime})$ be
a rotation system. Then, $R^{\prime}=\widetilde{R}$ is the medial rotation
system of some rotation system $R=(H,\lambda,\rho,\tau)$ if and only if
$R^{\prime}$ is checkerboardable and 4-valent.
* Proof.
We begin by showing a medial rotation system is 4-valent and checkerboardable.
The degree of a vertex is the size of an orbit of
$\widetilde{\rho}\widetilde{\tau}$. Clearly, any orbit must have even size $L$
because $\widetilde{\rho}\widetilde{\tau}$ applied to
$(h,j)\in\widetilde{H}=H\times\\{-1,1\\}$ flips the sign of $j$. Applying
$(\widetilde{\rho}\widetilde{\tau})^{L}$ to $(h,j)$ gives
$((\tau\lambda)^{L/2}h,j)$ if $j=1$ and $((\lambda\tau)^{L/2}h,j)$ if $j=-1$.
By definition, $(\lambda\tau)$ has order two, so $L=4$ is the size of any
orbit of $\widetilde{\rho}\widetilde{\tau}$ and the degree of any vertex in
the medial graph.
Checkerboardability demands that there is a partition of $\widetilde{H}$ into
two sets $\widetilde{H}_{\pm 1}$ such that $\widetilde{\tau}$ applied to any
element $(h,j)\in\widetilde{H}_{k}$ gives an element of $\widetilde{H}_{-k}$,
while applying $\widetilde{\rho}$ or $\widetilde{\lambda}$ to $(h,j)$ gives an
element of $\widetilde{H}_{k}$. This is clearly achieved by putting $(h,j)$,
for any $h$, into the set $\widetilde{H}_{j}$.
Now we must show the other direction, that a 4-valent, checkerboardable
rotation system $R^{\prime}$ is the medial graph of another rotation system.
Because $R^{\prime}$ is checkerboardable there is a partition
$H^{\prime}=H^{\prime}_{-1}\sqcup H^{\prime}_{1}$ where $\tau^{\prime}$ moves
elements from $H^{\prime}_{k}$ to $H^{\prime}_{-k}$ while $\lambda^{\prime}$
and $\rho^{\prime}$ preserve the sets. We let
$H=\left\\{(h,\tau^{\prime}(h)):h\in H^{\prime}_{1}\right\\}$ and define
permutations of $H$ called $\rho$, $\lambda$, $\tau$ so that
$\displaystyle\lambda(h,\tau^{\prime}(h))$
$\displaystyle=(\tau^{\prime}\rho^{\prime}\tau^{\prime}(h),\rho^{\prime}\tau^{\prime}(h)),$
(97) $\displaystyle\rho(h,\tau^{\prime}(h))$
$\displaystyle=(\lambda^{\prime}(h),\tau^{\prime}\lambda^{\prime}(h)),$ (98)
$\displaystyle\tau(h,\tau^{\prime}(h))$
$\displaystyle=(\rho^{\prime}(h),\tau^{\prime}\rho^{\prime}(h)).$ (99)
Each of these is clearly an involution. To be a legitimate rotation system,
$\lambda$ and $\tau$ must also commute. They do so because $R^{\prime}$ is
4-valent, i.e. $(\rho^{\prime}\tau^{\prime})^{4}=1$.
We claim the medial rotation system of $R=(H,\lambda,\rho,\tau)$ is
$R^{\prime}$. We must therefore establish a one-to-one map
$\phi:\widetilde{H}=H\times\\{-1,1\\}\rightarrow H^{\prime}$. We do this by
defining, for all $h\in H^{\prime}$ and $j\in\\{-1,1\\}$,
$\displaystyle\phi((h,\tau^{\prime}(h)),j)$
$\displaystyle=\bigg{\\{}\begin{array}[]{ll}h,&j=1\\\
\tau^{\prime}(h),&j=-1\end{array},$ (102) $\displaystyle\phi^{-1}(h)$
$\displaystyle=\bigg{\\{}\begin{array}[]{ll}((h,\tau^{\prime}(h)),1),&h\in
H^{\prime}_{1}\\\ ((\tau^{\prime}(h),h),-1),&h\in H^{\prime}_{-1}\end{array}.$
(105)
Then one can easily show
$\phi(\widetilde{\lambda}(\phi^{-1}(h)))=\lambda^{\prime}(h)$ for all $h$ and
likewise for $\lambda$ replaced with $\rho$ or $\tau$. This completes the
proof. ∎
## Appendix F Using GAP to find normal subgroups
To elucidate the process of finding hyperbolic codes, we give an example of
finding low-index normal subgroups in GAP. In particular, we use the LINS
package [50]. For the sake of illustration, we focus on the index 160 normal
subgroup $S$ that defines the $\llbracket 20,5,4\rrbracket$ code in Fig. 18(a)
and the first entry of Table 2.
    LoadPackage("LINS");;                                  # load the LINS package
    F := FreeGroup("l", "r", "t");;                        # l = lambda, r = rho, t = tau
    AssignGeneratorVariables(F);;                          # makes l, r, t into variable names
    G := F / [l^2, r^2, t^2, (l*r)^5, (l*t)^2, (r*t)^4];;  # quotient group, face-degree 5 and vertex-degree 4
    L := LowIndexNormal(G, 200);                           # a list of normal subgroups of G up to index 200
    S := L[9].Group;;                                      # S happens to be the 9th entry in the list
    IsNormal(G, S);                                        # checking it is indeed a normal subgroup; returns true
    GeneratorsOfGroup(S);                                  # displays an overcomplete set of generators of S
Reducing overcomplete generating sets to independent ones was done by hand and
verified by drawing the codes as those in Fig. 18 are drawn.
Further author information: (Send correspondence to Ali Vosoughi)
Ali Vosoughi: E-mail<EMAIL_ADDRESS>
# Large-scale Augmented Granger Causality (lsAGC) for Connectivity Analysis in
Complex Systems: From Computer Simulations to Functional MRI (fMRI)
Axel Wismüller Department of Electrical and Computer Engineering, University
of Rochester, NY, USA; Department of Imaging Sciences, University of
Rochester, NY, USA; Department of Biomedical Engineering, University of
Rochester, NY, USA; Faculty of Medicine and Institute of Clinical Radiology,
Ludwig Maximilian University, Munich, Germany
M. Ali Vosoughi Department of Electrical and Computer Engineering, University
of Rochester, NY, USA
###### Abstract
We introduce large-scale Augmented Granger Causality (lsAGC) as a method for
connectivity analysis in complex systems. The lsAGC algorithm combines
dimension reduction with source time-series augmentation and uses predictive
time-series modeling for estimating directed causal relationships among time-
series. This method is a multivariate approach, since it is capable of
identifying the influence of each time-series on any other time-series in the
presence of all other time-series of the underlying dynamic system. We
quantitatively evaluate the performance of lsAGC on synthetic directional
time-series networks with known ground truth. As a reference method, we
compare our results with cross-correlation, which is typically used as a
standard measure of connectivity in the functional MRI (fMRI) literature.
Using extensive simulations for a wide range of time-series lengths and two
different signal-to-noise ratios of 5 and 15 dB, lsAGC consistently
outperforms cross-correlation at accurately detecting network connections,
using Receiver Operating Characteristic (ROC) curve analysis, across all tested
time-series lengths and noise levels. In addition, as an outlook to possible
clinical application, we perform a preliminary qualitative analysis of
connectivity matrices for fMRI data of Autism Spectrum Disorder (ASD) patients
and typical controls, using a subset of 59 subjects of the Autism Brain
Imaging Data Exchange II (ABIDE II) data repository. Our results suggest that
lsAGC, by extracting sparse connectivity matrices, may be useful for network
analysis in complex systems, and may be applicable to clinical fMRI analysis
in future research, such as targeting disease-related classification or
regression tasks on clinical data.
###### keywords:
machine learning, resting-state fMRI, large-scale Augmented Granger Causality,
functional connectivity, autism spectrum disorder
## 1 INTRODUCTION
Currently, the quantification of directed information transfer between
interacting brain areas is one of the most challenging methodological problems
in computational neuroscience. A fundamental problem is identifying
connectivity in very high-dimensional systems. A common practice has been to
transform a high-dimensional system into a simplified representation, e.g. by
clustering, Principal, or Independent Component Analysis. The drawback of such
methodology is that an identified interaction between such simplified
components cannot readily be transferred back into the original high-
dimensional space. Thus, directed interactions between the original network
nodes can no longer be revealed. Although this significantly limits the
interpretation of brain network activities in physiological and disease
states, surprisingly little effort has been devoted to circumventing the
inevitable information loss induced by the aforementioned frequently employed
techniques.
Various methods have been proposed to obtain directional relationships in
multivariate time-series data, e.g., transfer entropy [1] and mutual
information [2]. However, as the multivariate problem’s dimensions increase,
computation of the density function becomes computationally expensive [3, 4].
Under the Gaussian assumption, transfer entropy is equivalent to Granger
causality [5]. However, the computation of multivariate Granger causality for
short time series in large-scale problems is challenging [6, 7]. To address
these problems, we have previously proposed a method for multivariate Granger
causality analysis using linear multivariate auto-regressive (MVAR) modeling,
which simultaneously circumvents the drawbacks of above mentioned
simplification strategies by introducing an invertible dimension reduction
followed by a back-projection of prediction residuals into the original data
space (large-scale Granger Causality, lsGC) [8]. We have also demonstrated the
applicability of this approach to resting-state fMRI analysis [9]. Recently,
we have also presented an alternative multivariate Granger causality analysis
method, large-scale Extended Granger Causality (lsXGC), that uses an augmented
dimension-reduced time-series representation for predicting target time-series
in the original high-dimensional system directly, i.e., without inverting the
dimensionality reduction step [10].
In this paper, we introduce a hybrid of both methods, large-scale Augmented
Granger Causality (lsAGC) that combines both invertible dimension reduction
and time-series augmentation. It first uses an augmented dimension-reduced
time-series representation for prediction in the low-dimensional space,
followed by an inversion of the initial dimension reduction step. In the
following, we explain the lsAGC algorithm and present quantitative results on
synthetic time-series data with known connectivity ground truth. Finally, as
an outlook to possible clinical application, we perform a preliminary
qualitative analysis of connectivity matrices for fMRI data of Autism Spectrum
Disorder (ASD) patients and typical controls, using a subset of the Autism
Brain Imaging Data Exchange II (ABIDE II) data repository.
This work is embedded in our group’s endeavor to expedite artificial
intelligence in biomedical imaging by means of advanced pattern recognition
and machine learning methods for computational radiology and radiomics, e.g.,
[11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48,
49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67,
68].
## 2 DATA
### 2.1 Synthetic Networks with Known Ground Truth for Quantitative Analysis
We quantitatively evaluate the performance of lsAGC for network structure
recovery using synthetic networks with known ground truth. We constructed
ground truth networks with N = 50 nodes, each containing 5 modules of 10 nodes
with high (low) probability for the existence of directed intra- (inter-)
module connections. We simulated two values of additive white Gaussian noise,
with signal-to-noise ratios (SNR) of 15 dB and 5 dB, and repeated the
experiment 100 times with different noise seeds. Networks were realized as
noisy stationary multivariate auto-regressive (MVAR) processes of model order
p = 2 in each of T = 1000 temporal process iterations. The network structure
was adapted from [69] and [56].
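The construction above can be sketched as follows. The connection probabilities, coefficient scale, and random seed are illustrative assumptions; the paper does not specify them:

```python
# Sketch: modular directed ground-truth network driven as a stationary MVAR(2)
# process, with additive white Gaussian measurement noise at a chosen SNR.
import numpy as np

rng = np.random.default_rng(0)
N, T, p = 50, 1000, 2
module = np.repeat(np.arange(5), 10)                  # 5 modules of 10 nodes

# High intra-, low inter-module connection probability (assumed values)
intra, inter = 0.3, 0.02
prob = np.where(module[:, None] == module[None, :], intra, inter)
A_true = (rng.random((N, N)) < prob).astype(float)
np.fill_diagonal(A_true, 0)

# Random MVAR coefficients on the ground-truth edges, scaled toward stability
coeffs = [0.9 ** k * A_true * rng.normal(0, 0.05, (N, N)) for k in range(1, p + 1)]

X = np.zeros((N, T))
for t in range(p, T):
    X[:, t] = sum(C @ X[:, t - k - 1] for k, C in enumerate(coeffs))
    X[:, t] += rng.normal(0, 1, N)                    # innovation noise

# Additive measurement noise at a given SNR (in dB), here 15 dB
snr_db = 15
sig_power = X.var(axis=1, keepdims=True)
noise_power = sig_power / 10 ** (snr_db / 10)
X_noisy = X + rng.normal(0, 1, (N, T)) * np.sqrt(noise_power)
```

The binary matrix `A_true` serves as the ground truth against which recovered connectivity is scored.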
### 2.2 Functional MRI Data for Qualitative Analysis
The following explanation of participants and data in this section follows the
description in [70]: The Autism Brain Imaging Data Exchange II (ABIDE II)
initiative has made publicly available MRI data from multiple sites. We used
resting-state fMRI data from 59 participants of the online ABIDE II repository
(http://fcon_1000.projects.nitrc.org/indi/abide) in this analysis, namely the
data from the Olin Neuropsychiatry Research Center, Institute of Living at
Hartford Hospital [71].
This data set consists of 24 ASD subjects (18-31 years) and 35 typical
controls (18-31 years). Autism diagnosis was based on the ASD cutoff of the
Autism Diagnostic Observation Schedule-Generic (ADOS-G). The typical controls
(TC) were screened for autism using the ADOS-G and any psychiatric disorder
based on the Structured Clinical Interview for DSM-IV Axis I Disorders-
Research Version (SCID-I RV) [71].
In this data set, resting-state fMRI scans were obtained from all subjects
using Siemens Magnetom Skyra Syngo MR D13. The study protocol included: (i)
High-resolution structural imaging using T1-weighted magnetization-prepared
rapid gradient-echo (MPRAGE) sequence, TE = 2.88 ms, TR = 2200 ms, isotropic
voxel size 1 mm, flip angle $13^{\circ}$. (ii) Resting-state fMRI scans with
TE = 30 ms, TR = 475 ms, flip angle $60^{\circ}$, isotropic voxel size of 3
mm. The acquisition lasted 7 minutes and 37 seconds.
The fMRI data used in this study were pre-processed using standard
methodology. Motion correction, brain extraction, and correction for slice
timing acquisition were performed. Additional nuisance regression for removing
variations due to head motion and physiological processes was carried out.
Each data set was finally registered to the 2 mm MNI standard space using a
12-parameter affine transformation. The functional data used in this study was
pre-processed using the CONN toolbox (www.nitrc.org/projects/conn) [72].
Additionally, the time-series were normalized to zero mean and unit standard
deviation to focus on signal dynamics rather than amplitude [73]. Finally, the
brain was parcellated into 90 regions as defined by the Automated Anatomic
Labelling (AAL) template [74]. Each regional time-series was represented by
the average time-series of all voxels included in the region.
## 3 ALGORITHM
Large-scale Augmented Granger Causality (lsAGC) builds on two ideas: 1) the
principle of original Granger causality, which quantifies the causal influence
of time-series $\mathbf{x_{s}}$ on time-series $\mathbf{x_{t}}$ by measuring
how much the prediction of $\mathbf{x_{t}}$ improves in the presence of
$\mathbf{x_{s}}$; and 2) dimension reduction, which addresses the
under-determined estimation problem frequently faced in fMRI analysis, where
the number of acquired temporal samples is usually not sufficient for
estimating the model parameters [9].
Consider the ensemble of time-series $\mathcal{X}\in\mathbb{R}^{N\times T}$,
where $N$ is the number of time-series (Regions Of Interest – ROIs) and $T$
the number of temporal samples. Let
$\mathcal{X}=(\mathbf{x_{1}},\mathbf{x_{2}},\dots,\mathbf{x_{N}})^{\mathsf{T}}$
be the whole multidimensional system and $x_{i}\in\mathbb{R}^{1\times T}$ a
single time-series with $i=1,2,\dots,N$, where
$\mathbf{x_{i}}=(x_{i}(1),x_{i}(2),\dots,x_{i}(T))$. To overcome the
under-determined problem, $\mathcal{X}$ is first decomposed into its $p$
highest-variance principal components $\mathcal{Z}\in\mathbb{R}^{p\times T}$
using Principal Component Analysis (PCA), i.e.,
$\mathcal{Z}=W\mathcal{X},$ (1)
where $W\in\mathbb{R}^{p\times N}$ represents the PCA coefficient matrix.
Subsequently, the dimension-reduced time-series ensemble $\mathcal{Z}$ is
augmented by one original time-series $\mathbf{x_{s}}$ yielding a dimension-
reduced augmented time-series ensemble $\mathcal{Y}\in\mathbb{R}^{(p+1)\times
T}$ for estimating the influence of $\mathbf{x_{s}}$ on all other time-series.
Following this, we locally predict the dimension-reduced representation
$\mathcal{Z}$ of the original high-dimensional system $\mathcal{X}$ at each
time sample $t$, i.e. $\mathcal{Z}(t)\in\mathbb{R}^{p\times 1}$ by calculating
an estimate $\hat{\mathcal{Z}}_{\mathbf{x_{s}}}(t)$. To this end, we fit an
affine model based on a vector $\mathbf{y}(t)\in\mathbb{R}^{m\cdot(p+1)\times 1}$
stacking the $m$ past time samples $\mathcal{Y}(\tau)\in\mathbb{R}^{(p+1)\times 1}$
($\tau=t-1,t-2,\dots,t-m$), with a parameter matrix
$\mathcal{A}\in\mathbb{R}^{p\times m\cdot(p+1)}$ and a constant bias vector
$\mathbf{b}\in\mathbb{R}^{p\times 1}$,
$\hat{\mathcal{Z}}_{\mathbf{x_{s}}}(t)=\mathcal{A}\mathbf{y}(t)+\mathbf{b},~{}~{}t=m+1,m+2,\dots,T.$
(2)
Subsequently, we use the prediction $\hat{\mathcal{Z}}_{\mathbf{x_{s}}}(t)$ to
calculate an estimate of $\mathcal{X}$ at time $t$, i.e.
$\mathcal{X}(t)\in\mathbb{R}^{N\times 1}$ by inverting the PCA step of
equation (1), i.e.
$\mathcal{X}=W^{\dagger}\mathcal{Z},$ (3)
where $W^{\dagger}\in\mathbb{R}^{N\times p}$ represents the inverse of the PCA
coefficient matrix $W$, which is calculated as the Moore–Penrose pseudoinverse
of $W$.
Now $\hat{\mathcal{X}}_{\setminus{\mathbf{x_{s}}}}(t)$, which is the
prediction of $\mathcal{X}(t)$ without the information of $\mathbf{x_{s}}$,
will be estimated. The estimation process is identical to the previous one,
except that the augmented time-series $\mathbf{x_{s}}$ and its corresponding
column in the PCA coefficient matrix $W$ are removed.
The computation of a lsAGC index is based on comparing the variance of the
prediction errors obtained with and without consideration of $\mathbf{x_{s}}$.
The lsAGC index $f_{\mathbf{x_{s}}\xrightarrow{}\mathbf{x_{t}}}$, which
indicates the influence of $\mathbf{x_{s}}$ on $\mathbf{x_{t}}$ and is
positive when including $\mathbf{x_{s}}$ improves the prediction, is
calculated as:
$f_{\mathbf{x_{s}}\xrightarrow{}\mathbf{x_{t}}}=\log{\frac{\mathrm{var}(e_{\setminus s})}{\mathrm{var}(e_{s})}},$ (4)
where $e_{\setminus s}$ is the error in predicting $\mathbf{x_{t}}$ when
$\mathbf{x_{s}}$ was not considered, and $e_{s}$ is the error, when
$\mathbf{x_{s}}$ was used. Based on preliminary analyses, in this study, we
set $p=7$ and $m=3$.
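A compact reimplementation of the algorithm described in this section is sketched below. This is our illustrative reading of the text, not the authors' reference code: PCA is done via SVD, the affine model is fit by ordinary least squares, and the model without $\mathbf{x_{s}}$ is simplified to dropping only the augmentation (the paper additionally removes the corresponding column of $W$). The index is computed in the conventional orientation, positive when including $\mathbf{x_{s}}$ reduces the prediction error:

```python
import numpy as np

def lsagc(X: np.ndarray, p: int = 7, m: int = 3) -> np.ndarray:
    """Return an (N, N) matrix F where F[s, t] estimates the influence of
    time-series s on time-series t (diagonal set to zero)."""
    N, T = X.shape
    # PCA (Eq. 1): Z = W X, with W built from the top-p principal directions
    U, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True), full_matrices=False)
    W = U[:, :p].T                                   # (p, N)
    Z = W @ X                                        # (p, T)
    Winv = np.linalg.pinv(W)                         # (N, p), Moore-Penrose
    F = np.zeros((N, N))
    err_red = Winv @ _ar_residuals(Z, Z, m)          # model without augmentation
    for s in range(N):
        Y = np.vstack([Z, X[s:s + 1]])               # augmented ensemble (p+1, T)
        err_full = Winv @ _ar_residuals(Z, Y, m)     # model with x_s (Eqs. 2-3)
        # Positive when adding x_s lowers the prediction error (cf. Eq. 4)
        F[s] = np.log(err_red.var(axis=1) / err_full.var(axis=1))
    np.fill_diagonal(F, 0)
    return F

def _ar_residuals(Z: np.ndarray, Y: np.ndarray, m: int) -> np.ndarray:
    """Residuals of an affine order-m AR prediction of Z(t) from past Y."""
    T = Z.shape[1]
    past = np.vstack([Y[:, m - k - 1:T - k - 1] for k in range(m)])  # stacked lags
    past = np.vstack([past, np.ones((1, T - m))])                    # bias term b
    A, *_ = np.linalg.lstsq(past.T, Z[:, m:].T, rcond=None)
    return Z[:, m:] - (past.T @ A).T
```

On the fMRI data of this study one would call `lsagc(X)` with `X` the matrix of the 90 AAL regional time-series and the reported $p=7$, $m=3$.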
## 4 RESULTS
Quantitative Analysis of Synthetic Networks with Known Ground Truth: Network
reconstruction results for the synthetic networks with known ground truth,
using the Area Under the Curve (AUC) for Receiver Operating Characteristic
(ROC) analysis, are shown in Fig. 1. For each time-series length and each
noise level, we performed 100 simulations. As can be seen from Fig. 1, lsAGC
consistently outperforms cross-correlation in its ability to accurately
recover network structure over a wide range of time-series-lengths in both
high- and low-noise scenarios, with a mean AUC for lsAGC for a time-series
length of 1000 temporal samples equal to 98.9% and 97.1% for signal-to-noise
values of 15 and 5 dB, respectively. On the other hand, cross-correlation
performs quite poorly compared to lsAGC with its mean AUC ranging around 0.5
for all examined time-series lengths and noise levels, equivalent to the
quality of randomly guessing the presence or absence of network connections.
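The AUC scoring used here can be sketched without any plotting machinery: ROC AUC equals the probability that a randomly chosen true edge receives a higher connectivity score than a randomly chosen non-edge, with ties counting one half. A minimal numpy version (our implementation, details assumed):

```python
import numpy as np

def connectivity_auc(F_est: np.ndarray, A_true: np.ndarray) -> float:
    """ROC AUC of off-diagonal connectivity scores against a binary
    ground-truth adjacency: probability that a true edge scores higher
    than a non-edge, ties counting one half."""
    mask = ~np.eye(A_true.shape[0], dtype=bool)      # ignore self-connections
    scores, labels = F_est[mask], A_true[mask].astype(bool)
    pos, neg = scores[labels], scores[~labels]
    diff = pos[:, None] - neg[None, :]
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())
```

A perfect estimator yields 1.0, and an uninformative one hovers around 0.5, matching the interpretation of Fig. 1.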
Figure 1: Quantitative performance comparison of cross-correlation and lsAGC
for recovery of synthetic networks. The vertical axis is the Area Under the
Curve (AUC) for Receiver Operating Characteristic (ROC) analysis, where an
AUC = 1 indicates a perfect network recovery and AUC = 0.5 random assignment.
Whiskers indicate the 95% confidence interval, green diamonds represent
outliers, orange lines represent medians, and boxes are drawn from the first
quartile to the third quartile. It is clearly seen that lsAGC outperforms
cross-correlation over all tested time-series lengths and noise levels.
Qualitative Analysis of Connectivity Matrices Extracted from fMRI Data:
Averaged connectivity matrices, which were extracted using lsAGC and cross-
correlation, are shown in Fig. 2 for both healthy controls and ASD patients.
These matrices were obtained by calculating and then averaging over the
connectivity matrices of the 24 ASD patients and 35 typical controls, using
the proposed lsAGC algorithm as well as conventional cross-correlation
analysis. Visual inspection of the mean connectivity matrices in Fig. 2
reveals subtle differences between ASD patients and healthy controls for both
methods, which may be exploited for classification among the two cohorts in
future research. We also find from visual inspection of Fig. 2 that the
features extracted by the two methods are likely different, where the mean
connectivity matrices for lsAGC appear to be more “sparse” than for cross-
correlation. To quantify this qualitative visual impression, we calculated the
entropy of the connectivity matrix elements for each of the 59 subjects as a
surrogate for matrix “sparseness”. The mean entropy for healthy controls with
lsAGC and correlation was 1.66 $\pm$ 0.14 and 4.76 $\pm$ 0.07, respectively,
and the mean entropy for the ASD patients with lsAGC and correlation was 1.75
$\pm$ 0.12 and 4.78 $\pm$ 0.07, respectively. That is, for both cohorts, the
sparseness of lsAGC connectivity matrices appears to be higher than for cross-
correlation analysis. We found that this difference between methods, as
expressed by the entropy of the connectivity matrix elements, was
statistically significant (Mann-Whitney U-test, $p<10^{-8}$). We conclude that
lsAGC may be useful for disease-related classification or regression tasks on
clinical fMRI data, because it may extract relevant features potentially not
captured by cross-correlation, which is currently used as the mainstay of fMRI
connectivity analysis. This hypothesis can be further investigated in future
research.
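The entropy surrogate and the significance test can be sketched as follows. The histogram bin count and the placeholder matrices are our assumptions; the paper does not state how the entropy was binned:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def matrix_entropy(C: np.ndarray, bins: int = 100) -> float:
    """Shannon entropy (bits) of the histogram of matrix elements; lower
    entropy indicates a sparser, more concentrated value distribution."""
    counts, _ = np.histogram(C.ravel(), bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Placeholder per-subject entropies for two methods (random stand-ins for the
# 59 subjects' lsAGC and cross-correlation matrices, 90 AAL regions each)
rng = np.random.default_rng(0)
ent_lsagc = [matrix_entropy(rng.laplace(0, 0.1, (90, 90))) for _ in range(59)]
ent_corr = [matrix_entropy(rng.uniform(-1, 1, (90, 90))) for _ in range(59)]
stat, pval = mannwhitneyu(ent_lsagc, ent_corr)       # two-sample rank test
```

With real connectivity matrices in place of the random stand-ins, `pval` corresponds to the $p<10^{-8}$ figure reported above.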
Figure 2: Averaged connectivity matrices: top left: average connectivity
matrix of healthy control subjects using lsAGC, top right: average
connectivity matrix of ASD patients using lsAGC, bottom left: average
connectivity matrix of healthy control subjects using cross-correlation, and
bottom right: average connectivity matrix of ASD patients using cross-
correlation. Note that the different methods capture different connectivity
features, and that there are slight differences of connectivity patterns
between healthy subjects and ASD patients. Also, the lsAGC connectivity
matrices appear to be significantly “sparser” than cross-correlation matrices.
This observation is quantitatively confirmed by calculating the entropy over
the matrix elements, as explained in the text.
## 5 CONCLUSIONS
In this work, we have introduced large-scale Augmented Granger Causality
(lsAGC) as a method for connectivity analysis in complex systems. The lsAGC
algorithm combines dimension reduction with source time-series augmentation
and uses multivariate predictive time-series modeling for estimating directed
causal relationships among time-series. We quantitatively evaluated the
performance of lsAGC on synthetic directional time-series networks with known
ground truth. Using simulations for a wide range of time-series lengths and
different signal-to-noise ratios, we compared lsAGC with cross-correlation,
which is currently used as the clinical standard for fMRI connectivity
analysis. We found that lsAGC consistently outperformed cross-correlation at
accurately detecting network connections. In addition, we performed a
preliminary qualitative analysis of connectivity matrices for fMRI data of
Autism Spectrum Disorder (ASD) patients and typical controls, using a subset
of the ABIDE II data repository. Our results suggest that lsAGC, by extracting
sparse connectivity matrices, may be useful for network analysis in complex
systems, and may be applicable to clinical fMRI analysis in future research,
such as targeting disease-related classification or regression tasks on
clinical data.
###### Acknowledgements.
This research was funded by Ernest J. Del Monte Institute for Neuroscience
Award from the Harry T. Mangurian Jr. Foundation. This work was conducted as a
Practice Quality Improvement (PQI) project related to American Board of
Radiology (ABR) Maintenance of Certificate (MOC) for Prof. Dr. Axel Wismüller.
This work is not being and has not been submitted for publication or
presentation elsewhere.
## References
* [1] Schreiber, T., “Measuring information transfer,” Physical review letters 85(2), 461 (2000).
* [2] Kraskov, A., Stögbauer, H., and Grassberger, P., “Estimating mutual information,” Physical review E 69(6), 066138 (2004).
* [3] Mozaffari, M. and Yilmaz, Y., “Online multivariate anomaly detection and localization for high-dimensional settings,” arXiv preprint arXiv:1905.07107 (2019).
* [4] Mozaffari, M. and Yilmaz, Y., “Online anomaly detection in multivariate settings,” in [2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP) ], 1–6, IEEE (2019).
* [5] Barnett, L., Barrett, A. B., and Seth, A. K., “Granger causality and transfer entropy are equivalent for Gaussian variables,” Physical review letters 103(23), 238701 (2009).
* [6] Vosoughi, M. A. and Wismüller, A., “Large-scale kernelized Granger causality to infer topology of directed graphs with applications to brain networks,” arXiv preprint arXiv:2011.08261 (2020).
* [7] Wismüller, A., DSouza, A. M., Abidin, A. Z., and Vosoughi, M. A., “Large-scale nonlinear Granger causality: A data-driven, multivariate approach to recovering directed networks from short time-series data,” arXiv preprint arXiv:2009.04681 (2020).
* [8] D’Souza, A. M., Abidin, A. Z., Leistritz, L., and Wismüller, A., “Large-scale Granger causality analysis on resting-state functional MRI,” in [Medical Imaging 2016: Biomedical Applications in Molecular, Structural, and Functional Imaging ], 9788, 97880L, International Society for Optics and Photonics (2016).
* [9] DSouza, A. M., Abidin, A. Z., Leistritz, L., and Wismüller, A., “Exploring connectivity with large-scale Granger causality on resting-state functional MRI,” Journal of neuroscience methods 287, 68–79 (2017).
# Development of a web application for monitoring solar activity and cosmic radiation

D. Pelosi, N. Tomassetti, M. Duranti

Dipartimento di Fisica e Geologia, Università degli Studi di Perugia, Italy
INFN - Sezione di Perugia - Perugia, Italy
###### Abstract
The flux of cosmic rays (CRs) in the heliosphere is subject to remarkable
time variations caused by the 11-year cycle of solar activity. To help the
study of this effect, we have developed a web application (Heliophysics
Virtual Observatory) that collects real-time data on solar activity,
interplanetary plasma, and charged radiation from several space missions and
observatories. As we will show, our application can be used to visualize,
manipulate, and download updated data on sunspots, heliospheric magnetic
fields, the solar wind, and neutron monitor counting rates. Data and
calculations are automatically updated on a daily basis. A nowcast of the
energy spectrum of CR protons near Earth is also provided, using calculations
and real-time neutron monitor data as input.
## 1 Introduction
During their motion inside the heliosphere, cosmic rays (CRs) experience the
effects of heliospheric forces, commonly known as solar modulation.
Specifically, the solar wind and its embedded magnetic field constantly
reshape their energy spectra. The solar modulation effect is caused by several
processes such as convection, drift motion, diffusion, and adiabatic cooling,
although investigations into the associated parameters are still underway
[1]. As a result, the energy spectrum of CRs observed near Earth is
significantly different from that in the surrounding interstellar medium, the
so-called Local Interstellar Spectrum (LIS). Furthermore, solar modulation is
known to be energy- and time-dependent. In fact, the effect is more evident
for CRs with kinetic energies below $\sim$ 10 GeV and shows a clear
correlation with solar activity. This implies that solar modulation inherits
the quasi-periodical behavior of solar activity. More specifically, the
monthly SunSpot Number (SSN) observed on the Sun’s photosphere, widely used as
a good proxy for solar activity, varies with a period of 11 years, known as
solar cycle. A deep investigation of the solar modulation phenomenon is of
crucial importance to achieve a full understanding of the dynamics of charged
particles in the heliospheric turbulence, as well as to accurately predict the
radiation dose received by electronics and astronauts. Forecasting the CR
fluxes near-Earth and in the interplanetary space is essential, given the
ever-growing number of satellites orbiting Earth and the human space
missions to the Moon and Mars planned for the coming decades. For this purpose,
several analytic and numerical models of solar modulation have been proposed
[1, 2, 3, 4]. Recent progress in this field has been possible thanks to time-
resolved data on CRs fluxes released from many space missions such as
EPHIN/SOHO (1995-2018) [5], PAMELA (2006-2016) [6], AMS-02 (operating since
2011 and expected to remain operative for the entire ISS lifetime) [7], along with the direct LIS
data from the Voyager probes in the interstellar space [8]. The temporal
variations of CR fluxes are also measured, with some caveats, by the
ground-based network of Neutron Monitors (NMs), whose data have been collected
since 1951 [9]. An NM detector is an energy-integrating device whose count rate
($N$) is defined (for each species of CR) as an integral, above the local
geomagnetic rigidity cutoff, of a product of the near-Earth CR flux and the
specific yield function of the detector. Models also need solar data such as
SSN (in different time resolutions) provided by the SILSO/SIDC database of the
Royal Observatory of Belgium [10], polar field strength and tilt angle values
of the heliospheric current sheet (HCS) monthly provided by the Wilcox Solar
Observatory [11] and heliospheric data about radial speed and proton density
of the solar wind, distributed monthly by the NASA missions WIND and ACE [12]. We have
developed the Heliophysics Virtual Observatory (HVO) to make data
access easier and faster. HVO is a web application that collects all the data
mentioned above in a single tool with automatic daily updates. It lets users
visualize, manipulate, and download the data. We also present a simplified
real-time model of the near-Earth proton flux, integrated into a dedicated
section of HVO.
## 2 Real-time model
The propagation of CRs in the heliosphere is governed by the Parker equation:
$\frac{\partial f}{\partial t}=-(C\vec{V}+\langle\vec{v}_{drift}\rangle)\cdot\nabla f+\nabla\cdot(\textbf{K}\cdot\nabla f)+\frac{1}{3}(\nabla\cdot\vec{V})\frac{\partial f}{\partial\ln R}+q$ (1)
The equation describes the temporal evolution of CR phase space density
$f=f(t,R)$, where $R=p/Z$ is the CR rigidity, $\langle\vec{v}_{drift}\rangle$
is the averaged particle drift velocity, $\vec{V}$ is the solar wind velocity,
K is the symmetric part of CR diffusion tensor, and $q$ is any local source of
CRs [1]. The Parker equation is often solved within the so-called Force-Field
(FF) approximation [2]. The FF model assumes steady-state conditions
(i.e., negligible short-term modulation effects), a radially expanding wind
$V(r)$, an isotropic and separable diffusion coefficient
$K\equiv\kappa_{1}(r){\cdot}\kappa_{2}(R)$, and negligible drift and loss terms.
Although these assumptions are often violated, the FF approximation provides a
useful way to describe the evolution of the near-Earth CR flux, and it is
frequently used thanks to its simplicity. The resulting CR flux $J(t,R)$ is related to
$f$ by $J=R^{2}f$. Writing the solution in terms of kinetic energy per nucleon
$E$, for a CR nucleus with charge number $Z$ and mass number $A$, the near-
Earth ($r=1$ AU) flux at the epoch $t$ is given by:
$J(E,t)=\frac{(E+M_{p})^{2}-M_{p}^{2}}{(E+M_{p}+\frac{Z}{A}\phi(t))^{2}-M_{p}^{2}}J_{\rm{LIS}}(E+\frac{Z}{A}\phi(t)),$
(2)
where $\phi$ is the _modulation potential_; it has the units of an electric
potential and typically lies in the range 0.1-1 GV. The parameter $\phi$ can be
interpreted as the averaged rigidity loss of CRs in their motion from the edge
of the heliosphere down to the Earth. Thus, the implementation of this
simplified model depends on the knowledge of two key elements: the time-series
of $\phi$ and the LIS. In this work, we have used the new LIS models based on
the latest results from Voyager 1 and AMS-02 [13] and the values of the
modulation potential reconstructed by Usoskin et al. 2011 [14] from NM data
on a monthly basis, from 1964 to 2011. To extend the $\phi$ reconstruction
from 2011 to the present epoch, instead of repeating the Usoskin
methodology (based on the NM yield function), we propose a simplified method.
For a given NM detector, the NM counting rate $N(t)$ and $\phi(t)$ are anti-
correlated and we can establish a quadratic relation between them:
$\phi(N(t))=A+B\cdot N(t)+C\cdot N(t)^{2}$ (3)
We determined the coefficients $A$, $B$, and $C$ as best-fit values for
several NM stations using the Usoskin $\phi$-values. This enables us to obtain
a prediction of $\phi$ for any epoch $t$ for which the NM rate $N$ is known.
Inserting the parameterization of LIS and the $\phi$ value at epoch $t$ in Eq.
2 we obtain a real-time estimator for near-Earth CR flux. This simplified
model has been integrated into HVO.
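A minimal sketch of this two-step procedure (fitting the quadratic $\phi(N)$ relation of Eq. 3, then applying the force-field shift of Eq. 2) is given below. The LIS parameterization and the calibration numbers are invented placeholders, not the actual models or data used by HVO.

```python
import numpy as np

M_P = 0.938  # proton rest mass [GeV]

def fit_phi_of_n(n_rates, phi_values):
    """Fit phi = A + B*N + C*N^2 (Eq. 3) to calibration pairs (N, phi)."""
    c, b, a = np.polyfit(n_rates, phi_values, deg=2)  # highest power first
    return a, b, c

def phi_of_n(n, coeffs):
    a, b, c = coeffs
    return a + b * n + c * n**2

def force_field_flux(E, phi, lis=None, Z=1, A=1):
    """Near-Earth flux from Eq. 2 for kinetic energy per nucleon E [GeV/n]."""
    lis = lis if lis is not None else toy_lis
    shift = (Z / A) * phi
    num = (E + M_P)**2 - M_P**2
    den = (E + M_P + shift)**2 - M_P**2
    return num / den * lis(E + shift)

# Toy power-law LIS -- a placeholder, not the Voyager/AMS-02-based LIS of [13].
def toy_lis(E):
    return 1.9e4 * (E + M_P)**-2.7

# Hypothetical calibration data: monthly NM rates vs. Usoskin phi-values [GV].
n_cal = np.array([95.0, 100.0, 105.0, 110.0])
phi_cal = np.array([0.90, 0.75, 0.62, 0.50])
coeffs = fit_phi_of_n(n_cal, phi_cal)

phi_now = phi_of_n(102.0, coeffs)       # phi estimate for today's NM rate
J_now = force_field_flux(1.0, phi_now)  # proton flux at E = 1 GeV
```

With $\phi=0$ the force-field shift vanishes and the near-Earth flux reduces to the LIS, which is a convenient sanity check for the implementation.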
## 3 The Heliophysics Virtual Observatory
The investigation of the solar modulation phenomenon requires a large variety
of heliospheric and radiation data. HVO [15] is a project developed under the
CRISP scientific program of experimental study and phenomenological modeling
of space weather, within the framework agreement between Università degli
Studi di Perugia and Agenzia Spaziale Italiana (ASI). HVO is a web application
that uses Python scripts to extract data daily from several databases (listed
in the Resources section), visualizes them, and makes them available in a
standardized format. HVO has been implemented with the JavaScript ROOT package
JSROOT, which enables users to manipulate graphs directly from the web page
and to download data in text format or as graphic objects with the ROOT
extension. To date, HVO has
three main sections. The first one is dedicated to solar data such as SSN in
daily, monthly, yearly, and smoothed formats extracted from SILSO/SIDC [10],
observations of the Sun’s polar magnetic field strength and tilt angle of the
HCS reconstructed with the classic and radial model from the Wilcox Solar
observatory [11]. The second section contains heliospheric data such as proton
density and radial speed of the solar wind, monthly updated from WIND and ACE
[12]. The third section contains cosmic radiation data from NMs and a real-
time model for Galactic CR protons discussed in Sect. 2. An interactive user
interface lets users select one or more NM stations, choose the time
resolution of the rates (daily, monthly, yearly, or by Carrington rotation),
and set the proton energy and time interval. HVO provides, for each
selected NM, the graph of the count rate $N(t)$, the calculated time-series of
$\phi$ from Eq. 3, and the estimated proton flux near-Earth $J(E)$ from Eq. 2.
Fig. 1 shows an example of the HVO functionality.
Figure 1: Plots extracted from HVO: A) Monthly rate $N(t)$ of the NEWK station
(Newark, NJ, USA) in time interval 1/1/2004 - 1/1/2021. B) Monthly SSN in the
time interval 1/1/2004 - 1/1/2021. C) Calculated time-series of $\phi$ using
the NEWK rates (1/1/2004 - 1/1/2021). D) Averaged tilt angle of the HCS
measured with the classic model for any Carrington rotation between 1/1/2004
and 1/1/2021. E) Estimated near-Earth proton flux at $E$ = 1 GeV using the
NEWK rates (1/1/2004 - 1/1/2021). F) Solar wind speed for the period 1/1/2004
- 1/1/2021.
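The daily update relies on Python scripts that download and parse plain-text data files from the providers listed above. The sketch below parses a SILSO-style daily sunspot record; the semicolon-separated column layout (year; month; day; decimal date; SSN; ...) is an assumption to be checked against the provider's format description, and a real script would first download the file from the SILSO site.

```python
import csv
from io import StringIO

def parse_daily_ssn(text):
    """Parse a SILSO-style daily sunspot file into ((year, month, day), SSN)
    records. Assumed layout: semicolon-separated columns with the SSN in
    column 5; a value of -1 marks a missing observation and is skipped."""
    records = []
    for row in csv.reader(StringIO(text), delimiter=';'):
        if len(row) < 5:
            continue  # skip blank or malformed lines
        year, month, day = int(row[0]), int(row[1]), int(row[2])
        ssn = float(row[4])
        if ssn < 0:
            continue  # missing observation
        records.append(((year, month, day), ssn))
    return records

# Two sample lines: one valid observation, one missing (-1).
sample = "2021;01;01;2021.001; 14 ;1.0;9;\n2021;01;02;2021.004; -1 ;-1.0;0;\n"
data = parse_daily_ssn(sample)
```

A daily cron job would fetch the file, run this parser, and write the records into the standardized format served by the web page.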
## 4 Conclusions
In this work, we have presented a web application aimed at monitoring solar
activity and cosmic radiation, as well as providing real-time calculation of
the energy spectra of CR protons in proximity to Earth. HVO is in its
first development phase, and several improvements can be made to turn it into
a more useful tool for the CR astrophysics and space physics community. In
particular, the real-time proton model can be extended to other charged
species. HVO can also be integrated with improved numerical models of CR
transport in the heliosphere [3, 4], which will enable forecasting of the CR
radiation at the interplanetary level. Finally, other relevant observations
can be included, such as data on solar energetic particle (SEP) events, solar
flares, coronal mass ejections (CMEs), and other interplanetary disturbance
phenomena.
###### Acknowledgements.
We acknowledge the support of the Italian Space Agency under agreement ASI-UniPG
2019-2-HH.0.
## References
* [1] M.S. Potgieter Liv. Rev. Sol. Phys., 10 (2013) 3
* [2] H. Moraal Space Sci. Ref., 176 (2013) 299
* [3] N. Tomassetti Phys. Rev. D, 96 (2017) 103005
* [4] N. Tomassetti et al. ApJ Lett., 849 (2017) L32
* [5] P. Kühl, R. Gómez-Herrero, B. Heber Sol. Phys., 291 (2016) 965
* [6] M. Martucci et al. ApJ Lett., 854 (2018) L2
* [7] M. Aguilar, et al. PRL, 120 (2018) 051101
* [8] A. C. Cummings, et al. ApJ, 831 (2016) 18
* [9] NMDB - Neutron Monitor Database [http://www01.nmdb.eu]
* [10] SILSO/SIDC - Solar Influences Data analysis Center [http://www.sidc.be/silso]
* [11] WSO - Wilcox Solar Observatory [http://wso.stanford.edu]
* [12] NASA/SPDF - Space Physics Data Facility [https://spdf.gsfc.nasa.gov]
* [13] C. Corti, et al. ApJ, 829 (2016) 8; N. Tomassetti, et al. PRL, 121 (2018) 251104
* [14] I. G. Usoskin, et al. J. Geophys. Res. - Space Physics, 116 (2011) A2
* [15] HVO - Heliophysics Virtual Observatory [https://crisp.unipg.it/hvo]
# Effects of Pre- and Post-Processing on type-based Embeddings in
Lexical Semantic Change Detection
Jens Kaiser, Sinan Kurtyigit†, Serge Kotchourko†, Dominik Schlechtweg
Institute for Natural Language Processing, University of Stuttgart
<EMAIL_ADDRESS>
† Authors contributed equally, and their ordering was determined randomly.
###### Abstract
Lexical semantic change detection is a new and innovative research field. The
optimal fine-tuning of models including pre- and post-processing is largely
unclear. We optimize existing models by (i) pre-training on large corpora and
refining on diachronic target corpora tackling the notorious small data
problem, and (ii) applying post-processing transformations that have been
shown to improve performance on synchronic tasks. Our results provide a guide
for the application and optimization of lexical semantic change detection
models across various learning scenarios.
## 1 Introduction
In recent years Lexical Semantic Change Detection (LSCD), i.e. the detection
of word meaning change over time, has seen considerable developments
(Tahmasebi et al., 2018; Kutuzov et al., 2018; Hengchen et al., 2021). The
recent publication of multi-lingual human-annotated evaluation data from
SemEval-2020 Task 1 (Schlechtweg et al., 2020) makes it now possible to
compare LSCD models in a variety of scenarios. The task shows a clear
dominance of type-based embeddings, although these are strongly influenced by
the size of the training corpora. To mitigate this problem, we propose
pre-training models on large corpora and refining them on diachronic target
corpora. We further improve the obtained embeddings with several post-
processing transformations which have been shown to have positive effects on
performance in semantic similarity and analogy tasks (Mu et al., 2017; Artetxe
et al., 2018b; Raunak et al., 2019) as well as term extraction (Hätty et al.,
2020). Extensive experiments are performed on the German and English LSCD
datasets from SemEval-2020 Task 1. According to our findings, pre-training is
advisable when the target corpora are small and should be done using
diachronic data. We further show that pre-training on large corpora strongly
interacts with vector dimensionality and propose a simple solution to avoid
drastic performance drops. Post-processing often yields further improvements.
However, it is hard to find a reliable parameter that performs well across the
board. Our experiments suggest that it is possible to use simple pre- and
post-processing techniques to improve the state-of-the-art in LSCD.
## 2 Related Work
As evident in Schlechtweg et al. (2020) the field of LSCD is currently
dominated by Vector Space Models (VSMs), which can be divided into type-based
(static) (Turney and Pantel, 2010) and token-based (contextualized) (Schütze,
1998) models. Prominent type-based models include low-dimensional embeddings
such as Global Vectors (GloVe, Pennington et al., 2014) and Skip-Gram with
Negative Sampling (SGNS, Mikolov et al., 2013a, b). However, as these models
come with the deficiency that they aggregate all senses of a word into a
single representation, token-based embeddings have been proposed (Peters et
al., 2018; Devlin et al., 2019). According to Hu et al. (2019) these models
can ideally capture complex characteristics of word use, and how they vary
across linguistic contexts. The results of SemEval-2020 Task 1 (Schlechtweg et
al., 2020), however, show that contrary to this, the token-based embedding
models (Beck, 2020; Kutuzov and Giulianelli, 2020) are heavily outperformed by
the type-based ones (Pražák et al., 2020; Asgari et al., 2020). The SGNS model
was not only widely used, but also performed best among the participants in
the task. This result was recently reproduced in the DIACR-Ita shared task
(Basile et al., 2020; Laicher et al., 2020; Kaiser et al., 2020b). Its fast
implementation and combination possibilities with different alignment types
further solidify SGNS as the standard in LSCD (Schlechtweg et al., 2020,
2019a; Shoemark et al., 2019; Kutuzov et al., 2020). Hence, the embeddings
used in this work are SGNS-based.
Further increases in performance of type-based VSMs can be achieved by various
post-processing transformations. This has been shown for semantic similarity
and analogy tasks (Mu et al., 2017; Artetxe et al., 2018b; Raunak et al.,
2019) as well as term extraction (Hätty et al., 2020). It is still an open
question whether these transformations improve performance in the special
setting of LSCD where we typically have several corpora and vector spaces
which have to be transformed simultaneously (Schlechtweg et al., 2020). An
indication is given by Schlechtweg et al. (2019a) showing that for a simple
LSCD model mean centering leads to consistent performance improvements on two
German data sets. Whether this result is reproducible on further data sets,
more complex models and further post-processing techniques has not been
determined yet.
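As an illustration of the simplest of these transformations, mean centering subtracts the average vector of a space from every word vector. A minimal pure-Python sketch, using a toy three-word space, follows.

```python
def mean_center(vectors):
    """Subtract the dimension-wise mean vector from every vector in the space.

    `vectors` maps words to equal-length lists of floats. In the LSCD setting
    the transformation is applied to each time-specific space."""
    dim = len(next(iter(vectors.values())))
    n = len(vectors)
    mean = [sum(v[d] for v in vectors.values()) / n for d in range(dim)]
    return {w: [x - m for x, m in zip(v, mean)] for w, v in vectors.items()}

# Toy 2-dimensional space; real SGNS spaces have hundreds of dimensions.
space = {"bank": [1.0, 2.0], "river": [3.0, 0.0], "money": [2.0, 4.0]}
centered = mean_center(space)
```

After the transformation every dimension sums to zero across the vocabulary, which is exactly the property that makes cosine comparisons less sensitive to a global offset of the space.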
Post-processing methods operate on information already contained in a VSM,
rather than adding additional information. Further semantic information can be
introduced by pre-training vectors on a larger unspecific collection of text
(Kutuzov and Kuzmenko, 2016) or by training a seperate matrix on such text and
concatenating the two VSMs (Limsopatham and Collier, 2016). This is especially
helpful for cases where only smaller specialized corpora are given. Combining
the information from two models is also found in Kim et al. (2014), here it is
used for alignment proposes. We operate similarly to Kim et al. but with the
motivation of Limsopatham and Collier and Kutuzov and Kuzmenko, as we aim to
enrich a VSM prior to the training process.
## 3 Data and Tasks
| | GERt1 | GERt2 | ENGt1 | ENGt2 | SdeWaC | PukWaC |
|---|---|---|---|---|---|---|
| group | diachronic | diachronic | diachronic | diachronic | modern | modern |
| source | DTA | BZ+ND | CCOHA | CCOHA | web | web |
| time period | 1800–1899 | 1946–1990 | 1810–1860 | 1960–2010 | $\sim$2005 | $\sim$2005 |
| # of tokens | 66.9M | 67.2M | 6.48M | 6.62M | 750M | 1.92B |
| # of types | 51.1K | 59.1K | 25.9K | 37.5K | 44.6K | 51.9K |
| min word freq. | 39 | 39 | 4 | 4 | 450 | 750 |
Table 1: Corpus statistics. GERt1 and GERt2 are sampled from DTA Deutsches
Textarchiv (2017), BZ Berliner Zeitung (2018) and ND Neues Deutschland (2018).
DTA contains texts from different genres, BZ and ND are collections of
newspaper articles. Clean Corpus of Historical American English (CCOHA)
(Davies, 2012; Alatrash et al., 2020) is a genre-balanced collection of texts
from a wide variety of time periods and the basis for ENGt1 and ENGt2.
We train SGNS-based VSMs on various corpora and use a word similarity task and
an LSCD task for evaluation. The two tasks share a common aspect: the vector
representations of two words need to be compared with some metric (e.g. cosine
similarity), and word pairs need to be ranked according to that metric. In the
word similarity task, we have the vectors of two different words in the same
vector space $(w_{i},w_{j})$, while for LSCD we have the vectors of the same
word but from different vector spaces representing different time periods
$(w_{i}^{t1},w_{i}^{t2})$.
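This shared metric aspect can be made concrete with a small sketch (toy vectors; `cosine_similarity` is our own helper, not part of the paper's code):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Word similarity task: two different words in the same vector space.
w_i, w_j = np.array([1.0, 0.0]), np.array([1.0, 1.0])
sim = cosine_similarity(w_i, w_j)

# LSCD task: the same word in two (aligned) vector spaces; the
# cosine *distance* serves as the change score.
w_t1, w_t2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
change = 1.0 - cosine_similarity(w_t1, w_t2)
```

In both settings the resulting scores are then ranked and compared against gold rankings with Spearman's rank correlation.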
##### Modern Data.
We use two large modern English and German corpora, PukWaC (Baroni et al.,
2009) and SdeWaC (Faaß and Eckart, 2013) to validate the post-processing
methods on the word similarity task and to create pre-trained embeddings for
the LSCD task. PukWaC and SdeWaC are web-crawled corpora from the .uk and .de domains, respectively, resulting in fairly large corpora of roughly 2B and 750M tokens (see Table 1). We evaluate vector representations created on the two
corpora on a standard dataset of human similarity judgments, WordSim353
(Finkelstein et al., 2002), by measuring Spearman’s rank correlation
coefficient of the cosine similarity of vectors for target word pairs with
human judgments.
##### Diachronic Data.
We utilize the English and German datasets provided by SemEval-2020 Task 1
Subtask 2 (Schlechtweg et al., 2020). Each dataset contains two target corpora
from different time periods, $t_{1}$ and $t_{2}$, as well as a list of target
words. The corpora originate mostly from newspaper articles and books. Their biggest difference from PukWaC and SdeWaC is their approximately 10 to 100 times smaller size in terms of token counts (see Table 1). The task is to rank
the list of target words according to their word sense divergence, gradually
from 0 (no change) to 1 (total change). The rank predictions are compared
against gold data which is based on human judgments. Once again Spearman’s
rank correlation coefficient is used to measure performance on the task.
## 4 Models
Following the popular approach taken for type-based vector space models in
LSCD, we combine three sub-systems: (i) creating semantic word
representations, (ii) aligning them across corpora, and (iii) measuring
differences between the aligned representations (Schlechtweg et al., 2019a;
Dubossarsky et al., 2019; Shoemark et al., 2019). Alignment is needed as
columns from different vector spaces may not correspond to the same coordinate
axes, due to the stochastic nature of many low-dimensional word
representations (Hamilton et al., 2016). Additionally, we aim to refine sub-
system (i) by adding pre-trained semantic word representations and using post-
processing methods to improve the quality of the created semantic word
representations. (Find a comprehensive overview of type-based LSCD models, including semantic representations, alignments and measures, in Schlechtweg et al. (2019a).)
We use SGNS (Mikolov et al., 2013a, b) to create type-based word
representations in combination with three different alignment methods,
Orthogonal Procrustes (OP), Vector initialization (VI), and Word Injection
(WI). The three alignment methods combined with SGNS have been proven to be
state-of-the-art, even when competing against token-based embeddings
(Schlechtweg et al., 2020; Kaiser et al., 2020a; Basile et al., 2020). Cosine
Distance (CD) is used to measure differences between word vectors. (We provide our code at https://github.com/Garrafao/LSCDetection.)
### 4.1 Alignment
##### Vector initialization (VI).
In VI we first train SGNS on one corpus and then use the learned word and
context vectors to initialize the model for training on the second corpus (Kim
et al., 2014; Kaiser et al., 2020a). The motivation is that the vector of a
word with similar contexts across both corpora will not deviate much from its
initialized value. On the other hand, vectors of words with different contexts across both corpora will be updated to accommodate the new semantic properties. Words which only appear in the second corpus are initialized with random vectors.
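A minimal numpy sketch of the initialization step (the actual pipeline initializes a full SGNS model, including context vectors; `init_from_pretrained` is our own illustrative helper and only covers the word matrix):

```python
import numpy as np

def init_from_pretrained(vocab_b, pretrained, dim, rng=None):
    """Build the initial word matrix for training on corpus B: words seen
    during pre-training keep their learned vector, unseen words get a
    small random vector (word2vec-style), as in VI alignment."""
    if rng is None:
        rng = np.random.default_rng(0)
    init = np.empty((len(vocab_b), dim))
    for row, word in enumerate(vocab_b):
        if word in pretrained:
            init[row] = pretrained[word]  # carry over the learned vector
        else:
            init[row] = rng.uniform(-0.5, 0.5, dim) / dim  # random init
    return init
```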
##### Orthogonal Procrustes (OP).
SGNS is trained on each corpus separately, resulting in word matrices $A$ and
$B$. To align them, we follow Hamilton et al. (2016) and calculate an
orthogonally-constrained matrix $W^{*}$:
$W^{*}=\underset{W\in O(d)}{\arg\min}\left\lVert BW-A\right\rVert_{F}.$
Prior to this alignment step both matrices are length-normalized and mean-
centered (Artetxe et al., 2017; Schlechtweg et al., 2019a).
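The constrained minimization has the standard closed-form SVD solution. A sketch of the full OP step, assuming the rows of $A$ and $B$ already correspond to the same words (`orthogonal_procrustes` is our own name for the routine):

```python
import numpy as np

def orthogonal_procrustes(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Return W* = argmin_{W in O(d)} ||BW - A||_F, after length-normalizing
    and mean-centering both matrices as described in the text."""
    def preprocess(X):
        X = X / np.linalg.norm(X, axis=1, keepdims=True)  # length-normalize rows
        return X - X.mean(axis=0)                          # mean-center columns
    A, B = preprocess(A), preprocess(B)
    # Closed-form solution: W* = U V^T, where U S V^T = SVD(B^T A).
    U, _, Vt = np.linalg.svd(B.T @ A)
    return U @ Vt
```

Applying `B @ W` then maps the second space onto the first, after which cosine distance between the two vectors of a target word can be measured.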
##### Word Injection (WI).
The sentences of both corpora are shuffled into one joint corpus, but all
occurrences of target words are substituted by the target word concatenated
with a tag indicating the corpus it originated from (Ferrari et al., 2017;
Schlechtweg et al., 2019a; Dubossarsky et al., 2019). This leads to the
creation of two vectors for each target word in one vector space, while non-
target words receive only one vector encoding information from both corpora.
##### No Alignment (NO).
Comparing two vector spaces without aligning them results in poor performance
on LSCD (Schlechtweg et al., 2019a). As VI shows, initializing the model with
weights from the previous run, results in aligned vector spaces. We expand on
this concept by initializing two models on the same pre-trained weights
assuming that the resulting vector spaces are aligned to one another. The
difference to VI is that instead of initializing model $B$ with the weights
from model $A$, the weights from a third pre-trained model $C$ are used to
initialize both models $A$ and $B$.
Figure 1: Performance on modern data (wordsim353) left: SOT for
$\alpha\in[-1,1]$, right: MC+PCR across different amounts of PCs removed. Zero
PCs removed indicates only mean centering. Baselines are performances without
PP.
### 4.2 Pre-training
The corpora used in the context of LSCD are often small, as they are
restricted by the length of time periods or availability of historical data.
For example, the English corpora of SemEval-2020 Task 1 have only $6.6$M tokens each, compared to the $1.9$B tokens of PukWaC. This reduced corpus size limits the amount
of semantic information encoded into VSMs trained on the corpus. Pre-training
addresses this problem by first training SGNS on a large, possibly external
corpus, and then using these vectors to initialize the model for training on
the smaller diachronic target corpora. The idea is that the model first learns
very broad and general semantic properties followed by the training on the
target corpora, where corpus and time specific details are picked up, i.e., a
form of refinement. This procedure is applicable to all alignment types.
We use PukWaC and SdeWaC for pre-training, later referenced as modern.
However, pre-training on modern corpora is only advisable if the assumption
can be made that the meanings of words in the pre-training corpus roughly
correspond to the meanings of words in the target corpora. It is unclear to
which extent this assumption holds for our data. Hence, we also combine the
two target corpora into a bigger corpus, referenced as diachron, which is then
used for pre-training.
### 4.3 Post-processing (PP)
##### Similarity Order Transformation (SOT).
In 2nd order similarity, the similarity of two words is assessed in terms of
how similar they are to a third word (Schütze and Pedersen, 1993; Artetxe et
al., 2018b; Schlechtweg et al., 2019b). This can analogously be done for
higher (3rd, 4th, etc.) orders. According to Artetxe et al. (2018b) these
orders capture different aspects of language. Artetxe et al. propose a linear
transformation deriving higher or lower orders of similarity from a given
matrix $X$. For this, the product with the transpose matrix is split into its
eigendecomposition $X^{T}X=Q\lambda Q^{T}$, so that $\lambda$ is a positive
diagonal matrix whose entries are the eigenvalues of $X^{T}X$ and $Q$ is an
orthogonal matrix with their respective eigenvectors as columns. The linear
transformation matrix is then defined as $W_{\alpha}=Q\lambda^{\alpha}$, where
$\alpha$ is the parameter that adjusts the desired similarity order. Applying
this to the original embeddings $X$ yields the transformed embeddings
$X^{\prime}=XW_{\alpha}$.
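The transformation can be sketched directly from these definitions (`sot` is our own name; the clipping of eigenvalues is our own numerical safeguard, not part of the original method):

```python
import numpy as np

def sot(X: np.ndarray, alpha: float) -> np.ndarray:
    """Similarity order transformation (Artetxe et al., 2018b):
    eigendecompose X^T X = Q diag(lam) Q^T and apply W_alpha = Q lam^alpha."""
    lam, Q = np.linalg.eigh(X.T @ X)   # eigh, since X^T X is symmetric
    lam = np.clip(lam, 1e-12, None)    # guard against tiny negative eigenvalues
    W_alpha = Q * lam ** alpha         # equals Q @ diag(lam ** alpha)
    return X @ W_alpha
```

For $\alpha=0$, $W_{\alpha}$ reduces to the orthogonal matrix $Q$, so pairwise dot products (and hence similarities) are preserved; negative and positive $\alpha$ move toward lower and higher similarity orders, respectively.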
##### Mean Centering (MC).
The centroid of a matrix is the average over all vectors in the matrix:
$\vec{\bar{c}}=\frac{1}{|V|}\sum_{i=1}^{|V|}\vec{w_{i}}$. MC refers to subtracting
$\vec{\bar{c}}$ from each $\vec{w_{i}}$ in the matrix. MC alters all
dimensions so that the mean of all columns is zero. Artetxe et al. motivate MC intuitively: it moves vectors that are similar merely by chance further apart. Mu and Viswanath (2018) consider mean centering as an operation
making vectors “more isotropic”, i.e., more uniformly distributed across the
vector space. Mu and Viswanath indicate that isotropy of word vectors is
positively correlated to performance.
##### Principal Component Removal (PCR).
Given an $n$-dimensional matrix $X$, Principal Component Analysis (PCA;
Pearson, 1901) returns $n$ orthogonal vectors, each describing a best-fitting
direction for the data while being orthogonal to all preceding vectors. Thus,
the first PC captures the greatest variance in the data, the second PC the
second greatest, and the $n$th PC the $n$th greatest. Mu and
Viswanath (2018) use PCA to compute the top $m$ PCs from a mean centered word
embedding $\bar{M}$: $p_{1},...,p_{m}=\text{PCA}(\bar{M})$. Subsequently, each mean-centered word vector $\tilde{v}\in\bar{M}$ is projected onto the subspace spanned by these PCs, and the projection is subtracted from it: $v^{\prime}=\tilde{v}-\sum_{i=1}^{m}(p_{i}^{\intercal}\tilde{v})p_{i}$, which nullifies the top $m$ PCs in $M$. This is similar to the approach of
Bullinaria and Levy (2012). Mu and Viswanath combine both MC and PCR into one
PP transformation (MC+PCR).
As for MC, Mu and Viswanath’s main motivation for PCR is to make vectors more
isotropic. They also demonstrate empirically that the top PCs encode word
frequency and offer the removal of this noise from the matrix as an
alternative explanation for observed performance improvements.
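The combined MC+PCR transformation can be sketched in a few lines (`mc_pcr` is our own name; we obtain the top PCs via SVD of the centered matrix, which is equivalent to PCA here):

```python
import numpy as np

def mc_pcr(M: np.ndarray, m: int) -> np.ndarray:
    """Mean centering followed by removal of the top-m principal
    components (Mu and Viswanath, 2018). m=0 performs only MC."""
    M_bar = M - M.mean(axis=0)  # MC: subtract the centroid
    if m == 0:
        return M_bar
    # Top-m PCs of the centered matrix via SVD (rows of Vt are PCs).
    _, _, Vt = np.linalg.svd(M_bar, full_matrices=False)
    P = Vt[:m]                  # shape (m, d)
    # Subtract each vector's projection onto the PC subspace.
    return M_bar - (M_bar @ P.T) @ P
```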
##### Stacking.
VI and OP alignment result in two matrices, and hence, a proper way for
applying PP to both of them is needed. The naïve way of simply post-processing
both matrices separately (SEP) may violate the assumption that they are
represented in the same space. Therefore, in a second approach, we apply PP to
both matrices simultaneously by stacking them vertically beforehand (STA).
Preliminary experiments showed that the naïve way of PP (SEP) led to a severe decrease in performance for SOT (but not for MC+PCR). Hence, when SOT is applied to the two matrices separately, it is followed by an orthogonal post-alignment (SEP+PA).
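The STA variant amounts to a simple wrapper around any post-processing function (`post_process_stacked` is our own illustrative helper):

```python
import numpy as np

def post_process_stacked(A, B, pp):
    """Apply a post-processing function pp to two aligned matrices
    simultaneously by stacking them vertically (STA), then split again."""
    stacked = np.vstack([A, B])
    transformed = pp(stacked)
    return transformed[: len(A)], transformed[len(A):]
```

Because the transformation is computed on the joint matrix, both halves remain represented in the same space afterwards.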
## 5 Experiments
For the most part, we chose common model hyper-parameter settings in order to
keep our results comparable to previous research (Hamilton et al., 2016;
Schlechtweg et al., 2019a; Kaiser et al., 2020a). We fine-tune for different
alignment methods and datasets by varying dimensionality $d$, window size $w$
and number of training epochs $e$. (For a detailed overview of SGNS parameters, see Appendix B.)
### 5.1 Validation
We validate the results reported by Artetxe et al. (2018a) and Mu and
Viswanath (2018) on PukWaC and SdeWaC. The performance peaks for negative
$\alpha$-values around -0.2 as well as the slight performance increase over
the baseline for SOT are in line with the findings of Artetxe et al. (see
Figure 1(a)). For MC+PCR we observe the greatest performance improvement when
the number of removed PCs is around $m=\frac{d}{100}$ (see Figure 1(b)). This
fits the rule of thumb as stated by Mu and Viswanath.
### 5.2 LSCD
Figure 2: Left: max scores from Table 2, middle and right: Performance
(Spearman’s rho) of NO alignment method on LSCD task across different
dimensionalities and pre-training corpora. VI without pre-training as
comparable baseline.
#### 5.2.1 Pre-training
We tune SGNS models for each alignment method with and without pre-training
(baseline), see Table 2. Recall from Section 4.2 that we use the corpora
modern and diachron for pre-training. Table 2 lists the maximum and mean
performances of the baseline and pre-training with different alignment
methods, as well as the standard deviation (for a visual representation of the
max values see Figure 2). The mean is calculated across different $d$, $e$ and
$w$, giving the expected performance in a realistic scenario where fine-tuning
hyper-parameters is not possible (Schlechtweg et al., 2020; Basile et al.,
2020). For German, the baseline max and mean scores could not be significantly
improved by pre-training across alignments. For English, pre-training on
diachron results in better max and mean scores for OP and WI, with max
improvements up to .10. Also, the overall best result is achieved with OP and
pre-training on diachron. Using modern does not improve the maximum while reducing the mean. The overall lower performance, as well as the observed performance improvements compared to German, may be attributed to the roughly 10 times smaller target corpora. That is, pre-training is helpful on the smaller target corpora.
|  | align. | baseline |  | diachron |  | modern |  |
| --- | --- | --- | --- | --- | --- | --- | --- |
|  |  | max | mean/std | max | mean/std | max | mean/std |
| GER | VI | .77 | .72 / .063∗ | .74 | .61 / .067∗ | .77 | .70 / .060∗ |
|  | OP | .72 | .69 / .022 | .68 | .59 / .049∗ | .68 | .61 / .051∗ |
|  | WI | .76 | .70 / .033 | .74 | .69 / .037∗ | .71 | .66 / .043∗ |
|  | NO | - | - / - | .70 | .58 / .081∗ | .67 | .60 / .050∗ |
| ENG | VI | .42 | .30 / .067 | .41 | .28 / .073 | .38 | .26 / .060 |
|  | OP | .34 | .28 / .041 | .44 | .31 / .071 | .35 | .27 / .047 |
|  | WI | .35 | .28 / .041 | .39 | .29 / .053 | .35 | .24 / .055 |
|  | NO | - | - / - | .40 | .34 / .080 | .32 | .24 / .060 |
Table 2: Max and mean performance on the LSCD task (Spearman’s rho) for all alignment methods. Note: mean values marked with (∗) ignore results utilizing $d<100$ due to consistent performance drops at higher $d$.
#### 5.2.2 Post-processing
For every combination of alignment and pre-training method, the matrix with
the highest performance across parameters is chosen as the baseline. SOT and
MC+PCR are applied individually to these matrices within a wide parameter
range (see Appendix B) for both stacking methods (STA and SEP/SEP+PA). Table 3
presents the mean optimal performance gains after PP, which is calculated by
extracting the best performance after PP for every matrix, subtracting the
baseline values and averaging the values per language. Averaging the
respective parameter values yields the mean argmax. Figure 3(a) and 3(d) show
the highest performances for every baseline matrix after SOT and MC+PCR
respectively.
##### SOT.
As we see in Figure 3(a), SEP+PA and STA perform similarly. We find small mean
performance gains across the board (.013 for GER+STA, .008 for GER+SEP+PA,
.013 for ENG+STA), except for ENG+SEP+PA where a minuscule decrease (-.005)
can be seen. Overall, STA outperforms SEP+PA slightly. We now further examine
the effect of SOT+STA on individual matrices. In general, the data can
approximately be described as a downward opening parabola (see Figure 3(b)),
with different peaks for both languages and slight differences between
alignment methods. Averaging the argmax for $\alpha$ shows us where these
peaks are. The calculations yield a mean optimal $\alpha$ of 0 for GER+STA,
and -0.2 for ENG+STA. For GER the peak performance always lies in the interval
$[-0.2,0.3]$. This changes to $[-0.4,0.1]$ for ENG, except for one outlier,
where the peak is at -0.8. Moving $\alpha$ away from this parameter range
results in severe performance decreases. This behaviour can also be seen on
the modern corpora (see Figure 1) and is in line with the findings of Artetxe
et al. (2018b). In order to predict a high-performing parameter, independent
from the underlying matrix, we calculate mean performance gains for fixed
parameter values. The values are chosen according to the above-described peak intervals for the respective languages. However, on average, using a fixed parameter results in slight performance losses regardless of the $\alpha$-value, and hence, finding a high-performing fixed parameter value was not possible. We observe similar findings for individual alignment methods and
varying dimensionality. However, GER+VI alignment represents an interesting
exception: With high dimensionality ($>$ 300) base performance drops heavily
(Kaiser et al., 2020a), and is then “repaired” by the PP, bringing it close to
the baseline of the best performing dimension (see Figure 3(c)).
|  | PP + STA/SEP | argmax (mean/std) | gain (mean/std) |
| --- | --- | --- | --- |
| GER | SOT + STA | 0.0 / 0.2 | .013 / .013 |
|  | SOT + SEP+PA | 0.1 / 0.3 | .008 / .015 |
|  | MC+PCR + STA | 1.2 / 1.6 | .004 / .042 |
|  | MC+PCR + SEP | 0.7 / 1.1 | .004 / .043 |
| ENG | SOT + STA | -0.2 / 0.2 | .013 / .041 |
|  | SOT + SEP+PA | -0.2 / 0.3 | -.005 / .043 |
|  | MC+PCR + STA | 3.0 / 3.8 | .049 / .068 |
|  | MC+PCR + SEP | 6.2 / 7.1 | .058 / .077 |
Table 3: Mean of best-performing parameters and mean performance gain compared to baseline on the LSCD task. Parameter ranges: SOT $\alpha\in[-1,1]$, MC+PCR $m\in[0,25]$.
Figure 3: Top: SOT (3(a), 3(b), 3(c)); bottom: MC+PCR (3(d), 3(e), 3(f)). Performance over high-scores (3(a), 3(d)). Representative results after SOT+STA over the German and English datasets (3(b)). Representative plot of the “repair” effect after SOT+STA for GER+VI (3(c)). Representative results after MC+PCR over the German and English datasets (3(e), 3(f)). Note for 3(a) and 3(d): where data points overlap, only the lighter colour is visible; the dashed line between baseline data points is only a visual aid.
##### MC+PCR.
As we see in Figure 3(d), MC+PCR yields small improvements over the baselines
for German. This is also reflected in the mean gain in Table 3. We find that
no single value for $m$ yields consistent improvements. However, for $m$=0 (only MC), MC+PCR consistently improves the baseline slightly (see Figure 3(e)), while for higher $m$ the performance decreases consistently. For
English we see greater improvements, see Figure 3(f), 3(d) and mean gain in
Table 3. A range of parameters shows improvements with $m$=3 yielding the
highest (.0175). This can also be seen in Figure 3(f) where several parameters
yield improvements. We conclude that predicting a parameter for likely
performance improvement is possible for English, but not for German. However,
if this PP should be used, we recommend using a parameter space of $m\in$ [0,
5], as this parameter space is most likely to produce improvements on English,
while not harming performance too much on German. This also roughly
corresponds to the recommendation of Mu and Viswanath (2018), as they predict
that the parameter should be chosen around $\frac{d}{100}$. Furthermore, we suggest using STA, as it shows better average performance than SEP for the aforementioned parameter space. We see that the effects of SOT as well
as MC+PCR are highly dependent on the underlying matrix.
## 6 Analysis
##### Test Statistics.
The effects of pre-training and PP methods on word embeddings are not limited
to performance differences in word similarity or LSCD tasks. We use two test
statistics to further analyse vector spaces: (i) isotropy (Mu and Viswanath, 2018), i.e., uniformity of the vector distribution, and (ii) frequency bias (Dubossarsky et al., 2017; Kaiser et al., 2020a), i.e., correlation between cosine distance and frequency. (We compute the correlation based on frequency in the second target corpus; results were similar for the first.)
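Both statistics can be sketched compactly. The isotropy measure follows Mu and Viswanath's approximation of the partition function over the eigenvectors of $V^{T}V$; the frequency-bias sketch uses Pearson correlation with log frequency as a stand-in (the exact correlation variant and frequency scaling are our assumptions, not spelled out here):

```python
import numpy as np

def isotropy(V: np.ndarray) -> float:
    """Isotropy (Mu and Viswanath, 2018): ratio of the minimum to the
    maximum of Z(c) = sum_w exp(c^T v_w), approximated over the
    eigenvectors of V^T V. A value of 1 means perfectly isotropic."""
    _, Q = np.linalg.eigh(V.T @ V)   # columns of Q: candidate directions c
    Z = np.exp(V @ Q).sum(axis=0)    # Z(c) for every eigenvector c
    return float(Z.min() / Z.max())

def frequency_bias(distances, frequencies):
    """Correlation between cosine distances and log frequencies;
    a rough sketch of the second test statistic."""
    return float(np.corrcoef(distances, np.log(frequencies))[0, 1])
```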
### 6.1 Pre-training
Figure 4: Test-statistics for result analysis, left: Performance after pre-
training on different corpora, middle: Correlation between CD and frequency,
right: average vector length of the weights created on different-sized pre-
training corpora.
On the German dataset it is noticeable that pre-training on diachron often results in a slight drop in performance at higher $d$. This behaviour is even more pronounced and consistent on the English dataset when pre-training on modern; see Figure 2. (Although not depicted, the other alignment techniques in combination with pre-training show very similar behaviour to NO.)
Such a drop in performance after initializing on pre-trained vectors has
already been observed by Kaiser et al. (2020a). The authors relate the drop to
an increased frequency bias and reduce it by increasing $e$/$w$. It is
noteworthy that the drop is much more pronounced for pre-training on modern
compared to diachron. This can be attributed to a difference in word vector
lengths of the SGNS model used for initialization. We make the following
observation: average word vector length increases with the amount of training
word pairs. The difference more training data makes is amplified at higher
$d$, see Figure 4(c). By length-normalizing the word vectors between the
initialization and training step, the drop in performance can be completely
circumvented. Additionally, the frequency bias is reduced to 0, see Figure
4(b).
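The length-normalization fix amounts to a single step between loading the pre-trained weights and initializing the model (`length_normalize` is our own illustrative helper; the zero-vector guard is our addition):

```python
import numpy as np

def length_normalize(W: np.ndarray) -> np.ndarray:
    """Normalize every word vector to unit length; applied to the
    pre-trained weights before they are used for initialization."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / np.where(norms == 0, 1.0, norms)  # leave zero vectors as-is
```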
For English, we expected a higher performance gain from pre-training when
using modern because of the small data size. However, we observe no
improvements over the baseline. Using length-normalized word vectors for
initialization does result in slightly improved max and mean values for modern
but these are still lower than max and mean values of diachron.
### 6.2 SOT
Figure 5: Representative plot for the isotropy after SOT+STA (5(a)).
Performance and frequency bias after SOT+STA for GER+VI+BIG (5(b)).
SOT has a clear effect on isotropy, which has not been described in previous
research. Isotropy shows the same behaviour across both languages and all
models, and is best described as a vertically mirrored S-curve (see Figure
5(a)). Decreasing $\alpha$ increases isotropy close to 1, while increasing
$\alpha$ decreases isotropy close to 0. The average correlation (Pearson)
between $\alpha$ and isotropy over all matrices is -.89 for both languages.
However, the performance correlates only slightly with isotropy (-.25, .35).
Moreover, $\alpha$ correlates only weakly with frequency bias (.19, -.12; both with high variance). In order to explain the above-described “repair”
effect we take a closer look at the three GER+VI models. Applying SOT brings
large performance increases, as stated in Section 5.2.2. For all three models
a considerably higher baseline frequency bias for $d$=500 is visible. SOT
strongly reduces this bias for modern, and results in a huge performance gain
(see Figure 5(b)).
### 6.3 MC+PCR
As Mu and Viswanath (2018)’s main motivation behind MC+PCR is to increase the isotropy of a vector space, as well as to remove word-frequency noise through PCR, we examine how isotropy and frequency bias develop with $m$. While PCR
has the predicted effect on frequency bias (GER: -.94, ENG: -0.6), PCR does in
fact not increase isotropy, contrary to Mu and Viswanath’s motivation of
“rounding towards isotropy”, but has a consistent reducing effect (GER -.75,
ENG: -.7). Thus, we believe that rounding towards isotropy is not suitable for
explaining performance. Furthermore, we observe that MC not only exhibits
effects on isotropy, but also acts on frequency bias, thus Mu and Viswanath’s
PCR motivation can be extended to MC.
## 7 Conclusion
We tested the effects of pre-training and post-processing on a variety of LSCD
models. We performed extensive experiments on a German and an English LSCD
dataset. According to our findings, pre-training is advisable when the target
corpora are small and should be done using diachronic data. The size of the
pre-training corpus is crucial, as a large number of training pairs leads to
performance drops, which are probably caused by their effect on vector length.
Length-normalization may be used on pre-trained vectors to counteract this
effect.
Further performance improvements may be reached by post-processing. While
SOT+STA yielded moderate improvements for both languages, MC+PCR showed larger
improvements, but only on English. However, for neither were we able to find a
reliable parameter that performed well across the board. Instead, we found
that a well-performing parameter value is highly dependent on the underlying
matrix. Both post-processing methods affect isotropy and frequency bias.
The methods we tested are particularly helpful when tuning data is available,
as performance can be optimized and becomes more predictable. Hence, we
recommend obtaining a small annotated sample of target words for the target
corpora and to tune pre-training, model and post-processing parameters on the
sample before performing predictions for semantic changes on unseen data. With
the recent upsurge of digitized historical corpora and diachronic semantic
annotation efforts (Tahmasebi and Risse, 2017; Schlechtweg et al., 2018, 2020;
Basile et al., 2020; Rodina and Kutuzov, 2020) this may often be a likely and
feasible scenario.
## Acknowledgments
Dominik Schlechtweg was supported by the Konrad Adenauer Foundation and the
CRETA center funded by the German Ministry for Education and Research (BMBF)
during the conduct of this study. We thank the reviewers for their insightful
feedback.
## References
* Alatrash et al. (2020) Reem Alatrash, Dominik Schlechtweg, Jonas Kuhn, and Sabine Schulte im Walde. 2020\. CCOHA: Clean Corpus of Historical American English. In _Proceedings of the 12th International Conference on Language Resources and Evaluation_ , Marseille, France. European Language Resources Association.
* Artetxe et al. (2016) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , pages 2289–2294, Austin, Texas. Association for Computational Linguistics.
* Artetxe et al. (2017) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics_ , pages 451–462. Association for Computational Linguistics.
* Artetxe et al. (2018a) Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence_ , pages 5012–5019.
* Artetxe et al. (2018b) Mikel Artetxe, Gorka Labaka, Iñigo Lopez-Gazpio, and Eneko Agirre. 2018b. Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation. In _Proceedings of the 22nd Conference on Computational Natural Language Learning_ , pages 282–291, Brussels, Belgium. Association for Computational Linguistics.
* Asgari et al. (2020) Ehsaneddin Asgari, Christoph Ringlstetter, and Hinrich Schütze. 2020. EmbLexChange at SemEval-2020 Task 1: Unsupervised Embedding-based Detection of Lexical Semantic Changes. In _Proceedings of the 14th International Workshop on Semantic Evaluation_ , Barcelona, Spain. Association for Computational Linguistics.
* Baroni et al. (2009) Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The wacky wide web: A collection of very large linguistically processed web-crawled corpora. _Language Resources and Evaluation_ , 43:209–226.
* Basile et al. (2020) Pierpaolo Basile, Annalina Caputo, Tommaso Caselli, Pierluigi Cassotti, and Rossella Varvara. 2020. Overview of the EVALITA 2020 Diachronic Lexical Semantics (DIACR-Ita) Task. In _Proceedings of the 7th evaluation campaign of Natural Language Processing and Speech tools for Italian (EVALITA 2020)_ , Online. CEUR.org.
* Beck (2020) Christin Beck. 2020. DiaSense at SemEval-2020 Task 1: Modeling sense change via pre-trained BERT embeddings. In _Proceedings of the 14th International Workshop on Semantic Evaluation_ , Barcelona, Spain. Association for Computational Linguistics.
* Berliner Zeitung (2018) Berliner Zeitung. Diachronic newspaper corpus published by Staatsbibliothek zu Berlin [online]. 2018.
* Bullinaria and Levy (2012) John Bullinaria and Joseph Levy. 2012. Extracting semantic representations from word co-occurrence statistics: Stop-lists, stemming, and svd. _Behavior research methods_ , 44:890–907.
* Davies (2012) Mark Davies. 2012. Expanding Horizons in Historical Linguistics with the 400-Million Word Corpus of Historical American English. _Corpora_ , 7(2):121–157.
* Deutsches Textarchiv (2017) Deutsches Textarchiv. Grundlage für ein Referenzkorpus der neuhochdeutschen Sprache. Herausgegeben von der Berlin-Brandenburgischen Akademie der Wissenschaften [online]. 2017.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Dubossarsky et al. (2019) Haim Dubossarsky, Simon Hengchen, Nina Tahmasebi, and Dominik Schlechtweg. 2019\. Time-Out: Temporal referencing for robust modeling of lexical semantic change. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 457–470, Florence, Italy. Association for Computational Linguistics.
* Dubossarsky et al. (2017) Haim Dubossarsky, Daphna Weinshall, and Eitan Grossman. 2017. Outta control: Laws of semantic change and inherent biases in word representation models. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 1147–1156, Copenhagen, Denmark.
* Faaß and Eckart (2013) Gertrud Faaß and Kerstin Eckart. 2013. SdeWaC – A corpus of parsable sentences from the web. In Iryna Gurevych, Chris Biemann, and Torsten Zesch, editors, _Language Processing and Knowledge in the Web_ , volume 8105 of _Lecture Notes in Computer Science_ , pages 61–68. Springer Berlin Heidelberg.
* Ferrari et al. (2017) Alessio Ferrari, Beatrice Donati, and Stefania Gnesi. 2017. Detecting domain-specific ambiguities: An NLP approach based on wikipedia crawling and word embeddings. In _Proceedings of the 2017 IEEE 25th International Requirements Engineering Conference Workshops_ , pages 393–399.
* Finkelstein et al. (2002) Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: The concept revisited. _ACM Transactions on Information Systems_ , 20:116–131.
* Hamilton et al. (2016) William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic word embeddings reveal statistical laws of semantic change. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics_ , pages 1489–1501, Berlin, Germany. Association for Computational Linguistics.
* Hätty et al. (2020) Anna Hätty, Dominik Schlechtweg, Michael Dorna, and Sabine Schulte im Walde. 2020. Predicting degrees of technicality in automatic terminology extraction. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , Seattle, Washington. Association for Computational Linguistics.
* Hengchen et al. (2021) Simon Hengchen, Nina Tahmasebi, Dominik Schlechtweg, and Haim Dubossarsky. 2021. Challenges for computational lexical semantic change. In Nina Tahmasebi, Lars Borin, Adam Jatowt, Yang Xu, and Simon Hengchen, editors, _Computational Approaches to Semantic Change_ , volume Language Variation, chapter 11. Language Science Press, Berlin.
* Hu et al. (2019) Renfen Hu, Shen Li, and Shichen Liang. 2019. Diachronic sense modeling with deep contextualized word embeddings: An ecological view. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3899–3908, Florence, Italy. Association for Computational Linguistics.
* Kaiser et al. (2020a) Jens Kaiser, Dominik Schlechtweg, Sean Papay, and Sabine Schulte im Walde. 2020a. IMS at SemEval-2020 Task 1: How low can you go? Dimensionality in Lexical Semantic Change Detection. In _Proceedings of the 14th International Workshop on Semantic Evaluation_ , Barcelona, Spain. Association for Computational Linguistics.
* Kaiser et al. (2020b) Jens Kaiser, Dominik Schlechtweg, and Sabine Schulte im Walde. 2020b. OP-IMS @ DIACR-Ita: Back to the Roots: SGNS+OP+CD still rocks Semantic Change Detection. In _Proceedings of the 7th evaluation campaign of Natural Language Processing and Speech tools for Italian (EVALITA 2020)_ , Online. CEUR.org.
* Kim et al. (2014) Yoon Kim, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. In _LTCSS@ACL_ , pages 61–65. Association for Computational Linguistics.
* Kutuzov et al. (2020) Andrey Kutuzov, Vadim Fomin, Vladislav Mikhailov, and Julia Rodina. 2020. Shiftry: Web service for diachronic analysis of russian news. In _Proceedings of the International Conference “Dialog”_.
* Kutuzov and Giulianelli (2020) Andrey Kutuzov and Mario Giulianelli. 2020. UiO-UvA at SemEval-2020 Task 1: Contextualised Embeddings for Lexical Semantic Change Detection. In _Proceedings of the 14th International Workshop on Semantic Evaluation_ , Barcelona, Spain. Association for Computational Linguistics.
* Kutuzov and Kuzmenko (2016) Andrey Kutuzov and Elizaveta Kuzmenko. 2016. Cross-lingual trends detection for named entities in news texts with dynamic neural embedding models. In _NewsIR@ECIR_ , pages 27–32.
* Kutuzov et al. (2018) Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: a survey. In _Proceedings of the 27th International Conference on Computational Linguistics_ , pages 1384–1397, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
* Laicher et al. (2020) Severin Laicher, Gioia Baldissin, Enrique Castaneda, Dominik Schlechtweg, and Sabine Schulte im Walde. 2020. CL-IMS @ DIACR-Ita: Volente o Nolente: BERT does not outperform SGNS on Semantic Change Detection. In _Proceedings of the 7th evaluation campaign of Natural Language Processing and Speech tools for Italian (EVALITA 2020)_ , Online. CEUR.org.
* Limsopatham and Collier (2016) Nut Limsopatham and Nigel Collier. 2016. Modelling the combination of generic and target domain embeddings in a convolutional neural network for sentence classification. In _Proceedings of the 15th Workshop on Biomedical Natural Language Processing_ , pages 136–140, Berlin, Germany. Association for Computational Linguistics.
* Mikolov et al. (2013a) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In _1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings_.
* Mikolov et al. (2013b) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In _Advances in Neural Information Processing Systems 26_ , pages 3111–3119, Lake Tahoe, Nevada, USA.
* Mu et al. (2017) Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2017. Representing sentences as low-rank subspaces. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 629–634, Vancouver, Canada. Association for Computational Linguistics.
* Mu and Viswanath (2018) Jiaqi Mu and Pramod Viswanath. 2018. All-but-the-top: Simple and effective postprocessing for word representations. In _International Conference on Learning Representations_.
* Neues Deutschland (2018) Neues Deutschland. Diachronic newspaper corpus published by Staatsbibliothek zu Berlin [online]. 2018.
* Pearson (1901) Karl Pearson. 1901. _On Lines and Planes of Closest Fit to Systems of Points in Space_. University College.
* Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1532–1543, Doha, Qatar. Association for Computational Linguistics.
* Peters et al. (2018) Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics.
* Pražák et al. (2020) Ondřej Pražák, Pavel Přibáň, Stephen Taylor, and Jakub Sido. 2020. UWB at SemEval-2020 Task 1: Lexical Semantic Change Detection. In _Proceedings of the 14th International Workshop on Semantic Evaluation_ , Barcelona, Spain. Association for Computational Linguistics.
* Raunak et al. (2019) Vikas Raunak, Vivek Gupta, and Florian Metze. 2019. Effective dimensionality reduction for word embeddings. In _Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)_ , pages 235–243, Florence, Italy. Association for Computational Linguistics.
* Rodina and Kutuzov (2020) Julia Rodina and Andrey Kutuzov. 2020. Rusemshift: a dataset of historical lexical semantic change in russian. In _Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020)_. Association for Computational Linguistics.
* Schlechtweg et al. (2019a) Dominik Schlechtweg, Anna Hätty, Marco del Tredici, and Sabine Schulte im Walde. 2019a. A Wind of Change: Detecting and evaluating lexical semantic change across times and domains. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 732–746, Florence, Italy. Association for Computational Linguistics.
* Schlechtweg et al. (2020) Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi. 2020. SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection. In _Proceedings of the 14th International Workshop on Semantic Evaluation_ , Barcelona, Spain. Association for Computational Linguistics.
* Schlechtweg et al. (2019b) Dominik Schlechtweg, Cennet Oguz, and Sabine Schulte im Walde. 2019b. Second-order co-occurrence sensitivity of skip-gram with negative sampling. In _Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pages 24–30, Florence, Italy. Association for Computational Linguistics.
* Schlechtweg et al. (2018) Dominik Schlechtweg, Sabine Schulte im Walde, and Stefanie Eckmann. 2018. Diachronic Usage Relatedness (DURel): A framework for the annotation of lexical semantic change. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 169–174, New Orleans, Louisiana, USA.
* Schütze (1998) Hinrich Schütze. 1998. Automatic word sense discrimination. _Computational Linguistics_ , 24(1):97–123.
* Schütze and Pedersen (1993) Hinrich Schütze and Jan Pedersen. 1993. A vector model for syntagmatic and paradigmatic relatedness. In _Proc. of the 9th Annual Conference of the UW Centre for the New OED and Text Research_ , pages 104–113, Oxford, England.
* Shoemark et al. (2019) Philippa Shoemark, Farhana Ferdousi Liza, Dong Nguyen, Scott Hale, and Barbara McGillivray. 2019. Room to Glo: A systematic comparison of semantic change detection approaches with word embeddings. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing_ , pages 66–76, Hong Kong, China. Association for Computational Linguistics.
* Tahmasebi et al. (2018) Nina Tahmasebi, Lars Borin, and Adam Jatowt. 2018. Survey of computational approaches to diachronic conceptual change. _arXiv:1811.06278_.
* Tahmasebi and Risse (2017) Nina Tahmasebi and Thomas Risse. 2017. Finding individual word sense changes and their delay in appearance. In _Proceedings of the International Conference Recent Advances in Natural Language Processing_ , pages 741–749, Varna, Bulgaria.
* Turney and Pantel (2010) Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. _J. Artif. Int. Res._ , 37(1):141–188.
## Appendix A Corpus details
The corpora are lemmatized and contain no punctuation; our further pre-processing is limited to removing low-frequency words. All words with a frequency below the value listed in the row "min word freq." of Table 1 are removed from the corpora. This is done to reduce noise and unwanted artifacts.
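The filtering step can be sketched as follows (a minimal illustration; the function name and data layout are our own, not the paper's code):

```python
from collections import Counter

def filter_low_frequency(corpus, min_freq):
    """Drop every token whose corpus-wide frequency is below min_freq.

    `corpus` is a list of tokenized sentences; `min_freq` plays the role
    of the "min word freq." threshold in Table 1.
    """
    counts = Counter(tok for sent in corpus for tok in sent)
    return [[tok for tok in sent if counts[tok] >= min_freq]
            for sent in corpus]
```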
## Appendix B Parameter settings
##### SGNS.
We use common hyper-parameter settings: an initial learning rate of 0.025, number
of negative samples $k$=5, and no sub-sampling. Vector dimensionality $d$,
window size $w$, and number of training epochs $e$ are varied in order to fine-
tune the model and methods. This is important because alignment methods like VI are
highly dependent on the choice of $e$ and $d$ (Kaiser et al., 2020a). The
following values are used: $w\in\\{$5, 10$\\}$, $e\in\\{$5, 10, 20, 30$\\}$,
$d\in\\{$25, 50, 100, 200, 300, 500$\\}$. Due to the immense number of
possible parameter combinations, we ran each setting only once.
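This grid amounts to $2\times 4\times 6=48$ SGNS configurations per corpus. Enumerating it is trivial (a sketch independent of any particular word2vec implementation):

```python
from itertools import product

# Hyper-parameter grid from Appendix B; each triple defines one SGNS run.
windows = [5, 10]
epochs = [5, 10, 20, 30]
dims = [25, 50, 100, 200, 300, 500]

grid = list(product(windows, epochs, dims))  # 48 (w, e, d) combinations
```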
PP was performed on the high scores for each language, where we differentiate
between combinations of alignment and pre-training, as well as whether the
matrices were STA- or SEP-post-processed.
##### SOT.
As stated in Section 4.3, SEP is used in combination with post-alignment. We
apply SOT with $\alpha$ values ranging from -1 to 1 in 0.1 increments on every
baseline matrix with $d\in\\{$25, 50, 100, 200, 300, 500$\\}$.
##### MC+PCR.
MC+PCR is performed over a parameter range of [0, 25] in order to examine how
performance develops as a growing number of PCs is removed. Note that the
parameter value 0 results in applying MC only.
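A compact NumPy sketch of MC+PCR in the spirit of Mu and Viswanath (2018); the exact procedure used in our experiments may differ in detail:

```python
import numpy as np

def mc_pcr(X, n_pcs):
    """Mean-center the embedding matrix X, then remove its top n_pcs
    principal components; n_pcs = 0 applies mean centering (MC) only."""
    Xc = X - X.mean(axis=0)  # MC: subtract the mean vector
    if n_pcs == 0:
        return Xc
    # Principal directions are the right singular vectors of the centered matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_pcs]            # top n_pcs directions, shape (n_pcs, d)
    return Xc - Xc @ V.T @ V  # project those directions out
```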
# The TESS–Keck Survey IV: A Retrograde, Polar Orbit for the Ultra-Low-
Density, Hot Super-Neptune WASP-107b
Ryan A. Rubenzahl NSF Graduate Research Fellow Department of Astronomy,
California Institute of Technology, Pasadena, CA 91125, USA Fei Dai Division
of Geological and Planetary Sciences, California Institute of Technology,
Pasadena, CA 91125, USA Andrew W. Howard Department of Astronomy, California
Institute of Technology, Pasadena, CA 91125, USA Ashley Chontos NSF Graduate
Research Fellow Institute for Astronomy, University of Hawai‘i, 2680 Woodlawn
Drive, Honolulu, HI 96822, USA Steven Giacalone Department of Astronomy,
University of California Berkeley, Berkeley, CA 94720, USA Jack Lubin
Department of Physics & Astronomy, University of California Irvine, Irvine, CA
92697, USA Lee J. Rosenthal Department of Astronomy, California Institute of
Technology, Pasadena, CA 91125, USA Howard Isaacson Department of Astronomy,
University of California Berkeley, Berkeley CA 94720, USA Centre for
Astrophysics, University of Southern Queensland, Toowoomba, QLD, Australia
Natalie M. Batalha Department of Astronomy and Astrophysics, University of
California, Santa Cruz, CA 95060, USA Ian J. M. Crossfield Department of
Physics & Astronomy, University of Kansas, 1082 Malott, 1251 Wescoe Hall Dr.,
Lawrence, KS 66045, USA Courtney Dressing Department of Astronomy, University
of California Berkeley, Berkeley CA 94720, USA Benjamin Fulton NASA Exoplanet
Science Institute/Caltech-IPAC, MC 314-6, 1200 E. California Blvd., Pasadena,
CA 91125, USA Daniel Huber Institute for Astronomy, University of Hawai‘i,
2680 Woodlawn Drive, Honolulu, HI 96822, USA Stephen R. Kane Department of
Earth and Planetary Sciences, University of California, Riverside, CA 92521,
USA Erik A Petigura Department of Physics & Astronomy, University of
California Los Angeles, Los Angeles, CA 90095, USA Paul Robertson Department
of Physics & Astronomy, University of California Irvine, Irvine, CA 92697, USA
Arpita Roy Space Telescope Science Institute, 3700 San Martin Dr., Baltimore,
MD 21218, USA Department of Physics and Astronomy, Johns Hopkins University,
3400 N Charles St, Baltimore, MD 21218, USA Lauren M. Weiss Institute for
Astronomy, University of Hawai‘i, 2680 Woodlawn Drive, Honolulu, HI 96822, USA
Corey Beard Department of Physics & Astronomy, University of California
Irvine, Irvine, CA 92697, USA Michelle L. Hill Department of Earth and
Planetary Sciences, University of California, Riverside, CA 92521, USA Andrew
Mayo Department of Astronomy, University of California Berkeley, Berkeley CA
94720, USA Teo Močnik Gemini Observatory/NSF’s NOIRLab, 670 N. A’ohoku Place,
Hilo, HI 96720, USA Joseph M. Akana Murphy NSF Graduate Research Fellow
Department of Astronomy and Astrophysics, University of California, Santa
Cruz, CA 95064, USA Nicholas Scarsdale Department of Astronomy and
Astrophysics, University of California, Santa Cruz, CA 95060, USA
###### Abstract
We measured the Rossiter–McLaughlin effect of WASP-107b during a single
transit with Keck/HIRES. We found the sky-projected inclination of WASP-107b’s
orbit, relative to its host star’s rotation axis, to be
$|\lambda|={118}^{+38}_{-19}$ degrees. This confirms the misaligned/polar
orbit that was previously suggested from spot-crossing events and adds
WASP-107b to the growing population of hot Neptunes in polar orbits around
cool stars. WASP-107b is also the fourth such planet to have a known distant
planetary companion. We examined several dynamical pathways by which this
companion could have induced such an obliquity in WASP-107b. We find that
nodal precession and disk dispersal-driven tilting can both explain the
current orbital geometry while Kozai–Lidov cycles are suppressed by general
relativity. While each hypothesis requires a mutual inclination between the
two planets, nodal precession requires a much larger angle which for WASP-107
is on the threshold of detectability with future Gaia astrometric data. As
nodal precession has no stellar type dependence, but disk dispersal-driven
tilting does, distinguishing between these two models is best done on the
population level. Finding and characterizing more extrasolar systems like
WASP-107 will additionally help distinguish whether the distribution of hot-
Neptune obliquities is a dichotomy of aligned and polar orbits or if we are
uniformly sampling obliquities during nodal precession cycles.
Software: emcee (Foreman-Mackey et al., 2013), corner.py (Foreman-Mackey,
2016), REBOUND/REBOUNDx (Rein & Liu, 2012; Tamayo et al., 2019), SciPy
(Virtanen et al., 2020), NumPy (Harris et al., 2020), Matplotlib (Hunter,
2007)
## 1 Introduction
WASP-107b is a close-in ($P=5.72$ days) super-Neptune orbiting the cool
K-dwarf WASP-107. Originally discovered via the transit method by WASP-South,
WASP-107b was later observed by K2 in Campaign 10 (Howell et al., 2014). These
transits revealed a radius close to that of Jupiter, $R_{b}=10.8\pm
0.34~{}R_{\oplus}=0.96\pm 0.03~{}R_{J}$ (Dai & Winn, 2017; Močnik et al.,
2017; Piaulet et al., 2021). However, follow-up radial velocity (RV)
measurements with the CORALIE spectrograph demonstrated a mass of just $38\pm
3~{}M_{\oplus}$ (Anderson et al., 2017), meaning this Jupiter-sized planet has
just one-tenth its density. Higher-precision RVs from Keck/High Resolution
Echelle Spectrometer (HIRES) suggested an even lower mass of $30.5\pm
1.7~{}M_{\oplus}$ (Piaulet et al., 2021). This low density challenges the
standard core-accretion model of planet formation. If runaway accretion
brought WASP-107b to a gas-to-core mass ratio of $\sim 3$ but was stopped
prematurely before growing to gas giant size, orbital dynamics and/or
migration may have played a significant role in this system (Piaulet et al.,
2021). Alternatively WASP-107b’s radius may be inflated from tidal heating,
which would allow a lower gas-to-core ratio consistent with core accretion
(Millholland et al., 2020).
With a low density, large radius, and hot equilibrium temperature, WASP-107b’s
large atmospheric scale height makes it a prime target for atmospheric
studies. Indeed analyses of transmission spectra obtained with the Hubble
Space Telescope (HST)/WFC3 have detected water amongst a methane-depleted
atmosphere (Kreidberg et al., 2018). WASP-107b was the first exoplanet to be
observed transiting with excess absorption at 10830 Å, an absorption line of a
metastable state of neutral helium indicative of an escaping atmosphere
(Oklopčić & Hirata, 2018). These observations suggest that WASP-107b’s
atmosphere is photoevaporating at a rate of a few percent in mass per billion
years (Spake et al., 2018; Allart et al., 2019; Kirk et al., 2020).
The orbit of WASP-107b is suspected to be misaligned with the rotation axis of
its host star. The angle between the star’s rotation axis and the normal to
the planet’s orbital plane, called the stellar obliquity $\psi$ (or just
obliquity), was previously constrained by observations of WASP-107b passing
over starspots as it transited (Dai & Winn, 2017). As starspots are regions of
reduced intensity on the stellar photosphere that rotate with the star, this
is seen as a bump of increased brightness in the transit light curve. By
measuring the time between spot-crossing events across successive transits,
combined with the absence of repeated spot crossings, Dai & Winn (2017) were
able to constrain the sky-projected obliquity, $\lambda$, of WASP-107b to
$\lambda\in$[40–140] deg. Intriguingly, long-baseline RV monitoring of the
system with Keck/HIRES has revealed a distant ($P_{c}\sim 1100$ days) massive
($M\sin i_{\text{orb},c}=115\pm 13~{}M_{\oplus}$) planetary companion, which
may be responsible for this present day misaligned orbit through its
gravitational influence on WASP-107b (Piaulet et al., 2021).
The sky-projected obliquity can also be measured spectroscopically. The
Rossiter–McLaughlin (RM) effect refers to the anomalous Doppler-shift caused
by a transiting planet blocking the projected rotational velocities across the
stellar disk (McLaughlin, 1924; Rossiter, 1924). If the planet’s orbit is
aligned with the rotation of the star (prograde), its transit will cause an
anomalous redshift followed by an anomalous blueshift. A anti-aligned
(retrograde) orbit will cause the opposite to occur.
Following the first obliquity measurement by Queloz et al. (2000), the field
saw measurements of 10 exoplanet obliquities over the next 8 years that were
all consistent with aligned, prograde orbits. After a few misaligned systems
had been discovered (e.g., Hébrard et al., 2008), a pattern emerged with hot
Jupiters on highly misaligned orbits around stars hotter than about $6250$ K
(Winn et al., 2010a). This pattern elicited several hypotheses such as damping
of inclination by the convective envelope of cooler stars (Winn et al., 2010a)
or magnetic realignment of orbits during the T Tauri phase (Spalding &
Batygin, 2015).
More recently a number of exoplanets have been found on misaligned orbits
around cooler stars, such as the hot Jupiter WASP-8b (Queloz et al., 2010;
Bourrier et al., 2017), as well as lower-mass hot Neptunes like HAT-P-11b
(Winn et al., 2010b), Kepler-63b (Sanchis-Ojeda et al., 2013), HAT-P-18b
(Esposito, M. et al., 2014), GJ 436b (Bourrier et al., 2018), and HD 3167 c
(Dalal et al., 2019). Strikingly, all of these exoplanets are on or near polar
orbits. Some of these systems have recently had distant, giant companions
detected (e.g. HAT-P-11c; Yee et al., 2018), hinting that these obliquities
arise from multibody planet-planet dynamics.
In this paper we present a determination of the obliquity of WASP-107b from
observations of the RM effect (Section 2). These observations were acquired
under the TESS–Keck Survey (TKS), a collaboration between scientists at the
University of California, the California Institute of Technology, the
University of Hawai‘i, and NASA. TKS is organized through the California
Planet Search with the goal of acquiring substantial RV follow-up observations
of planetary systems discovered by TESS (Dalba et al., 2020). TESS observed
four transits of WASP-107b (TOI 1905) in Sector 10. An additional science goal
of TKS is to measure the obliquities of interesting TESS systems. WASP-107b,
which is already expected to have a significant obliquity (Dai & Winn, 2017),
is an excellent target for an RM measurement with HIRES.
In Section 3 we confirm a misaligned orientation; in fact, we find a
polar/retrograde orbit. This adds WASP-107b to the growing population of hot
Neptunes in polar orbits around cool stars. We explore possible mechanisms
that could be responsible for this misalignment in Section 4. Lastly, in
Section 5 we summarize our findings and discuss the future work needed to
better understand the obliquity distribution for small planets around cool
stars.
Table 1: Radial Velocities of WASP-107
Time | RV | $\sigma_{\rm{RV}}$ | Exposure time
---|---|---|---
(BJD${}_{\text{TDB}}$) | (m s-1) | (m s-1) | (s)
2458905.90111 | 5.05 | 1.50 | 900
2458905.91189 | 6.43 | 1.42 | 883
2458905.92247 | 0.14 | 1.49 | 862
2458905.93288 | -1.35 | 1.65 | 844
2458905.94266 | -0.25 | 1.45 | 783
2458905.95204 | -5.28 | 1.44 | 745
2458905.96141 | -2.40 | 1.37 | 797
2458905.97098 | -3.40 | 1.46 | 754
2458905.98004 | 2.45 | 1.37 | 727
2458905.98927 | -5.52 | 1.45 | 780
2458905.99888 | 2.07 | 1.48 | 792
2458906.00848 | 4.21 | 1.37 | 776
2458906.01796 | -0.58 | 1.38 | 775
2458906.02768 | 0.83 | 1.47 | 817
2458906.03780 | 3.07 | 1.49 | 836
2458906.04780 | -3.01 | 1.26 | 818
2458906.05771 | 0.02 | 1.45 | 796
2458906.06752 | -3.72 | 1.49 | 795
2458906.07703 | 3.61 | 1.33 | 773
2458906.08654 | 1.27 | 1.38 | 790
2458906.09648 | -2.88 | 1.45 | 837
2458906.10657 | -5.39 | 1.44 | 818
Note. — A machine readable version is available.
## 2 Observations
We observed the RM effect for WASP-107b during a transit on 2020 February 26
(UTC) with HIRES (Vogt et al., 1994) on the Keck I Telescope on Maunakea. Our
HIRES observations covered the full transit duration ($\sim 2.7$ hr) with a
$\sim 1$ hour baseline on either side. We used the “C2” decker
($14^{\prime\prime}\times 0\farcs 861$, $R=45,000$) and integrated until the
exposure meter reached 60,000 counts (signal-to-noise ratio (S/N) $\sim 100$
per reduced pixel, $\lesssim 15$ minutes) or readout after 15 minutes. The
spectra were reduced using the standard procedures of the California Planet
Search (Howard et al., 2010), with the iodine cell serving as the wavelength
reference (Butler et al., 1996). In total we obtained 22 RVs, 12 of which were
in transit (Table 1).
Table 2: Adopted parameters of the WASP-107 System
Parameter | Value | Unit | Source
---|---|---|---
$P_{b}$ | $5.7214742$ | days | 1
$t_{c}$ | $7584.329897\pm 0.000032$ | days since JD 2,450,000 | 1
$b$ | $0.07\pm 0.07$ | | 1
$i_{\text{orb},b}$ | $89.887^{+0.074}_{-0.097}$ | degrees | 1
$R_{p}/R_{\star}$ | $0.14434\pm 0.00018$ | | 1
$a/R_{\star}$ | $18.164\pm 0.037$ | | 1
$e_{b}$ | $0.06\pm 0.04$ | | 2
$\omega_{b}$ | $40^{+40}_{-60}$ | degrees | 2
$M_{b}$ | $30.5\pm 1.7$ | M⊕ | 2
$P_{c}$ | $1088^{+15}_{-16}$ | days | 2
$e_{c}$ | $0.28\pm 0.07$ | | 2
$\omega_{c}$ | $-120^{+30}_{-20}$ | degrees | 2
$M_{c}\sin i_{\text{orb},c}$ | $0.36\pm 0.04$ | MJ | 2
$T_{\text{eff}}$ | $4245\pm 70$ | K | 2
$M_{\ast}$ | $0.683^{+0.017}_{-0.016}$ | M⊙ | 2
$R_{\ast}$ | $0.67\pm 0.02$ | R⊙ | 2
$u_{1}$ | $0.6666\pm 0.0062$ | | 1
$u_{2}$ | $0.0150\pm 0.0110$ | | 1
Note. — Sources: (1) Dai & Winn (2017); (2) Piaulet et al. (2021).
Visually inspecting the observations (Fig. 1) shows an anomalous blueshift
following the transit ingress, followed by an anomalous redshift after the
transit midpoint, indicating a retrograde orbit. (Propagating the uncertainty
in $t_{c}$ in Table 2, the transit midpoint on the night of observation is
uncertain to about 9 s.) The asymmetry and low amplitude of the signal
constrain the orientation to a near-polar alignment, but whether the orbit is
polar or anti-aligned is somewhat degenerate with the value of $v\sin
i_{\star}$. The expected RM amplitude is $v\sin
i_{\star}(R_{p}/R_{\star})^{2}\sim 40$ m s-1, using previous estimates of
$R_{p}/R_{\star}=0.144$ (Dai & Winn, 2017) and $v\sin i_{\star}\sim 2$ km
s-1 (e.g., Anderson et al., 2017). The signal we detected with HIRES is
only $\sim 5.5$ m s-1 in amplitude. Dai & Winn (2017) found the transit impact
parameter to be nearly zero, therefore the small RM amplitude suggests either
a much lower $v\sin i_{\star}$ than was spectroscopically inferred (see
Section 3.5), a near-polar orbit, or both.
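The expected amplitude quoted above is simple arithmetic (values taken from the text):

```python
# Back-of-the-envelope RM amplitude, v*sin(i) * (Rp/R*)^2.
vsini = 2000.0          # m/s, spectroscopic estimate of v sin(i_star)
rp_over_rstar = 0.144   # Dai & Winn (2017)

amplitude = vsini * rp_over_rstar**2  # ~41 m/s
# The observed ~5.5 m/s signal therefore implies a lower v sin(i_star),
# a near-polar orbit, or both.
```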
## 3 Analysis
### 3.1 Rossiter–McLaughlin Model
We used a Gaussian likelihood for the RV time series $(\bm{t},\,\bm{v}_{r})$
given the model parameters $\bm{\Theta}$, and included a RV jitter term
($\sigma_{j}$) to account for additional astrophysical or instrumental noise,
$p(\bm{v}_{r},\,\bm{t}|\bm{\Theta})=\prod_{i=1}^{N}\frac{1}{\sqrt{2\pi\sigma_{i}^{2}}}\exp\left[-\frac{(v_{r,i}-f(t_{i},\,\bm{\Theta}))^{2}}{2\sigma_{i}^{2}}\right],$
(1)
where $\sigma_{i}^{2}=\sigma_{\text{RV,i}}^{2}+\sigma_{j}^{2}$. The model
$f(t_{i},\,\bm{\Theta})$ is given by
$f(t_{i},\,\bm{\Theta})=\text{RM}(t_{i},\,\bm{\theta})+\gamma+\dot{\gamma}(t_{i}-t_{0}),$
(2)
where $\bm{\Theta}=(\bm{\theta},\,\gamma,\,\dot{\gamma})$ is the RM model
parameters ($\bm{\theta}$) as well as an offset ($\gamma$) and slope
($\dot{\gamma}$) term which we added to approximate the reflex motion of the
star and model any other systematic shift in RV throughout the transit (e.g.,
from noncrossed spots). The reference time $t_{0}$ is the time of the first
observation (BJD).
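The likelihood of Eq. (1) with the trend model of Eq. (2) can be sketched as follows (a structural illustration, not the authors' code; `rm_model` stands in for the Hirano et al. (2011) curve):

```python
import numpy as np

def log_likelihood(t, rv, rv_err, rm_params, gamma, gamma_dot, sigma_j, rm_model):
    """Gaussian log-likelihood with an RV jitter term added in quadrature."""
    model = rm_model(t, rm_params) + gamma + gamma_dot * (t - t[0])  # Eq. (2)
    var = rv_err**2 + sigma_j**2                                     # sigma_i^2
    return -0.5 * np.sum((rv - model)**2 / var + np.log(2 * np.pi * var))
```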
$\text{RM}(t_{i},\,\bm{\theta})$ is the RM model described in Hirano et al.
(2011). We assumed zero stellar differential rotation and adopted the transit
parameters determined by Dai & Winn (2017), which came from a detailed
analysis of K2 short-cadence photometry. We performed a simultaneous fit to
the photometric and spectroscopic transit data using the same photometric data
from K2 as in Dai & Winn (2017) to check for consistency. We obtained
identical results for the transit parameters as they did, hence we opted to
simply adopt their values, including their quadratic limb-darkening model.
These transit parameters are all listed in Table 2. Our best-fit RV jitter is
$\sigma_{j}=2.61_{-0.51}^{+0.64}$ m s-1, smaller than the jitter from the
Keplerian fit to the full RV sample of $3.9_{-0.4}^{+0.5}$ m s-1 (Piaulet et
al., 2021). This is expected as the RM sequence covers a much shorter time
baseline as compared to the full RV baseline, and as a result is only
contaminated by short-term stellar noise sources such as granulation and
convection.
The free parameters in the RM model are the sky-projected obliquity
($\lambda$), stellar inclination angle ($i_{\star}$), and projected rotational
velocity ($v\sin i_{\star}$). To first order, the impact parameter $b$ and
sky-projected obliquity $\lambda$ determine the shape of the RM signal, while
$v\sin i_{\star}$ and $R_{p}/R_{\star}$ set the amplitude. We adopted the
parameterization ($\sqrt{v\sin i_{\star}}\cos\lambda$, $\sqrt{v\sin
i_{\star}}\sin\lambda$) to improve the sampling efficiency and convergence of
the Markov Chain Monte Carlo (MCMC). A higher order effect that becomes
important when the RM amplitude is small is the convective blueshift, which we
denote $v_{cb}$ (see Section 3.3 for more details). There are thus seven free
parameters in our model: $\sqrt{v\sin i_{\star}}\cos\lambda$, $\sqrt{v\sin
i_{\star}}\sin\lambda$, $\cos i_{\star}$, $\log(|v_{cb}|)$, $\gamma$,
$\dot{\gamma}$, and $\sigma_{j}$. We placed a uniform hard-bounded prior on
$v\sin i_{\star}\in[0,\,5]$ km s-1 and on $\cos i_{\star}\in[0,\,1]$, and used
a Jeffrey’s prior for $\sigma_{j}$. All other parameters were assigned uniform
priors.
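The change of variables is invertible; recovering the physical parameters from a posterior sample is straightforward (a sketch with our own function name):

```python
import numpy as np

def recover_lambda_vsini(u, w):
    """Invert (u, w) = (sqrt(vsini)*cos(lam), sqrt(vsini)*sin(lam)).

    This parameterization avoids the sampling pathology near
    vsini = 0, where lambda becomes unconstrained.
    """
    vsini = u**2 + w**2                 # (sqrt(vsini))^2
    lam = np.degrees(np.arctan2(w, u))  # sky-projected obliquity, degrees
    return lam, vsini
```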
### 3.2 Micro/Macroturbulence Parameters
The shape of the RM curve is also affected by processes on the surface of the
star that broaden spectral lines, which affect the inferred RVs. In the Hirano
et al. (2011) model, these processes are parameterized by
$\gamma_{\text{lw}}$, the intrinsic line width, $\zeta$, the line width due to
macroturbulence, given by the Valenti & Fischer (2005) scaling relation
$\zeta=\left(3.98+\frac{T_{\text{eff}}-5770~{}\text{K}}{650~{}\text{K}}\right)~{}\text{km
s}^{-1},$ (3)
and $\beta$, given by
$\beta=\sqrt{\frac{2k_{B}T_{\text{eff}}}{\mu}+\xi^{2}+\beta_{\text{IP}}^{2}},$ (4)
where $\xi$ is the dispersion due to microturbulence and $\beta_{\text{IP}}$
is the Gaussian dispersion due to the instrument profile, which we set to the
HIRES line-spread function (LSF) (2.2 km s-1). We tested having
$\gamma_{\text{lw}},\,\xi$, and $\zeta$ as free parameters in the model (with
uniform priors) but only recovered the prior distributions for these
parameters. Moreover we saw no change in the resulting posterior distribution
for $\lambda$ or $v\sin i_{\star}$. Because of this, we opted to instead adopt
fixed nominal values of $\xi=0.7$ km s-1, $\gamma_{\text{lw}}=1$ km s-1, and
$\zeta=1.63$ km s-1 (from Eq. 3 using $T_{\text{eff}}$ from Table 2).
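As a check, Eq. (3) with $T_{\text{eff}}=4245$ K from Table 2 reproduces the adopted value:

```python
# Valenti & Fischer (2005) macroturbulence scaling, Eq. (3).
t_eff = 4245.0                          # K, Table 2
zeta = 3.98 + (t_eff - 5770.0) / 650.0  # km/s
# zeta ~ 1.63 km/s, the fixed value adopted in the fit
```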
### 3.3 Convective blueshift
Convection in the stellar photosphere, caused by hotter bubbles of gas rising
to the stellar surface and cooler gas sinking, results in a net blueshift
across the stellar disk. This is because the rising (blueshifted) gas is
hotter, and therefore brighter, than the cooler sinking (redshifted) gas.
Since this net-blueshifted signal is directed at an angle normal to the
stellar surface, the radial component seen by the observer is different in
amplitude near the limb of the star compared to the center of the stellar
disk, according to the stellar limb-darkening profile. Thus the magnitude of
the convective blueshift blocked by the planet varies over the duration of the
transit. The amplitude of this effect is $\sim 2$ m s-1, which is significant
given the small amplitude of the RM signal we observe for WASP-107b ($\sim
5.5$ m s-1).
For this reason we included the prescription of Shporer & Brown (2011) in the
RM model, which is parameterized by the magnitude of the convective blueshift
integrated over the stellar disk ($v_{cb}$). This quantity is negative by
convention. Since the possible value of $v_{cb}$ could cover several orders of
magnitude, we fit for $\log(|v_{cb}|)$ and set a uniform prior between -1 and
3. While we found that including $v_{cb}$ has no effect on the recovered
$\lambda$ and $v\sin i_{\star}$ posteriors, we are able to rule out
$|v_{cb}|>450$ m s-1 at 99% confidence, and $>250$ m s-1 at 95% confidence.
Figure 1: The RM effect for WASP-107b. The dark shaded bands show the
16th–84th (black) and 5th–95th (gray) percentiles from the posterior
distribution of the modeled RV. The red best-fit line is the maximum
a posteriori (MAP) model. The three vertical dashed lines denote, in
chronological order, the times of transit ingress, midpoint, and egress. The
residuals show the data minus the best-fit model. Data points are drawn with
the measurement errors and the best-fit jitter added in quadrature.

Figure 2: Posterior distribution for $\lambda$ and $v\sin i_{\star}$. Although
a more anti-aligned configuration is consistent with the data if $v\sin
i_{\star}$ is small, the most likely orientations are close to polar. A
prograde orbit ($|\lambda|<90^{\circ}$) is strongly ruled out.

Figure 3: Sky-projected orbital configuration of WASP-107b’s orbit relative to
the stellar rotation axis. The black lines correspond to posterior draws while
the red line is the MAP orbit from Fig. 1. The direction of WASP-107b’s orbit
is denoted by the red arrow. The stellar rotation axis (black arrow) and lines
of stellar latitude and longitude are drawn for an inclination of
$i_{\star}=25^{\circ}$. The posterior for $i_{\star}$ is illustrated by the
shaded gray strip with a transparency proportional to the probability.
### 3.4 Evidence for a Retrograde/Polar Orbit
Table 3: WASP-107b Rossiter–McLaughlin Parameters

Parameter | MCMC CI | MAP value | Unit
---|---|---|---
*Model Parameters* | | |
$\sqrt{v\sin i_{\star}}\cos\lambda$ | $-0.309_{-0.154}^{+0.150}$ | $-0.30$ | (a)
$\sqrt{v\sin i_{\star}}\sin\lambda$ | $-0.126_{-0.771}^{+0.808}$ | $-0.72$ | (a)
$\cos i_{\star}$ | $-0.003_{-0.681}^{+0.682}$ | $-0.56$ |
$\gamma$ | $0.80_{-1.38}^{+1.36}$ | $0.97$ | m s-1
$\dot{\gamma}$ | $-20.83_{-10.94}^{+11.05}$ | $-21.85$ | m s-2
$\sigma_{\text{jit}}$ | $2.61_{-0.51}^{+0.64}$ | $2.20$ | m s-1
$\log(|v_{cb}|)$ | $0.89_{-1.27}^{+1.18}$ | $2.17$ | (a)
*Derived Parameters* | | |
$|\lambda|$ | $118.1_{-19.1}^{+37.8}$ | $112.63$ | degrees
$v\sin i_{\star}$ | $0.45_{-0.23}^{+0.72}$ | $0.61$ | km s-1
$v_{cb}$ | $-7.74_{-109.71}^{+7.33}$ | $-149.41$ | m s-1
$i_{\star}$ | $28.17_{-20.04}^{+40.38}$ | $7.06$ | degrees
$|\psi|$ | $109.81_{-13.64}^{+28.17}$ | $92.60$ | degrees

(a) $v\sin i_{\star}$ is in km s-1 and $v_{cb}$ is in m s-1.
We first found the maximum a posteriori (MAP) solution by minimizing the
negative log-posterior using Powell’s method (Powell, 1964) as implemented in
scipy.optimize.minimize (Virtanen et al., 2020). The MAP solution was then
used to initialize an MCMC. We ran 8 parallel ensembles each consisting of 32
walkers for 10,000 steps using the python package emcee (Foreman-Mackey et
al., 2013). We checked for convergence by requiring that both the Gelman–Rubin
statistic (G–R; Gelman et al., 2003) was $<1.001$ across the ensembles (Ford,
2006) and the length of the chains was $>50$ times the autocorrelation time
(Foreman-Mackey et al., 2013).
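The G–R convergence check can be sketched with a minimal pure-Python implementation of the $\hat{R}$ statistic (an illustrative version, not the pipeline's code):

```python
import math
import random
import statistics

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for equal-length chains."""
    m, n = len(chains), len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    grand = statistics.fmean(means)
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)        # between-chain
    W = statistics.fmean([statistics.variance(c) for c in chains])  # within-chain
    var_hat = (n - 1) / n * W + B / n
    return math.sqrt(var_hat / W)

# Well-mixed chains drawn from the same distribution give R-hat ~ 1;
# a stuck or offset chain pushes R-hat well above 1.
rng = random.Random(42)
chains = [[rng.gauss(0.0, 1.0) for _ in range(2000)] for _ in range(8)]
print(gelman_rubin(chains))  # ~1.000
```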
The MAP values and central 68% confidence intervals (CI) computed from the
MCMC chains are tabulated in Table 3, and the full posteriors for $\lambda$
and $v\sin i_{\star}$ are shown in Fig. 2. A prograde ($|\lambda|<90^{\circ}$)
orbit is ruled out at $>99\%$ confidence. An anti-aligned
($135^{\circ}<\lambda<225^{\circ}$) orbit is allowed if $v\sin i_{\star}$ is
small ($0.26\pm 0.10$ km s-1), although a more polar aligned (but still
retrograde) orbit with $90^{\circ}<|\lambda|<135^{\circ}$ is more likely (if
$v\sin i_{\star}\in[0.22,\,2.09]$ km s-1, 90% CI). The true obliquity $\psi$
will always be closer to a polar orientation than $\lambda$, since $\lambda$
represents the minimum obliquity in the case where the star is viewed edge-on
($i_{\star}=90^{\circ}$). While an equatorial orbit that transits requires
$i_{\star}\sim 90^{\circ}$, a polar orbit may be seen to transit for any
stellar inclination.
To confirm that the signal we detected was not driven by correlated noise
structures in the data, we performed a test using the cyclical residual
permutation technique. We first calculated the residuals from the MAP fit to
the original RV time series. We then shifted these residuals forward in time
by one data point, wrapping at the boundaries, and added these new residuals
back to the MAP model. This new “fake” dataset was then fit again and the
process was repeated $N$ times where $N=22$ is the number of data points in
our RV time series. This technique preserves the red noise component, and
permuting multiple times generates datasets that have the same temporal
correlation but different realizations of the data. If we assume that the
signal we detected is caused by a correlated noise structure, then we would
expect to see the detected signal vanish or otherwise become significantly
weaker across each permutation as that noise structure becomes asynchronous
with the transit ephemeris. We found that the signal is robustly detected at
all permutations, with and without including the convective blueshift (fixed
to the original MAP value). The MAP estimate for $\lambda$ tended to be closer
to polar across the permutations compared to the original fit, which is
consistent with the posterior distribution estimated from the MCMC, but did
not vary significantly. While this method is not appropriate for estimating
parameter uncertainties (Cubillos et al., 2017), we conclude that our results
are not qualitatively affected by correlated noise in our RV time series.
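The permutation scheme can be sketched as follows; the flat model and five-point dataset are toy stand-ins for the MAP RM model and the 22-point RV series:

```python
# Sketch of the cyclical residual permutation test: shift the fit residuals
# forward in time (wrapping at the boundaries) and re-add them to the model.
def cyclic_permutations(data, model):
    """Yield N 'fake' datasets, one per cyclic shift of the residuals."""
    n = len(data)
    residuals = [d - m for d, m in zip(data, model)]
    for shift in range(n):
        shifted = [residuals[(k - shift) % n] for k in range(n)]
        yield [m + r for m, r in zip(model, shifted)]

# Toy example (N = 5): a flat "model" and arbitrary "data".
model = [0.0] * 5
data = [1.0, 2.0, 3.0, 4.0, 5.0]
fakes = list(cyclic_permutations(data, model))
print(fakes[1])  # [5.0, 1.0, 2.0, 3.0, 4.0] -- residuals shifted by one point
```

Each fake dataset preserves the temporal correlation structure of the residuals while breaking its phase relative to the transit ephemeris.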
Spot-crossing events can also affect the RM curve since the planet would block
a different amount of red/blueshifted light. Out of the nine transits observed
by Dai & Winn (2017), a single spot-crossing event was seen in only three of
the transits. Hence there is roughly a one in three chance that the transit we
observed contained a spot-crossing event. As we did not obtain simultaneous
high-cadence photometry, we do not know if or when such an event occurred.
Judging from the durations ($\sim 30$ min) of the spot crossings observed by
Dai & Winn (2017), this would only affect one or maybe two of our 15-minute
exposures. While we don’t see any significant outliers in our dataset, these
spots were only $\sim 10\%$ changes on a $\sim 2\%$ transit depth, amounting
to an overall spot depth of $\sim 0.2\%$. Given our estimate of $v\sin
i_{\star}\sim 0.5$ km s-1 this suggests a spot-crossing event would produce a
$\sim 1$ m s-1 RV anomaly, small compared to our measurement uncertainties
($\sim 1.5$ m s-1) and the estimated stellar jitter ($\sim 2.6$ m s-1). In
other words, there is a roughly 33% chance that a spot-crossing event
introduced an additional $0.5\sigma$ error on a single data point. If there
were multiple spot-crossing events this anomaly would vary across the transit
similar to other stellar-activity processes. In practice this introduces a
correlated noise structure in the RV time series which our cyclical residual
permutation test demonstrated is not significantly influencing our measurement
of the obliquity or other model parameters. From this semi-analytic analysis
we conclude that spot crossings are not a leading source of uncertainty in our
model.
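The $\sim 1$ m s-1 estimate follows from scaling the occulted flux fraction by the projected rotation speed; the arithmetic with the quoted numbers:

```python
# Back-of-the-envelope spot-crossing RV anomaly: the occulted flux fraction
# times the projected rotation speed (numbers quoted in the text).
transit_depth = 0.02      # ~2% transit depth
spot_contrast = 0.10      # spots were ~10% changes on the transit depth
vsini_ms = 0.5e3          # v sin(i) ~ 0.5 km/s, in m/s

spot_depth = spot_contrast * transit_depth   # ~0.2% flux anomaly
rv_anomaly = spot_depth * vsini_ms           # = 1.0 m/s
print(rv_anomaly)  # 1.0 -- small vs. ~1.5 m/s errors and ~2.6 m/s jitter
```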
### 3.5 Constraints on the Stellar Inclination
Figure 4: Obliquity of WASP-107b. The true obliquity $\psi$ is calculated
using the constraints on the stellar inclination as inferred from the $v\sin
i_{\star}$ posterior (Section 3.5).
Given a constraint on $v\sin i_{\star}$ and $v$, we can constrain the stellar
inclination $i_{\star}$. Previous studies have found a range of estimates for
the $v\sin i_{\star}$ of WASP-107. Anderson et al. (2017) found a value of
$2.5\pm 0.8$ km s-1, whereas John Brewer (private communication) obtained a
value of $1.5\pm 0.5$ km s-1 using the automated spectral synthesis modeling
procedure described in Brewer et al. (2016). We note that the Specmatch-Emp
(Yee et al., 2017) result for our HIRES spectrum only yields an upper bound
for $v\sin i_{\star}$ of $<2$ km s-1, as this technique is limited by the
HIRES PSF. All three of these methods derive $v\sin i_{\star}$ by modeling the
amount of line broadening present in the stellar spectrum, which in part comes
from the stellar rotation. However these estimates may be biased from other
sources of broadening which are not as well constrained in these models. Our
RM analysis on the other hand incorporates a direct measurement of $v\sin
i_{\star}$ by observing how much of the projected stellar rotational velocity
is blocked by the transiting planet’s shadow. Our RM analysis found $v\sin
i_{\star}=0.45_{-0.23}^{+0.72}$ km s-1, lower than the spectroscopic
estimates. We adopted this posterior for $v\sin i_{\star}$ to keep internal
consistency.
The rotation period of WASP-107 has been estimated to be $17\pm 1$ days from
photometric modulations due to starspots rotating in and out of view (Anderson
et al., 2017; Dai & Winn, 2017; Močnik et al., 2017). We combined this
rotation period with the stellar radius of $0.67\pm 0.02~{}R_{\odot}$ inferred
from the HIRES spectrum (Piaulet et al., 2021) using Specmatch-Emp (Yee et
al., 2017) to constrain the tangential rotational velocity $v=2\pi
R_{\star}/P_{\text{rot}}$. We then used the statistically correct procedure
described by Masuda & Winn (2020) and performed an MCMC sampling of $v$ and
$\cos i_{\star}$, using uniform priors for each, and using the posterior
distribution for $v\sin i_{\star}$ obtained in the RM analysis as a
constraint. Sampling both variables simultaneously correctly incorporates the
nonindependence of $v$ and $\cos i_{\star}$, since $v\leq v\sin i_{\star}$. We
found that $i_{\star}=25.8_{-15.4}^{+22.5}$ degrees (MAP value $7.1^{\circ}$),
implying a viewing geometry of close to pole-on for the star. Thus any
transiting configuration will necessarily imply a near-polar orbit, even for
orbital solutions with $\lambda$ near $180^{\circ}$ (see Fig. 3). It is worth
mentioning that one of the three spot-crossing events observed by Dai & Winn
(2017) occurred near the transit midpoint. This small stellar inclination
implies that this spot must be at a relatively high latitude
($90^{\circ}-i_{\star}$) compared to that of our Sun, which has nearly all of
its sunspots contained within $\pm 30^{\circ}$ latitude.
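A minimal rejection-sampling sketch of the Masuda & Winn (2020) procedure is given below. The Gaussian stand-ins for the $v\sin i_{\star}$ posterior and for the $v=2\pi R_{\star}/P_{\text{rot}}$ constraint (widths are assumptions for illustration) replace the actual asymmetric posteriors used in the analysis:

```python
import math
import random

# Rejection-sampling sketch: sample (v, cos i_star) with uniform priors,
# weighted by a v*sin(i) constraint and a rotation-period/radius constraint.
P_ROT_D = 17.0                        # rotation period, days
R_STAR_KM = 0.67 * 6.957e5            # stellar radius, km
V_EQ = 2 * math.pi * R_STAR_KM / (P_ROT_D * 86400.0)  # ~2 km/s equatorial speed

def sample_istar(n, vsini_mu=0.45, vsini_sig=0.3, seed=1):
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        v = rng.uniform(0.0, 3.0)         # uniform prior on v (km/s)
        cosi = rng.uniform(0.0, 1.0)      # uniform prior on cos(i_star)
        sini = math.sqrt(1.0 - cosi ** 2)
        like = math.exp(-0.5 * ((v * sini - vsini_mu) / vsini_sig) ** 2)
        like *= math.exp(-0.5 * ((v - V_EQ) / (0.1 * V_EQ)) ** 2)
        if rng.random() < like:           # accept with prob. ~ likelihood
            out.append(math.degrees(math.acos(cosi)))
    return out

samples = sorted(sample_istar(1000))
print(samples[len(samples) // 2])  # median i_star: tens of degrees (pole-on-ish)
```

Sampling $v$ and $\cos i_{\star}$ jointly, rather than dividing point estimates, is what correctly propagates the $v \geq v\sin i_{\star}$ constraint.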
Knowledge of the stellar inclination $i_{\star}$, the orbital inclination
$i_{\text{orb}}$, and the sky-projected obliquity $\lambda$ allows one to
compute the true obliquity $\psi$, as these four angles are related by
$\cos\psi=\cos i_{\text{orb}}\cos i_{\star}+\sin i_{\text{orb}}\sin
i_{\star}\cos\lambda.$ (5)
The resulting posterior distribution for the true obliquity $\psi$ is shown in
Fig. 4. As expected, the true orbit is constrained to a more polar orientation
than is implied by the wide posteriors on $\lambda$, due to the nearly pole-on
viewing geometry of the star itself.
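Eq. (5) translates directly into code; the input angles below are representative values drawn from the posteriors above:

```python
import math

def true_obliquity(i_orb_deg, i_star_deg, lam_deg):
    """True obliquity psi (degrees) from Eq. (5)."""
    i_orb, i_star, lam = map(math.radians, (i_orb_deg, i_star_deg, lam_deg))
    cos_psi = (math.cos(i_orb) * math.cos(i_star)
               + math.sin(i_orb) * math.sin(i_star) * math.cos(lam))
    return math.degrees(math.acos(cos_psi))

# For a transiting geometry (i_orb ~ 90 deg) and a pole-on star, psi stays
# near polar for any lambda -- even a formally anti-aligned one:
print(true_obliquity(90.0, 25.0, 118.0))  # ~101 deg
print(true_obliquity(90.0, 25.0, 180.0))  # ~115 deg, still near-polar
```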
## 4 Dynamical History
How did WASP-107b end up in a slightly retrograde, nearly polar orbit? To
explore this question, we examined the orbital dynamics of the WASP-107 system
considering the new discovery of a distant, giant companion WASP-107c (Piaulet
et al., 2021). As in Mardling (2010), Yee et al. (2018), and Xuan & Wyatt
(2020), we can understand the evolution of the WASP-107 system by examining
the secular three-body Hamiltonian. Assuming the inner planet is a test
particle (i.e., $M_{b}\sqrt{a_{b}}\ll M_{c}\sqrt{a_{c}}$), and since
$a_{b}/a_{c}\ll 1$, we can approximate the Hamiltonian by expanding to
quadrupole order in semimajor axis ratio
$\mathcal{H}=\frac{1}{16}n_{b}\frac{M_{c}}{M_{\star}}\left(\frac{a_{b}}{a_{c}\sqrt{1-e_{c}^{2}}}\right)^{3}\left[\frac{(5-3G_{b}^{2})(3H_{b}^{2}-G_{b}^{2})}{G_{b}^{2}}+\frac{15(1-G_{b}^{2})(G_{b}^{2}-H_{b}^{2})\cos(2g_{b})}{G_{b}^{2}}\right]+\frac{GM_{\star}}{a_{b}c^{2}}\frac{3n_{b}}{G_{b}},$ (6)
where the last term is the addition from general relativity (GR) and
$n_{b}=2\pi/P_{b}$. The quantities $G$ and $H$ are the canonical Delaunay
variables
$G_{b}=\sqrt{1-e_{b}^{2}}\leftrightarrow g_{b}=\omega_{b},\qquad H_{b}=G_{b}\cos i_{b}\leftrightarrow h_{b}=\Omega_{b},$ (7)
where the double-arrow ($\leftrightarrow$) symbolizes conjugate variables,
$\omega_{b}$ is the argument of perihelion of the inner planet, $\Omega_{b}$
is the longitude of ascending node of the inner planet, and $i_{b}$ is the
inclination of the inner planet with respect to the invariant plane. The
invariant plane is the plane normal to the total angular momentum vector, which
to good approximation is simply the orbital plane of the outer planet (since
angular momentum is $\propto Ma^{1/2}$). With this approximation, $i_{b}$ is
the relative inclination between the two planets.
Figure 5: Evolution of WASP-107b’s true obliquity ($\psi_{b}$, solid line)
throughout the $N$-body simulation using the system parameters given in
Table 2. The outer planet has $M_{c}=M\sin i_{\text{orb},c}$ and was
initialized with an obliquity of $\psi_{c}=60^{\circ}$ (dashed line). The
obliquity of planet b oscillates between $\psi_{c}\pm\psi_{c}$ every $\sim
2.5$ Myr due to nodal precession. If $\sin i_{\text{orb},c}<1$ then the larger
$M_{c}$ simply produces a shorter nodal precession timescale. The right panel
shows the evolution of the inclinations with the difference in the longitudes
of ascending node.
### 4.1 Kozai–Lidov oscillations
Since the Hamiltonian $\mathcal{H}$ does not depend on $h_{b}$, the quantity
$H_{b}=\sqrt{1-e_{b}^{2}}\cos i_{b}$ is conserved. This leads to a periodic
exchange of $e_{b}$ and $i_{b}$, so long as the outer planet has an
inclination greater than a critical value of $\sim 39\fdg 2$ (Kozai, 1962;
Lidov, 1962). These Kozai–Lidov cycles also require a slowly changing argument
of perihelion, which may precess due to GR as is famously seen in the orbit of
Mercury. This precession can suppress Kozai–Lidov cycles if fast enough, as is
the case for HAT-P-11 and $\pi$ Men (Xuan & Wyatt, 2020; Yee et al., 2018).
The precession rate from GR is given by
$\dot{\omega}_{GR}=\frac{GM_{\star}}{a_{b}c^{2}}\frac{3n_{b}}{G_{b}^{2}},$ (8)
which has an associated timescale of $\tau_{GR}=2\pi/\dot{\omega}\approx
42,500$ years for WASP-107b. The Kozai timescale (Kiseleva et al., 1998) is
$\tau_{\text{Kozai}}=\frac{2P_{c}^{2}}{3\pi P_{b}}\frac{M_{\star}}{M_{c}}(1-e_{c}^{2})^{3/2}\approx 210,000~\text{yr},$ (9)
five times longer. The condition for Kozai–Lidov cycles to be suppressed by
relativistic precession is $\tau_{\text{Kozai}}\dot{\omega}_{\text{GR}}>3$
(Fabrycky & Tremaine, 2007), which the MAP minimum mass and orbital parameters
WASP-107c satisfy. This is nicely visualized in Figure 6 of Piaulet et al.
(submitted), which shows the full posterior distributions of
$\tau_{\text{Kozai}}$ and $\tau_{\text{GR}}$. While the true mass of WASP-107c
is likely to be larger than the derived $M\sin i_{\text{orb},c}$, it would
need to be $\sim 10$ times larger for Kozai–Lidov oscillations to occur. This
would imply a near face-on orbit of at most $i_{\text{orb},c}<5\fdg 5$. Such a
face-on orbit is unlikely but is still plausible if it is aligned with the
rotation axis of the star, given our constraints on the stellar inclination
angle in Section 3.5.
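The timescales and the suppression criterion above can be reproduced to order of magnitude with approximate literature parameters (the specific values below are assumptions for illustration, not the fit posteriors):

```python
import math

# Order-of-magnitude check of the GR vs. Kozai-Lidov timescales (Eqs. 8-9)
# using approximate literature parameters for WASP-107 (assumed values).
G = 6.674e-11                 # m^3 kg^-1 s^-2
M_SUN, M_JUP = 1.989e30, 1.898e27
DAY, YR = 86400.0, 365.25 * 86400.0
C = 2.998e8                   # speed of light, m/s

M_star = 0.69 * M_SUN
M_c = 0.36 * M_JUP            # M sin(i) for WASP-107c
P_b, P_c = 5.72 * DAY, 1088.0 * DAY
e_c = 0.28

a_b = (G * M_star * (P_b / (2 * math.pi)) ** 2) ** (1 / 3)  # Kepler III
n_b = 2 * math.pi / P_b

omegadot_gr = (G * M_star / (a_b * C ** 2)) * 3 * n_b       # Eq. (8), e_b ~ 0
tau_gr = 2 * math.pi / omegadot_gr / YR                     # ~ 42,000 yr

tau_kozai = (2 * P_c ** 2 / (3 * math.pi * P_b)
             * (M_star / M_c) * (1 - e_c ** 2) ** 1.5) / YR  # ~ 210,000 yr

# Fabrycky & Tremaine (2007) suppression criterion
suppressed = (tau_kozai * YR) * omegadot_gr > 3
print(tau_gr, tau_kozai, suppressed)
```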
### 4.2 Nodal precession
An alternative explanation for the high obliquity of WASP-107b is nodal
precession, as was proposed for HAT-P-11b (Yee et al., 2018) and for $\pi$ Men
c (Xuan & Wyatt, 2020). In this scenario the outer planet must have an
obliquity greater than half that of the inner planet, which in this case would
require $\psi_{\text{c}}\sim 55^{\circ}$. Then the longitude of ascending node
$\Omega_{b}$ evolves in a secular manner according to Yee et al. (2018),
$\frac{d\Omega_{b}}{dt}=\frac{\partial\mathcal{H}}{\partial H_{b}}=\frac{n_{b}}{8}\frac{M_{c}}{M_{\star}}\left(\frac{a_{b}}{a_{c}\sqrt{1-e_{c}^{2}}}\right)^{3}\left(\frac{15-9G_{b}^{2}}{G_{b}^{2}}\right)H_{b}.$ (10)
The associated timescale $\tau_{\Omega_{b}}=2\pi/\dot{\Omega}_{b}$ is only
about 2 Myr, much shorter than the age of the system. Yee et al. (2018)
pointed out that such a precession will cause the relative inclination of the
two planets to oscillate between $\approx\psi_{c}\pm\psi_{c}$. Thus at certain
times the observer may see a highly misaligned orbit ($\psi_{b}\sim
2\psi_{c}$) for the inner planet, while at other times the observer may see an
aligned orbit ($\psi_{b}=0$).
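Evaluating Eq. (10) with approximate literature parameters (assumed values, for illustration; $e_{b}\approx 0$ so $G_{b}=1$, and a mutual inclination near $\psi_{c}$) recovers the $\sim 2$ Myr timescale:

```python
import math

# Nodal precession timescale of planet b driven by planet c (Eq. 10), using
# approximate literature parameters (assumed values, for illustration).
M_SUN, M_JUP = 1.989e30, 1.898e27
DAY, YR = 86400.0, 365.25 * 86400.0

M_star = 0.69 * M_SUN
M_c = 0.36 * M_JUP                  # M sin(i) for planet c
P_b, P_c = 5.72 * DAY, 1088.0 * DAY
e_c = 0.28
i_b_deg = 55.0                      # assumed mutual inclination (~psi_c)

a_ratio = (P_b / P_c) ** (2 / 3)    # a_b/a_c via Kepler III (same central mass)
n_b = 2 * math.pi / P_b
G_b = 1.0                           # e_b ~ 0
H_b = G_b * math.cos(math.radians(i_b_deg))

dOmega_dt = ((n_b / 8) * (M_c / M_star)
             * (a_ratio / math.sqrt(1 - e_c ** 2)) ** 3
             * ((15 - 9 * G_b ** 2) / G_b ** 2) * H_b)
tau_Omega_Myr = 2 * math.pi / abs(dOmega_dt) / YR / 1e6
print(tau_Omega_Myr)                # ~2 Myr, as quoted in the text
```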
We examined this effect by running a 3D $N$-body simulation in REBOUND (Rein &
Liu, 2012). We initialized planet c with an obliquity of 60∘ (which sets the
maximum obliquity planet b can obtain, ${\sim}2\psi_{c}=120^{\circ}$) and
planet b with an obliquity of 0∘ (aligned, prograde orbit). We included the
effects of GR and tides using the gr and modify_orbits_forces features of
REBOUNDx (Kostov et al., 2016; Tamayo et al., 2019) and used the WHFast
integrator (Rein & Tamayo, 2015) to evolve the system forward in time for 10
Myr.
Fig. 5 shows that over these 10 Myr $\psi_{b}$ oscillates in the range
$0^{\circ}\text{--}120^{\circ}$ due to the precession of $\Omega_{b}$. Thus
nodal precession can easily produce high relative inclinations, despite
Kozai–Lidov oscillations being suppressed by GR. A configuration like what is
observed today in which the inner planet is misaligned on a polar, yet
slightly retrograde orbit is attainable at times during this cycle where the
mutual inclination is at or near its maximum. The obliquity is $\gtrsim 80\%$
of the amplitude from nodal precession (${\sim}2\psi_{c}$) approximately
one-third of the time (bottom panel in Fig. 6). Therefore, even though the
observed obliquity depends on when during the nodal precession cycle the
system is observed, there is a decent chance of observing $\psi_{b}$ near its
maximum.
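The one-third figure can be checked with a toy sinusoidal model, $\psi(t)=\psi_{\text{max}}\sin^{2}(\pi t/\tau)$ (a stand-in for, not the exact form of, the precession cycle in Fig. 5):

```python
import math

# Fraction of a precession cycle with psi(t) >= 80% of psi_max, for the toy
# model psi(t) = psi_max * sin^2(pi * t / tau).
def fraction_above(threshold, n=100000):
    return sum(math.sin(math.pi * k / n) ** 2 >= threshold
               for k in range(n)) / n

frac = fraction_above(0.8)
print(frac)  # ~0.30: near-maximum obliquity about one-third of the time
```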
In the simulation we ran, WASP-107b is only seen by an observer to be in a
transiting geometry about 2.8% of the time. Xuan & Wyatt (2020) did a more
detailed calculation accounting for the measured mutual inclination and found
that the dynamical transit probability for $\pi$ Men c and HAT-P-11b is of
order 10-20%. However, as Xuan & Wyatt (2020) point out, this does not affect
the population-level transit likelihood since the overall orientations of
extrasolar systems can still be treated as isotropic. It merely suggests that
a system with a transiting distant giant planet may be harboring a nodally
precessing inner planet that just currently happens to be nontransiting.
Both Kozai–Lidov and nodal precession require a large mutual inclination in
order for the inner planet to reach polar orientations. The origin of this
large mutual inclination may be hidden in the planet’s formation history, or
perhaps was caused by a planet-planet scattering event with an additional
companion that was ejected from the system. This could also explain the
moderately eccentric orbit of WASP-107c (Piaulet et al., 2021). Indeed a
significant mutual inclination is observed for the inner and outer planets of
the HAT-P-11 and $\pi$ Men systems (Xuan & Wyatt, 2020), although the inner
planet in $\pi$ Men is only slightly misaligned with $\lambda=24\pm 4.1$
degrees (Hodžić et al., submitted), while HAT-P-11b has
$\lambda=103_{-10}^{+26}$ degrees (Winn et al., 2010b).
As more close-in Neptunes with distant giant companions are discovered, the
distribution of observed obliquities for the inner planet will help determine
if we are indeed simply seeing many systems undergoing nodal precession but at
different times during the precession cycle. If so, we might observe a sky-
projected obliquity distribution that resembles the bottom panel of Fig. 6.
However, we may instead be observing two classes of close-in Neptunes: ones
aligned with their host stars and ones in polar or near-polar orbits (see the
top panel of Fig. 6). This suggests an alternative mechanism that favors
either polar orbits or aligned orbits depending on the system architecture.
Figure 6: Top: polar plot showing the absolute sky-projected obliquity as the
azimuthal coordinate and normalized orbital distance as the radial coordinate,
for ${<}100~{}M_{\oplus}$ planets around stars with $T_{\text{eff}}<6250$ K
(similar mass planets around hotter stars are shown as faded gray points). The
red point is WASP-107b. Other noteworthy systems are shown with various colors
and markers (see Section 1 for references). Data compiled from TEPCat as of
2020 October (Southworth, 2011). Only WASP-107, HAT-P-11, and $\pi$ Men have
distant giant companions detected. Kepler-56 (Huber et al., 2013) is another
similar system but is not included in this plot as it is an evolved massive
star. Bottom: the fraction of a nodal precession cycle spent in a given
obliquity bin (left). The true obliquity $\psi$ is assumed to vary as
$\cos[(\pi/2)\psi(t)/\psi_{\text{max}}]=\sin^{2}(\pi t/\tau)$, where
$t\in[0,\tau=1]$. This recreates the shape of the oscillating inclination in
Fig. 5. The amplitude $\psi_{\text{max}}$ is twice the outer planet’s
inclination which is plotted for three different distributions (shown on the
right): uniform between $[0^{\circ},\,90^{\circ}]$ (gray), uniform between
$[40^{\circ},\,60^{\circ}]$ (red), and using the von-Mises Fisher distribution
from Masuda et al. (2020) calculated in a hierarchical manner incorporating
their posterior distribution for the shape parameter $\sigma$ (black). In all
three cases the true obliquity is shown as a dashed histogram. The sky-
projected obliquity is computed given a transiting geometry
($i_{\text{orb},b}=90^{\circ}$) and is marginalized over stellar inclination
angle (solid histogram). $M_{p}<100~{}M_{\oplus}$ planets with observed sky-
projected obliquities are shown as a filled histogram for comparison. Note
that while the gray and black predictions are relatively similar, an excess of
polar orbits can be observed if the mutual inclination distribution is
clustered around $\sim 40$–$60^{\circ}$.
### 4.3 Disk dispersal-driven tilting
Recently, Petrovich et al. (2020) showed that, even for $\psi_{c}\sim
0^{\circ}$, a resonance encountered as the young protoplanetary disk
dissipates can excite an inner planet to high obliquities, even favoring a
polar orbit given appropriate initial conditions. To summarize the model,
consider a system with a close-in planet and a distant (few astronomical
units) giant planet, like WASP-107, after the disk interior to the outer
planet has been cleared but the disk exterior remains. The external gaseous
disk induces a nodal precession of the outer planet at a rate proportional to
the disk mass (Eq. 10 with $b\mapsto c$ and $c\mapsto\text{disk}$). The outer
planet still induces a nodal precession on the inner planet according to Eq.
10. If at first the rate $d\Omega_{c}/dt>d\Omega_{b}/dt$, then as the disk
dissipates (and $M_{\text{disk}}$ decreases) the precession rate for planet c
will decrease until it matches the precession rate of the inner planet. At
this point the system will pass through a secular resonance, driving an
instability which tilts the inner planet to a high obliquity; a small initial
obliquity of a few degrees can quickly reach $90^{\circ}$. Additionally,
depending on the relative strength of the stellar quadrupole moment and GR
effects, the inner planet may obtain a high eccentricity (if GR is
unimportant), a modest eccentricity (if GR is important), or a circular orbit
(if GR dominates). Tidal forces can circularize the orbit, although the planet
may retain a detectable eccentricity even after several gigayears. This
process well explains the polar, close-in, and eccentric orbits of small
planets like HAT-P-11b. Nodal precession alone is unable to explain the
eccentricity of such planets.
Given the planet and stellar properties of the WASP-107 system, we calculated
the instability criteria developed in Petrovich et al. (2020). The steady-
state evolution of the system can be inferred by comparing the relative
strength of GR ($\eta_{\text{GR}}$) with the stellar quadrupole moment
($\eta_{\star}$). We found that $\eta_{\text{GR}}>\eta_{\star}+6$ at 99.76%
confidence, $\eta_{\star}+6>\eta_{\text{GR}}>4$ at 0.155% confidence, and
$\eta_{\text{GR}}<4$ at 0.084% confidence (i.e., $\eta_{\text{GR}}\sim 30-80$
and $\eta_{\star}\sim 1$). Thus WASP-107b is stable against eccentricity
instabilities and lives in the polar, circular region of parameter space in
Fig. 4 of Petrovich et al. (2020).
We calculated the final obliquity of WASP-107b using the procedure outlined in
Petrovich et al. (2020), incorporating the uncertainties in $M\sin
i_{\text{orb},c}$ and $P_{c}$ and integrating over all possible initial
obliquities for the outer planet. Evaluating their Eq. (3), we found that the
resonance that drives the inner planet to high obliquities is always crossed.
We calculated the adiabatic parameter
$x_{\text{ad}}\equiv\tau_{\text{disk}}/\tau_{\text{adia}}$ from the disk
dispersal timescale and the adiabatic time (their Eq. 7), taking
$\tau_{\text{disk}}$ to be 1 Myr. In the orbital configurations where
$x_{\text{ad}}>1$ (adiabatic crossing) we computed the final obliquity from
their Eq. (12) ($I_{\text{crit}}$). Otherwise, the final obliquity was set to
$I_{\text{non-ad}}$ from their Eq. (15).
The resulting probability of the final obliquity of WASP-107b is 7.6% for a
nonpolar (but oblique) orbit and 92.4% for a polar orbit. A polar orbit is
likely if the outer planet’s orbit is inclined at least $\sim 8^{\circ}$, and
is guaranteed for $\psi_{\text{init},c}\gtrsim 25^{\circ}$. In an equivalent
parameterization, Petrovich et al. (2020) explicitly predict a polar orbit for
WASP-107b if the mass and semiminor axis of WASP-107c satisfy
$(b_{c}/2~{}\text{AU})^{3}>(M_{c}/0.5~{}M_{J})$. Since we only have a
constraint on $M\sin i_{\text{orb},c}$, this condition is satisfied if
$i_{\text{orb},c}\in[60^{\circ}-90^{\circ}]$. Such a viewing geometry, in
conjunction with an obliquity of $\psi_{c}>25^{\circ}$, is plausible given the
likely stellar orientation (Section 3.5).
A key deviation from this model is that while the orbit of WASP-107b is indeed
close to polar, it is quite definitively retrograde. In the disk dispersal-
driven tilting scenario, the inner planet approaches a $\psi=90^{\circ}$ polar
orbit from below and stops at $\psi_{b}=90^{\circ}$. In order to reach a
super-polar/retrograde orbit, WASP-107c must have a significant obliquity,
either primordial from formation or through a scattering event (Petrovich et
al., 2020). As we alluded to in Section 4.2, a scattering event could also
explain the moderate eccentricities of the outer giants WASP-107c and
HAT-P-11c, and could easily give WASP-107c a high enough obliquity to
guarantee a polar/super-polar configuration for WASP-107b (Huang et al.,
2017). In fact a scattering event is more likely to produce the modest
obliquity for planet c needed to produce a super-polar orbit under the disk
dispersal framework than it is to produce the large ($\psi_{c}\gtrsim
40-50^{\circ}$) obliquity needed to excite either Kozai–Lidov or nodal
precession cycles.
## 5 Discussion and Conclusion
We observed the RM effect during a transit of WASP-107b on 2020 February 26,
from which we derived a near-polar and retrograde orbit as well as a low
stellar $v\sin i_{\star}$. This low $v\sin i_{\star}$ implies that we are
viewing the star close to one of its poles, reinforcing the near-polar orbital
configuration of WASP-107b. However, we are unable to conclusively say how
WASP-107b acquired such an orbit. Nodal precession or disk dispersal-driven
tilting are both plausible mechanisms for producing a polar orbit, while
Kozai–Lidov oscillations may be possible but only for a very narrow range of
face-on orbital geometries for WASP-107c. RV observations (Piaulet et al.,
2021) as well as constraints on the velocity of the escaping atmosphere of
WASP-107b (e.g., Allart et al. 2019, Kirk et al. 2020, Spake, J. J. et al.
2020, in preparation) are consistent with a circular orbit. The eccentricity
damping timescale due to tidal forces is only $\sim 60$ Myr (Piaulet et al.,
2021), so this is not unexpected. While a circular orbit does not rule out any
of these pathways, only disk dispersal-driven tilting can explain both the
eccentric and polar orbit of WASP-107b’s doppelganger HAT-P-11b.
Since all three scenarios depend on the obliquity of the outer giant planet,
measuring the mutual inclination of planet b and c is essential to understand
the dynamics of this system. This has been done for similar system
architectures such as HAT-P-11 (Xuan & Wyatt, 2020) and $\pi$ Men (Xuan &
Wyatt, 2020; De Rosa et al., 2020) by observing perturbations in the
astrometric motion of the star due to the gravitational tugging of the distant
giant planet, using data from Hipparcos and Gaia. Unfortunately WASP-107 is
significantly fainter ($V=11.5$; Anderson et al., 2017) and barely made the
cutoff in the Tycho-2 catalog of Hipparcos (90% complete at V=11.5; Høg et
al., 2000). The poor Hipparcos astrometric precision, combined with the small
angular scale of the orbit of WASP-107 on the sky (10 - 30 $\mu$as), prevents
a detection of the outer planet using astrometry. Assuming future Gaia data
releases have the same astrometric precision as in DR2 ($44~{}\mu$as for
WASP-107), WASP-107c will be at the threshold of detectability using the full
five-year astrometric time series.
On the population level, the disk dispersal-driven model favors low-mass and
slowly rotating stars due to its dependence on the stellar quadrupole moment,
and also can explain eccentric polar orbits. Since nodal precession has no
stellar type preference nor a means of exciting eccentric orbits, measuring
the obliquities and eccentricities for a population of close-in Neptunes will
be essential for distinguishing which process is the dominant pathway to polar
orbits. Additionally a large population is needed to determine if the overall
distribution of planet obliquities is consistent with catching systems at
different stages of nodal precession, or if there are indeed two distinct
populations of aligned or polar close-in Neptunes. As these models all depend
on the presence of an outer giant planet, long-baseline RV surveys will be
instrumental for discovering the nature of any perturbing companions (e.g.
Rosenthal et al. submitted). Moreover RV monitoring of systems with small
planets that already have measured obliquities, but do not have mass
constraints or detected outer companions, will further expand this population.
Recent examples of such systems include Kepler-408b (Kamiaka et al., 2019), AU
Mic b (Palle et al., 2020), HD 63433 (b, Mann et al. 2020; and c, Dai et al.
2020), K2-25b (Stefánsson et al., 2020), and DS Tuc b (Montet et al., 2020;
Zhou et al., 2020). Comparing the proportions of systems with and without
companions which have inner aligned or misaligned planets will further
illuminate the likelihood of these different dynamical scenarios.
We thank Konstantin Batygin, Cristobol Petrovich, and Jerry Xuan for helpful
comments and productive discussions on orbital dynamics, and Josh Winn for
constructive feedback that improved this manuscript. R.A.R. and A.C.
acknowledge support from the National Science Foundation through the Graduate
Research Fellowship Program (DGE 1745301, DGE 1842402). C.D.D. acknowledges
the support of the Hellman Family Faculty Fund, the Alfred P. Sloan
Foundation, the David & Lucile Packard Foundation, and the National
Aeronautics and Space Administration via the TESS Guest Investigator Program
(80NSSC18K1583). I.J.M.C. acknowledges support from the NSF through grant
AST-1824644. D.H. acknowledges support from the Alfred P. Sloan Foundation,
the National Aeronautics and Space Administration (80NSSC18K1585,
80NSSC19K0379), and the National Science Foundation (AST-1717000). E.A.P.
acknowledges the support of the Alfred P. Sloan Foundation. L.M.W. is
supported by the Beatrice Watson Parrent Fellowship and NASA ADAP Grant
80NSSC19K0597. We thank the time assignment committees of the University of
California, the California Institute of Technology, NASA, and the University
of Hawai‘i for supporting the TESS–Keck Survey with observing time at the W.
M. Keck Observatory. We gratefully acknowledge the efforts and dedication of
the Keck Observatory staff for support of HIRES and remote observing. We
recognize and acknowledge the cultural role and reverence that the summit of
Maunakea has within the indigenous Hawaiian community. We are deeply grateful
to have the opportunity to conduct observations from this mountain.
Facility: Keck I (HIRES).
## References
* Allart et al. (2019) Allart, R., Bourrier, V., Lovis, C., et al. 2019, A&A, 623, A58
* Anderson et al. (2017) Anderson, D. R., Collier Cameron, A., Delrez, L., et al. 2017, A&A, 604, A110
* Bourrier et al. (2017) Bourrier, V., Cegla, H. M., Lovis, C., & Wyttenbach, A. 2017, A&A, 599, A33
* Bourrier et al. (2018) Bourrier, V., Lovis, C., Beust, H., et al. 2018, Nature, 553, 477
* Brewer et al. (2016) Brewer, J. M., Fischer, D. A., Valenti, J. A., & Piskunov, N. 2016, ApJS, 225, 32
* Butler et al. (1996) Butler, R. P., Marcy, G. W., Williams, E., et al. 1996, PASP, 108, 500
* Cubillos et al. (2017) Cubillos, P., Harrington, J., Loredo, T. J., et al. 2017, AJ, 153, 3
* Dai & Winn (2017) Dai, F., & Winn, J. N. 2017, AJ, 153, 205
* Dai et al. (2020) Dai, F., Roy, A., Fulton, B., et al. 2020, AJ, 160, 193
* Dalal et al. (2019) Dalal, S., Hébrard, G., Lecavelier des Étangs, A., et al. 2019, A&A, 631, A28
* Dalba et al. (2020) Dalba, P. A., Gupta, A. F., Rodriguez, J. E., et al. 2020, AJ, 159, 241
* De Rosa et al. (2020) De Rosa, R. J., Dawson, R., & Nielsen, E. L. 2020, A&A, 640, A73
* Esposito et al. (2014) Esposito, M., Covino, E., Mancini, L., et al. 2014, A&A, 564, L13
* Fabrycky & Tremaine (2007) Fabrycky, D., & Tremaine, S. 2007, ApJ, 669, 1298
* Ford (2006) Ford, E. B. 2006, ApJ, 642, 505
* Foreman-Mackey (2016) Foreman-Mackey, D. 2016, JOSS, 1
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306
* Gelman et al. (2003) Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. 2003, Bayesian Data Analysis, 2nd edn. (Chapman and Hall)
* Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357–362
* Hébrard et al. (2008) Hébrard, G., Bouchy, F., Pont, F., et al. 2008, A&A, 488, 763
* Hirano et al. (2011) Hirano, T., Suto, Y., Winn, J. N., et al. 2011, ApJ, 742, 69
* Hodžić et al. (submitted) Hodžić, V. K., Triaud, A. H. M. J., Cegla, H. M., Chaplin, W. J., & Davies, G. R. submitted, MNRAS
* Høg et al. (2000) Høg, E., Fabricius, C., Makarov, V. V., et al. 2000, A&A, 355, L27
* Howard et al. (2010) Howard, A. W., Johnson, J. A., Marcy, G. W., et al. 2010, ApJ, 721, 1467
* Howell et al. (2014) Howell, S. B., Sobeck, C., Haas, M., et al. 2014, PASP, 126, 398
* Huang et al. (2017) Huang, C. X., Petrovich, C., & Deibert, E. 2017, AJ, 153, 210
* Huber et al. (2013) Huber, D., Carter, J. A., Barbieri, M., et al. 2013, Science, 342, 331
* Hunter (2007) Hunter, J. D. 2007, CSE, 9, 90
* Kamiaka et al. (2019) Kamiaka, S., Benomar, O., Suto, Y., et al. 2019, AJ, 157, 137
* Kirk et al. (2020) Kirk, J., Alam, M. K., López-Morales, M., & Zeng, L. 2020, AJ, 159, 115
* Kiseleva et al. (1998) Kiseleva, L. G., Eggleton, P. P., & Mikkola, S. 1998, MNRAS, 300, 292
* Kostov et al. (2016) Kostov, V. B., Moore, K., Tamayo, D., Jayawardhana, R., & Rinehart, S. A. 2016, ApJ, 832, 183
* Kozai (1962) Kozai, Y. 1962, AJ, 67, 591
* Kreidberg et al. (2018) Kreidberg, L., Line, M. R., Thorngren, D., Morley, C. V., & Stevenson, K. B. 2018, ApJL, 858, L6
* Lidov (1962) Lidov, M. 1962, Planet. Space Sci., 9, 719
* Mann et al. (2020) Mann, A. W., Johnson, M. C., Vanderburg, A., et al. 2020, AJ, 160, 179
* Mardling (2010) Mardling, R. A. 2010, MNRAS, 407, 1048
* Masuda & Winn (2020) Masuda, K., & Winn, J. N. 2020, AJ, 159, 81
* Masuda et al. (2020) Masuda, K., Winn, J. N., & Kawahara, H. 2020, AJ, 159, 38
* McLaughlin (1924) McLaughlin, D. B. 1924, ApJ, 60, 22
* Millholland et al. (2020) Millholland, S., Petigura, E., & Batygin, K. 2020, ApJ, 897, 7
* Montet et al. (2020) Montet, B. T., Feinstein, A. D., Luger, R., et al. 2020, AJ, 159, 112
* Močnik et al. (2017) Močnik, T., Hellier, C., Anderson, D. R., Clark, B. J. M., & Southworth, J. 2017, MNRAS, 469, 1622
* Oklopčić & Hirata (2018) Oklopčić, A., & Hirata, C. M. 2018, ApJL, 855, L11
* Palle et al. (2020) Palle, E., Oshagh, M., Casasayas-Barris, N., et al. 2020, A&A, 643, A25
* Petrovich et al. (2020) Petrovich, C., Muñoz, D. J., Kratter, K. M., & Malhotra, R. 2020, ApJL, 902, L5
* Piaulet et al. (2021) Piaulet, C., Benneke, B., Rubenzahl, R. A., et al. 2021, AJ, 161, 70
* Powell (1964) Powell, M. J. D. 1964, The Computer Journal, 7, 155
* Queloz et al. (2000) Queloz, D., Eggenberger, A., Mayor, M., et al. 2000, A&A, 359, L13
* Queloz et al. (2010) Queloz, D., Anderson, D., Collier Cameron, A., et al. 2010, A&A, 517, L1
* Rein & Liu (2012) Rein, H., & Liu, S.-F. 2012, A&A, 537, A128
* Rein & Tamayo (2015) Rein, H., & Tamayo, D. 2015, MNRAS, 452, 376
* Rossiter (1924) Rossiter, R. A. 1924, ApJ, 60, 15
* Sanchis-Ojeda et al. (2013) Sanchis-Ojeda, R., Winn, J. N., Marcy, G. W., et al. 2013, ApJ, 775, 54
* Shporer & Brown (2011) Shporer, A., & Brown, T. 2011, ApJ, 733, 30
* Southworth (2011) Southworth, J. 2011, MNRAS, 417, 2166
* Spake et al. (2018) Spake, J. J., Sing, D. K., Evans, T. M., et al. 2018, Nature, 557, 68
* Spalding & Batygin (2015) Spalding, C., & Batygin, K. 2015, ApJ, 811, 82
* Stefánsson et al. (2020) Stefánsson, G., Kopparapu, R., Lin, A., et al. 2020, AJ, 160, 259
* Tamayo et al. (2019) Tamayo, D., Rein, H., Shi, P., & Hernandez, D. M. 2019, MNRAS, 491, 2885
* Valenti & Fischer (2005) Valenti, J. A., & Fischer, D. A. 2005, ApJS, 159, 141
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, NatMe, 17, 261
* Vogt et al. (1994) Vogt, S. S., Allen, S. L., Bigelow, B. C., et al. 1994, in Proc. SPIE, Vol. 2198, Instrumentation in Astronomy VIII, ed. D. L. Crawford & E. R. Craine, 362
* Winn et al. (2010a) Winn, J. N., Fabrycky, D., Albrecht, S., & Johnson, J. A. 2010a, ApJL, 718, L145
* Winn et al. (2010b) Winn, J. N., Johnson, J. A., Howard, A. W., et al. 2010b, ApJL, 723, L223
* Xuan & Wyatt (2020) Xuan, J. W., & Wyatt, M. C. 2020, MNRAS, 497, 2096
* Yee et al. (2017) Yee, S. W., Petigura, E. A., & von Braun, K. 2017, ApJ, 836, 77
* Yee et al. (2018) Yee, S. W., Petigura, E. A., Fulton, B. J., et al. 2018, AJ, 155, 255
* Zhou et al. (2020) Zhou, G., Winn, J. N., Newton, E. R., et al. 2020, ApJ, 892, L21
# Relief and Stimulus in a Cross-sector Multi-product
Scarce Resource Supply Chain Network
Xiaowei Hu (corresponding author)<EMAIL_ADDRESS>and Peng Li<EMAIL_ADDRESS>
Transportation Research Part E: Logistics and Transportation Review (2022), Volume 168, pp 102932. https://doi.org/10.1016/j.tre.2022.102932
Received 25 May 2022; Revised 31 August 2022; Accepted 3 October 2022
Abstract
In an era of growing population, systemic global change, and rising risk of
crises, humanity faces an unprecedented challenge of resource scarcity.
Confronting and addressing the issues of scarce resource conservation,
competition, and stimulation, by grappling with the resource's characteristics
and adopting viable policy instruments, demands the decision-maker's attention
with paramount priority. In this paper, we develop the
first general decentralized cross-sector supply chain network model that
captures the unique features of scarce resources under a unifying fiscal
policy scheme. We formulate the problem as a network equilibrium model using
finite-dimensional variational inequality theory. We then characterize the
network equilibrium with a set of classic theoretical properties, as well as
with a set of properties that are novel to the network games application
literature, namely, the lowest eigenvalue of the game Jacobian. Lastly, we
provide a series of illustrative examples, including a medical glove supply
network, to showcase how our model can be used to investigate the efficacy of
the imposed policies in relieving supply chain distress and stimulating
welfare. Our managerial insights inform and expand the political dialogues on
fiscal policy design, public resource legislation, social welfare
redistribution, and supply chain practice toward sustainability.
Keywords: networks; resource scarcity; game theory; supply chain management;
fiscal policy
This research did not receive any specific grant from funding agencies in the
public, commercial, or not-for-profit sectors.
## 1 Introduction
Humanity depends on the supply of an array of essential resources. These
resources, such as agricultural crops, fisheries, wildlife, forests,
petroleum, metals, minerals, even air, soil, and water, are not only critical
to the flourishing of humanity but also considered assets for business
operations, product innovation, and government affairs (Rosenberg, 1973;
Wagner and Light, 2002; Krautkraemer, 2005). The impact of resource scarcity
has also become a popular theme impelling some of the futuristic opinions and
critiques on potential societal collapse and humanity's trajectory (see also,
Friedman (2009a, b), Gilding (2012), inter alia). Therefore, meeting resource
demands is a substantive economic, commercial, and societal matter for
decision-makers.
However, modern history has never been immune to resource scarcity; in fact,
we are experiencing shortages of greater magnitude and frequency. In the
1970s, the western world faced oil crises, which had a lasting impact on the
United States and extended from the oil shortage of 1979 to the farm crisis of
the 1980s, resulting in low crop prices, foreclosed farmlands, and thereby low
farm income (Corbett, 2013; FDIC, 1997). Since the 2000s, many regions around
the world, including North and South Africa, South and Central Asia, and parts
of the U.S., have been under enormous water stress. For instance, California
declared its driest year in 2013, and again in 2021 (Worland, 2014; Jones,
2021). Since 2019, induced by the COVID-19 global pandemic, the world has
experienced an overwhelming shortage of commodities, ranging from raw
materials such as lumber, precious metals, and labor, to end-products such as
medical supplies, semiconductor chips, grocery items, and household cleaning
products (Gasparro et al., 2020; Chou, 2020; Valinsky, 2021). As an Economist
(2021) article summarizes: we are in a shortage economy.
Some of the key causes of resource scarcity, as pointed out by the Population
Reference Bureau, include supply, demand, and structure (UN, 2012).
Today, with the world’s population projected to reach 8.5 billion in 2030 (UN,
2019), the demand for the planet’s scarce resources continues to grow at a
staggering rate. Meanwhile, the industrialization in the past two centuries
has ended the “ever-cheaper” commodity resources era and set the fierce
competition with rising prices for the “ever-scarcer” resources as the new
norm (Krautkraemer, 1998). Furthermore, traumatic crises, such as the
aforementioned COVID-19 pandemic, also induce the scarcity of specific
supplies.
As a means to relieve resource shortages, stimulate growth, and influence
economic outcomes, especially during critical times, supply-side fiscal
policies are commonly used by governments as strategic instruments. The fiscal
policy describes changes to government spending and revenue behavior (Keynes,
1936). In the energy crisis of the 1970s, the U.S. Senate advanced supply-side
fiscal incentives to mitigate the ramification of the energy problems (Long,
1973). Those policies included investment credits for domestic exploratory
drilling, research and development on the commercial exploitation of
alternative energy sources. In the global financial crisis of 2007-2008, the
U.S. Congress passed the $787 billion American Recovery and Reinvestment Act
in 2009, after the $125 billion provided by the Economic Stimulus Act of 2008
(Davig and Leeper, 2011). In the COVID-19 pandemic of 2020, worldwide
governments adopted economic packages including fiscal, monetary, and
financial policy measures to mitigate the negative effects of the public
health crisis on the economy and to sustain public welfare (Gourinchas, 2020).
To date, the estimated global stimulus effort has totaled $10.4 trillion
(Economist, 2021), with $5.2 trillion by the U.S. (Romer and Romer, 2021).
Monitoring global fiscal policy in the wake of the COVID-19 pandemic, the IMF
(2021) sees the fiscal stimulus in many advanced economies (e.g., the European
Union and the U.S.) beginning to make their economies more productive,
equitable, and sustainable.
From the industrial standpoint of a scarce resource value chain, supply-side
fiscal policies, such as production incentives, tax credits, or expansionary
funds, serve as a cost reduction and revenue increase for firms in the short
term and support entrepreneurial processes in the long term (Arestis and
Sawyer, 2003; Braunerhjelm, 2022). In particular, the provision of fiscal
policies to a resource sector in scarcity, directly or indirectly, can have a
pronounced preserving effect on the management of the targeted sector and the
broader economy (Young, 2015). For example, in the face of groundwater
depletion in Indian states such as Punjab and Gujarat, farmers are given free
access to electricity to pump groundwater, compensating for the massive
underinvestment in water-saving technologies. Such a benefit also has a
profound impact on the resilience of the supply chain. Today, the
emerging resource scarcity, especially from the latest pandemic, has served as
a wake-up call by exposing the vulnerability of supply chains. Along with
overwhelming pleas for fiscal stimulus in the popular press (Blinder, 2021;
Winegarden, 2021; Smith, 2021), it bears merit to investigate how the
supply-side fiscal policies could relieve the shortage and stimulate the
social outcome.
When resource scarcity extends beyond a single resource type, governments must
identify their optimal grand strategies. Often, however, policies and
regulations alone can inadvertently send sub-optimal signals with respect to
economic or environmental concerns. As a response to the need for
expanding and informing political dialogues, growing acknowledgment of the
cross-sector consideration such as the well-known water-energy-food (WEF)
nexus concept (Hoff, 2011) (see also, Bazilian et al. (2011) and the reference
therein) has emerged in the past decade. Similarly, the cross-sector construct
considers key issues in each commodity sector’s security through a holistic
lens in order to achieve long-term sustainability (cf. Biggs et al. (2015)).
For instance, the discussions on integrated water resources management have
now emphasized issues through analyses of inter-sectoral competition for
surface freshwater resources, integration of water management at farm, system,
and basin scales, or balancing tradeoffs with electricity generation and fuel
supply (Kurian, 2017; Zhang and Vesselinov, 2016). To date, the cross-sectoral
approach has tended toward technical assessments to enable knowledge and
information sharing, support intersectoral cooperation, and enhance
productivity and synergies (Biggs et al., 2015; Olawuyi, 2020).
In this paper, we construct a general supply chain network equilibrium (SCNE)
model to better understand scarce resources vis-à-vis conservation,
competition, transportation, and policy intervention, by consolidating a
resource's innate scarcity with its supply-demand-driven shortage (more in
§2.1). Our supply chain network model incorporates the interdependency of the
scarce resources by allowing for cross-resource sector links. With the
inclusion of multiple resource products that are differentiated by markets,
such a network emulates the scarce resource’s heterogeneity feature in a
similar approach by Zhang (2006). The interdependency and heterogeneity are
two features inspired by Pfeffer and Salancik (2003), and Hunt (2000).
Furthermore, in the model, we install a unifying supply-side fiscal policy
scheme in relation to the scarce resource ownership and production. We assess
the efficacy of utilizing incentives and taxes in addressing the supply chain
shortage from an industrial organization’s viewpoint. Also, we incorporate a
non-cooperative behavior of decision-makers, considering the unique ownership
of the resources and the nature of quantity competition under the context of
scarcity. We believe such behavior is best represented by a decentralized
network, a construct known for its generality, efficiency, and ability to
synthesize local information in supply chains (see also, Lee and Billington
(1993), inter alia). Finally, in illustration, we provide a series of
numerical examples, including one that concerns medical personal protective
equipment (PPE) supplies, to illustrate the utility of our model.
The remainder of this paper is organized as follows. Section 2 is a literature
review and the contributions of our work. Section 3 presents the scarce
resource supply chain network model with fiscal policies and its mathematical
formulation. In Section 4, we show the solution concept of the model. Section
5 establishes the theoretical properties of the equilibrium in the context of
both classic and novel network game studies. Section 6 proposes an algorithmic
procedure to find the network equilibrium. Section 7 demonstrates the utility
of the model with small-scale illustrative examples. Section 8 provides the
decision-makers and practitioners with real-world managerial insights. Lastly,
Section 9 highlights our study and offers foresight on its extensions.
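Section 5's characterization of the equilibrium partly hinges on the lowest eigenvalue of the game Jacobian: if the smallest eigenvalue of the symmetrized Jacobian of the variational inequality map is positive over the feasible set, the map is strictly monotone and the equilibrium is unique. As an illustration only, the following sketch probes that quantity numerically on a hypothetical two-player affine game (the map `F`, matrix `A`, and vector `b` are placeholders, not the paper's model):

```python
import numpy as np

def jacobian(F, x, eps=1e-6):
    """Finite-difference Jacobian of the VI map F at x."""
    n = x.size
    J = np.zeros((n, n))
    f0 = F(x)
    for i in range(n):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (F(xp) - f0) / eps
    return J

def lowest_eigenvalue(F, x):
    """Smallest eigenvalue of the symmetrized game Jacobian at x.

    Positivity at all feasible x implies strict monotonicity of F
    and hence uniqueness of the network equilibrium."""
    J = jacobian(F, x)
    sym = 0.5 * (J + J.T)
    return float(np.min(np.linalg.eigvalsh(sym)))

# Hypothetical affine game map F(x) = A x + b (Jacobian is A).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, -2.0])
F = lambda x: A @ x + b

lam = lowest_eigenvalue(F, np.zeros(2))
print(lam > 0)  # strictly monotone for this A
```

For an affine map the Jacobian is constant, so a single evaluation suffices; for the paper's nonlinear cost functions the check would be repeated over sampled feasible points.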
## 2 Literature review
### 2.1 Scarce resources
The broader studies on resource scarcity lie primarily in the fields of
economics and management, each of which gives a self-containing viewpoint
toward the characteristics of scarce resources. On one hand, the equilibrium
and monopoly theories of Adam Smith, John Stuart Mill, and John Nash, as well
as the rent and location theories of Alfred Marshall and Alfred Weber, tie the
scarcity to the location and quality of resources needed for industrial and
agricultural purposes (Weber and Friedrich, 1929). On the other hand, among
the rising organizational theories, the Resource Dependence Theory (Pfeffer
and Salancik, 2003) claims the importance of resources by which owners,
producers, and suppliers are connected and interdependent (cf. Caniëls and
Gelderman (2007)), whereas the Resource Advantage Theory (Hunt, 2000) posits
resources' features, including their demand heterogeneity and roles, in a firm
setting. Overall, from an economic perspective, scarce resources can be
characterized as of oligopolistic supply-side power, heterogeneous intra-
sector demand, and low price-elasticity of products. From a management
perspective, the possession of scarce resources is indicative of a firm’s
competitive advantage and negotiation power. The coalescence of these two
avenues of studies in resource scarcity has yet to occur in literature.
Scholarly interest in the interconnectedness of scarce resources began to
agglomerate after the World Economic Forum report introduced the WEF nexus as
a novel concept (Hoff, 2011). Nearly all WEF-related
literature that we are aware of has sought integrated solutions or suggested
the interconnection of resources. For instance, Bai et al. (2016) investigated
the interplay between agricultural and fuel products; Zhang and Vesselinov
(2016) studied the interaction between water and energies; Bakker et al.
(2018) exemplified a regional economy with water and agriculture in focus; Mun
et al. (2021) provided an expansionary solution to the hydro networks for
energy, irrigation and flood control for developing countries. Almost all of
these works, at the same time, amalgamate with agricultural supply chain
management (SCM), in which resource scarcity was associated with a range of
societal and economic issues.
In the tumultuous times of critical resource shortages, especially in the wake
of the COVID-19 pandemic, a wealth of studies have emerged to demonstrate the
utility of quantitative supply chain models for the urgency of healthcare,
agricultural, logistical, and humanitarian challenges. For example, Kazaz et
al. (2016) developed a malaria medicine supply chain model to improve supply
and reduce price volatility via interventions. Arifoğlu and Tang (2022)
devised a decentralized system to combat the frequent supply and demand
mismatches of the vaccines. Both Chen and Tang (2015) and Liao et al. (2019)
used stylized models to examine the impact of information provision policies
on farmers in developing countries. Yu et al. (2020) constructed three-echelon
supply chain models to study donors’ optimal subsidy strategies for the
development of under-developed areas. Chintapalli and Tang (2021) modeled,
analyzed, and evaluated the impact of a credit-based scheme on the welfare of
small-scale, risk-averse farmers. Shen and Sun (2021) explained the impact of
the pandemic on supply chain resilience, identified the challenges that retail
supply chains had experienced in China, and showcased the practical response
of JD.com throughout the pandemic.
### 2.2 Fiscal policies
The research of fiscal policies such as economic incentives in behavioral
economics is also related to our work. Under such subdomain, the design of
fiscal policies often becomes a delicate and complex matter when presented
with a decentralized system of multiple decision-making agents (cf. Arrow and
Kruz (2013)). Generally, to assess the interplay of incentives and behaviors,
microeconomic approaches are often adopted (see also, Frey and Jegen (2001),
Huck et al. (2012), inter alia). With such approaches, incentives may be
linked to the network topology and its design (cf. Belhaj and Deroïan (2013),
Jackson and Zenou (2015)). For instance, Calvo-Armengol and Jackson (2004)
showed that education subsidies and other labor market regulation policies
display local increasing returns due to the network structure. Hu et al.
(2019) studied the price fluctuation in agricultural markets by discussing the
role a carefully designed preseason buyout contract plays in individual
farmer’s welfare. Srai et al. (2022) explored how the interplay between
competing and coexisting policy regimens can affect the supply dynamics
between producers, customers, and their intermediaries in a supply network.
As noted before, today’s fragmentation and globalization of supply chains give
impetus to new value-adding opportunities with the inclusion of supply-side
fiscal policies. A plethora of recent works is dedicated to designing smart
tax- and incentive-based supply chain schemes to promote new energies, foster
sustainability, and develop e-commerce, health systems, or broader economies.
For example, Alizamir et al. (2019) analyzed the impact of two U.S. farmer’s
subsidy programs on consumers, farmers, and the government. Jiang et al.
(2021) studied the government’s penalty provision in a bioenergy supply chain.
Tao et al. (2020) evaluated the impact of various carbon policies on planning
and operation decisions in emerging markets. Xu and Choi (2021) investigated
the effect of the blockchain technology on the manufacturer’s operational
decisions under cap-and-trade regulation. Arifoğlu and Tang (2022) developed a
two-sided incentive program that proposes “vaccination incentives” to be given
to both the demand and supply side to combat the frequent supply and demand
mismatches. Levi et al. (2022) analyzed the effectiveness of various
government interventions in improving consumer welfare under an artificial
shortage in agricultural supply chains.
### 2.3 Supply chain network equilibrium models
Also akin to this study are the supply chain network models with multi-agent
decision-making, in which case, game theory is often used (e.g., Hu et al.
(2021), inter alia). Among the wealth of existing literature, the SCNE models,
inaugurated by Nagurney et al. (2002), permit one to represent the
interactions between decision-makers in the supply chain for general
commodities in terms of connections, flows, and prices. Since then, this sub-
domain has proliferated to a wide range of applications with timely interests.
In conjunction with resource scarcity and shortages, we note, for instance,
that Wu et al. (2006) and Matsypura et al. (2007) modeled power transmissions;
Masoumi et al. (2012) and Dutta and Nagurney (2019) each modeled the supply
and demand of a unique medical item; Besik and Nagurney (2017) and Wu et al. (2018) modeled
the logistics of fresh produce in the scope of their qualities and
preservation. In conjunction with supply-side fiscal policies, on the other
hand, we are aware that Yu et al. (2018), Wu et al. (2019), and Yang et al.
(2021, 2022), for example, captured the design of an environmental tax or
subsidy policy in a green supply chain. Nagurney et al. (2019a, b) studied how
global trade policies could impact the suppliers and markets.
To date, although research on SCNE models is well developed, new studies
continue to proliferate. In surveying SCNE studies since 2020, we identify a
few trends in their application. First, the Chinese e-commerce
industry has evolved rapidly with explosive consumer traffic in online
shopping, payment, marketing, and services, as the works of Zhang et al.
(2020), Guoyi et al. (2020), and Chen et al. (2020), all affiliated with
Chinese institutions, demonstrated the opportunities and utilities of such
model in this industry. Second, the studies in socially responsible practices,
led by firms’ environmentally conscientious strategies, have reflected the
zeitgeist of our renewed societal values. In particular, Chen et al. (2020),
Zhang et al. (2021), and Yang et al. (2021, 2022) all showcased that
corporate-level efforts like de-carbonization have embodied today’s global
supply chain practitioners’ response. Last but not least, the COVID-19
pandemic has evidently created a host of complex problems in the logistics of
medical supplies and labor, among which, for example, are the works by
Nagurney (2021a, b), inter alia.
### 2.4 Research gap and current contribution
While most of the literature in SCM focuses on either a category of general
products or one specific type, it remains largely sparse on the
interdependence of firms and the specific resource commodities they produce in
a cross-sector framework (cf. Bell et al. (2012)). Such a gap can be
exemplified by the papers referenced in §2.1, as they mainly emphasize one or
two specific products. To the best of our knowledge, no study in SCM explores
the characteristics of resource scarcity and shortage conditions from both
economic and management perspectives. A
systematic review by Matopoulos et al. (2015) has underscored a need for
further research on understanding the implications of resource scarcity for
supply chain relationships and their impact on supply chain configurations.
Moreover, the fiscal policy scheme we implement in our model is inclusive of
both incentives and taxes. We also set forth our scope of policy calibration
as the broader social outcome and firm’s collective profits, instead of
specific performance measures (e.g., prices and quantities), as what appeared
in the referenced papers in §2.2. The coalescence of these two aspects has
been limited in the theoretical literature on supply chain networks. Such a
topic would provide invaluable managerial insights and should, therefore,
merit attention. The major contributions of this paper are
summarized as follows:
1. The construction of the first decentralized 4-tier multi-product scarce resource SCNE model with supply-side fiscal instruments. The model can handle not only the supply chains of scarce resources but also those of general resources that exhibit the aforementioned characteristics of scarcity.
2. The first work to examine a general supply-side fiscal policy that encompasses both economic incentives and taxes in one model, with the social outcome as a calibration.
3. A rigorous variational inequality formulation of the modeled general Nash equilibrium problem, arising from the consideration of both competition and resource capacities, and the establishment of a set of novel theoretical conditions to characterize the network equilibria.
4. Examples and the resulting managerial insights for governments and supply chain practitioners on scarce resource management, fiscal policy design, shortage relief, and sustainable development.
Finally, we use Table 1 to compare and differentiate our work from other
closely related SCNE models mentioned in this section.
Table 1: A comparison of closely related SCNE articles (the Policy Instrument column distinguishes a single instrument from a unifying scheme)

| Article | Multi-product | Oligopolistic Competition | Capacitated | Single Policy Instrument | Unifying Policy Instrument |
|---|---|---|---|---|---|
| Masoumi et al. (2012) | | $\bullet$ | | | |
| Besik and Nagurney (2017) | | $\bullet$ | | | |
| Yu et al. (2018) | | $\bullet$ | | $\bullet$ | |
| Yang et al. (2021) | | $\bullet$ | | $\bullet$ | |
| Yang et al. (2022) | | $\bullet$ | | $\bullet$ | |
| Zhang (2006) | $\bullet$ | | | | |
| Nagurney and Dutta (2021) | $\bullet$ | | | | |
| Wu et al. (2006) | | | | $\bullet$ | |
| Nagurney et al. (2019b) | | | | $\bullet$ | |
| Wu et al. (2019) | | | | $\bullet$ | |
| Hu and Qiang (2013) | | | $\bullet$ | | |
| Nagurney et al. (2017) | | | $\bullet$ | | |
| This Paper | $\bullet$ | $\bullet$ | $\bullet$ | | $\bullet$ |
## 3 The cross-sector multi-product scarce resource supply chain network model
In the network, there are $I$ types of scarce resources, denoted
$i=1,...,I$. We assign $i$ an alias $j$ in order to use it in different tiers;
naturally, $j=1,...,I$. Each type of resource is affiliated with $N_{i}$
owners, $M_{j}$ producers, and $S_{j}$ suppliers. Unfinished and end-products
throughout the entire supply chain network are assumed to be homogeneous
within each resource type. See Figure 1 for the network topology. Later, when
referring to a certain type of resource, we will omit the phrase "type" or
"type of".
Figure 1: A cross-sector scarce resource supply chain network model
The first tier of nodes in the network represents the resource owner. For
instance, a soybean-producing farmer is an owner of a resource type "soybean
products". As the stakeholder of a scarce resource, an owner may supply such a
resource to any producer. A typical resource owner is denoted as $(i,n)$,
where $n=1,...,N_{i}$.
The second tier of nodes represents the resource producer. An example of a
resource producer can be a food manufacturer who uses the soybeans purchased
from the farmers as raw material and produces soybean-related foods. A
resource producer is able to obtain any type of resource from any owner, as
stated above, but is only able to sell and ship its products to the suppliers
who are specialized in the same type of resource. A typical resource producer
is denoted as $(j,m)$, where $m=1,...,M_{j}$.
The third tier of nodes represents the resource supplier. A resource supplier
can only obtain and supply one type of resource to all markets. A soybean
supplier, for instance, does not supply any other resources. A typical
resource supplier is denoted as $(j,s)$, where $s=1,...,S_{j}$.
The fourth tier of nodes represents the physical markets. A typical physical
market, consuming some resources from its suppliers, is denoted as $k$, where
$k=1,...,K$. Each supplier has $T_{j}$ transportation modes, indexed
$t=1,...,T_{j}$, to ship a resource product to any physical market. To emulate
the heterogeneity of the scarce resources characterized by Hunt (2000), we
assume that each market has its own perception of the same resource, but is
unable to distinguish the same resource from different suppliers or
transportation modes. We also assume all markets are competitive and, thereby,
price-taking.
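The four-tier topology above translates directly into index sets and flow variables: owners of any resource may ship to any producer, producers ship only to suppliers of the same resource, and suppliers ship to every market via their transportation modes. A minimal sketch of that structure, with hypothetical dimensions (not taken from the paper's examples):

```python
# Hypothetical dimensions: I resources, K markets, and per-resource
# counts of owners N, producers M, suppliers S, and transport modes T.
I, K = 2, 3
N = [2, 2]   # owners per resource i
M = [2, 1]   # producers per resource j
S = [1, 2]   # suppliers per resource j
T = [2, 1]   # transportation modes per resource j

# Tier 1 -> 2: owners of any resource i may ship to producers of any j.
x_owner_producer = {(i, n, j, m): 0.0
                    for i in range(I) for n in range(N[i])
                    for j in range(I) for m in range(M[j])}

# Tier 2 -> 3: producers of resource j ship only to suppliers of the same j.
x_producer_supplier = {(j, m, s): 0.0
                       for j in range(I) for m in range(M[j])
                       for s in range(S[j])}

# Tier 3 -> 4: suppliers of resource j ship to every market k via mode t.
x_supplier_market = {(j, s, t, k): 0.0
                     for j in range(I) for s in range(S[j])
                     for t in range(T[j]) for k in range(K)}

# Link counts follow directly from the tier structure, e.g. the first
# tier has (sum of N) * (sum of M) owner-producer links.
print(len(x_owner_producer), len(x_producer_supplier), len(x_supplier_market))
```

Keying flows by index tuples mirrors the paper's notation ($x^{in}_{jm}$, $x^{jm}_{s}$, $x^{js}_{tk}$) and makes the cross-sector links of the first tier, versus the same-sector restriction of the later tiers, explicit in the comprehension bounds.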
We consider the network to be decentralized, i.e., each firm of a given tier
competes with all others in a non-cooperative fashion to maximize its own
profit by determining its optimal production quantity and shipments, given the
simultaneous best response of all competitors. All of the notations regarding
the model are listed in Table 2.
Table 2: Notations of the model
Notation | Definition
---|---
Indices: |
$i,j$ | The index of resource. $j$ is an alias of $i$. $i,j\in\\{1,...,I\\}$.
$n$ | The index of resource owner $i$. $n\in\\{1,...,N_{i}\\}$.
$m$ | The index of resource producer $j$. $m\in\\{1,...,M_{j}\\}$.
$s$ | The index of resource supplier $j$. $s\in\\{1,...,S_{j}\\}$.
$t$ | The index of transportation mode of resource product $j$. $t\in\\{1,...,T_{j}\\}$
$k$ | The index of a physical market. $k\in\\{1,...,K\\}$.
$g$ | The index of an incentive payment bracket. $g\in\\{1,...,G\\}$.
Parameters: |
$U_{i}$ | The total amount of resource $i$. $U\equiv\\{U_{1},U_{2},...,U_{I}\\}$ is a column vector.
$A^{i}_{g}$ | The $g$’th incentive payment bracket of resource owners $(i,n)$
$B^{j}_{g}$ | The $g$’th incentive payment bracket of resource producers $(j,m)$
$\psi^{in}_{jm}$ | The non-negative conversion rate at resource producers $(j,m)$ from owner $(i,n)$
$w_{ts}$ | The weight on market demand over supplier $s$ via transportation mode $t$
Variables: |
$x^{in}_{jm}$ | The flow between resource owner $(i,n)$ and producer $(j,m)$.
$x^{jm}_{s}$ | The flow between resource producer $(j,m)$ and supplier $(j,s)$.
$x^{js}_{tk}$ | The flow between resource supplier $(j,s)$ and market $k$ via transportation mode $t$.
$x^{in},x_{jm},x^{js}$ | The column vectors that collect $x^{in}_{jm}$, $x^{jm}_{s}$, $x^{js}_{tk}$, respectively.
$d_{jk}$ | Demand of resource $j$ at market $k$.
$p^{in}_{0jm}$ | The transaction price between resource owner $(i,n)$ and producer $(j,m)$.
$p^{jm}_{1s}$ | The transaction price between resource producer $(j,m)$ and supplier $(j,s)$.
$p^{js}_{2tk}$ | The sales price by supplier $(j,s)$ for market $k$ via transportation mode $t$.
$p^{js}_{3tk}$ | The transaction price at market $k$ from supplier $(j,s)$ via transportation mode $t$.
$\delta^{in}_{g}$ | Resource owner $(i,n)$’s excess of output quantity over bracket $g$.
$\delta^{jm}_{g}$ | Resource producer $(j,m)$’s excess of output quantity over bracket $g$.
$\lambda^{0}_{i}$ | The Lagrange multipliers associated with constraint (9).
$\lambda^{1}_{jm}$ | The Lagrange multipliers associated with constraint (18).
$\lambda^{2}_{js}$ | The Lagrange multipliers associated with constraint (24).
$\mu^{0}_{ing}$ | The Lagrange multipliers associated with constraint (10).
$\mu^{1}_{jmg}$ | The Lagrange multipliers associated with constraint (19).
$\pi_{in},\pi_{jm},\pi_{js}$ | The profit of resource owner $(i,n)$, producer $(j,m)$, and supplier $(j,s)$, respectively.
$CS_{jk}$ | The consumer surplus of resource product $j$ at market $k$.
$SW$ | Social welfare of the entire supply chain network.
Functions: |
$\alpha^{i}_{g}(.)$ | The incentive payment to resource owners in bracket $g$.
$\beta^{j}_{g}(.)$ | The incentive payment to resource producers in bracket $g$.
$f^{in}=f^{in}(x^{in})$ | The operating cost incurred by resource owner $(i,n)$ in terms of outgoing shipments.
$c^{in}_{jm}=c^{in}_{jm}(x^{in}_{jm})$ | The transaction cost incurred by resource owner $(i,n)$ with producer $(j,m)$.
$f^{jm}=f^{jm}(x_{jm})$ | The operating cost incurred by producer $(j,m)$ in terms of incoming shipments.
$c^{jm}_{s}=c^{jm}_{s}(x^{jm}_{s})$ | The transaction cost incurred by producer $(j,m)$ with supplier $(j,s)$.
$f^{js}=f^{js}(x^{js})$ | The operating cost incurred by resource supplier $(j,s)$ in terms of outgoing shipments.
$c^{js}_{tk}=c^{js}_{tk}(x^{js}_{tk})$ | The transaction cost incurred by supplier $(j,s)$ for market $k$ via transportation mode $t$.
$\hat{c}^{js}_{tk}=\hat{c}^{js}_{tk}(x^{js}_{tk})$ | The transaction cost incurred by market $k$ with supplier $(j,s)$ via transportation mode $t$.
$p^{j}_{3k}=p^{j}_{3k}(d_{jk})$ | The price-demand function of resource $j$ at market $k$.
### 3.1 The fiscal policy scheme
For the ease of conceptualization, in our model, we apply a supply-side fiscal
incentive to resource owners and producers. We design a piece-wise scheme to
represent some of the most commonly used governmental incentive policies. The
incentive scheme devised here can easily be expanded to a general fiscal
policy to include taxation.
We now define the incentive scheme. Let $A^{i}_{g}$ denote the cutoff bracket
of the incentive on resource $i$, with $g$ ranging over $1,...,G$. Let
$\delta^{in}_{g}$ denote the resource owner $(i,n)$’s output quantity excess
over the bracket $A^{i}_{g}$. We assume the incentives given to each entity to
be a series of linear functions affected only by its output quantity, as is the case for most
commonly used governmental economic policies (cf. Sadka (1976); Huck et
al. (2012); Yu et al. (2018), inter alia). As such, we define the incentive
payment function as
$\alpha^{i}_{0}(.)+\sum^{G}_{g=1}\alpha^{i}_{g}(.),$ (1)
where $\alpha^{i}_{0}(.)$ is usually non-negative, and both $\alpha^{i}_{0}(.)$
and $\alpha^{i}_{g}(.)$ are linear. For an arbitrary output quantity $x$,
the payment scheme for the resource owners is piece-wise linear in shape and
can be expressed as:
If $0\leq x<A^{i}_{1}$, then the incentive payment is $\alpha^{i}_{0}(x)$;
If $A^{i}_{1}\leq x<A^{i}_{2}$, then $\delta^{in}_{1}=x-A^{i}_{1}$, and the
incentive payment is $\alpha^{i}_{0}(x)+\alpha^{i}_{1}(\delta^{in}_{1})$;
If $A^{i}_{2}\leq x<A^{i}_{3}$, then $\delta^{in}_{2}=x-A^{i}_{2}$, and the
incentive payment is
$\alpha^{i}_{0}(x)+\alpha^{i}_{1}(\delta^{in}_{1})+\alpha^{i}_{2}(\delta^{in}_{2})$;
If $A^{i}_{g}\leq x<A^{i}_{g+1}$, then $\delta^{in}_{g}=x-A^{i}_{g}$, and the
incentive payment is
$\alpha^{i}_{0}(x)+\alpha^{i}_{1}(\delta^{in}_{1})+\alpha^{i}_{2}(\delta^{in}_{2})+...+\alpha^{i}_{g}(\delta^{in}_{g})$;
If $x\geq A^{i}_{G}$, then the incentive payment is
$\alpha^{i}_{0}(x)+\alpha^{i}_{1}(\delta^{in}_{1})+\alpha^{i}_{2}(\delta^{in}_{2})+...+\alpha^{i}_{G}(\delta^{in}_{G})=\alpha^{i}_{0}(x)+\sum^{G}_{g=1}\alpha^{i}_{g}(\delta^{in}_{g})$.
Finally, a resource owner’s general incentive payment function in terms of
$\delta^{in}_{g}$ can be rewritten in a compact form as
$\alpha^{i}_{0}(x)+\sum^{G}_{g=1}\alpha^{i}_{g}(\delta^{in}_{g}),\quad
i=1,...,I,\quad n=1,...,N_{i},$ (2)
where,
$\delta^{in}_{g}=\max\\{x-A^{i}_{g},\ 0\\},\quad g=1,...,G.$ (3)
In particular, when $\alpha^{i}_{0}(.)$ is set to be a constant and
$\alpha^{i}_{g}(.)$ to be negative, expression (2) represents a regressive
incentive.111A regressive incentive is one whose marginal rate decreases (cf.
Sadka (1976)). An example of this incentive scheme is the United States
Economic Impact Payments of COVID-19.222See: https://www.irs.gov/coronavirus-tax-relief-and-economic-impact-payments.
Naturally, with properly chosen parameter values, expressions (2)-(3) can also
represent a general tax scheme.
We will use this scheme to express the fiscal policies throughout the rest of
the paper. Specifically, in §3.3, we define the incentives administered to the
resource producers analogously.
### 3.2 The resource owner’s problem
A typical resource $i$ has a total capacity of $U_{i}$, shared by all $N_{i}$
owners of the same resource. A resource owner $(i,n)$ strategically determines
its shipment quantity $x^{in}_{jm}$ to a resource producer $(j,m)$. We use
$Q^{0}$, a column vector with $Q^{0}\in
R_{+}^{\sum_{i=1}^{I}\sum_{j=1}^{I}N_{i}M_{j}}$, to group all $x^{in}_{jm}$.
We assume an owner incurs an operating cost of
$f^{in}=f^{in}(x^{in}),\ \forall i,n,$ (4)
where $x^{in}\equiv(x^{in}_{11},...,x^{in}_{jm},...,x^{in}_{IM_{j}})$, and a
transaction cost of
$c^{in}_{jm}=c^{in}_{jm}(x^{in}_{jm}),\ \forall i,n,j,m.$ (5)
The incentive payment received by the resource owner $(i,n)$ is
$\alpha^{i}_{0}(\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{in}_{jm})+\sum^{G}_{g=1}\alpha^{i}_{g}(\delta^{in}_{g}),\
\forall i,n,$ (6)
where the quantity excess is
$\delta^{in}_{g}=\max\Bigg{\\{}\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{in}_{jm}-A^{i}_{g},\
0\Bigg{\\}},\quad\forall g=1,...,G.$ (7)
We group all $\delta^{in}_{g}$ into a column vector $\mathfrak{\Delta}^{0}$,
with $\mathfrak{\Delta}^{0}\in R_{+}^{\sum_{i=1}^{I}GN_{i}}$, and assume all
of the cost functions above, i.e., $f^{in},\ c^{in}_{jm}$, to be continuously
differentiable.
The price of products between the resource owner $(i,n)$ and the producer
$(j,m)$ is denoted by $p^{in}_{0jm}$. We further denote the equilibrium of
such a price as $p^{in*}_{0jm}$. Hence, the total revenue of each owner is
$\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}p^{in}_{0jm}x^{in}_{jm}$.
As a profit-maximizer, each typical owner faces the following problem.
Maximize: $\displaystyle\
\pi_{in}=\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}p^{in}_{0jm}x^{in}_{jm}-f^{in}(x^{in})-\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}c^{in}_{jm}(x^{in}_{jm})+\alpha^{i}_{0}(\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{in}_{jm})+\sum^{G}_{g=1}\alpha^{i}_{g}(\delta^{in}_{g})$
(8) Subject to: $\displaystyle\
\sum_{n=1}^{N_{i}}\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}x^{in}_{jm}\leq U_{i},$
$\displaystyle(\lambda^{0}_{i})$ (9) $\displaystyle\
\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{in}_{jm}-\delta^{in}_{g}\leq
A^{i}_{g},\quad\forall g,$ $\displaystyle(\mu^{0}_{ing})$ (10) $\displaystyle\
x^{in}_{jm}\geq 0,\ \forall j,m,\qquad\delta^{in}_{g}\geq 0,\ \forall g.$
The constraint (9) is the resource capacity. The constraint (10), along with
$\delta^{in}_{g}\geq 0$, is the equivalence of expression (7).
$\lambda^{0}_{i}$ and $\mu^{0}_{ing}$ are the Lagrange multipliers333For the
details of this reformulation technique, see Bertsekas and Tsitsiklis (1989).
associated with inequality constraints (9) and (10), respectively. We
subsequently group all $\lambda^{0}_{i}$ and $\mu^{0}_{ing}$ into a column
vector $\lambda^{0}$ and $\mu^{0}$, with $\lambda^{0}\in R_{+}^{I}$ and
$\mu^{0}\in R_{+}^{\sum_{i=1}^{I}GN_{i}}$, respectively.
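To make the objective (8) concrete, the sketch below evaluates one owner's profit under assumed functional forms: a quadratic operating cost for $f^{in}$, a common linear per-shipment transaction cost, and the linear incentive pieces of §3.1. All functional forms and numbers are illustrative assumptions, not the paper's specification.

```python
# Hypothetical evaluation of a resource owner's profit pi_in as in (8).
# Assumed forms: f(x) = op_coef * (sum x)^2, c_jm(x) = trans_coef * x per
# shipment, and linear incentive pieces as in (2).

def owner_profit(prices, flows, cutoffs, base_rate, rates,
                 op_coef=0.1, trans_coef=0.05):
    total = sum(flows)                                   # total outgoing quantity
    revenue = sum(p * x for p, x in zip(prices, flows))  # sum p^{in}_{0jm} x^{in}_{jm}
    operating = op_coef * total ** 2                     # f^{in}(x^{in})
    transaction = sum(trans_coef * x for x in flows)     # sum c^{in}_{jm}(x^{in}_{jm})
    incentive = base_rate * total                        # alpha^i_0(total output)
    for A_g, r_g in zip(cutoffs, rates):                 # + sum_g alpha^i_g(delta_g)
        incentive += r_g * max(total - A_g, 0.0)
    return revenue - operating - transaction + incentive

# Two shipments x^{in}_{jm} at transaction prices p^{in}_{0jm}:
profit = owner_profit(prices=[2.0, 2.5], flows=[3.0, 4.0],
                      cutoffs=[5.0], base_rate=0.5, rates=[-0.1])
print(profit)
```

The capacity constraint (9) and the bracket constraints (10) are not enforced here; a feasible candidate flow is assumed as input.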
In this decentralized supply chain, the resource owners compete in a non-cooperative,
classic Cournot-Nash fashion (Cournot, 1838; Nash, 1950, 1951), whereby each
owner determines its optimal output given the simultaneously optimal outputs
of all competitors. As such, the optimal behavior
of all resource owners can be expressed as a variational inequality
(VI)444Gabay and Moulin (1980) were among some of the earliest landmark works
to establish the equivalence between the optimization problem and variational
inequality problem. See also, Nagurney (1999).: determine
$(Q^{0*},\mathfrak{\Delta}^{0*},\lambda^{0*},\mu^{0*})\in\mathcal{K}^{1}$,
satisfying:
$\displaystyle\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\Bigg{[}\frac{\partial
f^{in}(x^{in*})}{\partial x^{in}_{jm}}+\frac{\partial
c^{in}_{jm}(x^{in*}_{jm})}{\partial
x^{in}_{jm}}-p^{in*}_{0jm}-\frac{\partial\alpha^{i}_{0}(x^{in*}_{jm})}{\partial
x^{in}_{jm}}+\lambda^{0*}_{i}+\sum_{g=1}^{G}\mu^{0*}_{ing}\Bigg{]}$
$\displaystyle\times(x^{in}_{jm}-x^{in*}_{jm})$
$\displaystyle+\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{g=1}^{G}\bigg{[}-\frac{\partial\alpha^{i}_{g}(\delta^{in*}_{g})}{\partial\delta^{in}_{g}}-\mu^{0*}_{ing}\bigg{]}\times(\delta^{in}_{g}-\delta^{in*}_{g})+\sum_{i=1}^{I}\bigg{[}U_{i}-\sum_{n=1}^{N_{i}}\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}x^{in*}_{jm}\bigg{]}$
$\displaystyle\times(\lambda^{0}_{i}-\lambda^{0*}_{i})$ (11)
$\displaystyle+\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{g=1}^{G}\bigg{[}A^{i}_{g}-\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{in*}_{jm}+\delta^{in*}_{g}\bigg{]}$
$\displaystyle\times(\mu^{0}_{ing}-\mu^{0*}_{ing})\geq 0,$
$\displaystyle\forall(Q^{0},\mathfrak{\Delta}^{0},\lambda^{0},\mu^{0})\in\mathcal{K}^{1},$
where,
$\mathcal{K}^{1}\equiv\\{(Q^{0},\mathfrak{\Delta}^{0},\lambda^{0},\mu^{0})|(Q^{0},\mathfrak{\Delta}^{0},\lambda^{0},\mu^{0})\in
R_{+}^{\sum_{i=1}^{I}(\sum_{j=1}^{I}N_{i}M_{j}+2GN_{i})+I}\\}.$
Due to constraint (9), the solution to the optimality condition (3.2) is a
variational equilibrium (Kulkarni and Shanbhag, 2012), a refinement of the
general Nash equilibrium (GNE)555The GNE problem has been frequently used in
common-pool resource studies as a mathematical-economic tool. For more
information on general Nash equilibrium problems (GNEP), see Harker (1991),
Facchinei and Kanzow (2007).. In a variational equilibrium, all of the
Lagrange multipliers associated with the shared constraints are equal. By
formulating the GNE as a VI, we can resort to a richer set of well-established
theoretical properties (more in Section 5) and algorithmic schemes than with a
quasi-variational inequality formulation, the standard formalism for GNE
problems (Facchinei and Kanzow, 2007).
In addition, condition (3.2) proffers a readily interpretable mathematical
form from an economic perspective. According to the first summand, a resource
owner will ship a positive amount of the product to a resource producer if the
price that the producer is willing to pay for the product is exactly equal to
the owner’s marginal cost less the marginal incentive benefit. If the owner’s
marginal cost, less the marginal incentive benefit, exceeds what the producer
is willing to pay for the product, then there will not be any shipments from
the owner to the producer.
### 3.3 The resource producer’s problem
A resource producer $(j,m)$ strategically determines its shipment quantity
$x^{jm}_{s}$ to a resource supplier $(j,s)$. We group all $x^{jm}_{s}$ into a
column vector $Q^{1}$, with $Q^{1}\in R_{+}^{\sum_{j=1}^{I}M_{j}S_{j}}$. We
introduce a non-negative conversion rate of production $\psi^{in}_{jm}$. We
assume that the producer’s operating cost is associated with its total raw
material quantity receivables from the owners and that it can be written as
$f^{jm}=f^{jm}(x_{jm}),\quad\forall j,m,$ (12)
where, $x_{jm}\equiv(x_{jm}^{11},...,x_{jm}^{in},...,x_{jm}^{IN_{i}}).$
The producer’s transaction cost is associated with the flow between each pair
of producer and supplier
$c^{jm}_{s}=c^{jm}_{s}(x^{jm}_{s}),\quad\forall j,m,s.$ (13)
Similar to the incentive scheme for the resource owners, we let $B^{j}_{g}$
denote the cutoff bracket of the incentives, with $g$ ranging over
$1,...,G$. Let $\delta^{jm}_{g}$ denote the excess of output quantity over
bracket $B^{j}_{g}$. As such, we define the resource producer’s incentive
payment function as
$\beta^{j}_{0}(.)+\sum^{G}_{g=1}\beta^{j}_{g}(.).$ (14)
Specifically, the incentive payment received by the resource producer $(j,m)$
is a function of its total output quantity, i.e.,
$\beta^{j}_{0}(\sum^{S_{j}}_{s=1}x^{jm}_{s})+\sum^{G}_{g=1}\beta^{j}_{g}(\delta^{jm}_{g}),\
\forall j,m,$ (15)
where the quantity excess is
$\delta^{jm}_{g}=\max\Bigg{\\{}\sum^{S_{j}}_{s=1}x^{jm}_{s}-B^{j}_{g},\
0\Bigg{\\}},\quad\forall g=1,...,G.$ (16)
We group all $\delta^{jm}_{g}$ into a column vector $\mathfrak{\Delta}^{1}$,
and assume $f^{jm}$, $c^{jm}_{s}$ to be continuously differentiable,
$\beta^{j}_{0}$, and $\beta^{j}_{g}$ to be linear.
The price of products between the resource producer $(j,m)$ and the supplier
$(j,s)$ is denoted by $p^{jm}_{1s}$. We further denote the equilibrium of such
a price as $p^{jm*}_{1s}$. Hence, the total revenue of a producer is
$\sum_{s=1}^{S_{j}}p^{jm}_{1s}x^{jm}_{s}$.
As a profit-maximizer, each typical producer faces the following problem.
Maximize:
$\displaystyle\pi_{jm}=\sum_{s=1}^{S_{j}}p^{jm*}_{1s}x^{jm}_{s}-\sum_{j=1}^{I}\sum_{n=1}^{N_{i}}p^{in*}_{0jm}x^{in}_{jm}-f^{jm}(x_{jm})-\sum_{s=1}^{S_{j}}c^{jm}_{s}(x^{jm}_{s})+\beta^{j}_{0}(\sum^{S_{j}}_{s=1}x^{jm}_{s})+\sum^{G}_{g=1}\beta^{j}_{g}(\delta^{jm}_{g})$
(17) Subject to:
$\displaystyle\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}x^{in}_{jm}\cdot\psi^{in}_{jm}\geq\sum_{s=1}^{S_{j}}x^{jm}_{s},$
$\displaystyle(\lambda^{1}_{jm})$ (18)
$\displaystyle\sum^{S_{j}}_{s=1}x^{jm}_{s}-\delta^{jm}_{g}\leq
B^{j}_{g},\quad\forall g,$ $\displaystyle(\mu^{1}_{jmg})$ (19) $\displaystyle
x^{jm}_{s}\geq 0,\ \forall s,\qquad\delta^{jm}_{g}\geq 0,\ \forall g.$
The constraint (18) is the flow conservation in light of the inter-resource
conversion. The constraint (19), along with $\delta^{jm}_{g}\geq 0$, is the
equivalence of expression (16). $\lambda^{1}_{jm}$ and $\mu^{1}_{jmg}$ are the
Lagrange multipliers associated with inequality constraints (18) and (19),
respectively. We subsequently group all $\lambda^{1}_{jm}$ and $\mu^{1}_{jmg}$
into a column vector $\lambda^{1}$ and $\mu^{1}$, with $\lambda^{1}\in
R_{+}^{\sum_{j=1}^{I}M_{j}}$ and $\mu^{1}\in R_{+}^{\sum_{j=1}^{I}GM_{j}}$,
respectively. The optimal behavior of all resource producers can be expressed
as a VI: determine
$(Q^{0*},Q^{1*},\mathfrak{\Delta}^{1*},\lambda^{1*},\mu^{1*})\in\mathcal{K}^{2}$,
satisfying:
$\displaystyle\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\Bigg{[}\frac{\partial
f^{jm}(x^{*}_{jm})}{\partial
x^{in}_{jm}}+p^{in*}_{0jm}-\psi^{in}_{jm}\lambda^{1*}_{jm}\Bigg{]}$
$\displaystyle\times(x^{in}_{jm}-x^{in*}_{jm})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{s=1}^{S_{j}}\Bigg{[}\frac{\partial
c^{jm}_{s}(x^{jm*}_{s})}{\partial
x^{jm}_{s}}-p^{jm*}_{1s}-\frac{\partial\beta^{j}_{0}(x^{jm*}_{s})}{\partial
x^{jm}_{s}}+\lambda^{1*}_{jm}+\sum_{g=1}^{G}\mu^{1*}_{jmg}\Bigg{]}$
$\displaystyle\times(x^{jm}_{s}-x^{jm*}_{s})$ (20)
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{g=1}^{G}\bigg{[}-\frac{\partial\beta^{j}_{g}(\delta^{jm*}_{g})}{\partial\delta^{jm}_{g}}-\mu^{1*}_{jmg}\bigg{]}\times(\delta^{jm}_{g}-\delta^{jm*}_{g})+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{g=1}^{G}\bigg{[}B^{j}_{g}-\sum^{S_{j}}_{s=1}x^{jm*}_{s}+\delta^{jm*}_{g}\bigg{]}$
$\displaystyle\times(\mu^{1}_{jmg}-\mu^{1*}_{jmg})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\bigg{[}\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}x^{in*}_{jm}\cdot\psi^{in}_{jm}-\sum_{s=1}^{S_{j}}x^{jm*}_{s}\bigg{]}$
$\displaystyle\times(\lambda^{1}_{jm}-\lambda^{1*}_{jm})\geq 0,$
$\displaystyle\forall(Q^{0},Q^{1},\mathfrak{\Delta}^{1},\lambda^{1},\mu^{1})\in\mathcal{K}^{2},$
where,
$\mathcal{K}^{2}\equiv\\{(Q^{0},Q^{1},\mathfrak{\Delta}^{1},\lambda^{1},\mu^{1})|(Q^{0},Q^{1},\mathfrak{\Delta}^{1},\lambda^{1},\mu^{1})\in
R_{+}^{\sum_{j=1}^{I}(\sum_{i=1}^{I}N_{i}M_{j}+M_{j}S_{j}+2GM_{j}+M_{j})}\\}.$
### 3.4 The resource supplier’s problem
A typical resource supplier is denoted as $(j,s)$. Each supplier has $T_{j}$ modes,
indexed by $t$, to transport the resources to market $k$. The supplier’s strategic variable is
$x^{js}_{tk}$, its shipment quantity to market $k$ with transportation mode
$t$. We group all $x^{js}_{tk}$ into a column vector $Q^{2}$, with $Q^{2}\in
R_{+}^{\sum_{j=1}^{I}KS_{j}T_{j}}$. We assume the supplier incurs an operating
cost that is associated with the resource product flow from the producers,
i.e.,
$f^{js}=f^{js}(x^{js}),$ (21)
where, $x^{js}\equiv(x^{j1}_{s},...,x^{jm}_{s},...,x^{jM_{j}}_{s}),\ \
\forall j,s$. The supplier also incurs a transaction cost with each outgoing
shipment to the markets:
$c^{js}_{tk}=c^{js}_{tk}(x^{js}_{tk}).$ (22)
We assume $f^{js}$ and $c^{js}_{tk}$ to be continuously differentiable.
The price of goods between the resource supplier $(j,s)$ and the market $k$
via transportation mode $t$ is denoted by $p^{js}_{2tk}$. We further denote
the equilibrium of such a price as $p^{js*}_{2tk}$. Hence, the total revenue
of each supplier is $\sum_{t=1}^{T_{j}}\sum_{k=1}^{K}p^{js}_{2tk}x^{js}_{tk}$.
As a profit-maximizer, each typical supplier faces the following problem.
Maximize:
$\displaystyle\quad\pi_{js}=\sum_{t=1}^{T_{j}}\sum_{k=1}^{K}p^{js*}_{2tk}x^{js}_{tk}-\sum_{m=1}^{M_{j}}p^{jm*}_{1s}x^{jm}_{s}-f^{js}(x^{js})-\sum_{t=1}^{T_{j}}\sum_{k=1}^{K}c^{js}_{tk}(x^{js}_{tk})$
(23) Subject to:
$\displaystyle\quad\sum_{t=1}^{T_{j}}\sum_{k=1}^{K}x^{js}_{tk}\leq\sum_{m=1}^{M_{j}}x^{jm}_{s},$
$\displaystyle(\lambda^{2}_{js})$ (24) $\displaystyle\quad x^{js}_{tk}\geq
0,\quad\forall t,k.$
The constraint (24) is the flow conservation of the finished resource
commodities at each supplier, with $\lambda^{2}_{js}$ being the associated
Lagrange multipliers. We subsequently group all $\lambda^{2}_{js}$ into a
column vector $\lambda^{2}$ with $\lambda^{2}\in R_{+}^{\sum_{j=1}^{I}S_{j}}$.
The optimal behavior of all resource suppliers can be expressed as a VI:
determine $(Q^{1*},Q^{2*},\lambda^{2*})\in\mathcal{K}^{3}$, satisfying:
$\displaystyle\sum_{j=1}^{I}\sum_{s=1}^{S_{j}}\sum_{t=1}^{T_{j}}\sum_{k=1}^{K}\Bigg{[}\frac{\partial
c^{js}_{tk}(x^{js*}_{tk})}{\partial
x^{js}_{tk}}-p^{js*}_{2tk}+\lambda^{2*}_{js}\Bigg{]}$
$\displaystyle\times(x^{js}_{tk}-x^{js*}_{tk})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{s=1}^{S_{j}}\bigg{[}\frac{\partial
f^{js}(x^{js*})}{\partial x^{jm}_{s}}+p^{jm*}_{1s}-\lambda^{2*}_{js}\bigg{]}$
$\displaystyle\times(x^{jm}_{s}-x^{jm*}_{s})$ (25)
$\displaystyle+\sum_{j=1}^{I}\sum_{s=1}^{S_{j}}\Bigg{[}\sum_{m=1}^{M_{j}}x^{jm*}_{s}-\sum_{k=1}^{K}\sum_{t=1}^{T_{j}}x^{js*}_{tk}\Bigg{]}
$\displaystyle\times(\lambda^{2}_{js}-\lambda^{2*}_{js})\geq
0,\qquad\forall(Q^{1},Q^{2},\lambda^{2})\in\mathcal{K}^{3},$
where,
$\mathcal{K}^{3}\equiv\\{(Q^{1},Q^{2},\lambda^{2})|(Q^{1},Q^{2},\lambda^{2})\in
R_{+}^{\sum_{j=1}^{I}(KS_{j}T_{j}+M_{j}S_{j}+S_{j})}\\}.$
### 3.5 Demand market equilibrium conditions
Scarce resources, such as clean water, energy, foods, and industrial
materials, are consumed in demand markets with unique needs. Concerned with
its willingness to pay, a market determines the purchasing quantity from each
supplier via each available transportation mode.
Because the difference in the suppliers or the transportation modes can cause
different product preservation in the network, without loss of generality, for
an arbitrary market $k$ ($k=1,...,K$) receiving resource commodity $j$
($j=1,...,I$), we assign a set of ex-ante weights $w_{ts}$ across all flows
via transportation modes $t$ ($t=1,...,T_{j}$) and suppliers $s$
($s=1,...,S_{j}$). In other words, the demand for product $j$ at market $k$ is
$d_{jk}=\sum_{t=1}^{T_{j}}\sum_{s=1}^{S_{j}}w_{ts}\cdot
x^{js}_{tk},\quad\forall j,k.$ (26)
We group all $d_{jk}$ from (26) into a column vector $d$, i.e.,
$d\equiv(d_{11},...,d_{jk},...,d_{IK})$, where $d\in R^{IK}_{+}$. We further
denote the equilibrium of such demands as $d^{*}_{jk}$ and $d^{*}$,
respectively. Because each scarce-resource product in each market has a unique
demand function, as is in (26), it corresponds with a market price
$p^{j}_{3k}$ in the form of price-demand function
$p^{j}_{3k}=p^{j}_{3k}(d).$ (27)
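The aggregation (26) and the resulting market price (27) can be sketched directly; the weights, flows, and linear price-demand function below are illustrative assumptions, not values from the paper.

```python
# Demand at market k for resource j per (26): a weighted sum of flows over
# all (transportation mode t, supplier s) pairs; all values are hypothetical.

def market_demand(flows, weights):
    """flows and weights are dicts keyed by (t, s); returns d_jk."""
    return sum(weights[ts] * x for ts, x in flows.items())

flows = {(1, 1): 4.0, (1, 2): 2.0, (2, 1): 1.0}    # x^{js}_{tk} keyed by (t, s)
weights = {(1, 1): 1.0, (1, 2): 0.9, (2, 1): 0.8}  # ex-ante weights w_ts
d_jk = market_demand(flows, weights)

p3 = lambda d: 12.0 - 0.6 * d   # assumed linear price-demand function (27)
print(d_jk, p3(d_jk))           # d_jk is approximately 6.6
```

Because each market cannot distinguish the same resource across suppliers or modes, a single price applies to the aggregated demand, which is what the weighted sum captures.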
Market $k$ incurs a transaction cost from supplier $(j,s)$ via transportation
mode $t$
$\hat{c}^{js}_{tk}=\hat{c}^{js}_{tk}(x^{js}_{tk}),\quad\forall j,s,k,t.$ (28)
We assume $\hat{c}^{js}_{tk}$ to be continuously differentiable. Finally, we
adopt the classic Spatial Price Equilibrium (SPE) model666The Spatial Price
Equilibrium model appeared in the early literature of Samuelson (1952),
Takayama and Judge (1964), inter alia, and was thereafter extended with VI
theories by Dafermos and Nagurney (1984). to represent the markets. The
corresponding equilibrium condition is given by
$p^{js*}_{2tk}+\hat{c}^{js*}_{tk}(x^{js*}_{tk})\left\\{\begin{array}[]{ll}=p^{j}_{3k}(d^{*})&\quad
if\quad x^{js*}_{tk}>0\\\ \geq p^{j}_{3k}(d^{*})&\quad if\quad
x^{js*}_{tk}=0\end{array}\quad\forall j,s,t,k.\right.$ (29)
The intuition of condition (29) is that the consumption of a resource at a
market will remain positive if the supplier’s sales price of such a resource
plus all associated transaction costs is equal to the price consumers are
willing to pay; however, if the supplier’s sales price plus costs turns out to
be higher than what consumers are willing to pay, then there will be no
shipments between the supplier and the market.
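This complementarity can be verified mechanically for a candidate solution. The sketch below checks (29) for one (resource, market) pair across several (mode, supplier) channels; all prices, costs, and flows are hypothetical.

```python
# Check of the spatial price equilibrium condition (29): for each channel,
# either the flow is positive and price-plus-cost equals the market price,
# or the flow is zero and price-plus-cost weakly exceeds it.

def satisfies_spe(flows, supply_prices, trans_costs, market_price, tol=1e-9):
    """flows[t], supply_prices[t] (p2), trans_costs[t] (c-hat evaluated at
    the flow), and a single market price p3; returns True iff (29) holds."""
    for x, p2, chat in zip(flows, supply_prices, trans_costs):
        landed = p2 + chat
        if x > tol:                       # positive flow: must hold with equality
            if abs(landed - market_price) > tol:
                return False
        else:                             # zero flow: landed price may exceed p3
            if landed < market_price - tol:
                return False
    return True

print(satisfies_spe([2.0, 0.0], [3.0, 3.5], [1.0, 1.0], 4.0))  # True
print(satisfies_spe([2.0, 1.0], [3.0, 3.5], [1.0, 1.0], 4.0))  # False
```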
As such, the optimal behaviors of all demand markets as a spatial price
equilibrium can be expressed in a VI form: determine
$(Q^{2*},d^{*})\in\mathcal{K}^{4}$, satisfying:
$\displaystyle\sum_{j=1}^{I}\sum_{s=1}^{S_{j}}\sum_{t=1}^{T_{j}}\sum_{k=1}^{K}[p^{js*}_{2tk}+\hat{c}^{js}_{tk}(x^{js*}_{tk})]\times(x^{js}_{tk}-x^{js*}_{tk})-\sum^{I}_{j=1}\sum^{K}_{k=1}p^{j}_{3k}(d^{*})\times(d_{jk}-d^{*}_{jk})\geq
0,\qquad\forall(Q^{2},d)\in\mathcal{K}^{4},$ (30)
where, $\mathcal{K}^{4}\equiv\\{(Q^{2},d)|Q^{2}\in
R_{+}^{\sum_{j=1}^{I}KS_{j}T_{j}},\ d\in R^{IK}_{+}\\}.$
### 3.6 Welfare estimates
To assess the social impact of the model elements in our supply chain network,
we provide an estimate of consumer surplus and social welfare. The consumer
surplus, which measures the aggregate of consumers’ benefits upon
purchasing a product (cf. Willig (1976)), of resource $j$ at demand market $k$
is given by:
$CS_{jk}=\int_{0}^{d^{*}_{jk}}p^{j}_{3k}(z)dz-p^{j*}_{3k}d^{*}_{jk},\quad\forall
j,k.$ (31)
We also define the social welfare of the network as the aggregation of all firm
profits and consumer surplus, i.e.,
$SW=\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\pi_{in}+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\pi_{jm}+\sum_{j=1}^{I}\sum_{s=1}^{S_{j}}\pi_{js}+\sum_{j=1}^{I}\sum_{k=1}^{K}CS_{jk}.$ (32)
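The integral in (31) can be estimated numerically for any price-demand function. The sketch below uses a midpoint rule and an assumed linear inverse demand; both are illustrative choices, not part of the model.

```python
# Numeric estimate of consumer surplus (31):
#   CS = integral_0^{d*} p(z) dz - p(d*) * d*,
# evaluated with a simple midpoint rule so any p(.) can be substituted.

def consumer_surplus(price_fn, d_star, steps=10000):
    h = d_star / steps
    integral = sum(price_fn((i + 0.5) * h) for i in range(steps)) * h
    return integral - price_fn(d_star) * d_star

p = lambda d: 10.0 - 0.5 * d      # assumed linear price-demand function
cs = consumer_surplus(p, d_star=8.0)
print(round(cs, 4))               # analytic value: 0.5 * 0.5 * 8^2 = 16.0
```

For a linear inverse demand the surplus is the familiar triangle area, which the midpoint rule reproduces up to floating-point error.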
## 4 The network equilibrium
###### Definition 1 (Cross-sector multi-product scarce resource supply chain
network equilibrium with fiscal policy)
A product flow pattern
$(Q^{0*},Q^{1*},Q^{2*},\mathfrak{\Delta}^{0*},\mathfrak{\Delta}^{1*},d^{*},\lambda^{0*},\lambda^{1*},\lambda^{2*},\mu^{0*},\mu^{1*})\in\mathcal{K}$
is said to constitute a cross-sector multi-product scarce resource supply
chain network equilibrium with fiscal policy if it satisfies conditions (3.2),
(3.3), (3.4), and (30).
###### Theorem 1 (Variational inequality formulation)
The equilibrium conditions governing the cross-sector multi-product scarce
resource supply chain network, according to Definition 1, are equivalent to
the solution to the following VIs:
Determine
$(Q^{0*},Q^{1*},Q^{2*},\mathfrak{\Delta}^{0*},\mathfrak{\Delta}^{1*},d^{*},\lambda^{0*},\lambda^{1*},\lambda^{2*},\mu^{0*},\mu^{1*})\in\mathcal{K}$,
satisfying
$\displaystyle\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\Bigg{[}\frac{\partial
f^{in}(x^{in*})}{\partial x^{in}_{jm}}+\frac{\partial
f^{jm}(x^{*}_{jm})}{\partial x^{in}_{jm}}+\frac{\partial
c^{in}_{jm}(x^{in*}_{jm})}{\partial
x^{in}_{jm}}-\frac{\partial\alpha^{i}_{0}(x^{in*}_{jm})}{\partial
x^{in}_{jm}}+\lambda^{0*}_{i}-\psi^{in}_{jm}\lambda^{1*}_{jm}+\sum_{g=1}^{G}\mu^{0*}_{ing}\Bigg{]}$
$\displaystyle\times(x^{in}_{jm}-x^{in*}_{jm})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{s=1}^{S_{j}}\Bigg{[}\dfrac{\partial
f^{js}(x^{js*})}{\partial x^{jm}_{s}}+\frac{\partial
c^{jm}_{s}(x^{jm*}_{s})}{\partial
x^{jm}_{s}}-\frac{\partial\beta^{j}_{0}(x^{jm*}_{s})}{\partial
x^{jm}_{s}}+\lambda^{1*}_{jm}-\lambda^{2*}_{js}+\sum_{g=1}^{G}\mu^{1*}_{jmg}\Bigg{]}$
$\displaystyle\times(x^{jm}_{s}-x^{jm*}_{s})$
$\displaystyle+\sum_{j=1}^{I}\sum_{s=1}^{S_{j}}\sum_{t=1}^{T_{j}}\sum_{k=1}^{K}\Bigg{[}\frac{\partial
c^{js}_{tk}(x^{js*}_{tk})}{\partial
x^{js}_{tk}}+\hat{c}^{js}_{tk}(x^{js*}_{tk})+\lambda^{2*}_{js}\Bigg{]}$
$\displaystyle\times(x^{js}_{tk}-x^{js*}_{tk})$
$\displaystyle+\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{g=1}^{G}\bigg{[}-\frac{\partial\alpha^{i}_{g}(\delta^{in*}_{g})}{\partial\delta^{in}_{g}}-\mu^{0*}_{ing}\bigg{]}\times(\delta^{in}_{g}-\delta^{in*}_{g})+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{g=1}^{G}\bigg{[}-\frac{\partial\beta^{j}_{g}(\delta^{jm*}_{g})}{\partial\delta^{jm}_{g}}-\mu^{1*}_{jmg}\bigg{]}$
$\displaystyle\times(\delta^{jm}_{g}-\delta^{jm*}_{g})$ (33)
$\displaystyle-\sum^{I}_{j=1}\sum^{K}_{k=1}p^{j}_{3k}(d^{*})\times(d_{jk}-d^{*}_{jk})+\sum_{i=1}^{I}\bigg{[}U_{i}-\sum_{n=1}^{N_{i}}\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}x^{in*}_{jm}\bigg{]}$
$\displaystyle\times(\lambda^{0}_{i}-\lambda^{0*}_{i})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\bigg{[}\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}x^{in*}_{jm}\cdot\psi^{in}_{jm}-\sum_{s=1}^{S_{j}}x^{jm*}_{s}\bigg{]}\times(\lambda^{1}_{jm}-\lambda^{1*}_{jm})+\sum_{j=1}^{I}\sum_{s=1}^{S_{j}}\bigg{[}\sum_{m=1}^{M_{j}}x^{jm*}_{s}-\sum_{k=1}^{K}\sum_{t=1}^{T_{j}}x^{js*}_{tk}\bigg{]}
$\displaystyle\times(\lambda^{2}_{js}-\lambda^{2*}_{js})$
$\displaystyle+\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{g=1}^{G}\bigg{[}A^{i}_{g}-\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{in*}_{jm}+\delta^{in*}_{g}\bigg{]}\times(\mu^{0}_{ing}-\mu^{0*}_{ing})+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{g=1}^{G}\bigg{[}B^{j}_{g}-\sum^{S_{j}}_{s=1}x^{jm*}_{s}+\delta^{jm*}_{g}\bigg{]}$
$\displaystyle\times(\mu^{1}_{jmg}-\mu^{1*}_{jmg})\geq 0,$
$\displaystyle\forall(Q^{0},Q^{1},Q^{2},\mathfrak{\Delta}^{0},\mathfrak{\Delta}^{1},d,\lambda^{0},\lambda^{1},\lambda^{2},\mu^{0},\mu^{1})\in\mathcal{K},$
where,
$\mathcal{K}\equiv\mathcal{K}^{1}\times\mathcal{K}^{2}\times\mathcal{K}^{3}\times\mathcal{K}^{4}=\big{\\{}(Q^{0},Q^{1},Q^{2},\mathfrak{\Delta}^{0},\mathfrak{\Delta}^{1},d,\lambda^{0},\lambda^{1},\lambda^{2},\mu^{0},\mu^{1})|(Q^{0},Q^{1},Q^{2},\mathfrak{\Delta}^{0},\mathfrak{\Delta}^{1},d,\lambda^{0},\lambda^{1},\\\
\lambda^{2},\mu^{0},\mu^{1})\in
R_{+}^{\sum_{i=1}^{I}(\sum_{j=1}^{I}2N_{i}M_{j}+2GN_{i})+\sum_{j=1}^{I}(2M_{j}S_{j}+2KS_{j}T_{j}+2GM_{j}+S_{j}+M_{j})+IK+I}\big{\\}}.$
Proof: See Appendix A.
It should be noted that the variables $p^{in*}_{0jm}$, $p^{jm*}_{1s}$, and
$p^{js*}_{2tk}$ do not appear within the formulation of Theorem 1. They are
endogenous to the model and can be retrieved once the solution is obtained. To
retrieve $p^{js*}_{2tk}$, for all $j,s,t,k$, recall equilibrium condition
(29). Since $p^{j}_{3k}(d^{*})$ is readily available from (27), if
$x^{js*}_{tk}>0$ for some $j,s,t,k$, then $p^{js*}_{2tk}$ can be obtained by
the equality
$p^{js*}_{2tk}=p^{j*}_{3k}(d^{*})-\hat{c}^{js}_{tk}(x^{js*}_{tk}).$ (34)
Similarly, if $x^{jm*}_{s}>0$ for some $j,m,s$, then from the second summand
in (3.4), one may immediately obtain
$p^{jm*}_{1s}=\lambda^{2*}_{js}-\frac{\partial f^{js}(x^{js*})}{\partial
x^{jm}_{s}}.$ (35)
And, if $x^{in*}_{jm}>0$ for some $i,n,j,m$, then from the first summand of
(3.3),
$p^{in*}_{0jm}=\psi^{in}_{jm}\lambda^{1*}_{jm}-\frac{\partial
f^{jm}(x^{*}_{jm})}{\partial x^{in}_{jm}}.$ (36)
## 5 Theoretical properties
We provide a few classic theoretical properties of the solution to VI (33),
based on Gabay and Moulin (1980), Nagurney (1999), and Melo (2018), inter
alia. In particular, we derive existence and uniqueness results by
incorporating strategic measures of network games from the latest theoretical
advancements.
To facilitate the development in this section, we rewrite VI problem (33) in
standard form as follows: determine $X^{*}\in\mathcal{K}$ satisfying
$\langle F(X^{*}),\ X-X^{*}\rangle\geq 0,\quad\forall X\in\mathcal{K},$
(37)
where,
$X\equiv(Q^{0},Q^{1},Q^{2},\mathfrak{\Delta}^{0},\mathfrak{\Delta}^{1},d,\lambda^{0},\lambda^{1},\lambda^{2},\mu^{0},\mu^{1})$,
with a slight abuse of notation,
$F(X)\equiv(F_{Q^{0}},F_{Q^{1}},F_{Q^{2}},F_{\mathfrak{\Delta}^{0}},F_{\mathfrak{\Delta}^{1}},F_{d},F_{\lambda^{0}},\\\
F_{\lambda^{1}},F_{\lambda^{2}},F_{\mu^{0}},F_{\mu^{1}})$, in which each
component of $F$ is given by the respective summand expression in (33), and,
$\mathcal{K}\equiv\mathcal{K}^{1}\times\mathcal{K}^{2}\times\mathcal{K}^{3}\times\mathcal{K}^{4}.$
The notation $\langle\cdot,\cdot\rangle$ represents the inner product in a
Euclidean space. Both $F$ and $X$ are $\mathcal{N}$-dimensional column
vectors, where
$\mathcal{N}=\sum_{i=1}^{I}(\sum_{j=1}^{I}2N_{i}M_{j}+2GN_{i})+\sum_{j=1}^{I}(2M_{j}S_{j}+2KS_{j}T_{j}+2GM_{j}+S_{j}+M_{j})+IK+I$.
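The standard form (37) is what projection-type algorithmic schemes operate on. The following minimal sketch iterates $X_{k+1}=\mathrm{Proj}_{\mathcal{K}}(X_{k}-\tau F(X_{k}))$ over the non-negative orthant; the toy mapping $F$ and step size are assumptions chosen so the iteration converges, and are not the model's $F$.

```python
# Minimal projection-method sketch for a VI in standard form (37) over the
# non-negative orthant. The toy mapping F below is a strongly monotone
# assumption, not the paper's F.

def project_nonneg(x):
    return [max(v, 0.0) for v in x]

def solve_vi(F, x0, tau=0.1, iters=5000, tol=1e-10):
    """Fixed-point iteration X <- Proj(X - tau * F(X)); converges for
    strongly monotone, Lipschitz F with a small enough step tau."""
    x = list(x0)
    for _ in range(iters):
        g = F(x)
        x_new = project_nonneg([xi - tau * gi for xi, gi in zip(x, g)])
        if max(abs(a - b) for a, b in zip(x, x_new)) < tol:
            return x_new
        x = x_new
    return x

# Toy mapping: F(x) = (x0 - 2, x1 + 1); the VI solution is x* = (2, 0),
# since F1(x*) = 0 and F2(x*) = 1 > 0 with x1* = 0 on the boundary.
sol = solve_vi(lambda x: [x[0] - 2.0, x[1] + 1.0], [0.0, 5.0])
print([round(v, 6) for v in sol])   # approximately [2.0, 0.0]
```

The example also illustrates the complementarity built into (37): components with strictly positive equilibrium value drive their $F$-component to zero, while boundary components may leave it positive.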
First, we provide the existence properties. While $F$ in (37) is continuous,
the feasible set $\mathcal{K}$ is not compact, so the existence of a solution
to (33) is not readily guaranteed. One can, however, impose a weak condition to
guarantee the existence of a solution pattern, as in Nagurney et al. (2002).
Let
$\displaystyle\mathcal{K}_{b}\equiv\big{\\{}(Q^{0},Q^{1},Q^{2},\mathfrak{\Delta}^{0},\mathfrak{\Delta}^{1},d,\lambda^{0},\lambda^{1},\lambda^{2},\mu^{0},\mu^{1})\
|\ 0\leq Q^{0}\leq b_{1},0\leq Q^{1}\leq b_{2},0\leq Q^{2}\leq
b_{3},0\leq\mathfrak{\Delta}^{0}\leq b_{4},$ $\displaystyle
0\leq\mathfrak{\Delta}^{1}\leq b_{5},0\leq d\leq b_{6},0\leq\lambda^{0}\leq
b_{7},0\leq\lambda^{1}\leq b_{8},0\leq\lambda^{2}\leq b_{9},0\leq\mu^{0}\leq
b_{10},0\leq\mu^{1}\leq b_{11}\big{\\}},$
where
$b=(b_{1},b_{2},b_{3},b_{4},b_{5},b_{6},b_{7},b_{8},b_{9},b_{10},b_{11})\geq
0$ and $Q^{0}\leq b_{1}$, $Q^{1}\leq b_{2}$, $Q^{2}\leq b_{3}$,
$\mathfrak{\Delta}^{0}\leq b_{4}$, $\mathfrak{\Delta}^{1}\leq b_{5}$, $d\leq
b_{6}$, $\lambda^{0}\leq b_{7}$, $\lambda^{1}\leq b_{8}$, $\lambda^{2}\leq
b_{9}$, $\mu^{0}\leq b_{10}$, $\mu^{1}\leq b_{11}$ means that $x^{in}_{jm}\leq
b_{1}$, $x^{jm}_{s}\leq b_{2}$, $x^{js}_{tk}\leq b_{3}$, $\delta^{in}_{g}\leq
b_{4}$, $\delta^{jm}_{g}\leq b_{5}$, $d_{jk}\leq b_{6}$, $\lambda^{0}_{i}\leq
b_{7}$, $\lambda^{1}_{jm}\leq b_{8}$, $\lambda^{2}\leq b_{9}$,
$\mu^{0}_{ing}\leq b_{10}$, $\mu^{1}_{jmg}\leq b_{11}$ for all $i,j,n,m,t,k$.
Then $\mathcal{K}_{b}$ is a bounded, closed, convex subset of
$R_{+}^{\mathcal{N}}$. Note that the existence of such a $b$ is sensible from
an economic perspective. Thus, the VI expression
$\langle F(X^{b}),\ X-X^{b}\rangle\geq 0,\quad\forall X\in\mathcal{K}_{b}$
(38)
admits at least one solution. Following Kinderlehrer and Stampacchia (1980),
and Nagurney et al. (2002), it is straightforward to establish:
###### Lemma 1
The VI (37) admits a solution if and only if there exists a $b>0$ such that
the VI (38) admits a solution in $\mathcal{K}_{b}$, with
$\displaystyle Q^{0}<b_{1},\ Q^{1}<b_{2},\ Q^{2}<b_{3},\
\mathfrak{\Delta}^{0}<b_{4},\ \mathfrak{\Delta}^{1}<b_{5},\ d<b_{6},$ (39)
$\displaystyle\lambda^{0}<b_{7},\ \lambda^{1}<b_{8},\ \lambda^{2}<b_{9},\
\mu^{0}<b_{10},\ \mu^{1}<b_{11}.$
###### Theorem 2 (Existence)
The VI problem (1) admits at least one solution in $\mathcal{K}_{b}$.
Proof: By virtue of theorem 3.1 in Harker and Pang (1990), one can easily
verify that $\mathcal{K}_{b}$ is non-empty, compact, and convex, and that the
mapping $F$ representing (1) is continuous. Therefore, there exists a solution
to the problem (1) in $\mathcal{K}_{b}$. $\square$
Next, we provide a set of sufficient conditions for the uniqueness properties.
In general, uniqueness is often associated with the monotonicity of the $F$
that enters the VI problem. Here, we begin by restating a set of well-known
monotonicity notions (definition 2.3.1 in Facchinei and Pang (2003)).
###### Definition 2 (Monotonicity)
A mapping $F:\mathcal{K}\subseteq R^{n}\rightarrow R^{n}$ is said to be
(a) monotone on $\mathcal{K}$ if
$\langle F(X^{1})-F(X^{2}),\ X^{1}-X^{2}\rangle\geq 0,\quad\forall
X^{1},X^{2}\in\mathcal{K}.$ (40)
(b) strictly monotone on $\mathcal{K}$, if
$\displaystyle\langle F(X^{1})-F(X^{2}),\ X^{1}-X^{2}\rangle>0,\quad\forall
X^{1},X^{2}\in\mathcal{K},\quad X^{1}\neq X^{2}.$ (41)
(c) strongly monotone on $\mathcal{K}$, if
$\langle F(X^{1})-F(X^{2}),\ X^{1}-X^{2}\rangle\geq\alpha\left\lVert
X^{1}-X^{2}\right\rVert^{2},\quad\forall X^{1},X^{2}\in\mathcal{K},$ (42)
where $\alpha>0$.
###### Definition 3 (Lipschitz Continuity)
$F(X)$ is Lipschitz continuous on $\mathcal{K}$, if
$\left\lVert F(X^{1})-F(X^{2})\right\rVert\leq L\left\lVert
X^{1}-X^{2}\right\rVert,\quad\forall X^{1},X^{2}\in\mathcal{K},$ (43)
where $L>0$ is the Lipschitz constant.
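For intuition, the monotonicity and Lipschitz properties can be verified numerically on a toy affine map. For $F(X)=AX+b$ with symmetric positive definite $A$, the strong-monotonicity modulus and the Lipschitz constant reduce to the extreme eigenvalues of $A$; the matrix below is an illustrative assumption, not the paper's $F$.

```python
import numpy as np

# Toy affine map F(X) = A X + b with symmetric positive definite A.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-4.0, -5.0])
F = lambda X: A @ X + b

eigs = np.linalg.eigvalsh(A)
alpha = eigs.min()   # strong-monotonicity modulus, cf. (42)
L = eigs.max()       # Lipschitz constant (spectral norm of A), cf. (43)

rng = np.random.default_rng(1)
for _ in range(200):
    X1, X2 = rng.standard_normal(2), rng.standard_normal(2)
    d = X1 - X2
    # Strong monotonicity: <F(X1)-F(X2), X1-X2> = d^T A d >= alpha ||d||^2.
    assert (F(X1) - F(X2)) @ d >= alpha * (d @ d) - 1e-9
    # Lipschitz continuity: ||F(X1)-F(X2)|| = ||A d|| <= L ||d||.
    assert np.linalg.norm(F(X1) - F(X2)) <= L * np.linalg.norm(d) + 1e-9
```

Since strong monotonicity with $\alpha>0$ holds here, strict monotonicity and plain monotonicity follow a fortiori.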
We will also use the following established lemma (theorem 1.6 in Nagurney
(1999)).
###### Lemma 2
Suppose that $F(X)$ is strictly monotone on $\mathcal{K}$; then the solution
to the $VI(F,\mathcal{K})$ problem, if one exists, is unique.
To further characterize the nature of the network, we need to adopt an
additional set of notations. Following Melo (2018), we define the game
Jacobian777The game Jacobian is closely related to the topology, equilibrium
analysis, strategic interactions, and comparative statics of the network. For
more details, see Bramoullé et al. (2014), Jackson and Zenou (2015), Parise
and Ozdaglar (2019), Melo (2018), and the references therein. of $F(X)$ as an
$\mathcal{N}$-by-$\mathcal{N}$ matrix
$J(X)\equiv[\nabla_{q}F_{r}(X)]_{q,r}=\Bigg{[}\frac{\partial
F_{r}(X)}{\partial X_{q}}\Bigg{]}_{q,r},\quad\forall
q,r=1,...,\mathcal{N},X\in R_{+}^{\mathcal{N}}.$ (44)
Decomposing the Jacobian into its diagonal and off-diagonal elements
yields
$J(X)=D(X)+N(X),$ (45)
where $D(X)$ is a diagonal matrix, whose elements are
$D_{qr}(X)=\left\\{\begin{array}[]{ll}\frac{\partial F_{r}(X)}{\partial
X_{q}}&\quad if\ q=r\\\ 0&\quad otherwise,\end{array}\right.$ (46)
and $N(X)$ is an off-diagonal matrix, whose elements are
$N_{qr}(X)=\left\\{\begin{array}[]{ll}\frac{\partial F_{r}(X)}{\partial
X_{q}}&\quad if\ q\neq r\\\ 0&\quad otherwise.\end{array}\right.$ (47)
To accommodate the scenarios in which the Jacobian may be non-symmetric, we
expand the definition of Jacobian in (45) by rewriting it as
$\bar{J}(X)=D(X)+\bar{N}(X),$ (48)
where $\bar{N}(X)\equiv\frac{1}{2}[N(X)+N^{T}(X)]$.
Finally, we denote the smallest eigenvalue of a square matrix by
$\lambda_{min}(\cdot)$.888For more information on how the smallest eigenvalue
of a game Jacobian yields insights into strategic interplay, neighboring
influences, and aggregate effects, see Bramoullé et al. (2014). Clearly, both
$D(X)$ and $\bar{N}(X)$ have real eigenvalues because they are
symmetric. Now we are ready to present the main results for the uniqueness.
###### Theorem 3 (Sufficient conditions for Uniqueness)
Assuming the conditions of Theorem 2 are satisfied, the solution $X^{*}$ to VI
(1) is unique, if
(i) $F$ is strictly monotone on $\mathcal{K}$, or
(ii) $F$ is strongly monotone on $\mathcal{K}$, or
(iii) $\mathfrak{U}$ is strictly concave, and the condition
$\lvert\lambda_{min}(\bar{N}(X))\rvert<\lambda_{min}(D(X))$ holds $\forall
X\in\mathcal{K}$, where $\nabla_{X}\mathfrak{U}(X)=F(X)$.
Proof:
(i) The proof is immediate with Lemma 2 and Theorem 2.
(ii) See the proof of theorem 1.8 in Nagurney (1999).
(iii) Because all cost, incentive, and demand functions in equation (1) are
continuously differentiable, all of their partial derivatives are well
defined. By virtue of proposition 4 in Melo (2018), $F(X)$ is strictly
monotone on $\mathcal{K}$. Combining Lemma 2 and Theorem 2, condition (iii) is
proved. $\square$
Remark
A few observations can be made. First, it is well-known that the monotonicity
conditions in Definition 2 are ranked in the ascending order of "strength"
(Facchinei and Pang, 2003), i.e., (c) implies (b), and (b) implies (a). In
Theorem 3, the uniqueness conditions (i) and (iii), in essence, are established
under strict monotonicity, whereas (ii) is under strong monotonicity. Hence,
one can easily infer a relation of "(ii) $\Rightarrow$ (i)", as well as "(ii)
$\Rightarrow$ (iii)" in Theorem 3.
Second, to characterize the uniqueness of variational equilibrium (1), one
could also establish similar sufficient conditions by the semidefiniteness of
$J(X)$ as it pertains to the analogous monotonicity features (cf. Parise and
Ozdaglar (2019); Melo (2018)).
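The decomposition (45)-(48) and the eigenvalue condition in Theorem 3(iii) can be checked numerically at a given point. The Jacobian below is a made-up diagonally dominant example, not the model's game Jacobian.

```python
import numpy as np

# Illustrative Jacobian at some point X (hypothetical values).
J = np.array([[3.0, 0.5, 0.2],
              [0.1, 4.0, 0.3],
              [0.4, 0.2, 5.0]])

D = np.diag(np.diag(J))          # diagonal part, (46)
N = J - D                        # off-diagonal part, (47)
N_bar = 0.5 * (N + N.T)          # symmetrization for non-symmetric J, (48)

# Smallest eigenvalues of the two symmetric matrices.
lam_min_D = np.linalg.eigvalsh(D).min()
lam_min_Nbar = np.linalg.eigvalsh(N_bar).min()

# Eigenvalue condition from Theorem 3(iii):
# |lambda_min(N_bar)| < lambda_min(D).
assert abs(lam_min_Nbar) < lam_min_D
```

For this matrix the diagonal part dominates (its smallest eigenvalue is 3, while Gershgorin bounds confine the eigenvalues of $\bar{N}$ to $[-0.6, 0.6]$), so the condition holds; in the model it would have to be verified for all $X\in\mathcal{K}$.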
## 6 The algorithm
To solve a VI problem in standard form, we propose an algorithm together with
theoretical guarantees on its convergence. The algorithm is the extragradient
method first proposed by Korpelevich (1976), later presented as the modified
projection method in Nagurney (1999) by setting the step length to 1. The
solution is guaranteed to converge as long as the function $F$ that enters the
standard form is monotone and Lipschitz continuous. The realization of the
algorithm for the cross-sector multi-product scarce resource supply chain
network model is given as follows.
The modified projection method:
Step 0. Initialization
Set
$X^{0}=(Q^{00},Q^{10},Q^{20},\mathfrak{\Delta}^{00},\mathfrak{\Delta}^{10},d^{0},\lambda^{00},\lambda^{10},\lambda^{20},\mu^{00},\mu^{10})\in\mathcal{K}$.
Set $\tau:=1$ and select $\varphi$ such that $0<\varphi\leq 1/L$, where $L$ is
the Lipschitz constant for function $F$.
Step 1. Construction and computation
Compute
$\bar{X}^{\tau-1}=(\bar{Q}^{0\tau},\bar{Q}^{1\tau},\bar{Q}^{2\tau},\bar{\mathfrak{\Delta}}^{0\tau},\bar{\mathfrak{\Delta}}^{1\tau},\bar{d}^{\tau},\bar{\lambda}^{0\tau},\bar{\lambda}^{1\tau},\bar{\lambda}^{2\tau},\bar{\mu}^{0\tau},\bar{\mu}^{1\tau})\in\mathcal{K}$
by solving the VI sub-problem
$\langle\bar{X}^{\tau-1}+\varphi F(X^{\tau-1})-X^{\tau-1},\
X-\bar{X}^{\tau-1}\rangle\geq 0,\quad\forall X\in\mathcal{K}.$ (49)
Step 2. Adaptation
Compute
$X^{\tau}=(Q^{0\tau},Q^{1\tau},Q^{2\tau},\mathfrak{\Delta}^{0\tau},\mathfrak{\Delta}^{1\tau},d^{\tau},\lambda^{0\tau},\lambda^{1\tau},\lambda^{2\tau},\mu^{0\tau},\mu^{1\tau})\in\mathcal{K}$
by solving the VI sub-problem
$\langle X^{\tau}+\varphi F(\bar{X}^{\tau-1})-X^{\tau-1},\
X-X^{\tau}\rangle\geq 0,\quad\forall X\in\mathcal{K}.$ (50)
Step 3. Convergence verification
If $\lVert X^{\tau}-X^{\tau-1}\rVert\leq\epsilon$, for a pre-specified
tolerance $\epsilon>0$, then stop; otherwise, set $\tau:=\tau+1$ and go to
Step 1.
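The steps above can be sketched compactly. On a convex feasible set, the sub-problems (49) and (50) reduce to Euclidean projections, which the sketch below exploits for a simple box-shaped $\mathcal{K}$; the affine map $F$, the box bounds, and the step size are toy assumptions standing in for the full network model.

```python
import numpy as np

def modified_projection(F, proj, X0, phi, eps=1e-6, max_iter=10_000):
    """Modified projection (extragradient) method for VI(F, K)."""
    X = X0
    for _ in range(max_iter):
        X_bar = proj(X - phi * F(X))       # Step 1: predictor, solves (49)
        X_new = proj(X - phi * F(X_bar))   # Step 2: corrector, solves (50)
        if np.linalg.norm(X_new - X) <= eps:  # Step 3: convergence check
            return X_new
        X = X_new
    return X

# Toy monotone, Lipschitz instance: F(X) = A X + b on K = [0, 100]^2.
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric PD, so L = 3
b = np.array([-4.0, -5.0])
F = lambda X: A @ X + b
proj = lambda X: np.clip(X, 0.0, 100.0)  # projection onto the box

# Step size phi = 0.1 satisfies 0 < phi <= 1/L = 1/3.
X_star = modified_projection(F, proj, X0=np.ones(2), phi=0.1)
# For this instance the unique solution satisfies F(X*) = 0, i.e. X* = (1, 2).
```

The projection step is cheap here because $\mathcal{K}$ is a box; for the network model, the feasible set is a product of polyhedra, so the sub-problems (49)-(50) are themselves small quadratic programs.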
The algorithm converges to the solution of $VI(F,\mathcal{K})$ under the
following conditions.
###### Theorem 4 (Convergence)
Assume that $F(X)$ is monotone, as in expression (40), and that $F(X)$ is
also Lipschitz continuous; then the modified projection method converges to a
solution of VI (37).
Proof: See Theorem 2.5 in Nagurney (1999).
## 7 Small scale examples
In this section, we construct several numerical cases to illustrate our
model’s utility. Example 1 is an application to the medical PPE glove supply.
Example 2 broadens the application to an interconnected abstract resource-trio
supply chain. The aforementioned algorithm is implemented in MATLAB installed
on an ordinary ASUS VivoBook F510 personal laptop computer with an Intel Core
i5 CPU 2.5 GHz and RAM 8.00 GB. We exhibit and discuss the highlights of each
example.
Example 1.1: A medical gloves supply chain network (benchmark)
The COVID-19 pandemic of 2020 has reportedly caused a demand surge for medical
gloves (Finkenstadt et al., 2020). In a single 100-day wave during early 2020,
the estimated need for medical gloves was 3.939 billion (Toner, 2020),
followed by a subsequent official guideline on conservation and optimizing
usage of gloves during medical practice in the U.S. (CDC, 2020). It has become
clear that the scarcity of medical gloves calls for a boost in the supply
chain to the extent of better coordination and stimulus effort. Commonly used
medical glove materials include latex, made from natural rubber, and nitrile,
made from petroleum-based materials (Anedda et al., 2020; Henneberry, 2020).
This example illustrates a resource-duo supply chain network with natural
rubber and petroleum as the resources, and medical gloves as their end-
products. Specifically, the network contains 2 owners, 2 producers, and 2
retailers, as in Figure 2. The corresponding end-products, latex and nitrile
gloves, are shipped to 2 demand markets, medical and residential facilities,
via 1 available transportation mode. We use this example as our benchmark
case, in which the supply chain network has unlimited resources and no fiscal
policies are imposed.
Figure 2: A resource-duo network with medical gloves supply chain
We will continue to use the same topology in examples 1.2-1.5, with variations
in settings. We examine the output quantities of firms, market prices, and
welfare estimates.
The cost functions are constructed for all $i=1,...,I$ and $j=1,...,J$, with
$I=J=N_{i}=M_{j}=K_{j}=2$ and $T_{j}=1$, as the following.
$\displaystyle
f^{1n}(x^{1n})=2.5(\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{1n}_{jm})^{2}+(\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{1n}_{jm})(\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{2n}_{jm})+2\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{1n}_{jm},$
$\displaystyle
f^{2n}(x^{2n})=0.5(\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{2n}_{jm})^{2}+(\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{1n}_{jm})(\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{2n}_{jm})+2\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{2n}_{jm},$
$\displaystyle c^{jm}_{s}(x^{jm}_{s})=0.5(x^{jm}_{s})^{2},\qquad
f^{js}(x^{js})=0.1(\sum^{M_{j}}_{m=1}x^{jm}_{s})^{2},\qquad\hat{c}^{js}_{tk}(x^{js}_{tk})=0.01x^{js}_{tk},$
$\displaystyle c^{11}_{11}=0.5(x^{11}_{11})^{2}+3.5x^{11}_{11},\quad
c^{11}_{12}=0.5(x^{11}_{12})^{2}+3.5x^{11}_{12},\quad
c^{12}_{11}=0.5(x^{12}_{11})^{2}+2x^{12}_{11},\quad
c^{12}_{12}=0.5(x^{12}_{12})^{2}+2x^{12}_{12},$ $\displaystyle
c^{21}_{11}=0.4(x^{21}_{11})^{2}+3.5x^{21}_{11},\quad
c^{21}_{12}=0.4(x^{21}_{12})^{2}+3.5x^{21}_{12},\quad
c^{22}_{11}=0.4(x^{22}_{11})^{2}+2x^{22}_{11},\quad
c^{22}_{12}=0.4(x^{22}_{12})^{2}+2x^{22}_{12}.$
All other costs are set to zero. The price-demand functions at the markets
are:
$\displaystyle p^{j}_{3k}(d_{jk})=-d_{jk}+300,\quad\forall j,k.$
In addition, the conversion rates of production by resource producers are
$\psi^{in}_{jm}=0.9$. The weights of resource commodities at the markets are
$w_{11}=0.5,w_{12}=0.5$. The parameters concerning the resource capacities and
policy instruments, $U_{i}$, $A^{i}_{g}$, $B^{i}_{g}$, are all set to a
sufficiently large number, reflecting their absence in this benchmark.
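With the linear inverse demand $p^{j}_{3k}(d_{jk})=-d_{jk}+300$ used throughout these examples, consumer surplus at an equilibrium demand is the usual triangle area under the demand curve above the market price. The sketch below is illustrative only; the paper does not spell out its exact welfare accounting, so treat the formula as an assumption for linear demand.

```python
def consumer_surplus(d_star, slope=1.0):
    """Triangle-area consumer surplus for p(d) = intercept - slope * d.

    For a linear inverse demand, CS = 0.5 * slope * d*^2, since the
    triangle has base d* and height (intercept - p(d*)) = slope * d*.
    """
    return 0.5 * slope * d_star ** 2

# Example input: equilibrium demand d*_{11} = 15.98 reported in Table 3.
cs = consumer_surplus(15.98)
```

Summing such terms over all product-market pairs gives a consumer-side welfare figure; the tables in the examples additionally aggregate firm profits and, where applicable, net incentives.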
We initialize the algorithm by setting all the flow quantities to 1, the
step-size $\varphi$ to 0.01 (unless noted otherwise), and the convergence
tolerance $\epsilon$ to $10^{-4}$. The computation process takes
approximately 2.0 seconds. We display the solution in Table
LABEL:table:eg1.999The complete result of all examples is available at:
https://github.com/Pergamono/SRSCN. Among the equilibrium solution, the zero
values of $\delta^{in}_{g}$ and $\delta^{jm}_{g}$ simply reaffirm the
incentives to be flat-rate, whereas the zero values of all Lagrange
multipliers suggest that the corresponding constraints are inactive.
Table 3: The equilibrium solution of example 1.1 Variable | Result | Variable | Result | Variable | Result | Variable | Result
---|---|---|---|---|---|---|---
$x^{11*}_{11}$ | 8.88 | $x^{11*}_{1}$ | 13.79 | $\delta^{11*}_{1}$ | 0.00 | $\lambda^{0*}_{1}$ | 0.00
$x^{11*}_{12}$ | 8.88 | $x^{11*}_{2}$ | 18.19 | $\delta^{12*}_{1}$ | 0.00 | $\lambda^{0*}_{2}$ | 0.00
$x^{11*}_{21}$ | 9.50 | $x^{12*}_{1}$ | 13.79 | $\delta^{21*}_{1}$ | 0.00 | $\lambda^{1*}_{11}$ | 247.28
$x^{11*}_{22}$ | 9.50 | $x^{12*}_{2}$ | 18.19 | $\delta^{22*}_{1}$ | 0.00 | $\lambda^{1*}_{12}$ | 247.28
$x^{12*}_{11}$ | 8.88 | $x^{21*}_{1}$ | 14.53 | $\delta^{11*}_{1}$ | 0.00 | $\lambda^{1*}_{21}$ | 247.29
$x^{12*}_{12}$ | 8.88 | $x^{21*}_{2}$ | 19.65 | $\delta^{12*}_{1}$ | 0.00 | $\lambda^{1*}_{22}$ | 247.29
$x^{12*}_{21}$ | 9.50 | $x^{22*}_{1}$ | 14.53 | $\delta^{21*}_{1}$ | 0.00 | $\lambda^{2*}_{11}$ | 266.59
$x^{12*}_{22}$ | 9.50 | $x^{22*}_{2}$ | 19.65 | $\delta^{22*}_{1}$ | 0.00 | $\lambda^{2*}_{12}$ | 263.64
$x^{21*}_{11}$ | 8.88 | $x^{11*}_{11}$ | 13.79 | $\mu^{0*}_{111}$ | 0.00 | $\lambda^{2*}_{21}$ | 267.64
$x^{21*}_{12}$ | 8.88 | $x^{11*}_{12}$ | 13.79 | $\mu^{0*}_{121}$ | 0.00 | $\lambda^{2*}_{22}$ | 264.97
$x^{21*}_{21}$ | 9.50 | $x^{12*}_{11}$ | 18.18 | $\mu^{0*}_{211}$ | 0.00 | $d^{*}_{11}$ | 15.98
$x^{21*}_{22}$ | 9.50 | $x^{12*}_{12}$ | 18.18 | $\mu^{0*}_{221}$ | 0.00 | $d^{*}_{12}$ | 15.98
$x^{22*}_{11}$ | 8.88 | $x^{21*}_{11}$ | 14.53 | $\mu^{1*}_{111}$ | 0.00 | $d^{*}_{21}$ | 17.10
$x^{22*}_{12}$ | 8.88 | $x^{21*}_{12}$ | 14.53 | $\mu^{1*}_{121}$ | 0.00 | $d^{*}_{22}$ | 17.10
$x^{22*}_{21}$ | 9.50 | $x^{22*}_{11}$ | 19.67 | $\mu^{1*}_{211}$ | 0.00 | |
$x^{22*}_{22}$ | 9.50 | $x^{22*}_{12}$ | 19.67 | $\mu^{1*}_{221}$ | 0.00 | |
Example 1.2: The commons101010A commons is where the natural resources are
owned and shared collectively (Ostrom, 1990). with a resource capacity limit
The natural rubber shipments from the commons of farming and harvesting, which
are concentrated in Southeast Asia, can be severely affected by external
shocks, such as natural disasters, geopolitical shifts, regulations, and
pandemics (Chou, 2020; Lee, 2020). Therefore, it bears merit to investigate
the resilience of these supply chains. We perform a sensitivity study on the
rubber capacity limit, by inheriting all settings from example 1.1, with the
additional imposition of a resource capacity on natural rubber. Acknowledging
the level of $U_{1}$ at which such a limit binds, we vary $U_{1}$ between 10
and 100.
Figure 3: Owner profits under a resource capacity
Figure 4: Total profits and welfare under a resource capacity
As the natural rubber's capacity increases, each glove producer's and
supplier's profit, as well as the consumer surplus, increases, as expected.
The petroleum miners, however, suffer from the abundance of natural rubber, as
Figure 3 shows. The total profits or surplus of each supply chain tier are displayed in
Figure 4. It is worth noting that the overall height of each stacked bar
indicates social welfare of the entire network. Interestingly, we find that
social welfare peaks at a $U_{1}$’s level of around 60, likely owing to a
similar trend of the owners’ total profit that peaks at the similar level of
$U_{1}$.
Example 1.3: Who should get the incentive, owners or producers?
In this example, we compare the scenarios in which either the natural rubber
farmers or the latex glove producers receive a fairly small flat-rate
incentive on their production quantity. As such, we inherit all settings from
example 1.1, with the additional imposition of a fiscal policy in the form of
quantity incentives. Specifically, we separately incentivize the natural
rubber farmers and the latex glove producers with a flat-rate of
$\alpha^{1}_{0}$ and $\beta^{1}_{0}$, respectively.
(a) Latex glove market price
(b) Nitrile glove market price
Figure 5: Glove market prices by flat-rate incentive
We elect to examine only the glove prices at the markets, as the rest of the
equilibrium results remain essentially unchanged. In Figure 5, we observe that
with the flat-rate incentives administered to the latex glove supply chain,
the corresponding latex glove prices at the market are reduced, whereas the
prices of the substitute product, nitrile gloves, move in the opposite direction.
Figure 6: Total profits and welfare under the flat-rate incentives Figure 7:
Social-welfare gains and policy efficiency under the flat-rate incentives
From the standpoint of the supply chain participants, the farmers, producers,
and consumers each as a tier, benefit mildly, because of the relatively small
amount of incentives. See Figure 6. The suppliers, however, enjoy a
discernible gain in total profit when the incentives are given to the rubber
farmers.
From the standpoint of the incentive administrator, it is meaningful to examine
such a fiscal policy's social benefit and efficiency. We use the Benefit-Cost
ratio, i.e., the dollar amount of social-welfare gain for every $1 incentive
administered to the system, to represent the efficiency. In Figure 7, we show
the social-welfare gains, as well as the efficiency of the incentive. It can
be seen that the efficiency suffers more severely when the incentive is given
to the resource producers.
Example 1.4: Regressive vs. flat-rate incentives
In this example, we examine how a regressive incentive policy differs from a
flat-rate one, as well as how the incentive bracket affects the system
performance. Once again, we inherit all settings from example 1.1, plus the
additional imposition of a regressive incentive policy, with $\alpha^{1}_{0}=11$
and $\alpha^{1}_{1}=-2.2$ on both natural rubber farmers, selected to
provide a sensible comparison against a flat-rate, $\alpha^{1}_{0}=10$,
incentive policy. The only incentive bracket, $A^{1}_{1}$, is left to vary as
a sensitivity-study parameter.
Figure 8: Total profits and welfare under the regressive incentives
Figure 9: Social-welfare gains and Benefit-Cost ratio under the regressive
incentives
The range of $A^{1}_{1}$ displayed in Figures 8 and 9 is selected to ensure
that each farmer's total output quantity exceeds the bracket value, so that it
takes effect. In Figure 8, we observe that the change of the regressive
incentive bracket does not influence each supply chain tier’s profit and the
consumer welfare significantly. Such a result is consistent with the findings
with respect to the effect of the “tax brackets” on firm’s profits in a study
by Yu et al. (2018). In Figure 9, a mildly increasing gain of social welfare
can be gleaned as the bracket increases, though falling short of the
comparable flat-rate policy. The dollar amount of social benefit, embodied by
the Benefit-Cost ratio, on the other hand, trends up with the bracket level.
Example 1.5: Critical resource shortage relief
In this example, we use a flat-rate incentive to relieve a latex glove
shortage caused by a demand surge at the medical facilities. First, we
construct a distressed supply chain in which the natural rubber supply emerges
as a critical shortage at the medical facilities. In doing so, we
inherit all settings from example 1.1, except the price-demand function of
latex gloves at medical facilities, which is set as
$\displaystyle p^{1}_{31}(d_{11})=-d_{11}+420.$
Our algorithm returns a solution in which the shipments from two of the
suppliers to a market, $x^{11}_{12}$ and $x^{12}_{12}$, are 0. To illustrate,
Figure 10(a) displays the topology of this disrupted supply chain, in which
the residential facilities are completely cut from the supply of latex gloves.
(a) A latex glove shortage occurs at residential
(b) Shortage relieved by producer incentives
Figure 10: Critical resource shortage relief through production stimulus
To aid such circumstances, we impose a flat-rate incentive on both latex glove
producers, with $\beta^{1}_{0}=50$, and re-run the model. Immediately, the
previously disrupted supply can be restored. We use Figure 10(b) to display
the recovered supply chain status. Similar results of shortage relief can be
achieved by imposing a flat-rate incentive on both rubber farmers.
Example 2: A mixed fiscal policy in an abstract scarce resource supply chain
In many developed economies, governments tend to be concerned with levels of
income inequality, and are, therefore, interested in making redistribution of
societal wealth a substantial objective for economic development. Thus, it is
meaningful to evaluate the utility of a mixed fiscal policy in redistributing
social welfare. In this example, we construct an abstract resource-trio
network to illustrate a mixed fiscal policy with a combination of incentives
and taxes. In the network topology shown in Figure 11, the red nodes indicate
the firms that are being taxed whereas the blue ones incentivized.
Figure 11: A resource-trio supply chain network with a mixed fiscal policy
The cost functions are constructed, for all $i=1,...,I$ and $j=1,...,J$, with
$I=J=3,N_{i}=M_{j}=K_{j}=2$ and $T_{j}=1$, as the following.
$\displaystyle f^{11}=2.5q_{11}^{2}+q_{11}(q_{21}+q_{31})+2q_{11},\quad
f^{12}=2.5q_{12}^{2}+q_{12}(q_{22}+q_{32})+2q_{12},$ $\displaystyle
f^{13}=2.5q_{13}^{2}+q_{13}(q_{23}+q_{33})+2q_{13},\quad
f^{21}=0.5q_{21}^{2}+q_{21}(q_{11}+q_{31})+2q_{21},$ $\displaystyle
f^{22}=0.5q_{22}^{2}+q_{22}(q_{12}+q_{32})+2q_{22},\quad
f^{23}=0.5q_{23}^{2}+q_{23}(q_{13}+q_{33})+2q_{23},$ $\displaystyle
f^{31}=0.5q_{31}^{2}+q_{31}(q_{11}+q_{21})+2q_{31},\quad
f^{32}=0.5q_{32}^{2}+q_{32}(q_{12}+q_{22})+2q_{32},$ $\displaystyle
f^{33}=0.5q_{33}^{2}+q_{33}(q_{13}+q_{23})+2q_{33},\quad where,\quad
q_{in}=\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{in}_{jm};$ $\displaystyle
c^{11}_{1}=0.5(x^{11}_{1})^{2},\quad c^{11}_{2}=0.25(x^{11}_{2})^{2},\quad
c^{12}_{1}=0.5(x^{12}_{1})^{2},\quad c^{12}_{2}=0.25(x^{12}_{2})^{2},$
$\displaystyle c^{21}_{1}=0.5(x^{21}_{1})^{2},\quad
c^{21}_{2}=0.25(x^{21}_{2})^{2},\quad c^{22}_{1}=0.5(x^{22}_{1})^{2},\quad
c^{22}_{2}=0.25(x^{22}_{2})^{2},$ $\displaystyle
c^{31}_{1}=0.5(x^{31}_{1})^{2},\quad c^{31}_{2}=0.25(x^{31}_{2})^{2},\quad
c^{32}_{1}=0.5(x^{32}_{1})^{2},\quad c^{32}_{2}=0.25(x^{32}_{2})^{2};$
$\displaystyle f^{11}=0.1(x^{11}_{1}+x^{12}_{1})^{2},\quad
f^{12}=0.1(x^{11}_{2}+x^{12}_{2})^{2},$ $\displaystyle
f^{21}=0.1(x^{21}_{1}+x^{22}_{1})^{2},\quad
f^{22}=0.1(x^{21}_{2}+x^{22}_{2})^{2},$ $\displaystyle
f^{31}=0.1(x^{31}_{1}+x^{32}_{1})^{2},\quad
f^{32}=0.1(x^{31}_{2}+x^{32}_{2})^{2};$ $\displaystyle
c^{11}_{11}=0.5(x^{11}_{11})^{2}+3.5x^{11}_{11},\quad
c^{11}_{12}=0.5(x^{11}_{12})^{2}+3.5x^{11}_{12},$ $\displaystyle
c^{12}_{11}=0.5(x^{12}_{11})^{2}+2x^{12}_{11},\qquad
c^{12}_{12}=0.5(x^{12}_{12})^{2}+2x^{12}_{12},$ $\displaystyle
c^{21}_{11}=0.4(x^{21}_{11})^{2}+3.5x^{21}_{11},\quad
c^{21}_{12}=0.4(x^{21}_{12})^{2}+3.5x^{21}_{12},$ $\displaystyle
c^{22}_{11}=0.4(x^{22}_{11})^{2}+2x^{22}_{11},\qquad
c^{22}_{12}=0.4(x^{22}_{12})^{2}+2x^{22}_{12},$ $\displaystyle
c^{31}_{11}=0.45(x^{31}_{11})^{2}+3.5x^{31}_{11},\quad
c^{31}_{12}=0.45(x^{31}_{12})^{2}+3.5x^{31}_{12},$ $\displaystyle
c^{32}_{11}=0.45(x^{32}_{11})^{2}+2x^{32}_{11},\qquad
c^{32}_{12}=0.45(x^{32}_{12})^{2}+2x^{32}_{12};$
$\displaystyle\hat{c}^{js}_{tk}(x^{js}_{tk})=0.1x^{js}_{tk}.$
All other costs are set to zero. The price-demand functions at the markets are
$\displaystyle p^{j}_{3k}(d_{jk})=-d_{jk}+300,\quad\forall j,k.$
Similar to examples 1.1-1.5, we set the production conversion rates
$\psi^{in}_{jm}=0.9$, the market resource commodity weights
$w_{11}=0.5,w_{12}=0.5$, and the parameters concerning the resource
capacities, $U_{i}$, sufficiently large. In contrast to the previous examples,
here we set the step-size $\varphi$ to $10^{-5}$ and the convergence tolerance
$\epsilon$ to $6\times 10^{-4}$.
To examine the efficacy of this mixed fiscal policy, we first establish the
equilibrium of the benchmark scenario, i.e., the setting without such policy.
As such, we present the results in Table LABEL:table:eg6. With the ex-ante
knowledge that the resource owners capture most of the supply chain profit, we
then impose such a policy in which the producers of resource 1 are given a
flat-rate incentive of $\beta^{1}_{0}=12$, whereas the owners of resource 1
and producers of resource 3 are charged a flat-rate tax of
$\alpha^{1}_{0}=-10$ and $\beta^{3}_{0}=-2$, respectively.
The projection method takes approximately 120 seconds for this problem of a
total of 99 variables to converge to the preset tolerance. We include all
equilibrium results in the supplemental file while displaying only the profit-
related outcome in Table LABEL:table:eg6 again. It is worth pointing out that
the "net incentive" is the total incentive disbursed by the government net of
the total taxes collected.
Table 4: Profit and welfare results of example 2 Profit | | Benchmark | With policy | Welfare | | Benchmark | With policy
---|---|---|---|---|---|---|---
Owner | $\pi_{11}$ | 2573.50 | 2359.58 | Consumer | $CS_{11}$ | 104.26 | 141.64
Owner | $\pi_{12}$ | 2573.50 | 2700.43 | Consumer | $CS_{12}$ | 120.11 | 97.84
Owner | $\pi_{21}$ | 2573.50 | 2700.43 | Consumer | $CS_{21}$ | 107.24 | 77.30
Owner | $\pi_{22}$ | 2573.50 | 2359.58 | Consumer | $CS_{22}$ | 104.26 | 141.64
Owner | $\pi_{31}$ | 2573.50 | 2700.43 | Consumer | $CS_{31}$ | 120.11 | 97.84
Owner | $\pi_{32}$ | 2573.50 | 2700.43 | Consumer | $CS_{32}$ | 107.24 | 77.30
Producer | $\pi_{11}$ | 248.25 | 340.20 | Tot. owner | $\pi_{total}$ | 15441.00 | 15520.90
Producer | $\pi_{12}$ | 433.20 | 345.63 | Tot. producer | $\pi_{total}$ | 1977.61 | 1963.28
Producer | $\pi_{21}$ | 307.36 | 295.82 | Tot. supplier | $\pi_{total}$ | 1825.59 | 667.13
Producer | $\pi_{22}$ | 248.25 | 340.20 | Tot. consumer | $CS_{total}$ | 663.21 | 633.57
Producer | $\pi_{31}$ | 433.20 | 345.63 | Soc. welfare | $SW$ | 19907.42 | 18784.87
Producer | $\pi_{32}$ | 307.36 | 295.82 | Net Incentive | | 0.00 | 99.15
Supplier | $\pi_{11}$ | 323.04 | 175.60 | Soc. welfare gain | $\Delta SW$ | 0.00 | -1122.55
Supplier | $\pi_{12}$ | 302.42 | 60.78 | Benefit-to-cost | BC | - | -11.32
Supplier | $\pi_{21}$ | 255.43 | 95.10 | | | |
Supplier | $\pi_{22}$ | 319.75 | 182.51 | | | |
Supplier | $\pi_{31}$ | 291.83 | 58.54 | | | |
Supplier | $\pi_{32}$ | 333.12 | 94.60 | | | |
In Figure 12, we observe that under the mixed fiscal policy, the directly
affected firms, i.e., the top two tiers of the network, do not consistently
respond to the intended policy elements, as there are both increases and
decreases of profit on both the incentivized and taxed firms. On the bottom
two tiers of the network, the supplier profits drop unanimously, whereas the
distribution of consumer surplus across the markets becomes more uneven.
Figure 12: Profits and welfare under a mixed fiscal policy Figure 13: Total
profits and welfare under a mixed fiscal policy
Overall, with a positive net incentive of \$99.15 administered to the network,
we observe, however, that social welfare changes by a negative amount, owing to
the significant decrease in supplier profits. See Table LABEL:table:eg6 and Figure
13.
## 8 Managerial implications
Government plays a key role in maintaining a balance between the firm’s
profitability, consumer’s well-being, supply chain’s overall health, and
society’s sustainable growth. Yet in the face of widespread resource shortages
that stifle the economic growth in supply chains, balancing the tradeoffs
between the goals has been enormously challenging for governments. The
proposed cross-sector multi-product scarce resource supply chain network model
can serve as a support system for both the supply chain practitioners and
governments to make operational and strategic planning decisions. With a
particular focus on the practices that can be adopted by the government to help
address the surging scarcity and achieve overall sustainability, we herein
distill the following managerial implications.
1. 1.
From the viewpoint of designing a stimulative fiscal policy, the legislator
will likely face a tradeoff between the choice of a flat-rate policy and a
regressive one. Our analysis shows that the flat-rate incentive generally
performs more effectively than its regressive counterpart when administered to
the resource owners in stimulating supply chain welfare. This result is in
general agreement with a dual study by Yu et al. (2018), in which they pointed
out the strength of flat-rate tax when imposed to discourage adverse
environmental activities. This result is also supported by the advantages of
approximately linear tax in maintaining welfare (Mirrlees, 1971). Furthermore,
as a case in point, we also find that if the legislator's focus of policy
design shifts to its efficiency, i.e., the social-welfare gain on every
dollar of fiscal investment, then a carefully designed two-bracket regressive
incentive can be more advantageous in policy efficiency than its flat-rate
counterpart. That is, with a large enough “bracket”, the legislator may be
able to devise a regressive incentive scheme that can achieve a higher
Benefit-Cost ratio, and almost as high of a net welfare gain as the flat-rate
scheme. Though delicate, the value of such a bracket can be practically
narrowed down by determining the largest effective bracket, followed by
determining the largest achievable net welfare gain.
2. 2.
For the choice of incentive recipients in a supply chain, i.e., the resource
owners or the resource producers, the decision-makers should anticipate a
tradeoff. When the incentives are administered to the owners, both the social
welfare gain and the policy efficiency will remain at a relatively high level.
When the incentives are administered to the producers, the supplier’s profit
will enjoy the most gains, whereas the market price of the affected products
will be reduced significantly. In practice, the market price of a scarce
resource not only influences the consumer’s behaviors, but also serves as a
key indicator for the broader economy. Under a supply chain disruption, we
suggest that the governments first administer an appropriate amount of
incentives to the producers so that the commodity prices can be reduced at the
market level in short term. We note that, at the same time, any commodities
experiencing shortage at the market level may also be relieved by the same
incentives. Once the cause for the disruption has subsided and the supply
chain’s performance measures stabilized, governments can then reelect the
appropriate supply chain tier(s) as the new recipients of the next round of
incentives, depending on the economic objectives and legislative priorities.
3. 3.
“Income inequality” is believed to hinder economic growth. Using fiscal policy
to reduce income gaps has become the goal of many advanced economies (Coady
and Gupta, 2012). In our observation, the large profit difference across the
supply chain tiers entices a growth-minded government to pursue a
redistributive strategy for social surplus. It has been found, however, that
the redistribution of welfare among resource owners and producers via a mixed
fiscal policy may result in a net loss of welfare. While we acknowledge that
such a finding echoes arguments against the dominant social-policy notion
that the resources generated by economic growth should be redistributed to
fund social programs (Midgley, 1999), it is pertinent to note that our
analytical takeaway is derived from a microeconomic framework with standard
assumptions. Nonetheless, we note further that the welfare loss caused by our
experimental mixed fiscal policy has previously been associated with the
elasticity of demand in classic oligopoly theory (Worcester, 1975). It is
also widely acknowledged in the literature that taxation can be ineffective
in reducing wealth inequalities, contrary to what conventional wisdom would
anticipate (Mirrlees, 1971). As demonstrated above, our proposed fiscal
policy can effectively relieve resource shortages and stimulate growth in
supply chains. But for the governments or practitioners who oversee supply
chain grand strategies in a post-crisis stage, we caution that a well-intended
fiscal intervention may derail the overall economic sustainability of
society. We advise that the use of a mixed fiscal policy in supply chains
follow the principle of configuring a mild, less redistributive policy, as is
recognized in Coady and Gupta (2012).
4.
Finally, we note an interesting link between resource capacity and welfare
within a competitive supply chain. In the related literature, Chen and Chou
(2008) imposed firm-wise resource capacity in their supply chain and found
that a capacity limit restricts welfare. In both Nagurney et al. (2019b) and
Hu and Qiang (2013), although a shared capacity limit was modeled, neither
study associated such a limit with its impact on social welfare. The current study,
however, proffers further results. We find that the capacity limit of a given
type of scarce (more often, natural) resource does not strictly curtail social
welfare. Rather, an appropriate level of the limit can even benefit the social
outcome. In practice, the ownership of and the right to use a natural
resource as a shared public good are often governed by the respective common
laws or local policies (public goods have been a widely discussed topic in
studies of economics and law; see Réaume (1988), Holcombe (1997), and the
references therein). Thus, if a natural resource of critical importance falls
within a government’s jurisdiction, it is advisable for the government to
legislate and impose a mild restriction on the usage of such natural
resources. The
restriction, if properly selected, will not only preserve the quantity of the
shared resource but also create a “sweet spot” that induces higher social
welfare.
Admittedly, the above managerial implications are based on our stylized
numerical experiments and are thus limited by the characteristics of the
network, e.g., the competitive nature of the nodes, the substitutability of
the flows, and the mix of policies. Moreover, given the number of features
incorporated in our model, it is probable that the interactions among
multiple features could yield more profound managerial implications than
those we have uncovered. Nonetheless, practitioners and decision-makers
should carefully verify and validate the premises of the model before
extrapolating the aforementioned insights.
## 9 Conclusions
Prima facie, the munificence of scarce resources underpins the sustainability
and growth of individual firms and societies, and the flourishing of
humanity. Yet, in this age of intensifying societal changes, shocks, crises,
and interconnectivity, conflict and competition over scarce resources and
products have become fiercer. Fiscal policy remains the most common
governmental policy instrument for relieving supply shortages and stimulating
economies.
In this paper, our contributions to the literature on scarce resources and
supply chain networks include the following.
We construct the first general decentralized cross-sector scarce resource SCNE
model with a unifying supply-side fiscal policy. We provide a rigorous VI
formulation for the governing equilibrium conditions of the network model.
Such a substitute network provides a versatile tool for the evaluation of
profit, welfare, policy instruments, cost structure, transportation,
conservation, competition, and interdependence of resources throughout the
supply chains. The generality of our model also allows for a variety of
extensions, e.g., dynamics, stochastic features, multi-criteria decision-
making, and disequilibrium behaviors, to be furnished.
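VI formulations of this kind are commonly solved with projection-type schemes
such as the extragradient method (Korpelevich, 1976). The following is a
minimal sketch for a generic monotone VI $\langle F(x^*), x - x^*\rangle \geq
0$ over the nonnegative orthant; the affine map below is an illustrative
stand-in, not the paper's network operator:

```python
import numpy as np

# Illustrative affine map F(x) = Ax + b; the symmetric part of A is
# positive definite, so F is strongly monotone (assumed example only).
A = np.array([[3.0, 1.0], [-1.0, 2.0]])
b = np.array([-2.0, -1.0])

def F(x):
    return A @ x + b

def project(x):
    # Projection onto the feasible set K = nonnegative orthant.
    return np.maximum(x, 0.0)

x = np.zeros(2)
tau = 0.1                                  # step size, below 1/L for Lipschitz L
for _ in range(2000):
    y = project(x - tau * F(x))            # predictor step
    x_new = project(x - tau * F(y))        # corrector step
    if np.linalg.norm(x_new - x) < 1e-10:
        x = x_new
        break
    x = x_new

# At a solution: x >= 0, F(x) >= 0, and x_i * F(x)_i = 0 (complementarity).
```

For this example the solution is interior, so the VI reduces to F(x) = 0; the
iterates converge to x = (3/7, 5/7).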
Second, our model is also a generalized Nash equilibrium problem, and we
formulate the GNE of the network model as a VI. Only a few GNEP studies in
the SCNE literature incorporate fiscal policy. The utility of this model is
not limited to scarce resource supply chains; it applies to any resource
commodity that exhibits the aforementioned characteristics of scarce
resources.
Third, from a technical aspect, we introduce a recently uncovered approach to
characterizing the network equilibrium by adopting a novel set of theoretical
properties, including $\lambda_{min}$. To the best of our knowledge, such a
means of characterizing the uniqueness of a network equilibrium has yet to
appear in the supply chain network literature.
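Assuming $\lambda_{min}$ denotes the smallest eigenvalue of the symmetrized
Jacobian of the VI map (a standard device: a positive lower bound over the
feasible set implies strong monotonicity and hence a unique equilibrium), the
check can be sketched as follows, again with an illustrative affine map:

```python
import numpy as np

# Assumption: lambda_min is the smallest eigenvalue of the symmetrized
# Jacobian (J + J^T)/2 of the VI map F. If lambda_min > 0 everywhere on
# the feasible set, F is strongly monotone and the equilibrium is unique.
A = np.array([[3.0, 1.0], [-1.0, 2.0]])   # Jacobian of the affine map F(x) = Ax + b

def lambda_min_of(J):
    sym = 0.5 * (J + J.T)
    return np.linalg.eigvalsh(sym).min()   # eigvalsh: symmetric eigensolver

lam = lambda_min_of(A)                     # affine map: Jacobian is constant
unique = lam > 0                           # strong monotonicity => uniqueness
```

For a nonlinear map, the same test would be applied to the Jacobian sampled
over the feasible set, with the infimum of the computed values serving as the
monotonicity constant.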
Lastly, we furnish the model with numerical studies and extract managerial
insights that provide governments, resource owners, and firms with useful
advice on expansion, cost restructuring, resource conservation,
competition/collaboration strategy, shortage handling, and post-crisis
stimulation. In particular, we provide guidance on supply-side policy design
and administration in relieving and stimulating the PPE shortage caused by the
COVID-19 global pandemic. Our findings also enrich the political discussion on
public resource legislation, income inequality, and sustainable development.
We anticipate that the extension of this model can shed light on the
stimulation and relief effort on vaccine distribution and economic recovery.
## References
* Alizamir et al. (2019) Alizamir, S., Iravani, F., and Mamani, H. (2019). An analysis of price vs. revenue protection: Government subsidies in the agriculture industry. Management Science, 65(1):32–49.
* Anedda et al. (2020) Anedda, J., Ferreli, C., Rongioletti, F., and Atzori, L. (2020). Changing gears: Medical gloves in the era of coronavirus disease 2019 pandemic. Clinics in Dermatology, 38(6):734–736.
* Arestis and Sawyer (2003) Arestis, P. and Sawyer, M. (2003). Reinventing fiscal policy. Journal of Post Keynesian Economics, 26(1):3–25.
* Arifoğlu and Tang (2022) Arifoğlu, K. and Tang, C. S. (2022). A two-sided incentive program for coordinating the influenza vaccine supply chain. Manufacturing & Service Operations Management, 24(1):235–255.
* Arrow and Kruz (2013) Arrow, K. J. and Kruz, M. (2013). Public investment, the rate of return, and optimal fiscal policy. Taylor & Francis.
* Bai et al. (2016) Bai, Y., Ouyang, Y., and Pang, J.-S. (2016). Enhanced models and improved solution for competitive biofuel supply chain design under land use constraints. European Journal of Operational Research, 249(1):281–297.
* Bakker et al. (2018) Bakker, C., Zaitchik, B. F., Siddiqui, S., Hobbs, B. F., Broaddus, E., Neff, R. A., Haskett, J., and Parker, C. L. (2018). Shocks, seasonality, and disaggregation: Modelling food security through the integration of agricultural, transportation, and economic systems. Agricultural Systems, 164:165–184.
* Bazilian et al. (2011) Bazilian, M., Rogner, H., Howells, M., Hermann, S., Arent, D., Gielen, D., Steduto, P., Mueller, A., Komor, P., Tol, R. S., and Yumkella, K. K. (2011). Considering the energy, water and food nexus: Towards an integrated modelling approach. Energy Policy, 39(12):7896–7906.
* Belhaj and Deroïan (2013) Belhaj, M. and Deroïan, F. (2013). Strategic interaction and aggregate incentives. Journal of Mathematical Economics, 49(3):183–188.
* Bell et al. (2012) Bell, J. E., Autry, C. W., Mollenkopf, D. A., and Thornton, L. M. (2012). A natural resource scarcity typology: Theoretical foundations and strategic implications for supply chain management. Journal of Business Logistics, 33:158–166.
* Bertsekas and Tsitsiklis (1989) Bertsekas, D. P. and Tsitsiklis, J. N. (1989). Parallel and Distributed Computation: Numerical Methods. Prentice Hall.
* Besik and Nagurney (2017) Besik, D. and Nagurney, A. (2017). Quality in competitive fresh produce supply chains with application to farmers’ markets. Socio-Economic Planning Sciences, 60:62–76.
* Biggs et al. (2015) Biggs, E. M., Bruce, E., Boruff, B., Duncan, J. M. A., Horsley, J., Pauli, N., McNeill, K., Neef, A., Van Ogtrop, F., Curnow, J., and Others (2015). Sustainable development and the water-energy-food nexus: A perspective on livelihoods. Environmental Science & Policy, 54:389–397.
* Blinder (2021) Blinder, A. (2021). Biden’s Plan Encourages True Supply-Side Economics. https://www.wsj.com/articles/bidens-plan-encourages-true-supply-side-economics-11621982082. Accessed: May 1, 2022.
* Bramoullé et al. (2014) Bramoullé, Y., Kranton, R., and D’amours, M. (2014). Strategic interaction and networks. American Economic Review, 104(3):898–930.
* Braunerhjelm (2022) Braunerhjelm, P. (2022). Rethinking stabilization policies; Including supply-side measures and entrepreneurial processes. Small Business Economics, 58:1–21.
* Calvo-Armengol and Jackson (2004) Calvo-Armengol, A. and Jackson, M. O. (2004). The effects of social networks on employment and inequality. American Economic Review, 94(3):426–454.
* Caniëls and Gelderman (2007) Caniëls, M. C. and Gelderman, C. J. (2007). Power and interdependence in buyer supplier relationships: A purchasing portfolio approach. Industrial Marketing Management, 36(2):219–229.
* CDC (2020) CDC (2020). Strategies for Optimizing the Supply of Disposable Medical Gloves. https://www.cdc.gov/coronavirus/2019-ncov/hcp/ppe-strategy/gloves.html. Accessed: May 1, 2022.
* Chen et al. (2020) Chen, D., Ignatius, J., Sun, D., Goh, M., and Zhan, S. (2020). Pricing and equity in cross-regional green supply chains. European Journal of Operational Research, 280(3):970–987.
* Chen and Chou (2008) Chen, H.-K. and Chou, H.-W. (2008). Supply chain network equilibrium problem with capacity constraints. Papers in Regional Science, 87(4):605–621.
* Chen and Tang (2015) Chen, Y.-J. and Tang, C. S. (2015). The economic value of market information for farmers in developing economies. Production and Operations Management, 24(9):1441–1452.
* Chintapalli and Tang (2021) Chintapalli, P. and Tang, C. S. (2021). The value and cost of crop minimum support price: Farmer and consumer welfare and implementation cost. Management Science, 67(11):6839–6861.
* Chou (2020) Chou, B. (2020). Glove Manufacturers: Conserve PPE During Widespread Shortage. https://www.ehstoday.com/ppe/article/21127107/glove-manufacturers-conserve-ppe-during-widespread-shortage. Accessed: May 1, 2022.
* Coady and Gupta (2012) Coady, M. D. and Gupta, M. S. (2012). Income inequality and fiscal policy (2nd edition). IMF Staff Discussion Note, 2012(9):37.
* Corbett (2013) Corbett, M. (2013). Oil Shock of 1973–74. https://www.federalreservehistory.org/essays/oil-shock-of-1973-74. Accessed: May 1, 2022.
* Cournot (1838) Cournot, A. (1838). Researches into the mathematical principles of the theory of wealth, English translation. Macmillan, London.
* Dafermos and Nagurney (1984) Dafermos, S. and Nagurney, A. (1984). Sensitivity Analysis for the General Spatial Economic Equilibrium Problem. Operations Research, 32(5):1069–1086.
* Davig and Leeper (2011) Davig, T. and Leeper, E. M. (2011). Monetary–fiscal policy interactions and fiscal stimulus. European Economic Review, 55(2):211–227.
* Dutta and Nagurney (2019) Dutta, P. and Nagurney, A. (2019). Multitiered blood supply chain network competition: Linking blood service organizations, hospitals, and payers. Operations Research for Health Care, 23:100230.
* Economist (2021) Economist (2021). The world economy’s shortage problem. https://www.economist.com/leaders/2021/10/09/the-world-economys-shortage-problem. Accessed: May 1, 2022.
* Facchinei and Kanzow (2007) Facchinei, F. and Kanzow, C. (2007). Generalized Nash equilibrium problems. Annals of Operations Research, 175(1):177–211.
* Facchinei and Pang (2003) Facchinei, F. and Pang, J.-S. (2003). Finite-dimensional variational inequalities and complementarity problems: Volume I. Springer, Netherlands.
* FDIC (1997) FDIC (1997). Banking and the Agricultural Problems of the 1980s. History of the Eighties - Lessons for the Future, pages 259–290.
* Finkenstadt et al. (2020) Finkenstadt, D. J., Handfield, R., and Guinto, P. (2020). Why the U.S. Still Has a Severe Shortage of Medical Supplies. Harvard Business Review. Publication Date: September 17, 2020.
* Frey and Jegen (2001) Frey, B. S. and Jegen, R. (2001). Motivation crowding theory. Journal of Economic Surveys, 15(5):589–611.
* Friedman (2009a) Friedman, G. (2009a). The Next 100 Years: A Forecast for the 21st Century. Knopf Doubleday Publishing Group.
* Friedman (2009b) Friedman, T. L. (2009b). Hot, flat, and crowded 2.0: Why we need a green revolution-and how it can renew America. Picador.
* Gabay and Moulin (1980) Gabay, D. and Moulin, H. (1980). On the uniqueness and stability of Nash-equilibria in noncooperative games. In Applied stochastic control in econometrics and management science, pages 271–293. North-Holland Publ. Co., Amsterdam, The Netherlands.
* Gasparro et al. (2020) Gasparro, A., Smith, J., and Kang, J. (2020). Grocers Stopped Stockpiling Food. Then Came Coronavirus. https://www.wsj.com/articles/grocers-stopped-stockpiling-food-then-came-coronavirus-11584982605. Accessed: May 1, 2022.
* Gilding (2012) Gilding, P. (2012). The Great Disruption: How the Climate Crisis Will Transform the Global Economy. Bloomsbury Publishing.
* Gourinchas (2020) Gourinchas, P.-O. (2020). Flattening the pandemic and recession curves. In Mitigating the COVID Economic Crisis: Act Fast and Do Whatever, page 227. CEPR Press, London.
* Guoyi et al. (2020) Guoyi, X., Caiquan, D., Yubin, Z., and Yunhui, Z. (2020). Research on Multi-Period Closed-Loop Supply Chain Network Equilibrium Based on Consumers’ Preference for Products. International Journal of Information Systems and Supply Chain Management (IJISSCM), 13(4):68–94.
* Harker (1991) Harker, P. T. (1991). Generalized Nash games and quasi-variational inequalities. European Journal of Operational Research, 54(1):81–94.
* Harker and Pang (1990) Harker, P. T. and Pang, J. S. (1990). Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications. Mathematical Programming, 48(1-3):161–220.
* Henneberry (2020) Henneberry, B. (2020). How to Make Medical Gloves for Coronavirus/COVID-19. https://www.thomasnet.com/articles/other/how-to-make-medical-gloves/. Accessed: May 1, 2022.
* Hoff (2011) Hoff, H. (2011). Understanding the nexus: Background paper for the Bonn 2011 Nexus Conference. Technical report, Stockholm University, Stockholm.
* Holcombe (1997) Holcombe, R. G. (1997). A theory of the theory of public goods. The Review of Austrian Economics, 10(1):1–22.
* Hu et al. (2019) Hu, M., Liu, Y., and Wang, W. (2019). Socially beneficial rationality: The value of strategic farmers, social entrepreneurs, and for-profit firms in crop planting decisions. Management Science, 65(8):3654–3672.
* Hu et al. (2021) Hu, X., Jang, J., Hamoud, N., and Bajgiran, A. (2021). Strategic Inventories in a Supply Chain with Downstream Cournot Duopoly. International Journal of Operational Research, 42(4):524–542.
* Hu and Qiang (2013) Hu, Y. and Qiang, Q. (2013). An equilibrium model of online shopping supply chain networks with service capacity investment. Service Science, 5(3):238–248.
* Huck et al. (2012) Huck, S., Kübler, D., and Weibull, J. (2012). Social norms and economic incentives in firms. Journal of Economic Behavior & Organization, 83(2):173–185.
* Hunt (2000) Hunt, S. D. (2000). A General Theory of Competition. Journal of Macromarketing, 20(1):77–81.
* IMF (2021) IMF (2021). Fiscal Monitor: Strengthening the Credibility of Public Finances. Technical report, Washington, October.
* Jackson and Zenou (2015) Jackson, M. O. and Zenou, Y. (2015). Games on networks. In Handbook of game theory with economic applications, volume 4, pages 95–163. Elsevier.
* Jiang et al. (2021) Jiang, Z.-Z., He, N., and Huang, S. (2021). Government penalty provision and contracting with asymmetric quality information in a bioenergy supply chain. Transportation Research Part E: Logistics and Transportation Review, 154:102481.
* Jones (2021) Jones, Z. (2021). California records driest year since 1924. https://www.cbsnews.com/news/california-drought-dry-water-year/. Accessed: May 1, 2022.
* Kazaz et al. (2016) Kazaz, B., Webster, S., and Yadav, P. (2016). Interventions for an artemisinin-based malaria medicine supply chain. Production and Operations Management, 25(9):1576–1600.
* Keynes (1936) Keynes, J. M. (1936). The General Theory of Employment, Interest and Money. Harcourt, Brace.
* Kinderlehrer and Stampacchia (1980) Kinderlehrer, D. and Stampacchia, G. (1980). An Introduction to Variational Inequalities and Their Applications. SIAM, New York.
* Korpelevich (1976) Korpelevich, G. M. (1976). The extragradient method for finding saddle points and other problems. Ekonomika i Matematicheskie Metody, 12:747–756.
* Krautkraemer (1998) Krautkraemer, J. A. (1998). Nonrenewable resource scarcity. Journal of Economic Literature, 36(4):2065–2107.
* Krautkraemer (2005) Krautkraemer, J. A. (2005). Economics of natural resource scarcity: The state of the debate. Technical report, Resources for the Future, Washington, D.C.
* Kulkarni and Shanbhag (2012) Kulkarni, A. A. and Shanbhag, U. V. (2012). On the variational equilibrium as a refinement of the generalized Nash equilibrium. Automatica, 48(1):45–55.
* Kurian (2017) Kurian, M. (2017). The water-energy-food nexus: Trade-offs, thresholds and transdisciplinary approaches to sustainable development. Environmental Science and Policy, 68:97–106.
* Lee and Billington (1993) Lee, H. L. and Billington, C. (1993). Material management in decentralized supply chains. Operations Research, 41(5):835–847.
* Lee (2020) Lee, L. (2020). Malaysia’s Top Glove sees supply shortages boosting latex glove prices. https://www.reuters.com/article/us-health-coronavirus-top-glove-supplies/malaysias-top-glove-sees-supply-shortages-boosting-latex-glove-prices-idUSKBN2850XX. Accessed: May 1, 2022.
* Levi et al. (2022) Levi, R., Singhvi, S., and Zheng, Y. (2022). Artificial shortage in agricultural supply chains. Manufacturing & Service Operations Management, 24(2):746–765.
* Liao et al. (2019) Liao, C.-N., Chen, Y.-J., and Tang, C. S. (2019). Information provision policies for improving farmer welfare in developing countries: Heterogeneous farmers and market selection. Manufacturing & Service Operations Management, 21(2):254–270.
* Long (1973) Long, R. (1973). Fiscal Policy and The Energy Crisis. Technical report, the United States Senate, Washington, D.C.
* Masoumi et al. (2012) Masoumi, A. H., Yu, M., and Nagurney, A. (2012). A supply chain generalized network oligopoly model for pharmaceuticals under brand differentiation and perishability. Transportation Research Part E: Logistics and Transportation Review, 48(4):762–780.
* Matopoulos et al. (2015) Matopoulos, A., Barros, A. C., and van der Vorst, J. G. (2015). Resource-efficient supply chains: A research framework, literature review and research agenda. Supply Chain Management, 20(2):218–236.
* Matsypura et al. (2007) Matsypura, D., Nagurney, A., and Liu, Z. (2007). Modeling of electric power supply chain networks with fuel suppliers via variational inequalities. International Journal of Emerging Electric Power Systems, 8(1).
* Melo (2018) Melo, E. (2018). A Variational Approach to Network Games. Working paper, No. 005.2018, Fondazione Eni Enrico Mattei (FEEM), Milano.
* Midgley (1999) Midgley, J. (1999). Growth, redistribution, and welfare: Toward social investment. Social Service Review, 73(1):3–21.
* Mirrlees (1971) Mirrlees, J. A. (1971). An exploration in the theory of optimum income taxation. The Review of Economic Studies, 38(2):175–208.
* Mun et al. (2021) Mun, K. G., Zhao, Y., and Rafique, R. A. (2021). Designing hydro supply chains for energy, food, and flood. Manufacturing & Service Operations Management, 23(2):274–293.
* Nagurney (1999) Nagurney, A. (1999). Network Economics: A Variational Inequality Approach. Kluwer Academic Publishers, Dordrecht, The Netherlands, second edition.
* Nagurney (2021a) Nagurney, A. (2021a). Optimization of supply chain networks with inclusion of labor: Applications to COVID-19 pandemic disruptions. International Journal of Production Economics, 235:108080.
* Nagurney (2021b) Nagurney, A. (2021b). Supply chain game theory network modeling under labor constraints: Applications to the Covid-19 pandemic. European Journal of Operational Research, 293(3):880–891.
* Nagurney et al. (2019a) Nagurney, A., Besik, D., and Dong, J. (2019a). Tariffs and quotas in world trade: A unified variational inequality framework. European Journal of Operational Research, 275(1):347–360.
* Nagurney et al. (2019b) Nagurney, A., Besik, D., and Li, D. (2019b). Strict quotas or tariffs? Implications for product quality and consumer welfare in differentiated product supply chains. Transportation Research Part E: Logistics and Transportation Review, 129:136–161.
* Nagurney et al. (2002) Nagurney, A., Dong, J., and Zhang, D. (2002). A supply chain network equilibrium model. Transportation Research Part E: Logistics and Transportation Review, 38(5):281–303.
* Nagurney and Dutta (2021) Nagurney, A. and Dutta, P. (2021). A multiclass, multiproduct Covid-19 convalescent plasma donor equilibrium model. In Operations Research Forum, volume 2, pages 1–30. Springer.
* Nagurney et al. (2017) Nagurney, A., Yu, M., and Besik, D. (2017). Supply chain network capacity competition with outsourcing: a variational equilibrium framework. Journal of Global Optimization, 69(1):231–254.
* Nash (1951) Nash, J. (1951). Non-cooperative games. Annals of Mathematics, 54:286–295.
* Nash (1950) Nash, J. F. (1950). Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, USA, 36:48–49.
* Olawuyi (2020) Olawuyi, D. (2020). Sustainable development and the water-energy-food nexus: Legal challenges and emerging solutions. Environmental Science & Policy, 103:1–9.
* Ostrom (1990) Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective action. Cambridge university press.
* Parise and Ozdaglar (2019) Parise, F. and Ozdaglar, A. (2019). A variational inequality framework for network games: Existence, uniqueness, convergence and sensitivity analysis. Games and Economic Behavior, 114:47–82.
* Pfeffer and Salancik (2003) Pfeffer, J. and Salancik, G. R. (2003). The external control of organizations: A resource dependence perspective. Stanford University Press.
* Réaume (1988) Réaume, D. (1988). Individuals, groups, and rights to public goods. University of Toronto Law Journal, 38(1):1–27.
* Romer and Romer (2021) Romer, C. D. and Romer, D. H. (2021). The fiscal policy response to the pandemic. Brookings Papers on Economic Activity, page 28.
* Rosenberg (1973) Rosenberg, N. (1973). Innovative responses to materials shortages. The American Economic Review, 63(2):111–118.
* Sadka (1976) Sadka, E. (1976). On income distribution, incentive effects and optimal income taxation. The Review of Economic Studies, 43(2):261–267.
* Samuelson (1952) Samuelson, P. A. (1952). Spatial price equilibrium and linear programming. The American Economic Review, 42(3):283–303.
* Shen and Sun (2021) Shen, Z. M. and Sun, Y. (2021). Strengthening supply chain resilience during covid-19: A case study of jd.com. Journal of Operations Management, pages 1–25. https://doi.org/10.1002/joom.1161.
* Smith (2021) Smith, N. (2021). Big Government Is the Answer to America's Supply Problems. https://www.bloomberg.com/opinion/articles/2021-06-16/big-government-is-the-answer-to-america-s-supply-problems. Accessed: May 1, 2022.
* Srai et al. (2022) Srai, J. S., Joglekar, N., Tsolakis, N., and Kapur, S. (2022). Interplay between competing and coexisting policy regimens within supply chain configurations. Production and Operations Management, 31(2):457–477.
* Takayama and Judge (1964) Takayama, T. and Judge, G. G. (1964). Spatial equilibrium and quadratic programming. American Journal of Agricultural Economics, 46(1):67–93.
* Tao et al. (2020) Tao, Y., Wu, J., Lai, X., and Wang, F. (2020). Network planning and operation of sustainable closed-loop supply chains in emerging markets: Retail market configurations and carbon policies. Transportation Research Part E: Logistics and Transportation Review, 144:102131.
* Toner (2020) Toner, E. (2020). Interim Estimate of the US PPE Needs for COVID-19. Technical report, Johns Hopkins Center for Health Security, Baltimore, Maryland.
* UN (2012) UN (2012). Renewable Resources and Conflict. Technical report, United Nations, New York.
* UN (2019) UN (2019). World Population Prospects 2019: Highlights. Technical report, United Nations, Kenya.
* Valinsky (2021) Valinsky, J. (2021). Supply chain interrupted: Here’s everything you can’t get now. https://www.cnn.com/2021/07/31/business/supply-chain-shortages-pandemic-july-2021/index.html. Accessed: May 1, 2022.
* Wagner and Light (2002) Wagner, L. A. and Light, H. M. (2002). Materials in the economy - Material flows, scarcity, and the environment. U.S. Department of the Interior, U.S. Geological Survey.
* Weber and Friedrich (1929) Weber, A. and Friedrich, C. J. (1929). Alfred Weber’s theory of the location of industries. Italy: University of Chicago Press.
* Willig (1976) Willig, R. D. (1976). Consumer’s surplus without apology. The American Economic Review, 66(4):589–597.
* Winegarden (2021) Winegarden, W. (2021). It’s Time For A Supply-Side Resurgence. https://www.forbes.com/sites/waynewinegarden/2021/05/10/its-time-for-a-supply-side-resurgence. Accessed: May 1, 2022.
* Worcester (1975) Worcester, D. A. (1975). On monopoly welfare losses: Comment. The American Economic Review, 65(5):1015–1023.
* Worland (2014) Worland, J. (2014). California’s Drought Is Now the Worst in 1,200 Years. https://time.com/3621246/california-drought-study/. Accessed: May 1, 2022.
* Wu et al. (2019) Wu, H., Xu, B., and Zhang, D. (2019). Closed-Loop Supply Chain Network Equilibrium Model with Subsidy on Green Supply Chain Technology Investment. Sustainability, 11(16):4403.
* Wu et al. (2006) Wu, K., Nagurney, A., Liu, Z., and Stranlund, J. K. (2006). Modeling generator power plant portfolios and pollution taxes in electric power supply chain networks: A transportation network equilibrium transformation. Transportation Research Part D: Transport and Environment, 11(3):171–190.
* Wu et al. (2018) Wu, X., Nie, L., Xu, M., and Yan, F. (2018). A perishable food supply chain problem considering demand uncertainty and time deadline constraints: Modeling and application to a high-speed railway catering service. Transportation Research Part E: Logistics and Transportation Review, 111:186–209.
* Xu and Choi (2021) Xu, X. and Choi, T.-M. (2021). Supply chain operations with online platforms under the cap-and-trade regulation: Impacts of using blockchain technology. Transportation Research Part E: Logistics and Transportation Review, 155:102491.
* Yang et al. (2021) Yang, Y., Goodarzi, S., Bozorgi, A., and Fahimnia, B. (2021). Carbon cap-and-trade schemes in closed-loop supply chains: Why firms do not comply? Transportation Research Part E: Logistics and Transportation Review, 156:102486.
* Yang et al. (2022) Yang, Y., Goodarzi, S., Jabbarzadeh, A., and Fahimnia, B. (2022). In-house production and outsourcing under different emissions reduction regulations: An equilibrium decision model for global supply chains. Transportation Research Part E: Logistics and Transportation Review, 157:102446.
* Young (2015) Young, M. (2015). Fiscal instruments and water scarcity. GGKP Research Committee on Fiscal Instruments, 18:2019.
* Yu et al. (2020) Yu, J. J., Tang, C. S., Sodhi, M. S., and Knuckles, J. (2020). Optimal subsidies for development supply chains. Manufacturing & Service Operations Management, 22(6):1131–1147.
* Yu et al. (2018) Yu, M., Cruz, J. M., and Li, D. M. (2018). The sustainable supply chain network competition with environmental tax policies. International Journal of Production Economics, 217:218–231.
* Zhang (2006) Zhang, D. (2006). A network economic model for supply chain versus supply chain competition. Omega, 34(3):283–295.
* Zhang et al. (2020) Zhang, G., Dai, G., Sun, H., Zhang, G., and Yang, Z. (2020). Equilibrium in supply chain network with competition and service level between channels considering consumers’ channel preferences. Journal of Retailing and Consumer Services, 57:102199.
* Zhang et al. (2021) Zhang, G., Zhang, X., Sun, H., and Zhao, X. (2021). Three-Echelon Closed-Loop Supply Chain Network Equilibrium under Cap-and-Trade Regulation. Sustainability, 13(11):6472.
* Zhang and Vesselinov (2016) Zhang, X. and Vesselinov, V. V. (2016). Energy-water nexus: Balancing the tradeoffs between two-level decision makers. Applied Energy, 183:77–87.
## Appendix A Proof of Theorem 1
Proof: First, we prove that an equilibrium according to Definition 1 coincides
with a solution of VI (1). The summation of (3.2), (3.3), (3.4), and (30),
after algebraic simplifications, yields (1).
Next, we prove the converse, that is, a solution to the VI (1) satisfies the
sum of conditions (3.2), (3.3), (3.4), and (30), and thereby, is a cross-
sector multi-product scarce resource SCNE pattern, in accordance with
Definition 1.
In (1), we begin by adding the term
$\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}(p^{in*}_{0jm}-p^{in*}_{0jm})$ to the first
summand expression over $i$ and $n$, the term
$\sum_{s=1}^{S_{j}}(p^{jm*}_{1s}-p^{jm*}_{1s})$ to the third summand
expression over $j$ and $m$, and lastly, the term $p^{js*}_{2tk}-p^{js*}_{2tk}$
to the fifth summand expression over $j$, $s$, $t$, and $k$. Since these added
terms are all identically zero, (1) continues to hold. Hence, we obtain the
following inequality:
$\displaystyle\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\Bigg{[}\frac{\partial
f^{in}(x^{in*})}{\partial x^{in}_{jm}}+\frac{\partial
f^{jm}(x^{*}_{jm})}{\partial x^{in}_{jm}}+\frac{\partial
c^{in}_{jm}(x^{in*}_{jm})}{\partial
x^{in}_{jm}}-\frac{\partial\alpha^{i}_{0}(x^{in*}_{jm})}{\partial
x^{in}_{jm}}$
$\displaystyle+\lambda^{0*}_{i}-\psi^{in}_{jm}\lambda^{1*}_{jm}+\sum_{g=1}^{G}\mu^{0*}_{ing}+(p^{in*}_{0jm}-p^{in*}_{0jm})\Bigg{]}$
$\displaystyle\times(x^{in}_{jm}-x^{in*}_{jm})$
$\displaystyle+\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{g=1}^{G}\bigg{[}-\frac{\partial\alpha^{i}_{g}(\delta^{in*}_{g})}{\partial\delta^{in}_{g}}-\mu^{0*}_{ing}\bigg{]}$
$\displaystyle\times(\delta^{in}_{g}-\delta^{in*}_{g})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{s=1}^{S_{j}}\Bigg{[}\dfrac{\partial
f^{js}(x^{js*})}{\partial x^{jm}_{s}}+\frac{\partial
c^{jm}_{s}(x^{jm*}_{s})}{\partial
x^{jm}_{s}}-\frac{\partial\beta^{j}_{0}(x^{jm*}_{s})}{\partial x^{jm}_{s}}$
$\displaystyle+\lambda^{1*}_{jm}-\lambda^{2*}_{js}+\sum_{g=1}^{G}\mu^{1*}_{jmg}+(p^{jm*}_{1s}-p^{jm*}_{1s})\Bigg{]}$
$\displaystyle\times(x^{jm}_{s}-x^{jm*}_{s})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{g=1}^{G}\bigg{[}-\frac{\partial\beta^{j}_{g}(\delta^{jm*}_{g})}{\partial\delta^{jm}_{g}}-\mu^{1*}_{jmg}\bigg{]}$
$\displaystyle\times(\delta^{jm}_{g}-\delta^{jm*}_{g})$
$\displaystyle+\sum_{j=1}^{I}\sum_{s=1}^{S_{j}}\sum_{t=1}^{T_{j}}\sum_{k=1}^{K}\bigg{[}\frac{\partial
c^{js}_{tk}(x^{js*}_{tk})}{\partial
x^{js}_{tk}}+\hat{c}^{js}_{tk}(x^{js*}_{tk})+\lambda^{2*}_{js}+(p^{js*}_{2tk}-p^{js*}_{2tk})\bigg{]}$
$\displaystyle\times(x^{js}_{tk}-x^{js*}_{tk})$
$\displaystyle-\sum^{I}_{j=1}\sum^{K}_{k=1}p^{j}_{3k}(d^{*})$
$\displaystyle\times(d_{jk}-d^{*}_{jk})$ (51)
$\displaystyle+\sum_{i=1}^{I}\bigg{[}U_{i}-\sum_{n=1}^{N_{i}}\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}x^{in*}_{jm}\bigg{]}$
$\displaystyle\times(\lambda^{0}_{i}-\lambda^{0*}_{i})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\bigg{[}\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}x^{in*}_{jm}\cdot\psi^{in}_{jm}-\sum_{s=1}^{S_{j}}x^{jm*}_{s}\bigg{]}$
$\displaystyle\times(\lambda^{1}_{jm}-\lambda^{1*}_{jm})$
$\displaystyle+\sum_{j=1}^{I}\sum_{s=1}^{S_{j}}\bigg{[}\sum_{m=1}^{M_{j}}x^{jm*}_{s}-\sum_{k=1}^{K}\sum_{t=1}^{T_{j}}x^{js*}_{tk}\bigg{]}
$\displaystyle\times(\lambda^{2}_{js}-\lambda^{2*}_{js})$
$\displaystyle+\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{g=1}^{G}\bigg{[}A^{i}_{g}-\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{in*}_{jm}+\delta^{in*}_{g}\bigg{]}$
$\displaystyle\times(\mu^{0}_{ing}-\mu^{0*}_{ing})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{g=1}^{G}\bigg{[}B^{j}_{g}-\sum^{S_{j}}_{s=1}x^{jm*}_{s}+\delta^{jm*}_{g}\bigg{]}$
$\displaystyle\times(\mu^{1}_{jmg}-\mu^{1*}_{jmg})\geq 0,$
$\displaystyle\forall(Q^{0},Q^{1},Q^{2},\mathfrak{\Delta}^{0},\mathfrak{\Delta}^{1},d,\lambda^{0},\lambda^{1},\lambda^{2},\mu^{0},\mu^{1})\in\mathcal{K}.$
Rearranging (51) yields:
$\displaystyle\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\bigg{[}\frac{\partial
f^{in}(x^{in*})}{\partial x^{in}_{jm}}+\frac{\partial
c^{in}_{jm}(x^{in*}_{jm})}{\partial
x^{in}_{jm}}-p^{in*}_{0jm}-\frac{\partial\alpha^{i}_{0}(x^{in*}_{jm})}{\partial
x^{in}_{jm}}$
$\displaystyle+\lambda^{0*}_{i}+\sum_{g=1}^{G}\mu^{0*}_{ing}\bigg{]}$
$\displaystyle\times(x^{in}_{jm}-x^{in*}_{jm})$
$\displaystyle+\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{g=1}^{G}\bigg{[}-\frac{\partial\alpha^{i}_{g}(\delta^{in*}_{g})}{\partial\delta^{in}_{g}}-\mu^{0*}_{ing}\bigg{]}$
$\displaystyle\times(\delta^{in}_{g}-\delta^{in*}_{g})$
$\displaystyle+\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\bigg{[}\frac{\partial
f^{jm}(x^{*}_{jm})}{\partial
x^{in}_{jm}}+p^{in*}_{0jm}-\psi^{in}_{jm}\lambda^{1*}_{jm}\bigg{]}$
$\displaystyle\times(x^{in}_{jm}-x^{in*}_{jm})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{s=1}^{S_{j}}\bigg{[}\frac{\partial
c^{jm}_{s}(x^{jm*}_{s})}{\partial
x^{jm}_{s}}-p^{jm*}_{1s}-\frac{\partial\beta^{j}_{0}(x^{jm*}_{s})}{\partial
x^{jm}_{s}}+\lambda^{1*}_{jm}+\sum_{g=1}^{G}\mu^{1*}_{jmg}\bigg{]}$
$\displaystyle\times(x^{jm}_{s}-x^{jm*}_{s})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{g=1}^{G}\bigg{[}-\frac{\partial\beta^{j}_{g}(\delta^{jm*}_{g})}{\partial\delta^{jm}_{g}}-\mu^{1*}_{jmg}\bigg{]}$
$\displaystyle\times(\delta^{jm}_{g}-\delta^{jm*}_{g})$
$\displaystyle+\sum_{j=1}^{I}\sum_{s=1}^{S_{j}}\sum_{t=1}^{T_{j}}\sum_{k=1}^{K}\bigg{[}\frac{\partial
c^{js}_{tk}(x^{js*}_{tk})}{\partial
x^{js}_{tk}}-p^{js*}_{2tk}+\lambda^{2*}_{js}\bigg{]}$
$\displaystyle\times(x^{js}_{tk}-x^{js*}_{tk})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{s=1}^{S_{j}}\bigg{[}\dfrac{\partial
f^{js}(x^{js*})}{\partial x^{jm}_{s}}+p^{jm*}_{1s}-\lambda^{2*}_{js}\bigg{]}$
$\displaystyle\times(x^{jm}_{s}-x^{jm*}_{s})$
$\displaystyle+\sum_{j=1}^{I}\sum_{s=1}^{S_{j}}\sum_{t=1}^{T_{j}}\sum_{k=1}^{K}[p^{js*}_{2tk}+\hat{c}^{js}_{tk}(x^{js*}_{tk})]$
$\displaystyle\times(x^{js}_{tk}-x^{js*}_{tk})$
$\displaystyle-\sum^{I}_{j=1}\sum^{K}_{k=1}p^{j}_{3k}(d^{*})$
$\displaystyle\times(d_{jk}-d^{*}_{jk})$ (52)
$\displaystyle+\sum_{i=1}^{I}\bigg{[}U_{i}-\sum_{n=1}^{N_{i}}\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}x^{in*}_{jm}\bigg{]}$
$\displaystyle\times(\lambda^{0}_{i}-\lambda^{0*}_{i})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\bigg{[}\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}x^{in*}_{jm}\cdot\psi^{in}_{jm}-\sum_{s=1}^{S_{j}}x^{jm*}_{s}\bigg{]}$
$\displaystyle\times(\lambda^{1}_{jm}-\lambda^{1*}_{jm})$
$\displaystyle+\sum_{j=1}^{I}\sum_{s=1}^{S_{j}}\bigg{[}\sum_{m=1}^{M_{j}}x^{jm*}_{s}-\sum_{k=1}^{K}\sum_{t=1}^{T_{j}}x^{js*}_{tk}\bigg{]}
$\displaystyle\times(\lambda^{2}_{js}-\lambda^{2*}_{js})$
$\displaystyle+\sum_{i=1}^{I}\sum_{n=1}^{N_{i}}\sum_{g=1}^{G}\bigg{[}A^{i}_{g}-\sum^{I}_{j=1}\sum^{M_{j}}_{m=1}x^{in*}_{jm}+\delta^{in*}_{g}\bigg{]}$
$\displaystyle\times(\mu^{0}_{ing}-\mu^{0*}_{ing})$
$\displaystyle+\sum_{j=1}^{I}\sum_{m=1}^{M_{j}}\sum_{g=1}^{G}\bigg{[}B^{j}_{g}-\sum^{S_{j}}_{s=1}x^{jm*}_{s}+\delta^{jm*}_{g}\bigg{]}$
$\displaystyle\times(\mu^{1}_{jmg}-\mu^{1*}_{jmg})\geq 0,$
$\displaystyle\forall(Q^{0},Q^{1},Q^{2},\mathfrak{\Delta}^{0},\mathfrak{\Delta}^{1},d,\lambda^{0},\lambda^{1},\lambda^{2},\mu^{0},\mu^{1})\in\mathcal{K}.$
Clearly, (52) is the sum of the optimality conditions (3.2), (3.3), (3.4), and
(30), and thereby, according to Definition 1, is a cross-sector multi-product
scarce resource SCNE pattern. $\square$
# An Enhanced Passkey Entry Protocol for Secure Simple Pairing in Bluetooth
††thanks: * Research supported by NSERC RGPIN 2016-05610.
Sai Swaroop Madugula and Ruizhong Wei
Department of Computer Science
Lakehead University
Thunder Bay, Canada
<EMAIL_ADDRESS>
###### Abstract
Bluetooth devices are being used very extensively in today’s world. From
simple wireless headsets to maintaining an entire home network, the Bluetooth
technology is used everywhere. However, there are still vulnerabilities
present in the pairing process of Bluetooth which lead to serious security
issues resulting in data theft and manipulation. We scrutinized the passkey
entry protocol in Secure Simple Pairing (SSP) in the Bluetooth standard v5.2.
In this paper, we propose a simple enhancement for the passkey entry protocol
in the authentication stage 1 of Secure Simple Pairing using preexisting
cryptographic hash functions and random integer generation present in the
protocol. The new protocol is more secure and efficient than previously known
protocols. Our research mainly focuses on strengthening the passkey entry
protocol and protecting the devices against passive eavesdropping and active
Man-in-the-middle (MITM) attacks in both Bluetooth Basic Rate/Enhanced Data
Rate (BR/EDR) and Bluetooth Low Energy (Bluetooth LE). This method can be used
for any device which uses the passkey entry protocol.
###### Index Terms:
Bluetooth security, Secure Simple Pairing, MITM attacks, Passkey Entry Model
## I Introduction
Bluetooth is a simple wireless technology developed to exchange data over
short distances. Bluetooth typically forms a wireless personal area network
(PAN), also known as a piconet, in which data between two devices are
exchanged securely. With the introduction of Bluetooth v4.0, namely Bluetooth
Low Energy (Bluetooth LE), the technology became widely popular for its low
power consumption. Bluetooth LE is mainly applied in healthcare, fitness,
security, and home network systems, because it offers very low energy
consumption while maintaining a communication range similar to that of
Bluetooth Basic Rate/Enhanced Data Rate (BR/EDR). Because of its low energy
consumption, Bluetooth LE is well suited for, and extensively used in, the
Internet of Things (IoT); it is ideal for sending small amounts of data
between devices continuously or periodically. Although different from each
other, most security protocols in Bluetooth LE work like those in BR/EDR,
which ensures that devices supporting BR/EDR and Bluetooth LE remain
compatible with each other. Both BR/EDR and Bluetooth LE have a secure way of
pairing and generating the link key that is essential for communication
between any two Bluetooth devices. This method is known as Secure Simple
Pairing (SSP). The two main goals of the SSP protocol are to protect the
devices against passive eavesdropping and active attacks such as MITM.
SSP employs four association models that are used for authentication between
two devices. The model used in a given pairing is selected based on the
input-output capabilities (IOcap) of the two devices. Among these models, the
passkey entry protocol uses a 6-digit passkey which is shared on both sides by
the user and serves as an authentication token. The Bluetooth standard states
that the passkey entry protocol provides protection against passive
eavesdropping and active man-in-the-middle (MITM) attacks. However, the
security of the protocol rests on the assumption that the passkey is used
only once. There are vulnerabilities in the passkey entry model which allow
an attacker to guess the passkey if it is reused and to hijack the session,
enabling the attacker to gain access to the Bluetooth devices and retrieve
sensitive information.
The most recent countermeasure to the passkey reuse issue was proposed by
Da-Zhi Sun and Mu [1] in their improved passkey entry protocol. Their improved
protocol successfully prevents an adversary from deducing the passkey through
passive eavesdropping. In this paper, we show that their protocol is still
vulnerable to an attack algorithm that is a variant of the standard
brute-force method. Using our attack, the attacker can deduce the correct
passkey, which enables them to conduct a successful MITM attack and gain
access to the Long Term Key (LTK).
## II Problem Definitions
To understand the vulnerability in the passkey entry protocol, we need to
consider a few points.
### II-A Security of passkey entry protocol according to Bluetooth standard
The Bluetooth standard uses public key exchange to establish a common key
between two devices. However, since the public keys used in the protocol do
not have certificates (they cannot use internet to verify the certificates),
the key exchange is exposed to MITM attacks. According to the Bluetooth
standard v5.2 [2], the passkey entry protocol makes use of a 6-digit passkey
which is entered in both of the devices for authentication. This method is
used to test whether there is a MITM attack in the key exchange. The 6-digit
passkey is entered by the user and is 20 bits long. The passkey entry
protocol gradually discloses each bit of the passkey based on a commitment
protocol. The standard states that a simple brute-force guesser succeeds in
guessing the passkey with a probability of 0.000001, making it very difficult
for an attacker to obtain the passkey. The protocol then uses Elliptic Curve
Diffie-Hellman (ECDH) under Secure Connections mode to protect the devices
against passive eavesdropping.
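The 0.000001 figure quoted by the standard is simply the reciprocal of the passkey space. A quick check (plain Python, not part of the standard) confirms that the 6-decimal-digit and 20-bit views of the passkey agree to this precision:

```python
# Probability that a blind guesser finds the passkey in a single attempt.
p_decimal = 1 / 10**6   # 6 decimal digits: 000000-999999
p_binary = 1 / 2**20    # 20-bit encoding: 1,048,576 possible values

print(f"{p_decimal:.7f}")  # 0.0000010
print(f"{p_binary:.7f}")   # 0.0000010 (exact value is about 9.54e-7)
```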
### II-B Vulnerability in Passkey Entry Protocol
The passkey entry protocol normally protects the devices against passive and
active attacks. The Bluetooth standard requires that the user should never
reuse the passkey from the previous session. However, if the passkey is
reused, it creates a vulnerability which the attacker can exploit. This is
caused by the method of verifying the passkey. If an attacker can passively
capture the public key exchange packets and the passkey entry protocol
communication packets, they can easily deduce the passkey and conduct a MITM
attack during the user’s next SSP session.
### II-C How does the vulnerability arise
As described earlier, the user should ideally never reuse the same passkey.
However, just as users tend to use the same password for every account, a
user might find it simple to reuse the same passkey for every Bluetooth
connection. An attacker can exploit this vulnerability. Alternatively, the
routine that generates the random passkey might reuse the same passkey to
save storage or computation, which makes the passkey effectively a static
key. Since random passkey generation is not part of the Bluetooth standard,
it does not necessarily adhere to the standard's rules. Therefore, there is a
chance that the same passkey will be used for several connections.
### II-D Assumptions on the capabilities for an adversary to exploit the
vulnerability
In this paper, we assume that an attacker has a set of capabilities that
enables them to perform passive eavesdropping and active MITM attacks. The
capabilities are defined below.
* •
The attacker has the knowledge of the frequency hopping pattern which is
shared by the two legitimate Bluetooth devices.
* •
The attacker has the ability to passively capture the transmission packets
exchanged by the two legitimate Bluetooth devices.
* •
The attacker has the ability to modify packets that are transmitted in real-
time between the legitimate Bluetooth devices thus enabling them to perform a
MITM attack before the devices successfully establish the passkey.
### II-E Goal of an attacker
Obtaining the 6-digit passkey in the passkey entry protocol enables an
attacker to obtain the Long Term Key (LTK) and the encryption key derived
from it after the Secure Simple Pairing process completes successfully. If an
attacker can successfully conduct a MITM attack using the derived 6-digit
passkey, they obtain the LTK and encryption key and are treated as a trusted
device by the legitimate devices. Using these keys, the attacker can simply
request sensitive data from the legitimate user. The legitimate user accepts
the request, since the connection was successful, and sends sensitive data to
the attacker, believing it to be an authentic device. Alternatively, the
attacker can intercept and modify the data exchanged by the legitimate
devices. Therefore, the vulnerability of the passkey entry protocol can lead
to serious data manipulation and data theft.
### II-F Research Goal
In this paper, we propose a new enhanced passkey entry protocol that provides
protection against passive eavesdropping and MITM attacks even if the user
reuses the same passkey. Our method also significantly decreases the
communication and computation costs. The main idea of our protocol is to
completely prevent an attacker from obtaining the correct passkey. Using our
protocol, a legitimate device can successfully prevent an attacker from
obtaining the LTK through MITM attacks.
## III Association Models in SSP
SSP is the most important protocol in secure BLE pairing. Its main goal is to
generate a shared link key between two Bluetooth devices. The shared link key
must be combined with a strong encryption algorithm to protect the user
against passive eavesdropping. SSP therefore uses Elliptic Curve
Diffie-Hellman (ECDH) to generate the Link Key (LK), which is used to
maintain the pairing. However, the SSP process can still be subject to
man-in-the-middle (MITM) attacks because of the lack of user authentication.
In order to provide authentication, the Bluetooth standard introduces four
association models which are implemented in authentication stage 1 (phase 2)
of SSP. The models are as follows (see [2]):
$\bullet$ Just Works model:
This model is ideally used for scenarios where at least one device has no
output display or any input capability, e.g. wireless speaker. The just works
model does not provide any protection against MITM attacks.
$\bullet$ Numeric Comparison model:
This model is used when both devices have display capabilities as well as
input capability for the user to enter “yes” or “no”. This user confirmation
is required for the device to know that the other device is legitimate and
authentic and is, in fact, the same device the user is trying to connect to.
The numeric comparison model offers limited protection against MITM attacks
with the attacker having a success probability of around 0.000001 on each
iteration of the protocol.
$\bullet$ Out of Band model:
This model is used when both devices can support communication over additional
wireless channels e.g. near field communication (NFC). The out of band (OOB)
channel ideally is resistant to MITM attacks. If not, there is a possibility
that the data might be compromised during the authentication phase.
$\bullet$ Passkey Entry model:
This model is used in scenarios where one device has input capability but does
not have any output display capability. In this case, the user enters an
identical 6-digit passkey on both devices. Alternatively, the passkey might be
randomly generated by an algorithm in one device and displayed, which the user
then inputs into the other device.
## IV Related Work on SSP
Da-Zhi Sun and Li Sun [3] designed a formal security model to evaluate SSP’s
association models and authenticated link key security. They discussed the
security and the possible vulnerabilities present in SSP’s association
models. This model simulates the networking of Bluetooth devices and detects
attack vectors when an SSP session is run over an insecure public channel.
Eli Biham and Lior Neumann [5] discovered a new vulnerability in the ECDH key
exchange process. Their attack allows an adversary to recover the session
encryption key used for secure data transfer with a success rate of 50% per
pairing attempt. They accomplished this by modifying the y-coordinates of the
devices' public keys and setting them to zero. As a result, the computed
DHkey value is always the point at infinity, so an adversary can calculate
the encryption key. This attack is performed during the public key exchange
phase of SSP. Samta Gajbhiya [6] proposed SSP with an enhanced security
level, involving authenticated public key exchange and delayed-encrypted
capability exchange. The main goal of their protocol is to prevent an
attacker from forcing the legitimate devices to adopt the association model
of the attacker's choosing. Giwon Kwon [4] introduced a method to increase
the length of the Temporary Key (TK) by repeating it. They defined a model in
which the master must select and transmit a security level through the
security level field in the PairingRequest packet. Da-Zhi Sun [1] proposed a
novel method of protecting the passkey entry model in the Secure Simple
Pairing protocol against MITM attacks even if the passkey is reused. After
the passkey is injected into the two devices, a random nonce is generated on
each of devices A and B and exchanged. Then, A computes a hash r using the
HMAC-SHA256 algorithm based on the entered passkey and sets the value
$r^{*}_{a}$ by taking the six most significant digits of r. The same process
is performed at device B, and the rest of the protocol is kept the same. Due
to the random nonces, the $r^{*}$ values computed from the hash are different
for each connection. Barnickel [7] researched the vulnerability of the
passkey entry model in Bluetooth v2.1+ when the user reuses the same passkey
for another SSP session. They implemented the MITM attack with a success
probability of 90% using the GNU Radio platform on Universal Software Radio
Peripheral (USRP) and USRP2 devices. They proposed two countermeasures for
the MITM attacks. The first is to record all previously entered passkeys and
reject any that are reused. However, this imposes a very high storage cost,
since every passkey entered in every SSP session must be stored. In addition,
it is inconvenient for users to enter a new passkey for every connection,
since they must also remember all previously used passkeys. The second
countermeasure involved encrypting and decrypting the random nonces, which
would prevent a passive attacker from eavesdropping. However, it greatly
increases the complexity and the performance cost of the protocol due to the
encryption and decryption functions.
## V Architecture of Secure Simple Pairing
For any kind of data transfer between two or more Bluetooth devices, they
first need to establish a connection and pair. To make it secure, the devices
use protocols such as Secure Simple Pairing under different security modes.
The end goal of any pairing process between two Bluetooth devices is to
generate a shared long-term link key on both sides which is used to transfer
data. In classic Bluetooth and Bluetooth LE, there are a total of five phases
in the SSP protocol. SSP strives to provide security while maintaining minimal
user interaction. The primary focus of our research work is the passkey entry
protocol present in the authentication stage 1 phase.
SSP is the most important pairing protocol in Bluetooth pairing. Before SSP,
Bluetooth used a 4-digit PIN code to pair with other devices, and this PIN
code was also part of the link key calculation process. Due to the short
length of the PIN code, several vulnerabilities were revealed, and SSP was
introduced to mitigate them. The end goal of SSP is to generate a shared link
key between two Bluetooth devices. However, the SSP process can still be
subject to man-in-the-middle (MITM) attacks. To prevent MITM attacks, the
Bluetooth standard introduces association models that are implemented in
authentication stage 1 (phase 2) of SSP.
Figure 1: SSP Architecture
As shown in Fig. 1, SSP works the same way in every Bluetooth device, except
for phase 2, which is protocol dependent. When the IO capabilities are
exchanged before phase 1 of SSP, both devices indicate which protocols they
support; based on compatibility and requirements, one association model is
selected and executed in phase 2.
### V-A How SSP provides Security
Usually, an attacker has two ways of obtaining data. The first technique is to
execute passive eavesdropping (usually performed with specially designed tools
like Ubertooth) and capture the transmission packets. By doing this, the
attacker can try to compute the LTK based on the captured data in phase 1 and
phase 2. If the attacker obtains the LTK in this way, then they can simply
calculate the encryption key which is originally derived from the LTK. Thus,
when two legitimate devices transfer data using this encryption key, the
attacker can again capture the data packets and simply decrypt them using the
deduced encryption key. This way, they can gain access to sensitive data.
The second technique is to execute a MITM attack. This technique is more
dangerous because if successful, the attacker has a trusted connection with
the legitimate device and can request data or send malicious files to the
target device. In the case of passive eavesdropping, the attacker can only
access the data which is transferred between two legitimate devices. However,
when it comes to active attacks such as MITM, the attacker and the target
devices share the same LTK. Therefore, an attacker can initiate data transfer
requests whenever they wish, provided that the target device has its
Bluetooth switched on.
SSP provides protection from these attacks in the authentication stage 1 and
authentication stage 2. In authentication stage 1, the SSP tries to prevent
any MITM attacks by using either the numeric comparison or the passkey entry
protocol based on the IOcap of the devices. Even if an attacker intercepts
the communication in phase 1 of SSP, the authentication performed in phase 2
prevents the attacker from obtaining the LTK. In authentication stage 2,
the DHkey which is computed in phase 1 is verified which protects the device
from passive eavesdropping.
### V-B Outline of SSP
In phase 1 of SSP, the initiating device A and the advertising device B
establish a common key DHkey following the Elliptic Curve Diffie Hellman
protocol. Note that since neither device is authenticated, this key can only
be used to prevent eavesdropping attacks in what follows, but not MITM
attacks.
SSP phase 2 (authentication stage 1) consists of three association models. In
Bluetooth LE, the just works model is integrated with the numeric comparison
model; aside from these, there are the out of band model and the passkey
entry model. This phase depends on the IO capabilities of both devices:
based on the IOCap, a compatible association model is selected. Since our
focus is on the passkey entry model, we do not discuss the other association
models in detail.
Figure 2: SSP Phase 2: Passkey Entry Protocol
Figure 2 describes the passkey entry model. In this model, the host or the
user enters an identical 6-digit passkey on both devices A and B. The 6-digit
passkey, which is 20 bits long, is stored in $r_{a}$ and $r_{b}$. After
injecting the passkey, steps 3-8 are repeated 20 times, with each round
denoted by the integer $i$, $1\leq i\leq 20$. In every round, exactly one bit
of the passkey is sent to the other device. In each round, 128-bit random
nonces $N_{ai}$ and $N_{bi}$ are generated at the respective sides. Then
device A computes a commitment value
$C_{ai}=f1(PK_{ax},PK_{bx},N_{ai},r_{ai})$ and sends it to device B. B
likewise computes the commitment value
$C_{bi}=f1(PK_{bx},PK_{ax},N_{bi},r_{bi})$ and sends it to device A. After
exchanging the commitment values, device A sends its $N_{ai}$ value to B.
Device B then uses the $N_{ai}$ value sent by A to check whether
$C_{ai}=f1(PK_{ax},PK_{bx},N_{ai},r_{bi})$; if the check passes, device B
sends its $N_{bi}$ value in turn. If the check fails, the protocol is aborted
and the SSP session must be run again. The same procedure is followed at
device A. According to the Bluetooth standard, this gradual disclosure of the
passkey prevents leakage of more than one bit at a time, making it difficult
for a MITM attacker to crack it. At the end of this stage, the values of
$N_{a}$ and $N_{b}$ are set to $N_{a20}$ and $N_{b20}$, which are later used
in the next stage, authentication stage 2. Here, $f1()$ is an
HMAC-SHA256-based secure hash function used to calculate the commitment
values $C_{ai}$ and $C_{bi}$; in the following, $f2()$ and $f3()$ are also
HMAC-SHA256-based hash functions.
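The round structure described above can be sketched in a few lines of Python. The byte packing of f1 and the choice of HMAC key below are illustrative only (the Bluetooth specification defines the exact format); the point is the commit-then-reveal check performed once per passkey bit:

```python
import hmac, hashlib, secrets

def f1(pk_init, pk_resp, nonce, r_bit):
    # Sketch of the f1 commitment hash. The byte packing and key choice are
    # illustrative; the Bluetooth spec defines the exact input format.
    return hmac.new(pk_init, pk_resp + nonce + bytes([r_bit]),
                    hashlib.sha256).digest()

pk_ax, pk_bx = secrets.token_bytes(32), secrets.token_bytes(32)  # phase 1 keys
passkey = 803247                                  # sample 6-digit passkey (20 bits)
r_a = [(passkey >> (19 - i)) & 1 for i in range(20)]   # bits r_a1..r_a20
r_b = list(r_a)                                   # same passkey entered on B

for i in range(20):                               # steps 3-8, one bit per round
    n_ai = secrets.token_bytes(16)                # fresh 128-bit nonce on A
    c_ai = f1(pk_ax, pk_bx, n_ai, r_a[i])         # A commits, then reveals n_ai
    # B recomputes the commitment with its own bit r_bi and compares
    assert hmac.compare_digest(c_ai, f1(pk_ax, pk_bx, n_ai, r_b[i]))
print("all 20 rounds verified")
```

If any recomputed commitment mismatches, the assertion corresponds to the protocol abort described above.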
In SSP phase 3 (authentication stage 2), device A computes a new confirmation
value $E_{a}=f3(DHkey,N_{a},N_{b},r_{b},IOcapA,A,B)$ while B simultaneously
computes a confirmation value $E_{b}=f3(DHkey,N_{b},N_{a},r_{a},IOcapB,B,A)$.
After the computations, device A sends $E_{a}$ to device B, where B checks
$E_{a}$ against $f3(DHkey,N_{a},N_{b},r_{b},IOcapA,A,B)$. If the check fails,
the protocol aborts. If the check passes, the value $E_{b}$ is sent to device
A, where A likewise checks $E_{b}$ against
$f3(DHkey,N_{b},N_{a},r_{a},IOcapB,B,A)$. If that check fails, the protocol
aborts. Here the values of $r_{a}$ and $r_{b}$ are the entire 20-bit passkey.
In SSP Phase 4, a shared link key is generated by using
$f2(DHkey,N_{a},N_{b},btlk,BD\\_ADDR_{a},BD\\_ADDR_{b})$ at both sides. The
order of the parameters must be the same in order to calculate the same key.
The end goal of an attacker is to obtain this key.
Once the shared link key is established at both sides, the authentication
process and the encryption key generation process are completed based on the
link key. The steps followed in this phase are identical to those in legacy
pairing. Since we are focusing only on the passkey entry model, we omit the
details of phase 5 here.
## VI Vulnerability in Passkey Entry Protocol
The Bluetooth standard uses ECDH public key exchange in phase 1 to prevent
passive eavesdropping. However, since neither device is authenticated during
the key exchange, an attacker can easily perform a MITM attack, which results
in separate DHkeys shared between the attacker and each of the two devices.
To prevent such attacks, the standard uses phase 2, the authentication stage
1 phase. The main idea is that if there is a MITM attack in phase 1, then the
Bluetooth standard will try to prevent the attacker from connecting by using
one of the association models in phase 2. The passkey entry model of
Bluetooth uses the commitment protocol to make sure that both parties have
the same passkey. That method works well if the user changes the passkey each
time the connection has a problem. Unfortunately, in practice users will very
likely reuse the same passkey many times, which causes serious security
problems.
For instance, a passive attacker can capture all the commitment values
($C_{ai}$ and $C_{bi}$) and random nonces ($N_{ai}$ and $N_{bi}$) and deduce
each bit of the passkey r by computing their own commitment
$f1(PK_{ax},PK_{bx},N_{ai},r^{\prime}_{ai})$ and comparing it with the
original value. Since the value of $r_{ai}$ is always either 1 or 0, at most
two trial hashes reveal the correct $r_{ai}$, so the attacker can easily
obtain the entire passkey. Ideally, the same passkey should never be reused;
however, most users tend to reuse the old passkey for simplicity and ease.
This enables the attacker to launch a MITM attack with the known passkey,
successfully communicating with the two devices simultaneously. In this way,
after capturing all the packets and deducing the passkey, the attacker can
successfully break into the system.
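The per-bit recovery step can be sketched in a few lines (the f1 packing below is the same illustrative form used earlier, not the spec's exact format):

```python
import hmac, hashlib, secrets

def f1(pk_ax, pk_bx, nonce, r_bit):
    # Illustrative f1 sketch; the Bluetooth spec defines the real packing.
    return hmac.new(pk_ax, pk_bx + nonce + bytes([r_bit]),
                    hashlib.sha256).digest()

# Values the eavesdropper captured in one round: the public keys (phase 1),
# the commitment C_ai, and the nonce N_ai that A later reveals.
pk_ax, pk_bx = secrets.token_bytes(32), secrets.token_bytes(32)
secret_bit = secrets.randbelow(2)
n_ai = secrets.token_bytes(16)
c_ai = f1(pk_ax, pk_bx, n_ai, secret_bit)

# r_ai is a single bit, so at most two trial hashes recover it.
recovered = next(b for b in (0, 1) if f1(pk_ax, pk_bx, n_ai, b) == c_ai)
assert recovered == secret_bit
```

Repeating this for all 20 rounds yields the entire passkey from a single captured session.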
In what follows, we always assume that there is a MITM attack in phase 1, and
we want to check whether the authentication process in phase 2 can always
detect such attacks.
## VII An Improved Passkey Entry Protocol by Sun and Mu
Sun and Mu [1] proposed an improved passkey entry protocol as a
countermeasure to passkey reuse; we refer to this protocol as the SM
protocol. In their protocol, before sending each bit of the passkey (steps
3-8 in Figure 2), the devices generate a new random nonce on each side and
exchange them. Then, device A computes $r=f2(DHkey,N_{a0},N_{b0},r_{a})$ and
device B computes $r=f2(DHkey,N_{a0},N_{b0},r_{b})$, and each sets the six
most significant digits of r as $r^{*}_{a}$ (resp. $r^{*}_{b}$). This
protocol requires generating two additional 128-bit random nonces as well as
two additional evaluations of the cryptographic hash function $f2()$, which
is originally used to generate the link key in SSP phase 4.
Figure 3: SM Improved Passkey Entry Model.
In Figure 3, after the passkey r is injected on both sides, devices A and B
generate two random nonces $N_{a0}$ and $N_{b0}$ and send them to each other.
Then, on device A, a new hash value is computed as
$r=f2(DHkey,N_{a0},N_{b0},r_{a})$ and the six most significant digits of r
are set as $r^{*}_{a}$. Device B computes the hash value
$r=f2(DHkey,N_{a0},N_{b0},r_{b})$ and sets the six most significant digits of
r as $r^{*}_{b}$. The rest of the protocol, steps 5-10, runs the same as the
original passkey entry model. In their protocol, the random passkey r is used
as a seed of the authentication passkey. This way, even if the user uses the
same passkey next time, the attacker cannot easily attack, since the values
of $r^{*}_{a}$ and $r^{*}_{b}$ will differ each time.
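The SM derivation of $r^{*}$ can be sketched as follows. The byte packing of f2 is an assumption for illustration (the spec defines the real format); the sketch shows how fresh nonces yield a fresh six-digit value even for a repeated passkey:

```python
import hmac, hashlib, secrets

def f2(dhkey, n_a0, n_b0, passkey):
    # Illustrative sketch of f2 (HMAC-SHA256); the spec's packing differs.
    msg = n_a0 + n_b0 + passkey.to_bytes(3, "big")  # 20-bit passkey in 3 bytes
    return hmac.new(dhkey, msg, hashlib.sha256).digest()

dhkey = secrets.token_bytes(32)                     # from phase 1 (ECDH)
n_a0, n_b0 = secrets.token_bytes(16), secrets.token_bytes(16)
passkey = 123456                                    # reused 6-digit passkey

r = int.from_bytes(f2(dhkey, n_a0, n_b0, passkey), "big")
r_star = int(str(r)[:6])   # six most significant decimal digits of r
# Fresh nonces each session -> a fresh r_star even for a repeated passkey.
print("r* =", r_star)
```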
### VII-A An attack on SM protocol
Consider a scenario where an attacker simultaneously tries to establish a
connection with both devices A and B. If we assume that devices A and B have
a fault tolerance mechanism that allows three to four consecutive SSP
sessions with minimal delay after a failure, the attacker can establish
consecutive sessions to devices A and B even when a bit prediction fails.
Once the attacker has successfully completed the public key exchange, they
hold two DHkey values, one with device A and another with device B. If we
assume that the attacker was able to correctly predict around 8 bits of
$r^{*}_{a}$ and $r^{*}_{b}$, then a simple brute-force search greatly
increases the attacker's chances of finding the correct passkey. This is
because, aside from the passkey r, the attacker knows the values of DHkey,
$N_{a0}$, and $N_{b0}$. So after an SSP session fails due to an incorrect
passkey bit, the attacker can simply brute-force all combinations of the
passkey and match the output against the 8 known bits of $r^{*}_{a}$. The
correct passkey is among the passkeys that match. The attacker can then take
the list of matched passkeys and perform a dictionary attack, trying those
candidates against the first 8 bits of $r^{*}_{b}$ from another SSP session.
The algorithm is displayed in Algorithm 1.
Algorithm 1 Brute-force algorithm
1:Initialise DHkey, Na0, Nb0, newlist[] and count = 0 obtained from the first
SSP session.
2:for i in range of (0..1000000) do
3: if i$<$100000 then
4: prepend zeros to i to make it 6-digit.
5: end if
6: Generate rx = HMAC-SHA256(DHkey, Na0, Nb0, i)
7: Set the n most significant bits of rx to r${}^{\prime}_{x}$, where n is the
number of r${}^{*}_{a}$ bits obtained by the attacker.
8: if r${}^{\prime}_{x}==$r${}^{*}_{a}$ then
9: newlist[count] = i (Adding i to the list of potential passkeys.)
10: count = count + 1
11: print(Match found. Potential Passkey = i)
12: end if
13:end for
14:procedure consecutivebrute(DHkey, Na0, Nb0, newlist, count)
15: Initialise DHkey, Na0, Nb0 obtained from the next consecutive SSP session.
16: newcount = 0
17: for i in range of (0..count) do
18: if newlist[i]$<$100000 then
19: prepend zeros to newlist[i] to make it 6-digit.
20: end if
21: Generate rx = HMAC-SHA256(DHkey, Na0, Nb0, newlist[i])
22: Set the n most significant bits of rx to r${}^{\prime}_{x}$, where n is the
number of r${}^{*}_{b}$ bits obtained by the attacker.
23: if r${}^{\prime}_{x}==$r${}^{*}_{b}$ then
24: newlist[newcount] = newlist[i] (Keeping newlist[i] in the list of
potential passkeys.)
25: newcount = newcount + 1
26: print(Match found. Potential Passkey = newlist[i])
27: end if
28: end for
29:end procedure
30:Execute consecutivebrute(DHkey, Na0, Nb0, newlist, count) on the new list
of potential passkeys until the correct passkey is found.
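The filtering core of Algorithm 1 can be sketched as runnable Python. As in the algorithm, HMAC-SHA256 instantiates f2, and the attacker is assumed to know the top n bits of r* in each session; to keep the demo fast, a reduced 4-digit passkey space stands in for the 6-digit one, but the filtering logic is identical:

```python
import hmac, hashlib, os

def f2(dhkey: bytes, na: bytes, nb: bytes, passkey: int) -> bytes:
    # HMAC-SHA256 stands in for f2, as in Algorithm 1; the message layout
    # (Na0 || Nb0 || passkey) is an illustrative assumption.
    return hmac.new(dhkey, na + nb + passkey.to_bytes(3, "big"),
                    hashlib.sha256).digest()

def top_bits(digest: bytes, n: int) -> int:
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - n)

def filter_candidates(cands, dhkey, na, nb, observed, n):
    # keep every passkey whose hash matches the n leaked bits (Alg. 1 loop)
    return [p for p in cands
            if top_bits(f2(dhkey, na, nb, p), n) == observed]

true_passkey, n = 4711, 8
cands = list(range(10_000))          # reduced demo space (4-digit passkeys)
for _ in range(2):                   # two consecutive SSP sessions
    dhkey = os.urandom(32)           # fresh DHKey per session
    na, nb = os.urandom(16), os.urandom(16)
    observed = top_bits(f2(dhkey, na, nb, true_passkey), n)  # leaked r* bits
    cands = filter_candidates(cands, dhkey, na, nb, observed, n)
print(len(cands))   # typically only one or two survivors after two sessions
```

Each session with 8 leaked bits cuts the candidate list by roughly a factor of 256, which is why 2-3 sessions suffice in practice.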
By doing this, the attacker can eventually obtain the correct passkey within
2-3 SSP sessions. Once the passkey is known, the attacker simply executes a
MITM attack in a new session; this time it will succeed, thanks to the correct
passkey, and the attacker will be able to obtain the LTK.
### VII-B Simulation Results
We performed a simulation testing the resilience of the SM passkey
entry model against the brute-force attack. For this specific simulation of
the brute-force attack, we used a desktop computer with an Intel i7 2.20GHz
processor. We performed the brute-force attack when 4, 5, 6 and 7 bits of
r* are known to the attacker. The brute-force attack was conducted 50 times
for each known r* value, and the average number of SSP sessions required was
calculated for each r* value, as shown in Figure 4.
Figure 4: SM protocol resistance against Brute-force attack.
## VIII Enhanced Passkey Entry Protocol
To improve the security of the passkey protocol, we propose an alternative to
[1]: our enhanced passkey entry protocol, described below. Our protocol does
not differ much from the original protocol in the Bluetooth standard and, in
addition, has only 10 rounds, which significantly reduces the power
and communication costs of the devices. The main idea of our protocol is to
never reveal the original passkey until phase 3 of the SSP. We also make use
of the properties of the DHKey in the ECDH algorithm to generate a 256-bit
hash from the passkey r. Therefore, we use a derived hash of the original
passkey r to run the entire passkey entry protocol. In our
protocol, 2 different bits are exchanged and verified on each side in each
round, resulting in 20 bits being exchanged altogether.
Figure 5: New Enhanced Passkey Entry Protocol.
Our protocol is displayed in Fig. 5. When the user enters the random passkey
on both devices, we use the f2() hash function of the original passkey
entry protocol to create a 256-bit hash value from the input passkey r. We use
the shared DHKey calculated in the public key exchange phase as the key for
the HMAC function and the 20-bit passkey as the input message. The resultant
hashes are denoted r$\boldsymbol{{}^{\prime}_{a}}$ and
r$\boldsymbol{{}^{\prime}_{b}}$ respectively. The hashes
r$\boldsymbol{{}^{\prime}_{a}}$ and r$\boldsymbol{{}^{\prime}_{b}}$ calculated
in steps 3a and 3b have the same value, since the passkey is the same at both
devices. After generating these values, steps 4-12 are executed 10 times. In
steps 4a and 4b, an 8-bit random integer n′ (where
$1\leq n\boldsymbol{{}^{\prime}}\leq 255$) is generated on each side, i.e.,
n$\boldsymbol{{}^{\prime}_{a}}$ at device A and
n$\boldsymbol{{}^{\prime}_{b}}$ at device B. This
random integer defines the position from which the passkey bit is taken
in r$\boldsymbol{{}^{\prime}_{a}}$ and r$\boldsymbol{{}^{\prime}_{b}}$.
After generating the random integers, both devices XOR the 8-bit random
integer n′ with the $\boldsymbol{i^{th}}$ set of 8 bits
of r′. Both values are exchanged in steps 5 and 6. Next, device A
computes $(n\boldsymbol{{}^{\prime}_{b}}\oplus
r\boldsymbol{{}^{\prime}_{b}})\oplus r\boldsymbol{{}^{\prime}_{a}}$ to obtain
the value n$\boldsymbol{{}^{\prime}_{b}}$. It then sets the
n$\boldsymbol{{}^{\prime}_{b}}$th bit of the 256-bit
r$\boldsymbol{{}^{\prime}_{a}}$ hash value as the r$\boldsymbol{{}^{*}_{ai}}$
bit. For instance, if the random number n$\boldsymbol{{}^{\prime}_{b}}$ is 20,
then the 20th bit of r$\boldsymbol{{}^{\prime}_{a}}$ is taken
as r$\boldsymbol{{}^{*}_{ai}}$. The same process is followed at device B,
which calculates $(n\boldsymbol{{}^{\prime}_{a}}\oplus
r\boldsymbol{{}^{\prime}_{a}})\oplus r\boldsymbol{{}^{\prime}_{b}}$ to obtain
n$\boldsymbol{{}^{\prime}_{a}}$. Since the values of
r$\boldsymbol{{}^{\prime}_{a}}$ and r$\boldsymbol{{}^{\prime}_{b}}$ are equal,
the XOR operation yields the correct values. After setting the
r$\boldsymbol{{}^{*}_{ai}}$ and r$\boldsymbol{{}^{*}_{bi}}$ values, both
devices generate a 128-bit random nonce. In steps 10a and 10b, device A
computes a commitment value Cai$=f1$(PKax,PKbx,Nai,r${}^{*}_{ai}$) and device
B computes a commitment value Cbi$=f1$(PKbx,PKax,Nbi,r${}^{*}_{bi}$). The
commitment values are exchanged between the two devices. Then, device A
sends the 128-bit random nonce Nai. After receiving Nai,
device B sets r$\boldsymbol{{}^{*}_{bi}}$ to the value at the
n$\boldsymbol{{}^{\prime}_{b}}$th bit of the r$\boldsymbol{{}^{\prime}_{b}}$
hash. Next, the commitment value Cai is checked by verifying
Cai$=f1$(PKax,PKbx,Nai,r${}^{*}_{bi}(n^{\prime}_{b}position)$). If the two
values match, the protocol continues. Device B then sends the random
nonce Nbi and the random integer n$\boldsymbol{{}^{\prime}_{b}}$. The same
process is followed at device A, where the commitment values are verified.
This process is repeated for 10 rounds, with the two devices sending two
randomly positioned bits of the 256-bit r′ hash in each round.
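One round of the position-masking exchange described above can be sketched as follows. This is a simplified model under stated assumptions: r′ is derived with HMAC-SHA256 (standing in for f2), the i-th byte of r′ serves as the i-th 8-bit set, and all names are illustrative:

```python
import hmac, hashlib, os, secrets

passkey = (123456).to_bytes(3, "big")
dhkey = os.urandom(32)                      # shared ECDH secret

# Step 3: both devices derive the same 256-bit hash r' from DHKey and passkey.
r_a = hmac.new(dhkey, passkey, hashlib.sha256).digest()   # device A
r_b = hmac.new(dhkey, passkey, hashlib.sha256).digest()   # device B

i = 0                                       # round index, selects an 8-bit set
n_b = secrets.randbelow(255) + 1            # device B's random position, 1..255

# Steps 5-6: B masks n'_b with the i-th byte of r'_b (one-time-pad style).
sent = n_b ^ r_b[i]

# Step 7: A unmasks with its own r'_a; this works because r'_a == r'_b.
recovered = sent ^ r_a[i]

# Step 8: A takes the n'_b-th bit (1-based, from the MSB) of r'_a as r*_ai.
r_star_ai = (int.from_bytes(r_a, "big") >> (256 - recovered)) & 1
```

An eavesdropper who sees only `sent` learns nothing about n'_b without knowing the r′ byte used as the mask, which is the one-time-pad property the protocol relies on.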
The main idea of our proposed protocol is to use bits from random positions
of the hash value of the passkey for the commitments, instead of using the
bits of the passkey itself. To keep the position information secure, we again
use the hash value as a one-time-pad encryption key to encrypt the positions.
In our protocol, the passkey itself is never communicated between the two
parties; therefore no information about it is revealed by the communication.
Since random positions are used, Algorithm 1 does not work against our
protocol.
## IX Security of Enhanced Passkey Entry Protocol under passive Eavesdropping
and MITM attacks
Compared to the original passkey entry protocol, our enhanced protocol
provides protection against passive and active attacks. Consider a scenario
where the attacker X captures all the commitment values and the random nonces
of a passkey entry protocol run using passive eavesdropping. Note that here
we assume that the attacker already knows the frequency hopping
pattern of the target devices. By doing this, the attacker effectively obtains
the public keys of device A and device B. Then, in the passkey entry protocol
run, the commitment values and the random nonces are obtained. Now, the
attacker X can try to deduce the r∗ bits by comparing the commitment
values, but X cannot recover the original passkey r from the r∗ bits,
because the r∗ bits are generated by the hash function, which uses the
original passkey r as the message and the DHkey as the key for the HMAC
operation. In the worst case, the attacker would have the 10 bits each of
r$\boldsymbol{{}^{*}_{ai}}$ and r$\boldsymbol{{}^{*}_{bi}}$
exchanged over the rounds. However, these values are not enough to deduce the
entire hash. Moreover, even if the DHkey is known, the attacker
would still not know the positions of the exchanged r∗ bits, due to the
XOR operations. In addition, the hash value r′ will be different for every
SSP session, even if the same passkey is used. In this way, we eliminate the
possibility of an attacker deducing the original passkey by passive
eavesdropping. To summarise, it would be impossible for an attacker to
deduce the full hash value of the passkey, even after passively eavesdropping
on the target device’s SSP sessions several times, for the
following reasons.
* •
Even if the passkey is reused, the hash value obtained in step 3 of Figure 5
will be different for every SSP session. Each time the target device connects
to a different device, the DHKey value changes and, as a result, the hash
value changes as well.
* •
The ECDH private-public key pair is generated randomly and is not static.
* •
Since the n′ bits are hidden using XOR operations, an adversary cannot
guess the correct passkey, thanks to the one-time-pad technique.
We have also compared the security of our protocol to the SM protocol. While
the SM protocol can be attacked using the brute-force attack of Algorithm 1,
the enhanced passkey entry model completely prevents an adversary
from mounting this attack. This is because of the XOR operations done on
both sides to hide the correct values of n$\boldsymbol{{}^{\prime}_{a}}$
and n$\boldsymbol{{}^{\prime}_{b}}$. From an attacker’s perspective, the
XOR’ed value sent during steps 5 and 6 is not useful. Unless
the attacker already knows the random integer n′ or the $i^{th}$ set of 8 bits
of the correct hash value, they can obtain neither the correct n′ nor the
$i^{th}$ set of r′. In the XOR operation, the value of n′ and the
8-bit set of r′ used differ in each round and are not repeated anywhere.
Therefore, the XOR operation executed in steps 5 and 6 effectively
has the properties of a one-time pad, making it very secure.
In the case of a MITM attack, suppose the public key exchange phase is
completed between the attacker and devices A and B. The attacker then shares
one DHkey with device A and one DHkey with device B. In our proposed protocol,
after the legitimate user enters the 6-digit passkey, we generate a hash using
the DHkey and the 6-digit passkey. Since the attacker does not know the
passkey, there are 1 million possible passkeys. Even if a single digit
of the passkey is guessed wrong, the computed hash value changes
drastically. In addition, the hash values of device A and device B will
differ due to the different DHkey values. Consequently, when the XOR
operation is performed with the 8-bit random integer and the 8-bit set of r′
and the result is exchanged between the devices, the derived value will be
different, leading to an incorrect n′ value. Therefore, even if an attacker
predicts a correct r∗ bit in one round, they would not know the correct
position n′, which is needed to execute the brute-force attack described
in Algorithm 1.
Thus, our protocol thwarts not only passive eavesdropping but also
man-in-the-middle attacks, even when a passkey is reused.
## X Performance Evaluation
### X-A Performance Cost of Original Passkey Entry Model
In the original passkey entry protocol, each round involves hash
operations of f1 at steps 4a, 4b, 7a, and 8a, as shown in Figure 2. Therefore,
the computation cost between the two devices is 80 hash computations. As for
the commitment values, each has a size of 128 bits, and exactly
2 commitment values are exchanged in each of the 20 rounds. Therefore, the
communication cost for the commitment values is
$20\ (\mathrm{rounds})\times 2\ (\mathrm{commitment\ values})\times 128\ (\mathrm{bits})=5120\ \mathrm{bits.}$
In addition, the two devices exchange exactly two random nonces in each round
$1\leq i\leq 20$, which adds
$20\ (\mathrm{rounds})\times 2\ (\mathrm{random\ nonces})\times 128\ (\mathrm{bits})=5120\ \mathrm{bits.}$
Altogether, a total of 10,240 bits is exchanged in the protocol
run. Finally, the storage cost of the protocol is around 192 or 256 bits,
depending on the length of the DHkey. Taking the size of the passkey into
account, we also need an additional 20 bits, so the total storage cost
is 212 or 276 bits.
### X-B Performance Cost of Enhanced Passkey Entry Model
Our enhanced protocol performs in the same way as the original. The main
difference is that it significantly reduces the number of hash
computations, since it requires only 10 rounds. We need two extra variables
to store the calculated 256-bit hash $\boldsymbol{r^{\prime}}$ and the 8-bit
random integer $\boldsymbol{n^{\prime}}$. In addition, there are 2
extra hash computations before the execution of steps 4-14 in
Figure 5, performed on both sides. We do not count the XOR operations,
as their cost is almost negligible. Aside from the two hash computations in
steps 3a and 3b, each round involves 4 hash computations, used to
compute the commitment values and verify them on both sides. So, for 10 rounds
we have 4 (hashes) x 10 (rounds) = 40 hash computations, and the
computation cost of our protocol adds up to a total of 42 hash computations.
Regarding the communication cost, our protocol follows the same steps as
the original protocol, except that two 8-bit random integers
are exchanged in each round. The sizes of the commitment values and the random
nonces remain the same. Therefore, the communication cost is
$10\times 2\times 8\ (\mathrm{random\ integers})+10\times 2\times 128\ (\mathrm{commitments})+10\times 2\times 128\ (\mathrm{random\ nonces})=5280\ \mathrm{bits.}$
The storage cost of our protocol is 468 or 532 bits (including the 20-bit
passkey), depending on the length of the DHkey. Although the storage cost is
higher than that of the original protocol, the protocol significantly
increases security by providing two layers of protection and guarding devices
against passive attacks and active MITM attacks.
Our enhanced passkey entry protocol roughly halves both the computation
and the communication cost. Due to the decreased computation
cost, the device consumes less power than with the original protocol. In
addition, the passkey entry protocol runs faster thanks to the reduced
communication cost, as only about half the bits need to be sent compared to
the original protocol.
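The cost comparison above can be checked with a few lines of arithmetic, with all counts taken directly from Sections X-A and X-B:

```python
# Original passkey entry: 20 rounds, f1 at steps 4a, 4b, 7a, 8a each round.
orig_hashes = 20 * 4                                   # 80 hash computations
orig_bits = 20 * 2 * 128 + 20 * 2 * 128                # commitments + nonces

# Enhanced protocol: 2 hashes in steps 3a/3b, then 4 per round for 10 rounds.
enh_hashes = 2 + 10 * 4                                # 42 hash computations
enh_bits = 10 * 2 * 8 + 10 * 2 * 128 + 10 * 2 * 128    # ints + commits + nonces

print(orig_hashes, orig_bits)   # 80 10240
print(enh_hashes, enh_bits)     # 42 5280
```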
## XI Conclusion
In this paper, we have introduced a new enhanced passkey entry protocol that
successfully protects Bluetooth devices against passive attacks and MITM
attacks while reducing the computation and communication costs by half. It
should be noted that we have designed our protocol without deviating too much
from the original protocol, so that our enhanced protocol can be
simply integrated with Bluetooth devices in the form of a patch. We have
taken a theoretical approach to the passkey reuse issue and to protection
against MITM attacks in the passkey entry model. As future work, we will
consider deploying our proposed protocol on different Bluetooth devices and
testing it under different scenarios.
## References
* [1] Da-Zhi Sun, Yi Mu. “Man-in-the-middle attacks on Secure Simple Pairing in Bluetooth standard v5.0 and its countermeasure.” Personal and Ubiquitous Computing, February 2018. pp.55-67.
* [2] Bluetooth Special Interest Group (SIG) Core Specification document v5.2 (2019) https://www.bluetooth.com/
* [3] Da-Zhi Sun, Li Sun. “On Secure Simple Pairing in Bluetooth standard v5.0-Part I: Authenticated Link Key Security and Its Home Automation and Entertainment Applications.” Mar 2019.
* [4] Giwon Kwon, Jeehyeong Kim. “Bluetooth Low Energy Security Vulnerability and Improvement Method.” IEEE ICCE Asia, 2016.
* [5] Eli Biham, Lior Neumann. Breaking the Bluetooth Pairing - Fixed Coordinate Invalid Curve Attack. July 2018. Accessed January 2020.
* [6] Samta Gajbhiye, Sanjeev Karmakar. “Bluetooth Secure Simple Pairing with enhanced security level.” Journal of information security and applications 44 (2019). pp.170-183.
* [7] Barnickel J, Wang J, Meyer U. “Implementing an attack on Bluetooth 2.1+ secure simple pairing in passkey entry mode.” IEEE computer society (2012), pp.17-24.
(1) Wrocław University of Science and Technology, Faculty of Pure
and Applied Mathematics, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland.
Email<EMAIL_ADDRESS>http://szajowski.wordpress.com/
(2) Wrocław University of Science and Technology.
Email<EMAIL_ADDRESS>
# A measure of the importance of roads based on topography and traffic
intensity (Presented at the Bernoulli-IMS One World Symposium 2020, August
24-28, 2020.)
Krzysztof J. Szajowski (1), Kinga Włodarczyk (2)
###### Abstract
Mathematical models of street traffic that allow assessment of the importance
of individual segments for the functionality of the street system are
considered. Based on methods of cooperative games and reliability theory, a
suitable measure is constructed. The main goal is to analyze methods for
assessing the importance (rank) of road fragments, taking into account their
functions. The relevance of these elements for effective accessibility of the
entire system will be considered.
Subject Classification: MSC 68Q80 $\cdot$ (90B20; 90D80)
###### Keywords:
component importance $\cdot$ coherent system $\cdot$ road classification
$\cdot$ graph models $\cdot$ traffic modelling.
## 1 Introduction.
### 1.1 Historical remarks and motivations.
The function of a road network is to facilitate movement from one area to
another. As such, it plays an important role in the urban environment by
facilitating mobility. It furthermore determines the accessibility of an
(urban) area (together with public transport options). In many studies on the
design and maintenance of roads, the authors raise the problem of alternative
connections needed to ensure efficient transport between strategic places (cf.
Lin2010:Path, TacMerMan2012:Hazards).
It is known that individual segments of the road structure are exposed to
various types of threats, resulting in temporary disconnection of such
couplings. As a result, the road network determines the quality of life in the
analyzed area. Therefore, it is worth defining measurable parameters of
the quality of road connections in the road system constituting the
infrastructure used for transport. Further considerations focus on road
systems for road transport. However, the proposed approach can be successfully
applied to other similar structures.
When designing, it is worth analyzing the effects of excluding
individual segments and determining measures that allow for the
identification of critical ones. However, the difficulty with this kind of
economic appraisal is, first of all, that it is not easy to measure the
valuation of travel time. Different people and organizations value travel time
in different ways, depending on many factors such as income, goal of the trip,
social background, etc. (cf. Che1981:TTS). It is relatively
easier to measure the value of travel time than a highway security measure (v.
Sha2012:Secirity). The purpose of this work is to determine
the importance of road segments in road traffic. Commonly known measures of
significance used to evaluate components of binary systems will be considered.
Road topography is the result of a long-term process that cannot be changed in
a short time. Therefore, it is important to ensure safe road traffic when
planning communication infrastructure. To this end, it is important to
introduce objective methods for assessing weak links in the road system. Using
methods of stochastic processes and game theory, a quantitative approach to
the importance of various elements of infrastructure will be proposed. The
introduced connection assessment proposals will be illustrated using
information about the actual local road network in a selected city (see
Example 1.2).
### 1.2 A motivating example.
In the presented work, the network of streets ensuring access from point $A$
to point $B$ in Zduńska Wola will be treated as a system. The diagram of the
streets analyzed can be seen in Figure 1(a). Let us emphasize that the purpose
of modeling is not to reflect the current traffic on the network, as shown in
Figure 1(b), but to rank the network elements according to their
objective importance for the functioning of the road system.
(a) Road segments from A to B.
(b) Google Maps presentation of the typical traffic load on the
streets. (Source: Google Maps.)
Figure 1: Analysed traffic network.
In the literature, there are many measures for assessing the importance of
individual components, based on the system structure, the lifetimes and
reliability of individual components, or methods of estimating significance
based on switching components on and off. The most classic methods, based on
reliability theory, will be used in this paper. To this end, the street
network will be presented in the form of a system, where each road is modeled
as a separate component. Then, based on the construction of the system, the
structure function will be determined, which makes it possible to calculate
the importance of individual components and the corresponding streets.
The next stage will be placing traffic theory in the context of
significance measures. This area will be examined in relation to the
satisfaction and comfort of drivers. Driver satisfaction means that the
system works properly; otherwise the system is considered failed. Thus, as
the reliability of a particular road, we consider the probability of the
driver’s satisfaction with the journey. A road connection system in a given
area should allow transport in a predictable time between different points.
Extending this time has a negative effect on drivers. As a result, their ride
quality is compromised and they are more likely to fail to comply with the
rules. Therefore, providing drivers with driving comfort and satisfaction is
also important for general road safety. This approach can, therefore, be a
guide for both drivers and road builders planning road infrastructure.
### 1.3 The paper organization.
The purpose of the research presented here is to apply various
importance measures introduced in reliability theory to analyze the impact
of elements of road networks. The theory related to significance measures and
their use in traffic theory is described in Section 2. There is a close
relationship between road delays and the construction and function of both the
road and the intersections that form part of it. For this purpose, simulations
of vehicle traffic on the analyzed roads were performed. The theory related to
the method of modeling vehicle traffic and driver behavior at intersections is
described in Section 3. Section 4 describes the real traffic network, its
transfer to the simulation model, and the results obtained in this way. On
this basis, the importance of individual fragments was calculated depending
on the intensity of traffic on these roads.
This work examines the impact of the road structure, in connection with
traffic, on the comfort of communication, without directly referring to
the behavior of drivers, to which the paper SzaWlo2020:Divers was devoted.
In that previous work, a significant dependence of traffic quality on
drivers’ compliance with applicable rules was shown. Here,
a similar approach is applied under the condition of behavior changing to
incorrect, which may further result in a deterioration of traffic quality.
Therefore, the results obtained identify important elements of the road
network that have an impact on road safety and proper functioning.
## 2 Importance measure.
The operation of most systems depends on the functioning of their individual
components. It is important to ensure the proper running of the entire system,
and to this end it is important to assess the contribution of individual
components. In road networks, the edges model road segments,
intersections, and special places on the road that have a significant impact
on the flow of traffic, such as railway crossings, tunnels, bridges, viaducts
or road narrowings. In order to estimate the importance of particular
elements, the concept of importance measures was introduced (for a detailed
description of the concept and its extension to multilevel elements and
systems, see the review paper Overview_importance). Since 1969, researchers
have offered various numerical representations to determine which components
are the most significant for system reliability. It is obvious that the
greater these values are, the greater the influence of the element on the
functioning of the entire system. The significance of individual elements
depends on the system structure as well as the specificity and failure rate of
individual elements. There are three basic classes of importance measures (v.
Overview_importance, Birnbaum1968, Sre2020:Importance):
1. i
Reliability importance measures capture changes in the reliability of the
system depending on changes in the reliability of individual elements over
a given period of time, and depend on the structure of the system.
2. ii
Structural importance measures are used when just the structure of the system
is known. Depending on the position of the components in the system, their
relative importance is measured.
3. iii
Lifetime importance measures focus on both the components’ position in the
system and the lifetime distribution of each element. According to Kuo_Zhu, if
it is a function of time, it is classified as Time-Dependent Lifetime (TDL)
importance, and if it is not a function of time, we have Time-Independent
Lifetime (TIL) importance.
Moreover, depending on the number of states, systems can be divided into two
types:
1. i
Binary systems — comprised of $n$ components, each of which is in exactly one
of two states: state $\mathbf{0}$ when the component is damaged and state
$\mathbf{1}$ when it is working.
2. ii
Multistate systems (MSS) — comprised of $n$ components which can undergo
partial failure but do not cease to perform their functions and do not cause
damage to the entire system.
### 2.1 Concepts of importance measures.
Establishing the hierarchy of components of a complex system has been reduced
to measuring the influence of the element state on the status of the entire
system. The concept of an element (system) state depends on the context. For
the needs of the road network analysis, we assume, similarly to the
reliability theory, a binary description of both elements and the system (v.
also Ramamurthy1990). For this purpose, we will use the known
results on significance measures obtained in research on this subject
developed in recent years. Importance measures have been developed in many
directions and under many definitions. However, among the most popular areas
of development and application are:
* •
the theory of cooperative games (in simple games),
* •
reliability theory (in coherent and semi-coherent structures).
Many methods have been developed to combine and standardize the terminology
associated with both applications. Therefore, to begin with, we must briefly
mention the relationship between importance measure theory in the context of
both concepts. The first attempt to define it was made in Ramamurthy1990,
where the following notation was proposed:
1. (1)
$\emptyset\in P$, where $P$ is a set of subsets of $N$;
2. (2)
$N\in P$, where $N$ is a finite, nonempty set;
3. (3)
$S\subseteq T\subseteq N$ and $S\in P$ imply $T\in P$.
The concepts of game theory and reliability theory were compared with
each other and, on this basis, it was possible to define the relationship
between them. To begin with, it is easy to see the correspondence
between players and components. In game theory, we have a set of
players $N=\\{1,2,3,\ldots,n\\}$ and a family of coalitions $2^{N}$. In
reliability theory, we have a set of components $N=\\{1,2,3,\ldots,n\\}$,
where the components and the entire system can be in two states: state 1 for
functioning and state 0 for failed. Similarly, in game theory, a map
$\lambda:2^{N}\rightarrow\\{0,1\\}$ defines a simple game on the set $N$ if
the characteristic function fulfils
1. (1)
$\lambda(\emptyset)=0$;
2. (2)
$\lambda(N)=1$;
3. (3)
$S\subseteq T\subseteq N$ implies $\lambda(S)\leq\lambda(T)$.
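The three axioms of a simple game above can be verified mechanically for a concrete example; the three-player majority game below is a hypothetical illustration, not taken from the paper:

```python
from itertools import combinations

N = frozenset({1, 2, 3})

def lam(S):                       # majority game: a coalition wins iff |S| >= 2
    return 1 if len(S) >= 2 else 0

# enumerate all coalitions in 2^N
subsets = [frozenset(c) for r in range(len(N) + 1)
           for c in combinations(sorted(N), r)]

assert lam(frozenset()) == 0                      # axiom (1)
assert lam(N) == 1                                # axiom (2)
assert all(lam(S) <= lam(T)                       # axiom (3): monotonicity
           for S in subsets for T in subsets if S <= T)
print("all axioms hold")
```

Read through the reliability correspondence, `lam` is a structure function and its winning coalitions ({1,2}, {1,3}, {2,3}, {1,2,3}) are exactly the path sets of a 2-out-of-3 system.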
Here this characteristic function has its counterpart in the structure
function, and simple games correspond to semi-coherent structures. In
addition, winning and blocking coalitions are comparable to path and cut sets.
In this paper, the reliability-theoretic application of importance measures
will be considered, and the traffic network will be represented as a system.
Accordingly, in the rest of this paper we use reliability terminology.
### 2.2 Important measures on binary reliability systems.
In the classical approach, the system and its elements are binary (v.
Birnbaum1968, Birnbaum1961). A system
comprising $n$ components is denoted by $c=(c_{1},c_{2},...,c_{n})$. The
vector of component states (in short, the state vector) is
$x=(x_{1},x_{2},...,x_{n})$, where each $x_{i}=\chi_{W}(c_{i})$,
$c_{i}\in\\{W,F\\}$ ($W$ means the element is functioning; $F$ means the
element is failed). For state vectors, we use the notation below
[Birnbaum1968]:
$\displaystyle\vec{x}\leq\vec{y}\quad\text{if }x_{i}\leq y_{i}\text{ for all }i\in\\{1,\ldots,n\\},$
$\displaystyle\vec{x}=\vec{y}\quad\text{if }x_{i}=y_{i}\text{ for all }i\in\\{1,\ldots,n\\},$
$\displaystyle\vec{x}<\vec{y}\quad\text{if }\vec{x}\leq\vec{y}\text{ and }\vec{x}\neq\vec{y},$
$\displaystyle(1_{i},x)=(x_{1},x_{2},\ldots,x_{i-1},1,x_{i+1},\ldots,x_{n})=(1,x_{-i}),$
$\displaystyle(0_{i},x)=(x_{1},x_{2},\ldots,x_{i-1},0,x_{i+1},\ldots,x_{n})=(0,x_{-i}),$
$\displaystyle\vec{0}=(0,0,\ldots,0),\quad\vec{1}=(1,1,\ldots,1).$
If the structure of the system is known, we can define the state of the system
$\phi(\vec{x})$ as a Boolean function (the structure function) of the state
vector. If $x_{i}\leq y_{i}$ for $i\in\\{1,\dots,n\\}$ implies
$\phi(\vec{x})\leq\phi(\vec{y})$, and $\phi(\vec{1})=1$, $\phi(\vec{0})=0$,
then we call the system coherent. It is known (v. Birnbaum1968)
that for every $i=1,2,\ldots,n$ the structure function can be decomposed as
follows:
$\phi(\vec{x})=x_{i}\cdot\delta_{i}(\vec{x})+\mu_{i}(\vec{x}),$ (1)
where $\delta_{i}(\vec{x})=\phi(1_{i},\vec{x})-\phi(0_{i},\vec{x})$,
$\mu_{i}(\vec{x})=\phi(0_{i},\vec{x})$ are independent of the state $x_{i}$ of
the component $c_{i}$.
In addition, we can observe situations where the system keeps functioning
even if some components have failed. The smallest set of functioning elements
that ensures the operation of the entire system is called a minimal path. The
opposite situation occurs for a minimal cut set, which is the smallest set of
components whose joint failure causes the whole system to fail. We can define
the structure function as a parallel arrangement of minimal path series
structures: by definition, such a structure fails if and only if all of the
minimal paths fail. So a system consisting of $n$ minimal path series
structures, denoted by $\rho_{i}(\cdot)$ for $i=1,2,\ldots,n$, can be presented as:
$\phi(\vec{x})=\coprod_{i=1}^{n}\rho_{i}(\vec{x})=1-\prod_{i=1}^{n}\big{[}1-\rho_{i}(\vec{x})\big{]}.$
(2)
Similarly, the structure function can be presented as a series of minimal cut
parallel structures. So for $n$ minimal cut parallel structures, denoted by
$\kappa_{i}(\cdot)$ for $i=1,2,\ldots,n$, the structure function reads:
$\phi(\vec{x})=\prod_{i=1}^{n}\kappa_{i}(\vec{x}).$ (3)
If we simply replace the minimal paths and minimal cut sets by single
components, formulas (2) and (3) describe parallel and series systems, respectively.
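As a small illustration of formulas (2) and (3), the following sketch evaluates a structure function from its minimal path sets and from its minimal cut sets. The three-component example system (components 0 and 1 in series, in parallel with component 2) is an illustrative assumption, not the paper's network.

```python
# Evaluating a structure function phi from minimal path sets (eq. (2))
# and from minimal cut sets (eq. (3)).  Example system (our assumption):
# components 0 and 1 in series, in parallel with component 2.

def phi_from_paths(x, paths):
    """1 - prod_i(1 - rho_i), where rho_i is the product of x_j over path i."""
    out = 1
    for path in paths:
        rho = 1
        for j in path:
            rho *= x[j]
        out *= 1 - rho
    return 1 - out

def phi_from_cuts(x, cuts):
    """prod_i(kappa_i), where kappa_i = 1 - prod_j(1 - x_j) over cut i."""
    out = 1
    for cut in cuts:
        fail = 1
        for j in cut:
            fail *= 1 - x[j]
        out *= 1 - fail
    return out

paths = [{0, 1}, {2}]        # minimal path sets
cuts = [{0, 2}, {1, 2}]      # minimal cut sets of the same system
x = {0: 1, 1: 0, 2: 1}       # component 1 failed, but path {2} still works
print(phi_from_paths(x, paths), phi_from_cuts(x, cuts))  # 1 1
```

Both representations agree on every state vector, as formulas (2) and (3) require.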
In most considerations about the functioning of systems, it is assumed that
the elements work independently. Then the state of the $i$-th element is a
binary random variable $X_{i}$, and the probability that element $i$ is
functioning will be denoted by $p_{i}$, where
$p_{i}=P(X_{i}=1)=1-P(X_{i}=0).$ (4)
We also define the vector of reliabilities for $n$ elements by
$\vec{p}=(p_{1},p_{2},\ldots,p_{n}).$ (5)
Based on the reliability vector and the structure function we can define the
probability that the system functions
$P(\phi(X)=1|\vec{p})=E[\phi(X)|\vec{p}]=h_{\phi}(\vec{p}).$ (6)
The function $h_{\phi}(\vec{p})$ is called the reliability function of the
structure $\phi(\vec{x})$.
### 2.3 Reliability importance measure.
As was introduced, reliability importance measures are based on changes in the
reliabilities of components and on the system structure. This measure was
first introduced by Birnbaum1968 (Birnbaum1968). Starting from formulas (4),
(5) and (1), he expressed the reliability function by
$h_{\phi}(\vec{p})=p_{i}\cdot E[\delta_{i}(X)]+E[\mu_{i}(X)]$, so that, for
every $i=1,2,\ldots,n$ and according to equation (1), we have
$\frac{\partial h_{\phi}(\vec{p})}{\partial
p_{i}}=E[\delta_{i}(\vec{X})]=E\left[\frac{\partial\phi(\vec{X})}{\partial
X_{i}}\right].$
According to Birnbaum1968 (Birnbaum1968) the reliability importance of the
component $c_{i}$ for the structure $\phi(\cdot)$ is defined as
$I_{i}(\phi;\vec{p})=I_{i}(\phi,1;\vec{p})+I_{i}(\phi,0;\vec{p})$, where
$I_{i}(\phi,1;\vec{p})=P\\{\phi(\vec{X})=1|X_{i}=1;\vec{p}\\}-P\\{\phi(\vec{X})=1;\vec{p}\\}$,
and
$I_{i}(\phi,0;\vec{p})=P\\{\phi(\vec{X})=0|X_{i}=0;\vec{p}\\}-P\\{\phi(\vec{X})=0;\vec{p}\\}$.
Here $I_{i}(\phi,1;\vec{p})$ and $I_{i}(\phi,0;\vec{p})$ are the reliability
importance of the component $c_{i}$ for the functioning and the failure of the
structure, respectively. We have
the following useful identities
$\displaystyle I_{i}(\phi;1;\vec{p})=(1-p_{i})\cdot\frac{\partial
h(\vec{p})}{\partial p_{i}}=E[(1-X_{i})\delta_{i}(X)]$ $\displaystyle
I_{i}(\phi;0;\vec{p})=p_{i}\cdot\frac{\partial h(\vec{p})}{\partial
p_{i}}=E[X_{i}\delta_{i}(X)]$ $\displaystyle
I_{i}(\phi;\vec{p})=\frac{\partial h(\vec{p})}{\partial
p_{i}}=E[\delta_{i}(X)].$
The Birnbaum importance measure for $i=1,2,\ldots,n$ has the form (the symbol
$\phi$ is dropped for brevity)
$B(i|\vec{p})=\frac{\partial h(\vec{p})}{\partial
p_{i}}=\frac{\partial[1-h(\vec{p})]}{\partial[1-p_{i}]},$ (7)
here $B(i|\vec{p})$ depends on $\vec{p}$. When the reliability vector $\vec{p}$ is
unknown, we have to consider the structural importance, defined for $i=1,2,...,n$
in the following way
$B(i)=I_{i}(\phi)=\frac{\partial h(\vec{p})}{\partial
p_{i}}\Bigg{\rvert}_{p_{1}=...=p_{n}=\frac{1}{2}},$ (8)
this information will be useful in the next section.
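Equations (7) and (8) can be checked numerically. The sketch below computes $h(\vec{p})$ exactly as $E[\phi(X)]$ by enumerating all $2^{n}$ state vectors and exploits the multilinearity of $h$, so that $\partial h/\partial p_{i}=h(1_{i},\vec{p})-h(0_{i},\vec{p})$. The 2-out-of-3 example system and its reliabilities are illustrative assumptions, not taken from the paper.

```python
from itertools import product

# Birnbaum importance via eq. (7): h is multilinear in p, so
# dh/dp_i = h(1_i, p) - h(0_i, p).  h(p) = E[phi(X)] is computed
# exactly by enumerating all 2^n state vectors.

def h(phi, p):
    total = 0.0
    for x in product((0, 1), repeat=len(p)):
        w = 1.0
        for xi, pi in zip(x, p):
            w *= pi if xi else 1 - pi
        total += w * phi(x)
    return total

def birnbaum(phi, p, i):
    return (h(phi, p[:i] + (1.0,) + p[i + 1:])
            - h(phi, p[:i] + (0.0,) + p[i + 1:]))

phi = lambda x: 1 if sum(x) >= 2 else 0   # 2-out-of-3 system (assumption)
p = (0.9, 0.8, 0.7)
print(round(birnbaum(phi, p, 0), 4))      # reliability importance of c_0
# Structural importance (eq. (8)): all reliabilities set to 1/2.
print(birnbaum(phi, (0.5, 0.5, 0.5), 0))  # 0.5
```

The finite-difference form avoids symbolic differentiation and works for any coherent structure function.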
### 2.4 Structural importance measures.
A component $c_{i}$ is relevant for the structure $\phi(\cdot)$ at a known
state vector $\vec{x}$ if
$\delta_{i}(\vec{x})=\phi(1_{i},\vec{x})-\phi(0_{i},\vec{x})=1$. More
specifically, the component $c_{i}$ is relevant for the functioning of the
structure $\phi(\cdot)$ at the state vector $\vec{x}$ if
$(1-x_{i})\cdot\delta_{i}(\vec{x})=1$, and it is relevant for the failure of
the structure $\phi(\cdot)$ at the state vector $\vec{x}$ if
$x_{i}\cdot\delta_{i}(\vec{x})=1$. Thus, depending on whether the coordinate
$x_{i}$ of the vertex $\vec{x}$ equals $0$ or $1$, $c_{i}$ is relevant for the
functioning or for the failure of the system.
Birnbaum1968 (Birnbaum1968) defined the structural importance measure of
component $c_{i}$ for the functioning of the structure $\phi(\cdot)$ as
$I_{i}(\phi,1)=2^{-n}\sum_{(x)}(1-x_{i})\cdot\delta_{i}(x)$, where the sum
extends over all $2^{n}$ vertices of the state vectors. In the same way the
structural importance measure of the component $c_{i}$ for the failure of the
structure $\phi(\cdot)$ is defined by
$I_{i}(\phi,0)=2^{-n}\sum_{(x)}x_{i}\cdot\delta_{i}(x)$. Finally, summing,
the structural importance measure of the component $c_{i}$ for the structure
$\phi(\cdot)$ is
$I_{i}(\phi)=I_{i}(\phi,1)+I_{i}(\phi,0)=2^{-n}\sum_{(x)}\delta_{i}(x)$.
Barlow and Proschan (barlow) used a more general approach to structural
measures. Their point of view assumes that all components have continuous
lifetime distributions, denoted by $F_{i}$, for $i=1,2,\ldots,n$. It is then
possible to calculate the probability that a system failure was caused by the
$c_{i}$ component. For the $c_{i}$ component, described by the distribution
$F_{i}$ and the density function $f_{i}$, the probability that a system
failure at time $t$ was caused by the $c_{i}$ component can be written as follows
$\frac{[h(1_{i},\bar{F}(t))-h(0_{i},\bar{F}(t))]f_{i}(t)}{\sum_{k=1}^{n}[h(1_{k},\bar{F}(t))-h(0_{k},\bar{F}(t))]f_{k}(t)}.$
(9)
As a consequence of (9), it is natural to define the probability that a
failure of the system in $[0,t]$ was caused by the $c_{i}$ component as
$\frac{\int_{0}^{t}[h(1_{i},\bar{F}(u))-h(0_{i},\bar{F}(u))]dF_{i}(u)}{\int_{0}^{t}\sum_{k=1}^{n}[h(1_{k},\bar{F}(u))-h(0_{k},\bar{F}(u))]dF_{k}(u)}.$
Letting $t\to\infty$, we obtain the probability that the eventual failure of
the system was caused by the component $c_{i}$; note that in this limit the
denominator equals 1. This limit is taken as the definition of _component
importance_.
The importance measure according to the Barlow and Proschan definition will be
denoted by $I_{i}^{BP}(\phi)$. We have
$I_{i}^{BP}(\phi)=\int\limits_{0}^{1}[h(1_{i},p)-h(0_{i},p)]dp,$ (10)
where $(1_{i},p)$ and $(0_{i},p)$ are probability vectors in which the $i$-th
component has probability equal to 1 or 0, respectively.
For further calculations, recall from Section 2.2 that a minimal path is a
minimal set of elements which ensures the proper functioning of the system.
Based on this we can define a critical path set for component $c_{i}$ as
$\\{i\\}\cup\\{j|x_{j}=1,i\neq j\\}$ such that the system functions or fails
according to whether the component $c_{i}$ functions or fails. A critical path
vector (or set) for the component $c_{i}$ has size $r$ if
$1+\sum_{i\neq j}x_{j}=r$, for $r=1,2,\ldots,n$. The number of critical path
vectors for the component $c_{i}$ of size $r$ is given by
$n_{r}(i)=\sum_{\sum_{i\neq j}x_{j}=r-1}[\phi(1_{i},x)-\phi(0_{i},x)].$
Finally, we can define the structural importance of the component $c_{i}$
using the number of vectors of critical paths $n_{r}(i)$ as follows
$I_{i}^{BP}(\phi)=\sum_{r=1}^{n}n_{r}(i)\cdot\frac{(r-1)!(n-r)!}{n!}.$ (11)
Equation (11) can also be presented in two other interesting forms. The first
is the following
$I_{i}^{BP}(\phi)=\frac{1}{n}\sum_{r=1}^{n}n_{r}(i)\tbinom{n-1}{r-1}^{-1},$
where $n_{r}(i)$ denotes the number of critical path vectors of size $r$. The
binomial coefficient in the denominator counts the outcomes in which exactly
$r-1$ components are functioning among the $n-1$ components other than
$c_{i}$. The second additional representation of equation (11) can be written
as follows
$I_{i}^{BP}(\phi)=\int\limits_{0}^{1}\Big{[}\sum_{r=1}^{n}n_{r}(i)\cdot\tbinom{n-1}{r-1}^{-1}\tbinom{n-1}{r-1}\cdot(1-p)^{n-r}\cdot
p^{r-1}\Big{]}dp,$
here $\tbinom{n-1}{r-1}(1-p)^{n-r}p^{r-1}$ is the probability that, among the
$n-1$ components other than $c_{i}$, exactly $r-1$ elements are functioning.
Moreover, $n_{r}(i)\tbinom{n-1}{r-1}^{-1}$ is the probability that these $r-1$
functioning elements, together with $c_{i}$, form a critical path set for the
component $c_{i}$. Their product is therefore the probability that the
component $c_{i}$ is responsible for the system failure, and integrating over
$p$ corresponds to assuming that the reliability of the component $c_{i}$
follows a uniform distribution $p\sim\mathcal{U}(0,1)$.
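A numerical sketch of the Barlow-Proschan measure in the form of equation (10), with every component reliability set to a common $p$ that is integrated out. The 2-out-of-3 example system is an illustrative assumption, and the midpoint rule is just one simple quadrature choice.

```python
from itertools import product

# Barlow-Proschan structural importance, eq. (10), with every component
# reliability set to a common p integrated over [0, 1].

def h_fixed(phi, n, p, i, v):
    """E[phi(X) | X_i = v], all other components i.i.d. Bernoulli(p)."""
    others = [j for j in range(n) if j != i]
    total = 0.0
    for bits in product((0, 1), repeat=n - 1):
        x = [0] * n
        x[i] = v
        w = 1.0
        for j, b in zip(others, bits):
            x[j] = b
            w *= p if b else 1 - p
        total += w * phi(tuple(x))
    return total

def barlow_proschan(phi, n, i, steps=2000):
    """Midpoint-rule approximation of the integral in eq. (10)."""
    s = 0.0
    for k in range(steps):
        p = (k + 0.5) / steps
        s += h_fixed(phi, n, p, i, 1) - h_fixed(phi, n, p, i, 0)
    return s / steps

phi = lambda x: 1 if sum(x) >= 2 else 0      # 2-out-of-3 system (assumption)
print(round(barlow_proschan(phi, 3, 0), 3))  # 0.333 (by symmetry, 1/3 each)
```

For the symmetric 2-out-of-3 system the three component importances must coincide and sum to 1, which gives the value 1/3 used as a check.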
As was written at the beginning of Section 2.1, there is a strong connection
between concepts from game theory and reliability theory, and the measure
introduced by Barlow and Proschan is an example of this. Its counterpart in
cooperative games is the Shapley value, which tells what payoff a given player
can expect, taking into account his contribution to every coalition.
### 2.5 Importance measures of road segments based on traffic flow in example
1.2.
As was said in Section 1, binary systems are considered. The analyzed system
is a street network allowing travel from $A$ to $B$, which is possible in
several ways. We assume that drivers drive only from $A$ to $B$, directly,
without unnecessary U-turns on the route. The streets were presented at the
beginning in Fig. 1(a) and can be transformed into the form of a system (a
scheme) as in Fig. 2.
Figure 2: Analysed traffic network presented in system form.
Based on the system representation of the street network we can determine the
structure function. As we know, the structure function can be defined using
either the minimal path sets or the minimal cut sets. Both sets for the given
structure are presented in Tables 1 and 2.
Table 1: Minimal path set. Path | Elements
---|---
1 | 1 2 3 8 12
2 | 1 2 5 9 11 12
3 | 4 6 9 11 12
4 | 4 7 10 11 12
Table 2: Minimal cut set. Cut | Elements | Cut | Elements
---|---|---|---
1 | 1 4 | 11 | 3 9 10
2 | 2 4 | 12 | 8 9 7
3 | 1 6 7 | 13 | 8 9 10
4 | 2 6 7 | 14 | 3 4 9
5 | 4 5 3 | 15 | 4 8 9
6 | 4 5 8 | 16 | 3 11
7 | 1 6 10 | 17 | 8 11
8 | 2 6 10 | 18 | 1 11
9 | 3 5 6 7 | 19 | 2 11
10 | 3 9 7 | 20 | 12
Based on Tables 1 and 2, it is possible to define minimal path series
structures represented by the following equations
$\displaystyle\rho_{1}(x)$
$\displaystyle=\prod_{\\{1,2,3,8,12\\}}x_{i}=x_{1}\cdot x_{2}\cdot x_{3}\cdot
x_{8}\cdot x_{12}$ $\displaystyle\rho_{2}(x)=\prod_{\\{1,2,5,9,11,12\\}}x_{i}$
$\displaystyle\rho_{3}(x)$ $\displaystyle=\prod_{\\{4,6,9,11,12\\}}x_{i}$
$\displaystyle\rho_{4}(x)=\prod_{\\{4,7,10,11,12\\}}x_{i}$ and minimal cut
parallel structures described as follows $\displaystyle\kappa_{1}(x)$
$\displaystyle=\coprod_{\\{1,4\\}}x_{i}=x_{1}\amalg x_{4}$
$\displaystyle\kappa_{2}(x)=\coprod_{\\{2,4\\}}x_{i}\quad$
$\displaystyle\kappa_{3}(x)=\coprod_{\\{1,6,7\\}}x_{i}$
$\displaystyle\kappa_{4}(x)$
$\displaystyle=\coprod_{\\{2,6,7\\}}x_{i}=x_{2}\amalg x_{6}\amalg x_{7}$
$\displaystyle\kappa_{5}(x)=\coprod_{\\{4,5,3\\}}x_{i}\ $
$\displaystyle\kappa_{6}(x)=\coprod_{\\{4,5,8\\}}x_{i}$
$\displaystyle\kappa_{7}(x)$
$\displaystyle=\coprod_{\\{1,6,10\\}}x_{i}=x_{1}\amalg x_{6}\amalg x_{10}\ $
$\displaystyle\kappa_{8}(x)=\coprod_{\\{2,6,10\\}}x_{i}\ $
$\displaystyle\kappa_{9}(x)=\coprod_{\\{3,5,6,7\\}}x_{i}$
$\displaystyle\kappa_{10}(x)$
$\displaystyle=\coprod_{\\{3,9,7\\}}x_{i}=x_{3}\amalg x_{9}\amalg x_{7}\ $
$\displaystyle\kappa_{11}(x)=\coprod_{\\{3,9,10\\}}x_{i}\ $
$\displaystyle\kappa_{12}(x)=\coprod_{\\{8,9,7\\}}x_{i}$
$\displaystyle\kappa_{13}(x)$
$\displaystyle=\coprod_{\\{8,9,10\\}}x_{i}=x_{8}\amalg x_{9}\amalg x_{10}\ $
$\displaystyle\kappa_{14}(x)=\coprod_{\\{3,4,9\\}}x_{i}\ $
$\displaystyle\kappa_{15}(x)=\coprod_{\\{4,8,9\\}}x_{i}$
$\displaystyle\kappa_{16}(x)$
$\displaystyle=\coprod_{\\{3,11\\}}x_{i}=x_{3}\amalg x_{11}\ $
$\displaystyle\kappa_{17}(x)=\coprod_{\\{8,11\\}}x_{i}\ $
$\displaystyle\kappa_{18}(x)=\coprod_{\\{1,11\\}}x_{i}$
$\displaystyle\kappa_{19}(x)$
$\displaystyle=\coprod_{\\{2,11\\}}x_{i}=x_{2}\amalg x_{11}\ $
$\displaystyle\kappa_{20}(x)=\coprod_{\\{12\\}}x_{i}=x_{12}\ $
From the definition in equation (2) and based on the above equations, we can
write the structure function of the presented system as follows
$\displaystyle\phi(x)$
$\displaystyle=\rho_{1}(x)\amalg\rho_{2}(x)\amalg\rho_{3}(x)\amalg\rho_{4}(x)=$
$\displaystyle=1-(1-\rho_{1}(x))(1-\rho_{2}(x))(1-\rho_{3}(x))(1-\rho_{4}(x))$
In addition, our structure function can also be expressed as the series of
minimal cut structures
$\phi(x)=\prod_{i=1}^{20}\kappa_{i}(x).$
And finally, thanks to equation (6), we can write the reliability function of
the analyzed system
$\displaystyle h_{\phi}(p)$
$\displaystyle=1-(1-\prod_{\\{1,2,3,8,12\\}}p_{i})(1-\prod_{\\{1,2,5,9,11,12\\}}p_{i})(1-\prod_{\\{4,6,9,11,12\\}}p_{i})$
(12) $\displaystyle\quad\times(1-\prod_{\\{4,7,10,11,12\\}}p_{i}),$
where $p_{i}$, for $i=1,2,\ldots,12$, are probabilities whose definition
will be introduced in the next section.
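The reliability of this network can also be evaluated numerically from the minimal paths of Table 1. The sketch below computes the exact $h(\vec{p})=E[\phi(X)]$ by enumerating all $2^{12}$ state vectors; note that the product form on the right-hand side of (12) treats the four path structures as if they were independent, so for paths sharing components (here 1, 2, 11, 12) it acts as the classical minimal-path upper bound rather than the exact value. The common reliability 0.9 is an illustrative assumption.

```python
from itertools import product

# Reliability of the street network from the minimal paths of Table 1
# (components 1..12).  h(p) = E[phi(X)] is computed exactly over all
# 2^12 state vectors; h_path_product is the product form of eq. (12).

PATHS = [{1, 2, 3, 8, 12}, {1, 2, 5, 9, 11, 12},
         {4, 6, 9, 11, 12}, {4, 7, 10, 11, 12}]

def phi(x):  # x: dict component id -> 0/1
    return 1 if any(all(x[j] for j in path) for path in PATHS) else 0

def h_exact(p):
    total = 0.0
    for bits in product((0, 1), repeat=12):
        x = dict(zip(range(1, 13), bits))
        w = 1.0
        for j in range(1, 13):
            w *= p[j] if x[j] else 1 - p[j]
        total += w * phi(x)
    return total

def h_path_product(p):  # right-hand side of eq. (12)
    out = 1.0
    for path in PATHS:
        prod = 1.0
        for j in path:
            prod *= p[j]
        out *= 1 - prod
    return 1 - out

p = {j: 0.9 for j in range(1, 13)}  # illustrative common reliability
print(round(h_exact(p), 4), round(h_path_product(p), 4))
```

The exact value never exceeds the path-product bound, which is useful as a sanity check on both computations.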
### 2.6 Reliability importance applied to road networks.
To consider reliability importance we need to define what exactly it means for
the system to be functioning or failed. We assume that the criterion of the
system's functioning is the comfort and satisfaction of drivers. State 1 will
mean the driver's satisfaction with a given road section or route, and state 0
will mean dissatisfaction. For drivers, the measure of satisfaction is the
travel time on a given section of the road, and more precisely, completing the
section within the planned travel time. Drivers want to finish the journey in
the shortest possible time. Exceeding this time, i.e. a delay on a given road
section beyond a certain critical level, causes the drivers' dissatisfaction
with the journey. This critical level may be different for each driver and
behaves like a lifetime; the Weibull distribution is often used to represent
the lifetime of objects. A similar approach was used in the paper by
FanJiaTianYun2014:Weibul (FanJiaTianYun2014:Weibul), which considered a
situation where, if the waiting time before entering an intersection exceeded
a certain critical value, the driver stopped complying with traffic rules. As
here, this critical value was modeled by a Weibull distribution, whose
cumulative distribution function is given by the following formula:
$F(t)=\left\\{\begin{array}[]{ll}1-\exp\left\\{-\left(\frac{t}{\lambda}\right)^{k}\right\\},&\text{for
$t>0$,}\\\ 0,&\textrm{otherwise.}\\\ \end{array}\right.$
Based on the cumulative distribution function, it is possible to calculate the
reliability function, i.e. the function that gives the probability of correct
functioning of an object. We parametrize the segments by the delay time $t$
acceptable to the driver. The population of drivers is not homogeneous: the
acceptable delay is a random variable with some distribution $\Pi$. The delay
of travel $\tau$ is a consequence of various factors; let us assume that its
cumulative distribution is $F(t)$. We will say that the segment is reliable,
or works, for a given driver with accepted delay $t$ if the actual delay does
not exceed it, $\tau(\omega)\leq t$. The probability $Q(t)$ of this event is
the subjective driver reliability of the segment. Its expected value with
respect to $\Pi$, $p=\int_{0}^{\infty}Q(u)d\Pi(u)$, is the mean reliability of
the segment. For a homogeneous class of drivers the delay time $\xi$ on a
given road section is common to all drivers, so the (mean) reliability is the
probability that, for the assumed delay time, a driver is satisfied with the
journey. Therefore, according to the theory of _importance measures_, $p_{i}$,
the reliability of the segment, will be determined as the probability of the
driver's satisfaction, and $1-p_{i}$ will be the probability that the driver
is dissatisfied with the journey for the assumed delay time. It is determined
by the following formula:
$p_{i}=P(X_{i}=1|t=\xi)=1-P(X_{i}=0|t=\xi)=Q(\xi),$
where $\xi$ is the delay time, $X=1$ means the driver is satisfied with the
road, and $X=0$ that he is dissatisfied. The paper adopts the same Weibull
distribution parameters as the paper by FanJiaTianYun2014:Weibul
(FanJiaTianYun2014:Weibul), i.e. $\lambda$ = 30, $k=2.92$. Once we know the
dependence of street reliability on the delay time, we can define the
reliability of these route fragments for a given traffic intensity. Different
road sections react differently to increasing traffic intensity, which is why
their reliabilities will differ. Using simulations we determine the dependence
between traffic intensity and delay times.
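As a quick check, the probability of driver satisfaction for a delay $\xi$, taken here as the Weibull survival function $Q(\xi)=1-F(\xi)=\exp(-(\xi/\lambda)^{k})$ with $\lambda=30$, $k=2.92$, can be computed directly; the values below agree with the $Q(\xi)$ column of Table 4 to within rounding.

```python
import math

# Driver-satisfaction probability for a delay of xi seconds, taken as
# the Weibull survival function Q(xi) = 1 - F(xi) = exp(-(xi/lambda)^k)
# with lambda = 30, k = 2.92 (parameters from FanJiaTianYun2014:Weibul).

def Q(xi, lam=30.0, k=2.92):
    return math.exp(-((xi / lam) ** k))

for xi in (5, 16, 20, 25):
    print(xi, round(Q(xi), 4))
# agrees with the Q(xi) column of Table 4 up to rounding
```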
### 2.7 Continuing the example of Section 1.2
We will now briefly illustrate these definitions on a simple example. Let us
assume that we have a shortened road network scheme limited to the Piwna 1,
Zlotnickiego, Laska, and Sieradzka 1 streets. This scheme is presented in
Figure 3.
Figure 3: Short version of analysed scheme of traffic network.
Here the components $c_{4}$ and $c_{11}$ are in series, and the components
$c_{6}$ and $c_{10}$ are in parallel, so the system is a simple
series-parallel structure. On this basis, we can define the structure function
as $\phi(\vec{x})=x_{4}\cdot(1-(1-x_{6})\cdot(1-x_{10}))\cdot x_{11}$, and the
corresponding system reliability function
$h(\vec{p})=p_{4}\cdot(1-(1-p_{6})\cdot(1-p_{10}))\cdot p_{11}$.
To begin with, we assume that the reliabilities of the individual components
are unknown, so only structural importance measures can be calculated, based
on the definitions introduced in Section 2.4. Two structural measures were
presented there: the first, proposed by Birnbaum, assumes that the
reliabilities $p_{i}$ of the components $c_{i}$, for $i=1,2,\ldots,n$, are all
equal to $\frac{1}{2}$; the second, the Barlow and Proschan importance
measure, is defined by integrating over $p\in[0,1]$. Using these definitions,
equation (8) for the Birnbaum importance and equation (10) for the Barlow and
Proschan importance, we compute the importance of the analyzed components. The
obtained results are presented in Table 3.
Table 3: Structural importance of roads in the analyzed system. Id | Street name | Birnbaum Importance $B(i;\phi)$ | Barlow-Proschan Importance $I_{i}^{BP}(\phi)$
---|---|---|---
4 | Piwna 1 | 0.375 | 0.4167
6 | Zlotnickiego | 0.125 | 0.0833
10 | Laska | 0.125 | 0.0833
11 | Sieradzka 1 | 0.375 | 0.4167
We see that roads connected in series are more important than roads connected
in parallel. This agrees with intuition: if one of the parallel roads is
blocked, one can always choose a different route that still reaches the
destination, while for streets in a series connection this is not possible. We
also note differences between the values of the importance measures calculated
with the Birnbaum and the Barlow-Proschan definitions. This is because the
first measure is calculated for a constant component reliability equal to
$\frac{1}{2}$, so it only examines the relationship between element positions,
whereas the second measure takes into account, apart from the structure
itself, also the variability of the reliability of the individual elements.
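The values of Table 3 can be reproduced numerically from $h(\vec{p})=p_{4}(1-(1-p_{6})(1-p_{10}))p_{11}$: the Birnbaum structural importance as the finite difference of $h$ at $p=\frac{1}{2}$ (since $h$ is multilinear), and the Barlow-Proschan measure by integrating the same difference over a common $p$, here with a simple midpoint rule.

```python
# Reproducing Table 3 from h(p) = p4 * (1 - (1-p6)(1-p10)) * p11.
# Birnbaum (eq. (8)): finite difference of the multilinear h at p = 1/2.
# Barlow-Proschan (eq. (10)): the same difference integrated over p.

def h(p4, p6, p10, p11):
    return p4 * (1 - (1 - p6) * (1 - p10)) * p11

q = 0.5
B4 = h(1, q, q, q) - h(0, q, q, q)   # series component c4
B6 = h(q, 1, q, q) - h(q, 0, q, q)   # parallel component c6
print(B4, B6)  # 0.375 0.125, as in Table 3

steps = 20000
grid = [(k + 0.5) / steps for k in range(steps)]
I4 = sum(h(1, p, p, p) - h(0, p, p, p) for p in grid) / steps
I6 = sum(h(p, 1, p, p) - h(p, 0, p, p) for p in grid) / steps
print(round(I4, 4), round(I6, 4))  # 0.4167 0.0833
```

By the symmetry of the structure, $c_{11}$ matches $c_{4}$ and $c_{10}$ matches $c_{6}$, completing the table.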
Now we will examine the reliability importance measures for the simplified
system shown in Figure 3. Let us assume that, for a given traffic intensity,
we have certain delay times; on this basis we calculate the probability that
drivers are still satisfied with the travel along the road, i.e. the
reliability of the road. Next, for these values, we calculate the Birnbaum
importance measures using formula (7). The assumed delay times, together with
the corresponding reliabilities and importances, are presented in Table 4.
Table 4: Hypothetical calculations of importance measures in the example system. Id | Street name | Delay $\xi$ | Probability of satisfaction $Q(\xi)$ | Importance $B(i|p)$
---|---|---|---|---
4 | Piwna 1 | 25 s | 0.5559 | 0.8513
6 | Zlotnickiego | 20 s | 0.7363 | 0.0025
10 | Laska | 5 s | 0.9947 | 0.1249
11 | Sieradzka 1 | 16 s | 0.8526 | 0.5551
As we can see, with a road delay of about 25 seconds the likelihood of driver
satisfaction is close to $\frac{1}{2}$, while for delays of about 5 seconds
drivers hardly experience the negative effects of a slowdown in traffic. With
such street reliabilities and such a scheme, some observations are easy to
make: streets in a parallel position contribute less to potential nervousness
or driver satisfaction than those in a series connection; moreover, among
streets in series, those with lower reliability are more important, so these
should receive greater attention to maintain proper traffic quality. Among
streets in a parallel connection, the streets with greater reliability are
more important. This is logical: drivers, knowing which road is the better
way, will choose it, so it is important to keep it constantly in good
condition, because when it fails the whole connection loses much reliability.
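Similarly, the Birnbaum importances $B(i|p)=h(1_{i},p)-h(0_{i},p)$ of Table 4 can be recomputed from the reliabilities $Q(\xi)$; the small differences in the last decimal place come from rounding.

```python
import math

# Recomputing the Birnbaum importances of Table 4 as finite differences
# B(i|p) = h(1_i, p) - h(0_i, p), with the segment reliabilities Q(xi)
# obtained from the assumed delays (lambda = 30, k = 2.92).

def h(p4, p6, p10, p11):
    return p4 * (1 - (1 - p6) * (1 - p10)) * p11

def Q(xi, lam=30.0, k=2.92):
    return math.exp(-((xi / lam) ** k))

p4, p6, p10, p11 = Q(25), Q(20), Q(5), Q(16)
B = {
    4: h(1, p6, p10, p11) - h(0, p6, p10, p11),
    6: h(p4, 1, p10, p11) - h(p4, 0, p10, p11),
    10: h(p4, p6, 1, p11) - h(p4, p6, 0, p11),
    11: h(p4, p6, p10, 1) - h(p4, p6, p10, 0),
}
for i in (4, 6, 10, 11):
    print(i, round(B[i], 4))
# close to Table 4 (0.8513, 0.0025, 0.1249, 0.5551) up to rounding
```

The ordering $B(4)>B(11)>B(10)>B(6)$ confirms the discussion above: the less reliable series street dominates, and among the parallel pair the more reliable street matters more.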
### 2.8 Structural importance of real traffic network.
As presented in the previous sections, if the reliabilities of the individual
components are unknown, it is possible to use structural importance measures.
Therefore, we begin our considerations about the analyzed system by
calculating the structural importance of the individual roads. As in the
previous section, we use the importance measures introduced by Birnbaum and by
Barlow and Proschan, presented in Sections 2.3 and 2.4 by equations (8) and
(10), respectively. The obtained results are presented in Table 5.
Table 5: Structural importance of roads in the analyzed system. Id | Street name | Birnbaum Importance $B(i;\phi)$ | Barlow-Proschan Importance $I_{i}^{BP}(\phi)$
---|---|---|---
1 | Dolna | 0.0861 | 0.0973
2 | Zlota | 0.0861 | 0.0973
3 | Mickiewicza | 0.0577 | 0.0531
4 | Piwna 1 | 0.1155 | 0.1202
5 | Nyska 1 | 0.0284 | 0.0338
6 | Zlotnickiego | 0.0577 | 0.0531
7 | Piwna 2 | 0.0577 | 0.0531
8 | Jasna | 0.0577 | 0.0531
9 | Nyska 2 | 0.0861 | 0.0973
10 | Laska | 0.0577 | 0.0531
11 | Sieradzka 1 | 0.1439 | 0.1882
12 | Sieradzka 2 | 0.2016 | 0.3690
We see that the results of both measures are similar. As expected, the most
important street for the entire route is Sieradzka 2, because each route
ultimately leads along this street; for it the B-P importance is larger than
the B-importance. The next most important part of the route is Sieradzka 1: 3
of the 4 ways to reach point $B$ go along this street. For this street
Birnbaum's value is smaller than the Barlow-Proschan value. The importance of
Piwna 1 is the last value above $0.1$; close to this value are the importances
of Dolna, Zlota, and Nyska 2. That the significance of Zlota and Dolna would
be close to the value calculated for Piwna 1 was not difficult to guess;
however, the similarity of the importance of Nyska 2 to Dolna and Zlota is
less obvious. The Mickiewicza, Zlotnickiego, Piwna 2, Jasna, Laska, and Nyska
1 streets make the smallest contribution to the proper functioning of the
connections between $A$ and $B$.
## 3 Traffic modelling
### 3.1 Review and history.
Traffic modeling is a particularly complex issue, covering both the modeling
of individual phenomena occurring on roads and of entire road networks. The
first research into vehicle movement and traffic modeling theory began with
the work of Bruce D. Greenshields (greens). On the basis of photographic
measurement methods, he proposed basic empirical relationships between flow,
density, and speed in vehicle traffic. Next, LigWhi1955:Kinematic
(LigWhi1955:Kinematic) and Rich1956:Shock (Rich1956:Shock) introduced the
first theory of traffic flow, presenting a model based on the analogy between
vehicles in traffic and fluid particles. Interest in this field has increased
significantly since the nineties, mainly due to the rapid development of road
traffic. As a result, many models were created describing various aspects of
road traffic. Depending on the level of detail, we can distinguish:
* •
microscopic models
* •
mesoscopic models
* •
macroscopic models
The differences between the models lie in the level of aggregation of the
modeled elements. Mesoscopic models are based mainly on gas-kinetic models.
Macroscopic models are based on first- and second-order differential equations
derived from the Lighthill-Whitham-Richards (LWR) theory. Microscopic models
focus on the simulation of individual vehicles and their interactions; among
the most popular are car-following models and cellular automata models, the
latter being used in this paper. The most popular cellular automata traffic
model is the Nagel-Schreckenberg (nasch) model, but a very interesting model
is also the LAI model (cf. LarAlvIca2010:Cellular (LarAlvIca2010:Cellular)),
which is more advanced than the NaSch model. The LAI model is used in this
paper; therefore, the next section introduces the theory of cellular automata,
followed by a more detailed description of the LAI model.
### 3.2 Cellular automaton
Janos von Neumann, a Hungarian scientist working at Princeton, is the creator
of cellular automata theory. The development of this area was also
significantly influenced by the Lviv mathematician Stanislaw Ulam, who is
responsible for discretizing the time and space of automata and is considered
the author of the description of cellular automata as "imaginary
physics" [automaty3]. According to the book by ksiazkaautomaty, cellular
automata can reliably reflect many complex phenomena using simple rules and
local interactions. Cellular automata are a network of identical cells, each
of which can take one specific state, with the number of states being
arbitrarily large but finite. The processes of changing the states of the
cells run in parallel and according to rules, which usually depend on the
current state of the cell and the states of neighboring cells. From the
mathematical point of view, cellular automata are defined by the following
parameters [automaty2] [automaty1]:
* •
State space — a finite, $k$-element set of values defined for each individual
cell.
* •
Cell grid — discrete, $D$-dimensional space divided into identical cells, each
of which at a given time $t_{h}$ has one, strictly defined state of all
possible $k$ states. In the case of the $2D$ network, the cell status at $i$,
$j$ is indicated by the symbol $\sigma_{ij}$.
* •
Neighborhood — parameter determining the states of the nearest neighbors of a
given cell $ij$, marked with the symbol $N_{ij}$.
* •
Transition rules — rules determining the cell state in a discrete time
$t_{h+1}$ depending on the current state of this cell and the states of
neighboring cells. The state of the cell in the next step is presented in the
following relationship:
$\sigma_{ij}(t_{h+1})=F\left(\sigma_{ij}(t_{h}),N_{ij}(t_{h})\right),$
where:
$\sigma_{ij}(t_{h+1})$ — cell state in position $i$,$j$ in step $t_{h+1}$,
$\sigma_{ij}(t_{h})$ — cell state in position $i$,$j$ in step $t_{h}$,
$N_{ij}(t_{h})$ — cells in the neighborhood of a cell in position $i$,$j$ in
step $t_{h}$.
The way the cell neighborhood is defined has a significant impact on the
calculation results. The most common are two types:
* •
Von Neumann neighborhood Each cell is surrounded by four neighbors,
immediately adjacent to each side of the cell being analyzed. The neighborhood
for $i,j$ constructed in this way is as follows:
$N_{i,j}(t_{h})=\left(\begin{array}[]{ccc}&\sigma_{i-1,j}(t_{h})&\\\\[4.30554pt]
\sigma_{i,j-1}(t_{h})&\mathbf{\sigma_{i,j}(t_{h})}&\sigma_{i,j+1}(t_{h})\\\\[4.30554pt]
&\sigma_{i+1,j}(t_{h})&\end{array}\right)$
* •
Moore neighborhood Each cell is surrounded by eight neighbors, four directly
adjacent to the sides of the analyzed cell, and four on the corners of the
analyzed cell. The neighbor cell matrix for $i$, $j$ looks like this:
$N_{i,j}(t_{h})=\left(\begin{array}[]{ccc}\sigma_{i-1,j-1}(t_{h})&\sigma_{i-1,j}(t_{h})&\sigma_{i-1,j+1}(t_{h})\\\\[4.30554pt]
\sigma_{i,j-1}(t_{h})&\mathbf{\sigma_{i,j}(t_{h})}&\sigma_{i,j+1}(t_{h})\\\\[4.30554pt]
\sigma_{i+1,j-1}(t_{h})&\sigma_{i+1,j}(t_{h})&\sigma_{i+1,j+1}(t_{h})\end{array}\right)$
There are also modifications of the above types, such as the combined
neighborhood of Moore and von Neumann, numerous modifications of the Moore
neighborhood itself, and a different scheme defined by Margolus to simulate
falling sand.
In addition, boundary conditions are an important aspect of cellular automata
theory. Since it is impossible to produce an infinite cellular automaton, some
simulations would otherwise be impossible, because with the end of the
automaton's grid the history of a given object or group of objects would end.
For this purpose, boundary conditions at the ends of the grid were introduced.
There are the following types of boundary conditions:
* •
periodic boundaries — cells at the edge of the grid take as their missing
neighbors the cells on the opposite side. In this way, the continuity of
traffic and of the ongoing processes is ensured.
* •
open boundaries — elements extending beyond the boundaries of the grid cease
to exist. This is used when new objects are constantly generated, which
prevents too high density of objects on the grid.
* •
reflective boundaries — on the edge of the automaton a border is created, from
which the simulated objects are reflected, most often it serves to imitate the
movement of particles in closed rooms.
In the next section the cellular automaton model used in the simulation will
be presented. Open boundary conditions are used in our simulations: after
leaving the street, vehicles disappear. This is in line with reality, as new
vehicles constantly appear on and disappear from the roads. The applied
neighborhood is a modified version of the presented neighborhoods, because a
vehicle takes as its neighbor the nearest vehicle ahead on the road, even if
it is not directly adjacent to the analyzed vehicle, and cars can move by more
than one cell. We can regard it as an extended version of the von Neumann
neighborhood.
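To make the update scheme concrete, the sketch below implements a minimal single-lane automaton step in the spirit of the Nagel-Schreckenberg model, with open boundaries and the nearest vehicle ahead as the neighborhood. It is a simplified stand-in, not the LAI model of the next section: cars occupy one cell, speeds change by one cell per step, and the parameter names are our assumptions.

```python
import random

# Minimal single-lane CA step (NaSch-style): accelerate, keep a
# collision-free distance to the nearest vehicle ahead, randomly slow
# down, then move.  Open boundary: vehicles past the end disappear.

def step(road, v_max=5, p_slow=0.3, rng=random):
    """road[i] is a vehicle speed or None; returns the next configuration."""
    L = len(road)
    new = [None] * L
    for i, v in enumerate(road):
        if v is None:
            continue
        gap = 1                          # cells to the nearest vehicle ahead
        while i + gap < L and road[i + gap] is None:
            gap += 1
        blocked = i + gap < L            # False: road ahead is free
        v = min(v + 1, v_max)            # 1. accelerate
        if blocked:
            v = min(v, gap - 1)          # 2. keep a collision-free distance
        if v > 0 and rng.random() < p_slow:
            v -= 1                       # 3. random slowdown
        if i + v < L:                    # 4. move; open boundary:
            new[i + v] = v               #    vehicles past the end vanish
    return new

road = [None] * 20
road[0], road[6] = 2, 0                  # two vehicles with speeds 2 and 0
print(step(road, rng=random.Random(1)))
```

With `p_slow=0` the update is deterministic, which makes the rule easy to trace by hand for small configurations.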
### 3.3 The vehicles' movement.
In order to define vehicle traffic rules and simulate their movement, the
model proposed by LarAlvIca2010:Cellular (LarAlvIca2010:Cellular) was used.
The model reproduces the general behavior of vehicles on the road: drivers
with free space ahead travel at maximum speed; approaching another vehicle,
drivers react to changes in its speed, keeping enough space for
collision-free braking. This model is often called the LAI model, from the
authors' names. This part of the work contains a description of this model,
together with comments on possible assumptions.
The model describes traffic flow on a single-lane road where vehicles move
from left to right. The road is divided into 2.5-meter sections, each
represented as a separate cell. The length of a car is taken as 5 meters,
i.e. two cells. Each cell can be empty or occupied by part of exactly one
vehicle. The position of a vehicle is determined by the position of its front
bumper. Vehicles travel at speeds from 0 to $v_{max}$, expressed as the
number of cells a vehicle moves in one time step $t$. The time step
corresponds to one second. The conversion from simulation speed to real speed
is presented in Table 6.
Table 6: The relationship between real and simulation speeds in the model used. Velocity $v$ | Distance per step | Real speed (m/s) | Real speed (km/h)
---|---|---|---
1 | 2.5 m | 2.5 m/s | 9 km/h
2 | 5 m | 5 m/s | 18 km/h
3 | 7.5 m | 7.5 m/s | 27 km/h
4 | 10 m | 10 m/s | 36 km/h
5 | 12.5 m | 12.5 m/s | 45 km/h
6 | 15 m | 15 m/s | 54 km/h
7 | 17.5 m | 17.5 m/s | 63 km/h
The first column gives the velocity used in the model, the second the distance
covered in one time step (1 second), and the remaining columns the
corresponding real speed in m/s and km/h, to give a better idea of how the
model works. In the simulations we decided to use a maximum speed of 45 km/h,
because city traffic is considered, where drivers do not have much room for
fast driving.
The model takes into account the limited acceleration and braking capabilities
of vehicles and ensures appropriate distances between vehicles to guarantee
safe driving. Three distances are calculated for the car following its
predecessor. These values give the distance needed for safe driving when the
driver wants to slow down ($d_{dec}$), accelerate ($d_{acc}$) or maintain the
current speed ($d_{keep}$), assuming that the predecessor may suddenly start
braking with the maximum force $M$ until it stops. They are calculated as
follows:
$\displaystyle d_{acc}$
$\displaystyle=\max\Big{(}0,\sum^{\lfloor(v_{n}(t)+\Delta
v)/M\rfloor}_{i=0}\left[(v_{n}(t)+\Delta v)-i\cdot
M\right]-\sum^{\lfloor(v_{n+1}(t)-M)/M\rfloor}_{i=0}\left[(v_{n+1}(t)-M)-i\cdot
M\right]\Big{)}$ (13a) $\displaystyle d_{keep}$
$\displaystyle=\max\Big{(}0,\sum^{\lfloor
v_{n}(t)/M\rfloor}_{i=0}\left[v_{n}(t)-i\cdot
M\right]-\sum^{\lfloor(v_{n+1}(t)-M)/M\rfloor}_{i=0}\left[(v_{n+1}(t)-M)-i\cdot
M\right]\Big{)}$ (13b) $\displaystyle d_{dec}$
$\displaystyle=\max\Big{(}0,\sum^{\lfloor(v_{n}(t)-\Delta
v)/M\rfloor}_{i=0}\left[(v_{n}(t)-\Delta v)-i\cdot
M\right]-\sum^{\lfloor(v_{n+1}(t)-M)/M\rfloor}_{i=0}\left[(v_{n+1}(t)-M)-i\cdot
M\right]\Big{)}$ (13c)
Here, vehicle $n$ is the follower and $n+1$ is the preceding car. $v_{n}(t)$
denotes the velocity of vehicle $n$ at time $t$, $\Delta v$ is the possible
acceleration in one time step and $M$ is the maximum emergency-braking
deceleration.
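The three safe distances of Eqs. (13a)-(13c) share the same braking-distance sum, which the following Python sketch factors out. The function names are ours; speeds and distances are in cells and cells per second, as in the model.

```python
import math

def braking_distance(v, M):
    """Distance covered while braking from speed v to 0 with force M:
    the sum over i = 0 .. floor(v/M) of (v - i*M), as in Eqs. (13)."""
    if v <= 0:
        return 0
    return sum(v - i * M for i in range(math.floor(v / M) + 1))

def safe_distances(v_n, v_np1, dv, M):
    """Return (d_acc, d_keep, d_dec) for follower speed v_n and leader
    speed v_np1, assuming the leader brakes with maximum force M."""
    lead = braking_distance(v_np1 - M, M)   # second part of each equation
    d_acc = max(0, braking_distance(v_n + dv, M) - lead)
    d_keep = max(0, braking_distance(v_n, M) - lead)
    d_dec = max(0, braking_distance(v_n - dv, M) - lead)
    return d_acc, d_keep, d_dec
```

For example, with $v_{n}(t)=v_{n+1}(t)=3$, $\Delta v=1$ and $M=2$, the sketch yields $d_{acc}=5$, $d_{keep}=3$, $d_{dec}=1$, which also illustrates that $d_{acc}\geq d_{keep}\geq d_{dec}$.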
Updating vehicle traffic takes place in four steps, which are performed in
parallel for all vehicles.
1. I.
Calculation of safe distances $d_{dec_{n}}$, $d_{acc_{n}}$, $d_{keep_{n}}$.
2. II.
Calculation of the probability of slow acceleration.
3. III.
Speed update.
4. IV.
Updating position.
Safe distances. According to formulas (13), safe distances are computed for
each vehicle. The calculation is based on the assumption that if the vehicle
in the next time step $t+1$ increases its speed (or maintains it, or reduces
it, respectively) while the preceding driver from moment $t$ constantly slows
down to speed 0 (with the maximum emergency-braking deceleration), there will
be no collision. The equations differ only in their first part, which defines
the distance traveled by vehicle $n$ if it decelerates
($v_{n}(t+1)=v_{n}(t)-\Delta v$), keeps its velocity ($v_{n}(t+1)=v_{n}(t)$)
or accelerates ($v_{n}(t+1)=v_{n}(t)+\Delta v$) in the next time step and then
begins to brake rapidly. The second part of each equation gives the distance
traveled by the preceding vehicle if it starts braking with maximum force $M$.
Calculation of the probability of slow acceleration. The second step of the
vehicle movement procedure is the calculation of the stochastic parameter
$R_{a}$, responsible for slowing down vehicle acceleration. It is assumed that
low-speed vehicles have more trouble accelerating. Both human nature and the
mechanics of a car make it true that the faster we go, the easier it is to
accelerate, while standing still or driving very slowly causes slower
acceleration. The limiting speed above which acceleration comes easier is
assumed to be 3, which corresponds to 27 km/h. The value of the $R_{a}$
parameter is calculated from the formula
$R_{a}=\min(R_{d},R_{0}+v_{n}(t)\cdot(R_{d}-R_{0})/v_{s}),$ (14)
where $R_{0}$ and $R_{d}$ are fixed stochastic parameters denoting,
respectively, the probability of accelerating when the speed equals 0 and when
the speed is at least $v_{s}$, and $v_{s}$ is the limit speed below which
acceleration is harder.
It is easy to see that the probability of acceleration is interpolated
linearly between its value at speed 0 and its value at speeds above the limit,
an idea also presented in [Ra_ref]. In the simulations, 0.8 and 1 were adopted
as the $R_{0}$ and $R_{d}$ parameters, respectively, which does not cause
frequent difficulties in accelerating while still retaining the stochastic
nature of the process. The graph of the $R_{a}$ parameter for the adopted
values is presented in Figure 4.
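Eq. (14) is a one-line computation. The sketch below uses the adopted values $R_{0}=0.8$, $R_{d}=1$ and $v_{s}=3$ as defaults; the function name is ours.

```python
def acceleration_probability(v, R0=0.8, Rd=1.0, vs=3):
    """Eq. (14): linear interpolation of the acceleration probability
    between R0 (at speed 0) and Rd (at speeds >= vs)."""
    return min(Rd, R0 + v * (Rd - R0) / vs)
```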
Figure 4: Values of $R_{a}$ parameters for fixed $R_{0}$, $R_{d}$ and $v_{s}$
[LarAlvIca2010:Cellular].
Speed update. As mentioned before, $\Delta v$ is the speed increase in one
time step, fixed for all vehicles, and $v_{n}(t)$ and $x_{n}(t)$ denote the
velocity and position of vehicle $n$ at time $t$. The distance from vehicle
$n$ to vehicle $n+1$ is computed by the formula
$d_{n}(t)=x_{n+1}(t)-x_{n}(t)-l_{s},$
which is exactly the distance from the front bumper of vehicle $n$, given by
$x_{n}(t)$, to the rear bumper of the vehicle in front, obtained as the
difference between the position of that vehicle's front bumper $x_{n+1}(t)$
and the vehicle length $l_{s}$ (in cells). The speed update is done in four
steps, the order of which does not matter.
1. 1.
Acceleration. If the distance to the preceding vehicle is greater than
$d_{acc_{n}}$, then vehicle $n$ increases its velocity by $\Delta v$ with
probability $R_{a}$:
$v_{n}(t+1)=\left\\{\begin{array}[]{ll}\min(v_{n}(t)+\Delta
v,v_{max}),&\textrm{with prob. $R_{a}$}\\\ v_{n}(t),&\textrm{otherwise}\\\
\end{array}\right.$
This rule assumes that all drivers strive to reach the maximum velocity
whenever possible. It includes the irregular ability to accelerate, which
depends on the distance to the preceding vehicle, the velocities of both
vehicles, and the stochastic parameter responsible for slower acceleration
defined in Step II.
2. 2.
Random slowing down. This rule lets drivers maintain the current speed if it
allows safe driving, and also accounts for traffic disturbances, an
indispensable element of traffic flow. The probability of such random events
is given by the $R_{s}$ parameter. If $d_{acc_{n}}>d_{n}(t)\geq
d_{keep_{n}}$, then the updated speed is determined according to the formula
$v_{n}(t+1)=\left\\{\begin{array}[]{ll}\max(v_{n}(t)-\Delta v,0),&\textrm{with
prob. $R_{s}$}\\\ v_{n}(t),&\textrm{otherwise}\\\ \end{array}\right.$
3. 3.
Braking. This rule ensures that drivers keep an adequate distance from the
vehicles in front. Rapid braking is not desirable, so to ensure a moderate
braking process when the free space in front of the car is too small, the
vehicle speed is reduced by $\Delta v$, which reflects optimal braking.
$v_{n}(t+1)=\max(v_{n}(t)-\Delta v,0)\quad\textrm{if }\quad
d_{keep_{n}}>d_{n}(t)\geq d_{dec_{n}}$
4. 4.
Emergency braking. As in real life, it is not always possible to brake
calmly; road situations often force more aggressive braking. Such situations
are covered by this rule. When the driver gets too close to the car ahead, or
when that car brakes too hard, emergency braking is forced. If the distance is
at least $d_{dec}$, this rule is not applied. According to the commonly
accepted standard proposed in the literature (see [LarAlvIca2010:Cellular]),
the emergency braking force is set to $-5$ m/s$^2$. With respect to the
assumed model parameters, the value of $M$ is 2. This step is described by
$v_{n}(t+1)=\max(v_{n}(t)-M,0)\quad\textrm{if }\quad d_{n}(t)<d_{dec_{n}}$
Updating position. Finally, with the updated vehicle speeds, it is possible to
update the vehicle positions. The vehicles are moved by a number of cells
equal to their speed. This is described by
$x_{n}(t+1)=x_{n}(t)+v_{n}(t+1),$
where $x_{n}(t+1)$ is the updated position, $v_{n}(t+1)$ is the previously
determined vehicle speed, and $x_{n}(t)$ is the last position of the vehicle.
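The four speed-update rules and the position update can be sketched as follows. This is a minimal Python illustration, not the original MATLAB program: the defaults $\Delta v=1$, $v_{max}=5$, $M=2$ follow the text, the default values of `R_a` and `R_s` are placeholders, and the acceleration condition is taken as $d_{n}(t)\geq d_{acc_{n}}$ so that the four cases are complementary.

```python
import random

def update_speed(v_n, gap, d_acc, d_keep, d_dec,
                 dv=1, v_max=5, M=2, R_a=0.9, R_s=0.1, rng=random):
    """One speed update for follower n (rules 1-4).
    gap = d_n(t) = x_{n+1}(t) - x_n(t) - l_s."""
    if gap >= d_acc:                       # 1. acceleration
        if rng.random() < R_a:
            return min(v_n + dv, v_max)
        return v_n
    if gap >= d_keep:                      # 2. random slowing down
        if rng.random() < R_s:
            return max(v_n - dv, 0)
        return v_n
    if gap >= d_dec:                       # 3. (moderate) braking
        return max(v_n - dv, 0)
    return max(v_n - M, 0)                 # 4. emergency braking

def update_position(x_n, v_new):
    """x_n(t+1) = x_n(t) + v_n(t+1)."""
    return x_n + v_new
```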
### 3.4 Intersections
Intersections are an inseparable element of road traffic: an intersection is a
place where roads cross at one level. All connections and crossroads also
count as intersections. There are the following types of intersections:
* •
uncontrolled intersections
* •
intersections with traffic signs
* •
crossings with controlled traffic (traffic lights or authorized person)
Modeling of traffic at intersections is an important element of road traffic
modeling, and many models have been created for it, such as models simulating
the movement of vehicles at T-shaped intersections [T_shape1], models
describing the movement at un-signalized intersections as in [intersection1]
and [intersections2], and models considering traffic at intersections with
traffic lights [signalized1]. Typically, these models consist of two aspects:
modeling vehicle traffic and modeling interactions at intersections. General
rules are set for intersections, but the behavior of drivers who may or may
not comply with these rules is also taken into account. The modeling of such
behavior differs between models and usually distinguishes them. This aspect
was considered in my engineering thesis [SzaWlo2020:Divers]. Game theory is
helpful in modeling interactions at intersections, as it facilitates the
decision about the right of way, with the players being the drivers in
conflict at the intersection; examples of such use can be seen in [gamet1].
Additionally, signalized intersection models using Markov chains are often
used, as in [markov1].
However, the purpose of this work is simple modeling of road traffic, so
advanced intersection modeling methods are not considered. It is assumed that
all drivers comply with traffic regulations and drive safely. The consequences
of changing behavior to incorrect and inconsistent with traffic rules are not
investigated; the purpose of the work is to find the elements that pose a
potential threat, affecting drivers' reluctance to comply with traffic rules.
Depending on the maneuver performed by the drivers and the type of
intersection, the following situations need to be modeled:
* •
turn right from the road without right of way
* •
turn right from the road with right of way
* •
turn left from the road with right of way
* •
turn left from the road without right of way
* •
go straight ahead at traffic lights
* •
turn left at traffic lights
When modeling the above situations, two basic rules were used:
Rule 1
a driver who wants to join the traffic on the main road can do so if and only
if, during the whole process, until the maximum speed is reached, he does not
disturb the driving of other vehicles on the main road. This maneuver may be
described by the following formula:
$l_{x}-v_{x}-\sum_{\Delta v=2}^{v_{max}}\min(v_{max},v_{x}+\Delta
v-1)+\sum_{\Delta v=2}^{v_{max}}\Delta v>d_{keep_{x}},$
where $l_{x}$ is the distance of the vehicle on the main road to the
intersection and $v_{x}$ is its current speed. The first sum is the distance
traveled by the vehicle on the main road until the joining vehicle reaches
maximum speed. The second sum is the distance traveled by the vehicle joining
the traffic until it reaches maximum speed, assuming that in the first second
the joining vehicle is at the intersection with speed 1. Both vehicles
increase their speed by 1 each second and do not exceed the maximum speed. The
value of the left-hand side of the inequality must be greater than the
distance the driver on the main road needs to maintain his speed; otherwise,
that driver would be forced to brake, which would disturb his movement.
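Rule 1 can be checked by evaluating the inequality literally. The Python sketch below implements the formula as written (function and parameter names are ours); with $v_{max}=5$ the two sums are constants, so the check reduces to comparing $l_{x}-v_{x}$ against a fixed offset.

```python
def can_join_main_road(l_x, v_x, d_keep_x, v_max=5):
    """Rule 1: the joining maneuver is allowed if
    l_x - v_x - sum_{dv=2..v_max} min(v_max, v_x + dv - 1)
        + sum_{dv=2..v_max} dv > d_keep_x."""
    main = sum(min(v_max, v_x + dv - 1) for dv in range(2, v_max + 1))
    joining = sum(dv for dv in range(2, v_max + 1))
    return l_x - v_x - main + joining > d_keep_x
```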
Rule 2
A driver wanting to cross the opposite-direction road can do so if, during the
time needed to complete the maneuver, there is no collision with oncoming
traffic and no oncoming driver is forced to brake. The time it takes to
complete the maneuver depends on the initial speed at its start. This
relationship is described in Table 7.
Table 7: Relationship between the time of crossing of the opposite road and the initial speed. Velocity $v_{n}$ | Need time $\tau_{n}$
---|---
2 | 1s
1 | 2s
0 | 3s
The condition ensuring the correct execution of the maneuver can be described
by the following inequality:
$l_{x}-\sum_{\Delta v=0}^{\tau_{n}}\min(v_{max},v_{x}+\Delta
v-1)>d_{dec_{x}},$
where $l_{x}$ is the distance of the oncoming car to the intersection and the
sum gives the distance traveled by this car in the time needed to complete the
turn. The value of the left-hand side of the inequality must be greater than
the distance the oncoming vehicle needs for safe braking. Otherwise, the
driver would be forced into emergency braking, which is not desirable and, in
the event of a delayed reaction, could lead to an accident.
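Rule 2 can likewise be checked literally. The sketch assumes, per Table 7, that the crossing time is $\tau_{n}=3-v_{n}$ seconds for an initial speed $v_{n}\in\{0,1,2\}$; the names are ours.

```python
def can_cross_opposite(l_x, v_x, d_dec_x, v_start, v_max=5):
    """Rule 2: the oncoming car at distance l_x with speed v_x must still
    keep at least d_dec_x after the tau-second crossing maneuver."""
    tau = 3 - v_start          # Table 7: v=2 -> 1 s, v=1 -> 2 s, v=0 -> 3 s
    travelled = sum(min(v_max, v_x + dv - 1) for dv in range(0, tau + 1))
    return l_x - travelled > d_dec_x
```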
The above rules are the basis used to define behavior at intersections; more
information on where and why each rule was applied is given in Section 4.1.
In addition, modeling of traffic lights was needed. The traffic light scheme
follows the Polish regulations, and the traffic light cycle is shown in
Figure 5.
Figure 5: Traffic light cycle.
The duration and meaning of individual signals are as follows:
1. 1.
Red light — no entry beyond the signal. The duration is 60 seconds.
2. 2.
Red and yellow light — indicates that the green signal will appear in a
moment. According to the regulations, it lasts 1 s.
3. 3.
Green light — allows entry past the signal if it is possible to continue
driving without causing a road safety hazard. The duration is the same as for
the red signal, 60 s.
4. 4.
Yellow light — does not allow entry past the signal, unless stopping the
vehicle would require emergency braking. According to the regulations, it
should last at least 3 seconds.
This traffic light cycle and these signal durations were adopted in the
simulation. Of course, there is a relationship between the capacity of an
intersection and the length of the traffic light cycle; however, the most
standard signaling scheme was adopted to ensure typical intersection capacity.
In addition, it was assumed that both directions of travel are equivalent,
which is why the cycle is the same on both roads.
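The adopted cycle (red 60 s, red-yellow 1 s, green 60 s, yellow 3 s, 124 s in total) can be encoded as a simple state machine; the phase names are ours.

```python
def signal_at(t):
    """Phase of the adopted 124-second cycle at second t."""
    t = t % 124
    if t < 60:
        return "red"          # 60 s
    if t < 61:
        return "red-yellow"   # 1 s
    if t < 121:
        return "green"        # 60 s
    return "yellow"           # 3 s
```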
In accordance with the traffic modeling theory described above and the
proposed method of handling traffic at intersections, traffic simulations
were performed in MATLAB for each street in the diagram in Figure 1. Real and
simulated street sizes are presented in Section 4.1 in the next chapter. In
each simulation, the time each vehicle needed from the beginning of the road
to leaving the intersection at its end was calculated. The simulations were
carried out many times for different probabilities of a new driver appearing
on the road, which in what follows is taken as the traffic intensity.
### 3.5 Model calibration
An important aspect of traffic modeling, which we could not fail to mention in
this chapter, is model calibration. In general, this topic is part of a larger
problem, namely simulation optimization, an area that in recent years has
enjoyed great interest among researchers and practitioners. Simulation
optimization is the pursuit of the maximum performance of a simulated real
system: the system performance is assessed on the basis of the simulation
results, and the model parameters are the decision variables. The assessed
performance in this case is the model's ability to recreate reality. This is
therefore a very important topic in traffic modeling, whose aim is to
reconstruct real vehicle traffic, so the model parameters must be chosen so
that the model is reliable and correctly reproduces the characteristics of the
modeled behavior. Optimization of traffic simulation models has developed in
many directions, but traffic optimization and calibration were not often
combined with optimization theory, where some of the problems arising in
traffic modeling are well known. One of the most important conclusions is that
there is no algorithm suitable for all problems and needs, and that the choice
of the right algorithm depends on the example being examined (see
[Spall2006]). Most studies have focused on testing the performance of the
optimization algorithm, with models evaluated against actual traffic data,
e.g. [Hollander2008]. However, based on real traffic data, it is not possible
to evaluate the effectiveness of the algorithm and of the entire calibration
process. Another approach proposed in the literature is the use of synthetic
measurements, i.e. data obtained from the model itself. This approach was
proposed e.g. in [Ossen2008]; [ciuffo] tested changes in model calibration
caused by errors in synthetic vehicle trajectories, using tests with synthetic
data to configure, by trial and error, the calibration process for microscopic
traffic models.
## 4 Simulation
### 4.1 Description of real traffic network
Using the models proposed in Section 3, a simulation of vehicle movement was
performed for each street presented in Figure 1. The model of vehicle traffic
along a straight road is presented in Section 3.3; modeling the movement
between streets was more complicated. Section 3.4 describes the general rules
needed to define traffic at intersections, since various maneuvers had to be
simulated. In addition, the actual road lengths were converted into simulation
values to reflect the road traffic as faithfully as possible. Table 8 gives
the real and simulated road lengths and the maneuvers to be performed on each
road section.
Table 8: Description and base information on analysed roads. Id | Street name | Length in meters | Length in cells | Intersections and turning
---|---|---|---|---
1 | Dolna | 300 m | 120 | Give way on Zlota and turn right
2 | Zlota | 350 m | 140 | Give way to oncoming vehicles and turn left or go straight
3 | Mickiewicza | 500 m | 200 | Turn right with right of way
4 | Piwna 1 | 450 m | 180 | Give way to oncoming vehicles and turn left or go straight
5 | Nyska 1 | 160 m | 64 | Go straight ahead with right of way
6 | Zlotnickiego | 500 m | 200 | Give way on Nyska 1 and turn right
7 | Piwna 2 | 180 m | 72 | Give way to vehicles on the main road and turn left
8 | Jasna | 400 m | 160 | Give way to vehicles on the main road and turn left
9 | Nyska 2 | 200 m | 80 | Give way to oncoming vehicles and turn left at the intersection
10 | Laska | 500 m | 200 | Go straight ahead, but wait at the traffic lights
11 | Sieradzka 1 | 500 m | 200 | Go straight ahead with right of way
12 | Sieradzka 2 | 500 m | 200 | Go straight ahead to the end of the road
In the case of Nyska 1, Sieradzka 1, and Sieradzka 2 streets, drivers pass
through the given section with priority, driving straight ahead.
For the Dolna and Zlotnickiego roads, drivers join the traffic on the main
road from a road without the right of way. Rule 1 was applied, assuming that
when approaching the intersection drivers must slow down to a speed of 0 or 1
and then decide according to the condition described.
At Mickiewicza Street, at the end of the road, the driver is forced to slow
down to speed 2, which corresponds to a real speed of 18 km/h; we can assume
this is a reasonable speed for making a turn. After decelerating, drivers can
leave the intersection.
For Zlota and Piwna 1 streets, drivers turn left with probability
$\frac{1}{3}$, otherwise they go straight. Before turning, drivers slow down
at least to speed 2; if they can cross the opposite-direction lane they
continue driving, and if not they slow down further. Since drivers must give
way to oncoming vehicles, Rule 2 applies.
In the case of Piwna 2 and Jasna streets, we assume that drivers slow down
before the intersection to 0 or 1 and turn right or left with probability
$\frac{1}{2}$ each. In both situations Rule 1 must be applied, because drivers
must give way to vehicles on the road they are turning into; in addition, in
the case of a left turn Rule 2 applies too, because the vehicle crosses the
opposite direction.
The last two traffic situations relate to intersections with traffic lights.
When driving along Laska Street, in the event of a red light drivers wait at
the intersection and may then leave it. The case where drivers would like to
turn left is not considered, because in real life a separate lane is intended
for the left turn. When leaving Nyska 2 Street, drivers may go right, left, or
straight. In the case of a right turn or going straight ahead, the process
goes without any problems, so we let drivers leave the intersection. When
turning left, one must give way to vehicles driving in the opposite direction,
so Rule 2 applies.
According to the above assumptions, the simulations were run and repeated 1000
times for each traffic intensity to obtain the average delay values as a
function of the intensity. Sample code and a description of the program are
given at the end of the work in Appendix LABEL:chap:appendixCode.
### 4.2 Simulations results
Traffic simulations were run with the characteristics described in the
previous section. The resulting road delays are shown in Figure 6.
Figure 6: Time of delays depend on traffic intensity for different roads.
We see that the delay growth characteristics differ between roads. It is easy
to see that two of the most difficult streets to travel are Jasna and
Piwna 2, which show high sensitivity to traffic intensity. Another group of
streets in terms of delays comprises Zlotnickiego and Dolna; Laska Street is
similar to them, although its growth characteristics are different. Nyska 2
behaves completely differently from the rest, but it is the only street with
such a complex intersection, including traffic lights. In this case, the delay
increases very quickly, reaching a critical level for this street related to
the capacity of the road. Therefore, even though its final delay is not the
largest, the efficiency of this intersection can be considered the worst. The
next, but definitely more efficient, streets are Piwna 1 and Zlota, and the
most fluid traffic can be seen on the last four streets, where there are no
intersections or traffic disturbances. In addition, both Sieradzka streets
have the same delay times, because both are streets without intersections and
they have the same length.
Next, using the approach introduced in Section 2.6 and the calculated delay
times, it is possible to determine the probability of driver satisfaction with
a given section of the route. The undesirable event is exceeding a certain
critical delay, which causes driver dissatisfaction. The probability that the
critical value for a given delay is not exceeded is described by the
reliability function of the Weibull distribution. Using this, the probability
of driver satisfaction for a given delay on each road is calculated as a
function of traffic intensity. These probabilities are presented in Figure 7.
Figure 7: Probabilities of drivers’ satisfaction vs. traffic intensity for
different roads.
We see that the reliability of the individual streets differs. Some of the
streets reach a critical state already at low traffic intensity, which is
certain to cause driver dissatisfaction (these are, e.g., Nyska 2, Piwna 2,
Jasna, Laska, Dolna, and Zlotnickiego streets). We can also observe streets
such as Nyska 1, where traffic flows constantly and does not irritate drivers.
This figure confirms that individual streets indeed react differently to
increasing traffic. It is therefore worth examining how these changes affect
the overall functioning of the traffic network and the importance of its
individual fragments.
### 4.3 Values of Importance Measures
To begin with, we calculate the reliability of the entire system and of each
route as functions of traffic intensity. In this way, we obtain the
probability of finishing the journey with satisfaction for the network and for
all routes. To calculate the reliability values, the individual road
reliabilities $p_{i}$, for $i=1,2,\ldots,12$, are substituted into formula
(12). In addition, we calculate the reliability of the individual routes,
which correspond to the minimal paths; the elements of each route are
connected in series. The routes include the following streets:
Route 1:
Dolna, Zlota, Mickiewicza, Jasna, Sieradzka 2
Route 2:
Dolna, Zlota, Nyska 1, Nyska 2, Sieradzka 1, Sieradzka 2
Route 3:
Piwna 1, Zlotnickiego, Nyska 2, Sieradzka 1, Sieradzka 2
Route 4:
Piwna 1, Piwna 2, Laska, Sieradzka 1, Sieradzka 2
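The network reliability can be evaluated from the four minimal paths listed above. The Python sketch below (an illustration, not the document's MATLAB code or formula (12) itself) enumerates all $2^{12}$ component states exactly, using the street ids of Table 8; exact enumeration handles the streets shared between routes correctly, which a naive product over route reliabilities would not.

```python
from itertools import product

ROUTES = {                      # street ids per minimal path (Table 8)
    1: [1, 2, 3, 8, 12],        # Dolna, Zlota, Mickiewicza, Jasna, Sieradzka 2
    2: [1, 2, 5, 9, 11, 12],    # ... Nyska 1, Nyska 2, Sieradzka 1, Sieradzka 2
    3: [4, 6, 9, 11, 12],       # Piwna 1, Zlotnickiego, Nyska 2, ...
    4: [4, 7, 10, 11, 12],      # Piwna 1, Piwna 2, Laska, ...
}

def system_reliability(p):
    """Exact A-to-B network reliability: enumerate all component states;
    the system works if at least one minimal path works.
    p: dict mapping street id -> reliability."""
    ids = sorted(p)
    total = 0.0
    for states in product((0, 1), repeat=len(ids)):
        up = dict(zip(ids, states))
        if any(all(up[i] for i in route) for route in ROUTES.values()):
            prob = 1.0
            for i in ids:
                prob *= p[i] if up[i] else 1 - p[i]
            total += prob
    return total
```

Note, for example, that setting the reliability of Sieradzka 2 (id 12) to zero drives the whole system to zero, since it lies on every route.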
Table 9: The probabilities of comfortable driving between points $A$ and $B$ vs. various routes and different traffic intensity. Traffic intensity | All routes | Route 1 | Route 2 | Route 3 | Route 4
---|---|---|---|---|---
0.050 | 1.0000 | 0.9974 | 0.9973 | 0.9948 | 0.9948
0.075 | 1.0000 | 0.9989 | 0.9970 | 0.9961 | 0.9969
0.100 | 1.0000 | 0.9855 | 0.9598 | 0.9574 | 0.9807
0.125 | 0.9963 | 0.8737 | 0.5440 | 0.5422 | 0.8596
0.150 | 0.1396 | 0.0841 | 0.0006 | 0.0006 | 0.0595
0.175 | 0 | 0 | 0 | 0 | 0
0.200 | 0 | 0 | 0 | 0 | 0
$\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$
0.600 | 0 | 0 | 0 | 0 | 0
The calculated reliability values are presented in Table 9. We can see that
the system is no longer efficient at a traffic intensity of 0.175. In
addition, we see that the reliability of the system is always greater than
that of any individual route. This is important information concerning the
critical value of traffic intensity that causes failure of the entire
network. We can also see that the capacity of Routes 1 and 4 is greater, which
suggests that under heavy traffic it is better to choose one of these two
routes for a better chance of a quiet ride.
We now proceed to calculate the importance of the individual roads for the
functioning of the entire system. The calculated values are shown in Table 10:
for each street, the value of the importance measure at a given traffic
intensity is presented. As mentioned before, these values are calculated on
the basis of the structure function (12) and the importance measure introduced
by Birnbaum (7), using the reliabilities obtained for the individual traffic
intensities.
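The Birnbaum measure used here is $I_{B}(i)=h(p\mid p_{i}=1)-h(p\mid p_{i}=0)$, where $h$ is the system reliability function of formula (12). Since $h$ is defined earlier in the work, the sketch below shows only the generic computation, applied to a small illustrative series-parallel structure; the example structure and all names are ours, not the road network itself.

```python
def birnbaum_importance(h, p, i):
    """Birnbaum measure I_B(i) = h(p | p_i = 1) - h(p | p_i = 0),
    for a reliability function h taking a dict of component reliabilities."""
    hi = dict(p); hi[i] = 1.0
    lo = dict(p); lo[i] = 0.0
    return h(hi) - h(lo)

def h_example(p):
    """Toy structure: components 1 and 2 in series, in parallel with 3."""
    return 1 - (1 - p[1] * p[2]) * (1 - p[3])
```

For $p_{1}=p_{2}=0.9$ and $p_{3}=0.5$, this gives $I_{B}(3)=0.19$ and $I_{B}(1)=0.45$, showing how the measure singles out the component whose failure would hurt the system most.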
Table 10: The importance of route elements for different traffic intensities. Id | Street name | 0.050 | 0.075 | 0.100 | 0.125 | 0.150 | 0.175 | 0.200 | $\ldots$
---|---|---|---|---|---|---|---|---|---
1 | Dolna | $\approx 0$ | $\approx 0$ | $\approx 0$ | 0.0301 | 0.0810 | 0 | 0 | $\ldots$
2 | Zlota | $\approx 0$ | $\approx 0$ | $\approx 0$ | 0.0300 | 0.0795 | 0 | 0 | $\ldots$
3 | Mickiewicza | $\approx 0$ | $\approx 0$ | $\approx 0$ | 0.0256 | 0.0790 | 0 | 0 | $\ldots$
4 | Piwna 1 | $\approx 0$ | $\approx 0$ | $\approx 0$ | 0.0271 | 0.0550 | 0 | 0 | $\ldots$
5 | Nyska 1 | $\approx 0$ | $\approx 0$ | $\approx 0$ | 0.0044 | 0.0005 | 0 | 0 | $\ldots$
6 | Zlotnickiego | $\approx 0$ | $\approx 0$ | $\approx 0$ | 0.0044 | 0.0005 | 0 | 0 | $\ldots$
7 | Piwna 2 | $\approx 0$ | $\approx 0$ | $\approx 0$ | 0.0257 | 0.8201 | 0.1980 | 0 | $\ldots$
8 | Jasna | $\approx 0$ | $\approx 0$ | $\approx 0$ | 0.0292 | 0.9214 | 0.8481 | 0.0207 | $0,\ldots$
9 | Nyska 2 | $\approx 0$ | $\approx 0$ | $\approx 0$ | 0.0161 | 1.6921 | 1.7424 | 0.0751 | $0,\ldots$
10 | Laska | $\approx 0$ | $\approx 0$ | $\approx 0$ | 0.0232 | 0.0607 | 0 | 0 | $\ldots$
11 | Sieradzka 1 | $\approx 0$ | $\approx 0$ | $\approx 0$ | 0.0315 | 0.0556 | 0 | 0 | $\ldots$
12 | Sieradzka 2 | $\approx 0$ | $\approx 0$ | 0.0001 | 0.0571 | 0.1346 | 0 | 0 | $\ldots$
As we can see, the most interesting results were obtained for traffic
intensities of 0.125 and 0.150. At low traffic intensities, the reliability
of individual elements does not affect the functioning of the system, because
the whole system works properly and the road reliabilities are close to 1. At
the intensity of 0.125, the contributions of the individual streets begin to
be noticeable. We see that, according to the importance measures, the largest
contribution to the functioning of the network is made by Sieradzka 2 Street;
the next streets have importance values close to 0.03, with the exception of
Nyska 1, Zlotnickiego, and Nyska 2, whose values are smaller. At the intensity
of 0.150, difficulties in movement appear. The first thing that draws our
attention is the importance of Nyska 2 Street, which was previously one of the
smallest and has now become the most significant element. Another important
component of the system is again Sieradzka 2, which is to be expected.
However, the streets worth paying attention to are Piwna 2 and Jasna, whose
significance has also risen dramatically. With further increases in
intensity, we see that only these three streets really affect the quality of
traffic, and among them most of all Nyska 2.
### 4.4 Comparison with real-life data.
Based on the analysis in the previous section, the streets Nyska 2, Piwna 2,
and Jasna have the greatest importance for the proper functioning of the
entire system at high traffic intensity. The analyzed traffic system is a real
traffic network, so we know what traffic really looks like on the individual
roads. The presented scheme of travel from A to B shows the journey between
two strategic positions in the city. The main streets in the city are Laska
and Sieradzka, which pass through the city center. The traffic on the upper
part of Laska Street is greater than on Sieradzka Street, because there we are
already approaching the exit from the city. The results obtained are in line
with expectations. One of the most important points in the city is the
Nyska–Laska–Sieradzka intersection. In fact, this intersection is more
extensive, and a lot of work has clearly been put into its proper
functioning. Many improvements are used there that would also slightly change
the results obtained from the simulation. For example, vehicles turning left
into Sieradzka Street have more space, so while waiting for the turn they do
not obstruct the traffic of vehicles going straight or turning right. In
addition, countdown timers are used on the traffic lights, which increase
drivers' alertness and speed up their start when the green light comes on.
The intersection of Piwna and Laska streets was critical enough that turning
left there was impossible: only a right turn was permitted. This was a major
impediment to general traffic as well as to the routes presented in the
paper, which is why a roundabout has recently been built there. This decision
certainly required much consideration by the city authorities, because there
is not enough space for a full-size roundabout, so it has one slightly
flattened side. However, as the obtained results show, this was one of the
critical parts of traffic in the city, so the decision seems sensible.
The last problematic street is Jasna, although in real traffic neither Jasna
Street nor the "bottom" part of Sieradzka Street carries such an intensity of
vehicles. Given that traffic in this part of the city is lighter and that
turning into Mickiewicza Street is not very problematic, this provides useful
guidance to drivers deciding which of the two roads is better.
## 5 Summary and conclusions.
A quantitative approach to road quality assessment has been proposed.
Importance measures defined for reliability systems were used as a tool to
calculate the importance of individual road fragments. An actual traffic
network diagram, ensuring access from point $A$ to point $B$, was analyzed.
First, assuming that only the structure of the analyzed road network is known,
the structural significance of individual road fragments was calculated. For
this purpose, two measures were used: the measure proposed by Birnbaum, which
assumes constant reliability of individual road fragments, and the
Barlow–Proschan measure, which takes the variability of individual element
reliability into account. There were noticeable differences between the
obtained values, but the final result was similar in both cases. The streets
most important for maintaining the efficiency of the analyzed road network are
those that occur, as serial connections, in the largest number of possible
routes to point $B$. This confirmed our expectations, but also helped to
locate roads that at first consideration did not appear to be potentially
important. When comparing the Birnbaum and Barlow–Proschan measures, the
second was considered more appropriate in the context of road traffic, because
the reliabilities of individual roads are not the same and are affected by
many factors.
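To make the first measure concrete, Birnbaum importance $I_B(i)=h(1_i,p)-h(0_i,p)$ can be computed by exact state enumeration. The sketch below uses a hypothetical three-road layout (reach $B$ via roads 0 and 1 in series, or via road 2 alone) with made-up reliabilities, not the network analyzed in the paper:

```python
from itertools import product

def system_works(state):
    # Hypothetical 3-road network: B is reached via roads 0 AND 1
    # (a serial route) or via road 2 alone (a parallel route).
    return (state[0] and state[1]) or state[2]

def reliability(p, fixed=None):
    # Exact system reliability by enumerating all component states.
    # `fixed=(i, s)` pins component i to state s (1 = working, 0 = failed).
    total = 0.0
    for state in product([0, 1], repeat=len(p)):
        if fixed is not None and state[fixed[0]] != fixed[1]:
            continue
        prob = 1.0
        for i, s in enumerate(state):
            if fixed is not None and i == fixed[0]:
                continue  # the pinned component carries no probability
            prob *= p[i] if s else (1.0 - p[i])
        if system_works(state):
            total += prob
    return total

def birnbaum(p, i):
    # Birnbaum importance: I_B(i) = h(1_i, p) - h(0_i, p).
    return reliability(p, fixed=(i, 1)) - reliability(p, fixed=(i, 0))

p = [0.9, 0.9, 0.8]
print([round(birnbaum(p, i), 4) for i in range(3)])  # -> [0.18, 0.18, 0.19]
```

With these illustrative reliabilities, road 2 is the most important: the system fails outright whenever it fails and the serial route is down.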
Then a method of assessing the reliability of street elements was proposed.
For this purpose, it was assumed that the quality of a road is the
satisfaction of drivers with the route traveled, with the delay time on
individual roads used as its measure. Drivers were assumed to have limited
patience, treated as a lifetime and modeled by a Weibull-distributed random
variable. Having calculated the delay times on individual roads, it was
possible to determine the probability that a driver becomes upset at a given
delay. However, for the obtained value to be usable in the theory of
importance measures, it had to be transformed so that it represented the
reliability of a given element. Therefore, the Weibull reliability function
was used, which in our example reflects the probability that, at a given road
delay, the driver remains satisfied. An alternative method of assessing the
reliability of road networks has been used in [PilSzy2009:Road].
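This transformation can be sketched in a few lines. The delays and patience parameters (scale and shape of the Weibull distribution) below are hypothetical, not the values fitted in the paper:

```python
import math

def weibull_reliability(delay, scale, shape):
    # R(t) = exp(-(t/scale)^shape): the probability that a driver's
    # Weibull-distributed patience exceeds the delay t, i.e. that the
    # driver is still satisfied after waiting t on a road fragment.
    return math.exp(-((delay / scale) ** shape))

# Hypothetical delays (seconds) on three road fragments, with assumed
# patience parameters scale = 60 s and shape = 1.5 (illustrative only).
delays = {"Nyska 2": 45.0, "Piwna 2": 30.0, "Jasna": 20.0}
for road, t in delays.items():
    print(road, round(weibull_reliability(t, 60.0, 1.5), 3))
```

The longer the delay, the lower the reliability assigned to the fragment, which is exactly the monotonicity the importance measures require.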
In [SzaWlo2020:Divers] it was shown that drivers who are dissatisfied with
driving may stop complying with traffic rules, and one of the factors
influencing this change and negative behavior on the road is delay. Therefore,
the measures defined here serve as a guide for both drivers and traffic
managers. For drivers, they show which road is better avoided because of the
chance of potential nervousness; for traffic managers, they give information
about dangerous points in the city and points that have a negative impact on
drivers. In addition, a large delay time on an individual road indicates the
failure of the fragment concerned.
Based on the simulations performed, the delay times on each road were
calculated, and it was then shown which road fragments are the most important.
The obtained results were confronted with actual experience of the given
fragments and were considered plausible: the elements identified as most
significant in the network as defined are precisely those that have been
improved in real traffic, which demonstrates their real importance.
Author Contributions: Development of the application of the importance
measures to elements of road networks, Krzysztof Szajowski (KSz) and Kinga
Włodarczyk (KW); implementation of the algorithm and the example, numerical
simulations, KW; writing and editing, KSz and KW.
Funding: Supported by Wrocław University of Science and Technology, Faculty of
Pure and Applied Mathematics, under the project 049U/0051/19(KSz).
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this article:
TDL | Time-Dependent Lifetime (p. iii)
---|---
TIL | Time Independent Lifetime (p. iii)
LAI model | Model of general behavior of vehicles on the road (p. 3.3; v. [LarAlvIca2010:Cellular])
# The Lightning Model
James T. Campbell (corresponding author; <EMAIL_ADDRESS>; partially supported
by a University of Memphis Hardin Honors College Summer Research Grant),
Alexandra Deane (partially supported by an NSERC-USRA grant), and Anthony Quas
(partially supported by an NSERC grant)
###### Abstract
We introduce a non-standard model for percolation on the integer lattice
$\mathbb{Z}^{2}$. Randomly assign to each vertex $a\in\mathbb{Z}^{2}$ a
potential, denoted $\phi_{a}$, chosen independently and uniformly from the
interval $[0,1]$. For fixed $\epsilon\in[0,1]$, draw a directed edge from
vertex $a$ to a nearest-neighbor vertex $b$ if $\phi_{b}<\phi_{a}+\epsilon$,
yielding a directed subgraph of the infinite directed graph
$\overrightarrow{G}$ whose vertex set is $\mathbb{Z}^{2}$, with nearest-
neighbor edge set. We define notions of weak and strong percolation for our
model, and observe that when $\epsilon=0$ the model fails to percolate weakly,
while for $\epsilon=1$ it percolates strongly. We show that there is a
positive $\epsilon_{0}$ so that for $0\leq\epsilon\leq\epsilon_{0}$, the model
fails to percolate weakly, and that when $\epsilon>p_{\text{site}}$, the
critical probability for standard site percolation in $\mathbb{Z}^{2}$, the
model percolates strongly. We study the number of infinite strongly connected
clusters occurring in a typical configuration. We show that for these models
of percolation on directed graphs, there are some subtle issues that do not
arise for undirected percolation. Although our model does not have the finite
energy property, we are able to show that, as in the standard model, the
number of infinite strongly connected clusters is almost surely 0, 1 or
$\infty$.
2010 Mathematics Subject Classification: 60K35, 82B43
Keywords: percolation, integer lattice, phase transition, infinite clusters
## 1 Introduction
In this paper we introduce and establish some preliminary results about the
following family of non-standard models for percolation on the directed
integer lattice $\mathbb{Z}^{2}$. Randomly assign a potential to each vertex
in $\mathbb{Z}^{2}$, where the values are chosen independently and uniformly
from the interval $[0,1]$. Such an assignment is called a vertex
configuration; if $\phi$ is such a configuration and $a\in\mathbb{Z}^{2}$, we
designate the value of $\phi$ at $a$ by $\phi_{a}$. Fix $\epsilon\in[0,1]$,
and for nearest neighbor vertices $a$ and $b$, draw a directed edge from $a$
to $b$ if $\phi_{b}<\phi_{a}+\epsilon$, giving a subgraph of the nearest
neighbor graph on $\mathbb{Z}^{2}$. Thus, each vertex configuration gives rise
to an edge configuration, and there is a natural probability measure (the
push-forward of Lebesgue product measure on $[0,1]^{\mathbb{Z}^{2}}$) on the
edge configuration space. The Lightning Model refers to this family (for
$0\leq\epsilon\leq 1$), or perhaps a fixed member of this family, of
configuration spaces.
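A minimal sketch of this construction, sampling one edge configuration on a finite block of $\mathbb{Z}^{2}$ (function and variable names are our own):

```python
import random

def lightning_edges(n, eps, seed=0):
    # Sample i.i.d. uniform potentials on an n x n block of Z^2 and return
    # the directed nearest-neighbour edges (a, b) with phi_b < phi_a + eps.
    rng = random.Random(seed)
    phi = {(x, y): rng.random() for x in range(n) for y in range(n)}
    edges = set()
    for (x, y) in phi:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            b = (x + dx, y + dy)
            if b in phi and phi[b] < phi[(x, y)] + eps:
                edges.add(((x, y), b))
    return phi, edges

phi, e0 = lightning_edges(10, 0.0)   # eps = 0: edges only run "downhill"
_, e1 = lightning_edges(10, 1.0)     # eps = 1: every directed edge is open
print(len(e0), len(e1))  # -> 180 360
```

Since both calls reuse the same potentials, the run also illustrates the monotonicity discussed in Section 2.2: at $\epsilon=0$ exactly one direction of each adjacent pair is open (180 of the 360 directed edges on a $10\times 10$ block), and every such edge stays open at $\epsilon=1$.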
If there is an infinite path originating at the origin 0, we say the
configuration percolates weakly. Define the strong cluster of 0 to be the
strongly connected component containing 0, namely, the set of vertices $a$ for
which there is a directed path from 0 to $a$ and also a directed path from $a$
to 0. When this cluster is infinite, we say that the configuration percolates
strongly. It is immediate that strong percolation implies weak percolation.
We are interested mainly in the question: for which values of $\epsilon$ does
the Lightning Model percolate with positive probability?
In Section 3 we find an $\epsilon_{0}>0$ so that for
$0\leq\epsilon<\epsilon_{0}$, weak percolation fails to occur. While soft
arguments establish the existence of such an $\epsilon_{0}>0$, our arguments
(based on computing the spectral radius of certain operators) give an explicit
non-trivial lower bound. We also show that when $\epsilon>p_{\text{site}}$,
the critical probability for standard site percolation on $\mathbb{Z}^{2}$,
strong percolation occurs in the Lightning Model. The estimates in this
section are not sharp, and there is a substantial gap between $\epsilon_{0}$
and $p_{\text{site}}$. This leaves the question of determining precise
critical values $0<\epsilon_{w}\leq\epsilon_{s}<1$ so that for
$\epsilon<\epsilon_{w}$, weak percolation does not occur, and for
$\epsilon>\epsilon_{w}$, weak percolation occurs with positive probability.
Similarly for $\epsilon_{s}$ and strong percolation. We conjecture that
$\epsilon_{w}=\epsilon_{s}$.
In the standard (non-directed) site and bond percolation models, it is
straightforward to show using ergodicity and the finite energy condition that
for each $p$, there is $N_{p}\in\\{0,1,\infty\\}$ such that the number of
infinite clusters for a.e. configuration is $N_{p}$ (first established in
[NS81]). Furthermore a well-known argument of Burton and Keane ([BK89]) shows
that $N_{p}$ only takes the values 0 and 1 in these models. These questions
become far more subtle when studying strongly connected clusters in the
Lightning Model. Firstly the model lacks the finite energy condition, but more
seriously strong connectedness in directed graphs is a much more sensitive
property than connectedness in undirected graphs. For example, changing a
single edge can break a single strongly connected cluster into infinitely many
finite clusters. In Section 4, we establish that in the Lightning Model, the
number of strongly connected clusters is almost surely 0, 1 or $\infty$. Our
proof is strictly two-dimensional, and we do not know if the corresponding
statement holds in the higher-dimensional version of the model. In Section 5,
we present some open questions, and give further examples highlighting the
difficulties with Burton-Keane type arguments in directed settings. As
Grimmett ([Gri06]) indicates, “The Burton-Keane method is canonical of its
type, and is the first port of call in any situation where such a uniqueness
result is needed”. It seems that for this model a different sort of argument
must be used, which presents a truly interesting situation.
We use the term Lightning Model because the base idea (since modified to its
present form) originated as a simple model for lightning in which preferred
transitions are possible based on relative values of a potential. We
discovered later that in fact a model similar to the one in this paper, in
three dimensions, had been proposed by climatologists ([RPK+07]). Although
their paper focuses on basic geometric properties of simulations in finite
regions, the double connection confirmed our name choice.
## 2 Preliminaries
### 2.1 Basic Definitions, Paths, Clusters
Our base graph is the infinite directed nearest-neighbor graph
$\overrightarrow{G}$ whose vertex set is $\mathbb{Z}^{2}$, with edge set
$E=\\{(a,a\pm e_{i}):a\in\mathbb{Z}^{2},\,i=1,2\\}$, where $e_{1}$ and $e_{2}$
are the unit coordinate vectors. Adjacent vertices $a,b\in\mathbb{Z}^{2}$ are
called neighbors.
###### Definition 2.1.
The set $\mathbb{V}$ of _vertex configurations_ , is defined as
$\mathbb{V}=[0,1]^{\mathbb{Z}^{2}},$
where $[0,1]$ is the unit interval equipped with the usual topology. We equip
$\mathbb{V}$ with the product topology and the Borel $\sigma$-algebra
$\mathcal{B}$, and put Lebesgue product measure $\lambda$ on
$(\mathbb{V},\mathcal{B})$.
For a fixed vertex configuration $\phi\in\mathbb{V}$, we will write $\phi_{a}$
(or when clarity requires, $\phi(a)$) to denote the value of $\phi$ at vertex
$a$, and call it the _potential_ at $a$.
###### Definition 2.2.
The set of _edge configurations_ is defined as
$\mathbb{E}={\\{\,\bullet\,,\,\to\,\\}}^{E},$
again equipped with the product topology and Borel $\sigma$-algebra
$\mathscr{B}$.
###### Definition 2.3.
For a parameter $\epsilon\in[0,1]$, define
$f_{\epsilon}:\mathbb{V}\to\mathbb{E}$ by
$f_{\epsilon}(\phi)=z,$
where for every edge $e=(a,b)$ in $E$,
$z_{e}:=z(a,b)=\begin{cases}\rightarrow&\text{ if
}\phi_{b}<\phi_{a}+\epsilon;\\\ \bullet&\text{ otherwise.}\end{cases}$
We think of $\to$ as representing a directed edge and $\bullet$ as
representing the absence of such an edge. Hence for a fixed $\phi$ and $\epsilon$,
$f_{\epsilon}(\phi)$ represents an infinite directed subgraph of
$\overrightarrow{G}$ with vertex set $\mathbb{Z}^{2}$, with a directed edge
from the vertex $a$ to the vertex $b$ iff the potential at $b$ is lower than
that at $a$, up to a tolerance of $\epsilon$. We often use standard
percolation terminology and call the directed edges in this subgraph open.
###### Definition 2.4.
For $\epsilon\in[0,1]$ define a probability measure $\mathbb{P}_{\epsilon}$ on
$(\mathbb{E},\mathscr{B})$ by
$\mathbb{P}_{\epsilon}(A):=\lambda(f_{\epsilon}^{-1}(A)),\quad
A\in\mathscr{B},$
where $\lambda$ is the Lebesgue product measure on $(\mathbb{V},\mathcal{B})$.
###### Remark.
An efficient way to describe our setup is to view the vertex potentials as a
family $\\{U_{a}\\}_{a\in\mathbb{Z}^{2}}$ of i.i.d. standard uniform random
variables. For fixed $\epsilon>0$, declare the directed nearest neighbor edge
$(a,b)$ to be $\epsilon$-open if $U_{b}<U_{a}+\epsilon$.
###### Definition 2.5.
By the Lightning Model we mean the aggregate of probability spaces
$(\mathbb{E},\mathscr{B},\mathbb{P}_{\epsilon})$, for $0\leq\epsilon\leq 1$.
We may also refer to the space with a fixed $\epsilon$ as the Lightning Model.
We want to study the (typical) component structure of edge configurations in
the Lightning Model, for which the following definitions are useful.
###### Definition 2.6.
A path from a to b in a configuration $f_{\epsilon}(\phi)$ is a sequence of
distinct vertices
$a=a_{0},a_{1},a_{2},\ldots,a_{n-1},a_{n}=b\in\mathbb{Z}^{2}$ such that
$a_{i}$ is a neighbor of $a_{i+1}$ and
$f_{\epsilon}(\phi)(a_{i},a_{i+1})=\,\to\,$ for all $0\leq i<n$. In this case
we write $a\overset{\epsilon}{\to}b$. If both $a\overset{\epsilon}{\to}b$ and
$b\overset{\epsilon}{\to}a$, we write $a\overset{\epsilon}{\leftrightarrow}b$,
and say that $a$ and $b$ are bi-directionally connected.
Note that in the case $a\overset{\epsilon}{\leftrightarrow}b$, there is no
requirement that the forward and backward paths are the reverse of each other.
###### Definition 2.7.
Let $a\in\mathbb{Z}^{2}$. We define the _strongly-connected component_ of $a$
in $f_{\epsilon}(\phi)$ to be
$\\{b\in\mathbb{Z}^{2}:a\overset{\epsilon}{\leftrightarrow}b\\}\;.$
This is also called the _strongly-connected cluster of a_.
For the remainder of this paper, whenever we discuss paths, clusters, etc.,
the vertex configuration $\phi$ is assumed to have been sampled from
$\mathbb{V}$ and the relations $\overset{\epsilon}{\to}$,
$\overset{\epsilon}{\leftrightarrow}$ have been generated by
$f_{\epsilon}(\phi)$ as above.
###### Definition 2.8.
Let $a\in\mathbb{Z}^{2}$. If
$\\{b\in\mathbb{Z}^{2}:a\overset{\epsilon}{\to}b\\}$ is infinite, then we say
that $a$ _percolates weakly_.
It is clear from the definition that $\\{\phi:a\text{ percolates weakly}\\}$
is measurable. König’s Lemma ([Kön27]) implies that $a$ percolates weakly if
and only if there is an infinite path starting from $a$.
###### Definition 2.9.
Let $a\in\mathbb{Z}^{2}$. If the strongly-connected component of $a$ is
infinite, we say that $a$ _percolates strongly_.
The set $\\{\phi:a\text{ percolates strongly}\\}$ is also measurable. We use
the phrase weak (resp., strong) percolation to mean weak (resp., strong)
percolation at the origin. We denote the events of weak percolation and strong
percolation by $\\{0\overset{\epsilon}{\to}\infty\\}$ and
$\\{0\overset{\epsilon}{\leftrightarrow}\infty\\}$, respectively.
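On a finite configuration these notions reduce to graph search: the strongly-connected cluster of a vertex is the intersection of the set reachable from it with the set from which it is reachable. A minimal sketch (with generic vertex labels rather than points of $\mathbb{Z}^{2}$):

```python
from collections import deque

def reachable(start, edges, forward=True):
    # Breadth-first search over a directed edge set, following edges
    # forward (forward=True) or backward (forward=False).
    adj = {}
    for a, b in edges:
        if forward:
            adj.setdefault(a, []).append(b)
        else:
            adj.setdefault(b, []).append(a)
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

def strong_cluster(start, edges):
    # Strongly-connected cluster of `start`: vertices b with both a
    # directed path start -> b and a path b -> start (Definition 2.7).
    return reachable(start, edges, True) & reachable(start, edges, False)

# Toy configuration: a directed 3-cycle plus one dangling edge out of 1.
edges = {(0, 1), (1, 2), (2, 0), (1, 3)}
print(sorted(strong_cluster(0, edges)))  # -> [0, 1, 2]
```

Vertex 3 is weakly reachable from the cycle but not in its strong cluster, since no path returns from it.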
### 2.2 Basic results
Percolation in the Lightning Model is monotone in $\epsilon$. Here is the
argument. We define the following partial order on $\mathbb{E}$:
###### Definition 2.10.
Given two edge configurations $v,w\in\mathbb{E}$, we say $v\preceq w$ if every
open edge of $v$ is also open in $w$.
It is clear that the maps $\\{f_{\epsilon}\\}$ are monotone in $\epsilon$ with
respect to this partial order; that is,
$\epsilon\leq\epsilon^{\prime}\;\implies f_{\epsilon}(\phi)\preceq
f_{\epsilon^{\prime}}(\phi)\;\forall\;\phi\in\mathbb{V}.$ (MP)
In other words if $\epsilon\,\leq\epsilon^{\prime}$ then the directed graph
defined by $f_{\epsilon}(\phi)$ is a subgraph of
$f_{\epsilon^{\prime}}(\phi)$. The following immediate corollary is stated as
a theorem because of its importance:
###### Theorem 2.11 (Monotonicity of Percolation Probability).
Both $\mathbb{P}_{\epsilon}(\\{\text{weak percolation}\\})$ and
$\mathbb{P}_{\epsilon}(\\{\text{strong percolation}\\})$ are non-decreasing
functions of $\epsilon$.
###### Proof.
Let $W_{\epsilon}=\\{\phi\in\mathbb{V}\colon
0\overset{\epsilon}{\to}\infty\text{ in $f_{\epsilon}(\phi)$}\\}$ and
$S_{\epsilon}=\\{\phi\in\mathbb{V}\colon
0\overset{\epsilon}{\leftrightarrow}\infty\text{ in $f_{\epsilon}(\phi)$}\\}$.
It follows immediately from (MP) that for $\epsilon\leq\epsilon^{\prime}$,
$W_{\epsilon}\subseteq W_{\epsilon^{\prime}}$ and $S_{\epsilon}\subseteq
S_{\epsilon^{\prime}}$. ∎
We now show that 0 goes to infinity with the same probability that infinity
comes to 0. This result hints that weak percolation could force strong
percolation.
###### Definition 2.12.
We call $\\{v\in\mathbb{Z}^{2}:v\overset{\epsilon}{\to}0\\}$ the _attracting
set_ of the origin. If this set is infinite, we write
$\infty\overset{\epsilon}{\to}0$.
Observe that the event $\\{\infty\overset{\epsilon}{\to}0\\}$ is measurable.
Recall that weak percolation is the event
$\\{0\overset{\epsilon}{\to}\infty\\}$.
###### Theorem 2.13.
$\mathbb{P}_{\epsilon}(\\{0\overset{\epsilon}{\to}\infty\\})=\mathbb{P}_{\epsilon}(\\{\infty\overset{\epsilon}{\to}0\\}).$
###### Proof.
Given an edge $e=(a,b)$, its flip is the edge $\bar{e}=(b,a)$. Define the
mirror transformation $M:\mathbb{E}\to\mathbb{E}$ as sending each edge to its
flip:
$M(z)_{e}=z_{\bar{e}},\quad z\in\mathbb{E},\;e\in E.$
Let $I$ be the involution on vertex configurations defined by
$I:\mathbb{V}\to\mathbb{V}$, $I(\phi)_{a}=1-\phi_{a}$. It is easy to check
that $f_{\epsilon}(I(\phi))_{e}=M(f_{\epsilon}(\phi))_{e}$. Recalling that
$\lambda$ is Lebesgue product measure on $\mathbb{V}$, one also checks that
$\lambda\circ I^{-1}=\lambda$. It follows that
$\mathbb{P}_{\epsilon}\circ M^{-1}=\lambda\circ f_{\epsilon}^{-1}\circ
M^{-1}=\lambda\circ(M\circ f_{\epsilon})^{-1}=\lambda\circ(f_{\epsilon}\circ
I)^{-1}=\lambda\circ f_{\epsilon}^{-1}=\mathbb{P}_{\epsilon},$
so that $\mathbb{P}_{\epsilon}$ is invariant under $M$. To finish, observe
that the mirror image of the event $\\{0\overset{\epsilon}{\to}\infty\\}$ is
the event $\\{\infty\overset{\epsilon}{\to}0\\}$. ∎
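The key identity in the proof, $f_{\epsilon}(I(\phi))=M(f_{\epsilon}(\phi))$, can be checked numerically on a finite block. The sketch below (helper names are our own) verifies it for a sampled configuration; ties $\phi_a-\phi_b=\epsilon$ have probability zero and are ignored:

```python
import random

def open_edges(phi, eps, pairs):
    # Directed edges (a, b) among the nearest-neighbour `pairs`
    # satisfying phi[b] < phi[a] + eps.
    return {(a, b) for a, b in pairs if phi[b] < phi[a] + eps}

def check_mirror(n=6, eps=0.3, seed=1):
    rng = random.Random(seed)
    verts = [(x, y) for x in range(n) for y in range(n)]
    phi = {v: rng.random() for v in verts}
    pairs = [((x, y), (x + dx, y + dy))
             for (x, y) in verts
             for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if (x + dx, y + dy) in phi]
    flipped = {v: 1.0 - p for v, p in phi.items()}          # I(phi)
    lhs = open_edges(flipped, eps, pairs)                   # f_eps(I(phi))
    rhs = {(b, a) for a, b in open_edges(phi, eps, pairs)}  # M(f_eps(phi))
    return lhs == rhs

print(check_mirror())  # -> True
```

The check works because $1-\phi_{b}<1-\phi_{a}+\epsilon$ is equivalent to $\phi_{a}<\phi_{b}+\epsilon$, i.e. to the flipped edge being open in the original configuration.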
## 3 Upper and Lower Bounds for Percolation
In this section we show that for sufficiently small $\epsilon$, weak
percolation almost surely fails to occur. We first give an elementary counting
argument which shows that weak percolation fails when $\epsilon=0$. Then we
sketch a soft argument showing that weak percolation also fails for some
positive $\epsilon$. This argument, however, does not give explicit non-
trivial lower bounds for such $\epsilon$, so we include a third argument,
based upon estimating the spectral radius for an appropriate linear
transformation, which gives a non-trivial lower bound. We note that it is
certainly not sharp.
Here is an argument showing that weak percolation fails when $\epsilon=0$. Fix
a path $0\to a_{1}\dots\to a_{n}$ in $\overrightarrow{G}$. This path is open
in $f_{0}(\phi)$ if and only if $\phi_{0}>\phi_{a_{1}}>\ldots>\phi_{a_{n}}$.
This last event has probability $1/(n+1)!$ since the values
$\phi_{0},\ldots,\phi_{a_{n}}$ are chosen independently and uniformly from
$[0,1]$. But since paths are simple, there are at most $4\cdot 3^{n-1}$ paths
of length $n$ starting from the origin. Hence the probability that at least
one of the paths is open is bounded above by $4\cdot 3^{n-1}/(n+1)!$, which
clearly tends to 0 as $n\to\infty$. Since weak percolation can occur only if
there is an open path of length $n$ (from the origin) for each $n$, we see
that weak percolation fails almost surely when $\epsilon=0$.
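The bound $4\cdot 3^{n-1}/(n+1)!$ can be evaluated directly; factorial growth beats the exponential path count almost immediately:

```python
from math import factorial

def open_prob_bound(n):
    # Union bound at eps = 0: at most 4 * 3**(n-1) simple paths of length n
    # start at the origin, and each is open with probability 1/(n+1)!.
    return 4 * 3 ** (n - 1) / factorial(n + 1)

for n in (1, 5, 10, 20):
    print(n, open_prob_bound(n))
```

The bound is trivial for very short paths (it exceeds 1 at $n=1$) but decreases rapidly: successive terms shrink by a factor $3/(n+2)$.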
Here is a soft argument showing that weak percolation also fails for some
positive $\epsilon$. Fix a non-self-intersecting path with $nk$ edges (where
$k$ is to be fixed below and $n$ will be increased to $\infty$) and vertices
$a_{0}=0,\ldots,a_{nk}$. Then the probability that the path is open is bounded
above by the probability that each of the paths $a_{ik}\to\ldots\to
a_{ik+(k-1)}$ is open for $i=0,\ldots,n-1$. Since these events are
independent, the probability of this is
$\mathbb{P}_{\epsilon}(\\{a_{0}\to\ldots\to a_{k-1}\text{ is open}\\})^{n}$.
Next one sees that $\mathbb{P}_{\epsilon}(\\{a_{0}\to\ldots\to a_{k-1}\text{
is open}\\})$ is a continuous function of $\epsilon$, converging to $1/k!$ as
$\epsilon\to 0$. Now choose $k=7$ (so that $k!>3^{k}$) and fix an $\epsilon>0$
sufficiently small so that $\mathbb{P}_{\epsilon}(a_{0}\to\ldots\to
a_{6}\text{ is open})<3^{-7}$. Then the standard counting argument shows that
the probability there exists an open path of length $nk$ starting from the
origin approaches 0 as $n\to\infty$.
Next we give a more detailed argument which obtains an explicit lower bound by
computing the probability that a fixed path of length $n$ is open using an
iterated linear operator whose spectral radius may be accurately estimated.
###### Theorem 3.1.
Let $\epsilon_{0}=0.1481$. In the Lightning Model with
$\epsilon\leq\epsilon_{0}$, the probability of weak percolation is zero.
###### Proof.
Fix $\epsilon>0$, and consider a fixed path in $\overrightarrow{G}$ consisting
of vertices with potential values $x_{0},x_{1},x_{2},\ldots,x_{n}$. The path
will be open iff the values satisfy
$x_{0}\overset{\epsilon}{>}x_{1}\overset{\epsilon}{>}x_{2}\overset{\epsilon}{>}\cdots\overset{\epsilon}{>}x_{n}$,
where we write $x_{a}\overset{\epsilon}{>}x_{b}$ to indicate
$x_{b}<x_{a}+\epsilon$. Since the values at each of the vertices are chosen
independently and uniformly, the probability of the set of vertex
configurations satisfying
$x_{0}\overset{\epsilon}{>}x_{1}\overset{\epsilon}{>}x_{2}\overset{\epsilon}{>}\cdots\overset{\epsilon}{>}x_{n}$
is given by
$\int_{0}^{1}dx_{0}\int_{0}^{\min(x_{0}+\epsilon,1)}dx_{1}\int_{0}^{\min(x_{1}+\epsilon,1)}dx_{2}\ldots\int_{0}^{\min(x_{n-1}+\epsilon,1)}dx_{n}\;.$
(1)
We now define a related sequence of functions that lends itself to recursive
evaluation:
###### Definition 3.2.
Let $y\in[0,1]$. For $\epsilon>0$ and $n=0,1,\dots$, define
$F^{\epsilon}_{n}(y)=\int_{0}^{\min(y+\epsilon,1)}dx_{1}\int_{0}^{\min(x_{1}+\epsilon,1)}dx_{2}\ldots\int_{0}^{\min(x_{n-1}+\epsilon,1)}dx_{n}.$
$F^{\epsilon}_{n}(y)$ gives the probability of being able to continue $n$
steps from a vertex having value $y$, i.e., it is the probability that there
exists an open path of length $n$ beginning at a given vertex, conditioned on
that vertex having potential value $y$.
This last formula contains one fewer integral than appeared in expression (1),
and the probability that a fixed path of length $n$ is open is
$F^{\epsilon}_{n+1}(1)$.
###### Example 3.3.
It is enlightening to calculate the first few of these directly. Here we are
assuming that $\epsilon<1/2$.
$F^{\epsilon}_{0}(y)=1.$
$F^{\epsilon}_{1}(y)=\begin{cases}1&y\in(1-\epsilon,1];\\\ \epsilon+y&y\leq
1-\epsilon.\end{cases}$
$F^{\epsilon}_{2}(y)=\begin{cases}-\frac{\epsilon^{2}}{2}+\epsilon+\frac{1}{2}&y\in(1-\epsilon,1];\\\
-\frac{\epsilon^{2}}{2}+2\epsilon-\frac{1}{2}+y&y\in(1-2\epsilon,1-\epsilon];\\\
\frac{3\epsilon^{2}}{2}+2\epsilon y+\frac{y^{2}}{2}&y\leq
1-2\epsilon.\end{cases}$
These piecewise defined polynomials in $y$ are continuous, as the values agree
at the endpoints of each subinterval of definition.
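These closed forms can be cross-checked by iterating the integral recursion $F^{\epsilon}_{n}(y)=\int_{0}^{\min(y+\epsilon,1)}F^{\epsilon}_{n-1}(x)\,dx$ numerically. A minimal sketch using the trapezoid rule (the grid size and the value of $\epsilon$ are arbitrary choices of ours):

```python
# Evaluate F_n on a fine grid by iterating the integral recursion
# F_n(y) = int_0^{min(y+eps,1)} F_{n-1}(x) dx with the trapezoid rule,
# and compare F_2 against the closed form from Example 3.3.
N = 20000                      # grid resolution (arbitrary choice)
eps = 0.2                      # any eps < 1/2, to match the example
h = 1.0 / N
grid = [i * h for i in range(N + 1)]

def apply_L(vals):
    # One application of the operator L^eps via a running trapezoid sum.
    prefix = [0.0]
    for i in range(1, len(vals)):
        prefix.append(prefix[-1] + 0.5 * h * (vals[i - 1] + vals[i]))
    return [prefix[min(round((y + eps) / h), N)] for y in grid]

F1 = apply_L([1.0] * (N + 1))  # F_0 is identically 1
F2 = apply_L(F1)

y = 0.5                        # lies in the region y <= 1 - 2*eps
closed = 1.5 * eps ** 2 + 2 * eps * y + y ** 2 / 2
print(abs(F2[round(y / h)] - closed) < 1e-3)  # -> True
```

Since $F^{\epsilon}_{1}$ is piecewise linear, the trapezoid rule integrates it exactly, so the agreement here is limited only by floating-point error.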
The endpoints of the intervals of the piecewise definition are all of the form
$1-j\epsilon$, or 0, motivating the following definition.
###### Definition 3.4.
Let $M=\lceil\frac{1}{\epsilon}\rceil$. For $j=0,1,\ldots,M-2$, we define
$I_{j}=\left(1-(j+1)\epsilon,1-j\epsilon\right],$
and for $j=M-1$ define
$I_{M-1}=[0,1-(M-1)\epsilon].$
Then $\\{I_{0},I_{1},\dots,I_{M-1}\\}$ gives a partition of $[0,1]$ into $M$
subintervals, each of length $\epsilon$ with the possible exception of
$I_{M-1}$, which has length at most $\epsilon$.
In Example 3.3, each of the calculated $F^{\epsilon}_{n}(y)$, restricted to
the interval $I_{j}$, is a polynomial in $y$ of degree $j$. We will now show
that this pattern holds for all $n$ and $\epsilon\in(0,1]$. We note that
$F^{\epsilon}_{n}(y)$ satisfies
$F^{\epsilon}_{n}(y)=\int_{0}^{\min(y+\epsilon,1)}F^{\epsilon}_{n-1}(x)dx$,
which motivates the following.
Define a linear transformation on (suitable) functions by
$\mathscr{L}^{\epsilon}f(x)=\int_{0}^{\min(x+\epsilon,1)}f(t)\,dt,$
and set
$\mathcal{F}^{\epsilon}=\big{\\{}f:[0,1]\to\mathbb{R}:\
f\big{|}_{I_{j}}\text{ is a polynomial of degree }\leq j\text{ for each
}j\big{\\}},$
a finite dimensional vector space.
###### Lemma 3.5.
For any $\epsilon>0$, $\mathcal{F}^{\epsilon}$ is invariant under
$\mathscr{L}^{\epsilon}$.
###### Proof.
Let $s_{j}=1-j\epsilon$ so that $I_{j}=(s_{j+1},s_{j}]$ for $j<M-1$ and
$I_{M-1}=[0,s_{M-1}]$. If $\mathds{1}_{I_{j}}$ denotes the indicator function
of $I_{j}$, it is easily checked that if we define
$h_{j,i}(x)=\left(x-s_{j+1}\right)^{i}\cdot\mathds{1}_{I_{j}}(x),$
then the set of functions
$\\{h_{j,i}(x):j=0,1,\ldots,M-1\text{ and }i=0,1,\ldots,j\\}$
forms a basis for $\mathcal{F}^{\epsilon}$. Hence the lemma will follow if we
show that $\mathscr{L}^{\epsilon}h_{j,i}\in\mathcal{F}^{\epsilon}$ for each
$0\leq j<M$ and $0\leq i\leq j$.
We deal first with the case $j<M-1$. If $x\leq s_{j+2}$, then $x+\epsilon\leq
s_{j+1}$ so that $\big{[}0,\min(x+\epsilon,1)\big{]}$ does not intersect
$I_{j}$. It follows that $\mathscr{L}^{\epsilon}h_{j,i}$ is identically 0 on
$\bigcup_{k=j+2}^{M-1}I_{k}$.
If $x\geq s_{j+1}$, then $x+\epsilon\geq s_{j}$, so that
$\int_{0}^{\min(x+\epsilon,1)}h_{j,i}(t)\,dt=\int_{I_{j}}h_{j,i}(t)\,dt=\frac{\epsilon^{i+1}}{i+1},$
which is independent of $x$. That is, the restriction of
$\mathscr{L}^{\epsilon}h_{j,i}$ to $\bigcup_{k=0}^{j}I_{k}$ is a degree 0
polynomial.
If $x\in(s_{j+2},s_{j+1}]$, then it is straightforward to calculate
$\int_{0}^{\min(x+\epsilon,1)}h_{j,i}(t)\,dt=\int_{s_{j+1}}^{x+\epsilon}h_{j,i}(t)\,dt=\frac{1}{i+1}(x-s_{j+2})^{i+1}.$
That is, the restriction of $\mathscr{L}^{\epsilon}h_{j,i}$ to $I_{j+1}$ is
$\frac{1}{i+1}h_{j+1,i+1}$. Combining these, we have shown that
$\mathscr{L}^{\epsilon}h_{j,i}=\frac{1}{i+1}\left(\sum_{k=0}^{j}\epsilon^{i+1}h_{k,0}+h_{j+1,i+1}\right).$
(2)
Similarly if $j=M-1$ and $x\in[0,1]$, then $x+\epsilon>s_{M-1}$, and we have
$\mathscr{L}^{\epsilon}h_{M-1,i}(x)=\int_{I_{M-1}}h_{M-1,i}(t)\,dt=\int_{0}^{1-(M-1)\epsilon}\big{(}t-(1-M\epsilon)\big{)}^{i}\,dt,$
which is the constant $\frac{1}{i+1}\big{(}\epsilon^{i+1}-(M\epsilon-1)^{i+1}\big{)}$, so that
$\mathscr{L}^{\epsilon}h_{M-1,i}=\frac{\epsilon^{i+1}-(M\epsilon-1)^{i+1}}{i+1}\sum_{k=0}^{M-1}h_{k,0}.$
(3)
∎
###### Remark.
Even though the functions $h_{j,i}$ are not continuous, one can easily check
that each function resulting from applying $\mathscr{L}^{\epsilon}$ is continuous. Also
notice that since $M=\lceil\frac{1}{\epsilon}\rceil$, $0\leq
M\epsilon-1<\epsilon$, so that all the coefficients of the matrix representing
$\mathscr{L}^{\epsilon}$ with respect to this basis are non-negative.
###### Corollary.
$F^{\epsilon}_{n}(y)\in\mathcal{F}^{\epsilon}$ for every $n\in\mathbb{N}$.
###### Proof.
$F^{\epsilon}_{0}(y)=1\in\mathcal{F}^{\epsilon}$ by definition. The result
follows by induction, since
$F^{\epsilon}_{n}(y)=\left((\mathscr{L}^{\epsilon})^{n}1\right)(y)$. ∎
In order to estimate the growth rate of $F^{\epsilon}_{n+1}(1)$, the
probability a fixed path of length $n$ is open, we may work entirely within
$\mathcal{F}^{\epsilon}$. Fix the ordered basis
$S=\\{h_{0,0},\,h_{1,0},\,h_{1,1},h_{2,0},h_{2,1},h_{2,2},\,\ldots,\,h_{M-1,M-1}\\}$
for $\mathcal{F}^{\epsilon}$ and let $A$ be the matrix representing
$\mathscr{L}^{\epsilon}$ with respect to $S$. $F^{\epsilon}_{0}$ is the
constant function $1$ on the interval $[0,1]$, which we can write as the linear
combination $1=1h_{0,0}+1h_{1,0}+\ldots+1h_{M-1,0}$. Hence, its coordinate
vector with respect to our ordered basis is
$\begin{bmatrix}1&1&0&1&0&0&\cdots&0\end{bmatrix}^{T}$. Since
$F^{\epsilon}_{n}=(\mathscr{L}^{\epsilon})^{n}\,F^{\epsilon}_{0}$, the
coefficients of the function $F^{\epsilon}_{n}$ with respect to the basis are
given by
$A^{n}\begin{bmatrix}1&1&0&1&0&0&\cdots&0\end{bmatrix}^{T}.$
We want to evaluate the function this represents, at the value $y=1$.
Evaluating at $y=1$ means we are only interested in the values of the
resulting function on the interval $I_{0}$, which is given by the coefficient
of $h_{0,0}$, which we can get by simply taking the first entry of the
previous matrix product.
That is,
$F^{\epsilon}_{n}(1)=\begin{bmatrix}1&0&0&\cdots&0\end{bmatrix}\left(A^{n}\begin{bmatrix}1&1&0&1&0&0&\cdots&0\end{bmatrix}^{T}\right)\,.$
(4)
It follows that
$F^{\epsilon}_{n+1}(1)\leq C\|A^{n+1}\|\leq C\,\|A\|\,\|A^{n}\|,$ (5)
for some constant $C$ (which may depend upon $\epsilon$).
Let us consider our current position. We wish to show that for some small
$\epsilon>0$, weak percolation does not occur (a.s.). The value
$F^{\epsilon}_{n+1}(1)$ is the probability that a fixed path of length $n$ is
open. If we let $\mu_{n}$ denote the number of paths of length $n$ (starting
at 0, say) in $\mathbb{Z}^{2}$, it is sufficient to show
$\lim_{n\to\infty}\;\mu_{n}\cdot F^{\epsilon}_{n+1}(1)\,=\,0\,.$ (6)
By sub-multiplicativity, $\mu_{n}^{1/n}$ is convergent; the limit is the
_connective constant_ , $\lambda$. It is known ([PT00]) that $\lambda\leq
2.679192495$. Recall that the spectral radius of $A$ is given by
$\rho(A)=\lim_{n\to\infty}\|A^{n}\|^{1/n}$. Moreover, $\rho(A)$ is just the
maximum of the absolute values of the eigenvalues of $A$. Now if
$\lambda\cdot\rho(A)<1$ then (6) follows. Thus, to establish (6) it is
sufficient to find $\epsilon_{0}>0$ for which $\rho(A)<0.373246<1/\lambda$.
Finally, by monotonicity, weak percolation would fail a.s. for each
$0\leq\epsilon\leq\epsilon_{0}$.
Using equations (2) and (3), we may easily compute the matrix entries for $A$.
When $\epsilon_{0}=0.1481$, $A$ is a $28\times 28$ matrix whose spectral
radius is approximately $\rho(A)\approx 0.373079$ so that
$\lambda\cdot\rho(A)<1$ as required. ∎
###### Proposition 3.6.
When $\epsilon$ is greater than $p_{\text{site}}$, the critical probability
for the standard site percolation model in $\mathbb{Z}^{2}$, the Lightning
Model has positive probability of strong percolation.
###### Proof.
For $\phi\in\mathbb{V}$ and $\epsilon>0$ set
$S=S_{\phi,\epsilon}=\\{a\in\mathbb{Z}^{2}:\phi_{a}>1-\epsilon\\}$. By
$\epsilon$-tolerance, if $a$ and $b$ are neighbors in $S$, then both edges
$(a,b)$ and $(b,a)$ are present in $f_{\epsilon}(\phi)$. In particular, if $C$
is a (non-directed) cluster in $S$, then $C$ is contained in a single strongly
connected component of $f_{\epsilon}(\phi)$.
Fix $\epsilon>p_{\text{site}}$. Since the vertex potentials are independently
uniformly distributed, each $a\in\mathbb{Z}^{2}$ belongs to the random set $S$
with probability $\epsilon$ independently of all other vertices. Let $A$ be
the set of vertex configurations such that $S$ is infinite. By standard site
percolation, $\lambda(A)>0$. By the previous paragraph,
$f_{\epsilon}^{-1}\\{0\overset{\epsilon}{\leftrightarrow}\infty\\}\supset A$,
so that
$\mathbb{P}_{\epsilon}(\\{0\overset{\epsilon}{\leftrightarrow}\infty\\})>0$ as
required.
∎
Wierman ([Wie95]) established that $p_{\text{site}}<0.679492$, which therefore
gives an upper bound for the critical threshold for strong percolation in the
lightning model.
## 4 Number of Infinite Components
In this section, we show that for the Lightning Model, the number of infinite
(strong) clusters is almost surely 0, 1, or $\infty$. By the results in the
previous section, for sufficiently large $\epsilon$ the Lightning Model
strongly percolates, so in that case there is at least one cluster. Due to
ergodic considerations, we know that the number of clusters is almost surely
constant. However, we cannot at this time rule out the possibility that there
are infinitely many infinite strong clusters.
###### Definition 4.1.
For $\omega\in\mathbb{E}$, let $N_{\omega}$ denote the number of infinite
strong clusters in the configuration $\omega$.
We remark that $N_{\omega}$ is a measurable function of $\omega$. For a
natural number $n$, let $B_{n}$ denote
$\\{(x,y)\in\mathbb{Z}^{2}:\max\\{|x|,|y|\\}\leq n\\}$. For $r\in\mathbb{N}$
and $\omega\in\mathbb{E}$, which we think of as a subgraph of
$\overrightarrow{G}$, the _restriction_ of $\omega$ to $B_{r}$, written
$\omega|_{B_{r}}$, denotes the induced subgraph of $\omega$ on the vertex set
$B_{r}$. For $\omega\in\mathbb{E}$ and $m<n<r$, let $C(m,n,r)(\omega)$ denote
the number of clusters in $\omega|_{B_{r}}$ that intersect both $B_{m}$ and
$B_{n}^{c}$.
###### Lemma 4.2.
For $\omega\in\mathbb{E}$, the limits
$\displaystyle\lim_{r\to\infty}C(m,n,r)(\omega),$
$\displaystyle\lim_{n\to\infty}\lim_{r\to\infty}C(m,n,r)(\omega)\text{ and}$
$\displaystyle\lim_{m\to\infty}\lim_{n\to\infty}\lim_{r\to\infty}C(m,n,r)(\omega)$
all exist (the first for all $n>m>0$ and the second for all $m>0$). Also
$N_{\omega}=\lim_{m\to\infty}\lim_{n\to\infty}\lim_{r\to\infty}C(m,n,r)(\omega)$,
so that $\omega\mapsto N_{\omega}$ is measurable.
###### Proof.
For now, let $n>m>0$ be fixed. For $r>n$, let $\leftrightarrow_{r}$ be the
equivalence relation on $B_{n}$ where $u\leftrightarrow_{r}v$ if there is a
directed path from $u$ to $v$ in $\omega|_{B_{r}}$ and a directed path from
$v$ to $u$ in $\omega|_{B_{r}}$. These equivalence relations are increasing.
That is, if $u\leftrightarrow_{r}v$, then $u\leftrightarrow_{r^{\prime}}v$ for
all $r^{\prime}>r$. Since there are finitely many equivalence relations on
$B_{n}$, they stabilize at some $r_{0}>n$. From that point onwards, the
sequence $(C(m,n,r)(\omega))_{r\geq r_{0}}$ does not change. (Prior to this
point, $C(m,n,r)(\omega)$ may increase as new connections added in the outer
layer can ensure that a cluster connects $B_{m}$ to $B_{n}^{c}$ (in a
bidirectional way); or decrease as new connections added in the outer layer
can merge two previously existing clusters). It follows that
$\lim_{r\to\infty}C(m,n,r)(\omega)$ exists. We see that
$\lim_{r\to\infty}C(m,n,r)(\omega)$ is the number of clusters in $\omega$ that
intersect both $B_{m}$ and $B_{n}^{c}$.
Given this, it is clear that $\lim_{r\to\infty}C(m,n,r)(\omega)$ is a non-
increasing function of $n$ (as if a cluster intersects $B_{n^{\prime}}^{c}$
for $n^{\prime}>n$, then it intersects $B_{n}^{c}$), so that
$\lim_{n\to\infty}\lim_{r\to\infty}C(m,n,r)(\omega)$ exists. This is the
number of infinite clusters in $\omega$ that intersect $B_{m}$ as a cluster is
infinite if and only if it intersects each $B_{n}^{c}$.
Finally we see $\lim_{n\to\infty}\lim_{r\to\infty}C(m,n,r)(\omega)$ is an
increasing function of $m$, so the limit as $m$ approaches $\infty$ also
exists, possibly taking the value $\infty$. From the above, we see that
$\lim_{m\to\infty}\lim_{n\to\infty}\lim_{r\to\infty}C(m,n,r)(\omega)$ is the
number of infinite clusters in $\omega$ as required.
Since $C(m,n,r)(\omega)$ is a measurable function of $\omega$ (as it depends
on finitely many coordinates), it follows that $N_{\omega}$ is measurable as
required. ∎
Here is the main theorem of this section:
###### Theorem 4.3.
For each $\epsilon>0$, there exists $k\in\\{0,1,\infty\\}$ such that for
$\mathbb{P}_{\epsilon}$-a.e. $\omega\in\mathbb{E}$, $N_{\omega}=k$.
For the proof of Theorem 4.3, we introduce a transformation on vertex
configurations that modifies the potential values only for vertices in a large
finite box centered at the origin. The idea is that as a result of applying the
transformation, a large sub-box (again centered at the origin) will be forced
to be strongly connected in every configuration.
For $n\in\mathbb{N}$, recall that
$B_{2n}=\\{(x,y)\in\mathbb{Z}^{2}\colon\max(|x|,|y|)\leq 2n\\}$. Within such a
box, define the sequence of _layers_ $L_{0},L_{1},\ldots$, $L_{2n}$ by
$L_{i}=\\{(x,y)\in\mathbb{Z}^{2}:\max(|x|,|y|)=2n-i\\}.$
These are illustrated in Figure 1.
We now define our transformation. For $n\in\mathbb{N}$ and $\eta\in(0,1)$,
define $\Psi_{n}^{\eta}:\mathbb{V}\to\mathbb{V}$ by
$\Psi_{n}^{\eta}(\phi)(a)=\begin{cases}(1-\eta)^{i}\phi(a)&\ \ \ \text{if
$a\in L_{i}\,$;}\\\ \phi(a)&\ \ \ \text{otherwise,}\end{cases}$
for any vertex configuration $\phi\in\mathbb{V}$ and vertex
$a\in\mathbb{Z}^{2}$. The following lemma describes the useful properties of
$\Psi_{n}^{\eta}$.
###### Lemma 4.4.
Let $\epsilon>0$ be given. The transformations $\Psi_{n}^{\eta}$ defined above
have the following properties:
1. 1.
For each $n\in\mathbb{N}$ and $\eta\in(0,1)$, if $A\subset\mathbb{V}$ has
positive measure, then $\Psi_{n}^{\eta}(A)$ also has positive measure.
2. 2.
Suppose $n>\log(\tfrac{1}{\epsilon})$ and set
$\eta=\log(\tfrac{1}{\epsilon})/n$. For each $\phi\in\mathbb{V}$,
$f_{\epsilon}(\Psi_{n}^{\eta}(\phi))$ has the property that all of the edges
within the central sub-box, $B_{n}$, are bidirectionally connected. In
particular, the strongly connected component of the origin contains $B_{n}$.
3. 3.
Suppose $n>\tfrac{1}{\epsilon}$ and let $\eta$ be as in 2. There exists a
universal constant $\delta>0$, independent of $\epsilon$ and $n$, so that the
probability that $\Psi_{n}^{\eta}$ doesn’t break any edges is at least
$\delta$. That is,
$\lambda\Big{(}\big{\\{}\phi\in\mathbb{V}\colon f_{\epsilon}(\phi)\preceq
f_{\epsilon}(\Psi_{n}^{\eta}(\phi))\big{\\}}\Big{)}\geq\delta.$
Figure 1: Layers $L_{0},L_{1},\ldots$ in the case $n=5$.
###### Proof of Lemma 4.4:.
Condition 1 follows since $\Psi_{n}^{\eta}$ is simply scaling the values of a
finite number of coordinates.
To establish condition 2, we show that $(1-\eta)^{n}<\epsilon$. This ensures
that for all $\phi\in\mathbb{V}$, the potential values of
$\Psi_{n}^{\eta}(\phi)$ at vertices in $B_{n}$ are all less than $\epsilon$.
By the argument in Proposition 3.6, this ensures that the edges contained in
$B_{n}$ are bidirectionally connected in
$f_{\epsilon}(\Psi_{n}^{\eta}(\phi))$.
Notice that $(1-\eta)^{n}<\epsilon$ follows immediately from the fact that for
$\eta\in(0,1)$, $1-\eta<e^{-\eta}$ and hence
$(1-\eta)^{n}<e^{-n\eta}=\epsilon$.
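This inequality is easy to confirm numerically; the sketch below is illustrative only, and the helper name `innermost_scale` is ours.

```python
import math

def innermost_scale(eps, n):
    """(1 - eta)^n with eta = log(1/eps)/n: an upper bound for the rescaled
    potential of any vertex of the central sub-box B_n (layers i >= n,
    potentials at most 1)."""
    eta = math.log(1.0 / eps) / n
    return (1.0 - eta) ** n

# Since 1 - eta < e^{-eta}, we get (1 - eta)^n < e^{-n*eta} = eps
# whenever n > log(1/eps), so eta lies in (0, 1).
for eps in (0.5, 0.2, 0.05):
    for n in (10, 100, 1000):
        assert n > math.log(1.0 / eps)
        assert innermost_scale(eps, n) < eps
```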
We move to the proof of 3. Define an order $\sqsubseteq$ on the set
$\\{\,\bullet\,,\rightarrow\\}$ by $\bullet\,\sqsubseteq\,\bullet$,
$\rightarrow\ \sqsubseteq\ \rightarrow$, and
$\bullet\,\sqsubseteq\ \rightarrow$, where $\bullet$ denotes the absence of an
edge. (This order induces the partial order $\preceq$ on edge configurations
in the obvious way.) Let $a,b\in\mathbb{Z}^{2}$ be any two adjacent vertices, and
let $\phi\in\mathbb{V}$ be some vertex configuration. We say that the edge
$e=(a,b)$ is _not broken_ by $\Psi_{n}^{\eta}$ if
$f_{\epsilon}(\phi)_{e}\sqsubseteq f_{\epsilon}(\Psi_{n}^{\eta}(\phi))_{e}$.
First notice that if $a,b\in\mathbb{Z}^{2}$ are adjacent vertices in the same
layer $L_{i}$, then the edge $(a,b)$ is never broken by $\Psi_{n}^{\eta}$.
This is because if both vertex values are scaled by the same positive value,
then the difference between them will be scaled by that value as well, and the
scaling is a contraction. It is worth noting that this may create new open
edges, something we will need to consider later in the proof of Theorem 4.3.
If $a$ and $b$ are adjacent vertices such that $a\in L_{i}$ and $b\in L_{i+1}$
(i.e. $b$ is closer to the origin), we refer to the edge $(a,b)$ as an
_inwards_ edge (similarly $(b,a)$ is an _outwards_ edge). We claim that if
$(a,b)$ is an inwards edge and $a\to b$ in $f_{\epsilon}(\phi)$, then $a\to b$
in $f_{\epsilon}(\Psi_{n}^{\eta}(\phi))$. Since $a\to b$ in
$f_{\epsilon}(\phi)$, we have $\phi(b)\leq\phi(a)+\epsilon$. This implies
$\Psi_{n}^{\eta}(\phi)(b)=(1-\eta)^{i+1}\phi(b)\leq(1-\eta)^{i+1}\phi(a)+(1-\eta)^{i+1}\epsilon\leq(1-\eta)^{i}\phi(a)+\epsilon=\Psi_{n}^{\eta}(\phi)(a)+\epsilon$,
so that $a\to b$ in $f_{\epsilon}(\Psi_{n}^{\eta}(\phi))$ also.
We now know that the only edges that might not be preserved by
$\Psi_{n}^{\eta}$ are the outwards edges. We will narrow this down even
further, proving that only edges in an outer “ring” near the boundary of
$B_{2n}$ might not be preserved. This will be useful because it represents a
small fraction of the total number of edges in $B_{2n}$.
###### Claim.
Let $e=(a,b)$ be an outwards edge with $a\in L_{i+1}$ and $b\in L_{i}$ and
$a\to b$ in some configuration $f_{\epsilon}(\phi)$. If
$i\geq\frac{1}{\epsilon}$ then $a\to b$ in
$f_{\epsilon}(\Psi_{n}^{\eta}(\phi))$.
That is, for $i\geq\frac{1}{\epsilon}$, no edges from level $i+1$ to level $i$
are broken.
###### Proof of Claim:.
By assumption $\phi_{b}<\phi_{a}+\epsilon$ and $i\geq 1/\epsilon$. We simply
need to verify that $(1-\eta)^{i}\phi_{b}<(1-\eta)^{i+1}\phi_{a}+\epsilon$.
We compute
$\displaystyle(1-\eta)^{i}\phi_{b}$
$\displaystyle<(1-\eta)^{i}\phi_{a}+(1-\eta)^{i}\epsilon$
$\displaystyle=(1-\eta)^{i+1}\phi_{a}+\epsilon+\eta(1-\eta)^{i}\phi_{a}-\epsilon(1-(1-\eta)^{i})$
$\displaystyle=(1-\eta)^{i+1}\phi_{a}+\epsilon+(1-\eta)^{i}\left[\eta\phi_{a}-\epsilon\left(\frac{1}{(1-\eta)^{i}}-1\right)\right].$
Since $\frac{1}{1-\eta}>1+\eta$, we see $\frac{1}{(1-\eta)^{i}}>1+i\eta$ so
that
$\eta\phi_{a}-\epsilon\big{(}\frac{1}{(1-\eta)^{i}}-1\big{)}<\eta\phi_{a}-\epsilon
i\eta=\eta(\phi_{a}-\epsilon i)<0$. Hence, the term displayed above in square
brackets is negative, so that
$(1-\eta)^{i}\phi_{b}<(1-\eta)^{i+1}\phi_{a}+\epsilon,$
as required. ∎
There is now a very small collection of edges which might not be preserved,
the outwards ($L_{i+1}\to L_{i}$) edges in the outermost $\frac{1}{\epsilon}$
layers ($i$ ranging between 0 and $\lfloor\frac{1}{\epsilon}\rfloor$). We move
to estimating the probability that they might be broken.
We call an edge between a vertex in $L_{1}$ and a vertex in $L_{0}$ an _outer_
edge. We begin by obtaining an upper bound on the probability that an outer
edge will not be preserved, and then use this to get an upper bound for the
probability that any edge is not preserved.
###### Claim.
Let $e=(a,b)$ be an outer edge. $\mathbb{P}_{\epsilon}(e\text{ is
broken})\leq\frac{\eta}{2}$.
###### Proof of Claim:.
Let $a\in L_{1}$ and $b\in L_{0}$ be neighbors in $\mathbb{Z}^{2}$. The
probability that the edge $(a,b)$ is not preserved by $\Psi_{n}^{\eta}$ is the
probability that the following two inequalities hold:
$\displaystyle\phi_{a}$ $\displaystyle>\phi_{b}-\epsilon;$
$\displaystyle\Psi_{n}^{\eta}(\phi)(a)$
$\displaystyle\not>\Psi_{n}^{\eta}(\phi)(b)-\epsilon.$
Let $s=\phi_{a}$ and $t=\phi_{b}$. The above conditions can then be written as
$\begin{split}t&<s+\epsilon;\\\ t&\geq(1-\eta)s+\epsilon.\end{split}$ (7)
The probability we want is the area of the region in $[0,1]^{2}$ defined by
these two inequalities. While the exact value is slightly messy to compute, we
may obtain a useful overestimate if we allow $t$ to extend to its maximum
value for $s\in[0,1]$. This region is a triangle with vertices at
$(0,\epsilon)$, $(1,1+\epsilon)$ and $(1,1-\eta+\epsilon)$, so it has width 1
and height $\eta$, and therefore area $\frac{\eta}{2}$. Hence, for an outer
edge $e$, $\mathbb{P}_{\epsilon}(e\text{ is broken})\leq\frac{\eta}{2}$, as
claimed. ∎
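The triangle-area bound lends itself to a quick Monte Carlo check. The sketch below is our own illustration; the function name, sample values of $\epsilon$ and $\eta$, and the seed are all ours.

```python
import random

def outer_break_estimate(eps, eta, trials=200_000, seed=0):
    """Monte Carlo estimate of the probability that an outer edge (a, b) is
    open for phi (t < s + eps) yet not open after rescaling
    (t >= (1 - eta) s + eps), where s = phi_a and t = phi_b are
    independent Uniform[0, 1] potentials, as in (7)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s, t = rng.random(), rng.random()
        if (1.0 - eta) * s + eps <= t < s + eps:
            hits += 1
    return hits / trials

eps, eta = 0.15, 0.1
# The estimate should land below the triangle-area bound eta/2 = 0.05
# (the bound ignores the cap t <= 1, so it overestimates the true area).
assert outer_break_estimate(eps, eta) <= eta / 2
```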
We now move to the case of a general outwards edge from $L_{i+1}\to L_{i}$,
for $1\leq i\leq\lfloor\frac{1}{\epsilon}\rfloor$. In order for an edge
$(a,b)$ with $a\in L_{i+1}$ and $b\in L_{i}$ to not be preserved, the
constraints (7) become
$\displaystyle t$ $\displaystyle<s+\epsilon;$ $\displaystyle t$
$\displaystyle\geq(1-\eta)s+(1-\eta)^{-i}\epsilon.$
Since the lower bound has been increased (as $\frac{1}{1-\eta}>1$), we see
that the area of the set of solutions is smaller than it was for the outer
edges from $L_{1}$ to $L_{0}$. Hence, for a general outward edge between
$L_{i+1}$ and $L_{i}$, the probability that it is broken is less than
$\eta/2$.
Summarizing: we know that the only edges which might be broken are outwards
edges in the outer $\lfloor\frac{1}{\epsilon}\rfloor$ layers, and that the
probability that any one of them is broken is less than $\eta/2$.
We move next to calculating the probability that no edge is broken by
$\Psi_{n}^{\eta}$. To do so, we consider separately the corners and sides of
the outermost $\lfloor\frac{1}{\epsilon}\rfloor$ layers of $B_{2n}$. Let
$C_{1},\ldots,C_{4}$ denote the four corners of $B_{2n}$ of size
$\lfloor\frac{1}{\epsilon}\rfloor\times\lfloor\frac{1}{\epsilon}\rfloor$ and
let $S_{1},\ldots,S_{4}$ denote the sides of $B_{2n}$, that is, the regions of
size
$\lfloor\frac{1}{\epsilon}\rfloor\times(4n+1-2\lfloor\frac{1}{\epsilon}\rfloor)$
and
$(4n+1-2\lfloor\frac{1}{\epsilon}\rfloor)\times\lfloor\frac{1}{\epsilon}\rfloor$
between the corners. These regions contain the only edges that could be broken
by applying $\Psi_{n}^{\eta}$.
Because there are fewer than $\frac{1}{\epsilon^{2}}$ outward edges in each
corner, and the probability that any given outward edge is broken is at
most $\frac{\eta}{2}$, the probability that some edge is broken is bounded
above by $\frac{\eta}{2\epsilon^{2}}$ and hence the probability that no edge
is broken in any given corner by $\Psi_{n}^{\eta}$ is at least
$1-\frac{\eta}{2\epsilon^{2}}$.
Next, we move to the sides. We intend to show that within each side, the
probability that no edge is broken by $\Psi_{n}^{\eta}$ is at least
$(1-\frac{\eta}{2\epsilon})^{4n}$. For this estimate we cannot simply use the
union bound as we did for the estimate in the corners; doing this naively
gives an upper bound for the probability that an edge is broken which is
greater than 1! The difficulty we face is the built-in dependence of the edges
in the lightning model: while the vertex potential values are independent, the
existence of an edge $a\to b$ affects the probability of an edge $b\to c$. We
want to use independence in some fashion, however. To obtain our desired
estimate, we split each side into disjoint pieces for which the event that
some edge is broken in one piece is independent of the event that some edge is
broken in another piece.
Decompose each side $S_{i}$ into $4n+1-2\lfloor\frac{1}{\epsilon}\rfloor$
disjoint slices, that is, outward paths of length $\lfloor 1/\epsilon\rfloor$
which have one vertex in each of the layers $L_{0},L_{1},\ldots L_{\lfloor
1/\epsilon\rfloor}$. The union of these slices contains all of the outward
edges in $S_{i}$. In each slice, the union bound implies that the probability
that some edge is broken is at most $\frac{\eta}{2\epsilon}$; hence the probability
that no edge is broken in a given slice is at least
$1-\frac{\eta}{2\epsilon}$. For each slice, consider the event that no edge is
broken. These events are independent. Thus, we see the probability that no
edge is broken in $S_{i}$ is at least
$(1-\frac{\eta}{2\epsilon})^{4n+1-2\lfloor\frac{1}{\epsilon}\rfloor}$.
Since the $C_{i}$’s and the $S_{j}$’s are mutually disjoint, the above
argument shows that the probability that no edges are broken in any of the
corners or sides is at least
$\left(1-\frac{\eta}{2\epsilon^{2}}\right)^{4}\left(1-\frac{\eta}{2\epsilon}\right)^{16n}.$
Recalling that $\eta=(\log\frac{1}{\epsilon})/n$, the probability that no
edges are broken is at least
$\left(1-\frac{\log\frac{1}{\epsilon}}{2n\epsilon^{2}}\right)^{4}\left(1-\frac{\log\frac{1}{\epsilon}}{2n\epsilon}\right)^{16n}.$
Since $\epsilon$ is fixed, by the well-known result
that $(1+\frac{a}{n})^{n}\to e^{a}$ we see that, for large $n$, the lower
bound for the probability that no edges are broken approaches
$e^{-8\log\frac{1}{\epsilon}/\epsilon}=\epsilon^{8/\epsilon},$
so that the probability that no edges are broken is bounded below
uniformly in $n$. ∎
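This convergence can be observed numerically. The sketch below is our own illustration, with $\epsilon=0.25$ chosen arbitrarily; it evaluates the displayed lower bound (with $\eta=\log(\tfrac{1}{\epsilon})/n$ substituted in) against the limiting value $\epsilon^{8/\epsilon}$.

```python
import math

def no_break_bound(eps, n):
    """The displayed lower bound for the probability that no edge is broken,
    with eta = log(1/eps)/n substituted in."""
    eta = math.log(1.0 / eps) / n
    return (1.0 - eta / (2.0 * eps ** 2)) ** 4 * (1.0 - eta / (2.0 * eps)) ** (16 * n)

eps = 0.25
limit = eps ** (8.0 / eps)  # = exp(-8 log(1/eps) / eps)
# The bound approaches the limit from below as n grows.
ratios = [no_break_bound(eps, n) / limit for n in (10 ** 3, 10 ** 5, 10 ** 7)]
```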
###### Proof of Theorem 4.3.
Fix $\epsilon>0$ so that strong percolation occurs. By ergodicity,
$N_{\omega}$ is constant, $\mathbb{P}_{\epsilon}$-almost surely. Suppose for a
contradiction that there are almost surely exactly $k$ infinite strongly-
connected components for some finite $k\geq 2$.
Suppose $n>\log\tfrac{1}{\epsilon}$ and let $\delta>0$ be as given in Lemma
4.4 so that for all such $n$,
$\lambda\Big{(}\big{\\{}\phi\in\mathbb{V}\colon f_{\epsilon}(\phi)\preceq
f_{\epsilon}(\Psi_{n}^{\eta}(\phi))\big{\\}}\Big{)}\geq\delta.$
By continuity of measures, we may choose $N>0$ so that for any $n\geq N,$
$\lambda\left(\\{\phi\in\mathbb{V}\colon\text{$B_{n}$ intersects each infinite
component in $f_{\epsilon}(\phi)$}\\}\right)>1-\delta.$
Now fix $n\geq\max\\{\log\tfrac{1}{\epsilon},N\\}$ and consider the two events
whose probabilities are given in the previous two inequalities. Since the sum
of the probabilities is larger than 1, the intersection $E$ of these two
events has positive probability. Since the image under $\Psi_{n}^{\eta}$ of a
set of positive measure still has positive measure (Condition 1 of Lemma 4.4),
we have $\lambda(\Psi_{n}^{\eta}(E))>0$.
Suppose $\phi\in E$. Then $f_{\epsilon}(\phi)$ has $k$ infinite clusters, and
the box $B_{n}$ must contain vertices from each of them. Additionally, any
edge of $f_{\epsilon}(\phi)$ will still be present in
$f_{\epsilon}(\Psi_{n}^{\eta}(\phi))$ (possibly becoming bidirectional). It
follows that in $f_{\epsilon}(\Psi_{n}^{\eta}(\phi))$, the $k$ infinite
strongly-connected clusters originally present in $f_{\epsilon}(\phi)$ are
contained in a single infinite cluster, which we will denote by
$\mathcal{C}_{*}(\phi)$.
It appears, then, that what we have created is a set of vertex configurations
of positive measure ($\Psi_{n}^{\eta}(E)$) for which the resulting edge
configurations have a single infinite cluster. However, in directed
percolation it is possible to modify finitely many edges and create an
infinite strong cluster where there was none before; see example 5.1 below.
Hence, it is conceivable that $k-1$ additional infinite clusters were created
during the modification process, which would avoid the contradiction that we
seek.
We can deal with this issue by defining a new map $\widehat{\Psi_{n}^{\eta}}$
by
$\widehat{\Psi_{n}^{\eta}}(\phi)_{a}=\begin{cases}\left(\Psi_{n}^{\eta}(\phi)\right)_{a}&\text{if
$a\in\mathcal{C}_{*}(\phi)$;}\\\ \phi_{a}&\text{otherwise.}\end{cases}$
Since there are only finitely many possibilities for
$\mathcal{C}_{*}(\phi)\cap B_{n}$, and all of them depend measurably on
$\phi$, there is a fixed set $\Lambda\subset B_{n}$ such that
$\mathcal{C}_{*}(\phi)\cap B_{n}=\Lambda$ for all $\phi$ in a positive measure
subset $A$ of $E$. A finite energy argument similar to that given in the proof
of Lemma 4.4(1) then shows that $\widehat{\Psi_{n}^{\eta}}(A)$ is of positive
measure.
Our final claim is that for any $\phi\in A$, $\widehat{\Psi_{n}^{\eta}}(\phi)$
has a unique infinite strongly connected cluster. To see this, let
$\mathcal{C}$ be any cluster for the edge configuration
$f_{\epsilon}(\widehat{\Psi_{n}^{\eta}}(\phi))$. If $\mathcal{C}$ intersects
$\mathcal{C}_{*}(\phi)$, then $\mathcal{C}\supset\mathcal{C}_{*}(\phi)$ (note
they may not be equal since they are generated from different potentials:
$\widehat{\Psi_{n}^{\eta}}(\phi)$ and $\Psi_{n}^{\eta}(\phi)$). On the other
hand, if $\mathcal{C}$ does not intersect $\mathcal{C}_{*}(\phi)$, then the
restriction of $f_{\epsilon}(\widehat{\Psi_{n}^{\eta}}(\phi))$ to
$\mathcal{C}$ is the same as the restriction of $f_{\epsilon}(\phi)$
to $\mathcal{C}$, so that $\mathcal{C}$ is a finite cluster. We
have shown that for each $\phi\in A$, any cluster in
$f_{\epsilon}(\widehat{\Psi_{n}^{\eta}}(\phi))$ is either contained in
$\mathcal{C}_{*}(\phi)$ or is finite. Since $A$ has positive measure, this
contradicts our original assumption that there were exactly $k$ infinite
strong components almost surely, and we are done. ∎
We remark that the proof in this section is essentially two-dimensional. Lemma
4.4, part 2 makes essential use of the fact that $\eta=\Theta(1/n)$: this
guarantees that after applying $\Psi$, the central block is fully connected.
On the other hand, Lemma 4.4, part 3 requires that $\eta=O(1/n)$: this part
ensures that no edges are broken when the potential is transformed by
$\Psi_{n}^{\eta}$. The proof works by showing that the only edges potentially
broken are those within $1/\epsilon$ layers of the boundary of the box. In two dimensions, there are
$O(n)$ edges that are potentially broken and each has a probability of the
order of $\eta$ of breaking, allowing us to show that the probability that no
edges are broken is $\Theta(1)$.
## 5 Open Problems
It is natural to ask whether one can rule out the case of infinitely many
infinite strong clusters. The Burton-Keane theorem [BK89] is a well known
approach to this in the case of undirected graphs. The following examples show
that some things behave quite differently when dealing with directed graphs.
###### Example 5.1.
Creating an infinite strong cluster by modifying a single edge: Consider an
edge configuration with edges only on two parallel lines, corresponding to,
say, the lines $y=1$ and $y=2$ in $\mathbb{Z}^{2}$. Suppose the top line is a
source, that is, each vertex $(x,2)$ for $x\geq 0$ has a right-pointing edge
to $(x+1,2)$, and each vertex $(x,2)$ for $x\leq 0$ has a left-pointing edge
to $(x-1,2)$. Reverse the corresponding edges on the line $y=1$, and at every
fourth $x$-value, say, place a downward edge from $(x,2)$ to $(x,1)$. This has
no strongly connected infinite component, but adding a single edge from
$(0,1)$ to $(0,2)$ creates one.
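This example can be verified mechanically on a finite window of the two-line configuration. The sketch below is our own illustration; the window half-width $N=8$ and all helper names are ours. It compares the largest strongly connected component before and after adding the single edge $(0,1)\to(0,2)$.

```python
from collections import defaultdict, deque

def reach(adj, s):
    """Vertices reachable from s along directed edges (BFS)."""
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def largest_scc_size(nodes, adj):
    """Size of the largest strongly connected component (mutual reachability)."""
    R = {u: reach(adj, u) for u in nodes}
    return max(sum(1 for v in R[u] if u in R[v]) for u in nodes)

N = 8  # window half-width (any multiple of 4 works)
nodes = [(x, y) for x in range(-N, N + 1) for y in (1, 2)]
adj = defaultdict(list)
for x in range(N):
    adj[(x, 2)].append((x + 1, 2))      # top line points away from 0 ...
    adj[(-x, 2)].append((-x - 1, 2))
    adj[(x + 1, 1)].append((x, 1))      # ... bottom line points toward 0
    adj[(-x - 1, 1)].append((-x, 1))
for x in range(-N, N + 1, 4):
    adj[(x, 2)].append((x, 1))          # downward edge at every fourth column

before = largest_scc_size(nodes, adj)
adj[(0, 1)].append((0, 2))              # the single added upward edge
after = largest_scc_size(nodes, adj)
```

Before the extra edge the configuration is acyclic, so every strongly connected component is a singleton; afterwards all $2(2N+1)=34$ vertices of the window lie in a single strongly connected component.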
###### Example 5.2.
Many splitting points give rise to the same boundary partition: Burton and
Keane’s proof works by considering _“splitting points”_ , that is, places where,
when a single vertex is removed from a configuration, an infinite cluster
splits into at least three separate infinite clusters. The proof counts
splitting points, showing that if they exist, their number grows
proportionally to the volume of a region by the ergodic theorem, while showing
they grow at most proportionally to the surface area of a region. For the
upper bound they study, in a large volume, which boundary points belong to
which infinite component when a splitting point is removed. Critically,
removing different splitting points gives rise to different infinite
component structures. Unfortunately, in the directed case, the removal of many
splitting points may give rise to the same infinite component structures.
This is illustrated schematically in Figure 2: when any of the (red) splitting
points is removed, the only infinite clusters are the bi-directional paths
connecting the circle to infinity.
Figure 2: A failure of Burton-Keane in directed graphs
Two more questions of interest:
1. Does positive probability of weak percolation imply positive probability of
strong percolation? We conjecture that it does.
2. The results prior to Section 5 carry over to higher dimensions. However, the
proof of Theorem 4.3 depends upon working in two dimensions. Does this result
hold for $d\geq 2$?
Data Availability Statement: Data sharing not applicable to this article as no
datasets were generated or analysed during the current study.
## References
* [BK89] R. Burton and M. Keane, _Density and Uniqueness in Percolation_, Comm. Math. Phys. 121 (1989), 501–505.
* [Gri06] Geoffrey Grimmett, _Uniqueness and multiplicity of infinite clusters_, Dynamics & Stochastics, IMS Lecture Notes Monogr. Ser., vol. 48, Inst. Math. Statist., Beachwood, OH, 2006, pp. 24–36.
* [Kön27] D. König, _Über eine Schlussweise aus dem Endlichen ins Unendliche_, Acta Litterarum ac Scientiarum 3 (1927), 121–130.
* [NS81] C. M. Newman and L. S. Schulman, _Infinite clusters in percolation models_, J. Stat. Phys. 26 (1981), 613–628.
* [PT00] A. Pönitz and P. Tittmann, _Improved upper bounds for self-avoiding walks in $\mathbb{Z}^{d}$_, Electronic Journal of Combinatorics 7 (2000), Research Paper 21, 10 pp. (electronic).
* [RPK+07] J. A. Riousset, V. P. Pasko, P. R. Krehbiel, R. J. Thomas, and W. Rison, _Three-dimensional fractal modeling of intracloud lightning discharge in a New Mexico thunderstorm and comparison with lightning mapping observations_, J. Geophys. Res. Atmospheres 112 (2007), D15203.
* [Wie95] J. C. Wierman, _Substitution method critical probability bounds for the square lattice site percolation model_, Combin. Probab. Comput. 4 (1995), 181–188.
J.T. Campbell∗ (Corresponding author), Department of Mathematical Sciences,
University of Memphis, Memphis TN 38152
E-mail address, J.T. Campbell: jcampbll@memphis.edu
A. Deane, Department of Mathematics and Statistics, University of Victoria,
Victoria, BC V8W 2Y2 Canada
E-mail address, A. Deane: alexandradeane@uvic.ca
A. Quas, Department of Mathematics and Statistics, University of Victoria,
Victoria, BC V8W 2Y2 Canada
E-mail address, A. Quas: aquas@uvic.ca
# Online Adversarial Purification based on Self-Supervised Learning
Changhao Shi, Chester Holtz & Gal Mishne
University of California, San Diego
<EMAIL_ADDRESS>
###### Abstract
Deep neural networks are known to be vulnerable to adversarial examples, where
a perturbation in the input space leads to an amplified shift in the latent
network representation. In this paper, we combine canonical supervised
learning with self-supervised representation learning, and present Self-
supervised Online Adversarial Purification (SOAP), a novel defense strategy
that uses a self-supervised loss to purify adversarial examples at test-time.
Our approach leverages the label-independent nature of self-supervised
signals, and counters the adversarial perturbation with respect to the self-
supervised tasks. SOAP yields competitive robust accuracy against state-of-
the-art adversarial training and purification methods, with considerably less
training complexity. In addition, our approach is robust even when adversaries
are given knowledge of the purification defense strategy. To the best of our
knowledge, our paper is the first that generalizes the idea of using self-
supervised signals to perform online test-time purification.
## 1 Introduction
Deep neural networks have achieved remarkable results in many machine learning
applications. However, these networks are known to be vulnerable to
adversarial attacks, i.e. strategies which aim to find adversarial examples
that are close to, or even perceptually indistinguishable from, their natural
counterparts but easily misclassified by the networks. This vulnerability
raises theory-wise issues about the interpretability of deep learning as well
as application-wise issues when deploying neural networks in security-
sensitive applications.
Many strategies have been proposed to empower neural networks to defend
against these adversaries. The current most widely used genre of defense
strategies is adversarial training. Adversarial training is an on-the-fly data
augmentation method that improves robustness by training the network not only
with clean examples but adversarial ones as well. For example, Madry et al.
(2017) propose projected gradient descent as a universal first-order attack
and strengthen the network by presenting it with such adversarial examples
during training (i.e., adversarial training). However, this method is
computationally expensive as finding these adversarial examples involves
sample-wise gradient computation at every epoch.
Self-supervised representation learning aims to learn meaningful
representations of unlabeled data where the supervision comes from the data
itself. While this seems orthogonal to the study of adversarial vulnerability,
recent works use representation learning as a lens to understand as well as
improve adversarial robustness (Hendrycks et al., 2019; Mao et al., 2019; Chen
et al., 2020a; Naseer et al., 2020). This recent line of research suggests
that self-supervised learning, which often leads to a more informative and
meaningful data representation, can benefit the robustness of deep networks.
In this paper, we study how self-supervised representation learning can
improve adversarial robustness. We present Self-supervised Online Adversarial
Purification (SOAP), a novel defense strategy that uses an auxiliary self-
supervised loss to purify adversarial examples at test-time, as illustrated in
Figure 1. During training, besides the classification task, we jointly train
the network on a carefully selected self-supervised task. The multi-task
learning improves the robustness of the network and more importantly, enables
us to counter the adversarial perturbation at test-time by leveraging the
label-independent nature of self-supervised signals. Experiments demonstrate
that SOAP performs competitively on various architectures across different
datasets with only a small computation overhead compared with vanilla
training. Furthermore, we design a new attack strategy that targets both the
classification and the auxiliary tasks, and show that our method is robust to
this adaptive adversary as well.
(a) Joint training of the classification and auxiliary tasks.
(b) Test-time online purification.
Figure 1: An illustration of self-supervised online adversarial purification
(SOAP). Left: joint training of the classification and the auxiliary task;
Right: input adversarial example is purified iteratively to counter the
representational shift, then classified. Note that the encoder is shared by
both classification and purification.
## 2 Related work
#### Adversarial training
Adversarial training aims to improve robustness through data augmentation,
where the network is trained on adversarially perturbed examples instead of
the clean original training samples (Goodfellow et al., 2014; Kurakin et al.,
2016; Tramèr et al., 2017; Madry et al., 2017; Kannan et al., 2018; Zhang et
al., 2019). By solving a min-max problem, the network learns a smoother data
manifold and decision boundary which improve robustness. However, the
computational cost of adversarial training is high because strong adversarial
examples are typically found in an iterative manner with heavy gradient
calculation. Compared with adversarial training, our method avoids solving the
complex inner-max problem and thus is significantly more efficient in
training. Our method does increase test-time computation but it is practically
negligible per sample.
#### Adversarial purification
Another genre of robust learning focuses on shifting the adversarial examples
back to the clean data representation, namely purification. Gu & Rigazio
(2014) explore using a general DAE (Vincent et al., 2008) to remove
adversarial noise; Meng & Chen (2017) train a reformer network, which is a
collection of autoencoders, to move adversarial examples towards clean
manifold; Liao et al. (2018) train a UNet that can denoise adversarial
examples to their clean counterparts; Samangouei et al. (2018) train a GAN on
clean examples and project the adversarial examples to the manifold of the
generator; Song et al. (2018) assume adversarial examples have lower
probability and learn the image distribution with a PixelCNN so that they can
maximize the probability of a given test example; Naseer et al. (2020) train a
conditional GAN by letting it play a min-max game with a critic network in
order to differentiate between clean and adversarial examples. In contrast to
the above approaches, SOAP achieves better robust accuracy and does not
require a GAN, which is hard and inefficient to train. More importantly, our approach
exploits a wider range of self-supervised signals for purification and
conceptually can be applied to any format of data and not just images, given
an appropriate self-supervised task.
#### Self-supervised learning
Self-supervised learning aims to learn intermediate representations of
unlabeled data that are useful for unknown downstream tasks. This is done by
solving a self-supervised task, or pretext task, where the supervision of the
task comes from the data itself. Recently, a variety of self-supervised tasks
have been proposed on images, including data reconstruction (Vincent et al.,
2008; Rifai et al., 2011), relative positioning of patches (Doersch et al.,
2015; Noroozi & Favaro, 2016), colorization (Zhang et al., 2016),
transformation prediction (Dosovitskiy et al., 2014; Gidaris et al., 2018) or
a combination of tasks (Doersch & Zisserman, 2017).
More recently, studies have shown how self-supervised learning can improve
adversarial robustness. Mao et al. (2019) find that adversarial attacks fool
the networks by shifting latent representation to a false class. Hendrycks et
al. (2019) observe that PGD adversarial training along with an auxiliary
rotation prediction task improves robustness, while Naseer et al. (2020) use
feature distortion as a self-supervised signal to find transferable attacks
that generalize across different architectures and tasks. Chen et al. (2020a)
combine adversarial training and self-supervised pre-training to boost fine-
tuned robustness. These methods typically combine self-supervised learning
with adversarial training, thus the computational cost is still high. In
contrast, our approach achieves robust accuracy by test-time purification
which uses a variety of self-supervised signals as auxiliary objectives.
## 3 Self-supervised purification
### 3.1 Problem formulation
As mentioned above, Mao et al. (2019) observe that adversaries shift clean
representations towards false classes to diminish robust accuracy. The small
error in input space, carefully chosen by adversaries, gets amplified through
the network, and finally leads to wrong classification. A natural way to solve
this is to perturb adversarial examples so as to shift their representation
back to the true classes, i.e. purification. In this paper we only consider
classification as our main task, but our approach should generalize readily
to other tasks as well.
Consider an encoder $z=f(x;\theta_{\textrm{enc}})$, a classifier
$g(z;\theta_{\textrm{cls}})$ on top of the representation $z$, and the network
$g\circ f$ a composition of the encoder and the classifier. We formulate the
purification problem as follows: for an adversarial example
$(x_{\textrm{adv}},y)$ and its clean counterpart $(x,y)$ (unknown to the
network), a purification strategy $\pi$ aims to find
$x_{\textrm{pfy}}=\pi(x_{\textrm{adv}})$ that is as close to the clean example
$x$ as possible: $x_{\textrm{pfy}}\rightarrow x$. However, this problem is
underdetermined as different clean examples can share the same adversarial
counterpart, i.e. there might be multiple or even infinitely many solutions for
$x_{\textrm{pfy}}$. Thus, we consider the relaxation
$\displaystyle\min_{\pi}\ \mathcal{L}_{\textrm{cls}}\left((g\circ
f)(x_{\textrm{pfy}}),y\right)\ \quad\textrm{s.t.}\ \
||x_{\textrm{pfy}}-x_{\textrm{adv}}||\leq\epsilon_{\textrm{adv}},\quad
x_{\textrm{pfy}}=\pi(x_{\textrm{adv}}),$ (1)
i.e. we accept $x_{\textrm{pfy}}$ as long as $\mathcal{L}_{\textrm{cls}}$ is
sufficiently small and the perturbation is bounded. Here
$\mathcal{L}_{\textrm{cls}}$ is the cross entropy loss for classification and
$\epsilon_{\textrm{adv}}$ is the budget of adversarial perturbation. However,
this problem is still unsolvable since neither the true label $y$ nor the
budget $\epsilon_{\textrm{adv}}$ is available at test-time. We need an
alternative approach that can lead to a similar optimum.
### 3.2 Self-supervised Online Purification
Let $h({z;\theta_{\textrm{aux}}})$ be an auxiliary device that shares the same
representation $z$ with $g(z;\theta_{\textrm{cls}})$, and
$\mathcal{L}_{\textrm{aux}}$ be the auxiliary self-supervised objective. The
intuition behind SOAP is that the shift in representation $z$ that hinders
classification will hinder the auxiliary self-supervised task as well. In
other words, large $\mathcal{L}_{\textrm{aux}}$ often implies large
$\mathcal{L}_{\textrm{cls}}$. Therefore, we propose to use
$\mathcal{L}_{\textrm{aux}}$ as an alternative to $\mathcal{L}_{\textrm{cls}}$
in Eq. (1). Then we can purify the adversarial examples using the auxiliary
self-supervised signals, since the purified examples which perform better on
the auxiliary task (small $\mathcal{L}_{\textrm{aux}}$) should perform better
on classification as well (small $\mathcal{L}_{\textrm{cls}}$).
During training, we jointly minimize the classification loss and self-
supervised auxiliary loss
$\displaystyle\min_{\theta}\ \\{\mathcal{L}_{\textrm{cls}}\left((g\circ
f)(x;\theta_{\textrm{enc}},\theta_{\textrm{cls}}),y\right)+\alpha\mathcal{L}_{\textrm{aux}}\left((h\circ
f)(x;\theta_{\textrm{enc}},\theta_{\textrm{aux}})\right)\\},$ (2)
where $\alpha$ is a trade-off parameter between the two tasks. At test-time,
given fixed network parameters $\theta$, we use the label-independent
auxiliary objective to perform gradient descent _in the input space_. The
purification objective is
$\displaystyle\min_{\pi}\ \mathcal{L}_{\textrm{aux}}((h\circ
f)(x_{\textrm{pfy}}))\ \ \textrm{s.t.}\
||x_{\textrm{pfy}}-x_{\textrm{adv}}||\leq\epsilon_{\textrm{pfy}},x_{\textrm{pfy}}=\pi(x_{\textrm{adv}}),$
(3)
where $\epsilon_{\textrm{pfy}}$ is the budget of purification. This is
legitimate at test-time because unlike Eq. (1), the supervision or the
purification signal comes from the data itself. Also, compared with vanilla
training, the only training increment of SOAP is an additional self-supervised
regularization term. Thus, the computational complexity is largely reduced
compared with adversarial training methods. In Sec. 4, we will show that
adversarial examples do perform worse on auxiliary tasks and the gradient of
the auxiliary loss provides useful information on improving robustness. Note
that $\epsilon_{\textrm{adv}}$ is replaced with $\epsilon_{\textrm{pfy}}$ in
Eq. (3), and we will discuss how to find appropriate $\epsilon_{\textrm{pfy}}$
in the next section.
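As a concrete (toy) illustration of the training objective in Eq. (2), the numpy sketch below combines a cross-entropy classification loss with an auxiliary term weighted by $\alpha$. The $\ell_2$ reconstruction auxiliary used here is just one of the signals discussed in Sec. 3.4, and the inputs are hypothetical arrays rather than the paper's networks:

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Cross-entropy of one example against an integer class label."""
    z = logits - logits.max()                 # stabilized softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def joint_loss(cls_logits, label, x, x_recon, alpha=1.0):
    """Eq. (2): classification loss plus alpha-weighted auxiliary loss.
    The auxiliary term here is an l2 reconstruction error (one possible
    choice of self-supervised signal)."""
    l_cls = softmax_cross_entropy(cls_logits, label)
    l_aux = float(np.sum((x - x_recon) ** 2))
    return l_cls + alpha * l_aux

# toy inputs: 3-class logits, a 4-pixel "image" and its reconstruction
loss = joint_loss(np.array([2.0, 0.5, -1.0]), 0,
                  np.ones(4), np.full(4, 0.9), alpha=0.5)
```

In training, this scalar would be minimized over all network parameters $\theta$; at test-time only the auxiliary term is used, and the descent is over the input.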
### 3.3 Online Purification
Inspired by the PGD (Madry et al., 2017) attack (see Alg. 1), we propose a
multi-step purifier (see Alg. 2) which can be seen as its inverse. In contrast
to a PGD attack, which performs projected gradient _ascent_ on the input in
order to maximize the cross entropy loss $\mathcal{L}_{\textrm{cls}}$, the
purifier performs projected gradient _descent_ on the input in order to
minimize the auxiliary loss $\mathcal{L}_{\textrm{aux}}$. The purifier
achieves this goal by perturbing the adversarial examples, i.e.
$\pi(x_{\textrm{adv}})=x_{\textrm{adv}}+\delta$, while keeping the
perturbation under a budget, i.e.
${||\delta||}_{\infty}\leq\epsilon_{\textrm{pfy}}$. Note that it is also
plausible to use optimization-based algorithms analogous to some $\ell_{2}$
adversaries such as CW (Carlini & Wagner, 2017); however, this would require
more steps of gradient descent at test-time.
Taking the bound into account, the final objective of the purifier is to
minimize the following
$\displaystyle\min_{\delta}\ \mathcal{L}_{\textrm{aux}}((h\circ
f)(x_{\textrm{adv}}+\delta))\ \ \textrm{s.t.}\
||\delta||\leq\epsilon_{\textrm{pfy}},\ x_{\textrm{adv}}+\delta\in[0,1].$ (4)
For a multi-step purifier with step size $\gamma$, at each step we calculate
$\displaystyle\delta_{t}=\delta_{t-1}+\gamma\operatorname{sign}(\nabla_{x}\mathcal{L}_{\textrm{aux}}((h\circ
f)(x_{\textrm{adv}}+\delta_{t-1}))).$ (5)
For step size $\gamma=\epsilon_{\textrm{pfy}}$ and number of steps $T=1$, the
multi-step purifier becomes a single-step purifier. This is analogous to PGD
degrading to FGSM (Goodfellow et al., 2014) when the step size of the
adversary $\gamma=\epsilon_{\textrm{adv}}$ and the number of projected
gradient ascent steps $T=1$ in Alg. 1.
A remaining question is how to set $\epsilon_{\textrm{pfy}}$ when
$\epsilon_{\textrm{adv}}$ is unknown. If $\epsilon_{\textrm{pfy}}$ is too
small compared to the attack, it will not be sufficient to neutralize the
adversarial perturbations. In the absence of knowledge of the attack
$\epsilon_{\textrm{adv}}$, we can use the auxiliary loss as a proxy to set the
appropriate $\epsilon_{\textrm{pfy}}$. In Figure 3 we plot the average
auxiliary loss (green plot) of the purified examples for a range of
$\epsilon_{\textrm{pfy}}$ values. The “elbows” of the auxiliary loss curves
almost identify the unknown $\epsilon_{\textrm{adv}}$ in every case with
slight over-estimation. This suggests that the value for which the auxiliary
loss approximately stops decreasing is a good estimate of
$\epsilon_{\textrm{adv}}$. Empirically, we find that using a slightly over-
estimated $\epsilon_{\textrm{pfy}}$ benefits the accuracy after purification,
similar to the claim by Song et al. (2018). This is because our network is
trained with noisy examples and thus can handle the new noise introduced by
purification. At test-time, we use the auxiliary loss to set
$\epsilon_{\textrm{pfy}}$ in an online manner, by trying a range of values for
$\epsilon_{\textrm{pfy}}$ and selecting the smallest one which minimizes the
auxiliary loss for each _individual_ example. In the experiment section we
refer to the output of this selection procedure as $\epsilon_{\textrm{min-aux}}$.
We also empirically find for each sample the $\epsilon_{\textrm{pfy}}$
that results in the best adversarial accuracy, denoted
$\epsilon_{\textrm{oracle}}$ in the experiment section. This is an upper-bound
on the performance SOAP can achieve.
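The $\epsilon_{\textrm{min-aux}}$ selection procedure can be sketched as a grid scan. The `purify` and `aux_loss` callables below are hypothetical one-dimensional stand-ins chosen so the behavior is easy to check; in SOAP they would be the multi-step purifier and the self-supervised loss:

```python
import numpy as np

def select_eps_pfy(x_adv, purify, aux_loss, eps_grid):
    """Scan an ascending grid of purification budgets and return the
    smallest one minimizing the auxiliary loss (eps_min-aux)."""
    best_eps, best_loss = eps_grid[0], np.inf
    for eps in eps_grid:
        loss = aux_loss(purify(x_adv, eps))
        if loss < best_loss - 1e-12:         # strict improvement only, so
            best_eps, best_loss = eps, loss  # ties keep the smaller eps
    return best_eps

# toy stand-ins: the auxiliary loss is the distance to an (unknown)
# clean point at 0, and the purifier shrinks |x| by at most eps
aux_loss = lambda x: abs(float(x))
purify = lambda x, eps: np.sign(x) * max(abs(x) - eps, 0.0)
eps_star = select_eps_pfy(0.3, purify, aux_loss, [0.1, 0.2, 0.3, 0.4, 0.5])
```

Because the loss stops decreasing once the budget covers the perturbation, the scan lands on the "elbow" rather than the largest budget, mirroring the slight over-estimation observed in Figure 3.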
Algorithm 1 PGD attack
1: $x$: a test example; $y$: its label; $T$: the number of attack steps
2:$x_{\textrm{adv}}$: the adversarial example
3:$\delta\leftarrow 0$
4:for $t=1,2,\ldots,T$ do
5: $\ell\leftarrow\mathcal{L}_{\textrm{cls}}((g\circ
f)(x+\delta;\theta_{\textrm{enc}},\theta_{\textrm{cls}}),y)$
6: $\delta\leftarrow\delta+\gamma\operatorname{sign}(\nabla_{x}\ell)$
7: $\delta\leftarrow\min(\max(\delta,-\epsilon_{\textrm{adv}}),\epsilon_{\textrm{adv}})$
8: $\delta\leftarrow\min(\max(x+\delta,0),1)-x$
9:end for
10:$x_{\textrm{adv}}\leftarrow x+\delta$
Algorithm 2 Multi-step purification
1: $x$: a test example; $T$: the number of purification steps
2:$x_{\textrm{pfy}}$: the purified example
3:$\delta\leftarrow 0$
4:for $t=1,2,\ldots,T$ do
5: $\ell\leftarrow\mathcal{L}_{\textrm{aux}}((h\circ
f)(x+\delta;\theta_{\textrm{enc}},\theta_{\textrm{aux}}))$
6: $\delta\leftarrow\delta-\gamma\operatorname{sign}(\nabla_{x}\ell)$
7: $\delta\leftarrow\min(\max(\delta,-\epsilon_{\textrm{pfy}}),\epsilon_{\textrm{pfy}})$
8: $\delta\leftarrow\min(\max(x+\delta,0),1)-x$
9:end for
10:$x_{\textrm{pfy}}\leftarrow x+\delta$
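A minimal numpy sketch of Alg. 2 follows. The gradient function here is a toy analytic quadratic whose minimizer plays the role of the clean input; a real implementation would compute $\nabla_{x}\mathcal{L}_{\textrm{aux}}$ by backpropagation through $h\circ f$:

```python
import numpy as np

def purify(x_adv, grad_aux, eps_pfy, gamma, T):
    """Multi-step purifier (Alg. 2): signed gradient *descent* on the
    auxiliary loss, clipping the perturbation to an l_inf ball of radius
    eps_pfy and the purified input to [0, 1]."""
    delta = np.zeros_like(x_adv)
    for _ in range(T):
        g = grad_aux(x_adv + delta)          # gradient of L_aux wrt input
        delta = delta - gamma * np.sign(g)   # descend (PGD ascends)
        delta = np.clip(delta, -eps_pfy, eps_pfy)
        delta = np.clip(x_adv + delta, 0.0, 1.0) - x_adv
    return x_adv + delta

# toy auxiliary loss L(x) = ||x - 0.5||^2, minimized at the "clean" 0.5
grad_aux = lambda x: 2.0 * (x - 0.5)
x_pfy = purify(np.array([0.8, 0.2]), grad_aux, eps_pfy=0.3, gamma=0.1, T=5)
```

Note that the only differences from the PGD attack are the sign of the update and the loss being differentiated, which is why the purifier can be read as the attack's inverse.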
### 3.4 Self-supervised signals
Theoretically, any existing self-supervised objective can be used for
purification. However, due to the nature of purification and also for the sake
of efficiency, not every self-supervised task is suitable. A suitable
auxiliary task should be sensitive to the representation shift caused by
adversarial perturbation, differentiable with respect to the entire input,
e.g. every pixel for an image, and also efficient in both train and test-time.
In addition, note that certain tasks are naturally incompatible with certain
datasets. For example, a rotation-based self-supervised task cannot work on a
rotation-invariant dataset. In this paper, we exploit three types of self-
supervised signals: data reconstruction, rotation prediction and label
consistency.
#### Data reconstruction
Data reconstruction (DR), including both deterministic data compression and
probabilistic generative modeling, is probably one of the most natural forms
of self-supervision. The latent representation, usually lying on a much lower
dimensional space than the input space, is required to be comprehensive enough
for the decoder to reconstruct the input data.
To perform data reconstruction, we use a decoder network as the auxiliary
device $h$ and require it to reconstruct the input from the latent
representation $z$. In order to better learn the underlying data manifold, as
well as to increase robustness, the input is corrupted with additive Gaussian
noise $\eta$ (and clipped) before being fed into the encoder $f$. The auxiliary loss
is the $\ell_{2}$ distance between examples and their noisy reconstruction via
the autoencoder $h\circ f$:
$\displaystyle\mathcal{L}_{\textrm{aux}}={||x-(h\circ f)(x+\eta)||}_{2}^{2}.$
(6)
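A sketch of this auxiliary loss, with a hypothetical identity map standing in for the trained autoencoder $h\circ f$:

```python
import numpy as np

rng = np.random.default_rng(0)

def dr_aux_loss(x, autoencoder, sigma=0.5):
    """Eq. (6): l2 distance between x and the reconstruction of a
    noise-corrupted (and clipped) copy of x."""
    x_noisy = np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)
    return float(np.sum((x - autoencoder(x_noisy)) ** 2))

# toy "autoencoder": the identity map, so the loss reflects only the noise
loss = dr_aux_loss(np.full(16, 0.5), lambda z: z)
```

At test-time the same loss is evaluated without the input corruption, as described in Sec. 4.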
In Figure 2, we present the outputs of an autoencoder trained using Eq. (4),
for clean, adversarial and purified inputs. The purification shifts the
representation of the adversarial examples closer to their original class (for
example, 2, 4, 8 and 9). Note that SOAP does not use the output of the
autoencoder as a defense, but rather uses the autoencoder loss to purify the
input. We plot the autoencoder output here as we consider it to provide
insight into how the trained model ‘sees’ these samples.
Figure 2: Input digits of the encoder (left) and output digits of the decoder
(right). From top to bottom are the clean digits, adversarially perturbed
digits and purified digits, respectively. Red rectangles: the adversary fools
the model to incorrectly classify the perturbed digit 8 as a 3 and the
purification corrects the perception back to an 8.
#### Rotation prediction
Rotation prediction (RP), as an image self-supervised task, was proposed by
Gidaris et al. (2018). The authors rotate the original images in a dataset by
a certain degree, then use a simple classifier to predict the degree of
rotation from the high-level representation produced by a convolutional neural network.
The rationale is that the learned representation has to be semantically
meaningful for the classifier to predict the rotation successfully.
Following Gidaris et al. (2018), we make four copies of the image and rotate
each of them by one of four degrees:
$\Omega=\\{0^{\circ},90^{\circ},180^{\circ},270^{\circ}\\}$. The auxiliary
task is a 4-way classification using representation $z=f(x)$, for which we use
a simple linear classifier as the auxiliary device $h$. The auxiliary loss is
the summation of 4-way classification cross entropy of each rotated copy
$\displaystyle\mathcal{L}_{\textrm{aux}}=-\sum_{\omega\in\Omega}\log(h(f(x_{\omega}))_{\omega})$
(7)
where $x_{\omega}$ is a rotated input, and $h(\cdot)_{\omega}$ is the
predictive probability of it being rotated by $\omega$. While the standard
rotation prediction task works well for training, we found that it tends to
under-estimate $\epsilon_{\textrm{pfy}}$ at test-time. Therefore, for purification we replace the cross
entropy classification loss by the mean square error between predictive
distributions and one-hot targets. This increases the difficulty of the
rotation prediction task and leads to better robust accuracy.
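A sketch of the purification-time rotation objective, using `np.rot90` for the four copies and the MSE-to-one-hot form described above; the uniform predictor is a hypothetical stand-in for $h\circ f$:

```python
import numpy as np

def rotation_copies(x):
    """Four rotated copies of an HxW image, one per angle in Omega."""
    return [np.rot90(x, k) for k in range(4)]

def rp_aux_loss(x, predict):
    """Rotation-prediction auxiliary loss in the purification-time form:
    MSE between predicted 4-way probabilities and one-hot targets."""
    loss = 0.0
    for k, x_rot in enumerate(rotation_copies(x)):
        p = predict(x_rot)            # length-4 probability vector
        loss += float(np.sum((p - np.eye(4)[k]) ** 2))
    return loss

# toy predictor: always outputs the uniform distribution over 4 angles
loss = rp_aux_loss(np.zeros((8, 8)), lambda z: np.full(4, 0.25))
```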
#### Label consistency
The rationale of label consistency (LC) is that different data augmentations
of the same sample should get consistent prediction from the network. This
exact or similar concept is widely used in semi-supervised learning (Sajjadi
et al., 2016; Laine & Aila, 2016), and also successfully applied in self-
supervised contrastive learning (He et al., 2020; Chen et al., 2020b).
We adopt label consistency to perform purification. The auxiliary task here is
to minimize the $\ell_{2}$ distance between two augmentations $a_{1}(x)$ and
$a_{2}(x)$ of a given image $x$, in the logit space given by $g(\cdot)$. The
auxiliary device of LC is the exact classifier, i.e. $h=g$, and the auxiliary
loss
$\displaystyle\mathcal{L}_{\textrm{aux}}={||(g\circ f)(a_{1}(x))-(g\circ
f)(a_{2}(x))||}_{2}^{2}.$ (8)
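A sketch of Eq. (8) with a hypothetical linear network and a feature reversal standing in for a horizontal flip:

```python
import numpy as np

def lc_aux_loss(x, augment1, augment2, logits):
    """Eq. (8): squared l2 distance between the logits of two
    augmentations of the same input."""
    z1, z2 = logits(augment1(x)), logits(augment2(x))
    return float(np.sum((z1 - z2) ** 2))

# toy 2-class linear "network" on 3 features; the second augmentation
# reverses the features, a stand-in for a real image flip
W = np.arange(6.0).reshape(2, 3)
loss = lc_aux_loss(np.array([1.0, 0.0, 0.0]),
                   lambda v: v, lambda v: v[::-1], lambda v: W @ v)
```

Since $h=g$ for LC, no extra auxiliary head is trained; the same classifier logits drive both classification and purification.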
(a) SOAP-DR
(b) SOAP-RP
(c) SOAP-LC
Figure 3: Auxiliary loss vs. $\epsilon_{\textrm{pfy}}$. SOAP (green plot)
reduces the high adversarial auxiliary loss (orange plot) to the low clean
level (blue plot). The vertical dashed line is the value of
$\epsilon_{\textrm{adv}}$. The trained models are FCN and ResNet-18 for MNIST
and CIFAR10, respectively, with a PGD attack.
## 4 Experiments
We evaluate SOAP on the MNIST, CIFAR10 and CIFAR100 datasets following Madry
et al. (2017).
#### MNIST (LeCun et al., 1998)
For MNIST, we evaluate our method on a fully-connected network (FCN) and a
convolutional neural network (CNN). For the auxiliary task, we evaluate the
efficacy of data reconstruction. For the FCN $g(\cdot)$ is a linear classifier
and $h(\cdot)$ is a fully-connected decoder; for the CNN $g(\cdot)$ is a
2-layer MLP and $h(\cdot)$ is a convolutional decoder. The output of the
decoder is squashed into the range of $[0,1]$ by a sigmoid function. During
training, the input digits are corrupted by an additive Gaussian noise
($\mu=0,\sigma=0.5$). At test-time, $\mathcal{L}_{\textrm{aux}}$ of the
reconstruction is computed without input corruption. SOAP runs $T=5$
iterations with step size $\gamma=0.1$.
#### CIFAR10 $\&$ CIFAR100 (Krizhevsky & Hinton, 2009)
For CIFAR10 and CIFAR100, we evaluate our method on a ResNet-18 (He et al.,
2016) and a 10-widen Wide-ResNet-28 (Zagoruyko & Komodakis, 2016). For the
auxiliary task, we evaluate rotation prediction and label consistency. To
train on rotation prediction, each rotated copy is corrupted by an additive
Gaussian noise ($\mu=0,\sigma=0.1$), encoded by $f(\cdot)$, and classified by
a linear classifier $g(\cdot)$ for object recognition and by an auxiliary
linear classifier $h(\cdot)$ for degree prediction. This results in a batch
size 4 times larger than the original. At test-time, similar to DR, we compute
$\mathcal{L}_{\textrm{aux}}$ on clean input images.
To train on label consistency, we augment the input images twice using a
composition of random flipping, random cropping and additive Gaussian
corruption ($\mu=0,\sigma=0.1$). Both of these two augmentations are used to
train the classifier, therefore the batch size is twice as large as the
original. At test-time, we use the input image as one copy and flip-crop the
image to get another copy. Using the input image ensures that every pixel in
the image is purified, and using definite flipping and cropping ensures there
is enough difference between the input image and its augmentation. For both
rotation prediction and label consistency, SOAP runs $T=5$ iterations with
step size $\gamma=4/255$.
Note that we did not apply all auxiliary tasks on all datasets due to the
compatibility issue mentioned in Sec. 3.4. DR is not suitable for CIFAR as
reconstruction via an autoencoder is typically challenging on more realistic
image datasets. RP is naturally incompatible with MNIST because the digits 0,
1, and 8 are self-symmetric, and the digits 6 and 9 are interchangeable under a
180-degree rotation. Similarly, LC is not appropriate for MNIST because
common data augmentations such as flipping and cropping are less meaningful on
digits.
### 4.1 White-box attacks
In Tables 1–3 we compare SOAP against widely-used adversarial training
(Goodfellow et al., 2014; Madry et al., 2017) and purification methods
(Samangouei et al., 2018; Song et al., 2018) on a variety of $\ell_{2}$ and
$\ell_{\infty}$ bounded attacks: FGSM, PGD, CW, and DeepFool (Moosavi-Dezfooli
et al., 2016). For MNIST, both FGSM and PGD are $\ell_{\infty}$ bounded with
$\epsilon_{\textrm{adv}}=0.3$, and the PGD runs 40 iterations with a step size
of $0.01$; CW and DeepFool are $\ell_{2}$ bounded with
$\epsilon_{\textrm{adv}}=4$. For CIFAR10, FGSM and PGD are $\ell_{\infty}$
bounded with $\epsilon_{\textrm{adv}}=8/255$, and PGD runs 20 iterations with
a step size of $2/255$; CW and DeepFool are $\ell_{2}$ bounded with
$\epsilon_{\textrm{adv}}=2$. For CW and DeepFool, which are optimization-based,
resulting attacks that exceed the bound are projected onto the
$\epsilon$-ball. We mark the best performance for each attack by an
underlined and bold value and the second best by a bold value. We do not mark
the oracle accuracy, but it serves as an empirical upper bound for
purification.
For MNIST, SOAP-DR has great advantages over FGSM and PGD adversarial training
on all attacks when the model has small capacity (FCN). This is because
adversarial training typically requires a large parameter set to learn a
complex decision boundary while our method does not have this constraint. When
using a larger CNN, SOAP outperforms FGSM adversarial training and comes close
to Defense-GAN and PGD adversarial training on $\ell_{\infty}$ attacks. SOAP
also achieves better clean accuracy compared with all other methods.
Note that FGSM AT achieves better accuracy under FGSM attacks than when there
is no attack for the large capacity networks. This is due to the label leaking
effect (Kurakin et al., 2016): the model learns to classify examples from
their perturbations rather than the examples themselves.
Table 1: MNIST results. In each data row, the left five columns report FCN and the right five CNN.

Method | No Atk | FGSM | PGD | CW | DF | No Atk | FGSM | PGD | CW | DF
---|---|---|---|---|---|---|---|---|---|---
No Def | 98.10 | 16.87 | 0.49 | 0.01 | 1.40 | 99.15 | 1.49 | 0.00 | 0.00 | 0.69
FGSM AT | 79.76 | 80.57 | 2.95 | 6.22 | 17.24 | 98.78 | 99.50 | 33.70 | 0.02 | 6.16
PGD AT | 76.82 | 60.70 | 57.07 | 31.68 | 13.82 | 98.97 | 96.38 | 93.22 | 90.31 | 75.55
Defense-GAN | 95.84 | 79.30 | 84.10 | 95.07 | 95.29 | 95.92 | 90.30 | 91.93 | 95.82 | 95.68
SOAP-DR | | | | | | | | | |
$\epsilon_{\textrm{pfy}}=0$ | 97.57 | 29.15 | 0.58 | 0.25 | 2.32 | 99.04 | 65.35 | 27.54 | 0.35 | 0.69
$\epsilon_{\textrm{pfy}}=\epsilon_{\textrm{min-aux}}$ | 97.56 | 66.85 | 61.88 | 86.81 | 87.02 | 98.94 | 87.78 | 84.92 | 74.61 | 81.27
$\epsilon_{\textrm{pfy}}=\epsilon_{\textrm{oracle}}*$ | 98.93 | 69.21 | 64.76 | 97.88 | 97.97 | 99.42 | 89.40 | 86.62 | 98.44 | 98.47
Table 2: CIFAR-10 results. In each data row, the left five columns report ResNet-18 and the right five Wide-ResNet-28.

Method | No Atk | FGSM | PGD | CW | DF | No Atk | FGSM | PGD | CW | DF
---|---|---|---|---|---|---|---|---|---|---
No Def | 90.54 | 15.42 | 0.00 | 0.00 | 6.26 | 95.13 | 14.82 | 0.00 | 0.00 | 3.28
FGSM AT | 72.73 | 44.16 | 37.40 | 2.69 | 24.58 | 72.20 | 91.63 | 0.01 | 0.00 | 14.41
PGD AT | 74.23 | 47.43 | 42.11 | 3.14 | 25.84 | 85.92 | 51.58 | 41.50 | 2.06 | 24.08
Pixel-Defend | 79.00 | 39.85 | 29.89 | 76.47 | 76.89 | 83.68 | 41.37 | 39.00 | 79.30 | 79.61
SOAP-RP | | | | | | | | | |
$\epsilon_{\textrm{pfy}}=0$ | 73.64 | 5.77 | 0.47 | 0.00 | 13.65 | 88.68 | 30.21 | 8.52 | 0.08 | 10.67
$\epsilon_{\textrm{pfy}}=\epsilon_{\textrm{min-aux}}$ | 71.97 | 35.80 | 38.53 | 68.22 | 68.44 | 90.94 | 51.11 | 51.90 | 83.03 | 82.50
$\epsilon_{\textrm{pfy}}=\epsilon_{\textrm{oracle}}*$ | 87.57 | 37.60 | 39.40 | 79.80 | 84.34 | 95.55 | 52.69 | 52.61 | 86.99 | 90.49
SOAP-LC | | | | | | | | | |
$\epsilon_{\textrm{pfy}}=0$ | 86.36 | 22.81 | 0.15 | 0.00 | 8.52 | 93.40 | 59.23 | 3.55 | 0.01 | 46.98
$\epsilon_{\textrm{pfy}}=\epsilon_{\textrm{min-aux}}$ | 84.07 | 51.02 | 51.42 | 73.95 | 74.79 | 91.89 | 64.83 | 53.58 | 80.33 | 60.56
$\epsilon_{\textrm{pfy}}=\epsilon_{\textrm{oracle}}*$ | 94.06 | 59.45 | 62.29 | 86.94 | 88.88 | 96.93 | 71.85 | 63.10 | 88.96 | 73.66
Table 3: CIFAR-100 results

Method | ResNet-18: No Atk | FGSM | PGD | CW | DF | Wide-ResNet-28: No Atk | FGSM | PGD | CW | DF
---|---|---|---|---|---|---|---|---|---|---
No Def | 65.56 | 3.81 | 0.01 | 0.00 | 12.30 | 78.16 | 13.76 | 0.06 | 0.01 | 9.05
FGSM AT | 44.35 | 20.30 | 17.41 | 4.23 | 18.15 | 46.45 | 88.24 | 0.15 | 0.00 | 13.40
PGD AT | 42.15 | 21.92 | 20.04 | 3.57 | 17.90 | 62.71 | 28.15 | 21.34 | 0.65 | 16.57
SOAP-RP | | | | | | | | | |
$\epsilon_{\textrm{pfy}}=0$ | 40.47 | 2.53 | 0.45 | 0.03 | 11.89 | 60.33 | 13.30 | 4.65 | 0.09 | 12.19
$\epsilon_{\textrm{pfy}}=\epsilon_{\textrm{min-aux}}$ | 35.21 | 11.65 | 11.73 | 32.97 | 33.51 | 60.80 | 22.25 | 22.00 | 54.11 | 54.70
$\epsilon_{\textrm{pfy}}=\epsilon_{\textrm{oracle}}*$ | 45.57 | 12.44 | 12.04 | 41.13 | 46.51 | 72.03 | 24.42 | 24.19 | 63.04 | 67.86
SOAP-LC | | | | | | | | | |
$\epsilon_{\textrm{pfy}}=0$ | 57.86 | 6.11 | 0.01 | 0.01 | 12.72 | 74.04 | 16.46 | 0.49 | 0.00 | 9.65
$\epsilon_{\textrm{pfy}}=\epsilon_{\textrm{min-aux}}$ | 52.91 | 22.93 | 27.55 | 50.26 | 50.57 | 61.01 | 31.40 | 37.53 | 56.09 | 53.79
$\epsilon_{\textrm{pfy}}=\epsilon_{\textrm{oracle}}*$ | 69.99 | 27.52 | 31.82 | 62.87 | 68.65 | 82.74 | 37.56 | 47.07 | 71.19 | 73.39
For CIFAR-10, on ResNet-18 SOAP-RP beats Pixel-Defend on all attacks except
FGSM, and beats PGD adversarial training on $\ell_{2}$ attacks; on Wide-
ResNet-28 it performs on par with or better than the other methods on all
attacks. SOAP-LC achieves superior accuracy compared with the other methods
whether the capacity is small or large. Note that we choose Pixel-Defend as
our purification baseline since Defense-GAN does not work on CIFAR-10.
Specifically, our method achieves over $50\%$ accuracy under the strong PGD
attack, which is $10\%$ higher than PGD adversarial training. SOAP also
exhibits clear advantages over adversarial training on the $\ell_{2}$ attacks.
Moreover, compared with vanilla training (‘No Def’), the multi-task training
of SOAP improves robustness even without purification
($\epsilon_{\textrm{pfy}}=0$), a pattern also observed on MNIST. Examples are
shown in Figure 4.
For CIFAR-100, SOAP shows similar advantages over the other methods. SOAP-RP
beats PGD adversarial training on PGD attacks when using the large Wide-
ResNet-28 model, and on $\ell_{2}$ attacks in all cases; SOAP-LC again
achieves superior accuracy compared with all other methods, whether the
capacity is small or large.
Our results demonstrate that SOAP is effective under both $\ell_{\infty}$ and
$\ell_{2}$ bounded attacks, as opposed to adversarial training, which defends
effectively against $\ell_{2}$ attacks only for MNIST with a CNN. This implies
that while the formulation of the purification in Eq. (4) mirrors an
$\ell_{\infty}$ bounded attack, our defense is not restricted to this specific
type of attack; the bound in Eq. (4) serves merely as a constraint on the
purification perturbation rather than as a priori knowledge of the attack.
(a) SOAP-RP
(b) SOAP-LC
Figure 4: Adversarial and purified CIFAR10 examples by SOAP with Wide-
ResNet-28 under PGD attacks. True classes are shown on the top of each column
and the model predictions are shown under each image.
#### Auxiliary-aware attacks
So far, we have focused on standard adversaries that rely only on the
classification objective. A natural question is: can an adversary easily find
a stronger attack given knowledge of our purification defense? In this
section, we introduce a more ‘complete’ white-box adversary that is aware of
the purification method, and show that it is not straightforward to attack
SOAP even with knowledge of the auxiliary task used for purification.
In contrast to canonical adversaries, here we consider adversaries that
jointly optimize the cross entropy loss and the auxiliary loss with respect to
the input. Since SOAP aims to minimize the auxiliary loss, an auxiliary-aware
adversary maximizes the cross entropy loss while simultaneously minimizing the
auxiliary loss. The intuition is that such an adversary tries to find examples
that are “on-manifold” with respect to the auxiliary task (Stutz et al., 2019)
yet still fool the classifier. The auxiliary-aware adversaries perform
gradient ascent on the following combined objective
$\displaystyle\max_{x}\ \{\mathcal{L}_{\textrm{cls}}(f(x),y;\theta_{\textrm{enc}},\theta_{\textrm{cls}})-\beta\mathcal{L}_{\textrm{aux}}(g(x;\theta_{\textrm{enc}},\theta_{\textrm{aux}}))\},$ (9)
where $\beta$ is a trade-off parameter between the cross entropy loss and the
auxiliary loss. An auxiliary-aware adversary degrades to a canonical one when
$\beta=0$ in the combined objective.
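One step of such an auxiliary-aware attack can be sketched with the same sign-gradient machinery as a standard PGD attack. This is an illustrative sketch only: the function name is ours, and the pre-computed input gradients `grad_cls` and `grad_aux` stand in for backpropagation through $f$ and $g$.

```python
import numpy as np

def auxiliary_aware_step(x, x_orig, grad_cls, grad_aux, beta, step_size, eps):
    """One PGD-style ascent step on the combined objective of Eq. (9):
    L_cls - beta * L_aux. Setting beta = 0 recovers a canonical attack."""
    combined_grad = grad_cls - beta * grad_aux
    x_new = x + step_size * np.sign(combined_grad)      # ascend the objective
    x_new = np.clip(x_new, x_orig - eps, x_orig + eps)  # stay in the eps-ball
    return np.clip(x_new, 0.0, 1.0)                     # keep valid pixel range
```

With a negative $\beta$, the auxiliary gradient term flips sign and the step also increases the auxiliary loss, matching the "attacking the auxiliary objective as well" regime discussed below.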
As shown in Figure 5, an adversary cannot benefit from knowledge of the
defense in a straightforward way. When the trade-off parameter $\beta$ is
negative (i.e. the adversary attacks the auxiliary objective as well), the
attacks are weakened (blue plot) and purification based on all three
auxiliaries achieves better robust accuracy (orange plot) as the magnitude of
$\beta$ increases. When $\beta$ is positive, the accuracy of SOAP using data
reconstruction and label consistency increases with $\beta$. The reason for
this is that the auxiliary component of the adapted attack obfuscates the
cross entropy gradient, and thus weakens the canonical attack. The accuracy of
rotation prediction stays stable as $\beta$ varies, i.e. rotation prediction
is more sensitive to this kind of attack than the other tasks.
(a) SOAP-DR
(b) SOAP-RP
(c) SOAP-LC
Figure 5: Purification against auxiliary-aware PGD attacks. Plots are
classification accuracy before (blue) and after (orange) purification.
### 4.2 Black-box attacks
Table 4 compares SOAP-DR with adversarial training against FGSM black-box
attacks (Papernot et al., 2017). Following their approach, we let white-box
adversaries, e.g. FGSM, attack a substitute model, with a potentially
different architecture, to generate black-box adversarial examples for the
target model. The substitute model is trained on a limited set of 150 test
images unseen by the target model. These images are labeled by the target
model and augmented using a Jacobian-based method. SOAP significantly
outperforms adversarial training on FCN; for CNN it outperforms FGSM
adversarial training and comes close to PGD adversarial training.
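As a rough illustration of the Jacobian-based augmentation round, a sketch follows. The function name, the stand-in gradient `target_grad_fn`, and the step size `lam` are our own assumptions, not the exact settings of Papernot et al. (2017).

```python
import numpy as np

def jacobian_augment(X, target_grad_fn, lam=0.1):
    """One round of Jacobian-based dataset augmentation: each substitute-
    training image is shifted along the sign of the target model's output
    gradient for its assigned label, doubling the substitute dataset."""
    X_new = np.stack([x + lam * np.sign(target_grad_fn(x)) for x in X])
    return np.concatenate([X, X_new])
```

Each round the augmented images are re-labeled by querying the target model, so the substitute's training set grows around the target's decision boundary.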
Table 4: MNIST Black-box Results

Method | FCN target: No Atk | FCN sub. | CNN sub. | CNN target: No Atk | FCN sub. | CNN sub.
---|---|---|---|---|---|---
No Def | 98.10 | 25.45 | 39.10 | 99.15 | 49.49 | 49.25
FGSM AT | 79.76 | 40.88 | 58.74 | 98.78 | 93.62 | 96.52
PGD AT | 76.82 | 62.87 | 69.07 | 98.97 | 97.79 | 98.09
SOAP-DR | | | | | |
$\epsilon_{\textrm{pfy}}=0$ | 97.57 | 78.52 | 92.72 | 99.04 | 95.25 | 97.43
$\epsilon_{\textrm{pfy}}=\epsilon_{\textrm{min-aux}}$ | 97.56 | 90.35 | 94.51 | 98.94 | 96.02 | 97.80
$\epsilon_{\textrm{pfy}}=\epsilon_{\textrm{oracle}}*$ | 98.93 | 94.34 | 97.33 | 99.42 | 98.12 | 98.81
## 5 Conclusion
In this paper, we introduced SOAP: using self-supervision to perform test-time
purification as an online defense against adversarial attacks. During
training, the model learns a clean data manifold through joint optimization of
the cross entropy loss for classification and a label-independent auxiliary
loss for purification. At test-time, a purifier counters adversarial
perturbation through projected gradient descent of the auxiliary loss with
respect to the input. SOAP is consistently competitive across different
network capacities as well as different datasets. We also show that even with
knowledge of the self-supervised task, adversaries do not gain an advantage
over SOAP. While in this paper we only explore how SOAP performs on images,
our purification approach can be extended to any data format with suitable
self-supervised signals. We hope this paper can inspire future exploration on
a broader range of self-supervised signals for adversarial purification.
## References
* Carlini & Wagner (2017) Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In _2017 IEEE Symposium on Security and Privacy (SP)_ , pp. 39–57. IEEE, 2017.
* Chen et al. (2020a) Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, and Zhangyang Wang. Adversarial robustness: From self-supervised pre-training to fine-tuning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 699–708, 2020a.
* Chen et al. (2020b) Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. _arXiv preprint arXiv:2002.05709_ , 2020b.
* Doersch & Zisserman (2017) Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 2051–2060, 2017.
* Doersch et al. (2015) Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In _Proceedings of the IEEE international Conference on Computer Vision_ , pp. 1422–1430, 2015.
* Dosovitskiy et al. (2014) Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In _Advances in neural information processing systems_ , pp. 766–774, 2014.
* Gidaris et al. (2018) Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. _arXiv preprint arXiv:1803.07728_ , 2018.
* Goodfellow et al. (2014) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. _arXiv preprint arXiv:1412.6572_ , 2014.
* Gu & Rigazio (2014) Shixiang Gu and Luca Rigazio. Towards deep neural network architectures robust to adversarial examples. _arXiv preprint arXiv:1412.5068_ , 2014.
* He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition_ , pp. 770–778, 2016.
* He et al. (2020) Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 9729–9738, 2020.
* Hendrycks et al. (2019) Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and Dawn Song. Using self-supervised learning can improve model robustness and uncertainty. In _NeurIPS_ , 2019.
* Hwang et al. (2019) Uiwon Hwang, Jaewoo Park, Hyemi Jang, Sungroh Yoon, and Nam Ik Cho. Puvae: A variational autoencoder to purify adversarial examples. _IEEE Access_ , 7:126582–126593, 2019.
* Kannan et al. (2018) Harini Kannan, Alexey Kurakin, and Ian Goodfellow. Adversarial logit pairing. _arXiv preprint arXiv:1803.06373_ , 2018.
* Krizhevsky & Hinton (2009) A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. _Master’s thesis, Department of Computer Science, University of Toronto_ , 2009.
* Kurakin et al. (2016) Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. _arXiv preprint arXiv:1611.01236_ , 2016.
* Laine & Aila (2016) Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. _arXiv preprint arXiv:1610.02242_ , 2016.
* LeCun et al. (1998) Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_ , 86(11):2278–2324, 1998.
* Liao et al. (2018) Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. Defense against adversarial attacks using high-level representation guided denoiser. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 1778–1787, 2018.
* Madry et al. (2017) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. _arXiv preprint arXiv:1706.06083_ , 2017.
* Mao et al. (2019) Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, and Baishakhi Ray. Metric learning for adversarial robustness. In _Advances in Neural Information Processing Systems_ , pp. 480–491, 2019.
* Meng & Chen (2017) Dongyu Meng and Hao Chen. Magnet: a two-pronged defense against adversarial examples. In _Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security_ , pp. 135–147, 2017.
* Moosavi-Dezfooli et al. (2016) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In _Proceedings of the IEEE conference on Computer Vision and Pattern Recognition_ , pp. 2574–2582, 2016.
* Naseer et al. (2020) Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Fatih Porikli. A self-supervised approach for adversarial robustness. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 262–271, 2020.
* Noroozi & Favaro (2016) M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In _ECCV_ , 2016.
* Papernot et al. (2017) Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In _Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security_ , pp. 506–519, 2017.
* Rifai et al. (2011) Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In _ICML_ , 2011.
* Sajjadi et al. (2016) Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In _Advances in neural information processing systems_ , pp. 1163–1171, 2016.
* Samangouei et al. (2018) Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-gan: Protecting classifiers against adversarial attacks using generative models. _arXiv preprint arXiv:1805.06605_ , 2018.
* Song et al. (2018) Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. In _International Conference on Learning Representations_ , 2018. URL https://openreview.net/forum?id=rJUYGxbCW.
* Stutz et al. (2019) David Stutz, Matthias Hein, and Bernt Schiele. Disentangling adversarial robustness and generalization. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 6976–6987, 2019.
* Tramèr et al. (2017) Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. _arXiv preprint arXiv:1705.07204_ , 2017.
* Vincent et al. (2008) Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In _Proceedings of the 25th international Conference on Machine Learning_ , pp. 1096–1103, 2008.
* Zagoruyko & Komodakis (2016) Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. _arXiv preprint arXiv:1605.07146_ , 2016.
* Zhang et al. (2019) Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. _arXiv preprint arXiv:1901.08573_ , 2019.
* Zhang et al. (2016) Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In _European Conference on Computer Vision_ , pp. 649–666. Springer, 2016.
## Appendix A Appendix
### A.1 Illustration of self-supervised tasks
For readers who are not familiar with self-supervised representation learning,
we detail the self-supervised tasks here. As data reconstruction is
straightforward to understand, we simply illustrate rotation prediction and
label consistency in Figure 6. As shown on the left-hand side, to perform
rotation prediction, we duplicate an input image into 4 copies and rotate each
by one of four angles. We then use the auxiliary classifier to predict the
rotation of each copy. For the label consistency auxiliary task on the right-
hand side, we duplicate an input image into 2 copies and apply separate data
augmentations to each of them. The left copy is augmented with random cropping
and the right copy with random cropping as well as horizontal flipping. We
require the predictive distributions of these 2 augmented copies to be close.
(a) Rotation prediction
(b) Label consistency
Figure 6: An illustration of auxiliary self-supervised tasks.
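The rotation-prediction batch construction described above can be sketched for a single image as follows (a minimal sketch; the function name is ours):

```python
import numpy as np

def make_rotation_batch(img):
    """Four copies of the input image rotated by 0/90/180/270 degrees,
    each labelled with the index of its rotation (0..3)."""
    copies = np.stack([np.rot90(img, k) for k in range(4)])
    labels = np.arange(4)
    return copies, labels
```

The auxiliary classifier is then trained to recover `labels` from `copies`, which requires no ground-truth class labels.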
### A.2 Hyper-parameters of the purifier
Beyond $\epsilon_{\textrm{pfy}}$, the two additional hyper-parameters of SOAP
are the step size of the purifier $\gamma$, and the number of iterations
performed by the purifier $T$. The selection of these hyper-parameters is
important for the efficacy of purification. A step size that is too small or a
number of iterations that is too large can cause the purifier to get stuck in
a local minimum neighboring the perturbed example. This is confirmed by our
empirical finding that using a relatively large step size for a small number
of iterations is better than using a relatively small step size for a large
number of iterations. Although a large step size makes it hard to get the
cleanest purified examples, this drawback is compensated by adding noise in
training. Training the model on corrupted examples makes the model robust to
the residual noise left by purification.
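The purification procedure these hyper-parameters govern can be sketched as a short projected-descent loop. This is a sketch under assumptions: `aux_grad_fn` stands in for backpropagating the auxiliary loss to the input, and we use a sign step for the $\ell_{\infty}$-style projection; an implementation could equally use the raw gradient.

```python
import numpy as np

def purify(x_in, aux_grad_fn, gamma, T, eps_pfy):
    """T projected-gradient-descent steps on the auxiliary loss with step
    size gamma, constrained to an eps_pfy-ball around the test input."""
    x = x_in.copy()
    for _ in range(T):
        x = x - gamma * np.sign(aux_grad_fn(x))          # descend the aux loss
        x = np.clip(x, x_in - eps_pfy, x_in + eps_pfy)   # project to the ball
        x = np.clip(x, 0.0, 1.0)                         # valid pixel range
    return x
```

A relatively large `gamma` with a small `T`, as recommended above, keeps the loop from settling into a local minimum next to the perturbed example.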
### A.3 Training details
Table 5: Basic modules & specifics

Module | Specifics
---|---
Conv($m$, $k\times k$, $s$) | 2-D convolutional layer with $m$ feature maps, $k\times k$ kernel size, and stride $s$ on both directions
Maxpool($s$) | 2-D maxpooling layer with stride $s$ on both directions
FC($m$) | Fully-connected layer with $m$ outputs
ReLU | Rectified linear unit activation
BN | 1-D or 2-D batch normalization
Dropout($p$) | Dropout layer with probability $p$
ShortCut | Residual addition that bypasses the basic block
Table 6: ResNet basic blocks

Block($m$, $k\times k$, $s$) | Wide-Block($m$, $k\times k$, $s$, $p$)
---|---
Conv($m$, $k\times k$, $s$) | Conv($m\times 10$, $k\times k$, 1)
BN | Dropout($p$)
ReLU | BN
Conv($m$, $k\times k$, $1$) | ReLU
BN | Conv($m\times 10$, $k\times k$, $s$)
ShortCut | ShortCut
ReLU | BN
| ReLU
Table 7: Architectures

FCN | CNN | ResNet-18 | Wide-ResNet-28
---|---|---|---
FC(256) | Conv(32, $3\times 3$, 2) | Conv(16, $3\times 3$, 1) | Conv(16, $3\times 3$, 1)
ReLU | ReLU | BN | BN
Dropout(0.5) | Conv(64, $3\times 3$, 2) | ReLU | ReLU
FC(128) | ReLU | Block(16, $3\times 3$, 1) $\times$ 2 | Wide-Block(16, $3\times 3$, 1, 0.3) $\times$ 4
ReLU | BN | Block(32, $3\times 3$, 2) | Wide-Block(32, $3\times 3$, 2, 0.3)
Dropout(0.5) | FC(128) | Block(32, $3\times 3$, 1) | Wide-Block(32, $3\times 3$, 1, 0.3) $\times$ 3
FC(10) | ReLU | Block(64, $3\times 3$, 2) | Wide-Block(64, $3\times 3$, 2, 0.3)
| BN | Block(64, $3\times 3$, 1) | Wide-Block(64, $3\times 3$, 1, 0.3)
| FC(10) | FC(10) | FC(10)
The architectures of the networks and the training details are described as
follows. Table 5 describes the basic modules and their specifics, and Table 6
describes the low-level building blocks of residual networks. Full
architectures of the networks are listed in Table 7.
For MNIST, we evaluate on a 2-hidden-layer FCN and a CNN with the same
architecture as in Madry et al. (2017). The FCN is trained for 100 epochs with
an initial learning rate of $0.01$ and the CNN for 200 epochs with an initial
learning rate of $0.1$, using SGD. The learning rate is decreased by a factor
of 10 halfway through training in both cases. The batch size is 128. In both
FGSM and PGD adversarial training, the adversaries are $\ell_{\infty}$ bounded
with $\epsilon_{\textrm{adv}}=0.3$. For PGD adversarial training, the
adversary runs 40 steps of projected gradient descent with a step size of
$0.01$. To train SOAP, the trade-off parameter $\alpha$ in Eq. (1) is 100.
For CIFAR-10, we evaluate our method on a regular residual network, ResNet-18,
and a 10-times-widened residual network, Wide-ResNet-28-10. Both networks are
trained for 200 epochs using an SGD optimizer. The initial learning rate is
0.1, which is decreased by a factor of 0.1 at epochs 100 and 150. We use
random cropping and random horizontal flipping for data augmentation on
CIFAR-10. $\epsilon_{\textrm{adv}}=8/255$ for both FGSM and PGD adversarial
training. For PGD adversarial training, the adversary runs 7 steps of
projected gradient descent with a step size of $2/255$. The trade-off
parameters $\alpha$ of SOAP for rotation prediction and label consistency are
0.5 and 1, respectively.
While we implement adversarial training and SOAP ourselves, we use the
authors’ implementations for both Defense-GAN and Pixel-Defend. Notice that
our white-box Defense-GAN accuracy is lower than the accuracy reported in
(Samangouei et al., 2018). Part of the reason is the difference in
architecture and training scheme, but we were still unable to replicate their
accuracy using the exact same architecture and following their instructions.
Nonetheless, our results are close to (Hwang et al., 2019), where the authors
also reported lower accuracy.
### A.4 Training efficiency
(a) Data Reconstruction
(b) Rotation prediction and Label consistency
Figure 7: Comparison of training efficiency between SOAP, vanilla training
(‘No Def’) and adversarial training (FGSM and PGD). The y-axis is the average
time consumption of 30 training epochs.
The comparison of training efficiency between SOAP and other methods is shown
in Figure 7. To measure the training complexity, we run each training method
for 30 epochs on a single Nvidia Quadro RTX 8000 GPU, and report the average
epoch time consumption. When the network capacity is small, the training
complexity of SOAP is close to FGSM adversarial training and much lower than
PGD adversarial training. When the network capacity is large, the training
complexity of SOAP is higher than FGSM adversarial training but still
significantly lower than PGD adversarial training.
Note that it is hard to compare with other purification methods because they
are typically trained in two stages: the training of the classifier and the
training of a separate purifier such as a GAN. While training those purifiers
is typically difficult and computationally intensive, SOAP does not suffer
from this limitation, as the encoder is shared between the main classification
network and the purifier. It is therefore reasonable to claim that SOAP is
more efficient than other purification methods.
### A.5 Purified examples
We have shown some examples of PGD adversarial images and their SOAP-purified
counterparts in Figure 4. In Figures 8-10 we present examples for every
attack. The adversary is shown at the top of each column, and the network
prediction is shown at the bottom of each example.
(a) FCN
(b) CNN
Figure 8: MNIST examples with data reconstruction.
(a) ResNet-18
(b) Wide-ResNet-28
Figure 9: CIFAR10 examples with rotation prediction.
(a) ResNet-18
(b) Wide-ResNet-28
Figure 10: CIFAR10 examples with label consistency.
### A.6 Success rate of $\ell_{2}$ attacks
To provide more details about robustness under $\ell_{2}$ attacks, in Tables 8
and 9 we report the success rate of generating $\ell_{2}$ attacks. For the
MNIST dataset, generating an $\ell_{2}$ attack is considered a success if the
$\ell_{2}$ norm of the perturbation is smaller than 4; for the CIFAR-10
dataset, the bound is set to 2. PGD adversarial training typically results in
a low success rate compared with the other methods; SOAP, on the other hand,
is typically “easy” to attack before purification because no adversarial
training is involved. Notice that the Wide-ResNet-28 model trained on label
consistency is robust to the DeepFool attack even before purification. This
explains why the SOAP-LC robust accuracy in Table 2 is relatively low: its
DeepFool perturbations are larger than in the other cases.
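The success-rate statistic reported in Tables 8 and 9 can be computed as follows (a sketch; the function name and array layout are our own assumptions):

```python
import numpy as np

def l2_success_rate(x_clean, x_adv, bound):
    """Percentage of generated attacks whose l2 perturbation norm falls
    below the dataset bound (4 for MNIST, 2 for CIFAR-10)."""
    deltas = (x_adv - x_clean).reshape(len(x_clean), -1)
    norms = np.linalg.norm(deltas, axis=1)
    return float(np.mean(norms < bound) * 100.0)
```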
Table 8: MNIST $\ell_{2}$ attack success rates

Attack | Architecture | No Def | FGSM AT | PGD AT | SOAP-DR
---|---|---|---|---|---
CW | FCN | 100.00 | 99.98 | 99.98 | 99.78
CW | CNN | 100.00 | 100.00 | 100.00 | 99.82
DF | FCN | 99.88 | 94.90 | 99.10 | 99.53
DF | CNN | 99.92 | 100.00 | 18.97 | 95.33
Table 9: CIFAR-10 $\ell_{2}$ attack success rates

Attack | Architecture | No Def | FGSM AT | PGD AT | SOAP-RP | SOAP-LC
---|---|---|---|---|---|---
CW | ResNet-18 | 100.00 | 97.31 | 96.88 | 100.00 | 100.00
CW | Wide-ResNet-28 | 100.00 | 100.00 | 97.95 | 99.92 | 99.99
DF | ResNet-18 | 100.00 | 88.20 | 87.01 | 99.99 | 99.85
DF | Wide-ResNet-28 | 100.00 | 100.00 | 83.68 | 96.31 | 28.22
## 1 Introduction
Bipedal robots have the capability to traverse the span of terrains that
humans can. This makes bipeds an appealing locomotion option for robots in
human environments. It is impractical, however, to design a single controller
that works over all required environments, particularly if all conditions are
not known a priori. For robots to be able to perform a number of diverse
behaviours, it is desirable for a control suite to be modular, where any
number of behaviours can be added with minimal retraining. Deep Reinforcement
Learning (DRL) is an appealing method for developing visuo-motor behaviours
for legged platforms. However, such DRL policies are usually trained on the
same terrain they are expected to operate in, though it is unlikely that the
agent would have access to a complete set of terrain conditions during policy
training before deployment. Controllers for dynamic platforms are typically
developed and tuned with safety harnesses and human supervisors in order to
minimise potential damage to the robot and the environment. These restrictions
limit the development of controllers over an exhaustive collection of terrain
conditions. Furthermore, modifying controllers that have already been tuned is
costly; for example, DRL methods typically require retraining if new terrains
are introduced.
We propose training a setup policy to transition between two pre-trained
policies targeting different terrains. A modular control suite for a dynamic
platform requires careful design: simply switching between controllers may
result in the robot falling over if the switch occurs when the robot is not in
a safe region for the subsequent controller. Figure 1 shows our primary
contribution: a setup policy that prepares the robot for the controller needed
to traverse the upcoming terrain obstacle. With our method, we can transition
between existing behaviours, allowing pre-trained DRL policies to be combined.
Figure 1: As the robot approaches a difficult terrain artifact, it must move
into position for target policy $i$. The trajectory of the walking policy $j$
(shown with a blue line) does not intersect with the trajectory of the target
policy $i$ (green line). Our setup policy provides a trajectory that prepares
the robot for the upcoming target policy (orange line).
Our contributions are as follows:
* •

Using pre-trained policies, we develop setup policies that significantly
improve the success rate of transitioning from a default walking policy to a
target policy (from a 1.5$\%$ success rate without setup policies to 82$\%$
for a difficult 0.5 m jump). The setup policy also learns when to switch to
the target policy.

* •

We introduce a novel reward, called Advantage Weighted Target Value, guiding
the robot towards the target policy.

* •

We show that we can use our setup policies with several difficult terrain
types, allowing for a modular control suite, combining behaviours that have
been trained separately without any retraining of low-level policies.
## 2 Related Work
Deep reinforcement learning (DRL) has demonstrated impressive results for
locomotion tasks in recent works [schulman_proximal_2017,
heess_emergence_2017, peng_deeploco_2017, peng_deepmimic_2018,
xie_allsteps_2020][heess_emergence_2017, peng_deeploco_2017,
peng_deepmimic_2018, xie_allsteps_2020]. Typically DRL policies optimise a
single reward function, as new environments and behaviours are introduced
costly retraining is required. Single policies have demonstrated locomotion
over various terrains for simple 2D cases [song_recurrent_2018], and
impressive quadruped behaviours [lee_learning_2020, rudin_learning_2021],
however, for many scenarios multiple policies are often required. Furthermore,
developing a single policy to perform multiple behaviours can degrade the
performance of each behaviour [lee_robust_2019]. Hierarchical reinforcement
learning (HRL) offers flexibility through architecture, involving by training
all segments of the hierarchy concurrently [sutton_between_1999, bacon_option-
critic_2018, frans_meta_2018, peng_deeploco_2017], or training parts of the
hierarchy separately [merel_hierarchical_2019], [lee_composing_2019]. When
trained together, HRL can improve task level outcomes, such as steering and
object tracking [peng_deeploco_2017], or improve learning efficiency by
reusing low level skills across multiple high level tasks [frans_meta_2018].
When trained separately, low level controllers can be refined efficiently
using prior knowledge, such as by utilising motion capture data
[peng_deepmimic_2018, merel_hierarchical_2019, peng_mcp_2019] or behavioural
cloning [strudel_learning_2020]. For robotics applications it may be difficult
to develop controllers for multiple behaviours simultaneously. Using pre-
trained primitives can break up a large difficult problem into smaller
solvable sub-problems [schaal_dynamic_2006]. Using pre-trained policies
requires a suitable handling of transitions between behaviours. Faloutsos et
al. [faloutsos_composable_2001] learns pre and post-conditions for each
controller, such that switching only occurs when these conditions have been
satisfied. DeepMimic by Peng et al. [peng_deepmimic_2018] combines pre-trained
behaviours learned from motion capture, with reliance on a phase variable to
determine when one behaviour has been completed. Policy sketches introduce a
hierarchical method that uses task specific policies, with each task performed
in sequence [andreas_modular_2017]. CompILE uses soft boundaries between task
segments [kipf_compile_2019]. Work by Peng et al. [peng_terrain-adaptive_2016]
trains several actor-critic control policies, modulated by the highest critic
value in an a given state. For these examples there must be a reasonable state
overlap between sequential controllers. Other methods that combine pre-trained
behaviours learn latent representations of skills [pertsch_accelerating_2020]
or primitives [ha_distilling_2020], enabling interpolation in the latent
space. From an offline dataset of experience, Pertsch et al
[pertsch_accelerating_2020] are . [pertsch_accelerating_2020] were able to
combine low level controllers in manipulation tasks and locomotion task for a
multi-legged agent. Ha et al. [ha_distilling_2020] utilise motion capture to
learn latent representations of primitives, then use model predictive control
for the high level navigation of to navigate with a high dimensional humanoid.
The FeUdal approach learns a master policy that modulates low-level policies
using a learned goal signal [vezhnevets_feudal_2017]. Work by Peng et al.
[peng_mcp_2019] combines pre-trained policies using a gating function that
learns a multiplicative combination of actions. In our previous work
[tidd_learning_2021], we learn when to switch between low level primitives for
a simulated biped using data collected by randomly switching between
behaviours. Interpolation between behaviours yields natural transitions
[xu_hierarchical_2020]. Yang et al. [yang_multi_2020] learn a
gating neural network to blend several separate expert neural network policies
to perform trotting, steering, and fall recovery in real-world experiments
with a quadruped. Da et al. [da_supervised_2017] use supervised learning to
train a control policy for a biped from several manually derived controllers
to perform periodic and transitional gaits on flat and sloped ground. In each
of these approaches, experience from all behaviours must be available during
training. For dynamic platforms, where separate controllers occupy different
subsets of the state, changing behaviours may result in instability if there
is no overlap between controllers. Lee et al. [lee_composing_2019] learn a
proximity predictor to train a transition policy to guide an agent to the
initial state required by the next controller.
Locomotion experiments are demonstrated with a 2D planar walker, where
learning occurs by training with all terrain conditions. We show that our
method performs more reliably (with a higher success rate), despite training
with experience from a single terrain obstacle for each behaviour. Our setup
policies learn to transition between complex behaviours with a 3D biped in a
simulation environment using a depth image to perceive the terrain.
## 3 Method
In this section we describe the problem we are solving and our method for
training setup policies.
Figure : a) shows the modular pipeline. A terrain classifier selects which
module should be utilised, and the torque from the output of the selected
module is applied to the robot, returning robot state and depth image. b) Each
module is pre-trained on a single terrain type. The target policy has been
designed to traverse a specific terrain condition. The setup policy guides the
trajectory of the robot from a default walking controller to the target
policy, also learning when to switch to the target policy.
### Problem Description
We investigate composing controllers for a dynamic platform with narrow or no
state overlap between adjacent behaviours. An example of where this occurs is
when a bipedal robot performs a jump over a large block. In this scenario, the
robot walks up to the block using a walking policy and then performs a jump
with a terrain-specific target policy, where the behaviour involves jumping
with both feet. Fig 1 shows that the trajectory of the walk policy does not
intersect with the trajectory of the target policy, therefore we require an
intermediate policy. While it may
be possible to improve the robustness of the target policies after they have
been trained (i.e. to include states visited by the walk policy), we consider
the case where target behaviours and value functions are provided as a black
box and are therefore immutable. Our agent has 12 torque controlled actuators,
simulated in PyBullet [coumans_pybullet_2020]. We select which terrain module
to employ using an oracle terrain classifier (Fig 3a). From a default walking
policy, we utilise a setup policy that prepares the robot for the target
policy (Fig 3b). When to switch from the setup policy to the target policy is
also a learned output of the setup policy. We first study the traversal of a
single difficult terrain artifact that requires walking on a flat surface and
performing a jump. We then consider several diverse terrain types (gaps,
hurdles, stairs, and stepping stones). The state provided to each policy
is $s_{t}=[rs_{t},I_{t}]$, where $rs_{t}$ is the robot state and $I_{t}$ is
the perception input at time $t$. Robot state:
$rs_{t}=[J_{t},Jv_{t},c_{t},c_{t-1},v_{CoM,t},\omega_{CoM,t},\\\
\theta_{CoM,t},\phi_{CoM,t},h_{CoM,t},s_{right},s_{left}]$, where $J_{t}$ are
the joint positions in radians, $Jv_{t}$ are the joint velocities in rad/s,
$c_{t}$ and $c_{t-1}$ are the current and previous contact information of each
foot, respectively (four Boolean contact points per foot), $v_{CoM,t}$ and
$\omega_{CoM,t}$ are the linear and angular velocities of the robot body,
$\theta_{CoM,t}$ and $\phi_{CoM,t}$ are the pitch and roll angles of the robot
body, and $h_{CoM,t}$ is the height of the robot from the walking surface.
$s_{right}$ and $s_{left}$ are Boolean indicators of which foot is in the
swing phase, and are updated when the current swing foot makes contact with
the ground. Robot body rotations are provided as Euler angles. In total there
are $51$ elements to the robot state, which is normalised by subtracting the
mean and dividing by the standard deviation for each variable (statistics are
collected as an aggregate during training).
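The running normalisation described above (subtract the mean, divide by the standard deviation, with statistics aggregated during training) could be sketched as follows; this is our illustration of the idea, not the authors' implementation, and all names are ours.

```python
import numpy as np

class RunningNorm:
    """Welford-style running mean/std normaliser for the robot state vector."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)  # running sum of squared deviations

    def update(self, x):
        # Aggregate statistics one observation at a time (Welford's method).
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def normalise(self, x):
        # Sample standard deviation, with a small epsilon to avoid divide-by-zero.
        std = np.sqrt(self.m2 / max(self.n - 1, 1)) + 1e-8
        return (x - self.mean) / std
```

Welford's method avoids storing all past states while remaining numerically stable, which suits statistics collected "as an aggregate during training".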
Perception: Perception is a depth sensor mounted to the robot base, with a
resolution of $[48,48,1]$, and field of view of 60 degrees. Each pixel is a
continuous value scaled between $0-1$, measuring a distance between 0.25 and
$\SI{2}{\metre}$ from the robot base. The sensor moves with the body in
translation and yaw; we provide an artificial gimbal to keep the sensor fixed
in roll and pitch. The sensor is pointed at the robot’s feet (downwards 60
degrees so the feet are visible) and covers at least two steps in front of the
robot [zaytsev_two_2015]. We reduce the sampling rate of the perception to
$\SI{20}{\hertz}$ (reducing computational complexity), where the simulator
operates at $\SI{120}{\hertz}$.
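The per-pixel depth scaling described above can be written directly; a minimal sketch, assuming the 0.25–2 m range and 48x48x1 resolution from the text (the function name is ours).

```python
import numpy as np

NEAR, FAR = 0.25, 2.0  # sensor range in metres, from the text

def scale_depth(depth_m):
    """Clip raw depth (metres) to the sensor range and scale to [0, 1]."""
    clipped = np.clip(depth_m, NEAR, FAR)
    return (clipped - NEAR) / (FAR - NEAR)

# Hypothetical frame: every pixel at 1.125 m, halfway through the range.
frame = scale_depth(np.full((48, 48, 1), 1.125))
```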
### Deep Reinforcement Learning for Continuous Control
We consider our task to be a Markov Decision Process (MDP), defined by the
tuple $\\{\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{P},\gamma\\}$, where
$s_{t}\in\mathcal{S}$, $a_{t}\in\mathcal{A}$, and $r_{t}\in\mathcal{R}$ are
the state, action, and reward observed at time $t$, $\mathcal{P}$ is an
unknown transition probability from $s_{t}$ to $s_{t+1}$ taking action
$a_{t}$, and $\gamma$ is the discount factor. The goal of reinforcement
learning is to maximise the sum of future rewards
$R=\sum_{t=0}^{T}\gamma^{t}r_{t}$, where $r_{t}$ is provided by the
environment at time $t$. Actions are sampled from a deep neural network policy
$a_{t}\sim\pi_{\theta}(s_{t})$, where $a_{t}$ are the joint commands. Each
policy is trained with Proximal Policy
Optimisation (PPO) [schulman_proximal_2017]. We use the implementation of PPO
from OpenAI Baselines [dhariwal_openai_2017].
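As a concrete illustration of the discounted return $R=\sum_{t=0}^{T}\gamma^{t}r_{t}$ defined above (our sketch, independent of the PPO implementation used):

```python
def discounted_return(rewards, gamma=0.99):
    """Compute sum_t gamma^t * r_t by iterating backwards over the episode."""
    ret = 0.0
    for r in reversed(rewards):
        ret = r + gamma * ret
    return ret
```

Iterating backwards gives the same result as the explicit sum but needs only one pass and no power computations.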
### Training Setup Policies
For each target terrain type we train a setup policy with the purpose of
bridging the trajectories of a default walking policy and the behaviour for
traversing the upcoming terrain condition. The algorithm for training each
setup policy is shown in Algorithm 4.
Default Policy: The default policy is a pre-trained policy designed to walk on
a flat surface. When training a setup policy we always begin with the default
policy until the terrain artifact is detected by an oracle. Once the terrain
has been traversed, we switch back to the default policy when a termination
criterion is reached ($\tau_{\theta}$, defined below). We initialise the setup
policy from the default walking policy such that the robot continues to walk
towards the obstacle while learning how to prepare for the target policy.
Setup Policy: When terrain type $i$ is detected, we switch immediately from
the default policy to the setup policy for terrain type $i$, denoted as
$\pi_{\phi}^{i}$. While training we are given access to an oracle terrain
detector that indicates when the robot is a fixed distance from the terrain
artifact. The setup policy outputs joint torques $a_{i}$, and switch condition
$\tau_{\phi}$. If the switch condition is met, we immediately switch to the
target policy. Target Policy: The target policy for
terrain type $i$, denoted as $\pi_{\theta}^{i}$, is trained with curriculum
learning following the method outlined in our prior work [tidd_guided_2020].
We also define a termination condition $\tau_{\theta}$ that denotes switching
from the target policy back to the default policy. For the termination
condition to be reached, the robot must successfully traverse the terrain
obstacle, have both feet in contact with the ground plane, and have a positive
forward velocity $v_{x}$. This condition is the same for all artifact types.
Setup Policy Reward: We design a reward function
to motivate the setup policy to transition the robot from states visited by
the default policy to states required by the target policy. We note the value
function of the target policy,
$V^{\pi_{\theta}^{i}}(s_{t})=\sum_{t=0}^{T}\gamma^{t}r_{t}$ (where $r_{t}$ is
the original reward afforded by the environment for the target policy),
provides an estimate of what return we can expect if we run the target policy
from $s_{t}$. However, value functions are notoriously over-optimistic for
states that have not been visited [haarnoja_soft_2018] (such as those we might
experience by running the default policy). The advantage is a zero-centered
estimate of the effect of running action $a_{t}\sim\pi_{\theta}^{i}(s_{t})$,
given by the calculation of how much better or worse policy $\pi$ performs
compared to the value function prediction $V^{\pi}(s_{t})$:
$A^{\pi}(s_{t},a_{t})=R_{t}-V^{\pi}(s_{t})$ (1)
where $R_{t}$ is the sum of future rewards from time $t$ running policy $\pi$. We
estimate the advantage using the temporal difference (TD) error with the value
function of the target policy $\pi_{\theta}^{i}$:
$\hat{A}^{\pi_{\theta}^{i}}(s_{t},a_{t})=r_{t}+\gamma V^{\pi_{\theta}^{i}}(s_{t+1})-V^{\pi_{\theta}^{i}}(s_{t})$
(2)
where $r_{t}$ is the target policy reward. Using advantage
$\hat{A}^{\pi_{\theta}^{i}}(s_{t},a_{t})$ as an indication of the accuracy of
$V^{\pi_{\theta}^{i}}(s_{t})$, we define the reward from the Advantage
Weighted Target Value:
$\hat{r}_{t}=(1-\text{min}(\alpha\hat{A}^{\pi_{\theta}^{i}}(s_{t},a_{t})^{2},1))\cdot\beta V^{\pi_{\theta}^{i}}(s_{t})$
(3)
where $\alpha$ and $\beta$ are scaling factors tuned empirically (set to 0.15
and 0.01 respectively). The target policy value function
$V^{\pi_{\theta}^{i}}(s_{t})$ has learned the value for states where the
target policy has collected experience. Outside of these states, for example
when the robot is activating a different policy, we expect the reward provided
by the environment $r_{t}$, and the next state $s_{t+1}$, will not be
consistent with what is expected by the target policy value function.
Therefore, the advantage $\hat{A}^{\pi_{\theta}^{i}}(s_{t},a_{t})$ will not be
close to zero, and $\hat{r}_{t}$ becomes small. Intuitively, for states where
the target policy value function $V^{\pi_{\theta}^{i}}(s_{t})$ is accurate,
the target policy advantage $\hat{A}^{\pi_{\theta}^{i}}$ will be close to
zero, thus the setup policy will learn to maximise $V^{\pi_{\theta}^{i}}$.
States where $\hat{A}^{\pi_{\theta}^{i}}$ is far from zero will reduce the
effect of an overconfident $V^{\pi_{\theta}^{i}}$. Extended Reward: One issue
with
training setup policies is the delayed effect of actions, where once switched
to the target policy, the setup policy is no longer acting on the environment
or receiving a reward. Once the setup policy has transitioned to the target
policy, we must provide the setup policy with information regarding the
performance of the robot since switching. Our method for solving this problem
is to provide the additional reward obtained after switching, by including an
extended reward in the rollout buffer of the setup policy. We add this
extended reward to the last reward received running the setup policy:
$\hat{r}_{final}=\hat{r}_{final}+\hat{r}_{t}$ (4)
where $\hat{r}_{final}$ is the final reward entry received by the setup policy
before switching, and $\hat{r}_{t}$ is the reward received at time $t$, after
the setup policy has transitioned to the target policy. For algorithmic
stability, we clear all but the last entry of the buffer after each training
update. The last reward entry of the buffer becomes $\hat{r}_{final}$ for
environment steps that continue after training. The procedure for training
setup policies is provided in Algorithm 4. Note that the termination variable
$\tau_{t}$ (line 7) is either $\tau_{\phi}$ if the current policy is the setup
policy, or $\tau_{\theta}$ if the current policy is the target policy.
Algorithm 4: Setup Before Switching
1: Load target policy $\pi_{\theta}^{i}$
2: Load default policy $\pi_{\theta}^{j}$
3: Initialise setup policy $\pi_{\phi}^{i}$ from the default policy
4: Initialise Buffer
5: Current Policy $\leftarrow$ Default Policy
6: for $iteration=1,2,\ldots$, while not $done$:
7: $a_{t},\tau_{t}\sim$ Current Policy$(s_{t})$ ($\tau$ is $\tau_{\phi}$ or $\tau_{\theta}$)
8: $s_{t+1},r_{t},done=env.step(a_{t})$
9: if terrain detected:
10: $\hat{A}^{\pi_{\theta}^{i}}(s_{t},a_{t})=r_{t}+\gamma V^{\pi_{\theta}^{i}}(s_{t+1})-V^{\pi_{\theta}^{i}}(s_{t})$
11: $\hat{r}_{t}=(1-\text{min}(\alpha\hat{A}^{\pi_{\theta}^{i}}(s_{t},a_{t})^{2},1))\cdot\beta V^{\pi_{\theta}^{i}}(s_{t})$
12: if Current Policy == Default Policy: Current Policy $\leftarrow$ Setup Policy $\pi_{\phi}^{i}$; $\tau_{t}\leftarrow\tau_{\phi}$
13: if Current Policy == Setup Policy: store $s_{t}$, $a_{t}$, $\hat{r}_{t}$, $\tau_{\phi}$ in Buffer
14: if Buffer full: PPO training update; clear buffer (except last entry)
15: if $\tau_{\phi}$: Current Policy $\leftarrow$ Target Policy $\pi_{\theta}^{i}$; $\tau_{t}\leftarrow\tau_{\theta}$
16: if $\tau_{\theta}$: Current Policy $\leftarrow$ Default Policy $\pi_{\theta}^{j}$
17: if Current Policy != Setup Policy: $\hat{r}_{final}=\hat{r}_{final}+\hat{r}_{t}$
18: $s_{t}=s_{t+1}$
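A minimal sketch of the per-step setup reward (Equations 2 and 3) and the extended-reward update (Equation 4) used inside the training loop above; the target-policy value predictions are assumed given, and all function names are ours.

```python
ALPHA, BETA = 0.15, 0.01  # scaling factors reported in the text

def setup_reward(r_t, v_t, v_next, gamma=0.99):
    """Advantage Weighted Target Value: the TD-error advantage (Eq. 2)
    gates the scaled target-policy value prediction (Eq. 3)."""
    adv = r_t + gamma * v_next - v_t           # Equation 2
    return (1.0 - min(ALPHA * adv * adv, 1.0)) * BETA * v_t  # Equation 3

def extend_final_reward(buffer_rewards, r_after_switch):
    """Equation 4: fold reward earned after switching into the last
    reward entry received while running the setup policy."""
    buffer_rewards[-1] += r_after_switch
    return buffer_rewards
```

When the TD error is zero (the value function is consistent with the observed transition), the reward reduces to the scaled value prediction; a large TD error drives the reward towards zero.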
## 4 Experiments
We evaluate our method with a biped in simulation. Initial evaluations are
performed with a single difficult terrain (Sec 4, Sec 4, Sec 4), before adding
more behaviours in Sec 4. The difficult terrain is a sample of flat terrain
with a single block 50 cm high and 30 cm in length (in the forward direction
of the robot); an example is shown in Fig. 1. We perform training and
evaluation on a single terrain sample, starting the robot from a random $x$
(forward direction) uniformly sampled from (0.0, 2.2) $\si{\meter}$ before the
terrain artifact, $y$ (strafe) uniformly sampled from (-0.6, 0.6)
$\si{\meter}$, and $heading$ (yaw) uniformly sampled from (-0.3, 0.3)
$\si{\radian}$. All experiments using a single terrain type are evaluated by
pausing training and performing 100 evaluation runs, and recording both the
Success percentage and Distance percentage. Success % is defined as the
average success rate to reach a goal location several meters past the terrain
artifact. Distance % is the average percentage of the total terrain distance
the agent successfully covered.
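The two evaluation metrics could be computed as follows; this is a sketch under our own assumptions about the run records, not the authors' evaluation code.

```python
def summarise_runs(results, terrain_length):
    """Success % (fraction of runs reaching the goal) and Distance %
    (mean fraction of terrain length covered), as defined above.
    `results` holds (reached_goal, distance_covered) pairs (our layout)."""
    n = len(results)
    success = 100.0 * sum(1 for ok, _ in results if ok) / n
    distance = 100.0 * sum(d for _, d in results) / (n * terrain_length)
    return success, distance
```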
### Effect of Using Initialisation and Extended Reward
We perform an ablation to study the effect of initialising the setup policy
from the walking policy, and the effect of extending the reward after the
setup policy has transitioned to the target policy. In all experiments the
default policy is used until the terrain is detected, then the setup policy is
activated until the termination signal $\tau_{\phi}$ indicates switching to
the target policy, and $\tau_{\theta}$ then results in switching back to the
default policy.
* •
Without Initialisation: We evaluate our method without initialising the setup
policy from the default walking policy, i.e. the setup policy is randomly
initialised.
* •
Without Extended Reward: We evaluate our method without including the reward
after the setup policy has switched to the target policy.
* •
Full Method: Our method uses a setup policy initialised from the default walk
policy, and receives a reward signal after the setup policy has transitioned
to the target policy.
Success % Distance %
Without Initialisation 38.8 73.2
Without Extended Reward 12.3 58.3
Full Method 82.2 96.1
Table : Ablation study for initialising setup policies from a default walk
policy, and receiving an extended reward, for switching from a walking policy
to a jump policy on a single terrain sample.
We can see that initialising the setup policy with the default walking policy
and extending the reward improves learning outcomes, as shown in Table 4,
where our method achieves 82.2% success compared to 38.8% (without
initialisation from the default walking policy) and 12.3% (without receiving
the extended reward).
### Setup Policy Reward Choice
For training the setup policy, we investigate several options for the reward
provided at each timestep. For each reward type we evaluate with three
different random seeds; results are shown in Fig 4.
Figure : We investigate several options for the reward function used to train
a setup policy on the difficult jump terrain. We show the success rate (robot
reaches several meters after the jump) during training for three different
random seeds.
* •
Original: The reward afforded by the environment that is used to train the
target policy; the reward encourages following an expert motion (see
[tidd_guided_2020] for details of the reward function).
* •
Constant: At each timestep a constant reward of 1.5 is given.
* •
Target Torque: We encourage the setup policy to match the output of the target
policy by minimising the error:
$\exp[-2.0\cdot(\pi_{\phi}(s_{t})-\pi_{\theta}(s_{t}))^{2}]$.
* •
Target Value: We evaluate the effect of using the target policy value function
as the reward for the setup policy. This is not the same as the original
reward, as it is not provided by the environment, but by
$\beta V^{\pi_{\theta}^{i}}(s_{t})$, where $\beta$ is a scaling factor of
0.01.
* •
Advantage Weighted Target Value: We evaluate the reward based on the target
policy advantage and value function, introduced in Equation 3.
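Two of the alternative rewards listed above can be written directly; a sketch in our notation, with policy outputs and value predictions assumed given.

```python
import numpy as np

def target_torque_reward(a_setup, a_target):
    """Match the target policy's output: exp[-2 * squared action difference]."""
    diff = np.asarray(a_setup) - np.asarray(a_target)
    return float(np.exp(-2.0 * np.sum(diff ** 2)))

def target_value_reward(v_target, beta=0.01):
    """Scaled target-policy value prediction used directly as the reward."""
    return beta * v_target
```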
It can be seen from Fig 4 that a reward using the Advantage Weighted Target
Value performs better than other reward choices, achieving the highest average
success rate (78.7%). Using the target value reaches a success rate of 63.9%.
This disparity, and the large variation in the target value (green line),
shows the effect of the over-estimation bias often seen with value functions,
validating our idea that weighting the value by the advantage reduces the
effect of this bias when training setup policies.
### Comparisons
We compare our method with several others, including end to end methods,
methods without setup policies, and a method that uses proximity prediction to
train a transition policy [lee_composing_2019].
Success % Distance %
Single Policy 0.0 11.3
Single Policy With Curriculum 0.0 3.8
No Setup Policy 1.5 51.3
Learn When to Switch 0.0 44.2
Proximity Prediction 51.3 82.4
Setup Policy (Ours) 82.2 96.1
Table : Success and average distance of the total length covered when
switching from a walking policy to a jump policy on a single terrain sample.
Figure : Setup policies enable a biped to transition between complex
visuo-motor behaviours for traversing a sequence of diverse terrains.
* •
Single Policy: We train a single end to end policy on the terrain sample.
* •
Single Policy With Curriculum: We utilise curriculum learning to train a
single end to end policy on the terrain sample (using the method outlined in
[tidd_guided_2020]).
* •
No Setup Policy: We switch to the target policy from the default policy as
soon as the terrain oracle detects the next terrain.
* •
Learn When to Switch: We collect data by switching from the default policy to
the target policy at random distances from the jump. We use supervised
learning to learn when the robot is in a suitable state to switch. Details of
this method are found in our previous work [tidd_learning_2021].
* •
Proximity Prediction: We follow the method defined by Lee et al.
[lee_composing_2019] to train transition policies (one for each behaviour)
using a proximity predictor function $P(s_{t})$. $P(s_{t})$ outputs a
continuous value that indicates how close the agent is to a configuration that
resulted in successful traversal. $P(s_{t})$ is trained using supervised
learning from success and failure buffers. The reward used to train the
transition policy is the dense reward created by $P(s_{t+1})-P(s_{t})$,
encouraging the agent to move closer to a configuration that results in
successful switching. For an accurate comparison we initialise the transition
policy with weights from the walk policy, and utilise the terrain oracle for
policy selection (the paper refers to a rule-based meta-policy in place of a
terrain oracle [lee_composing_2019]).
* •
Setup Policy (Ours): We follow Algorithm 4 to train setup policies.
We can see from Table 4 that our method for developing setup policies performs
the best (82.2% success rate), compared to other methods. A single policy was
unable to traverse the difficult jump terrain, even with an extended learning
time and with curriculum learning. The poor performance of Single Policy With
Curriculum was the result of the robot not progressing through the curriculum,
and as a consequence being unable to move without assistive forces. We show
that setup policies are necessary for this problem, with No Setup Policy
unable to successfully traverse the terrain (1.5% success). Learning When to
Switch also performed poorly, as there are very few states that overlap
between the default policy and the target policy for the switch estimator to
learn. The Proximity Prediction method was able to successfully traverse the
jump 51.3% of the time. It is worth noting that this method required
approximately three times more training episodes than our method to reach the
provided results, despite also being initialised from the default policy.
These experiments show that for the difficult jump terrain, we require a
specialised target policy dedicated to learning the complex jumping behaviour.
We show that this specialised policy has a trajectory that does not overlap
with the default policy, and thus setup policies are required. Our method for
learning setup policies achieved the highest performance on this difficult
task.
### Multiple Terrain Types
For our final experiment we train setup policies for several additional
terrain types. In total we now have modules for jumps, gaps, hurdles, stairs,
and stepping stones. Fig 4 shows a sequence with each terrain type. We follow
the pipeline shown in Fig 3a), where a terrain oracle determines which module
should be selected. On selection, the setup policy is employed until the robot
reaches a switch state ($\tau_{\phi}$), then switching to the target policy.
Termination criteria $\tau_{\theta}$ (introduced in Sec 3) determines when to
switch back to the default policy for the jump obstacle only. All other
policies resemble the behaviour of the default policy on flat terrain, i.e. we
continue using the target policy after the terrain has been traversed for
gaps, hurdles, stairs, and stepping stones, until the oracle determines the
terrain has changed. We evaluate with a sequence of each of the 5 terrain
types, randomly shuffled before each run. We perform 1000 runs, with a total
of 5000 various terrain types to be traversed. In Table 4 we show the average
percentage of total distance travelled and success rate with and without using
setup policies. A successful run in this scenario is defined as traversing the
last terrain in the sequence. Table 4 shows the failure count for each terrain
type.
Success % Distance %
Without Setup Policies 1.9 36.3
With Setup Policies 71.2 80.2
Table : Success rate and percentage of the total terrain length travelled from
1000 episodes of all 5 terrain types, randomly shuffled each episode.
Jump Gap Hurdle Stairs Steps
Without Setup Policies 782 52 36 31 86
With Setup Policies 36 48 43 65 96
Table : Number of failures by terrain type from 1000 episodes of all 5 terrain
types, randomly shuffled each episode.
Success % Distance %
Without Setup Policies 68.2 78.1
With Setup Policies 76.7 84.6
Table : Success rate and percentage of the total terrain length travelled from
1000 episodes of 4 terrain types (without the jump terrain), randomly shuffled
each episode.
Gap Hurdle Stairs Steps
Without Setup Policies 74 41 64 142
With Setup Policies 44 51 49 90
Table : Number of failures by terrain type from 1000 episodes of 4 terrain
types (without the jump terrain), randomly shuffled each episode.
We can see from Table 4 that setup policies improve the success rate compared
to switching policies without the intermediate policy (71.2% success compared
to 1.9%). Table 4 provides a breakdown of the number of times the robot failed
on each terrain. From Table 4 we can see that setup policies are most critical
for traversing the difficult jump terrain (reducing the failures from 782 to
36), though we also see improvements for gap terrain. To investigate further,
we exclude the jump obstacle from the terrain sequence and still observe a
significant benefit from using setup policies. Table 4 shows the performance
increase from 68.2% to 76.7% of successful traversals for 4 terrain types
(excluding the jump obstacle). The stepping stone obstacle had the highest
failure rate of the remaining terrain types; further investigation is required
to improve the setup policies for this terrain (Table 4). Despite the clear
need for setup policies, we attribute the failures to out of distribution
states, i.e. the robot visits states in the terrain sequence that it does not
experience when training the setup policy on the individual terrain types. For
example, if the robot is on a gap, using the gaps behaviour, and then switches
to the jumps setup policy to prepare for an upcoming jump, it will not
necessarily be in a state that has been seen during training. Training setup
policies for a wider range of states will be investigated in future work.
Furthermore, the performance of behaviours on each individual terrain type
could be analysed to improve the generalisation of this method to new
behaviours.
## 5 Conclusion
It is common practice for legged robots to have separate locomotion policies
to traverse different terrain conditions. We propose setup policies that
enable smooth transitions from the trajectory of one locomotion policy to the
trajectory of a target policy for traversing difficult terrain with a dynamic
biped. We use deep reinforcement learning to learn the setup policies, for
which we introduce a novel reward based on the Advantage Weighted Target
Value, utilising the scaled value prediction of the target policy as the
reward for the setup policy. In simulation experiments for transitioning from
a walking policy to a jumping policy, we show that using an in-between setup
policy yields a higher success rate compared to using a single policy (0$\%$
success rate), or the two policies without the setup policy (from 1.5$\%$
success rate without setup policies to 82$\%$). We further show that our
method is scalable to other terrain types, and demonstrate the effectiveness
of the approach on a sequence of difficult terrain conditions, improving the
success rate from 1.9$\%$ without setup policies to 71.2$\%$. A limitation of
our method is that the setup policies are trained from a default walking
policy. We would like to improve the scope of the setup policies such that a
wider range of states are funnelled towards the target policy. If we allowed
further training of the target policies, our training method could be used to
include more states and improve robustness. The idea of utilising the
advantage (estimated by the temporal difference (TD) error) as a confidence
metric can be applied generally to region of attraction (RoA) expansion, safe
reinforcement learning, and blending several behaviours for performing complex
composite tasks. These ideas will be explored in future work. A significant
assumption in this work is that we have access to a terrain oracle; removing
this dependency will be required as we consider experiments in the real world.
# Unraveling S$\&$P500 stock volatility and networks - An encoding-and-
decoding approach
Xiaodong Wang Department of Statistics, University of California, Davis.
Fushing Hsieh Department of Statistics, University of California, Davis.
## Abstract
Volatility of a financial stock refers to the degree of uncertainty or risk
embedded within the stock's dynamics. Such risk has received a great deal of
attention from financial researchers. Following the concept of
regime-switching models, we propose a non-parametric approach,
named encoding-and-decoding, to discover multiple volatility states embedded
within a discrete time series of stock returns. The encoding is performed
across the entire span of temporal time points for relatively extreme events
with respect to a chosen quantile-based threshold. As such, the return time
series is transformed into Bernoulli-variable processes. In the decoding
phase, we computationally search for the locations of change points via a new
searching algorithm in conjunction with an information criterion applied to
the observed collection of recurrence times of the binary process. Beyond the
independence assumption required to build the geometric likelihood function,
the proposed approach can partition the entire return time series into a
collection of homogeneous segments without any assumptions on the dynamic
structure or the underlying distributions. In numerical experiments, our
approach compares favorably with parametric models such as the Hidden Markov
Model. In real data applications, we illustrate the use of our approach in
forecasting stock returns. Finally, the volatility dynamics of every stock in
the S&P500 are revealed, and a stock network is established to represent
dependency relations derived from concurrent volatility states among the
S&P500.
## 1 Introduction
To discover the mystery of the stock dynamics, financial researchers focus on
stock returns or log returns. Black and Scholes [1] proposed in their seminal
work to use stochastic processes in modeling stock prices. One particular
model of focus is the geometric Brownian motion (GBM), which assumes all log-
returns being normally distributed. That is, if a time series of the price of
a stock is denoted as $\\{X(t)\\}_{t}$, the GBM modeling structure prescribes
that
$\log\frac{X(t)}{X(t-1)}\sim N(\mu,\sigma^{2})$
Later, Merton [2] extended Black and Scholes' model by introducing
time-dependent parameters to accommodate potential serial correlations.
Further, in order to go beyond the normal distribution, models belonging to a
broader category, including the general Lévy process and the particular
geometric Lévy process model [11], have become popular and appropriate
alternatives by embracing stable distributions with heavy tails. Because of
the independent-increments property of Brownian motion and Lévy processes,
returns over disjoint equal-length time intervals remain independent and
identically distributed (i.i.d.). This property restricts the modeled
stochasticity to be invariant across the entire time span. However, it is
well known that the
distributions of returns are completely different over various volatility
stages. Thus, these models are prone to fail in capturing extreme price
movements [3].
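As a quick numerical illustration of the GBM prescription above, a simulated price path should exhibit log-returns that are i.i.d. normal. The sketch below (with illustrative parameter values, not taken from the paper) checks the sample mean and variance of the simulated log-returns:

```python
import math
import random

def simulate_gbm(x0, mu, sigma, n, seed=0):
    """Simulate a GBM price path; each log-return is an i.i.d. N(mu, sigma^2) draw."""
    rng = random.Random(seed)
    prices = [x0]
    for _ in range(n):
        r = rng.gauss(mu, sigma)              # one log-return
        prices.append(prices[-1] * math.exp(r))
    return prices

prices = simulate_gbm(x0=100.0, mu=0.0005, sigma=0.02, n=10000)
log_returns = [math.log(prices[t] / prices[t - 1]) for t in range(1, len(prices))]
mean_r = sum(log_returns) / len(log_returns)
var_r = sum((r - mean_r) ** 2 for r in log_returns) / len(log_returns)
# mean_r and var_r should be close to mu = 0.0005 and sigma^2 = 0.0004
```

This i.i.d. structure is exactly what the regime-switching viewpoint later relaxes.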
Research attempts from various perspectives have been made to render stock
price modeling more realistic. One fruitful perspective is to incorporate
stochastic volatility into stock price modeling. From this perspective,
regime-switching or hidden-state models are proposed to govern the stock price
dynamics. The regime-switching model can be represented by the distributional
changes between a low-volatility regime and a more unstable high-volatility
regime. In particular, different regimes are characterized by distinct sets of
distributional modeling structures. One concrete example of such modeling is
the Hidden Markov Model (HMM). The HMM increasingly appeals to researchers due
to its mathematical tractability under the Markovian assumption. Its
parametric approach has gained popularity and its parameter estimation
procedures have been discussed comprehensively. For instance, Hamilton [4]
described AR and ARCH-type models under Markov regime-switching structure.
Hardy [3] offered a Markov regime-switching lognormal model by assuming a
different normal distribution within each state,
$\log\frac{X(t)}{X(t-1)}|s\sim N(\mu_{s},\sigma^{2}_{s})$
where $s$ indicates the hidden states for $s=1,2,...$. Fine et al. [5]
developed the hierarchical HMM by introducing additional sources of dependency
into the model. To further increase the degree of flexibility in modeling stochastic
volatility, another well-known financial model was related to volatility
clustering [19]. For instance, GARCH models have been studied to model the
time-varying conditional variance of asset returns [20]. However, such a
complicated dynamic structure usually involves a large number of parameters.
This modeling complexity renders the model hard to interpret. In contrast,
non-parametric approaches are still scarce in the literature due to the lack
of tractability and involvement of many unspecified characteristics [12].
In this paper, we take up a standpoint right in between the purely parametric
and non-parametric modelings. We adopt the research platform of regime-
switching models but aim to develop an efficient non-parametric procedure to
discover the dynamic volatility without assuming any distribution family or
underlying structure. The idea is motivated by a non-parametric approach,
named Hierarchical Factor Segmentation (HFS) [7, 8], which marks extremely
large returns as 1 and the others as 0, and then partitions the resultant 0-1
Bernoulli sequence into alternating homogeneous segments. The advantage of HFS
is that it transforms the returns into a 0-1 process with time-varying
Bernoulli parameters, so parametric tools such as likelihood functions can be
applied to fit each segment separately. However, HFS does not specify how to
define a "large" return to be marked, which limits its applicability.
Another limitation of HFS, which is also shared by
regime-switching models or HMM, is that there exists no data-driven way of
determining the number of underlying regimes or hidden states.
We propose an encoding-and-decoding approach to resolve the issues tied to the
aforementioned limitations simultaneously. The encoding procedure is done by
iteratively marking the returns at different thresholding quantile levels, so
the time series can be transformed into multiple 0-1 processes. In the
decoding phase, a searching algorithm in conjunction with model selection
criteria is developed to discover the dynamic pattern for each 0-1 process
separately. Finally, the underlying states are revealed by aggregating the
decoding results via cluster analysis. It is remarked that the non-parametric
approach is able to discover both light-tail and heavy-tail distributional
changes without assuming any dynamic structure or Markovian properties. Though
the proposed method is derived under independence or exchangeability
conditions, our numerical experiments show that the approach still works for
settings with the presence of weak serial dependency, which can be checked by
testing the significance of lagged correlation in practice.
Another contribution of this paper is that a searching algorithm is developed
to partition a 0-1 process into segments with different probability
parameters. Therefore, our computational development is a change point
analysis on a sequence of Bernoulli variables with the number of change points
being large and unknown. For such settings, existing searching algorithms,
such as bisection procedures [21, 22], are infeasible. As an alternative to
the hierarchical searching strategy, our proposed searching algorithm
concurrently generates multiple segments with only a few parameters.
The optimal partition of homogeneous segments is ultimately obtained via model
selection.
The paper is organized as follows. In Section 2, we review HFS and develop a
new searching algorithm that can handle multiple-state decoding. In Section 3,
we present the main approach to modeling distributional changes. In Section 4,
real data analysis is performed to illustrate stock forecasting and the
construction of networks that reveal relational patterns among the
S$\&$P500. Several remarks and conclusions are given in Section 5.
## 2 Volatility Dynamics
To investigate stock dynamics, we consider volatility as a temporal
aggregation of rare events that have large absolute returns. Considering the
recurrence time, or waiting time, between successive extreme returns, Chang et
al. [10] proved that the empirical distribution of recurrence times converges
asymptotically to a geometric distribution when the observations are i.i.d.
or, more generally, have an exchangeable joint distribution. Under the
framework of regime-switching models, each regime can be considered a
homogeneous time period with an exchangeable distribution. Motivated by this,
geometric distributions with different emission or intensity probabilities are
adopted to model returns under different volatility periods. The key
assumption we make here is the invariance of the recurrence-time distribution
embedded within a regime. Long-term serial correlation is outside the scope of
this paper: although modeling it may seem plausible when the duration of each
regime is short enough, it would raise the switching frequency between
different volatility periods, making the model complex and intractable.
Further, based on extensive permutation tests at the 30-second, 1-minute, and
5-minute return scales, it is claimed in [9] that returns should only be
considered exchangeable within a short time period, no longer than, for
example, 20 minutes. On the other hand, a longer volatility period provides
more samples to ensure the goodness-of-fit of within-regime distributions.
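The geometric limit of recurrence times under i.i.d. sampling can be illustrated with a small simulation; the value $p=0.1$ below is an arbitrary illustrative choice. For a Bernoulli($p$) event process, the recurrence time (number of 0's between successive 1's) is Geometric($p$) with mean $(1-p)/p$:

```python
import random

def recurrence_times(bits):
    """Number of 0's before each 1 in a 0-1 sequence (0 if two 1's are adjacent)."""
    times, gap = [], 0
    for b in bits:
        if b == 1:
            times.append(gap)
            gap = 0
        else:
            gap += 1
    return times

rng = random.Random(42)
p = 0.1
bits = [1 if rng.random() < p else 0 for _ in range(200_000)]
R = recurrence_times(bits)
mean_R = sum(R) / len(R)
# Geometric(p) on {0, 1, 2, ...} has mean (1 - p) / p = 9 for p = 0.1
```

Within a homogeneous regime, the same computation applied to the excursion process yields the regime's recurrence-time sample.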
With the above background knowledge in mind, we go into the encoding phase of
computational developments. Consider a pair of thresholds $(l,u)$ applied to
define events of interest or large returns. Specifically, an excursion 0-1
process $C(t)$ at time $t$ is defined as
$C(t)=\begin{cases}1&\quad\log\frac{X(t)}{X(t-1)}\leq
l\enspace\textit{or}~{}\log\frac{X(t)}{X(t-1)}\geq u\\\
0&\quad\textit{otherwise}\end{cases}$ (1)
where $l$ and $u$ are lower and upper quantiles of log returns, respectively.
It is easy to generalize the thresholds to one-sided tail excursions, for
instance, by setting $l=0$ and $u\in(0,1)$ to focus on positive returns or
upper-tail excursions. If the thresholds are set too extreme, then only a few
excursion returns stand out; as a result, the excursion process is too sparse
to preserve enough information about the volatility dynamics. Conversely, if
the quantile value is set close to the median, the dynamic pattern is
overwhelmed by irrelevant information or noise. There is an inevitable
trade-off between the size of the sample of marked events and the amount of
information about excursions. Our remedy to this
problem is to systematically apply a series of thresholds and encode the time
series returns into multiple binary (0-1) excursion processes. For the
completeness of the analysis, we will discuss a searching algorithm in
conjunction with model selection criteria in the section below, which is the
key in the decoding phase. More details about the encoding procedure are
described later in Section3.
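A minimal sketch of this encoding step, assuming a toy simulated price path (any return series would do), is:

```python
import math
import random

def log_returns(prices):
    return [math.log(prices[t] / prices[t - 1]) for t in range(1, len(prices))]

def quantile(xs, q):
    """Nearest-rank empirical quantile."""
    s = sorted(xs)
    return s[min(len(s) - 1, int(q * len(s)))]

def encode(returns, l, u):
    """Excursion process of Eq. (1): mark 1 if the log-return is <= l or >= u."""
    return [1 if (r <= l or r >= u) else 0 for r in returns]

# toy price path (assumed data, used only for illustration)
rng = random.Random(0)
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * math.exp(rng.gauss(0.0, 0.02)))

r = log_returns(prices)
C = encode(r, l=quantile(r, 0.05), u=quantile(r, 0.95))
```

With the 5% and 95% quantiles as thresholds, roughly one in ten returns is marked as an excursion.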
### 2.1 The Searching Algorithm
Suppose a 0-1 excursion process has been obtained. In this subsection, we
discuss how to search for a potential temporal segmentation. As a study
involving multiple change points, we aim to detect abrupt distributional
changes from a segment of a low-volatility regime to a segment of a
high-volatility regime. To properly accommodate a potentially large number of
unknown change points due to the recursive emissions of volatility, and to
effectively differentiate the alternating volatility-switching patterns, the
Hierarchical Factor Segmentation(HFS) was employed to partition the excursion
process into a sequence of high and low event-intensity segments [8]. The
original HFS assumes that there exist only two kinds of hidden states within
the returns corresponding to low-volatility and high-volatility regimes.
Though the assumption is plausible within a short time period, it becomes
limiting when the time series of returns is lengthy and embraces more
complicated regime-specific distributions. In this subsection, we extend HFS
with a more general searching algorithm to handle scenarios with multiple
states.
Denote the entire 0-1 excursion process sequence of length $n$ as
$\\{C(t)\\}_{t=1}^{n}$. The recurrence time between two successive
1's of $\\{C(t)\\}_{t=1}^{n}$ is recorded into a sequence, denoted as
$\\{R(t)\\}_{t}$. Note that the recurrence time can be 0 if two 1's
appear consecutively. Also, we denote $R(1)=0$ if $C(1)=1$. As such, the
length of $\\{R(t)\\}_{t=1}^{n^{*}}$ is $n^{*}=n^{{}^{\prime}}+1$ where
$n^{{}^{\prime}}$ is the number of 1’s in $\\{C(t)\\}_{t=1}^{n}$. To make the
notations consistent, we denote $\\{C_{i}(t)\\}_{t=1}^{n^{*}}$ as the $i$-th
coding sequence if $i$ is present and its corresponding recurrent time
sequence as $\\{R_{i}(t)\\}_{t=1}^{n_{i}^{**}}$.
Suppose that the number of internal states is $m$ with $m>1$. Then $m$ tuning
parameters are required in the searching algorithm given below.
Denote the first thresholding parameter vector as
$T=(T_{1},T_{2},...,T_{m-1})$ where $T_{1}<T_{2}<...<T_{m-1}$, and the second
thresholding parameter as $T^{*}$. The searching algorithm is described in
Alg.1.
Alg.1 multiple-states searching algorithm
1\. Define events of interest and encode the time series of returns into a 0-1
digital sequence $\\{C(t)\\}_{t=1}^{n}$, with 1 indicating an event and 0
otherwise.
2\. Calculate the recurrence times in $\\{C(t)\\}_{t=1}^{n}$ and denote the
resultant sequence as $\\{R(t)\\}_{t=1}^{n^{*}}$.
3\. For loop: cycle through $i=1,2,...,m-1$:
1. i.
Transform $\\{R(t)\\}_{t=1}^{n^{*}}$ into a new 0-1 digital strings
$\\{C_{i}(t)\\}_{t=1}^{n^{*}}$ via the second-level coding scheme:
$C_{i}(t)=\begin{cases}1&\quad R(t)\geq T_{i}\\\
0&\quad\textit{otherwise}\end{cases}$
2. ii.
Upon code sequence $\\{C_{i}(t)\\}_{t=1}^{n^{*}}$, take code digit 1 as
another new event and recalculate the event recurrence time sequence
$\\{R_{i}(t)\\}_{t=1}^{n_{i}^{**}}$
3. iii.
If a recurrence time $R_{i}(t)\geq T^{*}$, then record its associated time
segment in $\\{C_{i}(t)\\}_{t=1}^{n^{*}}$, denoted as $Seg_{i}$, where
$Seg_{i}\subset\\{1,...,n\\}$.
4\. The $m$ internal states are returned by $S_{1}=Seg_{1}$,
$S_{2}=Seg_{2}\backslash Seg_{1}$,…, $S_{m-1}=Seg_{m-1}\backslash Seg_{m-2}$,
and $S_{m}=\\{1,...,n\\}\backslash Seg_{m-1}$.
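A minimal Python sketch of Alg. 1, written from the description above (the segment bookkeeping is a simplified interpretation, not the authors' code), might look like:

```python
def recurrence_times(bits):
    """R(t): number of 0's preceding each 1 (R = 0 if two 1's are adjacent)."""
    times, gap = [], 0
    for b in bits:
        if b == 1:
            times.append(gap)
            gap = 0
        else:
            gap += 1
    return times

def search(C, T_list, T_star):
    """Sketch of Alg. 1: one segment set per first-level threshold T_i,
    each returned as a set of original time indices."""
    R = recurrence_times(C)
    pos = [t for t, b in enumerate(C) if b == 1]   # time index of each 1 in C
    segs = []
    for T in T_list:                                # T_1 < T_2 < ... < T_{m-1}
        Ci = [1 if r >= T else 0 for r in R]        # second-level coding scheme
        seg, run = set(), []
        for j, c in enumerate(Ci + [1]):            # sentinel 1 flushes the last run
            if c == 0:
                run.append(j)
            else:
                if len(run) >= T_star:              # second-level recurrence >= T*
                    seg.update(range(pos[run[0]], pos[run[-1]] + 1))
                run = []
        segs.append(seg)
    return segs

# toy sequence: 20 dense events followed by sparse events every 10 steps
C = [1] * 20 + ([0] * 9 + [1]) * 10
segs = search(C, T_list=[3], T_star=5)
high = segs[0]                          # S_1 = Seg_1
low = set(range(len(C))) - segs[-1]     # S_m = {1, ..., n} \ Seg_{m-1}
```

On the toy input, the dense burst (time 0-19) is recovered as the high-intensity state and the sparse tail as the low-intensity state.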
A sequence of Gaussian observations is generated with mean 0 and variance
varying under different unknown states in Figure 1(A). A pair of thresholds
$l=-2$ and $u=2$ is applied to code the observations via (1), so a sequence of
recurrence times is obtained in Figure 1(B). The first-level parameters
$T_{i}$, $i=1,...,m-1$, are set to control the event intensities that we aim
to partition; see thresholds $T_{1}$ and $T_{2}$ in Figure 1(B).
If $T_{i}$ takes its maximum $T_{m-1}$, then a high-intensity segment is
separated from the other level segments; see $T_{2}$ in Figure 1(B). By
decreasing the value of $T_{i}$ from $T_{m-1}$ to $T_{1}$ to implement a
series of partitions, multiple intensity levels of phases get separated. In
this example, $T_{1}$ is set to partition the high- and median-intensity
segments from the low-intensity segment.
In the second level of recurrence-time calculation,
$\\{R_{i}(t)\\}_{t=1}^{n_{i}^{**}}$ are calculated for $i=1,...,m-1$. If
$R_{i}(t)$ is above the second-level threshold $T^{*}$, the corresponding
segment is a period with low-intensity events at this level. So, for a fixed
$T_{i}$, $T^{*}$ is set to decide which phases have relatively low intensity,
while the rest have high intensity. Note that $Seg_{i}\subset Seg_{j}$ for
$j>i$: if a recurrence time $R(t)$ is greater than $T_{j}$, it is greater than
$T_{i}$ as well. By applying the same parameter $T^{*}$ in Figure 1(B), for
example, $Seg_{2}$ is wider than $Seg_{1}$, so the median-intensity segment is
obtained by $Seg_{2}\backslash Seg_{1}$. More numerical experiments on the
application of Alg. 1 are available in Appendix B.
Figure 1: A simple example to illustrate the implementation of Alg.1.
### 2.2 Mixed-geometric Model
Multiple volatility phases $S_{i}$, $i=1,...,m$, of the time series are
obtained by applying the searching algorithm. Assuming the joint distribution
is exchangeable within each volatility phase, the recurrence-time distribution
of $\\{R(t)\\}_{t\in S_{i}}$ converges to a geometric distribution with
parameter $p_{S_{i}}$ as the sample size goes to infinity, where $p_{S_{i}}$
is the emission probability under state $S_{i}$. Maximum Likelihood Estimation
(MLE) and the Method of Moments (MOM) give the same estimator for $p_{S_{i}}$,
$\hat{p}_{S_{i}}=\frac{1}{\sum_{t\in S_{i}}R(t)}$ (2)
for $i=1,...,m$. The searching algorithm provides a way to partition the
process into segments with $m$ different intensities. With enough samples, the
geometric distribution is appropriate for modeling the $m$ phases with
estimated parameters $\hat{p}_{S_{i}}$. Thus, maximum likelihood approaches
can be applied for model selection. A penalized likelihood, or loss function,
can be written as
$Loss(\theta)=-2\sum_{i=1}^{m}[\sum_{t\in
S_{i}^{\theta}}{C(t)}log\hat{p}_{S^{\theta}_{i}}+\sum_{t\in
S_{i}^{\theta}}{(1-C(t))}log(1-\hat{p}_{S^{\theta}_{i}})]+kN$ (3)
where $S^{\theta}_{i}$ are the segments generated by applying the $m$
parameters $\theta=(T_{1},...,T_{m-1},T^{*})$; $m$ is the number of embedded
phases or hidden states; $N$ is the total number of alternating segments,
$N\geq m$; and $k$ is a penalty coefficient. For instance, $k=2$ corresponds
to the Akaike Information Criterion (AIC), and $k=\log(n)$ corresponds to the
Bayesian Information Criterion (BIC); both criteria are compared in the
experiments below. The optimal parameters $\theta^{*}$ are tuned such that the
loss achieves its minimum, so
$\theta^{*}=\operatorname*{argmin}_{\theta}Loss(\theta)$ (4)
Thus, the segments are ultimately obtained by applying $\theta^{*}$. The
computational cost is high if all possible combinations of
$T_{1},...,T_{m-1}$ are considered; in practice, a random grid-search strategy
can be applied.
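The model-selection step can be sketched as follows; for simplicity this toy version scores each candidate partition with the segment-wise Bernoulli likelihood and an AIC penalty ($k=2$), rather than the full recurrence-time parametrization of Eq. (3):

```python
import math

def bernoulli_loglik(C, segments):
    """Sum of segment-wise Bernoulli log-likelihoods with the MLE p_hat per segment."""
    ll = 0.0
    for seg in segments:                  # seg: list of time indices
        ones = sum(C[t] for t in seg)
        n = len(seg)
        p = ones / n
        if 0 < p < 1:
            ll += ones * math.log(p) + (n - ones) * math.log(1 - p)
        # p == 0 or p == 1 contributes exactly 0 to the log-likelihood
    return ll

def loss(C, segments, k):
    """Penalized loss in the spirit of Eq. (3): -2 * log-lik + k * (number of segments)."""
    return -2.0 * bernoulli_loglik(C, segments) + k * len(segments)

# compare two candidate partitions of a toy sequence with one change point
C = [1] * 30 + [0] * 70
one_seg = [list(range(100))]
two_seg = [list(range(30)), list(range(30, 100))]
aic_one = loss(C, one_seg, k=2)
aic_two = loss(C, two_seg, k=2)
# the partition matching the true change point attains the lower loss
```

A grid search over $\theta$ would call `loss` on the segments produced by Alg. 1 and keep the minimizer.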
### 2.3 Simulation: Bernoulli-distributed Observations
Numerical experiments are conducted to demonstrate the performance of Alg. 1.
The first experiment investigates the asymptotic property of the proposed
algorithm as the sample size $n$ increases. Two hidden states $S_{1}$ and
$S_{2}$ are arranged in a sequence like
$\\{S_{1},S_{1},...,S_{1},S_{2},...,S_{2},S_{1},...\\}$. The length of each
segment of a hidden state is a fixed proportion of $n$. The change points are
set at $0.1n,0.2n,0.4n,0.7n,0.9n$, so there are six alternating segments in
total. Observations are Bernoulli distributed with emission probability
$p_{1}$ under state $S_{1}$ and $p_{2}$ under $S_{2}$. The experiment is
repeated with different $n$ and emission probabilities $p_{1}$ and $p_{2}$.
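The simulated sequence of this first experiment can be generated along the following lines (a sketch with an assumed random seed, not the authors' exact setup):

```python
import random

def simulate(n, p1, p2, breaks=(0.1, 0.2, 0.4, 0.7, 0.9), seed=0):
    """Bernoulli sequence whose emission probability alternates between p1 and p2
    at change points placed at fixed proportions of n."""
    rng = random.Random(seed)
    cps = [0] + [int(b * n) for b in breaks] + [n]
    bits, states = [], []
    for i in range(len(cps) - 1):
        p = p1 if i % 2 == 0 else p2          # alternate S_1 / S_2
        for _ in range(cps[i], cps[i + 1]):
            bits.append(1 if rng.random() < p else 0)
            states.append(i % 2)
    return bits, states

bits, states = simulate(n=2000, p1=0.1, p2=0.5)
```

The decoding error rate is then the fraction of time points whose decoded state disagrees with `states`.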
The mean and standard deviation of decoding error rates for AIC and BIC are
presented in Table 1. The asymptotic property appears to hold for the decoding
algorithm. Moreover, AIC performs better than BIC in most cases, especially
when $p_{1}$ is close to $p_{2}$. We will consistently apply AIC in the
remaining experiments of the paper.
Table 1: Independent Bernoulli sequence: average decoding error rates with
standard deviation in brackets
| $p_{1}$=0.1, $p_{2}$=0.05 | $p_{1}$=0.1, $p_{2}$=0.2 | $p_{1}$=0.1, $p_{2}$=0.3 | $p_{1}$=0.1, $p_{2}$=0.5
---|---|---|---|---
n | AIC | BIC | AIC | BIC | AIC | BIC | AIC | BIC
1000 | 0.361 (0.107) | 0.455 (0.072) | 0.280 (0.122) | 0.410 (0.105) | 0.115 (0.051) | 0.147 (0.095) | 0.060 (0.023) | 0.054 (0.019)
2000 | 0.308 (0.117) | 0.441 (0.100) | 0.183 (0.098) | 0.309 (0.151) | 0.073 (0.035) | 0.059 (0.026) | 0.035 (0.013) | 0.030 (0.010)
3000 | 0.235 (0.116) | 0.412 (0.116) | 0.122 (0.065) | 0.190 (0.135) | 0.048 (0.021) | 0.042 (0.016) | 0.023 (0.008) | 0.020 (0.007)
In the second experiment, data is generated under a Hidden Markov Model (HMM)
with 2 hidden states. The decoding error rate is calculated and compared for
the proposed algorithm and the HMM. Since the true transition and emission
probabilities are unknown when applying the Viterbi algorithm [13], the
forward-backward algorithm [35] and an EM algorithm of Baum–Welch type [6]
must first be implemented to calibrate the model parameters. Generally, the
transition and emission probabilities are randomly initialized and then
updated through the iterations. We refer to this whole process of parameter
estimation and decoding as 'HMM'. On the other hand, the pure Viterbi
algorithm is implemented with the true transition and emission probabilities
as input, which can be regarded as the mathematical 'Truth'. For the
convenience of simulation, we set $p_{11}=p_{22}$ in the transition
probability matrix $A$, so $A$ is controlled by a single parameter $p_{12}$,
$A=\begin{bmatrix}1-p_{12}&p_{12}\\\ p_{12}&1-p_{12}\end{bmatrix}$
Observations of length $n=1000$ are simulated with different transition
matrices $A$ and emission probabilities $p_{1}$ and $p_{2}$. The experiment is
repeated and the mean decoding error rates are reported in Table 2. The
overall decoding errors decrease as $p_{12}$ decreases, and the proposed
method improves substantially and approaches the 'Truth', especially when
$p_{12}$ is less than $0.01$. This can be explained by the fact that a simple
model with fewer alternating segments is favored by all the approaches.
However, the Viterbi algorithm in conjunction with parameter estimation is
unsatisfactory in all cases, which reveals the challenges of applying the
Viterbi or Baum–Welch algorithms in practice.
Table 2: Bernoulli sequence under HMM: average decoding error rates
| $p_{1}$=0.1, $p_{2}$=0.05 | $p_{1}$=0.1, $p_{2}$=0.2 | $p_{1}$=0.1, $p_{2}$=0.3 | $p_{1}$=0.1, $p_{2}$=0.5
---|---|---|---|---
$p_{12}$ | Truth | HMM | Alg.1 | Truth | HMM | Alg.1 | Truth | HMM | Alg.1 | Truth | HMM | Alg.1
0.1 | 0.4558 | 0.4660 | 0.4585 | 0.4411 | 0.4482 | 0.4570 | 0.3572 | 0.3985 | 0.4414 | 0.2162 | 0.2854 | 0.3891
0.05 | 0.4225 | 0.4519 | 0.4318 | 0.4092 | 0.4415 | 0.4309 | 0.2945 | 0.3957 | 0.3790 | 0.1430 | 0.2752 | 0.2887
0.01 | 0.3590 | 0.4040 | 0.3522 | 0.2839 | 0.4042 | 0.2972 | 0.1431 | 0.3936 | 0.2047 | 0.0415 | 0.2677 | 0.1048
0.005 | 0.2777 | 0.3417 | 0.2995 | 0.2086 | 0.3731 | 0.2287 | 0.0639 | 0.3682 | 0.1133 | 0.0168 | 0.2767 | 0.0596
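For readers who want to reproduce the 'Truth' baseline, a minimal pure-Python Viterbi decoder for the 2-state Bernoulli HMM above (symmetric transitions, known emissions; a sketch, not the authors' code) is:

```python
import math

def viterbi(obs, p12, p_emit, pi=(0.5, 0.5)):
    """Viterbi decoding for a 2-state HMM with p11 = p22 = 1 - p12 and
    Bernoulli emissions p_emit[s] = P(obs = 1 | state s)."""
    A = [[1 - p12, p12], [p12, 1 - p12]]
    def log_b(s, o):
        p = p_emit[s] if o == 1 else 1 - p_emit[s]
        return math.log(p)
    delta = [math.log(pi[s]) + log_b(s, obs[0]) for s in range(2)]
    back = []
    for o in obs[1:]:
        prev, delta, ptr = delta, [], []
        for s in range(2):
            cands = [prev[r] + math.log(A[r][s]) for r in range(2)]
            best = 0 if cands[0] >= cands[1] else 1
            delta.append(cands[best] + log_b(s, o))
            ptr.append(best)
        back.append(ptr)
    # backtrack from the most likely final state
    path = [0 if delta[0] >= delta[1] else 1]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path

obs = [1] * 20 + [0] * 20
path = viterbi(obs, p12=0.05, p_emit=(0.9, 0.1))
```

With these strong emissions the decoded path switches state exactly at the observation boundary.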
Apart from decoding, the estimation accuracy of the emission probabilities is
also compared for the proposed method and the Baum–Welch algorithm. Following
the same simulation as above, the results of the two-dimensional estimation of
$p_{1}$ and $p_{2}$ are shown in Figure 2. As $p_{12}$ decreases, the
estimates from the proposed method lie closely around the true parameters. In
contrast, the estimates from the Baum–Welch algorithm are far from the truth
for different choices of $p_{12}$. Indeed, when $p_{12}=0.01$, the average
2-dim Euclidean distance from the estimates to the true parameters is 0.041
for the proposed method, which is much lower than 0.184 for the Baum–Welch
algorithm.
In summary, the proposed method performs well in both decoding and probability
estimation. Though an assumption of independent observations is made, the
proposed method is robust when weak serial dependence is present and is
competitive with HMM even under Markovian conditions.
Figure 2: Data is generated under HMM settings with $p_{1}=0.1$, $p_{2}=0.5$,
and different $p_{12}$. The simulation is repeated for 100 times and each dot
is an estimation for $p_{1}$ and $p_{2}$ in one realization. (A) $p_{12}=0.1$;
mean (std) Euclidean distance is 0.182 (0.089) for our method and 0.163
(0.080) for EM. (B) $p_{12}=0.01$; mean (std) Euclidean distance is 0.041
(0.024) for our method and 0.184 (0.095) for EM.
## 3 Encoding-and-Decoding Procedure
### 3.1 The Method
So far, the choice of threshold in defining an event, or a large return, plays
an important role. A natural question is how stable the estimation result is
with respect to the threshold. For example, if an observed return is marked
when it is below a threshold $\pi$, the intensity parameter of the geometric
distribution becomes $p_{s}(\pi)=F_{s}(\pi)$ given a hidden state $s$.
Assuming the continuity of the underlying distribution $F_{s}$, $p_{s}$ is
also continuous in $\pi$. Thus, the emission probability under a hidden state
does not fluctuate much if $\pi$ varies slightly. Indeed, our experiments show
that the estimated emission probability is not sensitive to $\pi$. To keep the
notation consistent, we will use $p^{\pi}(t)$ or $F^{\pi}(t)$ when both $t$
and $\pi$ are present.
The idea of dealing with a stochastic process of continuous observations is
described as follows. In the encoding phase, we iteratively switch the
excursion threshold and discretize the time series into a 0-1 process with
each threshold applied. After that, we implement the searching algorithm to
decode each process separately. As a result, a vector of estimated emission
probabilities $\hat{p}^{\pi}(t)$ is obtained at time $t$ for different choices
of $\pi$. This in fact provides an estimate of the Cumulative Distribution
Function (CDF) at time $t$ via $\hat{F}^{\pi}(t)=\hat{p}^{\pi}(t)$, where
$\hat{F}^{\pi}(t)$ is a function of $\pi$ at a fixed $t$. Taking the simulated
t-distributed data in Appendix A as an example, Figure 3 shows a series of
CDFs with a change point embedded in the middle, though it is hard to detect
by eye. Lastly, all the decoding information is aggregated in an
emission vector $\vec{p}(t)$, and the volatility dynamic is discovered via
cluster analysis. Suppose that $\\{C^{\pi}(t)\\}_{t}$ is a 0-1 coding sequence
obtained by applying a threshold $\pi$ upon the returns, and $\Pi$ is a pre-
determined threshold set, for example, $\Pi$ can be a series of quantiles of
the marginal distribution
$\Pi=\\{0.9\,\textit{quantile},0.8\,\textit{quantile},...,0.1\,\textit{quantile}\\}$.
The encoding-and-decoding algorithm is described in Alg.2.
Alg.2 Encoding-and-Decoding
1\. For loop: cycle threshold $\pi$ through
$\Pi=\\{\pi_{1},\pi_{2},...,\pi_{V}\\}$:
Define events and code the whole process as a 0-1 digital string
$\\{C^{\pi}(t)\\}_{t=1}^{n}$,
$C^{\pi}(t)=\begin{cases}1&\quad
\log\frac{X(t)}{X(t-1)}\leq\pi\enspace\textit{if}\enspace\pi<0\\\ 1&\quad
\log\frac{X(t)}{X(t-1)}\geq\pi\enspace\textit{if}\enspace\pi>0\\\
0&\quad\textit{otherwise}\end{cases}$
Repeat step 3 & 4 in Alg.1 and estimate the probability $\hat{p}^{\pi}(t)$ by
(2).
End For
2\. Stack the estimated emission probability at $t$ in a vector $\vec{p}(t)$,
$\vec{p}(t):=(\hat{p}^{\pi_{1}}(t),\hat{p}^{\pi_{2}}(t),...,\hat{p}^{\pi_{V}}(t))$
3\. Merge time points with comparable $\vec{p}(t)$ together via clustering
analysis
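The spirit of Alg. 2 can be sketched as follows; for brevity the decoding step is replaced by known segments, whereas the full procedure would obtain them from Alg. 1:

```python
import random

def quantile(xs, q):
    """Nearest-rank empirical quantile."""
    s = sorted(xs)
    return s[min(len(s) - 1, int(q * len(s)))]

def encode(returns, pi):
    """One-sided coding of Alg. 2: mark returns below pi (pi < 0) or above pi (pi > 0)."""
    if pi < 0:
        return [1 if r <= pi else 0 for r in returns]
    return [1 if r >= pi else 0 for r in returns]

def p_vector(returns, seg, thresholds):
    """Per-threshold emission frequency over a segment (in the full procedure the
    decoded segments come from Alg. 1, not from known labels)."""
    return [sum(encode(returns, pi)[t] for t in seg) / len(seg) for pi in thresholds]

rng = random.Random(1)
# two regimes with different scales (sigma = 1, then sigma = 3); assumed toy data
returns = [rng.gauss(0, 1) for _ in range(1000)] + [rng.gauss(0, 3) for _ in range(1000)]
thresholds = [quantile(returns, q) for q in (0.1, 0.2, 0.8, 0.9)]
v1 = p_vector(returns, range(0, 1000), thresholds)
v2 = p_vector(returns, range(1000, 2000), thresholds)
```

The emission vectors of the two regimes separate clearly: the high-volatility segment exceeds every tail threshold more often, which is what the later clustering step exploits.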
Figure 3: Simulated data from 1998 to 2003. Data from 1998 to 2000 follows a
student-t distribution with 2 degrees of freedom; data from 2001 to 2003
follows a student-t distribution with 5 degrees of freedom.
It may seem surprising that the CDF can be estimated based on only one
observation at each time stamp. Indeed, $\hat{F}^{\pi}$ does not consistently
converge to the true CDF as the number of thresholds $\pi$ increases, due to
the limited information in the data. The estimated CDF here depends on the
decoding result. Specifically, if there exists a threshold $\pi$ by which the
underlying distributions can be separated well, then the decoding achieves a
good result that reflects the distributional changes. On the other hand, if
$\pi$ is not set appropriately, for example, if the emission probabilities of
the underlying distributions at states $s$ and $s^{{}^{\prime}}$ are very
close to each other, say
$\hat{F}_{s}^{\pi}\cong\hat{F}_{s^{{}^{\prime}}}^{\pi}$, then the decoding
algorithm fails to separate the two states with such a threshold applied.
There is an ongoing discussion about how to choose a good threshold
to discretize a sequence of continuous observations. A heuristic idea is to
tune the optional value of $\pi$ such that the estimated probabilities under
different states are far apart from each other. For example, consider a max-
min estimator of $\pi$,
$\hat{\pi}=\operatorname*{argmax}_{\pi}\min_{s,s^{{}^{\prime}}}|\hat{p}_{s}(\pi)-\hat{p}_{s^{{}^{\prime}}}(\pi)|$
(5)
It is worth remarking that the proposed procedure avoids the issue of tuning
the threshold by imposing a series of thresholds and aggregating all the
decoding results together. The information on distributional changes is
preserved in the emission vector $\vec{p}(t)$. On the other hand, an
irrelevant result from an unreasonable threshold would not change the
aggregated result much. For example, if
$\hat{F}_{s}^{\pi}\cong\hat{F}_{s^{{}^{\prime}}}^{\pi}$, then no
distributional change is detected in the process, so $\hat{p}^{\pi}(t)$ is
constant for all $t$. In summary, the algorithm is implemented by shifting the
value of $\pi$ from high to low to obtain a sequence of estimated CDFs,
although not all $\pi$'s are meaningful in the decoding. By further combining
the estimated emissions into a vector, the aggregation sheds light on
differentiating the underlying distributions.
### 3.2 Clustering Analysis
The next question is how to extract the information in
$\\{\vec{p}(t)\\}_{t=1}^{n}$ and how many underlying distributions are
required to describe the pattern of distribution switching. Determining the
number of underlying states is in fact a question for all regime-switching
models. Generally, the more states are taken into consideration, the less
tractable a model becomes. For example, a 2-state lognormal Markov model
contains 6 parameters, while a 3-state model increases the number to 10. The
number of states is usually decided subjectively or tuned with an extra
criterion. Given the estimated probability vectors $\vec{p}(t)$ from Alg. 2,
the problem above can be resolved by clustering similar time points together
such that the CDF curves within each cluster have a comparable shape. It is
proved in [38] that the clustering index obtained via hierarchical clustering
with 'Ward' linkage maintains the asymptotic property in change-point
analysis. Furthermore, the number of underlying states can be determined by
searching through the number of clusters embedded in the emission vectors
$\vec{p}(t)$. Here, hierarchical clustering with 'Ward' linkage is implemented
to cluster similar time points, as shown in Figure 4(A). One can visualize the
dendrogram to decide the number of clusters, or employ extra criteria such as
the Silhouette score to measure the quality of the clustering.
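As a self-contained stand-in for a library routine, a greedy agglomerative clustering with Ward's merging cost can be sketched as follows (in practice one would use an existing hierarchical-clustering implementation, which also yields the dendrogram):

```python
def ward_cluster(points, k):
    """Greedy agglomerative clustering of emission vectors using Ward's
    merging cost; stops when k clusters remain (a minimal sketch)."""
    clusters = [[p] for p in points]
    def mean(c):
        d = len(c[0])
        return [sum(p[i] for p in c) / len(c) for i in range(d)]
    def ward_cost(a, b):
        ma, mb = mean(a), mean(b)
        sq = sum((x - y) ** 2 for x, y in zip(ma, mb))
        return len(a) * len(b) / (len(a) + len(b)) * sq
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                c = ward_cost(clusters[i], clusters[j])
                if best is None or c < best[0]:
                    best = (c, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]   # merge the cheapest pair
        del clusters[j]
    return clusters

# toy emission vectors forming three well-separated groups (assumed data)
pts = [(0.00, 0.10), (0.01, 0.12), (0.50, 0.60), (0.52, 0.58),
       (0.90, 0.95), (0.88, 0.93)]
cl = ward_cluster(pts, 3)
```

Cutting at different `k` corresponds to cutting the dendrogram at different heights, which is how the number of underlying states is explored.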
Figure 4: 3-state decoding results for the simulated data in Appendix A. (A)
Hierarchical Clustering Tree; (B) cluster index switching over time;
(C), (D), (E): estimated CDF versus true CDF in clusters 1, 2, and 3,
respectively.
The numerical data, simulated with student-t distributions, is described in
Appendix A. The dendrogram shows that 2 or 3 clusters may be embedded inside
the observations. If we cut the dendrogram into 3 clusters, the trajectory of
cluster indices can almost perfectly represent the alternating hidden states,
see Figure 4(B). If 2 clusters are taken rather than 3, the result still makes
sense, since cluster 2 and cluster 3 are merged together in contrast to the
high-intensity cluster, cluster 1. By calculating the average emission curve
in each cluster, one can compare the estimated probability function with the
theoretical distributions. Figure 4(C)-(E) shows that the estimated emission
curve fits the true CDF well in each cluster.
The decoding result is robust to the number of hidden states assumed in each
0-1 process. Without prior knowledge of the 3-state structure, a 2-state or
4-state decoding scheme still yields a clustering trajectory that describes
the distributional changes well; see Figure 14 and Figure 15 in Appendix D.
### 3.3 Simulation: Continuous Observations
In this section, we present simulation results for the encoding-and-decoding
approach under continuous Hidden Markov Model (HMM) settings. Decoding errors
are compared with those of well-known parametric approaches. All computations
for the parametric approaches are carried out with the Python package
hmmlearn.
In the first setting, data is generated by a Gaussian HMM with 2 hidden states
$S_{1}$ and $S_{2}$. We set both conditional means to $\mu_{1}=\mu_{2}=0$ and
the conditional variances to $\sigma^{2}_{1}$ and $\sigma^{2}_{2}$. A Gaussian
HMM with 2 hidden states is fitted to calibrate the model parameters and
decode the hidden states. In the second setting, data is generated under a
Gaussian Mixture Model HMM (GMMHMM) with 2 hidden states and 2 Gaussian
components,
$w_{a}\mathcal{N}(0,\sigma^{2}_{i,a})+w_{b}\mathcal{N}(0,\sigma^{2}_{i,b})$,
in which $w_{a}$ is the weight of component ‘a’ and $\sigma^{2}_{i,a}$ is the
variance of component ‘a’ under state $S_{i}$, for $i=1,2$. A GMMHMM with 2
hidden states and 2 Gaussian components is implemented for comparison.
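A minimal generator for the first setting can be sketched as follows (standard library only; parameter values are illustrative, not the exact simulation code behind Table 3):

```python
import random

def simulate_gaussian_hmm(n, p12, p21, sigmas, seed=0):
    """Simulate a zero-mean, 2-state Gaussian HMM.

    p12 / p21 are the state-switching probabilities; sigmas[i] is the
    conditional standard deviation under state i.
    """
    rng = random.Random(seed)
    states = [0]
    for _ in range(1, n):
        p_switch = p12 if states[-1] == 0 else p21
        states.append(1 - states[-1] if rng.random() < p_switch else states[-1])
    y = [rng.gauss(0.0, sigmas[s]) for s in states]
    return y, states

y, s = simulate_gaussian_hmm(n=1000, p12=0.01, p21=0.01, sigmas=(1.0, 3.0))
```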
The mean decoding error rates under the two settings are reported in Table 3
and Table 4, respectively. With a fixed sample size $n=1000$, all decoding
results improve as the transition probability $p_{12}$ decreases. In
particular, once the serial dependence is weak enough ($p_{12}$ below 0.01),
the proposed method clearly outperforms the parametric models from which the
data is generated. The small sample size may explain the failure of the
parametric models, suggesting that the proposed non-parametric method is more
stable and reliable than the parametric ones when data are limited in a real
application.
Table 3: Gaussian HMM setting: average decoding error rates; better error rate
marked in bold
| $\sigma^{2}_{1}$=0.4, $\sigma^{2}_{2}$=1 | $\sigma^{2}_{1}$=1, $\sigma^{2}_{2}$=2 | $\sigma^{2}_{1}$=1, $\sigma^{2}_{2}$=3
---|---|---|---
$p_{12}$ | GaussianHMM | Our Method | GaussianHMM | Our Method | GaussianHMM | Our Method
0.1 | 0.4067 | 0.4576 | 0.4592 | 0.4589 | 0.3220 | 0.4616
0.05 | 0.3631 | 0.4321 | 0.4586 | 0.4462 | 0.2623 | 0.4272
0.01 | 0.3558 | 0.2771 | 0.4314 | 0.3330 | 0.1755 | 0.2604
0.005 | 0.3447 | 0.2160 | 0.4298 | 0.2488 | 0.1842 | 0.1609
Table 4: GMMHMM setting: average decoding error rates; better error rate
marked in bold
| $\sigma^{2}_{1,a}$=0.1 , $\sigma^{2}_{1,b}$=0.5 , $\sigma^{2}_{2,a}$=1 , $\sigma^{2}_{2,b}$=1.5 | $\sigma^{2}_{1,a}$=0.1 , $\sigma^{2}_{1,b}$=0.8 , $\sigma^{2}_{2,a}$=0.5 , $\sigma^{2}_{2,b}$=1.5
---|---|---
$w_{a}$=$w_{b}$=0.5 | $w_{a}$=0.3 , $w_{b}$=0.7 | $w_{a}$=$w_{b}$=0.5 | $w_{a}$=0.3 , $w_{b}$=0.7
$p_{12}$ | GMMHMM | Our Method | GMMHMM | Our Method | GMMHMM | Our Method | GMMHMM | Our Method
0.1 | 0.3943 | 0.4533 | 0.4326 | 0.4622 | 0.4370 | 0.4573 | 0.4661 | 0.4596
0.05 | 0.3661 | 0.4137 | 0.4135 | 0.4150 | 0.4368 | 0.4259 | 0.4605 | 0.4401
0.01 | 0.3456 | 0.2342 | 0.4072 | 0.2287 | 0.4290 | 0.3390 | 0.4565 | 0.3663
0.005 | 0.3292 | 0.1431 | 0.3810 | 0.1515 | 0.4236 | 0.2558 | 0.4550 | 0.2981
## 4 Real Data Experiments
### 4.1 Forecasting
Forecasting stock prices or returns is one of the basic topics in financial
markets. As one of the forecasting models, the continuous Hidden Markov Model
has been widely used due to its strong statistical foundation and
tractability. Following [36, 37], a 1-step-ahead forecast can be made by
looking for a “similar” historical data set that closely matches the current
values. In this section, we implement our encoding-and-decoding approach under
the HMM forecasting framework to predict stock returns and compare the results
with those of the Gaussian HMM.
We first review the stock forecasting process using an HMM, following [36].
Given observations $\\{Y(t)\\}_{t=1}^{T}$, suppose that the goal is to
predict the stock price at time $T+1$. In the first step, a fixed training
window of length $D$ is chosen, and the training data set
$Y_{tr}=\\{Y(t)\\}_{t=T-D+1}^{T}$ is used to calibrate the HMM parameters,
denoted by $\lambda$, and to calculate the observation probability
$P(Y_{tr}|\lambda)$. In the second step, we move the training window backward
by one time stamp to obtain $Y_{-1}=\\{Y(t)\\}_{t=T-D}^{T-1}$, and keep
moving backward stamp by stamp until all the historical data is covered, so
$Y_{-k}=\\{Y(t)\\}_{t=T-D+1-k}^{T-k}$, for $k=1,...,T-D$. In the last step,
we find the data set $Y_{-k^{*}}$ that best matches the training data, i.e.
$P(Y_{-k^{*}}|\lambda)\cong P(Y_{tr}|\lambda)$. Thus, $\hat{Y}(T+1)$ is
predicted by
predicted by
$Y(T)+\left(Y(T-k^{*}+1)-Y(T-k^{*})\right)\times\operatorname{sign}\left(P(Y_{tr}|\lambda)-P(Y_{-k^{*}}|\lambda)\right)$ (6)
Similarly, to predict the stock price at time $T+2$, the training window moves
one stamp forward, so the training data is updated by
$Y_{tr}=\\{Y(t)\\}_{t=T-D+2}^{T+1}$.
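The matching loop and Eq. (6) can be sketched as below; `loglik` is a hypothetical stand-in for the window score $\log P(\cdot|\lambda)$ from whichever fitted model is used:

```python
import numpy as np

def forecast_next(y, D, loglik):
    """1-step-ahead forecast via Eq. (6): scan past windows Y_{-k},
    pick the best match k* to the training window, then shift by the
    matched window's next move."""
    T = len(y)                            # y[t-1] stores Y(t)
    ref = loglik(y[T - D:T])              # score of the training window
    best_k, best_gap = None, float("inf")
    for k in range(1, T - D + 1):
        gap = abs(loglik(y[T - D - k:T - k]) - ref)
        if gap < best_gap:
            best_gap, best_k = gap, k
    cand = loglik(y[T - D - best_k:T - best_k])
    # Y(T) + (Y(T-k*+1) - Y(T-k*)) * sign(P(Y_tr) - P(Y_{-k*}))
    yhat = y[-1] + (y[T - best_k] - y[T - best_k - 1]) * np.sign(ref - cand)
    return yhat, best_k
```

On a periodic toy series with a position-aware score, the best match is found exactly one full period back.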
Figure 5: Time plot of IBM hourly returns from September to December 2006.
The solid line is the real return value and the dashed line is the
1-step-ahead forecast via (A) the Gaussian Hidden Markov Model and (B) the
proposed method. The $D$ value is set at 200.
In the implementation of Gaussian HMM, model parameters are summarized by
$\lambda:=\\{\pi,A,\mu|s,\sigma|s\\}$ where $\mu|s$ and $\sigma|s$ are mean
and standard deviation of Gaussian distribution given hidden state $s$,
respectively. Once $\lambda$ is estimated in the training set, the probability
of observations can be calculated by the forward-backward algorithm. However,
the Gaussian assumption is often violated by heavy-tailed returns, which
makes the forecasting model implausible. In contrast, the proposed
encoding-and-decoding approach estimates the probability of observations
without assuming any distribution family. By applying a series of quantile
thresholds $\pi=\\{\pi_{1},\pi_{2},...,\pi_{V}\\}$ in encoding, one can
locate the bin in which $Y(t)$ falls, so the probability of observing $Y(t)$
can be calculated from the estimated emission $\hat{F}^{\pi}$ by
$\hat{P}(Y(t)|S)=\sum_{i=1}^{V+1}(\hat{F}^{\pi_{i}}-\hat{F}^{\pi_{i-1}})\mathbf{1}_{\\{\pi_{i-1}<Y(t)\leq\pi_{i}\\}}$
(7)
where $\hat{F}^{\pi_{0}}$ is defined as 0 and $\hat{F}^{\pi_{V+1}}$ as 1. The
probability of a set of observations then follows from the independence
assumption,
$\hat{P}(Y_{-k}|S)=\prod_{t=T-D+1-k}^{T-k}\hat{P}(Y(t)|S)$ (8)
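Assuming ascending thresholds $\pi_{1}<\dots<\pi_{V}$, Eqs. (7)-(8) reduce to a bin lookup plus a sum of logs; a sketch (function and variable names are ours):

```python
import numpy as np

def window_log_prob(y, thresholds, F_hat):
    """Log-probability of a window of observations under Eqs. (7)-(8).

    thresholds: ascending quantile thresholds pi_1 < ... < pi_V
    F_hat: estimated CDF values at the thresholds, padded below with
    F^{pi_0} = 0 and above with F^{pi_{V+1}} = 1.
    """
    F = np.concatenate(([0.0], np.asarray(F_hat, dtype=float), [1.0]))
    bin_probs = np.diff(F)    # F^{pi_i} - F^{pi_{i-1}} for each bin
    idx = np.searchsorted(np.asarray(thresholds), y, side="left")
    return float(np.sum(np.log(bin_probs[idx])))
```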
In the real data experiment, hourly returns from January to August 2006 are
used for training to predict next-hour-ahead returns from September to
December 2006. A Gaussian HMM with 4 hidden states is implemented for
comparison. Since the choice of window length $D$ is not well defined, two
values are tried, $D=100$ and $D=200$. To measure model performance, three
error estimators are reported for four technology stocks in Table 5: root
mean squared error (RMSE), mean absolute error (MAE), and mean absolute
percentage error (MAPE). Figure 5 visualizes the forecasting result for IBM.
In the forecasting task, the proposed method slightly outperforms the classic
Gaussian HMM in most cases except ‘INTC’ at $D=100$. Consistent with our
earlier experiments, the proposed method benefits from decoding hidden states
and estimating the emission probability non-parametrically, whereas the
violation of the Gaussian assumption may restrict the effectiveness of the
HMM in real applications.
Table 5: 1-step-ahead prediction errors for four stocks: IBM Common
Stock (IBM), Intel Corporation (INTC), NVIDIA Corporation (NVDA), and
Broadcom Inc. (BRCM); better result marked in bold
| | D=100 | D=200
---|---|---|---
Stock | Criterion | GaussianHMM | Our Method | GaussianHMM | Our Method
IBM | RMSE | 0.0041 | 0.0040 | 0.0041 | 0.0038
| MAE | 0.0032 | 0.0031 | 0.0032 | 0.0029
| MAPE | 6.487 | 6.986 | 6.366 | 6.502
INTC | RMSE | 0.0057 | 0.0062 | 0.0067 | 0.0058
| MAE | 0.0044 | 0.0046 | 0.0050 | 0.0045
| MAPE | 5.046 | 6.656 | 7.402 | 6.728
NVDA | RMSE | 0.0112 | 0.0111 | 0.0126 | 0.0114
| MAE | 0.0090 | 0.0088 | 0.0096 | 0.0089
| MAPE | 10.122 | 8.035 | 8.750 | 8.166
BRCM | RMSE | 0.0117 | 0.0109 | 0.0118 | 0.0113
| MAE | 0.0091 | 0.0086 | 0.0088 | 0.0086
| MAPE | 13.905 | 8.246 | 9.550 | 9.715
### 4.2 Volatility Dynamics in High-frequency Data
In this section, the highest-frequency tick-by-tick data is analyzed for the
S$\&$P500. The stock returns are calculated on a market time scale, measured
by transactions rather than the real-time clock. This analysis was first
suggested by [14] and developed thoroughly by [15]. A well-known example is a
random-walk model suggesting that the variance of returns depends on the
number of transactions. Following this idea, we apply the finest sampling
rate to alleviate the serial dependency. It is reasonable to assume that
stock returns are exchangeable within a certain number of transactions.
The encoding-and-decoding algorithm is implemented to discover the volatility
dynamics of every single stock in the S$\&$P500. Since the result is not
sensitive to the number of hidden states, a 2-state decoding procedure is
applied and the number of clusters is determined according to the tree height
of the
hierarchical clustering. It turns out that there are 3 potential clusters
embedded in the returns of IBM in 2006. The estimated CDF for each cluster is
shown in Figure 6. The heavier-tailed distribution of cluster 3 reflects a
high-volatility phase; cluster 1 indicates a low-volatility phase. As a phase
in the middle, cluster 2 shows an asymmetric distribution with a heavy left
tail but a light right tail, whereas clusters 1 and 3 look more symmetric on
both sides.
Figure 6: The estimated CDFs for the 3 clusters of IBM in January 2006.
The single-stock volatility dynamic is then represented by the cluster index
returned by Alg. 2. The dynamic pattern of IBM in January 2006 is shown in
Figure 7. Following the notation above, cluster 1, cluster 2, and cluster 3
indicate a low-, medium-, and high-volatility phase, respectively. Based on
the daily segments, it is clear that the unstable high-volatility phase
mostly appears at the market opening, and usually shows up two or three times
per day.
Figure 7: Recovered volatility trajectory of IBM in January 2006.
### 4.3 Information Flows between Stocks
Beyond detecting the volatility dynamics of a single stock, we further
consider the network among all the S$\&$P500 stocks to present how one
stock’s returns are related to another’s. Such a relationship can be
quantified by the cross-correlation between two stock time series [23, 24],
and the correlation matrix has been investigated via random matrix theory
[25] and clustering analysis [26]. Conditional correlation [27] and partial
correlation [28] were studied to describe how the relationship between two
stocks is influenced by other stocks. However, since the empirical
distribution of returns is far from Gaussian and correlation captures only
linear relationships, a distribution-free and non-linear measure is needed to
quantify financial connections.
Transfer Entropy (TE) [17, 18], as an extension of the concept of Granger
causality [16], was proposed to measure the reduction of Shannon entropy in
forecasting a target variable given the past values of a source variable.
Denote the target variable at time $t$ as $Y_{t}$ and the source variable as
$X_{t}$. The Transfer Entropy from $X$ to $Y$ in terms of the past $l$ lags
is defined by
$TE_{X\rightarrow
Y}(l)=\sum_{t=l+1}^{n}P(Y_{t},Y_{(t-l):(t-1)},X_{(t-l):(t-1)})\log\frac{P(Y_{t}|Y_{(t-l):(t-1)},X_{(t-l):(t-1)})}{P(Y_{t}|Y_{(t-l):(t-1)})}$
It is worth remarking that the measure is asymmetric: in general,
$TE_{X\rightarrow Y}\neq TE_{Y\rightarrow X}$.
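A plug-in estimator of the TE above for already-digitized symbol sequences can be sketched with the standard library (an illustrative estimator, not the authors' implementation):

```python
from collections import Counter
from math import log

def transfer_entropy(x, y, l=1):
    """Plug-in estimate of TE_{X->Y}(l) for discrete symbol sequences."""
    n = len(y)
    joint, both_past, tgt_past, past = Counter(), Counter(), Counter(), Counter()
    for t in range(l, n):
        yp, xp = tuple(y[t - l:t]), tuple(x[t - l:t])
        joint[(y[t], yp, xp)] += 1     # (Y_t, Y_past, X_past)
        both_past[(yp, xp)] += 1
        tgt_past[(y[t], yp)] += 1
        past[yp] += 1
    N = n - l
    te = 0.0
    for (yt, yp, xp), c in joint.items():
        p_full = c / both_past[(yp, xp)]        # P(Y_t | Y_past, X_past)
        p_self = tgt_past[(yt, yp)] / past[yp]  # P(Y_t | Y_past)
        te += (c / N) * log(p_full / p_self)
    return te
```

For a target that is a delayed copy of an i.i.d. binary source, the estimate approaches $\log 2$ in one direction and 0 in the other.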
However, it is computationally infeasible to calculate the exact TE value,
owing to the difficulty of estimating conditional or joint distributions,
especially when $l$ is large. In financial applications, one commonly cuts
the observation range into disjoint bins and assigns a binning symbol to each
data point [30, 31, 32]. However, a straightforward binning with equal
probability for every symbol can lead to biased results [29]. To the best of
our knowledge, the literature still lacks a way to digitize stock returns
that effectively reveals the dynamic volatility. Simple binning methods such
as histograms or clustering fail to catch the excursions of large returns, so
only the trend of returns is captured while the dynamic pattern of volatility
is missed. The encoding-and-decoding procedure remedies these drawbacks by
calculating TE through the recovered volatility states, improving the measure
of information flow so that it captures the causality of stock volatility
rather than the similarity of trend patterns.
To deal with missing data in high-frequency trading time series, a
transformation is proposed to put a pair of stocks on the same time scale;
pairwise dependency is then measured via Transfer Entropy. The implementation
details are given in Appendix C. A pair of decoded symbolic sequences is
shown in Figure 8. The higher the information flow from $X$ to $Y$, the more
strongly $X$ promotes the volatility of $Y$. In the first example,
Figure 8(A) shows that MXIM and NTAP share a large intersection of volatility
phases. In particular, when MXIM is in a high-volatility phase, the price of
NTAP is likely to be in state 3. The information flow from MXIM to NTAP is
0.039, and 0.016 in reverse. In the second example, Figure 8(B) shows that
TWX has a stronger influence on the volatility stages of BRCM: the measure
from TWX to BRCM is 0.036, and 0.026 in reverse.
Figure 8: A pair of volatility trajectories summarized in real time. (A)
MXIM (Maxim Integrated Products, Inc.) vs. NTAP (NetApp, Inc.); (B) TWX (Time
Warner, Inc.) vs. BRCM (Broadcom, Inc.)
Once the Transfer Entropy is calculated for all pairs of stocks in the
S$\&$P500, the result is recorded in a $500\times 500$ asymmetric matrix
whose $(i,j)$ entry is the information flow from the $i$-th stock to the
$j$-th stock. We rearrange the rows and columns such that the row sums and
the column sums are each in ascending order. The reordered TE matrix is shown
in Figure 16 in Appendix D. The idea of reordering follows the discussion of
node centrality for directed networks in [31]. Two types of node strength are
considered, for incoming and outgoing edges. The incoming node strength at
node $i$, denoted $NS^{i}_{in}$, is defined as the sum of the weights of all
the incoming edges to $i$,
$NS^{i}_{in}=\sum_{j=1}^{n}TE_{j\rightarrow i}$ (9)
Similarly, an outgoing node strength, denoted as $NS^{i}_{out}$, is defined by
the sum of the weights of all the outgoing edges from $i$,
$NS^{i}_{out}=\sum_{j=1}^{n}TE_{i\rightarrow j}$ (10)
If a stock has a large incoming node strength, it receives more information
flow, meaning the stock is strongly influenced by others; if a stock has a
large outgoing node strength, it sends more impact to other stocks. The top
30 stocks with the largest incoming and outgoing node strengths are reported
in Table 6 in Appendix D. Taking the intersection of the top 30 incoming
nodes and the top 30 outgoing nodes yields a group of the most central
stocks. These central stocks can be interpreted as intermediate nodes
connecting all the other stocks in the S$\&$P500 network. They include
CHK (Chesapeake Energy), VLO (Valero Energy), NTAP (NetApp, Inc.),
BRCM (Broadcom, Inc.), and TWX (Time Warner, Inc.), all big manufacturers,
retailers, suppliers, or media companies covering important sectors of the
United States economy.
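Given the TE matrix, Eqs. (9)-(10) and the central-stock intersection are simple array reductions (toy 3-node matrix; the top-2 cut stands in for the paper's top-30):

```python
import numpy as np

# Toy asymmetric TE matrix: entry (i, j) holds the flow from node i to node j
TE = np.array([[0.00, 0.04, 0.01],
               [0.02, 0.00, 0.03],
               [0.00, 0.01, 0.00]])

ns_in = TE.sum(axis=0)    # Eq. (9): incoming node strength
ns_out = TE.sum(axis=1)   # Eq. (10): outgoing node strength

top = 2                   # analogue of the paper's top-30 cut
central = set(np.argsort(ns_in)[-top:]) & set(np.argsort(ns_out)[-top:])
```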
### 4.4 S&P500 Networks
In this subsection, we present two different networks to illustrate the
volatility connections among the S$\&$P500 stocks in 2006. A weighted
directed network is established by regarding each stock as a node, the
information flow from one node to another as an edge, and the Transfer
Entropy value as the edge weight. Nodes with weak dependency are filtered
out, so only the strongest edges and their adjacent nodes are shown in
Figure 9. Apart from central stocks such as CHK, VLO, NTAP, and BRCM, the
result shows that big investment corporations, such as JPM (JPMorgan),
BAC (Bank of America), and C (Citigroup), also depend heavily on other
stocks. Conversely, TWX (Time Warner, Inc.), MXIM (Maxim Integrated Products,
Inc.), APC (Anadarko Petroleum Corp.), EBAY (eBay Inc.), and YHOO (Yahoo!
Inc.) have a primary impact on other S$\&$P500 stocks.
Figure 9: A directed network of S$\&$P500: edges with the strongest weights
and the conjunct nodes are shown.
Another way to visualize the network is to transform the asymmetric matrix
into a symmetric dissimilarity measure. The similarity between the $i$-th and
the $j$-th nodes can be defined as the average of the two asymmetric TE
values,
$Sim(i,j)=(TE_{i\rightarrow j}+TE_{j\rightarrow i})/2$ (11)
so that the dissimilarity, rescaled to lie between 0 and 1, can simply be
defined by,
$Dis(i,j)=1-\frac{Sim(i,j)-\min_{i,j}Sim(i,j)}{\max_{i,j}Sim(i,j)-\min_{i,j}Sim(i,j)}$
(12)
The symmetric dissimilarity matrix of the S$\&$P500 is presented in
Figure 10(A) with a
hierarchical clustering tree imposed on the row and column sides. The idea is
similar to Multidimensional Scaling, which has been widely used to visualize
the financial connectivity in a low-dimensional space [33]. We claim that the
dendrogram provided by hierarchical clustering is more informative in
illustrating how the S$\&$P500 stocks are hierarchically agglomerated from
the bottom to the top according to their dissimilarity. Intuitively,
companies in a similar industrial category should merge into a small cluster
branch. One of the branches with relatively low mutual distance is extracted
and shown in Figure 10(B). The cluster mainly includes technology companies
spanning internet retail (EBAY and AMZN), integrated circuits (LLTC), video
games (ERTS), information technology (ALTR), network technology (TLAB),
biotechnology (GILD and GENZ), etc. It is also noticeable that energy
corporations, such as VLO, COP, and CHK, merge into a small cluster.
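The symmetrization and rescaling of Eqs. (11)-(12) can be written compactly (a sketch; assumes numpy):

```python
import numpy as np

def te_to_dissimilarity(TE):
    """Symmetrize the TE matrix via Eq. (11), then rescale to a
    dissimilarity in [0, 1] via Eq. (12)."""
    sim = (TE + TE.T) / 2.0
    lo, hi = sim.min(), sim.max()
    return 1.0 - (sim - lo) / (hi - lo)
```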
Figure 10: Heatmap of the symmetric dissimilarity matrix with a hierarchical
clustering tree imposed on the row and column sides; (A) a matrix for
S$\&$P500; (B) a submatrix extracted from (A).
As a comparison to the network constructed from the decoding procedure,
Transfer Entropy is also calculated via simple binning with different cutting
strategies, and a binning-based network is obtained accordingly. The
resulting network is sensitive to the number of bins, as simple binning tends
to overfit the error term in the high-frequency data (see Figure 11), and its
dissimilarity matrix shows no significant clustering structure. In contrast,
the proposed decoding is able to model the volatility dynamic, and its
pattern and number of code-states are stable to price fluctuations.
Figure 11: Directed thresholding networks of the S$\&$P500; edge weights are
measured by Transfer Entropy with lag 5. (A) simple binning with 5-quantile
cutting; (B) simple binning with 7-quantile cutting.
## 5 Conclusion
Starting from a definition of large, or relatively extreme, returns, we first
propose a search algorithm to segment stock returns into multiple levels of
volatility phases. We then advocate a data-driven method, named encoding-and-
decoding, to discover the embedded number of hidden states and represent the
stock dynamics. By encoding the continuous observations into a sequence of
0-1 variables, a maximum likelihood approach is applied to fit the limiting
distribution of the recurrence-time series. Though the assumption of
exchangeability within each hidden state is required, our numerical
experiments show that the proposed approach still works when the assumption
is slightly violated, for example, when a weak transition probability is
imposed under the Markovian condition. This robustness across various
conditions makes the proposed approach valuable in real-world financial
research and practice.
In real data applications, it was reported by [9] that stock returns are only
exchangeable over short periods. With this assumption holding, our proposed
method is implemented on high-frequency data to alleviate the serial
dependency. Moreover, it is beneficial to investigate the fine-scale
volatility, so the established network can illustrate which stocks stimulate
or even promote volatility in others. It is also noted that the
non-parametric regime-switching framework can work in conjunction with other
financial models. For example, the forecasting procedure of the HMM can be
applied and improved with the help of encoding-and-decoding. Future work,
such as Peaks Over Threshold (PoT) [34], can be implemented to analyze the
extreme value distribution based on the homogeneous regimes discovered by the
proposed method.
## References
* [1] Black, F., and Scholes, M. S. (1973). The Pricing of Options and Corporate Liabilities. Journal of Political Economy, University of Chicago Press, 81(3), 637-654.
* [2] Merton, R. C. (1973). Theory of rational option pricing. The Bell Journal of Economics and Management Science, 4(1), 141-183.
* [3] Hardy, M. R. (2001). A Regime-Switching Model of Long-Term Stock Returns. North American Actuarial Journal, 5(2), 41-53.
* [4] Hamilton, J. D. (1989). A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica, 57(2), 357-384.
* [5] Fine, S., Singer, W., and Tishby, N. (1998). The hierarchical hidden Markov model: analysis and applications. Machine Learning, 32, 41–62.
* [6] Baum, L., Petrie, T., Soules, G., and Weiss, N. (1970). A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Annals of Mathematical Statistics, 41(1), 164–171.
* [7] Hsieh, F., Hwang, C. R., Lee, H. C., Lan, Y. C., and Horng, S. B. (2006). Testing and mapping non-stationarity in animal behavioral processes: a case study on an individual female bean weevil. Journal of Theoretical Biology, 238, 805-816.
* [8] Hsieh, F., Chen, S. C., and Hwang, C. R. (2012), Discovering Stock Dynamics through Multidimensional Volatility Phases. Quantitative Finance, 12, 213–230.
* [9] Chang, L. B., Stuart, G., Hsieh, F., and Hwang, C. R. (2013), Invariance in The Recurrence of Large Returns and The Validation of Models of Price Dynamics. Physical Review E, 88, 022116.
* [10] Chang, L. B., Goswami, A., Hsieh, F., and Hwang, C. R. (2013), An Invariance for The Large-sample Empirical Distribution of Waiting Time Between Successive Extremes. Bulletin of the Institute of Mathematics Academia Sinica, 8, 31-48.
* [11] Eberlein, E. (2001) Application of Generalized Hyperbolic Lévy Motions to Finance. In: Barndorff-Nielsen O.E., Resnick S.I., Mikosch T. (eds) Lévy Processes. Birkhäuser, Boston, MA.
* [12] Tenyakov, A. (2014). Estimation of Hidden Markov Models and Their Applications in Finance. Electronic Thesis and Dissertation Repository, 2348. https://ir.lib.uwo.ca/etd/2348.
* [13] Viterbi, A. J. (1967). Error bounds for convolutional codes and an asymptotically optimal decoding algorithm. IEEE Transactions on Information Theory, 13(2), 260–269.
* [14] Mandelbrot, B., and Taylor, H. M. (1967). On the Distribution of Stock Price Differences. Operations Research, 15(6), 1057-1062.
* [15] Clark, P. K. (1973). A Subordinated Stochastic Process Model with Finite Variance for Speculative Prices. Econometrica, 41(1), 135-55
* [16] Barnett, L., Barrett, A. B., and Seth, A. K. (2009). Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables. Physical Review Letters, 103(23), 238701.
* [17] Schreiber, T. (2000). Measuring information transfer. Physical Review Letters, 85(2), 461–464.
* [18] Hlaváčková-Schindler, K., Palus, M., Vejmelka, M., and Bhattacharya, J. (2007). Causality detection based on information-theoretic approaches in time series analysis. Physics Reports, 441(1), 1–46.
* [19] Ding, Z., and Granger, C. W. J. (1996), Modeling Volatility Persistence of Speculative Returns: A New Approach. Journal of Econometrics, 73, 185-215.
* [20] Bollersley, T. (1986), Generalized Autoregressive Conditional Heteroskedasticity. Journal of Econometrics, 31, 307-327.
* [21] Vostrikova, L. (1981). Detection disorder in multidimensional random processes. Soviet Mathematics Doklady, 24, 55–59.
* [22] Olshen, A.B., and Venkatraman, E. (2004). Segmentation for the analysis of array-based DNA copy number data. Biostatistics, 5, 557–572.
* [23] Mantegna, RN (1999) Hierarchical structure in financial markets. Eur Phys J B-Condens Matter Compl Syst, 11(1), 193–197.
* [24] Chi, K. T., Liu J., Lau, F. C. (2010) A network perspective of the stock market. J Empir Financ, 17(4), 659–667.
* [25] Sandoval, L. Jr., and Franca, I. D. P. Correlation of financial markets in times of crisis. (2012) Physica A, 391, 187-208.
* [26] Isogai, T. (2014) Clustering of Japanese stock returns by recursive modularity optimization for efficient portfolio diversification. J Complex Netw, 2(4), 557–584.
* [27] Engle, R. F. (2002) Dynamic conditional correlation: a simple class of multivariate generalized autoregressive conditional heteroskedasticity models. Journal of Business and Economic Statistics, 20(3), 339-350
* [28] Kenett, D. Y., Huang, X., Vodenska, I., Havlin, S., and Stanley, H. E. (2015) Partial correlation analysis: applications for financial markets, Quantitative Finance, 15(4), 569-578
* [29] Marschinski, R., and Kantz, H. (2002) Analysing the information flow between financial time series - an improved estimator for transfer entropy. The European Physical Journal B, 30, 275–281.
* [30] Sandoval, L. Jr. (2014) Structure of a global network of financial companies based on transfer entropy. Entropy, 16, 4443-4482.
* [31] Sandoval, L. Jr., Mullokandov, A., and Kenett, D. Y. (2015) Dependency relations among international stock market indices. Journal of Risk and Financial Management, 8, 227-265.
* [32] Dimpfl, T., and Peter, F. J. (2012) Using transfer entropy to measure information flows between financial markets. In Proceedings of Midwest Finance Association 2012 Annual Meetings, New Orleans, LA, USA, 21–24
* [33] He, J., Shang, P., and Xiong, H. (2018) Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods. Physica A: Statistical Mechanics and its Applications, 500, 210-221.
* [34] Leadbetter, M. R. (1991) On a basis for ’Peaks over Threshold’ modeling. Statistics and Probability Letters, 12(4), 357–362.
* [35] Baum, L.E. and Eagon, J.A. (1967). An inequality with applications to statistical estimation for probabilistic functions of Markov processes and to a model for ecology. Bulletin of the American Mathematical Society, 73(3), 360.
* [36] Nguyen, N. (2018). Hidden Markov Model for Stock Trading. International Journal of Financial Studies, 6(2), 36.
* [37] Hassan, M.R. and Nath, B. (2005). Stock market forecasting using hidden Markov model: a new approach. 5th International Conference on Intelligent Systems Design and Applications (ISDA’05), 192-196.
* [38] Wang, X. and Hsieh, F. (2021). An encoding approach for stable change point detection. arXiv preprint arXiv:2105.05341.
## Appendix
## Appendix A Simulation Appendix
Data is simulated under a regime-switching model with 3 hidden states.
Suppose that the observations (log returns) are $\\{Y(t)\\}_{t=1}^{8000}$ and
that there are 8 alternating segments over time:
$S(t)=\begin{cases}1&\quad t\in[1,1000],[4001,5000],[7001,8000]\\\ 2&\quad
t\in[1001,2000],[3001,4000],[6001,7000]\\\ 3&\quad
t\in[2001,3000],[5001,6000]\end{cases}$
The index of the hidden states alternates as $1,2,3,2,1,3,2,1$. In the first
example, observations are generated from a Gaussian distribution with mean 0
and a variance that varies across states, shown in Figure 12(A), so
$Y(t)\sim\begin{cases}N(0,\sigma_{1}^{2})&\quad S(t)=1\\\
N(0,\sigma_{2}^{2})&\quad S(t)=2\\\ N(0,\sigma_{3}^{2})&\quad
S(t)=3\end{cases}$
where standard deviations $\sigma_{1}=1$,$\sigma_{2}=2$, and $\sigma_{3}=3$.
In the second example, a heavy-tailed distribution, the student-t, is
considered. The simulation is shown in Figure 12(B) by
$Y(t)\sim\begin{cases}t(df_{1})&\quad S(t)=1\\\ t(df_{2})&\quad S(t)=2\\\
t(df_{3})&\quad S(t)=3\end{cases}$
where the degrees of freedom are $df_{1}=1$, $df_{2}=2$, and $df_{3}=5$.
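The regime schedule and the Gaussian example above can be reproduced directly (a sketch; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)   # arbitrary seed
segments = [(0, 1000, 1), (1000, 2000, 2), (2000, 3000, 3), (3000, 4000, 2),
            (4000, 5000, 1), (5000, 6000, 3), (6000, 7000, 2), (7000, 8000, 1)]
S = np.empty(8000, dtype=int)
for a, b, s in segments:
    S[a:b] = s                   # hidden-state index per segment

sigma = np.array([np.nan, 1.0, 2.0, 3.0])  # sigma_1=1, sigma_2=2, sigma_3=3
Y = rng.normal(0.0, sigma[S])    # Gaussian example
```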
Figure 12: Data simulated via the conditional distribution given a hidden
state: (A) Gaussian distribution, (B) student-t distribution. 8 underlying
phases alternate over time, in which 3 kinds of hidden states are embedded.
The horizontal lines in (A) indicate the thresholds $l$ and $u$.
## Appendix B Illustration of Alg.1
In the Gaussian setting, we set $|u|=|l|=2$, which corresponds to the 0.9
quantile of the observations; in the student-t setting, a larger threshold is
considered, $|u|=|l|=3$, which corresponds to the 0.95 quantile of the
observations. The recovered segments (marked in different colors) show that,
with an appropriate choice of thresholds, the proposed method can
successfully detect the alternating hidden states. Errors appear only around
the change points. Besides, we obtain good estimates of the emission
probability under the different hidden states. The estimates are
$(0.0463,0.3210,0.4739)$ in the Gaussian setting and
$(0.0420,0.1008,0.1997)$ in the student-t setting, respectively. They are
close to the theoretical values
$2*(\Psi_{1}(-2),\Psi_{2}(-2),\Psi_{3}(-2))=(0.0455,0.3173,0.5049)$
and
$2*(F_{t1}(-3),F_{t2}(-3),F_{t3}(-3))=(0.0300,0.0954,0.2048)$
where $\Psi_{i}$ is the CDF of Gaussian distribution under the $i$-th hidden
state, and $F_{ti}$ is the CDF of student-t distribution under the $i$-th
hidden state, for $i=1,2,3$.
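The quoted Gaussian theoretical values can be verified from the standard normal CDF (standard library only):

```python
from math import erf, sqrt

def std_normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# 2 * Psi_i(-2) with Psi_i the CDF of N(0, sigma_i^2), sigma = 1, 2, 3
theory = [2.0 * std_normal_cdf(-2.0 / s) for s in (1.0, 2.0, 3.0)]
```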
Figure 13: Simulation with the Normal distribution (A)(B) and the student-t
distribution (C)(D). (A), (C): recurrence time; (B), (D): raw data with
colored decoding states. The three colors red, yellow, and pink indicate the
three different kinds of states.
## Appendix C Calculation of Information Flows
The proposed procedure is implemented to detect the volatility trajectory of
the tick-by-tick returns. However, since the return patterns are recorded in
transaction time, the decoded sequences are not directly aligned with one
another. It is therefore necessary to transform the decoded trajectories back
into the real-time scale before calculating the Transfer Entropy. Suppose
that the intensity level of volatility is indicated by the ordinal number 1,
2, or 3, meaning a low-, medium-, or high-volatility state, respectively. If
there exists a tiny time scale in which at most one transaction happens, each
time unit can be labeled by the symbol 0 for no transaction, or by an ordinal
number if a transaction is present. Thus, the decoded patterns from different
stocks can share the same chronological time. One would like to choose a time
scale as small as possible, but then the pairwise dependency is weakened by
the increasing number of 0’s. To balance the proportion of symbols and
alleviate the sparsity, we summarize the recovered pattern by scanning a time
block from the beginning to the end of the time axis and selecting the
maximal ordinal number. Thus, uninformative 0’s are filtered out, while
volatility stages are kept.
Suppose that the recovered symbol sequence is $\\{S(t)\\}_{t=1}^{n}$ where
$S(t)\in\\{0,1,2,3\\}$. The sequence is then summarized via a time block with
length $w$ by
$S^{*}(t)=\max\\{S(s):s\in(t-\lfloor\frac{w}{2}\rfloor,t+\lfloor\frac{w}{2}\rfloor)\\}$
(13)
for $t=\lfloor\frac{w}{2}\rfloor,...,n-\lfloor\frac{w}{2}\rfloor$, where
$\\{S^{*}(t)\\}_{t}$ is the summarized symbolic sequence. Note that a
minute-level scale $w$ is too rough to reflect the tick-by-tick volatility
pattern; a block of $w=5$ seconds is an admissible choice.
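As a concrete illustration, the block-max summarization of Eq. (13) can be sketched in a few lines (our own NumPy sketch; the function name `block_max` and the inclusive centered window are illustrative choices, not the authors' code):

```python
import numpy as np

def block_max(S, w):
    # Summarized sequence S*(t): the maximal symbol of S over a centered
    # block of width w (one reading of Eq. 13, taken inclusive of its
    # endpoints).  Zeros survive only if the whole block is zero.
    S = np.asarray(S)
    h = w // 2
    return np.array([S[t - h:t + h + 1].max() for t in range(h, len(S) - h)])
```

For example, `block_max([0, 0, 2, 0, 0, 3, 0], 3)` keeps the volatility symbols 2 and 3 while filtering out most of the uninformative 0's.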
According to the way we summarize the symbolic trajectory, a non-linear
measure is developed as a variant of TE. Different from the classic TE, this
measure takes both lag and lead effects into account instead of only the lag
effect. The lag-and-lead information flow from $X$ to $Y$ is defined by
$TE_{X\rightarrow Y}^{*}(w)=\sum
P(S^{*}_{Y}(t)=3,S^{*}_{X}(t))\log\frac{P(S^{*}_{Y}(t)=3|S^{*}_{X}(t))}{P(S^{*}_{Y}(t)=3)}$
(14)
where the sum runs over the possible values of $S^{*}_{X}(t)$. We write
$TE_{X\rightarrow Y}^{*}$ to differentiate it from the classic TE, and omit
$w$ when no confusion arises. The measure quantifies how much the uncertainty
about $Y$ being in its high-volatility state (state 3) is reduced by the
lag-and-lead effect of $X$. The higher the value, the stronger the impact of
$X$ in promoting volatility phases of $Y$.
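A plug-in estimate of Eq. (14) from two summarized symbol sequences can be sketched as follows (our own illustrative sketch; the function name `lag_lead_te` and the empirical-frequency estimators are assumptions, not the authors' code):

```python
import numpy as np

def lag_lead_te(Sx, Sy, high=3):
    # Plug-in estimate of TE*_{X->Y} (Eq. 14): sum over the observed
    # values x of S*_X of  P(S*_Y=3, S*_X=x) * log[ P(S*_Y=3 | S*_X=x)
    # / P(S*_Y=3) ], with all probabilities replaced by frequencies.
    Sx, Sy = np.asarray(Sx), np.asarray(Sy)
    p_y3 = np.mean(Sy == high)
    te = 0.0
    for x in np.unique(Sx):
        p_x = np.mean(Sx == x)
        p_joint = np.mean((Sy == high) & (Sx == x))
        if p_joint > 0:
            te += p_joint * np.log((p_joint / p_x) / p_y3)
    return te
```

The estimate is zero when the two sequences are empirically independent and positive when high-volatility states of $Y$ co-occur with particular symbols of $X$.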
## Appendix D Graph and Table Appendix
Figure 14: 2-states decoding results for simulated data in Appendix A. (A)
Hierarchical Clustering Tree; (B) cluster index switching over time;
(C),(D),(E): estimated CDF versus true CDF, in clusters 1, 2, 3, respectively.
Figure 15: 4-states decoding results for simulated data in Appendix A. (A) Hierarchical Clustering Tree; (B) cluster index switching over time; (C),(D),(E): estimated CDF versus true CDF, in clusters 1, 2, 3, respectively.
Figure 16: Transfer Entropy matrix for S$\&$P500 in 2006. The rows and columns are rearranged such that the row sums and column sums are in ascending order.
Table 6: Top 30 stocks with the strongest node strength (left pair of columns: incoming; right pair: outgoing)
Index | NS | | Index | NS
---|---|---|---|---
EMC | 4.3349 | | TWX | 3.3975
BAC | 4.2760 | | BRCM | 3.3148
NTAP | 4.2245 | | NTAP | 3.2715
JPM | 4.0252 | | GILD | 3.0749
WFC | 3.8934 | | ALTR | 2.9504
NBR | 3.7955 | | VLO | 2.8755
HON | 3.6844 | | EBAY | 2.8178
BRCM | 3.6363 | | HD | 2.7764
AIG | 3.6327 | | WMT | 2.7619
KBH | 3.6230 | | NVLS | 2.7501
CAT | 3.6097 | | CHK | 2.7336
WB | 3.5598 | | AMD | 2.7331
CTX | 3.5570 | | MXIM | 2.7227
WAG | 3.4599 | | YHOO | 2.6252
BJS | 3.4538 | | JNJ | 2.5732
WLP | 3.4503 | | SCHW | 2.5519
LOW | 3.3636 | | IBM | 2.5448
SWY | 3.3279 | | XLNX | 2.5334
AXP | 3.2317 | | BIIB | 2.5313
NOV | 3.1355 | | LLTC | 2.5274
BUD | 3.1273 | | MU | 2.5269
CHK | 3.1227 | | NVDA | 2.5108
DOW | 3.0809 | | BMET | 2.4964
KSS | 3.0608 | | TXN | 2.4894
VLO | 3.0252 | | C | 2.4635
TWX | 3.0220 | | ADBE | 2.4633
MO | 3.0072 | | CELG | 2.4574
DE | 2.9712 | | TGT | 2.4286
COP | 2.9598 | | KLAC | 2.3873
TRUE | 2.9564 | | ESRX | 2.3860
# Oscillation time and damping coefficients in a nonlinear pendulum
Jaime Arango
###### Abstract
We establish a relationship between the normalized damping coefficients and
the time that takes a nonlinear pendulum to complete one oscillation starting
from an initial position with vanishing velocity. We establish some conditions
on the nonlinear restitution force so that this oscillation time does not
depend monotonically on the viscosity damping coefficient.
MSC2020: 34C15, 34C25
Keywords. oscillation time, damping, damped oscillations
> This paper is dedicated to the memory of Prof. Alan Lazer (1938-2020),
> University of Miami. It was my pleasure to discuss with him some of the
> results presented here.
## 1 Introduction
The pendulum is perhaps the oldest and most fruitful paradigm for the study of
an oscillating system. The apparent regularity of an oscillating mass going to
and fro through the equilibrium position has fascinated scientists since well
before Galileo. There are plenty of mathematical models accounting for almost
any observed behavior of the pendulum’s oscillation. From the sheer amount of
literature on the subject, one would expect that there is no reasonable
question regarding a pendulum that has not already been answered. And that
might be true. Yet, for whatever reason, it is not impossible to take on a
question whose answer does not seem to follow immediately from the classical
sources.
In a typical experimental setup with no noticeable damping, the oscillations
of a pendulum are periodic. If the damping cannot be neglected, we still
observe oscillations, even though they are no longer periodic. However, we can
measure the time spent on a complete oscillation, and this time is a natural
generalization of the period. But how does this oscillation time depend on the
characteristics of the medium, say on the viscosity of the surrounding
atmosphere? There seems to be little information on how the damping affects
the oscillation time. There are plenty of recent publications regarding
damping and oscillations, ranging from analytical solutions ([5], [3], [6]) to
very clever experimental setups (see for example [4]). The nature of the
damping has also been extensively considered ([8], [2]), but the dependence of
the oscillation time on the damping or on the non-linearity seems to be less
investigated.
For the sake of simplicity, we analyze the oscillation time in the framework
of a model that appears in almost any textbook on ordinary differential
equations (see for example [1]):
$\ddot{x}+2\alpha\,\dot{x}+x\left(1+f\left(x\right)\right)=0,$ (1)
where $x=x(t)$ measures the pendulum’s deviation with respect to a vertical
axis of equilibrium and $\alpha\geq 0$ denotes the viscous damping coefficient.
The term $x\,f(x)$ models the nonlinear part of the restoring force. We have
rescaled time so that the period of the linear undamped oscillation is
exactly $2\pi$.
The theory of the solutions $x=x(t)$ is classical. If $f$ is smooth and $x_{0}$
and $v_{0}$ are given real values, then there exists a unique solution
satisfying the conditions $x(0)=x_{0}$ and $\dot{x}(0)=v_{0}$. Moreover,
if $f(0)=0$, then $x=0$ is a stable equilibrium solution of (1). As a
consequence, $x(t)$ is defined for all $t\geq 0$ provided $|x_{0}|\ll 1$ and
$|v_{0}|\ll 1$. Notice that the points of vanishing derivative of a solution
$x=x(t)$ to (1) are isolated and correspond either to local maxima or to
local minima. Denote by $\tau(x_{0},\alpha)$ the amount of time
spent (by the mass) completing one oscillation starting from $x_{0}$ with
vanishing velocity ($v_{0}=0$). To be precise, if $x=x(t)$ starts from $x_{0}$
with vanishing velocity, then $x$ reaches a local maximum at $t=0$, and the
oscillation is completed when $x$ reaches the next local maximum. Certainly,
the oscillation time generalizes the period of solutions for the undamped
model ($\alpha=0$). In this investigation we analyze the dependence of $\tau$
on $x_{0}$ and on $\alpha$ under the following working hypothesis:
###### Assumption 1.1.
On a small $\epsilon$-neighborhood of $0$ the function $f$ is even, and for
some constant $a>0$ we have
$f(x)=-a\,x^{2}+O\left(|x|^{4}\right).$
We shall show that, for $x_{0}$ fixed, $\tau$ reaches a positive minimum at
some $0<\alpha_{0}<1$. It does not seem obvious that an increase in the
damping coefficient $\alpha$ might cause a decrease in $\tau$. It is also
worth noticing that the existence of a minimum of $\tau$ is a consequence of
the sign of the constant $a$ in the above assumption. Indeed, according to
numerical experiments carried out by the author, $\tau$ does not reach a
positive minimum if $a<0$. The author is not aware of a similar result in the
current literature, nor whether this phenomenon has been experimentally
addressed. The whole paper was written with the mathematical pendulum,
$x(1+f(x))=\sin{x}$, in mind. In that case, Figure 1 summarizes our findings
by picturing the numerically simulated values of $\tau(x_{0},\alpha)$.
Interestingly, our qualitative analysis accurately reflects variations of
$\tau$ that are not easy to spot numerically. For instance, the minimum of
$\tau(x_{0},\alpha)$ for $x_{0}=0.1$ is not evident in Figure 1.
Figure 1: Numerical simulation of $\tau(x_{0},\alpha)$ as a function of
$\alpha$ for several values of $x_{0}$. The nonlinear term $f$ was chosen so
that $x(1+f(x))=\sin{x}$.
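A simulation of this kind can be sketched as follows (a rough sketch with a fixed-step RK4 integrator and sign-change detection of $v$; the step size, the tolerance, and the function name `tau` are our own illustrative choices, not the author's published code):

```python
import numpy as np

def tau(x0, alpha, g=np.sin, dt=1e-3, tmax=50.0):
    # Oscillation time of  x'' + 2*alpha*x' + g(x) = 0  starting at
    # (x0, 0): the time until x reaches its next local maximum, detected
    # as the second sign change of v.  Here g(x) = x*(1 + f(x)); the
    # default g = sin is the mathematical pendulum.
    def rhs(y):
        x, v = y
        return np.array([v, -2.0*alpha*v - g(x)])
    y, t, crossings, v_prev = np.array([x0, 0.0]), 0.0, 0, 0.0
    while t < tmax:
        k1 = rhs(y); k2 = rhs(y + 0.5*dt*k1)
        k3 = rhs(y + 0.5*dt*k2); k4 = rhs(y + dt*k3)
        y = y + (dt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)
        t += dt
        if (v_prev < 0.0 <= y[1]) or (v_prev > 0.0 >= y[1]):
            crossings += 1           # first the local minimum, then the maximum
            if crossings == 2:
                return t             # one full oscillation completed
        v_prev = y[1]
    return np.inf
```

For the undamped pendulum this reproduces the classical small-amplitude correction $\tau\approx 2\pi(1+x_{0}^{2}/16)$, and with damping the computed values stay above $2\pi/\sqrt{1-\alpha^{2}}$, in line with the inequality established in Section 2.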
The arguments and proofs in this paper are entirely based on well-established
techniques of ODE theory. However, the main result (Theorem 3.1) rests on
delicate estimates involving a differential equation describing the dependence
of the solution $x=x(t)$ on $\alpha$.
## 2 Underdamped oscillations
Definitions of underdamped oscillations in linear systems naturally carry over
to solutions of (1). From now on, $x(\cdot,x_{0},\alpha)$ stands for the
unique solution to (1) satisfying the initial condition $x(0)=x_{0}$ and
$\dot{x}(0)=0.$ We also write $\tau(x_{0},\alpha)$ to highlight the dependence
of the oscillation time on $x_{0}$ and $\alpha$. We will write simply $\tau$
or $x$ when no confusion can arise. It is convenient to represent (1) in the
phase space $(x,v)$ with $\dot{x}=v:$
$\begin{split}\dot{x}&=v\\\ \dot{v}&=-2\alpha\,v-x-x\,f\left(x\right).\\\
\end{split}$ (2)
Equation (2) is explicitly solvable whenever $f\equiv 0$, and in that case,
its solution is given by
$\begin{split}x_{l}(t)=&\frac{e^{-\alpha\,t}}{\omega}\left(\omega\,\cos{\omega\,t}+\alpha\,\sin{\omega\,t}\right)x_{0}\\\
v_{l}(t)=&-\frac{e^{-\alpha\,t}}{\omega}\,\sin{\omega\,t}\,x_{0}\end{split}$
(3)
where $\omega=\sqrt{1-\alpha^{2}}$. Moreover, the oscillation time $\tau_{l}$
is given by
$\tau_{l}=\frac{2\pi}{\omega}=\frac{2\pi}{\sqrt{1-\alpha^{2}}}.$
Notice that $\tau_{l}$ is an increasing function that solely depends on
$\alpha$.
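These closed-form expressions are easy to check numerically. The following sketch (ours, not part of the paper) verifies by finite differences that (3) satisfies the linear equation $\ddot{x}+2\alpha\dot{x}+x=0$ and that $v_{l}$ vanishes again at $\tau_{l}=2\pi/\omega$:

```python
import numpy as np

alpha, x0 = 0.3, 1.0
omega = np.sqrt(1.0 - alpha**2)

def x_l(t):  # closed-form position, Eq. (3)
    return np.exp(-alpha*t)/omega * (omega*np.cos(omega*t)
                                     + alpha*np.sin(omega*t)) * x0

def v_l(t):  # closed-form velocity, Eq. (3)
    return -np.exp(-alpha*t)/omega * np.sin(omega*t) * x0

# residual of x'' + 2*alpha*x' + x at an arbitrary time, by central differences
t, h = 0.7, 1e-4
xpp = (x_l(t+h) - 2.0*x_l(t) + x_l(t-h)) / h**2
xp = (x_l(t+h) - x_l(t-h)) / (2.0*h)
residual = xpp + 2.0*alpha*xp + x_l(t)

tau_l = 2.0*np.pi/omega   # oscillation time of the linear case
```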
Though a closed-form solution of (1) is, in general, either not known or
impractical, we can express the relevant solutions implicitly. To that end, we
rewrite (2) so that the nonlinear term $-x\,f\left(x\right)$ assumes the role
of a non-homogeneous forcing term. The expression for the solution $(x,v)$ is
implicitly given by
$\begin{split}x(t)=&x_{l}(t)-\frac{1}{\omega}\int_{0}^{t}e^{-\alpha\left(t-s\right)}\sin\omega\left(t-s\right)x(s)\,f(x(s))\,ds\\\
v(t)=&v_{l}(t)-\frac{1}{\omega}\int_{0}^{t}e^{-\alpha\left(t-s\right)}\left(\omega\,\cos{\omega\left(t-s\right)}-\alpha\sin\omega\left(t-s\right)\right)x(s)\,f(x(s))\,ds\end{split}$
(4)
Next, we estimate the solutions of (2) in the conservative case ($\alpha=0$)
in which all solutions are periodic and the period is given by
$\tau\equiv\tau(x_{0},0).$
###### Lemma 2.1.
If $(x,v)$ stands for the solution to (2) with $\alpha=0$ that satisfies
$(x(0),v(0))=(x_{0},0)$, then there exists $\delta>0$ so that for all
$|x_{0}|\leq\delta$ and all $0\leq t\leq\tau$ we have
$x(t)=x_{0}\,\cos t+R_{1}(t,x_{0}),\quad v(t)=-x_{0}\,\sin t+R_{2}(t,x_{0}),$
(5)
where
$|R_{i}(t,x_{0})|\leq\text{const }\,|x_{0}^{3}|,\quad i=1,2.$
###### Proof.
Letting $\alpha=0$ in (4) we obtain
$R_{1}(t,x_{0})=-\int_{0}^{t}\cos(t-s)\,x(s)\,f(x(s))\,ds.$ (6)
Since $(0,0)$ is a stable equilibrium solution to (2), there exists $\delta>0$
and $\epsilon>0$ so that any solution $(x,v)$ to (2) starting at $(x_{0},0)$,
with $|x_{0}|\leq\delta$ satisfies $|x(t)|\leq\epsilon$. Now write
$F(z)=-z\,f(z)$ and notice that for some $\xi\in(-\epsilon,\epsilon)$ we have
$F(x(s))=F\left(x_{0}\cos{s}+R_{1}\left(s,x_{0}\right)\right)=F\left(x_{0}\cos{s}\right)+R_{1}\left(s,x_{0}\right)F^{\prime}\left(\xi\right).$
Next, identity (6), Assumption (1.1) and some standard estimations yield
$|R_{1}(t,x_{0})|\leq 2a|x_{0}^{3}|+c_{2}\int_{0}^{t}|R_{1}(s,x_{0})|\,ds$
where $c_{2}=\max_{z\in[-\epsilon,\epsilon]}|F^{\prime}(z)|$. The first claim
follows now from Gronwall’s inequality. The proof of the estimation for
$R_{2}$ is analogous. ∎
At this point it is appropriate to define the _half oscillation time_
$\hat{\tau}=\hat{\tau}(x_{0},\alpha)$ as the time spent by the solution
$x(t,x_{0},\alpha)$, $t\geq 0$, reaching its next local minimum. If $\alpha=0$
and $f$ is even, the symmetry of the solutions of (1) yields $2\hat{\tau}=\tau$.
###### Lemma 2.2.
If $\hat{\tau}=\hat{\tau}(x_{0},\alpha)$ denotes the half oscillation time and
$a$ is the constant of Assumption 1.1, then
$\hat{\tau}(x_{0},\alpha)>\frac{\pi}{\sqrt{1-\alpha^{2}}}\quad\text{and}\quad\hat{\tau}(x_{0},0)=\pi+\frac{a\,\pi}{8}\,x_{0}^{2}+o(x_{0}^{3})\ \text{ as }x_{0}\to 0^{+}.$
###### Proof.
We introduce the polar coordinates
$r=\sqrt{x^{2}+v^{2}},\quad\tan{\theta}=\frac{x}{v},$
to obtain
$\begin{split}\dot{\theta}&=-\left(1+\alpha\,\sin{2\theta}+\sin^{2}{\theta}\,f\left(x\right)\right)\\\
\dot{r}&=-\frac{v}{r}\left(2\alpha\,v+x\,f\left(x\right)\right)\\\
\end{split}$ (7)
As a consequence of equation (7) we obtain the following expression for the
half oscillation time $\hat{\tau}=\hat{\tau}(x_{0},\alpha)$
$\hat{\tau}=\int_{0}^{\pi}\frac{d\theta}{1+\alpha\,\sin{2\theta}+\sin^{2}{\theta}\,f\left(x(\theta)\right)}.$
(8)
Now, the effect of the nonlinearity on the oscillation time is clear. By
Assumption 1.1 we obtain
$\hat{\tau}(x_{0},\alpha)>\int_{0}^{\pi}\frac{d\theta}{1+\alpha\,\sin{2\theta}}=\frac{\pi}{\sqrt{1-\alpha^{2}}}.$
For $\alpha=0$ we use estimate (5) to obtain
$\hat{\tau}(x_{0},0)=\int_{0}^{\pi}\frac{d\theta}{1-a\,x_{0}^{2}\,\sin^{2}{\theta}\,\cos^{2}{t(\theta)}}+o(x_{0}^{3}).$
Now a straightforward computation yields
$\lim_{x_{0}\to 0^{+}}\hat{\tau}(x_{0},0)=\pi,\;\lim_{x_{0}\to
0^{+}}\frac{\partial\hat{\tau}}{\partial x_{0}}(x_{0},0)=0.$
Now, the expression for $\frac{\partial^{2}\hat{\tau}}{\partial
x_{0}^{2}}(x_{0},0)$ is somewhat cumbersome. However, taking into account that
$\lim_{x_{0}\to 0}t(\theta)=\theta$, we readily obtain
$\lim_{x_{0}\to 0^{+}}\frac{\partial^{2}\hat{\tau}}{\partial
x_{0}^{2}}(x_{0},0)=\int_{0}^{\pi}2a\,\sin^{2}{\theta}\,\cos^{2}{\theta}\,d\theta=\frac{2\,a\pi}{8},$
and the second claim of the lemma follows from the second order Taylor
expansion of $\hat{\tau}(x_{0},0)$ around $x_{0}=0$. ∎
A reasoning analogous to that in the proof of the preceding lemma shows that
$\tau(x_{0},\alpha)>\frac{2\pi}{\sqrt{1-\alpha^{2}}}\equiv\tau_{l}.$
This inequality is illustrated in Figure 2 when $a=1$. Had we considered
negative values of $a$ in Assumption 1.1, the inequality would reverse to
$\tau(x_{0},\alpha)<\tau_{l}$, as depicted in Figure 2.
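The comparison integral used in the proof of Lemma 2.2, $\int_{0}^{\pi}\frac{d\theta}{1+\alpha\sin 2\theta}=\frac{\pi}{\sqrt{1-\alpha^{2}}}$, is easy to confirm numerically (our own illustrative check, not part of the paper):

```python
import numpy as np

# Trapezoidal check of  int_0^pi dtheta / (1 + alpha*sin(2*theta))
# against the closed form  pi / sqrt(1 - alpha^2).
alpha = 0.4
theta = np.linspace(0.0, np.pi, 200001)
g = 1.0 / (1.0 + alpha * np.sin(2.0 * theta))
h = theta[1] - theta[0]
integral = h * np.sum(0.5 * (g[1:] + g[:-1]))
closed_form = np.pi / np.sqrt(1.0 - alpha**2)
```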
## 3 The role of the viscous damping
It is not difficult to obtain a differential equation describing the
dependence of the pendulum’s motion on the viscous damping coefficient. Indeed,
writing
$X(t,x_{0},\alpha)=\frac{\partial
x}{\partial\alpha}\left(t,x_{0},\alpha\right),\quad
V(t,x_{0},\alpha)=\frac{\partial
v}{\partial\alpha}\left(t,x_{0},\alpha\right).$
Differentiation of equation (2) with respect to $\alpha$ yields:
$\begin{split}\dot{X}&=V\\\
\dot{V}&=-2\alpha\,V-X-2v-\left(x\,f^{\prime}\left(x\right)+f\left(x\right)\right)X.\end{split}$
(9)
As for the initial conditions we have
$X(0,x_{0},\alpha)=0,\quad V(0,x_{0},\alpha)=0.$
Let us write $G(x)=-\frac{d}{dx}\left(x\,f\left(x\right)\right)$. Again, as we
did with equation (2), equation (9) can be seen as a linear homogeneous part
plus the forcing term $-2v+G(x)\,X$. The solution $(X,V)$ is implicitly given by
$\begin{split}X(t)=&\frac{1}{\omega}\int_{0}^{t}e^{-\alpha\left(t-s\right)}\sin\omega\left(t-s\right)\big{\\{}\\\
&\qquad-2v(s)+G(x(s))\,X(s)\left.\right\\}ds\\\
V(t)=&\frac{1}{\omega}\int_{0}^{t}e^{-\alpha\left(t-s\right)}\left(\omega\,\cos{\omega\left(t-s\right)}-\alpha\sin\omega\left(t-s\right)\right)\big{\\{}\\\
&\qquad-2v(s)+G(x(s))\,X(s)\left.\right\\}ds\end{split}$
In particular, for $\alpha=0$ the above expressions reduce to
$\begin{split}X(t)=&\int_{0}^{t}\sin\left(t-s\right)\big{\\{}-2v(s)+G(x(s))\,X(s)\big{\\}}\,ds\\\
V(t)=&\int_{0}^{t}\cos{\left(t-s\right)}\big{\\{}-2v(s)+G(x(s))\,X(s)\big{\\}}\,ds\end{split}$
(10)
The following lemma does the heavy lifting to deliver the main result of the
paper.
###### Lemma 3.1.
Under Assumption 1.1, if $\hat{\tau}=\hat{\tau}(x_{0},0)$ denotes the half
oscillation time when $\alpha=0$, then for $0<x_{0}\ll 1$ we have
$V(\hat{\tau},x_{0},0)>0.$
###### Proof.
We start with an auxiliary estimate for $X(t)$ in equation (10). By Lemma
2.1 and by Assumption 1.1, for $0<t\leq\pi$ we have
$X(t)=x_{0}\left(-t\,\cos{t}+\sin{t}\right)+3a\,x_{0}^{2}\,\int_{0}^{t}\sin\left(t-s\right)\cos^{2}{s}\,X(s)\,ds+O(|x_{0}|^{4})$
(11)
Notice that $X_{1}(t)\equiv x_{0}\left(-t\,\cos{t}+\sin{t}\right)$ does not
vanish on $(0,\pi)$ and that $G(x(s))>0$ provided $0<x_{0}\ll 1$. Further, the
initial conditions for $X(t)$ at $t=0$ and equation (9) yield that
$X(0)=0=\dot{X}(0)=\ddot{X}(0)\text{ and
}\dddot{X}(0)=2x_{0}\left(1+f\left(x_{0}\right)\right)>0,$
meaning that $X(t)$ is positive on an interval $(0,\epsilon)$ with
$\epsilon>0$. We claim that $X(t)>0$ for $0<t\leq\pi$. Suppose, on the
contrary, that there exists $\epsilon<t_{0}<\pi$ such that $X(t_{0})=0$ and
$X(t)>0$ for $t\in(0,t_{0})$. Now, by Lemma 2.2 we know that $\hat{\tau}>\pi$.
Therefore,
the polar angle $\theta(t)$ in (7) satisfies $-\pi<\theta(t)<0$ for all
$0<t<\pi$ and a fortiori $v(t)<0$ on $(0,\pi]$. But this is a contradiction to
the first equation of (10) evaluated at $t=t_{0}$ since for $s\in(0,t_{0})$ we
have
$\sin\left(t_{0}-s\right)\big{\\{}-2v(s)+G(x(s))\,X(s)\big{\\}}>0.$
Next, by equation (11) it follows immediately that
$X(t)=X_{1}(t)+O(|x_{0}|^{3})$. Analogously, for $V(t)$ we obtain
$\begin{split}V(t)=&x_{0}\,t\,\sin{t}+3a\,x_{0}^{2}\int_{0}^{t}\cos\left(t-s\right)\cos^{2}{s}\,X_{1}(s)\,ds+O(|x_{0}|^{4})\\\
\equiv&V_{1}(t)+V_{2}(t)+O(|x_{0}|^{4})\end{split}$
where $V_{1}(t)\equiv x_{0}\,t\,\sin{t}$. Now, $V_{2}(t)$ can be explicitly
evaluated. For the reader’s convenience, we write the complete expression for
$V_{2}:$
$\begin{split}V_{2}(t)=&3a\,x_{0}^{3}\Big{(}-\frac{1}{32}\,(6\,t^{2}+5)\,\cos{t}-\frac{3}{32}\,t\,\sin{3\,t}-\\\
&\frac{1}{16}\,t\,\sin{t}-\frac{17}{128}\,\cos{3\,t}+\frac{37}{128}\,\cos{t}\Big{)}.\end{split}$
Moreover, it is somewhat tedious but straightforward to show that $V_{2}$ is
positive and increasing on a small neighborhood of $\pi$. By Lemma 2.2
$\hat{\tau}>\pi$, therefore
$V_{2}(\hat{\tau})>V_{2}(\pi)=\frac{9a\,x_{0}^{3}\,\pi^{2}}{16}.$
Again, by Lemma 2.2 we obtain
$\begin{split}V_{1}(\hat{\tau})=&V_{1}(\pi)+\left(\hat{\tau}-\pi\right)V_{1}^{\prime}(\pi)+O(|x_{0}|^{4})\\\
=&-\frac{a\,x_{0}^{3}\,\pi^{2}}{8}+O(|x_{0}|^{4}),\end{split}$
so that $V(\hat{\tau})=V_{1}(\hat{\tau})+V_{2}(\hat{\tau})>0.$ ∎
Now we are in a position to show the main result of the paper.
###### Theorem 3.1.
Under Assumption 1.1, there exists $\delta>0$ such that, for each fixed
$0<x_{0}<\delta$, the oscillation time $\tau(x_{0},\alpha)$ attains a positive
minimum at some $0<\alpha<1$. Moreover,
$\lim_{\alpha\to 1^{-}}\tau(x_{0},\alpha)=\infty.$
###### Proof.
We fix $0<x_{0}\ll 1$ for now and denote by $(x,v)$ the solution of
equation (2). By definition of $\hat{\tau}$ we have $v(\hat{\tau},\alpha)=0$,
so that the Implicit Function Theorem yields
$\frac{\partial\hat{\tau}}{\partial\alpha}\,\dot{v}(\hat{\tau},\alpha)+V(\hat{\tau},\alpha)=0,$
therefore
$\frac{\partial\hat{\tau}}{\partial\alpha}=\frac{V(\hat{\tau},\alpha)}{x\left(\hat{\tau},\alpha\right)\left(1+f\left(x\left(\hat{\tau},\alpha\right)\right)\right)}.$
Since $x\left(\hat{\tau},\alpha\right)$ is negative, it follows from Lemma 3.1
that $\frac{\partial\hat{\tau}}{\partial\alpha}|_{\alpha=0}<0$. Now we
shall show that the last inequality also holds for the oscillation time $\tau$.
To do that, write $\hat{x}_{0}=-x(\hat{\tau}(\alpha,x_{0}),x_{0})$ and observe that
$\tau(\alpha,x_{0})=\hat{\tau}(\alpha,x_{0})+\hat{\tau}(\alpha,\hat{x}_{0}).$
That is to say, the half oscillation time depends on $|x_{0}|$ only. Notice
that $\hat{x}_{0}\leq x_{0}$ and the equality holds in the conservative case
$\alpha=0$ only. Therefore
$\frac{\partial\tau}{\partial\alpha}(\alpha,x_{0})=\frac{\partial\hat{\tau}}{\partial\alpha}(\alpha,x_{0})+\frac{\partial\hat{\tau}}{\partial\alpha}(\alpha,\hat{x}_{0})+\frac{\partial\hat{\tau}}{\partial x_{0}}(\alpha,\hat{x}_{0})\,\frac{\partial\hat{x}_{0}}{\partial\alpha}(\alpha,x_{0}).$
Moreover, since
$\frac{\partial\hat{x}_{0}}{\partial\alpha}(\alpha,x_{0})=-v(\hat{\tau}(\alpha,x_{0}),x_{0})\,\frac{\partial\hat{\tau}}{\partial\alpha}(\alpha,x_{0})=0,$
the last term vanishes, and we have
$\lim_{x_{0}\to
0^{+}}\frac{\partial\tau}{\partial\alpha}(\alpha,x_{0})=2\,\lim_{x_{0}\to
0^{+}}\frac{\partial\hat{\tau}}{\partial\alpha}(\alpha,x_{0}).$
Finally, since this limit is negative by Lemma 3.1, while the first claim of
Lemma 2.2 forces $\tau(\alpha,x_{0})\to\infty$ as $\alpha\to 1^{-}$,
$\tau(\alpha,x_{0})$ must attain a minimum at some $0<\alpha<1$. ∎
## 4 Conclusions and final remarks
An oscillating mass exhibits gradually diminishing amplitude in the presence
of damping. The time spent by the mass completing one oscillation depends on
several factors, such as the model for the restoring force, how the oscillation
starts, and the nature of the damping. For the sake of our discussion we
consider a vertical pendulum with a nonlinear restoring force resembling the
mathematical pendulum, letting the oscillation start at a small amplitude with
vanishing velocity, and a viscous damping model with a (normalized) viscosity
coefficient $\alpha$. We have proved that the oscillation time
$\tau\equiv\tau(\alpha)$ does not depend monotonically on $\alpha$: there
exists a threshold $\alpha_{0}$ (which depends on the starting amplitude of
the oscillation) such that $\tau$ reaches a local minimum at $\alpha_{0}$
(see Figure 1). It is worth noticing that this behavior cannot be observed if
the restitution force is linear; i.e., what we report in this paper is
essentially a nonlinear phenomenon.
Figure 2: Numerical simulation of the oscillation time $\tau$ depending on
the damping coefficient $\alpha$ with starting amplitude $x_{0}=0.2$ and
nonlinear restoring term given by $f(x)=-a\,x^{2}$, $a=\pm 1$. The curve with
the round marker (blue in the online version) corresponds to the oscillation
time $\tau_{l}$ of the linear case $f\equiv 0$.
The proof of existence of a positive minimum for the oscillation time rests
heavily on the fact that the constant $a$ in Assumption 1.1 is positive. To
explore the effect of changing the sign of the constant $a$, we carried out
some numerical simulations of $\tau$ with the nonlinear term
$f(x)=-a\,x^{2}$ for $a=1,-1$. The corresponding equations are particular
cases of an unforced Duffing oscillator [7]. The numerical results are shown
in Figure 2. For the sake of the numerical experimentation we also considered
negative values of $\alpha$. If $a=1$ we see that $\tau$ reaches
its minimum at a positive value for $\alpha$. By contrast, if $a=-1$ no
minimum seems to exist. The curve with the round marker (blue in the online
version) corresponds to the oscillation time of the linear case
$\tau_{l}=\frac{2\pi}{\sqrt{1-\alpha^{2}}}$.
The numerical experiments on the oscillation time $\tau$ (not shown in this
paper) assuming a quadratic damping exhibit the same behavior as the graphs
of Figure 2. Readers curious about the numerical experiments may take a look
at the author’s GitHub page
https://github.com/arangogithub/Oscillation-time
and download a Jupyter notebook with the Python code producing the results
shown in Figures 1 and 2.
## Acknowledgment
The author would like to thank the reviewer most heartily for carefully
reading the manuscript and for pointing out several inaccuracies in the
document.
## References
* [1] V. I. Arnold. Mathematical Methods of Classical Mechanics. Springer, 1989.
* [2] L. Cveticanin. Oscillator with strong quadratic damping force. Publ. Inst. Math. (Beograd) (N.S.), 85(99):119–130, March 2009.
* [3] A. Ghose-Choudhury and Partha Guha. An analytic technique for the solutions of nonlinear oscillators with damping using the Abel equation. arXiv:1608.02324 [nlin.SI], 2016.
* [4] Javier Tello Marmolejo, Oscar Isaksson, Remigio Cabrera-Trujillo, Niels C. Giesselmann, and Dag Hanstorp. A fully manipulable damped driven harmonic oscillator using optical levitation. American Journal of Physics, 88(6):490–498, 2020.
* [5] Kim Johannessen. An analytical solution to the equation of motion for the damped nonlinear pendulum. European Journal of Physics, 35(3):035014, mar 2014.
* [6] D Kharkongor and Mangal C Mahato. Resonance oscillation of a damped driven simple pendulum. European Journal of Physics, 39(6):065002, sep 2018.
* [7] S. Wiggins. Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer, 1990.
* [8] L F C Zonetti, A S S Camargo, J Sartori, D F de Sousa, and L A O Nunes. A demonstration of dry and viscous damping of an oscillating pendulum. European Journal of Physics, 20(2):85–88, jan 1999.
# 4D Atlas: Statistical Analysis of the Spatiotemporal Variability in
Longitudinal 3D Shape Data
Hamid Laga, Marcel Padilla, Ian H. Jermyn, Sebastian Kurtek, Mohammed Bennamoun, Anuj Srivastava
Hamid Laga is with the Information Technology Discipline, Murdoch University, Murdoch, 6150 (Australia), and with the Harry Butler Institute, Murdoch University, Murdoch, 6150 (Australia), Email: <EMAIL_ADDRESS>, and with the University of South Australia, The Phenomics and Bioinformatics Research Centre, SA 5000, Australia. Marcel Padilla is with TU Berlin, Germany. Email: <EMAIL_ADDRESS>. Ian H. Jermyn is with Durham University. Email: <EMAIL_ADDRESS>. Sebastian Kurtek is with the Ohio State University, US. Email: <EMAIL_ADDRESS>. Mohammed Bennamoun is with the University of Western Australia, Perth, WA 6009, Australia. Email: <EMAIL_ADDRESS>. Anuj Srivastava is with Florida State University, US. Email: <EMAIL_ADDRESS>. Manuscript received XXXX, 2021; revised XXX, 2021.
###### Abstract
We propose a novel framework to learn the spatiotemporal variability in
longitudinal 3D shape data sets, which contain observations of objects that
evolve and deform over time. This problem is challenging since surfaces come
with arbitrary parameterizations and thus, they need to be spatially
registered. Also, different deforming objects, also called _4D surfaces_ ,
evolve at different speeds and thus they need to be temporally aligned. We
solve this spatiotemporal registration problem using a Riemannian approach. We
treat a 3D surface as a point in a shape space equipped with an elastic
Riemannian metric that measures the amount of bending and stretching that the
surfaces undergo. A 4D surface can then be seen as a trajectory in this space.
With this formulation, the statistical analysis of 4D surfaces can be cast as
the problem of analyzing trajectories embedded in a nonlinear Riemannian
manifold. However, performing the spatiotemporal registration, and
subsequently computing statistics, on such nonlinear spaces is not
straightforward as they rely on complex nonlinear optimizations. Our core
contribution is the mapping of the surfaces to the space of Square-Root Normal
Fields (SRNF) where the $\mathbb{L}^{2}$ metric is equivalent to the partial
elastic metric in the space of surfaces. Thus, by solving the spatial
registration in the SRNF space, the problem of analyzing 4D surfaces becomes
the problem of analyzing trajectories embedded in the SRNF space, which has a
Euclidean structure. In this paper, we develop the building blocks that enable
such analysis. These include: (1) the spatiotemporal registration of
arbitrarily parameterized 4D surfaces even in the presence of large elastic
deformations and large variations in their execution rates; (2) the
computation of geodesics between 4D surfaces; (3) the computation of
statistical summaries, such as means and modes of variation, of collections of
4D surfaces; and (4) the synthesis of random 4D surfaces. We demonstrate the
performance of the proposed framework using 4D facial surfaces and 4D human
body shapes.
###### Index Terms:
Dynamic surfaces, Elastic metric, Square-Root Normal Field, Statistical
summaries, Shape synthesis and generation, 4D surface, Human4D, Face4D.
## 1 Introduction
Shape, an essential property of natural and man-made 3D objects, deforms over
time as a result of many internal and external factors. For instance,
anatomical organs such as bones, kidneys, and subcortical structures in the
brain deform due to natural growth or disease progression; human faces deform
as a consequence of talking, executing facial expressions, and aging.
Similarly, actions and motions such as walking, jumping, and running are the
result of deformations, over time, of the human body shape. The ability to
understand and model (1) the typical deformation patterns of a class of 3D
objects and (2) the variability of these deformations within and across object
classes has many applications. In medical diagnosis and biological growth
modeling, one is interested in measuring the intensity of pain from facial
deformations [1], and in distinguishing between normal growth and disease
progression using the deformation over time of body shape. In computer vision
and graphics, the ability to statistically model such spatiotemporal
variability can be used to summarize collections of 3D animations, and
synthesize and simulate animations and motions. Similar to 3D morphable face
models [2], these tools can also be used in a generative model for
synthesizing large corpora of labeled longitudinal 3D shape data, e.g., 4D
faces, for training deep neural networks.
This paper proposes a novel framework for the statistical analysis of
longitudinal 3D shape data composed of objects that deform over time. Each
object is represented as a closed manifold surface. We refer to an object
captured at different points in time, e.g., a 3D human face performing a
facial expression or speaking a sentence, or a 3D human body shape growing or
performing actions, as a _4D (or 3D + t) surface_. Given a set of 4D surfaces,
our goal is to:
* •
Compute the mean deformation pattern, _i.e.,_ the statistical mean 4D surface.
For example, the same person can smile in different ways. Similarly, different
people smile differently. The goal is to learn, based on observed longitudinal
shape data, the typical smile.
* •
Compute the directions of variation, analogous to Principal Component Analysis
(PCA) for modeling 3D shape variability [3, 4], but here we focus on modeling
variability in 4D surface collections.
* •
Characterize a population of 4D surfaces using statistical models.
* •
Synthesize new 4D surfaces by sampling, randomly or in a controlled fashion,
from these statistical models.
We refer to these tasks as the process of constructing a 4D atlas. Achieving
this goal requires solving important fundamental challenges. In fact, 3D
objects such as faces, human body shapes, and anatomical organs, which come
with arbitrary parameterizations, exhibit large elastic deformations within
the same object and across different objects. This makes their spatial
registration, _i.e.,_ finding one-to-one correspondences between each pair of
shapes, very challenging. In the case of 4D surfaces, there is an additional
temporal variability due to different execution rates (speeds) of evolution
within and across objects. For instance, a walking action can be executed at
variable speeds even by the same person. Thus, the statistical analysis of the
spatiotemporal variability in samples of 4D surfaces requires efficient
spatiotemporal registration of these samples. _Spatial registration_ refers to
the process of finding a one-to-one correspondence between two 3D surfaces of
the same individual, captured at different points in time, or of different
individuals. _Temporal registration_ refers to the problem of finding the
optimal time warping that aligns 4D surfaces, e.g., walking actions, performed
at different execution rates.
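The effect of execution rate can be illustrated with a toy one-dimensional signal (a sketch; the scalar `beta` stands in for a motion amplitude, and `xi` is a hypothetical time warp):

```python
import numpy as np

# Toy illustration of execution-rate variability: the "same" 1D motion
# signal executed at two different rates. xi is a time warp with
# xi(0) = 0, xi(1) = 1 and a positive derivative.
t = np.linspace(0.0, 1.0, 101)

def beta(t):
    # reference motion, e.g., the amplitude of an expression over time
    return np.sin(np.pi * t)

def xi(t):
    # a valid time warp: monotone, fixes both endpoints
    return t ** 2

slow = beta(xi(t))                # same motion, different execution rate
xi_inv = np.interp(t, xi(t), t)   # numerical inverse of the warp
aligned = beta(xi(xi_inv))        # undoing the warp recovers beta
```

Temporal registration amounts to estimating such a warp so that, after alignment, only genuine shape differences remain.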
We treat a 4D surface as a trajectory in a high-dimensional nonlinear space.
We then formulate the problem of analyzing the spatiotemporal variability of
4D surfaces as the statistical analysis of elastic trajectories, where
elasticity corresponds to variations in the execution rates of the 4D
surfaces. However, performing statistics on trajectories embedded in nonlinear
spaces of high dimension is computationally expensive since it relies on
nonlinear optimizations. Our core contribution in this paper is to map the
surfaces to the space of Square-Root Normal Fields (SRNF) [5, 4], which has a
Euclidean structure; in particular, the $\mathbb{L}^{2}$ metric in the space
of SRNFs is equivalent to the partial elastic metric in the space of surfaces
(see Section 3.1). The problem of analyzing 4D surfaces thus becomes the
problem of analyzing trajectories, or curves, embedded in the Euclidean space
of SRNFs.
This paper develops the building blocks that enable such analysis. We use
these building blocks to compute statistical summaries, such as means and
modes of variation of collections of 4D surfaces, and for the synthesis,
either randomly or in a controlled fashion, of 4D surfaces. We demonstrate the
utility and performance of the proposed framework using 4D facial surfaces
from the VOCA dataset [6], 4D human body shapes from the Dynamic FAUST
(DFAUST) dataset [7], and dressed 4D human body shapes from the CAPE dataset
[8]. Our approach is, however, general and applies to all spherically-
parameterized surfaces. In summary, the main contributions of this article
are:
* •
We represent 4D surfaces as trajectories in the space of SRNFs, which has a
Euclidean structure (Section 3.1). This key contribution enables the usage of
standard computational tools for the analysis and modeling of 4D surfaces
(Section 3.2).
* •
We propose efficient algorithms for the spatiotemporal registration of 4D
surfaces and the computation of geodesics between such 4D surfaces, even in
the presence of large elastic deformations and significant variations in
execution rates (Sections 3.2.2 and 3.2.3).
* •
The framework does not explicitly or implicitly assume that the
correspondences between the surfaces are given. It simultaneously solves for
the spatial and temporal registrations, and for the 4D geodesics that are
optimal under the proposed metrics.
* •
We develop computational tools for (1) computing summary statistics of 4D
surfaces and (2) synthesizing 4D surfaces from formal statistical models
(Section 4).
The remainder of this paper is organized as follows. We first discuss related
work in Section 2. Section 3 describes the proposed mathematical framework.
Section 4 discusses its application to various statistical analysis tasks.
Section 5 presents the results and discusses the performance of the proposed
framework. Section 6 summarizes the main findings of this paper and discusses
future research directions.
## 2 Related work
We classify the state-of-the-art into two categories. Methods in the first
category focus on cross-sectional shape data (Section 2.1). Methods in the
second category focus on longitudinal shape data (Section 2.2).
### 2.1 Statistical models of cross-sectional 3D shape data
Modeling shape variability in 2D and 3D objects has been studied extensively
in the literature. Early methods use Principal Component Analysis (PCA) to
characterize the shape space of objects. Initially introduced for the analysis
of planar shapes, the active shape model of Cootes _et al._ [9] has been
extended to 3D faces [10] and 3D human bodies [3]; see [2] for a detailed
survey. These methods represent 3D objects as discrete sets of landmarks,
e.g., vertices, which are assumed to be in correspondence across a population
of objects, and use standard Euclidean metrics for their comparison. Thus,
they are limited to 3D objects that undergo small elastic deformations.
To handle large nonlinear variations, e.g., elastic deformations such as the
bending and stretching observed in 3D human body shapes, Anguelov _et al._
[11] introduced SCAPE, which represents body shape and pose-dependent shape in
terms of triangle deformations instead of vertex displacements. Hasler _et
al._ [12] learn two linear models: one for pose and one for body shape. Loper
_et al._ [13] introduce SMPL, a vertex-based linear model for human body shape
and pose-dependent shape variation. This model has been extensively used in
the literature for the analysis of the human body shape. It has also been
adapted to other types of objects such as animals [14] and human body parts
[15]. While these models can capture large variations, they exhibit two
fundamental limitations. First, they rely on separate models for pose-
independent shape, pose-dependent shape, and pose. Thus, they are limited to
specific classes of objects, e.g., human bodies. Changing the target
application, e.g., to animals [14] or infants [16], requires redefining the
model. Second, they either assume a given registration between the surfaces of
the 3D objects or solve for registration separately by matching vertices
across the surfaces using an unrelated optimization criterion.
Recently, there has been a growing interest in analyzing variability in 3D
shape collections using tools from differential and Riemannian geometry [17,
18, 5, 19, 20, 21, 4]; see [22] for a detailed survey. The work most relevant
to ours is the Square-Root Normal Field (SRNF) representation introduced in
[5]. In this work, parameterized surfaces are compared using a partial elastic
Riemannian metric defined as a weighted sum of a bending term and a stretching
term. More importantly, Jermyn _et al._ [5] show that by carefully choosing
the weights of these two terms, the complex partial elastic metric reduces to
the $\mathbb{L}^{2}$ metric in the space of SRNFs. Thus, by treating shapes of
objects as points in the SRNF space, a straight line between two points in
this space is equivalent to the geodesic (or shortest) curve in the original
space of surfaces under the partial elastic metric, and represents the optimal
deformation between them. As a result, one can perform statistical analysis in
the SRNF space using standard vector calculus, and then map the results back
to the space of surfaces (for visualization), using the approach of Laga _et
al._ [4]. Another important property of SRNFs is that both registration and
optimal deformation (geodesic) are computed jointly, using the same partial
elastic metric.
One of the fundamental problems in statistical shape analysis is
correspondence and registration; see [23]. Past methods do not define a shape
space and a metric that enable the computation of geodesics and statistics.
Also, correspondence methods that are based on the intrinsic properties of
surfaces, e.g., Generalized Multidimensional Scaling [24], spectral
descriptors [25], or functional maps (which rely on the availability of
descriptors) [26, 27], are primarily suited for surfaces that deform in an
isometric manner. They also require landmarks to resolve symmetry ambiguities.
### 2.2 Statistical models for longitudinal shape data
As stated in [7], we live in a 4D world of 3D shapes in motion. With the
availability of a variety of range sensing devices that can scan dynamic
objects at high temporal frequency, there is a growing interest in capturing
and modeling the 4D dynamics of objects [28, 29, 30]. For instance, Wand _et
al._ [28] and Tevs _et al._ [30] propose methods to reconstruct the deforming
geometry of time-varying point clouds. Li _et al._ [31] use sequences of 4D
scans to learn a statistical 3D facial model. This model, referred to as
FLAME, has been later used by Cudeiro _et al._ [6] to capture, learn, and
synthesize 3D speaking styles. Bogo _et al._ [7] build a 4D human dataset by
registering a 3D human template to sequences of 3D human scans performing
various types of actions. These methods focus on the 3D reconstruction of
deforming objects. The literature on the statistical analysis of their
spatiotemporal variability is rather limited.
Early works focused on longitudinal 2D shape data. For instance, Anirudh _et
al._ [32] represent the contour of planar shapes that evolve as trajectories
on a Grassmann manifold. They then use the Transported Square-Root Vector
Fields (TSRVF) representation for their rate-invariant analysis. This approach
was later extended to the analysis of the trajectories of sparse features or
landmarks measured on the surface of a deforming 3D object. For instance,
Akhter _et al._ [33] introduced a bilinear spatiotemporal basis to model the
spatiotemporal variability in 4D surfaces. The approach treats surfaces as $N$
discrete landmarks and uses the $\mathbb{L}^{2}$ metric and PCA in
$\mathbb{R}^{4N}$ for their analysis. Thus, the approach is not suitable for
highly articulated shapes that undergo large articulated and elastic motion
(e.g., human bodies).
The approach also assumes that the landmarks are in correspondence, both
spatially and temporally.
Anirudh _et al._ [32] and Ben Amor _et al._ [34] represent human body actions
using dynamic skeletons. By treating each skeleton, represented by a set of
landmarks, as a high-dimensional point on Kendall’s shape space [35], motions
become trajectories in a high-dimensional, Euclidean space. Thus, one can use
the rich literature on the statistical analysis of high-dimensional curves
[36] to build a framework for the statistical analysis of human motions and
actions. This approach, however, has two fundamental limitations. First, the
$\mathbb{L}^{2}$ metric on Kendall’s shape space is not suitable for large
articulated motions. Second, skeletons and landmarks do not capture surface
elasticity, and thus, cannot be used to model growth processes and surface
deformations due to motion. While this can be addressed by using two separate
models, one for shape and another for motion, it will fail to capture motion-
dependent shape variations.
Using the LDDMM framework [37], Debavelaere _et al._ [38] and Bone _et al._
[39] represent a 4D surface as a flow of deformations of the 3D volume around
each surface and then code deformations as geodesics on a Riemannian manifold.
However, in general, natural deformations do not correspond to geodesics but
can be arbitrary paths on the shape space. Also, deforming 3D volumes is
expensive in terms of computation and memory requirements. Finally, this
approach relies on manually specified landmarks to efficiently register the 3D
volumes. Our approach, which can handle large articulated and elastic motions,
works directly on surfaces, does not assume that deformations are (piecewise)
geodesics, and does not rely on landmarks for the spatiotemporal registration.
## 3 Mathematical framework
We describe in this section the proposed mathematical framework for the
spatiotemporal registration and comparison of 4D surfaces. Section 4 discusses
its application to various statistical analysis tasks. A 4D surface, where the
fourth dimension refers to time, is a 3D surface that evolves over time.
Examples of such 4D surfaces include facial expressions (e.g., a smiling
face), a human body shape performing an action such as walking or jumping, or
an anatomical organ that evolves over time due to natural growth or disease
progression. It can be represented as a path $\alpha(t),\ t\in[0,1]$ such that
$\alpha(0)$ and $\alpha(1)$ are the initial and final surfaces, respectively,
and $\alpha(t),\ 0<t<1$ are the intermediate surfaces. The main challenges
posed by the statistical analysis of such 4D surfaces are two-fold. First,
surfaces within the same 4D surface and across different 4D surfaces come with
arbitrary poses and registrations. Second, 4D surfaces can have different
execution rates, e.g., two smiling expressions performed at different speeds.
Thus, to compare and perform statistical analysis on samples of 4D surfaces,
we first need to spatiotemporally register them.
We solve the spatiotemporal registration problem using tools from differential
geometry. We treat surfaces as points in a Riemannian shape space equipped
with an elastic metric that captures shape differences using bending and
stretching energies. We then formulate the elastic registration problem,
_i.e.,_ the problem of computing spatial correspondences, as that of finding
the optimal rotation and reparameterization that align one surface onto
another. This enables comparing and spatially registering surfaces, even in
the presence of large elastic deformations (Section 3.1).
With this representation, a 4D surface becomes a time-parameterized trajectory
in the above-referenced Riemannian shape space. Thus, the problem of analyzing
4D surfaces is reduced to the problem of analyzing curves. Similar to
surfaces, we define a space of curves equipped with a Riemannian metric, which
quantifies the amount of elastic deformation, or time warping, needed to align
two 4D surfaces (Section 3.2).
### 3.1 The elastic shape space of surfaces
Figure 1: Overview of the proposed spatial registration framework. Surfaces
are first mapped onto the space of Square-Root Normal Fields (SRNF) and
spatially registered using the $\mathbb{L}^{2}$ metric, which is equivalent to
the partial elastic metric in the original space of surfaces. 4D surfaces can
then be treated as curves embedded in the $\mathbb{L}^{2}$ space of SRNFs. The
operator $\star$ refers to the composition of functions in the SRNF space.
Fig. 1 overviews the proposed spatial registration framework. We consider a
surface as a function $f$ of the form:
$f:\Omega\to\mathbb{R}^{3};\quad s\mapsto f(s)=(X(s),Y(s),Z(s)),$ (1)
where $\Omega$ is a parameterization domain and $s\in\Omega$ is the parameter
in this domain. The choice of $\Omega$ depends on the nature of the surfaces
of interest. When dealing with closed surfaces of genus-0, $\Omega$ is a
sphere, _i.e.,_ $\Omega=\mathbb{S}^{2}$, and $s=(u,v)$, with $u\in[0,\pi]$ and
$v\in[0,2\pi[$, are the spherical coordinates. In practice, surfaces come as
unregistered triangulated meshes. We then use the spherical parameterization
algorithm of [40] to map them to a spherical domain.
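The spherical parameterization domain can be discretized as follows (a numpy sketch; the grid resolution is arbitrary, and the unit sphere below is only a placeholder for a spherically parameterized surface $f$):

```python
import numpy as np

# Discretize the parameterization domain Omega = S^2:
# s = (u, v) with u in [0, pi] and v in [0, 2*pi).
nu, nv = 64, 128
u = np.linspace(0.0, np.pi, nu)
v = np.linspace(0.0, 2.0 * np.pi, nv, endpoint=False)
U, V = np.meshgrid(u, v, indexing="ij")

# The embedding of the domain itself (the unit sphere). A spherically
# parameterized surface replaces this with its own coordinate functions
# (X(u, v), Y(u, v), Z(u, v)), stored as an (nu, nv, 3) array.
sphere = np.stack(
    [np.sin(U) * np.cos(V), np.sin(U) * np.sin(V), np.cos(U)], axis=-1
)
```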
To remove shape-preserving transformations, we first translate the surfaces so
that their center of mass is located at the origin, and then scale them to
have unit surface area. The space of normalized surfaces, denoted by
$\mathcal{F}$, is called the _preshape space_.
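This normalization step can be sketched for a triangulated mesh as follows (a sketch assuming a vertex-wise centroid; an area-weighted centroid is an equally valid convention):

```python
import numpy as np

def normalize_to_preshape(V, F):
    """Map a triangle mesh (V: #vertices x 3, F: #faces x 3 indices)
    into the preshape space: centroid at the origin, unit surface area."""
    V = V - V.mean(axis=0)                        # remove translation
    e1 = V[F[:, 1]] - V[F[:, 0]]                  # edge vectors per face
    e2 = V[F[:, 2]] - V[F[:, 0]]
    area = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()
    # area scales quadratically with coordinates, so divide by sqrt(area)
    return V / np.sqrt(area)

# toy mesh: a tetrahedron
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
F = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
Vn = normalize_to_preshape(V, F)
```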
Having removed translation and scale, we still need to account for rotations
and reparameterizations. Those are handled algebraically. For any surface
$f\in\mathcal{F}$ and for any rotation $O\in SO(3)$, $Of$ and $f$ have
equivalent shapes. Similarly, any reparameterization of a surface with an
orientation-preserving diffeomorphism preserves its shape. Let $\Gamma$ be the
space of all orientation-preserving diffeomorphisms of $\Omega$. Then,
$\forall\ \gamma\in\Gamma$, $f$ and $f\circ\gamma$, _i.e.,_ the
reparameterization of $f$ with $\gamma$, have the same shape. (Here, $\circ$
refers to the composition of two functions.) Note that reparameterizations
provide dense correspondences across surfaces. If one wants to put a surface
$f_{2}$ in correspondence with another surface $f_{1}$, then we need to find a
rotation $O^{*}$ and a reparameterization $\gamma^{*}$ such that
$O^{*}(f_{2}\circ\gamma^{*})$ is as close as possible to $f_{1}$. This is
precisely the process of 3D surface registration. It is defined mathematically
as:
$(O^{*},\gamma^{*})=\mathop{\rm argmin}_{O\in
SO(3),\gamma\in\Gamma}d_{\mathcal{F}}(f_{1},O(f_{2}\circ\gamma)),$ (2)
where $d_{\mathcal{F}}$ is a measure of distances between surfaces in
$\mathcal{F}$.
#### 3.1.1 SRNF representation of surfaces
For efficient registration and comparison of surfaces, the distance measure,
or metric, $d_{\mathcal{F}}$ should quantify interpretable shape differences,
_i.e.,_ the amount of bending and stretching one needs to apply to one surface
to deform it into another. It should also be simple enough to facilitate
efficient computation of correspondences and geodesic paths. Jermyn _et al._
[5] introduced a partial elastic metric that measures differences between
surfaces as a weighted sum of the amount of bending and stretching that one
needs to apply to a surface to align it to another. In this approach, bending
is measured in terms of changes in the orientation of the unit normal vectors,
while stretching is measured in terms of changes in the infinitesimal surface
areas. More importantly, Jermyn _et al._ [5] showed that by using a special
representation of surfaces, called the Square-Root Normal Field (SRNF), the
complex partial elastic metric reduces to the simple $\mathbb{L}^{2}$ metric
on the SRNF space.
###### Definition 3.1 (SRNF maps)
The SRNF map $H(f)$ of a surface $f\in\mathcal{F}$ is defined as the normal
vector field of the surface scaled by the square-root of the local area around
each surface point:
$H:\mathcal{F}\rightarrow\mathcal{C}_{h},\quad f\mapsto H(f)=h\text{, such
that }h(u,v)=\frac{\textbf{n}(u,v)}{\|\textbf{n}(u,v)\|^{\frac{1}{2}}_{2}},$ (3)
where $\mathcal{C}_{h}$ is the space of all SRNFs, $\textbf{n}=\frac{\partial
f}{\partial u}\times\frac{\partial f}{\partial v}$ is the normal field to $f$
and $\|\cdot\|_{2}$ is the $\mathbb{L}^{2}$ norm in $\mathbb{R}^{3}$.
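A discrete version of Eqn. (3) can be sketched with finite differences on a $(u,v)$ grid (a numpy sketch; proper boundary handling for the spherical grid is omitted, and the toy patch avoids the poles):

```python
import numpy as np

def srnf(fgrid, du, dv):
    """Discrete SRNF of a parameterized surface sampled on a (u, v) grid.
    fgrid: (nu, nv, 3) array of surface points f(u, v).
    Returns h = n / ||n||^{1/2} with n = df/du x df/dv estimated by
    central differences (one-sided at the grid boundaries)."""
    fu = np.gradient(fgrid, du, axis=0)
    fv = np.gradient(fgrid, dv, axis=1)
    n = np.cross(fu, fv)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.sqrt(np.maximum(norm, 1e-12))  # guard degenerate normals

# toy surface: a patch of the unit sphere in spherical coordinates
u = np.linspace(0.2, np.pi - 0.2, 40)
v = np.linspace(0.0, 2 * np.pi, 80, endpoint=False)
U, Vv = np.meshgrid(u, v, indexing="ij")
f = np.stack([np.sin(U) * np.cos(Vv), np.sin(U) * np.sin(Vv), np.cos(U)],
             axis=-1)
h = srnf(f, u[1] - u[0], v[1] - v[0])
```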
The SRNF representation of surfaces has nice properties that make it suitable
for the various analysis tasks at hand:
* •
It is translation invariant. Also, the SRNF of a rotated surface is simply the
rotation of the SRNF of that surface, _i.e.,_ $H(Of)=OH(f)$.
* •
$\forall\gamma\in\Gamma$,
$H(f\circ\gamma)=\sqrt{|J_{\gamma}|}(h\circ\gamma)\equiv h\ast\gamma$, where
$J_{\gamma}$ is the Jacobian of $\gamma$ and $|\cdot|$ is its determinant.
* •
Under the $\mathbb{L}^{2}$ metric in the space of SRNFs, the action of
$\Gamma$ is by isometries, _i.e.,_ $\forall\ \gamma\in\Gamma$ and $\forall\
f_{1},f_{2}\in\mathcal{F},\ \|h_{1}-h_{2}\|=\|h_{1}\ast\gamma-
h_{2}\ast\gamma\|$, where $h_{i}=H(f_{i}),\ i=1,2$.
* •
The space of SRNFs is a subset of $\mathbb{L}^{2}(\Omega,\mathbb{R}^{3})$. In
addition, the $\mathbb{L}^{2}$ metric in $\mathcal{C}_{h}$ is equivalent to
the partial elastic metric in the space of surfaces. As such, geodesics in
$\mathcal{F}$ become straight lines in the SRNF space $\mathcal{C}_{h}$; see
Fig. 1.
* •
Currently, there is no analytical expression for the inverse SRNF map, and in
fact, the injectivity and surjectivity of the SRNF remain open questions.
However, Laga _et al._ [4] showed that, for a given SRNF of a valid surface,
one can always numerically estimate the original surface, up to translation
[4].
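The first property, $H(Of)=OH(f)$, can be checked numerically with a compact discrete SRNF (a sketch; the grid resolution, test surface, and random rotation are all arbitrary choices):

```python
import numpy as np

def srnf(fgrid, du, dv):
    # compact discrete SRNF: h = n / ||n||^{1/2}, n = f_u x f_v
    fu = np.gradient(fgrid, du, axis=0)
    fv = np.gradient(fgrid, dv, axis=1)
    n = np.cross(fu, fv)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.sqrt(np.maximum(norm, 1e-12))

# a random rotation O in SO(3) via QR of a random matrix
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
O = Q * np.sign(np.linalg.det(Q))  # flip sign if det = -1

# toy surface: a pole-free patch of the unit sphere
u = np.linspace(0.2, np.pi - 0.2, 30)
v = np.linspace(0.0, 2 * np.pi, 60, endpoint=False)
U, Vv = np.meshgrid(u, v, indexing="ij")
f = np.stack([np.sin(U) * np.cos(Vv), np.sin(U) * np.sin(Vv), np.cos(U)],
             axis=-1)

du, dv = u[1] - u[0], v[1] - v[0]
lhs = srnf(f @ O.T, du, dv)        # H(Of): rotate, then map
rhs = srnf(f, du, dv) @ O.T        # O H(f): map, then rotate
```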
The last three properties are critical for comparison and atlas construction
of 4D surfaces. One can perform elastic registration of surfaces using the
standard $\mathbb{L}^{2}$ metric in the space of SRNFs, which is
computationally very efficient compared to using the complex elastic metric in
the space of surfaces (Section 3.1.2). Further, temporal evolutions of
surfaces can be interpreted as curves in the Euclidean space of SRNFs, making
them amenable to statistical analysis. Thus, the problem of constructing 4D
atlases becomes the problem of statistical analysis of elastic curves in the
space of SRNFs using standard statistical tools developed for Euclidean
spaces. After analysis, the results can be mapped back to the original space
of surfaces using efficient SRNF inversion procedures [4] (Section 3.2).
#### 3.1.2 Spatial elastic registration of surfaces
Under the SRNF representation, the elastic registration problem in Eqn. (2)
can be reformulated using the $\mathbb{L}^{2}$ metric on $\mathcal{C}_{h}$,
the space of SRNFs, instead of the complex partial elastic metric on the
preshape space $\mathcal{F}$. Let $f_{1}$ and $f_{2}$ be two surfaces in the
preshape space $\mathcal{F}$, and $h_{1}$ and $h_{2}$ their SRNFs. Then, the
rotation and reparameterization that optimally register $f_{2}$ to $f_{1}$ are
given by:
$(O^{*},\gamma^{*})=\mathop{\rm argmin}_{O\in
SO(3),\gamma\in\Gamma}\|h_{1}-O(h_{2}\ast\gamma)\|,$ (4)
where $\ast$ is the composition operator between an SRNF and a diffeomorphism
$\gamma\in\Gamma$. This joint optimization over $SO(3)$ and $\Gamma$ can be
solved by alternating, until convergence, between the two marginal
optimizations (this is allowed due to the product structure of
$SO(3)\times\Gamma$) [41]:
* •
Assuming a fixed parameterization, solve for the optimal rotation using
Procrustes analysis via Singular Value Decomposition (SVD).
* •
Assuming a fixed rotation, solve for the optimal reparameterization using a
gradient descent algorithm.
To solve for the optimal reparameterization, we represent the space $\Gamma$
of diffeomorphisms $\gamma$, which are functions on the sphere, using a
spherical harmonic basis $\\{B_{i}\\}_{i=1,\dots,n}$. This way, every
$\gamma\in\Gamma$ can be written as a weighted sum of the basis functions:
$\gamma=\sum_{i=1}^{n}a_{i}B_{i}$. Thus, the search for the optimal
diffeomorphism is reduced to the search for the optimal weights $\\{a_{i}\\}$.
This procedure is described in detail in Section A.1 of the Supplementary
Material.
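The rotation step can be sketched with the standard Procrustes solution via SVD (a sketch of the rotation marginal only; the reparameterization step, a gradient descent over spherical-harmonic coefficients, is not shown):

```python
import numpy as np

def optimal_rotation(h1, h2):
    """Procrustes step: the rotation O in SO(3) minimizing ||h1 - O h2||
    for two discretized SRNFs given as N x 3 arrays of samples."""
    A = h1.T @ h2                      # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(A)
    # guard against reflections so that det(O) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

# toy check: recover a known rotation about the z-axis
rng = np.random.default_rng(1)
h2 = rng.standard_normal((500, 3))
angle = 0.7
O_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
h1 = h2 @ O_true.T                     # h1 = O_true applied to h2
O_est = optimal_rotation(h1, h2)
```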
Although this approach converges to a local optimum, in practice, it can be
used in a very efficient way. Since a 4D surface $\alpha$ is a sequence of
discrete realizations $\alpha(t_{i})\in\mathcal{F},\ i=0,\cdots,n$, with
$t_{0}=0$ and $t_{n}=1$, one can perform the elastic registration
sequentially. Let $\beta=H(\alpha)$ be the SRNF map of the 4D surface
$\alpha$, _i.e.,_ $\forall\ t\in[0,1],\ \beta(t)=H(\alpha(t))$. Also, let
$\alpha_{0}$ be a reference surface randomly chosen from the population of
surfaces being analyzed, and $\beta_{0}$ its SRNF map ($\alpha_{0}$ can be,
for example, $\alpha(0)$). Then,
1. 1.
Find $O_{0}\in SO(3)$ and $\gamma_{0}\in\Gamma$ that register $\beta(t_{0})$
(the start point of SRNF path) to the SRNF of the reference surface
$\beta_{0}$, by solving Eqn. (4).
2. 2.
For $i=0,\dots,n$,
* •
$\beta(t_{i})\leftarrow O_{0}(\beta(t_{i})\ast\gamma_{0})$ and
$\alpha(t_{i})\leftarrow O_{0}(\alpha(t_{i})\circ\gamma_{0})$.
3. 3.
For $i=1,\dots,n$,
* •
Find, by solving Eqn. (4), $O_{i}\in SO(3)$ and $\gamma_{i}\in\Gamma$ that
register $\beta(t_{i})$ to $\beta(t_{i-1})$.
* •
$\beta(t_{i})\leftarrow O_{i}(\beta(t_{i})\ast\gamma_{i})$ and
$\alpha(t_{i})\leftarrow O_{i}(\alpha(t_{i})\circ\gamma_{i})$.
The first step ensures that, when given a collection of 4D surfaces
$\alpha_{j},\ j=1,\cdots,n$, the surfaces $\alpha_{j}(0),\ j=1,\cdots,n$ are
registered to each other. The subsequent steps ensure that $\forall t,\
\alpha_{j}(t)$ is registered to $\alpha_{j}(0)$. This sequential approach is
efficient since, in general, elastic deformations between two consecutive
frames in a 4D surface are relatively small. In what follows, we assume that
all surfaces within a 4D surface and across 4D surfaces are correctly
registered, _i.e.,_ they have been normalized for translation and scale, and
optimally rotated and reparameterized using the approach described in this
section.
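The sequential procedure above can be sketched as follows (a simplified stand-in: `register_pair` solves only the rotation part of Eqn. (4) via Procrustes, and the reparameterization $\gamma$ is omitted):

```python
import numpy as np

def register_pair(h_ref, h):
    """Stand-in for solving Eqn. (4): rotation only, via Procrustes.
    The full method also estimates a reparameterization gamma."""
    A = h_ref.reshape(-1, 3).T @ h.reshape(-1, 3)
    U, _, Vt = np.linalg.svd(A)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

def register_sequence(beta, beta_ref):
    """Sequentially register an SRNF path beta (list of (nu, nv, 3)
    frames) to a reference SRNF beta_ref, following the steps above."""
    beta = [np.asarray(b, dtype=float).copy() for b in beta]
    O0 = register_pair(beta_ref, beta[0])   # step 1: first frame -> reference
    beta = [b @ O0.T for b in beta]         # step 2: apply to all frames
    for i in range(1, len(beta)):           # step 3: frame i -> frame i-1
        Oi = register_pair(beta[i - 1], beta[i])
        beta[i] = beta[i] @ Oi.T
    return beta

# toy data: rotated copies of one random SRNF field
rng = np.random.default_rng(2)
h0 = rng.standard_normal((8, 8, 3))

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

frames = [h0 @ rot_z(0.3 * i).T for i in range(5)]
registered = register_sequence(frames, h0)
```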
### 3.2 The shape space of 4D surfaces
Figure 2: In the proposed temporal registration framework, 4D surfaces,
represented as curves in the SRNF space, are first mapped to the space of
Transported Square-Root Vector Fields (TSRVF) for their temporal registration.
Points in the TSRVF space are mapped back to the space of SRNFs and then to
the original space of surfaces for visualization. The operator $\odot$ refers
to the composition of functions in the TSRVF space.
Under the setup of Section 3.1, a 4D surface becomes a curve
$\alpha:[0,1]\to\mathcal{F}$. However, since $\mathcal{F}$ is endowed with the
partial elastic metric, which is non-Euclidean, we propose to further map the
4D surfaces to the SRNF space, which has a Euclidean structure. Thus, 4D
surfaces become curves of the form $\beta:[0,1]\to\mathcal{C}_{h}$. With this
representation, all statistical tasks are carried out in $\mathcal{C}_{h}$
under the $\mathbb{L}^{2}$ metric with results mapped back to the space of
surfaces $\mathcal{F}$ for visualization.
#### 3.2.1 TSRVF representation of SRNF trajectories
Let $\alpha$ be a curve (path) in $\mathcal{F}$ and $\beta$ its image under
the SRNF map, _i.e.,_ $\forall\ t\in[0,1],\ \beta(t)=H(\alpha(t))$; $\beta$ is
also a curve, but in $\mathcal{C}_{h}$. Let $\mathcal{M}_{\mathcal{F}}$ be the
space of all paths in $\mathcal{F}$, and $\mathcal{M}_{h}$ be the space of all
paths in $\mathcal{C}_{h}$:
$\mathcal{M}_{h}=\\{\beta:[0,1]\to\mathcal{C}_{h}|\beta=H(\alpha),\
\alpha\in\mathcal{M}_{\mathcal{F}}\\}$.
To temporally register, compare, and summarize samples of such curves, we need
to define an appropriate metric on $\mathcal{M}_{\mathcal{F}}$, or
$\mathcal{M}_{h}$, that is invariant to the rate (or speed) of the 4D
surfaces. For example, facial expressions that only differ in the rate of
their execution should be deemed equivalent under such a metric. Let
$\Xi=\\{\xi:[0,1]\to[0,1]\text{ such that }0<\dot{\xi}<\infty,\xi(0)=0\text{
and }\xi(1)=1\\}$ denote all reparameterizations of the temporal domain
$[0,1]$. Here, $\dot{\xi}=\frac{d\xi}{dt}$. Then, for any $\xi\in\Xi$,
$\beta\circ\xi$ and $\beta$ only differ in the rate of execution and are thus
equivalent. The function $\xi$ is often referred to as a time warping of the
domain $[0,1]$. Temporal registration of two 4D surfaces $\alpha_{1}$ and
$\alpha_{2}$ then becomes the problem of registering their corresponding
curves $\beta_{1}$ and $\beta_{2}$ in $\mathcal{C}_{h}$. This requires solving
for an optimal reparameterization $\xi^{*}\in\Xi$ that minimizes an
appropriate distance $d(\cdot,\cdot)$ between $\beta_{1}$ and $\beta_{2}$:
$\xi^{*}=\mathop{\rm argmin}_{\xi\in\Xi}d(\beta_{1},\beta_{2}\circ\xi).$ (5)
The optimization over $\Xi$ in Eqn. (5) ensures rate invariance. Thus, we are
left with defining a distance $d(\cdot,\cdot)$ that is invariant to time
warping of the temporal domain $[0,1]$. To this end, we borrow tools from
Srivastava _et al._ [36] for analyzing shapes of curves in $\mathbb{R}^{n},\ n\geq 2$.
The associated elastic metric defined therein is invariant to
reparameterizations of curves, and quantifies the amount of bending and
stretching of the curves in terms of changes in the orientations and lengths
of their tangent vectors, respectively. However, instead of directly working
with such a complex elastic metric, Su _et al._ [42] introduced the
Transported Square-Root Vector Field (TSRVF) representation, which simplifies
the complex elastic metric into the simple $\mathbb{L}^{2}$ metric.
###### Definition 3.2 (Transported Square-Root Vector Field (TSRVF))
For any smooth trajectory $\beta\in\mathcal{M}_{h}$, the transported square-
root vector field (TSRVF) is a parallel transport of a scaled velocity vector
field of $\beta$ to a reference point $c\in\mathcal{C}_{h}$ according to
$Q(\beta)(t)=q(t)=\frac{\dot{\beta}(t)|_{\beta(t)\to
c}}{\sqrt{\|\dot{\beta}(t)\|}},$ (6)
where $\dot{\beta}=\frac{\partial\beta}{\partial t}$ is the tangent vector
field on $\beta$ and $\|\cdot\|$ is the $\mathbb{L}^{2}$ metric on
$\mathcal{C}_{h}$.
Note that the parallel transport $\dot{\beta}(t)|_{\beta(t)\to c}$ is
performed along the geodesic from $\beta(t)$ to $c$. The TSRVF representation
has nice properties that facilitate efficient temporal registration of 4D
surfaces. Let $\beta_{1}$ and $\beta_{2}$ be two trajectories on
$\mathcal{M}_{h}$, and let $q_{1}$ and $q_{2}$ be their respective TSRVFs.
Then,
* •
The elastic metric on the space of trajectories $\mathcal{M}_{h}$ reduces to
the $\mathbb{L}^{2}$ metric on the space of their TSRVFs. Thus, one can use
the $\mathbb{L}^{2}$ metric to compare two paths:
$d(\beta_{1},\beta_{2})=\|q_{1}-q_{2}\|=\left(\int_{0}^{1}\|q_{1}(t)-q_{2}(t)\|^{2}dt\right)^{\frac{1}{2}},$
(7)
where $\|\cdot\|$ is again the $\mathbb{L}^{2}$ norm on $\mathcal{C}_{h}$.
* •
For any $\xi\in\Xi$, $Q(\beta\circ\xi)=(q\circ\xi)\sqrt{\dot{\xi}(t)}$, which
we denote by $q\odot\xi$.
* •
Under the $\mathbb{L}^{2}$ metric, the action of the reparameterization group
$\Xi$ on the space of TSRVFs is by isometries, _i.e.,_
$\|q_{1}-q_{2}\|=\|(q_{1}\odot\xi)-(q_{2}\odot\xi)\|,\ \forall\ \xi\in\Xi$.
* •
Given a TSRVF $q$ and an initial trajectory point, one can reconstruct the
corresponding path $\beta$, such that $Q(\beta)=q$, by solving an ordinary
differential equation [42].
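Because $\mathcal{C}_{h}$ is treated as Euclidean here, the parallel transport in Eqn. (6) is the identity on coordinates, so a discrete TSRVF and the reconstruction in the last property reduce to simple array operations (a numpy sketch on a toy two-dimensional trajectory standing in for an SRNF curve):

```python
import numpy as np

def tsrvf(beta, dt):
    """Discrete TSRVF of a uniformly sampled trajectory beta (T x d).
    In a flat space the transport to the reference point is the identity,
    so q = beta' / ||beta'||^{1/2} (forward differences; a sketch)."""
    vel = np.diff(beta, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    return vel / np.sqrt(np.maximum(speed, 1e-12))

def tsrvf_inverse(q, beta0, dt):
    """Reconstruct the trajectory from q and its starting point by
    integrating beta' = ||q|| q with a forward Euler step, mirroring
    the ODE reconstruction property above."""
    beta = [np.asarray(beta0, dtype=float)]
    for qi in q:
        beta.append(beta[-1] + dt * np.linalg.norm(qi) * qi)
    return np.array(beta)

t = np.linspace(0.0, 1.0, 60)
beta = np.stack([t ** 2, np.sin(t)], axis=1)   # a toy curve in d = 2
q = tsrvf(beta, t[1] - t[0])
beta_rec = tsrvf_inverse(q, beta[0], t[1] - t[0])
```

With forward differences the roundtrip is exact, since each Euler step reproduces the corresponding finite difference.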
As we will see next, these properties enable efficient temporal registration
of trajectories and subsequent rate-invariant statistical analysis. In what
follows, let $\mathcal{Q}$ denote the space of TSRVFs equipped with the
$\mathbb{L}^{2}$ metric defined in Eqn. (7).
#### 3.2.2 Temporal registration
Under the TSRVF representation, the temporal registration problem in Eqn. (5),
which involved optimization over $\Xi$, can now be reformulated using the
standard $\mathbb{L}^{2}$ metric on the space of TSRVFs:
$\xi^{*}=\mathop{\rm argmin}_{\xi\in\Xi}\|q_{1}-q_{2}\odot\xi\|.$ (8)
This problem can be solved efficiently using a Dynamic Programming algorithm
[43, 42]. Then, the rate-invariant distance between two trajectories is given
by:
$d(\beta_{1},\beta_{2})=\inf_{\xi\in\Xi}\|q_{1}-q_{2}\odot\xi\|.$ (9)
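A simplified discrete version of this step can be sketched with classic dynamic time warping (a stand-in: the actual algorithm of [43, 42] optimizes over discrete diffeomorphisms and includes the $\sqrt{\dot{\xi}}$ factor, which is omitted here):

```python
import numpy as np

def dtw_align(q1, q2):
    """Dynamic-programming alignment of two discretized trajectories
    (T x d arrays) under the squared L2 ground cost. Returns the
    alignment cost and the optimal warping path."""
    n, m = len(q1), len(q2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = np.linalg.norm(q1[i - 1] - q2[j - 1]) ** 2
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the optimal warping path
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        k = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        i, j = (i - 1, j - 1) if k == 0 else ((i - 1, j) if k == 1 else (i, j - 1))
    return D[n, m], path[::-1]

# toy trajectories: the same loop traversed at different rates
t = np.linspace(0.0, 1.0, 40)
q1 = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
q2 = np.stack([np.sin(2 * np.pi * t ** 2), np.cos(2 * np.pi * t ** 2)], axis=1)
cost_warped, path = dtw_align(q1, q2)
cost_id = float(np.sum(np.linalg.norm(q1 - q2, axis=1) ** 2))
```

Since the identity (diagonal) path is always feasible, the warped cost is never worse than the frame-by-frame comparison, and is strictly lower whenever time warping helps.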
Figure 3: Example of a geodesic between the source 4D surface (top row) and
the target 4D surface (bottom row) after spatiotemporal registration. The
highlighted row corresponds to the mean 4D surface. A video of the figure is
included in the Supplementary Material.
#### 3.2.3 Geodesics between 4D surfaces
We now summarize our entire pipeline. Let
$\alpha_{1},\alpha_{2}\in\mathcal{M}_{\mathcal{F}}$ be two 4D surfaces. The
pipeline to spatiotemporally register them and compute the geodesic path
between them can be summarized as follows.
(1) Proposed spatial registration. The goal is to spatially register the
surfaces in $\alpha_{1}$ and $\alpha_{2}$ to the same reference surface, which
can be any arbitrary surface. For simplicity, we choose it to be
$\alpha_{1}(0)$, the first surface in the sequence $\alpha_{1}$. The spatial
registration can then be performed in two steps.
* •
Compute the SRNF maps, $\forall\ t\in[0,1]$, $\beta_{1}(t)=H(\alpha_{1}(t))$
and $\beta_{2}(t)=H(\alpha_{2}(t))$.
* •
Spatially register $\beta_{1}$ and $\beta_{2}$, and thus $\alpha_{1}$ and
$\alpha_{2}$, to the reference surface, using the algorithm described in
Section 3.1.2.
For simplicity of notation, we also use $\beta_{1}$ and $\beta_{2}$ to denote
the spatially-registered trajectories.
(2) Proposed temporal alignment. $\beta_{1}$ and $\beta_{2}$ are elements of
$\mathcal{M}_{h}$. We perform temporal registration in three steps.
* •
Map $\beta_{1}$ and $\beta_{2}$ to the TSRVF space $\mathcal{Q}$:
$q_{1}=Q(\beta_{1})$ and $q_{2}=Q(\beta_{2})$.
* •
Find $\xi^{*}$, the optimal reparameterization that registers $q_{2}$ to
$q_{1}$, by solving Eqn. (8).
* •
$q_{2}^{*}\leftarrow q_{2}\odot\xi^{*}$ and
$\beta_{2}^{*}\leftarrow\beta_{2}\circ\xi^{*}$.
(3) Proposed geodesic computation. Since $\mathcal{Q}$ is Euclidean, the
geodesic path $\Lambda_{q}$ between $q_{1}$ and $q_{2}^{*}$ is a straight
line:
$\Lambda_{q}(\tau)=(1-\tau)q_{1}+\tau q_{2}^{*},\ \tau\in[0,1].$ (10)
Next, we map $\Lambda_{q}$ back to $\mathcal{M}_{h}$ using the inverse TSRVF
map, _i.e.,_ $\forall\ \tau,\
\Lambda_{\beta}(\tau)=Q^{-1}(\Lambda_{q}(\tau))$. The computation of the
inverse mapping uses the starting point on the trajectory and has a closed-
form solution, making it computationally efficient. This is described in
detail in [42]. After applying the inverse mapping to the entire geodesic
path, we have $\Lambda_{\beta}(0)=\beta_{1},\ \Lambda_{\beta}(1)=\beta_{2}$,
and $\beta_{\tau}=\Lambda_{\beta}(\tau),\ \tau\in(0,1)$, _i.e.,_ a geodesic
path between the SRNF curves $\beta_{1}$ and $\beta_{2}$.
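Because $\mathcal{Q}$ is flat, Eqn. (10) amounts to elementwise linear interpolation between the two temporally registered TSRVFs. A minimal sketch (function name and array discretization assumed):

```python
import numpy as np

def tsrvf_geodesic(q1, q2_star, n_steps=5):
    """Straight-line geodesic between TSRVFs, Eqn. (10).

    q1, q2_star: (T, d) arrays of temporally registered TSRVFs.
    Returns an (n_steps, T, d) array; entry k corresponds to
    tau = k / (n_steps - 1).
    """
    q1, q2_star = np.asarray(q1), np.asarray(q2_star)
    taus = np.linspace(0.0, 1.0, n_steps)
    # Lambda_q(tau) = (1 - tau) q1 + tau q2*, evaluated per frame.
    return np.stack([(1.0 - t) * q1 + t * q2_star for t in taus])
```

The endpoints recover $q_{1}$ and $q_{2}^{*}$ exactly; each intermediate slice is then mapped back to $\mathcal{M}_{h}$ (and eventually to $\mathcal{F}$) by the inverse TSRVF and SRNF maps described in the text.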
(4) Visualization. To visualize geodesic paths between 4D surfaces (and not
their SRNFs), we need to further map all SRNFs on the trajectory
$\Lambda_{\beta}(\tau)$ to their corresponding surfaces in $\mathcal{F}$. This
is done using the inverse SRNF map, _i.e.,_ $\forall\ \tau\in[0,1],\
t\in[0,1],\ \Lambda(\tau)(t)=H^{-1}(\Lambda_{\beta}(\tau)(t)).$ Unlike the
TSRVF map whose inverse can be computed analytically, inversion of the SRNF
map, whose injectivity and surjectivity are yet to be determined, has to be
accomplished numerically using the approach of Laga _et al._ [4].
Now, $\Lambda$ is the geodesic path between the 4D surfaces $\alpha_{1}$ and
$\alpha_{2}^{*}$, _i.e.,_ $\Lambda(0)=\alpha_{1},\Lambda(1)=\alpha_{2}^{*}$,
and $\alpha_{\tau}=\Lambda(\tau)$ is a 4D surface at time $\tau$ along the
geodesic path. Fig. 3 shows an example of a geodesic between two 4D surfaces
representing talking faces. Each row corresponds to one 4D surface. The top row
is the source, the bottom row is the target after optimal spatiotemporal
registration, and the highlighted row in the middle corresponds to the mean 4D
surface. The temporal registration is further illustrated in Fig. 4, where we
show the source 4D surface, the target 4D surface before the spatiotemporal
registration, and the target 4D surface after the spatiotemporal registration.
Section 5 provides more examples of geodesics computed between various types
of 4D surfaces.
## 4 Statistical analysis of 4D surfaces
Now that we have devised all of the required mathematical tools for comparing
4D surfaces, we shift our focus to how these tools can be used to build a 4D
atlas from a sample of 4D surfaces. Let $\alpha_{1},\cdots,\alpha_{n}$ be a
set of 4D surfaces and $\beta_{1},\cdots,\beta_{n}$ be their corresponding
trajectories in $\mathcal{C}_{h}$. We assume that all of the surfaces, and
their corresponding SRNFs, have been spatially registered to a common
reference; see Section 3.1.2. We proceed to map all of the 4D surfaces to
their corresponding TSRVFs, hereinafter denoted by $q_{1},\cdots,q_{n}$, and
compute statistics in the space of TSRVFs. As before, all results are mapped
at the end to the original space of surfaces $\mathcal{F}$ for visualization.
We will use this framework to compute means and modes of variation, and to
synthesize novel 4D surfaces by sampling from probability distributions fitted
to a set of exemplar 4D surfaces.
Mean of 4D surfaces. Intuitively, the mean of a collection of 4D surfaces is
the 4D surface that is as close as possible to all of the 4D surfaces in the
collection, under the specified distance measure (or metric). It is also
called Karcher mean and is defined as the 4D surface that minimizes the sum of
squared distances to all of the 4D surfaces in the given sample. In other
words, we seek to solve the following optimization problem, defined in the
space of TSRVFs:
$\bar{q}=\mathop{\rm argmin}_{q\in\mathcal{Q}}\sum_{i=1}^{n}\min_{\xi_{i}\in\Xi}\|q-q_{i}\odot\xi_{i}\|^{2}.$ (11)
Algorithm 3 in the Supplementary Material describes the proposed procedure for
solving this optimization problem. It outputs the TSRVF Karcher mean
$\bar{q}$, the optimal temporal reparameterizations $\xi_{i}^{*},\
i=1,\ldots,n$, and the temporally registered TSRVFs
$q_{i}^{*}=q_{i}\odot\xi_{i}^{*}$; again, for simplified notation we simply
use $\xi_{i}$ and $q_{i}$ to denote the optimal temporal reparameterizations
and the temporally registered TSRVFs. The mean 4D surface can be obtained by
TSRVF inversion of the mean TSRVF followed by SRNF inversion [4].
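For readers without access to the Supplementary Material, the structure of this template-estimation loop can be sketched as follows. This is a schematic illustration of Eqn. (11), not Algorithm 3 itself; the `register` callback is a hypothetical stand-in for the temporal-registration step.

```python
import numpy as np

def karcher_mean(qs, register, n_iters=10, tol=1e-8):
    """Karcher mean in the flat TSRVF space (Eqn. (11)), schematically.

    qs: list of (T, d) TSRVF arrays.
    register: callable (q_ref, q) -> q temporally registered onto q_ref;
              with the identity registration this reduces to a plain mean.
    Alternates registration to the current mean with re-averaging.
    """
    q_bar = np.mean(qs, axis=0)              # initialize with the raw mean
    for _ in range(n_iters):
        aligned = [register(q_bar, q) for q in qs]
        q_new = np.mean(aligned, axis=0)     # Euclidean space: closed form
        if np.linalg.norm(q_new - q_bar) < tol:
            return q_new
        q_bar = q_new
    return q_bar
```

Because $\mathcal{Q}$ is Euclidean, the averaging step has a closed form; the only iterative part is re-solving the registrations against the evolving template.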
Figure 4: Examples of the spatiotemporal registration of two facial
expressions (4D faces). In each example, we show (a) the source 4D face, (b)
the target 4D face, and (c) the target 4D face after spatiotemporal
registration using the proposed framework. Note how the spatiotemporally
registered target 4D surface became fully synchronised with the source 4D
surface. The full video sequence is provided in the supplementary material.
Principal directions of variation. Since the TSRVF space is Euclidean, the
principal directions of variation can also be computed in a standard way,
_i.e.,_ using the Singular Value Decomposition (SVD) of the covariance matrix.
In the following, we assume that the TSRVFs are sampled using a finite set of
points and appropriately vectorized. Let
$K=\frac{1}{n-1}\sum_{i=1}^{n}(q_{i}-\bar{q})(q_{i}-\bar{q})^{\top}$ be the covariance
matrix of the input sample, $\sigma_{i},\ i=1,\dots,k$ its $k$-leading
eigenvalues, and $\Sigma_{i},\ i=1,\dots,k$ the corresponding eigenvectors.
Then, one can explore the variability in the $i$-th principal direction using
$q_{\tau}=\bar{q}+\tau\sqrt{\sigma_{i}}\Sigma_{i}$, where $\tau\in\mathbb{R}$. To
visualize this principal direction of variation, we again use TSRVF inversion
followed by SRNF inversion to compute the 4D surface $\alpha_{\tau}$, such
that $Q(H(\alpha_{\tau}))=\bar{q}+\tau\sqrt{\sigma_{i}}\Sigma_{i},\ \tau\in\mathbb{R}$.
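A sketch of this computation on vectorized TSRVFs (hypothetical names; the SVD of the centered data stands in for an explicit eigendecomposition of $K$):

```python
import numpy as np

def pca_modes(Q, k):
    """Mean and k leading principal directions of vectorized TSRVFs.

    Q: (n, D) matrix, one vectorized TSRVF per row.
    The SVD of the centered data yields the eigenvectors of the
    covariance K without ever forming the D x D matrix.
    """
    n = Q.shape[0]
    q_bar = Q.mean(axis=0)
    _, s, Vt = np.linalg.svd(Q - q_bar, full_matrices=False)
    sigmas = s ** 2 / (n - 1)        # eigenvalues of the covariance
    return q_bar, sigmas[:k], Vt[:k]

def explore_mode(q_bar, sigma, Sigma, tau):
    """Point tau standard deviations along one principal direction."""
    return q_bar + tau * np.sqrt(sigma) * Sigma
```

Sampling `explore_mode` over a range of `tau` values produces the rows shown in the principal-direction figures; each sampled vector is then pushed back to a 4D surface via TSRVF and SRNF inversion.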
Random 4D surface synthesis. Given the mean and the $k$-leading principal
directions of variation, any TSRVF $q$ of a 4D surface $\alpha$ can be
approximately represented, in a parameterized form, as:
$q=\bar{q}+\sum_{i=1}^{k}\tau_{i}\sqrt{\sigma_{i}}\Sigma_{i},\ \tau_{i}\in\mathbb{R}.$
(12)
Thus, to generate a random TSRVF, we only need to generate $k$ random values
$\tau_{i}\in\real$ and plug them into Eqn. (12). Then, to compute the
corresponding random 4D surface, we apply the inverse TSRVF map followed by
the inverse SRNF map. Also, by enforcing each $\tau_{i}$ to be within a
certain range, e.g., $[-1,1]$, we can ensure that the generated random 4D
surfaces are similar to the given sample and thus plausible.
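The sampling step of Eqn. (12), with truncation of the coefficients, can be sketched as follows (the function name and the Gaussian coefficient model are assumptions consistent with the text):

```python
import numpy as np

def synthesize_tsrvf(q_bar, sigmas, Sigmas, rng, clip=1.5):
    """Sample a random TSRVF from the fitted model (Eqn. (12)).

    sigmas: (k,) leading eigenvalues; Sigmas: (k, D) principal directions.
    Coefficients tau_i are drawn from N(0, 1) and truncated to
    [-clip, clip] standard deviations to keep the sample plausible.
    """
    taus = np.clip(rng.standard_normal(len(sigmas)), -clip, clip)
    return q_bar + (taus * np.sqrt(sigmas)) @ Sigmas
```

As in the text, the resulting vector is converted to a 4D surface by applying the inverse TSRVF map followed by the inverse SRNF map.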
This procedure allows the generation of new random 4D surfaces. However, it
does not offer any control over the generation process, which is entirely
random. In many situations, we would like to control this process using a set
of parameters. For instance, when dealing with 4D facial expressions, these
parameters can be the degree of sadness, facial dimensions, etc. This type of
control can be implemented using regression in the TSRVF space, a problem that
we plan to explore in the future.
Figure 5: Example of the spatiotemporal registration, using the proposed
algorithm, of two 4D human body shapes (from the DFAUST dataset) performing a
jumping action at different speeds. Note how the spatiotemporally registered
target 4D surface in (c) became synchronised with the source 4D surface in
(a). The full video sequence is provided in the supplementary material.
## 5 Results
This section demonstrates some results of the proposed framework and evaluates
its performance. Section 5.1 focuses on spatiotemporal registration and
geodesic computation between 4D surfaces. Section 5.2 focuses on the
computation of the statistical summaries while Section 5.3 focuses on the
random synthesis of 4D surfaces. Finally, Section 5.4 provides an ablation
study to demonstrate the importance of each component of the proposed
framework. We use three datasets: (1) VOCA [6], which contains 4D facial
scans, captured at $60$fps, of $12$ subjects speaking various sentences; (2)
MPI DFAUST [7], which contains high-resolution 4D scans of $10$ human subjects
in motion, captured at $60$fps; in total, the dataset contains $129$ dynamic
performances; and (3) MPI 4D CAPE [8], which contains high-resolution 4D scans
of $10$ male and $5$ female subjects in clothing. These datasets come as
polygonal meshes with consistent triangulation and given registration across
the meshes. We spherically parameterize them using Kurtek _et al._ ’s
implementation [17] of the spherical parameterization approach of [40]. We
also apply randomly generated spatial diffeomorphisms to simulate
non-registered surfaces. Our framework does not use, either explicitly or
implicitly, the provided vertex-wise correspondences.
Figure 6: Example of the spatiotemporal registration, using the proposed
algorithm, of two 4D body shapes with different clothing (from the CAPE
dataset). Note how the spatiotemporally registered target 4D surface in (c)
became fully synchronised with the source 4D surface in (a). The full video
sequence is provided in the supplementary material.
(a) Before registration. The highlighted middle row corresponds to the mean 4D
surface.
(b) After registration. The highlighted middle row corresponds to the mean 4D
surface.
Figure 7: Example of a geodesic between 4D surfaces corresponding to punching
actions ((a) before registration and (b) after registration). In each example,
we show the source 4D surface in the first row, the target 4D surface in the
last row, and three intermediate 4D surfaces along the geodesic between the
source and the target. Observe how misaligned the highlighted frames are
before registration. A video illustrating these sequences is included in the
Supplementary Material.
### 5.1 Spatiotemporal registration and 4D geodesics
We consider pairs of 4D facial expressions from the VOCA dataset. We first
reparameterize each 4D surface using randomly generated time-warping functions
to simulate facial expressions performed at different execution rates. We then
apply the framework proposed in this paper to spatiotemporally register them.
Fig. 4 shows an example of such spatiotemporal registration. In this example,
we show (a) the source 4D surface, (b) the target 4D surface before
spatiotemporal registration, and (c) the target 4D surface after
spatiotemporally registering it to the source. We also highlight some key
frames. As one can see, the original 4D surfaces differ significantly in their
execution rates. The spatiotemporal registration framework synchronizes the
source and target expressions, thus enabling their comparison, interpolation
and averaging. We also perform a similar experiment on the human body shapes
of the DFAUST [7] and CAPE [8] datasets; see Figs. 5, 6, and 12(a)-(c).
Compared to faces, human body shapes are very challenging to analyze since
they perform complex articulated motions, which result in large bending and
stretching of their surfaces.
4D geodesics. Fig. 7 shows geodesics between 4D human body shapes. In this
example, both the source and target perform a punching action but at different
rates. We show the geodesic before and after the spatiotemporal registration
of the target 4D surface onto the source. Unlike the jumping action in Fig. 5,
the left hand of the target surface does not perform the same action as the
left hand of the source surface. Nevertheless, our framework can bring these
two 4D surfaces as close as possible to each other. The supplementary material
includes a video of the sequence and more examples of geodesics between 4D
faces (from VOCA), 4D human bodies (from DFAUST), and clothed 4D human bodies
(from CAPE).
TABLE I: Comparison of the performance and accuracy of the proposed spatial registration with state-of-the-art techniques such as MapTree [44] and Fast Sinkhorn filters [45], which are based on functional maps [27].
| COMA [46] | CAPE [8] | DFAUST [7]
---|---|---|---
| Mean | Std | Median | Mean | Std | Median | Mean | Std | Median
MapTree [44] | $1.6042$ | $0.6956$ | $1.6069$ | $1.4447$ | $0.7227$ | $1.4483$ | $1.5344$ | $0.7065$ | $1.5211$
ICP-NN [45] | $1.6028$ | $0.6973$ | $1.6056$ | $1.4684$ | $0.7087$ | $1.5116$ | $1.5185$ | $0.6905$ | $1.5082$
ICP-Sinkhorn [45] | $1.6019$ | $0.6974$ | $1.6053$ | $1.4684$ | $0.7037$ | $1.5058$ | $1.5275$ | $0.6888$ | $1.5227$
Zoomout-NN [45] | $1.5997$ | $0.6902$ | $1.6002$ | $1.4743$ | $0.7029$ | $1.4781$ | $1.5101$ | $0.6922$ | $1.4857$
Zoomout-Sinkhorn [45] | $1.6016$ | $0.6908$ | $1.5968$ | $1.4737$ | $0.6937$ | $1.4925$ | $1.5019$ | $0.6948$ | $1.4731$
SRNF (ours) | 0.0012 | 0.0008 | 0.0003 | 0.0008 | 0.0008 | 0.0006 | 0.0008 | 0.0008 | 0.0007
Evaluation of the spatial registration. We quantitatively evaluate the
accuracy of the proposed spatial registration method and compare it to the
latest functional map-based techniques such as MapTree [44] and Fast Sinkhorn
filters [45]. Similar to our method, functional maps operate on clean manifold
surfaces and do not use any form of (deep) learning. We take the surfaces of
COMA [46], CAPE [8], and DFAUST [7] datasets, which come with ground-truth
correspondences, and apply random spatial diffeomorphisms to them to simulate
unregistered surfaces. We then compute the correspondence map between each
pair of surfaces in the dataset. We measure the spatial registration error in
terms of the geodesic distance, on the parameterization domain, between the
ground-truth and the computed correspondence. Table I reports the mean,
standard deviation, and median of the registration errors computed across all
the models in each data set. As one can see, the proposed SRNF-based spatial
registration method significantly outperforms state-of-the-art algorithms [44,
45]. We refer the reader to the supplementary material, which includes visual
examples of pairs of surfaces before and after spatial registration. It also
includes additional spatial registration experiments using the quadruped
animal data set of Kulkarni _et al._ [47].
An important property of the proposed approach is that it finds a one-to-one
mapping between the source and target surfaces. This is not the case with
functional map-based methods, which can map a point on the source to multiple
points on the target. Thus, they cannot be used to compute geodesics,
interpolations, and statistical summaries.
Evaluation of the temporal registration. We use the FLAME fitting framework
[31] to generate random 4D facial surfaces with known ground-truth temporal
registrations. We first generate two random FLAME parameters, each
corresponding to a 3D surface, and then linearly interpolate them to simulate
a deforming 4D facial surface. Let $\alpha_{i},\ i\in\{1,\dots,100\}$ be the
resulting 4D surfaces. Next, we generate $100$ random temporal diffeomorphisms
$\xi_{i}$; see Fig. 24-(b) of the supplementary material. These will be used
to simulate 4D facial surfaces that have different execution rates.
(a) Before registration. (b) After registration.
Figure 8: Boxplots of errors between $20$ pairs of 4D surfaces: (a)
unregistered 4D surfaces generated using $100$ random diffeomorphic
transformations of a single 4D surface for $5$ sequences and (b)
spatiotemporally registered surfaces. The red lines represent the median error
and the boxes represent its spread. The green curve in (a) and (b) is the
distance between the perfectly registered 4D surfaces.
Figure 9: Co-registration of multiple 4D surfaces. In this example, we consider four human
body shapes performing a jumping action (first four rows) and two others
performing a punching action (rows 5 and 6). Here, we show the
spatiotemporally co-registered 4D surfaces and the 4D mean computed using the
proposed algorithm. The supplementary material includes the input 4D surfaces
before their spatiotemporal registration. It also includes the full video
sequences. The surfaces are from the DFAUST dataset.
Now, given a pair of 4D surfaces $\alpha_{i}$ and $\alpha_{j}$, and for each
pair of temporal diffeomorphisms $\xi_{k}$ and $\xi_{l}$,
$\alpha_{i}\circ\xi_{k}$ and $\alpha_{j}\circ\xi_{l}$ can be seen as a pair of
4D surfaces with different execution rates. Next, we compute, using the
proposed framework, the optimal diffeomorphism $\xi^{*}_{i,j}$ that aligns
$\alpha_{i}\circ\xi_{k}$ onto $\alpha_{j}\circ\xi_{l}$. Let
$\alpha^{*}_{i}=\alpha_{i}\circ\xi^{*}_{i,j}\circ\xi_{k}$. To quantitatively
evaluate the quality of the computed temporal registration, we compute:
* •
The distance between the perfectly registered 4D surfaces $\alpha_{i}$ and
$\alpha_{j}$; see the green curves in Fig. 8.
* •
The distance between $\alpha_{i}\circ\xi_{k}$ and $\alpha_{j}\circ\xi_{l}$
before temporal registration (Fig. 8-(a)), and the distance between
$\alpha^{*}_{i}$ and $\alpha_{j}$, _i.e.,_ the distance between the two 4D
surfaces after their temporal registration (Fig. 8-(b)). Ideally, the latter
should be significantly smaller than the former. It should also be as close as
possible to the distance between the perfectly registered 4D surfaces
$\alpha_{i}$ and $\alpha_{j}$.
Figs. 8-(a) and (b) report statistics of these errors for each pair of 4D
surfaces, but aggregated over the $100$ random diffeomorphisms. As one can
see, the median distance between the 4D surfaces after registration (Fig.
8-(b)) is significantly lower than the one before registration (Fig. 8-(a)).
The former is significantly closer to the ground-truth (shown with green
curves in Fig. 8) than the latter.
Computation time. Our approach runs entirely on a CPU. The Matlab
implementation of the spatiotemporal registration process takes less than
$31.43$ seconds on a $4.2$ GHz Intel Core i7 with $32$ GB of RAM. The
visualization, which is needed when computing geodesics, means, and directions
of variation, and when synthesizing random 4D surfaces, relies on the
inversion of the SRNF maps. It requires $6$ seconds per frame and a total of
$30$ minutes for the $300$ temporal frames used in this paper. All the
experiments were performed using a high spherical resolution ($256\times
256$).
(a) First mode of variation.
(b) Second mode of variation.
(c) Five randomly synthesized 4D faces.
Figure 10: First (a) and second (b) principal directions of variation (the
mean 4D surface is highlighted in the middle). Each row corresponds to one 4D
surface sampled between $-1.5$ and $1.5$ times the standard deviation along the
principal direction of variation. We refer the reader to the supplementary
material, which shows the input 4D faces (before their spatiotemporal
registration). It also includes more modes of variation and random samples, as
well as the complete video sequences.
### 5.2 Summary statistics
We now consider a set of unregistered 4D surfaces and compute their mean and
principal directions of variation. Fig. 9 shows the 4D mean (highlighted with
a blue box) computed from six 4D human shapes performing different types of
actions. The figure also shows the input 4D surfaces after their
spatiotemporal registration; see the video in the supplementary material for
an illustration of the input 4D surfaces before spatiotemporal registration.
Despite the large articulated motion, the large differences in the type of
actions and the significant differences in the execution rates of the 4D
surfaces, our framework is able to co-register them and generate a plausible
average 4D surface. Figs. 10-(a) and (b) show the mean and the first two
principal modes of variation computed on input 4D facial surfaces. As we can
see, the computed mean also captures the main features of the dataset. The
principal directions of variation further capture relevant variability in the
given data. The supplementary material includes the input 4D surfaces prior to
their registration. Please also refer to the videos in the supplementary
material for additional results.
### 5.3 4D surface synthesis
Fig. 10-(c) shows five 4D facial expressions randomly sampled from a Gaussian
distribution with parameters estimated from the VOCA dataset using the method
of Sec. 4. To ensure that the synthesized 4D surfaces are plausible, we only
consider those that are within $1.5$ standard deviations along each principal
direction of variation. We refer the reader to the supplementary material for
videos of all of the randomly generated 4D surfaces. The ability to synthesize
novel 4D surfaces can benefit many applications in computer vision and
graphics. It can be used to augment datasets for efficient training of deep
learning models.
### 5.4 Ablation study
We undertake an ablation study to demonstrate the importance of each component
of the proposed framework.
Importance of the SRNF representation. In this experiment (Fig. 11), we take
two challenging 3D human body models, which undergo a large articulated
motion, perform their spatial registration using the proposed SRNF approach,
and then compute their statistical mean using the $\mathbb{L}^{2}$ metric in
the original surface space (Fig. 11-(a)) and the $\mathbb{L}^{2}$ metric in
the SRNF space (Fig. 11-(b)). Fig. 11-(a) shows that the articulated parts of
the mean computed in the original surface space unnaturally shrink. This is
predictable since, under the $\mathbb{L}^{2}$ metric, geodesics correspond to
straight lines. However, in the SRNF space, the $\mathbb{L}^{2}$ metric is
equivalent to the optimal bending and stretching of the surfaces, and thus the
computed mean is more natural; see Fig. 11-(b).
(a) Without SRNF. (b) With SRNF.
Figure 11: The mean shape between the left and right surfaces, computed (a)
in the original surface space without the SRNF representation, and (b) in the
SRNF space. In (a), the mean shape is distorted due to the use of the
$\mathbb{L}^{2}$ metric in the original space of surfaces. In both cases, the
spatial registration is performed using the proposed registration method.
Next, we consider two full 4D surfaces of deforming human body shapes (Fig.
12-(a) and (b)) and show their mean 4D surface obtained: (1) with the SRNF
representation, with spatial registration, and without temporal registration
(Fig. 12-(d)), (2) with the SRNF representation, with spatial registration,
and with temporal registration (Fig. 12-(e)), (3) without SRNF representation,
with spatial registration, and without temporal registration (Fig. 12-(f)),
and (4) without SRNF representation, with spatial registration, and with
temporal registration (Fig. 12-(g)). The last two cases are equivalent to a
linear interpolation in the original surface space, after spatial
registration. In all cases, we perform the spatial registration using the SRNF
framework.
Figure 12: Ablation study: illustration of the effect of the different
components of the proposed framework on the quality of the computed mean 4D
surface, which is the middle point along the geodesic between the source and
target 4D surfaces. Te bottom row is a zoom on the frame highlighted in (a) to
(g). The 4D surfaces are from the DFAUST dataset. A video illustrating these
sequences is included in the Supplementary Material.
First, we can see that the temporally-aligned target 4D surface (Fig. 12-(c))
is very close to the source 4D surface in Fig. 12-(a). We observe that the
right hands became fully synchronized. As such, the mean 4D surface obtained
after temporal registration (Fig. 12-(e)) is fully synchronized with the
source and the aligned target, unlike the mean 4D surface in Fig. 12-(d),
which has been obtained without temporal registration. Second, in the mean 4D
surfaces obtained without the SRNF framework (Figs. 12-(f) and (g)), we can
observe that the parts that undergo large articulated motion (e.g., the arms)
unnaturally shrink. This shrinkage is stronger in Fig. 12-(f) since the mean
is obtained without temporal registration. The bottom row of Fig. 12 shows a
zoom on the time frame highlighted in Figs. 12-(a) to (g).
Finally, we quantitatively evaluate the importance of the SRNF representation
by comparing the expressive power of PCA on the original space of spatially
registered surfaces and on the space of SRNFs. We randomly divide a data set
equally into a training set and a testing set. We then fit a PCA model to the
training set (both in the original space and in the space of SRNFs), project
each model in the test set onto the PCA model, reconstruct it, and measure the
error between the original and the reconstructed models. We perform 5-fold
cross-validation. Table II reports the mean, the median, and the standard
deviation of the error over the test set and averaged over the five runs. As
one can see, PCA on the SRNF space has a significantly lower reconstruction
error than PCA in the original space of surfaces. This demonstrates that the
former is more suitable to characterize variability in the shape of 3D objects
that bend and stretch.
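The reconstruction-error protocol can be sketched as follows; the helper name and the vectorized inputs are assumptions, and the 5-fold cross-validation loop and final aggregation are omitted:

```python
import numpy as np

def pca_reconstruction_errors(train, test, k):
    """Fit PCA on `train`, reconstruct `test`, return per-sample errors.

    train, test: (n, D) matrices of vectorized representations
    (surfaces in one run, SRNFs in the other). k: retained components.
    """
    mu = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
    B = Vt[:k]                             # (k, D) orthonormal basis
    recon = mu + ((test - mu) @ B.T) @ B   # project, then reconstruct
    return np.linalg.norm(test - recon, axis=1)
```

Running this once per fold and per representation, then averaging the per-sample errors, yields numbers of the kind reported in Table II.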
TABLE II: Comparison of the expressive power of PCA on the original space of surfaces and on the space of SRNFs. The lower the values are, the better.
| PCA on surfaces | PCA on SRNFs
---|---|---
| Mean | Std | Median | Mean | Std | Median
DFAUST | $0.26$ | $0.042$ | $0.270$ | 0.12 | $0.023$ | 0.121
VOCA | $0.030$ | $0.011$ | $0.028$ | 0.009 | $0.004$ | 0.008
CAPE | $0.105$ | $0.041$ | $0.101$ | 0.053 | $0.021$ | 0.050
TABLE III: Comparison of the expressive power of PCA on the original space of curves and on the space of TSRVFs. The lower the values are, the better.
| PCA on curves | PCA on TSRVFs
---|---|---
| Mean | Std | Median | Mean | Std | Median
DFAUST | $0.926$ | $1.110$ | $0.693$ | 0.756 | $0.104$ | 0.715
VOCA | $0.676$ | $0.182$ | $0.64$ | 0.486 | $0.318$ | 0.603
Importance of the TSRVF representation for 4D surfaces. We perform a similar
ablation study, but on 4D surfaces, to compare the expressive power of PCA on
the original space of curves and on the space of TSRVFs. Table III shows that
PCA error on the TSRVF space is lower than the error in the original space.
This demonstrates that the former is more suitable to characterize variability
in 4D surfaces.
## 6 Conclusion
We have proposed a new framework for the statistical analysis of longitudinal
3D shape data (or 4D surfaces), _i.e.,_ surfaces that deform over time, e.g.,
3D human body shapes performing actions at different execution rates or 3D
human faces pronouncing sentences at different speeds. Unlike traditional
techniques, which only consider how features such as landmarks or measurements
vary over time, the proposed framework considers the deformation of the entire
surface of a 3D object. Our key contribution is in representing 4D surfaces as
trajectories in the space of SRNFs, and the use of Transported Square-Root
Vector Fields to analyze such trajectories statistically. The proposed
framework can spatiotemporally register 4D surfaces, even in the presence of
large elastic deformations and significant variations in the execution rates.
It is also able to compute geodesics and summary statistics, which in turn can
be used to synthesize new, unseen 4D surfaces randomly.
Although we have demonstrated the proposed 4D analysis framework on human body
shapes and facial surfaces, it is general and can be applied to other types of
surfaces. Our current implementation is limited to surfaces that are
homeomorphic to a sphere, but we plan to extend the framework to higher-genus
surfaces by exploring different parameterization methods, including mesh-based
representations [48]. The approach uses the numerical SRNF inversion procedure
of Laga _et al._ [4], which is sometimes not accurate near the poles of the
parameterization domain; we plan to improve its performance via the use of
charts.
The framework deals with surfaces that bend and stretch but do not change in
topology; as such, it does not apply to tree-like shapes, e.g., botanical
trees or roots. However, the concept of representing deformations as
trajectories in a shape space also applies to tree-shape spaces such as those
used in [49, 50]. The framework is also limited to clean surfaces that are
free of geometric and topological noise; as such, the proposed spatial
registration method cannot be used to register partial scans to each other, or
to register a template to partial scans. However, similar to statistical shape
models such as 3D morphable models and SMPL, the proposed 4D atlas can be used
as a prior; in conjunction with a data generation model, it can thereby be
applied to noisy or partial data, e.g., to reconstruct entire 4D surfaces. The
statistical analysis presented in this paper assumes that the population of
the 4D surfaces follows a Gaussian distribution. We plan to extend the
approach to other types of distributions, e.g., Gaussian Mixture Models, which
can represent populations that follow multimodal distributions.
The proposed framework has various applications in computer vision, graphics,
biology, and medicine. In computer vision, collecting large animations to
train deep neural networks, e.g., for 3D reconstruction or action recognition
[51, 52], is complex and time-consuming. Our framework can contribute to
solving this problem by automatically synthesizing new samples from a small
dataset. Our current implementation has only considered random synthesis,
which is very important for populating virtual environments and for data
augmentation to train deep learning networks. However, there are many
situations where we would like to control this process using a set of
parameters. For instance, when dealing with 4D facial expressions, these
parameters can be the degree of sadness, facial dimensions, etc. This type of
control can be implemented efficiently using regression in the TSRVF space.
Finally, our framework can be used to statistically analyze how anatomical
organs deform due to growth or disease progression.
Acknowledgement. We would like to thank the authors of [12, 7, 46, 6, 8] for
making their datasets publicly available, and [44] and [45] for sharing the
codes of MapTree and the Fast Sinkhorn Filters-based surface registration.
## Appendix A Spatial registration
This appendix discusses the implementation details of the spatial registration
and provides additional results on various complex 3D shapes.
### A.1 Algorithm
In this section, we provide more details on the procedure used to solve the
spatial registration problem in Eqn. (4) of the main manuscript. In our
formulation, we consider spatial registration of a surface $f_{2}$ to a
surface $f_{1}$ as the problem of finding the rotation and reparameterization
that bring the SRNF $h_{2}$ of $f_{2}$ as close as possible to the SRNF
$h_{1}$ of $f_{1}$. We measure closeness using the $\mathbb{L}^{2}$ metric in
the SRNF space, which is equivalent to the partial elastic metric in the
original space of surfaces. That is, we seek to solve the following
optimization problem:
$(O^{*},\gamma^{*})=\mathop{\rm argmin}_{O\in SO(3),\,\gamma\in\Gamma}\|h_{1}-O(h_{2}\ast\gamma)\|,$ (13)
where $\ast$ is the composition operator between an SRNF and a diffeomorphism
$\gamma\in\Gamma$. This joint optimization over $SO(3)$ and $\Gamma$ can be
solved by alternating, until convergence, between the two marginal
optimizations:
* •
Assuming a fixed parameterization, solve for the optimal rotation using
Procrustes analysis via Singular Value Decomposition (SVD).
* •
Assuming a fixed rotation, solve for the optimal reparameterization using a
gradient descent algorithm.
Below we detail each of these steps.
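The alternation itself can be sketched as follows; the two callbacks are hypothetical stand-ins for the marginal steps detailed next, and the stopping rule is an assumption:

```python
import numpy as np

def register_srnf(h1, h2, best_rotation, best_reparam,
                  n_iters=20, tol=1e-6):
    """Alternating minimization for Eqn. (13), schematically.

    best_rotation(h1, h2) -> h2 optimally rotated onto h1
    best_reparam(h1, h2)  -> h2 optimally reparameterized onto h1
    This skeleton only fixes the alternation and the stopping rule.
    """
    prev = np.inf
    for _ in range(n_iters):
        h2 = best_rotation(h1, h2)
        h2 = best_reparam(h1, h2)
        err = np.linalg.norm(h1 - h2)
        if prev - err < tol:         # no further improvement: converged
            break
        prev = err
    return h2
```

Each marginal step can only decrease the $\mathbb{L}^{2}$ cost, so the loop terminates once an iteration stops improving the objective.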
Optimization over the rotation group. For a fixed $\gamma\in\Gamma$, the
minimization over $SO(3)$ can be performed directly using Procrustes analysis.
Let $\tilde{h}_{2}$ denote
$(h_{2},\gamma)\equiv\sqrt{J[\gamma]}\>(h_{2}\circ\gamma)$ in Eqn. (13).
(Here, $J[\gamma]$ is the determinant of the Jacobian of $\gamma$.) The
optimal rotation matrix
$O^{*}=\mathop{\rm argmin}_{O\in SO(3)}\lVert h_{1}-O\tilde{h}_{2}\rVert^{2}$ (14)
can then be obtained using Algorithm 1.
Input: Two surfaces $\{f_{1},f_{2}\}\in\mathcal{F}$.
Output: Optimal rotation matrix $O^{*}$ and optimally rotated surface
$f_{2}^{*}$.
1: Compute the SRNFs $h_{1}=H(f_{1})$ and $h_{2}=H(f_{2})$.
2: Compute the $3\times 3$ matrix
$A=\int_{\Omega}h_{1}(s){\tilde{h}}_{2}(s)^{\top}ds$.
3: Compute the singular value decomposition $A=U\Sigma V^{T}$.
4: Compute the optimal rotation as $O^{*}=UV^{\top}$. (If the determinant of
$A$ is negative, the last column of $V$ changes sign.)
5: Compute the optimally rotated surface $f_{2}^{*}=O^{*}f_{2}$.
Algorithm 1 Optimal rotational alignment of two surfaces.
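Steps 2-4 of Algorithm 1 reduce, after discretization, to the classical Procrustes/SVD solution. A sketch (array layout and names assumed; testing the determinant of $O$ is equivalent to the sign test on $A$ for nonsingular $A$):

```python
import numpy as np

def optimal_rotation(h1, h2):
    """Optimal rotation of h2 onto h1 (Algorithm 1, steps 2-4).

    h1, h2: (N, 3) arrays of SRNF values sampled on the domain, one row
    per sample point (the integral in step 2 becomes a sum over rows).
    Returns O* in SO(3); the aligned SRNF is h2 @ O*.T.
    """
    A = h1.T @ h2                    # 3x3, discrete version of step 2
    U, _, Vt = np.linalg.svd(A)
    O = U @ Vt
    if np.linalg.det(O) < 0:         # reflection: flip the last column of V
        Vt[-1] *= -1
        O = U @ Vt
    return O
```

When $h_{1}$ is an exact rotation of $h_{2}$, the procedure recovers that rotation, which is the Procrustes optimality property step 4 relies on.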
Optimization over the space of spatial diffeomorphisms. We use a gradient
descent approach to solve the optimization problem over $\Gamma$. Although
this approach may converge only to a local solution, it is applicable to
arbitrary surfaces and provides plausible results in practice. In order to
specify the gradient, we focus on the current
iteration, and define the reduced cost function
$E_{\text{reg}}:\Gamma\to\mathbb{R}_{\geq 0}$:
$E_{\text{reg}}(\gamma)=\lVert h_{1}-(\tilde{h}_{2},\gamma)\rVert^{2}=\lVert
h_{1}-\phi(\gamma)\rVert^{2},$ (15)
where $\tilde{h}_{2}=(h_{2},\gamma_{0})$, $\gamma_{0}$ and $\gamma$ denote the
current and the incremental reparameterizations respectively, and
$\phi:\Gamma\to[h_{2}]$ is defined to be
$\phi(\gamma)=(\tilde{h}_{2},\gamma)$. Let $b$ be a unit vector in
$T_{\gamma_{\text{id}}}(\Gamma)$ for $\gamma_{\text{id}}(s)=s$. Then, the
directional derivative of $E_{\text{reg}}$ at $\gamma_{\text{id}}$, in the
direction of $b$, is given, up to a constant factor, by $\langle
h_{1}-\phi(\gamma_{\text{id}}),d\phi(b)\rangle$, where $d\phi$ (also written
$\phi_{\ast}$) is the differential of $\phi$ and $\langle\cdot,\cdot\rangle$ is the $\mathbb{L}^{2}$
inner product. If we have an orthonormal basis for
$T_{\gamma_{\text{id}}}(\Gamma)$, we can specify the full gradient of
$E_{\text{reg}}$ with respect to $\gamma$, which is an element of
$T_{\gamma_{\text{id}}}(\Gamma)$ given by
$\partial\gamma\backsimeq\sum_{b_{i}\in{{\mathcal{B}}}_{I}}\langle
h_{1}-\tilde{h}_{2},d\phi(b_{i})\rangle_{2}b_{i}$. This linear combination of
the orthonormal basis elements of $T_{\gamma_{\text{id}}}(\Gamma)$ provides
the incremental update of $\tilde{h}_{2}$ in the orbit $[h_{2}]$.
This leaves two remaining issues: (1) the specification of an orthonormal
basis of $T_{\gamma_{\text{id}}}(\Gamma)$, and (2) an expression for
$\phi_{\ast}$. In our implementation, we use gradients of the spherical
harmonic basis to define the orthonormal basis of
$T_{\gamma_{\text{id}}}(\Gamma)$; see Section 3.2.2 of [21]. The expression of
$\phi_{\ast}$ is also derived in Section 3.2.2 of [21]. With this, the
optimization over the space of diffeomorphisms can be performed using
Algorithm 2.
Input: Two surfaces $\\{f_{1},\ f_{2}\\}\in{\mathcal{F}}$ and small step size
$\epsilon$.
Output: Optimal registration $\gamma^{*}$ and optimally registered surface
$f_{2}^{*}$.
1: Generate basis ${\mathcal{B}}_{I}=\\{b_{i},\ i=1,\dots,N\\}$ using
spherical harmonics.
2: Compute the SRNFs $h_{1}=H(f_{1})$ and $h_{2}=H(f_{2})$.
3: Initialize $\gamma_{0}=\gamma_{id}$, $h_{2}^{0}=h_{2}$ and $j=0$.
4: For each $b_{i},\ i=1,\dots,N$, compute $d\phi(b_{i})$.
5: Compute the registration update
$\partial\gamma=\sum_{b_{i}\in{{\mathcal{B}}}_{I}}\langle
h_{1}-{h}^{j}_{2},d\phi(b_{i})\rangle_{2}b_{i}$.
6: Apply the registration update using
$\gamma_{j+1}=\gamma_{j}\circ(\gamma_{id}+\epsilon\partial\gamma)$.
7: Update $h_{2}^{j+1}=(h_{2}^{0},\gamma_{j+1})$ and $j=j+1$.
8: Iterate steps 4-7 until convergence.
9: Let $\gamma^{*}=\gamma_{j}$ and $f_{2}^{*}=f_{2}\circ\gamma^{*}$.
Algorithm 2 Optimal registration of two surfaces.
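The structure of Algorithm 2 can be illustrated with a one-dimensional analog, replacing surfaces by functions on $[0,1]$ and the spherical-harmonic basis by gradients of Fourier modes. The warping action, basis, step size, and test functions below are illustrative choices, not the paper's actual spherical setup:

```python
import numpy as np

T = 512
t = np.linspace(0.0, 1.0, T)
dt = t[1] - t[0]

def warp(h, gamma):
    """Square-root action phi(gamma) = sqrt(gamma') * (h o gamma)."""
    dgamma = np.gradient(gamma, dt)
    return np.sqrt(np.maximum(dgamma, 1e-8)) * np.interp(gamma, t, h)

def energy(h1, h2, gamma):
    """Registration cost E_reg(gamma) = ||h1 - phi(gamma)||^2."""
    return np.sum((h1 - warp(h2, gamma)) ** 2) * dt

# Target: h1 is a warped copy of h2, so a perfect registration exists.
h2 = np.sin(2.0 * np.pi * t)
h1 = warp(h2, t + 0.1 * np.sin(np.pi * t))

# Basis of T_id(Gamma): smooth functions vanishing at both endpoints.
basis = [np.sin(i * np.pi * t) / (i * np.pi) for i in range(1, 11)]
eps = 0.1

gamma = t.copy()                      # step 3: gamma_0 = gamma_id
for _ in range(100):                  # steps 4-7, fixed iteration budget
    h2j = warp(h2, gamma)             # current h_2^j
    dh2j = np.gradient(h2j, dt)
    # registration update: d_gamma = sum_i <h1 - h2^j, dphi(b_i)> b_i,
    # with dphi(b) = h2^j' b + (1/2) h2^j b' for the square-root action.
    upd = np.zeros_like(t)
    for b in basis:
        dphi_b = dh2j * b + 0.5 * h2j * np.gradient(b, dt)
        upd += np.sum((h1 - h2j) * dphi_b) * dt * b
    # step 6: gamma_{j+1} = gamma_j o (gamma_id + eps * d_gamma)
    gamma = np.interp(np.clip(t + eps * upd, 0.0, 1.0), t, gamma)

e0, e1 = energy(h1, h2, t), energy(h1, h2, gamma)
```

On this synthetic pair, the loop drives the registration energy down substantially while keeping $\gamma(0)=0$ and $\gamma(1)=1$, mirroring the role of steps 4-7 of Algorithm 2; like the original, it is a local method and can stop at a local minimum on harder inputs.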
(a) Without SRNF. (b) With SRNF.
Figure 13: The mean shape between the left and right surfaces, computed (a) in the original surface space without the SRNF representation, and (b) in the SRNF space. The computed mean shape is shown in the middle of each subfigure. Observe that in (a), elongated parts such as the arms significantly shrink, while in (b) they bend in a natural fashion.
(a) Input (before registration). (b) After registration.
Figure 14: Illustration of the spatial correspondences, on samples from the CAPE dataset [8], before and after applying the proposed spatial registration algorithm. Correspondences are color-coded, _i.e.,_ points that are in correspondence are rendered with the same color.
(a) Input (before spatial registration). (b) After spatial registration.
Figure 15: Illustration of the spatial correspondences, on some samples from
the DFAUST dataset [7], before and after applying the proposed spatial
registration algorithm. Correspondences are color-coded, _i.e.,_ points that
are in correspondence are rendered with the same color.
(a) Input (before spatial registration).
(b) After spatial registration.
Figure 16: Illustration of the spatial correspondences, on samples from the
COMA dataset [46], before and after applying the proposed spatial registration
algorithm. Correspondences are color-coded, _i.e.,_ points that are in
correspondence are rendered with the same color.
Figure 17: Examples of spatial correspondences between objects with missing
parts. Correspondences are color-coded. The top row shows one view of the 3D
models while the bottom row shows another view of the same models. In both
cases, we show the smooth rendering as well as the wireframe rendering of the
surfaces so that the reader can appreciate the quality of the correspondences.
Figure 18: Examples of spatial correspondences between 3D hands with one
finger missing. Correspondences are color-coded, _i.e.,_ points that are in
correspondence are rendered with the same color. We show the smooth rendering
as well as the wireframe rendering so that the reader can appreciate the
quality of the correspondences.
### A.2 Additional spatial registration results
In this section, we provide more results to show the importance of each
component of the proposed framework and to demonstrate the efficiency of the
proposed spatial registration.
Importance of the SRNF representation. Fig. 13 illustrates, with a practical
example, the importance of the SRNF representation for computing geodesics and
statistics between 3D surfaces that undergo complex articulated and elastic
motion. In this example, we use two human body shapes, perform their spatial
registration using the proposed SRNF framework, and then compute their mean
using the $\mathbb{L}^{2}$ metric in the original space of surfaces (Fig.
13-(a)) and using the $\mathbb{L}^{2}$ metric in the space of SRNFs (Fig.
13-(b)). In the former case, the parts that undergo a large articulated
motion (the arms in this example) shrink significantly. This is predictable,
since a geodesic under the $\mathbb{L}^{2}$ metric in the original space of
surfaces is a straight line between the two surfaces. In the SRNF space, the
arms bend naturally, since the $\mathbb{L}^{2}$ metric in this space
corresponds to the optimal bending and stretching of surfaces; see Fig. 13-(b).
(a) Correspondences using the SRNF framework.
(b) Correspondences using MAPTree [44].
Figure 19: Examples of spatial correspondences between pairs of quadruped
animals from the ACSM animal dataset [47]. We also show in (a) the wireframe
rendering of the surfaces. Correspondences are color-coded, _i.e.,_ points
that are in correspondence are rendered with the same color.
Human bodies and faces. Fig. 14 shows three dressed 3D human models, from the
CAPE dataset, before registration (Fig. 14-(a)) and after registration (Fig.
14-(b)) using the proposed SRNF-based approach. In both cases, the
correspondences are color-coded. The CAPE dataset is particularly challenging
since it contains dressed 3D human bodies. Despite the significant differences
in the initial parameterizations (see Fig. 14-(a)) and in the shape of the
bodies and clothes, the proposed approach is able to find plausible
correspondences between the surfaces (see Fig. 14-(b)). Figs. 15 and 16 show
more examples using, respectively, the DFAUST human body dataset [7] and the
COMA human faces [46]. In both cases, we show the surface before and after
registration with correspondences color-coded.
Surfaces with missing parts. Figs. 17 and 18 show examples of spatial
registrations between surfaces that have missing parts. Since our framework
uses an elastic metric, which allows bending and stretching of surfaces, it is
able to find one-to-one correspondences even under these challenging cases.
Note that, similar to functional maps, the framework requires closed manifold
surfaces and thus it is not able to handle (noisy) partial scans.
Quadruped animals. We test the proposed SRNF-based spatial registration on the
quadruped animal dataset of [47]. Figs. 19-(a), 20-(a), and 21-(a) show the
correspondence results obtained using our approach (correspondences are color-
coded). We also compare, visually, our results to the registrations obtained
using the state-of-the-art functional map-based methods such as MapTree [44];
see Figs. 20-(b), 19-(b), and 21-(b). As one can see, the mappings obtained
using MapTree are often not correct in many regions of the shapes. More
importantly, the maps obtained using MapTree are not one-to-one since a point
on the source shape can be mapped to multiple points on the target. Our
approach finds one-to-one correspondences and thus is more suitable for
computing geodesics and shape statistics.
View 1. View 2.
View 1 with tessellation. View 2 with tessellation.
(a) Correspondences using the SRNF framework.
View 1. View 2.
(b) Correspondences using MAPTree [44].
Figure 20: Examples of spatial correspondences between quadruped animals from
the ACSM animal dataset [47]. We also show in (a) the tessellation grid of the
surfaces. Correspondences are color-coded, _i.e.,_ points that are in
correspondence are rendered with the same color.
View 1. View 2.
View 1 with tessellation. View 2 with tessellation.
(a) Correspondences using the SRNF framework.
View 1. View 2.
(b) Correspondences using MAPTree [44].
Figure 21: Examples of spatial correspondences between quadruped animals from
the ACSM animal dataset [47]. We also show in (a) the tessellation grid of the
surfaces. Correspondences are color-coded, _i.e.,_ points that are in
correspondence are rendered with the same color.
### A.3 Karcher mean of surfaces
Algorithm 3 summarizes the Karcher mean algorithm used to compute the mean 4D
surface of a collection of 4D surfaces; see Section 4 of the main manuscript.
We refer the reader to the main manuscript for the definition of each symbol.
Input: A set of spatially-registered 4D surfaces
$\\{\alpha_{1},\dots,\alpha_{n}\\}\in{\mathcal{M}_{\mathcal{F}}}$ and their
corresponding SRNF trajectories
$\\{\beta_{1},\dots,\beta_{n}\\}\in\mathcal{M}_{h}$.
Output: Karcher mean $\bar{\alpha}\in\mathcal{M}_{\mathcal{F}}$.
1: Compute the TSRVF maps: $q_{i}=Q(\beta_{i}),i=1,\dots,n$.
2: Set $\bar{q}\leftarrow q_{1}$ as an initial estimate of the Karcher mean.
3: Set $\xi_{i},\ i=1,\dots,n$ to the identity $\xi_{id}(t)=t$.
4: For $i=1,\dots,n$,
* •
Temporally register $q_{i}$ to $\bar{q}$, resulting in
$q^{*}_{i}=q_{i}\odot\xi_{i}^{*}$, by solving Eqn (8) of the main manuscript.
* •
$q_{i}\leftarrow q^{*}_{i}$ and $\xi_{i}\leftarrow\xi_{i}\circ\xi_{i}^{*}$.
5: Update the Karcher mean $\bar{q}=\frac{1}{n}\sum_{i=1}^{n}q_{i}$.
6: If the change in $\left\|\bar{q}\right\|$ is large then go to Step 4.
7: Find $\bar{\alpha}$ by TSRVF inversion of $\bar{q}$ followed by SRNF
inversion.
8: Return $\bar{\alpha}$, $\xi_{i}$, $q_{i}$, $i=1,\dots,n$.
Algorithm 3 Karcher mean of 4D surfaces.
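The iterate-register-average structure of Algorithm 3 can be illustrated in a simplified setting where the trajectories are 1D signals and the registration group is circular time shifts (a stand-in for the temporal diffeomorphisms $\xi_i$); the data and the shift-based registration are illustrative simplifications, not the paper's TSRVF machinery:

```python
import numpy as np

def register_to(q, q_bar):
    """Align q to q_bar by the best circular shift (cross-correlation peak)."""
    corr = np.fft.ifft(np.fft.fft(q_bar) * np.conj(np.fft.fft(q))).real
    return np.roll(q, int(np.argmax(corr)))

rng = np.random.default_rng(1)
T = 256
t = np.linspace(0.0, 1.0, T, endpoint=False)
template = np.exp(-((t - 0.5) ** 2) / 0.01)
# n randomly shifted, noisy copies of the template
qs = [np.roll(template, int(rng.integers(0, T)))
      + 0.01 * rng.standard_normal(T) for _ in range(5)]

q_bar = qs[0].copy()                           # step 2: initial estimate
for _ in range(10):
    qs = [register_to(q, q_bar) for q in qs]   # step 4: register to mean
    q_bar_new = np.mean(qs, axis=0)            # step 5: update the mean
    if abs(np.linalg.norm(q_bar_new) - np.linalg.norm(q_bar)) < 1e-6:
        q_bar = q_bar_new
        break                                  # step 6: converged
    q_bar = q_bar_new
```

Without the registration step the peaks would average out; with it, the mean retains a sharp, template-like peak, which is the point of interleaving registration with averaging in Algorithm 3.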
## Appendix B Temporal registration
In this appendix, we provide additional results and additional quantitative
evaluations to further demonstrate the quality and performance of the temporal
registration framework proposed in this paper.
(a) Before registration. (b) After registration.
Figure 22: We consider twelve 4D surfaces from the VOCA dataset, perturb each of them using $100$ random temporal diffeomorphisms, and then measure the distance between each original 4D surface and its perturbed versions. We show the boxplots of distances (a) before the temporal registration and (b) after the temporal registration using the proposed framework for each of the twelve 4D surfaces. In each plot, the X axis corresponds to the 4D surfaces and the Y axis to the temporal alignment error. The lower the error, the better the alignment.
(a) Before registration. (b) After registration.
Figure 23: We consider five 4D surfaces generated using FLAME framework,
perturb each of them using $100$ random temporal diffeomorphisms, and then
measure the distance between each original 4D surface and its perturbed
versions. We show the boxplots of distances (a) before the temporal
registration and (b) after the temporal registration using the proposed
framework for each of the five 4D surfaces. In each plot, the X axis
corresponds to the 4D surfaces and the Y axis to the temporal alignment error.
The lower the error, the better the alignment.
### B.1 Evaluation of the temporal registration
To quantitatively evaluate the performance of the proposed temporal
registration, we first consider a 4D surface $\alpha$ and reparameterize it
with randomly generated temporal diffeomorphisms $\xi_{i},i\in\\{1,\cdots,100\\}$,
to obtain new 4D surfaces $\alpha_{i}=\alpha\circ\xi_{i}$. Next, we compute
the distances $d_{i},\ i=1,\dots,100$, between $\alpha$ and $\alpha_{i}$,
using the expression given in Eqn. (7) of the main manuscript, before temporal
registration. Fig. 22-(a) shows the box plots of the resulting distances for
$12$ sequences from the VOCA dataset before the temporal registration.
Ideally, $\forall i,\ d_{i}=0$. However, since the 4D surfaces are not
temporally registered, the distances are large in most of the cases.
In Fig. 23, we perform the same experiment but this time on five 4D surfaces
simulated using the FLAME framework; see Section 5.1 of the main manuscript
for a detailed description of how these 4D surfaces have been generated. Fig.
23 reports the temporal registration errors before and after applying the
proposed framework for each of the five 4D surfaces. As one can see, our
approach is able to properly align the perturbed 4D surfaces to their original
4D surfaces.
Next, we compute the optimal temporal registration, _i.e.,_ for every 4D
surface $\alpha_{i}$, we find the optimal diffeomorphism $\xi^{*}_{i}$ that
aligns $\alpha_{i}$ to $\alpha$. Let
$\alpha^{*}_{i}=\alpha_{i}\circ\xi^{*}_{i}$. Fig. 23-(b) shows boxplots of the
distances between $\alpha^{*}_{i}$ and $\alpha$ for the same five sequences.
Compared to the distances between the original unregistered 4D expressions
shown in Fig. 23-(a), these distances are significantly lower. This shows that
the proposed temporal alignment framework brings the 4D surfaces as close as
possible to each other.
Figure 24: The 100 random temporal diffeomorphisms that have been applied to
the original 4D surfaces to simulate the data (a) in Figs. 22 and 23 of this
supplementary material, and (b) in Fig. 8 of the main manuscript. In each
plot, the X and Y axes both correspond to the temporal domain $[0,1]$ since,
in our formulation, a temporal diffeomorphism $\xi$ is defined as a mapping of
the temporal domain $[0,1]$ to itself, _i.e.,_ $\xi:[0,1]\to[0,1]$ such that
$0<\frac{d\xi}{dt}<\infty$, $\xi(0)=0$, and $\xi(1)=1$.
Figure 25: Examples of the spatiotemporal registration of two facial
expressions. Note how the spatiotemporally registered target 4D surface in (c)
became fully synchronised with the source 4D surface (a). The full video
sequence is provided in the supplementary material.
Finally, Figs. 24-(a) and (b) show the $100$ random temporal diffeomorphisms
that have been applied to perturb the 4D surfaces used for quantitative
evaluation shown in Fig. 8 of the main manuscript, and in Figs. 22 and 23 in
this Supplementary Material.
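Such random temporal warps can be sampled in many ways; one simple construction (an illustrative choice, not the paper's specific sampler) exponentiates a random smooth function and normalizes its cumulative integral, which guarantees $\xi(0)=0$, $\xi(1)=1$, and $0<\xi'<\infty$ by construction:

```python
import numpy as np

def random_temporal_diffeomorphism(T=100, n_modes=4, scale=0.5, seed=None):
    """Sample xi: [0,1] -> [0,1] with xi(0)=0, xi(1)=1 and 0 < xi' < inf."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, T)
    # random smooth log-speed built from a few Fourier modes
    log_speed = sum(scale * rng.standard_normal() * np.sin((k + 1) * np.pi * t)
                    for k in range(n_modes))
    speed = np.exp(log_speed)             # strictly positive, so xi' > 0
    # xi is the (trapezoidal) cumulative integral of the speed ...
    xi = np.concatenate([[0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]))])
    return t, xi / xi[-1]                 # ... normalized so that xi(1) = 1

t, xi = random_temporal_diffeomorphism(seed=0)
```

The `scale` parameter controls how far the sampled warps deviate from the identity; with `scale=0` the construction returns $\xi=\gamma_{id}$ exactly.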
### B.2 Additional temporal registration results
Fig. 25 shows another example of the spatiotemporal registration of two 4D
facial surfaces. In this example, we show the source surface, the target 4D
surface before the spatiotemporal registration, and the target 4D surface
after spatiotemporal registration. We also highlight some key frames to
illustrate the quality of the temporal registration.
## Appendix C Geodesics between 4D surfaces
This appendix presents additional results on the computation of geodesics
between 4D surfaces. Fig. 26 shows two additional examples of geodesics
between 4D facial surfaces. Fig. 27, on the other hand, shows the input 4D
surfaces, prior to the spatiotemporal registration, used to generate the mean
4D surface in Fig. 9 in the main manuscript.
(a) Example 1.
(b) Example 2.
Figure 26: Two additional examples of geodesics between 4D surfaces. In each
example, we show the source 4D surface, the target 4D surface after
spatiotemporal registration, and five intermediate 4D surfaces along the
geodesic between the source and the target. The highlighted 4D surface
corresponds to the mean. A video illustrating this sequence is included in the
supplementary material.
Figure 27: Input 4D surfaces, prior to the spatiotemporal registration, used
to generate the mean 4D surface of Fig. 9 of the main manuscript. Each row
corresponds to one 4D surface. The supplementary material also includes the
full video sequences.
## Appendix D Statistics between 4D surfaces
Finally, this appendix provides additional results on the computation of summary
statistics on collections of 4D surfaces. Figs. 28 and 29 show statistics
computed on a collection of six 4D facial expressions from the VOCA dataset.
In Fig. 28, we show the input 4D surfaces before and after the spatiotemporal
registration. In both cases, we highlight the computed mean 4D surface. Fig.
29 shows the leading three modes of variation; each mode of variation is
discretized as a path from $-1.5$ to $+1.5$ standard deviations.
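Once the 4D surfaces are co-registered, modes of variation such as those in Fig. 29 follow from a standard PCA of the vectorized samples; the synthetic data and dimensions below are illustrative assumptions, but the discretization from $-1.5$ to $+1.5$ standard deviations mirrors the figure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, dim = 6, 300          # e.g. six co-registered, vectorized surfaces
X = rng.standard_normal((n_samples, dim))

mean = X.mean(axis=0)
# PCA via SVD of the centered data matrix
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
std = S / np.sqrt(n_samples - 1)         # standard deviation along each mode

# leading three modes, each discretized from -1.5 to +1.5 std. deviations
taus = np.linspace(-1.5, 1.5, 7)
modes = [np.array([mean + tau * std[i] * Vt[i] for tau in taus])
         for i in range(3)]
```

Each entry of `modes` is a path of shapes through the mean; rendering the rows of `modes[i]` reproduces one row of a modes-of-variation figure.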
(a) Input unregistered 4D surfaces. The last row is the mean obtained without
temporal registration.
(b) Spatio-temporally co-registered 4D surfaces. The last row is the mean
obtained after co-registration.
The mean is shown in the last column.
Figure 28: Input 4D faces and mean 4D surface computed before (a) and after
(b) co-registration. Each row corresponds to one 4D surface. The
supplementary material includes the full video sequences.
(a) First mode of variation.
(b) Second mode of variation.
(c) Third mode of variation.
Figure 29: Three principal modes of variation (the mean 4D surface is
highlighted in the middle). Each row corresponds to one 4D surface. The
supplementary material includes the full video sequences.
## References
* [1] P. Werner, D. Lopez-Martinez, S. Walter, A. Al-Hamadi, S. Gruss, and R. Picard, “Automatic recognition methods supporting pain assessment: A survey,” _IEEE Transa. Affective Computing_ , 2019.
* [2] B. Egger, W. A. Smith, A. Tewari, S. Wuhrer, M. Zollhoefer, T. Beeler, F. Bernard, T. Bolkart, A. Kortylewski, S. Romdhani _et al._ , “3D Morphable Face Models: Past, Present, and Future,” _ACM TOG_ , vol. 39, no. 5, pp. 1–38, 2020.
* [3] B. Allen, B. Curless, and Z. Popović, “The space of human body shapes: Reconstruction and parameterization from range scans,” _ACM Transactions on Graphics_ , vol. 22, no. 3, pp. 587–594, Jul. 2003.
* [4] H. Laga, Q. Xie, I. H. Jermyn, and A. Srivastava, “Numerical inversion of srnf maps for elastic shape analysis of genus-zero surfaces,” _IEEE PAMI_ , vol. 39, no. 12, pp. 2451–2464, 2017.
* [5] I. Jermyn, S. Kurtek, E. Klassen, and A. Srivastava, “Elastic shape matching of parameterized surfaces using square root normal fields,” _ECCV_ , vol. 5, no. 14, pp. 805–817, 2012.
* [6] D. Cudeiro, T. Bolkart, C. Laidlaw, A. Ranjan, and M. Black, “Capture, learning, and synthesis of 3D speaking styles,” _IEEE CVPR_ , pp. 10 101–10 111, 2019. [Online]. Available: http://voca.is.tue.mpg.de/
* [7] F. Bogo, J. Romero, G. Pons-Moll, and M. J. Black, “Dynamic FAUST: Registering human bodies in motion,” in _IEEE CVPR_ , 2017.
* [8] Q. Ma, J. Yang, A. Ranjan, S. Pujades, G. Pons-Moll, S. Tang, and M. J. Black, “Learning to dress 3d people in generative clothing,” in _IEEE CVPR_ , 2020, pp. 6469–6478.
* [9] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, “Active shape models-their training and application,” _Computer vision and image understanding_ , vol. 61, no. 1, pp. 38–59, 1995.
* [10] V. Blanz and T. Vetter, “A morphable model for the synthesis of 3d faces,” in _ACM Siggraph_ , 1999, pp. 187–194.
* [11] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, and J. Davis, “Scape: shape completion and animation of people,” in _ACM SIGGRAPH 2005 Papers_ , 2005, pp. 408–416.
* [12] N. Hasler, C. Stoll, M. Sunkel, B. Rosenhahn, and H.-P. Seidel, “A statistical model of human pose and body shape,” in _CGF_ , vol. 28, no. 2, 2009, pp. 337–346.
* [13] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black, “Smpl: A skinned multi-person linear model,” _ACM TOG_ , vol. 34, no. 6, pp. 1–16, 2015.
* [14] S. Zuffi, A. Kanazawa, T. Berger-Wolf, and M. J. Black, “Three-d safari: Learning to estimate zebra pose, shape, and texture from images,” in _IEEE CVPR_ , 2019, pp. 5359–5368.
* [15] G. Pavlakos, V. Choutas, N. Ghorbani, T. Bolkart, A. A. Osman, D. Tzionas, and M. J. Black, “Expressive body capture: 3d hands, face, and body from a single image,” in _IEEE CVPR_ , 2019, pp. 10 975–10 985.
* [16] N. Hesse, S. Pujades, J. Romero, M. J. Black, C. Bodensteiner, M. Arens, U. G. Hofmann, U. Tacke, M. Hadders-Algra _et al._ , “Learning an infant body model from rgb-d data for accurate full body motion analysis,” in _MICCAI_ , 2018, pp. 792–800.
* [17] S. Kurtek, A. Srivastava, E. Klassen, and H. Laga, “Landmark-guided elastic shape analysis of spherically-parameterized surfaces,” _CGF_ , vol. 32, no. 2pt4, pp. 429–438, 2013.
* [18] M. Kilian, N. J. Mitra, and H. Pottmann, “Geometric modeling in shape space,” in _ACM SIGGRAPH_ , 2007.
* [19] Q. Xie, S. Kurtek, H. Le, and A. Srivastava, “Parallel transport of deformations in shape space of elastic surfaces,” in _IEEE ICCV_ , December 2013.
* [20] Q. Xie, I. Jermyn, S. Kurtek, and A. Srivastava, “Numerical inversion of srnfs for efficient elastic shape analysis of star-shaped objects,” in _ECCV_ , 2014, pp. 485–499.
* [21] I. H. Jermyn, S. Kurtek, H. Laga, and A. Srivastava, “Elastic shape analysis of three-dimensional objects,” _Synthesis Lectures on Computer Vision_ , vol. 12, no. 1, pp. 1–185, 2017.
* [22] H. Laga, “A survey on nonrigid 3D shape analysis,” in _Academic Press Library in Signal Processing, Volume 6_. Elsevier, 2018, pp. 261–304.
* [23] H. Laga, Y. Guo, H. Tabia, R. B. Fisher, and M. Bennamoun, _3D Shape analysis: fundamentals, theory, and applications_. John Wiley & Sons, 2018.
* [24] A. M. Bronstein, M. M. Bronstein, and R. Kimmel, “Generalized multidimensional scaling: a framework for isometry-invariant partial surface matching,” _Proceedings of the National Academy of Sciences_ , vol. 103, no. 5, pp. 1168–1172, 2006.
* [25] R. Litman and A. M. Bronstein, “Learning spectral descriptors for deformable shape correspondence,” _IEEE PAMI_ , vol. 36, no. 1, pp. 171–180, 2013.
* [26] M. Ovsjanikov, M. Ben-Chen, J. Solomon, A. Butscher, and L. Guibas, “Functional maps: a flexible representation of maps between shapes,” _ACM TOG_ , vol. 31, no. 4, pp. 1–11, 2012.
* [27] M. Ovsjanikov, E. Corman, M. Bronstein, E. Rodolà, M. Ben-Chen, L. Guibas, F. Chazal, and A. Bronstein, “Computing and processing correspondences with functional maps,” in _SIGGRAPH ASIA 2016 Courses_ , 2016, pp. 1–60.
* [28] M. Wand, P. Jenke, Q. Huang, M. Bokeloh, L. Guibas, and A. Schilling, “Reconstruction of deforming geometry from time-varying point clouds,” in _SGP_ , 2007, pp. 49–58.
* [29] T. Beeler, F. Hahn, D. Bradley, B. Bickel, P. Beardsley, C. Gotsman, R. W. Sumner, and M. Gross, “High-quality passive facial performance capture using anchor frames,” in _ACM SIGGRAPH_ , 2011, pp. 1–10.
* [30] A. Tevs, A. Berner, M. Wand, I. Ihrke, M. Bokeloh, J. Kerber, and H.-P. Seidel, “Animation cartography: intrinsic reconstruction of shape and motion,” _ACM TOG_ , vol. 31, no. 2, pp. 1–15, 2012.
* [31] T. Li, T. Bolkart, M. J. Black, H. Li, and J. Romero, “Learning a model of facial shape and expression from 4d scans.” _ACM Trans. Graph._ , vol. 36, no. 6, pp. 194–1, 2017.
* [32] R. Anirudh, P. Turaga, J. Su, and A. Srivastava, “Elastic functional coding of human actions: From vector-fields to latent variables,” in _IEEE CVPR_ , 2015, pp. 3147–3155.
* [33] I. Akhter, T. Simon, S. Khan, I. Matthews, and Y. Sheikh, “Bilinear spatiotemporal basis models,” _ACM Transactions on Graphics (TOG)_ , vol. 31, no. 2, pp. 1–12, 2012.
* [34] B. B. Amor, J. Su, and A. Srivastava, “Action recognition using rate-invariant analysis of skeletal shape trajectories,” _IEEE PAMI_ , vol. 38, no. 1, pp. 1–13, 2015.
* [35] I. Dryden and K. Mardia, _Statistical Shape Analysis_. John Wiley & Son, 1998.
* [36] A. Srivastava, E. Klassen, S. Joshi, and I. Jermyn, “Shape analysis of elastic curves in euclidean spaces,” _IEEE PAMI_ , no. 99, pp. 1–1, 2011.
* [37] M. F. Beg, M. I. Miller, A. Trouvé, and L. Younes, “Computing large deformation metric mappings via geodesic flows of diffeomorphisms,” _IJCV_ , vol. 61, no. 2, pp. 139–157, 2005.
* [38] V. Debavelaere, S. Durrleman, S. Allassonnière, and A. D. N. Initiative, “Learning the Clustering of Longitudinal Shape Data Sets into a Mixture of Independent or Branching Trajectories,” _IJCV_ , pp. 1–16, 2020.
* [39] A. Bône, O. Colliot, and S. Durrleman, “Learning the Spatiotemporal Variability in Longitudinal Shape Data Sets,” _IJCV_ , vol. 128, no. 12, pp. 2873–2896, 2020.
* [40] E. Praun and H. Hoppe, “Spherical parametrization and remeshing,” vol. 22, 2003, pp. 340–349.
* [41] S. Kurtek, E. Klassen, Z. Ding, S. Jacobson, J. Jacobson, M. Avison, and A. Srivastava, “Parameterization-invariant shape comparisons of anatomical surfaces,” _Medical Imaging, IEEE Transactions on_ , vol. 30, no. 3, pp. 849–858, 2011.
* [42] J. Su, S. Kurtek, E. Klassen, A. Srivastava _et al._ , “Statistical analysis of trajectories on Riemannian manifolds: bird migration, hurricane tracking and video surveillance,” _The Annals of Applied Statistics_ , vol. 8, no. 1, pp. 530–552, 2014.
* [43] D. T. Robinson, _Functional data analysis and partial shape matching in the square root velocity framework_. The Florida State University, 2012.
* [44] J. Ren, S. Melzi, M. Ovsjanikov, and P. Wonka, “Maptree: recovering multiple solutions in the space of maps,” _ACM Transactions on Graphics (TOG)_ , vol. 39, no. 6, pp. 1–17, 2020.
* [45] G. Pai, J. Ren, S. Melzi, P. Wonka, and M. Ovsjanikov, “Fast sinkhorn filters: Using matrix scaling for non-rigid shape correspondence with functional maps,” in _CVPR_ , 2021.
* [46] A. Ranjan, T. Bolkart, S. Sanyal, and M. J. Black, “Generating 3D faces using convolutional mesh autoencoders,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 704–720.
* [47] N. Kulkarni, A. Gupta, D. F. Fouhey, and S. Tulsiani, “Articulation-aware canonical surface mapping,” in _IEEE CVPR_ , 2020, pp. 452–461.
* [48] M. Bauer, N. Charon, P. Harms, and H.-W. Hsieh, “A numerical framework for elastic surface matching, comparison, and interpolation,” _International Journal of Computer Vision_ , pp. 1–20, 2021.
* [49] G. Wang, H. Laga, N. Xie, J. Jia, and H. Tabia, “The shape space of 3d botanical tree models,” _ACM TOG_ , vol. 37, no. 1, pp. 1–18, 2018.
* [50] G. Wang, H. Laga, J. Jia, S. J. Miklavcic, and A. Srivastava, “Statistical analysis and modeling of the geometry and topology of plant roots,” _J. of Theoretical Biology_ , vol. 486, p. 110108, 2020.
* [51] X. Han, H. Laga, and M. Bennamoun, “Image-based 3d object reconstruction: State-of-the-art and trends in the deep learning era,” _IEEE PAMI_ , 2019.
* [52] H. Laga, L. V. Jospin, F. Boussaid, and M. Bennamoun, “A survey on deep learning techniques for stereo-based depth estimation,” _IEEE PAMI_ , 2020.
# Channel Estimation for RIS Assisted Wireless Communications: Part I -
Fundamentals, Solutions, and Future Opportunities
(Invited Paper)
Xiuhong Wei, Decai Shen, and Linglong Dai All authors are with the Beijing
National Research Center for Information Science and Technology (BNRist) as
well as the Department of Electronic Engineering, Tsinghua University, Beijing
100084, China (e-mails: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>; daill@tsinghua.edu.cn). This work was supported in
part by the National Key Research and Development Program of China (Grant No.
2020YFB1807201) and in part by the National Natural Science Foundation of
China (Grant No. 62031019).
###### Abstract
The reconfigurable intelligent surface (RIS) with low hardware cost and energy
consumption has been recognized as a potential technique for future 6G
communications to enhance coverage and capacity. To achieve this goal,
accurate channel state information (CSI) in RIS assisted wireless
communication system is essential for the joint beamforming at the base
station (BS) and the RIS. However, channel estimation is challenging, since a
large number of passive RIS elements cannot transmit, receive, or process
signals. In the first part of this invited paper, we provide an overview of
the fundamentals, solutions, and future opportunities of channel estimation in
the RIS assisted wireless communication system. It is noted that a new channel
estimation scheme with low pilot overhead will be provided in the second part
of this paper.
###### Index Terms:
Reconfigurable intelligent surface (RIS), wireless communication, channel
estimation.
## I Introduction
Recently, reconfigurable intelligent surface (RIS) has been proposed to
enhance the coverage and capacity of the wireless communication system with
low hardware cost and energy consumption [1]. In general, RIS consisting of
massive passive low-cost elements can be deployed to establish extra links
between the base station (BS) and users. By reconfiguring these RIS elements
according to the surrounding environment, RIS can provide high beamforming
gain [2]. The reliable beamforming requires accurate channel state information
(CSI). Hence, it is essential to develop accurate channel estimation schemes
for the RIS assisted wireless communication system [3].
Although channel estimation has been widely studied in the conventional
wireless communication system, there are two main obstacles for conventional
schemes to be directly applied in the RIS assisted system [4]. Firstly, all
RIS elements are passive, which cannot transmit, receive, or process any pilot
signals to realize channel estimation. Secondly, since an RIS usually consists
of hundreds of elements, the dimension of channels to be estimated is much
larger than that in conventional systems, which will result in a sharp
increase of the pilot overhead for channel estimation. Therefore, channel
estimation is a key challenge in the RIS assisted system, which will be
investigated in this invited paper composed of two parts.
In the first part, we provide an overview of the fundamentals, solutions, and
future opportunities of channel estimation in the RIS assisted wireless
communication system. Firstly, the fundamentals of channel estimation are
explained in Section II. Then, in Section III, we discuss and compare three
types of overhead-reduced channel estimation solutions that exploit the two-
timescale channel property, the multi-user correlation, and the channel
sparsity, respectively. After that, we point out key challenges and the
corresponding future opportunities about channel estimation in the RIS
assisted system in Section IV. Finally, some conclusions are drawn in Section
V.
Notation: Lower-case and upper-case boldface letters ${\bf{a}}$ and ${\bf{A}}$
denote a vector and a matrix, respectively; ${{{\bf{A}}^{T}}}$ and
${{{\bf{A}}^{H}}}$ denote the transpose and conjugate transpose of matrix
$\bf{A}$, respectively; ${{\left\|\bf{a}\right\|}}$ denotes the
${{l_{2}}}$-norm of vector ${\bf{a}}$; ${\rm{diag}}\left({\bf{x}}\right)$
denotes the diagonal matrix with the vector $\bf{x}$ on its diagonal.
## II Fundamentals of Channel Estimation in The RIS Assisted System
In this section, we will first illustrate the system model of an RIS assisted
wireless communication system. Then, the channel estimation problem in this
system will be presented. Finally, we will introduce the basic channel
estimation schemes.
Figure 1: An example of RIS assisted wireless communication system.
### II-A System Model
For the uplink RIS assisted wireless communication system as shown in Fig. 1,
we consider one $M$-antenna BS and one $N$-element RIS to serve $K$ single-
antenna users. Let ${\bf{h}}_{d,k}\in{\mathbb{C}}^{M\times 1}$ denote the
direct channel between the ${k}$th user and the BS,
${\bf{G}}\in{\mathbb{C}}^{M\times N}$ be the channel between the RIS and the
BS, and ${\bf{h}}_{r,k}\in{\mathbb{C}}^{N\times 1}$ be the channel between the
${k}$th user and the RIS. The received signal
${\bf{y}}\in{\mathbb{C}}^{M\times 1}$ at the BS can be expressed by
${{\bf{y}}}=\sum\limits_{k=1}^{{K}}\left({\bf{h}}_{d,k}+{\bf{G}}{{\rm{diag}}\left({\bm{\theta}}\right)}{\bf{h}}_{r,k}\right){s}_{k}+{{\bf{w}}},$
(1)
where $s_{k}$ is the symbol sent by the $k$th user,
$\bm{\theta}=\left[{\theta_{1}},{\theta_{2}},\cdots,{\theta_{N}}\right]^{T}$
is the reflecting vector at the RIS with $\theta_{n}$ representing the
reflecting coefficient for the $n$th RIS element, and
${{\bf{w}}}\in\mathbb{C}^{M\times 1}$ is the received noise at the BS. Note
that $\theta_{n}$ can be further set as ${\theta_{n}=\beta_{n}e^{j\phi_{n}}}$,
with $\beta_{n}\in[0,1]$ and $\phi_{n}\in[0,2\pi]$ representing the amplitude
and the phase for the $n$th RIS element, respectively.
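As a quick numerical check of the system model in (1), the following NumPy sketch builds random toy channels and forms the received signal at the BS. The dimensions, channel statistics, and noise level are illustrative assumptions, not part of the model itself.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 4, 8, 2  # BS antennas, RIS elements, users (toy sizes, assumptions)

# Random complex channels: direct h_{d,k}, RIS-BS channel G, user-RIS h_{r,k}
h_d = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))
G = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
h_r = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))

# Reflecting vector: unit-amplitude coefficients theta_n = exp(j * phi_n)
phi = rng.uniform(0, 2 * np.pi, N)
theta = np.exp(1j * phi)

s = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # user symbols
w = 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))  # noise

# Received signal per (1): y = sum_k (h_{d,k} + G diag(theta) h_{r,k}) s_k + w
y = sum((h_d[k] + G @ np.diag(theta) @ h_r[k]) * s[k] for k in range(K)) + w
```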
For the RIS assisted system, reliable beamforming requires accurate channel
state information (CSI) for both the direct link and the RIS related
reflecting link. We consider
a time division duplex (TDD) RIS assisted system, where the downlink channel
can be obtained based on the estimated uplink channel because of the TDD
channel reciprocity.
### II-B Channel Estimation Problem
The channel estimation problem for the direct channel ${\bf{h}}_{d,k}$ can be
solved by the conventional schemes in the conventional wireless communication
system. Unfortunately, it is difficult to estimate the RIS related channels
$\bf{G}$ and ${\bf{h}}_{r,k}$ due to passive RIS elements without signal
processing capability.
Let
${\bf{H}}_{k}\triangleq{\bf{G}}{\rm{diag}}\left({\bf{h}}_{r,k}\right)\in{\mathbb{C}}^{M\times
N}$ represent the cascaded channel between the $k$th user and the BS via the
RIS. Then the received signal $\bf{y}$ in (1) can also be rewritten as
${{\bf{y}}}=\sum\limits_{k=1}^{{K}}\left({\bf{h}}_{d,k}+{\bf{H}}_{k}{\bm{\theta}}\right){s}_{k}+{{\bf{w}}}.$
(2)
Note that many beamforming algorithms (i.e., how to design the optimal RIS
reflecting vector $\bm{\theta}$ in (2)) aim to optimize the power of the
effective reflecting link, i.e.,
${\parallel{{\bf{G}}{\rm{diag}}\left({\bm{\theta}}\right){\bf{h}}_{r,k}}\parallel}^{2}={\parallel{{\bf{G}}{\rm{diag}}\left({\bf{h}}_{r,k}\right){\bm{\theta}}}\parallel}^{2}={\parallel{{\bf{H}}_{k}{\bm{\theta}}}\parallel}^{2}$.
Therefore, most existing channel estimation schemes directly estimate the
cascaded channel ${\bf{H}}_{k}$ instead of the individual channels $\bf{G}$
and ${\bf{h}}_{r,k}$.
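The identity behind this rewriting, ${\bf{G}}{\rm{diag}}\left({\bm{\theta}}\right){\bf{h}}_{r,k}={\bf{G}}{\rm{diag}}\left({\bf{h}}_{r,k}\right){\bm{\theta}}={\bf{H}}_{k}{\bm{\theta}}$, can be verified numerically. The sketch below uses random toy channels, an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 8  # toy sizes (assumptions)
G = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
h_r = rng.standard_normal(N) + 1j * rng.standard_normal(N)
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))

# Cascaded channel H = G diag(h_r): absorbs the user-RIS channel into G
H = G @ np.diag(h_r)

# The three expressions of the reflecting link coincide
v1 = G @ np.diag(theta) @ h_r
v2 = G @ np.diag(h_r) @ theta
v3 = H @ theta
assert np.allclose(v1, v2) and np.allclose(v2, v3)
```

This is why estimating the cascaded channel $\bf{H}$ alone suffices for designing $\bm{\theta}$.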
By adopting the orthogonal pilot transmission strategy among users, the uplink
channel estimation can be performed independently for different users. Without
loss of generality, the subscript $k$ in ${\bf{h}}_{d,k}$, ${\bf{h}}_{r,k}$,
and ${\bf{H}}_{k}$ can be omitted to represent the corresponding channels
related to any users.
### II-C Basic Channel Estimation Schemes
If all RIS elements are turned off, i.e., the incident electromagnetic wave
will be perfectly absorbed by the RIS instead of reflected to the
receiver (note that “turn off” is a widely used but inaccurate expression,
since an RIS with all elements turned off still acts as a scatterer that
reflects the incident electromagnetic wave; an implementation method with a
special setting of RIS elements proposed in [5] can realize a perfect “turn
off” for the incident electromagnetic wave), the RIS assisted communication
system can be simplified as the conventional communication system without the
RIS. Hence, the direct channel ${\bf{h}}_{d}$ can be estimated by some
classical solutions such as the least square (LS) algorithm.
As mentioned above, the channel estimation for the RIS related channels
$\bf{G}$ and ${\bf{h}}_{r}$ is challenging. A straightforward solution is to
estimate the cascaded channel $\bf{H}$ in (2) based on the ON/OFF protocol
proposed in [4]. The key idea is to divide the entire cascaded channel
estimation process into $N$ stages, where each stage only estimates one column
vector of ${\bf{H}}\in{\mathbb{C}}^{M\times N}$ associated with one RIS
element. Specifically, the cascaded channel ${\bf{H}}\in{\mathbb{C}}^{M\times
N}$ can be represented by $N$ columns as
${\bf{H}}=\left[\mathbf{h}_{1},\cdots,\mathbf{h}_{n},\cdots,\mathbf{h}_{N}\right],$
(3)
where $\mathbf{h}_{n}\in\mathbb{C}^{M\times 1}$ is the cascaded channel
corresponding to the $n$th RIS element. In the $n$th stage, only the $n$th RIS
element is turned on, while the remaining $N-1$ RIS elements are turned off.
Since the direct channel ${\bf{h}}_{d}$ has been estimated in advance, its
impact can be removed from the received pilot signal at the BS. Then,
$\mathbf{h}_{n}$ can be estimated based on the LS algorithm. Following this
procedure, $\mathbf{h}_{1}$, $\cdots$, $\mathbf{h}_{N}$ can be estimated in
turn by turning on the $1$st, $\cdots$, $N$th RIS element one by one, while
the remaining $N-1$ RIS elements stay turned off.
After $N$ stages, the cascaded channel ${\bf{H}}\in{\mathbb{C}}^{M\times N}$
composed of $N$ columns can be completely estimated. However, since only one
RIS element can reflect the pilot signal to the BS based on the ON/OFF
protocol, the channel estimation accuracy may be degraded.
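A minimal sketch of the ON/OFF protocol described above, assuming the direct channel has already been removed from the received pilots and unit pilot symbols are used. The channel statistics and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 4, 8  # toy sizes (assumptions)
H = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))  # true cascaded channel
noise_std = 0.01

H_hat = np.zeros((M, N), dtype=complex)
for n in range(N):
    theta = np.zeros(N)          # all elements "off"
    theta[n] = 1.0               # only the n-th element reflects
    w = noise_std * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    # Direct channel removed, pilot s = 1, so y = H theta + w = h_n + w
    y = H @ theta + w
    H_hat[:, n] = y              # LS estimate with a unit pilot is just y

rel_err = np.linalg.norm(H_hat - H) / np.linalg.norm(H)
```

Each stage measures one column of $\bf{H}$, which is why the estimation accuracy is limited: only a single element reflects energy toward the BS at a time.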
In order to improve the channel estimation accuracy, [6] further proposed the
discrete Fourier transform (DFT) protocol based channel estimation scheme,
where all RIS elements are always turned on. In this scheme, the entire
cascaded channel estimation process is still divided into $N$ stages. However,
in each stage, the reflecting vector $\bm{\theta}$ at the RIS is specially
designed as one column vector of the DFT matrix. After $N$ stages, based on
the LS algorithm, the cascaded channel $\bf{H}$ can be directly estimated
based on all received pilot signals in $N$ stages at the BS. It is noted that
the overall reflecting matrix for $N$ stages forms a DFT matrix of size
$N\times N$, which has been proved to be the optimal choice to ensure the
channel estimation accuracy [6].
However, the required pilot overhead in [4, 6] is huge. This is mainly caused
by the fact that the number of unknown channel coefficients (e.g., $64\times
256$ with 64 antennas at the BS, 256 elements at the RIS and single antenna at
the user) in the RIS assisted communication system is much larger than that of
unknown channel coefficients (e.g., $64\times 1$ with 64 antennas at the BS
and a single antenna at the user) in the conventional communication system
without the RIS. The huge pilot overhead will significantly decrease the
effective capacity improvement. Thus, it is essential to develop overhead-
reduced channel estimation schemes for the RIS assisted system. In Section
III, we will introduce three typical types of overhead-reduced channel
estimation solutions.
## III Overhead-Reduced Channel Estimation Solutions
In this section, we will introduce three typical types of channel estimation
solutions to reduce the pilot overhead, which exploit the two-timescale
channel property, the multi-user correlation, and the channel sparsity,
respectively.
### III-A Two-Timescale Based Channel Estimation
The first typical solution to reduce the pilot overhead for channel estimation
is to exploit the two-timescale channel property in the RIS assisted
communication system [7, 8, 9].
Specifically, the two-timescale channel property can be explained as follows.
On the one hand, since the BS and the RIS are usually placed in fixed
positions, the channel $\bf{G}$ between the RIS and the BS usually remains
unchanged for a long period of time, which shows the large timescale property.
On the other hand, due to the mobility of the user, the channel ${\bf{h}}_{r}$
between the user and the RIS and the direct channel ${\bf{h}}_{d}$ between the
user and the BS vary on a much smaller timescale than the quasi-static
channel $\bf{G}$; these two channels thus show the small timescale property.
As shown in Fig. 2, based on this two-timescale channel property, [7] proposed
a two-timescale channel estimation framework, where the two different pilot
transmission strategies are respectively designed for estimating the large
timescale channel $\bf{G}$ and the small timescale channels ${\bf{h}}_{d}$ and
${\bf{h}}_{r}$. Firstly, the high-dimensional channel $\bf{G}$ is estimated
once in a large timescale by using the dual-link pilot transmission strategy
proposed in [7]. Although the pilot overhead required for estimating $\bf{G}$
is large due to its high dimension, such overhead is negligible from a long-
time perspective. Then, based on the widely used uplink pilot transmission
strategy, the low-dimensional channels ${\bf{h}}_{d}$ and ${\bf{h}}_{r}$ can
be estimated before data transmission in a small timescale. Although these
channels have to be estimated more frequently, the required pilot overhead is
small due to their low dimensions. As a result, the average pilot overhead can
be significantly reduced by exploiting the two-timescale channel property.
Figure 2: The two-timescale channel estimation framework [7].
The main difficulty of this scheme is how to estimate $\bf{G}$, since all RIS
elements are passive without signal processing capability. To achieve this
goal, [7] proposed a dual-link pilot transmission strategy as mentioned
before. The key idea is that, the BS firstly transmits pilot signals to the
RIS via the downlink channel ${\bf{G}}^{T}$, and then the RIS reflects pilot
signals back to the BS via the uplink channel $\bf{G}$. There are $(N+1)$ sub-
frames for the dual-link pilot transmission, where each sub-frame consists of
$M$ time slots. In the $m_{1}$th time slot ($m_{1}=1,2,\cdots,M$) of the $t$th
sub-frame ($t=1,2,\cdots,N+1$), the $m_{1}$th antenna of the BS transmits the
pilot signal $s_{m_{1},t}$ to the RIS and other $(M-1)$ antennas of the BS
receive the pilot signal reflected by the RIS. The received pilot signal
$y_{m_{1},m_{2},t}$ at the $m_{2}$th antennas of the BS can be represented as
$(m_{2}=1,2,\cdots,M,m_{1}\neq m_{2})$
$\displaystyle y_{m_{1},m_{2},t}=$
$\displaystyle\left[{\bf{g}}_{m_{2}}^{T}{\rm{diag}}{({\bm{\theta}}_{t})}{\bf{g}}_{m_{1}}+z_{m_{1},m_{2}}\right]s_{m_{1},t}+w_{m_{1},m_{2},t}$
(4) $\displaystyle=$
$\displaystyle\left[{\bf{g}}_{m_{2}}^{T}{\rm{diag}}{({\bf{g}}_{m_{1}})}{\bm{\theta}}_{t}+z_{m_{1},m_{2}}\right]s_{m_{1},t}+w_{m_{1},m_{2},t},$
where ${\bf{g}}_{m_{2}}^{T}\in{\mathbb{C}}^{1\times N}$ is the $m_{2}$th row
vector of the uplink channel $\bf{G}$,
${\bf{g}}_{m_{1}}\in{\mathbb{C}}^{N\times 1}$ is the $m_{1}$th column vector of
the downlink channel ${\bf{G}}^{T}$, ${\bm{\theta}}_{t}$ is the reflecting
vector at the RIS in the $t$th sub-frame, $z_{m_{1},m_{2}}$ is the self-
interference after mitigation from the $m_{1}$th antenna to the $m_{2}$th
antenna of the BS, and $w_{m_{1},m_{2},t}$ is the received noise at the
$m_{2}$th antenna of the BS. After $N+1$ sub-frames, all received pilot
signals $\{y_{m_{1},m_{2},t}|1\leq m_{1},m_{2}\leq M,m_{1}\neq m_{2},1\leq
t\leq N+1\}$ can be obtained, which involve $MN$ unknown variables, i.e.,
$MN$ elements of $\bf{G}$. Then, based on all received pilots, each element of
$\bf{G}$ can be alternately estimated in an iterative manner by utilizing the
coordinate descent algorithm [7].
Some alternative schemes for estimating the channel $\bf{G}$ between the RIS
and the BS were proposed [8, 9]. In [8], two users ($U_{1}$ and $U_{2}$) are
deployed near the RIS to assist its channel estimation. The uplink cascaded
channels ${\bf{H}}_{1}$ for user $U_{1}$, ${\bf{H}}_{2}$ for user $U_{2}$, and
the $U_{1}$-RIS-$U_{2}$ cascaded channel between the user $U_{1}$ and the user
$U_{2}$ via the RIS are estimated based on the pilot symbols transmitted by
the two users, respectively. After that, the entries for BS-RIS channel
$\bf{G}$ can be calculated based on three estimated cascaded channels
mentioned above. By utilizing the long-term channel averaging prior
information [9], the large timescale channel $\bf{G}$ can also be estimated
based on the channel matrix calibration.
After acquiring $\bf{G}$, the low-dimensional channels ${\bf{h}}_{d}$ and
${\bf{h}}_{r}$ can be estimated based on the conventional uplink pilot
transmission strategy. The user transmits the pilot signals to the BS via both
the direct channel ${\bf{h}}_{d}$ and the effective reflecting channel
${\bf{G}}{\bm{\Phi}}{\bf{h}}_{r}$. Based on the received uplink pilot signals
with the known $\bf{G}$ and ${\bm{\Phi}}$, ${\bf{h}}_{d}$ and ${\bf{h}}_{r}$
can be directly estimated at the BS by the conventional channel estimation
algorithms such as the LS algorithm.
Based on the two-timescale channel property, the large timescale channel
$\bf{G}$ and the small timescale channels ${\bf{h}}_{d}$ and ${\bf{h}}_{r}$
can be respectively estimated in different timescales, which can indeed
significantly reduce the average pilot overhead. However, the channel
estimation for $\bf{G}$ is still challenging. In [7], the BS should work in
the full-duplex mode, where different antennas are required to transmit and
receive pilots simultaneously to estimate $\bf{G}$. In [8], the complexity for
user scheduling and the overhead for the $U_{1}$-RIS-$U_{2}$ cascaded channel
feedback are not negligible.
### III-B Multi-User Correlation Based Channel Estimation
Another solution to reduce the pilot overhead is to directly estimate the
corresponding cascaded channels by utilizing the multi-user correlation. Since
all users communicate with the BS via the same RIS, the cascaded channels
$\{{\bf{H}}_{k}\}_{k=1}^{K}$ associated with different users have some
correlations. Thus, this multi-user correlation can be exploited to reduce the
pilot overhead required by the cascaded channel estimation [10].
Specifically, the multi-user correlation can be explained as follows. For
convenience, we take the $n$th column $\mathbf{h}_{k,n}\in\mathbb{C}^{M\times
1}$ of the cascaded channel
${\bf{H}}_{k}=\left[\mathbf{h}_{k,1},\mathbf{h}_{k,2}\cdots,\mathbf{h}_{k,N}\right]\in\mathbb{C}^{M\times
N}$ as an example, which can be expressed as
$\mathbf{h}_{k,n}=t_{k,n}\mathbf{g}_{n},$ (5)
where $t_{k,n}$ denotes the channel between the $k$th user and the $n$th RIS
element, which is also the $n$th element of ${\bf{h}}_{r,k}$, and
${\bf{g}}_{n}\in\mathbb{C}^{M\times 1}$ denotes the user-independent channel
between the $n$th RIS element and the BS, which is also the $n$th column
vector of ${\bf{G}}$. Since different users enjoy the same channel ${\bf{G}}$
from the RIS to the BS, $\mathbf{h}_{k,n}$ in (5) can be rewritten as
$\mathbf{h}_{k,n}=\lambda_{k,n}\mathbf{h}_{1,n},$ (6)
where
$\lambda_{k,n}=\frac{t_{k,n}}{t_{1,n}}.$ (7)
The key idea of the multi-user correlation based cascaded channel estimation
scheme can be expressed as follows. Firstly, the cascaded channel
${\bf{H}}_{1}=\left[\mathbf{h}_{1,1},\cdots,\mathbf{h}_{1,n},\cdots,\mathbf{h}_{1,N}\right]$
for the first user (also called the typical user) can be estimated by
utilizing the DFT protocol based channel estimation scheme discussed in
Subsection II-C. Then, for the $k$th user with $k\geq 2$, the column vector
$\mathbf{h}_{k,n}$ ($n=1,2,\cdots,N$) of ${\bf{H}}_{k}$ can be obtained by
only estimating the unknown scalar $\lambda_{k,n}$ in (7) with only one
unknown coefficient instead of $\mathbf{h}_{k,n}$ with $M$ unknown
coefficients. Hence, there are only $N$ scalars to be estimated in total for
obtaining the cascaded channel ${\bf{H}}_{k}$ of size $M\times N$.
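The scalar relation (6)-(7) can be sketched as follows in the noiseless case assumed in [10]: given the typical user's channel ${\bf{H}}_{1}$, each remaining user's cascaded channel is rebuilt from only $N$ least-squares scalars. The channel statistics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, K = 4, 8, 3  # toy sizes (assumptions)
G = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
T = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))  # t_{k,n}

# Cascaded columns h_{k,n} = t_{k,n} g_n share the direction g_n across users
H = np.stack([G * T[k] for k in range(K)])   # shape (K, M, N)

H1 = H[0]                                    # typical user's channel, assumed estimated
for k in range(1, K):
    # Scalar LS per column: lambda_{k,n} = h_{1,n}^H h_{k,n} / ||h_{1,n}||^2
    lam = np.einsum('mn,mn->n', H1.conj(), H[k]) / np.sum(np.abs(H1) ** 2, axis=0)
    H_k_hat = H1 * lam                       # N scalars rebuild all M*N entries
    assert np.allclose(H_k_hat, H[k])        # exact in the noiseless case
```

With receiver noise, the scalar estimates $\lambda_{k,n}$ become noisy, which is exactly the accuracy limitation noted above.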
By exploiting the multi-user correlation, the pilot overhead can be
significantly decreased, since the number of channel coefficients to be
estimated becomes much smaller. However, this scheme proposed in [10] has
assumed that there is no receiving noise at the BS. In the typical scenario of
low SNR for channel estimation, the estimation accuracy will be degraded.
### III-C Sparsity Based Channel Estimation
The overhead-reduced channel estimation solutions in the previous two
subsections are mainly realized in the spatial domain. By contrast, in this
subsection, we will introduce some overhead-reduced channel estimation
solutions by exploiting the sparsity of channels in the angular domain [11,
12].
In the conventional wireless communication system, since there are limited
propagation paths, the channel ${\bf{h}}_{d}$ is sparse in the angular domain.
Thus, the channel estimation problem for ${\bf{h}}_{d}$ can be formulated as a
sparse signal recovery problem, which can be solved by compressive sensing
(CS) algorithms with reduced pilot overhead. Similarly, the cascaded channel
${\bf{H}}\in\mathbb{C}^{M\times N}$ in RIS assisted systems also shows the
sparsity when transformed into the angular domain. Specifically, by using the
virtual angular-domain representation, the cascaded channel ${\bf{H}}$ can be
decomposed as
${{\bf{H}}}={\bf{U}}_{M}{\tilde{\bf{H}}}{\bf{U}}_{N}^{T},$ (8)
where ${\tilde{\bf{H}}}$ denotes the $M\times N$ angular cascaded channel,
${\bf{U}}_{M}$ and ${\bf{U}}_{N}$ are the ${M\times M}$ and ${N\times N}$
dictionary unitary matrices at the BS and the RIS, respectively. The number of
non-zero elements in ${\tilde{\bf{H}}}$ is determined by the product of the
number of paths between the RIS and the BS and that between the user and the
RIS. Since there are only a limited number of scatterers around the BS and the RIS,
especially in high-frequency communications, ${\tilde{\bf{H}}}$ is usually
sparse in nature [11], as shown in Fig. 3.
Figure 3: The sparsity of the angular cascaded channel [11].
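The angular decomposition (8) can be illustrated with unitary DFT dictionaries: the sketch below plants a few non-zero entries in the angular channel, maps it to the spatial domain per (8), and recovers it exactly by inverting the unitary dictionaries. The dictionary choice and sparsity level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, S = 16, 32, 4                      # S non-zero angular entries (assumptions)

# Normalized DFT matrices as the angular-domain dictionaries (unitary)
U_M = np.fft.fft(np.eye(M)) / np.sqrt(M)
U_N = np.fft.fft(np.eye(N)) / np.sqrt(N)

# Sparse angular cascaded channel: only S of the M*N entries are non-zero
H_ang = np.zeros((M, N), dtype=complex)
idx = rng.choice(M * N, S, replace=False)
H_ang.flat[idx] = rng.standard_normal(S) + 1j * rng.standard_normal(S)

H = U_M @ H_ang @ U_N.T                  # spatial-domain cascaded channel, per (8)

# Inverting the (unitary) dictionaries recovers the sparse angular channel
H_ang_rec = U_M.conj().T @ H @ U_N.conj()
assert np.allclose(H_ang_rec, H_ang)
```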
Based on the sparsity of the angular cascaded channel, the cascaded channel
estimation problem can be also formulated as a sparse signal recovery problem
[11]. Then, some classical CS algorithms, such as orthogonal matching pursuit
(OMP), can be directly used to estimate the angular cascaded channel with
reduced pilot overhead. However, these conventional CS algorithms cannot
achieve the satisfying estimation accuracy, especially in low SNR ranges. In
order to improve the estimation accuracy, a joint sparse matrix recovery based
channel estimation scheme was proposed in [12]. In this scheme, all angular
channels associated with different users can be projected to the same subspace
by considering the fact that different users enjoy the same channel $\bf{G}$
from the RIS to the BS. However, for these sparsity based channel estimation
schemes [11, 12], the required pilot overhead is still high, since the
sparsity of the angular cascaded channel becomes less significant compared
with the angular channel in conventional communications.
Moreover, [13] proposed to divide the entire RIS into several sub-surfaces,
where all RIS elements on the same sub-surface are considered to have the same
channel coefficients. Therefore, the number of the channel coefficients to be
estimated can be significantly decreased. By combining the typical overhead-
reduced channel estimation schemes mentioned above with this idea of dividing
sub-surface, the pilot overhead can be further reduced.
For the sake of simplicity, the above discussions on channel estimation
schemes take the narrowband case as an example. A similar idea can be extended
to the wideband orthogonal frequency division multiplexing (OFDM) channel
estimation. Specifically, the channel on each sub-carrier can be estimated
separately as in a narrowband system, such as in [13]. It is noted that the
reflecting vector $\bm{\theta}$ at the RIS is the same for all sub-carriers.
Besides, by considering the common sparsity of angular domain channels among
different sub-carriers, [14] proposed a joint overhead-reduced channel
estimation scheme for all sub-carriers, where a denoising convolution neural
network (DnCNN) is used to improve the estimation accuracy.
## IV Challenges and Future Opportunities for Channel Estimation
In this section, we will point out key challenges for channel estimation in
the RIS assisted wireless communication system, based on which the
corresponding future research opportunities will be discussed.
### IV-A Ultra-Wideband Channel Estimation
In order to achieve ultra-high-speed wireless transmission, the RIS assisted
ultra-wideband wireless communication will be an important trend. However, the
beam squint effect caused by the ultra-wideband communication brings a serious
challenge for the channel estimation, since the single physical angle will be
transformed to multiple spatial angles. [15] proposed a beam squint pattern
matching based wideband channel estimation in the conventional wireless
communication system. The similar idea may be applied in the wideband channel
estimation for the RIS assisted wireless communication system.
### IV-B Spatial Non-Stationarity
To further exploit the potential beamforming gains and spatial resolutions of
the RIS, the number of RIS elements may be hundreds of times larger than that
of most scenarios discussed so far, which will result in a large size for the
RIS array. With the significant increase of the array size, the RIS related
channels will present a new characteristic called spatial non-stationarity
[16]. This channel characteristic means that the incident
direction and power for the electromagnetic wave at the RIS vary with
different RIS elements. Under this condition, all existing channel estimation
schemes based on the spatial stationarity may not work. The estimation scheme
based on the concept of visibility regions [16] may be used to address this
challenge.
### IV-C RIS Assisted Cell-Free Network
The cell-free network has been recently proposed to address the inter-cell
interference of the conventional cellular network. In order to further improve
the network capacity with low power consumption, the energy-efficient RISs can
be investigated in the cell-free network. However, with the increase of the
number of RISs, the number of channels to be estimated increases accordingly in
the RIS assisted cell-free network. One possible solution is to exploit the
multi-user correlation mentioned in [10] to reduce the number of channels to
be estimated.
### IV-D High Pilot Overhead
Since the RIS consists of a large number of elements, the RIS related channels
have many coefficients to be estimated. Although some overhead-reduced channel
estimation solutions have been recently proposed, the required pilot overhead
is still high due to the passivity of RIS elements. By exploiting more channel
characteristics in the RIS, the pilot overhead can be further reduced, which
will be discussed in the second part of this two-part invited paper.
## V Conclusions
In the first part of this invited paper, we have investigated the channel
estimation in the RIS assisted wireless communication system. Due to a large
number of passive RIS elements without signal processing capability, the
channel estimation in the RIS assisted system is more challenging than that in
the conventional system. We first explained the fundamentals of channel
estimation. Then, three typical types of overhead-reduced channel estimation
solutions were introduced. Finally, we pointed out several challenges and the
corresponding future research opportunities for channel estimation. Note that
a feasible solution to one of these key challenges, i.e., the high pilot
overhead, will be proposed in the second part of this invited paper.
## References
* [1] E. Basar, M. Di Renzo, J. De Rosny, M. Debbah, M. Alouini, and R. Zhang, “Wireless communications through reconfigurable intelligent surfaces,” _IEEE Access_ , vol. 7, pp. 116 753–116 773, Aug. 2019.
* [2] Q. Wu and R. Zhang, “Intelligent reflecting surface enhanced wireless network via joint active and passive beamforming,” _IEEE Trans. Wireless Commun._ , vol. 18, no. 11, pp. 5394–5409, Nov. 2019.
* [3] Q. Wu, S. Zhang, B. Zheng, C. You, and R. Zhang, “Intelligent reflecting surface aided wireless communications: A tutorial,” _arXiv preprint arXiv:2007.02759v2_ , Jul. 2020.
* [4] D. Mishra and H. Johansson, “Channel estimation and low-complexity beamforming design for passive intelligent surface assisted MISO wireless energy transfer,” in _Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (IEEE ICASSP’19)_ , Brighton, UK, May 2019, pp. 4659–4663.
* [5] M. F. Imani, D. R. Smith, and P. Hougne, “Perfect absorption in a metasurface-programmable complex scattering enclosure,” _arXiv preprint arXiv:2003.01766v2_ , Jun. 2020.
* [6] Q. Nadeem, H. Alwazani, A. Kammoun, A. Chaaban, M. Debbah, and M. S. Alouini, “Intelligent reflecting surface-assisted multi-user MISO communication: Channel estimation and beamforming design,” _IEEE Open J. Commun. Soc._ , vol. 1, pp. 661–680, May 2020.
* [7] C. Hu and L. Dai, “Two-timescale channel estimation for reconfigurable intelligent surface aided wireless communications,” _arXiv preprint arXiv:1912.07990_ , Dec. 2019.
* [8] X. Guan, Q. Wu, and R. Zhang, “Anchor-assisted intelligent reflecting surface channel estimation for multiuser communications,” _arXiv preprint arXiv:2008.00622_ , Aug. 2020.
* [9] H. Liu, X. Yuan, and Y. J. A. Zhang, “Matrix-calibration-based cascaded channel estimation for reconfigurable intelligent surface assisted multiuser MIMO,” _IEEE J. Sel. Areas Commun._ , vol. 38, no. 11, pp. 2621–2636, Jul. 2020.
* [10] Z. Wang, L. Liu, and S. Cui, “Channel estimation for intelligent reflecting surface assisted multiuser communications,” in _Proc. IEEE Wireless Commun. and Networking Conf. (IEEE WCNC’20)_ , Seoul, Korea, May 2020, pp. 1–6.
* [11] P. Wang, J. Fang, H. Duan, and H. Li, “Compressed channel estimation for intelligent reflecting surface-assisted millimeter wave systems,” _IEEE Signal Process. Lett._ , vol. 27, pp. 905–909, May 2020.
* [12] J. Chen, Y.-C. Liang, H. V. Cheng, and W. Yu, “Channel estimation for reconfigurable intelligent surface aided multi-user MIMO systems,” _arXiv preprint arXiv:1912.03619_ , Dec. 2019.
* [13] B. Zheng and R. Zhang, “Intelligent reflecting surface-enhanced OFDM: Channel estimation and reflection optimization,” _IEEE Wireless Commun. Lett._ , vol. 9, no. 4, pp. 518–522, Apr. 2020.
* [14] S. Liu, Z. Gao, J. Zhang, M. D. Renzo, and M.-S. Alouini, “Deep denoising neural network assisted compressive channel estimation for mmWave intelligent reflecting surfaces,” _IEEE Trans. Veh. Technol._ , vol. 69, no. 8, pp. 9223–9228, Jun. 2020.
* [15] X. Gao, L. Dai, S. Zhou, A. M. Sayeed, and L. Hanzo, “Wideband beamspace channel estimation for millimeter-wave MIMO systems relying on lens antenna arrays,” _IEEE Trans. Signal Process._ , vol. 67, no. 18, pp. 4809–4824, Sep. 2019.
* [16] A. Amiri, M. Angjelichinoski, E. de Carvalho, and J. R. W. Heath, “Extremely large aperture massive MIMO: Low complexity receiver architectures,” in _Proc. IEEE Globecom Workshops (IEEE GC Wkshps’18)_ , Abu Dhabi, UAE, Dec. 2018, pp. 1–6.
# Channel Estimation for RIS Assisted Wireless Communications: Part II - An
Improved Solution Based on Double-Structured Sparsity
(Invited Paper)
Xiuhong Wei, Decai Shen, and Linglong Dai All authors are with the Beijing
National Research Center for Information Science and Technology (BNRist) as
well as the Department of Electronic Engineering, Tsinghua University, Beijing
100084, China (e-mails: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, daill@tsinghua.edu.cn). This work was supported in
part by the National Key Research and Development Program of China (Grant No.
2020YFB1807201) and in part by the National Natural Science Foundation of
China (Grant No. 62031019).
###### Abstract
Reconfigurable intelligent surface (RIS) can manipulate the wireless
communication environment by controlling the coefficients of RIS elements.
However, due to the large number of passive RIS elements without signal
processing capability, channel estimation in RIS assisted wireless
communication system requires high pilot overhead. In the second part of this
invited paper, we propose to exploit the double-structured sparsity of the
angular cascaded channels among users to reduce the pilot overhead.
Specifically, we first reveal the double-structured sparsity, i.e., different
angular cascaded channels for different users enjoy the completely common non-
zero rows and the partially common non-zero columns. By exploiting this
double-structured sparsity, we further propose the double-structured
orthogonal matching pursuit (DS-OMP) algorithm, where the completely common
non-zero rows and the partially common non-zero columns are jointly estimated
for all users. Simulation results show that the pilot overhead required by the
proposed scheme is lower than existing schemes.
###### Index Terms:
Reconfigurable intelligent surface (RIS), channel estimation, compressive
sensing.
## I Introduction
In the first part of this two-part invited paper, we have introduced the
fundamentals, solutions, and future opportunities of channel estimation in the
reconfigurable intelligent surface (RIS) assisted wireless communication
system. One of the most important challenges of channel estimation is that,
the pilot overhead is high, since the RIS consists of a large number of
passive elements without signal processing capability [1, 2]. By exploiting
the sparsity of the angular cascaded channel, i.e., the cascade of the channel
from the user to the RIS and the channel from the RIS to the base station
(BS), the channel estimation problem can be formulated as a sparse signal
recovery problem, which can be solved by compressive sensing (CS) algorithms
with reduced pilot overhead [3, 4]. However, the pilot overhead of most
existing solutions is still high.
In the second part of this paper, in order to further reduce the pilot
overhead, we propose a double-structured orthogonal matching pursuit (DS-OMP)
based cascaded channel estimation scheme by leveraging the double-structured
sparsity of the angular cascaded channels111Simulation codes are provided to
reproduce the results presented in this paper:
http://oa.ee.tsinghua.edu.cn/dailinglong/publications/publications.html. .
Specifically, we reveal that the angular cascaded channels associated with
different users enjoy the completely common non-zero rows and the partially
common non-zero columns, which is called “double-structured sparsity” in
this paper. Then, by exploiting this double-structured sparsity, we propose
the DS-OMP algorithm based on the classical OMP algorithm to realize channel
estimation. In the proposed DS-OMP algorithm, the completely common row
support and the partially common column support for different users are
jointly estimated, and the user-specific column supports for different users
are individually estimated. After detecting all supports mentioned above, the
least square (LS) algorithm can be utilized to obtain the estimated angular
cascaded channels. Since the double-structured sparsity is exploited, the
proposed DS-OMP based channel estimation scheme is able to further reduce the
pilot overhead.
The rest of the paper is organized as follows. In Section II, we introduce the
channel model and formulate the cascaded channel estimation problem. In
Section III, we first reveal the double-structured sparsity of the angular
cascaded channels, and then propose the DS-OMP based cascaded channel
estimation scheme. Simulation results and conclusions are provided in Section
IV and Section V, respectively.
Notation: Lower-case and upper-case boldface letters ${\bf{a}}$ and ${\bf{A}}$
denote a vector and a matrix, respectively; ${{{\bf{a}}^{*}}}$ denotes the
conjugate of vector $\bf{a}$; ${{{\bf{A}}^{T}}}$ and ${{{\bf{A}}^{H}}}$ denote
the transpose and conjugate transpose of matrix $\bf{A}$, respectively;
${{\left\|\bf{A}\right\|_{F}}}$ denotes the Frobenius norm of matrix
${\bf{A}}$; ${\rm{diag}}\left({\bf{x}}\right)$ denotes the diagonal matrix
with the vector $\bf{x}$ on its diagonal; ${\bf{a}}{\otimes}{\bf{b}}$ denotes
the Kronecker product of ${\bf{a}}$ and ${\bf{b}}$. Finally, $\cal
CN\left(\mu,\sigma\right)$ denotes the probability density function of the
circularly symmetric complex Gaussian distribution with mean $\mu$ and
variance $\sigma^{2}$.
## II System Model
In this section, we will first introduce the cascaded channel in the RIS
assisted communication system. Then, the cascaded channel estimation problem
will be formulated.
### II-A Cascaded Channel
We consider that the BS and the RIS respectively employ an $M$-antenna and
an $N$-element uniform planar array (UPA) to simultaneously serve ${K}$
single-antenna users. Let $\bf{G}$ of size ${M\times N}$ denote the channel
from the RIS to the BS, and ${\bf{h}}_{r,k}$ of size ${N\times 1}$ denote the
channel from the ${k}$th user to the RIS $\left({k=1,2,\cdots,K}\right)$. The
widely used Saleh-Valenzuela channel model is adopted to represent ${\bf{G}}$
as [5]
${\bf{G}}=\sqrt{\frac{MN}{L_{G}}}\sum\limits_{l_{1}=1}^{L_{G}}{\alpha^{G}_{l_{1}}}{\bf{b}}\left(\vartheta^{G_{r}}_{l_{1}},\psi^{G_{r}}_{l_{1}}\right){{\bf{a}}\left(\vartheta^{G_{t}}_{l_{1}},\psi^{G_{t}}_{l_{1}}\right)}^{T},$
(1)
where $L_{G}$ represents the number of paths between the RIS and the BS,
${\alpha}^{G}_{l_{1}}$, $\vartheta^{G_{r}}_{l_{1}}$
(${\psi}^{G_{r}}_{l_{1}}$), and ${\vartheta}^{G_{t}}_{l_{1}}$
(${\psi}^{G_{t}}_{l_{1}}$) represent the complex gain consisting of path loss,
the azimuth (elevation) angle at the BS, and the azimuth (elevation) angle at
the RIS for the $l_{1}$th path. Similarly, the channel ${\bf{h}}_{r,k}$ can be
represented by
${\bf{h}}_{r,k}=\sqrt{\frac{N}{L_{r,k}}}\sum\limits_{l_{2}=1}^{L_{r,k}}{\alpha^{r,k}_{l_{2}}}{{\bf{a}}\left(\vartheta^{r,k}_{l_{2}},\psi^{r,k}_{l_{2}}\right)},$
(2)
where $L_{r,k}$ represents the number of paths between the $k$th user and the
RIS, and ${\alpha}^{r,k}_{l_{2}}$ and $\vartheta^{r,k}_{l_{2}}$
(${\psi}^{r,k}_{l_{2}}$) represent the complex gain (including path loss)
and the azimuth (elevation) angle at the RIS for the $l_{2}$th path, respectively.
${\bf{b}}\left(\vartheta,\psi\right)\in{\mathbb{C}^{M\times 1}}$ and
${\bf{a}}\left(\vartheta,\psi\right)\in{\mathbb{C}^{N\times 1}}$ represent the
normalized array steering vector associated to the BS and the RIS,
respectively. For a typical $N_{1}\times N_{2}$ ($N=N_{1}\times N_{2}$) UPA,
${\bf{a}}\left(\vartheta,\psi\right)$ can be represented by [5]
${\bf{a}}\left(\vartheta,\psi\right)=\frac{1}{{\sqrt{N}}}{\left[{{e^{-j2{\pi}d{\rm{sin}}\left(\vartheta\right){\rm{cos}}\left(\psi\right){\bf{n}}_{1}/{\lambda}}}}\right]}{\otimes}{\left[{{e^{-j2{\pi}d{\rm{sin}}\left(\psi\right){\bf{n}}_{2}/{\lambda}}}}\right]},$
(3)
where ${\bf{n}}_{1}=[0,1,\cdots,N_{1}-1]$ and
${\bf{n}}_{2}=[0,1,\cdots,N_{2}-1]$, $\lambda$ is the carrier wavelength, and
$d$ is the antenna spacing usually satisfying $d=\lambda/2$.
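The UPA steering vector in (3) is just a Kronecker product of two phase-ramp vectors. A minimal NumPy sketch, assuming radian angles and the paper's $d=\lambda/2$ spacing (the function name is ours, not from the paper):

```python
import numpy as np

def upa_steering(theta, psi, n1, n2, d_over_lambda=0.5):
    """Normalized UPA steering vector a(theta, psi) of length n1*n2, following (3).

    theta: azimuth angle (rad); psi: elevation angle (rad).
    """
    k1 = np.arange(n1)  # n_1 = [0, 1, ..., N1-1]
    k2 = np.arange(n2)  # n_2 = [0, 1, ..., N2-1]
    v1 = np.exp(-2j * np.pi * d_over_lambda * np.sin(theta) * np.cos(psi) * k1)
    v2 = np.exp(-2j * np.pi * d_over_lambda * np.sin(psi) * k2)
    return np.kron(v1, v2) / np.sqrt(n1 * n2)
```

The $1/\sqrt{N}$ factor makes the vector unit-norm, consistent with the unitary dictionary matrices used in (4).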
Further, we denote
${\bf{H}}_{k}\triangleq{\bf{G}}{\rm{diag}}\left({\bf{h}}_{r,k}\right)$ as the
$M\times N$ cascaded channel for the $k$th user. Using the virtual angular-
domain representation, ${\bf{H}}_{k}\in\mathbb{C}^{M\times N}$ can be
decomposed as
${{\bf{H}}_{k}}={\bf{U}}_{M}{\tilde{\bf{H}}}_{k}{\bf{U}}_{N}^{T},$ (4)
where ${\tilde{\bf{H}}}_{k}$ denotes the ${M\times N}$ angular cascaded
channel, ${\bf{U}}_{M}$ and ${\bf{U}}_{N}$ are respectively the ${M\times M}$
and ${N\times N}$ dictionary unitary matrices at the BS and the RIS [5]. Since
there are limited scatterers around the BS and the RIS, the angular cascaded
channel ${\tilde{\bf{H}}}_{k}$ has only a few non-zero elements, i.e., it
exhibits sparsity.
### II-B Problem Formulation
In this paper, we assume that the direct channel between the BS and each user
is known at the BS, since it can be easily estimated as in conventional
wireless communication systems [5]. Therefore, we only focus on the cascaded
channel estimation problem.
By adopting the widely used orthogonal pilot transmission strategy, all users
transmit the known pilot symbols to the BS via the RIS over ${Q}$ time slots
for the uplink channel estimation. Specifically, in the ${q}$th
$\left({q=1,2,\cdots,Q}\right)$ time slot, the effective received signal
${{\bf{y}}_{k,q}}\in\mathbb{C}^{M\times 1}$ at the BS for the $k$th user after
removing the impact of the direct channel can be represented as
$\displaystyle{{\bf{y}}_{k,q}}=$
$\displaystyle{\bf{G}}{\rm{diag}}\left({\bm{\theta}}_{q}\right){\bf{h}}_{r,k}{s}_{k,q}+{\bf{w}}_{k,q}$
(5) $\displaystyle=$
$\displaystyle{\bf{G}}{\rm{diag}}\left({\bf{h}}_{r,k}\right){\bm{\theta}}_{q}{s}_{k,q}+{\bf{w}}_{k,q},$
where $s_{k,q}$ is the pilot symbol sent by the $k$th user,
${\bm{\theta}}_{q}=[\theta_{q,1},\cdots,\theta_{q,N}]^{T}$ is the $N\times 1$
reflecting vector at the RIS with $\theta_{q,n}$ representing the reflecting
coefficient at the $n$th RIS element $(n=1,\cdots,N)$ in the $q$th time slot,
${{\bf{w}}_{k,q}}\sim{\cal C}{\cal N}\left({0,\sigma^{2}{\bf{I}}_{M}}\right)$
is the ${{M}\times 1}$ received noise with ${\sigma^{2}}$ representing the
noise power. According to the cascaded channel
${\bf{H}}_{k}={\bf{G}}{\rm{diag}}\left({\bf{h}}_{r,k}\right)$, we can rewrite
(5) as
${{\bf{y}}_{k,q}}={\bf{H}}_{k}{\bm{\theta}}_{q}{s}_{k,q}+{\bf{w}}_{k,q}.$ (6)
After ${Q}$ time slots of pilot transmission, we can obtain the ${M\times Q}$
overall measurement matrix
${\bf{Y}}_{k}={[{\bf{y}}_{k,1},\cdots,{\bf{y}}_{k,Q}]}$ by assuming
${s_{k,q}}=1$ as
${{\bf{Y}}_{k}}={\bf{H}}_{k}{\bm{\Theta}}+{\bf{W}}_{k},$ (7)
where ${\bm{\Theta}}={[{\bm{\theta}}_{1},\cdots,{\bm{\theta}}_{Q}]}$ and
${\bf{W}}_{k}=[{\bf{w}}_{k,1},\cdots,{\bf{w}}_{k,Q}]$. By substituting (4)
into (7), we can obtain
${{\bf{Y}}_{k}}={\bf{U}}_{M}{\tilde{\bf{H}}}_{k}{\bf{U}}_{N}^{T}{\bm{\Theta}}+{\bf{W}}_{k}.$
(8)
Denoting
${{\tilde{\bf{Y}}}_{k}}=\left({\bf{U}}_{M}^{H}{{\bf{Y}}_{k}}\right)^{H}$ as
the $Q\times M$ effective measurement matrix and
${{\tilde{\bf{W}}}_{k}}=\left({\bf{U}}_{M}^{H}{{\bf{W}}_{k}}\right)^{H}$ as
the $Q\times M$ effective noise matrix, (8) can be rewritten as a CS model:
${{\tilde{\bf{Y}}}}_{k}={\tilde{\bm{\Theta}}}{\tilde{\bf{H}}}_{k}^{H}+{\tilde{\bf{W}}}_{k},$
(9)
where
${{\tilde{\bf{\Theta}}}}=\left({\bf{U}}_{N}^{T}{{\bm{\Theta}}}\right)^{H}$ is
the $Q\times N$ sensing matrix. Based on (9), the angular cascaded channel of
each user $k$ can be estimated by conventional CS algorithms, such as the OMP
algorithm. However, under the premise of ensuring the estimation accuracy, the
pilot overhead required by conventional CS algorithms is still high.
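Per-user recovery from the CS model (9) proceeds column by column: each column of ${\tilde{\bf{Y}}}_{k}$ is a compressed measurement of a sparse $N\times 1$ vector through the sensing matrix ${\tilde{\bm{\Theta}}}$. A minimal sketch of the classical OMP baseline mentioned here (function and variable names are ours, not from the paper):

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Classical OMP: recover a sparse x from y ≈ Phi @ x.

    Phi: (Q, N) sensing matrix; y: (Q,) measurement; sparsity: number of non-zeros.
    """
    support, r = [], y.copy()
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        n_star = int(np.argmax(np.abs(Phi.conj().T @ r)))
        if n_star not in support:
            support.append(n_star)
        # LS re-estimate on the current support, then update the residual.
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ x_s
    x = np.zeros(Phi.shape[1], dtype=complex)
    x[support] = x_s
    return x, sorted(support)
```

Running this independently for every user and every column is exactly the high-overhead baseline the DS-OMP scheme below improves upon by sharing support information across users.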
## III Joint Channel Estimation for RIS Assisted Wireless Communication
Systems
In this section, we will first reveal the double-structured sparsity of the
angular cascaded channels. Then, by exploiting this important channel
characteristic, we will propose a DS-OMP based cascaded channel estimation
scheme to reduce the pilot overhead. Finally, the computational complexity of
the proposed scheme will be analyzed.
### III-A Double-Structured Sparsity of Angular Cascaded Channels
In order to further explore the sparsity of the angular cascaded channel both
in row and column, the angular cascaded channel ${\tilde{\bf{H}}}_{k}$ in (4)
can be expressed as
$\displaystyle{\tilde{\bf{H}}}_{k}=$
$\displaystyle\sqrt{\frac{MN}{L_{G}L_{r,k}}}\sum\limits_{l_{1}=1}^{L_{G}}\sum\limits_{l_{2}=1}^{L_{r,k}}{\alpha^{G}_{l_{1}}}{\alpha^{r,k}_{l_{2}}}$
(10)
$\displaystyle{\tilde{\bf{b}}}\left(\vartheta^{G_{r}}_{l_{1}},\psi^{G_{r}}_{l_{1}}\right){\tilde{\bf{a}}^{T}\left(\vartheta^{G_{t}}_{l_{1}}+\vartheta^{r,k}_{l_{2}},\psi^{G_{t}}_{l_{1}}+\psi^{r,k}_{l_{2}}\right)},$
where both
${\tilde{\bf{b}}}\left(\vartheta,\psi\right)={\bf{U}}_{M}^{H}{\bf{b}}\left(\vartheta,\psi\right)$
and
${\tilde{\bf{a}}}\left(\vartheta,\psi\right)={\bf{U}}_{N}^{H}{{\bf{a}}}\left(\vartheta,\psi\right)$
have only one non-zero element each, which lies at the position of the array
steering vector at the direction $\left(\vartheta,\psi\right)$ in ${\bf{U}}_{M}$ and
${\bf{U}}_{N}$, respectively. Based on (10), we can find that each complete reflecting path
$(l_{1},l_{2})$ can provide one non-zero element for ${\tilde{\bf{H}}}_{k}$,
whose row index depends on
$\left(\vartheta^{G_{r}}_{l_{1}},\psi^{G_{r}}_{l_{1}}\right)$ and column index
depends on
$\left(\vartheta^{G_{t}}_{l_{1}}+\vartheta^{r,k}_{l_{2}},\psi^{G_{t}}_{l_{1}}+\psi^{r,k}_{l_{2}}\right)$.
Therefore, ${\tilde{\bf{H}}}_{k}$ has $L_{G}$ non-zero rows, where each non-
zero row has $L_{r,k}$ non-zero columns. The total number of non-zero elements
is $L_{G}L_{r,k}$, which is usually much smaller than $MN$.
Figure 1: Double-structured sparsity of the angular cascaded channels.
More importantly, we can find that different sparse channels
$\\{{\tilde{\bf{H}}}_{k}\\}_{k=1}^{K}$ exhibit the double-structured sparsity,
as shown in Fig. 1. Firstly, since different users communicate with the BS via
the common RIS, the channel $\bf{G}$ from the RIS to the BS is common for all
users. From (10), we can also find that
$\bigg{\\{}\left(\vartheta^{G_{r}}_{l_{1}},\psi^{G_{r}}_{l_{1}}\right)\bigg{\\}}_{l_{1}=1}^{L_{G}}$
is independent of the user index $k$. Therefore, the non-zero elements of
$\\{{\tilde{\bf{H}}}_{k}\\}_{k=1}^{K}$ lie on the completely common $L_{G}$
rows. Secondly, since different users share part of the scatterers between
the RIS and users, $\\{{\bf{h}}_{r,k}\\}_{k=1}^{K}$ may enjoy partially common
paths with the same angles at the RIS. Let $L_{c}$
(${L_{c}}\leq{L_{r,k}},\forall k$) denote the number of common paths for
$\\{{\bf{h}}_{r,k}\\}_{k=1}^{K}$, then we can find that for $\forall l_{1}$,
there always exists
$\bigg{\\{}\left(\vartheta^{G_{t}}_{l_{1}}+\vartheta^{r,k}_{l_{2}},\psi^{G_{t}}_{l_{1}}+\psi^{r,k}_{l_{2}}\right)\bigg{\\}}_{l_{2}=1}^{L_{c}}$
shared by $\\{{\tilde{\bf{H}}}_{k}\\}_{k=1}^{K}$. That is to say, for each
common non-zero row $l_{1}$ ($l_{1}=1,2,\cdots,L_{G}$),
$\\{{\tilde{\bf{H}}}_{k}\\}_{k=1}^{K}$ enjoy $L_{c}$ common non-zero columns.
This double-structured sparsity of the angular cascaded channels can be
summarized as follows from the perspective of row and column, respectively.
* •
Row-structured sparsity: Let $\Omega_{r}^{k}$ denote the row set of non-zero
elements for ${\tilde{\bf{H}}}_{k}$, then we have
$\Omega_{r}^{1}=\Omega_{r}^{2}=\cdots=\Omega_{r}^{K}=\Omega_{r},$ (11)
where $\Omega_{r}$ represents the completely common row support for
$\\{{\tilde{\bf{H}}}_{k}\\}_{k=1}^{K}$.
* •
Partially column-structured sparsity: Let $\Omega_{c}^{l_{1},k}$ denote the column
set of non-zero elements for the $l_{1}$th non-zero row of
${\tilde{\bf{H}}}_{k}$, then we have
$\Omega_{c}^{l_{1},1}\cap\Omega_{c}^{l_{1},2}\cap\cdots\cap\Omega_{c}^{l_{1},K}=\Omega_{c}^{l_{1},{\rm{Com}}},\quad
l_{1}=1,2,\cdots,L_{G},$ (12)
where $\Omega_{c}^{l_{1},{\rm{Com}}}$ represents the partially common column
support for the $l_{1}$th non-zero row of
$\\{{\tilde{\bf{H}}}_{k}\\}_{k=1}^{K}$.
Based on the above double-structured sparsity, the cascaded channels for
different users can be jointly estimated to improve the channel estimation
accuracy.
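As an illustration only (not part of the estimation scheme), the support structure of Fig. 1 can be mimicked with a toy generator: all $K$ users share the same $L_{G}$ non-zero rows, and within each such row they share $L_{c}$ columns plus $L_{r,k}-L_{c}$ user-specific ones. The function name and the uniform random placement of supports are our own assumptions:

```python
import numpy as np

def double_structured_supports(M, N, K, L_G, L_r, L_c, seed=0):
    """Toy supports with double-structured sparsity: K users share L_G rows;
    within each row they share L_c columns and have L_r - L_c private ones."""
    rng = np.random.default_rng(seed)
    rows = rng.choice(M, L_G, replace=False)          # completely common rows
    supports = [dict() for _ in range(K)]
    for r in rows:
        common = rng.choice(N, L_c, replace=False)    # partially common columns
        for k in range(K):
            # User-specific columns, drawn disjointly from the common ones.
            rest = rng.choice(np.setdiff1d(np.arange(N), common),
                              L_r - L_c, replace=False)
            supports[k][int(r)] = set(common.tolist()) | set(rest.tolist())
    return rows, supports
```

Each generated user has $L_{G}L_{r}$ non-zero positions, matching the count $L_{G}L_{r,k}\ll MN$ derived above.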
### III-B Proposed DS-OMP Based Cascaded Channel Estimation
In this subsection, we propose the DS-OMP based cascaded channel estimation
scheme by integrating the double-structured sparsity into the classical OMP
algorithm. The specific algorithm can be summarized in Algorithm 1, which
includes three key stages to detect supports of angular cascaded channels.
Input: ${\tilde{\bf{Y}}}_{k}:\forall k$, ${\tilde{\bm{\Theta}}}$, ${L_{G}}$,
${L_{r,k}}:\forall k$, $L_{c}$.
Initialization: ${\hat{\tilde{\bf{H}}}}_{k}={\bf{0}}_{M\times N},\forall k$.
1\. Stage 1: Return estimated completely common row support
${\hat{\Omega}}_{r}$ by Algorithm 2.
2\. Stage 2: Return estimated partially common column supports
$\\{{\hat{\Omega}}_{c}^{l_{1},{\rm{Com}}}\\}_{l_{1}=1}^{L_{G}}$ based on
${\hat{\Omega}}_{r}$ by Algorithm 3.
3\. Stage 3: Return estimated column supports
$\\{\\{{\hat{\Omega}}_{c}^{l_{1},k}\\}_{l_{1}=1}^{L_{G}}\\}_{k=1}^{K}$ based
on ${\hat{\Omega}}_{r}$ and
$\\{{\hat{\Omega}}_{c}^{l_{1},{\rm{Com}}}\\}_{l_{1}=1}^{L_{G}}$ by Algorithm
4.
4\. for $l_{1}=1,2,\cdots,L_{G}$ do
5\. for $k=1,2,\cdots,K$ do
6\.
${\hat{\tilde{\bf{H}}}}^{H}_{k}({\hat{\Omega}}_{c}^{l_{1},k},{{\hat{\Omega}}_{r}}(l_{1}))={\tilde{\bm{\Theta}}}^{{\dagger}}(:,{\hat{\Omega}}_{c}^{l_{1},k}){\tilde{\bf{Y}}}_{k}(:,{{\hat{\Omega}}_{r}}(l_{1}))$
7\. end for
8\. end for
9\.
${{\hat{\bf{H}}}_{k}}={{\bf{U}}_{M}^{H}}{{{\hat{\tilde{\bf{H}}}}}_{k}}{{\bf{U}}_{N}},\forall
k$
Output: Estimated cascaded channel matrices ${{\hat{{\bf{H}}}}}_{k},\forall
k$.
Algorithm 1 DS-OMP based cascaded channel estimation
The main procedure of Algorithm 1 can be explained as follows. Firstly, the
completely common row support $\Omega_{r}$ is jointly estimated thanks to the
row-structured sparsity in Step 1, where $\Omega_{r}$ consists of $L_{G}$ row
indexes associated with $L_{G}$ non-zero rows. Secondly, for the $l_{1}$th
non-zero row, the partially common column support
$\Omega_{c}^{l_{1},{\rm{Com}}}$ can be further jointly estimated thanks to the
partially column-structured sparsity in Step 2. Thirdly, the user-specific
column supports for each user $k$ can be individually estimated in Step 3.
After detecting supports of all sparse matrices, we adopt the LS algorithm to
obtain corresponding estimated matrices
$\\{{{\hat{\tilde{\bf{H}}}}_{k}}\\}_{k=1}^{K}$ in Steps 4-8. It should be
noted that the sparse signal in (9) is ${\tilde{\bf{H}}}_{k}^{H}$, thus the
sparse matrix estimated by the LS algorithm in Step 6 is
${\hat{\tilde{\bf{H}}}}_{k}^{H}$. Finally, we can obtain the estimated
cascaded channels $\\{{{\hat{{\bf{H}}}}_{k}}\\}_{k=1}^{K}$ by transforming
angular channels into spatial channels in Step 9.
In the following part, we will introduce how to estimate the completely common
row support, the partially common column supports, and the individual column
supports for the first three stages in detail.
Input: ${\tilde{\bf{Y}}}_{k}:\forall k$, ${L_{G}}$.
Initialization: ${\bf{g}}={\bf{0}}_{M\times 1}$.
1\. for $k=1,2,\cdots,K$ do
2\. ${\bf{g}}(m)={\bf{g}}(m)+{\|{{\tilde{\bf{Y}}}_{k}}(:,m)\|}^{2}_{F}$,
$\forall m=1,2,\cdots,M$
3\. end for
4\. ${\hat{\Omega}}_{r}={\Gamma}\left({\mathcal{T}}({\bf{g}},L_{G})\right)$
Output: Estimated completely common row support ${\hat{\Omega}}_{r}$.
Algorithm 2 Joint completely common row support estimation
_1) Stage 1: Estimating the completely common row support._ Thanks to the row-
structured sparsity of the angular cascaded channels, we can jointly estimate
the completely common row support $\Omega_{r}$ for
$\\{{{{\tilde{\bf{H}}}}_{k}}\\}_{k=1}^{K}$ by Algorithm 2.
From the virtual angular-domain channel representation (4), we can find that
non-zero rows of $\\{{{{\tilde{\bf{H}}}}_{k}}\\}_{k=1}^{K}$ correspond
to columns with high power in the received pilots
$\\{{{{\tilde{\bf{Y}}}}_{k}}\\}_{k=1}^{K}$. Since
$\\{{{{\tilde{\bf{H}}}}_{k}}\\}_{k=1}^{K}$ have the completely common non-zero
rows, $\\{{{{\tilde{\bf{Y}}}}_{k}}\\}_{k=1}^{K}$ can be jointly utilized to
estimate the completely common row support $\Omega_{r}$, which can resist the
effect of noise. Specifically, we use a vector $\bf{g}$ of size $M\times 1$ to store
the summed power of the columns of $\\{{{{\tilde{\bf{Y}}}}_{k}}\\}_{k=1}^{K}$, as in
Step 2 of Algorithm 2. Finally, $L_{G}$ indexes of elements with the largest
amplitudes in $\bf{g}$ are selected as the estimated completely common row
support ${\hat{\Omega}}_{r}$ in Step 4, where $\mathcal{T}({\bf{x}},L)$
denotes a prune operator on $\bf{x}$ that sets all but $L$ elements with the
largest amplitudes to zero, and $\Gamma(\bf{x})$ denotes the support of
${\bf{x}}$, i.e., $\Gamma({\bf{x}})=\\{i:{\bf{x}}(i)\neq 0\\}$.
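Algorithm 2 amounts to summing column powers of the effective measurements across all users and keeping the $L_{G}$ strongest indexes. A minimal sketch (names are ours):

```python
import numpy as np

def common_row_support(Y_list, L_G):
    """Algorithm 2: jointly estimate the completely common row support.

    Y_list: K effective measurement matrices Y~_k, each of shape (Q, M).
    Returns the L_G indexes whose columns carry the largest summed power.
    """
    g = np.zeros(Y_list[0].shape[1])
    for Y in Y_list:                          # Steps 1-3: accumulate column powers
        g += np.sum(np.abs(Y) ** 2, axis=0)
    # Step 4: indexes of the L_G largest entries of g
    return sorted(np.argsort(g)[-L_G:].tolist())
```

Summing over users before thresholding is what gives the joint estimate its noise resistance: user-specific noise fluctuations average out while the common row power accumulates.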
After obtaining $L_{G}$ non-zero rows by Algorithm 2, we focus on estimating
the column support ${{\Omega}}_{c}^{l_{1},k}$ for each non-zero row $l_{1}$
and each user $k$ by the following Stage 2 and 3.
_2) Stage 2: Estimating the partially common column supports._ Thanks to the
partially column-structured sparsity of the angular cascaded channels, we can
jointly estimate the partially common column supports
$\\{{{{\Omega}}_{c}^{l_{1},{\rm{Com}}}}\\}_{l_{1}=1}^{L_{G}}$ for
$\\{{{{\tilde{\bf{H}}}}_{k}}\\}_{k=1}^{K}$ by Algorithm 3.
Input: ${\tilde{\bf{Y}}}_{k}:\forall k$, $L_{G}$, ${\tilde{\bm{\Theta}}}$,
${L_{r,k}}:\forall k$, $L_{c}$, ${\hat{\Omega}}_{r}$.
Initialization: ${\hat{\Omega}}_{c}^{l_{1},k}=\emptyset$, $\forall l_{1},k$,
${\bf{c}}^{l_{1}}={\bf{0}}_{N\times 1}$, $\forall l_{1}$.
1\. for $l_{1}=1,2,\cdots,L_{G}$ do
2\. for $k=1,2,\cdots,K$ do
3\.
${{\tilde{\bf{y}}}}_{k}={{\tilde{\bf{Y}}}}_{k}(:,{\hat{\Omega}}_{r}(l_{1}))$,
${{\tilde{\bf{r}}}}_{k}={\tilde{\bf{y}}}_{k}$
4\. for $l_{2}=1,2,\cdots,L_{r,k}$ do
5\.
${n^{*}}={\mathop{\rm{argmax}}\limits_{n=1,2,\cdots,N}}{\|{\tilde{\bm{\Theta}}}(:,n)^{H}{{\tilde{\bf{r}}}_{k}}\|}^{2}_{F}$
6\. ${\hat{\Omega}}_{c}^{l_{1},k}={\hat{\Omega}}_{c}^{l_{1},k}\bigcup n^{*}$
7\. ${\hat{\tilde{\bf{h}}}}_{k}={\bf{0}}_{N\times 1}$
8\.
${\hat{\tilde{\bf{h}}}}_{k}({\hat{\Omega}}_{c}^{l_{1},k})={\tilde{\bm{\Theta}}}^{{\dagger}}(:,{\hat{\Omega}}_{c}^{l_{1},k}){{\tilde{\bf{y}}}_{k}}$,
9\.
${{\tilde{\bf{r}}}_{k}}={\tilde{\bf{y}}}_{k}-{\tilde{\bm{\Theta}}}{\hat{\tilde{\bf{h}}}}_{k}$
10\. ${\bf{c}}^{l_{1}}(n^{*})={\bf{c}}^{l_{1}}(n^{*})+1$
11\. end for
12\. end for
13\.
${\hat{\Omega}}_{c}^{l_{1},{\rm{Com}}}=\Gamma\left({\mathcal{T}}({\bf{c}}^{l_{1}},L_{c})\right)$
14\. end for
Output: Estimated partially common column supports
$\\{{\hat{\Omega}}_{c}^{l_{1},{\rm{Com}}}\\}_{l_{1}=1}^{L_{G}}$.
Algorithm 3 Joint partially common column supports estimation
For the $l_{1}$th non-zero row, we only need to utilize the effective
measurement vector
${\tilde{\bf{y}}}_{k}={{\tilde{\bf{Y}}}}_{k}(:,{\hat{\Omega}}_{r}(l_{1}))$ to
estimate the partially common column support ${\Omega}_{c}^{l_{1},{\rm Com}}$.
The basic idea is as follows: we first estimate the column support
${{\Omega}}_{c}^{l_{1},k}$ with $L_{r,k}$ indexes for each user $k$; then we
select the $L_{c}$ indexes that are chosen by the largest number of users among all
$\\{{{\Omega}}_{c}^{l_{1},k}\\}_{k=1}^{K}$ as the estimated partially common
column support ${\hat{\Omega}}_{c}^{l_{1},{\rm{Com}}}$.
In order to estimate the column supports for each user $k$, the correlation
between the sensing matrix ${\tilde{\bm{\Theta}}}$ and the residual vector
${\tilde{\bf{r}}}_{k}$ needs to be calculated. As shown in Step 5 of Algorithm
3, the index of the column of ${\tilde{\bm{\Theta}}}$ most correlated with
${\tilde{\bf{r}}}_{k}$ is regarded as the newly found column support index
$n^{*}$. Based on the updated column support ${\hat{\Omega}}_{c}^{l_{1},k}$ in
Step 6, the estimated sparse vector ${\hat{\tilde{\bf{h}}}}_{k}$ is obtained
by using the LS algorithm in Step 8. Then, the residual vector
${\tilde{\bf{r}}}_{k}$ is updated by removing the effect of non-zero elements
that have been estimated in Step 9. Particularly, the $N\times 1$ vector
${\bf{c}}^{l_{1}}$ is used to count the number of times for selected column
indexes in Step 10. Finally, the $L_{c}$ indexes of elements with the largest
value in ${\bf{c}}^{l_{1}}$ are selected as the estimated partially common
column support ${\hat{\Omega}}_{c}^{l_{1},{\rm{Com}}}$ in Step 13.
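The voting step (Steps 10 and 13 of Algorithm 3) can be sketched as a counting vector over column indexes: each user's estimated support casts one vote per index, and the $L_{c}$ most-voted indexes form the common support. A minimal sketch under our own naming:

```python
import numpy as np

def common_column_support(per_user_supports, L_c, N):
    """Steps 10 and 13 of Algorithm 3: keep the L_c column indexes selected
    by the largest number of users (the counting vector c^{l_1})."""
    c = np.zeros(N, dtype=int)
    for supp in per_user_supports:   # one estimated support per user
        for n in supp:
            c[n] += 1
    return sorted(np.argsort(c)[-L_c:].tolist())
```

Indexes corresponding to truly common paths accumulate up to $K$ votes, while user-specific or spurious indexes rarely repeat, so thresholding the count isolates $\Omega_{c}^{l_{1},{\rm Com}}$.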
_3) Stage 3: Estimating the individual column supports._ Based on the
estimated completely common row support ${\hat{\Omega}}_{r}$ and the estimated
partially common column supports
$\\{{\hat{\Omega}}_{c}^{l_{1},{\rm{Com}}}\\}_{l_{1}=1}^{L_{G}}$, the column
support ${{\Omega}}_{c}^{l_{1},k}$ for each non-zero row $l_{1}$ and each user
$k$ can be estimated by Algorithm 4.
Input: ${\tilde{\bf{Y}}}_{k}:\forall k$, ${\tilde{\bm{\Theta}}}$, $L_{G}$,
${L_{r,k}}:\forall k$, $L_{c}$, ${\hat{\Omega}}_{r}$,
$\\{{\hat{\Omega}}_{c}^{l_{1},{\rm{Com}}}\\}_{l_{1}=1}^{L_{G}}$.
Initialization:
${\hat{\Omega}}_{c}^{l_{1},k}={\hat{\Omega}}_{c}^{l_{1},{\rm{Com}}}$, $\forall
l_{1},k$.
1\. for $l_{1}=1,2,\cdots,L_{G}$ do
2\. for $k=1,2,\cdots,K$ do
3\. ${\tilde{\bf{y}}}_{k}={\tilde{\bf{Y}}}_{k}(:,{\hat{\Omega}}_{r}(l_{1}))$
4\. ${\hat{\tilde{\bf{h}}}}_{k}={\bf{0}}_{N\times 1}$
5\.
${\hat{\tilde{\bf{h}}}}_{k}({\hat{\Omega}}_{c}^{l_{1},k})={\tilde{\bm{\Theta}}}^{{\dagger}}(:,{\hat{\Omega}}_{c}^{l_{1},{\rm{Com}}}){{\tilde{\bf{y}}}_{k}}$
6\. ${{\tilde{\bf{r}}}}_{k}={\tilde{\bf{y}}}_{k}-{\tilde{\bm{\Theta}}}{\hat{\tilde{\bf{h}}}}_{k}$
7\. for $l_{2}=1,2,\cdots,L_{r,k}-L_{c}$ do
8\.
${n^{*}}={\mathop{\rm{argmax}}\limits_{n=1,2,\cdots,N}}{\|{\tilde{\bm{\Theta}}}(:,n)^{H}{{\tilde{\bf{r}}}_{k}}\|}^{2}_{F}$
9\. ${\hat{\Omega}}_{c}^{l_{1},k}={\hat{\Omega}}_{c}^{l_{1},k}\bigcup n^{*}$
10\. ${\hat{\tilde{\bf{h}}}}_{k}={\bf{0}}_{N\times 1}$
11\.
${\hat{\tilde{\bf{h}}}}_{k}({\hat{\Omega}}_{c}^{l_{1},k})={\tilde{\bm{\Theta}}}^{{\dagger}}(:,{\hat{\Omega}}_{c}^{l_{1},k}){{\tilde{\bf{y}}}_{k}}$
12\.
${{\tilde{\bf{r}}}_{k}}={\tilde{\bf{y}}}_{k}-{\tilde{\bm{\Theta}}}{\hat{\tilde{\bf{h}}}}_{k}$
13\. end for
14\. end for
15\. end for
Output: Estimated individual column supports
$\\{\\{{\hat{\Omega}}_{c}^{l_{1},k}\\}_{l_{1}=1}^{L_{G}}\\}_{k=1}^{K}$.
Algorithm 4 Individual column supports estimation
For the $l_{1}$th non-zero row, we have estimated $L_{c}$ column support
indexes by Algorithm 3. Thus, there are $L_{r,k}-L_{c}$ user-specific column
support indexes to be estimated for each user $k$. The column support
${\hat{\Omega}}_{c}^{l_{1},k}$ is initialized as
${\hat{\Omega}}_{c}^{l_{1},{\rm Com}}$. Based on
${\hat{\Omega}}_{c}^{l_{1},{\rm Com}}$, the estimated sparse vector
${\hat{\tilde{\bf{h}}}}_{k}$ and residual vector ${{\tilde{\bf{r}}}}_{k}$ are
initialized in Step 5 and Step 6. Then, the column support
${\hat{\Omega}}_{c}^{l_{1},k}$ for $\forall l_{1}$ and $\forall k$ can be
estimated in Steps 7-13 by following the same idea of Algorithm 3.
Through the above three stages, the supports of all angular cascaded channels
are estimated by exploiting the double-structured sparsity. It should be
pointed out that, if there are no common scatterers between the RIS and users,
the double-structured sparse channel is simplified to the row-structured
sparse channel. In this case, the cascaded channel estimation can still be
solved by the proposed DS-OMP algorithm, with Stage 2 simply removed.
### III-C Computational Complexity Analysis
In this subsection, the computational complexity of the proposed DS-OMP
algorithm is analyzed in terms of three stages of detecting supports. In Stage
1, the computational complexity mainly comes from Step 2 in Algorithm 2, which
calculates the power of $M$ columns of ${\tilde{\bf{Y}}}_{k}$ of size $Q\times
M$ for $k=1,2,\cdots,K$. The corresponding computational complexity is
$\mathcal{O}(KMQ)$. In Stage 2, for each non-zero row $l_{1}$ and each user
$k$ in Algorithm 3 , the computational complexity $\mathcal{O}(NQL_{r,k}^{3})$
is the same as that of OMP algorithm [6]. Considering $L_{G}K$ iterations, the
overall computational complexity of Algorithm 3 is
$\mathcal{O}(L_{G}KNQ{L}_{r,k}^{3})$. Similarly, the overall computational
complexity of Algorithm 4 is $\mathcal{O}(L_{G}KNQ{(L_{r,k}-L_{c})}^{3})$.
Therefore, the overall computational complexity of the proposed DS-OMP algorithm
is $\mathcal{O}(KMQ)+\mathcal{O}(L_{G}KNQ{L}_{r,k}^{3})$.
## IV Simulation Results
In our simulation, we consider that the number of BS antennas, RIS elements
and users are respectively $M=64$ ($M_{1}=8,M_{2}=8$), $N=256$ ($N_{1}=16$,
$N_{2}=16$), and $K=16$. The number of paths between the RIS and the BS is
$L_{G}=5$, and the number of paths from the $k$th user to the RIS is set as
$L_{r,k}=8$ for $\forall k$. All spatial angles are assumed to be on the
quantized grids. Each element of RIS reflecting matrix $\bm{\Theta}$ is
selected from ${\\{-\frac{1}{\sqrt{N}},+\frac{1}{\sqrt{N}}\\}}$ by considering
discrete phase shifts of the RIS [7].
$|{\alpha}^{G}_{l}|=10^{-3}d_{BR}^{-2.2}$, where $d_{BR}$ denotes the distance
between the BS and the RIS and is set as $d_{BR}=10$ m.
$|{\alpha}^{r,k}_{l}|=10^{-3}d_{RU}^{-2.8}$, where $d_{RU}$ denotes the
distance between the RIS and the user and is set as $d_{RU}=100$ m for
$\forall k$ [7]. The SNR is defined as
$\mathbb{E}\\{||{\tilde{\bm{\Theta}}}{\tilde{\bf{H}}}_{k}^{H}||_{F}^{2}/||{\tilde{\bf{W}}}_{k}||_{F}^{2}\\}$
in (9) and is set as $0$ dB.
We compare the proposed DS-OMP based scheme with the conventional CS based
scheme [3] and the row-structured sparsity based scheme [4]. In the
conventional CS based scheme, the OMP algorithm is used to estimate the sparse
cascaded channel ${{\tilde{\bf{H}}}_{k}}$ for $\forall k$. In the row-
structured sparsity based scheme, the common row support $\Omega_{r}$ with
$L_{G}$ indexes are firstly estimated, and then for each user $k$ and each
non-zero row $l_{1}$, column supports are respectively estimated by following
the idea of the classical OMP algorithm. In addition, we consider the oracle
LS scheme as our benchmark, where the supports of all sparse channels are
assumed to be perfectly known.
Figure 2: NMSE performance comparison against the pilot overhead $Q$.
Fig. 2 shows the normalized mean square error (NMSE) performance comparison
against the pilot overhead, i.e., the number of time slots $Q$ for pilot
transmission. As shown in Fig. 2, in order to achieve the same estimation
accuracy, the pilot overhead required by the proposed DS-OMP based scheme is
lower than that of the two existing schemes [3, 4]. Note that when there is no
common path between the RIS and all users, i.e., $L_{c}=0$, the double-
structured sparsity is simplified to the row-structured sparsity [4]; thus the
NMSE performance of the proposed DS-OMP based scheme and the row-structured
sparsity based scheme is the same. With an increasing number of common paths
$L_{c}$ between the RIS and users, the NMSE performance of the proposed scheme
improves and approaches the benchmark with perfect channel supports.
## V Conclusions
In this paper, we developed a low-overhead cascaded channel estimation scheme
in RIS assisted wireless communication systems. Specifically, we first
analyzed the double-structured sparsity of the angular cascaded channels among
users. Based on this double-structured sparsity, we then proposed a DS-OMP
algorithm to reduce the pilot overhead. Simulation results show that the pilot
overhead required by the proposed DS-OMP algorithm is lower compared with
existing algorithms. In future work, we will apply the double-structured
sparsity to the super-resolution channel estimation problem, considering that
channel angles are continuous in practice.
## References
* [1] L. Dai et al., "Reconfigurable intelligent surface-based wireless communications: Antenna design, prototyping, and experimental results," _IEEE Access_, vol. 8, pp. 45 913–45 923, Mar. 2020.
* [2] M. Di Renzo et al., "Reconfigurable intelligent surfaces vs. relaying: Differences, similarities, and performance comparison," _IEEE Open J. Commun. Soc._, vol. 1, pp. 798–807, Jun. 2020.
* [3] P. Wang, J. Fang, H. Duan, and H. Li, “Compressed channel estimation for intelligent reflecting surface-assisted millimeter wave systems,” _IEEE Signal Process. Lett._ , vol. 27, pp. 905–909, May 2020.
* [4] J. Chen, Y.-C. Liang, H. V. Cheng, and W. Yu, “Channel estimation for reconfigurable intelligent surface aided multi-user MIMO systems,” _arXiv preprint arXiv:1912.03619_ , Dec. 2019.
* [5] C. Hu, L. Dai, T. Mir, Z. Gao, and J. Fang, "Super-resolution channel estimation for mmWave massive MIMO with hybrid precoding," _IEEE Trans. Veh. Technol._, vol. 67, no. 9, pp. 8954–8958, Sep. 2018.
* [6] X. Gao, L. Dai, S. Zhou, A. M. Sayeed, and L. Hanzo, “Wideband beamspace channel estimation for millimeter-wave MIMO systems relying on lens antenna arrays,” _IEEE Trans. Signal Process._ , vol. 67, no. 18, pp. 4809–4824, Sep. 2019.
* [7] Q. Wu and R. Zhang, “Beamforming optimization for wireless network aided by intelligent reflecting surface with discrete phase shifts,” _IEEE Trans. Commun._ , vol. 68, no. 3, pp. 1838–1851, 2020.
# Black hole fueling in galaxy mergers: a high-resolution analysis
Joaquin Prieto1, Andrés Escala1, George C. Privon2 & Juan d’Etigny1
1Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago,
Chile.
2Department of Astronomy, University of Florida, 211 Bryant Space Sciences
Center, Gainesville, FL 32611, USA.
###### Abstract
Using parsec scale resolution hydrodynamical adaptive mesh refinement
simulations we have studied the mass transport process throughout a galactic
merger. The aim of this study is to connect the peaks of both the mass accretion
rate onto the BHs and star formation bursts with the gravitational and
hydrodynamic torques acting on the galactic gaseous component. Our merger
initial conditions were chosen to mimic a realistic system. The simulations
include gas cooling, star formation, supernova feedback, and AGN feedback.
Gravitational and hydrodynamic torques near pericenter passes trigger gas
funneling to the nuclei, which is associated with bursts of star formation and
black hole growth. Such episodes are intimately related to both kinds of
torques acting on the galactic gas. Pericenters trigger both star formation
and mass accretion rates of a few $(1-10)\,M_{\odot}$/yr. Such episodes
last $\sim(50-75)$ Myr. Close passes can also produce black hole accretion
that approaches and reaches the Eddington rate, lasting a few Myr. Our
simulation shows that both gravitational and hydrodynamic torques are enhanced
at pericenter passes with gravitational torques tending to have higher values
than the hydrodynamic torques throughout the merger. We also find that in the
closest encounters, hydrodynamic and gravitational torques can be comparable
in their effect on the gas, the two helping in the redistribution of both
angular momentum and mass in the galactic disc. Such phenomena allow inward
mass transport onto the BH influence radius, fueling the compact object and
lighting up the galactic nuclei.
###### Subject headings:
galaxies: formation — large-scale structure of the universe — stars: formation
— turbulence.
## 1\. Introduction
Cosmological N-body numerical simulations of structure formation show that
dark matter (DM) haloes were formed by mergers between smaller haloes in a
hierarchical way (e.g. Angulo et al., 2012). Similar kinds of simulations
including baryonic physics have shown that galaxies were formed inside DM
haloes in this hierarchical model (e.g. Dubois et al., 2014; Genel et al.,
2014). In this sense, mergers and interactions between galaxies are a
fundamental piece of the galaxy formation process, and they certainly influence
the evolution of galaxies. In fact, observations of irregular and disrupted
systems are consistent with mergers and interactions between galaxies (e.g.
Toomre & Toomre, 1972; Schweizer, 1982; Engel et al., 2010; Bussmann et al.,
2012).
The infrared and optical properties of interacting galaxies are different
compared with isolated systems (e.g. Sanders & Mirabel, 1996). Such
differences can be a consequence of star formation bursts associated with
galactic interactions (e.g. Sanders et al., 1998; Duc et al., 1997; Jogee et
al., 2009). Besides the radiative signatures of star formation, some
interacting galaxies also show nuclear activity, which can be associated with
black hole (BH) fueling (e.g. Petric, 2011; Stierwalt et al., 2013). These two
features, i.e. SF bursts and active galactic nuclei (AGN), suggest that
galactic encounters are able to redistribute gas inside galaxies, moving
material toward their central regions to feed massive BHs and trigger SF
bursts (e.g. Barnes & Hernquist, 1991; Mihos & Hernquist, 1996; Springel et
al., 2005).
Smoothed particle hydrodynamics (SPH) numerical simulations of galactic
mergers with resolutions of a few tens to hundreds of pc have shown that
gravitational torques are able to produce gas inflows toward the galactic
central regions (e.g. Barnes & Hernquist, 1991; Wurster & Thacker, 2013;
Newton & Kay, 2013; Blumenthal & Barnes, 2018). Such inflows increase the gas
density of galactic centers, enhancing SF (e.g. Teyssier, 2010; Powell et al.,
2013), and at the same time are able to feed central supermassive BHs,
triggering AGN activity (e.g. Sanders et al., 1998; Bahcall et al., 1995;
Debuhr et al., 2011).
Besides the low-resolution SPH simulations including both SNe and AGN feedback
mentioned above (e.g. Debuhr et al., 2011; Wurster & Thacker, 2013; Newton &
Kay, 2013), Gabor et al. (2016), using an adaptive mesh refinement (AMR)
simulation with $\sim$ 8 pc resolution, have also shown that pericenter passes
correlate with peaks in both BH and stellar activity, but they did not analyze
the source of the torques. Indeed, studies of mass transport in $\sim$
pc-resolution simulations have not yet been performed, and this lack of
parsec-scale AMR simulations of mass transport in galaxy mergers strengthens
the relevance of torque analysis in this kind of experiment.
When choosing to simulate idealized mergers with an AMR code over a Lagrangian
code, problems with the advection of material and grid-alignment issues may
arise, which could result in a loss of angular momentum conservation (Wadsley
et al., 2008; Hahn et al., 2010; Hopkins, 2015). These issues are minimized
here: high spatial resolution is imposed in the central galaxy regions,
minimizing spurious field misalignments, and, since pericenter passes are
short-lived (a few orbital times at most), there are no significant deviations
of the orbital angular momentum with respect to the ideal case. Furthermore,
the galaxies are kept at resolutions high enough for the AMR technique to
resolve shocks throughout the merger process, so contact discontinuities are
captured.
As the objective of our simulation is to properly and fully characterize a
generic galaxy merger with realistic dynamics, we have to choose orbital
initial conditions that have been shown to nearly reproduce observed systems.
Due to the degeneracy of the problem and the large parameter space of galaxy
interactions, constraining the initial conditions with hydrodynamic
simulations would be prohibitively time-consuming. Privon et al. (2013) used
the Identikit code to find the orbital parameters capable of reproducing the
morphology and kinematics of the tidal features of four known observed galaxy
mergers (NGC 5257/8, the Mice, the Antennae and NGC 2623). In this work we
adopt their orbital parameters (as an ansatz) for NGC 2623. While the
objective of this work is not to reproduce the morphology of this system, we
cite its characteristics as order-of-magnitude control values.
NGC 2623 is a low-redshift luminous infrared galaxy (LIRG) with an infrared
luminosity of $L_{\rm IR}=3.6\times 10^{11}L_{\odot}$ (Armus et al., 2009).
The system has been classified as an M4 merger (Larson et al., 2016), i.e. a
galaxy with an apparently single nucleus and evident tidal tails. The merger
shows two tidal tails of $\sim 20-25$ kpc in length and a single nucleus in
the IR (e.g. Evans et al., 2008). Sanders & Mirabel (1996) found a system
stellar mass of $M_{\star}=2.95\times 10^{10}M_{\odot}$ with a molecular
hydrogen mass of $M_{\rm H_{2}}=6.76\times 10^{9}M_{\odot}$. These values do
not stray far from the typical LIRG values found in samples like the GOALS
survey (Armus et al., 2009). Haan et al. (2011) showed that, although there is
some spread in the central BH masses found in the GOALS survey, they generally
lie in the $10^{7}-10^{9}\,M_{\odot}$ range.
In this paper, for the first time, we study the evolution of a merger system
from its early stages up to the point where the BHs coalesce, using a $\sim$ 3
pc resolution AMR simulation including SF, supernova (SNe) feedback, BH
particles and AGN feedback. The goal of this paper is to understand the
connection between torques, SF bursts and AGN activity in such large-scale
galactic environments at unprecedentedly high resolution, resolving the BH
influence radius.
The paper is organized as follows. In §2 we describe the numerical details of
the experiment, in §3 we show our results and in §4 we present our discussion
and conclusions.
## 2\. Methodology and Numerical Simulation Details
### 2.1. Initial conditions
As already mentioned, in this work we use the parameters found by Privon et
al. (2013) for NGC 2623 as initial conditions for a high-resolution
hydrodynamic numerical simulation. Table 1 shows the initial orbital
parameters of the simulated merger and Table 2 shows the initial positions and
velocities of both galactic centers.
In addition to the orbital ICs, it is necessary to specify both the mass
content and the mass distribution of each galactic component: the gaseous
disc, the stellar disc and the stellar bulge. To create the ICs for the DM
haloes, gas and stars of each galaxy we used the DICE code (Perret et al.,
2014). In our setup the gaseous disc follows an exponential profile with a
characteristic radius of 1 kpc. The stellar disc is modelled with a
Miyamoto-Nagai profile with a characteristic radius of 0.677 kpc. The stellar
bulge follows an Einasto profile with a characteristic radius of 0.6 kpc.
Finally, for the DM halo we employ a Navarro-Frenk-White profile (Navarro,
Frenk & White, 1996) with a concentration parameter equal to 10. The SFR in
the stellar disc follows Bouché (2010). Table 3 shows a complete summary of
the galactic parameters of the system.
Table 1Initial orbital parameters.
$D_{\rm ini}$ [kpc] | $e$ | $p$ [kpc] | $\mu$ | $(i_{1};\omega_{1})$ | $(i_{2};\omega_{2})$
---|---|---|---|---|---
50.0 | 1.0 | 0.6 | 1.0 | $(30^{\circ};330^{\circ})$ | $(25^{\circ};110^{\circ})$
* •
Initial separation $D_{\rm ini}$, eccentricity of the orbit $e$, pericentric
distance of the orbit $p$, galaxy mass ratio $\mu$ and disc orientations
$(i,\omega)$ of both galaxies with respect to the orbital plane.
Table 2Initial position and velocity.
Coordinates | Gal1 | Gal2
---|---|---
(x, y, z) [kpc] | (-25, 0, 0) | (25, 0, 0)
(vx, vy, vz) [km/s] | (25, 4.1, 0.0) | (-25, -4.1, 0.0)
* •
The orbital plane has been rotated $45^{\circ}$ in the polar direction and
$45^{\circ}$ in the azimuthal direction. These values are given in the
reference frame of the simulation box.
Table 3Initial isolated galaxy setup. Gaseous disc |
---|---
(Exponential profile) |
Mass [$10^{9}{\rm M}_{\odot}$] | 1.0
Characteristic radius [kpc] | 1.0
Truncation radius [kpc] | 5.0
Stellar disk |
(Miyamoto-Nagai profile) |
Mass [$10^{9}{\rm M}_{\odot}$] | 2.975
Number of particles | 11900000
Characteristic radius [kpc] | 0.677
Truncation radius [kpc] | 5.0
Stellar bulge |
(Einasto profile) |
Mass [$10^{9}{\rm M}_{\odot}$] | 0.975
Number of particles | 390000
Characteristic radius [kpc] | 0.6
Truncation radius [kpc] | 1.0
Dark matter halo |
(NFW profile) |
Mass [$10^{9}{\rm M}_{\odot}$] | 20
Number of particles | 200000
Concentration parameter | 10
Truncation radius [kpc] | 60
Figure 1.— Gas number density projection (left column) and stellar mass
projection (right column) at different times. From top to bottom: central BHs
at 10 kpc separation, the first pericenter, the first apocenter and the point
of coalescence.
### 2.2. Gas physics
The simulation was performed with the cosmological N-body hydrodynamical code
RAMSES (Teyssier, 2002). The code uses adaptive mesh refinement and solves the
Euler equations with a second-order Godunov method and a MUSCL scheme, using
the MinMod total variation diminishing limiter to reconstruct the
cell-centered values at cell interfaces.
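For illustration, the MinMod limiter used in this kind of MUSCL reconstruction can be sketched as follows (a generic textbook implementation, not the RAMSES source; the function names are ours):

```python
def minmod(a: float, b: float) -> float:
    """MinMod slope limiter: pick the smaller-magnitude slope when both
    candidates share a sign, and zero at a local extremum (TVD property)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def interface_states(u_left: float, u_center: float, u_right: float):
    """MUSCL-style reconstruction of a cell-centered value at the two
    faces of the central cell, using the limited slope."""
    slope = minmod(u_center - u_left, u_right - u_center)
    return u_center - 0.5 * slope, u_center + 0.5 * slope
```

At a smooth monotone run the full slope is used, while at an extremum the reconstruction falls back to the piecewise-constant (first-order) value, which is what suppresses spurious oscillations near shocks.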
The galaxies were set inside a computational box of ${\rm L_{box}}=400$ kpc.
The coarse level of the simulation corresponds to $\ell_{\rm min}=7$ and
$\Delta x_{\rm coarse}=3.125$ kpc. We allowed 10 levels of refinement to get a
maximum resolution at $\ell_{\rm max}=17$ of $\Delta x_{\rm min}=3.05$ pc. The
refinement of a cell is triggered if i) its total mass is more than 8 times
the initial mass resolution, or ii) the local Jeans length is resolved by
fewer than 4 cells (Truelove et al., 1997). Considering grid regions where the
number density is above 0.01 cm$^{-3}$, the coarsest refinement we find is at
level 12, with $\Delta x\approx 97$ pc, while cells at refinement levels 14
and 15 account for almost $\sim 60\%$ of the total number of cells in such
regions. Above these, level 16 accounts for $\sim 16\%$ of the cells and level
17 for $\sim 8\%$.
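The quoted cell sizes follow directly from $\Delta x(\ell)=L_{\rm box}/2^{\ell}$; a quick illustrative check (names are ours):

```python
L_BOX_KPC = 400.0  # simulation box size [kpc]

def cell_size_kpc(level: int) -> float:
    """Cell size at a given AMR refinement level: Delta x = L_box / 2**level."""
    return L_BOX_KPC / 2 ** level

print(cell_size_kpc(7))         # coarse level: 3.125 kpc
print(cell_size_kpc(17) * 1e3)  # finest level: ~3.05 pc
print(cell_size_kpc(12) * 1e3)  # level 12: ~97.7 pc
```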
Our simulation includes optically thin (no self-shielding) gas cooling
following the Sutherland & Dopita (1993) model down to temperature $T=10^{4}$
K with a contribution from metals, assuming a primordial composition of the
various heavy elements. Below this temperature, the gas can cool down to
$T=10$ K due to metal line cooling (Dalgarno & McCray, 1972).
We adopted a star formation number density threshold of $n_{0}=250\,{\rm
H\,cm^{-3}}$ with a star formation efficiency ${\rm\epsilon_{\star}=0.03}$
(e.g. Rasera & Teyssier, 2006; Dubois & Teyssier, 2008). When a cell reaches
the conditions for star formation, stellar (population) particles can be
spawned following a Poisson distribution with a mass resolution of
$m_{\star,\rm res}\approx 2\times 10^{2}$ $\rm M_{\odot}$. In order to ensure
numerical stability we do not allow cells to convert more than $50\%$ of the
gas into stars within a single time step.
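The spawning step can be sketched as follows (a schematic of the recipe described above, not the RAMSES implementation; the free-fall-time star formation law and the function names are our own assumptions):

```python
import math
import random

M_STAR_RES = 2e2  # stellar particle mass resolution [Msun]

def poisson_sample(mean: float, rng: random.Random) -> int:
    """Knuth's Poisson sampler (adequate for the small means involved)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def spawn_stars(gas_mass: float, dt: float, t_ff: float,
                eff: float = 0.03, rng: random.Random = None) -> int:
    """Number of stellar particles formed in one step: the expected stellar
    mass is eff * gas_mass * dt / t_ff, the particle count is Poisson
    distributed, and at most 50% of the cell gas may turn into stars."""
    rng = rng or random.Random()
    expected_mass = eff * gas_mass * dt / t_ff
    n = poisson_sample(expected_mass / M_STAR_RES, rng)
    n_max = int(0.5 * gas_mass / M_STAR_RES)
    return min(n, n_max)
```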
After 10 Myr the most massive stars explode as SNe, releasing a specific
energy of $E_{\rm SN}=10^{51}$ erg per 10 M⊙ and returning 20 per cent of the
stellar particle mass back into the gas with a metal yield of $0.1$, inside a
sphere of radius $r_{\rm SN}=2\Delta x_{\rm min}$. To capture the delayed
release of stellar feedback energy from non-thermal processes, we used the
delayed-cooling implementation of SNe feedback (Teyssier et al., 2013). In
this work we use $t_{\rm diss}\approx 0.25$ Myr, and the energy threshold
$e_{\rm NT}$ is the one associated with a turbulent velocity dispersion
$\sigma_{\rm NT}\approx 50$ km/s, consistent with our resolution (see Dubois
et al., 2015; Prieto & Escala, 2016, for details).
Figure 2.— BH particle separation as a function of time. The gray vertical
lines mark the first, second and third pericenter passes of the orbit (from
left to right). After the third pericenter the BHs approach each other until
they finally merge at ${\rm t_{merger}}=1.275$ Gyr, $\sim$ 800 Myr after the
first pericenter pass.
In order to follow the evolution of the central BH in the galaxies, we used
sink particles (Bleuler & Teyssier, 2014). We computed the mass accretion rate
onto the BH using the standard Bondi-Hoyle (Bondi, 1952) model, $\dot{M}_{\rm
Bondi}$. In such accretion implementation the gas density is computed as an
average weighted value taken from the sink’s cloud particles using a kernel
following Krumholz et al. (2004) as presented in Dubois et al. (2012).
Throughout the simulation we cap the accretion rate at the Eddington rate. The
initial BH mass for both galaxies is $M_{\rm BH}=10^{6}\,{M_{\odot}}$,
approximately lying on the $M_{\rm BH}-\sigma_{\star}$ relation. Assuming a
sound speed of $c_{s}\approx 10-30$ km/s (for a gas temperature
$T_{gas}\approx 10^{4}-10^{5}$ K; note that most of the time the gas is at
$10^{4}$ K, while $10^{5}$ K is associated with AGN activity when the gas is
expelled from the BH vicinity), the BH influence radius at the beginning of
the simulation is $R_{BH}=G\,M_{BH}/c_{s}^{2}\approx\,420-42\,{\rm
pc}\,\approx\,140-14\,\Delta x_{\rm min}$. In other words, we can resolve the
BH influence radius with several cells (note that such radius increases
throughout the simulation). In addition to the aforementioned grid refinement
criteria, we impose that a cubic volume of 20 cells on a side surrounding each
sink particle stays fixed at the maximum spatial resolution, helping to
resolve the BH influence radius and to account for any potentially non-trivial
physical processes occurring nearby. Finally, the BH particles merge if their
separation is smaller than $d_{\rm merge}=2\Delta x_{\rm min}$ and they are
gravitationally bound.
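The accretion prescription described above (Bondi-Hoyle rate capped at Eddington) can be sketched in cgs units as follows (an illustrative implementation of the standard formulas, not the RAMSES sink-particle code; function names are ours):

```python
import math

G_CGS = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
MSUN_G = 1.989e33    # solar mass [g]
SIGMA_T = 6.652e-25  # Thomson cross-section [cm^2]
M_P = 1.673e-24      # proton mass [g]
C_CGS = 2.998e10     # speed of light [cm/s]
YR_S = 3.156e7       # year [s]

def mdot_bondi(m_bh, rho, c_s, v_rel=0.0):
    """Bondi-Hoyle rate [Msun/yr] for a BH of m_bh [Msun] in gas of density
    rho [g/cm^3], sound speed c_s and relative velocity v_rel [cm/s]."""
    m = m_bh * MSUN_G
    mdot = 4.0 * math.pi * G_CGS**2 * m**2 * rho / (c_s**2 + v_rel**2)**1.5
    return mdot / MSUN_G * YR_S

def mdot_edd(m_bh, eps_r=0.1):
    """Eddington accretion rate [Msun/yr] for radiative efficiency eps_r."""
    m = m_bh * MSUN_G
    l_edd = 4.0 * math.pi * G_CGS * m * M_P * C_CGS / SIGMA_T
    return l_edd / (eps_r * C_CGS**2) / MSUN_G * YR_S

def mdot_accretion(m_bh, rho, c_s, v_rel=0.0):
    """Bondi-Hoyle rate capped at the Eddington rate, as in the text."""
    return min(mdot_bondi(m_bh, rho, c_s, v_rel), mdot_edd(m_bh))
```

For the initial $M_{\rm BH}=10^{6}\,M_{\odot}$ and $\epsilon_{\rm r}=0.1$ this gives $\dot{M}_{\rm Edd}\approx 0.02\,M_{\odot}/$yr, so dense nuclear gas can easily drive the Bondi rate above the cap.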
We have also included AGN feedback from the central BHs. AGN feedback is
modeled with thermal energy input (Teyssier et al., 2011; Dubois et al.,
2012). The rate of energy deposited by the BH inside the injection radius
$r_{\rm inj}\equiv 4\Delta x_{\rm min}$ is
$\dot{E}_{\rm AGN}=\epsilon_{\rm c}\epsilon_{\rm r}\dot{M}_{\rm BH}c^{2}.$ (1)
In the above expression, $\epsilon_{\rm r}=0.1$ is the radiative efficiency
for a standard thin accretion disc (Shakura & Sunyaev, 1973) and
$\epsilon_{\rm c}=0.15$ is the fraction of this energy coupled to the gas in
order to reproduce the local BH-galaxy mass relation (Dubois et al., 2012). As
explained in Booth & Schaye (2009), in order to avoid gas over-cooling the AGN
energy is not released instantaneously at every time step $\Delta t$ but is
accumulated until the surrounding gas temperature can be increased by $\Delta
T_{\rm min}=10^{7}$ K. In order to reduce the heating effect of the AGN we
have included an extra multiplicative factor of 0.1 in $\dot{E}_{\rm AGN}$;
without it the feedback is too effective at preventing accretion onto the
central BHs, and the reduction is consistent with the scaling of radiative
efficiency with BH mass found by Davis & Laor (2011). Such a factor can be
interpreted as a lower radiative efficiency, a lower energy coupling or a
combination of both effects. This lowering of the feedback factor is also
consistent with the energetics of NGC 2623 being dominated by star formation
over AGN feedback (Privon et al., 2013).
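Numerically, the deposited power of equation (1), including the extra 0.1 factor, reads (illustrative only; the function name is ours):

```python
C_CGS = 2.998e10   # speed of light [cm/s]
MSUN_G = 1.989e33  # solar mass [g]
YR_S = 3.156e7     # year [s]

def edot_agn(mdot_bh, eps_r=0.1, eps_c=0.15, extra=0.1):
    """AGN energy deposition rate [erg/s] (eq. 1) for an accretion rate
    mdot_bh in Msun/yr, including the extra 0.1 multiplicative factor."""
    mdot_cgs = mdot_bh * MSUN_G / YR_S  # convert to g/s
    return extra * eps_c * eps_r * mdot_cgs * C_CGS**2
```

For a typical peak rate of $10^{-2}\,M_{\odot}/$yr this yields $\dot{E}_{\rm AGN}\sim 10^{41}-10^{42}$ erg/s.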
Figure 3.— Enclosed SFR at different distances from the BH particle of each
galaxy as a function of time: inside 5 kpc in black, 1 kpc in violet, 0.5 kpc
in blue, 0.2 kpc in green, 0.1 kpc in orange and 0.05 kpc in red. The gray
solid vertical lines mark the pericenters of the orbit. After each pericenter
pass there is a clear SF burst. Both the second and the third pericenter
passes trigger nuclear SF on scales below a few hundred pc, whereas the first
produces more extended SF at $\sim$ kpc scales.
## 3\. Results
Figure 1 shows four snapshots of the merger, namely (from top to bottom) when
the BHs are at a distance of 10 kpc, at the first pericenter, at the first
apocenter and at coalescence (marked by the time at which the SMBHs merge).
After the first pericenter pass the system develops two prominent tidal tails,
producing a “double-tailed” object, as can be appreciated in the
first-apocenter panel of figure 1.
Before showing the results on the SF properties, BH growth and gas dynamics
throughout the merger, it is illustrative to look at the evolution of the BH
separation shown in figure 2, a good proxy for the separation of the galactic
centers. The figure shows the times of the pericenter passes (solid gray
vertical lines). After the third pericenter the BHs start to orbit around each
other, rapidly decreasing their separation until they merge at ${\rm
t_{merger}}=1.275$ Gyr, $\sim$ 800 Myr after the first pericenter. Note that
the simulated BH merger time depends on the minimum separation adopted for BH
coalescence (which in our case resolves the sphere of influence of the BHs);
if the minimum separation were further decreased, the BHs would spend more
time orbiting each other. The pericenter passes marked by vertical lines in
figure 2 will guide the discussion below.
### 3.1. Star formation rate
A number of works have shown how galactic mergers and interactions trigger
bursts of SF (e.g. Barnes & Hernquist, 1991; Cox et al., 2006; Di Matteo et
al., 2007; Hopkins et al., 2008; Moreno et al., 2019). SF in mergers is not
restricted to the galactic nuclear regions (the inner $\sim$ kpc) but can also
be triggered in the gaseous tails of the system (e.g. Soifer et al., 1984;
Keel et al., 1985; Lawrence et al., 1989). In this analysis we focus on the
nuclear (not tail) SF bursts produced by the enhancement of gas density due to
the galactic interactions, and we do not study the extended, in-tail SF (e.g.
Barnes, 2004; Chien & Barnes, 2010; Renaud et al., 2014).
Figure 3 shows the enclosed SFR inside a given radius for both galaxies
throughout the evolution. The SFR is computed as the ratio between the total
stellar mass produced in the last $\sim$ 3.75 Myr and a characteristic time
defined as the mass-weighted stellar age:
$t_{\rm\star,avg}=\frac{1}{\sum_{i}m_{\star,i}}\sum_{i}t_{\star,i}m_{\star,i}.$
(2)
Then,
${\rm SFR}=\frac{1}{t_{\rm\star,avg}}\sum_{i}m_{\star,i},$ (3)
where $m_{\star,i}$ and $t_{\star,i}$ are the mass and age of each new stellar
population particle, respectively. We computed the SFR inside spheres centered
at the BH position with different radii, namely $R_{\rm
SFR}=5,\,1,\,0.5,\,0.2,\,0.1,\,0.05$ kpc.
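Equations (2)-(3) translate directly into code (an illustrative sketch; the function name is ours):

```python
def sfr_from_particles(masses, ages):
    """SFR following eqs. (2)-(3): total mass of the newly formed stellar
    particles divided by their mass-weighted mean age t_star_avg."""
    m_tot = sum(masses)
    if m_tot == 0.0:
        return 0.0
    t_avg = sum(t * m for m, t in zip(masses, ages)) / m_tot
    return m_tot / t_avg
```

For example, two $200\,M_{\odot}$ particles of ages 1 and 3 Myr give $t_{\star,\rm avg}=2$ Myr and ${\rm SFR}=400/2\times10^{6}=2\times10^{-4}\,M_{\odot}/$yr.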
Figure 4.— Kennicutt-Schmidt relation for both galaxies (small circles).
Different colors mark different times. The black solid line shows the
Kennicutt (1998) relation, the green solid line the Daddi et al. (2010)
relation for normal galaxies, and the blue solid line the Daddi et al. (2010)
relation for starbursts. The thin lines mark the uncertainty of each relation.
The SFR in figure 3 shows intermittent behavior due to both stellar and BH
feedback. Before any pericenter passage between the galaxies, at scales of a
few kiloparsecs, the formation rates generally fluctuate around $\sim$ 0.1
$M_{\odot}/$yr. We see SF episodes lasting $\sim 30-40$ Myr after the
pericenter passes; these relatively short periods are explained by the
response of the medium to feedback from the massive stellar particles. We also
note that star formation evolves differently in the two galaxies after their
first encounter. At $\sim$ 70 Myr before the first pericenter pass (376 Myr
after the start of the simulation, see the first panel of figure 1) the spiral
arms of the galaxies start to collide (at this time the galactic centers are
10 kpc apart). This working interface progressively increases the gas density,
translating into a burst of SF.
Once the galaxies reach the first pericenter, a clear SF burst appears across
all distance scales in galaxy 1, with the SFR reaching a few $M_{\odot}/$yr at
distances over $0.1$ kpc: an enhancement of the formation rates exceeding two
orders of magnitude, stemming from the nuclear regions of the system. This
strong burst lasts $\gtrsim$ 30 Myr (longer on larger scales), after which the
system is left with a relatively high but slowly decreasing nuclear SFR. In
the case of galaxy 2 there is a milder increase in nuclear star formation at
$0.1-0.2$ kpc after the first passage but, as in the first system, there is
also a delayed strong SF episode in the outer region of the galaxy, itself a
response to the collision of the galactic arms. The collision of the spiral
arms evolves more slowly than the central-region collisions, which explains
why the high SFRs at large scales are more persistent than the nuclear ones at
this stage. After this first passage we see a steady decline in star formation
for galaxy 2 (at least until the next close passage).
The second and third pericenter encounters show prominent increases in the SF
activity of both galaxies, with a clear enhancement on all galactic scales.
The second encounter first shows a larger SF enhancement on extended scales;
then, with some delay and at the close third pericenter passage, star
formation in the nuclear regions becomes a highly prominent feature. These
high star formation rates are maintained for around $50-100$ Myr, after which
the central black holes merge and, as the system starts to stabilize, star
formation declines.
Figure 5.— Mass accretion rate onto the BHs (black solid line). The solid
vertical gray lines mark the pericenters of the orbit. The Eddington mass
accretion rate limit is shown as the blue solid line. Both the second and the
third pericenters produce a clear increase in the BHAR.
These results show a clear correlation between pericenter passes and increases
in SF. The SF bursts are localized in different galactic regions depending on
the stage of the merger: the first passage triggers extended SF (above $\sim$
kpc, due to the collision of the spiral arms), whereas the second and third
pericenters produce new stars in the nuclear region (inside $\sim$ 0.5 kpc).
The SFR values reached in the simulation at the pericenters are in the range
$1-10\,M_{\odot}/$yr, below the SFRs measured by Evans et al. (2008),
corresponding to $\sim\,50-90\,M_{\odot}/$yr, and also below the
$\sim\,70\,M_{\odot}/$yr found by Howell et al. (2010) for a system with
initial conditions similar to ours (NGC 2623). The simulated values are closer
to the $8\,M_{\odot}/$yr rate found for the system’s recent past by
Cortijo-Ferrero et al. (2017). These SFR values are realistic for a merger
system (Pearson et al., 2019), although they would put our simulation below
typical starburst-galaxy rates.
Figure 4 shows the Kennicutt-Schmidt relation (Schmidt, 1959; Kennicutt, 1998,
hereafter K98) for both galaxies as a function of time. In order to compute
the galactic disc surface density $\Sigma_{\rm gas}$ and the surface SFR
$\Sigma_{\rm SFR}$ we have defined a radius $R_{\rm disc}$ and a height
$h_{\rm disc}$ inside a box with 12 kpc of side centered at the BH position;
the SFR is computed within this cylinder. The equatorial plane of the cylinder
is constructed with a point and a normal vector, namely the BH position and
the gas angular momentum vector computed inside 2 kpc from the BH position
(see appendix A for a discussion about rotational center). Inside the 12 kpc
side box we compute the enclosed mass in both the positive and the negative
$\hat{z}$ direction as a function of height $z$. The disc height $h_{\rm
disc}$ corresponds to the altitude $z$ where the cylinder contains 90% of the
baryonic mass. Following an analogous method we computed the radius $R_{\rm
disc}$ as the radius where for a cylinder height $h_{\rm disc}$ the disc
contains 90% of the baryonic mass. After this procedure we define
$\Sigma_{\rm gas}=\frac{M_{\rm gas}}{\pi R_{\rm disc}^{2}},$ (4)
where $M_{\rm gas}$ is the gas mass inside the cylinder and
$\Sigma_{\rm SFR}=\frac{{\rm SFR}}{\pi R_{\rm disc}^{2}},$ (5)
with the SFR computed each $\sim$ 3.75 Myr.
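The disc-measurement procedure above can be sketched as follows (a schematic NumPy implementation under our reading of the text; function names are ours):

```python
import numpy as np

def disc_geometry(z, r, mass, frac=0.90):
    """Height h_disc enclosing `frac` of the baryonic mass along z, then the
    radius R_disc enclosing `frac` of the mass inside that slab."""
    z, r, mass = map(np.asarray, (z, r, mass))
    order = np.argsort(np.abs(z))
    cum = np.cumsum(mass[order])
    h_disc = np.abs(z)[order][np.searchsorted(cum, frac * cum[-1])]
    inside = np.abs(z) <= h_disc
    order_r = np.argsort(r[inside])
    cum_r = np.cumsum(mass[inside][order_r])
    r_disc = r[inside][order_r][np.searchsorted(cum_r, frac * cum_r[-1])]
    return h_disc, r_disc

def surface_densities(m_gas, sfr, r_disc):
    """Sigma_gas and Sigma_SFR of eqs. (4) and (5)."""
    area = np.pi * r_disc**2
    return m_gas / area, sfr / area
```

The coordinates z and r are measured in the cylinder frame defined by the BH position and the gas angular momentum vector, as described in the text.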
Both systems start near the Daddi et al. (2010) (hereafter D10) starburst SF
relation and evolve between it and the usual K98 relation (blue-cyan dots),
showing no clear transition around the first pericenter passage. After
$\sim\,600$ Myr (green-to-orange transition), where we found a steady decline
in the SFR in figure 3, galaxy 2 starts falling below both the K98 and D10
relations, showing that star formation cannot keep up with the amount of gas
stripped from the galaxy by the merger interaction. Later pericenter passages
bring both galaxies above the D10 starburst relation and, after the violent
episodes of both SF and AGN feedback that the systems undergo, the eventually
merged galaxy evolves progressively to the region below the D10 normal-galaxy
relation, showing a clear decrease in SFR (see Renaud et al., 2014).
Figure 6.— BH mass evolution. The solid black line shows the BH mass of galaxy
1 and the solid blue line that of galaxy 2. The solid vertical gray lines mark
the pericenters of the orbit.
### 3.2. Black hole evolution
Observational evidence suggests that galactic encounters can trigger AGN
activity (e.g. Veilleux et al., 2002; Giavalisco et al., 2004; Treister et
al., 2012). In order to feed the BHs, the galactic gas must reach the sphere
of influence of the central massive objects. To accrete onto a BH, gas
orbiting around it must lose angular momentum, resulting in an inward gas mass
flow in the galaxy. This can be triggered by gravitational torques acting on
the gas due to the galaxy-galaxy interaction (e.g. Barnes, 1988; Barnes &
Hernquist, 1991; Di Matteo et al., 2005; Cox et al., 2008) or, in general, by
any type of torque (gravitational, pressure-gradient/hydrodynamic, magnetic or
viscous) capable of changing the angular momentum of the gaseous component of
the galaxy.
Figure 5 shows the BH mass accretion rate as a function of time for both
galaxies. The black hole accretion rates (BHAR) oscillate in the range $\sim$
$10^{-2}-10^{-4}\,M_{\odot}/$yr over the first $\sim 600$ Myr, where we see
galaxy 1 approaching (and eventually reaching) the Eddington limit on several
occasions. After the first pericenter passage, AGN feedback strongly regulates
the accretion rates, lowering them by at least an order of magnitude.
Figure 7.— BH mass-bulge stellar velocity dispersion relation. Different
colors mark different times. Filled circles mark the relation taking into
account all stars inside 1 kpc around the central BH and empty squares mark
the relation for stars inside 0.5 kpc around the central BH. The broad black
solid line shows the McConnell et al. (2011) relation, the broad blue solid
line shows the McConnell & Ma (2013) relation and the broad green solid line
shows the Gültekin et al. (2009) relation. The thin green, blue and black
lines show the corresponding relations 0.4 dex above and below the central
ones.
Following the low BHAR after the first pericenter passage, in the two
subsequent passes both systems exhibit clear peaks due to the funnelling of
gas towards the central galaxy regions. These peaks are more pronounced in
galaxy 1 than in galaxy 2, the first reaching BHAR values of a few
$10^{-2}\,M_{\odot}/$yr, whilst the second only shows weaker (albeit
pronounced) peaks of $\sim 10^{-3}\,M_{\odot}/$yr. After these two last
encounters the BHs merge (just before which galaxy 1 accreted at the Eddington
rate for a short period of time).
Figure 6 shows the BH masses as a function of time. Because of the differences
in their mass accretion rates, the two BH masses evolve differently. BH1 shows
a clearer mass increase at the first pericenter than BH2, as can be seen from
its mass accretion rate (figure 5). After undergoing this strong mass-gaining
episode, BH1’s mass stays nearly constant for $\sim 600$ Myr at $\sim
1.8\times 10^{6}\,M_{\odot}$. During this interval the galaxies reached their
first pericenter, producing the enhancement of the BH1 mass accretion rate
and, consequently, of its mass. This BH growth is not associated with galactic
bulge coalescence and shows that BHs can grow in stages before the galactic
bulges merge (Medling et al., 2013).
Figure 8.— Inward gas mass accretion rate as a function of time at different
distances from the galactic stellar center of mass: 2 kpc in black, 1 kpc in
violet, 0.5 kpc in blue, 0.1 kpc in green, 0.05 kpc in orange and 0.01 kpc in
red. The solid vertical gray lines mark the pericenters of the galactic orbit.
The figure shows that pericenters correlate with peaks of mass accretion rate.
In contrast to BH1, the second compact object also shows a clear increase at
the first pericenter, but it is substantially smaller, as can be seen from its
low mass accretion rate in figure 5 (which caps at $\sim
10^{-2}\,M_{\odot}/$yr, not necessarily low but with a large amount of
variability). The evolution then becomes nearly flat, with little growth until
the merger. At the time of coalescence, BH1 had grown by accretion nearly
twice as much as BH2, and after the merger the remnant BH ends up at $\sim
3.8\times 10^{6}\,M_{\odot}$. This final value would put the final BH mass
well below those of the LIRG galaxy mergers (like NGC 2623) found in the GOALS
sample (Haan et al., 2011). Although this alone indicates neither that the
black holes do not accrete enough gas through the merger evolution nor that
the initial BH masses are wrong, we can take the analysis further by checking
how the M-$\sigma$ relation evolves throughout the simulation.
Given the BH mass and the stellar velocities at each time, it is possible to
compute the $M_{\rm BH}-\sigma_{\star}$ (“M-sigma”) relation (McConnell et
al., 2011; McConnell & Ma, 2013; Gültekin et al., 2009). Figure 7 shows this
relation as a function of time. We have initialized the simulation with a BH
mass $M_{\rm BH}=10^{6}$ M⊙ and a bulge velocity dispersion
$\sigma_{b}\approx 110$ km$/$s, which places our setup within the empirical
relation of McConnell & Ma (2013) when using the velocity dispersion from the
$<0.5$ kpc region. Even though it is normal in a merger for the velocity
dispersion to grow quickly relative to the BH mass, owing to the strong
dynamical perturbation the bulges suffer (so that the measured M$-\sigma$
values are expected to stray partially to the right of the relation), the
figure shows both galaxies quickly moving far away from the empirical
relation. This means that the feeding of the BHs is unable to catch up with
the growth of the velocity dispersion (note that after the stellar bulges
merge the velocity dispersion should not increase further, and BH feeding
could slowly bring the system back to the empirical relation as the galaxy
stabilizes).
To further support that the BHs are growing less than expected, note for
instance that BH1 grows from $10^{6}\,M_{\odot}$ to $\sim 2\times
10^{6}\,M_{\odot}$ in $1.2$ Gyr; with an Eddington accretion rate
$\dot{M}_{\text{Edd}}(t)=M_{\text{BH}}(t)/t_{\text{Sal}}$ (the Salpeter
timescale being $t_{\text{Sal}}=4.5$ Myr for our radiative efficiency), this
corresponds to an average accretion rate of $\approx 2.5\times
10^{-3}\dot{M}_{\text{Edd}}$, well under the typical feeding expected for
radiative AGN feedback to be relevant.
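The quoted number follows from a one-line estimate (an illustrative check using the growth values and the Salpeter timescale stated above; the function name is ours):

```python
def mean_bhar_in_edd_units(m0, m1, dt_yr, t_sal_yr=4.5e6):
    """Average accretion rate for growth m0 -> m1 [Msun] over dt_yr,
    expressed in units of the Eddington rate Mdot_Edd = M_BH / t_Sal,
    evaluated at the mean mass."""
    mdot_avg = (m1 - m0) / dt_yr
    mdot_edd = 0.5 * (m0 + m1) / t_sal_yr
    return mdot_avg / mdot_edd

ratio = mean_bhar_in_edd_units(1e6, 2e6, 1.2e9)  # ~2.5e-3, as in the text
```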
The apparent culprit of this overall lower-than-expected average BHAR is the
amount of thermal feedback deposited on the grid, which heats the gas in the
vicinity of the BHs too effectively for accretion to be steadily maintained.
This is further evidenced by the fact that, even though torques at the Hill
radius are sustained throughout the simulation (see section 3.4), they do not
translate into a feeding of the black holes; instead, the only important
feeding episodes occur in the initial stages of the simulation and at close
passages (where material is transported towards the center efficiently enough
for the gas dynamics to overcome the heating effect of feedback).
The straightforward AGN feedback approach that we use, from Dubois et al.
(2012), was developed for cosmological simulations, and even though it has
seen successful use in that context, the main difference here is that, at the
high resolutions we achieve, such a simple recipe may fail to capture the
correct small-scale physics governing the heating of the central bulges by the
BHs. It has been shown that different methods of dealing with BH feedback may
yield quite different results, and that the direct injection of thermal energy
into galactic cores may produce strong and persistent outflows or cavities in
the central regions that suppress accretion (Wurster & Thacker, 2013). It is
then imperative to try to capture the more detailed heating structure produced
by the radiative transfer of the soft X-ray photons responsible for
quasar-mode feedback. There have been successful efforts at capturing the
heating rate from the expected X-ray emission of the central AGN (Choi et al.,
2012), but this recipe is still, at its heart, a direct injection of energy
into the galactic core and does not account for radiative transfer effects.
A more consistent option for improving our feedback recipe would be to couple
radiation to our hydrodynamics through RAMSES-RT, the code presented by
Rosdahl et al. (2013) and Rosdahl & Teyssier (2015). Quasar feedback has
already been modelled in this way (Bieri et al., 2017): it relies on coupling
the hydrodynamics to the radiative transfer of photons introduced into the
grid by the sink particle through an AGN template spectral distribution,
allowing for a detailed accounting of the production and reprocessing of X-ray
radiation (and therefore of the overall heating mechanisms) in the innermost
regions. The introduction of the RT module would also allow for a more
consistent modelling of SN feedback, and presents an opportunity for future
work.
Figure 9.— Left: Total gravitational torque on the gas associated with inward
mass transport at different distances from the stellar center of mass: 2 kpc
in black, 1 kpc in violet, 0.5 kpc in blue, 0.1 kpc in red, 0.05 kpc in orange
and 0.01 kpc in green. The solid vertical gray lines mark the pericenters of
the orbit. Right: Same as the left column but for the hydrodynamic torque. The
figure shows that pericenters are associated with increases in torques.
### 3.3. Gas accretion rate
In the last section we showed that peaks of BH mass accretion rate correlate
with the pericentric passages, suggesting a connection between close
encounters and enhancement of gas inflows in galaxies. Under this scenario it
is useful to look at the inward gas mass accretion rate at different radii.
Figure 8 shows the inward gas mass accretion rate at different distances from
the stellar center of mass. As in the KS computation, we have constructed a
disc perpendicular to the gas angular momentum vector. Then, in order to
compute the gas accretion rate, we locate the stellar center of mass inside 2
kpc around the BH of each galaxy. Given the position $\vec{r}_{\rm CM}$ and
bulk velocity $\vec{v}_{\rm CM}$ of the center of mass, we compute the
inflowing gas mass accretion rate as
$\dot{M}_{\rm g}=\sum_{i}\rho_{i}\left(\vec{v}_{i}-\vec{v}_{\rm
CM}\right)\cdot\Delta\vec{A}_{i},$ (6)
where $\vec{v}_{i}$ is the gas cell velocity and $\Delta\vec{A}_{i}$ is the
surface element crossed by the gas in a direction parallel to the radial
vector $\vec{r}_{i}-\vec{r}_{\rm CM}$, with $\vec{r}_{i}$ the gas cell
position. The sum is computed inside an annulus of width $\Delta x_{\rm min}$
for $r\leq$ 200 pc, corresponding to refinement level 17, and of width
$32\,\Delta x_{\rm min}$ for $r\geq$ 500 pc, corresponding to refinement level
12.
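As a concrete check of the discrete flux sum in equation (6), the sketch below evaluates it for a synthetic uniform radial inflow of speed $v_{0}$ through a thin cylindrical annulus of radius $r$ and height $h$, where the analytic answer is $2\pi r h\rho v_{0}$. The geometry and all names are illustrative, not taken from the simulation.

```python
# Discrete version of eq. (6): sum rho_i (v_i - v_CM) . dA_i over N azimuthal
# segments of a thin cylindrical annulus. Outward surface normals and a purely
# radial inflow v = -v0 * r_hat make the flux sum analytically checkable.
import math

def inflow_rate(r, h, rho, v0, n=1000, v_cm=(0.0, 0.0)):
    """Mass inflow rate through a cylindrical annulus for v = -v0 * r_hat."""
    mdot = 0.0
    dA = 2.0 * math.pi * r * h / n              # outward surface element magnitude
    for k in range(n):
        phi = 2.0 * math.pi * k / n
        nx, ny = math.cos(phi), math.sin(phi)   # outward unit normal
        vx, vy = -v0 * nx, -v0 * ny             # radial inflow velocity
        flux = rho * ((vx - v_cm[0]) * nx + (vy - v_cm[1]) * ny) * dA
        mdot += flux
    return -mdot   # sign convention: positive = inflow

print(inflow_rate(1.0, 0.1, 2.0, 3.0))  # analytic: 2*pi*1.0*0.1*2.0*3.0 ≈ 3.77
```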
Figure 8 shows a clear correlation between peaks of gas mass accretion rate on
scales $\gtrsim 0.5$ kpc and close passes for both galaxies (first and third
panel from top). The first pericenter pass is associated with a gas mass
accretion rate as high as $\sim 5\,M_{\odot}/$yr. A few Myr before the second
pericenter pass, mass accretion rates reach $\sim 5-10\,M_{\odot}/$yr at large
scales. Such episodes of inflowing mass on large
scales are consistent with enhancement of SF in pericenters as shown in figure
3. The causal relation between these two phenomena can be seen by comparing
the SFR and the mass accretion rate: the close passes produce mass inflows
which are followed after a few Myr by bursts of SF.
At small scales (below $\sim\,100$ pc), the enhancement in mass accretion rate
at the first pericenter is not as significant as at large scales, except for
very short bursts of inflow at $0.05-0.01$ kpc scales in Galaxy 1. In this
brief burst (which happens at distances below the order of the BH sphere of
influence) the inflowing mass accretion rate in galaxy 1 reaches values above
$\sim 10\,M_{\odot}/$yr in the nucleus at $10$ pc, but rates are generally
around $\sim 1\,M_{\odot}/$yr and are sustained in a somewhat irregular
fashion before the first encounter. Galaxy 2 shows a slightly more consistent
mass accretion rate over the same period at similar scales, but rates are not
perceivably higher. The second and third pericenter passes show an enhancement
in mass accretion at small scales: the amount of inflowing mass is able to
trigger (after a few Myr) SF bursts and feed the BHs, as shown in the previous
sections. In particular, at the third pericenter pass the gas inflow rate
approaches $\sim 10\,M_{\odot}/$yr due to the coalescence of the gas bulges.
We conclude that there is a correlation between peaks of gas mass accretion
rate, SFR, and BHAR associated with pericenter passages. In other words, close
galactic encounters trigger mass inflows crossing the BH influence radius,
producing SF bursts and lighting up AGN activity in galactic centers.
### 3.4. Torques on the gas
At this point we have shown that throughout the merger process there are
episodes of efficient gas inflow toward the galaxy centers. In order to fully
understand the origin of mass transport into the galactic center it is
necessary to quantify the torques acting on the gas, so as to link mass inflow
episodes with angular momentum losses (see appendix C).
Figure 9 shows the torques acting on the galactic disc at different radii as a
function of time. Before computing the torques, we have defined the galactic
disc in the same way as for the KS relation and the gas mass transport
computations.
We have computed the torques with respect to the stellar center of mass
$\vec{r}_{\rm CM}$ as a proxy for the rotational center of each spiral galaxy
(see appendix A for a discussion of rotational centers). In order to do that,
it is necessary to set a non-rotating coordinate system free-falling with the
stars. In such a frame, the acceleration of a particle becomes
$\vec{a}^{\prime}_{i}=\vec{a}_{i}-\vec{a}_{\rm CM}$, where $\vec{a}_{i}$ is
the particle acceleration with respect to an inertial reference frame (the
center of the fixed simulation box in our case) and $\vec{a}_{\rm CM}$ is the
acceleration of the stellar center of mass with respect to the same inertial
frame (see appendix B). Then, in the co-moving reference frame the torques can
be computed as
$\vec{\tau}^{\prime}=\sum_{i}m_{i}(\vec{r}_{i}-\vec{r}_{\rm
CM})\times(\vec{a}_{i}-\vec{a}_{\rm CM}).$ (7)
In the previous expression $\vec{r}_{i}$ is the cell position and $m_{i}$ is
the gas cell mass. The acceleration $\vec{a}_{i}$ is the combination of the
gravitational acceleration $-\nabla\phi_{i}$ and the hydrodynamic acceleration
on the gas $-\nabla P_{i}/\rho_{i}$, where $\phi_{i}$ is the gravitational
potential at a given cell and $P_{i}$ is the pressure in the same cell.
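A minimal numerical version of the $z$ component of equation (7) can be sketched as follows. The ring setup is purely illustrative: purely central (radial) accelerations give zero torque about the center of mass, while adding a tangential component $a_{t}$ to each particle on a ring of radius $s$ gives $\tau_{z}=N\,m\,s\,a_{t}$.

```python
# z component of eq. (7): torque about r_cm in the co-moving frame.
import math

def torque_z(parts, r_cm, a_cm):
    """parts: list of (m, (x, y), (ax, ay)). Returns the z torque about r_cm."""
    tau = 0.0
    for m, (x, y), (ax, ay) in parts:
        rx, ry = x - r_cm[0], y - r_cm[1]
        dax, day = ax - a_cm[0], ay - a_cm[1]
        tau += m * (rx * day - ry * dax)   # z component of (r - r_cm) x (a - a_cm)
    return tau

# Ring of particles with central (-a_r r_hat) plus tangential (a_t theta_hat)
# acceleration; only the tangential part contributes to the torque.
parts = []
n, s, m, a_r, a_t = 8, 2.0, 1.5, 4.0, 0.3
for k in range(n):
    phi = 2 * math.pi * k / n
    pos = (s * math.cos(phi), s * math.sin(phi))
    acc = (-a_r * math.cos(phi) - a_t * math.sin(phi),
           -a_r * math.sin(phi) + a_t * math.cos(phi))
    parts.append((m, pos, acc))

print(torque_z(parts, (0.0, 0.0), (0.0, 0.0)))  # n*m*s*a_t = 8*1.5*2.0*0.3 = 7.2
```

In the paper's convention, the quantity plotted in figure 9 is $-\vec{\tau}^{\prime}$, i.e. torques producing net inward transport.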
Because the galactic disc is defined in terms of the disc angular momentum,
negative torques imply a loss of angular momentum and a resulting inward mass
transport. Figure 9 shows $-\vec{\tau}^{\prime}$, i.e. torques producing net
inward mass transport inside an annulus at a given distance from the galactic
centers (the regions without data are dominated by outward mass transport
torques). The sum is computed inside an annulus of width $\Delta x_{\rm min}$
for $r\leq$ 200 pc and $32\,\Delta x_{\rm min}$ for $r\geq$ 500 pc, as in the
gas mass accretion rate computation.
The left column of figure 9 shows that at large scales ($\gtrsim\,0.5$ kpc)
both galaxies show large fluctuations in gravitational torques, with galaxy 1
reaching larger torque values with higher fluctuation. In both galaxies
gravitational torques are more important at 0.5 kpc than at larger radii. This
can be understood in terms of the higher gas concentration at smaller galactic
radii.
The huge fluctuations in gravitational torques at large scales make it
difficult to identify a peak associated with the first pericenter in either
galaxy. In galaxy 1 it is possible to identify a coherent increase at 2 kpc a
few Myr after the first passage. In galaxy 2, on the other hand, no
gravitational torque peak can be identified at large scales.
In galaxy 2 the late pericenter passages are associated with increased
gravitational torques on larger scales (mainly at 0.5 kpc for the second
passage). In galaxy 1 it is more difficult to recognize a torque increase (we
only measure one relevant torque spike at 1 kpc between passages). We also see
some torque activity at 0.5 kpc after the merger of the systems.
Gravitational torques acting on the galactic central region, i.e. less than
100 pc from the stellar center of mass, show a clear enhancement associated
with the second and third pericenter passes but an almost imperceptible change
during the first pericenter pass for both galaxies. Figure 9 shows that the
inner galactic region, besides a very short increase of torque at 10 pc in
galaxy 1 after the first passage, feels the maximum gravitational torque
around the third pericenter pass (with a very big spike at the smallest scales
for galaxy 2 when the systems are about to merge). Such strong torques acting
on the galactic gas produce gas inflows and feed the central massive objects,
lighting up the AGN. It is also noteworthy that gravitational torques feature
most prominently at 100 pc scales, consistent with the BH sphere of influence
aiding gas transport at this distance.
The right column of figure 9 shows the hydrodynamic torques associated with
inward mass transport. At large spatial scales hydrodynamic torques are lower
and more sporadic than gravitational torques. These hydrodynamic torque values
approach the gravitational ones at pericenter passages, where, especially for
galaxy 2, we see features at every passage.
Within the galactic nuclei the hydrodynamic torques match quite well with
gravitational torques at the smallest scales, showing peaks in the same places
where their counterparts do. Such enhancements are at the same level as the
gravitational torques showing that both mechanisms are working to redistribute
matter in the later stages of the merger. In other words, hydrodynamic torques
work in tandem with gravitational torques to redistribute mass and angular
momentum in the galactic disc.
## 4\. Discussion and Conclusions
With the aim of studying the connection between torques and mass transport in
galactic discs, we have simulated a galaxy merger employing realistic initial
conditions based on Privon et al. (2013). The SFR reaches values of
$\sim\,1-10\,M_{\odot}/$yr, below the observational measurements from Evans et
al. (2008) and Howell et al. (2010) for NGC 2623 specifically, but closer to
the values presented in Cortijo-Ferrero et al. (2017). This puts our system
below the star-forming capabilities of a starbursting system, but inside the
expected rates for generic merger systems (Pearson et al., 2019). The final BH
mass of the system is $M_{\rm BH}\approx 3.8\times 10^{6}\,M_{\odot}$, around
one or two orders of magnitude below the usual values presented in Haan et al.
(2011) and below the dispersion of the “M-sigma” relation (Gültekin et al.,
2009). This low BH mass is due to low amounts of accretion stemming from the
effectiveness of feedback at heating the immediate environment around our
sink particles, calling for an improvement of the feedback model at our
resolution; one option is the inclusion of fully coupled
radiation-hydrodynamical feedback (see Bieri et al. (2017)).
Our results confirm that galactic encounters can trigger bursts of SF (e.g.
Barnes & Hernquist, 1991; Mihos & Hernquist, 1996; Springel et al., 2005;
Gabor et al., 2016). The first pericenter pass clearly increases the SF of
both galaxies, but those increases are more evident beyond $\sim$ 500 pc from
the galactic center, where the SFR reaches $\sim$ a few $M_{\odot}/$yr. At
these larger scales the SFR enhancement is due to the gas density increase
triggered by the collision of the gaseous galactic spiral arms. Because the
first pericenter pass has a nuclear separation of $\sim$ 2 kpc, most of the SF
is localized at those distances from the center. In contrast, the second and
third pericenter passes trigger bursts of SF in the inner hundred parsecs,
again reaching $\sim$ a few $M_{\odot}/$yr. At this stage the gas density has
increased due to mass transport, resulting in a prominent nuclear SF burst.
Besides the SFR peaks, the BHAR peaks also correlate with pericenter
encounters. Whereas one of the BHs has a growth rate correlated with its three
pericenter passes, the other correlates better with its second and third
pericenter passes. In both cases it is evident that the second and third
pericenter passes increase the BHAR, reaching values of $\sim 50-100\%$ and
$\sim 25\%$ of the corresponding Eddington limits for the BHs (corresponding
to a few $\sim\,10^{-2}\,M_{\odot}/$yr and a few
$\sim\,10^{-3}\,M_{\odot}/$yr, respectively). Such high mass accretion rates
onto the compact objects will trigger AGN activity.
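The conversion between Eddington fractions and absolute accretion rates can be sketched as below. The radiative efficiency is not restated in this passage, so $\epsilon=0.1$ (the canonical thin-disc value) is only an assumption here; with it, the final BH mass quoted in this section gives an Eddington rate of order $10^{-1}\,M_{\odot}/$yr, so 50-100% of Eddington indeed corresponds to a few $10^{-2}\,M_{\odot}/$yr.

```python
# Eddington accretion rate Mdot_Edd = L_Edd / (eps c^2) in M_sun/yr.
# eps = 0.1 is an assumed canonical value, not taken from the text.
MSUN_G  = 1.989e33      # solar mass [g]
YR_S    = 3.156e7       # seconds per year
C_CGS   = 2.9979e10     # speed of light [cm/s]
L_EDD_1 = 1.26e38       # Eddington luminosity per solar mass [erg/s]

def mdot_edd_msun_yr(m_bh_msun, eps=0.1):
    """Eddington accretion rate in M_sun/yr for a BH of mass m_bh_msun."""
    l_edd = L_EDD_1 * m_bh_msun            # erg/s
    return l_edd / (eps * C_CGS**2) * YR_S / MSUN_G

mdot = mdot_edd_msun_yr(3.8e6)             # final BH mass from this section
print(mdot)                                # ~0.08 M_sun/yr
print(0.5 * mdot, 1.0 * mdot)              # 50-100% -> a few 1e-2 M_sun/yr
```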
Both phenomena described above, i.e., star formation activity and BH
accretion, are driven by the amount of gas available to form stars and to feed
the BHs. Our simulation shows that pericenter passes correlate with peaks of
gas mass accretion rates driving gas mass density variations in the BH
vicinity, i.e. inside its influence radius. The first encounter produces a
direct mass inflow of $\sim 3\,M_{\odot}/$yr outside of $\sim$ 500 pc,
associated with the galaxy-galaxy crossing. This encounter triggers $\sim$ kpc
scale SF in both galaxies. On the other hand, at smaller scales ($r\lesssim$
100 pc) the first pericenter produces a big increase in the mass accretion
rate for one of the galaxies (reaching a short peak of $\sim
10\,M_{\odot}/$yr), and a smaller increase for the second one, but still
enough to produce SF and to feed one of the BHs. The second and third
pericenter passes produce a clear enhancement in mass accretion rate onto the
nuclear galactic region. In fact, at the third closest passage the gas mass
inflow at inner scales is simultaneously high for both systems and as such,
the galactic gas entering the BH sphere of influence efficiently feeds the BHs
and triggers nuclear SF bursts.
Neglecting magnetic fields and viscosity, any variation of the gas angular
momentum is due to torques from both gravitational and pressure-gradient
forces (see appendix C). In other words, the merger triggers changes in the
gas angular momentum through variations in the gravitational potential and in
the gas pressure. The former are produced by the dynamics of the merger, which
is characterized by strong gravitational interactions, and the latter by gas
layers with strong differences in density and/or temperature. Such conditions
naturally arise when both galaxies cross each other and finally merge. We have
shown that pericenter passes correlate with both gravitational and
hydrodynamic torque peaks. In general, gravitational torques dominate over
hydrodynamic torques, but at inner scales pressure-gradient torques can reach
values approaching those of the gravitational ones, helping to transport gas
radially in the galactic disc. These torques redistribute angular momentum,
allowing inward mass transport onto the galactic center. The high resolution
of our simulation shows that such gas inflows can cross the BH influence
radius, producing peaks in the BHAR and triggering SF bursts.
## Acknowledgements
Powered@NLHPC: This research was partially supported by the supercomputing
infrastructure of the NLHPC (ECM-02). The Geryon cluster at the Centro de
AstroIngenieria UC was extensively used for the analysis calculations
performed in this paper. JP is funded by ESO-Chile Comite Mixto grant ORP
79/16. AE acknowledges partial support from the Center for Astrophysics and
Associated Technologies CATA (PFB06) and Proyecto Regular Fondecyt (grant
1181663). G.C.P. acknowledges support from the University of Florida.
## References
* Angulo et al. (2012) Angulo, R. E., Springel, V., White, S. D. M., Jenkins, A., Baugh, C. M., & Frenk, C. S. 2012, MNRAS, 426, 2046
* Armus et al. (2009) Armus, L., et al. 2009, PASP, 121, 559
* Bahcall et al. (1995) Bahcall J. N., Kirhakos S. & Schneider D. P., 1995, ApJ, 447, L1
* Barnes (1988) Barnes J. E., 1988, ApJ, 331, 699
* Barnes (2004) Barnes J. E., 2004, MNRAS, 350, 798
* Barnes & Hernquist (1991) Barnes J. E., & Hernquist L. E. 1991, ApJ, 370, L65
* Bieri et al. (2017) Bieri, R., Dubois, Y., Rosdahl, J., et al. 2017, MNRAS, 464, 1854
* Bleuler & Teyssier (2014) Bleuler A. & Teyssier R., 2014, MNRAS, 445, 4015
* Blumenthal & Barnes (2018) Blumenthal K. A., Barnes J. E., 2018, MNRAS, 479, 3952
* Bondi (1952) Bondi H., 1952, MNRAS, 112, 195
* Booth & Schaye (2009) Booth C. M. & Schaye J., 2009, MNRAS, 398, 53
* Bouché (2010) Bouché N., Dekel A., Genzel R., Genel S., Cresci G., Förster Schreiber N. M., Shapiro K. L., Davies R. I., Tacconi L.
* Bussmann et al. (2012) Bussmann, R. S., et al. 2012, ApJ, 744, 150
* Chien & Barnes (2010) Chien L.-H., & Barnes J. E., 2010, MNRAS, 407, 43
* Choi et al. (2012) Choi E., Ostriker J. P., Naab T., Johansson P. H., 2012, ApJ, 754, 125
* Cortijo-Ferrero et al. (2017) Cortijo-Ferrero C., 2017, A&A, 607,70
* Cox et al. (2006) Cox T. J., Jonsson P., Primack J. R., &Somerville R. S., 2006, MNRAS, 373, 1013
* Cox et al. (2008) Cox T. J., Jonsson P., Somerville R. S., Primack J. R. & Dekel A., 2008, MNRAS, 384, 386
* Daddi et al. (2010) Daddi E., Elbaz D., Walter F., Bournaud F., Salmi F., Carilli C., Dannerbauer H., Dickinson M., Monaco P. & Riechers D., 2010, ApJ, 714, 118
* Dalgarno & MacCray (1972) Dalgarno, A. & McCray, R. A., 1972, ARA&A, 10, 375
* Debuhr et al. (2011) Debuhr J., Quataert E., Ma Ch.-P., 2011, MNRAS, 412, 1341
* Di Matteo et al. (2005) Di Matteo T., Springel V. & Hernquist L., 2005, Nature, 433, 604
* Di Matteo et al. (2007) Di Matteo P., Combes F. & Melchior A. & Semelin B., 2007, A&A, 468, 61
* Dubois & Teyssier (2008) Dubois Y. & Teyssier R., 2008, A&A, 477, 79
* Dubois et al. (2012) Dubois, Y., Devriendt, J., Slyz, A., & Teyssier, R. 2012, MNRAS, 420, 2662
* Dubois et al. (2014) Dubois Y., et al. 2014, MNRAS, 444, 1453
* Dubois et al. (2015) Dubois Y., Volonteri M., Silk J., Devriendt J., Slyz A. & Teyssier R., 2015, MNRAS, 452, 1502
* Duc et al. (1997) Duc P. A., Brinks E., Wink J. E., & Mirabel I. F., 1997, A&A, 326, 537
* Engel et al. (2010) Engel, H., et al. 2010, ApJ, 724, 233
* Evans et al. (2008) Evans A. S., et al. 2008, ApJ, 675, L69
* Gabor et al. (2016) Gabor J. M., Capelo P. R., Volonteri M., Bournaud F., Bellovary J., Governato F. & Quinn T., 2016, A&A, 592, 62
* Genel et al. (2014) Genel Sh., Vogelsberger M., Springel V., Sijacki D., Nelson D., Snyder G., Rodriguez-Gomez V., Torrey P. & Hernquist L., MNRAS, 2014, 445, 175
* Giavalisco et al. (2004) Giavalisco, M., Ferguson, H. C., Koekemoer, A. M., et al. 2004, ApJ, 600, L93
* Gültekin et al. (2009) Gültekin K., Richstone D. O., Gebhardt K., Lauer T.R., Tremaine S., Aller M. C., Bender R., Dressler A., Faber S. M., Filippenko A. V., Green R., Ho L. C., Kormendy J., Magorrian J., Pinkney J. & Siopis C., 2009, ApJ, 698, 198
* Haan et al. (2011) Haan S., et al., 2011, AJ, 141, 100
* Hahn et al. (2010) Hahn, O., Teyssier, R., & Carollo, C. M. 2010, MNRAS, 405, 274
* Hopkins et al. (2008) Hopkins Ph. F., Hernquist L., Cox T. J., Kereš D., 2008, ApJS, 175, 356
* Hopkins (2015) Hopkins, Ph. F., 2015, MNRAS, 450, 53
* Howell et al. (2010) Howell J. H., et al. 2010, ApJ, 715, 572
* Jogee et al. (2009) Jogee S., et al. 2009, ApJ, 697, 1971
* Keel et al. (1985) Keel W. C., Kennicutt R. C., Jr. Hummel, E. & van der Hulst J. M. 1985, AJ, 90, 708
* Kennicutt (1998) Kennicutt R. C. Jr., 1998, ApJ, 498, 541
* Krumholz et al. (2004) Krumholz M. R., McKee C. F. & Klein R. I., 2004, ApJ, 611, 399
* Larson et al. (2016) Larson K. L., Sanders D. B., Barnes J. E., Ishida C. M., Evans A. S.,Mazzarella J. M., Kim D. C., Privon G. C., Mirabel I. F. & Flewelling H. A.
* Lawrence et al. (1989) Lawrence A., Rowan-Robinson M., Leech K., Jones D. H. P. & Wall J. V., 1989, MNRAS, 240, 329
* Maiolino et al. (2003) Maiolino R., Comastri A., Gilli R., et al. 2003, MNRAS, 344, L59
* Mayer et al. (2010) Mayer L., Kazantzidis S., Escala A. & Callegari S., 2010, Nature, 466, 1082
* McConnell et al. (2011) McConnell N. J., Ma Ch., Gebhardt K., Wright Sh. A., Murphy J. D., Lauer T. R., Graham J. R. & Richstone D. O., 2011, Nature, 480, 215
* McConnell & Ma (2013) McConnell N. J. & Ma Ch.-P., 2013, ApJ, 764, 184
* Medling et al. (2015) Medling A. et al., 2015, ApJ, 803, 61
* Mihos & Hernquist (1996) Mihos, J. C. & Hernquist, L. 1996, ApJ, 464, 641
* Moreno et al. (2019) Moreno J., et al., 2019, MNRAS, 485, 1320
* Navarro, Frenk & White (1996) Navarro J. F., Frenk C. S., White S. D. M., 1996, ApJ, 462, 563
* Newton & Kay (2013) Newton, R. D. A., & Kay, S. T. 2013, MNRAS, 434, 3606
* Pearson et al. (2019) Pearson W. J., et al., 2019b, A&A, 631, A51
* Perret et al. (2014) Perret V., Renaud F., Epinat B., Amram P., Bournaud F., Contini T., Teyssier R. & Lambert J.-C., 2014, A&A, 562A, 1
* Petric (2011) Petric, A. O., et al. 2011, ApJ, 730, 28
* Powell et al. (2013) Powell L. C., Bournaud F., Chapon D. & Teyssier R., 2013, MNRAS, 434, 1028
* Prieto & Escala (2016) Prieto J. & Escala A. (PE16), 2016, MNRAS, 460, 4018
* Privon et al. (2013) Privon G. C., Barnes J. E., Evans A. S., Hibbard J. E., Yun M. S., Mazzarella J. M., Armus, L. & Surace, J., 2013, ApJ, 771, 120
* Sanders & Mirabel (1996) Sanders D.B. & Mirabel I.F., 1996, ARA&A 34, 749
* Renaud et al. (2014) Renaud F., Bournaud F., Kraljic K. & Duc P.-A., 2014, MNRAS, 442, 33
* Rasera & Teyssier (2006) Rasera Y. & Teyssier R., 2006, A&A, 445, 1
* Rosdahl & Teyssier (2015) Rosdahl J., Teyssier R., 2015, MNRAS, 449, 4380
* Rosdahl et al. (2013) Rosdahl J., Blaizot J., Aubert D., Stranex T., Teyssier R., 2013, MNRAS, 436, 2188
* Sanders et al. (1988) Sanders D. B., Soifer B. T., Elias J. H., Madore B. F., Matthews K., Neugebauer G., & Scoville N. Z. 1988, ApJ, 325, 74
* Schmidt (1959) Schmidt, M., 1959, ApJ, 129, 243
* Schweizer (1982) Schweizer, F. 1982, ApJ, 252, 455
* Shakura & Sunyaev (1973) Shakura N. I. & Sunyaev R. A., 1973, A&A, 24, 337
* Soifer et al. (1984) Soifer B. T., et al. 1984, ApJL, 278, L71
* Springel et al. (2005) Springel, V., Di Matteo, T., & Hernquist, L. 2005, MNRAS, 361, 776
* Stierwalt et al. (2013) Stierwalt et al., 2013, ApJS, 206, 1
* Sutherland & Dopita (1993) Sutherland R. S. & Dopita M. A., 1993, ApJS, 88, 253
* Teyssier (2002) Teyssier R., 2002, A&A, 385, 337
* Teyssier (2010) Teyssier R., Chapon D. & Bournaud F., 2010, ApJ, 720, L149
* Teyssier et al. (2011) Teyssier R., Moore B., Martizzi D., Dubois Y. & Mayer L., 2011, MNRAS, 414, 195
* Teyssier et al. (2013) Teyssier R., Pontzen A., Dubois Y. & Read J. I., 2013, MNRAS, 429, 3068
* Toomre &Toomre (1972) Toomre, A., & Toomre, J. 1972, ApJ, 178, 623
* Treister et al. (2012) Treister E., Schawinski K., Urry C. M. & Simmons B. D., 2012, ApJ, 758, L39
* Truelove et al. (1997) Truelove J. K., Klein R. I., McKee Ch. F., Holliman J. H., Howell L. H. & Greenough J. A., 1997, ApJl, 489, 179
* U et al. (2012) U V., et al. 2012, ApJS, 203, 9
* Veilleux et al. (2002) Veilleux S., Kim D.-C., & Sanders D. B., 2002, ApJS, 143, 315
* Wadsley et al. (2008) Wadsley, J. W., Veeravalli, G., & Couchman, H. M. P. 2008, MNRAS, 387, 427
* Davis & Laor (2011) Davis, S.W. & Laor A., 2011, ApJ 728 98
* Wurster & Thacker (2013) Wurster, J. & Thacker, R. J., 2013, MNRAS, 431, 539
## Appendix A Appendix: Rotation center
The rotational center of a system composed of particles of mass $m_{i}$ at
position $\vec{r}_{i}$ and acceleration $\vec{a}_{i}$, where the particles
well represent the phase space near such a center, can be defined as the point
$\vec{r}_{\rm rot}$ where the torque
$\vec{\tau}_{\rm rot}=\sum_{i}\,m_{i}\,(\vec{r}_{i}-\vec{r}_{\rm
rot})\times(\vec{a}_{i}-\vec{a}_{\rm rot})$ (8)
inside a given volume is null, with $\vec{a}_{\rm rot}$ the rotational center
acceleration. In a system well approximated by ideal rotation, all the
accelerations point to a common center, the rotational center, and then the
position-acceleration cross product is null. In systems with some degree of
turbulence and strong noise in the acceleration field, such a null point does
not necessarily exist; the task then reduces to searching for minima in the
torque field to define the rotational center, which necessarily introduces
degeneracy in its estimation.
A kinematic approach to identifying the rotational center of a system can be
based on the previous dynamical definition. In this case, instead of focusing
on the particle accelerations it is useful to look at the particle velocities
$\vec{v}_{i}$. The rotational center is then the point where the angular
momentum
$\vec{L}_{\rm rot}=\sum_{i}\,m_{i}\,(\vec{r}_{i}-\vec{r}_{\rm
rot})\times(\vec{v}_{i}-\vec{v}_{\rm rot})$ (9)
inside a given volume is maximized. Here $\vec{v}_{\rm rot}$ is the velocity
of the rotational center. Note that in this case the position-velocity cross
product should be a maximum. As with the dynamical definition, if the system
has some degree of turbulence it is possible to find more than one center of
rotation. We note that, viewed as a generic dynamical system, our search
criterion reduces to finding the best candidate fulfilling the
characteristics of a non-stationary irrotational vortex, where $\vec{L}_{\rm
rot}$ is a local maximum of the circulation field.
Identifying a rotational center is computationally expensive, as it requires
computing the angular momentum (or torque) inside a given volume for each
point in space. Given the 3D map of the modulus of the angular momentum, it is
necessary to look for peaks in the angular momentum distribution, i.e. for
“clumps” of angular momentum, whose centroids then define the rotational
centers. Identifying the stellar center of mass given an ansatz for the
rotational center (the BH positions, for instance) is thus computationally
faster.
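The kinematic search can be sketched numerically. The toy below is only illustrative: particles follow a flat rotation curve about a known center, the candidate-center velocity $\vec{v}_{\rm rot}$ is set to zero (the flow's center is at rest in this setup), and a fixed circular aperture around each candidate plays the role of the "given volume" in eq. 9.

```python
# Grid search for the kinematic rotational center: maximize |L_z| (eq. 9)
# over candidate points, summing only particles inside a fixed aperture.
import math

def make_disc(v0=1.0):
    """Particles on a polar grid with a flat rotation curve about the origin."""
    parts = []  # (m, x, y, vx, vy)
    for i in range(1, 41):                 # radii 0.05 .. 2.0
        s = 0.05 * i
        for k in range(48):
            th = 2 * math.pi * k / 48
            x, y = s * math.cos(th), s * math.sin(th)
            vx, vy = -v0 * math.sin(th), v0 * math.cos(th)  # v0 * theta_hat
            parts.append((1.0, x, y, vx, vy))
    return parts

def l_z(parts, p, aperture=0.5):
    """|L_z| about candidate point p from particles within the aperture."""
    tot = 0.0
    for m, x, y, vx, vy in parts:
        rx, ry = x - p[0], y - p[1]
        if rx * rx + ry * ry < aperture * aperture:
            tot += m * (rx * vy - ry * vx)
    return abs(tot)

parts = make_disc()
grid = [(0.3 * i, 0.3 * j) for i in range(-2, 3) for j in range(-2, 3)]
best = max(grid, key=lambda p: l_z(parts, p))
print(best)   # the true rotation center, (0.0, 0.0)
```

For an offset candidate, particles between the candidate and the true center contribute with opposite sign, so the summed angular momentum largely cancels; only the true center accumulates a coherent $|L_z|$.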
## Appendix B Appendix: Non-inertial frames
Inside an accelerating reference frame the Newtonian dynamical equations are
modified: an observer in such a moving frame describes the motion of any
object as influenced by “fictitious forces”. Quantitatively, from a moving
system at position $\vec{R}$ with respect to an inertial reference frame, the
force described by an observer at $\vec{R}$ acting on a particle at position
$\vec{r}_{i}$ is
$m_{i}\frac{d^{2}\vec{r}_{i}^{\prime}}{d\,t^{2}}=\vec{F}_{i}-m_{i}\frac{d^{2}\vec{R}}{d\,t^{2}}-m_{i}\vec{\omega}\times(\vec{\omega}\times\vec{r}_{i}^{\prime})-2m_{i}\vec{\omega}\times\vec{v}_{i}^{\prime}-m_{i}\frac{d\,\vec{\omega}}{d\,t}\times\vec{r}_{i}^{\prime},$
(10)
where $m_{i}$ is the particle mass,
$\vec{r}_{i}^{\prime}=\vec{r}_{i}-\vec{R}$ is the particle position with
respect to the moving system position $\vec{R}$, and $\vec{r}_{i}$ is the
particle position with respect to an inertial reference frame. $\vec{F}_{i}$
is the net force acting on particle $i$ (due to magnetic, gravitational,
viscous or hydrodynamic contributions), $\vec{\omega}$ is the angular velocity
of the moving system, and $\vec{v}_{i}^{\prime}=d\vec{r}_{i}^{\prime}/dt$.
In the simple case $\vec{\omega}=\vec{0}$, i.e. a moving reference frame
without rotation, with $\vec{a}_{i}^{\prime}$ the particle acceleration with
respect to the moving system, $\vec{a}_{i}$ the particle acceleration with
respect to an inertial frame, and $\vec{A}$ the moving system acceleration
with respect to the same inertial reference frame, it is possible to write
$\vec{a}_{i}^{\prime}=\vec{a}_{i}-\vec{A}.$ (11)
## Appendix C Appendix: Torques-mass transport relation
The momentum conservation equation in its conservative form in Cartesian
coordinates $x_{i}$ can be written as
$\frac{\partial(\rho v_{k})}{\partial t}+\frac{\partial}{\partial
x_{l}}(R_{kl}+P_{kl}-B_{kl}+G_{kl}-S_{kl})=0,$ (12)
where $\rho$ is the gas mass density and $v_{k}$ is the Cartesian component of
the gas velocity. $R_{kl}$, $P_{kl}$, $B_{kl}$, $G_{kl}$ and $S_{kl}$ are the
hydrodynamical stress, the pressure stress, the magnetic stress, the
gravitational stress and the viscous stress, respectively. The stresses are
defined by:
$\displaystyle R_{kl}$ $\displaystyle=$ $\displaystyle\rho v_{k}v_{l},$ (13)
$\displaystyle P_{kl}$ $\displaystyle=$ $\displaystyle\delta_{kl}P,$ (14)
$\displaystyle B_{kl}$ $\displaystyle=$
$\displaystyle\frac{1}{4\pi}\left(B_{k}B_{l}-\frac{1}{2}B^{2}\delta_{kl}\right)$
(15) $\displaystyle G_{kl}$ $\displaystyle=$ $\displaystyle\frac{1}{4\pi
G}\left[\frac{\partial\phi}{\partial x_{k}}\frac{\partial\phi}{\partial
x_{l}}-\frac{1}{2}(\nabla\phi)^{2}\delta_{kl}\right],$ (16) $\displaystyle
S_{kl}$ $\displaystyle=$ $\displaystyle\rho\nu\left(\frac{\partial
v_{k}}{\partial x_{l}}+\frac{\partial v_{l}}{\partial
x_{k}}-\frac{2}{3}\delta_{kl}\nabla\cdot\vec{v}\right),$ (17)
where $P$ is the gas pressure, $B_{k}$ the cartesian component of the magnetic
field, $B$ the modulus of the magnetic field, $\phi$ the gravitational
potential, $\nu$ is the kinematic viscosity and $\delta_{kl}$ is the Kronecker
delta symbol.
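With the Poisson equation $\nabla^{2}\phi=4\pi G\rho$, the divergence of the gravitational stress reduces to the familiar gravitational force density; spelling out the step:

```latex
\frac{\partial G_{kl}}{\partial x_{l}}
 =\frac{1}{4\pi G}\left[(\partial_{l}\partial_{k}\phi)(\partial_{l}\phi)
   +(\partial_{k}\phi)\,\nabla^{2}\phi
   -(\partial_{m}\phi)(\partial_{k}\partial_{m}\phi)\right]
 =\frac{\nabla^{2}\phi}{4\pi G}\,\partial_{k}\phi
 =\rho\,\frac{\partial\phi}{\partial x_{k}},
```

since the first and third terms in the bracket cancel.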
Neglecting the magnetic term and the dissipative-viscous term (Balbus 2003)
the momentum conservation equation can be written as
$\frac{\partial}{\partial t}(\rho v_{k})+\frac{\partial}{\partial x_{l}}(\rho
v_{k}v_{l})+\frac{\partial P}{\partial
x_{k}}+\rho\frac{\partial\phi}{\partial x_{k}}=0,$ (18)
and taking the cross product of the Cartesian position $\vec{x}$ with eq. 18
(applying $\epsilon_{ijk}x_{j}$, with $\epsilon_{ijk}$ the Levi-Civita
symbol), it is possible to derive the angular momentum conservation equation;
after some algebra its $\hat{z}$ component can be written as
$\frac{\partial}{\partial
t}\left(\rho\ell_{z}\right)=-\left[\ell_{z}\rho\,\nabla\cdot\vec{v}+\vec{v}\cdot\nabla(\rho\,\ell_{z})+\tau_{z}^{P}+\tau_{z}^{G}\right],$
(19)
from where it is possible to get the gas mass density variation
$\frac{\partial\rho}{\partial
t}=-\frac{1}{\ell_{z}}\left[\rho\frac{\partial\ell_{z}}{\partial
t}+\ell_{z}\rho\nabla\cdot\vec{v}+\vec{v}\cdot\nabla(\rho\ell_{z})+\tau_{z}^{P}+\tau_{z}^{G}\right],$
(20)
where $\ell_{z}=(\vec{x}\times\vec{v})\cdot\hat{z}$ is the $z$ component of
the gas specific angular momentum,
$\tau_{z}^{G}=\rho\,(\vec{x}\times\nabla\phi)\cdot\hat{z}$ is the $z$
component of the gravitational torque and $\tau_{z}^{P}=(\vec{x}\times\nabla
P)\cdot\hat{z}$ is the $z$ component of the hydrodynamic torque.
Equation 20 relates the changes in gas density $\rho$ to the torques
$\tau_{z}^{P,G}$ acting on the gas. For a system starting from an axisymmetric
stationary state with $\vec{v}=v(r)\,\hat{\theta}$ and $\rho=\rho(r)$,
azimuthal perturbations in both the gas pressure and the gravitational
potential are the sources of changes in gas density, i.e. both hydrodynamic
and gravitational torques are able to transport matter from a given radius to
another radius of the system.
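The algebra behind the $\hat{z}$ component can be made explicit. Applying $\epsilon_{zjk}x_{j}$ term by term (the coordinates $x_{j}$ carry no explicit time dependence, and $\epsilon_{zlk}v_{l}v_{k}=0$):

```latex
\epsilon_{zjk}x_{j}\,\partial_{t}(\rho v_{k})=\partial_{t}(\rho\,\ell_{z}),
\qquad
\epsilon_{zjk}x_{j}\,\partial_{l}(\rho v_{k}v_{l})
 =\partial_{l}(\rho\,\ell_{z}v_{l})
 =\ell_{z}\rho\,\nabla\cdot\vec{v}+\vec{v}\cdot\nabla(\rho\,\ell_{z}),
```

while the pressure and gravity terms give

```latex
\epsilon_{zjk}x_{j}\,\partial_{k}P=(\vec{x}\times\nabla P)\cdot\hat{z}=\tau_{z}^{P},
\qquad
\rho\,\epsilon_{zjk}x_{j}\,\partial_{k}\phi
 =\rho\,(\vec{x}\times\nabla\phi)\cdot\hat{z}=\tau_{z}^{G},
```

which are exactly the terms collected in eq. 19.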
Equational Reasoning for Non-determinism Monad:
The Case of Spark Aggregation
Shin-Cheng Mu
Institute of Information Science
Academia Sinica
As part of the author's studies on equational reasoning for monadic programs, this report focuses on the non-determinism monad.
We discuss what properties this monad should satisfy, what additional operators and notations can be introduced to facilitate equational reasoning about non-determinism, and put them to the test by proving a number of properties in our example problem, inspired by the author's previous work on proving properties of Spark aggregation.
§ INTRODUCTION
In functional programming, pure programs are those that can be understood as static mappings from inputs to outputs.
The main advantage of staying in the pure realm is that properties of pure entities can be proved by equational reasoning.
Side effects, in contrast, used to be considered the “awkward squad” that are difficult to reason about.
<cit.>, however, showed that effectful, monadic programs may also be reasoned about in a mathematical manner, using monad laws and properties of effect operators.
This report is part of a series of the author's studies on equational reasoning for monadic programs.
In this report we focus on non-determinism monad — in our definition that is a monad having two effect operators, one allowing a program to fail, another allowing a non-deterministic choice between two results.
We discuss what properties these operators should satisfy, what additional operators and notations can be introduced to facilitate equational reasoning of this monad, and put them to the test by proving a number of properties in our example problem: Spark aggregation.
Much of this report is inspired by the author's joint work with <cit.>, in which we formalised Spark, a platform for distributed computation, and derived properties under which a distributed Spark aggregation represents a deterministic computation.
Therefore, many examples in this report are about finding out when processing a non-deterministic permutation (simulating arbitrary distribution of data) produces a deterministic result.
§ MONAD AND NON-DETERMINISM
A monad consists of a type constructor M and two operators return and “bind” (=<<) that satisfy the following monad laws:
\begin{align}
\ensuremath{\Varid{f}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}\Varid{return}\;\Varid{x}} &= \ensuremath{\Varid{f}\;\Varid{x}}\mbox{~~,} \label{eq:monad-bind-ret}\\
\ensuremath{\Varid{return}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}\Varid{m}} &= \ensuremath{\Varid{m}} \mbox{~~,} \label{eq:monad-ret-bind}\\
\ensuremath{\Varid{f}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}(\Varid{g}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}\Varid{m})} &= \ensuremath{(\lambda \Varid{x}\to \Varid{f}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}\Varid{g}\;\Varid{x})\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}\Varid{m}} \mbox{~~.}
\label{eq:monad-assoc}
\end{align}
Rather than the usual (>>=), in the laws above we use the reversed bind (=<<), which is consistent with the direction of function composition and more readable when we program in a style that uses composition.
When we use bind with $\lambda$-abstractions, it is more natural to write m >>= (λx → ...).
In this report we use the former more than the latter, thus the choice of notation.
We also define . Note that has type .
(<=<)  :: (b → M c) → (a → M b) → (a → M c)
(f <=< g) x = f =<< g x

(⟨$⟩)  :: (a → b) → M a → M b
f ⟨$⟩ m = (return · f) =<< m

(⟨•⟩)  :: (b → c) → (a → M b) → (a → M c)
(f ⟨•⟩ g) x = f ⟨$⟩ g x

Some monadic operators we find handy for this paper.
More operators we find useful are given in Figure <ref>. Right-to-left Kleisli composition, denoted by (<=<), composes two monadic operations f :: b → M c and g :: a → M b into an operation f <=< g :: a → M c. Operators (⟨$⟩) and (⟨•⟩) are monadic counterparts of function application and composition: (⟨$⟩) applies a pure function to a monad, while (⟨•⟩) composes a pure function after a monadic function.
We now introduce a collection of properties that allow us to rotate an expression that involves two operators and three operands.
These properties will be handy when we need to move parentheses around in expressions.
To begin with, the following properties show that (⟨•⟩) and (⟨$⟩) share properties similar to pure function composition and application:
\begin{align}
\ensuremath{(\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle \bullet \rangle}}}\Varid{g})\;\Varid{x}} &= \ensuremath{\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle}} \scaleobj{0.8}{\$} \raisebox{0.5\depth}{\scaleobj{0.5}{\rangle}}}\Varid{g}\;\Varid{x}} \mbox{~~,}
\label{eq:comp-ap}\\
\ensuremath{\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle}} \scaleobj{0.8}{\$} \raisebox{0.5\depth}{\scaleobj{0.5}{\rangle}}}(\Varid{g}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle}} \scaleobj{0.8}{\$} \raisebox{0.5\depth}{\scaleobj{0.5}{\rangle}}}\Varid{m})} &= \ensuremath{(\Varid{f}\mathbin{\cdot}\Varid{g})\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle}} \scaleobj{0.8}{\$} \raisebox{0.5\depth}{\scaleobj{0.5}{\rangle}}}\Varid{m}} \mbox{~~,}
\label{eq:comp-ap-ap}\\
\ensuremath{\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle \bullet \rangle}}}(\Varid{g}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle \bullet \rangle}}}\Varid{h})} &= \ensuremath{(\Varid{f}\mathbin{\cdot}\Varid{g})\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle \bullet \rangle}}}\Varid{h}} \mbox{~~.}
\label{eq:comp-mcomp-mcomp}
\end{align}
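These rotation properties can be sanity-checked on a concrete monad. A minimal Haskell sketch, using the list monad for M and the ASCII name (<.>) (our choice here) for the paper's (⟨•⟩):

```haskell
-- Sanity check of (comp-ap), (comp-ap-ap), and (comp-mcomp-mcomp)
-- in the list monad.  (<.>) stands in for the paper's ⟨•⟩.
(<.>) :: Monad m => (b -> c) -> (a -> m b) -> (a -> m c)
(f <.> g) x = f <$> g x

f :: Int -> Int
f = (* 2)

g :: Int -> [Int]          -- a monadic (list-valued) function
g x = [x, x + 1]

h :: Int -> [Int]
h x = [x, x * 10]

checks :: [Bool]
checks =
  [ (f <.> g) 3            == (f <$> g 3)               -- (comp-ap)
  , (f <$> ((+1) <$> g 3)) == ((f . (+1)) <$> g 3)      -- (comp-ap-ap)
  , (f <.> ((+5) <.> h)) 2 == ((f . (+5)) <.> h) 2      -- (comp-mcomp-mcomp)
  ]

main :: IO ()
main = print (and checks)   -- prints True
```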
We also have the following law that allows us to rotate an expression that uses (⟨•⟩) and pure composition (·):
\begin{align}
\ensuremath{\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle \bullet \rangle}}}(\Varid{g}\mathbin{\cdot}\Varid{h})} &= \ensuremath{(\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle \bullet \rangle}}}\Varid{g})\mathbin{\cdot}\Varid{h}} \mbox{~~.}
\label{eq:mcomp-comp-mcomp}
\end{align}
Note that g in (<ref>) must be a function returning a monad. Furthermore, (<ref>) and (<ref>) relate (=<<) and (⟨$⟩), both operators applying functions to monads,
while (<ref>) and (<ref>) relate (<=<) and (⟨•⟩), both operators composing functions over monads:
\begin{align}
\ensuremath{\Varid{f}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}(\Varid{g}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle}} \scaleobj{0.8}{\$} \raisebox{0.5\depth}{\scaleobj{0.5}{\rangle}}}\Varid{m})} &= \ensuremath{(\Varid{f}\mathbin{\cdot}\Varid{g})\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}\Varid{m}} \mbox{~~,}
\label{eq:comp-bind-ap}\\
\ensuremath{\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle}} \scaleobj{0.8}{\$} \raisebox{0.5\depth}{\scaleobj{0.5}{\rangle}}}(\Varid{g}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}\Varid{m})} &= \ensuremath{(\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle \bullet \rangle}}}\Varid{g})\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}\Varid{m}} \mbox{~~,}
\label{eq:mcomp-bind-ap}\\
\ensuremath{\Varid{f}\mathrel{\hstretch{0.7}{<\!\!\!=\!\!<}}(\Varid{g}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle \bullet \rangle}}}\Varid{h})} &= \ensuremath{(\Varid{f}\mathbin{\cdot}\Varid{g})\mathrel{\hstretch{0.7}{<\!\!\!=\!\!<}}\Varid{h}} \mbox{~~,}
\label{eq:kc-mcomp}\\
\ensuremath{\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle \bullet \rangle}}}(\Varid{g}\mathrel{\hstretch{0.7}{<\!\!\!=\!\!<}}\Varid{h})} &= \ensuremath{(\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle \bullet \rangle}}}\Varid{g})\mathrel{\hstretch{0.7}{<\!\!\!=\!\!<}}\Varid{h}} \mbox{~~.}
\label{eq:mcomp-kc}
\end{align}
Having these properties is one of the advantages of writing bind and Kleisli composition backwards, as (=<<) and (<=<).
All the properties above can be proved by expanding definitions, and it is a good warming-up exercise proving some of them.
Some of them are proved in Appendix <ref>.
None of these operators and properties are strictly necessary: they can all be reduced to (=<<), return, and $\lambda$-abstractions. As is often the case when designing notations, having more operators allows ideas to be expressed concisely at a higher level of abstraction, at the expense of having more properties to memorise. It is personal preference where the balance should be. Properties (<ref>) through (<ref>) may look like a lot of properties to remember. In practice, we find it usually sufficient to let ourselves be guided by types. For example, when we have f ⟨$⟩ (g =<< m) and want to bring f and g together, by their types we can figure out that the resulting expression should be (f ⟨•⟩ g) =<< m.
§.§ Non-determinism Monad
Non-determinism is the only effect we use in this report.
We assume two operators ∅ and (⫿): the former denotes failure, while m ⫿ n denotes that the computation may yield either m or n.
As pointed out by <cit.>, for proofs and derivations, what matters is not how a monad is implemented but what properties its operators satisfy.
What laws ∅ and (⫿) should satisfy, however, can be a tricky issue.
As discussed by <cit.>, it eventually comes down to what we use the monad for.
It is usually expected that (⫿) and ∅ form a monoid. That is, (⫿) is associative, with ∅ as its identity:
\begin{align*}
\ensuremath{(\Varid{m}\mathbin{\talloblong}\Varid{n})\mathbin{\talloblong}\Varid{k}}~ &=~ \ensuremath{\Varid{m}\mathbin{\talloblong}(\Varid{n}\mathbin{\talloblong}\Varid{k})} \mbox{~~,}\\
\ensuremath{\emptyset\mathbin{\talloblong}\Varid{m}} ~=~ & \ensuremath{\Varid{m}} ~=~ \ensuremath{\Varid{m}\mathbin{\talloblong}\emptyset} \mbox{~~.}
\end{align*}
It is also assumed that monadic bind distributes into (⫿) from the end,
while ∅ is a right zero for (=<<):
\begin{align}
\ensuremath{\Varid{f}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}(\Varid{m}_{1}\mathbin{\talloblong}\Varid{m}_{2})} ~&=~ \ensuremath{(\Varid{f}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}\Varid{m}_{1})\mathbin{\talloblong}(\Varid{f}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}\Varid{m}_{2})} \mbox{~~,}
\label{eq:bind-mplus-dist}\\
\ensuremath{\Varid{f}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}}\emptyset} ~&=~ \ensuremath{\emptyset} \label{eq:bind-mzero-zero} \mbox{~~.}
\end{align}
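As a sanity check, the list monad — with ∅ instantiated to [] and (⫿) to (++) — satisfies the monoid and distribution laws above (though not the commutativity and idempotence assumed below; for those a set-like implementation is needed). A small sketch:

```haskell
-- List-monad check of the monoid laws, (bind-mplus-dist), and
-- (bind-mzero-zero); here mzero is [] and mplus is (++).
f :: Int -> [Int]
f x = [x, x + 100]

m1, m2, k :: [Int]
m1 = [1, 2]
m2 = [3]
k  = [4]

checks :: [Bool]
checks =
  [ ((m1 ++ m2) ++ k) == (m1 ++ (m2 ++ k))             -- associativity
  , ([] ++ m1) == m1 && (m1 ++ []) == m1               -- [] is the identity
  , (f =<< (m1 ++ m2)) == ((f =<< m1) ++ (f =<< m2))   -- bind distributes
  , (f =<< ([] :: [Int])) == []                        -- [] is a zero of bind
  ]

main :: IO ()
main = print (and checks)   -- prints True
```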
For our purpose in this section, we also assume that (⫿) is commutative (m ⫿ n = n ⫿ m) and idempotent (m ⫿ m = m). Implementations of such non-determinism monads have been studied by <cit.>.
Here are some induced laws about how (⟨$⟩) interacts with return and the non-determinism operators:
\begin{align}
\ensuremath{\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle}} \scaleobj{0.8}{\$} \raisebox{0.5\depth}{\scaleobj{0.5}{\rangle}}}\Varid{return}\;\Varid{x}} &= \ensuremath{\Varid{return}\;(\Varid{f}\;\Varid{x})} \mbox{~~,}\label{eq:ap-return}\\
\ensuremath{\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle}} \scaleobj{0.8}{\$} \raisebox{0.5\depth}{\scaleobj{0.5}{\rangle}}}\emptyset} &= \ensuremath{\emptyset} \mbox{~~,} \label{eq:ap-mzero}\\
\ensuremath{\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle}} \scaleobj{0.8}{\$} \raisebox{0.5\depth}{\scaleobj{0.5}{\rangle}}}(\Varid{m}_{1}\mathbin{\talloblong}\Varid{m}_{2})} &= \ensuremath{(\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle}} \scaleobj{0.8}{\$} \raisebox{0.5\depth}{\scaleobj{0.5}{\rangle}}}\Varid{m}_{1})\mathbin{\talloblong}(\Varid{f}\mathrel{\raisebox{0.5\depth}{\scaleobj{0.5}{\langle}} \scaleobj{0.8}{\$} \raisebox{0.5\depth}{\scaleobj{0.5}{\rangle}}}\Varid{m}_{2})}\mbox{~~.}
\label{eq:ap-mplus}
\end{align}
§ PERMUTATION AND INSERTION
As a warm-up example, the function perm non-deterministically computes a permutation of its input, using an auxiliary function insert that inserts an element into an arbitrary position in a list:
perm :: [a] → M [a]
perm []     = return []
perm (x:xs) = insert x =<< perm xs ,

insert :: a → [a] → M [a]
insert x []     = return [x]
insert x (y:xs) = return (x:y:xs) ⫿ ((y:) ⟨$⟩ insert x xs) .
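Instantiating M to the list monad (return builds a singleton, (⫿) is (++), and (⟨$⟩) is fmap) makes both functions directly executable; a sketch:

```haskell
-- perm and insert, specialised to the list monad: each
-- non-deterministic outcome becomes one element of the result list.
insert :: a -> [a] -> [[a]]
insert x []     = return [x]
insert x (y:xs) = return (x:y:xs) ++ ((y:) <$> insert x xs)

perm :: [a] -> [[a]]
perm []     = return []
perm (x:xs) = insert x =<< perm xs

main :: IO ()
main = print (perm "abc")
-- prints ["abc","bac","bca","acb","cab","cba"]
```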
For example, the possible results of perm applied to a three-element list include all six of its permutations.
The following lemma presents properties under which permuting the input list does not change the result of a foldr:

Given (⊙) :: a → b → b and z :: b. If x ⊙ (y ⊙ w) = y ⊙ (x ⊙ w) for all x, y, and w, we have

    foldr (⊙) z ⟨•⟩ perm = return · foldr (⊙) z .
Since perm is defined in terms of insert, the proof of Lemma <ref> naturally depends on a lemma about a related property of insert:

Given (⊙) :: a → b → b and z :: b, we have

    foldr (⊙) z ⟨•⟩ insert x = return · foldr (⊙) z · (x:) ,

provided that x ⊙ (y ⊙ w) = y ⊙ (x ⊙ w) for all y and w.
We prove foldr (⊙) z ⟨•⟩ insert x = return · foldr (⊙) z · (x:) by induction on the input list.
Case []:

    foldr (⊙) z ⟨$⟩ insert x []
 =  foldr (⊙) z ⟨$⟩ return [x]
 =  return (foldr (⊙) z [x])
 =  (return · foldr (⊙) z · (x:)) [] .

Case (y:xs):

    foldr (⊙) z ⟨$⟩ insert x (y:xs)
 =  foldr (⊙) z ⟨$⟩ (return (x:y:xs) ⫿ ((y:) ⟨$⟩ insert x xs))
 =    { by (<ref>) and (<ref>) }
    return (foldr (⊙) z (x:y:xs)) ⫿ ((foldr (⊙) z · (y:)) ⟨$⟩ insert x xs) .

Focus on the second branch of (⫿):

    (foldr (⊙) z · (y:)) ⟨$⟩ insert x xs
 =  ((y⊙) · foldr (⊙) z) ⟨$⟩ insert x xs
 =    { by (<ref>) }
    (y⊙) ⟨$⟩ (foldr (⊙) z ⟨$⟩ insert x xs)
 =    { induction }
    (y⊙) ⟨$⟩ return (foldr (⊙) z (x:xs))
 =  return (y ⊙ foldr (⊙) z (x:xs))
 =  return (y ⊙ (x ⊙ foldr (⊙) z xs))
 =    { assumption: x ⊙ (y ⊙ w) = y ⊙ (x ⊙ w) }
    return (foldr (⊙) z (x:y:xs)) .

Thus we have

    (foldr (⊙) z ⟨•⟩ insert x) (y:xs)
 =  return (foldr (⊙) z (x:y:xs)) ⫿ return (foldr (⊙) z (x:y:xs))
 =    { idempotence of (⫿) }
    return (foldr (⊙) z (x:y:xs)) .
Proof of Lemma <ref> then follows:
We prove foldr (⊙) z ⟨•⟩ perm = return · foldr (⊙) z by induction on the input list.
Case []:

    foldr (⊙) z ⟨$⟩ perm []
 =  foldr (⊙) z ⟨$⟩ return []
 =  return (foldr (⊙) z []) .

Case (x:xs):

    foldr (⊙) z ⟨$⟩ perm (x:xs)
 =  foldr (⊙) z ⟨$⟩ (insert x =<< perm xs)
 =    { by (<ref>) }
    (foldr (⊙) z ⟨•⟩ insert x) =<< perm xs
 =    { Lemma <ref> }
    (return · foldr (⊙) z · (x:)) =<< perm xs
 =  ((x⊙) · foldr (⊙) z) ⟨$⟩ perm xs
 =    { by (<ref>) }
    (x⊙) ⟨$⟩ (foldr (⊙) z ⟨$⟩ perm xs)
 =    { induction }
    (x⊙) ⟨$⟩ return (foldr (⊙) z xs)
 =  return (x ⊙ foldr (⊙) z xs)
 =  return (foldr (⊙) z (x:xs)) .
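A concrete spot-check of this result in the list monad: (+) satisfies x ⊙ (y ⊙ w) = y ⊙ (x ⊙ w), so summing any permutation gives one and the same value. (The list monad is not idempotent, so we compare results up to duplicates with nub; perm and insert are repeated so the fragment compiles on its own.)

```haskell
import Data.List (nub)

insert :: a -> [a] -> [[a]]
insert x []     = return [x]
insert x (y:xs) = return (x:y:xs) ++ ((y:) <$> insert x xs)

perm :: [a] -> [[a]]
perm []     = return []
perm (x:xs) = insert x =<< perm xs

-- All 24 permutations of [1..4] fold to the same sum.
results :: [Int]
results = nub (foldr (+) 0 <$> perm [1, 2, 3, 4])

main :: IO ()
main = print results   -- prints [10]
```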
§.§ Map, Filter, and Permutation
It is not hard to formulate the following relationship between map
and perm, which is also based on a related property relating map and insert:

    map f ⟨•⟩ perm = perm · map f ,
    map f ⟨•⟩ insert x = insert (f x) · map f .

[Lemma <ref> and <ref> are in fact free theorems
of perm and insert <cit.>. They serve as good exercises, nevertheless.]
The lemmas are true because map f is a pure computation — in reasoning about monadic programs it is helpful, and sometimes essential, to identify the pure segments, because these are the parts to which more properties are applicable. Note that the pure composition (·) is turned into (⟨•⟩) once map f moves leftwards, to the monadic side.
We prove only Lemma <ref>.
We prove by induction on xs that map f ⟨$⟩ insert x xs = insert (f x) (map f xs) for all x. We present only the inductive case:

    map f ⟨$⟩ insert x (y:xs)
 =  map f ⟨$⟩ (return (x:y:xs) ⫿ ((y:) ⟨$⟩ insert x xs))
 =  return (map f (x:y:xs)) ⫿ (map f ⟨$⟩ ((y:) ⟨$⟩ insert x xs)) .

For the second branch we reason:

    map f ⟨$⟩ ((y:) ⟨$⟩ insert x xs)
 =  (map f · (y:)) ⟨$⟩ insert x xs
 =  ((f y :) · map f) ⟨$⟩ insert x xs
 =  (f y :) ⟨$⟩ (map f ⟨$⟩ insert x xs)
 =    { induction }
    (f y :) ⟨$⟩ insert (f x) (map f xs) .

Thus we have

    map f ⟨$⟩ insert x (y:xs)
 =  return (f x : f y : map f xs) ⫿ ((f y :) ⟨$⟩ insert (f x) (map f xs))
 =  insert (f x) (map f (y:xs)) .
One may have noticed that the style of proof is familiar: replace return by the singleton-list constructor and (⫿) by (++), and the proof is more-or-less what one would do for a list-based insert. This is exactly the point: the style of proofs we used to do for pure programs still works for monadic programs, as long as the monad satisfies the demanded laws, be it a list, a more advanced implementation of non-determinism, or a monad having other effects.
A similar property relating filter and perm can be formulated.
Its proof is routine and omitted. Finally, on a number of occasions it helps to know that the input itself is one possible result of perm. The proof is also routine and omitted.
For all xs we have that perm xs = return xs ⫿ m for some m.
§ SPARK AGGREGATION
Spark <cit.> is a popular platform for scalable distributed data-parallel computation based on a flexible programming environment with high-level APIs, considered by many as the successor of MapReduce.
In a typical Spark program, data is partitioned and stored distributively on read-only Resilient Distributed Datasets (RDDs) —
we can think of it as a list of lists, where each sub-list is potentially stored on a remote node.
On an RDD one can apply operations, called combinators, such as map, filter, and aggregate.
The aggregate combinator, for example, takes user-defined functions (⊗) and (⊕): (⊗) accumulates a sub-result within each data partition, while (⊕) merges sub-results across different partitions.
Programming in Spark, however, can be tricky.
Since sub-results are computed across partitions concurrently, the order of their applications varies on different executions.
Aggregation in Spark is therefore inherently non-deterministic.
An example from <cit.> showed that computing the integral of $x^{73}$ from $x=-2$ to $x=2$, which should be $0$, using a function in the Spark machine learning library, yields results ranging from $-8192.0$ to $12288.0$ in different runs.
It is thus desirable to find out conditions, which Spark's documentation does not specify formally, under which a Spark computation yields deterministic outcomes.
§.§ List Homomorphism
Since a Spark aggregation typically computes a list homomorphism <cit.>,
we digress a little in this section to give a brief review and present some results that we will use.
A function h is called a list homomorphism if there exist k, (⊕), and z such that:

h []         = z ,
h [x]        = k x ,
h (xs ++ ys) = h xs ⊕ h ys .

Such a list homomorphism is denoted by hom (⊕) k z.
Note that the properties above implicitly demand that (⊕) be associative with z as its identity element.
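For instance, sum is a list homomorphism with k = id, (⊕) = (+), and z = 0. The sketch below encodes hom with a foldr — valid because (⊕) is required to be associative with z as identity — and checks the three defining equations on samples; the foldr encoding is our illustration, not the report's:

```haskell
-- hom op k z: the list homomorphism determined by its action on
-- [], singletons, and (++).  Encoded via foldr, which is faithful
-- because op is associative with z as its identity.
hom :: (b -> b -> b) -> (a -> b) -> b -> [a] -> b
hom op k z = foldr (\x y -> k x `op` y) z

checks :: [Bool]
checks =
  [ hom (+) id 0 ([] :: [Int])    == 0
  , hom (+) id 0 [7]              == 7
  , hom (+) id 0 ([1,2] ++ [3,4]) == hom (+) id 0 [1,2] + hom (+) id 0 [3,4]
  ]

main :: IO ()
main = print (and checks)   -- prints True
```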
Lemma <ref> and <ref> below are about when a computation defined in terms of foldr is actually a list homomorphism. In Lemma <ref>, Img h denotes the image of a function h.

foldr (⊕) z · map h = h · concat if and only if h = hom (⊕) k z, where k x = h [x].

Let (⊕) be associative on Img (foldr (⊗) z) with z as its identity, where (⊗) :: a → b → b.
We have foldr (⊗) z (xs ++ ys) = foldr (⊗) z xs ⊕ foldr (⊗) z ys if and only if
x ⊗ (y ⊕ w) = (x ⊗ y) ⊕ w for all x,
and y and w in Img (foldr (⊗) z).
Notice, in Lemma <ref>, that z itself is in Img (foldr (⊗) z), being foldr (⊗) z [].
Proofs of both lemmas are interesting exercises, albeit a bit off-topic.
They are recorded in Appendix <ref>.
§.§ Formalisation and Results
Distributed collections of data are represented by Resilient Distributed Datasets (RDDs) in Spark. Informally, an RDD is a collection of data entries; these data entries are further divided into partitions stored on different machines. Abstractly, an RDD can be seen as a list of lists:

type Partition a  =  [a] ,
type RDD a        =  [Partition a] ,

where each Partition may be stored on a different machine.
While Spark provides a collection of combinators (functions on RDDs that are designed to be composed to form larger programs), in this report we focus on a particular one, aggregate. It can be seen as a parallel implementation of foldr. The aggregate combinator processes an RDD in two levels: each partition is first processed locally on one machine by a foldr using (⊗). The sub-results are then communicated and combined —
this second step can be thought of as another foldr using (⊕).
[In fact, the actual Spark aggregation (and that modelled in <cit.>) is like foldl.
For convenience in our proofs we read all list operations the other way round and use foldr. This is not a fundamental difference.]
Spark programmers like to assume that their programs are deterministic. To exploit concurrency, however, the sub-results from each machine might be processed in arbitrary order and the result could be non-deterministic.
The following is our characterisation of aggregate, where we use perm to model the fact that sub-results from each machine are processed in unknown order:

aggregate :: b → (a → b → b) → (b → b → b) → RDD a → M b
aggregate z (⊗) (⊕) = foldr (⊕) z ⟨•⟩ (perm · map (foldr (⊗) z)) .
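In the list monad the characterisation runs as is; a sketch, with perm and insert repeated for self-containedness. Since (+) is associative and commutative, every interleaving gives the same answer:

```haskell
import Data.List (nub)

type Partition a = [a]
type RDD a = [Partition a]

insert :: a -> [a] -> [[a]]
insert x []     = return [x]
insert x (y:xs) = return (x:y:xs) ++ ((y:) <$> insert x xs)

perm :: [a] -> [[a]]
perm []     = return []
perm (x:xs) = insert x =<< perm xs

-- aggregate in the list monad; otimes folds within a partition,
-- oplus merges sub-results in an arbitrary (permuted) order.
aggregate :: b -> (a -> b -> b) -> (b -> b -> b) -> RDD a -> [b]
aggregate z otimes oplus rdd =
  foldr oplus z <$> perm (map (foldr otimes z) rdd)

main :: IO ()
main = print (nub (aggregate 0 (+) (+) [[1,2],[3,4],[5]]))  -- prints [15]
```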
It is clear from the types that foldr and map are pure computations, and non-determinism is introduced solely by perm.
Deterministic Aggregation
We are interested in finding out conditions under which aggregate produces deterministic outcomes.
Given (⊕) and z, where (⊕) is associative and commutative, we have:

    aggregate z (⊗) (⊕) = return · foldr (⊕) z · map (foldr (⊗) z) .
We reason:

    aggregate z (⊗) (⊕)
 =  foldr (⊕) z ⟨•⟩ (perm · map (foldr (⊗) z))
 =    { by (<ref>) }
    (foldr (⊕) z ⟨•⟩ perm) · map (foldr (⊗) z)
 =    { Lemma <ref> }
    return · foldr (⊕) z · map (foldr (⊗) z) .
The following corollary summarises the results and presents conditions under which aggregate computes a list homomorphism.

aggregate z (⊗) (⊕) = return · hom (⊕) (⊗ z) z · concat, provided that
(⊕) is associative, commutative, and has z as identity, and that x ⊗ (y ⊕ w) = (x ⊗ y) ⊕ w for all x,
and y and w in Img (foldr (⊗) z).
We reason:

    aggregate z (⊗) (⊕)
 =    { Theorem <ref> }
    return · foldr (⊕) z · map (foldr (⊗) z)
 =    { Lemma <ref> and <ref> }
    return · hom (⊕) (⊗ z) z · concat .
Determinism Implies Homomorphism
The final part of the report deals with an opposite question:
what can we infer if we know that aggregate z (⊗) (⊕) is deterministic?
To answer that, however, we need to assume two more properties:
\begin{align}
\ensuremath{\Varid{m}_{1}\mathbin{\talloblong}\Varid{m}_{2}\mathrel{=}\Varid{return}\;\Varid{x}} ~~&\Rightarrow~~ \ensuremath{\Varid{m}_{1}\mathrel{=}\Varid{m}_{2}\mathrel{=}\Varid{return}\;\Varid{x}} \mbox{.}
\label{eq:mplus-return}\\
\ensuremath{\Varid{return}\;\Varid{x}_{1}\mathrel{=}\Varid{return}\;\Varid{x}_{2}} ~~&\Rightarrow~~ \ensuremath{\Varid{x}_{1}\mathrel{=}\Varid{x}_{2}} \mbox{.}
\label{eq:return-injective}
\end{align}
Property (<ref>) can be seen as the other direction of the idempotence of (⫿),
while (<ref>) states that return is injective.
The following lemma can be understood this way: when aggregate, which could be non-deterministic, can be performed by a deterministic function, the operator (⊕) has to be insensitive to the ordering of sub-results:
If aggregate z (⊗) (⊕) = return · foldr (⊗) z · concat,
and perm xss = return yss ⫿ m for some m, we have

    foldr (⊗) z (concat xss) =
      foldr (⊕) z (map (foldr (⊗) z) xss) =
        foldr (⊕) z (map (foldr (⊗) z) yss) .
We reason:

    return · foldr (⊗) z · concat $ xss
 =    { assumption }
    aggregate z (⊗) (⊕) $ xss
 =    { definition of aggregate; map and perm commute (Lemma <ref>) }
    (foldr (⊕) z · map (foldr (⊗) z)) ⟨$⟩ perm xss
 =    { perm xss = return yss ⫿ m; by (<ref>) }
    (return · foldr (⊕) z · map (foldr (⊗) z) $ yss) ⫿
      ((foldr (⊕) z · map (foldr (⊗) z)) ⟨$⟩ m) .

Thus by (<ref>) and (<ref>), foldr (⊗) z (concat xss) = foldr (⊕) z (map (foldr (⊗) z) yss).
The former also equals foldr (⊕) z (map (foldr (⊗) z) xss),
by Lemma <ref>,
since perm xss = return xss ⫿ m′ for some m′.
Based on Lemma <ref>, the following theorem explicitly states that (⊕) has to be associative and commutative and have z as its identity, in a restricted domain.
If aggregate z (⊗) (⊕) = return · foldr (⊗) z · concat,
we have that (⊕), when restricted to values in Img (foldr (⊗) z), is associative, commutative, and has z as its identity.
In the discussion below, let x, y, and w be in Img (foldr (⊗) z). That is, there exist xs, ys, and ws such that x = foldr (⊗) z xs,
y = foldr (⊗) z ys, and w = foldr (⊗) z ws.
Identity. We reason:

    x
 =  foldr (⊗) z (concat [xs])
 =    { Lemma <ref> }
    foldr (⊕) z (map (foldr (⊗) z) [xs])
 =  x ⊕ z .

Thus z is a right identity of (⊕). Similarly,

    x
 =  foldr (⊗) z (concat [[], xs])
 =    { Lemma <ref> }
    foldr (⊕) z (map (foldr (⊗) z) [[], xs])
 =  z ⊕ (x ⊕ z)
 =  z ⊕ x .

Thus z is also a left identity of (⊕).
Commutativity. We reason:

    x ⊕ y
 =  x ⊕ (y ⊕ z)
 =  foldr (⊕) z (map (foldr (⊗) z) [xs, ys])
 =    { Lemma <ref>, [ys, xs] being a result of perm [xs, ys] }
    foldr (⊕) z (map (foldr (⊗) z) [ys, xs])
 =  y ⊕ (x ⊕ z)
 =  y ⊕ x .

Associativity. We reason:

    x ⊕ (y ⊕ w)
 =  foldr (⊕) z (map (foldr (⊗) z) [xs, ys, ws])
 =    { Lemma <ref>, [ws, xs, ys] being a result of perm [xs, ys, ws] }
    foldr (⊕) z (map (foldr (⊗) z) [ws, xs, ys])
 =  w ⊕ (x ⊕ y)
 =    { commutativity }
    (x ⊕ y) ⊕ w .
If aggregate z (⊗) (⊕) = return · foldr (⊗) z · concat,
we have foldr (⊗) z = hom (⊕) (⊗ z) z.
Apparently foldr (⊗) z [] = z and foldr (⊗) z [x] = x ⊗ z.
We are left with proving the case for concatenation.

    foldr (⊗) z (xs ++ ys)
 =  foldr (⊗) z (concat [xs, ys])
 =    { Lemma <ref> }
    foldr (⊕) z (map (foldr (⊗) z) [xs, ys])
 =  foldr (⊗) z xs ⊕ (foldr (⊗) z ys ⊕ z)
 =    { z identity of (⊕) }
    foldr (⊗) z xs ⊕ foldr (⊗) z ys .
Given (⊗) :: a → b → b, (⊕) :: b → b → b, and z :: b.
We have aggregate z (⊗) (⊕) = return · foldr (⊗) z · concat if and only
if (⊕), with z, forms a commutative monoid on Img (foldr (⊗) z), and
that x ⊗ (y ⊕ w) = (x ⊗ y) ⊕ w for all x, and y and w in Img (foldr (⊗) z).
A conclusion following from Theorem <ref>,
Theorem <ref>, and Theorem <ref>.
§.§.§ Acknowledgements
In around late 2016,
Yu-Fang Chen, Chih-Duo Hong, Ondřej Lengál,
Nishant Sinha and Bow-Yaw Wang invited me into their project
formalising Spark. It was what inspired my interest in reasoning about monads, which led to a number of subsequent works.
The initial proofs of properties of aggregate and other combinators were done by Ondřej Lengál, without using monads.
<cit.> modelled a hierarchy of monadic effects in Coq. The formalisation was applied to verify a number of equational proofs of monadic programs, including some of the proofs in an earlier version of this report. I am solely responsible for any remaining errors, however.
[Affeldt, Nowak, and Saikawa 2019] Reynald Affeldt, David Nowak, and Takafumi Saikawa. 2019. A hierarchy of monadic effects for program verification using equational reasoning. In Mathematics of Program Construction, Graham Hutton (Ed.).
[Bird] Richard S. Bird. An introduction to the theory of lists. In Logic of Programming and Calculi of Discrete Design, Manfred Broy (Ed.). Number 36 in NATO ASI Series F. Springer-Verlag, pages 3–42.
[Chen et al. 2017] Yu-Fang Chen, Chih-Duo Hong, Ondřej Lengál, Shin-Cheng Mu, Nishant Sinha, and Bow-Yaw Wang. 2017. An executable sequential specification for Spark aggregation. In International Conference on Networked Systems. Springer-Verlag.
[Fischer et al. 2011] Sebastian Fischer, Oleg Kiselyov, and Chung-chieh Shan. 2011. Purely functional lazy nondeterministic programming. Journal of Functional Programming 21(4–5) (September 2011), pages 413–465.
[Gibbons and Hinze 2011] Jeremy Gibbons and Ralf Hinze. 2011. Just do it: simple monadic equational reasoning. In International Conference on Functional Programming, Olivier Danvy (Ed.). ACM Press, pages 2–14.
[Kiselyov] Oleg Kiselyov. Laws of MonadPlus.
[Voigtländer] Janis Voigtländer. Free theorems involving type constructor classes. In International Conference on Functional Programming, Andrew Tolmach (Ed.). ACM Press, pages 173–184.
[Zaharia et al. 2012] Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, and Ion Stoica. 2012. Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing. In Networked Systems Design and Implementation, Steven Gribble and Dina Katabi (Eds.). USENIX.
§ MISCELLANEOUS PROOFS
Proving (<ref>): f =<< (g ⟨$⟩ m) = (f · g) =<< m.
We reason:

    f =<< (g ⟨$⟩ m)
 =  f =<< ((return · g) =<< m)
 =    { monad law (<ref>) }
    (λx → f =<< return (g x)) =<< m
 =    { monad law (<ref>) }
    (λx → f (g x)) =<< m
 =  (f · g) =<< m .

Proving (<ref>): f ⟨$⟩ (g ⟨$⟩ m) = (f · g) ⟨$⟩ m.
We reason:

    f ⟨$⟩ (g ⟨$⟩ m)
 =  (return · f) =<< (g ⟨$⟩ m)
 =    { by (<ref>) }
    (return · f · g) =<< m
 =  (f · g) ⟨$⟩ m .

For the next results we prove a lemma:
\begin{align}
\ensuremath{(\Varid{f}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}})\mathbin{\cdot}(\Varid{g}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}})} &= \ensuremath{(((\Varid{f}\mathbin{\hstretch{0.7}{=\!\!<\!\!<}})\mathbin{\cdot}\Varid{g})\mathbin{\hstretch{0.7}{=\!\!<\!\!<}})} \mbox{~~.} \label{eq:bind-comp-bind}
\end{align}

    (f =<<) · (g =<<)
 =  (λm → f =<< (g =<< m))
 =    { monad law (<ref>) }
    (λm → (λy → f =<< g y) =<< m)
 =  (((f =<<) · g) =<<) .

Proving (<ref>): f ⟨•⟩ (g ⟨•⟩ h) = (f · g) ⟨•⟩ h.
We reason:

    f ⟨•⟩ (g ⟨•⟩ h)
 =  (λx → f ⟨$⟩ (g ⟨$⟩ h x))
 =    { by (<ref>) }
    (λx → (f · g) ⟨$⟩ h x)
 =  (f · g) ⟨•⟩ h .

Proving (<ref>): f <=< (g ⟨•⟩ h) = (f · g) <=< h.
We reason:

    f <=< (g ⟨•⟩ h)
 =  (λx → f =<< (g ⟨$⟩ h x))
 =    { by (<ref>) }
    (λx → (f · g) =<< h x)
 =  (f · g) <=< h .

Proving (<ref>): f ⟨•⟩ (g <=< h) = (f ⟨•⟩ g) <=< h.
We reason:

    f ⟨•⟩ (g <=< h)
 =  (λx → f ⟨$⟩ (g =<< h x))
 =    { by (<ref>) }
    (λx → (f ⟨•⟩ g) =<< h x)
 =  (f ⟨•⟩ g) <=< h .
Proof of Lemma <ref>
A ping-pong proof.
Direction $(\Rightarrow)$. Let h = hom (⊕) k z; prove foldr (⊕) z · map h = h · concat by induction on the input.
Case []:

    foldr (⊕) z (map h [])
 =  foldr (⊕) z []
 =  z
 =  h (concat []) .

Case (xs:xss):

    foldr (⊕) z (map h (xs:xss))
 =  h xs ⊕ foldr (⊕) z (map h xss)
 =    { induction }
    h xs ⊕ h (concat xss)
 =    { h a homomorphism }
    h (concat (xs:xss)) .

Direction $(\Leftarrow)$. Assuming foldr (⊕) z · map h = h · concat, prove that h = hom (⊕) k z, where k x = h [x].
Case empty list:

    h []
 =  h (concat [])
 =  foldr (⊕) z (map h [])
 =  z .

Case singleton list: certainly h [x] = k x.
Case concatenation:

    h (xs ++ ys)
 =  h (concat [xs, ys])
 =  foldr (⊕) z (map h [xs, ys])
 =  h xs ⊕ (h ys ⊕ z)
 =    { z identity of (⊕) }
    h xs ⊕ h ys .
Proof of Lemma <ref>
A ping-pong proof.
Direction $(\Leftarrow)$. We show that foldr (⊗) z (xs ++ ys) = foldr (⊗) z xs ⊕ foldr (⊗) z ys, provided that x ⊗ (y ⊕ w) = (x ⊗ y) ⊕ w.
To begin with, note that
\begin{align*}
\ensuremath{\Varid{foldr}\;(\otimes)\;\Varid{z}\;(\Varid{xs}\mathbin{+\!\!\!\!\!+}\Varid{ys})} &= \ensuremath{\Varid{foldr}\;(\otimes)\;(\Varid{foldr}\;(\otimes)\;\Varid{z}\;\Varid{ys})\;\Varid{xs}} \mbox{~~.}
\label{eq:fold-cat}
\end{align*}
The aim is thus to prove that

    foldr (⊗) (foldr (⊗) z ys) xs = foldr (⊗) z xs ⊕ foldr (⊗) z ys .

We perform an induction on xs. The case when xs = [] trivially
holds, z being a left identity of (⊕). For xs := x:xs, we reason:

    foldr (⊗) (foldr (⊗) z ys) (x:xs)
 =  x ⊗ foldr (⊗) (foldr (⊗) z ys) xs
 =    { induction }
    x ⊗ (foldr (⊗) z xs ⊕ foldr (⊗) z ys)
 =    { assumption }
    (x ⊗ foldr (⊗) z xs) ⊕ foldr (⊗) z ys
 =  foldr (⊗) z (x:xs) ⊕ foldr (⊗) z ys .

Direction $(\Rightarrow)$. Given foldr (⊗) z (xs ++ ys) = foldr (⊗) z xs ⊕ foldr (⊗) z ys,
prove that x ⊗ (y ⊕ w) = (x ⊗ y) ⊕ w for y and w in the range of foldr (⊗) z.
Let y = foldr (⊗) z xs and w = foldr (⊗) z ys for some xs
and ys. We reason:

    x ⊗ (foldr (⊗) z xs ⊕ foldr (⊗) z ys)
 =    { assumption }
    x ⊗ foldr (⊗) z (xs ++ ys)
 =  foldr (⊗) z (x : xs ++ ys)
 =    { assumption }
    foldr (⊗) z (x:xs) ⊕ foldr (⊗) z ys
 =  (x ⊗ foldr (⊗) z xs) ⊕ foldr (⊗) z ys
 =  (x ⊗ y) ⊕ w .
Calculating a Backtracking Algorithm:
An Exercise in Monadic Program Derivation
Shin-Cheng Mu
Institute of Information Science
Academia Sinica
Equational reasoning is among the most important tools that functional programming provides us.
Curiously, relatively little attention has been paid to reasoning about monadic programs.
In this report we derive a backtracking algorithm for problem specifications
that use a monadic unfold to generate possible solutions, which are filtered using a scanl-like predicate.
We develop theorems that convert a variation of scanl to a foldr that uses the state monad, as well as theorems constructing hylomorphism.
The algorithm is used to solve the n-queens puzzle, our running example.
The aim is to develop theorems and patterns useful for the derivation of monadic programs, focusing on the intricate interaction between state and non-determinism.
§ INTRODUCTION
Equational reasoning is among the many gifts that functional programming offers us. Functional programs preserve a rich set of mathematical properties, which not only helps to prove properties about programs in a relatively simple and elegant manner, but also aids the development of programs. One may refine a clear but inefficient specification, stepwise through equational reasoning, to an efficient program whose correctness may not be obvious without such a derivation.
It is misleading if one says that functional programming does not allow side effects.
In fact, even a purely functional language may allow a variety of side effects — in a rigorous, mathematically manageable manner.
Since the introduction of monads into the functional programming community <cit.>, it has become the main framework in which effects are modelled.
Various monads were developed for different effects, from general ones such as IO, state, non-determinism, exception, continuation, environment passing, to specific purposes such as parsing.
Much research has also been devoted to producing practical monadic programs.
It is also a wrong impression that impure programs are bound to be difficult to reason about.
In fact, the laws of monads and their operators are sufficient to prove quite a number of useful properties about monadic programs.
The validity of these properties, proved using only these laws, is independent from the particular implementation of the monad.
This report follows the trail of Hutton and Fulger HuttonFulger:08:Reasoning and Gibbons and Hinze GibbonsHinze:11:Just, aiming to develop theorems and patterns that are useful for reasoning about monadic programs.
We focus on two effects — non-determinism and state.
In this report we consider problem specifications that use a monadic unfold to generate possible solutions, which are filtered using a scanl-like predicate.
We develop theorems that convert a variation of scanl to a foldr that uses the state monad, as well as theorems constructing hylomorphism.
The algorithm is used to solve the n-queens puzzle, our running example.
While the interaction between non-determinism and state is known to be intricate,
when each non-deterministic branch has its own local state, we get a relatively well-behaved monad that provides a rich collection of properties to work with.
The situation when the state is global and shared by all non-deterministic branches is much more complex, and is dealt with in a subsequent paper <cit.>.
§ MONAD AND EFFECT OPERATORS
A monad consists of a type constructor m and two operators return
and “bind” (>>=), often modelled by the following Haskell type class declaration:
This report uses type classes to be explicit about the effects a program uses.
For example, programs using non-determinism are labelled with the constraint MonadPlus m.
The style of reasoning proposed in this report is not tied to type classes or Haskell,
and we do not strictly follow the particularities of type classes in the current Haskell standard.
For example, we overlook the particularities that a Monad must also be Applicative, be a Functor, and that functional dependencies are needed in a number of places in this report.
[B]𝐜𝐥𝐚𝐬𝐬 Monad m 𝐰𝐡𝐞𝐫𝐞[E]
[22]m a[E]
[13]::m a→(a→m b)→m b .[E]
They are supposed to satisfy the following monad laws:
\begin{align}
\mathit{return}\;x \mathrel{>\!\!>\!\!=} f &= f\;x\mbox{~~,} \label{eq:monad-bind-ret}\\
m \mathrel{>\!\!>\!\!=} \mathit{return} &= m \mbox{~~,} \label{eq:monad-ret-bind}\\
(m \mathrel{>\!\!>\!\!=} f) \mathrel{>\!\!>\!\!=} g &= m \mathrel{>\!\!>\!\!=} (\lambda x \to f\;x \mathrel{>\!\!>\!\!=} g) \mbox{~~.} \label{eq:monad-assoc}
\end{align}
We also define m >> n = m >>= \_ -> n, which has type m a -> m b -> m b.
Kleisli composition, denoted by (>=>), composes two monadic operations f :: a -> m b and g :: b -> m c into an operation a -> m c.
The operator (<$>) applies a pure function to a monadic value:

```haskell
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
(f >=> g) x = f x >>= g

(<$>) :: Monad m => (a -> b) -> m a -> m b
f <$> n = n >>= (return . f)
```
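As a quick illustration (our own, not from the report), the two operators behave as follows in the list monad, one standard Monad instance; the particular functions are hypothetical examples:

```haskell
import Prelude hiding ((<$>))

-- Kleisli composition and <$>, exactly as defined above.
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
(f >=> g) x = f x >>= g

(<$>) :: Monad m => (a -> b) -> m a -> m b
f <$> n = n >>= (return . f)

main :: IO ()
main = do
  -- In the list monad, (>=>) runs both steps non-deterministically:
  print (((\x -> [x, x + 1]) >=> (\y -> [10 * y])) 3)  -- [30,40]
  -- (<$>) maps a pure function over every result:
  print ((+ 1) <$> [1, 2, 3])                          -- [2,3,4]
```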
The following properties can be proved from their definitions and the monad laws:
\begin{align}
(f \cdot g) \mathbin{\langle\$\rangle} m &= f \mathbin{\langle\$\rangle} (g \mathbin{\langle\$\rangle} m) \mbox{~~,}
\label{eq:comp-ap-ap}\\
(f \mathbin{\langle\$\rangle} m) \mathrel{>\!\!>\!\!=} g &= m \mathrel{>\!\!>\!\!=} (g \cdot f) \mbox{~~,}
\label{eq:comp-bind-ap}\\
f \mathbin{\langle\$\rangle} (m \mathrel{>\!\!>\!\!=} k) &= m \mathrel{>\!\!>\!\!=} (\lambda x \to f \mathbin{\langle\$\rangle} k\;x) \mbox{~~, $x$ not free in $f$.}
\label{eq:ap-bind-ap}
\end{align}
Effects and Effect Operators
Monads are used to model effects, and each effect comes with its collection of operators. For example, to model non-determinism we assume two operators ∅ and (⫾), respectively modelling failure and choice.
A state effect comes with operators get and put, which respectively read from and write to an unnamed state variable.
A program may involve more than one effect.
In Haskell, the type class constraint MonadPlus m in the type of a program denotes that the program may use ∅ or (⫾), and possibly other effects, while MonadState s m denotes that it may use get and put.
Some theorems in this report, however, apply only to programs that, for example, use non-determinism and no other effects.
In such cases we will note in the text that the theorem applies only to programs "whose only effect is non-determinism."
The set of effects a program uses can always be statically inferred from its syntax.
Total, Finite Programs As in other literature on program derivation, we assume a set-theoretic semantics in which functions are total. We thus have the following laws regarding branching:
\begin{align}
f\;(\mathbf{if}\;p\;\mathbf{then}\;e_{1}\;\mathbf{else}\;e_{2}) &= \mathbf{if}\;p\;\mathbf{then}\;f\;e_{1}\;\mathbf{else}\;f\;e_{2} \mbox{~~,}
\label{eq:if-distr}\\
\mathbf{if}\;p\;\mathbf{then}\;(\lambda x \to e_{1})\;\mathbf{else}\;(\lambda x \to e_{2}) &=
\lambda x \to \mathbf{if}\;p\;\mathbf{then}\;e_{1}\;\mathbf{else}\;e_{2} \mbox{~~.}\label{eq:if-fun}
\end{align}
Lists in this report are inductive types, and unfolds generate finite lists too. Non-deterministic choices are finitely branching.
Given a concrete input, a function always expands to a finitely-sized expression consisting of syntax allowed by its type. We may therefore prove properties of a monadic program by structural induction over its syntax.
§ EXAMPLE: THE N-QUEENS PROBLEM
Reasoning about monadic programs gets more interesting when more than one effect is involved.
Backtracking algorithms make good examples of programs that are stateful and non-deterministic, and the n-queens problem, also dealt with by Gibbons and Hinze <cit.>, is among the most well-known examples of backtracking.[Curiously, Gibbons and Hinze <cit.> did not finish their derivation and stopped at a program that exhaustively generates all permutations and tests each of them. Perhaps it was sufficient to demonstrate their point.]
In this section we present a specification of the problem, before transforming it into the form unfoldM p f >=> filt (all ok . scanl_+ (⊕) st) (whose components will be defined later), which is the general form of problems we will deal with in this report.
§.§ Non-Determinism
Since the n-queens problem will be specified by a non-deterministic program,
we discuss non-determinism before presenting the specification.
We assume two operators ∅ and (⫾):

```haskell
class Monad m => MonadPlus m where
  mzero :: m a                 -- failure, written ∅
  mplus :: m a -> m a -> m a   -- choice, written (⫾)
```
The former denotes failure, while m ⫾ n denotes that the computation may yield either m or n. What laws they should satisfy, however, can be a tricky issue. As discussed by Kiselyov <cit.>, it eventually comes down to what we use the monad for. It is usually expected that ∅ and (⫾) form a monoid. That is, (⫾) is associative, with ∅ as its unit:
\begin{align}
(m \mathbin{\talloblong} n) \mathbin{\talloblong} k ~&=~ m \mathbin{\talloblong} (n \mathbin{\talloblong} k) \mbox{~~,}
\label{eq:mplus-assoc}\\
\emptyset \mathbin{\talloblong} m ~=~ m ~&=~ m \mathbin{\talloblong} \emptyset \mbox{~~.}
\label{eq:mzero-mplus}
\end{align}
It is also assumed that monadic bind distributes into (⫾) from the end,
while ∅ is a left zero for (>>=):
\begin{align}
(m \mathbin{\talloblong} n) \mathrel{>\!\!>\!\!=} f &= (m \mathrel{>\!\!>\!\!=} f) \mathbin{\talloblong} (n \mathrel{>\!\!>\!\!=} f) \mbox{~~,} \label{eq:bind-mplus-dist}\\
\emptyset \mathrel{>\!\!>\!\!=} f &= \emptyset \mbox{~~.} \label{eq:bind-mzero-zero}
\end{align}
We will refer to the laws (<ref>), (<ref>),
(<ref>), (<ref>) collectively as the
nondeterminism laws.
Other properties regarding ∅ and (⫾) will be introduced when needed.
The monadic function filt p x returns x if p x holds, and fails otherwise:

```haskell
filt :: MonadPlus m => (a -> Bool) -> a -> m a
filt p x = guard (p x) >> return x
```

where guard is a standard monadic function defined by:

```haskell
guard :: MonadPlus m => Bool -> m ()
guard b = if b then return () else mzero
```
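To make the behaviour concrete (a sketch of ours, instantiating MonadPlus at the list monad, where ∅ is [] and (⫾) is (++)):

```haskell
import Control.Monad (guard)

-- filt as defined in the report, specialised to the list monad.
filt :: (a -> Bool) -> a -> [a]
filt p x = guard (p x) >> return x

main :: IO ()
main = do
  print (filt even 4)          -- succeeds: [4]
  print (filt even 3)          -- fails:    []
  -- After a non-deterministic generator, filt keeps the candidates
  -- satisfying the predicate:
  print ([1 .. 6] >>= filt even)  -- [2,4,6]
```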
The following properties allow us to move guard around.
Their proofs are given in Appendix <ref>.
\begin{align}
\mathit{guard}\;(p \wedge q) ~&=~ \mathit{guard}\;p \mathbin{>\!\!>} \mathit{guard}\;q \mbox{~~,}
\label{eq:guard-conj} \\
\mathit{guard}\;p \mathbin{>\!\!>} (f \mathbin{\langle\$\rangle} m) ~&=~ f \mathbin{\langle\$\rangle} (\mathit{guard}\;p \mathbin{>\!\!>} m) \mbox{~~,}
\label{eq:guard-fmap} \\
\mathit{guard}\;p \mathbin{>\!\!>} m ~=~ m \mathrel{>\!\!>\!\!=}& \,(\lambda x \to \mathit{guard}\;p \mathbin{>\!\!>} \mathit{return}\;x)
\mbox{,~~~~if~} m \mathbin{>\!\!>} \emptyset = \emptyset \mbox{~~.}
\label{eq:guard-commute}
\end{align}
§.§ Specification
$\begin{array}{rrrrrrrrr}
& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\
0 & . & . & . & . & . & Q & . & .\\
1 & . & . & . & Q & . & . & . & .\\
2 & . & . & . & . & . & . & Q & .\\
3 & Q & . & . & . & . & . & . & .\\
4 & . & . & . & . & . & . & . & Q\\
5 & . & Q & . & . & . & . & . & .\\
6 & . & . & . & . & Q & . & . & .\\
7 & . & . & Q & . & . & . & . & .
\end{array}$
$\begin{array}{r|rrrrrrrr}
& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\ \hline
0 & 0 & 1 & 2 & 3 & 4 & . & . & .\\
1 & 1 & 2 & 3 & 4 & . & . & . & .\\
2 & 2 & 3 & 4 & . & . & . & . & .\\
3 & 3 & 4 & . & . & . & . & . & .\\
4 & 4 & . & . & . & . & . & . & .\\
5 & . & . & . & . & . & . & . & 12\\
6 & . & . & . & . & . & . & 12& 13\\
7 & . & . & . & . & . & 12& 13& 14
\end{array}$
$\begin{array}{r|rrrrrrrr}
& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\ \hline
0 & 0 &-1 & . & . & . &-5 &-6 &-7\\
1 & . & 0 &-1 & . & . & . &-5 &-6\\
2 & . & . & 0 &-1 & . & . & . &-5\\
3 & 3 & . & . & 0 & . & . & . & .\\
4 & 4 & 3 & . & . & 0 & . & . & .\\
5 & 5 & 4 & 3 & . & . & 0 & . & .\\
6 & 6 & 5 & 4 & 3 & . & . & 0 & .\\
7 & 7 & 6 & 5 & 4 & 3 & . & . & 0
\end{array}$
Figure: (a) This placement can be represented by [3,5,7,1,6,0,2,4]. (b) Up diagonals.
(c) Down diagonals.
The aim of the puzzle is to place n queens on an n by n chess board such that no two queens can attack each other. Given n, we number the rows and columns by 0 to n-1. Since all queens should be placed on distinct rows and distinct columns, a potential solution can be represented by a permutation xs of the list [0 .. n-1], such that xs !! i = j denotes that the queen on the $i$th column is placed on the $j$th row (see Figure <ref>(a)). In this representation queens cannot be put on the same row or column, and the problem is reduced to filtering, among the permutations of [0 .. n-1], those placements in which no two queens are put on the same diagonal. The specification can be written as a non-deterministic program:

```haskell
queens :: MonadPlus m => Int -> m [Int]
queens n = perm [0 .. n-1] >>= filt safe
```
where perm non-deterministically computes a permutation of its input, and the pure function safe determines whether no two queens are on the same diagonal.
This specification of queens generates all the permutations before checking them, one by one, in two separate phases. We wish to fuse the two phases and produce a faster implementation. The overall idea is to define perm in terms of an unfold, transform filt safe into a fold, and fuse the two phases into a hylomorphism <cit.>. During the fusion, some unsafe choices can be pruned off earlier, speeding up the computation.
The monadic function perm can be written either as a fold or as an unfold.
For this problem we choose the latter.
The function select non-deterministically splits a list into a pair containing one chosen element and the rest:

```haskell
select :: MonadPlus m => [a] -> m (a, [a])
select []     = mzero
select (x:xs) = return (x, xs) `mplus` ((id >< (x:)) <$> select xs)
```

where (f >< g) (x, y) = (f x, g y).
For example, select [1,2,3] yields one of (1,[2,3]), (2,[1,3]), and (3,[1,2]). The function call unfoldM p f y generates a list from a seed y. If p y holds, the generation stops. Otherwise an element x and a new seed z are generated using f. It is like the usual unfoldr, apart from the fact that f, and thus the result, is monadic:

```haskell
unfoldM :: Monad m => (b -> Bool) -> (b -> m (a, b)) -> b -> m [a]
unfoldM p f y
  | p y       = return []
  | otherwise = f y >>= \(x, z) -> (x:) <$> unfoldM p f z
```

Given these definitions, perm can be defined by:

```haskell
perm :: MonadPlus m => [a] -> m [a]
perm = unfoldM null select
```
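To see these definitions at work, the following sketch (ours) instantiates MonadPlus at the list monad, where perm enumerates all permutations:

```haskell
import Control.Monad (MonadPlus, mzero, mplus)
import Data.List (sort, permutations)

-- (><) maps two functions over the components of a pair.
(><) :: (a -> c) -> (b -> d) -> (a, b) -> (c, d)
(f >< g) (x, y) = (f x, g y)

-- select and perm as in the report.
select :: MonadPlus m => [a] -> m (a, [a])
select []     = mzero
select (x:xs) = return (x, xs) `mplus` ((id >< (x:)) <$> select xs)

unfoldM :: Monad m => (b -> Bool) -> (b -> m (a, b)) -> b -> m [a]
unfoldM p f y
  | p y       = return []
  | otherwise = f y >>= \(x, z) -> (x:) <$> unfoldM p f z

perm :: MonadPlus m => [a] -> m [a]
perm = unfoldM null select

main :: IO ()
main = do
  print (select [1, 2, 3] :: [(Int, [Int])])  -- [(1,[2,3]),(2,[1,3]),(3,[1,2])]
  print (length (perm [1, 2, 3] :: [[Int]]))  -- 6
  -- perm agrees with Data.List.permutations up to order:
  print (sort (perm [1, 2, 3]) == sort (permutations [1, 2, 3]))  -- True
```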
§.§ Safety Check in a scanl
We have yet to define safe.
Representing a placement as a permutation allows an easy way to check whether two queens are put on the same diagonal.
An 8 by 8 chess board has 15 up diagonals (those running between bottom-left and top-right). Let them be indexed by 0 to 14 (see Figure <ref>(b)).
If we apply ups (defined below) to a permutation, we get the indices of the up diagonals where the queens are placed.
Similarly, there are 15 down diagonals (those running between top-left and bottom-right).
By applying downs to a permutation, we get the indices of their down diagonals (indexed by -7 to 7;
see Figure <ref>(c)).
A placement is safe if the diagonals contain no duplicates:

```haskell
ups, downs :: [Int] -> [Int]
ups   xs = zipWith (+) [0..] xs
downs xs = zipWith (-) [0..] xs

safe :: [Int] -> Bool
safe xs = nodup (ups xs) && nodup (downs xs)
```

where nodup determines whether there is no duplication in a list.
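The report leaves nodup unspecified; one straightforward definition (our own), together with a check of the placement from Figure (a), might look as follows:

```haskell
import Data.List (nub)

ups, downs :: [Int] -> [Int]
ups   xs = zipWith (+) [0..] xs
downs xs = zipWith (-) [0..] xs

-- One possible nodup: a list has no duplicates exactly when
-- removing duplicates does not shrink it.
nodup :: [Int] -> Bool
nodup xs = length (nub xs) == length xs

safe :: [Int] -> Bool
safe xs = nodup (ups xs) && nodup (downs xs)

main :: IO ()
main = do
  print (ups  [3,5,7,1,6,0,2,4])  -- up-diagonal index of each queen
  print (safe [3,5,7,1,6,0,2,4])  -- the placement of Figure (a): True
  print (safe [0,1,2,3])          -- all queens on one down diagonal: False
```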
The eventual goal is to transform filt safe into a foldr, to be fused with perm, an unfold that generates a list from left to right.
In order to do so, it helps if safe can be expressed as a computation that processes the list left-to-right, that is, a foldl or a scanl.
To derive such a definition we use the standard trick of introducing accumulating parameters, generalising safe to safeAcc below:

```haskell
safeAcc :: (Int, [Int], [Int]) -> [Int] -> Bool
safeAcc (i, us, ds) xs =
    nodup us' && nodup ds' &&
    all (`notElem` us) us' && all (`notElem` ds) ds'
  where us' = zipWith (+) [i..] xs
        ds' = zipWith (-) [i..] xs
```
It is a generalisation because safe = safeAcc (0, [], []).
By plain functional calculation, one may conclude that safeAcc can be defined using a variation of scanl:

```haskell
safeAcc (i, us, ds) = all ok . scanl_+ (⊕) (i, us, ds)
  where (i, us, ds) ⊕ x = (i + 1, (i + x) : us, (i - x) : ds)
        ok (_, x : us, y : ds) = x `notElem` us && y `notElem` ds
```

where scanl_+ is like the standard scanl, but applies the operator to all non-empty prefixes of a list.
It can be specified by:

```haskell
scanl_+ :: (b -> a -> b) -> b -> [a] -> [b]
scanl_+ (⊕) st = tail . scanl (⊕) st
```
and it also adopts an inductive definition:
```haskell
scanl_+ (⊕) st []     = []
scanl_+ (⊕) st (x:xs) = (st ⊕ x) : scanl_+ (⊕) (st ⊕ x) xs
```
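For instance, instantiating the operator to addition (an illustration of ours, with scanl_+ spelled scanlPlus in ASCII), scanl_+ returns the running result over every non-empty prefix:

```haskell
-- scanl_+ of the report, under an ASCII name.
scanlPlus :: (b -> a -> b) -> b -> [a] -> [b]
scanlPlus op st = tail . scanl op st

main :: IO ()
main = do
  print (scanlPlus (+) 0 [1, 2, 3, 4])  -- running sums: [1,3,6,10]
  print (scanlPlus (+) 0 ([] :: [Int])) -- []
```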
Operationally, scanl_+ (⊕) (i, us, ds) examines the list from left to right, while keeping a state (i, us, ds), where i is the current position being examined, while us and ds are respectively the indices of all the up and down diagonals encountered so far. Indeed, in a function call scanl_+ (⊕) st xs, the value st can be seen as a "state" that is explicitly carried around. This naturally leads to the idea: can we convert a scanl_+ into a monadic program that stores st in its state? This is the goal of the next section.
As a summary of this section, after defining queens, we have transformed it into the following form:

```haskell
unfoldM p f >=> filt (all ok . scanl_+ (⊕) st)
```

This is the form of problems we will consider for the rest of this report: problems whose solutions are generated by a monadic unfold, before being filtered by a predicate all ok that takes the result of a scanl_+.
§ FROM PURE TO STATEFUL
The aim of this section is to turn the filtering phase into a foldr. For that we introduce a state monad to pass the state around.
The state effect provides two operators:

```haskell
class Monad m => MonadState s m where
  get :: m s
  put :: s -> m ()
```

where get retrieves the state, while put overwrites the state with the given value. They are supposed to satisfy the state laws:
\begin{align}
\mathit{put}\;st \mathbin{>\!\!>} \mathit{put}\;st' &= \mathit{put}\;st' \mbox{~~,}\\
\mathit{put}\;st \mathbin{>\!\!>} \mathit{get} &= \mathit{put}\;st \mathbin{>\!\!>} \mathit{return}\;st \mbox{~~,}\\
\mathit{get} \mathrel{>\!\!>\!\!=} \mathit{put} &= \mathit{return}\;() \mbox{~~,}\\
\mathit{get} \mathrel{>\!\!>\!\!=} (\lambda st \to \mathit{get} \mathrel{>\!\!>\!\!=} k\;st) &= \mathit{get} \mathrel{>\!\!>\!\!=} (\lambda st \to k\;st\;st) \mbox{~~.}
\end{align}
§.§ From scanl_+ to Monadic scanl
Consider the following monadic variation of scanl_+:

```haskell
scanlM :: MonadState s m => (s -> a -> s) -> s -> [a] -> m [s]
scanlM (⊕) st xs = put st >> foldr (⊗) (return []) xs
  where x ⊗ n = get >>= \st -> let st' = st ⊕ x
                               in (st':) <$> (put st' >> n)
```

It behaves like scanl_+, but stores the accumulated information in a monadic state, which is retrieved and updated in each step. The main body of the computation is implemented using a foldr.
To relate scanl_+ and scanlM, one would like to have return (scanl_+ (⊕) st xs) = scanlM (⊕) st xs.
However, the lefthand side does not alter the state, while the righthand side does.
One of the ways to make the equality hold is to manually back up and restore the state:

```haskell
protect :: MonadState s m => m b -> m b
protect n = get >>= \ini -> n >>= \x -> put ini >> return x
```

We have:
For all (⊕), st, and xs,

    return (scanl_+ (⊕) st xs) = protect (scanlM (⊕) st xs) .
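A quick sanity check of this theorem (our own sketch): we hand-roll a minimal state monad so the example has no external dependencies (the mtl package's Control.Monad.State would serve equally well), and observe that protect runs the stateful scan but leaves the state as it found it:

```haskell
-- A minimal state monad.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f m = State (\s -> let (a, s') = runState m s in (f a, s'))
instance Applicative (State s) where
  pure a = State (\s -> (a, s))
  mf <*> ma = State (\s -> let (f, s1) = runState mf s
                               (a, s2) = runState ma s1 in (f a, s2))
instance Monad (State s) where
  m >>= k = State (\s -> let (a, s') = runState m s in runState (k a) s')

get :: State s s
get = State (\s -> (s, s))

put :: s -> State s ()
put s = State (\_ -> ((), s))

protect :: State s b -> State s b
protect n = get >>= \ini -> n >>= \x -> put ini >> return x

scanlM :: (s -> a -> s) -> s -> [a] -> State s [s]
scanlM op st xs = put st >> foldr step (return []) xs
  where step x n = get >>= \st0 -> let st' = st0 `op` x
                                   in (st':) <$> (put st' >> n)

scanlP :: (b -> a -> b) -> b -> [a] -> [b]   -- scanl_+ of the report
scanlP op st = tail . scanl op st

main :: IO ()
main = do
  print (runState (scanlM (+) 0 [1,2,3]) 42)            -- ([1,3,6],6): state leaks
  print (runState (protect (scanlM (+) 0 [1,2,3])) 42)  -- ([1,3,6],42): state restored
  print (scanlP (+) 0 [1,2,3])                          -- [1,3,6]: the pure scan
```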
By induction on xs. We present the case xs := x : xs, writing st' for st ⊕ x:

      protect (scanlM (⊕) st (x:xs))
  =     { definitions of protect, scanlM, and (⊗) }
      get >>= \ini -> put st >> get >>= \st ->
      ((st':) <$> (put st' >> foldr (⊗) (return []) xs)) >>= \r ->
      put ini >> return r
  =     { state law: put st >> get = put st >> return st }
      get >>= \ini -> put st >>
      ((st':) <$> (put st' >> foldr (⊗) (return []) xs)) >>= \r ->
      put ini >> return r
  =     { by (<ref>) and (<ref>), moving the pure (st':) out }
      (st':) <$> ( get >>= \ini -> put st >> put st' >>
                   foldr (⊗) (return []) xs >>= \r ->
                   put ini >> return r )
  =     { state law: put st >> put st' = put st' }
      (st':) <$> ( get >>= \ini -> put st' >> foldr (⊗) (return []) xs
                   >>= \r -> put ini >> return r )
  =     { definitions of protect and scanlM }
      (st':) <$> protect (scanlM (⊕) st' xs)
  =     { induction }
      (st':) <$> return (scanl_+ (⊕) st' xs)
  =     { definition of (<$>) }
      return ((st ⊕ x) : scanl_+ (⊕) (st ⊕ x) xs)
  =     { definition of scanl_+ }
      return (scanl_+ (⊕) st (x:xs)) .

This proof is instructive due to the use of properties (<ref>) and (<ref>), and the fact that (st':), being a pure function, can be easily moved around using (<ref>).
We have learned that scanl_+ can be turned into scanlM, defined in terms of a stateful foldr.
In the definition of scanlM, state is the only effect involved.
The next task is to transform filt (all ok . scanl_+ (⊕) st) into a foldr.
The operator filt is defined using non-determinism.
Hence the transformation involves the interaction between two effects.
§.§ Right-Distributivity and Local State
We now digress a little to discuss one form of interaction between non-determinism and state.
In this report, we wish that the following two additional properties were valid:
\begin{align}
m \mathrel{>\!\!>\!\!=} (\lambda x \to f_{1}\;x \mathbin{\talloblong} f_{2}\;x) &= (m \mathrel{>\!\!>\!\!=} f_{1}) \mathbin{\talloblong} (m \mathrel{>\!\!>\!\!=} f_{2}) \mbox{~~(right-distributivity),} \label{eq:mplus-bind-dist}\\
m \mathbin{>\!\!>} \emptyset &= \emptyset \mbox{~~(right-zero).} \label{eq:mzero-bind-zero}
\end{align}
Note that the two properties hold for some monads with non-determinism, but not all.
With some implementations of the monad, it is likely that on the lefthand side of (<ref>) the effect of m happens once, while on the righthand side it happens twice. In (<ref>), the m on the lefthand side may incur effects that do not happen on the righthand side.
Having (<ref>) and (<ref>) leads to profound consequences for the semantics and implementation of monadic programs.
To begin with, (<ref>) implies that (⫾) is commutative. To see that, one may for instance let m := return True ⫾ return False, f₁ := \b -> if b then ∅ else q, and f₂ := \b -> if b then p else ∅ in (<ref>); by the nondeterminism laws, the two sides then reduce to p ⫾ q and q ⫾ p respectively.
Implementations of such non-deterministic monads have been studied by Kiselyov <cit.>.
When mixed with state, one consequence of (<ref>) is that put s >> (m ⫾ n) = (put s >> m) ⫾ (put s >> n). That is, m and n get the same state regardless of whether put is performed outside or inside the non-deterministic choice.
Similarly, (<ref>) implies put s >> ∅ = ∅: when a program fails, the changes it performed on the state can be discarded.
These requirements imply that each non-deterministic branch has its own copy of the state.
Therefore, we will refer to (<ref>) and (<ref>) as local state laws in this report, even though they do not explicitly mention the state operators at all!
One monad satisfying the local state laws is s -> [(a, s)], which is the same monad one gets by StateT s [] in the Monad Transformer Library <cit.>.
With effect handling <cit.>, the same monad is obtained if we run the handler for state before that for list.
The advantage of having the local state laws is that we get many useful properties, which make this stateful non-determinism monad preferable for program calculation and reasoning.
Recall, for example, that the right-zero property m >> ∅ = ∅ is the antecedent of (<ref>).
The result can be made stronger: non-determinism commutes with all other effects if we have the local state laws.
Let m and n be two monadic programs such that x does not occur free in n, and y does not occur free in m. We say m and n commute if
\begin{equation} \label{eq:commute}
m \mathrel{>\!\!>\!\!=} \lambda x \to n \mathrel{>\!\!>\!\!=} \lambda y \to f\;x\;y ~~=~~
n \mathrel{>\!\!>\!\!=} \lambda y \to m \mathrel{>\!\!>\!\!=} \lambda x \to f\;x\;y~~.
\end{equation}
We say that m commutes with effect ε if m commutes with any n whose only effects are ε, and that effects ε and δ commute if any m and n commute as long as their only effects are, respectively, ε and δ.
If right-distributivity (<ref>) and right-zero (<ref>) hold
in addition to the monad laws stated before, non-determinism commutes with any effect ε.
Let n be a monadic program whose only effect is non-determinism, and m be any monadic program. The aim is to prove that m and n commute. We proceed by induction on the structure of n.
Case n := return e:

      m >>= \x -> return e >>= \y -> f x y
  =     { monad law (<ref>) }
      m >>= \x -> f x e
  =     { monad law (<ref>) }
      return e >>= \y -> m >>= \x -> f x y .

Case n := n₁ ⫾ n₂:

      m >>= \x -> (n₁ ⫾ n₂) >>= \y -> f x y
  =     { left-distributivity (<ref>) }
      m >>= \x -> (n₁ >>= f x) ⫾ (n₂ >>= f x)
  =     { right-distributivity (<ref>) }
      (m >>= \x -> n₁ >>= f x) ⫾ (m >>= \x -> n₂ >>= f x)
  =     { induction }
      (n₁ >>= \y -> m >>= \x -> f x y) ⫾ (n₂ >>= \y -> m >>= \x -> f x y)
  =     { left-distributivity (<ref>) }
      (n₁ ⫾ n₂) >>= \y -> m >>= \x -> f x y .

Case n := ∅:

      m >>= \x -> ∅ >>= \y -> f x y
  =     { left-zero (<ref>), then right-zero (<ref>) }
      ∅
  =     { left-zero (<ref>) }
      ∅ >>= \y -> m >>= \x -> f x y .
Note We briefly justify proofs by induction on the syntax tree.
Finite monadic programs can be represented by the free monad constructed out of return and the effect operators, which can be represented by an inductively defined data structure, and interpreted by effect handlers <cit.>.
When we say two programs p₁ and p₂ are equal, we mean that they have the same denotation when interpreted by the effect handlers of the corresponding effects, for example by the handlers for non-determinism and state.
Such an equality can be proved by induction on some sub-expression of p₁ or p₂, which is treated like any inductively defined data structure.
A more complete treatment is work in progress.
(End of Note)
§.§ Filtering Using a Stateful, Non-Deterministic Fold
Having dealt with scanl_+ in Section <ref>,
in this section we aim to turn a filter of the form filt (all ok . scanl_+ (⊕) st) into a stateful and non-deterministic foldr.
We calculate, for all ok, (⊕), st, and xs:
      filt (all ok . scanl_+ (⊕) st) xs
  =     { definition of filt }
      guard (all ok (scanl_+ (⊕) st xs)) >> return xs
  =     { monad laws }
      return (scanl_+ (⊕) st xs) >>= \ys ->
      guard (all ok ys) >> return xs
  =     { the theorem relating scanl_+ and scanlM; definition of protect }
      get >>= \ini -> scanlM (⊕) st xs >>= \ys -> put ini >>
      guard (all ok ys) >> return xs
  =     { guard commutes with put: non-determinism commutes with state }
      get >>= \ini -> scanlM (⊕) st xs >>= \ys ->
      guard (all ok ys) >> put ini >> return xs
  =     { definition of protect }
      protect (scanlM (⊕) st xs >>= \ys -> guard (all ok ys) >> return xs) .

Recall that scanlM (⊕) st xs = put st >> foldr (⊗) (return []) xs.
The following theorem fuses a monadic foldr with a guard that uses its result.
Assume that state and non-determinism commute.
Let (⊗) be defined as in scanlM, for any given (⊕). We have that, for all ok and xs:

```haskell
foldr (⊗) (return []) xs >>= \ys -> guard (all ok ys) >> return xs
  = foldr (⊙) (return []) xs
  where x ⊙ m = get >>= \st -> guard (ok (st ⊕ x)) >>
                put (st ⊕ x) >> ((x:) <$> m)
```
Unfortunately we cannot use a foldr fusion theorem, since xs
occurs free in the expression to be fused. Instead we
use a simple induction on xs. For the case xs := x : xs:

      (x ⊗ foldr (⊗) (return []) xs) >>= \ys -> guard (all ok ys) >> return (x:xs)
  =     { definition of (⊗), writing st' for st ⊕ x }
      get >>= \st ->
      ((st':) <$> (put st' >> foldr (⊗) (return []) xs)) >>= \ys ->
      guard (all ok ys) >> return (x:xs)
  =     { by (<ref>) }
      get >>= \st -> put (st ⊕ x) >>
      foldr (⊗) (return []) xs >>= \ys ->
      guard (all ok ((st ⊕ x) : ys)) >> return (x:xs)
  =     { by (<ref>), since all ok ((st ⊕ x) : ys) = ok (st ⊕ x) ∧ all ok ys }
      get >>= \st -> put (st ⊕ x) >>
      foldr (⊗) (return []) xs >>= \ys ->
      guard (ok (st ⊕ x)) >> guard (all ok ys) >> return (x:xs)
  =     { state and non-determinism commute: move guard (ok (st ⊕ x)) forwards }
      get >>= \st -> guard (ok (st ⊕ x)) >> put (st ⊕ x) >>
      foldr (⊗) (return []) xs >>= \ys ->
      guard (all ok ys) >> return (x:xs)
  =     { by (<ref>), moving the pure (x:) out }
      get >>= \st -> guard (ok (st ⊕ x)) >> put (st ⊕ x) >>
      ((x:) <$> (foldr (⊗) (return []) xs >>= \ys ->
                 guard (all ok ys) >> return xs))
  =     { induction }
      get >>= \st -> guard (ok (st ⊕ x)) >> put (st ⊕ x) >>
      ((x:) <$> foldr (⊙) (return []) xs)
  =     { definition of (⊙) }
      foldr (⊙) (return []) (x:xs) .
This proof is instructive due to extensive use of commutativity.
In summary, we now have the following corollary, performing filt (all ok . scanl_+ (⊕) st) using a non-deterministic and stateful foldr:
Let (⊙) be defined as in Theorem <ref>. If state and non-determinism commute, we have:

```haskell
filt (all ok . scanl_+ (⊕) st) xs =
  protect (put st >> foldr (⊙) (return []) xs)
```
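The corollary can be checked on a small instance (a sketch of ours, using the local-state monad s -> [(a, s)] hand-rolled below, with (⊕) := (+) and ok := (< 10) as hypothetical stand-ins for the report's operators):

```haskell
-- A minimal "local state" monad: s -> [(a, s)], i.e. StateT s [] in mtl.
newtype SN s a = SN { runSN :: s -> [(a, s)] }

instance Functor (SN s) where
  fmap f m = SN (\s -> [ (f a, s') | (a, s') <- runSN m s ])
instance Applicative (SN s) where
  pure a    = SN (\s -> [(a, s)])
  mf <*> ma = SN (\s -> [ (f a, s2) | (f, s1) <- runSN mf s
                                    , (a, s2) <- runSN ma s1 ])
instance Monad (SN s) where
  m >>= k = SN (\s -> [ r | (a, s') <- runSN m s, r <- runSN (k a) s' ])

getS :: SN s s
getS = SN (\s -> [(s, s)])
putS :: s -> SN s ()
putS s = SN (\_ -> [((), s)])
guardS :: Bool -> SN s ()
guardS b = if b then pure () else SN (const [])

protect :: SN s a -> SN s a
protect n = getS >>= \ini -> n >>= \x -> putS ini >> return x

filt :: (a -> Bool) -> a -> SN s a
filt p x = guardS (p x) >> return x

scanlP :: (b -> a -> b) -> b -> [a] -> [b]   -- scanl_+ of the report
scanlP op st = tail . scanl op st

-- Left- and righthand sides of the corollary, for (⊕) := (+), ok := (< 10).
lhs, rhs :: [Int] -> SN Int [Int]
lhs xs = filt (all (< 10) . scanlP (+) 0) xs
rhs xs = protect (putS 0 >> foldr odot (return []) xs)
  where x `odot` m = getS >>= \st -> guardS (st + x < 10) >>
                     putS (st + x) >> ((x:) <$> m)

main :: IO ()
main = do
  print (runSN (lhs [1,2,3]) 7, runSN (rhs [1,2,3]) 7)  -- both succeed, state 7 kept
  print (runSN (lhs [4,5,6]) 7, runSN (rhs [4,5,6]) 7)  -- both fail (prefix sum 15)
```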
§ MONADIC HYLOMORPHISM
To recap what we have done:
we started with a specification of the form
unfoldM p f >=> filt (all ok . scanl_+ (⊕) st), and have shown that

      unfoldM p f z >>= filt (all ok . scanl_+ (⊕) st)
  =     { Corollary <ref> }
      unfoldM p f z >>= \xs -> protect (put st >> foldr (⊙) (return []) xs)
  =     { non-determinism commutes with state }
      protect (put st >> (unfoldM p f z >>= foldr (⊙) (return []))) .
The final task is to fuse unfoldM p f with foldr (⊙) (return []).
§.§ Monadic Hylo-Fusion
In a pure setting, it is known that, provided that the unfolding phase terminates, the composition of a foldr after an unfoldr is the unique solution of hylo in the equation below <cit.>:

```haskell
hylo y
  | p y       = e
  | otherwise = let (x, z) = f y in x ⊗ hylo z
```
Hylomorphisms with monadic folds and unfolds are a bit tricky.
Pardo <cit.> discussed hylomorphisms for regular base functors, where the unfolding phase is monadic while the folding phase is pure.
As for the case when both phases are monadic, he noted "the drawback ... is that they cannot be always transformed into a single function that avoids the construction of the intermediate data structure."
For our purpose, we focus our attention on lists, and have a theorem fusing the monadic unfolding and folding phases under a side condition.
Given p :: b -> Bool, f :: b -> m (a, b), (⊗) :: a -> m c -> m c, and e :: c, consider the expression:

```haskell
unfoldM p f >=> foldr (⊗) (return e) :: Monad m => b -> m c
```
The following theorem says that this combination of unfolding and folding can be fused into one, under some side conditions:
Let m be a monad.
For all (⊗), e, p, and f, we have unfoldM p f >=> foldr (⊗) (return e) = hyloM (⊗) e p f, where hyloM is defined by:

```haskell
hyloM (⊗) e p f y
  | p y       = return e
  | otherwise = f y >>= \(x, z) -> x ⊗ hyloM (⊗) e p f z
```

if the seed-generating relation induced by p and f is well-founded (see the note below) and, for all n, k, and x, we have
\begin{align}
n \mathrel{>\!\!>\!\!=} ((x\otimes)\cdot k) = x \otimes (n \mathrel{>\!\!>\!\!=} k) \mbox{~~,} \label{eq:hylo-fusion-prem}
\end{align}
where (x ⊗) abbreviates \lambda m \to x \otimes m.
The "well-foundedness" condition essentially says that the seed generation eventually terminates; details are explained after the proof of this theorem.
Condition (<ref>) may look quite restrictive.
In most cases the author has seen, however, we can actually prove that (<ref>) holds for an entire class of programs n that includes those we need.
In the application in this report, for example, the only effect of n is non-determinism, and we will prove in Lemma <ref> that (<ref>) holds, for the particular operator (⊙) we use in the n-queens problem, for all n whose only effect is non-determinism.
We prove Theorem <ref> below.
We start by showing that unfoldM p f >=> foldr (⊗) (return e) is a fixed-point of the recursive equation defining hyloM. When p y holds, it is immediate that

      unfoldM p f y >>= foldr (⊗) (return e)
  =   return [] >>= foldr (⊗) (return e)
  =   return e .

When ¬ (p y), we reason:

      unfoldM p f y >>= foldr (⊗) (return e)
  =     { definition of unfoldM }
      (f y >>= \(x, z) -> (x:) <$> unfoldM p f z) >>= foldr (⊗) (return e)
  =     { monad laws and (<ref>); definition of foldr on (x : xs) }
      f y >>= \(x, z) -> unfoldM p f z >>= \xs -> x ⊗ foldr (⊗) (return e) xs
  =     { by (<ref>) }
      f y >>= \(x, z) -> x ⊗ (unfoldM p f z >>= foldr (⊗) (return e)) .
Now that unfoldM p f >=> foldr (⊗) (return e) is a fixed-point, we may conclude that it equals hyloM (⊗) e p f if the latter has a unique fixed-point,
which is guaranteed by the well-foundedness condition.
See the note below.
Note Let p be the stopping predicate, and let (◁) be the relation defined by: z ◁ y exactly when ¬ (p y) and f y may yield (x, z) for some x. The parameter y in unfoldM p f y is called the seed used to generate the list. The relation (◁) maps one seed to the next seed (it is written reversed: the next seed appears on the left). If it is well-founded, intuitively speaking, the seed generation cannot go on forever and p will eventually hold. It is known that inductive types (those that can be folded) and coinductive types (those that can be unfolded) do not coincide in SET. To allow a fold to be composed after an unfold, typically one moves to a semantics based on complete partial orders. However, it was shown <cit.> that, in Rel, when the relation generating seeds is well-founded, hylo-equations do have unique solutions. One may thus stay within a set-theoretic semantics. Such an approach has recently been explored again <cit.>. (End of Note)
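As a small sanity check (ours, not from the report), in the list monad the fused hyloM agrees with generating the whole list first and folding afterwards. The particular instance below computes a factorial; the seed-generating relation is well-founded because the seed strictly decreases, and the side condition holds because the folding operator only applies a pure function:

```haskell
unfoldM :: Monad m => (b -> Bool) -> (b -> m (a, b)) -> b -> m [a]
unfoldM p f y
  | p y       = return []
  | otherwise = f y >>= \(x, z) -> (x:) <$> unfoldM p f z

hyloM :: Monad m
      => (a -> m c -> m c) -> c -> (b -> Bool) -> (b -> m (a, b)) -> b -> m c
hyloM op e p f y
  | p y       = return e
  | otherwise = f y >>= \(x, z) -> x `op` hyloM op e p f z

main :: IO ()
main = do
  let p      = (== 0)
      f y    = return (y, y - 1)   -- count down; one branch per seed
      op x m = (x *) <$> m         -- pure folding step: satisfies the side condition
  -- Both compute factorial 5 in the list monad:
  print (unfoldM p f 5 >>= foldr op (return 1) :: [Int])  -- [120]
  print (hyloM op 1 p f 5 :: [Int])                       -- [120]
```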
Theorem <ref> does not rely on the local state laws (<ref>) and (<ref>), and does not put restrictions on the monad.
To apply the theorem to our particular case, we have to show that its preconditions hold for our particular (⊙);
for that we will need (<ref>) and perhaps also (<ref>). In the lemma below we slightly generalise the (⊙) in Theorem <ref>:
Assume that (<ref>) holds.
Given p :: a -> s -> Bool, next :: a -> s -> s, and res :: a -> b -> b, define (⊙) as below:

```haskell
(⊙) :: (MonadPlus m, MonadState s m) => a -> m b -> m b
x ⊙ m = get >>= \st -> guard (p x st) >>
        put (next x st) >> (res x <$> m)
```

We have n >>= ((x ⊙) . k) = x ⊙ (n >>= k), if n commutes with state.
We reason:

      n >>= \y -> x ⊙ k y
  =     { definition of (⊙) }
      n >>= \y -> get >>= \st -> guard (p x st) >>
      put (next x st) >> (res x <$> k y)
  =     { n commutes with get, and with guard by Theorem <ref> }
      get >>= \st -> guard (p x st) >>
      n >>= \y -> put (next x st) >> (res x <$> k y)
  =     { n commutes with put }
      get >>= \st -> guard (p x st) >>
      put (next x st) >> (n >>= \y -> res x <$> k y)
  =     { by (<ref>) }
      get >>= \st -> guard (p x st) >>
      put (next x st) >> (res x <$> (n >>= k))
  =     { definition of (⊙) }
      x ⊙ (n >>= k) .
§.§ Summary, and Solving n-Queens
To conclude our derivation: a problem formulated as unfoldM p f >=> filt (all ok . scanl_+ (⊕) st) can be solved by a hylomorphism. Define:

```haskell
solve :: (MonadState s m, MonadPlus m) =>
         (b -> Bool) -> (b -> m (a, b)) ->
         (s -> Bool) -> (s -> a -> s) -> s -> b -> m [a]
solve p f ok (⊕) st z = protect (put st >> hyloM (⊙) [] p f z)
  where x ⊙ m = get >>= \st -> guard (ok (st ⊕ x)) >>
                put (st ⊕ x) >> ((x:) <$> m)
```
Given p, f, ok, (⊕), st, and z:
if the seed-generating relation is well-founded,
the local state laws hold in addition to the other laws,
and f commutes with state, we have

```haskell
unfoldM p f z >>= filt (all ok . scanl_+ (⊕) st) = solve p f ok (⊕) st z
```
n-Queens Solved
Recall that

```haskell
queens n = perm [0 .. n-1] >>= filt safe
         = unfoldM null select [0 .. n-1] >>=
             filt (all ok . scanl_+ (⊕) (0, [], []))
```

where the auxiliary functions ok, (⊕), and select are defined in Section <ref>.
The function select cannot be applied forever, since the length of the given list decreases after each call,
and select, using only non-determinism, commutes with state.
Therefore, Corollary <ref> applies, and we have queens n = solve null select ok (⊕) (0, [], []) [0 .. n-1].
Expanding the definitions we get:

```haskell
queens :: (MonadPlus m, MonadState (Int, [Int], [Int]) m) => Int -> m [Int]
queens n = protect (put (0, [], []) >> queensBody [0 .. n-1])

queensBody :: (MonadPlus m, MonadState (Int, [Int], [Int]) m) => [Int] -> m [Int]
queensBody [] = return []
queensBody xs = select xs >>= \(x, ys) ->
                get >>= \st -> guard (ok (st ⊕ x)) >>
                put (st ⊕ x) >> ((x:) <$> queensBody ys)
  where (i, us, ds) ⊕ x = (i + 1, (i + x) : us, (i - x) : ds)
        ok (_, u : us, d : ds) = (u `notElem` us) && (d `notElem` ds)
```
This completes the derivation of our backtracking algorithm for the n-queens problem.
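Putting everything together, the derived program can be run (a sketch of ours; we instantiate the effect constraints at the local-state monad s -> [(a, s)], hand-rolled here so the program needs no external packages, although mtl's StateT (Int, [Int], [Int]) [] would serve equally well):

```haskell
-- The local-state monad s -> [(a, s)].
newtype SN s a = SN { runSN :: s -> [(a, s)] }

instance Functor (SN s) where
  fmap f m = SN (\s -> [ (f a, s') | (a, s') <- runSN m s ])
instance Applicative (SN s) where
  pure a    = SN (\s -> [(a, s)])
  mf <*> ma = SN (\s -> [ (f a, s2) | (f, s1) <- runSN mf s
                                    , (a, s2) <- runSN ma s1 ])
instance Monad (SN s) where
  m >>= k = SN (\s -> [ r | (a, s') <- runSN m s, r <- runSN (k a) s' ])

get :: SN s s
get = SN (\s -> [(s, s)])
put :: s -> SN s ()
put s = SN (\_ -> [((), s)])
mzero' :: SN s a
mzero' = SN (const [])
mplus' :: SN s a -> SN s a -> SN s a
mplus' m n = SN (\s -> runSN m s ++ runSN n s)
guard' :: Bool -> SN s ()
guard' b = if b then pure () else mzero'

protect :: SN s a -> SN s a
protect n = get >>= \ini -> n >>= \x -> put ini >> return x

select :: [a] -> SN s (a, [a])
select []     = mzero'
select (x:xs) = return (x, xs) `mplus'` fmap (\(y, ys) -> (y, x:ys)) (select xs)

type St = (Int, [Int], [Int])

queens :: Int -> SN St [Int]
queens n = protect (put (0, [], []) >> queensBody [0 .. n-1])

queensBody :: [Int] -> SN St [Int]
queensBody [] = return []
queensBody xs = select xs >>= \(x, ys) ->
                get >>= \st -> guard' (ok (st `oplus` x)) >>
                put (st `oplus` x) >> ((x:) <$> queensBody ys)
  where oplus (i, us, ds) x = (i + 1, (i + x) : us, (i - x) : ds)
        ok (_, u : us, d : ds) = (u `notElem` us) && (d `notElem` ds)

main :: IO ()
main = do
  let sols n = map fst (runSN (queens n) (0, [], []))
  print (sols 4)           -- [[1,3,0,2],[2,0,3,1]]
  print (length (sols 8))  -- 92
```

Note that unsafe branches are pruned as soon as the guard fails, so the program explores far fewer placements than the generate-then-test specification.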
§ CONCLUSIONS AND RELATED WORK
This report is a case study of reasoning and derivation of monadic programs.
To study the interaction between non-determinism and state, we
construct backtracking algorithms solving problems that can be specified in the form .
The derivation of the backtracking algorithm works by fusing the two phases into a monadic hylomorphism.
It turns out that in derivations of programs using non-determinism and state, commutativity plays an important role.
We assume in this report that the local state laws (right-distributivity and right-zero) hold.
In this scenario we have nicer properties at hand, and commutativity holds more generally.
The local state laws imply that each non-deterministic branch has its own state.
It is cheap to implement when the state can be represented by linked data structures, such as a tuple containing lists, as in the -queens example.
When the state contains bulk data, such as a large array, duplicating the state for each non-deterministic branch can be costly.
Hence there is practical need for sharing one global state among non-deterministic branches.
When a monad supports shared global state together with non-determinism, commutativity of the two effects holds only in limited cases. The behaviour of such a monad is much less intuitive, and can sometimes appear awkward.
In a subsequent paper <cit.>,
we attempt to find out what algebraic laws we can expect and how to reason with programs when the state is global.
<cit.> modelled a hierarchy of monadic effects in Coq. The formalisation was applied to verify a number of equational proofs of monadic programs, including some of the proofs in an earlier version of this report. A number of errors were found and reported to the author.
§.§.§ Acknowledgements
The author would like to thank Tyng-Ruey Chuang for examining a very early draft of this report; Jeremy Gibbons, who has been following the development of this research and continually giving good advice; and Tom Schrijvers and Koen Pauwels, for their cooperation on work following up this report.
Thanks also go to Reynald Affeldt, David Nowak and Takafumi Saikawa for verifying and finding errors in the proofs in an earlier version of this report.
The author is solely responsible for any remaining errors, however.
§ REFERENCES
Reynald Affeldt, David Nowak, and Takafumi Saikawa (2019). A hierarchy of monadic effects for program verification using equational reasoning. In Mathematics of Program Construction, Graham Hutton (Ed.).
Henk Doornbos and Roland C. Backhouse (1995). Induction and recursion on datatypes. In Mathematics of Program Construction (Lecture Notes in Computer Science), Bernhard Möller (Ed.). Springer, 242–256.
Jeremy Gibbons and Ralf Hinze (2011). Just do it: simple monadic equational reasoning. In International Conference on Functional Programming, Olivier Danvy (Ed.). ACM Press, 2–14.
Andy Gill and Edward Kmett (2014). The Monad Transformer Library.
Ralf Hinze, Nicolas Wu, and Jeremy Gibbons (2015). Conjugate hylomorphisms, or: the mother of all structured recursion schemes. In Symposium on Principles of Programming Languages, David Walker (Ed.). ACM Press, 527–538.
Graham Hutton and Diana Fulger (2008). Reasoning about effects: seeing the wood through the trees. In Draft Proceedings of Trends in Functional Programming, Peter Achten, Pieter Koopman, and Marco T. Morazán (Eds.).
Oleg Kiselyov. How to restrict a monad without breaking it: the winding road to the Set monad.
Oleg Kiselyov. Laws of MonadPlus.
Oleg Kiselyov and Hiromi Ishii (2015). Freer monads, more extensible effects. In Symposium on Haskell, John H. Reppy (Ed.). ACM Press, 94–105.
Oleg Kiselyov, Amr Sabry, and Cameron Swords (2013). Extensible effects: an alternative to monad transformers. In Symposium on Haskell, Chung-chieh Shan (Ed.). ACM Press, 59–70.
Erik Meijer, Maarten Fokkinga, and Ross Paterson (1991). Functional programming with bananas, lenses, envelopes, and barbed wire. In Functional Programming Languages and Computer Architecture (Lecture Notes in Computer Science), R. John Muir Hughes (Ed.). Springer-Verlag, 124–144.
Eugenio Moggi (1989). Computational lambda-calculus and monads. In Logic in Computer Science, Rohit Parikh (Ed.). IEEE Computer Society Press, 14–23.
Alberto Pardo (2001). Fusion of recursive programs with computational effects. Theoretical Computer Science 260(1–2).
Koen Pauwels, Tom Schrijvers, and Shin-Cheng Mu (2019). Handling local state with global state. In Mathematics of Program Construction, Graham Hutton (Ed.).
Philip L. Wadler. Monads for functional programming. In Program Design Calculi: Marktoberdorf Summer School, Manfred Broy (Ed.). Springer-Verlag, 233–264.
Nicolas Wu, Tom Schrijvers, and Ralf Hinze (2012). Effect handlers in scope. In Symposium on Haskell, Janis Voigtländer (Ed.). ACM Press, 1–12.
§ MISCELLANEOUS PROOFS
Proofs of (<ref>) and (<ref>).
The proof of (<ref>) relies only on the definition of guard and properties of conjunction:
   guard (p ∧ q)
 = if p ∧ q then return () else ∅
 = if p then (if q then return () else ∅) else ∅
 = if p then guard q else ∅
 = guard p >> guard q .
To prove (<ref>), surprisingly, we need only
distributivity and not (<ref>):
   guard p >> (f <$> m)
 = (if p then return () else ∅) >> (f <$> m)
 = if p then return () >> (f <$> m) else ∅ >> (f <$> m)
 = if p then f <$> (return () >> m) else f <$> (∅ >> m)
 = f <$> ((if p then return () else ∅) >> m)
 = f <$> (guard p >> m) .
Proof of (<ref>).
We reason:
   m >>= λx → guard p >> return x
 = m >>= λx → (if p then return () else ∅) >> return x
 = if p then m >>= λx → return () >> return x
        else m >>= λx → ∅ >> return x
 = if p then m else m >> ∅
 = if p then m else ∅
 = if p then return () >> m
        else ∅ >> m
 = (if p then return () else ∅) >> m
 = guard p >> m .
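These three properties can be sanity-checked (our illustration, not part of the report) in the list monad, which satisfies both right-distributivity and the right-zero law, with ∅ being the empty list:

```haskell
import Control.Monad (guard)

-- guard (p && q) == guard p >> guard q, checked over all Booleans.
law1 :: Bool
law1 = and [ guard (p && q) == (guard p >> guard q :: [()])
           | p <- [False, True], q <- [False, True] ]

-- guard p >> (f <$> m) == f <$> (guard p >> m), for a sample f and m.
law2 :: Bool
law2 = and [ (guard p >> (f <$> m)) == (f <$> (guard p >> m))
           | p <- [False, True] ]
  where f = (* 2); m = [1, 2, 3] :: [Int]

-- m >>= \x -> guard p >> return x == guard p >> m:
-- guard commutes with a purely non-deterministic m.
law3 :: Bool
law3 = and [ (m >>= \x -> guard p >> return x) == (guard p >> m)
           | p <- [False, True] ]
  where m = [1, 2, 3] :: [Int]
```

All three checks evaluate to True; the sample `f` and `m` are of course arbitrary choices, not exhaustive tests.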
# Optimal linear response for Markov Hilbert-Schmidt integral operators and
stochastic dynamical systems
Fadi Antown1, Gary Froyland1, and Stefano Galatolo2
1School of Mathematics and Statistics, University of New South Wales, Sydney
NSW 2052, Australia
2Department of Mathematics, University of Pisa, Via Buonarroti 1, 56127 Pisa,
Italy
###### Abstract
We consider optimal control problems for discrete-time random dynamical
systems, finding unique perturbations that provoke maximal responses of
statistical properties of the system. We treat systems whose transfer operator
has an $L^{2}$ kernel, and we consider the problems of finding (i) the
infinitesimal perturbation maximising the expectation of a given observable
and (ii) the infinitesimal perturbation maximising the spectral gap, and hence
the exponential mixing rate of the system. Our perturbations are either (a)
perturbations of the kernel or (b) perturbations of a deterministic map
subjected to additive noise. We develop a general setting in which these
optimisation problems have a unique solution and construct explicit formulae
for the unique optimal perturbations. We apply our results to a Pomeau-
Manneville map and an interval exchange map, both subjected to additive noise,
to explicitly compute the perturbations provoking maximal responses.
Keywords— Stochastic dynamical system, optimal linear response, transfer
operator, mixing rate, optimal control
[MSC2020]: 37H30, 47A55, 49J50, 49N05
## 1 Introduction
The statistical properties of the long-term behaviour of deterministic or
stochastic dynamical systems are strongly related to the properties of
invariant or stationary measures and to the spectral properties of the
associated transfer operator. When the dynamical system is perturbed it is
useful to understand and predict the response of the statistical properties of
the system through these objects. When such responses are differentiable, we
say that the system exhibits a _linear response_ to the class of
perturbations. To first order, this response can be described by a suitable
derivative expressing the infinitesimal rate of change in e.g. the natural
invariant measure or in the spectrum. Understanding the response of
statistical properties to perturbation has particular importance in
applications, including to climate science (see e.g. [25], [27] and the
references therein).
In the present paper we go beyond quantifying responses and address natural
problems concerning the _optimal response_, namely which perturbations elicit
a _maximal_ response. For example, given an observation function, which
perturbation produces the greatest change in the expectation of this
observation? And which perturbation produces the greatest change in the rate
of convergence to equilibrium? Continuing the climate science application, one
may wish to know which small climate action (which perturbation) would produce
the greatest reduction in the average temperature (the expected observation
value). We note that by considering trajectories of a perturbed map and using
ergodicity, one may view the problem of maximising the response in the
expectation of an observation as an infinite-horizon optimal control problem,
averaging an observation along trajectories.
The linear response of dynamical systems is an area of intense research and we
present a brief overview of the literature that is related to the present
work. Early results concerning the response of invariant measures to the
perturbation of a deterministic system have been obtained by Ruelle [42] in
the uniformly hyperbolic case. More recently, these results have been extended
to several other situations in which one has some hyperbolicity and sufficient
regularity of the system and its perturbations. We refer the reader to the
survey [5] for an extended discussion of the literature about linear response
(and its failure) for deterministic systems.
The mathematical literature on linear response of invariant measures of
stochastic or random dynamical systems is more recent. In the framework of
continuous-time random processes and stochastic differential equations, linear
response results were proved in [27, 32]. Results related to the linear
response of the stationary measure for diffusion in random media appear in
[33, 24, 23, 13, 40]. In the discrete-time case, examples of linear response
for small random perturbations of uniformly hyperbolic deterministic systems
appeared in [26]. In [4] linear response results are given for random
compositions of expanding or non-uniformly expanding maps. In the paper [49]
the smoothness of the invariant measure response under suitable perturbations
is proved for a class of random diffeomorphisms, but no explicit formula is
given for the derivatives; an application to the smoothness of the rotation
number of Arnold circle maps with additive noise is presented. Systems
generated by the iteration of a deterministic map subjected to i.i.d. additive
random perturbations are one class of stochastic systems studied in the
present paper (see Section 6). The linear response of such systems is
considered systematically in [20] and linear response results are proved for
perturbations to the deterministic map or to the additive noise. These results
are used by [39] to extend some results of [49] outside the diffeomorphism
case and applied to an idealized model of El Niño-Southern Oscillation, given
by a noninvertible circle map with additive noise. Higher derivative results
for the response of systems with additive noise are presented in [22].
Response results for random systems in the so-called quenched point of view
appeared recently in [43], [44] where the random composition of expanding maps
is considered using Hilbert cones techniques and in [11] where the random
composition of hyperbolic maps is considered by a transfer operator based
approach.
We remark that the addition of random perturbations is not necessarily
sufficient to guarantee a linear response. An i.i.d. composition of the
identity map and a rotation on the circle is considered in [19], and it is
shown that using observables with square-integrable first derivative, one only
has Hölder continuity of the response with respect to $C^{0}$ perturbations of
the circle rotation.
One can similarly consider the linear response of the dominant eigenvalues of
the transfer operator under perturbation. In the literature there are several
results describing the way eigenvalues and eigenvectors of suitable classes of
operators change when those operators are perturbed in some way, for example
classical results concerning compact operators subjected to analytic
perturbations [29], and quasi-compact Markov operators subjected to $C^{k}$
perturbations [28]. In specific classes of dynamics, differentiability of
isolated spectral data is demonstrated in [26] for transfer operators of Anosov
maps where the map is subjected to $C^{k}$ perturbations and in [32] for
transfer operators arising from SDEs subjected to $C^{k}$ perturbations of the
drift.
Optimal linear response questions have been considered in the dynamical
setting of homogeneous (and inhomogeneous) finite-state Markov chains [1],
where explicit formulae are provided for the unique maximising perturbations
that (i) maximise the norm of the response, (ii) maximise the expectation of a
given observable, and (iii) maximise the spectral gap. The efficient Lagrange
multiplier approach developed in [1] for questions (ii) and (iii) will be
extended to the infinite-dimensional setting in the present paper. In
continuous time, [18] maximised the spectral gap of a numerical discretisation
of a periodically forced Fokker-Planck equation (perturbing the velocity field
to maximally speed up or slow down the exponential mixing rate). The same
problem is considered by [16], but for general aperiodic forcing over a finite
time, using the Lagrange multiplier approach of [1]. A non-spectral approach
to increasing mixing rates by optimal kernel perturbations in discrete time is
[15].
Related optimal control problems have been considered in [21] where the goal
was to find a minimal perturbation realising a specific response to the
invariant measure of a deterministic system (about the problem of finding an
infinitesimal perturbation realising a given response see also [30]). These
kinds of questions and other similar ones were also briefly considered in [20]
for random dynamical systems consisting of deterministic maps perturbed by
additive noise. Similar problems in the case of probabilistic cellular
automata were considered in [38].
The present work takes the point of view of [1], extending the theory to the
infinite-dimensional setting of stochastic integral operators, proving the
existence of unique optimal perturbations, deriving explicit formulae for
these optimal perturbations, and illustrating the formulae and their
conclusions via two topical examples. We consider the class of stochastic
dynamical systems with transfer operators representable by an $L^{2}$-compact,
integral operator, which includes deterministic systems perturbed by additive
noise. The transfer operator $L$ has the form
$Lf(x)=\int k(x,y)f(y)\ dy,$ (1)
where $k$ is a stochastic kernel; in the case of deterministic systems $T$
with additive noise, $k(x,y)=\rho(x-T(y))$, with $\rho$ a probability density
representing the distribution of the noise intensity (see Section 6). We
consider perturbations of two types: firstly, perturbations to the kernel $k$,
and secondly, perturbations to the map $T$.
An outline of the paper is as follows. In Section 2 we consider general
compact, integral-preserving operators $L:L^{2}\to L^{2}$ (see (3)) and state
general linear response statements for the normalised fixed points and the
leading eigenvalues of these operators (Theorem 2.2 and Proposition 2.6). In
Section 3, we derive response formulae for the normalised fixed points
(Corollary 3.5) and spectral values (Corollary 3.6) of operators of the form
(1), under perturbation of the kernel $k$. In Section 4 we consider the
problem of finding the perturbation that provokes a _maximal_ response in the
average of a given observable (General Problem 1) and the spectral gap
(General Problem 2). We show that if the feasible set of perturbations is
convex, an optimal solution exists, and that this optimum is unique if the
feasible set is strictly convex. In Section 5.1, using Lagrange multipliers we
derive an explicit formula for the unique optimal kernel perturbation that
maximises the expectation of an observable (Theorem 5.4). In Section 5.2 we
prove an explicit formula for the perturbation that maximises the change in
spectral gap (and therefore the rate of mixing) of the system (Theorem 5.6).
In Section 6 we specialise our integral operators to annealed transfer
operators corresponding to deterministic maps $T$ with additive noise. For
these systems the kernel $k$ has the form $k(x,y)=\rho(x-T(y))$ for some
nonsingular transformation $T$, and we consider perturbations of the map $T$
directly. Response formulas for these perturbations are developed in
Proposition 6.3 and Proposition 6.6 for the invariant measure and the
dominating eigenvalues, respectively. In this framework we again prove
existence and uniqueness of the map perturbation maximising the derivative of
the expectation of an observation (Proposition 7.3) and then derive an
explicit formula for the extremiser (Theorem 7.4). Proposition 7.6 and Theorem
7.7 state results analogous to Proposition 7.3 and Theorem 7.4 for the
optimization of the spectral gap and mixing rate.
In Section 8 we apply and illustrate the theoretical findings of this work on
the Pomeau-Manneville map and a weakly mixing interval exchange, each
perturbed by additive noise. For each map we numerically estimate (i) the
optimal stochastic perturbation (perturbing the kernel $k$) and (ii) the
optimal deterministic perturbation (perturbing the map $T$) that maximises the
derivatives of the expectation of an observable and the mixing rate. One of
the interesting lessons is that to maximally increase the mixing rate of the
noisy Pomeau-Manneville map, one should perturb the kernel (stochastic
perturbation) to move mass away from the indifferent fixed point or deform the
map to transport mass away from the fixed point (deterministic perturbation);
see Figures 4 and 7, respectively. Further numerical outcomes are discussed
and explained in Section 8.
## 2 Linear response for compact integral-preserving operators
In this section we introduce general response results for integral-preserving
compact operators. We consider both the response of the invariant measure to
the perturbations and the response of the dominant eigenvalues.
### 2.1 Existence of linear response for the invariant measure
In the following, we consider integral-preserving compact operators acting on
$L^{2}$, which are not necessarily positive. We will give a general linear
response statement for their invariant measures. In Section 3 we show how
these results can be applied to Hilbert-Schmidt integral operators, which will
later be transfer operators of suitable random dynamical systems.
Let $L^{2}([0,1])$ be the space of square-integrable functions over the unit
interval (considered with the Lebesgue measure $m$); for brevity, we will
denote it as simply $L^{2}$. We remark that the analysis in the rest of the
paper can be extended to manifolds, but we keep the setting simple so as not
to obscure the main ideas. Let us consider the space of zero-average functions
$V:=\left\{f\in L^{2}~\text{s.t.}~\int f\,dm=0\right\}.$
###### Definition 2.1.
We say that an operator $L:L^{2}\rightarrow L^{2}$ has _exponential
contraction of the zero average space_ $V$ if there are $C\geq 0$ and
$\lambda<0$ such that $\forall g\in V$
$\|L^{n}g\|_{2}\leq Ce^{\lambda n}\|g\|_{2}$ (2)
for all $n\geq 0$.
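In finite dimensions the analogue of an integral-preserving operator is a matrix whose columns sum to one, and the sum of the entries plays the role of the integral. The following sketch (ours, not the paper's; the matrix size and seed are arbitrary) illustrates the exponential contraction of Definition 2.1 on the zero-average space $V$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
L = rng.random((n, n)) + 0.05          # strictly positive entries: mixing
L /= L.sum(axis=0, keepdims=True)      # columns sum to 1: "integral" preserved

g = rng.standard_normal(n)
g -= g.mean()                          # zero-average vector: g lies in V

norms = []
v = g.copy()
for _ in range(20):
    v = L @ v
    norms.append(np.linalg.norm(v))

assert abs((L @ g).sum() - g.sum()) < 1e-12   # the "integral" is preserved
assert norms[-1] < 1e-3 * np.linalg.norm(g)    # geometric decay on V
```

The decay rate observed in `norms` is governed by the second-largest eigenvalue modulus of `L`, which for a strictly positive column-stochastic matrix is strictly less than 1.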
For $\bar{\delta}>0$ and $\delta\in[0,\bar{\delta})$, we consider a family of
integral-preserving, compact operators $L_{\delta}:L^{2}\rightarrow L^{2}$; we
think of $L_{\delta}$ as perturbations of $L_{0}$. We say that $f_{\delta}\in
L^{2}$ is an _invariant function_ of $L_{\delta}$ if
$L_{\delta}f_{\delta}=f_{\delta}$. We will see that under natural assumptions,
the operators $L_{\delta}$, $\delta\in[0,\bar{\delta})$, have a family of
normalized invariant functions $f_{\delta}\in L^{2}$. Furthermore, for
suitable perturbations the invariant functions vary smoothly in $L^{2}$ and we
get an explicit formula for the resulting derivative
$\frac{df_{\delta}}{d\delta}$. We remark that since the operators we consider
are not necessarily positive, the invariant functions are not necessarily
positive.
###### Theorem 2.2 (Linear response for integral-preserving compact
operators).
Let us consider a family of compact operators $L_{\delta}:L^{2}\rightarrow
L^{2}$, with $\delta\in\left[0,\overline{\delta}\right)$, preserving the
integral: for each $g\in L^{2}$
$\int L_{\delta}g~{}dm=\int g~{}dm.$ (3)
Then,
1. (I)
The operators have invariant functions in $L^{2}$: for each $\delta$ there is
$g_{\delta}\neq 0$ such that $L_{\delta}g_{\delta}=g_{\delta}$.
2. (II)
Suppose $L_{0}$ also satisfies the following:
(A1) (mixing of the unperturbed operator) For every $g\in V$,
$\lim_{n\rightarrow\infty}\|L_{0}^{n}g\|_{2}=0.$
Under this assumption, the unperturbed operator $L_{0}$ has a unique
normalized invariant function $f_{0}$ such that $\int{f}_{0}\ dm=1$.
Furthermore, $L_{0}$ has exponential contraction of the zero average space
$V$.
3. (III)
Suppose the family of operators $L_{\delta}$ also satisfy the following:
(A2) ($L_{\delta}$ are small perturbations and existence of derivative
operator at $f_{0}$) Suppose there is a $K\geq 0$ such that
$\left||L_{\delta}-L_{0}|\right|_{L^{2}\rightarrow L^{2}}\leq K\delta$ for
small $\delta$. Furthermore, suppose there exists $\hat{f}\in V$ such that
$\underset{\delta\rightarrow
0}{\lim}\frac{(L_{\delta}-L_{0})}{\delta}f_{0}=\hat{f}.$
Under these assumptions, the following hold:
1. (a)
There exists a $\delta_{2}>0$ such that for each $0\leq\delta<\delta_{2}$, the
operators $L_{\delta}$ have unique invariant functions ${f}_{\delta}$ such
that $\int{f}_{\delta}dm=1.$
2. (b)
The resolvent operator $({Id}-L_{0})^{-1}:V\rightarrow$ $V$ is continuous.
3. (c)
$\lim_{\delta\rightarrow
0}\left\|\frac{f_{\delta}-f_{0}}{\delta}-({Id}-L_{0})^{-1}\hat{f}\right\|_{2}=0;$
thus, $({Id}-L_{0})^{-1}\hat{f}$ represents the first order term in the
perturbation of the invariant function for the family of systems $L_{\delta}$.
###### Proof.
##### Claim (I):
We start by proving the existence of the invariant functions $g_{\delta}$ for
the operators $L_{\delta}$. Since the operators are compact and integral
preserving, $L_{\delta}$ has an eigenvalue $1$ for each $\delta$. Indeed, let
us consider the adjoint operators $L^{*}_{\delta}:L^{2}\to L^{2}$ defined by
the duality relation $\langle L_{\delta}f,g\rangle=\langle
f,L^{*}_{\delta}g\rangle$ for all $f,g\in L^{2}.$ Because of the integral-
preserving assumption, we have $\langle
f,L^{*}_{\delta}\mathbf{1}\rangle=\langle L_{\delta}f,\mathbf{1}\rangle=\int
L_{\delta}f\ dm=\int f\ dm=\langle f,\mathbf{1}\rangle$. (We use the notation
$\mathbf{1}$ for the constant function and $\mathbf{1}_{A}$ for the indicator
function of the set $A$.) This implies $L^{*}_{\delta}\mathbf{1}=\mathbf{1}$
and thus, $1$ is in the spectrum of $L^{*}_{\delta}$ and $L_{\delta}$. Since
$L_{\delta}$ is compact, its spectrum equals the eigenvalues and we have
nontrivial fixed points for the operators $L_{\delta}$.
##### Claim (III)(a) for $\delta=0$:
Now we prove the uniqueness of the normalized invariant function of $L_{0}$.
Above we proved that $L_{0}$ has some invariant function $g_{0}\neq 0$. The
mixing assumption $(A1)$ implies that $\int g_{0}\ dm\neq 0$; to see this, we
note that if $\int g_{0}\ dm=0$, then $g_{0}\in V$, and, by $(A1)$, $g_{0}$
cannot be a nontrivial fixed point of $L_{0}$. We claim that
$f_{0}=\frac{g_{0}}{\int g_{0}\ dm}$ is the unique normalized invariant
function for $L_{0}$. To see this, suppose there was a second normalized
invariant function $f^{\prime}_{0}$; then, $f^{\prime}_{0}-f_{0}$ would be an
invariant function in $V$, which is a contradiction.
##### Claim (II):
To show that $L_{0}$ has exponential contraction on $V$, we first note that
for $f\in L^{2}$, we can write $f=f_{0}\int f\ dm+[f-f_{0}\int f\ dm]$. Since
$[f-f_{0}\int f\ dm]\in V$, it follows from $(A1)$ that
$L_{0}^{n}f\to_{L^{2}}f_{0}\int f\ dm$. Thus, the spectrum of $L_{0}$ is
contained in the unit disk by the spectral radius theorem. Now suppose
$\lambda$ is in the spectrum of $L_{0}$ and $|\lambda|=1$. By the compactness
assumption, there is an eigenvector $f_{\lambda}$ for $\lambda$ and then we
have $||L_{0}^{n}(f_{\lambda})||_{2}=||f_{\lambda}||_{2}$. However,
$L_{0}^{n}(f_{\lambda})\to_{L^{2}}f_{0}\int f_{\lambda}\ dm$, which is not
possible unless $\lambda=1$. Hence, the spectrum of $L_{0}|_{V}$ is strictly
contained in the unit disk. Thus, by the spectral radius theorem, there is an
$n>0$ such that $||L_{0}^{n}|_{V}||_{L^{2}\rightarrow L^{2}}\leq\frac{1}{2}$
and we have exponential contraction of $L_{0}$ on $V$.
##### Claim (III)(a) for $\delta\in[0,\bar{\delta}]$:
From the assumption $||L_{0}-L_{\delta}||_{L^{2}\rightarrow L^{2}}\leq
K\delta$, we have for small enough $\delta$ that
$||L_{\delta}^{n}|_{V}||_{L^{2}\rightarrow L^{2}}\leq\frac{2}{3}$ and
therefore, $L_{\delta}$ is also mixing. We can apply the argument above to the
operators $L_{\delta}$ and obtain, for each small enough $\delta$, a unique
normalized invariant function $f_{\delta}$.
##### Claim (III)(b):
Using the exponential contraction of $L_{0}$ on $V$, we now show that
$(\text{Id}-L_{0})^{-1}:V\rightarrow V$ is continuous. Indeed, for $f\in V$,
we get $(\text{Id}-L_{0})^{-1}f=f+\sum_{n=1}^{\infty}L_{0}^{n}f$. Since
$L_{0}$ is exponentially contracting on $V$, and
$\sum_{n=1}^{\infty}Ce^{\lambda n}:=M<\infty,$ the sum
$\sum_{n=1}^{\infty}L_{0}^{n}f$ converges in $V$ with respect to the $L^{2}$
norm. The resolvent $(\text{Id}-L_{0})^{-1}:V\rightarrow V$ is then a
continuous operator and $||(\text{Id}-L_{0})^{-1}||_{V\rightarrow V}\leq 1+M.$
We remark that since $\hat{f}\in V,$ the resolvent can be computed at
$\hat{f}$.
##### Claim (III)(c):
Now we are ready to prove the linear response formula. First, choosing $n$ as above so that $\|L_{\delta}^{n}|_{V}\|_{L^{2}\rightarrow L^{2}}\leq\frac{2}{3}$, we have
$\displaystyle\|f_{\delta}-f_{0}\|_{2}$ $\displaystyle\leq$
$\displaystyle\|L_{\delta}^{n}f_{\delta}-L_{0}^{n}f_{0}\|_{2}$
$\displaystyle\leq$
$\displaystyle\|L_{\delta}^{n}f_{0}-L_{0}^{n}f_{0}\|_{2}+\|L_{\delta}^{n}f_{\delta}-L_{\delta}^{n}f_{0}\|_{2}$
$\displaystyle\leq$
$\displaystyle\|L_{\delta}^{n}-L_{0}^{n}\|_{2}\|f_{0}\|_{2}+\|L_{\delta}^{n}|_{V}\|_{L^{2}\rightarrow
L^{2}}\|f_{\delta}-f_{0}\|_{2}$ $\displaystyle\leq$
$\displaystyle\|L_{\delta}^{n}-L_{0}^{n}\|_{2}\|f_{0}\|_{2}+\frac{2}{3}\|f_{\delta}-f_{0}\|_{2},$
from which we get $\|f_{\delta}-f_{0}\|_{2}\leq
3\|L_{\delta}^{n}-L_{0}^{n}\|_{L^{2}\rightarrow L^{2}}\|f_{0}\|_{2}$. Since
$||L_{0}-L_{\delta}||_{L^{2}\rightarrow L^{2}}\leq K\delta$, we have
$\|f_{\delta}-f_{0}\|_{2}\to 0$ as $\delta\to 0.$ Since $f_{0}$ and
$f_{\delta}$ are the invariant functions of $L_{0}$ and $L_{\delta}$, we have
$(\text{Id}-L_{0})\frac{f_{\delta}-f_{0}}{\delta}=\frac{1}{\delta}(L_{\delta}-L_{0})f_{\delta}.$
By applying the resolvent to both sides we obtain
$\displaystyle\frac{f_{\delta}-f_{0}}{\delta}$
$\displaystyle=(\text{Id}-L_{0})^{-1}\frac{L_{\delta}-L_{0}}{\delta}f_{\delta}$
$\displaystyle=(\text{Id}-L_{0})^{-1}\frac{L_{\delta}-L_{0}}{\delta}f_{0}+(\text{Id}-L_{0})^{-1}\frac{L_{\delta}-L_{0}}{\delta}(f_{\delta}-f_{0}).$
Moreover, from assumption $(A2)$, we have for sufficiently small $\delta$ that
$\left\|(\text{Id}-L_{0})^{-1}\frac{L_{\delta}-L_{0}}{\delta}(f_{\delta}-f_{0})\right\|_{2}\leq\|(\text{Id}-L_{0})^{-1}\|_{V\rightarrow
V}K\|f_{\delta}-f_{0}\|_{2}.$
Since we already proved that $\lim_{\delta\rightarrow
0}\|f_{\delta}-f_{0}\|_{2}=0$, we are left with
$\lim_{\delta\rightarrow
0}\frac{f_{\delta}-f_{0}}{\delta}=(\text{Id}-L_{0})^{-1}\hat{f}$
converging in the $L^{2}$ norm.
We remark that the strategy of proof of Theorem 2.2 is similar to that of
Theorem 3 of [20], although the assumptions made are quite different: here we
consider a compact, integral-preserving operator on $L^{2}$, while in [20]
several norms are considered in order to allow low-regularity perturbations,
and the operator is required to be positive.
It is worth remarking that the above proof gives a description of the spectral
picture of $L_{0}$. By Theorem 2.2, if $L_{0}$ satisfies $(A1)$ then the
invariant function is unique, up to normalization; this shows that $1$ is a
simple eigenvalue. Furthermore, $L_{0}$ preserves the direct sum $L^{2}=$
span$\\{f_{0}\\}\oplus V$ and the spectrum of $L_{0}$ is strictly inside the
unit disk when $L_{0}$ is restricted to $V$. Hence, the spectrum of $L_{0}$ is
contained in the unit disk and there is a spectral gap.
###### Remark 2.3.
The mixing assumption in $(A1)$ is required only for the unperturbed operator
$L_{0}$. This assumption is satisfied, for example, if $L_{0}$ is an integral
operator and an iterate of this operator has a strictly positive kernel, see
Corollary 5.7.1 of [35]. Later in Remark 6.4 we show this assumption is
verified for a wide range of examples of stochastic dynamical systems.
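To make the response formula of Theorem 2.2(III)(c) concrete, here is a finite-dimensional sanity check (our illustration; sizes, seed and the rank-one inversion trick are our choices). For column-stochastic matrices, the predicted response $(\mathrm{Id}-L_{0})^{-1}\hat{f}$ matches a finite-difference derivative of the stationary vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# L0: column-stochastic with positive entries (mixing); Ldot: columns sum
# to zero, so L_delta = L0 + delta*Ldot remains "integral-preserving".
L0 = rng.random((n, n)) + 0.1
L0 /= L0.sum(axis=0, keepdims=True)
Ldot = rng.standard_normal((n, n))
Ldot -= Ldot.mean(axis=0, keepdims=True)

def stationary(L):
    vals, vecs = np.linalg.eig(L)
    f = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return f / f.sum()                # normalise: components sum to 1

f0 = stationary(L0)
fhat = Ldot @ f0                      # hat f = Ldot f0, lies in V

# (Id - L0)^{-1} restricted to V, computed via a rank-one correction that
# makes the matrix invertible on all of R^n while agreeing with the
# resolvent on the zero-sum subspace V.
resp = np.linalg.solve(np.eye(n) - L0 + np.ones((n, n)) / n, fhat)

delta = 1e-6
fd = (stationary(L0 + delta * Ldot) - f0) / delta
assert np.allclose(resp, fd, atol=1e-4)
```

The rank-one correction works because $1^{T}(\mathrm{Id}-L_{0})v=0$ for every $v$ when the columns of $L_{0}$ sum to one, so the corrected system forces its solution into $V$.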
### 2.2 Existence of linear response of the dominant eigenvalues
In this section, we consider the existence of linear response for the second
largest eigenvalues (in magnitude) and provide a formula for the linear
response. An important object needed to quantify linear response statements is
a “derivative” of the transfer operator with respect to the perturbation.
###### Definition 2.4.
We define $\dot{L}:L^{2}\to V$ as the unique linear operator satisfying
$\lim_{\delta\to
0}\left\|\frac{(L_{\delta}-L_{0})}{\delta}-\dot{L}\right\|_{L^{2}\to V}=0.$
Let $\mathcal{B}(L^{2})$ denote the space of bounded linear operators from the
Banach space $L^{2}$ to itself and $r(L)$ denote the spectral radius of an
operator $L$; we begin with the following definition.
###### Definition 2.5 ([28], Definition III.7).
Let $s\in\mathbb{N},s\geq 1$. We say that
$L\in\mathcal{B}(L^{2}([0,1],\mathbb{C}))$ has $s$ dominating simple
eigenvalues if there exist closed subspaces $E$ and $\tilde{E}$ such that
1. 1.
$L^{2}([0,1],\mathbb{C})=E\oplus\tilde{E}$,
2. 2.
$L(E)\subset E$, $L(\tilde{E})\subset\tilde{E}$,
3. 3.
dim$(E)=s$ and $L|_{E}$ has $s$ geometrically simple eigenvalues
$\lambda_{i}$, $i=1,\dots,s$,
4. 4.
$r(L|_{\tilde{E}})<\min\{|\lambda_{i}|:i=1,\dots,s\}$.
Adapting Theorem III.8 and Corollary III.11 of [28] to our situation, we can
now state a linear response result for these eigenvalues.
###### Proposition 2.6.
Let $L_{\delta}:L^{2}([0,1],{\mathbb{C}})\rightarrow
L^{2}([0,1],{\mathbb{C}})$, where $\delta\in[0,\bar{\delta})=:I_{0}$, be
integral-preserving (see equation (3)) compact operators. Assume that the map
$\delta\mapsto L_{\delta}$ is in
$C^{1}(I_{0},\mathcal{B}(L^{2}([0,1],{\mathbb{C}})))$ and $L_{0}$ is mixing
(see $(A1)$ in Theorem 2.2). Then, $\lambda_{1,0}:=1\in\sigma(L_{0})$ and
$r(L_{0})=1$. Let $\mathcal{I}\subset\sigma(L_{0})\setminus\{1\}$ be the
eigenvalue(s) of maximal modulus strictly inside the unit disk; assume they
are geometrically simple and let $s:=|\mathcal{I}|+1$. Then there exists an
interval $I_{1}:=[0,\delta_{1})$, $I_{1}\subset I_{0}$ such that for
$\delta\in I_{1}$, $L_{\delta}$ has $s$ dominating simple eigenvalues. Thus,
there exist functions $e_{i,(\cdot)},\ \hat{e}_{i,(\cdot)}\in
C^{1}(I_{1},L^{2}([0,1],{\mathbb{C}}))$ and $\lambda_{i,(\cdot)}\in
C^{1}(I_{1},{\mathbb{C}})$ such that for $\delta\in I_{1}$ and $i,j=2,\dots,s$
* (i)
$L_{\delta}e_{i,\delta}=\lambda_{i,\delta}e_{i,\delta}$,
$L^{*}_{\delta}\hat{e}_{i,\delta}=\lambda_{i,\delta}\hat{e}_{i,\delta}$,
* (ii)
$\langle
e_{i,\delta},\hat{e}_{j,\delta}\rangle_{L^{2}([0,1],{\mathbb{C}})}=\delta_{i,j}$,
where $\delta_{i,j}$ is the Kronecker delta.
Furthermore, let $\dot{\lambda}_{i}\in{\mathbb{C}}$ satisfy
$\lim_{\delta\rightarrow
0}\bigg{|}\frac{\lambda_{i,\delta}-\lambda_{i,0}}{\delta}-\dot{\lambda}_{i}\bigg{|}=0,$
then
$\displaystyle\dot{\lambda}_{i}=\langle\hat{e}_{i,0},\dot{L}e_{i,0}\rangle_{L^{2}([0,1],{\mathbb{C}})},$
(4)
where $\dot{L}$ is as in Definition 2.4.
###### Proof.
From Theorem 2.2 and the discussion following it, $1\in\sigma(L_{0})$ and
$r(L_{0})=1$.
We now use Theorem III.8 in [28] to obtain the existence of linear response
and Corollary III.11 [28] to obtain the formula. We begin by verifying the two
hypotheses of Theorem III.8 [28]. We remark that our map $\delta\mapsto
L_{\delta}$ belonging to
$C^{1}([0,\bar{\delta}),\mathcal{B}(L^{2}([0,1],{\mathbb{C}})))$ can be
extended to a map
$C^{1}((-\bar{\delta},\bar{\delta}),\mathcal{B}(L^{2}([0,1],{\mathbb{C}})))$.
Doing so, hypothesis $(H1)$ of Theorem III.8 [28] is satisfied. Since
$r(L_{0})=1$, we just need to show that $L_{0}$ has $s$ dominating
eigenvalues. Since $L_{0}$ is a compact operator, the eigenvalues
$\lambda_{i,0}\in\mathcal{I}$ are isolated. Let $\Pi_{i}$ be the
eigenprojection onto the eigenspace of $\lambda_{i,0}$ and
$E_{i}:=\Pi_{i}(L^{2}([0,1],{\mathbb{C}}))$. Define the eigenspaces
$E:=\bigoplus_{i=1}^{s}E_{i}$ and
$\widetilde{E}:=(\text{Id}-\sum_{i=1}^{s}\Pi_{i})(L^{2}([0,1],{\mathbb{C}}))$.
We thus have:
* (1)
$L^{2}([0,1],{\mathbb{C}})=E\oplus\widetilde{E}$.
* (2)
$L_{0}\left(E\right)\subset E$ and $L_{0}(\widetilde{E})\subset\widetilde{E}$.
* (3)
$\dim\left(E\right)=s$ and $L_{0}|_{E}$ has $s$ simple eigenvalues, namely
$\\{\lambda_{1,0}\\}\cup\mathcal{I}$. This point follows from the assumption that
the eigenvalues in $\mathcal{I}$ are geometrically simple and the fact that
$\lambda_{1,0}$ is simple (see Theorem 2.2).
* (4)
$r(L_{0}|_{\widetilde{E}})<\min\\{|\lambda_{i,0}|:\lambda_{i,0}\in\mathcal{I}\\}$.
Thus, $L_{0}$ satisfies hypothesis (H2) of Theorem III.8 since it has $s$
dominating simple eigenvalues and $r(L_{0})=1$. Hence, from Theorem III.8
[28], the map $\delta\mapsto\lambda_{i,\delta}$ is differentiable at
$\delta=0$.
We can now apply the argument in Corollary III.11 [28] for $\lambda_{i,0}$ to
obtain (4) (the result and proof of Corollary III.11 [28] are stated for the top
eigenvalue; however, the argument still holds for any eigenvalue
$\lambda_{i,0}\in\mathcal{I}$ by changing the index value in the proof of
the corollary).
## 3 Application to Hilbert-Schmidt integral operators
In this section we apply the results of the previous section to Hilbert-
Schmidt integral operators and suitable perturbations. The operators we
consider are compact operators on $L^{2}([0,1],{\mathbb{R}})$ (or
$L^{2}([0,1],{\mathbb{C}})$); for brevity we will denote
$L^{2}:=L^{2}([0,1],{\mathbb{R}})$ (we will also denote
$L^{p}:=L^{p}([0,1],{\mathbb{R}})$; this notation will not be used for
$L^{2}([0,1],{\mathbb{C}})$). To avoid
confusion we point out that in the following we will also consider the space
$L^{2}([0,1]^{2})$ of square integrable real functions on the unit square;
this space contains the kernels of the operators we consider.
Let $k\in L^{2}([0,1]^{2})$ and consider the operator $L:L^{2}\to L^{2}$
defined in the following way: for $f\in L^{2}$
$Lf(x)=\int k(x,y)f(y)dy;$ (5)
such an operator is called a Hilbert-Schmidt integral operator. Such operators
may represent the annealed transfer operators of systems perturbed by additive
noise (see Section 6).
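Discretised with a midpoint quadrature rule, the action (5) of a Hilbert-Schmidt integral operator becomes a matrix-vector product. The following Python sketch is purely illustrative (the kernel $1+\frac{1}{2}\cos(2\pi(x-y))$, the grid size and the quadrature rule are our choices, not taken from the paper); it checks the integral-preservation property (3) numerically.

```python
import numpy as np

n = 200                       # grid points on [0, 1]
h = 1.0 / n                   # cell width of the midpoint rule
x = (np.arange(n) + 0.5) * h  # midpoint nodes

# Illustrative stochastic kernel: its integral over x equals 1 for every y.
K = 1.0 + 0.5 * np.cos(2 * np.pi * (x[:, None] - x[None, :]))

def L(f):
    """Midpoint-rule discretisation of (Lf)(x) = integral of k(x,y) f(y) dy, as in (5)."""
    return h * (K @ f)

f = 1.0 + np.sin(2 * np.pi * x)
g = L(f)
print(abs(h * g.sum() - h * f.sum()))  # integral preservation (3), ~ machine precision
```

The same discretisation pattern underlies all the numerical sketches in this section.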
We now list some well-known and basic facts about Hilbert-Schmidt integral
operators with kernels in $L^{2}([0,1]^{2})$:
* •
The operator $L:L^{2}\rightarrow L^{2}$ is bounded and
$||Lf||_{2}\leq||k||_{L^{2}([0,1]^{2})}||f||_{2}$ (6)
(see Proposition 4.7 in II.§4 [8]).
* •
If $k\in L^{\infty}([0,1]^{2})$, then
$||Lf||_{\infty}\leq||k||_{L^{\infty}([0,1]^{2})}||f||_{1}$ (7)
and the operator $L:L^{1}\rightarrow L^{\infty}$ is bounded. Furthermore,
$\|L\|_{L^{p}\rightarrow L^{\infty}}\leq\|k\|_{L^{\infty}([0,1]^{2})}$ for
$1\leq p\leq\infty$.
* •
If for almost every $y\in[0,1]$ we have
$\int k(x,y)dx=1,$
then the Hilbert-Schmidt integral operator associated to the kernel $k$ is
integral preserving (satisfies (3)).
* •
The operator $L:L^{2}\rightarrow L^{2}$ is compact (see [31]).
Combining the last two points, we have from Theorem 2.2 that such an operator
has an invariant function in $L^{2}$. Furthermore, for $k\in
L^{\infty}([0,1]^{2})$ we have an analogous result.
###### Lemma 3.1.
Let $L:L^{2}\rightarrow L^{2}$ be an integral operator, with integral-
preserving kernel $k\in L^{\infty}([0,1]^{2})$, that is mixing (satisfies
$(A1)$ of Theorem 2.2). Then, there exists a unique fixed point $f\in
L^{\infty}$ of $L$ satisfying $\int f\ dm=1$. Furthermore, if the kernel is
nonnegative, then $f$ is nonnegative.
###### Proof.
Since $k$ is an integral-preserving kernel, $L$ satisfies (3). Thus, we
can apply Theorem 2.2 to conclude that there exists a unique $f\in L^{2}$,
$\int f\ dm=1$, such that $Lf=f$. Noting that $k\in L^{\infty}([0,1]^{2})$, we
have from inequality (7) that $f\in L^{\infty}$.
We now assume $k$ is nonnegative. Let $k^{j}$ be the kernel of the operator
$L^{j}$. Since $k$ is an integral-preserving kernel, we have
$\displaystyle|k^{2}(x,y)|$ $\displaystyle=\bigg{|}\int
k(x,z)k(z,y)dz\bigg{|}\leq\int|k(x,z)k(z,y)|dz$
$\displaystyle\leq\|k\|_{L^{\infty}([0,1]^{2})}\int
k(z,y)dz=\|k\|_{L^{\infty}([0,1]^{2})};$
it easily follows that
$\|k^{j}\|_{L^{\infty}([0,1]^{2})}\leq\|k\|_{L^{\infty}([0,1]^{2})}$. Thus,
for any probability density $g\in L^{1}$, we have
$\|L^{j}g\|_{\infty}\leq\|k\|_{L^{\infty}([0,1]^{2})}$; thus, by Corollary
5.2.2 in [35], there exists a probability density $\hat{f}\in L^{1}$ such that
$L\hat{f}=\hat{f}$. Since $f$ is the unique invariant function with integral
$1$, we have $\hat{f}=f$; thus, $f$ is a probability density.
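As a numerical illustration of Lemma 3.1, one can approximate the invariant density of a nonnegative, integral-preserving kernel by iterating the discretised operator. The separable kernel below is our illustrative choice, not from the paper; for it the unique invariant density is $f_{0}\equiv 1$, which the iteration recovers.

```python
import numpy as np

n = 200
h = 1.0 / n
x = (np.arange(n) + 0.5) * h

# Illustrative nonnegative stochastic kernel: minimum value 1/2, and the
# integral over x equals 1 for every y.
K = 1.0 + 0.5 * np.outer(np.cos(2 * np.pi * x), np.cos(2 * np.pi * x))

f = 1.0 + np.cos(2 * np.pi * x)   # a probability density to start from
for _ in range(40):               # power iteration: f <- Lf
    f = h * (K @ f)

# For this separable kernel the unique invariant density is f0 = 1.
print(np.max(np.abs(f - 1.0)), h * f.sum())
```

The second eigenvalue of this operator is $1/4$, so the iteration converges geometrically while the integral of $f$ stays equal to $1$.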
### 3.1 Characterising valid perturbations and the derivative of the transfer
operator
In this subsection we consider perturbations of integral-preserving Hilbert-
Schmidt integral operators such that assumption (A2) of Theorem 2.2 can be
verified and the derivative operator $\dot{L}$ computed. We begin, however, by
first characterizing the set of perturbations for which the integral
preserving property of the operators is preserved.
Consider the set $V_{\ker}$ of kernels having zero average in the $x$
direction, defined as
$V_{\ker}:=\bigg{\\{}k\in L^{2}([0,1]^{2}):\int
k(x,y)dx=0~{}for~{}a.e.~{}y\bigg{\\}}.$
###### Lemma 3.2.
Consider a kernel operator $A:L^{2}([0,1])\rightarrow L^{2}([0,1])$ defined by
$Af(x)=\int k(x,y)f(y)dy$. Then, the following are equivalent:
1. $A(L^{2}([0,1]))\subseteq V$,
2. $k\in V_{\ker}$.
###### Proof.
Clearly, the second condition implies the first. For the other direction we
prove the contrapositive. If $\int k(x,y)dx\neq 0$ on a set of positive
measure, then for a small $\epsilon>0$ there is a set $S$ of positive measure
$m(S)>0$ such that $\int k(x,y)dx\geq\epsilon$ or $\int k(x,y)dx\leq-\epsilon$
for each $y\in S$. Suppose $\int k(x,y)dx\geq\epsilon$ on this set; consider
$f:=\mathbf{1}_{S}$ and $g:=Af.$ Then, $g(x)=\int k(x,y)\mathbf{1}_{S}(y)dy$
and we have $\int g(x)dx=\int_{S}\int k(x,y)dxdy\geq\epsilon\ m(S)$ and
$g\notin V$. The other case $\int k(x,y)dx\leq-\epsilon$ is analogous.
We now prove that $V_{\ker}$ is closed.
###### Lemma 3.3.
The set $V_{\ker}$ is a closed vector subspace of $L^{2}([0,1]^{2}).$
###### Proof.
The fact that $V_{\ker}$ is a vector space is trivial. For fixed $f\in
L^{2}([0,1])$, the set of $k\in L^{2}([0,1]^{2})$ such that $\int
k(x,y)f(y)dy\in V$ is closed. To see this, define the function
$K_{f}:L^{2}([0,1]^{2})\rightarrow L^{2}([0,1])$ as
$K_{f}(k)=\int k(x,y)f(y)dy.$ (8)
By (6), $K_{f}$ is continuous. Since $V$ is closed in $L^{2}([0,1])$, this
implies that $K_{f}^{-1}(V)$ is closed in $L^{2}([0,1]^{2}).$ Finally,
$V_{\ker}$ is closed in $L^{2}([0,1]^{2})$ because $V_{\ker}=\cap_{f\in
L^{2}([0,1])}K_{f}^{-1}(V)$.
We now introduce the type of perturbations which we will investigate
throughout the paper. Let $L_{\delta}:L^{2}\rightarrow L^{2}$ be a family of
integral operators, with kernels $k_{\delta}\in L^{2}([0,1]^{2})$, given by
$L_{\delta}f(x)=\int k_{\delta}(x,y)f(y)dy.$
###### Lemma 3.4.
Let $k_{\delta}\in L^{2}([0,1]^{2})$, for each $\delta\in[0,\bar{\delta})$, be
such that the associated operators $L_{\delta}$ are integral preserving.
Suppose that
$k_{\delta}=k_{0}+\delta\cdot\dot{k}+r_{\delta}$ (9)
where $\dot{k},\ r_{\delta}\in L^{2}([0,1]^{2})$ and
$||r_{\delta}||_{L^{2}([0,1]^{2})}=o(\delta).$ The bounded linear operator
$\dot{L}:L^{2}\to V$ defined by
$\dot{L}f(x):=\int\dot{k}(x,y)f(y)dy$ (10)
satisfies
$\lim_{\delta\rightarrow
0}\bigg{\|}\frac{L_{\delta}-L_{0}}{\delta}-\dot{L}\bigg{\|}_{L^{2}\to V}=0.$
If additionally the derivative of the map $\delta\mapsto k_{\delta}$ with
respect to $\delta$ varies continuously in a neighborhood of $\delta=0$, then
$\delta\mapsto L_{\delta}$ has a continuous derivative in a neighborhood of
$\delta=0$.
###### Proof.
By integral preservation of $L_{\delta}$ and the fact that $\dot{k}\in
L^{2}([0,1]^{2})$, one sees that $\dot{L}:L^{2}\to V$ and is bounded. By (9),
$\displaystyle\left\|\frac{L_{\delta}-L_{0}}{\delta}-\dot{L}\right\|_{L^{2}\to
V}$ $\displaystyle=$
$\displaystyle\sup_{\|f\|_{L^{2}}=1}\left\|\int\frac{k_{\delta}(x,y)-k_{0}(x,y)}{\delta}f(y)\
dy-\int\dot{k}(x,y)f(y)\ dy\right\|_{L^{2}}$ $\displaystyle=$
$\displaystyle\sup_{\|f\|_{L^{2}}=1}\left\|\int r_{\delta}(x,y)f(y)\
dy\right\|_{L^{2}}$ $\displaystyle\leq$
$\displaystyle\|r_{\delta}\|_{L^{2}([0,1]^{2})}=o(\delta).$
Proceeding similarly, one shows that if the map $\delta\mapsto k_{\delta}$ has
a continuous derivative with respect to $\delta$ in a neighborhood of
$\delta=0$, then $\delta\mapsto L_{\delta}$ has a continuous derivative.
Indeed we are supposing that for each $\delta\in[0,\overline{\delta})$ there
is $\dot{k}_{\delta}$ such that for small enough $h$
$k_{\delta+h}=k_{\delta}+h\cdot\dot{k}_{\delta}+r_{\delta,h}$
where $\dot{k}_{\delta},\ r_{\delta,h}\in L^{2}([0,1]^{2})$,
$||r_{\delta,h}||_{L^{2}([0,1]^{2})}=o(h)$ and furthermore
$\delta\mapsto\dot{k}_{\delta}$ is continuous. We then have by (6) that the
associated operators $\dot{L}_{\delta}$, defined as
$\dot{L}_{\delta}f(x):=\int\dot{k}_{\delta}(x,y)f(y)dy$ (11)
also vary continuously in $\delta$.
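The expansion (9) and the convergence claim of Lemma 3.4 can be checked numerically. In the sketch below, all kernels are our illustrative choices and the remainder is taken to be $r_{\delta}=\delta^{2}q$, so the operator-norm error of the difference quotient should decay linearly in $\delta$.

```python
import numpy as np

n = 200
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")   # X varies along rows (the x variable)

K0 = 1.0 + 0.5 * np.cos(2 * np.pi * (X - Y))          # base stochastic kernel
Kdot = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)  # k-dot: integral over x is 0
Q = np.cos(2 * np.pi * X) * np.sin(4 * np.pi * Y)     # second-order remainder kernel

def opnorm(M):
    """L2 -> L2 operator norm of the discretised integral operator f -> h*M f."""
    return h * np.linalg.norm(M, 2)      # spectral norm, scaled by the cell width

errs = []
for d in (1e-1, 1e-2, 1e-3):
    Kd = K0 + d * Kdot + d**2 * Q        # k_delta = k0 + delta*k-dot + r_delta
    errs.append(opnorm((Kd - K0) / d - Kdot))

print(errs)   # each entry is roughly 10x smaller than the previous one
```

Here the error equals $\delta\,\|q\|$ exactly, so successive entries shrink by a factor of ten, matching the $o(\delta)$ bound in the proof.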
### 3.2 A formula for the linear response of the invariant measure and its
continuity
Now we apply Theorem 2.2 to Hilbert-Schmidt integral operators to obtain a
linear response formula for $L^{2}$ perturbations.
###### Corollary 3.5 (Linear response formula for kernel operators).
Suppose $L_{\delta}:L^{2}\rightarrow L^{2}$ are integral-preserving
(satisfying (3)) integral operators with stochastic kernels $k_{\delta}\in
L^{2}([0,1]^{2})$ as in (9). Suppose $L_{0}$ satisfies assumption $(A1)$ of
Theorem 2.2. Then $\dot{k}\in V_{\ker}$, the system has linear response for
this perturbation and an explicit formula for it is given by
$\lim_{\delta\rightarrow
0}\frac{f_{\delta}-f_{0}}{\delta}=(\text{Id}-L_{0})^{-1}\int\dot{k}(x,y)f_{0}(y)dy$
(12)
with convergence in $L^{2}.$
###### Proof.
Since $L_{\delta}$, $\delta\in[0,\bar{\delta})$, is integral preserving, we
have $(L_{\delta}-L_{0})(L^{2})\subset V$ and therefore, $k_{\delta}-k_{0}\in
V_{\ker}$ by Lemma 3.2, i.e. $\delta\dot{k}+r_{\delta}\in V_{\ker}$. Then, for
a.e. $y\in[0,1]$ and $\delta\not=0$, we have
$\displaystyle\bigg{|}\int\dot{k}(x,y)dx\bigg{|}\leq\frac{1}{\delta}\int|r_{\delta}(x,y)|dx\leq\frac{1}{\delta}\|r_{\delta}\|_{L^{2}([0,1]^{2})}.$
As $\delta\rightarrow 0$, the right hand side approaches $0$ and, since
$\int\dot{k}(x,y)dx$ is independent of $\delta$, we have
$\int\dot{k}(x,y)dx=0$ for a.e. $y\in[0,1]$, i.e. $\dot{k}\in V_{\ker}$.
Furthermore, by (9) and (6) there is a $K\geq 0$ such that
$\|L_{0}-L_{\delta}\|_{L^{2}\rightarrow L^{2}}\leq K\delta.$ (13)
Hence the family of operators satisfies the first part of assumption $(A2)$. The
second part of this assumption is established by Lemma 3.4.
Since the operators $L_{\delta}$ are compact, integral preserving, and satisfy
assumptions $(A1)$ and $(A2)$ we can conclude by applying Theorem 2.2 to this
family of operators, obtaining
$\lim_{\delta\rightarrow
0}\left\|\frac{f_{\delta}-f_{0}}{\delta}-(\text{Id}-L_{0})^{-1}\int\dot{k}(x,y)f_{0}(y)dy\right\|_{2}=0.$
Now we show that the linear response of the invariant measure is continuous
with respect to the kernel perturbation. This will be used in Section 4 for
the proof of the existence of solutions of our main optimization problems.
Consider the transfer operator $L_{0}$, having a kernel $k_{0}\in
L^{2}([0,1]^{2})$, and a set of infinitesimal perturbations $P\subset
V_{\ker}$ of $k_{0}$. We will endow $P$ with the topology induced by its
inclusion in $L^{2}([0,1]^{2})$. Suppose $L_{\delta}$ is a perturbation of
$L_{0}$ satisfying the assumptions of Lemma 3.4. By Corollary 3.5, the linear
response will depend on the first-order term of the perturbation, $\dot{k}\in
P$, allowing us to define the function $R:P\rightarrow V$ by
$R(\dot{k}):=(\text{Id}-L_{0})^{-1}\int\dot{k}(x,y)f_{0}(y)dy.$ (14)
By (6) and the continuity of the resolvent operator it follows directly that
the response function
$R:(P,\|\cdot\|_{L^{2}([0,1]^{2})})\to(V,\|\cdot\|_{L^{2}})$ is continuous.
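The response formula (12) can be tested on a discretisation: $\text{Id}-L_{0}$ is singular on span$\\{\mathbf{1}\\}$ but invertible on $V$, so a minimum-norm least-squares solve recovers the response in $V$. The kernel and perturbation below are illustrative choices for which $R(\dot{k})(x)=\sin(2\pi x)$ in closed form.

```python
import numpy as np

n = 200
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
cx = np.cos(2 * np.pi * x)

K0 = 1.0 + 0.5 * np.outer(cx, cx)     # stochastic, mixing; invariant density f0 = 1
f0 = np.ones(n)
# Illustrative perturbation k-dot(x,y) = sin(2*pi*x): its integral over x is 0.
Kdot = np.outer(np.sin(2 * np.pi * x), np.ones(n))

rhs = h * (Kdot @ f0)                 # integral of k-dot(x,y) f0(y) dy
A = np.eye(n) - h * K0                # Id - L0, singular on span{1}
R, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # minimum-norm solution lies in V

# For this kernel, R(k-dot)(x) = sin(2*pi*x) exactly.
print(np.max(np.abs(R - np.sin(2 * np.pi * x))))
```

The minimum-norm solution is automatically orthogonal to the null direction $\mathbf{1}$, which is exactly the restriction to $V$ used in (12).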
### 3.3 A formula for the linear response of the dominant eigenvalues and its
continuity
We apply Proposition 2.6 to Hilbert-Schmidt integral operators and obtain a
linear response formula for the dominant eigenvalues in the case of $L^{2}$
perturbations.
###### Corollary 3.6.
Suppose $L_{\delta}:L^{2}([0,1],{\mathbb{C}})\rightarrow
L^{2}([0,1],{\mathbb{C}})$ are integral-preserving (satisfying (3)) integral
operators with kernels $k_{\delta}\in L^{2}([0,1]^{2})$ satisfying
$\delta\mapsto k_{\delta}\in C^{1}([0,\bar{\delta}),L^{2}([0,1]^{2}))$.
Suppose $L_{0}$ satisfies $(A1)$ of Theorem 2.2. Let
$\lambda_{0}\in{\mathbb{C}}$ be an eigenvalue of $L_{0}$ with the largest
magnitude strictly inside the unit circle and assume that $\lambda_{0}$ is
geometrically simple. Then, there exists $\dot{\lambda}\in\mathbb{C}$ such
that
$\lim_{\delta\rightarrow
0}\bigg{|}\frac{\lambda_{\delta}-\lambda_{0}}{\delta}-\dot{\lambda}\bigg{|}=0.$
Furthermore,
$\displaystyle\dot{\lambda}$
$\displaystyle=\int_{0}^{1}\int_{0}^{1}\dot{k}(x,y)\left(\Re(\hat{e})(x)\Re(e)(y)+\Im(\hat{e})(x)\Im(e)(y)\right)dydx$
(15)
$\displaystyle\qquad+i\int_{0}^{1}\int_{0}^{1}\dot{k}(x,y)\left(\Im(\hat{e})(x)\Re(e)(y)-\Re(\hat{e})(x)\Im(e)(y)\right)dydx,$
where $e\in L^{2}([0,1],\mathbb{C})$ is the eigenvector of $L_{0}$ associated
to the eigenvalue $\lambda_{0}$, $\hat{e}\in L^{2}([0,1],\mathbb{C})$ is the
eigenvector of $L_{0}^{*}$ associated to the eigenvalue $\lambda_{0}$ and
$\dot{L}$ is the operator in Lemma 3.4.
###### Proof.
Since $k_{\delta}\in L^{2}([0,1]^{2})$, the operator
$L_{\delta}:L^{2}([0,1],{\mathbb{C}})\rightarrow L^{2}([0,1],{\mathbb{C}})$ is
compact; by assumption, it also satisfies (3). From Lemma 3.4, the map
$\delta\mapsto L_{\delta}$ is $C^{1}$. Hence, by Proposition 2.6, we have
$\dot{\lambda}=\langle\hat{e},\dot{L}e\rangle_{L^{2}([0,1],\mathbb{C})}$.
Finally, we compute
$\displaystyle\dot{\lambda}=\langle\hat{e},\dot{L}e\rangle_{L^{2}([0,1],\mathbb{C})}$
$\displaystyle=\int_{0}^{1}\hat{e}(x)\overline{\dot{L}e}(x)dx$
$\displaystyle=\int_{0}^{1}\hat{e}(x)\overline{\int_{0}^{1}\dot{k}(x,y)e(y)dy}dx$
$\displaystyle=\int_{0}^{1}\int_{0}^{1}\dot{k}(x,y)\hat{e}(x)\bar{e}(y)dydx$
$\displaystyle=\int_{0}^{1}\int_{0}^{1}\dot{k}(x,y)\left(\Re(\hat{e})(x)\Re(e)(y)+\Im(\hat{e})(x)\Im(e)(y)\right)dydx$
$\displaystyle\qquad+i\int_{0}^{1}\int_{0}^{1}\dot{k}(x,y)\left(\Im(\hat{e})(x)\Re(e)(y)-\Re(\hat{e})(x)\Im(e)(y)\right)dydx.$
From the expression in the final line of the proof above, it is clear that if
we consider $\dot{\lambda}$ as a function of $\dot{k}$, the map
$\dot{\lambda}:(V_{\ker},\|\cdot\|_{L^{2}([0,1]^{2})})\to\mathbb{C}$ is
continuous.
## 4 Optimal response: optimising the expectation of observables and mixing
rate
Having described the responses of our dynamical systems to perturbations, it
is natural to consider the optimisation problem of finding perturbations that
provoke _maximal_ responses. We consider the problems of finding the
infinitesimal perturbation that maximises the expectation of a given
observable and the infinitesimal perturbation that maximally enhances mixing.
In doing so, we extend the approach in [1] from the setting of finite-state
Markov chains to the integral operators considered in the present paper.
We show that at an abstract level these problems reduce to the optimization of
a linear continuous functional $\mathcal{J}$ on a convex set $P$ of feasible
perturbations; this problem has a solution and the solution is unique if the
set $P$ of allowed infinitesimal perturbations is strictly convex. The
convexity assumption on $P$ is natural because if two different perturbations
of the system are possible, then their convex combination (applying the two
perturbations with different intensities) will also be possible. After
introducing the abstract setting, we construct the objective functions for our
two optimal response problems and state general existence and uniqueness
results for the optima. Later, in Section 5 we focus on the construction of
the set of feasible perturbations and provide explicit formulae for the
maximising perturbations.
### 4.1 General optimisation setting, existence and uniqueness
We recall some general results (adapted for our purposes) on optimizing a
linear continuous function on convex sets; see also Lemma 6.2 [16]. The
abstract problem is to find $\dot{k}$ such that
$\mathcal{J}(\dot{k})=\max_{\dot{h}\in P}\mathcal{J}(\dot{h}),$ (16)
where $\mathcal{J}:\mathcal{H}\rightarrow{\mathbb{R}}$ is a continuous linear
function, $\mathcal{H}$ is a separable Hilbert space and
$P\subset\mathcal{H}$.
###### Proposition 4.1 (Existence of the optimal solution).
Let $P$ be bounded, convex, and closed in $\mathcal{H}$. Then, problem
(16) has at least one solution.
###### Proof.
Since $P$ is bounded and $\mathcal{J}$ is continuous, we have that $\sup_{k\in
P}\mathcal{J}(k)<\infty$. Consider a maximizing sequence $k_{n}$ such that
$\lim_{n\rightarrow\infty}\mathcal{J}(k_{n})=\sup_{k\in P}\mathcal{J}(k)$.
Then, $k_{n}$ has a subsequence $k_{n_{j}}$ converging in the weak topology.
Since $P$ is strongly closed and convex in $\mathcal{H}$, we have that it is
weakly closed. This implies that
$\overline{k}:=\lim_{j\rightarrow\infty}k_{n_{j}}\in P.$ Also, since
$\mathcal{J}(k)$ is continuous and linear, it is continuous in the weak
topology. Then we have that
$\mathcal{J}(\overline{k})=\lim_{j\rightarrow\infty}\mathcal{J}(k_{n_{j}})=\sup_{k\in
P}\mathcal{J}(k)$ and we realise a maximum.
Uniqueness of the optimal solution will be provided by strict convexity of the
feasible set.
###### Definition 4.2.
We say that a convex closed set $A\subseteq\mathcal{H}$ is _strictly convex_
if for each pair $x,y\in A$ and for all $0<\gamma<1$, the point $\gamma
x+(1-\gamma)y$ lies in $\mathrm{int}(A)$, where $\mathrm{int}(A)$ denotes the
relative interior (the interior of $A$ relative to the closed affine hull of
$A$, see e.g. [7]).
###### Proposition 4.3 (Uniqueness of the optimal solution).
Suppose $P$ is a closed, bounded, and strictly convex subset of $\mathcal{H}$,
and that $P$ contains the zero vector in its relative interior. If $\mathcal{J}$
is not uniformly vanishing on $P$, then the optimal solution to (16) is unique.
###### Proof.
Suppose that there are two distinct maxima $\dot{k}_{1},\dot{k}_{2}\in P$ with
$\mathcal{J}(\dot{k}_{1})=\mathcal{J}(\dot{k}_{2})=\alpha$. Let $0<\gamma<1$
and set $z=\gamma\dot{k}_{1}+(1-\gamma)\dot{k}_{2}$. By strict convexity of
$P$, $z\in\mathrm{int}(P)$, and by linearity of $\mathcal{J}$,
$\mathcal{J}(z)=\alpha$. Let $B_{r}(z)$ denote a (relative in $P$) open ball
of radius $r$ centred at $z$, with $r>0$ chosen small enough so that
$B_{r}(z)\subset\mathrm{int}(P)$. Because the zero vector lies in the relative
interior of $P$, and $\mathcal{J}$ does not uniformly vanish on $P$, there
exists a vector $v\in B_{r}(z)$ such that $\mathcal{J}(v)>0$. Now
$z+\frac{rv}{2\|v\|}\in\mathrm{int}(P)$ and
$\mathcal{J}(z+\frac{rv}{2\|v\|})>\alpha$, contradicting maximality of
$\dot{k}_{1}$.
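When $P$ is the closed unit ball of $\mathcal{H}$ and $\mathcal{J}(h)=\langle g,h\rangle$ for a Riesz representer $g$, the maximiser in (16) is simply $g/\|g\|$. A small finite-dimensional sanity check (the dimension and the random data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal(50)          # Riesz representer of J: J(h) = <g, h>

h_opt = g / np.linalg.norm(g)        # closed-form maximiser over the unit ball

# Compare against random feasible points, each projected into the ball:
# none beats the closed-form optimum.
vals = [g @ (v / max(1.0, np.linalg.norm(v)))
        for v in rng.standard_normal((1000, 50))]
print(g @ h_opt, max(vals))
```

The optimum value is $\|g\|$, attained only in the direction of $g$, illustrating both existence (Proposition 4.1) and uniqueness on a strictly convex ball (Proposition 4.3).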
In the following subsections we apply the general results of this section to
our specific optimisation problems.
### 4.2 Optimising the response of the expectation of an observable
Let $c\in L^{2}$ be a given observable. We consider the problem of finding an
infinitesimal perturbation that maximises the expectation of $c$. The
perturbations we consider are perturbations to the kernels of Hilbert-Schmidt
integral operators, of the form (9). If we denote the average of $c$ with
respect to the perturbed invariant density $f_{\delta}$ by
$\mathbb{E}_{c,\delta}:=\int c~{}f_{\delta}~{}dm,$
we have
$\frac{d\mathbb{E}_{c,\delta}}{d\delta}\bigg{|}_{\delta=0}=\lim_{\delta\rightarrow
0}\frac{\mathbb{E}_{c,\delta}-\mathbb{E}_{c,0}}{\delta}=\lim_{\delta\rightarrow
0}\int c~{}\frac{f_{\delta}-f_{0}}{\delta}~{}dm=\int c~{}R(\dot{k})~{}dm,$
where the last equality follows from Corollary 3.5.
The function $\mathcal{J}(\dot{k})=\langle c,R(\dot{k})\rangle$ is clearly
continuous as a map from $(V_{\ker},\|\cdot\|_{L^{2}([0,1]^{2})})$ to
$\mathbb{R}$. Suppose that $P$ is a closed, bounded, convex subset of
$V_{\ker}$ containing the zero perturbation, and that $\mathcal{J}$ is not
uniformly vanishing on $P$. We wish to solve the following problem:
###### General Problem 1.
Find $\dot{k}\in P$ such that
$\big{\langle}c,R(\dot{k})\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})}=\max_{\dot{h}\in
P}\big{\langle}c,R(\dot{h})\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})}.$ (17)
We may immediately apply Proposition 4.1 to obtain that there exists a
solution to (17). If, in addition, $P$ is strictly convex, then by Proposition
4.3 the solution to (17) _is unique_.
To end this subsection we note that without loss of generality, we may assume
that $c\in$ span$\\{f_{0}\\}^{\perp}$. This is because for $c\in L^{2}$, we
have
$\langle c,R(\dot{k})\rangle_{L^{2}([0,1],{\mathbb{R}})}=\langle c-\langle
c,f_{0}\rangle_{L^{2}([0,1],{\mathbb{R}})}\mathbf{1},R(\dot{k})\rangle_{L^{2}([0,1],{\mathbb{R}})},$
since $R(\dot{k})\in V$. From $\int f_{0}(x)dx=1,$ we have that
$f\mapsto\langle f,f_{0}\rangle_{L^{2}([0,1],{\mathbb{R}})}\mathbf{1}$ is a
projection onto span$\\{\mathbf{1}\\}$ and so $f\mapsto f-\langle
f,f_{0}\rangle_{L^{2}([0,1],{\mathbb{R}})}\mathbf{1}$ is a projection onto
span$\\{f_{0}\\}^{\perp}$.
### 4.3 Optimising the response of the rate of mixing
We now consider the linear response problem of optimising the rate of mixing.
Let $\lambda_{0}\in{\mathbb{C}}$ denote an eigenvalue of $L_{0}$ strictly
inside the unit circle with largest magnitude. From now on, whenever
discussing the linear response of eigenvalues to kernel perturbations we
assume the conditions of Corollary 3.6. We recall that $e$ and $\hat{e}$ are
the eigenfunctions of $L_{0}$ and $L_{0}^{*}$, respectively, corresponding to
the eigenvalue $\lambda_{0}$.
To find the kernel perturbations that enhance mixing, we follow the approach
taken in [1] (see also [18, 16] in the continuous time setting), namely
perturbing our original dynamics $L_{0}$ in such a way that the modulus of the
second eigenvalue of the perturbed dynamics decreases. Equivalently, we want
to decrease the real part of the logarithm of the perturbed second eigenvalue.
The following result provides an explicit formula for this instantaneous rate
of change. Define
$E(x,y):=\left(\Re(\hat{e})(x)\Re(e)(y)+\Im(\hat{e})(x)\Im(e)(y)\right)\Re(\lambda_{0})+\left(\Im(\hat{e})(x)\Re(e)(y)-\Re(\hat{e})(x)\Im(e)(y)\right)\Im(\lambda_{0}).$
(18)
###### Lemma 4.4.
One has
$\frac{d}{d\delta}\Re\left(\log\lambda_{\delta}\right)\bigg{|}_{\delta=0}=\frac{\big{\langle}\dot{k},E\big{\rangle}_{L^{2}([0,1]^{2},{\mathbb{R}})}}{|\lambda_{0}|^{2}}.$
###### Proof.
From (15), we have that
$\Re(\dot{\lambda}_{0})=\int_{0}^{1}\int_{0}^{1}\dot{k}(x,y)\left(\Re(\hat{e})(x)\Re(e)(y)+\Im(\hat{e})(x)\Im(e)(y)\right)dydx$
(19)
and
$\Im(\dot{\lambda}_{0})=\int_{0}^{1}\int_{0}^{1}\dot{k}(x,y)\left(\Im(\hat{e})(x)\Re(e)(y)-\Re(\hat{e})(x)\Im(e)(y)\right)dydx.$
(20)
Next, we note that
$\frac{d}{d\delta}\Re(\log\lambda_{\delta})=\Re\left(\frac{d}{d\delta}\log\lambda_{\delta}\right)=\Re\left(\frac{d\lambda_{\delta}}{d\delta}\frac{1}{\lambda_{\delta}}\right).$
(21)
From (19)-(21), we obtain
$\frac{d}{d\delta}\Re\left(\log\lambda_{\delta}\right)\bigg{|}_{\delta=0}=\Re\left(\frac{\dot{\lambda}_{0}}{\lambda_{0}}\right)=\Re\left(\frac{\dot{\lambda}_{0}}{\lambda_{0}}\frac{\overline{\lambda_{0}}}{\overline{\lambda_{0}}}\right)=\frac{\Re(\dot{\lambda}_{0})\Re(\lambda_{0})+\Im(\dot{\lambda}_{0})\Im(\lambda_{0})}{|\lambda_{0}|^{2}}=\frac{\big{\langle}\dot{k},E\big{\rangle}_{L^{2}([0,1]^{2},{\mathbb{R}})}}{|\lambda_{0}|^{2}}.$
The function $\mathcal{J}(\dot{k})=\langle\dot{k},E\rangle$ is clearly
continuous as a map from $(V_{\ker},\|\cdot\|_{L^{2}([0,1]^{2})})$ to
$\mathbb{R}$. As in subsection 4.2, suppose that $P$ is a closed, bounded,
strictly convex subset of $V_{\ker}$ containing the zero element, and that
$\mathcal{J}$ is not uniformly vanishing on $P$. We wish to solve the
following problem:
###### General Problem 2.
Find $\dot{k}\in P$ such that
$\langle\dot{k},E\rangle_{L^{2}([0,1]^{2},{\mathbb{R}})}=\min_{\dot{h}\in
P}\langle\dot{h},E\rangle_{L^{2}([0,1]^{2},{\mathbb{R}})}.$ (22)
We may immediately apply Proposition 4.1 to obtain that there exists a
solution to (22). If, in addition, $P$ is strictly convex, then by Proposition
4.3 the solution to (22) _is unique_.
## 5 Explicit formulae for the optimal perturbations
Thus far we have not been specific about the feasible set $P$; we take up this
issue in this and the succeeding subsections to provide explicit formulae for
the optimal responses in both problems (17) and (22). First, we have not
required that the perturbed kernel $k_{\delta}$ in (9) be nonnegative for
$\delta>0$, however, this is a natural assumption. To facilitate this, for
$0<l<1$, define
$F_{l}:=\\{(x,y)\in[0,1]^{2}:k_{0}(x,y)\geq l\\}\quad\mbox{ and }\quad
S_{k_{0},l}:=\\{k\in L^{2}([0,1]^{2}):\text{supp}(k)\subseteq F_{l}\\}.$ (23)
The set of allowable perturbations that we will consider in the sequel is
$P_{l}:=V_{\ker}\cap S_{k_{0},l}\cap B_{1},$ (24)
where $B_{1}$ is the closed unit ball in $L^{2}([0,1]^{2})$.
We now begin verifying the conditions on $P_{l}$ and $\mathcal{J}$ required by
Proposition 4.3. First, $P_{l}$ is clearly bounded in $L^{2}([0,1]^{2})$.
Second, we note that as long as $F_{l}$ has positive Lebesgue measure, the
zero kernel is in the relative interior of $P_{l}$. Third, the following lemma
establishes that $S_{k_{0},l}$ is closed, so that $P_{l}$, being an
intersection of closed sets, is closed. Fourth, since $V_{\ker}$ and
$S_{k_{0},l}$ are closed subspaces, $V_{\ker}\cap S_{k_{0},l}$ is itself a
Hilbert space; intersecting it with the unit ball $B_{1}$ therefore yields a
strictly convex set $P_{l}$. Finally, sufficient conditions for the objective
function to not uniformly vanish are given in Lemma 5.2.
###### Lemma 5.1.
The set $S_{k_{0},l}$ is a closed subspace of $L^{2}([0,1]^{2})$.
###### Proof.
The fact that $S_{k_{0},l}$ is a subspace is trivial. Let $\\{k_{n}\\}\subset
S_{k_{0},l}$ and suppose $k_{n}\rightarrow_{L^{2}}k\in L^{2}([0,1]^{2})$.
Further suppose $\\{(x,y)\in[0,1]^{2}:k_{0}(x,y)<l\\}$ is not a null set;
otherwise $S_{k_{0},l}=L^{2}([0,1]^{2})$ and the result immediately follows.
Then, we have
$\int_{\\{k_{0}\geq
l\\}}(k_{n}(x,y)-k(x,y))^{2}dydx+\int_{\\{k_{0}<l\\}}k(x,y)^{2}dxdy\rightarrow
0.$
Since $\int_{\\{k_{0}\geq l\\}}(k_{n}(x,y)-k(x,y))^{2}dydx\geq 0$, if
$\int_{\\{k_{0}<l\\}}k(x,y)^{2}dxdy>0$ then we obtain a contradiction; thus,
$\int_{\\{k_{0}<l\\}}k(x,y)^{2}dxdy=0$ and therefore $k=0$ a.e. on
$\\{(x,y)\in[0,1]^{2}:k_{0}(x,y)<l\\}$. Hence, $S_{k_{0},l}$ is closed.
Let
$F_{l}^{y}:=\\{x\in[0,1]:(x,y)\in F_{l}\\},$ (25)
and for $F_{l}\subset[0,1]^{2}$, define
$\Xi(F_{l})=\\{y\in[0,1]:m(F_{l}^{y})>0\\}.$
The following lemma provides sufficient conditions for a functional of the
general form we wish to optimise to not uniformly vanish. The general
objective has the form
$\mathcal{J}(\dot{k})=\int\int\dot{k}(x,y)\mathcal{E}(x,y)\ dy\ dx$; in our
first specific objective (optimising response of expectations) we put
$\mathcal{E}(x,y)=((\text{Id}-L_{0}^{*})^{-1}c)(x)\cdot f_{0}(y)$ and in our
second specific objective (optimising mixing) we put $\mathcal{E}(x,y)=E(x,y)$
from (18). Let $\mathcal{E}^{+}$ and $\mathcal{E}^{-}$ denote the positive and
negative parts of $\mathcal{E}$. For $y\in\Xi(F_{l})$, let
$A(y)=\int_{F_{l}^{y}}\mathcal{E}^{+}(x,y)\ dx$ and
$a(y)=\int_{F_{l}^{y}}\mathcal{E}^{-}(x,y)\ dx$.
###### Lemma 5.2.
Assume that there is $\Xi^{\prime}\subset\Xi(F_{l})$ such that
$m(\Xi^{\prime})>0$ and $A(y),a(y)>0$ for $y\in\Xi^{\prime}$. Then there is a
$\dot{k}\in P_{l}$ such that $\mathcal{J}(\dot{k})>0$.
###### Proof.
For $y\in\Xi(F_{l})$, set
$\dot{k}(x,y)=\mathbf{1}_{F_{l}^{y}}(x)\left(a(y)\mathcal{E}^{+}(x,y)-A(y)\mathcal{E}^{-}(x,y)\right)$.
To show $\dot{k}\in P_{l}$ we need to check that (i) the support of $\dot{k}$
is contained in $F_{l}$ and (ii) $\int_{F_{l}^{y}}\dot{k}(x,y)\ dx=0$ for a.e.
$y\in\Xi(F_{l})$; these points show $\dot{k}\in S_{k_{0},l}\cap V_{\ker}$ and
by trivial scaling we may obtain $\dot{k}\in B_{1}$. Item (i) is obvious from
the definition of $\dot{k}$. For item (ii) we compute
$\int_{F_{l}^{y}}\dot{k}(x,y)\
dx=\int_{F_{l}^{y}}(a(y)\mathcal{E}^{+}(x,y)-A(y)\mathcal{E}^{-}(x,y))\
dx=a(y)A(y)-A(y)a(y)=0.$
Finally, we check that $\mathcal{J}(\dot{k})>0$. One has
$\displaystyle\int_{F_{l}}\dot{k}(x,y)\mathcal{E}(x,y)\ dx\ dy$
$\displaystyle=$
$\displaystyle\int_{F_{l}}\left(a(y)\mathcal{E}^{+}(x,y)-A(y)\mathcal{E}^{-}(x,y)\right)\cdot\mathcal{E}(x,y)\
dx\ dy$ $\displaystyle=$
$\displaystyle\int_{F_{l}}a(y)(\mathcal{E}^{+}(x,y))^{2}+A(y)(\mathcal{E}^{-}(x,y))^{2}\
dx\ dy$ $\displaystyle=$
$\displaystyle\int_{\Xi(F_{l})}\left[\left(\int_{F_{l}^{y}}\mathcal{E}^{-}(x,y)\
dx\right)\cdot\left(\int_{F_{l}^{y}}(\mathcal{E}^{+}(x,y))^{2}\
dx\right)+\left(\int_{F_{l}^{y}}\mathcal{E}^{+}(x,y)\
dx\right)\cdot\left(\int_{F_{l}^{y}}(\mathcal{E}^{-}(x,y))^{2}\
dx\right)\right]\ dy.$
This final expression is positive by the hypotheses of the lemma.
###### Remark 5.3.
We note that in the situation where $\mathcal{E}(x,y)$ is in separable form
$\mathcal{E}(x,y)=h_{1}(x)h_{2}(y)$—as in the case of optimising the
derivative of the expectation of an observable $c$, and in the case of
optimising the derivative of a real eigenvalue—then
$A(y)=h_{2}(y)\int_{F_{l}^{y}}h_{1}^{+}(x)\ dx$ and
$a(y)=h_{2}(y)\int_{F_{l}^{y}}h_{1}^{-}(x)\ dx$. Because $h_{2}=f_{0}$ and
$h_{2}=e$ are not the zero function, and $h_{1}=(\text{Id}-L_{0}^{*})^{-1}c$
and $h_{1}=\hat{e}$ are both nontrivial signed functions, the conditions of
Lemma 5.2 are relatively easy to satisfy.
### 5.1 Maximising the expectation of an observable
In this section we provide an explicit formula for the optimal kernel
perturbation to increase the expectation of an observation function $c$ by the
greatest amount. Since the objective function in (17) is linear in $\dot{k}$,
a maximum will occur on $\partial B_{1}\cap V_{\ker}\cap S_{k_{0},l}$ (i.e. we
only need to consider the optimization over the unit sphere and not the unit
ball). Thus, we consider the following reformulation of General Problem 1:
###### Problem A.
Given $l>0$ and $c\in$ span$\\{f_{0}\\}^{\perp}$, solve
$\displaystyle\min_{\dot{k}\in V_{\ker}\cap S_{k_{0},l}}$
$\displaystyle-\big{\langle}c,R(\dot{k})\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})}$
(26) subject to $\displaystyle\|\dot{k}\|_{L^{2}([0,1]^{2})}^{2}-1=0.$ (27)
Our first main result is:
###### Theorem 5.4.
Let $L_{0}:L^{2}\rightarrow L^{2}$ be an integral operator with the stochastic
kernel $k_{0}\in L^{2}([0,1]^{2})$. Suppose that $L_{0}$ satisfies $(A1)$ of
Theorem 2.2 and that there is a $\Xi^{\prime}\subset\Xi(F_{l})$ with
$m(\Xi^{\prime})>0$ and
$f_{0}(y)>0,\int_{F_{l}^{y}}((\text{Id}-L_{0}^{*})^{-1}c)^{+}(x)\ dx>0$, and
$\int_{F_{l}^{y}}((\text{Id}-L_{0}^{*})^{-1}c)^{-}(x)\ dx>0$ for
$y\in\Xi^{\prime}$. Then the unique solution to Problem A is
$\dot{k}(x,y)=\begin{cases}\frac{f_{0}(y)}{\alpha}\left(((\text{Id}-L_{0}^{\ast})^{-1}c)(x)-\frac{\int_{F_{l}^{y}}((\text{Id}-L_{0}^{\ast})^{-1}c)(z)dz}{m(F_{l}^{y})}\right)&(x,y)\in
F_{l},\\\ 0&\text{otherwise},\end{cases}$ (28)
where $\alpha>0$ is selected so that $\|\dot{k}\|_{L^{2}([0,1]^{2})}=1$.
Furthermore, if $c\in W:=$ span$\\{f_{0}\\}^{\perp}\cap L^{\infty}$, $k_{0}\in
L^{\infty}([0,1]^{2})$, and $k_{0}$ is such that $L_{0}:L^{1}\rightarrow
L^{1}$ is compact, then $\dot{k}\in L^{\infty}([0,1]^{2})$.
###### Proof.
See Appendix A.
Note that the expression for the optimal perturbation $\dot{k}$ in (28)
depends only on $k_{0}$ and $c$. This is in part a consequence of the fact
that the linear response formula (12) depends only on the first order term
$\dot{k}$ (the “direction” of the perturbation) in the expansion of
$k_{\delta}$. Thus, in order to find the unique perturbation that optimises
our linear response, we seek the best “direction” for the perturbation.
Similar comments hold for our other three optimal linear perturbation results
in later sections.
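Although the paper itself contains no numerics, Theorem 5.4 has a transparent finite-dimensional analogue that can serve as a sanity check. In the sketch below (a minimal NumPy stand-in; the matrix, observable, seed and all names are our choices, not from the paper) a column-stochastic matrix plays the role of $k_{0}$, column sums replace integrals over $x$, and $F_{l}$ is taken to be the whole square, so the analogue of (28) is $\dot{k}(x,y)\propto f_{0}(y)(v(x)-\overline{v})$ with $v=(\text{Id}-L_{0}^{*})^{-1}c$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Column-stochastic matrix as a finite stand-in for the kernel k0(x, y):
# column y is a probability vector in x, so "integral over x" = column sum.
M = rng.random((n, n))
M /= M.sum(axis=0)

# Stationary density f0: M f0 = f0, normalised to total mass 1.
w, V = np.linalg.eig(M)
f0 = np.real(V[:, np.argmax(np.real(w))])
f0 /= f0.sum()

# Observable c in span{f0}^perp (the response is only felt there).
c = rng.random(n)
c -= (c @ f0) / (f0 @ f0) * f0

# v = (Id - M^T)^{-1} c via least squares; the singular system is consistent
# because c is orthogonal to f0, which spans the kernel of Id - M.
v, *_ = np.linalg.lstsq(np.eye(n) - M.T, c, rcond=None)

# Finite analogue of (28) with F_l = [0,1]^2: kdot(x,y) ∝ f0(y)(v(x) - mean v).
kdot = np.outer(v - v.mean(), f0)
kdot /= np.linalg.norm(kdot)
assert np.allclose(kdot.sum(axis=0), 0.0)   # kdot lies in V_ker

def response(dk):
    # linear response of <c, f>: c^T (Id - M)^{-1} (dk f0)
    r, *_ = np.linalg.lstsq(np.eye(n) - M, dk @ f0, rcond=None)
    return c @ r

best = response(kdot)
for _ in range(100):                         # kdot beats random V_ker directions
    d = rng.standard_normal((n, n))
    d -= d.mean(axis=0)                      # project onto zero column sums
    d /= np.linalg.norm(d)
    assert response(d) <= best + 1e-9
```

The optimality is exactly the projection argument of the proof: the objective is $\langle\dot{k},\,v\,f_{0}^{T}\rangle$, and the maximiser over unit-norm zero-column-sum matrices is the normalised projection of $v f_{0}^{T}$ onto that subspace.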
###### Remark 5.5.
In certain situations we may desire to make non-infinitesimal perturbations
$k_{\delta}:=k_{0}+\delta\cdot\dot{k}$ that remain stochastic for small
$\delta>0$. If $\dot{k}\in L^{\infty}([0,1]^{2})\cap V_{\ker}\cap
S_{k_{0},l}$, clearly $k_{\delta}=k_{0}+\delta\cdot\dot{k}$ satisfies $\int
k_{\delta}(x,y)dx=1$ for a.e. $y$. Also, as we are only perturbing at values
where $k_{0}\geq l>0$, and since $\dot{k}$ is essentially bounded, there
exists a $\bar{\delta}>0$ such that $k_{\delta}\geq 0$ a.e. for all
$\delta\in(0,\bar{\delta})$. In summary, for $\delta\in(0,\bar{\delta})$,
$k_{\delta}$ is a stochastic kernel.
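The bound $\bar{\delta}$ of Remark 5.5 is easy to exhibit in a finite stand-in (our construction, assuming NumPy): for a strictly positive column-stochastic matrix playing the role of $k_{0}\geq l>0$ and any bounded $\dot{k}$ with zero column sums, one may take $\bar{\delta}$ to be the smallest entry of $k_{0}$ divided by $\max|\dot{k}|$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
M = rng.uniform(0.2, 1.0, (n, n))   # entries bounded below: plays k0 >= l > 0
M /= M.sum(axis=0)                  # column-stochastic

kdot = rng.standard_normal((n, n))
kdot -= kdot.mean(axis=0)           # V_ker: zero column sums
kdot /= np.linalg.norm(kdot)

# delta_bar as in Remark 5.5: perturbations this small cannot create negativity
delta_bar = M.min() / np.abs(kdot).max()
for delta in (0.25 * delta_bar, 0.5 * delta_bar):
    K = M + delta * kdot
    assert np.allclose(K.sum(axis=0), 1.0)  # columns still sum to one
    assert K.min() >= 0.0                   # nonnegativity preserved
```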
The compactness condition on $L_{0}:L^{1}\to L^{1}$ required for essential
boundedness of $\dot{k}$ can be addressed as follows. A criterion for $L_{0}$
to be compact on $L^{1}([0,1])$ is the following (see [12]): Given
$\varepsilon>0$ there exists $\beta>0$ such that for a.e. $y\in[0,1]$ and
$\gamma\in{\mathbb{R}}$ with $|\gamma|<\beta$,
$\int_{{\mathbb{R}}}\big{|}\tilde{k}(x+\gamma,y)-\tilde{k}(x,y)\big{|}dx<\varepsilon,$
where $\tilde{k}:\mathbb{R}\times[0,1]\to\mathbb{R}$ is defined by
$\tilde{k}(x,y)=\begin{cases}k_{0}(x,y)&x\in[0,1],\\\
0&\text{otherwise}.\end{cases}$
A class of kernels that satisfy this are essentially bounded kernels
$k_{0}:[0,1]\times[0,1]\rightarrow{\mathbb{R}}$ that are uniformly continuous
in the first coordinate. Such a class naturally arises in our dynamical
systems settings.
### 5.2 Maximally increasing the mixing rate
Let $\lambda_{0}\in{\mathbb{C}}$ denote a geometrically simple eigenvalue of
$L_{0}$ strictly inside the unit circle and $e$ and $\hat{e}$ denote the
corresponding eigenvectors of $L_{0}$ and $L_{0}^{*}$, respectively. Our
results concerning optimal rate of movement of $\lambda_{0}$ under system
perturbation work for any $\lambda_{0}$ as above, but eigenvalues of largest
magnitude inside the unit circle have the additional significance of
controlling the exponential rate of mixing. We therefore primarily focus on
these eigenvalues and in this section we consider again the linear response
problem for enhancing the rate of mixing, now providing explicit formulae for
optimal perturbations and the response.
Since we are again interested in kernel perturbations that will ensure that
the perturbed kernel $k_{\delta}$ is nonnegative, we consider the constraint
set $P_{l}$, as in Section 4.1, where $0<l<1$. The objective function of (22)
is linear and therefore, we only need to consider the optimization problem on
$V_{\ker}\cap S_{k_{0},l}\cap\partial B_{1}$. Thus, to obtain the perturbation
$\dot{k}$ that will enhance the mixing rate, we solve the following
optimization problem:
###### Problem B.
Given $l>0$, solve
$\displaystyle\min_{\dot{k}\in V_{\ker}\cap S_{k_{0},l}}$
$\displaystyle\big{\langle}\dot{k},E\big{\rangle}_{L^{2}([0,1]^{2},{\mathbb{R}})}$
(29) such that
$\displaystyle\|\dot{k}\|_{L^{2}([0,1]^{2},{\mathbb{R}})}^{2}-1=0,$ (30)
where $E$ is defined in (18).
###### Theorem 5.6.
Let $L_{0}:L^{2}([0,1],{\mathbb{C}})\rightarrow L^{2}([0,1],{\mathbb{C}})$ be
an integral operator with the stochastic kernel $k_{0}\in
L^{2}([0,1]^{2},{\mathbb{R}})$. Suppose that $L_{0}$ satisfies $(A1)$ of
Theorem 2.2 and that there is a $\Xi^{\prime}\subset\Xi(F_{l})$ with
$m(\Xi^{\prime})>0$, and $\int_{F_{l}^{y}}E(x,y)^{+}\ dx>0$ and
$\int_{F_{l}^{y}}E(x,y)^{-}\ dx>0$ for $y\in\Xi^{\prime}$. Then, the unique
solution to Problem B is
$\dot{k}(x,y)=\begin{cases}\frac{1}{\alpha}\left(\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}E(x,y)dx-E(x,y)\right)&(x,y)\in
F_{l}\\\ 0&\text{otherwise},\end{cases}$ (31)
where $E$ is given in (18) and $\alpha>0$ is selected so that
$\|\dot{k}\|_{L^{2}([0,1]^{2},{\mathbb{R}})}=1$. Furthermore, if $k_{0}\in
L^{\infty}([0,1]^{2},{\mathbb{R}})$ then $\dot{k}\in
L^{\infty}([0,1]^{2},{\mathbb{R}})$.
###### Proof.
See Appendix B.
If $\lambda_{0}$ is real, the optimal kernel has a simpler form:
###### Corollary 5.7.
If $\lambda_{0}$ is real and $k_{0}\geq l$, then the solution to Problem B is
$\dot{k}(x,y)=sgn(\lambda_{0})\frac{e(y)}{\|e\|_{2}}\left(\frac{\langle\hat{e},\mathbf{1}\rangle_{L^{2}([0,1],{\mathbb{R}})}\mathbf{1}-\hat{e}(x)}{\|\langle\hat{e},\mathbf{1}\rangle_{L^{2}([0,1],{\mathbb{R}})}\mathbf{1}-\hat{e}\|_{2}}\right).$
(32)
###### Proof.
We have $E(x,y)=\lambda_{0}\hat{e}(x)e(y)$; thus, the solution to the
optimization problem (29)-(30) is
$\dot{k}(x,y)=(\lambda_{0}/\alpha)\left(\int_{0}^{1}\hat{e}(z)dz-\hat{e}(x)\right)e(y),$
where $\alpha>0$ is the normalization constant such that
$\|\dot{k}\|_{L^{2}([0,1]^{2},{\mathbb{R}})}^{2}=1$.
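As a hedged numerical illustration of Corollary 5.7 (all choices below are ours: a reversible chain is used as a stand-in for $k_{0}$ precisely so that the spectrum, and hence $\lambda_{0}$, is real), one can assemble the discrete analogue of (32) and verify that the second-largest eigenvalue magnitude decreases to first order:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
S = rng.random((n, n))
S = S + S.T                       # symmetric weights -> reversible chain,
M = S / S.sum(axis=0)             # column-stochastic with real spectrum

lam, V = np.linalg.eig(M)
order = np.argsort(-np.abs(lam))
lam0 = np.real(lam[order[1]])     # second-largest-magnitude eigenvalue (real)
e = np.real(V[:, order[1]])       # eigenvector of M for lam0

lamT, W = np.linalg.eig(M.T)
ehat = np.real(W[:, np.argmin(np.abs(lamT - lam0))])  # adjoint eigenvector
if ehat @ e < 0:                  # fix relative sign so <ehat, e> > 0
    ehat = -ehat

# Discrete analogue of (32): kdot(x,y) ∝ sgn(lam0) e(y) (mean(ehat) - ehat(x))
u = ehat.mean() - ehat
kdot = np.sign(lam0) * np.outer(u, e)
kdot /= np.linalg.norm(kdot)
assert np.allclose(kdot.sum(axis=0), 0.0)   # columns still sum to zero

# Track the continuation of lam0 under M + delta*kdot; its magnitude shrinks.
delta = 1e-3
lam_d = np.linalg.eigvals(M + delta * kdot)
lam0_d = lam_d[np.argmin(np.abs(lam_d - lam0))]
assert np.abs(lam0_d) < np.abs(lam0)
```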
## 6 Linear response for map perturbations
In this section we consider random dynamics governed by the composition of a
deterministic map $T_{\delta}$, $\delta\in[0,\bar{\delta})$, and additive
i.i.d. perturbations, or “additive noise”. We will assume that the noise is
distributed according to a certain Lipschitz kernel $\rho$ and impose a
reflecting boundary condition that ensures that the dynamics remain in the
interval $[0,1]$. More precisely, we consider a random dynamical system whose
trajectories are given by
$x_{n+1}=T_{\delta}(x_{n})\ \hat{+}\ \omega_{n},$ (33)
where $\hat{+}$ is the “boundary reflecting” sum, defined by
$a\hat{+}b:=\pi(a+b)$, and $\pi:\mathbb{R}\rightarrow[0,1]$ is the piecewise
linear map $\pi(x)=\min_{i\in\mathbb{Z}}|x-2i|$. We assume throughout that
* (T1)
$T_{\delta}:[0,1]\rightarrow[0,1]$ is a Borel-measurable map for each
$\delta\in[0,\bar{\delta})$,
* (T2)
$\omega_{n}$ is an i.i.d. process distributed according to a probability
density $\rho\in Lip(\mathbb{R})$, supported on $[-1,1]$ with Lipschitz
constant $K$.
### 6.1 Expressing the map perturbation as a kernel perturbation
In this subsection we describe precisely the kernel of the transfer operator
of the system (33). Associated with the process (33) is an integral-type
transfer operator $L_{\delta}$, which we will derive (following the method of
§10.5 in [35]). Noting that $|\pi^{\prime}(z)|=1$ for all $z\in\mathbb{R}$,
the Perron-Frobenius operator $P_{\pi}:L^{1}(\mathbb{R})\rightarrow
L^{1}([0,1])$ associated to the map $\pi$ is given by
$P_{\pi}f(x)=\sum_{z\in\pi^{-1}(x)}f(z)=\sum_{i\in
2\mathbb{Z}}(f(i+x)+f(i-x)).$ (34)
For $b\in{\mathbb{R}}$ consider the shift operator $\tau_{b}$ defined by
$(\tau_{b}g)(y):=g(y+b)$ for $g\in Lip(\mathbb{R})$. For the process (33),
suppose that $x_{n}$ has the distribution $f_{n}:[0,1]\to\mathbb{R}^{+}$ (i.e.
$f_{n}\in L^{1},\ f_{n}\geq 0$ and $\int f_{n}\ dm=1$). We note that
$T_{\delta}(x_{n})$ and $\omega_{n}$ are independent and thus the joint
density of $(x_{n},\omega_{n})\in[0,1]\times[-1,1]$ is $f_{n}\cdot\rho$. Let
$h:[0,1]\rightarrow{\mathbb{R}}$ be a bounded, measurable function and let
$\mathbb{E}$ denote expectation with respect to Lebesgue measure; we then
compute
$\displaystyle\mathbb{E}(h(x_{n+1}))$
$\displaystyle=\int_{-\infty}^{\infty}\int_{0}^{1}h(\pi(T_{\delta}(y)+z))f_{n}(y)\rho(z)dydz$
$\displaystyle=\int_{0}^{1}\int_{-\infty}^{\infty}h(\pi(z^{\prime}))f_{n}(y)\rho(z^{\prime}-T_{\delta}(y))dz^{\prime}dy$
$\displaystyle=\int_{0}^{1}f_{n}(y)\int_{-\infty}^{\infty}h(\pi(z^{\prime}))(\tau_{-T_{\delta}(y)}\rho)(z^{\prime})dz^{\prime}dy$
$\displaystyle=\int_{0}^{1}f_{n}(y)\int_{0}^{1}h(z^{\prime})(P_{\pi}\tau_{-T_{\delta}(y)}\rho)(z^{\prime})dz^{\prime}dy,$
where the last equality follows from the duality of the Perron-Frobenius and
the Koopman operators for $\pi$. Since
$\mathbb{E}(h(x_{n+1}))=\int_{0}^{1}h(x)f_{n+1}(x)dx$, and $h$ is arbitrary,
the map $f_{n}\mapsto f_{n+1}$ is given by
$f_{n+1}(z^{\prime})=\int_{0}^{1}(P_{\pi}\tau_{-T_{\delta}(y)}\rho)(z^{\prime})f_{n}(y)dy$
for all $z^{\prime}\in[0,1]$. Thus, for $\delta\in[0,\bar{\delta})$ the
integral operator $L_{\delta}:L^{2}([0,1])\rightarrow L^{2}([0,1])$ associated
to the process (33) is given by
$L_{\delta}f(x)=\int k_{\delta}(x,y)f(y)dy,$ (35)
where
$k_{\delta}(x,y)=(P_{\pi}\tau_{-T_{\delta}(y)}\rho)(x)$ (36)
and $x,y\in[0,1]$.
###### Lemma 6.1.
The kernel (36) is a stochastic kernel in $L^{\infty}([0,1]^{2})$.
###### Proof.
Stochasticity and nonnegativity of $k_{\delta}$ follow from stochasticity and
nonnegativity of $\rho$ and the fact that Perron-Frobenius operators preserve
these properties. Essential boundedness of $k_{\delta}$ follows from the facts
that $\rho$ is Lipschitz (thus essentially bounded), $\tau$ is a shift, and
$P_{\pi}$ is constructed from a finite sum because $\rho$ has compact support.
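Lemma 6.1 is easy to probe numerically. The sketch below uses illustrative choices of ours (a doubling map for $T_{0}$ and the Lipschitz bump $\rho(u)=\tfrac{1}{2}(1+\cos\pi u)$ on $[-1,1]$); it assembles the kernel (36) through the finite sum (34) — only $i\in\{-2,0,2\}$ contribute, since $\rho$ has support $[-1,1]$ and $x,T_{0}(y)\in[0,1]$ — and checks that each column is nonnegative and integrates to 1:

```python
import numpy as np

def rho(u):
    # Lipschitz noise density supported on [-1, 1], integral 1 (our choice)
    return np.where(np.abs(u) <= 1.0, 0.5 * (1.0 + np.cos(np.pi * u)), 0.0)

def kernel_column(xs, y, T):
    # k(x, y) = (P_pi tau_{-T(y)} rho)(x) via (34)
    c = T(y)
    k = np.zeros_like(xs)
    for i in (-2.0, 0.0, 2.0):
        k += rho(i + xs - c) + rho(i - xs - c)
    return k

T0 = lambda y: (2.0 * y) % 1.0          # illustrative map (our choice)
xs = np.linspace(0.0, 1.0, 2001)
h = xs[1] - xs[0]
for y in (0.1, 0.37, 0.9):
    col = kernel_column(xs, y, T0)
    integral = h * (col.sum() - 0.5 * (col[0] + col[-1]))  # trapezoid rule
    assert col.min() >= 0.0                  # nonnegativity
    assert abs(integral - 1.0) < 1e-4        # each column integrates to 1
```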
###### Proposition 6.2.
Assume that $k_{\delta}$ arising from the system $(T_{\delta},\rho)$ is given
by (36). Suppose that the family of interval maps
$\\{T_{\delta}\\}_{\delta\in[0,\bar{\delta})}$ satisfies
$T_{\delta}=T_{0}+\delta\cdot\dot{T}+t_{\delta},$
where $\dot{T},t_{\delta}\in L^{2}$ and $\|t_{\delta}\|_{2}=o(\delta)$. Then
$k_{\delta}=k_{0}+\delta\cdot\dot{k}+r_{\delta}$
where $\dot{k}\in L^{2}([0,1]^{2})$ is given by
$\dot{k}(x,y)=-\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\cdot\dot{T}(y)$
(37)
and $r_{\delta}\in L^{2}([0,1]^{2})$ satisfies
$\|r_{\delta}\|_{L^{2}([0,1]^{2})}=o(\delta)$.
If additionally, $d\rho/dx$ is Lipschitz and the derivative of the map
$\delta\mapsto T_{\delta}$ with respect to $\delta$ varies continuously in
$L^{2}$ in a neighborhood of $\delta=0$, then $\delta\mapsto k_{\delta}$ has a
continuous derivative with respect to $\delta$ in a neighborhood of
$\delta=0$.
###### Proof.
We show that
$\|k_{\delta}(x,y)-k_{0}(x,y)-\delta\cdot\dot{k}(x,y)\|_{L^{2}([0,1]^{2})}=o(\delta)$,
where $\dot{k}$ is as in (37). We have
$\displaystyle\left\|k_{\delta}(x,y)-k_{0}(x,y)-\delta\cdot\dot{k}(x,y)\right\|_{L^{2}([0,1]^{2})}$
$\displaystyle\leq$
$\displaystyle\left\|(P_{\pi}\tau_{-T_{\delta}(y)}\rho)(x)-(P_{\pi}\tau_{-(T_{0}(y)+\delta\cdot\dot{T}(y))}\rho)(x)\right\|_{L^{2}([0,1]^{2})}$
$\displaystyle+\left\|(P_{\pi}\tau_{-(T_{0}(y)+\delta\cdot\dot{T}(y))}\rho)(x)-(P_{\pi}\tau_{-T_{0}(y)}\rho)(x)-\delta\left(-\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\cdot\dot{T}(y)\right)\right\|_{L^{2}([0,1]^{2})}.$ (38)
We begin by showing that the first term on the right hand side of (38) is
$o(\delta)$. Since $\rho$ is Lipschitz with constant $K$, one has
$\big{|}(\tau_{-(T_{\delta}(y))}\rho)(x)-(\tau_{-(T_{0}(y)+\delta\cdot\dot{T}(y))}\rho)(x)\big{|}=\big{|}\rho(x-T_{\delta}(y))-\rho(x-T_{0}(y)-\delta\cdot\dot{T}(y))\big{|}\leq
K|t_{\delta}(y)|.$ (39)
Because the support of
$\tau_{-(T_{\delta}(y))}\rho-\tau_{-(T_{0}(y)+\delta\cdot\dot{T}(y))}\rho$ is
contained in 2 intervals, each of length 2, by (39) and Lemma C.1, we
therefore see that
$\left\|(P_{\pi}\tau_{-T_{\delta}(y)}\rho)(x)-(P_{\pi}\tau_{-(T_{0}(y)+\delta\cdot\dot{T}(y))}\rho)(x)\right\|_{L^{2}([0,1]^{2})}\leq
6K\|t_{\delta}\|_{L^{2}}=o(\delta).$
Next we show that the second term on the right hand side of (38) is
$o(\delta)$. Using the definition of the derivative and the fact that $\rho$
is differentiable a.e. we see that
$\lim_{\delta\rightarrow 0}D(\delta):=\lim_{\delta\rightarrow
0}\left[\frac{\rho(x-T_{0}(y)-\delta\cdot\dot{T}(y))-\rho(x-T_{0}(y))}{\delta}-\left(-\frac{d\rho}{dx}(x-T_{0}(y))\dot{T}(y)\right)\right]=0$
(40)
for a.e. $x,y$. Since
$\bigg{|}\frac{\rho(x-T_{0}(y)-\delta\cdot\dot{T}(y))-\rho(x-T_{0}(y))}{\delta}\bigg{|}\leq
K|\dot{T}(y)|$, by dominated convergence the limit (40) also converges in
$L^{2}.$ Hence, applying Lemma C.1 to the second term on the right hand side
of (38), noting that $D(\delta)$ in (40) is square-integrable and supported
in at most 3 intervals of length at most 2, we obtain
$\displaystyle{\left\|(P_{\pi}\tau_{-(T_{0}(y)+\delta\cdot\dot{T}(y))}\rho)(x)-(P_{\pi}\tau_{-T_{0}(y)}\rho)(x)-\delta\left(-\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\cdot\dot{T}(y)\right)\right\|_{L^{2}([0,1]^{2})}}$
$\displaystyle\leq$ $\displaystyle 9\delta\|D(\delta)\|_{L^{2}([0,1]^{2})}=o(\delta).$
Regarding the final statement, suppose that $\delta\mapsto T_{\delta}$ has a
continuous derivative with respect to $\delta$ in a neighborhood of
$\delta=0$. This implies that $\dot{T}$ exists and varies continuously on a
small interval $[0,\delta^{*}]$, with $0<\delta^{*}\leq\bar{\delta}$. Denote
the derivative $dT_{\delta}/d\delta$ at $\delta$ by $\dot{T}_{\delta}$, and
similarly for $\dot{k}$. One has
$\displaystyle\|\dot{k}_{\delta}-\dot{k}_{0}\|_{L^{2}([0,1]^{2})}$
$\displaystyle=$
$\displaystyle\left\|\left(P_{\pi}\left(\tau_{-T_{\delta}(y)}\frac{d\rho}{dx}\right)\right)(x)\cdot\dot{T}_{\delta}(y)-\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\cdot\dot{T}_{0}(y)\right\|_{L^{2}([0,1]^{2})}$
$\displaystyle\leq$
$\displaystyle\left\|\left(P_{\pi}\left(\tau_{-T_{\delta}(y)}\frac{d\rho}{dx}\right)\right)(x)\cdot(\dot{T}_{\delta}(y)-\dot{T}_{0}(y))\right\|_{L^{2}([0,1]^{2})}$
$\displaystyle\quad+\left\|\left[\left(P_{\pi}\left(\tau_{-T_{\delta}(y)}\frac{d\rho}{dx}\right)\right)(x)-\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\right]\cdot\dot{T}_{0}(y)\right\|_{L^{2}([0,1]^{2})}$
$\displaystyle\leq$ $\displaystyle
3\|d\rho/dx\|_{2}\|\dot{T}_{\delta}-\dot{T}_{0}\|_{2}+6\,\mathrm{Lip}(d\rho/dx)\|\delta\cdot\dot{T}_{0}+r_{\delta}\|_{2}\|\dot{T}_{0}\|_{2},$
where the final inequality follows from Lemma C.1 applied to each term in the
previous line, noting that $\rho$ is supported in a single interval of length
2. The first term in the final inequality goes to zero as $\delta\to 0$ by
continuity of $\dot{T}$, and the second term goes to zero as $\delta\to 0$
since $\|r_{\delta}\|_{2}\to 0$.
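The formula (37) can be checked against a finite difference in $\delta$. In the sketch below, the noise density $\rho(u)=\tfrac{1}{2}(1+\cos\pi u)$ and the choices of $T_{0}$ and $\dot{T}$ are ours (illustrative, not from the paper); since this $\rho$ is $C^{1}$, the pointwise finite-difference error is $O(\delta)$:

```python
import numpy as np

def rho(u):
    return np.where(np.abs(u) <= 1.0, 0.5 * (1.0 + np.cos(np.pi * u)), 0.0)

def drho(u):  # d(rho)/du, continuous since sin(+-pi) = 0
    return np.where(np.abs(u) <= 1.0, -0.5 * np.pi * np.sin(np.pi * u), 0.0)

def push(g, xs, c):
    # (P_pi tau_{-c} g)(x) via (34); i in {-2, 0, 2} suffices for support [-1,1]
    out = np.zeros_like(xs)
    for i in (-2.0, 0.0, 2.0):
        out += g(i + xs - c) + g(i - xs - c)
    return out

T0 = lambda y: (2.0 * y) % 1.0               # illustrative base map (ours)
Tdot = lambda y: np.sin(2.0 * np.pi * y)     # illustrative direction (ours)

xs = np.linspace(0.0, 1.0, 201)
delta = 1e-5
err = 0.0
for y in np.linspace(0.05, 0.95, 7):
    k0 = push(rho, xs, T0(y))
    kd = push(rho, xs, T0(y) + delta * Tdot(y))   # T_delta = T0 + delta*Tdot
    fd = (kd - k0) / delta                        # finite-difference derivative
    kdot = -push(drho, xs, T0(y)) * Tdot(y)       # formula (37)
    err = max(err, np.max(np.abs(fd - kdot)))
assert err < 1e-3
```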
### 6.2 A formula for the linear response of the invariant measure and
continuity with respect to map perturbations
By considering the kernel form of map perturbations, we can apply Corollary
3.5 to obtain the following.
###### Proposition 6.3.
Let $L_{\delta}:L^{2}\rightarrow L^{2}$, $\delta\in[0,\bar{\delta})$, be the
integral operators in (35) with the kernels $k_{\delta}$ as in (36). Suppose
that $L_{0}$ satisfies $(A1)$ of Theorem 2.2. Then the kernel $\dot{k}$ in
(37) is in $V_{\ker}$ and
$\lim_{\delta\rightarrow
0}\frac{f_{\delta}-f_{0}}{\delta}=-(\text{Id}-L_{0})^{-1}\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\dot{T}(y)f_{0}(y)dy,$
with convergence in $L^{2}.$
###### Proof.
The result is a direct application of Corollary 3.5; we verify its
assumptions. From Lemma 6.1, $k_{\delta}\in L^{2}([0,1]^{2})$ is a stochastic
kernel and so $L_{\delta}$ is an integral-preserving compact operator. From
Proposition 6.2, $k_{\delta}$ has the form (9). Thus, we can apply Corollary
3.5 to obtain the result.
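Proposition 6.3 can likewise be probed numerically: discretise the operator (35) on a midpoint grid, compute the stationary density before and after a small map perturbation, and compare the finite difference with the response formula. All concrete choices below (map, perturbation, grid size, tolerances) are ours; the near-singular system $(\text{Id}-L_{0})\dot{f}=\int\dot{k}(\cdot,y)f_{0}(y)\,dy$ is solved in least squares with an appended zero-integral constraint, since the response has zero mean:

```python
import numpy as np

def rho(u):
    return np.where(np.abs(u) <= 1.0, 0.5 * (1.0 + np.cos(np.pi * u)), 0.0)

def drho(u):
    return np.where(np.abs(u) <= 1.0, -0.5 * np.pi * np.sin(np.pi * u), 0.0)

def push(g, xs, c):
    # (P_pi tau_{-c} g)(x) as in (34); i in {-2, 0, 2} suffices
    out = np.zeros_like(xs)
    for i in (-2.0, 0.0, 2.0):
        out += g(i + xs - c) + g(i - xs - c)
    return out

n = 200
h = 1.0 / n
xs = (np.arange(n) + 0.5) * h                 # midpoint grid on [0, 1]
T0 = lambda y: (2.0 * y) % 1.0                # illustrative map (ours)
Tdot = lambda y: np.sin(2.0 * np.pi * y)      # illustrative direction (ours)

def transfer_matrix(T):
    # midpoint discretisation of the operator (35) with kernel (36)
    return h * np.column_stack([push(rho, xs, T(y)) for y in xs])

def stationary(A):
    w, V = np.linalg.eig(A)
    f = np.real(V[:, np.argmax(np.real(w))])
    return f / (f.sum() * h)                  # normalise to integral 1

A0 = transfer_matrix(T0)
f0 = stationary(A0)

# Right-hand side of the response formula: integral of kdot(x,y) f0(y) dy,
# with kdot from (37).
rhs = h * np.column_stack(
    [-push(drho, xs, T0(y)) * Tdot(y) for y in xs]) @ f0

# Solve (Id - L0) fdot = rhs with a zero-integral constraint appended.
B = np.vstack([np.eye(n) - A0, h * np.ones((1, n))])
fdot = np.linalg.lstsq(B, np.append(rhs, 0.0), rcond=None)[0]

# Finite-difference check of Proposition 6.3.
delta = 1e-4
fd = (stationary(transfer_matrix(lambda y: T0(y) + delta * Tdot(y))) - f0) / delta
assert np.linalg.norm(fd - fdot) < 0.1 * np.linalg.norm(fdot) + 1e-3
```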
###### Remark 6.4.
If $T$ is covering (we say $T$ is covering if for each small open interval
$I\subseteq[0,1]$ there is $n=n(I)$ such that $T^{n}(I)=[0,1]$) and $\rho$ is
strictly positive in a neighbourhood of zero, one can show the corresponding
transfer operator $L_{0}$ satisfies assumption $(A1)$ of Theorem 2.2, using
arguments similar to e.g. [49] Proposition 8.1, [14] Lemmas 3 and 10, or [20],
Lemma 41. Let $f\in L^{1}$ have zero average: $\int_{[0,1]}f=0$. If $f$ is $0$
almost everywhere, $L_{0}^{n}(f)=0$ and we are done. Otherwise, given
$\epsilon>0$, we can find an $f_{1}$ such that $\|f-f_{1}\|_{1}<\epsilon$ and
$f_{1}$ is positive in some small interval $I\subset[0,1]$. Since $\rho$ is
positive in a neighbourhood of zero, $\mathrm{supp}(L_{0}(f_{1}^{+}))\supset$
T(I)$. By the covering condition there is some $n^{\prime}\in{\mathbb{N}}$
such that $\mathrm{supp}(L_{0}^{n^{\prime}}(f_{1}^{+}))=[0,1]$. It is then
standard to deduce that there is an $n_{0}\geq n^{\prime}$ such that
$\|L_{0}^{n}(f_{1})\|_{1}<\epsilon$ for $n\geq n_{0}$. Since the transfer
operator contracts the $L^{1}$ norm, $\|L_{0}^{n}f\|_{1}\leq 2\epsilon$
for $n\geq n_{0}$ and since $\epsilon$ was arbitrary, this implies that
$L_{0}$ satisfies $(A1)$.
Let the linear response $\widehat{R}:L^{2}\rightarrow L^{2}$ of the invariant
density be defined as
$\widehat{R}(\dot{T}):=-(\text{Id}-L_{0})^{-1}\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\dot{T}(y)f_{0}(y)dy.$
(41)
###### Lemma 6.5.
The function $\widehat{R}:L^{2}\rightarrow L^{2}$ is continuous.
###### Proof.
We have
$\widehat{R}(\dot{T}_{1})-\widehat{R}(\dot{T}_{2})=-(\text{Id}-L_{0})^{-1}\int_{0}^{1}\tilde{k}(x,y)\left(\dot{T}_{1}(y)-\dot{T}_{2}(y)\right)dy,$
where
$\tilde{k}(x,y):=\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)f_{0}(y)$.
Since $\frac{d\rho}{dx}\in L^{\infty}$, we have
$\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\in
L^{\infty}([0,1]^{2})$. From inequality (7), we then have $f_{0}\in
L^{\infty}$ and so $\tilde{k}\in L^{\infty}([0,1]^{2})$. We finally have
$\|\widehat{R}(\dot{T}_{1})-\widehat{R}(\dot{T}_{2})\|_{2}\leq
l\|(\text{Id}-L_{0})^{-1}\|_{V\rightarrow
V}\|\tilde{k}\|_{L^{2}([0,1]^{2})}\cdot\|\dot{T}_{1}-\dot{T}_{2}\|_{2}.$
### 6.3 A formula for the linear response of the dominant eigenvalues and
continuity with respect to map perturbations
We are also able to express the linear response of the dominant eigenvalues as
a function of the perturbing map $\dot{T}$. Define
$H(y)=-\bar{e}(y)\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\hat{e}(x)dx.$
###### Proposition 6.6.
Let $L_{\delta}:L^{2}([0,1],{\mathbb{C}})\rightarrow
L^{2}([0,1],{\mathbb{C}})$, $\delta\in[0,\bar{\delta})$, be integral operators
generated by the kernels $k_{\delta}$ as in (36), assume that $d\rho/dx$ is
Lipschitz and $\delta\mapsto T_{\delta}$ is $C^{1}$. Let $\lambda_{\delta}$ be
an eigenvalue of $L_{\delta}$ with second largest magnitude strictly inside
the unit disk. Suppose that $L_{0}$ satisfies $(A1)$ of Theorem 2.2 and
$\lambda_{0}$ is geometrically simple. Then
$\frac{d\lambda_{\delta}}{d\delta}\bigg{|}_{\delta=0}=\langle
H,\dot{T}\rangle_{L^{2}([0,1],\mathbb{C})},$ (42)
where $e$ is the eigenvector of $L_{0}$ associated to the eigenvalue
$\lambda_{0}$ and $\hat{e}$ is the eigenvector of $L_{0}^{*}$ associated to
the eigenvalue $\lambda_{0}$.
###### Proof.
Since $k_{\delta}\in L^{2}([0,1]^{2},{\mathbb{R}})$,
$L_{\delta}:L^{2}([0,1],{\mathbb{C}})\rightarrow L^{2}([0,1],{\mathbb{C}})$ is
compact. From Lemma 6.1 we have that $k_{\delta}$ is a stochastic kernel and
so $L_{\delta}$ preserves the integral (i.e. it satisfies (3)). By Proposition
6.2 the kernel $k_{\delta}$ is in the form (9) and the map $\delta\mapsto
k_{\delta}$ is $C^{1}$. By Lemma 3.4 we see that $\delta\mapsto L_{\delta}$ is
$C^{1}$, where the derivative operator $\dot{L}$ is the integral operator with
the kernel $\dot{k}$. Using the assumption that $L_{0}$ is mixing and
$\lambda_{0}$ is geometrically simple, we apply Proposition 2.6 to obtain
$\frac{d\lambda_{\delta}}{d\delta}\big{|}_{\delta=0}=\langle\hat{e},\dot{L}e\rangle_{L^{2}([0,1],\mathbb{C})}$.
Finally, we compute
$\displaystyle\langle\hat{e},\dot{L}e\rangle_{L^{2}([0,1],\mathbb{C})}$
$\displaystyle=\int_{0}^{1}\hat{e}(x)\overline{\int_{0}^{1}\dot{k}(x,y)e(y)dy}dx$
$\displaystyle=\int_{0}^{1}\int_{0}^{1}\hat{e}(x)\dot{k}(x,y)\bar{e}(y)dxdy$
$\displaystyle=-\int_{0}^{1}\bar{e}(y)\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\hat{e}(x)dx\
\dot{T}(y)dy$ $\displaystyle=\langle
H,\dot{T}\rangle_{L^{2}([0,1],\mathbb{C})}.$
From (42), the linear response of the dominant eigenvalues is continuous with
respect to map perturbations.
###### Lemma 6.7.
The eigenvalue response function $\check{R}:L^{2}\to\mathbb{C}$ given by
$\check{R}(\dot{T})=\langle H,\dot{T}\rangle$ is continuous.
###### Proof.
This follows from Cauchy-Schwarz and the fact that $H\in
L^{2}([0,1],{\mathbb{C}})$; the latter claim follows from the fact that
$\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\in
L^{\infty}([0,1]^{2},{\mathbb{R}})$ (see proof of Lemma 6.5) and that
$e,\hat{e}\in L^{\infty}([0,1],{\mathbb{C}})$ (which follows from (7) and the
fact that $k_{0}\in L^{\infty}([0,1]^{2},{\mathbb{R}})$, see Lemma 6.1).
## 7 Optimal linear response for map perturbations
In this section we derive formulae for the map perturbations that maximise our
two types of linear response. We begin by formalising the set of allowable map
perturbations then state the formulae.
### 7.1 The feasible set of perturbations
Before we formulate the optimization problem, we note that in this setting, we
require some restriction on the space of allowable perturbations to $T_{0}$ if
we are to interpret $T_{0}+\delta\dot{T}$ as a map of the unit interval for
some $\delta$ strictly greater than 0 (a non-infinitesimal perturbation). With
this in mind, let $\ell>0$ and $\widetilde{F}_{\ell}:=\\{x\in[0,1]:\ell\leq
T_{0}(x)\leq 1-\ell\\}$; it will turn out that we obtain for free that
$\dot{T}\in L^{\infty}$. Note that in principle, $\ell>0$ can be taken as
small as one likes, and indeed if one wishes to consider only infinitesimal
perturbations $\dot{T}$ then one may set
$\widetilde{F}_{\ell}=\widetilde{F}_{0}=[0,1]$. Of course if $T:S^{1}\to
S^{1}$ then one may use $\widetilde{F}_{\ell}=\widetilde{F}_{0}=[0,1]$ even
for non-infinitesimal perturbations. Recalling that in Proposition 6.2 we are
considering $L^{2}$ perturbations $\dot{T}$ of the map $T_{0}$, we define
$S_{T_{0},\ell}:=\\{T\in
L^{2}:\text{supp}(T)\subseteq\widetilde{F}_{\ell}\\}.$ (43)
###### Lemma 7.1.
$S_{T_{0},\ell}$ is a closed subspace of $L^{2}$.
###### Proof.
It is clear that $S_{T_{0},\ell}$ is a subspace. To show it is closed, let
$\\{f_{n}\\}\subset S_{T_{0},\ell}$ and suppose that
$f_{n}\rightarrow_{L^{2}}f\in L^{2}$. Further, suppose that
$\widetilde{F}_{\ell}$ is not $[0,1]$ up to measure zero; otherwise
$S_{T_{0},\ell}=L^{2}$, which is closed. Then, we have
$\|f_{n}-f\|_{2}^{2}=\int_{\widetilde{F}_{\ell}}(f_{n}(x)-f(x))^{2}dx+\int_{\widetilde{F}_{\ell}^{c}}f(x)^{2}\
dx\rightarrow 0.$
If $\int_{\widetilde{F}_{\ell}^{c}}f(x)^{2}dx>0$, we obtain a contradiction
since $\int_{\widetilde{F}_{\ell}}(f_{n}(x)-f(x))^{2}dx\geq 0$; thus,
$\int_{\widetilde{F}_{\ell}^{c}}f(x)^{2}dx=0$ and so $f=0$ a.e. on
$\widetilde{F}_{\ell}^{c}$. Hence, $S_{T_{0},\ell}$ is closed.
For the remainder of this section, the set of allowable perturbations that we
consider is
$P_{\ell}:=S_{T_{0},\ell}\cap B_{1},$ (44)
where $B_{1}$ is the unit ball in $L^{2}$. Since $S_{T_{0},\ell}$ is a closed
subspace of $L^{2}$, it is itself a Hilbert space and so $P_{\ell}$ is
strictly convex. The following lemma concerns the existence of a perturbation
$\dot{T}$ for which our objectives will be nonzero; that is, our objective
$\mathcal{J}$ is not uniformly vanishing. Denote
$\mathcal{P}(x,y):=P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)(x)$
and let
$\mathcal{J}(\dot{T}):=\int_{\Xi(\widetilde{F}_{\ell})}\int_{0}^{1}\mathcal{P}(x,y)\dot{T}(y)\mathcal{E}(x,y)\
dx\ dy$
be our objective. In our first specific objective (optimising response of
expectations) we will insert
$\mathcal{E}(x,y)=((\text{Id}-L_{0}^{*})^{-1}c)(x)f_{0}(y)$ and in our second
specific objective (optimising mixing) we will insert
$\mathcal{E}(x,y)=E(x,y)$ from (18).
###### Lemma 7.2.
Assume that there is $F^{\prime}\subset\widetilde{F}_{\ell}$ such that
$m(F^{\prime})>0$ and
$\mathcal{E}(\cdot,y)\notin\mathrm{span}\\{\mathcal{P}(\cdot,y)\\}^{\perp}$
for all $y\in F^{\prime}$. Then there is a $\dot{T}\in P_{\ell}$ such that
$\mathcal{J}(\dot{T})>0$.
###### Proof.
Because
$\mathcal{J}(\dot{T})=\int_{\Xi(\widetilde{F}_{\ell})}\dot{T}(y)\left(\int_{0}^{1}\mathcal{P}(x,y)\mathcal{E}(x,y)\
dx\right)\ dy,$
we may set $\dot{T}(y)=\int_{0}^{1}\mathcal{P}(x,y)\mathcal{E}(x,y)\ dx$ for
$y\in F^{\prime}$ and $\dot{T}(y)=0$ otherwise to obtain
$\mathcal{J}(\dot{T})>0$. Trivial scaling yields $\dot{T}\in B_{1}$.
We expect the hypotheses of Lemma 7.2 to be satisfied “generically”.
### 7.2 Explicit formula for the optimal map perturbation that maximally
increases the expectation of an observable
In this section we consider the problem of finding the optimal map
perturbation that maximizes the expectation of some observable $c\in L^{2}$.
We first present a result that ensures a unique solution exists and then
derive an explicit expression for the optimal perturbation.
We begin by noting that $\widehat{R}(\dot{T})\in V$; this follows from the
fact that
$\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)f_{0}(y)\in
V_{\ker}$ (since $\dot{k}\in V_{\ker}$, see Proposition 6.3) and therefore
$\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)f_{0}(y)g(y)dy\in
V$ for $g\in L^{2}$ (see Lemma 3.2). Hence, we only need to consider $c\in$
span$\\{f_{0}\\}^{\perp}$ (see the discussion at the end of Section 4.2).
###### Proposition 7.3.
Let $c\in$ span$\\{f_{0}\\}^{\perp}$ and $P_{\ell}$ be the set in (44). Assume
that the function
$\mathcal{J}(\dot{T}):=\big{\langle}c,\widehat{R}(\dot{T})\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})}$
is not uniformly vanishing on $P_{\ell}$. Then the optimisation problem
$\big{\langle}c,\widehat{R}(\dot{T})\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})}=\max_{\dot{h}\in
P_{\ell}}\big{\langle}c,\widehat{R}(\dot{h})\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})},$
(45)
where $\widehat{R}$ is as in (41), has a unique solution $\dot{T}\in L^{2}$.
###### Proof.
Let $\mathcal{H}=L^{2}$, $P=P_{\ell}$ and $\mathcal{J}(\dot{h})=\langle
c,\widehat{R}(\dot{h})\rangle_{L^{2}([0,1],{\mathbb{R}})}$. Using Lemma 7.1 we
note that $P_{\ell}$ is closed, as well as bounded, strictly convex and that
it contains the zero element of $\mathcal{H}$. From Lemma 6.5, it follows that
$\langle c,\widehat{R}(\dot{h})\rangle_{L^{2}([0,1],{\mathbb{R}})}$ is
continuous as a function of $\dot{h}$; note that it is also linear in
$\dot{h}$. By hypothesis, $\mathcal{J}$ is not uniformly vanishing on
$P_{\ell}$. We can therefore apply Propositions 4.1 and 4.3 to conclude that
(45) has a unique solution.
Before we present the explicit formula for the optimal solution, we will
reformulate the optimization problem (45) to simplify the analysis. We first
note that since the objective function in (45) is linear in $\dot{T}$, the
maximum will occur on $S_{T_{0},\ell}\cap\partial B_{1}$. Combining this with
the fact that we only need $c\in$ span$\\{f_{0}\\}^{\perp}$, we consider the
following reformulation of (45):
###### Problem C.
Given $\ell\geq 0$ and $c\in$ span$\\{f_{0}\\}^{\perp}$ solve
$\displaystyle\min_{\dot{T}\in S_{T_{0},\ell}}$
$\displaystyle-\big{\langle}c,\widehat{R}(\dot{T})\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})}$
(46) subject to $\displaystyle\|\dot{T}\|^{2}_{2}-1=0.$ (47)
###### Theorem 7.4.
Suppose the transfer operator $L_{0}$ associated with the system
$(T_{0},\rho)$ has a kernel $k_{0}$ as in (36), which satisfies $(A1)$ of
Theorem 2.2, and there is a $F^{\prime}\subset\widetilde{F}_{\ell}$ such that
$m(F^{\prime})>0$, and $f_{0}(y)>0$ and
$(\text{Id}-L_{0}^{*})^{-1}c\notin\mathrm{span}\\{\mathcal{P}(\cdot,y)\\}^{\perp}$
for all $y\in F^{\prime}$. Let $\mathcal{G}:L^{2}\rightarrow L^{2}$ be defined
as
$\mathcal{G}f(y):=\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)f(x)dx.$
(48)
Then, the unique solution to Problem C is
$\dot{T}(y)=\begin{cases}-f_{0}(y)\mathcal{G}((\text{Id}-L_{0}^{*})^{-1}c)(y)/\|f_{0}\mathcal{G}((\text{Id}-L_{0}^{*})^{-1}c)\mathbf{1}_{\widetilde{F}_{\ell}}\|_{2}&y\in\widetilde{F}_{\ell},\\\
0&\text{otherwise}.\end{cases}$ (49)
Furthermore, $\dot{T}\in L^{\infty}$.
###### Proof.
See Appendix D.
### 7.3 Explicit formula for the optimal map perturbation that maximally
increases the mixing rate
In this section we set up the optimisation problem for mixing enhancement and
derive a formula for the optimal map perturbation. We remark that related
spectral approaches to mixing enhancement for continuous-time flows were
developed in [18, 16].
Recall that to enhance mixing in Section 5.2, we perturbed $k_{0}$ so that the
real part of the logarithm of the second eigenvalue decreases. From Lemma 4.4,
we have
$\frac{d}{d\delta}\Re(\log\lambda_{\delta})\bigg{|}_{\delta=0}=\frac{\langle\dot{k},E\rangle_{L^{2}([0,1]^{2},{\mathbb{R}})}}{|\lambda_{0}|^{2}},$
(50)
where $\lambda_{\delta}$ denotes the second largest eigenvalue in magnitude
(assumed to be simple) of the integral operator $L_{\delta}$ with the kernel
$k_{\delta}=k_{0}+\delta\cdot\dot{k}+o(\delta)$, where $\delta\mapsto
k_{\delta}$ is $C^{1}$ at $\delta=0$. Since we want to perturb $T_{0}$ by
$\dot{T}$, we reformulate the above inner product. Define
$\widehat{E}(y)=-\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)E(x,y)dx,$
(51)
where $E(x,y)$ is as in (18).
###### Proposition 7.5.
Let $L_{\delta}:L^{2}([0,1],{\mathbb{C}})\rightarrow
L^{2}([0,1],{\mathbb{C}})$, $\delta\in[0,\bar{\delta})$, be integral operators
generated by the kernels $k_{\delta}$ as in (36), assume that $d\rho/dx$ is
Lipschitz and $\delta\mapsto T_{\delta}$ is $C^{1}$. Let $\lambda_{\delta}$ be
an eigenvalue of $L_{\delta}$ with second largest magnitude strictly inside
the unit disk. Suppose that $L_{0}$ satisfies $(A1)$ of Theorem 2.2 and
$\lambda_{0}$ is geometrically simple. Let $e$ and $\hat{e}$ be the
eigenvectors of $L_{0}$ and $L_{0}^{*}$, respectively, corresponding to the
eigenvalue $\lambda_{0}$. Then $\widehat{E}\in L^{\infty}([0,1],{\mathbb{R}})$
and
$\big{\langle}\dot{k},E\big{\rangle}_{L^{2}([0,1]^{2},{\mathbb{R}})}=\big{\langle}\dot{T},\widehat{E}\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})}.$
###### Proof.
We first show that $\widehat{E}\in L^{\infty}([0,1],{\mathbb{R}})$. We can
write
$\displaystyle-\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)E(x,y)dx$
$\displaystyle=-\sum_{i=1}^{4}\beta_{i}h_{i}(y)\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)g_{i}(x)dx$
$\displaystyle=-\sum_{i=1}^{4}\beta_{i}h_{i}(y)(\mathcal{G}g_{i})(y),$
where $\beta_{1}=\beta_{2}=\Re(\lambda_{0})$,
$\beta_{3}=-\beta_{4}=\Im(\lambda_{0}),g_{1}=g_{4}=\Re(\hat{e}),g_{2}=g_{3}=\Im(\hat{e})$,
$h_{1}=h_{3}=\Re(e),h_{2}=h_{4}=\Im(e)$. From the proof of Theorem 7.4, we
have $\mathcal{G}g_{i}\in L^{\infty}([0,1],{\mathbb{R}})$. Also, from Lemma
6.1, we have that $k_{0}\in L^{\infty}([0,1]^{2})$ and therefore $h_{i}\in
L^{\infty}([0,1],{\mathbb{R}})$; thus, $\widehat{E}\in
L^{\infty}([0,1],{\mathbb{R}})$.
Finally, we compute
$\displaystyle\langle\dot{k},E\rangle_{L^{2}([0,1]^{2},{\mathbb{R}})}$
$\displaystyle=$ $\displaystyle\int_{0}^{1}\int_{0}^{1}\dot{k}(x,y)E(x,y)dxdy$
$\displaystyle=$
$\displaystyle-\int_{0}^{1}\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\dot{T}(y)E(x,y)dxdy$
$\displaystyle=$
$\displaystyle\int_{0}^{1}\dot{T}(y)\widehat{E}(y)dy=\big{\langle}\dot{T},\widehat{E}\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})}.$
From equation (50) and Proposition 7.5, in order to maximally increase the spectral gap we should choose the map perturbation $\dot{T}$ that minimises
$\langle\dot{T},\widehat{E}\rangle$. We first show that this optimisation problem
has a unique solution.
###### Proposition 7.6.
Let $P_{\ell}$ be the set in (44) and assume that
$\mathcal{J}(\dot{T})=\langle\dot{T},\widehat{E}\rangle$ does not uniformly
vanish on $P_{\ell}$. Then, the problem of finding $\dot{T}\in P_{\ell}$ such
that
$\big{\langle}\dot{T},\widehat{E}\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})}=\min_{\dot{h}\in
P_{\ell}}\big{\langle}\dot{h},\widehat{E}\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})}$
(52)
has a unique solution.
###### Proof.
Note that $P_{\ell}$ is closed (by Lemma 7.1), bounded, strictly convex and
contains the zero element of $L^{2}$. Now, since
$\mathcal{J}(\dot{h}):=\langle\dot{h},\widehat{E}\rangle_{L^{2}([0,1],{\mathbb{R}})}$
is linear and continuous and by hypothesis does not vanish everywhere on
$P_{\ell}$, we may apply Propositions 4.1 and 4.3 to obtain the result.
Since the objective function in (52) is linear, all optima will lie in
$S_{T_{0},\ell}\cap\partial B_{1}$. Hence, we equivalently consider the
following optimization problem:
###### Problem D.
Given $\ell\geq 0$, solve
$\displaystyle\min_{\dot{T}\in S_{T_{0},\ell}}$
$\displaystyle\big{\langle}\dot{T},\widehat{E}\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})}$
(53) such that $\displaystyle\|\dot{T}\|_{2}^{2}-1=0.$ (54)
We now state a formula for the unique optimum.
###### Theorem 7.7.
Let $(T_{0},\rho)$ be a deterministic system with additive noise satisfying
$(T1)$ and $(T2)$. Suppose the associated transfer operator
$L_{0}:L^{2}([0,1],{\mathbb{C}})\rightarrow L^{2}([0,1],{\mathbb{C}})$, with
the kernel $k_{0}$ as in (36), satisfies $(A1)$ of Theorem 2.2, and that there
is an $F^{\prime}\subset\tilde{F}_{\ell}$ with $m(F^{\prime})>0$ and
$E(\cdot,y)\notin\mathrm{span}\\{\mathcal{P}(\cdot,y)\\}^{\perp}$ for all
$y\in F^{\prime}$. Suppose $\lambda_{0}$ is geometrically simple. Then, the
unique solution to the optimization problem D is
$\dot{T}(y)=\begin{cases}\frac{1}{\alpha}\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)E(x,y)dx&y\in\widetilde{F}_{\ell},\\\
0&\text{otherwise},\end{cases}$ (55)
where $E(x,y)$ is as in (18) and $\alpha>0$ is selected so that
$\|\dot{T}\|_{2}=1$. Furthermore, $\dot{T}\in L^{\infty}$.
###### Proof.
See Appendix E.
###### Corollary 7.8.
If $\lambda_{0}$ is real, then
$\dot{T}(y)=\begin{cases}\text{sgn}(\lambda_{0})\frac{e(y)(\mathcal{G}\hat{e})(y)}{\|e\mathcal{G}\hat{e}\mathbf{1}_{\widetilde{F}_{\ell}}\|_{2}}&y\in\widetilde{F}_{\ell},\\\
0&\text{otherwise},\end{cases}$
where $\mathcal{G}$ is the operator in (48). Furthermore, if there exists an
$\ell>0$ such that $\ell\leq T_{0}(x)\leq 1-\ell$ for $x\in[0,1]$, then
$\dot{T}=\text{sgn}(\lambda_{0})\frac{e\cdot\mathcal{G}\hat{e}}{\|e\cdot\mathcal{G}\hat{e}\|_{2}}.$
(56)
###### Proof.
Since $e,\hat{e}$ and $\lambda_{0}$ are real, we have
$E(x,y)=\hat{e}(x)e(y)\lambda_{0}$ and the expression for $\dot{T}$ follows
from (55). Finally, if $\ell\leq T_{0}(x)\leq 1-\ell$, then
$\widetilde{F}_{\ell}=[0,1]$ and we have (56).
## 8 Applications and numerical experiments
In this section we will consider two stochastically perturbed deterministic
systems, namely the Pomeau-Manneville map and a weakly mixing interval
exchange map. For each of these maps we numerically estimate:
1. 1.
The unique kernel perturbation that maximises the change in expectation of a
prescribed observation function (see Problem A). An expression for this
optimal kernel is given by (28).
2. 2.
The unique kernel perturbation that maximally increases the mixing rate (see
Problem B). An expression for this optimal kernel is given by (31) and (32).
3. 3.
The unique map perturbation that maximises the change in expectation of a
prescribed observation function (see Problem C). An expression for this
optimal map perturbation is given by (49).
4. 4.
The unique map perturbation that maximally increases the mixing rate (see
Problem D). An expression for this optimal map perturbation is given by (55)
and (56).
The numerics will be explained as we proceed through these four optimisation
problems. We refer the reader to [1] for additional details on the
implementation and related experiments.
### 8.1 Pomeau-Manneville map
We consider the Pomeau-Manneville map [36]
$T_{0}(x)=\left\\{\begin{array}[]{ll}x(1+(2x)^{\alpha}),&\hbox{$x\in[0,1/2)$;}\\\
2x-1,&\hbox{$x\in[1/2,1]$}\end{array}\right.,$ (57)
with parameter value $\alpha=1/2$. For this parameter choice it is known that
the map $T_{0}$ admits a unique absolutely continuous invariant probability
measure, but only algebraic decay of correlations [36]. With the addition of
noise as per (33), the transfer operator defined by (35) and (36) for
$\delta=0$ becomes compact as an operator on $L^{2}$. In our numerical
experiments we will use the smooth noise kernel
$\rho_{\epsilon}:[-\epsilon,\epsilon]\to\mathbb{R}$, defined by
$\rho_{\epsilon}(x)=N(\epsilon)\exp(-\epsilon^{2}/(\epsilon^{2}-x^{2}))$,
where $N(\epsilon)$ is a normalisation factor ensuring
$\int\rho_{\epsilon}(x)\ dx=1$.
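As a concrete illustration (our own Python sketch, not the authors' MATLAB code), the map (57) and the noise density $\rho_{\epsilon}$ can be written down directly; the trapezoidal estimate of $N(\epsilon)$ below is a simple stand-in for a quadrature routine such as `integral.m`.

```python
import math

def pomeau_manneville(x, alpha=0.5):
    """Pomeau-Manneville map (57): intermittent branch on [0, 1/2),
    expanding branch 2x - 1 on [1/2, 1]."""
    if x < 0.5:
        return x * (1.0 + (2.0 * x) ** alpha)
    return 2.0 * x - 1.0

def bump(x, eps):
    """Unnormalised smooth noise kernel exp(-eps^2 / (eps^2 - x^2)),
    supported on (-eps, eps) and vanishing at the endpoints."""
    if abs(x) >= eps:
        return 0.0
    return math.exp(-eps**2 / (eps**2 - x**2))

def rho(x, eps, n=4000):
    """Normalised noise density rho_eps; the constant N(eps) is estimated
    with a trapezoidal rule so that the density integrates to 1."""
    h = 2.0 * eps / n
    mass = h * sum(bump(-eps + i * h, eps) for i in range(n + 1))
    return bump(x, eps) / mass
```

Note that $x=0$ and $x=1$ are fixed points of the map, consistent with the slow (algebraic) mixing driven by the neutral fixed point at $x=0$.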
We now begin to set up our numerical procedure for estimating $L_{0}$, which
is a standard application of Ulam’s method [47]. Let
$B_{n}=\\{I_{1},\dots,I_{n}\\}$ denote an equipartition of $[0,1]$ into $n$
subintervals, and set $\mathcal{B}_{n}=$
span$\\{\mathbf{1}_{I_{1}},\dots,\mathbf{1}_{I_{n}}\\}$. We define the (Ulam)
projection $\pi_{n}:L^{2}([0,1])\rightarrow\mathcal{B}_{n}$ by
$\pi_{n}(g)=\sum_{i=1}^{n}\left(\frac{1}{m(I_{i})}\int_{I_{i}}g(x)dx\right)\mathbf{1}_{I_{i}}$.
The finite-rank transfer operator
$L_{n}:=\pi_{n}L_{0}:L^{2}([0,1])\rightarrow\mathcal{B}_{n}$ can be computed
numerically. We use MATLAB’s built-in functions `integral.m` and `integral2.m`
to perform the $\rho$-convolution (using an explicit form of
$\rho_{\epsilon}$) and the Ulam projections, respectively. Figure 1 displays
the nonzero entries in the column-stochastic matrix corresponding to $L_{n}$
for $\epsilon=0.1$.
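A minimal stand-in for this Ulam discretisation (using Monte Carlo sampling of the noisy dynamics rather than the `integral.m`/`integral2.m` quadrature used in the paper, and reflecting boundary conditions on the noise as described later) might look like the following sketch; `T` is any map of $[0,1]$ and `eps` the noise amplitude.

```python
import math, random

def ulam_matrix(T, n, eps, samples=200, seed=0):
    """Monte Carlo Ulam estimate of the column-stochastic matrix L_n:
    column j holds the transition probabilities out of subinterval I_j
    under x -> T(x) + noise, with reflecting boundary on [0, 1]."""
    rng = random.Random(seed)
    L = [[0.0] * n for _ in range(n)]
    peak = math.exp(-1.0)  # maximum of the unnormalised bump kernel (at 0)
    for j in range(n):
        for _ in range(samples):
            x = (j + rng.random()) / n          # uniform sample in I_j
            while True:                          # rejection-sample the noise
                xi = (2.0 * rng.random() - 1.0) * eps
                if abs(xi) >= eps:
                    continue
                if rng.random() * peak <= math.exp(-eps**2 / (eps**2 - xi**2)):
                    break
            y = T(x) + xi
            if y < 0.0:                          # reflecting boundary
                y = -y
            if y > 1.0:
                y = 2.0 - y
            i = min(int(y * n), n - 1)
            L[i][j] += 1.0 / samples
    return L
```

By construction each column sums to one, mirroring the column-stochastic matrix shown in Figure 1.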
Figure 1: Transition matrix $L_{n}$ for the system (33) generated by the
Pomeau-Manneville map $T_{0}$ (57) using $n=500$ subintervals of equal length.
The matrix entries are located according to the subinterval positions in the
domain $[0,1]$, so that the image appears as a “blurred” version of the graph
of $T_{0}$. The additive noise in (33) is drawn according to $\rho_{\epsilon}$
with $\epsilon=1/10$.
Approximations to the invariant probability densities for our stochastic
dynamics are displayed in Figure 2 (left) for large and small noise supports.
Figure 2: Approximate invariant densities (left) and eigenfunctions
corresponding to the 2nd largest eigenvalue of $L_{0}$ (right) for the system
(33) with $T_{0}$ given by the Pomeau-Manneville map (57). The additive noise
in (33) is drawn according to $\rho_{\epsilon}$ with $\epsilon$ taking the
values 1/10 (blue) and $\sqrt{6}/100$ (red). The Ulam matrix $L_{n}$ is
constructed with 500 subintervals.
A lower level of noise permits greater concentration of invariant probability
mass near the fixed point $x=0$ of the map $T_{0}$. Also shown in Figure 2
(right) are the estimated eigenfunctions corresponding to the second-largest
eigenvalue of $L_{n}$. The signs of these second eigenfunctions split the
interval $[0,1]$ into left and right hand portions, broadly indicating that
the slow mixing is due to positive mass near $x=0$ and negative mass away from
$x=0$ [9]; see [17] for further discussion of this point in the Pomeau-
Manneville setting.
#### 8.1.1 Kernel perturbations
In the framework of Problems A and B we use the (arbitrarily chosen)
monotonically increasing observation function $c(x)=-\cos(x)$. In order to
estimate $\dot{k}$ as in (28) we use the code from Algorithm 3 [1]; the inputs
are the Ulam matrix $L_{n}$ and $c_{n}$ (obtained as $\pi_{n}(c)$).
Equivalently, directly using (28) one may substitute $f_{n}$ (obtained as the
leading eigenvector of $L_{n}$) for $f$, $L_{n}$ for $L$, $c_{n}$ as above for
$c$, and compute $(Id-L_{n}^{*})^{-1}c_{n}$ (obtained as a vector
$y\in\mathbb{R}^{n}$ by numerically solving the linear system $(Id-
L_{n}^{*})y=c_{n}$ subject to $f_{n}^{\top}y=0$). Figure 3 shows the optimal kernel
perturbations $\dot{k}_{n}$ for $n=500$.
perturbations $\dot{k}_{n}$ for $n=500$.
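The constrained linear solve above can be mimicked by a Neumann-type iteration (a hedged sketch, not the paper's solver): repeatedly apply $L_{n}^{*}$, add $c_{n}$, and project out the $f_{n}$ component; this converges when $c_{n}\perp f_{n}$ and the second eigenvalue satisfies $|\lambda_{2}|<1$.

```python
def resolvent_solve(LT, c, f, iters=500):
    """Approximate y = (Id - L*)^{-1} c on span{f}^perp by iterating
    y <- L* y + c and removing the f-component at each step.
    LT is the matrix of L* given as a list of rows."""
    n = len(c)
    ff = sum(v * v for v in f)
    y = [0.0] * n
    for _ in range(iters):
        y = [sum(LT[i][j] * y[j] for j in range(n)) + c[i] for i in range(n)]
        proj = sum(f[i] * y[i] for i in range(n)) / ff
        y = [y[i] - proj * f[i] for i in range(n)]
    return y
```

For a symmetric $2\times 2$ column-stochastic example with second eigenvalue $0.4$ the fixed point can be checked in closed form, which makes the iteration easy to validate before applying it to the Ulam matrix.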
Figure 3: Optimal kernel perturbations for the Pomeau-Manneville map to
maximise the change in expectation of $c(x)=-\cos(x)$, based on an Ulam
approximation of (28) with $n=500$ subintervals. Left: $\epsilon=1/10$, Right:
$\epsilon=\sqrt{6}/100$.
Because $c$ is an increasing function, intuitively one might expect the kernel
perturbation to try to shift mass in the invariant density from left to right.
Broadly speaking, this is what one sees in the high-noise case in Figure 3
(left): vertical strips typically have red above blue, corresponding to a
shift of mass to the right in $[0,1]$. The main exception to this is around
the $y$-axis value of 1/2, where red is strongly below blue along vertical
strips. This is because at the next iteration, these red regions will be
mapped near $x=1$ and achieve the highest value of $c$, while the blue regions
will be mapped near to $x=0$ with the least value of $c$. In the low-noise
case of Figure 3 (right), we see a similar solution with higher spatial
frequencies, and strong perturbations near the critical values of $x=0$ and
$T_{0}(x)=1/2$.
To investigate the optimal kernel perturbation to maximally increase the rate
of mixing in the stochastic system, we use the expression for $\dot{k}$ in (31). A
natural approximation of (31) requires estimates of the left and right
eigenfunctions of $L_{0}$ corresponding to the second largest eigenvalue
$\lambda_{2}$; these are obtained directly as eigenvectors of $L_{n}$. Figure
4 shows the resulting optimal kernel perturbations, computed using the code
from Algorithm 4 [1] with input $L_{n}$.
Figure 4: Optimal kernel perturbation for the Pomeau-Manneville map to
maximally increase the mixing rate, computed with $n=500$ subintervals. Left:
$\epsilon=1/10$, Right: $\epsilon=\sqrt{6}/100$.
Because the fixed point at $x=0$ is responsible for the slow algebraic decay
of correlations for the deterministic dynamics of $T_{0}$, the fixed point
will also play a dominant role in the mixing rate of the stochastic system for
low to moderate levels of noise. Indeed, Figure 4 shows that the optimal
perturbation concentrates its effort in a neighbourhood of the fixed point,
and pushes mass away from the fixed point as much as possible. This is
particularly extreme in the low noise case of Figure 4 (right) with the
perturbation almost exclusively concentrated in a small neighbourhood of
$x=0$.
#### 8.1.2 Map perturbations
We now turn to the problem of finding the unique map perturbation $\dot{T}$
that maximises the change in expectation of the observation $c(x)=-\cos(x)$
(see Problem C for a precise formulation) and maximises the speed of mixing
(see Problem D). We use the natural Ulam discretisation of the
expression (49). (Note that since $T_{0}^{-1}(\\{0,1\\})$ is a finite set, we may
take $\ell>0$ as small as we like; in the computations we set $\ell=0$, so
that $\widetilde{F}_{\ell}=[0,1]$ mod $m$.) The objects $f_{n}$ and $(Id-
L_{n}^{*})^{-1}c_{n}$ are computed exactly as in Section 8.1.1. The
action of the operator $\mathcal{G}$ in (49) is computed using MATLAB’s built-
in function `integral.m` using an explicit form of $d\rho_{\epsilon}/dx$ for
$d\rho/dx$ in (49).
Figure 5 (left) shows the optimal $\dot{T}$ for the two noise amplitudes
$\epsilon=1/10$ and $\epsilon=\sqrt{6}/100$.
Figure 5: Left: Optimal map perturbation $\dot{T}$ for the Pomeau-Manneville
map to maximise the change in expectation of $c(x)=-\cos(x)$, computed using
(49) with $n=500$. Right: Illustration of $T_{0}+\dot{T}/100$.
Note that for the noise amplitude $\epsilon=0.1$ (blue curve in Figure 5) the
map perturbation $\dot{T}$ is mostly positive, corresponding to moving
probability mass to the right, as expected because we are maximising the
change in expectation of an increasing observation function $c$. The blue
curve is most negative in neighbourhoods of the two preimages of $x=1/2$,
corresponding to moving probability mass to the left. The reason for this is
identical to the discussion of the “blue above red” effect in Figure 3, namely
moving mass to the left creates a very large increase in the objective
function value at the next iterate. This “look ahead” effect is even more
pronounced in the low noise case (red curve of Figure 5), where $\dot{T}$ is
mostly positive, but has deep negative perturbations at multiple preimages of
$x=1/2$ reaching further into the past.
Figure 5 (right) illustrates the Pomeau-Manneville map (black) with perturbed
maps $T_{0}+\dot{T}/100$. We have chosen a scale factor of 1/100 for
visualisation purposes; one should keep in mind we have optimised for an
infinitesimal change in the map. Figure 6 shows the kernel derivatives
$\dot{k}$ corresponding to the optimal map derivatives $\dot{T}$ for the two
noise levels.
Figure 6: Kernel perturbations corresponding to the optimal map perturbations
in Figure 5. Left: $\epsilon=1/10$, Right: $\epsilon=\sqrt{6}/100$.
These kernel derivatives have a restricted form because they arise purely from
a derivative in the map. One may compare Figure 6 with Figure 3 and note that
the kernel derivative in Figure 6 (left) attempts to follow the general
structure of the kernel derivative in Figure 3 (left), while obeying its
structural restrictions arising from the less flexible map perturbation.
Broadly speaking, in Figure 6 (left), red lies above blue (mass is shifted to
the right). Exceptions are near $y=1/2$ because at the next iteration these
red points will land near $x=1$, achieving very high objective value, while
the blue region will get mapped to near $x=0$, encountering the lowest value
of $c$. Note that the perturbation decreases from a peak to very close to zero
near $x=0$. This is because in a small neighbourhood of $x=0$ there is already
some stochastic perturbation away from $x=0$ “for free” due to the reflecting
boundary conditions. Thus, the map perturbation $\dot{T}$ does not need to
invest energy in large perturbations very close to $x=0$.
The map perturbation that maximally increases the rate of mixing is a
particularly interesting question. Our computations use the natural Ulam
discretisation of (56). The computations follow as in Section 8.1.1 with the
action of $\mathcal{G}$ computed as above. Figure 7 (left) shows the optimal
$\dot{T}$ for the two noise amplitudes $\epsilon=1/10$ and
$\epsilon=\sqrt{6}/100$.
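For reference, once discrete approximations of $e$ and $\mathcal{G}\hat{e}$ are in hand, the closed form (56) is a one-line computation. In this sketch the vectors `e` and `Ge_hat` are assumed inputs (e.g. eigenvector data from the Ulam matrix, with the action of $\mathcal{G}$ computed as above); the discrete $L^{2}$ norm uses Lebesgue measure on an equipartition of $[0,1]$.

```python
def optimal_map_perturbation(e, Ge_hat, sgn_lambda):
    """Discrete version of (56): sgn(lambda_0) * e.(G e_hat) / ||e.(G e_hat)||_2,
    where the product is pointwise and the L2 norm is taken with respect to
    Lebesgue measure on n equal subintervals of [0, 1]."""
    prod = [a * b for a, b in zip(e, Ge_hat)]
    n = len(prod)
    norm = (sum(p * p for p in prod) / n) ** 0.5
    return [sgn_lambda * p / norm for p in prod]
```

The output is automatically normalised so that $\|\dot{T}\|_{2}=1$, matching the constraint (54).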
Figure 7: Left: Optimal map perturbation $\dot{T}$ for the Pomeau-Manneville
map to maximise the change in the mixing rate, computed using (56) with
$n=500$. Right: Illustration of $T_{0}+\dot{T}/100$.
A sharp map perturbation away from $x=0$ is seen for both noise levels, with
the perturbation sharper for the lower noise case. In both cases, the
perturbations far from $x=0$ are weak (low magnitude values of $\dot{T}$).
This result corresponds well with the results seen for the optimal kernel
perturbations in Figure 4, where mass was primarily moved away from $x=0$. As
in the optimal solution shown in Figure 5 (left), the optimal perturbation in
Figure 7 decreases from a sharp peak down to zero near $x=0$. This is again
because in a small neighbourhood of $x=0$ the system experiences “free”
stochastic perturbations away from $x=0$ due to the reflecting boundary
conditions, and thus the map perturbation $\dot{T}$ need not invest
energy in large perturbations very close to $x=0$. Figure 7 (right)
illustrates the Pomeau-Manneville map (black) with perturbed maps
$T_{0}+\dot{T}/100$, where again the factor $1/100$ is just for illustrative
purposes. When inspecting the kernel derivatives $\dot{k}$ corresponding to
the optimal map perturbations $\dot{T}$ in Figure 8, we see similar behaviour
to those in Figure 7.
Figure 8: Kernel perturbations corresponding to the optimal map perturbations
in Figure 7. Left: $\epsilon=1/10$, Right: $\epsilon=\sqrt{6}/100$.
### 8.2 Interval exchange map
In our second example, we consider a weak-mixing interval exchange map. This
choice is motivated by an existing literature on mixing optimisation for this class
of maps under the addition of noise. Avila and Forni [3] prove that a typical
interval exchange is either weak mixing or an irrational rotation. We use a
specific weak-mixing [45] interval exchange map $T_{0}$ with interval
permutation $(1234)\mapsto(4321)$ and interval lengths given by the normalised
entries of the leading eigenvector of the matrix
$\left(\begin{array}[]{cccc}13&37&77&47\\\ 10&30&60&37\\\ 3&10&24&14\\\
4&10&19&12\end{array}\right)$; see equation (51) in [45]. We again form a
stochastic system using the same noise kernels as for the Pomeau-Manneville
map in Section 8.1. The mixing properties of this map have been studied in
[15]. Figure 9 shows the column-stochastic matrix corresponding to $L_{n}$ for
$n=500$ and $\epsilon=0.1$.
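The interval lengths defined above can be recovered with a few lines of power iteration (our own sketch; any standard eigensolver would do), since the $4\times 4$ matrix is positive and therefore has a simple dominant eigenvalue by Perron-Frobenius.

```python
def leading_eigvec(M, iters=500):
    """Power iteration for the leading eigenvector of a nonnegative matrix M,
    normalised so the entries sum to 1 (a probability/length vector)."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v

# 4x4 matrix from Section 8.2 whose leading eigenvector gives the
# normalised interval lengths of the exchange map
M = [[13.0, 37.0, 77.0, 47.0],
     [10.0, 30.0, 60.0, 37.0],
     [ 3.0, 10.0, 24.0, 14.0],
     [ 4.0, 10.0, 19.0, 12.0]]
lengths = leading_eigvec(M)
```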
Figure 9: Transition matrix for the system (33) for $\delta=0$ and $T_{0}$
given by the interval exchange map above using $n=500$ subintervals. The
additive noise is drawn from the density $\rho_{\epsilon}$ with
$\epsilon=1/10$.
#### 8.2.1 Kernel perturbations
In the framework of Problem A, we use the same observation function
$c(x)=-\cos(x)$ as in the Pomeau-Manneville case study, and estimate the
optimal kernel perturbation $\dot{k}$ that maximally increases the expectation
of $c$ in an identical fashion. In broad terms, one again sees that $\dot{k}$
attempts to shift invariant probability mass to the right in $[0,1]$. In
Figure 10 (left), in each smooth part of the support of $\dot{k}$, red is
“above” blue, meaning mass is pushed to the right.
Figure 10: Optimal kernel perturbation for the interval exchange map to
maximise the change in expectation of $c(x)=-\cos(x)$, computed with $n=500$
Ulam subintervals. Left: $\epsilon=1/10$, Right: $\epsilon=\sqrt{6}/100$.
Clear exceptions to the “red above blue” scheme are seen as three sharp
horizontal lines. The $y$-coordinates of these three sharp horizontal lines
coincide with the three points of discontinuity in the domain of the interval
exchange at approximately $x=0.43,0.77,0.89$. Consider the sharp horizontal
“blue above red” line at $y\approx 0.43$. According to Figure 9, under the
action of the kernel $k_{0}$, mass in the vicinity of $x=0.6$ will be
transported near to $x=0.43$. The perturbation $\dot{k}$ shown in Figure 10
will then tend to push this mass to the left of $x=0.43$. Thus, on the next
iteration there will be a bias for mass to be mapped near to $x=1$ rather than
near $x=0.25$, achieving a much larger objective value at this iterate. A
similar reasoning applies to the “blue above red” horizontal lines at
$y\approx 0.77$ and $0.89$; the contrast is a little weaker because the
potential gain at the next iterate is also weaker. In the low noise case,
Figure 10 (right) displays similar behaviour to the higher noise case of
Figure 10 (left). With lower noise, the deterministic dynamics plays a greater
role and additional preimages are taken into account, leading to a more
oscillatory optimal $\dot{k}$.
To investigate the optimal kernel perturbation to maximally increase the rate
of mixing in the stochastic system (in the framework of Problem B) we use the
expression $\dot{k}$ in (31). The method of numerical approximation is
identical to that used for the Pomeau-Manneville map. Figure 11 shows the
signed distribution of mass that is responsible for the slowest real
exponential rate of decay in the stochastic system. (In our
numerical experiments the largest-magnitude real eigenvalue appears as the
sixth (resp. fourth) eigenvector of $L_{500}$ for $\epsilon=1/10$ (resp.
$\epsilon=\sqrt{6}/100$). Slightly larger complex eigenvalues are present, but
we do not investigate these in order to make the dynamic interpretation more
straightforward.)
Figure 11: Approximate second eigenfunctions of the transfer operator $L_{0}$
of the system (33) with $T_{0}$ given by the interval exchange map above. The
additive noise in (33) is drawn from the density $\rho_{\epsilon}$ with
$\epsilon$ taking the values 1/10 (blue) and $\sqrt{6}/100$ (red).
This eigenfunction becomes more oscillatory as the level of noise decreases,
and as must be the case, the magnitude of the corresponding eigenvalue
increases from $\lambda\approx-0.7476$ ($\epsilon=1/10$) to
$\lambda\approx-0.9574$ ($\epsilon=\sqrt{6}/100$). Because the sign of these
eigenvalues is negative, one expects a pair of almost-2-cyclic sets [10],
consisting of three subintervals each, given by the positive and negative
supports of the eigenfunctions.
Figure 12 shows the approximate optimal kernel perturbations.
Figure 12: Optimal kernel perturbation for the interval exchange map to
maximally increase the mixing rate, computed with $n=500$ Ulam subintervals.
Left: $\epsilon=1/10$, Right: $\epsilon=\sqrt{6}/100$.
In the high-noise situation of Figure 12 (left), the sharp horizontal changes
are present at preimages of the deterministic dynamics, as they were in
Figure 10 (left). The importance of the break points to the overall mixing
rate is thus clearly borne out in the optimal $\dot{k}$, although a precise
interpretation of this perturbation is less straightforward. For the
low noise case (Figure 12 (right)) it appears that there is an alternating
shifting of mass left and right with alternating “red above blue” and “blue
above red”. This leads to greater mixing at smaller spatial scales than is
possible in a single iteration of the deterministic interval exchange. We
anticipate that decreasing the noise amplitude further will result in more
rapid alternation of “red above blue” and “blue above red”. As the diffusion
amplitude decreases, the efficient large-scale diffusive mixing is no longer
possible and so a transition is made to small-scale mixing, accessed by
increasing oscillation in the kernel.
#### 8.2.2 Map perturbations
The computations in this section follow those of Section 8.1.2. Figure 13
(left) shows the optimal map perturbations $\dot{T}$ at two different noise
levels.
Figure 13: Left: Optimal map perturbation $\dot{T}$ for the interval exchange
map to maximise the change in expectation of $c(x)=-\cos(x)$, computed using
(49) with $n=500$. Right: Illustration of $T_{0}+\dot{T}/100$.
Figure 13 (right) illustrates $T_{0}+\dot{T}/100$ for the two different levels
of noise. The kernel perturbations generated by these optimal map
perturbations are displayed in Figure 14.
Figure 14: Kernel perturbations corresponding to the optimal map perturbations
in Figure 13. Left: $\epsilon=1/10$, Right: $\epsilon=\sqrt{6}/100$.
If one compares the kernel perturbations in Figure 14 with those more flexible
kernel perturbations in Figure 10, one sees that the two sets of kernel
perturbations are broadly equivalent with one another in terms of the relative
positions of the positive and negative (red and blue) perturbations. Note that
the more restrictive kernel derivative in Figure 14 by construction cannot
replicate the sharp horizontal red-blue switches in Figure 10. It turns out
that the strongest of these red-blue switches, namely the one at $y\approx
0.43$ in Figure 10 (left), is approximated as well as a map perturbation
allows (see Figure 14 (left)), while the other two (weaker) horizontal
red/blue switches seen in Figure 10 are ignored.
We now turn to optimal map perturbations for the mixing rate. The combined
effect of the “cutting and shuffling” of interval exchanges with diffusion on
mixing rates has been widely studied, e.g. [2, 46, 15, 34, 48], including
investigations of the impact of changing the diffusion or the interval
exchange on mixing. The very general type of formal map optimisation we
consider here has not been attempted before, and we hope that our novel
techniques will stimulate interesting new research questions and motivate more
sophisticated experiments in the field of mixing optimisation.
Under repeated iteration, the original interval exchange $T_{0}$ cuts and
shuffles the unit interval into an increasing number of smaller pieces,
assisting the small scale mixing of diffusion. Our results in Figure 15 (left)
show an oscillatory $\dot{T}$, with increasing oscillations as the noise
amplitude decreases.
Figure 15: Left: Optimal map perturbation $\dot{T}$ for the interval exchange
map to maximise the change in the mixing rate, computed using (56) with
$n=500$. Right: Illustration of $T_{0}+\dot{T}/100$.
This increased oscillation effect is also seen when comparing the left and
right panes of Figure 16. Thus, the optimisation attempts to include some
additional mixing by rapid local warping of the phase space. It is plausible
that this additional warping effect enhances mixing beyond the rigid shuffling
of the interval exchange. An illustration of $T_{0}+\dot{T}/100$ is given in
Figure 15.
Figure 16: Kernel perturbations corresponding to the optimal map perturbations
in Figure 15. Left: $\epsilon=1/10$, Right: $\epsilon=\sqrt{6}/100$.
We emphasise that the factor $1/100$ is only for visualisation purposes; for
sufficiently small factors, the perturbed map remains a piecewise homeomorphism
(modulo small overshoots at the boundaries, which are taken care of by the
reflecting boundary conditions on the noise).
## 9 Acknowledgments
FA is supported by a UNSW University Postgraduate Award. GF is partially
supported by an ARC Discovery Project. FA and GF thank the Department of
Mathematics at the University of Pisa for generous support and hospitality. SG
is partially supported by the research project `PRIN 2017S35EHN_004` “Regular
and stochastic behaviour in dynamical systems” of the Italian Ministry of
Education and Research.
## Appendix A Proof of Theorem 5.4
First we need a technical lemma. We note that the statement of the lemma is
analogous to the continuity of $(\text{Id}-L_{0})^{-1}$, which was treated in
the proof of Theorem 2.2.
###### Lemma A.1.
Consider the closed subspace span$\\{f_{0}\\}^{\perp}\subset L^{2}$ equipped
with the $L^{2}$ norm. Then, the operator $(\text{Id}-L_{0}^{*})^{-1}:$
span$\\{f_{0}\\}^{\perp}\rightarrow$ span$\\{f_{0}\\}^{\perp}$ is bounded.
###### Proof.
We begin by finding the kernel and range of the operator
$\text{Id}-L_{0}^{*}$. Recall that $L_{0}(V)\subset V$ and that $L_{0}$
preserves a one-dimensional eigenspace span$\\{f_{0}\\}$, with eigenvalue $1$.
Thus, we have $\ker(\text{Id}-L_{0})=$ span$\\{f_{0}\\}$ and
ran$(\text{Id}-L_{0})\subset V$. Recalling that $L_{0}:V\rightarrow V$ is
compact and $f_{0}\not\in V$, we have by the Fredholm alternative (see [11],
VII.11) that for any $g\in V$, there exists a unique $h\in V$ such that
$g=(\text{Id}-L_{0})h$. Hence, ran($\text{Id}-L_{0})=V$. Since $V$ is closed,
the range of $\text{Id}-L_{0}$ is closed and so, by the Closed Range Theorem
(Theorem 5.13, IV-§5.2,[29]), we have
$\text{ran}((\text{Id}-L_{0})^{*})=\ker(\text{Id}-L_{0})^{\perp}=$
span$\\{f_{0}\\}^{\perp}$, which is a co-dimension $1$ space, and
$\ker((\text{Id}-L_{0})^{*})=$ ran($\text{Id}-L_{0})^{\perp}=V^{\perp}=$
span$\\{\mathbf{1}\\}^{\perp\perp}=$ span$\\{\mathbf{1}\\}$, where the last
equality follows from Corollary 1.41 in III-§1.8 [29] and the fact that
span$\\{\mathbf{1}\\}$ is a finite-dimensional closed subspace of $L^{2}$.
To prove that $(\text{Id}-L_{0}^{*})^{-1}:$
span$\\{f_{0}\\}^{\perp}\rightarrow$ span$\\{f_{0}\\}^{\perp}$ is bounded, we
will use the Inverse Mapping Theorem (Theorem III.11, [41]). Since the
integral operator $L_{0}^{*}$ has an $L^{2}$ kernel, by (6) and the triangle
inequality it follows that $\text{Id}-L_{0}^{*}$ is bounded. Also, from the
Fredholm alternative argument above,
$\text{Id}-L_{0}^{*}:\mathrm{span}\\{f_{0}\\}^{\perp}\to\mathrm{span}\\{f_{0}\\}^{\perp}$
is surjective. Thus, to apply the Inverse Mapping Theorem, we just need to
show that $\text{Id}-L_{0}^{*}$ is injective on span$\\{f_{0}\\}^{\perp}$. Let
$f_{1},f_{2}\in$ span$\\{f_{0}\\}^{\perp}$ be such that
$(\text{Id}-L_{0}^{*})f_{1}=(\text{Id}-L_{0}^{*})f_{2}$. Thus,
$f_{1}-f_{2}\in\ker(\text{Id}-L_{0}^{*})=$ span$\\{\mathbf{1}\\}$ and so
$f_{1}-f_{2}=\gamma\mathbf{1}$ for some $\gamma\in{\mathbb{R}}$. Since
$f_{1}-f_{2}\in$ span$\\{f_{0}\\}^{\perp}$, we have that
$0=\int(f_{1}(x)-f_{2}(x))f_{0}(x)dx=\gamma\int f_{0}(x)dx$ and so $\gamma=0$
(since $\int f_{0}(x)dx=1$), i.e. $f_{1}=f_{2}$; thus, $(\text{Id}-L_{0}^{*})$
is injective and the result follows.
###### Proof of Theorem 5.4.
We will use the method of Lagrange multipliers to derive the expression (28)
from the first-order necessary conditions for optimality and then show that
such a $\dot{k}$ satisfies the second-order sufficient conditions. To this
end, we consider the following Lagrangian function
$\mathcal{L}(\dot{k},\mu):=f(\dot{k})+\mu g(\dot{k}),$
where
$f(\dot{k}):=-\big{\langle}c,R(\dot{k})\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})},$
$g(\dot{k}):=\|\dot{k}\|_{L^{2}([0,1]^{2})}^{2}-1$ and $\dot{k}\in
V_{\ker}\cap S_{k_{0},l}$.
Necessary conditions: We verify the conditions in Theorem 2, §7.7, [37]. We
want to find $\dot{k}$ and $\mu$ that satisfy the first-order necessary
conditions:
$\displaystyle g(\dot{k})=0,\qquad D_{\dot{k}}\mathcal{L}(\dot{k},\mu)\tilde{k}=0\text{ for all }\tilde{k}\in V_{\ker}\cap S_{k_{0},l},$
where
$D_{\dot{k}}\mathcal{L}(\dot{k},\mu)\in\mathcal{B}(L^{2}([0,1]^{2}),{\mathbb{R}})$
is the Frechet derivative with respect to the variable $\dot{k}$. Since $f$ is
linear, we have $(D_{\dot{k}}f)\tilde{k}=f(\tilde{k})$. Also,
$(D_{\dot{k}}g)\tilde{k}=2\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}$
since
$\displaystyle\frac{|g(\dot{k}+\tilde{k})-g(\dot{k})-2\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}|}{\|\tilde{k}\|_{L^{2}([0,1]^{2})}}$
$\displaystyle=\frac{|\|\dot{k}+\tilde{k}\|_{L^{2}([0,1]^{2})}^{2}-\|\dot{k}\|_{L^{2}([0,1]^{2})}^{2}-2\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}|}{\|\tilde{k}\|_{L^{2}([0,1]^{2})}}$
$\displaystyle=\frac{|\langle\dot{k}+\tilde{k},\dot{k}+\tilde{k}\rangle_{L^{2}([0,1]^{2})}-\langle\dot{k},\dot{k}\rangle_{L^{2}([0,1]^{2})}-2\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}|}{\|\tilde{k}\|_{L^{2}([0,1]^{2})}}$
$\displaystyle=\frac{|\langle\tilde{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}|}{\|\tilde{k}\|_{L^{2}([0,1]^{2})}}=\|\tilde{k}\|_{L^{2}([0,1]^{2})}.$
Thus, for the necessary conditions of the Lagrange multiplier method to be
satisfied, we need that
$D_{\dot{k}}\mathcal{L}(\dot{k},\mu)\tilde{k}=(D_{\dot{k}}f)\tilde{k}+\mu(D_{\dot{k}}g)\tilde{k}=f(\tilde{k})+2\mu\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}=0$
(58)
for all $\tilde{k}\in V_{\ker}\cap S_{k_{0},l}$ and
$g(\dot{k})=0.$ (59)
Noting Lemma A.1 and the fact that $c\in$ span$\\{f_{0}\\}^{\perp}$, we have
$\displaystyle f(\tilde{k})$
$\displaystyle+2\mu\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}$ (60)
$\displaystyle=-\langle
c,R(\tilde{k})\rangle_{L^{2}([0,1],{\mathbb{R}})}+2\mu\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}$
$\displaystyle=-\bigg{\langle}c,(\text{Id}-L_{0})^{-1}\int\tilde{k}(x,y)f_{0}(y)dy\bigg{\rangle}_{L^{2}([0,1],{\mathbb{R}})}+2\mu\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}$
$\displaystyle=\int\int-((\text{Id}-L_{0}^{*})^{-1}c)(x)\tilde{k}(x,y)f_{0}(y)dydx+\int\int
2\mu\dot{k}(x,y)\tilde{k}(x,y)dydx$
$\displaystyle=\int\int\left[-((\text{Id}-L_{0}^{*})^{-1}c)(x)f_{0}(y)+2\mu\dot{k}(x,y)\right]\tilde{k}(x,y)dydx.$
We claim that
$\dot{k}(x,y)=\frac{1}{2\mu}\mathbf{1}_{F_{l}}(x,y)f_{0}(y)\left(((\text{Id}-L_{0}^{\ast})^{-1}c)(x)-\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}((\text{Id}-L_{0}^{\ast})^{-1}c)(z)dz\right)$
satisfies the necessary condition (58) and lies in $V_{\ker}\cap S_{k_{0},l}$.
Before we verify this, we show that
$M(x,y):=\mathbf{1}_{F_{l}}(x,y)f_{0}(y)\left(((\text{Id}-L_{0}^{\ast})^{-1}c)(x)-\hat{g}(y)\right),$
where
$\hat{g}(y):=\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}((\text{Id}-L_{0}^{\ast})^{-1}c)(z)dz$,
is in $L^{2}([0,1]^{2})$. Since $f_{0},(\text{Id}-L_{0}^{\ast})^{-1}c\in
L^{2}$, we just need to show that $\mathbf{1}_{F_{l}}(x,y)f_{0}(y)\hat{g}(y)$
is in $L^{2}([0,1]^{2})$. First, we note that
$\displaystyle|\hat{g}(y)|$
$\displaystyle\leq\frac{1}{m(F_{l}^{y})}\int\big{|}\mathbf{1}_{F_{l}^{y}}(z)((\text{Id}-L_{0}^{\ast})^{-1}c)(z)\big{|}dz$
$\displaystyle\leq\frac{1}{m(F_{l}^{y})}\|(\text{Id}-L_{0}^{\ast})^{-1}c\|_{2}\|\mathbf{1}_{F_{l}^{y}}\|_{2}$
$\displaystyle=\frac{1}{m(F_{l}^{y})}\|(\text{Id}-L_{0}^{\ast})^{-1}c\|_{2}\sqrt{m(F_{l}^{y})}$
$\displaystyle=\frac{\|(\text{Id}-L_{0}^{\ast})^{-1}c\|_{2}}{\sqrt{m(F_{l}^{y})}}$
and therefore
$\displaystyle\hat{g}(y)^{2}\leq\frac{\|(\text{Id}-L_{0}^{\ast})^{-1}c\|^{2}_{2}}{m(F_{l}^{y})}.$
We then have
$\displaystyle\int\int\mathbf{1}_{F_{l}}(x,y)\hat{g}(y)^{2}f_{0}(y)^{2}dxdy$
$\displaystyle=\int_{\Xi(F_{l})}\int_{F_{l}^{y}}\hat{g}(y)^{2}f_{0}(y)^{2}dxdy$
$\displaystyle=\int_{\Xi(F_{l})}m(F_{l}^{y})\hat{g}(y)^{2}f_{0}(y)^{2}dy$
$\displaystyle\leq\int_{\Xi(F_{l})}m(F_{l}^{y})\frac{\|(\text{Id}-L_{0}^{\ast})^{-1}c\|^{2}_{2}}{m(F_{l}^{y})}f_{0}(y)^{2}dy$
$\displaystyle\leq\|(\text{Id}-L_{0}^{\ast})^{-1}c\|^{2}_{2}\|f_{0}\|_{2}^{2}.$
Thus, $\mathbf{1}_{F_{l}}(x,y)f_{0}(y)\hat{g}(y)$ is in $L^{2}([0,1]^{2})$ and
therefore $M\in L^{2}([0,1]^{2})$.
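The slice-average estimate used above is an instance of Cauchy–Schwarz, and it can be illustrated numerically; in the following sketch, `phi` and the slice are arbitrary stand-ins for $(\text{Id}-L_{0}^{\ast})^{-1}c$ and $F_{l}^{y}$, not objects from the paper.

```python
import numpy as np

# Cauchy-Schwarz check of the slice-average bound |g_hat| <= ||phi||_2 / sqrt(m):
# averaging phi over a slice of measure m is controlled by the global L^2 norm
# divided by sqrt(m), exactly as in the estimate for g_hat(y) above.
n = 10_000
x = (np.arange(n) + 0.5) / n
phi = np.exp(np.sin(7.0 * x))             # stand-in for (Id - L0*)^{-1} c
norm_phi = np.sqrt(np.mean(phi ** 2))     # discrete L^2([0,1]) norm

slice_mask = (x > 0.2) & (x < 0.35)       # a slice F_l^y of measure m ~ 0.15
m = slice_mask.mean()                     # m(F_l^y)
g_hat = phi[slice_mask].mean()            # slice average, as in the proof above

print(abs(g_hat) <= norm_phi / np.sqrt(m))  # True: the bound holds
```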
Now, to verify $\dot{k}$ satisfies (58), we compute, for $\tilde{k}\in
V_{\ker}\cap S_{k_{0},l}$,
$\displaystyle f(\tilde{k})+2\mu\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}$
$\displaystyle=\int_{F_{l}}\left[-((\text{Id}-L_{0}^{*})^{-1}c)(x)f_{0}(y)+2\mu\dot{k}(x,y)\right]\tilde{k}(x,y)dxdy$
$\displaystyle=\int_{F_{l}}\bigg{[}-((\text{Id}-L_{0}^{*})^{-1}c)(x)f_{0}(y)$
$\displaystyle\qquad\quad+f_{0}(y)\left(((\text{Id}-L_{0}^{\ast})^{-1}c)(x)-\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}((\text{Id}-L_{0}^{\ast})^{-1}c)(z)dz\right)\bigg{]}\tilde{k}(x,y)dxdy$
$\displaystyle=-\int_{\Xi(F_{l})}\left[\int_{F^{y}_{l}}\left(f_{0}(y)\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}((\text{Id}-L_{0}^{\ast})^{-1}c)(z)dz\right)\tilde{k}(x,y)dx\right]dy$
$\displaystyle=-\int_{\Xi(F_{l})}\left(f_{0}(y)\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}((\text{Id}-L_{0}^{\ast})^{-1}c)(z)dz\right)\left[\int_{F^{y}_{l}}\tilde{k}(x,y)dx\right]dy$
$\displaystyle=0,$
where the last equality follows from $\tilde{k}\in V_{\ker}\cap S_{k_{0},l}$.
To conclude the verification that $\dot{k}$ satisfies the necessary condition (58), we
need to check that $\mu\neq 0$. Since $M\in L^{2}([0,1]^{2})$, the
necessary condition (59) yields $\mu=\pm\frac{1}{2}\|M\|_{L^{2}([0,1]^{2})}$;
thus, to finish the proof that $\dot{k}$ satisfies both necessary conditions
(58)-(59), we will show that $\|M\|_{L^{2}([0,1]^{2})}\neq 0$. From the
hypotheses on $f_{0}$ and $(\text{Id}-L_{0}^{\ast})^{-1}c$ we conclude that
$\|M\|_{L^{2}([0,1]^{2})}^{2}=\int_{F_{l}}f_{0}(y)^{2}\left(((\text{Id}-L_{0}^{\ast})^{-1}c)(x)-\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}((\text{Id}-L_{0}^{\ast})^{-1}c)(z)dz\right)^{2}dxdy\neq
0.$
Hence, $\mu=\pm\frac{1}{2}\|M\|_{L^{2}([0,1]^{2})}\neq 0$. The sign of $\mu$
is determined by checking the sufficient conditions.
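As a quick numerical sanity check (with a random stand-in for the $L^{2}$ element $M$), the relation between the multiplier and the constraint can be seen directly: choosing $\mu=\frac{1}{2}\|M\|$ makes $\dot{k}=M/(2\mu)$ satisfy $g(\dot{k})=0$.

```python
import numpy as np

# Hedged sketch of the multiplier value: stationarity forces k_dot to be
# proportional to M, and the constraint ||k_dot|| = 1 then pins mu = ±||M||/2.
# M below is a random stand-in vector for the L^2 element M(x, y).
rng = np.random.default_rng(2)
M = rng.standard_normal(500)

mu = 0.5 * np.linalg.norm(M)             # the positive choice mu = ||M||/2
k_dot = M / (2.0 * mu)                   # k_dot = M / (2 mu)
print(np.linalg.norm(k_dot))             # 1.0: the constraint g(k_dot) = 0 holds
```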
We can now verify that $\dot{k}\in V_{\ker}\cap S_{k_{0},l}$. We note from
$M\in L^{2}([0,1]^{2})$ and $\mu\neq 0$ that $\dot{k}\in L^{2}([0,1]^{2})$. By
construction supp$(\dot{k})\subseteq F_{l}$. Finally, we have
$\displaystyle\int\dot{k}(x,y)dx$
$\displaystyle=\frac{1}{2\mu}f_{0}(y)\left(\int_{F_{l}^{y}}((\text{Id}-L_{0}^{\ast})^{-1}c)(x)dx-\int_{F_{l}^{y}}((\text{Id}-L_{0}^{\ast})^{-1}c)(z)dz\frac{\int_{F_{l}^{y}}\mathbf{1}_{F_{l}}(x,y)dx}{m(F_{l}^{y})}\right)$
$\displaystyle=0.$
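The mean-subtraction structure behind this last computation can be illustrated with a small discrete sketch; the functions and feasible set below are hypothetical stand-ins for $(\text{Id}-L_{0}^{\ast})^{-1}c$, $f_{0}$ and $F_{l}$, and the row integrals of the resulting $\dot{k}$ vanish to machine precision.

```python
import numpy as np

# Discrete sketch of the mean-subtraction structure of k_dot: on each slice
# F_l^y, subtracting the slice average of phi := (Id - L0*)^{-1} c forces the
# row integral int k_dot(x, y) dx to vanish, which is the V_ker condition.
rng = np.random.default_rng(1)
n = 50
x = (np.arange(n) + 0.5) / n
phi = np.cos(3.0 * x)                     # stand-in for (Id - L0*)^{-1} c
f0 = 1.0 + 0.2 * np.sin(2.0 * np.pi * x)  # stand-in invariant density values
mask = rng.random((n, n)) < 0.6           # indicator of the feasible set F_l

k_dot = np.zeros((n, n))
for j in range(n):                        # column j plays the role of fixed y
    rows = mask[:, j]                     # the slice F_l^y
    if not rows.any():
        continue
    slice_avg = phi[rows].mean()          # (1/m(F_l^y)) int_{F_l^y} phi(z) dz
    k_dot[rows, j] = f0[j] * (phi[rows] - slice_avg)

row_integrals = k_dot.sum(axis=0) / n     # approximates int k_dot(x, y) dx
print(np.max(np.abs(row_integrals)))      # ~ machine precision
```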
Sufficient conditions: We want to show that $\dot{k}$ in (28) is a solution to
the optimization problem (26)-(27) by checking that it satisfies the second-
order sufficient conditions. We first demonstrate the set of Lagrange
multipliers $\Lambda(\dot{k})$ (in Definition 3.8, §3.1 [6]) is not empty in
our setting; this will enable us to use the second-order sufficient conditions
of Lemma 3.65 [6]. Note that in terms of the notation used in [6] versus our
notation, $Q=X=V_{\ker}\cap S_{k_{0},l}$, $x_{0}=\dot{k}$,
$Y^{\ast}={\mathbb{R}}$, $G(x_{0})=g(\dot{k})$, $K=\\{0\\}$,
$N_{K}(G(x_{0}))={\mathbb{R}}$, $T_{K}(G(x_{0}))=\\{0\\}$ and
$N_{Q}(x_{0})=\\{0\\}$ (since $Q=X$, see discussion in §3.1 following
Definition 3.8). Thus, to show that $\Lambda(\dot{k})$ is not empty, we need
to show that $\dot{k}$ and $\mu$ satisfy
$D_{\dot{k}}\mathcal{L}(\dot{k},\mu)\dot{k}=0,\;g(\dot{k})=0,\;\mu\in\\{0\\}^{-},\;\mu
g(\dot{k})=0,$ (61)
where $\\{0\\}^{-}:=\\{a\in\mathbb{R}:ax\leq 0\ \forall
x\in\\{0\\}\\}=\mathbb{R}$ (this simplification of conditions (3.16) in [6]
follows from the discussion following Definition 3.8 in §3.1 and the fact that
$\\{0\\}$ is a convex cone). Since the second condition in (61) implies the
fourth, and since $\mu\in{\mathbb{R}}$, we only need to check the first two
equalities in (61). However, these two conditions are implied by the first-order
necessary conditions. Hence, $\Lambda(\dot{k})$ is not empty and thus,
to show that $\dot{k}$ is a solution to (26)-(27), we need to show that it
satisfies the following second-order conditions (see Lemma 3.65 [6]): there exist
constants $\nu>0$, $\eta>0$ and $\beta>0$ such that
$\sup_{|\mu|\leq\nu,\
\mu\in\Lambda(\dot{k})}D_{\dot{k}\dot{k}}^{2}\mathcal{L}(\dot{k},\mu)(\tilde{k},\tilde{k})\geq\beta\|\tilde{k}\|_{L^{2}([0,1]^{2})}^{2},\
\forall\tilde{k}\in C_{\eta}(\dot{k}),$ (62)
where $C_{\eta}(\dot{k}):=\big{\\{}v\in V_{\ker}\cap
S_{k_{0},l}:|2\langle\dot{k},v\rangle_{V_{\ker\cap
S_{k_{0},l}}}|\leq\eta\|v\|_{V_{\ker}\cap S_{k_{0},l}}\text{ and
}f(v)\leq\eta\|v\|_{V_{\ker}\cap S_{k_{0},l}}\big{\\}}$ is the approximate
critical cone (see equation (3.131) in §3.3 [6]). Since
$D_{\dot{k}}\mathcal{L}(\dot{k},\mu)\tilde{k}=f(\tilde{k})+2\mu\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}$
and $\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}$ is linear in
$\dot{k}$, we have that
$D_{\dot{k}\dot{k}}^{2}\mathcal{L}(\dot{k},\mu)(\tilde{k},\tilde{k})=2\mu\langle\tilde{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}$.
Thus, we conclude that the second-order condition (62) holds with $\mu>0$,
$\nu=|\mu|=\frac{1}{2}\|M\|_{V_{\ker}\cap S_{k_{0},l}}$, $\beta=2\mu$ and
$\eta=\max\big{\\{}2\|\dot{k}\|_{V_{\ker}\cap
S_{k_{0},l}},\|c\|_{2}\|f_{0}\|_{2}\|(\text{Id}-L_{0})^{-1}\|_{V\rightarrow
V}\big{\\}}$. Since $\dot{k}$ satisfies the necessary conditions (58) and (59)
with $\mu>0$, we conclude that $\dot{k}$ is a solution to the optimization
problem (26)-(27).
_Uniqueness of the solution:_ The set $P_{l}=V_{\ker}\cap S_{k_{0},l}\cap
B_{1}$ is a closed (Lemma 5.1), bounded, strictly convex set, containing
$\dot{k}=0$. The objective $\mathcal{J}(\dot{k})=\langle c,R(\dot{k})\rangle$
is continuous (since $\mathcal{J}$ is linear and $R$ is continuous (see
comment following (14))) and not uniformly vanishing (Lemma 5.2). Therefore by
Propositions 4.1 and 4.3, $\dot{k}$ is the unique optimum.
$L^{\infty}$ boundedness of the solution: Suppose that $c\in W$ and $k_{0}\in
L^{\infty}([0,1]^{2})$. From $L_{0}f_{0}=f_{0}$ and $k_{0}\in
L^{\infty}([0,1]^{2})$, we have by (7) that $f_{0}\in L^{\infty}$. Let
$V_{1}:=\\{f\in L^{1}:\int f\ dm=0\\}$. We would like to show that
$(\text{Id}-L_{0})^{-1}:V_{1}\rightarrow V_{1}$ is bounded. To obtain this, we
first need the exponential contraction of $L_{0}$ on $V_{1}$. Since $L_{0}$ is
integral preserving and compact on $L^{1}$, from the argument in the proof of
Theorem 2.2 we only need to verify the $L^{1}$ version of assumption $(A1)$ on
$V_{1}$. To verify this, we note that for $h\in V_{1}$, we have
$\|L_{0}h\|_{2}\leq\|L_{0}h\|_{\infty}\leq\|k_{0}\|_{L^{\infty}([0,1]^{2})}\|h\|_{1}$
and therefore, $L_{0}h\in V$ since $L_{0}$ preserves the integral. Thus, for
any $h\in V_{1}$,
$\lim_{n\rightarrow\infty}\|L_{0}^{n}h\|_{1}\leq\lim_{n\rightarrow\infty}\|L_{0}^{n-1}(L_{0}h)\|_{2}=0$
since $L_{0}$ satisfies $(A1)$ on $V$. Hence, the $L^{1}$ version of $(A1)$
holds and $L_{0}$ has exponential contraction on $V_{1}$. We then have
$\displaystyle\|(\text{Id}-L_{0})^{-1}\|_{V_{1}\rightarrow V_{1}}$
$\displaystyle\leq\|\text{Id}\|_{V_{1}\rightarrow
V_{1}}+\bigg{\|}\sum_{n=1}^{\infty}L_{0}^{n}\bigg{\|}_{V_{1}\rightarrow
V_{1}}$ (63) $\displaystyle=1+\sup_{\begin{subarray}{c}f\in V_{1}\\\
\|f\|_{1}=1\end{subarray}}\bigg{\|}\sum_{n=1}^{\infty}L_{0}^{n}f\bigg{\|}_{1}$
$\displaystyle\leq 1+\sup_{\begin{subarray}{c}f\in V_{1}\\\
\|f\|_{1}=1\end{subarray}}\sum_{n=1}^{\infty}Ce^{\lambda n}\|f\|_{1}$
$\displaystyle=1+\sum_{n=1}^{\infty}Ce^{\lambda n}<\infty,$
where the last inequality follows from $\lambda<0$; thus,
$(\text{Id}-L_{0})^{-1}:V_{1}\rightarrow V_{1}$ is bounded.
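The Neumann-series bound (63) can be illustrated on a finite-dimensional stand-in: for a matrix with spectral radius below one (playing the role of $L_{0}$ restricted to $V_{1}$), $(\text{Id}-L)^{-1}$ agrees with the truncated series $\sum_{n}L^{n}$, and the geometric bound $1+\sum_{n\geq 1}Ce^{\lambda n}$ is finite for $\lambda<0$. The constants and matrix below are arbitrary illustrations.

```python
import numpy as np

# Finite-dimensional stand-in for the Neumann-series bound (63): L plays the
# role of L0 restricted to V_1, under the exponential contraction
# ||L^n|| <= C e^{lambda n} with lambda < 0.
C, lam = 2.0, -0.5
series_bound = 1.0 + C * np.exp(lam) / (1.0 - np.exp(lam))  # 1 + sum C e^{lam n}

L = np.array([[0.3, 0.1], [0.0, 0.2]])    # spectral radius < 1
inv = np.linalg.inv(np.eye(2) - L)
neumann = sum(np.linalg.matrix_power(L, k) for k in range(200))  # truncated sum

print(np.max(np.abs(inv - neumann)))      # ~ 0: (Id - L)^{-1} = sum_n L^n
print(series_bound)                       # finite since lam < 0
```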
Next we would like to find the subspace on which the operator
$(\text{Id}-L_{0}^{*})^{-1}$ is bounded. We will replicate the result of Lemma
A.1; however, $(\text{Id}-L_{0})^{-1}$ now acts on $L^{1}$, so we note
that for a subspace $\mathcal{S}$ of $L^{1}$, we have that
$\mathcal{S}^{\perp}:=\bigg{\\{}h\in L^{\infty}:\int h(x)w(x)dx=0\ \forall
w\in\mathcal{S}\bigg{\\}},$ (64)
where we are using the fact that $(L^{1})^{*}=L^{\infty}$. Also,
$\mathcal{S}^{\perp}$ is a closed subspace of $L^{\infty}$ (see III-§1.4,
[29]).
Now, as in the proof of Lemma A.1, we have $\ker(\text{Id}-L_{0})=$
span$\\{f_{0}\\}$ and ran($\text{Id}-L_{0})=V_{1}$. We also have
$\text{ran}((\text{Id}-L_{0})^{*})=$ span$\\{f_{0}\\}^{\perp}=\\{h\in L^{\infty}:\int
h(x)f_{0}(x)dx=0\\}=:W$ and $\ker((\text{Id}-L_{0})^{*})=V_{1}^{\perp}=\\{h\in
L^{\infty}:\int h(x)w(x)dx=0\ \forall\ w\in V_{1}\\}$. Next, for $h\in W$, we
have
$\int(L_{0}^{*}h)(x)f_{0}(x)dx=\int h(x)(L_{0}f_{0})(x)dx=\int
h(x)f_{0}(x)dx=0;$
thus, $(\text{Id}-L_{0}^{*})(W)\subset W$. We again, as in Lemma A.1, apply
the Inverse Mapping Theorem to prove that
($\text{Id}-L_{0}^{*})^{-1}:W\rightarrow W$ is bounded. From (7), and the
triangle inequality, the operator $\text{Id}-L_{0}^{*}:W\rightarrow W$ is
bounded. Noting that $V_{1}$ is a closed co-dimension 1 subspace of $L^{1}$,
we have codim$(V_{1})=$ dim$(V_{1}^{\perp})$ (see Lemma 1.40 III-§1.8 [29]);
hence, dim$(\ker(\text{Id}-L_{0}^{*}))=$ dim$(V_{1}^{\perp})=$
codim$(V_{1})=1$ and therefore, $1$ is a geometrically simple eigenvalue of
$L_{0}^{*}$. Thus, $\ker(\text{Id}-L_{0}^{*})=$ span$\\{\mathbf{1}\\}$ because
$L_{0}^{*}\mathbf{1}=\mathbf{1}$. Since $\int f_{0}\ dm=1$,
$\mathbf{1}\not\in$ span$\\{f_{0}\\}^{\perp}$ and so, by the Fredholm
alternative, $\text{Id}-L_{0}^{*}$ is a bijection on $W$. Hence, by the
Inverse Mapping Theorem, $(\text{Id}-L_{0}^{*})^{-1}$ is bounded on $W$. Since
$c\in W$, we have $\|(\text{Id}-L_{0}^{*})^{-1}c\|_{\infty}<\infty$.
To conclude the proof, we now show that
$\hat{g}(y):=\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}((\text{Id}-L_{0}^{\ast})^{-1}c)(z)dz$
is in $L^{\infty}$. We compute
$|\hat{g}(y)|=\bigg{|}\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}((\text{Id}-L_{0}^{\ast})^{-1}c)(z)dz\bigg{|}\leq\|(\text{Id}-L_{0}^{*})^{-1}c\|_{\infty}.$
Since $(\text{Id}-L_{0}^{*})^{-1}c\in L^{\infty}$, we conclude that
$\hat{g}\in L^{\infty}$; thus, $\dot{k}\in L^{\infty}([0,1]^{2})$.
## Appendix B Proof of Theorem 5.6
###### Proof.
The optimization problem is very similar to that considered in Theorem 5.4;
thus, we will refer to the proof of that theorem with the following
modifications.
Consider the Lagrangian function
$\mathcal{L}(\dot{k},\mu):=f(\dot{k})+\mu g(\dot{k}),$
where, in this setting, we have
$f(\dot{k})=\langle\dot{k},E\rangle_{L^{2}([0,1]^{2},{\mathbb{R}})}$ and
$g(\dot{k})=\|\dot{k}\|_{L^{2}([0,1]^{2},{\mathbb{R}})}^{2}-1$. Thus, for the
necessary conditions of the Lagrange multiplier method to be satisfied, we
need that
$f(\tilde{k})+2\mu\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2})}=\langle\tilde{k},E\rangle_{L^{2}([0,1]^{2},{\mathbb{R}})}+2\mu\langle\dot{k},\tilde{k}\rangle_{L^{2}([0,1]^{2},{\mathbb{R}})}=0$
(65)
for all $\tilde{k}\in V_{\ker}\cap S_{k_{0},l}$ and
$g(\dot{k})=0.$ (66)
We claim that
$\dot{k}(x,y)=-\mathbf{1}_{F_{l}}(x,y)\frac{1}{2\mu}\left(E(x,y)-\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}E(z,y)dz\right)$
(67)
satisfies the necessary condition (65), and lies in $V_{\ker}\cap
S_{k_{0},l}$. Before we verify this, we will show that
$\displaystyle M(x,y):=\mathbf{1}_{F_{l}}(x,y)(E(x,y)-h(y)),$
where $h(y):=\frac{1}{m(F^{y}_{l})}\int_{F^{y}_{l}}E(x,y)dx$, is in
$L^{2}([0,1]^{2})$. Since $E\in L^{2}([0,1]^{2})$, we just need to show that
$\mathbf{1}_{F_{l}}(x,y)h(y)$ is in $L^{2}([0,1]^{2})$. We have
$\displaystyle\int\int\mathbf{1}_{F_{l}}(x,y)h(y)^{2}dxdy=\int_{\Xi(F_{l})}\int_{F_{l}^{y}}h(y)^{2}dxdy=\int_{\Xi(F_{l})}m(F_{l}^{y})h(y)^{2}dy.$
Substituting (18) into $h$, the terms in $h(y)^{2}$ are a linear combination
of functions of the form
$\tilde{g}_{i_{1}}(y)\tilde{g}_{i_{2}}(y)f_{i_{3}}(y)f_{i_{4}}(y)$,
$i_{1},\ldots,i_{4}\in\\{1,\ldots,4\\}$ where
$f_{j}=\Re(\hat{e}),\Re(e),\Im(\hat{e})$ or $\Im(e)$, $j=1,\ldots,4$,
respectively, and
$\tilde{g}_{j}(y)=\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}f_{j}(x)dx$,
$j=1,\ldots,4$. Thus, to show $\mathbf{1}_{F_{l}}(x,y)h(y)$ is in
$L^{2}([0,1]^{2})$ (and therefore $M\in L^{2}([0,1]^{2})$), we need to bound
$\displaystyle
I:=\int_{\Xi(F_{l})}m(F_{l}^{y})|\tilde{g}_{i_{1}}(y)||\tilde{g}_{i_{2}}(y)||f_{i_{3}}(y)||f_{i_{4}}(y)|dy.$
We note that
$\displaystyle|\tilde{g}_{j}(y)|\leq\frac{1}{m(F_{l}^{y})}\int|\mathbf{1}_{F_{l}^{y}}(x)f_{j}(x)|dx\leq\frac{1}{m(F_{l}^{y})}\|\mathbf{1}_{F_{l}^{y}}\|_{2}\|f_{j}\|_{2}=\frac{\|f_{j}\|_{2}}{\sqrt{m(F_{l}^{y})}}.$
Thus, we have
$\displaystyle I$
$\displaystyle\leq\int_{\Xi(F_{l})}m(F_{l}^{y})\frac{\|f_{i_{1}}\|_{2}}{\sqrt{m(F_{l}^{y})}}\frac{\|f_{i_{2}}\|_{2}}{\sqrt{m(F_{l}^{y})}}|f_{i_{3}}(y)||f_{i_{4}}(y)|dy$
$\displaystyle=\|f_{i_{1}}\|_{2}\|f_{i_{2}}\|_{2}\int_{\Xi(F_{l})}|f_{i_{3}}(y)f_{i_{4}}(y)|dy\leq\|f_{i_{1}}\|_{2}\|f_{i_{2}}\|_{2}\|f_{i_{3}}\|_{2}\|f_{i_{4}}\|_{2}.$
Since $f_{j}\in L^{2}$, $j=1,\ldots,4$, we conclude that $M\in
L^{2}([0,1]^{2})$.
Now, to verify $\dot{k}$ satisfies the first necessary condition, we compute,
for $\tilde{k}\in V_{\ker}\cap S_{k_{0},l}$, the central term in (65)
$\displaystyle\langle\tilde{k},E+2\mu\dot{k}\rangle_{L^{2}([0,1]^{2},{\mathbb{R}})}$
$\displaystyle=\int_{F_{l}}\tilde{k}(x,y)\left(E(x,y)+2\mu\dot{k}(x,y)\right)dxdy$
$\displaystyle=\int_{\Xi(F_{l})}\int_{F_{l}^{y}}\tilde{k}(x,y)\left(E(x,y)-E(x,y)+h(y)\right)dxdy$
$\displaystyle=\int_{\Xi(F_{l})}\left[\int_{F_{l}^{y}}\tilde{k}(x,y)dx\right]h(y)dy=0,$
where the last equality follows from $\tilde{k}\in V_{\ker}\cap S_{k_{0},l}$. To
conclude the verification that $\dot{k}$ satisfies the necessary condition (65), we
need to check that $\mu\neq 0$. Since $M\in L^{2}([0,1]^{2})$, the
necessary condition (66) yields
$\mu=\pm\frac{1}{2}\|M\|_{L^{2}([0,1]^{2},{\mathbb{R}})}$; thus, to finish the
proof that $\dot{k}$ satisfies both necessary conditions (65)-(66), we will
show that $\|M\|_{L^{2}([0,1]^{2},{\mathbb{R}})}\neq 0$. From the hypotheses
on $E$ we conclude
$\|M\|_{L^{2}([0,1]^{2},{\mathbb{R}})}^{2}=\int_{F_{l}}\left(E(x,y)-\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}E(z,y)dz\right)^{2}dxdy\neq
0.$
Hence $\mu=\pm\frac{1}{2}\|M\|_{L^{2}([0,1]^{2},{\mathbb{R}})}\neq 0$. The
sign of $\mu$ is determined by checking the sufficient conditions.
We can now verify that $\dot{k}\in V_{\ker}\cap S_{k_{0},l}$. We note from
$M\in L^{2}([0,1]^{2})$ and $\mu\neq 0$ that $\dot{k}\in L^{2}([0,1]^{2})$. By
construction, supp$(\dot{k})\subseteq F_{l}$. Finally, we have
$\displaystyle\int\dot{k}(x,y)dx$
$\displaystyle=-\frac{1}{2\mu}\left(\int_{F_{l}^{y}}E(x,y)dx-\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}E(z,y)dz\int\mathbf{1}_{F_{l}}(x,y)dx\right)$
$\displaystyle=-\frac{1}{2\mu}\left(\int_{F_{l}^{y}}E(x,y)dx-\frac{1}{m(F_{l}^{y})}\int_{F_{l}^{y}}E(z,y)dz\
m(F_{l}^{y})\right)$ $\displaystyle=0.$
For the sufficient conditions, we note that in this setting
$D_{\dot{k}\dot{k}}^{2}\mathcal{L}(\dot{k},\mu)(\tilde{k},\tilde{k})$ is
the same as in the proof of Theorem 5.4 (since the objectives considered in
both this and the other optimization problem are linear). Hence, the second
order sufficient conditions are satisfied with $\mu>0$. Thus, with
$2\mu=\|M\|_{L^{2}([0,1]^{2},{\mathbb{R}})}$, (31) satisfies the necessary and
sufficient conditions. Next, we note that the set $P_{l}=V_{\ker}\cap
S_{k_{0},l}\cap B_{1}$ is a closed (Lemma 5.1), bounded, strictly convex set,
containing $\dot{k}=0$. The objective
$\mathcal{J}(\dot{k})=\langle\dot{k},E\rangle_{L^{2}([0,1]^{2},{\mathbb{R}})}$
is continuous and not uniformly vanishing (Lemma 5.2). Therefore by
Propositions 4.1 and 4.3, (31) is the unique solution to the optimization
problem (29)-(30).
We finally show that $E\in L^{\infty}([0,1]^{2},{\mathbb{R}})$, supposing now that
$k_{0}\in L^{\infty}([0,1]^{2},{\mathbb{R}})$. Recall that
$E(x,y)=\big{(}\Re(\hat{e})(x)\Re(e)(y)+\Im(\hat{e})(x)\Im(e)(y)\big{)}\Re(\lambda_{0})+\big{(}\Im(\hat{e})(x)\Re(e)(y)-\Re(\hat{e})(x)\Im(e)(y)\big{)}\Im(\lambda_{0}).$
Since $L_{0}e=\lambda_{0}e$ and $L_{0}^{*}\hat{e}=\lambda_{0}\hat{e}$, we have
from inequality (7) that $e,\hat{e}\in L^{\infty}([0,1],{\mathbb{C}})$ since
$k_{0}\in L^{\infty}([0,1]^{2},{\mathbb{R}})$. Hence, we have that
$\Re(e),\Re(\hat{e}),\Im(e),\Im(\hat{e})\in L^{\infty}([0,1],{\mathbb{R}})$
and thus $E\in L^{\infty}([0,1]^{2},{\mathbb{R}})$.
## Appendix C Upper bound for the norm of the reflection operator
###### Lemma C.1.
Let $P_{\pi}$ be as in (34) and assume that the support of $f\in
L^{2}({\mathbb{R}})$ is contained in $N$ intervals of lengths
$a_{j},j=1,\ldots,N$. Then,
$\|P_{\pi}f\|_{L^{2}([0,1])}\leq\left(\sum_{j=1}^{N}\lceil
a_{j}+1\rceil\right)\|f\|_{L^{2}({\mathbb{R}})}$, where $\lceil x\rceil$
denotes the smallest integer greater than or equal to $x$.
###### Proof.
Using translation invariance of Lebesgue measure, and the fact that for each
fixed $x$ there are at most $\sum_{j=1}^{N}\lceil a_{j}+1\rceil$ nonzero
evaluations of $f$ in the infinite sum below,
$\displaystyle\int_{0}^{1}(P_{\pi}f)(x)^{2}\ dx$ $\displaystyle=$
$\displaystyle\int_{0}^{1}\left(\sum_{i\in
2\mathbb{Z}}(f(i+x)+f(i-x))\right)^{2}\ dx$ $\displaystyle\leq$
$\displaystyle\int_{-\infty}^{\infty}\left(\sum_{j=1}^{N}\lceil
a_{j}+1\rceil\right)^{2}f(x)^{2}\ dx$ $\displaystyle=$
$\displaystyle\left(\sum_{j=1}^{N}\lceil
a_{j}+1\rceil\right)^{2}\|f\|_{L^{2}}^{2}.$
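A numerical sketch of the lemma in the single-interval case $N=1$ may be helpful; the function $f$ below is an arbitrary stand-in supported on $[0,a]$, and the computed fold is compared against the bound $\lceil a+1\rceil\|f\|_{L^{2}({\mathbb{R}})}$.

```python
import numpy as np

# Lemma C.1 with one support interval (N = 1): P_pi folds f through the
# reflections x -> i + x and x -> i - x, i in 2Z, onto [0, 1], and the lemma
# bounds ||P_pi f||_{L^2([0,1])} by ceil(a + 1) * ||f||_{L^2(R)}.
a = 2.3                                       # length of the support interval

def f(t):
    t = np.asarray(t, dtype=float)
    return np.where((t >= 0.0) & (t <= a), np.sin(5.0 * t) + 1.0, 0.0)

n = 4000
x = (np.arange(n) + 0.5) / n                  # quadrature nodes on [0, 1]
Pf = sum(f(i + x) + f(i - x) for i in range(-10, 12, 2))  # i in 2Z covers supp f

lhs = np.sqrt(np.mean(Pf ** 2))               # ||P_pi f||_{L^2([0,1])}
t = np.linspace(-1.0, a + 1.0, 200001)
norm_f = np.sqrt(np.sum(f(t) ** 2) * (t[1] - t[0]))       # ||f||_{L^2(R)}
rhs = np.ceil(a + 1.0) * norm_f
print(lhs <= rhs)                             # True: the bound holds
```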
## Appendix D Proof of Theorem 7.4
###### Proof.
The proof will follow the structure of the proof of Theorem 5.4. To this end,
we consider the following Lagrangian function
$\mathcal{L}(\dot{T},\mu):=f(\dot{T})+\mu g(\dot{T}),$
where
$f(\dot{T}):=-\big{\langle}c,\widehat{R}(\dot{T})\big{\rangle}_{L^{2}([0,1],{\mathbb{R}})},$
$g(\dot{T}):=\|\dot{T}\|^{2}_{2}-1$ and $\dot{T}\in S_{T_{0},\ell}$.
Necessary conditions: We want to find $\dot{T}$ and $\mu$ that satisfy the
first-order necessary conditions:
$\displaystyle g(\dot{T})=0,\qquad
D_{\dot{T}}\mathcal{L}(\dot{T},\mu)\tilde{T}=0\text{ for all
}\tilde{T}\in S_{T_{0},\ell},$
where $D_{\dot{T}}\mathcal{L}(\dot{T},\mu)\in\mathcal{B}(L^{2},{\mathbb{R}})$
is the Fréchet derivative with respect to the variable $\dot{T}$. Since $f$ is
linear, we have $(D_{\dot{T}}f)\tilde{T}=f(\tilde{T})$. Also, we have that
$(D_{\dot{T}}g)\tilde{T}=2\langle\dot{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}$
(following the computation in the proof of Theorem 5.4). Thus, for the
necessary conditions of the Lagrange multiplier method to be satisfied, we
need that
$D_{\dot{T}}\mathcal{L}(\dot{T},\mu)\tilde{T}=(D_{\dot{T}}f)\tilde{T}+\mu(D_{\dot{T}}g)\tilde{T}=f(\tilde{T})+2\mu\langle\dot{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}=0$
(68)
for all $\tilde{T}\in S_{T_{0},\ell}$ and
$g(\dot{T})=0.$ (69)
Following the proof of Theorem 5.4, we will solve for $\dot{T}$ by rewriting
$f(\tilde{T})+2\mu\langle\dot{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}$
as an inner product on $L^{2}$. To this end, we have that
$\displaystyle f(\tilde{T})+2\mu\langle\dot{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}$
(70)
$\displaystyle=\bigg{\langle}c,(\text{Id}-L_{0})^{-1}\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\tilde{T}(y)f_{0}(y)dy\bigg{\rangle}_{L^{2}([0,1],{\mathbb{R}})}+2\mu\langle\dot{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}$
$\displaystyle=\bigg{\langle}(\text{Id}-L_{0}^{*})^{-1}c,\int_{0}^{1}\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\tilde{T}(y)f_{0}(y)dy\bigg{\rangle}_{L^{2}([0,1],{\mathbb{R}})}+\langle
2\mu\dot{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}$
$\displaystyle=\int_{0}^{1}\int_{0}^{1}((\text{Id}-L_{0}^{*})^{-1}c)(x)\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\tilde{T}(y)f_{0}(y)dydx+\langle
2\mu\dot{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}$
$\displaystyle=\int_{0}^{1}\left[\int_{0}^{1}((\text{Id}-L_{0}^{*})^{-1}c)(x)\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)dxf_{0}(y)+2\mu\dot{T}(y)\right]\tilde{T}(y)dy$
$\displaystyle=\int_{0}^{1}\left[f_{0}(y)\mathcal{G}((\text{Id}-L_{0}^{*})^{-1}c)(y)+2\mu\dot{T}(y)\right]\tilde{T}(y)dy.$
We note that since $c\in$ span$\\{f_{0}\\}^{\perp}$, we have from Lemma A.1
that $(\text{Id}-L_{0}^{*})^{-1}c\in L^{2}$ and the above expression is well
defined. Now, from (70), we have that
$f(\tilde{T})+2\mu\langle\dot{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}=\langle
f_{0}\
\mathcal{G}((\text{Id}-L_{0}^{*})^{-1}c)+2\mu\dot{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}$.
From this we can conclude that finding $\dot{T}$ and $\mu$ that satisfy (68)
and (69) reduces to finding $\dot{T}\in S_{T_{0},\ell}$ and
$\mu\in{\mathbb{R}}$ that satisfy $\langle f_{0}\
\mathcal{G}((\text{Id}-L_{0}^{*})^{-1}c)+2\mu\dot{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}=0$
for all $\tilde{T}\in S_{T_{0},\ell}$ and (69). Using the non-degeneracy of
the inner product, we find that
$\dot{T}=-\frac{M}{2\mu},$
where
$M=\mathbf{1}_{\widetilde{F}_{\ell}}f_{0}\
\mathcal{G}((\text{Id}-L_{0}^{*})^{-1}c).$ (71)
To conclude that the above $\dot{T}$ satisfies the necessary condition (68),
we need to check that $\mu\neq 0$. Since $M\in L^{\infty}$ (see the
_Boundedness of the solution_ paragraph below), the necessary condition (69)
yields $\mu=\pm\frac{1}{2}\|M\|_{2}$; thus, to finish the proof that $\dot{T}$
satisfies both necessary conditions (68)-(69), we will show that
$\|M\|_{2}\neq 0$. From the hypotheses on $f_{0}$ and
$(\text{Id}-L_{0}^{*})^{-1}c$, and recalling that
$\mathcal{P}(x,y)=P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)(x)$, we
conclude that
$\|M\|_{2}^{2}=\int_{\widetilde{F}_{\ell}}f_{0}(y)^{2}\left(\int\mathcal{P}(x,y)((\text{Id}-L_{0}^{*})^{-1}c)(x)dx\right)^{2}dy\neq
0.$
Hence $\mu=\pm\frac{1}{2}\|M\|_{2}\neq 0$; the sign of $\mu$ is determined by
checking the sufficient conditions. We thus have verified that $\dot{T}\in
S_{T_{0},\ell}$ because $\dot{T}\in L^{2}$ and the term
$\mathbf{1}_{\widetilde{F}_{\ell}}$ in (71) guarantees
supp$(\dot{T})\subseteq\widetilde{F}_{\ell}$.
Sufficient conditions: As in the proof of Theorem 5.4, we will show that
$\dot{T}$ in (49) is the solution to the optimization problem (46)-(47) by
checking that it satisfies the second-order sufficient conditions. We first
note that in this setting we have $Q=X=S_{T_{0},\ell}$, $x_{0}=\dot{T}$,
$Y^{*}={\mathbb{R}}$, $G(x_{0})=g(\dot{T})$, $K=\\{0\\}$,
$N_{K}(G(x_{0}))={\mathbb{R}}$, $T_{K}(G(x_{0}))=\\{0\\}$ and
$N_{Q}(x_{0})=\\{0\\}$. Thus, to show that $\Lambda(\dot{T})$ is not empty, we
need to show that $\dot{T}$ and $\mu$ satisfy
$D_{\dot{T}}\mathcal{L}(\dot{T},\mu)\dot{T}=0,\;g(\dot{T})=0,\;\mu\in\\{0\\}^{-},\;\mu
g(\dot{T})=0,$ (72)
where $\\{0\\}^{-}:=\\{\alpha\in\mathbb{R}:\alpha x\leq 0\ \forall
x\in\\{0\\}\\}=\mathbb{R}$. Following the argument in the proof of Theorem
5.4, it is easily verifiable that $\Lambda(\dot{T})$ is not empty. Thus, to
show that $\dot{T}$ is a solution to (46)-(47), we need to show that it
satisfies the following second-order conditions: there exist constants
$\nu>0$, $\eta>0$ and $\beta>0$ such that
$\sup_{|\mu|\leq\nu,\
\mu\in\Lambda(\dot{T})}D^{2}_{\dot{T}\dot{T}}\mathcal{L}(\dot{T},\mu)(\tilde{T},\tilde{T})\geq\beta\|\tilde{T}\|^{2}_{2},\
\forall\ \tilde{T}\in C_{\eta}(\dot{T}),$ (73)
where $C_{\eta}(\dot{T}):=\big{\\{}v\in
S_{T_{0},\ell}:|2\langle\dot{T},v\rangle_{S_{T_{0},\ell}}|\leq\eta\|v\|_{S_{T_{0},\ell}}\text{
and }f(v)\leq\eta\|v\|_{S_{T_{0},\ell}}\big{\\}}$ is the approximate critical
cone. Since
$D_{\dot{T}}\mathcal{L}(\dot{T},\mu)\tilde{T}=f(\tilde{T})+2\mu\langle\dot{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}$
and $\langle\dot{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}$ is linear in
$\dot{T}$, we have that
$D^{2}_{\dot{T}\dot{T}}\mathcal{L}(\dot{T},\mu)(\tilde{T},\tilde{T})=2\mu\langle\tilde{T},\tilde{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}$.
Thus, we conclude that the second-order condition (73) holds with $\mu>0$,
$\nu=|\mu|=\frac{1}{2}\|M\|_{S_{T_{0},\ell}}$, $\beta=2\mu$ and
$\eta=\max\big{\\{}2\|\dot{T}\|_{S_{T_{0},\ell}},\|M\|_{S_{T_{0},\ell}}\big{\\}}$.
Since $\dot{T}$ satisfies the necessary conditions (68) and (69), with
$\mu>0$, $\dot{T}$ is a solution to the optimization problem (46)-(47). Using
Lemma 7.2 and Proposition 7.3, we conclude that this solution is unique.
Boundedness of the solution: We have that
$\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)\in
L^{\infty}([0,1]^{2})$ (see proof of Lemma 6.5). From inequality (7), with the
kernel $\left(P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)\right)(x)$,
we have that $\mathcal{G}h\in L^{\infty}$ for any $h\in L^{2}$. Since
$f_{0}\in L^{\infty}$, we have that $f_{0}\
\mathcal{G}((\text{Id}-L_{0}^{*})^{-1}c)\in L^{\infty}$. Thus,
$M=\mathbf{1}_{\widetilde{F}_{\ell}}f_{0}\
\mathcal{G}((\text{Id}-L_{0}^{*})^{-1}c)\in L^{\infty}$ and therefore
$\dot{T}\in L^{\infty}$.
## Appendix E Proof of Theorem 7.7
###### Proof.
We use arguments similar to those in the proofs of Theorems 7.4 and 5.6. Let
$\widehat{E}$ be as in (51). For the necessary conditions, we will need that
$\langle\tilde{T},\widehat{E}+2\mu\dot{T}\rangle_{L^{2}([0,1],{\mathbb{R}})}=0$
(74)
for all $\tilde{T}\in S_{T_{0},\ell}$ and
$\|\dot{T}\|_{2}^{2}=1.$ (75)
Thus, from (74) and the nondegeneracy of the inner product we have that
$\dot{T}=-\mathbf{1}_{\widetilde{F}_{\ell}}\frac{\widehat{E}}{2\mu}$. To
conclude that $\dot{T}$ satisfies the necessary condition (74), we need to
check that $\mu\neq 0$. Since $\widehat{E}\in L^{2}$ (as it is essentially
bounded, see Proposition 7.5), the necessary condition (75) yields
$\mu=\pm\frac{1}{2}\|\widehat{E}\|_{2}$. Thus, to finish the proof that
$\dot{T}$ satisfies both necessary conditions (74)-(75), we will show that
$\|\widehat{E}\|_{2}\neq 0$. From the hypotheses on $E$, and recalling that
$\mathcal{P}(x,y)=P_{\pi}\left(\tau_{-T_{0}(y)}\frac{d\rho}{dx}\right)(x)$, we
conclude that
$\|\widehat{E}\|_{2}^{2}=\int_{\widetilde{F}_{\ell}}\left(\int_{0}^{1}\mathcal{P}(x,y)E(x,y)dx\right)^{2}dy\neq
0.$
Hence $\mu=\pm\frac{1}{2}\|\widehat{E}\|_{2}\neq 0$ and
$\dot{T}=\mp\mathbf{1}_{\widetilde{F}_{\ell}}\frac{\widehat{E}}{\big{\|}\widehat{E}\big{\|}_{2}}$;
the sign of $\mu$ is determined by checking the sufficient conditions. Clearly
$\dot{T}\in L^{2}$ and has support contained in $\widetilde{F}_{\ell}$; thus
$\dot{T}\in S_{T_{0},\ell}$. For the sufficient conditions, as in the proof of
Theorem 7.4, since the objective is linear, we require that $\mu>0$. Using
Lemma 7.2 and Proposition 7.6 we conclude that (55) is the unique solution.
The essential boundedness of $\dot{T}$ follows from the essential boundedness
of $\widehat{E}$ (see Proposition 7.5).
## References
* [1] F. Antown, D. Dragičević, and G. Froyland. Optimal linear responses for Markov chains and stochastically perturbed dynamical systems. Journal of Statistical Physics, 170(6):1051–1087, 2018.
* [2] P. Ashwin, M. Nicol, and N. Kirkby. Acceleration of one-dimensional mixing by discontinuous mappings. Physica A: Statistical Mechanics and its Applications, 310(3-4):347–363, 2002.
* [3] A. Avila and G. Forni. Weak mixing for interval exchange transformations and translation flows. Annals of Mathematics, pages 637–664, 2007.
* [4] W. Bahsoun, M. Ruziboev, and B. Saussol. Linear response for random dynamical systems. Advances in Mathematics, 364:107011, 2020.
* [5] V. Baladi. Linear response, or else. ICM Seoul 2014 talk (arXiv:1408.2937), 2014.
* [6] J. Bonnans and A. Shapiro. Perturbation analysis of optimization problems. Springer Science & Business Media, 2013.
* [7] J. Borwein and R. Goebel. Notions of relative interior in Banach spaces. Journal of Mathematical Sciences, 115(4), 2003.
* [8] J. Conway. A course in functional analysis, volume 96. Springer Science & Business Media, 2013.
* [9] M. Dellnitz, G. Froyland, and S. Sertl. On the isolated spectrum of the Perron-Frobenius operator. Nonlinearity, 13(4):1171, 2000.
* [10] M. Dellnitz and O. Junge. On the approximation of complicated dynamical behavior. SIAM Journal on Numerical Analysis, 36(2):491–515, 1999.
* [11] D. Dragičević and J. Sedro. Statistical stability and linear response for random hyperbolic dynamics. arXiv:2007.06088, 2020.
* [12] S. Eveson. Compactness criteria for integral operators in $L^{\infty}$ and $L^{1}$ spaces. Proceedings of the American Mathematical Society, 123(12):3709–3716, 1995.
* [13] A. Faggionato, N. Gantert, and M. Salvi. Einstein relation and linear response in one-dimensional Mott variable-range hopping. Ann. Inst. H. Poincaré Probab. Statist., 55(3):1477–1508, 2019.
* [14] G. Froyland. An analytic framework for identifying finite-time coherent sets in time-dependent dynamical systems. Physica D., 250:1–19, 2013.
* [15] G. Froyland, C. González-Tokman, and T. Watson. Optimal mixing enhancement by local perturbation. SIAM Review, 58(3):494–513, 2016.
* [16] G. Froyland, P. Koltai, and M. Stahn. Computation and optimal perturbation of finite-time coherent sets for aperiodic flows without trajectory integration. SIAM Journal on Applied Dynamical Systems, 19(3):1659–1700, 2020.
* [17] G. Froyland, R. Murray, and O. Stancevic. Spectral degeneracy and escape dynamics for intermittent maps with a hole. Nonlinearity, 24(9):2435, 2011.
* [18] G. Froyland and N. Santitissadeekorn. Optimal mixing enhancement. SIAM Journal on Applied Mathematics, 77(4):1444–1470, 2017.
* [19] S. Galatolo. Quantitative statistical stability and speed of convergence to equilibrium for partially hyperbolic skew products. J. Éc. Pol. Math., 5:377–405, 2018.
* [20] S. Galatolo and P. Giulietti. A linear response for dynamical systems with additive noise. Nonlinearity, 32(6):2269, 2019.
* [21] S. Galatolo and M. Pollicott. Controlling the statistical properties of expanding maps. Nonlinearity, 30:2737–2751, 2017.
* [22] S. Galatolo and J. Sedro. Quadratic response of random and deterministic dynamical systems. Chaos, 30(2):023113, 2020.
* [23] N. Gantert, X. Guo, and J. Nagel. Einstein relation and steady states for the random conductance model. Ann. Probab., 45(4):2533–2567, 2017.
* [24] N. Gantert, P. Mathieu, and A. Piatnitski. Einstein relation for reversible diffusions in random environment. Comm. Pure Appl. Math., 65(2):187–228, 2012.
* [25] M. Ghil and V. Lucarini. The physics of climate variability and climate change. Reviews of Modern Physics, 92(3):035002, 2020.
* [26] S. Gouëzel and C. Liverani. Banach spaces adapted to Anosov systems. Ergodic Theory and Dynamical Systems, 26:189–217, 2006.
* [27] M. Hairer and A. Majda. A simple framework to justify linear response theory. Nonlinearity, 23:909–922, 2010.
* [28] H. Hennion and L. Hervé. Limit theorems for Markov chains and stochastic properties of dynamical systems by quasi-compactness, volume 1766. Springer Science & Business Media, 2001.
* [29] T. Kato. Perturbation theory for linear operators. Reprint of the 1980 edition. Classics in Mathematics. Springer-Verlag, Berlin, 1995.
* [30] B. R. Kloeckner. The linear request problem. Proc. Amer. Math. Soc., 146:2953–2962, 2018.
* [31] A. Kolmogorov and S. Fomin. Elements of the Theory of Functions and Functional Analysis. Volume 2: Measure. The Lebesgue Integral. Hilbert Space. Graylock, 1961.
* [32] P. Koltai, H. C. Lie, and M. Plonka. Fréchet differentiable drift dependence of Perron–Frobenius and Koopman operators for non-deterministic dynamics. Nonlinearity, 32(11):4232, 2019.
* [33] T. Komorowski and S. Olla. On mobility and Einstein relation for tracers in time-mixing random environments. Journal of Statistical Physics, 118(3/4):407–435, 2005.
* [34] H. Kreczak, R. Sturman, and M. C. Wilson. Deceleration of one-dimensional mixing by discontinuous mappings. Physical review E, 96(5):053112, 2017.
* [35] A. Lasota and M. Mackey. Probabilistic properties of deterministic systems. Cambridge university press, 1985.
* [36] C. Liverani, B. Saussol, and S. Vaienti. A probabilistic approach to intermittency. Ergodic theory and dynamical systems, 19(3):671–685, 1999.
* [37] D. Luenburger. Optimization by vector space methods. John Wiley & Sons, Inc., 1969.
* [38] R. MacKay. Management of complex dynamical systems. Nonlinearity, 31:R52–R66, 2018.
* [39] L. Marangio, J. Sedro, S. Galatolo, A. Di Garbo, and M. Ghil. Arnold maps with noise: Differentiability and non-monotonicity of the rotation number. Journal of Statistical Physics, Nov 2019.
* [40] P. Mathieu and A. Piatnitski. Steady states, fluctuation–dissipation theorems and homogenization for diffusions in a random environment with finite range of dependence. Archive for Rational Mechanics and Analysis, 230(3/4):277–320, 2018\.
* [41] M. Reed and B. Simon. Methods of modern mathematical physics. Volume I: Functional analysis. Academic press, 1980.
* [42] D. Ruelle. Differentiation of SRB states. Communications in Mathematical Physics, 187:227–241, 1997.
* [43] J. Sedro. On regularity loss in dynamical systems. PhD thesis, 2019. https://www.theses.fr/2018SACLS254.
* [44] J. Sedro and H. H. Rugh. Regularity of characteristic exponents and linear response for transfer operator cocycles. arXiv:2004.10103, 2020.
* [45] Y. Sinai and C. Ulcigrai. Weak mixing in interval exchange transformations of periodic type. Letters in Mathematical Physics, 74(2):111–133, 2005.
* [46] R. Sturman. The role of discontinuities in mixing. Advances in applied mechanics, 45:51–90, 2012.
* [47] S. Ulam. A Collection of Mathematical Problems, vol. 8. Interscience Publishers, New York, 1960.
* [48] M. Wang and I. C. Christov. Cutting and shuffling with diffusion: Evidence for cut-offs in interval exchange maps. Physical Review E, 98(2):022221, 2018.
* [49] H. Zmarrou and A. Homburg. Bifurcations of stationary measures of random diffeomorphisms. Ergodic Theory Dynam. Systems, 27:1651–1692, 2007.
|
# A Machine Learning Approach to Predicting Continuous Tie Strengths
James Flamino†
Department of Physics
Rensselaer Polytechnic Institute
Troy, NY 12180
<EMAIL_ADDRESS>
&Ross DeVito†
Department of Computer Science and Engineering
University of California San Diego
La Jolla, CA 92093
<EMAIL_ADDRESS>
&Boleslaw K. Szymanski
Department of Computer Science
Rensselaer Polytechnic Institute
Troy, NY 12180
<EMAIL_ADDRESS>
&Omar Lizardo
Department of Sociology
University of California Los Angeles
Los Angeles, CA 90095
<EMAIL_ADDRESS>
###### Abstract
Relationships between people constantly evolve, altering interpersonal
behavior and defining social groups. Relationships between nodes in social
networks can be represented by a tie strength, often empirically assessed
using surveys. While this is effective for taking static snapshots of
relationships, such methods are difficult to scale to dynamic networks. In
this paper, we propose a system that allows for the continuous approximation
of relationships as they evolve over time. We evaluate this system using the
NetSense study, which provides comprehensive communication records of students
at the University of Notre Dame over the course of four years. These records
are complemented by semesterly ego network surveys, which provide discrete
samples over time of each participant’s true social tie strength with others.
We develop a pair of powerful machine learning models (complemented by a suite
of baselines extracted from past works) that learn from these surveys to
interpret the communications records as signals. These signals represent
dynamic tie strengths, accurately recording the evolution of relationships
between the individuals in our social networks. With these evolving tie
values, we are able to make several empirically derived observations which we
compare to past works.
†Authors contributed to this work equally
## Introduction
Relationships and the interactions that characterize them are defining
features of social networks [1, 2, 3]. In the network and social sciences, the
strength of these relationships is often represented by a “tie strength,” a
weighted edge between two nodes that marks the existence of a connection
between the people portrayed by the nodes. Previously, work on understanding
tie strength has ranged from interpreting its importance in information spread
[4, 5, 6, 7, 8] to using a variety of social features to predict the magnitude
of tie strength between individuals [9, 10, 11, 12, 13].
A question fundamental to this topic is: what contributes to the strength of a
tie between two people? Or, more specifically, what attributes of a
relationship can we use to predict a tie strength value that properly
represents the closeness of two individuals within a social network? This
question has no singular answer, though there have been popular works delving
into possible interpretations [4, 5, 7]. Such works have pointed to both
qualitative and quantitative attributes of relationships that seem to
influence the strength of the relationship between two individuals, and
therefore would contribute to the evolution of tie weights within the involved
social network.
In Granovetter’s popular early work on this topic [4] these factors were
identified as time invested, emotional intensity, mutual confiding, and
reciprocal services. He suggested tie strength was ultimately a linear
combination of these factors. Krackhardt’s response to this work [5] introduced
an alternative characterization of tie strength that consisted of interaction
frequency, affection, and time, which he defined qualitatively as an enduring
history between the two linked individuals.
Marsden’s work provided further clarification to the considered factors,
introducing predictors (aspects of relationships that are related to, but not
a part of, tie strength), and indicators (actual components of tie strength).
The former set of factors contain relationship descriptors like kinship and
educational differences. The latter set of factors contained attributes of
communication and shared interests, and intimacy that are more commonly seen
as features of tie strength in other works. In particular, Marsden addressed
closeness (emotional intensity), duration of connection, frequency of
communication, breadth of discussion topics, and mutual confiding (all of
which correspond to Granovetter’s characterizations of tie strength) and found
that closeness played an important part in informing tie strength.
Given the subjective nature of relationships and their attributes like
intimacy and affection, a more robust and generalizable quantitative approach
to prediction faces some challenges, though there has been groundwork laid to
this end [10, 9, 11, 12, 13]. In some of these works, tie strength is
approximated by linking features like communication frequency, social media
friend overlap, shared attributes (like gender or education), directed message
keyword usage, and the like to predict closeness between two individuals. The
predictions are then usually compared to a ground truth extracted from a
survey asking participants to rate their closeness either on a numbered scale
or indirectly by using questions like “How strong is your relationship with
this person?”.
Facebook has become a prevalent medium for these kinds of experiments, since,
among online social media platforms, Facebook maintains a massive social
network and facilitates broad forms of interaction between users. The results
of these experiments revealed that features like days since the last
communication, participant’s number of friends, and exchanged intimacy words
contribute a fair amount to the prediction of tie strength. In addition to
this, some of the work showed that public communications, like Facebook wall
posts, and private communications, like Facebook direct messages, often
contribute equally to predicting tie strength.
Despite the interesting implications of these works, the scope of these
systems is always limited, restricted to a snapshot in time of the social
network. But as most people witness in their own lives, relationships
evolve over time. They can be subject to changes, and such changes will have a
direct impact on how people interact with former, current, and future friends.
In fact, the progression of a relationship is important for characterizing the
connection’s strength. All of the past works mentioned above only focus on
predicting tie strength at a single moment in time. Additionally, these models
often faced the issue of being tied to their specific application. The
representations of tie strength were often characterized by attributes
extracted from a singular platform, namely Facebook. This results in
interpretations of tie strength that are defined by their specific platforms,
making them incapable of being applied generally.
In this paper, we lay out a generalizable system that addresses these concerns
and demonstrates that evolving tie strengths for a dynamic social network can
be accurately predicted given only a practically small survey-based ground
truth. There are three core pieces to our system: the input data, the training
data, and the model that learns to interpret the input data as tie strengths
using the training data.
A person’s digital communication records are used as the input data, from
which a trained model can predict their social ties. While communication is
just one of the many hypothesized aspects of a relationship which impacts the
tie strength, as a data source, digital communications have the practical
benefits of being abundant, multifaceted, and easy to collect. These
advantages grow as the world becomes increasingly dependent on digital
interactions. Our system can work with any number of communication mediums
simultaneously (e.g. text messages, phone calls, video calls, WhatsApp, and
Facebook Messenger); it requires only that the records include the time, type,
and pair of people involved for each communication event.
To train a model and evaluate its performance on converting input data into
tie strength values, ground truth data on social ties is needed for some
subset of those for whom we also have communication records. To meet this
need, our system just requires a small number of top $k$ lists of social ties.
These lists can be procured at any time over the course of the dataset, as
long as there is concurrent communication data related to the person whose top
social ties make up the list. Any representation of tie strength ground truth
in practice would likely be from survey responses. This being the case, a
ranking-based ground truth would be more robust when compared to more common
exact social tie values or binary relationship labels. Using this ordering
benefits from never having to ask for explicit social tie values or cutoffs,
which is important as these concepts would be highly subjective among survey
respondees. Furthermore, we show that even asking for an explicit ranking in
the survey is not required to avoid response bias. Instead we derive our top
$k$ rankings using survey questions based on Granovetter’s and Krackhardt’s
work.
We found pairwise comparison based machine learning models to be excellent
predictors in comparison to the baselines. These models are able to take
advantage of using top $k$ orderings and pairwise comparisons to provide many
training examples for their underlying models using a realistic amount of
survey based ground truth data. This is important as machine learning
performance tends to rise with an increase in quality training data. This
pairwise comparison framework has the additional benefit of producing an
interpretable social tie value.
We train these machine learning models using the top $k$ lists, and show that
once trained, we can use these models to continuously, and accurately, predict
the evolving tie strength of one person towards another using just
communication data. In the following sections we discuss our system in greater
detail, evaluating its efficacy at accurately producing evolving tie strength
values. We then show that analyzing these dynamic values for all participants
reveals interesting observations related to communication evolution,
relationship stability, and triadic dynamics.
## Data
To develop and evaluate our dynamic tie strength model, we needed data on a
social network, and data on relationship attributes that could be used to
predict tie strengths within that social network. Optimally, both would extend
over a long enough period of time to capture changing relationship dynamics.
The NetSense [14] study, which consists of data voluntarily collected from
randomly selected students entering the University of Notre Dame, fits
this need, providing linked ego network surveys and digital communications
records. Data for this study was collected between Fall 2011 and Spring 2013.
Student phone records, including text messages and phone calls, were provided,
along with corresponding ego network surveys that were filled out each semester
by the participants. In terms of scope, NetSense followed 196 students at its
peak, yielding extensive communication records that we could use to capture
relationship changes over time.
### Communication Records
NetSense’s communication record conforms to the standard Call Detail Record
(CDR) format, listing a timestamp, sender, receiver, message type, and message
length. The NetSense study contained $7,465,776$ events generated by the
participants of the study. Text messages make up about $94\%$ of these events,
with the remainder being calls. Despite this imbalance in volume, phone calls
remain an important medium for communication and carry an emotional weight,
especially among the younger population captured in the study [15]. Thus, we
choose to include both calls and text messages. In fact, we find that
considering calls and texts separately also improves the machine learning
models we implement.
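As a concrete illustration, a CDR-style event stream like the one described above can be represented and split into per-medium channels as follows (a minimal sketch; the field names are our assumptions, not the actual NetSense schema):

```python
from collections import namedtuple

# Hypothetical record layout mirroring the CDR fields listed above:
# timestamp, sender, receiver, message type, and message length.
Event = namedtuple("Event", ["timestamp", "sender", "receiver", "kind", "length"])

def split_streams(events):
    """Separate events into text and call streams, since treating the
    two media as distinct channels helps the models described later."""
    texts = [e for e in events if e.kind == "text"]
    calls = [e for e in events if e.kind == "call"]
    return texts, calls

events = [
    Event(100, "ego", "alter1", "text", 12),
    Event(200, "alter1", "ego", "call", 65),
    Event(300, "ego", "alter2", "text", 8),
]
texts, calls = split_streams(events)  # 2 texts, 1 call
```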
### Ego Network Surveys
As mentioned earlier, ego network surveys were collected once a semester to
complement the communication record. These surveys were prefaced with a
question asking the survey-taker (the ego) to list individuals (the alters)
with whom they spend a significant amount of time communicating or
interacting. This list could include up to 20 alters, and could include people
that were not involved with the study. This allowed for these lists to contain
a variety of relationship types, including fellow students, roommates,
parents, siblings, coworkers, and romantic partners. The ego was subsequently
asked to specify their relationships with these individuals. This
classification was provided to the ego as a closed list. In general, options
available ranged in familiarity from “significant other” and “parent” to
“acquaintance”. Other related information on these alters was also collected
through additional follow-up questions that included asking about the history
of contact, shared interests and activities, and the frequency of
communication. Importantly, the surveys also asked the ego to subjectively
rate similarity and closeness with the alters.
Despite the thoroughness of this study, the time between survey postings is
significant: the data has four ego network surveys over the four semesters,
and the study participants listed on average $15.9$ people per survey.
Regardless of the sparsity of these ego network surveys, our results
demonstrate that they still provide sufficient ground truth support for our
models.
### Definition of Tie Strength
Using Granovetter’s and Krackhardt’s tie strength definitions, we can outline a
template for evaluating the connection between an ego and their alters. In
early works on this subject, tie strength was represented discretely as labels
(such as “close” or “not close”). More recently, tie strength has been encoded
in the form of a numerical range, which conforms to Granovetter’s belief that
tie strength is continuous, not discrete. These representations were varied,
and were often based on a combination of some large set of qualitative and
quantitative predictors. However, these methods for interpreting tie strength
are also usually reliant on platform-specific attributes [10, 9].
To avoid these limitations, we take a different approach to encoding tie
strength by simply ordering the list of alters from our ego network surveys.
For any given ego network survey, we produce a ranked list of the individuals
listed in the survey, where the order is determined by how strong the survey-
taker’s social tie is with each individual. Subsequently, each survey-taker
produces a set of top $k$ social tie rankings, timestamped by their respective
ego network surveys (greater details of how this is done are presented in the
methods section). We train and evaluate our social tie prediction models
through how well their predicted tie strengths conform with these rankings at
their corresponding times. Specifically, we introduce a suite of models that
interpret our communication data streams as dyadic tie strengths. These tie
strengths are represented by a signal value, which is used to establish a
predicted ranking by ordering said signals by magnitude. We compare this
ordering with the corresponding survey’s ground truth top $k$ social tie
ranking.
Given that these rankings are determined by how close an alter is to their
ego, a model that is properly trained to produce signals that accurately
reconstruct the appropriate ranking of each alter for any associated ego
ultimately means that the model is capable of generating continuous signals
that are a representation of evolving tie strength between an ego and those
they’ve communicated with. In other words, a signal’s magnitude that indicates
the level of affinity an individual has for another in the context of a ranked
list fits within the definition of evolving tie strength, which in the past
has been defined loosely. And since the model is designed to generate signals
over long periods of time (as tuned by the multiple social tie rankings over
time in the training data), given simple communication data for any new target
individual, the model can produce tie strengths that (when ordered by
magnitude) effectively identify that person’s closer social ties and
subsequently capture the evolution of their connections over the course of
their communication data.
## Models
Our suite of models for this survey reconstruction process can be divided into
two classes: a baseline class and a machine learning class. For the baseline
class, we implement single-attribute models that use specific attributes of
communication behavior that are often cited as aspects of tie strength or
directly used as a proxy for it [11, 12, 10, 4, 13]. The second class, the
focus of this paper, uses machine learning methods on time series or a
collection of single-attribute model values to predict all of the tie
strengths for a target person whose communication records are given. These
models do
this by making pairwise comparisons between everyone the target person had
communicated with.
### Baseline Models
It has long been postulated “the more frequently persons interact with one
another, the stronger their sentiments of friendship for one another are apt
to be” [16]. Previous research has often used the frequency of communication
to predict tie strength or emotional closeness [11, 12, 10]. Following this
established methodology, we created a frequency model that calculates
frequency by dividing the number of communication events between two
individuals by the elapsed time since they first communicated at the timestep
for which it is being evaluated. In addition, we assessed a recency model that
uses the elapsed time since the last communication between two people as an
inversely related estimate of frequency of contact, as is done in [11]. This
measure of time since last communication was found to be the most predictive
single feature in [10].
Stronger ties, by definition, tend to involve longer time commitments [4].
Following this logic, we also created a duration model that uses the time
since the first communication record as a proxy for length of friendship or
other social bond. This was found to be the second most predictive feature in
[10]. As another proxy for a pair’s time commitment to communicating, we add
the volume model, which counts the total number of calls and text messages
between two individuals.
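The four single-attribute baselines can be sketched directly from the timestamps of events between one pair of people (an illustrative reimplementation under our own naming and units, not the authors' code):

```python
def volume(timestamps, now):
    """Volume model: total calls and texts up to the evaluation time."""
    return sum(1 for t in timestamps if t <= now)

def duration(timestamps, now):
    """Duration model: time since the first recorded communication."""
    past = [t for t in timestamps if t <= now]
    return now - min(past) if past else 0.0

def recency(timestamps, now):
    """Recency model: time since the last communication, negated so
    that a higher score still means a stronger tie."""
    past = [t for t in timestamps if t <= now]
    return -(now - max(past)) if past else float("-inf")

def frequency(timestamps, now):
    """Frequency model: event count divided by elapsed time since
    first contact."""
    d = duration(timestamps, now)
    return volume(timestamps, now) / d if d > 0 else 0.0

ts = [10.0, 20.0, 40.0]  # communication times for one pair
scores = (volume(ts, 50.0), duration(ts, 50.0),
          recency(ts, 50.0), frequency(ts, 50.0))
```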
Recently, there has been work showing that tie strength can also be predicted
using the overlap of friend groups between two specific individuals [17]. In
one implementation of this concept [13], a metric called “weighted overlap”
for social bow tie structures is used as a feature to help machine learning
algorithms predict the tie strength between two individuals in a specific time
frame. Given that this particular feature contributes heavily to the
predictive performance in a couple of the tested cases in this work, we
implement this metric here as well as a model to explore the predictive
capabilities of evolving friend group overlap. The specific implementation is
shown in detail in the methods section.
Given that this is the baseline class, we also set the lowest bar for survey
reconstruction with a simple random baseline. This randomly sorts
the individuals with whom the target participant had communicated previously
into an arbitrary ranking.
### Machine Learning Models
At their core, our machine learning models compare a selected individual’s
(person A) communication history with one individual (person B) against A’s
communication history with another (person C). Provided these two histories,
the machine learning models then will predict, between B and C, which of the
two will have a greater tie strength with A. When these comparisons are made
for all pairs of people in the selected individual’s records, we can generate
the predicted ranked list for that person (see Methods for more details), and
subsequently produce meaningful tie strength values for all of this person’s
relationships.
These tie strengths for the machine learning models are expressed as winning
percentages. For an individual being evaluated, the winning percentage is the
fraction of pairwise comparisons, against all other people the target had
communicated with, where the model predicts the evaluated individual has a
stronger social tie. This score has the range $[0,1]$, where a higher score
means the scored individual is more likely to be closer to the selected
individual. This tie strength value can also be generated at any point in time
provided the models are trained and there is communication history available
for those being considered in the pairwise comparisons.
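The winning-percentage construction can be sketched as below, where `stronger(b, c)` stands in for any trained pairwise comparator (e.g. the Ensemble or LSTM model); the toy comparator here is purely illustrative:

```python
from itertools import combinations

def winning_percentages(alters, stronger):
    """Tie strength as the fraction of pairwise comparisons won.

    stronger(b, c) returns True when the model predicts b has the
    stronger tie with the target person than c does.  Scores lie in
    [0, 1]; higher means closer to the target."""
    wins = {a: 0 for a in alters}
    for b, c in combinations(alters, 2):
        winner = b if stronger(b, c) else c
        wins[winner] += 1
    n = len(alters) - 1  # comparisons each alter takes part in
    return {a: wins[a] / n for a in alters}

# Toy comparator: pretend closeness is alphabetical order.
scores = winning_percentages(["ana", "bob", "cat"], lambda b, c: b < c)
ranking = sorted(scores, key=scores.get, reverse=True)  # predicted ranked list
```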
Pairwise comparison-based ranking models can also take advantage of a ground
truth in ranked form. Specifically, selecting permutations from this ordering
allows for the generation of many training examples from relatively little
surveying. This is important as the quality of machine learning models is tied
to the quantity and quality of training samples.
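The expansion of a single top-$k$ ranking into many labelled training pairs can be sketched as follows (names are hypothetical):

```python
from itertools import permutations

def pairwise_examples(ranked_alters):
    """Turn one survey's ranked list of k alters into k*(k-1) ordered
    training pairs, each labelled with whether the first alter
    outranks (is closer than) the second."""
    rank = {a: i for i, a in enumerate(ranked_alters)}
    return [((b, c), rank[b] < rank[c])
            for b, c in permutations(ranked_alters, 2)]

# A top-3 list yields 6 labelled comparisons from a single survey.
examples = pairwise_examples(["mom", "roommate", "coworker"])
```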
The first machine learning model uses an ensemble method that utilizes
features of duration, recency, frequency, and volume in the communication data
to inform a random forest classifier [18]. The random forest classifier
predicts which of the two compared connections with a selected individual is
indicative of a greater tie strength, and thus should be higher ranked. The
second machine learning model uses communications time series and recurrent
neural networks, specifically a two-channel Long Short-Term Memory (LSTM)
network [19]. The LSTM is used to make pairwise comparisons, which are in
turn used to produce a signal and ranking, all of which is done in the same
manner as in the Ensemble model. As mentioned earlier, we find that
performance improves for both machine learning models when texts and calls are
treated as separate data streams.
## Results
### Ranking Metrics
To determine survey reconstruction error and evaluate the models’
capabilities, we compare a model’s predicted ranked individuals against the
ground truth using the rank-biased overlap (RBO) [20]. RBO is an indefinite
rank similarity measure used to evaluate the similarity of two ranked
incomplete lists, making it better suited for this task than, for example,
Jaccard. RBO has several other desirable attributes for this kind of
comparison; it handles items being present only in one ranking, weights higher
ranking items more, works with any given ranking length, and requires little
to no assumptions about the data.
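For intuition, a simplified, truncated form of RBO can be sketched as below; the full measure additionally extrapolates an unseen tail for indefinite lists, which this sketch omits:

```python
def rbo_truncated(s, t, p=0.9):
    """Truncated rank-biased overlap of two ranked lists.

    At each depth d, the agreement A_d = |s[:d] & t[:d]| / d is
    weighted by p**(d-1), so matches near the top of the lists count
    for more; p controls how 'top-weighted' the measure is."""
    depth = max(len(s), len(t))
    score = 0.0
    for d in range(1, depth + 1):
        overlap = len(set(s[:d]) & set(t[:d]))
        score += (p ** (d - 1)) * overlap / d
    return (1 - p) * score

identical = rbo_truncated(["a", "b", "c"], ["a", "b", "c"])
reversed_ = rbo_truncated(["a", "b", "c"], ["c", "b", "a"])
disjoint = rbo_truncated(["a", "b", "c"], ["x", "y", "z"])
```

Note how the reversed list still scores above the disjoint one: RBO rewards shared membership at every depth, not just exact position, which suits incomplete survey rankings.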
### Ranking Performance
The evaluation process we chose for comparing our suite of models against the
NetSense ground truth was the standard 3-fold cross validation. Given the
total list of NetSense egos, the validation process shuffled and equally split
this list of participants into three mutually exclusive groups. For each fold,
a test group was selected, with the remaining two groups used as training
data. Within a fold, we separate the training and testing data into four
subsets, split equally by semesterly survey time (the time at which the
surveys were filled out by the egos). At each survey time, the training and
testing subset only includes the surveys and communication data from before
that time (therefore the later subsets contain the earlier subsets). Then for
each subset we trained and tested the machine learning models with all
training data available. We then subsequently tested these trained models on
the available testing data in the subset, using the models to predict the
current testing ego’s surveys. This process was repeated for each subset,
allowing more training and testing data to be released with each proceeding
survey time. We do not, however, allow the models to train on the preceding
ground truth of the test data after it has been predicted. Instead, we
ensured that, within the current fold, the models had predicted the surveys
for the test egos at each survey time before the corresponding ground truth
was released for comparison.
This setup prevents any model from using future communication history in any
of the folds during evaluation. Once this setup was completed for each model
within the current fold, the fold score was determined by finding the weighted
average survey reconstruction accuracy (RBO) across all available surveys for
each individual test participant. The weight of each predicted survey is the
size of that survey’s ground truth, accentuating the prediction of larger
surveys. Given a score for every participant in this fold, this score was then
averaged over all test participants. The final score was computed as the
average fold score over all folds, which are shown for all models in Table 1.
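The per-participant scoring step above, where each survey's RBO is weighted by its ground-truth size, can be sketched as follows (the numbers are made up):

```python
def weighted_rbo(survey_scores):
    """Weighted average reconstruction accuracy for one participant.

    survey_scores pairs each predicted survey's RBO with the size of
    that survey's ground-truth ranking, so larger surveys are
    accentuated, as in the fold-scoring procedure described above."""
    total = sum(size for _, size in survey_scores)
    return sum(rbo * size for rbo, size in survey_scores) / total

# Three predicted surveys with ground-truth sizes 20, 10, and 5.
participant_score = weighted_rbo([(0.50, 20), (0.40, 10), (0.20, 5)])
```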
Of the baseline class models, overall volume of calls and texts since the
start of college was the most predictive. This was followed by frequency and
recency of communication. Duration of communication was a relatively distant
fourth, but this may be tied to the time frame of data to which we had access
to. Specifically, those with whom a study participant had been friends with
long before college could only have an estimated duration of friendship
spanning back to the start of college. This start of college period was also a
time when participants were making many new contacts. Some of these contacts
would go on to become friendships, but many were just freshmen meeting new
people who would not be significant as their time in college went on. For this
reason, this may have been a poor estimate of the length of friendship and
therefore of the social bond.
The limited scope of the data negatively affected the overlap model as well,
which performed even worse than the duration model. The limit in accuracy here
is most likely from the fact that the communication data only provides the
comprehensive communities of neighboring friends for study participants.
Non-participants do not have their outgoing messages recorded, so their only
neighbors will always be strictly participants. The overlap model requires a
sizable sample of the overlapping and non-overlapping neighbors of both
individuals being evaluated for tie strength prediction. As one of these two
individuals might be a non-participant, their neighborhood will be incomplete
which skews the overlap value. These issues demonstrate some of the difficulty
of inferring social ties given just a single target person’s records.
Table 1: Results of the NetSense survey reconstruction models, with variance in parentheses

| Model Class | Model | RBO |
|---|---|---|
| Baseline | Random | 0.037 (0.003) |
| Baseline | Overlap | 0.064 (0.008) |
| Baseline | Duration | 0.234 (0.033) |
| Baseline | Recency | 0.307 (0.025) |
| Baseline | Frequency | 0.320 (0.032) |
| Baseline | Volume | 0.363 (0.032) |
| Machine Learning | Ensemble | 0.450 (0.029) |
| Machine Learning | LSTM | 0.481 (0.029) |
But despite these limitations, the Ensemble and LSTM models produced RBO
scores of $0.450$ and $0.481$ respectively for the NetSense study. While
frequency, recency, duration, and volume of communication have merit for
approximating relationship strength on their own [16, 11, 10], they are unable
to capture a greater whole of the latent mechanics of social dynamics. But
when all used in conjunction, this feature space allows for even a simple
random forest classifier to perform well. However, the relative performance of
the Ensemble model hints at the weaknesses of such simple models for inferring
complex social dynamics, and that even more improvements can be made. This is
where the final model, the LSTM, comes in.
Recurrent neural networks can accept, as input, temporal sequences of an
arbitrary length. This allows them to use as features the whole communications
histories from the start of the dataset to any time at which social ties are
being evaluated. This ability to analyze histories of interactions in a
temporally aware way is likely the key to achieving the best performance.
Instead of using heuristic features calculated at a specific time, the LSTM is
able to internally learn latent features from patterns of communication over
time that are most meaningful for evaluating the strength of social ties.
## Evolving Tie Strength Analysis
### Evaluating Continuous Signals
As mentioned in the Data section, the signals that can be generated by the
trained models are used to represent evolving tie strengths due to the fact
that the ground truth rankings that the models are fit to are ordered by
closeness. Hence, the signal magnitude between two individuals is also the
magnitude of tie strength between them. Now, to illustrate the signal
generation dynamics of our top performing models, we present the evolution of
tie strengths between a NetSense study participant and a sample of their
listed alters as encoded by our Volume model, the Ensemble model, and the
LSTM. For consistency, we normalize all values. In this particular analysis we
train the machine learning models using all the NetSense data, excluding all
data relating to the selected participant. We sequentially sampled the
resultant signal values of our two trained machine learning models and the
Volume model over the entire duration of the NetSense study, which includes
the time of each survey where the ground truth values are known. We plot these
sampled values against time, marking the times of the surveys and denoting if
the signals, when ordered with all other alters in ascending order by value,
place each alter at the right survey rank when compared against ground truth.
These signals, as generated by the top three models for a selected participant,
are shown in Figure 1. We specifically selected a subset of listed individuals
that had particular relationships with the target individual (e.g. parents,
siblings, significant others, and close friends).
Figure 1: Generated signals for a randomly selected NetSense participant using
the top three survey reconstruction models (Volume, Ensemble, and LSTM). The
x-axis marks the timestamp, and the y-axis marks the signal magnitude. The
colored vertical bars indicate the occurrence of a survey, denoting if the
models correctly (or incorrectly) classified the ranking of the considered
individual.
The first observation to be made in Figure 1 is the difference in
signal shapes between model types. Naturally this is due to the differing
signal generation methods of each model. Specifically, the Volume model
captures the continuously growing intensity of communication while the machine
learning models convert their pairwise predictions into signals using
comparative probability (i.e. a strong signal for an individual means they
have a higher probability of being closer to the selected participant than
other individuals).
But despite the differences in tie strength interpretations between models,
there are still clear trends that are reflected across models. For example, in
Figure 1, “Romantic Partner 1” becomes socially involved with our selected
participant right around the time of the last survey. This is universally
reflected by a massive spike in predicted tie strength, which is maintained
for the remainder of the study. The consistent communication between the
selected participant and their family (the sibling and parent) is also easily
shown across the models, with accompanying high tie strength values.
The universal visibility of these relationship transitions can be attributed
to an accompanying spike in communication activity. Since the Volume model
operates directly using total event occurrences, obvious shifts in
communication behavior are easily captured in the outputted signals. However,
only the machine learning models are able to visibly capture more nuanced trends.
One example can be found in the listed individual “Friend 4”. Around the 3rd
semester, this friend and participant became strong friends, enough to warrant
the participant placing the friend on their ego network survey. But
communication is sparse overall as shown by the Volume model, meaning a change
in calls and text volume was not the biggest change here. An underlying shift
in the pattern of communication occurred that only the machine learning
models were able to detect, subsequently boosting the tie
strength value between the two. We can use “Acquaintance 1” as another
example. This listed individual initially meets the participant around the
first semester, most likely through a study group. Beyond the first semester
the participant forms other more concrete and significant social circles,
relegating the acquaintance to a strictly academic role (which is why
communication with them persists past the first semester though they are no
longer included within the surveys). This social tie weakens as class overlap
inevitably diverges, and the two eventually move on with their lives. While
these initial anecdotes are interesting, analyzing the tie strength dynamics
of larger groups could allow us to draw stronger general conclusions.
For the remainder of our analysis, we will be using the LSTM model with its
winning percentage representation of tie strength. In addition to being the
best performing model in our suite, its comparative winning percentage
representation of dynamic tie strength is meaningful and easy to understand.
For this analysis, we sampled predicted tie strengths from the LSTM model
across the duration of the NetSense study. We can represent the inferred
strength of a social tie from one person (whose communication records are
being used for the inference) to another as a directed edge between nodes. The
weight for each edge is the tie strength value itself, which varies as a
function of time. With the resulting dynamic social network, we can analyze
general tie strength trends and compare them with previous work to verify the
efficacy of the model in capturing important relationships trends using our
generalizable methodology of survey reconstruction.
To establish the groundwork for future research, we deliberately chose two
broad subjects for initial analysis: dynamics of relationships and dynamics of
triadic groups. The analysis of tie strength evolution as it relates to
different relationship types is important, as the different kinds of
classifications (i.e. friend, sibling, parent, etc) can often help determine
the trends of closeness between individuals in the past and future. We also
choose to analyze triadic motifs within our evolving network due to the
importance of triads in distinguishing communities [21] and network structure
as a whole.
### Relationship Dynamics
As stated in the Data section, participants were asked to classify their
relationships with those they listed in their ego network surveys. Given these
classifications, we can analyze how signals on average change with time for
different nodes depending on their relationship type. In particular, we
analyze the average edge weight over time for friends, kin, and significant
others (as identified by survey-takers), to evaluate relationship stability
over time.
In Figure 2a we show the average edge weight over time for our selected
relationship types. We found that while friends tend to have a more volatile
edge weight (due to the fact that there are so many different kinds of
friends), parents (and similar kin) tend to stay fairly consistent with high
tie strength. This agrees with previous research [22], which shows that
college students very often maintain consistent contact with their families.
For those that have frequent communication with their parents and siblings,
this behavior is correlated with their closeness to them, as reflected in
survey rankings.
The average tie strength value is also very stable towards parents across
time, with little variation (which can even be seen in Figure 1),
indicating that the rank held by parents rarely wavers between participants,
confirming the general strength of connection. This is likely due to the fact
that if parents are even going to make it onto a ranked list of individuals
with whom the participant has communicated the most, these relationships
are going to be inherently stable. If they were not, students would not invest
time communicating with them enough to warrant listing them.
Figure 2: A sequence of illustrations for the analysis of the evolving tie
values in NetSense as generated by one of our best survey reconstruction
models. (a) shows the growth of the average edge weight of different
relationships over time. (b) shows the Kernel Density Estimation of transition
differences between relationships. (c) shows the absolute difference in edge
weight between gendered majorities and minorities in detected triads in our
social networks. (d) shows the number of detected triads that fit into one of
the three standard triadic motifs.
Surprisingly, significant others are oftentimes ranked below parents. This
difference in ranking is due to the fact that the label of romantic partner is
not static, like the label of “parent.” Significant others can be introduced
prior to the study or anywhere during the study, and that label can be removed
or changed at any time as well. This effect is seen to a greater degree with
the term friend. In addition to the conditional assignments of the term, there
are many different stages of rapport that any friend could be at while being
included in a participant’s survey listing. It is also important to note that
despite the volatility (or lack thereof) of these classifications, the growth
of tie strengths inevitably slows as time goes on. This is reflected in Figure
1, as most signals settle after either a transition point or a period of
growth/loss. This indicates that in the absence of a perturbing force (e.g. getting
to know a person, experiencing a breakup), relationships tend to cement
themselves at some standard level of closeness, resulting in some consistent
pattern of communication. A similar observation was made in Saramäki’s work
[23]. In this paper, social signatures between individuals in a social network
were derived directly from communication data. These social signatures were
found to be generally stable and consistent in shape, corroborating the trend
of stability seen in tie strength signals here.
We can further characterize our analysis of relationship interactions by
analyzing the transition difference in signal weight over time. We do this by
finding the greatest change in signal value (the “transition point”), then
taking the difference in the average signal before and after this transition
point. We find the transition difference for every listed individual of the
study participants, with the resultant values binned by relationship label.
The Kernel Density Estimation (Gaussian kernel) of the binned transition
differences for Significant Others, Parents, and Friends is seen in Figure 2b.
We find that for our data, most family-related relationships remain stable
with a primary mode centered about $0$, and a very slight mode about $0.5$,
indicating there is usually either little change or deviation from whatever
the initial signal was, or if there was change, it was positive. Significant
others in NetSense had a more noticeable positive trend, with a primary mode
at $0.5$ and a secondary mode at $0$, with a tail that trails off into the
negative region. This shape marks the dynamic of close (but non-familial)
relationships, which undergo positive changes when the relationship forms, and
negative changes when a break-up occurs. And since this kind of relationship
is a lot more volatile than a family-related relationship, this type of
dynamic occurs more often, and is therefore reflected more by our models. The
distribution of transition differences for friendships, like that of significant
others, has a wider mode at $0$, with a smaller mode at $0.5$ and a tail into
the negative domain. This shape can be attributed to the varying types of
friendships that can be classified under the “friends” relationship class in
the ego network surveys. An example of the variety of friends and their
associated signals can easily be seen in Figure 1 as well, where the
signal shapes for the friend classifications differ visibly, as opposed to the
signals of the kin classifications. Importantly, these significant stability
differences between friendships and kin relationships have been observed
before [24]. In this work, kin relations were found to be more stable and
maintain a higher level of emotional closeness with little maintenance,
compared to friendships which were less stable and required active maintenance
to prevent decay. Our analysis further confirms these observations with the
clear quantification of the difference in tie strength stability in friends
versus family.
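The transition-difference computation used for Figure 2b can be sketched as follows (a hedged illustration; the exact windowing around the transition point in our implementation may differ):

```python
import numpy as np

def transition_difference(signal):
    """Difference in mean signal after vs. before the largest jump.

    The "transition point" is taken to be the index of the greatest
    absolute change between consecutive samples of the signal.
    """
    signal = np.asarray(signal, dtype=float)
    jumps = np.abs(np.diff(signal))
    t = int(np.argmax(jumps)) + 1  # first sample after the transition
    return signal[t:].mean() - signal[:t].mean()
```

A positive value marks a relationship that strengthened at its transition (e.g. a new significant other); a negative value marks decay (e.g. a break-up).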
### Triadic Dynamics
To analyze the dynamics of triadic motifs within our network, we first extract
a set of triads that consistently occur in the communication data across each
semester. With these mappings we then analyzed the tie strengths for the in-
degree and out-degree edges between each of the nodes involved. With the
mappings and the associated tie values obtained, we chose to first focus on
the evolution differences between genders within triadic groups. In
particular, we looked into mixed gender triads, analyzing groups where there
were two males and one female, or two females and one male. We took the
absolute difference between the average degree value of the majority gender
and the average degree value of the edges from the majority gender to the
minority gender. This difference characterizes how the majority treats the
minority and evaluates the interconnected relationships. The differences
across time for both triad types are presented in Figure 2c. As seen in the
plot, majorities do often interact with minorities differently; male
majorities do so to a slightly greater degree than female majorities. The
interactivity difference between a triadic majority and
minority can be viewed as two nodes uniting against one (either directly or
indirectly). This two-against-one behavior is fairly common in sociology [25],
and happens to varying degrees. But how prevalent is this motif?
To answer this question, we can use our evolving tie strength values to
observe the growth of this motif, and compare it to its counterparts: the weak
link triad and the equalist triad. In the weak link scenario, two of the three
nodes are highly connected; however, there is one link (in both directions) in
the triad that is weak compared to the others. In terms of social group
formation, this would mean that while two members are good friends with the
third member (and vice versa), the two members themselves are not as strong
friends with each other. Alternatively, in the equalist scenario, all links in
the triad are fairly equal in value. At every timestep in our dynamic network,
we tally the number of triads that meet one of the three criteria to track the
trends of the triadic dynamics. These trends are shown in Figure 2d. We find
that while equalist triads are oftentimes the most prevalent, there is a
consistently growing trend of two-against-one triads that become more
established as time evolves. This is reflected in Figure 2c, as the absolute
difference in edge weight between majority and minority grows. Overall, this
indicates that the two-against-one motif is a prevalent dynamic in triads as
tie strengths settle. Essentially, in a triadic dynamic, after friendships
begin to cement, there will often be a “third wheel”.
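As an illustration of the three motifs, a triad's six directed edge weights can be classified as below. The symmetrization of directed weights and the tolerance threshold are illustrative assumptions, not taken from our implementation:

```python
def classify_triad(weights, tol=0.15):
    """Classify a triad from its six directed edge weights.

    weights: dict mapping directed node pairs (i, j) to tie strength.
    Returns 'equalist', 'weak_link', or 'two_against_one'.
    The tolerance tol is an illustrative choice.
    """
    # symmetrize: average the two directions of each pair
    pairs = {}
    for (i, j), w in weights.items():
        key = tuple(sorted((i, j)))
        pairs[key] = pairs.get(key, 0.0) + w / 2.0
    lo, mid, hi = sorted(pairs.values())
    if hi - lo <= tol:
        return "equalist"          # all three links roughly equal
    if mid - lo > hi - mid:
        return "weak_link"         # one link clearly weaker than the other two
    return "two_against_one"       # one link (the united pair) stands above the rest
```

In this encoding, the weak link motif has two strong pair weights and one weak one, while the two-against-one motif has a single strong pair and two weaker links to the third member.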
## Discussion
Tie strengths play an important role in the analysis of social networks,
characterizing the relationship between individuals and providing insight into
how those involved will interact with each other [4, 5, 6]. While past works
have delved into predicting tie strength [9, 10], there has been limited
research into the forecasting and subsequent analysis of tie strength that
evolve over long periods of time. The paucity of this kind of research is in
part due to the difficulty of collecting data on social ties as they change,
which is typically done through surveys. Additionally, many past works tend to
implement tie strength measures that depend on many platform-specific
attributes. In this paper we address both problems by introducing a system
that converts easily collected communication data into continuous tie strength
values with machine learning. We design this system to be generalizable,
depending only on the communication data and a sparse number of ego network
surveys. Using a small set of modular questions, we extract social tie
rankings from the surveys, which we use to train our predictive models to
predict those rankings. The trained models can also convert
communication data to continuous signals over time. Given the nature of these
signals that are generated by our models to predict survey rankings, we can
interpret these values as continually evolving tie strengths. And with these
values, we can analyze the relationship dynamics of social networks.
The NetSense study provided long term real-world communication data and
surveys that provided ground truth values at points in time. Using machine
learning, we are able to reconstruct ranked versions of these surveys with a
relatively high average RBO. Provided the resultant continuous tie strength
values from the best-performing models, we are able to effectively track the
evolution of relationships (like identifying the time at which a significant
other enters a participant’s social circle). Furthermore, we show that
relationships with parents (and other close family, like siblings) remain
fairly consistent over time, with significant others coming in second in terms
of tie strength stability. By further analyzing tie strengths around signal
transition points we show that parent tie strengths tend to be very
strong and stable, often going unchanged over time, while significant others
are more likely to experience significant transitions (driven by the initial
formation of the relationship, or a subsequent dissolution). We also establish
the paradigm that without a perturbing force, most relationships reach some
form of resting-state as tie strengths settle.
We further our analysis by looking into triadic dynamics of participants. We
find that in mixed triads, male majorities tended to treat female minorities
differently, while female majorities did the same to male minorities to a
slightly lesser degree. This behavior reflects the two-against-one triad
motif. We delve into this observation further by comparing this behavior
against two other triad motifs (weak link and equality), and observe the
growth of the three in our social networks over time, discovering that the
two-against-one dynamic increases as relationships cement themselves. In
summary, our novel system for predicting continuous tie strength values using
general, platform-agnostic communication data establishes an innovative
paradigm for studying the transitions and trends of interpersonal connections
as they evolve in dynamic social networks. Moving forward, this paper will act
as a foundation for our continued analysis into the evolution of
relationships, and how they characterize past, present, and future
interactions within social networks.
## Methods
### Establishing Social Tie Rankings from Ego Network Surveys
When implementing a tie strength measure, our foremost interest is choosing a
system that allows for generalizability and customization, yet also produces a
measure that is capable of representing a nuanced spectrum of relationship
strength. We additionally want to avoid a reliance on platform-specific
attributes. While Marsden’s work indicates that closeness is the best
predictor of tie strength over other factors (specifically frequency of
communication and duration of relationship) [6], we choose to avoid making any
particular assumptions about predictor importance as well. Therefore, we
introduce tie strength as a ranked list of individuals, where the ordering
determines the depth of the relationship between a listed individual and
the participant that took the ego network survey.
In the ego network surveys the wording of the starting question that prompts a
survey-taker to list individuals with whom they’ve communicated is devoid of
instructions on the ordering of said list. Thus, we cannot rely on the order
of the raw list to be consistently indicative of a survey-taker’s preference
on any of the listed people. Given this, we assume no inherent ordering and
instead craft our own using the follow-up survey questions and the staples of
tie strength characterization as a guide. The answers to the follow-up
questions are mostly selected from a set of answers that indicate a range of
magnitude. For example, when asking after a survey-taker’s perceived
closeness with a listed individual they can choose "Especially close",
"Close", "Less than close", or "Distant". Most questions follow this form,
though there are a few questions like "How long in years have you known this
person?" that have open inputs that can take any rational number. We utilized
four inputs from our data as guided by the previously established definitions
for tie strength: Closeness (how close the survey-taker is to one of the
listed people), duration (how long the survey-taker has known the person),
frequency (how often the survey-taker communicates with the person), and
similarity (subjectively, how similar does the survey-taker think themselves
to be to the person).
To determine an ordering from these mixed inputs without assuming the
importance of any one input over another, we use a pairwise tournament
selection process. Consider an ego network survey taken by one participant.
Every individual listed by the participant is compared against all of the
other listed individuals on a question. An individual that has a greater value
in the question than a counterpart is awarded a point. If two listed
individuals have the same value, both are awarded a point. These points are
aggregated across all the questions and then a ranked list is created by
ordering everyone by their score in descending order. If there is a tie in
aggregate score, the inputs with rational numbers are used to break the tie
(e.g. for two tied individuals, the one who has been known for longer
ultimately wins). After all tournaments are complete for that ego network
survey, we are left with our top $k$ social ties ranking for the survey-taker,
where the orderings indicate the importance of the listed individuals, as
determined by the questions in the ego network survey. We repeat this process
for all participants for all surveys throughout the study’s timeline.
Ultimately, the simplicity of this system ensures there are no assumptions
made about the weighting of the questions. Additionally, this system is not
dependent on a static set of features, since questions can be removed,
replaced, or added and this won’t change the architecture of how tie strength
is generated in the end.
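The tournament can be sketched as below. The input encoding (ordinal values per question, with one rational-valued question such as years known doubling as the tiebreaker) is an illustrative assumption:

```python
def rank_alters(answers, tiebreaker_key):
    """Rank listed alters by a pairwise tournament over survey answers.

    answers: dict alter -> dict question -> value (higher = stronger tie);
    tiebreaker_key: a question with a rational-number answer used to break
    ties in aggregate score. Names here are illustrative.
    """
    alters = list(answers)
    questions = {q for a in alters for q in answers[a]}
    scores = {a: 0 for a in alters}
    for q in questions:
        for a in alters:
            for b in alters:
                if a is b:
                    continue
                # a point for a strictly greater value; equal values
                # award a point to both sides of the comparison
                if answers[a].get(q, 0) >= answers[b].get(q, 0):
                    scores[a] += 1
    # descending aggregate score, ties broken by the rational-valued question
    return sorted(alters,
                  key=lambda a: (scores[a], answers[a].get(tiebreaker_key, 0)),
                  reverse=True)
```

Because the ranking depends only on pairwise comparisons, questions can be added or removed without changing this procedure, as noted above.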
### The Bow Tie Overlap Model
Since overlapping (and non-overlapping) friend groups have been shown to be a
powerful tool in understanding tie strengths between people [17, 13],
implementing some measure of this kind of overlap is important to test its
applicability within our datasets. Therefore, we introduce the weighted
overlap metric from Mattie’s work in bow tie frameworks [13] given that (as
mentioned earlier) this feature was highly informative for a couple of their
tie strength prediction machine learning models. Weighted overlap is defined
as below for two individuals $i$ and $j$:
$\widetilde{o}_{ij}=\frac{\sum_{k\in n_{ij}}(w_{ik}+w_{jk})}{s_{i}+s_{j}-2w_{ij}}$ (1)
where $n_{ij}$ is the set of shared friends between $i$ and $j$; that is, the overlap
in the $K=1$ neighbors of $i$ and $j$. We interpret the weights $w_{ij}$ here
to be the total number of events between some $i$ and $j$ before the time of
the survey being evaluated for reconstruction. And $s_{i}$ ($s_{j}$) is the
total number of events generated by $i$ ($j$) before the time of the survey.
Therefore, if all the individuals that have communicated with $i$ have also
communicated with $j$ and vice versa, then $\widetilde{o}_{ij}=1$. And so for
some target individual $i$, we iteratively consider every individual they have
communicated with as $j$ and sample each $\widetilde{o}_{ij}$ before the considered
survey time. We then rank by value of $\widetilde{o}_{ij}$ to predict the
ground truth survey ordering.
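A direct implementation of Eq. 1 from pairwise event counts might look like this (the event-dictionary encoding is an illustrative choice):

```python
from collections import defaultdict

def weighted_overlap(i, j, events):
    """Weighted neighborhood overlap (Eq. 1).

    events: dict mapping unordered pairs frozenset({a, b}) to the number
    of communication events between a and b before the survey time.
    """
    w = lambda a, b: events.get(frozenset((a, b)), 0)
    # node strength s: total events per person; neighbors: who they contacted
    strength = defaultdict(int)
    neighbors = defaultdict(set)
    for pair, count in events.items():
        a, b = tuple(pair)
        strength[a] += count
        strength[b] += count
        neighbors[a].add(b)
        neighbors[b].add(a)
    shared = (neighbors[i] & neighbors[j]) - {i, j}   # n_ij
    numer = sum(w(i, k) + w(j, k) for k in shared)
    denom = strength[i] + strength[j] - 2 * w(i, j)
    return numer / denom if denom else 0.0
```

As in the text, when everyone who has communicated with $i$ has also communicated with $j$ (and vice versa), the value is 1.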
### Machine Learning Models
The primary models of this paper are our machine learning models. The machine
learning models consider one target person at a time and make pairwise
comparisons between the people with whom the target person has any
communications history. Specifically, we consider the Ensemble model (which
makes these pairwise comparisons with a random forest classifier), and the
LSTM model (a two-channel long short-term memory recurrent neural network).
For both models we use a method of ranking called Borda count [26] for
pairwise comparisons to generate the predicted ranked lists that we compare
against the ground truth. This method is commonly viewed as “an information-
theoretically optimal procedure” for recovering the top $k$ ranked items based
on noisy comparisons that emphasizes simplicity, optimality, and robustness
with regards to the underlying pairwise-comparison probability generation.
Given a collection of $n$ people with whom the target person has interacted,
indexed by the set $[n]\equiv\{1,\ldots,n\}$, we create a matrix $M$ of dimensions
$n\times n$ where $M_{ij}$ is the probability of $i$ having a greater social
tie with the target person than $j$ as determined by the random forest or LSTM
using $i$’s and $j$’s communication history with the target person up to the
time of consideration. The diagonal of $M$, where $i=j$, is set to a
probability of $\frac{1}{2}$. Now to find the Borda score, we must keep track
of wins and losses in the pairwise tournament in $M$. To do this, we transform
$M$ into $M^{\prime}$ using Eq. 2.
$M^{\prime}_{ij}=\begin{cases}1&M_{ij}>\frac{1}{2}\\ 0&M_{ij}=\frac{1}{2}\\ -1&M_{ij}<\frac{1}{2}\end{cases}$ (2)
The Borda count itself for $i\in[n]$, which is used to form the actual
ranking, is calculated using Eq. 3.
$B_{i}=\sum\limits_{j=1}^{n}M^{\prime}_{ij}$ (3)
We then find $B_{i}$ for all $i\in[n]$ and then order by magnitude. This
becomes the current predicted ranking that we compare against ground truth.
Now, to generate the signals for Figure 1 and our network analysis we can
convert the count to the winning percentage with Eq. 4. In the equation,
$w_{i}$, $l_{i}$, and $t_{i}$ are the number of head to head wins, losses, and
ties for $i$. We generate the winning percentage incrementally across the
entire NetSense study, and the resultant time series is then used as the
dynamic edge weights between an ego and those they’ve communicated with.
$\text{WinningPercentage}_{i}=\frac{B_{i}+(n-1)}{2(n-1)}=\frac{w_{i}+0.5\cdot
t_{i}}{w_{i}+l_{i}+t_{i}}$ (4)
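Eqs. 2-4 can be computed directly from the pairwise probability matrix $M$:

```python
import numpy as np

def borda_ranking(M):
    """Borda counts and winning percentages from pairwise probabilities.

    M[i][j] is the predicted probability that person i has a stronger tie
    to the ego than person j; the diagonal is 1/2.
    """
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    # Eq. 2: +1 for a win, 0 for a tie, -1 for a loss
    M_prime = np.sign(M - 0.5)
    # Eq. 3: Borda count per individual
    B = M_prime.sum(axis=1)
    # Eq. 4: winning percentage, the continuous tie strength signal
    win_pct = (B + (n - 1)) / (2 * (n - 1))
    ranking = list(np.argsort(-B))  # indices in descending Borda order
    return B, win_pct, ranking
```

The second form of Eq. 4 follows because $B_i = w_i - l_i$ and $w_i + l_i + t_i = n - 1$ for each individual.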
#### Ensemble Model
The Ensemble model uses a random forest classifier to generate the pairwise
comparison probabilities in $M$. The classifiers for the best performing
Ensemble model used 100 weak classifiers. These classifiers are trained using
a specific feature vector that is used to predict which individual the
target person will have a greater social tie to. Consider the features for
$i\in[n]$ (denoted as $f_{i}$) as the four baseline class features computed
for just calls and just texts. These features are frequency, recency,
duration, and volume as described in the Models section. Integrating the Bow Tie
Overlap attribute significantly brings down overall performance, and so was
excluded in the final Ensemble model. We take the difference of these two
feature vectors as the given feature vector for the classifiers, defined as
$\text{DifferenceFeatureVector}(x,y)=f_{x}-f_{y}$ given $x,y\in[n]$.
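The difference feature vector can be sketched as below; these difference vectors are what the 100-tree random forest is trained on. The dictionary encoding of per-person features is an illustrative assumption:

```python
import numpy as np

def difference_feature(features, x, y):
    """Pairwise difference feature vector fed to the random forest.

    features: dict person -> vector of the eight baseline features
    (frequency, recency, duration, volume, each computed separately
    for calls and for texts).
    """
    return np.asarray(features[x], dtype=float) - np.asarray(features[y], dtype=float)
```

Each comparison probability in $M$ then comes from the forest's predicted class probability for the corresponding difference vector.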
#### LSTM Model
For our LSTM models, $f_{i}$ is a two-channel time series. The two channels
are the histories of calls and texts, both binned into 21-day intervals. The
feature vector used by the LSTM is the time series for the two individuals
being compared stacked on each other, resulting in a four-channel time series
that spans from the first interaction between the target person and either
person in the comparison to the time at which the social tie is being
evaluated.
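The stacking of binned histories into the LSTM's input can be sketched as follows (left-padding shorter histories with zeros to a common length is an illustrative assumption):

```python
import numpy as np

def lstm_input(calls, texts, i, j):
    """Stack two alters' binned call/text histories into a 4-channel series.

    calls, texts: dict alter -> 1-D array of event counts per 21-day bin,
    aligned so the last bin ends at the evaluation time. Shorter histories
    are left-padded with zeros.
    """
    channels = [np.asarray(calls[i]), np.asarray(texts[i]),
                np.asarray(calls[j]), np.asarray(texts[j])]
    T = max(len(c) for c in channels)
    padded = [np.pad(c, (T - len(c), 0)) for c in channels]
    return np.stack(padded, axis=-1)  # shape (T, 4), one timestep per bin
```

The resulting (timesteps, channels) array is the per-comparison input from which the LSTM produces the pairwise probabilities in $M$.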
## References
* [1] Stephen P Borgatti, Ajay Mehra, Daniel J Brass, and Giuseppe Labianca. Network analysis in the social sciences. science, 323(5916):892–895, 2009.
* [2] James A Kitts, Eric Quintane, and ESMT Berlin. Rethinking social networks in the era of computational social science, 2020.
* [3] Mark T Rivera, Sara B Soderstrom, and Brian Uzzi. Dynamics of dyads in social networks: Assortative, relational, and proximity mechanisms. annual Review of Sociology, 36:91–115, 2010.
* [4] Mark S Granovetter. The strength of weak ties. In Social networks, pages 347–367. Elsevier, 1977.
* [5] David Krackhardt, N Nohria, and B Eccles. The strength of strong ties. Networks in the knowledge economy, 82, 2003.
* [6] Peter V Marsden and Karen E Campbell. Measuring tie strength. Social forces, 63(2):482–501, 1984.
* [7] Verónica Policarpo. What is a friend? an exploratory typology of the meanings of friendship. Social Sciences, 4(1):171–191, 2015.
* [8] J-P Onnela, Jari Saramäki, Jorkki Hyvönen, György Szabó, David Lazer, Kimmo Kaski, János Kertész, and A-L Barabási. Structure and tie strengths in mobile communication networks. Proceedings of the national academy of sciences, 104(18):7332–7336, 2007.
* [9] Jason J Jones, Jaime E Settle, Robert M Bond, Christopher J Fariss, Cameron Marlow, and James H Fowler. Inferring tie strength from online directed behavior. PloS one, 8(1), 2013.
* [10] Eric Gilbert and Karrie Karahalios. Predicting tie strength with social media. In Proceedings of the SIGCHI conference on human factors in computing systems, pages 211–220, 2009.
* [11] M. Conti, A. Passarella, and F. Pezzoni. A model for the generation of social network graphs. In 2011 IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, pages 1–6, 2011.
* [12] Jason Wiese, Jun-Ki Min, Jason I Hong, and John Zimmerman. "you never call, you never write" call and sms logs do not always indicate tie strength. In Proceedings of the 18th ACM conference on computer supported cooperative work & social computing, pages 765–774, 2015.
* [13] Heather Mattie, Kenth Engø-Monsen, Rich Ling, and Jukka-Pekka Onnela. Understanding tie strength in social networks using a local “bow tie” framework. Scientific reports, 8(1):1–9, 2018.
* [14] Rachael Purta, Stephen Mattingly, Lixing Song, Omar Lizardo, David Hachen, Christian Poellabauer, and Aaron Striegel. Experiences measuring sleep and physical activity patterns across a large college cohort with fitbits. In Proceedings of the 2016 ACM international symposium on wearable computers, pages 28–35, 2016.
* [15] Bethany L Blair, Anne C Fletcher, and Erin R Gaskin. Cell phone decision making: Adolescents’ perceptions of how and why they make the choice to text or call. Youth & Society, 47(3):395–411, 2015.
* [16] George Homans. The Human Group, page 133. Harcourt, Brace & World, 1950.
* [17] Elizabeth Bott and Elizabeth Bott Spillius. Family and social network: Roles, norms and external relationships in ordinary urban families. Routledge, 2014.
* [18] Leo Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
* [19] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
* [20] William Webber, Alistair Moffat, and Justin Zobel. A similarity measure for indefinite rankings. ACM Transactions on Information Systems (TOIS), 28(4):1–38, 2010.
* [21] Comandur Seshadhri, Tamara G Kolda, and Ali Pinar. Community structure and scale-free collections of erdős-rényi graphs. Physical Review E, 85(5):056109, 2012.
* [22] Yi-Fan Chen and James E Katz. Extending family to school life: College students’ use of the mobile phone. International Journal of Human-Computer Studies, 67(2):179–191, 2009.
* [23] Jari Saramäki, Elizabeth A Leicht, Eduardo López, Sam GB Roberts, Felix Reed-Tsochas, and Robin IM Dunbar. Persistence of social signatures in human communication. Proceedings of the National Academy of Sciences, 111(3):942–947, 2014.
* [24] Sam GB Roberts and Robin IM Dunbar. The costs of family and friends: an 18-month longitudinal study of relationship maintenance and decay. Evolution and Human Behavior, 32(3):186–197, 2011.
* [25] Theodore Caplow. Two against one: Coalitions in triads. Prentice-Hall, 1968.
* [26] Nihar B Shah and Martin J Wainwright. Simple, robust and optimal ranking from pairwise comparisons. The Journal of Machine Learning Research, 18(1):7246–7283, 2017\.
## Acknowledgements
This work was sponsored in part by DARPA under contract W911NF-17-C-0099, the
Army Research Office (ARO) under contract W911NF-17-C-0099, and the Office of
Naval Research (ONR) under grant N00014-15-1-2640. The views and conclusions
contained in this document are those of the authors and should not be
interpreted as representing the official policies either expressed or implied
of the U.S. Government.
## Author contributions statement
B.K.S and J.F. conceived the study. J.F. and R.D. formalized and implemented
all models. B.K.S., J.F., and R.D. analyzed model performance. J.F. designed
and implemented all evolving tie strength analytical tests and evaluated the
results. B.K.S., J.F., and R.D. wrote the first draft of the manuscript. All
authors read and approved the paper.
# Analyzing Team Performance with Embeddings from Multiparty Dialogues
††thanks: This material is based upon work supported by the Defense Advanced
Research Projects Agency (DARPA) under Contract No. W911NF-20-1-0008.
Ayesha Enayet Department of Computer Science
University of Central Florida
Orlando, USA
<EMAIL_ADDRESS>Gita Sukthankar Department of Computer Science
University of Central Florida
Orlando, USA
<EMAIL_ADDRESS>
###### Abstract
Good communication is indubitably the foundation of effective teamwork. Over
time teams develop their own communication styles and often exhibit
entrainment, a conversational phenomenon in which humans synchronize their
linguistic choices. This paper examines the problem of predicting team
performance from embeddings learned from multiparty dialogues such that teams
with similar conflict scores lie close to one another in vector space.
Embeddings were extracted from three types of features: 1) dialogue acts, 2)
sentiment polarity, and 3) syntactic entrainment. Although all of these
features can be used to effectively predict team performance, their utility
varies by the teamwork phase. We separate the dialogues of players playing a
cooperative game into three stages: 1) early (knowledge building), 2) middle
(problem-solving), and 3) late (culmination). Unlike syntactic entrainment, both dialogue act and
sentiment embeddings are effective for classifying team performance, even
during the initial phase. This finding has potential ramifications for the
development of conversational agents that facilitate teaming.
###### Index Terms:
teamwork, multiparty dialogues, entrainment, sentiment analysis, dialogue
acts, embeddings
## I Introduction
The aim of our research is to create agents who can assist human teams by
intervening when teamwork goes awry. To do this, it is important to be able to
rapidly assess the status of team performance through “thin-slicing”, making
accurate classifications from short behavior samples; Jung suggests that
developing this capability would remove the need for continuous
team monitoring systems [1]. Ambady and Rosenthal demonstrate that many types
of social interactions remain sufficiently stable that even a small sample is
meaningful for predicting long-term outcomes; the most famous application of
this theory is thin-slicing marital interactions to predict divorce
outcomes [2, 3]. Rather than developing specific measures for predicting
future team conflict, we demonstrate that an embedding grouping teams with
similar conflict levels can be learned directly from multiparty dialogue. An
advantage is that this approach avoids the necessity of collecting advance
data on team members, such as personality traits or training records.
This paper compares the performance of three types of embeddings extracted
from: 1) dialogue acts, 2) sentiment polarity, and 3) syntactic entrainment;
these features were selected based on previous work on team communications and
group problem-solving. Dialogue acts capture the interactive pattern between
speakers in multiparty communication [4]. During dialogue act classification,
utterances are grouped according to their communication purpose. Sentiment
polarity measures the attitude or emotion of the speaker during conversation;
it can be used to detect disagreement. Entrainment is the natural tendency of
the speakers to adopt a similar style during a conversation, causing them to
achieve linguistic alignment. There are several types of entrainment including
lexical choice [5], style [6], pronunciation [7], and many others [8]. Reitter
and Moore demonstrated that syntactic entrainment, based on alignment of
lexical categories, can be used to predict success in task-oriented dialogues
[5].
Good team communication exhibits all these characteristics: greater emphasis
on problem solving than arguing, positive sentiment, and communication
synchronization [9]. Our research was conducted on the Teams corpus [10] which
consists of player dialogue during a cooperative game. One advantage of
studying a clearly defined, time-bounded team task is that the dialogues can
be divided into teamwork phases: 1) early (knowledge building), 2) middle
(problem solving), and 3) late (culmination). For thin-slicing, we seek to
predict the team performance from the initial teamwork stages. The Teams
corpus includes team conflict scores, which measure the amount of disagreement
that occurred during gameplay. Our hypotheses are:
* •
H1: an embedding leveraging dialogue acts will be useful for classifying team
performance at all phases since it directly detects utterances related to
conflict (eristic dialogues).
* •
H2: sentiment analysis will consistently reveal team conflict and thus be a
good predictor of performance.
* •
H3: the entrainment embedding will be predictive when the entire dialogue is
considered, but will be less useful at analyzing early phases before
entrainment has been established.
Embeddings map high-dimensional data into a low-dimensional space while
retaining the most salient structure, making it possible to apply machine
learning to large inputs by representing them as dense vectors. This paper
presents our approach for
extracting embeddings from multiparty dialogues that encode team conflict. The
next section describes the rich literature on analyzing team communication and
multiparty dialogues.
## II Related Work
Team communication, whether spoken or written, is a critical element of
collaborative tasks and can be studied in a variety of ways. Semantic analysis
centers on the meaning of utterances, while pragmatics involves identifying
speech acts [11]; both analytic approaches are important and often occur in
parallel. In many studies of team communication, this analysis is arduously
done through hand coding the utterances.
Parsons et al. [12] contrast two different schemes to code utterances in team
dialogues as part of their long term research goal of developing a virtual
assistant for human teams. Their comparison illustrates the benefits and
problems of the Walton and Krabbe typology [13], which includes categories for
information-seeking, inquiry, negotiation, persuasion, deliberation, and
eristic, but does not consider the context in which the utterance occurs. The
McGrath theory of group behavior [14] focuses on modes of operation:
inception, problem-solving, conflict resolution, and execution. When applying
the McGrath theory of group behavior, utterance classification is modified by
conversational context.
Sukthankar et al. also used an explicit team utterance coding scheme towards
the problem of agent aiding of ad hoc, decentralized human teams to improve
team performance on time-stressed group tasks [15]. Unlike teamwork studies,
we do not specifically map individual utterances to team communication
categories, but leverage dialogue act classification models to identify
features that are indicative of team conflict. Shibani et al. [16] discussed
some of the practical challenges in designing an automated assessment system
to provide students feedback on their teamwork competency: 1) dialogue pre-
processing, 2) assessing teamwork chat text, and 3) classifying teamwork
dimensions. They evaluated the performance of rule-based systems vs.
supervised machine learning (SVM) at classifying coordination, mutual
performance monitoring, team decision making, constructive conflict, team
emotional support, and team commitment. Even with dataset imbalance, the SVM
model generally outperformed the hand coded rules. Our proposed method can
also be used to assist human teams by proactively warning them of deficiencies
during the early phases of team tasks, without the onerous data labeling
requirements.
Other analytic techniques focus on linguistic coordination between speakers in
groups. For instance, Danescu et al. studied the effect of power differences
on lexical category choices during goal-oriented discussion [17]. This is one
form of entrainment in which the speakers preferentially select function-word
classes used by other group members. Our paper uses a dataset (Teams corpus),
that was created to study entrainment in teams [10]. Rahimi and Litman
demonstrated a method for learning an entrainment embedding to predict team
performance [18]; we use a modified version of their technique to express
syntactic entrainment. However, since entrainment develops over time, we
compare the performance of entrainment at early vs. late task phases.
Furthermore, they focused only on syntactic/lexical features of utterances,
not semantics.
Sentiment analysis has been applied to the study of group dynamics; for
instance, researchers have leveraged sentiment features to detect communities
in social networks [19, 20]. Our work demonstrates the utility of sentiment
features for predicting team conflict and shows that the sentiment-based
embedding is useful during all teamwork phases. We rely exclusively on the
multiparty team dialogues; however there have been many attempts to predict
team performance using other types of multimodal features. TCdata, a team
cooperation dataset, includes both audio and video recordings of teams
performing cooperative tasks [21]. Liu et al. explicitly extracted 159
features from team speaking cues, individual speaking time statistics, and
face-to-face interaction cues to predict team performance on this dataset.
Several studies [22, 23] have shown team member personality traits to be
useful predictors of conflict and team performance. Yang et al. used
individual personality traits to predict the performance of final year student
project teams using neural networks [22]. Omar et al. developed a student
performance prediction model that included both personality types and team
personality diversity [23]. Even though these additional data sources can be
highly predictive, they are rarely available in real-world team scenarios,
unlike multi-party dialogue which is often self-archived to preserve
organizational memory.
## III Method
This section describes our procedure for computing embeddings using doc2vec
[24], an unsupervised method that is used to create a vector representation of
the team dialogue. We compare the performance of different possible inputs to
doc2vec: 1) dialogue acts, 2) sentiment analysis, and 3) syntactic
entrainment.
### III-A Dialogue Acts
Dialogue acts can be created from the semantic classification of dialogue at
the utterance level to identify the intent of the speaker. A transfer learning
approach was used to tag utterances of the Teams corpus using the DAMSL
(Discourse Annotation and Markup System of Labeling) tagset. Figure 1 shows
the architecture of our dialogue act classifier, which was constructed using
the Universal Sentence Encoder; we selected USE for its ability to achieve
consistently good performance across multiple NLP tasks [25]. There are two
different variants of the model: 1) a transformer architecture, which exhibits
high accuracy at the cost of increased resource consumption and 2) a deep
averaging network that requires few resources and makes small compromises for
efficiency. The former uses the attention-based, context-aware encoding
subgraph of the transformer architecture. The model outputs a 512-dimensional vector. The
deep averaging network works by averaging words and bigram embeddings to use
as an input to a deep neural network. The models are trained on web news,
Wikipedia, web question-answer pages, discussion forums, and the Stanford
Natural Language Inference (SNLI) corpus, and are freely available on TF Hub.
Figure 1: Dialogue Act Classifier Architecture.
We selected the USE Transformer-based Architecture model with three dense
layers and a softmax activation function. Figure 1 shows the architecture of
our DA classification model, which achieves a validation accuracy of 70%.
The model was fine-tuned using the Switchboard Dialogue Act Corpus (SwDA)
dataset. SwDA is one of the most popular public datasets for DA
classification. It consists of 1155 human-to-human telephone speech
conversations, tagged using 42 tags from the DAMSL tagset. Table I shows the
statistics of both SwDA and the Teams corpus.
TABLE I: Dataset Statistics Dataset | #Utterances | #Tokens
---|---|---
SwDA | 200,052 | 19K
Teams Corpus | 110,206 | 573,200
TABLE II: SwDA Dataset Sample Speaker | Utterance | DA | Description
---|---|---|---
A | I don’t, I don’t have any kids. | sd | Statement-non-Opinion
A | I, uh, my sister has a, she just had a baby, | sd | Statement-non-Opinion
A | he’s about five months old | sd | Statement-non-Opinion
A | and she was worrying about going back to work and what she was going to do with him and – | sd | Statement-non-Opinion
A | Uh-huh. | b | Acknowledge
A | do you have kids? | qy | Yes-No-Question
B | I have three. | na | Affirmative non-yes Answer
A | Oh, really? | bh | Backchannel in question form
TABLE III: Teams Corpus Example Speaker | Utterance | DA | Description
---|---|---|---
A | Ok I’m going to | sd | Statement-non-Opinion
A | shore up these two. | sd | Statement-non-Opinion
B | Good move. | ba | Appreciation
A | Then we got one and then I guess I can also | sd | Statement-non-Opinion
A | Can I use my powers twice in one play | sd | Statement-non-Opinion
C | Mm | b | Acknowledge (Backchannel)
B | yes | ny | Yes answer
Table II shows examples from the SwDA training dataset, and Table III shows
examples from Teams corpus. Each team dialogue generates a unique sequence
where each element of the sequence represents the dialogue act of the
corresponding utterance. This sequence of dialogue acts is then used as an
input to doc2vec algorithm to create the embedding.
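As a concrete illustration, the Table III example can be reduced to the tag sequence that doc2vec consumes. This is a minimal sketch of the preprocessing step, not the authors' exact code:

```python
# Sketch (not the authors' exact pipeline): project a tagged team dialogue
# onto its dialogue-act token sequence. The utterances and DAMSL tags below
# are the Teams corpus example from Table III.
tagged_dialogue = [
    ("A", "Ok I'm going to", "sd"),
    ("A", "shore up these two.", "sd"),
    ("B", "Good move.", "ba"),
    ("A", "Then we got one and then I guess I can also", "sd"),
    ("A", "Can I use my powers twice in one play", "sd"),
    ("C", "Mm", "b"),
    ("B", "yes", "ny"),
]

def da_sequence(dialogue):
    """Keep only the DA tag of each (speaker, utterance, tag) triple."""
    return [tag for _speaker, _utterance, tag in dialogue]

tokens = da_sequence(tagged_dialogue)
print(tokens)  # ['sd', 'sd', 'ba', 'sd', 'sd', 'b', 'ny']
```

In practice each team's full tag sequence would then be wrapped as one document (e.g. a gensim `TaggedDocument`) before training doc2vec; that training step is omitted to keep the sketch dependency-free.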
### III-B Sentiment Analysis
Another option is to represent the team dialogue as a series of changes in the
emotional state of the team. This can be done by applying sentiment analysis
to the individual utterances. Sentiment analysis is the task of predicting the
emotion or attitude of the speaker; we use the TextBlob python
implementation [26] to determine the sentiment polarity of each utterance in
the dialogue. The polarities are float values between -1 and 1, where
negative, zero, and positive values represent negative, neutral, and positive
sentiment, respectively. For each team, the unique sequence of these
polarities is used as input to doc2vec, where each
element of the sequence represents the polarity of the corresponding
utterance. This representation encodes transitions in the emotional state of
the team across the duration of the task.
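The polarity-sequence construction can be sketched as follows. The paper uses TextBlob's polarity model; here a toy word lexicon (our assumption, with illustrative scores) stands in so the example runs without external dependencies:

```python
# Sketch of the polarity-sequence representation. The paper uses TextBlob's
# polarity scores; a toy lexicon stands in here -- its values are
# illustrative, not TextBlob's.
TOY_LEXICON = {"good": 0.7, "great": 0.8, "bad": -0.7, "terrible": -1.0}

def polarity(utterance):
    """Mean lexicon score of matched words in the utterance (0.0 if none)."""
    scores = [TOY_LEXICON[w] for w in utterance.lower().split() if w in TOY_LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

dialogue = ["Good move.", "Mm", "That was a terrible play"]
sequence = [polarity(u) for u in dialogue]
print(sequence)  # [0.7, 0.0, -1.0]
```

The resulting per-team polarity sequence is what would be fed to doc2vec in place of the raw text.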
### III-C Entrainment
Entrainment is one form of linguistic coordination in which team members adopt
similar speaking styles during conversation. Here we evaluate the performance
of a syntactic entrainment embedding based on Rahimi and Litman’s [18] work
that encodes the propensity of subsequent speakers to make similar lexical
choices. Eight lexical categories were used: noun (NN), adjective (JJ), verb
(VB), adverb (RB), coordinating conjunction (CC), cardinal digit (CD),
preposition/subordinating conjunction (IN), and personal pronoun (PRP). To
calculate the entrainment between two speakers we follow the method proposed
by Danescu et al. [17], shown in Equation 1. $Ent_{c}(x,y)$ is the entrainment
of speaker $y$ to speaker $x$, $c$ is the lexical category, $e_{yx}^{c}$
represents the event where speaker $y$’s utterance immediately follows speaker
$x$’s utterance and contains $c$, and $e_{x}^{c}$ is the event where the
utterance of speaker $x$ (spoken to $y$) contains $c$.
$Ent_{c}(x,y)=p(e_{yx}^{c}\mid e_{x}^{c})-p(e_{yx}^{c})$ (1)
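Equation 1 can be estimated by simple counting over adjacent utterance pairs. The sketch below assumes each utterance has already been reduced to the set of lexical categories it contains:

```python
# Sketch: estimate Ent_c(x, y) by counting over adjacent utterance pairs in
# which speaker y immediately follows speaker x. Each utterance is given as
# the set of lexical categories (coarse POS tags) it contains.
def entrainment(pairs, c):
    """pairs: list of (x_categories, y_categories) for adjacent x->y exchanges.
    Returns p(c in y's reply | c in x's utterance) - p(c in y's reply)."""
    with_c_in_x = [y for x, y in pairs if c in x]
    p_cond = sum(c in y for y in with_c_in_x) / len(with_c_in_x)
    p_base = sum(c in y for _x, y in pairs) / len(pairs)
    return p_cond - p_base

exchanges = [
    ({"NN", "VB"}, {"NN"}),        # x used NN, y echoed NN
    ({"NN"},       {"NN", "JJ"}),  # x used NN, y echoed NN
    ({"VB"},       {"VB"}),        # neither used NN
    ({"JJ"},       {"NN"}),        # x did not use NN, but y did
]
ent = entrainment(exchanges, "NN")
print(ent)  # 1.0 - 0.75 = 0.25
```

A positive value indicates that $y$ is more likely to use category $c$ right after $x$ has used it, i.e. positive entrainment of $y$ toward $x$.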
The NLTK part-of-speech (POS) tagger was used to tag all the utterances with
their respective lexical categories. A directed weighted graph was generated
for each dialogue linking speakers with positive entrainment. The structure of
this graph encodes the entrainment relationships between team members. To
translate the graph into a feature representation, six graph centrality kernel
functions were applied to represent each node of the team graph. The kernel
functions are: (1) PageRank (2) betweenness centrality (3) closeness
centrality (4) degree centrality (5) in degree centrality (6) Katz centrality.
To create the final team representation, the vectors of individual nodes were
averaged, and doc2vec was applied to create the embedding. This method
corresponds to the Kernel version of Entrainment2Vec [18] and achieves
comparable performance when applied to the whole dialogue.
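The kernel step can be sketched without a graph library. Only two of the six centralities (in-degree and out-degree centrality) are shown, and the node averaging follows the description above; this is an illustrative reduction, not the paper's implementation:

```python
# Minimal sketch of the kernel step: compute per-node centralities on the
# directed entrainment graph, then average node vectors into one team vector.
# Only in-/out-degree centrality are shown; the paper uses six kernels
# (PageRank, betweenness, closeness, degree, in-degree, Katz).
def team_vector(nodes, edges):
    n = len(nodes)
    in_deg = {v: 0 for v in nodes}
    out_deg = {v: 0 for v in nodes}
    for src, dst in edges:  # each edge is a positive-entrainment link
        out_deg[src] += 1
        in_deg[dst] += 1
    # degree centrality normalizes by the maximum possible degree, n - 1
    node_vecs = [(in_deg[v] / (n - 1), out_deg[v] / (n - 1)) for v in nodes]
    # average each feature over the team's members
    return tuple(sum(col) / n for col in zip(*node_vecs))

nodes = ["A", "B", "C"]
edges = [("A", "B"), ("B", "A"), ("A", "C")]
vec = team_vector(nodes, edges)
print(vec)  # (0.5, 0.5)
```

With all six kernels, each node gets a six-dimensional vector and the averaged team vector is what doc2vec then embeds.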
Our implementation is slightly different from that of [18] and [17] in two
aspects. First, we are using the NLTK POS tagger to assign lexical categories
to the utterances instead of using LIWC-derived categories. Second, we are
using six graph kernel algorithms instead of ten. We observed that using more
graph kernel functions on graphs that consist of three to four team members
does not improve performance. The POS tagging reflects the sentence’s
syntactic structure; we have carefully selected the POS categories that are
consistent with the conventional English part of speech categories used by
[18] and [17]. While calculating the entrainment, we do not consider the
actual word and its context; therefore, this embedding only captures syntactic
features, not semantics.
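The reduction of Penn Treebank tags (as emitted by the NLTK POS tagger) to the eight coarse categories can be sketched with a prefix rule; the specific mapping below is our assumption about a reasonable reduction, not code taken from the paper:

```python
# Sketch: reduce Penn Treebank tags (as produced by nltk.pos_tag) to the
# eight coarse lexical categories used for entrainment. The prefix rule
# (e.g. NNS/NNP -> NN, VBD/VBG -> VB) is an assumed reduction, not the
# paper's code.
COARSE = ["NN", "JJ", "VB", "RB", "CC", "CD", "IN", "PRP"]

def coarse_tag(penn_tag):
    for c in COARSE:
        if penn_tag.startswith(c):
            return c
    return None  # tag outside the eight tracked categories (e.g. DT)

tags = ["NNS", "VBD", "JJR", "PRP$", "DT", "IN"]
print([coarse_tag(t) for t in tags])  # ['NN', 'VB', 'JJ', 'PRP', None, 'IN']
```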
### III-D Doc2vec
Le and Mikolov[24] introduced doc2vec as an unsupervised learning algorithm to
generate distributed vector representations of text of arbitrary size; it is
inspired by the word2vec model[27]. They proposed two different models for
learning numerical representations of text: 1) Distributed Memory Model of
Paragraph Vectors (PV-DM) 2) paragraph vector with a distributed bag of words
(PV-DBOW).
Distributed Memory Model of Paragraph Vectors (PV-DM) uses both word vectors
and paragraph vectors to predict the next word. It attempts to learn paragraph
vectors that can predict the word given different contexts sampled from the
text. The context size is a tuneable parameter, and a sliding window of
arbitrary context size generates multiple context samples. Doc2vec works by
averaging these word vectors and paragraph vectors to predict the next word.
It employs stochastic gradient descent to learn word and paragraph vectors.
The resultant paragraph vectors serve as a feature vector of the corresponding
paragraph and can be used as an input to machine learning models like SVM and
logistic regression.
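The sliding-window sampling behind PV-DM can be illustrated without a library: each training sample pairs a paragraph id and a context window with the next word to predict. The vector averaging and gradient updates (e.g. in gensim's Doc2Vec) are omitted, and the id `team_07` is a made-up example:

```python
# Sketch of PV-DM training-sample generation: a sliding window of context
# tokens, together with the paragraph id, predicts the next token. The
# actual averaging of word/paragraph vectors and the SGD updates are omitted.
def pv_dm_samples(paragraph_id, tokens, window=2):
    samples = []
    for i in range(window, len(tokens)):
        context = tokens[i - window:i]
        samples.append((paragraph_id, context, tokens[i]))
    return samples

samples = pv_dm_samples("team_07", ["sd", "sd", "ba", "sd", "b"], window=2)
print(samples[0])   # ('team_07', ['sd', 'sd'], 'ba')
print(len(samples)) # 3
```

Because the paragraph id participates in every sample, the learned paragraph vector absorbs whatever sequence information the word vectors alone cannot explain.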
Paragraph vector with a distributed bag of words (PV-DBOW) ignores the context
words and attempts to predict randomly selected words from the paragraph. At
each iteration of stochastic gradient descent, it classifies a randomly
selected word from the sampled text window using paragraph vectors.
Instead of using doc2vec on the raw team dialogues, doc2vec was applied to the
output of the dialogue act classifier, sentiment analysis, and syntactic
entrainment. This procedure enables us to disentangle the contribution of
different elements of team communication at predicting conflict.
## IV Dataset
Our evaluation was conducted on the Teams corpus dataset collected by Litman
et al. [10]. It contains 124 team dialogues from 62 different teams, playing
two different collaborative board games. The length of the dialogues varies
from 291 to 2124 utterances. In addition to collecting dialogue data, the
researchers administered surveys of team level social outcomes. Team social
outcome scores include task conflict, relation conflict, and process conflict
scores. All these scores are highly correlated, and we are using process
conflict z-scores to represent team performance. Jehn et al. have identified
that low process conflict scores indicate good team performance and vice versa
[28]. To study the problem of early prediction of team conflict, we divide
each dialogue into three equal sections that correspond to the knowledge-
building, problem solving, and culmination teamwork phases. Our final
classification dataset consists of 12 patterns per dialogue, which are
generated from applying the three methods (semantic, sentiment, syntactic) to
the whole time period, as well as the initial, middle and final segments.
Teams were divided into high performing and low performing teams based on
their process conflict z-scores, and classification accuracy was measured.
Doc2vec was used to generate the vector representation of all the patterns.
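The dataset construction described above can be sketched as follows; the zero cut point on the process-conflict z-score is our assumption, since the paper does not state the exact threshold used to split high- and low-performing teams:

```python
# Sketch of the dataset construction: split each dialogue's utterances into
# three (near-)equal phases, and label teams high-/low-performing from their
# process-conflict z-scores. The zero threshold is an assumption; the paper
# does not state the exact cut point.
def phases(utterances):
    n = len(utterances)
    bounds = [round(i * n / 3) for i in range(4)]
    return [utterances[bounds[i]:bounds[i + 1]] for i in range(3)]

def label(z_score, threshold=0.0):
    # low process conflict indicates good team performance [28]
    return "high" if z_score < threshold else "low"

segs = phases(list(range(10)))
print([len(s) for s in segs])   # [3, 4, 3]
print(label(-0.8), label(1.2))  # high low
```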
Doc2vec comes in two different flavors: 1) Distributed Memory Model of
Paragraph Vectors (PV-DM) and 2) Distributed Bag of Words version of Paragraph
Vector (PV-DBOW). Through extensive experiments, we identified that PV-DM with
epoch size of 5, negative sampling 5, and window size 10 works best for our
setting. By default, we only report results for PV-DM. Table IV shows the
comparison of PV-DM & PV-DBOW when applied to the complete dialogue.
TABLE IV: Doc2Vec Comparison | PV-DBOW | PV-DM
---|---|---
Dialogue Act | 57.89 | 68.42
Sentiment | 55.26 | 78.94
Entrainment | 55.26 | 60.52
We evaluated the performance of both logistic regression and the support
vector machine (SVM) classifier on the full dialogue (shown in Table V); for
the other experiments, the better performer, SVM, was used.
TABLE V: Comparison of Supervised Classifiers | Logistic Regression | SVM
---|---|---
Dialogue Act | 63.15 | 68.42
Sentiment | 71.05 | 78.94
Entrainment | 63.15 | 60.52
## V Results
Table VI presents the classification accuracy of the three embeddings on the
whole dialogue. SVM exhibits the best classification accuracy of 78.94% on
sentiment based vectors, followed by dialogue act based vectors. Figure 2
visually illustrates the effects of different embeddings. By plotting the
vectors in 2D using t-Distributed Stochastic Neighbor Embedding (t-SNE), we can
observe the formation of two clusters, representing teams with high social
outcomes and low social outcomes in the dialogue act and sentiment vectors,
whereas the entrainment ones are intermixed.
TABLE VI: Accuracy by Team Phase Phase | DA | Sentiment | Entrainment
---|---|---|---
Whole | 68.42 | 78.94 | 60.52
Initial | 71.05 | 65.78 | 42.10
Middle | 73.68 | 65.78 | 47.36
End | 68.42 | 71.05 | 60.52
Figure 2: t-SNE representation of vectors in 2D, where ’S’ represents the
teams with low process conflict scores and ’U’ represents the teams with high
process conflict scores. Both sentiment (left) and dialogue act embedding
(right) show a better class separation than entrainment (center). Note that
the axes have no explicit meaning.
Table VI shows the accuracy of the conflict classifier across the duration of
the games. The sentiment classifier achieved the best accuracy when the whole
dialogue was used and exhibited consistent performance across all team phases.
The dialogue act embedding was the best at the initial phase, making it a good
choice for the “thin-slice” problem of rapidly diagnosing teamwork health from
a small sample of utterances. Syntactic entrainment lagged behind the
sentiment and semantic analysis, but performance improved during the final
phase.
For statistical testing, we generated 30 results for each phase using each
embedding. Since some of the result distributions (Figure 3) failed the
D’Agostino-Pearson normality test, the Kolmogorov-Smirnov test was used for
significance testing. The performance differences between each pair of
embeddings were statistically significant ($p<0.01$). However the differences
between the initial and end phase results for the sentiment and entrainment
embeddings were not significant (Table VII). Semantic and sentiment based
vectors outperformed the syntactic entrainment vectors at the classification
task across all phases.
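The two-sample Kolmogorov–Smirnov statistic used for these comparisons reduces to the maximum vertical gap between the two empirical CDFs. A dependency-free sketch (scipy's `ks_2samp` additionally returns the p-value used for the significance decision; the accuracy values below are illustrative, not the experiment's data):

```python
# Sketch: the two-sample Kolmogorov-Smirnov statistic is the maximum vertical
# distance between the two empirical CDFs. scipy.stats.ks_2samp computes the
# same statistic plus the p-value used for the significance decision.
def ks_statistic(a, b):
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

# illustrative accuracy samples, not the paper's results
d = ks_statistic([0.55, 0.60, 0.71], [0.60, 0.71, 0.79])
print(round(d, 4))  # 0.3333
```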
Figure 3: Distribution of embedding results for initial and final teamwork phases for dialogue acts (left), sentiment (middle) and entrainment (right) TABLE VII: Comparison of performance of all the three approaches at Knowledge Discovery & Culmination Phase | Knowledge Discovery | Culmination |
---|---|---|---
| min | max | min | max | p-value
Dialogue Act | 0.552632 | 0.710526 | 0.473684 | 0.684211 | 2.48e-05
Sentiment | 0.526316 | 0.657895 | 0.500000 | 0.710526 | 0.455695
Entrainment | 0.4210 | 0.4210 | 0.394737 | 0.605263 | 0.594071
## VI Conclusion
This study presents an evaluation of different embeddings for predicting team
conflict from multiparty dialogue. Embeddings were extracted from three types
of features: 1) dialogue acts 2) sentiment polarity 3) syntactic entrainment.
Results confirm the effectiveness of both sentiment (H2) and dialogue acts
(H1). However, experiments failed to confirm that classification based on
syntactic entrainment significantly improves over time (H3). Although there
are many other ways to measure linguistic synchronization, it seems less
promising for integration into an agent assistance system. The dialogue act
embedding is strong during the initial phase, making it a good candidate for
diagnosing the health of team formation activity. A continuous team monitoring
agent assistant system might do better with sentiment analysis.
In future work we plan to explore embeddings based on macrocognitive teamwork
states, such as those in the Macrocognition in Teams Model (MITM) [29].
Drawing from research on externalized cognition, team cognition, group
communication and problem solving, and collaborative learning and adaptation,
MITM provides a coherent theoretically based conceptualization for
understanding complex team processes and how these emerge and change over
time. It captures the parallel and iterative processes engaged by teams as
they synthesize these components in service of team cognitive processes such
as problem solving, decision making and planning.
## VII Acknowledgement
This material is based upon work supported by the Defense Advanced Research
Projects Agency (DARPA) under Contract No. W911NF-20-1-0008. Any opinions,
findings and conclusions or recommendations expressed in this material are
those of the authors and do not necessarily reflect the views of DARPA or the
University of Central Florida.
## References
* [1] M. F. Jung, “Coupling interactions and performance: Predicting team performance from thin slices of conflict,” _ACM Transactions on Computer-Human Interaction (TOCHI)_ , vol. 23, no. 3, pp. 1–32, 2016.
* [2] N. Ambady and R. Rosenthal, “Thin slices of expressive behavior as predictors of interpersonal consequences: a meta-analysis,” _Psychological Bulletin_ , vol. 111, no. 2, pp. 256–274, 1992.
* [3] ——, “Half a minute: predicting teacher evaluations from thin slices of non-verbal behavior and physical attractiveness,” _J. Pers. Soc. Psychol._ , vol. 64, no. 3, pp. 431–441, 1993.
* [4] C.-W. Goo and Y.-N. Chen, “Abstractive dialogue summarization with sentence-gated modeling optimized by dialogue acts,” in _IEEE Spoken Language Technology Workshop (SLT)_ , 2018, pp. 735–742.
* [5] D. Reitter and J. D. Moore, “Predicting success in dialogue,” _Proceedings of the ACL_ , 2007.
* [6] C. Danescu-Niculescu-Mizil, M. Gamon, and S. Dumais, “Mark my words!: linguistic style accommodation in social media,” in _Proceedings of the International Conference on World Wide Web_ , 2011, pp. 745–754.
* [7] J. S. Pardo, “On phonetic convergence during conversational interaction,” _The Journal of the Acoustical Society of America_ , vol. 119, no. 4, pp. 2382–2393, 2006.
* [8] M. Mizukami, K. Yoshino, G. Neubig, D. Traum, and S. Nakamura, “Analyzing the effect of entrainment on dialogue acts,” in _Proceedings of the Annual Meeting of the Special Interest Group on Discourse and Dialogue_. Los Angeles: Association for Computational Linguistics, Sep. 2016, pp. 310–318.
* [9] Y. Yang, G. N. Kuria, and D.-X. Gu, “Mediating role of trust between leader communication style and subordinate’s work outcomes in project teams,” _Engineering Management Journal_ , vol. 32, no. 3, pp. 152–165, 2020.
* [10] D. Litman, S. Paletz, Z. Rahimi, S. Allegretti, and C. Rice, “The Teams corpus and entrainment in multi-party spoken dialogues,” in _Proceedings of the Conference on Empirical Methods in Natural Language Processing_ , 2016, pp. 1421–1431.
* [11] S. Bird, B. Boguraev, M. Kay, D. McDonald, D. Hindle, and Y. Wilks, _Survey of the state of the art in human language technology_. Cambridge University Press, 1997, vol. 12.
* [12] S. Parsons, S. Poltrock, H. Bowyer, and Y. Tang, “Analysis of a recorded team coordination dialogue,” in _Proceedings of the Second Annual Conference of the ITA_ , 2008.
* [13] D. N. Walton and E. C. W. Krabbe, _Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning_. State University of New York Press, 1995.
* [14] J. E. McGrath, “Time, interaction, and performance,” _Small Group Research_ , 1991.
* [15] G. Sukthankar, K. Sycara, J. A. Giampapa, C. Burnett, and A. Preece, “An analysis of salient communications for agent support of human teams,” in _Multi-agent Systems: Semantics and Dynamics of Organizational Models_ , V. Dignum, Ed. IGI Global, 2009, pp. 284–312.
# Recovery and Analysis of Architecture Descriptions using Centrality Measures
Sanjay Thakare1, Arvind W Kiwelekar2
Babasaheb Ambedkar Technological University,
Maharashtra, India.
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
The necessity of an explicit architecture description has been continuously
emphasized, both to communicate system functionality and to support system
maintenance activities. This paper presents an approach to extracting
architecture descriptions using centrality measures from the theory of Social
Network Analysis. The architecture recovery approach presented in this paper
works in two phases. The first phase calculates centrality measures for each
program element in the system. The second phase assumes that the system has
been designed around the layered architecture style and assigns a layer to
each program element. Two techniques for layer assignment are presented. The
first technique uses a set of pre-defined rules, while the second learns the
assignment rules from a pre-labelled data set. The paper presents an
evaluation of both techniques.
Keywords: Architecture Recovery, Centrality Measures, Module Dependency View,
Layered Architecture Style, Supervised Classification, Architecture
Descriptions.
## 1 Introduction
The value of explicit software architecture has been increasingly recognized
for software maintenance and evolution activities [15]. In particular,
architecture descriptions relating coarse-grained programming elements have
been found to be a useful tool for effectively communicating system
functionality and architectural decisions [22, 29]. These descriptions also
support dependency analysis, which drives the task of software modernization
[6]. Despite these benefits, legacy and open-source software systems often
lack such architecture descriptions. Moreover, even when such descriptions
are available, they are frequently not aligned with the latest version of the
system implementation [26].
In such situations, a light-weight architecture recovery approach that
approximately represents the true architecture of a system may be preferable
to sophisticated architecture recovery techniques. Such a light-weight
approach should quickly extract the information necessary to build
architecture descriptions, providing much-needed assistance to software
architects dealing with the re-engineering and modernization of existing
systems and thus increasing their productivity.
Intending to design a light-weight approach, this paper presents an
architecture recovery approach based on centrality measures from the theory of
Social Network Analysis [23, 1]. Three observations drove the rationale behind
using centrality measures for architecture extraction: (i) Most of these
measures provide a highly intuitive and computationally simple way to analyze
interactions when a graph represents the structure of a system. (ii) These
measures quantify the structure of a system at multiple levels, i.e., at a
particular node level, in relation to other nodes in the graph, and at a group
of nodes or communities. (iii) These measures support the development of data-
driven approaches to architecture recovery.
The centrality measures-based approach presented in this paper recovers
architecture descriptions in two phases. In the first phase, a centrality
score is assigned to each program element. We assume that the system
functionality is decomposed among multiple layers, and so in the second phase,
a layer is assigned to each program element.
The paper primarily contributes to the existing knowledge base of the
architecture recovery domain in the following ways. (1) The paper
demonstrates the use of centrality measures in recovering high-level
architecture descriptions. (2) The paper describes a data-driven approach to
architecture recovery using supervised classification algorithms. (3) The
paper presents an evaluation of supervised classification algorithms in
extracting architecture descriptions.
The rest of the paper is organized as follows. The centrality measures used
in the paper are defined in Section 2. Section 3 describes the central
elements of the approach. The algorithmic and data-driven approaches to the
problem of layer assignment are explained in Section 4. The results and
evaluation of the approach are presented in Section 5. Section 6 puts our
approach in the context of existing approaches by discussing its features in
relation to them. Finally, the paper concludes in Section 7.
Figure 1: An example of class dependencies and their centrality scores.
---
PID | In-deg | Out-deg | Degree | Betweenness | Closeness | Eigenvector
---|---|---|---|---|---|---
A | 0 | 3 | 3 | 0 | 0.71 | 0
B | 0 | 1 | 1 | 0 | 0.5 | 0
C | 1 | 1 | 2 | 2 | 0.6 | 0.055
D | 2 | 2 | 4 | 2.5 | 1 | 0.27
E | 2 | 3 | 5 | 5.5 | 0.8 | 0.0055
F | 3 | 0 | 3 | 0 | 0 | 1
G | 2 | 0 | 2 | 0 | 0 | 0.99
L2 | 1 | 5 | 6 | 0 | 1 | 0.5
L1 | 4 | 5 | 9 | 1 | 1 | 0.5
L0 | 5 | 0 | 5 | 0 | 0 | 1
## 2 Social Network Analysis Measures
The theory of Social Network Analysis (SNA) provides a generic framework to
analyze the structure of complex systems. This framework includes a rich set
of measures, models, and methods to extract the patterns of interactions among
systems’ elements. A complex system is expressed as a network of nodes and
edges to support the analysis of systems from diverse application domains.
Examples of complex systems that have been analyzed with the help of SNA
include communities on social media platforms [23] and neural systems [1].
Techniques from SNA have also been applied to control the spread of disease
[30], to understand biological systems [27], to investigate protein
interactions [2], and to examine animal behavior [31].
These diverse applications of SNA show that complex systems exhibit certain
common graph-theoretic properties such as centrality, scale-freeness, the
small-world property, community structure, and power-law degree distribution
[19, 21, 20, 1, 4, 7]. The SNA measures most relevant to our study are
described below.
SNA provides a range of measures at varying levels: some are applied at the
node level, while others are applied at the network level. The node-level
measures are fine-grained measures calculated from the nodes directly
connected to a given node. Centrality measures [28] are examples of
node-level measures that quantify the importance of an individual node in the
network. A central node is an influential node with significant potential to
communicate and access information. Different centrality measures exist; they
are derived from the connections to a node, the position of a node in the
network, the distance of a node from others, and the relative importance of
nodes.
### 2.1 Degree centrality
This measure determines the central node based on the connections to the
individual node. A node with a higher degree is considered the most
influential one. In a directed graph, two different measures exist, in-degree
and out-degree, based on the number of incoming and outgoing edges
respectively. The degree centrality of a node $v$ is the number of its
connections, normalized by the maximum possible degree of a node.
$C_{D}(v)=deg(v)$ (1) $NC_{D}=\frac{C_{D}(v)}{n-1}=\frac{deg(v)}{n-1}$ (2)
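As an illustration, the in-degree and out-degree columns of Figure 1 can be reproduced in a few lines of Python. The edge list below is one directed graph consistent with the degree columns of the class rows A–G in Figure 1; since the actual edge set is not printed, this reconstruction is an assumption.

```python
# Hypothetical edge list: one directed graph whose in/out-degrees match the
# class rows (A-G) of Figure 1; the true edge set is an assumption.
edges = [("A", "D"), ("A", "E"), ("A", "F"),
         ("B", "C"), ("C", "E"),
         ("D", "F"), ("D", "G"),
         ("E", "D"), ("E", "F"), ("E", "G")]
nodes = sorted({n for e in edges for n in e})

in_deg = {v: 0 for v in nodes}
out_deg = {v: 0 for v in nodes}
for src, dst in edges:
    out_deg[src] += 1   # outgoing edge of src
    in_deg[dst] += 1    # incoming edge of dst

n = len(nodes)
# Equation (2): total degree normalized by the maximum possible degree n - 1.
norm_deg = {v: (in_deg[v] + out_deg[v]) / (n - 1) for v in nodes}
```

For instance, F ends up with in-degree 3 and out-degree 0, matching its row in Figure 1.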
### 2.2 Closeness centrality
This measure aims to identify influential nodes in terms of faster and wider
spread of information in the network. Influential nodes are characterized by
smaller inter-node distances, which signify faster transfer of information.
Closeness centrality is derived from the distances from a node to all
reachable nodes; since the distance between disconnected components of the
network is infinite, disconnected nodes are excluded. For a central node the
average distance is small, so closeness is calculated as the inverse of the
sum of the distances to all other nodes. The normalized closeness ($NC_{C}$)
ranges from 0 to 1, where 0 represents an isolated node and 1 indicates a
strongly connected node.
$C_{C}(v)=\frac{1}{\sum_{w}d_{vw}}$ (3) $NC_{C}(v)=\frac{n-1}{\sum_{w}d_{vw}}$ (4)
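For unweighted graphs, the distances $d_{vw}$ can be obtained with a breadth-first search. The sketch below computes normalized closeness on a made-up three-node path (not one of the paper's test systems), excluding unreachable nodes as described above.

```python
from collections import deque

def closeness(adj, v):
    """Normalized closeness of v: (reachable nodes) / (sum of BFS distances).
    Unreachable nodes are excluded, as the text prescribes."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj.get(u, []):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    total = sum(dist.values())      # distance from v to itself is 0
    reachable = len(dist) - 1
    return reachable / total if total else 0.0

# Toy undirected path a - b - c: the middle node is closest to everyone.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
scores = {v: closeness(adj, v) for v in adj}
```

The middle node b scores 1.0 (distance 1 to each neighbour), while the end nodes score 2/3.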
### 2.3 Betweenness centrality
This measure aims to identify those central nodes which are responsible for
connecting two or more components of the network. Removal of such a central
node would mean a disconnection of the complete network. Hence, these nodes
act as bridges for passing information [4, 32]. Betweenness centrality sums,
over all pairs of nodes, the fraction of shortest paths that pass through a
node.
$C_{B}(v)=\sum_{s\neq v\neq t}\frac{\sigma_{st}(v)}{\sigma_{st}}$ (5)
where $\sigma_{st}$ is the total number of shortest paths from node $s$ to
node $t$ and $\sigma_{st}(v)$ is the number of those paths that pass through
$v$. The relative betweenness centrality of a node, with respect to the
maximum possible centrality, is calculated from $C_{B}(v)$:
$C_{B}^{{}^{\prime}}(v)=\frac{2C_{B}(v)}{n^{2}-3n+2}$ (6)
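Equation (5) can be evaluated efficiently with Brandes' algorithm, a standard method that the paper does not discuss; the sketch below handles unweighted directed graphs.

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm: for every source s, accumulate each node's share
    of the shortest paths from s (Equation 5), on an unweighted digraph."""
    cb = {v: 0.0 for v in adj}
    for s in adj:
        stack = []
        preds = {v: [] for v in adj}            # shortest-path predecessors
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:                            # BFS from s
            u = queue.popleft(); stack.append(u)
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    queue.append(w)
                if dist[w] == dist[u] + 1:      # w reached via a shortest path
                    sigma[w] += sigma[u]
                    preds[w].append(u)
        delta = {v: 0.0 for v in adj}
        while stack:                            # back-propagate dependencies
            w = stack.pop()
            for u in preds[w]:
                delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                cb[w] += delta[w]
    return cb

# Toy directed path a -> b -> c: only b lies between the other two.
cb = betweenness({"a": ["b"], "b": ["c"], "c": []})
```

Here the single pair (a, c) has one shortest path, and it passes through b, so b scores 1.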
### 2.4 Eigenvector centrality
Eigenvector centrality is a relative centrality measure, unlike the previous
three measures, which are absolute ones. Its calculation depends on the
largest real eigenvalue of the adjacency matrix. The centrality of a node is
proportional to the sum of the centralities of the nodes connected to it [3,
4].
$\lambda v_{i}=\sum_{j=1}^{n}{a_{ij}v_{j}}$ (7)
In general, it requires the solution of the eigenvalue equation $Av=\lambda
v$, where $A$ is the adjacency matrix.
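In practice, $Av=\lambda v$ is solved iteratively. The sketch below uses power iteration with an identity shift ($A+I$), a common trick that keeps the dominant eigenvector unchanged while avoiding oscillation on bipartite graphs; the star graph used here is an illustrative example, not one of the paper's systems.

```python
def eigenvector_centrality(adj, iters=200):
    """Power iteration on (A + I) for an undirected adjacency list; the
    identity shift leaves the dominant eigenvector unchanged but guarantees
    convergence even on bipartite graphs. Scores scaled so the maximum is 1."""
    v = {u: 1.0 for u in adj}
    for _ in range(iters):
        nxt = {u: v[u] + sum(v[w] for w in adj[u]) for u in adj}
        norm = max(nxt.values())
        v = {u: x / norm for u, x in nxt.items()}
    return v

# Star graph: the hub c should receive the highest centrality.
adj = {"c": ["a", "b", "d"], "a": ["c"], "b": ["c"], "d": ["c"]}
ev = eigenvector_centrality(adj)
```

For the star, the leaves converge to $1/\sqrt{3}\approx 0.577$ relative to the hub, matching the analytic eigenvector for $\lambda=\sqrt{3}$.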
Figure 1 shows the centrality scores of various programming elements,
calculated from the dependencies shown in the figure. Note that centrality
scores can be calculated at different granularity levels, i.e., at the
object, method, class, package, or logical-layer level. In the figure,
centrality scores are calculated at the class and layer levels. Here, we
consider a layer to be a logical encapsulation unit loosely holding multiple
classes.
## 3 Approach
The broad objective of the approach is to extract high-level architecture
descriptions from implementation artefacts so that analyses specific to an
architecture style can be performed. In this paper, we demonstrate the
approach with the implementation artefacts available in a Java-based system,
namely Java source and JAR files; however, the method can be extended to
other language-specific artefacts. Similarly, the approach assumes that the
system under study is implemented around the Layered architecture style and
demonstrates analyses specific to that style.
Figure 2: Block diagram of a tool implemented in Java to discover layered
architecture using centrality measures.
As shown in Figure 2, the approach consists of following two phases.
1. 1.
Dependency Network Builder and Analysis [Phase 1]: The purpose of this phase
is to retrieve the dependencies present in the implementation artefacts. For
a Java-based system, this phase takes Java or JAR files as input and
generates a dependency network. Programming elements such as classes,
interfaces, and packages are the nodes of the network, and Java relationships
such as $extends$, $implements$, and $imports$ are the edges. The output of
this stage is represented as a graph in Graph Modeling Language (GML)
notation. In the second stage, a centrality score is assigned to each node.
The centrality scores include the different measures described in Section 2
and are calculated at the class and interface levels. The output of this
stage is a data file in CSV format describing the centrality scores assigned
to each program element.
2. 2.
Architecture Style Recovery and Analysis [Phase 2]: The purpose of this phase
is to perform architecture style-specific activities. In this paper, the
activities of this phase are illustrated by assuming Layered architecture
style. For the Layered architecture style, we define a sub-activity called
layer assignment. The layer assignment activity aims to assign the most
appropriate layer to a program element.
Additional style-specific analyses such as analysis of layer violations, and
performance modeling can be supported once the programming elements are
assigned to appropriate layers.
The Phase 1 activities, which include building a dependency network and
calculating centrality scores, are straightforward to realize when tools such
as $JDependency$ are available. The Phase 2 activities, which perform
style-specific analyses, can be realized in multiple ways. Two such
techniques are described in the following section.
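As a rough illustration of Phase 1, import-level dependencies can be harvested from Java sources with a regular expression. The class names, project prefix, and import-only heuristic below are illustrative assumptions; a real extractor, such as the tool of Figure 2, would also resolve $extends$ and $implements$ relations.

```python
import re

# Matches lines such as "import com.example.Foo;".
IMPORT_RE = re.compile(r"^\s*import\s+([\w.]+)\s*;", re.MULTILINE)

def import_edges(class_name, java_source, project_prefix="com.example"):
    """Return (class, imported class) edges, keeping only project-internal
    imports so external libraries do not pollute the dependency network."""
    return [(class_name, target)
            for target in IMPORT_RE.findall(java_source)
            if target.startswith(project_prefix)]

# Hypothetical source: one internal import (kept) and one JDK import (dropped).
src = """package com.example.ui;
import com.example.data.Repository;
import java.util.List;
public class MainView {}
"""
edges = import_edges("com.example.ui.MainView", src)
```

The resulting edge list can then be serialized to GML and fed to the centrality stage.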
Centrality | Upper | Middle | Lower
---|---|---|---
In-degree | low | - | high
Out-degree | high | - | low
Betweenness | low | high | low
Closeness | high | high | low
Eigenvector | low | low | high
Table 1: Relative significance of centrality measures with respect to layers
$\delta_{il}$ and $\delta_{iu}$ | Lower and upper bounds for in-degree centrality values.
---|---
$\delta_{ol}$ and $\delta_{ou}$ | Lower and upper bound for out-degree centrality values.
$\delta_{b}$ | Critical value for betweenness centrality
$\delta_{c}$ | Critical value for closeness centrality
$\delta_{e}$ | Critical value for eigenvector centrality
Table 2: Configuration Parameters
## 4 Layer Assignment
The objective of the layer assignment stage is to identify the most
appropriate layer for each program element based on its centrality measures.
We assume a three-layer decomposition. Here, we use the term layer in the
loose sense of a logical coarse-grained unit encapsulating program elements,
not in the strict sense used for the Layered architecture style [5]. The
decision to decompose all system responsibilities into three layers is based
on the observation that the functionality of most applications can be cleanly
decomposed into three coarse-grained layers. For example, many applications
use architectural styles such as Model-View-Controller (MVC),
Presentation-Abstraction-Control (PAC) [5], or a 3-tier style, i.e.,
Presentation, Business Logic, and Data Storage.
Algorithm 1 : primaryLabel(inDegree, outDegree, n)
Input: inDegree[1:n],outDegree[1:n]: Vector, n:Integer
Output: inPartition[1:n], outPartition[1:n] Vector
1:Initialize $\delta_{iu}$, $\delta_{il}$, $\delta_{ou}$ and $\delta_{ol}$
2:for node in 1 to n do
3: if $in(node)=0$ and $out(node)=0$ then
4: $inPartition[node]\leftarrow lower$
5: $outPartition[node]\leftarrow lower$
6: else
7: if $in(node)>\delta_{il}$ then
8: $inPartition[node]\leftarrow lower$
9: else
10: if $in(node)<\delta_{iu}$ then
11: $inPartition[node]\leftarrow upper$
12: else
13: $inPartition[node]\leftarrow middle$
14: end if
15: end if
16: if $out(node)>\delta_{ou}$ then
17: $outPartition[node]\leftarrow upper$
18: else
19: if $out(node)<\delta_{ol}$ then
20: $outPartition[node]\leftarrow lower$
21: else
22: $outPartition[node]\leftarrow middle$
23: end if
24: end if
25: end if
26:end for
Two different techniques are developed to assign layers to program elements
based on centrality measures. The first technique uses a set of pre-defined
rules. The second automatically learns the assignment rules from pre-labelled
layer assignments using a supervised classification algorithm.
### 4.1 Rule-Driven Layer Assignment
Dependencies among the program elements are used to identify logical units
of decomposition. These dependencies are quantified in terms of the
centrality measures described in Section 2. The degree centrality measure of
Section 2.1 is split into in-degree ($inDeg$) and out-degree ($outDeg$)
measures, which count the number of incoming and outgoing edges of a node. In
total, five centrality measures are used. A set of configuration parameters,
as shown in Table 2, is defined; these parameters provide flexibility when
mapping program elements to a specific layer.
Five accessor functions, namely $in$, $out$, $between$, $closeness$ and
$eigen$, are defined to get the in-degree, out-degree, betweenness,
closeness, and eigenvector centrality values associated with a specific node.
These functions are used to assign a program element to one of three layers,
i.e., upper, middle, and lower. Table 1 describes the relative significance
of the various centrality measures with respect to the upper, middle, and
lower layers.
Algorithm 1 operates on a dependency network in which nodes represent
program elements and edges represent dependencies. The objective of this
algorithm is to partition the node space into three segments corresponding to
the lower, middle, and upper layers. The algorithm calculates two different
partitions: $inPartition$ is calculated using the in-degree centrality
measure, while $outPartition$ is calculated using the out-degree centrality
measure.
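A direct Python transcription of Algorithm 1 may look as follows; the default thresholds are illustrative only (compare the per-system values in Table 6).

```python
def primary_label(in_deg, out_deg, d_il=4, d_iu=1, d_ol=4, d_ou=1):
    """Transcription of Algorithm 1: label each node twice, once from its
    in-degree and once from its out-degree. Threshold defaults are
    illustrative; see Table 2 and Table 6 for the real parameters."""
    in_part, out_part = {}, {}
    for node in in_deg:
        if in_deg[node] == 0 and out_deg[node] == 0:
            in_part[node] = out_part[node] = "lower"   # isolated class
            continue
        if in_deg[node] > d_il:        # heavily used -> service provider
            in_part[node] = "lower"
        elif in_deg[node] < d_iu:      # rarely used -> consumer
            in_part[node] = "upper"
        else:
            in_part[node] = "middle"
        if out_deg[node] > d_ou:       # heavy user of other classes
            out_part[node] = "upper"
        elif out_deg[node] < d_ol:
            out_part[node] = "lower"
        else:
            out_part[node] = "middle"
    return in_part, out_part

# Hypothetical degrees for two classes, in the spirit of Figure 1.
in_part, out_part = primary_label({"A": 0, "F": 3}, {"A": 3, "F": 0})
```

With the defaults, A (no incoming, three outgoing edges) is labelled (upper, upper), while F receives the conflicting pair (middle, lower) that Algorithm 2 must resolve.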
Algorithm 2 refineLabel(inPartition, outPartition, n)
Input: inPartition[1:n], outPartition[1:n]: Vector
n: Integer
Output: nodeLabels[1:n]: Vector
1:Initialize $\delta_{b}$, $\delta_{c}$ and $\delta_{e}$
2:for node in 1 to n do
3: if $inPartition[node]=outPartition[node]$ then
4: $nodeLabels[node]\leftarrow outPartition[node]$
5: else
6: $nodeLabels[node]\leftarrow upDown(inPartition[node],outPartition[node])$
7: end if
8:end for
After the execution of Algorithm 1, each node carries two labels
corresponding to layers. The possible combinations of labels include (lower,
lower), (middle, middle), (upper, upper), (middle, upper), and (middle,
lower). Of these, the labels (middle, upper) and (middle, lower) are
conflicting because two different labels are assigned to a node. This
conflict needs to be resolved.
Algorithm 2 resolves the conflicting labels and assigns a unique label to
each node. The conflicting labels are resolved using the rules described in
decision Table 3; the function $upDown$ called in Algorithm 2 applies these
rules. The rules in Table 3 resolve conflicting assignments using the
closeness, betweenness, and eigenvector centrality measures, while the
primary layer assignment is done with the in-degree and out-degree centrality
measures. When Algorithm 2 is executed, some nodes from the middle layer
bubble up to the upper layer, some fall to the lower layer, and some remain
at the middle layer. The vector $nodeLabels$ holds the unique label of each
node in the dependency network after all conflicts are resolved.
Table 3: Decision table used to refine layering
Layer | Measure | Significance | Rationale
---|---|---|---
upper | in | 0 | Classes with in-degree value equal to 0 are placed in the upper layer.
out | high | Classes with high out-degree are placed in the upper layer because they use services from layers beneath them.
closeness | high | Classes with high closeness value are placed in the upper layer because of large average distance from top layer to bottom layer.
middle | between | high | Classes with high betweenness value are placed in the middle layer as they fall on the path from top layer to bottom layer.
lower | in | high | Classes with high in-degree value are placed in the bottom layer because they are highly used.
out | 0 | Classes with out-degree value equal to zero are placed to bottom layer because they only provide services.
eigen | 1 | Classes with eigen value equal to 1 are placed to bottom layer because they are highly reused.
in | - | Classes with both in-degree and out-degree equal to 0 are placed in the bottom layer because they are isolated classes.
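Algorithm 2's conflict resolution can be sketched as below. The ordering of the $upDown$ checks is our reading of Table 3 and is an assumption, as are the threshold defaults; this is not the paper's exact $upDown$ function.

```python
def refine_label(in_part, out_part, between, close, eigen,
                 d_b=6.0, d_c=0.8, d_e=0.6):
    """Sketch of Algorithm 2: if both primary labels agree, keep them;
    otherwise resolve the conflict via our reading of Table 3. The rule
    ordering and threshold defaults are assumptions."""
    labels = {}
    for node in in_part:
        if in_part[node] == out_part[node]:
            labels[node] = in_part[node]
        elif between[node] > d_b:      # bridge nodes stay in the middle
            labels[node] = "middle"
        elif close[node] > d_c:        # well connected -> bubbles up
            labels[node] = "upper"
        elif eigen[node] > d_e:        # highly reused -> falls down
            labels[node] = "lower"
        else:                          # fall back to the out-degree label
            labels[node] = out_part[node]
    return labels
```

For example, a node labelled (middle, upper) with high closeness would be promoted to the upper layer, mirroring the "bubble up" behaviour described above.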
### 4.2 Supervised Classification based Layered Assignment
The configuration parameters need to be suitably initialized for the correct
functioning of the algorithm-centric approach discussed in the previous
section. The system architect responsible for architecture recovery needs to
fine-tune the parameters to obtain layering at the desired level of
abstraction. To overcome this drawback, a data-driven approach is developed
to assign labels to the programming elements.
Table 4: Sample observations from the datasets used for supervised learning
Id | Label | In-Degree | Out-Degree | Closeness | Betweenness | Eigenvector | Layer
---|---|---|---|---|---|---|---
HealthWatcher
1 | ComplaintRecord | 1 | 10 | 1.714 | 19 | 0.0056 | 2
2 | ObjectAlreadyInserted Exception | 37 | 0 | 0 | 0 | 0.347 | 1
3 | ObjectNotFound Exception | 53 | 0 | 0 | 0 | 0.943 | 1
4 | ObjectNotValidException | 41 | 0 | 0 | 0 | 0.883 | 1
5 | RepositoryException | 60 | 0 | 0 | 0 | 1 | 1
ConStore
1 | Cache | 2 | 1 | 1 | 0 | 0.0162 | 2
2 | CacheObject | 4 | 0 | 0 | 0 | 0.053 | 2
3 | LRUCache | 0 | 2 | 1 | 0 | 0 | 2
4 | MRUCache | 1 | 2 | 1 | 7 | 0.0246 | 2
5 | ItemQuery | 1 | 20 | 0.412 | 47.166 | 0.0388 | 2
In the data-driven approach, the problem of layer assignment is modeled as a
multi-class classification problem with three labels, lower, middle, and
upper, numerically encoded as 1, 2, and 3 respectively. The classification
model is trained on a labeled data set. The data set, as shown in Table 4,
includes program element identifiers, the values of all the centrality
measures, and the layering labels specified by the system architect
responsible for architecture recovery. The layering labels can be taken from
a previous version of the system under study, or guessed by the system
architect to explore different alternatives for system decomposition.
We implement three supervised classification algorithms, namely K-Nearest
Neighbour, Support Vector Machine, and Decision Tree. These machine learning
algorithms are particularly suited to multi-class classification problems; a
detailed comparison of them can be found in [12]. Python's Scikit-Learn [11]
library is used to develop classification models based on these algorithms.
Table 4 shows the format of the sample dataset used to train the models. The
developed models are evaluated against classification metrics such as
accuracy, precision, recall, and F1-score.
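The paper trains its models with Scikit-Learn; purely to illustrate the idea, here is a standard-library K-Nearest-Neighbour sketch over centrality feature vectors. The training rows are loosely modelled on Table 4 and are illustrative, not actual data from the test systems.

```python
import math

def knn_predict(train, query, k=3):
    """train: list of (features, layer_label) pairs; predict the query's
    layer by majority vote among its k nearest Euclidean neighbours."""
    nearest = sorted(train, key=lambda row: math.dist(row[0], query))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Features: (in-degree, out-degree, closeness, betweenness, eigenvector);
# labels encoded as 1 = lower, 2 = middle, 3 = upper, as in the paper.
# Rows loosely modelled on the HealthWatcher entries of Table 4.
train = [((60, 0, 0.0, 0.0, 1.00), 1),
         ((53, 0, 0.0, 0.0, 0.94), 1),
         ((1, 10, 1.7, 19.0, 0.01), 2),
         ((1, 20, 0.4, 47.2, 0.04), 2),
         ((0, 5, 1.0, 0.0, 0.00), 3)]
```

A query resembling a heavily used exception class (high in-degree, zero out-degree) is then voted into the lower layer.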
Figure 3: An architecture of a system designed to test the approach.
Table 5: Accuracy and confusion matrices for the data-driven and algorithmic approaches
Constore (Size: 66 classes or interfaces) Confusion Matrix
---
SVM | Decision Tree | KNN classifier | Rule based
Layer | lower | middle | upper | lower | middle | upper | lower | middle | upper | lower | middle | upper
lower | 43 | 0 | 0 | 43 | 0 | 0 | 40 | 3 | 0 | 27 | 16 | 0
middle | 14 | 0 | 1 | 13 | 2 | 0 | 12 | 3 | 0 | 9 | 5 | 1
upper | 6 | 0 | 2 | 6 | 0 | 2 | 7 | 1 | 0 | 4 | 2 | 2
Accuracy = 0.68 | Accuracy = 0.71 | Accuracy = 0.65 | Accuracy =0.52
Recall (R), Precision (P), F1-Score (F1) Evaluation
| R | P | F-1 | R | P | F-1 | R | P | F-1 | R | P | F-1
lower | 0.68 | 1.00 | 0.81 | 0.69 | 1.00 | 0.82 | 0.68 | 0.93 | 0.78 | 0.68 | 0.63 | 0.65
middle | 0.00 | 0.00 | 0.00 | 1.00 | 0.13 | 0.24 | 0.43 | 0.20 | 0.27 | 0.22 | 0.33 | 0.26
upper | 0.67 | 0.25 | 0.36 | 1.00 | 0.25 | 0.40 | 0.00 | 0.00 | 0.00 | 0.67 | 0.25 | 0.36
HealthWatcher(Size: 135 classes or interfaces) Confusion Matrix
lower | 47 | 1 | 9 | 49 | 4 | 4 | 41 | 8 | 8 | 28 | 16 | 13
middle | 20 | 5 | 12 | 15 | 20 | 2 | 7 | 28 | 2 | 6 | 30 | 1
upper | 5 | 1 | 35 | 7 | 0 | 34 | 6 | 6 | 29 | 3 | 9 | 29
Accuracy = 0.64 | Accuracy = 0.76 | Accuracy = 0.72 | Accuracy = 0.63
Recall (R), Precision (P), F1-Score (F1) Evaluation
| R | P | F-1 | R | P | F-1 | R | P | F-1 | R | P | F-1
lower | 0.65 | 0.82 | 0.73 | 0.69 | 0.86 | 0.77 | 0.76 | 0.72 | 0.74 | 0.76 | 0.49 | 0.60
middle | 0.71 | 0.14 | 0.23 | 0.83 | 0.54 | 0.66 | 0.67 | 0.76 | 0.71 | 0.55 | 0.81 | 0.65
upper | 0.62 | 0.85 | 0.72 | 0.85 | 0.83 | 0.84 | 0.74 | 0.71 | 0.72 | 0.66 | 0.66 | 0.66
Test Architecture System (Size = 16 Classes) Confusion Matrix
lower | 5 | 2 | 0 | 4 | 3 | 0 | 5 | 2 | 0 | 5 | 2 | 0
middle | 1 | 4 | 0 | 0 | 5 | 0 | 1 | 4 | 0 | 1 | 4 | 0
upper | 1 | 0 | 3 | 0 | 1 | 3 | 4 | 0 | 0 | 1 | 0 | 3
Accuracy = 0.75 | Accuracy = 0.75 | Accuracy = 0.56 | Accuracy =0.75
Recall (R), Precision (P), F1-Score (F1) Evaluation
| R | P | F-1 | R | P | F-1 | R | P | F-1 | R | P | F-1
lower | 0.71 | 0.71 | 0.71 | 1.00 | 0.57 | 0.73 | 0.50 | 0.71 | 0.59 | 0.71 | 0.71 | 0.71
middle | 0.67 | 0.80 | 0.73 | 0.56 | 1.00 | 0.71 | 0.67 | 0.80 | 0.73 | 0.67 | 0.80 | 0.73
upper | 1.00 | 0.75 | 0.86 | 1.00 | 0.75 | 0.86 | 0.00 | 0.00 | 0.00 | 1.00 | 0.75 | 0.86
## 5 Evaluation
### 5.1 Test cases
The following software systems are used to evaluate the performance of the
architecture recovery approach developed in this paper. It includes:
1. 1.
Test Architecture system: A small-scale test architecture system, as shown
in Figure 3, has been specially designed to test the approach. It is a
simulated test case consisting of 16 classes with no implemented
functionality, only dependencies among classes, as shown in Figure 3. The
classes named SEC, TX, and SERI in the figure represent crosscutting
concerns.
2. 2.
ConStore: ConStore is a small-scale Java-based library designed to manage
concept networks. A concept network is a mechanism used to represent the
meta-model of an application domain, consisting of concepts and connections
between them. ConStore is a framework for detailing the concepts and creating
a domain model for a given application. It provides services to store,
navigate, and retrieve the concept network [13].
3. 3.
HealthWatcher: HealthWatcher is a web-based application providing
healthcare-related services [10]. The application provides services for users
to communicate health-related issues: users can register, update, and query
their health-related complaints and problems. The application follows a
client-server, layered architecture style.
All these applications are selected as test cases because the layering of the
program elements was known in advance.
### 5.2 Results and Evaluation
The performance of classification models is typically evaluated against
measures such as accuracy, precision, recall, and F1-score [9]. These metrics
are derived from a confusion matrix, which compares the actual class labels
of the observations in a given data set with the class labels predicted by a
classification model. Table 5 shows the performance analysis against these
metrics, comparing the algorithm-centric and data-driven approaches for all
the test cases.
#### 5.2.1 Accuracy Analysis
Accuracy is the rate of correct classification; the higher the accuracy, the
better the model. From the accuracy point of view, one can observe from Table
5 that the data-driven approach performs better than the algorithm-centric
approach. The decision tree-based classifier performs better on all the test
cases, with an average accuracy of 74%. This is because the performance of
the algorithmic approach depends on the proper tuning of the various
configuration parameters; the results shown in Table 5 are obtained with the
configuration parameter values shown in Table 6.
| ConStore | Healthwatcher | Test Arch.
---|---|---|---
$\delta_{il}$ | 4 | 10 | 2
$\delta_{iu}$ | 1 | 1 | 1
$\delta_{ol}$ | 4 | 2 | 2
$\delta_{ou}$ | 1 | 5 | 2
$\delta_{b}$ | 6 | 9 | 6
$\delta_{c}$ | 0.8 | 0.8 | 0.6
$\delta_{e}$ | 0.6 | 0.5 | 0.6
Table 6: Configuration Parameters used during layer recovery
The machine learning models automatically learn and adjust model parameters
for better accuracy. In the case of the algorithmic approach, configuration
parameter tuning is an iterative process that requires trying different
combinations.
#### 5.2.2 Recall, Precision, F1-Score Analysis
Recall is the proportion of actual positives that are correctly identified,
while precision is the proportion of positive identifications that are
correct. High values of both recall and precision are desired, but it is
difficult to achieve both simultaneously; hence, the F1-score combines recall
and precision into one metric. From the recall, precision, and F1-score point
of view, one can observe from Table 5 that the decision tree-based classifier
performs better, with the highest F1-score of 0.86 for the upper-layer
classification of the test architecture system. Recalling class labels with
high precision for the middle layer is a challenging task for all the models
described in this paper. This is because many functionalities are not cleanly
encapsulated in middle-layer modules, and crosscutting concerns are difficult
to map to one of the three layers.
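The per-layer numbers of Table 5 can be reproduced from paired label lists as follows; the two short label lists below are made-up illustrations, not data from the test systems.

```python
def per_class_metrics(y_true, y_pred, cls):
    """Precision, recall and F1 for one class, from true/predicted labels."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative labels only; in the paper these come from the test systems.
y_true = ["lower", "lower", "middle", "upper", "middle"]
y_pred = ["lower", "middle", "middle", "upper", "lower"]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

Here three of five predictions are correct, and the "lower" class ends up with precision, recall, and F1 all equal to 0.5.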
## 6 Earlier Approaches and Discussion
Recovering architecture descriptions from code has been one of the most
widely and continuously explored problems in software architecture research,
resulting in a large number of techniques [17], survey papers [8], and books
[14] devoted to the topic. In the context of these earlier approaches, this
section provides the rationale behind the implementation decisions taken
while developing our approach.
### 6.1 Include Dependencies vs Symbolic Dependencies
The recent study reported in [16] recognized that the quality of a recovered
architecture depends on the type of dependencies analyzed. The study compares
the impact of symbolic dependencies, i.e., dependencies at the program
identifier level, with include dependencies, i.e., dependencies at the level
of imported files or included packages. It emphasizes that symbolic
dependencies are a more accurate way to recover structural information; the
use of include dependencies is error-prone because a programmer may include a
package without using it.
We used include dependencies in our approach because extracting and managing
them is simpler than for symbolic dependencies. Further, we mitigated the
risk of unused packages by excluding these relationships from further
analysis; many programming environments facilitate the removal of unused
packages. One of our objectives was to develop a data-driven approach, and
cleaning data in this way is an established practice in the field of data
engineering.
### 6.2 Unsupervised Clustering vs Supervised classification
Techniques of unsupervised clustering have been widely adopted to extract
high-level architectures through the analysis of dependencies between
implementation artefacts [17]. These approaches use hierarchical and
search-based methods for clustering, and they often take substantial search
time to find architectures that are not very good [18]. One advantage of
clustering methods is that they are driven by unlabelled data sets; however,
the identified clusters of program elements still need to be given
appropriate labels.
Our choice of a supervised classification method is driven by the fact that
centrality measures quantify the structural properties of a node and its
relations to other nodes. Processing such quantified values efficiently is one
of the strengths of many supervised classification methods. Further, assigning
layer labels to program elements is not an issue if such information is
available from a previous version of the software, which is the case for many
re-engineering and modernization projects. In the absence of such a labelled
data set, the approach presented in this paper can still be adopted in two
stages: in the first stage, tentative layer labels are produced through the
algorithmic centric method, followed by labelling through the supervised
classification method.
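The classification idea can be sketched with a minimal nearest-centroid classifier over centrality-style features. The feature values, class names, and layer labels below are hypothetical; the paper's experiments use standard scikit-learn classifiers [11] rather than this toy implementation:

```python
import math

def centroid(rows):
    # Component-wise mean of a list of equal-length feature vectors.
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_centroid_fit(X, y):
    # One centroid per layer label, from the labelled previous version.
    labels = sorted(set(y))
    return {lab: centroid([x for x, l in zip(X, y) if l == lab]) for lab in labels}

def predict(model, x):
    # Assign a new program element to the layer with the closest centroid.
    return min(model, key=lambda lab: math.dist(x, model[lab]))

# Features: (in-degree centrality, out-degree centrality) of each class;
# labels come from an already-layered earlier version of the system.
X_train = [(0.9, 0.1), (0.8, 0.2),   # data-access classes: many incoming deps
           (0.4, 0.5), (0.5, 0.4),   # domain classes: mixed
           (0.1, 0.9), (0.2, 0.8)]   # presentation classes: many outgoing deps
y_train = ["data", "data", "domain", "domain", "presentation", "presentation"]

model = nearest_centroid_fit(X_train, y_train)
print(predict(model, (0.85, 0.15)))  # a class that is mostly depended upon
```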
The architecture descriptions extracted by the approach can be viewed as
multiple ways of decomposing a system rather than as a single ground-truth
architecture, which is often difficult to agree upon and laborious to discover
[8]. One of these extracted architectures can then be selected by assessing
them against properties such as minimal layering violations [25, 24],
satisfaction of a particular quality attribute [14], or any other
project-specific criterion.
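Selecting among candidate decompositions by minimal layering violations [25, 24] can be sketched as counting upward dependencies. The layer indices and the example assignment here are hypothetical (lower index means lower layer):

```python
# Hedged sketch: score a candidate layering by counting layer violations,
# i.e. dependencies that point upward ("back-calls") from a lower layer
# to a higher one.

def violations(deps, layer_of):
    # deps: iterable of (source, target) dependency pairs;
    # layer_of: maps a program element to its layer index.
    return sum(1 for s, t in deps if layer_of[s] < layer_of[t])

deps = [("Dao", "Service"), ("Service", "Dao"), ("Ui", "Service")]
layer_of = {"Ui": 2, "Service": 1, "Dao": 0}
print(violations(deps, layer_of))  # only Dao -> Service goes upward
```

Ranking each candidate architecture by this count and keeping the minimum is one way to realize the selection criterion above.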
### 6.3 Choice of Number of Layers
We described the working of the approach by assuming a three-layer
decomposition, but this is not a strict restriction. The algorithmic centric
method can be adapted by redesigning the rules for additional layers, while
the supervised classification method can be adjusted by relabelling program
elements with the number of layers considered.
## 7 Conclusion
The paper presents an approach to recover high-level architecture from system
implementations. The main highlights of the approach are: (i) the approach
uses centrality measures from the field of Social Network Analysis to quantify
the structural properties of an implementation. (ii) The dependency graph
formed by include relations between programming units (i.e., classes in Java)
is treated as a network, and centrality measures are applied to extract
structural properties. (iii) The paper treats a layer as a coarse-grained
abstraction encapsulating system functionalities, and maps a group of
programming elements sharing common structural properties, as manifested
through centrality measures, to a layer. (iv) The paper describes two mapping
methods for this purpose, called algorithmic centric and data-driven.
(v) Overall, the data-driven methods perform better than the algorithmic
centric method at mapping program elements to layers.
The paper makes particular assumptions, such as the availability of a
Java-based system implementation, a decomposition of the system into three
layers, and the availability of a pre-labelled data set for supervised
classification. These assumptions were made to simplify the demonstration of
the approach and its realization; they do not make the approach restrictive,
and each of them can be relaxed, as the approach is flexible enough to extend.
Exploring the impact of fusing structural properties with semantic features,
such as the dominant concern addressed by a programming element, would be an
exciting exercise for future exploration.
## References
* [1] Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Reviews of modern physics, 74(1):47, 2002.
* [2] Gil Amitai, Arye Shemesh, Einat Sitbon, Maxim Shklar, Dvir Netanely, Ilya Venger, and Shmuel Pietrokovski. Network analysis of protein structures identifies functional residues. Journal of Molecular Biology, 344(4):1135 – 1146, 2004.
* [3] Phillip Bonacich. Some unique properties of eigenvector centrality. Social networks, 29(4):555–564, 2007.
* [4] Stephen P Borgatti. Centrality and network flow. Social networks, 27(1):55–71, 2005.
* [5] Frank Buschmann, Kevlin Henney, and Douglas C Schmidt. Pattern-oriented software architecture, on patterns and pattern languages, volume 5. John wiley & sons, 2007.
* [6] Daniel Escobar, Diana Cárdenas, Rolando Amarillo, Eddie Castro, Kelly Garcés, Carlos Parra, and Rubby Casallas. Towards the understanding and evolution of monolithic applications as microservices. In 2016 XLII Latin American Computing Conference (CLEI), pages 1–11. IEEE, 2016.
* [7] Linton C Freeman. Centrality in social networks conceptual clarification. Social networks, 1(3):215–239, 1979.
* [8] Joshua Garcia, Igor Ivkovic, and Nenad Medvidovic. A comparative analysis of software architecture recovery techniques. In 2013 28th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 486–496. IEEE, 2013.
* [9] Cyril Goutte and Eric Gaussier. A probabilistic interpretation of precision, recall and f-score, with implication for evaluation. In European conference on information retrieval, pages 345–359. Springer, 2005.
* [10] Phil Greenwood, Thiago Bartolomei, Eduardo Figueiredo, Marcos Dosea, Alessandro Garcia, Nelio Cacho, Cláudio Sant’Anna, Sergio Soares, Paulo Borba, Uirá Kulesza, et al. On the impact of aspectual decompositions on design stability: An empirical study. In European Conference on Object-Oriented Programming, pages 176–200. Springer, 2007.
* [11] Jiangang Hao and Tin Kam Ho. Machine learning made easy: A review of scikit-learn package in python programming language. Journal of Educational and Behavioral Statistics, 44(3):348–361, 2019.
* [12] Ch Anwar Ul Hassan, Muhammad Sufyan Khan, and Munam Ali Shah. Comparison of machine learning algorithms in data classification. In 2018 24th International Conference on Automation and Computing (ICAC), pages 1–6. IEEE, 2018.
* [13] http://www.cse.iitb.ac.in/constore, 2009.
* [14] Ayaz Isazadeh, Habib Izadkhah, and Islam Elgedawy. Source code modularization: theory and techniques. Springer, 2017.
* [15] Daniel Link, Pooyan Behnamghader, Ramin Moazeni, and Barry Boehm. The value of software architecture recovery for maintenance. In Proceedings of the 12th Innovations on Software Engineering Conference (formerly known as India Software Engineering Conference), pages 1–10, 2019.
* [16] Thibaud Lutellier, Devin Chollak, Joshua Garcia, Lin Tan, Derek Rayside, Nenad Medvidović, and Robert Kroeger. Measuring the impact of code dependencies on software architecture recovery techniques. IEEE Transactions on Software Engineering, 44(2):159–181, 2017.
* [17] Onaiza Maqbool and Haroon Babri. Hierarchical clustering for software architecture recovery. IEEE Transactions on Software Engineering, 33(11):759–780, 2007.
* [18] Sina Mohammadi and Habib Izadkhah. A new algorithm for software clustering considering the knowledge of dependency between artifacts in the source code. Information and Software Technology, 105:252–256, 2019.
* [19] M. E. J. Newman. Random graphs as models of networks. arXiv preprint cond-mat/0202208, 2002.
* [20] Mark EJ Newman. The structure and function of complex networks. SIAM review, 45(2):167–256, 2003.
* [21] Mark EJ Newman, Steven H Strogatz, and Duncan J Watts. Random graphs with arbitrary degree distributions and their applications. Physical review E, 64(2):026118, 2001.
* [22] Alexia Pacheco, Gabriela Marín-Raventós, and Gustavo López. Designing a technical debt visualization tool to improve stakeholder communication in the decision-making process: a case study. In International Conference on Research and Practical Issues of Enterprise Information Systems, pages 15–26. Springer, 2018.
* [23] Zizi Papacharissi. The virtual geographies of social networks: a comparative analysis of facebook, linkedin and asmallworld. New media & society, 11(1-2):199–220, 2009.
* [24] Santonu Sarkar and Vikrant Kaulgud. Architecture reconstruction from code for business applications-a practical approach. In 1st India Workshop on Reverse Engineering,(IWRE), 2010.
* [25] Santonu Sarkar, Girish Maskeri, and Shubha Ramachandran. Discovery of architectural layers and measurement of layering violations in source code. Journal of Systems and Software, 82(11):1891–1905, 2009.
* [26] Arman Shahbazian, Youn Kyu Lee, Duc Le, Yuriy Brun, and Nenad Medvidovic. Recovering architectural design decisions. In 2018 IEEE International Conference on Software Architecture (ICSA), pages 95–9509. IEEE, 2018.
* [27] Juliana Saragiotto Silva and Antonio Mauro Saraiva. A methodology for applying social network analysis metrics to biological interaction networks. In Advances in Social Networks Analysis and Mining (ASONAM), 2015 IEEE/ACM International Conference on, pages 1300–1307. IEEE, 2015.
* [28] Rishi Ranjan Singh. Centrality measures: A tool to identify key actors in social networks. arXiv preprint arXiv:2011.01627, 2020.
* [29] Colin C Venters, Rafael Capilla, Stefanie Betz, Birgit Penzenstadler, Tom Crick, Steve Crouch, Elisa Yumi Nakagawa, Christoph Becker, and Carlos Carrillo. Software sustainability: Research and practice from a software architecture viewpoint. Journal of Systems and Software, 138:174–188, 2018.
* [30] Duncan J Watts. Networks, dynamics, and the small-world phenomenon 1. American Journal of sociology, 105(2):493–527, 1999.
* [31] Tina Wey, Daniel T Blumstein, Weiwei Shen, and Ferenc Jordán. Social network analysis of animal behaviour: a promising tool for the study of sociality. Animal behaviour, 75(2):333–344, 2008.
* [32] Douglas R White and Stephen P Borgatti. Betweenness centrality measures for directed graphs. Social Networks, 16(4):335–346, 1994.
# Bubble tree compactification of instanton moduli spaces on 4-orbifolds
Shuaige Qiao
###### Abstract
In the study of moduli spaces defined by the anti-self-dual (ASD) Yang-Mills
equations on $SU(2)$ or $SO(3)$ bundles over closed oriented Riemannian
4-manifolds $M$, the bubble tree compactification was defined in [T88], [F95],
[C02], [C10], [F14] and [F15]. The smooth orbifold structure of the bubble
tree compactification away from the trivial stratum is defined in [C02] and
[C10]. In this paper, we apply the technique to 4-orbifolds of the form
$M/\mathbb{Z}_{\alpha}$ satisfying Condition 1.1 to get a bubble tree
compactification of the instanton moduli space on this particular kind of
4-orbifolds.
###### Contents
1. 1 Introduction
2. 2 Equivariant removable singularity theorem
1. 2.1 Uhlenbeck’s removable singularity theorem
2. 2.2 $\Gamma$-equivariant removable singularity
3. 3 Equivariant Taubes gluing construction
1. 3.1 Gluing $P_{i}$ and getting an approximate ASD connection $A^{\prime}$
2. 3.2 Constructing an ASD connection from $A^{\prime}$
4. 4 Bubble tree compactification for 4-manifolds
5. 5 $\mathbb{Z}_{\alpha}$-equivariant bundles and $\mathbb{Z}_{\alpha}$-invariant moduli spaces
6. 6 Invariant ASD connections on $S^{4}$
1. 6.1 Instanton-1 moduli space on $S^{4}$
2. 6.2 $\Gamma$-invariant connections in $\mathcal{M}_{1}(S^{4})$
3. 6.3 $\mathbb{Z}_{p}$-invariant connections in $\mathcal{M}_{k}(S^{4})$
4. 6.4 Balanced $\mathbb{Z}_{p}$-invariant connections on $S^{4}$
7. 7 Bubble tree compactification for $M/\mathbb{Z}_{\alpha}$
8. 8 An example: instanton moduli space on $SO(3)$-bundle over $\mathbb{CP}^{2}$ and weighted complex projective space $\mathbb{CP}^{2}_{(r,s,t)}$
1. 8.1 $\mathcal{M}_{p_{1}=-7}(\mathbb{CP}^{2})$ and its bubble tree compactification
2. 8.2 $\mathcal{M}_{\mathbb{Z}_{a}}(P_{-7}(\mathbb{CP}^{2}))$ and its bubble tree compactification
## 1 Introduction
From the late 1980s, gauge theoretic techniques were applied in the area of
finite group actions on 4-manifolds. [F89] showed that there is no smooth
finite group action on $S^{4}$ with exactly one fixed point, by arguing that
instanton-one invariant connections form a 1-manifold whose boundary can be
identified with the fixed points of the group action. In [BKS90], gauge
theoretic techniques were used in studying fixed points of a finite group
action on 3-manifolds. [FS85] studied pseudofree orbifolds using ASD moduli spaces. A
4-dimensional pseudofree orbifold is a special kind of orbifold which can be
expressed as $M^{5}/S^{1}$, a quotient of a pseudofree $S^{1}$-action on a
5-manifold $M^{5}$. [A90] studied the orbifold $S^{4}/\mathbb{Z}_{\alpha}$,
which is a compactification of $L(\alpha,\beta)\times\mathbb{R}$ where
$L(\alpha,\beta)$ is a Lens space. Austin gave a criterion for existence of
instantons on $S^{4}/\mathbb{Z}_{\alpha}$ and calculated the dimension of the
instanton moduli space. A more general kind of orbifold, namely an orbifold
with isolated fixed points, was discussed in [F92], especially when the group
action around each singular point is by a cyclic group.
Let $(M,g)$ be a closed, connected, oriented 4-dimensional Riemannian
manifold, $G=SU(2)$ or $SO(3)$, $P$ be a principal-$G$ bundle over $M$,
$k=c_{2}(P)$ or $-p_{1}(P)/4$, $\mathcal{A}$ be the space of connections,
$\mathcal{G}$ be the space of gauge transformations, and
$\mathcal{B}=\mathcal{A}/\mathcal{G}$. There is a Yang-Mills functional on
$\mathcal{B}$ which measures the energy of a connection $A$:
$[A]\mapsto YM(A):=\int|F_{A}|^{2}~{}dvol$
where $F_{A}\in\Omega^{2}(adP)$ is the curvature of $A$. Since 2-forms on $M$
can be decomposed into self-dual and anti-self-dual parts, we have
$\Omega^{2}(adP)=\Omega^{2,+}(adP)\oplus\Omega^{2,-}(adP).$
Denote by $P_{+}:\Omega^{2}(adP)\to\Omega^{2,+}(adP)$ the projection. A
connection $A$ is anti-self-dual (ASD) if $P_{+}F_{A}=0$. Denote the moduli
space of ASD connections by $\mathcal{M}_{k}(M)$. If $A$ is ASD, we have
$YM(A)=8\pi^{2}k$.
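The identity $YM(A)=8\pi^{2}k$ for ASD connections follows from the standard energy decomposition; the following is a sketch assuming the usual normalisation of the characteristic number $k$:

```latex
% The curvature splits as F_A = F_A^+ + F_A^-, so the energy and the
% characteristic number are, respectively, the sum and the difference of
% the self-dual and anti-self-dual energies:
YM(A) = \int_M |F_A^+|^2 + |F_A^-|^2 \, dvol,
\qquad
8\pi^2 k = \int_M |F_A^-|^2 - |F_A^+|^2 \, dvol.
% Hence YM(A) = 8\pi^2 k + 2\int_M |F_A^+|^2 \ge 8\pi^2 k,
% with equality exactly when P_+ F_A = 0, i.e. when A is ASD.
```

In particular, ASD connections are the absolute minima of the Yang-Mills functional in their topological class.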
A connection $A$ is reducible if there exists a principal $U(1)$-bundle $Q$
such that $P=Q\times_{U(1)}G$ and $A$ is induced by a connection on $Q$. Note
that
$A\text{ is reducible}\Leftrightarrow\Gamma_{A}=U(1),\qquad A\text{ is irreducible}\Leftrightarrow\Gamma_{A}=\begin{cases}\mathbb{Z}_{2}&\text{in an }SU(2)\text{-bundle},\\\ 0&\text{in an }SO(3)\text{-bundle},\end{cases}$
where $\Gamma_{A}\subset\mathcal{G}$ is the stabiliser of $A$. The irreducible
part of $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{M}_{k}$ are denoted by
$\mathcal{A}^{*}$, $\mathcal{B}^{*}$ and $\mathcal{M}^{*}_{k}$ respectively.
On 4-manifolds, the bubble tree compactification of ASD moduli spaces is
defined in [T88], [F95], [C02], [C10], [F14] and [F15]. The smooth orbifold
structure of the bubble tree compactification away from the trivial stratum is
defined in [C02] and [C10]. We aim to apply the technique in [C02] and [C10]
to define the smooth orbifold structure of the corresponding bubble tree
compactification for 4-orbifolds of the form $M/\mathbb{Z}_{\alpha}$ which
satisfy the following condition:
###### Condition 1.1.
The $\mathbb{Z}_{\alpha}$-action on $M$ is free away from finite points and
$\\{x_{1},\dots,x_{n}\\}\subset X:=M/\mathbb{Z}_{\alpha}$ are singularities
such that for each $i$ a neighbourhood of $x_{i}$ in $X$ is $cL(a_{i},b_{i})$,
a cone over a Lens space where $a_{i}$ divides $\alpha$ and $b_{i}$ is coprime
to $a_{i}$.
In Section 2, we prove the equivariant removable singularity theorem.
In Section 3, we introduce the equivariant Taubes gluing construction by
following the methods in Section 7.2 of [DK90].
In Section 4, we briefly review the bubble tree compactification for
4-manifolds.
In Section 5, we first characterise $\mathbb{Z}_{\alpha}$-equivariant bundles
over $M$ satisfying Condition 1.1. Recall that $SU(2)$ (or $SO(3)$)-bundles
over a simply connected manifold $M$ are classified by $c_{2}$ (or
$(p_{1},w_{2})$), while $\mathbb{Z}_{\alpha}$-equivariant bundles are
characterised by these characteristic classes together with the isotropy
representation at each singular point.
$\\{\mathcal{O}_{i}\\}_{i\in I}$ and an injective map from the space of
isomorphic classes of $\mathbb{Z}_{\alpha}$-equivariant bundles to
$\\{\mathcal{O}_{i}\\}_{i\in I}$ where $I$ is the index set defined in (5.5).
That is to say, $\mathbb{Z}_{\alpha}$-equivariant bundles are characterised by
the index set $I$. After that, we make use of some results in [F89], [F92] and
[FS85] to describe the $\mathbb{Z}_{\alpha}$-invariant moduli spaces on $M$.
In Section 6, we state some results from [A90], [F92] and [FS85] to describe
the $\mathbb{Z}_{\alpha}$-invariant ASD moduli spaces on $S^{4}$.
In Section 7, we define the bubble tree compactification on
$M/\mathbb{Z}_{\alpha}$. Observe that a bubble on singular points of an
orbifold $M/\mathbb{Z}_{\alpha}$ may be a “singular bubble”
$S^{4}/\mathbb{Z}_{\alpha}$. The gluing parameter needs to be
$\mathbb{Z}_{\alpha}$-equivariant. After dealing with these problems properly,
techniques in [C02] can be applied and we get the bubble tree compactification
for $M/\mathbb{Z}_{\alpha}$.
###### Theorem 1.2.
(Theorem 7.9) Suppose the $\mathbb{Z}_{\alpha}$-action on $M$ satisfies
Condition 1.1 and $P\to M$ is an $SU(2)$ (or $SO(3)$)-bundle with $c_{2}$
(or $p_{1}$) equal to $k$. Let the bubble tree compactification of the
irreducible $\mathbb{Z}_{\alpha}$-invariant instanton moduli space be
$\underline{\mathcal{M}}_{k,\mathbb{Z}_{a}}:=\bigsqcup_{i\in
I}\underline{\mathcal{M}}^{\mathcal{O}_{i}},$
where $I$ is the index set defined in (5.5). Then each component
$\underline{\mathcal{M}}^{\mathcal{O}_{i}}$ is an orbifold away from the
stratum with trivial connection as the background connection of the bubble
tree instantons. The dimension of the component
$\underline{\mathcal{M}}^{\mathcal{O}}$ with
$\mathcal{O}=\\{k,(a_{i},b_{i},m_{i})_{i=1}^{n}\\}$ is, for $SU(2)$ case,
$\frac{8c_{2}}{\alpha}-3(1+b_{2}^{+})+n^{\prime}+\sum_{i=1}^{n}\frac{2}{a_{i}}\sum_{j=1}^{a_{i}-1}\cot\left(\frac{\pi
jb_{i}}{a_{i}}\right)\cot\left(\frac{\pi
j}{a_{i}}\right)\sin^{2}\left(\frac{\pi jm_{i}}{a_{i}}\right),$
where $n^{\prime}$ is the number of indices $i$ with $m_{i}\not\equiv 0\text{
mod }a_{i}$. For the $SO(3)$ case, replace $8c_{2}$ by $-2p_{1}$.
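The dimension formula in Theorem 1.2 is straightforward to evaluate numerically. The following sketch is a direct transcription of the $SU(2)$ expression (the function names are ours):

```python
import math

def cot(x):
    return math.cos(x) / math.sin(x)

# Dimension of the component with data O = {k, (a_i, b_i, m_i)}:
# 8 c_2 / alpha - 3 (1 + b_2^+) + n' + trigonometric correction terms,
# where n' counts the m_i that are nonzero mod a_i.
def dim_su2(c2, alpha, b2_plus, singularities):
    n_prime = sum(1 for a, b, m in singularities if m % a != 0)
    correction = 0.0
    for a, b, m in singularities:
        correction += (2.0 / a) * sum(
            cot(math.pi * j * b / a) * cot(math.pi * j / a)
            * math.sin(math.pi * j * m / a) ** 2
            for j in range(1, a))
    return 8 * c2 / alpha - 3 * (1 + b2_plus) + n_prime + correction

# If every m_i is 0 mod a_i, each trigonometric sum vanishes and n' = 0:
print(dim_su2(c2=2, alpha=2, b2_plus=1, singularities=[(2, 1, 0)]))  # -> 2.0
```

Note that the correction term is unchanged under $m_{i}\mapsto a_{i}-m_{i}$, since $\sin^{2}$ is invariant under that substitution.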
In Section 8, we calculate examples for the complex projective space
$\mathbb{CP}^{2}$ and the weighted complex projective space
$\mathbb{CP}^{2}_{[r,s,t]}$.
Acknowledgements. The paper would not be possible without my supervisor Bai-
Ling Wang’s guidance and encouragement. I want to express to him my heartfelt
appreciation. My special thanks to the ANU-CSC scholarship and the MSI ‘Kick-
start’ Postdoctoral Fellowship for the financial support.
## 2 Equivariant removable singularity theorem
The Uhlenbeck removable singularity theorem tells us that any ASD connection
$A$ on a trivial principal $G$-bundle over a punctured 4-dimensional ball with
$L^{2}$-bounded curvature can be extended to the origin of the ball. This
section discusses the case when there is a finite group $\Gamma$ acting on
$(B^{4}\setminus\\{0\\})\times G$ and $A$ is a $\Gamma$-invariant ASD
connection. The main result shows that the $\Gamma$-action can also be
extended to $B^{4}\times G$ such that the extended connection is
$\Gamma$-invariant. Moreover, we will see how the connection $A$ determines
the isotropy representation $\rho:\Gamma\to G$ at the origin.
### 2.1 Uhlenbeck’s removable singularity theorem
We first state Theorem 4.1 of [U82] here.
###### Theorem 2.1.
(Uhlenbeck’s removable singularity theorem) Let $A$ be an ASD connection on
the bundle $P=(B^{4}\setminus\\{0\\})\times G$. If $||F_{A}||_{L^{2}}<\infty$,
then there exists an ASD connection $\tilde{A}$ on $B^{4}\times G$ and a gauge
transformation $g:P\to P$ such that
$g^{*}A=\tilde{A}|_{B^{4}\setminus\\{0\\}\times G}$.
In the proof of this theorem, a trivialisation $\tau$ on $P$ is defined under
which $A$ can be expressed as a $\mathfrak{g}$-valued one-form on
$B^{4}\setminus\\{0\\}$
$A^{\tau}:T(B^{4}\setminus\\{0\\})\to\mathfrak{g}$
satisfying $|A^{\tau}(x)|\xrightarrow{x\to 0}0$. Thus $A^{\tau}$ can be
extended to the origin.
To get a better understanding of this theorem, we identify
$B^{4}\setminus\\{0\\}$ with $S^{3}\times(0,\infty)$ via the map
$f:S^{3}\times(0,\infty)\to B^{4}\setminus\\{0\\},\qquad(\psi,t)\mapsto(\psi,e^{-t}).$
Then $f^{*}A$ is an ASD connection on $S^{3}\times(0,\infty)\times G$. To
simplify the notation, we denote $f^{*}A$ also by $A$. When we restrict $A$ to
the slice $S^{3}\times\\{t\\}$, we get a connection $A_{t}$ on
$S^{3}\times\\{t\\}\times G$.
###### Theorem 2.2.
(Theorem 4.18 of [D02]) Let $Y$ be a 3-dimensional closed oriented Riemannian
manifold and $A$ be an ASD connection over a half-tube $Y\times(0,\infty)$
with $\int_{Y\times(0,\infty)}|F_{A}|^{2}<\infty$. Then the connections
$[A_{t}]$ converge (in $C^{\infty}$ topology on
$\mathcal{A}_{Y}/\mathcal{G}_{Y}$) to a limiting flat connection
$[A_{\infty}]$ over $Y$ as $t\to\infty$.
By Theorem 2.2, there exists a flat connection $A_{\infty}$ such that, after
gauge transformation if necessary, $A_{t}\to A_{\infty}$ in $C^{\infty}$
topology. The removable singularity theorem means that there exists a gauge
transformation $g$ on $S^{3}\times(0,\infty)\times G$ such that
$\displaystyle\lim_{t\to\infty}g|_{S^{3}\times\\{t\\}}=g_{\infty}$ and
$g_{\infty}^{*}A_{\infty}=A_{trivial}$
is the trivial connection on $S^{3}\times G$. Thus $A$ can be extended to the
infinity point (corresponding to the origin in the punctured ball).
### 2.2 $\Gamma$-equivariant removable singularity
Let $\Gamma$ be a finite group acting on $P=(B^{4}\setminus\\{0\\})\times G$
through
$\rho:\Gamma\to Aut(P)$
such that $\rho$ comes from an element of Hom$(\Gamma,G)$, i.e., for all
$\gamma\in\Gamma$ and $x_{1},x_{2}\in B^{4}\setminus\\{0\\}$, we have
$pr_{2}\big{(}\rho(\gamma)(x_{1},g)\big{)}=pr_{2}\big{(}\rho(\gamma)(x_{2},g)\big{)},$
where $pr_{2}:(B^{4}\setminus\\{0\\})\times G\to G$ is the projection map.
Assume that this $\Gamma$-action on $P$ induces a $\Gamma$-action on
$B^{4}\setminus\\{0\\}$ that preserves the metric.
###### Theorem 2.3.
Suppose $A$ is an ASD $\Gamma$-invariant connection on
$P=(B^{4}\setminus\\{0\\})\times G$ with $||F_{A}||_{L^{2}}<\infty$. Then
there exists an ASD connection $\tilde{A}$ on $B^{4}\times G$, a $\Gamma$-action
on $B^{4}\times G$
$\tilde{\rho}:\Gamma\to Aut(B^{4}\times G)$
and a $\Gamma$-equivariant gauge transformation $g:P\to P$ such that
$\tilde{A}$ is $\Gamma$-invariant and
$g^{*}A=\tilde{A}|_{B^{4}\setminus\\{0\\}\times G}$.
###### Proof.
Let $\tilde{A}$ and $g$ be defined as in Theorem 2.1 and define $\tilde{\rho}$
by
$\tilde{\rho}(\gamma)(p)=\begin{cases}g^{-1}\circ\rho(\gamma)\circ
g(p)~{}~{}~{}\forall p\in B^{4}\setminus\\{0\\}\times G\\\
\displaystyle\lim_{q\to p}g^{-1}\circ\rho(\gamma)\circ g(q)~{}~{}~{}\forall
p\in\\{0\\}\times G\end{cases}.$
To show $\tilde{\rho}$ is well-defined, it suffices to show the limit in the
definition exists.
For all $\gamma\in\Gamma$, we have
$(g^{-1}\circ\rho(\gamma)\circ g)^{*}\tilde{A}=\tilde{A}$ (2.1)
on $B^{4}\setminus\\{0\\}\times G$. Choose a trivialization (i.e. a section)
$\tau$ of $B^{4}\times G$ so that $\tilde{A}$ can be written as
$\tilde{A}^{\tau}:TB^{4}\to\mathfrak{g}$
and that $\tilde{A}^{\tau}(0)=0$. Then $g^{-1}\circ\rho(\gamma)\circ g:P\to P$
can be seen as a map $\tilde{\rho}^{\tau}:B^{4}\setminus\\{0\\}\to G$ under
this trivialization:
$\gamma\cdot\tau(b)=\tau(\gamma\cdot b)\cdot\tilde{\rho}^{\tau}(b)$
where $\gamma\in\Gamma$, $b\in B^{4}\setminus\\{0\\}$. Under this
trivialization, the connection $(g^{-1}\circ\rho(\gamma)\circ g)^{*}\tilde{A}$
can be written as
$\big{(}(g^{-1}\circ\rho(\gamma)\circ g)^{*}\tilde{A}\big{)}^{\tau}:T(B^{4}\setminus\\{0\\})\to\mathfrak{g},\qquad(b,v)\mapsto(\tilde{\rho}^{\tau})^{-1}d\tilde{\rho}^{\tau}(b,v)+(\tilde{\rho}^{\tau})^{-1}(b)\tilde{A}^{\tau}(\gamma\cdot b,\gamma_{*}v)\tilde{\rho}^{\tau}(b).$
Then equality (2.1) becomes
$(\tilde{\rho}^{\tau})^{-1}d\tilde{\rho}^{\tau}+(\tilde{\rho}^{\tau})^{-1}\tilde{A}^{\tau}\tilde{\rho}^{\tau}=\tilde{A}^{\tau}.$
Taking the limit of the equality as $x\to 0$, since $\tilde{A}^{\tau}(0)=0$,
we have $d\tilde{\rho}^{\tau}\to 0$, which means that $\tilde{\rho}^{\tau}$
tends to a constant in $G$ as $x\to 0$. ∎
From the proof above, we have that $\tilde{\rho}$ induces a $\Gamma$-action on
$\\{0\\}\times G$, which is called an isotropy representation and denoted as
$\tilde{\rho}_{0}:\Gamma\to G.$ (2.2)
Since $A$ is $\Gamma$-invariant, we can treat $A_{t}$ as a connection
$A_{t}^{\Gamma}$ on $S^{3}\times_{\rho}G$. From the discussion of the last
section, we know that
$[A_{t}^{\Gamma}]\to[A_{\infty}^{\Gamma}]$ (2.3)
where $A_{\infty}^{\Gamma}$ is a flat connection on $S^{3}\times_{\rho}G$.
Moreover, the following theorem shows that $[A_{\infty}^{\Gamma}]$ can be seen
as an element in Hom$(\Gamma,G)$. It is not surprising that $\tilde{\rho}_{0}$
and $A_{\infty}^{\Gamma}$ are related.
###### Theorem 2.4.
Suppose $B$ is connected, $\pi:P\to B$ is a principal $G$-bundle and $A$ is a
flat connection on $P$, then there exists a representation $\pi_{1}(B)\to G$
unique up to conjugation such that
1. (a)
If $p:\tilde{B}\to B$ is the universal covering space of $B$ and $\pi_{1}(B)$
acts on $\tilde{B}$ as covering transformation, then $P$ is isomorphic to
$\tilde{B}\times_{\pi_{1}(B)}G$.
2. (b)
There exists an isomorphism $\Phi:\tilde{B}\times_{\pi_{1}(B)}G\to P$ such
that $q^{*}\Phi^{*}A=\Theta$ where $q:\tilde{B}\times
G\to\tilde{B}\times_{\pi_{1}(B)}G$ is the quotient map and $\Theta$ is the
trivial connection on $\tilde{B}\times G$.
3. (c)
The holonomy induces an injective map:
$hol:\mathcal{A}_{flat}(P)/\mathcal{G}(P)\to\frac{Hom(\pi_{1}(B),G)}{\text{conjugation}},$
and a bijection:
$hol:\bigsqcup_{\begin{matrix}\text{isomorphism}\\\ \text{classes of
}P\end{matrix}}\mathcal{A}_{flat}(P)/\mathcal{G}(P)\to\frac{Hom(\pi_{1}(B),G)}{\text{conjugation}}.$
###### Proof.
(a) Suppose that $\pi:P\to B$ is a principal $G$-bundle with a flat connection
$A$. Through the parallel transport associated to $A$, each smooth path $l$ in
$B$ induces an isomorphism from $P|_{l(0)}$ to $P|_{l(1)}$. Since $A$ is flat,
this isomorphism is invariant under homotopies of the paths which fix the end
points. Choose a base point $x_{0}\in B$ and a point $p_{0}\in P|_{x_{0}}$. If
$l$ is a closed path through $x_{0}$, i.e., $[l]\in\pi_{1}(B,x_{0})$, the
corresponding isomorphism of $P|_{x_{0}}$ to itself is a right translation by
an element of $G$, denoted as $hol_{A,p_{0}}([l])$. By Lemma 3.5.1 in [M98],
$hol_{A,p_{0}}:\pi_{1}(B)\to G$
is an anti-homomorphism, i.e., for any loops $\lambda,\mu$ in $B$
$hol_{A,p_{0}}([\lambda]^{-1})=hol_{A,p_{0}}([\lambda])^{-1}~{},~{}~{}hol_{A,p_{0}}(\lambda*\mu)=hol_{A,p_{0}}(\mu)\cdot
hol_{A,p_{0}}(\lambda),$
where $*$ is the multiplication operator in $\pi_{1}(B)$. Thus
$hol_{A,p_{0}}^{-1}$ defines a left action of $\pi_{1}(B)$ on $G$:
$\pi_{1}(B)\times G\to G,\qquad([\alpha],g)\mapsto hol_{A,p_{0}}([\alpha])^{-1}\cdot g.$ (2.4)
By Lemma 3.5.3 in [M98], if we change the point $p_{0}$ on the fibre, we will
get an element given by conjugation of $hol_{A,p_{0}}$:
$hol_{A,p_{0}\cdot g}=g^{-1}\cdot hol_{A,p_{0}}\cdot g.$
Similarly, if we change $(A,p_{0})$ by a gauge transformation $g$,
$hol_{f(A,p_{0})}$ is conjugate to $hol_{A,p_{0}}$. So we get a map
$hol:\mathcal{A}_{flat}(P)/\mathcal{G}(P)\to\frac{Hom(\pi_{1}(B),G)}{\text{conjugation}}.$
Suppose $p:(\widetilde{B},\tilde{x}_{0})\to(B,x_{0})$ is the universal
covering of $B$. Denote by $\widetilde{B}\times_{\pi_{1}(B)}G$ the associated
bundle defined through $hol_{A,p_{0}}$. Define
$\Phi:\widetilde{B}\times_{\pi_{1}(B)}G\to P$
as follows:
Given $[(\tilde{x},g)]\in\widetilde{B}\times_{\pi_{1}(B)}G$, choose a path
$\tilde{l}$ from $\tilde{x}_{0}$ to $\tilde{x}$, and project $\tilde{l}$ onto
a path $l$ with end points $x_{0}=p(\tilde{x}_{0})$, $x:=p(\tilde{x})$ in $B$.
Then the isomorphism $P|_{x_{0}}\to P|_{x}$ induced by $l$ sends $p_{0}\cdot
g\in\pi^{-1}(x_{0})$ to $\Phi(\tilde{x},g)$, as shown in Figure 1.
Figure 1:
Check $\Phi$ is well defined:
First, let $\tilde{l}_{1},\tilde{l}_{2}$ be two different paths in
$\widetilde{B}$ with endpoints $\tilde{x}_{0}$ and $\tilde{x}$ and
$l_{1},l_{2}$ be their projections. $\tilde{l}_{1},\tilde{l}_{2}$ are
homotopic since $\widetilde{B}$ is a universal covering, therefore $l_{1}$ and
$l_{2}$ are homotopic, which implies that the isomorphisms induced by $l_{1}$
and $l_{2}$ are the same. Second, we need to show that for any
$[\alpha]\in\pi_{1}(B)$,
$\Phi(\tilde{x}\cdot[\alpha],g)=\Phi(\tilde{x},[\alpha]^{-1}\cdot g).$ (2.5)
The right hand side is equal to
$\Phi(\tilde{x},hol_{A,p_{0}}([\alpha])\cdot g)$ by (2.4).
Suppose $p:\widetilde{B}\to B$ lifts the loop $\alpha$ to a path
$\tilde{\alpha}$ from $\tilde{x}_{0}$ to $\tilde{x}_{1}$ in $\widetilde{B}$.
Choose a path $\tilde{\beta}_{0}$ in $\tilde{B}$ from $\tilde{x}$ to
$\tilde{x}_{0}$. Project it by $p$ to get a path $\beta$ from $x$ to $x_{0}$
in $B$. Then lift $\beta$ to a path $\tilde{\beta}_{1}$ such that one of end
points of $\tilde{\beta}_{1}$ is $\tilde{x}_{1}$. Denote the other end point
of $\tilde{\beta}_{1}$ by $\tilde{y}$. Since $\pi_{1}(B)$ acts on
$\widetilde{B}$ as a covering transformation,
$\tilde{x}\cdot[\alpha]=\tilde{y}$. Figure 2 describes the relations between
these points and paths.
Figure 2:
Note that $\tilde{\beta}_{0}^{-1}$ is a path from $\tilde{x}_{0}$ to
$\tilde{x}$ and $\tilde{\alpha}*\tilde{\beta}_{1}^{-1}$ is a path from
$\tilde{x}_{0}$ to $\tilde{y}$. By the definition of $\Phi$, the isomorphism
induced by parallel transport along $\beta^{-1}$ (also denoted as
$\beta^{-1}$) sends $p_{0}\cdot hol_{A,p_{0}}([\alpha])\cdot g$ to
$\Phi(\tilde{x},hol_{A,p_{0}}([\alpha])\cdot g)$ and the isomorphism along
$\alpha*\beta^{-1}$(also denoted as $\alpha*\beta^{-1}$) sends $p_{0}\cdot g$
to $\Phi(\tilde{x}\cdot[\alpha],g)$. That is to say
$\beta^{-1}(p_{0}\cdot hol_{A,p_{0}}([\alpha])\cdot
g)=\Phi(\tilde{x},hol_{A,p_{0}}([\alpha])\cdot g),$
$(\alpha*\beta^{-1})(p_{0}\cdot g)=\Phi(\tilde{x}\cdot[\alpha],g).$
Formula (2.5) follows from the identity:
$(\alpha*\beta^{-1})(p_{0}\cdot g)=\beta^{-1}(p_{0}\cdot g\cdot
hol_{A,p_{0}\cdot g}([\alpha]))=\beta^{-1}(p_{0}\cdot
hol_{A,p_{0}}([\alpha])\cdot g).$
(b) For any $\tilde{x}\in\tilde{B}$, choose a small neighbourhood $U$ of
$\tilde{x}$ so that $U\cap U\cdot[\alpha]=\emptyset$ for any non-trivial
$[\alpha]\in\pi_{1}(B)$. Then for any $g\in G$,
$\\{[\tilde{y},g]~{}|~{}\tilde{y}\in U\\}\subset\tilde{B}\times_{\pi_{1}(B)}G$
is a local horizontal section of $\tilde{B}\times_{\pi_{1}(B)}G$ with respect
to the connection $\Phi^{*}A$. Thus $\tilde{B}\times\\{g\\}$ is a horizontal
section in $\tilde{B}\times G$ with respect to the connection
$q^{*}\Phi^{*}A$, which means $q^{*}\Phi^{*}A$ is the trivial connection.
(c) It suffices to find the inverse of the map
$hol:\bigsqcup_{\begin{matrix}\text{isomorphism}\\\ \text{classes of
}P\end{matrix}}\mathcal{A}_{flat}(P)/\mathcal{G}(P)\to\frac{Hom(\pi_{1}(B),G)}{\text{conjugation}}.$
For any $[\rho]\in Hom(\pi_{1}(B),G)/conjugation$, define
$P:=\tilde{B}\times_{\rho}G.$
Check that $P$ is well-defined up to isomorphism: For any $h\in G$, define the
following map
$\tilde{B}\times_{\rho}G\to\tilde{B}\times_{h^{-1}\rho h}G,\qquad[\tilde{x},g]_{\rho}\mapsto[\tilde{x},h^{-1}\cdot g]_{h^{-1}\rho h},$
where $[\cdot]_{\rho}$, $[\cdot]_{h^{-1}\rho h}$ are equivalence classes in
$\tilde{B}\times_{\rho}G$ and $\tilde{B}\times_{h^{-1}\rho h}G$ respectively.
This is a well-defined isomorphism between principal $G$-bundles since
$[\tilde{x}\cdot[\alpha],h^{-1}\cdot\rho([\alpha])^{-1}\cdot g]_{h^{-1}\rho
h}=[\tilde{x},h^{-1}\cdot g]_{h^{-1}\rho
h}~{},~{}~{}\forall[\alpha]\in\pi_{1}(B).$
We define $A$ to be the connection on $P$ corresponding to the distribution
$\mathcal{H}$, which is defined by
$\mathcal{H}_{[\tilde{x},g]}=q_{*}(T_{\tilde{x}}\tilde{B}\times\\{0\\}),$
where $q:\tilde{B}\times G\to\tilde{B}\times_{\rho}G$ is the quotient map and
$T_{\tilde{x}}\tilde{B}\times\\{0\\}\subset T_{(\tilde{x},g)}(\tilde{B}\times
G)$.
Next we check that $\mathcal{H}$ is well-defined and invariant under the
$G$-action.
For any $\tilde{x},\tilde{x}^{\prime}\in\tilde{B}$, $[\alpha]\in\pi_{1}(B)$
satisfying $\tilde{x}^{\prime}=\tilde{x}\cdot[\alpha]$, let
$U\ni\tilde{x},U^{\prime}\ni\tilde{x}^{\prime}$ be small neighbourhoods such
that $p:\tilde{B}\to B$ maps $U$ and $U^{\prime}$ homeomorphically onto the
same set $p(U)=p(U^{\prime})$. Then we have $U\cdot[\alpha]=U^{\prime}$. For
any $g\in G$,
$q(U\times\\{g\\})=\\{[\tilde{y},g]~{}|~{}\tilde{y}\in
U\\}~{},~{}~{}q(U^{\prime}\times\\{g\\})=\\{[\tilde{y},g]~{}|~{}\tilde{y}\in
U^{\prime}\\}.$
This implies
$q(U\times\\{[\alpha]^{-1}\cdot g\\})=q(U^{\prime}\times\\{g\\}).$
Therefore $\mathcal{H}$ is well-defined.
$\mathcal{H}$ is invariant under the $G$-action since it pulls back to the
distribution
$\tilde{\mathcal{H}}_{(\tilde{x},g)}:=T_{\tilde{x}}\tilde{B}\times\\{0\\}\subset
T_{(\tilde{x},g)}(\tilde{B}\times G),$
which is $G$-invariant. $A$ is flat since $\tilde{\mathcal{H}}$ corresponds to
the trivial connection $\Theta$ on $\tilde{B}\times G$.
It is obvious that the map $\rho\mapsto A$ constructed above is the inverse of
$hol$. ∎
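The well-definedness computation above reduces to the matrix identity $(h^{-1}\rho([\alpha])h)^{-1}\cdot(h^{-1}g)=h^{-1}\cdot\rho([\alpha])^{-1}\cdot g$; a quick numerical check in Python (the matrices below are arbitrary test data, not part of the construction):

```python
import numpy as np

# rho_a plays the role of rho([alpha]), h is the conjugating element,
# g an arbitrary fibre coordinate; all invertible 2x2 matrices.
theta = 0.7
rho_a = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # a rotation matrix
h = np.array([[2.0, 1.0], [1.0, 1.0]])
g = np.array([[0.0, 1.0], [-1.0, 3.0]])

h_inv = np.linalg.inv(h)
conj = h_inv @ rho_a @ h                              # (h^{-1} rho h)([alpha])

# The second components of the two equivalence-class representatives agree:
lhs = np.linalg.inv(conj) @ (h_inv @ g)               # (h^{-1}ρh)(α)^{-1} · h^{-1}g
rhs = h_inv @ np.linalg.inv(rho_a) @ g                # h^{-1} · ρ(α)^{-1} · g
assert np.allclose(lhs, rhs)
print("conjugation identity holds")
```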
We now show the relation between the isotropy representation
$\tilde{\rho}_{0}$ defined in (2.2) and $A_{\infty}^{\Gamma}$ defined in
(2.3).
Recall that $A^{\Gamma}_{\infty}$ is the connection on $S^{3}\times_{\rho}G$ such
that $q^{*}A^{\Gamma}_{\infty}=A_{\infty}$, where $q:S^{3}\times G\to
S^{3}\times_{\rho}G$ is the quotient map. Define the following bundle
isomorphism
$g^{\Gamma}_{\infty}:S^{3}\times_{\tilde{\rho}_{0}}G\to S^{3}\times_{\rho}G~{},~{}~{}[x,g]_{\tilde{\rho}_{0}}\mapsto[x,g_{\infty}\cdot g]_{\rho}.$
It is well-defined since $\tilde{\rho}_{0}=g^{-1}_{\infty}\rho g_{\infty}$ by
definition, thus
$g^{\Gamma}_{\infty}([x\cdot\gamma,\tilde{\rho}_{0}(\gamma)^{-1}\cdot g]_{\tilde{\rho}_{0}})=g^{\Gamma}_{\infty}([x\cdot\gamma,g^{-1}_{\infty}\rho(\gamma)^{-1}g_{\infty}\cdot g]_{\tilde{\rho}_{0}})=[x\cdot\gamma,\rho(\gamma)^{-1}g_{\infty}\cdot g]_{\rho}=[x,g_{\infty}\cdot g]_{\rho}=g^{\Gamma}_{\infty}([x,g]_{\tilde{\rho}_{0}}).$
Then the following two compositions
$S^{3}\times G\xrightarrow{g_{\infty}}S^{3}\times
G\xrightarrow{q}S^{3}\times_{\rho}G$ $S^{3}\times
G\xrightarrow{q}S^{3}\times_{\tilde{\rho}_{0}}G\xrightarrow{g^{\Gamma}_{\infty}}S^{3}\times_{\rho}G$
give the same map. Therefore
$q^{*}(g_{\infty}^{\Gamma})^{*}A^{\Gamma}_{\infty}=g_{\infty}^{*}q^{*}A^{\Gamma}_{\infty}=g_{\infty}^{*}A_{\infty}=A_{trivial},$
which implies $hol([A^{\Gamma}_{\infty}])=[\tilde{\rho}_{0}]$.
To sum up, the $\Gamma$-invariant connection $A$ on
$S^{3}\times(0,\infty)\times G$ with finite $L^{2}$-curvature gives a limit
$\Gamma$-invariant flat connection $A_{\infty}$ on $S^{3}$, which determines a
representation in Hom$(\Gamma,G)$ through holonomy. This representation is the
isotropy representation at the origin of $B^{4}$ after extending the
$\Gamma$-action to the origin.
## 3 Equivariant Taubes gluing construction
Taubes’ gluing construction tells us that given two anti-self-dual (ASD) connections
$A_{1},A_{2}$ on 4-manifolds $X_{1},X_{2}$ respectively, we can glue them
together to get a new ASD connection on the space $X_{1}\\#_{\lambda}X_{2}$.
This section follows the idea of Section 7.2 of [DK90] and treats the
case where a finite group $\Gamma$ acts on the 4-manifolds
$X_{1},X_{2}$ with isolated fixed points $x_{1},x_{2}$: we show how to glue two
$\Gamma$-invariant ASD connections over $X_{1},X_{2}$ together to get a
$\Gamma$-invariant ASD connection on $X_{1}\\#_{\lambda}X_{2}$.
Suppose $X_{1},X_{2}$ are smooth, oriented, compact, Riemannian 4-manifolds,
and $P_{1},P_{2}$ are principal $G$-bundles over $X_{1},X_{2}$ respectively.
Let $\Gamma$ be a finite group acting smoothly on $P_{i}$ and $X_{i}$ from the
left, preserving orientation, such that the action on $P_{i}$ covers the action
on $X_{i}$.
${P_{i}}$${~{}~{}~{}~{}~{}~{}~{}~{}~{}X_{i}~{}\ni~{}x_{i}}$$\scriptstyle{\Gamma}$$\scriptstyle{G}$
Suppose $x_{1}\in X_{1}^{\Gamma}$, $x_{2}\in X_{2}^{\Gamma}$ are two isolated fixed
points with equivalent isotropy representations; i.e., there exists $h\in G$
such that
$\rho_{2}(\gamma)=h\rho_{1}(\gamma)h^{-1}~{}~{}~{}\forall\gamma\in\Gamma$
(3.1)
where $\rho_{1},\rho_{2}$ are isotropy representations of $\Gamma$ at
$x_{1},x_{2}$ respectively.
Now we fix two metrics $g_{1},g_{2}$ on $X_{1},X_{2}$ such that the
$\Gamma$-action preserves the metrics. This can be achieved by the following
proposition.
###### Proposition 3.1.
For any Riemannian metric $g$ on $X$,
$\tilde{g}:=\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}\gamma^{*}g$
defines a $\Gamma$-invariant metric.
The proof is straightforward. We omit it here.
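Proposition 3.1 is a pointwise averaging argument; a minimal numerical sketch in Python, modelling the metric at one point as a symmetric positive-definite matrix and taking $\Gamma=\mathbb{Z}/4$ acting by rotations (both choices are illustrative, not from the text):

```python
import numpy as np

# Gamma = Z/4 acting on R^2 by rotations; gamma^* g is R^T g R.
R = np.array([[0.0, -1.0], [1.0, 0.0]])          # rotation by pi/2, generates Z/4
Gamma = [np.linalg.matrix_power(R, k) for k in range(4)]

g = np.array([[3.0, 1.0], [1.0, 2.0]])           # an arbitrary, non-invariant metric

# tilde g = (1/|Gamma|) sum_gamma gamma^* g
g_tilde = sum(M.T @ g @ M for M in Gamma) / len(Gamma)

# Invariance: gamma^* g_tilde = g_tilde for every gamma in Gamma
for M in Gamma:
    assert np.allclose(M.T @ g_tilde @ M, g_tilde)

# g_tilde is still a metric: symmetric and positive definite
assert np.allclose(g_tilde, g_tilde.T)
assert np.all(np.linalg.eigvalsh(g_tilde) > 0)
print("averaged metric is Gamma-invariant")
```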
### 3.1 Gluing $P_{i}$ and getting an approximate ASD connection $A^{\prime}$
The first step is to glue manifolds $X_{1}$ and $X_{2}$ by connecting sum.
Fix constants $T>\delta>0$ (both will later be taken suitably large) and let
$\lambda>0$ be a constant satisfying $\lambda e^{\delta}\leq\frac{1}{2}b$
where $b:=\lambda e^{T}$. We first glue $X^{\prime}_{1}:=X_{1}\setminus
B_{x_{1}}(\lambda e^{-\delta})$ and $X^{\prime}_{2}:=X_{2}\setminus
B_{x_{2}}(\lambda e^{-\delta})$ together as shown in Figure 3,
Figure 3:
where $e_{\pm}$ are defined in polar coordinates by
$e_{\pm}:\mathbb{R}^{4}\setminus\\{0\\}\to\mathbb{R}\times S^{3}~{},~{}~{}rm\mapsto(\pm\log\frac{r}{\lambda},m)$
and
$f:\Omega_{1}=(-\delta,\delta)\times S^{3}\to\Omega_{2}=(-\delta,\delta)\times
S^{3}$ (3.2)
is defined to be a $\Gamma$-equivariant conformal map that fixes the first
component. Denote the connected sum by $X_{1}\\#_{\lambda}X_{2}$ or $X$.
On the new manifold $X$, we define the metric $g_{\lambda}$ to be a weighted
average of $g_{1}$ and $g_{2}$ on $X_{1}$ and $X_{2}$, compared via the
diffeomorphism $f$. If $g_{\lambda}=\sum m_{i}g_{i}$ on $X^{\prime}_{i}$, we
can arrange $1\leq m_{i}\leq 2$; this means points are further away from each
other in the gluing region.
We now turn to the bundles $P_{i}$. Suppose $A_{i}$ are ASD $\Gamma$-invariant
connections on $P_{i}$. We want to glue $P_{i}|_{X^{\prime}_{i}}$ together so
that $A_{1}$ and $A_{2}$ match on the overlapping part.
The first step is to replace $A_{i}$ by two $\Gamma$-invariant connections
which are flat on the annuli $\Omega_{i}$. Define the cut-off functions
$\eta_{i}$ on $X_{i}$ as follows:
$\eta_{1}(x)=\begin{cases}0~{}~{}~{}x\in[-\delta,+\infty]\times S^{3}\\\
1~{}~{}~{}x\in X_{1}\setminus
B_{x_{1}}(b)~{},\end{cases}~{}~{}\eta_{2}(x)=\begin{cases}0~{}~{}~{}x\in[-\infty,\delta]\times
S^{3}\\\ 1~{}~{}~{}x\in X_{2}\setminus B_{x_{2}}(b)~{},\end{cases}$ (3.3)
which are shown in Figure 4.
Figure 4:
Note that $\eta_{i}(x)$ depend only on $|x-x_{i}|$ when $x\in B_{x_{i}}(b)$.
Therefore $\eta_{i}$ are $\Gamma$-invariant.
Now we introduce a lemma, which comes from Lemma 2.1 of [U82].
###### Lemma 3.2.
For any connection $A$ on $\mathbb{R}^{4}$, recall that an exponential gauge
associated to it is a gauge under which the connection satisfies
$A(0)=0~{},~{}~{}\sum_{j=1}^{4}x_{j}A_{j}(x)=A_{r}=0.$
In an exponential gauge in $\mathbb{R}^{n}$,
$|A(x)|\leq\frac{1}{2}\cdot|x|\cdot\max_{|y|\leq|x|}|F(y)|,$
where $F$ is the curvature of the connection.
By Lemma 3.2, choose an exponential gauge on $B_{x_{i}}(b)$ so that for $x\in
B_{x_{i}}(b)$, we have
$|A_{i}(x)|\leq\frac{1}{2}d(x,x_{i})\max_{d(y,x_{i})\leq
d(x,x_{i})}|F_{A_{i}}(y)|.$
Under this trivialisation, define $A^{\prime}_{i}=\eta_{i}A_{i}$. That is to
say, $A^{\prime}_{i}$ equals $A_{i}$ on $X_{i}\setminus B_{x_{i}}(b)$, and
equals $\eta_{i}A_{i}$ on $B_{x_{i}}(b)$ under the chosen trivialisation.
Then $A_{i}-A^{\prime}_{i}$ is supported on $B_{x_{i}}(b)$. Since $X_{i}$ is
compact, $|F_{A_{i}}|\leq$const. Together, we have
$|A_{i}-A^{\prime}_{i}|\leq|A_{i}|\leq\frac{b}{2}\max|F_{A_{i}}(y)|\leq\text{const}\cdot
b.$
We also get an $L^{4}$-bound
$||A_{i}-A^{\prime}_{i}||_{L^{4}}\leq\bigg{(}vol(B_{x_{i}}(b))\cdot\text{const}\cdot
b^{4}\bigg{)}^{\frac{1}{4}}=\text{const}\cdot b^{2}.$ (3.4)
Since $|F_{A^{\prime}_{i}}|\leq$const and $F^{+}_{A^{\prime}_{i}}$ is
supported on the annulus $B_{x_{i}}(\frac{b}{2},b)$ with radii $\frac{b}{2}$
and $b$, we have
$\displaystyle||F^{+}_{A^{\prime}_{i}}||_{L^{2}}$
$\displaystyle\leq\bigg{(}vol(B_{x_{i}}(b))\max|F^{+}_{A^{\prime}_{i}}(y)|\bigg{)}^{\frac{1}{2}}$
$\displaystyle\leq\bigg{(}vol(B_{x_{i}}(b))\max|F_{A^{\prime}_{i}}(y)|\bigg{)}^{\frac{1}{2}}\leq\text{const}\cdot
b^{2}.$ (3.5)
The next step is to glue $P_{1}|_{X^{\prime}_{1}}$ and
$P_{2}|_{X^{\prime}_{2}}$ together to get a principal $G$-bundle $P$ over $X$
and glue $A^{\prime}_{1}$ and $A^{\prime}_{2}$ together to get a
$\Gamma$-invariant connection $A^{\prime}$ on $P$.
###### Lemma 3.3.
There exists a canonical $(\Gamma,G)$-equivariant map $\varphi$:
$\varphi:G\cong P_{1}|_{x_{1}}\to P_{2}|_{x_{2}}\cong G~{},~{}~{}g\mapsto hg,$
where $h$ is defined in (3.1) and $(\Gamma,G)$-equivariant means $\varphi$ is
$\Gamma$-equivariant and $G$-equivariant.
###### Proof.
The $G$-equivariance is obvious and the $\Gamma$-equivariance follows from:
$P_{1}|_{x_{1}}\xrightarrow{\varphi}P_{2}|_{x_{2}}\xrightarrow{\gamma}P_{2}|_{x_{2}}~{},~{}~{}g\mapsto hg\mapsto\rho_{2}(\gamma)hg$
$P_{1}|_{x_{1}}\xrightarrow{\gamma}P_{1}|_{x_{1}}\xrightarrow{\varphi}P_{2}|_{x_{2}}~{},~{}~{}g\mapsto\rho_{1}(\gamma)g\mapsto h\rho_{1}(\gamma)g$
and $\rho_{2}(\gamma)hg=h\rho_{1}(\gamma)h^{-1}hg=h\rho_{1}(\gamma)g$ for any
$\gamma\in\Gamma$. ∎
Denote the subgroup of $(\Gamma,G)$-equivariant gluing parameters by
$Gl^{\Gamma}:=Hom_{(\Gamma,G)}(P_{1}|_{x_{1}},P_{2}|_{x_{2}}).$ (3.6)
###### Proposition 3.4.
Suppose $G=SU(2)$ or $SO(3)$. The subgroup of $(\Gamma,G)$-equivariant gluing
parameters $Gl^{\Gamma}$ takes three forms:
$Gl^{\Gamma}\cong\begin{cases}G~{}~{}~{}~{}~{}~{}~{}\text{if
}\rho_{1}(\Gamma),\rho_{2}(\Gamma)\subset C(G),\\\ U(1)~{}~{}~{}\text{if
}\rho_{1}(\Gamma),\rho_{2}(\Gamma)\not\subset C(G)\text{ and are contained in
some }U(1)\subset G,\\\ C(G)~{}~{}\text{if
}\rho_{1}(\Gamma),\rho_{2}(\Gamma)\text{ are not contained in any }U(1)\text{
subgroup in }G,\end{cases}$ (3.7)
where $C(G)$ is the center of $G$.
###### Proof.
By formula (3.1), $\rho_{1}(\Gamma)$ and $\rho_{2}(\Gamma)$ are isomorphic and
have isomorphic centralisers. For any element $h^{\prime}$ in the centraliser
of $\rho_{1}(\Gamma)$, $\varphi^{\prime}:g\mapsto hh^{\prime}g$ is also a
$(\Gamma,G)$-equivariant map between $P_{1}|_{x_{1}}$ and $P_{2}|_{x_{2}}$
since for all $\gamma\in\Gamma$
$hh^{\prime}\rho_{1}(\gamma)g=h\rho_{1}(\gamma)h^{\prime}g=\rho_{2}(\gamma)hh^{\prime}g.$
Conversely, any element $\varphi^{\prime}\in Gl^{\Gamma}$ can be written as
$g\mapsto h^{\prime}g$ for some $h^{\prime}\in G$. Then $h^{-1}h^{\prime}$ is
in the centraliser of $\rho_{1}(\Gamma)$ since for any $\gamma\in\Gamma$,
$g\in G$, we have
$\displaystyle\rho_{2}(\gamma)h^{\prime}g=h^{\prime}\rho_{1}(\gamma)g~{}~{}$
$\displaystyle\Rightarrow~{}~{}h^{-1}\rho_{2}(\gamma)h^{\prime}g=h^{-1}h^{\prime}\rho_{1}(\gamma)g$
$\displaystyle\Rightarrow~{}~{}\rho_{1}(\gamma)h^{-1}h^{\prime}g=h^{-1}h^{\prime}\rho_{1}(\gamma)g,$
which implies $h^{-1}h^{\prime}$ commutes with $\rho_{1}(\gamma)$.
Therefore $Gl^{\Gamma}$ is isomorphic to the centraliser of $\rho_{1}(\Gamma)$
in $G$. The three cases in (3.7) are the only groups that arise as
centralisers of subgroups of $G$ when $G=SU(2)$ or $SO(3)$. ∎
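The three cases of Proposition 3.4 can be probed numerically for $G=SU(2)$ by testing commutation directly: the centraliser of a diagonal non-central subgroup is the diagonal $U(1)$, while the quaternion subgroup $\\{\pm 1,\pm i,\pm j,\pm k\\}$, which lies in no $U(1)$, is centralised only by the centre $\\{\pm 1\\}$. A Python sketch (the sample elements are arbitrary test data):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
i_mat = np.array([[1j, 0], [0, -1j]])                    # quaternion i in SU(2)
j_mat = np.array([[0, 1], [-1, 0]], dtype=complex)       # quaternion j in SU(2)

def commutes(a, b):
    return np.allclose(a @ b, b @ a)

# a diagonal U(1) element and a generic non-diagonal SU(2) element
u1 = np.array([[np.exp(0.3j), 0], [0, np.exp(-0.3j)]])
w = np.array([[0.6, 0.8], [-0.8, 0.6]], dtype=complex)   # real rotation, in SU(2)

# Case 2: rho(Gamma) generated by i_mat lies in the diagonal U(1).
# Diagonal elements centralise it, non-diagonal ones do not:
assert commutes(u1, i_mat)
assert not commutes(w, i_mat)

# Case 3: rho(Gamma) = quaternion group generated by i_mat, j_mat.
# Neither u1 nor w centralises both generators, but -1 (central) does:
assert not (commutes(u1, i_mat) and commutes(u1, j_mat))
assert not (commutes(w, i_mat) and commutes(w, j_mat))
assert commutes(-I2, i_mat) and commutes(-I2, j_mat)
print("centraliser checks pass")
```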
Recall that the annuli $\Omega_{i}$ are identified by $f:\Omega_{1}\to\Omega_{2}$
defined in (3.2). Given $\varphi\in Gl^{\Gamma}$, we define the identification
between $P_{1}|_{\Omega_{1}}$ and $P_{2}|_{\Omega_{2}}$, also denoted by
$f:P_{1}|_{\Omega_{1}}\to P_{2}|_{\Omega_{2}}$, as follows. For any
point $q\in P_{1}|_{\Omega_{1}}$, choose a path from $\pi(q)$ to $x_{1}$; then
lift it to a path beginning at $q$ by parallel transport corresponding to
$A^{\prime}_{1}$. The resulting path has an end point $p$. On the other hand,
choose a path from $x_{2}$ to $f(\pi(q))$ and lift it to a path beginning at
$\varphi(p)$ by parallel transport corresponding to $A^{\prime}_{2}$. The
resulting path has an end point $f(q)$, as shown in Figure 5.
Figure 5:
This map $f$ is well-defined since $A^{\prime}_{i}$ are flat on
$B_{x_{i}}(\lambda e^{\delta})$, so the choice of paths in $B_{x_{i}}(\lambda
e^{\delta})$ does not matter. Also, $f$ is $\Gamma$-equivariant since
$A^{\prime}_{i}$ are $\Gamma$-invariant.
Denote $(P_{1}|_{X^{\prime}_{1}})\\#_{\varphi}(P_{2}|_{X^{\prime}_{2}})$ by
$P$ and the induced connection on $P$ by $A^{\prime}(\varphi)$. In general,
for different gluing parameters $\varphi_{1},\varphi_{2}$, the connections
$A^{\prime}(\varphi_{1})$ and $A^{\prime}(\varphi_{2})$ are not gauge
equivalent.
###### Proposition 3.5.
(Proposition 7.2.9 in [DK90]) Suppose $G=SU(2)$ and the $X_{i}$ are simply
connected. Then the isomorphism class of the bundle $P$ is independent of
gluing parameters and the connections $A^{\prime}(\varphi_{1})$,
$A^{\prime}(\varphi_{2})$ are gauge equivalent if and only if the parameters
$\varphi_{1}$, $\varphi_{2}$ are in the same orbit of the action of
$\Gamma_{A_{1}}\times\Gamma_{A_{2}}$ on $Gl$.
###### Proof.
(1). Suppose $A^{\prime}(\varphi_{1})$, $A^{\prime}(\varphi_{2})$ are gauge
equivalent; we show there exist
$\tilde{\sigma}_{1}\in\Gamma_{A_{1}},\tilde{\sigma}_{2}\in\Gamma_{A_{2}}$ such
that
$\varphi_{2}=\tilde{\sigma}_{2}^{-1}\circ\varphi_{1}\circ\tilde{\sigma}_{1}$.
Since $A^{\prime}(\varphi_{i})$ are connections on the bundle
$(P_{1}|_{X^{\prime}_{1}})\\#_{\varphi_{i}}(P_{2}|_{X^{\prime}_{2}})$ over
$X=X_{1}\\#_{\lambda}X_{2}$, on the cylindrical ends we have the
following local trivialisations (exponential gauges) of $P_{i}$:
$P_{1}|_{(-T,+\infty)\times S^{3}}\cong(-T,+\infty)\times S^{3}\times G,$
$P_{2}|_{(-\infty,T)\times S^{3}}\cong(-\infty,T)\times S^{3}\times G.$
Under these trivialisations, $A_{1}\in\Omega^{1}((-T,+\infty)\times
S^{3},\mathfrak{g})$, $A_{2}\in\Omega^{1}((-\infty,T)\times
S^{3},\mathfrak{g})$ are Lie algebra valued 1-forms on the cylindrical ends
and
$A^{\prime}_{1}=\eta_{1}A_{1}~{},~{}~{}A^{\prime}_{2}=\eta_{2}A_{2}.$
Suppose $\varphi_{i}:g\mapsto h_{i}g$. Define the bundle isomorphisms
$\alpha_{i}:(P_{1}|_{X^{\prime}_{1}})\\#_{id}(P_{2}|_{X^{\prime}_{2}})\to(P_{1}|_{X^{\prime}_{1}})\\#_{\varphi_{i}}(P_{2}|_{X^{\prime}_{2}})$
by
$\displaystyle\alpha_{i}(p)=p~{}~{}~{}\text{for}~{}~{}p\in
P_{1}|_{X^{\prime}_{1}}~{}~{}\text{or}~{}~{}p\in
P_{2}|_{X^{\prime}_{2}\setminus(-\delta,T)\times S^{3}}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle\alpha_{i}(t,m,g)=(t,m,\beta_{i}(t)g)~{}~{}~{}\text{for}~{}~{}(t,m,g)\in
P_{2}|_{(-\delta,T)\times S^{3}},$
where $\beta_{i}$ are smooth functions from $(-\delta,T)$ to $G$ satisfying
$\beta_{i}(t)=h_{i}$ when $t\in(-\delta,\delta)$ and $\beta_{i}(t)=id$ when
$t=T$. The well-definedness of the bundle isomorphism $\alpha_{i}$ is obvious.
Pulled back by $\alpha_{i}$, the connections $A^{\prime}(\varphi_{1})$ and
$A^{\prime}(\varphi_{2})$ become $\alpha_{1}^{*}A^{\prime}(\varphi_{1})$ and
$\alpha_{2}^{*}A^{\prime}(\varphi_{2})$ on
$(P_{1}|_{X^{\prime}_{1}})\\#_{id}(P_{2}|_{X^{\prime}_{2}})$. Since they are
equivalent, there exists a gauge transformation $\tilde{\sigma}$ in
Aut$((P_{1}|_{X^{\prime}_{1}})\\#_{id}(P_{2}|_{X^{\prime}_{2}}))$ such that
$\tilde{\sigma}^{*}\alpha_{1}^{*}A^{\prime}(\varphi_{1})=\alpha_{2}^{*}A^{\prime}(\varphi_{2}).$
Therefore
$(\alpha_{1}\tilde{\sigma}\alpha_{2}^{-1})^{*}A^{\prime}(\varphi_{1})=A^{\prime}(\varphi_{2})$.
Since $A^{\prime}(\varphi_{i})$ both equal $A^{\prime}_{2}$ on
$X^{\prime}_{2}$, we have that
$\alpha_{1}\tilde{\sigma}\alpha_{2}^{-1}\big{|}_{X^{\prime}_{2}}$ stabilises
$A^{\prime}_{2}$ on $X^{\prime}_{2}$. In fact,
$\alpha_{1}\tilde{\sigma}\alpha_{2}^{-1}\big{|}_{X^{\prime}_{2}}$ can be
extended to $X_{2}$ as a gauge transformation of $P_{2}$ that stabilises
$A^{\prime}_{2}$. This is because under the exponential gauge over
$(-\delta,\delta)\times S^{3}\subset X^{\prime}_{2}$, we have
$\alpha_{1}\tilde{\sigma}\alpha_{2}^{-1}=h_{1}\sigma
h_{2}^{-1}:(-\delta,\delta)\times S^{3}\to G,$
$(h_{2}\sigma^{-1}h_{1}^{-1})d(h_{1}\sigma
h_{2}^{-1})+(h_{2}\sigma^{-1}h_{1}^{-1})A^{\prime}_{2}(h_{1}\sigma
h_{2}^{-1})=A^{\prime}_{2},$
where $\sigma:(-\delta,\delta)\times S^{3}\to G$ is the map corresponding to
the gauge transformation $\tilde{\sigma}$ under the exponential gauge. Since
$A^{\prime}_{2}=0$ on $(-\delta,\delta)\times S^{3}$, we find
$\sigma:(-\delta,\delta)\times S^{3}\to G$ is a constant function, thus
$\alpha_{1}\tilde{\sigma}\alpha_{2}^{-1}\big{|}_{X^{\prime}_{2}}$ can be
extended to $X_{2}$. We denote this gauge transformation of $P_{2}$ as
$\tilde{\sigma}_{2}$.
Similarly, $\tilde{\sigma}\big{|}_{X^{\prime}_{1}}$ stabilises
$A^{\prime}_{1}$ and can be extended to $\tilde{\sigma}_{1}$ on $X_{1}$ which,
again, stabilises $A^{\prime}_{1}$. We also get
$\tilde{\sigma}_{2}^{-1}\varphi_{1}\tilde{\sigma}_{1}=\varphi_{2}.$
To show $\varphi_{1}$, $\varphi_{2}$ are in the same orbit of the action of
$\Gamma_{A_{1}}\times\Gamma_{A_{2}}$ on $Gl$, it is enough to show
$\tilde{\sigma}_{1}$ stabilises $A_{1}$ and $\tilde{\sigma}_{2}$ stabilises
$A_{2}$:
If $A^{\prime}_{1}$ is irreducible, then $\tilde{\sigma}_{1}\in\\{\pm
1\\}\subset\Omega^{0}(X_{1},AdP_{1})$, thus $\tilde{\sigma}_{1}$ also
stabilises $A_{1}$.
If $A^{\prime}_{1}$ is reducible, by Lemma 4.3.21 of [DK90], we know that
$A_{1}$ is reducible since $A_{1}=A^{\prime}_{1}$ on $X_{1}\setminus
B_{x_{1}}(b)$. Then $P_{1}$ reduces to a $U(1)$-principal bundle $Q^{\prime}$
so that $\Gamma_{A^{\prime}_{1}}$ consists of constant sections of
$Q^{\prime}\times_{U(1)}U(1)\subset P_{1}\times_{SU(2)}SU(2)$. $P_{1}$ also
reduces to another $U(1)$-principal bundle $Q$ so that $\Gamma_{A_{1}}$
consists of constant sections of $Q\times_{U(1)}U(1)\subset
P_{1}\times_{SU(2)}SU(2)$. Since $A_{1}=A^{\prime}_{1}$ on $X_{1}\setminus
B_{x_{1}}(b)$, we have $Q^{\prime}\big{|}_{X_{1}\setminus
B_{x_{1}}(b)}=Q\big{|}_{X_{1}\setminus B_{x_{1}}(b)}$.
Since $\tilde{\sigma}_{1}\in\Gamma_{A^{\prime}_{1}}$, $\tilde{\sigma}_{1}$ is
a constant section of $Q^{\prime}\times_{U(1)}U(1)$. By definition of
$\tilde{\sigma}_{1}$,
$\tilde{\sigma}\big{|}_{X_{1}\setminus
B_{x_{1}}(b)}=\tilde{\sigma}_{1}\big{|}_{X_{1}\setminus B_{x_{1}}(b)}.$
Therefore $\tilde{\sigma}$ is constant on $Q^{\prime}\times_{U(1)}U(1)$ over
$X_{1}\setminus B_{x_{1}}(b)$. This in turn implies that $\tilde{\sigma}$ is
constant on $Q\times_{U(1)}U(1)$ over $X_{1}\setminus B_{x_{1}}(b)$. Then
$\tilde{\sigma}\big{|}_{X_{1}\setminus B_{x_{1}}(b)}$ extends constantly on
$Q\times_{U(1)}U(1)$ to a gauge transformation $\tilde{\sigma}_{A_{1}}$ on
$X_{1}$ that stabilises $A_{1}$.
In summary, $\tilde{\sigma}|_{X^{\prime}_{1}}$ extends to $\tilde{\sigma}_{1}$
on $X_{1}$, which stabilises $A^{\prime}_{1}$;
$\tilde{\sigma}|_{X_{1}\setminus B_{x_{1}}(b)}$ extends to
$\tilde{\sigma}_{A_{1}}$ on $X_{1}$, which stabilises $A_{1}$. Next we show
that $\tilde{\sigma}_{A_{1}}=\tilde{\sigma}_{1}$.
It is enough to show $\tilde{\sigma}_{A_{1}}=\tilde{\sigma}_{1}$ on
$B_{x_{1}}(b)$ since they both equal $\tilde{\sigma}$ on $X_{1}\setminus
B_{x_{1}}(b)$. Under the exponential gauge on $B_{x_{1}}(b)$,
$\tilde{\sigma}_{A_{1}}$ and $\tilde{\sigma}_{1}$ can be written as
$\sigma_{A_{1}},\sigma_{1}:B_{x_{1}}(b)\to G$ respectively. Then we have
$\sigma_{A_{1}}^{-1}d\sigma_{A_{1}}+\sigma_{A_{1}}^{-1}A_{1}\sigma_{A_{1}}=A_{1},$
$\sigma_{1}^{-1}d\sigma_{1}+\sigma_{1}^{-1}A^{\prime}_{1}\sigma_{1}=A^{\prime}_{1},$
$(A_{1})(\frac{\partial}{\partial
r})=(A^{\prime}_{1})(\frac{\partial}{\partial r})=0,$
which implies that $\displaystyle\frac{\partial\sigma_{A_{1}}}{\partial
r}=\displaystyle\frac{\partial\sigma_{1}}{\partial r}=0$. This means
$\sigma_{A_{1}},\sigma_{1}$ are constant on $B_{x_{1}}(b)$. Moreover, they are
the same constant since they have the same value on $\partial B_{x_{1}}(b)$.
That is, $\tilde{\sigma}_{A_{1}}=\tilde{\sigma}_{1}$, i.e.,
$\tilde{\sigma}_{1}\in\Gamma_{A_{1}}$.
Similarly, $\tilde{\sigma}_{2}$ stabilises $A_{2}$.
(2). Suppose there exist
$\tilde{\sigma}_{1}\in\Gamma_{A_{1}},\tilde{\sigma}_{2}\in\Gamma_{A_{2}}$ such
that
$\varphi_{2}=\tilde{\sigma}_{2}^{-1}\circ\varphi_{1}\circ\tilde{\sigma}_{1}$;
we now show that $A^{\prime}(\varphi_{1})$, $A^{\prime}(\varphi_{2})$ are
gauge equivalent.
We first show that $\tilde{\sigma}_{i}\in\Gamma_{A_{i}}$ implies
$\tilde{\sigma}_{i}\in\Gamma_{A^{\prime}_{i}}$. Since
$A_{1}\in\Omega^{1}((-T,+\infty)\times S^{3},\mathfrak{g})$,
$A_{2}\in\Omega^{1}((-\infty,T)\times S^{3},\mathfrak{g})$ are given in
exponential gauge, under the coordinate $(r,\psi)$, we have
$A_{1}(\frac{\partial}{\partial r})=A_{2}(\frac{\partial}{\partial r})=0$.
Under the exponential gauge, $\tilde{\sigma}_{1}$ corresponds to
$\sigma_{1}:(-T,+\infty)\times S^{3}\to G$, and $\tilde{\sigma}_{2}$
corresponds to $\sigma_{2}:(-\infty,T)\times S^{3}\to G$. Then we have the
formulas
$\sigma_{1}^{-1}d\sigma_{1}+\sigma_{1}^{-1}A_{1}\sigma_{1}=A_{1}~{}~{}~{}x\in(-T,+\infty)\times
S^{3},$
$\sigma_{2}^{-1}d\sigma_{2}+\sigma_{2}^{-1}A_{2}\sigma_{2}=A_{2}~{}~{}~{}x\in(-\infty,T)\times
S^{3}.$
Therefore $\displaystyle\frac{\partial\sigma_{i}}{\partial r}=0$. Then
$\sigma_{i}$ are constant functions and
$\sigma_{i}^{-1}\eta_{i}A_{i}\sigma_{i}=\eta_{i}A_{i}$, which implies that
$\tilde{\sigma}_{i}$ stabilises $A^{\prime}_{i}$.
Define a gauge transformation on
$(P_{1}|_{X^{\prime}_{1}})\\#_{id}(P_{2}|_{X^{\prime}_{2}})$ by
$\tilde{\sigma}=\begin{cases}\tilde{\sigma}_{1}~{}~{}~{}~{}~{}~{}~{}~{}x\in
X^{\prime}_{1}\\\ \alpha_{1}^{-1}\tilde{\sigma}_{2}\alpha_{2}~{}~{}x\in
X^{\prime}_{2}\end{cases}.$
The well-definedness of $\tilde{\sigma}$ follows from
$\varphi_{2}=\tilde{\sigma}_{2}^{-1}\circ\varphi_{1}\circ\tilde{\sigma}_{1}$.
It is easy to see that
$\tilde{\sigma}^{*}(\alpha^{*}_{1}A^{\prime}(\varphi_{1}))=\alpha^{*}_{2}A^{\prime}(\varphi_{2})$.
Therefore $A^{\prime}(\varphi_{1})$ on
$(P_{1}|_{X^{\prime}_{1}})\\#_{\varphi_{1}}(P_{2}|_{X^{\prime}_{2}})$ and
$A^{\prime}(\varphi_{2})$ on
$(P_{1}|_{X^{\prime}_{1}})\\#_{\varphi_{2}}(P_{2}|_{X^{\prime}_{2}})$ are
gauge equivalent.
∎
We denote $A^{\prime}(\varphi)$ by $A^{\prime}$ when the gluing parameter is
contextually clear.
### 3.2 Constructing an ASD connection from $A^{\prime}$
The general idea is to find a solution $a\in\Omega^{1}(X,adP)^{\Gamma}$ so
that $A:=A^{\prime}+a$ is anti-self-dual, i.e.,
$F_{A}^{+}=F^{+}_{A^{\prime}}+d^{+}_{A^{\prime}}a+(a\wedge a)^{+}=0.$ (3.8)
To do so, we wish to find a right inverse $R^{\Gamma}$ of $d^{+}_{A^{\prime}}$
and an element $\xi\in\Omega^{2,+}(X,adP)^{\Gamma}$ satisfying
$F^{+}_{A^{\prime}}+\xi+(R^{\Gamma}\xi\wedge R^{\Gamma}\xi)^{+}=0.$ (3.9)
Then $a=R^{\Gamma}\xi$ is a solution of equation (3.8).
Since $A_{i}$ are two ASD connections, we have the complex:
$0\to\Omega^{0}(X_{i},adP_{i})\xrightarrow{d_{A_{i}}}\Omega^{1}(X_{i},adP_{i})\xrightarrow{d_{A_{i}}^{+}}\Omega^{2,+}(X_{i},adP_{i})\to
0.$
We assume that the second cohomology classes $H^{2}_{A_{1}}$, $H^{2}_{A_{2}}$
are both zero. The $\Gamma$-action can be induced on this chain complex
naturally. It is worth mentioning that the $\Gamma$-action preserves the
metric, so the space $\Omega^{2,+}(X_{i},adP_{i})$ is $\Gamma$-invariant.
Define the following two averaging maps:
$ave:\Omega^{1}(X_{i},adP_{i})\to\Omega^{1}(X_{i},adP_{i})^{\Gamma}~{},~{}~{}a\mapsto\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}\gamma^{*}a,$
$ave:\Omega^{2,+}(X_{i},adP_{i})\to\Omega^{2,+}(X_{i},adP_{i})^{\Gamma}~{},~{}~{}\xi\mapsto\frac{1}{|\Gamma|}\sum_{\gamma\in\Gamma}\gamma^{*}\xi.$
Note that these maps are surjective since any $\Gamma$-invariant element is
mapped to itself.
###### Proposition 3.6.
The following diagram
$0\to\Omega^{0}(X_{i},adP_{i})\xrightarrow{d_{A_{i}}}\Omega^{1}(X_{i},adP_{i})\xrightarrow{d_{A_{i}}^{+}}\Omega^{2,+}(X_{i},adP_{i})\to 0$
$0\to\Omega^{0}(X_{i},adP_{i})^{\Gamma}\xrightarrow{d_{A_{i}}}\Omega^{1}(X_{i},adP_{i})^{\Gamma}\xrightarrow{d_{A_{i}}^{+}}\Omega^{2,+}(X_{i},adP_{i})^{\Gamma}\to 0,$
with vertical maps $ave:\Omega^{1}(X_{i},adP_{i})\to\Omega^{1}(X_{i},adP_{i})^{\Gamma}$ and $ave:\Omega^{2,+}(X_{i},adP_{i})\to\Omega^{2,+}(X_{i},adP_{i})^{\Gamma}$,
commutes.
###### Proof.
It suffices to show that
$d_{A_{i}}:\Omega^{1}(X_{i},adP_{i})\to\Omega^{2}(X_{i},adP_{i})$ and $\gamma$
commute for any $\gamma\in\Gamma$. For any $\eta\in\Omega^{1}(X_{i},adP_{i})$,
we treat $\eta$ as a Lie algebra valued 1-form on $P_{i}$, then
$(d+A_{i})(\gamma^{*}\eta)=\gamma^{*}d\eta+[A_{i},\gamma^{*}\eta],$
$\gamma^{*}\big{(}(d+A_{i})\eta\big{)}=\gamma^{*}d\eta+\gamma^{*}[A_{i},\eta]=\gamma^{*}d\eta+[\gamma^{*}A_{i},\gamma^{*}\eta]=\gamma^{*}d\eta+[A_{i},\gamma^{*}\eta],$
where the last equality uses the $\Gamma$-invariance $\gamma^{*}A_{i}=A_{i}$.
∎
By Proposition 3.6, we have $(H^{2}_{A_{i}})^{\Gamma}=0$ since
$\text{Im}(d^{+}_{A_{i}}\circ ave)=\text{Im}(ave\circ
d^{+}_{A_{i}})=\Omega^{2,+}(X_{i},adP_{i})^{\Gamma}.$
###### Lemma 3.7.
Suppose
$L:H_{1}\to H_{2}$
is a linear surjection between Hilbert spaces, then there exists a linear
right inverse of $L$.
###### Proof.
Decompose $H_{1}$ as $H_{1}=\ker(L)\oplus H_{0}$ with
$H_{0}:=\ker(L)^{\perp}$, so that $L|_{H_{0}}:H_{0}\xrightarrow{\cong}H_{2}$
is an isomorphism. Then for any linear map $P$ from $H_{2}$ to $\ker(L)$,
$R:=(L|_{H_{0}})^{-1}+P$
is a right inverse of $L$, since for all $\xi\in H_{2}$,
$LR(\xi)=\xi+LP\xi=\xi$ (as $P\xi\in\ker(L)$).
∎
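In finite dimensions, Lemma 3.7 with $P=0$ and $H_{0}=\ker(L)^{\perp}$ produces the Moore-Penrose pseudoinverse; a quick Python sketch with an arbitrary surjective matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((3, 5))          # generically a surjection R^5 -> R^3
assert np.linalg.matrix_rank(L) == 3

# The P = 0 choice with H_0 = ker(L)^perp is the Moore-Penrose pseudoinverse
R = np.linalg.pinv(L)

# R is a right inverse: L R = id on H_2 = R^3
assert np.allclose(L @ R, np.eye(3))

# R takes values in H_0 = ker(L)^perp: columns of R are orthogonal to ker(L)
_, _, Vt = np.linalg.svd(L)
K = Vt[3:].T                             # columns span ker(L)
assert np.allclose(K.T @ R, 0)
print("right inverse constructed")
```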
By Lemma 3.7, there exist right inverses
$R_{i}^{\Gamma}:\Omega^{2,+}(X_{i},adP_{i})^{\Gamma}\to\Omega^{1}(X_{i},adP_{i})^{\Gamma}$
to $d_{A_{i}}^{+}$.
###### Proposition 3.8.
$R_{i}^{\Gamma}$ are bounded operators from
$\Omega^{2,+}_{L^{2}}(X_{i},adP_{i})^{\Gamma}$ to
$\Omega^{1}_{L^{2}_{1}}(X_{i},adP_{i})^{\Gamma}$.
The proof of Proposition 3.8 follows from Proposition 2.13 of Chapter III of
[LM89] and the fact that $X_{i}$ are compact.
By the Sobolev embedding theorem, we have
$||R_{i}^{\Gamma}\xi||_{L^{4}}\leq\text{const.}||R_{i}^{\Gamma}\xi||_{L_{1}^{2}},$
and combined with Proposition 3.8, we have
$||R_{i}^{\Gamma}\xi||_{L^{4}}\leq\text{const.}||\xi||_{L^{2}}.$ (3.10)
Define two operators
$Q_{i}^{\Gamma}:\Omega^{2,+}(X_{i},adP_{i})^{\Gamma}\to\Omega^{1}(X_{i},adP_{i})^{\Gamma}$
by
$Q_{i}^{\Gamma}(\xi):=\beta_{i}R_{i}^{\Gamma}\gamma_{i}(\xi),$
where $\beta_{i},\gamma_{i}$ are the cut-off functions shown in Figure 6:
$\beta_{1}$ varies on $(1,\delta)\times S^{3}$, $\beta_{2}$ varies on
$(-\delta,-1)\times S^{3}$ and $\gamma_{i}$ vary on $(-1,1)\times S^{3}$. We
can choose $\beta_{i}$ such that
$\left|\displaystyle\frac{\partial\beta_{i}}{\partial
t}\right|<\displaystyle\frac{2}{\delta}$ pointwise, then
$||\nabla\beta_{i}||_{L^{4}}\leq
4\pi\left(\int_{1}^{\delta}\frac{2^{4}}{\delta^{4}}dt\right)^{1/4}<64\pi\delta^{-3/4}.$
(3.11)
Figure 6:
We can choose $\gamma_{i}$ such that $\gamma_{1}+\gamma_{2}=1$ on
$\Omega_{1}\\#_{f}\Omega_{2}$ where $f$ is defined in (3.2).
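As a sanity check, the displayed inequality in (3.11) can be verified numerically; a short Python sketch (the values of $\delta$ are arbitrary test data):

```python
import numpy as np

def lhs(delta):
    # 4*pi * ( integral_1^delta (2/delta)^4 dt )^{1/4}
    integral = (2.0 / delta) ** 4 * (delta - 1.0)
    return 4 * np.pi * integral ** 0.25

def rhs(delta):
    # the stated bound 64*pi*delta^{-3/4}
    return 64 * np.pi * delta ** (-0.75)

# The inequality holds for every delta > 1, with plenty of room:
for delta in [2.0, 5.0, 10.0, 100.0, 1e4]:
    assert lhs(delta) < rhs(delta)
print("bound (3.11) verified numerically")
```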
Now we want to extend the operators $Q_{i}^{\Gamma}$ to
$X=X_{1}\\#_{\lambda}X_{2}$. Firstly, extend $\beta_{i},\gamma_{i}$ to $X$ in
the obvious way. It is worth mentioning that after the extension
$\gamma_{1}+\gamma_{2}=1$ on $X$. Secondly, for any
$\xi\in\Omega^{2,+}(X,adP)^{\Gamma}$, $\gamma_{i}\xi$ is supported on
$X^{\prime}_{i}$, thus $R_{i}^{\Gamma}\gamma_{i}\xi$ makes sense. Finally,
extend $\beta_{i}R_{i}^{\Gamma}\gamma_{i}(\xi)$ to the whole $X$. Therefore
$Q_{i}^{\Gamma}$ can be treated as an operator:
$Q_{i}^{\Gamma}:\Omega^{2,+}(X,adP)^{\Gamma}\to\Omega^{1}(X,adP)^{\Gamma}.$
Define
$Q^{\Gamma}:=Q_{1}^{\Gamma}+Q_{2}^{\Gamma}:\Omega^{2,+}(X,adP)^{\Gamma}\to\Omega^{1}(X,adP)^{\Gamma}.$
###### Lemma 3.9.
With the definitions above, for all $\xi\in\Omega^{2,+}(X,adP)^{\Gamma}$ we have
$||d_{A^{\prime}}^{+}Q^{\Gamma}(\xi)-\xi||_{L^{2}}\leq\text{const.}(b^{2}+\delta^{-3/4})||\xi||_{L^{2}}.$
###### Proof.
$\displaystyle||d_{A^{\prime}}^{+}Q^{\Gamma}(\xi)-\xi||_{L^{2}}$
$\displaystyle=$
$\displaystyle||d_{A^{\prime}}^{+}(Q_{1}^{\Gamma}(\xi)+Q_{2}^{\Gamma}(\xi))-\gamma_{1}\xi-\gamma_{2}\xi||_{L^{2}}$
$\displaystyle=$
$\displaystyle||d_{A^{\prime}_{1}}^{+}Q_{1}^{\Gamma}(\xi)+d_{A^{\prime}_{2}}^{+}Q_{2}^{\Gamma}(\xi)-\gamma_{1}\xi-\gamma_{2}\xi||_{L^{2}}$
$\displaystyle\leq$
$\displaystyle||d_{A^{\prime}_{1}}^{+}Q_{1}^{\Gamma}(\xi)-\gamma_{1}\xi||_{L^{2}}+||d_{A^{\prime}_{2}}^{+}Q_{2}^{\Gamma}(\xi)-\gamma_{2}\xi||_{L^{2}}.$
Suppose $A^{\prime}_{i}=A_{i}+a_{i}$, then
$\displaystyle d_{A^{\prime}_{i}}^{+}Q_{i}^{\Gamma}\xi$ $\displaystyle=$
$\displaystyle
d_{A_{i}}^{+}\beta_{i}R_{i}^{\Gamma}\gamma_{i}\xi+[a_{i},\beta_{i}R_{i}^{\Gamma}\gamma_{i}\xi]^{+}$
$\displaystyle=$
$\displaystyle\beta_{i}d_{A_{i}}^{+}R_{i}^{\Gamma}\gamma_{i}\xi+\nabla\beta_{i}R_{i}^{\Gamma}\gamma_{i}\xi+[\beta_{i}a_{i},R_{i}^{\Gamma}\gamma_{i}\xi]^{+}.$
The three terms on the right hand side have the following estimates.
(i). $\beta_{i}d_{A_{i}}^{+}R_{i}^{\Gamma}\gamma_{i}\xi=\beta_{i}\gamma_{i}\xi=\gamma_{i}\xi$.
(ii). $||\nabla\beta_{i}R_{i}^{\Gamma}\gamma_{i}\xi||_{L^{2}}\leq||\nabla\beta_{i}||_{L^{4}}||R_{i}^{\Gamma}\gamma_{i}\xi||_{L^{4}}\leq\text{const.}\delta^{-3/4}||\xi||_{L^{2}}$
by the Sobolev multiplication theorem, (3.10) and (3.11).
(iii). $||[\beta_{i}a_{i},R_{i}^{\Gamma}\gamma_{i}\xi]^{+}||_{L^{2}}\leq\text{const.}||a_{i}||_{L^{4}}||R_{i}^{\Gamma}\gamma_{i}\xi||_{L^{4}}\leq\text{const.}b^{2}||\xi||_{L^{2}}$
by the Sobolev multiplication theorem, (3.4) and (3.10).
Therefore
$||d_{A^{\prime}_{i}}^{+}Q_{i}^{\Gamma}(\xi)-\gamma_{i}\xi||_{L^{2}}\leq\text{const.}(b^{2}+\delta^{-3/4})||\xi||_{L^{2}}$
and the result follows. ∎
The result of Lemma 3.9 means that $Q^{\Gamma}$ is almost a right inverse of
$d_{A^{\prime}}^{+}$. Next we show there is a right inverse $R^{\Gamma}$ of
$d_{A^{\prime}}^{+}$.
By Lemma 3.9, we can choose $\delta$ large enough and $b$ small enough so that
$||d_{A^{\prime}}^{+}Q^{\Gamma}(\xi)-\xi||_{L^{2}}\leq 2/3||\xi||_{L^{2}}$,
which implies
$1/3||\xi||_{L^{2}}\leq||d_{A^{\prime}}^{+}Q^{\Gamma}(\xi)||_{L^{2}}\leq
5/3||\xi||_{L^{2}}.$
Then $d_{A^{\prime}}^{+}Q^{\Gamma}$ is invertible and
$1/3||(d_{A^{\prime}}^{+}Q^{\Gamma})^{-1}(\xi)||_{L^{2}}\leq||\xi||_{L^{2}}.$
(3.12)
Define $R^{\Gamma}:=Q^{\Gamma}(d_{A^{\prime}}^{+}Q^{\Gamma})^{-1}$, then it is
easy to see that $R^{\Gamma}$ is the right inverse of $d_{A^{\prime}}^{+}$.
Note that $R^{\Gamma}$ depends on the gluing parameter $\varphi$, so we denote
the operator by $R^{\Gamma}_{\varphi}$ when the gluing parameter is not
contextually clear.
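The invertibility of $d_{A^{\prime}}^{+}Q^{\Gamma}$ and the bound (3.12) are instances of the standard Neumann-series fact: an operator $M$ with $||M\xi-\xi||\leq\frac{2}{3}||\xi||$ is invertible with $||M^{-1}||\leq 3$. A finite-dimensional sketch in Python (the matrix is arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.standard_normal((4, 4))
E *= 2.0 / (3.0 * np.linalg.norm(E, 2))    # scale so the operator norm of E is 2/3
M = np.eye(4) - E                           # then ||M xi - xi|| <= (2/3)||xi||

# M is invertible, with inverse given by the Neumann series sum_k E^k
M_inv = np.linalg.inv(M)
series = sum(np.linalg.matrix_power(E, k) for k in range(60))
assert np.allclose(M_inv, series)

# the bound ||M^{-1} xi|| <= 3 ||xi||, i.e. (1/3)||M^{-1} xi|| <= ||xi||
assert np.linalg.norm(M_inv, 2) <= 3.0 + 1e-9
print("Neumann series inverse verified")
```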
$R^{\Gamma}$ has the following good estimate:
$\displaystyle||R^{\Gamma}\xi||_{L^{4}}$ $\displaystyle=$
$\displaystyle||(Q_{1}^{\Gamma}+Q_{2}^{\Gamma})(d_{A^{\prime}}^{+}Q^{\Gamma})^{-1}(\xi)||_{L^{4}}$
$\displaystyle\leq$
$\displaystyle||Q_{1}^{\Gamma}(d_{A^{\prime}}^{+}Q^{\Gamma})^{-1}(\xi)||_{L^{4}}+||Q_{2}^{\Gamma}(d_{A^{\prime}}^{+}Q^{\Gamma})^{-1}(\xi)||_{L^{4}}$
$\displaystyle\leq$
$\displaystyle||R_{1}^{\Gamma}\gamma_{1}(d_{A^{\prime}}^{+}Q^{\Gamma})^{-1}(\xi)||_{L^{4}}+||R_{2}^{\Gamma}\gamma_{2}(d_{A^{\prime}}^{+}Q^{\Gamma})^{-1}(\xi)||_{L^{4}}$
(by (3.10)) $\displaystyle\leq$
$\displaystyle\text{const.}||\gamma_{1}(d_{A^{\prime}}^{+}Q^{\Gamma})^{-1}(\xi)||_{L^{2}}+\text{const.}||\gamma_{2}(d_{A^{\prime}}^{+}Q^{\Gamma})^{-1}(\xi)||_{L^{2}}$
$\displaystyle\leq$
$\displaystyle\text{const.}||(d_{A^{\prime}}^{+}Q^{\Gamma})^{-1}(\xi)||_{L^{2}}$
(by (3.12)) $\displaystyle\leq$ $\displaystyle\text{const.}||\xi||_{L^{2}}.$
(3.13)
Then we have
$\displaystyle\begin{aligned}&||(R^{\Gamma}\xi_{1}\wedge R^{\Gamma}\xi_{1})^{+}-(R^{\Gamma}\xi_{2}\wedge R^{\Gamma}\xi_{2})^{+}||_{L^{2}}\\ &\quad\leq||R^{\Gamma}\xi_{1}\wedge R^{\Gamma}\xi_{1}-R^{\Gamma}\xi_{2}\wedge R^{\Gamma}\xi_{2}||_{L^{2}}\\ &\quad=\frac{1}{2}||(R^{\Gamma}\xi_{1}+R^{\Gamma}\xi_{2})\wedge(R^{\Gamma}\xi_{1}-R^{\Gamma}\xi_{2})+(R^{\Gamma}\xi_{1}-R^{\Gamma}\xi_{2})\wedge(R^{\Gamma}\xi_{1}+R^{\Gamma}\xi_{2})||_{L^{2}}\\ &\quad\leq\text{const.}||R^{\Gamma}\xi_{1}-R^{\Gamma}\xi_{2}||_{L^{4}}||R^{\Gamma}\xi_{1}+R^{\Gamma}\xi_{2}||_{L^{4}}\\ &\quad\leq\text{const.}||\xi_{1}-\xi_{2}||_{L^{2}}(||\xi_{1}||_{L^{2}}+||\xi_{2}||_{L^{2}}).&&\text{(by (3.13))}\end{aligned}$
(3.14)
Define the operator $T:\xi\mapsto-F^{+}(A^{\prime})-(R^{\Gamma}\xi\wedge
R^{\Gamma}\xi)^{+}$; then solving equation (3.9) amounts to finding a fixed
point of $T$. We apply the contraction mapping theorem to show that $T$ has a
unique fixed point. There are two things to check:
1.
There is an $r>0$ such that for small enough $b$, $T$ is a map from the ball
$B(r)\subset\Omega^{2,+}_{L^{2}}(X,adP)$ to itself. This follows from
$\displaystyle||\xi||_{L^{2}}<r~{}\Rightarrow~{}||T\xi||_{L^{2}}\leq||F^{+}(A^{\prime})||_{L^{2}}+||R^{\Gamma}\xi\wedge R^{\Gamma}\xi||_{L^{2}}\leq\text{const.}b^{2}+||R^{\Gamma}\xi||^{2}_{L^{4}}\leq\text{const.}b^{2}+\text{const.}||\xi||^{2}_{L^{2}}\leq\text{const.}(b^{2}+r^{2})<r\quad(\text{for small }b,r\text{ with }b\ll r).$
2.
$T$ is a contraction for sufficiently small $r$, i.e., there exists
$\lambda<1$ such that
$||T\xi_{1}-T\xi_{2}||_{L^{2}}\leq\lambda||\xi_{1}-\xi_{2}||_{L^{2}}~{}~{}\forall\xi_{1},\xi_{2}\in B(r).$
This follows from (3.14): for $\xi_{1},\xi_{2}\in B(r)$ it gives
$||T\xi_{1}-T\xi_{2}||_{L^{2}}\leq 2\,\text{const.}r||\xi_{1}-\xi_{2}||_{L^{2}}$,
and $2\,\text{const.}r<1$ for $r$ small.
Now we have proved that there exists a unique solution to equation (3.9).
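As a sanity check (our summary of the estimates above, not spelled out in the source), the fixed point $\xi$ inherits a quantitative bound: since $\xi=T\xi$,
$||\xi||_{L^{2}}\leq||F^{+}(A^{\prime})||_{L^{2}}+\text{const.}||\xi||_{L^{2}}^{2}\leq\text{const.}b^{2}+\tfrac{1}{2}||\xi||_{L^{2}}\quad(\text{for }r\text{ small}),$
so $||\xi||_{L^{2}}\leq\text{const.}b^{2}$, and hence $||R^{\Gamma}\xi||_{L^{4}}\leq\text{const.}||\xi||_{L^{2}}\leq\text{const.}b^{2}$ by (3.13). This is the source of the bound on $||a_{\varphi}||_{L^{4}}$ in Theorem 3.10 below.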
###### Theorem 3.10.
Suppose $A_{1}$, $A_{2}$ are $\Gamma$-invariant ASD connections on $X_{1}$,
$X_{2}$ respectively with $H^{2}_{A_{i}}=0$. Let $\lambda,T,\delta$ be
positive real numbers such that $b:=\lambda e^{T}>2\lambda e^{\delta}$. Then,
for $\delta$ large enough and $b$ small enough, for any
$(\Gamma,G)$-equivariant gluing parameter $\varphi\in
Hom_{(G,\Gamma)}(P_{1}|_{x_{1}},P_{2}|_{x_{2}})$, there exists
$a_{\varphi}\in\Omega^{1}(X,adP)^{\Gamma}$ with
$||a_{\varphi}||_{L^{4}}\leq\text{const.}b^{2}$ such that
$A^{\prime}(\varphi)+a_{\varphi}$ is a $\Gamma$-invariant ASD connection on
$X$. Moreover, if $\varphi_{1},\varphi_{2}$ are in the same orbit of
$\Gamma_{A_{1}}\times\Gamma_{A_{2}}$ on $Gl$, then
$A^{\prime}(\varphi_{1})+a_{\varphi_{1}},A^{\prime}(\varphi_{2})+a_{\varphi_{2}}$
are gauge equivalent.
###### Proof.
We only need to prove the last statement.
If $\varphi_{1},\varphi_{2}$ are in the same orbit of
$\Gamma_{A_{1}}\times\Gamma_{A_{2}}$ on $Gl$, then
$A^{\prime}(\varphi_{1}),A^{\prime}(\varphi_{2})$ are gauge equivalent. For
some gauge transformation $\sigma$ we have
$\sigma^{*}A^{\prime}(\varphi_{1})=A^{\prime}(\varphi_{2})$. Applying
$\sigma^{*}$ on both sides of the following formula
$F^{+}_{A^{\prime}(\varphi_{1})}+\xi(\varphi_{1})+(R^{\Gamma}_{\varphi_{1}}\xi(\varphi_{1})\wedge
R^{\Gamma}_{\varphi_{1}}\xi(\varphi_{1}))^{+}=0$
gives
$\sigma^{*}F^{+}_{A^{\prime}(\varphi_{1})}+\sigma^{*}\xi(\varphi_{1})+\sigma^{*}(R^{\Gamma}_{\varphi_{1}}\xi(\varphi_{1})\wedge
R^{\Gamma}_{\varphi_{1}}\xi(\varphi_{1}))^{+}=0.$ (3.15)
Since $\sigma^{*}$ and $d_{A^{\prime}}^{+}$ commute and
$R^{\Gamma}=Q^{\Gamma}(d_{A^{\prime}}^{+}Q^{\Gamma})^{-1}$, $\sigma^{*}$ and
$R^{\Gamma}$ also commute. Hence, using
$\sigma^{*}A^{\prime}(\varphi_{1})=A^{\prime}(\varphi_{2})$, (3.15) becomes
$F^{+}_{A^{\prime}(\varphi_{2})}+\sigma^{*}\xi(\varphi_{1})+(R^{\Gamma}_{\varphi_{2}}\sigma^{*}\xi(\varphi_{1})\wedge R^{\Gamma}_{\varphi_{2}}\sigma^{*}\xi(\varphi_{1}))^{+}=0.$
This means $\sigma^{*}\xi(\varphi_{1})$ and $\xi(\varphi_{2})$ are both
solutions to
$F^{+}_{A^{\prime}(\varphi_{2})}+\xi+(R^{\Gamma}_{\varphi_{2}}\xi\wedge R^{\Gamma}_{\varphi_{2}}\xi)^{+}=0$,
which implies $\sigma^{*}\xi(\varphi_{1})=\xi(\varphi_{2})$ by uniqueness of
the solution. The following deduction completes the proof.
$\sigma^{*}\xi(\varphi_{1})=\xi(\varphi_{2})~{}~{}\Rightarrow~{}~{}\sigma^{*}R^{\Gamma}_{\varphi_{1}}\xi(\varphi_{1})=R^{\Gamma}_{\varphi_{2}}\sigma^{*}\xi(\varphi_{1})=R^{\Gamma}_{\varphi_{2}}\xi(\varphi_{2})$
$\Rightarrow~{}~{}\sigma^{*}a_{\varphi_{1}}=a_{\varphi_{2}}~{}~{}\Rightarrow~{}~{}\sigma^{*}(A^{\prime}(\varphi_{1})+a_{\varphi_{1}})=A^{\prime}(\varphi_{2})+a_{\varphi_{2}}.$
∎
## 4 Bubble tree compactification for 4-manifolds
In this section we sketch the construction of bubble tree compactification for
4-manifolds. Details can be found in [C02].
###### Definition 4.1.
1. (1).
A rooted tree $(T,v_{0})$ is a triple $(V_{T},E_{T},v_{0})$ where $V_{T}$ is a
set of discrete points (called vertices), $v_{0}\in V_{T}$ (called the root),
and $E_{T}$ is a set of segments with vertices as end points (called edges)
such that any two vertices are connected by exactly one path.
2. (2).
A vertex $v_{1}$ is called an ancestor of a vertex $v_{2}$ if $v_{1}$ lies on
the path from the root $v_{0}$ to $v_{2}$; in this case $v_{2}$ is called a
descendant of $v_{1}$.
3. (3).
A vertex $v_{1}$ is called the parent of a vertex $v_{2}$ if $v_{1}$ is an
ancestor of $v_{2}$ and there is an edge between them. $v_{2}$ is called a
child of $v_{1}$.
4. (4).
A vertex $v$ is called a leaf if it has no child.
5. (5).
A subtree of $T$ (with root $v$ for some $v\in V_{T}$) is a tree, denoted by
$(t(v),v)$, such that $V_{t(v)}\subset V_{T}$ is the union of $\\{v\\}$ and
the set of all descendants of $v$ and $E_{t(v)}$ is the set of edges
connecting vertices in $V_{t(v)}$.
###### Definition 4.2.
A bubble tree is a weighted-rooted-tree $(T,v_{0},w)$ (or briefly denoted by
$T$) satisfying
1. $\bullet$
$w$, called the weight, is a map from $V_{T}$ to $\mathbb{Z}_{\geq 0}$.
2. $\bullet$
For any non-root vertex $v$, either $w(v)\neq 0$ or
$\\#child(v)\geq 2\text{ and }W(v_{i}):=\sum_{v^{\prime}\in
V_{t(v_{i})}}w(v^{\prime})>0\text{ for all }v_{i}\in child(v).$
Here $W(v_{i})$ is called the total weight of the vertex $v_{i}$ or the total
weight of the tree $t(v_{i})$.
A vertex in a bubble tree is called a ghost vertex if it is non-root and has
weight 0. A bubble tree is called a ghost bubble tree if it has a ghost
vertex. Denote
$\displaystyle\mathcal{T}_{k}$ $\displaystyle:=$
$\displaystyle\\{(T,v_{0},w)~{}|~{}W(v_{0})=k\\}.$
Note that $\mathcal{T}_{k}$ is a finite set of bubble trees with total weight
$k$. Here is a bubble tree example and a non-bubble-tree example.
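Definition 4.2 is purely combinatorial, so it can be verified mechanically. The following Python sketch (our own illustration, not part of the paper's construction; the encoding of a tree by a children map and a weight map is an assumption of the sketch) checks the bubble tree condition and computes the total weights $W(v)$:

```python
# Illustration of Definition 4.2 (not from the paper): a weighted rooted tree
# is encoded by `children` (vertex -> list of children) and `w` (vertex -> weight).

def total_weight(children, w, v):
    """W(v): the sum of weights over the subtree t(v)."""
    return w[v] + sum(total_weight(children, w, c) for c in children.get(v, []))

def is_bubble_tree(children, w, root):
    """Check: every non-root vertex v has w(v) != 0, or has >= 2 children
    whose subtrees all have positive total weight."""
    def ok(v):
        if v != root and w[v] == 0:
            kids = children.get(v, [])
            if len(kids) < 2 or any(total_weight(children, w, c) == 0 for c in kids):
                return False
        return all(ok(c) for c in children.get(v, []))
    return ok(root)

# A bubble tree in T_3: the root 0 carries weight 1; the ghost vertex 1 has
# two children of positive total weight, as Definition 4.2 requires.
children = {0: [1], 1: [2, 3]}
w = {0: 1, 1: 0, 2: 1, 3: 1}
assert total_weight(children, w, 0) == 3
assert is_bubble_tree(children, w, 0)

# Not a bubble tree: a weight-0 non-root vertex with only one child.
assert not is_bubble_tree({0: [1], 1: [2]}, {0: 1, 1: 0, 2: 1}, 0)
```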
###### Examples 4.3.
Vertices and edges in a bubble tree have geometric meaning in our situation.
The root vertex corresponds to the base manifold $X$, each non-root vertex
corresponds to a bubble $S^{4}$, and each edge corresponds to a gluing point
on $X$ or an $S^{4}$. For the bubble tree in Example 4.3, the associated
bubble tree space can be drawn as in Figure 7.
Figure 7: An example of bubble tree
By Definition 4.2, any subtree of a bubble tree $T$ is again a bubble tree.
For any $v\in V_{T}$ with children $\\{v_{1},\dots,v_{n}\\}$, we define the
tuple of its subtrees to be $\mathfrak{m}_{v}:=(t(v_{1}),\dots,t(v_{n}))$.
Let $S_{\mathfrak{m}_{v}}$ be the subgroup of the permutation group $S_{n}$
whose elements fix $\mathfrak{m}_{v}$.
###### Definition 4.4.
Suppose $(T,v_{0},w_{T})$ is a bubble tree, $e$ is an edge of $T$,
$v_{1},v_{2}$ are the two vertices connected by $e$ such that $v_{1}$ is the
parent of $v_{2}$. A bubble tree $(T^{\prime},v_{0},w_{T^{\prime}})$ is the
contraction of $T$ at $e$ if
1. $\bullet$
$V_{T^{\prime}}=V_{T}\setminus\\{v_{2}\\}$,
$E_{T^{\prime}}=E_{T}\setminus\\{e\\}$.
2. $\bullet$
$w_{T^{\prime}}(v_{1})=w_{T}(v_{1})+w_{T}(v_{2})$,
$child_{T^{\prime}}(v_{1})=child_{T}(v_{1})\cup child_{T}(v_{2})$.
$T^{\prime}$ is also denoted by $T\setminus\\{v_{2}\\}$.
The following lemma is obvious.
###### Lemma 4.5.
There is a partial order on $\mathcal{T}_{k}$: $T<T^{\prime}$ if $T^{\prime}$
is the contraction of $T$ at some edges of $T$.
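The contraction of Definition 4.4 is likewise mechanical. Here is a small Python sketch (ours, not from the paper; it encodes a weighted rooted tree by a hypothetical children map and weight map): contraction removes $v_{2}$, adds its weight to its parent $v_{1}$, and reattaches its children, so the total weight is preserved and contraction stays within $\mathcal{T}_{k}$.

```python
# Illustration of Definition 4.4 (not from the paper): contract a weighted
# rooted tree at the edge joining v1 (parent) and v2 (child).

def contract(children, w, v1, v2):
    """Return the contraction of T at the edge (v1, v2): remove v2,
    add w(v2) to w(v1), and reattach v2's children to v1."""
    children = {v: list(cs) for v, cs in children.items()}  # copy
    w = dict(w)
    children[v1] = [c for c in children.get(v1, []) if c != v2] + children.pop(v2, [])
    w[v1] += w.pop(v2)
    return children, w

children = {0: [1], 1: [2, 3]}
w = {0: 1, 1: 0, 2: 1, 3: 1}
children2, w2 = contract(children, w, 0, 1)
assert children2 == {0: [2, 3]}
assert w2 == {0: 1, 2: 1, 3: 1}
# Total weight is preserved, as used implicitly in Lemma 4.5.
assert sum(w2.values()) == sum(w.values())
```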
###### Definition 4.6.
Given a tuple $\mathfrak{m}=(k_{1},\dots,k_{n})\in\mathbb{Z}^{n}_{+}$, a
generalised instanton on $S^{4}$ associated to $\mathfrak{m}$ is an element
$([A],(p_{1},\dots,p_{n}))\in\mathcal{M}_{k_{0}}(S^{4})\times(S^{4}\setminus\\{\infty\\})^{n}\setminus\triangle=:\mathcal{M}_{k_{0},\mathfrak{m}}(S^{4}),$
where $k_{0}\in\mathbb{Z}_{\geq 0}$, $\triangle$ is the big diagonal of
$(S^{4}\setminus\\{\infty\\})^{n}$, i.e. the complement of the subset of
$(S^{4}\setminus\\{\infty\\})^{n}$ whose elements are distinct $n$-tuples.
The $p_{i}$ are called mass points and $k_{i}$ is called the weight at $p_{i}$.
The space of generalised instantons on $S^{4}$ is denoted by
$\mathcal{M}_{k_{0},\mathfrak{m}}$. The space of balanced generalised
instantons is defined to be
$\mathcal{M}_{k_{0},\mathfrak{m}}^{b}:=\mathcal{M}_{k_{0},\mathfrak{m}}/H,$
where $H\subset Aut(\mathbb{R}^{4})$ is the subgroup generated by translations
and dilations in $\mathbb{R}^{4}$. Another equivalent way to define
$\mathcal{M}_{k_{0},\mathfrak{m}}^{b}$ is
$\mathcal{M}_{k_{0},\mathfrak{m}}^{b}=\left\\{([A],(p_{1},\dots,p_{n}))\in\mathcal{M}_{k_{0},\mathfrak{m}}~{}\Big{|}~{}m(A)\cdot k_{0}+p_{1}k_{1}+\dots+p_{n}k_{n}=0\text{ and one of (a), (b) below holds}\right\\}$
where
$m(A)=\displaystyle\frac{1}{||F_{A}||^{2}}\displaystyle\int_{\mathbb{R}^{4}}y|F_{A}|^{2}dy$
is the mass center of $A$ and the $(a),(b)$ conditions are
1. (a).
The mass of a subdomain is fixed and all the mass points $p_{i}$ are on the
complement of the subdomain, i.e.,
$\displaystyle\int_{\mathbb{R}^{4}\setminus
B(1)}|F_{A}|^{2}=\hbar~{},~{}~{}p_{i}\in\overline{B(1)}~{}\forall i=1,\dots,n$
where $\hbar$ is a fixed constant less than $4\pi^{2}$.
2. (b).
The mass of a subdomain is less than $\hbar$ and all the mass points $p_{i}$
are on the complement of the subdomain. Moreover, at least one mass point lies
on the boundary of the subdomain, i.e.,
$\displaystyle\int_{\mathbb{R}^{4}\setminus
B(1)}|F_{A}|^{2}<\hbar~{},~{}~{}p_{i}\in\overline{B(1)}~{},~{}~{}\\#\\{p_{i}\in\partial\overline{B(1)}\\}>0.$
We will mainly use the second description of
$\mathcal{M}_{k_{0},\mathfrak{m}}^{b}$.
###### Definition 4.7.
Given a bubble tree $(T,v_{0},w)$, the space of bubble tree instantons
associated to $T$ is defined to be
$\mathcal{S}_{T}(X):=\left(\mathcal{M}_{w(v_{0})}(X)\times\displaystyle\prod^{fiber~{}product}_{v_{i}\in child(v_{0})}P_{v_{i}}(X)\right)\Big{/}S_{\mathfrak{m}_{v_{0}}},$
(4.1)
where
1. $\bullet$
$P_{v_{i}}(X)$ is the pull-back bundle of
$\left(Fr(X)\times_{SO(4)}\mathcal{S}^{b}_{t(v_{i})}\right)\rightarrow X$ by
$X^{n(v_{0})}\setminus\triangle\xrightarrow{i^{th}~{}proj}X$, where
$\triangle$ is the big diagonal and $n(v)=\\#child(v)$.
2. $\bullet$
$\mathcal{S}^{b}_{t(v)}=\mathcal{M}^{b}_{w(v)}(S^{4})$ if $t(v)$ is the tree
with only one vertex $v$.
3. $\bullet$
$\mathcal{S}^{b}_{t(v)}$ is a subset of
$\mathcal{S}_{t(v)}(S^{4}\setminus\\{\infty\\})$ if $t(v)$ is not the tree
with only one vertex. $\mathcal{S}^{b}_{t(v)}$ consists of those elements in
$\mathcal{S}_{t(v)}(S^{4}\setminus\\{\infty\\})$ such that on each bubble the
induced generalised instanton is balanced.
Elements in $\mathcal{S}^{b}_{t(v)}$ are called balanced bubble tree
instantons. In the definition of a bubble tree instanton (4.1), if we use
$\mathcal{S}_{t(v)}$ instead of $\mathcal{S}^{b}_{t(v)}$ (i.e., the data on
each bubble need not be balanced), then we get a larger space, denoted by
$\widetilde{\mathcal{S}}_{T}(X)$. Given a bubble tree instanton, its
underlying connection on $X$ is called the background connection.
###### Lemma 4.8.
$\mathcal{S}_{T}(X)$ is a smooth manifold.
###### Proof.
By definition, $\displaystyle\prod^{fiber~{}product}_{v_{i}\in
child(v_{0})}P_{v_{i}}(X)$ is a fibre bundle over
$X^{n(v_{0})}\setminus\triangle$ with fibre $\displaystyle\prod_{v_{i}\in
child(v_{0})}S^{b}_{t(v_{i})}$, which is a smooth manifold by Lemma 3.7 of
[C02]. Since the $S_{\mathfrak{m}_{v_{0}}}$-action on
$X^{n(v_{0})}\setminus\triangle$ is free, the action on
$\displaystyle\prod^{fiber~{}product}_{v_{i}\in child(v_{0})}P_{v_{i}}(X)$ is
also free, therefore $\mathcal{S}_{T}(X)$ is a smooth manifold. ∎
###### Remark 4.9.
In Definition 4.6, if we remove the “ASD” condition, that is, consider
$\mathcal{B}$ rather than $\mathcal{M}$, then we get a space of generalised
connections and balanced generalised connections:
$\mathcal{B}_{k_{0},\mathfrak{m}}:=\mathcal{B}_{k_{0}}\times\Big{(}(S^{4}\setminus\\{\infty\\})^{n}\setminus\triangle\Big{)},$
$\mathcal{B}_{k_{0},\mathfrak{m}}^{b}:=\left\\{\big{(}[A],(x_{1},\dots,x_{n})\big{)}\in\mathcal{B}_{k_{0},\mathfrak{m}}~{}\Big{|}~{}m([A])e([A])+k_{1}x_{1}+\dots+k_{n}x_{n}=0\text{ and one of (a), (b) holds}\right\\},$
where $\mathcal{B}_{k_{0}}\subset\mathcal{B}$ is an open neighbourhood of
$\mathcal{M}\subset\mathcal{B}$ with elements having energy in an interval
$(k_{0}-\epsilon,k_{0}+\epsilon)$ for some fixed constant $\epsilon$ (without
loss of generality we take the interval to be $(k_{0}/2,3k_{0}/2)$), and
$e([A])=\frac{1}{8\pi^{2}}\int_{\mathbb{R}^{4}}|F_{A}|^{2}dvol$
is the energy of $A$. Since the $H$-action preserves $e([A])$,
$\mathcal{B}_{k_{0},\mathfrak{m}}^{b}$ is well-defined. Here we use
$\mathcal{B}_{k_{0}}$ rather than the whole $\mathcal{B}$, since if a
connection $A$ has energy less than $\hbar$ and $\mathfrak{m}$ is empty, then
neither condition $(a)$ nor $(b)$ can be satisfied. We shall see that
$\mathcal{B}_{k_{0}}$ is enough for later applications.
Correspondingly, if we remove the “ASD” condition in Definition 4.7, then we
get a space of bubble tree connections $\mathcal{B}_{T}(X)$. Moreover, if the
data on each bubble need not be balanced, we get a larger space
$\widetilde{\mathcal{B}}_{T}(X)$.
In summary, for elements in $\mathcal{S}_{T}(X)$, a connection on each bubble
needs to be “ASD” and “balanced”; for elements in
$\widetilde{\mathcal{S}}_{T}(X)$, a connection on each bubble needs to be
“ASD”, not necessarily “balanced”; for elements in $\mathcal{B}_{T}(X)$, a
connection on each bubble needs to be “balanced”, not necessarily “ASD”; for
elements in $\widetilde{\mathcal{B}}_{T}(X)$, a connection on each bubble need
not be “ASD” or “balanced”.
Bubble tree instantons play a similar role in bubble tree compactification as
ideal connections in Uhlenbeck compactification but with more information on
the description of ASD connections at each bubbling point: as the limit of a
sequence of ASD connections with energy blow-up, ideal connections only tell us
where the energy blows up, while bubble tree instantons tell us where and how
the energy blows up. A bubble tree instanton will be written as
$\left\\{\left[[A_{v}],[x_{1},\dots,x_{n(v)}]\right]\right\\}_{v\in V_{T}},$
where
$[x_{1},\dots,x_{n(v)}]\in(Y^{n(v)}\setminus\triangle)/\sim~{},~{}~{}Y=\begin{cases}X~{}~{}v=v_{0}\\\ S^{4}\setminus\\{\infty\\}~{}~{}v\neq v_{0}\end{cases},$
such that when $v\neq v_{0}$, $\left[[A_{v}],[x_{1},\dots,x_{n(v)}]\right]$ is
balanced. Consider the bubble tree in Example 4.3, Figure 8 gives an
associated bubble tree instanton.
Figure 8: An example of a bubble tree instanton
We denote
$\displaystyle\overline{\mathcal{M}}_{k}(X):=\bigcup_{T\in\mathcal{T}_{k}}\mathcal{S}_{T}(X),\qquad S_{k}(X):=\bigcup_{T\text{ ghost bubble tree in }\mathcal{T}_{k}}\mathcal{S}_{T}(X).$
Given an element in $\mathcal{S}_{T}(X)$, to glue the background connection
with all connections on each bubble sphere together, we need gluing data
$(\rho,\lambda)\in Gl\times\mathbb{R}^{+}$ for each gluing point, where $Gl$
is the space of gluing parameters defined by taking $\Gamma$ to be the trivial
group in $Gl^{\Gamma}$. ($Gl$ is also defined on page 286 of [DK90].)
Motivated by this, we define the gluing bundle over $\mathcal{S}_{T}(X)$ with
fibre $Gl_{T}=\displaystyle\prod_{e\in edge(T)}Gl\times\mathbb{R}^{+}$,
denoted by:
$\textbf{GL}_{T}(X)\to\mathcal{S}_{T}(X).$
Here we give a description of $\textbf{GL}_{T}(X)$, see page 32 of [C02] for
details. Points in $\textbf{GL}_{T}(X)$ are of the following inductive form:
(here we use notations from page 335 of [T88])
$e=\left[A_{0},\\{q_{\alpha},e_{\alpha}\\}_{\alpha}\right],~{}e_{\alpha}=\left(\lambda_{\alpha},\left[p_{\alpha},A_{\alpha},\\{q_{\alpha\beta},e_{\alpha\beta}\\}_{\beta}\right]\right),~{}e_{\alpha\beta}=\dots$
where
1. $\bullet$
$A_{0}\in\mathcal{A}^{ASD}(P_{0})$, $P_{0}\to X$ is a $G$-bundle with $c_{2}$
equal to $w(v_{0})$.
2. $\bullet$
$q_{\alpha}\in P_{0}\tilde{\times}Fr(X)$, the index $\alpha$ runs over
$child(v_{0})$, and for different $\alpha,\alpha^{\prime}$,
$q_{\alpha},q_{\alpha^{\prime}}$ are on different fibres of
$P_{0}\tilde{\times}Fr(X)\to X$.
3. $\bullet$
$\lambda_{\alpha}\in\mathbb{R}^{+}$, $p_{\alpha}\in P_{\alpha}|_{\infty}$,
$P_{\alpha}\to S^{4}$ is a $G$-bundle with $c_{2}$ equal to $w(\alpha)$,
$A_{\alpha}\in\mathcal{A}^{ASD}(P_{\alpha})$, $q_{\alpha\beta}\in
P_{\alpha}|_{S^{4}\setminus\infty}$, the index $\beta$ runs over
$child(\alpha)$.
###### Example 4.10.
Consider the bubble tree in Example 4.3, an element in its gluing bundle can
be written as in Figure 9.
Figure 9: A point in $\textbf{GL}_{T}(X)$
Given such an element, we have all the data needed to carry out Taubes'
gluing construction. For $\lambda_{i}$ small enough, define the pre-glued
approximated ASD connection
$\Psi^{\prime}_{T}(e):=A_{0}\\#_{(\rho_{1},\lambda_{1})}A_{1}\\#\dots\\#_{(\rho_{5},\lambda_{5})}A_{5},$
and the corresponding unique ASD connection
$\Psi_{T}(e):=\Psi^{\prime}_{T}(e)+a(\Psi^{\prime}_{T}(e)).$
The image of $\Psi_{T}$ is in $\mathcal{M}_{k}$ where $k$ is the total weight
of $T$. Note that when $T$ is a tree whose root has weight 0 and $X$ is a
space with $b_{2}^{+}>0$, the Atiyah-Hitchin-Singer complex of the trivial
connection on $X$ has nonzero obstruction space $H^{2}$, so the right inverse
of the map $d_{\Psi^{\prime}_{T}(e)}^{+}$ is not defined in general. Therefore
we exclude this case.
Since $G=SU(2)$ or $SO(3)$,
$Gl\times\mathbb{R}^{+}\cong\mathbb{R}^{4}\setminus\\{0\\}$ or
$(\mathbb{R}^{4}\setminus\\{0\\})/\mathbb{Z}_{2}$, therefore
$\textbf{GL}_{T}(X)$ has fibre
$\prod_{e\in E_{T}}(\mathbb{R}^{4}\setminus\\{0\\})\text{ or }\prod_{e\in
E_{T}}(\mathbb{R}^{4}\setminus\\{0\\})/\mathbb{Z}_{2}.$
Therefore $\textbf{GL}_{T}(X)$ can be identified with a vector bundle (up to
$\mathbb{Z}_{2}$-actions) with the zero gluing data removed. We add the zero
gluing data back to $\textbf{GL}_{T}(X)$ to get the whole vector bundle, which
is also denoted by $\textbf{GL}_{T}(X)$. The fibre of $\textbf{GL}_{T}(X)$ is
$Gl_{T}:=\prod_{e\in E_{T}}\mathbb{R}^{4}\text{ or }\prod_{e\in
E_{T}}\mathbb{R}^{4}/\mathbb{Z}_{2}.$
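The identification $Gl\times\mathbb{R}^{+}\cong\mathbb{R}^{4}\setminus\\{0\\}$ used above can be spelled out (a standard observation, our addition): a gluing parameter is a $G$-equivariant isomorphism between two fibres, so after fixing base points $Gl\cong G$ as a $G$-torsor, and
$Gl\times\mathbb{R}^{+}\cong SU(2)\times\mathbb{R}^{+}\cong S^{3}\times\mathbb{R}^{+}\cong\mathbb{R}^{4}\setminus\\{0\\},\qquad(\rho,\lambda)\mapsto\lambda\rho,$
while quotienting by the centre $\mathbb{Z}_{2}\subset SU(2)$ gives $(\mathbb{R}^{4}\setminus\\{0\\})/\mathbb{Z}_{2}$ in the $SO(3)$ case.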
Clearly Taubes' gluing construction can be applied to elements
in $\textbf{GL}_{T}(X)$ with zero gluing parameters. So we can define
$\Psi^{\prime}_{T},\Psi_{T}$ on $\textbf{GL}_{T}(X)$, but the image of
elements with zero gluing parameters may not be in
$\overline{\mathcal{M}}_{k}(X)$ since after gluing a balanced bubble tree
instanton on an $S^{4}$ we get a non-balanced connection in general. To fix
this, define a new map by composing $\Psi_{T}$ with the balanced map ‘$b$’:
$\Psi_{T}^{b}(e):=b\circ\Psi_{T}(e)\in\overline{\mathcal{M}}_{k}(X)$
where $b$ is the map making all generalised connections (defined in Remark
4.9) on all bubbles balanced by translations and dilations (note that if all
gluing parameters in $e$ are nonzero, then $b$ is the identity).
When gluing two connections using different gluing parameters
$\rho_{1},\rho_{2}$, as long as they are in the same orbit of the
$\Gamma_{A_{1}}\times\Gamma_{A_{2}}$-action on $Gl$, they yield the same
element in $\mathcal{M}(X)$. Because of this, $\Psi_{T}$ and $\Psi_{T}^{b}$
may fail to be injective. To fix this, we define a $\Gamma_{T}$-action on
$\textbf{GL}_{T}(X)$.
For an ASD connection on $S^{4}$, the stabiliser is $C(G)$ (centre of $G$) if
it has non-zero energy, and is $G$ if it has zero energy. Take $G=SO(3)$, then
$C(G)=1$. Define
$\Gamma_{T}:=\prod_{v~{}is~{}a~{}ghost~{}vertex}SO(3)_{v}.$
For each ghost vertex $v$, suppose the edge connecting $v$ and its parent
$v_{-1}$ is $e_{-1}$, the edges connecting $v$ and its children
$v_{1},\dots,v_{n}$ are $e_{1},\dots,e_{n}$, then $SO(3)_{v}$ acts on
$\textbf{GL}_{T}(X)$ by acting on gluing parameters associated to
$e_{-1},e_{1},\dots,e_{n}$:
$\forall g\in SO(3),~{}~{}(\rho_{-1},\rho_{1},\dots,\rho_{n})\cdot
g=(g^{-1}\rho_{-1},\rho_{1}g,\dots,\rho_{n}g).$
Here $\rho_{i}$ is the gluing parameter associated to edge $e_{i}$. Note that
the way $SO(3)_{v}$ acts on $\rho_{-1}$ is different from the way it acts on
$\rho_{i}$ for $i=1,\dots,n$. This is because
$\displaystyle\rho_{-1}$ $\displaystyle\in$ $\displaystyle
Hom_{G}(P(S_{v_{-1}}^{4})|_{x_{-1}},P(S_{v}^{4})|_{\infty}),$
$\displaystyle\rho_{i}$ $\displaystyle\in$ $\displaystyle
Hom_{G}(P(S_{v}^{4})|_{x_{i}},P(S_{v_{i}}^{4})|_{\infty}),~{}i=1,\dots,n,$
where $S_{v_{-1}}^{4},S_{v}^{4},S_{v_{i}}^{4}$ are the bubble spheres
corresponding to the vertices $v_{-1},v,v_{i}$ respectively, and $x_{-1}\in
S_{v_{-1}}^{4}\setminus\\{\infty\\},x_{i}\in S_{v}^{4}\setminus\\{\infty\\}$
are the gluing points associated to $e_{-1},e_{i}$ respectively. $SO(3)_{v}$
acts on $\rho_{-1},\rho_{i}$ by acting on the trivial bundle $P(S_{v}^{4})$.
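As a quick consistency check (our addition, not spelled out in [C02]), the formula above does define a right action of $SO(3)_{v}$, since
$\big((\rho_{-1},\rho_{1},\dots,\rho_{n})\cdot g\big)\cdot g^{\prime}=\big((g^{\prime})^{-1}g^{-1}\rho_{-1},\;\rho_{1}gg^{\prime},\dots,\rho_{n}gg^{\prime}\big)=(\rho_{-1},\rho_{1},\dots,\rho_{n})\cdot(gg^{\prime}).$
Moreover, gluing through the ghost vertex composes a map into $P(S_{v}^{4})$ with a map out of it, so the two copies of $g$ cancel; this is consistent with $\Psi_{T}$ descending to the quotient by $\Gamma_{T}$ below.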
This provides a fibre bundle
$\textbf{GL}_{T}(X)/\Gamma_{T}\to\mathcal{S}_{T}(X)$
with a particular section defined by the zero section of $\textbf{GL}_{T}(X)$.
The map $\Psi_{T}$ on a small neighbourhood of the zero section of
$\textbf{GL}_{T}(X)$ induces a well-defined map on a small neighbourhood of
the zero section of $\textbf{GL}_{T}(X)/\Gamma_{T}$, which is also denoted by
$\Psi_{T}$.
Suppose $T^{\prime}$ is the contraction of $T$ at $e_{1},\dots,e_{n}$, define
$Gl_{T,T^{\prime}}:=\left(\prod_{e_{1},\dots,e_{n}}\mathbb{R}^{4}\setminus\\{0\\}\right)\times\\{0\\}\subset
Gl_{T}$
and define a sub-bundle $\textbf{GL}_{T,T^{\prime}}$ of $\textbf{GL}_{T}(X)$
with fibre $Gl_{T,T^{\prime}}$. Denote by $\textbf{GL}_{T}(\epsilon)$ and
$\textbf{GL}_{T,T^{\prime}}(\epsilon)$ the $\epsilon$-neighbourhoods of the
zero-sections.
###### Theorem 4.11.
(Theorem 3.32 of [C02]) For any $T\in\mathcal{T}_{k}$ and any precompact
subset $U$ of $\mathcal{S}_{T}(X)$, there exists $\epsilon$ such that
$\Psi_{T}:\textbf{GL}_{T}(\epsilon)|_{U}\to\bigcup_{T\in\mathcal{T}_{k}}\widetilde{\mathcal{S}}_{T}(X)$
maps $\textbf{GL}_{T,T^{\prime}}(\epsilon)|_{U}/\Gamma_{T}$ to
$\widetilde{\mathcal{S}}_{T^{\prime}}(X)$ diffeomorphically onto its image for
any $T^{\prime}>T$.
###### Proposition 4.12.
For any $T\in\mathcal{T}_{k}$ and $T^{\prime}>T$, let $U,\epsilon$ be as in
Theorem 4.11, then
$\Psi_{T}^{b}=b\circ\Psi_{T}:\textbf{GL}_{T,T^{\prime}}(\epsilon)|_{U}/\Gamma_{T}\xrightarrow{\Psi_{T}}\widetilde{\mathcal{S}}_{T^{\prime}}(X)\xrightarrow{b}\mathcal{S}_{T^{\prime}}(X)$
maps $\textbf{GL}_{T,T^{\prime}}(\epsilon)|_{U}/\Gamma_{T}$ to
$\mathcal{S}_{T^{\prime}}(X)$ diffeomorphically onto its image for any
$T^{\prime}>T$.
###### Proof.
It suffices to find a smooth inverse map of $\Psi_{T}^{b}$ from the image of
$\Psi_{T}^{b}$ to $\textbf{GL}_{T,T^{\prime}}(\epsilon)|_{U}/\Gamma_{T}$.
Recall that $\Psi_{T}^{b}$ is the composition of the following three maps:
$\textbf{GL}_{T,T^{\prime}}(\epsilon)|_{U}/\Gamma_{T}\xrightarrow{\Psi^{\prime}_{T}}im(\Psi^{\prime}_{T})\xrightarrow{a}\widetilde{\mathcal{S}}_{T^{\prime}}(X)\xrightarrow{b}\mathcal{S}_{T^{\prime}}(X),$
where $im(\Psi^{\prime}_{T})\subset\widetilde{\mathcal{B}}_{T^{\prime}}(X)$.
Without loss of generality, assume that $T^{\prime}$ is the contraction of $T$
at the middle vertex. Then
$\textbf{GL}_{T,T^{\prime}}(\epsilon)=\mathcal{M}_{k}(X)\times
Fr(X)\times_{SO(4)}\mathcal{M}_{k_{1},\mathfrak{m}}^{b}\times\mathcal{M}_{k_{2}}^{b}\times
B^{4}(0,\epsilon)$
where $\mathfrak{m}$ is the tuple consisting of the total weights of the
children of $v_{1}$ in $T$, i.e., $\mathfrak{m}=(k_{2})$, and
$B^{4}(0,\epsilon)$ is the 4-dimensional $\epsilon$-ball in $\mathbb{R}^{4}$
with the origin removed. So
it suffices to find a smooth inverse map of
$b\circ a\circ\Psi^{\prime}:{\mathcal{M}_{k_{1},\mathfrak{m}}^{b}\times\mathcal{M}_{k_{2}}^{b}\times B^{4}(0,\epsilon)/\Gamma}\longrightarrow\mathcal{M}_{k_{1}+k_{2}}^{b}(S^{4}),$
$(A_{1},A_{2},\rho,\lambda)\xmapsto{\Psi^{\prime}}A_{1}\\#_{(\rho,\lambda)}A_{2}\xmapsto{a}A_{1}\\#_{(\rho,\lambda)}A_{2}+a(A_{1}\\#_{(\rho,\lambda)}A_{2})\xmapsto{b}h\big{(}A_{1}\\#_{(\rho,\lambda)}A_{2}+a(A_{1}\\#_{(\rho,\lambda)}A_{2})\big{)}.$
Here $h\in H=Aut(\mathbb{R}^{4})$ pulls back
$A_{1}\\#_{(\rho,\lambda)}A_{2}+a(A_{1}\\#_{(\rho,\lambda)}A_{2})$ to a
balanced connection, hence $h$ depends on $A_{1},A_{2},\rho,\lambda$. It is
obvious that
$h\big{(}A_{1}\\#_{(\rho,\lambda)}A_{2}+a(A_{1}\\#_{(\rho,\lambda)}A_{2})\big{)}=h\big{(}A_{1}\\#_{(\rho,\lambda)}A_{2})\big{)}+h\big{(}a(A_{1}\\#_{(\rho,\lambda)}A_{2})\big{)}.$
Recall that in Section 3, given two points $x_{1}\in X_{1},x_{2}\in X_{2}$ and
a small positive parameter $\lambda$, we denote the glued space by
$X_{1}\\#_{\lambda}X_{2}$. Let $X_{1}=X_{2}=S^{4}$, then we denote by
$X_{1}\\#_{(x,\lambda)}X_{2}$ or $S^{4}\\#_{(x,\lambda)}S^{4}$ the space
$X_{1}\\#_{\lambda}X_{2}$ with $x_{1}=x\in X_{1}\setminus\\{\infty\\}$ and
$x_{2}=\infty\in X_{2}$. Note that the canonical conformal diffeomorphism
$X_{1}\cong X_{1}\\#_{(x,\lambda)}X_{2}$ maps $B_{x}(\lambda e^{\delta})$ onto
$X_{2}\setminus B_{\infty}(\lambda e^{-\delta})$ where $\delta$ is the
constant fixed in Taubes’ gluing construction. We denote this diffeomorphism
by $i:B_{x}(\lambda e^{\delta})\to X_{2}\setminus B_{\infty}(\lambda
e^{-\delta})$.
The $H$-action on $\mathbb{R}^{4}$ induces an $H$-action on
$X_{1}\setminus\\{\infty\\}\times\mathbb{R}_{>0}$ in the following way: since
the image of a ball in $\mathbb{R}^{4}$ under the map $h\in H$ is a ball, we
can define
$h(x,\lambda):=(h(x),h(\lambda)),\text{ s.t.
}B_{h(x)}(h(\lambda)e^{T})=h(B_{x}(\lambda e^{T})),$
where $B_{x}(\lambda e^{T})\subset
X_{1}\setminus\\{\infty\\}\cong\mathbb{R}^{4}$, $T>0$ is the constant fixed in
Taubes’ gluing construction. This $H$-action on
$X_{1}\setminus\\{\infty\\}\times\mathbb{R}_{>0}$ lifts to an $H$-action on
$P(X_{1})|_{X_{1}\setminus\\{\infty\\}}\times\mathbb{R}_{>0}$ in the obvious
way. Moreover, the $H$-action on
$P(X_{1})|_{X_{1}\setminus\\{\infty\\}}\times\mathbb{R}_{>0}$ induces an
$H$-action on
$\bigcup_{x\in
X_{1}\setminus\\{\infty\\}}Hom_{G}(P(X_{1})|_{x},P(X_{2})|_{\infty}).$
To construct the inverse of $b\circ a\circ\Psi^{\prime}$, first observe that
$h\big{(}A_{1}\\#_{(\rho,\lambda)}A_{2})\big{)}$ lies in the image of
$\Psi^{\prime}$ since
$h\big{(}A_{1}\\#_{(\rho,\lambda)}A_{2})\big{)}=h(A_{1})\\#_{(h(\rho),h(\lambda))}\left(\frac{1}{h(\lambda)}\right)(A_{2}).$
(4.3)
To see this, note that both the left and right hand sides of (4.3) are equal
to
$\displaystyle\begin{cases}\eta_{1}(h^{-1})\cdot h(A_{1})&\text{ outside the
ball }B_{h(x)}(h(\lambda)e^{-\delta}),\\\ i^{*}(\eta_{2}\cdot A_{2})&\text{ on
}B_{h(x)}(h(\lambda)e^{\delta}),\end{cases}$
where $x\in X_{1}$ is the point such that $\rho\in
Hom_{G}(P(X_{1})|_{x},P(X_{2})|_{\infty})$, $\eta_{1},\eta_{2}$ are defined in
(3.3).
We claim that $h$ commutes with $a$:
$h(a(A_{1}\\#_{(\rho,\lambda)}A_{2}))=a(h(A_{1}\\#_{(\rho,\lambda)}A_{2})).$
(4.4)
$a$ is a map defined on the image of $\Psi^{\prime}$ and
$h(A_{1}\\#_{(\rho,\lambda)}A_{2})$ lies in the image of $\Psi^{\prime}$,
therefore the right hand side of (4.4) makes sense. (4.4) holds since the two
metrics on $X_{1}\\#_{(\rho,\lambda)}X_{2}$ and
$X_{1}\\#_{(h(\rho),h(\lambda))}X_{2}$ are conformally equivalent.
Therefore
$h\big{(}A_{1}\\#_{(\rho,\lambda)}A_{2}+a(A_{1}\\#_{(\rho,\lambda)}A_{2})\big{)}=h(A_{1}\\#_{(\rho,\lambda)}A_{2})+a(h(A_{1}\\#_{(\rho,\lambda)}A_{2}))$
is an element in the image of $\Psi$ and
$\displaystyle\Psi^{-1}\left(h\big{(}A_{1}\\#_{(\rho,\lambda)}A_{2}+a(A_{1}\\#_{(\rho,\lambda)}A_{2})\big{)}\right)$
$\displaystyle=$
$\displaystyle\left(h(A_{1}),\left(\frac{1}{h(\lambda)}\right)(A_{2}),h(\rho),h(\lambda)\right)\in\mathcal{M}_{k_{1},\mathfrak{m}}\times\mathcal{M}_{k_{2}}\times
B^{4}(0,\epsilon),$
which is in the same $H$-orbit as the element $(A_{1},A_{2},\rho,\lambda)$.
Since $(A_{1},A_{2},\rho,\lambda)$ is balanced, after applying the balanced
map $b$ we have
$\displaystyle
b\circ\Psi^{-1}\left(h\big{(}A_{1}\\#_{(\rho,\lambda)}A_{2}+a(A_{1}\\#_{(\rho,\lambda)}A_{2})\big{)}\right)$
$\displaystyle=$
$\displaystyle(A_{1},A_{2},\rho,\lambda)\in\mathcal{M}_{k_{1},\mathfrak{m}}^{b}\times\mathcal{M}_{k_{2}}^{b}\times
B^{4}(0,\epsilon).$
So the inverse of $b\circ\Psi$ is $b\circ\Psi^{-1}$. ∎
Proposition 4.12 gives a set of maps on an open cover of
$\overline{\mathcal{M}}_{k}(X)$
$\mathcal{D}(X,k):=\left\\{\left(\Psi_{T}^{b}\left(\textbf{GL}_{T}(\epsilon)|_{U}\right)/\Gamma_{T},~{}(\Psi_{T}^{b})^{-1}\right)\right\\}_{T\in\mathcal{T}_{k}}.$
(4.5)
If $T$ contains no ghost vertex, the $\Gamma_{T}$-action on
$\Psi_{T}^{b}\left(\textbf{GL}_{T}(\epsilon)|_{U}\right)$ is free up to a
finite group. So away from the ghost strata, (4.5) gives an orbifold atlas on
$\overline{\mathcal{M}}_{k}(X)\setminus S_{k}(X)$. In [C02], it is proved that
after perturbing the atlas (4.5) and applying so-called “flip resolutions” to
resolve ghost strata, we get a smooth orbifold
$\underline{\mathcal{M}}_{k}(X)$.
## 5 $\mathbb{Z}_{\alpha}$-equivariant bundles and
$\mathbb{Z}_{\alpha}$-invariant moduli spaces
Let $\Gamma$ be a finite group acting on $M$ from the left and preserving the
metric $g$ on $M$. The $\Gamma$-action on $M$ may not lift to $P$ in general.
Let $\mathcal{H}$ be the group of bundle automorphisms of $P$ covering
elements of $\Gamma\subset\text{Diff}(M)$. Then $\mathcal{G}$ is the subset of
$\mathcal{H}$ covering the identity in Diff$(M)$. There is an exact sequence:
$1\to\mathcal{G}\to\mathcal{H}\to\Gamma\to 1.$
Note that the $\Gamma$-action on $M$ induces a well-defined $\Gamma$-action on
$\mathcal{B}$: given $\gamma\in\Gamma$, $[A]\in\mathcal{B}$, define
$\gamma\cdot[A]:=[h_{\gamma}^{*}A]$ where $h_{\gamma}\in\mathcal{H}$ covers
$\gamma\in$Diff$(M)$. The well-definedness of this action follows from the
fact that two elements in $\mathcal{H}$ covering the same $\gamma$ differ by a
gauge transformation. The metric $g$ on $M$ is $\Gamma$-invariant, therefore
$\mathcal{M}_{k}\subset\mathcal{B}$ is $\Gamma$-invariant. Denote
$\displaystyle\mathcal{B}_{\Gamma}$
$\displaystyle:=\\{[A]\in\mathcal{B}~{}|~{}\gamma\cdot[A]=[A]\\},$
$\displaystyle\mathcal{B}_{\Gamma}^{*}$
$\displaystyle:=\\{[A]\in\mathcal{B}^{*}~{}|~{}\gamma\cdot[A]=[A]\\},$
$\displaystyle\mathcal{M}_{\Gamma}$
$\displaystyle:=\\{[A]\in\mathcal{M}~{}|~{}\gamma\cdot[A]=[A]\\}.$
Suppose $[A]\in\mathcal{B}_{\Gamma}$; then we get the following short exact
sequence:
$1\to\Gamma_{A}\to\mathcal{H}_{A}\to\Gamma\to 1,$ (5.1)
where $\Gamma_{A}\subset\mathcal{G},\mathcal{H}_{A}\subset\mathcal{H}$ are
stabilisers of $A$. If $A$ is irreducible, $\Gamma_{A}=\mathbb{Z}_{2}$ (or
trivial in the $SO(3)$ case) and $\mathcal{H}_{A}\to\Gamma$ is a double cover
(or an isomorphism in the $SO(3)$ case). In summary, any irreducible $\Gamma$-invariant
connection in $\mathcal{B}$ induces a group action on $P$ that double covers
(or covers) the $\Gamma$-action on $M$. Denote
$\displaystyle\mathcal{A}^{\mathcal{H}_{A}}$
$\displaystyle:=\\{A~{}|~{}h_{\gamma}^{*}A=A,~{}\forall
h_{\gamma}\in\mathcal{H}_{A}\\},$ $\displaystyle\mathcal{G}^{\mathcal{H}_{A}}$
$\displaystyle:=\\{g~{}|~{}gh_{\gamma}=h_{\gamma}g,~{}\forall
h_{\gamma}\in\mathcal{H}_{A}\\},$ $\displaystyle\mathcal{B}^{\mathcal{H}_{A}}$
$\displaystyle:=\mathcal{A}^{\mathcal{H}_{A}}/\mathcal{G}^{\mathcal{H}_{A}}.$
Now there are two kinds of “invariant moduli space”:
$\mathcal{B}_{\Gamma}~{}\text{ and
}~{}\mathcal{A}^{\mathcal{H}_{A}}/\mathcal{G}^{\mathcal{H}_{A}}.$
A prerequisite of the former is a $\Gamma$-action on $M$; of the latter, an
$\mathcal{H}_{A}$-action on $P$.
Let $\Gamma$ be a cyclic group $\mathbb{Z}_{\alpha}$. Suppose the
$\mathbb{Z}_{\alpha}$-action on $M$ satisfies Condition 1.1. As stated in
[F92], the irreducible ASD part of
$\mathcal{A}^{\mathcal{H}_{A}}/\mathcal{G}^{\mathcal{H}_{A}}$ (denoted by
$\mathcal{A}^{*,ASD,\mathcal{H}_{A}}/\mathcal{G}^{\mathcal{H}_{A}}$) is, after
a perturbation, a smooth manifold. In contrast, the irreducible ASD part of $\mathcal{B}_{\mathbb{Z}_{\alpha}}$, denoted by $\mathcal{M}^{*}_{\mathbb{Z}_{\alpha}}$, is not a manifold in general: it may contain components of different dimensions, as we will see in the example of the weighted complex projective space in Section 8.
###### Proposition 5.1.
(Proposition 2.4 of [F89]) The natural map
$\mathcal{A}^{*,\mathcal{H}_{A}}/\mathcal{G}^{\mathcal{H}_{A}}\to\mathcal{B}^{*}_{\mathbb{Z}_{\alpha}}$
is a homeomorphism onto some components. Restricting to the ASD part, we get
$\mathcal{A}^{*,ASD,\mathcal{H}_{A}}/\mathcal{G}^{\mathcal{H}_{A}}\to\mathcal{M}^{*}_{\mathbb{Z}_{\alpha}},$
which is also a homeomorphism onto some components.
From the discussion above, we have
$\mathcal{B}^{*}_{\mathbb{Z}_{\alpha}}=\bigcup_{[A]\in\mathcal{B}^{*}_{\mathbb{Z}_{\alpha}}}\mathcal{A}^{*,\mathcal{H}_{A}}/\mathcal{G}^{\mathcal{H}_{A}},$
where each $\mathcal{A}^{*,\mathcal{H}_{A}}/\mathcal{G}^{\mathcal{H}_{A}}$ is
viewed as a subset in $\mathcal{B}^{*}_{\mathbb{Z}_{\alpha}}$ through the map
in Proposition 5.1. The union is not disjoint. Furthermore, each
$\mathcal{H}_{A}$ can be seen as an element in
$\displaystyle\\{\phi\in
Hom(\mathbb{Z}_{2\alpha},\mathcal{H})~{}|~{}\tilde{\phi}\in
Hom(\mathbb{Z}_{2\alpha},\text{Diff}(M))\text{ is a double cover of the
}\mathbb{Z}_{\alpha}\text{-action on }M\\}$ $\displaystyle\text{(or
}\\{\phi\in Hom(\mathbb{Z}_{\alpha},\mathcal{H})~{}|~{}\tilde{\phi}\in
Hom(\mathbb{Z}_{\alpha},\text{Diff}(M))\text{ is the
}\mathbb{Z}_{\alpha}\text{-action on }M\\})$
where $\tilde{\phi}$ is the group action on $M$ induced by $\phi$. Denote by $J$ the quotient of the set above by conjugation by gauge transformations. Then we have
###### Theorem 5.2.
The map
$\bigsqcup_{[\phi]\in
J}\mathcal{A}^{*,\phi}/\mathcal{G}^{\phi}\to\mathcal{B}^{*}_{\mathbb{Z}_{\alpha}}$
is bijective.
###### Proof.
We already have surjectivity. Proposition 5.1 implies injectivity on each
disjoint set. Hence it suffices to show that if
$A_{1}\in\mathcal{A}^{*,\phi_{1}}$,$A_{2}\in\mathcal{A}^{*,\phi_{2}}$ and
$A_{1}=g\cdot A_{2}\text{ for some }g\in\mathcal{G}$, then
$[\phi_{1}]=[\phi_{2}].$
This follows from
$g\cdot\phi_{2}\cdot g^{-1}\cdot A_{1}=g\cdot\phi_{2}\cdot A_{2}=g\cdot
A_{2}=A_{1}=\phi_{1}\cdot A_{1}.$
∎
Next we classify the index set $J$.
###### Proposition 5.3.
The map
$\left\\{\text{isomorphism classes of $\mathbb{Z}_{2\alpha}$-equivariant $SU(2)$ bundles $P\to M$ double covering the $\mathbb{Z}_{\alpha}$-action on $M$}\right\\}\to\mathbb{Z}\times\mathbb{Z}_{a_{1}}\times\dots\times\mathbb{Z}_{a_{n}},\qquad P\mapsto(c_{2}(P),m_{1},\dots,m_{n})$
is injective, where $c_{2}$ is the second Chern number and $m_{i}$ is defined as follows: let $f_{i}:\mathbb{Z}_{2a_{i}}\to S^{1}\subset SU(2)$ be the isotropy representation at the points $\pi^{-1}(x_{i})=\\{x_{i}^{1},\dots,x_{i}^{\alpha/a_{i}}\\}$; then $m_{i}\in\mathbb{Z}_{a_{i}}\cong H^{2}(L(a_{i},b_{i}))$ is the Euler class of the $S^{1}$-bundle
$\frac{S^{3}\times S^{1}}{f_{i}}\to S^{3}/\mathbb{Z}_{a_{i}}=L(a_{i},b_{i}).$
###### Proof.
(Sketch) The proof is basically the same as that of Proposition 4.1 in [FS85].
If $f_{i}$ is the trivial map, then $P|_{\partial B^{4}(x_{i}^{1})}/f_{i}$ is the trivial bundle over $L(a_{i},b_{i})$ and $m_{i}=0$. If $f_{i}$ is non-trivial, then $P|_{\partial B^{4}(x_{i}^{1})}$ reduces to an $S^{1}$-bundle, and the Euler class of the quotient of this $S^{1}$-bundle by $f_{i}$ is the weight of $f_{i}$. Therefore any two isomorphic $\mathbb{Z}_{2\alpha}$-equivariant $SU(2)$ bundles have the same $m_{1},\dots,m_{n}$. This gives well-definedness.
Given $(c_{2},m_{1},\dots,m_{n})$, we use $m_{1},\dots,m_{n}$ to get the
unique $\mathbb{Z}_{2\alpha}$-equivariant $SU(2)$-bundle over a neighbourhood
$N$ of singularities in $M$,
$N:=\bigcup_{i=1}^{n}\bigcup_{j=1}^{\alpha/a_{i}}B^{4}(x_{i}^{j}),$
and hence get an $SU(2)$-bundle over $\partial N/\mathbb{Z}_{\alpha}$. If $(c_{2},m_{1},\dots,m_{n})$ lies in the image of the map in this proposition, then the $SU(2)$-bundle over $\partial N/\mathbb{Z}_{\alpha}$ extends uniquely to $(M\setminus N)/\mathbb{Z}_{\alpha}$ since $c_{2}$ is fixed. This shows injectivity. ∎
For the $SO(3)$ case, there is a similar injective map:
$\left\\{\text{isomorphism classes of $\mathbb{Z}_{\alpha}$-equivariant $SO(3)$ bundles $P\to M$ covering the $\mathbb{Z}_{\alpha}$-action on $M$}\right\\}\to\mathbb{Z}\times H^{2}(M;\mathbb{Z}_{2})\times\mathbb{Z}_{a_{1}}\times\dots\times\mathbb{Z}_{a_{n}},\qquad P\mapsto(p_{1}(P),w_{2}(P),m_{1},\dots,m_{n}).$
These two maps are not surjective in general. For the $SU(2)$ case, since $-1\in\mathbb{Z}_{2\alpha}$ acts on $P$ either as $id$ or as $-id$, the weights $m_{1},\dots,m_{n}$ are either all even or all odd. For the $SO(3)$ case, not all
pairs $(p_{1},w_{2})$ occur as $(p_{1}(P),w_{2}(P))$ for some bundle $P$. In
this chapter we do not describe what the image subset is. If an element
$(c_{2},m_{1},\dots,m_{n})$ is not in the image, then we say the set of
bundles and connections corresponding to $(c_{2},m_{1},\dots,m_{n})$ is empty.
Denote
$I:=\\{c_{2}(P)\\}\times\mathbb{Z}_{a_{1}}\times\dots\times\mathbb{Z}_{a_{n}}(\text{
or
}\\{(p_{1}(P),w_{2}(P))\\}\times\mathbb{Z}_{a_{1}}\times\dots\times\mathbb{Z}_{a_{n}}).$
(5.5)
We have
$\mathcal{B}^{*}_{\mathbb{Z}_{\alpha}}=\bigsqcup_{i\in
I}\mathcal{A}^{*,\phi_{i}}/\mathcal{G}^{\phi_{i}}~{},~{}~{}\mathcal{M}^{*}_{\mathbb{Z}_{\alpha}}=\bigsqcup_{i\in
I}\mathcal{A}^{*,ASD,\phi_{i}}/\mathcal{G}^{\phi_{i}},$
where $\phi_{i}$ is the $\mathbb{Z}_{2\alpha}$ (or $\mathbb{Z}_{\alpha}$)-action on $P$ corresponding to $i\in I$.
###### Definition 5.4.
Let $M$, the $\mathbb{Z}_{\alpha}$-action on $M$ and
$\\{a_{i},b_{i}\\}_{i=1}^{n}$ be defined as above. The bundle $P\to M$
characterised by $(c_{2},m_{1},\dots,m_{n})$ (or
$(p_{1},w_{2},m_{1},\dots,m_{n})$) is called a bundle of type $\mathcal{O}$
where $\mathcal{O}=\\{c_{2},(a_{i},b_{i},m_{i})_{i=1}^{n}\\}$ (or $(p_{1},w_{2},(a_{i},b_{i},m_{i})_{i=1}^{n})$). In the $SO(3)$ case, sometimes
$w_{2}$ in $\mathcal{O}$ is omitted if $w_{2}$ is fixed and contextually
clear.
So far we have discussed irreducible connections. For reducible connections,
the only difference between the ordinary case and the
$\mathbb{Z}_{2\alpha}$(or $\mathbb{Z}_{\alpha}$)-equivariant case is that the
$U(1)$-reduction of $P$ needs to be $\mathbb{Z}_{2\alpha}$(or
$\mathbb{Z}_{\alpha}$)-equivariant. Propositions 4.1, 5.3 and 5.4 in [FS85] give a full description of equivariant $U(1)$-reductions and invariant reducible connections.
To summarise, Table 1 shows the similarities and differences in the study of instanton moduli spaces on manifolds and on this special kind of orbifold.
Table 1
Objects: the manifold $M$; the orbifold $X:=M/\mathbb{Z}_{\alpha}$.
Bundles $P$: an $SU(2)$-bundle or $SO(3)$-bundle on $M$; a $\mathbb{Z}_{2\alpha}$-equivariant $SU(2)$-bundle on $M$ or a $\mathbb{Z}_{\alpha}$-equivariant $SO(3)$-bundle on $M$.
Classification of $P$: $c_{2}$ or $(p_{1},w_{2})$; $(c_{2},m_{1},\dots,m_{n})$ or $(p_{1},w_{2},(a_{i},b_{i},m_{i})_{i=1}^{n})$.
Classification of $U(1)$-reductions $Q$: $\pm e\in H^{2}(M;\mathbb{Z})$ with $e^{2}=-c_{2}$, or $w_{2}=e$ mod 2 and $e^{2}=p_{1}$; $\pm e\in H^{2}(X\setminus\\{x_{1},\dots,x_{n}\\};\mathbb{Z})$ with $e|_{L(a_{i},b_{i})}=m_{i}$ and $i^{*}\pi^{*}(e)^{2}=-c_{2}$, or $w_{2}=i^{*}\pi^{*}(e)$ mod 2 and $i^{*}\pi^{*}(e)^{2}=p_{1}$.
Elliptic complex: $\Omega^{0}(adP)\xrightarrow{d_{A}}\Omega^{1}(adP)\xrightarrow{d_{A}^{+}}\Omega^{2,+}(adP)$; its invariant part $\Omega^{0}(adP)^{\mathbb{Z}_{2\alpha}}\xrightarrow{d_{A}}\Omega^{1}(adP)^{\mathbb{Z}_{2\alpha}}\xrightarrow{d_{A}^{+}}\Omega^{2,+}(adP)^{\mathbb{Z}_{2\alpha}}$ or $\Omega^{0}(adP)^{\mathbb{Z}_{\alpha}}\xrightarrow{d_{A}}\Omega^{1}(adP)^{\mathbb{Z}_{\alpha}}\xrightarrow{d_{A}^{+}}\Omega^{2,+}(adP)^{\mathbb{Z}_{\alpha}}$.
Dimension of ASD moduli space: $8c_{2}-3(1+b^{+})$ or $-2p_{1}-3(1+b^{+})$; the index of the invariant complex, given by formula (5.6).
In each row, the entry before the semicolon refers to the manifold $M$ and the entry after it to the orbifold $X$.
In Table 1, $\pi:M\to X$ is the projection and $i$ is the inclusion map from
punctured $M$ to $M$.
###### Remark 5.5.
The calculation of the index of the invariant elliptic complex in Table 1 is
in Section 6 of [FS85]. Two points need to be mentioned in our case.
First, [FS85] considered pseudofree orbifolds, which can also be expressed as $M/\mathbb{Z}_{\alpha}$. Although $M/\mathbb{Z}_{\alpha}$ has isolated singular points $\\{x_{1},\dots,x_{n}\\}$, the branched set of the branched cover $\pi:M\to M/\mathbb{Z}_{\alpha}$ is the union of $\\{x_{1},\dots,x_{n}\\}$ and a 2-dimensional surface $F$ in $M/\mathbb{Z}_{\alpha}$. In the set-up of
[FS85], the $\mathbb{Z}_{\alpha}$-action on the $SO(3)$-bundle $P\to M$ is
trivial when restricted to $P|_{\pi^{-1}(F)}$, hence the Lefschetz number
restricted to $\pi^{-1}(F)$ is easy to calculate and the index is the same as
the case when the branched set consists only of isolated points.
Second, by the construction of $P$ in [FS85], $m_{i}$ is relatively prime to $a_{i}$ (i.e. $m_{i}$ is a generator of $H^{2}(L(a_{i},b_{i});\mathbb{Z})$), but the method used in [FS85] also works when $\mathbb{Z}_{a_{i}}$ acts with arbitrary weight on the fibre over the singular point. ([A90] used the same approach to calculate this index for $M=S^{4}$.) In this more general case the result of Theorem 6.1 in [FS85] becomes
$-\frac{2p_{1}(P)}{\alpha}-3(1+b_{2}^{+})+n^{\prime}+\sum_{i=1}^{n}\frac{2}{a_{i}}\sum_{j=1}^{a_{i}-1}\cot\left(\frac{\pi
jb_{i}}{a_{i}}\right)\cot\left(\frac{\pi
j}{a_{i}}\right)\sin^{2}\left(\frac{\pi jm_{i}}{a_{i}}\right),$ (5.6)
where $n^{\prime}$ is the number of elements of $\\{m_{i}~{}|~{}m_{i}\not\equiv 0\text{ mod }a_{i}\\}$. For the $SU(2)$ case, we replace $-2p_{1}(P)$ in this formula by $8c_{2}(P)$.
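As a quick numerical sanity check on formula (5.6), the following Python sketch (the function name and argument layout are ours, not from the text) evaluates the index directly; with no singular points and $\alpha=1$ it reduces to the classical dimension $-2p_{1}-3(1+b_{2}^{+})$.

```python
import math

def index_5_6(p1, alpha, b2_plus, sing):
    """Evaluate formula (5.6); `sing` lists the singular-point data (a_i, b_i, m_i)."""
    n_prime = sum(1 for a, b, m in sing if m % a != 0)
    total = -2 * p1 / alpha - 3 * (1 + b2_plus) + n_prime
    for a, b, m in sing:
        # cot(pi j b/a) * cot(pi j/a) * sin^2(pi j m/a), summed over j = 1..a-1;
        # the cotangents are finite since gcd(b, a) = 1 and 1 <= j <= a-1
        s = sum(
            math.cos(math.pi * j * b / a) / math.sin(math.pi * j * b / a)
            * math.cos(math.pi * j / a) / math.sin(math.pi * j / a)
            * math.sin(math.pi * j * m / a) ** 2
            for j in range(1, a)
        )
        total += 2.0 / a * s
    return total

# With no singular points and alpha = 1, (5.6) is -2 p_1 - 3 (1 + b^+):
print(index_5_6(-4, 1, 1, []))  # -> 2.0
```

A singular point with $a_{i}=2$ contributes only the $+1$ from $n^{\prime}$, since $\cot(\pi/2)=0$ kills the trigonometric sum.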
## 6 Invariant ASD connections on $S^{4}$
### 6.1 Instanton-1 moduli space on $S^{4}$
Let $P\to S^{4}$ be the $SU(2)$-bundle with $c_{2}(P)=1$. The ASD moduli space
$\mathcal{M}_{1}(S^{4})$ is a 5-dimensional ball. In this section we describe
this moduli space explicitly.
The 4-sphere $S^{4}$ can be covered by two open sets
$S^{4}\cong\mathbb{HP}^{1}=U_{0}\cup U_{1}$ with coordinate charts and
transition map as
$\varphi_{0}:U_{0}:=\\{[z_{0},z_{1}]~{}|~{}z_{0}\neq 0\\}\to\mathbb{H},\qquad[z_{0},z_{1}]\mapsto z_{1}/z_{0},$
$\varphi_{1}:U_{1}:=\\{[z_{0},z_{1}]~{}|~{}z_{1}\neq 0\\}\to\mathbb{H},\qquad[z_{0},z_{1}]\mapsto z_{0}/z_{1},$
$\varphi_{1}\circ\varphi_{0}^{-1}:\mathbb{H}\setminus\\{0\\}\to\mathbb{H}\setminus\\{0\\},\qquad z\mapsto z^{-1}.$
The bundle $P$ over $S^{4}$ can be reconstructed as
$\left(U_{0}\times Sp(1)\bigsqcup U_{1}\times Sp(1)\right)/\sim~{},~{}~{}(z,g)\sim\left(z^{-1},\frac{z}{|z|}g\right)~{}\forall z\in U_{0}\setminus\\{0\\}.$
According to Section 3.4 of [DK90] and Chapter 6 of [FU84], there is a
standard ASD connection $\theta$ defined by
$A_{0}=Im\frac{\bar{z}dz}{1+|z|^{2}}\in\Omega^{1}(U_{0},\mathfrak{su}(2)),$
which is invariant under the standard $SO(5)$-action on $S^{4}$. The moduli
space $\mathcal{M}_{1}(S^{4})$ is a 5-dimensional ball:
$\mathcal{M}_{1}(S^{4})=\left\\{T_{\lambda,b}^{*}\theta~{}|~{}\lambda\in(0,1],b\in
S^{4}\right\\},$
where $T_{\lambda,b}$ is a conformal transformation on $S^{4}$ that is induced
by the automorphism $x\mapsto\lambda(x-b)$ on $\mathbb{R}^{4}$. In particular,
for $b=0$, the connection $T_{\lambda,0}^{*}\theta$ can be represented under
the trivialisation over $U_{0}$ as
$A_{0}(\lambda):=Im\frac{\bar{z}dz}{\lambda^{2}+|z|^{2}}\in\Omega^{1}(U_{0},\mathfrak{su}(2)).$
(6.1)
### 6.2 $\Gamma$-invariant connections in $\mathcal{M}_{1}(S^{4})$
In this section we describe the $\Gamma$-invariant part of
$\mathcal{M}_{1}(S^{4})$ for some finite group $\Gamma$.
Let $SO(4)\cong Sp(1)\times_{\pm 1}Sp(1)$ act on $S^{4}$ by the extension of
the action on $\mathbb{H}$:
$(Sp(1)\times_{\pm 1}Sp(1))\times\mathbb{H}\to\mathbb{H},\qquad([e_{+},e_{-}],z)\mapsto e_{+}ze_{-}^{-1}.$
This action does not lift to the bundle $P$, hence it cannot be defined on $\mathcal{A}(P)$ in the standard way. However, this $SO(4)$-action lifts to a $Spin(4)\cong Sp(1)\times Sp(1)$-action on $P$ in the following way:
$\displaystyle(e_{+},e_{-})\cdot(z,g)=(e_{+}ze_{-}^{-1},e_{-}g)~{}~{}~{}\forall(z,g)\in
U_{0}\times Sp(1),$
$\displaystyle(e_{+},e_{-})\cdot(z,g)=(e_{-}ze_{+}^{-1},e_{+}g)~{}~{}~{}\forall(z,g)\in
U_{1}\times Sp(1).$
This lifting defines an $SO(4)$-action on $\mathcal{A}$ and $\mathcal{G}$: for any element $\gamma$ in $SO(4)$ acting on $S^{4}$, there are two elements $\tilde{\gamma},-\tilde{\gamma}$ in $Spin(4)$ that act on $P$ covering $\gamma$. For any $A\in\mathcal{A}$, we define $\gamma\cdot A:=\tilde{\gamma}\cdot A$. Since $\\{\pm 1\\}\subset\Gamma_{A}$ for any $A\in\mathcal{A}$, this action is well-defined. The element $\gamma$ acts on $\mathcal{G}$ by conjugation by $\pm\tilde{\gamma}$.
From Section 6.1, we know $\mathcal{M}_{1}(S^{4})$ is a 5-dimensional ball parametrised by $b$, the centre of mass in $S^{4}$, and $\lambda$, the scale of concentration of mass around that centre. Then we have the following proposition.
###### Proposition 6.1.
For any non-trivial finite subgroup $\Gamma\subset SO(4)$ such that $\Gamma$
acts freely on $\mathbb{R}^{4}\setminus\\{0\\}$, the $\Gamma$-invariant ASD
moduli space is
$\mathcal{M}_{1}^{\Gamma}(S^{4}):=\mathcal{A}^{ASD,\Gamma}(P)/\mathcal{G}^{\Gamma}(P)\cong\left\\{A_{0}(\lambda)~{}|~{}\lambda\in(0,\infty)\right\\}$
where $A_{0}(\lambda)$ is defined in (6.1). Moreover, this space is a smooth
1-dimensional manifold diffeomorphic to $\mathbb{R}$.
The proof can be found in Lemma 5.2 of [F92]. If $\Gamma$ is a cyclic group,
there is another way to calculate the dimension of
$\mathcal{M}_{1}^{\Gamma}(S^{4})$, which will be shown in Example 6.4.
### 6.3 $\mathbb{Z}_{p}$-invariant connections in $\mathcal{M}_{k}(S^{4})$
In this section we describe the $\mathbb{Z}_{p}$-invariant part of
$\mathcal{M}_{k}(S^{4})$.
Suppose $\mathbb{Z}_{p}$ acts on $S^{4}$ as the extension of the following
action:
$e^{\frac{2\pi i}{p}}\cdot(z_{1},z_{2})=(e^{\frac{2\pi
i}{p}}z_{1},e^{\frac{2\pi iq}{p}}z_{2}),$
where $e^{\frac{2\pi i}{p}}$ is the generator of $\mathbb{Z}_{p}$,
$(z_{1},z_{2})\in\mathbb{C}^{2}\cong\mathbb{R}^{4}$, $q$ is an integer
relatively prime to $p$. Identify $\mathbb{C}^{2}$ with $\mathbb{H}$ by
$(z_{1},z_{2})\mapsto z_{1}+z_{2}j$, then this action is
$\forall z\in\mathbb{H}~{},~{}~{}e^{\frac{2\pi i}{p}}\cdot z=e^{\frac{\pi
i(1+q)}{p}}ze^{\frac{\pi i(1-q)}{p}}.$
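This quaternionic identity can be verified numerically. The sketch below (helper names are ours) models a quaternion $z_{1}+z_{2}j$ as a pair of complex numbers, using $jw=\bar{w}j$, and checks that multiplying by $e^{\pi i(1+q)/p}$ on the left and $e^{\pi i(1-q)/p}$ on the right acts as $(z_{1},z_{2})\mapsto(e^{2\pi i/p}z_{1},e^{2\pi iq/p}z_{2})$.

```python
import cmath
import math

def qmul(x, y):
    """Multiply quaternions written as complex pairs a + b j, using j w = conj(w) j."""
    a1, b1 = x
    a2, b2 = y
    return (a1 * a2 - b1 * b2.conjugate(), a1 * b2 + b1 * a2.conjugate())

p, q = 5, 2
z = (0.3 + 0.7j, -0.2 + 0.4j)                     # z = z1 + z2 j
eL = (cmath.exp(1j * math.pi * (1 + q) / p), 0j)  # e^{pi i (1+q)/p}
eR = (cmath.exp(1j * math.pi * (1 - q) / p), 0j)  # e^{pi i (1-q)/p}

lhs = qmul(qmul(eL, z), eR)
rhs = (cmath.exp(2j * math.pi / p) * z[0], cmath.exp(2j * math.pi * q / p) * z[1])
# lhs and rhs agree up to floating-point error
```

The key step is that $j$ anti-commutes with the complex phase on the right factor, so the exponents add on $z_{1}$ and subtract on $z_{2}$, giving $2\pi/p$ and $2\pi q/p$ respectively.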
From the discussion in Section 5, $\mathcal{H}_{A}$ is $\mathbb{Z}_{2p}$ or $\mathbb{Z}_{p}\times\mathbb{Z}_{2}$ (depending on whether (5.1) splits or not). This means there always exists a $\mathbb{Z}_{2p}$-action on $P$ that double covers the $\mathbb{Z}_{p}$-action on $S^{4}$.
According to Section 2.3 of [A90] or Proposition 5.3 in this paper, a $\mathbb{Z}_{2p}$-equivariant $SU(2)$-bundle over $S^{4}$ can be described by a triple $(k,m,m^{\prime})$, where $k$ is the second Chern number of $P$, and $m$ and $m^{\prime}$ are the weights of the $\mathbb{Z}_{2p}$-action on the fibres over $\\{0\\}$ and $\\{\infty\\}$ respectively. By ‘$\mathbb{Z}_{2p}$-action
on a fibre has weight $m$’ we mean there is a trivialisation near this fibre
such that the generator $e^{\frac{\pi i}{p}}$ of $\mathbb{Z}_{2p}$ acts on
this fibre by $e^{\frac{\pi i}{p}}\cdot g=e^{\frac{m\pi i}{p}}g$ where $g\in
Sp(1)$.
###### Example 6.2.
Consider the bundle $P\to S^{4}$ defined in Section 6.2. In the
trivialisations over $U_{0},U_{1}$, the $\mathbb{Z}_{2p}$-action is defined by
$\displaystyle e^{\frac{\pi i}{p}}\cdot(z,g)=(e^{\frac{\pi
i(1+q)}{p}}ze^{\frac{\pi i(1-q)}{p}},e^{\frac{\pi
i(q-1)}{p}}g)~{}~{}~{}\forall(z,g)\in U_{0}\times Sp(1),$ $\displaystyle
e^{\frac{\pi i}{p}}\cdot(z,g)=(e^{\frac{\pi i(q-1)}{p}}ze^{\frac{\pi
i(-q-1)}{p}},e^{\frac{\pi i(q+1)}{p}}g)~{}~{}~{}\forall(z,g)\in U_{1}\times
Sp(1).$
###### Theorem 6.3.
(Lemma 5.1, Proposition 4.3, Theorem 5.2 and Section 4.4 of [A90]) Let $P\to S^{4}$ be an $SU(2)$-bundle with a $\mathbb{Z}_{2p}$-action that double covers the $\mathbb{Z}_{p}$-action on $S^{4}$, with second Chern number and action weights $(k,m,m^{\prime})$.
1. (1).
Suppose there exist $a,b\in\mathbb{Z}$ such that
$\displaystyle\begin{cases}2aq\equiv m^{\prime}+m\text{ mod }2p,\\\ 2b\equiv m^{\prime}-m\text{ mod }2p.\end{cases}$ (6.2)
Then the space of $\mathbb{Z}_{p}$-invariant ASD connections in $\mathcal{M}_{k}(S^{4})$, denoted by $\mathcal{M}^{\mathbb{Z}_{p}}_{(k,m,m^{\prime})}(S^{4})$, is non-empty. Furthermore, $ab\equiv k$ mod $p$.
2. (2).
$\mathcal{M}^{\mathbb{Z}_{p}}_{(k,m,m^{\prime})}(S^{4})\neq\emptyset$ if and only if there is a finite sequence of triples $\\{(k_{i},m_{i},m^{\prime}_{i})\\}_{i=1}^{n}$, each satisfying formula (6.2), with
$k_{i}>0~{},~{}~{}k=\sum_{i=1}^{n}k_{i}~{},~{}~{}m\equiv m_{1}\text{ mod }2p~{},~{}~{}m^{\prime}_{i}\equiv m_{i+1}\text{ mod }2p~{},~{}~{}m^{\prime}\equiv m^{\prime}_{n}\text{ mod }2p.$
3. (3).
If $(k,m,m^{\prime})$ satisfies the conditions in (2), the dimension of the
invariant moduli space is
$\text{dim}\mathcal{M}^{\mathbb{Z}_{p}}_{(k,m,m^{\prime})}(S^{4})=\frac{8k}{p}-3+n+\frac{2}{p}\sum_{j=1}^{p-1}\cot\frac{\pi
j}{p}\cot\frac{\pi jq}{p}\left(\sin^{2}\frac{\pi
jm^{\prime}}{p}-\sin^{2}\frac{\pi jm}{p}\right),$ (6.3)
where $n\in\\{0,1,2\\}$ is the number of elements of the set
$\\{x\in\\{m,m^{\prime}\\}~{}|~{}x\not\equiv 0,p~{}mod~{}2p\\}.$
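Condition (6.2) is easy to search by brute force. The following sketch (the function name is ours) lists all residues $(a,b)$ mod $2p$ solving the two congruences; for the data of Example 6.2 with $p=5$, $q=2$, so $(k,m,m^{\prime})=(1,1,3)$, every solution also satisfies $ab\equiv k$ mod $p$, consistent with part (1).

```python
def solutions_6_2(m, mp, p, q):
    """All (a, b) with 0 <= a, b < 2p satisfying 2aq = m' + m and 2b = m' - m (mod 2p)."""
    return [(a, b) for a in range(2 * p) for b in range(2 * p)
            if (2 * a * q - (mp + m)) % (2 * p) == 0
            and (2 * b - (mp - m)) % (2 * p) == 0]

# Example 6.2 data with p = 5, q = 2: (k, m, m') = (1, q - 1, q + 1) = (1, 1, 3)
sols = solutions_6_2(1, 3, 5, 2)
# (a, b) = (1, 1) is among the solutions, and a*b = 1 = k mod 5 for all of them
```

Here the congruence $2aq\equiv m^{\prime}+m$ mod $2p$ forces $a\equiv 1$ mod $p$, and $2b\equiv m^{\prime}-m$ forces $b\equiv 1$ mod $p$, so all products $ab$ are $\equiv 1$ mod $5$.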
###### Example 6.4.
Consider the bundle $P$ in Example 6.2. Its second Chern number and action weights are $(1,q-1,q+1)$, which satisfies (6.2) with $a=b=1$. Hence $\mathcal{M}^{\mathbb{Z}_{p}}_{(1,q-1,q+1)}(S^{4})$ is non-empty, and by formula (6.3) the dimension is given by
$\displaystyle\text{dim}\mathcal{M}^{\mathbb{Z}_{p}}_{(1,q-1,q+1)}(S^{4})$
$\displaystyle=$
$\displaystyle\frac{8}{p}-3+n+\frac{8}{p}\sum_{j=1}^{p-1}\cos^{2}\left(\frac{\pi
j}{p}\right)\cos^{2}\left(\frac{\pi jq}{p}\right)$ $\displaystyle=$
$\displaystyle\frac{8}{p}-3+n+\frac{8}{p}\sum_{j=1}^{p-1}\frac{1+\cos\left(\frac{2\pi
j}{p}\right)}{2}\frac{1+\cos\left(\frac{2\pi jq}{p}\right)}{2}$
$\displaystyle=$
$\displaystyle\frac{8}{p}-3+n+\frac{2}{p}\sum_{j=1}^{p-1}\left(1+\cos\left(\frac{2\pi
j}{p}\right)+\cos\left(\frac{2\pi jq}{p}\right)+\frac{\cos\left(\frac{2\pi
j(q+1)}{p}\right)+\cos\left(\frac{2\pi j(q-1)}{p}\right)}{2}\right)$
$\displaystyle=$ $\displaystyle 1.$
where $n\in\\{0,1,2\\}$ is the number of elements of $\\{q-1,q+1\\}$ not congruent to $0$ mod $p$. The last equality follows from the fact that $\sum_{j=1}^{p-1}\cos\left(\frac{2\pi jm}{p}\right)$ equals $-1$ if $m\not\equiv 0$ mod $p$, and $p-1$ if $m\equiv 0$ mod $p$. This matches the result in Proposition 6.1.
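The computation in Example 6.4 can also be confirmed numerically: evaluating formula (6.3) for $(k,m,m^{\prime})=(1,q-1,q+1)$ over a range of coprime pairs $(p,q)$ always returns 1. The sketch below uses an ad hoc function name and, as in the example, counts $n$ mod $p$.

```python
import math
from math import gcd

def dim_6_3(k, m, mp, p, q):
    """Formula (6.3); n counts the elements of {m, m'} nonzero mod p, as in Example 6.4."""
    n = sum(1 for x in (m, mp) if x % p != 0)
    # cot(pi j/p) cot(pi j q/p) (sin^2(pi j m'/p) - sin^2(pi j m/p)), j = 1..p-1;
    # both cotangents are finite since gcd(q, p) = 1 and 1 <= j <= p-1
    s = sum(
        math.cos(math.pi * j / p) / math.sin(math.pi * j / p)
        * math.cos(math.pi * j * q / p) / math.sin(math.pi * j * q / p)
        * (math.sin(math.pi * j * mp / p) ** 2 - math.sin(math.pi * j * m / p) ** 2)
        for j in range(1, p)
    )
    return 8 * k / p - 3 + n + 2.0 / p * s

# dim M^{Z_p}_{(1, q-1, q+1)}(S^4) = 1 for every coprime pair, matching Proposition 6.1
dims = [dim_6_3(1, q - 1, q + 1, p, q)
        for p in range(2, 10) for q in range(1, p) if gcd(p, q) == 1]
```

Every entry of `dims` is 1 up to floating-point error, as the trigonometric manipulation in the example predicts.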
### 6.4 Balanced $\mathbb{Z}_{p}$-invariant connections on $S^{4}$
Recall that the space of balanced connections on $S^{4}$ is defined to be
$\mathcal{A}^{b}_{k}:=\mathcal{A}_{k}/H,$
where $H\subset Aut(\mathbb{R}^{4})$ is the subgroup generated by translations
and dilations. Correspondingly, the balanced moduli space and the balanced ASD
moduli space are
$\mathcal{B}^{b}:=\mathcal{B}/H~{},~{}~{}\mathcal{M}_{k}^{b}:=\mathcal{M}_{k}/H.$
When $k\neq 0$, an equivalent definition of $\mathcal{M}_{k}^{b}$ is
$\mathcal{M}_{k}^{b}=\left\\{[A]~{}|~{}\int_{\mathbb{R}^{4}}x|F_{A}|^{2}=0~{},~{}\int_{B(1)}|F_{A}|^{2}=\hbar\right\\},$
where $B(1)$ is the unit ball in $\mathbb{R}^{4}$ and $\hbar$ is a positive
constant less than $4k\pi^{2}$.
Given a $\mathbb{Z}_{2p}$-action on $P\to S^{4}$ whose second Chern number and action weights on $\\{0\\},\\{\infty\\}$ are $(k,m,m^{\prime})$, there is an
$\mathbb{R}^{+}$-action on $\mathcal{M}^{\mathbb{Z}_{p}}_{(k,m,m^{\prime})}$
induced by the dilations on $\mathbb{R}^{4}$. Denote the balanced
$\mathbb{Z}_{p}$-invariant moduli space by
$\mathcal{M}^{\mathbb{Z}_{p},b}_{(k,m,m^{\prime})}:=\mathcal{M}^{\mathbb{Z}_{p}}_{(k,m,m^{\prime})}/\mathbb{R}^{+}.$
Note that in the $\mathbb{Z}_{p}$-invariant case we only quotient by the
dilation action since translation does not preserve
$\mathbb{Z}_{p}$-invariance of a connection in general.
## 7 Bubble tree compactification for $M/\mathbb{Z}_{\alpha}$
Recall that in the bubble tree compactification for a 4-manifold $M$, there
are 3 important concepts:
1. (1)
Bubble tree $T$,
2. (2)
Space of bubble tree instantons $\mathcal{S}_{T}(M)$,
3. (3)
Gluing bundle $GL_{T}(M)\to\mathcal{S}_{T}(M)$.
Now given a $\mathbb{Z}_{2\alpha}$ (or $\mathbb{Z}_{\alpha}$)-equivariant bundle over $M$ characterised by $\mathcal{O}$ as in Definition 5.4, we need to define the corresponding versions:
1. (1)
$\mathcal{O}$-bubble-tree $T$,
2. (2)
$\mathcal{O}$-invariant bubble tree instanton space
$\mathcal{S}_{T}^{\mathcal{O}}(M)$,
3. (3)
$\mathcal{O}$-invariant gluing bundle $GL_{T}^{\mathcal{O}}(M)$ over
$\mathcal{S}_{T}^{\mathcal{O}}(M)$.
###### Definition 7.1.
Let $\mathbb{Z}_{a}$ act on $S^{4}\cong\mathbb{C}^{2}\cup\\{\infty\\}$ by
$e^{2\pi i/a}\cdot(z_{1},z_{2})=(e^{2\pi i/a}z_{1},e^{2\pi ib/a}z_{2})$
for some coprime integers $a,b$. Let $\mathcal{O}=\\{k,(a,b),(m,m^{\prime})\\}$, where $k\in\mathbb{Z}_{\geq 0}$ and $m,m^{\prime}\in\mathbb{Z}_{a}$. An $\mathcal{O}$-bubble-tree on
$S^{4}/\mathbb{Z}_{a}$ is a triple $(T,v_{0},w)$, where $T$ is a tree with
root $v_{0}$ and $w$ is a map on the vertices of $T$, such that
1. (1).
$(T,v_{0},w)$ has a path starting at $v_{0}$ (drawn as a diagram in the original, omitted here) for some $j\geq 0$; each $k_{i}$ is called the weight of $v_{i}$.
2. (2).
On the complement of this path, $w$ assigns to each vertex a non-negative
integer, which is also called the weight of the vertex.
3. (3).
Assign
1. $\bullet$
each vertex in $\\{v_{0},\dots,v_{j}\\}$ a singular bubble
$S^{4}/\mathbb{Z}_{a}$,
2. $\bullet$
each vertex in $V_{T}\setminus\\{v_{0},\dots,v_{j}\\}$ a bubble $S^{4}$,
3. $\bullet$
each edge $e_{i}\in\\{e_{0},\dots,e_{j-1}\\}$ the north pole of the singular bubble corresponding to $v_{i}$,
4. $\bullet$
each edge $e\in E_{T}\setminus\\{e_{0},\dots,e_{j-1}\\}$ a point on the bubble corresponding to $v$, or a point away from the north pole on the singular bubble corresponding to $v$, where $v$ is the elder of the two vertices connected by $e$.
Define the pullback of $(T,v_{0},w)$ to be the triple
$(\tilde{T},v_{0},\tilde{w})$ with a $\mathbb{Z}_{a}$-action such that
$T=\tilde{T}/\mathbb{Z}_{a}$. Take $a=3$, $j=1$; then $(T,v_{0},w)$ and $(\tilde{T},v_{0},\tilde{w})$ as defined in Figure 10 give an example of the pullback of $T$.
Figure 10: The pullback of $T$
4. (4).
Let $(\tilde{T},v_{0},\tilde{w})$ be the pullback of $(T,v_{0},w)$. By
composing $\tilde{w}$ with $pr_{1}$, the projection on to the first
coordinate, $(\tilde{T},v_{0},pr_{1}\circ\tilde{w})$ defines a bubble tree
with total weight $k$.
A singular bubble in an $\mathcal{O}$-bubble-tree is called a ghost bubble if it has weight 0. By Definition 7.1, an $\mathcal{O}$-bubble-tree could have a ghost singular bubble with only one child. Figure 10 gives such an example if we take $k_{1}=0$, $k_{2}\neq 0$.
###### Definition 7.2.
Let $M/\mathbb{Z}_{\alpha}$ be the orbifold defined as before. Given
$\mathcal{O}=\\{k,(a_{i},b_{i},m_{i})_{i=1}^{n}\\}$, an $\mathcal{O}$-bubble-
tree on $M/\mathbb{Z}_{\alpha}$ is a triple $(T,v_{0},w)$, where $T$ is a tree
with root $v_{0}$ and $w$ is a map on vertices of $T$ such that
1. (1).
$w$ maps $v_{0}$ to
$\mathcal{O}_{0}:=\\{m(v_{0}),(a_{i},b_{i},m_{i}^{0})_{i=1}^{n}\\}$ for some
$m(v_{0}),m_{i}^{0}$.
2. (2).
Among children of $v_{0}$, for each $i\in\\{1,\dots,n\\}$, there is at most
one vertex $v$ such that
$w(v)=(m(v),(a_{i},b_{i}),(m,m^{\prime})),\text{ for some }m(v),m,m^{\prime}.$
If such $v$ exists, we assign the edge connecting $v$ and $v_{0}$ the singular
point of the cone $cL(a_{i},b_{i})\subset M/\mathbb{Z}_{\alpha}$. The subtree
$(t(v),v,w)$ is a $\\{k_{i},(a_{i},b_{i}),(m_{i}^{0},m_{i})\\}$-bubble-tree on
$S^{4}/\mathbb{Z}_{a_{i}}$ for some $k_{i}>0$.
If such $v$ does not exist, $m_{i}^{0}=m_{i}$.
3. (3).
For all other vertices of $T$, $w$ assigns to each a non-negative integer.
4. (4).
Define the pullback of $(T,v_{0},w)$ to be the triple
$(\tilde{T},v_{0},\tilde{w})$ with a $\mathbb{Z}_{\alpha}$-action such that
$T=\tilde{T}/\mathbb{Z}_{\alpha}$. By composing $\tilde{w}$ with $pr_{1}$, the
projection to the first coordinate, $(\tilde{T},v_{0},pr_{1}\circ\tilde{w})$
defines a bubble tree with total weight $k$.
Denote the space of $\mathcal{O}$-bubble-trees by $\mathcal{T}_{\mathcal{O}}$.
The pullback of an $\mathcal{O}$-bubble-tree on $M/\mathbb{Z}_{\alpha}$ is a
bubble tree on $M$. The difference between $\mathcal{O}$-bubble-trees and
bubble trees is that the $w$ map in $\mathcal{O}$-bubble-trees gives more
information. When there is no group action, the weight-map $w$ in a bubble
tree assigns each vertex simply an integer since $SU(2)$ or $SO(3)$-bundles
are characterised by $c_{2}$ or $p_{1}$ when $w_{2}$ is fixed. For an
$\mathcal{O}$-bubble-tree $(T,v_{0},w)$, to characterise
$\mathbb{Z}_{2\alpha}$(or $\mathbb{Z}_{\alpha}$)-equivariant bundles over $M$
and $S^{4}$’s, the weight-map $w$ contains more information, as defined in
Definition 7.2.
In the following discussion, we also treat $\mathcal{O}$-bubble-trees as the
trees pulled back to $M$, then there is an obvious
$\mathbb{Z}_{\alpha}$-action on each $\mathcal{O}$-bubble-tree on $M$.
###### Examples 7.3.
Suppose $\alpha=6,a_{1}=2,a_{2}=3,b_{1}=b_{2}=1$ and the space
$X=M/\mathbb{Z}_{\alpha}$ has two singularities whose neighbourhoods are
$cL(a_{1},b_{1})$ and $cL(a_{2},b_{2})$. The branched covering space $M$ has a
$\mathbb{Z}_{6}$-action on it such that the action is free away from 5 points
$x_{1},\dots,x_{5}$ ($x_{1},x_{2},x_{3}$ are in the same
$\mathbb{Z}_{6}$-orbit with stabiliser $\mathbb{Z}_{2}$ and $x_{4},x_{5}$ are
in the same $\mathbb{Z}_{6}$-orbit with stabiliser $\mathbb{Z}_{3}$), as shown
in Figure 11.
Figure 11:
Figure 12 is an $\mathcal{O}$-bubble-tree for
$\mathcal{O}=(k,(a_{i},b_{i},m_{i})_{i=1}^{2})$
Figure 12:
where $k=m(v_{0})+6m(v_{1})+2m(v_{2})+6m(v_{3})$. The right diagram in Figure
12 is the tree on $X$ and the left one is the tree pulled back to $M$. Figure
13 is another way to draw this tree. In Figure 13, $x_{4},x_{5}$ are defined
in Example 7.3; the points $y_{1},\dots,y_{6}\in
M\setminus\\{x_{1},\dots,x_{5}\\}$ are in the same $\mathbb{Z}_{6}$-orbit
(these six points are the preimage of some non-singular point $y\in X$ under
the branched covering map); $e^{1},e^{2},e^{3}$ are in the same orbit of the
induced $\mathbb{Z}_{3}$-action on the bubble.
Figure 13:
Note that given the definition of $w(v_{1})$ and $w(v_{2})$, the vertex
$v_{1}$ corresponds to a bubble $S^{4}$ and has to be attached to non-singular
points in $X$. $v_{2}$ corresponds to a singular bubble
$S^{4}/\mathbb{Z}_{a_{2}}$ and has to be attached to the singular point of the
cone $cL(a_{2},b_{2})$.
###### Definition 7.4.
Given an $\mathcal{O}$-bubble-tree $T$ on $M$, it induces a unique (if it
exists) $\mathbb{Z}_{2\alpha}$ (or $\mathbb{Z}_{\alpha}$)-action on bundles
over $M$ and $S^{4}$’s, therefore also on $\mathcal{A}$, $\mathcal{G}$ and
$\mathcal{S}_{T}(M)$. The space of $\mathcal{O}$-invariant bubble tree
instantons, denoted as $\mathcal{S}_{T}^{\mathcal{O}}(M)$, is the
$\mathbb{Z}_{2\alpha}$(or $\mathbb{Z}_{\alpha}$)-invariant part of
$\mathcal{S}_{T}(M)$. In particular, if $T$ is the trivial tree, i.e., $T$ has
only one vertex, we denote the corresponding space of $\mathcal{O}$-invariant
bubble tree instantons by $\mathcal{M}^{\mathcal{O}}$.
###### Examples 7.5.
The following (drawn as a diagram in the original) is an example of an $\mathcal{O}$-invariant bubble tree instanton associated to the tree in Example 7.3, where $A_{0}$ is $\mathbb{Z}_{12}$- (or $\mathbb{Z}_{6}$-)invariant and $A_{1}$ is $\mathbb{Z}_{6}$- (or $\mathbb{Z}_{3}$-)invariant.
Denote the compactified $\mathcal{O}$-invariant moduli space and
$\mathbb{Z}_{\alpha}$-invariant moduli space by
$\overline{\mathcal{M}^{\mathcal{O}}}:=\bigcup_{T\in\mathcal{T}_{\mathcal{O}}}\mathcal{S}_{T}^{\mathcal{O}}(M)~{},~{}~{}\overline{\mathcal{M}}_{k,\mathbb{Z}_{\alpha}}:=\bigcup_{i\in
I}\overline{\mathcal{M}^{\mathcal{O}_{i}}},$
where $I$ is the index set defined in (5.5).
###### Definition 7.6.
Given an $\mathcal{O}$-bubble-tree $T$, the $\mathcal{O}$-invariant gluing
bundle $\textbf{GL}_{T}^{\mathcal{O}}(M)$ over
$\mathcal{S}_{T}^{\mathcal{O}}(M)$ is the $\mathbb{Z}_{2\alpha}$(or
$\mathbb{Z}_{\alpha}$)-invariant part of $\textbf{GL}_{T}(M)$.
Recall that the fibre of $\textbf{GL}_{T}(M)$ is
$Gl_{T}=\prod_{e\in E(T)}\mathbb{R}^{4}/\mathbb{Z}_{2}$
where $E(T)$ is the set of edges in $T$. Since $\mathbb{Z}_{\alpha}\subset
U(1)$, by Proposition 3.4, the fibre of $\textbf{GL}_{T}^{\mathcal{O}}(M)$ is
$Gl_{T}^{\mathcal{O}}=\prod_{e\in
E_{1}(T)}\mathbb{R}^{4}/\mathbb{Z}_{2}\times\prod_{e\in
E_{2}(T)}\mathbb{R}^{2}/\mathbb{Z}_{2}$
where $E_{1}(T),E_{2}(T)$ are the subsets of edges in $T$ on $X$ whose
corresponding isotropy representation is trivial and non-trivial respectively.
Therefore $\textbf{GL}_{T}^{\mathcal{O}}(M)$ is a vector bundle (up to a finite group action) over $\mathcal{S}_{T}^{\mathcal{O}}(M)$. As in the no-group-action version, suppose $T^{\prime}\in\mathcal{T}_{\mathcal{O}}$ is the contraction of $T$ at edges $e_{1},\dots,e_{n}$; we define
$\textbf{GL}_{T,T^{\prime}}^{\mathcal{O}}(M)$ to be the sub-bundle of
$\textbf{GL}_{T}^{\mathcal{O}}(M)$ whose fibres consist of points having non-
zero gluing parameters corresponding to $e_{1},\dots,e_{n}$ and zero gluing
parameters corresponding to other edges. Sometimes we omit $M$ in
$\mathcal{S}_{T}^{\mathcal{O}}(M)$, $\textbf{GL}_{T}^{\mathcal{O}}(M)$,
$\textbf{GL}_{T,T^{\prime}}^{\mathcal{O}}(M)$ when it is clear from the
context.
Define
$\Gamma_{T}^{\mathcal{O}}:=\prod_{v\in V_{1}(T)}SO(3)\times\prod_{v\in
V_{2}(T)}U(1)$
where $V_{1}(T)$ is the subset of ghost vertices in $T$ on $X$ whose
corresponding bubble is $S^{4}$ or $S^{4}/\mathbb{Z}_{a_{i}}$ with trivial
isotropy representations on the fibre over its south and north poles.
$V_{2}(T)$ is the complement of $V_{1}(T)$ in the set of ghost vertices.
Given the definition of the space of $\mathcal{O}$-invariant bubble tree
instantons $\mathcal{S}_{T}^{\mathcal{O}}$, and the $\mathcal{O}$-equivariant
gluing bundle $\textbf{GL}_{T}^{\mathcal{O}}$, we define the gluing map and
orbifold structure on $\overline{\mathcal{M}^{\mathcal{O}}}(M)$ away from the
ghost strata.
First consider the simplest case: consider the tree $T$ on $X$ with two
vertices $v_{0},v_{1}$ and one edge. Fix an $l\in\\{1,\dots,n\\}$, let
$w(v_{0})=(k_{0},(a_{i},b_{i},m_{i}^{0})_{i=1}^{n})~{},~{}~{}m_{i}^{0}=m_{i}\text{
for }i\neq l,$ $w(v_{1})=(k_{1},(a_{l},b_{l}),(m_{l}^{0},m_{l})).$
In this case the point assigned to the edge is the singular point of the cone
$cL(a_{l},b_{l})\subset X$. For the gluing map associated to this tree we have
the following theorem.
###### Theorem 7.7.
Let $\mathcal{O}_{0}=\\{k_{0},(a_{i},b_{i},m_{i}^{0})_{i=1}^{n}\\}$,
$l\in\\{1,\dots,n\\}$ and $U_{1}\subset\mathcal{M}^{\mathcal{O}_{0}}(M)$,
$U_{2}^{b}\subset\mathcal{M}_{(k_{1},m_{l}^{0},m_{l})}^{\mathbb{Z}_{a_{l}},b}(S^{4})$
be open neighbourhoods. Let $I\cong U(1)$ if $m_{l}^{0}\not\equiv 0$ mod $a_{l}$ and $I\cong SO(3)$ if $m_{l}^{0}\equiv 0$ mod $a_{l}$. Then by the equivariant Taubes gluing construction in Section 3, we have the following map
gluing connections at the singular point of the cone $cL(a_{l},b_{l})\subset
X$
$\Psi_{T}:U_{1}\times U_{2}^{b}\times
I\times(0,\epsilon)\to\mathcal{M}^{\mathcal{O}}(M),$
where $\mathcal{O}=\\{k_{0}+\frac{\alpha
k_{1}}{a_{l}},(a_{i},b_{i},m_{i})_{i=1}^{n}\\}$ and $m_{i}=m_{i}^{0}$ for
$i\neq l$. $\Psi_{T}$ is a diffeomorphism onto its image.
###### Proof.
Since $\Psi_{T}$ is the restriction of the map in Theorem 3.8 in [C02] to the
$\mathbb{Z}_{\alpha}$-invariant part, it suffices to check the dimensions of
$U_{1}\times U_{2}^{b}\times I\times(0,\epsilon)$ and
$\mathcal{M}^{\mathcal{O}}(M)$ are equal. Take an $SU(2)$-bundle as an
example, with $k_{0}$ its second Chern number.
$dim(U_{1})=\frac{8k_{0}}{\alpha}-3(1+b_{2}^{+})+n^{\prime}+\sum_{i=1}^{n}\frac{2}{a_{i}}\sum_{j=1}^{a_{i}-1}\cot\left(\frac{\pi
jb_{i}}{a_{i}}\right)\cot\left(\frac{\pi
j}{a_{i}}\right)\sin^{2}\left(\frac{\pi jm_{i}^{0}}{a_{i}}\right)$
$dim(U_{2}^{b})=\frac{8k}{a_{l}}-4+n^{\prime\prime}+\frac{2}{a_{l}}\sum_{j=1}^{a_{l}-1}\cot\left(\frac{\pi
jb_{l}}{a_{l}}\right)\cot\left(\frac{\pi
j}{a_{l}}\right)\left(\sin^{2}\left(\frac{\pi
jm_{l}}{a_{l}}\right)-\sin^{2}\left(\frac{\pi
jm_{l}^{0}}{a_{l}}\right)\right)$ $\displaystyle
dim(\mathcal{M}^{\mathcal{O}}(M))$ $\displaystyle=$
$\displaystyle\left(\frac{8k_{0}}{\alpha}+\frac{8k}{a_{l}}\right)-3(1+b_{2}^{+})+n^{\prime\prime\prime}$
$\displaystyle+\sum_{i\neq
l}\frac{2}{a_{i}}\sum_{j=1}^{a_{i}-1}\cot\left(\frac{\pi
jb_{i}}{a_{i}}\right)\cot\left(\frac{\pi
j}{a_{i}}\right)\sin^{2}\left(\frac{\pi jm_{i}^{0}}{a_{i}}\right)$
$\displaystyle+\frac{2}{a_{l}}\sum_{j=1}^{a_{l}-1}\cot\left(\frac{\pi
jb_{l}}{a_{l}}\right)\cot\left(\frac{\pi
j}{a_{l}}\right)\sin^{2}\left(\frac{\pi jm_{l}}{a_{l}}\right)$
where $n^{\prime}$ is the number of indices $i$ with $m_{i}^{0}\not\equiv
0\text{ mod }a_{i}$, $n^{\prime\prime}$ is the number of elements of
$\\{m_{l},m^{0}_{l}\\}$ that are $\not\equiv 0\text{ mod }a_{l}$, and
$n^{\prime\prime\prime}$ is the number of indices $i$ with $m_{i}\not\equiv 0\text{ mod }a_{i}$.
We check $n^{\prime\prime\prime}=-4+n^{\prime}+n^{\prime\prime}+dim(I)+1$.
If $m_{l}^{0}\not\equiv 0$, $m_{l}\not\equiv 0$,
$n^{\prime\prime\prime}-n^{\prime}=0~{},~{}~{}n^{\prime\prime}=2~{},~{}~{}dim(I)=1.$
If $m_{l}^{0}\equiv 0$, $m_{l}\not\equiv 0$,
$n^{\prime\prime\prime}-n^{\prime}=1~{},~{}~{}n^{\prime\prime}=1~{},~{}~{}dim(I)=3.$
If $m_{l}^{0}\not\equiv 0$, $m_{l}\equiv 0$,
$n^{\prime\prime\prime}-n^{\prime}=-1~{},~{}~{}n^{\prime\prime}=1~{},~{}~{}dim(I)=1.$
If $m_{l}^{0}\equiv 0$, $m_{l}\equiv 0$,
$n^{\prime\prime\prime}-n^{\prime}=0~{},~{}~{}n^{\prime\prime}=0~{},~{}~{}dim(I)=3.$
∎
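The case analysis in the proof above is pure bookkeeping; a short Python check (illustrative only) of the identity $n^{\prime\prime\prime}=-4+n^{\prime}+n^{\prime\prime}+\dim(I)+1$ in all four cases:

```python
# Check n''' = -4 + n' + n'' + dim(I) + 1, i.e. n''' - n' = n'' + dim(I) - 3,
# in the four cases of the proof. Each tuple is (n''' - n', n'', dim I).
cases = [
    (0, 2, 1),   # m_l^0 != 0, m_l != 0 (mod a_l)
    (1, 1, 3),   # m_l^0 == 0, m_l != 0
    (-1, 1, 1),  # m_l^0 != 0, m_l == 0
    (0, 0, 3),   # m_l^0 == 0, m_l == 0
]
for diff, n2, dim_i in cases:
    assert diff == -4 + n2 + dim_i + 1
```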
Now we state the result for general $\mathcal{O}$-bubble tree $T$.
###### Theorem 7.8.
For any $\mathcal{O}$-bubble tree $T$ and precompact subset
$U\subset\mathcal{S}_{T}^{\mathcal{O}}(M)$, there exists $\epsilon>0$ such
that
$\Psi_{T}^{b}:(\textbf{GL}_{T}^{\mathcal{O}}(\epsilon)|_{U})/\Gamma_{T}^{\mathcal{O}}\to\overline{\mathcal{M}^{\mathcal{O}}}$
is a local diffeomorphism from
$(\textbf{GL}_{T,T^{\prime}}^{\mathcal{O}}(\epsilon)|_{U})/\Gamma_{T}$ to
$\mathcal{S}_{T^{\prime}}^{\mathcal{O}}$ for any $\mathcal{O}$-bubble tree
$T^{\prime}$ satisfying $T^{\prime}>T$.
The proof is the same as that of the corresponding theorem for manifolds
without group action. Then we get a set
$\mathcal{D}(X,\mathcal{O})=\\{\Psi_{T}^{b}(\textbf{GL}_{T}^{\mathcal{O}}(\epsilon)|_{U})/\Gamma_{T}^{\mathcal{O}},(\Psi_{T}^{b})^{-1}\\}_{T\in\mathcal{T}_{\mathcal{O}}},$
which gives an orbifold structure away from the ghost strata. Just like the
case with no $\mathbb{Z}_{\alpha}$-action, we can perturb the map
$\Psi_{T}^{b}$ to be $\overline{\Psi_{T}^{b}}$ so that this atlas is smooth,
apply flip resolutions to resolve ghost strata and get
$\underline{\mathcal{M}}^{\mathcal{O}}$.
###### Theorem 7.9.
Suppose the $\mathbb{Z}_{\alpha}$-action on $M$ satisfies Condition
1.1 and $P\to M$ is an $SU(2)$ (or $SO(3)$) bundle with $c_{2}$ (or $p_{1}$) $=k$.
Let the bubble tree compactification of the irreducible
$\mathbb{Z}_{\alpha}$-invariant instanton moduli space be
$\underline{\mathcal{M}}_{k,\mathbb{Z}_{a}}:=\bigsqcup_{i\in
I}\underline{\mathcal{M}}^{\mathcal{O}_{i}},$
where $I$ is the index set defined in (5.5). Then each component
$\underline{\mathcal{M}}^{\mathcal{O}_{i}}$ is an orbifold away from the
stratum with trivial connection as the background connection of the bubble
tree instantons. The dimension of the component
$\underline{\mathcal{M}}^{\mathcal{O}}$ with
$\mathcal{O}=\\{k,(a_{i},b_{i},m_{i})_{i=1}^{n}\\}$ is, for $SU(2)$ case,
$\frac{8c_{2}}{\alpha}-3(1+b_{2}^{+})+n^{\prime}+\sum_{i=1}^{n}\frac{2}{a_{i}}\sum_{j=1}^{a_{i}-1}\cot\left(\frac{\pi
jb_{i}}{a_{i}}\right)\cot\left(\frac{\pi
j}{a_{i}}\right)\sin^{2}\left(\frac{\pi jm_{i}}{a_{i}}\right),$
where $n^{\prime}$ is the number of $\\{m_{i}~{}|~{}m_{i}\not\equiv 0\text{
mod }a_{i}\\}$. For the $SO(3)$ case, replace $8c_{2}$ by $-2p_{1}$.
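The rotation-number sum in this dimension formula is straightforward to evaluate numerically. A small Python helper (illustrative; the sample values $(a,b,m)=(3,1,1)$ are chosen only to exercise the formula, for which the sum works out to $1/3$):

```python
import math

def cot(x):
    return math.cos(x) / math.sin(x)

def fs_term(a, b, m):
    """(2/a) * sum_{j=1}^{a-1} cot(pi j b / a) cot(pi j / a) sin^2(pi j m / a),
    the rotation-number contribution in the dimension formula."""
    return (2.0 / a) * sum(
        cot(math.pi * j * b / a) * cot(math.pi * j / a)
        * math.sin(math.pi * j * m / a) ** 2
        for j in range(1, a)
    )

# Sample values (a, b, m) = (3, 1, 1): the sum evaluates to 1/3.
assert abs(fs_term(3, 1, 1) - 1.0 / 3.0) < 1e-9
```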
## 8 An example: instanton moduli space on $SO(3)$-bundle over
$\mathbb{CP}^{2}$ and weighted complex projective space
$\mathbb{CP}^{2}_{(r,s,t)}$
For any $SO(3)$-bundle $P$ on $\mathbb{CP}^{2}$, the second Stiefel-Whitney
class $w_{2}(P)$ is either 0 or 1. In this section we describe the ASD moduli
space of $SO(3)$-bundles over $\mathbb{CP}^{2}$ with $w_{2}=1$. Choosing a
$Spin^{\mathbb{C}}(3)$ structure on $P$, i.e. a $U(2)$-bundle $E$ lifting $P$,
we have
$w_{2}(P)=c_{1}(E)\text{ mod }2~{},~{}~{}p_{1}(P)=c_{1}^{2}(E)-4c_{2}(E).$
Since $w_{2}^{2}=p_{1}$ mod 2, the $SO(3)$-bundles on $\mathbb{CP}^{2}$ are
classified by $p_{1}$. If $P$ admits some ASD connection, we have
$p_{1}(P)=-3,-7,-11,\dots$, then along with the fact that
$b_{2}^{-}(\mathbb{CP}^{2})=0$, the bundle $P$ is not reducible. Hence there
is no reducible connection on $P$ if it admits an ASD connection.
The bundle with $p_{1}=-3$ can be constructed as the subbundle of
$\Lambda^{2}(\mathbb{CP}^{2})$ consisting of all anti-self-dual 2-forms,
denoted as $\Lambda^{2,-}(\mathbb{CP}^{2})$. By Section 4.1.4 of [DK90], the
ASD moduli space of this bundle is a single point: the standard connection
induced by the Fubini-Study metric.
The first subsection calculates $\mathcal{M}_{p_{1}=-7}(\mathbb{CP}^{2})$ and
its compactification, in which case the Uhlenbeck compactification and bubble
tree compactification coincide. The second subsection introduces a cyclic
group action on $\mathbb{CP}^{2}$, and calculates the
$\mathbb{Z}_{a}$-invariant moduli space
$\mathcal{M}_{p_{1}=-7}^{\mathbb{Z}_{a}}(\mathbb{CP}^{2})$ and its
compactification.
Theorems 8.1 and 8.2 are the main tools used in this section.
###### Theorem 8.1.
(Proposition 6.1.13 in [DK90]) If $V$ is an $SO(3)$ bundle over a compact,
simply connected, Kähler surface $X$ with $w_{2}(V)$ the reduction of a (1,1)
class $c$, there is a natural one-to-one correspondence between the moduli
space $\mathcal{M}^{*}(V)$ of irreducible ASD connections on $V$ and
isomorphism classes of stable holomorphic rank-two bundles $\mathcal{E}$ with
$c_{1}(\mathcal{E})=c$ and $c_{2}(\mathcal{E})=\frac{1}{4}(c^{2}-p_{1}(V))$.
Since $\mathbb{CP}^{2}$ is a Kähler surface and the generator of
$H^{2}(\mathbb{CP}^{2};\mathbb{Z})\cong\mathbb{Z}$ is a type (1,1) class which
is also an integral lift of the generator of
$H^{2}(\mathbb{CP}^{2};\mathbb{Z}_{2})=\mathbb{Z}_{2}$, the theorem above
implies the following correspondence
$\mathcal{M}_{p_{1}=1-4k}(\mathbb{CP}^{2})\leftrightarrow\left\\{\begin{tabular}[]{@{}c@{}}isomorphism
classes of\\\ stable holomorphic rank-2 bundles\\\ over $\mathbb{CP}^{2}$ with
$c_{1}=-1,c_{2}=k$\end{tabular}\right\\}.$
Here we choose a generator of the cohomology ring
$H^{*}(\mathbb{CP}^{2};\mathbb{Z})$ so that the Chern classes of bundles over
$\mathbb{CP}^{2}$ can be expressed as integers.
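The Chern-class bookkeeping behind this correspondence is the relation $c_{2}(\mathcal{E})=\frac{1}{4}(c^{2}-p_{1}(V))$ from Theorem 8.1; with $c=-1$ and $p_{1}=1-4k$ this gives $c_{2}=k$ (a one-line arithmetic check, illustrative only):

```python
# c_2 = (c^2 - p_1)/4 with c = c_1 = -1 and p_1 = 1 - 4k gives c_2 = k.
for k in range(1, 10):
    p1 = 1 - 4 * k
    assert ((-1) ** 2 - p1) / 4 == k
```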
###### Theorem 8.2.
(Theorem 4.1.15 of Chapter 2 in [OSS88]) Let $V,H,K$ be complex vector spaces
with dimension $3,k-1,k$ respectively. There is a one-to-one correspondence
$\left\\{\begin{tabular}[]{@{}c@{}}isomorphism classes of\\\ stable
holomorphic rank-2 bundle\\\ over $\mathbb{CP}^{2}$ with
$c_{1}=-1,c_{2}=k$\end{tabular}\right\\}\leftrightarrow P/G$
where $G=GL(H)\times O(K)$ and
$P=\left\\{\alpha\in
Hom(V^{*},Hom(H,K))~{}\bigg{|}~{}\begin{tabular}[]{@{}c@{}}($F1$) $\forall
h\neq 0$, the map $\alpha_{h}:V^{*}\to K$ defined by\\\
$\alpha_{h}(z):=\alpha(z)(h)$ has rank $\geq 2$.\\\ (F2)
$\alpha(z^{\prime})^{t}\alpha(z^{\prime\prime})=\alpha(z^{\prime\prime})^{t}\alpha(z^{\prime})$
for all $z^{\prime},z^{\prime\prime}\in V^{*}$.\end{tabular}\right\\}$
and $G$ acts on $P$ by $((g,\phi)\cdot\alpha)(z)=\phi\circ\alpha(z)\circ g^{-1}.$
### 8.1 $\mathcal{M}_{p_{1}=-7}(\mathbb{CP}^{2})$ and its bubble tree
compactification
In Theorem 8.2, $P$ is a subspace of $Hom(V^{*},Hom(H,K))$ satisfying
conditions (F1) and (F2). (F2) is a closed condition while (F1) is an open
condition, hence $P/G$ fails to be compact. In this section we compactify
$P/G$ for $k=2$.
The space $P/G$ for $k=2$ is described in Section 4.3 of Chapter 2 of [OSS88].
In this case, $dim_{\mathbb{C}}H=1,dim_{\mathbb{C}}K=2$, and the condition
(F2) is automatically true, so
$P=\\{\alpha\in M_{2,3}(\mathbb{C})~{}|~{}\alpha\text{ has rank }\geq 2\\},$
$G=O(2,\mathbb{C})\times\mathbb{C}^{\times},$
where $M_{2,3}(\mathbb{C})$ is the space of $2\times 3$ complex matrices and
$\mathbb{C}^{\times}$ is the group of non-zero complex numbers. Let $S_{3}^{2}$ be
the space of $3\times 3$ symmetric matrices with rank 2. Then there is an
isomorphism
$\displaystyle P/G$ $\displaystyle\to$ $\displaystyle
S_{3}^{2}/\mathbb{C}^{\times}=:\mathbb{P}(S_{3}^{2})$
$\displaystyle~{}[\alpha]$ $\displaystyle\mapsto$
$\displaystyle[\alpha^{t}\alpha].$
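This map is well defined because the $G$-action changes $\alpha^{t}\alpha$ only by a scalar: $\phi^{t}\phi=I$ for $\phi\in O(2,\mathbb{C})$, so $(\phi\alpha\lambda^{-1})^{t}(\phi\alpha\lambda^{-1})=\lambda^{-2}\alpha^{t}\alpha$. A quick numerical sanity check (numpy, illustrative only; a complex-angle rotation is used to produce an element of $O(2,\mathbb{C})$):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))

# A complex orthogonal matrix: rotation by a complex angle, so phi^T phi = I.
theta = 0.7 + 0.3j
phi = np.array([[np.cos(theta), np.sin(theta)],
                [-np.sin(theta), np.cos(theta)]])
lam = 1.5 - 0.4j  # element of C^x

assert np.allclose(phi.T @ phi, np.eye(2))  # phi lies in O(2, C)

beta = phi @ alpha / lam      # the G-action (phi, lam) . alpha
S_alpha = alpha.T @ alpha     # 3x3 complex symmetric, rank 2
S_beta = beta.T @ beta

# The two symmetric matrices differ by the scalar lam^{-2},
# so they define the same point of P(S_3^2).
assert np.allclose(S_beta, S_alpha / lam**2)
```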
There is an obvious way to compactify $\mathbb{P}(S_{3}^{2})$: By adding the
projective space of $S_{3}^{1}$, the $3\times 3$ symmetric matrices with rank
1, we get $\mathbb{P}(S_{3}\setminus\\{0\\})$. However, this is different from
the bubble tree compactification.
To compactify $P/G$ as the bubble tree compactification, we need to figure out
which points in $P/G$ correspond to connections near lower strata in
$\underline{\mathcal{M}}_{p_{1}=-7}(\mathbb{CP}^{2})$ via the one-to-one
correspondence
$\mathcal{M}_{p_{1}=-7}(\mathbb{CP}^{2})\leftrightarrow P/G.$
To do this we first introduce some concepts in algebraic geometry.
Recall that any topological rank $r$ complex vector bundle $E$ on
$\mathbb{CP}^{1}$ with first Chern class $c_{1}$ has the form
$E\stackrel{{\scriptstyle
topo}}{{\cong}}\mathcal{O}_{\mathbb{CP}^{1}}(c_{1})\oplus\underline{\mathbb{C}}^{r-1},$
where
$\mathcal{O}_{\mathbb{CP}^{n}}(c_{1})=\mathcal{O}_{\mathbb{CP}^{n}}(1)^{\otimes
c_{1}}$, $\mathcal{O}_{\mathbb{CP}^{n}}(1)$ is dual to the canonical bundle
$\mathcal{O}_{\mathbb{CP}^{n}}(-1)$ on $\mathbb{CP}^{n}$,
$\underline{\mathbb{C}}^{r-1}$ is the trivial rank $r-1$ complex vector bundle
on $\mathbb{CP}^{1}$. By the theorem of Grothendieck (ref. Theorem 2.1.1 in
Chapter 1 of [OSS88]), given any holomorphic structure on $E$, it has a unique
form
$E\stackrel{{\scriptstyle
holo}}{{\cong}}\mathcal{O}(a_{1})\oplus\dots\oplus\mathcal{O}(a_{r}),$
where
$a_{1},\dots,a_{r}\in\mathbb{Z},~{}a_{1}\geq\dots\geq
a_{r},~{}a_{1}+\dots+a_{r}=c_{1}.$
Let $E\to\mathbb{CP}^{2}$ be any rank-$r$ holomorphic complex vector bundle,
$L$ be any complex projective line in $\mathbb{CP}^{2}$, i.e.,
$L\cong\mathbb{CP}^{1}$ and $L\subset\mathbb{CP}^{2}$, then
$E|_{L}\stackrel{{\scriptstyle
holo}}{{\cong}}\mathcal{O}(a_{1}(L))\oplus\dots\oplus\mathcal{O}(a_{r}(L)).$
We get a map
$\displaystyle a_{E}:\\{\text{complex projective lines in }\mathbb{CP}^{2}\\}$
$\displaystyle\to$ $\displaystyle\mathbb{Z}^{r}$ $\displaystyle L$
$\displaystyle\mapsto$ $\displaystyle a_{E}(L):=(a_{1}(L),\dots,a_{r}(L)).$
$a_{E}(L)$ is called the splitting type of $E$ on $L$. Define a total order on
$\mathbb{Z}^{r}$: $(a_{1},\dots,a_{r})<(b_{1},\dots,b_{r})$ if the first non-
zero $b_{i}-a_{i}$ is positive.
###### Definition 8.3.
The generic splitting type of $E$ is
$\underline{a}_{E}:=\inf_{L\subset\mathbb{CP}^{2},L\cong\mathbb{CP}^{1}}a_{E}(L).$
The set of jump lines of $E$ is
$S_{E}:=\\{L~{}|~{}L\subset\mathbb{CP}^{2},L\cong\mathbb{CP}^{1},a_{E}(L)>\underline{a}_{E}\\}.$
###### Examples 8.4.
1. (i).
The stable holomorphic bundle corresponding to the point in
$\mathcal{M}_{p_{1}=-3}(\mathbb{CP}^{2})$ is
$E:=T^{*}\mathbb{CP}^{2}\otimes\mathcal{O}_{\mathbb{CP}^{2}}(1)$. The
stability of $E$ is proved in Theorem 1.3.2 in Chapter 2 of [OSS88]. Its Chern
classes are
$c_{1}(E)=c_{1}(T^{*}\mathbb{CP}^{2})+2c_{1}(\mathcal{O}(1))=-1,$
$c_{2}(E)=c_{2}(T^{*}\mathbb{CP}^{2})+c_{1}(T^{*}\mathbb{CP}^{2})c_{1}(\mathcal{O}(1))+c_{1}(\mathcal{O}(1))^{2}=1.$
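These values follow from $c(T\mathbb{CP}^{2})=(1+h)^{3}$, which gives $c_{1}(T^{*}\mathbb{CP}^{2})=-3$ and $c_{2}(T^{*}\mathbb{CP}^{2})=3$ in terms of the chosen generator, together with the twisting formulas for a rank-2 bundle; the arithmetic in a few lines (illustrative only):

```python
# Rank-2 twisting: c1(E (x) L) = c1(E) + 2 c1(L),
#                  c2(E (x) L) = c2(E) + c1(E) c1(L) + c1(L)^2.
c1_T, c2_T = -3, 3   # Chern classes of T*CP^2, from c(TCP^2) = (1+h)^3
c1_L = 1             # c1(O(1))
assert c1_T + 2 * c1_L == -1                  # c1(E) = -1
assert c2_T + c1_T * c1_L + c1_L ** 2 == 1    # c2(E) = 1
```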
The $GL(3,\mathbb{C})$-action on $\mathbb{C}^{3}$ induces an action on
$\mathbb{CP}^{2}$ and $E$ such that $E\stackrel{{\scriptstyle
holo}}{{\cong}}t^{*}E$ for any $t\in GL(3,\mathbb{C})$. Therefore the
splitting type of $E$ on any projective line $L$ in $\mathbb{CP}^{2}$ is the
same, i.e., the set of jump lines $S_{E}=\emptyset$. The splitting type of $E$
is $(0,-1)$.
2. (ii).
Set $E$ to be the stable holomorphic rank 2 bundle corresponding to
$[\alpha]\in P/G$. Then $\alpha$ can be expressed as a map from $V^{*}$ to
$\mathbb{C}^{2}$ with rank 2, where $V$ is a 3-dimensional complex vector
space. For any element $z\in V^{*}$, it defines a complex plane $ker(z)$ in
$V$ and thus a projective line $L_{z}$ in $\mathbb{P}(V)$. From Section 4.3 in
Chapter 2 of [OSS88], $L_{z}$ is a jump line if and only if $\alpha(z)=0$.
There is only one such $L_{z}$ since $\alpha$ has rank 2. For example, if
$\alpha=\begin{bmatrix}1&0&0\\\ 0&1&0\end{bmatrix}$
we have $z=(0,0,1)^{t}$ and $L_{z}=\\{[z_{0},z_{1},0]\\}$ is the projective
line defined by $z$. $\blacksquare$
Now we explain the relation between jump lines and ASD connections.
Introduce the following notations:
$P_{-3}(\mathbb{CP}^{2})$ : the $SO(3)$-bundle $\Lambda^{2,-}(\mathbb{CP}^{2})$ on $\mathbb{CP}^{2}$
$P_{1}(S^{4})$ : the $SO(3)$-bundle on $S^{4}$ with $p_{1}=-4$
$A_{0}$ : the point in $\mathcal{M}(P_{-3}(\mathbb{CP}^{2}))$
$A_{1}$ : the point in $\mathcal{M}^{b}(P_{1}(S^{4}))$
$E_{-1,2}$ : the $U(2)$-bundle on $\mathbb{CP}^{2}$ with $c_{1}=-1,c_{2}=2$
For any $z\in\mathbb{CP}^{2}$ and gluing parameter $\rho$, the glued ASD
connection $\Psi(A_{0},A_{1},z,\rho)$ induces a holomorphic structure
$\mathcal{E}$ on $E_{-1,2}$. From the example above we know that there is a
unique jump line of $\mathcal{E}$.
###### Proposition 8.5.
Using the notations above, the jump line of $\mathcal{E}$ is the projective
line $L_{z}$ defined by $z$.
###### Proof.
Without loss of generality, assume $z=[1,0,0]\in\mathbb{CP}^{2}$. There is an
$SU(2)$ action on $\mathbb{CP}^{2}$ induced by the action of the group
$\begin{bmatrix}1&0\\\ 0&SU(2)\end{bmatrix}\subset SU(3)$
on $\mathbb{C}^{3}$. This action fixes $z=[1,0,0]$. On the open set
$\\{[1,z_{1},z_{2}]\\}\subset\mathbb{CP}^{2}$, the Fubini-Study metric is
$\omega_{FS}=\sqrt{-1}\partial\bar{\partial}\log(1+|z_{1}|^{2}+|z_{2}|^{2}).$
It is $U(3)$-invariant and thus $SU(2)$-invariant. The group $SU(2)$ acts on
$S^{4}\cong\mathbb{C}^{2}\cup\\{\infty\\}$ in the obvious way.
Let $A$ be the glued connection $\Psi(A_{0},A_{1},z,\rho)$ on
$P_{-7}(\mathbb{CP}^{2})$, the $SO(3)$-bundle on $\mathbb{CP}^{2}$ with
$p_{1}=-7$. Given any element $g\in SU(2)$, lift $g$ to
$\tilde{g}_{1},\tilde{g}_{2}$, automorphisms on the bundle
$P_{-3}(\mathbb{CP}^{2})$ and $P_{1}(S^{4})$ covering $g$ respectively. By
composing $\tilde{g}_{1}$ with a suitable gauge transformation, we can make
$\tilde{g}_{1},\tilde{g}_{2}$ $\rho$-equivariant when restricted to
$P_{-3}(\mathbb{CP}^{2})|_{z}$ and
$P_{1}(S^{4})|_{\infty}$ respectively. Therefore $\tilde{g}_{1},\tilde{g}_{2}$
induce an automorphism $\tilde{g}$ on $P_{-7}(\mathbb{CP}^{2})$ covering $g$.
Since $A_{0},A_{1}$ are $SU(2)$-invariant, the glued connection $A$ is gauge
equivalent to $\tilde{g}\cdot A$. So the holomorphic structure $\mathcal{E}$
on $E_{-1,2}$ induced by $A$ satisfies
$\mathcal{E}|_{L}\stackrel{{\scriptstyle holo}}{{\cong}}\mathcal{E}|_{g\cdot
L}$
for any $g\in SU(2)$ and projective line $L$ in $\mathbb{CP}^{2}$. Since there
is only one jump line on $\mathcal{E}$, this jump line has to be invariant
under $SU(2)$-action, i.e., the projective line $\\{[0,z_{1},z_{2}]\\}$. ∎
To sum up, when gluing $A_{1}$ on $S^{4}$ to $A_{0}$ on $\mathbb{CP}^{2}$ at
$z\in\mathbb{CP}^{2}$, the holomorphic structure on
$L_{z}\subset\mathbb{CP}^{2}$ is changed and $L_{z}$ becomes a jump line of
the holomorphic structure on $E_{-1,2}$ induced by $A$.
To compactify $\mathcal{M}_{p_{1}=-7}(\mathbb{CP}^{2})$, we first define a
fibre bundle
$\displaystyle\pi:P/G$ $\displaystyle\to$ $\displaystyle\mathbb{CP}^{2}$
$\displaystyle~{}[\alpha]$ $\displaystyle\mapsto$
$\displaystyle\pi(\alpha)=z,\text{ s.t. }\alpha(z)=0,$
i.e., $\pi$ maps $\alpha$ to the point in $\mathbb{CP}^{2}$ that defines the
jump line of the holomorphic structure induced by $\alpha$. Then the bubble
tree compactification (which is the same as the Uhlenbeck compactification in this
case) of $\mathcal{M}_{p_{1}=-7}(\mathbb{CP}^{2})$ is obtained by compactifying $P/G$
through the one-point compactification of each fibre of
$\pi:P/G\to\mathbb{CP}^{2}$:
$\underline{\mathcal{M}}(P_{-7}(\mathbb{CP}^{2}))=\mathcal{M}(P_{-7}(\mathbb{CP}^{2}))\cup\mathcal{M}(P_{-3}(\mathbb{CP}^{2}))\times\mathbb{CP}^{2}=P/G\cup\mathbb{CP}^{2}.$
Next we describe the neighbourhood of points in the lower stratum of the
compactified space. To do this we first introduce a notion called “jump line
of the second kind”, which was developed by K. Hulek in [H79]. The set of jump
lines only contains part of the information of a stable holomorphic structure
on $E_{-1,2}$, since points on the same fibre of $\pi:P/G\to\mathbb{CP}^{2}$
have the same jump line. We shall see that in the case $c_{1}=-1,c_{2}=2$, the set
of jump lines of the second kind determines the holomorphic structure completely.
###### Definition 8.6.
(Definition 3.2.1 of [H79]) Given a holomorphic complex vector bundle
$E\to\mathbb{CP}^{2}$, a jump line of the second kind is a projective line $L$
in $\mathbb{CP}^{2}$ defined by the equation $z=0$ such that
$h^{0}(E|_{L^{2}})\neq 0$, where $L^{2}$ is the subvariety in
$\mathbb{CP}^{2}$ defined by $z^{2}=0$. Denote the space of jump lines of the
second kind by $C_{E}$.
As claimed on page 242 of [H79], if $E$ corresponds to the element
$\alpha\in P/G$, then this space is given by the equation
$C_{E}=\\{z\in\mathbb{CP}^{2}~{}|~{}det(\alpha(z)^{t}\alpha(z))=0\\}.$
The relation between the space of jump lines and that of the second kind is
given by Proposition 9.1 of [H79]:
$S_{E}\subset\text{Singular locus of }C_{E}.$
In the case $k=2$, if $\alpha\in P/G$ is represented by a matrix
$\begin{bmatrix}a&b&c\\\ d&e&f\end{bmatrix}$, then
$\displaystyle C_{E}$ $\displaystyle=$
$\displaystyle\left\\{z=[z_{0},z_{1},z_{2}]\in\mathbb{CP}^{2}~{}|~{}az_{0}+bz_{1}+cz_{2}=\pm
i(dz_{0}+ez_{1}+fz_{2})\right\\}$ $\displaystyle=$ $\displaystyle
L_{z^{\prime}}\cup L_{z^{\prime\prime}},$ $\displaystyle S_{E}$
$\displaystyle=$ $\displaystyle L_{z^{\prime}}\cap
L_{z^{\prime\prime}}=\\{z~{}|~{}\alpha(z)=0\\},$
where $L_{z}$ is the projective line defined by $z$ and
$z^{\prime}=[a+id,b+ie,c+if]$, $z^{\prime\prime}=[a-id,b-ie,c-if]$.
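For the concrete $\alpha$ of Example 8.4(ii), these formulas can be checked numerically (numpy, purely illustrative): $z^{\prime}=[1,i,0]$, $z^{\prime\prime}=[1,-i,0]$, and $S_{E}=L_{z^{\prime}}\cap L_{z^{\prime\prime}}=\{[0,0,1]\}$, the point where $\alpha(z)=0$:

```python
import numpy as np

# alpha from Example 8.4(ii): jump line defined by z = [0, 0, 1].
alpha = np.array([[1, 0, 0],
                  [0, 1, 0]], dtype=complex)
zp = alpha[0] + 1j * alpha[1]    # z'  = [1,  i, 0]
zpp = alpha[0] - 1j * alpha[1]   # z'' = [1, -i, 0]

def det_C(z):
    """det(alpha(z)^t alpha(z)) -- vanishes exactly on C_E."""
    v = alpha @ z
    return v @ v  # alpha(z) is a 2-vector, so the 1x1 determinant is v.v

# A point on the line L_{z'} = {z | z'.z = 0} lies on C_E:
w = np.array([-1j, 1, 0.3])
assert abs(zp @ w) < 1e-12 and abs(det_C(w)) < 1e-12

# The intersection L_{z'} cap L_{z''} is [0, 0, 1], where alpha(z) = 0:
s = np.array([0, 0, 1], dtype=complex)
assert abs(zp @ s) < 1e-12 and abs(zpp @ s) < 1e-12
assert np.allclose(alpha @ s, 0)
```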
In summary, $C_{E}$ is the union of two different projective lines whose
intersection is a single point that defines the only jump line of $E$.
Therefore $C_{E}$ can be seen as an element in
$\frac{\mathbb{CP}^{2}\times\mathbb{CP}^{2}\setminus\triangle}{\Sigma^{2}}$
with $\triangle$ being the diagonal and $\Sigma^{2}$ being the permutation
group. Moreover, $C_{E}$ determines $E$ completely through the following map
$\displaystyle\frac{\mathbb{CP}^{2}\times\mathbb{CP}^{2}\setminus\triangle}{\Sigma^{2}}$
$\displaystyle\xrightarrow{\phi}$ $\displaystyle P/G$
$\displaystyle\\{[a,b,c],[d,e,f]\\}$ $\displaystyle\mapsto$
$\displaystyle\begin{bmatrix}\frac{a+d}{2}&\frac{b+e}{2}&\frac{c+f}{2}\\\
\frac{a-d}{2i}&\frac{b-e}{2i}&\frac{c-f}{2i}\end{bmatrix}.$
One can check that $\phi$ is well-defined and a one-to-one correspondence.
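One direction of this check is a two-line computation: if $\alpha$ has rows $u^{t}=\frac{1}{2}(p+q)^{t}$ and $v^{t}=\frac{1}{2i}(p-q)^{t}$, then $u+iv=p$ and $u-iv=q$ recover the unordered pair. A numeric round trip (numpy, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.standard_normal(3) + 1j * rng.standard_normal(3)  # [a, b, c]
q = rng.standard_normal(3) + 1j * rng.standard_normal(3)  # [d, e, f]

# phi({p, q}) as in the displayed formula: rows (p+q)/2 and (p-q)/(2i).
alpha = np.vstack([(p + q) / 2, (p - q) / (2j)])
u, v = alpha[0], alpha[1]

# Recovering the pair of points via {u + iv, u - iv}:
assert np.allclose(u + 1j * v, p)
assert np.allclose(u - 1j * v, q)
```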
Under this correspondence, the fibre bundle $\pi:P/G\to\mathbb{CP}^{2}$ is the
composition
$\displaystyle
P/G\to\frac{\mathbb{CP}^{2}\times\mathbb{CP}^{2}\setminus\triangle}{\Sigma^{2}}\to\mathbb{CP}^{2}$
$\displaystyle\begin{bmatrix}u^{t}\\\
v^{t}\end{bmatrix}\mapsto\\{u+iv,u-iv\\}\mapsto z.~{}~{}$
Here $z$ satisfies $(u+iv)^{t}z=(u-iv)^{t}z=0$. The fibre of $\pi$ is
$\frac{\mathbb{CP}^{1}\times\mathbb{CP}^{1}\setminus\triangle}{\Sigma^{2}}$.
There is another way to interpret the fibre of $\pi$. Note that
$\alpha_{1},\alpha_{2}$ are in the same fibre if and only if there exists
$A\in GL(2,\mathbb{C})$ such that
$A\cdot\alpha_{1}=\alpha_{2}.$
Therefore $\pi:P/G\to\mathbb{CP}^{2}$ is a fibre bundle with fibre
$GL(2,\mathbb{C})/G$, where $G=O(2,\mathbb{C})\times\mathbb{C}^{\times}$ acts
on $GL(2,\mathbb{C})$ by
$(\phi,\lambda)\cdot A=\phi\cdot A\cdot\lambda^{-1}.$
There are homeomorphisms
$\displaystyle GL(2,\mathbb{C})/G$ $\displaystyle\to$ $\displaystyle
S^{2}_{2\times 2}/\mathbb{C}^{\times}=\frac{S_{2\times
2}\setminus\\{0\\}}{\mathbb{C}^{\times}}-S^{1}_{2\times
2}/\mathbb{C}^{\times}~{}~{}\cong~{}~{}\mathbb{CP}^{2}-\mathbb{CP}^{1}$
$\displaystyle A$ $\displaystyle\mapsto$ $\displaystyle A^{t}A$
where $S^{i}_{2\times 2}$ is the space of symmetric $2\times 2$ matrices with
rank $i$ and $S_{2\times 2}$ is the space of symmetric $2\times 2$ matrices.
### 8.2 $\mathcal{M}_{\mathbb{Z}_{a}}(P_{-7}(\mathbb{CP}^{2}))$ and its
bubble tree compactification
###### Definition 8.7.
Suppose $r,s,t$ are pairwise coprime positive integers. Then the weighted
complex projective space $\mathbb{CP}^{2}_{(r,s,t)}$ is defined to be
$\mathbb{CP}^{2}_{(r,s,t)}=S^{5}/S^{1},$
where $S^{1}$ acts on $S^{5}$ by
$g\cdot(z_{0},z_{1},z_{2})=(g^{r}z_{0},g^{s}z_{1},g^{t}z_{2})$ for $g\in
S^{1}$.
Let $a=rst$, then
$\mathbb{Z}_{a}=\mathbb{Z}_{r}\oplus\mathbb{Z}_{s}\oplus\mathbb{Z}_{t}$.
Define a $\mathbb{Z}_{a}$-action on $\mathbb{CP}^{2}$ by
$(e^{2\pi i/r},e^{2\pi i/s},e^{2\pi i/t})\cdot[z_{0},z_{1},z_{2}]=[e^{2\pi
i/r}z_{0},e^{2\pi i/s}z_{1},e^{2\pi i/t}z_{2}],$
then $\mathbb{CP}^{2}/\mathbb{Z}_{a}$ is diffeomorphic to
$\mathbb{CP}^{2}_{(r,s,t)}$ through
$[z_{0},z_{1},z_{2}]\mapsto[z_{0}^{r},z_{1}^{s},z_{2}^{t}].$
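That this map is well defined on the quotient can be seen directly: the $r$-th, $s$-th and $t$-th powers kill the phases introduced by the $\mathbb{Z}_{a}$-action. A quick numerical check (Python, with arbitrary sample weights and coordinates, purely illustrative):

```python
import cmath
import numpy as np

r, s, t = 2, 3, 5  # pairwise coprime weights
z = np.array([0.4 + 0.1j, -0.7 + 0.3j, 0.2 - 0.5j])

def to_weighted(z):
    """The map [z0, z1, z2] -> [z0^r, z1^s, z2^t]."""
    return np.array([z[0] ** r, z[1] ** s, z[2] ** t])

# The generator of Z_a acting on homogeneous coordinates:
g = np.array([cmath.exp(2j * cmath.pi / r),
              cmath.exp(2j * cmath.pi / s),
              cmath.exp(2j * cmath.pi / t)])

# The phases are killed by the powers, so the images agree on the nose,
# hence the map descends to CP^2 / Z_a.
assert np.allclose(to_weighted(g * z), to_weighted(z))
```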
Hence we can investigate connections on $\mathbb{CP}^{2}_{(r,s,t)}$ through
$\mathbb{Z}_{a}$-invariant connections on $\mathbb{CP}^{2}$.
The singular points of the $\mathbb{Z}_{a}$-action on $\mathbb{CP}^{2}$ and
their stabilisers are:
$\Gamma_{[1,0,0]}=\Gamma_{[0,1,0]}=\Gamma_{[0,0,1]}=\mathbb{Z}_{a}$
$\Gamma_{[0,z_{1},z_{2}]}=\mathbb{Z}_{r}~{},~{}~{}z_{1}z_{2}\neq 0,$
$\Gamma_{[z_{0},0,z_{2}]}=\mathbb{Z}_{s}~{},~{}~{}z_{0}z_{2}\neq 0,$
$\Gamma_{[z_{0},z_{1},0]}=\mathbb{Z}_{t}~{},~{}~{}z_{0}z_{1}\neq 0,$
where $\Gamma_{z}$ is the stabiliser of $z$. The $\mathbb{Z}_{a}$-action on
$\mathbb{CP}^{2}$ naturally induces an action on the bundle
$P_{-3}(\mathbb{CP}^{2})=\Lambda^{2,-}(\mathbb{CP}^{2})$.
For simplicity, assume $s=t=1$ in the following calculation. Then
$g\in\mathbb{Z}_{a}$ acts on $P_{-3}(\mathbb{CP}^{2})|_{[1,0,0]}$ and fixes
this fibre. To see this, under the local chart $[1,z_{1},z_{2}]$ around
$[1,0,0]$, anti-self-dual 2-forms at $[1,0,0]$ have the following basis
$\left\\{\frac{i}{2}(dz_{1}\wedge d\bar{z}_{1}-dz_{2}\wedge
d\bar{z}_{2}),~{}\frac{1}{2}(dz_{1}\wedge d\bar{z}_{2}-d\bar{z}_{1}\wedge
dz_{2}),~{}\frac{i}{2}(dz_{1}\wedge d\bar{z}_{2}+d\bar{z}_{1}\wedge
dz_{2})\right\\}.$
Each basis element contains exactly one $dz_{i}$ and one $d\bar{z}_{j}$ factor,
so the phases introduced by the action cancel; hence the $\mathbb{Z}_{a}$-action
fixes the fibre $P_{-3}(\mathbb{CP}^{2})|_{[1,0,0]}$. Recall the $\mathbb{Z}_{a}$-invariant
instanton-1 moduli space of $S^{4}$ introduced in Section 6.2. Consider the
$\mathbb{Z}_{a}$-action defined on $S^{4}$, which is induced by the action
$e^{2\pi i/a}\cdot(z_{1},z_{2})=(e^{2\pi i/a}z_{1},e^{2\pi i/a}z_{2})$ on
$\mathbb{C}^{2}$, or equivalently, by the action $e^{2\pi i/a}\cdot z=e^{2\pi
i/a}z$ on $\mathbb{H}$. Then Example 6.2 lifts this action to $P_{1}(S^{4})$
such that $\mathbb{Z}_{a}$ fixes the fibre over the south pole
$P_{1}(S^{4})|_{\infty}$. That is to say, the $\mathbb{Z}_{a}$-action has
weight zero on $P_{-3}(\mathbb{CP}^{2})|_{[1,0,0]}$ and
$P_{1}(S^{4})|_{\infty}$.
Any gluing parameter $\rho:P_{-3}(\mathbb{CP}^{2})|_{[1,0,0]}\to
P_{1}(S^{4})|_{\infty}$ at the point $[1,0,0]$ is
$\mathbb{Z}_{a}$-equivariant, which implies that any connection obtained by
gluing $A_{0}\in\mathcal{M}(P_{-3}(\mathbb{CP}^{2}))$ and
$A_{1}\in\mathcal{M}_{1}^{b}(S^{4})$ at $[1,0,0]$ is
$\mathbb{Z}_{a}$-invariant.
The $\mathbb{Z}_{a}$-action on $P/G$ induced from $\mathbb{Z}_{a}$-action on
$\mathbb{CP}^{2}$ is
$e^{2\pi i/a}\cdot\begin{bmatrix}x_{1}&x_{2}&x_{3}\\\
x_{4}&x_{5}&x_{6}\end{bmatrix}=\begin{bmatrix}e^{2\pi i/a}x_{1}&x_{2}&x_{3}\\\
e^{2\pi i/a}x_{4}&x_{5}&x_{6}\end{bmatrix}.$
Hence the $\mathbb{Z}_{a}$-invariant part of
$P/G\cong\frac{\mathbb{CP}^{2}\times\mathbb{CP}^{2}\setminus\triangle}{\Sigma^{2}}$
is
$\left(\frac{\mathbb{CP}^{2}\times\mathbb{CP}^{2}\setminus\triangle}{\Sigma^{2}}\right)^{\mathbb{Z}_{a}}=\frac{\mathbb{CP}^{1}\times\mathbb{CP}^{1}\setminus\triangle}{\Sigma^{2}}\cup\left(\\{[1,0,0]\\}\times\mathbb{CP}^{1}\right),$
where $\mathbb{CP}^{1}\subset\mathbb{CP}^{2}$ is $\\{[0,z_{1},z_{2}]\\}$. The
first part of this union is the fibre over $[1,0,0]$ in the fibre bundle
$\pi:P/G\to\mathbb{CP}^{2}$, which corresponds to the space of holomorphic
structures on $E_{-1,2}$ with jump line $\\{[0,z_{1},z_{2}]\\}$. This fibre
$\pi^{-1}([1,0,0])$ contains the subspace of connections in
$\mathcal{M}(P_{-7}(\mathbb{CP}^{2}))$ obtained by gluing
$A_{0}\in\mathcal{M}(P_{-3}(\mathbb{CP}^{2}))$ and
$A_{1}\in\mathcal{M}_{1}^{b}(S^{4})$ at $[1,0,0]$.
To sum up, the compactified $\mathbb{Z}_{a}$-invariant ASD moduli space is
$\underline{\mathcal{M}}_{\mathbb{Z}_{a}}(P_{-7}(\mathbb{CP}^{2}))=\overline{\pi^{-1}([1,0,0])}^{\text{1pt}}\cup\mathbb{CP}^{1},$
where $\overline{\pi^{-1}([1,0,0])}^{\text{1pt}}$ is the one-point
compactification of $\pi^{-1}([1,0,0])$, and
$\mathbb{CP}^{1}\subset\mathbb{CP}^{2}$ is the projective line
$\\{[0,z_{1},z_{2}]\\}$.
###### Remark 8.8.
In this particular case, we have described gluing at the isolated fixed point
in $\mathbb{CP}^{2}$. Now consider those fixed points in the fixed sphere
$\\{[0,z_{1},z_{2}]\\}\subset\mathbb{CP}^{2}$. Suppose there exists a
$\mathbb{Z}_{a}$-equivariant gluing parameter
$\rho:P_{-3}(\mathbb{CP}^{2})|_{[0,z_{1},z_{2}]}\to P_{1}(S^{4})|_{\infty}$.
Then we can glue the $\mathbb{Z}_{a}$-invariant
$A_{0}\in\mathcal{M}(P_{-3}(\mathbb{CP}^{2}))$ and the
$\mathbb{Z}_{a}$-invariant $A_{1}\in\mathcal{M}_{1}(S^{4})$ at
$[0,z_{1},z_{2}]$ using the gluing data $(\rho,\lambda)$ and get a set of
$\mathbb{Z}_{a}$-invariant connections in
$\mathcal{M}(P_{-7}(\mathbb{CP}^{2}))$ for all small enough $\lambda$. This
set of connections is contained in
$(P/G)^{\mathbb{Z}_{a}}\cap\pi^{-1}([0,z_{1},z_{2}])$. Therefore
$(P/G)^{\mathbb{Z}_{a}}\cap\pi^{-1}([0,z_{1},z_{2}])$ is non-compact since
$\lambda$ can be arbitrarily small. This contradicts the fact that
$(P/G)^{\mathbb{Z}_{a}}\cap\pi^{-1}([0,z_{1},z_{2}])$ is compact. Thus
$\mathbb{Z}_{a}$-equivariant gluing parameters do not exist, that is, the
isotropy representation at $[0,z_{1},z_{2}]$ induced by $[A_{0}]$ is not
equivalent to the isotropy representation at the south pole of $S^{4}$ induced
by $[A_{1}]$.
## References
* [1] [[A90]] D.M.Austin. $SO(3)$-invariants on $L(p,q)\times\mathbb{R}$. J. Differential Geometry, 32: 383-413, 1990.
* [2] [[BKS90]] N.P.Buchdahl, S.Kwasik, R.Schultz. One Fixed Point Actions on Low-dimensional Spheres. Inventiones Mathematicae, 102: 633-662, 1990.
* [3] [[C02]] B.Chen. A Smooth Compactification of Moduli Space of Instantons and Its Application. Preprint, math.GT/0204287, 2002.
* [4] [[C10]] B.Chen. Smoothness on Bubble Tree Compactified Instanton Moduli Spaces. Acta Mathematica Sinica, English Series, 26(2): 209-240, 2010.
* [5] [[D02]] S.K.Donaldson. Floer Homology Groups in Yang-Mills Theory. Cambridge University Press, 2002.
* [6] [[DK90]] S.K.Donaldson, P.B.Kronheimer. The Geometry of Four-Manifolds. Clarendon Press, Oxford, 1990.
* [7] [[F95]] P.M.N.Feehan. Geometry of the ends of the moduli space of anti-self-dual connections. Journal of Differential Geometry, 42(3): 465-553, 1995. https://doi.org/10.4310/jdg/1214457548.
* [8] [[F14]] P.M.N.Feehan. Global existence and convergence of solutions to gradient systems and applications to Yang-Mills gradient flow. Preprint, arXiv:1409.1525 [math.DG], 2014.
* [9] [[F15]] P.M.N.Feehan. Discreteness for energies of Yang-Mills connections over four-dimensional manifolds. Preprint, arXiv:1505.06995 [math.DG], 2015.
* [10] [[F89]] M.Furuta. A Remark on a Fixed Point of Finite Group Action on $S^{4}$. Topology, 28(1): 35-38, 1989.
* [11] [[F92]] M.Furuta. On self-dual pseudo-connections on some orbifolds. Mathematische Zeitschrift, 209: 319-337, 1992.
* [12] [[FS85]] R.Fintushel, R.J.Stern. Pseudofree Orbifolds. Annals of Mathematics, 122: 335-364, 1985.
* [13] [[FU84]] D.S.Freed, K.K.Uhlenbeck. Instantons and Four-Manifolds. Springer-Verlag, 1984.
* [14] [[H79]] K.Hulek. Stable Rank-2 Vector Bundles on $\mathbb{P}_{2}$ with $c_{1}$ Odd. Mathematische Annalen, 242: 241-266, 1979.
* [15] [[LM89]] H.B.Lawson, Jr., M.L.Michelsohn. Spin Geometry. Princeton University Press, 1989.
* [16] [[M98]] J.W.Morgan. An introduction to gauge theory. IAS/Park City Mathematics Series, Volume 4, 1998.
* [17] [[OSS88]] C.Okonek, M.Schneider, H.Spindler. Vector Bundles on Complex Projective Spaces. Progress in Mathematics 3, 1988.
* [18] [[T88]] C.H.Taubes. A framework for Morse theory for the Yang-Mills functional. Inventiones Mathematicae, 94: 327-402, 1988.
* [19] [[U82]] K.K.Uhlenbeck. Removable singularities in Yang-Mills fields. Communications in Mathematical Physics, 83: 31-42, 1982.
(Shuaige Qiao) Department of Mathematics, Mathematical Sciences Institute,
Australian National University, Canberra, ACT 2600, Australia.
Email address<EMAIL_ADDRESS>
# Vertical federated learning based on DFP and BFGS
1st Wenjie Song China University of Petroleum
Qingdao, China
<EMAIL_ADDRESS>2nd Xuan Shen China University of Petroleum
Qingdao, China
<EMAIL_ADDRESS>
###### Abstract
As data privacy is increasingly valued, federated learning (FL) has
emerged because of its potential to protect data. On the premise of ensuring
data security, FL uses homomorphic encryption and differential privacy to
realize distributed machine learning by exchanging encrypted information
between different data providers. However, there are still many problems in
FL, such as low communication efficiency between clients and the server and
non-IID data. To address these two problems, we propose a novel vertical
federated learning framework based on DFP and BFGS (denoted BDFL) and apply it
to logistic regression. Finally, we perform experiments on real datasets to
test the efficiency of the BDFL framework.
###### Index Terms:
Federated learning, Machine learning, Non-IID data, Data privacy.
## I INTRODUCTION
On the one hand, with the advent of the General Data Protection
Regulation, more and more people are paying attention to privacy protection in
machine learning. On the other hand, in practice more and more data islands
appear, making traditional machine learning difficult to carry out.
Generally speaking, an AI service needs data provided by users to train on a
server. However, the data may come from various institutions, and although
each institution wants a good model, it does not want to leak its own data.
Therefore, in order to break data islands and achieve privacy protection,
Google [1] proposed federated learning in 2016. In FL, AI services can perform
machine learning without collecting data from the participating institutions.
FL allows the model to be trained locally, with only encrypted information sent
to the central server. The central server aggregates the received data and sends
the result back to every client. Finally, each client updates its parameters
locally. For updating parameters there are GD, SGD and mini-batch SGD, but
these methods are all of first-order accuracy. Therefore, we consider a
higher-order method, Newton's method; however, the Hessian matrix may not be
invertible, and even when the inverse exists it is extremely expensive to
compute. Therefore, we consider adopting quasi-Newton methods, among which DFP
and BFGS are two representative algorithms. Yang et al. [2] implemented BFGS
within the logistic regression architecture and applied it to vertical
federated learning, but communication problems remain. Therefore, we combine
DFP and BFGS to propose a new algorithm, which is used in the logistic
regression algorithm of vertical federated learning. In the end, compared with
other algorithms, our algorithm achieves better results with fewer
communication rounds.
## II RELATED WORK
In recent years, a large number of studies on federated learning have emerged [3], [4], [5]. In these architectures, gradient descent methods are commonly used. However, the convergence rate of the first-order gradient descent method is lower than that of the second-order Newton method, while computing the inverse of the Hessian matrix is very expensive; hence the quasi-Newton methods arose, with BFGS and DFP as the two representative methods. A series of works on horizontal federated learning has been proposed [6], [7], where each client has a part of the samples but all of the data attributes. In vertical federated learning, each client holds part of the data attributes, and the samples overlap. [8] applies logistic regression under the vertical federated framework. Yang et al. [2] use L-BFGS to implement the logistic regression algorithm of vertical federated learning, reducing communication cost. [9] combines federated learning with blockchain, proposing BlockFL; thanks to the consensus mechanism in the blockchain, BlockFL can resist attacks from malicious clients. FedAvg [4] is an iterative method that has become a universal optimization method in FL. In terms of theoretical analysis, [10], [11] give proofs of convergence for the FedAvg algorithm on non-IID data. In particular, [12] offers a boosting method based on a tree model, SecureBoost. Recently, [13] proposed the FedProx algorithm on the basis of FedAvg by adding a proximal term; FedProx clearly outperforms FedAvg under statistical heterogeneity and system heterogeneity.
In summary, FedAvg, the baseline in FL, performs poorly under statistical heterogeneity and system heterogeneity; as an improvement of FedAvg, FedProx performs well in non-IID environments. The first-order gradient descent method in traditional machine learning is widely applicable, but in FL, when the communication cost far exceeds the calculation cost, a higher-precision algorithm should be selected. In other words, higher computation cost should be traded for smaller communication cost.
## III ANALYTICAL MODEL
In this work, inspired by BFGS in logistic regression of vertical federated learning [2], we explore a broader framework, BDFL, that is capable of managing heterogeneous federated environments while ensuring privacy. Moreover, our framework performs better than BFGS [2] and SGD [14].
### III-A Logistic Regression
In vertical federated learning, [14] realizes the classic logistic regression method. Let $X\in R^{N\times T}$ be a data set containing $T$ data samples, where each instance has $N$ features, and let $y\in\\{-1,+1\\}^{T}$ be the corresponding labels. Suppose there are two honest-but-curious participants, party A (host) and party B (guest). A has only features of the data, while B has both features and the labels. Thus $X^{A}\in R^{N_{A}\times T}$ is owned by A and $X^{B}\in R^{N_{B}\times T}$ is owned by B. Each party has different data features, but the sample ids are the same. The goal is therefore to train a classification model by solving
$\min\limits_{\boldsymbol{w}\in
R^{N}}\quad\frac{1}{T}\sum_{i}^{T}l(\boldsymbol{w};\boldsymbol{x}_{i},y_{i})$
(1)
where $\boldsymbol{w}$ denotes the model parameters; $\boldsymbol{w}=(\boldsymbol{w}^{A},\boldsymbol{w}^{B})$, where $\boldsymbol{w}^{A}\in R^{N_{A}}$ and $\boldsymbol{w}^{B}\in R^{N_{B}}$. Moreover, $\boldsymbol{x}_{i}$ represents the features of the $i$-th data instance and $y_{i}$ is the corresponding label. The loss function is the negative log-likelihood
$l(\boldsymbol{w};\boldsymbol{x}_{i},y_{i})=\log(1+\exp(-y_{i}\boldsymbol{w}^{T}\boldsymbol{x}_{i}))$
(2)
In [14], SGD is used, with encrypted intermediate information exchanged at each iteration. Party A and Party B hold encrypted partial gradients $\boldsymbol{g}^{A}\in R^{N_{A}}$ and $\boldsymbol{g}^{B}\in R^{N_{B}}$ respectively, which can be decrypted by the third party C. Furthermore, to achieve secure multi-party computation, additively homomorphic encryption is adopted. In the field of homomorphic encryption, a lot of work has been completed [15], [16]; different computing requirements correspond to different encryption methods, such as PHE, SHE, and FHE. After encryption, addition or scalar multiplication can be performed directly on the encrypted data, and decrypting the result is consistent with performing the operation on the original data. That is, $[\\![m]\\!]+[\\![n]\\!]=[\\![m+n]\\!]$ and $[\\![m]\\!]\cdot n=[\\![m\cdot n]\\!]$, where $[\\![\cdot]\\!]$ denotes the encryption operation. However, homomorphic encryption cannot yet handle exponential calculations, so equation (2) cannot be encrypted homomorphically in a direct way. We therefore use a Taylor expansion to approximate the loss function; fortunately, one is proposed in [14] as
$l(\boldsymbol{w};\boldsymbol{x}_{i},y_{i})\approx
log2-\frac{1}{2}y_{i}\boldsymbol{w}^{T}\boldsymbol{x}_{i}+\frac{1}{8}(\boldsymbol{w}^{T}\boldsymbol{x}_{i})^{2}$
(3)
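As a quick sanity check of equation (3), the sketch below compares the exact loss of equation (2) with its second-order Taylor approximation in plain Python. The sample values of $z=\boldsymbol{w}^{T}\boldsymbol{x}_{i}$ are illustrative, not taken from the paper's experiments.

```python
import math

def exact_loss(y, z):
    # Logistic negative log-likelihood of equation (2), with z = w^T x.
    return math.log(1.0 + math.exp(-y * z))

def taylor_loss(y, z):
    # Second-order Taylor approximation of equation (3); since y is +1 or -1,
    # the quadratic term (1/8) z^2 does not depend on the label.
    return math.log(2.0) - 0.5 * y * z + 0.125 * z * z

# At z = 0 the two coincide exactly (both equal log 2), and for small |z|
# the approximation error stays tiny, which is what makes equation (3)
# usable under additively homomorphic encryption.
for y in (-1, 1):
    for z in (0.0, 0.1, 0.2, 0.5):
        assert abs(exact_loss(y, z) - taylor_loss(y, z)) < 5e-3
```

The approximation degrades for large $|z|$, which is why such expansions are typically used with normalized features and moderate weights.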
### III-B Newton Method
The basic idea of Newton's method is to use the first-order gradient and the second-order gradient (Hessian) at the iteration point to approximate the objective function by a quadratic function, and then use the minimum point of this quadratic model as the new iteration point, repeating the process until an approximate minimum satisfying the required accuracy is reached. Newton's method closely approximates the optimal value and converges quickly, but each step requires a large amount of computation. For federated learning, this trade-off is attractive: a larger computational cost is exchanged for a smaller communication cost.
For convenience, we mainly discuss the one-dimensional case. For an objective function $f(\boldsymbol{w})$, the problem of finding its extremum can be transformed into solving $f^{\prime}(\boldsymbol{w})=0$. The second-order Taylor expansion of $f(\boldsymbol{w})$ is
$f(\boldsymbol{w})=f(\boldsymbol{w}_{k})+f^{\prime}(\boldsymbol{w}_{k})(\boldsymbol{w}-\boldsymbol{w}_{k})+\frac{1}{2}f^{\prime\prime}(\boldsymbol{w}_{k})(\boldsymbol{w}-\boldsymbol{w}_{k})^{2}$
(4)
Taking the derivative of the above formula and setting it to 0 gives
$f^{\prime}(\boldsymbol{w}_{k})+f^{\prime\prime}(\boldsymbol{w}_{k})(\boldsymbol{w}-\boldsymbol{w}_{k})=0$
(5)
$\boldsymbol{w}=\boldsymbol{w}_{k}-\frac{f^{\prime}(\boldsymbol{w}_{k})}{f^{\prime\prime}(\boldsymbol{w}_{k})}$
(6)
it is further organized into the following iterative expression:
$\boldsymbol{w}_{k+1}=\boldsymbol{w}_{k}-\lambda
H^{-1}f^{\prime}(\boldsymbol{w}_{k})$ (7)
where $\lambda$ represent step-size and $H$ represent Hessian.
This is the iterative formula of Newton's method. The method has a fatal flaw, however: equation (7) requires the inverse of the Hessian matrix. Not all matrices are invertible, and the computational complexity of the inversion operation is very high. This motivates the quasi-Newton methods, BFGS and DFP, which approximate Newton's method.
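A minimal sketch of the 1-D iteration in equation (6), applied to an illustrative objective chosen here only for demonstration (a one-sample logistic loss plus a quadratic term, so the second derivative is always positive):

```python
import math

def f1(w):
    # f'(w) for f(w) = log(1 + exp(-w)) + w^2 / 2
    return -math.exp(-w) / (1.0 + math.exp(-w)) + w

def f2(w):
    # f''(w): always >= 1 here, so the Newton step is well defined
    e = math.exp(-w)
    return e / (1.0 + e) ** 2 + 1.0

w = 5.0
for _ in range(20):
    w -= f1(w) / f2(w)   # equation (6)

assert abs(f1(w)) < 1e-10  # converged to the stationary point
```

On a general objective the second derivative (Hessian) may be singular or expensive to invert, which is exactly the motivation for the quasi-Newton updates of the next subsection.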
### III-C Quasi-Newton Method
The central idea of the quasi-Newton method is to obtain a matrix that approximates the inverse Hessian without explicitly computing the inverse. The expression of the quasi-Newton method is therefore similar to equation (7), as follows
$\boldsymbol{w}_{k+1}=\boldsymbol{w}_{k}-\lambda
C_{k}f^{\prime}(\boldsymbol{w}_{k})$ (8)
where $C_{k}$ is the matrix used to approximate $H^{-1}$.
In contrast, the SGD update formula is as follows
$\boldsymbol{w}_{k+1}=\boldsymbol{w}_{k}-\lambda
f^{\prime}(\boldsymbol{w}_{k})$ (9)
Different quasi-Newton methods differ in the iterative formula for $C_{k}$. We therefore give the DFP and BFGS iterative formulas for $C_{k}$ below.
#### III-C1 DFP
$C^{\prime}_{i+1}=C_{i}+\frac{\triangle\boldsymbol{w}_{i}\triangle\boldsymbol{w}_{i}^{T}}{\triangle\boldsymbol{w}_{i}^{T}\triangle\boldsymbol{g}_{i}}-\frac{(C_{i}\triangle\boldsymbol{g}_{i})(C_{i}\triangle\boldsymbol{g}_{i})^{T}}{\triangle\boldsymbol{g}_{i}^{T}C_{i}\triangle\boldsymbol{g}_{i}}$
(10)
#### III-C2 BFGS
$C^{\prime\prime}_{i+1}=(I-\frac{\triangle\boldsymbol{w}_{i}\triangle\boldsymbol{g}_{i}^{T}}{\triangle\boldsymbol{g}_{i}^{T}\triangle\boldsymbol{w}_{i}})C_{i}(I-\frac{\triangle\boldsymbol{g}_{i}\triangle\boldsymbol{w}_{i}^{T}}{\triangle\boldsymbol{g}_{i}^{T}\triangle\boldsymbol{w}_{i}})+\frac{\triangle\boldsymbol{w}_{i}\triangle\boldsymbol{w}_{i}^{T}}{\triangle\boldsymbol{g}_{i}^{T}\triangle\boldsymbol{w}_{i}}$
(11)
#### III-C3 BDFL
$C_{i+1}=\alpha C^{\prime}_{i+1}+(1-\alpha)C^{\prime\prime}_{i+1}$ (12)
In equations (10) and (11), $\triangle\boldsymbol{w}_{i}=\boldsymbol{w}_{i+1}-\boldsymbol{w}_{i}$ and $\triangle\boldsymbol{g}_{i}=\boldsymbol{g}_{i+1}-\boldsymbol{g}_{i}$. In equation (12), $\alpha$ is a mixing coefficient, not restricted to any particular range.
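The three updates can be sketched in plain Python (small matrices as nested lists, so no external dependencies are assumed). A useful correctness check is the secant condition $C_{i+1}\triangle\boldsymbol{g}_{i}=\triangle\boldsymbol{w}_{i}$, which DFP, BFGS, and their mixture in equation (12) all satisfy since $\alpha+(1-\alpha)=1$. The numeric values and the choice $\alpha=0.5$ are illustrative only.

```python
def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mscale(A, s):
    return [[a * s for a in row] for row in A]

def mmul(A, B):
    return [[dot(row, col) for col in zip(*B)] for row in A]

def eye(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def dfp_update(C, dw, dg):
    # Equation (10)
    Cg = matvec(C, dg)
    t1 = mscale(outer(dw, dw), 1.0 / dot(dw, dg))
    t2 = mscale(outer(Cg, Cg), -1.0 / dot(dg, Cg))
    return madd(madd(C, t1), t2)

def bfgs_update(C, dw, dg):
    # Equation (11); (I - dg dw^T / rho) is the transpose of A below
    rho = dot(dg, dw)
    A = madd(eye(len(dw)), mscale(outer(dw, dg), -1.0 / rho))
    At = [list(col) for col in zip(*A)]
    return madd(mmul(mmul(A, C), At), mscale(outer(dw, dw), 1.0 / rho))

def bdfl_update(C, dw, dg, alpha=0.5):
    # Equation (12); alpha = 0.5 is just an illustrative choice
    return madd(mscale(dfp_update(C, dw, dg), alpha),
                mscale(bfgs_update(C, dw, dg), 1.0 - alpha))

# Secant check on illustrative values: C_{i+1} dg must equal dw.
C, dw, dg = eye(2), [1.0, 2.0], [0.5, 1.5]
for update in (dfp_update, bfgs_update, bdfl_update):
    Cn = update(C, dw, dg)
    assert all(abs(a - b) < 1e-9 for a, b in zip(matvec(Cn, dg), dw))
```

In practice the updates are applied per party to $C^{A}$ and $C^{B}$ after the decrypted gradients are received, as in Procedure 2.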
### III-D Compute and Exchange information
Figure 1: Information exchange between parties
The gradient and the Hessian of Taylor loss of the i-th data sample are given
by $\boldsymbol{g}_{i}=\nabla
l(\boldsymbol{w};\boldsymbol{x}_{i},y_{i})\approx(\frac{1}{4}\boldsymbol{w}^{T}\boldsymbol{x}_{i}-\frac{1}{2}y_{i})\boldsymbol{x}_{i}$,
$H=\nabla^{2}l(\boldsymbol{w};\boldsymbol{x}_{i},y_{i})\approx\frac{1}{4}\boldsymbol{x}_{i}\boldsymbol{x}_{i}^{T}$
respectively. For convenience, we introduce the intermediate variable
$\boldsymbol{u}[i]=\boldsymbol{w}^{T}\boldsymbol{x}_{i}$ (13)
#### III-D1 Compute Gradient and Loss
First, after initializing $w$ and $C$, both parties A and B calculate $u_{a}$
and $u_{b}$. After the calculation is completed, party A sends
$[\\![\boldsymbol{u}_{a}]\\!]$ and $[\\![\boldsymbol{u}_{a}^{2}]\\!]$ to B.
Next, B calculates $[\\![loss]\\!]$ and $[\\![\boldsymbol{d}]\\!]$ according to equations (16) and (14), and then sends $[\\![\boldsymbol{d}]\\!]$ to A. Then, according to equation (15), A calculates $[\\![\boldsymbol{g}_{a}]\\!]$ and B calculates $[\\![\boldsymbol{g}_{b}]\\!]$.
$[\\![\boldsymbol{d}[i]]\\!]=\frac{1}{4}([\\![\boldsymbol{u}_{A}[i]]\\!]+[\\![\boldsymbol{u}_{B}[i]]\\!]-2[\\![y_{i}]\\!])$ (14)

$[\\![\boldsymbol{g}]\\!]\approx\frac{1}{N}\sum_{i\in N}[\\![d_{i}]\\!]\boldsymbol{x}_{i}$ (15)

$[\\![loss]\\!]\approx\frac{1}{N}\sum_{i\in N}\log 2-\frac{1}{2}y_{i}([\\![\boldsymbol{u}_{A}[i]]\\!]+[\\![\boldsymbol{u}_{B}[i]]\\!])+\frac{1}{8}([\\![\boldsymbol{u}_{A}^{2}[i]]\\!]+2\boldsymbol{u}_{B}[i][\\![\boldsymbol{u}_{A}[i]]\\!]+[\\![\boldsymbol{u}_{B}^{2}[i]]\\!])$ (16)
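The homomorphic operations that equation (14) relies on can be illustrated with a toy Paillier cryptosystem. The sketch below uses tiny hard-coded primes and is insecure, for illustration only; real deployments use key sizes of 1024 bits or more and a fixed-point encoding for fractional factors such as the $\frac{1}{4}$ in equation (14).

```python
import math
import random

# Toy Paillier keypair (insecure: tiny, hard-coded primes, illustration only).
p, q = 10007, 10009
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)          # valid because L(g^lam mod n^2) = lam mod n

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism, as used when B combines A's encrypted u_A with
# its own values in equation (14):
m1, m2 = 123, 456
assert decrypt(encrypt(m1) * encrypt(m2) % n2) == m1 + m2   # [[m1]]+[[m2]]=[[m1+m2]]
assert decrypt(pow(encrypt(m1), 3, n2)) == 3 * m1           # [[m]] * k = [[m*k]]
```

Multiplying ciphertexts adds plaintexts, and raising a ciphertext to a public constant scales the plaintext, which is all equations (14) to (16) need once the loss is in the Taylor-polynomial form of equation (3).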
#### III-D2 Send Encrypted Information And Return
B sends the calculated $[\\![loss]\\!]$ to C, which decrypts it and displays the result. Then A and B send $[\\![\boldsymbol{g}_{a}]\\!],[\\![\boldsymbol{g}_{b}]\\!]$ to C. After decrypting the gradients, C sends back the respective plaintext gradients.
#### III-D3 Update Hessian and $\boldsymbol{w}$
After both A and B have received their respective gradients, they first update their $C_{k}$, and then update $\boldsymbol{w}$ using equation (8).
#### III-D4 Check
Parties A and B check whether $w$ has converged. If both have converged, output $w$; if either has not, continue the loop.
Procedure 1 Basic Logistic Regression In Vertical FL
Input : $w_{0}^{A},w_{0}^{B},X_{A},X_{B},Y_{B},E,\lambda$
Output : $w^{A},w^{B}$
Party C: Generate public key and private key
Party C: Send public key to A and B
1:for each round $k=1,..,E$ do
2: Party A: Compute $u_{a},u_{a}^{2}$ as equation (13)
3: Party A: Send $[\\![u_{a}]\\!],[\\![u_{a}^{2}]\\!]$ to B.
4: Party B: Compute $u_{b},u_{b}^{2}$ as equation (13)
5: Party B: Compute $[\\![loss]\\!]$ as equation (16)
6: Party B: Compute $[\\![d]\\!]$ as equation (14) and send to A
7: Party A: Compute $[\\![g_{A}]\\!]$ as equation (15)
8: Party B: Compute $[\\![g_{B}]\\!]$ as equation (15)
9: Party A&B: Send $[\\![g_{A}]\\!],[\\![g_{B}]\\!]$ to C
10: Party C: Decrypt $[\\![g_{A}]\\!],[\\![g_{B}]\\!]$ and send back
11: Party A&B: Update $w$ as equation (9)
12:end for
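To make the data flow of Procedure 1 concrete, here is a plaintext sketch with illustrative toy data; the encryption and party C are omitted so equations (13) to (15) and the SGD step (9) can be followed directly. This is a didactic walk-through, not the authors' implementation.

```python
# Plaintext walk-through of Procedure 1 (encryption omitted so the data flow
# is visible; party boundaries marked in comments). Data is illustrative.
T = 4
XA = [[1.0], [2.0], [3.0], [4.0]]            # party A's single feature
XB = [[2.0], [1.0], [-1.0], [-2.0]]          # party B's single feature
Y  = [1, 1, -1, -1]                          # labels, held by B
wA, wB = [0.0], [0.0]
lr = 0.5

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

for _ in range(200):                         # E rounds
    uA = [dot(wA, x) for x in XA]            # party A, equation (13)
    uB = [dot(wB, x) for x in XB]            # party B, equation (13)
    # party B, equation (14): d_i = (u_A[i] + u_B[i] - 2 y_i) / 4
    d = [(a + b - 2 * y) / 4.0 for a, b, y in zip(uA, uB, Y)]
    # equation (15): each party averages d_i against its own features
    gA = [sum(d[i] * XA[i][j] for i in range(T)) / T for j in range(len(wA))]
    gB = [sum(d[i] * XB[i][j] for i in range(T)) / T for j in range(len(wB))]
    # equation (9): plain gradient step on both sides
    wA = [w - lr * g for w, g in zip(wA, gA)]
    wB = [w - lr * g for w, g in zip(wB, gB)]

# The joint model w = (wA, wB) should now separate the toy data.
for xa, xb, y in zip(XA, XB, Y):
    assert (dot(wA, xa) + dot(wB, xb)) * y > 0
```

Procedure 2 differs only in steps 11-14: each party additionally maintains its approximate inverse Hessian $C$ via equation (12) and replaces the step of equation (9) with equation (8).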
Procedure 2 BDFL Framework
Input : $w_{0}^{A},w_{0}^{B},X_{A},X_{B},Y_{B},C_{0}^{A},C_{0}^{B},E,\lambda$
Output : $w^{A},w^{B}$
Party C: Generate public key and private key
Party C: Send public key to A and B
1:for each round $k=1,..,E$ do
2: Party A: Compute $u_{a},u_{a}^{2}$ as equation (13)
3: Party A: Send $[\\![u_{a}]\\!],[\\![u_{a}^{2}]\\!]$ to B.
4: Party B: Compute $u_{b},u_{b}^{2}$ as equation (13)
5: Party B: Compute $[\\![loss]\\!]$ as equation (16)
6: Party B: Compute $[\\![d]\\!]$ as equation (14) and send to A
7: Party A: Compute $[\\![g_{A}]\\!]$ as equation (15)
8: Party B: Compute $[\\![g_{B}]\\!]$ as equation (15)
9: Party A&B: Send $[\\![g_{A}]\\!],[\\![g_{B}]\\!]$ to C
10: Party C: Decrypt $[\\![g_{A}]\\!],[\\![g_{B}]\\!]$ and send back
11: if $k\neq 1$ then
12: Party A&B: Update separately $C$ as equation (12)
13: end if
14: Party A&B: Update $w$ as equation (8)
15:end for
## IV PERFORMANCE EVALUATION
Our numerical experiments have two parts. In both, we select 80% of the data for training and monitor the training loss; the remaining 20% is used as the test dataset to check the generalization ability of the model.
### IV-A Compare Quasi-Newton with SGD
The first part compares SGD with the quasi-Newton methods on the credit card dataset, which consists of 30000 instances, each with 23 features. We shuffle the order of the instances; Party A holds 12 features, and Party B holds the remaining 11 features and the corresponding target. The two quasi-Newton methods, DFP and BFGS, are compared with SGD.
Figure 2: Training loss in the Credit Card experiment, comparing the quasi-Newton methods with SGD. All methods use a learning-rate decay of 0.06 per round.
### IV-B Compare Quasi-Newton with BDFL
In the second part, going further, we compare BDFL (our proposal) with the quasi-Newton methods on the breast cancer dataset, which has 569 instances with 30 attributes and a label. After holding out the 20% test set, the data is split into $X_{A}\in R^{455\times 20}$, $X_{B}\in R^{455\times 10}$ and $Y_{B}\in R^{455\times 1}$. Party A holds attribute indices 10-29, and Party B holds indices 0-9.
Figure 3: Training loss in the Breast Cancer experiment, comparing BFGS and DFP with BDFL. All methods use a learning-rate decay of 0.05 per round.
Figure 2 shows clearly that BFGS is much faster than SGD, and Figure 3 shows that BDFL outperforms both DFP and BFGS. In addition, we run every model on the test dataset.
TABLE I: The Test Accuracy
Method | Credit Card | Breast Cancer
---|---|---
SGD | 90.90% | 86.26%
DFP | 94.41% | 85.57%
BFGS | 95.10% | 91.29%
BDFL | – | 91.35%
## V CONCLUSIONS
In this article, we use the quasi-Newton method to replace the gradient descent method, with the purpose of exchanging a larger amount of calculation for a smaller communication cost. In addition, we improve on the original quasi-Newton methods: a novel framework, named BDFL, is proposed under vertical federated learning. Logistic regression is applied within the BDFL framework and tested on real datasets. The experiments have shown that BDFL can satisfy the following two premises of multi-party modeling:

1. data privacy is not leaked;
2. one party has only data and no labels, while the other has both data and labels.

Moreover, the convergence speed and accuracy of the model are better than those of traditional methods. Our model still has some problems, such as a convergence speed below our expectations and a large amount of calculation; we will continue to study these in future work.
## References
* [1] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, “Federated learning: Strategies for improving communication efficiency,” 2017.
* [2] K. Yang, T. Fan, T. Chen, Y. Shi, and Q. Yang, “A quasi-newton method based vertical federated learning framework for logistic regression,” 2019.
* [3] Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated machine learning: Concept and applications,” 2019.
* [4] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” 2017.
* [5] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: Challenges, methods, and future directions,” _IEEE Signal Processing Magazine_ , 2020.
* [6] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, “Federated learning: Strategies for improving communication efficiency,” 2017.
* [7] K. Yang, T. Jiang, Y. Shi, and Z. Ding, “Federated learning via over-the-air computation,” 2019.
* [8] S. Hardy, W. Henecka, H. Ivey-Law, R. Nock, G. Patrini, G. Smith, and B. Thorne, “Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption,” 2017.
* [9] H. Kim, J. Park, M. Bennis, and S.-L. Kim, “Blockchained on-device federated learning,” 2019.
* [10] X. Li, K. Huang, W. Yang, S. Wang, and Z. Zhang, “On the convergence of fedavg on non-iid data,” 2020.
* [11] A. Khaled, K. Mishchenko, and P. Richtárik, “First analysis of local gd on heterogeneous data,” 2020.
* [12] K. Cheng, T. Fan, Y. Jin, Y. Liu, T. Chen, and Q. Yang, “Secureboost: A lossless federated learning framework,” 2019.
* [13] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” 2020.
* [14] S. Hardy, W. Henecka, H. Ivey-Law, R. Nock, G. Patrini, G. Smith, and B. Thorne, “Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption,” 2017.
* [15] A. Acar, H. Aksu, A. S. Uluagac, and M. Conti, “A survey on homomorphic encryption schemes: Theory and implementation,” 2017.
* [16] H. Zhu, R. Wang, Y. Jin, K. Liang, and J. Ning, “Distributed additive encryption and quantization for privacy preserving federated deep learning,” 2020.
# Explainable Artificial Intelligence Approaches: A Survey
Sheikh Rabiul Islam, University of Hartford, William Eberle, Tennessee Tech
University, Sheikh Khaled Ghafoor, Tennessee Tech University, Mohiuddin Ahmed,
Edith Cowan University
###### Abstract
The lack of explainability of decisions from Artificial Intelligence (AI) based “black box” systems/models, despite their superiority in many real-world applications, is a key stumbling block for adopting AI in many high-stakes applications across domains and industries. While many popular Explainable Artificial Intelligence (XAI) methods and approaches are available to facilitate a human-friendly explanation of a decision, each has its own merits and demerits, with a plethora of open challenges. We demonstrate popular XAI methods on a mutual case study/task (i.e., credit default prediction), analyze their competitive advantages from multiple perspectives (e.g., local, global), provide meaningful insight on quantifying explainability, and recommend paths towards responsible or human-centered AI using XAI as a medium. Practitioners can use this work as a catalog to understand, compare, and correlate the competitive advantages of popular XAI methods. In addition, this survey elicits future research directions towards responsible or human-centric AI systems, which is crucial for adopting AI in high-stakes applications.
###### Index Terms:
Explainable Artificial Intelligence, Explainability Quantification, Human-
centered Artificial Intelligence, Interpretability.
## 1 Introduction
Artificial Intelligence (AI) has become an integral part of many real-world
applications. Factors fueling the proliferation of AI-based algorithmic
decision making in many disciplines include: (1) the demand for processing a
variety of voluminous data, (2) the availability of powerful computing
resources (e.g., GPU computing, cloud computing), and (3) powerful and new
algorithms. However, most of the successful AI-based models are “black box” in
nature, making it a challenge to understand how the model or algorithm works
and generates decisions. In addition, the decisions from AI systems affect
human interests, rights, and lives; consequently, the decision is crucial for
high stakes applications such as credit approval in finance, automated
machines in defense, intrusion detection in cybersecurity, etc. Regulators are introducing new laws, such as the European Union’s General Data Protection Regulation (GDPR, https://www.eugdpr.org) [1], aka the “right to explanation” [2], the US government’s “Algorithmic Accountability Act of 2019” (https://www.senate.gov) [3], and the U.S. Department of Defense’s Ethical Principles for Artificial Intelligence (https://www.defense.gov) [4], to tackle primarily fairness, accountability, and transparency-related risks in automated decision making systems.
XAI is a re-emerging research trend, as the need to uphold these principles and laws, and to promote explainable decision-making systems and research, continues to increase. Explanation systems were first introduced in
the early ’80s to explain the decisions of expert systems. Later, the focus of
the explanation systems shifted towards human-computer systems (e.g.,
intelligent tutoring systems) to provide better cognitive support to users.
The primary reason for the renewed interest in XAI research has stemmed from
recent advancements in AI and ML, and their application to a wide range of
areas, as well as prevailing concerns over the unethical use, lack of
transparency, and undesired biases in the models. Many real-world applications in Industrial Control Systems (ICS) greatly increase the efficiency of industrial production through automated equipment and production processes [5]. However, in this setting, “black box” models are still not favored, due to the lack of explainability and transparency of the models and their decisions.
According to [6] and [7], XAI encompasses Machine Learning (ML) or AI systems/tools for demystifying black-box models’ internals (e.g., what the models have learned) and/or for explaining individual predictions. In general, the explainability of an AI model’s prediction is the extent of transferable qualitative understanding of the relationship between model input and prediction (i.e., selective/suitable causes of the event) in a recipient-friendly manner. The terms “explainability” and “interpretability” are used interchangeably throughout the literature. To this end, in the case of an intelligent system (i.e., an AI-based system), it is evident that explainability is more than interpretability in terms of importance, completeness, and fidelity of prediction. Based on that, we will use these terms accordingly where appropriate.
Due to the increasing number of XAI approaches, it has become challenging to
understand the pros, cons, and competitive advantages, associated with the
different domains. In addition, there are many variations among different XAI methods, such as whether a method is global (i.e., explains the model’s behavior on the entire data set), local (i.e., explains the prediction or decision for a particular instance), ante-hoc (i.e., involved in the pre-training stage), post-hoc (i.e., works on an already trained model), or surrogate (i.e., deploys a simple model to emulate the predictions of a “black box” model). However, despite many reviews of XAI methods, there is still a lack of comprehensive analysis of XAI when it comes to these methods and perspectives.
Some of the popular works/tools on XAI are LIME, DeepVis Toolbox, TreeInterpreter, Keras-vis, Microsoft InterpretML, MindsDB, SHAP, Tensorboard WhatIf, Tensorflow’s Lucid, Tensorflow’s Cleverhans, etc. However, a few of these tools are model-specific: for instance, DeepVis, Keras-vis, and Lucid address a neural network’s explainability, while TreeInterpreter addresses a tree-based model’s explainability. At a high level, the proposed approaches share similar concepts, such as feature importance, feature interactions, Shapley values, partial dependence, surrogate models, counterfactuals, adversarials, prototypes, and knowledge infusion. However, despite some visible progress in XAI methods, the quantification or evaluation of explainability remains under-explored, particularly when it comes to human-study-based evaluations.
In this paper, we (1) demonstrate popular methods/approaches towards XAI with
a mutual task (i.e., credit default prediction) and explain the working
mechanism in layman’s terms, (2) compare the pros, cons, and competitive
advantages of each approach with their associated challenges, and analyze
those from multiple perspectives (e.g., global vs local, post-hoc vs ante-hoc,
and inherent vs emulated/approximated explainability), (3) provide meaningful
insight on quantifying explainability, and (4) recommend a path towards
responsible or human-centered AI using XAI as a medium. Our survey is the only recent one (see Table I) that includes a mutual test case, with useful insights on popular XAI methods (see Table IV).
TABLE I: Comparison with other Surveys
Survey | Reference | Mutual test case
---|---|---
Adadi et al., 2018 | [8] | $\times$
Mueller et al., 2019 | [9] | $\times$
Samek et al., 2017 | [6] | $\times$
Molnar et al., 2019 | [10] | $\times$
Staniak et al., 2018 | [11] | $\times$
Gilpin et al., 2018 | [12] | $\times$
Collaris et al., 2018 | [13] | $\times$
Ras et al., 2018 | [1] | $\times$
Dosilovic et al., 2018 | [14] | $\times$
Tjoa et al., 2019 | [15] | $\times$
Dosi-Valez et al., 2017 | [16] | $\times$
Rudin et al., 2019 | [17] | $\times$
Arrieta et al., 2020 | [18] | $\times$
Miller et al., 2018 | [19] | $\times$
Zhang et al., 2018 | [20] | $\times$
This Survey | | $\checkmark$
We start with a background of related works (Section 2), followed by a
description of the test case in Section 3, and then a review of XAI methods in
Section 4. We conclude with an overview of quantifying explainability and a discussion addressing open questions and future research directions towards responsible or human-centered AI in Section 5.
## 2 Background
Research interests in XAI are re-emerging. The earlier works such as [21],
[22], and [23] focused primarily on explaining the decision process of
knowledge-based systems and expert systems. The primary reason behind the
renewed interest in XAI research has stemmed from the recent advancements in
AI, its application to a wide range of areas, the concerns over unethical use,
lack of transparency, and undesired biases in the models. In addition, recent
laws by different governments are necessitating more research in XAI.
According to [6] and [7], XAI encompasses Machine Learning (ML) or AI systems
for demystifying black models internals (e.g., what the models have learned)
and/or for explaining individual predictions.
In 2019, Mueller et al. presented a comprehensive review of the approaches taken by a number of types of “explanation systems” and characterized them into three generations: (1) first-generation systems, for instance expert systems from the early 70’s; (2) second-generation systems, for instance intelligent tutoring systems; and (3) third-generation systems, tools and techniques from the recent renaissance starting in 2015 [9]. The first
generation systems attempt to clearly express the internal working process of
the system by embedding expert knowledge in rules often elicited directly from
experts (e.g., via transforming rules into natural language expressions). The
second-generation systems can be regarded as human-computer systems designed around human knowledge and reasoning capacities to provide cognitive support, for instance arranging the interface in a way that complements the knowledge the user is lacking. Similar to the first-generation systems, the third-generation systems also attempt to clarify the inner workings of the systems, but this time the systems are mostly “black box” (e.g., deep nets, ensemble approaches). In addition, researchers nowadays are using advanced computer technologies in data visualization, animation, and video, which have strong potential to drive XAI research further. Many new ideas have been proposed for generating explainable decisions, stemming from the need for accountable, fair, and trustworthy systems and decisions.
Previous work [10] mentions three notions for the quantification of explainability. Two of the three involve experimental studies with humans (e.g., a domain expert or a layperson) that mainly investigate whether a human can predict the outcome of the model [24], [25], [26], [27], [28]. The third notion (proxy tasks) does not involve a human, and instead uses known truths as a metric (e.g., the smaller the depth of the decision tree, the more explainable the model).
Some notable reviews on XAI are listed in Table I. While these works provide analysis from one or more of the mentioned perspectives, a comprehensive review considering all of them, using a mutual test case, is still missing. We therefore provide an overview using a demonstration of a mutual test case or task, and then analyze the various approaches from multiple perspectives, with some future directions of research towards responsible or human-centered AI.
## 3 Test Case
The mutual test case or task that we use in this paper to demonstrate and
evaluate the XAI methods is credit default prediction. This mutual test case
enables a better understanding of the comparative advantages of different XAI
approaches. We predict whether a customer is going to default on a mortgage
payment (i.e., unable to pay monthly payment) in the near future or not, and
explain the decision using different XAI methods in a human-friendly way. We
use the popular Freddie Mac [29] dataset for the experiments. Table II lists
some important features and their descriptions. The feature descriptions are taken from the data set’s user guide [29].
TABLE II: Dataset description
Feature | Description
---|---
creditScore | A number between 300 and 850 that indicates the creditworthiness of the borrowers.
originalUPB | Unpaid principal balance on the note date.
originalInterestRate | Original interest rate as indicated by the mortgage note.
currentLoanDelinquencyStatus | Indicates the number of days the borrower is delinquent.
numberOfBorrower | Number of borrowers who are obligated to repay the loan.
currentInterestRate | Active interest rate on the note.
originalCombinedLoanToValue | Ratio of all mortgage loans to the appraised price of the mortgaged property on the note date.
currentActualUPB | Unpaid principal balance as of the latest month of payment.
defaulted | Whether the customer defaulted on payment (1) or not (0).
We use the “iml” package [30] of the well-known programming language R to
produce the results for the XAI methods described in this review.
## 4 Explainable Artificial Intelligence Methods
This section summarizes different explainability methods with their pros,
cons, challenges, and competitive advantages primarily based on two recent
comprehensive surveys: [31] and [16]. We then enhance the previous surveys
with a multi-perspective analysis, recent research progresses, and future
research directions. [16] broadly categorize methods for explanations into
three kinds: Intrinsically Interpretable Methods, Model Agnostic Methods, and
Example-Based Explanations.
### 4.1 Intrinsically Interpretable Methods
The most convenient way to achieve explainable results is to stick with
intrinsically interpretable models such as Linear Regression, Logistic
Regression, and Decision Trees by avoiding the use of “black box” models.
However, usually, this natural explainability comes with a cost in
performance.
In a Linear Regression, the predicted target consists of the weighted sum of
input features. So the weight or coefficient of the linear equation can be
used as a medium of explaining prediction when the number of features is
small.
$y = b_{0} + b_{1}x_{1} + \dots + b_{n}x_{n} + \epsilon$
(1)
In Formula 1, $y$ is the target (e.g., chances of credit default), $b_{0}$ is a
constant value known as the intercept (e.g., .33), $b_{i}$ is the learned
weight or coefficient (e.g., .33) for the corresponding feature $x_{i}$ (e.g.,
credit score), and $\epsilon$ is a constant error term (e.g., .0001). Linear
regression comes with an interpretable linear relationship between the features and the target.
However, in cases where there are multiple correlated features, the distinct
feature influence becomes indeterminable as the individual influences in
prediction are not additive to the overall prediction anymore.
Logistic Regression is an extension of Linear Regression to the classification
problems. It models the probabilities for classification tasks. The
interpretation of Logistic Regression is different from Linear Regression as
it gives a probability between 0 and 1, where the weight might not exactly
represent the linear relationship with the predicted probability. However, the
weight provides an indication of the direction of influence (negative or
positive) and a factor of influence between classes, although it is not
additive to the overall prediction.
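A minimal sketch of this interpretation (again with hypothetical weights): the weight is additive in log-odds rather than in probability, and `exp(weight)` is the factor by which the odds change per unit increase of the feature:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(intercept, weights, x):
    # Weights are additive in the log-odds z, not in the probability itself.
    z = intercept + sum(w * xi for w, xi in zip(weights, x))
    return sigmoid(z)

b0, w = 0.1, [-0.8, 1.2]             # hypothetical intercept and weights
p = predict_proba(b0, w, [0.5, 0.3])

# exp(weight) is the multiplicative change in odds per unit feature increase:
odds_factors = [math.exp(wi) for wi in w]   # <1 lowers the odds, >1 raises them
```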
Decision Tree-based models split the data multiple times based on a cutoff
threshold at each node until it reaches a leaf node. Unlike Logistic and
Linear Regression, it works even when the relationship between input and
output is non-linear, and even when the features interact with one another
(i.e., a correlation among features). In a Decision Tree, a path from the root
node (i.e., starting node) (e.g., credit score in Figure 1) to a leaf node
(e.g., default) tells how the decision (the leaf node) took place. Usually,
the nodes in the upper-level of the tree have higher importance than lower-
level nodes. Also, the less the number of levels (i.e., height) a tree has,
the higher the level of explainability the tree possesses. In addition, the
cutoff point of a node in the Decision Trees provides counterfactual
information—for instance, increasing the value of a feature past the
cutoff point will reverse the decision/prediction. In Figure 1, if the credit
score is greater than the cutoff point 748, then the customer is predicted as
non-default. Also, tree-based explanations are contrastive, i.e., a “what if”
analysis provides the relevant alternative path to reach a leaf node.
According to the tree in Figure 1, there are two separate paths (credit score
$\rightarrow$ delinquency $\rightarrow$ non-default; and credit score
$\rightarrow$ non-default) that lead to a non-default classification.
However, tree-based explanations cannot express the linear relationship
between input features and output. It also lacks smoothness; slight changes in
input can have a big impact on the predicted output. Also, there can be
multiple different trees for the same problem. Usually, the more the nodes or
depth of the tree, the more challenging it is to interpret the tree.
Figure 1: Decision Trees
Decision Rules (simple IF-THEN-ELSE conditions) are also an inherent
explanation model. For instance, ”IF credit score is less than or equal to 748
AND if the customer is delinquent on payment for more than zero days
(condition), THEN the customer will default on payment (prediction)”. Although
IF-THEN rules are straightforward to interpret, they are mostly limited to
classification problems (i.e., do not support regression problems), and
inadequate in describing linear relationships. In addition, the RuleFit
algorithm [32] has an inherent interpretation to some extent as it learns
sparse linear models that can detect the interaction effects in the form of
decision rules. Decision rules consist of the combination of split decisions
from each of the decision paths. However, besides the original features, it
also learns some new features to capture the interaction effects of original
features. Usually, interpretability degrades with an increasing number of
features.
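The IF-THEN rule quoted above can be written directly as code, with the matched conditions doubling as the explanation (a sketch using the 748-point cutoff from Figure 1):

```python
# Sketch of the IF-THEN decision rule quoted above; the matched conditions
# serve as the explanation of the prediction.

def classify(credit_score, delinquency_days):
    if credit_score <= 748 and delinquency_days > 0:
        return "default", ["creditScore <= 748", "delinquencyDays > 0"]
    return "non-default", ["rule conditions not met"]

label, explanation = classify(credit_score=700, delinquency_days=6)
```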
Other interpretable models include the extension of linear models such as
Generalized Linear Models (GLMs) and Generalized Additive Models (GAMs); they
help to deal with some of the assumptions of linear models (e.g., the target
outcome y and given features follow a Gaussian Distribution; and no
interaction among features). However, these extensions make models more
complex (i.e., added interactions) as well as less interpretable. In addition,
a Naïve Bayes Classifier based on Bayes Theorem, where the probability of
classes for each of the features is calculated independently (assuming strong
feature independence), and K-Nearest Neighbors, which uses nearest neighbors
of a data point for prediction (regression or classification), also fall under
intrinsically interpretable models.
### 4.2 Model-Agnostic Methods
Model-agnostic methods separate explanation from a machine learning model,
allowing the explanation method to be compatible with a variety of models.
This separation has some clear advantages such as (1) the interpretation
method can work with multiple ML models, (2) provides different forms of
explainability (e.g., visualization of feature importance, linear formula) for
a particular model, and (3) allows for a flexible representation—a text
classifier uses abstract word embedding for classification but uses actual
words for explanation. Some of the model-agnostic interpretation methods
include Partial Dependence Plot (PDP), Individual Conditional Expectation
(ICE), Accumulation Local Effects (ALE) Plot, Feature Interaction, Feature
Importance, Global Surrogate, Local Surrogate (LIME), and Shapley Values
(SHAP).
#### 4.2.1 Partial Dependence Plot (PDP)
The Partial Dependence Plot (PDP) or PD plot shows the marginal effect of one
or two features (at best three features in 3-D) on the predicted outcome of an
ML model [33]. It is a global method, as it shows an overall model behavior,
and is capable of showing the linear or complex relationships between target
and feature(s). It provides a function that depends only on the feature(s)
being plotted by marginalizing over other features in such a way that includes
the interactions among them. PDP provides a clear and causal interpretation by
providing the changes in prediction due to changes in particular features.
However, PDP assumes features under the plot are not correlated with the
remaining features. In the real world, this is unusual. Furthermore, there is
a practical limit of only two features that PD plot can clearly explain at a
time. Also, as a global method, it plots the average effect (over all
instances) of a feature on the prediction, not the effect on a
specific instance. The PD plot in Figure 2 shows the effect of credit score on
prediction. Individual bar lines along the X axis represent the frequency of
samples for different ranges of credit scores.
Figure 2: Partial Dependence Plot (PDP)
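The marginalization behind a PD plot can be sketched in a few lines: fix the plotted feature at each grid value for every instance and average the model's predictions. The threshold model below is a hypothetical stand-in for any black box:

```python
# Hedged sketch of computing partial dependence for one feature.

def partial_dependence(predict, X, feature_idx, grid):
    curve = []
    for v in grid:
        preds = []
        for row in X:
            r = list(row)
            r[feature_idx] = v          # fix the plotted feature...
            preds.append(predict(r))    # ...keeping the other features as observed
        curve.append(sum(preds) / len(preds))
    return curve

predict = lambda row: 1.0 if row[0] <= 748 else 0.0   # hypothetical black box
X = [[700, 5], [760, 0], [690, 12]]                   # (creditScore, delinquency)
pd_curve = partial_dependence(predict, X, feature_idx=0, grid=[650, 748, 800])
```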
#### 4.2.2 Individual Conditional Expectation (ICE)
Unlike PDP, ICE plots one line per instance showing how a feature influences
the changes in prediction (see Figure 3). The average over all lines of an ICE
plot gives a PD plot [34] (i.e., the single line shown in the PD plot in
Figure 2). Figure 4 combines both PDP and ICE together for a better
interpretation.
Figure 3: Individual Conditional Expectation (ICE) Figure 4: PDP and ICE
combined together in the same plot
Although ICE curves are more intuitive to understand than a PD plot, they can
only display one feature meaningfully at a time. In addition, they also suffer
from the problem of correlated features and from overcrowded lines when there
are many instances.
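The relationship between ICE and PDP can be sketched directly (with a hypothetical threshold model standing in for the black box): ICE keeps one curve per instance, and averaging the curves point-wise recovers the PD curve:

```python
# Hedged sketch: ICE curves and their average (the PD curve).

def ice_curves(predict, X, feature_idx, grid):
    # One curve per instance: vary only the chosen feature.
    curves = []
    for row in X:
        curve = []
        for v in grid:
            r = list(row)
            r[feature_idx] = v
            curve.append(predict(r))
        curves.append(curve)
    return curves

predict = lambda row: 1.0 if row[0] <= 748 else 0.0   # hypothetical black box
X = [[700, 5], [760, 0]]
curves = ice_curves(predict, X, 0, [650, 800])
# Averaging the ICE curves point-wise yields the PD curve.
pd_curve = [sum(c[i] for c in curves) / len(curves) for i in range(2)]
```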
#### 4.2.3 Accumulated Local Effects (ALE) Plot
Similar to PD plots (Figure 2), ALE plots (Figure 5) describe how features
influence the prediction on average. However, unlike PDP, the ALE plot works
reasonably well with correlated features and is comparatively faster. Although the
ALE plot is not biased by correlated features, it is challenging to interpret
the changes in prediction when features are strongly correlated and analyzed
in isolation. In that case, only plots showing changes in both correlated
features together make sense to understand the changes in the prediction.
Figure 5: Accumulated Local Effects (ALE) Plot
#### 4.2.4 Feature Interaction
When the features interact with one another, individual feature effects do not
sum up to the total feature effects from all features combined. An H-statistic
(i.e., Friedman’s H-statistic) helps to detect different types of interaction,
even with three or more features. The interaction strength between two
features is the difference between the partial dependence function for those
two features together and the sum of the partial dependence functions for each
feature separately. Figure 6 shows the interaction strength of each
participating feature. For example, current Actual UPB has the highest level
of interaction with other features, and credit score has the least interaction
with other features. However, calculating feature interaction is
computationally expensive. Furthermore, using sampling instead of the entire
dataset usually shows variance from run to run.
Figure 6: Feature interaction
#### 4.2.5 Feature Importance
Usually, the importance of a feature is the increase in the prediction
error of the model when we permute the values of the feature to break the true
relationship between the feature and the true outcome. After shuffling the
values of the feature, if errors increase, then the feature is important. [35]
introduced the permutation-based feature importance for Random Forests; later
[36] extended the work to a model-agnostic version. Feature importance
provides a compressed and global insight into the ML model’s behavior. For
example, Figure 7 shows the importance of each participating feature: current
Actual UPB possesses the highest feature importance, and credit score possesses
the lowest feature importance. Although feature importance takes into account
both the main feature effect and interaction, this is a disadvantage as
feature interaction is included in the importance of correlated features. We
can see that the feature current Actual UPB possesses the highest feature
importance (Figure 7); at the same time, it also possesses the highest
interaction strength (Figure 6). As a result, in the presence of interaction
among features, the feature importances do not add up to the total drop in
performance. Besides, it is unclear whether the test set or training set
should be used for feature importance, as it demonstrates variance from run to
run in the shuffled dataset. It is necessary to mention that feature
importance also falls under the global methods.
Figure 7: Feature importance
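The permutation scheme described above fits in a short sketch: shuffle one feature's column and measure how much the error grows. The model here is hypothetical; note that a feature the model never uses gets zero importance by construction:

```python
import random

# Hedged sketch of permutation feature importance.

def error(predict, X, y):
    return sum(abs(predict(r) - t) for r, t in zip(X, y)) / len(y)

def permutation_importance(predict, X, y, feature_idx, seed=0):
    base = error(predict, X, y)
    rng = random.Random(seed)
    col = [row[feature_idx] for row in X]
    rng.shuffle(col)                      # break the feature-outcome link
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, col)]
    return error(predict, X_perm, y) - base   # importance = error increase

predict = lambda row: 1.0 if row[0] <= 748 else 0.0   # uses feature 0 only
X = [[700, 5], [760, 0], [690, 12], [800, 0]]
y = [1, 0, 1, 0]
imp_used   = permutation_importance(predict, X, y, 0)
imp_unused = permutation_importance(predict, X, y, 1)  # model ignores feature 1
```

Because permutation is random, repeated runs with different seeds give slightly different importances, which is the run-to-run variance mentioned above.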
#### 4.2.6 Global Surrogate
A global surrogate model tries to approximate the overall behavior of a “black
box” model using an interpretable ML model. In other words, surrogate models
try to approximate the prediction function of a black-box model as accurately
as possible using an interpretable model. It is also known as a meta-model, approximate model, response
surface model, or emulator. We approximate the behavior of a Random Forest
using CART decision trees (Figure 8). The original black-box model could be
discarded, given that the surrogate model demonstrates comparable performance.
Although a surrogate model comes with interpretation and flexibility (i.e.,
such as model agnosticism), the possibility of diverse explanations for the same
“black box”, such as multiple decision trees with different structures, is a drawback.
Besides, some would argue that this is only an illusion of interpretability.
Figure 8: Global surrogate
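The key point of a surrogate is that it is fit to the black box's predictions rather than to the true labels. A toy sketch with a one-split decision stump as the surrogate and a hypothetical black box, reporting the stump's fidelity (agreement with the black box):

```python
# Hedged sketch of a global surrogate: fit a one-split stump to the black
# box's predictions (not the true labels) and report its fidelity.

def fit_stump(X, bb_preds, feature_idx):
    best = None
    for t in sorted({row[feature_idx] for row in X}):
        preds = [1.0 if row[feature_idx] <= t else 0.0 for row in X]
        fidelity = sum(p == b for p, b in zip(preds, bb_preds)) / len(X)
        if best is None or fidelity > best[1]:
            best = (t, fidelity)
    return best   # (threshold, agreement with the black box)

black_box = lambda row: 1.0 if row[0] <= 748 else 0.0   # hypothetical model
X = [[700, 5], [760, 0], [690, 12], [800, 0]]
bb_preds = [black_box(r) for r in X]
threshold, fidelity = fit_stump(X, bb_preds, feature_idx=0)
```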
#### 4.2.7 Local Surrogate (LIME)
Unlike global surrogate, local surrogate explains individual predictions of
black-box models. Local Interpretable Model-Agnostic Explanations (LIME) was
proposed by [37]. LIME trains an inherently interpretable model (e.g.,
Decision Trees) on a new dataset made from the permutation of samples and the
corresponding predictions of the black box. Although the learned model can have
a good approximation of local behavior, it does not have a good global
approximation. This trait is also known as local fidelity. Figure 9 is a
visualization of the output from LIME. For a random sample, the black box
predicts that a customer will default on payment with a probability of 1; the
local surrogate model, LIME, also predicts that the customer will default on
the payment, but with a probability of 0.99, slightly less than the black
box model’s prediction. LIME also shows which features contribute to the
decision making and by how much. Furthermore, LIME allows replacing the
underlying “black box” model by keeping the same local interpretable model for
the explanation. In addition, LIME works for tabular data, text, and images.
As LIME is an approximation model, and the local model might not cover the
complete attribution due to the generalization (e.g., using shorter trees,
lasso optimization), it might be unfit for cases where we legally need
complete explanations of a decision. Furthermore, there is no consensus on the
boundary of the neighborhood for the local model; sometimes, it provides very
different explanations for two nearby data points.
Figure 9: Local Interpretable Model-Agnostic Explanations (LIME)
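A stripped-down, one-feature sketch of the LIME idea (not the actual LIME implementation): sample perturbations around an instance, weight them by proximity, and fit a weighted linear model whose slope explains the black box locally. The step-function model and all constants are hypothetical:

```python
import math
import random

# Hedged one-feature LIME-style sketch: local weighted linear fit.

def lime_slope(predict, x0, n=200, width=30.0, seed=0):
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n)]     # perturbed samples
    ys = [predict(x) for x in xs]                          # black-box labels
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]  # proximity
    # Weighted least squares for y = a + b*x (closed form); return slope b.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return cov / var

black_box = lambda score: 1.0 if score <= 748 else 0.0    # hypothetical model
slope = lime_slope(black_box, x0=748.0)
# A negative slope says: locally, a higher credit score lowers default risk.
```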
#### 4.2.8 Shapley Values
Shapley is another local explanation method. In 1953, Shapley [38] coined the
Shapley Value. It is based on coalitional game theory that helps to distribute
feature importance among participating features fairly. Here the assumption is
that each feature value of the instance is a player in a game, and the
prediction is the overall payout that is distributed among players (i.e.,
features) according to their contribution to the total payout (i.e.,
prediction). We use Shapley values (see Figure 10) to analyze the prediction
of a random forest model for the credit default prediction problem. The actual
prediction for a random sample is 1.00, the average prediction from all
samples in the data set is 0.53, and their difference .47 (1.00 $-$ 0.53)
consists of the individual contributions from the features (e.g., Current
Actual UPB contributes 0.36). The Shapley value is a feature’s average
contribution to the prediction over all possible coalitions of features, which
makes it computationally expensive when there is a large number of
features—for example, for $k$ features, there are $2^{k}$ coalitions.
Unlike LIME, the Shapley value is an explanation method with a solid theory
that provides full explanations. However, it also suffers from the problem of
correlated features. Furthermore, the Shapley value returns a single value per
feature; there is no way to make a statement about the changes in output
resulting from changes in input. One notable implementation of the
Shapley value is the work of [39], which they call SHAP.
Figure 10: Shapley values
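The exact coalition average can be sketched for a small number of features; features outside a coalition are replaced by background values, and the model below is a hypothetical additive scorer (for which the Shapley values equal the weights times the feature values):

```python
from itertools import combinations
from math import factorial

# Hedged sketch of exact Shapley values over all coalitions (small k only).

def shapley(predict, x, background):
    n = len(x)
    def value(coalition):
        # Features outside the coalition are set to their background values.
        z = [x[i] if i in coalition else background[i] for i in range(n)]
        return predict(z)
    phi = []
    for i in range(n):
        total = 0.0
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi

predict = lambda z: 0.2 * z[0] + 0.8 * z[1]     # hypothetical additive model
x, background = [1.0, 1.0], [0.0, 0.0]
phi = shapley(predict, x, background)
# Shapley values sum to prediction(x) - prediction(background) by construction.
```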
#### 4.2.9 Break Down
The Break Down package provides local explanations and is loosely related
to the partial dependence algorithm with an added step-wise procedure known as
“Break Down” (proposed by [11]). It uses a greedy strategy to identify and
remove features iteratively based on their influence on the overall average
predicted response (baseline) [40]. For instance, from the game theory
perspective, it starts with an empty team, then adds feature values one by one
based on their decreasing contribution. In each iteration, the amount of
contribution from each feature depends on the feature values already in the
team, which is considered a drawback of this approach.
However, it is faster than the Shapley value method due to the greedy
approach, and for models without interactions, the results are the same [31].
Figure 11 is a visualization of break down for a random sample, showing
contribution (positive or negative) from each of the participating features
towards the final prediction.
Figure 11: Breakdown
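The greedy step-wise procedure can be sketched as follows: start from the baseline (all features at background values) and repeatedly fix the feature whose inclusion changes the prediction the most. The additive model is hypothetical; for such a model the greedy contributions coincide with the Shapley values, consistent with the no-interaction case noted above:

```python
# Hedged sketch of the greedy Break Down procedure.

def break_down(predict, x, background):
    n = len(x)
    current = list(background)
    remaining = set(range(n))
    steps = []
    while remaining:
        # Pick the not-yet-fixed feature with the largest prediction change.
        best = max(remaining,
                   key=lambda i: abs(predict(current[:i] + [x[i]] + current[i+1:])
                                     - predict(current)))
        before = predict(current)
        current[best] = x[best]
        steps.append((best, predict(current) - before))  # (feature, contribution)
        remaining.remove(best)
    return steps

predict = lambda z: 0.2 * z[0] + 0.8 * z[1]   # hypothetical additive model
steps = break_down(predict, [1.0, 1.0], [0.0, 0.0])
```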
### 4.3 Example-Based Explanations
Example-Based Explanation methods use particular instances from the dataset to
explain the behavior of the model and the distribution of the data in a model
agnostic way. It can be expressed as “X is similar to Y and Y caused Z, so the
prediction says X will cause Z”. According to [31], a few explanation methods
that fall under Example-Based Explanations are described as follows:
#### 4.3.1 Counterfactual
The counterfactual method indicates the required changes in the input side
that will have significant changes (e.g., reverse the prediction) in the
prediction/output. Counterfactual explanations can explain individual
predictions. For instance, it can provide an explanation that describes causal
situations such as “If A had not occurred, B would not have occurred”.
Although counterfactual explanations are human-friendly, they suffer from the
“Rashomon effect”, where each counterfactual explanation tells a different
story to reach a prediction. In other words, there are multiple true
explanations (counterfactual) for each instance level prediction, and the
challenge is how to choose the best one. The counterfactual methods do not
require access to data or models and could work with a system that does not
use machine learning at all. In addition, this method does not work well for
categorical variables with many values. For instance, if the credit score of
customer 5 (from Table III) can be increased to 749 (similar to the credit
score of customer 6) from 748, given other features values remain unchanged,
the customer will not default on a payment. In short, there can be multiple
different ways to tune feature values to make customers move from non-default
to default, or vice versa.
Traditional explanation methods are mostly based on explaining correlation
rather than causation. Moraffah et al. [41] focus on causally interpretable
models that explain the possible decisions under different situations, such as
being trained with different inputs or hyperparameters. This causally
interpretable approach shares concepts with counterfactual analysis, as both
build on causal inference. Their work also suggests possible use in the
evaluation of fairness criteria of decisions.
TABLE III: Example-Based Explanations
Customer | Delinquency | Credit score | Defaulted
---|---|---|---
1 | 162 | 680 | yes
2 | 149 | 691 | yes
3 | 6 | 728 | yes
4 | 6 | 744 | yes
5 | 0 | 748 | yes
6 | 0 | 749 | no
7 | 0 | 763 | no
8 | 0 | 790 | no
9 | 0 | 794 | no
10 | 0 | 806 | no
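The customer-5 counterfactual from Table III can be sketched as a simple search: scan candidate values of one feature for the smallest change that flips the prediction of a (hypothetical) threshold model:

```python
# Hedged sketch of a one-feature counterfactual search.

def counterfactual(predict, x, feature_idx, candidates):
    original = predict(x)
    best = None
    for v in candidates:
        z = list(x)
        z[feature_idx] = v
        if predict(z) != original:                # prediction flipped
            delta = abs(v - x[feature_idx])
            if best is None or delta < best[1]:
                best = (v, delta)
    return best   # (new value, change size) or None if no flip found

predict = lambda row: "default" if row[0] <= 748 else "non-default"  # hypothetical
x = [748, 0]                                      # customer 5 from Table III
cf = counterfactual(predict, x, 0, candidates=range(600, 851))
# Smallest flip: raise the credit score from 748 to 749.
```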
#### 4.3.2 Adversarial
An adversarial technique is capable of flipping the decision using
counterfactual examples to fool the machine learner (i.e., small intentional
perturbations in input to make a false prediction). However, adversarial
examples could help to discover hidden vulnerabilities as well as to improve
the model. For instance, an attacker can intentionally design adversarial
examples to cause the AI system to make a mistake (i.e., fooling the machine),
which poses greater threats to cyber-security and autonomous vehicles. As an
example, the credit default prediction system can be fooled for customer 5,
just by increasing the credit score by 1 (see Table III), leading to a
reversed prediction.
Hartl et al. [42] emphasize understanding the implications of adversarial
samples on Recurrent Neural Network (RNN) based IDSs because RNNs are good for
sequential data analysis, and network traffic exhibits some sequential
patterns. They find that the adversarial training procedure can
significantly reduce the attack surface. Furthermore, [43] apply an
adversarial approach to finding the minimum modification of the input features
of an intrusion detection system needed to reverse the classification of a
misclassified instance. Besides satisfactory explanations of the reason for
misclassification, their approach also provides further diagnostic capabilities.
#### 4.3.3 Prototypes
Prototypes consist of a selected set of instances that represent the data very
well. Conversely, the set of instances that do not represent the data well are
called criticisms [44]. Determining the optimal number of prototypes and
criticisms is challenging. For example, customers 1 and 10 from Table III can
be treated as prototypes as those are strong representatives of the
corresponding target. On the other hand, customers 5 and 6 (from Table III)
can be treated as a criticism as the distance between the data points is
minimal, and they might be classified under either class from run to run of
the same or different models.
#### 4.3.4 Influential Instances
Influential instances are data points from the training set that are
influential for prediction and parameter determination of the model. While it
helps to debug the model and understand the behavior of the model better,
determining the right cutoff point to separate influential or non-influential
instances is challenging. For example, based on the values of feature credit
score and delinquency, customers 1, 2, 9, and 10 from Table III can be treated
as influential instances as those are strong representatives of the
corresponding target. On the other hand, customers 5 and 6 are not influential
instances, as those would be in the margin of the classification decision
boundary.
#### 4.3.5 k-nearest Neighbors Model
The prediction of the k-nearest neighbors model can be explained with the
k neighboring data points (the neighbors that were averaged to make the prediction).
A visualization of the individual cluster containing similar instances
provides an interpretation of why an instance is a member of a particular
group or cluster. For example, in Figure 12, the new sample (black circle) is
classified according to the other three (3-nearest neighbors) nearby
samples (one gray, two white). This visualization gives an interpretation of
why a particular sample is part of a particular class.
Figure 12: KNN
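Because the neighbors themselves are the explanation, a sketch only needs to return them alongside the prediction. The (feature vector, label) pairs below are hypothetical:

```python
# Hedged sketch: a k-NN prediction explained by its neighbors.

def knn_explain(train, query, k=3):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    neighbors = sorted(train, key=lambda item: dist(item[0], query))[:k]
    labels = [lbl for _, lbl in neighbors]
    prediction = max(set(labels), key=labels.count)   # majority vote
    return prediction, neighbors          # the neighbors ARE the explanation

train = [([700, 5], "default"), ([690, 12], "default"),
         ([760, 0], "non-default"), ([800, 0], "non-default"),
         ([745, 1], "default")]                       # hypothetical data
pred, why = knn_explain(train, query=[742, 2], k=3)
```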
Table IV summarizes the explainability methods from the perspective of (A)
whether the method approximates the model behavior (i.e., creates an illusion
of interpretability) or finds actual behavior, (B) whether the method alone is
inherently interpretable or not, (C) whether the interpretation method is
ante-hoc, that is, it incorporates explainability into a model from the
beginning, or post-hoc, where explainability is incorporated after the regular
training of the actual model (i.e., testing time), (D) whether the method is
model agnostic (i.e., works for any ML model) or specific to an algorithm, and
(E) whether the model is local, providing instance-level explanations, or
global, providing overall model behavior.
Our analysis suggests a gap in the literature: there is a lack of an
explainability method that is, at the same time, actual and direct (i.e., does
not create an illusion of explainability by approximating the model), model
agnostic, and local, so as to utilize the full potential of
explainability in different applications. There are some recent works
that bring external knowledge and infuse that into the model for better
interpretation. These XAI methods have the potential to fill the gap to some
extent by incorporating domain knowledge into the model in a model agnostic
and transparent way (i.e., not by illusion).
TABLE IV: Comparison of different explainability methods from a set of key perspectives (approximation or actual; inherent or not; post-hoc or ante-hoc; model-agnostic or model-specific; and global or local)
Method | Approx. | Inherent | Post/Ante | Agnos./Spec. | Global/Local
---|---|---|---|---|---
Linear/Logistic Regression | No | Yes | Ante | Specific | Both
Decision Trees | No | Yes | Ante | Specific | Both
Decision Rules | No | Yes | Ante | Specific | Both
k-Nearest Neighbors | No | Yes | Ante | Specific | Both
Partial Dependence Plot (PDP) | Yes | No | Post | Agnostic | Global
Individual Conditional Expectation (ICE) | Yes | No | Post | Agnostic | Both
Accumulated Local Effects (ALE) Plot | Yes | No | Post | Agnostic | Global
Feature Interaction | No | Yes | Both | Agnostic | Global
Feature Importance | No | Yes | Both | Agnostic | Global
Global Surrogate | Yes | No | Post | Agnostic | Global
Local Surrogate (LIME) | Yes | No | Post | Agnostic | Local
Shapley Values (SHAP) | Yes | No | Post | Agnostic | Local
Break Down | Yes | No | Post | Agnostic | Local
Counterfactual explanations | Yes | No | Post | Agnostic | Local
Adversarial examples | Yes | No | Post | Agnostic | Local
Prototypes | Yes | No | Post | Agnostic | Local
Influential instances | Yes | No | Post | Agnostic | Local
### 4.4 Other Techniques
Chen et al. [45] introduce instance-wise feature selection as a methodology
for model interpretation, where the model learns a function to extract a subset
of the most informative features for a particular instance. The feature selector
attempts to maximize the mutual information between the selected features and the
response variable. However, their approach is mostly limited to post-hoc
settings.
In a more recent work, [46] study explainable ML using information theory
where they quantify the effect of an explanation by the conditional mutual
information between the explanation and prediction considering user
background. Their approach provides personalized explanations based on the
background of the recipient, for instance, a different explanation for those
who know linear algebra and those who do not. However, this work is yet to be
considered a comprehensive approach that covers a variety of users and their
explanation needs. To understand the flow of information in a Deep
Neural Network (DNN), [47] analyzed different gradient-based attribution
methods that assign an attribution value (i.e., contribution or relevance) to
each input feature (i.e., neuron) of a network for each output neuron.
use a heatmap for better visualizations where a particular color represents
features that contribute positively to the activation of target output, and
another color for features that suppress the effect on it.
A survey on the visual representation of Convolutional Neural Networks (CNNs),
by [20], categorizes works based on a) visualization of CNN representations in
intermediate network layers, b) diagnosis of CNN representation for feature
space of different feature categories or potential representation flaws, c)
disentanglement of “the mixture of patterns” encoded in each filter of CNNs,
d) interpretable CNNs, and e) semantic disentanglement of CNN representations.
In the industrial control system, an alarm from the intrusion/anomaly
detection system has a very limited role unless the alarm can be explained
with more information. [5] design a layer-wise relevance propagation method
for DNN to map the abnormalities between the calculation process and features.
This process helps to compare the normal samples with abnormal samples for
better understanding with detailed information.
### 4.5 Knowledge Infusion Techniques
[48] propose a concept attribution-based approach (i.e., sensitivity to the
concept) that provides an interpretation of the neural network’s internal
state in terms of human-friendly concepts. Their approach, Testing with
Concept Activation Vectors (TCAV), quantifies the prediction’s sensitivity to a
high-dimensional concept. For example, given a user-defined set of examples
that defines the concept ’striped’, TCAV can quantify the influence of
’striped’ in the prediction of ’zebra’ as a single number. However, their work
is only for image
classification and falls under the post-modeling notion (i.e., post-hoc) of
explanation.
[49] propose a knowledge-infused learning that measures information loss in
latent features learned by the neural networks through Knowledge Graphs (KGs).
This external knowledge incorporation (via KGs) aids in supervising the
learning of features for the model. Although much work remains, they believe
that KGs will play a crucial role in developing explainable AI systems.
[50] and [51] infuse popular domain principles into the model
and represent the output in terms of those principles for explainable
decisions. In [50], for a bankruptcy prediction problem they use the 5C’s of
credit as the domain principle which is commonly used to analyze key factors:
character (reputation of the borrower/firm), capital (leverage), capacity
(volatility of the borrower’s earnings), collateral (pledged asset) and cycle
(macroeconomic conditions) [52], [53]. In [51], for an intrusion detection and
response problem, they incorporate the CIA principles into the model; C stands
for confidentiality—concealment of information or resources, I stands for
integrity—trustworthiness of data or resources, and A stands for
availability—ability to use the information or resource desired [54]. In both
cases, the infusion of domain knowledge leads to better explainability of the
prediction with negligible compromises in performance. It also comes with
better execution time and a more generalized model that works better with
unknown samples.
Although these works [50, 51] come with unique combinations of merits such as
model agnosticism, the capability of both local and global explanation, and
authenticity of explanation—simulation or emulation free, they are still not
fully off-the-shelf systems due to some domain-specific configuration
requirements. Much work still remains and needs further attention.
## 5 Quantifying Explainability and Future Research Directions
### 5.1 Quantifying Explainability
The quantification or evaluation of explainability is an open challenge. There
are two primary directions of research towards the evaluation of
explainability of an AI/ML model: (1) model complexity-based, and (2) human
study-based.
#### 5.1.1 Model Complexity-based Explainability Evaluation
In the literature, model complexity and (lack of) model interpretability are
often treated as the same [10]. For instance, in [55], [56], model size is
often used as a measure of interpretability (e.g., number of decision rules,
depth of the tree, number of non-zero coefficients).
[56] propose a scalable Bayesian Rule List (i.e., probabilistic rule list)
consisting of a sequence of IF-THEN rules, identical to a decision list or
one-sided decision tree. Unlike the decision tree that uses greedy splitting
and pruning, their approach produces a highly sparse and accurate rule list
with a balance between interpretability, accuracy, and computation speed.
Similarly, the work of [55] is also rule-based. They evaluate the quality of
the rules produced by a rule learning algorithm with two measures: observed
coverage, the number of positive examples covered by the rule, which should be
maximized so that the rule explains the training data well; and consistency,
the number of negative examples covered by the rule, which should be minimized
so that the rule generalizes well to unseen data.
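The two rule-quality measures can be sketched directly. This is an illustrative toy, not the algorithm of [55]; the `rule`, `positives`, and `negatives` names and the feature names are hypothetical:

```python
def rule_coverage(rule, positives):
    """Observed coverage: number of positive examples the rule fires on
    (to be maximized, so the rule explains the training data well)."""
    return sum(1 for x in positives if rule(x))

def rule_consistency(rule, negatives):
    """Consistency: number of negative examples the rule fires on
    (to be minimized, so the rule generalizes to unseen data)."""
    return sum(1 for x in negatives if rule(x))

# Toy IF-THEN rule and data (hypothetical feature names).
rule = lambda x: x["income"] > 50 and x["debt"] < 10
positives = [{"income": 60, "debt": 5}, {"income": 80, "debt": 2},
             {"income": 40, "debt": 1}]
negatives = [{"income": 70, "debt": 20}, {"income": 55, "debt": 3}]
```

A rule learner would then search for rules with high coverage and low consistency counts.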
According to [57], while the number of features and the size of the decision
tree are directly related to interpretability, the optimization of the tree
size or features (i.e., feature selection) is costly as it requires the
generation of a large set of models and their elimination in subsequent steps.
However, reducing the tree size (i.e., reducing complexity) increases the
error, and they could not formulate this trade-off in a simple functional
form. More recently, [10] attempts to quantify the complexity of an arbitrary
machine learning model with a model agnostic measure. In that work, the author
demonstrates that when the feature interaction (i.e., the correlation among
features) increases, the quality of representations of explainability tools
degrades. For instance, the explainability tool ALE Plot (see Figure 5) starts
to show harsh (i.e., zigzag) lines as feature interaction increases. In
other words, with more interaction comes a more combined influence in the
prediction, induced from different correlated subsets of features (at least
two), which ultimately makes it hard to understand the causal relationship
between input and output, compared to an individual feature influence in the
prediction. In fact, from our study of different explainability tools (e.g.,
LIME, SHAP, PDP), we have found that the correlation among features is a key
stumbling block to represent feature contribution in a model agnostic way.
Keeping the issue of feature interactions in mind, [10] propose a technique
that uses three measures: number of features, interaction strength among
features, and the main effect (excluding the interaction part) of features, to
measure the complexity of a post-hoc model for explanation.
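As an illustration of the idea behind interaction strength, the following sketch measures the fraction of a model's prediction variance that an additive, main-effects-only approximation fails to capture. This is a simplified proxy built from partial-dependence-style averages, not the functional decomposition actually used in [10]:

```python
import numpy as np

def interaction_strength(predict, X):
    """Fraction of the prediction's variance NOT captured by an additive
    (main-effects-only) approximation. 0 means the model is purely additive;
    larger values mean stronger feature interactions. Simplified proxy only."""
    n, d = X.shape
    f = predict(X)
    f0 = f.mean()
    main = np.full(n, f0)
    for j in range(d):
        for i in range(n):
            Xj = X.copy()
            Xj[:, j] = X[i, j]              # fix feature j at its i-th value
            main[i] += predict(Xj).mean() - f0   # centered partial dependence
    return (f - main).var() / f.var()
```

For an additive model the proxy is (numerically) zero; for a multiplicative model, where the influence of one feature depends on another, it is large.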
Although [10] mainly focuses on model complexity for post-hoc models, their
work was a foundation for the approach by [58] for the quantification of
explainability. Their approach to quantify explainability is model agnostic
and is for a model of any notion (e.g., pre-modeling, post-hoc) using proxy
tasks that do not involve a human. Instead, they use known truth as a metric
(e.g., the fewer the features, the more explainable the model). Their
proposed formula for explainability gives a score in between 0 and 1 for
explainability based on the number of cognitive chunks (i.e., individual
pieces of information) used on the input side and output side, and the extent
of interaction among those cognitive chunks.
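A hypothetical scoring function in the spirit of [58] might look as follows. The exact published formula differs; the functional form and the `max_chunks` normalization constant here are our assumptions:

```python
def explainability_score(n_input_chunks, n_output_chunks, interaction,
                         max_chunks=10):
    """Hypothetical score in [0, 1]: fewer cognitive chunks on the input and
    output sides, and weaker interaction among them (interaction in [0, 1]),
    give a higher score. Illustrative; not the exact formula of [58]."""
    chunk_penalty = min((n_input_chunks + n_output_chunks)
                        / (2.0 * max_chunks), 1.0)
    return (1.0 - chunk_penalty) * (1.0 - interaction)
```

Under this toy scoring, a model explained with two input chunks scores higher than one needing eight, and any increase in chunk interaction lowers the score.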
#### 5.1.2 Human Study-based Explainability Evaluation
The following works deal with the application-level and human-level evaluation
of explainability involving human studies.
[26] investigate the suitability of different alternative representation
formats (e.g., decision tables, (binary) decision trees, propositional rules,
and oblique rules) for classification tasks primarily focusing on the
explainability of results rather than accuracy or precision. They discover
that decision tables are the best in terms of accuracy, response time, the
confidence of answer, and ease of use.
[24] argue that interpretability is not an absolute concept; instead, it is
relative to the target model, and may or may not be relative to the human.
Their finding suggests that a model is readily interpretable to a human when
it uses no more than seven pieces of information [59], although this might
vary from task to task and person to person. For instance, a domain expert
might consume far more detailed information, depending on their experience.
The work of [27] is a human-centered approach, focusing on previous work on
human trust in a model from psychology, social science, machine learning, and
human-computer interaction communities. In their experiment with human
subjects, they vary factors (e.g., number of features, whether the model
internals are transparent or a black box) that make a model more or less
interpretable and measure how the variation impacts the predictions of human
subjects. Their results suggest that participants who were shown a transparent
model with a small number of features were more successful in simulating the
model’s predictions and trusted the model’s predictions.
[25] investigate interpretability of a model based on two of its definitions:
simulatability, which is a user’s ability to predict the output of a model on
a given input; and “what if” local explainability, which is a user’s ability
to predict changes in prediction in response to changes in input, given the
user has the knowledge of a model’s original prediction for the original
input. They introduce a simple metric called runtime operation count that
measures the interpretability, that is, the number of operations (e.g., the
arithmetic operation for regression, the boolean operation for trees) needed
in a user’s mind to interpret something. Their findings suggest that
interpretability decreases with an increase in the number of operations.
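The runtime operation count of [25] can be illustrated with two toy counters. The cost model is an assumption for illustration: roughly one multiplication and one addition per feature for a linear model, and one boolean comparison per level traversed for a tree:

```python
def linear_model_ops(n_features):
    """Mental operations to simulate a linear regression: one multiplication
    and one addition per feature (assumed cost model)."""
    return 2 * n_features

def tree_path_ops(depth):
    """Mental operations to simulate a decision tree: one boolean comparison
    per level on the root-to-leaf path."""
    return depth

# e.g., a depth-3 tree (3 ops) asks less of the user than a
# 10-feature regression (20 ops), so it counts as more interpretable.
```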
Despite some progress, there are still some open challenges surrounding
explainability such as an agreement of what an explanation is and to whom; a
formalism for the explanation; and quantifying the human comprehensibility of
the explanation. Other challenges include addressing more comprehensive human
studies requirements and investigating the effectiveness among different
approaches (e.g., supervised, unsupervised, semi-supervised) for various
application areas (e.g., natural language processing, image recognition).
### 5.2 Future Research Directions
The long term goal for current AI initiatives is to contribute to the design,
development, and deployment of human-centered artificial intelligent systems,
where the agents collaborate with the human in an interpretable and
explainable manner, with the intent on ensuring fairness, transparency, and
accountability. To accomplish that goal, we propose a set of research
plans/directions towards achieving responsible or human-centered AI using XAI
as a medium.
#### 5.2.1 A Generic Framework to Formalize Explainable Artificial
Intelligence
The work in [50] and [51] demonstrates a way to collect and leverage domain
knowledge from two different domains, finance and cybersecurity, and to
infuse that knowledge into black-box models for better explainability. In
both of these works, competitive performance with enhanced explainability is
achieved. However, there are some open challenges such as (A) a lack of
formalism of the explanation, (B) a customized explanation for different types
of explanation recipients (e.g., layperson, domain expert, another machine),
(C) a way to quantify the explanation, and (D) quantifying the level of
comprehensibility with human studies. Therefore, leveraging the knowledge from
multiple domains, a generic framework could be useful considering the
mentioned challenges. As a result, mission-critical applications from
different domains will be able to leverage the black-box model with greater
confidence and regulatory compliance.
#### 5.2.2 Towards Fair, Accountable, and Transparent AI-based Models
Responsible use of AI is crucial for avoiding risks stemming from a lack of
fairness, accountability, and transparency in the model. Remediation of data,
algorithmic, and societal biases is vital to promote fairness; the AI
system/adopter should be held accountable to affected parties for its
decision; and finally, an AI system should be analyzable, where the degree of
transparency should be comprehensible to have trust in the model and its
prediction for mission-critical applications. Interestingly, XAI enhances
understanding directly, increasing trust as a side-effect. In addition, the
explanation techniques can help in uncovering potential risks (e.g., what are
possible fairness risks). So it is crucial to adhere to fairness,
accountability, and transparency principles in the design and development of
explainable models.
#### 5.2.3 Human-Machine Teaming
To ensure the responsible use of AI, the design, development, and deployment
of human-centered AI, that collaborates with the humans in an explainable
manner, is essential. Therefore, the explanation from the model needs to be
comprehensible by the user, and there might be some supplementary questions
that need to be answered for a clear explanation. So, the interaction (e.g.,
follow-ups after the initial explanation) between humans and machines is
important. The interaction is more crucial for adaptive explainable models
that provide context-aware explanations based on user profiles such as
expertise, domain knowledge, interests, and cultural backgrounds. The social
sciences and human behavioral studies have the potential to impact XAI and
human-centered AI research. Unfortunately, the Human-Computer Interaction
(HCI) community remains somewhat isolated. The combination of HCI empirical studies
and human science theories could be a compelling force for the design of
human-centered AI models as well as furthering XAI research. Therefore,
efforts to bring a human into the loop, enabling the model to receive input
(repeated feedback) from the provided visualization/explanations to the human,
and improving itself with the repeated interactions, has the potential to
further human-centered AI. Besides adherence to fairness, accountability, and
transparency, the effort will also help in developing models that adhere to
our ethics, judgment, and social norms.
#### 5.2.4 Collective Intelligence from Multiple Disciplines
From the explanation perspective, there is plenty of research in philosophy,
psychology, and cognitive science on how people generate, select, evaluate,
and represent explanations and associate cognitive biases and social
expectations in the explanation process. In addition, from the interaction
perspective, human-computer teaming involving social science, the HCI
community, and social-behavioral studies could combine for further
breakthroughs. Furthermore, from the application perspective, the collectively
learned knowledge from different domains (e.g., Health-care, Finance,
Medicine, Security, Defense) can contribute to furthering human-centric AI and
XAI research. Thus, there is a need for a growing interest in
multidisciplinary research to promote human-centric AI as well as XAI in
mission-critical applications from different domains.
## 6 Conclusion
We have demonstrated and analyzed popular XAI methods on a common test case,
explained their competitive advantages, and elucidated the remaining
challenges and further research directions. Most of the available works on
XAI are on the post-hoc
notion of explainability. However, the post-hoc notion of explainability is
not purely transparent and can be misleading, as it explains the decision
after it has been made. The explanation algorithm can be optimized to placate
subjective demand, primarily stemming from the emulation effort of the actual
prediction, and the explanation can be misleading, even when it seems
plausible [60, 61]. Thus, many suggest not to explain black-box models using
post-hoc notions, instead, they suggest adhering to simple and intrinsically
explainable models for high stakes decisions [17]. Furthermore, from the
literature review, we find that explainability in pre-modeling is a viable,
albeit under-explored, option for avoiding transparency-related issues. In
addition, knowledge infusion techniques have the potential to greatly enhance
explainability, although they too remain under-explored. Therefore,
we need more focus on the explainability of “black box” models using domain
knowledge. At the same time, we need to focus on the evaluation or
quantification of explainability using both human and non-human studies. We
believe this review provides a good insight into the current progress on XAI
approaches, evaluation and quantification of explainability, open challenges,
and a path towards responsible or human-centered AI using XAI as a medium.
## Acknowledgments
Our sincere thanks to Christoph Molnar for his open E-book on Interpretable
Machine Learning and contribution to the open-source R package “iml”. Both
were very useful in conducting this survey.
## References
* [1] G. Ras, M. van Gerven, and P. Haselager, “Explanation methods in deep learning: Users, values, concerns and challenges,” in _Explainable and Interpretable Models in Computer Vision and Machine Learning_. Springer, 2018, pp. 19–36.
* [2] B. Goodman and S. Flaxman, “EU regulations on algorithmic decision-making and a “right to explanation”,” in _ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY_, 2016. arXiv:1606.08813v1.
* [3] B. Wyden, “Algorithmic accountability,” https://www.wyden.senate.gov/imo/media/doc/Algorithmic%20Accountability%20Act%20of%202019%20Bill%20Text.pdf, (Accessed on 11/21/2019).
* [4] M. T. Esper, “Ai ethical principles,” https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/, February 2020, (Accessed on 03/07/2020).
* [5] Z. Wang, Y. Lai, Z. Liu, and J. Liu, “Explaining the attributes of a deep learning based intrusion detection system for industrial control networks,” _Sensors_ , vol. 20, no. 14, p. 3817, 2020.
* [6] W. Samek, T. Wiegand, and K.-R. Müller, “Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models,” _arXiv preprint arXiv:1708.08296_ , 2017.
* [7] A. Fernandez, F. Herrera, O. Cordon, M. J. del Jesus, and F. Marcelloni, “Evolutionary fuzzy systems for explainable artificial intelligence: why, when, what for, and where to?” _ieee Computational intelligenCe magazine_ , vol. 14, no. 1, pp. 69–81, 2019.
* [8] A. Adadi and M. Berrada, “Peeking inside the black-box: A survey on explainable artificial intelligence (xai),” _IEEE Access_ , vol. 6, pp. 52 138–52 160, 2018.
* [9] S. T. Mueller, R. R. Hoffman, W. Clancey, A. Emrey, and G. Klein, “Explanation in human-ai systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for explainable ai,” _arXiv preprint arXiv:1902.01876_ , 2019.
* [10] C. Molnar, G. Casalicchio, and B. Bischl, “Quantifying model complexity via functional decomposition for better post-hoc interpretability,” in _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_. Springer, 2019, pp. 193–204.
* [11] M. Staniak and P. Biecek, “Explanations of model predictions with live and breakdown packages,” _arXiv preprint arXiv:1804.01955_ , 2018.
* [12] L. H. Gilpin, D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal, “Explaining explanations: An overview of interpretability of machine learning,” in _2018 IEEE 5th International Conference on data science and advanced analytics (DSAA)_. IEEE, 2018, pp. 80–89.
* [13] D. Collaris, L. M. Vink, and J. J. van Wijk, “Instance-level explanations for fraud detection: A case study,” _arXiv preprint arXiv:1806.07129_ , 2018.
* [14] F. K. Došilović, M. Brčić, and N. Hlupić, “Explainable artificial intelligence: A survey,” in _2018 41st International convention on information and communication technology, electronics and microelectronics (MIPRO)_. IEEE, 2018, pp. 0210–0215.
* [15] E. Tjoa and C. Guan, “A survey on explainable artificial intelligence (xai): towards medical xai,” _arXiv preprint arXiv:1907.07374_ , 2019.
* [16] F. Doshi-Velez and B. Kim, “Towards a rigorous science of interpretable machine learning,” _arXiv preprint arXiv:1702.08608_ , 2017.
* [17] C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” _Nature Machine Intelligence_ , vol. 1, no. 5, pp. 206–215, 2019.
* [18] A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins _et al._ , “Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai,” _Information Fusion_ , vol. 58, pp. 82–115, 2020.
* [19] T. Miller, “Explanation in artificial intelligence: Insights from the social sciences,” _Artificial Intelligence_ , 2018.
* [20] Q.-s. Zhang and S.-C. Zhu, “Visual interpretability for deep learning: a survey,” _Frontiers of Information Technology & Electronic Engineering_, vol. 19, no. 1, pp. 27–39, 2018.
* [21] B. Chandrasekaran, M. C. Tanner, and J. R. Josephson, “Explaining control strategies in problem solving,” _IEEE Intelligent Systems_ , no. 1, pp. 9–15, 1989.
* [22] W. R. Swartout and J. D. Moore, “Explanation in second generation expert systems,” in _Second generation expert systems_. Springer, 1993, pp. 543–585.
* [23] W. R. Swartout, “Rule-based expert systems: The mycin experiments of the stanford heuristic programming project: Bg buchanan and eh shortliffe,(addison-wesley, reading, ma, 1984); 702 pages,” 1985.
* [24] A. Dhurandhar, V. Iyengar, R. Luss, and K. Shanmugam, “Tip: Typifying the interpretability of procedures,” _arXiv preprint arXiv:1706.02952_ , 2017.
* [25] S. A. Friedler, C. D. Roy, C. Scheidegger, and D. Slack, “Assessing the local interpretability of machine learning models,” _arXiv preprint arXiv:1902.03501_ , 2019.
* [26] J. Huysmans, K. Dejaeger, C. Mues, J. Vanthienen, and B. Baesens, “An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models,” _Decision Support Systems_ , vol. 51, no. 1, pp. 141–154, 2011.
* [27] F. Poursabzi-Sangdeh, D. G. Goldstein, J. M. Hofman, J. W. Vaughan, and H. Wallach, “Manipulating and measuring model interpretability,” _arXiv preprint arXiv:1802.07810_ , 2018.
* [28] Q. Zhou, F. Liao, C. Mou, and P. Wang, “Measuring interpretability for different types of machine learning models,” in _Pacific-Asia Conference on Knowledge Discovery and Data Mining_. Springer, 2018, pp. 295–308.
* [29] “Single family loan level dataset - freddie mac.” [Online]. Available: http://www.freddiemac.com/research/datasets/sf_loanlevel_dataset.page
* [30] “Iml-cran package.” [Online]. Available: https://cran.r-project.org/web/packages/iml/index.html
* [31] C. Molnar _et al._ , “Interpretable machine learning: A guide for making black box models explainable,” E-book at https://christophm.github.io/interpretable-ml-book/, 2018.
* [32] J. H. Friedman, B. E. Popescu _et al._ , “Predictive learning via rule ensembles,” _The Annals of Applied Statistics_ , vol. 2, no. 3, pp. 916–954, 2008.
* [33] J. H. Friedman, “Greedy function approximation: a gradient boosting machine,” _Annals of statistics_ , pp. 1189–1232, 2001.
* [34] A. Goldstein, A. Kapelner, J. Bleich, and M. A. Kapelner, “Package ‘icebox’,” 2017.
* [35] L. Breiman, “Random forests,” _Machine learning_ , vol. 45, no. 1, pp. 5–32, 2001.
* [36] A. Fisher, C. Rudin, and F. Dominici, “Model class reliance: Variable importance measures for any machine learning model class, from the “rashomon” perspective,” _arXiv preprint arXiv:1801.01489_ , 2018.
* [37] M. T. Ribeiro, S. Singh, and C. Guestrin, “Why should i trust you?: Explaining the predictions of any classifier,” in _Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining_. ACM, 2016, pp. 1135–1144.
* [38] L. S. Shapley, “A value for n-person games,” _Contributions to the Theory of Games_ , vol. 2, no. 28, pp. 307–317, 1953.
* [39] S. Lundberg and S.-I. Lee, “An unexpected unity among methods for interpreting model predictions,” _arXiv preprint arXiv:1611.07478_ , 2016.
* [40] B. B. . B. Greenwell, “Chapter 16 interpretable machine learning — hands-on machine learning with r,” https://bradleyboehmke.github.io/HOML/iml.html, (Accessed on 11/28/2019).
* [41] R. Moraffah, M. Karami, R. Guo, A. Raglin, and H. Liu, “Causal interpretability for machine learning-problems, methods and evaluation,” _ACM SIGKDD Explorations Newsletter_ , vol. 22, no. 1, pp. 18–33, 2020.
* [42] A. Hartl, M. Bachl, J. Fabini, and T. Zseby, “Explainability and adversarial robustness for rnns,” _arXiv preprint arXiv:1912.09855_ , 2019.
* [43] D. L. Marino, C. S. Wickramasinghe, and M. Manic, “An adversarial approach for explainable ai in intrusion detection systems,” in _IECON 2018-44th Annual Conference of the IEEE Industrial Electronics Society_. IEEE, 2018, pp. 3237–3243.
* [44] B. Kim, R. Khanna, and O. O. Koyejo, “Examples are not enough, learn to criticize! criticism for interpretability,” in _Advances in Neural Information Processing Systems_ , 2016, pp. 2280–2288.
* [45] J. Chen, L. Song, M. J. Wainwright, and M. I. Jordan, “Learning to explain: An information-theoretic perspective on model interpretation,” _arXiv preprint arXiv:1802.07814_ , 2018.
* [46] A. Jung and P. H. J. Nardelli, “An information-theoretic approach to personalized explainable machine learning,” _IEEE Signal Processing Letters_ , 2020.
* [47] M. Ancona, E. Ceolini, C. Öztireli, and M. Gross, “Towards better understanding of gradient-based attribution methods for deep neural networks,” _arXiv preprint arXiv:1711.06104_ , 2017.
* [48] B. Kim, M. Wattenberg, J. Gilmer, C. Cai, J. Wexler, F. Viegas, and R. Sayres, “Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav),” _arXiv preprint arXiv:1711.11279_ , 2017.
* [49] U. Kursuncu, M. Gaur, and A. Sheth, “Knowledge infused learning (k-il): Towards deep incorporation of knowledge in deep learning,” _arXiv preprint arXiv:1912.00512_ , 2019.
* [50] S. R. Islam, W. Eberle, S. Bundy, and S. K. Ghafoor, “Infusing domain knowledge in ai-based” black box” models for better explainability with application in bankruptcy prediction,” _ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2019, Anomaly Detection in Finance Workshop_ , 2019.
* [51] S. R. Islam, W. Eberle, S. K. Ghafoor, A. Siraj, and M. Rogers, “Domain knowledge aided explainable artificial intelligence for intrusion detection and response,” _arXiv preprint arXiv:1911.09853_ , 2019.
* [52] E. Angelini, G. di Tollo, and A. Roli, “A neural network approach for credit risk evaluation,” _The quarterly review of economics and finance_ , vol. 48, no. 4, pp. 733–755, 2008.
* [53] J. Segal, “Five cs of credit.” [Online]. Available: https://www.investopedia.com/terms/f/five-c-credit.asp
* [54] B. Matt _et al._ , _Introduction to computer security_. Pearson Education India, 2006.
* [55] J. Fürnkranz, D. Gamberger, and N. Lavrač, “Rule learning in a nutshell,” in _Foundations of Rule Learning_. Springer, 2012, pp. 19–55.
* [56] H. Yang, C. Rudin, and M. Seltzer, “Scalable bayesian rule lists,” in _Proceedings of the 34th International Conference on Machine Learning-Volume 70_. JMLR. org, 2017, pp. 3921–3930.
* [57] S. Rüping _et al._ , “Learning interpretable models,” 2006.
* [58] S. R. Islam, W. Eberle, and S. K. Ghafoor, “Towards quantification of explainability in explainable artificial intelligence methods,” _arXiv preprint arXiv:1911.10104_ , 2019.
* [59] G. A. Miller, “The magical number seven, plus or minus two: Some limits on our capacity for processing information.” _Psychological review_ , vol. 63, no. 2, p. 81, 1956.
* [60] Z. C. Lipton, “The mythos of model interpretability,” _arXiv preprint arXiv:1606.03490_ , 2016.
* [61] P. Gandhi, “Explainable artificial intelligence.” [Online]. Available: https://www.kdnuggets.com/2019/01/explainable-ai.html
# Machine learning the dynamics of quantum kicked rotor
Tomohiro Mano and Tomi Ohtsuki, Physics Division, Sophia University, Kioicho
7-1, Chiyoda-ku, Tokyo 102-8554, Japan.<EMAIL_ADDRESS>
###### Abstract
Using the multilayer convolutional neural network (CNN), we can detect the
quantum phases in random electron systems, and phase diagrams of two and
higher dimensional Anderson transitions and quantum percolations as well as
disordered topological systems have been obtained. Here, instead of using CNN
to analyze the wave functions, we analyze the dynamics of wave packets via
long short-term memory network (LSTM). We adopt the quasi-periodic quantum
kicked rotors, which simulate the three and four dimensional Anderson
transitions. By supervised training, we let LSTM extract the features of the
time series of wave packet displacements in localized and delocalized phases.
We then simulate the wave packets in unknown phases and let LSTM classify the
time series to localized and delocalized phases. We compare the phase diagrams
obtained by LSTM and those obtained by CNN.
###### keywords:
Anderson transition, quantum phase transition, quantum kicked rotor, machine
learning, convolutional neural network, long short-term memory network
## 1 Introduction
Critical behaviors of the Anderson transition[1] have been attracting
considerable attention for more than half a century. The problem is related to
quantum percolation[2, 3, 4, 5, 6], where the wave functions on the randomly
connected lattice sites begin to be extended [7]. Electron states on random
lattice systems are difficult to study, because the conventional methods of
using the transfer matrix[8] are not applicable. The scaling analyses of the
energy level statistics[9] are also difficult, if not impossible[10, 11],
owing to the spiky density of states [6].
To overcome these difficulties, neural networks [12, 13, 14, 15] have been
used to classify the states [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. That
is, instead of classifying photographic images, we input the wave functions
(actually their squared moduli) at the Fermi energy and classify them as
metal, insulator, topological insulator, etc. First, we train the
convolutional neural network on the Anderson model of localization, whose phase
diagram is well known. We then apply the CNN to classify the eigenfunctions of
quantum percolation as metal or insulator, relying on the generalization
capability of CNN. We have shown in refs. [27, 28] that this method is free
from the above difficulties and works well in determining the phase diagrams
of quantum percolation.
The above method, however, requires many eigenfunctions, which are difficult
to obtain in higher dimensions. One way to study the Anderson
transition without diagonalizing the Hamiltonian is to use quantum kicked
rotor (QKR) [29, 30, 31], where we analyze the wave packet dynamics in one
dimension. The simple quantum kicked rotor can be mapped to one-dimensional
(1D) Anderson model [32], whereas with incommensurate modulation of the kick
strength, the model is mapped to tight-binding models in higher
dimensions [33].
In this paper, we draw phase diagrams of the three dimensional (3D) and four
dimensional (4D) tight binding models that correspond to the QKR using the CNN
trained for standard Anderson models of localization. We also analyze the time
series of QKR via long short-term memory (LSTM) network[34, 35], let LSTM
classify the time series to localized/delocalized phases, draw the phase
diagrams, and compare them with those drawn by the CNN analyses of tight
binding models. We demonstrate that the phase boundaries of localized and
delocalized phases are less noisy in the case of LSTM.
## 2 Model and Method
We consider QKR with incommensurate modulation of the kick as follows:
$H(t)=\frac{p^{2}}{2}+K\cos x\times\sum_{n}\delta(t-n)\times F(t)\,,$ (1)
with
$F(t)=\left\\{\begin{array}[]{ll}1+\epsilon\cos(\omega_{2}t+\theta_{2})\times\cos(\omega_{3}t+\theta_{3})&\mathrm{3D}\\\
1+\epsilon\cos(\omega_{2}t+\theta_{2})\times\cos(\omega_{3}t+\theta_{3})\times\cos(\omega_{4}t+\theta_{4})&\mathrm{4D}\end{array}\right.\,$
(2)
where $\omega_{i}$ are irrational numbers that are incommensurate with each
other, $K$ the strength of the kick, $\epsilon$ the modulation strength, and
$\theta_{i}$ the initial phases. We took
$\omega_{2}=2\pi\sqrt{5},\omega_{3}=2\pi\sqrt{13}$ and
$\omega_{4}=2\pi\sqrt{23}$ [29, 30, 31].
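The modulation $F(t)$ of Eq. (2) is straightforward to compute; a minimal sketch (function and argument names are ours):

```python
import math

# Incommensurate frequencies used in the paper.
OMEGA = {2: 2 * math.pi * math.sqrt(5),
         3: 2 * math.pi * math.sqrt(13),
         4: 2 * math.pi * math.sqrt(23)}

def F(t, eps, theta, dim=3):
    """Kick modulation of Eq. (2); theta maps index i to the initial phase
    theta_i. dim=3 uses two extra frequencies, dim=4 uses three."""
    prod = math.cos(OMEGA[2] * t + theta[2]) * math.cos(OMEGA[3] * t + theta[3])
    if dim == 4:
        prod *= math.cos(OMEGA[4] * t + theta[4])
    return 1.0 + eps * prod
```

For $\epsilon=0$ the modulation reduces to $F(t)=1$, recovering the simple 1D kicked rotor.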
We analyze the QKR in two ways. One way is to map this model to tight binding
models,
$H_{\mathrm{tb}}=\sum_{m}\varepsilon_{m}|m\rangle\langle
m|+\sum_{m,r}W_{r}|m\rangle\langle m-r|\,,$ (3)
with $m=(m_{1},m_{2},m_{3})$,
$\varepsilon_{m}=\tan\left[-\frac{1}{2}(m_{1}^{2}\hbar/2+\omega_{2}m_{2}+\omega_{3}m_{3})\right]$
and $W_{r_{1},r_{2},r_{3}}$ the Fourier transform of
$W(x_{1},x_{2},x_{3})=\tan\left[\frac{K\cos x_{1}(1+\epsilon\cos x_{2}\cos
x_{3})}{2\hbar}\right]$ for 3D [30, 32]. For 4D, we include $m_{4},r_{4}$ and
$x_{4}$ in a straightforward way. We diagonalize $H_{\mathrm{tb}}$ numerically
to obtain the eigenfunctions, and let CNN determine whether they are localized
or delocalized. Details of the CNN method are reviewed in ref. [14]. Note that
$H_{\mathrm{tb}}$ is defined on 3D cubic lattice for the case of two
incommensurate frequencies ($\omega_{2}$ and $\omega_{3}$), whereas it is on
4D hypercubic lattice in the case of three incommensurate frequencies
($\omega_{2},\omega_{3},$ and $\omega_{4}$). Note also that we use CNN that
has been trained for Anderson models of localization [27, 14].
The other way is to solve the time dependent Schrödinger equation
$i\hbar\frac{d}{dt}\psi(t)=H(t)\psi(t)\,,$ (4)
with $\hbar$ set to 2.89 [29, 30, 31], calculate the time dependence of
“displacement” in momentum space,
$p^{2}(t)=\langle\psi(t)|p^{2}|\psi(t)\rangle\,,$ (5)
and analyze the time series of $p^{2}(t)$ via LSTM. To follow the wave packet
time evolution from $t=n+\eta$ to $t=n+1+\eta$, with $\eta$ an infinitesimally
small positive number, we use
$\psi(n+1)=\exp\left(-i\frac{K\,F(n+1)\,\cos
x}{\hbar}\right)\times\exp\left(-i\frac{p^{2}}{2\hbar}\right)\psi(n)\,.$ (6)
We work in the $p$-space, and the multiplication of $\exp(-ia\cos x)$
($a=KF(n+1)/\hbar$) is expanded in the $p$-space as $\langle p|\exp(-ia\cos
x)|p^{\prime}\rangle$, which is expressed by Bessel functions.
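The one-period map of Eq. (6) can be sketched with FFTs: the free-rotation factor is diagonal in $p$, while the kick factor is diagonal in $x$, so transforming to the angle grid and back is numerically equivalent to the Bessel-function expansion described above. A sketch under our own conventions, not the authors' code:

```python
import numpy as np

def qkr_step(psi_m, K, hbar, Ft):
    """One period of the map in Eq. (6). psi_m holds momentum amplitudes in
    numpy fft ordering (m = 0, 1, ..., N/2-1, -N/2, ..., -1); the kick
    exp(-i K F cos(x)/hbar) is applied on the angle grid x_j = 2*pi*j/N."""
    N = psi_m.size
    m = np.fft.fftfreq(N, d=1.0 / N)                  # integer momentum indices
    psi_m = np.exp(-1j * hbar * m**2 / 2.0) * psi_m   # free part: p^2/(2 hbar), p = hbar*m
    x = 2.0 * np.pi * np.arange(N) / N
    psi_x = np.fft.ifft(psi_m) * N                    # psi(x_j) = sum_m psi_m e^{i m x_j}
    psi_x *= np.exp(-1j * K * Ft * np.cos(x) / hbar)  # kick part
    return np.fft.fft(psi_x) / N                      # back to the momentum basis

def p_squared(psi_m, hbar):
    """<psi|p^2|psi> of Eq. (5) for a normalized wave packet, p = hbar*m."""
    m = np.fft.fftfreq(psi_m.size, d=1.0 / psi_m.size)
    return float(np.sum((hbar * m)**2 * np.abs(psi_m)**2))
```

Since both factors are unit-modulus phases, the map is unitary and the wave-packet norm is conserved step by step.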
Figure 1: $p^{2}(t)$ vs. $t$ for various $K$ and $\epsilon$ with two
incommensurate frequencies (3D case). (a) the plot before normalization.
$p^{2}(t)$ is proportional to $t$ for delocalized states, whereas it saturates
to finite values for localized states. (b) after normalizing $p^{2}(t)$ to
$x^{(t)}$, the mean and standard deviation of which are 0 and 1, respectively.
We have calculated the wave packet dynamics up to $T=10^{4}$ time steps, and
recorded $p^{2}(t)$ at every 50 time steps.
## 3 Results
We first apply the CNN trained for the Anderson model to the wave functions
obtained by diagonalizing Eq. (3). For 3D systems, the system size is
$32\times 32\times 32$, whereas for 4D it is $10\times 10\times 10\times 10$.
We diagonalize the systems with periodic boundary conditions, obtain the
eigenfunctions at the center of the energy spectrum, input squared modulus of
the eigenfunctions to the CNN, and let CNN calculate the probabilities for the
inputs being delocalized. The results are shown in Fig. 2 (a) (3D) and (c)
(4D) as a heat map.
We next analyze the time series of $p^{2}(t)$, Fig. 1. We first note that
$p^{2}(t)\sim\left\\{\begin{array}[]{ll}Dt&\mathrm{delocalized}\,,\\\
\xi^{2}&\mathrm{localized}\,,\\\
t^{2/d}&\mathrm{critical}\,,\end{array}\right.$ (7)
with $D$ the diffusion constant, $\xi$ the localization length, and $d$ the
dimension. At the critical point, $p^{2}(t)\propto t^{2/d}$, so for 3D
$p^{2}(t)\propto t^{2/3}$ and for 4D $p^{2}(t)\propto t^{1/2}$[36]. Note that
we discuss here the localization/delocalization in momentum space.
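Given the scalings in Eq. (7), the three regimes can be told apart by the log-log slope of $p^{2}(t)$; a minimal fitting sketch:

```python
import numpy as np

def growth_exponent(p2, t):
    """Slope alpha of a log-log fit p^2(t) ~ t^alpha: alpha ~ 1 signals
    diffusion (delocalized), alpha ~ 0 localization, and alpha ~ 2/d the
    critical point (2/3 in 3D, 1/2 in 4D)."""
    alpha, _ = np.polyfit(np.log(t), np.log(p2), 1)
    return alpha
```

In practice one would fit only the late-time window, after transients have died out.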
[Panels: (a) 3D CNN, (b) 3D LSTM, (c) 4D CNN, (d) 4D LSTM]
Figure 2: Phase diagrams of QKR in $\epsilon$-$K$ plane. At each
$(K,\epsilon)$, we plot the probability that the parameter belongs to the
delocalized phase. Panels (a) and (c) are obtained by the CNN, and (b) and (d)
by the LSTM; (a) and (b) are 3D cases, (c) and (d) are 4D cases. The CNN is
trained on the Anderson model of localization. The training regions of the
LSTM are indicated by green arrows. White crosses are obtained from the
critical behavior of $p^{2}(t)$. In all cases, an average over 5 samples has
been taken. A random choice of $0\leq\theta_{i}<2\pi$ and a random shift
$(m_{1},m_{2},\cdots)\rightarrow(m_{1}+\beta_{1},m_{2}+\beta_{2},\cdots)\,,\,0\leq\beta_{i}<1$
have been performed [30].
We find the critical strength $(K,\epsilon)$ by detecting the behavior
$p^{2}(t)\propto t^{2/d}$, and use this information for supervised training of
the LSTM. The values of $p^{2}(t)$, however, depend strongly on $K$ and
$\epsilon$, and the neural network tends to learn only the maxima and minima
of $p^{2}(t)$. We therefore preprocessed the data by normalizing $p^{2}(t)$ to
$x^{(t)}$, whose mean and standard deviation are 0 and 1, respectively.
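The preprocessing step is a standard standardization; a minimal sketch (the function name is ours):

```python
import numpy as np

def normalize_series(p2):
    """Normalize p^2(t) to x^(t) with zero mean and unit standard deviation,
    so the network learns the shape of the curve rather than its overall scale."""
    p2 = np.asarray(p2, dtype=float)
    return (p2 - p2.mean()) / p2.std()
```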
We first determine the critical point along a straight line in $\epsilon$-$K$
plane by finding a point that shows $p^{2}(t)\propto t^{2/d}$ (see white
crosses in Fig. 2). Once the critical point is determined, we prepare time
series $p^{2}(t)$ for localized and delocalized phases by varying
$(K,\epsilon)$ along a straight line indicated by green arrows in Fig.
2(b) and (d). We then normalize $p^{2}(t)$ to $x^{(t)}$ and use them for
training a bidirectional LSTM. Once the LSTM is trained, we vary the
parameters in the $\epsilon$-$K$ plane, calculate $x^{(t)}$, and feed
$x^{(t)}$ to the LSTM, which outputs the probability that the input time
series $x^{(t)}$ belongs to the delocalized phase. The results are shown in
Fig. 2(b) for 3D and Fig. 2(d) for 4D, which nicely distinguish the localized
and delocalized phases.
Now we compare the phase diagrams (heat maps) for 3D and 4D systems. In the
case of 3D, both CNN and LSTM give reasonably sharp phase boundaries [see Fig.
2(a), (b)]. On the other hand, in the case of 4D, the phase boundary becomes
noisy if we use the 4D CNN [Fig. 2(c)]. This is because in 4D only small
systems can be diagonalized, and the CNN fails to learn the detailed features
of localized and delocalized states. In the case of the LSTM [Fig. 2(d)], the
phase boundary is less noisy, since we do not need to diagonalize the system
and can follow time series as long as in the 3D case.
## 4 Summary
To summarize, we have analyzed the quantum kicked rotor with time modulated
kick strength. The systems are analyzed in two ways. One is to map the
Hamiltonian to static higher dimensional tight binding models and study the
eigenfunctions via the deep convolutional neural network. The other is to
analyze the time series of the original time dependent one dimensional systems
via a bidirectional long short-term memory network. We have demonstrated that
the latter approach gives a less noisy phase boundary between the localized
and delocalized phases, and it works especially well for analyzing the
Anderson transitions in higher dimensions.
Acknowledgement This work was supported by JSPS KAKENHI Grant Nos. 16H06345,
and 19H00658. We thank Dr. Matthias Stosiek for critical reading of the
manuscript.
## References
* Anderson [1958] P. W. Anderson, Absence of diffusion in certain random lattices, Phys. Rev. 109 (1958) 1492.
* Kirkpatrick and Eggarter [1972] S. Kirkpatrick, T. P. Eggarter, Localized states of a binary alloy, Phys. Rev. B 6 (1972) 3598–3609.
* Sur et al. [1976] A. Sur, J. L. Lebowitz, J. Marro, M. H. Kalos, S. Kirkpatrick, Monte Carlo studies of percolation phenomena for a simple cubic lattice, Journal of Statistical Physics 15 (1976) 345–353.
* Schubert et al. [2005] G. Schubert, A. Weiße, G. Wellein, H. Fehske, HQS@HPC: Comparative numerical study of Anderson localisation in disordered electron systems, Springer Berlin Heidelberg, Berlin, Heidelberg, 2005, pp. 237–249. URL: https://doi.org/10.1007/3-540-28555-5_21. doi:10.1007/3-540-28555-5_21.
* Aharony and Stauffer [1994] A. Aharony, D. Stauffer, Introduction To Percolation Theory: Revised Second Edition, Taylor & Francis, London, 1994.
* Ujfalusi and Varga [2015] L. Ujfalusi, I. Varga, Finite-size scaling and multifractality at the Anderson transition for the three Wigner-Dyson symmetry classes in three dimensions, Phys. Rev. B 91 (2015) 184206.
* Makiuchi et al. [2018] T. Makiuchi, M. Tagai, Y. Nago, D. Takahashi, K. Shirahama, Elastic anomaly of helium films at a quantum phase transition, Phys. Rev. B 98 (2018) 235104.
* Slevin and Ohtsuki [2014] K. Slevin, T. Ohtsuki, Critical exponent for the Anderson transition in the three-dimensional orthogonal universality class, New Journal of Physics 16 (2014) 015012.
* Shklovskii et al. [1993] B. I. Shklovskii, B. Shapiro, B. R. Sears, P. Lambrianides, H. B. Shore, Statistics of spectra of disordered systems near the metal-insulator transition, Phys. Rev. B 47 (1993) 11487–11490.
* Berkovits and Avishai [1996] R. Berkovits, Y. Avishai, Spectral statistics near the quantum percolation threshold, Phys. Rev. B 53 (1996) R16125–R16128.
* Kaneko and Ohtsuki [1999] A. Kaneko, T. Ohtsuki, Three-dimensional quantum percolation studied by level statistics, Journal of the Physical Society of Japan 68 (1999) 1488–1491.
* Mehta et al. [2019] P. Mehta, M. Bukov, C.-H. Wang, A. G. Day, C. Richardson, C. K. Fisher, D. J. Schwab, A high-bias, low-variance introduction to machine learning for physicists, Phys. Rep. 810 (2019) 1.
* Carleo et al. [2019] G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto, L. Zdeborová, Machine learning and the physical sciences, Rev. Mod. Phys. 91 (2019) 045002.
* Ohtsuki and Mano [2020] T. Ohtsuki, T. Mano, Drawing phase diagrams of random quantum systems by deep learning the wave functions, J. Phys. Soc. Jpn. 89 (2020) 022001.
* Bedolla et al. [2020] E. Bedolla, L. C. Padierna, R. Castañeda-Priego, Machine learning for condensed matter physics, Journal of Physics: Condensed Matter 33 (2020) 053001.
* Ohtsuki and Ohtsuki [2016] T. Ohtsuki, T. Ohtsuki, Deep learning the quantum phase transitions in random two-dimensional electron systems, Journal of the Physical Society of Japan 85 (2016) 123706.
* Ohtsuki and Ohtsuki [2017] T. Ohtsuki, T. Ohtsuki, Deep learning the quantum phase transitions in random electron systems: Applications to three dimensions, Journal of the Physical Society of Japan 86 (2017) 044708.
* Broecker et al. [2017] P. Broecker, J. Carrasquilla, R. G. Melko, S. Trebst, Machine learning quantum phases of matter beyond the fermion sign problem, Scientific Reports 7 (2017) 8823.
* Carrasquilla and Melko [2017] J. Carrasquilla, R. G. Melko, Machine learning phases of matter, Nature Physics 13 (2017) 431–434.
* Zhang and Kim [2017] Y. Zhang, E.-A. Kim, Quantum loop topography for machine learning, Phys. Rev. Lett. 118 (2017) 216401.
* Zhang et al. [2017] Y. Zhang, R. G. Melko, E.-A. Kim, Machine learning $z_{2}$ quantum spin liquids with quasiparticle statistics, Phys. Rev. B 96 (2017) 245119.
* Yoshioka et al. [2018] N. Yoshioka, Y. Akagi, H. Katsura, Learning disordered topological phases by statistical recovery of symmetry, Phys. Rev. B 97 (2018) 205110.
* van Nieuwenburg et al. [2017] E. P. van Nieuwenburg, Y.-H. Liu, S. D. Huber, Learning phase transitions by confusion, Nature Physics 13 (2017) 435.
* Zhang et al. [2018] P. Zhang, H. Shen, H. Zhai, Machine learning topological invariants with neural networks, Phys. Rev. Lett. 120 (2018) 066401.
* Araki et al. [2019] H. Araki, T. Mizoguchi, Y. Hatsugai, Phase diagram of a disordered higher-order topological insulator: A machine learning study, Phys. Rev. B 99 (2019) 085406.
* Carvalho et al. [2018] D. Carvalho, N. A. García-Martínez, J. L. Lado, J. Fernández-Rossier, Real-space mapping of topological invariants using artificial neural networks, Phys. Rev. B 97 (2018) 115453.
* Mano and Ohtsuki [2017] T. Mano, T. Ohtsuki, Phase diagrams of three-dimensional Anderson and quantum percolation models using deep three-dimensional convolutional neural network, Journal of the Physical Society of Japan 86 (2017) 113704.
* Mano and Ohtsuki [2019] T. Mano, T. Ohtsuki, Application of convolutional neural network to quantum percolation in topological insulators, Journal of the Physical Society of Japan 88 (2019) 123704.
* Chabé et al. [2008] J. Chabé, G. Lemarié, B. Grémaud, D. Delande, P. Szriftgiser, J. C. Garreau, Experimental observation of the Anderson metal-insulator transition with atomic matter waves, Phys. Rev. Lett. 101 (2008) 255702.
* Lemarié et al. [2009] G. Lemarié, J. Chabé, P. Szriftgiser, J. C. Garreau, B. Grémaud, D. Delande, Observation of the Anderson metal-insulator transition with atomic matter waves: Theory and experiment, Phys. Rev. A 80 (2009) 043626.
* Lopez et al. [2012] M. Lopez, J.-F. Clément, P. Szriftgiser, J. C. Garreau, D. Delande, Experimental test of universality of the Anderson transition, Phys. Rev. Lett. 108 (2012) 095701.
* Haake [2010] F. Haake, Quantum signatures of chaos, Springer, Berlin; New York, 2010.
* Casati et al. [1989] G. Casati, I. Guarneri, D. L. Shepelyansky, Anderson transition in a one-dimensional system with three incommensurate frequencies, Phys. Rev. Lett. 62 (1989) 345.
* Hochreiter and Schmidhuber [1997] S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation 9 (1997) 1735–1780.
* Olah [2015] C. Olah, Understanding LSTM Networks, http://colah.github.io/posts/2015-08-Understanding-LSTMs/, 2015.
* Ohtsuki and Kawarabayashi [1997] T. Ohtsuki, T. Kawarabayashi, Anomalous diffusion at the Anderson transitions, Journal of the Physical Society of Japan 66 (1997) 314–317.
# A Software Architecture Teacher’s Dilemmas
This is the reviewed version of the paper submitted to the International Conference on Software Architecture (ICSA 2021).
Arvind W Kiwelekar
Department of Computer Engineering
Dr. Babasaheb Ambedkar Technological University
Lonere-Raigad India
<EMAIL_ADDRESS>
###### Abstract
An instructor teaching a course on Software Architecture needs to be more
reflective to engage students productively in the learning activities. In this
reflective essay, the author identifies a few decisive moments referred to as
instructional dilemmas at which a teacher reflects upon choices and their
consequences so that meaningful learning happens. These situations are
referred to as dilemmas because they offer two options to instructors. Some of
these dilemmas arise from the inherent nature of Software Architecture as a
discipline, while the source of others is the background knowledge of
learners. The paper suggests a set of principles and small-teaching methods to
make teaching and learning more effective in such situations.
Keywords: Instruction Design and Sequencing, Instructional Decisions, Software
Architecture Education, Soft skills for Software Architects.
## 1 Introduction
A course on Software Architecture is increasingly becoming an integral part of
the curriculum for graduate programs in Software Engineering and Computer
Science. Many academic institutes prefer a dedicated course on Software
Architecture over offering it as one of the modules in a course on Software
Engineering. This is because of the emphasis on architectural issues in
software development processes [1], and the lack of sufficient coverage given
to architectural problems in a course on Software Engineering.
Such a dedicated course on Software Architecture is usually offered at the
senior undergraduate level (i.e., the third or final year of a four-year
program) or at the graduate level. While offering a course on Software
Architecture, it is ensured that students enrolled in the course have
completed prerequisite courses on Software Engineering, Programming Languages,
Databases, Operating Systems and Distributed Systems.
With this prerequisite knowledge, the students enrolled in the course need to
understand software design issues such as information storage, transaction
management, process concurrency, information presentation, application
security and deployment models. Students are aware of the principles of
programming-in-the-small [2, 3] such as data abstraction, information hiding,
code reuse, and to some extent design reuse through Object-Oriented design
patterns.
With experience only of programming-in-the-small, and that too in a single
programming language (e.g., C or Java), students find it difficult to grasp
the high-level systemic issues (e.g., viewpoints, architectural styles,
architectural decisions and their consequences) dealt with in a course on
Software Architecture. These difficulties stem from two sources. First,
students lack experience of working on a large-sized project. The second
source is the abstract nature of Software Architecture [4]. As a result,
students become passive learners, and they start losing interest in the
course content.
Hence, engaging students more productively, both during classroom interaction
and through off-class assignments, becomes a major goal of teaching. To
attain this instructional objective, a teacher needs to make the right
instructional decisions. This paper identifies a few such decisive moments,
referred to as instructional dilemmas (Section 3), which offer two choices to
a teacher about the sequencing of the course content. The author of the paper
has experienced these situations while teaching courses on Software
Architecture to undergraduate and graduate students during the last five
years and while conducting teacher’s training programs.
The experiences described in the paper can be useful to first-time teachers
of Software Architecture for designing their instructions in a more
meaningful way. Also, experienced teachers delivering the course can use
these experiences to set up controlled experiments for evaluating the
effectiveness of their individual choices. Besides, the paper provides a set
of guidelines (Section 2) based on modern learning theories which can be
useful for dealing with similar situations in either a course on Software
Architecture or any other Software Engineering course.
Abstractions First | Implementation First
---|---
Unit | Topic | Unit | Topic
1 | Requirement Analysis: Architectural Design and Requirements Allocation | 1 | Architecture Recovery: Documenting Module Dependency View in SEI framework. UML Notations for Module View
2 | Software Structure and Architecture: Architectural Structures and Viewpoints, Architectural Styles, design patterns, architecture design decisions. | 2 | Technological Architectures: Service Oriented Architecture, 3-tier Web Application Development (LAMP vs MEAN), Mobile Application Development, Blockchain Technology, Microservices
3 | Technological Architectures: Service Oriented Architecture, 3-tier Web Application Development (LAMP vs MEAN), Mobile Application Development, Blockchain Technology, Microservices | 3 | Requirement Analysis: Architectural Design and Requirements Allocation
4 | Architecture Evaluation: Quality Attributes, Quality Analysis and Evaluation Techniques, Measures | 4 | Software Structure and Architecture: Architectural Structures and Viewpoints, Architectural Styles, design patterns, architecture design decisions.
5 | Architecture Recovery and Description: Documenting Module Dependency View in SEI framework. UML Notations for Module View | 5 | Architecture Evaluation: Quality Attributes, Quality Analysis and Evaluation Techniques, Measures
Table 1: Two Different Course Sequencing Models
## 2 Guiding Principles
The guiding principles explained below help us to deal with the instructional
dilemmas, to define learning outcomes, and to adopt appropriate teaching
methods.
1. 1.
Increase the level of student engagement. Increased student engagement is
typically used as an indicator for measuring learning progress and the
effectiveness of course delivery. Designing purposeful learning activities
and instruction sequences are the common means through which students’
participation can be enhanced [5].
2. 2.
Engage students in the knowledge construction process. In the conventional
view, teaching is perceived as a process of knowledge dissemination in which
students passively absorb the knowledge. In contrast, the constructivist
approach perceives learning as a knowledge construction process in which
students develop insight about a topic through active participation in
learning activities. A few educators have highlighted the importance of the
constructivist approach while teaching courses on Software Engineering [6].
3. 3.
Engage students in project- or problem-based learning. Working
collaboratively in a team is an essential soft skill required for engineers
in general and software architects in particular. Educators suggest designing
curricula and courses around project- or problem-based learning activities to
train students for collaborative working [7, 8].
4. 4.
Teach concrete things before abstract things. The abstract or vague nature of
Software Architecture is the prime reason that makes teaching Software
Architecture difficult [4]. Also, education psychologists suggest that
teaching concrete things before abstract concepts improves students’
comprehension of the subject matter [9].
5. 5.
Adopt modern technologies. Modern technologies such as web-based course
delivery platforms provide timely and effective media to disseminate course
content, engage students in discussions, conduct exit quizzes and seek
feedback. These technologies can be effectively adopted to address challenges
faced by teachers and students.
The principles mentioned above guide an instructor to devise appropriate
interventions to overcome the following instructional dilemmas.
## 3 Instructional Dilemmas
This section describes four kinds of instructional dilemmas. These are: (i)
Architecture Modelling Dilemma, (ii) Definitional Dilemma, (iii) Architecture
Design Dilemma, and (iv) Implementation Dilemma.
We use a standard template to describe each dilemma. The template includes the
following elements.
1. 1.
Context: This element describes the background and source of the dilemma.
2. 2.
Alternatives: This element describes the two conflicting or competing
alternatives.
3. 3.
Decision: This element describes the selected alternatives.
4. 4.
Guiding Principles: The rationale behind the selected option is explained. The
explanation is based on one or more guiding principles presented in the
previous section.
5. 5.
Teaching Method: A teaching method used to deal with the instructional dilemma
is described.
6. 6.
Learning outcomes: This element describes the expected learning outcomes
attained by applying the teaching method.
The use of a standard template to document the instructional dilemmas is
inspired by the framework used to document architectural decisions [10].
### 3.1 Architecture Modelling Dilemma
#### 3.1.1 Context
Software architects usually perform architecture modelling in two different
contexts, i.e. forward engineering and reverse engineering. During forward
engineering, architects specify a software solution satisfying the given
functional requirements and quality attributes. The specified architecture
acts as a blueprint for the downstream engineering activities such as low-
level design, system implementation, and testing. These kinds of architecture
models are referred to as prescriptive architectures. During reverse
engineering, architects recover high-level descriptions from the
implementation to support system maintenance and evolution activities. These
kinds of architecture models are referred to as descriptive architectures [11,
12].
#### 3.1.2 Alternatives
Whether to begin the course with descriptive or prescriptive architecture can
be the first dilemmatic situation. The conventional life-cycle model
prescribes that the course shall start with the topic of prescriptive
architecture. This sequence of instruction follows the path in which the
first topic is identifying architecturally significant requirements, followed
by high-level system design and architecture evaluation. This instructional
sequencing, referred to as abstractions first, is shown in the first column
of Table 1. The second alternative is shown in the second column of Table 1.
With this sequence, the course begins with the topic of architecture
recovery, followed by documenting the extracted architecture with the module
view from the SEI framework [13]. The sequencing in the second column is
referred to as the implementation first approach.
#### 3.1.3 Decision
We observe that starting the course with descriptive architectures is the
better choice.
#### 3.1.4 Guiding Principles
The following guiding principles justify the choice. (i) Teach concrete
concepts before abstract concepts. (ii) Use technology-based teaching methods
to flip the classroom activities.
#### 3.1.5 Teaching Method
The course typically begins with handing over the code of an open-source
software application (e.g., DSpace [14], HealthWatcher [15]) to students and
asking them to document the system with UML notations such as a class
diagram. Students struggle to cope with the size of the program when they
start documenting such a large-sized application.
The teaching method is based on project-based learning and flipped classroom
with the following steps.
1. i
An instructor shares a video explaining Module View from SEI Documentation
framework.
2. ii
In the classroom or laboratory session, an instructor helps students to
identify dependency relationships among programming elements manually.
3. iii
Groups of 3 to 4 students are asked to prepare a report documenting the
module view for the assigned system.
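The manual dependency identification in step ii can be partially automated once students have seen the idea. A minimal sketch, in Python for brevity (the assigned systems above are Java applications, so this is illustrative only), that recovers a coarse module-dependency view from import statements:

```python
import ast
from pathlib import Path

def module_dependencies(src_dir):
    """Map each Python module under src_dir to the set of modules it imports,
    i.e., the raw material for a module dependency view."""
    deps = {}
    for path in Path(src_dir).rglob("*.py"):
        tree = ast.parse(path.read_text())
        imports = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imports.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imports.add(node.module)
        deps[path.stem] = imports
    return deps
```

The resulting dictionary is exactly the "uses" relation that students draw by hand in the module dependency view.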
#### 3.1.6 Learning Outcomes
The students will realize the necessity of higher-level abstractions to gain
control over a large-sized application. The students will be able to document
a given application in module view.
### 3.2 Definitional Dilemma
#### 3.2.1 Context
This dilemma arises from the definition of Software Architecture as a concept.
Two dominant views exist concerning the definition of Software Architecture.
The first and classical view suggests that Software Architecture refers to the
fundamental structure of a software system manifested through program elements
and relationships among them[16, 17]. This definition highlights architecting
as a decomposition process. Secondly, a more recent approach defines software
architecture as a set of hard decisions made during the initial stages of
development that affect quality attributes or externally visible properties.
These decisions are usually referred to as architectural decisions and are
difficult to change in the later stages of development, for example,
decisions concerning the choice of technological platform or of architecture
style [10]. This definition highlights software architecture as a
decision-making process.
#### 3.2.2 Alternatives
These two equally dominant views of Software Architecture provide two
alternatives for teaching architecture documentation, i.e., whether to teach
how to document the decomposition of a system or how to document
architectural decisions. Researchers have developed frameworks to document
both, i.e., architectural decisions [18] as well as system decomposition
[13]. Resolving what to document becomes a crucial instructional decision
once it is decided to emphasize descriptive modelling.
#### 3.2.3 Decision
We suggest that asking students to document the decomposition of a system is
a better choice than documenting architectural decisions.
#### 3.2.4 Guiding Principle
The choice of emphasizing the decomposition view is guided by the following
observations. (i) Documenting the decomposition of a system is a more
concrete activity than documenting architectural decisions. This observation
holds mostly when students are engaged in the architecture recovery process:
recovering architectural decisions is challenging when the information
necessary to extract them is absent from the implementation artefacts; on the
other hand, information about dependencies among program elements is present
and visible in the implementation artefacts, thus simplifying the extraction
of high-level views. (ii) Instructors can easily design projects and form
teams to support project-based learning when students have to document the
decomposition of a system.
#### 3.2.5 Teaching Method
A combination of the flipped classroom and the project-based learning method
can be adopted to emphasize the significance of the structural aspect of
software architecture. The teaching method includes the following steps.
1. i
The instructor shares a video explaining various concerns handled in software
design in general, which include concurrency, control, handling of events,
data persistence, distribution of components, error and exception handling,
interaction, presentation, and security.
2. ii
In a lab session, the instructor explains the language-specific mechanisms
(e.g., in Java) that deal with these concerns.
3. iii
Students are asked to look for the programming elements handling these
concerns in the application software assigned to them.
4. iv
Further, students are asked to group the programming elements handling similar
concerns and to prepare a report.
#### 3.2.6 Learning Outcomes
The students will realize that there exist multiple ways to group or
decompose a software system. Further, students will realize that there exist
a few concerns, called cross-cutting concerns (e.g., error handling,
persistence), which cannot be cleanly grouped. The students will be able to
identify design concerns in the system implementation and document them with
a given template or a UML profile.
### 3.3 Architecture Design Dilemma
The third kind of dilemma is concerned with the process of architecture
design. The design process includes two steps. The first step aims to
identify Architecturally Significant Requirements (ASR) [19] from a software
requirement specification or a problem statement. The second step maps
architecturally significant requirements to software architecture elements.
The Pattern-Oriented Software Architecture (POSA)[20] and the Attribute-Driven
Design (ADD) [21] are two commonly used approaches in architecture-centric
software development. The POSA approach suggests using a ready-made solution
to recurrently occurring design problem, which is catalogued as a pattern
(e.g., Model-View-Controller, Pipe-and-Filter). The ADD approach is a
recursive design method which, when applied, leads to a solution specific to a
quality attribute (e.g., Modifiability, Security).
#### 3.3.1 Alternatives
The ADD and POSA are the two approaches available for teaching architecture
design. The ADD, as compared to POSA, is a more systematic and planned
method. Architecture design is a process of solving a wicked problem; hence,
both approaches lead to multiple and different solutions. Moreover, knowing
architecture styles and patterns is a prerequisite for applying ADD to solve
design problems.
#### 3.3.2 Decision
We observe that teaching architecture design with the POSA approach, followed
by a brief explanation of ADD, is the better option.
#### 3.3.3 Guiding Principle
The students of engineering programs are familiar with solving problems. As
architectural styles and patterns catalogue solutions to recurrent problems,
it is easier to devise a teaching method around the POSA approach. An
instructor can frame a set of questions for a given design problem which can
lead to an architecture-style-based solution. Thus students get engaged in
the knowledge construction process, as recommended by the constructivist
theory of learning.
#### 3.3.4 Teaching Method
Software architecture design is a specialized skill within the
general-purpose skill called design thinking. Though all engineering
graduates are expected to acquire design thinking, it has been observed that
teaching and learning design skills is hard. Various specialized pedagogies
centred around project-based learning have been devised, and educators have
debated their effectiveness [22]. We suggest adopting the following teaching
method aimed at increasing student engagement.
1. i
The instructor hands over an architecture design problem from a domain with
which students are familiar. For example, we used problems such as students’
academic record management, room allotment in a students’ hostel, and
travel-grant approval and disbursement.
2. ii
The instructor explains a specific architecture style such as Model-View-
Controller (MVC), Pipe-Filter, Client-Server, Service-Oriented Architecture,
Micro-Service, Publisher-Subscribers.
3. iii
The instructor asks questions that will lead to the identification of
Architecturally Significant Requirements.
4. iv
The instructor forms multiple teams out of the students attending the class.
5. v
The instructor asks a team to select an architectural style and to map the
ASRs to architectural elements from the chosen style (e.g., Publisher-
Subscriber)
6. vi
The instructor displays each group’s mapping on the board so that every
solution is visible to everyone.
7. vii
Multiple design mappings will be displayed on the board, at which point the
instructor asks the students to select the ’best’ mapping and justify their
selection.
8. viii
At this point, the instructor introduces the concept of architecture
evaluation against quality attributes to the students.
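For step v, the Publisher-Subscriber mapping can be made concrete with a few lines of code; a minimal sketch (Python; the class, topic and payload names are illustrative, e.g., for the room-allotment problem above):

```python
class EventBus:
    """Minimal Publisher-Subscriber element: publishers and subscribers are
    decoupled through a shared bus keyed by topic."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        # events on topics with no subscriber are silently dropped
        for handler in self._subscribers.get(topic, []):
            handler(payload)
```

A warden component would publish on a `room_allotted` topic while accounting and notification components subscribe to it, letting students see an ASR such as loose coupling between departments land on a concrete architectural element.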
#### 3.3.5 Learning Outcomes
During the course delivery, an instructor will observe the following outcomes:
1. 1.
Increased participation in architecture design activity.
2. 2.
Students will understand the wicked nature of the architecture design problem.
3. 3.
Students will be able to apply various architecture styles.
4. 4.
Students will be able to evaluate a given solution against a set of quality
attributes.
### 3.4 Implementation Dilemma
#### 3.4.1 Context
The implementation dilemma is not an instructional dilemma in the sense
described in the previous sections, which affect the sequence of instruction.
But this dilemma helps to describe an important topic otherwise missing from
the course content, i.e., architectural decisions.
A software architect needs to select a particular technology platform to
implement the designed architectural solution. For example, a client/server
web application can be realized using either the Linux-Apache-MySQL-PHP
(LAMP) stack or the MongoDB-Express.js-Angular.js-Node.js (MEAN) stack.
Similarly, Service-Oriented Architecture (SOA) and Micro-services are two
technologies for implementing component-oriented distributed applications.
These implementation dilemmas are the best examples for explaining the
significance of architectural decisions and of a documentation framework for
architectural decisions as defined in [10].
#### 3.4.2 Alternatives
Technological platforms such as (i) LAMP vs MEAN (ii) SOA vs Micro-Services
(iii) Ethereum vs HyperLedger.
#### 3.4.3 Decision
To explain the architectural elements of various technological platforms at a
conceptual level, with an emphasis on the quality attributes supported by
them.
#### 3.4.4 Guiding Principle
To adopt the latest technologies to implement an architectural solution.
#### 3.4.5 Teaching Method
The prescribed teaching method is a combination of classroom lectures and
project-based learning. The method includes the following instructional
steps.
1. i
The instructor explains the basic building blocks in the following
technological platforms (i) LAMP and MEAN (ii) SOA and Micro-Services (iii)
Hyperledger and Ethereum
2. ii
The instructor asks students to select an appropriate technology to implement
the architectural solution for the problem used during the architecture
design activity.
3. iii
Further, the instructor asks students to document their decision using the
framework described in [10].
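The decision record of step iii can even be given a lightweight, machine-readable shape; a minimal sketch whose field names are our simplification of, not the exact template from, [10]:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArchitectureDecision:
    """Simplified architectural decision record (fields loosely follow [10])."""
    issue: str                                    # the design problem being settled
    decision: str                                 # the option chosen
    alternatives: List[str] = field(default_factory=list)
    rationale: str = ""                           # why this option won

    def summary(self) -> str:
        others = ", ".join(self.alternatives) or "none"
        return f"{self.issue}: chose {self.decision} over {others}"
```

A student documenting the LAMP-vs-MEAN dilemma would record, say, issue "Web application stack", decision "MEAN", alternative "LAMP", with the rationale of having a single language across tiers.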
#### 3.4.6 Learning Outcomes
The students will develop an insight into architectural decisions, and they
will be able to document their rationale.
## 4 Earlier Work and Discussion
Teachers and educators from disciplines other than Software Architecture have
identified teaching dilemmas in their own domains. For example, teachers of
Mathematics courses experience a conflict between teaching symbolism and
logic versus the procedural aspects of problem-solving [23]. Further, in
Reference [24], the author identifies a set of classroom-teaching dilemmas
for Mathematics teachers and asks them to weigh the classroom time devoted to
each activity. For example, Mathematics can be learnt either through
individual practice or by solving a problem in a team.
Another example, a teaching dilemma experienced by an English teacher, is
described in [25], which reports the conflict between an authoritarian,
teacher-centric approach and a student-centric approach to teaching grammar
and writing.
In this paper, the author identifies a set of conflicting situations
experienced while teaching a course on Software Architecture to graduate and
undergraduate students and while conducting a teacher's training program.
Unlike the earlier approaches described above, the author goes beyond
reporting conflicts and tensions among teaching alternatives. In this paper,
the author prescribes what James Lang calls small teaching methods [26].
These small teaching methods are derived from modern learning theories and
are applicable to day-to-day classroom teaching.
Here, it needs to be noted that these are dilemmas experienced by an
instructor; they are not to be confused with the conflicting situations
encountered by a software developer (the implementation dilemma in
particular). These conflicting situations need to be resolved, and their
resolution affects the design and sequence of instruction, not the quality of
the software.
Further, an instructor may experience other kinds of dilemmas, such as
breadth versus depth of coverage for a given topic. Hence, this paper is not
a comprehensive catalogue of teaching dilemmas for a course on Software
Architecture.
## 5 Conclusion and Future Scope
Teaching design skills in general, and Software Architecture in particular,
has been recognized as a difficult task by many educators. Industry
professionals have also found software design a difficult skill to transfer
from an expert mentor to a novice trainee entering the profession, because
design and architecting skills rely heavily on experience accrued over time.
However, the discipline of Software Architecture has a rich and evolving
knowledge base that codifies this experience in the form of architectural
styles, design patterns, design methods, and documentation frameworks, which,
when properly utilized, can help instructors design course instruction aimed
at developing software design thinking.
This paper contributes by identifying some of the challenges faced by
instructors. These challenges are referred to as instructional dilemmas.
Further, the paper contributes by suggesting small-teaching methods specific
to each challenge or instructional dilemma. These small-teaching methods are
based on proven principles of modern learning theories. The author has used
these methods to overcome the passivity observed among students and among
participants attending teacher's training programs.
Instructors can use the teaching methods and instructional dilemmas described
in this paper to design and plan their instruction sequences. In future work,
the effectiveness of these methods needs to be evaluated in terms of the
attainment of course objectives.
## References
* [1] S. Angelov and P. de Beer, “Designing and applying an approach to software architecting in agile projects in education,” _Journal of Systems and Software_ , vol. 127, pp. 78–90, 2017.
* [2] M. E. Fayad, M. Laitinen, and R. P. Ward, “Thinking objectively: software engineering in the small,” _Communications of the ACM_ , vol. 43, no. 3, pp. 115–118, 2000.
* [3] F. DeRemer and H. Kron, “Programming-in-the large versus programming-in-the-small,” _ACM Sigplan Notices_ , vol. 10, no. 6, pp. 114–121, 1975.
* [4] M. Galster and S. Angelov, “What makes teaching software architecture difficult?” in _2016 IEEE/ACM 38th International Conference on Software Engineering Companion (ICSE-C)_. IEEE, 2016, pp. 356–359.
* [5] R. d. A. Maurício, L. Veado, R. T. Moreira, E. Figueiredo, and H. Costa, “A systematic mapping study on game-related methods for software engineering education,” _Information and software technology_ , vol. 95, pp. 201–218, 2018.
* [6] A. Cain and M. A. Babar, “Reflections on applying constructive alignment with formative feedback for teaching introductory programming and software architecture,” in _Proceedings of the 38th International Conference on Software Engineering Companion_ , 2016, pp. 336–345.
* [7] P. Dolog, L. L. Thomsen, and B. Thomsen, “Assessing problem-based learning in a software engineering curriculum using bloom’s taxonomy and the ieee software engineering body of knowledge,” _ACM Transactions on Computing Education (TOCE)_ , vol. 16, no. 3, pp. 1–41, 2016.
* [8] E. L. Ouh and Y. Irawan, “Applying case-based learning for a postgraduate software architecture course,” in _Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education_ , 2019, pp. 457–463.
* [9] R. Moreno, G. Ozogul, and M. Reisslein, “Teaching with concrete and abstract visual representations: Effects on students’ problem solving, problem representations, and learning perceptions.” _Journal of educational psychology_ , vol. 103, no. 1, p. 32, 2011.
  * [10] J. Tyree and A. Akerman, “Architecture decisions: Demystifying architecture: Postmodern software design,” _IEEE software_ , vol. 22, no. 2, pp. 19–27, 2005.
* [11] R. Heldal, P. Pelliccione, U. Eliasson, J. Lantz, J. Derehag, and J. Whittle, “Descriptive vs prescriptive models in industry,” in _Proceedings of the acm/ieee 19th international conference on model driven engineering languages and systems_ , 2016, pp. 216–226.
* [12] U. Eliasson, R. Heldal, P. Pelliccione, and J. Lantz, “Architecting in the automotive domain: Descriptive vs prescriptive architecture,” in _2015 12th Working IEEE/IFIP Conference on Software Architecture_. IEEE, 2015, pp. 115–118.
* [13] P. Clements, D. Garlan, R. Little, R. Nord, and J. Stafford, “Documenting software architectures: views and beyond,” in _25th International Conference on Software Engineering, 2003. Proceedings._ IEEE, 2003, pp. 740–741.
* [14] M. Smith, M. Barton, M. Bass, M. Branschofsky, G. McClellan, D. Stuve, R. Tansley, and J. H. Walker, “Dspace: An open source dynamic digital repository,” 2003.
* [15] M. Pinto, N. Gamez, and L. Fuentes, “Towards the architectural definition of the health watcher system with ao-adl,” in _Early Aspects at ICSE: Workshops in Aspect-Oriented Requirements Engineering and Architecture Design (EARLYASPECTS’07)_. IEEE, 2007, pp. 5–5.
* [16] D. L. Parnas, “On the criteria to be used in decomposing systems into modules,” in _Pioneers and Their Contributions to Software Engineering_. Springer, 1972, pp. 479–498.
* [17] D. Taibi and K. Systä, “From monolithic systems to microservices: A decomposition framework based on process mining.” in _CLOSER_ , 2019, pp. 153–164.
* [18] U. Van Heesch, P. Avgeriou, and R. Hilliard, “A documentation framework for architecture decisions,” _Journal of Systems and Software_ , vol. 85, no. 4, pp. 795–820, 2012.
  * [19] L. Chen, M. A. Babar, and B. Nuseibeh, “Characterizing architecturally significant requirements,” _IEEE software_ , vol. 30, no. 2, pp. 38–45, 2012.
  * [20] F. Buschmann, K. Henney, and D. C. Schmidt, _Pattern-oriented software architecture, on patterns and pattern languages_. John Wiley & Sons, 2007, vol. 5.
* [21] F. Bachmann and L. Bass, “Introduction to the attribute driven design method,” in _Software Engineering, International Conference on_. Citeseer, 2001, pp. 0745–0745.
* [22] C. L. Dym, A. M. Agogino, O. Eris, D. D. Frey, and L. J. Leifer, “Engineering design thinking, teaching, and learning,” _Journal of engineering education_ , vol. 94, no. 1, pp. 103–120, 2005.
* [23] M. Kamaruddin _et al._ , “Dilemma in teaching mathematics.” _Online Submission_ , 2012.
* [24] M. Swan, “Designing tasks that challenge values, beliefs and practices: A model for the professional development of practicing teachers,” in _Constructing knowledge for teaching secondary mathematics_. Springer, 2011, pp. 57–71.
* [25] P. Smagorinsky, A. A. Wilson, and C. Moore, “Teaching grammar and writing: A beginning teacher’s dilemma,” _English Education_ , vol. 43, no. 3, pp. 262–292, 2011.
* [26] J. M. Lang, _Small teaching: Everyday lessons from the science of learning_. John Wiley & Sons, 2016.
# Experimental observation of edge-dependent quantum pseudospin Hall effect
Huanhuan Yang1, Lingling Song1, Yunshan Cao1, X. R. Wang2<EMAIL_ADDRESS>, and
Peng Yan1<EMAIL_ADDRESS>
1School of Electronic Science and Engineering and State Key Laboratory of
Electronic Thin Films and Integrated Devices, University of Electronic
Science and Technology of China, Chengdu 610054, China
2Physics Department, The Hong Kong University of Science and Technology,
Clear Water Bay, Kowloon, Hong Kong
###### Abstract
It is a conventional wisdom that the helical edge states of quantum spin Hall
(QSH) insulator are particularly stable due to the topological protection of
time-reversal symmetry. Here, we report the first experimental observation of
an edge-dependent quantum (pseudo-)spin Hall effect by employing two Kekulé
electric circuits with molecule-zigzag and partially-bearded edges, where the
chirality of the circulating current in the unit cell mimics the electron
spin. We observe a helicity flipping of the topological in-gap modes emerging
in opposite parameter regions for the two edge geometries. Experimental
findings are interpreted in terms of the mirror winding number defined in the
unit cell, the choice of which exclusively depends on the edge shape. Our work
offers a deeper understanding of the boundary effect on the QSH phase, and
paves the way for studying spin-dependent topological physics in electric
circuits.
A paradigm in the topological band insulator family Hasan2010 ; Qi2011 is the
quantum spin Hall (QSH) insulator, which has an insulating gap in the bulk,
but supports gapless helical states on the boundary Kane2005 ; Kane20052 ;
Bernevig2006 ; Konig2007 . QSH insulators are characterized by the topological
$\mathbb{Z}_{2}$ invariant, defined in the presence of time-reversal symmetry.
Because of the symmetry protection, the helical edge states are robust against
the electronic backscattering Chen2018 ; Bernevig2006 ; Konig2007 ; Konig2013
; Roushan2009 , ushering in a new era in spintronics and quantum computing
Roth2009 ; Brune2012 ; Hart2014 ; Wu2018 . Counterintuitively, Freeney _et
al._ recently reported an edge-dependent topology in artificial Kekulé
lattices Freeney2020 . The mechanism is that the edge geometries of samples
determine the choice of the unit cell, and further dictate the value of
topological invariants Fu2011 ; Slager2013 ; Kariyado2017 ; Cao2017 ;
LeeNL2018 . However, the experimental evidence of an edge-dependent quantum
(pseudo-)spin Hall effect is still lacking.
Recently, the topolectrical circuit has emerged as a powerful platform for
studying fundamental topological physics Lee2018 ; Imhof2018 ; Hofmann2019 ;
Zhu2019 ; Lu2019 ; Yyt2020 ; Yang2020 ; Song2020 ; Ezawa2020 ; Ezawa20202 ,
since simple inductor-capacitor (LC) networks can fully simulate tight-
binding models of condensed matter physics. In this Letter, we fabricate two
kinds of Kekulé LC circuits with molecule-zigzag and partially-bearded edges
(see Fig. 1). By measuring the node-ground impedance and monitoring the
spatiotemporal voltage signal propagation, we observe the quantum pseudospin
Hall effect emerging in the opposite parameter regions with flipped helicities
for the two different edge terminations, where the chirality of the
circulating current in the unit cell mimics the spin. Quantized mirror winding
number is proposed to explain our experimental findings.
We consider two finite-size artificial Kekulé circuits with molecule-zigzag
and partially-bearded edge terminations, as shown in Figs. 1 (a) and 1(b),
respectively. The circuits consist of two types of capacitors $C_{A}$, $C_{B}$
and inductor $L$. The response of the circuit at frequency $\omega$ is given
by Kirchhoff’s law:
$I_{a}(\omega)=\sum_{b}J_{ab}(\omega)V_{b}(\omega),$ (1)
where $I_{a}$ is the external current flowing into node $a$, $V_{b}$ is the
voltage of node $b$, and
$J_{ab}(\omega)=i\omega\left[-C_{ab}+\delta_{ab}(\sum_{n}C_{an}-\frac{1}{\omega^{2}L_{a}})\right]$
is the circuit Laplacian, with $C_{ab}$ the capacitance between nodes $a$ and
$b$. Based on Eq. (1), one can explicitly express the circuit Laplacian
$J_{\rm I}(\omega)$ and $J_{\rm II}(\omega)$ of the two circuits in Figs. 1(a)
and 1(b) SM . At the resonant frequency $\omega_{0}=1/\sqrt{(2C_{A}+C_{B})L}$,
the diagonal elements of circuit Laplacians vanish, and the circuit model is
equivalent to the tight-binding model with $-\omega_{0}C_{A}$ and
$-\omega_{0}C_{B}$ being two hopping coefficients.
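As a concrete illustration of Eq. (1), the sketch below builds the circuit Laplacian for a hypothetical three-node LC network (a toy, not the 168-node experimental board) and checks that a node's diagonal element vanishes at its resonant frequency, leaving only the tight-binding-like hoppings $-\omega C_{ab}$. The capacitance matrix here is an assumption chosen purely for illustration.

```python
import numpy as np

# Hypothetical 3-node network: each node grounded through an inductor L and
# coupled to its neighbours by the symmetric capacitance matrix C (farads).
C = np.array([[0.0,   1e-9, 10e-9],
              [1e-9,  0.0,  1e-9],
              [10e-9, 1e-9, 0.0]])
L = 39e-6  # henries, as in the experiment

def circuit_laplacian(omega, C, L):
    """J_ab = i*omega*[-C_ab + delta_ab*(sum_n C_an - 1/(omega^2 L))], Eq. (1)."""
    diag = np.diag(C.sum(axis=1) - 1.0 / (omega**2 * L))
    return 1j * omega * (diag - C)

# At node 0's resonant frequency its diagonal element vanishes, so only the
# hopping terms -omega*C_ab survive, as stated in the text.
omega0 = 1.0 / np.sqrt(C[0].sum() * L)
J = circuit_laplacian(omega0, C, L)
print(np.isclose(J[0, 0], 0.0))  # True
```

In the actual devices every node is grounded so that its total capacitance is $2C_{A}+C_{B}$, which makes all diagonal elements vanish simultaneously at $\omega_{0}$.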
Figure 1: Illustration of two artificial Kekulé LC circuits with (a) molecule-
zigzag and (b) partially-bearded edge terminations. Each node is grounded by
inductors and capacitors with the configuration shown in the inset. Dashed red
hexagon and rhombus represent the approximate unit cells for the two different
edge shapes.
We fabricate two printed circuit boards with different edge geometries
displayed in Figs. 2(a) and 2(b), respectively. In experiments, we adopt
$C_{A}=1$ nF, $C_{B}=10$ nF or $0.1$ nF, and $L=39~{}\mu$H (all circuit
elements have a $2\%$ tolerance), with the resonant frequency being
$\omega_{0}/2\pi=1/[2\pi\sqrt{(2C_{A}+C_{B})L}]=232.65$ kHz or $556.13$ kHz,
respectively.
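The quoted resonant frequencies follow directly from $\omega_{0}/2\pi=1/[2\pi\sqrt{(2C_{A}+C_{B})L}]$ and can be checked in a few lines:

```python
import math

L = 39e-6       # H
C_A = 1e-9      # F
f0 = {C_B: 1.0 / (2 * math.pi * math.sqrt((2 * C_A + C_B) * L))
      for C_B in (10e-9, 0.1e-9)}
for C_B, f in f0.items():
    print(f"C_B = {C_B * 1e9:g} nF -> f0 = {f / 1e3:.2f} kHz")
# -> 232.65 kHz and 556.13 kHz, matching the quoted values
```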
We measure the distributions of impedance between each node and the ground by
an analyser (Keysight E4990A), with the results plotted in Figs. 2(c)-2(f).
For devices with molecule-zigzag edge at $C_{A}/C_{B}=0.1$ [Fig. 2(c)] and
partially-bearded edge at $C_{A}/C_{B}=10$ [Fig. 2(f)], we observe that the
impedance concentrates on the sample edge, the value of which is larger than
one thousand Ohms, indicating the existence of edge states. Theoretically, the
impedance between node $a$ and $b$ is given by Yang2020 :
$Z_{ab}=\frac{V_{a}-V_{b}}{I_{ab}}=\sum_{n}\frac{|\psi_{n,a}-\psi_{n,b}|^{2}}{j_{n}},$
(2)
where $|\psi_{n,a}-\psi_{n,b}|$ is the amplitude difference between $a$ and
$b$ nodes of the $n$th eigenstate, and $j_{n}$ is the $n$-th eigenvalue. We
plot the numerical results in the insets of Figs. 2(c)-2(f), showing an
excellent agreement with the experimental measurements.
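A minimal numerical sketch of Eq. (2): given the eigendecomposition of a hermitian matrix standing in for $J/(i\omega)$, sum the squared eigenstate amplitude differences weighted by the inverse eigenvalues. The random $4\times 4$ matrix below is purely illustrative; exact zero modes are skipped to avoid division by zero.

```python
import numpy as np

# Stand-in for the (hermitian) admittance matrix of a 4-node circuit.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = A + A.T                     # real symmetric -> hermitian
j_n, psi = np.linalg.eigh(H)    # admittance eigenvalues and eigenvectors

def impedance(a, b, j_n, psi, tol=1e-12):
    """Z_ab = sum_n |psi_n(a) - psi_n(b)|^2 / j_n, Eq. (2); skip zero modes."""
    diff2 = np.abs(psi[a, :] - psi[b, :])**2
    mask = np.abs(j_n) > tol
    return np.sum(diff2[mask] / j_n[mask])

print(impedance(0, 1, j_n, psi))  # two-point impedance between nodes 0 and 1
```

By construction $Z_{ab}=Z_{ba}$ and $Z_{aa}=0$; in-gap edge eigenstates with small $j_{n}$ dominate the sum, which is why the impedance peaks on the sample edge.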
It is known that a QSH insulator supports bidirectionally propagating states
along its boundary. However, we cannot directly observe the time-resolved wave
dynamics by measuring the impedance. To solve this problem, we monitor and
record the spatiotemporal voltage signal in the circuits. Specifically, we
impose a sinusoidal voltage signal $v(t)=v_{0}\sin(\omega_{0}t)$ with the
amplitude $v_{0}=5$ V at the node labeled by blue stars in Figs. 3(a) and 3(b)
by an arbitrary function generator (GW AFG-3022), and then measure the steady-
state voltage distribution by the oscilloscope (Keysight MSOX3024A). We indeed
observe a strong voltage response along both directions of the device edge. It
is noted that the voltage signal decays very fast away from the voltage
source, because of the low quality factor ($Q=25-50$) of the inductors. In
Figs. 3(b) and 3(e), we plot the theoretical steady-state voltage
distributions with higher $Q$-factor inductors (we set $Q=1000$, realized by
introducing a small resistance to each inductor), which improves the
visualization of the bidirectional edge states.
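The role of the $Q$-factor can be sketched by solving the driven steady state $J(\omega)V=I$ for a lossy toy network, modeling a finite $Q$ as a series resistance $R=\omega L/Q$ in each grounding inductor. The all-to-all 4-node mesh below is an assumption for illustration, not the Kekulé lattice.

```python
import numpy as np

def lossy_laplacian(omega, C, L, Q):
    """Circuit Laplacian with lossy grounding inductors (R = omega*L/Q)."""
    R = omega * L / Q
    ground = 1.0 / (1j * omega * L + R)        # admittance of each RL leg
    diag = np.diag(1j * omega * C.sum(axis=1) + ground)
    return diag - 1j * omega * C

C = 1e-9 * (np.ones((4, 4)) - np.eye(4))       # hypothetical 4-node mesh
omega = 2 * np.pi * 232.65e3
I = np.zeros(4, dtype=complex)
I[0] = 1.0                                     # drive node 0
V = np.linalg.solve(lossy_laplacian(omega, C, 39e-6, Q=1000), I)
print(np.abs(V))                               # steady-state voltage amplitudes
```

Lowering `Q` damps the response away from the driven node, which is the decay seen in the measurements with the real ($Q=25$-$50$) inductors.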
Figure 2: Printed circuit boards with (a) molecule-zigzag and (b) partially-
bearded edges. Yellow stars indicate the position of signal sources in the
voltage measurements. (c)-(f) Experimental measurements of the spatial
distribution of impedance between each node and the ground. Insets: numerical
results.
To see the propagation details of the edge states, we perform circuit
simulations with LTspice LTspice and record the voltage of all nodes. For the
two edge states along molecule-zigzag and partially-bearded boundaries, the
voltage signals propagate in both directions along the edge, as displayed in
Figs. 3(c) and 3(f), accompanied by a helicity flipping indicated by red and
blue arrows (see analysis below).
Figure 3: Experimental measurements of the steady-state voltage distribution
in the devices with (a) molecule-zigzag ($C_{A}/C_{B}=0.1$) and (d) partially-
bearded ($C_{A}/C_{B}=10$) edges. (b)(e) Theoretical calculation with a higher
$Q$-factor ($Q=1000$). (c)(f) Snapshots of the propagating voltage signal at
different times, with the blue star indicating the position of the signal
source, and the red and blue arrows representing the propagation direction of
the voltage signal with pseudospin up and down, respectively.
Figure 4: (a) Schematic plot of a ribbon with molecule-zigzag edge (top) and
graphene-zigzag edge (bottom). The ribbon is periodic along $\hat{x}$
direction and contains 40 unit cells along $\hat{y}$ direction. Insets: the
pseudospin is denoted by the chirality of the circulating current in the unit
cell. The band structure of the ribbon with two different capacitor ratios:
(b) $C_{A}:C_{B}=1:1.1$ and (c) $C_{A}:C_{B}=1:0.9$. Red and blue lines
represent the dispersive edge states with pseudospin up and down
counterpropagating along the top edge. The brown line denotes the localized edge
mode in the bottom boundary. (d) Illustration of a ribbon with partially-
bearded edge (top) and graphene-zigzag edge (bottom). The band structure of
the ribbon with two different capacitor ratios: (e) $C_{A}:C_{B}=1:1.1$ and
(f) $C_{A}:C_{B}=1:0.9$.
To explain the experimental results, we numerically calculate the band
structure of the circuits. By diagonalizing the circuit Laplacians $J_{\rm
I}(\omega)$ and $J_{\rm II}(\omega)$, we obtain the admittance spectrum
$j_{n}$ and the corresponding wave functions $\psi_{n,m}$, shown in Fig. S1 in
Supplemental Material SM . For circuits with a molecule-zigzag edge, at
$C_{A}/C_{B}=0.1$, isolated states emerge in the gap of the bulk admittance
spectrum, corresponding to the edge states. When $C_{A}/C_{B}=10$, only bulk
states are identified. For circuits with a partially-bearded edge, on the
ratio, i.e., $C_{A}/C_{B}=10$. For $C_{A}/C_{B}=0.1$, one can only observe the
bulk states. These results are fully consistent with our experimental
observations.
Next, we analyze the origin of the bidirectional edge states. First of all, we
can exclude the Tamm-Shockley mechanism Tamm1932 ; Shockley1939 , which
predicts that the periodicity breaking of the crystal potential at the
boundary can lead to the formation of a conducting surface/edge state.
However, this surface/edge state is trivial because it is sensitive to
impurities, defects, and disorder, which is not compatible with our
experimental findings. There thus must be a topological reason for the
emerging bidirectional edge states we observed. To justify this point of view,
we employ the mirror winding number $(n_{+},n_{-})$ defined in the unit cell
with
$n_{\pm}=-\frac{1}{2\pi}\oint\frac{d}{dk_{\perp}}\arg(\det
Q_{k^{\pm}_{\perp}})dk_{\perp}$ (3)
in the presence of chiral symmetry. The analytical expression of matrices
$Q_{k_{\perp}^{\pm}}$ can be found in Sec. II of Supplemental Material SM .
The choice of the unit cell depends on the shape of sample edge. As shown in
Figs. 1(a) and 1(b), the dashed red hexagon and rhombus represent the
approximate unit cells for the two different edge geometries, respectively.
For the circuit with molecule-zigzag edge, we obtain $(n_{+},n_{-})=(1,-1)$
when $C_{A}/C_{B}<1$ and $(0,0)$ when $C_{A}/C_{B}>1$. Therefore, we can
observe the topological edge states when $C_{A}/C_{B}<1$. For the circuit with
a partially-bearded edge, the situation is reversed:
$(n_{+},n_{-})=(0,0)$ when $C_{A}/C_{B}<1$ and $(1,-1)$ when $C_{A}/C_{B}>1$,
indicating that the topological edge states arise in the region of
$C_{A}/C_{B}>1$ SM .
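The quantization behind Eq. (3) can be illustrated with a much simpler chiral-symmetric model. The sketch below evaluates the winding of $\arg(\det Q)$ for an SSH-like $1\times 1$ block $Q(k)=t_{\rm intra}+t_{\rm inter}e^{ik}$; this toy model, and the sign convention of counting the winding itself, are assumptions for illustration rather than the mirror-resolved calculation of the Supplemental Material.

```python
import numpy as np

def winding_number(t_intra, t_inter, nk=1001):
    """Winding of arg(det Q) over the Brillouin zone for an SSH-like block."""
    k = np.linspace(0, 2 * np.pi, nk)
    q = t_intra + t_inter * np.exp(1j * k)     # det Q for a 1x1 block
    phase = np.unwrap(np.angle(q))
    return round((phase[-1] - phase[0]) / (2 * np.pi))

print(winding_number(0.1, 1.0))  # 1: analogue of the nontrivial region
print(winding_number(1.0, 0.1))  # 0: analogue of the trivial region
```

The invariant jumps exactly when intra- and inter-cell couplings are equal, mirroring the transition at $C_{A}/C_{B}=1$ in the circuits.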
Figures 4(a) and 4(d) show two infinitely long ribbons with molecule-zigzag
and partially-bearded edges. For the ribbon with a molecule-zigzag edge, in
the case of $C_{A}/C_{B}<1$, we find three isolated modes in the band gap
[see Fig. 4(b)]. The red and blue spectra represent helical edge states
because of their opposite group velocities. Interestingly, we can define the
circulating bond currents inside the unit cell: $i_{m\rightarrow n}={\rm
Im}[\psi_{m}^{*}\psi_{n}]$ Zhang2008 ; Zhang2009 ; Wu2016 , with their flow
directions plotted on the right side of Figs. 4(a) and 4(d). We find that the
chirality of the circulating current in the unit cell is opposite for the in-
gap red and blue bands, which mimics the electron spin-up and spin-down
states, respectively. This observation is reminiscent of the spin-momentum
locking in the QSH effect. The brown line denotes the flat band localized at
the bottom zigzag edge of the ribbon Fujita1996 . For $C_{A}/C_{B}>1$, there
is no in-gap energy spectrum except for the flat band, see Fig. 4(c). For the
ribbon with a partially-bearded edge, however, the edge modes with flipped
helicity appear only in the region of $C_{A}/C_{B}>1$ [see Figs. 4(e) and
4(f)]. These results explain well the numerical calculations and the
experimental measurements.
To understand the helicity flipping, we map the six-band circuit model to the
four-band Bernevig-Hughes-Zhang (BHZ) model originally proposed for HgTe
quantum wells Bernevig2006 ; Konig2007 . To this end, we express
$J_{ab}(\omega)=i\mathcal{H}_{ab}(\omega)$, in which $\mathcal{H}(\omega)$ can
be viewed as a hermitian tight-binding Hamiltonian. Taking the molecule-zigzag
unit cell as an example, one can write the Hamiltonian of an infinite Kekulé
circuit at resonant frequency as below:
$\mathcal{H}=-\omega_{0}C_{A}\sum_{\left<i,j\right>}c_{i}^{\dagger}c_{j}-\omega_{0}C_{B}\sum_{\left<i^{\prime},j^{\prime}\right>}c_{i^{\prime}}^{\dagger}c_{j^{\prime}},$
(4)
where $c_{i}$ is the annihilation operator at site $i$, and $\left<i,j\right>$
and $\left<i^{\prime},j^{\prime}\right>$ run over nearest-neighboring sites
inside and between hexagonal unit cells, respectively. Diagonalizing
Hamiltonian (4), we obtain six bands, two of which are high-energy bands; the
phase-transition point $C_{A}/C_{B}=1$ occurs at the low-energy $\Gamma$
point, as shown in Fig. S3 in the Supplemental Material SM . We further note
that the high-energy bands are irrelevant to the topological phase transition. By
performing a unitary transformation
$\mathcal{H^{\prime}}=U^{\dagger}\mathcal{H}U$ on $\mathcal{H}$ around the
$\Gamma$ point SM , we separate the two high-energy orbits and obtain the low-
energy effective BHZ-type Hamiltonian as:
$\mathcal{H}_{\rm eff}({\bf k})=-\omega_{0}\left(\begin{array}[]{cc}H(k)&0\\\
0&H^{*}(-k)\\\ \end{array}\right),\\\ $ (5)
with $H(k)=\left(\begin{array}[]{cc}M-Bk^{2}&Ak_{-}\\\ A^{*}k_{+}&-M+Bk^{2}\\\
\end{array}\right),$ where $M=C_{B}-C_{A}$, $A=-\frac{3}{2}iC_{B}$,
$B=\frac{9}{4}C_{B}$, $k^{2}=k_{x}^{2}+k_{y}^{2}$, and
$k_{\pm}=k_{x}{\pm}ik_{y}$.
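To see the band inversion encoded in Eq. (5), one can diagonalize the upper block $H(k)$ at the $\Gamma$ point, where the gap is $2|M|$ and closes at $C_{A}=C_{B}$. The sketch below drops the overall $-\omega_{0}$ prefactor (it only rescales the spectrum) and takes capacitances in nF; it is an illustration, not the full six-band calculation.

```python
import numpy as np

def h_block(kx, ky, CA, CB):
    """Upper 2x2 block H(k) of the effective BHZ Hamiltonian, Eq. (5),
    for the molecule-zigzag unit cell (M = C_B - C_A); -omega_0 dropped."""
    M, A, B = CB - CA, -1.5j * CB, 2.25 * CB
    k2, km, kp = kx**2 + ky**2, kx - 1j * ky, kx + 1j * ky
    return np.array([[M - B * k2, A * km],
                     [np.conj(A) * kp, -(M - B * k2)]])

# Gap at Gamma is 2|M|: it closes at C_A = C_B and the bands invert on
# either side of the transition.
for CA, CB in [(1.0, 10.0), (10.0, 1.0), (1.0, 1.0)]:
    evals = np.linalg.eigvalsh(h_block(0.0, 0.0, CA, CB))
    print(CA / CB, evals[1] - evals[0])  # 2|M| = 18 when gapped, 0 at C_A = C_B
```

Replacing $M$ by $C_{A}-C_{B}$ reproduces the partially-bearded case, with the sign flip of $M$ responsible for the flipped helicity.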
For the circuit with the partially-bearded unit cell, we obtain a similar
low-energy effective Hamiltonian, but with a different mass term
$M=C_{A}-C_{B}$. The sign of the parameter $M$ is opposite for the two edge
geometries, leading to the helicity flipping of the edge states in opposite
parameter regions through the band-inversion mechanism. We thus conclude
that, although Kirchhoff’s law is rather different from the Schrödinger
equation, the underlying physics of our circuit model and of the quantum-well
model is actually quite similar. The parameter $M$ can be viewed as an
effective spin-orbit coupling (SOC) associated with the pseudospin, which is
different from the intrinsic SOC originating from relativistic effects. In
contrast, the SOC in a circuit is more controllable and can be very large,
enabling the observation of quantum pseudospin Hall states at room
temperature.
In summary, we reported an edge-dependent quantum pseudospin Hall effect in
topolectrical circuits. We showed that the pseudospin is represented by the
chirality of the circulating current in the unit cell. Through impedance
measurements and spatiotemporal voltage-signal detection assisted by circuit
simulations, we directly identified the helical nature of the edge states.
The emerging topological phases were characterized by mirror winding numbers,
which depend on the shape of the device edge. Our work uncovers the
importance of edge geometry in the QSH effect, and opens a new pathway for
using circuits to simulate spin-dependent topological physics, which may
inspire research in other solid-state systems.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China
(Grants No. 12074057, No. 11604041, and No. 11704060). X. R. Wang acknowledges
the financial support of Hong Kong RGC (Grants No. 16300117, 16301518, and
16301619).
## References
* (1) M. Z. Hasan and C. L. Kane, Colloquium: Topological insulators, Rev. Mod. Phys. 82, 3045 (2010).
* (2) X.-L. Qi and S.-C. Zhang, Topological insulators and superconductors, Rev. Mod. Phys. 83, 1057 (2011).
* (3) C. L. Kane and E. J. Mele, Quantum Spin Hall Effect in Graphene, Phys. Rev. Lett. 95, 226801 (2005).
* (4) C. L. Kane and E. J. Mele, $Z_{2}$ Topological Order and the Quantum Spin Hall Effect, Phys. Rev. Lett. 95, 146802 (2005).
* (5) B. A. Bernevig, T. L. Hughes, and S.-C. Zhang, Quantum Spin Hall Effect and Topological Phase Transition in HgTe Quantum Wells, Science 314, 1757 (2006).
* (6) M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi, and S.-C. Zhang, Quantum Spin Hall Insulator State in HgTe Quantum Wells, Science 318, 766 (2007).
* (7) M. König, M. Baenninger, A. G. F. Garcia, N. Harjee, B. L. Pruitt, C. Ames, P. Leubner, C. Brüne, H. Buhmann, L. W. Molenkamp, and D. Goldhaber-Gordon, Spatially Resolved Study of Backscattering in the Quantum Spin Hall State, Phys. Rev. X 3, 021003 (2013).
* (8) P. Roushan, J. Seo, C. V. Parker, Y. S. Hor, D. Hsieh, D. Qian, A. Richardella, M. Z. Hasan, R. J. Cava, and A. Yazdani, Topological surface states protected from backscattering by chiral spin texture, Nature (London) 460, 1106 (2009).
* (9) H. Chen, H. Nassar, A. N. Norris, G. K. Hu, and G. L. Huang, Elastic quantum spin Hall effect in kagome lattices, Phys. Rev. B 98, 094302 (2018).
* (10) A. Roth, C. Brüne, H. Buhmann, L. W. Molenkamp, J. Maciejko, X.-L. Qi, and S.-C. Zhang, Nonlocal Transport in the Quantum Spin Hall State, Science 325, 294 (2009).
* (11) C. Brüne, A. Roth, H. Buhmann, E. M. Hankiewicz, L. W. Molenkamp, J. Maciejko, X.-L. Qi, and S.-C. Zhang, Spin polarization of the quantum spin Hall edge states, Nat. Phys. 8, 485 (2012).
* (12) S. Hart, H. Ren, T. Wagner, P. Leubner, M. Mühlbauer, C. Brüne, H. Buhmann, L. W. Molenkamp, and A. Yacoby, Induced superconductivity in the quantum spin Hall edge, Nat. Phys. 10, 638 (2014).
* (13) S. Wu, V. Fatemi, Q. D. Gibson, K. Watanabe, T. Taniguchi, R. J. Cava, and P. Jarillo-Herrero, Observation of the quantum spin Hall effect up to 100 kelvin in a monolayer crystal, Science 359, 76 (2018).
* (14) S. E. Freeney, J. J. van den Broeke, A. J. J. Harsveld van der Veen, I. Swart, and C. M. Smith, Edge-Dependent Topology in Kekulé Lattices, Phys. Rev. Lett. 124, 236404 (2020).
* (15) L. Fu, Topological Crystalline Insulators, Phys. Rev. Lett. 106, 106802 (2011).
* (16) R.-J. Slager, A. Mesaros, V. Juričić, and J. Zaanen, The space group classification of topological band-insulators, Nat. Phys. 9, 98 (2013).
* (17) T. Kariyado and X. Hu, Topological States Characterized by Mirror Winding Numbers in Graphene with Bond Modulation, Sci. Rep. 7, 16515 (2017).
* (18) T. Cao, F. Zhao, and S. G. Louie, Topological Phases in Graphene Nanoribbons: Junction States, Spin Centers, and Quantum Spin Chains, Phys. Rev. Lett. 119, 076401 (2017).
* (19) Y.-L. Lee, F. Zhao, T. Cao, J. Ihm, and S. G. Louie, Topological Phases in Cove-Edged and Chevron Graphene Nanoribbons: Geometric Structures, $\mathbb{Z}_{2}$ Invariants, and Junction States, Nano Lett. 18, 7247 (2018).
* (20) C. H. Lee, S. Imhof, C. Berger, F. Bayer, J. Brehm, L. W. Molenkamp, T. Kiessling, and R. Thomale, Topolectrical Circuits, Comm. Phys. 1, 39 (2018).
* (21) S. Imhof, C. Berger, F. Bayer, J. Brehm, L. W. Molenkamp, T. Kiessling, F. Schindler, C. H. Lee, M. Greiter, T. Neupert, and R. Thomale, Topolectrical-circuit realization of topological corner modes, Nat. Phys. 14, 925 (2018).
* (22) T. Hofmann, T. Helbig, C. H. Lee, M. Greiter, and R. Thomale, Chiral Voltage Propagation and Calibration in a Topolectrical Chern Circuit, Phys. Rev. Lett. 122, 247702 (2019).
* (23) W. Zhu, Y. Long, H. Chen, and J. Ren, Quantum valley Hall effects and spin-valley locking in topological Kane-Mele circuit networks, Phys. Rev. B 99, 115410 (2019).
* (24) Y. Lu, N. Jia, L. Su, C. Owens, G. Juzeliūnas, D. I. Schuster, and J. Simon, Probing the Berry curvature and Fermi arcs of a Weyl circuit, Phys. Rev. B 99, 020302(R) (2019).
* (25) Y. Yang, D. Zhu, Z. H. Hang, Y. D. Chong, Observation of antichiral edge states in a circuit lattice, arXiv:2008.10161.
* (26) H. Yang, Z.-X. Li, Y. Liu, Y. Cao, and P. Yan, Observation of symmetry-protected zero modes in topolectrical circuits, Phys. Rev. Research 2, 022028(R) (2020).
* (27) L. Song, H. Yang, Y. Cao, and P. Yan, Realization of the square-root higher-order topological insulator in electric circuits, Nano Lett. 20, 7566 (2020).
* (28) M. Ezawa, Braiding of Majorana-like corner states in electric circuits and its non-Hermitian generalization, Phys. Rev. B 100, 045407 (2020).
* (29) M. Ezawa, Electric circuits for non-Hermitian Chern insulators, Phys. Rev. B 100, 081401(R) (2019).
* (30) See Supplemental Material at http://link.aps.org/ supplemental/ for the form of the circuit Laplacian (Sec. I), the derivation of the mirror winding number (Sec. II), and the mapping to the BHZ model (Sec. III), which includes Refs. Bernevig2006 ; Kariyado2017 ; Yang2020s .
* (31) Y. Yang, Z. Jia, Y. Wu, Z.-H. Hang, H. Jiang, and X. C. Xie, Gapped topological kink states and topological corner states in graphene, Sci. Bull. 65, 531 (2020).
* (32) LTspice, www.linear.com/LTspice.
* (33) I. Tamm, Über eine mögliche Art der Elektronenbindung an Kristalloberflächen, Phys. Z. Sowjetunion 76, 849 (1932).
* (34) W. Shockley, On the surface states associated with a periodic potential, Phys. Rev. 56, 317 (1939).
* (35) Y. Zhang, J.-P. Hu, B. A. Bernevig, X. R. Wang, X. C. Xie, and W. M. Liu, Quantum blockade and loop currents in graphene with topological defects, Phys. Rev. B 78, 155413 (2008).
* (36) Y. Zhang, J.-P. Hu, B. A. Bernevig, X. R. Wang, X. C. Xie, and W. M. Liu, Localization and the Kosterlitz-Thouless Transition in Disordered Graphene, Phys. Rev. Lett. 102, 106401 (2009).
* (37) L.-H. Wu and X. Hu, Topological Properties of Electrons in Honeycomb Lattice with Detuned Hopping Energy, Sci. Rep. 6, 24347 (2016).
* (38) M. Fujita, K. Wakabayashi, K. Nakada, and K. Kusakabe, Peculiar localized state at zigzag graphite edge, J. Phys. Soc. Jpn. 65, 1920 (1996).
Supplemental Material
Experimental observation of edge-dependent quantum pseudospin Hall effect
Huanhuan Yang1, Lingling Song1, Yunshan Cao1, X. R. Wang2,∗ and Peng Yan1†
1 _School of Electronic Science and Engineering and State Key Laboratory of
Electronic Thin Films and Integrated Devices, University of Electronic Science
and Technology of China, Chengdu 610054, China and_
2 _Physics Department, The Hong Kong University of Science and Technology,
Clear Water Bay, Kowloon, Hong Kong_
## I. Circuit Laplacian
In this section, we show the circuit Laplacian of the two circuits in the main
text. For the circuit with molecule-zigzag edge geometry:
$J_{\rm I}(\omega)=\omega\left(\begin{array}{ccccccc}J_{0}&0&0&-J_{A}&-J_{A}&0&\ldots\\ 0&J_{0}&0&0&0&-J_{A}&\ldots\\ -J_{A}&0&J_{0}&0&0&0&\ldots\\ -J_{A}&0&0&J_{0}&0&0&\ldots\\ 0&0&0&0&J_{0}&0&\ldots\\ 0&-J_{A}&0&0&0&J_{0}&\ldots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right)_{168\times 168},$ (6)
with $J_{0}=2C_{A}+C_{B}-1/(\omega^{2}L)$, $J_{A}=C_{A}$, and $J_{B}=C_{B}$.
For the circuit with partially-bearded edge geometry:
$J_{\rm II}(\omega)=\omega\left(\begin{array}{ccccccc}J_{0}&0&0&0&0&-J_{A}&\ldots\\ 0&J_{0}&0&0&0&0&\ldots\\ 0&0&J_{0}&0&0&0&\ldots\\ 0&0&0&J_{0}&0&0&\ldots\\ 0&0&0&0&J_{0}&0&\ldots\\ -J_{A}&0&0&0&0&J_{0}&\ldots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right)_{156\times 156}.$ (7)
Diagonalizing $J_{\rm I}(\omega)$ and $J_{\rm II}(\omega)$, we obtain the
admittance spectrum $j_{n}$ and the corresponding wave functions $\psi_{n,m}$.
To directly compare with the experimental results, we adopt $C_{A}=1$ nF,
$L=39$ $\mu$H, and $C_{B}=10$ nF or $0.1$ nF. The admittance spectra, with insets showing typical wave-function profiles, are displayed in Figs. 5(a)-5(d). For the circuits with molecule-zigzag edges, in the case of
$C_{A}/C_{B}=0.1$, we find a series of isolated states in the gap of the
admittance spectrum (blue dots), which correspond to the helical edge states,
shown in Fig. 5(a). We confirm that all blue dots are edge states (not shown).
In the regime of $C_{A}/C_{B}=10$, only the bulk states exist, see Fig. 5(b).
However, for the circuits with partially-bearded edges, we find that the edge
states emerge in the opposite region, i.e., $C_{A}/C_{B}=10$, as shown in Fig.
5(d). In the case of $C_{A}/C_{B}=0.1$, we can only see the bulk states, see
Fig. 5(c).
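As a numerical illustration of how the admittance spectrum follows from diagonalizing a circuit Laplacian, the sketch below uses not the full 168-node Kekulé circuit of Eq. (6) but, as a simplifying assumption, a minimal SSH-type LC chain with alternating capacitances; at resonance the on-site term $J_{0}$ vanishes, and midgap admittance eigenvalues signal boundary states, in the spirit of Fig. 5(a).

```python
import numpy as np

# Sketch (assumption): a minimal SSH-type LC chain rather than the full
# 168-node Kekule circuit of Eq. (6). At the resonance frequency the on-site
# term J0 vanishes, so the circuit Laplacian reduces (up to a factor omega)
# to the alternating-bond coupling matrix below.
CA, CB = 1.0, 10.0   # capacitances in nF; C_A/C_B = 0.1 (edge-state regime)
N = 40               # nodes; the chain terminates on weak C_A bonds
H = np.zeros((N, N))
for i in range(N - 1):
    c = CA if i % 2 == 0 else CB      # alternating C_A / C_B bonds
    H[i, i + 1] = H[i + 1, i] = -c
j, psi = np.linalg.eigh(H)            # admittance spectrum and wave functions

midgap = np.abs(j) < 0.5 * CA         # isolated states inside the bulk gap
gap_states = j[midgap]
# the midgap modes are exponentially localized near the chain ends
edge_weight = np.sum(psi[:4, midgap] ** 2)
```

With $C_{A}/C_{B}=10$ instead, the same chain has no midgap states, mirroring the bulk-only spectrum of Fig. 5(b).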
Figure 5: Admittance spectrum for different edges and parameters. The blue and black dots denote edge states and bulk states, respectively. Insets: spatial distribution of the wave functions for the states indicated by the arrows. (a),(b) Molecule-zigzag edge with $C_{A}/C_{B}=0.1$ and $C_{A}/C_{B}=10$, respectively. (c),(d) Partially-bearded edge with $C_{A}/C_{B}=0.1$ and $C_{A}/C_{B}=10$, respectively.
## II. Mirror winding number
In this section, we calculate the topological invariant mirror winding number
to characterize the helical edge states. If we express
$J_{ab}(\omega)=i\mathcal{H}_{ab}(\omega)$, $\mathcal{H}(\omega)$ can be
viewed as a tight-binding Hamiltonian. With the appropriate unit cells in Fig.
6 (unit cell I for the circuit with molecule-zigzag edge, and unit cell II for
the circuit with partially-bearded edge), one can write the Hamiltonian of an
infinite Kekulé circuit as:
$\mathcal{H}=\omega\left(\begin{array}{cc}h_{0}\,\mathbb{I}_{3}&-Q_{\bf k}\\ -Q^{\dagger}_{\bf k}&h_{0}\,\mathbb{I}_{3}\end{array}\right),$ (8)
where $\mathbb{I}_{3}$ is the $3\times 3$ identity matrix,
with the matrix elements $h_{0}=2C_{A}+C_{B}-1/(\omega^{2}L)$,
$Q^{\rm I}_{\bf k}=\left(\begin{array}{ccc}C_{B}X\overline{Y}^{2}&C_{A}&C_{A}\\ C_{A}&C_{B}\overline{X}Y&C_{A}\\ C_{A}&C_{A}&C_{B}Y\end{array}\right)$ (9)
for the molecule-zigzag edge, where $X=e^{i{\bf k}\cdot{\bf a}_{1}}$ and $Y=e^{i{\bf k}\cdot{\bf a}_{2}}$, with ${\bf a}_{1}=3\sqrt{3}\hat{x}$ and ${\bf a}_{2}=\frac{3\sqrt{3}}{2}\hat{x}+\frac{3}{2}\hat{y}$ being the two basis vectors, and
$Q^{\rm II}_{\bf k}=\left(\begin{array}{ccc}C_{B}&C_{A}&C_{A}\\ C_{A}\overline{Y}&C_{B}&C_{A}\overline{X}Y\\ C_{A}X\overline{Y}&C_{A}Y&C_{B}\end{array}\right)$ (10)
for partially-bearded edge.
Figure 6: (a) Appropriate unit cells for molecule-zigzag and partially-bearded
edges. The orange arrows indicate the two basis vectors. (b) The mirror
winding numbers $(n_{+},n_{-})$ as a function of the capacitance ratio
$C_{A}/C_{B}$.
At resonant frequency $\omega_{0}=1/\sqrt{(2C_{A}+C_{B})L}$, the diagonal
element $h_{0}$ vanishes, and the Hamiltonian can be simplified as:
$\mathcal{H}=-\omega_{0}\left(\begin{array}{cc}0&Q_{\bf k}\\ Q^{\dagger}_{\bf k}&0\end{array}\right),$ (11)
where $Q_{\bf k}$ is $Q^{\rm I}_{\bf k}$ (Eq. 9) for molecule-zigzag edge, and
$Q^{\rm II}_{\bf k}$ (Eq. 10) for partially-bearded edge.
Treating the momentum component $k_{\parallel}$ parallel to the basis vector ${\bf a}_{1}$ as a free parameter, the system can be viewed as an effective 1D model, to which one can assign the winding number
$n(k_{\parallel})=-\frac{1}{2\pi}\oint\frac{d}{dk_{\perp}}\arg\left(\det Q_{k_{\parallel},k_{\perp}}\right)dk_{\perp}.$ (12)
For $k_{\parallel}=0$, the mirror symmetry with the mirror plane perpendicular to ${\bf a}_{1}$ enables us to decompose the Hamiltonian (11) into even and odd sectors $H_{k_{\perp}^{\pm}}$, where the momentum is replaced by $k_{\perp}$.
Concretely, $Q_{\bf k}$ can be decomposed into even and odd sectors
$Q_{k_{\perp}^{\pm}}$. Then, we can assign winding numbers for the even and
odd sectors separately by substituting $Q_{k_{\perp}^{+}}$ and
$Q_{k_{\perp}^{-}}$ into Eq. 12, which constitutes the mirror winding number
$(n_{+},n_{-})$ Kariyado2017 .
At $k_{\parallel}=0$, $Q^{\rm I}_{\bf k}$ is decomposed into
$Q^{\rm I}_{k_{\perp}^{+}}=\left(\begin{array}{cc}C_{B}\overline{Y}^{2}&\sqrt{2}C_{A}\\ \sqrt{2}C_{A}&C_{A}+C_{B}Y\end{array}\right),\qquad Q^{\rm I}_{k_{\perp}^{-}}=C_{B}Y-C_{A},$ (13)
and $Q^{\rm II}_{\bf k}$ is decomposed into
$Q^{\rm II}_{k_{\perp}^{+}}=\left(\begin{array}{cc}C_{B}&\sqrt{2}C_{A}\\ \sqrt{2}C_{A}\overline{Y}&C_{B}+C_{A}Y\end{array}\right),\qquad Q^{\rm II}_{k_{\perp}^{-}}=C_{B}-C_{A}Y.$ (14)
Using Eq. 12, we can compute the mirror winding number $(n_{+},n_{-})$ immediately; the results are plotted in Fig. 6(b). For the circuits with molecule-zigzag and partially-bearded edges, the topological edge states appear in the regions $C_{A}/C_{B}<1$ and $C_{A}/C_{B}>1$, respectively.
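The winding-number integral of Eq. (12) can be checked numerically for the scalar odd mirror sector $Q^{\rm I}_{k_{\perp}^{-}}=C_{B}Y-C_{A}$ of Eq. (13). The discretized loop below is a sketch; the overall sign follows the minus-sign convention of Eq. (12).

```python
import numpy as np

# Numerical sketch of Eq. (12) for the 1x1 odd mirror sector of Eq. (13),
# det Q = C_B*Y - C_A with Y = exp(i*k_perp): the winding changes when
# C_A/C_B crosses 1, consistent with Fig. 6(b).
def winding(detQ):
    """Phase accumulated by det Q along the closed loop, in units of -2*pi."""
    dphi = np.diff(np.angle(detQ))
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi   # branch-cut-safe steps
    return -np.sum(dphi) / (2.0 * np.pi)            # minus sign as in Eq. (12)

k = np.linspace(0.0, 2.0 * np.pi, 4001)             # closed k_perp loop
n_edge = winding(10.0 * np.exp(1j * k) - 1.0)       # C_A/C_B = 0.1
n_triv = winding(0.1 * np.exp(1j * k) - 1.0)        # C_A/C_B = 10
```

For $C_{A}/C_{B}=0.1$ the curve $C_{B}e^{ik_{\perp}}-C_{A}$ encircles the origin once, giving a nonzero winding; for $C_{A}/C_{B}=10$ it does not, and the winding vanishes.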
## III. Analogy to the Quantum Spin Hall Effect
In this section, we map our six-band circuit model to the four-band Bernevig-
Hughes-Zhang (BHZ) model for CdTe/HgTe/CdTe quantum wells.
Figure 7: Admittance spectrum for different capacitance ratio. (a)
$C_{A}/C_{B}=0.9$, (b) $C_{A}/C_{B}=1$, and (c) $C_{A}/C_{B}=1.1$.
We calculate the energy spectrum of Eq. 8 for three capacitance ratios, plotted in Fig. 7. The spectrum is gapless when $C_{A}/C_{B}=1$, and the phase transition occurs at the $\Gamma$ point. Two bands of the spectrum are high-energy parts, which are irrelevant to the topological phase transition. Therefore, the six-band Hamiltonian (8) can be downfolded into a four-band one by omitting the two high-energy bands Yang2020 .
Taking the circuit with molecule-zigzag edge geometry as an example, we impose a unitary transformation $\mathcal{H}^{\prime}=\mathcal{U}^{\dagger}\mathcal{H}\mathcal{U}$ on the Hamiltonian $\mathcal{H}$ (8) to separate its high-energy parts, with the matrix:
$\mathcal{U}=\frac{1}{\sqrt{6}}\left(\begin{array}{cccccc}e^{i\frac{\pi}{2}}&e^{i\pi}&e^{i\frac{3\pi}{2}}&e^{i\pi}&1&1\\ e^{i\frac{7\pi}{6}}&e^{i\frac{\pi}{3}}&e^{i\frac{5\pi}{6}}&e^{i\frac{5\pi}{3}}&-1&1\\ e^{i\frac{11\pi}{6}}&e^{i\frac{5\pi}{3}}&e^{i\frac{\pi}{6}}&e^{i\frac{\pi}{3}}&1&1\\ e^{i\frac{\pi}{2}}&e^{i2\pi}&e^{i\frac{3\pi}{2}}&e^{i2\pi}&-1&1\\ e^{i\frac{7\pi}{6}}&e^{i\frac{4\pi}{3}}&e^{i\frac{5\pi}{6}}&e^{i\frac{2\pi}{3}}&1&1\\ e^{i\frac{11\pi}{6}}&e^{i\frac{2\pi}{3}}&e^{i\frac{\pi}{6}}&e^{i\frac{4\pi}{3}}&-1&1\end{array}\right).$ (15)
Then, Taylor-expanding each matrix element of $\mathcal{H}^{\prime}$ around the $\Gamma$ point to second order, we obtain:
$\mathcal{H}_{\Gamma}=-\omega_{0}\left(\begin{array}{cccccc}\delta C-\frac{9}{4}C_{B}k^{2}&-\frac{3}{2}iC_{B}k_{-}&h_{13}&0&-\frac{3}{2}iC_{B}k_{+}&h_{16}\\ \frac{3}{2}iC_{B}k_{+}&-\delta C+\frac{9}{4}C_{B}k^{2}&0&h_{24}&h_{25}&-\frac{3}{2}C_{B}k_{-}\\ h_{13}^{*}&0&\delta C-\frac{9}{4}C_{B}k^{2}&-\frac{3}{2}iC_{B}k_{+}&-\frac{3}{2}iC_{B}k_{-}&h_{36}\\ 0&h_{24}^{*}&\frac{3}{2}iC_{B}k_{-}&-\delta C+\frac{9}{4}C_{B}k^{2}&h_{45}&\frac{3}{2}C_{B}k_{+}\\ \frac{3}{2}iC_{B}k_{-}&h_{25}^{*}&\frac{3}{2}iC_{B}k_{+}&h_{45}^{*}&-2C_{A}-C_{B}+\frac{9}{4}C_{B}k^{2}&0\\ h_{16}^{*}&-\frac{3}{2}C_{B}k_{+}&h_{36}^{*}&\frac{3}{2}C_{B}k_{-}&0&2C_{A}+C_{B}-\frac{9}{4}C_{B}k^{2}\end{array}\right)$ (16)
with $\delta C=C_{B}-C_{A}$, $k^{2}=k_{x}^{2}+k_{y}^{2}$, $k_{\pm}=k_{x}\pm ik_{y}$, $h_{13}=h_{24}^{*}=\frac{9}{8}C_{B}(k_{y}^{2}-k_{x}^{2})-\frac{9}{4}C_{B}ik_{x}k_{y}$, $h_{16}=\frac{9}{8}C_{B}i(k_{y}^{2}-k_{x}^{2})-\frac{9}{4}C_{B}k_{x}k_{y}$, $h_{25}=h_{45}^{*}=\frac{9}{8}C_{B}(k_{x}^{2}-k_{y}^{2})+\frac{9}{4}C_{B}ik_{x}k_{y}$, and $h_{36}=\frac{9}{8}C_{B}i(k_{x}^{2}-k_{y}^{2})-\frac{9}{4}C_{B}k_{x}k_{y}$.
Dropping the last two high-energy orbitals and the second-order off-diagonal terms $h_{ij}$ (which contribute only as higher-order perturbations), the Hamiltonian (16) is block diagonalized. We obtain the low-energy effective Hamiltonian as:
$\mathcal{H}^{\rm I}_{\rm eff}=-\omega_{0}\left(\begin{array}{cccc}\delta C-\frac{9}{4}C_{B}k^{2}&-\frac{3}{2}iC_{B}k_{-}&0&0\\ \frac{3}{2}iC_{B}k_{+}&-\delta C+\frac{9}{4}C_{B}k^{2}&0&0\\ 0&0&\delta C-\frac{9}{4}C_{B}k^{2}&-\frac{3}{2}iC_{B}k_{+}\\ 0&0&\frac{3}{2}iC_{B}k_{-}&-\delta C+\frac{9}{4}C_{B}k^{2}\end{array}\right).$ (17)
The effective Hamiltonian $\mathcal{H}^{\rm I}_{\rm eff}$ can be rewritten in
a concise BHZ form Bernevig2006 as:
$\mathcal{H}_{\rm eff}(k)=-\omega_{0}\left(\begin{array}{cc}H(k)&0\\ 0&H^{*}(-k)\end{array}\right),\quad{\rm with}\quad H(k)=\left(\begin{array}{cc}M-Bk^{2}&Ak_{-}\\ A^{*}k_{+}&-M+Bk^{2}\end{array}\right),$ (18)
where $M=\delta C=C_{B}-C_{A}$, $A=-\frac{3}{2}iC_{B}$, $B=\frac{9}{4}C_{B}$,
$k^{2}=k_{x}^{2}+k_{y}^{2}$, and $k_{\pm}=k_{x}{\pm}ik_{y}$.
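A quick numerical check of the spin-up BHZ block of Eq. (18): the gap at the $\Gamma$ point equals $2|M|=2|C_{B}-C_{A}|$ and closes at $C_{A}=C_{B}$. The sketch below hard-codes $A$ and $B$ from Eq. (18); the capacitance units are arbitrary.

```python
import numpy as np

# Sketch of the spin-up BHZ block H(k) from Eq. (18), with A = -(3/2)i*C_B,
# B = (9/4)*C_B, and M = C_B - C_A (capacitance units arbitrary).
def H_bhz(kx, ky, CA, CB):
    M, A, B = CB - CA, -1.5j * CB, 2.25 * CB
    k2 = kx**2 + ky**2
    km, kp = kx - 1j * ky, kx + 1j * ky
    return np.array([[M - B * k2, A * km],
                     [np.conj(A) * kp, -(M - B * k2)]])

def gap_at_gamma(CA, CB):
    # band gap at the Gamma point; should equal 2*|M| = 2*|C_B - C_A|
    ev = np.linalg.eigvalsh(H_bhz(0.0, 0.0, CA, CB))
    return float(ev[1] - ev[0])
```

Here `gap_at_gamma(1.0, 10.0)` gives 18.0 while `gap_at_gamma(1.0, 1.0)` gives 0, the gapless transition point shown in Fig. 7(b).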
Similarly, near the $\Gamma$ point, the Hamiltonian (11) with the partially-bearded edge geometry can be simplified as:
$\mathcal{H}^{\rm II}_{\rm eff}=-\omega_{0}\left(\begin{array}{cccc}\delta C+\frac{9}{4}C_{A}k_{x}^{2}+\frac{3}{4}C_{A}k_{y}^{2}&\frac{3}{2}iC_{A}k_{-}&0&0\\ -\frac{3}{2}iC_{A}k_{+}&-\delta C-\frac{9}{4}C_{A}k_{x}^{2}-\frac{3}{4}C_{A}k_{y}^{2}&0&0\\ 0&0&\delta C+\frac{9}{4}C_{A}k_{x}^{2}+\frac{3}{4}C_{A}k_{y}^{2}&\frac{3}{2}iC_{A}k_{+}\\ 0&0&-\frac{3}{2}iC_{A}k_{-}&-\delta C-\frac{9}{4}C_{A}k_{x}^{2}-\frac{3}{4}C_{A}k_{y}^{2}\end{array}\right),$ (19)
which can be rewritten as the BHZ form with
$H(k)=-\left(\begin{array}{cc}M-B(3k_{x}^{2}+k_{y}^{2})&Ak_{-}\\ A^{*}k_{+}&-M+B(3k_{x}^{2}+k_{y}^{2})\end{array}\right),$ (20)
where $A=-\frac{3}{2}iC_{A}$, $B=\frac{3}{4}C_{A}$, and $M=C_{A}-C_{B}$. The extra minus sign of $H(k)$ has no effect on the energy spectra, because the four bands are symmetric with respect to zero energy. However, the opposite sign of the parameter $M$ leads to helical edge states emerging in opposite parameter regions for the two edge geometries.
∗ Corresponding author.
_E-mail addresses:_ <EMAIL_ADDRESS> (H.-L. Li), <EMAIL_ADDRESS> (S. Zhao), <EMAIL_ADDRESS> (H.-W. Zuo).
# Existence and Nonlinear Stability of Steady-States to Outflow Problem for
the Full Two-Phase Flow
Hai-Liang Li a,b, Shuang Zhao a,b,∗, Han-Wen Zuo a,b
a School of Mathematical Sciences, Capital Normal University, Beijing 100048,
P.R. China.
b Academy for Multidisciplinary Studies, Capital Normal University, Beijing
100048, P.R. China.
###### Abstract
The outflow problem for the viscous full two-phase flow model on a half line is investigated in the present paper. The existence, uniqueness, and nonlinear stability of the steady-state are shown for the supersonic, sonic, and subsonic states at the far field, respectively. This is different from the outflow problem for the isentropic Navier-Stokes equations, where no steady-state exists in the subsonic case. Furthermore, we obtain exponential time decay rates for the supersonic state and algebraic time decay rates for the supersonic and sonic states in weighted Sobolev spaces.
Key words. Full two-phase flow, outflow problem, stationary solution,
nonlinear stability.
## 1 Introduction
Two-phase flow models play important roles in applied scientific areas such as nuclear power, engine design, chemical engineering, medicine, oil and gas, fluidization, wastewater treatment, biomedical science, liquid crystals, and lubrication [12, 24, 6, 1, 21]. In this paper, we consider the full two-phase flow model, which can be formally derived from a Vlasov-Fokker-Planck equation coupled with the compressible Navier-Stokes equations through the Chapman-Enskog expansion [19].
We consider the initial-boundary value problem (IBVP) for the full two-phase
flow model as follows:
$\left\{\begin{split}&\rho_{t}+(\rho u)_{x}=0,\\ &(\rho u)_{t}+[\rho u^{2}+p_{1}(\rho)]_{x}=\mu u_{xx}+n(v-u),\\ &n_{t}+(nv)_{x}=0,\\ &(nv)_{t}+[nv^{2}+p_{2}(n)]_{x}=(nv_{x})_{x}-n(v-u),\qquad x>0,~t>0,\end{split}\right.$ (1.1)
where $\rho>0$ and $n>0$ stand for the densities, $u$ and $v$ are the velocities of the two fluids, respectively, the constant $\mu>0$ is the viscosity coefficient, and the pressure-density functions take the forms
$p_{1}(\rho)=A_{1}\rho^{\gamma},\quad p_{2}(n)=A_{2}n^{\alpha}$ (1.2)
with $A_{1}>0,~{}A_{2}>0$, $\gamma\geq 1$ and $\alpha\geq 1$. The initial data
are given by
$(\rho,u,n,v)(0,x)=(\rho_{0},u_{0},n_{0},v_{0})(x),~{}~{}~{}~{}\underset{x\in\mathbb{R}_{+}}{\inf}\rho_{0}(x)>0,~{}~{}~{}~{}\underset{x\in\mathbb{R}_{+}}{\inf}n_{0}(x)>0,$
(1.3)
$\begin{split}&\lim_{x\to+\infty}(\rho_{0},u_{0},n_{0},v_{0})(x)=(\rho_{+},u_{+},n_{+},u_{+}),\quad\rho_{+}>0,\quad
n_{+}>0,\end{split}$ (1.4)
and the outflow boundary condition is imposed by
$(u,v)(t,0)=(u_{-},u_{-}),\quad u_{-}<0,$ (1.5)
where $\rho_{+}>0$, $n_{+}>0$, $u_{+}$ and $u_{-}<0$ are constants.
The condition $u_{-}<0$ means that the fluids flow out of the region $\mathbb{R}_{+}$ through the boundary $x=0$ with velocity $u_{-}$; therefore, the problem $({\ref{f}})$-$({\ref{outflow c}})$ is called the outflow problem [22]. On the other hand, the analogous problem with $u_{-}>0$ is called the inflow problem [22], where the boundary densities $(\rho,n)(t,0)=(\rho_{-},n_{-})$ must also be imposed for well-posedness.
It is an interesting issue to study the outflow/inflow problem. Much important progress has been made recently on the existence and nonlinear stability of steady-states and basic waves for the outflow/inflow problem for one-phase flows, such as the compressible Navier-Stokes equations [11, 16, 27, 15, 28, 29, 3, 9, 23, 31, 8, 26, 17, 10, 32, 33, 34, 5, 18, 30]. For instance, for the inflow problem of the isentropic Navier-Stokes equations, Matsumura-Nishihara [23] showed the existence and stability of steady-states for both subsonic and sonic cases, together with the stability of the superposition of steady-states and rarefaction waves under small perturbations, Fan-Liu-Wang-Zhao [5] showed that steady-states and rarefaction waves are nonlinearly stable under large perturbations, and Huang-Matsumura-Shi [9] obtained the stability of the superposition of steady-states and shock waves under small perturbations. For the inflow problem of the full compressible Navier-Stokes equations, Nakamura-Nishibata [26] obtained the existence and stability of steady-states, also for both subsonic and sonic cases, and Qin-Wang [32, 31] and Hong-Wang [8] treated the combination of steady-states and rarefaction waves. For the outflow problem of the isentropic Navier-Stokes equations, the existence and nonlinear stability of steady-states for both supersonic and sonic cases under small perturbations were proved in [16, 27, 28, 29], the convergence rates toward steady-states for supersonic and sonic cases were investigated in [28, 29], and the stability of the superposition of steady-states and rarefaction waves was obtained in [17, 11]. For the outflow problem of the full compressible Navier-Stokes equations, Kawashima-Nakamura-Nishibata [15] established the existence and nonlinear stability of steady-states under small perturbations for three cases, supersonic, sonic, and subsonic flow, whereas no steady-state exists for the outflow problem of the isentropic Navier-Stokes equations in the subsonic case; in addition, they obtained the convergence rates toward the stationary solutions for supersonic and sonic cases [15], and Qin [30], Wan-Wang-Zhao [33], and Wan-Wang-Zou [34] obtained the stability of steady-states, rarefaction waves, and their combination under large perturbations.
It is of interest, and challenging, to investigate the outflow/inflow problem for two-phase flow models due to the coupled motion of the two phases. Although the problem is rather complicated, some important progress has been made on the existence and nonlinear stability of steady-states and basic waves for the outflow/inflow problem for two-phase flow models [4, 35, 38, 7, 37, 20]. For instance, Yin-Zhu [37] obtained the existence of steady-states, similar to the isentropic Navier-Stokes equations [16], and the nonlinear stability and convergence rates of steady-states for the supersonic case for the outflow problem of the drift-flux model. For the outflow problem of the two-fluid Navier-Stokes-Poisson system, the existence of steady-states, which is similar to that of the isentropic Navier-Stokes equations [16], and the nonlinear stability of rarefaction waves and steady-states, together with the superposition of steady-states and rarefaction waves, were proved in [4, 35]. We established the existence and nonlinear stability of steady-states for the inflow problem of the model $(\ref{f})$ for the supersonic, sonic, and subsonic cases in [20], a phenomenon different from the inflow problem for the isentropic Navier-Stokes equations [23] and the full compressible Navier-Stokes equations [26], where no steady-state exists in the supersonic case.
However, there is no result on the existence and nonlinear stability of steady-states for the outflow problem $(\ref{f})$-$(\ref{outflow c})$. The main purpose of this paper is to prove the existence and nonlinear stability of steady-states for the supersonic, sonic, and subsonic cases, and to obtain either exponential time decay rates for the supersonic flow or algebraic time decay rates for both supersonic and sonic flows. Contrary to the isentropic Navier-Stokes equations, the steady-state to the IBVP $(\ref{f})$-$(\ref{outflow c})$ exists in the subsonic case.
The steady-state
$(\widetilde{\rho},\widetilde{u},\widetilde{n},\widetilde{v})(x)$ to the
outflow problem $(\ref{f})$-$(\ref{outflow c})$ satisfies the following system
$\left\{\begin{split}&(\widetilde{\rho}\widetilde{u})_{x}=0,\\ &[\widetilde{\rho}\widetilde{u}^{2}+p_{1}(\widetilde{\rho})]_{x}=(\mu\widetilde{u}_{x})_{x}+\widetilde{n}(\widetilde{v}-\widetilde{u}),\\ &(\widetilde{n}\widetilde{v})_{x}=0,\\ &[\widetilde{n}\widetilde{v}^{2}+p_{2}(\widetilde{n})]_{x}=(\widetilde{n}\widetilde{v}_{x})_{x}-\widetilde{n}(\widetilde{v}-\widetilde{u}),\qquad x>0,\end{split}\right.$ (1.6)
with the boundary conditions and spatial far field conditions
$\begin{split}&(\widetilde{u},\widetilde{v})(0)=(u_{-},u_{-}),\quad\lim_{x\to\infty}(\widetilde{\rho},\widetilde{u},\widetilde{n},\widetilde{v})(x)=(\rho_{+},u_{+},n_{+},u_{+}),\quad\underset{x\in\mathbb{R}_{+}}{\inf}\widetilde{\rho}(x)>0,\quad\underset{x\in\mathbb{R}_{+}}{\inf}\widetilde{n}(x)>0.\end{split}$
(1.7)
Integrating $(\ref{stationary f})_{1}$ and $(\ref{stationary f})_{3}$ over
$(x,+\infty)$, we have
$\begin{split}&\widetilde{u}=\frac{\rho_{+}}{\widetilde{\rho}}u_{+},\quad\widetilde{v}=\frac{n_{+}}{\widetilde{n}}u_{+},\end{split}$
(1.8)
which implies that the following relationship
$u_{+}=\frac{\widetilde{n}(0)}{n_{+}}u_{-}=\frac{\widetilde{\rho}(0)}{\rho_{+}}u_{-}<0$
(1.9)
is a necessary property of the steady-state to the boundary value problem (BVP) $(\ref{stationary f})$-$(\ref{stationary boundary c})$.
Define the Mach number $M_{+}$ and the sound speed $c_{+}$ as follows:
$M_{+}:=\frac{|u_{+}|}{c_{+}},\quad
c_{+}:=(\frac{A_{1}\gamma\rho_{+}^{\gamma}+A_{2}\alpha
n_{+}^{\alpha}}{\rho_{+}+n_{+}})^{\frac{1}{2}}.$ (1.10)
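The classification into supersonic, sonic, and subsonic far-field states follows directly from Eq. (1.10) and can be sketched as below; the parameter values in the snippet are illustrative placeholders, not taken from the paper.

```python
import numpy as np

# Sketch of Eq. (1.10). All default values (A1, A2, gamma, alpha) and the
# sample densities are illustrative placeholders, not from the paper.
def sound_speed(rho_p, n_p, A1=1.0, A2=1.0, gamma=1.4, alpha=1.4):
    # c_+ = [(A1*gamma*rho_+^gamma + A2*alpha*n_+^alpha) / (rho_+ + n_+)]^(1/2)
    return np.sqrt((A1 * gamma * rho_p**gamma + A2 * alpha * n_p**alpha)
                   / (rho_p + n_p))

def regime(u_p, rho_p, n_p, **params):
    """Classify the far-field state by the Mach number M_+ = |u_+| / c_+."""
    M = abs(u_p) / sound_speed(rho_p, n_p, **params)
    return "supersonic" if M > 1 else ("sonic" if M == 1 else "subsonic")
```

For example, with unit densities the sound speed is $\sqrt{1.4}\approx 1.18$, so a far-field velocity $u_{+}=-2$ is supersonic while $u_{+}=-0.5$ is subsonic.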
Then, we have the following results about the existence and uniqueness of the
steady-state.
###### Theorem 1.1.
Let $\delta:=|u_{-}-u_{+}|>0$ and $u_{+}<0$ hold. Then there exists a set $\Omega_{-}\subset\mathbb{R}_{-}$ such that if $u_{-}\in\Omega_{-}$ and $\delta$ is sufficiently small, then there exists a unique strong solution $(\widetilde{\rho},\widetilde{u},\widetilde{n},\widetilde{v})(x)$ to the BVP $(\ref{stationary f})$-$(\ref{stationary boundary c})$ which satisfies either for the supersonic or subsonic case $M_{+}\neq 1$ that
$~{}~{}~{}~{}~{}~{}|\partial_{x}^{k}(\widetilde{\rho}-\rho_{+},\widetilde{u}-u_{+},\widetilde{n}-n_{+},\widetilde{v}-u_{+})|\leq
C_{1}\delta e^{-c_{0}x},~{}~{}~{}k=0,1,2,3,$ (1.11)
or for the sonic case $M_{+}=1$ that
$|\partial_{x}^{k}(\widetilde{\rho}-\rho_{+},\widetilde{u}-u_{+},\widetilde{n}-n_{+},\widetilde{v}-u_{+})|\leq
C_{2}\frac{\delta^{k+1}}{(1+\delta x)^{k+1}},~{}~{}~{}k=0,1,2,3,$ (1.12)
and
$(\widetilde{u}_{x},\widetilde{v}_{x})=(a\sigma^{2}(x),a\sigma^{2}(x))+O(|\sigma(x)|^{3}),$
(1.13)
where $\sigma(x)$ is a smooth function satisfying
$\sigma_{x}=-a\sigma^{2}+O(|\sigma|^{3})$ and
$c_{1}\frac{\delta}{1+\delta x}\leq\sigma(x)\leq C_{3}\frac{\delta}{1+\delta
x},~{}~{}~{}~{}|\partial^{k}_{x}\sigma(x)|\leq
C_{3}\frac{\delta^{k+1}}{(1+\delta x)^{k+1}},~{}~{}~{}k=0,1,2,3,$ (1.14)
and $C_{i}>0$, $i=1,2,3$, $c_{0}>0$, $c_{1}>0$, and $a>0$ are positive
constants.
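The sonic-case decay profile in Theorem 1.1 can be sanity-checked numerically. Under the simplifying assumptions that $a=1$ and the $O(|\sigma|^{3})$ remainder is dropped, $\sigma(x)=\delta/(1+\delta x)$ solves $\sigma_{x}=-\sigma^{2}$ exactly and obeys the $k=1$ derivative bound in (1.14); the sketch below checks both.

```python
import numpy as np

# Sanity check of the sonic-case profile in Theorem 1.1 (assumptions: a = 1
# and the O(|sigma|^3) remainder dropped): sigma(x) = delta/(1 + delta*x)
# solves sigma_x = -sigma^2 and obeys the k = 1 derivative bound in (1.14).
delta = 0.1
x = np.linspace(0.0, 50.0, 5001)
sigma = delta / (1.0 + delta * x)

sigma_x = np.gradient(sigma, x, edge_order=2)    # numerical derivative
residual = np.max(np.abs(sigma_x + sigma**2))    # ~ discretization error only

# |d/dx sigma| <= C * delta^2 / (1 + delta*x)^2, with C close to 1 here
bound_ok = np.all(np.abs(sigma_x) <= 1.01 * delta**2 / (1.0 + delta * x)**2)
```

The residual is set by the finite-difference error alone, confirming that the ODE $\sigma_{x}=-\sigma^{2}$ holds exactly for this model profile.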
###### Remark 1.1.
Due to the drag force term, the existence of steady-states to the IBVP
$(\ref{f})$-$(\ref{outflow c})$ is obtained even for the subsonic case
$M_{+}<1$, which is different from that of the isentropic Navier-Stokes
equations [16]. Moreover, the existence of steady-states to the IBVP
$(\ref{f})$-$(\ref{outflow c})$ is similar to that of the full compressible
Navier-Stokes equations [15].
Then, we have the nonlinear stability of the steady-state to the IBVP
$(\ref{f})$-$(\ref{outflow c})$ for supersonic, sonic and subsonic cases.
###### Theorem 1.2.
Let the same conditions in Theorem 1.1 hold and assume that it holds
$|p^{\prime}_{1}({\rho_{+}})-p^{\prime}_{2}({n_{+}})|\leq\sqrt{2}|u_{+}|\min\\{(1+\frac{\rho_{+}}{n_{+}})[(\gamma-1)p^{\prime}_{1}(\rho_{+})]^{\frac{1}{2}},(1+\frac{n_{+}}{\rho_{+}})[(\alpha-1)p^{\prime}_{2}(n_{+})]^{\frac{1}{2}}\\}$
(1.15)
for the sonic case $M_{+}=1.$ Then, there exists a small positive constant
$\varepsilon_{0}>0$ such that if
$\|(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v})\|_{H^{1}}+\delta\leq\varepsilon_{0},$
(1.16)
the IBVP $(\ref{f})$-$(\ref{outflow c})$ has a unique global solution
$(\rho,u,n,v)(t,x)$ satisfying
$\left\\{\begin{split}&(\rho-\widetilde{\rho},u-\widetilde{u},n-\widetilde{n},v-\widetilde{v})\in
C([0,+\infty);H^{1}),\\\ &(\rho-\widetilde{\rho},n-\widetilde{n})_{x}\in
L^{2}([0,+\infty);L^{2}),\\\ &(u-\widetilde{u},v-\widetilde{v})_{x}\in
L^{2}([0,+\infty);H^{1}),\end{split}\right.$
and
$\lim_{t\to+\infty}\sup_{x\in\mathbb{R_{+}}}|(\rho-\widetilde{\rho},u-\widetilde{u},n-\widetilde{n},v-\widetilde{v})(t,x)|=0.$
(1.17)
In addition, we have the time convergence rates of the global solution to the
IBVP $(\ref{f})$-$(\ref{outflow c})$ for both supersonic and sonic cases.
###### Theorem 1.3.
Assume that the same conditions in Theorem 1.1 hold. Then, the following
results hold.
* (i)
For $M_{+}>1$ and $\lambda>0$, if the initial data satisfy
$(1+x)^{\frac{\lambda}{2}}(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v})\in L^{2}(\mathbb{R_{+}})$
and
$\|(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v})\|_{H^{1}}+\delta\leq\varepsilon_{0},$
(1.18)
for a small positive constant $\varepsilon_{0}>0$, then the solution
$(\rho,u,n,v)(t,x)$ to the IBVP $(\ref{f})$-$(\ref{outflow c})$ satisfies
$\displaystyle~{}~{}~{}~{}\|(\rho-\widetilde{\rho},u-\widetilde{u},n-\widetilde{n},v-\widetilde{v})(t)\|_{L^{\infty}}\leq
C_{4}\delta_{0}(1+t)^{-\frac{\lambda}{2}},$ (1.19)
where $C_{4}>0$ and
$\delta_{0}:=\|(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v})\|_{H^{1}}+\|(1+x)^{\frac{\lambda}{2}}(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v})\|_{L^{2}}$
are constants independent of time.
* (ii)
For $M_{+}=1$,
$1\leq\lambda<\lambda^{*},\lambda^{*}:=2+\sqrt{8+\frac{1}{1+b^{2}}}$ and
$b:=\frac{\rho_{+}(u_{+}^{2}-p^{\prime}_{1}(\rho_{+}))}{|u_{+}|\sqrt{(\mu+n_{+})n_{+}}}$,
if for arbitrary $\nu\in(0,\lambda]$, there exists a small positive constant
$\varepsilon_{0}>0$ such that
$\|\sigma^{-\frac{\lambda}{2}}(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v})\|_{H^{1}}+\delta^{\frac{1}{2}}\leq\varepsilon_{0},$
(1.20)
then the IBVP $(\ref{f})$-$(\ref{outflow c})$ has a unique global solution
$(\rho,u,n,v)(t,x)$ satisfying
$\left\\{\begin{aligned}
&\sigma^{-\frac{\nu}{2}}(\rho-\widetilde{\rho},u-\widetilde{u},n-\widetilde{n},v-\widetilde{v})\in
C([0,+\infty):H^{1}),\\\
&\sigma^{-\frac{\nu}{2}}(\rho-\widetilde{\rho},n-\widetilde{n})_{x}\in
L^{2}([0,+\infty);L^{2}),\\\
&\sigma^{-\frac{\nu}{2}}(u-\widetilde{u},v-\widetilde{v})_{x}\in
L^{2}([0,+\infty);H^{1}),\end{aligned}\right.$ (1.21)
and
$\|\sigma^{-\frac{\nu}{2}}(\rho-\widetilde{\rho},u-\widetilde{u},n-\widetilde{n},v-\widetilde{v})(t)\|_{H^{1}}\leq
C_{5}\delta_{1}(1+t)^{-\frac{\lambda-\nu}{4}},$ (1.22)
where $\sigma(x)$ is defined by $(\ref{sig})$ satisfying $(\ref{sg0})$,
$C_{5}>0$ is a positive constant independent of time, and
$\delta_{1}:=\|\sigma^{-\frac{\lambda}{2}}(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v})\|_{H^{1}}$
is a constant.
###### Remark 1.2.
For $M_{+}>1$, the exponential time convergence rates of the global solution
to the IBVP $(\ref{f})$-$(\ref{outflow c})$ can be established. Indeed, assume
that $M_{+}>1$, $u_{+}<0$ and a certain positive constant $\lambda>0$ hold.
For a certain positive constant $\kappa\in(0,\lambda]$, there exists a small
positive constant $\varepsilon_{0}>0$ such that if
$e^{\frac{\lambda}{2}x}(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v})\in L^{2}(\mathbb{R_{+}})$ and
$\|(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v})\|_{H^{1}}+\delta\leq\varepsilon_{0},$
then the solution $(\rho,u,n,v)(t,x)$ to the IBVP $(\ref{f})$-$(\ref{outflow
c})$ satisfies
$\|(\rho-\widetilde{\rho},u-\widetilde{u},n-\widetilde{n},v-\widetilde{v})(t)\|_{H^{1}}\leq
C_{6}\delta_{2}e^{-\frac{\kappa_{1}}{2}t},$ (1.23)
where $C_{6}>0$ and $\kappa_{1}\ll\kappa$ are positive constants independent
of time, and
$\delta_{2}:=\|(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v})\|_{H^{1}}+\|e^{\frac{\lambda}{2}x}(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v})\|_{L^{2}}$
is a constant.
The proof of $(\ref{M_{+}>1 exp d})$ can be obtained by similar arguments as
for $(\ref{M_{+}>1 algebra d})$. The details are omitted.
###### Remark 1.3.
In Theorem 1.3, we remove the restriction $(\ref{p small})$, and obtain the
nonlinear stability of steady-states and time decay rates of the solution to
the IBVP $(\ref{f})$-$(\ref{outflow c})$ for $M_{+}=1$ with the weighted
energy method. Moreover, if $p^{\prime}_{1}(\rho_{+})=p^{\prime}_{2}(n_{+})$,
time decay rates of the solution to the IBVP $(\ref{f})$-$(\ref{outflow c})$
for $M_{+}=1$ are the same as that of isentropic Navier-Stokes equations [28].
We now explain the main strategies to prove Theorems 1.1-1.3. The system $(\ref{f})$-$(\ref{outflow c})$ can be viewed as two compressible isentropic Navier-Stokes systems coupled with each other through the drag force relaxation mechanism. Different from the isentropic Navier-Stokes equations [16], we cannot reformulate the two momentum equations $(\ref{f})_{2}$ and $(\ref{f})_{4}$ into conservation form due to the influence of the drag force, which implies that the steady-state satisfies the system $(\ref{stationary f})$-$(\ref{stationary boundary c})$ consisting of a first-order and a second-order ordinary differential equation instead of only a first-order ordinary differential equation as in [16], so it is not straightforward to apply the center manifold theory [2]. To overcome this difficulty, we introduce a new variable $\widetilde{w}:=\widetilde{u}_{x}$, obtain the estimate $|\widetilde{u}_{x}(0)|\leq C|u_{-}-u_{+}|$ in Lemma 2.1, and then rewrite the system $(\ref{stationary f})$ as the $3\times 3$ system $(\ref{sf bar})$ of autonomous ordinary differential equations. Since the condition $p^{\prime}_{1}(\rho_{+})=p^{\prime}_{2}(n_{+})$ is not required, we need a subtle analysis to determine the sign of ${\rm Re}\,\lambda_{i}$, $i=1,2,3$, where $\lambda_{i}$, $i=1,2,3$, are the three eigenvalues of the linearization of the $3\times 3$ system $(\ref{sf bar})$. It should be noticed that the linearized system of $(\ref{sf bar})$ has at least one eigenvalue with negative real part due to the effect of the drag force, so we obtain the existence of steady-states for the supersonic, sonic, and subsonic cases in Theorem 1.1, which is different from the outflow problem of the isentropic Navier-Stokes system [16], where no steady-state exists in the subsonic case.
We establish uniform estimates of the perturbation $(\phi,\psi,\bar{\phi},\bar{\psi}):=(\rho-\widetilde{\rho},u-\widetilde{u},n-\widetilde{n},v-\widetilde{v})$ to prove the nonlinear stability of steady-states for the supersonic case $M_{+}>1$, the sonic case $M_{+}=1$, and the subsonic case $M_{+}<1$. For $M_{+}=1$, it is easy to check that $(\widetilde{\rho}\widetilde{u}^{2}+p_{1}(\widetilde{\rho}))_{x}$ and $(\widetilde{n}\widetilde{v}^{2}+p_{2}(\widetilde{n}))_{x}$ decay more slowly than $\widetilde{u}_{xx}$ for $p^{\prime}_{1}(\rho_{+})\neq p^{\prime}_{2}(n_{+})$, owing to $(\ref{stationary f})_{2}$, $(\ref{stationary f})_{4}$ and $(\ref{sigma})$, which implies that the term
$\int R_{2}dx:=\int\phi\psi\frac{(\widetilde{\rho}\widetilde{u}^{2}+p_{1}(\widetilde{\rho}))_{x}}{\widetilde{\rho}}+\bar{\phi}\bar{\psi}\frac{(\widetilde{n}\widetilde{v}^{2}+p_{2}(\widetilde{n}))_{x}}{\widetilde{n}}dx$ (1.24)
cannot be controlled directly as in [16]. With the help of $\widetilde{u}_{x}\geq 0$ and $\widetilde{v}_{x}\geq 0$, we instead deal with the terms
$\int R_{2}dx+\int(\rho\psi^{2}+p_{1}(\rho)-p_{1}(\widetilde{\rho})-p^{\prime}_{1}(\widetilde{\rho})\phi)\widetilde{u}_{x}dx+\int(n\bar{\psi}^{2}+p_{2}(n)-p_{2}(\widetilde{n})-p^{\prime}_{2}(\widetilde{n})\bar{\phi})\widetilde{v}_{x}dx,$ (1.25)
the leading terms of which can be rewritten as two positive semidefinite two-variable quadratic forms $(\ref{r_1'})$ under the condition $(\ref{p small})$.
By the weighted energy method, we get the exponential or algebraic time decay
rates for the supersonic case $M_{+}>1$ if the initial perturbation belongs to
the exponential or algebraic weighted Sobolev space, and obtain algebraic time
decay rates for the sonic case $M_{+}=1$. To get the basic weighted energy
estimates, we use $(\ref{M_{+}>1 stationary solution d})$ and the dissipation
on the relaxation friction term $\bar{\psi}-\psi$, and decompose $\psi$ as
$\psi=\bar{\psi}+(\psi-\bar{\psi})$, as motivated by Li-Wang-Wang[19]. Due to
the algebraic decay $(\ref{sigma})$ of steady-states, convergence rates of
steady-states for the sonic case $M_{+}=1$ is worse than that of the
supersonic case $M_{+}>1$. It is necessary to use the delicate algebraic decay
$(\ref{sigma})$-$(\ref{sig})$ and the dissipation on the drag force
$\bar{\psi}-\psi$, decompose $\psi$ as $\psi=\bar{\psi}+(\psi-\bar{\psi})$,
and obtain more delicate estimates to get the convergence rates for $M_{+}=1$.
It should be noted that we make full use of the dissipation on the drag force
term and the viscous terms, and take a linear coordinate transformation
$\begin{pmatrix}\phi\\\ \bar{\phi}\\\
\bar{\psi}\end{pmatrix}=\boldsymbol{P}\begin{pmatrix}\hat{\rho}\\\ \hat{n}\\\
\hat{v}\end{pmatrix},\quad\quad\quad{\rm
with~{}an~{}invertible~{}matrix~{}}\boldsymbol{P},$ (1.26)
to obtain the three-variable quadratic form
$\hat{\lambda}_{1}\hat{\rho}^{2}+\hat{\lambda}_{2}\hat{n}^{2}$ with
$\hat{\lambda}_{1},\hat{\lambda}_{2}>0$, which plays an important role in
basic weighted energy estimates together with some crucial cancellations. In
fact, we obtain the algebraic time decay rates of the solution to IBVP
$(\ref{f})$-$(\ref{outflow c})$ for $M_{+}=1$ for the initial perturbation
satisfying
$\sigma^{-\frac{\lambda}{2}}(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v})\in
L^{2}(\mathbb{R}_{+})$, with
$\lambda<\lambda^{*},\lambda^{*}:=2+\sqrt{8+\frac{1}{1+b^{2}}}$,
$b=\frac{\rho_{+}(u_{+}^{2}-p^{\prime}_{1}(\rho_{+}))}{|u_{+}|\sqrt{(\mu+n_{+})n_{+}}}$
and the function $\sigma\geq 0$ satisfying $(\ref{sig})$. This is an
interesting phenomenon reflecting the mutual influence of the two fluids,
and it should be emphasized that $\lambda^{*}=5$ is the same as that
of the isentropic Navier-Stokes equations [28] for
$u_{+}^{2}=p^{\prime}_{1}(\rho_{+})=p^{\prime}_{2}(n_{+})$.
Notation. We denote by $\|\cdot\|_{L^{p}}$ the norm of the usual Lebesgue
space $L^{p}=L^{p}({\mathbb{R}_{+}})$, $1\leq p\leq\infty$. If $p=2$, we
write $\|\cdot\|_{L^{2}(\mathbb{R}_{+})}=\|\cdot\|$. $H^{s}(\mathbb{R}_{+})$
stands for the standard $s$-th Sobolev space over $\mathbb{R}_{+}$ equipped
with its norm
$\|f\|_{H^{s}(\mathbb{R}_{+})}=\|f\|_{s}:=(\sum_{i=0}^{s}\|\partial^{i}f\|^{2})^{\frac{1}{2}}.$
$C([0,T];H^{s}(\mathbb{R}_{+}))$ represents the space of continuous functions
on the interval $[0,T]$ with values in $H^{s}(\mathbb{R}_{+})$.
$L^{2}([0,T];\mathcal{B})$ denotes the space of $L^{2}$ functions on the
interval $[0,T]$ with values in Banach space $\mathcal{B}$. For a scalar
function $W(x)>0$, the weighted $L^{2}(\mathbb{R}_{+})$ and
$H^{1}(\mathbb{R}_{+})$ spaces are defined as follows:
$\displaystyle L^{2}_{W}(\mathbb{R}_{+}):=\\{~{}f\in L^{2}(\mathbb{R}_{+})~{}|~{}\|f\|_{L^{2}_{W}}:=(\int_{\mathbb{R}_{+}}W(x)f^{2}dx)^{\frac{1}{2}}<+\infty~{}\\},$
$\displaystyle H^{1}_{W}(\mathbb{R}_{+}):=\\{~{}f\in H^{1}(\mathbb{R}_{+})~{}|~{}\|f\|_{H^{1}_{W}}:=(\sum_{i=0}^{1}\|\partial^{i}f\|^{2}_{L^{2}_{W}})^{\frac{1}{2}}<+\infty~{}\\}.$
For a scalar function $W_{a,\nu}:=(1+x)^{\nu}$ with $\nu\geq 0$, we denote
$\|f\|_{a,\nu}:=\|(1+x)^{\frac{\nu}{2}}f\|$.
The rest of this paper will be organized as follows. We prove the existence
and uniqueness of steady-states in Section 2, get the nonlinear stability of
steady-states in Section 3 for supersonic, sonic and subsonic cases, and
obtain convergence rates of steady-states for the supersonic flow in
Subsection 4.1 and the sonic flow in Subsection 4.2.
## 2 Existence of Steady-State
We prove Theorem 1.1 on the existence and uniqueness of steady-states to the
BVP $(\ref{stationary f})$-$(\ref{stationary boundary c})$ with $u_{+}<0$ and
$\delta$ sufficiently small as follows. In order to apply the center manifold
theory [2], it is necessary to get bounds on $\widetilde{u}_{x}(0)$ and
$\widetilde{v}_{x}(0)$.
###### Lemma 2.1.
Assume that $u_{+}<0$ holds and that $\delta=|u_{-}-u_{+}|$ is
sufficiently small. Then the steady-state
$(\widetilde{\rho},\widetilde{u},\widetilde{n},\widetilde{v})$ to the BVP
$({\ref{stationary f}})$-$(\ref{stationary boundary c})$ satisfies
$|\widetilde{u}_{x}(0)|\leq C|u_{-}-u_{+}|,\quad|\widetilde{v}_{x}(0)|\leq
C|u_{-}-u_{+}|,$ (2.1)
where $C>0$ is a positive constant.
###### Proof.
Due to $\widetilde{\rho}=\frac{\rho_{+}u_{+}}{\widetilde{u}}$ and
$\widetilde{n}=\frac{n_{+}u_{+}}{\widetilde{v}}$, we have
$\left\\{\begin{aligned}
&(\rho_{+}u_{+}\widetilde{u}+A_{1}\rho^{\gamma}_{+}u^{\gamma}_{+}\widetilde{u}^{-\gamma})_{x}=(\mu\widetilde{u}_{x})_{x}+\frac{n_{+}u_{+}}{\widetilde{v}}(\widetilde{v}-\widetilde{u}),\\\
&(n_{+}u_{+}\widetilde{v}+A_{2}n^{\alpha}_{+}u^{\alpha}_{+}\widetilde{v}^{-\alpha})_{x}=(n_{+}u_{+}\frac{\widetilde{v}_{x}}{\widetilde{v}})_{x}-\frac{n_{+}u_{+}}{\widetilde{v}}(\widetilde{v}-\widetilde{u}).\end{aligned}\right.$
(2.2)
Adding $(\ref{sf})_{1}$ to $(\ref{sf})_{2}$ and integrating the resulting
equation over $(0,+\infty)$ lead to
$\mu\widetilde{u}_{x}(0)+\frac{n_{+}u_{+}}{u_{-}}\widetilde{v}_{x}(0)=\frac{1}{u_{+}}[(\rho_{+}+n_{+})u^{2}_{+}-(A_{1}\gamma\rho^{\gamma}_{+}+A_{2}\alpha
n^{\alpha}_{+})](u_{-}-u_{+})+O(|u_{-}-u_{+}|^{2}).$ (2.3)
With the help of $(\ref{wt u v})$, we multiply $(\ref{sf})_{1}$ by
$\widetilde{u}$ and $(\ref{sf})_{2}$ by $\widetilde{v}$, respectively, then
integrate the sum of the resulting equations over $(0,+\infty)$ to obtain
$\displaystyle\int_{0}^{+\infty}(\mu\widetilde{u}_{x}^{2}+n_{+}u_{+}\frac{\widetilde{v}^{2}_{x}}{\widetilde{v}})dx+\int_{0}^{+\infty}\frac{n_{+}u_{+}}{\widetilde{v}}(\widetilde{v}-\widetilde{u})^{2}dx$
(2.4) $\displaystyle=$ $\displaystyle-
u_{-}[\mu\widetilde{u}_{x}(0)+n_{+}u_{+}\frac{\widetilde{v}_{x}(0)}{u_{-}}]+[(\rho_{+}+n_{+})u_{+}^{2}-(A_{1}\gamma\rho^{\gamma}_{+}+A_{2}\alpha
n^{\alpha}_{+})](u_{-}-u_{+})+O(|u_{-}-u_{+}|^{2})$ $\displaystyle=$
$\displaystyle O(|u_{-}-u_{+}|^{2}).$
Multiplying $(\ref{sf})_{2}$ by $\frac{\widetilde{v}_{x}}{\widetilde{v}}$ and
then integrating the resulting equation over $(0,\infty)$ yield
$-n_{+}u_{+}\frac{\widetilde{v}^{2}_{x}(0)}{2u^{2}_{-}}+\int_{0}^{+\infty}A_{2}\alpha
n^{\alpha}_{+}\frac{u_{+}^{\alpha}}{\widetilde{v}^{\alpha+2}}\widetilde{v}^{2}_{x}dx=\int_{0}^{+\infty}\frac{n_{+}u_{+}}{\widetilde{v}}(\widetilde{v}-\widetilde{u})\frac{\widetilde{v}_{x}}{\widetilde{v}}dx+\int_{0}^{+\infty}n_{+}u_{+}\frac{\widetilde{v}^{2}_{x}}{\widetilde{v}}dx.$
(2.5)
We estimate the terms on the right-hand side of $(\ref{sf'b3})$. With
$\inf\limits_{x\in\mathbb{R}_{+}}\widetilde{n}>0$, $\alpha\geq 1$ and
$(\ref{wt v-u})$, we have
$\displaystyle\int_{0}^{+\infty}\frac{n_{+}u_{+}}{\widetilde{v}}(\widetilde{v}-\widetilde{u})\frac{\widetilde{v}_{x}}{\widetilde{v}}dx+\int_{0}^{+\infty}n_{+}u_{+}\frac{\widetilde{v}^{2}_{x}}{\widetilde{v}}dx$
(2.6) $\displaystyle\leq$ $\displaystyle
C\int_{0}^{+\infty}\frac{n_{+}u_{+}}{\widetilde{v}}(\widetilde{v}-\widetilde{u})^{2}dx+\frac{1}{4}\int_{0}^{+\infty}A_{2}\alpha
n^{\alpha}_{+}\frac{u_{+}^{\alpha}}{\widetilde{v}^{\alpha+2}}\widetilde{v}^{2}_{x}dx$
$\displaystyle\leq$ $\displaystyle
C|u_{-}-u_{+}|^{2}+\frac{1}{4}\int_{0}^{+\infty}A_{2}\alpha
n^{\alpha}_{+}\frac{u_{+}^{\alpha}}{\widetilde{v}^{\alpha+2}}\widetilde{v}^{2}_{x}dx.$
Combining $(\ref{sf'b3})$ and $(\ref{sf''b2})$, we get
$|\widetilde{v}_{x}(0)|\leq C|u_{-}-u_{+}|,\quad|\widetilde{u}_{x}(0)|\leq
C|u_{-}-u_{+}|.$ (2.7)
∎
Then, we can prove Theorem 1.1 with the above lemma. Using
$\widetilde{\rho}=\frac{\rho_{+}u_{+}}{\widetilde{u}}$ and
$\widetilde{n}=\frac{n_{+}u_{+}}{\widetilde{v}}$, and integrating the sum
of $(\ref{stationary f})_{2}$ and $(\ref{stationary f})_{4}$ over
$(x,+\infty)$, we have
$\left\\{\begin{split}&\widetilde{v}_{x}=\frac{\widetilde{v}}{n_{+}u_{+}}[\rho_{+}u_{+}(\widetilde{u}-u_{+})+A_{1}\rho^{\gamma}_{+}(\frac{u^{\gamma}_{+}}{\widetilde{u}^{\gamma}}-1)+n_{+}u_{+}(\widetilde{v}-u_{+})+A_{2}n^{\alpha}_{+}(\frac{u^{\alpha}_{+}}{\widetilde{v}^{\alpha}}-1)-\mu\widetilde{u}_{x}],\\\
&\widetilde{u}_{xx}=\frac{1}{\mu}[(\rho_{+}u_{+}-A_{1}\rho^{\gamma}_{+}u^{\gamma}_{+}\widetilde{u}^{-\gamma-1})\widetilde{u}_{x}-n_{+}u_{+}(1-\frac{\widetilde{u}}{\widetilde{v}})].\end{split}\right.$
(2.8)
Define $\widetilde{w}:=\widetilde{u}_{x}$ and
$\bar{U}:=(\bar{u},\bar{w},\bar{v})^{\rm
T}:=(\widetilde{u}-u_{+},\widetilde{w},\widetilde{v}-u_{+})^{\rm T}$. The
system $(\ref{2sf})$ can be reformulated as the following autonomous system:
$\left\\{\begin{aligned}
&\bar{U}_{x}=\boldsymbol{J_{+}}\bar{U}+(0,\bar{g}_{2}(\bar{U}),\bar{g}_{3}(\bar{U}))^{\rm
T},\\\ &\bar{U}_{-}:=(\bar{u},\bar{w},\bar{v})^{\rm
T}(0)=(u_{-}-u_{+},\widetilde{u}_{x}(0),u_{-}-u_{+})^{\rm
T},~{}\lim_{x\to\infty}\bar{U}^{\rm T}=(0,0,0),\end{aligned}\right.$ (2.9)
where
$\displaystyle\boldsymbol{J_{+}}=$ $\displaystyle\begin{pmatrix}0&1&0\\\
\frac{n_{+}}{\mu}&\frac{\rho_{+}u_{+}^{2}-A_{1}\gamma\rho^{\gamma}_{+}}{\mu
u_{+}}&-\frac{n_{+}}{\mu}\\\
\frac{\rho_{+}u_{+}^{2}-A_{1}\gamma\rho^{\gamma}_{+}}{n_{+}u_{+}}&-\frac{\mu}{n_{+}}&\frac{n_{+}u_{+}^{2}-A_{2}\alpha
n^{\alpha}_{+}}{n_{+}u_{+}}\end{pmatrix},$ (2.10)
$\displaystyle\bar{g}_{2}(\bar{U})=$
$\displaystyle\frac{1}{2}(2\frac{n_{+}}{\mu}\frac{1}{u_{+}}\bar{v}^{2}-2\frac{n_{+}}{\mu}\frac{1}{u_{+}}\bar{u}\bar{v}+2\frac{A_{1}\gamma(\gamma+1)\rho^{\gamma}_{+}}{\mu
u^{2}_{+}}\bar{w}\bar{u})+O(|\bar{U}|^{3}),$ (2.11)
$\displaystyle\bar{g}_{3}(\bar{U})=$
$\displaystyle\frac{1}{2}[\frac{A_{1}\gamma(\gamma+1)\rho^{\gamma}_{+}}{n_{+}u^{2}_{+}}\bar{u}^{2}+2\frac{\rho_{+}u_{+}^{2}-A_{1}\gamma\rho^{\gamma}_{+}}{n_{+}u^{2}_{+}}\bar{v}\bar{u}+(2\frac{n_{+}u_{+}^{2}-A_{2}\alpha
n^{\alpha}_{+}}{n_{+}u^{2}_{+}}$
$\displaystyle+\frac{A_{2}\alpha(\alpha+1)n^{\alpha}_{+}}{n_{+}u^{2}_{+}})\bar{v}^{2}-2\frac{\mu}{n_{+}u_{+}}\bar{w}\bar{v}]+O(|\bar{U}|^{3}).$
(2.12)
Three eigenvalues $\lambda_{1},\lambda_{2},\lambda_{3}$ of matrix
$\boldsymbol{J_{+}}$ satisfy
$\left\\{\begin{aligned}
&\lambda_{1}\lambda_{2}\lambda_{3}=-\frac{(\rho_{+}+n_{+})u^{2}_{+}-(A_{1}\gamma\rho^{\gamma}_{+}+A_{2}\alpha
n^{\alpha}_{+})}{\mu u_{+}},\\\
&\lambda_{1}+\lambda_{2}+\lambda_{3}=\frac{\rho_{+}u^{2}_{+}-A_{1}\gamma\rho^{\gamma}_{+}}{\mu
u_{+}}+\frac{n_{+}u^{2}_{+}-A_{2}\alpha n^{\alpha}_{+}}{n_{+}u_{+}},\\\
&\lambda_{1}\lambda_{2}+\lambda_{1}\lambda_{3}+\lambda_{2}\lambda_{3}=\rho_{+}\frac{(u_{+}^{2}-A_{1}\gamma\rho_{+}^{\gamma-1})(u_{+}^{2}-A_{2}\alpha
n_{+}^{\alpha-1})}{\mu u^{2}_{+}}-1-\frac{n_{+}}{\mu}.\end{aligned}\right.$
(2.13)
If $M_{+}>1$, it is easy to obtain $\lambda_{1}\lambda_{2}\lambda_{3}>0$ and
$u_{+}^{2}>\min\\{A_{1}\gamma\rho_{+}^{\gamma-1},A_{2}\alpha
n_{+}^{\alpha-1}\\}$. Without loss of generality, we assume
$A_{1}\gamma\rho_{+}^{\gamma-1}\geq A_{2}\alpha n_{+}^{\alpha-1}$. Moreover,
we have
$\lambda_{1}+\lambda_{2}+\lambda_{3}<0{~{}\rm
for~{}}u_{+}^{2}>A_{1}\gamma\rho_{+}^{\gamma-1},\quad\lambda_{1}\lambda_{2}+\lambda_{1}\lambda_{3}+\lambda_{2}\lambda_{3}<0{~{}\rm
for~{}}A_{2}\alpha n_{+}^{\alpha-1}\leq u^{2}_{+}\leq
A_{1}\gamma\rho_{+}^{\gamma-1},$ (2.14)
which implies ${\rm Re}\lambda_{1}<0,~{}{\rm Re}\lambda_{2}<0$ and
$\lambda_{3}>0$ for $M_{+}>1$. Using similar arguments, we have the following
results:
$\left\\{\begin{split}&{\rm if}~{}M_{+}>1,{\rm then}~{}{\rm
Re}\lambda_{1}<0,{\rm Re}\lambda_{2}<0,\lambda_{3}>0,\\\ &{\rm
if}~{}M_{+}<1,{\rm then}~{}{\rm Re}\lambda_{1}>0,{\rm
Re}\lambda_{2}>0,\lambda_{3}<0,\\\ &{\rm if}~{}M_{+}=1,{\rm
then}~{}\lambda_{1}>0,\lambda_{2}<0,\lambda_{3}=0.\end{split}\right.$ (2.15)
Then, applying the center manifold theory [2], it is not difficult to prove
Theorem 1.1 in the supersonic and subsonic cases $M_{+}\neq 1$ if $\delta$ is small.
Finally, we prove the sonic case $M_{+}=1$ in Theorem 1.1, in which
$\lambda_{1}>0,\lambda_{2}<0,\lambda_{3}=0$. The eigenvectors
$r_{1},r_{2},r_{3}$ associated with $\lambda_{1},\lambda_{2},\lambda_{3}$ are,
respectively,
$r_{1}=\begin{pmatrix}1\\\ \lambda_{1}\\\
-\frac{\mu}{n_{+}}(\lambda_{1}^{2}-\frac{\rho_{+}u^{2}_{+}-A_{1}\gamma\rho^{\gamma}_{+}}{\mu
u_{+}}\lambda_{1})+1\end{pmatrix},r_{2}=\begin{pmatrix}1\\\ \lambda_{2}\\\
-\frac{\mu}{n_{+}}(\lambda_{2}^{2}-\frac{\rho_{+}u^{2}_{+}-A_{1}\gamma\rho^{\gamma}_{+}}{\mu
u_{+}}\lambda_{2})+1\end{pmatrix},r_{3}=\begin{pmatrix}1\\\ 0\\\
1\end{pmatrix}.$ (2.16)
Define the matrix $\boldsymbol{P_{1}}:=[r_{1},r_{2},r_{3}]$ and take a linear
transformation $Z:=(z_{1},z_{2},z_{3})^{\rm
T}=\boldsymbol{P_{1}}^{-1}\bar{U}$. With $(\ref{g2})$ and $(\ref{g})$, the
system $(\ref{sf bar})$ can be reformulated as follows
$\left\\{\begin{aligned} &\frac{d}{dx}\begin{pmatrix}z_{1}\\\ z_{2}\\\
z_{3}\end{pmatrix}=\begin{pmatrix}\lambda_{1}&*&0\\\ 0&\lambda_{2}&0\\\
0&0&\lambda_{3}\end{pmatrix}\begin{pmatrix}z_{1}\\\ z_{2}\\\
z_{3}\end{pmatrix}+\begin{pmatrix}g_{1}(z_{1},z_{2},z_{3})\\\
g_{2}(z_{1},z_{2},z_{3})\\\ g_{3}(z_{1},z_{2},z_{3})\end{pmatrix},\\\
&(z_{1},z_{2},z_{3})(0)=(z_{1-},z_{2-},z_{3-})=(\boldsymbol{P_{1}}^{-1}\bar{U}_{-})^{\rm
T},\quad\lim_{x\to\infty}(z_{1},z_{2},z_{3})=(0,0,0),\end{aligned}\right.$
(2.17)
where the nonlinear functions $g_{i}~{}(i=1,2,3)$ are given by
$\begin{pmatrix}g_{1}(z_{1},z_{2},z_{3})\\\ g_{2}(z_{1},z_{2},z_{3})\\\
g_{3}(z_{1},z_{2},z_{3})\end{pmatrix}=\boldsymbol{P_{1}}^{-1}\begin{pmatrix}0\\\
\bar{g}_{2}(\bar{u},\bar{w},\bar{v})\\\
\bar{g}_{3}(\bar{u},\bar{w},\bar{v})\end{pmatrix}.$ (2.18)
With the help of the center manifold theory [2], there exist a local center
manifold $W^{c}(0,0,0)$ and a local stable manifold $W_{3}^{s}(0,0,0)$:
$W^{c}(0,0,0)=\\{(z_{1},z_{2},z_{3})~{}|~{}z_{1}=f^{c}_{1}(z_{3}),z_{2}=f^{c}_{2}(z_{3}),|z_{3}|~{}{\rm
sufficiently~{}small}\\},$ (2.19)
$W_{3}^{s}(0,0,0)=\\{(z_{1},z_{2},z_{3})~{}|~{}z_{1}=f^{s}_{1}(z_{2}),z_{3}=f^{s}_{2}(z_{2}),|z_{2}|~{}{\rm
sufficiently~{}small}\\},$ (2.20)
where $f^{c}_{i},f^{s}_{i},i=1,2$ are smooth functions and
$f^{c}_{i}(0)=0,~{}Df^{c}_{i}(0)=0,~{}f^{s}_{i}(0)=0,~{}Df^{s}_{i}(0)=0,~{}i=1,2$.
Using $\bar{U}=\boldsymbol{P_{1}}Z$, $(\ref{g3})$-$(\ref{g})$, and $(\ref{bg})$, we obtain
$g_{3}(z_{1},z_{2},z_{3})=az^{2}_{3}+O(|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{3}+|z_{1}z_{3}|+|z_{2}z_{3}|),$
(2.21)
where
$a=\frac{A_{1}\gamma(\gamma+1)\rho^{\gamma}_{+}+A_{2}\alpha(\alpha+1)n^{\alpha}_{+}}{2u_{+}^{2}(1+b^{2})(\mu+n_{+})},\quad
b:=\frac{\rho_{+}(u_{+}^{2}-p^{\prime}_{1}(\rho_{+}))}{|u_{+}|\sqrt{(\mu+n_{+})n_{+}}}.$
(2.22)
Therefore, the system $(\ref{sf z})$ can be reformulated as follows
$\left\\{\begin{aligned} &z_{1x}=\lambda_{1}z_{1}+O(|Z|^{2}),\\\
&z_{2x}=\lambda_{2}z_{2}+O(|Z|^{2}),\\\
&z_{3x}=az_{3}^{2}+O(|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{3}+|z_{1}z_{3}|+|z_{2}z_{3}|),\\\
&(z_{1},z_{2},z_{3})(0):=(z_{1-},z_{2-},z_{3-})=(\boldsymbol{P_{1}}^{-1}\bar{U}_{-})^{\rm
T},~{}\lim_{x\to\infty}(z_{1},z_{2},z_{3})=(0,0,0).\end{aligned}\right.$
(2.23)
Let $\sigma_{1}(x)$ be a solution to $({\ref{z}})_{3}$ restricted to the local
center manifold, satisfying the equation
$\sigma_{1x}=a\sigma_{1}^{2}+O(\sigma_{1}^{3}),\quad\sigma_{1}(x)\to 0~{}{\rm
as}~{}x\to+\infty,$ (2.24)
which implies that there exists a monotonically increasing solution
$\sigma_{1}(x)<0$ to $(\ref{sg})$ if $\sigma_{1}(0)<0$ holds and
$|\sigma_{1}(0)|$ is sufficiently small. Therefore, if the initial data
$(z_{1-},z_{2-},z_{3-})$ belongs to the region
$\mathcal{M}\subset\mathbb{R}^{3}$ associated to the local stable manifold and
the local center manifold, then we have
$\left\\{\begin{aligned} &z_{i}=O(\sigma_{1}^{2})+O(\delta
e^{-cx}),~{}i=1,2,\\\ &z_{3}=\sigma_{1}+O(\delta
e^{-cx}),\end{aligned}\right.$ (2.25)
with $z_{3-}<0$, the smallness of $|(z_{1-},z_{2-},z_{3-})|$ and
$c\frac{\delta}{1+\delta x}\leq|\sigma_{1}|\leq C\frac{\delta}{1+\delta
x},\quad|\partial^{k}\sigma_{1}|\leq C\frac{\delta^{k+1}}{(1+\delta
x)^{k+1}},\quad C>0,~{}~{}k=0,1,2,3.$ (2.26)
Since $\sigma_{1}(x)\leq 0$, we define
$\sigma(x):=-\sigma_{1},$ (2.27)
which satisfies
$\sigma_{x}=-a\sigma^{2}+O(|\sigma|^{3}),\quad\sigma\to 0~{}{\rm
as}~{}x\to+\infty.$ (2.28)
It is easy to get
$|\partial_{x}^{k}(\widetilde{\rho}-\rho_{+},\widetilde{u}-u_{+},\widetilde{n}-n_{+},\widetilde{v}-u_{+})|\leq
C\frac{\delta^{k+1}}{(1+\delta x)^{k+1}},\quad C>0,~{}~{}k=0,1,2,3,$ (2.29)
and
$(\widetilde{u}-u_{+},\widetilde{v}-u_{+})=(-\sigma(x),-\sigma(x))+O(|\sigma(x)|^{2}),\quad(\widetilde{u}_{x},\widetilde{v}_{x})=(a\sigma^{2}(x),a\sigma^{2}(x))+O(|\sigma|^{3}),$
(2.30)
with the help of $\bar{U}=\boldsymbol{P_{1}}Z$ and $(\ref{ev})$.
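The two relations in (2.30) are consistent with (2.28): differentiating the first relation and substituting $\sigma_{x}=-a\sigma^{2}+O(|\sigma|^{3})$ gives, as a one-line check,

$\widetilde{u}_{x}=-\sigma_{x}+O(|\sigma\sigma_{x}|)=a\sigma^{2}(x)+O(|\sigma|^{3}),$

which is the second relation; in particular $\widetilde{u}_{x},\widetilde{v}_{x}\geq 0$ in the sonic case, as used in the stability analysis below.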
## 3 Nonlinear stability of steady-states
The function space $Y(0,T)$ for $T>0$ is defined by
$\displaystyle Y(0,T):=\\{~{}(\phi,\psi,\bar{\phi},\bar{\psi})~{}|{}$
$\displaystyle(\phi,\psi,\bar{\phi},\bar{\psi})\in
C([0,T];H^{1}(\mathbb{R}_{+})),$ (3.1)
$\displaystyle(\phi_{x},\bar{\phi}_{x})\in
L^{2}([0,T];L^{2}(\mathbb{R}_{+})),~{}(\psi_{x},\bar{\psi}_{x})\in
L^{2}([0,T];H^{1}(\mathbb{R}_{+}))~{}\\}.$
Let
$\phi=\rho-\widetilde{\rho},\quad\psi=u-\widetilde{u},\quad\bar{\phi}=n-\widetilde{n},\quad\bar{\psi}=v-\widetilde{v}.$
(3.2)
Then the perturbation $(\phi,\psi,\bar{\phi},\bar{\psi})$ satisfies the
following system
$\left\\{\begin{aligned}
&\phi_{t}+u\phi_{x}+\rho\psi_{x}=-(\psi\widetilde{\rho}_{x}+\phi\widetilde{u}_{x}),\\\
&\psi_{t}+u\psi_{x}+\frac{p^{\prime}_{1}(\rho)}{\rho}\phi_{x}-\frac{\mu\psi_{xx}}{\rho}-\frac{n(\bar{\psi}-\psi)}{\rho}=F_{1},\\\
&\bar{\phi}_{t}+v\bar{\phi}_{x}+n\bar{\psi}_{x}=-(\bar{\psi}\widetilde{n}_{x}+\bar{\phi}\widetilde{v}_{x}),\\\
&\bar{\psi}_{t}+v\bar{\psi}_{x}+\frac{p^{\prime}_{2}(n)}{n}\bar{\phi}_{x}-\frac{(n\bar{\psi}_{x})_{x}}{n}+(\bar{\psi}-\psi)=F_{2},\end{aligned}\right.$
(3.3)
where
$F_{1}=-[-\mu(\frac{1}{\rho}-\frac{1}{\widetilde{\rho}})\widetilde{u}_{xx}+\psi\widetilde{u}_{x}+(\frac{p^{\prime}_{1}(\rho)}{\rho}-\frac{p^{\prime}_{1}(\widetilde{\rho})}{\widetilde{\rho}})\widetilde{\rho}_{x}-(\frac{n}{\rho}-\frac{\widetilde{n}}{\widetilde{\rho}})(\widetilde{v}-\widetilde{u})],$
(3.4)
$F_{2}=-[-(\frac{1}{n}-\frac{1}{\widetilde{n}})(\widetilde{n}\widetilde{v}_{x})_{x}-\frac{(\bar{\phi}\widetilde{v}_{x})_{x}}{n}+\bar{\psi}\widetilde{v}_{x}+(\frac{p^{\prime}_{2}(n)}{n}-\frac{p^{\prime}_{2}(\widetilde{n})}{\widetilde{n}})\widetilde{n}_{x}].~{}~{}~{}~{}~{}~{}~{}$
(3.5)
The initial and boundary conditions to the system $(\ref{f1})$ satisfy
$(\phi,\psi,\bar{\phi},\bar{\psi})(0,x):=(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})=(\rho_{0}-\widetilde{\rho},u_{0}-\widetilde{u},n_{0}-\widetilde{n},v_{0}-\widetilde{v}),$
(3.6)
$\lim_{x\to\infty}(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})=(0,0,0,0),\quad(\psi,\bar{\psi})(t,0)=(0,0).$
(3.7)
###### Proposition 3.1.
Assume that the assumptions of Theorem 1.2 hold. Let
$(\phi,\psi,\bar{\phi},\bar{\psi})$ be the solution to the problem
$(\ref{f1})$-$(\ref{boundary d1})$ satisfying
$(\phi,\psi,\bar{\phi},\bar{\psi})\in Y(0,T)$ for any time $T>0$. Then there
exist positive constants $\varepsilon>0$ and $C>0$ independent of $T$ such
that if
$\sup_{0\leq t\leq
T}\|(\phi,\psi,\bar{\phi},\bar{\psi})(t)\|_{1}+\delta\leq\varepsilon$ (3.8)
is satisfied, then it holds for arbitrary $t\in[0,T]$ that
$\begin{split}&\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{1}+\int_{0}^{t}\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}d\tau+\int_{0}^{t}\|(\bar{\psi}-\psi,\psi_{xx},\bar{\psi}_{xx})\|^{2}d\tau\leq
C\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|_{1}^{2}.\end{split}$
(3.9)
With the help of $(\ref{priori e})$, it is easy to verify the following
Sobolev inequality
$~{}~{}~{}~{}~{}\|(\phi,\psi,\bar{\phi},\bar{\psi})(t)\|_{L^{\infty}}\leq\|(\phi,\psi,\bar{\phi},\bar{\psi})(t)\|_{H^{1}}\leq\sqrt{2}\varepsilon.$
(3.10)
###### Lemma 3.2 ([16]).
For any function $\psi(t,\cdot)\in H^{1}(\mathbb{R}_{+})$, it holds
$\displaystyle\int_{0}^{\infty}\delta e^{-c_{0}x}|\psi|^{2}dx\leq$
$\displaystyle C\delta(|\psi(t,0)|^{2}+\|\psi_{x}(t)\|^{2}),$ (3.11)
$\displaystyle\int_{0}^{\infty}\frac{\delta^{j}}{(1+\delta
x)^{j}}|\psi|^{2}dx\leq$ $\displaystyle
C\delta^{j-2}(|\psi(t,0)|^{2}+\|\psi_{x}(t)\|^{2}),\quad~{}j>2,$ (3.12)
where $\delta>0$, $c_{0}>0$ and $C>0$ are positive constants.
With Lemma 3.2, we can obtain the basic $L^{2}$ energy estimates of
$(\phi,\psi,\bar{\phi},\bar{\psi})$.
###### Lemma 3.3.
Under the same conditions as in Proposition 3.1, the solution
$(\phi,\psi,\bar{\phi},\bar{\psi})$ to the problem
$(\ref{f1})$-$(\ref{boundary d1})$ satisfies for $t\in[0,T]$ that
$\displaystyle\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}+\int_{0}^{t}\|(\psi_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}d\tau+\int_{0}^{t}|\phi(t,0)|^{2}+|\bar{\phi}(t,0)|^{2}d\tau$
(3.13) $\displaystyle\leq$ $\displaystyle
C\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}+C(\delta+\varepsilon)\int_{0}^{t}\|(\phi_{x},\bar{\phi}_{x})\|^{2}d\tau.$
###### Proof.
Define
$\Phi_{1}(\rho,\widetilde{\rho})=\int_{\widetilde{\rho}}^{\rho}\frac{p_{1}(s)-p_{1}(\widetilde{\rho})}{s^{2}}ds,\quad\mathcal{E}_{1}=\rho(\frac{\psi^{2}}{2}+\Phi_{1}),$
(3.14)
$\Phi_{2}(n,\widetilde{n})=\int_{\widetilde{n}}^{n}\frac{p_{2}(s)-p_{2}(\widetilde{n})}{s^{2}}ds,\quad\mathcal{E}_{2}=n(\frac{\bar{\psi}^{2}}{2}+\Phi_{2}).$
(3.15)
Then, by $(\ref{f})$ and $(\ref{stationary f})$, direct computations lead
to
$\begin{split}&\quad(\mathcal{E}_{1}+\mathcal{E}_{2})_{t}+(G_{1}+G_{2})_{x}+n(\bar{\psi}-\psi)^{2}+\mu\psi_{x}^{2}+n\bar{\psi}^{2}_{x}+R_{1}+R_{2}=-R_{3},\end{split}$
(3.16)
where
$\left\\{\begin{aligned}
&G_{1}:=u\mathcal{E}_{1}+v\mathcal{E}_{2}+(p_{1}(\rho)-p_{1}(\widetilde{\rho}))\psi+(p_{2}(n)-p_{2}(\widetilde{n}))\bar{\psi},\\\
&G_{2}:=-(\mu\psi\psi_{x}+n\bar{\psi}\bar{\psi}_{x}+\bar{\phi}\bar{\psi}\widetilde{v}_{x}),\\\
&R_{1}:=[\rho\psi^{2}+p_{1}(\rho)-p_{1}(\widetilde{\rho})-p^{\prime}_{1}(\widetilde{\rho})\phi]\widetilde{u}_{x}+[n\bar{\psi}^{2}+p_{2}(n)-p_{2}(\widetilde{n})-p^{\prime}_{2}(\widetilde{n})\bar{\phi}]\widetilde{v}_{x},\\\
&R_{2}:=\phi\psi\frac{\widetilde{\rho}\widetilde{u}\widetilde{u}_{x}+(p_{1}(\widetilde{\rho}))_{x}}{\widetilde{\rho}}+\bar{\phi}\bar{\psi}\frac{\widetilde{n}\widetilde{v}\widetilde{v}_{x}+(p_{2}(\widetilde{n}))_{x}}{\widetilde{n}},\\\
&R_{3}:=\bar{\phi}(\bar{\psi}-\psi)(\widetilde{v}-\widetilde{u})+\bar{\phi}\bar{\psi}_{x}\widetilde{v}_{x}.\end{aligned}\right.$
(3.17)
Integrating $(\ref{f_0})$ in $x$ over $\mathbb{R}_{+}$ leads to
$\frac{d}{dt}\int\mathcal{E}_{1}+\mathcal{E}_{2}dx-G_{1}(t,0)+\int
n(\bar{\psi}-\psi)^{2}+\mu\psi_{x}^{2}+n\bar{\psi}^{2}_{x}dx+\int R_{1}dx+\int
R_{2}dx=-\int R_{3}dx.$ (3.18)
Under the condition $(\ref{boundary d1})$, we get
$-G_{1}(t,0)=-u_{-}[\Phi_{1}(\rho(t,0),\widetilde{\rho}(0))+\Phi_{2}(n(t,0),\widetilde{n}(0))]\geq
c(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)).$ (3.19)
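The lower bound in (3.19) follows from a Taylor expansion of $\Phi_{1}$ and $\Phi_{2}$ at the boundary, where $\psi(t,0)=\bar{\psi}(t,0)=0$ and $u(t,0)=u_{-}<0$; for instance,

$\Phi_{1}(\rho,\widetilde{\rho})=\int_{\widetilde{\rho}}^{\rho}\frac{p_{1}(s)-p_{1}(\widetilde{\rho})}{s^{2}}ds=\frac{p^{\prime}_{1}(\widetilde{\rho})}{2\widetilde{\rho}^{2}}\phi^{2}+O(|\phi|^{3}),$

so $\Phi_{1}(\rho(t,0),\widetilde{\rho}(0))\geq c\phi^{2}(t,0)$ for $|\phi(t,0)|$ small, and similarly $\Phi_{2}(n(t,0),\widetilde{n}(0))\geq c\bar{\phi}^{2}(t,0)$; the positive factor $-u_{-}$ then yields (3.19).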
For the supersonic or subsonic case $M_{+}\neq 1$, with the help of
$(\ref{M_{+}>1 stationary solution d})$, $(\ref{infty 1})$ and $(\ref{d'1})$,
we have
$\int_{0}^{\infty}|R_{1}|+|R_{2}|+|R_{3}|dx\leq
C\delta\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}+C\delta(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)).$
(3.20)
For the sonic case $M_{+}=1$, under the restriction
$|p^{\prime}_{1}({\rho_{+}})-p^{\prime}_{2}({n_{+}})|\leq\sqrt{2}|u_{+}|\min\\{(1+\frac{\rho_{+}}{n_{+}})[(\gamma-1)p^{\prime}_{1}(\rho_{+})]^{\frac{1}{2}},(1+\frac{n_{+}}{\rho_{+}})[(\alpha-1)p^{\prime}_{2}(n_{+})]^{\frac{1}{2}}\\}$,
using $(\ref{sigma})$-$(\ref{sig})$, $(\ref{infty 1})$, and $(\ref{d'2})$, we
get
$\displaystyle\int R_{1}+R_{2}+R_{3}dx$ (3.21) $\displaystyle\geq$
$\displaystyle\int(\psi,\phi)\boldsymbol{M_{1}}(\psi,\phi)^{\rm
T}\widetilde{u}_{x}+(\bar{\psi},\bar{\phi})\boldsymbol{M_{2}}(\bar{\psi},\bar{\phi})^{\rm
T}\widetilde{v}_{x}dx-C\int\frac{\delta^{3}}{(1+\delta
x)^{3}}(\phi^{2}+\psi^{2}+\bar{\phi}^{2}+\bar{\psi}^{2})dx$
$\displaystyle-C\delta\int|\bar{\psi}-\psi|^{2}+\psi_{x}^{2}dx-C\delta^{\frac{1}{2}}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|[\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)+\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\phi}_{x})\|^{2}]$
$\displaystyle\geq$
$\displaystyle-C(\delta+\varepsilon)\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}-C\delta(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)),$
where $\boldsymbol{M_{1}}$, $\boldsymbol{M_{2}}$ are positive-definite or
positive-semidefinite matrices defined by
$\begin{split}\boldsymbol{M_{1}}=\begin{pmatrix}\rho_{+}&\frac{u_{+}^{2}-A_{1}\gamma\rho_{+}^{\gamma-1}}{2u_{+}}\\\
\frac{u_{+}^{2}-A_{1}\gamma\rho_{+}^{\gamma-1}}{2u_{+}}&\frac{A_{1}\gamma(\gamma-1)\rho_{+}^{\gamma-2}}{2}\end{pmatrix},\quad\boldsymbol{M_{2}}=\begin{pmatrix}n_{+}&\frac{u_{+}^{2}-A_{2}\alpha
n_{+}^{\alpha-1}}{2u_{+}}\\\ \frac{u_{+}^{2}-A_{2}\alpha
n_{+}^{\alpha-1}}{2u_{+}}&\frac{A_{2}\alpha(\alpha-1)n_{+}^{\alpha-2}}{2}\end{pmatrix}.\end{split}$
(3.22)
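The restriction on $|p^{\prime}_{1}({\rho_{+}})-p^{\prime}_{2}({n_{+}})|$ stated before (3.21) is precisely what makes $\boldsymbol{M_{1}}$ and $\boldsymbol{M_{2}}$ semidefinite; we sketch the computation for $\boldsymbol{M_{1}}$. At the sonic point $M_{+}=1$ one has $(\rho_{+}+n_{+})u^{2}_{+}=\rho_{+}p^{\prime}_{1}(\rho_{+})+n_{+}p^{\prime}_{2}(n_{+})$, hence

$u_{+}^{2}-p^{\prime}_{1}(\rho_{+})=\frac{n_{+}(p^{\prime}_{2}(n_{+})-p^{\prime}_{1}(\rho_{+}))}{\rho_{+}+n_{+}},\quad\det\boldsymbol{M_{1}}\geq 0\iff(u_{+}^{2}-p^{\prime}_{1}(\rho_{+}))^{2}\leq 2u_{+}^{2}(\gamma-1)p^{\prime}_{1}(\rho_{+}),$

which rearranges to $|p^{\prime}_{1}(\rho_{+})-p^{\prime}_{2}(n_{+})|\leq\sqrt{2}|u_{+}|(1+\frac{\rho_{+}}{n_{+}})[(\gamma-1)p^{\prime}_{1}(\rho_{+})]^{\frac{1}{2}}$; the analogous computation for $\boldsymbol{M_{2}}$ gives the other term in the minimum.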
Finally, with the help of $(\ref{f_})$-$(\ref{r_1'})$, we get $(\ref{e0})$.
Hence, the proof of Lemma 3.3 is completed. ∎
In order to complete the proof of Proposition 3.1, we need to establish
higher-order estimates of $(\phi,\psi,\bar{\phi},\bar{\psi})$.
###### Lemma 3.4.
Under the same conditions as in Proposition 3.1, the solution
$(\phi,\psi,\bar{\phi},\bar{\psi})$ to the problem
$(\ref{f1})$-$(\ref{boundary d1})$ satisfies for $t\in[0,T]$ that
$\begin{split}&\quad\|(\phi_{x},\bar{\phi}_{x})\|^{2}+\int_{0}^{t}\|(\phi_{x},\bar{\phi}_{x})\|^{2}d\tau+\int_{0}^{t}\phi_{x}^{2}(t,0)+\bar{\phi}_{x}^{2}(t,0)d\tau\\\
&\leq
C\|(\phi_{0},\psi_{0},\phi_{0x},\bar{\phi}_{0},\bar{\psi}_{0},\bar{\phi}_{0x})\|^{2}+C(\varepsilon+\delta)\int_{0}^{t}\|(\psi_{xx},\bar{\psi}_{xx})\|^{2}d\tau.\end{split}$
(3.23)
###### Proof.
Differentiating $(\ref{f1})_{1}$ in $x$, then multiplying the resulting
equation by $\mu\phi_{x}$ and $(\ref{f1})_{2}$ by $\widetilde{\rho}^{2}\phi_{x}$,
respectively, we obtain
$\displaystyle(\mu\frac{\phi_{x}^{2}}{2})_{t}+(\mu
u\frac{\phi_{x}^{2}}{2})_{x}+\mu\widetilde{\rho}\phi_{x}\psi_{xx}$
$\displaystyle=$
$\displaystyle-\mu[\frac{3}{2}\psi_{x}\phi_{x}^{2}+\phi\phi_{x}\psi_{xx}+(\frac{1}{2}\phi_{x}\widetilde{u}_{x}+\psi_{x}\widetilde{\rho}_{x})\phi_{x}+(\phi\widetilde{u}_{x}+\psi\widetilde{\rho}_{x})_{x}\phi_{x}],$
(3.24)
$\displaystyle(\widetilde{\rho}^{2}\phi_{x}\psi)_{t}-(\widetilde{\rho}^{2}\phi_{t}\psi)_{x}+\widetilde{\rho}^{2}\frac{p^{\prime}_{1}(\rho)}{\rho}\phi_{x}^{2}-\mu\widetilde{\rho}\phi_{x}\psi_{xx}$
$\displaystyle=$
$\displaystyle-(\widetilde{\rho}^{2}\phi_{t}\psi_{x}+\widetilde{\rho}^{2}u\phi_{x}\psi_{x}+2\widetilde{\rho}\widetilde{\rho}_{x}\phi_{t}\psi)+\mu\widetilde{\rho}^{2}(\frac{1}{\rho}-\frac{1}{\widetilde{\rho}})\phi_{x}\psi_{xx}+\widetilde{\rho}^{2}\frac{n}{\rho}(\bar{\psi}-\psi)\phi_{x}+F_{1}\widetilde{\rho}^{2}\phi_{x}.$
(3.25)
Similarly, differentiating $(\ref{f1})_{3}$ in $x$, then multiplying the
resulting equation by $\bar{\phi}_{x}$ and $(\ref{f1})_{4}$ by
$\widetilde{n}\bar{\phi}_{x}$, respectively, lead to
$\displaystyle(\frac{\bar{\phi}_{x}^{2}}{2})_{t}+(v\frac{\bar{\phi}_{x}^{2}}{2})_{x}+\widetilde{n}\bar{\phi}_{x}\bar{\psi}_{xx}=-[\frac{3}{2}\bar{\psi}_{x}\bar{\phi}_{x}^{2}+\bar{\phi}\bar{\phi}_{x}\bar{\psi}_{xx}+(\frac{1}{2}\bar{\phi}_{x}\widetilde{v}_{x}+\bar{\psi}_{x}\widetilde{n}_{x})\bar{\phi}_{x}-(\bar{\phi}\widetilde{v}_{x}+\bar{\psi}\widetilde{n}_{x})_{x}\bar{\phi}_{x}],$
(3.26)
$\displaystyle(\widetilde{n}\bar{\phi}_{x}\bar{\psi})_{t}-(\widetilde{n}\bar{\phi}_{t}\bar{\psi})_{x}+\widetilde{n}\frac{p^{\prime}_{2}(n)}{n}\bar{\phi}_{x}^{2}-\widetilde{n}\bar{\phi}_{x}\bar{\psi}_{xx}$
$\displaystyle=$
$\displaystyle-(\widetilde{n}\bar{\phi}_{t}\bar{\psi}_{x}+\widetilde{n}v\bar{\phi}_{x}\bar{\psi}_{x}+\widetilde{n}_{x}\bar{\phi}_{t}\bar{\psi})+[\widetilde{n}\frac{(\bar{\phi}\bar{\psi}_{x})_{x}}{n}+\widetilde{n}(\frac{1}{n}-\frac{1}{\widetilde{n}})(\widetilde{n}\bar{\psi}_{x})_{x}-\widetilde{n}(\bar{\psi}-\psi)]\bar{\phi}_{x}$
(3.27)
$\displaystyle+\widetilde{n}_{x}\bar{\phi}_{x}\bar{\psi}_{xx}+F_{2}\widetilde{n}\bar{\phi}_{x}.$
Adding $(\ref{h_{x}})$-$(\ref{bs_{x}})$ together and integrating the resulting
equation over $\mathbb{R}_{+}$, we have
$\displaystyle\frac{d}{dt}\int(\mu\frac{\phi_{x}^{2}}{2}+\frac{\bar{\phi}_{x}^{2}}{2}+\widetilde{\rho}^{2}\phi_{x}\psi+\widetilde{n}\bar{\phi}_{x}\bar{\psi})dx+\int(\mu
u\frac{\phi_{x}^{2}}{2}+v\frac{\bar{\phi}_{x}^{2}}{2}-\widetilde{\rho}^{2}\phi_{t}\psi-\widetilde{n}\bar{\phi}_{t}\bar{\psi})_{x}dx$
(3.28)
$\displaystyle+\int(\widetilde{\rho}^{2}\frac{p^{\prime}_{1}(\rho)}{\rho}\phi_{x}^{2}+\widetilde{n}\frac{p^{\prime}_{2}(n)}{n}\bar{\phi}_{x}^{2})dx$
$\displaystyle=$ $\displaystyle\sum_{i=1}^{4}J_{i},$
where
$\displaystyle J_{1}=$
$\displaystyle-\int[\widetilde{\rho}^{2}(\phi_{t}+u\phi_{x})\psi_{x}+\widetilde{n}(\bar{\phi}_{t}+v\bar{\phi}_{x})\bar{\psi}_{x}+2\widetilde{\rho}\widetilde{\rho}_{x}\phi_{t}\psi+\widetilde{n}_{x}\bar{\phi}_{t}\bar{\psi}]dx,$
$\displaystyle J_{2}=$
$\displaystyle\int-\mu\phi\phi_{x}\psi_{xx}-\bar{\phi}\bar{\phi}_{x}\bar{\psi}_{xx}+\mu\widetilde{\rho}^{2}(\frac{1}{\rho}-\frac{1}{\widetilde{\rho}})\phi_{x}\psi_{xx}+\widetilde{n}(\frac{1}{n}-\frac{1}{\widetilde{n}})(\widetilde{n}\bar{\psi}_{x})_{x}\bar{\phi}_{x}+\widetilde{\rho}^{2}\frac{n}{\rho}(\bar{\psi}-\psi)\phi_{x}-\widetilde{n}(\bar{\psi}-\psi)\bar{\phi}_{x}dx,$
$\displaystyle J_{3}=$
$\displaystyle-\int\frac{3}{2}\mu\psi_{x}\frac{\phi_{x}^{2}}{2}+\frac{3}{2}\bar{\psi}_{x}\bar{\phi}_{x}^{2}dx+\int\widetilde{n}\frac{(\bar{\phi}\bar{\psi}_{x})_{x}}{n}\bar{\phi}_{x}dx,\quad$
$\displaystyle J_{4}=$
$\displaystyle-\int[\mu(\frac{1}{2}\phi_{x}\widetilde{u}_{x}+\psi_{x}\widetilde{\rho}_{x})\phi_{x}+\mu(\phi\widetilde{u}_{x}+\psi\widetilde{\rho}_{x})_{x}\phi_{x}-F_{1}\widetilde{\rho}^{2}\phi_{x}$
$\displaystyle+(\frac{1}{2}\bar{\phi}_{x}\widetilde{v}_{x}+\bar{\psi}_{x}\widetilde{n}_{x})\bar{\phi}_{x}-(\bar{\phi}\widetilde{v}_{x}+\bar{\psi}\widetilde{n}_{x})_{x}\bar{\phi}_{x}-\widetilde{n}_{x}\bar{\phi}_{x}\bar{\psi}_{xx}-F_{2}\widetilde{n}\bar{\phi}_{x}]dx.$
First, we estimate the terms on the left-hand side of $(\ref{1f})$. Under the
condition $(\ref{boundary d1})$, they are estimated as follows:
$\int(\mu
u\frac{\phi_{x}^{2}}{2}+v\frac{\bar{\phi}_{x}^{2}}{2}-\widetilde{\rho}^{2}\phi_{t}\psi-\widetilde{n}\bar{\phi}_{t}\bar{\psi})_{x}dx=-u_{-}\frac{\mu\phi_{x}^{2}(t,0)+\bar{\phi}_{x}^{2}(t,0)}{2}\geq
0,$ (3.29)
$\int\widetilde{\rho}^{2}\frac{p^{\prime}_{1}(\rho)}{\rho}\phi_{x}^{2}+\widetilde{n}\frac{p^{\prime}_{2}(n)}{n}\bar{\phi}_{x}^{2}dx\geq\rho_{+}p^{\prime}_{1}(\rho_{+})\|\phi_{x}\|^{2}+p^{\prime}_{2}(n_{+})\|\bar{\phi}_{x}\|^{2}-C(\varepsilon+\delta)\|(\phi_{x},\bar{\phi}_{x})\|^{2}.$
(3.30)
We turn to the terms on the right-hand side of $(\ref{1f})$. By
$(\ref{M_{+}>1 stationary solution d})$-$(\ref{sigma})$, $(\ref{f1})_{1}$,
$(\ref{f1})_{3}$, $(\ref{d'1})$-$(\ref{d'2})$, the Cauchy-Schwarz inequality and
Young's inequality with $0<\eta<1$, we obtain
$\displaystyle|J_{1}|\leq$ $\displaystyle
C\|(\psi_{x},\bar{\psi}_{x})\|^{2}+C\delta\|(\phi_{x},\bar{\phi}_{x})\|^{2}+C\delta(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)),$
(3.31) $\displaystyle|J_{2}|\leq$ $\displaystyle
C\|(\phi,\bar{\phi})\|_{L^{\infty}}\|(\phi_{x},\bar{\phi}_{x},\psi_{xx},\bar{\psi}_{xx})\|^{2}+C_{\eta}\|\bar{\psi}-\psi\|^{2}+\eta\|(\phi_{x},\bar{\phi}_{x})\|^{2}+C\delta\|(\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}$
$\displaystyle\leq$ $\displaystyle
C(\varepsilon+\delta+\eta)\|(\phi_{x},\bar{\phi}_{x})\|^{2}+C\varepsilon\|(\psi_{xx},\bar{\psi}_{xx})\|^{2}+C_{\eta}\|\bar{\psi}-\psi\|^{2}+C\delta\|\bar{\psi}_{x}\|^{2},$
(3.32) $\displaystyle|J_{3}|\leq$ $\displaystyle
C\|(\psi_{x},\bar{\psi}_{x})\|_{L^{\infty}}\|(\phi_{x},\bar{\phi}_{x})\|^{2}+C\|\bar{\phi}\|_{L^{\infty}}\|(\bar{\phi}_{x},\bar{\psi}_{xx})\|^{2}$
$\displaystyle\leq$ $\displaystyle
C\varepsilon\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}+C\varepsilon\|(\psi_{xx},\bar{\psi}_{xx})\|^{2},$
(3.33) $\displaystyle|J_{4}|\leq$ $\displaystyle
C\delta\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}+C\delta\|\bar{\psi}_{xx}\|^{2}+C\delta(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)).$
(3.34)
Finally, the substitution of $(\ref{h(0)})$-$(\ref{I_{5}})$ into $(\ref{1f})$
for $\delta$, $\varepsilon$ and $\eta$ small enough leads to
$\displaystyle\frac{d}{dt}\int(\phi_{x}^{2}+\bar{\phi}_{x}^{2}+\widetilde{\rho}^{2}\phi_{x}\psi+\widetilde{n}\bar{\phi}_{x}\bar{\psi})dx+\|(\phi_{x},\bar{\phi}_{x})\|^{2}+\phi_{x}^{2}(t,0)+\bar{\phi}_{x}^{2}(t,0)$
(3.35) $\displaystyle\leq$ $\displaystyle
C\|(\psi_{x},\bar{\psi}_{x},\bar{\psi}_{x}-\psi_{x})\|^{2}+C(\delta+\varepsilon)\|(\psi_{xx},\bar{\psi}_{xx})\|^{2}+C\delta(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)).$
Integrating $(\ref{e1})$ in $\tau$ over $[0,t]$, and using Lemma 3.3 and Young's
inequality, we obtain $(\ref{1'-order time e1})$. ∎
###### Lemma 3.5.
Under the same conditions as in Proposition 3.1, the solution
$(\phi,\psi,\bar{\phi},\bar{\psi})$ to the problem
$(\ref{f1})$-$(\ref{boundary d1})$ satisfies for $t\in[0,T]$ that
$\begin{split}&~{}~{}~{}~{}\|(\psi_{x},\bar{\psi}_{x})\|^{2}+\int_{0}^{t}\|(\psi_{xx},\bar{\psi}_{xx})\|^{2}d\tau\leq
C\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{1}.\end{split}$
(3.36)
###### Proof.
Multiplying $(\ref{f1})_{2}$ by $-\psi_{xx}$ and $(\ref{f1})_{4}$ by
$-\bar{\psi}_{xx}$, respectively, adding them together and integrating
the resulting equation in $x$ over $\mathbb{R}_{+}$, we obtain
$\begin{split}&\frac{d}{dt}\int\frac{\psi^{2}_{x}}{2}+\frac{\bar{\psi}_{x}^{2}}{2}dx+\int\mu\frac{1}{\rho}\psi^{2}_{xx}+\bar{\psi}_{xx}^{2}dx=\sum^{3}_{i=1}K_{i},\end{split}$
(3.37)
where
$\displaystyle K_{1}=$
$\displaystyle\int[u\psi_{x}\psi_{xx}-\frac{n}{\rho}(\bar{\psi}-\psi)\psi_{xx}+\frac{p^{\prime}_{1}(\rho)}{\rho}\phi_{x}\psi_{xx}+v\bar{\psi}_{x}\bar{\psi}_{xx}+(\bar{\psi}-\psi)\bar{\psi}_{xx}+\frac{p^{\prime}_{2}(n)}{n}\bar{\phi}_{x}\bar{\psi}_{xx}]dx,$
$\displaystyle K_{2}=$
$\displaystyle\int[\widetilde{u}_{x}\psi\psi_{xx}-\mu\widetilde{u}_{xx}(\frac{1}{\rho}-\frac{1}{\widetilde{\rho}})\psi_{xx}+(\frac{p^{\prime}_{1}(\rho)}{\rho}-\frac{p^{\prime}_{1}(\widetilde{\rho})}{\widetilde{\rho}})\psi_{xx}\widetilde{\rho}_{x}-(\frac{n}{\rho}-\frac{\widetilde{n}}{\widetilde{\rho}})(\widetilde{v}-\widetilde{u})\psi_{xx}+\widetilde{v}_{x}\bar{\psi}\bar{\psi}_{xx}$
$\displaystyle+(\widetilde{n}\widetilde{v}_{x})_{x}(\frac{1}{n}-\frac{1}{\widetilde{n}})\bar{\psi}_{xx}+(\frac{p^{\prime}_{2}(n)}{n}-\frac{p^{\prime}_{2}(\widetilde{n})}{\widetilde{n}})\bar{\psi}_{xx}\widetilde{n}_{x}-\frac{\widetilde{n}_{x}}{\widetilde{n}}\bar{\psi}_{x}\bar{\psi}_{xx}-\frac{(\bar{\phi}\widetilde{v}_{x})_{x}}{n}\bar{\psi}_{xx}]dx,$
$\displaystyle K_{3}=$
$\displaystyle-\int[\frac{(\bar{\phi}\bar{\psi}_{x})_{x}}{n}\bar{\psi}_{xx}+(\frac{1}{n}-\frac{1}{\widetilde{n}})(\widetilde{n}\bar{\psi}_{x})_{x}\bar{\psi}_{xx}]dx.$
We estimate the terms on the left-hand side of $(\ref{psi_{xx} space w1})$. By the
decomposition
$\frac{1}{\rho}=(\frac{1}{\rho}-\frac{1}{\widetilde{\rho}})+(\frac{1}{\widetilde{\rho}}-\frac{1}{\rho_{+}})+\frac{1}{\rho_{+}}$,
the second term is estimated as follows:
$\displaystyle\int\frac{\mu}{\rho}\psi_{xx}^{2}+\bar{\psi}_{xx}^{2}dx\geq$
$\displaystyle\frac{\mu}{\rho_{+}}\|\psi_{xx}\|^{2}+\|\bar{\psi}_{xx}\|^{2}-C(\|\phi\|_{L^{\infty}}+\delta)\|\psi_{xx}\|^{2}$
(3.38) $\displaystyle\geq$
$\displaystyle\frac{\mu}{\rho_{+}}\|\psi_{xx}\|^{2}+\|\bar{\psi}_{xx}\|^{2}-C(\varepsilon+\delta)\|\psi_{xx}\|^{2}.$
We turn to estimate the terms on the right-hand side of $(\ref{psi_{xx} space w1})$.
With the aid of $(\ref{M_{+}>1 stationary solution d})$, the Sobolev inequality
and the Cauchy-Schwarz inequality, we have
$\displaystyle|K_{1}|\leq$
$\displaystyle\frac{\mu}{16\rho_{+}}\|\psi_{xx}\|^{2}+\frac{1}{16}\|\bar{\psi}_{xx}\|^{2}+C\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2},$
(3.39) $\displaystyle|K_{2}|\leq$ $\displaystyle
C\delta\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}+C\delta\|(\psi_{xx},\bar{\psi}_{xx})\|^{2}+C\delta(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)),$
(3.40) $\displaystyle|K_{3}|\leq$ $\displaystyle
C\|\bar{\phi}\|_{L^{\infty}}\|(\bar{\psi}_{x},\bar{\psi}_{xx})\|^{2}+C\|\bar{\psi}_{x}\|_{L^{\infty}}\|\bar{\psi}_{xx}\|~{}\|\bar{\phi}_{x}\|\leq
C\varepsilon\|(\bar{\psi}_{x},\bar{\psi}_{xx})\|^{2}.$ (3.41)
Finally, taking $\delta$ and $\varepsilon$ small enough and substituting
$(\ref{s_{xx} e})$-$(\ref{K_{3} e})$ into $(\ref{psi_{xx} space w1})$, we
obtain
$\frac{d}{dt}\int\psi_{x}^{2}+\bar{\psi}_{x}^{2}dx+\frac{\mu}{2\rho_{+}}\|\psi_{xx}\|^{2}+\frac{1}{2}\|\bar{\psi}_{xx}\|^{2}\leq
C\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}+C\delta(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)).$
(3.42)
Integrating $(\ref{psi_{xx} e1'})$ in $\tau$ over $[0,t]$, and using Lemmas
3.3-3.4 and the smallness of $\delta$ and $\varepsilon$, we obtain the desired
estimate $(\ref{s 3})$. This completes the proof of Lemma 3.5. ∎
With the help of Lemmas 3.3-3.5, we get $(\ref{e})$ and complete the proof of
Proposition 3.1.
## 4 Time convergence rates
### 4.1 Convergence rate of supersonic steady-state
###### Proposition 4.1.
Assume that the same conditions as in Theorem 1.3 hold for $M_{+}>1$, and let
$(\phi,\psi,\bar{\phi},\bar{\psi})$ be a solution to the IBVP
$(\ref{f1})$-$(\ref{boundary d1})$ satisfying
$(\phi,\psi,\bar{\phi},\bar{\psi})\in C([0,T];H^{1})$ and
${(1+x)^{\frac{\nu}{2}}}(\phi,\psi,\bar{\phi},\bar{\psi})\in C([0,T];L^{2})$
for any time $T>0$. Then for arbitrary $\nu\in[0,\lambda]$, there exist
positive constants $\varepsilon>0$ and $C>0$ independent of $T$ such that if
$\sup_{0\leq t\leq
T}\|(\phi,\psi,\bar{\phi},\bar{\psi})(t)\|_{1}+\delta\leq\varepsilon$ (4.1)
is satisfied, it holds for arbitrary $t\in[0,T]$ that
$\displaystyle(1+t)^{\lambda-\nu+\theta}(\|(\phi,\psi,\bar{\phi},\bar{\psi})\|_{1}+\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu})+\nu\int_{0}^{t}(1+\tau)^{\lambda-\nu+\theta}\|(\phi,\bar{\phi},\psi,\bar{\psi})\|^{2}_{a,\nu-1}d\tau$
(4.2)
$\displaystyle+\int_{0}^{t}(1+\tau)^{\lambda-\nu+\theta}\|(\psi_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}_{a,\nu}d\tau+\int_{0}^{t}(1+\tau)^{\lambda-\nu+\theta}\|(\phi_{x},\psi_{xx},\bar{\phi}_{x},\bar{\psi}_{xx})\|^{2}d\tau$
$\displaystyle\leq$ $\displaystyle
C(1+t)^{\theta}(\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{1}+\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{a,\lambda}),$
with $\theta>0$.
Our first goal is to obtain the basic weighted energy estimates of
$(\phi,\psi,\bar{\phi},\bar{\psi})$.
###### Lemma 4.2.
Under the same conditions as in Proposition 4.1, the solution
$(\phi,\psi,\bar{\phi},\bar{\psi})$ to the IBVP $(\ref{f1})$-$(\ref{boundary
d1})$ satisfies for $t\in[0,T]$ that
$\displaystyle(1+t)^{\xi}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu}+\nu\int_{0}^{t}(1+\tau)^{\xi}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu-1}d\tau$
(4.3)
$\displaystyle+\int_{0}^{t}(1+\tau)^{\xi}\|(\psi_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}_{a,\nu}d\tau+\int_{0}^{t}(1+\tau)^{\xi}(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0))d\tau$
$\displaystyle\leq$ $\displaystyle
C\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{a,\lambda}+C\delta\int_{0}^{t}(1+\tau)^{\xi}\|(\phi_{x},\bar{\phi}_{x})\|^{2}d\tau$
$\displaystyle+C\nu\int_{0}^{t}(1+\tau)^{\xi}\|(\psi_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}_{a,\nu-1}d\tau+C\xi\int_{0}^{t}(1+\tau)^{\xi-1}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu}d\tau,$
with $\xi\geq 0$.
###### Proof.
We multiply $(\ref{f_0})$ by $W_{a,\nu}$, where $W_{a,\nu}:=(1+x)^{\nu}$ is a
space weight function, and integrate the resulting equality over
$\mathbb{R}_{+}$ to obtain
$\displaystyle\frac{d}{dt}\int
W_{a,\nu}(\mathcal{E}_{1}+\mathcal{E}_{2})-(W_{a,\nu}G_{1})(t,0)-\int
W_{a,\nu-1}G_{1}dx-\int W_{a,\nu-1}G_{2}dx$ (4.4) $\displaystyle+\int
W_{a,\nu}[\mu\psi_{x}^{2}+n\bar{\psi}^{2}_{x}+n(\bar{\psi}-\psi)^{2}]dx$
$\displaystyle=$ $\displaystyle-\int W_{a,\nu}(R_{1}+R_{2}+R_{3})dx,$
where $\mathcal{E}_{i}$, $i=1,2$ are defined by $(\ref{mcE 1})$-$(\ref{mcE
2})$, and $G_{j}$ for $j=1,2$, $R_{k}$ for $k=1,2,3$ are defined by
$(\ref{G})$.
First, we estimate the terms on the left-hand side of (4.4). Under the boundary
condition $(\ref{boundary d1})$, the second term on the left-hand side is estimated as
$\begin{split}&~{}~{}~{}-(W_{a,\nu}G_{1})(t,0)=|u_{-}|[\Phi_{1}(\rho(t,0),\widetilde{\rho}(0))+\Phi_{2}(n(t,0),\widetilde{n}(0))]\geq
c(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)).~{}~{}~{}~{}~{}~{}~{}\end{split}$ (4.5)
We decompose $\psi$ as $\psi=\bar{\psi}+(\psi-\bar{\psi})$ and use
$(\ref{M_{+}>1 stationary solution d})$ to obtain
$\displaystyle-\nu\int W_{a,\nu-1}G_{1}dx$ (4.6) $\displaystyle\geq$
$\displaystyle\nu\int
W_{a,\nu-1}[\frac{1}{2}(\phi,\bar{\phi},\bar{\psi})\boldsymbol{M_{3}}(\phi,\bar{\phi},\bar{\psi})^{\rm
T}-\rho_{+}u_{+}\bar{\psi}(\psi-\bar{\psi})-A_{1}\gamma\rho_{+}^{\gamma-1}\phi(\psi-\bar{\psi})]dx$
$\displaystyle-C(\delta+\varepsilon)\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu-1},$
where the symmetric matrix $\boldsymbol{M_{3}}$ is denoted by
$\boldsymbol{M_{3}}=\begin{pmatrix}-A_{1}\gamma\rho_{+}^{\gamma-2}u_{+}&0&-A_{1}\gamma\rho_{+}^{\gamma-1}\\\
0&-A_{2}\alpha n_{+}^{\alpha-2}u_{+}&-A_{2}\alpha n_{+}^{\alpha-1}\\\
-A_{1}\gamma\rho_{+}^{\gamma-1}&-A_{2}\alpha
n_{+}^{\alpha-1}&-(\rho_{+}+n_{+})u_{+}\end{pmatrix}.$ (4.7)
It is easy to verify that $\boldsymbol{M_{3}}$ is a positive definite matrix
for $M_{+}>1$. Hence, for $\varepsilon$, $\delta$ and $\eta$ small enough, the
third term on the left-hand side is estimated as
$\displaystyle-\nu\int W_{a,\nu-1}G_{1}dx$ (4.8) $\displaystyle\geq$
$\displaystyle
c\nu\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu-1}-\eta\nu\|(\phi,\bar{\psi})\|^{2}_{a,\nu-1}-C_{\eta}\nu\|\bar{\psi}-\psi\|^{2}_{a,\nu-1}-C(\varepsilon+\delta)\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu-1}$
$\displaystyle\geq$ $\displaystyle
c\nu\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu-1}-C\nu\|\bar{\psi}-\psi\|^{2}_{a,\nu-1}.$
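As a quick numerical sanity check of the positive definiteness of $\boldsymbol{M_{3}}$, one can evaluate its eigenvalues for a sample supersonic-type choice of constants; the values $A_{1}=A_{2}=1$, $\gamma=\alpha=2$, $\rho_{+}=n_{+}=1$, $u_{+}=-2$ below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Numerical sanity check that M_3 in (4.7) is positive definite.
# Hypothetical parameter values (not from the paper): A1 = A2 = 1,
# gamma = alpha = 2, rho_+ = n_+ = 1, u_+ = -2 (outflow).
A1, A2, gamma, alpha = 1.0, 1.0, 2.0, 2.0
rho_p, n_p, u_p = 1.0, 1.0, -2.0

M3 = np.array([
    [-A1 * gamma * rho_p**(gamma - 2) * u_p, 0.0, -A1 * gamma * rho_p**(gamma - 1)],
    [0.0, -A2 * alpha * n_p**(alpha - 2) * u_p, -A2 * alpha * n_p**(alpha - 1)],
    [-A1 * gamma * rho_p**(gamma - 1), -A2 * alpha * n_p**(alpha - 1),
     -(rho_p + n_p) * u_p],
])

eigvals = np.linalg.eigvalsh(M3)   # symmetric matrix, eigenvalues in ascending order
print(eigvals)                     # all eigenvalues positive for this choice
```

With these values $\boldsymbol{M_{3}}=\begin{pmatrix}4&0&-2\\0&4&-2\\-2&-2&4\end{pmatrix}$, whose eigenvalues $4\pm 2\sqrt{2}$ and $4$ are all positive, consistent with the claim.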
By Young's inequality with $0<\eta<1$, the fourth and fifth terms on the
left-hand side are estimated as
$\begin{split}&-\nu\int
W_{a,\nu-1}G_{2}dx\leq\nu\eta\|(\psi,\bar{\psi})\|^{2}_{a,\nu-1}+\nu
C_{\eta}\|(\psi_{x},\bar{\psi}_{x})\|^{2}_{a,\nu-1}+C\delta\|(\bar{\phi},\bar{\psi})\|^{2}_{a,\nu-1}.\end{split}$
(4.9) $\begin{split}&\quad\int
W_{a,\nu}[\mu\psi_{x}^{2}+n\bar{\psi}^{2}_{x}+n(\psi-\bar{\psi})^{2}]dx\geq
c\|(\psi_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}_{a,\nu}-C(\varepsilon+\delta)\|(\psi_{x},\bar{\psi}_{x},\psi-\bar{\psi})\|^{2}_{a,\nu}.\end{split}$
(4.10)
By $(\ref{M_{+}>1 stationary solution d})$ and $(\ref{d'1})$, it follows from
the Sobolev inequality and the Cauchy-Schwarz inequality that
$\displaystyle|\int W_{a,\nu}(R_{1}+R_{2}+R_{3})dx|\leq$ $\displaystyle
C\delta\int
e^{-\frac{c_{0}}{2}x}(\phi^{2}+\psi^{2}+\bar{\phi}^{2}+\bar{\psi}^{2}+|\bar{\psi}-\psi|^{2})dx$
(4.11) $\displaystyle\leq$ $\displaystyle
C\delta\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}+C\delta(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)).$
Finally, with $\eta$, $\delta$ and $\varepsilon$ suitably small, the
substitution of $(\ref{second term e})$-$(\ref{mathcal{I}_{2} e})$ into
$(\ref{f_01})$ leads to
$\displaystyle\frac{d}{dt}\int
W_{a,\nu}(\mathcal{E}_{1}+\mathcal{E}_{2})dx+c\nu\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu-1}+c\|(\psi_{x},\bar{\psi}_{x},\psi-\bar{\psi})\|^{2}_{a,\nu}+c(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0))$
(4.12) $\displaystyle\leq$ $\displaystyle
C\delta\|(\phi_{x},\bar{\phi}_{x})\|^{2}+C\nu\|(\psi_{x},\bar{\psi}_{x},\psi-\bar{\psi})\|^{2}_{a,\nu-1}.~{}~{}~{}~{}~{}~{}~{}$
Multiplying (4.12) by $(1+\tau)^{\xi}$ and integrating the resulting equation
in $\tau$ over $[0,t]$, we obtain the desired estimate $(\ref{L^{2} time e1})$.
∎
Similarly to Lemmas 3.4-3.5, we obtain the following higher-order weighted
estimates of $(\phi,\psi,\bar{\phi},\bar{\psi})$. The details are omitted.
###### Lemma 4.3.
Under the same conditions as in Proposition 4.1, the solution
$(\phi,\psi,\bar{\phi},\bar{\psi})$ to the IBVP $(\ref{f1})$-$(\ref{boundary
d1})$ satisfies for $t\in[0,T]$ that
$\displaystyle(1+t)^{\xi}\|(\phi_{x},\bar{\phi}_{x})\|^{2}+\int_{0}^{t}(1+\tau)^{\xi}\|(\phi_{x},\bar{\phi}_{x})\|^{2}d\tau$
(4.13) $\displaystyle\leq$ $\displaystyle
C(\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{a,\lambda}+\|(\phi_{0x},\bar{\phi}_{0x})\|^{2})+C\nu\int_{0}^{t}(1+\tau)^{\xi}\|(\psi_{x},\bar{\psi}_{x},\psi-\bar{\psi})\|^{2}_{a,\nu-1}d\tau$
$\displaystyle+C\varepsilon\int_{0}^{t}(1+\tau)^{\xi}\|(\psi_{xx},\bar{\psi}_{xx})\|^{2}_{a,\nu}d\tau+C\xi\int_{0}^{t}(1+\tau)^{\xi-1}(\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu}+\|(\phi_{x},\bar{\phi}_{x})\|^{2})d\tau,$
with $\xi\geq 0$.
###### Lemma 4.4.
Under the same conditions in Proposition 4.1 hold, then the solution
$(\phi,\psi,\bar{\phi},\bar{\psi})$ to the IBVP $(\ref{f1})$-$(\ref{boundary
d1})$ satisfies for $t\in[0,T]$ that
$\displaystyle(1+t)^{\xi}\|(\psi_{x},\bar{\psi}_{x})\|^{2}+\int_{0}^{t}(1+\tau)^{\xi}\|(\psi_{xx},\bar{\psi}_{xx})\|^{2}d\tau$
(4.14) $\displaystyle\leq$ $\displaystyle
C(\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{a,\lambda}+\|(\phi_{0x},\bar{\phi}_{0x},\psi_{0x},\bar{\psi}_{0x})\|^{2})+C\nu\int_{0}^{t}(1+\tau)^{\xi}\|(\psi_{x},\bar{\psi}_{x},\psi-\bar{\psi})\|^{2}_{a,\nu-1}d\tau$
$\displaystyle+C\xi\int_{0}^{t}(1+\tau)^{\xi-1}(\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu}+\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2})d\tau$
with $\xi\geq 0$.
###### Proof of Proposition 4.1.
For $\nu\in[0,\lambda]$ and $\xi\geq 0$, it follows
from Lemmas 4.2-4.4 that
$\displaystyle(1+t)^{\xi}(\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu}+\|(\phi_{x},\psi_{x},\bar{\psi}_{x},\bar{\phi}_{x})\|^{2})+\nu\int_{0}^{t}(1+\tau)^{\xi}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu-1}d\tau$
(4.15)
$\displaystyle+\int_{0}^{t}(1+\tau)^{\xi}\|(\psi_{x},\bar{\psi}_{x},\psi-\bar{\psi})\|^{2}_{a,\nu}d\tau+\int_{0}^{t}(1+\tau)^{\xi}\|(\phi_{x},\psi_{xx},\bar{\phi}_{x},\bar{\psi}_{xx})\|^{2}d\tau$
$\displaystyle\leq$ $\displaystyle
C(\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{a,\lambda}+\|(\phi_{0x},\psi_{0x},\bar{\phi}_{0x},\bar{\psi}_{0x})\|^{2})+C\nu\int_{0}^{t}(1+\tau)^{\xi}\|(\psi_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}_{a,\nu-1}d\tau$
$\displaystyle+C\xi\int_{0}^{t}(1+\tau)^{\xi-1}(\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\nu}+\|(\phi_{x},\psi_{x},\bar{\psi}_{x},\bar{\phi}_{x})\|^{2})d\tau,$
where $C>0$ is a generic positive constant independent of $T,\nu,$ and $\xi$.
Hence, applying induction arguments similar to those in [14, 25, 3] to (4.15),
we obtain the desired estimate (4.2).
Indeed, for any $\lambda>0$ and $k=0,1,2,\ldots,[\lambda]$, we have
$\displaystyle(1+t)^{k}(\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\lambda-k}+\|(\phi_{x},\psi_{x},\bar{\psi}_{x},\bar{\phi}_{x})\|^{2})+\nu\int_{0}^{t}(1+\tau)^{k}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\lambda-k-1}d\tau$
(4.16)
$\displaystyle+\int_{0}^{t}(1+\tau)^{k}\|(\psi_{x},\bar{\psi}_{x},\psi-\bar{\psi})\|^{2}_{a,\lambda-k}d\tau+\int_{0}^{t}(1+\tau)^{k}\|(\phi_{x},\psi_{xx},\bar{\phi}_{x},\bar{\psi}_{xx})\|^{2}d\tau$
$\displaystyle\leq$ $\displaystyle
C(\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{a,\lambda}+\|(\phi_{0x},\psi_{0x},\bar{\phi}_{0x},\bar{\psi}_{0x})\|^{2}),$
and
$\displaystyle(1+t)^{k}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{1}+\int_{0}^{t}(1+\tau)^{k}\|(\phi_{x},\psi_{x},\psi_{xx},\bar{\psi}_{x},\bar{\phi}_{x},\bar{\psi}_{xx},\psi-\bar{\psi})\|^{2}d\tau$
(4.17) $\displaystyle\leq$ $\displaystyle
C(\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{a,\lambda}+\|(\phi_{0x},\psi_{0x},\bar{\phi}_{0x},\bar{\psi}_{0x})\|^{2}).$
To prove $(\ref{e1-2})$ and $(\ref{e1-3})$, we apply induction arguments
similar to those in [14, 25, 3] to (4.15).
Step 1. Taking $\xi=0$, $\nu=\lambda$ in $(\ref{e1-1})$ and using $(\ref{e})$,
we have $(\ref{e1-2})$ and $(\ref{e1-3})$ for $k=0$. Therefore, $(\ref{e1-2})$
and $(\ref{e1-3})$ hold for $0<\lambda<1$.
Step 2. Taking $\xi=1$, $\nu=0$ in $(\ref{e1-1})$ and using $(\ref{e1-2})$
with $k=0$, we have $(\ref{e1-3})$ with $k=1$. Then, taking $\xi=1$,
$\nu=\lambda-1$ in $(\ref{e1-1})$ and using $(\ref{e1-3})$ with $k=1$ and
$(\ref{e1-2})$ with $k=0$, we obtain the desired estimate $(\ref{e1-2})$ with
$k=1$. Therefore, the proof is finished for $1\leq\lambda<2$.
Step 3. We repeat the same procedure as in Step 2. The estimate $(\ref{e1-1})$
(with $\xi=2$, $\nu=0$) together with $(\ref{e1-3})$ (with $k=1$) lead to
$(\ref{e1-3})$ (with $k=2$). Also, $(\ref{e1-1})$ (with $\xi=2$,
$\nu=\lambda-2$) together with $(\ref{e1-3})$ (with $k=2$) and $(\ref{e1-2})$
(with $k=1$) lead to $(\ref{e1-2})$ (with $k=2$), which proves the estimates
$(\ref{e1-2})$ and $(\ref{e1-3})$ for $2\leq\lambda<3$.
Repeating the same procedure, we get the desired estimates $(\ref{e1-2})$ and
$(\ref{e1-3})$ for any $\lambda>0$.
If $\lambda>0$ is an integer, we obtain $(\ref{e__1})$ from $(\ref{e1-2})$ by
letting $k=\lambda$.
If $\lambda>0$ is not an integer, we obtain $(\ref{e__1})$ as follows.
Taking $\nu=0$ in $(\ref{e1-1})$, we have
$\displaystyle(1+t)^{\xi}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{1}+\int_{0}^{t}(1+\tau)^{\xi}\|(\phi_{x},\psi_{x},\psi_{xx},\bar{\phi}_{x},\bar{\psi}_{x},\bar{\psi}_{xx},\psi-\bar{\psi})\|^{2}d\tau$
(4.18) $\displaystyle\leq$ $\displaystyle
C(\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{a,\lambda}+\|(\phi_{0x},\psi_{0x},\bar{\phi}_{0x},\bar{\psi}_{0x})\|^{2})+C\xi\int_{0}^{t}(1+\tau)^{\xi-1}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{1}d\tau.$
Using $(\ref{e1-2})$ with $k=[\lambda]$ and taking $s=1-(\lambda-[\lambda])$,
we have
$\displaystyle\int_{0}^{t}(1+\tau)^{\xi-1}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{1}d\tau$
$\displaystyle\leq$
$\displaystyle\int_{0}^{t}(1+\tau)^{\xi-1-[\lambda]}\\{((1+\tau)^{[\lambda]}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\lambda-[\lambda]})^{s}((1+\tau)^{[\lambda]}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\lambda-[\lambda]-1})^{1-s}$
$\displaystyle+(1+\tau)^{[\lambda]}\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}\\}d\tau$
$\displaystyle\leq$ $\displaystyle
C(\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{a,\lambda}+\|(\phi_{0x},\psi_{0x},\bar{\phi}_{0x},\bar{\psi}_{0x})\|^{2})^{s}\int_{0}^{t}(1+\tau)^{\xi-1-[\lambda]}((1+\tau)^{[\lambda]}\|(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}$
$\displaystyle+(1+\tau)^{[\lambda]}\|(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{a,\lambda-[\lambda]-1})^{1-s}d\tau$
$\displaystyle\leq$ $\displaystyle
C(\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{a,\lambda}+\|(\phi_{0x},\psi_{0x},\bar{\phi}_{0x},\bar{\psi}_{0x})\|^{2})(\int_{0}^{t}(1+\tau)^{\frac{\xi-1-[\lambda]}{1+[\lambda]-\lambda}}d\tau)^{1+[\lambda]-\lambda}$
$\displaystyle\leq$ $\displaystyle
C(\|(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{a,\lambda}+\|(\phi_{0x},\psi_{0x},\bar{\phi}_{0x},\bar{\psi}_{0x})\|^{2})(1+t)^{\theta},$
(4.19)
where we take $\xi=\lambda+\theta(1+[\lambda]-\lambda)$ and $\theta>0$.
### 4.2 Convergence rate of sonic steady-state
The function space $Y_{W}(0,T)$ for $T>0$ is defined by
$\displaystyle Y_{W}(0,T):=\\{~{}(\phi,\psi,\bar{\phi},\bar{\psi})~{}|{}$
$\displaystyle(\phi,\psi,\bar{\phi},\bar{\psi})\in
C([0,T];H_{W}^{1}(\mathbb{R}_{+})),$ (4.20)
$\displaystyle(\phi_{x},\bar{\phi}_{x})\in
L^{2}([0,T];L_{W}^{2}(\mathbb{R}_{+})),~{}(\psi_{x},\bar{\psi}_{x})\in
L^{2}([0,T];H_{W}^{1}(\mathbb{R}_{+}))~{}\\}.$
###### Proposition 4.5.
Assume that $1\leq\lambda<\lambda^{*}$ with
$\lambda^{*}:=2+\sqrt{8+\frac{1}{1+b^{2}}}$,
$b:=\frac{\rho_{+}(u_{+}^{2}-p^{\prime}_{1}(\rho_{+}))}{|u_{+}|\sqrt{(\mu+n_{+})n_{+}}}$,
and that the same conditions in Theorem 1.3 hold for $M_{+}=1$. Let
$(\phi,\psi,\bar{\phi},\bar{\psi})$ be a solution to the IBVP
$(\ref{f1})$-$(\ref{boundary d1})$ satisfying
$(\phi,\psi,\bar{\phi},\bar{\psi})\in Y_{\sigma^{-\lambda}}(0,T)$ for any time
$T>0$. Then for arbitrary $\nu\in(0,\lambda]$, there exist positive constants
$\varepsilon>0$ and $C>0$ independent of $T$ such that if
$\sup_{0\leq t\leq
T}\|\sigma^{-\frac{\lambda}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})(t)\|_{1}+\delta^{\frac{1}{2}}\leq\varepsilon$
(4.21)
is satisfied, it holds for arbitrary $t\in[0,T]$ that
$\displaystyle(1+\delta
t)^{\frac{\lambda-\nu}{2}+\beta}\|\sigma^{-\frac{\nu}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{1}+\int^{t}_{0}(1+\delta\tau)^{\frac{\lambda-\nu}{2}+\beta}\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}d\tau$
(4.22)
$\displaystyle+\int^{t}_{0}(1+\delta\tau)^{\frac{\lambda-\nu}{2}+\beta}\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}d\tau+\int^{t}_{0}(1+\delta\tau)^{\frac{\lambda-\nu}{2}+\beta}\|\sigma^{-\frac{\nu}{2}}(\psi_{xx},\bar{\psi}_{xx},\bar{\psi}-\psi)\|^{2}d\tau$
$\displaystyle\leq$ $\displaystyle C(1+\delta
t)^{\beta}\|\sigma^{-\frac{\lambda}{2}}(\phi_{0},\psi_{0},\phi_{0x},\psi_{0x},\bar{\phi}_{0},\bar{\psi}_{0},\bar{\phi}_{0x},\bar{\psi}_{0x})\|^{2},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
with $\beta>0$.
By the fact that $\lambda\geq 1$ and $(\ref{M_{+}=1 prior a})$, it is easy to
verify the following estimate:
$\|\sigma^{-\frac{1}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|_{L^{\infty}}\leq\|\sigma^{-\frac{\lambda}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|_{1}\leq\sqrt{2}\varepsilon.$
(4.23)
To deal with some nonlinear terms, we use the following inequality as in [27,
15, 36].
###### Lemma 4.6 ([36]).
Let $\nu\geq 1$. Then a function $\sigma^{-\frac{\nu}{2}}(x)\phi(t,x)\in
H^{1}(\mathbb{R}_{+})$ satisfies
$\int\sigma^{-\frac{\nu-1}{2}}|\phi|^{3}dx\leq
C\|\sigma^{-\frac{1}{2}}\phi\|~{}(\sigma(0)\phi^{2}(t,0)+\|\sigma^{-\frac{\nu}{2}}\phi_{x}\|^{2}+\|\sigma^{-\frac{\nu-2}{2}}\phi\|^{2}),$
(4.24)
where the function $\sigma(x)\geq 0$ is defined by $(\ref{sg0})$ with
$\sigma(0)$ small enough.
To gain faster decay rates, it is necessary to use the following Hardy type
inequality.
###### Lemma 4.7 ([13]).
Let $\zeta\in C^{1}[0,\infty)$ satisfy $\zeta>0$, $\zeta_{x}>0$ and
$\zeta(x)\rightarrow\infty$ as $x\rightarrow\infty$. Then we have
$\int_{\mathbb{R}_{+}}\psi^{2}\zeta_{x}dx\leq
4\int_{\mathbb{R}_{+}}\psi^{2}_{x}\frac{\zeta^{2}}{\zeta_{x}}dx$ (4.25)
for $\psi$ satisfying $\psi(t,0)=0$ and $\sqrt{w}\psi\in
H^{1}(\mathbb{R}_{+})$, with the function $w:=\frac{\zeta^{2}}{\zeta_{x}}$.
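The Hardy-type inequality of Lemma 4.7 can be checked numerically for a concrete admissible pair; the choices $\zeta(x)=e^{x}$ and $\psi(x)=xe^{-x}$ below are hypothetical test functions, not from the paper:

```python
import numpy as np

# Numerical sanity check of the Hardy-type inequality (4.25):
#   int psi^2 zeta_x dx <= 4 int psi_x^2 zeta^2/zeta_x dx,  psi(0) = 0.
# Hypothetical test choices (not from the paper): zeta(x)=e^x, psi(x)=x e^{-x}.
def trapezoid(y, x):
    """Composite trapezoidal rule on a given grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

x = np.linspace(0.0, 40.0, 200001)
zeta = np.exp(x)                # zeta > 0, zeta_x > 0, zeta -> infinity
zeta_x = np.exp(x)
psi = x * np.exp(-x)            # psi(0) = 0
psi_x = (1.0 - x) * np.exp(-x)

lhs = trapezoid(psi**2 * zeta_x, x)                    # = int x^2 e^{-x} dx = 2
rhs = 4.0 * trapezoid(psi_x**2 * zeta**2 / zeta_x, x)  # = 4 int (1-x)^2 e^{-x} dx = 4
print(lhs, rhs)   # lhs ≈ 2.0 <= rhs ≈ 4.0
```

Both integrals are elementary here ($\int_0^\infty x^{2}e^{-x}dx=2$ and $4\int_0^\infty(1-x)^{2}e^{-x}dx=4$), so the quadrature confirms the inequality with constant room to spare.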
With the aid of Lemmas 4.6-4.7, we obtain the weighted $L^{2}$ estimate of
$(\phi,\psi,\bar{\phi},\bar{\psi})$.
###### Lemma 4.8.
Under the same conditions as in Proposition 4.5, the solution
$(\phi,\psi,\bar{\phi},\bar{\psi})$ to the problem
$(\ref{f1})$-$(\ref{boundary d1})$ satisfies for $t\in[0,T]$ that
$\displaystyle(1+\delta t)^{\xi}\|\sigma^{-\frac{\nu}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}+\int^{t}_{0}(1+\delta\tau)^{\xi}\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}d\tau$
(4.26)
$\displaystyle+\int^{t}_{0}(1+\delta\tau)^{\xi}\|\sigma^{-\frac{\nu}{2}}(\psi_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}d\tau+\int^{t}_{0}(1+\delta\tau)^{\xi}\frac{1}{\delta^{\nu}}(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0))d\tau$
$\displaystyle\leq$ $\displaystyle
C\|\sigma^{-\frac{\lambda}{2}}(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}+C\delta\int^{t}_{0}(1+\delta\tau)^{\xi}\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2}d\tau$
$\displaystyle+C\delta\xi\int^{t}_{0}(1+\delta\tau)^{\xi-1}\|\sigma^{-\frac{\nu}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}d\tau,~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
with $\xi\geq 0$.
###### Proof.
We multiply $(\ref{f_0})$ by the space weight function $\sigma^{-\nu}$, where
the space weight function $\sigma\geq 0$ satisfies $(\ref{sig})$ and
$(\ref{sg0})$. Then, we integrate the resulted equation over $\mathbb{R}_{+}$
to get
$\displaystyle\frac{d}{dt}\int\sigma^{-\nu}(\mathcal{E}_{1}+\mathcal{E}_{2})dx-(\sigma^{-\nu}G_{1})(t,0)-a\nu\int\sigma^{-(\nu-1)}G_{1}dx-a\nu\int\sigma^{-(\nu-1)}G_{2}dx$
(4.27)
$\displaystyle+\int\sigma^{-\nu}n(\bar{\psi}-\psi)^{2}dx+\int\sigma^{-\nu}(\mu\psi_{x}^{2}+n\bar{\psi}^{2}_{x})dx+\int\sigma^{-\nu}R_{1}dx$
$\displaystyle=$
$\displaystyle-\int\sigma^{-\nu}R_{2}dx-\int\sigma^{-\nu}R_{3}dx,$
where $\mathcal{E}_{i}$, $i=1,2$ are defined by $(\ref{mcE 1})$-$(\ref{mcE
2})$, and $G_{j}$ for $j=1,2$, $R_{k}$ for $k=1,2,3$ are defined by
$(\ref{G})$. First, we estimate terms on the left hand side of $(\ref{f_02})$.
Under the condition $(\ref{boundary d1})$, the second term on the left hand
side is estimated as
$\begin{split}&~{}~{}~{}-(\sigma^{-\nu}G_{1})(t,0)\geq\frac{c}{\delta^{\nu}}(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0))\geq
0.~{}~{}~{}~{}~{}~{}~{}\end{split}$ (4.28)
For the third term on the left hand side, using $(\ref{nonlinear f})$ and
$\psi=\bar{\psi}+(\psi-\bar{\psi})$ yields
$\displaystyle-a\nu\int\sigma^{-(\nu-1)}G_{1}dx$ (4.29) $\displaystyle\geq$
$\displaystyle
a\nu\int\sigma^{-(\nu-1)}[\frac{1}{2}(\phi,\bar{\phi},\bar{\psi})\boldsymbol{M_{4}}(\phi,\bar{\phi},\bar{\psi})^{\rm
T}-\rho_{+}u_{+}\bar{\psi}(\psi-\bar{\psi})-A_{1}\gamma\rho^{\gamma-1}_{+}\phi(\psi-\bar{\psi})]dx$
$\displaystyle+a\nu\int\sigma^{-(\nu-1)}[-(A_{1}\gamma\widetilde{\rho}^{\gamma-2}\widetilde{u}-A_{1}\gamma\widetilde{\rho}_{+}^{\gamma-2}u_{+})\frac{\phi^{2}}{2}-(A_{2}\alpha\widetilde{n}^{\alpha-2}\widetilde{v}-A_{2}\alpha
n_{+}^{\alpha-2}u_{+})\frac{\bar{\phi}^{2}}{2}$
$\displaystyle-(A_{1}\gamma\widetilde{\rho}^{\gamma-1}-A_{1}\gamma\rho_{+}^{\gamma-1})\phi\psi-(A_{2}\alpha\widetilde{n}^{\alpha-1}-A_{2}\alpha
n_{+}^{\alpha-1})\bar{\phi}\bar{\psi}]dx-C\frac{\varepsilon}{\delta^{\nu}}(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0))$
$\displaystyle-C\varepsilon(\|\sigma^{-(\frac{\nu-2}{2})}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}+\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}),$
where the symmetric matrix $\boldsymbol{M_{4}}$ is defined as
$\begin{split}&\boldsymbol{M_{4}}=\begin{pmatrix}-A_{1}\gamma\rho_{+}^{\gamma-2}u_{+}&0&-A_{1}\gamma\rho_{+}^{\gamma-1}\\\
0&-A_{2}\alpha n_{+}^{\alpha-2}u_{+}&-A_{2}\alpha n_{+}^{\alpha-1}\\\
-A_{1}\gamma\rho_{+}^{\gamma-1}&-A_{2}\alpha
n_{+}^{\alpha-1}&-(\rho_{+}+n_{+})u_{+}\end{pmatrix}.\end{split}$ (4.30)
Owing to $M_{+}=1$, it is easy to check that the three eigenvalues of the matrix
$\boldsymbol{M_{4}}$ satisfy $\hat{\lambda}_{1}>0$, $\hat{\lambda}_{2}>0$,
$\hat{\lambda}_{3}=0$. Take the coordinate transformation
$\begin{split}\begin{pmatrix}\phi\\\ \bar{\phi}\\\
\bar{\psi}\end{pmatrix}=\boldsymbol{P}\begin{pmatrix}\hat{\rho}\\\ \hat{n}\\\
\hat{v}\end{pmatrix},\end{split}$ (4.31)
where the matrix $\boldsymbol{P}$ is denoted by
$\displaystyle\boldsymbol{P}=\begin{pmatrix}r_{11}&r_{21}&-\frac{\rho_{+}}{u_{+}}\\\
r_{12}&r_{22}&-\frac{n_{+}}{u_{+}}\\\ r_{13}&r_{23}&1\end{pmatrix}{\rm
with~{}constants~{}}r_{ij}~{}{\rm for}~{}1\leq i\leq 2,~{}1\leq j\leq 3,$
(4.32)
such that
$\begin{split}(\phi,\bar{\phi},\bar{\psi})~{}\boldsymbol{M_{4}}~{}(\phi,\bar{\phi},\bar{\psi})^{\rm
T}=(\hat{\rho},\hat{n},\hat{v})\begin{pmatrix}\hat{\lambda}_{1}&0&0\\\
0&\hat{\lambda}_{2}&0\\\ 0&0&0\end{pmatrix}(\hat{\rho},\hat{n},\hat{v})^{\rm
T}=\hat{\lambda}_{1}\hat{\rho}^{2}+\hat{\lambda}_{2}\hat{n}^{2}.\end{split}$
(4.33)
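A quick numerical check of this sonic degeneracy: for a hypothetical sonic choice of constants ($A_{1}=A_{2}=1$, $\gamma=\alpha=2$, $\rho_{+}=n_{+}=1$, $u_{+}=-\sqrt{2}$, all illustrative assumptions, not from the paper), $\boldsymbol{M_{4}}$ has exactly one zero eigenvalue, and the third column of $\boldsymbol{P}$ in (4.32) spans its kernel, which is why the $\hat{v}$-entry of the diagonal form in (4.33) vanishes:

```python
import numpy as np

# Numerical check of the degeneracy of M_4 at M_+ = 1 and of transformation (4.31).
# Hypothetical sonic parameter values (assumptions, not from the paper):
# A1 = A2 = 1, gamma = alpha = 2, rho_+ = n_+ = 1, u_+ = -sqrt(2).
A1, A2, gamma, alpha = 1.0, 1.0, 2.0, 2.0
rho_p, n_p, u_p = 1.0, 1.0, -np.sqrt(2.0)

M4 = np.array([
    [-A1 * gamma * rho_p**(gamma - 2) * u_p, 0.0, -A1 * gamma * rho_p**(gamma - 1)],
    [0.0, -A2 * alpha * n_p**(alpha - 2) * u_p, -A2 * alpha * n_p**(alpha - 1)],
    [-A1 * gamma * rho_p**(gamma - 1), -A2 * alpha * n_p**(alpha - 1),
     -(rho_p + n_p) * u_p],
])

eigvals = np.sort(np.linalg.eigvalsh(M4))
print(eigvals)    # one zero eigenvalue, two positive eigenvalues

# Third column of P in (4.32): (-rho_+/u_+, -n_+/u_+, 1); it should lie in ker(M_4).
p3 = np.array([-rho_p / u_p, -n_p / u_p, 1.0])
print(M4 @ p3)    # ≈ (0, 0, 0)
```

Here the eigenvalues are $0$, $2\sqrt{2}$ and $4\sqrt{2}$, matching $\hat{\lambda}_{3}=0$, $\hat{\lambda}_{1},\hat{\lambda}_{2}>0$.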
By $(\ref{sigma 1})$, $(\ref{infty e2})$, $(\ref{bound})$, and $(\ref{bar rho
u})$-$(\ref{P 1})$, the third term is estimated as
$\displaystyle-a\nu\int\sigma^{-(\nu-1)}G_{1}dx$ (4.34) $\displaystyle\geq$
$\displaystyle
a\nu\int\sigma^{-(\nu-1)}(\frac{\hat{\lambda}_{1}}{2}\hat{\rho}^{2}+\frac{\hat{\lambda}_{2}}{2}\hat{n}^{2})dx+a\nu\int\sigma^{-(\nu-2)}\frac{A_{1}\gamma(\gamma+1)\rho_{+}^{\gamma}+A_{2}\alpha(\alpha+1)n_{+}^{\alpha}}{2|u_{+}|^{2}}\hat{v}^{2}dx$
$\displaystyle+a\nu\int\sigma^{-(\nu-1)}\frac{\rho_{+}(u_{+}^{2}-A_{1}\gamma\rho_{+}^{\gamma-1})}{|u_{+}|}\hat{v}(\psi-\bar{\psi})dx-C\delta^{\frac{1}{2}}\|\sigma^{-\frac{\nu-1}{2}}(\hat{\rho},\hat{n})\|^{2}-C\delta^{\frac{1}{2}}\|\sigma^{-\frac{\nu-2}{2}}\hat{v}\|^{2}$
$\displaystyle-C\delta^{\frac{1}{2}}\|\sigma^{-\frac{\nu}{2}}(\psi-\bar{\psi})\|^{2}-C(\varepsilon+\delta)\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}-C\varepsilon\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}$
$\displaystyle-C\varepsilon\frac{1}{\delta^{\nu}}(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)),$
where we have used the following facts:
$\displaystyle-(A_{1}\gamma\widetilde{\rho}^{\gamma-1}-A_{1}\gamma\rho_{+}^{\gamma-1})\geq\frac{A_{1}\gamma(\gamma-1)\rho_{+}^{\gamma-1}}{|u_{+}|}\sigma-C\sigma^{2},$
$\displaystyle-(A_{2}\alpha\widetilde{n}^{\alpha-1}-A_{2}\alpha
n_{+}^{\alpha-1})\geq\frac{A_{2}\alpha(\alpha-1)n_{+}^{\alpha-1}}{|u_{+}|}\sigma-C\sigma^{2},$
$\displaystyle-(A_{1}\gamma\widetilde{\rho}^{\gamma-2}\widetilde{u}-A_{1}\gamma\rho_{+}^{\gamma-2}u_{+})\geq
A_{1}\gamma(3-\gamma)\rho_{+}^{\gamma-2}\sigma-C\sigma^{2},$
$\displaystyle-(A_{2}\alpha\widetilde{n}^{\alpha-2}\widetilde{v}-A_{2}\alpha
n_{+}^{\alpha-2}u_{+})\geq
A_{2}\alpha(3-\alpha)n_{+}^{\alpha-2}\sigma-C\sigma^{2}.$
With the help of $(\ref{sigma})$-$(\ref{sig})$, $(\ref{boundary d1})$,
$(\ref{M_{+}=1 prior a})$, $(\ref{infty e2})$-$(\ref{nonlinear f})$,
$(\ref{bar rho u})$-$(\ref{P 1})$, and $\psi=\bar{\psi}+(\psi-\bar{\psi})$, it
holds that
$\displaystyle-a\nu\int\sigma^{-(\nu-1)}G_{2}(t,x)dx$ (4.35)
$\displaystyle\geq$
$\displaystyle-a^{2}\frac{\mu+n_{+}}{2}\nu(\nu-1)\|\sigma^{-\frac{\nu-2}{2}}\bar{\psi}\|^{2}-C(\delta+\varepsilon)\|\sigma^{-\frac{\nu-2}{2}}\bar{\psi}\|^{2}-C\varepsilon\|\sigma^{-\frac{\nu}{2}}\bar{\psi}_{x}\|^{2}-C\delta\|\sigma^{-\frac{\nu}{2}}(\bar{\psi}-\psi)\|^{2}$
$\displaystyle\geq$
$\displaystyle-a^{2}\frac{\mu+n_{+}}{2}\nu(\nu-1)\|\sigma^{-\frac{\nu-2}{2}}\hat{v}\|^{2}-C\delta^{\frac{1}{2}}(\|\sigma^{-\frac{\nu-1}{2}}(\hat{\rho},\hat{n})\|^{2}+\|\sigma^{-\frac{\nu-2}{2}}\hat{v}\|^{2})-C(\delta+\varepsilon)\|\sigma^{-\frac{\nu-2}{2}}\bar{\psi}\|^{2}$
$\displaystyle-C\varepsilon\|\sigma^{-\frac{\nu}{2}}\bar{\psi}_{x}\|^{2}-C\delta\|\sigma^{-\frac{\nu}{2}}(\bar{\psi}-\psi)\|^{2},$
$\int\sigma^{-\nu}n(\bar{\psi}-\psi)^{2}dx\geq[n_{+}-C(\delta+\varepsilon)]\|\sigma^{-\frac{\nu}{2}}(\bar{\psi}-\psi)\|^{2},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(4.36) $\displaystyle\int\sigma^{-\nu}R_{1}dx$ (4.37) $\displaystyle\geq$
$\displaystyle
a\int\sigma^{-(\nu-2)}[\frac{A_{1}\gamma(\gamma-1)\rho_{+}^{\gamma-2}}{2}\phi^{2}+\rho_{+}\psi^{2}+\frac{A_{2}\alpha(\alpha-1)n_{+}^{\alpha-2}}{2}\bar{\phi}^{2}+n_{+}\bar{\psi}^{2}]dx$
$\displaystyle-C(\delta+\varepsilon)\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}\quad\quad$
$\displaystyle\geq$ $\displaystyle
a\frac{A_{1}\gamma(\gamma+1)\rho_{+}^{\gamma}+A_{2}\alpha(\alpha+1)n_{+}^{\alpha}}{2|u_{+}|^{2}}\|\sigma^{-\frac{\nu-2}{2}}\hat{v}\|^{2}-C\delta(\|\sigma^{-\frac{\nu-1}{2}}(\hat{\rho},\hat{n})\|^{2}+\|\sigma^{-\frac{\nu-2}{2}}\hat{v}\|^{2})$
$\displaystyle-C(\delta+\varepsilon)\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}.$
For $\nu\in(0,3]$, with the help of $(\ref{sigma 1})$ and $(\ref{a})$, we add
$(\ref{G1-3})$-$(\ref{fifth term})$ together to obtain
$\displaystyle-a\nu\int\sigma^{-(\nu-1)}G_{1}dx-a\nu\int\sigma^{-(\nu-1)}G_{2}dx+\int\sigma^{-\nu}n(\bar{\psi}-\psi)^{2}dx+\int\sigma^{-\nu}R_{1}dx$
$\displaystyle\geq$ $\displaystyle
c\|\sigma^{-\frac{\nu-1}{2}}(\hat{\rho},\hat{n})\|^{2}+\frac{1}{4}\\{a\frac{A_{1}\gamma(\gamma+1)\rho_{+}^{\gamma}+A_{2}\alpha(\alpha+1)n_{+}^{\alpha}}{2|u_{+}|^{2}}[1+\nu-\frac{\nu(\nu-1)}{2(1+b^{2})}]\|\sigma^{-\frac{\nu-2}{2}}\hat{v}\|^{2}$
$\displaystyle+n_{+}\|\sigma^{-\frac{\nu}{2}}(\psi-\bar{\psi})\|^{2}\\}+\int\sigma^{-\nu}(\psi-\bar{\psi},\hat{v})\boldsymbol{M_{5}}(\psi-\bar{\psi},\hat{v})^{\rm
T}dx-C\delta^{\frac{1}{2}}\|\sigma^{-\frac{\nu-1}{2}}(\hat{\rho},\hat{n})\|^{2}$
$\displaystyle-C\delta^{\frac{1}{2}}\|\sigma^{-\frac{\nu-2}{2}}\hat{v}\|^{2}-C(\varepsilon+\delta^{\frac{1}{2}})\|\sigma^{-\frac{\nu}{2}}(\psi-\bar{\psi})\|^{2}-C(\varepsilon+\delta)\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}$
$\displaystyle-C\frac{\varepsilon}{\delta^{\nu}}(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0))-C(\varepsilon+\delta)\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}$
$\begin{split}&\geq
c\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}+c\|\sigma^{-\frac{\nu-1}{2}}(\hat{\rho},\hat{n})\|^{2}+c\|\sigma^{-\frac{\nu}{2}}(\bar{\psi}-\psi)\|^{2}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\\\
&\quad-C(\delta+\varepsilon)\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}-C\varepsilon\frac{\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)}{\delta^{\nu}},\end{split}$
(4.38)
where the positive definite matrix $\boldsymbol{M_{5}}$ is defined by
$\boldsymbol{M_{5}}=\begin{pmatrix}\frac{3}{4}n_{+}&\frac{\sqrt{(\mu+n_{+})n_{+}}}{2}ab\nu\sigma\\\
\frac{\sqrt{(\mu+n_{+})n_{+}}}{2}ab\nu\sigma&\quad\frac{3}{4}a\frac{A_{1}\gamma(\gamma+1)\rho_{+}^{\gamma}+A_{2}\alpha(\alpha+1)n_{+}^{\alpha}}{2|u_{+}|^{2}}[(1+\nu)-\frac{\nu(\nu-1)}{2(1+b^{2})}]\sigma^{2}\end{pmatrix}.$
(4.39)
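As a consistency check not spelled out in the text: since the upper-left entry $\frac{3}{4}n_{+}$ is positive, the positive definiteness of $\boldsymbol{M_{5}}$ reduces to the positivity of its determinant,

```latex
\det\boldsymbol{M_{5}}
=\Big\{\frac{9}{16}\,n_{+}\,a\,\frac{A_{1}\gamma(\gamma+1)\rho_{+}^{\gamma}
+A_{2}\alpha(\alpha+1)n_{+}^{\alpha}}{2|u_{+}|^{2}}
\Big[(1+\nu)-\frac{\nu(\nu-1)}{2(1+b^{2})}\Big]
-\frac{(\mu+n_{+})n_{+}}{4}\,a^{2}b^{2}\nu^{2}\Big\}\sigma^{2}>0.
```

For any $b$, the bracket is at least $1$ on $\nu\in(0,3]$ (a direct computation: $(1+\nu)-\frac{\nu(\nu-1)}{2}\geq 1$ there), so the condition holds once $a$ is taken small enough, which is what the smallness assumption $(\ref{a})$ is understood to guarantee here.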
Then, we consider the case $\nu\in[3,2+\sqrt{8+\frac{1}{1+b^{2}}})$ using
Lemma 4.7 with $\zeta=\sigma^{-(\nu-1)}$. With the aid of
$(\ref{a})$, the sixth term is estimated as below:
$\displaystyle\int\sigma^{-\nu}(\mu\psi^{2}_{x}+n\bar{\psi}_{x}^{2})dx$ (4.40)
$\displaystyle\geq$ $\displaystyle
a^{2}(\mu+n_{+})\frac{(\nu-1)^{2}}{4}\|\sigma^{-\frac{\nu-2}{2}}\hat{v}\|^{2}-C\delta^{\frac{1}{2}}(\|\sigma^{-\frac{\nu-2}{2}}\hat{v}\|^{2}+\|\sigma^{-\frac{\nu-1}{2}}(\hat{\rho},\hat{n})\|^{2})$
$\displaystyle-C(\delta+\varepsilon)\|\sigma^{-\frac{\nu}{2}}(\bar{\psi}-\psi,\bar{\psi}_{x})\|^{2}.$
For $\nu\in(3,\lambda]$, adding $(\ref{lower bdd})$ to $(\ref{six term})$,
taking $k=\nu B[4(1+b^{2})\nu+4b^{2}+5-\nu^{2}]^{-\frac{1}{2}}\in(0,1)$, and
using $(\ref{a})$ and
$c|(\phi,\bar{\phi},\bar{\psi})|\leq|(\hat{\rho},\hat{n},\hat{v})|\leq
C|(\phi,\bar{\phi},\bar{\psi})|$, we have
$\displaystyle-a\nu\int\sigma^{-(\nu-1)}G_{1}dx-a\nu\int\sigma^{-(\nu-1)}G_{2}dx+\int\sigma^{-\nu}n(\bar{\psi}-\psi)^{2}dx+\int\sigma^{-\nu}R_{1}dx$
(4.41)
$\displaystyle+\int\sigma^{-\nu}(\mu\psi_{x}^{2}+n\bar{\psi}_{x}^{2})dx$
$\displaystyle\geq$ $\displaystyle
c\|\sigma^{-\frac{\nu-1}{2}}(\hat{\rho},\hat{n})\|^{2}+\int\sigma^{-\nu}(\psi-\bar{\psi},\hat{v})\boldsymbol{M_{6}}(\psi-\bar{\psi},\hat{v})^{\rm
T}dx+(1-k)\\{n_{+}\|\sigma^{-\frac{\nu}{2}}(\psi-\bar{\psi})\|^{2}$
$\displaystyle+a\frac{A_{1}\gamma(\gamma+1)\rho_{+}^{\gamma}+A_{2}\alpha(\alpha+1)n_{+}^{\alpha}}{2|u_{+}|^{2}}[1+\nu-\frac{\nu(\nu-1)}{2(1+b^{2})}+\frac{(\nu-1)^{2}}{4(1+b^{2})}]\|\sigma^{-\frac{\nu-2}{2}}\hat{v}\|^{2}\\}$
$\displaystyle-C\delta^{\frac{1}{2}}(\|\sigma^{-\frac{\nu-1}{2}}(\hat{\rho},\hat{n})\|^{2}+\|\sigma^{-\frac{\nu-2}{2}}\hat{v}\|^{2})-C(\varepsilon+\delta)\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}$
$\displaystyle-C(\varepsilon+\delta^{\frac{1}{2}})\|\sigma^{-\frac{\nu}{2}}(\psi-\bar{\psi})\|^{2}-C\frac{\varepsilon}{\delta^{\nu}}(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0))-C(\varepsilon+\delta)\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}$
$\displaystyle\geq$ $\displaystyle
c\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}+c\|\sigma^{-\frac{\nu-1}{2}}(\hat{\rho},\hat{n})\|^{2}+c\|\sigma^{-\frac{\nu}{2}}(\bar{\psi}-\psi,\psi_{x},\bar{\psi}_{x})\|^{2}$
$\displaystyle-C(\delta+\varepsilon)\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2}-C\frac{\varepsilon}{\delta^{\nu}}(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)),$
where $\delta^{\frac{1}{2}}$ and $\varepsilon$ are small enough, and the
positive definite matrix $\boldsymbol{M_{6}}$ is defined as
$\boldsymbol{M_{6}}=\begin{pmatrix}kn_{+}&\frac{\sqrt{(\mu+n_{+})n_{+}}}{2}ab\nu\sigma\\\
\frac{\sqrt{(\mu+n_{+})n_{+}}}{2}ab\nu\sigma&\quad
ka\frac{A_{1}\gamma(\gamma+1)\rho_{+}^{\gamma}+A_{2}\alpha(\alpha+1)n_{+}^{\alpha}}{2|u_{+}|^{2}}[1+\nu-\frac{\nu(\nu-1)}{2(1+b^{2})}+\frac{(\nu-1)^{2}}{4(1+b^{2})}]\sigma^{2}\end{pmatrix}.$
(4.42)
By $(\ref{sigma})$-$(\ref{sigma 1})$, $(\ref{bar rho u})$-$(\ref{P 1})$, the
Cauchy-Schwarz inequality and $M_{+}=1$, we estimate the terms on the right-hand
side as
$\displaystyle|\int\sigma^{-\nu}R_{2}dx+\int\sigma^{-\nu}R_{3}dx|$ (4.43)
$\displaystyle\leq$ $\displaystyle
C\delta^{\frac{1}{2}}\|\sigma^{-\frac{\nu-1}{2}}(\hat{\rho},\hat{n})\|^{2}+C\delta^{\frac{1}{2}}\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}+C\delta\|\sigma^{-\frac{\nu}{2}}(\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}.$
Finally, taking $\delta^{\frac{1}{2}}$ and $\varepsilon$ small enough, and
combining $(\ref{second term e2})$-$(\ref{R e})$, we obtain
$\displaystyle\frac{d}{dt}\int\sigma^{-\nu}(\mathcal{E}_{1}+\mathcal{E}_{2})dx+c\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}+c\|\sigma^{-\frac{\nu}{2}}(\psi_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}+\frac{c}{\delta^{\nu}}[\phi^{2}(t,0)+\bar{\phi}^{2}(t,0)]$
$\displaystyle\leq$ $\displaystyle
C(\delta+\varepsilon)\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2}.$
(4.44)
Multiplying $(\ref{r1})$ by $(1+\delta\tau)^{\xi}$ and integrating the
resulting inequality in $\tau$ over $[0,t]$, we obtain $(\ref{L^{2} e2})$. The
proof of Lemma 4.8 is completed. ∎
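For the reader's convenience, the time-weighting device used to pass from $(\ref{r1})$ to $(\ref{L^{2} e2})$ (and used again in Lemmas 4.9-4.10) follows a standard pattern: if an energy inequality $\frac{d}{dt}E(t)+D(t)\leq G(t)$ holds, multiplying by $(1+\delta t)^{\xi}$ gives

```latex
\frac{d}{dt}\big[(1+\delta t)^{\xi}E(t)\big]+(1+\delta t)^{\xi}D(t)
\leq(1+\delta t)^{\xi}G(t)+\delta\xi(1+\delta t)^{\xi-1}E(t),
```

and integrating over $[0,t]$ produces precisely the extra term $\delta\xi\int_{0}^{t}(1+\delta\tau)^{\xi-1}E(\tau)\,d\tau$ that appears on the right-hand sides of the weighted estimates in this section.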
In order to prove Proposition 4.5, we need the high-order weighted
estimates of $(\phi,\psi,\bar{\phi},\bar{\psi})$.
###### Lemma 4.9.
Under the same conditions as in Proposition 4.5, the solution
$(\phi,\psi,\bar{\phi},\bar{\psi})$ to the IBVP $(\ref{f1})$-$(\ref{boundary
d1})$ satisfies, for $t\in[0,T]$,
$\displaystyle(1+\delta
t)^{\xi}\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2}+\int_{0}^{t}(1+\delta\tau)^{\xi}\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2}d\tau$
(4.45) $\displaystyle\leq$ $\displaystyle
C(\|\sigma^{-\frac{\lambda}{2}}(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}+\|\sigma^{-\frac{\lambda}{2}}(\phi_{0x},\bar{\phi}_{0x})\|^{2})+C(\varepsilon+\delta)\int(1+\delta\tau)^{\xi}\|\sigma^{-\frac{\nu}{2}}(\psi_{xx},\bar{\psi}_{xx})\|^{2}d\tau$
$\displaystyle+\delta\xi\int_{0}^{t}(1+\delta\tau)^{\xi-1}\|\sigma^{-\frac{\nu}{2}}(\phi,\psi,\phi_{x},\bar{\phi},\bar{\psi},\bar{\phi}_{x})\|^{2}d\tau,$
with $\xi\geq 0$.
###### Proof.
Adding $(\ref{h_{x}})$-$(\ref{bs_{x}})$ together and multiplying the resulting
equation by $\sigma^{-\nu}$, with the weight function $\sigma\geq 0$ satisfying
$(\ref{sig})$ and $(\ref{sg0})$, we integrate in $x$
over $\mathbb{R}_{+}$ to obtain
$\displaystyle\frac{d}{dt}\int\sigma^{-\nu}(\mu\frac{\phi_{x}^{2}}{2}+\frac{\bar{\phi}_{x}^{2}}{2}+\widetilde{\rho}^{2}\phi_{x}\psi+\widetilde{n}\bar{\phi}_{x}\bar{\psi})dx-[\sigma^{-\nu}(\mu
u\frac{\phi_{x}^{2}}{2}+v\frac{\bar{\phi}_{x}^{2}}{2}-\widetilde{\rho}^{2}\phi_{t}\psi-\widetilde{n}\bar{\phi}_{t}\bar{\psi})](t,0)$
(4.46) $\displaystyle-a\nu\int\sigma^{-(\nu-1)}[\mu
u\frac{\phi_{x}^{2}}{2}+v\frac{\bar{\phi}_{x}^{2}}{2}-\widetilde{\rho}^{2}\phi_{t}\psi-\widetilde{n}\bar{\phi}_{t}\bar{\psi}]dx+\int\sigma^{-\nu}(\widetilde{\rho}^{2}\frac{p^{\prime}_{1}(\rho)}{\rho}\phi_{x}^{2}+\widetilde{n}\frac{p^{\prime}_{2}(n)}{n}\bar{\phi}_{x}^{2})dx$
$\displaystyle=$ $\displaystyle\sum_{i=1}^{6}\mathcal{J}_{i},$
where
$\displaystyle\mathcal{J}_{1}=$
$\displaystyle-\int\sigma^{-\nu}[\widetilde{\rho}^{2}(\phi_{t}+u\phi_{x})\psi_{x}+\widetilde{n}(\bar{\phi}_{t}+v\bar{\phi}_{x})\bar{\psi}_{x}+2\widetilde{\rho}\widetilde{\rho}_{x}\phi_{t}\psi+\widetilde{n}_{x}\bar{\phi}_{t}\bar{\psi}]dx,$
$\displaystyle\mathcal{J}_{2}=$
$\displaystyle\int\sigma^{-\nu}[-\mu\phi\phi_{x}\psi_{xx}-\bar{\phi}\bar{\phi}_{x}\bar{\psi}_{xx}+\mu\widetilde{\rho}^{2}(\frac{1}{\rho}-\frac{1}{\widetilde{\rho}})\phi_{x}\psi_{xx}+\widetilde{n}(\frac{1}{n}-\frac{1}{\widetilde{n}})(\widetilde{n}\bar{\psi}_{x})_{x}\bar{\phi}_{x}]dx,~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
$\displaystyle\mathcal{J}_{3}=$
$\displaystyle\int\sigma^{-\nu}[\widetilde{\rho}^{2}\frac{n}{\rho}(\bar{\psi}-\psi)\phi_{x}-\widetilde{n}(\bar{\psi}-\psi)\bar{\phi}_{x}]dx,$
$\displaystyle\mathcal{J}_{4}=$
$\displaystyle-\int\sigma^{-\nu}(\frac{3}{2}\mu\psi_{x}\frac{\phi_{x}^{2}}{2}+\frac{3}{2}\bar{\psi}_{x}\bar{\phi}_{x}^{2})dx+\int\sigma^{-\nu}\widetilde{n}\frac{(\bar{\phi}\bar{\psi}_{x})_{x}}{n}\bar{\phi}_{x}dx,\quad\mathcal{J}_{5}=\int\sigma^{-\nu}(F_{1}\widetilde{\rho}^{2}\phi_{x}+F_{2}\widetilde{n}\bar{\phi}_{x})dx,$
$\displaystyle\mathcal{J}_{6}=$
$\displaystyle-\int\sigma^{-\nu}[\mu(\frac{1}{2}\phi_{x}\widetilde{u}_{x}+\psi_{x}\widetilde{\rho}_{x})\phi_{x}+\mu(\phi\widetilde{u}_{x}+\psi\widetilde{\rho}_{x})_{x}\phi_{x}$
$\displaystyle-\widetilde{n}_{x}\bar{\phi}_{x}\bar{\psi}_{xx}+(\frac{1}{2}\bar{\phi}_{x}\widetilde{v}_{x}+\bar{\psi}_{x}\widetilde{n}_{x})\bar{\phi}_{x}-(\bar{\phi}\widetilde{v}_{x}+\bar{\psi}\widetilde{n}_{x})_{x}\bar{\phi}_{x}]dx.$
Owing to $(\ref{sigma})$, $(\ref{boundary d1})$, and $(\ref{infty e2})$, we
obtain the following estimates for the terms on the left-hand side:
$\displaystyle-$ $\displaystyle[\sigma^{-\nu}(\mu
u\frac{\phi_{x}^{2}}{2}+v\frac{\bar{\phi}_{x}^{2}}{2}-\widetilde{\rho}\phi_{x}\psi-\widetilde{n}\bar{\phi}_{x}\bar{\psi})](t,0)\geq\frac{c}{\delta^{\nu}}(\phi^{2}_{x}(t,0)+\bar{\phi}^{2}_{x}(t,0))\geq
0,~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$ (4.47)
$\displaystyle-a\nu\int\sigma^{-(\nu-1)}(\mu
u\frac{\phi_{x}^{2}}{2}+v\frac{\bar{\phi}_{x}^{2}}{2})dx$ $\displaystyle\geq$
$\displaystyle\frac{a\nu|u_{+}|}{2}\int\sigma^{-(\nu-1)}(\mu\phi_{x}^{2}+\bar{\phi}_{x}^{2})dx-C(\varepsilon+\delta)\|\sigma^{-\frac{\nu-1}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2},$
(4.48)
$\displaystyle-a\nu\int\sigma^{-(\nu-1)}(-\widetilde{\rho}^{2}\phi_{t}\psi-\widetilde{n}\bar{\phi}_{t}\bar{\psi})dx$
$\displaystyle\geq$
$\displaystyle-C\varepsilon\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2}-C\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}-C\|\sigma^{-\frac{\nu}{2}}(\psi_{x},\bar{\psi}_{x})\|^{2},$
(4.49) $\displaystyle\int$
$\displaystyle\sigma^{-\nu}(\widetilde{\rho}^{2}\frac{p^{\prime}_{1}(\rho)}{\rho}\phi_{x}^{2}+\widetilde{n}\frac{p^{\prime}_{2}(n)}{n}\bar{\phi}_{x}^{2})dx\geq\frac{A_{1}\gamma\rho_{+}^{\gamma}}{2}\|\sigma^{-\frac{\nu}{2}}\phi_{x}\|^{2}+\frac{A_{2}\alpha
n_{+}^{\alpha-1}}{2}\|\sigma^{-\frac{\nu}{2}}\bar{\phi}_{x}\|^{2},~{}$ (4.50)
where we take $\delta$ and $\varepsilon$ small enough.
We turn to estimate the terms on the right-hand side of $(\ref{1f-1})$. With
the help of $(\ref{sigma})$, $(\ref{infty e2})$, the Young inequality and the
Cauchy-Schwarz inequality, we obtain
$\displaystyle|\mathcal{J}_{1}|\leq$ $\displaystyle
C\delta\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2}+C\|\sigma^{-\frac{\nu}{2}}(\psi_{x},\bar{\psi}_{x})\|^{2}+C\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2},$
(4.51) $\displaystyle|\mathcal{J}_{2}|\leq$ $\displaystyle
C(\|(\phi,\bar{\phi})\|_{L^{\infty}}+\delta)\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\bar{\phi}_{x},\psi_{xx},\bar{\psi}_{xx})\|^{2}+C\delta^{2}\|\sigma^{-\frac{\nu}{2}}\bar{\psi}_{x}\|^{2}$
$\displaystyle\leq$ $\displaystyle
C(\delta+\varepsilon)\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2}+C(\delta+\varepsilon)\|\sigma^{-\frac{\nu}{2}}(\psi_{xx},\bar{\psi}_{xx})\|^{2}+C\delta\|\sigma^{-\frac{\nu}{2}}\bar{\psi}_{x}\|^{2},$
(4.52) $\displaystyle|\mathcal{J}_{3}|\leq$
$\displaystyle\frac{A_{1}\gamma\rho_{+}^{\gamma}}{8}\|\sigma^{-\frac{\nu}{2}}\phi_{x}\|^{2}+\frac{A_{2}\alpha
n_{+}^{\alpha-1}}{8}\|\sigma^{-\frac{\nu}{2}}\bar{\phi}_{x}\|^{2}+C\|\sigma^{-\frac{\nu}{2}}(\bar{\psi}-\psi)\|^{2},$
(4.53) $\displaystyle|\mathcal{J}_{4}|\leq$ $\displaystyle
C\varepsilon\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2}+C\varepsilon\|\sigma^{-\frac{\nu}{2}}(\psi_{xx},\bar{\psi}_{xx})\|^{2}+C\varepsilon\|\sigma^{-\frac{\nu}{2}}(\psi_{x},\bar{\psi}_{x})\|^{2},$
(4.54) $\displaystyle|\mathcal{J}_{5}|\leq$ $\displaystyle
C\delta\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2}+C\delta\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}$
(4.55) $\displaystyle|\mathcal{J}_{6}|\leq$ $\displaystyle
C\delta\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2}+C\delta^{2}\|\sigma^{-\frac{\nu}{2}}\bar{\psi}_{xx}\|^{2}+C\delta\|\sigma^{-\frac{\nu}{2}}(\psi_{x},\bar{\psi}_{x})\|^{2}+C\delta\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}.$
(4.56)
Finally, substituting $(\ref{h-l1})$-$(\ref{h-r6})$ into $(\ref{1f-1})$
and taking $\delta$ and $\varepsilon$ small enough leads to
$\displaystyle\frac{d}{dt}\int\sigma^{-\nu}[\mu\frac{\phi_{x}^{2}}{2}+\frac{\bar{\phi}_{x}^{2}}{2}+\widetilde{\rho}^{2}\phi_{x}\psi+\widetilde{n}\bar{\phi}_{x}\bar{\psi}]dx+c\|\sigma^{-\frac{\nu-1}{2}}(\phi_{x},\bar{\phi}_{x})\|^{2}$
(4.57)
$\displaystyle+\frac{A_{1}\gamma\rho_{+}^{\gamma}}{4}\|\sigma^{-\frac{\nu}{2}}\phi_{x}\|^{2}+\frac{A_{2}\alpha
n_{+}^{\alpha-1}}{4}\|\sigma^{-\frac{\nu}{2}}\bar{\phi}_{x}\|^{2}$
$\displaystyle\leq$ $\displaystyle
C(\varepsilon+\delta)\|\sigma^{-\frac{\nu}{2}}(\psi_{xx},\bar{\psi}_{xx})\|^{2}+C\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}+C\|\sigma^{-\frac{\nu}{2}}(\psi_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}.~{}~{}~{}~{}~{}~{}~{}~{}~{}$
Multiplying $(\ref{high order weighted e})$ by $(1+\delta\tau)^{\xi}$,
integrating the resulting inequality in $\tau$ over $[0,t]$, and using the
Cauchy-Schwarz inequality and Lemma 4.8, we obtain the desired estimate
$(\ref{h-e-t})$. The proof of Lemma 4.9 is completed. ∎
###### Lemma 4.10.
Under the same conditions as in Proposition 4.5, the solution
$(\phi,\psi,\bar{\phi},\bar{\psi})$ to the IBVP $(\ref{f1})$-$(\ref{boundary
d1})$ satisfies, for $t\in[0,T]$,
$\displaystyle(1+\delta
t)^{\xi}\|\sigma^{-\frac{\nu}{2}}(\psi_{x},\bar{\psi}_{x})\|^{2}+\int^{t}_{0}(1+\delta\tau)^{\xi}\|\sigma^{-\frac{\nu}{2}}(\psi_{xx},\bar{\psi}_{xx})\|^{2}d\tau$
(4.58) $\displaystyle\leq$ $\displaystyle
C\|\sigma^{-\frac{\lambda}{2}}(\phi_{0},\psi_{0},\phi_{0x},\psi_{0x},\bar{\phi}_{0},\bar{\psi}_{0},\bar{\phi}_{0x},\bar{\psi}_{0x})\|^{2}$
$\displaystyle+C\delta\xi\int^{t}_{0}(1+\delta\tau)^{\xi-1}\|\sigma^{-\frac{\nu}{2}}(\phi,\psi,\phi_{x},\psi_{x},\bar{\phi},\bar{\psi},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}d\tau,$
with $\xi\geq 0$.
###### Proof.
Multiplying $(\ref{f1})_{2}$ by $-\sigma^{-\nu}\psi_{xx}$ and $(\ref{f1})_{4}$
by $-\sigma^{-\nu}\bar{\psi}_{xx}$, with the function $\sigma$
satisfying $(\ref{sig})$ and $(\ref{sg0})$, then adding them together and
integrating the resulting equation in $x$ over $\mathbb{R}_{+}$ leads to
$\displaystyle\frac{d}{dt}\int\sigma^{-\nu}(\frac{\psi_{x}^{2}}{2}+\frac{\bar{\psi}_{x}^{2}}{2})dx-a\nu\int\sigma^{-(\nu-1)}(\psi_{t}\psi_{x}+\bar{\psi}_{t}\bar{\psi}_{x})dx+\int\sigma^{-\nu}(\frac{\mu}{\rho}\psi_{xx}^{2}+\bar{\psi}_{xx}^{2})dx=\sum_{i=1}^{3}\mathcal{K}_{i},$
(4.59)
where
$\displaystyle\mathcal{K}_{1}=$
$\displaystyle\int\sigma^{-\nu}[u\psi_{x}\psi_{xx}-\frac{n}{\rho}(\bar{\psi}-\psi)\psi_{xx}+\frac{p^{\prime}_{1}(\rho)}{\rho}\phi_{x}\psi_{xx}+v\bar{\psi}_{x}\bar{\psi}_{xx}+(\bar{\psi}-\psi)\bar{\psi}_{xx}+\frac{p_{2}{{}^{\prime}}(n)}{n}\bar{\phi}_{x}\bar{\psi}_{xx}]dx,$
$\displaystyle\mathcal{K}_{2}=$
$\displaystyle\int\sigma^{-\nu}[\widetilde{u}_{x}\psi\psi_{xx}-\mu\widetilde{u}_{xx}(\frac{1}{\rho}-\frac{1}{\widetilde{\rho}})\psi_{xx}+(\frac{p^{\prime}_{1}(\rho)}{\rho}-\frac{p^{\prime}_{1}(\widetilde{\rho})}{\widetilde{\rho}})\psi_{xx}\widetilde{\rho}_{x}-(\frac{n}{\rho}-\frac{\widetilde{n}}{\widetilde{\rho}})(\widetilde{v}-\widetilde{u})\psi_{xx}+\widetilde{v}_{x}\bar{\psi}\bar{\psi}_{xx}$
$\displaystyle+(\widetilde{n}\widetilde{v}_{x})_{x}(\frac{1}{n}-\frac{1}{\widetilde{n}})\bar{\psi}_{xx}+(\frac{p^{\prime}_{2}(n)}{n}-\frac{p^{\prime}_{2}(\widetilde{n})}{\widetilde{n}})\bar{\psi}_{xx}\widetilde{n}_{x}-\frac{\widetilde{n}_{x}}{\widetilde{n}}\bar{\psi}_{x}\bar{\psi}_{xx}-\frac{(\bar{\phi}\widetilde{v}_{x})_{x}}{n}\bar{\psi}_{xx}]dx,$
$\displaystyle\mathcal{K}_{3}=$
$\displaystyle-\int\sigma^{-\nu}[\frac{(\bar{\phi}\bar{\psi}_{x})_{x}}{n}\bar{\psi}_{xx}+(\frac{1}{n}-\frac{1}{\widetilde{n}})(\widetilde{n}\bar{\psi}_{x})_{x}\bar{\psi}_{xx}]dx.$
First, we estimate the terms on the left-hand side of $(\ref{psi_{xx} space w
1})$. The second term is estimated as follows:
$\displaystyle
a\nu\int\sigma^{-(\nu-1)}(\psi_{t}\psi_{x}+\bar{\psi}_{t}\bar{\psi}_{x})dx$
(4.60) $\displaystyle\leq$ $\displaystyle
C\delta\|\sigma^{-\frac{\nu}{2}}(\psi_{xx},\bar{\psi}_{xx})\|^{2}+C\delta\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}+C\delta\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2},$
where we have used $(\ref{sigma})$, $(\ref{f1})_{2}$, $(\ref{f1})_{4}$,
$(\ref{infty e2})$ and Cauchy-Schwarz inequality.
Taking $\delta$ and $\varepsilon$ small enough, the third term is
estimated as follows:
$\displaystyle\int\sigma^{-\nu}(\frac{\mu}{\rho}\psi_{xx}^{2}+\bar{\psi}_{xx}^{2})dx\geq$
$\displaystyle[\frac{\mu}{\rho_{+}}-C(\varepsilon+\delta)]\|\sigma^{-\frac{\nu}{2}}\psi_{xx}\|^{2}+\|\sigma^{-\frac{\nu}{2}}\bar{\psi}_{xx}\|^{2}$
(4.61) $\displaystyle\geq$
$\displaystyle\frac{\mu}{2\rho_{+}}\|\sigma^{-\frac{\nu}{2}}\psi_{xx}\|^{2}+\|\sigma^{-\frac{\nu}{2}}\bar{\psi}_{xx}\|^{2}.$
We turn to estimate terms on the right hand side of $(\ref{psi_{xx} space w
1})$. With the help of $(\ref{sigma})$, $(\ref{infty e2})$ and Cauchy-Schwarz
inequality, we obtain
$\displaystyle|\mathcal{K}_{1}|\leq\frac{\mu}{8\rho_{+}}\|\sigma^{-\frac{\nu}{2}}\psi_{xx}\|^{2}+\frac{1}{8}\|\sigma^{-\frac{\nu}{2}}\bar{\psi}_{xx}\|^{2}+C\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2},$
(4.62) $\displaystyle|\mathcal{K}_{2}|\leq
C\delta\|\sigma^{-\frac{\nu}{2}}(\psi_{xx},\bar{\psi}_{xx})\|^{2}+C\delta\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}+C\delta\|\sigma^{-\frac{\nu}{2}}(\bar{\phi}_{x},\bar{\psi}_{x})\|^{2},$
(4.63) $\displaystyle|\mathcal{K}_{3}|\leq
C(\varepsilon+\delta)\|\sigma^{-\frac{\nu}{2}}(\bar{\psi}_{x},\bar{\psi}_{xx})\|^{2}.$
(4.64)
Finally, we substitute $(\ref{psi_{xx} second term e})$-$(\ref{K_{4} e})$ into
$(\ref{psi_{xx} space w 1})$ to obtain, for $\delta$ and
$\varepsilon$ small enough,
$\displaystyle\frac{d}{dt}\int\sigma^{-\nu}(\frac{\psi_{x}^{2}}{2}+\frac{\bar{\psi}_{x}^{2}}{2})dx+\frac{\mu}{4\rho_{+}}\|\sigma^{-\frac{\nu}{2}}\psi_{xx}\|^{2}+\frac{1}{4}\|\sigma^{-\frac{\nu}{2}}\bar{\psi}_{xx}\|^{2}$
(4.65) $\displaystyle\leq$ $\displaystyle
C\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x},\bar{\psi}-\psi)\|^{2}+C\delta\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}.$
Multiplying $(\ref{psi_{xx} space weighted e})$ by $(1+\delta\tau)^{\xi}$,
integrating the resulting inequality in $\tau$ over $[0,t]$, and using Lemmas
4.8-4.9 and the smallness of $\delta$ and $\varepsilon$, we obtain
$(\ref{psi_{xx} time e 1})$. The proof of Lemma 4.10 is completed. ∎
###### Proof of Proposition 4.5.
With the help of Lemmas 4.8-4.10, it holds for
$\delta$ and $\varepsilon$ suitably small that
$\displaystyle(1+\delta
t)^{\xi}\|\sigma^{-\frac{\nu}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{1}+\int^{t}_{0}(1+\delta\tau)^{\xi}\|\sigma^{-\frac{\nu-2}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}d\tau+\int_{0}^{t}(1+\delta\tau)^{\xi}\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\psi_{x},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}$
$\displaystyle+C\int_{0}^{t}(1+\delta\tau)^{\xi}\|\sigma^{-\frac{\nu}{2}}(\psi_{xx},\bar{\psi}_{xx},\bar{\psi}-\psi)\|^{2}+C\frac{1}{\delta^{\nu}}\int_{0}^{t}(1+\delta\tau)^{\xi}(\phi^{2}(t,0)+\bar{\phi}^{2}(t,0))d\tau$
$\displaystyle\leq$ $\displaystyle
C\|\sigma^{-\frac{\lambda}{2}}(\phi_{0},\psi_{0},\bar{\phi}_{0},\bar{\psi}_{0})\|^{2}_{1}+C\delta\xi\int^{t}_{0}(1+\delta\tau)^{\xi-1}\|\sigma^{-\frac{\nu}{2}}(\phi,\psi,\bar{\phi},\bar{\psi})\|^{2}_{1}d\tau,$
(4.66)
where $C>0$ is a positive constant independent of $T$, $\nu$ and $\xi$.
Applying induction arguments similar to those in [14, 25, 3] to $(\ref{high order
space time e})$, we have
$\displaystyle(1+\delta
t)^{\frac{\lambda-\nu}{2}+\beta}\|\sigma^{-\frac{\nu}{2}}(\phi,\psi,\phi_{x},\psi_{x},\bar{\phi},\bar{\psi},\bar{\phi}_{x},\bar{\psi}_{x})\|^{2}+\int^{t}_{0}(1+\delta\tau)^{\frac{\lambda-\nu}{2}+\beta}\|\sigma^{-\frac{\nu-2}{2}}(\phi,\bar{\phi},\psi,\bar{\psi})\|^{2}$
(4.67)
$\displaystyle+\int^{t}_{0}(1+\delta\tau)^{\frac{\lambda-\nu}{2}+\beta}\|\sigma^{-\frac{\nu}{2}}(\phi_{x},\psi_{x},\psi_{xx},\bar{\phi}_{x},\bar{\psi}_{x},\bar{\psi}_{xx},\bar{\psi}-\psi)\|^{2}$
$\displaystyle\leq$ $\displaystyle C(1+\delta
t)^{\beta}\|\sigma^{-\frac{\lambda}{2}}(\phi_{0},\psi_{0},\phi_{0x},\psi_{0x},\bar{\phi}_{0},\bar{\psi}_{0},\bar{\phi}_{0x},\bar{\psi}_{0x})\|^{2}$
for $\beta>0$, which implies
$\|\sigma^{-\frac{\nu}{2}}(\phi,\psi,\phi_{x},\psi_{x},\bar{\phi},\bar{\psi},\bar{\phi}_{x},\bar{\psi}_{x})(t)\|^{2}\leq
C(1+\delta
t)^{-\frac{\lambda-\nu}{4}}\|\sigma^{-\frac{\lambda}{2}}(\phi_{0},\psi_{0},\phi_{0x},\psi_{0x},\bar{\phi}_{0},\bar{\psi}_{0},\bar{\phi}_{0x},\bar{\psi}_{0x})\|^{2}.$
(4.68)
## Acknowledgments
The research of the paper is supported by the National Natural Science
Foundation of China (Nos. 11931010, 11871047, 11671384), by the key research
project of Academy for Multidisciplinary Studies, Capital Normal University,
and by the Capacity Building for Sci-Tech Innovation-Fundamental Scientific
Research Funds (No. 007/20530290068).
## References
* [1] F. Bubba, B. Perthame, C. Pouchol, M. Schmidtchen, Hele-Shaw limit for a system of two reaction-(cross-)diffusion equations for living tissues. Arch. Ration. Mech. Anal. 236 (2020), no. 2, 735-766.
* [2] J. Carr, Applications of Center Manifold Theory, Springer-Verlag, 1981.
* [3] Y.Z. Chen, H. Hong, X.D. Shi, Convergence rate of stationary solutions to outflow problem for full Navier-Stokes equations. Appl. Anal. 98 (2019), no. 7, 1267-1288.
* [4] R.J. Duan, X.F. Yang, Stability of rarefaction wave and boundary layer for outflow problem on the two-fluid Navier-Stokes-Poisson equations, Commun. Pure Appl. Anal. 12 (2013) 985-1014.
* [5] L.L. Fan, H.X. Liu, T. Wang, H.J. Zhao, Inflow problem for the one-dimensional compressible Navier-Stokes equations under large initial perturbation. J. Differential Equations 257 (2014), no. 10, 3521-3553.
* [6] C.C. Hao and H.L. Li, Well-posedness for a multidimensional viscous liquid-gas two-phase flow model, SIAM J. Math. Anal. 44 (2012), 1304-1332.
* [7] H. Hong, X.D. Shi, T. Wang, Stability of stationary solutions to the inflow problem for the two-fluid non-isentropic Navier-Stokes-Poisson system. J. Differential Equations 265 (2018), no. 4, 1129-1155.
* [8] H. Hong, T. Wang, Stability of stationary solutions to the inflow problem for full compressible Navier-Stokes equations with a large initial perturbation. SIAM J. Math. Anal. 49 (2017), no. 3, 2138-2166.
* [9] F.M. Huang, A. Matsumura, X.D. Shi, Viscous shock wave and boundary layer solution to an inflow problem for compressible viscous gas. Comm. Math. Phys. 239 (2003), no. 1-2, 261-285.
* [10] F.M. Huang, J. Li, X.D. Shi, Asymptotic behavior of solutions to the full compressible Navier-Stokes equations in the half space. Commun. Math. Sci. 8 (2010), no. 3, 639-654.
* [11] F.M. Huang, X.H. Qin, Stability of boundary layer and rarefaction wave to an outflow problem for compressible Navier-Stokes equations under large perturbation. J. Differential Equations 246 (2009), no. 10, 4077-4096.
* [12] M. Ishii and T. Hibiki, Thermo-fluid Dynamics of Two-Phase Flow. New York: Springer-Verlag, 2006.
* [13] S. Kawashima, K. Kurata, Hardy type inequality and application to the stability of degenerate stationary waves. J. Funct. Anal. 257 (2009), no. 1, 1-19.
* [14] S. Kawashima, A. Matsumura, Asymptotic stability of traveling wave solutions of systems for one-dimensional gas motion, Comm. Math. Phys. 101 (1985), no. 1, 97-127.
* [15] S. Kawashima, T. Nakamura, S. Nishibata, P.C. Zhu, Stationary waves to viscous heat-conductive gases in half-space: existence, stability and convergence rate. Math. Models Methods Appl. Sci. 20 (2010), no. 12, 2201-2235.
* [16] S. Kawashima, S. Nishibata and P. Zhu, Asymptotic stability of the stationary solution to the compressible Navier-Stokes equations in the half space, Comm. Math. Phys. 240 (2003), 483-500.
* [17] S. Kawashima, P.C. Zhu, Asymptotic stability of nonlinear wave for the compressible Navier-Stokes equations in the half space. J. Differential Equations 244 (2008), no. 12, 3151-3179.
* [18] S. Kawashima, P.C. Zhu, Asymptotic stability of rarefaction wave for the Navier-Stokes equations for a compressible fluid in the half space. Arch. Ration. Mech. Anal. 194 (2009), no. 1, 105-132.
* [19] H.L. Li, T. Wang, Y. Wang, Wave phenomena to the three-dimensional fluid-particle model, preprint, 2020.
* [20] H.L. Li, S. Zhao, Existence and nonlinear stability of stationary solutions to the full two-phase flow model in a half line. Appl. Math. Lett. 116 (2021), 107039.
* [21] T. Lorenzi, A. Lorz, B. Perthame, On interfaces between cell populations with different mobilities. Kinet. Relat. Models 10 (2017), no. 1, 299-311.
* [22] A. Matsumura, Inflow and outflow problems in the half space for a one-dimensional isentropic model system of compressible viscous gas, Methods Appl. Anal. 8(2001), 645-666.
* [23] A. Matsumura, K. Nishihara, Large-time behaviors of solutions to an inflow problem in the half space for a one-dimensional system of compressible viscous gas. Comm. Math. Phys. 222 (2001), no. 3, 449-474.
* [24] A. Mellet, A. Vasseur, Asymptotic analysis for a Vlasov-Fokker-Planck/compressible Navier-Stokes system of equations, Comm. Math. Phys. 281 (2008), no. 3, 573-596.
* [25] M. Nishikawa, Convergence rate to the traveling wave for viscous conservation laws, Funkcial. Ekvac. 41 (1998), no. 1, 107-132.
* [26] T. Nakamura, S. Nishibata, Stationary wave associated with an inflow problem in the half line for viscous heat-conductive gas. J. Hyperbolic Differ. Equ. 8 (2011), no. 4, 651-670.
* [27] T. Nakamura, S. Nishibata, Existence and asymptotic stability of stationary waves for symmetric hyperbolic-parabolic systems in half-line. Math. Models Methods Appl. Sci. 27 (2017), no. 11, 2071-2110.
* [28] T. Nakamura, S. Nishibata, N. Usami, Convergence rate of solutions towards the stationary solutions to symmetric hyperbolic-parabolic systems in half space. Kinet. Relat. Models 11 (2018), no. 4, 757-793.
* [29] T. Nakamura, S. Nishibata and T. Yuge, Convergence rate of solutions toward stationary solutions to the compressible Navier-Stokes equation in a half line, J. Differential Equations 241 (2007), 94-111.
* [30] X.H. Qin, Large-time behaviour of solutions to the outflow problem of full compressible Navier-Stokes equations. Nonlinearity 24 (2011), no. 5, 1369-1394.
* [31] X.H. Qin, Y. Wang, Stability of wave patterns to the inflow problem of full compressible Navier-Stokes equations. SIAM J. Math. Anal. 41 (2009), no. 5, 2057-2087.
* [32] X.H. Qin, Y. Wang, Large-time behavior of solutions to the inflow problem of full compressible Navier-Stokes equations. SIAM J. Math. Anal. 43 (2011), no. 1, 341-366.
* [33] L. Wan, T. Wang, H.J. Zhao, Asymptotic stability of wave patterns to compressible viscous and heat-conducting gases in the half-space. J. Differential Equations 261 (2016), no. 11, 5949-5991.
* [34] L. Wan, T. Wang, Q.Y. Zou, Stability of stationary solutions to the outflow problem for full compressible Navier-Stokes equations with large initial perturbation. Nonlinearity 29 (2016), no. 4, 1329-1354.
* [35] H.Y. Yin, J.S. Zhang, C.J. Zhu, Stability of the superposition of boundary layer and rarefaction wave for outflow problem on the two-fluid Navier-Stokes-Poisson system, Nonlinear Anal. Real World Appl. 31 (2016) 492-512.
* [36] H.Y. Yin, Converge rates towards stationary solutions for the outflow problem of planar magnetohydrodynamics on a half line. Proc. Roy. Soc. Edinburgh Sect. A 149 (2019), no. 5, 1291-1322.
* [37] H.Y. Yin, C.J. Zhu, Convergence rate of solutions toward stationary solutions to a viscous liquid-gas two-phase flow model in a half line, Commun. Pure Appl. Anal. 14 (2015), no. 5, 2021-2042.
* [38] F. Zhou, Y.P. Li, Convergence rate of solutions toward stationary solutions to the bipolar Navier-Stokes-Poisson equations in a half line, Bound. Value Probl. 2013 (2013) 1.
# Possible observation of the signature of the bad metal phase and its
crossover to a Fermi liquid in $\kappa$-(BEDT-TTF)2Cu(NCS)2 bulk and
nanoparticles by Raman scattering
M. Revelli Beaumont1,2, P. Hemme1, Y. Gallais1, A. Sacuto1, K. Jacob2, L. Valade2,
D. de Caro2, C. Faulmann2, M. Cazayous1
1Laboratoire Matériaux et Phénomènes Quantiques (UMR 7162 CNRS), Université de Paris, 75205 Paris Cedex 13, France
2Laboratoire de Chimie de Coordination (UPR 8241), Université Paul Sabatier, Toulouse, France
###### Abstract
$\kappa$-(BEDT-TTF)2Cu(NCS)2 has been investigated by Raman scattering in both
bulk and nanoparticle compounds. Phonon modes from 20 to 1600 cm-1 have been
assigned. Focusing on the unexplored low frequency phonons, a plateau in
frequencies is observed in the bulk phonons between 50 and 100 K and assigned
to the signature of the bad metal phase. Nanoparticles of
$\kappa$-(BEDT-TTF)2Cu(NCS)2 exhibit anomalies at 50 K associated with the
crossover from a bad metal to a Fermi liquid, whose origins are discussed.
## I Introduction
Organic compounds are usually electrical insulators. However, in charge-transfer
complexes and salts, the large intermolecular $\pi$-orbital overlaps
create a conduction pathway. Organic conductors have thus been a focus in the
development of innovative architectures based on emerging molecular
devices.Jalabert From a fundamental point of view, organic compounds also
present competing phases that have attracted much attention.Dressel2018
; Ishuguro1998 ; Lang2008 ; Powell2006 ; lebed2006 ; Drichko2015 The phase
diagram of organic superconductors shares various similarities with that of
high-temperature copper oxide superconductors.Keimer1992 Among the charge transfer
salts, $\kappa$-(BEDT-TTF)2Cu(NCS)2, with BEDT-TTF
[bis(ethylenedithio)tetrathiafulvalene] molecules, commonly abbreviated as ET,
is a well-known organic compound due to the occurrence in its phase diagram of
unconventional superconductivity close to an antiferromagnetic
phase.Ishuguro1998 ; Lang2008 ; Powell2006 ; lebed2006
Figure 1: Schematic phase diagram of organic charge transfer salts as a
function of chemical (or external) pressure for $\kappa$-(BEDT-TTF)2Br and
$\kappa$-(ET)2Cu(NCS)2. PI and AI correspond to the paramagnetic insulator and
antiferromagnetic insulator phases, respectively.
Indeed, $\kappa$-(ET)2Cu(NCS)2 exhibits a complex phase diagram at ambient
pressure (see Fig. 1). $\kappa$-(ET)2Cu(NCS)2 is superconducting below the
critical temperature $T_{C}$ = 10.4 K.urayamaNewAmbientPressure1988 ; Sugano
Between $T_{C}$ and $T_{FL}$ = 20 - 25 K, the compound behaves like a Fermi
liquidDressel2007 ; Merino2008 characterized by a quadratic temperature
dependence in the resistivity.Limelette2003 ; Strack2005 ; Milbrandt2013
Between $T_{FL}$ and $T_{coh}\simeq 50$ K, the resistivity no longer has a
quadratic dependence, indicating the breakdown of true Fermi liquid
behaviour.Limelette2003 ; Milbrandt2013 ; Georges2013 However, the charge
transport is still dominated by coherent quasiparticles.Merino2008 ;
Dressel2004 ; Frikach As the temperature is further raised above $T_{coh}$,
the resistivity continues to increase monotonically until a broad maximum is
reached at $T_{max}\simeq 100$ K. The spectral weight at the Fermi energy is
negligible Analytis2006 ; Su1998 ; Strack2005 ; Merino2000 ; Limelette2003
and the resistivity starts to decrease monotonically. The electron-electron
interactions control this phase as shown by dynamical mean-field theory
(DMFT).Limelette2003 ; Merino2000 ; Georges2013 ; Kotliar2004 This phase is
described as a bad metal, an intermediate-temperature state of a metal. It
is characterized, for example, by ill-defined quasi-particles. Ultrasonic
attenuation experiments show that the electron-phonon coupling is strong at
the crossover between the two phases.Frikach The vibrational degrees of
freedom might thus be used as a signature of this transition. The electrical
resistance of $\kappa$-(ET)2Cu(NCS)2 shows semiconductor behavior above
$T_{max}$.Kuwata The Fermi-surface topology has also been investigated and
determined by magnetotransport experiments.Singleton ; Caulfield
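Schematically, the transport regimes reviewed above can be summarized as follows (standard notation; the residual resistivity $\rho_{0}$ and the coefficient $A$ are not quantified in the works cited here):

```latex
\rho(T)\simeq
\begin{cases}
0, & T<T_{C} \quad\text{(superconductor)},\\
\rho_{0}+AT^{2}, & T_{C}<T<T_{FL} \quad\text{(Fermi liquid)},\\
\text{increasing, non-quadratic}, & T_{FL}<T<T_{max} \quad\text{(coherent transport, then bad metal)},\\
\text{decreasing (semiconductor-like)}, & T>T_{max}.
\end{cases}
```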
The spin degrees of freedom are strongly affected by temperature. Above
$T_{coh}\simeq$ 50 K, the spin correlations are captured by a phenomenological
spin-fluctuation model from Millis, Monien, and Pines Moriya ; MMP ; Yusuf1 ;
Kawamoto95 ; Kawamoto95B which quantitatively reproduces the monotonic
increase in spin correlations observed in NMR experiments. DMFT calculations
fail to reproduce the NMR results below this temperature (for example, the
strong decrease of the Knight shift).Yusuf1 Three hypotheses have been
proposed to explain the NMR measurements below $T_{coh}$: the loss of the
antiferromagnetic spin correlations below 50 K, the opening of a pseudogap or
the opening of a real gap associated with $T_{coh}$ on the small 1D-parts of
the FS as opposed to a pseudogap on the major quasi-2D fractions.Powell2011
However, the pseudogap opening has only been observed up to now in NMR
experiments.
Note that the ET molecules are also involved in the physics of the metallic,
insulating and superconducting phases.Muller2002 ; Scriven2009b ; Powell2004 ;
Taniguchi2003 The conformation of the ET molecules that can be staggered or
eclipsed induces a glassy transition around $T_{g}$ = 80 K. Above $T_{g}$
there is a thermal distribution of staggered and eclipsed states whereas below
$T_{g}$ the dynamics of the terminal ethylene groups are strongly suppressed
and an order-disorder transition is observed.Muller2002
Most of the previous Raman scattering measurements on $\kappa$-(ET)2Cu(NCS)2
have been devoted to the study of the superconducting phase.Dressel1992 ;
Pedron1997 ; Lin2001 ; Truong2007 In particular, anomalies in the Raman shift
and in the broadening of the phonon modes have been interpreted as the
signature of the pair breaking energy.Zeyher1990 In this work, we bring
insight into the rich physics of $\kappa$-(ET)2Cu(NCS)2 through the study of
the intermolecular phonon modes below 200 cm-1 which give access to the
electron correlations and intermolecular magnetic coupling. We have
implemented this approach in $\kappa$-(ET)2Cu(NCS)2 bulk and nanoparticles
(NPs) in order to study the effects of the size reduction on the
$\kappa$-(ET)2Cu(NCS)2 properties. In the low frequency phonons of the bulk,
we point out the signature of the transition from semiconducting behavior to
the bad metal. For NPs, additional anomalies have been observed as a function
of temperature around $T_{\textrm{coh}}$ $\simeq$ 50 K and associated with the
crossover from the bad metal to the Fermi liquid.
## II Experimental Details
High quality $\kappa$-(ET)2Cu(NCS)2 single crystals have been grown by
electrocrystallization following the synthesis described in Ref.
urayamaNewAmbientPressure1988, and characterized by X-ray crystallography and
resistivity measurements. Single-crystalline $\kappa$-(ET)2Cu(NCS)2 NPs with a
mean diameter of 28 nm and a standard deviation of 4 nm, have been synthesized
by chemical route using a biobased amphiphilic molecule, the dodecanoic acid
C11H23COOH as the growth controlling agent. Their quality has been
characterized by X-ray crystallography and IR measurements. TEM measurements
evidence roughly spherical
NPs.revellibeaumontReproducibleNanostructurationSuperconducting2020 Note that
NPs differ from powders by their well-defined, regular shapes and a controlled
mean diameter with a small standard deviation.
The structure of mixed-valence salt $\kappa-\text{(ET)}_{2}X$ consists of
alternating layers of acceptors X and electron donor ET molecules. ET layers
consist of pairs of ET molecules, which stack almost perpendicularly to each
other, the long axis of the molecules always pointing in the same direction.
This then forms a two-dimensional conducting layer in the bc-plane,Lang2008 ;
Dressel2018 ; Powell2006 which alternates with the insulating layer of the
polymeric anion along the a-direction. The ET molecule pairs donate an
electron to the anion layer, leaving a hole in the highest occupied molecular
orbital (HOMO) of ET. The $\kappa$-phases belong to the family of organic
quasi-two-dimensional conductors with an in-plane conductivity much larger
than the out-of-plane conductivity.
The Raman spectroscopy measurements were recorded using a triple subtractive
T-64000 Jobin-Yvon spectrometer with a cooled CCD detector. The spectra were
acquired with a 532 nm laser line (green light) from an Oxxius-Slim solid
state laser, filtered both spatially and in frequency and obtained in quasi-
backscattering geometry. Measurements between 15 and 300 K have been performed
using an ARS closed-cycle He cryostat. In order to avoid heating and damaging
of the sample, the incident laser beam power was kept at 2 mW with a spot size
of 100 $\mu$m diameter. Phonon modes were analyzed using a Lorentzian
lineshape. The resolution of the phonon frequencies was around 0.2 cm-1.
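As an illustration of the fitting procedure, the Lorentzian lineshape used to analyze each phonon mode can be sketched in a few lines. This is a minimal sketch, not the actual analysis code; the parameter names and example numbers (a peak at 103 cm-1 with a 2 cm-1 width) are ours, chosen only to show the roles of the parameters:

```python
# Minimal sketch of a Lorentzian lineshape for a single phonon mode.
# omega0: peak position (cm^-1), gamma: full width at half maximum (cm^-1),
# amp: peak intensity (arbitrary units).
def lorentzian(omega, omega0, gamma, amp):
    half = gamma / 2.0
    return amp * half ** 2 / ((omega - omega0) ** 2 + half ** 2)

# At omega = omega0 the intensity equals amp; at omega0 +/- gamma/2 it drops
# to amp/2, which is what makes gamma the full width at half maximum.
print(lorentzian(103.0, 103.0, 2.0, 1.0))  # 1.0
print(lorentzian(104.0, 103.0, 2.0, 1.0))  # 0.5
```

In practice each peak would be fitted by nonlinear least squares on top of a smooth background, yielding the mode frequency and linewidth.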
## III Results and discussion
### III.1 Phonon modes in the high frequency region
Phonons in organic conductor salts are generally analyzed on the basis of the
molecular vibrations in the neutral molecule. For $\kappa$-(ET)2Cu(NCS)2,
observed vibrational peaks are assigned to the ET molecule.
Neglecting the staggered/eclipsed distortion, the ET molecules can be assumed
to be flat except for the hydrogen atoms and to belong to the D2h point group. It
has 72 possible phonon modes, of which 36 are Raman
activekozlovAssignmentFundamentalVibrations1987
$\displaystyle\Gamma(D_{2}h)=12a_{g}+6b_{1g}+7b_{2g}+11b_{3g}+$ (1)
$\displaystyle 7a_{u}+11b_{1u}+11b_{2u}+7b_{3u}$
However, ab initio quantum chemical calculations show that ET is
non-planar.demiralpPredictionNewDonors1995 ;
demiralpElectrontransferBoatvibrationMechanism1995 The stable boat structure
of ET has C2 symmetry, leading to the mode distribution
$\displaystyle\Gamma(C_{2})=37a+35b$ (2)
The reduction of symmetry is as follows:demiralpVibrationalAnalysisIsotope1998
$\displaystyle 12a_{g}+7a_{u}+11b_{3g}+7b_{3u}\rightarrow 37a$ (3a)
$\displaystyle 11b_{1u}+6b_{1g}+11b_{2u}+7b_{2g}\rightarrow 35b$ (3b)
As a consequence, all the vibrational modes are Raman as well as infrared
active due to this low-symmetry structure (the unit cell does not possess
inversion symmetry).
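The bookkeeping behind Eqs. (1)-(3) can be checked with a few lines of arithmetic. This is a sketch; the irreducible-representation counts are taken directly from Eq. (1), and the D2h-to-C2 correlation from Eqs. (3a)-(3b):

```python
# Mode counts of Eq. (1) for the (idealized) D2h ET molecule.
d2h = {"ag": 12, "b1g": 6, "b2g": 7, "b3g": 11,
       "au": 7, "b1u": 11, "b2u": 11, "b3u": 7}

total = sum(d2h.values())  # 72 vibrational modes in total
# In D2h only the gerade (g) modes are Raman active.
raman_active = sum(n for irrep, n in d2h.items() if irrep.endswith("g"))

# Correlation to C2 per Eqs. (3a)-(3b): {ag, au, b3g, b3u} -> a, the rest -> b.
a_modes = d2h["ag"] + d2h["au"] + d2h["b3g"] + d2h["b3u"]
b_modes = total - a_modes

print(total, raman_active, a_modes, b_modes)  # 72 36 37 35
```

The totals reproduce the 72 modes, the 36 Raman-active modes, and the 37a + 35b distribution of Eq. (2).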
Several works have been devoted to assigning the modes associated with the
neutral ET molecule.Eldridge1995 ; Lin1999 Recently, ab-initio calculations
of the vibrational properties of some compounds from this series reveal that a
simple distinction between molecular vibrations coupling the dimers and
lattice vibrations cannot be sustained.Dressel2016a ; Dressel2016b This
implies that the simple assignment of pure phonon modes is in principle
incorrect. However, to date, there is no calculation of phonon-mode
frequencies taking all these interactions into account.
An assignment of high energy modes can be found in Refs. Eldridge1997 and
Eldridge2002. In addition, Sugai et al.
sugaiRamanactiveMolecularVibrations1993 assigned $\kappa$-(ET)2Cu(NCS)2
experimental vibrational modes using calculations made by Kozlov et al.
kozlovAssignmentFundamentalVibrations1987 on neutral molecule ET0 vibrations.
On the other hand, for IR measurements, Kornelsen et al.
kornelsenInfraredOpticalProperties1991 assigned $\kappa$-(ET)2Cu(NCS)2
experimental vibrational modes using calculations made by Kozlov et al.
kozlovElectronMolecularVibration1989 on cation ET+ vibrations.
Nevertheless, in $\kappa$-(ET)2Cu(NCS)2 the charge redistribution on the ET
atoms causes a shift of vibrational
energy.sugaiRamanactiveMolecularVibrations1993 The total charge per dimer is
thus neither ET0 nor ET+ but ET0.5+. Hence, we propose to assign ET vibration
modes with respect to an averaged calculated value of ET0 and ET+.
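The ET0.5+ reference frequencies are simply the arithmetic mean of the calculated ET0 and ET+ frequencies for each mode. As a sketch (the input frequencies below are made-up placeholders, not values from the cited calculations):

```python
# Averaged frequency used as an estimate for the ET^0.5+ charge state.
def nu_et_half(nu_et0, nu_etp):
    return 0.5 * (nu_et0 + nu_etp)

# Illustrative placeholder inputs (cm^-1), chosen only to show the averaging:
print(nu_et_half(1520.0, 1496.0))  # 1508.0
```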
Figure 2: Unpolarized Raman spectra of NPs and bulk of $\kappa$-(ET)2Cu(NCS)2
at 15 K in the range of 200 – 2200 cm-1. Inset : Zoom in the range of 1340 –
1560 cm-1.
Figure 2 presents unpolarized Raman spectra at 15 K from 200 cm-1 to 2200 cm-1
of $\kappa$-(ET)2Cu(NCS)2 for bulk and NPs. Despite intensity variations and
broadening of phonon peaks, there are no main differences in peaks positions
between the NPs and the bulk. Since NPs are randomly oriented, there are no
Raman selection rules. Phonons in NPs are an averaged response of all possible
bulk orientations.
Assignment | Experimental
---|---
$\nu_{i}$ | Symm | Calc. | 295K | 15K
| | ET0.5+ | Bulk | NPs | Bulk | NPs
2 | a (ag) | 1508 | 1501 | 1501 | 1510 | 1510
27 | b (b1u) | 1478 | 1484 | 1484 | 1496 | 1491
3 | a (ag) | 1460 | 1467 | 1466 | 1476 | 1475
| | | | | 1472 |
| | | | 1446 | 1453 | 1454
4 | a (ag) | 1422 | | 1413 | | 1421
| | 1409 | | 1402 | | 1411
? | | | | 1053 | | 1063
58 | a (b3g) | 1035 | 1031 | 1030 | 1041 | 1041
47 | b (b2u) | 1011 | 1007 | 1010 | 1015 | 1014
6 | a (ag) | 985 | 972 | 979 | 982 | 982
7 | a (ag) | 899 | 893 | 893 | 894 | 894
| anion | | 806 | | 811 | 810
61 | a (b3g) | 782 | 772 | 773 | 778 | 783
8 | a (ag) | 649 | 642 | | 650 | 649
9 | a (ag) | 498 | 500 | 502 | 504 | 502
34 | b (b1u) | 501 | 483 | 483 | 493 | 493
10 | a (ag) | 462 | 466 | 462 | 467 | 468
| anion | | 445 | 446 | 453 | 453
63 | a (b3g) | 351 | 354 | 352 | 354 | 356
11 | a (ag) | 314 | 311 | 310 | 317 | 318
53 | b (b2u) | 263 | 262 | 262 | 267 | 267
| anion | | 244 | 244 | 245 | 245
Table 1: Experimental frequencies (in cm-1) of the intense phonons from Fig.
2. Calculated frequencies for ET0.5+, corresponding symmetry and mode labels
are also reported.
Table 1 reports several phonon frequencies extracted from Fig. 2, measured in
bulk and NPs at ambient temperature and at 15 K, as well as their
corresponding symmetries and calculated frequencies for ET0.5+. The latter
average was derived using calculations on ET0 and ET+.
demiralpVibrationalAnalysisIsotope1998 ; kozlovElectronMolecularVibration1989
Experimental frequencies at room temperature agree well with the modes of
ET0.5+. Note that the 806 and 445 cm-1 modes are assigned to the anion
modes.sugaiRamanactiveMolecularVibrations1993 At 15 K, additional modes are
measured as the vibrations at 1472 and 1453 cm-1 in the bulk and the vibration
at 1411 cm-1 in the NPs. Additional modes could have two origins: the
splitting of predicted modes or the lifting of their degeneracy. Here, these modes might come from
the $\nu_{3}$ and $\nu_{4}$ modes at 1460 cm-1 and 1422 cm-1. Indeed, since
the unit cell of $\kappa$-(ET)2Cu(NCS)2 contains four ET molecules, every
internal molecular vibration $\nu_{i}$ splits theoretically into four
components. In addition, the mode degeneracy can be lifted by strong dimer
interaction leading to additional modes.Maksimuk2001 ; Swietlik1992 As
expected, modes at low temperature harden compared to room temperature.
Let us discuss the origin of several modes. The peaks at 2076 and 2110 cm-1 in
Fig. 2 are related to distinct CN environments in the unit
cell.kornelsenInfraredOpticalProperties1991 The strong intensity of the mode
at 1476 cm-1 suggests that this vibration is coupled to an electronic
transition, leading to a resonant Raman scattering process. We have
experimentally found that the intensity of the mode decreases at other
wavelengths. Indeed, the laser wavelength (532 nm equivalent to 18797 cm-1) is
close to an electronic transition at 20000
cm-1.zamboniRESONANTRAMANSCATTERING1989 This intense mode is assigned to the
C=C stretching vibration of the central atoms of the ET molecule. The
mode at 1510 cm-1 is assigned to the a(ag) vibration of the C=C ring. The
peaks at 1421 and 1411 cm-1 are assigned to CH2 bending
modes.zamboniRESONANTRAMANSCATTERING1989 It is interesting to note that the
intensities of those peaks are strongly enhanced for the NPs. This could be a
signature of a surface effect, as ET molecules could be present at the surface
of the NPs in a different arrangement than in the bulk, with the ethylene
groups less "bonded" to the crystal. The mode at 1041 cm-1 is assigned to a C-C-H
bending totally symmetric vibration and the mode at 778 cm-1 is assigned to
the C-S stretching of the ET
molecule.kozlovAssignmentFundamentalVibrations1987 To illustrate some
vibrations, C=C stretching modes of the ET $\nu_{2}$, $\nu_{3}$ and $\nu_{27}$
are represented in Fig. 3.
Figure 3: Atomic displacement vectors of the C=C stretching modes $\nu_{2}$,
$\nu_{3}$ and $\nu_{27}$ (see Tab. 1) in the ET molecule.Maksimuk2001
### III.2 Phonon modes in the low frequency region
$\kappa$-(ET)2Cu(NCS)2 is monoclinic and crystallizes in the P21 space group
with two formula units per cell (Z = 2). One unit cell is made of four ET
molecules and two Cu(NCS)2 zigzag polymeric chains. The ET molecules are
paired and form two dimers in the cell. The unit cell contains 118 atoms,
hence, the phonon spectrum consists of 354 phonon branches of which 288
pertain to internal modes of the ET molecules and 30 to internal modes of the
polymeric anion layers. The remaining 36 branches consist of intermolecular
motions of the ET molecules and the anion modes (including the three acoustic
phonon branches). This counting follows Pedron et
al.pedronPedronPhysicaCETBr1997Pdf1997
Due to the heavy mass of ET and [Cu(NCS)2]-, we expect intermolecular modes
mostly in the low frequency spectral region.
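The branch counting above follows from 3N degrees of freedom for the N atoms in the unit cell and 3n - 6 internal modes per nonlinear molecule of n atoms. A quick check, assuming the compositions ET = C10H8S8 (26 atoms) and [Cu(NCS)2]- (7 atoms):

```python
# Branch counting for the 118-atom unit cell (4 ET molecules + 2 anion units).
def internal_modes(n_atoms):
    # A nonlinear molecule with n atoms has 3n - 6 internal vibrations.
    return 3 * n_atoms - 6

n_et, n_anion = 26, 7            # atoms per ET molecule and per [Cu(NCS)2]- unit
n_cell = 4 * n_et + 2 * n_anion  # 118 atoms in the unit cell

total = 3 * n_cell                               # 354 phonon branches
et_internal = 4 * internal_modes(n_et)           # 288 internal ET modes
anion_internal = 2 * internal_modes(n_anion)     # 30 internal anion modes
external = total - et_internal - anion_internal  # 36 intermolecular + acoustic

print(n_cell, total, et_internal, anion_internal, external)  # 118 354 288 30 36
```

The 36 remaining branches are the intermolecular ET and anion motions, including the three acoustic branches.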
Figure 4: Polarized Raman spectra in the low frequency range measured at 15 K for the bulk using different polarization configurations as a function of the b and c axes of the crystal.
Table 2: Frequencies (in cm-1) of Raman $\kappa$-(ET)2Cu(NCS)2 modes measured on the bulk (cf. Fig. 4) using several polarization configurations.
In our work | Pedron et al.pedronElectronphononCouplingBEDTTTF1999
---|---
(c,c) | (c,b) | //45 | $\perp$45 | (b,b) | (b,c) | (c,c) | (c,b)
| | | | | 26.5 | | 26.5
31.5 | 32.7 | 31.8 | 33.0 | 30.7 | 31.4 | 32.0 | 31.2
37.5 | 36.0 | 37.0 | 36.5 | 37.7 | 36.1 | 36.9 |
52.6 | | 52.6 | | | | 51.9 | 51.0
61.3 | | 61.5 | | | | 60.4 | 60.9
| 63.3 | 64.7 | 63.4 | | | 64.3 | 64.8
| 66.2 | | | | 66.1 | 66.2 |
71.5 | 71.7 | 71.8 | 72.4 | 71.3 | 72.1 | | 72.2
| | 77.1 | 77.2 | 77.4 | | 74.6 | 75.0
| | | | | 81.7 | 81.8 |
| 83.2 | | | 84.9 | | | 83.1
90.0 | | | | | | 90.5 | 90.5
| | 97.1 | 96.3 | 97.1 | | |
102.7 | 103.9 | 103.8 | | 106.6 | 104.6 | 102.5 | 102.9
112.7 | | | | | | | 113.9
134.4 | 134.2 | 134.7 | 134.1 | 133.9 | 134.1 | | 132.7
141.0 | 140.0 | 141.1 | 141.5 | 140.5 | 141.4 | | 141.1
163.8 | | 164.8 | 163.3 | 163.7 | | 162.9 |
| 165.3 | | | 168.3 | 166.8 | | 164.7
| 188.4 | 188.5 | 187.8 | 188.3 | 188.1 | |
Figure 4 shows Raman spectra measured on the bulk for six different
polarizations at 15 K in the low frequency region. The polarization
configuration (i, j) indicates that the incident and scattered light are
polarized along the i and j axis of the crystal, respectively. Special
notation //45 ($\perp$45) indicates that the incident light is polarized at
-45° with respect to the c axis and the scattered light at -45° (+45°) with
respect to the c axis. In total, 20 phonon modes are observed in this range.
The frequencies of the observed phonons are listed in Tab. 2 and compared to
the measurements of Ref. pedronElectronphononCouplingBEDTTTF1999. One notices
that both measurements are in good agreement.
Figure 5: Unpolarized Raman spectra of bulk and NPs at 10 K in the low
frequency range. Three phonons around 32 cm-1, 103 cm-1 and 163 cm-1 are
labelled 1, 2 and 3, respectively.
Figure 5 shows unpolarized Raman spectra measured on the bulk and NPs at 15 K
in the range of 20 – 200 cm-1. All the phonons measured in the bulk are found
in the spectrum of NPs but with a larger width. Surface effect or grain
boundaries in the NPs can explain a shorter phonon lifetime and therefore a
larger width. In the low frequency region, fewer Raman measurements have been
reported. J. E. Eldridge et al. have detected phonons at 94, 118, 128 and 185 cm-1
and they have associated them with anion lattice modes.Eldridge1997 ;
Eldridge2002 We have measured phonon modes at 97, 113, 134, and 188 cm-1, and
additional modes at 141 and 164 cm-1. In the common energy range, our
measurements are in good agreement with this previous work. In the next
sections, we are following the temperature dependencies of three phonons
around 32 cm-1, 103 cm-1 and 163 cm-1 labeled 1, 2 and 3 in Fig. 5,
respectively.
#### III.2.1 Bulk
Figure 6: (a) Unpolarized Raman spectra measured on the bulk for several
temperatures. (b-d) Frequencies of the 1, 2 and 3 phonon modes as a function
of temperature, respectively. Experimental errors on phonons frequencies are
around 0.2 cm-1. The vertical lines correspond to the transition temperatures
$T_{coh}$ $\simeq$ 50 K and $T_{max}$ $\simeq$ 100 K.
Figure 6(a) shows unpolarized Raman spectra measured on the bulk for several
selected temperatures. Fig. 6(b), (c) and (d) present the temperature
evolution of the labeled phonons 1, 2 and 3 (cf. Fig. 5). The frequencies of all
three modes display essentially the same behavior, an increase down to
$T_{max}$ $\simeq$ 100 K followed by a plateau (or a very small increase in
the case of peak 2) between $T_{max}$ and $T_{coh}$ $\simeq$ 50 K before an
increase at lower temperature. Note that the increase of phonons 2 and 3 is
very abrupt and strongly anharmonic close to 100 K. Such a plateau is an
unconventional behavior compared to the monotonic increase of the phonon
frequency expected from the thermal contraction of the lattice upon decreasing
temperature. X-ray measurements do not reveal any structural transition in
this temperature range, discounting the possibility that changes in the crystal
structure could be responsible for the measured anomalies.Wolter2007
The three phonon modes heavily involve the movement of the anions with respect
to the ET molecules but also a complete bending of the molecules within the
dimers.Dressel2016a ; Dressel2016b The terminal ethylene groups are probably
taking part in these modes. The glassy transition at $T_{g}\simeq 80$ K
associated with the freezing of the terminal ethylene groups of the ET
molecule,Muller2002 occurs in the middle of the temperature plateau.
Therefore, the plateau observed in the phonon frequencies does not seem to be
related to $T_{g}$. Moreover, if the glassy transition froze out the relevant
phonon modes, pronounced changes in intensity and shifts in the energy
position of the related phonon modes would be expected. This is supported by
the work of Kuwata et al. suggesting that the decrease in the ethylene motion
at $T_{g}$ does not contribute to the mechanism associated to $T_{max}$ even
if the mechanism causing the $T_{max}$ anomalies is still under debate.Kuwata
The resistivity maximum was assigned to Wigner Mott scaling, for
instance.Radonjic
The behavior of the phonon modes between 50 K and 100 K is rather similar to
that observed in resistivity measurements. The resistivity increases upon
lowering the temperature from room temperature and reaches a broad maximum,
similar to a plateau, around 100 K. The difference between our measurements and
resistivity measurements appears at lower temperature. After the broad maximum,
the resistivity falls rapidly as the temperature is decreased further.Analytis2006 This change
in resistivity is understood in terms of the crossover from a bad metal at
high temperatures to a Fermi liquid and the formation of
quasiparticles.Milbrandt2013 ; Georges2013 Here the three phonon modes at low
energy seem to be sensitive to the bad metal phase.
#### III.2.2 Nanoparticles
Figure 7: (a) Unpolarized Raman spectra of the NPs for several temperatures.
(b-d) Frequencies of the 1, 2 and 3 phonon modes as a function of temperature,
respectively. Experimental errors on phonons frequencies are around 0.2 cm-1.
Figure 7 shows unpolarized Raman spectra of the NPs at selected temperatures
between 10 and 250 K and the evolution of phonons 1, 2 and 3 (cf. Fig. 5).
Note that no quadrupolar vibrational mode associated with the breathing mode of
the NPs is observed.Duval1 The frequencies of all three modes display
essentially the same behavior, although peak 2 presents attenuated
anomalies compared to the other two peaks. In Fig. 7, we can see a deviation
from the standard phonon behavior below 100 K and a double hump for the three
phonons, with a first minimum close to $T_{coh}$ = 50 K and a small softening
around $T_{FL}$ = 25 K. Such behavior is different from the one observed in the
bulk. Note that the amplitudes of the minima are much larger than the
experimental error on the frequency of the phonon modes (0.2 cm-1).
The softening of the phonon modes at 25 K might be related to the crossover to
the pure Fermi liquid. This softening is also observed in the bulk (Fig. 6)
but it is very small.
Several approaches can be proposed to explain the softening of the three phonon
modes at $T_{coh}$ = 50 K, the temperature associated with the crossover from a
bad metal to a Fermi liquid.
From a theoretical point of view, the Hubbard-Holstein model using dynamical
mean-field theory has been able to predict the crossover from a high
temperature bad metal, characterized by the absence of quasiparticles, to a
Fermi liquid, a transition that occurs in $\kappa$-(ET)2Cu(NCS)2 at
$T_{coh}$.Merino2000HH Anomalies in the real part of the phonon self-energy
have been calculated from this model. Such anomalies induce the softening of
the phonon frequencies at $T_{\textrm{coh}}$ = 50 K, the temperature
associated with quasiparticle formation, followed by the hardening of the
frequencies at lower temperature. The trend of the phonon frequencies in Fig.
7 is similar to predicted modes with frequency around $U/2$ in Ref.
Merino2000HH, . $U$ is the effective repulsion between two electrons on the
same ET2 dimer and $U/2$ is the location of the Hubbard bands in the spectral
function.Merino2000HH The value of $U/2$ has been calculated Scriven2009a ;
Scriven2009b and reasonably estimated Dressel2007 around 1000 cm-1. Our
observations on the low frequency phonons, located well below this value in
frequency, cannot be explained by this model. However, Mott physics cannot be
totally ruled out. A strong enhancement of the low-energy spectral weight has
been measured in the conductivity of related compounds such as $\kappa$-(BEDT-
TTF)2Cu2(CN)3 close to the Mott transition.Pustogow Note that
$\kappa$-(ET)2Cu(NCS)2 is located on the metallic side of the phase diagram of
this family of compounds but is not so far from the Mott transition.
Similar to related organic compoundsDressel2007 ; Merino2008 ; Takenaka ,
pronounced spectral weight shifts are expected at low frequencies upon the
crossover between bad metal and Fermi liquid.
From semiconducting to metallic phase, the electrodynamic response and the
conductivity below 1000 cm-1 are strongly modified. As a consequence, the
electronic background which is a function of the polarizability and the
coupling to the phonons should change.
The opening of a pseudogap below $T_{\textrm{coh}}\simeq$ 50 K or a rapid
decay in the spin fluctuations associated with the formation of quasiparticles
has been proposed to interpret nuclear magnetic resonance experiments where a
sharp maximum in the spin fluctuations is observed at $T_{\textrm{coh}}$
$\simeq$ 50 K.Kawamoto95 ; Kawamoto95B ; Powell2011 Both possibilities could
be at the origin of the softening of the three phonon modes at $T_{coh}$ = 50
K.
To distinguish them, a first argument is given by the expected signature of a
pseudogap in Raman spectra. A pseudogap opening is characterized by a
modification of spectral weight in the electronic background from above to
below the pseudogap temperature. We did not observe such a behaviour in the
electronic background of our spectra. So far, no microscopic model has been
developed to describe the phonon behavior in the presence of a pseudogap,
however one knows that the opening of a gap can be associated to phonon
softening. In high temperature superconductors, phonons with an energy below
the energy of the pair-breaking peak soften, whereas they harden if their
energies are above the gap.Zeyher1990 ; Bakr2009 In our experiment, the
phonon 3 in Fig. 7 softens significantly below $T_{\textrm{coh}}$. To be
consistent with a possible gap opening, the energy of this gap must be equal
to or higher than the energy of the highest phonon that is softening. This
situation corresponds to an opening of a pseudogap above 163 cm-1 = 237 K.
However, this energy is higher than $T_{\textrm{coh}}$. This means that the
softening of the three phonon modes at $T_{\textrm{coh}}$ has little chance
of being related to the opening of a pseudogap. This leaves the rapid decay of
the spin fluctuations associated with the formation of quasiparticles as a
possible explanation of our measurements below 50 K in NPs. Under such a
hypothesis, the spin-phonon coupling might be the link between the observed
phonon anomalies and the spin fluctuations. Although we cannot accurately
determine the origin of these phonon anomalies, the measured phonon softenings
at $T_{\textrm{coh}}$ show that these phonon modes are connected to the
transition from the bad metal to the Fermi liquid phase.
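For reference, the conversion of the 163 cm-1 phonon energy to a temperature scale, $T = hc\tilde{\nu}/k_B$, can be reproduced as follows. With CODATA constants the factor $hc/k_B \simeq 1.439$ cm K gives roughly 235 K, close to the value quoted above:

```python
# Convert a phonon wavenumber (cm^-1) to an equivalent temperature, T = h c nu / k_B.
H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e10    # speed of light (cm/s), so h*c*nu is in J for nu in cm^-1
KB = 1.380649e-23    # Boltzmann constant (J/K)

def wavenumber_to_kelvin(nu_cm):
    return H * C * nu_cm / KB

print(round(wavenumber_to_kelvin(163.0)))  # 235
```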
One question remains: why are these anomalies observed in the NPs and not in
the bulk? NPs differ qualitatively from the bulk due to their larger surface-
to-volume ratios. Given that our NPs are 28 nm in diameter, surface
effects on the bulk properties can no longer be overlooked. Reduced symmetry
at the surface increases surface anisotropy due to factors including broken
exchange bonds and interatomic distance variation. This could be at the origin
of the greater sensitivity of the phonons of NPs to spin fluctuations.
## IV Conclusion
In summary, we have investigated the organic compound $\kappa$-(ET)2Cu(NCS)2
in the form of bulk and NPs by Raman light scattering and observed anomalies
in the low frequency phonon energies as a function of temperature. In the
bulk, the changes in peak energy between 100 and 50 K are attributed to the
signature of the bad metal. For NPs, the anomalies around and below
$T_{\textrm{coh}}$ $\simeq$ 50 K are related to the transition from the bad
metal to the Fermi liquid and might be associated with spin fluctuations and
their rapid decay.
## References
* (1) A. Jalabert, A. Amara, and F. Clermidy, ’Molecular Electronics Materials, Devices and Applications’ (Springer, Berlin, 2008).
* (2) M. Dressel, ’Advances in Organic Conductors and Superconductor’ special issue published in Crystals (2018).
* (3) T. Ishiguro, K. Yamaji, G. Saito, Organic Superconductors, Springer, Berlin, 1998.
* (4) M. Lang and J. Muller, ’Organic Superconductors’, in K. H.Bennemann, J. B. Ketterson (Eds.), The Physics of Superconductors, Vol. II, (Springer, Berlin, 2004 and 2008).
* (5) B. J. Powell and R. H. McKenzie, J. Phys.: Condens. Matter 18, R827 (2006).
* (6) A. Lebed, The Physics of Organic Superconductors and Conductors, Springer, Berlin, 2008.
* (7) N. Drichko, R. Hackl, and J. A. Schlueter, Phys. Rev. B 92, 161112(R) (2015).
* (8) B. Keimer, N. Belk, R. J. Birgeneau, A. Cassanho, C. Y. Chen, M. Grevem, M. Kastner A. Aharony, Y. Endoh, R. W. Erwin, and G. Shirane, Phys. Rev. B 46, 14034 (1992).
* (9) T. Sugano, K. Terui, S. Mino, K. Nozawa, H. Urayama, H. Yamochi, G. Saito, and M. Kinoshita, Chem. Lett. 17, 1171 (1988).
* (10) H. Urayama et al., Chem. Lett. 17, 55 (1988).
* (11) D. Faltermeier, J. Barz, M. Dumm, M. Dressel, N. Drichko, B. Petrov, V. Semkin, R. Vlasova, C. Meziere, and P. Batail, Phys. Rev. B 76, 165113 (2007).
* (12) J. Merino, M. Dumm, N. Drichko, M. Dressel, and R. H. McKenzie, Phys. Rev. Lett. 100, 086404 (2008).
* (13) C. Strack et al., Phys. Rev. B 72, 054511 (2005).
* (14) P. Limelette, P. Wzietek, S. Florens, A. Georges, T. A. Costi, C. Pasquier, D. Jérome, C. Mézière, and P. Batail, Phys. Rev. Lett. 91, 016401 (2003).
* (15) S. Milbradt, A. A. Bardin, C. J. S. Truncik, W. A. Huttema, A. C. Jacko, P. L. Burn, S.-C. Lo, B. J. Powell, and D. M. Broun, Phys. Rev. B 88, 064501 (2013).
* (16) X. Deng, J. Mravlje, R. Žitko, M. Ferrero, G. Kotliar, and A. Georges, Phys. Rev. Lett. 110, 086401 (2013).
* (17) M. Dressel and N. Drichko, Chem. Rev. 104, 5689 (2004).
* (18) K. Frikach, M. Poirier, M. Castonguay and K. D. Truong Phys. Rev. B 61 R6491 (2000).
* (19) J. Merino and R. H. McKenzie, Phys. Rev. B 61, 7996 (2000).
* (20) J. G. Analytis, A. Ardavan, S. J. Blundell, R. L. Owen, E. F. Garman, C. Jeynes, and B. J. Powell, Phys. Rev. Lett. 96, 177002 (2006).
* (21) X. Su, F. Zuo, J. A. Schlueter, M. E. Kelly, and J. M. Williams, Phys. Rev. B 57, R14056 (1998).
* (22) G. Kotliar and D. Vollhardt, Phys. Today 57, 53 (2004).
* (23) Y. Kuwata, M. Itaya, and A. Kawamoto, Phys. Rev. B 83, 144505 (2011).
* (24) J. Singleton, Rep. Prog. Phys. 63, 1111 (2000).
* (25) J. Caulfield et al., Synth. Metals 70, 185 (1995).
* (26) T. Moriya and K. Ueda, Adv. Phys. 49, 555 (2000).
* (27) A. J. Millis, H. Monien, and D. Pines, Phys. Rev. B 42, 167 (1990).
* (28) E. Yusuf, B. J. Powell, and R. H. McKenzie, Phys. Rev. B 75, 214515 (2007).
* (29) A. Kawamoto, K. Miyagawa, Y. Nakazawa, and K. Kanoda, Phys. Rev. Lett. 74, 3455 (1995).
* (30) A. Kawamoto, K. Miyagawa, Y. Nakazawa, and K. Kanoda, Phys. Rev. B 52, 15522 (1995).
* (31) B. J. Powell and R. H. McKenzie, Rep. Prog. Phys. 74, 056501 (2011).
* (32) J. Müller, M. Lang, F. Steglich, J. A. Schlueter, A. M. Kini, and T. Sasaki, Phys. Rev. B 65, 144521 (2002).
* (33) E. Scriven and B. J. Powell, Phys. Rev. B 80, 205107 (2009).
* (34) B. J. Powell and R. H. McKenzie, Phys. Rev. B 69, 024519 (2004).
* (35) H. Taniguchi, K. Kanoda, and A. Kawamoto, Phys. Rev. B 67, 014510 (2003).
* (36) M. Dressel, J.E. Eldridge, J. M. Williams, and H.H. Wang, Physica C 203, 247 (1992).
* (37) D. Pedron, G. Visemtini, R. Bozio, J. M. Williams, and J. A. Schlueter, Physica C. 276, 1 (1997).
* (38) Y. Lin, J. E. Eldridge, J. A. Schlueter, H. H. Wang, and A. M. Kini, Phys. Rev. B 64, 024506 (2001).
* (39) K. D. Truong, S. Jandl, and M. Poirier, Synth. Met. 157, 252 (2007).
* (40) R. Zeyher and G. Zwicknagel, Z. Phys. B 78, 175 (1990).
* (41) M. Revelli Beaumont et al., Synth. Met. 261, 116310 (2020).
* (42) M. Kozlov, K. Pokhodnia, and A. Yurchenko, Spectrochim. Act. A 43, 323 (1987).
1. Indian Institute of Technology Kharagpur, India (<EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>)
2. Flipkart Internet Private Limited, India (<EMAIL_ADDRESS>)
3. Amazon India Private Limited, India ({subhadeepmaji<EMAIL_ADDRESS>)
# Reproducibility, Replicability and Beyond: Assessing Production Readiness of
Aspect Based Sentiment Analysis in the Wild
Rajdeep Mukherjee*¹, Shreyas Shetty*², Subrata Chattopadhyay¹, Subhadeep Maji†³, Samik Datta†³, Pawan Goyal¹
(* Equal contribution. † Work done while at Flipkart.)
###### Abstract
With the exponential growth of online marketplaces and user-generated content
therein, aspect-based sentiment analysis has become more important than ever.
In this work, we critically review a representative sample of the models
published during the past six years through the lens of a practitioner, with
an eye towards deployment in production. First, our rigorous empirical
evaluation reveals poor reproducibility: an average $4-5\%$ drop in test
accuracy across the sample. Second, to further bolster our confidence in
empirical evaluation, we report experiments on two challenging data slices,
and observe a consistent $12-55\%$ drop in accuracy. Third, we study the
possibility of transfer across domains and observe that as little as $10-25\%$
of the domain-specific training dataset, when used in conjunction with
datasets from other domains within the same locale, largely closes the gap
between complete cross-domain and complete in-domain predictive performance.
Lastly, we open-source two large-scale annotated review corpora from a large
e-commerce portal in India in order to aid the study of replicability and
transfer, with the hope that it will fuel further growth of the field.
###### Keywords:
Aspect based Sentiment Analysis Aspect Polarity Detection Reproducibility
Replicability Transferability.
## 1 Introduction
In recent times, online marketplaces of goods and services have witnessed an
exponential growth in terms of consumers and producers, and have proliferated
in a wide spectrum of market segments, such as e-commerce, food delivery,
healthcare, ride sharing, travel and hospitality, to name a few. The Indian
e-commerce market segment alone is projected to grow to $300-350$M consumers
and $\$100-120$B revenue by $2025$ (How India Shops Online, Flipkart and
Bain & Company). In the face of ever-expanding choices, purchase decision-
making is guided by the reviews and ratings: Watson et al. [29] estimates that
the average product rating is the most important factor in making purchase
decisions for $60\%$ of consumers. Similarly, the academic research on Aspect
Based Sentiment Analysis (ABSA) has come a long way since its humble beginning
in SemEval-$2014$ Task $4$. Over the past $6$ years,
the accuracy on a benchmark dataset for aspect term polarity has grown by at
least $11.4\%$. We ask, is this progress enough to support the burgeoning
online marketplaces?
We argue on the contrary. On one hand, industrial-strength systems need to
demonstrate several traits for smooth operation and delightful consumer
experience. Breck et al. [1] articulates several essential traits and presents
a rubric of evaluation. Notable traits include: (a) “All hyperparameters have
been tuned”; (b) “A simpler model is not better”; (c) “Training is
reproducible”; and (d) “Model quality is sufficient on important data slices”.
On the other hand, recent academic research in several fields has faced
criticisms from within the community on similar grounds: Dhillon et al. [6]
points out the inadequacy of benchmark dataset and protocol for few-shot image
classification; Dacrema et al. [4] criticises the recent trend in
recommendation systems research on the ground of lack of reproducibility and
violations of (a)–(c) above; Li et al. [14] criticises the recent trend in
information retrieval research on similar grounds. A careful examination of
the recent research we conduct in this work reveals that the field of ABSA is
not free from these follies.
To this end, it is instructive to turn our attention to classic software
engineering with the hope of borrowing from its proven safe development
practices. Notably, Kang et al. [10] advocates the use of model assertions –
an abstraction to monitor and improve model performance during the development
phase. Along similar lines, Ribeiro et al. [20] presents a methodology of
large-scale comprehensive testing for NLP, and notes its effectiveness in
identifying bugs in several (commercial) NLP libraries, that would not have
been discovered had we been relying solely on test set accuracy. In this work,
in addition to the current practice of reporting test set accuracies, we
report performance on two challenging data slices, the hard set [31] and the
contrast set [7], to further bolster the comprehensiveness of empirical
evaluation.
For widespread adoption, data efficiency is an important consideration in
real-world deployment scenarios. As an example, a large e-commerce marketplace
in India operates in tens of thousands of categories, and a typical annotation
cost is $3$¢ per review. In this work, we introduce and open-source two
additional large-scale datasets curated from product reviews in lifestyle and
appliance categories to aid replicability of research and study of transfer
across domains and locales (text with similar social/linguistic
characteristics). In particular, we note that just a small fraction of the in-
domain training dataset, mixed with existing in-locale cross-domain training
datasets, guarantees comparable test set accuracies.
In summary, we make the following notable contributions:
* •
Perform a thorough reproducibility study of models sampled from a public
leaderboard (Papers With Code: ABSA on SemEval 2014 Task 4 Subtask 2) that
reveals a consistent $4-5\%$ drop in reported test set accuracies, which is
often larger than the gap in performance between the winner and the runner-up.
* •
Consistent with the practices developed in software engineering, we bolster
the rigour of empirical evaluation by introducing two challenging data slices
that demonstrate an average $12-55\%$ drop in test set accuracies.
* •
We study the models from the perspective of data efficiency and note that as
little as $10-25\%$ of the domain-specific training dataset, when used in
conjunction with existing cross-domain datasets from within the same locale,
largely closes the gap in terms of test set accuracies between complete cross-
domain training and using $100\%$ of the domain-specific training instances.
This observation has immense implications towards reduction of annotation cost
and widespread adoption of models.
* •
We curate two additional datasets from product reviews in lifestyle and
appliances categories sampled from a large e-commerce marketplace in India,
and make them publicly accessible to enable the study of replicability.
## 2 Desiderata and Evaluation Rubric
Reproducibility and replicability have been considered the gold standard in
academic research and have witnessed a recent resurgence in emphasis across
scientific disciplines: see, for e.g., McArthur et al. [18] in the context of
biological sciences and Stevens et al. [23] in the context of psychology.
follow the nomenclature established in [23] and define reproducibility as the
ability to obtain same experimental results when a different analyst uses an
identical experimental setup. On the other hand, replicability, is achieved
when the same experimental setup is used on a different dataset to similar
effect. While necessary, these two traits are far from sufficient for
widespread deployment in production.
Breck et al. [1] lists a total of $28$ traits spanning the entire development
and deployment life cycle. Since our goal is only to assess the production
readiness of a class of models, we decide to forego all $14$ data-, feature-
and monitoring-related traits. We borrow one trait (“Training is
reproducible”) from the infrastructure-related rubric, and two (“All
hyperparameters have been tuned” and “Model quality is sufficient on important
data slices”) from the modeling-related rubric.
Further, we note that the ability to transfer across domains/locales is a
desirable trait, given the variety of market segments and the geographic span
of online marketplaces. In other words, this expresses data efficiency and has
implications towards lowering the annotation cost and associated deployment
hurdles. Given the desiderata, we articulate our production readiness rubric
as follows:
* •
Reproducibility. A sound experimental protocol that minimises variability
across runs and avoids common pitfalls (e.g., hyperparameter-tuning on the
test dataset itself) should reproduce the reported test set accuracy within a
reasonable tolerance, not exceeding the reported performance gap between the
winner and the runner-up in a leaderboard. §6 articulates the proposed
experimental protocol and §7 summarises the ensuing observations.
* •
Replicability. The aforementioned experimental protocol, when applied to a
different dataset, should not dramatically alter the conclusions drawn from
the original experiment; specifically, it should not alter the relative
positions within the leaderboard. §4 details two new datasets we contribute in
order to aid the study of replicability, whereas §7 contains the ensuing
observations.
* •
Performance. Besides overall test-set accuracy, an algorithm should excel at
challenging data slices such as hard- [31] and contrast sets [7]. §7
summarises our findings when this checklist is adopted as a standard reporting
practice.
* •
Transferability. An algorithm must transfer gracefully across domains within
the same locale, i.e. textual data with similar social/linguistic
characteristics. We measure it by varying the percentage of in-domain training
instances from $0\%$ to $100\%$ and locating the inflection point in test set
accuracies. See §7 for additional details.
Note that apart from the “The model is debuggable” and “A simpler model is not
better” traits, the remaining traits defined by Breck et al. [1] are
independent of the choice of algorithm and are solely properties of the
underlying system that embodies it, which is beyond the scope of the present
study. Unlike [1], we refrain from developing a numerical scoring system.
## 3 Related Work
First popularised in the SemEval-$2014$ Task $4$ [19], ABSA has enjoyed
immense attention from both academic and industrial research communities. Over
the past $6$ years, according to the cited literature on a public leaderboard
444Papers With Code: ABSA on SemEval 2014 Task 4 Sub Task 2., the performance
for the subtask of Aspect Term Polarity has increased from $70.48\%$ in
Pontiki et al. [19], corresponding to the winning entry, to $82.29\%$ in Yang
et al. [32] on the laptop review corpus. The restaurant review corpus has
witnessed a similar boost in performance: from $80.95\%$ in [19] to $90.18\%$
in [32].
Not surprisingly, the field has witnessed a phase change in terms of the
methodology: custom feature engineering and ensembles that frequented earlier
[19] gave way to neural networks of ever-increasing complexity. Apart from
this macro-trend, we notice several micro-trends in the literature: the year
$2015$ witnessed a proliferation of LSTM and its variants [24]; years $2016$
and $2017$ respectively witnessed the introduction [25] and proliferation [26,
16, 3, 2] of memory networks and associated attention mechanisms; in $2018$
research focused on CNN [31], transfer learning [13] and transformers [12],
while memory networks and attention mechanisms remained in spotlight [11, 27,
9, 15]; transformer and BERT-based models prevailed in $2019$ [30, 33], while
attention mechanisms continued to remain mainstream [22].
While these developments appear to have pushed the envelope of performance,
the field has been fraught with “winner’s curse” [21]. In addition to the
replicability and reproducibility crises [18, 23], criticisms around
inadequacy of baseline and unjustified complexity [4, 6, 14] applies to this
field as well. The practice of reporting performance in challenging data
slices [31] has not been adopted uniformly, despite its importance to
production readiness assessment [1]. Similarly, the study of transferability
and replicability has only been sporadically performed: e.g., Hu et al. [8]
uses a dataset curated from Twitter along with the ones introduced in Pontiki
et al. [19] for studying cross-domain transferability.
## 4 Dataset
For the Reproducibility rubric, we consider the datasets released as part of
SemEval 2014 Task 4 - Aspect Based Sentiment Analysis
(http://alt.qcri.org/semeval2014/task4/) for our experiments, specifically
Subtask 2 - Aspect Term Polarity. The datasets come from two domains, Laptop
and Restaurant. We use the versions made available in the GitHub repository
https://github.com/songyouwei/ABSA-PyTorch, which forms the basis of our
experimental setup.
The guidelines used for annotating the datasets were released as part of the
challenge. For the Replicability rubric, we tagged two new datasets from the
e-commerce domain viz., Men’s T-shirt and Television, using similar
guidelines.
The statistics for these four datasets are presented in Table 1. As we can
observe, the sizes of the Men’s T-shirt and Television datasets are comparable
to those of the Laptop and Restaurant datasets, respectively.
Table 1: Statistics of the datasets showing the no. of sentences with
corresponding sentiment polarities of constituent aspect terms.

Dataset | Train Pos. | Train Neg. | Train Neu. | Train Total | Test Pos. | Test Neg. | Test Neu. | Test Total
---|---|---|---|---|---|---|---|---
Laptop | 994 | 870 | 464 | 2328 | 341 | 128 | 169 | 638
Restaurant | 2164 | 807 | 637 | 3608 | 728 | 196 | 196 | 1120
Men’s T-shirt | 1122 | 699 | 50 | 1871 | 270 | 186 | 16 | 472
Television | 2540 | 919 | 287 | 3746 | 618 | 257 | 67 | 942
For the Performance rubric, we evaluate and compare the models on two
challenging subsets viz., hard as defined by Xue et al. [31] and contrast as
defined by Gardner et al. [7]. We describe below the process to obtain these
datasets:
* •
Hard data slice: Hard examples have been defined in Xue et al. [31] as the
subset of review sentences containing multiple aspects with different
corresponding sentiment polarities. The number of such hard examples from each
of the datasets are listed in Table 2.
* •
Contrast data slice: In order to create additional test examples, Gardner et
al. [7] adds perturbations to the test set, by modifying only a couple of
words to flip the sentiment corresponding to the aspect under consideration.
For e.g., consider the review sentence: “I was happy with their service and
food”. If we change the word “happy” with “dissatisfied”, the sentiment
corresponding to the aspect “food” changes from positive to negative. We take
a random sample of $30$ examples from each of the datasets and add similar
perturbations as above to create $30$ additional examples. These $60$ examples
for each of the four datasets thus serve as our contrast test sets.
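The two slices above can be sketched as a simple filter and a word-level perturbation over an annotated corpus. The data layout and the antonym table below are illustrative assumptions, not the authors' tooling (the paper's contrast perturbations were written manually):

```python
# Sketch of the two challenging data slices, assuming each example is a dict
# {"sentence": str, "aspects": [{"term": str, "polarity": str}]}.

def hard_slice(examples):
    """Hard examples (Xue et al. [31]): sentences containing multiple
    aspects whose polarities are not all identical."""
    out = []
    for ex in examples:
        polarities = {a["polarity"] for a in ex["aspects"]}
        if len(ex["aspects"]) > 1 and len(polarities) > 1:
            out.append(ex)
    return out

# Hypothetical antonym table, for illustration only.
FLIP = {"happy": "dissatisfied", "great": "terrible", "love": "hate"}

def contrast_example(ex):
    """Contrast example (Gardner et al. [7]): flip a sentiment-bearing
    word so the polarity of the aspect under consideration inverts."""
    flipped = [FLIP.get(w.lower(), w) for w in ex["sentence"].split()]
    return {"sentence": " ".join(flipped), "aspects": ex["aspects"]}
```

On the running example from the text, `contrast_example` turns "I was happy with their service and food" into "I was dissatisfied with their service and food".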
Table 2: Statistics of the Hard test sets Dataset | Positive | Negative | Neutral | Total (% of Test Set)
---|---|---|---|---
Laptop | 31 | 24 | 46 | 101 (15.8 %)
Restaurants | 81 | 60 | 83 | 224 (20.0 %)
Men’s T-shirt | 23 | 24 | 1 | 48 (10.2 %)
Television | 43 | 40 | 19 | 102 (10.8 %)
## 5 Models Compared
As part of our evaluation, we focus on two families of models which cover the
major trends in the ABSA research community: (i) memory network based, and
(ii) BERT based. Among the initial set of models for the SemEval 14 challenge,
memory network based models had far fewer parameters than LSTM based
approaches and performed comparatively better. With the introduction of BERT
[5], work in NLP has focused on leveraging BERT based architectures for a wide
spectrum of tasks. In the ABSA literature, the leaderboard
(https://paperswithcode.com/sota/aspect-based-sentiment-analysis-on-semeval)
has been dominated by BERT based models, which have orders of magnitude more
parameters than memory network based models. However, due to pre-training on
large corpora, BERT models are still very data efficient in terms of number of
labelled examples required. We chose three representative models from each
family for our experiments and briefly describe them below:
* •
ATAE-LSTM [28] represents aspects using target embeddings and models the
context words using an LSTM. The context word representations and target
embeddings are concatenated and combined using an attention layer.
* •
Recurrent Attention on Memory (RAM) [2] represents the input review sentence
using a memory network, and the memory cells are weighted using the distance
from the target word. The aspect representation is then used to compute
attention scores on the input memory, and the attention weighted memory is
refined iteratively using a GRU (recurrent) network.
* •
Interactive Attention Networks (IAN) [17] uses separate components for
computing representations for both the target (aspect) and the context words.
The representations are pooled and then used to compute an attention score on
each other. Finally, the individual attention-weighted representations are
concatenated to obtain the final representation for the 3-way classification
task, with positive, negative, and neutral as the three classes.
* •
BERT-SPC [5] is a baseline BERT model that uses “[CLS] + context + [SEP] +
target + [SEP]” as input for the sentence pair classification task, where
‘[CLS]’ and ‘[SEP]’ represent the tokens corresponding to the classification
and separator symbols respectively, as defined in Devlin et al. [5].
* •
BERT-AEN [22] uses an attentional encoder network to model the semantic
interaction between the context and the target words. Its loss function uses a
label smoothing regularization to avoid overfitting.
* •
The Local Context Focus (LCF-BERT) [33] is based on Multi-head Self-Attention
(MHSA). It uses Context features Dynamic Mask (CDM) and Context features
Dynamic Weighted (CDW) layers to focus more on the local context words. A
BERT-shared layer is adopted in the LCF design to capture internal long-term
dependencies of the local and global context.
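As a concrete illustration of the BERT-SPC input format quoted above, a minimal sketch operating on token strings (a real implementation would use a BERT tokenizer, e.g. from Hugging Face `transformers`):

```python
def bert_spc_input(context, target):
    """Build the sentence-pair input '[CLS] + context + [SEP] + target + [SEP]'
    used by BERT-SPC for aspect polarity classification."""
    return ["[CLS]"] + context.split() + ["[SEP]"] + target.split() + ["[SEP]"]
```

For example, `bert_spc_input("the food was great", "food")` yields `["[CLS]", "the", "food", "was", "great", "[SEP]", "food", "[SEP]"]`.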
## 6 Experimental Setup
We present an extensive evaluation of the aforementioned models across the
four datasets: Laptops, Restaurants, Men’s T-shirt and Television, as per the
production readiness rubrics defined in §2. While trying to reproduce the
reported results for the models, we faced two major issues; (i) the official
implementations were not readily available, and (ii) the exact hyperparameter
configurations were not always specified in the corresponding paper(s). In
order to address the first, our experimental setup is based on a community
designed implementation of recent papers available on GitHub
888https://github.com/songyouwei/ABSA-PyTorch. Our choice for this public
repository is guided by its thoroughness and ease of experimentation. As an
additional social validation, the repository had 1.1k stars and 351 forks on
GitHub at the time of writing. For addressing the second concern, we consider
the following options; (a) use commonly accepted default parameters (for e.g.,
using a learning rate of $1e^{-4}$ for Adam optimizer). (b) use the public
implementations to guide the choice of hyperparameters. The exact
hyperparameter settings used in our experiments are documented and made
available with our supporting code repository
999https://github.com/rajdeep345/ABSA-Reproducibility for further
reproducibility and replicability of results.
From the experimental protocols described in the original paper(s), we were
not sure whether the final numbers reported were based on the training epoch
that gave the best performance on the test set, or whether the
hyperparameters were tuned on a separate held-out set. We therefore use the
following two configurations: (i) the test set itself is used as the held-out
set, and the model used for reporting results is chosen from the training
epoch with the best performance on the test set; and (ii) 15% of the training
data is set aside as a held-out set for tuning the hyperparameters, and the
optimal training epoch is decided by the best performance on the held-out
set. Finally, the model is re-trained, this time with all the training data
(including the 15% held-out set), for the optimal no. of epochs before
evaluating on the test set. In both cases, we report mean scores over $5$
runs of our experiments.
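Configuration (ii) can be sketched as follows. `train_model` and `evaluate` are placeholders standing in for the repository's training and evaluation routines; the shuffling and split details are assumptions, not the authors' exact code:

```python
import random
import statistics

def protocol_ii(train_set, test_set, train_model, evaluate,
                max_epochs=30, runs=5, seed=0):
    """Tune the stopping epoch on a 15% held-out split, retrain on the full
    training data for that many epochs, and average test accuracy over runs."""
    accs = []
    for run in range(runs):
        rng = random.Random(seed + run)
        data = list(train_set)
        rng.shuffle(data)
        cut = int(0.85 * len(data))
        train_part, held_out = data[:cut], data[cut:]
        # Pick the training epoch with the best held-out performance.
        best_epoch = max(
            range(1, max_epochs + 1),
            key=lambda e: evaluate(train_model(train_part, epochs=e), held_out),
        )
        # Retrain on all training data (held-out included) for best_epoch epochs.
        model = train_model(data, epochs=best_epoch)
        accs.append(evaluate(model, test_set))
    return statistics.mean(accs)
```

The key point the sketch encodes is that the test set never influences either the hyperparameters or the stopping epoch.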
## 7 Results and Discussion: Production Readiness Rubrics
### 7.1 Reproducibility and Replicability
Table 3: Performance of the models on the four datasets. The first two
datasets correspond to the reproducibility study, while the next two
correspond to the replicability study. For the performance study, results on
the hard and contrast data slices are respectively enclosed in brackets in
the last two columns. All reproduced and replicated results are averaged
across 5 runs.
Model | Reported Acc. | Reported Macro-F1 | Reproduced Acc. | Reproduced Macro-F1 | 15% held-out Acc. (hard, contrast) | 15% held-out Macro-F1 (hard, contrast)
---|---|---|---|---|---|---
ATAE-LSTM | 68.70 | - | 60.28 | 44.33 | 58.62 (33.47, 26.00) | 43.27 (29.01, 22.00)
RAM | 74.49 | 71.35 | 72.82 | 68.34 | 70.97 (56.04, 46.00) | 65.31 (55.81, 43.16)
IAN | 72.10 | - | 69.94 | 62.84 | 69.40 (48.91, 34.67) | 61.98 (48.75, 33.40)
BERT-SPC | 78.99 | 75.03 | 78.72 | 74.52 | 77.24 (59.21, 52.00) | 72.80 (59.44, 48.67)
BERT-AEN | 79.93 | 76.31 | 78.65 | 74.26 | 75.71 (46.53, 37.33) | 70.02 (45.22, 36.20)
LCF-BERT | 77.31 | 75.58 | 79.75 | 76.10 | 77.27 (62.57, 54.67) | 72.86 (62.71, 49.56)

(a) Laptop
Model | Reported Acc. | Reported Macro-F1 | Reproduced Acc. | Reproduced Macro-F1 | 15% held-out Acc. (hard, contrast) | 15% held-out Macro-F1 (hard, contrast)
---|---|---|---|---|---|---
ATAE-LSTM | 77.20 | - | 73.71 | 55.87 | 73.29 (52.41, 38.71) | 54.59 (47.35, 33.13)
RAM | 80.23 | 70.80 | 78.21 | 65.94 | 76.36 (59.29, 56.77) | 63.15 (56.36, 56.12)
IAN | 78.60 | - | 76.80 | 64.24 | 76.52 (57.05, 50.32) | 63.84 (55.11, 48.19)
BERT-SPC | 84.46 | 76.98 | 85.04 | 78.02 | 84.23 (68.84, 57.42) | 76.28 (68.11, 57.23)
BERT-AEN | 83.12 | 73.76 | 81.73 | 71.24 | 80.07 (51.70, 45.81) | 69.80 (48.97, 46.88)
LCF-BERT | 87.14 | 81.74 | 85.94 | 78.97 | 84.20 (69.38, 56.77) | 76.28 (69.64, 57.81)

(b) Restaurant
Model | Replicated Acc. | Replicated Macro-F1 | 15% held-out Acc. (hard, contrast) | 15% held-out Macro-F1 (hard, contrast)
---|---|---|---|---
ATAE-LSTM | 83.13 | 55.98 | 81.65 (58.33, 40.67) | 54.84 (39.25, 30.54)
RAM | 90.51 | 61.93 | 88.26 (83.33, 46.00) | 59.67 (56.01, 33.85)
IAN | 87.58 | 59.16 | 87.41 (63.75, 42.67) | 58.97 (42.85, 31.94)
BERT-SPC | 93.13 | 73.86 | 92.42 (89.58, 66.00) | 73.83 (60.62, 56.90)
BERT-AEN | 88.69 | 72.25 | 87.54 (50.42, 58.67) | 59.14 (32.96, 43.00)
LCF-BERT | 93.35 | 72.19 | 91.99 (91.67, 71.33) | 72.13 (62.30, 59.70)

(c) Men’s T-shirt
Model | Replicated Acc. | Replicated Macro-F1 | 15% held-out Acc. (hard, contrast) | 15% held-out Macro-F1 (hard, contrast)
---|---|---|---|---
ATAE-LSTM | 81.10 | 53.71 | 79.68 (53.92, 25.33) | 52.78 (39.13, 16.80)
RAM | 84.29 | 58.68 | 83.02 (64.31, 53.33) | 58.50 (50.07, 45.51)
IAN | 82.42 | 57.15 | 80.49 (54.31, 32.00) | 56.78 (41.67, 25.16)
BERT-SPC | 89.96 | 74.68 | 88.56 (80.20, 62.67) | 74.81 (74.32, 60.25)
BERT-AEN | 87.09 | 67.92 | 85.94 (50.39, 50.66) | 65.65 (38.08, 45.75)
LCF-BERT | 90.36 | 76.01 | 90.00 (80.98, 66.67) | 75.86 (73.72, 64.15)

(d) Television
Tables 3(a) and 3(b) show our reproducibility study for the Laptop and
Restaurant datasets, respectively. For both datasets, we notice a consistent
1-2% drop in accuracy and Macro-F1 scores when we try to reproduce the
numbers reported in the corresponding papers. The only exceptions were
LCF-BERT on the Laptop dataset and BERT-SPC on the Restaurant dataset, where
we obtained higher numbers than the reported ones. For ATAE-LSTM, the
observed drop was much larger than for the other models. We notice an
additional 1-2% drop in accuracy when we use 15% of the training set as a
held-out set to pick the best model. These numbers indicate that the actual
performance of the models is likely to be slightly worse than what is quoted
in the papers, and the drop is sometimes larger than the difference in
performance between two consecutive methods on the leaderboard.
To study the replicability, Tables 3(c) and 3(d) summarise the performance of
the individual models on the Men’s T-shirt and Television datasets,
respectively. We introduce these datasets for the first time and report the
performance of all $6$ models under the two defined configurations: test set
as held out set, and 15% of train set used as held out set. We notice a
similar drop in performance when we follow the correct experimental procedure
(hyperparameter tuning on 15% train data as held-out set). Therefore,
following a consistent and rigorous experimental protocol helps us to get a
better sense of the true model performance.
### 7.2 Performance on the Hard and Contrast data slices
As per the performance rubric, we investigate the performance of all $6$
models on both hard and contrast test sets, using the correct experimental
setting (15% train data as held-out set). The results are shown in brackets
(in the same order) in the last two columns of Tables 3(a), 3(b), 3(c), and 3(d)
for the four datasets, respectively. We observe a large drop in performance on
both these challenging data slices across models. LCF-BERT consistently
performs very well on these test sets. Among memory network based models, RAM
performs the best.
### 7.3 Transferability rubric: Cross domain experiments
In a production readiness setting, it is very likely that we will not have
enough labelled data across individual categories and hence it is important to
understand how well the models are able to transfer across domains. To
understand the transferability of models across datasets, we first experiment
with cross domain combinations. For each experiment, we fix the test set (for
e.g., Laptop) and train three separate models, each with one of the other
three datasets as training sets (Restaurant, Men’s T-shirt, and Television in
this case). Consistent with our experimental settings, for each such
combination, we use 15% of the cross-domain data as held-out set for
hyperparameter tuning, re-train the corresponding models with all the cross-
domain data and obtain the scores for the in-domain set (here Laptop) averaged
across $5$ different runs of the experiment.
Table 4: Transferability: Average drop between in-domain and cross-domain
accuracies for each dataset pair for (a) BERT based and (b) Memory network
based models. Rows correspond to the train set; columns correspond to the
test set.
Table 4 summarises the results averaged across the BERT-based models and
Memory network based models, respectively on the four datasets. The rows and
columns correspond to the train and test sets, respectively. The diagonals
correspond to the in-domain experiments (denoted by $0$) and each off-diagonal
entry denotes the average drop in model performance for the cross-domain
setting compared to the in-domain combination.
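The off-diagonal drops described above can be computed as below from a dictionary of (train domain, test domain) accuracies. The function and its inputs are an illustrative sketch, not the authors' analysis code:

```python
def drop_matrix(acc, domains):
    """Given acc[(train, test)] -> accuracy, return drop[(train, test)] =
    in-domain accuracy on `test` minus the cross-domain accuracy obtained
    when training on `train`. Diagonal entries are 0 by construction."""
    return {
        (tr, te): round(acc[(te, te)] - acc[(tr, te)], 1)
        for tr in domains for te in domains
    }
```

For example, with assumed accuracies `acc[("laptop", "restaurant")] = 78.3` and `acc[("restaurant", "restaurant")] = 82.8`, the entry for training on Laptop and testing on Restaurant is a 4.5-point drop.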
From Table 4 we observe that, on average, the models are able to generalize
well across the following combinations, which correspond to a lower drop in
the cross domain experiments: (i) Laptops and Restaurants, and (ii) Men’s
T-shirt and Television. For instance, when testing on the Restaurant dataset,
BERT based and memory network based models respectively show an average of
$\sim$4 and $\sim$7 point absolute drops in % accuracies, when trained using
the Laptop dataset. The drops are higher for the other two training sets.
Interestingly, generalization is governed by locale rather than domain,
contrary to what one would have expected. For e.g., we notice better transfer
from Men’s T-shirt $\rightarrow$ Television (similarity in locale) than in
the expected Laptop $\rightarrow$ Television (similarity in domain). Given
that our task is that of detecting sentiment polarities of
aspect terms, this observation might be attributed to the similarity in
social/linguistic characteristics of reviews from the same locale.
Further, in the spirit of transferability, we consider the closely related
locales as identified above – {Laptop, Restaurant} and {Men’s T-shirt,
Television}, and conduct experiments to understand the incremental benefits of
adding in-domain data on top of cross-domain data, i.e., what fraction of the
in-domain training instances can largely close the gap between purely
in-domain and purely cross-domain performance. For each test dataset, we
take examples from the corresponding cross-domain dataset in the same locale
as training set and incrementally add in-domain (10%, 25% and 50%) examples to
evaluate the performance of the models. Table 5 summarises the results from
these experiments for the BERT based models (a) and memory network based
models (b). For instance, on the Restaurant dataset, the average cross-domain
performance (i.e., trained on Laptop) across the three BERT-based models is
78.3 (first row), while the purely in-domain performance is 82.8 (last row).
We observe that among all increments, adding 10% of the in-domain dataset
(second row) gives the maximum improvement, and is accordingly defined as the
inflection point, which is marked in bold. In Table 5 (a), we report the
accuracy scores (averaged over 5 runs) for the individual BERT based models
(BERT-AEN, BERT-SPC, LCF-BERT) in brackets, in addition to the average
numbers. As we can see, the variability in the numbers across models is low.
For the memory network based models, on the other hand, the variability is not
so low, and the corresponding scores have been shown in Table 5 (b) in the
order (ATAE-LSTM, IAN, RAM).
Interestingly, we notice that in most cases the inflection point is reached
upon adding just 10% of in-domain examples, and the model performance comes
within $0.5-2$% of the purely in-domain performance, as shown in Table 6. In
a few cases, this happens only after adding 25-50% of in-domain samples. This
is especially useful from the production readiness perspective, since
considerably good performance can be achieved by using limited in-domain
labelled data on top of cross-domain annotated data from the same locale.
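The mixing procedure behind these experiments can be sketched as follows; random sampling of the in-domain fraction is an assumption about the setup:

```python
import random

def mixed_training_set(cross_domain, in_domain, fraction, seed=0):
    """Combine the full cross-domain training set with a random `fraction`
    (e.g. 0.0, 0.1, 0.25, 0.5, 1.0) of the in-domain training set."""
    rng = random.Random(seed)
    k = int(fraction * len(in_domain))
    return list(cross_domain) + rng.sample(list(in_domain), k)
```

Sweeping `fraction` over the increments and evaluating each resulting model on the in-domain test set locates the inflection point discussed above.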
Table 5: Transferability: Results on including incremental in-domain training
data. The rows correspond to cross-domain performance (0), and to adding 10%,
25% and 50% of the in-domain dataset to the cross-domain data. For ease of
comparison, in-domain results are repeated in the last row. Inflection points
for each dataset are discussed in the text.

% in-domain | Laptop | Restaurant | Men’s T-shirt | Television
---|---|---|---|---
0 | 74.6 (73.6, 74.6, 75.5) | 78.3 (77.3, 77.8, 79.9) | 88.6 (86.3, 89.6, 89.8) | 86.1 (83.5, 87.5, 87.4)
10 | 76.5 (73.9, 76.6, 78.9) | 81.6 (80.1, 81.5, 83.3) | 88.9 (85.7, 90.6, 90.4) | 83.8 (82.0, 86.1, 83.2)
25 | 76.3 (74.8, 77.0, 77.0) | 82.1 (79.8, 82.8, 83.7) | 90.0 (87.2, 91.7, 91.0) | 86.3 (83.8, 86.8, 88.2)
50 | 78.2 (76.4, 79.2, 78.9) | 82.9 (80.8, 83.6, 84.4) | 90.1 (86.8, 91.3, 92.3) | 87.2 (85.5, 88.2, 87.8)
In-domain | 76.7 (75.7, 77.2, 77.3) | 82.8 (80.1, 84.2, 84.2) | 90.6 (87.5, 92.4, 92.0) | 88.2 (85.9, 88.6, 90.0)

(a) Variance across BERT based models (BERT-AEN, BERT-SPC, LCF-BERT) is small.
% in-domain | Laptop | Restaurant | Men’s T-shirt | Television
---|---|---|---|---
0 | 61.0 (58.6, 60.9, 63.6) | 68.2 (68.3, 68.0, 68.3) | 79.0 (76.6, 78.6, 81.9) | 77.6 (75.4, 77.8, 79.6)
10 | 65.1 (60.7, 65.6, 69.1) | 73.0 (70.1, 74.1, 74.9) | 83.8 (80.3, 84.1, 86.9) | 79.1 (77.1, 79.1, 81.2)
25 | 65.3 (59.9, 66.2, 69.8) | 74.8 (72.2, 75.4, 76.6) | 85.1 (82.9, 86.0, 86.4) | 80.0 (78.7, 79.8, 81.5)
50 | 66.2 (60.5, 68.7, 69.5) | 75.0 (72.9, 75.3, 76.8) | 85.8 (82.7, 86.1, 88.4) | 80.6 (78.8, 80.7, 82.4)
In-domain | 66.3 (58.6, 69.4, 71.0) | 75.4 (73.3, 76.5, 76.4) | 85.8 (81.7, 87.4, 88.3) | 81.1 (79.7, 80.5, 83.0)
(b) Variance across Memory network models (ATAE-LSTM, IAN, RAM) is
significant.
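The contrast between Tables 5(a) and 5(b) can be quantified with a simple spread (max − min) over the three per-model scores in each cell. This is an illustrative sketch, not the authors' analysis; the numbers are the 0% in-domain Laptop cells from the two tables.

```python
# Spread (max - min) of test accuracies across the three models in a
# Table 5 cell; a larger spread means higher variability across models.
def spread(scores):
    return max(scores) - min(scores)

bert_models = [73.6, 74.6, 75.5]    # BERT-AEN, BERT-SPC, LCF-BERT
memory_models = [58.6, 60.9, 63.6]  # ATAE-LSTM, IAN, RAM
print(round(spread(bert_models), 1))    # small spread for BERT models
print(round(spread(memory_models), 1))  # noticeably larger spread
```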
Table 6: Performance scorecard in accordance with the rubric: reproducibility – % drop in test set accuracy on Laptop and Restaurant, resp.; replicability – rank in the leaderboard for Men’s T-shirt and Television, resp. (rank obtained from avg. test set accuracy on Laptop and Restaurant); performance – % drop in test set accuracy (averaged across all four datasets) on the hard and contrast-set data slices, resp.; transferability – % drop in test set accuracy in the cross-domain setting, and upon adding in-domain training instances as per the inflection point, resp. (averaged over the four datasets).
Model | Reproducibility | Replicability | Performance | Transferability
---|---|---|---|---
ATAE-LSTM | (14.67, 5.06) | 6, 6 (6) | (33.07, 55.31) | (4.60, 1.44)
RAM | (4.73, 4.82) | 3, 4 (4) | (17.88, 36.12) | (8.06, 2.06)
IAN | (3.74, 2.64) | 5, 5 (5) | (28.64, 48.93) | (9.22, 3.55)
BERT-SPC | (2.22, 0.27) | 1, 2 (2) | (13.53, 30.58) | (3.83, 1.33)
BERT-AEN | (5.28, 3.67) | 4, 3 (3) | (39.44, 41.88) | (2.61, 0.83)
LCF-BERT | (0.05, 3.37) | 2, 1 (1) | (11.75, 27.55) | (3.14, 0.64)
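Several Table 6 entries are percentage drops in accuracy relative to a reference score. A minimal sketch of that computation, under the assumption (our reading; the table caption does not spell out the formula) that the drop is expressed relative to the reference accuracy; the inputs are made-up round numbers, not values from the table:

```python
# Relative percentage drop in test accuracy with respect to a
# reference score, as used (in our reading) for the Table 6 entries.
def percent_drop(reference_acc, observed_acc):
    return 100.0 * (reference_acc - observed_acc) / reference_acc

print(round(percent_drop(80.0, 72.0), 2))  # → 10.0
```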
### 7.4 Summary comparison of the different models under the production
readiness rubrics
We now make an overall comparison across different models considered in this
study under our production readiness rubrics. Table 6 shows the various
numbers across these rubrics. Under reproducibility, we observe a consistent drop in performance even for the BERT-based models, at least on one of the two datasets, viz. Laptop and Restaurant. For the memory-network-based models, there is a considerable drop on both datasets, and the drop on the Laptop dataset is particularly noteworthy. Under replicability, we observe that the relative
rankings of the considered models remain quite stable for the two new
datasets, which is a good sign. Under performance, we note a large drop in
test set accuracies for all the models across the two challenging data slices,
with a minimum drop of 11-27% for LCF-BERT. Surprisingly, BERT-AEN suffered a
huge drop in performance for both hard as well as contrast data slices. This
is a serious concern and further investigation is needed to identify the
issues responsible for this significant drop. Under transferability, while there is a consistent drop in the cross-domain scenario, the drop at the inflection point, which corresponds to adding only a meager 10-25% of in-domain data samples, is much smaller.
### 7.5 Limitations of the present study
While representative of the modern trend in architecture research, memory
network- and BERT-based models do not cover the entire spectrum of the ABSA
literature. Important practical considerations, such as debuggability,
simplicity and computational efficiency, have not been incorporated into the
rubric. Lastly, a numeric scoring system based on the rubric would have made its interpretation more objective. We leave these directions for future work.
## 8 Conclusion
Despite the limitations, the present study takes an important stride towards
closing the gap between empirical academic research and its widespread
adoption and deployment in production. In addition to further strengthening the rubric and judging a broader cross-section of published ABSA models in its light, we envision replicating such a study in other important NLP tasks. We hope the two contributed datasets, along with the open-source evaluation framework, will fuel further rigorous empirical research in ABSA. All code and datasets are publicly available at https://github.com/rajdeep345/ABSA-Reproducibility.
Corresponding author:<EMAIL_ADDRESS>
# Amplitude mode of charge density wave in TTF[Ni(dmit)2]2 observed by
electronic Raman scattering
M. Revelli Beaumont1,2, Y. Gallais1, A. Sacuto1, K. Jacob2, L. Valade2, D. de Caro2, C. Faulmann2, M. Cazayous1
1Laboratoire Matériaux et Phénomènes Quantiques (UMR 7162 CNRS), Université de Paris, 75205 Paris Cedex 13, France
2Laboratoire de Chimie de Coordination (UPR 8241), Université Paul Sabatier, Toulouse, France
###### Abstract
We measured the optical signature of the charge density waves (CDWs) in the
multiband conductor TTF[Ni(dmit)2]2 by electronic Raman scattering. At low
energies, a hump develops below 60 K. This hump is associated with the amplitude mode of the CDW, with an energy around 9 meV. Symmetry-resolved Raman measurements show that the CDW amplitude mode is anisotropic and that the CDW can be associated with the band nesting of the Ni(dmit)2 chains.
Charge density waves (CDWs) are a widespread physical phenomenon in condensed
matter and are observed in many solids, especially in low-dimensional
systems.Gruner CDWs can be grouped into three main categories.Zhu ; Martin The first category corresponds to CDWs that originate in an instability occurring below a critical temperature TCDW, as described by Peierls.Peierls A Peierls instability is a dimerization of the crystal lattice accompanied by the opening of an energy gap at the Fermi level, i.e., an electronic transition from metal to insulator. The lowering of the electronic energy modulates the electron density of the system and generates a spontaneous modulation of the crystal lattice through the electron-phonon interaction.Kohn ; Mazin In the Peierls picture, Fermi-surface nesting with a vector qCDW is at the origin of the CDW, resulting in a strong peak in the susceptibility at qCDW and a sharp dip, the so-called Kohn anomaly, in the phonon dispersion at 2qCDW.
In the second group, CDWs originate from electron-phonon interactions but are
not driven by nesting, and the transition is not associated with a metal-insulator transition. The signature of the transition is a phonon mode at qCDW whose energy goes to zero at TCDW. Among the dichalcogenides, NbSe2 is
an example of this category.
The third category describes CDWs associated with a charge modulation without evidence of nesting or electron-phonon interactions. Such behavior is observed in unconventional superconductors such as the cuprates.Chang ; Comin ; Loret ; Lee
A CDW state is described by an amplitude mode (amplitudon) and a phase mode
(phason).Lee The phason corresponds to the vibration of the electron density wave in the rearranged lattice, while the amplitudon modulates the magnitude of the gap, leading to oscillations of the order-parameter amplitude. The oscillation frequency corresponds to that of the phonon mode driving the nesting. In recent years, CDWs have emerged as a phenomenon that interacts or competes with other orders (superconductivity, …), which has led to a strong renewal of interest in these waves.Monceau NbSe2 again provides a good example of the interaction between orders. In this compound, superconductivity coexists with a CDW order. Spectroscopic probes point out the so-called ”Higgs” mode, which becomes active by removing spectral weight from the CDW amplitude mode upon entering the superconducting state.Grasset ; Tsang The coupling between the CDW and the Higgs mode is made possible by the fact that the amplitude mode of the CDW ”shakes” the density of states at the Fermi level and thereby modulates the amplitude of the superconducting order parameter.Cea
Figure 1: Projection onto the (010) plane of the TTF[Ni(dmit)2]2 crystal
structure. Slabs of Ni(dmit)2 alternate with slabs of TTF.
When one thinks of one-dimensional systems and superconductivity, the physics of organic superconductors quickly comes to mind. SuperconductivityParkin and a CDWRavy have both been observed in (BEDT-TTF)2ReO4, and it is in this compound that the competition between the two orders was first suspected.Kaddour A phenomenon that fosters competition between orders is the contribution of molecular orbitals to the Fermi level. Such a situation is encountered in the TTF[M(dmit)2]2 family, in which TTF is the tetrathiafulvalene molecule and dmit2 is the 2-thioxo-1,3-dithiole-4,5-dithiolato anion (M = Ni, Pd or Pt).Canadell The present work focuses on TTF[Ni(dmit)2]2 . A comparison between several one-dimensional compounds with a CDW, such as TTF-TCNQ, BaVS3 or the blue bronze K0.3MoO3, can be found in the review article of J.-P. Pouget and in the book of D. Jérome and L. G. Caron.Pouget ; Garon
Figure 1 shows a projection on the ac plane of the TTF[Ni(dmit)2]2 structure.
In this plane, TTF molecules alternate with blocks of Ni(dmit)2 molecules
along the crystallographic a axis. These molecules stack up in the
crystallographic direction b to give rise to columns of TTF cations
alternating with columns of Ni(dmit)2 anions. The structure can then be described as alternating layers of TTF and Ni(dmit)2. The b axis is the
crystal elongation axis.Cassoux Brossard et al. discovered that this material
becomes superconducting at TC = 1.6 K under a pressure of 7 kbar.Brossard At
ambient pressure, 1D CDW fluctuations along the stacking direction b with the wave vector q = 0.40(2)b∗ have been observed by X-ray diffuse scattering experiments, and the existence of possible successive CDW transitions below 60 K has been proposed.Ravy89 In this phase, the metallic state coexists with a
CDW state with its own nesting wave vector. More recently, two successive CDW
transitions have been identified around 55 K and 35 K and associated with the Ni(dmit)2 chains by comparing resistivity measurements under pressure with band-structure calculations.Kaddour
In this Letter, we directly detect the CDW amplitude mode by Raman spectroscopy and determine its onset temperature and energy. Furthermore, polarization analyses show that the CDW is associated with the Ni(dmit)2 chains.
Figure 2: Raman spectra of TTF[Ni(dmit)2]2 measured between 10 and 240 cm-1
with the z(bb)z̄ configuration as a function of temperature. Only a few spectra are plotted for easier visualization of the changes. The inset
shows the Raman spectra measured at 3 and 100 K using the laser wavelength
$\lambda$ = 660 nm and the same polarizations.
Single crystals are obtained by a slow interdiffusion of saturated solutions
of (TTF)3(BF4)2 and (n-Bu4N)[Ni(dmit)2].Bousseau The crystal has the
monoclinic symmetry and crystallizes in the C 2/c space group. High quality
TTF[Ni(dmit)2]2 single crystals have a typical size of 2 $\times$ 0.15 $\times$
0.075 mm3 with the largest dimension along the b direction. Raman scattering
measurements have been performed using a triple spectrometer Jobin Yvon T64000
equipped with a liquid-nitrogen-cooled CCD detector. The incident laser spot
is about 50 $\mu$m in diameter and the power is set to 2 mW, small enough to keep laser heating as low as possible and large enough to ensure a good signal-to-noise ratio. The laser lines used to probe the sample, from solid-state lasers, are at 532 nm and 660 nm.
Measurements between 3 and 300 K have been performed using a Cryomech closed-cycle He cryostat. The Raman spectra have been measured in backscattering
using two optical configurations z(bb)z̄ and z(cc)z̄.Porto1966 The incident
wave vector is antiparallel to the scattered one, and both are along the z axis. The incident and scattered light polarizations are along the b and c axes of the sample.
Figure 2 shows the Raman spectra measured at low energies between 5 K and 150
K using the z(bb)z̄ configuration. At high energies, beyond 160 cm-1, the spectra are superimposed without any additional treatment (vertical translation, …). At 3 K, we observe a phonon mode at 140 cm-1 and a lower-energy hump centered around 70 cm-1. The phonon modes of TTF[Ni(dmit)2]2 can be found in Ref. Valade. In our measurements, we do not observe phonons below 100 cm-1, although intermolecular modes are expected there. Note that there are no studies or calculations reporting phonons of TTF[Ni(dmit)2]2 below 100 cm-1. Their absence might be explained by intensities too low to be detected, or by resonance effects. The phonon signal might also be superimposed on the signal of the charge density wave, which could explain the shape of the hump associated with the CDW around 70 cm-1. In that case, however, such phonons should also appear above the CDW transition temperature in Fig. 2 and survive the subtraction of the spectra in Fig. 3, which is not the case. The phonon mode at 140 cm-1 corresponds to the deformation mode of the whole Ni(dmit)2 molecule.Pokhodnya As the temperature increases, the phonon frequency decreases, as does its intensity, due to the thermal expansion of the lattice. In the same temperature range, the estimated softening of the hump with temperature is about 5 cm-1, within the uncertainty set by the width of the hump. Its intensity decreases until the spectra overlap above 100 K.
Figure 3: (a) Raman spectra at 3 K and 100 K after subtraction of the spectrum at 150 K, (b) Normalized area of the hump at 70 cm-1 as a function of temperature.
The Raman responses at 3 K ($\chi^{"}_{3K}$) and at 100 K ($\chi^{"}_{100K}$), after subtraction of the one at 150 K ($\chi^{"}_{150K}$), are plotted in Fig. 3(a). The $\chi^{"}$($\omega$) response is obtained by correcting all Raman spectra for the spectrometer response and the Bose factor. The Bose factor, which links the Raman intensity to the imaginary part of the Raman response function, stems from the fluctuation-dissipation theorem and is a general property of inelastic scattering techniques.Hayes The electronic Raman response is proportional to the imaginary part of the density-density correlation function of the material. To access this quantity, we divide the experimental Raman intensity by the Bose factor and correct for the spectrometer response.
This difference clearly highlights the low-energy hump and allows us to determine its energy, slightly above 70 cm-1. The analysis of the area under this hump as a function of temperature shows that this excitation disappears around 60 $\pm$ 5 K, as shown in Fig. 3(b).
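The Bose-factor correction described above can be sketched numerically. This is a generic illustration, not the authors' analysis code; it assumes the standard Stokes relation $I(\omega) \propto [n(\omega)+1]\,\chi''(\omega)$ and uses the second radiation constant $hc/k_B \approx 1.4388$ cm·K.

```python
import math

# Bose-Einstein occupation n(omega, T) for a Raman shift in cm^-1.
HC_OVER_KB = 1.4388  # hc/kB in cm * K

def bose_occupation(shift_cm1, temperature_k):
    # expm1 keeps the result accurate for small shifts / high temperatures
    return 1.0 / math.expm1(HC_OVER_KB * shift_cm1 / temperature_k)

def chi_imag(intensity, shift_cm1, temperature_k):
    # Stokes intensity ~ (n + 1) * chi'', so chi'' = I / (n + 1)
    return intensity / (1.0 + bose_occupation(shift_cm1, temperature_k))

# At 3 K the thermal factor is negligible for a 70 cm^-1 shift...
print(bose_occupation(70.0, 3.0))          # vanishingly small
# ...while at 150 K it roughly doubles the measured intensity.
print(1.0 + bose_occupation(70.0, 150.0))  # ≈ 2.0
```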
Notice that X-ray measurements reported by S. Ravy et al. show satellite reflections below 60 K, pointing out the appearance of a CDW in TTF[Ni(dmit)2]2 .Ravy89 The CDW transition has recently been identified around 55 K.Kaddour Our temperature-dependent measurements indicate that the hump observed at low energies in the Raman spectra corresponds to the signature of the CDW mode. At first glance, however, this hump might have another origin, such as a phonon mode (an intermolecular phonon mode or a finite-q phonon activated by the CDW) or luminescence. The width of phonons in single crystals is generally on the order of a few cm-1 at low temperatures, as shown in Fig. 2, where the full width at half maximum (FWHM) of the 140 cm-1 phonon is equal to 10 cm-1. The width of the hump (FWHM around 60 cm-1) argues against a phonon origin; it is instead comparable to the width of the amplitude mode measured around 40 cm-1 in 2H-TaS2, for example.Grasset To rule out a luminescence signal that would coincidentally appear at 60 K, we have performed measurements with another laser wavelength, $\lambda$ = 660 nm (inset of Fig. 2). The hump does not shift in energy with the laser wavelength, which allows us to exclude luminescence as the origin of the hump.
The question is now whether this hump is associated with the gap or with the amplitude mode of the CDW. The amplitude mode corresponds to the vibration of the ions due to the oscillation of the maximum/minimum charge density and of the magnitude of the CDW gap.Lee ; Rice A first argument is given by the expected signature of a gap in Raman spectra. A gap opening is characterized by a transfer of spectral weight in the electronic background from low to high energy. Figures 2 and 3(a) clearly show that this is not the case: the hump decreases in intensity with temperature without exhibiting a low-energy spectral-weight recovery. A second argument can be obtained by comparing our measurements to mean-field theory. The Raman peak associated with the gap $\Delta_{CDW}$ of the CDW is expected at 2$\Delta_{CDW}$.Vanyolos If we associate the hump of Fig. 2 with the gap of the CDW, its energy of about 70 cm-1 (9 meV) corresponds to 2$\Delta_{CDW}$. As a first approximation, we can use mean-field theory to relate the gap $\Delta_{CDW}$ to TCDW, i.e. 2$\Delta_{CDW}$ = $\alpha k_{B}T_{CDW}$, where the mean-field value of $\alpha$ is 3.52 and, in 1D systems, $\alpha$ is expected to be even larger. The experimental ratio gives $\alpha$ $\approx$ 1.7, half the theoretical value. The observed hump is thus unlikely to be related to the gap of the CDW. One possibility remains: the hump observed in our Raman spectra can be associated with the amplitude mode of the CDW order that appears below TCDW = 60 K. The fact that the amplitude mode softens little with temperature is also observed in transition-metal dichalcogenides, for example.Grasset Deviations from the expected mean-field behavior of the order parameter are not surprising in low-dimensional systems. As in other systems,Tsang ; Measson Raman spectroscopy is sensitive to the amplitude mode of the charge wave and not directly to the gap.
Figure 4: Raman spectra with (a) incident and scattered polarizations along
the c axis (z(cc)z̄ configuration) and (b) crossed polarizations configuration
(z(bc)z̄) as a function of temperature.
In addition, the CDW response has been measured in the z(cc)z̄ and z(bc)z̄ configurations, shown in Figs. 4(a) and 4(b), respectively. Since the intensity of the hump is very weak in Fig. 4(a) and drops rapidly with increasing temperature, we cannot establish experimentally whether the hump persists between 30 and 60 K; the spectra are almost superimposable above 30 K. The hump is absent in crossed polarizations. These measurements show that the Raman response of the CDW amplitude mode is anisotropic and lies almost exclusively along the b axis.
Calculations of the electronic Raman cross section in quasi-one-dimensional interacting-electron systems with a density-wave ground state have shown that collective contributions to the Raman response appear only for polarizations along the chains hosting the CDW.Vanyolos Here, the b axis corresponds to the direction of the Ni(dmit)2 stacks.
The observation of the CDW signature perpendicular to the Ni(dmit)2 stacks in Fig. 4(a) is not expected, given the one-dimensional character of the CDW in this compound. This signal might be a remnant of the signal along the b axis, due to the polarization not being perfectly aligned with the c axis. We also note that, even if the CDW lies along the Ni(dmit)2 stacks (b axis), it is not obvious that its Raman signature should appear only with polarizations along the b axis. For example, the amplitude mode in NbSe2 is observed in two different polarization configurations (A1 and E),Sooryakumar whereas naively it would be expected to appear in only one, the fully symmetric A1. The reason for this disagreement is not yet well understood, but an explanation based on anharmonic effects has been proposed.Klein
TTF[Ni(dmit)2]2 is a quasi-1D material with uncorrelated spins on the Ni(dmit)2 stacks. Bourbonnais et al. performed nuclear magnetic resonance (NMR) measurements on the TTF stacks and showed that the TTF chains keep a metallic behavior down to low temperatures without a gap opening,Bourbonnais so the CDW cannot be associated with these chains. On the other hand, NMR measurements on the Ni(dmit)2 chains attribute the CDW to these chains.Vainrub The CDW correlation length along the chains, of the order of b at room temperature, increases above 20 nm at 55 K, although there are no interchain CDW correlations down to 55 K and the lateral CDW order is not perfect, probably due to the Coulomb coupling between CDWs. The authors show that TTF[Ni(dmit)2]2 undergoes at 55 K a CDW transition related to the nesting of the LUMO band of the Ni(dmit)2 stacks, and a second CDW transition at 35 K associated with the nesting of the HOMO bands. The insulating ground state is thus considered to be a CDW of the Ni(dmit)2 stacks with different wave vectors. Consequently, our measurements support the idea that the CDW is associated with the Ni(dmit)2 stacks.
What are the prospects of a direct measurement of the CDW in an organic conductor? Raman experiments contributed significantly to establishing the d-wave nature of the order parameter in high-temperature superconductors.Devereaux This technique also makes it possible to investigate the temperature and pressure dependence of the CDW amplitude mode in several systems.Wilson ; Friend In M(dmit)2-based compounds, superconductivity develops under pressure in competition with a CDW ground state.Brossard ; Brossard89 The possibility of probing superconductivity and the CDWs with the same technique, as shown in this work, will allow the interaction between the two orders in organic compounds to be studied more efficiently.
To conclude, we have optically highlighted the CDW signature in the molecular conductor TTF[Ni(dmit)2]2 . This allowed us to determine precisely the energy of the CDW amplitude mode and its onset temperature. The polarization measurements show that the CDW amplitude mode is strongly anisotropic and that the CDW originates from the Ni(dmit)2 stacks.
## References
* (1) G. Gruner, Density Waves in Solids, Addison-Wesley, Reading, MA (1994).
* (2) X. Zhu, Y. Cao, J. Zhang, E. W. Plummer, and J. Guo, PNAS 112, 2367 (2015).
* (3) M. Dressel and S. Tomić, Adv. Phys. 69, 1 (2020).
* (4) R. E. Peierls, Quantum Theory of Solids, Oxford Univ Press, New York (1955).
* (5) W. Kohn, Phys. Rev. Lett. 2, 393 (1959).
* (6) M. D. Johannes and I. I. Mazin, Phys. Rev. B 77, 165135 (2008).
* (7) J. Chang et al., Nature Physics 8, 871 (2012).
* (8) R. Comin et al., Science 343, 390 (2014).
* (9) B. Loret et al., Nat. Phys. 15, 771 (2019).
* (10) P. A. Lee, T. M. Rice, and P. W. Anderson, Solid State Commun. 14, 703 (1974).
* (11) P. Monceau, Adv. Phys. 61, 325 (2012).
* (12) R. Grasset, Y. Gallais, A. Sacuto, M. Cazayous, S. Mañas-Valero, E. Coronado, and M.-A. Méasson, Phys. Rev. Lett. 122, 127001 (2019).
* (13) J. C. Tsang, J. E. Smith Jr., and M. W. Shafer, Phys. Rev. Lett. 37, 1407 (1976).
* (14) T. Cea and L. Benfatto, Phys. Rev. B 90, 224515 (2014).
* (15) S. S. P. Parkin, E. M. Engler, R. R. Schumaker, R. Lagier, V. Y. Lee, J. C. Scott, and R. L. Greene, Phys. Rev. Lett. 50, 270 (1983).
* (16) S. Ravy, R. Moret, J. P. Pouget, R. Comès, and S. S. P. Parkin, Phys. Rev. B 33, 2049 (1986).
* (17) W. Kaddour, P. Auban-Senzier, H. Raffy, M. Monteverde, J. P. Pouget, C. R. Pasquier, P. Alemany, E. Canadell, and L. Valade, Phys. Rev. B 90, 205132 (2014).
* (18) E. Canadell, I. E.-I. Rachidi, S. Ravy, J.-P. Pouget, L. Brossard, and J.-P. Legros, J. Phys. France 50, R2967 (1989).
* (19) J-P. Pouget, C. R. Physique 17, 332 (2016).
* (20) D. Jérome and L. G. Caron, Low Dimensional Conductors and Superconductors, Plenum Press, New York (1987).
* (21) P. Cassoux, L. Valade, M. Bousseau, J-P. Legros, M. Garbauskas, and L. Interrante, Mol. Cryst. Liq. Cryst. 120, 377 (1985).
* (22) L. Brossard, M. Ribault, L. Valade, and P. Cassoux, Phys. Rev. B 42, 3935 (1990).
* (23) S. Ravy, J.-P. Pouget, L. Valade, and J.-P. Legros, Europhys. Lett. 9, 391 (1989).
* (24) M. Bousseau, L. Valade, J.-P. Legros, P. Cassoux, M. Garbauskas, and L. V. Interrante, J. Am. Chem. Soc. 108, 1908 (1986).
* (25) S. P. S. Porto, J. A. Giordmaine, and T. C. Damen, Phys. Rev. 147, 608 (1966).
* (26) L. Valade et al., J. of Solid State Chem. 168, 438 (2002).
* (27) K.I. Pokhodnya et al., Synth. Met. 103, 2016 (1999).
* (28) W. Hayes and R. Loudon, Scattering of light by crystals, Wiley, New York (1978).
* (29) M. J. Rice and S. Strassler, Solid State Commun. 13, 1931 (1974).
* (30) A. Ványolos and A. Virosztek, Phys. Rev. B 72, 115119 (2005).
* (31) M.-A. Méasson, Y. Gallais, M. Cazayous, B. Clair, P. Rodiére, L. Cario, and A. Sacuto, Phys. Rev. B 89, 060503(R) (2014).
* (32) C. Bourbonnais, P. Wzietek, D. Jérome, F. Creuzet, L. Valade, and P. Cassoux., Europhys. Lett. 6, 177 (1988).
* (33) A. Vainrub, D. Jérome, M. F. Bruniquel, and P. Cassoux, Europhys. Lett. 12, 267 (1990).
* (34) R. Sooryakumar and M. V. Klein, Phys. Rev. B 23, 3213 (1981).
* (35) M. V. Klein, Phys. Rev. B 25, 7192 (1982).
* (36) T. P. Devereaux, D. Einzel, B. Stadlober, R. Hackl, D. H. Leach, and J. J. Neumeier, Phys. Rev. Lett. 72, 396 (1994).
* (37) J. A. Wilson, F. J. Di Salvo, and S. Mahajan, Phys. Rev. Lett. 32, 882 (1974).
* (38) R. H. Friend and A. D. Yoffe, Adv. Phys. 36, 1 (1987).
* (39) L. Brossard, M. Ribault, L. Valade, and P. Cassoux, J. Phys. 50, 1521 (1989).
# On Critical Dipoles in Dimensions $n\geqslant 3$
S. Blake Allan, Department of Mathematics, Baylor University, Sid Richardson
Bldg., 1410 S. 4th Street, Waco, TX 76706, USA (<EMAIL_ADDRESS>) and
Fritz Gesztesy, Department of Mathematics, Baylor University, Sid Richardson
Bldg., 1410 S. 4th Street, Waco, TX 76706, USA (<EMAIL_ADDRESS>; http://www.baylor.edu/math/index.php?id=935340)
###### Abstract.
We reconsider generalizations of Hardy’s inequality corresponding to the case
of (point) dipole potentials $V_{\gamma}(x)=\gamma(u,x)|x|^{-3}$,
$x\in{\mathbb{R}}^{n}\backslash\\{0\\}$, $\gamma\in[0,\infty)$,
$u\in{\mathbb{R}}^{n}$, $|u|=1$, $n\in{\mathbb{N}}$, $n\geqslant 3$. More
precisely, for $n\geqslant 3$, we provide an alternative proof of the
existence of a critical dipole coupling constant $\gamma_{c,n}>0$, such that
for all $\gamma\in[0,\gamma_{c,n}]$, and all $u\in{\mathbb{R}}^{n}$, $|u|=1$,
$\displaystyle\quad\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
f)(x)|^{2}\geqslant\pm\gamma\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,x)|x|^{-3}|f(x)|^{2},\quad
f\in D^{1}({\mathbb{R}}^{n}),$
with $D^{1}({\mathbb{R}}^{n})$ denoting the completion of
$C_{0}^{\infty}({\mathbb{R}}^{n})$ with respect to the norm induced by the
gradient. Here $\gamma_{c,n}$ is sharp, that is, the largest possible such
constant. Moreover, we discuss upper and lower bounds for $\gamma_{c,n}>0$ and
develop a numerical scheme for approximating $\gamma_{c,n}$.
This quadratic form inequality will be a consequence of the fact
$\overline{\big{[}-\Delta+\gamma(u,x)|x|^{-3}\big{]}\big{|}_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})}}\geqslant
0\,\text{ if and only if }\,0\leqslant\gamma\leqslant\gamma_{c,n}$
in $L^{2}({\mathbb{R}}^{n})$ (with $\overline{T}$ the operator closure of the
linear operator $T$).
We also consider the case of multicenter dipole interactions with dipoles
centered on an infinite discrete set.
###### Key words and phrases:
Hardy-type inequalities, Schrödinger operators, dipole potentials.
###### 2010 Mathematics Subject Classification:
Primary: 35A23, 35J30; Secondary: 47A63, 47F05.
J. Diff. Eq. (to appear).
###### Contents
1. Introduction
2. The Dipole Hamiltonian
3. Criticality
4. A Numerical Approach
5. Multicenter Extensions
A. Spherical Harmonics and the Laplace–Beltrami Operator in $L^{2}\big{(}{\mathbb{S}}^{n-1}\big{)}$, $n\geqslant 2$
## 1\. Introduction
The celebrated (multi-dimensional) Hardy inequality,
$\displaystyle\begin{split}\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
f)(x)|^{2}\geqslant[(n-2)/2]^{2}\int_{{\mathbb{R}}^{n}}d^{n}x\,|x|^{-2}|f(x)|^{2},&\\\
f\in D^{1}({\mathbb{R}}^{n}),\;n\in{\mathbb{N}},\;n\geqslant 3,&\end{split}$
(1.1)
the first in an infinite sequence of higher-order Birman–Hardy–Rellich-type
inequalities, received enormous attention in the literature due to its
ubiquity in self-adjointness and spectral theory problems associated with
second-order differential operators with strongly singular coefficients, see,
for instance, [4], [6, Ch. 1], [20, Sect. 1.5], [21, Ch. 5], [34], [41], [43],
[44], [45, Part 1], [50]–[54], [67, Ch. 2, Sect. 21], [74, Ch. 2], [77] and
the extensive literature cited therein. We also note that inequality
(1.1) is closely related to Heisenberg’s uncertainty relation as
discussed in [27].
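As a quick sanity check (our own illustration, not part of the paper's argument), inequality (1.1) can be verified symbolically for the radial trial function $f(x)=e^{-|x|}$ in $n=3$; after cancelling the common surface area of ${\mathbb{S}}^{2}$, both sides reduce to one-dimensional integrals in $r=|x|$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
n = 3
f = sp.exp(-r)  # radial trial function f(x) = e^{-|x|}

# Both sides of (1.1), with the common factor |S^{n-1}| dropped:
lhs = sp.integrate(sp.diff(f, r)**2 * r**(n - 1), (r, 0, sp.oo))
rhs = sp.Rational((n - 2)**2, 4) * sp.integrate(f**2 * r**(n - 3), (r, 0, sp.oo))

assert lhs == sp.Rational(1, 4) and rhs == sp.Rational(1, 8)
assert lhs >= rhs  # (1.1) holds for this trial function
```

Here the left-hand side evaluates to $1/4$ and the right-hand side to $1/8$, in agreement with (1.1).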
The basics behind the (point) dipole Hamiltonian $-\Delta+V_{\gamma}(x)$, with
potential
$V_{\gamma}(x)=\gamma\frac{(u,x)}{|x|^{3}},\quad
x\in{\mathbb{R}}^{n}\backslash\\{0\\},\;\gamma\in[0,\infty),\;u\in{\mathbb{R}}^{n},\;|u|=1,\;n\geqslant
3$ (1.2)
(with $(a,b)$ denoting the Euclidean scalar product of
$a,b\in{\mathbb{R}}^{n}$), in the physically relevant case $n=3$, have been
discussed in great detail in the 1980 paper by Hunziker and Günther [48]. In
particular, these authors point out some of the existing fallacies to be found
in the physics literature in connection with dipole potentials and their
ability to bind electrons. The primary goal of this paper is to extend
the three-dimensional results on dipole potentials in [48] to the
general case $n\geqslant 4$, thereby rederiving and complementing some of
the results obtained by Felli, Marchini, and Terracini [29], [30] (see also
[28], [31], [80]). While Felli, Marchini, and Terracini primarily rely on
variational techniques, we will focus more on an operator and spectral
theoretic approach. To facilitate a comparison between the existing literature
on this topic and the results presented in the present paper, we next
summarize some of the principal achievements in [28], [29], [30], [48], [80].
However, we first emphasize that these sources also discuss a number of facts
that go beyond the scope of our paper: For instance, Hunziker and Günther [48]
also consider non-binding criteria for Hamiltonians with $M$ point charges and
applications to electronic spectra of an $N$-electron Hamiltonian in the
presence of $M$ point charges (nuclei). In addition, Felli, Marchini and
Terracini [29], [30] discuss more general operators where the point dipole
potential $V_{\gamma}$ in (1.2) is replaced by (for simplicity of notation,
we will omit the standard surface measure $d^{n-1}\omega$ in
$L^{2}({\mathbb{S}}^{n-1})$ and, similarly, the Lebesgue measure $d^{n}x$ in
$L^{2}({\mathbb{R}}^{n})$)
$a(x/|x|)|x|^{-2},\;x\in{\mathbb{R}}^{n}\backslash\\{0\\},\quad a\in
L^{\infty}({\mathbb{S}}^{n-1}),$ (1.3)
and hence (1.2) represents the special case
$a_{\gamma}(x/|x|)=\gamma(u,x/|x|),\quad\gamma\in[0,\infty),\;u\in{\mathbb{R}}^{n},\,|u|=1,\;x\in{\mathbb{R}}^{n}\backslash\\{0\\}.$
(1.4)
These authors also provide a discussion of strict positivity of the underlying
quadratic form $Q_{\\{\gamma_{j}\\}_{1\leqslant j\leqslant
M}}(\,\cdot\,,\,\cdot\,)$, $M\in{\mathbb{N}}$, in the multi-center case,
$\displaystyle\begin{split}Q_{\\{\gamma_{j}\\}_{1\leqslant j\leqslant
M}}(f,f)=\int_{{\mathbb{R}}^{n}}d^{n}x\,\bigg{[}|(\nabla
f)(x)|^{2}+\sum_{j=1}^{M}\gamma_{j}\frac{(u,(x-x_{j}))}{|x-x_{j}|^{3}}|f(x)|^{2}\bigg{]},&\\\
\gamma_{j}\in[0,\infty),\;x_{j}\in{\mathbb{R}}^{n},\,x_{j}\neq x_{k}\text{ for
}j\neq k,\,1\leqslant j,k\leqslant M,\;f\in
D^{1}({\mathbb{R}}^{n}),&\end{split}$ (1.5)
and its analog with $\gamma_{j}(u,(x-x_{j})/|x-x_{j}|)$ replaced by
$a_{\gamma_{j}}(\,\cdot\,)$ restricted to suitable neighborhoods of $x_{j}$,
$1\leqslant j\leqslant M$. In this context also the problem of “localization
of binding”, a notion going back to Ovchinnikov and Sigal [68], is discussed
in [30]. In addition, applications to a class of nonlinear PDEs are discussed
in [80].
Turning to the topics directly treated in this paper and their relation to
results in [28], [29], [30], [48], [80], we start by noting that the dipole-
modified Hardy-type inequality reads as follows: For each $n\geqslant 3$,
there exists a critical dipole coupling constant $\gamma_{c,n}>0$, such that
$\displaystyle\begin{split}&\text{for all $\gamma\in[0,\gamma_{c,n}]$, and all
$u\in{\mathbb{R}}^{n}$, $|u|=1$,}\\\
&\quad\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
f)(x)|^{2}\geqslant\pm\gamma\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,x)|x|^{-3}|f(x)|^{2},\quad
f\in D^{1}({\mathbb{R}}^{n}).\end{split}$ (1.6)
Here $\gamma_{c,n}>0$ is optimal, that is, the largest possible such constant,
and we recall that $D^{1}({\mathbb{R}}^{n})$ denotes the completion of
$C_{0}^{\infty}({\mathbb{R}}^{n})$ with respect to the norm
$\big{(}\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla g)(x)|^{2}\big{)}^{1/2}$,
$g\in C_{0}^{\infty}({\mathbb{R}}^{n})$.
The critical constant $\gamma_{c,n}$ can be characterized by the Rayleigh
quotient
$\displaystyle\gamma_{c,n}^{-1}$ $\displaystyle=-\underset{f\in
D^{1}({\mathbb{R}}^{n})\setminus\\{0\\}}{\sup}\,\left\\{\frac{\int_{{\mathbb{R}}^{n}}\,d^{n}x\,(u,x)|x|^{-3}|f(x)|^{2}}{\int_{{\mathbb{R}}^{n}}d^{n}x\,|\nabla
f(x)|^{2}}\right\\}$ (1.7) $\displaystyle=\underset{\varphi\in
H_{0}^{1}((0,\pi))\backslash\\{0\\}}{\sup}\Bigg{\\{}\int_{0}^{\pi}d\theta_{n-1}\,[-\cos(\theta_{n-1})]|\varphi(\theta_{n-1})|^{2}$
$\displaystyle\quad\times\bigg{[}\int_{0}^{\pi}d\theta_{n-1}\,|\varphi^{\prime}(\theta_{n-1})|^{2}+[(n-2)(n-4)/4][\sin(\theta_{n-1})]^{-2}|\varphi(\theta_{n-1})|^{2}\bigg{]}^{-1}\Bigg{\\}},$
$\displaystyle\hskip 273.14662ptn\geqslant 3,$ (1.8)
see [29]. To obtain (1.8) one introduces polar coordinates,
$x=r\omega,\quad
r=|x|\in(0,\infty),\quad\omega=\omega(\theta_{1},\dots,\theta_{n-1})=x/|x|\in{\mathbb{S}}^{n-1},$
(1.9)
as in (A.1)–(A.4). By (1.1), clearly,
$(n-2)^{2}/4\leqslant\gamma_{c,n},\quad n\geqslant 3.$ (1.10)
The existence of $\gamma_{c,n}$ as a finite positive number is shown for $n=3$
in [48] and for $n\in{\mathbb{N}}$, $n\geqslant 3$, in [29].
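Any trial function in (1.8) yields a lower bound on $\gamma_{c,n}^{-1}$ and hence an upper bound on $\gamma_{c,n}$. As an illustration (the trial function is our own choice, not taken from the paper), $\varphi(\theta)=\sin(\theta)(1-\cos(\theta))\in H_{0}^{1}((0,\pi))$ for $n=3$ gives $\gamma_{c,3}\leqslant 5/2$, consistent with (1.10):

```python
import sympy as sp

th = sp.symbols('theta', positive=True)
n = 3

# Trial function in H_0^1((0, pi)), weighted toward theta = pi where -cos > 0
phi = sp.sin(th) * (1 - sp.cos(th))

num = sp.integrate(-sp.cos(th) * phi**2, (th, 0, sp.pi))
den = sp.integrate(sp.diff(phi, th)**2, (th, 0, sp.pi)) \
    + sp.Rational((n - 2) * (n - 4), 4) * sp.integrate((phi / sp.sin(th))**2, (th, 0, sp.pi))

quot = sp.simplify(num / den)  # a lower bound on 1/gamma_{c,3}, by (1.8)
upper = 1 / quot               # hence an upper bound on gamma_{c,3}

assert quot == sp.Rational(2, 5) and upper == sp.Rational(5, 2)
```

The quotient evaluates exactly to $2/5$, so $\gamma_{c,3}\leqslant 5/2$; this is compatible with the lower bound $\gamma_{c,3}\geqslant 1/4$ of (1.10).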
Next, one decomposes
$\displaystyle L_{\gamma}$ $\displaystyle=-\Delta+V_{\gamma}(x),\quad
x\in{\mathbb{R}}^{n}\backslash\\{0\\},$ (1.11)
$\displaystyle=\bigg{[}-\frac{d^{2}}{dr^{2}}-\frac{n-1}{r}\frac{d}{dr}\bigg{]}\otimes
I_{L^{2}({\mathbb{S}}^{n-1})}+\frac{1}{r^{2}}\otimes\Lambda_{\gamma,n},\quad
r\in(0,\infty),\;\gamma\geqslant 0,\;n\geqslant 3,$
acting in $L^{2}((0,\infty);r^{n-1}dr)\otimes L^{2}({\mathbb{S}}^{n-1})$,
where
$\displaystyle\Lambda_{\gamma,n}=-\Delta_{{\mathbb{S}}^{n-1}}+\gamma\cos(\theta_{n-1}),\quad\operatorname{dom}(\Lambda_{\gamma,n})=\operatorname{dom}(-\Delta_{{\mathbb{S}}^{n-1}}),\quad\gamma\geqslant
0,\;n\geqslant 3,$ (1.12)
with $-\Delta_{{\mathbb{S}}^{n-1}}$ the Laplace–Beltrami operator in
$L^{2}({\mathbb{S}}^{n-1})$ (see Appendix A). It is shown in [29] that
$\gamma_{c,n}$ is also characterized by
$\lambda_{\gamma_{c,n},n,0}=-(n-2)^{2}/4,\quad n\geqslant 3,$ (1.13)
where $\lambda_{\gamma,n,0}$ denotes the lowest eigenvalue of
$\Lambda_{\gamma,n}$. We rederive (1.13) via ODE methods and also prove in
Theorem 3.1, that
$\frac{d\lambda_{\gamma,n,0}}{d\gamma}\leqslant\frac{\lambda_{\gamma,n,0}}{\gamma}<0,\quad\gamma>0,\;n\geqslant
3$ (1.14)
(this extends the $n=3$ result in [48] to $n\in{\mathbb{N}}$, $n\geqslant 3$)
and also provide the two-sided bounds
$-\frac{\gamma^{2}}{(n-1)^{2}}\leqslant\lambda_{\gamma,n,0}\leqslant-\frac{\gamma}{2}\frac{I_{n/2}(2\gamma/(n-1))}{I_{(n-2)/2}(2\gamma/(n-1))}<0,\quad\gamma>0,\;n\geqslant
3,$ (1.15)
with $I_{\nu}(\,\cdot\,)$ the regular modified Bessel function of order
$\nu\in{\mathbb{C}}$. (The additional lower bound
$-\gamma\leqslant\lambda_{\gamma,n,0}$ is of course evident since
$-1\leqslant\cos(\theta_{n-1})$.) Moreover, employing the fact that
$-y^{\prime\prime}(x)+\big{\\{}\big{[}s^{2}-(1/4)\big{]}\big{/}\sin^{2}(x)\big{\\}}y(x)=zy(x),\quad
z\in{\mathbb{C}},\;s\in[0,\infty),\;x\in(0,\pi),$ (1.16)
is exactly solvable in terms of hypergeometric functions leads to inequality
(3.49), and combining the latter with (1.8) enables us to prove the existence
of $C_{0}\in(0,\infty)$ such that
$\gamma_{c,n}\underset{n\to\infty}{=}C_{0}(n-2)(n-4)[1+o(1)],$ (1.17)
and the two-sided bounds
$15\pi[(n-2)(n-4)+4]/32\geqslant\gamma_{c,n}\geqslant\begin{cases}1/4,&n=3,\\\
1,&n=4,\\\ 3^{3/2}[(n-2)(n-4)+1]/8,&n\geqslant 5,\end{cases}$ (1.18)
in Theorem 3.3.
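The bounds (1.15) and (1.18) can be probed numerically. The sketch below (our own, using a plain power-series evaluation of $I_{\nu}$ rather than a library routine) checks that the lower bound in (1.15) indeed lies below the upper bound, and that the window (1.18) for $\gamma_{c,n}$ is nonempty, on a small grid of $\gamma$ and $n$:

```python
import math

def bessel_iv(nu, z):
    """Power series for the modified Bessel function I_nu(z), nu >= 0, z > 0."""
    term = (z / 2)**nu / math.gamma(nu + 1)
    total = term
    k = 0
    while term > 1e-17 * total:
        term *= (z / 2)**2 / ((k + 1) * (k + 1 + nu))
        total += term
        k += 1
    return total

def lam_bounds(gamma, n):
    """Two-sided bounds (1.15) on the lowest eigenvalue lambda_{gamma,n,0}."""
    z = 2 * gamma / (n - 1)
    lower = -gamma**2 / (n - 1)**2
    upper = -(gamma / 2) * bessel_iv(n / 2, z) / bessel_iv((n - 2) / 2, z)
    return lower, upper

for n in range(3, 12):
    for gamma in (0.1, 1.0, 5.0, 25.0):
        lower, upper = lam_bounds(gamma, n)
        assert lower <= upper < 0  # consistency of (1.15)
    # two-sided bounds (1.18) for gamma_{c,n}
    up = 15 * math.pi * ((n - 2) * (n - 4) + 4) / 32
    low = {3: 0.25, 4: 1.0}.get(n, 3**1.5 * ((n - 2) * (n - 4) + 1) / 8)
    assert up >= low  # the window (1.18) is nonempty
```

For small $\gamma$ the upper bound in (1.15) behaves like $-\gamma^{2}/(n(n-1))$, which indeed lies above the lower bound $-\gamma^{2}/(n-1)^{2}$.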
Briefly turning to the content of each section, Section 2 offers a detailed
treatment of the angular momentum decomposition of $H_{\gamma}$, the self-
adjoint realization of $L_{\gamma}=-\Delta+V_{\gamma}(\,\cdot\,)$ (i.e., the
Friedrichs extension of
$L_{\gamma}|_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})}$) in
$L^{2}({\mathbb{R}}^{n})$, and hence together with Appendix A on spherical
harmonics and the Laplace–Beltrami operator in $L^{2}({\mathbb{S}}^{n-1})$, $n\geqslant
2$, provides the background information for the bulk of this paper. Equations
(1.14), (1.15), (1.17), and (1.18) represent our principal new results in
Section 3. Section 4 develops a numerical approach to $\gamma_{c,n}$ that
exhibits $-\gamma_{c,n}^{-1}$ as the smallest (negative) eigenvalue of a
particular triangular operator $K(\gamma_{c,n})$ in
$\ell^{2}({\mathbb{N}}_{0})$ with vanishing diagonal elements (cf. (4.20),
(4.25)). In addition, we prove that finite truncations of $K(\gamma_{c,n})$
yield a convergent and efficient approximation scheme for $\gamma_{c,n}$.
Finally, Section 5 considers the extension to multicenter dipole Hamiltonians
of the form
$L_{\\{\gamma_{j}\\}_{j\in J}}=-\Delta+\sum_{j\in
J}\gamma_{j}\frac{(u,(x-x_{j}))}{|x-x_{j}|^{3}}\chi_{B_{n}(x_{j};\varepsilon/4)}(x)+W_{0}(x),$
(1.19)
where $J\subseteq{\mathbb{N}}$ is an index set, $\varepsilon>0$,
$\chi_{B_{n}(x_{0};\eta)}$ denotes the characteristic function of the open
ball $B_{n}(x_{0};\eta)\subset{\mathbb{R}}^{n}$ with center
$x_{0}\in{\mathbb{R}}^{n}$ and radius $\eta>0$, $\\{x_{j}\\}_{j\in
J}\subset{\mathbb{R}}^{n}$, $n\in{\mathbb{N}}$, $n\geqslant 3$, and
$\displaystyle\begin{split}&\inf_{j,j^{\prime}\in
J}|x_{j}-x_{j^{\prime}}|\geqslant\varepsilon,\quad
0\leqslant\gamma_{j}\leqslant\gamma_{0}<\gamma_{c,n},\;j\in J,\\\ &\,W_{0}\in
L^{\infty}({\mathbb{R}}^{n}),\,\text{ $W_{0}$ real-valued a.e.~{}on
${\mathbb{R}}^{n}$.}\end{split}$ (1.20)
In particular, $\\{x_{j}\\}_{j\in J}$ is permitted to be an infinite, discrete
set, for instance, a lattice. Relying on results proven in [38], we derive the
optimal result that $L_{\\{\gamma_{j}\\}_{j\in
J}}|_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{x_{j}\\}_{j\in J})}$ is
bounded from below (resp., essentially self-adjoint) if each individual
$L_{\gamma_{j}}|_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})}$, $j\in
J$, is bounded from below (resp., essentially self-adjoint). This extends
results in [30], where $J$ is assumed to be finite.
## 2\. The Dipole Hamiltonian
In this section we provide a discussion of the angular momentum decomposition
of the $n$-dimensional Laplacian $-\Delta$, introduce the dipole Hamiltonian
$H_{\gamma}$, the principal object of this paper, and discuss an analogous
decomposition of the latter.
In spherical coordinates (A.1), the Laplace differential expression in $n$
dimensions takes the form
$\displaystyle-\Delta=-\frac{{\partial}^{2}}{{\partial}r^{2}}-\frac{n-1}{r}\frac{{\partial}}{{\partial}r}-\frac{1}{r^{2}}\Delta_{{\mathbb{S}}^{n-1}}$
(2.1)
where $-\Delta_{{\mathbb{S}}^{n-1}}$ denotes the Laplace–Beltrami
operator associated with the $(n-1)$-dimensional unit
sphere ${\mathbb{S}}^{n-1}$ in ${\mathbb{R}}^{n}$, see (A.16). (We call
$-\Delta$ the Laplacian to guarantee nonnegativity of the underlying
$L^{2}({\mathbb{R}}^{n})$-realization, and analogously for the
$L^{2}({\mathbb{S}}^{n-1})$-realization of the Laplace–Beltrami operator
$-\Delta_{{\mathbb{S}}^{n-1}}$.) When acting in
$L^{2}({\mathbb{R}}^{n})$, which in spherical coordinates can be written as
$L^{2}({\mathbb{R}}^{n})\simeq L^{2}((0,\infty);r^{n-1}dr)\otimes
L^{2}({\mathbb{S}}^{n-1})$, (2.1) becomes
$\displaystyle-\Delta=\bigg{[}-\frac{d^{2}}{dr^{2}}-\frac{n-1}{r}\frac{d}{dr}\bigg{]}\otimes
I_{L^{2}({\mathbb{S}}^{n-1})}-\frac{1}{r^{2}}\otimes\Delta_{{\mathbb{S}}^{n-1}}$
(2.2)
(with $I_{{\mathcal{X}}}$ denoting the identity operator in ${\mathcal{X}}$).
The Laplace–Beltrami operator $-\Delta_{{\mathbb{S}}^{n-1}}$ in
$L^{2}({\mathbb{S}}^{n-1})$, with domain
$\operatorname{dom}(-\Delta_{{\mathbb{S}}^{n-1}})=H^{2}\big{(}{\mathbb{S}}^{n-1}\big{)}$
(cf., e.g., [7]), is known to be essentially self-adjoint and nonnegative on
$C_{0}^{\infty}({\mathbb{S}}^{n-1})$ (cf. [20, Theorem 5.2.3]). Recalling the
treatment in [72, pp. 160–161], one decomposes the space
$L^{2}({\mathbb{S}}^{n-1})$ into an infinite orthogonal sum, yielding
$\displaystyle\begin{split}L^{2}({\mathbb{R}}^{n})&\simeq
L^{2}((0,\infty);r^{n-1}dr)\otimes L^{2}({\mathbb{S}}^{n-1})\\\
&=\bigoplus\limits_{\ell=0}^{\infty}L^{2}((0,\infty);r^{n-1}dr)\otimes{\mathcal{Y}}_{\ell}^{n},\end{split}$
(2.3)
where ${\mathcal{Y}}_{\ell}^{n}$ is the eigenspace of
$-\Delta_{{\mathbb{S}}^{n-1}}$ corresponding to the eigenvalue
$\ell(\ell+n-2)$, $\ell\in{\mathbb{N}}_{0}$, as
$\sigma(-\Delta_{{\mathbb{S}}^{n-1}})=\\{\ell(\ell+n-2)\\}_{\ell\in{\mathbb{N}}_{0}}.$
(2.4)
In particular, this results in
$\displaystyle-\Delta=\bigoplus\limits_{\ell=0}^{\infty}\left[-\frac{d^{2}}{dr^{2}}-\frac{n-1}{r}\frac{d}{dr}+\frac{\ell(\ell+n-2)}{r^{2}}\right]\otimes
I_{{\mathcal{Y}}_{\ell}^{n}},$ (2.5)
in the space (2.3).
To simplify matters, replacing the measure $r^{n-1}dr$ by $dr$ and
simultaneously removing the term $(n-1)r^{-1}(d/dr)$, one introduces the
unitary operator
$\displaystyle U_{n}=\begin{cases}L^{2}((0,\infty);r^{n-1}dr)\rightarrow
L^{2}((0,\infty);dr),\\\\[2.84526pt] f(r)\mapsto r^{(n-1)/2}f(r),\end{cases}$
(2.6)
under which (2.5) becomes
$\displaystyle-\Delta=\bigoplus\limits_{\ell=0}^{\infty}U_{n}^{-1}\left[-\frac{d^{2}}{dr^{2}}+\frac{[(n-1)(n-3)/4]+\ell(\ell+n-2)}{r^{2}}\right]U_{n}\otimes
I_{{\mathcal{Y}}_{\ell}^{n}}$ (2.7)
acting in the space (2.3). The precise self-adjoint $L^{2}$-realization of
$-\Delta$ in the space (2.3) then is of the form
$H_{0}=\bigoplus\limits_{\ell=0}^{\infty}U_{n}^{-1}h_{n,\ell}\,U_{n}\otimes
I_{{\mathcal{Y}}_{\ell}^{n}},$ (2.8)
where $h_{n,\ell}$, $\ell\in{\mathbb{N}}_{0}$, represents the Friedrichs
extension of
$\displaystyle\left[-\frac{d^{2}}{dr^{2}}+\frac{[(n-1)(n-3)/4]+\ell(\ell+n-2)}{r^{2}}\right]\bigg{|}_{C_{0}^{\infty}((0,\infty))},\quad\ell\in{\mathbb{N}}_{0},$
(2.9)
in $L^{2}((0,\infty);dr)$. For explicit operator domains and boundary
conditions (the latter for $n=2,3$ only) we refer to (2.34)–(2.37). It is
well-known (cf. [72, Sect. IX.7, Appendix to X.1]) that
$\displaystyle
H_{0}=-\Delta,\quad\operatorname{dom}(H_{0})=H^{2}({\mathbb{R}}^{n}),$ (2.10)
$\displaystyle H_{0}|_{C_{0}^{\infty}({\mathbb{R}}^{n})}\,\text{ is
essentially self-adjoint,}$ (2.11) $\displaystyle
H_{0}|_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})}\,\text{ is
essentially self-adjoint if and only if $n\geqslant 4$.}$ (2.12)
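The passage from (2.5) to (2.7) via the unitary $U_{n}$ of (2.6) is a routine Liouville transformation; a small symbolic check (our own) confirms the coefficient $[(n-1)(n-3)/4]+\ell(\ell+n-2)$ for symbolic $n$ and $\ell$:

```python
import sympy as sp

r, n, ell = sp.symbols('r n ell', positive=True)
g = sp.Function('g')

# f = U_n^{-1} g, i.e. f(r) = r^{-(n-1)/2} g(r), cf. (2.6)
f = r**(-(n - 1)/2) * g(r)

# radial part of -Delta on the ell-th eigenspace, cf. (2.5)
radial = -sp.diff(f, r, 2) - (n - 1)/r*sp.diff(f, r) + ell*(ell + n - 2)/r**2*f

# conjugate forward: U_n acts as multiplication by r^{(n-1)/2}
transformed = sp.expand(r**((n - 1)/2) * radial)

# expected radial expression, cf. (2.7)
expected = -sp.diff(g(r), r, 2) \
    + ((n - 1)*(n - 3)/4 + ell*(ell + n - 2))/r**2 * g(r)

assert sp.simplify(sp.expand(transformed - expected)) == 0
```

The first-derivative terms cancel precisely because the exponent $(n-1)/2$ matches the weight $r^{n-1}$ of the original measure.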
Next, we turn to the dipole potential
$\displaystyle V_{\gamma}(x)=\gamma\frac{(u,x)}{|x|^{3}},\quad
x\in{\mathbb{R}}^{n}\backslash\\{0\\},\;\gamma\geqslant 0,\;n\geqslant 2,$
(2.13)
where $u\in{\mathbb{R}}^{n}$ is a unit vector in the direction of the dipole,
the strength of the dipole equals $\gamma\geqslant 0$, and
$(\,\cdot\,,\,\cdot\,)$ represents the Euclidean scalar product in
${\mathbb{R}}^{n}$. Upon an appropriate rotation, one can always choose the
coordinate system in such a manner that $(u,x)=|x|\cos(\theta_{n-1})$,
implying
$\displaystyle V_{\gamma}(x)=\gamma\frac{\cos(\theta_{n-1})}{|x|^{2}},\quad
x\in{\mathbb{R}}^{n}\backslash\\{0\\},\;\gamma\geqslant 0,\;n\geqslant 2.$
(2.14)
In the following we primarily restrict ourselves to the case $n\geqslant 3$
and comment on the exceptional case $n=2$ at the end of Section 3. The
differential expression associated with Hamiltonian for this system then
becomes
$\displaystyle L_{\gamma}=-\Delta+V_{\gamma}(x),\quad
x\in{\mathbb{R}}^{n}\backslash\\{0\\},\;\gamma\geqslant 0,\;n\geqslant 3,$
(2.15)
acting in $L^{2}({\mathbb{R}}^{n})$. In analogy to (2.2), (2.15) can be
represented as
$\displaystyle
L_{\gamma}=\bigg{[}-\frac{d^{2}}{dr^{2}}-\frac{n-1}{r}\frac{d}{dr}\bigg{]}\otimes
I_{L^{2}({\mathbb{S}}^{n-1})}+\frac{1}{r^{2}}\otimes\Lambda_{\gamma,n},\quad\gamma\geqslant
0,\;n\geqslant 3,$ (2.16)
acting in $L^{2}((0,\infty);r^{n-1}dr)\otimes L^{2}({\mathbb{S}}^{n-1})$,
where
$\displaystyle\Lambda_{\gamma,n}=-\Delta_{{\mathbb{S}}^{n-1}}+\gamma\cos(\theta_{n-1}),\quad\operatorname{dom}(\Lambda_{\gamma,n})=\operatorname{dom}(-\Delta_{{\mathbb{S}}^{n-1}}),\quad\gamma\geqslant
0,\;n\geqslant 3,$ (2.17)
is self-adjoint in $L^{2}({\mathbb{S}}^{n-1})$ (since
$\gamma\cos(\theta_{n-1})$ is a bounded self-adjoint operator in
$L^{2}({\mathbb{S}}^{n-1})$). Applying the angular momentum decomposition to
$L_{\gamma}$, but this time with respect to the eigenspaces of
$\Lambda_{\gamma,n}$, then results in
$\displaystyle\begin{split}L^{2}({\mathbb{R}}^{n})&=L^{2}((0,\infty);r^{n-1}\,dr)\otimes
L^{2}({\mathbb{S}}^{n-1})\\\
&=\bigoplus\limits_{\ell=0}^{\infty}L^{2}((0,\infty);r^{n-1}\,dr)\otimes{\mathcal{Y}}_{\gamma,\ell}^{n},\quad
n\geqslant 3,\end{split}$ (2.18)
where ${\mathcal{Y}}_{\gamma,\ell}^{n}$ represents the eigenspace of
$\Lambda_{\gamma,n}$ corresponding to the eigenvalue
$\lambda_{\gamma,n,\ell}$, as
$\sigma(\Lambda_{\gamma,n})=\\{\lambda_{\gamma,n,\ell}\\}_{\ell\in{\mathbb{N}}_{0}}.$
(2.19)
We will order the eigenvalues of $\Lambda_{\gamma,n}$ according to magnitude,
that is,
$\lambda_{\gamma,n,\ell}\leqslant\lambda_{\gamma,n,\ell+1},\quad\gamma\geqslant
0,\;\ell\in{\mathbb{N}}_{0},\;n\geqslant 3,$ (2.20)
repeating them according to their multiplicity. The analog of (2.7) in the
space (2.18) then becomes
$\displaystyle
L_{\gamma}=\bigoplus\limits_{\ell=0}^{\infty}U_{n}^{-1}\left[-\frac{d^{2}}{dr^{2}}+\frac{[(n-1)(n-3)/4]+\lambda_{\gamma,n,\ell}}{r^{2}}\right]U_{n}\otimes
I_{{\mathcal{Y}}_{\gamma,\ell}^{n}},\quad n\geqslant 3.$ (2.21)
###### Remark 2.1.
Since $e^{-t(-\Delta_{{\mathbb{S}}^{n-1}})}$, $t\geqslant 0$, has a continuous
and nonnegative integral kernel (see, e.g., [20, Theorem 5.2.1]), it is
positivity improving in $L^{2}({\mathbb{S}}^{n-1})$. Hence, so is
$e^{-t\Lambda_{\gamma,n}}$, $t\geqslant 0$, by (a special case of) [73,
Theorem XIII.45]. Thus, by [73, Theorem XIII.44] one concludes that
the lowest eigenvalue $\lambda_{\gamma,n,0}$ of $\Lambda_{\gamma,n}$ is simple
for all $\gamma\geqslant 0$. (2.22)
$\diamond$
In order to deal exclusively with operators which are bounded from below, we
now make the following assumption.
###### Hypothesis 2.2.
Suppose that $n\in{\mathbb{N}}$, $n\geqslant 3$, and $\gamma\geqslant 0$ are
such that
$\lambda_{\gamma,n,0}\geqslant-(n-2)^{2}/4.$ (2.23)
Inequality (2.23) is inspired by Hardy’s inequality (1.1) (cf. [6, Sect.
1.2], [56, p. 345], [58, Ch. 3], [59, Ch. 1], [67, Ch. 1]), which in turn
implies
$\bigg{[}-\frac{d^{2}}{dr^{2}}+\frac{c}{r^{2}}\bigg{]}\bigg{|}_{C_{0}^{\infty}((0,\infty))}\geqslant
0\,\text{ if and only if $c\geqslant-1/4$.}$ (2.24)
In fact, “$\geqslant 0$” in (2.24) can be replaced by “bounded from below”.
Assumption (2.23) is equivalent to
$[(n-1)(n-3)/4]+\lambda_{\gamma,n,0}\geqslant-1/4.$ (2.25)
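The equivalence of (2.23) and (2.25) is elementary arithmetic, $-1/4-(n-1)(n-3)/4=-(n-2)^{2}/4$, which a one-line symbolic check confirms:

```python
import sympy as sp

n = sp.symbols('n')
# (2.25) rearranged: lambda >= -1/4 - (n-1)(n-3)/4, which equals -(n-2)^2/4 of (2.23)
threshold_225 = -sp.Rational(1, 4) - (n - 1)*(n - 3)/4
threshold_223 = -(n - 2)**2/4
assert sp.expand(threshold_225 - threshold_223) == 0
```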
###### Remark 2.3.
Since the perturbation $\gamma\cos(\theta_{n-1})$, $\gamma\in[0,\infty)$, of
$-\Delta_{{\mathbb{S}}^{n-1}}$ in (2.17) is bounded from below and from above,
$-\gamma
I_{L^{2}({\mathbb{S}}^{n-1})}\leqslant\gamma\cos(\theta_{n-1})\leqslant\gamma
I_{L^{2}({\mathbb{S}}^{n-1})},$ (2.26)
and $-\Delta_{{\mathbb{S}}^{n-1}}\geqslant 0$, it is clear that
$\lambda_{\gamma,n,0}\geqslant-\gamma,\,\text{ that is,
}\,\Lambda_{\gamma,n}\geqslant-\gamma I_{L^{2}({\mathbb{S}}^{n-1})},$ (2.27)
and $\lambda_{0,n,0}=0$. In particular, for $n\geqslant 3$ and
$0\leqslant\gamma$ sufficiently small, Hypothesis 2.2 will be satisfied. We
are particularly interested in the existence of a critical $\gamma_{c,n}>0$
such that
$\lambda_{\gamma_{c,n},n,0}=-(n-2)^{2}/4,$ (2.28)
and whether or not
$\lambda_{\gamma,n,0}<-(n-2)^{2}/4,\quad\gamma\in(\gamma_{c,n},\gamma_{2}),$
(2.29)
for a $\gamma_{2}\in(\gamma_{c,n},\infty)$, with
$\lambda_{\gamma,n,0}\geqslant-(n-2)^{2}/4,\quad\gamma\in(\gamma_{2},\gamma_{3}),$
(2.30)
for a $\gamma_{3}\in(\gamma_{2},\infty)$, etc. This will be clarified in the
next section (demonstrating that $\gamma_{2}=\infty$). $\diamond$
Given Hypothesis 2.2, the precise self-adjoint
$L^{2}({\mathbb{R}}^{n})$-realization of $L_{\gamma}$ in the space (2.18) is
then of the form
$H_{\gamma}=\bigoplus\limits_{\ell=0}^{\infty}U_{n}^{-1}h_{\gamma,n,\ell}\,U_{n}\otimes
I_{{\mathcal{Y}}_{\gamma,\ell}^{n}},\quad\gamma\geqslant 0,\;n\geqslant 3,$
(2.31)
where $h_{\gamma,n,\ell}$, $\ell\in{\mathbb{N}}_{0}$, represents the
Friedrichs extension of
$\displaystyle\left[-\frac{d^{2}}{dr^{2}}+\frac{[(n-1)(n-3)/4]+\lambda_{\gamma,n,\ell}}{r^{2}}\right]\bigg{|}_{C_{0}^{\infty}((0,\infty))},\quad
r>0,\;\gamma\geqslant 0,\;n\geqslant 3,\;\ell\in{\mathbb{N}}_{0},$ (2.32)
in $L^{2}((0,\infty);dr)$. Explicitly, as discussed, for instance, in [37],
[40], the Friedrichs extension of $h_{\gamma,n,\ell}$,
$\ell\in{\mathbb{N}}_{0}$, can be determined from the fact that the Friedrichs
extension $h_{\alpha,F}$ in $L^{2}((0,\infty);dr)$ of
$h_{\alpha}=\bigg{[}-\frac{d^{2}}{dr^{2}}+\frac{\alpha^{2}-(1/4)}{r^{2}}\bigg{]}\bigg{|}_{C_{0}^{\infty}((0,\infty))},\quad
r>0,\;\alpha\in[0,\infty),$ (2.33)
is given by
$\displaystyle
h_{\alpha,F}=-\frac{d^{2}}{dr^{2}}+\frac{\alpha^{2}-(1/4)}{r^{2}},\quad
r>0,\;\alpha\in[0,\infty),$ (2.34)
$\displaystyle\operatorname{dom}\big{(}h_{\alpha,F}\big{)}=\big{\\{}f\in
L^{2}((0,\infty);dr)\,\big{|}\,f,f^{\prime}\in
AC_{loc}((0,\infty));\,\widetilde{f}_{\alpha}(0)=0;$ (2.35)
$\displaystyle\hskip
71.13188pt(-f^{\prime\prime}+\big{[}\alpha^{2}-(1/4)\big{]}r^{-2}f)\in
L^{2}((0,\infty);dr)\big{\\}},\quad\alpha\in[0,1),$
$\displaystyle\operatorname{dom}\big{(}h_{\alpha,F}\big{)}=\big{\\{}f\in
L^{2}((0,\infty);dr)\,\big{|}\,f,f^{\prime}\in AC_{loc}((0,\infty));$ (2.36)
$\displaystyle\hskip
71.13188pt(-f^{\prime\prime}+\big{[}\alpha^{2}-(1/4)\big{]}r^{-2}f)\in
L^{2}((0,\infty);dr)\big{\\}},\quad\alpha\in[1,\infty),$
where
$\widetilde{f}_{\alpha}(0)=\begin{cases}\lim_{r\downarrow
0}\big{[}r^{1/2}\text{\rm ln}(1/r)\big{]}^{-1}f(r),&\alpha=0,\\\\[2.84526pt]
\lim_{r\downarrow 0}2\alpha r^{\alpha-(1/2)}f(r),&\alpha\in(0,1).\end{cases}$
(2.37)
Next we note the following fact.
###### Lemma 2.4.
Given the operator $\Lambda_{\gamma,n}$, $\gamma\geqslant 0$, in
$L^{2}({\mathbb{S}}^{n-1})$ as introduced in (2.17), one infers that
$\lim_{\gamma\downarrow
0}\lambda_{\gamma,n,\ell}=\ell(\ell+n-2),\quad\ell\in{\mathbb{N}}_{0},$ (2.38)
recalling that $\\{\ell(\ell+n-2)\\}_{\ell\in{\mathbb{N}}_{0}}$ are the
corresponding eigenvalues of the unperturbed operator,
$\Lambda_{0,n}=-\Delta_{{\mathbb{S}}^{n-1}}$, the Laplace–Beltrami operator
$($cf. (2.4)$)$.
###### Proof.
This is a special case of Rellich’s theorem in the form recorded, for
instance, in [73, Theorems XII.3 and XII.13]. ∎
###### Lemma 2.5.
Assume Hypothesis 2.2, that is, suppose that
$\lambda_{\gamma,n,0}\geqslant-(n-2)^{2}/4,\quad n\geqslant 3.$ (2.39)
Then $H_{\gamma}$ has purely absolutely continuous spectrum,
$\sigma(H_{\gamma})=\sigma_{ac}(H_{\gamma})=[0,\infty).$ (2.40)
###### Proof.
First, one notes that $H_{\gamma}$ is bounded from below if and only if each
$h_{\gamma,n,\ell}$, $\ell\in{\mathbb{N}}_{0}$, is bounded from below. The
ordinary differential operators $h_{\gamma,n,\ell}$,
$\ell\in{\mathbb{N}}_{0}$, are well-known to have purely absolutely continuous
spectrum equal to $[0,\infty)$, as proven, for instance, in [25] and [42]. Thus
the result follows from the special case of direct sums (instead of direct
integrals) in [73, Theorem XIII.85 (f)]. ∎
## 3\. Criticality
We now turn to one of the principal questions, namely, for which
$\gamma\geqslant 0$ the operator $H_{\gamma}$ is bounded from below.
The natural space to which Hardy’s inequality and its analog in connection
with a dipole potential extend is the space $D^{1}({\mathbb{R}}^{n})$
(sometimes also denoted $D_{0}^{1}({\mathbb{R}}^{n})$, or
$D^{1,2}({\mathbb{R}}^{n})$) obtained as the closure of
$C_{0}^{\infty}({\mathbb{R}}^{n})$ with respect to the gradient norm,
$\displaystyle
D^{1}({\mathbb{R}}^{n})=\overline{C_{0}^{\infty}({\mathbb{R}}^{n})}^{\|\,\cdot\,\|_{\nabla}},\quad\|f\|_{\nabla}=\bigg{(}\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
f)(x)|^{2}\bigg{)}^{1/2},\quad f\in C_{0}^{\infty}({\mathbb{R}}^{n}),$ (3.1)
see also [61, pp. 201–204].
###### Theorem 3.1.
Assume Hypothesis 2.2. Then for all $n\geqslant 3$, there exists a unique
critical dipole moment $\gamma_{c,n}>0$ characterized by
$\lambda_{\gamma_{c,n},n,0}=-(n-2)^{2}/4$ (3.2)
$($cf. (2.28) in Remark 2.3$)$. Moreover, $\lambda_{\gamma,n,0}$ is strictly
monotonically decreasing with respect to $\gamma\in(0,\infty)$,
$\lambda_{0,n,0}=0$, and
$\frac{d\lambda_{\gamma,n,0}}{d\gamma}\leqslant\frac{\lambda_{\gamma,n,0}}{\gamma}<0\,\text{
as well as }\,\lambda_{\gamma,n,0}\geqslant-\gamma,\quad\gamma\in(0,\infty).$
(3.3)
Moreover,
$-\frac{\gamma^{2}}{(n-1)^{2}}\leqslant\lambda_{\gamma,n,0}\leqslant-\frac{\gamma}{2}\frac{I_{n/2}(2\gamma/(n-1))}{I_{(n-2)/2}(2\gamma/(n-1))}<0,\quad\gamma\in(0,\infty),$
(3.4)
hold. In particular, $H_{\gamma}$ is bounded from below, and then
$H_{\gamma}\geqslant 0$, if and only if $\gamma\in[0,\gamma_{c,n}]$.
Consequently,
$\displaystyle\begin{split}&\text{for all $\gamma\in[0,\gamma_{c,n}]$, and all
$u\in{\mathbb{R}}^{n}$, $|u|=1$,}\\\
&\quad\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
f)(x)|^{2}\geqslant\pm\gamma\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,x)|x|^{-3}|f(x)|^{2},\quad
f\in D^{1}({\mathbb{R}}^{n}).\end{split}$ (3.5)
The constant $\gamma_{c,n}>0$ in (3.5) is optimal $($i.e., the largest
possible $)$, in addition,
$\gamma_{c,n}\geqslant(n-2)^{2}/4.$ (3.6)
Finally,
$\sigma(H_{\gamma})=\sigma_{ac}(H_{\gamma})=[0,\infty),\quad\gamma\in[0,\gamma_{c,n}].$
(3.7)
###### Proof.
Existence of some critical dipole moment $\gamma_{c,n}>0$ is clear from the
discussion in Remark 2.3. To prove the remaining claims regarding
$\lambda_{\gamma,n,0}$ in Theorem 3.1, we seek spherical harmonics depending
only on the final angle $\theta_{n-1}$, since this is the only angular variable
on which $V_{\gamma}(\,\cdot\,)$ depends. From (A.10)–(A.13), one infers that these
are precisely the ones indexed by the particular multi-indices
$(\ell,0,\ldots,0)\in{\mathbb{N}}_{0}^{n}$, that is (cf. (A.10)–(A.14)),
$\displaystyle\begin{split}Y_{(\ell,0,\ldots,0)}(\theta_{n-1})=\bigg{[}\frac{[(n-2)/2](n-2)_{\ell}}{\ell!(\ell+[(n-2)/2])}\bigg{]}^{1/2}C_{\ell}^{(n-2)/2}(\cos(\theta_{n-1})),&\\\\[2.84526pt]
\ell\in{\mathbb{N}}_{0},\;\theta_{n-1}\in[0,\pi).&\end{split}$ (3.8)
Introducing the subspace
$\displaystyle{\mathcal{L}}^{n}={\rm
lin.span}\\{Y_{(\ell,0,\ldots,0)}\\}_{\ell\in{\mathbb{N}}_{0}},$ (3.9)
and restricting the Laplace–Beltrami differential expression (A.16) to
${\mathcal{L}}^{n}$, one finds for (2.17),
$\displaystyle\Lambda_{\gamma,{\mathcal{L}}^{n}}$
$\displaystyle=-\frac{d^{2}}{d\theta_{n-1}^{2}}-(n-2)\cot(\theta_{n-1})\frac{d}{d\theta_{n-1}}+\gamma\cos(\theta_{n-1}),\quad\theta_{n-1}\in(0,\pi),$
(3.10)
acting on functions in
$L^{2}\big{(}(0,\pi);[\sin(\theta_{n-1})]^{n-2}d\theta_{n-1}\big{)}$.
Reverting from the weighted measure $[\sin(\theta_{n-1})]^{n-2}d\theta_{n-1}$
to Lebesgue measure $d\theta_{n-1}$ on $(0,\pi)$ in a unitary fashion then
yields the differential expression
$\widetilde{\Lambda}_{\gamma,{\mathcal{L}}^{n}}$ given by
$\widetilde{\Lambda}_{\gamma,{\mathcal{L}}^{n}}=-\frac{d^{2}}{d\theta_{n-1}^{2}}+\frac{(n-2)(n-4)}{4\sin^{2}(\theta_{n-1})}-\frac{(n-2)^{2}}{4}+\gamma\cos(\theta_{n-1}),\quad\theta_{n-1}\in(0,\pi),$
(3.11)
now acting on functions in $L^{2}((0,\pi);d\theta_{n-1})$.
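As a numerical sanity check (not part of the original argument), the passage from (3.10) to (3.11) can be tested pointwise: applying (3.11) to $[\sin(\theta_{n-1})]^{(n-2)/2}f$ should agree with applying (3.10) to $f$ first and then multiplying by the weight. A minimal finite-difference sketch; the test function, evaluation point, and step size are arbitrary choices:

```python
import math

def check_unitary_reduction(n, gam, t, h=1e-4):
    """Pointwise discrepancy between tilde-Lambda applied to (sin^{(n-2)/2} f),
    as in (3.11), and sin^{(n-2)/2} * (Lambda f), as in (3.10)."""
    f = lambda s: math.exp(math.sin(s))            # arbitrary smooth test function
    w = lambda s: math.sin(s) ** ((n - 2) / 2)     # weight implementing the unitary
    g = lambda s: w(s) * f(s)

    d1 = lambda F, s: (F(s + h) - F(s - h)) / (2 * h)        # central 1st derivative
    d2 = lambda F, s: (F(s + h) - 2 * F(s) + F(s - h)) / h ** 2  # central 2nd derivative

    lam_f = (-d2(f, t) - (n - 2) * math.cos(t) / math.sin(t) * d1(f, t)
             + gam * math.cos(t) * f(t))                          # expression (3.10)
    lam_g = (-d2(g, t) + (n - 2) * (n - 4) / 4 / math.sin(t) ** 2 * g(t)
             - (n - 2) ** 2 / 4 * g(t) + gam * math.cos(t) * g(t))  # expression (3.11)
    return abs(lam_g - w(t) * lam_f)
```

The discrepancy is of the order of the finite-difference error, well below $10^{-5}$ for the parameters tried.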
Next, introducing the change of variable $\xi=\cos(\theta_{n-1})\in(-1,1)$,
$\Lambda_{\gamma,{\mathcal{L}}^{n}}$ in (3.10) turns into
$\displaystyle\begin{split}{\underline{\Lambda}}_{\gamma,{\mathcal{L}}^{n}}&=\big{(}1-\xi^{2}\big{)}^{-(n-3)/2}\bigg{[}-\frac{d}{d\xi}\big{(}1-\xi^{2}\big{)}^{(n-1)/2}\frac{d}{d\xi}+\gamma\big{(}1-\xi^{2}\big{)}^{(n-3)/2}\xi\bigg{]},\\\
&\hskip 239.00298pt\xi\in(-1,1),\end{split}$ (3.12)
acting on functions in
$L^{2}\Big{(}(-1,1);\big{(}1-\xi^{2}\big{)}^{(n-3)/2}d\xi\Big{)}$. We also
note that reverting from the weighted measure
$\big{(}1-\xi^{2}\big{)}^{(n-3)/2}d\xi$ to Lebesgue measure $d\xi$ on $(-1,1)$
in a unitary fashion then finally yields the differential expression
$\widetilde{\underline{\Lambda}}_{\gamma,{\mathcal{L}}^{n}}$ given by
$\displaystyle\begin{split}\widetilde{\underline{\Lambda}}_{\gamma,{\mathcal{L}}^{n}}&=-\frac{d}{d\xi}\big{(}1-\xi^{2}\big{)}\frac{d}{d\xi}+\frac{(n-3)^{2}}{4\big{(}1-\xi^{2}\big{)}}-\frac{(n-1)(n-3)}{4}+\gamma\,\xi,\\\
&\hskip 210.55022pt\quad\xi\in(-1,1),\end{split}$ (3.13)
acting on functions in $L^{2}((-1,1);d\xi)$. One observes that the first two
terms on the right-hand side of (3.13) represent the Legendre operator
$L_{\mu}$ in $L^{2}((-1,1);d\xi)$ associated with the differential expression
$L_{\mu}=-\frac{d}{d\xi}\big{(}1-\xi^{2}\big{)}\frac{d}{d\xi}+\frac{\mu^{2}}{\big{(}1-\xi^{2}\big{)}},\quad\mu\in[0,\infty),\;\xi\in(-1,1),$
(3.14)
which is in the limit circle case at $\pm 1$ if $\mu\in[0,1)$ and in the limit
point case at $\pm 1$ if $\mu\in[1,\infty)$, as discussed in detail in [24].
In particular, applying this fact to $\Lambda_{\gamma,{\mathcal{L}}^{n}}$,
$\widetilde{\Lambda}_{\gamma,{\mathcal{L}}^{n}}$,
${\underline{\Lambda}}_{\gamma,{\mathcal{L}}^{n}}$, and
$\widetilde{\underline{\Lambda}}_{\gamma,{\mathcal{L}}^{n}}$ yields the
necessity of the Friedrichs boundary condition for $n=3,4$, whereas for
$n\in{\mathbb{N}}$, $n\geqslant 5$, $\Lambda_{\gamma,{\mathcal{L}}^{n}}$ and
$\widetilde{\Lambda}_{\gamma,{\mathcal{L}}^{n}}$ (resp.,
${\underline{\Lambda}}_{\gamma,{\mathcal{L}}^{n}}$ and
$\widetilde{\underline{\Lambda}}_{\gamma,{\mathcal{L}}^{n}}$) are essentially
self-adjoint on $C_{0}^{\infty}((0,\pi))$ (resp., $C_{0}^{\infty}((-1,1))$)
and hence the associated maximally defined operators are self-adjoint. For the
explicit form of the Friedrichs boundary condition corresponding to (3.14) and
hence (3.13) we also refer to [24]. Due to the $\theta_{n-1}^{-2}$ (resp.,
$(\pi-\theta_{n-1})^{-2}$) singularity at $\theta_{n-1}=0$ (resp.,
$\theta_{n-1}=\pi$), the Friedrichs extension corresponding to
$\widetilde{\Lambda}_{\gamma,{\mathcal{L}}^{n}}$ in (3.11) is clear from
(2.33)–(2.37).
Following [48] in the special case $n=3$, choosing
$\psi\in\operatorname{dom}\big{(}{\underline{\Lambda}}_{\gamma,{\mathcal{L}}^{n}}\big{)}$
normalized,
$\|\psi\|_{L^{2}((-1,1);(1-\xi^{2})^{(n-3)/2}d\xi)}=1,$ (3.15)
an appropriate integration by parts yields
$\displaystyle(\psi,{\underline{\Lambda}}_{\gamma,{\mathcal{L}}^{n}}\psi)_{L^{2}((-1,1);(1-\xi^{2})^{(n-3)/2}d\xi)}$
$\displaystyle\quad=\int_{-1}^{1}d\xi\Big{[}\big{(}1-\xi^{2}\big{)}^{(n-1)/2}|\psi^{\prime}(\xi)|^{2}+\gamma\big{(}1-\xi^{2}\big{)}^{(n-3)/2}\xi|\psi(\xi)|^{2}\Big{]}$
(3.16)
$\displaystyle\quad=\int_{-1}^{1}d\xi\big{(}1-\xi^{2}\big{)}^{(n-1)/2}\Big{[}\big{|}\psi^{\prime}(\xi)+\gamma(n-1)^{-1}\psi(\xi)\big{|}^{2}-\gamma^{2}(n-1)^{-2}|\psi(\xi)|^{2}\Big{]}$
(3.17)
$\displaystyle\quad\geqslant-\frac{\gamma^{2}}{(n-1)^{2}}\int_{-1}^{1}d\xi\big{(}1-\xi^{2}\big{)}^{(n-1)/2}|\psi(\xi)|^{2}$
$\displaystyle\quad\geqslant-\frac{\gamma^{2}}{(n-1)^{2}}\int_{-1}^{1}d\xi\big{(}1-\xi^{2}\big{)}^{(n-3)/2}|\psi(\xi)|^{2}=-\frac{\gamma^{2}}{(n-1)^{2}}.$
(3.18)
In particular, choosing for $\psi$ a normalized eigenfunction of
${\underline{\Lambda}}_{\gamma,{\mathcal{L}}^{n}}$ corresponding to the
eigenvalue $\lambda_{\gamma,n,0}$ in (3.16) implies the lower bound
$\lambda_{\gamma,n,0}\geqslant-\gamma^{2}\big{/}(n-1)^{2}.$ (3.19)
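The equality of (3.16) and (3.17) (an integration by parts, using that the weight $(1-\xi^{2})^{(n-1)/2}$ vanishes at $\pm 1$) and the resulting lower bound (3.18)/(3.19) can be checked by direct quadrature; a sketch with the arbitrary test function $\psi(\xi)=\cos(\pi\xi/2)$:

```python
import math

def quad(F, a, b, N=20000):
    """Composite midpoint rule (never evaluates the endpoints)."""
    h = (b - a) / N
    return sum(F(a + (k + 0.5) * h) for k in range(N)) * h

def energy_forms(n, gam):
    psi  = lambda x: math.cos(math.pi * x / 2)               # psi(+-1) = 0
    dpsi = lambda x: -math.pi / 2 * math.sin(math.pi * x / 2)
    c = gam / (n - 1)
    # integrand of (3.16)
    E1 = quad(lambda x: (1 - x * x) ** ((n - 1) / 2) * dpsi(x) ** 2
              + gam * (1 - x * x) ** ((n - 3) / 2) * x * psi(x) ** 2, -1.0, 1.0)
    # integrand of (3.17): completed square
    E2 = quad(lambda x: (1 - x * x) ** ((n - 1) / 2)
              * ((dpsi(x) + c * psi(x)) ** 2 - c * c * psi(x) ** 2), -1.0, 1.0)
    norm = quad(lambda x: (1 - x * x) ** ((n - 3) / 2) * psi(x) ** 2, -1.0, 1.0)
    return E1, E2, norm
```

For instance, with $n=5$ and $\gamma=1.5$ one finds $E_1\approx E_2$ up to quadrature error, and $E_1/\|\psi\|^2\geqslant-\gamma^{2}/(n-1)^{2}$, in line with (3.18) and (3.19).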
On the other hand (following once more [48] in the special case $n=3$),
employing the normalized trial function (cf. [46, no. 3.387])
$\displaystyle\phi_{\gamma}(\xi)=C_{\gamma}\,e^{-\gamma(n-1)^{-1}\xi},\quad\xi\in(-1,1),$
$\displaystyle
C_{\gamma}=\pi^{-1/4}[\gamma/(n-1)]^{(n-2)/4}[\Gamma((n-1)/2)]^{-1/2}[I_{(n-2)/2}(2\gamma/(n-1))]^{-1/2},$
(3.20)
$\displaystyle\|\phi_{\gamma}\|_{L^{2}((-1,1);(1-\xi^{2})^{(n-3)/2}d\xi)}=1,$
with $I_{\nu}(\,\cdot\,)$ the regular modified Bessel function of order
$\nu\in{\mathbb{C}}$ (cf. [1, Sect. 9.6]), an application of the min/max
principle and (3.17) yield the upper bound
$\displaystyle\lambda_{\gamma,n,0}$
$\displaystyle\leqslant(\phi_{\gamma},{\underline{\Lambda}}_{\gamma,{\mathcal{L}}^{n}}\phi_{\gamma})_{L^{2}((-1,1);(1-\xi^{2})^{(n-3)/2}d\xi)}$
$\displaystyle=-\frac{\gamma^{2}}{(n-1)^{2}}\int_{-1}^{1}d\xi\,\big{(}1-\xi^{2}\big{)}^{(n-1)/2}\phi_{\gamma}(\xi)^{2}$
$\displaystyle=-\frac{\gamma}{2}\frac{I_{n/2}(2\gamma/(n-1))}{I_{(n-2)/2}(2\gamma/(n-1))}<0,\quad\gamma\in(0,\infty),$
(3.21)
employing [46, no. 3.387] once again. Thus, (3.21) implies that
$\lambda_{\gamma,n,0}<0,\quad\gamma\in(0,\infty),$ (3.22)
and one infers a quadratic upper bound as $\gamma\downarrow 0$,
$-\frac{\gamma}{2}\frac{I_{n/2}(2\gamma/(n-1))}{I_{(n-2)/2}(2\gamma/(n-1))}\underset{\gamma\downarrow
0}{=}-\frac{\gamma^{2}}{n(n-1)}\big{[}1+O\big{(}\gamma^{2}\big{)}\big{]},$
(3.23)
in addition to the quadratic lower bound in (3.19).
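The normalization in (3.20), the negativity of the upper bound (3.21), the ordering of the two-sided bounds in (3.4), and the small-$\gamma$ asymptotics (3.23) are all amenable to a quick numerical check. The sketch below implements $I_{\nu}$ via its defining power series rather than a library routine; all helper names are ours:

```python
import math

def besseli(nu, z, K=80):
    """Regular modified Bessel function I_nu(z) via its power series."""
    return sum((z / 2) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(K))

def quad(F, a, b, N=20000):
    h = (b - a) / N
    return sum(F(a + (k + 0.5) * h) for k in range(N)) * h

def norm_constant_sq(n, gam):
    """C_gamma^{-2}: the weighted L^2 norm squared of exp(-gam*xi/(n-1)),
    in the closed form of (3.20) (cf. [46, no. 3.387])."""
    return (math.sqrt(math.pi) * ((n - 1) / gam) ** ((n - 2) / 2)
            * math.gamma((n - 1) / 2) * besseli((n - 2) / 2, 2 * gam / (n - 1)))

def upper_bound(n, gam):
    """Right-hand side of (3.21), the variational upper bound on lambda_{gam,n,0}."""
    z = 2 * gam / (n - 1)
    return -gam / 2 * besseli(n / 2, z) / besseli((n - 2) / 2, z)
```

Direct quadrature of $\int_{-1}^{1}e^{-2\gamma\xi/(n-1)}(1-\xi^{2})^{(n-3)/2}\,d\xi$ reproduces `norm_constant_sq`; the upper bound is strictly negative, stays above $-\gamma^{2}/(n-1)^{2}$ as required by (3.4), and matches $-\gamma^{2}/[n(n-1)]$ to leading order as $\gamma\downarrow 0$, as in (3.23).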
Next, recalling that the lowest eigenvalue $\lambda_{\gamma,n,0}$ of
$\Lambda_{\gamma,n}$ is simple for all $\gamma\geqslant 0$ (and is also the
lowest eigenvalue of $\Lambda_{\gamma,{\mathcal{L}}^{n}}$,
$\widetilde{\Lambda}_{\gamma,{\mathcal{L}}^{n}}$, and
${\underline{\Lambda}}_{\gamma,{\mathcal{L}}^{n}}$), we denote by
$\psi_{\gamma,0}\in\operatorname{dom}(\Lambda_{\gamma,n})$ the corresponding
normalized eigenfunction, that is,
$\Lambda_{\gamma,n}\psi_{\gamma,0}=\lambda_{\gamma,n,0}\psi_{\gamma,0},\quad\|\psi_{\gamma,0}\|_{L^{2}({\mathbb{S}}^{n-1})}=1,\quad\gamma\in[0,\infty).$
(3.24)
Thus, one gets
$\displaystyle\begin{split}\lambda_{\gamma,n,0}&=(\psi_{\gamma,0},\Lambda_{\gamma,n}\psi_{\gamma,0})_{L^{2}({\mathbb{S}}^{n-1})}\\\
&=(\psi_{\gamma,0},[-\Delta_{{\mathbb{S}}^{n-1}}+\gamma\cos(\theta_{n-1})]\psi_{\gamma,0})_{L^{2}({\mathbb{S}}^{n-1})}.\end{split}$
(3.25)
Moreover, one observes that $\\{\Lambda_{\gamma,n}\\}_{\gamma\in[0,\infty)}$
is a self-adjoint analytic (in fact, entire) family of type $(A)$ in the sense
of Kato (cf. [56, Sect. VII.2, p. 375–379], [73, p. 16]), implying analyticity
of $\lambda_{\gamma,n,0}$ and $\psi_{\gamma,0}$ with respect to $\gamma$ in a
complex neighborhood of $[0,\infty)$. In particular, $\lambda_{\gamma,n,0}$ is
differentiable with respect to $\gamma$, and the Feynman–Hellmann Theorem [82,
p. 151] (see also [76, Theorem 1.4.7]) yields that
$\displaystyle\frac{d\lambda_{\gamma,n,0}}{d\gamma}=(\psi_{\gamma,0},\cos(\theta_{n-1})\psi_{\gamma,0})_{L^{2}({\mathbb{S}}^{n-1})},\quad\gamma\in(0,\infty).$
(3.26)
Returning to the discussion of (2.27) in Remark 2.3, employing
$-\Delta_{{\mathbb{S}}^{n-1}}\geqslant 0$, one obtains
$\lambda_{\gamma,n,0}=(\psi_{\gamma,0},\Lambda_{\gamma,n}\psi_{\gamma,0})_{L^{2}({\mathbb{S}}^{n-1})}\geqslant(\psi_{\gamma,0},\gamma\cos(\theta_{n-1})\psi_{\gamma,0})_{L^{2}({\mathbb{S}}^{n-1})}\geqslant-\gamma,$
(3.27)
implying,
$\displaystyle\frac{d\lambda_{\gamma,n,0}}{d\gamma}=(\psi_{\gamma,0},\cos(\theta_{n-1})\psi_{\gamma,0})_{L^{2}({\mathbb{S}}^{n-1})}\leqslant\frac{\lambda_{\gamma,n,0}}{\gamma}<0,\quad\gamma\in(0,\infty),$
(3.28)
by the strict negativity of $\lambda_{\gamma,n,0}$ for $\gamma>0$ derived in
(3.21).
Given the existence of a unique critical dipole moment $\gamma_{c,n}>0$ one
concludes from (2.24), (2.31), and (2.32) the following fact:
$\displaystyle\begin{split}&H_{\gamma}|_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})}\,\text{
is bounded from below, in fact, nonnegative,}\\\ &\quad\text{if and only if
}\,\gamma\in[0,\gamma_{c,n}],\end{split}$ (3.29)
and an integration by parts thus yields
$\pm\gamma\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,x)|x|^{-3}|g(x)|^{2}\leqslant\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
g)(x)|^{2},\quad g\in
C_{0}^{\infty}({\mathbb{R}}^{n}),\quad\gamma\in[0,\gamma_{c,n}].$ (3.30)
It remains to extend (3.30) to elements $f\in D^{1}({\mathbb{R}}^{n})$. As in
the case of the Hardy inequality (1.1), this follows from invoking a
Fatou-type argument to be outlined next.
Since $C_{0}^{\infty}({\mathbb{R}}^{n})$ is dense in
$D^{1}({\mathbb{R}}^{n})$, given $f\in D^{1}({\mathbb{R}}^{n})$ we pick a
sequence $\\{f_{j}\\}_{j\in{\mathbb{N}}}\subset
C_{0}^{\infty}({\mathbb{R}}^{n})$ such that
$\lim_{j\to\infty}\|f_{j}-f\|_{D^{1}({\mathbb{R}}^{n})}=0$, and, by passing to
a subsequence, we may assume without loss of generality (see (3.31) below)
that $f_{j}\underset{j\to\infty}{\longrightarrow}f$ a.e. on
${\mathbb{R}}^{n}$. (For the remainder of this proof $f,f_{j}$,
$j\in{\mathbb{N}}$, will always be assumed to have the properties just
discussed.) Indeed, the Sobolev inequality (see, e.g., [61, Theorem 8.3],
[79]),
$\displaystyle\begin{split}&\|\nabla
f\|_{L^{2}({\mathbb{R}}^{n})}^{2}\geqslant
S_{n}\|f\|_{L^{2^{*}}({\mathbb{R}}^{n})}^{2},\quad f\in
D^{1}({\mathbb{R}}^{n}),\quad 2^{*}=2n/(n-2),\\\
&\,S_{n}=[n(n-2)/4]2^{2/n}\pi^{(n+1)/n}\Gamma((n+1)/2)^{-2/n},\quad n\geqslant
3,\end{split}$ (3.31)
($\Gamma(\,\cdot\,)$ the Gamma function, cf. [1, Sect. 6.1]), yields
convergence of $f_{j}$ to $f$ in $L^{2^{*}}({\mathbb{R}}^{n})$ and hence
permits the selection of a subsequence that converges pointwise a.e. Thus,
given Hardy’s inequality for functions in $C_{0}^{\infty}({\mathbb{R}}^{n})$,
a well-known fact (see, e.g., [6, Corollary 1.2.6]),
$\big{[}(n-2)^{2}/4\big{]}\int_{{\mathbb{R}}^{n}}d^{n}x\,|x|^{-2}|g(x)|^{2}\leqslant\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
g)(x)|^{2},\quad g\in C_{0}^{\infty}({\mathbb{R}}^{n}),$ (3.32)
one obtains,
$\displaystyle\big{[}(n-2)^{2}/4\big{]}\int_{{\mathbb{R}}^{n}}d^{n}x\,|x|^{-2}|f_{j}(x)|^{2}\leqslant\int_{{\mathbb{R}}^{n}}d^{n}x\,|[\nabla(f_{j}-f+f)](x)|^{2}$
$\displaystyle\quad\leqslant
2\int_{{\mathbb{R}}^{n}}d^{n}x\,|[\nabla(f_{j}-f)](x)|^{2}+2\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
f)(x)|^{2}\leqslant C,$ (3.33)
for some $C\in(0,\infty)$ independent of $j\in{\mathbb{N}}$. Thus,
$\big{[}(n-2)^{2}/4\big{]}\int_{{\mathbb{R}}^{n}}d^{n}x\,|x|^{-2}|f(x)|^{2}\leqslant
C,\quad f\in D^{1}({\mathbb{R}}^{n}),$ (3.34)
by a consequence of Fatou’s Lemma (see, e.g., [61, p. 21]). Hence,
$\displaystyle\big{[}(n-2)^{2}/4\big{]}\int_{{\mathbb{R}}^{n}}d^{n}x\,|x|^{-2}|f(x)|^{2}=\big{[}(n-2)^{2}/4\big{]}\int_{{\mathbb{R}}^{n}}d^{n}x\,\lim_{j\to\infty}|x|^{-2}|f_{j}(x)|^{2}$
$\displaystyle\quad=\big{[}(n-2)^{2}/4\big{]}\int_{{\mathbb{R}}^{n}}d^{n}x\,\liminf_{j\to\infty}|x|^{-2}|f_{j}(x)|^{2}$
$\displaystyle\quad\leqslant\big{[}(n-2)^{2}/4\big{]}\liminf_{j\to\infty}\int_{{\mathbb{R}}^{n}}d^{n}x\,|x|^{-2}|f_{j}(x)|^{2}\quad\text{(by
Fatou's Lemma)}$
$\displaystyle\quad\leqslant\liminf_{j\to\infty}\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
f_{j})(x)|^{2}\quad\text{(by (3.32))}$
$\displaystyle\quad=\lim_{j\to\infty}\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
f_{j})(x)|^{2}=\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla f)(x)|^{2},$ (3.35)
extends Hardy’s inequality (3.32) from $C_{0}^{\infty}({\mathbb{R}}^{n})$ to
$D^{1}({\mathbb{R}}^{n})$. Hardy’s inequality on $D^{1}({\mathbb{R}}^{n})$
also implies that
$\lim_{j\to\infty}\int_{{\mathbb{R}}^{n}}d^{n}x\,|x|^{-2}|f(x)-f_{j}(x)|^{2}=0,$
(3.36)
in particular,
$\lim_{j\to\infty}\int_{{\mathbb{R}}^{n}}d^{n}x\,|x|^{-2}|f_{j}(x)|^{2}=\int_{{\mathbb{R}}^{n}}d^{n}x\,|x|^{-2}|f(x)|^{2}.$
(3.37)
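Both inequalities driving this Fatou-type argument, the Sobolev inequality (3.31) and Hardy's inequality (3.32), reduce to one-dimensional integrals for radial functions and can then be illustrated numerically (the test functions $e^{-r}$ and the Gaussian below are arbitrary choices, not part of the proof):

```python
import math

def quad(F, a, b, N=20000):
    h = (b - a) / N
    return sum(F(a + (k + 0.5) * h) for k in range(N)) * h

def hardy_margin(n, R=40.0):
    """Hardy (3.32) for the radial function g(r) = exp(-r): the surface
    measure of S^{n-1} cancels, leaving a one-dimensional inequality."""
    lhs = (n - 2) ** 2 / 4 * quad(lambda r: r ** (n - 3) * math.exp(-2 * r), 0.0, R)
    rhs = quad(lambda r: r ** (n - 1) * math.exp(-2 * r), 0.0, R)
    return rhs - lhs   # nonnegative if Hardy holds for this g

def sobolev_margin(R=10.0):
    """Sobolev (3.31) for n = 3 and the Gaussian f(x) = exp(-|x|^2/2)."""
    n = 3
    S3 = (n * (n - 2) / 4) * 2 ** (2 / n) * math.pi ** ((n + 1) / n) \
         * math.gamma((n + 1) / 2) ** (-2 / n)
    grad2 = 4 * math.pi * quad(lambda r: r ** 2 * (r * math.exp(-r ** 2 / 2)) ** 2, 0.0, R)
    f6 = 4 * math.pi * quad(lambda r: r ** 2 * math.exp(-3 * r ** 2), 0.0, R)
    return grad2 - S3 * f6 ** (1 / 3)   # nonnegative if (3.31) holds for this f
```

For $g(r)=e^{-r}$ the Hardy margin equals $(n-2)\,\Gamma(n-2)/2^{n}>0$ exactly, so the numerical margin is comfortably positive.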
Since
$|(u,x)||x|^{-1}\leqslant 1,\quad x\in{\mathbb{R}}^{n}\backslash\\{0\\},\quad
u\in{\mathbb{R}}^{n},\;|u|=1,$ (3.38)
(3.34) also implies
$\big{[}(n-2)^{2}/4\big{]}\int_{{\mathbb{R}}^{n}}d^{n}x\,|(u,x)||x|^{-3}|f(x)|^{2}\leqslant
C,\quad f\in D^{1}({\mathbb{R}}^{n}),$ (3.39)
similarly, (3.36), (3.37), and Hölder’s inequality imply
$\lim_{j\to\infty}\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,x)|x|^{-3}|f_{j}(x)|^{2}=\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,x)|x|^{-3}|f(x)|^{2}.$
(3.40)
Thus, for $\gamma\in[0,\gamma_{c,n}]$,
$\displaystyle\pm\gamma\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,x)|x|^{-3}|f(x)|^{2}=\pm\lim_{j\to\infty}\gamma\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,x)|x|^{-3}|f_{j}(x)|^{2}\quad\text{(by
(3.40))}$
$\displaystyle\quad\leqslant\lim_{j\to\infty}\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
f_{j})(x)|^{2}\quad\text{(by (3.30))}$
$\displaystyle\quad=\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla f)(x)|^{2},$
(3.41)
finally implying (3.5). Moreover, (3.38) also yields
$\displaystyle\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla f)(x)|^{2}$
$\displaystyle\geqslant[(n-2)/2]^{2}\int_{{\mathbb{R}}^{n}}d^{n}x\,|x|^{-2}|f(x)|^{2}$
(3.42)
$\displaystyle\geqslant[(n-2)/2]^{2}\int_{{\mathbb{R}}^{n}}d^{n}x\,|(u,x)||x|^{-3}|f(x)|^{2},\quad
f\in D^{1}({\mathbb{R}}^{n}),$
and hence (3.6).
Finally, (3.7) is clear from Lemma 2.5 and the strict monotonicity of
$\lambda_{\gamma,n,0}$ with respect to $\gamma\geqslant 0$. ∎
###### Remark 3.2.
$(i)$ Theorem 3.1 demonstrates that $\gamma_{2}=\infty$ in Remark 2.3.
$(ii)$ Inequality (3.6), that is, $\gamma_{c,n}\geqslant(n-2)^{2}/4$, shows
that $\gamma_{c,n}$ grows at least like $cn^{2}$ for appropriate $c>0$ as
$n\to\infty$. $\diamond$
Next, we improve upon Remark 3.2 $(ii)$ for $n\geqslant 5$ as follows:
###### Theorem 3.3.
Assume Hypothesis 2.2. Then there exists $C_{0}\in(0,\infty)$ such that
$\gamma_{c,n}\underset{n\to\infty}{=}C_{0}(n-2)(n-4)[1+o(1)],$ (3.43)
in addition,
$15\pi[(n-2)(n-4)+4]/32\geqslant\gamma_{c,n}\geqslant\begin{cases}1/4,&n=3,\\\
1,&n=4,\\\ 3^{3/2}[(n-2)(n-4)+1]/8,&n\geqslant 5.\end{cases}$ (3.44)
###### Proof.
Employing [29, eq. (1) and Remark 1], one considers the Rayleigh quotient
$\displaystyle\begin{split}\Gamma_{n}\big{(}\gamma(u,x)|x|^{-1}\big{)}=-\gamma\underset{f\in
D^{1}({\mathbb{R}}^{n})\setminus\\{0\\}}{\sup}\,\left\\{\frac{\int_{{\mathbb{R}}^{n}}\,d^{n}x\,(u,x)|x|^{-3}|f(x)|^{2}}{\int_{{\mathbb{R}}^{n}}d^{n}x\,|\nabla
f(x)|^{2}}\right\\},&\\\ \gamma\in(0,\infty),\;n\geqslant 3,&\end{split}$
(3.45)
and notes that $\Gamma_{n}\big{(}\gamma(u,x)|x|^{-1}\big{)}\uparrow 1$ as
$\gamma\uparrow\gamma_{c,n}$, implying (cf. (3.11))
$\displaystyle\gamma_{c,n}^{-1}$ $\displaystyle=\underset{\varphi\in
H_{0}^{1}((0,\pi))\backslash\\{0\\}}{\sup}\Bigg{\\{}\int_{0}^{\pi}d\theta_{n-1}\,[-\cos(\theta_{n-1})]|\varphi(\theta_{n-1})|^{2}$
$\displaystyle\quad\times\bigg{[}\int_{0}^{\pi}d\theta_{n-1}\,|\varphi^{\prime}(\theta_{n-1})|^{2}+[(n-2)(n-4)/4][\sin(\theta_{n-1})]^{-2}|\varphi(\theta_{n-1})|^{2}\bigg{]}^{-1}\Bigg{\\}},$
$\displaystyle\hskip 270.30118ptn\geqslant 3.$ (3.46)
Employing the fact that
$\bigg{(}-\frac{d^{2}}{dx^{2}}+\frac{s^{2}-(1/4)}{\sin^{2}(x)}\bigg{)}\bigg{|}_{C_{0}^{\infty}((0,\pi))}\geqslant[(1/2)+s]^{2}I_{L^{2}((0,\pi);dx)},\quad
s\geqslant 0,$ (3.47)
(this follows from [36, Sect. 4] for $s>0$ and extends to $s=0$ utilizing [37,
Subsect. 6.1]) one concludes the following variant of Hardy’s inequality (upon
taking $s=0$) with optimal constants $1/4$,
$\int_{0}^{\pi}dx\,|\varphi^{\prime}(x)|^{2}\geqslant\frac{1}{4}\int_{0}^{\pi}dx\,\frac{|\varphi(x)|^{2}}{\sin^{2}(x)}+\frac{1}{4}\int_{0}^{\pi}dx\,|\varphi(x)|^{2},\quad\varphi\in
C_{0}^{\infty}((0,\pi)),$ (3.48)
which, by a density argument, extends to
$\int_{0}^{\pi}dx\,|\varphi^{\prime}(x)|^{2}\geqslant\frac{1}{4}\int_{0}^{\pi}dx\,\frac{|\varphi(x)|^{2}}{\sin^{2}(x)}+\frac{1}{4}\int_{0}^{\pi}dx\,|\varphi(x)|^{2},\quad\varphi\in
H_{0}^{1}((0,\pi))$ (3.49)
(see [39]). Thus, employing (3.49) in (3.46) yields
$\displaystyle\gamma_{c,n}^{-1}$ $\displaystyle\leqslant\underset{\varphi\in
H_{0}^{1}((0,\pi))\backslash\\{0\\}}{\sup}\Bigg{\\{}\int_{0}^{\pi}d\theta_{n-1}\,[-\cos(\theta_{n-1})]|\varphi(\theta_{n-1})|^{2}$
$\displaystyle\quad\times\bigg{[}\int_{0}^{\pi}d\theta_{n-1}\,\big{\\{}(1/4)+[(n-2)(n-4)/4]\big{\\}}[\sin(\theta_{n-1})]^{-2}|\varphi(\theta_{n-1})|^{2}\bigg{]}^{-1}\Bigg{\\}}$
$\displaystyle\leqslant\frac{4}{(n-2)(n-4)+1}\underset{\varphi\in
H_{0}^{1}((0,\pi))\backslash\\{0\\}}{\sup}\Bigg{\\{}\int_{0}^{\pi}[\sin(\theta_{n-1})]^{-2}d\theta_{n-1}\,$
$\displaystyle\hskip
163.60333pt\times[-\cos(\theta_{n-1})][\sin(\theta_{n-1})]^{2}|\varphi(\theta_{n-1})|^{2}$
$\displaystyle\hskip
113.81102pt\times\bigg{[}\int_{0}^{\pi}[\sin(\theta_{n-1})]^{-2}d\theta_{n-1}\,|\varphi(\theta_{n-1})|^{2}\bigg{]}^{-1}\Bigg{\\}}$
(3.50) $\displaystyle=\frac{4}{(n-2)(n-4)+1}\underset{\varphi\in
H_{0}^{1}((0,\pi))\backslash\\{0\\}}{\sup}\Bigg{\\{}\int_{\pi/2}^{\pi}[\sin(\theta_{n-1})]^{-2}d\theta_{n-1}$
$\displaystyle\hskip
163.60333pt\times[-\cos(\theta_{n-1})][\sin(\theta_{n-1})]^{2}|\varphi(\theta_{n-1})|^{2}$
$\displaystyle\hskip
139.41832pt\times\bigg{[}\int_{\pi/2}^{\pi}[\sin(\theta_{n-1})]^{-2}d\theta_{n-1}\,|\varphi(\theta_{n-1})|^{2}\bigg{]}^{-1}\Bigg{\\}}$
$\displaystyle\leqslant\big{[}8\big{/}3^{3/2}\big{]}[(n-2)(n-4)+1]^{-1},\quad
n\geqslant 4.$ (3.51)
Here we used the estimate,
$-\cos(\theta)\sin^{2}(\theta)\leqslant
2\big{/}3^{3/2},\quad\theta\in[\pi/2,\pi],$ (3.52)
and the fact that, due to the sign change of $\cos(\theta)$ as $\theta$ crosses
$\pi/2$, any part of the support of $\varphi(\,\cdot\,)$ lying in $[0,\pi/2]$
diminishes the numerator in (3.50) and increases the denominator in (3.50),
altogether diminishing the ratio in (3.50). Thus, one is justified in assuming
that $\varphi(\,\cdot\,)$ has support in $[\pi/2,\pi]$ only.
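The estimate (3.52) is elementary: with $t=-\cos(\theta)\in[0,1]$ one maximizes $t(1-t^{2})$, whose maximum $2/3^{3/2}$ occurs at $t=1/\sqrt{3}$, that is, at $\theta=\arccos(-1/\sqrt{3})$. A grid search confirms the constant:

```python
import math

def max_of_352(N=200001):
    """Maximize -cos(theta)*sin(theta)^2 over [pi/2, pi] on a fine grid."""
    best = -1.0
    for k in range(N):
        t = math.pi / 2 + (math.pi / 2) * k / (N - 1)
        best = max(best, -math.cos(t) * math.sin(t) ** 2)
    return best
```

The grid maximum is $2/3^{3/2}\approx 0.3849$ up to the grid resolution.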
In the case $n=3$, the factor $[(n-2)(n-4)+1]/4$ in (3.50) vanishes, and
hence we now employ the additional term
$\|\varphi\|^{2}_{L^{2}((0,\pi);dx)}/4$ in (3.49) to arrive at
$\displaystyle\gamma_{c,3}^{-1}$ $\displaystyle\leqslant\underset{\varphi\in
H_{0}^{1}((0,\pi))\backslash\\{0\\}}{\sup}\Bigg{\\{}\int_{0}^{\pi}d\theta_{2}\,[-\cos(\theta_{2})]|\varphi(\theta_{2})|^{2}\bigg{[}\frac{1}{4}\int_{0}^{\pi}d\theta_{2}\,|\varphi(\theta_{2})|^{2}\bigg{]}^{-1}\Bigg{\\}}$
$\displaystyle\leqslant 4\underset{\varphi\in
H_{0}^{1}((0,\pi))\backslash\\{0\\}}{\sup}\Bigg{\\{}\int_{\pi/2}^{\pi}d\theta_{2}\,[-\cos(\theta_{2})]|\varphi(\theta_{2})|^{2}\bigg{/}\int_{\pi/2}^{\pi}d\theta_{2}\,|\varphi(\theta_{2})|^{2}\Bigg{\\}}$
$\displaystyle\leqslant 4.$ (3.53)
Altogether, this implies the lower bound in (3.44) and hence improves on
Remark 3.2 $(ii)$ for $n\geqslant 5$. (For $n=4$ one can include the term
$\|\varphi\|^{2}_{L^{2}((0,\pi);dx)}/4$ to improve the lower bound, but the
actual details become so unwieldy that we refrain from doing so.) For $n=3,4$
we just recalled (3.6).
Next, introducing the functionals
$\displaystyle
F_{n}(\varphi)=\int_{0}^{\pi}d\theta_{n-1}\,[-\cos(\theta_{n-1})]|\varphi(\theta_{n-1})|^{2}$
$\displaystyle\quad\times\bigg{[}\int_{0}^{\pi}d\theta_{n-1}\,\big{\\{}|\varphi^{\prime}(\theta_{n-1})|^{2}+[(n-2)(n-4)/4][\sin(\theta_{n-1})]^{-2}|\varphi(\theta_{n-1})|^{2}\big{\\}}\bigg{]}^{-1},$
$\displaystyle\hskip 156.49014pt\varphi\in
H_{0}^{1}((0,\pi))\backslash\\{0\\},\;n\in{\mathbb{N}},\;n\geqslant 3,$ (3.54)
one concludes as in (3.51) that
$\displaystyle\begin{split}F_{n}(\varphi)&\leqslant\frac{\int_{\pi/2}^{\pi}[\sin(\theta_{n-1})]^{-2}d\theta_{n-1}\,[-\cos(\theta_{n-1})]\sin^{2}(\theta_{n-1})|\varphi(\theta_{n-1})|^{2}}{\int_{\pi/2}^{\pi}[\sin(\theta_{n-1})]^{-2}d\theta_{n-1}\,|\varphi(\theta_{n-1})|^{2}}\\\
&\leqslant 2\big{/}3^{3/2},\quad n\in{\mathbb{N}},\;n\geqslant 3,\end{split}$
(3.55)
so that $F_{n}(\varphi)$ is uniformly bounded with respect to $n$ and strictly
monotonically decreasing with respect to $n$. Consequently, also
$\gamma_{c,n}^{-1}(n-2)(n-4)/4=\underset{\varphi\in
H_{0}^{1}((0,\pi))\backslash\\{0\\}}{\sup}F_{n}(\varphi)$ (3.56)
is bounded and monotonically decreasing with respect to $n$ and hence has a
limit as $n\to\infty$, proving (3.43).
Finally, to prove the upper bound in (3.44) one can argue as follows.
Introducing $\varphi_{0}\in H^{1}_{0}((0,\pi))\backslash\\{0\\}$ via
$\varphi_{0}(\theta)=\begin{cases}0,&\theta\in[0,\pi/2],\\\
\sin(2\theta),&\theta\in[\pi/2,\pi],\end{cases}$ (3.57)
then,
$\displaystyle\int_{\pi/2}^{\pi}d\theta\,[-\cos(\theta)]\sin^{2}(2\theta)=8/15,$
$\displaystyle\int_{\pi/2}^{\pi}d\theta\,\big{\\{}4\cos^{2}(2\theta)+[(n-2)(n-4)/4][\sin(\theta)]^{-2}4\sin^{2}(\theta)\cos^{2}(\theta)\big{\\}}$
(3.58)
$\displaystyle\quad=\int_{\pi/2}^{\pi}d\theta\,\big{[}4\cos^{2}(2\theta)+(n-2)(n-4)\cos^{2}(\theta)\big{]}=\pi[4+(n-2)(n-4)]/4,$
and hence (cf. (3.46))
$\gamma_{c,n}^{-1}\geqslant\frac{32}{15\pi[(n-2)(n-4)+4]},\quad n\geqslant 3,$
(3.59)
completes the proof of (3.44). ∎
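The two integrals in (3.58) and the mutual consistency of the two-sided bounds in (3.44) can be verified numerically; the helper names below are ours:

```python
import math

def quad(F, a, b, N=20000):
    h = (b - a) / N
    return sum(F(a + (k + 0.5) * h) for k in range(N)) * h

def trial_integrals(n):
    """Numerator and denominator integrals of (3.58) for phi_0 in (3.57)."""
    num = quad(lambda t: -math.cos(t) * math.sin(2 * t) ** 2, math.pi / 2, math.pi)
    den = quad(lambda t: 4 * math.cos(2 * t) ** 2
               + (n - 2) * (n - 4) * math.cos(t) ** 2, math.pi / 2, math.pi)
    return num, den

def lower_344(n):
    """Lower bound on gamma_{c,n} from (3.44)."""
    if n == 3:
        return 0.25
    if n == 4:
        return 1.0
    return 3 ** 1.5 * ((n - 2) * (n - 4) + 1) / 8

def upper_344(n):
    """Upper bound on gamma_{c,n} from (3.44)/(3.59)."""
    return 15 * math.pi * ((n - 2) * (n - 4) + 4) / 32
```

The quadrature reproduces $8/15$ and $\pi[4+(n-2)(n-4)]/4$, and the lower bound stays below the upper bound for all tested $n$ (asymptotically their ratio tends to $15\pi/(4\cdot 3^{3/2})\approx 2.27$).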
###### Remark 3.4.
$(i)$ Since $H^{1}_{0}((0,\pi))$ embeds compactly into
$L^{2}((0,\pi);d\theta)$, the supremum in (3.46) (unlike that in
(3.45)$)$ is actually attained, that is, for a particular $\varphi_{n}\in
H^{1}_{0}((0,\pi))\backslash\\{0\\}$,
$\displaystyle\gamma_{c,n}^{-1}$
$\displaystyle=\int_{0}^{\pi}d\theta_{n-1}\,[-\cos(\theta_{n-1})]|\varphi_{n}(\theta_{n-1})|^{2}$
$\displaystyle\quad\times\bigg{[}\int_{0}^{\pi}d\theta_{n-1}\,|\varphi_{n}^{\prime}(\theta_{n-1})|^{2}+[(n-2)(n-4)/4][\sin(\theta_{n-1})]^{-2}|\varphi_{n}(\theta_{n-1})|^{2}\bigg{]}^{-1},$
$\displaystyle\hskip 256.0748pt\quad n\geqslant 3.$ (3.60)
However, since the $n$-dependence of $\varphi_{n}$ appears to be beyond our
control, computing the exact value of $C_{0}$ in (3.43) remains elusive.
$(ii)$ The differential equation underlying (3.46) is of the type
$-y^{\prime\prime}(\theta)+[(n-2)(n-4)/4][\sin(\theta)]^{-2}y(\theta)=-\gamma_{c,n}\cos(\theta)y(\theta),\quad\theta\in(0,\pi),$
(3.61)
which naturally leads to the Birman–Schwinger-type eigenvalue problem
$\displaystyle\Big{(}h_{n}^{-1/2}[-\cos(\theta)]h_{n}^{-1/2}v\Big{)}(\theta)=\lambda_{n}v(\theta),\quad
v=h_{n}^{1/2}y,$ (3.62)
where $h_{n}$ denotes the Friedrichs extension of the preminimal operator
$\overset{\textbf{\Large.}}{h}_{n,min}$ in $L^{2}((0,\pi);d\theta)$ defined by
$\big{(}\overset{\textbf{\Large.}}{h}_{n,min}g\big{)}(\theta)=-g^{\prime\prime}(\theta)+[(n-2)(n-4)/4][\sin(\theta)]^{-2}g(\theta),\quad
g\in C_{0}^{\infty}((0,\pi)).$ (3.63)
One observes that $\overset{\textbf{\Large.}}{h}_{n,min}$ is essentially self-
adjoint for $n\geqslant 5$ and hence boundary conditions at $\theta=0,\pi$,
familiar for singular second-order differential operators of Bessel-type (see
[37, Subsection 6.1]), are only required for $n=3,4$. The Birman–Schwinger
operator
$T_{n}=h_{n}^{-1/2}[-\cos(\theta)]h_{n}^{-1/2}$ (3.64)
in $L^{2}((0,\pi);d\theta)$ is compact (in fact, Hilbert–Schmidt) upon
inspecting its integral kernel and hence by the Rayleigh–Ritz quotient in
(3.46), $\gamma_{c,n}^{-1}$ is the largest eigenvalue of $T_{n}$. Finally,
introducing the unitary operator
$(Uf)(\theta)=f(\pi-\theta),\quad\theta\in(0,\pi),\;f\in
L^{2}((0,\pi);d\theta),$ (3.65)
in $L^{2}((0,\pi);d\theta)$, one verifies that
$UT_{n}U^{-1}=-T_{n},$ (3.66)
and hence the spectrum of $T_{n}$ is symmetric with respect to the origin.
$\diamond$
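The symmetry (3.66) rests on two structural facts: the reflection (3.65) commutes with $h_{n}$ (whose potential $[(n-2)(n-4)/4]\sin^{-2}(\theta)$ is symmetric about $\theta=\pi/2$) and anticommutes with multiplication by $-\cos(\theta)$; since $T_{n}=h_{n}^{-1/2}[-\cos(\theta)]h_{n}^{-1/2}$, this yields $UT_{n}U^{-1}=-T_{n}$. Both facts are visible already at the level of a finite-difference discretization, as the following sketch (grid size an arbitrary choice) illustrates:

```python
import math

def discretize(n, N=16):
    """Finite-difference h_n (Dirichlet) and the multiplier -cos on the
    interior grid theta_i = i*pi/N, i = 1, ..., N-1."""
    h = math.pi / N
    theta = [(i + 1) * h for i in range(N - 1)]
    m = N - 1
    H = [[0.0] * m for _ in range(m)]
    for i in range(m):
        H[i][i] = 2 / h ** 2 + ((n - 2) * (n - 4) / 4) / math.sin(theta[i]) ** 2
        if i + 1 < m:
            H[i][i + 1] = H[i + 1][i] = -1 / h ** 2
    C = [-math.cos(t) for t in theta]
    return H, C

def reflection_symmetries(n, N=16):
    """Largest deviations from U H U^{-1} = H and U C U^{-1} = -C under the
    reflection i -> m-1-i (the discrete analog of theta -> pi - theta)."""
    H, C = discretize(n, N)
    m = len(C)
    sym_H = max(abs(H[m - 1 - i][m - 1 - j] - H[i][j])
                for i in range(m) for j in range(m))
    anti_C = max(abs(C[m - 1 - i] + C[i]) for i in range(m))
    return sym_H, anti_C
```

Both deviations vanish up to floating-point rounding, mirroring $Uh_{n}U^{-1}=h_{n}$ and $U[-\cos(\theta)]U^{-1}=\cos(\theta)$.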
It is well-known that Hardy’s inequality (1.1) is strict, that is,
equality holds in (1.1) for some $f\in D^{1}({\mathbb{R}}^{n})$ if and
only if $f=0$. More general results regarding strictness for weighted
Hardy–Sobolev or Caffarelli–Kohn–Nirenberg inequalities based on variational
techniques can be found, for instance, in [13], [15]. Strictness in the case
of the Hardy inequality was discussed in [87]. Thus, we next turn to
strictness of inequality (3.5) on $H^{1}({\mathbb{R}}^{n})$ employing a
quadratic form approach.
To set the stage we briefly recall a few facts on quadratic forms generated by
symmetric operators $A$ bounded from below and the associated Friedrichs
extension (to be denoted by $A_{F}$) of $A$.
Let $A$ be a densely defined symmetric operator in the Hilbert space
${\mathcal{H}}$ bounded from below, that is, $A\subseteq A^{*}$ and for some
$c\in{\mathbb{R}}$, $A\geqslant cI_{{\mathcal{H}}}$. Without loss of
generality we put $c=0$ in the following. We denote by $\overline{A}$ the
closure of $A$ in ${\mathcal{H}}$, and introduce the associated forms in
${\mathcal{H}}$,
$\displaystyle q_{A}(f,g)=(f,Ag)_{{\mathcal{H}}},\quad
f,g\in\operatorname{dom}(q_{A})=\operatorname{dom}(A),$ (3.67) $\displaystyle
q_{\overline{A}}(f,g)=(f,{\overline{A}}g)_{{\mathcal{H}}},\quad
f,g\in\operatorname{dom}(q_{\overline{A}})=\operatorname{dom}({\overline{A}}),$
(3.68)
then the closures of $q_{A}$ and $q_{\overline{A}}$ coincide in
${\mathcal{H}}$ (cf., e.g., [8, Lemma 5.1.12])
$\overline{q_{A}}=\overline{q_{\overline{A}}}$ (3.69)
and the first representation theorem for forms (see, e.g., [23, Theorem
4.2.4], [56, Theorem VI.2.1, Sect. VI.2.3]) yields
$\overline{q_{A}}(f,g)=(f,A_{F}g)_{{\mathcal{H}}},\quad
f\in\operatorname{dom}(\overline{q_{A}}),\;g\in\operatorname{dom}(A_{F}),$
(3.70)
where $A_{F}\geqslant 0$ represents the self-adjoint Friedrichs extension of
$A$. Due to the fact (3.69), one infers (cf., e.g., [8, Lemma 5.3.1])
$A_{F}=(\overline{A})_{F}.$ (3.71)
The second representation theorem for forms (see, e.g., [23, Theorem 4.2.8],
[56, Theorem VI.2.23]) then yields the additional result
$\overline{q_{A}}(f,g)=\big{(}A_{F}^{1/2}f,A_{F}^{1/2}g\big{)}_{{\mathcal{H}}},\quad
f,g\in\operatorname{dom}(\overline{q_{A}})=\operatorname{dom}\big{(}A_{F}^{1/2}\big{)}.$
(3.72)
Moreover, one has the fact (see, e.g., [8, Theorem 5.3.3], [23, Corollary
4.2.7], [75, Theorem 10.17])
$\operatorname{dom}(A_{F})=\operatorname{dom}(\overline{q_{A}})\cap\operatorname{dom}(A^{*})=\operatorname{dom}\big{(}A_{F}^{1/2}\big{)}\cap\operatorname{dom}(A^{*}).$
(3.73)
###### Theorem 3.5.
Assume Hypothesis 2.2. Then inequality (3.5) is strict on
$H^{1}({\mathbb{R}}^{n})$, $n\geqslant 3$, that is, equality holds in (3.5)
for some $f\in H^{1}({\mathbb{R}}^{n})$ if and only if $f=0$.
###### Proof.
We first discuss the simpler case $\gamma\in[0,\gamma_{c,n})$. In this case
the inequality (3.5) implies that the sesquilinear form $\gamma\,q_{u}$,
$u\in{\mathbb{R}}^{n}$, $|u|=1$, where
$q_{u}(f,g)=\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,x)|x|^{-3}\overline{f(x)}g(x),\quad
f,g\in\operatorname{dom}(q_{u})=H^{1}({\mathbb{R}}^{n}),$ (3.74)
is bounded relative to the form $Q_{H_{0}}$ of the Laplacian $H_{0}=-\Delta$
on $\operatorname{dom}(H_{0})=H^{2}({\mathbb{R}}^{n})$,
$\displaystyle\begin{split}Q_{H_{0}}(f,g)=(\nabla f,\nabla
g)_{[L^{2}({\mathbb{R}}^{n})]^{n}}=\big{(}H_{0}^{1/2}f,H_{0}^{1/2}g\big{)}_{L^{2}({\mathbb{R}}^{n})},&\\\
f,g\in\operatorname{dom}(Q_{H_{0}})=H^{1}({\mathbb{R}}^{n}),&\end{split}$
(3.75)
with relative bound strictly less than one. Hence the form
$Q_{\gamma}(f,g)=Q_{H_{0}}(f,g)+\gamma\,q_{u}(f,g),\quad
f,g\in\operatorname{dom}(Q_{\gamma})=H^{1}({\mathbb{R}}^{n}),\;\gamma\in[0,\gamma_{c,n}),$
(3.76)
is densely defined, nonnegative, and closed. Moreover, since
$C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})$ is dense in
$H^{1}({\mathbb{R}}^{n})$ for $n\geqslant 2$ (cf., e.g., [26, p. 33–35]), that
is, $C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})$ is a core for
$Q_{H_{0}}$ (equivalently, a form core for $H_{0}$), and hence also a core for
$Q_{\gamma}$,
$\overline{Q_{\gamma}|_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})}}=Q_{\gamma},\quad\gamma\in[0,\gamma_{c,n}).$
(3.77)
Thus, the self-adjoint, nonnegative operator $H_{Q_{\gamma}}$, uniquely
associated with $Q_{\gamma}$ by the first representation theorem for forms
coincides with the Friedrichs extension of the minimal operator associated
with the differential expression $L_{\gamma}$ in (2.15),
$H_{\gamma,min}=(-\Delta+V_{\gamma})|_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})}\geqslant
0,\quad\gamma\in[0,\gamma_{c,n}),$ (3.78)
that is,
$H_{Q_{\gamma}}=(H_{\gamma,min})_{F}\geqslant
0,\quad\gamma\in[0,\gamma_{c,n}).$ (3.79)
In turn, since $H_{\gamma}$ coincides with the direct sum of Friedrichs
extensions in (2.31), one concludes that $H_{Q_{\gamma}}$ coincides with
$H_{\gamma}$, and hence,
$H_{Q_{\gamma}}=H_{\gamma}=\big{(}(-\Delta+V_{\gamma})|_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})}\big{)}_{F},\quad\gamma\in[0,\gamma_{c,n}),$
(3.80)
in particular,
$Q_{\gamma}(f,g)=\big{(}H_{\gamma}^{1/2}f,H_{\gamma}^{1/2}g\big{)}_{L^{2}({\mathbb{R}}^{n})},\quad
f,g\in\operatorname{dom}(Q_{\gamma})=H^{1}({\mathbb{R}}^{n}),\;\gamma\in[0,\gamma_{c,n}).$
(3.81)
Thus, equality in the inequality (3.5) for some $f_{0}\in
H^{1}({\mathbb{R}}^{n})$ implies
$\displaystyle 0$ $\displaystyle=\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
f_{0})(x)|^{2}+\gamma\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,x)|x|^{-3}|f_{0}(x)|^{2}$
(3.82)
$\displaystyle=Q_{H_{0}}(f_{0},f_{0})+\gamma\,q_{u}(f_{0},f_{0})=Q_{\gamma}(f_{0},f_{0})=\big{\|}H_{\gamma}^{1/2}f_{0}\big{\|}_{L^{2}({\mathbb{R}}^{n})}^{2},\quad\gamma\in[0,\gamma_{c,n}),$
and hence,
$f_{0}\in\ker\big{(}H_{\gamma}^{1/2}\big{)}=\ker(H_{\gamma})=\\{0\\},\quad\gamma\in[0,\gamma_{c,n}),$
(3.83)
by (3.7).
The case $\gamma=\gamma_{c,n}$ follows analogous lines but is a bit more
involved as now the form $\gamma_{c,n}\,q_{u}$ is bounded relative to the form
$Q_{H_{0}}$ with relative bound equal to one.
Since by inequality (3.5)
$H_{\gamma_{c,n},min}=(-\Delta+V_{\gamma_{c,n}})|_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})}\geqslant
0,$ (3.84)
the form
$\displaystyle\overset{\textbf{\Large.}}{Q}_{\gamma_{c,n}}(f,g)$
$\displaystyle=(f,H_{\gamma_{c,n},min}g)_{L^{2}({\mathbb{R}}^{n})}$ (3.85)
$\displaystyle=Q_{H_{0}}(f,g)+\gamma_{c,n}q_{u}(f,g),\quad
f,g\in\operatorname{dom}\big{(}\overset{\textbf{\Large.}}{Q}_{\gamma_{c,n}}\big{)}=C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\}),$
is closable and we denote its closure in $L^{2}({\mathbb{R}}^{n})$ by
$Q_{\gamma_{c,n}}$. Thus, $Q_{\gamma_{c,n}}\geqslant 0$ and hence the self-
adjoint, nonnegative operator $H_{Q_{\gamma_{c,n}}}$ uniquely associated with
$Q_{\gamma_{c,n}}$ in $L^{2}({\mathbb{R}}^{n})$ is the Friedrichs extension of
$H_{\gamma_{c,n},min}$,
$H_{Q_{\gamma_{c,n}}}=(H_{\gamma_{c,n},min})_{F}\geqslant 0.$ (3.86)
Again, by the second representation theorem
$Q_{\gamma_{c,n}}(f,g)=\big{(}H_{Q_{\gamma_{c,n}}}^{1/2}f,H_{Q_{\gamma_{c,n}}}^{1/2}g\big{)}_{L^{2}({\mathbb{R}}^{n})},\quad
f,g\in\operatorname{dom}(Q_{\gamma_{c,n}})=\operatorname{dom}\big{(}H_{Q_{\gamma_{c,n}}}^{1/2}\big{)}.$
(3.87)
However, unlike in the case $\gamma\in[0,\gamma_{c,n})$, since the form
$\gamma_{c,n}\,q_{u}$ is bounded relative to the form $Q_{H_{0}}$ with
relative bound equal to one, one now has possible cancellations between the
forms $Q_{H_{0}}$ and $\gamma_{c,n}\,q_{u}$ and hence concludes that
$\operatorname{dom}(Q_{\gamma_{c,n}})=\operatorname{dom}\big{(}H_{Q_{\gamma_{c,n}}}^{1/2}\big{)}\supseteq
H^{1}({\mathbb{R}}^{n})$ (3.88)
(see also Remark 3.6). The rest of the proof now follows the case
$\gamma\in[0,\gamma_{c,n})$ line by line, in particular,
$H_{Q_{\gamma_{c,n}}}=H_{\gamma_{c,n}}=\big{(}(-\Delta+V_{\gamma_{c,n}})|_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})}\big{)}_{F},$
(3.89)
and
$\ker\big{(}H_{\gamma_{c,n}}^{1/2}\big{)}=\ker(H_{\gamma_{c,n}})=\\{0\\},$
(3.90)
again by (3.7); this then proves strictness of (3.5) also for
$\gamma=\gamma_{c,n}$. ∎
###### Remark 3.6.
We briefly illustrate the possibility of cancellations in $H_{\gamma_{c,n}}$.
Let $\psi_{\gamma,0}=\psi_{\gamma,0}(\theta_{n-1})$, $\gamma\geqslant 0$, be
the unique (up to constant multiples) eigenfunction of $\Lambda_{\gamma,n}$
corresponding to its lowest eigenvalue $\lambda_{\gamma,n,0}$, see (3.24), and
introduce
$\Psi_{\gamma,0}(x)=|x|^{-(n-2)/2}\psi_{\gamma,0}(\theta_{n-1}),\quad
x\in{\mathbb{R}}^{n}\backslash\\{0\\},\;\gamma\geqslant 0.$ (3.91)
Then an elementary computation reveals that
$L_{\gamma}\Psi_{\gamma,0}=\big{\\{}\big{[}(n-2)^{2}/4\big{]}+\lambda_{\gamma,n,0}\big{\\}}|x|^{-(n+2)/2}\psi_{\gamma,0}(\theta_{n-1}),$
(3.92)
in the sense of distributions. In particular, if $\gamma=\gamma_{c,n}$, and
hence, $\lambda_{\gamma_{c,n},n,0}=-(n-2)^{2}/4$, one obtains
$L_{\gamma_{c,n}}\Psi_{\gamma_{c,n},0}=0,$ (3.93)
in the distributional sense. Thus, introducing
$f_{0}(x)=|x|^{-(n-2)/2}\psi_{\gamma_{c,n},0}(\theta_{n-1})\phi(|x|),\quad
x\in{\mathbb{R}}^{n}\backslash\\{0\\},$ (3.94)
one concludes that
$f_{0}\in\operatorname{dom}(H_{\gamma_{c,n}})\subset\operatorname{dom}\big{(}H_{\gamma_{c,n}}^{1/2}\big{)}.$
(3.95)
However, since $(\partial f_{0}/\partial r)\notin L^{2}({\mathbb{R}}^{n})$,
the elementary fact
$\bigg{|}\bigg{(}\frac{\partial f_{0}}{\partial
r}\bigg{)}(x)\bigg{|}=\bigg{|}\frac{x}{|x|}\cdot(\nabla
f_{0})(x)\bigg{|}\leqslant|(\nabla f_{0})(x)|$ (3.96)
implies that
$f_{0}\notin H^{1}({\mathbb{R}}^{n}),\;(u,x)|x|^{-3}|f_{0}|^{2}\notin
L^{1}({\mathbb{R}}^{n}),$ (3.97)
illustrating possible cancellations between $H_{0}$ and
$\gamma_{c,n}(u,x)|x|^{-3}$. $\diamond$
###### Remark 3.7.
Next, we briefly discuss the remaining case $n=2$. In this situation, the
Laplace–Beltrami operator $-\Delta_{{\mathbb{S}}^{1}}$ in
$L^{2}\big{(}{\mathbb{S}}^{1}\big{)}$ can be characterized by
$\displaystyle(-\Delta_{{\mathbb{S}}^{1}}f)(\theta_{1})=-f^{\prime\prime}(\theta_{1}),\quad\theta_{1}\in(0,2\pi),$
$\displaystyle
f\in\operatorname{dom}(-\Delta_{{\mathbb{S}}^{1}})=\big{\\{}g\in
L^{2}((0,2\pi);d\theta_{1})\,\big{|}\,g,g^{\prime}\in AC([0,2\pi]);$ (3.98)
$\displaystyle\hskip
88.2037ptg(0)=g(2\pi),\,g^{\prime}(0)=g^{\prime}(2\pi);\,g^{\prime\prime}\in
L^{2}((0,2\pi);d\theta_{1})\big{\\}},$
with
$\displaystyle\;\sigma(-\Delta_{{\mathbb{S}}^{1}})=\big{\\{}\ell^{2}\big{\\}}_{\ell\in{\mathbb{N}}_{0}},$
(3.99) $\displaystyle-\Delta_{{\mathbb{S}}^{1}}e^{\pm
i\ell\theta_{1}}=\ell^{2}e^{\pm
i\ell\theta_{1}},\quad\theta_{1}\in(0,2\pi),\;\ell\in{\mathbb{N}}_{0}.$
(3.100)
The resulting Mathieu operator $\Lambda_{\gamma,2}$ in
$L^{2}((0,2\pi);d\theta_{1})$ (cf. (2.17)), of the form
$\Lambda_{\gamma,2}=-\frac{d^{2}}{d\theta_{1}^{2}}+\gamma\cos(\theta_{1}),\quad\operatorname{dom}(\Lambda_{\gamma,2})=\operatorname{dom}(-\Delta_{{\mathbb{S}}^{1}}),$
(3.101)
has been studied extensively in the literature; see, for instance, [63, Ch. 2].
More generally, the least periodic eigenvalue of Hill operators (i.e.,
situations where $\cos(\theta_{1})$ is replaced by a $2\pi$-periodic, locally
integrable potential $q(\theta_{1})$) has received enormous attention, see for
instance, [10], [35], [55], [64], [69], [70], [78], [85], and [90]. Applied to
the Mathieu operator $\Lambda_{\gamma,2}$ at hand, the results obtained (cf.
the discussion in [35]) imply,
$\displaystyle\begin{split}&-\gamma^{2}\big{/}\big{[}8\pi^{2}\big{]}<\lambda_{\gamma,2,0}<0,\quad\gamma\in(0,\infty),\\\
&\quad\text{there exists $c_{0}\in(0,\infty)$ such that
}\,\lambda_{\gamma,2,0}\leqslant-
c_{0}\gamma^{2},\quad\gamma\in[0,1].\end{split}$ (3.102)
In particular, this proves the absence of a positive critical coupling constant
$\gamma_{c,2}$ for $n=2$ (equivalently, the critical constant in two
dimensions equals zero, $\gamma_{c,2}=0$), explaining why we had to limit
ourselves to $n\geqslant 3$ in the bulk of this paper. $\diamond$
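As a quick numerical sanity check of this remark (our own sketch, not part of the paper's analysis), one can diagonalize a Fourier-basis truncation of $\Lambda_{\gamma,2}$: in the orthonormal basis $e^{i\ell\theta_{1}}/\sqrt{2\pi}$, $|\ell|\leqslant L$, the operator is tridiagonal with diagonal entries $\ell^{2}$ and off-diagonal entries $\gamma/2$. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def lambda0(gamma, L=40):
    """Lowest eigenvalue of the Mathieu operator -d^2/dtheta^2 + gamma*cos(theta)
    on the circle, via the Fourier basis e^{i*l*theta}, |l| <= L; in this basis
    the operator is tridiagonal: diagonal l^2, off-diagonal gamma/2."""
    size = 2 * L + 1
    l = np.arange(-L, L + 1).astype(float)
    M = np.diag(l ** 2) + (gamma / 2) * (np.eye(size, k=1) + np.eye(size, k=-1))
    return np.linalg.eigvalsh(M)[0]

# lambda_{gamma,2,0} < 0 for every sampled gamma > 0: no positive critical
# coupling constant in two dimensions, consistent with gamma_{c,2} = 0.
for g in (0.1, 1.0, 5.0):
    assert lambda0(g) < 0
```

For every sampled $\gamma>0$ the lowest eigenvalue is strictly negative, consistent with $\gamma_{c,2}=0$.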
###### Remark 3.8.
While thus far we focused primarily on lower semiboundedness of $H_{\gamma}$,
the direct sum considerations in Section 2 equally apply to essential
self-adjointness of
$H_{\gamma}|_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})}$. Indeed,
returning to the operator in (2.24), one notes that
$\bigg{[}-\frac{d^{2}}{dr^{2}}+\frac{c}{r^{2}}\bigg{]}\bigg{|}_{C_{0}^{\infty}((0,\infty))}\,\text{
is essentially self-adjoint if and only if $c\geqslant 3/4$.}$ (3.103)
The criterion (3.103) combined with (2.31), (2.32) thus implies that
$\displaystyle\begin{split}&H_{\gamma}|_{C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})}\,\text{
is essentially self-adjoint}\\\ &\quad\text{if and only if
}\,\lambda_{\gamma,n,0}\geqslant-n(n-4)/4.\end{split}$ (3.104)
$\diamond$
## 4\. A Numerical Approach
Having verified the existence and uniqueness of critical dipole moments
$\gamma_{c,n}$ for all dimensions $n\geqslant 3$, and having shown some of the
properties of $\lambda_{\gamma,n,0}$, this section is devoted to a description
of a numerical method for computing $\gamma_{c,n}$, in analogy to the Legendre
expansion in [16].
To set up the numerical algorithm one can argue as follows: Given (3.24), we
are interested in solving this eigenvalue problem in the particular scenario
where $\gamma$ ranges from $0$ to $\gamma_{c,n}$, observing that
$\lambda_{\gamma_{c,n},n,0}=-(n-2)^{2}/4$ (cf. (2.28)). Restricting
$-\Delta_{{\mathbb{S}}^{n-1}}$ in (A.16) to ${\mathcal{L}}^{n}$ as in
(3.3)–(3.10), (3.24) reduces to solving the eigenvalue problem
associated with $\Lambda_{\gamma,{\mathcal{L}}^{n}}$ in
$L^{2}\big{(}(0,\pi);[\sin(\theta_{n-1})]^{n-2}d\theta_{n-1}\big{)}$ of the
type,
$\displaystyle\begin{split}&-\frac{d^{2}\Psi_{\gamma}(\theta_{n-1})}{d\theta_{n-1}^{2}}-(n-2)\cot(\theta_{n-1})\frac{d\Psi_{\gamma}(\theta_{n-1})}{d\theta_{n-1}}+\gamma\cos(\theta_{n-1})\Psi_{\gamma}(\theta_{n-1})\\\
&\quad=\lambda_{\gamma,n,0}\Psi_{\gamma}(\theta_{n-1}),\quad\gamma\in(0,\gamma_{c,n}],\;\theta_{n-1}\in(0,\pi).\end{split}$
(4.1)
Expanding $\Psi_{\gamma}(\,\cdot\,)$ in normalized Gegenbauer polynomials, one
obtains
$\displaystyle\Psi_{\gamma}(\theta_{n-1})=\sum\limits_{\ell=0}^{\infty}d_{\ell}(\gamma)\bigg{[}\frac{\ell!(2\ell+n-2)}{2^{4-n}\pi\Gamma(\ell+n-2)}\bigg{]}^{1/2}\Gamma((n-2)/2)C_{\ell}^{(n-2)/2}(\cos(\theta_{n-1})),$
$\displaystyle\hskip 256.0748pt\theta_{n-1}\in(0,\pi),$ (4.2)
where $d_{\ell}(\gamma)$ are appropriate expansion coefficients. Since the
Gegenbauer polynomial $C_{\ell}^{(n-2)/2}(\cos(\,\cdot\,))$ is an
eigenfunction of $-\Delta_{{\mathbb{S}}^{n-1}}$ corresponding to the
eigenvalue $\ell(\ell+n-2)$, (4.1) becomes
$\displaystyle\begin{split}&\sum\limits_{\ell=0}^{\infty}[\ell(\ell+n-2)+\gamma\cos(\theta_{n-1})-\lambda_{\gamma,n,0}]\,d_{\ell}(\gamma)\bigg{[}\frac{\ell!(2\ell+n-2)}{2^{4-n}\pi\Gamma(\ell+n-2)}\bigg{]}^{1/2}\\\
&\hskip
15.649pt\times\Gamma((n-2)/2)C_{\ell}^{(n-2)/2}(\cos(\theta_{n-1}))=0.\end{split}$
(4.3)
Next, we will exploit the following recurrence relation of Gegenbauer
polynomials,
$\displaystyle\cos(\theta_{n-1})C_{\ell}^{(n-2)/2}(\cos(\theta_{n-1}))$
$\displaystyle\quad=\frac{\ell+1}{2\ell+n-2}\left(C_{\ell+1}^{(n-2)/2}(\cos(\theta_{n-1}))+\frac{\ell+n-3}{\ell+1}C_{\ell-1}^{(n-2)/2}(\cos(\theta_{n-1}))\right),$
$\displaystyle\hskip 273.14662pt\ell\in{\mathbb{N}}_{0}$ (4.4)
(with $C_{-1}^{(n-2)/2}(\,\cdot\,)\equiv 0$) to expand the term
$\gamma\cos(\theta_{n-1})$. For the $(\ell-1)$-term, one infers
$\displaystyle\gamma\cos(\theta_{n-1})d_{\ell-1}(\gamma)\bigg{[}\frac{(\ell-1)!(2\ell+n-4)}{2^{4-n}\pi\Gamma(\ell+n-3)}\bigg{]}^{1/2}\Gamma((n-2)/2)C_{\ell-1}^{(n-2)/2}(\cos(\theta_{n-1}))$
$\displaystyle\quad=\gamma
d_{\ell-1}(\gamma)\bigg{[}\frac{(\ell-1)!(2\ell+n-4)}{2^{4-n}\pi\Gamma(\ell+n-3)}\bigg{]}^{1/2}\frac{\ell\,\Gamma((n-2)/2)}{2\ell+n-4}C_{\ell}^{(n-2)/2}(\cos(\theta_{n-1})),$
$\displaystyle\hskip 281.6821pt\ell\in{\mathbb{N}}_{0}$ (4.5)
(where $d_{-1}(\gamma)=0$) and for the $(\ell+1)$-term, one obtains
$\displaystyle\gamma\cos(\theta_{n-1})d_{\ell+1}(\gamma)\bigg{[}\frac{(\ell+1)!(2\ell+n)}{2^{4-n}\pi\Gamma(\ell+n-1)}\bigg{]}^{1/2}\Gamma((n-2)/2)C_{\ell+1}^{(n-2)/2}(\cos(\theta_{n-1}))$
$\displaystyle\quad=\gamma
d_{\ell+1}(\gamma)\bigg{[}\frac{(\ell+1)!(2\ell+n)}{2^{4-n}\pi\Gamma(\ell+n-1)}\bigg{]}^{1/2}\Gamma((n-2)/2)\frac{\ell+n-2}{2\ell+n}C_{\ell}^{(n-2)/2}(\cos(\theta_{n-1})),$
$\displaystyle\hskip 284.52756pt\ell\in{\mathbb{N}}_{0}.$ (4.6)
The $\ell$-term maintains its form
$\displaystyle[\ell(\ell+n-2)-\lambda_{\gamma,n,0}]\,d_{\ell}(\gamma)\bigg{[}\frac{\ell!(2\ell+n-2)}{2^{4-n}\pi\Gamma(\ell+n-2)}\bigg{]}^{1/2}\Gamma((n-2)/2)$
$\displaystyle\quad\times
C_{\ell}^{(n-2)/2}(\cos(\theta_{n-1})),\quad\ell\in{\mathbb{N}}_{0},$ (4.7)
so one can divide all terms by the normalizing factor from (4.2) (since the
orthogonality of the Gegenbauer polynomials mandates that every term under the
sum in (4.3) individually vanishes), obtaining
$\displaystyle\sum\limits_{\ell=0}^{\infty}\bigg{\\{}[\ell(\ell+n-2)-\lambda_{\gamma,n,0}]d_{\ell}(\gamma)$
$\displaystyle\hskip
19.91692pt+\gamma\bigg{(}\bigg{[}\frac{(2\ell+n-4)(\ell+n-3)}{\ell(2\ell+n-2)}\bigg{]}^{1/2}\,\frac{\ell}{2\ell+n-4}d_{\ell-1}(\gamma)$
(4.8) $\displaystyle\hskip
19.91692pt+\bigg{[}\frac{(\ell+1)(2\ell+n)}{(2\ell+n-2)(\ell+n-2)}\bigg{]}^{1/2}\frac{\ell+n-2}{2\ell+n}\,d_{\ell+1}(\gamma)\bigg{)}\bigg{\\}}C_{\ell}^{(n-2)/2}(\cos(\theta_{n-1}))=0.$
Setting each coefficient equal to zero results in
$\displaystyle\begin{split}&[\ell(\ell+n-2)-\lambda_{\gamma,n,0}]d_{\ell}(\gamma)+\gamma\bigg{(}\bigg{[}\frac{\ell(\ell+n-3)}{(2\ell+n-4)(2\ell+n-2)}\bigg{]}^{1/2}d_{\ell-1}(\gamma)\\\
&\quad+\bigg{[}\frac{(\ell+1)(\ell+n-2)}{(2\ell+n-2)(2\ell+n)}\bigg{]}^{1/2}d_{\ell+1}(\gamma)\bigg{)}=0,\quad\ell\in{\mathbb{N}}_{0},\end{split}$
(4.9)
which one can rewrite as
$\displaystyle\bigg{[}\frac{(\ell+1)(\ell+n-2)}{(2\ell+n-2)(2\ell+n)}\bigg{]}^{1/2}d_{\ell+1}(\gamma)+\bigg{[}\frac{\ell(\ell+n-3)}{(2\ell+n-4)(2\ell+n-2)}\bigg{]}^{1/2}d_{\ell-1}(\gamma)$
$\displaystyle\quad=-\frac{1}{\gamma}[\ell(\ell+n-2)-\lambda_{\gamma,n,0}]\,d_{\ell}(\gamma),\quad\gamma\in(0,\gamma_{c,n}],\;\ell\in{\mathbb{N}}_{0}.$
(4.10)
Equation (4.10) can be expressed as the generalized Jacobi operator eigenvalue
problem in $\ell^{2}({\mathbb{N}}_{0};w)$,
$\displaystyle
Jd(\gamma)=-\frac{1}{\gamma}w(\gamma)d(\gamma),\quad\gamma\in(0,\gamma_{c,n}],$
(4.11)
where
$\displaystyle
Jd(\gamma)=\begin{cases}a_{\ell+1}d_{\ell+1}(\gamma)+a_{\ell}d_{\ell-1}(\gamma),&\ell\in{\mathbb{N}},\\\
a_{1}d_{1}(\gamma),&\ell=0,\end{cases}\quad
w(\gamma)d(\gamma)=\big{(}w_{\ell}(\gamma)d_{\ell}(\gamma)\big{)}_{\ell\in{\mathbb{N}}_{0}},$
(4.12)
and
$\displaystyle\begin{split}&a_{\ell}=\bigg{[}\frac{\ell(\ell+n-3)}{(2\ell+n-4)(2\ell+n-2)}\bigg{]}^{1/2},\quad
w_{\ell}(\gamma)=\ell(\ell+n-2)-\lambda_{\gamma,n,0},\quad\ell\in{\mathbb{N}}_{0}.\end{split}$
(4.13)
(One observes that $w_{0}(\gamma)\underset{\gamma\downarrow
0}{\longrightarrow}0$ by (2.38).)
Explicitly, (4.12) yields the self-adjoint Jacobi operator $J$ in
$\ell^{2}({\mathbb{N}}_{0};w)$ represented as a semi-infinite matrix with
respect to the standard Kronecker-$\delta$ basis
$\displaystyle Jd(\gamma)$
$\displaystyle=\begin{pmatrix}0&a_{1}&0&\ldots&&&\\\
a_{1}&0&a_{2}&0&\ldots&&\\\ 0&a_{2}&0&a_{3}&0&\ldots&\\\
\vdots&0&a_{3}&0&a_{4}&0&\ldots\\\ &\vdots&0&\ddots&\ddots&\ddots&\\\
\end{pmatrix}\begin{pmatrix}d_{0}(\gamma)\\\ d_{1}(\gamma)\\\ d_{2}(\gamma)\\\
\vdots\\\ \phantom{\vdots}\\\ \end{pmatrix}$
$\displaystyle=-\frac{1}{\gamma}\begin{pmatrix}w_{0}(\gamma)&0&\ldots&&&\\\
0&w_{1}(\gamma)&0&\ldots&&\\\ \vdots&0&w_{2}(\gamma)&0&\ldots&\\\
&\vdots&0&w_{3}(\gamma)&0&\ldots\\\ &&\vdots&\ddots&\ddots&\ddots\\\
\end{pmatrix}\begin{pmatrix}d_{0}(\gamma)\\\ d_{1}(\gamma)\\\ d_{2}(\gamma)\\\
\vdots\\\ \phantom{\vdots}\vspace{1mm}\\\ \end{pmatrix}$
$\displaystyle=-\frac{1}{\gamma}w(\gamma)d(\gamma),\quad\gamma\in(0,\gamma_{c,n}].$
(4.14)
One would like to calculate $\gamma_{c,n}$ approximately using finite
truncations of the matrix representation of $J$ in the first line of (4.14);
the feasibility of such truncations will be made precise below. In order for these
approximants to converge, a transformation to a compact Jacobi operator
becomes necessary. For this purpose one introduces the operator
$\displaystyle
V(\gamma)=\begin{cases}\ell^{2}({\mathbb{N}}_{0};w(\gamma))\to\ell^{2}({\mathbb{N}}_{0}),\\\
b\mapsto w^{1/2}(\gamma)b.\end{cases}$ (4.15)
That $V(\gamma)$ is unitary may be seen from
$\displaystyle||V(\gamma)b||_{\ell^{2}({\mathbb{N}}_{0})}=||w^{1/2}(\gamma)b||_{\ell^{2}({\mathbb{N}}_{0})}=||b||_{\ell^{2}({\mathbb{N}}_{0};w(\gamma))},\quad
b\in\ell^{2}({\mathbb{N}}_{0};w(\gamma)),$ (4.16)
and the fact that $V(\gamma)$ is surjective and defined on all of
$\ell^{2}({\mathbb{N}}_{0};w(\gamma))$. Next, one transforms (4.14) into
$V(\gamma)^{-1}JV(\gamma)^{-1}V(\gamma)d(\gamma)=-\frac{1}{\gamma}V(\gamma)d(\gamma),\quad
d(\gamma)\in\ell^{2}({\mathbb{N}}_{0};w(\gamma)),$ (4.17)
which can equivalently be expressed as
$w(\gamma)^{-1/2}Jw(\gamma)^{-1/2}\widetilde{d}(\gamma)=-\frac{1}{\gamma}\widetilde{d}(\gamma),\quad\widetilde{d}(\gamma)=V(\gamma)d(\gamma)\in\ell^{2}({\mathbb{N}}_{0}).$
(4.18)
Introducing
$K(\gamma)=w(\gamma)^{-1/2}Jw(\gamma)^{-1/2}\in{\mathcal{B}}\big{(}\ell^{2}({\mathbb{N}}_{0})\big{)},\quad\gamma\in(0,\gamma_{c,n}],$
(4.19)
one can write (4.18) in the form
$\displaystyle\begin{split}K(\gamma)\widetilde{d}(\gamma)&=\begin{pmatrix}0&\widetilde{a}_{1}(\gamma)&0&\ldots&&&\\\
\widetilde{a}_{1}(\gamma)&0&\widetilde{a}_{2}(\gamma)&0&\ldots&&\\\
0&\widetilde{a}_{2}(\gamma)&0&\widetilde{a}_{3}(\gamma)&0&\ldots&\\\
\vdots&0&\widetilde{a}_{3}(\gamma)&0&\widetilde{a}_{4}(\gamma)&0&\ldots\\\
&\vdots&0&\ddots&\ddots&\ddots&\\\
\end{pmatrix}\begin{pmatrix}\widetilde{d}_{0}(\gamma)\\\
\widetilde{d}_{1}(\gamma)\\\ \widetilde{d}_{2}(\gamma)\\\ \vdots\\\
\phantom{\vdots}\\\ \end{pmatrix}\\\
&=-\frac{1}{\gamma}\begin{pmatrix}\widetilde{d}_{0}(\gamma)\\\
\widetilde{d}_{1}(\gamma)\\\ \widetilde{d}_{2}(\gamma)\\\ \vdots\\\
\phantom{\vdots}\vspace{1mm}\\\
\end{pmatrix}=-\frac{1}{\gamma}\widetilde{d}(\gamma),\quad\gamma\in(0,\gamma_{c,n}],\end{split}$
(4.20)
in analogy with (4.14), with
$\displaystyle\,\,\widetilde{a}_{\ell}(\gamma)=w_{\ell}(\gamma)^{-1/2}a_{\ell}\,w_{\ell-1}(\gamma)^{-1/2}$
$\displaystyle\,\,\,\qquad=[\ell(\ell+n-2)-\lambda_{\gamma,n,0}]^{-1/2}[(\ell-1)(\ell+n-3)-\lambda_{\gamma,n,0}]^{-1/2}$
$\displaystyle\,\,\,\quad\qquad\times\bigg{[}\frac{\ell(\ell+n-3)}{(2\ell+n-4)(2\ell+n-2)}\bigg{]}^{1/2},\quad\ell\in{\mathbb{N}},$
(4.21)
$\displaystyle\widetilde{a}_{\ell}(\gamma)\underset{\ell\to\infty}{=}O\big{(}\ell^{-2}\big{)}.$
(4.22)
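The passage from the weighted problem (4.11) to the operator $K(\gamma)$ can be checked on a finite truncation. The sketch below (our own, assuming NumPy) builds $J$, $w$, and $K=w^{-1/2}Jw^{-1/2}$ at $\gamma=\gamma_{c,n}$ for $n=3$, where $\lambda_{\gamma_{c,n},3,0}=-1/4$ is known in closed form, and verifies that each eigenpair $(\mu,\widetilde{d})$ of $K$ yields a solution $d=w^{-1/2}\widetilde{d}$ of $Jd=\mu w d$:

```python
import numpy as np

# Finite (m x m) truncation of (4.11) and (4.18)-(4.19) at gamma = gamma_{c,3},
# where lambda_{gamma_{c,3},3,0} = -(3-2)^2/4 = -1/4 is known explicitly.
m, n = 8, 3
lam = -(n - 2) ** 2 / 4
ell = np.arange(1, m)
a = np.sqrt(ell * (ell + n - 3) / ((2 * ell + n - 4) * (2 * ell + n - 2)))  # (4.13)
w = np.arange(m) * (np.arange(m) + n - 2) - lam                              # (4.13)

J = np.diag(a, 1) + np.diag(a, -1)          # truncation of J, cf. (4.14)
K = J / np.sqrt(np.outer(w, w))             # K = w^{-1/2} J w^{-1/2}, cf. (4.19)

mu, d_tilde = np.linalg.eigh(K)             # K d~ = mu d~, cf. (4.18)
for k in range(m):
    d = d_tilde[:, k] / np.sqrt(w)          # d = w^{-1/2} d~ = V(gamma)^{-1} d~
    assert np.allclose(J @ d, mu[k] * w * d)  # J d = mu w d, cf. (4.11)
```

The elementwise division by $\sqrt{w_{p}w_{q}}$ is exactly conjugation by the diagonal matrix $w^{-1/2}$ on the truncation.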
Recalling once more that $\lambda_{\gamma_{c,n},n,0}=-(n-2)^{2}/4$ (cf.
(2.28)), and that $\lambda_{\gamma,n,0}$ as well as $\Psi_{\gamma}(\,\cdot\,)$
in (4.1) are analytic with respect to $\gamma$ as $\gamma$ varies in a complex
neighborhood of $[0,\infty)$, taking the limit $\gamma\uparrow\gamma_{c,n}$ on
either side of (4.20) yields
$K(\gamma_{c,n})\widetilde{d}(\gamma_{c,n})=-\gamma_{c,n}^{-1}\,\widetilde{d}(\gamma_{c,n}),$
(4.23)
implying
$-\gamma_{c,n}^{-1}\in\sigma_{p}(K(\gamma_{c,n})).$ (4.24)
Next, we prove that $-\gamma_{c,n}^{-1}$ is the smallest (negative) eigenvalue
of the operator $K(\gamma_{c,n})$ in $\ell^{2}({\mathbb{N}}_{0})$.
###### Proposition 4.1.
One has
$K(\gamma)\in{\mathcal{B}}_{\infty}\big{(}\ell^{2}({\mathbb{N}}_{0})\big{)}$,
$\gamma\in(0,\gamma_{c,n}]$, and
$\gamma K(\gamma)\geqslant-
I_{\ell^{2}({\mathbb{N}}_{0})},\quad\gamma\in(0,\gamma_{c,n}].$ (4.25)
Moreover,
$\displaystyle||K(\gamma_{c,n})||_{{\mathcal{B}}(\ell^{2}({\mathbb{N}}_{0}))}\leqslant\begin{cases}8\big{/}\big{[}3^{3/2}\big{]},\quad&n=3,\\\
2\max\\{\widetilde{a}_{\ell_{r,n}}(\gamma_{c,n}),\widetilde{a}_{\ell_{r,n}-1}(\gamma_{c,n})\\},\quad&n\geqslant
4,\end{cases}$ (4.26)
where
$r(n)=\frac{1}{4}\left(6-2n+\big{\\{}2\big{[}26-18n+3n^{2}\big{]}\big{\\}}^{1/2}\right),\quad\ell_{r,n}=\lceil
r(n)\rceil,$ (4.27)
with $\lceil x\rceil=\inf\\{m\in{\mathbb{N}}_{0}\,|\,m\geqslant x\\}$, the
ceiling function.
###### Proof.
The compactness assertion for $K(\gamma)$, $\gamma\in(0,\gamma_{c,n}]$,
follows from the limiting behavior
$\widetilde{a}_{\ell}(\gamma)\underset{\ell\to\infty}{\longrightarrow}0$, see,
for instance, [86, p. 201].
To prove the uniform lower bound (4.25) one can argue by contradiction as
follows: Fix $\gamma\in(0,\gamma_{c,n}]$ and suppose there exists
$\varepsilon>0$ such that $-(1+\varepsilon)/\gamma\in\sigma(K(\gamma))$, that is,
$-(1+\varepsilon)/\gamma\in\sigma_{p}(K(\gamma))$ (as $K(\gamma)$ is compact and
$-(1+\varepsilon)/\gamma\neq 0$). Then working backwards from (4.20)
to (4.1) yields the existence of
$\\{d_{\ell}(\varepsilon,\gamma)\\}_{\ell\in{\mathbb{N}}_{0}}\in\ell^{2}({\mathbb{N}}_{0};w)$
such that
$\displaystyle\begin{split}&-\frac{d^{2}\Psi_{\varepsilon,\gamma}(\theta_{n-1})}{d\theta_{n-1}^{2}}-(n-2)\cot(\theta_{n-1})\frac{d\Psi_{\varepsilon,\gamma}(\theta_{n-1})}{d\theta_{n-1}}+\frac{\gamma}{1+\varepsilon}\cos(\theta_{n-1})\Psi_{\varepsilon,\gamma}(\theta_{n-1})\\\
&\quad=\lambda_{\gamma,n,0}\Psi_{\varepsilon,\gamma}(\theta_{n-1}),\quad\gamma\in(0,\gamma_{c,n}],\end{split}$
(4.28)
where
$\displaystyle\Psi_{\varepsilon,\gamma}(\theta_{n-1})$
$\displaystyle=\sum\limits_{\ell=0}^{\infty}d_{\ell}(\varepsilon,\gamma)\bigg{[}\frac{\ell!(2\ell+n-2)}{2^{4-n}\pi\Gamma(\ell+n-2)}\bigg{]}^{1/2}\Gamma((n-2)/2)C_{\ell}^{(n-2)/2}(\cos(\theta_{n-1})).$
(4.29)
By the strict monotonicity of $\lambda_{\gamma,n,0}$ with respect to
$\gamma\in(0,\gamma_{c,n}]$ one infers that
$\lambda_{\gamma,n,0}<\lambda_{\gamma/(1+\varepsilon),n,0},$ (4.30)
and hence (4.28) contradicts the fact that by definition,
$\lambda_{\gamma/(1+\varepsilon),n,0}$ is the lowest eigenvalue of
$\Lambda_{\gamma/(1+\varepsilon),{\mathcal{L}}^{n}}$.
To obtain the bound (4.26), (4.27), one applies [81, Theorem 1.5] after
calculating
$||\widetilde{a}(\gamma_{c,n})||_{\ell^{\infty}({\mathbb{N}}_{0})}$. As
$\widetilde{a}_{\ell,n}(\gamma_{c,n})$ is bounded and tends to 0 as
$\ell\to\infty$ for all $n\geqslant 3$, it attains its supremum. To find the
index where this occurs, one considers $\ell$ as a continuous variable, and
solves $\frac{d\widetilde{a}_{\ell,n}(\gamma_{c,n})}{d\ell}=0$. The value
$r(n)$ emerges as the only nonnegative, real root of this expression, but as
$0<r(n)\in{\mathbb{R}}\backslash{\mathbb{N}}$ for $n\geqslant 4$, the maximum
in (4.26) and the ceiling function in (4.27) are required. Since
$r(3)\in{\mathbb{C}}\backslash{\mathbb{R}}$, the norm
$||\widetilde{a}_{\,\cdot\,,3}(\gamma_{c,n})||_{\ell^{\infty}({\mathbb{N}}_{0})}$
must be computed separately; one finds
$||\widetilde{a}_{\,\cdot\,,3}(\gamma_{c,n})||_{\ell^{\infty}({\mathbb{N}}_{0})}=|\widetilde{a}_{1,3}(\gamma_{c,n})|=4\big{/}\big{[}3^{3/2}\big{]}$,
so that $2|\widetilde{a}_{1,3}(\gamma_{c,n})|=8\big{/}\big{[}3^{3/2}\big{]}$, as recorded in (4.26).
∎
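For $n=3$ the weights simplify to $w_{\ell}(\gamma_{c,3})=(\ell+1/2)^{2}$, so that $\widetilde{a}_{\ell}(\gamma_{c,3})=4\ell\big{/}\big{(}4\ell^{2}-1\big{)}^{3/2}$ in closed form (our own simplification of (4.21)). A short numerical check (ours, assuming NumPy and the bound pattern $2\,\|\widetilde{a}\,\|_{\ell^{\infty}}$ from [81]) confirms that the supremum is attained at $\ell=1$ and that twice its value reproduces $8/3^{3/2}$ in (4.26):

```python
import numpy as np

def atil(n, ell, lam):
    """tilde{a}_ell(gamma_{c,n}) from (4.21), with lam = -(n-2)^2/4."""
    a = np.sqrt(ell * (ell + n - 3) / ((2 * ell + n - 4) * (2 * ell + n - 2)))
    w = lambda l: l * (l + n - 2) - lam
    return a / np.sqrt(w(ell) * w(ell - 1))

ell = np.arange(1, 2001)
vals = atil(3, ell, -(3 - 2) ** 2 / 4)
assert np.allclose(vals, 4 * ell / (4 * ell**2 - 1) ** 1.5)  # closed form, n = 3
assert np.argmax(vals) == 0                                   # supremum at ell = 1
assert abs(2 * vals.max() - 8 / 3**1.5) < 1e-12               # matches (4.26)
```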
To introduce the notion of finite truncations, one considers the operators
$\displaystyle P_{m}=\begin{pmatrix}I_{{\mathbb{C}}^{m}}&0\\\
0&0\end{pmatrix}=I_{{\mathbb{C}}^{m}}\oplus 0,\quad
K_{m}(\gamma_{c,n})=P_{m}K(\gamma_{c,n})P_{m},\quad m\in{\mathbb{N}},$ (4.31)
on $\ell^{2}({\mathbb{N}}_{0})$ (with $I_{{\mathbb{C}}^{m}}$ denoting the
identity matrix in ${\mathbb{C}}^{m}$, $m\in{\mathbb{N}}$).
We also introduce the finite $N\times N$ tri-diagonal Jacobi matrices
$J_{N}(\widetilde{a}_{1},\dots,\widetilde{a}_{N-1})$ in ${\mathbb{C}}^{N}$,
$N\in{\mathbb{N}}$, $N\geqslant 2$, given by
$J_{N}(\widetilde{a}_{1},\dots,\widetilde{a}_{N-1})=\begin{pmatrix}0&\widetilde{a}_{1}&\phantom{0}&\phantom{0}&\phantom{0}&\phantom{0}\\\
\widetilde{a}_{1}&0&\widetilde{a}_{2}&\phantom{0}&\bf{0}&\phantom{0}\\\
\phantom{0}&\widetilde{a}_{2}&0&\widetilde{a}_{3}&\phantom{0}&\phantom{0}\\\
\phantom{0}&\phantom{0}&\widetilde{a}_{3}&0&\ddots&\phantom{0}\\\
\phantom{0}&\bf{0}&\phantom{0}&\ddots&\ddots&\widetilde{a}_{N-1}\\\
\phantom{0}&\phantom{0}&\phantom{0}&\phantom{0}&\widetilde{a}_{N-1}&0\\\
\end{pmatrix},\quad N\in{\mathbb{N}},N\geqslant 2,$ (4.32)
in particular,
$\displaystyle{\det}_{{\mathbb{C}}^{N}}(zI_{N}-J_{N}(\widetilde{a}_{1},\dots,\widetilde{a}_{N-1}))=z{\det}_{{\mathbb{C}}^{N-1}}(zI_{N-1}-J_{N-1}(\widetilde{a}_{2},\dots,\widetilde{a}_{N-1}))$
$\displaystyle\qquad-[\widetilde{a}_{1}]^{2}{\det}_{{\mathbb{C}}^{N-2}}(zI_{N-2}-J_{N-2}(\widetilde{a}_{3},\dots,\widetilde{a}_{N-1})),\quad
z\in{\mathbb{C}},$
$\displaystyle\quad=\begin{cases}zP_{(N-1)/2}\big{(}z^{2}\big{)},\quad
P_{(N-1)/2}(0)\neq 0,&N\text{ odd,}\\\ Q_{N/2}\big{(}z^{2}\big{)},\quad
Q_{N/2}(0)\neq 0,&N\text{ even,}\end{cases}$ (4.33)
where $P_{(N-1)/2}(\,\cdot\,)$ and $Q_{N/2}(\,\cdot\,)$ are monic polynomials
of degree $(N-1)/2$ and $N/2$, respectively.
Thus, the spectrum of each
$J_{N}(\widetilde{a}_{1},\dots,\widetilde{a}_{N-1})$ consists of $N$ real
eigenvalues, symmetric with respect to the origin, the eigenvalues being
simple as long as $\widetilde{a}_{j}>0$, $1\leqslant j\leqslant N-1$ (see,
e.g., [33, Theorem II.1.1], [81, Remark 1.10 and p. 120]). Explicitly,
$\displaystyle{\det}_{{\mathbb{C}}^{2}}(zI_{{\mathbb{C}}^{2}}-J_{2}(\widetilde{a}_{1}))=z^{2}-[\widetilde{a}_{1}]^{2},$
$\displaystyle{\det}_{{\mathbb{C}}^{3}}(zI_{{\mathbb{C}}^{3}}-J_{3}(\widetilde{a}_{1},\widetilde{a}_{2}))=z\big{\\{}z^{2}-[\widetilde{a}_{1}]^{2}-[\widetilde{a}_{2}]^{2}\big{\\}},$
$\displaystyle{\det}_{{\mathbb{C}}^{4}}(zI_{{\mathbb{C}}^{4}}-J_{4}(\widetilde{a}_{1},\widetilde{a}_{2},\widetilde{a}_{3}))=z^{4}-\big{\\{}[\widetilde{a}_{1}]^{2}+[\widetilde{a}_{2}]^{2}+[\widetilde{a}_{3}]^{2}\big{\\}}z^{2}+[\widetilde{a}_{1}]^{2}[\widetilde{a}_{3}]^{2},$
(4.34)
$\displaystyle{\det}_{{\mathbb{C}}^{5}}(zI_{{\mathbb{C}}^{5}}-J_{5}(\widetilde{a}_{1},\widetilde{a}_{2},\widetilde{a}_{3},\widetilde{a}_{4}))=z\big{\\{}z^{4}-\big{\\{}[\widetilde{a}_{1}]^{2}+[\widetilde{a}_{2}]^{2}+[\widetilde{a}_{3}]^{2}+[\widetilde{a}_{4}]^{2}\big{\\}}z^{2}$
$\displaystyle\hskip
157.91287pt+[\widetilde{a}_{1}]^{2}[\widetilde{a}_{3}]^{2}+[\widetilde{a}_{1}]^{2}[\widetilde{a}_{4}]^{2}+[\widetilde{a}_{2}]^{2}[\widetilde{a}_{4}]^{2}\big{\\}},$
etc.
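These determinant formulas are easy to confirm numerically; the following sketch (our own, assuming NumPy) checks the $N=4$ identity in (4.34) for randomly chosen off-diagonal entries:

```python
import numpy as np

# Verify det(z*I - J_4(a1, a2, a3)) = z^4 - (a1^2 + a2^2 + a3^2) z^2 + a1^2 a3^2,
# cf. (4.34), for a tridiagonal matrix with zero diagonal.
rng = np.random.default_rng(0)
a1, a2, a3 = rng.uniform(0.1, 1.0, 3)
J4 = np.diag([a1, a2, a3], 1) + np.diag([a1, a2, a3], -1)

for z in (0.3, -1.2, 2.0):
    lhs = np.linalg.det(z * np.eye(4) - J4)
    rhs = z**4 - (a1**2 + a2**2 + a3**2) * z**2 + a1**2 * a3**2
    assert abs(lhs - rhs) < 1e-10
```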
In addition, we introduce the unitary, self-adjoint, diagonal operator $W$ in
$\ell^{2}({\mathbb{N}}_{0})$ as
$W=\big{(}(-1)^{p}\delta_{p,q}\big{)}_{(p,q)\in{\mathbb{N}}_{0}^{2}},\quad
W^{-1}=W=W^{*}.$ (4.35)
###### Theorem 4.2.
Given the operators $K(\gamma_{c,n})$, $K_{m}(\gamma_{c,n})$,
$m\in{\mathbb{N}}$, and $W$ as in (4.20), (4.31)–(4.35), one concludes that
$K(\gamma_{c,n})$ and $-K(\gamma_{c,n})$ as well as $K_{m}(\gamma_{c,n})$ and
$-K_{m}(\gamma_{c,n})$ are unitarily equivalent,
$-K(\gamma_{c,n})=WK(\gamma_{c,n})W^{-1},\quad-
K_{m}(\gamma_{c,n})=WK_{m}(\gamma_{c,n})W^{-1},\;m\in{\mathbb{N}},$ (4.36)
and hence the spectra of $K(\gamma_{c,n})$ and $K_{m}(\gamma_{c,n})$,
$m\in{\mathbb{N}}$, are symmetric with respect to zero. Moreover, all nonzero
eigenvalues of $K(\gamma_{c,n})$ and $K_{m}(\gamma_{c,n})$,
$m\in{\mathbb{N}}$, are simple. In addition,
$\lim_{m\to\infty}\|K_{m}(\gamma_{c,n})-K(\gamma_{c,n})\|_{{\mathcal{B}}(\ell^{2}({\mathbb{N}}_{0}))}=0,$
(4.37)
and (here $\sigma_{ess}(\,\cdot\,)$ denotes the essential spectrum)
$\displaystyle\begin{split}&\sigma(K(\gamma_{c,n}))=\underset{m\to\infty}{\lim}\sigma(K_{m}(\gamma_{c,n})),\\\
&\sigma_{ess}(K(\gamma_{c,n}))=\sigma_{ess}(K_{m}(\gamma_{c,n}))=\\{0\\},\;m\in{\mathbb{N}}.\end{split}$
(4.38)
In particular, $\lambda\in\sigma(K(\gamma_{c,n}))$ if and only if there is a
sequence $(\lambda_{m})_{m\in{\mathbb{N}}}$ with
$\lambda_{m}\in\sigma(K_{m}(\gamma_{c,n}))$ such that
$\lambda_{m}\underset{m\to\infty}{\longrightarrow}\lambda$.
###### Proof.
The symmetry fact (4.36) follows from an elementary computation. That all
eigenvalues of $K(\gamma_{c,n})$ are simple follows from the fact that
$K(\gamma_{c,n})$ is a half-lattice operator with
$\widetilde{a}_{\ell}(\gamma_{c,n})>0$, $\ell\in{\mathbb{N}}$, and hence the
half-lattice does not decouple into a disjoint union of subsets (resp.,
$K(\gamma_{c,n})$ does not reduce to a direct sum of operators in
$\ell^{2}({\mathbb{N}}_{0})$). The same argument applies to the finite-lattice
operators $K_{m}(\gamma_{c,n})$, $m\in{\mathbb{N}}$.
One notices that
$\operatorname*{s-lim}_{m\to\infty}P_{m}=I_{\ell^{2}({\mathbb{N}}_{0})}$,
where strong operator convergence is abbreviated by $\operatorname*{s-lim}$.
Together with the compactness of $K(\gamma_{c,n})$ given in Proposition 4.1,
one obtains
$\lim_{m\to\infty}\|P_{m}K(\gamma_{c,n})-K(\gamma_{c,n})\|_{{\mathcal{B}}(\ell^{2}({\mathbb{N}}_{0}))}=0,$
(4.39)
applying [3, Proposition 3.11]. The norm convergence in (4.39), together with
the uniform bound $\|P_{m}\|_{{\mathcal{B}}(\ell^{2}({\mathbb{N}}_{0}))}=1$,
$m\in{\mathbb{N}}$, yields (4.37). The latter implies (4.38) as a consequence
of [71, Theorem VIII.23 (a) and Theorem VIII.24 (a)] (see also [89, Satz 9.24
$a)$]), taking into account that norm resolvent convergence of a sequence of
self-adjoint operators is equivalent to norm convergence of a uniformly
bounded sequence of self-adjoint operators in a complex Hilbert space (see
[71, Theorem VIII.18], [89, Satz 9.22 a) $(ii)$]). ∎
Returning to the dipole context, one may now compute approximants of
$\gamma_{c,n}$ by approximating the smallest negative eigenvalues of
$K(\gamma_{c,n})$ in terms of the smallest negative eigenvalue of
$K_{m}(\gamma_{c,n})$ with increasing $m\in{\mathbb{N}}$. Using $K_{7}$
(which produced 16 stable digits in the case $n=3$), one obtains the following
values and approximants for $3\leqslant n\leqslant 10$:
$n$ | lower bound for $\gamma_{c,n}$ | $\gamma_{c,n}$ | upper bound for $\gamma_{c,n}$
---|---|---|---
3 | 0.250 | 1.279 | 4.418
4 | 1.000 | 3.790 | 5.890
5 | 2.598 | 7.584 | 10.308
6 | 5.846 | 12.672 | 17.672
7 | 10.392 | 19.058 | 27.980
8 | 16.238 | 26.742 | 41.233
9 | 23.383 | 35.725 | 57.432
10 | 31.826 | 46.006 | 76.576
Figure 1. Dimension $n$ vs. Critical Dipole Moment $\gamma_{c,n}$ (•) with its
upper ($\blacktriangle$) and lower ($\blacksquare$) bounds.
Here the lower and upper bounds for $\gamma_{c,n}$ correspond to the values
displayed in (3.44) (see Fig. 1).
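The truncation scheme above is straightforward to implement. The following sketch (our own, assuming NumPy; function names are not from the paper) assembles $K_{m}(\gamma_{c,n})$ from (4.20)–(4.21) with $\lambda_{\gamma_{c,n},n,0}=-(n-2)^{2}/4$ and recovers $\gamma_{c,n}$ as minus the reciprocal of the smallest eigenvalue:

```python
import numpy as np

def K_m(n, m):
    """Truncated matrix K_m(gamma_{c,n}) of (4.20), built from (4.21)
    with lambda_{gamma_{c,n},n,0} = -(n-2)^2/4 (cf. (2.28))."""
    lam = -(n - 2) ** 2 / 4
    ell = np.arange(1, m)
    a = np.sqrt(ell * (ell + n - 3) / ((2 * ell + n - 4) * (2 * ell + n - 2)))
    w = lambda l: l * (l + n - 2) - lam
    atil = a / np.sqrt(w(ell) * w(ell - 1))
    return np.diag(atil, 1) + np.diag(atil, -1)

def gamma_c(n, m=30):
    """gamma_{c,n} ~ -1 / (smallest eigenvalue of K_m(gamma_{c,n})),
    cf. (4.23)-(4.24); eigvalsh returns eigenvalues in ascending order."""
    return -1.0 / np.linalg.eigvalsh(K_m(n, m))[0]
```

Given the $O(\ell^{-2})$ decay in (4.22), modest truncation sizes suffice; with $m=30$ this reproduces the rounded table entries for $n=3,4$, for example $\gamma_{c,3}\approx 1.279$ and $\gamma_{c,4}\approx 3.790$.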
The result for $\gamma_{c,3}$ is in excellent agreement with the ones found in
the literature (see, e.g., [2], [12], [16], [17], [32], [60], [83], and [84]).
The approximate values of $\gamma_{c,n}$ for $n\geqslant 4$ (but surprisingly,
not for $n=3$) are in good agreement with those obtained in [29, p. 98].
## 5\. Multicenter Extensions
Combining the results of this manuscript with those in [38] one can extend the
scope of this investigation to include multicenter dipole interactions, that
is, sums of point dipoles supported on an infinite discrete set (a set of
distinct points spaced apart by a minimal distance $\varepsilon>0$). For
various related studies on multicenter singular interactions, see, for
instance, [11], [14], [22], [28], [29], [30], [31], [38], [48], [62], [80].
To set the stage, we recall the notion of relative form boundedness in the
special context of self-adjoint operators.
###### Definition 5.1.
Suppose that $A$ is self-adjoint in a complex Hilbert space ${\mathcal{H}}$
and bounded from below, that is, $A\geqslant cI_{{\mathcal{H}}}$ for some
$c\in{{\mathbb{R}}}$. Then the sesquilinear form $Q_{A}$ associated with $A$
is denoted by
$Q_{A}(f,g)=\big{(}(A-cI_{{\mathcal{H}}})^{1/2}f,(A-cI_{{\mathcal{H}}})^{1/2}g\big{)}_{{\mathcal{H}}}+c(f,g)_{{\mathcal{H}}},\quad
f\in\operatorname{dom}\big{(}|A|^{1/2}\big{)}.$ (5.1)
A sesquilinear form $q$ in ${\mathcal{H}}$ satisfying
$\operatorname{dom}(q)\supseteq\operatorname{dom}(Q_{A})$ is called bounded
with respect to the form $Q_{A}$ if for some $a,b\geqslant 0$,
$|q(f,f)|\leqslant a\,Q_{A}(f,f)+b\,\|f\|_{{\mathcal{H}}}^{2},\quad
f\in\operatorname{dom}(Q_{A}).$ (5.2)
The infimum of all numbers $a$ for which there exists $b\in[0,\infty)$ such
that (5.2) holds is called the bound of $q$ with respect to $Q_{A}$.
The following result is a variant of [38, Theorem 3.2], which in turn is an
abstract version of Morgan [65, Theorem 2.1] (see also [18, Proposition 3.3],
[49], [57, Sect. 4]). Throughout this section, infinite sums are understood in
the weak operator topology and $J\subseteq{\mathbb{N}}$ denotes an index set.
###### Lemma 5.2.
Suppose that $T$ is a self-adjoint operator in ${\mathcal{H}}$ bounded from
below, $T\geqslant cI_{{\mathcal{H}}}$ for some $c\in{\mathbb{R}}$, and $W$ is
a self-adjoint operator in ${\mathcal{H}}$ such that
$\operatorname{dom}\big{(}|T|^{1/2}\big{)}\subseteq\operatorname{dom}\big{(}|W|^{1/2}\big{)}.$
(5.3)
We abbreviate
$q_{W}(f,g)=\big{(}|W|^{1/2}f,\operatorname*{sgn}(W)|W|^{1/2}g\big{)}_{{\mathcal{H}}},\quad
f\in\operatorname{dom}\big{(}|W|^{1/2}\big{)}.$ (5.4)
Let $d,D\in(0,\infty)$, $e\in[0,\infty)$, assume that
$\Phi_{j}\in{\mathcal{B}}({\mathcal{H}})$, $j\in J$, leave
$\operatorname{dom}\big{(}|T|^{1/2}\big{)}$ invariant, that is,
$\Phi_{j}\operatorname{dom}\big{(}|T|^{1/2}\big{)}\subseteq\operatorname{dom}\big{(}|T|^{1/2}\big{)},\quad
j\in J,$ (5.5)
and suppose that the following conditions $(i)$–$(iii)$ hold:
$(i)$ $\sum_{j\in J}\Phi_{j}^{*}\Phi_{j}\leqslant I_{{\mathcal{H}}}$.
$(ii)$ $\sum_{j\in J}|q_{W}(\Phi_{j}f,\Phi_{j}f)|\geqslant
D^{-1}|q_{W}(f,f)|$, $f\in\operatorname{dom}\big{(}|T|^{1/2}\big{)}$.
$(iii)$ $\sum_{j\in J}\||T|^{1/2}\Phi_{j}f\|_{{\mathcal{H}}}^{2}\leqslant
d\||T|^{1/2}f\|_{{\mathcal{H}}}^{2}+e\|f\|_{{\mathcal{H}}}^{2}$,
$f\in\operatorname{dom}\big{(}|T|^{1/2}\big{)}$.
Then,
$|q_{W}(\Phi_{j}f,\Phi_{j}f)|\leqslant
a\,q_{T}(\Phi_{j}f,\Phi_{j}f)+b\|\Phi_{j}f\|_{{\mathcal{H}}}^{2},\quad
f\in\operatorname{dom}(|T|^{1/2}),\;j\in J,$ (5.6)
implies
$|q_{W}(f,f)|\leqslant
a\,d\,D\,q_{T}(f,f)+[a\,e+b]D\,\|f\|_{{\mathcal{H}}}^{2},\quad
f\in\operatorname{dom}(|T|^{1/2}).$ (5.7)
###### Proof.
For $f\in\operatorname{dom}\big{(}|T|^{1/2}\big{)}$ one computes
$\displaystyle|q_{W}(f,f)|$ $\displaystyle\leqslant D\sum_{j\in
J}|q_{W}(\Phi_{j}f,\Phi_{j}f)|\quad\text{(by $(ii)$)}$ $\displaystyle\leqslant
D\sum_{j\in
J}\left[a\big{\|}|T|^{1/2}\Phi_{j}f\big{\|}_{{\mathcal{H}}}^{2}+b\|\Phi_{j}f\|_{{\mathcal{H}}}^{2}\right]\quad\text{(by
\eqref{5.6})}$ $\displaystyle\leqslant
a\,d\,D\big{\|}|T|^{1/2}f\big{\|}_{{\mathcal{H}}}^{2}+(a\,e\,D+b\,D)\|f\|_{{\mathcal{H}}}^{2}\quad\text{(by
$(i)$ and $(iii)$)}$
$\displaystyle=a\,d\,D\,q_{T}(f,f)+[a\,e+b]D\,\|f\|_{{\mathcal{H}}}^{2},$
(5.8)
completing the proof. ∎
###### Remark 5.3.
Considering the concrete case of
$\displaystyle
H_{0}=-\Delta,\quad\operatorname{dom}(H_{0})=H^{2}({{\mathbb{R}}}^{n}),$
$\displaystyle\,Q_{H_{0}}(f,g)=\big{(}(-\Delta)^{1/2}f,(-\Delta)^{1/2}g\big{)}_{L^{2}({\mathbb{R}}^{n})}=(\nabla
f,\nabla g)_{[L^{2}({\mathbb{R}}^{n})]^{n}},$ (5.9) $\displaystyle\hskip
147.95424ptf,g\in\operatorname{dom}(Q_{H_{0}})=H^{1}({\mathbb{R}}^{n}),$
in $L^{2}({{\mathbb{R}}}^{n})$, and assuming that $W$, the operator of
multiplication with a measurable and a.e. real-valued function $W(\,\cdot\,)$,
employing a slight abuse of notation, satisfies (5.3) (for sufficient
conditions on $W$, see, e.g., [88, Theorems 10.17 (b), 10.18] with $r=1$). Let
$\\{\phi_{j}\\}_{j\in J}$, $J\subseteq{\mathbb{N}}$, be a family of smooth,
real-valued functions defined on ${{\mathbb{R}}}^{n}$ in such a manner that
for each $x\in{{\mathbb{R}}}^{n}$, there exists an open neighborhood
$U_{x}\subset{{\mathbb{R}}}^{n}$ of $x$ such that there exist only finitely
many indices $k\in J$ with $\operatorname{supp}\,(\phi_{k})\cap
U_{x}\neq\emptyset$ and $\phi_{k}|_{U_{x}}\neq 0$, as well as
$\sum_{j\in J}\phi_{j}(x)^{2}=1,\quad x\in{{\mathbb{R}}}^{n}$ (5.10)
(the sum over $j\in J$ in (5.10) being finite). Finally, let $\Phi_{j}$ be the
operator of multiplication by the function $\phi_{j}$, $j\in J$. Then one
notes that for these choices, hypothesis $(i)$ holds with equality, and
hypothesis $(ii)$ with $D=1$ follows from $(i)$. Moreover, item $(iii)$ holds
with $d=1$ as long as
$e=\bigg{\|}\sum_{j\in
J}|\nabla\phi_{j}(\cdot)|^{2}\bigg{\|}_{L^{\infty}({{\mathbb{R}}}^{n})}<\infty.$
(5.11)
To verify this, one observes that $\||H_{0}|^{1/2}\phi
f\|_{L^{2}({{\mathbb{R}}}^{n})}^{2}=\int_{{{\mathbb{R}}}^{n}}d^{n}x\,|\nabla(\phi(x)f(x))|^{2}$
and that the cross terms vanish since $\sum_{j\in
J}\phi_{j}(x)(\nabla\phi_{j})(x)=0$, $x\in{{\mathbb{R}}}^{n}$, by condition
(5.10). (We note again that the latter sum over $j\in J$ contains only
finitely many terms in every bounded neighborhood of
$x\in{{\mathbb{R}}}^{n}$.) $\diamond$
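The vanishing of the cross terms invoked above is just the derivative of (5.10). As a quick numerical illustration (a sketch of our own; the smooth function $g$ and all names below are hypothetical, not from the text), any pair $\phi_{1}=\cos(g)$, $\phi_{2}=\sin(g)$ forms a quadratic partition of unity on ${\mathbb{R}}$ and satisfies both identities:

```python
import math

# a two-member quadratic partition of unity on R: phi_1^2 + phi_2^2 = 1,
# cf. (5.10); g is an arbitrary smooth function (hypothetical choice)
g = math.tanh
phi = [lambda x: math.cos(g(x)), lambda x: math.sin(g(x))]

def check_identities(x, h=1e-6):
    """Return (sum_j phi_j(x)^2, sum_j phi_j(x) * phi_j'(x))."""
    square_sum = sum(p(x) ** 2 for p in phi)
    derivs = [(p(x + h) - p(x - h)) / (2 * h) for p in phi]  # central differences
    cross_sum = sum(p(x) * d for p, d in zip(phi, derivs))
    return square_sum, cross_sum
```

At every point the first sum equals 1 and the second vanishes, which is exactly why the cross terms drop out in the verification of item $(iii)$.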
Strongly singular potentials that are covered by Lemma 5.2 are, for instance,
of the following form: Let $J\subseteq{\mathbb{N}}$ be an index set,
$\varepsilon>0$, and $\\{x_{j}\\}_{j\in J}\subset{{\mathbb{R}}}^{n}$,
$n\in{\mathbb{N}}$, $n\geqslant 3$, be a set of points such that
$\inf_{\begin{subarray}{c}j,j^{\prime}\in J\\\ j\neq
j^{\prime}\end{subarray}}|x_{j}-x_{j^{\prime}}|\geqslant\varepsilon.$ (5.12)
In addition, let $\gamma_{j}\in{{\mathbb{R}}}$, $j\in J$,
$\gamma_{0}\in[0,\infty)$, with
$|\gamma_{j}|\leqslant\gamma_{0}<(n-2)^{2}/4,\quad j\in J,$ (5.13)
and
$W_{\\{\gamma_{j}\\}_{j\in J}}(x)=\sum_{j\in
J}\gamma_{j}|x-x_{j}|^{-2}\chi_{B_{n}(x_{j};\varepsilon/4)}(x)+W_{0}(x),\quad
x\in{{\mathbb{R}}}^{n}\backslash\\{x_{j}\\}_{j\in J},$ (5.14)
with
$W_{0}\in L^{\infty}({\mathbb{R}}^{n}),\,\text{ $W_{0}$ real-valued a.e. on ${\mathbb{R}}^{n}$,}$ (5.15)
and $B_{n}(x_{0};r)$ the open ball in ${\mathbb{R}}^{n}$ of radius $r>0$,
centered at $x_{0}\in{\mathbb{R}}^{n}$.
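Numerically, the potential (5.14) is straightforward to evaluate; the sketch below (function and variable names are ours, chosen purely for illustration) sums the cut-off inverse-square singularities over the centers:

```python
import math

def multicenter_potential(x, centers, gammas, eps, W0=lambda x: 0.0):
    """Evaluate W of (5.14): an inverse-square singularity of strength
    gamma_j at each center x_j, cut off to the open ball B_n(x_j; eps/4),
    plus a bounded background W0."""
    val = W0(x)
    for xj, gj in zip(centers, gammas):
        r = math.dist(x, xj)
        if 0.0 < r < eps / 4.0:  # indicator of B_n(x_j; eps/4)
            val += gj / r ** 2
    return val

# two centers in R^3 at distance eps = 1; couplings obey |gamma_j| < (n-2)^2/4 = 1/4
centers = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
gammas = [0.2, -0.1]
w = multicenter_potential((0.1, 0.0, 0.0), centers, gammas, eps=1.0)  # ~ 0.2/0.01
```

Outside all the cutoff balls the singular part vanishes and only the bounded background $W_{0}$ remains.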
Then an application of Hardy’s inequality in ${{\mathbb{R}}}^{n}$, $n\geqslant
3$ (cf. (1.1)), shows that $W_{\\{\gamma_{j}\\}_{j\in J}}$ is form
bounded with respect to $T_{0}$ in (5.9) with form bound strictly less than
one.
At this point one can extend existing results of [29], [30] regarding
quadratic form estimates for multicenter dipole interactions as follows.
###### Theorem 5.4.
Given (5.9)–(5.12) and $W_{0}$ in (5.15), we introduce
$\gamma_{0},\gamma_{j}\in[0,\infty)$, $j\in J$, satisfying
$0\leqslant\gamma_{j}\leqslant\gamma_{0}<\gamma_{c,n},\quad j\in J,$ (5.16)
and
$\displaystyle\begin{split}q_{\\{\gamma_{j}\\}_{j\in J}}(f,g)&=\sum_{j\in
J}\gamma_{j}\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,(x-x_{j}))|x-x_{j}|^{-3}\chi_{B_{n}(x_{j};\varepsilon/4)}(x)\overline{f(x)}g(x),\\\
&\quad+\int_{{\mathbb{R}}^{n}}d^{n}x\,W_{0}(x)\overline{f(x)}g(x),\quad f,g\in
H^{1}({\mathbb{R}}^{n}).\end{split}$ (5.17)
Then $q_{\\{\gamma_{j}\\}_{j\in J}}$ is bounded with respect to $Q_{T_{0}}$ in
(5.9) with form bound strictly less than one.
###### Proof.
Without loss of generality we put $W_{0}=0$. Inequality (3.5) and the
analogous inequality with $u\in{\mathbb{R}}^{n}$, $|u|=1$, replaced by $-u$
yield
$\displaystyle\begin{split}&\text{for all $\gamma\in[0,\gamma_{c,n}]$,}\\\
&\quad\int_{{\mathbb{R}}^{n}}d^{n}x\,|(\nabla
f)(x)|^{2}\geqslant\pm\gamma\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,x)|x|^{-3}|f(x)|^{2},\quad
f\in H^{1}({\mathbb{R}}^{n}),\end{split}$ (5.18)
and hence for some $\varepsilon_{\gamma_{0}}\in(0,1)$ and
$c(\varepsilon_{\gamma_{0}})\in(0,\infty)$,
$\displaystyle\begin{split}&\pm\gamma_{0}\int_{{\mathbb{R}}^{n}}d^{n}x\,(u,x)|x|^{-3}|f(x)|^{2}\leqslant(1-\varepsilon_{\gamma_{0}})\big{\|}\nabla
f\big{\|}_{[L^{2}({\mathbb{R}}^{n})]^{n}}^{2}\\\
&\quad+c(\varepsilon_{\gamma_{0}})\|f\|_{L^{2}({\mathbb{R}}^{n})}^{2},\quad\gamma_{0}\in[0,\gamma_{c,n}),\;f\in
H^{1}({\mathbb{R}}^{n}).\end{split}$ (5.19)
Thus, an application of Lemma 5.2, taking into account that $d=D=1$ as
described in Remark 5.3, implies for some
$C(\varepsilon_{\gamma_{0}})\in(0,\infty)$,
$\displaystyle\begin{split}|q_{\\{\gamma_{j}\\}_{j\in
J}}(f,f)|\leqslant(1-\varepsilon_{\gamma_{0}})\big{\|}\nabla
f\big{\|}_{[L^{2}({\mathbb{R}}^{n})]^{n}}^{2}+C(\varepsilon_{\gamma_{0}})\|f\|_{L^{2}({\mathbb{R}}^{n})}^{2},&\\\
f\in H^{1}({\mathbb{R}}^{n}),&\end{split}$ (5.20)
as was to be proven. ∎
Thus, Theorem 5.4 proves semiboundedness of the self-adjoint multicenter
dipole Hamiltonian $H_{\\{\gamma_{j}\\}_{j\in J}}$ in
$L^{2}({\mathbb{R}}^{n})$, uniquely associated with the quadratic form sum
$Q_{H_{\\{\gamma_{j}\\}_{j\in J}}}=Q_{T_{0}}+q_{\\{\gamma_{j}\\}_{j\in
J}},\quad\operatorname{dom}(Q_{H_{\\{\gamma_{j}\\}_{j\in
J}}})=H^{1}({\mathbb{R}}^{n}),$ (5.21)
under very general hypotheses on $\\{x_{j}\\}_{j\in
J}\subset{{\mathbb{R}}}^{n}$ and $\\{\gamma_{j}\\}_{j\in J}$. We note that
[29], [30] derive sufficient conditions on $\\{x_{j}\\}_{j\in
J}\subset{{\mathbb{R}}}^{n}$ and $\\{\gamma_{j}\\}_{j\in J}$ that guarantee
nonnegativity of $H_{\\{\gamma_{j}\\}_{j\in J}}$ and also discuss situations
characterized by the lack of nonnegativity of $H_{\\{\gamma_{j}\\}_{j\in J}}$.
Finally, we sketch how Remark 3.8 extends to the multicenter situation.
###### Remark 5.5.
Assume (5.9)–(5.12), let $W_{0}$ be as in (5.15), suppose
$\gamma_{j}\in[0,\infty)$, and introduce
$\displaystyle V(x)=\sum_{j\in
J}\big{[}\gamma_{j}(u,(x-x_{j}))|x-x_{j}|^{-3}+\widetilde{V}_{j}(|x-x_{j}|)\big{]}\chi_{B_{n}(x_{j};\varepsilon/4)}(x)+W_{0}(x),$
$\displaystyle\hskip
233.3125ptx\in{{\mathbb{R}}}^{n}\backslash\\{x_{j}\\}_{j\in J},$ (5.22)
with
$r\widetilde{V}_{j}(r)\in L^{1}((0,\varepsilon);dr)\cap
L^{\infty}_{loc}((0,\varepsilon];dr),\quad j\in J.$ (5.23)
Consider the minimally defined Schrödinger operator
$\overset{\textbf{\Large.}}{H}_{\\{\gamma_{j}\\}_{j\in J}}$ in
$L^{2}({\mathbb{R}}^{n})$ given by
$\overset{\textbf{\Large.}}{H}_{\\{\gamma_{j}\\}_{j\in
J}}=-\Delta+V(\,\cdot\,),\quad\operatorname{dom}\big{(}\overset{\textbf{\Large.}}{H}_{\\{\gamma_{j}\\}_{j\in
J}}\big{)}=C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{x_{j}\\}_{j\in J}).$
(5.24)
Then the criterion (3.103) and (2.31), (2.32) combined with [38, Theorems 1.1
and 5.8] imply that
$\displaystyle\begin{split}&\text{$\overset{\textbf{\Large.}}{H}_{\\{\gamma_{j}\\}_{j\in
J}}$ is essentially self-adjoint}\\\ &\quad\text{if and only if
$\lambda_{\gamma_{j},n,0}\geqslant-n(n-4)/4$ for each $j\in J$.}\end{split}$
(5.25)
$\diamond$
## Appendix A Spherical Harmonics and the Laplace–Beltrami Operator in
$L^{2}\big{(}{\mathbb{S}}^{n-1}\big{)}$, $n\geqslant 2$.
In this appendix we summarize some of the results on spherical harmonics and
the Laplace–Beltrami operator on the unit sphere ${\mathbb{S}}^{n-1}$ in
dimensions $n\in{\mathbb{N}}$, $n\geqslant 2$, following [5, Chs. 2,3], [19,
Ch. 1], and [47, Ch. 2].
Assuming $n\in{\mathbb{N}}$, $n\geqslant 2$, Cartesian and polar coordinates
(cf. e.g., [9]) on ${\mathbb{S}}^{n-1}$ are given by
$\displaystyle x=(x_{1},\dots,x_{n})\in{\mathbb{R}}^{n},$ $\displaystyle
x=r\omega,\;\omega=\omega(\theta)=\omega(\theta_{1},\theta_{2},\dots,\theta_{n-1})=x/|x|\in{\mathbb{S}}^{n-1},$
(A.1) $\displaystyle x_{k}\in{\mathbb{R}},\,1\leqslant k\leqslant
n,\;r=|x|\in[0,\infty),\;\theta_{1}\in[0,2\pi),\;\theta_{j}\in[0,\pi),\,2\leqslant
j\leqslant n-1,$
where (cf., e.g., [9], [19, Sect. 1.5])
$\begin{cases}x_{1}=r\cos(\theta_{1})\prod\limits_{j=2}^{n-1}\sin(\theta_{j}),\\\\[2.84526pt]
x_{2}=r\sin(\theta_{1})\prod\limits_{j=2}^{n-1}\sin(\theta_{j}),\\\
\;\vdots\\\ x_{n-1}=r\cos(\theta_{n-2})\sin(\theta_{n-1}),\\\
x_{n}=r\cos(\theta_{n-1}).\end{cases}$ (A.2)
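The map (A.2) is easy to check numerically. The helper below (our own, written for illustration) implements it for general $n$ and confirms $|x|=r$:

```python
import math

def polar_to_cartesian(r, theta):
    """Map (r, theta_1, ..., theta_{n-1}) to x in R^n via (A.2)."""
    n = len(theta) + 1
    # sin_tail[k] = prod_{j=k}^{n-1} sin(theta_j), with the empty product = 1
    sin_tail = [1.0] * (n + 1)
    for k in range(n - 1, 1, -1):
        sin_tail[k] = sin_tail[k + 1] * math.sin(theta[k - 1])
    x = [r * math.cos(theta[0]) * sin_tail[2],   # x_1
         r * math.sin(theta[0]) * sin_tail[2]]   # x_2
    for k in range(3, n + 1):                    # x_k = r cos(theta_{k-1}) * tail
        x.append(r * math.cos(theta[k - 2]) * sin_tail[k])
    return x

x = polar_to_cartesian(2.0, (0.7, 1.1, 0.4))    # a point in R^4 with |x| = 2
```

Summing the squares of the components telescopes via $\sin^{2}+\cos^{2}=1$, giving back $r^{2}$.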
The surface measure $d^{n-1}\omega$ on ${\mathbb{S}}^{n-1}$ and the volume
element in ${\mathbb{R}}^{n}$ then read
$d^{n-1}\omega(\theta)=d\theta_{1}\prod_{j=2}^{n-1}[\sin(\theta_{j})]^{j-1}d\theta_{j},\quad
d^{n}x=r^{n-1}dr\,d^{n-1}\omega(\theta),$ (A.3)
in particular, the area $\omega_{n}$ of the unit sphere ${\mathbb{S}}^{n-1}$
in ${\mathbb{R}}^{n}$ is given by (cf. [66, p. 2])
$\displaystyle\omega_{n}=\int_{{\mathbb{S}}^{n-1}}d^{n-1}\omega(\theta)=2\pi^{n/2}/\Gamma(n/2).$
(A.4)
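As a sanity check of (A.4), the snippet below (our own helper) evaluates $\omega_{n}$ and recovers the familiar values $\omega_{2}=2\pi$ (the circumference of the unit circle), $\omega_{3}=4\pi$, and $\omega_{4}=2\pi^{2}$:

```python
from math import pi, gamma

def unit_sphere_area(n):
    """Area of the unit sphere S^{n-1} in R^n: omega_n = 2 pi^{n/2} / Gamma(n/2),
    cf. (A.4)."""
    return 2.0 * pi ** (n / 2.0) / gamma(n / 2.0)

areas = [unit_sphere_area(n) for n in (2, 3, 4)]  # [2*pi, 4*pi, 2*pi**2]
```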
Turning to spherical harmonics next, we recall that a homogeneous polynomial
$P(x_{1},\ldots,x_{n})$ of degree $\ell\in{\mathbb{N}}_{0}$ (in $n$ variables)
satisfies $P(tx_{1},\ldots,tx_{n})=t^{\ell}P(x_{1},\ldots,x_{n})$, that is, it is a linear
combination of monomials of degree $\ell$. The space of such polynomials with real
coefficients is denoted $\mathscr{P}_{\ell}^{n}$. We define the harmonic
homogeneous polynomials of degree $\ell$ in $n$ variables by
$\displaystyle\mathscr{H}_{\ell}^{n}=\big{\\{}P\in\mathscr{P}_{\ell}^{n}\,\big{|}\,\Delta
P=0\big{\\}},$ (A.5)
where $\Delta$ represents the Laplace differential expression on
${\mathbb{R}}^{n}$. Restricting the elements of $\mathscr{H}_{\ell}^{n}$ to
the sphere ${\mathbb{S}}^{n-1}$, one obtains ${\mathcal{Y}}_{\ell}^{n}$, the
space of spherical harmonics of degree $\ell$ in $n$ dimensions. Spaces of
different degrees are orthogonal with respect to the real inner product on the
sphere,
$\displaystyle\begin{split}(Y,Z)_{L^{2}({\mathbb{S}}^{n-1})}=\int_{{\mathbb{S}}^{n-1}}d^{n-1}\omega(\theta)\,Y(\theta)Z(\theta)=0,&\\\
Y\in{\mathcal{Y}}_{\ell}^{n},\,Z\in{\mathcal{Y}}_{\ell^{\prime}}^{n},\;\ell,\ell^{\prime}\in{\mathbb{N}}_{0},\,\ell\neq\ell^{\prime}.&\end{split}$
(A.6)
The dimension of ${\mathcal{Y}}_{\ell}^{n}$ equals that of
$\mathscr{H}_{\ell}^{n}$ and is given by ([19, Corollary 1.1.4])
$\displaystyle\dim(\mathscr{H}_{\ell}^{n})=\binom{\ell+n-1}{\ell}-\binom{\ell+n-3}{\ell-2}=\frac{2\ell+n-2}{\ell+n-2}\binom{\ell+n-2}{n-2},$
(A.7)
where we use the convention that the second binomial coefficient equals 0 when
$\ell=0,1$, and replace the final fraction by 1 in the case where $n=2$ and
$\ell=0$. This is equivalently formulated in [66, Lemma 3, p. 4] as the
generating series
$\displaystyle\frac{1+x}{(1-x)^{n-1}}=\sum\limits_{\ell=0}^{\infty}\dim({\mathcal{Y}}_{\ell}^{n})\,x^{\ell}.$
(A.8)
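The closed form (A.7) and the generating series (A.8) can be cross-checked against each other: the coefficient of $x^{\ell}$ in $(1+x)/(1-x)^{n-1}$ equals $\binom{\ell+n-2}{n-2}+\binom{\ell+n-3}{n-2}$ (the second term absent for $\ell=0$). The sketch below (helper names are ours) verifies the agreement:

```python
from math import comb

def dim_harmonic(ell, n):
    """dim of degree-ell harmonic homogeneous polynomials in n variables, (A.7);
    the second binomial coefficient is taken to vanish for ell = 0, 1."""
    d = comb(ell + n - 1, ell)
    if ell >= 2:
        d -= comb(ell + n - 3, ell - 2)
    return d

def dim_from_series(ell, n):
    """Coefficient of x^ell in (1+x)/(1-x)^{n-1}, cf. (A.8)."""
    d = comb(ell + n - 2, n - 2)
    if ell >= 1:
        d += comb(ell + n - 3, n - 2)
    return d

# in n = 3 dimensions this recovers the familiar 2*ell + 1
dims = [dim_harmonic(ell, 3) for ell in range(5)]  # [1, 3, 5, 7, 9]
```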
Most importantly, the spherical harmonics are the eigenfunctions of the
Laplace–Beltrami operator $\Delta_{{\mathbb{S}}^{n-1}}$ in
$L^{2}\big{(}{\mathbb{S}}^{n-1}\big{)}$, satisfying the eigenvalue equation
$\displaystyle(-\Delta_{{\mathbb{S}}^{n-1}}Y)(\theta)=\ell(\ell+n-2)Y(\theta),\quad
Y\in{\mathcal{Y}}_{\ell}^{n},\;\ell\in{\mathbb{N}}_{0}.$ (A.9)
Following [19, Sect. 1.5] an explicit characterization for the spherical
harmonics reads as follows: Introducing the multi-index
$\alpha=(\alpha_{1},\ldots,\alpha_{n})\in{\mathbb{N}}_{0}^{n}$, with
$|\alpha|=\sum\limits_{j=1}^{n}\alpha_{j}$, and
$\theta=(\theta_{1},\ldots,\theta_{n-1})$, the spherical harmonics are of the
form
$\displaystyle
Y_{\alpha}(\theta)=[N_{\alpha}]^{-1}g_{\alpha}(\theta_{1})\prod\limits_{j=1}^{n-2}[\sin(\theta_{n-j})]^{|\alpha^{j+1}|}C_{\alpha_{j}}^{\nu_{j}}(\cos(\theta_{n-j})),$
(A.10)
where
$\displaystyle|\alpha^{j}|=\sum\limits_{k=j}^{n-1}\alpha_{k},\quad\,\nu_{j}=|\alpha^{j+1}|+[(n-j-1)/2],$
(A.11)
$\displaystyle\,g_{\alpha}(\theta_{1})=\begin{cases}\cos(\alpha_{n-1}\theta_{1}),&\alpha_{n}=0,\\\
\sin(\alpha_{n-1}\theta_{1}),&\alpha_{n}=1,\end{cases}$ (A.12)
$\displaystyle[N_{\alpha}]^{2}=b_{\alpha}\prod_{j=1}^{n-2}\frac{[\alpha_{j}!]([(n-j+1)/2])_{|\alpha^{j+1}|}(\alpha_{j}+\nu_{j})}{(2\nu_{j})_{\alpha_{j}}([(n-j)/2])_{|\alpha^{j+1}|}\nu_{j}},\quad
b_{\alpha}=\begin{cases}2,&\alpha_{n-1}+\alpha_{n}>0,\\\
1,&\text{otherwise.}\end{cases}$ (A.13)
Here the Pochhammer symbol $(x)_{a}$ is defined by
$(x)_{0}=1,\quad(x)_{n}=\Gamma(x+n)/\Gamma(x)=x(x+1)\cdots(x+n-1),\quad
n\in{\mathbb{N}},$ (A.14)
and $C^{\lambda}_{n}(\,\cdot\,)$ represent the Gegenbauer (or ultraspherical)
polynomials, see, for instance, [1, Ch. 22], [19, Appendix B].
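For completeness, the product form of the Pochhammer symbol in (A.14) agrees with the Gamma-function ratio $\Gamma(x+n)/\Gamma(x)$, which the following sketch (our own helper) verifies numerically:

```python
from math import gamma, prod

def pochhammer(x, n):
    """(x)_n = x (x+1) ... (x+n-1), with (x)_0 = 1 (empty product), cf. (A.14)."""
    return prod(x + k for k in range(n))

lhs = pochhammer(2.5, 3)            # 2.5 * 3.5 * 4.5 = 39.375
rhs = gamma(2.5 + 3) / gamma(2.5)   # the Gamma-function form
```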
The set $\\{Y_{\alpha}\,|\,|\alpha|=\ell,\alpha_{n}=0,1\\}$ represents an
orthonormal basis of ${\mathcal{Y}}_{\ell}^{n}$.
Finally, we recall the form of the Laplace–Beltrami differential expression on
${\mathbb{S}}^{n-1}$ in spherical coordinates. From [5, p. 94],
[19, Lemma 1.4.2], one obtains the recursion (for clarity, we indicate the
space dimension $n\in{\mathbb{N}}$ as a subscript in the Laplacian
$-\Delta_{n}$ for the remainder of this appendix)
$\displaystyle-\Delta_{{\mathbb{S}}^{1}}$
$\displaystyle=-\dfrac{{\partial}^{2}}{{\partial}\theta_{1}^{2}},$
$\displaystyle-\Delta_{{\mathbb{S}}^{2}}$
$\displaystyle=-\frac{1}{\sin(\theta_{2})}\frac{\partial}{\partial\theta_{2}}\bigg{(}\sin(\theta_{2})\frac{\partial}{\partial\theta_{2}}\bigg{)}-\frac{1}{\sin^{2}(\theta_{2})}\frac{\partial^{2}}{\partial\theta_{1}^{2}},$
(A.15) $\displaystyle-\Delta_{{\mathbb{S}}^{n-1}}$
$\displaystyle=-\dfrac{{\partial}^{2}}{{\partial}\theta_{n-1}^{2}}-(n-2)\cot(\theta_{n-1})\dfrac{{\partial}}{{\partial}\theta_{n-1}}-[\sin(\theta_{n-1})]^{-2}\Delta_{{\mathbb{S}}^{n-2}},\quad
n\geqslant 3.$
Explicitly (cf. [19, p. 19]),
$\displaystyle\begin{split}-\Delta_{{\mathbb{S}}^{n-1}}&=-[\sin(\theta_{n-1})]^{2-n}\frac{\partial}{\partial\theta_{n-1}}\bigg{[}[\sin(\theta_{n-1})]^{n-2}\frac{\partial}{\partial\theta_{n-1}}\bigg{]}\\\
&\quad-\sum_{j=1}^{n-2}\bigg{(}\prod_{k=j+1}^{n-1}[\sin(\theta_{k})]^{-2}\bigg{)}[\sin(\theta_{j})]^{1-j}\frac{\partial}{\partial\theta_{j}}\bigg{[}[\sin(\theta_{j})]^{j-1}\frac{\partial}{\partial\theta_{j}}\bigg{]}\\\
&=-\sum_{j=1}^{n-1}\bigg{(}\prod_{k=1}^{j-1}[\sin(\theta_{n-k})]^{-2}\bigg{)}[\sin(\theta_{n-j})]^{1-j-n}\\\
&\hskip
39.83368pt\times\frac{\partial}{\partial\theta_{n-j}}\bigg{[}[\sin(\theta_{n-j})]^{n-j-1}\frac{\partial}{\partial\theta_{n-j}}\bigg{]}.\end{split}$
(A.16)
Acknowledgments. We are indebted to Mark Ashbaugh, Andrei Martínez-Finkelshtein, and Gerald Teschl for very helpful comments. We are also very grateful to the anonymous referee for constructive criticism.
## References
* [1] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1972.
* [2] R. F. Alvarez-Estrada and A. Galindo, Bound states in some Coulomb systems, Nuovo Cim. 44 B, 47–66 (1978).
* [3] W. O. Amrein, Non-Relativistic Quantum Dynamics, Volume 2, Reidel, Dordrecht, 1980.
* [4] W. Arendt, G. R. Goldstein, J. A. Goldstein, Outgrowths of Hardy’s inequality, in Recent Advances in Differential Equations and Mathematical Physics, N. Chernov, Y. Karpeshina, I. W. Knowles, R. T. Lewis, and R. Weikard (eds.), Contemp. Math. 412, 51–68, 2006.
* [5] K. Atkinson and W. Han, Spherical Harmonics and Approximations on the Unit Sphere: An Introduction, Lecture Notes in Math., Vol. 2044, Springer, 2012.
* [6] A. A. Balinsky, W. D. Evans, and R. T. Lewis, The Analysis and Geometry of Hardy’s Inequality, Universitext, Springer, 2015.
* [7] J. A. Barceló, T. Luque, and S. Pérez-Esteva, Characterization of Sobolev spaces on the sphere, J. Math. Anal. Appl. 491, 124240 (2020).
* [8] J. Behrndt, S. Hassi, and H. De Snoo, Boundary Value Problems, Weyl Functions, and Differential Operators, Monographs in Math., Vol. 108, Birkhäuser, Springer, 2020.
* [9] L. E. Blumenson, A derivation of $n$-dimensional spherical coordinates, Amer. Math. Monthly 67, 63–66 (1960).
* [10] L. E. Blumenson, On the eigenvalues of Hill’s equation, Commun. Pure Appl. Math. 16, 261–266 (1963).
* [11] R. Bosi, J. Dolbeault, and M. J. Esteban, Estimates for the optimal constants in multipolar Hardy inequalities for Schrödinger and Dirac operators, Commun. Pure Appl. Anal. 7, 533–562 (2008).
* [12] W. B. Brown and R. E. Roberts, On the critical binding of an electron by an electric dipole, J. Chem. Physics 46, 2006–2007 (1967).
* [13] F. Catrina and Z.-Q. Wang, On the Caffarelli–Kohn–Nirenberg inequalities: Sharp constants, existence (and nonexistence), and symmetry of extremal functions, Commun. Pure Appl. Math. 54, 229–258 (2001).
* [14] C. Cazacu, New estimates for the Hardy constants of multipolar Schrödinger operators, Commun. Contemp. Math. 18, no. 5, 1550093 (2016), 28pp.
* [15] K. S. Chou and C. W. Chu, On the best constant for a weighted Sobolev–Hardy inequality, J. London Math. Soc. 48, 137–151 (1993).
* [16] K. Connolly and D. J. Griffiths, Critical dipoles in one, two, and three dimensions, Am. J. Phys. 75, 524–631 (2007).
* [17] O. H. Crawford, Bound states of a charged particle in a dipole field, Proc. Phys. Soc. 91, 279–284 (1967).
* [18] H. L. Cycon, R. G. Froese, W. Kirsch, and B. Simon, Schrödinger Operators with Applications to Quantum Mechanics and Global Geometry, Texts and Monographs in Physics, Springer, Berlin, 1987.
* [19] F. Dai and Y. Xu, Approximation Theory and Harmonic Analysis on Spheres and Balls, Springer, New York, 2013.
* [20] E. B. Davies, Heat Kernels and Spectral Theory, Cambridge Tracts in Math., Vol. 92, Cambridge Univ. Press, Cambridge, 1989.
* [21] E. B. Davies, Spectral Theory and Differential Operators, Cambridge University Press, Cambridge, 1995.
* [22] T. Duyckaerts, Inégalités de résolvante pour l’opérateur de Schrödinger avec potentiel multipolaire critique, Bull. Soc. Math. France 134, 201–239 (2006).
* [23] D. E. Edmunds and W. D. Evans, Spectral Theory and Differential Operators, 2nd ed., Oxford Math. Monographs, Oxford University Press, Oxford, 2018.
* [24] W. D. Evans and R. T. Lewis, On the Rellich inequality with magnetic potentials, Math. Z. 251, 267–284 (2005).
* [25] W. N. Everitt and H. Kalf, The Bessel differential equation and the Hankel transform, J. Comp. Appl. Math. 208, 3–19 (2007).
* [26] W. G. Faris, Self-Adjoint Operators, Lecture Notes in Math., Vol. 433, Springer, Berlin, 1975.
* [27] W. G. Faris, Inequalities and uncertainty principles, J. Math. Phys. 19, 461–466 (1978).
* [28] V. Felli, E. M. Marchini, and S. Terracini, On Schrödinger operators with multipolar inverse square potentials, J. Funct. Anal. 250, 265–316 (2007).
* [29] V. Felli, E. M. Marchini, and S. Terracini, On the behavior of solutions to Schrödinger equations with dipole type potentials near the singularity, Discrete Cont. Dyn. Syst. 21, 91–119 (2008).
* [30] V. Felli, E. M. Marchini, and S. Terracini, On Schrödinger operators with multisingular inverse-square anisotropic potentials, Indiana Univ. Math. J. 58, 617–676 (2009).
* [31] V. Felli, D. Mukherjee, and R. Ognibene, On fractional multi-singular Schrödinger operators: Positivity and localization of binding, J. Funct. Anal. 278, 108389 (2020).
* [32] E. Fermi and E. Teller, The capture of negative mesotrons in matter, Phys. Rev. 72, 406 (1947).
* [33] F. R. Gantmacher and M. G. Krein, Oscillation Matrices and Kernels and Small Vibrations of Mechanical Systems, rev. ed., AMS Chelsea Publ., Amer. Math. Soc., Providence, RI, 2002.
* [34] F. Gesztesy, On non-degenerate ground states for Schrödinger operators, Rep. Math. Phys. 20, 93–109 (1984).
* [35] F. Gesztesy, G. M. Graf, and B. Simon, The ground state energy of Schrödinger operators, Commun. Math. Phys. 150, 375–384 (1992).
* [36] F. Gesztesy and W. Kirsch, One-dimensional Schrödinger operators with interactions singular on a discrete set, J. reine angew. Math. 362, 28–50 (1985).
* [37] F. Gesztesy, L. L. Littlejohn, and R. Nichols, On self-adjoint boundary conditions for singular Sturm–Liouville operators bounded from below, J. Diff. Eq. 269, 6448–6491 (2020).
* [38] F. Gesztesy, M. Mitrea, I. Nenciu, and G. Teschl, Decoupling of deficiency indices and applications to Schrödinger-type operators with possibly strongly singular potentials, Adv. Math. 301, 1022–1061 (2016).
* [39] F. Gesztesy, M. M. H. Pang, and J. Stanfill, Bessel-type operators and a refinement of Hardy’s inequality, in From Operator Theory to Orthogonal Polynomials, Combinatorics, and Number Theory. A Festschrift in honor of Lance L. Littlejohn’s 70th birthday, F. Gesztesy and A. Martinez-Finkelshtein (eds.), Operator Theory: Advances and Applications, Birkhäuser, Springer, to appear, arXiv:2102.00106.
* [40] F. Gesztesy and L. Pittner, On the Friedrichs extension of ordinary differential operators with strongly singular potentials, Acta Phys. Austriaca 51, 259–268 (1979).
* [41] F. Gesztesy and L. Pittner, A generalization of the virial theorem for strongly singular potentials, Rep. Math. Phys. 18, 149–162 (1980).
* [42] F. Gesztesy and M. Zinchenko, On spectral theory for Schrödinger operators with strongly singular potentials, Math. Nachr. 279, 1041–1082 (2006).
* [43] N. Ghoussoub and A. Moradifam, On the best possible remaining term in the Hardy inequality, Proc. Nat. Acad. Sci. 105, no. 37, 13746–13751 (2008).
* [44] N. Ghoussoub and A. Moradifam, Bessel pairs and optimal Hardy and Hardy–Rellich inequalities, Math. Ann. 349, 1–57 (2011).
* [45] N. Ghoussoub and A. Moradifam, Functional Inequalities: New Perspectives and New Applications, Amer. Math. Soc., Providence, RI, 2013.
* [46] I. S. Gradshteyn and I. M. Rhyzhik, Table of Integrals, Series, and Products, Academic Press, San Diego, 1980.
* [47] L. Hermi, On the Spectrum of the Dirichlet Laplacian and Other Elliptic Operators, Ph.D. Thesis, University of Missouri, Columbia, 1999.
* [48] W. Hunziker and C. Günther, Bound states in dipole fields and continuity properties of electronic spectra, Helv. Phys. Acta 53, 201–208 (1980).
* [49] R. S. Ismagilov, Conditions for the semiboundedness and discreteness of the spectrum for one-dimensional differential equations, Sov. Math. Dokl. 2, 1137–1140 (1961).
* [50] H. Kalf, On the characterization of the Friedrichs extension of ordinary or elliptic differential operators with a strongly singular potential, J. Funct. Anal. 10, 230–250 (1972).
* [51] H. Kalf, A characterization of the Friedrichs extension of Sturm–Liouville operators, J. London Math. Soc. (2) 17, 511–521 (1978).
* [52] H. Kalf, Gauss’ theorem and the self-adjointness of Schrödinger operators, Arkiv Mat. 18, 19–47 (1980).
* [53] H. Kalf, A note on the domain characterization of certain Schrödinger operators with strongly singular potentials, Proc. Roy. Soc. Edinburgh 97A, 125–130 (1984).
* [54] H. Kalf and J. Walter, Strongly singular potentials and essential self-adjointness of singular elliptic operators in $C_{0}^{\infty}({\mathbb{R}}^{n}\backslash\\{0\\})$, J. Funct. Anal. 10, 114–130 (1972).
* [55] T. Kato, Note on the least eigenvalue of the Hill equation, Quart. Appl. Math. 10, 292–294 (1952).
* [56] T. Kato, Perturbation Theory for Linear Operators, Reprint of the 1980 edition, Classics in Mathematics, Springer, Berlin, 1995.
* [57] W. Kirsch, Über Spektren stochastischer Schrödingeroperatoren, Ph.D. thesis, Ruhr-Universität Bochum, 1981.
* [58] A. Kufner, L. Maligranda, and L.-E. Persson, The Hardy Inequality. About its History and Some Related Results, Vydavatelský Servis, Pilsen, 2007.
* [59] A. Kufner, L.-E. Persson, and N. Samko, Weighted Inequalities of Hardy Type, 2nd ed., World Scientific, Singapore, 2017.
* [60] J. Lévy-Leblond, Electron capture by polar molecules, Phys. Rev. 153, 1–4 (1967).
* [61] E. H. Lieb and M. Loss, Analysis, 2nd ed., Graduate Studies in Math., Vol. 14, Amer. Math. Soc., Providence, RI, 2001.
* [62] M. Lucia and S. Prashanth, Criticality theory for Schrödinger operators with singular potential, J. Diff. Eq. 265, 3400–3440 (2018); Addendum, 269, 7211–7213 (2020).
* [63] J. Meixner and F. W. Schäfke, Mathieusche Funktionen und Sphäroidfunktionen. Mit Anwendungen auf physikalische und technische Probleme, Springer, Berlin, 1954.
* [64] R. A. Moore, The least eigenvalue of Hill’s equation, J. d’Analyse Math. 5, 183–196 (1956/57).
* [65] J. D. Morgan, Schrödinger operators whose potentials have separated singularities, J. Operator Th. 1, 109–115 (1979).
* [66] C. Müller, Spherical Harmonics, Lecture Notes in Math., Vol. 17, Springer, Berlin, 1966.
* [67] B. Opic and A. Kufner, Hardy-Type Inequalities, Pitman Research Notes in Mathematics Series, Vol. 219. Longman Scientific & Technical, Harlow, 1990.
* [68] Yu. N. Ovchinnikov and I. M. Sigal, Number of bound states of three-body systems and Efimov’s effect, Ann. Phys. 123, 274–295 (1979).
* [69] C. R. Putnam, On the least eigenvalue of Hill’s equation, Quart. Appl. Math. 9, 310–314 (1951).
* [70] S. Rademacher and H. Siedentop, Accumulation rate of bound states of dipoles in graphene, J. Math. Phys. 57, 042105 (2016).
* [71] M. Reed and B. Simon, Methods of Mathematical Physics. I: Functional Analysis. Revised and Enlarged Edition, Academic Press, New York, 1980.
* [72] M. Reed and B. Simon, Methods of Mathematical Physics. II: Fourier Analysis, Self-Adjointness, Academic Press, New York, 1975.
* [73] M. Reed and B. Simon, Methods of Mathematical Physics. IV: Analysis of Operators, Academic Press, New York, 1978.
* [74] M. Ruzhansky and D. Suragan, Hardy Inequalities on Homogeneous Groups. 100 Years of Hardy Inequalities, Progress in Math., Vol. 327, Birkhäuser, Springer, Cham, 2019.
* [75] K. Schmüdgen, Unbounded Self-adjoint Operators on Hilbert Space, Graduate Texts in Math., Vol. 265, Springer, Dordrecht, 2012.
* [76] B. Simon, Operator Theory, A Comprehensive Course in Analysis, Part 4, Amer. Math. Soc., Providence, R.I., 2015.
* [77] B. Simon, Hardy and Rellich inequalities in non-integral dimension, J. Operator Th. 9, 143–146 (1983). Addendum, J. Operator Th. 12, 197 (1984).
* [78] S. Stanek, A note on the oscillation of solutions of the differential equation $y^{\prime\prime}=\lambda q(t)y$ with a periodic coefficient, Czech. math. J. 29, 318–323 (1979).
* [79] G. Talenti, Best constant in Sobolev inequality, Ann. Mat. Pura Appl. (4) 110, 353–372 (1976).
* [80] S. Terracini, On positive entire solutions to a class of equations with a singular coefficient and critical exponent, Adv. Diff. Eq. 1, 241–264 (1996).
* [81] G. Teschl, Jacobi Operators and Completely Integrable Nonlinear Lattices, AMS, Providence, 1999.
* [82] W. Thirring, A Course in Mathematical Physics, 3. Quantum Mechanics of Atoms and Molecules, transl. by E. M. Harrell, Springer, New York, 1981.
* [83] J. E. Turner, Minimum dipole moment required to bind an electron, Amer. J. Phys. 45, 758–766 (1977).
* [84] J. E. Turner and K. Fox, Minimum dipole moment required to bind an electron to a finite dipole, Phys. Letters 23, 547–549 (1966).
* [85] P. Ungar, Stable Hill equation, Commun. Pure Appl. Math. 14, 707–710 (1961).
* [86] W. Van Assche, Compact Jacobi matrices: from Stieltjes to Krein and $M(a,b)$, Ann. Fac. Sci. Toulouse S5, 195–215 (1996).
* [87] J. L. Vazquez and E. Zuazua, The Hardy inequality and the asymptotic behavior of the heat equation with an inverse-square potential, J. Funct. Anal. 173, 103–153 (2000).
* [88] J. Weidmann, Linear Operators in Hilbert Spaces, Graduate Texts in Mathematics, Vol. 68, Springer, New York, 1980.
* [89] J. Weidmann, Lineare Operatoren in Hilberträumen, Teubner, Stuttgart, 2000.
* [90] A. Wintner, On the non-existence of conjugate points, Amer. J. Math. 73, 368–380 (1951).
# Inertial range statistics of the Entropic Lattice Boltzmann in 3D turbulence
Michele Buzzicotti Department of Physics and INFN, University of Rome Tor
Vergata, via della Ricerca Scientifica 1, 00133, Rome, Italy. Guillaume
Tauzin Department of Physics and INFN, University of Rome Tor Vergata, via
della Ricerca Scientifica 1, 00133, Rome, Italy. Chair of Applied Mathematics
and Numerical Analysis, Bergische Universität Wuppertal, Gaußstrasse 20, 42119
Wuppertal, Germany
###### Abstract
We present a quantitative analysis of the inertial-range statistics produced
by the Entropic Lattice Boltzmann Method (ELBM) in the context of 3D homogeneous
and isotropic turbulence. ELBM is a promising mesoscopic model, particularly
interesting for the study of fully developed turbulent flows because of its
intrinsic scalability and its unconditional stability. In the hydrodynamic
limit, the ELBM is equivalent to the Navier-Stokes equations with an extra
eddy viscosity term Malaspinas2008 . From this macroscopic formulation, we
have derived a new hydrodynamical model that can be implemented as a Large-Eddy
Simulation (LES) closure. This model is not positive definite and is hence
able to reproduce backscatter events, in which energy is transferred from the
subgrid to the resolved scales. A statistical comparison of both mesoscopic and
macroscopic entropic models based on the ELBM approach is presented and
validated against fully resolved Direct Numerical Simulations (DNS). In
addition, we provide a second comparison of the ELBM with the well-known
Smagorinsky closure. We find that ELBM extends the scaling range of the energy
spectrum while preserving simulation stability. Concerning the statistics of
higher-order inertial-range observables, the accuracy of ELBM is shown to be
comparable with that of other approaches, such as the Smagorinsky model.
## I Introduction
Turbulence is common in nature, and its unpredictable behavior has fundamental
consequences for the understanding and control of various systems, from small
engineering devices dumitrescu2004rotational ; Davidson2015 ; Pope2001 , up to
large-scale geophysical and astrophysical flows gill2016atm ;
alexakis2018 ; barnes2001 . Turbulent flows are described by the Navier-Stokes
equations (NSE),
$\begin{cases}\partial_{t}\bm{u}+\bm{u}\cdot\bm{\nabla}\bm{u}=-\bm{\nabla}p+\nu\nabla^{2}\bm{u}+\bm{f}\\\
\bm{\nabla}\cdot\bm{u}=0\end{cases}$ (1)
which give the evolution of the incompressible velocity field
$\bm{u}(\bm{x},t)$, with kinematic viscosity $\nu$, subject to a pressure
field $p$ and to an external forcing $\bm{f}$. However, even though the
equations of motion have been known for almost two hundred years, a direct
analytical approach remains elusive Frisch1995 . To overcome the mathematical
difficulties, scientists, helped by the exponential growth of computational
power, have searched for approximate solutions using numerical algorithms
fox2003computational ; Pope2001 ; ishihara2009study .
Unfortunately, not every effort in this direction has been successful.
Indeed, the NSE have very rich non-linear dynamics, in which a large range of
scales, from the domain size down to the smallest fluctuations, are coupled
together. This results in a very high-dimensional problem, with a
dimensionality proportional to the range of active scales Frisch1995 , and
with highly intermittent statistics dominated by extreme and
rare fluctuations yeung2015extreme ; benzi2010inertial ; iyer2017reynolds . As
a consequence, no matter how powerful new supercomputers become, numerical
algorithms cannot handle all the degrees of freedom involved in the dynamics
Davidson2015 . The way out is to introduce a scale separation and compute only
the dynamics of the degrees of freedom belonging to a subset of scales, while
neglecting the others Meneveau2000 ; filippova2001multiscale ; Dong2008a ;
Dong2008b . However, due to the non-linearity of the NSE, there is never a real
scale separation in the equations of motion bohr2005dynamical ; Frisch1995 ,
and the small-scale effects on the scales of interest need to be compensated
by the introduction of a model. In other words, the benefit of multi-scale
modeling is that it enforces a scale separation, and the main challenge is to
find a “closure” that guarantees numerical stability while being as accurate
as possible in reproducing the coupling of the missing scales to
the resolved ones. This is the principle behind the celebrated Large-Eddy
simulations (LES), which solve the flow only on a subset of “large”
scales by filtering each term of the NSE and replacing the non-linear
coupling term between the resolved and the sub-grid scales (SGS) with a closure
Pope2001 ; Lesieur2005 . One of the most important differences between the
true coupling term arising from the filtered NSE and the common LES closures
used in the literature is that the latter are purely dissipative, to ensure
simulation stability, see Meneveau2000 . As a consequence, such closures
cannot reproduce backscatter events, in which energy flows from the SGS to the
large scales, with important consequences for the statistics of the resolved
velocity field. Another numerical approach, which has gained particular
popularity, consists of solving the flow’s macroscopic hydrodynamical
properties as an approximation of its mesoscopic behaviour de2013non . The
Lattice Boltzmann Method (LBM) falls into that
category benzi1992lattice ; Succi2001 . In LBM, the flow is simulated by
evolving the Boltzmann equation for the single-phase density function
$f(\bm{x},t)$. The idea is to evolve the streaming and collision of particle
distribution functions, with the possible velocities restricted to a
subset of discrete lattice directions frisch1986lattice ; Wolf-Gladrow2000 .
It is crucial to choose the collision operator so that, macroscopically (in
the limit of small Knudsen number), the dynamics described by the NSE is
recovered qian1995recent ; chen1998lattice . The most common collision
operator is the Bhatnagar-Gross-Krook model (BGK), see Bhatnagar1954 ,
corresponding to the relaxation towards an equilibrium distribution,
$f^{eq}_{i}(\bm{x},t)$, taken to be a discrete Maxwellian, with a fixed
relaxation time, $\tau_{0}$,
$f_{i}(\bm{x}+\bm{c}_{i}\Delta_{t},t+\Delta_{t})-f_{i}(\bm{x},t)=-\frac{1}{\tau_{0}}\left[f_{i}(\bm{x},t)-f^{eq}_{i}(\bm{x},t)\right]+F_{i}.$ (2)
Here, $F_{i}$, is a term introduced to model a macroscopic external forcing
Succi2001 , and $i=0,..,q-1$, indexes the different velocity directions on the
lattice. Eq. (2) is obtained by discretizing the Boltzmann equation and
selecting the lattice spacing, $\Delta\bm{x}$, such that, divided by the time
step, $\Delta_{t}$, it equals the lattice velocity
$\bm{c}_{i}=\Delta\bm{x}/\Delta_{t}$. From those mesoscopic quantities, it is
then possible to recover the macroscopic velocity and density fields by
following the perturbative Chapman-Enskog expansion Wolf-Gladrow2000 . It can
be shown that evolving Eq. 2 is equivalent, up to approximations, to evolving
the weakly compressible NSE for a flow with a density
$\rho(\bm{x},t)=\sum_{i=0}^{q-1}f_{i}(\bm{x},t)$, a velocity
$\bm{u}(\bm{x},t)=\sum_{i=0}^{q-1}f_{i}(\bm{x},t)\bm{c}_{i}/\rho(\bm{x},t)$
and a viscosity directly related to the relaxation time Wolf-Gladrow2000 ,
$\nu_{0}=c_{s}^{2}\Delta_{t}\left(\tau_{0}-\frac{1}{2}\right)\text{,}$ (3)
where, $c_{s}$, is the speed of sound, and $\tau_{0}$ is the dimensionless
relaxation time. A numerical validation of the hydrodynamic recovery of BGK-
LBM in the context of 2D homogeneous isotropic turbulence (HIT) was performed
in Tauzin2018 , showing good agreement with DNS in both decaying and forced
regimes. Although this method is well suited to describing the various physics
of multi-phase flows and flows with complex boundaries in a highly scalable
fashion, the BGK-LBM model suffers from numerical instabilities when $\tau_{\rm
0}\rightarrow\frac{1}{2}$, i.e. $\nu_{\rm 0}\rightarrow 0$, which has made the
study of turbulent flows highly prohibitive for this method Tauzin2018 . To
push LBM towards more turbulent regimes a number of collision operators have
been proposed, see eggels1996direct ; yu2006turbulent ; sagaut2010toward ;
malaspinas2012consistent . Here we focus on the Entropic LBM (ELBM) Karlin1999
; ansumali2002single , which tackles the stability issue by equipping LBM with
an H-theorem. To achieve this result, the ELBM differs from BGK-LBM in two
major aspects. First, the equilibrium distribution $\bm{f}^{eq}(\bm{x},t)$ is
no longer a discretization of the Maxwell-Boltzmann distribution, but is
calculated as the extremum of a discretized H-function defined as,
$H[\bm{f}]=\sum_{i=0}^{q-1}f_{i}\log\left(\frac{f_{i}}{w_{i}}\right),\,\,\bm{f}=\left\\{f_{i}\right\\}_{i=0}^{q-1},$
(4)
where $w_{i}$ are the weights associated with each lattice direction, under the
constraints of mass and momentum conservation, see Ansumali2003 . The second
difference in ELBM is that the relaxation time is no longer a constant but
is modified at every time step in order to enforce the monotonic decrease of H
after the collision. This results in an apparent unconditional stability as
$\nu_{0}\rightarrow 0$ Karlin2015 . It follows that the ELBM evolution equations
are,
$f_{i}(\bm{x}+\bm{c}_{i}\Delta_{t},t+\Delta_{t})-f_{i}({\bm{x}},t)=-\alpha({\bm{x}},t)\beta\left[f_{i}({\bm{x}},t)-f_{i}^{eq}({\bm{x}},t)\right],$ (5)
where $\beta=1/(2\tau_{0})$ is constant, while the new relaxation time,
$\tau_{\text{eff}}(\bm{x},t)=1/(\alpha(\bm{x},t)\beta)$, fluctuates in time
and space through the definition of an entropic parameter $\alpha(\bm{x},t)$.
More recently the ELBM method has been extended to a family of multi-
relaxation time (MRT) lattice Boltzmann models dorschner2016entropic ;
dorschner2017transitional ; dorschner2018fluid . Note that $\alpha$ can be
computed as the solution of the entropic equation
$H[\bm{f}]=H[\bm{f}-\alpha\bm{f}^{neq}]$ which represents the maximum
H-function variation due to a collision, with
$\bm{f}^{neq}=\bm{f}-\bm{f}^{eq}$. Following this approach the computation of
$\alpha(\bm{x},t)$ can be performed via an expensive Newton-Raphson algorithm
at every grid point and at every time step of ELBM. To alleviate this problem,
after the original ELBM formulation Karlin1999 , a new version has been
proposed where the computation of the entropic parameter is based on an
analytical formulation derived as a first-order expansion of the original
model karlin2014gibbs ; bosch2015entropic . However, to our knowledge, a study
of high-order structure functions in the context of forced 3D HIT has never
been attempted before using the original ELBM model. In this regard, and also
aiming to measure high-order, extremely sensitive statistical observables, we
implemented the original ELBM formulation, which relies on the least number of
approximations, even though it is computationally more expensive. More details about
ELBM will be given in section II. Let us notice that BGK-LBM is recovered from
Eq. (5) with $\alpha=2$ and the specific Maxwellian expression of
$f_{i}^{eq}$. It is important to stress that the bridge relation described in
Eq. (3) connecting viscosity and relaxation time still holds for fluctuating
quantities, hence we can write,
$\displaystyle\begin{split}\nu_{\text{eff}}\left(\alpha\right)&=c_{s}^{2}\Delta_{t}\left(\frac{1}{\alpha\beta}-\frac{1}{2}\right)=\\\
&=c_{s}^{2}\Delta_{t}\left(\frac{1}{2\beta}-\frac{1}{2}\right)+c_{s}^{2}\Delta_{t}\frac{2-\alpha}{2\alpha\beta}=\\\
&=\nu_{0}+\delta\nu_{\alpha},\end{split}$ (6)
where $\nu_{0}$ represents the constant kinematic viscosity and
$\delta\nu_{\alpha}$ is the fluctuating term. Following this idea, it has been
shown that ELBM is implicitly enforcing an SGS model of eddy-viscosity type
Malaspinas2008 ; Karlin2015 . In particular, as initially done in
Malaspinas2008 , and then rederived in chapter 4 of tauzin2019implicit ,
performing a third-order Chapman-Enskog perturbative expansion in the limit of
small Knudsen number (Kn), it is possible to obtain a macroscopic
approximation of $\delta\nu_{\alpha}$, which can be written as,
$\delta\nu_{\alpha}^{M}=-c_{s}^{2}\Delta t^{2}\frac{S_{\ell
j}S_{ij}S_{i\ell}}{S_{ij}S_{ij}},$ (7)
where $S_{ij}=1/2(\partial_{j}u_{i}+\partial_{i}u_{j})$ is the strain rate
tensor. The entropic eddy viscosity in Eq. (7) is particularly interesting
because it is not positive definite and can reproduce events of energy
backscatter, which is generally not the case among the other LES closures,
i.e. see the Smagorinsky eddy viscosity Smagorinsky1963 . After the
introduction of the new LES model, see section II, we compare it with standard
Smagorinsky closure and with fully resolved data obtained from DNS. At the
same time, we also present for the first time a quantitative investigation of
the inertial range statistics provided by the mesoscopic ELBM approach in the
context of 3d turbulence. Results provide evidence that ELBM is a good
approximation of the 3d flows up to turbulent regimes never reachable with the
standard BGK-LBM. We found that ELBM guarantees the simulation stability,
producing a considerable extension of the inertial range scaling, i.e. the
extension of the energy spectrum power law. Measuring statistical properties
of higher order observables such as the structure functions, we found that
ELBM is also able to reproduce qualitatively the intermittent features of real
3d turbulent flows with an accuracy comparable to the Smagorinsky model. To
get a more accurate estimation of the anomalous exponents more refined models
are required biferale2019self .
The paper is organized as follows. In section II we introduce the details of
the ELBM and the LES models considered in this work. In section III we present
the set of simulations performed. In sect. IV we evaluate the quality of the
two LES closures by comparing them with the real SGS energy transfer measured
from fully resolved DNS. In sect. V we focus on the intermittent properties by
analysing high-order inertial range statistics Arneodo2008 ; benzi2010inertial
. In sect. VI we discuss our conclusions.
## II Turbulence Modelling
In this section we give a description of the SGS modelling approaches
considered in this work. We start discussing the mesoscopic ELBM approach
highlighting the differences with respect to the standard BGK-LBM. Then
we discuss the new hydrodynamic LES closure inspired by the ELBM macroscopic
approximation first derived in Malaspinas2008 . At the end of this section we
briefly recall the well known Smagorinsky model.
### II.1 Entropic Lattice Boltzmann Method
Using the same formalism as in Karlin1999 , the ELBM, Eq. (5), can be
rewritten as,
$\displaystyle\begin{split}f_{i}(\bm{x}+\bm{c}_{i}\Delta_{t},t+\Delta_{t})&=f_{i}({\bm{x}},t)-\alpha({\bm{x}},t)\beta\left(f_{i}({\bm{x}},t)-f_{i}^{eq}({\bm{x}},t)\right)\\\
&=\left(1-\beta\right)\,f^{pre}_{i}({\bm{x}},t)+\beta\,f_{i}^{mir}({\bm{x}},t)\\\
&=f_{i}^{post}(\bm{x},t),\end{split}$ (8)
where the fluctuating relaxation time is
$\tau_{\text{eff}}({\bm{x}},t)=1/(\alpha({\bm{x}},t)\beta)$, with
$\beta=1/(2\tau_{0})$ and where $\alpha(\bm{x},t)$ is the time and space
dependent, locally-calculated, entropic parameter. The post-collision
distribution, $\bm{f}^{post}(\beta)$, can be understood as a convex
combination between the pre-collision distribution, $\bm{f}^{pre}=\bm{f}$, and
the so-called mirror distribution,
$\bm{f}^{mir}(\alpha)=\bm{f}^{pre}-\alpha\,\bm{f}^{neq}$, with
$\bm{f}^{neq}=\bm{f}^{pre}-\bm{f}^{eq}$, the non-equilibrium part of
$\bm{f}^{pre}$. This convex combination is parametrized by the parameter
$\beta$ in the range $0<\beta<1$ for which we have $0.5<\tau_{0}<+\infty$.
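For concreteness, the algebraic equivalence between the two forms of the post-collision state in Eq. (8), the direct relaxation of Eq. (5) and the convex combination of pre-collision and mirror distributions, can be checked numerically. The following Python sketch uses placeholder populations, not data from an actual simulation:

```python
import numpy as np

# Numerical check of the identity behind Eq. (8): the convex combination
# (1 - beta) f^pre + beta f^mir, with f^mir = f^pre - alpha f^neq,
# equals the direct relaxation f^pre - alpha beta f^neq of Eq. (5).
# The populations below are placeholder values, not simulation data.
rng = np.random.default_rng(0)
f_pre = rng.uniform(0.1, 1.0, 27)        # e.g. the 27 populations of D3Q27
f_eq = rng.uniform(0.1, 1.0, 27)
f_neq = f_pre - f_eq
alpha, beta = 1.95, 0.9                  # beta = 1/(2 tau_0), in (0, 1)

f_mir = f_pre - alpha * f_neq            # mirror distribution
f_post_convex = (1.0 - beta) * f_pre + beta * f_mir
f_post_direct = f_pre - alpha * beta * f_neq
```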
From the definition of the H-functional given in Eq. (4), the discrete
H-theorem can then be expressed as the local decrease of the H-functional
between the pre-collision and post-collision distributions,
$\displaystyle\begin{split}\Delta H&=H[\bm{f}^{post}]-H[\bm{f}^{pre}]\\\
&=H[(1-\beta)\bm{f}^{pre}+\beta\bm{f}^{mir}(\alpha)]-H[\bm{f}]\leq
0.\end{split}$ (9)
The equilibrium distribution function $\bm{f}^{eq}$ can be calculated as the
extremum of the convex H-functional introduced in Eq. (4), which has an
analytical solution for the D1Q3 lattice, whose tensorial product gives the
solution for the D2Q9 and D3Q27 lattices,
$\displaystyle\begin{split}f_{i}^{eq}&(\bm{x},t)=\\\
w_{i}\rho&\prod_{j=1}^{d}\left\\{\left(2-\sqrt{1+\frac{u_{j}^{2}}{c_{s}^{2}}}\right)\left[\frac{\frac{2u_{j}}{\sqrt{3}c_{s}}+\sqrt{1+\frac{u_{j}^{2}}{c_{s}^{2}}}}{1-\frac{u_{j}}{\sqrt{3}c_{s}}}\right]^{\frac{c_{ij}}{\sqrt{3}c_{s}}}\right\\},\end{split}$
(10)
where $d$ is the dimension of the DdQq lattice, $c_{s}$ is the speed of sound,
$w_{i}$ are the weights associated with each lattice direction and
$u_{j}(\bm{x},t)=\sum_{i=0}^{q-1}f_{i}(\bm{x},t)c_{ij}/\rho(\bm{x},t)$ is the
flow macroscopic velocity. It is important to remark that the first three
moments of the entropic equilibrium distribution Eq. (10) are exactly the same
as the ones coming from the 3rd order Hermite polynomial expansion of the
Maxwell-Boltzmann equilibrium distribution, namely,
$\displaystyle\begin{split}f^{\rm{eq}}_{i}(\rho,{\bm{u}})=w_{i}\rho&\bigg{(}1+\frac{{\bm{u}}\cdot{\bm{c}}_{i}}{c_{s}^{2}}+\frac{{\bm{u}}{\bm{u}}:{\bm{c}}_{i}{\bm{c}}_{i}-c_{s}^{2}|{\bm{u}}|^{2}}{2c_{s}^{4}}\\\
&+\frac{{\bm{u}}{\bm{u}}{\bm{u}}:\\!\cdot\,{\bm{c}}_{i}{\bm{c}}_{i}{\bm{c}}_{i}-3c_{s}^{2}|{\bm{u}}|^{2}{\bm{u}}\cdot{\bm{c}}_{i}}{6c_{s}^{6}}\bigg{)},\end{split}$
(11)
therefore, it allows the recovery of the same athermal weakly compressible NSE
as in the case of BGK-LBM Ansumali2003 . This recovery, obtained by performing
a Chapman-Enskog expansion at the second order in Kn, is also valid for ELBM,
as fluctuations of $\alpha$ around $2$ lead to higher-order terms in Kn,
absorbed in $\mathcal{O}(\text{Kn}^{2})$ tauzin2019implicit . In this work,
following the approach used in Karlin1999 , we calculate $\alpha(\bm{x},t)$ as
the solution of,
$H[\bm{f}^{pre}]=H[\bm{f}^{mir}],$ (12)
which can be estimated via the popular Newton-Raphson algorithm. In this way,
since $f_{i}^{post}$ is a convex combination of two distributions,
$f_{i}^{pre}$ and $f_{i}^{mir}$, of equal H-value, and since H is at the same
time a convex functional, the monotonic decrease of H is ensured. Let us stress
that, as it was shown in tauzin2019implicit , the ELBM equation cannot be
considered macroscopically as an approximation to the weakly compressible NSE
with the addition of a sole eddy viscosity term of the form of Eq. (7).
Indeed, this term appears in a macroscopic equation of motion that requires a
Chapman-Enskog expansion of third order in the Knudsen number, while the NSE
are recovered at the second order. As a consequence a number of extra third-
order terms are part of the implicit ELBM SGS model. This makes the actual
ELBM closure even more complex than a simple eddy viscosity, and in principle,
able to outperform standard methods. On top of this, as already discussed in
the introduction, the macroscopic approximation of the ELBM eddy viscosity,
Eq. (7), has itself a very interesting formulation, being similar to a
Smagorinsky eddy viscosity Smagorinsky1963 , but not positive-definite,
and therefore allowing events of energy backscatter, i.e. energy transfer from
the unresolved to the resolved scales. Indeed, while energy in 3d turbulence
is on average cascading from the large towards the small scales, in real flows
there are local events of energy going backward with non-trivial implications
on the statistical properties of the resolved scales.
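To make the procedure of Eqs. (4), (5), (10) and (12) concrete, the following Python sketch implements the full entropic collision for the D1Q3 lattice, the one-dimensional factor whose tensor product builds the D3Q27 stencil. It is an illustrative, unoptimized implementation in lattice units; the function names and the test state are ours:

```python
import numpy as np

# Illustrative entropic collision on the D1Q3 lattice (lattice units:
# dt = dx = 1, c_s^2 = 1/3). Names and test values are ours.
c = np.array([-1.0, 0.0, 1.0])                   # discrete velocities
w = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])  # lattice weights

def f_eq(rho, u):
    """Entropic equilibrium, Eq. (10) specialized to D1Q3."""
    s = np.sqrt(1.0 + 3.0 * u ** 2)
    return w * rho * (2.0 - s) * ((2.0 * u + s) / (1.0 - u)) ** c

def H(f):
    """Discrete H-function, Eq. (4)."""
    return np.sum(f * np.log(f / w))

def solve_alpha(f, fneq, a0=2.0, tol=1e-12):
    """Newton-Raphson root of H[f - alpha*fneq] = H[f], Eq. (12)."""
    a = a0
    for _ in range(50):
        fm = f - a * fneq
        g = H(fm) - H(f)
        dg = -np.sum(fneq * (np.log(fm / w) + 1.0))
        a -= g / dg
        if abs(g / dg) < tol:
            break
    return a

# Slightly out-of-equilibrium state: equilibrium plus a perturbation
# carrying zero mass and zero momentum.
rho, u = 1.0, 0.1
feq = f_eq(rho, u)
f = feq + 5e-3 * np.array([1.0, -2.0, 1.0])
fneq = f - feq

alpha = solve_alpha(f, fneq)            # close to 2 near equilibrium
beta = 0.9                              # beta = 1/(2 tau_0)
f_post = f - alpha * beta * fneq        # entropic collision, Eq. (5)
```

Near equilibrium the nontrivial root of Eq. (12) approaches $\alpha=2$, recovering the BGK limit, while the H-theorem of Eq. (9) manifests as a negative H-variation after the collision.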
### II.2 Large-Eddy Simulations
The LES governing equations can be directly derived by filtering each term of
the incompressible NSE, see Lesieur2005 ; Meneveau2000 . The filtering
operation consists of a convolution between the full velocity field and a
filter kernel. There are several choices that can be made for the filter
kernel, see Pope2001 ; in this work, we consider a “sharp spectral cutoff” in
Fourier space. This choice is convenient for two reasons. First, the sharp
cutoff produces a clear separation between resolved and sub-grid scales,
defined respectively as all scales above and below the cutoff wavenumber,
$k_{c}$. Second, it is a Galerkin projector that produces the same results
when operating multiple times on a field, which allows a clear scale
separation also in a dynamical sense, namely it projects on the same support
all terms (non-linear ones included) of the equations of motion, see
buzzicotti2018effect . In the following we briefly sketch the main operations
required to arrive at the LES equations. Given a filter kernel
$G_{\Delta}(\bm{x})$, the filtered velocity
$\overline{\boldsymbol{u}}(\bm{x},t)$ can be defined by the following
convolution operation,
$\displaystyle\begin{split}\overline{\boldsymbol{u}}(\bm{x},t)\equiv\int_{\Omega}d\bm{y}\
G_{\Delta}(|\bm{x}-\bm{y}|)\,\bm{u}(\bm{y},t)=\\\
=\sum_{\bm{k}\in\mathbb{Z}^{3}}\hat{G}_{\Delta}(|\bm{k}|)\boldsymbol{\hat{u}}(\bm{k},t)e^{i\bm{k}\bm{x}}\,\end{split}$
(13)
where $\hat{G}_{\Delta}$ is the Fourier transform of $G_{\Delta}$, and
$\Delta\sim\pi/k_{c}$, is the filter cutoff scale, see Pope2001 . Applying the
filtering operation to all terms in the Navier-Stokes equations we get,
$\partial_{t}\overline{\boldsymbol{u}}+\nabla\cdot(\overline{\overline{\boldsymbol{u}}\otimes\overline{\boldsymbol{u}}})=-\nabla\overline{{p}}-\nabla\cdot\overline{{\tau}}^{\Delta}(\bm{u},\bm{u})+\nu\Delta\overline{\boldsymbol{u}}\
.$ (14)
Here, we have introduced the SGS tensor,
$\overline{{\tau}}^{\Delta}(\bm{u},\bm{u})$, defined as,
$\overline{{\tau}}^{\Delta}_{ij}(\bm{u},\bm{u})=\overline{u_{i}u_{j}}-\overline{\overline{u}_{i}\overline{u}_{j}},$
(15)
which is the only term of Eq. (14) that depends on SGS scales. Hence, it is
the only term that needs to be replaced by a model to close the equations in
terms of the resolved-scales dynamics. From
$\overline{{\tau}}^{\Delta}(\bm{u},\bm{u})$, we can easily get the exact
formulation of the SGS energy transfer $\overline{{\Pi}}^{\Delta}$, namely,
the energy transfer across the filter scale produced by the real non-linear
coupling in the NSE. To do so we need to multiply with a scalar product each
term of Eq. (14) and the velocity field to obtain the filtered energy balance
equations;
$\frac{1}{2}\partial_{t}(\overline{u}_{i}\overline{u}_{i})+\partial_{j}A_{j}+{\overline{\Pi}}^{\Delta}_{L}=-\overline{{\Pi}}^{\Delta}.$
(16)
The terms on the lhs of Eq. (16) are defined respectively as
$\partial_{j}A_{j}=\partial_{j}\overline{u}_{i}(\overline{\overline{u}_{i}\overline{u}_{j}}+\overline{p}\delta_{ij}+\overline{{\tau}}^{\Delta}_{ij}-\frac{1}{2}\overline{u}_{i}\overline{u}_{j})$
and
${\overline{\Pi}}^{\Delta}_{L}=-\partial_{j}\overline{u}_{i}\left(\overline{\overline{u}_{i}\overline{u}_{j}}-\overline{u}_{i}\overline{u}_{j}\right)$.
ID | Eddy viscosities | Stress tensors | Energy transfers
---|---|---|---
DNS | — | $\overline{{\tau}}^{\Delta}_{ij}=\overline{u_{i}u_{j}}-\overline{\overline{u}_{i}\overline{u}_{j}}$ | $\overline{{\Pi}}^{\Delta}=-\partial_{j}\overline{u}_{i}\,\,\overline{{\tau}}^{\Delta}_{ij}(\bm{u},\bm{u})$
S-LES | $\nu^{S}_{e}=(C_{S}\Delta)^{2}\sqrt{2S_{ij}S_{ij}}$ | $\overline{{\tau}}^{\text{S}}_{ij}=-2\nu_{e}^{S}\bar{S}_{ij}$ | $\overline{{\Pi}}^{\text{S}}_{\text{LES}}=-\overline{{\tau}}^{\text{S}}_{ij}\overline{S}_{ij}$
E-LES | $\nu^{E}_{e}=(C_{E}\Delta)^{2}\frac{S_{\ell j}S_{ji}S_{i\ell}}{S_{ij}S_{ij}}$ | $\overline{{\tau}}^{\text{E}}_{ij}=-2\nu_{e}^{E}\bar{S}_{ij}$ | $\overline{{\Pi}}^{\text{E}}_{\text{LES}}=-\overline{{\tau}}^{\text{E}}_{ij}\overline{S}_{ij}$
ELBM | $\delta\nu_{\alpha}=c_{s}^{2}\frac{2-\alpha}{2\alpha\beta}\Delta_{t}$ | $\overline{{\tau}}^{\alpha}_{ij}=-2\delta\nu_{\alpha}\bar{S}_{ij}$ | $\overline{{\Pi}}^{\text{E}}_{\text{LBM}}=-\overline{{\tau}}^{\alpha}_{ij}\overline{S}_{ij}$
Table 1: Summary of definitions of eddy viscosities, sub-grid-scale stresses
and energy transfers. The ID column indicates the names of the corresponding
set of simulations, see sect. III, in particular S-LES and E-LES correspond to
the hydrodynamical LES respectively with Smagorinsky and macroscopic ELBM
model, while with ELBM we refer to the SGS energy transfer measured from the
mesoscopic quantities.
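For illustration, the a priori quantities of the DNS row in Table 1 can be computed with a few lines of spectral Python. The sketch below assumes a cubic, $2\pi$-periodic grid with integer wavenumbers; the function names and the toy resolution are ours, not those of the production codes used in this work:

```python
import numpy as np

# A priori sharp-cutoff filter (Eq. 13), SGS stress (Eq. 15) and SGS
# energy transfer (Eq. 17) on a cubic 2*pi-periodic grid.

def wavenumbers(n):
    k = np.fft.fftfreq(n) * n
    return np.meshgrid(k, k, k, indexing="ij")

def sharp_filter(field, k_c):
    """Keep only Fourier modes with |k| <= k_c (Galerkin projector)."""
    kx, ky, kz = wavenumbers(field.shape[0])
    mask = kx ** 2 + ky ** 2 + kz ** 2 <= k_c ** 2
    return np.real(np.fft.ifftn(np.fft.fftn(field) * mask))

def ddx(field, axis):
    """Spectral derivative along one axis."""
    kvec = wavenumbers(field.shape[0])[axis]
    return np.real(np.fft.ifftn(1j * kvec * np.fft.fftn(field)))

def sgs_energy_transfer(u, k_c):
    """Pi = -d_j(ub_i) * (filter(u_i u_j) - filter(ub_i ub_j)), Eq. (17)."""
    ub = [sharp_filter(ui, k_c) for ui in u]
    Pi = np.zeros_like(u[0])
    for i in range(3):
        for j in range(3):
            tau_ij = (sharp_filter(u[i] * u[j], k_c)
                      - sharp_filter(ub[i] * ub[j], k_c))   # Eq. (15)
            Pi -= ddx(ub[i], j) * tau_ij
    return Pi

# Toy usage: a random smooth field at n = 16 with cutoff k_c = 4.
rng = np.random.default_rng(0)
n, k_c = 16, 4
u = [sharp_filter(rng.standard_normal((n, n, n)), 7) for _ in range(3)]
Pi = sgs_energy_transfer(u, k_c)
```

A useful consistency check: if the field has no energy above the cutoff, the a priori SGS stress and transfer vanish, since the projector is idempotent.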
As shown in leonard1974energy ; buzzicotti2018effect , to get the correct
contribution to the energy transfer, it is important to distinguish between
${\overline{\Pi}}^{\Delta}_{L}$ and the actual SGS energy transfer
$\overline{{\Pi}}^{\Delta}$ because the former depends only on resolved-scales
quantities and does not contribute to the mean energy flux across the cutoff
scale. On the other hand,
$\overline{{\Pi}}^{\Delta}=-\partial_{j}\overline{u}_{i}\,\,\overline{{\tau}}^{\Delta}_{ij}(\bm{u},\bm{u})=-\partial_{j}\overline{u}_{i}\left(\overline{u_{i}u_{j}}-\overline{\overline{u}_{i}\overline{u}_{j}}\right),$
(17)
is the flux which depends on both the SGS and the resolved scales. In this
work, as already mentioned, we consider as possible closure the Smagorinsky
LES model (referred to as S-LES in the sequel),
$\overline{{\tau}}^{\text{S}}_{ij}(\overline{\boldsymbol{u}},\overline{\boldsymbol{u}})$,
which aims to model the deviatoric part of the stress tensor,
$\overline{{\tau}}^{\Delta}_{ij}-\frac{1}{3}\overline{{\tau}}^{\Delta}_{kk}\delta_{ij}\,\,\rightarrow\,\,\overline{{\tau}}^{\text{S}}_{ij}$,
as follows,
$\overline{{\tau}}^{\text{S}}_{ij}=-2\nu_{e}^{S}(\bm{x},t)\bar{S}_{ij},\quad\nu^{S}_{e}=(C_{S}\Delta)^{2}\sqrt{2S_{ij}S_{ij}}$
(18)
where,
$\bar{S}_{ij}=1/2(\partial_{j}\overline{u}_{i}+\partial_{i}\overline{u}_{j})$,
is the resolved scales strain-rate tensor, $\nu_{e}^{S}$ is the Smagorinsky
eddy viscosity depending on the filter cutoff scale $\Delta$ and the non-
dimensional factor $C_{S}$. From the definition of the macroscopic
approximation of the ELBM eddy viscosity in Eq. (7), we can now define the
hydrodynamic ELBM-LES model (called E-LES in the sequel),
$\overline{{\tau}}^{\text{E}}_{ij}=-2\nu_{e}^{E}(\bm{x},t)\bar{S}_{ij},\quad\nu^{E}_{e}=(C_{E}\Delta)^{2}\frac{S_{\ell
j}S_{ji}S_{i\ell}}{S_{ij}S_{ij}},$ (19)
where $C_{E}$ is the entropic dimensionless coefficient. Comparing the
definition of $\nu^{E}_{e}$ with $\delta\nu_{\alpha}^{M}$, in Eq. (7), we can
see that they both have the same functional dependency on the strain-rate
tensor, but different signs and multiplicative constants. In particular, the
minus sign of the E-LES closure has been absorbed in the definition of
$\overline{{\tau}}^{\text{E}}_{ij}$ to align with the Smagorinsky closure
formulation. Let us stress that the E-LES model has the same scaling as the
Smagorinsky model, proportional to the strain rate, but it is not positive
definite. From the above definitions of the S-LES and E-LES models the two
corresponding SGS energy transfers can be written as,
$\overline{{\Pi}}^{\text{S}}_{\text{LES}}=-\overline{{\tau}}^{\text{S}}_{ij}\overline{S}_{ij};\qquad\overline{{\Pi}}^{\text{E}}_{\text{LES}}=-\overline{{\tau}}^{\text{E}}_{ij}\overline{S}_{ij}.$
(20)
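A minimal sketch of the two closures, Eqs. (18)-(20), evaluated pointwise from a given resolved strain-rate tensor, may read as follows (Python; the constants are the ones quoted later in section III, while the helper names and the test strain rate are ours):

```python
import numpy as np

# Pointwise evaluation of the two eddy-viscosity closures, Eqs. (18)-(20).
# C_S = 0.16, C_E = 0.45 and Delta = pi/171 are the values used in
# section III. S is the resolved strain-rate tensor S[i][j] at a point
# (or arrays over the grid).

def eddy_viscosities(S, C_S=0.16, C_E=0.45, Delta=np.pi / 171.0):
    S2 = sum(S[i][j] * S[i][j] for i in range(3) for j in range(3))
    S3 = sum(S[l][j] * S[j][i] * S[i][l]
             for i in range(3) for j in range(3) for l in range(3))
    nu_S = (C_S * Delta) ** 2 * np.sqrt(2.0 * S2)   # Smagorinsky, >= 0
    nu_E = (C_E * Delta) ** 2 * S3 / S2             # entropic, either sign
    return nu_S, nu_E

def sgs_transfers(S, nu_S, nu_E):
    """Pi = -tau_ij S_ij = 2 nu_e S_ij S_ij, Eq. (20)."""
    S2 = sum(S[i][j] * S[i][j] for i in range(3) for j in range(3))
    return 2.0 * nu_S * S2, 2.0 * nu_E * S2

# Example: a traceless diagonal strain rate with negative cubic invariant
# yields nu_E < 0, i.e. a pointwise backscatter event, while nu_S > 0.
S = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -2.0]]
nu_S, nu_E = eddy_viscosities(S)
Pi_S, Pi_E = sgs_transfers(S, nu_S, nu_E)
```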
To compare the behaviour of the mesoscopic ELBM model with respect to the two
hydrodynamical approaches just introduced, we can approximate the SGS energy
transfer from the ELBM as,
$\overline{{\Pi}}^{\text{E}}_{\text{LBM}}=-2\delta\nu_{\alpha}\overline{S}_{ij}\overline{S}_{ij},$
(21)
where $\overline{{\Pi}}^{\text{E}}_{\text{LBM}}$ stands for ELBM SGS energy
transfer and
$\delta\nu_{\alpha}=c_{s}^{2}\Delta_{t}\frac{2-\alpha}{2\alpha\beta}$ is the
mesoscopic fluctuating viscosity depending on $\alpha(\bm{x},t)$. The strain
rate tensor can be measured from the ELBM data after the calculation of the
macroscopic velocity in terms of the mesoscopic ones, namely,
$\bm{u}(\bm{x},t)=\sum_{i=0}^{q-1}f_{i}(\bm{x},t)\bm{c}_{i}/\rho(\bm{x},t)$. A
summary of these SGS energy transfer definitions with their respective SGS
tensors, eddy viscosities is given in table 1. It is worth noting that the
ELBM in the limit of small Knudsen numbers is not equivalent to the entropic
LES. Indeed, as previously mentioned, the eddy viscosity term appears in the
Chapman-Enskog expansion at the third order in Kn along with various extra
terms that are not contained in the entropic LES formulation.
## III Numerical Simulations
All simulations performed in this work are intended to model HIT on a three
dimensional domain with periodic boundary conditions. In the following we
provide some details about the sets of simulations performed with the
different modelling techniques. Concerning the lattice Boltzmann simulation
with entropic formulation of the relaxation time, ELBM, we have conducted a
set of 3d simulations with a number of $512$ collocation points along each
spatial direction. To reach stationarity the flow is forced at large scales,
$1\leq|\bm{k}|\leq 2$ with a constant and isotropic forcing. More precisely we
have used in all simulations the same forcing, defined in Fourier space with
constant phases and amplitudes, added isotropically to all wave-vectors at
large-scales. To ensure incompressibility the forcing is projected on its
solenoidal component. The ELBM simulation uses a lattice with 27 discrete
velocities (see Fig. 1), the D3Q27 Succi2001 ; Wolf-Gladrow2000 ; Kruger2017 .
The spectral forcing is implemented using the exact-difference method forcing
scheme Kuperstokh2004 for a relaxation time $\tau_{0}=0.5003$ corresponding
to $\beta\approx 0.9994$.
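The lattice-unit bookkeeping behind these numbers follows directly from Eq. (3), assuming $\Delta_t=1$ and $c_s^2=1/3$ as is standard for the D3Q27 lattice:

```python
# Lattice-unit check of Eq. (3) for the values quoted above,
# assuming Delta_t = 1 and c_s^2 = 1/3 (standard for D3Q27).
c_s2, dt = 1.0 / 3.0, 1.0
tau0 = 0.5003
beta = 1.0 / (2.0 * tau0)          # relaxation parameter of Eq. (5)
nu0 = c_s2 * dt * (tau0 - 0.5)     # kinematic viscosity, Eq. (3)
```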
Figure 1: Schematic representation of the D3Q27 lattice stencil used for the
ELBM simulation.
Considering the LES we have performed two sets of pseudo-spectral fully
dealiased simulations on a domain $\Omega=[0,2\pi]^{3}$ with periodic boundary
conditions both at the resolution of $512^{3}$ grid points. The first LES is
equipped with the Smagorinsky model (S-LES), see Eq. (18), and a second with
the macroscopic formulation of the entropic eddy viscosity, see Eq. (19),
(E-LES). As discussed above, all simulations are forced with the same
isotropic constant forcing mechanisms acting only on the larger system scales
($1\leq|\bm{k}|\leq 2$). In the expression of Smagorinsky eddy viscosity Eq.
(18), we use the standard value of $C_{S}=0.16$, while for the entropic eddy
viscosity Eq. (19), we use $C_{E}=0.45$; these values were found to be optimal for the
best compromise between the maximization of the inertial range extension and the
minimization of spurious effects produced by the model buzzicotti2020synch .
For both we have $\Delta=\pi/k_{max}\approx 0.0184$ with $k_{max}=171$, which
comes from the $2/3$ rule for the dealiasing projection Patterson71 .
Additionally, as a reference, we have run a pseudo-spectral fully resolved DNS
of the NSE with the same forcing scheme on the same 3d domain
$\Omega=[0,2\pi]^{3}$, using a number of $512^{3}$ (DNSx1) and $1024^{3}$
(DNSx2) collocation points. The resolution in both DNS is kept such that
$\eta/dx\simeq 0.7$, where $dx$ is the grid spacing and
$\eta=(\nu^{3}/\varepsilon)^{1/4}$ is the Kolmogorov microscale Borue95 ,
with $\varepsilon$ denoting the mean energy dissipation rate. In order to
create ensembles of statistically independent data all the ELBM, LES and DNS
are sampled on time intervals of one large-scale eddy turnover time after
reaching a statistically stationary state. It is worth mentioning that both
the ELBM and E-LES simulations remain stable even though their modeling terms
are not purely dissipative, since their eddy viscosities are not positive definite.
In Fig. 2 we show the time-averaged energy spectra, $E(k)$, for all
simulations. It is visible that all modelled simulations have an extended
inertial range with respect to DNS at the same resolution (DNSx1) with an
inertial range slope close to the Kolmogorov prediction of $k^{-5/3}$
Frisch1995 . Let us stress that the ELBM spectrum reaches a maximum wavenumber
higher than the pseudo-spectral data; this is because in the ELBM simulations
there is no dealiasing operation. Nevertheless, also in the ELBM case the
spectrum loses the Kolmogorov inertial range scaling at wavenumbers larger than
the pseudo-spectral dealiasing cutoff.
Figure 2: Time-averaged spectra for the conducted simulations at $512^{3}$
grid points, measured from the mesoscopic ELBM simulation (empty squares, red
color), the hydrodynamical LES with entropic model (E-LES, empty circles) and
with Smagorinsky model (S-LES, empty triangles). The energy spectra from fully
resolved DNS at $512^{3}$ (DNSx1) and $1024^{3}$ (DNSx2) are presented
respectively with full triangles and full circles. The curves are shifted
vertically for the sake of data presentation. The Kolmogorov predicted slope
of $k^{-5/3}$ is given as a reference Frisch1995 .
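The spectra of Fig. 2 are shell averages over the wavenumber magnitude; a minimal Python sketch of such a diagnostic (assuming a cubic periodic grid with integer wavenumbers; the function name and toy resolution are ours) is:

```python
import numpy as np

# Shell-averaged energy spectrum E(k) for a periodic velocity field.

def energy_spectrum(u):
    n = u[0].shape[0]
    uh = [np.fft.fftn(ui) / n ** 3 for ui in u]   # normalized transforms
    k1 = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    shells = np.rint(np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)).astype(int)
    e_dens = 0.5 * sum(np.abs(uih) ** 2 for uih in uh)
    return np.bincount(shells.ravel(), weights=e_dens.ravel())

# By Parseval's theorem, summing E(k) over all shells recovers the mean
# kinetic energy of the field.
rng = np.random.default_rng(1)
n = 8
u = [rng.standard_normal((n, n, n)) for _ in range(3)]
E = energy_spectrum(u)
```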
## IV SGS energy transfer analysis
In this section we provide a statistical comparison of the SGS energy
transfers measured in the modelled simulations together with the original SGS
transfer measured a priori from higher resolution DNS, see Table 1. Before
starting the comparison between macroscopic and mesoscopic simulations, we
analyse the quality of the approximation made in the Chapman-Enskog expansion
to obtain the macroscopic formulation of the ELBM eddy viscosity
Malaspinas2008 . In this direction, we have computed the SGS energy transfer
defined in Eq. (21) using the two different definitions of the fluctuating
viscosity. Namely, either the correct definition, $\delta\nu_{\alpha}$,
depending on the entropic parameter or its third order expansion in the limit
of small Kn, $\delta\nu_{\alpha}^{M}$, see Eq. (7). Their statistical
comparison is shown in Fig. 3, where on the left panel we show the probability
density functions (PDF) measured from the ELBM SGS energy transfer using the
two different formulations. Here we can see that the PDFs once re-scaled by
their standard deviations have almost an identical shape. From the center and
right panels of the same figure, we can qualitatively see two visualizations
of the SGS energy transfers measured by selecting the same plane of the
velocity field. From these visualizations we can appreciate that there is a
very high spatial correlation between them. These results suggest that
neglecting the extra third-order terms coming from the Chapman-Enskog
expansion yields a good approximation of the ELBM eddy viscosity.
Figure 3: Standardized PDFs of the ELBM SGS energy transfers measured from the
correct fluctuating viscosity, $\delta\nu_{\alpha}$, and its approximated
formulation, $\delta\nu_{\alpha}^{M}$ (left panel). Visualization of a plane
of ELBM SGS energy transfer measured using the approximated (center panels)
and correct definitions of eddy viscosity (right panel).
Let us now analyse the statistics of the SGS energy transfers, comparing them
also with the statistics of the real SGS energy transfer, see Eq. (17),
measured a priori from fully resolved DNSx2. To obtain the a priori
$\overline{{\Pi}}^{\Delta}$ we filter the velocity field with a sharp
projector in Fourier space with a cutoff at the maximum wavenumber allowed in
the modelled simulations, which corresponds to the dealiasing cutoff
($k_{max}=171$). As is known, the presence of a forward energy cascade, as in 3d
turbulence, is reflected in a skewed PDF of the a priori
$\overline{{\Pi}}^{\Delta}$, see gotoh2005statistics ; buzzicotti2018energy ;
buzzicotti2018effect , and Fig. 4. In turn, the negative tail describes the
presence of intense backscattering events with fluctuations up to two orders
of magnitude larger than the standard deviation. The most remarkable
difference between the different models considered here is that, as expected,
the ELBM and E-LES produce backscatter events, while the Smagorinsky model is
positive definite in its energy formulation and produces a zero tail in the
negative region. Let us notice that the mesoscopic ELBM model shows qualitatively
a better overlap with respect to the DNS data.
Figure 4: Standardized PDF of the SGS energy transfer measured from the a
posteriori data obtained via the ELBM simulations (empty squares, red color),
the LES with hydrodynamical entropic closure (E-LES, empty circles) and from
LES with Smagorinsky model (S-LES, empty triangles). For comparison the PDF
measured for the real SGS energy transfer measured a priori by filtering data
from higher resolution simulations is presented (DNSx2, full circles).
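The standardization used for the PDFs in Figs. 3 and 4, i.e. rescaling each field to zero mean and unit variance before histogramming, can be sketched as follows (Python; the function name and placeholder data are ours):

```python
import numpy as np

# Standardized PDF: rescale the field to zero mean and unit variance
# before histogramming, as done for the SGS energy transfers.

def standardized_pdf(x, bins=100):
    z = (x - x.mean()) / x.std()
    pdf, edges = np.histogram(z, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, pdf

rng = np.random.default_rng(3)
x = rng.normal(2.0, 5.0, 50000)        # placeholder data
centers, pdf = standardized_pdf(x)
```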
## V Inertial range statistics
In this last section we analyse the inertial range statistics by measuring the
longitudinal velocity increments defined as
$\delta_{r}u=(\bm{u}(\bm{x}+\bm{r})-\bm{u}(\bm{x}))\cdot\bm{r}/r$. In this way
we can quantify the effects produced by the different models at different
scales, $r=|\bm{r}|$, hence, we can get an accurate estimation of the quality
of the models in capturing the correct intermittent properties of the NSE. We
study the scaling properties of the longitudinal structure functions (SF)
defined as,
$S_{p}(r)\equiv\langle[\delta_{r}u]^{p}\rangle$ (22)
where the angular brackets indicate the ensemble average, which, assuming
spatio-temporal ergodicity, can be evaluated by averaging over space and time,
$\langle(...)\rangle=\frac{1}{V}\frac{1}{T}\int_{V}\int_{t_{0}}^{t_{0}+T}(...)\,d\bm{x}dt$.
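As an illustration, the space average entering Eq. (22) and the local slopes discussed below can be computed for a periodic one-dimensional signal as follows (Python sketch; the roll-based increments and function names are ours):

```python
import numpy as np

# Longitudinal structure functions, Eq. (22), and their local scaling
# exponents for a periodic 1d signal, with increments taken via a
# circular shift.

def structure_functions(u, p_list, r_list):
    out = {}
    for r in r_list:
        du = np.roll(u, -r) - u                  # delta_r u, periodic
        out[r] = {p: np.mean(du ** p) for p in p_list}
    return out

def local_slope(S, p, r_list):
    """xi_p(r) = d log S_p / d log r, by finite differences in log r."""
    r = np.array(r_list, dtype=float)
    s = np.array([S[ri][p] for ri in r_list])
    return np.gradient(np.log(np.abs(s)), np.log(r))

# Smooth test signal: a single sine mode, for which the second-order
# local slope tends to 2 at small separations.
u = np.sin(2.0 * np.pi * np.arange(64) / 64.0)
S = structure_functions(u, p_list=[2, 4], r_list=[1, 2, 4])
xi2 = local_slope(S, 2, [1, 2, 4])
```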
In the limit of large Reynolds number, where $r$ can be taken arbitrarily
small, the structure functions follow a power-law scaling behavior,
$S_{p}(r)\sim r^{\xi_{p}}$ Frisch1995 , with a $p$-th order scaling exponent
that, according to the phenomenological theory of Kolmogorov (K41)
Kolmogorov1941 , is $\xi_{p}=p/3$. Nevertheless, both experimental and
numerical studies have highlighted, as a by-product of intermittency, the presence
of anomalous exponents in turbulent data, with important deviations from the
K41 predicted values Gotoh2002 ; sinhuber2017dissipative ; benzi2010inertial ;
biferale2019self . On the other hand, getting an accurate measurement of these
exponents is extremely difficult. The reason is twofold: first, it requires
a large scaling range (very well resolved simulations) and, second, it
simultaneously requires a large statistical ensemble. The first question
we ask here is connected to the first of the aforementioned problems, namely
whether those models are able to extend the length of the inertial range of
scales in our simulations. To answer this, in the left column of Fig. 5 we show
the 2nd and 4th order structure functions measured from all modelled and fully
resolved simulations. In the right column of the same figure we show the local
scaling exponents,
$\xi_{p}(r)=\frac{d\,\log S_{p}(r)}{d\,\log(r)}\,,$
measured from the structure functions shown on the left side. From these plots
it is evident that all modelled simulations present an extension of the
inertial range with respect to the DNS at the same resolution, similar to the
inertial range observed in the simulations with twice as many grid points
along each spatial direction, DNSx2.
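As a minimal sketch (ours, not from the paper) of the local-slope measurement,
the logarithmic derivative below recovers a constant exponent when fed an
exact power law; the exponent value 0.70 and the grid are arbitrary
illustrations.

```python
import numpy as np

def local_slope(r, S):
    """Local scaling exponent xi_p(r) = d log S_p(r) / d log r,
    computed by finite differences on the logarithmic variables."""
    return np.gradient(np.log(S), np.log(r))

r = np.logspace(-2, 0, 64)       # separations spanning two decades
S2 = r ** 0.70                   # synthetic pure power law
xi2 = local_slope(r, S2)         # flat profile equal to 0.70
```

On real data, a plateau of $\xi_{p}(r)$ over a range of $r$ identifies the
inertial range and its scaling exponent.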
Figure 5: Second- and fourth-order longitudinal structure functions (left
panels) and corresponding local slopes (right panels) for the conducted
simulations at $512^{3}$ grid points, using ELBM (empty squares, red color),
E-LES with the entropic inspired model (empty circles), S-LES with Smagorinsky
model (empty triangles), DNSx1 at $512^{3}$ grid points (full triangles) and
DNSx2 at $1024^{3}$ grid points (full circles). The straight line corresponds to the
K41 prediction in the inertial range ($\xi_{p}=p/3$), while the dashed line
corresponds to the intermittent measure as reported from the literature
Gotoh2002 ; biferale2019self .
This means that the models allow saving a factor of $8$ ($=2^{3}$, from
doubling the resolution along each spatial direction) in the number of degrees
of freedom required in the simulations. Considering that the time-step also
needs to be reduced accordingly in the higher-resolution DNS to resolve the
smaller time-scales, the modelled simulations are more than an order of
magnitude cheaper than a DNS with the same inertial
range. On the other hand, by measuring the local scaling exponents as reported
on the right column of Fig. 5, we can see that the inertial range extension
produced by the model is not as accurate as the fully resolved DNS. This is a
problem if we want the model to correctly describe intermittency. Let us
stress that the correction to the K41 prediction at the level of the second
and fourth order exponents is very small, hence a model needs to be extremely
accurate to correctly capture the intermittent scaling biferale2019self . In
both right panels of Fig. 5 we report with a solid line the $\xi_{p}$ value of
the K41 prediction, and with dashed lines the values measured from DNS as
reported in Gotoh2002 ; sinhuber2017dissipative . To highlight intermittency
we can look at the ratio between SF of different orders. In particular, any
systematic non-linear dependency of $\xi_{p}$ on $p$ will introduce a scale
dependency in the Kurtosis, defined as the dimensionless ratio between the
fourth- and second-order SF,
$K(r)=\frac{S_{4}(r)}{\left[S_{2}(r)\right]^{2}}.$ (23)
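A quick numerical sanity check of Eq. (23) (our sketch, not from the paper):
for increments of a Gaussian signal the Kurtosis is scale-independent and
equal to 3.

```python
import numpy as np

def kurtosis(u, r):
    """Scale-dependent Kurtosis K(r) = S_4(r) / S_2(r)^2 of a 1D periodic signal."""
    du = np.roll(u, -r) - u
    return np.mean(du ** 4) / np.mean(du ** 2) ** 2

rng = np.random.default_rng(0)
u = rng.standard_normal(1_000_000)     # Gaussian signal: K(r) ~ 3 at every scale
K = kurtosis(u, r=10)
```

Any systematic growth of $K(r)$ with decreasing $r$, as in Fig. 6, therefore
signals a departure from Gaussian, self-similar statistics.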
In Fig. 6 we see that in all simulations the increments are Gaussian at large
scales ($K\sim 3$), while the Kurtosis quickly increases as the scale
decreases. This observation shows the non-self-similarity of the statistics in
all data. It is interesting to observe that at this level the inertial range
of scales observed in the DNSx2 simulation is well captured by all closures
down to the dissipative scale $r\approx 0.1$, where deviations between the
DNSs and the models arise.
Figure 6: Kurtosis of the velocity increment for the simulations at $512^{3}$
grid points, using ELBM (empty squares, red color), E-LES with the entropic
inspired model (empty circles), S-LES with Smagorinsky model (empty
triangles), DNSx1 at $512^{3}$ grid points (full triangles) and DNSx2 at
resolution of $1024^{3}$ (full circles). The dashed horizontal line at 3
corresponds to the value of a Gaussian distribution.
Going further, we measure the most refined quantity we can observe to quantify
intermittency, namely the local scaling exponent in Extended Self-Similarity
Benzi1993 ,
$\zeta(r)=\frac{\xi_{4}(r)}{\xi_{2}(r)}.$ (24)
A linear (self-similar) K41 behavior would recover, in the inertial range, a
plateau value of $\zeta$ equal to $2$. The correction accounting for
intermittency, measured in both experimental and DNS data, gives the plateau
for $\zeta$ at the value of $1.86$ Gotoh2002 ; sinhuber2017dissipative ;
biferale2019self . As we can see in Fig. 7, all models show deviations from
the K41 self-similar prediction, meaning that they all capture the
non-self-similarity of the turbulent inertial range. However, none of them is
accurate enough to extend the length of the displayed plateau, hence to
improve the prediction obtainable from the DNS at the same resolution.
Indeed, if we compare the modelled data with the fully resolved simulations,
we can see that the former do not show any flat plateau in the inertial range,
so we cannot precisely estimate the correction to K41 of the
structure-function scaling exponents.
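The ESS exponent of Eq. (24) can be extracted as the local slope of $S_{4}$
against $S_{2}$. The sketch below (our illustration with synthetic data)
recovers the K41 plateau $\zeta=2$ for a self-similar field; intermittent data
would instead give the value $\approx 1.86$ discussed above.

```python
import numpy as np

def ess_exponent(S2, S4):
    """Local ESS exponent zeta(r) = d log S_4 / d log S_2 (Benzi et al., 1993)."""
    return np.gradient(np.log(S4), np.log(S2))

r = np.logspace(-2, 0, 64)
S2 = r ** (2.0 / 3.0)            # self-similar K41 scaling, xi_2 = 2/3
S4 = r ** (4.0 / 3.0)            # xi_4 = 4/3, hence zeta = xi_4/xi_2 = 2
zeta = ess_exponent(S2, S4)
```

Plotting $S_{4}$ against $S_{2}$ rather than against $r$ is precisely what
extends the apparent scaling range in the ESS procedure.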
Figure 7: Extended Self-Similarity, $\zeta(r)$, for the simulations at
$512^{3}$ grid points, using ELBM (empty squares, red color), the LES with
entropic inspired model E-LES (empty circles), the LES with Smagorinsky model
S-LES (empty triangles), the fully resolved DNSx1 (full triangles) and the
fully resolved DNS at $1024^{3}$ grid points DNSx2 (full circles). The
straight line corresponds to the K41 prediction in the inertial range equal to
$2$, while the dashed line corresponds to the intermittent measure as reported
from numerical Gotoh2002 and experimental data sinhuber2017dissipative .
It is interesting to point out that the models show a very similar accuracy up
to this last analysis. This suggests that the energy backscatter events
introduced by the entropic closures are not accurate enough to improve
quantitatively the statistics of the Smagorinsky model. This result supports
the observation that intermittency in turbulent flows comes as a result of
highly non-trivial correlations among all degrees of freedom at different
scales buzzicotti2016phase ; buzzicotti2016lagrangian ; lanotte2015turbulence
. The observation that all models yield very similar inertial-range statistics
agrees with the common property of these models of having the same scaling,
proportional to the strain-rate tensor.
## VI Conclusions
In this paper, we performed a quantitative assessment of the ELBM capabilities
in the modelling of 3d homogeneous isotropic turbulent flows by comparing the
inertial-range statistics of ELBM data with those of high-resolution DNS of
the NSE. We also compared the quality of ELBM with that of the hydrodynamical
Smagorinsky model, popular in the realm of LES. Furthermore, in this work we
have proposed and investigated, for the first time, a new hydrodynamical
closure for LES inspired by the macroscopic approximation of the ELBM model
introduced by Malaspinas2008 . We found that ELBM extends the length of the
inertial range with respect to the fully resolved DNS, allowing a reduction of
the computational cost by an order of magnitude while preserving the
simulation stability. Results showed that, in both the macroscopic and
mesoscopic formulations, ELBM is able to reproduce an inertial range with a
non-self-similar dynamics. ELBM captures the correct deviations from the
large-scale Gaussian statistics observed in the fully resolved DNS, with an
accuracy comparable to the one produced by the Smagorinsky model. From the
measurement of the structure-function scaling exponents in ESS, we have
highlighted the limitations of these models in capturing with high accuracy
the turbulent corrections to the Kolmogorov scaling. In this context, we found
that the modelled data do not produce the same inertial-range plateau as
observed in the fully resolved DNS and in experiments. To conclude, we found
that ELBM suffers in the modeling of extreme and rare intermittent
fluctuations, while it is very efficient in modeling the mean properties of 3d
turbulence. This makes ELBM a good candidate for the modeling of 3d turbulent
flows in complex geometries.
## Acknowledgement
The authors thank Prof. Luca Biferale for inspiration and many useful
discussions. This work was also supported by the European Union's Framework
Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie
Skłodowska-Curie grant [grant number 642069] and by the European Research
Council under the ERC grant [grant number 339032]. The authors would like to
thank Prof. Dirk Pleiter as well as the Juelich Supercomputing Center for
providing access to the JURON cluster.
## References
* [1] O. Malaspinas, M. Deville, and B. Chopard. Towards a physical interpretation of the entropic Lattice Boltzmann method. Physical Review E, 78(6):066705, 2008.
* [2] Horia Dumitrescu and Vladimir Cardos. Rotational effects on the boundary-layer flow in wind turbines. AIAA journal, 42(2):408–411, 2004.
* [3] P. A. Davidson. Turbulence: An Introduction for Scientists and Engineers. Oxford University Press, 2015.
* [4] S. B. Pope. Turbulent Flows. Cambridge University press, 2000.
* [5] Adrian E Gill. Atmosphere—ocean dynamics. Elsevier, 2016.
* [6] Alexandros Alexakis and Luca Biferale. Cascades and transitions in turbulent flows. Physics Reports, 767:1–101, 2018.
* [7] Sydney A Barnes. An assessment of the rotation rates of the host stars of extrasolar planets. The Astrophysical Journal, 561(2):1095, 2001.
* [8] U. Frisch. Turbulence : the legacy of A.N. Kolmogorov. Cambridge University Press, 1995.
* [9] Rodney O Fox. Computational models for turbulent reacting flows. Cambridge university press, 2003.
* [10] Takashi Ishihara, Toshiyuki Gotoh, and Yukio Kaneda. Study of high–reynolds number isotropic turbulence by direct numerical simulation. Annual Review of Fluid Mechanics, 41:165–180, 2009.
* [11] PK Yeung, XM Zhai, and Katepalli R Sreenivasan. Extreme events in computational turbulence. Proceedings of the National Academy of Sciences, 112(41):12633–12638, 2015.
* [12] R Benzi, L Biferale, R Fisher, D Lamb, and F Toschi. Inertial range Eulerian and Lagrangian statistics from numerical simulations of isotropic turbulence. 2010.
* [13] Kartik P Iyer, Katepalli R Sreenivasan, and PK Yeung. Reynolds number scaling of velocity increments in isotropic turbulence. Physical Review E, 95(2):021101, 2017.
* [14] C. Meneveau and J. Katz. Scale-invariance and turbulence models for large-eddy simulation. Annual Review of Fluid Mechanics, 32(1):1–32, 2000.
* [15] Olga Filippova, Sauro Succi, Francesco Mazzocco, Cinzio Arrighetti, Gino Bella, and Dieter Hänel. Multiscale lattice boltzmann schemes with turbulence modeling. Journal of Computational Physics, 170(2):812–829, 2001.
* [16] Y.-H. Dong and P. Sagaut. A study of time correlations in Lattice Boltzmann-based large-eddy simulation of isotropic turbulence. Physics of Fluids, 20(3):035105, 2008.
* [17] Y.-H. Dong, P. Sagaut, and S. Marie. Inertial consistent subgrid model for large-eddy simulation based on the Lattice Boltzmann method. Physics of Fluids, 20(3):035104, 2008.
* [18] Tomas Bohr, Mogens H Jensen, Giovanni Paladin, and Angelo Vulpiani. Dynamical systems approach to turbulence. Cambridge University Press, 2005.
* [19] M. Lesieur, O. Métais, and P. Comte. Large-eddy simulations of turbulence. Cambridge University Press, 2005.
* [20] Sybren Ruurds De Groot and Peter Mazur. Non-equilibrium thermodynamics. Courier Corporation, 2013.
* [21] Roberto Benzi, Sauro Succi, and Massimo Vergassola. The lattice boltzmann equation: theory and applications. Physics Reports, 222(3):145–197, 1992.
* [22] S. Succi. The Lattice Boltzmann Equation for Fluid Dynamics and Beyond. Oxford University Press, 2001.
* [23] Uriel Frisch, Brosl Hasslacher, and Yves Pomeau. Lattice-gas automata for the navier-stokes equation. Physical review letters, 56(14):1505, 1986.
* [24] D. A. Wolf-Gladrow. Lattice-gas cellular automata and Lattice Boltzmann models : an introduction. Springer, 2000.
* [25] Yue-Hong Qian, S Succi, and SA Orszag. Recent advances in lattice boltzmann computing. In Annual reviews of computational physics III, pages 195–242. World Scientific, 1995.
* [26] Shiyi Chen and Gary D Doolen. Lattice boltzmann method for fluid flows. Annual review of fluid mechanics, 30(1):329–364, 1998.
* [27] P. L. Bhatnagar, E. P. Gross, and M. Krook. A model for collision processes in gases. I. Small amplitude processes in charged and neutral one-component systems. Phys. Rev., 94:511–525, 1954.
* [28] G. Tauzin, L. Biferale, M. Sbragaglia, A. Gupta, F. Toschi, A. Bartel, and M. Ehrhardt. A numerical tool for the study of the hydrodynamic recovery of the Lattice Boltzmann method. Computers & Fluids, 172:241 – 250, 2018.
* [29] Jack GM Eggels. Direct and large-eddy simulation of turbulent fluid flow using the lattice-boltzmann scheme. International journal of heat and fluid flow, 17(3):307–323, 1996\.
* [30] Huidan Yu, Li-Shi Luo, and Sharath S Girimaji. Les of turbulent square jet flow using an mrt lattice boltzmann model. Computers & Fluids, 35(8-9):957–965, 2006.
* [31] Pierre Sagaut. Toward advanced subgrid models for lattice-boltzmann-based large-eddy simulation: Theoretical formulations. Computers & Mathematics with Applications, 59(7):2194–2199, 2010\.
* [32] Orestis Malaspinas and Pierre Sagaut. Consistent subgrid scale modelling for lattice boltzmann methods. Journal of Fluid Mechanics, 700:514–542, 2012.
* [33] I. V. Karlin, A. Ferrante, and H. C. Öttinger. Perfect entropy functions of the Lattice Boltzmann method. Europhysics Letters (EPL), 47(2):182–188, 1999.
* [34] Santosh Ansumali and Iliya V Karlin. Single relaxation time model for entropic lattice boltzmann methods. Physical Review E, 65(5):056312, 2002.
* [35] S. Ansumali, I. V. Karlin, and H. C. Öttinger. Minimal entropic kinetic models for hydrodynamics. Europhysics Letters (EPL), 63(6):798–804, 2003.
* [36] I. V. Karlin, F. Bösch, S. S. Chikatamarla, and S. Succi. Entropy-assisted computing of low-dissipative systems. Entropy, 17(12):8099–8110, 2015.
* [37] Benedikt Dorschner, Fabian Bösch, Shyam S Chikatamarla, Konstantinos Boulouchos, and Ilya V Karlin. Entropic multi-relaxation time lattice boltzmann model for complex flows. Journal of Fluid Mechanics, 801:623, 2016.
* [38] Benedikt Dorschner, Shyam S Chikatamarla, and Iliya V Karlin. Transitional flows with the entropic lattice boltzmann method. Journal of Fluid Mechanics, 824:388, 2017.
* [39] Benedikt Dorschner, Shyam S Chikatamarla, and Ilya V Karlin. Fluid-structure interaction with the entropic lattice boltzmann method. Physical Review E, 97(2):023305, 2018.
* [40] Ilya V Karlin, Fabian Bösch, and SS Chikatamarla. Gibbs’ principle for the lattice-kinetic theory of fluid dynamics. Physical Review E, 90(3):031302, 2014.
* [41] Fabian Bösch, Shyam S Chikatamarla, and Ilya V Karlin. Entropic multirelaxation lattice boltzmann models for turbulent flows. Physical Review E, 92(4):043309, 2015.
* [42] Guillaume Tauzin. Implicit Sub-Grid Scale Modeling within the Entropic Lattice Boltzmann Method in Homogeneous Isotropic Turbulence. PhD thesis, Universität Wuppertal, Fakultät für Mathematik und Naturwissenschaften …, 2019.
* [43] J. Smagorinsky. General circulation experiments with the primitive equations. Monthly Weather Review, 91(3):99–164, 1963.
* [44] Luca Biferale, Fabio Bonaccorso, Michele Buzzicotti, and Kartik P Iyer. Self-similar subgrid-scale models for inertial range turbulence and accurate measurements of intermittency. Physical review letters, 123(1):014503, 2019.
* [45] A. Arnèodo, R. Benzi, J. Berg, L. Biferale, E. Bodenschatz, A. Busse, E. Calzavarini, B. Castaing, M. Cencini, L. Chevillard, R. T. Fisher, R. Grauer, H. Homann, D. Lamb, A. S. Lanotte, E. Lévèque, B. Lüthi, J. Mann, N. Mordant, W.-C. Müller, S. Ott, N. T. Ouellette, J.-F. Pinton, S. B. Pope, S. G. Roux, F. Toschi, H. Xu, and P. K. Yeung. Universal intermittent properties of particle trajectories in highly turbulent flows. Phys. Rev. Lett., 100:254504, 2008.
* [46] M Buzzicotti, M Linkmann, H Aluie, L Biferale, J Brasseur, and C Meneveau. Effect of filter type on the statistics of energy transfer between resolved and subfilter scales from a-priori analysis of direct numerical simulations of isotropic turbulence. Journal of Turbulence, 19(2):167–197, 2018.
* [47] A Leonard et al. Energy cascade in large-eddy simulations of turbulent fluid flows. Adv. Geophys. A, 18(A):237–248, 1974.
* [48] T. Krüger, H. Kusumaatmaja, A. Kuzmin, O. Shardt, G. Silva, and E. M. Viggen. The Lattice Boltzmann Method. Graduate Texts in Physics. Springer International Publishing, Cham, 2017\.
* [49] A. L. Kuperstokh. New method of incorporating a body force term into the Lattice Boltzmann equation. In Proceedings of the 5th International EDH Workshop, pages 241–246, Poitiers, France, 2004.
* [50] Michele Buzzicotti and Patricio Clark Di Leoni. Synchronizing subgrid scale models of turbulence to data. Physics of Fluids, 32(12):125116, 2020.
* [51] G. S. Patterson and S. A. Orszag. Spectral Calculations of Isotropic Turbulence: Efficient Removal of Aliasing Interactions. Phys. Fluids, 14:2538–2541, 1971.
* [52] V. Borue and S. A. Orszag. Self-similar decay of three-dimensional homogeneous turbulence with hyperviscosity. Phys. Rev. E, 51:2859(R), 1995.
* [53] Toshiyuki Gotoh and Takeshi Watanabe. Statistics of transfer fluxes of the kinetic energy and scalar variance. Journal of Turbulence, (6):N33, 2005.
* [54] Michele Buzzicotti, Hussein Aluie, Luca Biferale, and Moritz Linkmann. Energy transfer in turbulence under rotation. Physical Review Fluids, 3(3):034802, 2018.
* [55] A. N. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. In Dokl. Akad. Nauk SSSR, volume 30, pages 301–305. JSTOR, 1941.
* [56] T. Gotoh, D. Fukayama, and T. Nakano. Velocity field statistics in homogeneous steady turbulence obtained using a high-resolution direct numerical simulation. Physics of Fluids (1994-present), 14(3):1065–1081, 2002.
* [57] Michael Sinhuber, Gregory P Bewley, and Eberhard Bodenschatz. Dissipative effects on inertial-range statistics at high reynolds numbers. Physical review letters, 119(13):134502, 2017.
* [58] R. Benzi, S. Ciliberto, R. Tripiccione, C. Baudet, F. Massaioli, and S. Succi. Extended self-similarity in turbulent flows. Phys. Rev. E, 48:R29–R32, 1993.
* [59] Michele Buzzicotti, Brendan P Murray, Luca Biferale, and Miguel D Bustamante. Phase and precession evolution in the burgers equation. The European Physical Journal E, 39(3):1–9, 2016.
* [60] Michele Buzzicotti, Akshay Bhatnagar, Luca Biferale, Alessandra S Lanotte, and Samriddhi Sankar Ray. Lagrangian statistics for navier–stokes turbulence under fourier-mode reduction: fractal and homogeneous decimations. New Journal of Physics, 18(11):113047, 2016.
* [61] Alessandra S Lanotte, Roberto Benzi, Shiva K Malapaka, Federico Toschi, and Luca Biferale. Turbulence on a fractal fourier set. Physical review letters, 115(26):264502, 2015.
# High-frequency limit of spectroscopy
Vladimir U. Nazarov<EMAIL_ADDRESS>Moscow Institute of Physics and
Technology (National Research University), Dolgoprudny, Russian Federation
Fritz Haber Research Center for Molecular Dynamics and Institute of Chemistry,
Hebrew University of Jerusalem, Jerusalem, Israel Roi Baer
<EMAIL_ADDRESS>Fritz Haber Research Center for Molecular Dynamics and
Institute of Chemistry, Hebrew University of Jerusalem, Jerusalem, Israel
###### Abstract
We consider an arbitrary quantum mechanical system, initially in its ground
state, exposed to a time-dependent electromagnetic pulse with a carrier
frequency $\omega_{0}$ and a slowly varying envelope of finite duration. By
working out a solution to the time-dependent Schrödinger equation in the
high-$\omega_{0}$ limit, we find that, to the leading order in
$\omega_{0}^{-1}$, a perfect self-cancellation of the system’s linear response
occurs as the pulse switches off. Surprisingly, the system’s observables are,
nonetheless, describable in terms of a combination of its linear density
response function and nonlinear functions of the electric field. An analysis
of jellium slab and jellium sphere models reveals a very high surface
sensitivity of the considered setup, producing a richer excitation spectrum
than accessible within the conventional linear response regime. On this basis,
we propose a new spectroscopic technique, which we provisionally name the
Nonlinear High-Frequency Pulsed Spectroscopy (NLHFPS). Combining the
advantages of the extraordinary surface sensitivity, the absence of
constraints by the traditional dipole selection rules, and the clarity of
theoretical interpretation utilizing the linear response time-dependent
density functional theory, NLHFPS has the potential to evolve into a powerful
characterization method for nanoscience and nanotechnology.
## I Introduction
In optical spectroscopy, systems of interest are exposed to light, and their
response allows us to explore their structure and composition. A significant
part of spectroscopy involves linear effects, such as when light is absorbed
or scattered off material targets, allowing their imaging and
characterization, teaching us almost solely about dipole-allowed transitions.
Nonlinear spectroscopy methods go beyond this limitation, studying otherwise
hidden or dark changes applicable to a large variety of systems and
processes.[1] Examples of nonlinear spectroscopy include second-harmonic
generation (SHG), which is used to study interfaces and adsorbed molecules and
serves as a high-resolution optical microscopy technique in biological
systems,[2] multiphoton excitation fluorescence (MPEF), as well as various
Raman scattering methods.[3]
The use of nonlinear spectroscopies, especially in surface and nano-sciences,
is growing due to their high interfacial sensitivity. However, results in
nonlinear spectroscopies are often challenging to interpret since their
description involves much more sophisticated theoretical techniques as
compared to their linear counterparts. [4]
This article studies the high-frequency limit of the electronic response,
singling out a pathway which leads to a major simplification in the
description of nonlinear spectroscopies, as long as the observables are
analysed after the field acting on a system dies out. We find that the
nonlinear behaviour of the system observables is expressible in terms of the
_linear electron density response function_ , the latter occurring on the
time-scale of the pulse’s enveloping shape. By this, we present an approach to
the problem of the nonlinear electronic response in the case of high-frequency
pulses, which turns out to be no more theoretically and computationally
demanding than the solution of the conventional linear response problem.
Specifically, the well-developed methods of the linear response time-dependent
density functional theory[5] (TDDFT) can be readily invoked, expanding the
reach of the latter to the realm of nonlinear physics.
We validate our theory numerically using the exactly solvable hydrogen atom
system propagating under a time-dependent field. Then we consider applications
to nano-films and nano-dots, which demonstrate the power of the proposed
method by revealing the modes in the excitation spectra of these systems,
latent when probed within the linear regime. Finally, we present an example of
molecular spectroscopy showing dipole-forbidden transitions.
## II Formalism
We consider a many-electron system subject to the time-dependent (TD)
modulated periodic potential. We are concerned with solving the Schrödinger
equation (in the following, atomic units are used unless indicated otherwise)
$i\frac{\partial\Psi(t)}{\partial
t}=\left[\hat{H}_{0}+(\cos\omega_{0}t)\hat{W}(t)\right]\Psi(t),$ (1)
where the unperturbed Hamiltonian is
$\hat{H}_{0}=\sum\limits_{i=1}^{N}\left[-\frac{1}{2}\nabla_{i}^{2}+v_{ext}(\mathbf{r}_{i})\right]+\frac{1}{2}\sum\limits_{i\neq
j}^{N}\frac{1}{|\mathbf{r}_{i}-\mathbf{r}_{j}|},$ (2)
$N$ and $v_{ext}(\mathbf{r})$ being the number of electrons and the external
(electron-nuclear Coulomb) potential, respectively, and the harmonic
perturbation is enveloped with the potential
$\hat{W}(t)=\sum\limits_{i=1}^{N}W(\mathbf{r}_{i},t).$ (3)
For simplicity, we assume that the time-dependence in the pulse potential
$W(\mathbf{r},t)$ factorizes, i.e.,
$W(\mathbf{r},t)=C(t)W(\mathbf{r}),$ (4)
where $C(t)$ is the pulse envelope and $W(\mathbf{r})$ determines the
coordinate dependence of the potential, although, extensions to more general
forms of the potential are straightforward.
Our principal result, the proof of which is postponed until Appendix A and the
Supplemental Material, is an expression for the probability amplitude to find,
after the end of the pulse, the system in its excited state $\Psi_{\alpha\neq
0}$
$\begin{split}\langle\Psi_{\alpha\neq
0}|\Psi(t>T)\rangle&=\frac{\pi\widetilde{C^{2}}(E_{\alpha}-E_{0})}{2i\omega_{0}^{n}}e^{-iE_{\alpha}t}\\\
&\times\int\langle\Psi_{\alpha}|\hat{n}(\mathbf{r})|\Psi_{0}\rangle
F_{n}(\mathbf{r})d\mathbf{r}.\end{split}$ (5)
In Eq. (5), $E_{\alpha}$ are the eigenenergies of the system,
$\widetilde{C^{2}}(\omega)$ is the Fourier transform of the _square_ of the
envelope function
$\widetilde{C^{2}}(\omega)=\frac{1}{2\pi}\int e^{i\omega t}C^{2}(t)dt,$ (6)
$\hat{n}(\mathbf{r})=\sum_{i=1}^{N}\delta(\mathbf{r}_{i}-\mathbf{r})$ is the
electron density operator, and $n=4$ and $2$, in the case of the uniform
applied electric field
$W(\mathbf{r})=-\mathbfcal{E}_{0}\cdot\mathbf{r},$ (7)
and all other cases, respectively. Corresponding $F_{n}(\mathbf{r})$ are
$\displaystyle
F_{4}(\mathbf{r})=-[\mathbfcal{E}_{0}\cdot\nabla]^{2}v_{ext}(\mathbf{r}),$ (8)
$\displaystyle F_{2}(\mathbf{r})=[\nabla W(\mathbf{r})]^{2}.$ (9)
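For a Gaussian envelope $C(t)=e^{-(t/\sigma)^{2}}$, as used in the pulses of
Sec. III, the transform of Eq. (6) is analytic:
$C^{2}(t)=e^{-2t^{2}/\sigma^{2}}$ gives
$\widetilde{C^{2}}(\omega)=\frac{\sigma}{2\sqrt{2\pi}}\,e^{-\sigma^{2}\omega^{2}/8}$.
The sketch below (our numerical check, not part of the paper) verifies this by
direct quadrature; the parameter values are our own choices.

```python
import numpy as np

trapezoid = getattr(np, "trapezoid", np.trapz)   # NumPy 2.x / 1.x compatibility

def c2_tilde_numeric(omega, sigma, tmax=200.0, n=200_001):
    """(1/2pi) * integral of e^{i omega t} C(t)^2 dt for C(t) = exp(-(t/sigma)^2)."""
    t = np.linspace(-tmax, tmax, n)
    f = np.exp(1j * omega * t - 2.0 * (t / sigma) ** 2)
    return trapezoid(f, t) / (2.0 * np.pi)

def c2_tilde_exact(omega, sigma):
    return sigma * np.exp(-((sigma * omega) ** 2) / 8.0) / (2.0 * np.sqrt(2.0 * np.pi))

sigma, omega = 15.0, 0.375       # illustrative values (sigma as in Sec. III)
num = c2_tilde_numeric(omega, sigma)
exact = c2_tilde_exact(omega, sigma)
```

The Gaussian decay of $\widetilde{C^{2}}(\omega)$ shows which transition
energies $E_{\alpha}-E_{0}$ a pulse of duration $\sigma$ can appreciably
populate through Eq. (5).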
Finally, Eq. (5) holds to the leading non-vanishing order $\omega_{0}^{-n}$ in
each of the cases. Further developments (see Appendix A) show, that the time-
dependent oscillations in the electron density after the end of the pulse are
given by
$\delta n(\mathbf{r},t>T)\\!=\\!\frac{1}{i\omega_{0}^{n}}\\!\int\\!e^{-i\omega
t}\widetilde{C^{2}}(\omega){\rm
Im}\chi(\mathbf{r},\mathbf{r}^{\prime},\omega)F_{n}(\mathbf{r}^{\prime})d\mathbf{r}^{\prime}d\omega,$
(10)
where $\chi(\mathbf{r},\mathbf{r}^{\prime},\omega)$ is the linear density
response function of the interacting electron system.111The RHS of Eq. (10)
can easily be seen to be real, the presence of the imaginary unit in the
denominator notwithstanding; this is due to the oddness of ${\rm
Im}\,\chi(\mathbf{r},\mathbf{r}^{\prime},\omega)$ in $\omega$ and to the fact
that ${\widetilde{C^{2}}^{*}(-\omega)=\widetilde{C^{2}}(\omega)}$, the latter
following from the realness of $C(t)$.
Furthermore, to the leading order in $\omega_{0}^{-1}$, we find for the total
energy absorbed by the system during the pulse action
$\Delta
E\\!=\\!-\frac{\pi}{4\omega_{0}^{2n}}\\!\\!\int\\!\\!\omega|\widetilde{C^{2}}(\omega)|^{2}F_{n}(\mathbf{r}){\rm
Im}\chi(\mathbf{r},\mathbf{r}^{\prime},\omega)F_{n}(\mathbf{r}^{\prime})d\omega
d\mathbf{r}d\mathbf{r}^{\prime}.$ (11)
Clearly, the case of the uniform electric field ($n=4$) is relevant to the
problem of the illumination by light. Although, strictly speaking, the latter
should be described with the transverse vector potential $A_{z}(t-x/c)$, the
usual practice is, neglecting the retardation, to reduce the problem to that
with the homogeneous $A_{z}(t)$ and then, by the gauge transformation, to the
equivalent problem with the scalar potential (7). [7] Apart from the lower
bound on the frequency, inherent to our high-frequency asymptotic theory,
$\omega_{0}\gg\omega_{\text{low}}$, the neglect of the retardation imposes a
standard upper bound $\omega_{0}\ll\omega_{\text{high}}=c/d$, where $c$ is the
velocity of light, and $d$ is the size of the system. Another case, $n=2$, is
relevant to processes with excitation by longitudinal fields, such as, e.g.,
moving charges.222The potential
$\phi_{ext}(\mathbf{r},t)=Z/|\mathbf{r}-\mathbf{R}(t)|$ of an ion of charge
$Z$ moving along the trajectory $\mathbf{R}(t)$ corresponds to a non-uniform
externally applied field, except for
$|\mathbf{r}-\mathbf{R}(t)|\gg|\mathbf{r}|$. This is promising for the
construction of a TDDFT of the stopping power of matter for fast ions beyond
the adiabatic approximation for the exchange-correlation potential, a theory
which currently exists in the low-velocity limit only. [9, 10]
Importantly, in Eqs. (10) and (11) we witness a hybridization of linear and
quadratic response quantities: the _linear_ density-density response function
is multiplied by the _quadratic_ frequency envelope
$\widetilde{C^{2}}(\omega)$. In the illustrative calculations below, we will
see that such hybridization leads to interesting effects.
In the field of the light-matter interactions, the application of the
acceleration-frame method of Kramers and Henneberger (KH) [11, 12] has led to
a great many advancements in the theory.[13, 14, 15, 16, 17, 18]
Instructively, our formulas above can be re-derived in an alternative way
using the KH method, as shown in Appendix B. However, this is possible
to do in the case of the uniform field only ($n=4$), since this case is
inherent within the KH formalism.
## III Results
### III.1 Hydrogen atom
We now investigate how the high-frequency limit is approached as the frequency
increases by considering an exactly solvable system, namely, the hydrogen
atom. First, assuming that the atom, initially in its ground state, is
subjected to a doubly modulated Gaussian pulse with a spherically symmetric
quadrupole potential
$W(\mathbf{r},t)\cos\omega_{0}t=W_{0}r^{2}e^{-(t/\sigma)^{2}}\cos\omega
t\cos\omega_{0}t,$ (12)
we numerically time-propagate the Schrödinger equation (1). In the pulse (12),
the carrier frequency $\omega_{0}$ serves to set the scene for the high-
frequency regime, while the second frequency $\omega$ couples the pulse to the
excitations in the system. Upon the end of the pulse, we look at the
populations of the excited states, plot them in Fig. 1 versus the enveloping
function frequency $\omega$ (the second frequency), and compare with the
asymptotic limit. The latter, according to Eqs. (5), (9), and (12) is given by
$\langle\phi_{n,s}|\phi(t>T)\rangle\\!=\\!\frac{2\pi
W_{0}^{2}}{i\omega_{0}^{2}}e^{-i\epsilon_{n}t}\widetilde{C^{2}}(\epsilon_{n}-\epsilon_{1})\langle\phi_{n,s}(r)|r^{2}|\phi_{1,s}(r)\rangle,$
(13)
where $\phi_{n,s}(r)$ are the hydrogenic $s$-orbitals and $\epsilon_{n}$ are
the corresponding eigenenergies, and we have restricted the comparison to the
transitions to the $s$-states only.
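The radial matrix element $\langle\phi_{n,s}|r^{2}|\phi_{1,s}\rangle$ entering
Eq. (13) can be evaluated directly from the analytic hydrogen radial
functions. The sketch below (our illustration, not from the paper) computes
the $1s\to 2s$ case numerically; the closed form is
$\sqrt{2}\,[4!/(3/2)^{5}-\tfrac{1}{2}\,5!/(3/2)^{6}]\approx-2.98$ a.u.

```python
import numpy as np

trapezoid = getattr(np, "trapezoid", np.trapz)   # NumPy 2.x / 1.x compatibility

def R10(r):
    return 2.0 * np.exp(-r)                                   # hydrogen 1s (a.u.)

def R20(r):
    return (1.0 - r / 2.0) * np.exp(-r / 2.0) / np.sqrt(2.0)  # hydrogen 2s (a.u.)

# <2s| r^2 |1s> = integral of R20(r) r^2 R10(r) r^2 dr  (radial measure r^2 dr)
r = np.linspace(0.0, 60.0, 600_001)
m = trapezoid(R20(r) * r ** 2 * R10(r) * r ** 2, r)
```

The same quadrature, with the corresponding $\phi_{n,s}$, yields every matrix
element needed to evaluate the asymptotic amplitudes of Eq. (13).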
Figure 1: Excitation probability [the modulus squared of Eq. (5)] upon the
end of the pulse of Eq. (12), from the ground-state of the hydrogen atom to a
number of its excited $s$-states. The solid black line is the asymptotic limit
of Eq. (13). Spectra at finite frequencies are obtained by the numerical
propagation of the TD Schrödinger equation (1). The parameters of the pulse
used were $\sigma=15$ a.u. and $W_{0}=0.125$ a.u.
We note that the spherically symmetric quadrupole potential (12) is a purely
model one, which we use to demonstrate the convergence of the numerical
solution of the Schrödinger equation to the asymptotic solution (5) for a
non-uniform field ($n=2$).
Similarly, in the case of the uniform field ($n=4$), we propagate the system
under the potential 333While the potential of Eq. (12) is a purely model one,
we note that the regime of Eq. (14) can be realized by superimposing two laser
beams.
$W(\mathbf{r},t)\cos\omega_{0}t=-\mathcal{E}_{0}ze^{-(t/\sigma)^{2}}\cos\omega
t\cos\omega_{0}t.$ (14)
For the hydrogen atom
$\begin{split}F_{4}(\mathbf{r})&=\mathcal{E}_{0}^{2}\frac{\partial^{2}}{\partial
z^{2}}\frac{1}{r}=-\mathcal{E}_{0}^{2}\\\
&\times\left[\frac{(4\pi)^{3/2}}{3}\delta(\mathbf{r})Y_{00}(\theta,\phi)\\!+\\!\sqrt{\frac{\pi}{5}}\frac{4}{r^{3}}Y_{20}(\theta,\phi)\right],\end{split}$
(15)
where $Y_{lm}(\theta,\phi)$ are spherical harmonics. Evidently, only
transitions from the ground state to $s$- and $d$-states are possible, with
the following amplitudes:
$\displaystyle\begin{split}&\langle\phi_{n>1,s}|\phi(t>T)\rangle=-\frac{\pi\mathcal{E}^{2}_{0}\widetilde{C^{2}}(\epsilon_{n}-\epsilon_{1})}{2i\omega_{0}^{4}}e^{-i\epsilon_{n}t}\\\
&\times\frac{4\pi}{3}\phi_{n,s}(0)\phi_{1,s}(0),\end{split}$ (16)
$\displaystyle\begin{split}&\langle\phi_{n>2,d}|\phi(t>T)\rangle=\frac{\pi\mathcal{E}^{2}_{0}\widetilde{C^{2}}(\epsilon_{n}-\epsilon_{1})}{2i\omega_{0}^{4}}e^{-i\epsilon_{n}t}\\\
&\times
4\sqrt{\frac{\pi}{5}}\int\limits_{0}^{\infty}\frac{1}{r}\phi_{n,d}(r)\phi_{1,s}(r)dr.\end{split}$
(17)
In Eq. (16), we can simplify further using $\phi_{n,s}(0)=2/n^{3/2}$. [20]
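As a quick consistency check on the scaling implied by Eq. (16): leaving the common pulse-shape factor $\widetilde{C^{2}}(\epsilon_{n}-\epsilon_{1})$ aside, the $s$-state amplitude is proportional to $\phi_{n,s}(0)=2/n^{3/2}$, so the excitation probabilities fall off as $1/n^{3}$:

```python
# Hydrogenic s-orbital value at the origin (a.u.): phi_{n,s}(0) = 2/n^{3/2}
def phi_s_origin(n):
    return 2.0 / n ** 1.5

# amplitude of Eq. (16) apart from the common pulse-shape prefactor
amp = {n: phi_s_origin(n) * phi_s_origin(1) for n in (2, 3, 4)}
prob_ratio_23 = (amp[2] / amp[3]) ** 2
print(prob_ratio_23)   # (3/2)^3 = 3.375
```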
Figure 2: Excitation probability, upon the end of the pulse of Eq. (14), from
the ground-state of the hydrogen atom to some of its excited $s$-states. The
solid black line is the asymptotic limit of Eq. (16). Spectra at finite
frequencies are obtained by the numerical propagation of the TD Schrödinger
equation. The parameters of the pulse used were $\sigma=15$ a.u. and
$\mathcal{E}_{0}=0.125$ a.u.
Figures 1 and 2 demonstrate the convergence, with the growth of $\omega_{0}$,
of the excitation processes’ outcome to their $\omega_{0}\to\infty$ limits of
Eqs. (5), for the cases of the quadrupole and dipole exciting potentials,
respectively. Remarkably, the asymptotic regime is approached in very
different ways in the quadrupole (Fig. 1) and the dipole (Fig. 2) cases: in
the former, the peaks' positions and shapes change dramatically as the
frequency grows, while in the latter, only the amplitude of the peaks varies,
and it does so monotonically. In the dipole case, the convergence with respect to the peaks'
amplitudes, is very slow, and it is not reached at practically achievable
values of $\omega_{0}$. (Apart from the experimental unreachability of the
upper values of $\omega_{0}$ in Fig. 2, at those frequencies the results become
unphysical because of retardation effects, as discussed in Sec. II. We
include them nonetheless to confirm that, although slow, the convergence does
take place.) At the same time, the
excitation energies (peaks’ positions), even at moderate values of
$\omega_{0}$, are very well reproduced by the asymptotic theory. We emphasize
that, while the asymptotic limit holds for an arbitrary system, the speed of
the convergence is system-dependent. This is confirmed by Fig. 3 using the
fictitious hydrogenic atom with the nuclear charge $Z=0.25$. Since the
asymptotic theory is expected to be the more accurate the larger $\omega_{0}$
is compared to the characteristic excitation energies of the system, in the
$Z=0.25$ case we observe much faster convergence than for $Z=1$. For the
peaks in Fig. 3 to remain resolved, a
large width of the pulse $\sigma=200$ a.u. was chosen in the calculation with
$Z=0.25$. Further particulars of the solution of the TD Schrödinger equation
and the issues of the convergence to the asymptotic limit are presented in
Appendix C.
Figure 3: Similar to Fig. 2, but for the fictitious hydrogenic atom of the
nuclear charge $Z=0.25$ and with $\sigma=200$ a.u.
It is highly instructive to follow the excitation process in time, from the
pulse beginning to its end, in order to understand how the system reaches its
final state. As can be seen from the derivation [e.g., Eq. (S.15) of the
Supplemental Material], the linear response does contribute to the pumping
during the pulse action, but it passes through a cycle from increasing to
decreasing the population of excited states, with zero net result. On the contrary,
the quadratic response does not completely reverse itself, which results in
the residual occupancies of the excited states upon the pulse’s end. In Fig.
4, we plot the time evolution of the populations of the $2s$ and $2p$
orbitals of the H atom under the action of the pulse of Eq. (14). We observe
a principal difference between the occupancies of the $s$ and $p$ levels:
while the latter becomes much more strongly (by approximately three orders of
magnitude) populated in the middle of the pulse, it gives the electron away
by the pulse's end. The former, in contrast, retains the accepted electron
with a finite probability. This behaviour is characteristic of spherically
symmetric systems in the high-frequency regime and agrees with our asymptotic
theory: it is the linear response that dominates the $s\to p$ transition
during the pulse, and this response vanishes upon the pulse's extinction. In
particular, we conclude that the usual dipole selection rules do not hold in
this process.
Figure 4: Evolution of the populations of orbitals in H atom during the
action of the pulse of Eq. (14). Parameters used were $\omega_{0}=2$ a.u.,
$\omega=(\epsilon_{2}-\epsilon_{1})/2=0.1875$ a.u., $\sigma=50$ a.u., and
$\mathcal{E}_{0}=0.125$ a.u.
At this point we note that, while TDDFT of the electronic response in the
high-frequency limit was studied in Ref. 22, it is important to emphasize the
principal difference between the physical situation considered in that
reference and in the present paper. Ref. 22 deals with the response to the
monochromatic field, thus considering a continuous wave. In that regime, the
linear response persists in the high-frequency limit and it is, usually,
prevailing. On the contrary, here we consider the excitation by a pulse of
finite duration, the carrier frequency of which is asymptotically high. We
focus on the behaviour of a system after the end of the pulse, in which case
we find the total suppression of the linear response, while the nonlinear one
is describable in terms of the linear response TDDFT.
With the use of Eqs. (16) and (17), in Fig. 5 we compare the excitation and
ionization processes’ probabilities for the hydrogen atom initially in its
ground-state and exposed to the Gaussian pulse. We conclude that the
ionization is dominant for short pulses, in which case a sudden impact strips
off the electron, while, for longer pulses, transitions to excited bound
states become preferential. We also note that transitions to the $d$-states
play an insignificant role compared to those to the $s$-states.
Figure 5: Probability of the excitation and ionization of hydrogen atom,
initially in its ground state, to $s$ (left) and $d$ (right) states,
relative to the total excitation plus ionization probability, plotted versus
the pulse width $\sigma$. The pulse shape is purely Gaussian
$C(t)=e^{-(t/\sigma)^{2}}$.
### III.2 Jellium slab
We proceed by considering a slab of thickness $d$ with the constant positive
background charge density $n_{+}=(\frac{4}{3}\pi r_{s}^{3})^{-1}$,
where $r_{s}$ is the 3D density parameter. Within the Kohn-Sham (KS) density-
functional theory (DFT) [23] and using the local density approximation (LDA),
we calculate the ground-state KS band structure and electron density. To this
system, we apply the doubly modulated dipole pulse of Eq. (14), and we use our
theory to determine the total energy absorption in the slab in the high
carrier frequency regime. The problem being one-dimensional, the operator in
Eq. (8) reduces to the Laplacian, and we have by virtue of the Poisson law
$F_{4}(z)=-4\pi n_{+}(z)=-4\pi n_{+}\Theta\\!\left(\frac{d}{2}-|z|\right),$
(18)
where $\Theta(x)$ is the Heaviside step function. The resulting absorption
spectra, obtained by Eq. (11) with the use of the adiabatic time-dependent LDA
(ATDLDA) in the construction of $\chi(\mathbf{r},\mathbf{r}^{\prime},\omega)$,
[5] are presented in Figs. 6 and 7, for $r_{s}=5$ and $2$, corresponding to
the jellium model of the metallic potassium and aluminum, respectively. The
following observations are made: (i) Similar to the case of the hydrogen atom,
due to the integration with $\widetilde{C^{2}}(\omega)$ in Eq. (11) and due to
the form of the pulse (14), spectra in the left panels of Figs. 6 and 7 as
functions of $\omega$ are governed by SHG and, accordingly, peaks’ positions
scale to half the frequencies of the corresponding excitations; (ii) In the
linear regime (right panels in Figs. 6 and 7), spectra are dominated by the
bulk plasmon (BP) peak, the intensity of which crucially depends on the share
of the bulk, i.e., the slab thickness $d$. On the contrary, the nonlinear
spectra in the high-frequency regime (left panels in Figs. 6 and 7) weakly
depend on $d$, suggesting that the surface excitations dominate them. The
prevalence of the surface response can be understood by noting that
$\int\chi(\mathbf{r},\mathbf{r}^{\prime},\omega)d\mathbf{r}^{\prime}=0$ (no
reaction to a constant potential) and, therefore, both the deep interior and
exterior of the slab, by Eq. (18), do not contribute appreciably to the
integral of Eq. (11).
Figure 6: Jellium slabs. Left: absorption from the pulse of Eq. (14)
($\sigma=500$ a.u.) at asymptotically large frequency $\omega_{0}$ as a
function of $2\omega$, as obtained through Eq. (11). Right: absorption per
unit time from the monochromatic field of the frequency $\omega$ in the linear
response regime. Two slabs of the thicknesses $d=25$ and $40$ a.u. and the
density parameter $r_{s}=5$ are considered. $x$-axes are scaled to the bulk
plasma energy $\omega_{p}=4.2$ eV. Parameters used correspond to the jellium
model of solid potassium. The inset shows the slab geometry and an arrow
indicates the direction of the electric field vector, while the laser pulse
moves parallel to the slab’s surfaces. Figure 7: Same as Fig. 6, but for
slabs of the density parameter $r_{s}=2$ and the corresponding bulk plasma
energy $\omega_{p}=16.7$ eV (jellium model of solid aluminum).
Notably, in the left panel of Fig. 6 we observe a strong peak with the maximum
at $2\omega\approx 0.88\omega_{p}$. The counterpart of this peak in the linear
response regime (right panel of Fig. 6) is positioned at $\omega\approx
0.83\omega_{p}$, and it is known as the multipole surface plasmon (MP). [24]
Because of the BP suppression, MP is very prominent in the left panel of this
figure, which makes the high-frequency nonlinear technique an ideal tool to
study this otherwise subtle type of excitation. It is instructive to note that
$F_{4}(z)$ of Eq. (8) provides, effectively, the impact mode of the
complementary linear response problem, [25] which is known to be favourable
for MP excitation. [26] In Fig. 7 ($r_{s}=2$), left panel, we also see a
prominent broad peak at $2\omega$ below the BP frequency, while MP is not
discernible in the linear response spectrum in the right panel. We, therefore,
conclude that the corresponding excitation exists at the surface of metallic
aluminum, and the high-frequency nonlinear technique provides a unique way to
detect it. At the same time, the traditional method of electron energy loss
spectroscopy (EELS) does not possess sufficient sensitivity. [24] The
oscillating structures at $2\omega>\omega_{p}$ in Fig. 6 and on both sides
of $\omega_{p}$ in Fig. 7 differ for different slab thicknesses, and they
can, therefore, be attributed to the interference effect between the two
surfaces of the slabs. Finally, the absence of the conventional (dipole)
surface plasmon (SP) peak at $\omega_{s}=\omega_{p}/\sqrt{2}$ is due to the
strictly normal to the surface direction of the exciting field ($q_{\|}=0$),
in which case the amplitude of the SP vanishes.
To quantitatively verify the above picture, in Fig. 8 we plot the Fourier
transform of the density oscillation in the asymptotic regime [Eq. (10)] and
compare it with the linear response density oscillation. Clearly, in the
former case, the oscillation is mainly confined to the vicinity of the
surfaces of the slab, being largely suppressed in the interior. On the
contrary, in the linear response regime, oscillations predominantly occur in
the bulk of the slab.
Figure 8: Fourier transform of the density oscillation [Eq. (10)] in the
$\omega_{0}\to\infty$ asymptotic regime (solid curve against the left
$y$-axis) and its linear response counterpart (dashed curve against the right
$y$-axis), with the frequency $\omega$ set to $\omega_{mp}/2$ and
$\omega_{mp}$, respectively [cf. Fig. 6]. Vertical straight lines indicate
positions of the slab’s surfaces. Parameters of the calculation are those of
Fig. 6.
### III.3 Jellium sphere
In contrast to a slab, for a sphere, the second derivative on the RHS of Eq.
(8) does not reduce to the Laplacian and, consequently, $F_{4}(\mathbf{r})$ is not
given by the positive background density only. Instead, we have
$\begin{split}&F_{4}(\mathbf{r})=\frac{2\sqrt{4\pi}n_{+}}{3}\times\\\
&\left[\frac{2}{\sqrt{5}}\frac{R^{3}}{r^{3}}\Theta(r-R)Y_{20}(\theta,\phi)-\Theta(R-r)Y_{00}(\theta,\phi)\right],\end{split}$
(19)
where $R$ is the radius of the rigid positive-charge background. Due to the
symmetry, the density-response function
$\chi(\mathbf{r},\mathbf{r}^{\prime},\omega)$ splits in angular momentum into
$\chi_{lm}(r,r^{\prime},\omega)$, the latter acting separately on each
harmonic of the externally applied potential. The problem becoming one-
dimensional again, we calculate $\chi_{00}$ and $\chi_{20}$, apply them to Eq.
(19), and plug the result into Eq. (11). We consider the same form of the
doubly modulated pulse of Eqs. (14) as previously.
Figure 9: Jellium spheres. Left: absorption from the pulse of Eq. (14)
($\sigma=500$ a.u.) at asymptotically large frequency $\omega_{0}$ as a
function of $2\omega$, as obtained through Eq. (11). Right: absorption per
unit time from the monochromatic field of the frequency $\omega$ in the
linear response regime. Vertical lines show positions of classical Mie
plasmons $\omega_{l}$. Two spheres of the radii $R=30$ and $40$ a.u. and the
density parameter $r_{s}=5$ are considered.
In Fig. 9, results of calculations for two spheres, with radii $R=30$ and $40$
a.u., and the density parameter $r_{s}=5$, are presented, for the nonlinear
$\omega_{0}\to\infty$ and the linear-response regimes, in the left and right
panels, respectively. Within the classical electrodynamics, a sphere of the
Drude metal supports an infinite series of Mie plasmons
$\omega_{l}=\sqrt{l/(2l+1)}\omega_{p},\ l=1,2,\dots$ [27] In the
monochromatic linear-response regime (right panel of Fig. 9), we observe only
the $p$-mode of this series, red-shifted by the quantum size effect.
According to Eq. (19), energy absorption in the nonlinear
$\omega_{0}\to\infty$ regime (left panel of Fig. 9) originates from the
superposition of the $s$\- and $d$-modes. As plotted versus the second
modulation frequency $\omega$, it reveals a rich spectrum of the underlying
excitations. The leftmost feature near $0.57\omega_{p}$ comes from the
$d$-mode Mie plasmon $\omega_{2}$, red-shifted in the quantum calculation. The
broad dominating peak with the maximum near $0.80\omega_{p}$ does not have an
analog within the classical electrodynamics, and, similar to the multipole
plasmon modes in the case of a slab, it becomes accessible with the use of the
high-$\omega_{0}$ nonlinear regime. A signature of the bulk plasmon on the
right shoulder of this peak can also be observed, indicating the possibility
of the direct recognition of the constituents of nano-particles by their bulk
plasmon frequencies $\omega_{p}$ with the use of laser pulses. The latter is,
obviously, impossible in the linear-response regime. We also note structures
above $\omega_{p}$, which are due to the (dressed) single-particle excitations
affected by the quantum interference.
Finally, we consider molecular electronic spectroscopy. Returning to the
very short pulse spectroscopy discussed above, in the dipole interaction case,
we used time-dependent local density approximation calculations to produce
linear-response estimates of the high-frequency energy absorption [Eq.
(11)] and, in Fig. 10, compare them to the standard low-frequency energy
absorption of the ethylene molecule. The differences between the spectra in
the two regimes are due to the dipole versus $F_{4}$ selection rules,
emphasizing the aptitude of high-frequency spectroscopy to probe excitations
forbidden in the linear regime. See Appendix D for details of this calculation.
Figure 10: Comparison of the standard linear-response energy absorption
spectrum of the ethylene molecule to that of the high-frequency response in
the dipole approximation (Eq. 11). $x-x$ refers to the linear response auto-
correlation function with the electric field along the $x$-axis. At the same
time, $\nabla_{x}v_{ext}-\nabla_{x}v_{ext}$ stands for the auto-correlation
function in the high-frequency nonlinear regime, and similarly for the two
other directions.
## IV Discussion and conclusions
We have considered excitation of a quantum-mechanical system by an externally
applied electric field of high-frequency $\omega_{0}$ and finite duration in
time. After the end of the pulse, the state of the system being a
superposition of the eigenstates of the unperturbed Hamiltonian, the expansion
of the corresponding transition amplitudes in the power series in
$\omega_{0}^{-1}$ has been performed, with the leading terms found of the
order $\omega_{0}^{-4}$ for the uniform applied field (dipole case) and of
$\omega_{0}^{-2}$ otherwise.
We have demonstrated that, to the leading order in the inverse frequency, the
quadratic, rather than the linear, response determines the excitation process.
Nonetheless, we have also shown that all the information necessary to describe
this nonlinear excitation regime is contained in the linear density response
function of the system under consideration. The problem has been thus reduced
to that of the linear response time-dependent density functional theory, for
which practical methods of solution, at various levels of accuracy and
sophistication, are well established.
Further, we have found that a specific pulse shape, modulated by a second
(low) frequency, can be advantageous as a probe, delivering spectra of
excitations in the nonlinear response regime. In our illustrative
applications, to jellium-model nano-films and nano-dots, plasmonic modes
undetectable, or challenging to detect, by linear optical spectroscopy or
electron energy-loss spectroscopy have been discerned. We point out that the
high carrier frequency is out of resonance, and its only role is to set the
scene for probing the system with the second frequency, with twice the latter
being in resonance with the system's excitations.
Based on our findings, we propose a spectroscopic technique, which we
provisionally name Nonlinear High-Frequency Pulsed Spectroscopy (NLHFPS). Our
results show that NLHFPS, i.e., exposing an explored system to a finite-
duration high-frequency electric field with low-frequency modulation, allows
for an efficient nonlinear spectroscopic probe of modes inaccessible or hardly
accessible by other techniques. A significant asset of the novel method is its
ease of interpretation, enabling a detailed comparison between experiment and
theory. This benefit stems from the results’ direct dependence on the target
material’s density-density response function. As demonstrated here, NLHFPS can
uncover rich and profound physical phenomena hidden from more conventional
methods.
## V Supplementary Material
The Supplementary Material contains a detailed derivation of Eq. (24) of
Appendix A, which is too lengthy to be placed in the main text or appendices.
###### Acknowledgements.
V.U.N. acknowledges the support of the Russian Foundation for Basic Research
and the Ministry of Science and Technology of Taiwan (Grant no. 21-52-52007).
R.B. wishes to acknowledge the support of the German-Israel Foundation (Grant
no. GIF-I-26-303.2-2018). The authors declare no conflicts of interest. The
data that support the findings of this study are available from the authors
upon reasonable request.
## Appendix A Derivation of Eqs. (5)-(11)
In the interaction representation
$\displaystyle\tilde{\Psi}(t)=e^{i\hat{H}_{0}t}\Psi(t),$ (20)
$\displaystyle\tilde{\hat{W}}(t)=e^{i\hat{H}_{0}t}\hat{W}(t)e^{-i\hat{H}_{0}t},$
(21)
the problem of the solution of Eq. (1) turns into that for the equation
$\frac{\partial}{\partial
t}\tilde{\Psi}(t)=\frac{1}{i}(\cos\omega_{0}t)\tilde{\hat{W}}(t)\tilde{\Psi}(t),$
(22)
or for the equivalent integral equation
$\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i}\int\limits_{-\infty}^{t}(\cos\omega_{0}t^{\prime})\tilde{\hat{W}}(t^{\prime})\tilde{\Psi}(t^{\prime})dt^{\prime},$
(23)
where by $\Psi_{\alpha}$ we denote the eigenfunctions of the
Hamiltonian (2), we assume that $W(\mathbf{r},-\infty)=0$, and the system is
initially in its ground state $\Psi_{0}$.
Performing several consecutive integrations by parts in Eq. (23), assuming the
pulse to be of finite duration [$\hat{W}(\mathbf{r},+\infty)=0$] and
$\omega_{0}$ to be large, we obtain, keeping only terms up to
$\omega_{0}^{-4}$,
$\begin{split}&\tilde{\Psi}(+\infty)=\Psi_{0}+\frac{1}{4\omega_{0}^{2}}\\!\int\limits_{-\infty}^{\infty}\\!\left[\frac{\partial\tilde{\hat{W}}(t)}{\partial
t},\tilde{\hat{W}}(t)\right]\\!\Psi_{0}dt-\frac{1}{4\omega_{0}^{4}}\\!\int\limits_{-\infty}^{\infty}\\!\left[\frac{\partial^{3}\tilde{\hat{W}}(t)}{\partial{t}^{3}},\tilde{\hat{W}}(t)\right]\\!\Psi_{0}dt+\\\
&\frac{1}{16\omega_{0}^{4}}\\!\\!\int\limits_{-\infty}^{\infty}\\!\left[\frac{\partial\tilde{\hat{W}}(t)}{\partial
t},\tilde{\hat{W}}(t)\right]\int\limits_{-\infty}^{t}\\!\left[\frac{\partial\tilde{\hat{W}}(t^{\prime})}{\partial
t^{\prime}},\tilde{\hat{W}}(t^{\prime})\right]\\!\Psi_{0}dt^{\prime}dt\\!-\\!\frac{1}{16\omega_{0}^{4}}\\!\\!\int\limits_{-\infty}^{\infty}\\!\left[\frac{\partial\tilde{\hat{W}}(t)}{\partial
t},\tilde{\hat{W}}^{3}(t)\right]\Psi_{0}dt\\!+\\!\frac{3}{64\omega_{0}^{4}}\\!\\!\int\limits_{-\infty}^{\infty}\left[\frac{\partial\tilde{\hat{W}}^{2}(t)}{\partial
t},\tilde{\hat{W}}^{2}(t)\right]\Psi_{0}dt.\end{split}$ (24)
A lengthy derivation of Eq. (24) is given in full in the Supplementary
Material. (Arriving at the final concise Eqs. (5)-(11) required very lengthy
derivations. To rule out the possibility of error, we repeated the derivation
several times and verified the results by an independent method using the
Kramers-Henneberger acceleration frame (Appendix B). Additionally, after the
manual derivation, we composed a computer algebra code in Mathematica for the
consecutive integrations by parts in Eq. (23), which produced exactly the
same results.) The commutators in Eq. (24) can be expanded as
$\left[\frac{\partial\tilde{\hat{W}}(t)}{\partial
t},\tilde{\hat{W}}(t)\right]=ie^{i\hat{H}_{0}t}\left[\left[\hat{H}_{0},\hat{W}(t)\right],\hat{W}(t)\right]e^{-i\hat{H}_{0}t},$
(25) $\begin{split}&\left[\frac{\partial^{3}\tilde{\hat{W}}(t)}{\partial
t^{3}},\tilde{\hat{W}}(t)\right]=e^{i\hat{H}_{0}t}\left[-i\left[\hat{H}_{0},\left[\hat{H}_{0},\left[\hat{H}_{0},\hat{W}(t)\right]\right]\right]-3\left[\hat{H}_{0},\left[\hat{H}_{0},\frac{\partial\hat{W}(t)}{\partial
t}\right]\right]+3i\left[\hat{H}_{0},\frac{\partial^{2}\hat{W}(t)}{\partial
t^{2}}\right],\hat{W}(t)\right]e^{-i\hat{H}_{0}t},\end{split}$ (26)
$\left[\frac{\partial\tilde{\hat{W}}(t)}{\partial
t},\tilde{\hat{W}}^{3}(t)\right]=ie^{i\hat{H}_{0}t}\left[\left[\hat{H}_{0},\hat{W}(t)\right],\hat{W}^{3}(t)\right]e^{-i\hat{H}_{0}t},$
(27) $\left[\frac{\partial\tilde{\hat{W}}^{2}(t)}{\partial
t},\tilde{\hat{W}}^{2}(t)\right]=ie^{i\hat{H}_{0}t}\left[\left[\hat{H}_{0},\hat{W}^{2}(t)\right],\hat{W}^{2}(t)\right]e^{-i\hat{H}_{0}t}.$
(28)
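The identity (25) can be spot-checked numerically: for a factorized perturbation $\hat{W}(t)=C(t)\hat{W}_{0}$ [Eq. (4)], the $\partial\hat{W}/\partial t$ contribution to the commutator vanishes, and a finite-difference derivative of $\tilde{\hat{W}}$ reproduces the RHS. A sketch with random Hermitian matrices standing in for $\hat{H}_{0}$ and $\hat{W}_{0}$:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 6
A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
H0 = (A + A.conj().T) / 2                      # random Hermitian stand-in for H_0
B = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
W0 = (B + B.conj().T) / 2                      # spatial part W_0 of W(t) = C(t) W_0

evals, V = np.linalg.eigh(H0)
def U(s):
    # U(s) = exp(i H0 s) via the eigendecomposition of H0
    return V @ np.diag(np.exp(1j * evals * s)) @ V.conj().T

C = lambda s: np.exp(-s ** 2)                  # an arbitrary smooth envelope

def Wtilde(s):
    return U(s) @ (C(s) * W0) @ U(-s)          # interaction-picture W~(t)

def comm(X, Y):
    return X @ Y - Y @ X

t, dt = 0.7, 1e-5
lhs = comm((Wtilde(t + dt) - Wtilde(t - dt)) / (2 * dt), Wtilde(t))
rhs = 1j * U(t) @ comm(comm(H0, C(t) * W0), C(t) * W0) @ U(-t)
print(np.max(np.abs(lhs - rhs)))               # limited only by finite differencing
```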
### A.1 Non-uniform field case
We evaluate the commutator (25) as
$\begin{split}\left[\left[\hat{H}_{0},\hat{W}(t)\right],\hat{W}(t)\right]=-\int[\nabla
W(\mathbf{r},t)]^{2}\hat{n}(\mathbf{r})d\mathbf{r}.\end{split}$ (29)
If the RHS of Eq. (29) is not zero, then the substitution of Eq. (29) into Eq.
(24), keeping only the leading term of the order $\omega_{0}^{-2}$, yields
$\begin{split}&\langle\Psi_{\alpha\neq
0}|\tilde{\Psi}(t>T)\rangle=\frac{1}{4i\omega_{0}^{2}}\times\\\ &\int
e^{i(E_{\alpha}-E_{0})t}\langle\Psi_{\alpha}|\hat{n}(\mathbf{r})|\Psi_{0}\rangle[\nabla
W(\mathbf{r},t)]^{2}d\mathbf{r}dt.\end{split}$ (30)
If, furthermore, the factorization of Eq. (4) holds, then we arrive at Eq. (5)
with $n=2$, where an extra exponent $e^{-iE_{\alpha}t}$ appears in the
Schrödinger representation.
Equation (30) gives the transition amplitude to the leading order in
$\omega_{0}^{-1}$ unless the term in the square brackets under the integral is
independent of $\mathbf{r}$. In the latter case, the integration of
$\hat{n}(\mathbf{r})$ over $\mathbf{r}$ produces the constant $N$, and the
RHS vanishes because the matrix element
$\langle\Psi_{\alpha}|\Psi_{0}\rangle$ is zero by orthogonality. This is
exactly what happens if the field is uniform, as can be seen from Eq. (7);
therefore, this case requires a separate consideration.
### A.2 Uniform field case
With the use of Eqs. (24), (26), and with the commutator relations
$\displaystyle[\hat{H}_{0},\sum\limits_{i=1}^{N}\mathbfcal{E}_{0}\cdot\mathbf{r}_{i}]=-\sum\limits_{i=1}^{N}\mathbfcal{E}_{0}\cdot\nabla_{i},$
(31)
$\displaystyle[\hat{H}_{0},[\hat{H}_{0},\sum\limits_{i=1}^{N}\mathbfcal{E}_{0}\cdot\mathbf{r}_{i}]]=\sum\limits_{i=1}^{N}\mathbfcal{E}_{0}\cdot\nabla_{i}v_{ext}(\mathbf{r}_{i}),$
(32)
$\displaystyle\begin{split}&[\hat{H}_{0},[\hat{H}_{0},[\hat{H}_{0},\sum\limits_{i=1}^{N}\mathbfcal{E}_{0}\cdot\mathbf{r}_{i}]]]=-\sum\limits_{i=1}^{N}\left\\{\frac{1}{2}\mathbfcal{E}_{0}\cdot\nabla^{3}_{i}v_{ext}(\mathbf{r}_{i})\right.\\\
&\left.+[\nabla_{i}(\mathbfcal{E}_{0}\cdot\nabla_{i}v_{ext}(\mathbf{r}_{i})]\cdot\nabla_{i}\right\\},\end{split}$
(33)
$\displaystyle[[\hat{H}_{0},[\hat{H}_{0},[\hat{H}_{0},\mathbfcal{E}_{0}\cdot\mathbf{r}]]],\sum\limits_{i=1}^{N}\mathbfcal{E}_{0}\cdot\mathbf{r}_{i}]=-\sum\limits_{i=1}^{N}(\mathbfcal{E}_{0}\cdot\nabla_{i})^{2}v_{ext}(\mathbf{r}_{i}),$
(34)
and noting that in Eq. (24) the sum of the 4th, 5th, and 6th terms on the RHS
evaluates to zero, as can be directly verified, we immediately arrive at
Eq. (5) with $n=4$.
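The first of the commutator relations, Eq. (31), can be verified on a discretized one-particle Hamiltonian: with central differences, $[\hat{H}_{0},x]=-\partial/\partial x$ holds exactly, row by row, for the banded matrices, since any local potential commutes with $x$. A one-dimensional sketch (the harmonic potential is an arbitrary choice):

```python
import numpy as np

n, h = 200, 0.05
x = (np.arange(n) - n // 2) * h
D2 = (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n)
      + np.diag(np.ones(n - 1), -1)) / h ** 2            # central 2nd derivative
D1 = (np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / (2 * h)           # central 1st derivative

v = 0.5 * x ** 2                       # any local potential (harmonic, arbitrary)
H0 = -0.5 * D2 + np.diag(v)
X = np.diag(x)

err = np.max(np.abs((H0 @ X - X @ H0) + D1))   # [H0, x] + d/dx should vanish
print(err)                                     # zero up to rounding
```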
### A.3 Density oscillations and energy absorbed
The time-dependent density is given by
$\begin{split}&n(\mathbf{r},t>T)=\langle\Psi(t)|\hat{n}(\mathbf{r})|\Psi(t)\rangle=\sum\limits_{\alpha\beta}\langle\Psi_{\alpha}|\hat{n}(\mathbf{r})|\Psi_{\beta}\rangle\langle\Psi(t)|\Psi_{\alpha}\rangle\langle\Psi_{\beta}|\Psi(t)\rangle=\langle\Psi_{0}|\hat{n}(\mathbf{r})|\Psi_{0}\rangle|\langle\Psi_{0}|\Psi(t)\rangle|^{2}\\\
&+2\,{\rm Re}\,\sum\limits_{\alpha\neq
0}\langle\Psi_{0}|\hat{n}(\mathbf{r})|\Psi_{\alpha}\rangle\langle\Psi_{\alpha}|\Psi(t)\rangle\langle\Psi(t)|\Psi_{0}\rangle+\sum\limits_{\alpha,\beta\neq
0}\langle\Psi_{\alpha}|\hat{n}(\mathbf{r})|\Psi_{\beta}\rangle\langle\Psi(t)|\Psi_{\alpha}\rangle\langle\Psi_{\beta}|\Psi(t)\rangle=|\langle\Psi_{0}|\hat{n}(\mathbf{r})|\Psi_{0}\rangle|^{2}\\\
&-\langle\Psi_{0}|\hat{n}(\mathbf{r})|\Psi_{0}\rangle\sum\limits_{\alpha\neq
0}|\langle\Psi_{\alpha}|\Psi(t)\rangle|^{2}+2\,{\rm
Re}\,\sum\limits_{\alpha\neq
0}\langle\Psi_{\alpha}|\hat{n}(\mathbf{r})|\Psi_{0}\rangle\langle\Psi_{\alpha}|\Psi(t)\rangle\langle\Psi(t)|\Psi_{0}\rangle+\sum\limits_{\alpha,\beta\neq
0}\langle\Psi_{\alpha}|\hat{n}(\mathbf{r})|\Psi_{\beta}\rangle\langle\Psi(t)|\Psi_{\alpha}\rangle\langle\Psi_{\beta}|\Psi(t)\rangle,\end{split}$
(36)
where the last equality is due to the normalization of $\Psi(t)$. Therefore,
$\begin{split}\delta
n(\mathbf{r},t>T)&=-\langle\Psi_{0}|\hat{n}(\mathbf{r})|\Psi_{0}\rangle\\!\sum\limits_{\alpha\neq
0}|\langle\Psi_{\alpha}|\Psi(t)\rangle|^{2}+2\,{\rm
Re}\\!\sum\limits_{\alpha\neq
0}\langle\Psi_{0}|\hat{n}(\mathbf{r})|\Psi_{\alpha}\rangle\langle\Psi_{\alpha}|\Psi(t)\rangle\langle\Psi(t)|\Psi_{0}\rangle\\\
&+\sum\limits_{\alpha,\beta\neq
0}\langle\Psi_{\alpha}|\hat{n}(\mathbf{r})|\Psi_{\beta}\rangle\langle\Psi(t)|\Psi_{\alpha}\rangle\langle\Psi_{\beta}|\Psi(t)\rangle.\end{split}$
(37)
With account of Eq. (5), we conclude that the leading term in
$\omega_{0}^{-1}$ on the RHS of Eq. (37) is the second one, in which, for the
same reason, $\langle\Psi(t)|\Psi_{0}\rangle=e^{iE_{0}t}$ must be set. Then
$\begin{split}&\delta n(\mathbf{r},t>T)=2\,{\rm
Re}\,e^{iE_{0}t}\sum\limits_{\alpha\neq
0}\langle\Psi_{0}|\hat{n}(\mathbf{r})|\Psi_{\alpha}\rangle\langle\Psi_{\alpha}|\Psi(t)\rangle.\end{split}$
(38)
Combining Eqs. (5) and (38), we have
$\begin{split}&\delta n(\mathbf{r},t>T)=\frac{\pi}{\omega_{0}^{n}}{\rm
Re}\frac{1}{i}\\!\sum\limits_{\alpha\neq
0}\langle\Psi_{0}|\hat{n}(\mathbf{r})|\Psi_{\alpha}\rangle\widetilde{C^{2}}(E_{\alpha}-E_{0})\\\
&\times
e^{i(E_{0}-E_{\alpha})t}\int\langle\Psi_{\alpha}|\hat{n}(\mathbf{r}^{\prime})|\Psi_{0}\rangle
F_{n}(\mathbf{r}^{\prime})d\mathbf{r}^{\prime},\end{split}$ (39)
or
$\begin{split}&\delta n(\mathbf{r},t>T)=\frac{\pi}{\omega_{0}^{n}}{\rm
Re}\frac{1}{i}\\!\int e^{-i\omega
t}\widetilde{C^{2}}(\omega)\sum\limits_{\alpha\neq
0}\langle\Psi_{0}|\hat{n}(\mathbf{r})|\Psi_{\alpha}\rangle\\\
&\times\langle\Psi_{\alpha}|\hat{n}(\mathbf{r}^{\prime})|\Psi_{0}\rangle
F_{n}(\mathbf{r}^{\prime})\delta(\omega-E_{\alpha}+E_{0})d\omega
d\mathbf{r}^{\prime},\end{split}$ (40)
Recalling the spectral representation of the many-body interacting density
response function
$\begin{split}\chi(\mathbf{r},\mathbf{r}^{\prime},\omega)&=\sum\limits_{\alpha\neq
0}\left[\frac{\langle\Psi_{\alpha}|\hat{n}(\mathbf{r}^{\prime})|\Psi_{0}\rangle\langle\Psi_{0}|\hat{n}(\mathbf{r})|\Psi_{\alpha}\rangle}{E_{0}-E_{\alpha}+\omega+i\eta}\right.\\\
&\left.+\frac{\langle\Psi_{\alpha}|\hat{n}(\mathbf{r})|\Psi_{0}\rangle\langle\Psi_{0}|\hat{n}(\mathbf{r}^{\prime})|\Psi_{\alpha}\rangle}{E_{0}-E_{\alpha}-\omega-i\eta}\right],\end{split}$
(41)
where $\eta$ is a positive infinitesimal, we can rewrite Eq. (40) as
$\begin{split}&\delta n(\mathbf{r},t>T)=\frac{\pi}{\omega_{0}^{n}}{\rm
Re}\frac{1}{i}\\!\int e^{-i\omega t}\widetilde{C^{2}}(\omega)\\\ &\times{\rm
Im}\,\chi(\mathbf{r},\mathbf{r}^{\prime},\omega)F_{n}(\mathbf{r}^{\prime})d\omega
d\mathbf{r}^{\prime}.\end{split}$ (42)
Finally, the separation of the real part on the RHS of Eq. (42) can be
dropped, since the remaining expression is already real (see footnote 6).
For the total energy absorbed by the system from the pulse, we can write
$\Delta
E=\sum\limits_{\alpha}E_{\alpha}|\langle\Psi_{\alpha}|\Psi(t>T)\rangle|^{2}-E_{0},$
(43)
which, with the use of the completeness of the basis set, can be rewritten as
$\Delta E=\sum\limits_{\alpha\neq
0}(E_{\alpha}-E_{0})|\langle\Psi_{\alpha}|\Psi(t>T)\rangle|^{2},$ (44)
and then, by Eq. (5), finally written in the form of Eq. (11).
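The equivalence of Eqs. (43) and (44) rests only on the normalization $\sum_{\alpha}|\langle\Psi_{\alpha}|\Psi(t>T)\rangle|^{2}=1$; a toy numerical check with a random spectrum and random overlaps:

```python
import numpy as np

rng = np.random.default_rng(1)
E = np.sort(rng.uniform(-0.5, 0.0, 8))       # toy spectrum, E[0] = ground state
c = rng.standard_normal(8) + 1j * rng.standard_normal(8)
c /= np.linalg.norm(c)                        # normalized overlaps <Psi_a|Psi>

dE_43 = np.sum(E * np.abs(c) ** 2) - E[0]                 # Eq. (43)
dE_44 = np.sum((E[1:] - E[0]) * np.abs(c[1:]) ** 2)       # Eq. (44)
print(abs(dE_43 - dE_44))                    # agreement to rounding
```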
## Appendix B Derivation in the Kramers-Henneberger’s acceleration frame
For an arbitrary $\mathbf{u}(t)$, if a function
$\Psi_{KH}(\\{\mathbf{r}\\},t)$ satisfies the equation
$\begin{split}i\frac{\partial\Psi_{KH}(\\{\mathbf{r}\\},t)}{\partial
t}=\left\\{-\frac{1}{2}\sum\limits_{i=1}^{N}\nabla_{i}^{2}+\frac{1}{2}\sum\limits_{i\neq
j}^{N}\frac{1}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}\right.\\\
\left.+\sum\limits_{i=1}^{N}v_{ext}[\mathbf{r}_{i}+\mathbf{u}(t)]\right\\}\Psi_{KH}(\\{\mathbf{r}\\},t),\end{split}$
(45)
then the function
$\Psi(\\{\mathbf{r}\\},t)=e^{i\theta(\\{\mathbf{r}\\},t)}\Psi_{KH}[\\{\mathbf{r}-\mathbf{u}(t)\\},t],$
(46)
where
$\theta(\\{\mathbf{r}\\},t)=\sum\limits_{i=1}^{N}\mathbf{u}^{\prime}(t)\cdot\mathbf{r}_{i},$
(47)
satisfies the equation
$\begin{split}i\frac{\partial\Psi(\\{\mathbf{r}\\},t)}{\partial
t}=\left\\{-\frac{1}{2}\sum\limits_{i=1}^{N}\nabla_{i}^{2}+\frac{1}{2}\sum\limits_{i\neq
j}^{N}\frac{1}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}\right.\\\
\left.+\sum\limits_{i=1}^{N}v_{ext}(\mathbf{r}_{i})-\sum\limits_{i=1}^{N}\mathbf{u}^{\prime\prime}(t)\cdot\mathbf{r}_{i}\right\\}\Psi(\\{\mathbf{r}\\},t).\\\
\ \end{split}$ (48)
Choosing
$\begin{split}&\mathbf{u}(t)=-\frac{\mathbfcal{E}_{0}}{\omega_{0}^{2}}[C(t)\cos\omega_{0}t+\\\
&\int\limits_{-\infty}^{t}[(t-t^{\prime})C^{\prime\prime}(t^{\prime})-2C^{\prime}(t^{\prime})]\cos\omega_{0}t^{\prime}dt^{\prime}]\end{split}$
(49)
and noting that
$\mathbf{u}^{\prime\prime}(t)=\mathbfcal{E}_{0}C(t)\cos\omega_{0}t$, we turn
Eq. (48) into Eq. (1) in the case of the dipole applied potential.
Expanding Eq. (45) up to order $\omega_{0}^{-4}$ and using Eq. (49), we have
$\begin{split}i\frac{\partial\Psi_{KH}(\\{\mathbf{r}\\},t)}{\partial
t}=\hat{H}_{0}\Psi_{KH}(\\{\mathbf{r}\\},t)&-\frac{C(t)\cos\omega_{0}t}{\omega_{0}^{2}}\sum\limits_{i=1}^{N}[(\mathbfcal{E}_{0}\cdot\nabla_{i})v_{ext}(\mathbf{r}_{i})]\Psi_{KH}(\\{\mathbf{r}\\},t)\\\
&+\frac{C^{2}(t)\cos^{2}\omega_{0}t}{2\omega_{0}^{4}}\sum\limits_{i=1}^{N}[(\mathbfcal{E}_{0}\cdot\nabla_{i})^{2}v_{ext}(\mathbf{r}_{i})]\Psi_{KH}(\\{\mathbf{r}\\},t),\end{split}$
(50)
which in the interaction picture is written as
$\begin{split}i\frac{\partial\tilde{\Psi}_{KH}(\\{\mathbf{r}\\},t)}{\partial
t}=-\frac{C(t)\cos\omega_{0}t}{\omega_{0}^{2}}\sum\limits_{i=1}^{N}e^{i\hat{H}_{0}t}[(\mathbfcal{E}_{0}\cdot\nabla_{i})v_{ext}(\mathbf{r}_{i})]e^{-i\hat{H}_{0}t}\tilde{\Psi}_{KH}(\\{\mathbf{r}\\},t)\\\
+\frac{C^{2}(t)\cos^{2}\omega_{0}t}{2\omega_{0}^{4}}\sum\limits_{i=1}^{N}e^{i\hat{H}_{0}t}[(\mathbfcal{E}_{0}\cdot\nabla_{i})^{2}v_{ext}(\mathbf{r}_{i})]e^{-i\hat{H}_{0}t}\Psi_{0}(\\{\mathbf{r}\\}),\end{split}$
(51)
and, therefore,
$\begin{split}\tilde{\Psi}_{KH}(\\{\mathbf{r}\\},+\infty)=\Psi_{0}(\\{\mathbf{r}\\})-\frac{1}{i\omega_{0}^{2}}\int\limits_{-\infty}^{\infty}e^{i\hat{H}_{0}t^{\prime}}\sum\limits_{i=1}^{N}[(\mathbfcal{E}_{0}\cdot\nabla_{i})v_{ext}(\mathbf{r}_{i})]e^{-i\hat{H}_{0}t^{\prime}}\tilde{\Psi}_{KH}(\\{\mathbf{r}\\},t^{\prime})C(t^{\prime})\cos\omega_{0}t^{\prime}dt^{\prime}\\\
+\frac{1}{2i\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}e^{i\hat{H}_{0}t^{\prime}}\sum\limits_{i=1}^{N}[(\mathbfcal{E}_{0}\cdot\nabla_{i})^{2}v_{ext}(\mathbf{r}_{i})]e^{-i\hat{H}_{0}t^{\prime}}\Psi_{0}(\\{\mathbf{r}\\})C^{2}(t^{\prime})\cos^{2}\omega_{0}t^{\prime}dt^{\prime}.\end{split}$
(52)
In the last terms on the RHS of Eqs. (51) and (52) we have replaced
$\tilde{\Psi}_{KH}(\\{\mathbf{r}\\},t^{\prime})$ with
$\Psi_{0}(\\{\mathbf{r}\\})$, which is consistent with keeping the terms up
to $\omega_{0}^{-4}$ only. We note that, after the end of the pulse, according
to Eqs. (46) and (49),
$\Psi_{KH}(\\{\mathbf{r}\\},t)=\Psi(\\{\mathbf{r}\\},t)$. The third term
on the RHS of Eq. (52) then immediately reproduces Eq. (5). To prove that the
contribution of the second term vanishes up to $\omega_{0}^{-4}$, it is
sufficient to integrate it by parts twice and use Eq. (51).
## Appendix C Particulars of the solution of the TD Schrödinger equation for
the hydrogenic ion
In Eq. (22), we expand $\tilde{\Psi}(\mathbf{r},t)$ as
$\tilde{\Psi}(\mathbf{r},t)=\sum\limits_{l=0}^{l_{max}}\sum\limits_{n=0}^{n_{max}}a_{n,l}(t)F_{n}(r)Y_{l0}(\theta,\phi),$
(53)
where
$\displaystyle F_{n}(r)=\lambda^{3/2}f_{n}(\lambda r),$ (54) $\displaystyle
f_{n}(x)=\sqrt{\frac{n!}{\Gamma(n+\alpha+1)}}x^{\alpha/2-1}e^{-x/2}L^{(\alpha)}_{n}(x),$
(55)
$L^{(\alpha)}_{n}(x)$ are the generalized Laguerre polynomials, and $\alpha$
and $\lambda$ are positive parameters. The basis set in Eq. (53) is
orthonormal and complete for any $\alpha$ and $\lambda$. Although we have
used $\alpha=2$ and $\lambda=1$, the convergence of the method has been
verified by comparing results with those obtained with other values of these
parameters.
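The orthonormality of the basis in Eq. (53) under the radial measure $r^{2}dr$ can be checked numerically. The following sketch (an illustration only, not the authors' code; it assumes NumPy/SciPy) evaluates the overlaps $\int_{0}^{\infty}f_{n}(x)f_{m}(x)\,x^{2}dx$:

```python
import numpy as np
from scipy.special import genlaguerre, gammaln
from scipy.integrate import quad

def f(n, alpha, x):
    """f_n(x) = sqrt(n!/Gamma(n+alpha+1)) x^(alpha/2-1) e^(-x/2) L_n^(alpha)(x)."""
    norm = np.exp(0.5 * (gammaln(n + 1) - gammaln(n + alpha + 1)))
    return norm * x ** (alpha / 2 - 1) * np.exp(-x / 2) * genlaguerre(n, alpha)(x)

def overlap(n, m, alpha=2.0):
    # Inner product with the 3D radial measure x^2 dx
    val, _ = quad(lambda x: f(n, alpha, x) * f(m, alpha, x) * x ** 2, 0, np.inf)
    return val
```

For $\alpha=2$ the prefactor $x^{\alpha/2-1}$ reduces to unity, and the overlaps come out as $\delta_{nm}$ by the standard orthogonality of the generalized Laguerre polynomials with weight $x^{\alpha}e^{-x}$.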
Matrix elements of the unperturbed Hamiltonian $\hat{H}_{0}$ and of the time-
dependent part $\hat{W}(t)$ were obtained exactly with the use of the
recurrence relations for the generalized Laguerre polynomials. [29] The
problem was thus reduced to the time propagation of a system of linear
ordinary differential equations for $a_{n,l}(t)$, which was carried out by
means of the Magnus expansion. [30]
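The lowest-order Magnus propagator amounts to exponentiating an approximately time-averaged Hamiltonian over each step. A minimal sketch of such an exponential-midpoint step, assuming a matrix representation of the Hamiltonian in the basis of Eq. (53) (the function names below are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.linalg import expm

def magnus2_step(a, H_of_t, t, dt):
    """One exponential-midpoint step (lowest-order Magnus expansion)
    for i da/dt = H(t) a; exactly unitary for Hermitian H."""
    H_mid = H_of_t(t + dt / 2)  # midpoint value approximates the step average
    return expm(-1j * dt * H_mid) @ a

# Example: a constant 2-level Hamiltonian, for which the step is exact
H0 = np.array([[0.5, 0.2], [0.2, -0.5]])
a = np.array([1.0 + 0j, 0.0])
for k in range(100):
    a = magnus2_step(a, lambda t: H0, k * 0.05, 0.05)
```

Unlike Runge-Kutta schemes, every Magnus step is exactly unitary for a Hermitian Hamiltonian, so the norm of the coefficient vector $a_{n,l}(t)$ is conserved by construction, up to round-off.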
For the hydrogen atom, the Schrödinger equation (1) reads
$i\frac{\partial\Psi(\mathbf{r},t)}{\partial
t}=\left[-\frac{1}{2}\nabla^{2}-\frac{1}{r}+(\cos\omega_{0}t)\hat{W}(\mathbf{r},t)\right]\Psi(\mathbf{r},t).$
(56)
By scaling the variables $\mathbf{r}^{\prime}=Z\mathbf{r}$,
$t^{\prime}=Z^{2}t$, we see that
$\Psi_{Z}(\mathbf{r},t)=Z^{3/2}\Psi(Z\mathbf{r},Z^{2}t)$ is the solution to
the complementary problem for the hydrogenic atom of nuclear charge $Z$
$\begin{split}i\frac{\partial\Psi_{Z}(\mathbf{r},t)}{\partial
t}&=\left\\{-\frac{1}{2}\nabla^{2}-\frac{Z}{r}\right.\\\
&\left.+Z^{2}\cos(Z^{2}\omega_{0}t)\,\hat{W}(Z\mathbf{r},Z^{2}t)\right\\}\Psi_{Z}(\mathbf{r},t).\end{split}$
(57)
From Eq. (57) we conclude that the frequency $\omega_{0}$ scales as
$\omega_{0}\to Z^{2}\omega_{0}$, which explains the faster convergence of the
solution to its $\omega_{0}\to\infty$ limit observed in Fig. 3 for $Z<1$.
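The $Z^{3/2}$ prefactor in the scaling relation is what preserves normalization under $\mathbf{r}\to Z\mathbf{r}$. A quick numerical check on the hydrogenic $1s$ radial function (illustrative only, assuming NumPy/SciPy):

```python
import numpy as np
from scipy.integrate import quad

# Hydrogen 1s radial function R_10(r) = 2 exp(-r) in atomic units (Z = 1)
def R1(r):
    return 2.0 * np.exp(-r)

def R_Z(r, Z):
    # Scaled orbital Z^{3/2} R_10(Z r): the hydrogenic 1s of nuclear charge Z
    return Z ** 1.5 * R1(Z * r)

Z = 0.5
norm, _ = quad(lambda r: R_Z(r, Z) ** 2 * r ** 2, 0, np.inf)
# The Z^{3/2} prefactor keeps the radial norm equal to 1 for any Z
```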
## Appendix D TDDFT calculation of ethylene spectrum
The energy absorption spectra were calculated using the time-dependent
local-density approximation, performed in real time on a real-space grid. We
used Troullier-Martins norm-conserving pseudopotentials [31] and the
reciprocal-space-based method for treating long-range interactions. [32] The
C-C axis of the molecule coincides with the z-axis, and the four hydrogen
atoms lie in the y-z plane. The atomic distances were determined by a
local-density-approximation energy minimization. The time propagation used
the fourth-order Runge-Kutta scheme with a time step of 0.05 atomic time
units.
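For reference, the classic fourth-order Runge-Kutta step for a Schrödinger-type equation $d\psi/dt=-iH(t)\psi$ takes the following form (a generic sketch, not the actual real-space TDDFT code):

```python
import numpy as np

def rk4_step(psi, apply_H, t, dt):
    """One classic fourth-order Runge-Kutta step for dpsi/dt = -i H(t) psi.
    apply_H(t, psi) must return the action of H(t) on psi."""
    f = lambda s, y: -1j * apply_H(s, y)
    k1 = f(t, psi)
    k2 = f(t + dt / 2, psi + dt / 2 * k1)
    k3 = f(t + dt / 2, psi + dt / 2 * k2)
    k4 = f(t + dt, psi + dt * k3)
    return psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on a one-level system H = E, where the exact solution
# is psi(t) = exp(-i E t) psi(0):
E, dt = 0.3, 0.05
psi = np.array([1.0 + 0j])
for k in range(200):
    psi = rk4_step(psi, lambda s, y: E * y, k * dt, dt)
```

RK4 is not exactly unitary, but with a small step such as 0.05 a.u. the norm drift stays negligible over the propagation.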
## References
* Mukamel [1995] S. Mukamel, _Principles of Nonlinear Optical Spectroscopy_ (Oxford University Press, New York, 1995).
* Roke and Gonella [2012] S. Roke and G. Gonella, “Nonlinear Light Scattering and Spectroscopy of Particles and Droplets in Liquids,” Annual Review of Physical Chemistry 63, 353–378 (2012).
* Johansson, Schmüser, and Castner [2018] P. K. Johansson, L. Schmüser, and D. G. Castner, “Nonlinear Optical Methods for Characterization of Molecular Structure and Surface Chemistry,” Topics in Catalysis 61, 1101–1124 (2018).
* Mukamel, Cohen, and Harbola [2006] S. Mukamel, A. Cohen, and U. Harbola, “Intermolecular forces and generalized response functions in liouville space,” in _Time-Dependent Density Functional Theory_, edited by M. A. Marques, C. A. Ullrich, F. Nogueira, A. Rubio, K. Burke, and E. K. U. Gross (Springer Berlin Heidelberg, Berlin, Heidelberg, 2006) pp. 107–120.
* Gross and Kohn [1985] E. K. U. Gross and W. Kohn, “Local density-functional theory of frequency-dependent linear response,” Phys. Rev. Lett. 55, 2850–2852 (1985).
* Note [1] The RHS of Eq. (10) is easily seen to be real, notwithstanding the presence of the imaginary unit in the denominator; this is due to the oddness of ${\rm Im}\,\chi(\mathbf{r},\mathbf{r}^{\prime},\omega)$ in $\omega$ and to the fact that $\widetilde{C^{2}}^{*}(-\omega)=\widetilde{C^{2}}(\omega)$, the latter following from the realness of $C(t)$.
* Landau and Lifshitz [1971] L. D. Landau and E. M. Lifshitz, _The classical theory of fields_ , 3rd ed., Course of theoretical physics, Vol. II (Pergamon Press, New York and London, 1971).
* Note [2] The potential $\phi_{ext}(\mathbf{r},t)=Z/|\mathbf{r}-\mathbf{R}(t)|$ of an ion of charge $Z$ moving along the trajectory $\mathbf{R}(t)$ corresponds to a non-uniform externally applied field, except for $|\mathbf{r}-\mathbf{R}(t)|\gg|\mathbf{r}|$.
* Nazarov _et al._ [2005] V. U. Nazarov, J. M. Pitarke, C. S. Kim, and Y. Takada, “Time-dependent density-functional theory for the stopping power of an interacting electron gas for slow ions,” Phys. Rev. B 71, 121106(R) (2005).
* Nazarov _et al._ [2007] V. U. Nazarov, J. M. Pitarke, Y. Takada, G. Vignale, and Y.-C. Chang, “Including nonlocality in the exchange-correlation kernel from time-dependent current density functional theory: Application to the stopping power of electron liquids,” Phys. Rev. B 76, 205103 (2007).
* Kramers [1956] H. A. Kramers, _Collected Scientific Papers_ (North Holland, Amsterdam, 1956).
* Henneberger [1968] W. C. Henneberger, “Perturbation method for atoms in intense light beams,” Phys. Rev. Lett. 21, 838–841 (1968).
* Eberly and Kulander [1993] J. H. Eberly and K. C. Kulander, “Atomic stabilization by super-intense lasers,” Science 262, 1229–1233 (1993).
* Barash, Orel, and Baer [1999] D. Barash, A. E. Orel, and R. Baer, “Laser-induced resonance states as dynamic suppressors of ionization in high-frequency short pulses,” Phys. Rev. A 61, 013402 (1999).
* Vorobeichik and Moiseyev [1999] I. Vorobeichik and N. Moiseyev, “Tunneling control by high-frequency driving,” Phys. Rev. A 59, 2511–2514 (1999).
* Baer [2009] R. Baer, “Prevalence of the adiabatic exchange-correlation potential approximation in time-dependent density functional theory,” Journal of Molecular Structure: THEOCHEM 914, 19–21 (2009).
* Eckardt and Anisimovas [2015] A. Eckardt and E. Anisimovas, “High-frequency approximation for periodically driven quantum systems from a floquet-space perspective,” New Journal of Physics 17, 093039 (2015).
* Ben-Asher _et al._ [2020] A. Ben-Asher, D. Šimsa, T. Uhlířová, M. Šindelka, and N. Moiseyev, “Laser control of resonance tunneling via an exceptional point,” Phys. Rev. Lett. 124, 253202 (2020).
* Note [3] While the potential of Eq. (12) is a purely model one, we note that the regime of Eq. (14) can be realized by superimposing two laser beams.
* Landau and Lifshitz [1981] L. D. Landau and E. M. Lifshitz, _Quantum Mechanics Non-Relativistic Theory_ , 3rd ed., Vol. III (Butterworth-Heinemann, London, 1981).
* Note [4] Apart from the experimental unachievability of the upper values of $\omega_{0}$ in Fig. 2, at those frequencies results become unphysical because of the retardation effects, as discussed in Sec. II. The rationale for our including these high frequencies is to confirm that, although slow, the convergence takes place nonetheless.
* Nazarov _et al._ [2010] V. U. Nazarov, I. V. Tokatly, S. Pittalis, and G. Vignale, “Antiadiabatic limit of the exchange-correlation kernels of an inhomogeneous electron gas,” Phys. Rev. B 81, 245101 (2010).
* Kohn and Sham [1965] W. Kohn and L. J. Sham, “Self-consistent equations including exchange and correlation effects,” Phys. Rev. 140, A1133–A1138 (1965).
* Tsuei _et al._ [1990] K.-D. Tsuei, E. W. Plummer, A. Liebsch, K. Kempa, and P. Bakshi, “Multipole plasmon modes at a metal surface,” Phys. Rev. Lett. 64, 44–47 (1990).
* Liebsch [1997] A. Liebsch, _Electronic excitations at metal surfaces_ (Plenum, New-York, 1997).
* Nazarov [1999] V. U. Nazarov, “Multipole surface-plasmon-excitation enhancement in metals,” Phys. Rev. B 59, 9866–9869 (1999).
* Bohren and Huffman [1998] C. F. Bohren and D. R. Huffman, _Absorption and Scattering of Light by Small Particles_ (John Wiley & Sons, Inc., New York, 1998).
* Note [5] Arriving at the final concise Eqs. (5)-(11) required very lengthy derivations. To rule out the possibility of error, we have repeated the derivation several times. We have also verified the results by an independent method using the Kramers-Henneberger acceleration frame (Appendix B). Additionally, after the manual derivation, we composed computer algebra code (in Mathematica) for the consecutive integrations by parts in Eq. (23), which produced exactly the same results.
* Abramowitz and Stegun [1972] M. Abramowitz and I. A. Stegun, eds., _Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables_ , tenth printing ed. (U.S. Government Printing Office, Washington, DC, USA, 1972).
* Magnus [1954] W. Magnus, “On the exponential solution of differential equations for a linear operator,” Communications on Pure and Applied Mathematics 7, 649–673 (1954).
* Troullier and Martins [1991] N. Troullier and J. L. Martins, “Efficient Pseudopotentials for Plane-Wave Calculations,” Phys. Rev. B 43, 1993–2006 (1991).
* Martyna and Tuckerman [1999] G. J. Martyna and M. E. Tuckerman, “A reciprocal space based method for treating long range interactions in ab initio and force-field-based calculations in clusters,” J. Chem. Phys. 110, 2810–2821 (1999).
## Supplementary Material
to the article “High-frequency limit of spectroscopy” by Vladimir U. Nazarov
and Roi Baer
### Derivation of Eq. (24).
From Eq. (23), by integration by parts, we can write
$\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}(t^{\prime})\tilde{\Psi}(t^{\prime})d\sin\omega_{0}t^{\prime}=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t-\frac{1}{i\omega_{0}}\int\limits_{-\infty}^{t}\frac{\partial}{\partial
t^{\prime}}\left[\tilde{\hat{W}}(t^{\prime})\tilde{\Psi}(t^{\prime})\right]\sin\omega_{0}t^{\prime}dt^{\prime},$
(S.58)
or
$\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t-\frac{1}{i\omega_{0}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\sin\omega_{0}t^{\prime}dt^{\prime}-\frac{1}{i\omega_{0}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}(t^{\prime})\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\Psi}(t^{\prime})\right]\sin\omega_{0}t^{\prime}dt^{\prime},$
(S.59)
and, with the use of Eq. (22),
$\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t-\frac{1}{i\omega_{0}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\sin\omega_{0}t^{\prime}dt^{\prime}+\frac{1}{2\omega_{0}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}^{2}(t^{\prime})\tilde{\Psi}(t^{\prime})\sin
2\omega_{0}t^{\prime}dt^{\prime}.$ (S.60)
Continuing in the same way,
$\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{i\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})d\cos\omega_{0}t^{\prime}-\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}^{2}(t^{\prime})\tilde{\Psi}(t^{\prime})d\cos
2\omega_{0}t^{\prime},$ (S.61)
$\begin{split}\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{i\omega_{0}^{2}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\cos\omega_{0}t-\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\cos
2\omega_{0}t\\\
-\frac{1}{i\omega_{0}^{2}}\int\limits_{-\infty}^{t}\frac{\partial}{\partial
t^{\prime}}\left\\{\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\right\\}\cos\omega_{0}t^{\prime}dt^{\prime}+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\frac{\partial}{\partial
t^{\prime}}\left\\{\tilde{\hat{W}}^{2}(t^{\prime})\tilde{\Psi}(t^{\prime})\right\\}\cos
2\omega_{0}t^{\prime}dt^{\prime},\end{split}$ (S.62)
$\begin{split}\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{i\omega_{0}^{2}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\cos\omega_{0}t-\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\cos
2\omega_{0}t\\\
-\frac{1}{i\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left\\{\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\right\\}\cos\omega_{0}t^{\prime}dt^{\prime}-\frac{1}{i\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime})\right]\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\Psi}(t^{\prime})\right]\cos\omega_{0}t^{\prime}dt^{\prime}\\\
+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\cos
2\omega_{0}t^{\prime}dt^{\prime}+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}^{2}(t^{\prime})\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\Psi}(t^{\prime})\right]\cos
2\omega_{0}t^{\prime}dt^{\prime},\end{split}$ (S.63)
$\begin{split}&\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{i\omega_{0}^{2}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\cos\omega_{0}t-\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\cos
2\omega_{0}t\\\
&-\frac{1}{i\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\cos\omega_{0}t^{\prime}dt^{\prime}+\frac{1}{\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\hat{W}}(t^{\prime})\tilde{\Psi}(t^{\prime})\cos^{2}\omega_{0}t^{\prime}dt^{\prime}\\\
&+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\cos
2\omega_{0}t^{\prime}dt^{\prime}+\frac{1}{4i\omega_{0}^{2}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}^{3}(t^{\prime})\tilde{\Psi}(t^{\prime})\cos\omega_{0}t^{\prime}\cos
2\omega_{0}t^{\prime}dt^{\prime}.\end{split}$ (S.64)
Since
$\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\hat{W}}(t)=\frac{1}{2}\frac{\partial}{\partial
t}\tilde{\hat{W}}^{2}(t)+\frac{1}{2}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right],$ (S.65)
we can rewrite Eq. (S.64) as
$\begin{split}&\tilde{\Psi}(t)\\!=\\!\Psi_{0}\\!+\\!\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t\\!+\\!\frac{1}{i\omega_{0}^{2}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\cos\omega_{0}t\\!-\\!\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\cos
2\omega_{0}t\\!-\\!\frac{1}{i\omega_{0}^{2}}\\!\int\limits_{-\infty}^{t}\\!\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}(t^{\prime})\right]\\!\tilde{\Psi}(t^{\prime})\cos\omega_{0}t^{\prime}dt^{\prime}\\\
&+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})dt^{\prime}+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})dt^{\prime}+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\cos
2\omega_{0}t^{\prime}dt^{\prime}\\\
&+\frac{1}{2\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\cos
2\omega_{0}t^{\prime}dt^{\prime}+\frac{1}{4i\omega_{0}^{2}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}^{3}(t^{\prime})\tilde{\Psi}(t^{\prime})\cos\omega_{0}t^{\prime}\cos
2\omega_{0}t^{\prime}dt^{\prime}.\end{split}$ (S.66)
Furthermore,
$\begin{split}&\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{i\omega_{0}^{2}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\cos\omega_{0}t-\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\cos
2\omega_{0}t+\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\\\
&-\frac{1}{i\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\cos\omega_{0}t^{\prime}dt^{\prime}-\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}^{2}(t^{\prime})\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\Psi}(t^{\prime})\right]dt^{\prime}\\\
&+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})dt^{\prime}+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\cos
2\omega_{0}t^{\prime}dt^{\prime}\\\
&+\frac{1}{2\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\cos
2\omega_{0}t^{\prime}dt^{\prime}+\frac{1}{8i\omega_{0}^{2}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}^{3}(t^{\prime})\tilde{\Psi}(t^{\prime})(\cos\omega_{0}t^{\prime}+\cos
3\omega_{0}t^{\prime})dt^{\prime},\end{split}$ (S.67)
$\begin{split}&\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{i\omega_{0}^{2}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\cos\omega_{0}t-\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\cos
2\omega_{0}t+\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\\\
&-\frac{1}{i\omega_{0}^{2}}\\!\int\limits_{-\infty}^{t}\\!\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}(t^{\prime})\right]\\!\tilde{\Psi}(t^{\prime})\cos\omega_{0}t^{\prime}dt^{\prime}\\!+\\!\frac{1}{4\omega_{0}^{2}}\\!\int\limits_{-\infty}^{t}\\!\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\\!\tilde{\Psi}(t^{\prime})dt^{\prime}\\!+\\!\frac{1}{4\omega_{0}^{2}}\\!\int\limits_{-\infty}^{t}\\!\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\\!\tilde{\Psi}(t^{\prime})\cos
2\omega_{0}t^{\prime}dt^{\prime}\\\
&+\frac{1}{2\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\cos
2\omega_{0}t^{\prime}dt^{\prime}+\frac{1}{8i\omega_{0}^{2}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}^{3}(t^{\prime})\tilde{\Psi}(t^{\prime})(\cos
3\omega_{0}t^{\prime}-\cos\omega_{0}t^{\prime})dt^{\prime},\end{split}$ (S.68)
$\begin{split}&\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{i\omega_{0}^{2}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\cos\omega_{0}t-\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\cos
2\omega_{0}t+\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\\\
&-\frac{1}{i\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})d\sin\omega_{0}t^{\prime}+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})dt^{\prime}+\frac{1}{8\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})d\sin
2\omega_{0}t^{\prime}\\\
&+\frac{1}{4\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})d\sin
2\omega_{0}t^{\prime}+\frac{1}{8i\omega_{0}^{3}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}^{3}(t^{\prime})\tilde{\Psi}(t^{\prime})(\frac{1}{3}d\sin
3\omega_{0}t^{\prime}-d\sin\omega_{0}t^{\prime}),\end{split}$ (S.69)
$\begin{split}&\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{i\omega_{0}^{2}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\cos\omega_{0}t-\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\cos
2\omega_{0}t+\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\\\
&-\frac{1}{i\omega_{0}^{3}}\left[\frac{\partial^{2}}{\partial
t^{2}}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{8\omega_{0}^{3}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\sin
2\omega_{0}t\\\ &+\frac{1}{4\omega_{0}^{3}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}^{2}(t)\right]\tilde{\Psi}(t)\sin
2\omega_{0}t+\frac{1}{8i\omega_{0}^{3}}\tilde{\hat{W}}^{3}(t)\tilde{\Psi}(t)(\frac{1}{3}\sin
3\omega_{0}t-\sin\omega_{0}t)+\frac{1}{i\omega_{0}^{3}}\int\limits_{-\infty}^{t}\frac{\partial}{\partial
t^{\prime}}\left\\{\left[\frac{\partial^{2}}{\partial t^{\prime
2}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\right\\}\sin\omega_{0}t^{\prime}dt^{\prime}\\\
&+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})dt^{\prime}-\frac{1}{8\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left\\{\frac{\partial}{\partial
t^{\prime}}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\right\\}\sin
2\omega_{0}t^{\prime}dt^{\prime}\\\
&-\frac{1}{4\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left\\{\frac{\partial}{\partial
t^{\prime}}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\right\\}\sin
2\omega_{0}t^{\prime}dt^{\prime}-\frac{1}{8i\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left\\{\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{3}(t^{\prime})\tilde{\Psi}(t^{\prime})\right\\}(\frac{1}{3}\sin
3\omega_{0}t^{\prime}-\sin\omega_{0}t^{\prime})dt^{\prime},\end{split}$ (S.70)
$\begin{split}&\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{i\omega_{0}^{2}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\cos\omega_{0}t-\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\cos
2\omega_{0}t+\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\\\
&-\frac{1}{i\omega_{0}^{3}}\left[\frac{\partial^{2}}{\partial
t^{2}}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{8\omega_{0}^{3}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\sin
2\omega_{0}t\\\ &+\frac{1}{4\omega_{0}^{3}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}^{2}(t)\right]\tilde{\Psi}(t)\sin
2\omega_{0}t+\frac{1}{8i\omega_{0}^{3}}\tilde{\hat{W}}^{3}(t)\tilde{\Psi}(t)(\frac{1}{3}\sin
3\omega_{0}t-\sin\omega_{0}t)\\\
&+\frac{1}{i\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{3}}{\partial
t^{\prime
3}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\sin\omega_{0}t^{\prime}dt^{\prime}-\frac{1}{2\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\hat{W}}(t^{\prime})\tilde{\Psi}(t^{\prime})\sin
2\omega_{0}t^{\prime}dt^{\prime}+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})dt^{\prime}\\\
&-\frac{1}{8\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\sin
2\omega_{0}t^{\prime}dt^{\prime}-\frac{1}{16i\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\hat{W}}(t^{\prime})\tilde{\Psi}(t^{\prime})(\sin\omega_{0}t^{\prime}+\sin
3\omega_{0}t^{\prime})dt^{\prime}\\\
&-\frac{1}{4\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\sin
2\omega_{0}t^{\prime}dt^{\prime}-\frac{1}{8i\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\hat{W}}(t^{\prime})\tilde{\Psi}(t^{\prime})(\sin\omega_{0}t^{\prime}+\sin
3\omega_{0}t^{\prime})dt^{\prime}\\\
&-\frac{1}{8i\omega_{0}^{3}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{3}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})(\frac{1}{3}\sin
3\omega_{0}t^{\prime}-\sin\omega_{0}t^{\prime})dt^{\prime}+\frac{1}{48\omega_{0}^{3}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}^{4}(t^{\prime})\tilde{\Psi}(t^{\prime})(\sin
4\omega_{0}t^{\prime}-2\sin 2\omega_{0}t^{\prime})dt^{\prime},\end{split}$
(S.71)
$\displaystyle\tilde{\Psi}(t)=\Psi_{0}+\frac{1}{i\omega_{0}}\tilde{\hat{W}}(t)\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{i\omega_{0}^{2}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\cos\omega_{0}t-\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)\cos
2\omega_{0}t+\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}(t)$
$\displaystyle-\frac{1}{i\omega_{0}^{3}}\left[\frac{\partial^{2}}{\partial
t^{2}}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\sin\omega_{0}t+\frac{1}{8\omega_{0}^{3}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\sin
2\omega_{0}t$
$\displaystyle+\frac{1}{4\omega_{0}^{3}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}^{2}(t)\right]\tilde{\Psi}(t)\sin
2\omega_{0}t+\frac{1}{8i\omega_{0}^{3}}\tilde{\hat{W}}^{3}(t)\tilde{\Psi}(t)(\frac{1}{3}\sin
3\omega_{0}t-\sin\omega_{0}t)$
$\displaystyle-\frac{1}{i\omega_{0}^{4}}\left[\frac{\partial^{3}}{\partial
t^{3}}\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\cos\omega_{0}t+\frac{1}{4\omega_{0}^{4}}\left[\frac{\partial^{2}}{\partial
t^{2}}\tilde{\hat{W}}(t)\right]\tilde{\hat{W}}(t)\tilde{\Psi}(t)\cos
2\omega_{0}t$
$\displaystyle+\frac{1}{16\omega_{0}^{4}}\left[\frac{\partial^{2}}{\partial
t^{2}}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)\cos
2\omega_{0}t+\frac{1}{16i\omega_{0}^{4}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\tilde{\hat{W}}(t)\tilde{\Psi}(t)(\cos\omega_{0}t+\frac{1}{3}\cos
3\omega_{0}t)$
$\displaystyle+\frac{1}{8\omega_{0}^{4}}\left[\frac{\partial^{2}}{\partial
t^{2}}\tilde{\hat{W}}^{2}(t)\right]\tilde{\Psi}(t)\cos
2\omega_{0}t+\frac{1}{8i\omega_{0}^{4}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}^{2}(t)\right]\tilde{\hat{W}}(t)\tilde{\Psi}(t)(\cos\omega_{0}t+\frac{1}{3}\cos
3\omega_{0}t)$
$\displaystyle+\frac{1}{8i\omega_{0}^{4}}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}^{3}(t)\right]\tilde{\Psi}(t)(\frac{1}{9}\cos
3\omega_{0}t-\cos\omega_{0}t)-\frac{1}{48\omega_{0}^{4}}\tilde{\hat{W}}^{4}(t)\tilde{\Psi}(t)(\frac{1}{4}\cos
4\omega_{0}t-\cos 2\omega_{0}t)$
$\displaystyle+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})dt^{\prime}+\frac{1}{i\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{4}}{\partial
t^{\prime
4}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\cos\omega_{0}t^{\prime}dt^{\prime}-\frac{1}{\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{3}}{\partial
t^{\prime
3}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\hat{W}}(t^{\prime})\tilde{\Psi}(t^{\prime})\cos^{2}\omega_{0}t^{\prime}dt^{\prime}$
$\displaystyle-\frac{1}{4\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left\\{\frac{\partial}{\partial
t^{\prime}}\left\\{\left[\frac{\partial^{2}}{\partial t^{\prime
2}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\hat{W}}(t^{\prime})\right\\}\right\\}\tilde{\Psi}(t^{\prime})\cos
2\omega_{0}t^{\prime}dt^{\prime}-\frac{1}{4i\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}(t^{\prime})\right]\tilde{\hat{W}}^{2}(t^{\prime})\tilde{\Psi}(t^{\prime})\cos\omega_{0}t^{\prime}\cos
2\omega_{0}t^{\prime}dt^{\prime}$
$\displaystyle-\frac{1}{16\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left\\{\frac{\partial}{\partial
t^{\prime}}\left[\frac{\partial^{2}}{\partial t^{\prime
2}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\right\\}\tilde{\Psi}(t^{\prime})\cos
2\omega_{0}t^{\prime}dt^{\prime}-\frac{1}{16i\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\hat{W}}(t^{\prime})\tilde{\Psi}(t^{\prime})\cos\omega_{0}t^{\prime}\cos
2\omega_{0}t^{\prime}dt^{\prime}$
$\displaystyle-\frac{1}{16i\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left\\{\frac{\partial}{\partial
t^{\prime}}\left\\{\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\hat{W}}(t^{\prime})\right\\}\right\\}\tilde{\Psi}(t^{\prime})(\cos\omega_{0}t^{\prime}+\frac{1}{3}\cos
3\omega_{0}t^{\prime})dt^{\prime}$
$\displaystyle+\frac{1}{16\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\hat{W}}^{2}(t^{\prime})\tilde{\Psi}(t)(\cos^{2}\omega_{0}t^{\prime}+\frac{1}{3}\cos\omega_{0}t^{\prime}\cos
3\omega_{0}t^{\prime})dt^{\prime}$
$\displaystyle-\frac{1}{8\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{3}}{\partial
t^{\prime
3}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})\cos
2\omega_{0}t^{\prime}dt^{\prime}-\frac{1}{8i\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\hat{W}}(t^{\prime})\tilde{\Psi}(t^{\prime})\cos\omega_{0}t^{\prime}\cos
2\omega_{0}t^{\prime}dt^{\prime}$
$\displaystyle-\frac{1}{8i\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left\\{\frac{\partial}{\partial
t^{\prime}}\left\\{\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\hat{W}}(t^{\prime})\right\\}\right\\}\tilde{\Psi}(t^{\prime})(\cos\omega_{0}t^{\prime}+\frac{1}{3}\cos
3\omega_{0}t^{\prime})dt^{\prime}$
$\displaystyle+\frac{1}{8\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{2}(t^{\prime})\right]\tilde{\hat{W}}^{2}(t^{\prime})\tilde{\Psi}(t^{\prime})(\cos^{2}\omega_{0}t^{\prime}+\frac{1}{3}\cos\omega_{0}t^{\prime}\cos
3\omega_{0}t^{\prime})dt^{\prime}$ (S.72)
$\displaystyle-\frac{1}{8i\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left[\frac{\partial^{2}}{\partial
t^{\prime
2}}\tilde{\hat{W}}^{3}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})(\frac{1}{9}\cos
3\omega_{0}t^{\prime}-\cos\omega_{0}t^{\prime})dt^{\prime}+\frac{1}{8\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{3}(t^{\prime})\right]\tilde{\hat{W}}(t^{\prime})\tilde{\Psi}(t^{\prime})(\frac{1}{9}\cos\omega_{0}t^{\prime}\cos
3\omega_{0}t^{\prime}-\cos^{2}\omega_{0}t^{\prime})dt^{\prime}$
$\displaystyle+\frac{1}{48\omega_{0}^{4}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}^{4}(t^{\prime})\right]\tilde{\Psi}(t^{\prime})(\frac{1}{4}\cos
4\omega_{0}t^{\prime}-\cos
2\omega_{0}t^{\prime})dt^{\prime}+\frac{1}{48i\omega_{0}^{4}}\int\limits_{-\infty}^{t}\tilde{\hat{W}}^{5}(t^{\prime})\tilde{\Psi}(t^{\prime})\cos\omega_{0}t^{\prime}(\frac{1}{4}\cos
4\omega_{0}t^{\prime}-\cos 2\omega_{0}t^{\prime})dt^{\prime},$
Like all the previous equations starting from Eq. (S.58), Eq. (S.72) is exact at
any time $t$. At the end of the pulse, $t\to+\infty$, $\hat{W}(t)\to 0$, as
do all its time derivatives. Therefore, all the out-of-integral terms on the RHS
of Eq. (S.72) except for $\Psi_{0}$ (terms 2 through 16) vanish.
We can therefore write
$\begin{split}&\tilde{\Psi}(+\infty)=\Psi_{0}+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\tilde{\Psi}(t)dt-\frac{1}{2\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial^{3}}{\partial
t^{3}}\tilde{\hat{W}}(t)\right]\tilde{\hat{W}}(t)\tilde{\Psi}_{0}dt\\\
&+\frac{1}{32\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\tilde{\hat{W}}^{2}(t)\tilde{\Psi}_{0}dt+\frac{1}{16\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}^{2}(t)\right]\tilde{\hat{W}}^{2}(t)\tilde{\Psi}_{0}dt-\frac{1}{16\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}^{3}(t)\right]\tilde{\hat{W}}(t)\tilde{\Psi}_{0}dt,\end{split}$
(S.73)
where all terms of order $\omega_{0}^{-n}$, $n>4$, have been
neglected, which allowed us to replace $\Psi$ with $\Psi_{0}$ everywhere but
in the 2nd term. The last step is to expand the 2nd term to the same order,
which is done by using Eq. (S.72) again:
$\begin{split}&\tilde{\Psi}(+\infty)=\Psi_{0}+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\left\\{\Psi_{0}+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}_{0}dt^{\prime}+\frac{1}{4\omega_{0}^{2}}\tilde{\hat{W}}^{2}(t)\tilde{\Psi}_{0}\right\\}dt\\\
&-\frac{1}{4\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial^{3}}{\partial
t^{3}}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\tilde{\Psi}_{0}dt-\frac{1}{16\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}^{3}(t)\right]\tilde{\Psi}_{0}dt-\frac{3}{32\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\tilde{\hat{W}}^{2}(t)\tilde{\Psi}_{0}dt\\\
&+\frac{1}{32\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}^{2}(t),\tilde{\hat{W}}^{2}(t)\right]\tilde{\Psi}_{0}dt+\frac{1}{16\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\hat{W}}^{3}(t)\tilde{\Psi}_{0}dt,\end{split}$
(S.74)
or
$\begin{split}&\tilde{\Psi}(+\infty)=\Psi_{0}+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\left\\{\Psi_{0}+\frac{1}{4\omega_{0}^{2}}\int\limits_{-\infty}^{t}\left[\frac{\partial}{\partial
t^{\prime}}\tilde{\hat{W}}(t^{\prime}),\tilde{\hat{W}}(t^{\prime})\right]\tilde{\Psi}_{0}dt^{\prime}\right\\}dt\\\
&-\frac{1}{4\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial^{3}}{\partial
t^{3}}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\tilde{\Psi}_{0}dt-\frac{1}{16\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}^{3}(t)\right]\tilde{\Psi}_{0}dt-\frac{1}{32\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t),\tilde{\hat{W}}(t)\right]\tilde{\hat{W}}^{2}(t)\tilde{\Psi}_{0}dt\\\
&+\frac{1}{32\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}^{2}(t),\tilde{\hat{W}}^{2}(t)\right]\tilde{\Psi}_{0}dt+\frac{1}{16\omega_{0}^{4}}\int\limits_{-\infty}^{\infty}\left[\frac{\partial}{\partial
t}\tilde{\hat{W}}(t)\right]\tilde{\hat{W}}^{3}(t)\tilde{\Psi}_{0}dt.\end{split}$
(S.75)
Equation (24) follows from Eq. (S.75) after regrouping of the terms.
# Evaluation of the Gottfried sum with use of the truncated moments method
A. Kotlorz, Opole University of Technology, 45-758 Opole, Proszkowska 76, Poland
D. Kotlorz, Opole University of Technology, 45-758 Opole, Proszkowska 76, Poland, and Bogoliubov Laboratory of Theoretical Physics, JINR, 141980 Dubna, Russia
O. V. Teryaev, Bogoliubov Laboratory of Theoretical Physics, JINR, 141980 Dubna, Russia
###### Abstract
We reanalyze the experimental NMC data on the nonsinglet structure function
$F_{2}^{p}-F_{2}^{n}$ and the E866 data on the nucleon sea asymmetry
$\bar{d}/\bar{u}$ using the truncated moments approach elaborated in our
previous papers. With the help of a special truncated sum, one can overcome the
problem of the unavoidable experimental restrictions on the Bjorken $x$ and
effectively study the fundamental sum rules for the parton distributions and
structure functions. Using only the data from the measured region of $x$, we
obtain the Gottfried sum $\int_{0}^{1}F_{2}^{ns}/x\,dx$ and the integrated
nucleon sea asymmetry $\int_{0}^{1}(\bar{d}-\bar{u})\,dx$. We compare our
results with the reported experimental values and with the predictions
obtained from different global parametrizations of the parton distributions.
We also discuss the discrepancy between the NMC and E866 results on
$\int_{0}^{1}(\bar{d}-\bar{u})\,dx$ and demonstrate that this discrepancy can
be resolved by taking into account higher-twist effects.
###### pacs:
11.55.Hx, 12.38.-t, 12.38.Bx
## I Introduction
Deep inelastic scattering (DIS) of leptons on hadrons and hadron-hadron
collisions are a gold mine for studying the hadron structure and fundamental
particle interactions at high energies. In particular, so-called DIS sum rules
can provide important information on the partonic structure of the nucleon and a
good test of quantum chromodynamics (QCD). A number of polarized and
unpolarized sum rules for structure functions are known today. Some of
them are rigorous theoretical predictions, while others are based on model
assumptions which can be verified experimentally. An example of the latter is
the Gottfried sum rule (GSR) Gottfried (1967). The GSR violation observed in a
series of experiments Amaudruz et al. (1991); Arneodo et al. (1994); Baldit et
al. (1994); Hawker et al. (1998); Peng et al. (1998); Towell et al. (2001);
Ackerstaff et al. (1998) revealed that, contrary to the simple
partonic model of the nucleon with a symmetric light sea, the light sea of
the proton is flavor asymmetric, i.e., $\bar{u}(x)\neq\bar{d}(x)$. This
unexpected result has prompted great interest and many further studies of
theoretical explanations of the flavor asymmetry of the nucleon sea (for a
review, see, e.g., Kumano (1998); Garvey and Peng (2001)).
In this paper, we present a phenomenological analysis of the experimental NMC
data on the nonsinglet structure function $F_{2}^{p}-F_{2}^{n}$ Arneodo et al.
(1994) and the E866 data on the nucleon sea asymmetry $\bar{d}/\bar{u}$ Towell et
al. (2001), utilizing a very effective method for determining DIS sum
rules in a restricted region of Bjorken $x$ – the so-called truncated Mellin
moments (TMM) approach Kotlorz et al. (2017).
In the next section, we give a brief recapitulation of the Gottfried sum rule
violation problem and discuss effects modifying the GSR such as
perturbative QCD corrections, higher-twist terms, small-$x$ behavior and
nuclear shadowing. The method of evaluating DIS sum rules from
experimental data with the help of the truncated Mellin moments approach is
briefly summarized in Section III. In Section IV, we present our numerical
results on the GSR value and compare them to those provided by NMC and
E866, and also to other determinations based on global parton distribution
function (PDF) fits. Furthermore, we discuss higher-twist effects as a
possible explanation of the discrepancy between the NMC and E866 results on
the integrated nucleon sea asymmetry $\int_{0}^{1}(\bar{d}-\bar{u})\,dx$.
Finally, we briefly discuss our prediction for the isovector quark momentum
fraction $\langle x\rangle_{u-d}$.
In Section V, we give the conclusions of this study.
## II Violation of the Gottfried sum rule
The Gottfried sum rule Gottfried (1967) states that the integral over the
Bjorken variable $0<x<1$ of the difference of the electron-proton and
electron-neutron structure functions is a constant ($=1/3$) under flavor
symmetry of the nucleon sea ($\bar{u}(x)=\bar{d}(x)$), independent of the
transferred four-momentum $q$:
$S_{G}(Q^{2})=\int^{1}_{0}\left[F_{2}^{p}(x,Q^{2})-F_{2}^{n}(x,Q^{2})\right]{dx\over
x}=\frac{1}{3}\,.$ (1)
Here, $x=Q^{2}/(2Pq)$, where $Q^{2}=-q^{2}$, $P^{2}=m^{2}$, and $m$ is the
nucleon mass. This form of the GSR originates from a simple partonic model of
the nucleon structure functions in which the isospin symmetry of the nucleon
(the $u$-quark distribution in the proton is equal to the $d$-quark distribution
in the neutron),
$u^{p}_{v}(x)=d^{n}_{v}(x)\equiv u_{v}(x)\,,$ (2)
and, similarly, $d^{p}=u^{n}$ , $\bar{u}^{p}=\bar{d}^{n}$,
$\bar{d}^{p}=\bar{u}^{n}$, etc., and the flavor symmetry of the light sea in
the nucleon,
$\bar{u}(x)=\bar{d}(x)\,,$ (3)
are assumed. Then, the difference between the proton and neutron structure
functions incorporating implicit perturbative QCD $Q^{2}$ corrections to the
parton model is given by
$F_{2}^{p}(x,Q^{2})-F_{2}^{n}(x,Q^{2})=\frac{1}{3}\,x\left[u_{v}(x,Q^{2})-d_{v}(x,Q^{2})\right]+\frac{2}{3}\,x\left[\bar{u}(x,Q^{2})-\bar{d}(x,Q^{2})\right],$
(4)
where the valence-quark distribution $q_{v}$, ($q=u,d$), is defined by
$q_{v}\equiv q-\bar{q}$, with $\bar{q}$ being the sea-quark distribution.
Taking into account the charge conservation law for the nucleon,
$\int^{1}_{0}u_{v}(x,Q^{2})\,dx=2\,,\quad\quad\int^{1}_{0}d_{v}(x,Q^{2})\,dx=1\,,$
(5)
we obtain
$\int^{1}_{0}\left[F_{2}^{p}(x,Q^{2})-F_{2}^{n}(x,Q^{2})\right]{dx\over
x}=\frac{1}{3}+\frac{2}{3}\int^{1}_{0}\left[\bar{u}(x,Q^{2})-\bar{d}(x,Q^{2})\right]dx.$
(6)
If the light sea is flavor symmetric, Eq. (3), the second term in Eq. (6)
vanishes giving the Gottfried sum rule (1).
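One can sanity-check the chain Eqs. (4)-(6) with toy distributions; the sketch below (the $x(1-x)^{2}$ shapes and their normalizations are illustrative assumptions, not fitted PDFs) recovers $S_{G}=1/3$ exactly for a flavor-symmetric sea:

```python
from fractions import Fraction as Fr

# Toy valence shapes q_v(x) = c * x * (1 - x)**2; the moment
# integral_0^1 x(1-x)^2 dx = 1/2 - 2/3 + 1/4 = 1/12.
beta = Fr(1, 2) - Fr(2, 3) + Fr(1, 4)   # = 1/12
c_u = 2 / beta                          # normalizes u_v to 2, Eq. (5)
c_d = 1 / beta                          # normalizes d_v to 1, Eq. (5)

# With a flavor-symmetric sea, Eq. (3), the (2/3)*(ubar - dbar) term of
# Eq. (4) integrates to zero, so the Gottfried sum, Eq. (6), reduces to
S_G = Fr(1, 3) * (c_u - c_d) * beta
print(S_G)  # 1/3
```

Any valence shapes obeying Eq. (5) give the same result, since only the normalization integrals enter the sum.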
Though the isospin symmetry, Eq. (2), is not exact and its breaking can also
contribute to the GSR violation, the experimental results on the GSR breaking
are usually interpreted as evidence of the light-flavor asymmetry of the
nucleon sea,
$\bar{u}(x)\neq\bar{d}(x)\,.$ (7)
The first clear indication of the GSR violation in a DIS experiment was provided
by the New Muon Collaboration (NMC) Amaudruz et al. (1991) and by the
reanalyzed NMC data Arneodo et al. (1994). The NMC measurement of
$S_{G}$,
${\rm NMC~{}~{}1994:}\quad\quad S_{G}(Q^{2}=4\,{\rm GeV}^{2})=0.235\pm 0.026$
(8)
implies the integrated antiquark flavor asymmetry, Eq. (7),
$\int^{1}_{0}\left[\bar{d}(x,Q^{2})-\bar{u}(x,Q^{2})\right]dx=0.148\pm 0.039$
(9)
which means that the $\bar{d}$ sea in the proton is larger than the $\bar{u}$ sea.
Later, the Gottfried sum rule was tested at Fermilab in the E866 Drell-Yan
(DY) experiment, which measured $\bar{d}/\bar{u}$ as a function of $x$ over
the kinematic range $0.015<x<0.35$ at $Q^{2}=54\,{\rm GeV}^{2}$ Towell et
al. (2001). Again, the data suggested a significant deficit in the sum rule,
consistent with the DIS results and also with the semi-inclusive DIS (SIDIS)
measurements of the HERMES collaboration for $0.020<x<0.30$ and
$1<Q^{2}<20\,{\rm GeV}^{2}$ Ackerstaff et al. (1998):
$\displaystyle{\rm HERMES~{}~{}1998:}\quad\quad$
$\displaystyle\int^{1}_{0}\left[\bar{d}(x,Q^{2})-\bar{u}(x,Q^{2})\right]dx=0.16\pm
0.03$ (10a) $\displaystyle{\rm E866~{}~{}2001:}\quad\quad$
$\displaystyle\int^{1}_{0}\left[\bar{d}(x,Q^{2})-\bar{u}(x,Q^{2})\right]dx=0.118\pm
0.012$ (10b)
The surprisingly large asymmetry of the light sea in the nucleon, Eq. (7),
observed in different experiments (DIS, DY and SIDIS), has triggered many
theoretical efforts to understand and accurately describe the experimental
results (for a review, see, e.g., Kumano (1998); Garvey and Peng (2001)). While
perturbative QCD fails to describe the sea asymmetry, nonperturbative
mechanisms such as Pauli blocking, the meson cloud, the chiral quark model, the
intrinsic sea, and the soliton model seem more promising in explaining the GSR
breaking. Recently, the statistical parton distributions approach was
developed to study the flavor structure of the light quark sea Soffer and
Bourrely (2019). The authors obtained a remarkable agreement of the
statistical model prediction for the ratio $\bar{d}/\bar{u}$ with the E866
data Garvey and Peng (2001); Peng et al. (2014) up to $x=0.2$. Unfortunately,
none of the studies mentioned above predicts correctly the $\bar{d}/\bar{u}$
behavior in the whole $x$ region, i.e., none of them predicts the sign change of
$\bar{d}(x)-\bar{u}(x)$ at $x\approx 0.3$ suggested by the E866 data.
Below, we briefly discuss possible effects modifying the GSR, such as
perturbative QCD corrections, higher-twist terms, small-$x$ behavior and
nuclear shadowing.
### II.1 pQCD corrections to the GSR
Here, we show that the perturbative QCD corrections to the GSR are too small
to explain the light sea asymmetry Hinchliffe and Kwiatkowski (1996). The
corrections of order $\alpha_{s}^{2}$ to the GSR were obtained in Kataev and
Parente (2003), based on a numerical calculation of the order-$\alpha_{s}^{2}$
contribution to the coefficient function.
From a renormalization group equation analysis for $S_{G}(Q^{2})$, Kataev and
Parente Kataev and Parente (2003) obtained, for the number of active flavors
$n_{f}=4$, the following QCD corrections to the GSR:
$S_{G}(Q^{2})=\frac{1}{3}\left[1+0.0384\left(\frac{\alpha_{s}}{\pi}\right)-0.822\left(\frac{\alpha_{s}}{\pi}\right)^{2}\right].$
(11)
Using the above formula we find
$\displaystyle S_{G}(Q^{2}=4\,{\rm GeV}^{2})$ $\displaystyle=$
$\displaystyle\frac{1}{3}-0.0015=0.3318$ (12a) $\displaystyle
S_{G}(Q^{2}=54\,{\rm GeV}^{2})$ $\displaystyle=$
$\displaystyle\frac{1}{3}-0.0003=0.3330\,.$ (12b)
This means that the magnitude of the order-$\alpha_{s}^{2}$ perturbative QCD
effects turns out to be about $-0.4\%$ at $Q^{2}=4\,{\rm GeV}^{2}$
($\alpha_{s}\approx 0.31$) and $-0.08\%$ at $Q^{2}=54\,{\rm GeV}^{2}$
($\alpha_{s}\approx 0.20$) of the original constant value of the GSR,
$S_{G}=1/3$.
It is thus clearly seen that the perturbative QCD corrections to the Gottfried
sum rule are very small and cannot explain the experimental results of the NMC
Arneodo et al. (1994) and E866 Towell et al. (2001) collaborations, where the
GSR is broken at the level of $-29.5\%$ and $-23.6\%$, respectively.
### II.2 Higher-twist effects
In light of the results obtained in Alekhin et al. (2012), the higher-twist
terms also do not seem very helpful in describing the large
discrepancy between the theoretical prediction of the GSR, Eq. (1), and the
experimental value of Eq. (8). The authors of Alekhin et al. (2012), based on
the DIS world data, fitted in an NNLO analysis the twist-4 coefficient
$H_{2}^{\tau=4}(x)$ for the nonsinglet function $F_{2}^{p-n}(x)$ and found the
HT corrections marginal in comparison with the leading-twist (LT) terms,
$F_{2}^{p-n}(x,Q^{2})=F_{2}^{p}(x,Q^{2})-F_{2}^{n}(x,Q^{2})=\left[F_{2}^{p-n}(x,Q^{2})\right]^{{\rm
LT}}+\frac{H_{2}^{\tau=4}(x)}{Q^{2}}\,.$ (13)
In Fig. 1, we plot the coefficient of the twist-4 term $H_{2}^{\tau=4}(x)$,
Eq. (13), for the nonsinglet structure function $F_{2}^{p-n}(x)$ obtained in
Alekhin et al. (2012) and compare the corresponding HT corrections with the
results of NMC for $F_{2}^{p-n}(x)$ at $Q^{2}=4\,{\rm GeV}^{2}$.
Figure 1: Left: the central values (solid line) and the error band for the
coefficient of the twist-4 term $H_{2}^{\tau=4}(x)$, Eq. (13), obtained from
the NNLO fit for the nonsinglet structure function $F_{2}^{p-n}(x)$ Alekhin et
al. (2012). Right: comparison of the NMC data for $F_{2}^{p-n}(x)$ at
$Q^{2}=4\,{\rm GeV}^{2}$ with the corresponding HT corrections.
However, when applied to the Gottfried sum rule, these HT corrections, though
too small to be responsible for the observed flavor asymmetry, are not
marginal and can accurately explain the relatively large discrepancy between
the two central values of the experimental results: NMC, Eq. (9), and E866,
Eq. (10b).
Namely, the HT effects modify the original GSR, giving a contribution at the
level of $-5.4\%$ at $Q^{2}=4\,{\rm GeV}^{2}$ (NMC) and $-0.4\%$ at
$Q^{2}=54\,{\rm GeV}^{2}$ (E866) of the sum $1/3$. Hence, the corresponding
difference between the NMC and E866 results for the flavor asymmetry of the
light sea
$\Delta(Q^{2})\equiv\int^{1}_{0}\left[\bar{d}(x,Q^{2})-\bar{u}(x,Q^{2})\right]dx\,,$
(14)
implied by the HT effects at different scales of $Q^{2}$, is
$\Delta^{HT}(Q^{2}=4\,{\rm GeV}^{2})-\Delta^{HT}(Q^{2}=54\,{\rm
GeV}^{2})\approx 0.025\pm 0.022\,.$ (15)
This is in a very good agreement with the experimental data:
$\Delta_{NMC}-\Delta_{E866}\approx 0.030\,.$ (16)
Taking into account also the perturbative QCD radiative corrections of Eq.
(11), we arrive at the value even closer to the data:
$\Delta^{Rad+HT}(Q^{2}=4\,{\rm GeV}^{2})-\Delta^{Rad+HT}(Q^{2}=54\,{\rm
GeV}^{2})\approx 0.027\,.$ (17)
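The arithmetic behind Eqs. (15)-(17) follows directly from Eq. (6): since $S_{G}=1/3-\frac{2}{3}\Delta$, a shift $\delta S_{G}$ of the sum maps to a shift $-\frac{3}{2}\,\delta S_{G}$ of the asymmetry $\Delta$ of Eq. (14). A minimal numerical sketch (the per-scale shifts are the percentages of $1/3$ quoted above):

```python
def asym_shift(dS):
    """Shift of Delta, Eq. (14), implied by a shift dS of S_G via Eq. (6)."""
    return -1.5 * dS

ht_4, ht_54 = -0.0181, -0.0013      # twist-4 shifts of S_G (-5.4% and -0.4% of 1/3)
pqcd_4, pqcd_54 = -0.0015, -0.0003  # pQCD shifts, Eq. (12)

d_ht = asym_shift(ht_4) - asym_shift(ht_54)        # Eq. (15): ~0.025
d_tot = (asym_shift(ht_4 + pqcd_4)
         - asym_shift(ht_54 + pqcd_54))            # Eq. (17): ~0.027
print(round(d_ht, 3), round(d_tot, 3))
```

Both central values come out consistent with Eqs. (15) and (17).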
It is seen that the $Q^{2}$ dependence of the GSR can resolve the discrepancy
between the flavor asymmetries of the light sea in the nucleon measured in
different experiments. A similar suggestion was made by the authors of Ref.
Szczurek and Uleshchenko (2000).
We have found that the QCD-improved parton model including the NNLO radiative
corrections and also the twist-4 contributions predicts for the Gottfried sum
rule at $Q^{2}=4\,{\rm GeV}^{2}$
$S_{G}(Q^{2}=4\,{\rm GeV}^{2})=\frac{1}{3}\quad\underbrace{-0.0015}_{{\rm
pQCD}}\quad\underbrace{-0.0181}_{{\rm HT}}\approx\frac{1}{3}-0.02\,.$ (18)
This means that the large deficit of the GSR observed in the experiments
($S_{G}\approx 1/3-0.1$) comes from sources other than perturbative
mechanisms and HT effects.
### II.3 Low-$x$ contribution
Experimental verification of most sum rules faces the difficulty that in
any realistic experiment one cannot reach arbitrarily small values of the
Bjorken $x$. This is a serious obstacle also in the determination of the
Gottfried sum rule, which involves the first Mellin moment, i.e., the integral
of the nonsinglet structure function $F_{2}^{ns}$ over the whole range of $x$:
$0\leqslant x\leqslant 1$. The lack of accurate low-$x$ data makes
plausible the idea that a significant contribution to the GSR integral
can come precisely from the small-$x$ region. We illustrate this in Fig. 2, where
we show different low-$x$ behaviors of $F_{2}^{ns}/x\sim x^{a}$ and the
corresponding truncated GSR, $\int_{x}^{1}F_{2}^{ns}(x)\,dx/x$, together with
the NMC data Arneodo et al. (1994). We use three values for $a$: $-0.2$,
$-0.4$ and $-0.6$. It is seen that the experimental uncertainties in the
small-$x$ region are too large to favor any of them.
Figure 2: Left: possible parametrizations of $F_{2}^{ns}/x$ reflecting
different small-$x$ behavior $\sim x^{a}$: $a\,=\,-0.2$ (dashed), $-0.4$
(dash-dotted) and $-0.6$ (solid) compared to the NMC data Arneodo et al.
(1994). Right: the corresponding truncated Gottfried sum rule
$\int_{x}^{1}F_{2}^{ns}(x)\,dx/x$.
The different $\sim x^{a}$ behaviors predict significantly different very
low-$x$ contributions to the GSR, $\int_{0}^{0.004}F_{2}^{ns}(x)\,dx/x$,
namely 0.006 for $a=-0.2$, 0.011 for $a=-0.4$ and 0.022 for $a=-0.6$. This
means that the very low-$x$ contribution to the GSR can vary from $1-7\,\%$
of the sum $1/3$ and cannot resolve the GSR breaking problem. On the other
hand, the NMC data in the small-$x$ region confirm very well the expectations
of theoretical studies on $F_{2}$ based on Regge theory. In the Regge
approach, the small-$x$ behavior of $F_{2}^{ns}(x)$ is controlled by the
reggeon $A_{2}$ exchange Kwiecinski (1996):
$F_{2}^{p-n}(x)=F_{2}^{p}(x)-F_{2}^{n}(x)\sim x^{1-\alpha_{A_{2}}}\,,$ (19)
where $\alpha_{A_{2}}\approx 0.5$ is the $A_{2}$ reggeon intercept. Taking
into account the Regge predictions in the NMC data analysis, we can estimate
the small-$x$ contribution to the Gottfried sum,
$\int_{0}^{0.004}F_{2}^{ns}(x)\,dx/x$, as $4-7\,\%$ of the total value $1/3$.
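For a power-law extrapolation $F_{2}^{ns}/x = N x^{a}$ with $a>-1$, the missing very low-$x$ piece has the closed form $\int_{0}^{x_{0}} N x^{a}\,dx = N x_{0}^{a+1}/(a+1)$. A sketch for the case $a=-0.4$, with the normalization $N=0.2$ taken from the NMC-type fit $F_{2}^{p-n}\sim 0.2\,x^{0.6}$ quoted in Sec. IV.1 (so the number is close to, but not identical with, the 0.011 above, which uses the actual fit):

```python
def lowx_contribution(N, a, x0=0.004):
    """Closed-form integral of N * x**a over (0, x0), valid for a > -1."""
    return N * x0 ** (a + 1) / (a + 1)

c = lowx_contribution(0.2, -0.4)
print(round(c, 4), round(100 * c / (1 / 3), 1))  # contribution and % of 1/3
```

The result, roughly $0.012$, i.e. a few percent of $1/3$, illustrates why the unmeasured region alone cannot account for the observed GSR deficit.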
### II.4 Nuclear shadowing
Since there is no fixed neutron target, the deuteron is usually used
for measuring the neutron structure function $F_{2}^{n}$. The same method was
used by the NMC for the determination of the Gottfried sum rule. In order to
obtain the difference of the structure functions $F_{2}^{p}-F_{2}^{n}$ of free
nucleons, which enters the GSR, the $F_{2}^{n}$ extracted from the
deuteron data has to be corrected for the shadowing effects:
$F_{2}^{d}=\frac{1}{2}\,\left(F_{2}^{p}+F_{2}^{n}\right)-\delta F_{2}^{d}\,,$
(20)
where $\delta F_{2}^{d}\geqslant 0$.
The shadowing effects in the deuteron were investigated in many works (for a
review, see, e.g., Kumano (1998)), providing a small negative correction to
the sum. Thus, the shadowing leads to a smaller value of $S_{G}(Q^{2})$ than
that determined experimentally assuming no shadowing, and the GSR violation is
even magnified. The nuclear shadowing, which is dominated by the vector-meson-
dominance (VMD) mechanism, is non-negligible in the region $x\leqslant 0.1$
and for the low and moderate $Q^{2}$ relevant for the NMC measurements, and has to
be taken into account in the data analysis Badelek and Kwiecinski (1994). This
leads to the following expression for the difference between the proton and
neutron structure functions in the integrand of the Gottfried sum, Eq. (1):
$F_{2}^{p}(x,Q^{2})-F_{2}^{n}(x,Q^{2})=\left(F_{2}^{p}(x,Q^{2})-F_{2}^{n}(x,Q^{2})\right)_{NMC}-2\,\delta
F_{2}^{d}(x,Q^{2})\,,$ (21)
where $(F_{2}^{p}(x,Q^{2})-F_{2}^{n}(x,Q^{2}))_{NMC}$ obtained by NMC is
related to the measured $F_{2}^{d}$ and $F_{2}^{n}/F_{2}^{p}$ via
$\left(F_{2}^{p}(x,Q^{2})-F_{2}^{n}(x,Q^{2})\right)_{NMC}=2\,F_{2}^{d}\;\frac{1-\left(\frac{F_{2}^{n}}{F_{2}^{p}}\right)_{NMC}}{1+\left(\frac{F_{2}^{n}}{F_{2}^{p}}\right)_{NMC}}$
(22)
Using the results of Badelek and Kwiecinski (1994) for $\delta F_{2}^{d}$, in
Fig. 3 we compare the NMC data with those corrected for the nuclear
shadowing effect. We find that the negative shadowing correction to the
experimental result for the Gottfried sum, $S_{G}(0.004,\,0.8,\,Q^{2})=0.221$,
is $0.0265$ ($\approx 12\%$).
Figure 3: The nuclear-shadowing-corrected NMC data for $F_{2}^{ns}$ (open
circles), calculated in Badelek and Kwiecinski (1994). Solid: fit to the
shadowing contribution to the deuteron structure function, $2\delta
F_{2}^{d}$.
## III TMM method for determination of sum rules
Here, we briefly present an effective method which allows one to determine any
sum rule value from experimental data available in a restricted
kinematic range of the Bjorken variable $x$. The method was elaborated in
Kotlorz et al. (2017); Strozik-Kotlorz et al. (2017) for the Bjorken sum rule
and successfully applied to the experimental data of COMPASS, SLAC and JLab
Kotlorz et al. (2017); Strozik-Kotlorz et al. (2017); Kotlorz and Mikhailov
(2019); Kotlorz et al. (2019).
The main idea of the method presented in Kotlorz et al. (2017) is the
construction of a special truncated sum $\Gamma$ which approaches the limiting
sum rule value more quickly, i.e., for larger $x$, than the ordinary sum.
In other words, the use of $\Gamma$ “mimics” the extension of the experimental
kinematic region of $x$ to lower values.
Below, we give useful formulas for the determination of the sum rule value in the
TMM approach, which in the next section we shall apply to the GSR. Details
of the theoretical aspects of the $\Gamma$ construction and a description of
different approximations of the TMM method can be found in Kotlorz et al.
(2017).
Determination of a sum rule involves the integral of the parton density or
structure function $f(x,Q^{2})$ over the whole range $(0,1)$ of $x$:
$S(0,\,1)=\int^{1}_{0}f(x)\,dx\,,$ (23)
where for clarity we omit the $Q^{2}$ dependence. The experimental
measurements provide data on $f(x)$ only in a limited range of $x$:
$0<x_{min}\equiv x_{1}<x_{2}<\cdots<x_{max}\equiv x_{N}<1$, where
$x_{min}\equiv Q^{2}_{\text{min}}/(2(Pq)_{\text{max}})>0$. Thus, in fact, the
experiment gives information on the truncated sum
$S(x_{min},\,x_{max})=\int^{x_{max}}_{x_{min}}f(x)\,dx\,.$ (24)
The truncation at the upper limit $x_{max}$ is less important than that at the
low-$x$ limit $x_{min}$ because of the rapid decrease of the parton
densities and structure functions as $x\rightarrow 1$.
In particular, if we define $n$th truncated moment of the structure function
$f(x,Q^{2})$ as
$M_{n}(x_{min},\,x_{max},\,Q^{2})=\int^{x_{max}}_{x_{min}}x^{n-2}f(x,Q^{2})\,dx\,,$
(25)
the Gottfried sum rule $S_{G}(Q^{2})$ is the first moment
$M_{1}(0,\,1,\,Q^{2})$ of the nonsinglet function
$F_{2}^{p}(x,\,Q^{2})-F_{2}^{n}(x,\,Q^{2})$.
The special sum $\Gamma$ is constructed based on the ordinary sum $S$ in the
following way:
$\displaystyle\Gamma(x_{1},\,r)$ $\displaystyle=$ $\displaystyle
S(x_{1},1)+A\,\int_{x_{1}}^{x_{1}/r}f(x)\,dx\,,$ (26)
where $x_{1}$ is the smallest value of $x$ accessible in the experiment, and
$A$ and $r$ are parameters calculated from the data. In the limit $x_{1}\to
0$, $\Gamma(x_{1},\,r)$ is equal to $S(x_{1},1)$, providing the sum rule value
$S(0,1)$, Eq. (23), whereas for $x_{1}>0$, $\Gamma(x_{1},\,r)$ approaches $S(0,1)$
much earlier than $S(x_{1},1)$ itself. This is illustrated in Fig. 4, where we
compare $S(x_{1},1)$ to a set of $\Gamma(x_{1},\,r)$, Eq. (26), plotted for
different values of $A$. We use smooth fits to the NMC and E866 data, setting
the ratio of two experimental points $r=x_{1}/x_{2}$ equal to $0.5$ for NMC
and $0.7$ for E866, respectively. Here, we would like to emphasize that in our
analysis we shall use the auxiliary fit only for the determination of $A_{I}$,
while the rest of the calculations will be performed using only the data
$f(x)$ from the measured $x$ region.
Figure 4: $S(x_{1},1)$, Eq. (24), (black solid) and $\Gamma(x_{1},\,r)$, Eq.
(26), for different values of $A$. Upper (blue) solid line corresponds to
$A=A_{0}$ (left panel) and $A=A_{I}$ (right panel), see description in the
text.
In our approach, as described in Kotlorz et al. (2017), we utilize the quasi-
linear regime of $\Gamma(x_{1},\,r)$, which starts already for $x$
significantly larger than the smallest experimental value $x_{1}$. This
ensures the applicability of the first (or, as in the NMC case, even zero)
order approximation for estimating the value of $S(0,1)$ with the help of
$\Gamma(x_{1},\,r)$. Thus, requiring the second derivative to vanish,
$\Gamma^{\prime\prime}(x_{1},\,r)=0$, we obtain $A_{I}$, and the sum rule value
can be determined very effectively in the first order of the Taylor expansion:
$\displaystyle
S(0,1)=\Gamma(0,\,r)\approx\Gamma(x_{1},\,r)-x_{1}\,\Gamma^{\prime}(x_{1},\,r)$
$\displaystyle=$
$\displaystyle\Gamma(x_{1},\,r)+(A_{I}+1)\,x_{1}\,f(x_{1})-A_{I}\,\frac{x_{1}}{r}f(x_{1}/r)\,,$
(28) $\displaystyle A_{I}$ $\displaystyle=$
$\displaystyle\left[\frac{1}{r^{2}}\,\frac{f^{\prime}(x_{1}/r)}{f^{\prime}(x_{1})}-1\right]^{-1}\,,$
(29)
where $\Gamma(x_{1},\,r)$ is given by Eq. (26) and $f^{\prime}(x)$ denotes the
first-order derivative with respect to $x$.
In the special case where the small-$x$ experimental data can be well described
by a simple power form $f(x)=Nx^{a}$, we have
$r^{n}\frac{f^{(n-1)}(x_{1})}{f^{(n-1)}(x_{1}/r)}=r\frac{f(x_{1})}{f(x_{1}/r)}$
(30)
and all derivatives $\Gamma^{(n)}(x_{1},\,r)$ vanish for the same $A=A_{0}$,
$A_{0}=\left[\frac{1}{r}\,\frac{f(x_{1}/r)}{f(x_{1})}-1\right]^{-1}\,.$ (31)
Hence, we arrive at the zero order approximation for the sum rule value which
reads
$S(0,1)=\Gamma(x_{1},\,r)=S(x_{1},1)+A_{0}\,\int_{x_{1}}^{x_{1}/r}f(x)\,dx\,.$
(32)
The method of estimating the sum rule value based on the special truncated
sum $\Gamma$ is effective for different small-$x$ behaviors of the function
$f\sim x^{a}$, including $a<0$, as in the case of the Gottfried sum rule.
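The effectiveness of the zero order formula is easy to see for an exact power law: in that case Eq. (32) recovers $S(0,1)$ identically, because $A_{0}\int_{x_{1}}^{x_{1}/r}f\,dx$ equals the missing piece $\int_{0}^{x_{1}}f\,dx$. A numerical sketch (the $N$, $a$, $x_{1}$, $r$ values are illustrative):

```python
N, a = 0.2, -0.4        # power-law f(x) = N*x**a; a > -1, so S(0,1) converges
x1, r = 0.004, 0.5

f = lambda x: N * x**a
F = lambda x: N * x**(a + 1) / (a + 1)   # antiderivative, with F(0) = 0

S_trunc = F(1.0) - F(x1)                            # measured part S(x1, 1)
A0 = 1.0 / ((1.0 / r) * f(x1 / r) / f(x1) - 1.0)    # Eq. (31)
S_est = S_trunc + A0 * (F(x1 / r) - F(x1))          # Eq. (32)

print(abs(S_est - F(1.0)))   # agrees with S(0,1) to machine precision
```

For real data the power law holds only approximately at small $x$, which is why the first order formulas, Eqs. (28)-(29), are kept as a refinement.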
## IV Data analysis
Below we present our numerical results for the Gottfried sum rule value
$S(0,1)$ based on the experimental NMC Arneodo et al. (1994) and E866 Towell
et al. (2001) data following the approach described in the previous section.
### IV.1 NMC
The violation of the GSR was first observed by the New Muon Collaboration at
CERN in 1991 Amaudruz et al. (1991). NMC measured the cross section ratio for
deep inelastic scattering of muons from hydrogen and deuterium targets in a
kinematic range extended to the low-$x$ region, $0.004\leqslant x\leqslant
0.8$. The difference of the structure functions was calculated via Eq. (22),
with the ratio $F_{2}^{n}/F_{2}^{p}=2F_{2}^{d}/F_{2}^{p}-1$ determined by the
NMC experiment and the deuteron structure function $F_{2}^{d}$ taken
from a fit to various experimental data. The results were obtained by
interpolation or extrapolation to $Q^{2}=4{\rm\,GeV}^{2}$. In the reanalyzed
data Arneodo et al. (1994), which are under study in this section, NMC used
their own data for $F_{2}^{d}$ and revised $F_{2}^{n}/F_{2}^{p}$ ratios.
The NMC data for $F_{2}^{p-n}$, which form the GSR,
$S_{G}(0,1)=\int^{1}_{0}\left[F_{2}^{p}(x,Q^{2})-F_{2}^{n}(x,Q^{2})\right]{dx\over
x}\,,$ (33)
can be well described for $x\leqslant 0.4$ by the fit function $\sim
0.2\,x^{0.6}$ Arneodo et al. (1994), which agrees with the theoretical
prediction of Regge-like behavior, Eq. (19), Kwiecinski (1996). The corresponding
truncated function $\Gamma(x_{1},\,r)$ saturates to the constant $S_{G}(0,1)$
already at large $x$ (see upper solid line in the left panel of Fig. 4) and we
can estimate $S_{G}(0,1)$ using the zero order formulas, Eqs. (31) and (32),
which take the form
$S_{G}(0,1)=S_{G}(x_{1},1)^{\rm{NMC}}+A_{0}\,\int_{x_{1}}^{x_{1}/r}F_{2}^{p-n}(x,Q^{2})\frac{dx}{x}\,,$ (34)
$A_{0}=\left[\frac{F_{2}^{p-n}(x_{1}/r,Q^{2})}{F_{2}^{p-n}(x_{1},Q^{2})}-1\right]^{-1}.$ (35)
All quantities in Eqs. (34) and (35) are directly provided by the data or can
be calculated from the data without the need for any fit function.
Namely, $S_{G}(x_{1},1)^{\rm{NMC}}$ is the contribution to the GSR from the
measured region of $x$ together with the correction for $x>0.8$, $x_{1}$
denotes the smallest $\langle x\rangle$ in the analysis, and $r=x_{1}/x_{k}$
is a ratio of two experimental points with $x_{k}>x_{1}$. The integral in Eq.
(34) can be calculated as a sum of the partial experimental contributions:
$\int_{x_{i}}^{x_{i+1}}F_{2}^{p-n}(x)\,\frac{dx}{x}=\frac{x_{i+1}-x_{i}}{\langle
x_{i}\rangle}\,F_{2}^{p-n}(\langle x_{i}\rangle)\,.$ (36)
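The bin-by-bin evaluation of Eq. (36) can be sketched as follows; the bin edges and the toy structure-function values below are placeholders, not the NMC table.

```python
# Bin-by-bin evaluation of the truncated integral, Eq. (36): each experimental
# bin contributes (x_{i+1} - x_i)/<x_i> * F2^{p-n}(<x_i>).  The bin edges and
# the toy F2^{p-n} ~ 0.198 x^0.59 (cf. Eq. (37)) are illustrative placeholders.
import numpy as np

x_edges = np.array([0.004, 0.008, 0.0125, 0.0175, 0.025, 0.035])
x_mean = 0.5 * (x_edges[:-1] + x_edges[1:])   # stand-in for the measured <x_i>
F2ns = 0.198 * x_mean ** 0.59                 # toy structure-function values

def truncated_sum(x_edges, x_mean, F2ns):
    """Sum of the partial experimental contributions of Eq. (36)."""
    return float(np.sum(np.diff(x_edges) / x_mean * F2ns))

S_part = truncated_sum(x_edges, x_mean, F2ns)
print(S_part)
```

For a smooth integrand this discretization reproduces the continuum integral over the measured region to within a few percent, which is well below the quoted experimental uncertainties.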
In Table 1 we show our estimates of $S_{G}(0,1)$ obtained for two values of
$x_{1}$, $0.007$ and $0.015$, and the corresponding $r$ and $A_{0}$, up to
$x_{k}\approx 0.4$. The experimental value of $S_{G}(0,1)$, where the
small-$x$ contribution from the region $x<0.004$ is determined from the fit
described in Arneodo et al. (1994), is displayed in the last row.
Table 1: Estimations of the Gottfried sum rule value $S_{G}(0,1)$ obtained in the zero order approximation, Eqs. (34) and (35), for two values of $x_{1}$, $0.007$ and $0.015$, based on the NMC data at $Q^{2}=4$ ${\rm GeV}^{2}$ Arneodo et al. (1994). The ratio $r=x_{1}/x_{1+i}$, where $i=1,2,\ldots,8$. The experimental value of $S_{G}(0,1)$ is displayed in the last row.
$x_{1}=0.007$ | $x_{1}=0.015$
---|---
$r$ | $A_{0}$ | $S_{G}(0,1)$ | $r$ | $A_{0}$ | $S_{G}(0,1)$
$0.47$ | $2.0$ | $0.236$ | $0.50$ | $1.07$ | $0.223$
$0.23$ | $0.53$ | $0.230$ | $0.30$ | $0.94$ | $0.236$
$0.14$ | $0.48$ | $0.236$ | $0.19$ | $0.52$ | $0.232$
$0.09$ | $0.29$ | $0.234$ | $0.12$ | $0.34$ | $0.232$
$0.06$ | $0.20$ | $0.233$ | $0.09$ | $0.31$ | $0.236$
$0.04$ | $0.19$ | $0.236$ | $0.06$ | $0.22$ | $0.234$
$0.03$ | $0.14$ | $0.235$ | $0.04$ | $0.19$ | $0.235$
$0.02$ | $0.12$ | $0.235$ | $0.03$ | $0.17$ | $0.237$
EXP. NMC $S_{G}(0,1)=0.235\pm 0.026$
We obtain $S_{G}(0,1)=0.234\pm 0.003_{\,\rm SD}$. The low-$x$ contribution is
$S_{G}(0,0.004)=0.012\pm 0.004$. Here, we estimate the error for the low-$x$
contribution as an average deviation for the composed errors of
$S_{G}(0,0.004)$ calculated for the data sets summarized in Table 1. Hence, we
find finally $S_{G}(0,1)=0.234\pm 0.022$. This result is in good
agreement with the value provided by NMC. This means that in the case of the
NMC data, the Gottfried sum can be determined very effectively already in the
zero order approximation of the TMM approach.
In Fig. 5, we compare the NMC data for $F_{2}^{ns}/x$ and
$\int_{x}^{0.8}F_{2}^{ns}dx/x$ with the predictions of two parametrizations
based on global PDF fits, CTEQ6 Pumplin et al. (2002) and MSTW08 Martin et
al. (2009), and with our TMM estimate for the GSR. In the left panel we plot
also the low-$x$ fit function $f_{\rm fit}^{\rm NMC}$ to illustrate the
regular Regge behavior of the NMC data up to $x\approx 0.4$. We find
$\frac{1}{x}\left[F_{2}^{p}(x,Q^{2})-F_{2}^{n}(x,Q^{2})\right]_{\rm
NMC}\approx f_{\rm fit}^{\rm NMC}=ax^{b},\quad\quad a=0.198\pm 0.006,\quad
b=-0.410\pm 0.012,$ (37)
which is consistent with the form provided in Arneodo et al. (1994).
Figure 5: $F_{2}^{ns}(x,Q^{2})/x$ (left) and
$\int_{x}^{0.8}F_{2}^{ns}(x,Q^{2})\,dx/x$ (right) at $Q^{2}=4\,{\rm GeV^{2}}$.
A comparison of the NMC data to CTEQ6 Pumplin et al. (2002) and MSTW08 Martin
et al. (2009) predictions and also to the TMM estimation for the Gottfried sum
rule. The low-$x$ fit function $f_{\rm fit}^{\rm NMC}$ in the left panel has
the form Eq. (37).
In Table 2, we collect the contributions to the Gottfried sum rule, $\int
F_{2}^{ns}(x,Q^{2})\,dx/x$, obtained for different $x$ ranges at
$Q^{2}=4\,{\rm GeV}^{2}$. A comparison of the NMC data to TMM, CTEQ6 and
MSTW08 predictions is shown.
Table 2: The contributions to the Gottfried sum rule, $\int F_{2}^{ns}(x,Q^{2})\,dx/x$, integrated over different $x$ ranges at $Q^{2}=4\,{\rm GeV}^{2}$. Compared are the NMC data to TMM, CTEQ6 Pumplin et al. (2002) and MSTW08 Martin et al. (2009) predictions. (∗) The result contains a fit to the unmeasured region of small-$x$.
$x$ range | NMC | TMM | CTEQ6 | MSTW08
---|---|---|---|---
$0<x<1$ | $0.235\pm 0.026^{*}$ | $0.234\pm 0.022$ | $0.255$ | $0.274$
$0.004<x<0.8$ | $0.221\pm 0.021$ | | $0.231$ | $0.218$
$0<x<0.004$ | $0.013\pm 0.005^{*}$ | $0.012\pm 0.004$ | $0.023$ | $0.055$
$0.8<x<1$ | $0.001\pm 0.001$ | | $0.001$ | $0.001$
The small-$x$ contribution to the GSR in the unmeasured region $x<0.004$ was
determined by NMC using a fit to the data. Our method, which is based
entirely on the experimental data in the measured region of $x$, provides
almost the same result. The CTEQ6 prediction is slightly above the NMC
estimation for $S_{G}(0,0.004)$ while agreeing for the total GSR value and
also for the contribution from the measured region. In turn, the MSTW08
parametrization supports the experimental measurements but its predictions for
$S_{G}(0,0.004)$ and $S_{G}(0,1)$ are much larger than both the experimental
value and our estimate. The reason for this discrepancy is the small-$x$ behavior of
$\bar{d}-\bar{u}$ assumed by MSTW08 which implies a decrease of
$\int(\bar{d}-\bar{u})\,dx$ and hence the increase of $\int F_{2}^{ns}/x\,dx$
in this region in comparison to other global PDF fits. We discuss this
further in the next subsection, which is devoted to the E866 experiment.
### IV.2 E866
Fermilab experiment E866 Towell et al. (2001) was a fixed target experiment
that has measured the light sea quark asymmetry in the nucleon using Drell-Yan
process of di-muon production in 800 GeV proton interactions with hydrogen and
deuterium targets. From the data, the ratio $\bar{d}/\bar{u}$ was determined
over a wide range in Bjorken-$x$. The obtained results confirmed previous
measurements by E866/NuSea Hawker et al. (1998), which were the first
demonstration of a strong $x$-dependence of the $\bar{d}/\bar{u}$ ratio, and
extended them to lower $x$. To obtain the antiquark asymmetry
$\bar{d}-\bar{u}$ and also the integrated asymmetry
$\int(\bar{d}-\bar{u})\,dx$, E866 used their data for $\bar{d}/\bar{u}$ and
the PDF parametrization CTEQ5M Lai et al. (2000) for $\bar{d}+\bar{u}$. In
order to estimate the contribution from the unmeasured region $0<x<0.015$,
MRST Martin et al. (1998) and CTEQ5M fits were used. Moreover, it was assumed
that the contribution for $x>0.35$ was negligible.
In our TMM analysis, presented below, we use the experimental data only from
the measured region $0.015<x<0.35$. We compare our results with the E866 ones
and also with the predictions of the updated global parametrizations – CTEQ6
Pumplin et al. (2002) and MSTW08 Martin et al. (2009).
To determine the light sea quark asymmetry from the E866 data,
$\Delta(0,1)=\int^{1}_{0}\left[\bar{d}(x,Q^{2})-\bar{u}(x,Q^{2})\right]dx\,,$
(38)
we apply the universal first order approximation of the special truncated sum
$\Gamma$ method given by Eqs. (26)-(29). In terms of the experimental data,
these read
$\Delta(0,1)=\Delta(x_{1},1)^{\rm{E866}}+A_{I}\,\int_{x_{1}}^{x_{1}/r}[\bar{d}-\bar{u}](x)\,dx+(A_{I}+1)\,x_{1}\,[\bar{d}-\bar{u}](x_{1})-A_{I}\,\frac{x_{1}}{r}\,[\bar{d}-\bar{u}](x_{1}/r)\,,$ (39)
$A_{I}=\left[\frac{1}{r^{2}}\,\frac{f_{\rm fit}^{\prime}(x_{1}/r)}{f_{\rm fit}^{\prime}(x_{1})}-1\right]^{-1}\,,$ (40)
where the prime denotes a derivative with respect to $x$ of the fit function
$f_{\rm fit}^{\rm E866}$ to the E866 data on $\bar{d}-\bar{u}$:
$\left[\bar{d}(x,Q^{2})-\bar{u}(x,Q^{2})\right]_{\rm E866}\approx f_{\rm fit}^{\rm E866}=ax^{b}(1-x)^{c}(1+d\,x)\,,$
$a=0.55\pm 0.02,\quad b=-0.19\pm 0.02,\quad c=2.8\pm 0.3,\quad d=-3.7\pm 0.1\,.$ (41)
The fit function, which we use only for calculation of $A_{I}$ in Eq. (40), is
shown as a dotted line in the left panel of Fig. 6. All other quantities in
Eqs. (39) and (40) are directly provided by the data. Again, as in the NMC
analysis, $x_{1}$ denotes the smallest $\langle x\rangle$, and the ratio
$r=x_{1}/x_{k}$ is determined from the kinematics.
$\Delta(x_{1},1)^{\rm{E866}}$ denotes the contribution to the light sea
asymmetry from the measured region of $x$.
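The coefficient $A_{I}$ of Eq. (40) can be sketched numerically from the fit function of Eq. (41); the fit parameters below are the central values quoted above, while the chosen $x_{1}$ and $r$ are illustrative.

```python
# First-order coefficient A_I, Eq. (40), from the derivative of the E866 fit
# function, Eq. (41).  Fit parameters are the central values quoted in the
# text; the chosen x1 and r are illustrative.

def f_fit(x, a=0.55, b=-0.19, c=2.8, d=-3.7):
    """E866 fit to dbar - ubar, Eq. (41): a x^b (1-x)^c (1 + d x)."""
    return a * x ** b * (1 - x) ** c * (1 + d * x)

def dfdx(x, h=1e-7):
    """Central finite-difference derivative of the fit function."""
    return (f_fit(x + h) - f_fit(x - h)) / (2 * h)

def A_I(x1, r):
    return 1.0 / ((1.0 / r ** 2) * dfdx(x1 / r) / dfdx(x1) - 1.0)  # Eq. (40)

print(A_I(0.026, 0.68))   # close to the first-row value of Table 3
```

The result is close to the first-row entry of Table 3; small differences are expected since the table is built from the data points themselves rather than the fit's central values.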
Table 3: Estimations of the integrated quark asymmetry $\Delta(0,1)$ obtained in the first order approximation, Eqs. (39) and (40), for two values of $x_{1}$, $0.026$ and $0.038$, based on the E866 data at $Q^{2}=54$ ${\rm GeV}^{2}$, Towell et al. (2001). The ratio $r=x_{1}/x_{1+i}$, where $i=1,2,\ldots,8$. The experimental value of $\Delta(0,1)$ is displayed in the last row.
$x_{1}=0.026$ | $x_{1}=0.038$
---|---
$r$ | $A_{I}$ | $\Delta(0,1)$ | $r$ | $A_{I}$ | $\Delta(0,1)$
$0.68$ | $1.79$ | $0.098$ | $0.73$ | $2.19$ | $0.097$
$0.50$ | $0.79$ | $0.098$ | $0.57$ | $1.03$ | $0.105$
$0.39$ | $0.48$ | $0.101$ | $0.46$ | $0.67$ | $0.101$
$0.32$ | $0.35$ | $0.099$ | $0.39$ | $0.50$ | $0.103$
$0.27$ | $0.27$ | $0.101$ | $0.34$ | $0.40$ | $0.104$
$0.23$ | $0.22$ | $0.101$ | $0.30$ | $0.33$ | $0.101$
$0.20$ | $0.19$ | $0.100$ | $0.27$ | $0.29$ | $0.103$
$0.18$ | $0.17$ | $0.101$ | $0.24$ | $0.26$ | $0.105$
EXP. E866 $\Delta(0,1)=0.118\pm 0.012$
In Table 3 we show our estimates of $\Delta(0,1)$ obtained for two values of
$x_{1}$, $0.026$ and $0.038$, and the corresponding $r$ and $A_{I}$. To
minimize a possible error implied by the decrease of $r$ Kotlorz et al.
(2017), we carry out our analysis up to $x_{k}\approx 0.15$.
In Fig. 6, we compare the E866 data for $[\bar{d}-\bar{u}](x,Q^{2})$ and
$\int_{x}^{0.35}[\bar{d}-\bar{u}](x,Q^{2})\,dx$ with the predictions of two
parametrizations based on global PDF fits, CTEQ6 Pumplin et al. (2002)
and MSTW08 Martin et al. (2009), and with the TMM results for
$\int_{0}^{1}[\bar{d}-\bar{u}](x,Q^{2})\,dx$. In Table 4, we present a
comparison of the light sea quark asymmetry $\Delta$ integrated over different
$x$ ranges for the E866 data, TMM approach and CTEQ6 and MSTW08 predictions.
The TMM results are shown together with the average deviation of the composed
errors calculated from Eqs. (39) and (40) for the data sets from Table 3.
Figure 6: $[\bar{d}-\bar{u}](x,Q^{2})$ (left) and $\int_{x}^{0.35}[\bar{d}-\bar{u}](x,Q^{2})\,dx$ (right) at $Q^{2}=54\,{\rm GeV^{2}}$. A comparison of the E866 data to CTEQ6 Pumplin et al. (2002) and MSTW08 Martin et al. (2009) predictions and also to the TMM estimation for the integrated asymmetry of the light sea quarks $\int_{0}^{1}[\bar{d}-\bar{u}](x,Q^{2})\,dx$. The fit function $f_{\rm fit}^{\rm E866}$ in the left panel has the form Eq. (41).
Table 4: Integrated light sea quark asymmetry $\Delta$ over different $x$ ranges at $Q^{2}=54$ ${\rm GeV^{2}}$ obtained in the TMM approach. A comparison to the E866 data and CTEQ6 Pumplin et al. (2002) and MSTW08 Martin et al. (2009) predictions. (∗) The result contains a fit to the unmeasured region of small-$x$.
$x$ range | E866 | TMM | CTEQ6 | MSTW08
---|---|---|---|---
$0<x<1$ | $0.118\pm 0.012^{*}$ | $0.101\pm 0.016$ | $0.119$ | $0.089$
$0.015<x<0.35$ | $0.0803\pm 0.011$ | | $0.083$ | $0.077$
$0<x<0.015$ | $0.038\pm 0.004^{*}$ | $0.021\pm 0.012$ | $0.037$ | $0.014$
$0.35<x<1$ | $0$ | | $-0.001$ | $-0.002$
The low-$x$ contribution $\Delta(0,0.015)=0.021\pm 0.012$ obtained in our
analysis is substantially smaller than the E866 estimate obtained using
the combined fits MRST98 and CTEQ5M. It is also smaller than the CTEQ6
prediction but larger than the more recent global fit prediction of MSTW08.
Since the NMC and E866 data were used in the global fit analysis, the CTEQ6
and MSTW08 predictions are in a good agreement with these experimental data
from the measured $x$-region. The problem lies in the determination of the GSR and
$\Delta$ contributions coming from the unmeasured regions, especially from the
small-$x$ region. While all reasonable fits to the data assume
$\bar{u}=\bar{d}$ as $x\rightarrow 0$, this limit is approached differently
by different parametrizations. This is shown in the left panel of Fig. 6, where we
compare the E866 data on $\bar{d}(x)-\bar{u}(x)$ with CTEQ6 and MSTW08 NLO
fits at $Q^{2}=54\,{\rm GeV^{2}}$. Our result for the small-$x$ contribution
$\Delta(0,0.015)$ lies between the values of the CTEQ6 and MSTW08 predictions.
The MSTW08 parametrization of $\bar{d}(x)-\bar{u}(x)$ at $Q_{0}^{2}=1\,{\rm
GeV^{2}}$ goes to zero as $x^{0.8}$ at small $x$, and this behavior is not
excluded by the E866 data.
Let us finally comment on the discrepancy between the NMC and E866 data. To
this aim we use the TMM results, which exhibit an even larger discrepancy than
the results reported by NMC and E866. Namely, we compare $\Delta(0,1)$
calculated from the GSR value for the NMC data analysis with that obtained for
the E866 data. We have $\Delta(0,1,Q^{2}=4\,{\rm GeV^{2}})=0.149\pm 0.033$ vs
$\Delta(0,1,Q^{2}=54\,{\rm GeV^{2}})=0.101\pm 0.016$. Both results are
still in agreement with each other, and the difference between their central
values can be attributed to higher-twist effects. As described in
Section II.2, using the results obtained for the twist-4 coefficient
$H_{2}^{\tau=4}(x)$ for the nonsinglet function $F_{2}^{p-n}(x)$ Alekhin et
al. (2012), the difference for $\Delta(0,1)$ at $Q^{2}=4$ and $54\,{\rm
GeV^{2}}$ implied by the HT terms is $0.025\pm 0.022$. Using its central value
and taking into account also the perturbative QCD radiative corrections, Eqs.
(12a) and (12b), we are able to reduce the difference
$\Delta(0,1,Q^{2}=4\,{\rm GeV^{2}})-\Delta(0,1,Q^{2}=54\,{\rm
GeV^{2}})=0.048\pm 0.049$ by about $60\%$.
### IV.3 Second moment of $F_{2}^{p-n}$
The main aim of our paper is to study the Gottfried sum rule within the TMM
approach; nevertheless, we would finally like to briefly discuss our
predictions for the second moment of the structure function $F_{2}^{p-n}$,
$\int^{1}_{0}\left[F_{2}^{p}(x,Q^{2})-F_{2}^{n}(x,Q^{2})\right]\,dx=\frac{1}{3}\left(\langle
x\rangle_{u-d}+\langle x\rangle_{\bar{u}-\bar{d}}\right),$ (42)
where
$\langle
x\rangle_{u-d}\equiv\int^{1}_{0}x\left[u(x,Q^{2})-d(x,Q^{2})\right]dx\,.$ (43)
The latter, $\langle x\rangle_{u-d}$, the iso-vector quark momentum
fraction, has recently attracted considerable interest in lattice QCD
analyses. This interest, which has triggered many theoretical and phenomenological
investigations, is mainly motivated by a discrepancy of over $25\%$ between
the lattice predictions, $\langle x\rangle_{u-d}>0.2$, and the values obtained
from phenomenological fits to the experimental data, $0.15-0.17$ Bali et al.
(2014).
Below, we present our results for $\langle x\rangle_{u-d}$ at $Q^{2}=4\,{\rm
GeV^{2}}$ obtained within the TMM approach. Since the NMC data provide
only the sum $\langle x\rangle_{u-d}+\langle x\rangle_{\bar{u}-\bar{d}}$,
Eq. (42), we use combined results based on the NMC and E866 data. We also
take into account the $Q^{2}$ evolution effects for the E866 data, provided
at $Q^{2}=54\,{\rm GeV^{2}}$. To this aim, we correct the value of $\langle
x\rangle_{\bar{u}-\bar{d}}$ calculated from the E866 data by the mean
difference of $\langle x\rangle_{\bar{u}-\bar{d}}$ between the two $Q^{2}$
values, $4$ and $54$ ${\rm GeV^{2}}$, obtained from the CTEQ6 and MSTW08
fits. Thus, finally, we obtain $\langle x\rangle_{u-d}=0.165\pm 0.007$ and
$\langle x\rangle_{\bar{u}-\bar{d}}=0.007\pm 0.001$.
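Assembling the second moment of Eq. (42) from these TMM values can be sketched as follows; combining the two uncertainties in quadrature is our own assumption, not a prescription from the text.

```python
# Second moment of F2^{p-n}, Eq. (42), assembled from the TMM values quoted
# in the text.  Combining the two uncertainties in quadrature is an assumption
# made here for illustration.
from math import sqrt

x_ud, dx_ud = 0.165, 0.007      # <x>_{u-d} and its uncertainty
x_sea, dx_sea = 0.007, 0.001    # <x>_{ubar-dbar} and its uncertainty

m2 = (x_ud + x_sea) / 3.0                     # Eq. (42)
dm2 = sqrt(dx_ud ** 2 + dx_sea ** 2) / 3.0    # quadrature (assumption)
print(m2, dm2)
```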
In Table 5, we compare our TMM results for $\langle x\rangle_{u-d}$ at
$Q^{2}=4\,{\rm GeV^{2}}$ with the predictions of the world-wide fits CTEQ6 and
MSTW08, and also with the recent lattice result Abdel-Rehim et al. (2015).
Table 5: The iso-vector quark momentum fraction $\langle x\rangle_{u-d}$ at $Q^{2}=4$ ${\rm GeV^{2}}$ obtained in the TMM approach from the combined NMC and E866 data. A comparison to the global fit predictions CTEQ6 and MSTW08, and to the recent lattice result Abdel-Rehim et al. (2015).
TMM | CTEQ6 | MSTW08 | LATTICE
---|---|---|---
$0.165\pm 0.007$ | $0.158$ | $0.161$ | $0.208\pm 0.024$
For comparison, a recent analysis of DIS data from fixed-target
experiments on the structure function $F_{2}$, performed in the valence-quark
approximation at NNLO and incorporating the NMC result on the Gottfried sum
rule, provides $\langle x\rangle_{u-d}=0.187\pm 0.021$ Kotikov et al. (2018).
## V Conclusions
In this paper, based on the experimental NMC data on the nonsinglet structure
function $F_{2}^{p}-F_{2}^{n}$ at $Q^{2}=4\,{\rm GeV^{2}}$ Arneodo et al.
(1994), and E866 data on the $\bar{d}/\bar{u}$ asymmetry in the nucleon sea at
$Q^{2}=54\,{\rm GeV^{2}}$ Towell et al. (2001), we have reevaluated the
Gottfried sum rule Gottfried (1967). In our analysis, we used the truncated
moments approach in which, with the help of the special truncated sum, one can
overcome the problem of the unavoidable kinematic restrictions on the
Bjorken variable $x$ in studies of the fundamental integral characteristics
of the parton distributions Kotlorz et al. (2017).
Using only the data from the measured region of $x$, we obtained for the
Gottfried sum $\int_{0}^{1}F_{2}^{ns}/x\,dx=0.234\pm 0.022$, which is in very
good agreement with the value reported by NMC, and $0.101\pm 0.016$ for the
integrated nucleon sea asymmetry $\int_{0}^{1}(\bar{d}-\bar{u})\,dx$. The
latter, though still consistent with the E866 result $0.118\pm 0.012$, is
clearly smaller in its central value. This disagreement can be attributed to
the estimation of the contribution from the unmeasured region $0<x<0.015$.
Namely, our analysis of the data suggests a less steep small-$x$ behavior,
$(\bar{d}-\bar{u})\sim x^{-0.2}$, than the MRST and CTEQ5M parametrizations
used by E866 for the determination of
$\int_{0}^{0.015}(\bar{d}-\bar{u})\,dx$. For comparison, the more recent
global fit MSTW08, which also incorporates the E866 data, assumes the
small-$x$ behavior $(\bar{d}-\bar{u})\sim x^{0.8}$ and provides
$\int_{0}^{1}(\bar{d}-\bar{u})\,dx=0.09$, which agrees better with our
estimate.
We have also discussed the well-known discrepancy between the NMC and E866
results on $\int_{0}^{1}(\bar{d}-\bar{u})\,dx$. We demonstrated that this
discrepancy can be understood after taking into account the higher-twist
effects which become important in the case of the NMC data with a relatively
low $Q^{2}=4\,{\rm GeV^{2}}$. Using the results obtained for the twist-4
coefficient $H_{2}^{\tau=4}(x)$ for the nonsinglet function $F_{2}^{p-n}(x)$
Alekhin et al. (2012), we found that the HT effects can be responsible for the
difference of $0.025\pm 0.022$ between the two experimental results obtained
at the different $Q^{2}$ scales.
In the last point of our paper, we obtained in the TMM analysis the iso-vector
quark momentum fraction $\langle x\rangle_{u-d}=0.165\pm 0.007$, which agrees
well with the global fit predictions. We compared it also with the recent
lattice result.
Finally, we note that the presented analysis can be directly applied to
studies of the violation of the Callan-Gross relation and the quark-hadron
duality Christy and Melnitchouk (2011).
###### Acknowledgements.
D. K. thanks A. L. Kataev for useful comments. This work is supported by the
Bogoliubov–Infeld Program. D. K. and O. V. T. acknowledge the support of the
Collaboration Program JINR–Bulgaria.
## References
* Gottfried (1967) K. Gottfried, Phys. Rev. Lett. 18, 1174 (1967).
* Amaudruz et al. (1991) P. Amaudruz et al. (New Muon), Phys. Rev. Lett. 66, 2712 (1991).
* Arneodo et al. (1994) M. Arneodo et al. (New Muon), Phys. Rev. D 50, 1 (1994).
* Baldit et al. (1994) A. Baldit et al. (NA51), Phys. Lett. B 332, 244 (1994).
* Hawker et al. (1998) E. Hawker et al. (NuSea), Phys. Rev. Lett. 80, 3715 (1998), eprint hep-ex/9803011.
* Peng et al. (1998) J. Peng et al. (NuSea), Phys. Rev. D 58, 092004 (1998), eprint hep-ph/9804288.
* Towell et al. (2001) R. Towell et al. (NuSea), Phys. Rev. D 64, 052002 (2001), eprint hep-ex/0103030.
* Ackerstaff et al. (1998) K. Ackerstaff et al. (HERMES), Phys. Rev. Lett. 81, 5519 (1998), eprint hep-ex/9807013.
* Kumano (1998) S. Kumano, Phys. Rept. 303, 183 (1998), eprint hep-ph/9702367.
* Garvey and Peng (2001) G. T. Garvey and J.-C. Peng, Prog. Part. Nucl. Phys. 47, 203 (2001), eprint nucl-ex/0109010.
* Kotlorz et al. (2017) D. Kotlorz, S. Mikhailov, O. Teryaev, and A. Kotlorz, Phys. Rev. D 96, 016015 (2017), eprint 1704.04253.
* Soffer and Bourrely (2019) J. Soffer and C. Bourrely, Nucl. Phys. A 991, 121607 (2019).
* Peng et al. (2014) J.-C. Peng, W.-C. Chang, H.-Y. Cheng, T.-J. Hou, K.-F. Liu, and J.-W. Qiu, Phys. Lett. B 736, 411 (2014), eprint 1401.1705.
* Hinchliffe and Kwiatkowski (1996) I. Hinchliffe and A. Kwiatkowski, Ann. Rev. Nucl. Part. Sci. 46, 609 (1996), eprint hep-ph/9604210.
* Kataev and Parente (2003) A. Kataev and G. Parente, Phys. Lett. B 566, 120 (2003), eprint hep-ph/0304072.
* Alekhin et al. (2012) S. Alekhin, J. Blumlein, and S. Moch, Phys. Rev. D 86, 054009 (2012), eprint 1202.2281.
* Szczurek and Uleshchenko (2000) A. Szczurek and V. Uleshchenko, Phys. Lett. B 475, 120 (2000), eprint hep-ph/9911467.
* Kwiecinski (1996) J. Kwiecinski, Acta Phys. Polon. B 27, 893 (1996), eprint hep-ph/9511375.
* Badelek and Kwiecinski (1994) B. Badelek and J. Kwiecinski, Phys. Rev. D 50, 4 (1994), eprint hep-ph/9401314.
* Strozik-Kotlorz et al. (2017) D. Strozik-Kotlorz, S. Mikhailov, O. Teryaev, and A. Kotlorz, J. Phys. Conf. Ser. 938, 1 (2017), eprint 1710.10179.
* Kotlorz and Mikhailov (2019) D. Kotlorz and S. Mikhailov, Phys. Rev. D 100, 056007 (2019), eprint 1810.02973.
* Kotlorz et al. (2019) D. Kotlorz, S. Mikhailov, O. Teryaev, and A. Kotlorz, AIP Conf. Proc. 2075, 080007 (2019).
* Pumplin et al. (2002) J. Pumplin, D. Stump, J. Huston, H. Lai, P. M. Nadolsky, and W. Tung, JHEP 07, 012 (2002), eprint hep-ph/0201195.
* Martin et al. (2009) A. Martin, W. Stirling, R. Thorne, and G. Watt, Eur. Phys. J. C 63, 189 (2009), eprint 0901.0002.
* Lai et al. (2000) H. Lai, J. Huston, S. Kuhlmann, J. Morfin, F. I. Olness, J. Owens, J. Pumplin, and W. Tung (CTEQ), Eur. Phys. J. C 12, 375 (2000), eprint hep-ph/9903282.
* Martin et al. (1998) A. D. Martin, R. Roberts, W. Stirling, and R. Thorne, Eur. Phys. J. C 4, 463 (1998), eprint hep-ph/9803445.
* Bali et al. (2014) G. S. Bali, S. Collins, B. Gläßle, M. Göckeler, J. Najjar, R. H. Rödl, A. Schäfer, R. W. Schiel, A. Sternbeck, and W. Söldner, Phys. Rev. D 90, 074510 (2014), eprint 1408.6850.
* Abdel-Rehim et al. (2015) A. Abdel-Rehim et al., Phys. Rev. D 92, 114513 (2015), [Erratum: Phys.Rev.D 93, 039904 (2016)], eprint 1507.04936.
* Kotikov et al. (2018) A. Kotikov, V. Krivokhizhin, and B. Shaikhatdenov, Phys. Atom. Nucl. 81, 244 (2018), eprint 1612.06412.
* Christy and Melnitchouk (2011) M. Christy and W. Melnitchouk, J. Phys. Conf. Ser. 299, 012004 (2011), eprint 1104.0239.
# Transport Properties in Gapped Bilayer Graphene
N. Benlakhouy<EMAIL_ADDRESS>Laboratory of Theoretical Physics,
Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida,
Morocco A. El Mouhafid<EMAIL_ADDRESS>Laboratory of Theoretical
Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El
Jadida, Morocco A. Jellal<EMAIL_ADDRESS>Laboratory of Theoretical
Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El
Jadida, Morocco Canadian Quantum Research Center, 204-3002 32 Ave Vernon,
BC V1T 2L7, Canada
###### Abstract
We investigate transport properties through a rectangular potential barrier in
AB-stacked bilayer graphene (AB-BLG) gapped by dielectric layers. Using the
Dirac-like Hamiltonian with a transfer matrix approach we obtain transmission
and reflection probabilities as well as the associated conductance. For two-
band model and at normal incidence, we find extra resonances appearing in
transmission compared to biased AB-BLG, which are Fabry-Pérot resonance type.
Now by taking into account the inter-layer bias, we show that both of
transmission and anti-Klein tunneling are diminished. Regarding four band
model, we find that the gap suppresses transmission in an energy range by
showing some behaviors look like ”Mexican hats”. We examine the total
conductance and show that it is affected by the gap compared to AA-stacked
bilayer graphene. In addition, we find that the suppression in conductance is
more important than that for biased AB-BLG.
###### pacs:
73.22.Pr, 72.80.Vp, 73.63.-b
## I Introduction
The experimental realization of monolayer graphene (MLG) in 2004 by Novoselov
and Geim [1] opened up a new field in physics. Such material has attractive
electronic, optical, thermal, and mechanical properties, notably the
observation of Klein tunneling [3, 2], the anomalous quantum Hall effect [1, 4],
and optical transparency [5]. This makes graphene a good platform for
nanoscale adaptor applications [6]. Bilayer graphene (BLG) is a system formed
by two stacked sheets of graphene. Besides that, there are two distinct kinds
of stacking: AB-BLG (Bernal) [7] and AA-BLG. AB-BLG has a parabolic
dispersion relation with four bands, two of which touch at zero energy,
whereas the other two are split by the interlayer hopping parameter
$\gamma_{1}\approx 0.4$ eV [8]. This structure is much more stable, and its
high-quality samples are developed and studied theoretically and
experimentally [9, 10, 11, 12, 13]. AA-BLG has a linear gapless energy
spectrum with two Dirac cones shifted in energy by the quantity
$\gamma_{1}\approx 0.2$ eV [14], and because of this AA-BLG has attracted
enormous theoretical interest [15, 16, 17, 18, 19, 20]. Such a structure is
expected to be metastable; only recently were stable samples discovered
[21, 22, 23, 24].
AB-BLG may have clear advantages over MLG, due to greater possibilities for
tuning its physical properties. Examples include the quantum
Hall effect [9, 25], spin-orbit coupling and transverse electric field [26],
the transmission probability in the presence of static electric and magnetic
fields [27, 13], and quantum dots [28].
Experimentally, the evidence of Klein tunneling in MLG was confirmed [29, 3,
30, 31], which means that there is no electron confinement, so a gap must be
created to overcome this issue. In fact, many methods of inducing a band gap
in MLG have been elaborated, such as substrates [32, 33, 34, 35, 36,
37, 38, 39] and doping with impurities [40, 41]. Regarding AB-BLG, a band gap
can be realized by applying an external electric field [42, 9] or induced by
using dielectric materials like hexagonal boron nitride (h-BN) or SiC [44]. To
this end, it has been shown theoretically that a quantum spin Hall phase can
be identified in gapped AB-BLG even when the Rashba interaction approaches
zero [44].
The introduction of an inter-layer bias to AB-BLG opens a gap in the energy
spectrum and has a major effect on electronic properties [29]. Here, we
analyze the effects of a biased AB-BLG gapped by dielectric layers to show the
impact of the band gap on transport properties. The band gap is taken to be
the same in both layers of AB-BLG. Using the transfer matrix method together
with the current density, we calculate transmission and reflection
probabilities as well as the corresponding conductance. At low energy,
$E<\gamma_{1}$, and in the presence of the band gap $\Delta_{0}$, we find that
Fabry-Pérot resonances [48] appear strongly in the transmission. By also
including the inter-layer bias $\delta$, we show that the total transmission
and anti-Klein tunneling are significantly diminished. For energies exceeding
the inter-layer coupling
$\gamma_{1}$, $E>\gamma_{1}$, we obtain a new propagating mode giving rise
to four transmission channels. In this case, $\Delta_{0}$ suppresses the
transmission in the energy range
$V_{0}-(\Delta_{0}+\delta)<E<V_{0}+(\Delta_{0}+\delta)$ and produces
"Mexican hat"-like behavior. Finally, we find that the resulting
conductance in gapped AB-BLG is modified compared to gapped AA-BLG.
Moreover, the suppression in conductance is more pronounced than for biased
AB-BLG [29], because the energy range of null conductance grows as
$\Delta_{0}$ increases and the number of peaks is reduced.
The paper is organized as follows. In Sec. II we construct our theoretical
model describing biased and gapped AB-BLG, giving rise to four energy bands.
In Sec. III we explain in detail the formalism used in calculating
transmission and reflection probabilities together with the conductance. In
Sec. IV we numerically analyze our results and compare them with published
works on the topic. Finally, in Sec. V we summarize our main conclusions.
## II Theoretical model
Figure 1: The parameters of a rectangular barrier structure.
In the AB-stacked bilayer graphene the atom $B_{1}$ of the top layer is placed
directly below the atom $A_{2}$ of the bottom layer with van der Waals inter-
layer coupling parameter $\gamma_{1}$, while $A_{1}$ and $B_{2}$ do not lie
directly below or above each other. Based on [29, 44], we consider a biased
and gapped AB-BLG described by the following Hamiltonian near the K point
$\mathcal{H}=\begin{pmatrix}V_{0}+\vartheta_{1}&v_{F}\pi^{{\dagger}}&-v_{4}\pi^{{\dagger}}&v_{3}\pi\\\
v_{F}\pi&V_{0}+\vartheta_{2}&\gamma_{1}&-v_{4}\pi^{{\dagger}}\\\
-v_{4}\pi&\gamma_{1}&V_{0}-\vartheta_{2}&v_{F}\pi^{{\dagger}}\\\
v_{3}\pi^{{\dagger}}&-v_{4}\pi&v_{F}\pi&V_{0}-\vartheta_{1}\\\ \end{pmatrix}$
(1)
where $v_{F}=\frac{\gamma_{0}}{\hbar}\frac{3a}{2}\approx 10^{6}$ m/s is the
Fermi velocity of electrons in each graphene layer, $a=0.142$ nm is the
distance between adjacent carbon atoms,
$v_{3,4}=\frac{v_{F}\gamma_{3,4}}{\gamma_{0}}$ represent the coupling between
the layers, $\pi=p_{x}+ip_{y}$ and $\pi^{{\dagger}}=p_{x}-ip_{y}$ are the
in-plane momentum and its conjugate with $p_{x,y}=-i\hbar\partial_{x,y}$, and
$\gamma_{1}\approx 0.4$ eV is the interlayer coupling term. The electrostatic
potential $V_{0}$ of width $d$ (Fig. 1) can be varied on the $i$-th layer
using top and back gates on the sample. Here
$\vartheta_{1}=\delta+\Delta_{0}$ and $\vartheta_{2}=\delta-\Delta_{0}$,
where $\delta$ corresponds to an externally induced inter-layer potential
difference and $\Delta_{0}$ is the band gap.
The skew parameters $\gamma_{3}\approx 0.315$ eV and $\gamma_{4}\approx
0.044$ eV have a negligible effect on the band structure at high energy [25,
45]. Recently, it was shown that even at low energy these parameters also have
a negligible effect on the transmission [29]; hence we neglect them in our
calculations.
Under the above approximation and for a barrier potential configuration as
depicted in Fig. 1, the Hamiltonian (1) can be written as
$H=\left(\begin{array}[]{cccc}V_{0}+\vartheta_{1}&\nu_{F}\pi^{{\dagger}}&0&0\\\
\nu_{F}\pi&V_{0}+\vartheta_{2}&\gamma_{1}&0\\\
0&\gamma_{1}&V_{0}-\vartheta_{2}&\nu_{F}\pi^{{\dagger}}\\\
0&0&\nu_{F}\pi&V_{0}-\vartheta_{1}\\\ \end{array}\right)$ (2)
By considering the length scale $l=\hbar v_{F}/\gamma_{1}$, which represents
the inter-layer coupling length $l=1.64$ nm, we define the dimensionless
quantities: $x\equiv x/l$ and $k_{y}\equiv lk_{y}$ together with
$\delta\equiv\frac{\delta}{\gamma_{1}}$,
$\Delta_{0}\equiv\frac{\Delta_{0}}{\gamma_{1}}$,
$E\equiv\frac{E}{\gamma_{1}}$, $V_{0}\equiv\frac{V_{0}}{\gamma_{1}}$. The
eigenstates of Eq. (2) are four-components spinors
$\psi(x,y)=[{\psi}_{A_{1}},{\psi}_{B_{1}},{\psi}_{A_{2}},{\psi}_{B_{2}}]^{{\dagger}}$,
where ${\dagger}$ denotes the transpose of the row vector. As a consequence of
the translational invariance along the $y$-direction, we have $[H,p_{y}]=0$,
and we can decompose the spinor as
$\psi(x,y)=e^{ik_{y}y}\left[\phi_{A_{1}}(x),\phi_{B_{1}}(x),\phi_{A_{2}}(x),\phi_{B_{2}}(x)\right]^{T}$
(3)
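As a quick sanity check on the scales used above, the coupling length $l=\hbar v_{F}/\gamma_{1}$ can be evaluated numerically. A minimal sketch, using the standard constant values quoted in the text:

```python
# Inter-layer coupling length l = hbar * v_F / gamma_1 for AB-BLG,
# with the parameter values quoted in the text.
HBAR = 6.582119569e-16   # reduced Planck constant [eV s]
V_F = 1.0e6              # Fermi velocity [m/s]
GAMMA_1 = 0.4            # inter-layer coupling gamma_1 [eV]

def coupling_length_nm(hbar=HBAR, v_f=V_F, gamma_1=GAMMA_1):
    """Length scale l = hbar * v_F / gamma_1, returned in nanometers."""
    return hbar * v_f / gamma_1 * 1e9

print(f"l = {coupling_length_nm():.2f} nm")  # close to the 1.64 nm quoted above
```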
We solve the time-independent Schrödinger equation $H\psi=E\psi$ to obtain a
general solution in region II and then set $V_{0}=\delta=\Delta_{0}=0$
to derive the solutions in regions I and III. Indeed, by substituting Eq.
(2) and Eq. (3) we get four coupled differential equations
$\displaystyle-i(\partial_{x}+k_{y})\phi_{B_{1}}$ $\displaystyle=$
$\displaystyle\varepsilon_{1}\phi_{A_{1}}$ (4a)
$\displaystyle-i(\partial_{x}-k_{y})\phi_{A_{1}}$ $\displaystyle=$
$\displaystyle\varepsilon_{2}\phi_{B_{1}}-\phi_{A_{2}}$ (4b)
$\displaystyle-i(\partial_{x}+k_{y})\phi_{B_{2}}$ $\displaystyle=$
$\displaystyle\varepsilon_{3}\phi_{A_{2}}-\phi_{B_{1}}$ (4c)
$\displaystyle-i(\partial_{x}-k_{y})\phi_{A_{2}}$ $\displaystyle=$
$\displaystyle\varepsilon_{4}\phi_{B_{2}}$ (4d)
where we have set $\varepsilon_{1}=\varepsilon-\vartheta_{1}$,
$\varepsilon_{2}=\varepsilon-\vartheta_{2}$,
$\varepsilon_{3}=\varepsilon+\vartheta_{2}$,
$\varepsilon_{4}=\varepsilon+\vartheta_{1}$ and $\varepsilon=E-V_{0}$. We
solve Eq. (4a) for $\phi_{A_{1}}$, Eq. (4d) for $\phi_{B_{2}}$ and substitute
the results in Eqs. (4b,4c). This process yields
$\displaystyle(\partial_{x}^{2}-k_{y}^{2}+\varepsilon_{1}\varepsilon_{2})\phi_{B_{1}}$
$\displaystyle=$ $\displaystyle\varepsilon_{1}\phi_{A_{2}}$ (5a)
$\displaystyle(\partial_{x}^{2}-k_{y}^{2}+\varepsilon_{3}\varepsilon_{4})\phi_{A_{2}}$
$\displaystyle=$ $\displaystyle\varepsilon_{4}\phi_{B_{1}}$ (5b)
Then for constant parameters, the energy bands are solutions of the following
equation
$\left[-k^{2}+\varepsilon_{1}\varepsilon_{2}\right]\left[-k^{2}+\varepsilon_{3}\varepsilon_{4}\right]-\varepsilon_{1}\varepsilon_{4}=0$
(6)
such that $k=\sqrt{k_{x}^{2}+k_{y}^{2}}$ and the four possible wave vectors
are given by
$k^{s}_{x}=\sqrt{-k_{y}^{2}+\varepsilon^{2}+\delta^{2}-\Delta_{0}^{2}\pm\sqrt{\varepsilon^{2}(1+4\delta^{2})-(\delta+\Delta_{0})^{2}}}$
(7)
where $s=\pm$ defines the modes of propagation, which will be discussed in the
numerical section. Therefore, the four energy bands can be derived as
$\displaystyle\varepsilon^{s}_{\pm}=s\sqrt{k^{2}+\delta^{2}+\Delta_{0}^{2}+\frac{1}{2}\pm\sqrt{k^{2}\left(1+4\delta^{2}\right)+\left(\frac{1}{2}-2\delta\Delta_{0}\right)^{2}}}$
(8)
At this stage, some comments are in order. First, by taking
$\delta=0$, (8) reduces to
$\varepsilon^{s}_{\pm}|_{\delta=0}=s\sqrt{k^{2}+\Delta_{0}^{2}+\frac{1}{2}\pm\sqrt{k^{2}+\frac{1}{4}}}$
(9)
Secondly, for the case $\Delta_{0}=0$, we recover the result of Van Duppen et al. [29]
$\displaystyle\varepsilon^{s}_{\pm}|_{\Delta_{0}=0}=s\sqrt{k^{2}+\delta^{2}+\frac{1}{2}\pm\sqrt{k^{2}\left(1+4\delta^{2}\right)+\frac{1}{4}}}$
(10)
Now by comparing (9) and (10), we clearly notice that the two quantities $\delta$
and $\Delta_{0}$ induce different gaps in the energy spectrum. This
difference will affect the transmission probabilities (Figs. 3, 4) as
well as the conductance (Fig. 7).
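The dispersion (8) can be cross-checked against the characteristic equation (6): inserting each band $\varepsilon^{s}_{\pm}(k)$ back into (6) must give a vanishing residual. A minimal numerical sketch in the dimensionless units above ($\gamma_{1}=1$, and $V_{0}=0$ so that $\varepsilon=E$); the parameter values are arbitrary test inputs, not taken from the figures:

```python
import numpy as np

def bands(k, delta, gap):
    """The four energy bands of Eq. (8) at momentum k (units of gamma_1)."""
    inner = np.sqrt(k**2 * (1 + 4 * delta**2) + (0.5 - 2 * delta * gap) ** 2)
    return [s * np.sqrt(k**2 + delta**2 + gap**2 + 0.5 + p * inner)
            for s in (+1, -1) for p in (+1, -1)]

def residual_eq6(eps, k, delta, gap):
    """Left-hand side of the characteristic equation (6)."""
    e1, e2 = eps - delta - gap, eps - delta + gap   # eps -+ theta_1, theta_2
    e3, e4 = eps + delta - gap, eps + delta + gap
    return (-k**2 + e1 * e2) * (-k**2 + e3 * e4) - e1 * e4

k, delta, gap = 0.7, 0.3, 0.5
for eps in bands(k, delta, gap):
    assert abs(residual_eq6(eps, k, delta, gap)) < 1e-10

# Limiting case: pristine AB-BLG (delta = gap = 0) at k = 0 has bands {0, +-1},
# i.e. two bands touching at zero and two split off by gamma_1.
print(sorted(abs(e) for e in bands(0.0, 0.0, 0.0)))
```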
Figure 2: Energy spectrum of AB-stacked graphene bilayer inside (solid curves)
and outside (dashed curves) the barrier. Here blue (brown) curves correspond
to $k^{+}(k^{-})$ propagating modes for biased and gapped $(V_{0}\neq
0,\delta\neq 0,\Delta_{0}\neq 0)$ systems. $\delta^{\prime}=\delta+\Delta_{0}$
and $\gamma_{1}^{\prime}=\sqrt{\gamma_{1}^{2}+(\delta-\Delta_{0})^{2}}$.
It is known that the perfect AB-BLG has a parabolic dispersion relation with
four bands, of which two touch each other at $k=0$. In Fig. 2 we show the
energy bands as a function of the momentum $k_{y}$, for the biased and gapped
AB-BLG. We observe that when the AB-BLG is subjected to a gap $\Delta_{0}$ and
an inter-layer bias $\delta$ the two bands are switched and placed at
$V_{0}\pm\sqrt{\gamma_{1}^{2}+(\delta-\Delta_{0})^{2}}$, and the touching
bands are shifted by $2\delta^{\prime}=2(\delta+\Delta_{0})$. One should
notice that there are two cases related to whether the wave vector
$k^{s}_{0}=\sqrt{-k_{y}^{2}+\varepsilon^{2}\pm\varepsilon}$ is real or
imaginary. Indeed, for $E<\gamma_{1}$ only $k^{+}_{0}$ is real, so
propagation is possible only in the $k^{+}_{0}$ mode. However, when
$E>\gamma_{1}$, both $k^{\pm}_{0}$ are real, giving rise to a new propagation
mode.
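This mode counting follows directly from $k^{s}_{0}=\sqrt{-k_{y}^{2}+\varepsilon^{2}\pm\varepsilon}$; a small sketch in dimensionless units ($\gamma_{1}=1$):

```python
import cmath

def k0_modes(E, ky=0.0):
    """Outside-barrier wave vectors k0^+ and k0^- (complex in general)."""
    return tuple(cmath.sqrt(-ky**2 + E**2 + s * E) for s in (+1, -1))

def n_propagating(E, ky=0.0, tol=1e-12):
    """Number of real (propagating) modes at energy E (units of gamma_1)."""
    return sum(abs(k.imag) < tol for k in k0_modes(E, ky))

print(n_propagating(0.5))  # 1: only the k+ mode propagates for E < gamma_1
print(n_propagating(1.5))  # 2: both modes propagate for E > gamma_1
```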
Concerning the eigenspinors in region II, we show that the solution of
Eqs. (5) is a superposition of plane waves generated by
$\phi_{B_{1}}^{2}=a_{1}e^{ik_{x}^{+}x}+a_{2}e^{-ik_{x}^{+}x}+a_{3}e^{ik_{x}^{-}x}+a_{4}e^{-ik_{x}^{-}x}$
(11)
where $a_{n}$, $n=1,\cdots,4$, are normalization coefficients. The
remaining components of the eigenspinors can be obtained as
$\displaystyle\phi_{A_{1}}^{2}$
$\displaystyle=a_{1}\Lambda^{+}_{+}e^{ik_{x}^{+}x}+a_{2}\Lambda^{+}_{-}e^{-ik_{x}^{+}x}+a_{3}\Lambda^{-}_{+}e^{ik_{x}^{-}x}+a_{4}\Lambda^{-}_{-}e^{-ik_{x}^{-}x}$
(12) $\displaystyle\phi_{A_{2}}^{2}$
$\displaystyle=a_{1}\rho^{+}e^{ik_{x}^{+}x}+a_{2}\rho^{+}e^{-ik_{x}^{+}x}+a_{3}\rho^{-}e^{ik_{x}^{-}x}+a_{4}\rho^{-}e^{-ik_{x}^{-}x}$
(13) $\displaystyle\phi_{B_{2}}^{2}$
$\displaystyle=a_{1}\chi^{+}_{+}\rho^{+}e^{ik_{x}^{+}x}+a_{2}\chi^{+}_{-}\rho^{+}e^{-ik_{x}^{+}x}+a_{3}\chi^{-}_{+}\rho^{-}e^{ik_{x}^{-}x}+a_{4}\chi^{-}_{-}\rho^{-}e^{-ik_{x}^{-}x}$
(14)
where we have introduced the quantities $\Lambda^{\pm}_{\pm}=\frac{-ik_{y}\pm
k_{x}^{\pm}}{\varepsilon-\vartheta_{1}}$,
$\rho^{\pm}=\frac{(\varepsilon-\vartheta_{1})(\varepsilon-\vartheta_{2})-k_{y}^{2}-(k_{x}^{\pm})^{2}}{\varepsilon-\vartheta_{1}}$,
$\chi^{\pm}_{\pm}=\frac{ik_{y}\pm k_{x}^{\pm}}{\varepsilon+\vartheta_{1}}$. In
matrix notation, the general solution of our system in region II can be
written as
$\psi_{2}(x,y)=\mathcal{G}_{2}\cdot\mathcal{M}_{2}(x)\cdot\mathcal{C}_{2}\
e^{ik_{y}y}$ (15)
where the four-component vector ${\cal{C}}_{2}$ represents the coefficients
$a_{n}$ expressing the relative weights of the different traveling modes,
which have to be set according to the propagating region [29]. The matrices
$\mathcal{M}_{2}(x)$ and $\mathcal{G}_{2}$ are given by
$\mathcal{G}_{2}=\begin{pmatrix}1&1&1&1\\\
\Lambda^{+}_{+}&\Lambda^{+}_{-}&\Lambda^{-}_{+}&\Lambda^{-}_{-}\\\
\rho^{+}&\rho^{+}&\rho^{-}&\rho^{-}\\\
\chi^{+}_{+}\rho^{+}&\chi^{+}_{-}\rho^{+}&\chi^{-}_{+}\rho^{-}&\chi^{-}_{-}\rho^{-}\\\
\end{pmatrix},\qquad\mathcal{M}_{2}(x)=\begin{pmatrix}e^{ik_{x}^{+}x}&0&0&0\\\
0&e^{-ik_{x}^{+}x}&0&0\\\ 0&0&e^{ik_{x}^{-}x}&0\\\ 0&0&0&e^{-ik_{x}^{-}x}\\\
\end{pmatrix},\qquad\mathcal{C}_{2}=\begin{pmatrix}a_{1}\\\ a_{2}\\\ a_{3}\\\
a_{4}\\\ \end{pmatrix}$ (16)
As claimed before, to get the solutions in the other regions we set
$V_{0}=\delta=\Delta_{0}=0$. Then the eigenspinors in region I are
$\displaystyle\phi_{A_{1}}^{1}$
$\displaystyle=\delta_{s,1}e^{ik^{+}_{0}x}+r^{s}_{+}e^{-ik^{+}_{0}x}+\delta_{s,-1}e^{ik^{-}_{0}x}+r^{s}_{-}e^{-ik^{-}_{0}x}$
(17) $\displaystyle\phi_{B_{1}}^{1}$
$\displaystyle=\delta_{s,1}\Lambda^{+}_{-}e^{ik^{+}_{0}x}+r^{s}_{+}\Lambda^{+}_{+}e^{-ik^{+}_{0}x}+\delta_{s,-1}\Lambda^{-}_{+}e^{ik^{-}_{0}x}+r^{s}_{-}\Lambda^{-}_{-}e^{-ik^{-}_{0}x}$
(18) $\displaystyle\phi_{A_{2}}^{1}$
$\displaystyle=\delta_{s,1}\rho^{+}e^{ik^{+}_{0}x}+r^{s}_{+}\rho^{+}e^{-ik^{+}_{0}x}+\delta_{s,-1}\rho^{-}e^{ik^{-}_{0}x}+r^{s}_{-}\rho^{-}e^{-ik^{-}_{0}x}$
(19) $\displaystyle\phi_{B_{2}}^{1}$
$\displaystyle=\delta_{s,1}\chi^{+}_{+}\rho^{+}e^{ik^{+}_{0}x}+r^{s}_{+}\rho^{+}\chi^{+}_{-}e^{-ik^{+}_{0}x}+\delta_{s,-1}\rho^{-}\chi^{-}_{+}e^{ik^{-}_{0}x}+r^{s}_{-}\rho^{-}\chi^{-}_{-}e^{-ik^{-}_{0}x}$
(20)
and in region III read as
$\displaystyle\phi_{A_{1}}^{3}$
$\displaystyle=t^{s}_{+}e^{ik^{+}_{0}x}+t^{s}_{-}e^{ik^{-}_{0}x}$ (21)
$\displaystyle\phi_{B_{1}}^{3}$
$\displaystyle=t^{s}_{+}\Lambda^{+}_{-}e^{ik^{+}_{0}x}+t^{s}_{-}\Lambda^{-}_{+}e^{ik^{-}_{0}x}$
(22) $\displaystyle\phi_{A_{2}}^{3}$
$\displaystyle=t^{s}_{+}\rho^{+}e^{ik^{+}_{0}x}+t^{s}_{-}\rho^{-}e^{ik^{-}_{0}x}$
(23) $\displaystyle\phi_{B_{2}}^{3}$
$\displaystyle=t^{s}_{+}\chi^{+}_{+}\rho^{+}e^{ik^{+}_{0}x}+t^{s}_{-}\chi^{-}_{+}\rho_{-}e^{ik^{-}_{0}x}$
(24)
Since the potential is zero in regions I and III, we have the relation
$\mathcal{G}_{1}\cdot\mathcal{M}_{1}(x)=\mathcal{G}_{3}\cdot\mathcal{M}_{3}(x)$.
In what follows we show how the above results are used to determine different
physical quantities; specifically, we focus on the transmission and reflection
probabilities as well as the conductance.
## III Transmission probability and conductance
To determine the transmission and reflection probabilities, we impose the
appropriate boundary conditions in the context of the transfer matrix approach
[46, 47]. Continuity of the spinors at interfaces gives the components of the
vectors
$\mathcal{C}_{1}^{s}=\begin{pmatrix}\delta_{s,1}\\\ r_{+}^{s}\\\
\delta_{s,-1}\\\ r_{-}^{s}\\\
\end{pmatrix},\qquad\mathcal{C}_{3}^{s}=\begin{pmatrix}t_{+}^{s}\\\ 0\\\
t_{-}^{s}\\\ 0\\\ \end{pmatrix}$ (25)
where $\delta_{s,\pm 1}$ is the Kronecker symbol. The continuity at $x=0$ and
$x=d$ can be written in matrix notation as
$\displaystyle\mathcal{G}_{1}\cdot\mathcal{M}_{1}(0)\cdot\mathcal{C}_{1}^{s}=\mathcal{G}_{2}\cdot\mathcal{M}_{2}(0)\cdot\mathcal{C}_{2}$
(26)
$\displaystyle\mathcal{G}_{2}\cdot\mathcal{M}_{2}(d)\cdot\mathcal{C}_{2}=\mathcal{G}_{3}\cdot\mathcal{M}_{3}(d)\cdot\mathcal{C}_{3}^{s}$
(27)
Using the transfer matrix method together with the relation
$\mathcal{G}_{1}\cdot\mathcal{M}_{1}(x)=\mathcal{G}_{3}\cdot\mathcal{M}_{3}(x)$
we can connect $\mathcal{C}_{1}^{s}$ with $\mathcal{C}_{3}^{s}$ through the
matrix $\mathcal{N}$
$\mathcal{C}_{1}^{s}=\mathcal{N}\cdot\mathcal{C}_{3}^{s}$ (28)
where
$\mathcal{N}=\mathcal{G}_{1}^{-1}\cdot\mathcal{G}_{2}\cdot\mathcal{M}_{2}^{-1}(d)\cdot\mathcal{G}_{2}^{-1}\cdot\mathcal{G}_{1}\cdot\mathcal{M}_{1}(d)$
(29)
Consequently, the transmission and reflection coefficients can be derived from
$\left(\begin{array}[]{cccc}t_{+}^{s}\\\ r_{+}^{s}\\\ t_{-}^{s}\\\
r_{-}^{s}\\\
\end{array}\right)=\left(\begin{array}[]{cccc}\mathcal{N}_{11}&0&\mathcal{N}_{13}&0\\\
\mathcal{N}_{21}&-1&\mathcal{N}_{23}&0\\\
\mathcal{N}_{31}&0&\mathcal{N}_{33}&0\\\
\mathcal{N}_{41}&0&\mathcal{N}_{43}&-1\\\
\end{array}\right)^{-1}\left(\begin{array}[]{cccc}\delta_{s,1}\\\ 0\\\
\delta_{s,-1}\\\ 0\\\ \end{array}\right)$ (30)
where $\mathcal{N}_{ij}$ are the matrix elements of $\mathcal{N}$. Then, after
some algebra, we obtain the transmission and reflection coefficients
$\displaystyle t_{+}^{s}$
$\displaystyle=\frac{\delta_{s,-1}\mathcal{N}_{13}-\delta_{s,1}\mathcal{N}_{33}}{\mathcal{N}_{13}\mathcal{N}_{31}-\mathcal{N}_{11}\mathcal{N}_{33}},\qquad
t_{-}^{s}=\frac{-\delta_{s,-1}\mathcal{N}_{11}+\delta_{s,1}\mathcal{N}_{31}}{\mathcal{N}_{13}\mathcal{N}_{31}-\mathcal{N}_{11}\mathcal{N}_{33}}$
(31) $\displaystyle r_{+}^{s}$
$\displaystyle=\mathcal{N}_{21}t_{+}^{s}+\mathcal{N}_{23}t_{-}^{s},\qquad
r_{-}^{s}=\mathcal{N}_{41}t_{+}^{s}+\mathcal{N}_{43}t_{-}^{s}$ (32)
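The chain (26)-(32) can be turned into a short numerical routine. The sketch below builds $\mathcal{G}$ and $\mathcal{M}(x)$ from the component expansions (11)-(14) (note the $\Lambda^{+}_{\pm}$ ordering of Eq. (12)), assembles $\mathcal{N}$ as in Eq. (29), and runs a convention-independent sanity check: a vanishing barrier ($V_{0}=\delta=\Delta_{0}=0$) must give $\mathcal{N}=\mathbb{1}$ and hence perfect transmission $t^{+}_{+}=1$ from Eq. (31). Dimensionless units ($\gamma_{1}=1$); the test energies are arbitrary:

```python
import numpy as np

def kx_pm(eps, ky, delta, gap):
    """Complex wave vectors k_x^+/- of Eq. (7) for a region with local energy eps."""
    inner = np.sqrt(complex(eps**2 * (1 + 4 * delta**2) - (delta + gap) ** 2))
    return [np.sqrt(-ky**2 + eps**2 + delta**2 - gap**2 + s * inner)
            for s in (+1, -1)]

def G_matrix(eps, ky, delta, gap):
    """Spinor-amplitude matrix G, built from the expansions (11)-(14)."""
    t1, t2 = delta + gap, delta - gap          # theta_1, theta_2
    kp, km = kx_pm(eps, ky, delta, gap)
    lam = lambda kx, s: (-1j * ky + s * kx) / (eps - t1)            # Lambda
    rho = lambda kx: ((eps - t1) * (eps - t2) - ky**2 - kx**2) / (eps - t1)
    chi = lambda kx, s: (1j * ky + s * kx) / (eps + t1)             # chi
    return np.array([
        [1, 1, 1, 1],
        [lam(kp, +1), lam(kp, -1), lam(km, +1), lam(km, -1)],
        [rho(kp), rho(kp), rho(km), rho(km)],
        [chi(kp, +1) * rho(kp), chi(kp, -1) * rho(kp),
         chi(km, +1) * rho(km), chi(km, -1) * rho(km)],
    ], dtype=complex)

def M_matrix(x, eps, ky, delta, gap):
    """Diagonal propagation matrix M(x) of Eq. (16)."""
    kp, km = kx_pm(eps, ky, delta, gap)
    return np.diag(np.exp([1j * kp * x, -1j * kp * x, 1j * km * x, -1j * km * x]))

def N_matrix(E, V0, delta, gap, ky, d):
    """Transfer matrix N of Eq. (29) linking C_1 and C_3."""
    G1 = G_matrix(E, ky, 0.0, 0.0)             # outer regions: V0 = delta = gap = 0
    G2 = G_matrix(E - V0, ky, delta, gap)      # barrier region
    inv = np.linalg.inv
    return (inv(G1) @ G2 @ inv(M_matrix(d, E - V0, ky, delta, gap))
            @ inv(G2) @ G1 @ M_matrix(d, E, ky, 0.0, 0.0))

# Sanity check: no barrier -> N is the identity, and Eq. (31) gives t^+_+ = 1.
N = N_matrix(E=0.5, V0=0.0, delta=0.0, gap=0.0, ky=0.2, d=5.0)
assert np.allclose(N, np.eye(4), atol=1e-8)
t_pp = -N[2, 2] / (N[0, 2] * N[2, 0] - N[0, 0] * N[2, 2])  # t^+_+ for s = +1
print(abs(t_pp))  # ~1: perfect transmission through a vanishing barrier
```

The no-barrier check is independent of sign and ordering conventions: with $\mathcal{G}_{1}=\mathcal{G}_{2}$ and $\mathcal{M}_{1}=\mathcal{M}_{2}$, Eq. (29) collapses to $\mathcal{M}^{-1}(d)\mathcal{M}(d)=\mathbb{1}$ for any consistent construction of the matrices.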
To calculate the transmission and reflection probabilities, we have to take
into account the change in velocity of the waves when they are scattered into
a different propagation mode. For this, it is convenient to use the current
density $\bm{J}$
$\bm{J}=v_{F}\bm{\psi}^{\dagger}\begin{pmatrix}\sigma_{x}&0\\\ 0&\sigma_{x}\\\
\end{pmatrix}\bm{\psi}$ (33)
where $\sigma_{x}$ is the Pauli matrix. Then Eq. (33) gives the incident
$\bm{J}_{x}^{\text{inc}}$, reflected $\bm{J}_{x}^{\text{ref}}$ and transmitted
$\bm{J}_{x}^{\text{tra}}$ current densities. Finally the transmission $T$ and
reflection $R$ probabilities are
$T^{s}_{\pm}=\frac{k^{\pm}_{0}}{k^{s}_{0}}|t^{s}_{\pm}|^{2},\qquad
R^{s}_{\pm}=\frac{k^{\pm}_{0}}{k^{s}_{0}}|r^{s}_{\pm}|^{2}$ (34)
To ensure current conservation, $T$ and $R$ are normalized as
$\sum_{i,j}\left(T^{j}_{i}+R^{j}_{i}\right)=1$ (35)
where the index $i=\pm$ labels the incoming mode and the index $j=\pm$ labels
the outgoing mode. For example, for the channel $k^{+}$ this gives
$T^{+}_{+}+T^{-}_{+}+R^{+}_{+}+R^{-}_{+}=1$. As already mentioned, for
$E>\gamma_{1}$ we have two modes of propagation ($k^{+}_{0},k^{-}_{0}$)
leading to four transmission $T^{s}_{\pm}$ and four reflection $R^{s}_{\pm}$
channels through the four conduction bands. For sufficiently low energy, i.e.
in the two-band model $E<\gamma_{1}$, the two modes lead to one transmission
channel $T$ and one reflection channel $R$.
From the transmission probabilities, we can calculate the conductance $G$, at
zero temperature, using the Landauer-Büttiker formula
$G(E)=G_{0}\frac{L_{y}}{2\pi}\int_{-\infty}^{\infty}dk_{y}\sum_{i,j=\pm}T_{i}^{j}\left(E,k_{y}\right)$
(36)
with $L_{y}$ the length of the sample in the $y$-direction, and
$G_{0}=4e^{2}/h$. The factor $4$ comes from the valley and spin degeneracies
in graphene. In order to get the total conductance of the system, we need to
sum over all the transmission channels
$G_{T}=\sum_{i,j}G^{j}_{i}$ (37)
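Given any transmission map $T^{j}_{i}(E,k_{y})$, the zero-temperature conductance (36) at fixed $E$ is a one-dimensional integral over $k_{y}$. A hedged sketch of the quadrature step only: the step-function transmission used here is a placeholder, not the AB-BLG result, chosen so that the integral has the known value $2k_{F}$:

```python
import numpy as np

def conductance(T_of_ky, ky_max=2.0, n=20001):
    """Dimensionless conductance (2*pi/L_y) * G/G_0 from Eq. (36):
    rectangle-rule integral of the summed transmission over k_y."""
    ky = np.linspace(-ky_max, ky_max, n)
    return np.sum(T_of_ky(ky)) * (ky[1] - ky[0])

# Placeholder transmission: one fully open channel for |k_y| < k_F.
k_F = 1.0
T_open = lambda ky: (np.abs(ky) < k_F).astype(float)
g = conductance(T_open)
print(g)  # close to 2 * k_F = 2.0
```

In practice $T_{\text{of\_ky}}$ would sum the four channels $T^{j}_{i}(E,k_{y})$ obtained from the transfer-matrix calculation, and the total conductance (37) is the sum of the per-channel integrals.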
## IV NUMERICAL RESULTS AND DISCUSSION
In this section, we numerically analyze and discuss our main results. First,
we evaluate the transmission probability in the two-band model at normal
incidence (i.e. $k_{y}=0$). To better understand our system, in Fig. 3 we
present the effect of the band gap $\Delta_{0}$ on the transmission as a
function of the incident energy $E$ and the width $d$ of the barrier. In the
left panel, we plot the energy dependence of the transmission probability
for barriers of width $d=10$ nm, $d=25$ nm, and $d=100$ nm, for unbiased
($\delta=0$) and biased ($\delta\neq 0$) systems with band gap $\Delta_{0}$. For
$\Delta_{0}\neq 0$, we observe the appearance of resonances in the transmission
probability in the energy range $E<V_{0}-\delta^{\prime}$,
$\delta^{\prime}=\delta+\Delta_{0}$, which can be attributed to the finite
size of the AB-BLG as well as the presence of charge carriers with different
chirality. These phenomena are known as Fabry-Pérot resonances [48]. In the
energy range $V_{0}-\delta^{\prime}<E<V_{0}+\delta^{\prime}$, there is a bowl
(window) of zero transmission for $d=100$ nm, whereas for $d=10$ nm and
$d=25$ nm the transmission is nonzero. However, for
$E>V_{0}+\delta^{\prime}$, the transmission still resembles the results of Van
Duppen et al. [29]. Note that the transmission for width $d=100$ nm shows anti-Klein
tunneling, which is a direct consequence of the pseudospin conservation in the
system. In the right panel, we plot the width dependence of the transmission
probability for the incident energies $E=\frac{1}{5}V_{0}$,
$E=\frac{2}{5}V_{0}$ and $E=\frac{8}{5}V_{0}$. It is clearly seen that for
$E=\frac{1}{5}V_{0}$ and $E=\frac{2}{5}V_{0}$ with $\delta=0$,
$\Delta_{0}=0.01\gamma_{1}$, resonance peaks show up (see upper panel), which
are absent for the case $\Delta_{0}=0$ [29]. In the middle and bottom panels,
taking into account the effect of a finite bias $\delta=0.01\gamma_{1}$, we
observe a reduction of the resonances in the transmission probability, especially
when $\Delta_{0}$ is greater than $\delta$.
Figure 3: (Color online) The transmission probability at normal incidence
through a barrier of height $V_{0}=0.05\gamma_{1}$ with
$\Delta_{0}=0.01\gamma_{1}$ and $\delta=0$ (for upper panel),
$\Delta_{0}=\delta=0.01\gamma_{1}$ (for middle panel) and
$\Delta_{0}=0.03\gamma_{1}$ and $\delta=0.01\gamma_{1}$ (for bottom panel).
(Left panel): The energy dependence of the transmission probability for
barrier widths $d=10$ nm (blue), $d=25$ nm (red), and $d=100$ nm (green).
(Right panel): The width dependence of the transmission probability for
incident energies $E=\frac{1}{5}V_{0}$ (blue), $E=\frac{2}{5}V_{0}$ (red) and
$E=\frac{8}{5}V_{0}$ (green).
To investigate the effect of the band gap for energies greater than the interlayer
hopping parameter, $E>\gamma_{1}$, in Fig. 4 we show the transmission and
reflection channels as a function of the incident energy $E$ and transverse
wave vector $k_{y}$ for a potential of height $V_{0}=\frac{3}{2}\gamma_{1}$ and
width $d=25$ nm. The superimposed dashed curves indicate the different propagating
modes inside and outside the barrier. For ungapped and unbiased AB-BLG
(pristine AB-BLG), Van Duppen et al. [29] showed that all channels are symmetric with
respect to normal incidence, $k_{y}=0$, i.e. $T^{+}_{-}=T^{-}_{+}$ and
$R^{+}_{-}=R^{-}_{+}$. This is due to the valley equivalence, namely the
transmission probabilities of electrons moving in opposite directions
(scattering from $k^{+}$ to $k^{-}$ in the vicinity of the first valley, and
scattering from $k^{-}$ to $k^{+}$ in the vicinity of the second valley) are
the same. In our case, introducing a gap $\Delta_{0}=0.3\gamma_{1}$
with zero inter-layer bias, $\delta=0$, we observe that the transmissions
are completely suppressed in the energy range
$V_{0}-\Delta_{0}<E<V_{0}+\Delta_{0}$ due to the absence of traveling modes.
In the $T^{+}_{+}$ channel, for energies smaller than $V_{0}-\gamma_{1}$, we
find that the resonances are reduced and Klein tunneling is less pronounced
than that seen in [29]. We notice an asymmetry in the
transmission channels with respect to normal incidence,
$T^{+}_{-}(k_{y})=T^{-}_{+}(-k_{y})$, while the reflection channels still show
symmetric behavior, $R^{+}_{-}(k_{y})=R^{-}_{+}(k_{y})$, because the incident
electrons are reflected back into an electron state [29]. This is not the case for
gapped AA-BLG, where the $T^{+}_{-}$ and $T^{-}_{+}$ channels preserve the
momentum symmetry [43]. In addition, there is a significant distinction in
all reflection channels $R^{s}_{\pm}$ between gapped AB-BLG and biased AB-
BLG. Indeed, in our case we observe that the magnitudes of $R^{s}_{\pm}$ are
reduced inside the barrier. Remarkably, our transmission
channels $T^{s}_{\pm}$ exhibit bowls (windows) in the energy spectrum instead of
the “Mexican hats” seen in [29]. This shows that $\Delta_{0}$ can be
used to control the transmission behavior in AB-BLG.
Figure 4: (Color online) Density plot of transmission and reflection
probabilities as a function of the incident energy $E$ and transverse wave
vector $k_{y}$, through a potential barrier of height $V_{0}=1.5\gamma_{1}$
and width $d=25$ nm and band gap $\Delta_{0}=0.3\gamma_{1}$ with $\delta=0$.
The dashed white and black lines represent the band inside and outside the
barrier, respectively.
In Fig. 5 we show the density plot of the transmission and reflection
channels, for biased and gapped systems, $\delta={0.3}\gamma_{1}$,
$\Delta_{0}=0.3\gamma_{1}$. The transmission is completely suppressed in the
energy range $V_{0}-\delta^{\prime}<E<V_{0}+\delta^{\prime}$,
$\delta^{\prime}=\Delta_{0}+\delta$. We notice that the inter-layer
sublattice equivalence is also broken in this case, as seen in Fig. 4. We
recall that such symmetry breaking can be achieved by taking either $\delta\neq
0$ or $\Delta_{0}\neq 0$, which means that the invariance
under the exchange $k_{y}\longrightarrow-k_{y}$ is violated, as noted in [29, 49] for AB-
BLG, in contrast to AA-BLG [50]. Therefore, the transmission and
reflection probabilities are not symmetric with respect to normal incidence, as
seen in Fig. 5.
Figure 5: (Color online) The same as in Fig. 4, but now for the band gap
$\Delta_{0}=0.3\gamma_{1}$ with $\delta=0.3\gamma_{1}$. The dashed white and
black lines represent the band inside and outside the barrier, respectively.
Fig. 6 presents the same plot as in Fig. 5 except that we choose a band gap
$\Delta_{0}=0.5\gamma_{1}$ greater than the inter-layer bias
$\delta=0.3\gamma_{1}$. In this situation, we notice a significant difference
in the transmission and reflection channels. Indeed, we observe that Klein
tunneling becomes less pronounced than for the case
$\Delta_{0}=\delta=0.3\gamma_{1}$ in Fig. 5. In addition, it is clearly seen
that some resonances disappear in the energy range $E<V_{0}-\delta^{\prime}$.
Moreover, we find that the energy bands are pushed and exhibit
“Mexican hat”-like features, which are clearer than those seen in Fig. 5. These
results are similar to those obtained in [51] by analyzing the transmission
probabilities for a system composed of two single layer-AB bilayer-two single
layer (2SL-AB-2SL) graphene subjected to a strong gate potential. In summary,
we observe that all transmissions for $\delta\neq 0$ and $\Delta_{0}\neq 0$
are weak compared to the biased AB-BLG [29] or gapped AB-BLG (Fig. 4) cases.
Figure 6: (Color online) The same as in Fig. 4, but now for the band gap
$\Delta_{0}=0.5\gamma_{1}$ with $\delta=0.3\gamma_{1}$. The dashed white and
black lines represent the band inside and outside the barrier, respectively.
In Fig. 7 we plot the energy dependence of the corresponding conductance for
different values of the band gap and inter-layer bias. The band gap
$\Delta_{0}=0.3\gamma_{1}$ opens a gap in the energy spectrum of AB-BLG at
$V_{0}\pm\Delta_{0}$, and this is of course reflected in the conductance, as
shown in Fig. 7(a). The resonances that are clear in the transmission
probability show up as peaks, and the total conductance $G_{\text{Tot}}$ has a
convex form. At low energies we have $G_{\text{Tot}}=G^{+}_{+}$, meaning that
the propagation is only via the $k^{+}$ mode, while the $k^{-}$ mode is cloaked
in this regime until $E>V_{0}+\Delta_{0}$, where $G^{-}_{-}$ starts conducting,
making an appearance as a rapid increase in the total conductance. Furthermore,
$G^{+}_{-}=G^{-}_{+}=0$ since $T^{+}_{-}=T^{-}_{+}=0$ at low energy, but at
$E=\gamma_{1}$ both modes are coupled and $G^{+}_{-}$, $G^{-}_{+}$ start
conducting, which is why $G_{\text{Tot}}\neq G^{+}_{+}$. However, the band gap
does not break the equivalence of the scattered channels of the conductance:
$G^{-}_{+}=G^{+}_{-}$ remain equal over the whole energy range (see
Fig. 7(a)), in contrast to the case of double barriers [52]. By comparing
our results with those of the biased AB-BLG [29], we observe that some
shoulders of the peaks are removed and the contributions of the transmission
channels to the total conductance are less pronounced as a result of
the gap $\Delta_{0}$ induced by the dielectric layers. This confirms that our
$\Delta_{0}$ has a significant impact on the transport properties and differs
from the gap induced by bias in AB-BLG [29]. In contrast, the total
conductance of a gapped AA-BLG is approximately unchanged even though the band
gap has a significant impact on the intracone transport [43]. Now we include
both parameters in Figs. 7(b) and 7(c), corresponding, respectively, to
$\Delta_{0}=\delta=0.3\gamma_{1}$ and
$\Delta_{0}=0.5\gamma_{1},\delta=0.3\gamma_{1}$. As expected, we observe a large
suppression of the conductance in the energy range
$V_{0}-\delta^{\prime}<E<V_{0}+\delta^{\prime}$, and hence some peaks are
removed with a decrease of the total conductance $G_{\text{Tot}}$.
Figure 7: (Color online): Conductance as a function of the incident energy for
biased and gapped AB-BLG with potential height $V_{0}=1.5\ \gamma_{1}$ and
width $d=25$ nm. (a): $\Delta_{0}=0.3\gamma_{1}$, $\delta=0$. (b):
$\Delta_{0}=0.3\gamma_{1}$, $\delta=0.3\gamma_{1}$, (c):
$\Delta_{0}=0.5\gamma_{1}$, $\delta=0.3\gamma_{1}$. The solid curves
correspond to the total conductance and the dashed curves correspond to
different contributions of the four transmission channels.
## V Summary and conclusion
We have theoretically investigated the transport properties through
rectangular potential barriers of biased AB-BLG gapped by dielectric layers.
By solving the Dirac equation, the four energy bands are obtained and found to
depend on the band gap $\Delta_{0}$ together with the inter-layer bias $\delta$.
Subsequently, using the transfer matrix method, we have evaluated the corresponding
transmission and reflection probabilities as well as the conductance. In particular,
we have analyzed the transmission probability in the two-band model at normal
incidence (i.e. $k_{y}=0$), first in the presence of $\Delta_{0}$ alone and
then taking into account both $\Delta_{0}$ and $\delta$. As a result, we
have observed that the presence of $\Delta_{0}$ induces extra resonances
in the transmission profiles. However, by adding $\delta$, we have
observed that the transmission decreases further and anti-Klein tunneling in AB-
BLG is no longer preserved.
Furthermore, we have obtained a new mode of propagation for energies exceeding
the inter-layer coupling $\gamma_{1}$. In this case, we have shown that the
band gap $\Delta_{0}$ breaks the inter-layer sublattice equivalence with
respect to $k_{y}=0$. Such asymmetry is apparent in the scattered transmission,
which depends on the incident mode. The corresponding conductance does not
exhibit this asymmetry, and the locations of its peaks are changed in the
presence of $\Delta_{0}$ compared to the biased case [29].
## VI Acknowledgments
The generous support provided by the Saudi Center for Theoretical Physics
(SCTP) is highly appreciated by all authors. A.J. thanks Dr. Michael Vogl for
fruitful discussion.
## References
* [1] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva and A. A. Firsov, Science 306, 666 (2004).
* [2] O. Klein, Z. Phys. 53, 157 (1929).
* [3] M. I. Katsnelson, K. S. Novoselov, and A. K. Geim, Nat. Phys. 2, 620 (2006).
* [4] Y. B. Zhang, Y. W. Tan, H. L. Stormer, and P. Kim, Nature 438, 201 (2005).
* [5] R. Nair, P. Blake, A. Grigorenko, K. Novoselov, T. Booth, T. Stauber, N. Peres, and A. Geim, Science 320, 1308 (2008).
* [6] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
* [7] J. D. Bernal, Proc. R. Soc. A 106, 749 (1924).
* [8] Z. Q. Li, E. A. Henriksen, Z. Jiang, Z. Hao, M. C. Martin, P. Kim, H. L. Stormer, and D. N. Basov, Phys. Rev. Lett. 102, 037403 (2009).
* [9] E. McCann and V. I. Fal’ko, Phys. Rev. Lett. 96, 086805 (2006).
* [10] A. Rozhkov, A. Sboychakov, A. Rakhmanov, and F. Nori, Phys. Rep. 648, 1 (2016).
* [11] T. Ohta, A. Bostwick, T. Seyller, K. Horn, and E. Rotenberg, Science 313, 951 (2006).
* [12] M. O. Goerbig, Rev. Mod. Phys. 83, 1193 (2011).
* [13] I. Redouani, A. Jellal, and H. Bahlouli, J. Low Temp. Phys. 181, 197 (2015).
* [14] I. Lobato and B. Partoens, Phys. Rev. B 83, 165429 (2011).
* [15] A. L. Rakhmanov, A. V. Rozhkov, A. O. Sboychakov, and F. Nori, Phys. Rev. Lett. 109, 206801 (2012).
* [16] Y. Mohammadi and B. A. Nia, Solid State Commun. 201, 76 (2015).
* [17] R.-B. Chen, Y.-H. Chiu, and M.-F. Lin, Carbon 54, 268 (2013).
* [18] C.-W. Chiu, S.-C. Chen, Y.-C. Huang, F.-L. Shyu, and M.-F. Lin, Appl. Phys. Lett. 103, 041907 (2013).
* [19] I. Redouani and A. Jellal, Mater. Res. Express 3, 065005 (2016).
* [20] Y. Zahidi, I. Redouani, and A. Jellal, Physica E 71, 259 (2016).
* [21] J.-K. Lee, S.-C. Lee, J.-P. Ahn, S.-C. Kim, J. I. B. Wilson, and P. John, J. Chem. Phys. 129, 234709 (2008).
* [22] J. Borysiuk, J. Soltys, and J. Piechota, J. Appl. Phys. 109, 093523 (2011).
* [23] P. L. de Andres, R. Ramírez, and J. A. Vergés, Phys. Rev. B 77, 045403 (2008).
* [24] Z. Liu, K. Suenaga, P. J. F. Harris, and S. Iijima, Phys. Rev. Lett. 102, 015501 (2009).
* [25] E. McCann, Phys. Rev. B 74, 161403(R) (2006).
* [26] S. Konschuh, M. Gmitra, D. Kochan, and J. Fabian, Phys. Rev. B 85, 115423 (2012).
* [27] A. Jellal, I. Redouani and H. Bahlouli, Physica E 72, 149 (2015).
* [28] G. Giavaras and F. Nori, Phys. Rev. B 83, 165427 (2011).
* [29] B. Van Duppen and F. M. Peeters, Phys. Rev. B 87, 205427 (2013).
* [30] A. F. Young and P. Kim, Nat. Phys. 5, 222 (2009).
* [31] N. Stander, B. Huard, and D. Goldhaber-Gordon, Phys. Rev. Lett. 102, 026807 (2009).
* [32] W.-X. Wang, L.-J. Yin, J.-B. Qiao, T. Cai, S.-Y. Li, R.-F. Dou, J.-C. Nie, X. Wu, and L. He, Phys. Rev. B 92, 165420 (2015).
* [33] P. San-Jose, A. Gutiérrez-Rubio, M. Sturla, and F. Guinea, Phys. Rev. B 90, 075428 (2014).
* [34] M. Kindermann, B. Uchoa, and D. L. Miller, Phys. Rev. B 86, 115415 (2012).
* [35] J. C. W. Song, A. V. Shytov, and L. S. Levitov, Phys. Rev. Lett. 111, 266801 (2013).
* [36] J. Jung, A. M. DaSilva, A. H. MacDonald, and S. Adam, Nat. Commun. 6, 6308 (2015).
* [37] M. S. Nevius, M. Conrad, F. Wang, A. Celis, M. N. Nair, A. Taleb-Ibrahimi, A. Tejeda, and E. H. Conrad, Phys. Rev. Lett. 115, 136802 (2015).
* [38] M. Zarenia, O. Leenaerts, B. Partoens, and F. M. Peeters, Phys. Rev. B 86, 085451 (2012).
* [39] B. Uchoa, V. N. Kotov, and M. Kindermann, Phys. Rev. B 91, 121412(R) (2015).
* [40] S. Y. Zhou, D. A. Siegel, A. V. Fedorov, and A. Lanzara, Phys. Rev. Lett. 101, 086402 (2008).
* [41] R. N. Costa Filho, G. A. Farias, and F. M. Peeters, Phys. Rev. B 76, 193409 (2007).
* [42] Y. Zhang, T.-T. Tang, C. Girit, Z. Hao, M. C. Martin, A. Zettl, M. F. Crommie, Y. R. Shen, and F. Wang, Nature 459, 820 (2009).
* [43] H. M. Abdullah and H. Bahlouli, J. Comput. Sci. 26, 135 (2018).
* [44] X. Zhai and G. Jin, Phys. Rev. B 93, 205427 (2016).
* [45] E. McCann, D.S.L. Abergel, and V.I. Fal’ko, Solid State Communications 143, 110 (2007).
* [46] M. Barbier, P. Vasilopoulos, and F. M. Peeters, Phys. Rev. B 82, 235408 (2010).
* [47] M. Barbier, P. Vasilopoulos, F. M. Peeters, and J. M. Pereira, Jr., Phys. Rev. B 79, 155402 (2009).
* [48] I. Snyman and C. W. J. Beenakker, Phys. Rev. B 75, 045322 (2007).
* [49] J. Nilsson, A. H. Castro Neto, F. Guinea, and N. M. R. Peres, Phys. Rev. B 76, 165416 (2007).
* [50] H. M. Abdullah, M. A. Ezzi, and H. Bahlouli, J. Appl. Phys. 124, 204303 (2018).
* [51] H. M. Abdullah, B. Van Duppen, M. Zarenia , H. Bahlouli, and F. M. Peeters, J. Phys.: Condens. Matter 29, 425303 (2017).
* [52] H. M. Abdullah, A. El Mouhafid, H. Bahlouli, and A. Jellal, Mater. Res. Express 4, 025009 (2017).
# Switching off microcavity polariton condensate near the exceptional point
Yao Li Tianjin Key Laboratory of Molecular Optoelectronic Science, Institute
of Molecular Plus, School of Science, Tianjin University, Tianjin 300072,
China Xuekai Ma<EMAIL_ADDRESS>Department of Physics and Center for
Optoelectronics and Photonics Paderborn (CeOPP), Universität Paderborn,
Warburger Strasse 100, 33098 Paderborn, Germany Zaharias Hatzopoulos
Institute of Electronic Structure and Laser (IESL), Foundation for Research
and Technology-Hellas (FORTH), Heraklion 71110, Greece Pavlos Savvidis
School of Science, Westlake University, 18 Shilongshan Road, Hangzhou 310024,
Zhejiang Province, China Institute of Natural Sciences, Westlake Institute
for Advanced Study, 18 Shilongshan Road, Hangzhou 310024, Zhejiang Province,
China Institute of Electronic Structure and Laser (IESL), Foundation for
Research and Technology-Hellas (FORTH), Heraklion 71110, Greece Department of
Nanophotonics and Metamaterials, ITMO University, St. Petersburg 197101,
Russia Stefan Schumacher Department of Physics and Center for
Optoelectronics and Photonics Paderborn (CeOPP), Universität Paderborn,
Warburger Strasse 100, 33098 Paderborn, Germany Wyant College of Optical
Sciences, University of Arizona, Tucson, AZ 85721, USA Tingge Gao
<EMAIL_ADDRESS>Tianjin Key Laboratory of Molecular Optoelectronic
Science, Institute of Molecular Plus, School of Science, Tianjin University,
Tianjin 300072, China
###### Abstract
Gain and loss modulation are ubiquitous in nature. An exceptional point arises
when both the eigenvectors and eigenvalues coalesce, which in a physical
system can be achieved by engineering the gain and loss coefficients, leading
to a wide variety of counter-intuitive phenomena. In this work we demonstrate
the existence of an exceptional point in an exciton polariton condensate in a
double-well potential. Remarkably, near the exceptional point, the polariton
condensate localized in one potential well can be switched off by an
additional optical excitation in the other well with very low (far below
threshold) laser power which surprisingly induces additional loss into the
system. Increasing the power of the additional laser leads to a situation in
which gain dominates in both wells again, such that the polaritons re-condense
with almost the same density in the two potential wells. Our results offer a
simple way to optically manipulate the polariton condensation process in a
double-well potential structure. Extending such a configuration to complex
potential well lattices offers exciting prospects to explore high-order
exceptional points and non-Hermitian topological photonics in a non-
equilibrium many-body system.
Effective Hamiltonians of non-Hermitian nature play a crucial role in our
understanding of a plethora of modern physical systems [1, 2]. This is in
spite of the underlying common principle that in quantum mechanical systems
Hermiticity is required to keep a real-valued spectrum of eigenvalues. Complex
eigenvalues generally appear in systems governed by non-Hermitian
Hamiltonians. However, Bender et al. [3] observed that under one special
condition such systems can still sustain a real-valued energy spectrum: this is
the case if the Hamiltonian commutes with the PT operator, where P and T are
the parity and time-reversal operators, respectively. In recent years photonic
platforms have been widely employed to study PT symmetry in view of the fact that the
refractive index distribution can be judiciously engineered, that is, the real
part of the refractive index can be tailored to be an even function whereas
the imaginary part remains an odd function such that PT symmetry can be
fulfilled. A striking phase transition can occur at a point in parameter space
(the symmetry breaking point or exceptional point) where the real eigenvalues
become complex-valued. This transition can be induced by modulation of the
real or imaginary components of the external potential in the effective
Hamiltonian. Near this exceptional point, a large variety of counter-intuitive
phenomena has been reported, for example, PT symmetric synthetic lattices [4]
or microcavities [5], single mode [6, 7] and vortex lasing [8] in a ring
cavity, unidirectional light propagation in a waveguide [9], suppression and
revival of lasing due to loss in coupled whispering-gallery-mode microcavities
[10], improved sensitivity in single or coupled resonators [11, 12], optical
isolation in PT symmetric microcavities [13], formation of an exceptional ring
in a photonic crystal [14], and an enhanced linewidth of a phonon laser mode
at the exceptional point [15]. In contrast to regular laser operation, a non-
Hermitian system can behave surprisingly near the exceptional point where
changes in gain can shut down or re-excite lasing as demonstrated in coupled
resonators [16, 17, 18]. Approaching the exceptional points in these works is
achieved using asymmetrical pumping schemes.
Exciton polaritons form as hybrid particles from excitons in a quantum well
that strongly couple with the photon mode of a planar optical resonator. In an
excitation regime where polaritons behave like bosonic quasi-particles, they
can experience condensation and show spontaneous macroscopic coherence under
non-resonant excitation [20, 21, 22], as shown in Fig. 1(a). Owing to the
spontaneous decay of photons from the microcavity, polariton systems are
inherently driven-dissipative in nature and as such provide an excellent
platform to study PT symmetry [23] and non-Hermitian physics. In our previous
works [24, 25] we demonstrated that the polariton condensate wavefunction and
energy distribution can be strongly modified by the gain and loss coefficients
in an optical billiard. Also in an exciton polariton system in coupled
microresonators the influence of pump asymmetry was explored and it was
demonstrated that relaxation kinetics plays a crucial role in the condensation
process [19]. Polariton dimers also offer a platform to study polariton-
polariton nonlinear interactions with Josephson oscillations, quantum self
trapping [26], interplay of interference and nonlinearity [27], and
bistability [28]. However, the existence of an exceptional point and its role
for the polariton condensation has not been demonstrated or discussed in such
structures.
Figure 1: Excitation configuration. Schematics of polariton condensation
process in the planar microcavity (a) and the structure of a built-in double-
well potential (b) which is pumped by two independent non-resonant laser
beams.
In the present work we utilize a particularly simple and robust double-well
potential structure with slightly asymmetric sites as sketched in Fig. 1(b)
(the details of the sample can be seen in the experimental section). We
show that for non-resonant optical excitation polariton condensates can be
loaded into the double-well in a controlled manner, and demonstrate that an
exceptional point can be realized and explored in detail by varying the optical
excitation parameters. In our experiment, we use two independent non-resonant
pumping lasers to separately excite the two potential wells and find that the
polariton condensation in one potential well can be altered dramatically and
even shut down completely by only controlling the pump intensity on the
adjacent potential well (in this case the double-well potential system as a
whole thus falls below the condensation threshold). The reason is that varying
the pump intensity on one of the potential wells changes the coupling between them,
giving rise to the appearance of an exceptional point which results in
substantial redistribution of the modes and reduced effective gain for
polaritons in the coupled wells. With further increasing the pump intensity on
the adjacent well, the polariton dimer starts condensation again as the system
is tuned away from the exceptional point and bifurcates into two modes,
antibonding and bonding. Our results show that the exceptional point can be
easily tailored in such kind of macroscopic quantum system and our work can be
systematically extended to 1D or 2D lattices to investigate anomalous edge
modes based on polariton condensates [29, 30].
## Results
## Theory of non-Hermitian degeneracy
In theory, the Hamiltonian of a two-level non-Hermitian system like a coupled
polariton dimer trapped in two potential wells 1 and 2 can be expressed as
follows:
Figure 2: Eigenenergies of a two-level non-Hermitian Hamiltonian near an
exceptional point. (a) Real and (b) imaginary parts of the eigenenergy in a
two dimensional parameter space, where $\Delta\gamma$ =
$\varGamma_{1}-\varGamma_{2}$, $\Delta E$= $E_{1}-E_{2}$. Relations between
the (c) real and (d) imaginary parts of the eigenenergy with $\Delta\gamma$
along the route $\Delta{E}=0$ highlighted in (a) and (b). The arrow indicates
the decrease of $\Delta{\gamma}$. The exceptional point appears at
$\Delta{\gamma}=J$.
$H=\left[\begin{array}{cc}E_{1}+i\varGamma_{1}&J/2\\ J^{*}/2&E_{2}+i\varGamma_{2}\end{array}\right]$ (1)
where $E_{1}$ ($\varGamma_{1}$) and $E_{2}$ ($\varGamma_{2}$) correspond to the
real (imaginary) parts of the polariton energies in the two potential wells,
and $J$ is the coupling strength between the two wells. The eigenenergies of the
Hamiltonian are
$E_{\pm}=(E_{1}+E_{2}+i\varGamma_{1}+i\varGamma_{2})/2\pm\sqrt{(J^{2}+[E_{1}-E_{2}+i\varGamma_{1}-i\varGamma_{2}]^{2})}/2$.
The dependence of the real and imaginary parts of the eigenenergies on
$\Delta{E}=E_{1}-E_{2}$ and $\Delta{\gamma}=\Gamma_{1}-\Gamma_{2}$ are shown
in Figs. 2(a) and 2(b). We introduce asymmetric pumping of the polariton dimer
by sequentially exciting the two potential wells: the potential well 1 is
excited above the threshold, then the pumping power of the potential well 2 is
increased from zero. Before the system approaches the exceptional point with
the difference of the gain level of the two potential wells $\Delta\gamma>J$
(the potential well 1 has a larger gain than 2) along the route at
$\Delta{E}=0$ shown in Figs. 2(a) and 2(b), one of the two eigenstates of the
above non-Hermitian Hamiltonian lies above the condensation threshold with
large gain [see the lower branch in Fig. 2(d)] and another is below the
threshold with large loss rate [see the upper branch in Fig. 2(d)] [16]. When
the pump power on the other potential well increases, the difference of the
gain levels diminishes, as shown in Figs. 2(b) and 2(d), until $\Delta\gamma=J$,
thus approaching the exceptional point [18], where
$J^{2}+[E_{1}-E_{2}+i\varGamma_{1}-i\varGamma_{2}]^{2}=0$. In this case, the
polaritons feel a larger effective loss when the gain/loss ratio in the
coupled wells approaches a critical value, which can lead to the already
condensed polaritons being switched off. As a consequence, the polariton dimer
is below the threshold and the emission intensity is reduced greatly. When
$\Delta{\gamma}$ decreases further, the system is pulled away from the
exceptional point and the gain/loss contrast is reduced, thus the real parts
of the eigenvalues of the system are repelled away from each other [see the
bifurcation in Fig. 2(c)] and finally the polariton dimer can condense
again. The switching off of the polariton dimer can also occur when
${E_{1}}\neq{E_{2}}$ where the anticrossing of the eigenenergy of the system
will appear [17, 16, 18].
## Experimental realization
In the experiment, we use a microcavity which contains 12 GaAs quantum wells,
with a Rabi splitting of around 9.2 meV, and is cooled down to
around 4 K. The two potential wells, with a distance of around 5 $\mu$m [see
the sketch in Fig. 1 (b)] are formed unintentionally during the growth process
due to some defects which induce loss into the system. Such kind of potential
wells can also be tailored with microstructuring mesas into the planar
microcavity. From the measurement taken under low pumping power at the
potential wells and planar microcavity (shown in the Supplemental Materials),
we find the potential depths of the two potential wells to be 1.46 meV and 1.30
meV, respectively, as demonstrated in Fig. S1(b) and (c). This double-well is
chosen because it allows realizing an equal energy distribution across the two
potential sites, with the deep well above threshold and the shallow well below
threshold, so that polaritons in the system experience a large gain-level
difference. All the experiments are performed using a CW laser (wavelength: 750
nm), and the laser beam is chopped with a mechanical chopper at a duty
cycle of 5% to reduce heating.
Figure 3: Switching off exciton polariton condensates. (a) The emission
intensity of the polariton dimer as a function of the pumping intensity
of the potential well 2 with the potential well 1 being continuously excited.
The real space distribution of polaritons is taken at the pump densities of
0.006 MW/cm2 (b), 0.048 MW/cm2 (c), and 0.115 MW/cm2 (d) of the second laser
beam. (e-g) Time-integrated density profiles of the polariton condensates from
numerical simulations at $P_{1}=12$ $\mu$m$^{-2}$ ps$^{-1}$ and different
$P_{2}$: (e) $P_{2}=0$ $\mu$m$^{-2}$ ps$^{-1}$, (f) $P_{2}=5$ $\mu$m$^{-2}$
ps$^{-1}$, and (g) $P_{2}=15$ $\mu$m$^{-2}$ ps$^{-1}$. Note that the density
profiles in the experiment (b-d) and in the simulation (e-g) have different
orientations.
Firstly we focus the first laser beam onto the potential well 1 with the spot
size of around 2 $\mu$m. At the pump density of around 0.45 MW/cm2 ($\sim
1.2P_{th}^{1}$, where $P_{th}^{1}$ is the condensation threshold of the
potential well 1), the polaritons condense and are mainly located in the
driven potential well 1 [see Fig. 3 (b)] with the energy of 1.5056 eV (the
data showing the polariton condensation can be seen in Fig. S2 of the SM).
Under quasi-resonant pumping, similar localization of exciton polaritons in a
photonic dimer due to macroscopic quantum self-trapping was observed in [26], and
polaritons can be localized or delocalized in a photonic dimer [28] due to the
interplay between the nonlinear interactions and interference when the laser
energy is tuned across the antibonding/bonding modes.
In the following, the potential well 2 is excited using another laser
beam with the same wavelength and spot size, during which the pumping flux of
the potential well 1 is fixed at around 1.2 $P_{th}^{1}$. We monitor the
emission of the polariton dimer by gradually increasing the pump density of
the second laser beam from zero to above the threshold $P_{th}^{2}$ (here
$P_{th}^{2}=$0.4 MW/cm2 is the threshold of the condensation in the potential
well 2). Surprisingly, the intensity of the polariton dimer decreases
dramatically to nearly zero (Fig. 3(a) and (c)) and then increases again, with a
symmetric emission pattern appearing across the coupled potential wells, as
shown in Fig. 3 (d). It is worth noting that the above results in Fig. 3 are
reversible, i.e., when we gradually reduce the power of the second laser, the
polariton dimer shows the same behavior at the same pump power.
To check whether an exceptional point exists or not in the polariton dimer, we
measure both the energy levels (real and imaginary parts) and the real space
distribution of different polariton modes as the power of the second laser
beam varies. Thanks to the coupling and the potential depth difference in the
system, the polariton double-well potential can be tuned to the vicinity of
the exceptional point by varying the pump power of the second laser beam. When
the pump density of the potential well 2 is smaller than 0.032 MW/cm2, there
are two states whose energies share the same real part but different imaginary
parts, as shown in Fig. 4. The total emission of the polariton dimer is mainly
located in the potential well 1. At higher pumping density of the potential
well 2, the polariton condensate is switched off and the coupled wells are
below the threshold, which can be seen from the dispersion taken in the
switching off regime in Fig. S3 of the SM, thus the emitted light intensity is
greatly reduced. When the pump power density of the potential well 2 is larger
than 0.115 MW/cm2, the emission intensity of the polariton dimer increases
sharply and we can find two states: the antibonding mode and bonding mode, as
shown in the insets in Fig. 4(a). The intensity of the higher-energy mode is
much larger than that of the other state and thus dominates the total emission
pattern (Fig. 3(d)). The energy difference between these two states increases
with the power of the second laser beam (Fig. 4 (a)) and the intensity of the
lower-energy state grows faster. At the same time, the linewidths of the two
modes are around the same [Fig. 4 (b)]. Hence, the clear bifurcations of the
eigenenergies of the coupled condensates shown in Fig. 4 are consistent with
the theoretical prediction in Fig. 2 (c,d), evidencing the existence of an
exceptional point. In this scenario, the pump power of the second laser beam
(the gain level difference between the two potential wells) acts as one
parameter ($\Delta\gamma$) in the two dimensional parameter space in Fig. 2,
which switches off the polariton condensate due to the substantial
modification of the condensate wavefunction near the exceptional point. Under
higher power of the second laser, the system approaches symmetric pumping and
the emission of the polariton dimer becomes mainly localized at the two
potential sites. We note that in coupled classical laser systems, whether two
modes appear after the switching off also depends on modal cross-saturation or
the spatial hole-burning effect [16].
Figure 4: Demonstration of the exceptional point. The real (a) and imaginary
(b) parts of the polariton energy of the coupled wells as a function of the
pumping intensity of the potential well 2. The insets in (a) show the
real-space imaging of the eigenmodes at several pumping intensities, where the
bottom ones are multiplied by factors of 3, 0.2, and 0.5, respectively. The
dashed region indicates the switching-off regime.
If the pump power of the potential well 1 is increased, the polariton
condensate can be partly switched off by repeating the above experimental steps.
For example, if the pump density of the potential well 1 is kept at around 2
$P_{th}^{1}$, the second laser can reduce the emission intensity at potential
well 1 by about 40$\%$ at the pump density of 0.064 MW/cm2 then enhance the
intensity afterwards (see Fig. S5 in the SM). During this process, the
emission intensity evolves from asymmetric to symmetric pattern.
Coupled lasers can also be switched off when the frequencies of the two
resonators are different [16, 17, 18]. We therefore swap the roles of the two
potential wells, that is, potential well 2 is pumped above its threshold first
and the pumping flux of potential well 1 is varied. In this configuration, the real parts of
the polariton energy of the two potential wells are not the same due to the
difference of the potential depths, however, the switching off polariton
condensate can still be observed (see Fig.S6 in the SM). It means that in this
case, the system experiences an anticrossing along a different route, instead
of $\Delta{E}=0$, in Fig. 2(a,b).
## Numerical analysis
We numerically mimic the dynamics of the polariton condensate in our
experiments by applying the extended Gross-Pitaevskii equation with loss and
gain [31]:
$\displaystyle i\hbar\frac{\partial\Psi(\mathbf{r},t)}{\partial t}$
$\displaystyle=\left[-\frac{\hbar^{2}}{2m_{\text{eff}}}\nabla_{\bot}^{2}-i\hbar\frac{\gamma_{\text{c}}}{2}+g_{\text{c}}|\Psi(\mathbf{r},t)|^{2}\right.$
(2)
$\displaystyle+\left.\left(g_{\text{r}}+i\hbar\frac{R}{2}\right)n(\mathbf{r},t)+V(\mathbf{r})\right]\Psi(\mathbf{r},t),$
Here, $\Psi(\mathbf{r},t)$ is the polariton wavefunction, $m_{\text{eff}}$
denotes the effective mass of polaritons in the vicinity of the bottom of the
lower polariton branch, $\gamma_{\text{c}}$ is the polariton loss rate,
$g_{\text{c}}$ represents the polariton-polariton interaction, and
$g_{\text{r}}$ represents the polariton-reservoir interaction. The reservoir
$n$ is incoherent and provides the gain for the condensate with a rate $R$. It
obeys the following equation of motion:
$\frac{\partial n(\mathbf{r},t)}{\partial
t}=\left[-\gamma_{r}-R|\Psi(\mathbf{r},t)|^{2}\right]n(\mathbf{r},t)+P(\mathbf{r})\,.$
(3)
Here, $\gamma_{\text{r}}$ is the loss rate of the reservoir and
$P(\mathbf{r})$ represents the two non-resonant pumps with Gaussian
distribution, i.e.,
$P(x,y)=P_{1}e^{-\frac{(x-x_{1})^{2}+y^{2}}{w^{2}}}+P_{2}e^{-\frac{(x-x_{2})^{2}+y^{2}}{w^{2}}},$
(4)
with pump intensity $P_{1}$ and $P_{2}$ and pump width $w$. The two pumps are
placed at $x_{1}$ and $x_{2}$, respectively. The double-well potential
$V(\mathbf{r})$ takes the following form:
$V(x,y)=V_{\text{1}}e^{-\left(\frac{(x-x_{1})^{2}+y^{2}}{W^{2}}\right)^{2}}+V_{\text{2}}e^{-\left(\frac{(x-x_{2})^{2}+y^{2}}{W^{2}}\right)^{2}}.$
(5)
Here, $V_{1}$ and $V_{2}$ are the depths of the potential well 1 and 2,
respectively, and $W$ indicates the size of each well. The parameters used for
numerical modeling are summarized in [32]. We fix the intensity of pump 1 that
excites potential well 1 with $P_{1}=12$ $\mu$m$^{-2}$ ps$^{-1}$ ($\sim
1.2~{}P_{th}^{1}$) and gradually increase the other pump intensity $P_{2}$ in
potential well 2, i.e., numerically repeating the experimental excitation
process presented in Fig. 3(a-d). The numerical results are shown in Fig.
3(e-g) (here the density profiles are integrated over time to match the
experimental measurements) and are in very good agreement with the
experimental results in Fig. 3(b-d).
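The integration scheme for Eqs. (2)-(5) can be illustrated with a split-step Fourier sketch. For brevity it is reduced to one spatial dimension, so the pump scale and interaction constants (reused from the 2D values in [32]) are only indicative; this shows the structure of the solver, not the quantitative 2D results of Fig. 3(e-g):

```python
import numpy as np

# 1D split-step sketch of Eqs. (2)-(5). hbar in meV ps, lengths in um,
# times in ps. Parameters follow [32] where possible; the 1D reduction
# (and hence the pump scale) is our own simplification.
hbar = 0.6582                      # meV ps
m = 5e-5 * 5.68e3                  # polariton mass, meV ps^2 / um^2
gamma_c, gamma_r, R = 0.16, 0.24, 0.01
g_c, g_r = 6e-3, 12e-3             # interaction constants (2D values reused)
V1, V2, W = -2.2, -2.0, 1.5        # double-well depths (meV) and size (um)
x1, x2, w = -2.0, 2.0, 1.0         # well/pump positions and pump width (um)

N, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = V1 * np.exp(-(((x - x1) ** 2) / W ** 2) ** 2) \
  + V2 * np.exp(-(((x - x2) ** 2) / W ** 2) ** 2)       # Eq. (5), 1D

def evolve(P1, P2, T=400.0, dt=0.02, seed=1):
    """Integrate condensate + reservoir from weak noise; returns (psi, n)."""
    rng = np.random.default_rng(seed)
    psi = 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    n = np.zeros(N)
    P = P1 * np.exp(-(x - x1) ** 2 / w ** 2) \
      + P2 * np.exp(-(x - x2) ** 2 / w ** 2)            # Eq. (4), 1D
    kin = np.exp(-1j * hbar * k ** 2 / (2 * m) * dt)    # exact kinetic step
    for _ in range(int(T / dt)):
        n += dt * (P - (gamma_r + R * np.abs(psi) ** 2) * n)   # Eq. (3)
        # local part of Eq. (2): potential, interactions, gain and loss
        U = V + g_c * np.abs(psi) ** 2 + g_r * n \
            - 1j * hbar * (gamma_c - R * n) / 2
        psi = np.exp(-1j * U / hbar * dt) * psi
        psi = np.fft.ifft(kin * np.fft.fft(psi))
    return psi, n

# Pumping well 1 above the (1D) threshold P_th ~ gamma_c*gamma_r/R condenses
# polaritons in that well; scanning P2 then mimics the experiment of Fig. 3.
psi, n = evolve(P1=10.0, P2=0.0)
```

The split-step form is chosen because the kinetic factor is applied exactly in Fourier space, while the gain/loss term $-i\hbar(\gamma_{\text{c}}-Rn)/2$ enters the local step as a real amplification rate $(Rn-\gamma_{\text{c}})/2$, which makes the condensation threshold explicit.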
## Discussion
We note that the exciton polariton condensate in the polariton dimer cannot
be switched off under uniform pumping, where the emission intensity
continuously increases with the pumping flux, as shown in Fig. S7 of the SM.
In this case the gain and loss coefficient of the two potential wells are
modulated simultaneously. In addition, as we also find in our numerical
simulations, the coupling between the two potential wells plays a critical
role in the switching-off process, which cannot be observed if we use two
tightly focused laser spots to excite a simple planar microcavity (see Fig. S8
in the SM).
To summarize, the counter-intuitive results reported in the present work are
rooted in the optically induced gain and loss modulation near an exceptional
point in a polariton dimer. We demonstrate that this can be used to
efficiently control the polariton condensation process and explore the role of
the exceptional point in detail. Switching off the polariton condensate with
increasing total pump power can also be used to investigate further the
bistability or multistability [33] in the non-Hermitian double-well potentials
if the detuning between the excitons and cavity mode are tuned to be more
positive. If the polariton condensate were loaded in multiple coupled
potential wells, high-order exceptional points could be explored where the
polariton energy is very sensitive against the perturbation[11, 12]. For the
future, we also envision that novel edge modes based on polariton condensates
can be created in 1D or 2D potential lattices when the gain and loss
coefficients are modulated similarly to the demonstration in the present work
[29, 30].
## Acknowledgments
T.G. thanks Li Ge for fruitful discussions and acknowledges the support from the
National Natural Science Foundation of China (grant No. $11874278$). The
Paderborn group acknowledges the Deutsche Forschungsgemeinschaft (DFG) through
the collaborative research center TRR142 (project A04, grant No. 231447078)
and Heisenberg program (grant No. 270619725). X.M. further acknowledges
support from the National Natural Science Foundation of China (grant No.
11804064). P.S. thanks Project No. 041020100118 supported by Westlake
University and Program 2018R01002 supported by Leading Innovative and
Entrepreneur Team Introduction Program of Zhejiang. P.S. acknowledges
financial support from Russian Science Foundation Grant No. 19-72-20120 for
supporting MBE sample growth and from Greece Polisimulator project co-financed
by Greece and EU Regional Development Fund.
## Methods
The setup used in the experiment is a home-built momentum-space spectroscopy
system. The laser is divided into two identical beams by a 50:50 beam
splitter, and both beams are focused onto the sample through the same objective
(50$\times$, NA 0.42). The intensity of each beam can be adjusted continuously. The signal
emitted from the sample is collected by the objective and analyzed by a
spectrometer (Princeton Instruments), from which we can get the energy-momentum
spectrum and energy-position spectrum of the exciton polaritons. The lens
closest to the spectrometer is installed on a one-dimensional motorized stage
(Newport), and tomography can be performed (Fig. 4) by continuously moving the
translation stage, from which we obtain the energy-resolved real-space
distribution of polaritons.
Author contributions. T.G. and X.M. conceived the project, Y.L. performed the
experiment and analyzed the results, X.M. performed the theoretical analysis
and numerical simulation, T.G. and X.M. prepared the manuscript with
contributions from S.S. and P.S., Z.H. and P.S. fabricated the sample. All
authors discussed the results.
Competing interests. The authors declare no competing interests.
## References
* [1] Moiseyev N. Non-Hermitian Quantum Mechanics. Cambridge University Press: Cambridge, 2011.
* [2] Konotop VV, Yang J, Zezyulin DA. Nonlinear waves in $\mathcal{PT}$-symmetric systems. Reviews of Modern Physics 2016, 88(3): 035002.
* [3] Bender CM, Boettcher S. Real Spectra in Non-Hermitian Hamiltonians Having $\mathcal{P}\mathcal{T}$ Symmetry. Physical Review Letters 1998, 80(24): 5243-5246.
* [4] Regensburger A, Bersch C, Miri MA, Onishchukov G, Christodoulides DN, Peschel U. Parity-time synthetic photonic lattices. Nature 2012, 488(7410): 167-171.
* [5] Peng B, Ozdemir SK, Lei FC, Monifi F, Gianfreda M, Long GL, et al. Parity-time-symmetric whispering-gallery microcavities. Nature Physics 2014, 10(5): 394-398.
* [6] Feng L, Wong ZJ, Ma RM, Wang Y, Zhang X. Single-mode laser by parity-time symmetry breaking. Science (New York, NY) 2014, 346(6212): 972-975.
* [7] Hodaei H, Miri MA, Heinrich M, Christodoulides DN, Khajavikhan M. Parity-time-symmetric microring lasers. Science (New York, NY) 2014, 346(6212): 975-978.
* [8] Miao P, Zhang ZF, Sun JB, Walasik W, Longhi S, Litchinitser NM, et al. Orbital angular momentum microlaser. Science (New York, NY) 2016, 353(6298): 464-467.
* [9] Feng L, Xu Y-L, Fegadolli WS, Lu M-H, Oliveira JEB, Almeida VR, et al. Experimental demonstration of a unidirectional reflectionless parity-time metamaterial at optical frequencies. Nature Materials 2013, 12(2): 108-113.
* [10] Peng B, Ozdemir SK, Rotter S, Yilmaz H, Liertzer M, Monifi F, et al. Loss-induced suppression and revival of lasing. Science (New York, NY) 2014, 346(6207): 328-332.
* [11] Hodaei H, Hassan AU, Wittek S, Garcia-Gracia H, El-Ganainy R, Christodoulides DN, et al. Enhanced sensitivity at higher-order exceptional points. Nature 2017, 548(7666): 187-191.
* [12] Chen WJ, Ozdemir SK, Zhao GM, Wiersig J, Yang L. Exceptional points enhance sensing in an optical microcavity. Nature 2017, 548(7666): 192-196.
* [13] Chang L, Jiang XS, Hua SY, Yang C, Wen JM, Jiang L, et al. Parity-time symmetry and variable optical isolation in active-passive-coupled microresonators. Nat Photonics 2014, 8(7): 524-529.
* [14] Zhen B, Hsu CW, Igarashi Y, Lu L, Kaminer I, Pick A, et al. Spawning Rings of Exceptional Points out of Dirac Cones. 2016 Conference on Lasers and Electro-Optics, 2016.
* [15] Zhang J, Peng B, Ozdemir SK, Pichler K, Krimer DO, Zhao GM, et al. A phonon laser operating at an exceptional point. Nat Photonics 2018, 12(8): 479-484.
* [16] Brandstetter M, Liertzer M, Deutsch C, Klang P, Schoberl J, Tureci HE, et al. Reversing the pump dependence of a laser at an exceptional point. Nat Commun 2014, 5: 7.
* [17] Liertzer M, Ge L, Cerjan A, Stone AD, Tuereci HE, Rotter S. Pump-Induced Exceptional Points in Lasers. Physical Review Letters 2012, 108(17): 173901.
* [18] El-Ganainy R, Khajavikhan M, Ge L. Exceptional points and lasing self-termination in photonic molecules. Phys Rev A 2014, 90(1): 7.
* [19] Galbiati M, Ferrier L, Solnyshkov DD, Tanese D, Wertz E, Amo A, et al. Polariton Condensation in Photonic Molecules. Physical Review Letters 2012, 108(12).
* [20] Deng H, Weihs G, Santori C, Bloch J, Yamamoto Y. Condensation of semiconductor microcavity exciton polaritons. Science (New York, NY) 2002, 298(5591): 199-202.
* [21] Kasprzak J, Richard M, Kundermann S, Baas A, Jeambrun P, Keeling JMJ, et al. Bose-Einstein condensation of exciton polaritons. Nature 2006, 443(7110): 409-414.
* [22] Balili R, Hartwell V, Snoke D, Pfeiffer L, West K. Bose-einstein condensation of microcavity polaritons in a trap. Science (New York, NY) 2007, 316(5827): 1007-1010.
* [23] Ma X, Kartashov YY, Gao TG, Schumacher S. Controllable high-speed polariton waves in a PT-symmetric lattice. New Journal of Physics 2019, 21(12): 7.
* [24] Gao T, Estrecho E, Bliokh KY, Liew TCH, Fraser MD, Brodbeck S, et al. Observation of non-Hermitian degeneracies in a chaotic exciton-polariton billiard. Nature 2015, 526(7574): 554-558.
* [25] Gao T, Li G, Estrecho E, Liew TCH, Comber-Todd D, Nalitov A, et al. Chiral Modes at Exceptional Points in Exciton-Polariton Quantum Fluids. Physical Review Letters 2018, 120(6): 7.
* [26] Abbarchi M, Amo A, Sala VG, Solnyshkov DD, Flayac H, Ferrier L, et al. Macroscopic quantum self-trapping and Josephson oscillations of exciton polaritons. Nature Physics 2013, 9(5): 275-279.
* [27] Rodriguez S, Amo A, Sagnes I, et al. Interaction-induced hopping phase in driven-dissipative coupled photonic microcavities. Nature Communications 2016, 7: 11887.
* [28] Rodriguez SRK, Amo A, Carusotto I, Sagnes I, Le Gratiet L, Galopin E, et al. Nonlinear Polariton Localization in Strongly Coupled Driven-Dissipative Microcavities. ACS Photonics 2018, 5(1): 95-99.
* [29] Lee TE. Anomalous Edge State in a Non-Hermitian Lattice. Physical Review Letters 2016, 116(13): 133903.
* [30] Yao S, Wang Z. Edge States and Topological Invariants of Non-Hermitian Systems. Physical Review Letters 2018, 121(8): 086803.
* [31] Wouters M, Carusotto I. Excitations in a nonequilibrium Bose-Einstein condensate of exciton polaritons. Physical Review Letters 2007, 99(14): 4.
* [32] The values of the parameters used for the numerical modeling are: $m_{\text{eff}}=5\times{10^{-5}}~{}m_{\text{e}}$ ($m_{\text{e}}$ is the free electron mass), $\gamma_{\text{c}}=0.16$ ps$^{-1}$, $\gamma_{\text{r}}=1.5~{}\gamma_{\text{c}}$, $g_{\text{c}}=6~{}\mu$eV $\mu$m$^{2}$, $g_{\text{r}}=2g_{\text{c}}$, $R=0.01$ ps$^{-1}$ $\mu$m$^{2}$, $w=1~{}\mu$m, $W=1.5~{}\mu$m, $V_{\text{1}}=-2.2$ meV, $V_{\text{2}}=-2$ meV, $x_{1}=-2~{}\mu$m, and $x_{2}=2~{}\mu$m.
* [33] Lien JY, Chen YN, Ishida N, Chen HB, Hwang CC, Nori F. Multistability and condensation of exciton-polaritons below threshold. Physical Review B 2015, 91(2): 024511.
# An entropy dichotomy for singular star flows
Maria José Pacifico, Fan Yang and Jiagang Yang Instituto de Matemática,
Universidade Federal do Rio de Janeiro, C. P. 68.530, CEP 21.945-970, Rio de
Janeiro, RJ, Brazil<EMAIL_ADDRESS>Department of Mathematics, Michigan
State University, East Lansing, MI, USA<EMAIL_ADDRESS>Department of
Mathematics, Southern University of Science and Technology of China,
Guangdong, China; and Departamento de Geometria, Instituto de Matemática e
Estatística, Universidade Federal Fluminense, Niterói, Brazil<EMAIL_ADDRESS>
###### Abstract.
We show that non-trivial chain recurrent classes for generic $C^{1}$ star
flows satisfy a dichotomy: either they have zero topological entropy, or they
must be isolated. Moreover, chain recurrent classes for generic star flows
with zero entropy must be sectional hyperbolic, and cannot be detected by any
non-trivial ergodic invariant probability. As a result, we show that $C^{1}$
generic star flows have only finitely many Lyapunov stable chain recurrent
classes.
M.J.P. and J.Y. are partially supported by CNPq, FAPERJ and PROEX-CAPES. J.Y.
is partially supported by NSFC 11871487 of China. F.Y. would like to thank the
hospitality of Southern University of Science and Technology of China (SUSTC),
where part of this work was done.
## 1\. Introduction
A milestone towards the final proof of the stability conjecture for
diffeomorphisms is the independent discovery of the star systems by Liao [19]
and Mañé [22]. Recall that a diffeomorphism satisfying the star condition is a
$C^{1}$ diffeomorphism for which all the periodic orbits of all nearby
diffeomorphisms are hyperbolic (orbit-wise, not uniformly); in other words,
there is no periodic orbit bifurcation. It was first proven in [10, 19] that
$\Omega$-stability implies the star condition, and later proven by Hayashi
[16], with the help of the connecting lemma, that the star condition is
equivalent to Axiom A together with the no-cycle condition. For smooth vector
fields without singularities, the same result was obtained by Gan and Wen [13]
using a different approach.
For singular flows, that is, flows exhibiting singularities, the situation is
much more complicated. Here, let us introduce the precise definition of the
star condition for singular flows, taking into account the presence of the
singularities.
###### Definition 1.
A $C^{1}$ vector field $X$ is a star vector field if there exists a
neighborhood $X\in{\mathcal{U}}\subset\mathscr{X}^{1}(M)$ such that for all
vector fields $Y\in{\mathcal{U}}$, all the critical elements of $Y$, i.e.,
singularities and periodic orbits, are hyperbolic. The collection of $C^{1}$
star vector fields is denoted by $\mathscr{X}^{1}_{*}(M)$.
It is well known that singular star flows can exhibit much richer dynamics and
bifurcation phenomena than their non-singular counterparts. Certain
singular star flows, such as the famous Lorenz flows, do not have hyperbolic
nonwandering sets [14] and are not structurally stable [15]. There are also
examples where the set of periodic orbits is not dense in the nonwandering
set [9], and where the nonwandering set satisfies Axiom A but the no-cycle
condition fails [17].
In order to properly describe the hyperbolicity of an invariant set that
contains singularity, Morales, Pacifico and Pujals [24] proposed the notion of
singular hyperbolicity for three-dimensional flows. They require the flow to
have a one-dimensional uniformly contracting direction, and a two-dimensional
subbundle containing the flow direction of regular points, on which the flow
is volume expanding. This is later generalized to higher dimensions as
sectional hyperbolicity [18, 23]. See Definition 3 in the next section for
more details.
Next, let us introduce the notion of chain recurrent classes, which plays an
important role in the study of the stability conjecture. Roughly speaking,
chain recurrent classes are the largest non-trivial invariant sets of a
topological dynamical system, outside which the system behaves like a gradient
system.
###### Definition 2.
For $\varepsilon>0,T>0$, a finite sequence $\\{x_{i}\\}_{i=0}^{n}$ is called
an $(\varepsilon,T)$-chain if there exists $\\{t_{i}\\}_{i=0}^{n-1}$ such that
$t_{i}>T$ and $d(\phi_{t_{i}}(x_{i}),x_{i+1})<\varepsilon$ for all
$i=0,\ldots,n-1$. We say that $y$ is chain attainable from $x$, if there
exists $T>0$ such that for all $\varepsilon>0$, there exists an
$(\varepsilon,T)$-chain $\\{x_{i}\\}_{i=0}^{n}$ with $x_{0}=x$ and $x_{n}=y$.
It is straightforward to check that chain attainability is an equivalence
relation on the closure of the set ${\\{x:\forall t>0,\mbox{ there is an
}(\varepsilon,T)\mbox{-chain with }x_{0}=x_{n}=x\mbox{ and
}\sum_{i}t_{i}>t\\}}$. Each equivalence class under this relation is then
called a chain recurrent class. A chain recurrent class $C$ is non-trivial if
it is neither a singularity nor a periodic orbit.
As the first step towards the complete understanding of singular star flows,
the following conjecture was raised in [29].
###### Conjecture 1.1.
(Generic) singular star flows have only finitely many chain recurrent classes,
all of which are sectional hyperbolic for $X$ or $-X$.
A partial affirmative answer to this conjecture was obtained in [28], where it
is proven that for generic star flows, every non-trivial Lyapunov stable chain
recurrent class is sectional hyperbolic. In fact, they obtained a complete
characterization of the stable indices of singularities in the same chain
recurrent classes. This result is crucial to our current paper, and is
explained in more detail in Section 2.4.
A breakthrough was later obtained by da Luz and Bonatti [3, 8], which gives a
negative answer to the second half of this conjecture. They construct an
example with two singularities of different indices robustly contained
in the same chain recurrent class. As a result, such classes cannot be
sectional hyperbolic. In [3] they propose the notion of multi-singular
hyperbolicity and prove that, generically, the star condition is equivalent to
multi-singular hyperbolicity [3, Theorem 3]. To avoid technical difficulties, we
will not provide the definition of multi-singular hyperbolicity here, and
invite the interested reader to [7] for a simpler yet useful definition.
However, the first half of Conjecture 1.1 is still left unanswered: must a
(generic) singular star flow have only finitely many chain recurrent classes?
The goal of the current paper is to provide a partial answer to this question,
using the recent progress in the entropy theory [26]:
###### Theorem A.
There is a residual set ${\mathcal{R}}$ of $C^{1}$ star flows, such that for
every $X\in{\mathcal{R}}$ and every non-trivial chain recurrent class $C$ of
$X$, we have
1. (1)
if $h_{top}(X|_{C})>0$, then $C$ contains some periodic point $p$ and is
isolated;
2. (2)
if $h_{top}(X|_{C})=0$, then $C$ is sectional hyperbolic for $X$ or $-X$, and
contains no periodic orbits. In this case, every ergodic invariant measure
$\mu$ with $\operatorname{supp}(\mu)\subset C$ must satisfy
$\mu=\delta_{\sigma}$ for some $\sigma\in{\rm Sing}(X)\cap C$.
In other words, a chain recurrent class with zero topological entropy cannot
be detected by any non-trivial invariant ergodic probability measure, in the
sense that it can only support point masses of singularities. We call such
chain recurrent classes singular aperiodic classes.
Note that in the second case, $C$ cannot be Lyapunov stable due to [26,
Corollary D]. Also note that the example constructed by Bonatti and da Luz
belongs to the first case and is isolated.
An immediate corollary of Theorem A is:
###### Corollary B.
$C^{1}$ generic star flows have only finitely many Lyapunov stable chain
recurrent classes.
Now the first half of Conjecture 1.1, namely the finiteness of chain recurrent
classes for singular star flows, is reduced to the following conjecture:
###### Conjecture 1.2.
For (generic) singular star flows, singular aperiodic classes do not exist.
Consequently, (generic) singular star flows have only finitely many chain
recurrent classes, all of which are homoclinic classes of some periodic
orbits.
#### Organization of the paper
In Section 2 we provide the readers with preliminaries on singular flows:
dominated splitting, (extended and scaled) linear Poincaré flows and Liao’s
shadowing lemma. We also include in Section 2.4 a classification on the
singularities in the same chain recurrent classes, which is taken from [28].
Section 3 contains a detailed analysis on the dynamics near Lorenz-like
singularities and, more importantly, on the transverse intersection between
invariant manifolds of nearby periodic orbits with that of points in the
class. Finally, we provide the proof of Theorem A in Section 4 by showing that
every chain recurrent class with positive topological entropy must contain a
periodic orbit and, consequently, contains all periodic orbits that are
sufficiently close to the class.
## 2\. Preliminaries
Throughout this paper, $X$ will be a $C^{1}$ vector field on a $d$-dimensional
compact manifold $M$. Denote by ${\rm Sing}(X)$ (sometimes we also write ${\rm
Sing}(\phi_{t})$) the set of singularities of $X$, $\phi_{t}$ the flow
generated by $X$, and $f=\phi_{1}$ the time-one map of $\phi_{t}$. We will
write $\Phi_{t}$ for the tangent flow, i.e., $\Phi_{t}=D\phi_{t}:TM\to TM$.
This section includes the necessary background for the proof of Theorem A.
Most notably, we will introduce the scaled and extended linear Poincaré flows
of Liao [20, 21], as well as the shadowing lemma, also first introduced by Liao
[20, 21] and further developed by Gan [11]. We also collect some previously
established results on generic singular star flows in Section 2.4, most of
which can be found in [28] and [18].
### 2.1. Dominated splitting and invariant cones
A dominated splitting for a flow $\phi_{t}$ is defined similarly to the case
of diffeomorphisms. The invariant set $\Lambda$ admits a dominated splitting
$T_{\Lambda}M=E\oplus F$ if this splitting is invariant under $\Phi_{t}$, and
if there exist $C>0$ and $\lambda<1$ such that for every $x\in\Lambda$, and
every pair of unit vectors $u\in E_{x}$ and $v\in F_{x}$, one has
$\|(\Phi_{t})_{x}(u)\|\leq C\lambda^{t}\|(\Phi_{t})_{x}(v)\|\mbox{ for }t>0.$
We invite the readers to [4, Appendix B] and [1] for more properties on the
dominated splitting. The next lemma states the relation between dominated
splitting for the flow and its time-one map.
###### Lemma 2.1.
[26, Lemma 2.6] Let $\Lambda$ be an invariant set. A splitting
$T_{\Lambda}M=E\oplus F$ is a dominated splitting for the flow
$\phi_{t}|_{\Lambda}$ if and only if it is a dominated splitting for the time-
one map $f|_{\Lambda}$. Moreover, if $\phi_{t}|_{\Lambda}$ is transitive, then
we have either $X|_{\Lambda\setminus{\rm Sing}(X)}\subset E$ or
$X|_{\Lambda\setminus{\rm Sing}(X)}\subset F.$
###### Definition 3.
A compact invariant set $\Lambda$ of a flow $X$ is called sectional
hyperbolic, if it admits a dominated splitting $E^{s}\oplus F^{cu}$, such that
$E^{s}$ is uniformly contracting, and $F^{cu}$ is sectional-expanding: there
are constants $C,\lambda>0$ such that for every $x\in\Lambda$ and any subspace
$V_{x}\subset F^{cu}_{x}$ with $\dim V_{x}\geq 2$, we have
$|\det D\phi_{t}(x)|_{V_{x}}|\geq Ce^{\lambda t}\mbox{ for all }t>0.$
We call $\lambda$ the sectional volume expanding rate on $F^{cu}$.
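In the three-dimensional setting of the introduction, where $\dim E^{s}=1$ and $\dim F^{cu}=2$, the only subspace $V_{x}\subset F^{cu}_{x}$ with $\dim V_{x}\geq 2$ is $F^{cu}_{x}$ itself, so sectional expansion reduces to the area expansion along $F^{cu}$ required by singular hyperbolicity [24]:

```latex
% dim F^{cu} = 2: the sectional-expansion condition with V_x = F^{cu}_x
|\det D\phi_{t}(x)|_{F^{cu}_{x}}|\;\geq\;Ce^{\lambda t}
\quad\mbox{for all }x\in\Lambda,\ t>0.
```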
###### Remark 2.2.
If the dominated splitting $E\oplus F$ is sectional hyperbolic, then
$\Phi_{t}$ on $E$ is uniformly contracting by definition. Since the flow speed
$\|X(x)\|$ is bounded and thus cannot be backward exponentially expanding, we
must have $X|_{\Lambda\setminus{\rm Sing}(X)}\subset F$. For more detail, see
[26, Lemma 3.10].
Let $E\oplus F$ be a dominated splitting for the flow $\phi_{t}$. For $a>0$
and $x\in M$, a $(a,F)$-cone on the tangent space $T_{x}M$ is defined as
$C_{a}(F_{x})=\\{v:v=v_{E}+v_{F}\mbox{ where }v_{E}\in E,v_{F}\in F\mbox{ and
}\|v_{E}\|<a\|v_{F}\|\\}\cup\\{0\\}.$
When $a$ is sufficiently small, the cone field $C_{a}(F_{x})$, $x\in M$, is
forward invariant by $\Phi_{1}$, i.e., there is $\lambda<1$ such that for any
$x\in M$, $\Phi_{1}(C_{a}(F_{x}))\subset C_{\lambda a}(F_{f(x)})$. Similarly,
we can define the $(a,E)$-cone $C_{a}(E_{x})$, which is backward invariant by
$\Phi_{1}$. When no confusion arises, we simply call the two families of cones
the $F$ cones and the $E$ cones.
For a $C^{1}$ disk $D$ with dimension at most $\dim F$, we say that $D$ is
tangent to the $(a,F)$-cone if $T_{x}D\subset C_{a}(F_{x})$ for every $x\in D$.
Similarly, $D$ is tangent to the $(a,E)$-cone if $T_{x}D\subset C_{a}(E_{x})$
for every $x\in D$.
### 2.2. Scaled linear Poincaré flows and a shadowing lemma by Liao
In this section, $\mu$ will be a non-trivial ergodic measure with a dominated
splitting $E\oplus F$ for the time-one map $f=\phi_{1}$ on
$\operatorname{supp}\mu$.
The linear Poincaré flow $\psi_{t}$ is defined as follows: denote the normal
bundle of $\phi_{t}$ over $\Lambda$ by
$N_{\Lambda}=\bigcup_{x\in\Lambda\setminus{\rm Sing}(X)}N_{x},$
where $N_{x}$ is the orthogonal complement of the flow direction $X(x)$, i.e.,
$N_{x}=\\{v\in T_{x}M:v\perp X(x)\\}.$
Denote the orthogonal projection of $T_{x}M$ to $N_{x}$ by $\pi_{x}$ and the
projection of $T_{\Lambda}M$ to $N_{\Lambda}$ by $\pi$. Given $v\in N_{x}$ for
a regular point $x\in M\setminus{\rm Sing}(X)$ and recalling that $\Phi_{t}$
is the tangent flow, we can define $\psi_{t}(v)$ as the orthogonal projection
of $\Phi_{t}(v)$ onto $N_{\phi_{t}(x)}$, i.e.,
$\psi_{t}(v)=\pi_{\phi_{t}(x)}(\Phi_{t}(v))=\Phi_{t}(v)-\frac{<\Phi_{t}(v),X(\phi_{t}(x))>}{\|X(\phi_{t}(x))\|^{2}}X(\phi_{t}(x)),$
where $<.,.>$ is the inner product on $T_{x}M$ given by the Riemannian metric.
The following is the flow version of the Oseledets theorem:
###### Proposition 2.3.
For $\mu$ almost every $x$, there exists $k=k(x)\in{\mathbb{N}}$ and real
numbers
$\hat{\lambda}_{1}(x)>\cdots>\hat{\lambda}_{k}(x)$
and a $\psi_{t}$ invariant measurable splitting on the normal bundle:
$N_{x}=\hat{E}^{1}_{x}\oplus\cdots\oplus\hat{E}^{k}_{x},$
such that
$\lim_{t\to\pm\infty}\frac{1}{t}\log\|\psi_{t}(v_{i})\|=\hat{\lambda}_{i}(x)\mbox{
for every non-zero vector }v_{i}\in\hat{E}^{i}_{x}.$
Now we state the relation between Lyapunov exponents and the Oseledets
splitting for $\psi_{t}$ and for $f=\phi_{1}$:
###### Theorem 2.4.
For $\mu$ almost every $x$, denote by $\lambda_{1}(x)>\cdots>\lambda_{k}(x)$
the Lyapunov exponents and
$T_{x}M=E^{1}_{x}\oplus\cdots\oplus E^{k}_{x}$
the Oseledets splitting of $\mu$ for $f$. Then
$N_{x}=\pi_{x}(E^{1}_{x})\oplus\cdots\oplus\pi_{x}(E^{k}_{x})$
is the Oseledets splitting of $\mu$ for the linear Poincaré flow $\psi_{t}$.
Moreover, the Lyapunov exponents of $\mu$ (counting multiplicity) for
$\psi_{t}$ are obtained from the exponents for $f$ by removing one zero
exponent, namely the one coming from the flow direction.
###### Definition 4.
A non-trivial measure $\mu$ is called a hyperbolic measure for the flow
$\phi_{t}$ if it is an ergodic measure of $\phi_{t}$ and all the Lyapunov
exponents for the linear Poincaré flow $\psi_{t}$ are non-vanishing. In other
words, if we view $\mu$ as an invariant measure for the time-one map $f$, then
$\mu$ has exactly one exponent which is zero, given by the flow direction. We
call the number of the negative exponents of $\mu$, counting multiplicity, its
_index_.
The scaled linear Poincaré flow, which we denote by $\psi^{*}_{t}$, is the
normalization of $\psi_{t}$ using the flow speed:
(1)
$\psi^{*}_{t}(v)=\frac{\|X(x)\|}{\|X(\phi_{t}(x))\|}\psi_{t}(v)=\frac{\psi_{t}(v)}{\|\Phi_{t}|_{<X(x)>}\|}.$
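The second equality in (1) holds because the flow direction is invariant under the tangent flow: differentiating $\phi_{t}\circ\phi_{s}=\phi_{t+s}$ in $s$ at $s=0$ gives

```latex
% the tangent flow maps the flow direction at x to the flow direction at phi_t(x)
\Phi_{t}(X(x))=X(\phi_{t}(x)),
\qquad\mbox{hence}\qquad
\|\Phi_{t}|_{<X(x)>}\|=\frac{\|X(\phi_{t}(x))\|}{\|X(x)\|}.
```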
The next lemma collects some elementary properties of $\psi_{t}^{*}$; it was
first observed by Liao [20], see also [12] and [26, Section 4] for the proof.
###### Lemma 2.5.
$\psi^{*}_{t}$ is a bounded cocycle over $N_{\Lambda}$ in the following sense:
for any $\tau>0$, there is $C_{\tau}>0$ such that for any $t\in[-\tau,\tau]$,
$\|\psi^{*}_{t}\|\leq C_{\tau}.$
Furthermore, for every non-trivial ergodic measure $\mu$, the cocycles
$\psi_{t}$ and $\psi^{*}_{t}$ have the same Lyapunov exponents and Oseledets
splitting.
Next we describe the (quasi-)hyperbolicity for the scaled linear Poincaré flow
$\psi^{*}_{t}$.
###### Definition 5.
For $T_{0}>0$, $\lambda\in(0,1)$, an orbit segment $\\{\phi_{t}(x)\\}_{[0,T]}$
is called $(\lambda,T_{0})$-forward contracting for the bundle $E\subset
N_{x}$, if there exists a partition
$0=t_{0}<t_{1}<\cdots<t_{n}=T,\mbox{\hskip 5.69046pt where
}t_{i+1}-t_{i}\in[T_{0},2T_{0}],$
such that for all $k=1,\ldots,n-1$,
(2)
$\prod_{i=0}^{k-1}\|\psi^{*}_{t_{i+1}-t_{i}}|_{\psi_{t_{i}}(E)}\|\leq\lambda^{k}.$
Similarly, an orbit segment $\\{\phi_{t}(x)\\}_{[0,T]}$ is called
$(\lambda,T_{0})$-backward contracting for the bundle $E\subset N_{x}$, if the
orbit segment $\\{\phi_{-t}(\phi_{T}(x))\\}_{[0,T]}$ is
$(\lambda,T_{0})$-forward contracting for the flow $-X$.
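Since $\psi^{*}_{t}$ is a linear cocycle over the flow (the scaling factors in (1) telescope under composition), condition (2) in particular bounds the scaled flow itself at the partition times; a small consequence worth recording:

```latex
% submultiplicativity of operator norms along the partition 0 = t_0 < ... < t_n = T
\big\|\psi^{*}_{t_{k}}|_{E}\big\|
\;\leq\;
\prod_{i=0}^{k-1}\big\|\psi^{*}_{t_{i+1}-t_{i}}|_{\psi_{t_{i}}(E)}\big\|
\;\leq\;\lambda^{k},
\qquad k=1,\ldots,n-1.
```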
###### Definition 6.
For $T_{0}>0,\lambda\in(0,1)$, the orbit segment $\\{\phi_{t}(x)\\}_{[0,T]}$
is called $(\lambda,T_{0})^{*}$ quasi-hyperbolic with respect to a splitting
$N_{x}=E^{N}_{x}\oplus F^{N}_{x}$ and the scaled linear Poincaré flow
$\psi^{*}_{t}$, if it is $(\lambda,T_{0})$-forward contracting for
$E_{x}^{N}$, $(\lambda,T_{0})$-backward contracting for $F_{x}^{N}$, and
satisfies:
(3)
$\frac{\|\psi^{*}_{t_{i+1}-t_{i}}|_{\psi_{t_{i}}(E^{N}_{x})}\|}{m(\psi^{*}_{t_{i+1}-t_{i}}|_{\psi_{t_{i}}(F^{N}_{x})})}\leq\lambda^{2}.$
###### Definition 7.
Assume that the scaled linear Poincaré flow has a dominated splitting $E\oplus
F$. A point $x$ is called a $(\lambda,T_{0})$-forward hyperbolic time for the
bundle $E\subset N_{x}$, if the infinite orbit $\phi_{[0,+\infty)}(x)$ is
$(\lambda,T_{0})$-forward contracting. In this case the partition is taken as
$0=t_{0}<t_{1}<\cdots<t_{n}<\ldots,\mbox{\hskip 5.69046pt where
}t_{i+1}-t_{i}\in[T_{0},2T_{0}],$
and (2) is stated for all $k\in{\mathbb{N}}$. Similarly, $x$ is called a
$(\lambda,T_{0})$-backward hyperbolic time for the bundle $F\subset N_{x}$, if
it is a forward hyperbolic time for the bundle $F$ and for the flow $-X$.
Finally, $x$ is called a two-sided hyperbolic time if it is both a forward and
a backward hyperbolic time.
The following lemma states that two consecutive orbit segments that are both
$(\lambda,T_{0})$-forward contracting can be “glued together” to form a
$(\lambda,T_{0})$-forward contracting orbit segment. The same can be said for
backward contracting orbit segments by considering the flow $-X$.
###### Lemma 2.6.
The following statements hold:
1. (1)
Assume that $\\{\phi_{t}(x)\\}_{[0,T_{1}]}$ is $(\lambda,T_{0})$-forward
contracting for the bundle $E$, and
$\\{\phi_{t}(\phi_{T_{1}}(x))\\}_{[0,T_{2}]}$ is $(\lambda,T_{0})$-forward
contracting for the bundle $\psi_{T_{1}}(E)$. Then
$\\{\phi_{t}(x)\\}_{[0,T_{1}+T_{2}]}$ is $(\lambda,T_{0})$-forward contracting
for the bundle $E$.
2. (2)
Assume that the scaled linear Poincaré flow has a dominated splitting $E\oplus
F$. Let $\\{\phi_{t}(x)\\}_{[0,T_{1}]}$ be $(\lambda,T_{0})$-forward
contracting for the bundle $E$, and assume that $\phi_{T_{1}}(x)$ is a
$(\lambda,T_{0})$-forward hyperbolic time for the bundle $E$. Then $x$ is a
$(\lambda,T_{0})$-forward hyperbolic time for the bundle $E$.
The proof is standard and thus omitted.
By the classic work of Liao [20], there exists
$\delta=\delta(\lambda,T_{0})>0$ such that if $x$ is a
$(\lambda,T_{0})$-backward hyperbolic time, then $x$ has unstable manifold
with size $\delta\|X(x)\|$. Similarly, if $x$ is a $(\lambda,T_{0})$-forward
hyperbolic time then it has stable manifold with size $\delta\|X(x)\|$. In
both cases, we say that $x$ has unstable/stable manifold up to the flow speed.
The next lemma can be seen as a $C^{1}$ version of the Pesin theory for flows.
The proof can be found in [26, Section 4].
###### Lemma 2.7.
Let $\mu$ be a hyperbolic measure for the flow $\phi_{t}$. For almost every
ergodic component $\tilde{\mu}$ of $\mu$ with respect to $f=\phi_{1}$, there
are $L^{\prime},\eta,T_{0}>0$ and a compact set
$\Lambda_{0}\subset\operatorname{supp}\mu\setminus{\rm Sing}(X)$ with positive
$\tilde{\mu}$ measure, such that for every $x$ satisfying
$f^{n}(x)\in\Lambda_{0}$ for $n>L^{\prime}$, the orbit segment
$\\{\phi_{t}(x)\\}_{[0,n]}$ is $(\eta,T_{0})^{*}$ quasi-hyperbolic with
respect to the splitting $N_{x}=\pi_{x}(E_{x})\oplus\pi_{x}(F_{x})$ and the
scaled linear Poincaré flow $\psi^{*}_{t}$.
Next, we introduce a shadowing lemma by Liao [20] for the scaled linear
Poincaré flow. See [11] and [26] for the current version.
###### Lemma 2.8.
Given a compact set $\Lambda_{0}$ with $\Lambda_{0}\cap{\rm
Sing}(X)=\emptyset$ and $\eta\in(0,1),T_{0}>0$, for any $\varepsilon>0$ there
exists $\delta>0$, $L>0$ and $\delta_{0}>0$, such that for any
$(\eta,T_{0})^{*}$ quasi-hyperbolic orbit segment $\\{\phi_{t}(x)\\}_{[0,T]}$
with respect to a dominated splitting $N_{x}=E_{x}\oplus F_{x}$ and the scaled
linear Poincaré flow $\psi^{*}_{t}$, if $x,\phi_{T}(x)\in\Lambda_{0}$ with
$d(x,\phi_{T}(x))<\delta$, then there exists a point $p$ and a $C^{1}$
strictly increasing function $\theta:[0,T]\to{\mathbb{R}}$, such that
1. (a)
$\theta(0)=0$ and $|\theta^{\prime}(t)-1|<\varepsilon$;
2. (b)
$p$ is a periodic point with $\phi_{\theta(T)}(p)=p$;
3. (c)
$d(\phi_{t}(x),\phi_{\theta(t)}(p))\leq\varepsilon\|X(\phi_{t}(x))\|$, for all
$t\in[0,T]$;
4. (d)
$d(\phi_{t}(x),\phi_{\theta(t)}(p))\leq Ld(x,\phi_{T}(x))$;
5. (e)
$p$ has stable and unstable manifold with size at least $\delta_{0}$.
6. (f)
if $\Lambda_{0}\subset\Lambda$ for a chain recurrent class $\Lambda$, then
$p\in\Lambda$.
### 2.3. Extended linear Poincaré flows
Note that the (scaled) linear Poincaré flow is only defined at the regular
points $M\setminus{\rm Sing}(X)$. To solve this issue, we introduce the
extended linear Poincaré flow, which is a useful tool developed by Liao [20,
21] and Li et al [18] to study hyperbolic singularities.
Denote by
$G^{1}=\\{L:L\mbox{ is a 1-dimensional subspace of }T_{x}M,x\in M\\}$
the Grassmannian manifold of $M$. Given a $C^{1}$ flow $\phi_{t}$, the tangent
flow $\Phi_{t}$ acts naturally on $G^{1}$ by mapping each $L$ to
$\Phi_{t}(L)$.
Write $\beta:G^{1}\to M$ and $\xi:TM\to M$ the bundle projection. The pullback
bundle of $TM$:
$\beta^{*}(TM)=\\{(L,v)\in G^{1}\times TM:\beta(L)=\xi(v)\\}$
is a vector bundle over $G^{1}$ with dimension $\dim M$. The tangent flow
$\Phi_{t}$ lifts naturally to $\beta^{*}(TM)$:
$\Phi_{t}(L,v)=(\Phi_{t}(L),\Phi_{t}(v)).$
Recall that the linear Poincaré flow $\psi_{t}$ projects the image of the
tangent flow to the normal bundle of the flow direction. The key observation
is that this projection can be defined not only with respect to the bundle perpendicular
to the flow, but to the orthogonal complement of any section $\\{L_{x}:x\in
M\\}\subset G^{1}$.
To be more precise, given ${\mathcal{L}}=\\{L_{x}:x\in M\\}$ we write
${\mathcal{N}}_{\mathcal{L}}=\\{(L_{x},v)\in\beta^{*}(TM):v\perp L_{x}\\}.$
Then ${\mathcal{N}}_{\mathcal{L}}$, consisting of the vectors perpendicular to
${\mathcal{L}}$, is a sub-bundle of $\beta^{*}(TM)$ over $G^{1}$ with dimension
$\dim M-1$. The extended
linear Poincaré flow is then defined as
$\psi_{t}:{\mathcal{N}}_{\mathcal{L}}\to{\mathcal{N}}_{\mathcal{L}},\\\
\psi_{t}(L_{x},v)=\pi(\Phi_{t}(L_{x},v)),$
where $\pi$ is the orthogonal projection from fibres of $\beta^{*}(TM)$ to the
corresponding fibres of ${\mathcal{N}}$ along ${\mathcal{L}}$.
If we consider the map
$\zeta:{\rm Reg}(X)\to G^{1}$
that maps every regular point $x$ to the unique $L_{x}\in G^{1}$ with
$\beta(L_{x})=x$ such that $L_{x}$ is generated by the flow direction at $x$,
then the extended linear Poincaré flow on $\zeta({\rm Reg}(X))$ can be naturally
identified with the linear Poincaré flow defined earlier. On the other hand,
given any invariant set $\Lambda$ of the flow $\phi_{t}$, consider the set:
$\tilde{\Lambda}=\overline{\zeta(\Lambda\cap{\rm Reg}(X))}.$
If $\Lambda$ contains no singularity, then $\tilde{\Lambda}$ can be seen as a
natural copy of $\Lambda$ in $G^{1}$ equipped with the direction of the flow
on $\Lambda$. If $\sigma\in\Lambda$ is a singularity, then $\tilde{\Lambda}$
contains all the directions in $\beta^{-1}(\sigma)$ that can be approximated by
the flow direction at regular points in $\Lambda$. The extended Poincaré flow
restricted to $\tilde{\Lambda}$ can be seen as the continuous extension of the
linear Poincaré flow on $\Lambda$. The same treatment can be applied to the
scaled linear Poincaré flow $\psi_{t}^{*}$.
### 2.4. Classification of chain recurrent classes and singularities for
generic star flows
In this subsection we recap the main result in [28] on $C^{1}$ generic star
flows. We begin with the following classification on the singularities.
###### Definition 8.
Let $\sigma$ be a hyperbolic singularity contained in a non-trivial chain
recurrent class $C(\sigma)$. Assume that the Lyapunov exponents of $\sigma$
are:
$\lambda_{1}\leq\cdots\leq\lambda_{s}<0<\lambda_{s+1}\leq\cdots\leq\lambda_{\dim
M}.$
Write ${\rm Ind}(\sigma)=s$ for the stable index of $\sigma$. We say that
1. (1)
$\sigma$ is Lorenz-like, if $\lambda_{s}+\lambda_{s+1}>0$,
$\lambda_{s-1}<\lambda_{s}$ (this implies that $e^{\lambda_{s}}<1$ is a simple
real eigenvalue of $\Phi_{1}|_{T_{\sigma}M}$), and
$W^{ss}(\sigma)\cap C(\sigma)=\\{\sigma\\}$, where $W^{ss}(\sigma)$ is the
strong stable manifold of $\sigma$ corresponding to
$\lambda_{1},\ldots,\lambda_{s-1}$; regular orbits in $C(\sigma)$ can only
approach $\sigma$ along the $E^{cu}(\sigma)$ cone, where $E^{cu}$ is the
$\Phi_{t}$-invariant subspace corresponding to $\lambda_{s},\ldots,\lambda_{\dim
M}$;
2. (2)
$\sigma$ is reverse Lorenz-like, if it is Lorenz-like for $-X$ (in [28] and [7]
both cases are called Lorenz-like; here we distinguish between the two since
our main arguments are different in each case, see the proof of Lemma 4.6 for
more details); in this case, regular orbits in $C(\sigma)$ can only
approach $\sigma$ along the $E^{cs}(\sigma)$ cone. See Figure 1.
Figure 1. Lorenz-like and reverse Lorenz-like singularities
Then it is shown in [18] and [28] that (all the theorems are labeled according
to [28]):
* •
for a star vector field $X$, if a chain recurrent class $C$ is non-trivial,
then every singularity in $C$ is either Lorenz-like or reverse Lorenz-like
(Theorem 3.6); the original proof can be found in [18];
* •
there exists a residual set ${\mathcal{R}}\subset\mathscr{X}^{1}_{*}(M)$ such
that for every $X\in{\mathcal{R}}$, if a periodic orbit $p$ is sufficiently
close to a singularity $\sigma$, then:
* –
when $\sigma$ is Lorenz-like, the index of $p$ must be ${\rm
Ind}(\sigma)-1$;
* –
when $\sigma$ is reverse Lorenz-like, the index of $p$ is ${\rm
Ind}(\sigma)$ (Lemma 4.4);
furthermore, the dominated splitting on $\sigma$ induced by such periodic
orbits coincides with the hyperbolic splitting on $\sigma$ (proof of Theorem
3.7);
* •
For every chain recurrent class $C$ there exists an integer ${\rm Ind}_{C}>0$,
such that every periodic orbit contained in a sufficiently small neighborhood
of $C$ has the same stable index which equals ${\rm Ind}_{C}$ (Theorem 5.7);
* •
combining the previous two results, we see that every singularity in $C$ has
index either ${\rm Ind}_{C}+1$ (in which case it must be Lorenz-like) or ${\rm
Ind}_{C}$ (reverse Lorenz-like);
* •
if all the singularities in $C$ are Lorenz-like, then $C$ is sectional
hyperbolic (Theorem 3.7); if all the singularities in $C$ are reverse Lorenz-
like, then $C$ is sectional hyperbolic for $-X$ (Theorem 3.7);
* •
if $C$ contains singularities with different indices (note that they can only
differ by one), then there is no sectional hyperbolic splitting on $C$; one
such example was constructed by Bonatti and da Luz [3, 8].
## 3\. Flow orbits near singularities
This section contains some general results on hyperbolic singularities of
$C^{1}$ vector fields (Section 3.1), and on Lorenz-like singularities for star
flows (Section 3.2). The key results are Lemmas 3.2 and 3.4, which state that
the time near a singularity during which the orbit is “making the turn” is
bounded from above. For Lorenz-like singularities of star flows, we prove in Lemma 3.12
that for a periodic orbit ${\rm Orb}(p)$ approaching a singularity $\sigma$
while exhibiting backward hyperbolic times, the unstable manifold of ${\rm
Orb}(p)$ must intersect $W^{s}(\sigma)$ transversally. Then Lemma 3.13
deals with the case where a sequence of forward hyperbolic times approaches a
Lorenz-like singularity. These two results will allow us to show in the next
section that such periodic orbit must be in the same chain recurrent class as
$\sigma$.
### 3.1. Flow orbits near hyperbolic singularities
In this section we will establish some geometric properties for flow orbits in
a small neighborhood of a hyperbolic singularity. Our results apply to all
$C^{1}$ vector fields $X$, not necessarily star.
For this purpose, let $\sigma$ be a hyperbolic singularity with the hyperbolic
splitting $E^{s}_{\sigma}\oplus E^{u}_{\sigma}$. Without loss of generality,
we can think of $\sigma$ as the origin in ${\mathbb{R}}^{n}$, and assume that
$E^{s}_{\sigma}$ and $E^{u}_{\sigma}$ are perpendicular (which is possible if
one changes the metric). In particular, we will assume that
$E^{s}_{\sigma}={\mathbb{R}}^{s}$ is the $s$-dimensional subspace of
${\mathbb{R}}^{n}$ with the last $\dim M-s$ coordinates being zero. Here
$s=\dim E^{s}_{\sigma}$ is the stable index of $\sigma$. Similarly,
$E^{u}_{\sigma}$ is the subspace of ${\mathbb{R}}^{n}$ where the first $s$
coordinates are zero. As before we will write $f=\phi_{1}$ for the time-one
map of the flow.
Since the vector field $X$ is $C^{1}$, we can take a neighborhood
$U=B_{r}(\sigma)$ with $r$ small enough, such that:
* •
the flow in $U$ can be written as
(4) $\phi_{t}(x)=e^{At}x+C^{1}\mbox{ small perturbation},$
where $A$ is a matrix with no eigenvalue on the imaginary axis;
* •
for $x\in U$, the tangent maps $Df_{x}=\Phi_{1}|_{T_{x}M}$ are small
perturbations of the hyperbolic matrix $e^{A}$, with eigenvalues bounded away
from $1$.
For each $x\in U$, denote by $x^{s}$ its distance to $E^{u}_{\sigma}$ and by
$x^{u}$ its distance to $E^{s}_{\sigma}$. Then for every $\alpha>0$ small, we
define the $\alpha$-cones on the manifold, denoted by $D^{i}_{\alpha}(\sigma)$,
$i=s,u$, as follows:
$D^{s}_{\alpha}(\sigma)=\\{x\in U:x^{u}<\alpha x^{s}\\},\hskip
28.45274ptD^{u}_{\alpha}(\sigma)=\\{x\in U:x^{s}<\alpha x^{u}\\}.$
Note that the hyperbolic splitting $T_{\sigma}M=E^{s}_{\sigma}\oplus
E^{u}_{\sigma}$ can be extended to $U$ in a natural way: for each $x\in U$,
put $E^{s}(x)$ as the $s$-dimensional hyperplane that is parallel to
$E^{s}_{\sigma}$; the same can be done for $E^{u}(x)$. This allows us to
consider the $\alpha$-cones $C_{\alpha}(E^{i})$, $i=s,u$, on the tangent
bundle as defined in Section 2.1. The next lemma easily follows from the
smoothness of the vector field $X$ and the hyperbolicity of $\sigma$:
###### Lemma 3.1.
There exists $L\geq 1$, such that for all $\alpha>0$ small enough,
1. (1)
for every $x\in{\rm Cl}(D^{s}_{\alpha}(\sigma))$, we have $X(x)\in
C_{L\alpha}(E^{s})$;
2. (2)
for every $x\in U$, if $X(x)\in C_{\alpha}(E^{s})$, we have $x\in
D^{s}_{L\alpha}(\sigma)$.
Moreover, the same holds for $D^{u}_{\alpha}(\sigma)$ and $C_{\alpha}(E^{u})$.
Let us fix some $\alpha>0$ small enough, to be specified at the end of
this section. Note that if $x\in U\setminus(D^{s}_{\alpha}(\sigma)\cup
D^{u}_{\alpha}(\sigma))$, we lose control of the direction of $X(x)$. One can
think of the region $U\setminus(D^{s}_{\alpha}(\sigma)\cup
D^{u}_{\alpha}(\sigma))$ as the place where the flow is ‘making the turn’ from
the $E^{s}$ cone to the $E^{u}$ cone. The next lemma states that the time that
an orbit segment spends in this region is uniformly bounded. To this end, we
write, for each $x\in U$,
$t^{+}(x)=\sup\\{t>0:\phi_{[0,t]}(x)\subset U\\},\hskip
14.22636ptt^{-}(x)=\sup\\{t>0:\phi_{[-t,0]}(x)\subset U\\}.$
Then the orbit segment $\phi_{(-t^{-},t^{+})}(x)$ contains $x$ and is
contained in $U$. With a slight abuse of notation, we will frequently drop the
dependence of $t^{\pm}(x)$ on $x$.
###### Lemma 3.2.
Let $\sigma$ be a hyperbolic singularity for a $C^{1}$ vector field $X$. Then
for every $\alpha>0$ small enough, there exists $T_{\alpha}>0$ such that for
every $r>0$ small enough and every $x\in U=B_{r}(\sigma)$, the set
$T(x):=\\{t\in(-t^{-},t^{+}):\phi_{t}(x)\notin D^{s}_{\alpha}(\sigma)\cup
D^{u}_{\alpha}(\sigma)\\}$
has Lebesgue measure bounded by $T_{\alpha}$.
###### Proof.
Recall that the Lebesgue measure on the interval $(-t^{-},t^{+})$ corresponds
to the length of open intervals. Below we will prove that the set in question
is contained in a subinterval of $(-t^{-},t^{+})$ whose length is bounded from
above.
We take a small neighborhood $\sigma\in V\subset U$ such that for every $y\in V$
it holds that
$\phi_{(-t^{-}(y),t^{+}(y))}(y)\cap
D_{\alpha}^{*}(\sigma)\neq\emptyset,\quad *=s,u.$
Note that if an orbit segment $\phi_{(-t^{-},t^{+})}(x)$ does not intersect
with $V$, then $t^{-}+t^{+}$ must be bounded. Therefore we only need to prove
the lemma for orbit segments that intersect with $V$.
Shrinking $V$ if necessary, we may assume that
$\phi_{(-t^{-},t^{+})}(x)\cap V\neq\emptyset\implies\phi_{-t^{-}}(x)\in
D^{s}_{\alpha}(\sigma),\phi_{t^{+}}(x)\in D^{u}_{\alpha}(\sigma).$
We will also assume, by changing to a different point on the same orbit
segment if necessary, that $x\in V\setminus(D^{s}_{\alpha}(\sigma)\cup
D^{u}_{\alpha}(\sigma))$. For such an orbit segment
$\phi_{(-t^{-},t^{+})}(x)$, define
$t^{s}=t^{s}(x)=\sup\\{t>0:\phi_{(-t^{-},-t)}(x)\subset
D^{s}_{\alpha}(\sigma)\\},$
and
$t^{u}=t^{u}(x)=\sup\\{t>0:\phi_{(t,t^{+})}(x)\subset
D^{u}_{\alpha}(\sigma)\\}.$
Clearly we have $T(x)\subset(-t^{s},t^{u})$. Below we will show that
$t^{s}+t^{u}$ is bounded from above.
Writing $x^{s}=\phi_{-t^{s}}(x)$, $x^{u}=\phi_{t^{u}}(x)$, Lemma 3.1(2) shows
that
$X(x^{s})\notin C_{\alpha/L}(E^{s}),\,\,X(x^{u})\notin C_{\alpha/L}(E^{u}).$
In particular (following our notation earlier,
$X(x^{*})=X(x^{*})^{s}+X(x^{*})^{u}\in E^{s}\oplus E^{u}$),
(5)
$\frac{||X(x^{s})^{u}||}{||X(x^{s})^{s}||}>\alpha/L,\,\,\,\,\frac{||X(x^{u})^{u}||}{||X(x^{u})^{s}||}<L/\alpha.$
On the other hand, by the hyperbolicity of $\sigma$, there is $\lambda_{0}>1$
independent of $\alpha$ such that for each $x\in U\cap\phi_{-1}(U)$, and for
every $v\in T_{x}M$ it holds that
$\frac{||\Phi_{1}(v)^{u}||}{||\Phi_{1}(v)^{s}||}>\lambda_{0}\frac{||v^{u}||}{||v^{s}||}.$
Combining this with (5) and the observation that
$\Phi_{t^{s}+t^{u}}(X(x^{s}))=X(x^{u})$, we obtain
$\lambda_{0}^{t^{s}+t^{u}}<\frac{L^{2}}{\alpha^{2}},$
which implies that
$t^{s}+t^{u}<\frac{2\log L-2\log\alpha}{\log\lambda_{0}}:=T_{\alpha}.$
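For completeness, the bound above is obtained by iterating the one-step expansion estimate along the orbit from $x^{s}$ to $x^{u}$ and inserting (5); the fractional part of the flow time only contributes a bounded factor, which does not affect the conclusion:

```latex
\frac{L}{\alpha}
  > \frac{\|X(x^{u})^{u}\|}{\|X(x^{u})^{s}\|}
  = \frac{\|\Phi_{t^{s}+t^{u}}(X(x^{s}))^{u}\|}{\|\Phi_{t^{s}+t^{u}}(X(x^{s}))^{s}\|}
  \geq \lambda_{0}^{t^{s}+t^{u}}\cdot\frac{\|X(x^{s})^{u}\|}{\|X(x^{s})^{s}\|}
  > \lambda_{0}^{t^{s}+t^{u}}\cdot\frac{\alpha}{L}.
```

Taking logarithms then gives $t^{s}+t^{u}<(2\log L-2\log\alpha)/\log\lambda_{0}=T_{\alpha}$.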
Also note that $T_{\alpha}$ can be made uniform for $r>0$ small enough, since
$L$ and $\lambda_{0}$ can be chosen independent of $r$. This concludes the
proof of the lemma. ∎
Now let us look at this lemma from the perspective of invariant measures.
Setting for $i=s,u$,
${\mathcal{L}}^{i}(\sigma)=\\{L\in G^{1}:\beta(L)=\sigma,L\mbox{ is parallel
to }E^{i}\\},$
then ${\mathcal{L}}^{i}$ are invariant under $\Phi_{t}|_{\beta^{-1}(\sigma)}$
(note that this is the tangent flow on $G^{1}$). Furthermore, the
hyperbolicity of $\sigma$ implies that ${\mathcal{L}}^{s}$ is a repelling set
while ${\mathcal{L}}^{u}$ is an attracting set.
Next we take a sequence of points $\\{x_{i}\\}\subset U$ with $x_{i}\to\sigma$
as $i\to\infty$. To simplify notation, we will write
$t^{\pm}_{i}=t^{\pm}(x_{i})$. Note that $t^{\pm}_{i}\uparrow+\infty$. For
every $\varepsilon>0$ small enough, the time that the orbit segments
$\phi_{(-t^{-}_{i},t^{+}_{i})}(x_{i})$ spend in the region $U\setminus
B_{\varepsilon}(\sigma)$ is uniformly bounded in $i$. As a result, the
empirical measures supported on these orbit segments behave trivially:
(6)
$\nu_{i}=\frac{1}{t^{-}_{i}+t^{+}_{i}}\int_{-t^{-}_{i}}^{t^{+}_{i}}\delta_{\phi_{s}(x_{i})}\,ds\xrightarrow{i\to\infty}\delta_{\sigma},$
where $\delta_{\sigma}$ is the atomic measure on $\sigma$.
On the other hand, the map $\zeta:{\rm Reg}(X)\to G^{1}$ defined in Section
2.3 lifts any measure $\mu$ on $M$ with $\mu({\rm Sing})=0$ to a measure
$\zeta_{*}(\mu)$ on $G^{1}$. Now consider the lifted empirical measures:
(7) $\tilde{\nu}_{i}=\zeta_{*}(\nu_{i}).$
If we take any weak-* limit $\tilde{\mu}$ of $\\{\tilde{\nu}_{i}\\}$ (the
limit exists since $G^{1}$ is compact), $\tilde{\mu}$ must be invariant under
$\Phi_{t}$ and is supported on
${\mathcal{L}}^{s}\cup{\mathcal{L}}^{u}\subset\beta^{-1}(\sigma)$ since
${\mathcal{L}}^{s}\cup{\mathcal{L}}^{u}$ contains the non-wandering set of
$\Phi_{t}|_{\beta^{-1}(\sigma)}$. Write
${\mathcal{U}}_{\alpha}^{*}=\overline{\zeta(D^{*}_{\alpha}(\sigma))}$ for
$*=s,u$. Observe that by Lemma 3.1, each ${\mathcal{U}}_{\alpha}^{*}$ contains
a neighborhood of ${\mathcal{L}}^{*}(\sigma)$ in $G^{1}$, $*=s,u$.
Furthermore, we have
${\mathcal{U}}_{\alpha}^{s}\cap{\mathcal{U}}_{\alpha}^{u}=\emptyset$. Combining
this with Lemma 3.2, we obtain the following lemma:
###### Lemma 3.3.
For all $\alpha>0$ small, we have
$\tilde{\nu}_{i}({\mathcal{U}}_{\alpha}^{s}\cup{\mathcal{U}}_{\alpha}^{u})\to
1$ as $i\to\infty$.
The next lemma states that the times that the orbit segments
$\phi_{(-t^{-}_{i},t^{+}_{i})}(x_{i})$ spend in $D^{s}_{\alpha}(\sigma)$ and
$D^{u}_{\alpha}(\sigma)$ are comparable:
###### Lemma 3.4.
There is $a\in(0,\frac{1}{2})$ independent of $\alpha$, such that for every
sequence $\\{x_{i}\\}\subset U$ with $x_{i}\to\sigma$ and every weak*-limit
$\tilde{\mu}$ of the empirical measure $\tilde{\nu}_{i}$ defined using (6) and
(7), we have
$\tilde{\mu}({\mathcal{U}}_{\alpha}^{s})>a\mbox{ and
}\tilde{\mu}({\mathcal{U}}_{\alpha}^{u})>a.$
###### Proof.
We will show that for every orbit segment
$\phi_{(-t^{-}_{i},t^{+}_{i})}(x_{i})$, the times it spends in
$D^{*}_{\alpha}(\sigma)$, $*=s,u$, are comparable, with a ratio that is uniform
in $i$ and $\alpha$.
First, note that since the vector field is $C^{1}$ and $\sigma$ is a
hyperbolic zero of $X$, the flow speed is comparable to $d(x,\sigma)$ near
$\sigma$: there are $0<C_{1}<C_{2}$ such that
$\frac{\|X(x)\|}{d(x,\sigma)}\in(C_{1},C_{2}).$
To simplify notation, we write
$x_{e,i}=\phi_{-t^{-}_{i}}(x_{i})\mbox{, and }x_{l,i}=\phi_{t^{+}_{i}}(x_{i})$
for the end points of $\phi_{(-t^{-}_{i},t^{+}_{i})}(x_{i})$ that enter and
leave the neighborhood $U$. By our construction, $x_{e,i},x_{l,i}\in\partial
U=\partial B_{r}(\sigma)$. As a result, for every $i$ it holds that
$\frac{\|X(x_{e,i})\|}{\|X(x_{l,i})\|}\in\left(\frac{C_{1}}{C_{2}},\frac{C_{2}}{C_{1}}\right).$
Denote by $t^{0}_{i}\in(-t^{-}_{i},t^{+}_{i})$ the time such that the point
$x^{0}_{i}=\phi_{t^{0}_{i}}(x_{i})$ satisfies $(x^{0}_{i})^{s}=(x^{0}_{i})^{u}$.
One could think of $x^{0}_{i}$ as the point on the orbit segment
$\phi_{(-t^{-}_{i},t^{+}_{i})}(x_{i})$ where the flow speed is the lowest. We
parse each orbit segment $\phi_{(-t^{-}_{i},t^{+}_{i})}(x_{i})$ into three
sub-segments (recall the definition of $t^{s}(x_{i})$ and $t^{u}(x_{i})$ in
Lemma 3.2. To simplify notation we will write $t_{i}^{*}=t^{*}(x_{i})$,
$*=s,u$):
* •
write $x^{s}_{i}=\phi_{-t^{s}_{i}}(x_{i})$ for the point on
$\phi_{(-t^{-}_{i},t^{+}_{i})}(x_{i})$ that is on the boundary of
$D^{s}_{\alpha}(\sigma)$; then the orbit from $x_{e,i}$ to $x^{s}_{i}$ is
contained in $D^{s}_{\alpha}(\sigma)$;
* •
write $x^{u}_{i}=\phi_{t^{u}_{i}}(x_{i})$ for the point on
$\phi_{(-t^{-}_{i},t^{+}_{i})}(x_{i})$ that is on the boundary of
$D^{u}_{\alpha}(\sigma)$; then the orbit from $x^{u}_{i}$ to $x_{l,i}$ is
contained in $D^{u}_{\alpha}(\sigma)$;
* •
the orbit segment from $x^{s}_{i}$ to $x^{u}_{i}$ is outside
$D^{*}_{\alpha}(\sigma)$, $*=s,u$; by Lemma 3.2, $t^{u}_{i}+t^{s}_{i}\leq
T_{\alpha}$.
Note that $x^{0}_{i}$ is contained in the orbit segment from $x^{s}_{i}$ to
$x^{u}_{i}$. Since the flow time from $x^{0}_{i}$ to each of $x^{s}_{i}$ and
$x^{u}_{i}$ is bounded by $T_{\alpha}$ and the flow is $C^{1}$, we obtain
$\frac{\|X(x^{u}_{i})\|}{\|X(x^{s}_{i})\|}=\frac{\|X(x^{u}_{i})\|}{\|X(x^{0}_{i})\|}\frac{\|X(x^{0}_{i})\|}{\|X(x^{s}_{i})\|}\in\left(||\Phi_{T_{\alpha}}||^{-2},||\Phi_{T_{\alpha}}||^{2}\right).$
For the orbit segment from $x_{e,i}$ to $x^{s}_{i}$, Lemma 3.1(1) shows that
$X(x)\in C_{L\alpha}(E^{s})$ for each $x$ in this orbit segment. Since the
flow speed is uniformly exponentially contracting in $C_{L\alpha}(E^{s})$
provided that $\alpha$ and $r$ are small enough, we see that the time length
of this orbit segment satisfies
$t^{-}_{i}-t^{s}_{i}=\mathcal{O}(\log\frac{\|X(x_{e,i})\|}{\|X(x^{s}_{i})\|}).$
Similarly,
$t^{+}_{i}-t^{u}_{i}=\mathcal{O}(\log\frac{\|X(x_{l,i})\|}{\|X(x^{u}_{i})\|}).$
Then the ratio is
$\frac{t^{-}_{i}-t^{s}_{i}}{t^{+}_{i}-t^{u}_{i}}=\mathcal{O}\left(\frac{\log\|X(x_{e,i})\|-\log\|X(x^{s}_{i})\|}{\log\|X(x_{l,i})\|-\log\|X(x^{u}_{i})\|}\right)=\mathcal{O}\left(\frac{\log\|X(x^{s}_{i})\|}{\log\|X(x^{u}_{i})\|}\right)=\mathcal{O}(1),$
where in the last equality we use the elementary fact that if $a_{i}\to
0,b_{i}\to 0$ such that $a_{i}/b_{i}$ is bounded from above and away from
zero, then $\log a_{i}/\log b_{i}\to 1.$
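The elementary fact invoked in the last equality admits a one-line verification, included here for completeness:

```latex
\frac{\log a_{i}}{\log b_{i}}
  = \frac{\log b_{i}+\log(a_{i}/b_{i})}{\log b_{i}}
  = 1+\frac{\log(a_{i}/b_{i})}{\log b_{i}}
  \xrightarrow{\;i\to\infty\;} 1,
```

since $\log(a_{i}/b_{i})$ stays bounded while $\log b_{i}\to-\infty$.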
Finally, note that even though the ratio
$\frac{\|X(x^{u}_{i})\|}{\|X(x^{s}_{i})\|}$ depends on $\alpha$,
$\frac{t^{-}_{i}-t^{s}_{i}}{t^{+}_{i}-t^{u}_{i}}$ only depends on the
exponential contracting/expanding rate in $C_{L\alpha}(E^{*})$, which can be
made uniform for $\alpha$ small enough. This finishes the proof of the lemma.
∎
We conclude this subsection with the following lemma, which will be used later
to create transverse intersection between the unstable manifold of a periodic
orbit and the stable manifold of the singularity $\sigma$. As before, $s$ is
the stable index of $\sigma$.
###### Lemma 3.5.
For each $\beta>0$ small and each $\delta>0$, there is $\alpha_{0}>0$ such
that the following holds for all $\alpha<\alpha_{0}$: for every point $x\in
D^{s}_{\alpha}(\sigma)$ and every $(\dim M-s)$-dimensional submanifold $W(x)$
that contains $x$ and is tangent to $C_{\beta}(E^{u})$, if
$\operatorname{diam}W(x)>\delta\|X(x)\|$, then $W(x)\pitchfork
W^{s}(\sigma)\neq\emptyset$.
###### Proof.
Since the flow speed at $x$ is a Lipschitz function of $d(x,\sigma)$, we see
that
$\operatorname{diam}W(x)>C_{1}\delta d(x,\sigma)>C_{1}C^{\prime}\delta x^{s}$
for some $C^{\prime}>0$. Here $C_{1}$ is the same constant as in the previous
lemma.
On the other hand, since $W(x)$ is tangent to the $\beta$-cone of $E^{u}$,
there is $C^{\prime\prime}>0$ such that if
$\operatorname{diam}W(x)>C^{\prime\prime}x^{u}=C^{\prime\prime}d(x,W^{s}(\sigma))$,
we must have $W(x)\pitchfork W^{s}(\sigma)\neq\emptyset$.
Since $x^{u}<\alpha x^{s}$ in the cone $D^{s}_{\alpha}$, the choice of
$\alpha<\alpha_{0}:=C_{1}C^{\prime}\delta/C^{\prime\prime}$ guarantees that
$\operatorname{diam}W(x)>C_{1}C^{\prime}\delta x^{s}>C^{\prime\prime}x^{u}$,
therefore $W(x)\pitchfork W^{s}(\sigma)\neq\emptyset$. This concludes the
proof of the lemma. ∎
### 3.2. Near Lorenz-like singularities
Now we turn our attention to Lorenz-like singularities for star flows. Assume
that $\sigma$ is a Lorenz-like singularity contained in a non-trivial chain
recurrent class $C=C(\sigma)$. The discussion below applies to reverse Lorenz-
like singularities if one considers the flow $-X$.
Let $E_{\sigma}^{cs}\oplus E_{\sigma}^{u}$ be the hyperbolic splitting on
$T_{\sigma}M$. We will write $E_{\sigma}^{ss}$ for the subspace of $T_{\sigma}M$
corresponding to the exponents $\lambda_{1},\ldots,\lambda_{s-1}$, and
$E_{\sigma}^{c}$ for the subspace of $T_{\sigma}M$ corresponding to the exponent
$\lambda_{s}$. Then $E_{\sigma}^{ss}\oplus E_{\sigma}^{c}=E_{\sigma}^{cs}$.
The discussion in the previous sub-section applies to $\sigma$ without any
modification (note that this time, we change the notation of $E^{s}$ to
$E^{cs}$ and ${\mathcal{L}}^{s}$ to ${\mathcal{L}}^{cs}$). Furthermore, we can
think of $\sigma$ as the origin in ${\mathbb{R}}^{n}$ with three bundles
$E_{\sigma}^{ss},E_{\sigma}^{c}$ and $E_{\sigma}^{u}$ perpendicular to one
another (which is possible if one changes the metric). These bundles can be
naturally extended to $U=B_{r}(\sigma)$ as before. As a result, the cone field
$C_{\alpha}(E^{*})$ can be defined for $*=ss,c,u,cu$ and $cs$. The same can be
said about the cones on the manifold, $D_{\alpha}^{*}(\sigma)$.
It is proven in [25], [18] and [28] that
$W^{ss}(\sigma)\cap\operatorname{C}(\sigma)=\\{\sigma\\}$. Furthermore, it is
shown that if the orbit segment is taken inside $C(\sigma)$ (or if the orbit
segment belongs to a periodic orbit that is sufficiently close to
$C(\sigma)$), then it can only approach the singularity $\sigma$ along the
one-dimensional subspace $E^{c}$ in the following sense: write
${\mathcal{L}}^{c}=\\{L\in G^{1}:\beta(L)=\sigma,L\mbox{ is parallel to
}E^{c}\\},$
then ${\mathcal{L}}^{c}\subset{\mathcal{L}}^{cs}$ consists of a single point
in $G^{1}$. If we take $x_{i}\in C(\sigma)$ with $x_{i}\to\sigma$ and define
the empirical measure $\nu_{i}$ and its lift $\tilde{\nu}_{i}$ according to
(6) and (7), then any weak*-limit $\tilde{\mu}$ of $\\{\tilde{\nu}_{i}\\}$
must satisfy
$\operatorname{supp}\tilde{\mu}={\mathcal{L}}^{c}\cup{\mathcal{L}}^{u}.$
This observation leads to the following lemma, which is an improved version of
Lemmas 3.3 and 3.4.
###### Lemma 3.6.
Let $\\{x_{i}\\}\subset C(\sigma)$; then the conclusions of Lemma 3.3 and Lemma
3.4 remain true with ${\mathcal{U}}^{s}_{\alpha}$ replaced by a neighborhood
${\mathcal{U}}^{c}_{\alpha}$ of ${\mathcal{L}}^{c}$ in $G^{1}$. The same can be
said if we take $\\{p_{i}\\}$ to be periodic points with $p_{i}\to\sigma$ but
not necessarily in $C(\sigma)$.
The proof remains unchanged and is thus omitted.
Now let us describe the hyperbolicity of the periodic orbits close to
$\sigma$. Recall that for a regular point $x$, $N_{x}\subset T_{x}M$ is the
orthogonal complement of the flow direction $X(x)$. Since $X$ is a star vector
field, every periodic orbit is hyperbolic. It is proven in [28, Theorem 3.7]
that if $p_{n}$ is a sequence of periodic points near $C(\sigma)$ that
approaches $\sigma$, then the hyperbolic splitting $E^{s}\oplus E^{cu}$ (with
$E^{cu}=\langle X\rangle\oplus E^{u}$) on ${\rm Orb}(p_{n})$ extends to a dominated
splitting on $\sigma$, which coincides with the splitting
$E^{ss}_{\sigma}\oplus E_{\sigma}^{cu}$ where
$E_{\sigma}^{cu}=E_{\sigma}^{c}\oplus E_{\sigma}^{u}$. Since we assume that
$E^{ss}$ is perpendicular to $E^{cu}$, and the flow direction is tangent to
the cone $C_{L\alpha}(E^{c})$ as the orbit approaches the singularity
$\sigma$, it follows that for $n$ large enough, the local stable manifold
$W^{s}(p^{\prime}_{n})$ (which has dimension $s-1$) for $p^{\prime}_{n}\in{\rm
Orb}(p_{n})\cap D^{cs}_{\alpha}(\sigma)$ is tangent to an $E^{ss}$ cone.
###### Remark 3.7.
It is tempting to argue that for $n$ large enough, $W^{u}(p^{\prime}_{n})$
must intersect transversally with $W^{cs}(\sigma)$. However, this is not
necessarily the case: as $p_{n}$ gets closer to the singularity $\sigma$, the
size of the invariant manifolds of $p_{n}$ will shrink at rates proportional to
the flow speed, as observed by Liao [20]. As a result, even if we have a
sequence of periodic points $p_{n}\to p\in W^{cs}(\sigma)$, there is still no
guarantee that the unstable manifold of $p_{n}$ will intersect with
$W^{cs}(\sigma)$. To solve this issue, we consider the hyperbolic times of
$p_{n}$ as defined in Definition 5. But before that, let us first estimate the
hyperbolicity of the orbit segment inside $D^{cs}_{\alpha}$ and
$D^{u}_{\alpha}$.
###### Lemma 3.8.
Let $\sigma$ be a Lorenz-like singularity, then for $r$ small enough, for
every orbit segment $\phi_{[0,T]}(x)\subset B_{r}(\sigma)\cap C(\sigma)$, the
scaled linear Poincaré flow $\psi^{*}_{t}|_{E^{ss}_{x}}$ is uniformly
contracting, where $E^{ss}_{x}\subset N_{x}$ is the stable subspace for the
scaled linear Poincaré flow. The same holds true for every periodic orbit
segment $\phi_{[0,T]}(p)\subset B_{r}(\sigma)$ but not necessarily in
$C(\sigma)$.
###### Proof.
We only need to estimate
$\psi^{*}_{t}(v)=\frac{\|X(x)\|}{\|X(\phi_{t}(x))\|}\psi_{t}|_{E^{ss}_{x}}(v),$
for $v\in E^{ss}_{x}$.
We take $\varepsilon>0$ small enough such that
$\lambda_{s-1}+2\varepsilon<\lambda_{s}-\varepsilon<0$ (recall that
$\lambda_{s}<0$ is the largest negative exponent of $\sigma$). If we take
$r>0$ small enough, then inside $B_{r}(\sigma)$ we have
$\|\psi_{t}|_{E^{ss}_{x}}(v)\|\leq e^{(\lambda_{s-1}+\varepsilon)t}\|v\|.$
On the other hand, for $\frac{\|X(x)\|}{\|X(\phi_{t}(x))\|}$ we have (in the
worst case scenario, where the flow direction is tangent to the $E^{c}$ cone):
$\|X(\phi_{t}(x))\|\geq e^{(\lambda_{s}-\varepsilon)t}\|X(x)\|.$
Indeed, the flow speed is expanding while the direction is tangent to the
$E^{u}$ cone. While the orbit is neither in $D^{cs}_{\alpha}$ nor in
$D^{u}_{\alpha}$, we lose all estimates; however, the length of such an orbit
segment is uniformly bounded by Lemma 3.2, and can be safely ignored.
As a result, we get
$\|\psi^{*}_{t}(v)\|\leq
e^{(\lambda_{s-1}+\varepsilon-\lambda_{s}+\varepsilon)t}\|v\|\leq
e^{-\varepsilon t}\|v\|,$
and conclude the proof of the lemma. ∎
On the unstable subspace $E^{u}_{x}\subset N_{x}$, the situation is different:
when the orbit of $x$ moves away from $\sigma$, it can only do so along
$D^{u}_{\alpha}(\sigma)$. As a result, one loses the hyperbolicity along the
$E^{u}_{x}$ direction. On the other hand, when the orbit approaches $\sigma$,
the orbit segment will be ‘good’ (in the sense that it is quasi-hyperbolic
according to Definition 6) as long as the flow direction is tangent to the
$E^{c}$ cone. This is summarized in the next lemma:
###### Lemma 3.9.
Let $\sigma$ be a Lorenz-like singularity, then there exists
$\lambda\in(0,1),T_{0}>0$, $\alpha_{1}>0$ and $r>0$, such that if
$\alpha<\alpha_{1}$ and $x$ is a periodic point such that the orbit segment
$\phi_{[0,T]}(x)$ is contained in $B_{r}(\sigma)\cap{\rm
Cl}(D^{cs}_{\alpha}(\sigma))$, then the orbit segment is
$(\lambda,T_{0})$-backward contracting. If the orbit segment is in ${\rm
Cl}(D^{u}_{\alpha}(\sigma))$, then it does not have any sub-segment that is
backward contracting.
###### Proof.
Case 1. $\phi_{[0,T]}(x)\subset{\rm Cl}(D^{cs}_{\alpha}(\sigma))$.
To simplify notation we write $y=\phi_{T}(x)$ for the endpoint of the orbit
segment (note that it is the starting point of the same orbit segment under
$-X$). Take any $t\in[0,T]$; we will estimate
$\psi^{*}_{-t}(v)=\frac{\|X(y)\|}{\|X(\phi_{-t}(y))\|}\psi_{-t}(v),$
for $v\in E^{u}_{y}$.
Recall that $\lambda_{s+1}$ is the smallest positive exponent of $\sigma$.
Like in the previous lemma, we take $\varepsilon>0$ small such that
$\lambda_{s}+\varepsilon<0$, $\lambda_{s+1}-\varepsilon>0$, therefore
$-\lambda_{s+1}+\lambda_{s}+2\varepsilon<0$.
Then we can take $r>0$ and $\alpha_{1}>0$ small enough such that all the
lemmas in Section 3.1 hold; furthermore,
$\|\psi_{-t}(v)\|\leq e^{-(\lambda_{s+1}-\varepsilon)t}\|v\|,$
and
$\|X(\phi_{-t}(y))\|\geq e^{-(\lambda_{s}+\varepsilon)t}\|X(y)\|,$
since the orbit segment is in ${\rm Cl}(D^{cs}_{\alpha}(\sigma))$. This gives
$\|\psi^{*}_{-t}(v)\|\leq
e^{(-\lambda_{s+1}+\varepsilon+\lambda_{s}+\varepsilon)t}\|v\|=e^{(-\lambda_{s+1}+\lambda_{s}+2\varepsilon)t}\|v\|,$
which shows that the orbit segment is backward contracting.
Case 2. $\phi_{[0,T]}(x)\subset{\rm Cl}(D^{u}_{\alpha}(\sigma))$.
We take any orbit segment $\phi_{[0,T]}(x)$ in ${\rm
Cl}(D^{u}_{\alpha}(\sigma))$ and take any $y\in\phi_{[0,T]}(x)$. By Lemma 3.1
the flow direction is almost parallel to $E^{u}$ if we take $\alpha_{1}$ small
enough. As a result, we can take $v\in E^{u}_{y}$ such that $v$ is almost
parallel to $E^{c}(\sigma)$. For such $v$ we have
$\|\psi_{-t}(v)\|\geq e^{-(\lambda_{s}+\varepsilon)t}\|v\|,$
and the flow direction satisfies
$\|X(\phi_{-t}(y))\|\leq e^{(-\lambda_{s+1}+\varepsilon)t}\|X(y)\|.$
This shows that
$\|\psi^{*}_{-t}(v)\|\geq
e^{(-\lambda_{s}-\varepsilon+\lambda_{s+1}-\varepsilon)t}\|v\|=e^{(-\lambda_{s}+\lambda_{s+1}-2\varepsilon)t}\|v\|.$
For $\varepsilon$ small enough, $-\lambda_{s}+\lambda_{s+1}-2\varepsilon>0$.
As a result, $\psi^{*}_{-t}$ will never be contracting as long as the orbit
segment is contained in ${\rm Cl}(D^{u}_{\alpha}(\sigma))$. Therefore
$\phi_{[0,T]}(x)$ does not contain any sub-segment that is backward
contracting. ∎
###### Remark 3.10.
The previous two lemmas have similar formulations for reverse Lorenz-like
singularities.
Recall the definition of $t^{\pm}(x)$ from the previous section. Next, we
introduce the main lemmas of this section, which will enable us to resolve the
issue mentioned in Remark 3.7 and create transverse intersections between the
invariant manifolds of periodic orbits and those of points in $C$.
###### Lemma 3.11.
Let $\sigma$ be a Lorenz-like singularity. Then for $r>0$ small enough, for
every $\lambda\in(0,1),T_{0}>0$ there exists $\alpha_{2}>0$ such that for all
$\alpha<\alpha_{2}$, if $y\in B_{r}(\sigma)\cap{\rm
Cl}(D_{\alpha}^{cs}(\sigma))$ is a $(\lambda,T_{0})$-backward hyperbolic time
on its orbit, then $W^{u}(y)\pitchfork W^{cs}(\sigma)\neq\emptyset$.
###### Proof.
Since $y$ is a $(\lambda,T_{0})$-backward hyperbolic time, according to the
classic work of Liao [20], there is $\delta>0$ such that $W^{u}(y)$ has size
$\delta\|X(y)\|$, tangent to $C_{\beta}(E^{u})$ for some $\beta>0$ (in fact,
tangent to $C_{\beta}(E^{uu}_{y})$ where $E^{uu}_{y}$ is the unstable subspace
in $N_{y}$; however, we may assume that $E^{uu}_{y}$ and $E^{u}$ are almost
parallel as long as the flow orbit remains in the cone ${\rm
Cl}(D^{cs}_{\alpha}(\sigma))$). Let $\alpha_{2}$ be the constant $\alpha_{0}$
given by Lemma 3.5 for such $\beta$ and $\delta$; we then see that
$W^{u}(y)\pitchfork W^{cs}(\sigma)\neq\emptyset$.
∎
###### Lemma 3.12 (Backward hyperbolic times near $\sigma$).
Let $\sigma$ be a Lorenz-like singularity, and $\\{p_{n}\\}$ be a sequence of
periodic points with $p_{n}\to\sigma$. For $\lambda\in(0,1),T_{0}>0$ and for
$\alpha\in(0,\min\\{\alpha_{1},\alpha_{2}\\})$, assume that the set
$H_{n}=\\{t\in(-t^{-}_{n},t^{+}_{n}):\phi_{t}(p_{n})\mbox{ is a
}(\lambda,T_{0})\mbox{-backward hyperbolic time}\\}$
has positive density: there exists $a>0$ such that for every $n$,
$\nu_{n}(\\{\phi_{t}(p_{n}):t\in H_{n}\\})>a>0,$
where $t^{\pm}_{n}$ and $\nu_{n}$ are taken according to (6). Then there
exists $N>0$ such that
$W^{u}({\rm Orb}(p_{n}))\pitchfork W^{cs}(\sigma)\neq\emptyset,\mbox{ for all
}n>N.$
###### Proof.
Figure 2. Transverse intersection between $W^{u}({\rm Orb}(p_{n}))$ and
$W^{cs}(\sigma)$
Let $\alpha<\min\\{\alpha_{1},\alpha_{2}\\}$ where $\alpha_{1}$ is given by
Lemma 3.9. We claim that $\\{\phi_{t}(p_{n}):t\in H_{n}\\}\cap
D^{cs}_{\alpha}(\sigma)\neq\emptyset$.
To this end, we parse the orbit segment $\phi_{(-t^{-}_{n},t^{+}_{n})}(p_{n})$
into three consecutive parts as in the proof of Lemma 3.4:
$(-t^{-}_{n},t^{+}_{n})=(-t^{-}_{n},-t^{s}_{n})\cup(-t^{s}_{n},t^{u}_{n})\cup(t^{u}_{n},t^{+}_{n}),$
such that:
* •
$-t^{s}_{n}$ is the first time in $(-t^{-}_{n},t^{+}_{n})$ such that
$\phi_{-t^{s}_{n}}(p_{n})\notin D^{cs}_{\alpha}(\sigma)$;
* •
$t^{u}_{n}$ is the first time in $(-t^{-}_{n},t^{+}_{n})$ such that
$\phi_{t^{u}_{n}}(p_{n})\in D^{u}_{\alpha}(\sigma)$.
In other words, the orbit segment in $(-t^{s}_{n},t^{u}_{n})$ is ‘making the
turn’. Then according to Lemma 3.2, $t^{u}_{n}+t^{s}_{n}$ is uniformly bounded
by $T_{\alpha}$. Therefore, we can take $n$ large enough such that
$\nu_{n}(\\{\phi_{t}(p_{n}):t\in(-t^{s}_{n},t^{u}_{n})\\})<\frac{a}{2}.$
On the other hand, Lemma 3.9 states that for every
$t\in(t^{u}_{n},t^{+}_{n}),$ $\phi_{t}(p_{n})$ cannot be a backward hyperbolic
time, since any sub-segment contained in $\phi_{(t^{u}_{n},t)}(p_{n})$ cannot
be backward contracting. As a result, we have
$H_{n}\cap(t^{u}_{n},t^{+}_{n})=\emptyset$. It then follows that
$\nu_{n}(\\{\phi_{t}(p_{n}):t\in
H_{n}\cap(-t^{-}_{n},-t^{s}_{n})\\})>\frac{a}{2}.$
In other words, there is a backward hyperbolic time $y_{n}=\phi_{t}(p_{n})$
contained in $D^{cs}_{\alpha}(\sigma)$.
It follows from Lemma 3.11 that $W^{u}(y_{n})\pitchfork
W^{cs}(\sigma)\neq\emptyset$. See Figure 2. We conclude the proof of the
lemma. ∎
###### Lemma 3.13 (Forward hyperbolic times near $\sigma$).
Let $\sigma$ be a Lorenz-like singularity. For $\beta>0$ small enough,
$\lambda\in(0,1),T_{0}>0$, there exists $\alpha_{3}>0$ with the following
property:
For every $\alpha<\alpha_{3}$, let $D$ be a $(\dim E^{u}+1)$-dimensional disk
that contains $\sigma$ and is tangent to $C_{\beta}(E^{cu})$, and let $z$ be a
$(\lambda,T_{0})$-forward hyperbolic time that is contained in
$D_{\alpha}^{c}(\sigma)\cap B_{r}(\sigma)$ for some $r>0$ small enough. Then
we have
$W^{s}(z)\pitchfork D\neq\emptyset.$
###### Proof.
Similar to Lemma 3.11, there is $\delta>0$ such that $W^{s}(z)$ has size
$\delta\|X(z)\|$, tangent to $C_{\beta^{\prime}}(E^{ss})$ for some
$\beta^{\prime}>0$ small enough.
Following the idea of Lemma 3.5, we have
$\operatorname{diam}W^{s}(z)>C_{1}\delta d(z,\sigma)>C_{1}C^{\prime}\delta z^{c}.$
On the other hand, since $D$ is tangent to $C_{\beta}(E^{cu})$ and satisfies
$\dim D+\dim W^{s}(z)=\dim E^{u}+1+\dim E^{ss}=\dim M,$
there exists $C^{\prime\prime\prime}>0$ such that whenever
$\operatorname{diam}W^{s}(z)>C^{\prime\prime\prime}z^{ss}$ we must have
$W^{s}(z)\pitchfork D\neq\emptyset$. See Figure 3. Since $z^{ss}<\alpha z^{c}$
inside the center cone $D_{\alpha}^{c}(\sigma)$, the choice of
$\alpha_{3}=C_{1}C^{\prime}\delta/C^{\prime\prime\prime}$ satisfies the
requirement of the lemma.
Figure 3. Transverse intersection between $W^{s}(z)$ and $D$
∎
Henceforth, we assume that $r>0$ is small enough such that all the previous
lemmas hold. Let $\lambda\in(0,1),T_{0}>0$ be given by Lemma 3.9, and
$\alpha<\min\\{\alpha_{1},\alpha_{2},\alpha_{3}\\}$.
## 4\. Proof of Theorem A
This section contains the proof of Theorem A. The proof is three-fold:
1. (1)
every chain recurrent class $C$ with positive entropy must contain a periodic
orbit; note that the converse is also true: for $C^{1}$ generic flows, a non-
trivial chain recurrent class containing a periodic point $p$ must coincide
with the homoclinic class of $p$ ([26, Proposition 4.8]; see also [2]),
therefore has positive topological entropy;
2. (2)
there exists a neighborhood $U$ of $C$, such that every periodic orbit in $U$
must be contained in $C$;
3. (3)
$C$ is isolated.
Recall that ${\mathcal{R}}$ is the residual set in $\mathscr{X}_{*}^{1}(M)$
described in Section 2.4. Denote by
${\mathcal{R}}_{0}\subset\mathscr{X}^{1}(M)$ the residual set of Kupka-Smale
vector fields. The next lemma takes care of Step (3) above.
###### Lemma 4.1.
For a $C^{1}$ vector field $X\in{\mathcal{R}}\cap{\mathcal{R}}_{0}$, let $C$
be a chain recurrent class with the following property: there exists a
neighborhood $U$ of $C$ such that every periodic orbit in $U$ is contained in
$C$. (Recall that for $C^{1}$ generic diffeomorphisms, every non-trivial
chain recurrent class is approximated by periodic orbits; see [6].) Then $C$ is
isolated.
###### Proof.
The proof is standard. Let $C_{n}$ be a sequence of distinct chain recurrent
classes approaching $C$ in the Hausdorff topology. Without loss of generality
we may assume that $C_{n}\subset U$, and $C_{n}\neq C$ for all $n$. Since $X$
has only finitely many singularities, we may assume that $U$ is small enough
such that all the singularities in $U$ are indeed contained in $C$. It then
follows that $C_{n}\cap{\rm Sing}(X)=\emptyset$ for all $n$ large enough.
Since $X$ is a star vector field, the main result of [13] shows that $C_{n}$
is uniformly hyperbolic. In particular, there exists a periodic orbit ${\rm
Orb}(p_{n})\subset C_{n}\subset U$. By assumption we must have ${\rm
Orb}(p_{n})\subset C$, so $C_{n}=C$, a contradiction.
∎
Next, we turn our attention to Steps (1) and (2).
### 4.1. $C$ contains a periodic orbit
Step (1) of the proof is carried out in the following two propositions, which
are of independent interest.
###### Proposition 4.2.
There exists a residual set $\tilde{\mathcal{R}}\subset\mathscr{X}_{*}^{1}(M)$
such that for every $X\in\tilde{{\mathcal{R}}}$, if $C$ is a chain recurrent
class of $X$ containing singularities of different indices, then $C$ contains a
periodic point and has positive topological entropy.
###### Proof.
Let $X\in{\mathcal{R}}\cap{\mathcal{R}}_{0}$ as before. Following the
discussion in Section 2.4, let $\sigma^{+}\in C$ be a Lorenz-like singularity,
and $\sigma^{-}\in C$ be reverse Lorenz-like. We have
${\rm Ind}(\sigma^{+})-1={\rm Ind}(\sigma^{-})={\rm Ind}_{C},$
where ${\rm Ind}_{C}$ is also the stable index of the periodic orbits
sufficiently close to $C$. We will construct a periodic point $p$ such that
Lemma 3.11 can be applied at $\sigma^{+}$ for $X$, and at $\sigma^{-}$ for
$-X$. This shows that $p\in C$, which also implies that $C=H(p)$ where $H(p)$
is the homoclinic class of $p$, therefore has positive topological entropy, by
[2].
Given a periodic orbit $\gamma$, we denote by $\Pi(\gamma)$ its minimal
period. By [28, Lemma 2.1], which is originally due to Liao, there are
$\tilde{\lambda}\in(0,1)$ and $T>0$ such that for every periodic orbit $\gamma$
of $X$ with period $\Pi(\gamma)$ longer than $T$ and every $x\in\gamma$, we have
$\prod_{i=0}^{[\Pi(\gamma)/T]-1}\|\psi_{T}|_{N^{s}(\phi_{iT}(x))}\|\leq\tilde{\lambda}^{\Pi(\gamma)},$
and a similar estimate holds on $N^{u}$. Here $N^{s}\oplus N^{u}$ is the
hyperbolic splitting on $N_{x}$ for the linear Poincaré flow $\psi_{t}$. By
the Pliss Lemma [27], for every $\lambda\in(\tilde{\lambda},1)$ there are
$(\lambda,T_{0})$-backward hyperbolic times for $N^{u}$ along the orbit of
$\gamma$. Moreover, such points have positive density along the orbit of
$\gamma$.
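For the reader's convenience, we recall one standard (additive) formulation of the Pliss Lemma; the constants below are the usual textbook version, not a quotation of [27]:

```latex
\textbf{Pliss Lemma (standard form).}
Let $A\geq c_{2}>c_{1}$ and set $\theta=\frac{c_{2}-c_{1}}{A-c_{1}}$.
If $a_{1},\ldots,a_{N}$ are real numbers with $a_{i}\leq A$ for all $i$ and
$\sum_{i=1}^{N}a_{i}\geq c_{2}N$, then there exist $\ell\geq\theta N$ indices
$1\leq n_{1}<\cdots<n_{\ell}\leq N$ such that
\[
  \sum_{i=n+1}^{n_{j}}a_{i}\;\geq\;c_{1}\,(n_{j}-n)
  \qquad\text{for all }0\leq n<n_{j},\;1\leq j\leq\ell.
\]
```

Here it would be applied, roughly, with the $a_{i}$ given by logarithms of the norms appearing in the product above (run in backward time); we record only the abstract statement.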
Fix such $\lambda$ and $r>0$ small enough. For each $n>0$, consider the
following property:
(P(n)): there is a hyperbolic periodic orbit $p_{n}$, such that (see Figure
4):
* •
the time that ${\rm Orb}(p_{n})$ spends inside $B_{r}(\sigma^{+})$ is more
than $(1/2-1/n)\Pi(p_{n})$;
* •
the time that ${\rm Orb}(p_{n})$ spends inside $B_{r}(\sigma^{-})$ is more
than $(1/2-1/n)\Pi(p_{n})$;
* •
the time that ${\rm Orb}(p_{n})$ spends outside $B_{r}(\sigma^{+})\cup
B_{r}(\sigma^{-})$ is less than $\frac{1}{n}\Pi(p_{n})$.
Clearly this is an open property in $\mathscr{X}^{1}(M)$. On the other hand,
note that $W^{cs}(\sigma^{+})$ and $W^{cu}(\sigma^{-})$ must have transverse
intersection due to the Kupka-Smale theorem. Since $\sigma^{\pm}$ are in the
same chain recurrent class, using the connecting lemma we can create an
intersection between $W^{u}(\sigma^{+})$ and $W^{s}(\sigma^{-})$, which in
turn creates a loop between $\sigma^{+}$ and $\sigma^{-}$. Then standard
perturbation techniques (see [28] for instance) allow one to create
periodic orbits that satisfy the conditions above. Therefore the following
property is generic:
(P’): there exist periodic orbits $\\{p_{n}\\}_{n}$ that approach both
$\sigma^{\pm}$, such that Property P(n) holds for every $n$.
Figure 4. The periodic points $\\{p_{n}\\}$ satisfying Property (P’)
As a result, passing to the generic subset ${\mathcal{R}}_{1}$ where property
(P’) holds, we may assume that $X$ itself has periodic orbits $\\{p_{n}\\}$
satisfying Property (P’).
Below we will show that there exist $(\lambda,T_{0})$-backward hyperbolic
times contained in ${\rm Orb}(p_{n})\cap B_{r}(\sigma^{+})\cap
D_{\alpha}^{cs}(\sigma^{+})$, for $n$ large enough. This allows us to apply
Lemma 3.11 to show that $W^{u}({\rm Orb}(p_{n}))$ has transverse intersection
with $W^{cs}(\sigma^{+})$. The same argument applied to the flow $-X$ shows
the transverse intersection between $W^{s}({\rm Orb}(p_{n}))$ and
$W^{cu}(\sigma^{-})$, thus $p_{n}$ is contained in $C$ for $n$ large enough.
For convenience, we assume that $p_{n}\to\sigma^{+}$ such that $p_{n}$ lies on
the boundary of $D^{cs}_{\alpha}(\sigma^{+})$, where
$\alpha\in(0,\min\\{\alpha_{1},\alpha_{2}\\})$ as specified at the end of the
previous section. Denote by
$p_{n}^{\sigma^{+}}=\phi_{t_{1}^{n}}(p_{n})$
where $t_{1}^{n}<0$ is the largest real number such that
$p_{n}^{\sigma^{+}}\in\partial B_{r}(\sigma^{+})$, and
$p_{n}^{\sigma^{-}}=\phi_{t_{2}^{n}}(p_{n})$
where $t^{n}_{2}=\sup\\{t<t_{1}^{n}:\phi_{t}(p_{n})\in B_{r}(\sigma^{-})\\}.$
See Figure 4. Since the orbit segment
$\phi_{[t_{2}^{n},t_{1}^{n}]}(p_{n})$ is contained in
$M\setminus(B_{r}(\sigma^{+})\cup B_{r}(\sigma^{-}))$, Property (P’) dictates
that $t^{n}_{1}-t^{n}_{2}<\frac{1}{n}\Pi(p_{n})$.
On the other hand, the Pliss Lemma [27] shows that the set of
$(\lambda,T_{0})$-backward hyperbolic times on the orbit of $p_{n}$ has
density $a>0$. Thus for $n$ large enough, there must be hyperbolic times on
the subsegment ${\rm Orb}(p_{n})\cap B_{r}(\sigma^{\pm})$. We consider the
following cases:
1. (1)
If there exists a backward hyperbolic time $p_{n}^{\prime}$ contained in the
orbit segment ${\rm Orb}(p_{n})\cap B_{r}(\sigma^{+})$, then by Lemma 3.9
$p_{n}^{\prime}\notin D^{u}_{\alpha}(\sigma^{+})$; on the other hand, the
transition time from $D^{cs}_{\alpha}(\sigma^{+})$ to
$D^{u}_{\alpha}(\sigma^{+})$ is bounded (Lemma 3.2); this shows that
$p_{n}^{\prime}\in D_{\alpha}^{cs}(\sigma^{+})$;
2. (2)
if there exists a backward hyperbolic time $p^{\prime}_{n}\in{\rm
Orb}(p_{n})\cap B_{r}(\sigma^{-})$, then Lemma 3.8 (applied to $-X$) shows
that the orbit segment from $p^{\prime}_{n}$ to $p^{\sigma^{-}}_{n}$ is
backward contracted by the scaled linear Poincaré flow. By Lemma 2.6,
$p_{n}^{\sigma^{-}}$ itself must be a backward hyperbolic time.
Next, by Lemma 3.9, the orbit segment from $p^{\sigma^{+}}_{n}$ to $p_{n}$ is
backward contracting, and the orbit segment from $p_{n}^{\sigma^{-}}$ to
$p^{\sigma^{+}}_{n}$ has very small length compared to the former. As a
result, $p_{n}$ is a backward hyperbolic time.
In both cases, it follows that there exists a backward hyperbolic time
inside ${\rm Cl}(D_{\alpha}^{cs}(\sigma^{+}))$. As a result of Lemma 3.11,
$W^{u}({\rm Orb}(p_{n}))$ must intersect transversally with
$W^{cs}(\sigma^{+})$.
The same argument applied to the flow $-X$ shows that the stable manifold of
${\rm Orb}(p_{n})$ intersects transversally with the unstable manifold of
$\sigma^{-}$. We conclude that ${\rm Orb}(p_{n})$ is contained in the chain
recurrent class of $\sigma^{\pm}$, finishing the proof of this proposition.
∎
###### Proposition 4.3.
There exists a residual set $\tilde{\mathcal{R}}\subset\mathscr{X}_{*}^{1}(M)$
such that for every $X\in\tilde{{\mathcal{R}}}$, if $C$ is a chain recurrent
class of $X$ with singularity and positive topological entropy then $C$
contains a periodic point $p$.
###### Proof.
In view of the discussion in Section 2.4, we consider the following two cases
for $X\in{\mathcal{R}}\cap{\mathcal{R}}_{0}$:
Case 1. All singularities in $C$ have the same index. Then they must be of the
same type: either they are all Lorenz-like or all reverse Lorenz-like. By [28,
Theorem 3.7], $C$ is sectional hyperbolic for $X$ or $-X$. We may assume that
$C$ is sectional hyperbolic for $X$. When $h_{top}(\phi_{t}|_{C})>0$, [26,
Theorem B] guarantees that there are periodic orbits $p$ contained in $C$.
Case 2. $C$ contains singularities with different indices. Then the
proposition follows from Proposition 4.2. ∎
This finishes the proof of Step (1).
### 4.2. All periodic orbits close to $C$ must be contained in $C$
Now we turn our attention to Step (2). The main result of this subsection is
Lemma 4.6 which shows that whenever a periodic orbit ${\rm Orb}(p)$ gets
sufficiently close to a chain recurrent class containing a periodic orbit, we
must have $W^{u}(p)\pitchfork W^{s}(x)$ for some $x\in C$.
The proof of this lemma requires a careful analysis on how the backward
hyperbolic times along the orbit of $p$ approach points in $C$. There are
essentially three cases:
1. (1)
backward hyperbolic times near a regular point; this is the easy case;
2. (2)
backward hyperbolic times near a Lorenz-like singularity $\sigma$; this has
been taken care of by Lemma 3.12;
3. (3)
backward hyperbolic times near a reverse Lorenz-like singularity $\sigma$; in
this case we consider $-X$, under which we have a sequence of forward
hyperbolic times near a Lorenz-like singularity $\sigma$; this is taken care
of by the following lemma, as well as Lemma 3.13.
###### Lemma 4.4.
There exists a residual set $\tilde{\mathcal{R}}\subset\mathscr{X}_{*}^{1}(M)$
such that the following holds for every $X\in\tilde{{\mathcal{R}}}$. Let $C$
be a chain recurrent class of $X$ that contains some Lorenz-like singularity
$\sigma$ and a hyperbolic periodic point $q$. Let $\\{p_{n}\\}$ be a sequence of periodic
points with $\lim_{H}{\rm Orb}(p_{n})\subset C$. Furthermore, assume that
there exists $x_{n}\in{\rm Orb}(p_{n})$ that are $(\lambda,T_{0})$-forward
hyperbolic times for $\lambda\in(0,1),T_{0}>0$ independent of $n$, with
$x_{n}\to\sigma$. Then for $n$ large enough,
$W^{s}({\rm Orb}(p_{n}))\pitchfork W^{u}({\rm Orb}(q))\neq\emptyset.$
###### Proof.
Recall that $E^{ss}_{\sigma}$ is the subspace spanned by the (generalized)
eigenspaces corresponding to the eigenvalues
$\lambda_{1},\ldots,\lambda_{s-1}$; furthermore we have $E^{ss}_{\sigma}\oplus
E_{\sigma}^{c}=E_{\sigma}^{cs}$.
Consider the strong stable manifold of $\sigma$, $W^{ss}(\sigma)$ that is
tangent to $E^{ss}_{\sigma}$ at $\sigma$. It exists because of the dominated
splitting $T_{\sigma}M=E^{ss}_{\sigma}\oplus E_{\sigma}^{cu}$ with
$E_{\sigma}^{cu}=E^{c}_{\sigma}\oplus E^{u}_{\sigma}$. Moreover,
$W^{ss}(\sigma)$ locally divides the stable manifold $W^{s}(\sigma)$ into two
branches, which we denote by $W^{s,\pm}(\sigma)$.
We may assume that ${p_{n}}$ themselves are forward hyperbolic times with
$p_{n}\to\sigma$. Fixing some $r>0$ small enough, we consider $t_{n}<0$ the
largest real number such that $\phi_{t_{n}}(p_{n})\in\partial B_{r}(\sigma)$.
In particular, $z_{n}:=\phi_{t_{n}}(p_{n})\to z\in W^{s,+}(\sigma)\cap
C\cap\partial B_{r}(\sigma)$. Since the orbit of $p_{n}$ can only approach
$\sigma$ along the center direction $E^{c}_{\sigma}$, we must have
$z_{n}\in{\rm Cl}(D_{\beta}^{c}(\sigma))$, for some $\beta=\beta(r)>0$. By
Lemma 3.8, the orbit segment $\phi_{[t_{n},0]}(p_{n})$ is forward contracting.
It follows from Lemma 2.6 that $z_{n}$ itself is a $(\lambda,T_{0})$-forward
hyperbolic time, and possesses stable manifold with size proportional to the
flow speed at $z_{n}$.
On the other hand, we have $W^{s,+}(\sigma)\cap W^{u}({\rm
Orb}(q))\neq\emptyset$, courtesy of the connecting lemma [2] (see also [26,
Proposition 4.9]). Let $a$ be a point of intersection between
$W^{s,+}(\sigma)$ and $W^{u}({\rm Orb}(q))$, and $D$ a disk in $W^{u}({\rm
Orb}(q))$ with $a\in D\subset W^{u}({\rm Orb}(q))$. By the inclination lemma,
$\phi_{t}(D)$ approximates $W^{u}(\sigma)$ as $t\to\infty$.
Note that $\tilde{D}:=\\{\phi_{t}(D):t>0\\}$ is tangent to the cone
$C_{\beta}(E^{cu})$ and has dimension $\dim E^{u}_{\sigma}+1$. See Figure 3.
Shrinking $r$ such that $\beta(r)<\alpha_{3}$ and applying Lemma 3.13 (note
that $\tilde{D}$ defined in this way does not contain $\sigma$; however, it
can be extended to a disk $\overline{D}$ that contains $\sigma$ in its
interior, and $\overline{D}$ and $\tilde{D}$ coincide within the upper half of
the cone $D^{c}_{\alpha}(\sigma)$, which contains the point of transverse
intersection with $W^{s}(z_{n})$), we obtain
$W^{s}(z_{n})\pitchfork\\{\phi_{t}(D):t>0\\}\neq\emptyset$ for all $n$ large
enough, with which we conclude the proof.
∎
###### Remark 4.5.
The proof of the lemma is reminiscent of [26, Corollary E] with two major
differences. Firstly, in [26], $C$ is Lyapunov stable. This guarantees not
only the existence of a periodic orbit $q\in C$, but also that the disk $D$ is
contained in the class $C$, which immediately allows one to conclude that $p_{n}\in C$
for $n$ large enough. Without Lyapunov stability, we only obtain the chain
attainability from $C$ to $p_{n}$. Secondly, $C$ is sectional hyperbolic in
[26]. Therefore nearby periodic orbits have dominated splitting $E^{s}\oplus
F^{cu}$ and, consequently, have stable manifolds with uniform size.
Now we are ready to state the main lemma of this subsection, which allows us
to establish the chain attainability from nearby periodic orbits to the class
$C$. Note that the lemma does not impose any condition on the type of
singularities contained in $C$, and therefore can be applied to $-X$.
###### Lemma 4.6.
There exists a residual set $\tilde{\mathcal{R}}\subset\mathscr{X}_{*}^{1}(M)$
such that the following holds for every $X\in\tilde{{\mathcal{R}}}$. Let $C$
be a chain recurrent class of $X$ with singularities and a hyperbolic periodic point. Then there
exists a neighborhood $U$ of $C$ such that every periodic orbit ${\rm
Orb}(p)\subset U$ satisfies
$W^{u}(p)\pitchfork W^{s}(x)\neq\emptyset$
for some $x\in C$.
###### Proof.
We prove by contradiction. Assume that there is a sequence of periodic orbits
${\rm Orb}(p_{n})\subset U_{n}$ with $\cap_{n}U_{n}=C$, such that
$W^{u}(p_{n})\pitchfork W^{s}(x)=\emptyset$ for every $x\in C$. It is easy to see
that the period of $p_{n}$ must tend to infinity; otherwise, we could take a
subsequence of $\\{p_{n}\\}$ that converges to a periodic orbit $p_{0}\in C$;
by the star assumption, $p_{0}$ is hyperbolic, and must be homoclinically
related with $p_{n}$ for $n$ large enough, a contradiction.
By [28, Theorem 5.7], the index of $p_{n}$ coincides with ${\rm Ind}_{C}$ for
all $n$ large enough. By [28, Lemma 2.1], there exists
$\lambda\in(0,1),T_{0}>0$ such that ${\rm Orb}(p_{n})$ contains
$(\lambda,T_{0})$-backward hyperbolic times $x_{n}$. Moreover, the collection
of such points $\Lambda_{n}=\\{x_{n}\in{\rm Orb}(p_{n}):x_{n}\mbox{ is a
backward hyperbolic time}\\}$ has positive density (independent of $n$) in
${\rm Orb}(p_{n})$ with respect to the empirical measure on ${\rm
Orb}(p_{n})$, thanks to the Pliss lemma [27].
Taking subsequence if necessary, we write $\tilde{C}\subset C$ for the
Hausdorff limit of ${\rm Orb}(p_{n})$, and
$\Lambda=\limsup_{n\to\infty}\Lambda_{n}\subset\tilde{C}$ (note that it is not
invariant). We may also assume that the empirical measure $\mu_{n}$ on ${\rm
Orb}(p_{n})$ converges to an invariant measure $\mu$ supported on $\tilde{C}$.
The backward hyperbolic times having uniform positive density implies that
$\mu(\Lambda)>0$.
We consider the following two cases:
Case 1. There is an ergodic component of $\mu$, denoted by $\mu_{1}$, which
satisfies $\mu_{1}(\Lambda)>0$ and $\mu_{1}({\rm Sing})=0$.
Then $\mu_{1}$ must be a non-trivial hyperbolic measure, thanks to [28,
Theorem 5.6]. The same argument used in Lemma 2.8 (f) shows that for $n$ large
enough, $W^{u}({\rm Orb}(p_{n}))$ has transverse intersection with the stable
manifold of some point in $\operatorname{supp}\mu_{1}$. Roughly speaking,
every regular point in $\Lambda\cap\,\operatorname{supp}\mu_{1}$ has a stable
manifold (with uniform size if we choose a subset of $\Lambda$ away from all
singularities; note that such a subset still has positive $\mu_{1}$ measure),
which must intersect transversally with the unstable manifolds of the backward
hyperbolic times $x_{n}\in\Lambda_{n}$ (recall that such points have unstable
manifolds with size proportional to the flow speed; if we take them uniformly
away from all singularities, then their unstable manifolds also have uniform
size), a contradiction.
Case 2. Every ergodic component $\mu_{1}$ of $\mu$ with $\mu_{1}(\Lambda)>0$
is supported on some singularities $\\{\sigma\\}$.
As per our discussion in Section 2.4, singularities in $C$ must be either
Lorenz-like or reverse Lorenz-like. We further consider two subcases:
Subcase 1. One of those $\sigma$ is reverse Lorenz-like (note that this case
does not happen when $C$ is sectional hyperbolic).
In this case we consider the vector field $-X$. Note that backward hyperbolic
times for $X$ are forward hyperbolic times for $-X$. Therefore we have a
sequence of forward hyperbolic times on the orbit of $p_{n}$, approaching a
Lorenz-like singularity $\sigma$. We are in a position to apply Lemma 4.4,
which shows that
$W^{s,-X}({\rm Orb}(p_{n}))\pitchfork W^{u,-X}(q)\neq\emptyset$
for a hyperbolic periodic point $q\in C$. Reverting to $X$, we see that
$W^{u}({\rm Orb}(p_{n}))\pitchfork W^{s}(q)\neq\emptyset$, which
contradicts our assumption on $C$.
Subcase 2. One of those $\sigma$ is Lorenz-like.
In this case we have backward hyperbolic times in a neighborhood of a Lorenz-
like singularity with positive density. By Lemma 3.12, $W^{u}({\rm
Orb}(p_{n}))$ intersects transversely with $W^{cs}(\sigma^{+})$, which is a
contradiction.
The proof is now complete. ∎
In the proof of Lemma 4.6, the assumption that $C$ contains a periodic orbit
is only used in Case 2, Subcase 1. Since this case is impossible when $C$ is
sectional hyperbolic, we obtain the following proposition, which is of
independent interest.
###### Proposition 4.7.
Let $C$ be a sectional hyperbolic chain recurrent class for $C^{1}$ generic
star flow $X$. Assume that $C$ contains a singularity $\sigma$. Then there
exists a neighborhood $U$ of $C$, such that $C$ is chain attainable from all
periodic orbits in $U$.
### 4.3. Proof of Theorem A
Now we are ready to prove Theorem A. By Proposition 4.3, every singular chain
recurrent class with positive entropy contains a periodic point. We then apply
Lemma 4.6 to $C$ for $X$ and $-X$. This shows that there exists a neighborhood
$U$ of $C$, such that all periodic orbits in $U$ are homoclinically related
with $C$, therefore, are contained in $C$. Finally we use Lemma 4.1 to
conclude that $C$ is isolated.
The only remaining case is chain recurrent classes with zero entropy. By
Proposition 4.2, singularities in such classes must have the same stable
index. Then [28, Theorem 3.7] shows that $C$ is sectional hyperbolic for $X$
or $-X$. $C$ cannot contain any periodic orbit; otherwise it must coincide
with the homoclinic class of said periodic orbit, resulting in positive
topological entropy.
To conclude the proof, we need to show that the only ergodic invariant
measures supported on $C$ are point masses of singularities.
For this purpose, let $\mu$ be an ergodic invariant measure with
$\operatorname{supp}\mu\subset C$. We prove by contradiction and assume that
$\mu\neq\delta_{\sigma}$ for any $\sigma\in{\rm Sing}(X)$. By [28], $\mu$ is a
non-trivial hyperbolic measure whose support contains regular orbits of $X$.
By Lemma 2.7 and 2.8 (f), there exists a periodic orbit in $C$, a
contradiction.
### 4.4. Proof of the corollary
We conclude this paper with the proof of the Corollary.
###### Proof of Corollary B.
In [20] it is proven that $C^{1}$ generic star flows have only finitely many
periodic sinks. As a result, if $X$ has infinitely many distinct Lyapunov
stable chain recurrent classes $\\{C_{n}\\}_{n=1}^{\infty}$, then we may
assume that the $C_{n}$’s are non-trivial and approach, in the Hausdorff
topology, a chain recurrent class $C$. Note that $C$ cannot be trivial since trivial
chain recurrent classes, i.e., periodic orbits and singularities, are all
hyperbolic and isolated due to the star assumption. Therefore $C$ is non-
trivial and sectional hyperbolic with zero topological entropy due to Theorem
A.
Denote by $\lambda_{C}>0$ the sectional volume expanding rate on $F^{cu}$,
then by the continuity of the dominated splitting (see [4, Appendix B] and
[1]), nearby classes $C_{n}$ must be sectional hyperbolic and have sectional
volume expanding rate $\lambda_{C_{n}}>\lambda_{C}/2$. Since $C_{n}$ are
Lyapunov stable, we are in a position to apply [26, Theorem C] on $C_{n}$, and
see that $h_{top}(X|_{C_{n}})>\lambda_{C}/2$. By the variational principle, we
can take $\mu_{n}$ an ergodic invariant measure supported on $C_{n}$, such
that $h_{\mu_{n}}(X)\geq\lambda_{C}/2$.
On the other hand, [26, Theorem A] shows that $X$ is robustly entropy
expansive in a small neighborhood of $C$; in particular, this together with
[5] shows that the metric entropy must be upper semi-continuous in a small
neighborhood of $C$. Let $\mu$ be any limit point of $\mu_{n}$ in weak-*
topology, then we see that $\operatorname{supp}\mu\subset C$ and
$h_{\mu}(X)\geq\lambda_{C}/2$. It then follows from the variational principle
that $h_{top}(X|_{C})\geq\lambda_{C}/2$, a contradiction.
∎
## References
* [1] V. Araújo and M. J. Pacifico; Three-dimensional flows, volume 53 of _Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]_. Springer, Heidelberg, 2010. With a foreword by Marcelo Viana.
* [2] C. Bonatti and S. Crovisier. Recurrence et généricité. Invent. Math., 158:33–104, 2004.
* [3] C. Bonatti and A. da Luz. Star flows and multisingular hyperbolicity, arXiv:1705.05799
* [4] C. Bonatti, L. Diaz and M. Viana. Dynamics beyond uniform hyperbolicity. Encyclopedia of Mathematical Sciences, 102. Mathematical Physics, III. Springer-Verlag, Berlin, 2005.
* [5] R. Bowen. Entropy expansive maps. Trans. Amer. Math. Soc., 164:323–331, 1972.
* [6] S. Crovisier. Periodic orbits and chain-transitive sets of $C^{1}$-diffeomorphisms. Publ. Math. Inst. Hautes Études Sci. No. 104 (2006), 87–141.
* [7] S. Crovisier, A. da Luz, D. Yang, J. Zhang. On the notions of singular domination and (multi-)singular hyperbolicity. Sci. China Math. 63 (2020), no. 9, 1721–1744.
* [8] A. da Luz. Hyperbolic sets that are not contained in a locally maximal one. Discrete Contin. Dyn. Syst. 37 (2017), no. 9, 4923–4941.
* [9] H. Ding. Disturbance of the homoclinic trajectory and applications. Beijing Daxue Xuebao, no. 1 (1986), 53–63.
* [10] J. Franks. Necessary conditions for stability of diffeomorphisms. Trans. Amer. Math. Soc., 158 (1971), 301–308.
* [11] S. Gan. A generalized shadowing lemma. Discrete Contin. Dyn. Syst., 8:627–632, 2002.
* [12] S. Gan and D. Yang. Morse-Smale systems and horseshoes for three dimensional singular flows. Ann. Sci. Éc. Norm. Supér., 51:39–112, 2018.
* [13] S. Gan and L. Wen. Nonsingular star flows satisfy Axiom A and the no-cycle condition, Invent. Math., 164:279–315, 2006.
* [14] J. Guckenheimer. A strange, strange attractor. The Hopf bifurcation theorem and its applications, pages 368–381. Springer Verlag, 1976.
* [15] J. Guckenheimer and R. F. Williams. Structural stability of Lorenz attractors. Publ. Math. IHES, 50:59–72, 1979.
* [16] S. Hayashi. Diffeomorphisms in ${\mathcal{F}}^{1}(M)$ satisfy Axiom A. Ergodic Theory & Dynam. Systems 12 (1992), 233–253.
* [17] C. Li and L. Wen. $\mathscr{X}^{*}$ plus Axiom A does not imply no-cycle. J. Differential Equations 119 (1995), no. 2, 395–400.
* [18] M. Li, S. Gan and L. Wen. Robustly transitive singular sets via approach of extended linear Poincaré flow. Discrete Contin. Dyn. Syst., 13:239–269, 2005.
* [19] S. T. Liao. Obstruction sets. II. (in Chinese) Beijing Daxue Xuebao 2 (1981), 1–36.
* [20] S. T. Liao. On $(\eta,d)$-contractible orbits of vector fields. Systems Science and Mathematical Sciences., 2:193–227, 1989.
* [21] S. T. Liao. The qualitative theory of differential dynamical systems. Science Press, 1996.
* [22] R. Mañé. Contributions to the stability conjecture. Topology 17 (1978), no. 4, 383–396.
* [23] R. Metzger and C. A. Morales. Sectional-hyperbolic systems. Ergodic Theory & Dynam. Systems 28:1587–1597, 2008.
* [24] C. A. Morales, M. J. Pacifico and E. R. Pujals. Singular hyperbolic systems. Proc. Am. Math. Soc. 127, (1999), 3393–3401.
* [25] C. A. Morales, M. J. Pacifico and E. R. Pujals. Robust transitive singular sets for $3$-flows are partially hyperbolic attractors or repellers. Ann. of Math., 160:375–432, 2004.
* [26] M. J. Pacifico, F. Yang and J. Yang. Entropy theory for sectional hyperbolic flows. Annales de l’Institut Henri Poincaré C, Analyse non linéaire, 2020.
* [27] V. Pliss. A hypothesis due to Smale. Diff. Eq. 8 (1972), 203–214.
* [28] Y. Shi, S. Gan and L. Wen. On the singular hyperbolicity of star flows. J. Mod. Dyn. 8:191–219, 2014.
* [29] S. Zhu, S. Gan and L. Wen. Indices of singularities of robustly transitive sets. Discrete Contin. Dyn. Syst. 21 (2008), 945–957.
# Moderate Deviation Principles for Unbounded Additive Functionals of
Distribution Dependent SDEs
Panpan Ren 2,a), Shen Wang 1,b)
1)Center for Applied Mathematics, Tianjin University, Tianjin 300072, China
2)Mathematics Department, City University of Hong Kong, Hong Kong, China
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
By comparing the original equations with the corresponding stationary ones,
the moderate deviation principle (MDP) is established for unbounded additive
functionals of several different models of distribution dependent SDEs, with
non-degenerate and degenerate noises.
AMS subject Classification: 60H10, 60H15.
Keywords: Moderate deviation principle, exponential equivalence, distribution
dependent SDEs.
## 1 Introduction
To characterise long time behaviours of stochastic systems, various limit
theorems, including the LLN (law of large numbers), CLT (central limit
theorem), and LDP (large deviation principle), have been intensively
investigated in the literature on Markov processes and random sequences; see
for instance [1, 2, 3, 4, 5, 6, 9, 11, 19, 20, 21]. On the other hand, less is
known for limit theorems on nonlinear systems, where a typical model is the
distribution dependent SDE (also called McKean-Vlasov or mean-field SDE),
which arises from characterizations of nonlinear Fokker-Planck equations and
mean-field particle systems; see [10] and references therein. Recently, the
Donsker-Varadhan LDP for path-distribution dependent SDEs was investigated in
[13] for empirical measures of distribution dependent SDEs, which in
particular implies the LDP for bounded continuous additive functionals. In
this paper, we investigate the MDP (moderate deviation principle) for
unbounded additive functionals.
Below, we first briefly recall the notion of LDP and MDP, then introduce the
model studied in the present paper.
Let $E$ be a Polish space, and let $(X_{t})_{t\geq 0}$ be a right continuous
Markov process on $E$ with infinitesimal generator $L$. For a measurable space
$(E,\mathscr{B})$, let $\mathscr{P}(E)$ denote the set of all probability
measures on $E$ with weak topology. Consider the empirical measures of
$(X_{t})_{t\geq 0}$:
$L_{t}:=\frac{1}{t}\int_{0}^{t}\delta_{X_{s}}\text{\rm{d}}s,\,\,\,t>0.$
The following Donsker-Varadhan type long time LDP for $L_{t}$ has been studied
in [7]:
(1.1) $\displaystyle\mathbb{P}_{x}(L_{t}\in M)\approx\exp\\{-t\inf_{\nu\in
M}J(\nu)\\},\quad M\subset\mathscr{P}(E),$
where
$J(\nu)=\sup_{\inf_{x}U(x)>0,U\in\mathcal{D}(L)}\int\frac{-LU}{U}\text{\rm{d}}\nu$,
and $\mathbb{P}_{x}$ denotes the probability of the Markov process starting
from $x$.
In general, when the Markov process $(X_{t})_{t\geq 0}$ is ergodic, in order
to describe the convergence of the empirical distribution $L_{t}$ to the
unique invariant probability measure $\bar{\mu}$ as $t\rightarrow\infty$, a
standard way is to look at the convergence rate of
$L_{t}^{A}:=\int_{E}A\text{\rm{d}}L_{t}=\frac{1}{t}\int_{0}^{t}A(X_{s})\text{\rm{d}}s\rightarrow\bar{\mu}(A)\,\,\,\textrm{as}\,\,\,t\rightarrow\infty$
for $A$ in a class of reference functions. This leads to the study of the LDP
(MDP) for the additive functional $L_{t}^{A}$. When $A$ is bounded and
continuous, (1.1) and the Contraction Principle imply the LDP of $L_{t}^{A}$,
that is, for $M\in\mathscr{B}(\mathbb{R})$,
$\mathbb{P}_{x}(L_{t}^{A}\in M)\approx\exp\\{-t\inf_{z\in M}J^{A}(z)\\},$
where $J^{A}(z)=\inf\\{J(\nu);\,\,\,\int A\text{\rm{d}}\nu=z\\}$. But this
approach does not apply when $A$ is unbounded. So, we consider the MDP
(moderate deviation principle) for $L_{t}^{A}$ with unbounded $A$, which is
equivalent to LDP for the modified additive functional
$l_{t}^{A}:=\frac{t}{a(t)}\Big{(}L_{t}^{A}-\bar{\mu}(A)\Big{)}=\frac{1}{a(t)}\int_{0}^{t}\big{(}A(X_{s})-\bar{\mu}(A)\big{)}\text{\rm{d}}s,$
where $a(t)$ is a positive function satisfying
(1.2)
$\displaystyle\lim_{t\rightarrow\infty}\frac{\sqrt{t}}{a(t)}=0,\,\,\,\,\lim_{t\rightarrow\infty}\frac{a(t)}{t}=0.$
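A standard choice satisfying (1.2) is $a(t)=t^{\kappa}$ with $\kappa\in(1/2,1)$, since

```latex
\frac{\sqrt{t}}{a(t)} = t^{\frac{1}{2}-\kappa}\xrightarrow[t\to\infty]{}0,
\qquad
\frac{a(t)}{t} = t^{\kappa-1}\xrightarrow[t\to\infty]{}0.
```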
###### Definition 1.1.
1. (1)
$L_{t}^{A}$ is said to satisfy the upper bound uniform MDP with a rate
function $I$, denoted by $L_{t}^{A}\in MDP_{u}(I)$, if for any $a$ satisfying
(1.2),
$\limsup_{t\rightarrow\infty}\frac{t}{a^{2}(t)}\log\mathbb{P}(l_{t}^{A}\in
F)\leq-\inf_{F}I,\quad
F\subset\mathbb{R}\,\,\,\textrm{is}\,\,\,\textrm{closed}.$
2. (2)
$L_{t}^{A}$ is said to satisfy the lower bound uniform MDP with a rate
function $I$, denoted by $L_{t}^{A}\in MDP_{l}(I)$, if for any $a$ satisfying
(1.2),
$\liminf_{t\rightarrow\infty}\frac{t}{a^{2}(t)}\log\mathbb{P}(l_{t}^{A}\in
G)\geq-\inf_{G}I,\quad
G\subset\mathbb{R}\,\,\,\textrm{is}\,\,\,\textrm{open}.$
3. (3)
$L_{t}^{A}$ is said to satisfy the uniform MDP with a rate function $I$,
denoted by $L_{t}^{A}\in MDP(I)$, if $L_{t}^{A}\in MDP_{u}(I)$ and
$L_{t}^{A}\in MDP_{l}(I)$.
The MDP has been established in [9] for non-degenerate SDEs by using Wang’s
Harnack inequality [16]:
$\text{\rm{d}}X_{t}=b(X_{t})\text{\rm{d}}t+\sigma(X_{t})\text{\rm{d}}B_{t},\,\,\,X_{0}=x\in\mathbb{R}^{d}.$
The assumptions in [9] were further simplified and improved in [19], so that
degenerate situations are also included.
In this paper, we investigate MDP for unbounded additive functionals of the
following distribution dependent SDE (DDSDE for short) on $\mathbb{R}^{d}$:
(1.3)
$\begin{split}&\text{\rm{d}}X_{t}=b(X_{t},\mathscr{L}_{X_{t}})\text{\rm{d}}t+\sigma(X_{t},\mathscr{L}_{X_{t}})\text{\rm{d}}B_{t},\end{split}$
where
$b:\mathbb{R}^{d}\times\mathscr{P}_{2}(\mathbb{R}^{d})\rightarrow\mathbb{R}^{d}$,
$\sigma:\mathbb{R}^{d}\times\mathscr{P}_{2}(\mathbb{R}^{d})\rightarrow\mathbb{R}^{d\times
d}$, $B_{t}$ is a $d$-dimensional Brownian motion, $\mathscr{L}_{X_{t}}$ is
the law of $X_{t}$ under the reference probability space.
Let $\mathscr{P}_{2}$ be the space of all probability measures $\mu$ on
$\mathbb{R}^{d}$ such that
$\|\mu\|_{2}:=\bigg{(}\int_{\mathbb{R}^{d}}|x|^{2}\mu(\text{\rm{d}}x)\bigg{)}^{\frac{1}{2}}<\infty.$
It is well known that $\mathscr{P}_{2}$ is a Polish space under the
Wasserstein distance
$W_{2}(\mu,\nu):=\inf_{\pi\in\mathscr{C}(\mu,\nu)}\Bigg{(}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}|x-y|^{2}\pi(\text{\rm{d}}x,\text{\rm{d}}y)\Bigg{)}^{\frac{1}{2}},$
where $\mathscr{C}(\mu,\nu)$ is the set of all couplings for $\mu$ and $\nu$.
As in [13], to establish MDP for DDSDE (1.3), we choose a reference SDE whose
solution is Markovian so that existing results on the MDP apply. By comparing
the original equation with the reference one in the sense of MDP, we establish
the MDP for the DDSDE. We will state the main results in Section 2, and
present complete proofs in Section 3.
## 2 Main results
We consider several different situations.
### 2.1 Lipschitz Continuous $A$.
We consider DDSDE (1.3) and make the following assumptions:
1. (H1)
$b$ is continuous, and $\sigma$ is Lipschitz continuous on
$\mathbb{R}^{d}\times\mathscr{P}_{2}(\mathbb{R}^{d})$ such that
$\displaystyle 2\langle
b(x,\mu)-b(y,\nu),x-y\rangle+\|\sigma(x,\mu)-\sigma(y,\nu)\|_{HS}^{2}$
$\displaystyle\leq\lambda_{2}W_{2}(\mu,\nu)^{2}-\lambda_{1}|x-y|^{2},\,\,\quad\,\,x,y\in\mathbb{R}^{d};\,\,\,\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{d})$
holds for some constants $\lambda_{1}>\lambda_{2}\geq 0$.
2. (H2)
There exist constants $0<\kappa_{1}\leq\kappa_{2}<\infty$ such that
$\kappa_{1}^{2}I\leq\sigma(x,\mu)\sigma(x,\mu)^{*}\leq\kappa_{2}^{2}I,\quad
x\in\mathbb{R}^{d},~{}\mu\in\mathscr{P}_{2}(\mathbb{R}^{d}),$
where $\sigma^{*}$ denotes the transpose of the matrix $\sigma$, $I$ denotes
the identity matrix.
According to [18, Theorem 2.1], assumption (H1) implies that for any $X_{0}\in
L^{2}(\Omega\rightarrow\mathbb{R}^{d},\mathcal{F}_{0},\mathbb{P})$, the
equation (1.3) has a unique solution. We write
$P_{t}^{*}\nu=\mathscr{L}_{X_{t}}$ if $\mathscr{L}_{X_{0}}=\nu$. By [18,
Theorem 3.1(2)], $P_{t}^{*}$ has a unique invariant probability measure
$\bar{\mu}\in\mathscr{P}_{2}(\mathbb{R}^{d})$ such that
(2.1) $W_{2}(P_{t}^{*}\nu,\bar{\mu})^{2}\leq
W_{2}(\nu,\bar{\mu})^{2}e^{-(\lambda_{1}-\lambda_{2})t},\,\,\,~{}t\geq
0,~{}\,\nu\in\mathscr{P}_{2}(\mathbb{R}^{d}).$
Consider the stationary reference SDE:
(2.2)
$\begin{split}&\text{\rm{d}}\bar{X}_{t}=b(\bar{X}_{t},\bar{\mu})\text{\rm{d}}t+\sigma(\bar{X}_{t},\bar{\mu})\text{\rm{d}}B_{t},\quad\mathscr{L}_{\bar{X}_{0}}=\bar{\mu}.\end{split}$
Under (H1), the equation (2.2) has a unique solution $\bar{X}^{x}_{t}$ for any
starting point $x\in\mathbb{R}^{d}$, and $\bar{\mu}$ is the unique invariant
probability measure of the associated Markov semigroup
$\bar{P}_{t}f(x):=\mathbb{E}[f(\bar{X}^{x}_{t})],\,\,\,t\geq
0,\,\,x\in\mathbb{R}^{d},\,\,\,f\in\mathscr{B}_{b}(\mathbb{R}^{d}),$
where $\bar{P}_{t}$ is generated by
$\bar{\mathscr{A}}:=\frac{1}{2}\sum_{i,j=1}^{d}\\{\sigma\sigma^{*}\\}_{ij}(x,\bar{\mu})\partial_{i}\partial_{j}+\sum_{i=1}^{d}b_{i}(x,\bar{\mu})\partial_{i}.$
According to [9] and [19], under assumptions (H1) and (H2), $\bar{P}_{t}$ is
$\bar{\mu}$-hypercontractive and strong Feller, i.e.,
$\|\bar{P}_{t}\|_{L^{2}(\bar{\mu})\rightarrow L^{4}(\bar{\mu})}=1$ for large
$t>0$ and $\bar{P}_{t}\mathscr{B}_{b}(\mathbb{R}^{d})\subset
C_{b}(\mathbb{R}^{d})$ for $t>0$. In particular, the hypercontractivity
implies that there exists $\lambda>0$ such that
$\bar{\mu}(|\bar{P}_{t}f-\bar{\mu}(f)|^{2})\leq e^{-\lambda
t}\bar{\mu}(|f-\bar{\mu}(f)|^{2}),\quad t\geq 0,\quad f\in L^{2}(\bar{\mu}),$
so, for any $f\in L^{2}(\bar{\mu})$,
(2.3)
$\bar{V}(f):=\int_{0}^{\infty}\bar{\mu}(|\bar{P}_{t}f-\bar{\mu}(f)|^{2})\text{\rm{d}}t<\infty.$
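Indeed, the finiteness of $\bar{V}(f)$ in (2.3) is a one-line consequence of the exponential decay above:

```latex
\bar{V}(f)
  = \int_{0}^{\infty}\bar{\mu}\bigl(|\bar{P}_{t}f-\bar{\mu}(f)|^{2}\bigr)\,\mathrm{d}t
  \le \bar{\mu}\bigl(|f-\bar{\mu}(f)|^{2}\bigr)\int_{0}^{\infty}e^{-\lambda t}\,\mathrm{d}t
  = \frac{1}{\lambda}\,\bar{\mu}\bigl(|f-\bar{\mu}(f)|^{2}\bigr) < \infty.
```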
We have the following result:
###### Theorem 2.1.
Assume (H1) and (H2). If $\mathbb{E}[e^{\delta|X_{0}|^{2}}]<\infty$ for some
constant $\delta>0$, then for any Lipschitz continuous function $A$ on
$\mathbb{R}^{d}$, $L_{t}^{A}\in\textrm{MDP}(I)$ for
$I(y)={y^{2}}/({8\bar{V}(A)})$, $y\in\mathbb{R}$.
### 2.2 Hölder continuous $A$.
When $A$ is Hölder continuous, we need to assume that
$\sigma(x,\mu)=\sigma(\mu)$ does not depend on $x$. In this case, the DDSDE
becomes
(2.4)
$\begin{split}&\text{\rm{d}}X_{t}=b(X_{t},\mathscr{L}_{X_{t}})\text{\rm{d}}t+\sigma(\mathscr{L}_{X_{t}})\text{\rm{d}}B_{t},\end{split}$
and the reference SDE reduces to
(2.5)
$\begin{split}&\text{\rm{d}}\bar{X}_{t}=b(\bar{X}_{t},\bar{\mu})\text{\rm{d}}t+\sigma(\bar{\mu})\text{\rm{d}}B_{t}.\end{split}$
Below we give the main result of this subsection.
###### Theorem 2.2.
Assume (H1), (H2), and let $\sigma(x,\mu)=\sigma(\mu)$ be independent of $x$. If
there exists a constant $\delta>0$ such that
$\mathbb{E}[e^{\delta|X_{0}|^{2}}]<\infty$, then for any function $A$ such
that
$\sup_{x\neq
y}{\frac{|A(x)-A(y)|}{|x-y|^{\alpha}\big{(}1+|x|+|y|\big{)}^{2-\alpha}}}<\infty,\quad
x,y\in\mathbb{R}^{d}$
holds for some $\alpha\in(0,1)$, $L_{t}^{A}\in\textrm{MDP}(I)$ for
$I(y)={y^{2}}/({8\bar{V}(A)})$, $y\in\mathbb{R}$.
### 2.3 Non-Hölder continuous $A$.
In this part, we consider non-Hölder continuous $A$, for which we need to
strengthen the assumption further so that $\sigma$ is a constant matrix. So,
the DDSDE and the reference SDE reduce to
(2.6)
$\begin{split}&\text{\rm{d}}X_{t}=b(X_{t},\mathscr{L}_{X_{t}})\text{\rm{d}}t+\sigma\text{\rm{d}}B_{t},\end{split}$
and
(2.7)
$\begin{split}&\text{\rm{d}}\bar{X}_{t}=b(\bar{X}_{t},\bar{\mu})\text{\rm{d}}t+\sigma\text{\rm{d}}B_{t}.\end{split}$
###### Theorem 2.3.
Assume (H1), (H2), and let $\sigma$ be a constant matrix. If
$\mathbb{E}[e^{\delta|X_{0}|^{2}}]<\infty$ for some $\delta>0$, then for any
function $A$ such that
$\sup_{x\neq
y}{\frac{|A(x)-A(y)|\cdot{\log(e+|x|^{2}+|y|^{2})\cdot[\log(e+|x-y|^{-1})]^{p}}}{\big{(}1+|x|^{2}+|y|^{2}\big{)}}}<\infty,\quad
x,y\in\mathbb{R}^{d}$
holds for some $p>1$, $L_{t}^{A}\in\textrm{MDP}(I)$ for
$I(y)={y^{2}}/({8\bar{V}(A)})$, $y\in\mathbb{R}$.
### 2.4 The degenerate case
In this section, we consider the distribution dependent stochastic Hamiltonian
system for $X_{t}=(X_{t}^{(1)},X_{t}^{(2)})$ on $\mathbb{R}^{m+d}$:
(2.8) $\displaystyle\left\\{\begin{aligned}
\text{\rm{d}}X_{t}^{(1)}&=(AX_{t}^{(1)}+BX_{t}^{(2)})\text{\rm{d}}t,\\\
\text{\rm{d}}X_{t}^{(2)}&=Z(X_{t},\mathscr{L}_{X_{t}})\text{\rm{d}}t+M\text{\rm{d}}B_{t},\end{aligned}\right.$
where $A,B$ and $M$ are $m\times m$, $m\times d$ and $d\times d$ matrices
respectively, and $B_{t}$ is a $d$-dimensional Brownian motion. Define
$W_{2}(\nu_{1},\nu_{2}):=\inf_{\pi\in\mathscr{C}(\nu_{1},\nu_{2})}\Bigg{(}\int_{\mathbb{R}^{m+d}\times\mathbb{R}^{m+d}}\big{(}|\xi_{1}^{(1)}-\xi_{2}^{(1)}|^{2}+|\xi_{1}^{(2)}-\xi_{2}^{(2)}|^{2}\big{)}\pi\big{(}\text{\rm{d}}\xi_{1},\text{\rm{d}}\xi_{2}\big{)}\Bigg{)}^{\frac{1}{2}}.$
We assume
1. (D1)
$M$ is invertible and $Rank[B,AB,\ldots,A^{m-1}B]=m$.
2. (D2)
$Z:\mathbb{R}^{m+d}\times\mathscr{P}_{2}(\mathbb{R}^{m+d})\rightarrow\mathbb{R}^{d}$
is Lipschitz continuous.
3. (D3)
There exist constants $r>0$, $\theta_{1}>\theta_{2}>0$ and
$r_{0}\in(-\|B\|^{-1},\|B\|^{-1})$ such that
$\displaystyle\langle
r^{2}(x^{(1)}-y^{(1)})+rr_{0}B(x^{(2)}-y^{(2)}),A(x^{(1)}-y^{(1)})+B(x^{(2)}-y^{(2)})\rangle$
$\displaystyle+\langle
Z(x,\mu)-Z(y,\nu),x^{(2)}-y^{(2)}+rr_{0}B^{*}(x^{(1)}-y^{(1)})\rangle$
$\displaystyle\leq-\theta_{1}(|x^{(1)}-y^{(1)}|^{2}+|x^{(2)}-y^{(2)}|^{2})+\theta_{2}W_{2}(\mu,\nu)^{2},$
$\displaystyle\,\quad\,x=(x^{(1)},x^{(2)}),~{}y=(y^{(1)},y^{(2)})\in\mathbb{R}^{m+d},~{}\mu,\nu\in\mathscr{P}_{2}(\mathbb{R}^{m+d}).$
###### Theorem 2.4.
Assume (D1)-(D3), and let $A$ be a Lipschitz continuous function on
$\mathbb{R}^{m+d}$. Then
1. (1)
For any $\mu_{0},\nu_{0}\in\mathscr{P}_{2}(\mathbb{R}^{m+d})$, there exists a
constant $C$ such that
$W_{2}(P_{t}^{*}\mu_{0},P_{t}^{*}\nu_{0})^{2}\leq
Ce^{-\frac{\theta_{1}-\theta_{2}}{2C}t}W_{2}(\mu_{0},\nu_{0})^{2},\quad\quad
t\geq 0.$
2. (2)
$P_{t}^{*}$ has an invariant probability measure
$\bar{\mu}\in\mathscr{P}_{2}(\mathbb{R}^{m+d})$ such that
(2.9) $\displaystyle W_{2}(P_{t}^{*}\mu_{0},\bar{\mu})^{2}\leq
Ce^{-\frac{\theta_{1}-\theta_{2}}{2C}t}W_{2}(\mu_{0},\bar{\mu})^{2},\quad\quad
t\geq 0,~{}\mu_{0}\in\mathscr{P}_{2}(\mathbb{R}^{m+d}).$
3. (3)
If there exists a constant $\delta>0$ such that
$\mathbb{E}[e^{\delta|X_{0}|^{2}}]<\infty$, then $L_{t}^{A}\in\textrm{MDP}(I)$
for $I(y)={y^{2}}/({8\bar{V}(A)})$, $y\in\mathbb{R}^{m+d}$.
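Before turning to the proofs, the degenerate system (2.8) can be illustrated with a minimal Euler–Maruyama sketch in which the law $\mathscr{L}_{X_t}$ is approximated by the empirical measure of interacting particles. All concrete choices here are our own illustrative assumptions, not from the text: $m=d=1$, $A=0$, $B=M=1$ (so $\mathrm{Rank}[B]=1$ and the rank part of (D1) holds) and $Z(x,\mu)=-x^{(1)}-x^{(2)}-0.1\,\mathbb{E}_{\mu}[\xi^{(2)}]$; conditions (D2)–(D3) would need to be verified separately for a rigorous application.

```python
import random

def simulate_hamiltonian(n_particles=50, steps=1000, h=0.01, seed=0):
    """Euler-Maruyama for an interacting-particle approximation of (2.8)
    with m = d = 1, A = 0, B = M = 1 and
    Z(x, mu) = -x1 - x2 - 0.1 * mean_mu(second component)  (illustrative)."""
    rng = random.Random(seed)
    x1 = [rng.uniform(-1.0, 1.0) for _ in range(n_particles)]
    x2 = [rng.uniform(-1.0, 1.0) for _ in range(n_particles)]
    sq = h ** 0.5
    for _ in range(steps):
        mean2 = sum(x2) / n_particles  # empirical stand-in for L_{X_t}
        new1 = [a + b * h for a, b in zip(x1, x2)]            # dX1 = (A X1 + B X2) dt
        new2 = [b + (-a - b - 0.1 * mean2) * h + sq * rng.gauss(0.0, 1.0)
                for a, b in zip(x1, x2)]                      # dX2 = Z dt + M dB
        x1, x2 = new1, new2
    return x1, x2

x1, x2 = simulate_hamiltonian()
print(len(x1), all(abs(v) < 1e6 for v in x1 + x2))  # → 50 True
```

With these dissipative choices the particles stay bounded; noise enters only through the second component, as in (2.8).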
## 3 Proofs of main results
### 3.1 Proof of Theorem 2.1
To prove Theorem 2.1, we will compare $l_{t}^{A}$ with the additive
functional for $\bar{X}_{t}$. Let
$\bar{L}_{t}^{A}:=\frac{1}{t}\int_{0}^{t}A(\bar{X}_{s})\text{\rm{d}}s,$
and
$\bar{l}_{t}^{A}:=\frac{t}{a(t)}(\bar{L}_{t}^{A}-\bar{\mu}(A))=\frac{1}{a(t)}\int_{0}^{t}\big{(}A(\bar{X}_{s})-\bar{\mu}(A)\big{)}\text{\rm{d}}s,$
where $a(t)$ is a positive function satisfying (1.2).
Define the Cramér functional of $\bar{l}_{t}^{A}$:
(3.1) $\displaystyle\Lambda(z)$
$\displaystyle:=\lim_{t\rightarrow+\infty}\frac{t}{a^{2}(t)}\log\mathbb{E}_{x}\bigg{[}\exp\Big{\\{}\frac{a^{2}(t)}{t}z\bar{l}_{t}^{A}\Big{\\}}\bigg{]}$
$\displaystyle=\lim_{t\rightarrow+\infty}\frac{t}{a^{2}(t)}\log\mathbb{E}_{x}\bigg{[}\exp\Big{\\{}\frac{a(t)}{t}z\int_{0}^{t}\big{(}A(\bar{X}_{s})-\bar{\mu}(A)\big{)}\text{\rm{d}}s\Big{\\}}\bigg{]},$
where $\mathbb{E}_{x}$ is the expectation conditional on $\bar{X}_{0}=x$, and $z$ is a
constant. The Legendre transformation of $\Lambda(z)$ is defined by
$\Lambda^{*}(y):=\sup_{z\in\mathbb{R}}\\{zy-\Lambda(z)\\},$
which is related to the rate function. According to the Gärtner-Ellis Theorem
and [9, Theorem 1.3], $\bar{L}_{t}^{A}\in\textrm{MDP}(I)$ for
$I(y)={y^{2}}/({8\bar{V}(A)})$, $y\in\mathbb{R}$.
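Concretely, for the quadratic case $\Lambda(z)=2\bar{V}(A)z^{2}$ the Legendre transform recovers the rate function $I(y)=y^{2}/(8\bar{V}(A))$, since the supremum of $zy-2\bar{V}(A)z^{2}$ is attained at $z=y/(4\bar{V}(A))$. The sketch below checks this by a grid search; the value $\bar{V}(A)=1$ (so $\Lambda^{*}(2)=1/2$) is an illustrative assumption.

```python
def legendre(Lam, y, z_min=-10.0, z_max=10.0, n=20001):
    """Numerical Legendre transform Lam*(y) = sup_z { z*y - Lam(z) } by grid search."""
    h = (z_max - z_min) / (n - 1)
    return max((z_min + i * h) * y - Lam(z_min + i * h) for i in range(n))

V_bar = 1.0                            # illustrative value of bar{V}(A)
Lam = lambda z: 2.0 * V_bar * z * z    # quadratic Cramer functional
print(legendre(Lam, 2.0))              # ≈ 2**2 / (8 * V_bar) = 0.5
```

The grid search is crude but suffices to confirm the closed-form transform on this example.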
We next introduce the following exponential approximation lemma, which is
useful in applications; see for instance [8, Theorem 4.2.16] and [14, Theorem
3.2].
###### Lemma 3.1.
(Exponential approximations) If
$\bar{L}_{t}^{A}\in\textrm{MDP}_{u}(I)$ (respectively $\textrm{MDP}_{l}(I)$)
and for any $a$ satisfying (1.2),
$\lim_{t\rightarrow\infty}\frac{t}{a^{2}(t)}\log\mathbb{P}(|l_{t}^{A}-\bar{l}_{t}^{A}|>\varepsilon)=-\infty,\,\,\forall\varepsilon>0,$
then $L_{t}^{A}\in\textrm{MDP}_{u}(I)$ (respectively $\textrm{MDP}_{l}(I)$).
Using this Lemma, we prove the following result, which is crucial in the
present study.
###### Theorem 3.2.
If $\bar{L}_{t}^{A}\in\textrm{MDP}_{u}(I)$ (respectively $\textrm{MDP}_{l}(I)$)
and there exists a constant $\delta>0$ such that
(3.2)
$\displaystyle\mathbb{E}\Big{[}\exp\Big{\\{}\delta\int_{0}^{\infty}|X_{s}-\bar{X}_{s}|\text{\rm{d}}s\Big{\\}}\Big{]}<\infty,$
then $L_{t}^{A}\in\textrm{MDP}_{u}(I)$ (respectively $\textrm{MDP}_{l}(I)$).
###### Proof.
Due to that $\lim_{t\rightarrow\infty}\frac{a(t)}{t}=0$, there exists
$t_{0}>0$ such that $\sqrt{\frac{a(t)}{t}}\leq\delta$ when $t\geq t_{0}$.
Below, we assume that $t\geq t_{0}$ and we have
$\mathbb{P}(|l_{t}^{A}-\bar{l}_{t}^{A}|>\varepsilon)\leq\mathbb{P}\Big{(}\sqrt{\frac{a(t)}{t}}\int_{0}^{t}|X_{s}-\bar{X}_{s}|\text{\rm{d}}s>\sqrt{\frac{a(t)}{t}}\frac{a(t)\varepsilon}{K}\Big{)},$
by Chebyshev’s inequality, we obtain
$\displaystyle\mathbb{P}(|l_{t}^{A}-\bar{l}_{t}^{A}|>\varepsilon)$
$\displaystyle\leq\frac{\mathbb{E}\Big{[}\exp\Big{\\{}\sqrt{\frac{a(t)}{t}}\int_{0}^{t}|X_{s}-\bar{X}_{s}|\text{\rm{d}}s\Big{\\}}\Big{]}}{\exp\\{\sqrt{\frac{a(t)}{t}}\frac{a(t)\varepsilon}{K}\\}},$
Then (1.2) and (3.2) imply that for all $\varepsilon>0$,
$\displaystyle\lim_{t\rightarrow\infty}\frac{t}{a^{2}(t)}\log\mathbb{P}(|l_{t}^{A}-\bar{l}_{t}^{A}|>\varepsilon)$
$\displaystyle\leq\lim_{t\rightarrow\infty}\frac{t}{a^{2}(t)}\Big{(}\log\mathbb{E}\Big{[}\exp\Big{\\{}\sqrt{\frac{a(t)}{t}}\int_{0}^{t}|X_{s}-\bar{X}_{s}|\text{\rm{d}}s\Big{\\}}\Big{]}-\sqrt{\frac{a(t)}{t}}\frac{a(t)\varepsilon}{K}\Big{)}$
$\displaystyle\leq\lim_{t\rightarrow\infty}\frac{t}{a^{2}(t)}\log\mathbb{E}\Big{[}\exp\Big{\\{}\delta\int_{0}^{t}|X_{s}-\bar{X}_{s}|\text{\rm{d}}s\Big{\\}}\Big{]}-\lim_{t\rightarrow\infty}\sqrt{\frac{t}{a(t)}}\frac{\varepsilon}{K}$
$\displaystyle=-\infty.$
Then the desired assertion follows from Lemma 3.1. ∎
###### Proof of Theorem 2.1.
Let $\mathscr{L}_{X_{0}}=\nu$ and $\mathscr{L}_{\bar{X}_{0}}=\bar{\mu}$.
According to [9, Theorem 1.1-1.3], $\bar{L}_{t}^{A}\in\textrm{MDP}(I)$. So, it
suffices to show (3.2) for some $\delta>0$.
Condition (H1) implies that the reference SDE (2.2) is well-posed, its
solution is a Markov process, and $\bar{\mu}$ is the unique invariant probability
measure of $P_{t}^{*}$. Simply denote
$X_{t}=X_{t}^{\nu},\bar{X}_{t}=\bar{X}_{t}^{x}$ and
$P_{t}^{*}\nu=\mathscr{L}_{X_{t}^{\nu}}$ for
$\nu\in\mathscr{P}_{2}(\mathbb{R}^{d})$. By Itô’s formula and (H1),
$\displaystyle\text{\rm{d}}|X_{t}-\bar{X}_{t}|^{2}$
$\displaystyle\leq\big{\\{}\lambda_{2}W_{2}(P_{t}^{*}\nu,\bar{\mu})^{2}-\lambda_{1}|X_{t}-\bar{X}_{t}|^{2}\big{\\}}\text{\rm{d}}t$
$\displaystyle\quad+2\big{\langle}X_{t}-\bar{X}_{t},\big{(}\sigma(X_{t},P_{t}^{*}\nu)-\sigma(\bar{X}_{t},\bar{\mu})\big{)}\text{\rm{d}}B_{t}\big{\rangle}.$
Let $\xi_{t}=\big{(}e^{-\lambda
t}+|X_{t}-\bar{X}_{t}|^{2}\big{)}^{\frac{1}{2}}$, where
$\lambda:=\lambda_{1}-\lambda_{2}$. By (2.1), we find a constant $C>0$ such
that
$\displaystyle\text{\rm{d}}\xi_{t}\leq-\frac{\lambda_{1}}{2}\xi_{t}\text{\rm{d}}t+Ce^{-\frac{\lambda}{2}t}\text{\rm{d}}t+\text{\rm{d}}M_{t},$
where
$\text{\rm{d}}M_{t}=\frac{1}{\xi_{t}}\big{\langle}X_{t}-\bar{X}_{t},\big{(}\sigma(X_{t},P_{t}^{*}\nu)-\sigma(\bar{X}_{t},\bar{\mu})\big{)}\text{\rm{d}}B_{t}\big{\rangle}$.
Therefore, for some $\delta>0$, we obtain that
$\displaystyle\mathbb{E}\big{[}e^{\delta\int_{0}^{t}\xi_{s}\text{\rm{d}}s}\big{]}$
$\displaystyle\leq e^{\frac{4\delta
C}{\lambda_{1}\lambda}}\mathbb{E}\Big{[}e^{\frac{2\delta\xi_{0}}{\lambda_{1}}}e^{\frac{2\delta}{\lambda_{1}}\int_{0}^{t}\text{\rm{d}}M_{s}}\Big{]}$
$\displaystyle=e^{\frac{4\delta
C}{\lambda_{1}\lambda}}\mathbb{E}\Big{[}\mathbb{E}\big{[}e^{\frac{2\delta\xi_{0}}{\lambda_{1}}}e^{\frac{2\delta}{\lambda_{1}}\int_{0}^{t}\text{\rm{d}}M_{s}}|\mathcal{F}_{0}\big{]}\Big{]}$
$\displaystyle\leq
C(\delta)\Big{(}\mathbb{E}\Big{[}e^{\frac{4\delta\xi_{0}}{\lambda_{1}}}\Big{]}\Big{)}^{\frac{1}{2}}\Big{(}\mathbb{E}\Big{[}e^{\frac{64\delta^{3}\kappa_{2}\sqrt{d}}{\lambda_{1}^{2}}\int_{0}^{t}|X_{s}-\bar{X}_{s}|\text{\rm{d}}s}\Big{]}\Big{)}^{\frac{1}{4}},\,\,\,t>0$
holds for some constant $C(\delta)>0$. Therefore,
$\mathbb{E}e^{\delta\int_{0}^{\infty}|X_{s}-\bar{X}_{s}|\text{\rm{d}}s}\leq\mathbb{E}e^{\delta\int_{0}^{\infty}\xi_{s}\text{\rm{d}}s}<\infty$
for $\delta>0$ small enough. Hence there exists some constant $\delta>0$
such that (3.2) holds. ∎
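The synchronous coupling used in this proof, driving $X_t$ and $\bar{X}_t$ by the same Brownian motion so that the noise cancels in the difference when the diffusion coefficients agree, can be illustrated numerically. With the toy drift $b(x)=-x$ and additive noise (our own illustrative choices, not from the text), the difference solves $\text{\rm{d}}\Delta_t=-\Delta_t\,\text{\rm{d}}t$, hence $|\Delta_t|=e^{-t}|\Delta_0|$ and $\int_0^\infty|\Delta_s|\,\text{\rm{d}}s<\infty$, in the spirit of (3.2).

```python
import math, random

def coupled_em(delta0=1.0, T=5.0, h=0.001, seed=1):
    """Euler-Maruyama for dX = -X dt + dB and dXbar = -Xbar dt + dB
    driven by the SAME Brownian increments (synchronous coupling)."""
    rng = random.Random(seed)
    x, xbar = delta0, 0.0
    sq = math.sqrt(h)
    for _ in range(int(T / h)):
        dB = sq * rng.gauss(0.0, 1.0)  # shared noise increment
        x, xbar = x - x * h + dB, xbar - xbar * h + dB
    return x - xbar                    # the noise has cancelled exactly

d = coupled_em()
print(abs(d - math.exp(-5.0)) < 1e-3)  # difference decays like e^{-t} → True
```

Because the noise increments are shared, the returned difference is deterministic: it equals $\Delta_0(1-h)^{T/h}\approx e^{-T}\Delta_0$, matching the exponential contraction exploited above.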
### 3.2 Proof of Theorem 2.2
In order to prove Theorem 2.2, we need the following result.
###### Theorem 3.3.
If $\bar{L}_{t}^{A}\in\textrm{MDP}_{u}(I)$ (respectively $\textrm{MDP}_{l}(I)$)
and there exists a constant $\delta>0$ such that
(3.3)
$\displaystyle\mathbb{E}\Big{[}\exp\Big{\\{}\delta\int_{0}^{\infty}|X_{s}-\bar{X}_{s}|^{\alpha}\big{(}1+|X_{s}|+|\bar{X}_{s}|\big{)}^{2-\alpha}\text{\rm{d}}s\Big{\\}}\Big{]}<\infty,$
then $L_{t}^{A}\in\textrm{MDP}_{u}(I)$ (respectively $\textrm{MDP}_{l}(I)$).
The proof is similar to that of Theorem 3.2, so we omit it to save space.
###### Proof of Theorem 2.2.
Let $\mathscr{L}_{X_{0}}=\nu$ and $\mathscr{L}_{\bar{X}_{0}}=\bar{\mu}$.
According to [19, Theorem 2.1], $\bar{L}_{t}^{A}\in\textrm{MDP}(I)$ for
$I(y)={y^{2}}/({8\bar{V}(A)})$, $y\in\mathbb{R}$. Therefore, it suffices to
show (3.3) for some $\delta>0$.
Assumption (H1) and (2.1) yield
$\|\sigma(P_{t}^{*}\nu)-\sigma(\bar{\mu})\|_{HS}^{2}\leq\lambda_{2}W_{2}(P_{t}^{*}\nu,\bar{\mu})^{2}\leq\lambda_{2}e^{-(\lambda_{2}-\lambda_{1})t}W_{2}(\nu,\bar{\mu})^{2}.$
By Young’s inequality and $C_{r}$ inequality, for any $\lambda>0$,
$\displaystyle|X_{t}-\bar{X}_{t}|^{\alpha}\big{(}1+|X_{t}|+|\bar{X}_{t}|\big{)}^{2-\alpha}$
$\displaystyle=|X_{t}-\bar{X}_{t}|^{\alpha}e^{\lambda t}e^{-\lambda
t}\big{(}1+|X_{t}|+|\bar{X}_{t}|\big{)}^{2-\alpha}$
$\displaystyle\leq\frac{\alpha}{2}e^{\frac{2\lambda
t}{\alpha}}|X_{t}-\bar{X}_{t}|^{2}+\frac{2-\alpha}{2}e^{-\frac{2\lambda
t}{2-\alpha}}3\big{(}1+|X_{t}|^{2}+|\bar{X}_{t}|^{2}\big{)}.$
Below, for simplicity, we take
$\lambda<\frac{\alpha(\lambda_{1}-\lambda_{2})}{2}$. By Hölder’s inequality,
we have
$\displaystyle\mathbb{E}\Big{[}e^{\delta\int_{0}^{\infty}|X_{s}-\bar{X}_{s}|^{\alpha}(1+|X_{s}|+|\bar{X}_{s}|)^{2-\alpha}\text{\rm{d}}s}\Big{]}$
$\displaystyle\leq\Big{(}\mathbb{E}\Big{[}e^{\alpha\delta\int_{0}^{\infty}e^{\frac{2\lambda
s}{\alpha}}|X_{s}-\bar{X}_{s}|^{2}\text{\rm{d}}s}\Big{]}\Big{)}^{\frac{1}{2}}\Big{(}\mathbb{E}\Big{[}e^{3(2-\alpha)\delta\int_{0}^{\infty}e^{-\frac{2\lambda
s}{2-\alpha}}(1+|X_{s}|^{2}+|\bar{X}_{s}|^{2})\text{\rm{d}}s}\Big{]}\Big{)}^{\frac{1}{2}}.$
Below we search for a constant $\delta>0$ such that
$I_{1}:=\mathbb{E}\Big{[}e^{\alpha\delta\int_{0}^{\infty}e^{\frac{2\lambda
s}{\alpha}}|X_{s}-\bar{X}_{s}|^{2}\text{\rm{d}}s}\Big{]}<\infty,$
and
$I_{2}:=\mathbb{E}\Big{[}e^{3(2-\alpha)\delta\int_{0}^{\infty}e^{-\frac{2\lambda
s}{2-\alpha}}(1+|X_{s}|^{2}+|\bar{X}_{s}|^{2})\text{\rm{d}}s}\Big{]}<\infty.$
By Itô’s formula and (H1), we have
$\text{\rm{d}}|X_{t}-\bar{X}_{t}|^{2}\leq\big{(}\lambda_{2}W_{2}(P_{t}^{*}\nu,\bar{\mu})^{2}-\lambda_{1}|X_{t}-\bar{X}_{t}|^{2}\big{)}\text{\rm{d}}t+2\langle
X_{t}-\bar{X}_{t},\big{(}\sigma(P_{t}^{*}\nu)-\sigma(\bar{\mu})\big{)}\text{\rm{d}}B_{t}\rangle.$
By the chain rule, we obtain
$\displaystyle\text{\rm{d}}\\{e^{\frac{2\lambda
t}{\alpha}}|X_{t}-\bar{X}_{t}|^{2}\\}$ $\displaystyle\leq e^{\frac{2\lambda
t}{\alpha}}\Big{\\{}\frac{2\lambda}{\alpha}|X_{t}-\bar{X}_{t}|^{2}+\lambda_{2}W_{2}(P_{t}^{*}\nu,\bar{\mu})^{2}-\lambda_{1}|X_{t}-\bar{X}_{t}|^{2}\Big{\\}}\text{\rm{d}}t$
$\displaystyle\,\,\,\,\,+2e^{\frac{2\lambda t}{\alpha}}\langle
X_{t}-\bar{X}_{t},\big{(}\sigma(P_{t}^{*}\nu)-\sigma(\bar{\mu})\big{)}\text{\rm{d}}B_{t}\rangle.$
Therefore, we obtain that
$\displaystyle\mathbb{E}\Big{[}e^{\alpha\delta\int_{0}^{t}e^{\frac{2\lambda
s}{\alpha}}|X_{s}-\bar{X}_{s}|^{2}\text{\rm{d}}s}\Big{]}$ $\displaystyle\leq
C_{1}(\delta)\mathbb{E}\Big{[}e^{\frac{\alpha\delta|X_{0}-\bar{X}_{0}|^{2}}{\lambda_{1}-2\lambda/{\alpha}}+\frac{2\alpha\delta}{\lambda_{1}-2\lambda/{\alpha}}\int_{0}^{t}e^{\frac{2\lambda
s}{\alpha}}\langle
X_{s}-\bar{X}_{s},(\sigma(P_{s}^{*}\nu)-\sigma(\bar{\mu}))\text{\rm{d}}B_{s}\rangle}\Big{]}$
$\displaystyle=C_{1}(\delta)\mathbb{E}\Big{[}\mathbb{E}\big{[}e^{\frac{\alpha\delta|X_{0}-\bar{X}_{0}|^{2}}{\lambda_{1}-2\lambda/{\alpha}}+\frac{2\alpha\delta}{\lambda_{1}-2\lambda/{\alpha}}\int_{0}^{t}e^{\frac{2\lambda
s}{\alpha}}\langle
X_{s}-\bar{X}_{s},(\sigma(P_{s}^{*}\nu)-\sigma(\bar{\mu}))\text{\rm{d}}B_{s}\rangle}|\mathcal{F}_{0}\big{]}\Big{]}$
$\displaystyle\leq
C_{1}(\delta)\Big{(}\mathbb{E}\Big{[}e^{\frac{\alpha\delta|X_{0}-\bar{X}_{0}|^{2}}{\lambda_{1}-2\lambda/{\alpha}}}\Big{]}\Big{)}^{\frac{1}{2}}\Big{(}\mathbb{E}\Big{[}e^{\frac{32\alpha^{2}\delta^{2}\lambda_{2}W_{2}(\nu,\bar{\mu})^{2}}{(\lambda_{1}-2\lambda/{\alpha})^{2}}\int_{0}^{t}e^{\frac{2\lambda
s}{\alpha}}|X_{s}-\bar{X}_{s}|^{2}\text{\rm{d}}s}\Big{]}\Big{)}^{\frac{1}{4}},$
where
$C_{1}(\delta):=e^{\frac{\alpha\delta\lambda_{2}W_{2}(\nu,\bar{\mu})^{2}}{(\lambda_{1}-2\lambda/{\alpha})(\lambda_{1}-\lambda_{2}-2\lambda/{\alpha})}}$.
We choose $\delta$ such that
$\delta\leq\frac{(\lambda_{1}-2\lambda/{\alpha})^{2}}{32\alpha\lambda_{2}W_{2}(\nu,\bar{\mu})^{2}}$,
which leads to $I_{1}<\infty$.
Next we need to prove that $I_{2}<\infty$. By (B1), there exist constants
$c_{1},c_{2}>0$ such that
(3.4)
$\displaystyle\text{\rm{d}}|X_{t}|^{2}\leq\big{(}c_{2}-c_{1}|X_{t}|^{2}+c_{2}W_{2}(P_{t}^{*}\nu,\bar{\mu})^{2}\big{)}\text{\rm{d}}t+2\langle
X_{t},\sigma(P_{t}^{*}\nu)\text{\rm{d}}B_{t}\rangle,$ (3.5)
$\displaystyle\text{\rm{d}}|\bar{X}_{t}|^{2}\leq\big{(}c_{2}-c_{1}|\bar{X}_{t}|^{2}+c_{2}\|\bar{\mu}\|_{2}^{2}\big{)}\text{\rm{d}}t+2\langle\bar{X}_{t},\sigma(\bar{\mu})\text{\rm{d}}B_{t}\rangle,$
and
$\|\sigma(P_{t}^{*}\nu)\|_{HS}^{2}\leq
c_{2}\big{(}1+W_{2}(P_{t}^{*}\nu,\bar{\mu})^{2}\big{)}.$
Recall that
$\displaystyle I_{2}$
$\displaystyle:=\mathbb{E}\Big{[}e^{3(2-\alpha)\delta\int_{0}^{\infty}e^{-\frac{2\lambda
s}{2-\alpha}}(1+|X_{s}|^{2}+|\bar{X}_{s}|^{2})\text{\rm{d}}s}\Big{]}$
$\displaystyle=\mathbb{E}\Big{[}e^{3(2-\alpha)\delta\int_{0}^{\infty}e^{-\frac{2\lambda
s}{2-\alpha}}\text{\rm{d}}s+3(2-\alpha)\delta\int_{0}^{\infty}e^{-\frac{2\lambda
s}{2-\alpha}}|X_{s}|^{2}\text{\rm{d}}s+3(2-\alpha)\delta\int_{0}^{\infty}e^{-\frac{2\lambda
s}{2-\alpha}}|\bar{X}_{s}|^{2}\text{\rm{d}}s}\Big{]}$ $\displaystyle\leq
e^{\frac{3\delta(2-\alpha)^{2}}{2\lambda}}\Big{(}\mathbb{E}\Big{[}e^{6\delta{(2-\alpha)}\int_{0}^{\infty}e^{-\frac{2\lambda
s}{2-\alpha}}|X_{s}|^{2}\text{\rm{d}}s}\Big{]}\Big{)}^{\frac{1}{2}}\Big{(}\mathbb{E}\Big{[}e^{6\delta{(2-\alpha)}\int_{0}^{\infty}e^{-\frac{2\lambda
s}{2-\alpha}}|\bar{X}_{s}|^{2}\text{\rm{d}}s}\Big{]}\Big{)}^{\frac{1}{2}}.$
Thus, for $I_{2}<\infty$, it suffices to show
$I_{2}^{\prime}:=\mathbb{E}\Big{[}e^{6\delta{(2-\alpha)}\int_{0}^{\infty}e^{-\frac{2\lambda
s}{2-\alpha}}|X_{s}|^{2}\text{\rm{d}}s}\Big{]}<\infty,$
and
$I_{2}^{\prime\prime}:=\mathbb{E}\Big{[}e^{6\delta{(2-\alpha)}\int_{0}^{\infty}e^{-\frac{2\lambda
s}{2-\alpha}}|\bar{X}_{s}|^{2}\text{\rm{d}}s}\Big{]}<\infty.$
By the chain rule and (3.4), we have
$\displaystyle\text{\rm{d}}\\{e^{-\frac{2\lambda t}{2-\alpha}}|X_{t}|^{2}\\}$
$\displaystyle\leq e^{-\frac{2\lambda
t}{2-\alpha}}\big{\\{}\big{(}-\frac{2\lambda}{2-\alpha}-c_{1}\big{)}|X_{t}|^{2}+c_{2}+c_{2}W_{2}(P_{t}^{*}\nu,\bar{\mu})^{2}\big{\\}}\text{\rm{d}}t+2e^{-\frac{2\lambda
t}{2-\alpha}}\langle X_{t},\sigma(P_{t}^{*}\nu)\text{\rm{d}}B_{t}\rangle.$
Then
$\displaystyle\mathbb{E}\Big{[}e^{6\delta{(2-\alpha)}\int_{0}^{t}e^{-\frac{2\lambda
s}{2-\alpha}}|X_{s}|^{2}\text{\rm{d}}s}\Big{]}$ $\displaystyle\leq
C_{2}(\delta)\mathbb{E}\Big{[}e^{\frac{6\delta{(2-\alpha)}|X_{0}|^{2}}{2\lambda/(2-\alpha)+c_{1}}}e^{\frac{12\delta{(2-\alpha)}}{2\lambda/(2-\alpha)+c_{1}}\int_{0}^{t}e^{-\frac{2\lambda
s}{2-\alpha}}\langle
X_{s},\sigma(P_{s}^{*}\nu)\text{\rm{d}}B_{s}\rangle}\Big{]}$
$\displaystyle\leq
C_{2}(\delta)\Big{(}\mathbb{E}\Big{[}e^{\frac{12\delta{(2-\alpha)}|X_{0}|^{2}}{2\lambda/(2-\alpha)+c_{1}}}\Big{]}\Big{)}^{\frac{1}{2}}\Big{(}\mathbb{E}\Big{[}e^{\frac{288\delta^{2}(2-\alpha)^{4}c_{2}(1+W_{2}(\nu,\bar{\mu})^{2})}{(2\lambda+c_{1}(2-\alpha))^{2}}\int_{0}^{t}e^{-\frac{2\lambda
s}{2-\alpha}}|X_{s}|^{2}\text{\rm{d}}s}\Big{]}\Big{)}^{\frac{1}{4}},$
where
$C_{2}(\delta)=\exp\big{\\{}6c_{2}\delta(2-\alpha)^{3}\big{(}\frac{1}{4\lambda^{2}+2\lambda
c_{1}(2-\alpha)}+\frac{1}{(2\lambda+c_{1}(2-\alpha))(2\lambda+(\lambda_{1}-\lambda_{2})(2-\alpha))}\big{)}\big{\\}}$.
Then for
$\delta\leq\frac{(2\lambda+c_{1}(2-\alpha))^{2}}{48(2-\alpha)^{3}c_{2}(1+W_{2}(\nu,\bar{\mu})^{2})}$,
we have $I_{2}^{\prime}<\infty$.
On the other hand, the same argument gives
$\displaystyle\mathbb{E}\Big{[}e^{6\delta{(2-\alpha)}\int_{0}^{t}e^{-\frac{2\lambda
s}{2-\alpha}}|\bar{X}_{s}|^{2}\text{\rm{d}}s}\Big{]}\leq
C_{3}(\delta)\Big{(}\mathbb{E}\Big{[}e^{\frac{12\delta{(2-\alpha)}|\bar{X}_{0}|^{2}}{2\lambda/(2-\alpha)+c_{1}}}\Big{]}\Big{)}^{\frac{1}{2}}\Big{(}\mathbb{E}\Big{[}e^{\frac{288\delta^{2}(2-\alpha)^{4}\|\sigma(\bar{\mu})\|_{HS}^{2}}{(2\lambda+c_{1}(2-\alpha))^{2}}\int_{0}^{t}e^{-\frac{2\lambda
s}{2-\alpha}}|\bar{X}_{s}|^{2}\text{\rm{d}}s}\Big{]}\Big{)}^{\frac{1}{4}}$
where
$C_{3}(\delta)=\exp\\{\frac{6\delta(2-\alpha)^{3}c_{2}(1+\|\bar{\mu}\|_{2}^{2})}{4\lambda^{2}+2\lambda
c_{1}(2-\alpha)}\\}$. So, when
$\delta\leq\frac{(2\lambda+c_{1}(2-\alpha))^{2}}{48(2-\alpha)^{3}\|\sigma(\bar{\mu})\|_{HS}^{2}}$,
we have $I_{2}^{\prime\prime}<\infty$.
Finally, we take
$\delta\leq\min\Big{\\{}\frac{(\lambda_{1}-2\lambda/{\alpha})^{2}}{32\alpha\lambda_{2}W_{2}(\nu,\bar{\mu})^{2}},\frac{(2\lambda+c_{1}(2-\alpha))^{2}}{48(2-\alpha)^{3}c_{2}(1+W_{2}(\nu,\bar{\mu})^{2})},\frac{(2\lambda+c_{1}(2-\alpha))^{2}}{48(2-\alpha)^{3}\|\sigma(\bar{\mu})\|_{HS}^{2}}\Big{\\}}$.
We conclude that there exists $\delta>0$ such that $I_{2}<\infty$, which together
with $I_{1}<\infty$ finishes the proof. ∎
### 3.3 Proof of Theorem 2.3
Let $\mathscr{L}_{X_{0}}=\nu$ and $\mathscr{L}_{\bar{X}_{0}}=\bar{\mu}$.
According to [19, Theorem 2.1], $\bar{L}_{t}^{A}\in\textrm{MDP}(I)$ for
$I(y)={y^{2}}/({8\bar{V}(A)})$, $y\in\mathbb{R}$. Therefore, by Lemma 3.1
(see also [8, Theorem 4.2.16] or [14, Theorem 3.2]), it suffices to prove
(3.6)
$\displaystyle\mathbb{E}\Big{[}\exp\Big{\\{}\delta\int_{0}^{\infty}\frac{\big{(}1+|X_{s}|^{2}+|\bar{X}_{s}|^{2}\big{)}}{\log(e+|X_{s}|^{2}+|\bar{X}_{s}|^{2})[\log(e+|X_{s}-\bar{X}_{s}|^{-1})]^{p}}\text{\rm{d}}s\Big{\\}}\Big{]}<\infty$
for some constant $\delta>0$.
By the chain rule and (H1), we have
$\displaystyle\text{\rm{d}}\big{(}e^{\lambda_{1}t}|X_{t}-\bar{X}_{t}|^{2}\big{)}$
$\displaystyle=e^{\lambda_{1}t}\big{\\{}\lambda_{1}|X_{t}-\bar{X}_{t}|^{2}\text{\rm{d}}t+\text{\rm{d}}|X_{t}-\bar{X}_{t}|^{2}\big{\\}}$
$\displaystyle\leq\lambda_{2}e^{\lambda_{1}t}W_{2}(P_{t}^{*}\nu,\bar{\mu})^{2}\text{\rm{d}}t.$
Then we obtain
$\displaystyle|X_{t}-\bar{X}_{t}|^{2}\leq
e^{-(\lambda_{1}-\lambda_{2})t}W_{2}(\nu,\bar{\mu})^{2},$
which implies that
$\displaystyle|X_{t}-\bar{X}_{t}|^{-1}\geq
e^{\frac{\lambda_{1}-\lambda_{2}}{2}t}W_{2}(\nu,\bar{\mu})^{-1}.$
Let $\alpha=\sup_{t\geq 0}\big{(}|X_{t}|^{2}+|\bar{X}_{t}|^{2}\big{)}$,
$\beta=W_{2}(\nu,\bar{\mu})$ and $\lambda=\frac{\lambda_{1}-\lambda_{2}}{2}$.
Then we have
$\displaystyle\int_{0}^{\infty}|A(X_{t})-A(\bar{X}_{t})|\text{\rm{d}}t\leq\frac{e+\alpha}{\log(e+\alpha)}\int_{0}^{\infty}\frac{\text{\rm{d}}t}{[\log(e+\beta^{-1}e^{\lambda
t})]^{p}}.$
Substituting $s=\beta^{-1}e^{\lambda t}$, we have
$\text{\rm{d}}t=\frac{\text{\rm{d}}s}{\lambda s}$, so that
$\displaystyle\int_{0}^{\infty}\frac{\text{\rm{d}}t}{[\log(e+\beta^{-1}e^{\lambda
t})]^{p}}$
$\displaystyle=\frac{1}{\lambda}\int_{\beta^{-1}}^{\infty}\frac{\text{\rm{d}}s}{s[\log(e+s)]^{p}}$
$\displaystyle\leq\frac{1}{\lambda}\int_{\beta^{-1}}^{\beta^{-1}+1}\frac{\text{\rm{d}}s}{s}+\frac{1}{\lambda}\int_{\beta^{-1}+1}^{\infty}\big{(}1+\frac{e}{1+\beta^{-1}}\big{)}\big{(}\log(e+s)\big{)}^{-p}\text{\rm{d}}{\log(e+s)}$
$\displaystyle\leq\frac{\log(1+\beta)}{\lambda}+\frac{1+e}{\lambda(p-1)}.$
Thus,
(3.7)
$\displaystyle\int_{0}^{\infty}|A(X_{t})-A(\bar{X}_{t})|\text{\rm{d}}t\leq\frac{1}{\lambda}\Big{\\{}\frac{(e+\alpha)(1+e)}{p-1}+J\Big{\\}},$
where $J:=\frac{e+\alpha}{\log(e+\alpha)}\cdot\log(e+\beta)$. Let
$h(\alpha)=\frac{e+\alpha}{\log(e+\alpha)}\cdot\log(e+\beta)-\alpha.$
When $\alpha\geq\beta$, we have $h^{\prime}(\alpha)\leq 0$, which implies that
$h$ decreases with respect to $\alpha$, and we obtain that $J\leq\alpha+e$.
When $0<\alpha<\beta$, letting $g(\alpha)=\frac{e+\alpha}{\log(e+\alpha)}$, we
have $g^{\prime}(\alpha)\geq 0$, so $g$ increases in $\alpha$
and hence
$\sup_{\alpha\in(0,\beta)}h(\alpha)\leq\frac{e+\beta}{\log(e+\beta)}\cdot\log(e+\beta)=e+\beta$.
Combining this with (3.7), we find a constant $C_{0}>0$ such that
$\displaystyle\int_{0}^{\infty}|A(X_{t})-A(\bar{X}_{t})|\text{\rm{d}}t$
$\displaystyle\leq C_{0}(e+\alpha+\beta)$
$\displaystyle=C_{0}\Big{\\{}\sup_{t>0}\\{|X_{t}|^{2}+|\bar{X}_{t}|^{2}\\}+W_{2}(\nu,\bar{\mu})+e\Big{\\}}.$
Since
$\mathbb{E}[e^{\delta|X_{0}|^{2}}]+\bar{\mu}(e^{\delta|\cdot|^{2}})<\infty$
for some $\delta>0$ and $\mathscr{L}_{\bar{X}_{t}}=\bar{\mu}$, (3.6) follows
if
(3.8)
$\displaystyle\mathbb{E}\Big{[}\sup_{t>0}e^{\delta|X_{t}|^{2}}\Big{]}<\infty$
holds for some $\delta>0$.
Indeed, (3.8) also holds with $\bar{X}_{t}$ in place of $X_{t}$, since when
$\mathscr{L}_{\bar{X}_{0}}=\bar{\mu}$, we have $\mathscr{L}_{(X_{t})_{t\geq
0}}=\mathscr{L}_{(\bar{X}_{t})_{t\geq 0}}$.
By (H1), there exists a constant $C_{1}>0$ such that
$\displaystyle\text{\rm{d}}\left(e^{(\lambda_{1}-\lambda_{2})t}|X_{t}|^{2}\right)\leq
C_{1}e^{(\lambda_{1}-\lambda_{2})t}\text{\rm{d}}t+2e^{(\lambda_{1}-\lambda_{2})t}\langle
X_{t},\sigma\text{\rm{d}}B_{t}\rangle.$
So,
$\displaystyle\delta|X_{t}|^{2}\leq\frac{\delta C_{1}}{\tilde{\lambda}}+\delta
e^{-\tilde{\lambda}t}|X_{0}|^{2}+2\delta
e^{-\tilde{\lambda}t}\int_{0}^{t}e^{\tilde{\lambda}s}\langle
X_{s},\sigma\text{\rm{d}}B_{s}\rangle,$
where $\tilde{\lambda}:=\lambda_{1}-\lambda_{2}$. Therefore, we obtain
$\displaystyle\mathbb{E}\Big{[}\sup_{0\leq s\leq
t}e^{\delta|X_{s}|^{2}}\Big{]}$ $\displaystyle\leq e^{\delta
C_{1}/{\tilde{\lambda}}}\mathbb{E}\Big{[}\mathbb{E}\Big{[}\sup_{0\leq s\leq
t}e^{\delta|X_{0}|^{2}}\cdot e^{2\delta
e^{-\tilde{\lambda}s}\int_{0}^{s}e^{\tilde{\lambda}u}\langle
X_{u},\sigma\text{\rm{d}}B_{u}\rangle}|\mathcal{F}_{0}\Big{]}\Big{]}$ (3.9)
$\displaystyle\leq e^{\delta
C_{1}/{\tilde{\lambda}}}\Big{(}\mathbb{E}\Big{[}e^{2\delta|X_{0}|^{2}}\Big{]}\Big{)}^{\frac{1}{2}}\cdot\Big{(}\mathbb{E}\Big{[}\sup_{0\leq
s\leq t}e^{4\delta
e^{-\tilde{\lambda}s}\int_{0}^{s}e^{\tilde{\lambda}u}\langle
X_{u},\sigma\text{\rm{d}}B_{u}\rangle}\Big{]}\Big{)}^{\frac{1}{2}}.$
By the BDG inequality, there exists a constant $C_{2}>0$, such that
$\displaystyle\tilde{J}:=\mathbb{E}\Big{[}\sup_{0\leq s\leq t}e^{4\delta
e^{-\tilde{\lambda}s}\int_{0}^{s}e^{\tilde{\lambda}u}\langle
X_{u},\sigma\text{\rm{d}}B_{u}\rangle}\Big{]}$ $\displaystyle\leq
C_{2}\Big{(}\mathbb{E}\Big{[}e^{16\delta^{2}e^{-2\tilde{\lambda}t}\int_{0}^{t}e^{2\tilde{\lambda}u}|\sigma^{*}X_{u}|^{2}\text{\rm{d}}u}\Big{]}\Big{)}^{\frac{1}{2}}$
$\displaystyle=C_{2}\Big{(}\mathbb{E}\Big{[}e^{16\delta^{2}\int_{0}^{t}\|\sigma\|^{2}|X_{s}|^{2}\frac{1-e^{-2\tilde{\lambda}t}}{2\tilde{\lambda}}\frac{2\tilde{\lambda}}{1-e^{-2\tilde{\lambda}t}}e^{-2\tilde{\lambda}(t-s)}\text{\rm{d}}s}\Big{]}\Big{)}^{\frac{1}{2}}.$
Since
$\lambda_{t}(\text{\rm{d}}s):=\frac{2\tilde{\lambda}}{1-e^{-2\tilde{\lambda}t}}e^{-2\tilde{\lambda}(t-s)}\text{\rm{d}}s$
is a probability measure on $[0,t]$, Jensen’s inequality gives
$\displaystyle\tilde{J}$ $\displaystyle\leq
C_{2}\Big{(}\mathbb{E}\Big{[}e^{\frac{16\delta^{2}(1-e^{-2\tilde{\lambda}t})}{2\tilde{\lambda}}\int_{0}^{t}\|\sigma\|^{2}|X_{s}|^{2}\lambda_{t}(\text{\rm{d}}s)}\Big{]}\Big{)}^{\frac{1}{2}}$
$\displaystyle\leq
C_{2}\Big{(}\mathbb{E}\Big{[}\int_{0}^{t}e^{\frac{8\delta^{2}\|\sigma\|^{2}|X_{s}|^{2}}{\tilde{\lambda}}}\lambda_{t}(\text{\rm{d}}s)\Big{]}\Big{)}^{\frac{1}{2}}.$
When $t\geq 1$, we have
$\displaystyle\tilde{J}\leq\frac{C_{2}^{2}}{4\tilde{\lambda}}+\tilde{\lambda}\int_{0}^{t}\mathbb{E}\Big{[}e^{\delta|X_{s}|^{2}-2\tilde{\lambda}(t-s)}\Big{]}\text{\rm{d}}s.$
Substituting this into (3.9) and applying Gronwall’s lemma, we obtain
(3.10) $\displaystyle\mathbb{E}\Big{[}\sup_{0\leq s\leq
t}e^{\delta|X_{s}|^{2}}\Big{]}$ $\displaystyle\leq
C_{3}\Big{(}1+\mathbb{E}\Big{[}\sup_{0\leq s\leq
1}e^{\delta|X_{s}|^{2}}\Big{]}\Big{)}$
for some constant $C_{3}>0$. Finally,
$\displaystyle\mathbb{E}\Big{[}\sup_{0\leq t\leq
1}e^{\delta|X_{t}|^{2}}\Big{]}\leq\Big{(}\mathbb{E}\Big{[}e^{2\delta|X_{0}|^{2}}\Big{]}\Big{)}^{\frac{1}{2}}\cdot
e^{c\delta\int_{0}^{1}(1+W_{2}(P_{t}^{*}\nu,\bar{\mu})^{2})\text{\rm{d}}t}\cdot\Big{(}\mathbb{E}\Big{[}\sup_{0\leq
t\leq 1}e^{4\delta\int_{0}^{t}\langle
X_{s},\sigma\text{\rm{d}}B_{s}\rangle}\Big{]}\Big{)}^{\frac{1}{2}}.$
By the exponential martingale inequality and Jensen’s inequality, we
obtain that there exists a constant $C_{\delta}$ such that
$\displaystyle\mathbb{E}\Big{[}\sup_{0\leq t\leq
1}e^{\delta|X_{t}|^{2}}\Big{]}$ $\displaystyle\leq
C_{\delta}\sqrt{e}\Big{(}\int_{0}^{1}\mathbb{E}\big{[}e^{16\delta^{2}\|\sigma\|^{2}|X_{s}|^{2}}\big{]}\text{\rm{d}}s\Big{)}^{\frac{1}{4}}.$
Taking $\delta\leq{1}/({16\|\sigma\|^{2}})$, we obtain that
$\mathbb{E}\Big{[}\sup_{0\leq t\leq 1}e^{\delta|X_{t}|^{2}}\Big{]}<\infty$.
This together with (3.10) implies that $\mathbb{E}\Big{[}\sup_{0\leq s\leq
t}e^{\delta|X_{s}|^{2}}\Big{]}<\infty$.
### 3.4 Proof of Theorem 2.4
Let
$\rho(\xi_{1},\xi_{2}):=\left(|\xi_{1}^{(1)}-\xi_{2}^{(1)}|^{2}+|\xi_{1}^{(2)}-\xi_{2}^{(2)}|^{2}\right)^{1/2}.$
We take $X_{0},Y_{0}\in
L^{2}(\Omega\rightarrow\mathbb{R}^{m+d},\mathcal{F}_{0},\mathbb{P})$ such that
$\mathscr{L}_{X_{0}}=\mu_{0},\mathscr{L}_{Y_{0}}=\nu_{0}$ and
$W_{2}(\mu_{0},\nu_{0})^{2}=\mathbb{E}\rho(X_{0},Y_{0})^{2}.$
Let $X_{t}=(X_{t}^{(1)},X_{t}^{(2)})$ and $Y_{t}=(Y_{t}^{(1)},Y_{t}^{(2)})$
solve (2.8) with initial values $X_{0}$ and $Y_{0}$ respectively. Obviously,
$X_{t}^{(1)}-Y_{t}^{(1)}$ and $X_{t}^{(2)}-Y_{t}^{(2)}$ solve the ODE
(3.11) $\displaystyle\left\\{\begin{aligned}
\text{\rm{d}}(X_{t}^{(1)}-Y_{t}^{(1)})&=\Big{(}A(X_{t}^{(1)}-Y_{t}^{(1)})+B(X_{t}^{(2)}-Y_{t}^{(2)})\Big{)}\text{\rm{d}}t,\\\
\text{\rm{d}}(X_{t}^{(2)}-Y_{t}^{(2)})&=\Big{(}Z(X_{t},\mathscr{L}_{X_{t}})-Z(Y_{t},\mathscr{L}_{Y_{t}})\Big{)}\text{\rm{d}}t.\end{aligned}\right.$
Since $r_{0}\in(-\|B\|^{-1},\|B\|^{-1})$, for any $r>0$ there exists a
constant $C>1$ such that
$\displaystyle\frac{1}{C}\big{(}|X_{t}^{(1)}-Y_{t}^{(1)}|^{2}+|X_{t}^{(2)}-Y_{t}^{(2)}|^{2}\big{)}$
$\displaystyle\leq\Psi_{t}:=\frac{r^{2}}{2}|X_{t}^{(1)}-Y_{t}^{(1)}|^{2}+\frac{1}{2}|X_{t}^{(2)}-Y_{t}^{(2)}|^{2}+rr_{0}\langle
X_{t}^{(1)}-Y_{t}^{(1)},B(X_{t}^{(2)}-Y_{t}^{(2)})\rangle$ $\displaystyle\leq
C\big{(}|X_{t}^{(1)}-Y_{t}^{(1)}|^{2}+|X_{t}^{(2)}-Y_{t}^{(2)}|^{2}\big{)}.$
Combining this with (3.11) and (D3), we obtain
$\displaystyle\text{\rm{d}}\Psi_{t}$
$\displaystyle\leq\big{\\{}-\theta_{1}\big{(}|X_{t}^{(1)}-Y_{t}^{(1)}|^{2}+|X_{t}^{(2)}-Y_{t}^{(2)}|^{2}\big{)}+\theta_{2}W_{2}(P_{t}^{*}\mu_{0},P_{t}^{*}\nu_{0})^{2}\big{\\}}\text{\rm{d}}t.$
By the chain rule, we have
$\displaystyle\text{\rm{d}}(e^{\lambda t}\Psi_{t})$ $\displaystyle\leq
e^{\lambda
t}\big{\\{}\lambda\Psi_{t}-\theta_{1}\big{(}|X_{t}^{(1)}-Y_{t}^{(1)}|^{2}+|X_{t}^{(2)}-Y_{t}^{(2)}|^{2}\big{)}+\theta_{2}W_{2}(P_{t}^{*}\mu_{0},P_{t}^{*}\nu_{0})^{2}\big{\\}}\text{\rm{d}}t,$
Thus we obtain
$\displaystyle\mathbb{E}\Psi_{t}$ $\displaystyle\leq e^{-\lambda
t}\mathbb{E}\Psi_{0}-e^{-\lambda t}\int_{0}^{t}e^{\lambda
s}(\theta_{1}-\theta_{2}-\lambda
C)\mathbb{E}\big{[}|X_{s}^{(1)}-Y_{s}^{(1)}|^{2}+|X_{s}^{(2)}-Y_{s}^{(2)}|^{2}\big{]}\text{\rm{d}}s.$
Taking $\lambda=\frac{\theta_{1}-\theta_{2}}{2C}$, we obtain
$\displaystyle\mathbb{E}\Psi_{t}$ $\displaystyle\leq
e^{-\frac{\theta_{1}-\theta_{2}}{2C}t}\mathbb{E}\Psi_{0},$
and we deduce that
$\displaystyle W_{2}(P_{t}^{*}\mu_{0},P_{t}^{*}\nu_{0})^{2}$
$\displaystyle\leq\mathbb{E}[\rho(X_{t},Y_{t})^{2}]$ $\displaystyle\leq
Ce^{-\frac{\theta_{1}-\theta_{2}}{2C}t}W_{2}(\mu_{0},\nu_{0})^{2}.$
Consequently, $P_{t}^{*}$ has a unique invariant probability measure
$\bar{\mu}$ such that (2.9) holds.
Next, let $\mathscr{L}_{\bar{X}_{0}}=\bar{\mu}$ and consider the reference
stochastic Hamiltonian system for
$\bar{X}_{t}=(\bar{X}_{t}^{(1)},\bar{X}_{t}^{(2)})$ on $\mathbb{R}^{m+d}$:
(3.12) $\displaystyle\left\\{\begin{aligned}
\text{\rm{d}}\bar{X}_{t}^{(1)}&=(A\bar{X}_{t}^{(1)}+B\bar{X}_{t}^{(2)})\text{\rm{d}}t,\\\
\text{\rm{d}}\bar{X}_{t}^{(2)}&=Z(\bar{X}_{t},\bar{\mu})\text{\rm{d}}t+M\text{\rm{d}}B_{t}.\end{aligned}\right.$
According to [19, Theorem 3.1], $\bar{L}_{t}^{A}\in\textrm{MDP}(I)$ for
$I(y)={y^{2}}/({8\bar{V}(A)})$. Since $A$ is Lipschitz continuous, by (2.9),
we can find some small $\delta>0$ such that (3.2) holds. Therefore, the proof
is finished by Theorem 3.2.
#### Acknowledgement.
The authors would like to thank Professor Feng-Yu Wang for supervision.
## References
* [1] P. A. Baldi, _Large deviations and stochastic homogenisation_ , Ann. Mat. Pura Appl. 151(1988), 161–177.
* [2] A. A. Borovkov, A. A. Mogulskii, _Probabilities of large deviations in topological vector space I_ , Siberian Math. J. 19(1978), 697–709.
* [3] A. A. Borovkov, A. A. Mogulskii, _Probabilities of large deviations in topological vector space II_ , Siberian Math. J. 21(1980), 12–26.
* [4] J. Bao, F.-Y. Wang, C. Yuan, _Limit theorems for additive functionals of path-dependent SDEs_ , Discrete Contin. Dyn. Syst. 40(2020), 5173–5188.
* [5] X. Chen, _The moderate deviations of independent random vectors in a Banach space_ , Chinese J. Appl. Probab. Statist. 7(1991), 24–32.
* [6] P. Cattiaux, P. Dai Pra, S. Roelly, _A constructive approach to a class of ergodic HJB equations with unbounded and nonsmooth cost_ , SIAM J. Control Optim. 47(2008), 2598–2615.
* [7] M. D. Donsker, S. R. S. Varadhan, _Asymptotic evaluation of certain Markov process expectations for large time, I-IV_ , Comm. Pure Appl. Math. 28(1975), 1–47, 279–301; 29(1976), 389–461; 36(1983), 183–212.
* [8] A. Dembo, O. Zeitouni, _Large Deviations Techniques and Applications_ , Second Edition, Springer, New York. 1998.
* [9] F. Gao, _Long time asymptotics of unbounded additive functionals of Markov processes_ , Electron. J. Probab. 22(2017), 1–21.
* [10] X. Huang, P. Ren, F.-Y. Wang, _Distribution Dependent Stochastic Differential Equation_ , arXiv:2012.13656.
* [11] K. Itô, M. Nisio, _On stationary solutions of a stochastic differential equation_ , J. Math. Kyoto Univ. 4(1964), 1–75.
* [12] I. Kontoyiannis, S. P. Meyn, _Spectral theory and limit theorems for geometrically ergodic Markov processes_ , Ann. Appl. Probab. 13(2003), 304–362.
* [13] P. Ren, F.-Y. Wang, _Donsker-Varadhan Large Deviations for Path-Distribution Dependent SPDEs_ , arXiv:2002.08652.
* [14] M. Röckner, F.-Y. Wang, L. Wu, _Large deviations for stochastic generalized porous media equations_ , Stoch. Proc. Appl. 116(2006), 1677–1689.
* [15] D. W. Stroock, S. R. S. Varadhan, _Multidimensional Diffusion Processes_ , Springer, New York. 1979.
* [16] F.-Y. Wang, _Harnack inequality for SDE with multiplicative noise and extension to Neumann semigroup on nonconvex manifolds_ , Ann. Probab. 39(2011), 1449–1467.
* [17] F.-Y. Wang, _Hypercontractivity and applications for stochastic Hamiltonian systems_ , J. Funct. Anal. 272(2017), 5360–5383.
* [18] F.-Y. Wang, _Distribution dependent SDEs for Landau type equations._ Stoch. Proc. Appl. 128(2018), 595–621.
* [19] F.-Y. Wang, Y. Zhang, _Application of Harnack inequality to long time asymptotics of Markov processes (in Chinese)_ , Sci. Sin. Math. 49(2019), 505–516.
* [20] L. Wu, _Moderate deviations of dependent random variables related to CLT_ , Ann. Probab. 23(1995), 420–445.
* [21] L. Wu, _Uniformly integrable operators and large deviations for Markov processes_ , J. Funct. Anal. 172(2000), 301–376.
# Neural Relational Inference with Efficient Message Passing Mechanisms
Siyuan Chen, Jiahai Wang (corresponding author), Guoqing Li
###### Abstract
Many complex processes can be viewed as dynamical systems of interacting
agents. In many cases, only the state sequences of individual agents are
observed, while the interacting relations and the dynamical rules are unknown.
The neural relational inference (NRI) model adopts graph neural networks that
pass messages over a latent graph to jointly learn the relations and the
dynamics based on the observed data. However, NRI infers the relations
independently and suffers from error accumulation in multi-step prediction at
dynamics learning procedure. Besides, relation reconstruction without prior
knowledge becomes more difficult in more complex systems. This paper
introduces efficient message passing mechanisms to the graph neural networks
with structural prior knowledge to address these problems. A relation
interaction mechanism is proposed to capture the coexistence of all relations,
and a spatio-temporal message passing mechanism is proposed to use historical
information to alleviate error accumulation. Additionally, the structural
prior knowledge, symmetry as a special case, is introduced for better relation
prediction in more complex systems. The experimental results on simulated
physics systems show that the proposed method outperforms existing state-of-the-art methods.
## Introduction
Many complex processes in natural and social areas including multi-agent
systems (Yang et al. 2018; Li et al. 2020), swarm systems (Oliveira et al.
2020), physical systems (Ha and Jeong 2020; Bapst et al. 2020) and social
systems (Almaatouq et al. 2020; Zhang et al. 2020) can be viewed as dynamical
systems of interacting agents. Revealing the underlying interactions and
dynamics can help us understand, predict, and control the behavior of these
systems. However, in many cases, only the state sequences of individual agents
are observed, while the interacting relations and the dynamical rules are
unknown.
A lot of works (Hoshen 2017; van Steenkiste et al. 2018; Watters et al. 2017)
use implicit interaction models to learn the dynamics. These models can be
regarded as graph neural networks (GNNs) over a fully-connected graph, and the
implicit interactions are modeled via message passing operations (Watters et
al. 2017) or attention mechanisms (Hoshen 2017). Compared with modeling
implicit interactions, modeling the explicit interactions offers a more
interpretable way to understand the dynamical systems. A motivating example is
shown in Fig. 1 (Kipf et al. 2018), where a dynamical system consists of 5
particles linked by invisible springs. It is of interest to infer the
relations among the particles and predict their future states. Kipf et al.
(2018) propose the neural relational inference (NRI) model, a variational
auto-encoder (VAE) (Kingma and Welling 2014), to jointly learn explicit
interactions and dynamical rules in an unsupervised manner.
Figure 1: A dynamical system consisting of 5 particles that are linked by
invisible springs. The interacting relations and future states are to be
predicted based on the observed state sequences.
Currently, there are three main limitations of NRI. First, the interacting
relations are inferred independently, while the coexistence of these relations
is not considered. Alet et al. (2019) tackle this problem by taking all
relations as a whole and iteratively improving the prediction through modular
meta-learning. However, this method is computationally costly and it is
limited to small-scale systems. Second, NRI predicts multiple steps into the
future to emphasize the effect of the interactions, which leads to error
accumulation and prevents the model from reconstructing the dynamics
accurately. Third, as the scale and the complexity of systems increase, it is
difficult to infer the interactions solely based on the observed data, while
incorporating structural prior knowledge can help better explore the
structural space and improve the precision of relation recovery.
To address the problems above, this paper introduces efficient message passing
mechanisms with structural prior knowledge for neural relational inference.
For relation reconstruction, a relation interaction mechanism is introduced to
capture the dependencies among different relations. For dynamics
reconstruction, a spatio-temporal message passing mechanism is introduced to
utilize historical information to alleviate the error accumulation of multi-
step prediction. Additionally, the prior knowledge about the relations is
incorporated as a regularization term in the loss function to impose soft
constraints, and as a special case, the symmetry of relations is taken into
consideration.
The contributions of this work are summarized as follows.
* •
Efficient message passing mechanisms are introduced to make a joint prediction
of relations and alleviate the error accumulation of multi-step state
prediction.
* •
The prior knowledge about relations, symmetry as a special case, is introduced
to better reconstruct relations on more complex dynamical systems.
* •
Extensive experiments on physics simulation datasets are conducted. The
results show the superiority of our method by comparing it with the state-of-
the-art methods.
## Background
### Neural Relational Inference (NRI)
NRI (Kipf et al. 2018) is an unsupervised model that infers interacting
structure from observational data and learns system dynamics. NRI takes the
form of VAE, where the encoder infers the interacting relations, and the
decoder predicts the future states of individual agents.
Specifically, the encoder adopts GNNs with multiple rounds of message passing,
and infers the distribution of potential interactions based on the input
state,
$\displaystyle\mathbf{h}$ $\displaystyle=f_{\text{enc}}(\mathbf{x}),$ (1)
$\displaystyle q_{\phi}(\mathbf{z}|\mathbf{x})$
$\displaystyle=\text{softmax}(\mathbf{h}),$ (2)
where $\mathbf{x}=(\mathbf{x}_{1}^{1:T},...,\mathbf{x}_{N}^{1:T})$ is the
observed trajectories of $N$ objects in the system in $T$ time steps,
$f_{\text{enc}}$ is a GNN acting on the fully-connected graph, and
$q_{\phi}(\mathbf{z}|\mathbf{x})$ is the factorized distribution of edge type
$\mathbf{z}$.
Since directly sampling edges from $q_{\phi}(\mathbf{z}|\mathbf{x})$, a
discrete distribution, is a non-differentiable process, back-propagation
cannot be used. To solve this problem, NRI uses the Gumbel-Softmax trick
(Maddison, Mnih, and Teh 2017), which simulates a differentiable sampling
process from a discrete distribution using a continuous function, i.e.,
$\mathbf{z}_{ij}=\text{softmax}\left((\mathbf{h}_{ij}+\mathbf{g})/\tau\right),$
(3)
where $\mathbf{z}_{ij}$ represents the edge type between the nodes $v_{i}$ and
$v_{j}$, $\mathbf{g}\in\mathbb{R}^{K}$ is a vector of i.i.d. samples from a
$\text{Gumbel}(0,1)$ distribution and the temperature $\tau$ is a parameter
controlling the “smoothness” of the samples.
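The relaxed sampling of Eq. (3) can be sketched in NumPy as follows; the function name and the inverse-CDF construction of the Gumbel noise are our own illustration, not code from the paper.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Sample a relaxed one-hot edge-type vector as in Eq. (3).

    logits: unnormalized scores (the h_ij of the paper) over K edge types;
    tau: temperature controlling the "smoothness" of the sample.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-10, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))                # g ~ Gumbel(0, 1) via inverse CDF
    y = (np.asarray(logits) + g) / tau
    y = y - y.max(axis=-1, keepdims=True)  # numerically stable softmax
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)
```

With a small `tau` the output concentrates near a one-hot vector while remaining differentiable with respect to the logits.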
According to the inferred relations and past state sequences, the decoder uses
another GNN that models the effect of interactions to predict the future
state,
$p_{\theta}(\mathbf{x}|\mathbf{z})={\textstyle\prod_{t=1}^{T}}p_{\theta}(\mathbf{x}^{t+1}|\mathbf{x}^{1:t},\mathbf{z}),$
(4)
where $p_{\theta}(\mathbf{x}^{t+1}|\mathbf{x}^{1:t},\mathbf{z})$ is the
conditional likelihood of $\mathbf{x}^{t+1}$.
As a variational auto-encoder model, NRI is trained to maximize the evidence
lower bound,
$\mathcal{L}=\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[{\rm{log}}\
p_{\theta}(\mathbf{x}|\mathbf{z})]-\text{KL}[q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z})],$
(5)
where the prior $p_{\theta}(\mathbf{z})={\textstyle\prod_{i\neq
j}}p_{\theta}(\mathbf{z}_{ij})$ is a factorised uniform distribution over edge
types. In the right-hand side of Eq. (5), the first term is the expected
reconstruction error, while the second term encourages the approximate
posterior $q_{\phi}(\mathbf{z}|\mathbf{x})$ to approach the prior
$p_{\theta}(\mathbf{z})$.
### Message Passing Mechanisms in GNNs
GNNs are a widely used class of neural networks that operate on
graph-structured data via message passing mechanisms (Gilmer et al. 2017). For a
graph $\mathcal{G=(V,E)}$ with vertices $v\in\mathcal{V}$ and edges
$e=(v,v^{\prime})\in\mathcal{E}$, the node-to-edge ($v\\!\to\\!e$) and
edge-to-node ($e\\!\to\\!v$) message passing operations are defined as follows,
$\displaystyle v\\!\rightarrow\\!e$ $\displaystyle:$
$\displaystyle\mathbf{h}_{(i,j)}^{l}$
$\displaystyle=f_{e}^{l}\left(\left[\mathbf{h}_{i}^{l},\mathbf{h}_{j}^{l},\mathbf{r}_{(i,j)}\right]\right),$
(6) $\displaystyle e\\!\to\\!v$ $\displaystyle:$
$\displaystyle\mathbf{h}_{j}^{l+1}$ $\displaystyle={\textstyle
f_{v}^{l}\left(\left[\sum_{i\in\mathcal{N}_{j}}\mathbf{h}_{(i,j)}^{l},\mathbf{x}_{j}\right]\right)},$
(7)
where $\mathbf{h}_{i}^{l}$ is the embedding of node $v_{i}$ in the $l$-th
layer of the GNNs, $\mathbf{r}_{(i,j)}$ is the feature of edge $e_{ij}$ (e.g.
edge type), and $\mathbf{h}_{(i,j)}^{l}$ is the embedding of edge $e_{ij}$ in the
$l$-th layer of the GNNs. $\mathcal{N}_{j}$ denotes the set of indices of
neighboring nodes with an incoming edge connected to vertex $v_{j}$, and
$[\cdot,\cdot]$ denotes the concatenation of vectors. $f_{e}$ and $f_{v}$ are
edge-specific and node-specific neural networks, such as multilayer
perceptrons (MLPs), respectively. Node embeddings are mapped to edge embeddings
in Eq. (6) and vice versa in Eq. (7).
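One round of the two operations in Eqs. (6)-(7) can be sketched as below; the callables `f_e` and `f_v` are placeholders standing in for the networks $f_e^l$ and $f_v^l$, and every node is assumed to have at least one incoming edge.

```python
import numpy as np

def message_passing_round(h, edges, r, f_e, f_v, x):
    """One round of node-to-edge (Eq. 6) then edge-to-node (Eq. 7) passing.

    h: (N, d) node embeddings; edges: list of directed (i, j) pairs;
    r: dict mapping (i, j) to edge features; f_e, f_v: callables standing
    in for the edge and node networks; x: (N, d_x) raw node inputs.
    """
    N = h.shape[0]
    # Eq. (6): edge embedding from the two endpoint embeddings + edge feature
    h_edge = {(i, j): f_e(np.concatenate([h[i], h[j], r[(i, j)]]))
              for (i, j) in edges}
    # Eq. (7): node update from the sum over incoming edges and the raw input
    h_next = np.stack([
        f_v(np.concatenate([
            sum(h_edge[(i, j)] for (i, j) in edges if j == jj),
            x[jj],
        ]))
        for jj in range(N)
    ])
    return h_edge, h_next
```

The dictionary of edge embeddings makes the node-to-edge/edge-to-node alternation explicit, at the cost of efficiency; a real implementation would batch these operations.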
## Method
The structure of our method is shown in Fig. 2. Our method follows the
framework of VAE in NRI. In the encoder, a relation interaction mechanism is
used to capture the dependencies among the latent edges for a joint relation
prediction. In the decoder, a spatio-temporal message passing mechanism is
used to incorporate historical information for alleviating the error
accumulation in multi-step prediction. As both mechanisms mentioned above can
be regarded as Message Passing Mechanisms in GNNs for Neural Relational
Inference, our method is named NRI-MPM. Additionally, the structural prior
knowledge, symmetry as a special case, is incorporated as a regularization
term in the loss function to improve relation prediction in more complex
systems.
Figure 2: Overview of NRI-MPM. The encoder
$q_{\phi}\left(\mathbf{z}|\mathbf{x}\right)$ uses the state sequences
$\mathbf{x}$ to generate relation embeddings, and applies a relation
interaction mechanism to jointly predict the relations. The structural prior
knowledge of symmetry is imposed as a soft constraint for relation prediction.
The decoder takes the predicted relations $\mathbf{z}$ and the historical
state sequences $\mathbf{x}^{1:t}$ to predict the change in state
$\Delta\mathbf{x}^{t}$.
### Relation Interaction Mechanism
The encoder is aimed at inferring the edge types $\mathbf{z}_{ij}$ based on
the observed state sequences
$\mathbf{x}=\left(\mathbf{x}^{1},\dots,\mathbf{x}^{T}\right)$. From the
perspective of message passing, the encoder defines a node-to-edge message
passing operation at a high level. As shown in Eqs. (8)-(11), NRI first maps
the observed state $\mathbf{x}_{j}$ to a latent vector $\mathbf{h}^{1}_{j}$,
and then applies two rounds of node-to-edge and one round of edge-to-node
message passing alternately to obtain the edge embeddings
$\mathbf{h}_{(i,j)}^{2}$ that integrate both local and global information.
Then, the $\mathbf{h}_{(i,j)}^{2}$ are used to predict the pairwise
interaction types independently.
$\displaystyle\mathbf{h}_{j}^{1}$
$\displaystyle=f_{\text{emb}}(\mathbf{x}_{j}),$ (8) $\displaystyle
v\\!\to\\!e:$ $\displaystyle\mathbf{h}_{(i,j)}^{1}$
$\displaystyle=f^{1}_{e}([\mathbf{h}^{1}_{i},\mathbf{h}^{1}_{j}]),$ (9)
$\displaystyle e\\!\to\\!v:$ $\displaystyle\mathbf{h}_{j}^{2}$
$\displaystyle={\textstyle f^{1}_{v}\left(\sum_{i\neq
j}\mathbf{h}_{(i,j)}^{1}\right)},$ (10) $\displaystyle v\\!\to\\!e:$
$\displaystyle\mathbf{h}_{(i,j)}^{2}$
$\displaystyle=f^{2}_{e}([\mathbf{h}^{2}_{i},\mathbf{h}^{2}_{j}]).$ (11)
However, the relations are generally dependent on each other since they
jointly affect the future states of individual agents. Within the original
formulations of GNNs, Eqs. (8)-(11) cannot effectively model the
dependencies among edges. Typical GNNs are designed to learn node embeddings,
while the edge embeddings are treated as a transient part of the computation.
To capture the coexistence of all relations, this paper introduces an edge-to-
edge ($e\\!\to\\!e$) message passing operation that directly passes messages
among edges, named the relation interaction mechanism,
$e\\!\to\\!e:\left\\{\mathbf{e}_{(i,j)}:(v_{i},v_{j})\in\mathcal{E}\right\\}=g_{e}\left(\left\\{\mathbf{h}^{2}_{(i,j)}:(v_{i},v_{j})\in\mathcal{E}\right\\}\right).$
(12)
Ideally, this operation includes modeling the pairwise dependencies among all
edges, which is computationally costly as its time complexity is
$O(|\mathcal{E}|^{2})$. Alternatively, as shown in Fig. 3, our method
decomposes this operation into two sub-operations, intra-edge interaction and
inter-edge interaction operations, for modeling the interactions among
incoming edges of the same node and those among the incoming edges of
different nodes, respectively.
The intra-edge interaction operation is defined as follows,
$e\\!\to\\!e:\left\\{\mathbf{e}^{1}_{(i,j)}:i\in\mathcal{N}_{j}\right\\}=g^{\text{intra}}_{e}\left(\left\\{\mathbf{h}^{2}_{(i,j)}:i\in\mathcal{N}_{j}\right\\}\right).$
(13)
Figure 3: Relation interaction mechanism. Given the edge embeddings
$\mathbf{h}_{(i,j)}^{2}$, the intra-edge and inter-edge interaction operations
are used to model the interactions among incoming edges of the same node and
those among the incoming edges of different nodes, respectively. The resulting
embeddings $\mathbf{e}^{1}_{(i,j)}$ and $\mathbf{e}^{2}_{j}$ are concatenated
to obtain the final edge representations $\mathbf{e}_{(i,j)}$.
From the above definition, $g^{\text{intra}}_{e}$ is required to be
permutation equivariant (Lee et al. 2019) to preserve the correspondences
between the input and output edge embeddings. Formally, let
$\mathcal{I}_{j}=\left\\{i_{k}\right\\}^{|\mathcal{N}_{j}|}_{k=1}$ be the
ascending sequence of $\mathcal{N}_{j}$ and $\mathcal{S}_{j}$ be the set of
all permutations on $\mathcal{I}_{j}$. For any permutation
$\pi\in\mathcal{S}_{j}$,
$\pi(\mathbf{h}^{2}_{(\mathcal{I}_{j},j)})=\mathbf{h}^{2}_{(\pi(\mathcal{I}_{j}),j)}$.
$g^{\text{intra}}_{e}$ is said to be permutation equivariant if
$\pi(\mathbf{e}^{1}_{(\mathcal{I}_{j},j)})=g^{\text{intra}}_{e}\circ\pi(\mathbf{h}^{2}_{(\mathcal{I}_{j},j)})$,
where $\circ$ denotes the composition of functions. Although permutation
invariant operators such as sum and max aggregators are widely used in GNNs
(Xu et al. 2019), the design for permutation equivariant operators is less
explored. Inspired by Hamilton et al. (2017), this paper treats a set as an
unordered sequence and defines $g^{\text{intra}}_{e}$ as a sequence model over
it, followed by an inverse permutation to restore the order, i.e.,
$\displaystyle
e\\!\to\\!e:\quad\mathbf{e}^{1}_{(\mathcal{I}_{j},j)}=\pi^{-1}\circ
S\circ\pi\left(\mathbf{h}^{2}_{(\mathcal{I}_{j},j)}\right),$ (14)
where $S$ is a sequence model such as recurrent neural networks (RNNs) and
convolutional neural networks (CNNs). When $S$ is implemented by self-
attention, $g^{\text{intra}}_{e}$ is permutation equivariant, which may not
hold when $S$ is an RNN. In this case, $g^{\text{intra}}_{e}$ is expected to
be approximately permutation equivariant with a well trained $S$.
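The $\pi^{-1}\circ S\circ\pi$ construction of Eq. (14) can be sketched as follows; the function name is ours, and `seq_model` is a placeholder for $S$.

```python
import numpy as np

def intra_edge_interaction(h_in, perm, seq_model):
    """Eq. (14): apply pi^{-1} . S . pi to the incoming-edge embeddings
    of one node.

    h_in: (m, d) embeddings of the node's incoming edges, in index order;
    perm: an (m,)-permutation used to order them for the sequence model;
    seq_model: callable mapping an (m, d) sequence to an (m, d) sequence,
    standing in for S (e.g. an RNN over the unordered sequence).
    """
    inv = np.argsort(perm)               # inverse permutation pi^{-1}
    return seq_model(h_in[perm])[inv]
```

With an elementwise `seq_model` the output equals the input for any permutation, which shows that the inverse permutation restores the original edge correspondence; an order-sensitive $S$ (such as an RNN) is only approximately equivariant, as noted above.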
Similarly, one can define an inter-edge interaction operation. The difference
is that this operation treats all incoming edges of a node as a whole. For
simplicity, this paper applies mean-pooling to the edge embeddings to get an
overall representation. Formally, the inter-edge interaction operation is
defined to be the composition of the following steps,
$\displaystyle e\\!\to\\!v:$ $\displaystyle\mathbf{e}^{1}_{j}$
$\displaystyle=\text{Mean}\left(\mathbf{e}^{1}_{(1:N,j)}\right),$ (15)
$\displaystyle v\\!\to\\!v:$
$\displaystyle\left\\{\mathbf{e}^{2}_{j}\right\\}^{N}_{j=1}$
$\displaystyle=g^{\text{inter}}_{e}\left(\left\\{\mathbf{e}^{1}_{j}\right\\}^{N}_{j=1}\right),$
(16)
where $g^{\text{inter}}_{e}$ is a node-to-node ($v\\!\to\\!v$) operation that
passes messages among nodes and takes a similar form of
$g^{\text{intra}}_{e}$. To analyze the time complexity of relation
interaction, this paper assumes that all sequence models involved are RNNs.
Since $g^{\text{intra}}_{e}$ and Mean can be applied to each node in parallel
and the time complexities of Eqs. (14)-(16) are all $O(N)$, the overall time
complexity of relation interaction is $O(N)$, which is much more efficient
than the pairwise interaction.
Finally, an MLP is used to unify the results of the two operations, and the
predicted edge distribution is defined as
$\displaystyle\mathbf{e}_{(i,j)}$
$\displaystyle=\text{MLP}([\mathbf{e}^{1}_{(i,j)},\mathbf{e}^{2}_{j}]),$ (17)
$\displaystyle q_{\phi}(\mathbf{z}_{ij}|\mathbf{x})$
$\displaystyle=\text{softmax}(\mathbf{e}_{(i,j)}).$ (18)
As in Eq. (3), $\mathbf{z}_{ij}$ is sampled via the Gumbel-Softmax trick to
allow back-propagation.
### Spatio-Temporal Message Passing Mechanism
The decoder is aimed at predicting the future state $\mathbf{x}^{t+1}$ using
the inferred relations $\mathbf{z}$ and the historical states
$\mathbf{x}^{1:t}$. Since the interactions can have only a small effect on
short-term dynamics, NRI predicts multiple steps into the future to avoid a
degenerate decoder that ignores the effect of interactions. In multi-step
prediction, the ground truth state $\mathbf{x}_{j}^{t}$ is replaced by the
predicted value $\boldsymbol{\mu}_{j}^{t}$ after the first step, i.e.,
$\displaystyle\boldsymbol{\mu}_{j}^{t+1}$
$\displaystyle=f_{\text{dec}}(\mathbf{x}_{j}^{t}),$ (19)
$\displaystyle\boldsymbol{\mu}_{j}^{t+m}$
$\displaystyle=f_{\text{dec}}(\boldsymbol{\mu}_{j}^{t+m-1}),\qquad\qquad
m=2,\dots,M$ (20)
where $f_{\text{dec}}$ denotes the decoder, and $M(M\geq 1)$ is the number of
time steps to predict. This means that errors in the prediction process
accumulate over $M$ steps. As the decoder of NRI only uses the current state
to predict future states, it is difficult to avoid error accumulation.
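The rollout of Eqs. (19)-(20) and its compounding error can be sketched as follows; `f_dec` is a placeholder for the decoder.

```python
import numpy as np

def rollout(f_dec, x_t, M):
    """M-step prediction as in Eqs. (19)-(20): only the first step sees the
    ground-truth state x^t; later steps feed the model's own output back in,
    so any one-step error compounds over the M steps.

    f_dec: callable standing in for the decoder; x_t: current state array.
    """
    preds = [f_dec(x_t)]                 # Eq. (19)
    for _ in range(M - 1):               # Eq. (20), m = 2, ..., M
        preds.append(f_dec(preds[-1]))
    return np.stack(preds)
```

For instance, a decoder that overshoots each state by 1% per step yields a relative error that grows like $1.01^{M}$ over the rollout, which is the accumulation effect described above.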
Apparently, the current state of the interacting system is related to the
previous states, and thus, incorporating historical information can help learn
the dynamical rules and alleviate the error accumulation in multi-step
prediction. To this end, sequence models are used to capture the non-linear
correlation between previous states and the current state. GNNs combined with
sequence models can be used to capture spatio-temporal information, which has
been widely used in traffic flow forecasting (Li et al. 2018; Guo et al.
2019). In this paper, the sequence model can be composed of one or more of
CNNs, RNNs, attention mechanisms, etc.
Inspired by interaction networks (Battaglia et al. 2016) and graph networks
(Sanchez-Gonzalez et al. 2018), sequence models are added to the original
node-to-edge and edge-to-node operations to obtain their spatio-temporal
versions. In this way, the decoder can integrate the spatio-temporal
interaction information at different fine-grained levels, i.e., the node level
and the edge level, which can help better learn the dynamical rules.
As shown in Fig. 4, the decoder contains a node-to-edge and an edge-to-node
spatio-temporal message passing operations. The node-to-edge spatio-temporal
message passing operation is defined as
$\displaystyle v\\!\to\\!e:\quad\hat{\mathbf{e}}_{(i,j)}^{t}$
$\displaystyle=\sum_{k}z_{ij,k}\hat{f}_{e}^{k}([\mathbf{x}_{i}^{t},\mathbf{x}_{j}^{t}]),$
(21) $\displaystyle\hat{\mathbf{h}}_{(i,j)}^{t+1}$
$\displaystyle=S_{\text{edge}}\left([\hat{\mathbf{e}}_{(i,j)}^{t},\hat{\mathbf{h}}_{(i,j)}^{1:t}]\right),$
(22)
where $z_{ij,k}$ is the $k$-th element of the vector $\mathbf{z}_{ij}$,
representing the edge type $k$, whose effect is modeled by an MLP
$\hat{f}_{e}^{k}$. For each edge $e_{ij}$, the effect of all potential
interactions are aggregated into $\hat{\mathbf{e}}_{(i,j)}^{t}$ as a weighted
sum. Its concatenation with the previous hidden states
$\hat{\mathbf{h}}_{(i,j)}^{1:t}$ is fed to $S_{\text{edge}}$ to generate the
future hidden state of interactions $\hat{\mathbf{h}}_{(i,j)}^{t+1}$ that
captures the temporal dependencies at edge level.
Figure 4: The structure of the decoder. The decoder takes the interacting
relations $\mathbf{z}$, the current state $\mathbf{x}^{t}$, the historical
hidden states $\hat{\mathbf{h}}^{1:t}_{(i,j)}$ and
$\hat{\mathbf{h}}^{1:t}_{j}$ as inputs to predict the change in state
$\Delta\mathbf{x}^{t}$. Sequence models are added to the message passing
operations to jointly capture the spatio-temporal dependencies. Elements that
are currently updated are highlighted in blue, e.g.,
$\hat{\mathbf{e}}_{(i,j)}^{t}$ is an edge embedding updated in the node-to-
edge ($v\\!\to\\!e$) message passing operation.
Similarly, the edge-to-node spatio-temporal message passing operation is
defined as
$\displaystyle e\\!\to\\!v:\text{MSG}_{j}^{t}$
$\displaystyle={\textstyle\sum_{i\neq j}\hat{\mathbf{h}}_{(i,j)}^{t+1}},$ (23)
$\displaystyle\hat{\mathbf{h}}_{j}^{t+1}$
$\displaystyle=S_{\text{node}}\left([\text{MSG}_{j}^{t},\mathbf{x}_{j}^{t}],\hat{\mathbf{h}}_{j}^{1:t}\right).$
(24)
For each node $v_{j}$, the spatial dependencies
$\hat{\mathbf{h}}_{(i,j)}^{t+1}$ are aggregated into $\text{MSG}_{j}^{t}$.
$\text{MSG}_{j}^{t}$ together with the current state $\mathbf{x}_{j}^{t}$ and
the previous hidden states $\hat{\mathbf{h}}_{j}^{1:t}$ are fed to
$S_{\text{node}}$ to generate the future hidden state of nodes
$\hat{\mathbf{h}}_{j}^{t+1}$ that captures the temporal dependencies at node
level.
Finally, the predicted future state is defined as
$\displaystyle\Delta\mathbf{x}_{j}^{t}$
$\displaystyle=\hat{f}_{v}\left(\hat{\mathbf{h}}_{j}^{t+1}\right),$ (25)
$\displaystyle\boldsymbol{\mu}_{j}^{t+1}$
$\displaystyle=\mathbf{x}_{j}^{t}+\Delta\mathbf{x}_{j}^{t},$ (26)
$\displaystyle p_{\theta}(\mathbf{x}^{t+1}|\mathbf{x}^{1:t},\mathbf{z})$
$\displaystyle=\mathcal{N}(\mathbf{x}^{t+1}|\boldsymbol{\mu}^{t+1},\sigma^{2}\mathbf{I}),$
(27)
where $\hat{f}_{v}$ is an MLP predicting the change in state
$\Delta\mathbf{x}_{j}^{t}$, and $\sigma^{2}$ is a fixed variance. Note that
since both $\hat{\mathbf{h}}^{1:t}_{(i,j)}$ and $\hat{\mathbf{h}}^{1:t}_{j}$
rely on the historical states $\mathbf{x}^{1:t}$, the decoder is implicitly a
function of $\mathbf{x}^{1:t}$.
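The edge-to-node update and the skip connection of Eqs. (24)-(26) can be sketched as follows; `s_node` and `f_v` are placeholder callables standing in for $S_{\text{node}}$ and $\hat{f}_{v}$, and the aggregated messages of Eq. (23) are assumed to be given.

```python
import numpy as np

def decoder_step(x_t, msg, h_node, s_node, f_v):
    """Edge-to-node update and state prediction (Eqs. (24)-(26)).

    x_t: (N, d) current states; msg: (N, d_m) aggregated edge messages
    MSG_j^t from Eq. (23); h_node: (N, d_h) previous node hidden states.
    """
    inp = np.concatenate([msg, x_t], axis=-1)
    h_next = s_node(inp, h_node)         # Eq. (24): new node hidden state
    delta = f_v(h_next)                  # Eq. (25): predicted change in state
    return x_t + delta, h_next           # Eq. (26): mu^{t+1} = x^t + dx^t
```

Predicting the change $\Delta\mathbf{x}^{t}$ rather than the state itself keeps the decoder close to an identity map when the dynamics are slow.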
### Structural Prior Knowledge
As the scale and complexity of systems increase, it becomes more difficult to
infer relations solely based on the state sequences of individual agents.
Therefore, it is desirable to incorporate possible structural prior knowledge,
such as sparsity (Kipf et al. 2018), symmetry and node degree distribution (Li
et al. 2019). Since symmetric relations widely exist in physical dynamical
systems, the symmetry of relations is studied as a special case. Li et al.
(2019) impose a hard constraint on symmetry, i.e., setting
$\mathbf{z}_{ji}=\mathbf{z}_{ij}$. However, the hard constraint may limit the
exploration of the model during the training procedure, and sometimes it may
lead to a decrease of the prediction precision. By contrast, this paper
imposes a soft constraint by adding a regularization term to the original loss
function.
Specifically, an auxiliary distribution
$q^{\prime}_{\phi}(\mathbf{z}|\mathbf{x})$ is introduced as the “transpose” of
the predicted relation distribution $q_{\phi}(\mathbf{z}|\mathbf{x})$, namely,
$q^{\prime}_{\phi}(\mathbf{z}_{ij}|\mathbf{x})=q_{\phi}(\mathbf{z}_{ji}|\mathbf{x}).$
(28)
Then, the Kullback-Leibler divergence between
$q^{\prime}_{\phi}(\mathbf{z}|\mathbf{x})$ and
$q_{\phi}(\mathbf{z}|\mathbf{x})$ is used as a regularization term for
symmetry, i.e.,
$\mathcal{L}^{\prime}=\mathcal{L}-\lambda\cdot\text{KL}\left[q^{\prime}_{\phi}(\mathbf{z}|\mathbf{x})||q_{\phi}(\mathbf{z}|\mathbf{x})\right],$
(29)
where $\lambda$ is a penalty factor for the symmetric prior.
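The regularization term of Eqs. (28)-(29) can be sketched as follows; the function name and the dense $(N, N, K)$ layout are our own illustration.

```python
import numpy as np

def symmetry_penalty(q):
    """KL[q' || q] of Eqs. (28)-(29), where q'_{ij} = q_{ji}.

    q: (N, N, K) array with q[i, j] the edge-type distribution of edge
    (i, j); diagonal entries carry no edge and are masked out.
    """
    n = q.shape[0]
    qt = np.transpose(q, (1, 0, 2))      # the "transposed" distribution q'
    mask = ~np.eye(n, dtype=bool)        # off-diagonal edges only
    p, r = qt[mask], q[mask]
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(r + 1e-12))))
```

A perfectly symmetric prediction gives a penalty of zero; since $\mathcal{L}$ is maximized, subtracting $\lambda$ times this term pushes the predicted relations toward symmetry without hard-constraining them.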
Notations used in this paper, details of the computation of
$\mathcal{L}^{\prime}$ and the pseudo code of NRI-MPM are shown in Appendix A,
Appendix B and Appendix C, respectively.
## Experiments
All methods are tested on three types of simulated physical systems: particles
connected by springs, charged particles and phase-coupled oscillators, named
Springs, Charged and Kuramoto, respectively. For the Springs and Kuramoto
datasets, objects do or do not interact with equal probability. For the
Charged datasets, objects attract or repel with equal probability. For each
type of system, a 5-object dataset and a 10-object dataset are simulated. All
datasets contain 50k training samples, and 10k validation and test samples.
Further details on data generation can be found in (Kipf et al. 2018). All
methods are evaluated w.r.t. two metrics, the accuracy of relation
reconstruction and the mean squared error (MSE) of future state prediction.
For our method, the sequence models used in both encoders and decoders are
composed of gated recurrent units (GRUs) (Cho et al. 2014), except that
attention mechanisms are added to the decoder in the Kuramoto datasets (see
Appendix D). The penalty factor $\lambda$ is set to $10^{2}$ for all datasets
except that it is set to 1 and $10^{3}$ for the 5-object and 10-object
Kuramoto datasets, respectively (see Appendix E).
### Baselines
To evaluate the performance of our method, we compare it with several
competitive methods as follows.
* •
Correlation (Kipf et al. 2018): a baseline that predicts the relations between
two particles based on the correlation of their state sequences.
* •
LSTM (Kipf et al. 2018): an LSTM that takes the concatenation of all state
vectors and predicts all future states simultaneously.
* •
NRI (Kipf et al. 2018): the neural relational inference model that jointly
learns the relations and the dynamics with a VAE.
* •
SUGAR (Li et al. 2019): a method that introduces structural prior knowledge
such as hard symmetric constraint and node degree distribution for relational
inference.
* •
ModularMeta (Alet et al. 2019): a method that solves the relational inference
problem via modular meta-learning.
To compare the performance of our method with “gold standard” methods, i.e.,
those trained given the ground truth relations, this paper introduces two
variants as follows.
* •
Supervised: a variant of our method that only trains an encoder with the
ground truth relations.
* •
NRI-MPM (true graph): a variant of our method that only trains a decoder given
the ground truth relations.
The codes of NRI (https://github.com/ethanfetaya/nri) and
ModularMeta (https://github.com/FerranAlet/modular-metalearning) are public
and thus directly used in our experiments. SUGAR is implemented by ourselves
according to the original paper. Note that Correlation and Supervised are only
designed for relation reconstruction, while LSTM and NRI-MPM (true graph) are
only designed for state prediction.
To verify the effectiveness of the proposed message passing mechanisms and the
structural prior knowledge, this paper introduces some variants of our method
as follows.
* •
NRI-MPM w/o RI: a variant of our method without the relation interaction
mechanism.
* •
NRI-MPM w/o intra-RI, NRI-MPM w/o inter-RI: variants of our method without the
intra- and inter-edge interaction operations, respectively.
* •
NRI-MPM w/o ST: a variant of our method without the spatio-temporal message
passing mechanism.
* •
NRI-MPM w/o Sym, NRI-MPM w/ hard Sym: variants of our method without the
symmetric prior and that with hard symmetric constraint, respectively.
### Comparisons with Baselines
Table 1: Accuracy (%) of relation reconstruction.
Model | Springs | Charged | Kuramoto
---|---|---|---
5 objects
Correlation | $52.4\pm 0.0$ | $55.8\pm 0.0$ | $62.8\pm 0.0$
NRI | $\mathbf{99.9}\pm 0.0$ | $82.1\pm 0.6$ | $96.0\pm 0.1$
SUGAR | $\mathbf{99.9}\pm 0.0^{*}$ | $82.9\pm 0.8^{*}$ | $91.8\pm 0.1^{*}$
ModularMeta | $\mathbf{99.9}$ | $88.4$ | $96.2\pm 0.3^{*}$
NRI-MPM | $\mathbf{99.9}\pm 0.0$ | $\mathbf{93.3}\pm 0.5$ | $\mathbf{97.3}\pm 0.2$
Supervised | $99.9\pm 0.0$ | $95.4\pm 0.1$ | $99.3\pm 0.0$
10 objects
Correlation | $50.4\pm 0.0$ | $51.4\pm 0.0$ | $59.3\pm 0.0$
NRI | $98.4\pm 0.0$ | $70.8\pm 0.4$ | $75.7\pm 0.3$
SUGAR | $98.3\pm 0.0^{*}$ | $72.0\pm 0.9^{*}$ | $74.0\pm 0.2^{*}$
ModularMeta | $98.8\pm 0.0^{*}$ | $63.8\pm 0.1^{*}$ | $\mathbf{89.6}\pm 0.1^{*}$
NRI-MPM | $\mathbf{99.1}\pm 0.0$ | $\mathbf{81.6}\pm 0.2$ | $80.3\pm 0.6$
Supervised | $99.4\pm 0.0$ | $89.7\pm 0.1$ | $94.3\pm 0.8$
* *
The results in these datasets are unavailable in the original paper, and they
are obtained by running the codes provided by the authors.
Table 2: Mean squared error in predicting future states for simulations with 5
interacting objects.
Datasets | Springs | Charged | Kuramoto
---|---|---|---
Predictions steps | 1 | 10 | 20 | 1 | 10 | 20 | 1 | 10 | 20
LSTM | 4.13e-8 | 2.19e-5 | 7.02e-4 | 1.68e-3 | 6.45e-3 | 1.49e-2 | 3.44e-4 | 1.29e-2 | 4.74e-2
NRI | 3.12e-8 | 3.29e-6 | 2.13e-5 | 1.05e-3 | 3.21e-3 | 7.06e-3 | 1.40e-2 | 2.01e-2 | 3.26e-2
SUGAR | 3.71e-8∗ | 3.86e-6∗ | 1.53e-5∗ | 1.18e-3∗ | 3.43e-3∗ | 7.38e-3∗ | 2.12e-2∗ | 9.45e-2∗ | 1.83e-1∗
ModularMeta | 3.13e-8 | 3.25e-6 | - | 1.03e-3 | 3.11e-3 | - | 2.35e-2∗ | 1.10e-1∗ | 1.96e-1∗
NRI-MPM | 8.89e-9 | 5.99e-7 | 2.52e-6 | 7.29e-4 | 2.57e-3 | 5.41e-3 | 1.57e-2 | 2.73e-2 | 5.36e-2
NRI-MPM (true graph) | 1.60e-9 | 9.06e-9 | 1.50e-7 | 8.06e-4 | 2.51e-3 | 5.66e-3 | 1.73e-2 | 2.49e-2 | 4.09e-2
* *
Results in these datasets are unavailable in the original paper, and they are
obtained by running the codes provided by the authors.
Figure 5: Visualization of the ground truth states (b) together with the
predicted states for LSTM (a) and NRI-MPM (c) in the 5-object Kuramoto
dataset.
The results of relation reconstruction and future state prediction in the
5-object datasets (for MSEs in the 10-object datasets, see Appendix E) are
shown in Table 1 and Table 2, respectively. Our method significantly
outperforms all baselines in nearly all datasets in terms of both accuracy and
MSE. In the Springs datasets, the accuracies of NRI, SUGAR, ModularMeta and
our method are all comparable with the supervised baseline, while our method
achieves significantly lower MSEs in the 5-object systems. This indicates that
our method can better learn the relations and dynamics simultaneously. In the
Charged datasets, our method outperforms the baselines by 4.9%-9.6% in terms
of accuracy. The reason may be that the charged systems are densely connected
since a charged particle interacts with all other particles, and our method
can better handle this situation. SUGAR achieves higher accuracies than NRI,
suggesting the effectiveness of structural prior knowledge. Still, our method
performs better with the extra help of the proposed message passing
mechanisms.
In the Kuramoto datasets, our method does not dominate all the baselines. It
achieves higher accuracies than NRI, while its MSEs are larger.
ModularMeta achieves higher accuracy than our method in the 10-object system,
while its MSEs are much poorer than those of all other methods. A possible
reason is that the interactions among objects in the Kuramoto dataset are
relatively weak (Kuramoto 1975), making it more difficult to infer the
relations based on the observed states, and the situation worsens as the
scale of systems increases. ModularMeta
infers all relations as a whole, and modifies its predictions with the help of
simulated annealing, which may help better search the structural space to
achieve better relation predictions. However, this does not translate to
better state prediction. According to Kuramoto (1975), Kuramoto is a
synchronization system where the object states converge to a certain set of
values in the long run, so the interactions may be less helpful for state
prediction in this case.
Figure 6: Running times of different methods.
#### Qualitative Analysis of Future State Prediction
One observes that LSTM achieves lower MSEs for short-term prediction in the
5-object Kuramoto dataset, but its performance declines for long-term
prediction. To further understand the predictive behavior of LSTM and our
method, we conduct a qualitative analysis by visualizing the predicted states
together with the ground truth states in 49 steps, and the results are shown
in Fig. 5. From Fig. 5(a), LSTM can capture the shape of the sinusoidal
waveform but fails to make accurate prediction for time steps larger than 40
(e.g., curves in green and purple). By contrast, as shown in Fig. 5(c), the
predicted states of our method closely match the ground truth states except
for those in the last few time steps for the third particle, whose curve is
colored in green. This suggests that our method can better capture the
interactions among particles that affect the long-term dynamics. Note that this result is
consistent with that reported in Appendix A.1 in the original paper of NRI.
Figure 7: Predicted relations of different methods.
#### Running Time of Different Methods
The running times of different methods in a single epoch are reported in Fig.
6. It can be seen that our method requires somewhat more time than NRI, a
natural consequence of a more complex model. The running time of SUGAR is
comparable with that of our method. It is worth noting that ModularMeta
requires much more running time than the others, and the situation becomes
more severe in larger systems, possibly because meta-learning the proposal
function is computationally costly. These results show that our method can achieve better
performance with a smaller additional cost of computation.
### Ablation Study
Figure 8: MSEs of multi-step prediction.
The ablation studies are conducted in the 10-object Charged dataset, and the
results are shown in Table 3.
Table 3: Ablation study in the 10-object Charged dataset.
Method | Accuracy | MSE (1 step) | MSE (10 steps) | MSE (20 steps)
---|---|---|---|---
NRI-MPM w/o RI | $77.5\pm 0.2$ | 8.07e-4 | 4.13e-3 | 1.13e-2
NRI-MPM w/o intra-RI | $80.5\pm 0.4$ | 7.83e-4 | 3.90e-3 | 1.26e-2
NRI-MPM w/o inter-RI | $78.3\pm 0.7$ | 7.71e-4 | 4.11e-3 | 1.13e-2
NRI-MPM w/o ST | $74.0\pm 0.5$ | 1.26e-3 | 4.92e-3 | 1.38e-2
NRI-MPM w/o Sym | $79.3\pm 0.7$ | 7.60e-4 | 3.77e-3 | 1.06e-2
NRI-MPM w/ hard Sym | $80.4\pm 0.5$ | 7.96e-4 | 3.83e-3 | 1.04e-2
NRI-MPM | $\textbf{81.6}\pm 0.2$ | 7.52e-4 | 3.80e-3 | 1.03e-2
#### Effect of Relation Interaction Mechanism
As shown in Table 3, the accuracy drops significantly when the relation
interaction mechanism is removed, verifying its effectiveness in relation
prediction. In addition, including the mechanism decreases the MSEs,
indicating that more accurate relation prediction can help with future state
prediction. Moreover, the contribution of the inter-edge interaction is higher
than that of the intra-edge interaction. Intuitively, the intra- and
inter-edge operations capture local and global interactions, respectively, so
global interactions appear to be more informative than local interactions in
this dataset.
To gain an intuitive understanding of the effect of the relation interaction
mechanism, we conduct a case study on the 10-object charged particle systems.
The distributions of predicted relations of NRI-MPM, NRI-MPM w/o RI and NRI
together with the ground truth relations are visualized in Fig. 7. The two
types of relations are highlighted in red and green, respectively, while the
diagonal elements are all in white, as there are no self-loops. It is known that
two particles with the same charge repel each other while those with the
opposite charge attract each other. Consequently, for any particles
$v_{i},v_{j}$, and $v_{k}$, the relations $e_{ij}$ and $e_{ik}$ are
correlated. As shown in Fig. 7, our method can model the dependencies among
all relations much better than NRI, while removing the relation interaction
mechanism results in less consistent prediction.
#### Effect of Spatio-temporal Message Passing Mechanism
As shown in Table 3, the MSEs increase by removing the spatio-temporal message
passing mechanism, and the differences are more significant for $M=10$ and
$M=20$, verifying that the spatio-temporal message passing mechanism can
alleviate error accumulation in multi-step prediction. Furthermore, the MSEs
of different methods that predict 40 steps into the future are shown in Fig.
8. Note that the differences among the methods narrow for large $M$; it may
still be challenging for all of them to handle error accumulation in the long
run.
Interestingly, the decrease of NRI-MPM w/o ST in terms of accuracy is more
significant than that of NRI-MPM w/o RI. A possible explanation is that the
spatio-temporal message passing mechanism imposes strong dependencies on
historical interactions, which indirectly helps with relation reconstruction.
#### Effect of Symmetric Prior
Figure 9: The accuracy and the rate of asymmetry w.r.t. $\lambda$.
As shown in Table 3, without the symmetric prior, the accuracy of NRI-MPM
decreases by 2.3%, while the MSEs are on par with the original model,
indicating that the symmetric prior can help with relation reconstruction
without hurting the precision of future state prediction. Compared with NRI-
MPM w/ hard Sym, our method achieves higher accuracy with lower MSEs. This may
be because the hard constraint of symmetry limits the exploration of the model
during training, while a soft constraint provides more flexibility.
The effect of the penalty factor $\lambda$ is shown in Fig. 9. The rate of
asymmetry decreases significantly as $\lambda$ increases, while the accuracy
increases steadily and peaks around $\lambda=10^{2}$, verifying that adjusting
$\lambda$ can control the effect of the symmetric prior and reasonable values
will benefit relation reconstruction.
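As a concrete illustration of the soft constraint, the symmetric prior can be implemented as a penalty term added to the training loss, weighted by the factor $\lambda$. The sketch below is a minimal numpy version under assumed shapes (per-pair relation logits of shape (N, N, K) over K relation types); the names and layout are illustrative, not the authors' implementation:

```python
import numpy as np

def soft_symmetry_penalty(logits, lam):
    """Penalize asymmetric relation predictions: lam * mean((P - P^T)^2).

    logits: array of shape (N, N, K), relation logits for each ordered pair
    of objects over K relation types (illustrative layout, not the paper's).
    lam: the penalty factor lambda of the soft symmetric prior.
    """
    # softmax over relation types, numerically stabilized
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    # compare the prediction for edge (i, j) with that for (j, i)
    diff = probs - probs.transpose(1, 0, 2)
    return lam * np.mean(diff ** 2)
```

A hard symmetry constraint would instead tie the two directions together (e.g. average the logits for (i, j) and (j, i) before the softmax), whereas this penalty only discourages asymmetry, leaving the model free to explore during training.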
## Related Work
This paper is part of an emerging direction of research attempting to model
the explicit interactions in dynamical systems using neural networks as in NRI
(Kipf et al. 2018).
Most closely related are the papers of Alet et al. (2019), Li et al. (2019),
Webb et al. (2019) and Zhang et al. (2019). Alet et al. (2019) frame this
problem as a modular meta-learning problem to jointly infer the relations and
use the data more effectively. To deal with more complex systems, Li et al.
(2019) incorporate various structural prior knowledge as a complement to the
observed states of agents. Webb et al. (2019) extend NRI to multiplex
interaction graphs. Zhang et al. (2019) explore relational inference over a
wider class of dynamical systems, such as discrete systems and chaotic
systems, assuming a shared interaction graph for all state sequences, which is
different from the experimental settings of Kipf et al. (2018) and Alet et al.
(2019). Compared with this line of work, our method focuses on introducing
efficient message passing mechanisms to enrich the representative power of
NRI.
Many recent works seek to extend the message passing mechanisms of GNNs. Zhu
et al. (2020) define a bilinear aggregator to incorporate the possible
interactions among all neighbors of a given node. Brockschmidt (2020) defines
node aware transformations over messages to impose feature-wise modulation.
Nevertheless, these variants treat the messages as a transient part of the
computation of the node embeddings, whereas our relation interaction mechanism
is aimed at learning edge embeddings. Herzig et al. (2019) use non-local
operations to capture the interactions among all relations, which requires
quadratic time complexity.
Besides, many works extend GNNs to handle structured time series. Graph
convolutional networks with CNNs (Yu, Yin, and Zhu 2018), GRUs (Li et al.
2018), or attention mechanisms (Guo et al. 2019) are introduced to deal with
spatial and temporal dependencies separately for traffic flow forecasting.
Furthermore, spatio-temporal graphs (Song et al. 2020) and spatio-temporal
attention mechanisms (Zheng et al. 2020) are proposed to capture complex
spatio-temporal correlations. Our methods borrow these ideas from traffic flow
forecasting to define spatio-temporal message passing operations for neural
relational inference.
## Conclusion
This paper introduces efficient message passing mechanisms with structural
prior knowledge for neural relational inference. The relation interaction
mechanism can effectively capture the coexistence of all relations and help
make a joint prediction. By incorporating historical information, the spatio-
temporal message passing mechanism can effectively alleviate error
accumulation in multi-step state prediction. Additionally, the structural
prior knowledge, symmetry as a special case, can promote the accuracy of
relation reconstruction in more complex systems. The results of extensive
experiments on simulated physics systems validate the effectiveness of our
method.
Currently, only simple yet effective implementations using GRUs and attention
mechanisms are adopted for the sequence models. Future work includes
introducing more advanced models like the Transformers to further improve the
performance. Besides, current experiments are conducted on simulated systems
over static and homogeneous graphs. Future work includes extending our method
to systems over dynamic and heterogeneous graphs. Furthermore, the proposed
method will be applied to study the mechanism of the emergence of intelligence
in different complex systems, including multi-agent systems, swarm systems,
physical systems and social systems.
## Acknowledgements
This work is supported by the National Key R&D Program of China
(2018AAA0101203), and the National Natural Science Foundation of China
(62072483, 61673403, U1611262). This work is also supported by MindSpore.
## References
* Alet et al. (2019) Alet, F.; Weng, E.; Lozano-Pérez, T.; and Kaelbling, L. P. 2019. Neural Relational Inference with Fast Modular Meta-learning. In _NeurIPS_ , 11827–11838.
* Almaatouq et al. (2020) Almaatouq, A.; Noriega-Campero, A.; Alotaibi, A.; Krafft, P. M.; Moussaid, M.; and Pentland, A. 2020. Adaptive Social Networks Promote the Wisdom of Crowds. _PNAS_ 117(21): 11379–11386.
* Bapst et al. (2020) Bapst, V.; Keck, T.; Grabska-Barwińska, A.; Donner, C.; Cubuk, E. D.; Schoenholz, S. S.; Obika, A.; Nelson, A. W. R.; Back, T.; Hassabis, D.; and Kohli, P. 2020. Unveiling the Predictive Power of Static Structure in Glassy Systems. _Nature Physics_ 16(4): 448–454.
* Battaglia et al. (2016) Battaglia, P.; Pascanu, R.; Lai, M.; Rezende, D. J.; and kavukcuoglu, K. 2016. Interaction Networks for Learning about Objects, Relations and Physics. In _NeurIPS_ , 4509–4517.
* Brockschmidt (2020) Brockschmidt, M. 2020. GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation. In _ICML_.
* Cho et al. (2014) Cho, K.; Van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning Phrase Representations Using RNN Encoder–Decoder for Statistical Machine Translation. In _EMNLP_ , 1724–1734.
* Gilmer et al. (2017) Gilmer, J.; Schoenholz, S. S.; Riley, P. F.; Vinyals, O.; and Dahl, G. E. 2017. Neural Message Passing for Quantum Chemistry. In _ICML_ , 1263–1272.
* Guo et al. (2019) Guo, S.; Lin, Y.; Feng, N.; Song, C.; and Wan, H. 2019. Attention Based Spatial-Temporal Graph Convolutional Networks for Traffic Flow Forecasting. In _AAAI_ , 922–929.
* Ha and Jeong (2020) Ha, S.; and Jeong, H. 2020. Towards Automated Statistical Physics: Data-Driven Modeling of Complex Systems with Deep Learning. _arXiv preprint arXiv:2001.02539_ .
* Hamilton, Ying, and Leskovec (2017) Hamilton, W.; Ying, Z.; and Leskovec, J. 2017. Inductive Representation Learning on Large Graphs. In _NeurIPS_ , 1024–1034.
* Herzig et al. (2019) Herzig, R.; Levi, E.; Xu, H.; Gao, H.; Brosh, E.; Wang, X.; Globerson, A.; and Darrell, T. 2019. Spatio-Temporal Action Graph Networks. In _ICCVW_.
* Hoshen (2017) Hoshen, Y. 2017. VAIN: Attentional Multi-agent Predictive Modeling. In _NeurIPS_ , 2701–2711.
* Kingma and Welling (2014) Kingma, D. P.; and Welling, M. 2014. Auto-Encoding Variational Bayes. In _ICLR_.
* Kipf et al. (2018) Kipf, T.; Fetaya, E.; Wang, K.-C.; Welling, M.; and Zemel, R. 2018. Neural Relational Inference for Interacting Systems. In _ICML_ , 2688–2697.
* Kuramoto (1975) Kuramoto, Y. 1975. Self-Entrainment of A Population of Coupled Non-Linear Oscillators. In _International Symposium on Mathematical Problems in Theoretical Physics_ , 420–422. Springer.
* Lee et al. (2019) Lee, J.; Lee, Y.; Kim, J.; Kosiorek, A.; Choi, S.; and Teh, Y. W. 2019. Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks. In _ICML_ , 3744–3753.
* Li et al. (2020) Li, J.; Yang, F.; Tomizuka, M.; and Choi, C. 2020. EvolveGraph: Multi-Agent Trajectory Prediction with Dynamic Relational Reasoning. In _NeurIPS_.
* Li et al. (2019) Li, Y.; Meng, C.; Shahabi, C.; and Liu, Y. 2019. Structure-Informed Graph Auto-Encoder for Relational Inference and Simulation. In _ICML Workshop on Learning and Reasoning with Graph-Structured Representations_.
* Li et al. (2018) Li, Y.; Yu, R.; Shahabi, C.; and Liu, Y. 2018. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. In _ICLR_.
* Maddison, Mnih, and Teh (2017) Maddison, C. J.; Mnih, A.; and Teh, Y. W. 2017. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. In _ICLR_.
* Oliveira et al. (2020) Oliveira, M.; Pinheiro, D.; Macedo, M.; Bastos-Filho, C.; and Menezes, R. 2020. Uncovering the Social Interaction Network in Swarm Intelligence Algorithms. _Applied Network Science_ 5(1): 1–20.
* Sanchez-Gonzalez et al. (2018) Sanchez-Gonzalez, A.; Heess, N.; Springenberg, J. T.; Merel, J.; Riedmiller, M.; Hadsell, R.; and Battaglia, P. 2018. Graph Networks as Learnable Physics Engines for Inference and Control. In _ICML_ , 4467–4476.
* Song et al. (2020) Song, C.; Lin, Y.; Guo, S.; and Wan, H. 2020. Spatial-Temporal Synchronous Graph Convolutional Networks: A New Framework for Spatial-Temporal Network Data Forecasting. _AAAI_ 914–921.
* van Steenkiste et al. (2018) van Steenkiste, S.; Chang, M.; Greff, K.; and Schmidhuber, J. 2018. Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions. In _ICLR_.
* Watters et al. (2017) Watters, N.; Zoran, D.; Weber, T.; Battaglia, P.; Pascanu, R.; and Tacchetti, A. 2017. Visual Interaction Networks: Learning A Physics Simulator from Video. In _NeurIPS_ , 4539–4547.
* Webb et al. (2019) Webb, E.; Day, B.; Andres-Terre, H.; and Lió, P. 2019. Factorised Neural Relational Inference for Multi-Interaction Systems. _ICML Workshop on Learning and Reasoning with Graph-Structured Representations_ .
* Xu et al. (2019) Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. 2019. How Powerful are Graph Neural Networks. In _ICLR_.
* Yang et al. (2018) Yang, Y.; Yu, L.; Bai, Y.; Wen, Y.; Zhang, W.; and Wang, J. 2018. A Study of AI Population Dynamics with Million-agent Reinforcement Learning. In _AAMAS_ , 2133–2135.
* Yu, Yin, and Zhu (2018) Yu, B.; Yin, H.; and Zhu, Z. 2018. Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. In _IJCAI_ , 3634–3640.
* Zhang et al. (2020) Zhang, J.; Wang, W.; Xia, F.; Lin, Y.-R.; and Tong, H. 2020. Data-Driven Computational Social Science: A Survey. _Big Data Research_ 100145\.
* Zhang et al. (2019) Zhang, Z.; Zhao, Y.; Liu, J.; Wang, S.; Tao, R.; Xin, R.; and Zhang, J. 2019. A General Deep Learning Framework for Network Reconstruction and Dynamics Learning. _Applied Network Science_ 4(1): 1–17.
* Zheng et al. (2020) Zheng, C.; Fan, X.; Wang, C.; and Qi, J. 2020. GMAN: A Graph Multi-Attention Network for Traffic Prediction. In _AAAI_ , 1234–1241.
* Zhu et al. (2020) Zhu, H.; Feng, F.; He, X.; Wang, X.; Li, Y.; Zheng, K.; and Zhang, Y. 2020. Bilinear Graph Neural Network with Neighbor Interactions. In _IJCAI_ , 1452–1458.
See main_appendix.pdf
# Chiral Condensate in Two Dimensional Models
V.G. Ksenzov State Scientific Center Institute for Theoretical and
Experimental Physics, Moscow, 117218, Russia
###### Abstract
We investigate two different models. In one of them massive fermions interact
with a massive scalar field, and in the other the fermion field is placed in
an electrical field (QED2). The chiral condensates are calculated in the
one-loop approximation. We find that in the case of the Yukawa interaction
between the fermions and the scalar field, the chiral condensate does not
vanish as the mass of the fermion field tends to zero. In QED2 the chiral
condensate disappears if the fermion mass is zero.
## I Introduction
Dynamical symmetry breaking is described by a quantity known as the order
parameter. In models with a broken chiral symmetry the chiral condensate is
the order parameter, which vanishes if the chiral symmetry is restored. The
investigation of the chiral condensate plays a crucial role in attempts to
describe phase transitions related to dynamical chiral symmetry breaking.
In QCD the chiral condensate is investigated via lattice simulations. This
approach is usually employed in studies of the chiral condensate in the
presence of external factors such as temperature, chemical potential, magnetic
fields and magnetic monopoles, etc (see Refs. BCL –RS and references
therein).
Apart from numerical studies there are a lot of models employed for the
investigation of the chiral condensate in various physical systems BBM –JLK ,
NJL –KR2 . In the first paper on this subject Nambu and Jona-Lasinio (NJL)
analyzed a specific field model in four dimensions NJL . Later, the chiral
condensate was studied by Gross and Neveu (GN) in two-dimensional spacetime in
the limit of a large number of fermion flavors $N$ GN . These two models are
similar, but in contrast to the NJL model, the GN model is a renormalizable
and asymptotically free theory. Due to these properties, the GN model is used
for qualitative modeling of QCD. The relative simplicity of both models is a
consequence of the quartic fermion interaction.
In our previous papers we investigated a system of a self-interacting massive
scalar field and massless fermion fields with the Yukawa interaction in
$(1+1)$-dimensional spacetime. In the limit of a large mass of the scalar
field, the model is equivalent to the GN model. The chiral condensate was
obtained from the path integral using the stationary-phase method KR –KR2 .
In this paper we present a study of two models. One of them is a model with
massive fermions and massive scalars with the Yukawa interaction between these
two fields. It is worth noting that the scalar is not a self-interacting
field. The other model is QED in two-dimensional spacetime.
The purpose of this paper is to obtain the chiral condensate in the QED2
model. To do this we must obtain an effective action of the model. Evaluating
the one-loop effective potential is equivalent to summing an infinite class of
Feynman diagrams, and we are therefore unable to calculate the fermionic
determinant for an arbitrary vector potential $A_{\mu}(x)$. For this reason we
begin by studying the first model, which we use to investigate the effective
potential and the chiral condensate when the discrete chiral symmetry is
explicitly broken. This model exhibits the essential features of the
techniques that we use to construct the effective action of QED2. In
particular, we can see how the chiral symmetry breaking manifests itself in
the effective potential. As a result, we find the chiral condensate in the
electrical field.
## II Fermions in a scalar field
The model that will be discussed in this section involves $N$ massive fermion
fields and a massive scalar field with the Yukawa interaction between these
two fields in $(1+1)$-dimensional spacetime.
The Lagrangian of the model is
$L=L_{b}+L_{f}=\frac{1}{2}(\partial_{\mu}\phi)^{2}-\frac{1}{2}\mu^{2}\phi^{2}(x)+i\bar{\psi}^{a}\not\partial\psi^{a}-g\left(\phi(x)-\frac{m}{g}\right)\bar{\psi}^{a}\psi^{a},$
(1)
here $\phi(x)$ is a real scalar field, $\psi^{a}(x)$ are fermion fields, the
index $a$ runs from 1 to $N\gg 1$, and $m$ is the mass of the fermions.
The model with massless fermions was investigated in our previous papers
KR –KR2 . If $m=0$, the Lagrangian is invariant under the discrete
symmetry
$\psi^{a}\to\gamma_{5}\psi^{a},\;\bar{\psi}^{a}\to-\bar{\psi}^{a}\gamma_{5},\;\phi\to-\phi,$
(2)
which is broken by the chiral condensate. If $m\neq 0$, the discrete symmetry
(2) disappears, and the chiral condensate is no longer a qualitative criterion.
As in our previous papers, we determine the chiral condensate by means of an
effective potential. For this purpose we formally define the chiral
correlator, using a functional integral in Minkowski space
$\left\langle 0|g\bar{\psi}^{a}\psi^{a}|0\right\rangle=\frac{1}{Z}\int D\phi
D\bar{\psi}^{a}D\psi^{a}g\bar{\psi}^{a}\psi^{a}\exp\left(i\int
d^{2}xL(x)\right),$ (3)
here $Z$ is a normalization constant. The chiral correlator (3) is rewritten
as
$\left\langle 0|g\bar{\psi}^{a}\psi^{a}|0\right\rangle=\frac{1}{Z}\int
D\phi\exp\left({i\int d^{2}xL_{b}(x)}\right)i\frac{\delta}{\delta\phi}\int
D\bar{\psi}^{a}D\psi^{a}\exp\left(i\int d^{2}xL_{f}(x)\right).$ (4)
The fermionic Lagrangian is quadratic in the field and we can integrate over
them, getting
$\left\langle 0|g\bar{\psi}^{a}\psi^{a}|0\right\rangle=\frac{1}{Z}\int
D\phi\frac{g^{2}N\left(\phi(x)-\frac{m}{g}\right)}{2\pi}\ln\frac{g^{2}\left(\phi(x)-\frac{m}{g}\right)^{2}}{\Lambda^{2}}\times$
$\times\exp\left(i\int
d^{2}x\left(\frac{1}{2}(\partial_{\mu}\phi)^{2}-\frac{\mu^{2}}{2}\phi^{2}-\frac{Ng^{2}\left(\phi(x)-\frac{m}{g}\right)^{2}}{4\pi}\left(\ln\frac{g^{2}\left(\phi(x)-\frac{m}{g}\right)^{2}}{\Lambda^{2}}-1\right)\right)\right),$
(5)
here $\Lambda$ is the ultraviolet cutoff.
We want to obtain the chiral condensate in the framework of one-loop
approximation. Therefore we calculate (5) using the method of the stationary
phase. A minimum of the effective action of the system is reached if the
effective potential and kinetic energy are minimal on its own:
$\partial_{\mu}\phi=0\text{ and }U_{\text{eff}}(\phi)=\min.$ (6)
Let the constant scalar field $\phi_{m}$ satisfy the condition (6). The
factor in front of the exponent in (5) is fixed at the point $\phi=\phi_{m}$,
and we obtain
$\left\langle
0|g\bar{\psi}^{a}\psi^{a}|0\right\rangle=\frac{Ng^{2}\left(\phi_{m}-\frac{m}{g}\right)}{2\pi}\ln\frac{g^{2}\left(\phi_{m}-\frac{m}{g}\right)^{2}}{\Lambda^{2}}.$
(7)
It is worth noting that the correlator $\left\langle
0|g\bar{\psi}^{a}\psi^{a}|0\right\rangle$ is a chiral condensate only at the
minimum of the effective potential, $\phi_{m}$. The effective potential and
the chiral condensate require renormalization. We renormalize the effective
potential following Coleman and Weinberg CW and Gross and Neveu GN by
demanding that
$\left.\frac{d^{2}U_{\text{eff}}}{d\phi_{m}^{2}}\right|_{\phi_{m}=M^{2}}=\mu_{R}^{2}.$
(8)
Then the renormalized chiral condensate is written as
$\left\langle
0|g\bar{\psi}^{a}\psi^{a}|0\right\rangle_{R}=\frac{Ng^{2}\left(\phi_{m}-\frac{m}{g}\right)}{2\pi}\ln\frac{\left(\phi_{m}-\frac{m}{g}\right)^{2}}{M^{2}}.$
(9)
$\phi_{m}$ is determined by means of the renormalized effective potential
$U^{R}_{\text{eff}}$, which is written as
$U^{R}_{\text{eff}}=\frac{1}{2}\mu_{R}^{2}\phi^{2}+\frac{g^{2}N}{4\pi}\left(\phi-\frac{m}{g}\right)^{2}\left(\ln\frac{\left(\phi-\frac{m}{g}\right)^{2}}{M^{2}}-3\right),$
(10)
Figure 1: a – The value $U_{\text{eff}}(\phi)$; b – Chiral condensate
$R=\left\langle 0|m\bar{\psi}^{a}\psi^{a}|0\right\rangle_{R}$ at
$g^{2}N=1.69\text{ MeV}^{2}$, $\mu^{2}_{R}=0.5\text{ MeV}^{2}$, $m/g=2.17$,
$M^{2}=100$.
We determine the stationary point $\phi_{m}$ by numerical solution of equation
$\left.\frac{dU_{\text{eff}}}{d\phi}\right|_{\phi=\phi_{m}}=\mu^{2}_{R}\phi_{m}+\frac{g^{2}N}{2\pi}\left(\phi_{m}-\frac{m}{g}\right)\left(\ln\frac{\left(\phi_{m}-\frac{m}{g}\right)^{2}}{M^{2}}-2\right)=0.$
(11)
One can see that there are two different solutions of (11), and for this
reason two different vacua appear. One of them is a global minimum and the
other a local minimum (see Fig. 1a).
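The stationary points of the effective potential can be located numerically by bisection on Eq. (11). The sketch below uses the parameter values quoted in the caption of Fig. 1 ($g^{2}N=1.69$, $\mu^{2}_{R}=0.5$, $m/g=2.17$, $M^{2}=100$); the bracketing intervals are illustrative guesses, not values from the paper:

```python
import math

G2N, MU2R, M_OVER_G, M2 = 1.69, 0.5, 2.17, 100.0  # values from Fig. 1

def dU_dphi(phi):
    """Left-hand side of Eq. (11): dU_eff/dphi."""
    d = phi - M_OVER_G
    return MU2R * phi + G2N / (2 * math.pi) * d * (math.log(d * d / M2) - 2)

def bisect(f, lo, hi, n=200):
    """Find a root of f on [lo, hi], assuming f changes sign there."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# illustrative brackets around two stationary points of U_eff
phi_a = bisect(dU_dphi, 2.5, 3.5)
phi_b = bisect(dU_dphi, 10.0, 12.0)
```

Evaluating $U^{R}_{\text{eff}}$ of Eq. (10) at the stationary points then identifies which extremum is the global vacuum.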
Using (9) and (11) we get
$\left\langle
0|m\bar{\psi}^{a}\psi^{a}|0\right\rangle_{R}=\frac{m}{g}\frac{N}{\pi}(g^{2}-g^{2}_{\text{cr}})\phi_{m}-\frac{m^{2}N}{\pi},$
(12)
here $g^{2}_{\text{cr}}=\frac{\pi\mu^{2}_{R}}{N}$.
It is worth noting that the chiral condensate does not disappear if the
fermion mass tends to zero. In that case $(m=0)$ the effective potential
has its minimum at the point $\phi^{2}_{m}$, which is given in the explicit
form
$\phi^{2}_{m}=M^{2}\exp 2\left(1-\frac{\pi\mu^{2}_{R}}{g^{2}N}\right),$ (13)
and the vacuum energy is
$\mathcal{E}_{V}=-\frac{g^{2}N}{4\pi}\phi^{2}_{m}.$ (14)
If $g^{2}N=\pi\mu^{2}_{R}$, then $\phi^{2}_{m}=M^{2}$ and the vacuum energy is
a completely perturbative one. It is known that the vacuum energy is
determined by the vacuum condensate of the trace of the energy-momentum tensor
$\theta_{\mu\mu}$ MS , SH , SVZ and SVNZ as
$\frac{1}{d}\left\langle 0|\theta_{\mu\mu}|0\right\rangle=\mathcal{E}_{V},$
(15)
here $d$ is the dimension of the spacetime.
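The massless case can be checked directly from Eqs. (13) and (14); in particular, when $g^{2}N=\pi\mu^{2}_{R}$ the exponent vanishes and $\phi^{2}_{m}=M^{2}$. A small sketch (parameter names are illustrative):

```python
import math

def phi_m_sq(M2, mu2R, g2N):
    # Eq. (13): phi_m^2 = M^2 * exp(2 * (1 - pi * mu_R^2 / (g^2 N)))
    return M2 * math.exp(2.0 * (1.0 - math.pi * mu2R / g2N))

def vacuum_energy(g2N, phi2):
    # Eq. (14): E_V = -(g^2 N / 4 pi) * phi_m^2
    return -g2N * phi2 / (4.0 * math.pi)
```

For $g^{2}N=\pi\mu^{2}_{R}$ the exponent in `phi_m_sq` is zero, so $\phi^{2}_{m}=M^{2}$, and the vacuum energy (14) is negative for any $\phi^{2}_{m}>0$.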
The diagrams shown in Fig. 2 determine the quantum correction to the trace of
the energy-momentum tensor in the case of the massive fermions. Then the
quantum correction to the trace of the energy-momentum tensor is written as
$\theta_{\mu\mu}=\frac{N}{2\pi}(g^{2}\phi^{2}-2mg\phi+m^{2}),$ (16)
here the first term defines the quantum anomaly when the fermion mass is zero,
the second term describes the breaking of the discrete symmetry (Fig. 2b), and
the term $\propto m^{2}$ gives the vacuum energy of noninteracting fermions
(Fig. 2c).
It should be noted that although the discrete symmetry is broken in the model
with free massive fermions, there is no violation of this symmetry in the
effective potential of the theory. This is due to the fact that it is
impossible to construct a combination of the correct energy dimension that
violates this symmetry.
Figure 2: The dashed line denotes the fermion loops, solid line denotes the
scalar field $\phi$.
## III Fermions in an electrical field
The model that will be discussed here is QED2. The Lagrangian of the model is
well known:
$L=L_{b}+L_{f}=-\frac{1}{4}F^{2}_{\mu\nu}+i\bar{\psi}^{a}\not\partial\psi^{a}-m\bar{\psi}^{a}\psi^{a}-A_{\mu}\bar{\psi}^{a}\gamma_{\mu}\psi^{a},$
(17)
If $m=0$ then the Lagrangian is invariant under a discrete chiral symmetry
$\psi^{a}\to\gamma_{5}\psi^{a},\;\bar{\psi}^{a}\to-\bar{\psi}^{a}\gamma_{5}.$
(18)
We formally define the chiral correlator as
$\int d^{2}x\left\langle
0|m\bar{\psi}^{a}\psi^{a}|0\right\rangle=\frac{1}{Z}\int
DA_{\mu}\exp\left(i\int
d^{2}xL_{b}(x)\right)im\frac{d}{dm}\int
D\bar{\psi}^{a}D\psi^{a}\exp\left(i\int d^{2}xL_{f}(x)\right).$
(19)
It is rather difficult to calculate the fermionic determinant for an arbitrary
vector potential $A_{\mu}(x)$; we therefore use an alternative method to
obtain the effective potential.
It is known that an expression for the effective potential was derived within
gluodynamics by Migdal and Shifman MS and SH . In the paper VK this method
was used to obtain the vacuum condensate of the trace of the energy-momentum
tensor in massless theories in various spacetime dimensions. Here the
method will be used to construct the effective Lagrangian in QED2. The
expression for the effective Lagrangian is obtained from the requirement that
the anomaly be reproduced under a scale transformation. The effective
potential has the form
$V_{\text{eff}}=\frac{1}{d}\sigma\left(\ln\frac{\sigma}{\upsilon}-1\right),$
(20)
here $\sigma=\theta_{\mu\mu}$ and $\upsilon$ emerges as a constant of
integration in solving the respective differential equation MS , SH . We
should find $\theta_{\mu\mu}$ and $\upsilon$ for our model.
Figure 3: The dashed line denotes the fermion loops, solid line denotes the
vector potential $A_{\mu}$.
We saw above that the mass term is fundamentally important for calculating
the chiral condensate (19); therefore we must construct $\theta_{\mu\mu}$
taking the mass term into account. The diagrams shown in Fig. 3 determine
$\theta_{\mu\mu}$ in QED2. The chiral symmetry breaking due to the
mass term is given by the diagram in Fig. 3b. The result of calculating the
diagrams is
$\theta_{\mu\mu}=-e^{2}_{0}\frac{E^{2}}{6\pi}-e_{0}m\frac{E}{3\pi}+\frac{m^{2}}{2\pi},$
(21)
here we introduced a dimensionless coupling constant
$e^{2}_{0}=\frac{e^{2}}{m^{2}}$ and an electrical field
$E=\varepsilon_{\mu\nu}\partial_{\mu}A_{\nu}$.
To find $\upsilon$ we assume that the coupling constant $e_{0}$ is zero; then
$V_{\text{eff}}(\sigma)$ coincides with the effective potential for
free fermions, and we get $\upsilon=\frac{1}{2\pi}M_{0}^{2}$, where $M_{0}$ is
an arbitrary subtraction parameter.
Now the effective Lagrangian can be expressed as
$L_{\text{eff}}=\frac{E^{2}}{2}+\frac{\sigma}{2}\left(\ln\frac{2\pi\sigma}{M^{2}_{0}}-1\right).$
(22)
A minimum of the effective Lagrangian of the system is reached if
$\frac{dL_{\text{eff}}}{dE}=E+\frac{1}{2}\frac{d\sigma}{dE}\ln\frac{2\pi\sigma}{M^{2}_{0}}=0,$
(23)
here
$\frac{d\sigma}{dE}=-e^{2}_{0}\frac{E}{3\pi}-e_{0}\frac{m}{3\pi}$ (24)
The chiral correlator is
$\int d^{2}x\left\langle 0|g\bar{\psi}^{a}\psi^{a}|0\right\rangle=\int
d^{2}x\frac{m}{2}\frac{d\sigma}{dm}\ln\frac{2\pi\sigma}{M^{2}_{0}},$ (25)
here
$m\frac{d\sigma}{dm}=-e_{0}\frac{mE}{3\pi}+\frac{m^{2}}{\pi}.$ (26)
Let $E_{m}$ be the solution of equation (23). Then, using (23) and (25)
and taking into account that $E_{m}$ and $\left\langle
0|g\bar{\psi}^{a}\psi^{a}|0\right\rangle$ are constants, we get
$\left\langle
0|g\bar{\psi}^{a}\psi^{a}|0\right\rangle=-mE_{m}\frac{d\sigma}{dm}\left(\frac{d\sigma}{dE_{m}}\right)^{-1}$
(27)
or
$\left\langle
0|g\bar{\psi}^{a}\psi^{a}|0\right\rangle=\left(3m^{2}-e_{0}mE_{m}\right)\frac{E_{m}}{e_{0}m+e^{2}_{0}E_{m}}$
(28)
In general there are no analytical solutions of (23), but we can find them in
special cases:
1. $m=0$: then $E_{m}=0$ and the chiral condensate disappears;
2. $e_{0}\ll 1$: then
$E_{m}\simeq\cfrac{e_{0}m}{6\pi}\ln\cfrac{m^{2}}{M^{2}_{0}}$ and
$\left\langle
0|g\bar{\psi}^{a}\psi^{a}|0\right\rangle=\frac{m^{2}}{2\pi}\ln\frac{m^{2}}{M^{2}_{0}}\left(1-\frac{e_{0}m}{18\pi}\ln\frac{m^{2}}{M^{2}_{0}}\right),$
where $\left|\cfrac{e_{0}m}{6\pi}\ln\cfrac{m^{2}}{M^{2}_{0}}\right|\leqslant
1.$
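Outside these special cases, Eq. (23) can be solved numerically. The sketch below uses $\sigma$ from Eq. (21) with the parameter values of Fig. 4 ($e_{0}=1$, $m=0.5$, $M_{0}^{2}=100$); the bisection bracket is an illustrative guess:

```python
import math

E0, M, M02 = 1.0, 0.5, 100.0  # e_0, m, M_0^2 as in Fig. 4

def sigma(E):
    # Eq. (21): theta_mumu as a function of the electrical field E
    return (-E0**2 * E**2 / (6 * math.pi)
            - E0 * M * E / (3 * math.pi)
            + M**2 / (2 * math.pi))

def dL_dE(E):
    # Eq. (23): E + (1/2) * dsigma/dE * ln(2 pi sigma / M_0^2)
    dsig = -E0**2 * E / (3 * math.pi) - E0 * M / (3 * math.pi)  # Eq. (24)
    return E + 0.5 * dsig * math.log(2 * math.pi * sigma(E) / M02)

# simple bisection on an illustrative bracket where dL/dE changes sign
lo, hi = -0.2, 0.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if dL_dE(lo) * dL_dE(mid) <= 0:
        hi = mid
    else:
        lo = mid
E_m = 0.5 * (lo + hi)

# chiral condensate at the minimum, Eq. (28)
cond = (3 * M**2 - E0 * M * E_m) * E_m / (E0 * M + E0**2 * E_m)
```

With these parameters the root $E_{m}$ is small and negative, giving a nonzero condensate, consistent with the condensate vanishing only when $m=0$.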
It is worth noting that $E$ may be a constant quantity. Indeed, fixing the
gauge $A_{1}=0$ and using the fact that the Coulomb potential is linear,
$A_{0}=-ax$, we get $E=a$.
In Fig. 4 the effective Lagrangian and the chiral condensate are shown as
functions of $E$.
Figure 4: a – The value $L_{\text{eff}}(E)$; b – Chiral condensate
$R=\left\langle 0|g\bar{\psi}^{a}\psi^{a}|0\right\rangle_{R}$ at $e_{0}=1$,
$m=0.5$ MeV, $M_{0}^{2}=100\text{ MeV}^{2}$.
## IV Conclusions
The models analyzed in this paper, formulated in two dimensional spacetime,
are unrealistic. However, we believe that obtained results are showing the
method calculation of the chiral condensate in more realistic models.
It was shown that the massive fermions in the massive scalar field have the
effective potential with two different vacuums. One of them is global the
other is local minima. The chiral condensate obtained in the model does not
disappear if the fermion mass tends to zero.
The trace of the energy-momentum tensor, taking into account the mass of the
fermions, demonstrates a clear violation of the discrete symmetry. This fact
allowed us to construct the effective Lagrangian in QED2. The chiral
condensate was obtained in the model; if the fermion mass vanishes, the chiral
condensate disappears.
Technically, the central point is the construction of the effective Lagrangian
in QED2.
Acknowledgment: I am grateful to O.V. Kancheli for useful discussions.
Tsallis Statistics in High Energy Physics: Chemical and Thermal Freeze-Outs
J. Cleymans1, M. W. Paradza1,2
1 UCT-CERN Research Centre and Physics Department, University of Cape Town,
South Africa,
2 Centre for Postgraduate Studies, Cape Peninsula University of Technology,
Bellville 7535, South Africa
Abstract:
We present an overview of a proposal for analysing relativistic proton-proton
($pp$) collisions, emphasizing the thermal or kinetic freeze-out stage in the
framework of the Tsallis distribution. In this paper we take into account the
chemical potential present in the Tsallis distribution by following a two-step
procedure. In the first step we use the redundancy present in the variables,
namely the system temperature, $T$, volume, $V$, Tsallis exponent, $q$, and
chemical potential, $\mu$, and perform all fits with the chemical potential
effectively set to zero. In the second step the value of $q$ is kept fixed at
the value determined in the first step. In this way the complete set of
variables $T,q,V$ and $\mu$ can be determined. The final results show a weak
energy dependence in $pp$ collisions at centre-of-mass energies from
$\sqrt{s}=6$ GeV to 13 TeV. The chemical potential $\mu$ at kinetic freeze-out
shows an increase with beam energy. This simplifies the description of the
thermal freeze-out stage in $pp$ collisions, as the values of $T$ and of the
freeze-out radius $R$ vary only mildly over a wide range of beam energies.
## 1 Introduction
It has been estimated [1] that about 30,000 particles (pions, kaons, protons
and antiprotons) are produced in a central heavy ion collision at the Large
Hadron Collider (LHC) at 5.02 TeV. Hence it is natural to use concepts from
statistical mechanics to analyze the produced particles. This procedure has a
long and proud history with contributions from three Nobel prize winners: E.
Fermi [2, 3], W. Heisenberg [4] and L.D. Landau [5]. To quote Landau:
> “ _Fermi originated the ingenious idea of considering the collision process
> at very high energies by the use of thermodynamic methods._ ”
This turned out to be useful also at much higher beam energies than those
initially envisaged. The main ingredient in the hadron resonance gas model
(referred to as thermal model here) is that all resonances listed in the
Review of Particle Physics [6] are in thermal and chemical equilibrium. This
reduces the number of available parameters and just a few thermodynamic
variables characterize the system.
The chemical freeze-out stage is well understood and is strongly supported by
experimental results (see e.g., [7] for a recent review) with a strong
connection to results obtained using Lattice Quantum Chromodynamics (LQCD) as
the chemical freeze-out temperature is consistent with the phase transition
temperature calculated in LQCD. Indeed, for the most central Pb-Pb collisions,
the best description of the ALICE data on yields of particles in one unit of
rapidity at mid-rapidity was obtained for a chemical freeze-out temperature
given by $T_{ch}=156.6\pm 1.7$ MeV [7, 8]. Remarkably, this value of $T_{ch}$
is close to the pseudo-critical temperature $T_{c}=156.5\pm 1.5$ MeV obtained
from first-principles LQCD calculations [9], albeit with the
possibility of a broad transition region [10].
For several decades, a well-established procedure using hydrodynamics [11] and
variations thereof has existed to describe this stage. In this paper we review
another possibility to describe the thermal freeze-out stage which has shown
considerable potential especially to describe the final state in proton–proton
(pp) collisions. Most of these approaches are based on variations of a
distribution proposed by Tsallis about 40 years ago [12], which generalizes
the entropy by introducing an additional parameter called $q$. In the limit $q\rightarrow
1$ this reproduces the standard Boltzmann–Gibbs entropy. The advantage is that
thermodynamic variables like temperature, energy density, pressure and
particle density can still be used and thermodynamic consistency is
maintained.
This paper is an extension of [13]. For completeness and for the convenience
of the reader we have included the tables presented there and considerably
improved on them. The inclusion of the NA61/SHINE data [14] is new and
contributes substantially to the understanding of the energy dependence of the
parameters; all figures are also new.
## 2 Thermal Freeze-Out
We will focus here on one particular form of the Tsallis distribution,
satisfying thermodynamic consistency relations [15, 16] and given by:
$E\frac{d^{3}N}{d^{3}p}=gVE\frac{1}{(2\pi)^{3}}\left[1+(q-1)\frac{E-\mu}{T}\right]^{-\frac{q}{q-1}},$
(1)
where $V$ is the volume, $q$ is the Tsallis parameter, $T$ is the
corresponding temperature, $E$ is the energy of the particle, $p$ is the
momentum, $g$ is the degeneracy factor and $\mu$ is the chemical potential. In
terms of variables commonly used in high-energy physics, rapidity $y$,
transverse mass $m_{T}=\sqrt{p_{T}^{2}+m^{2}}$:
$\frac{d^{2}N}{dp_{T}dy}=gV\frac{p_{T}m_{T}\cosh
y}{(2\pi)^{2}}\left[1+(q-1)\frac{m_{T}\,\cosh\,y-\mu}{T}\right]^{-\frac{q}{q-1}}.$
(2)
In the limit where the parameter $q$ tends to unity one recovers the well-
known Boltzmann–Gibbs distribution (with $p_{T}$ being the particle transverse
momentum):
$\lim_{q\rightarrow 1}\frac{d^{2}N}{dp_{T}dy}=gV\frac{p_{T}m_{T}\cosh
y}{(2\pi)^{2}}\exp\left(-\frac{m_{T}\,\cosh\,y-\mu}{T}\right).$ (3)
The main advantage of Equation (2) over Equation (3) is that it has a
polynomial decrease with increasing $p_{T}$ which is what is observed
experimentally.
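As a numerical illustration, the two distributions can be compared directly. The sketch below (all parameter values are illustrative, not fitted values from this paper) evaluates Equation (2) and its Boltzmann–Gibbs limit, Equation (3), at mid-rapidity:

```python
# Sketch of the Tsallis transverse-momentum distribution, Eq. (2), and its
# Boltzmann-Gibbs limit, Eq. (3). Parameter values below are illustrative.
import math

def tsallis_dndpt(pT, m, T, q, V, mu=0.0, y=0.0, g=1.0):
    """d^2N/(dpT dy) for one particle species, Eq. (2); natural units."""
    mT = math.sqrt(pT**2 + m**2)
    bracket = 1.0 + (q - 1.0) * (mT * math.cosh(y) - mu) / T
    return g * V * pT * mT * math.cosh(y) / (2 * math.pi)**2 \
        * bracket**(-q / (q - 1.0))

def boltzmann_dndpt(pT, m, T, V, mu=0.0, y=0.0, g=1.0):
    """The q -> 1 limit, Eq. (3)."""
    mT = math.sqrt(pT**2 + m**2)
    return g * V * pT * mT * math.cosh(y) / (2 * math.pi)**2 \
        * math.exp(-(mT * math.cosh(y) - mu) / T)

# As q -> 1 the Tsallis form approaches the exponential one, but for q > 1
# it falls off only polynomially at large pT, as observed experimentally.
m_pi, T, V = 0.140, 0.080, 100.0   # pion mass (GeV), T (GeV), V (illustrative)
for pT in (0.5, 2.0, 5.0):
    print(pT, tsallis_dndpt(pT, m_pi, T, 1.15, V),
          boltzmann_dndpt(pT, m_pi, T, V))
```

At $p_T=5$ GeV the Tsallis value with $q=1.15$ exceeds the exponential one by many orders of magnitude, which is the power-law tail referred to in the text.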
It was recognized early on [17] that there is a redundancy in the number of
parameters in this distribution, namely the four parameters $T,V,q$ and $\mu$
in Equation $(\ref{YieldNonZeroMu})$ can be replaced by just three parameters
$T_{0},V_{0},q$ with the help of the following transformation:
$\displaystyle T_{0}$ $\displaystyle=$ $\displaystyle
T\left[1-(q-1)\frac{\mu}{T}\right],\qquad\mu\leq\frac{T}{q-1},$ (4)
$\displaystyle V_{0}$ $\displaystyle=$ $\displaystyle
V\left[1-(q-1)\frac{\mu}{T}\right]^{\frac{q}{1-q}},$ (5)
leading to a transverse momentum distribution which can thus be written
equivalently as
$\frac{d^{2}N}{dp_{T}dy}=gV_{0}\frac{p_{T}m_{T}\cosh
y}{(2\pi)^{2}}\left[1+(q-1)\frac{m_{T}\,\cosh\,y}{T_{0}}\right]^{-\frac{q}{q-1}},$
(6)
where the chemical potential does not appear explicitly.
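The equivalence of the two parametrizations can be checked numerically. The following sketch (with illustrative parameter values) confirms that Equation (2) evaluated with $(T,V,\mu)$ agrees exactly with Equation (6) evaluated with $(T_0,V_0)$ from Equations (4) and (5):

```python
# Numerical check of the parameter redundancy, Eqs. (4)-(5):
# Eq. (2) with (T, V, mu) equals Eq. (6) with (T0, V0, mu = 0).
import math

def tsallis(pT, m, T, q, V, mu, y=0.0, g=1.0):
    mT = math.sqrt(pT**2 + m**2)
    br = 1.0 + (q - 1.0) * (mT * math.cosh(y) - mu) / T
    return g * V * pT * mT * math.cosh(y) / (2 * math.pi)**2 \
        * br**(-q / (q - 1.0))

T, V, q, mu = 0.080, 100.0, 1.15, 0.055   # illustrative; mu < T/(q-1)
T0 = T * (1.0 - (q - 1.0) * mu / T)                    # Eq. (4)
V0 = V * (1.0 - (q - 1.0) * mu / T)**(q / (1.0 - q))   # Eq. (5)

for pT in (0.3, 1.0, 3.0):
    a = tsallis(pT, 0.140, T, q, V, mu)      # Eq. (2) with (T, V, mu)
    b = tsallis(pT, 0.140, T0, q, V0, 0.0)   # Eq. (6) with (T0, V0)
    assert abs(a / b - 1.0) < 1e-12          # identical to rounding error
```

The identity holds because the bracket in Equation (2) factorizes as $(T_0/T)\,[1+(q-1)m_T\cosh y/T_0]$, and the prefactor $(T_0/T)^{-q/(q-1)}$ is exactly $V_0/V$.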
Corresponding to the volumes $V$ and $V_{0}$ defined in Equations (1) and (5)
we also introduce the corresponding radii $R$ and $R_{0}$
$\displaystyle V$ $\displaystyle=$ $\displaystyle\frac{4\pi}{3}R^{3},$ (7)
$\displaystyle V_{0}$ $\displaystyle=$ $\displaystyle\frac{4\pi}{3}R_{0}^{3}.$
(8)
It is to be noted that most previous analyses have confused the two Equations
(2) and (6) and reached conclusions that are incorrect, namely that at LHC
energies, different hadrons, $\pi,K,p,...$ cannot be described by the same
values of $T$ and $V$. As we will show, this conclusion is based on using
$T_{0}$ and $V_{0}$ and not $T$ and $V$. Many authors have followed this conclusion
because at LHC energies equal numbers of particles and antiparticles are being
produced and, furthermore, at chemical equilibrium, one has indeed $\mu=0$ MeV
for all quantum numbers. However the equality of particle and antiparticle
yields, at thermal freeze-out, only implies that e.g., $\pi^{+}$ and $\pi^{-}$
have the same chemical potential but they are not necessarily zero. We
emphasize that Equations (2) and (6) carry a different meaning, notice the
difference in parameters: $T_{0}$ is not equal to $T$ and neither is $V$ equal
to $V_{0}$. Notice also that we do not have $\mu$ in Equation (6).
It is the purpose of the present paper to resolve this issue. The procedure we
choose is the following:
1. Use Equation (6) to fit the transverse momentum distributions. This determines the three parameters $T_{0}$, $q$ and $V_{0}$.
2. Fix the parameter $q$ thus obtained.
3. Perform a new fit to the transverse momentum distributions using Equation (2) keeping $q$ as determined in the previous step. This determines the parameters $T$ and $V$ and the chemical potential $\mu$.
4. Check the consistency with Equations (4) and (5).
Each step in the fitting procedure thus involves only three parameters to
describe the transverse momentum distributions. This procedure was presented
in [13]; the present paper extends it with more details, and some of the
entries in Table 2 have been corrected.
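The steps above can be sketched on synthetic data. The following stdlib-only toy (a coarse grid search stands in for the Minuit minimization used in the paper, the volumes are held fixed for brevity, and all numbers are illustrative) shows the structure of the two-step fit:

```python
# Two-step fitting sketch: step 1 fits (T0, q) with mu = 0 via Eq. (6),
# step 2 fixes q and fits (T, mu) via Eq. (2). Grid search replaces Minuit.
import math

def tsallis(pT, T, q, V, mu, m=0.140, g=1.0):
    mT = math.sqrt(pT**2 + m**2)                 # y = 0 (mid-rapidity)
    br = 1.0 + (q - 1.0) * (mT - mu) / T
    return g * V * pT * mT / (2 * math.pi)**2 * br**(-q / (q - 1.0))

pts = [0.2 + 0.2 * i for i in range(20)]
# Synthetic "spectrum" generated with T = 0.080, q = 1.15, mu = 0.055.
data = [tsallis(p, 0.080, 1.15, 100.0, 0.055) for p in pts]

def chi2(T, q, V, mu):
    return sum((tsallis(p, T, q, V, mu) - d)**2 / d**2
               for p, d in zip(pts, data))

# Step 1: fit (T0, q) with mu = 0, Eq. (6); V0 held fixed for brevity.
best1 = min(((chi2(T, q, 230.0, 0.0), T, q)
             for T in (0.060 + 0.001 * i for i in range(30))
             for q in (1.10 + 0.005 * j for j in range(20))),
            key=lambda t: t[0])
_, T0_fit, q_fix = best1

# Step 2: keep q fixed, fit (T, mu) with Eq. (2); V held fixed for brevity.
best2 = min(((chi2(T, q_fix, 100.0, mu), T, mu)
             for T in (0.070 + 0.001 * i for i in range(20))
             for mu in (0.005 * j for j in range(20))),
            key=lambda t: t[0])
_, T_fit, mu_fit = best2
print(round(q_fix, 3), round(T_fit, 3), round(mu_fit, 3))
```

The grid recovers $q$ near its input value, after which the second step returns $T$ and $\mu$ close to the values used to generate the spectrum, mirroring the consistency check of step 4.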
We emphasize that the chemical potentials at kinetic freeze-out (described
here with a Tsallis distribution) are not related to those at chemical
freeze-out. At chemical freeze-out, where thermal and chemical equilibrium
have been well established, the chemical potentials are zero. At kinetic
freeze-out, however, there is no chemical equilibrium, and the observed
particle-antiparticle symmetry only implies that the chemical potentials for
particles must be equal to those for antiparticles; due to the absence of
chemical equilibrium they do not have to be zero.
We remind the reader of the advantage of using the above distribution: it
follows a consistent set of thermodynamic relations (see e.g., [17]). It is
thus clear that the parameter $T$ can indeed be considered a temperature in
the thermodynamic sense, since the relation below holds
$T=\left.\frac{\partial E}{\partial S}\right|_{V,N},$ (9)
where the entropy $S$ is the Tsallis entropy.
In the next section we include the chemical potential parameter in the Tsallis
fits to the transverse momentum spectra. It was first noted in [17] that the
variables $T,V,q$ and $\mu$ in the Tsallis distribution function, Equation
$(\ref{Yield})$, have a redundancy for $\mu\neq 0$ MeV, and recently [18]
considered the mass of a particle in place of the chemical potential. This
motivates work on determining the chemical potential from the transverse
momentum spectra.
## 3 Comparison of Fit Results
As mentioned in the introduction, we reproduce here for completeness the
values extracted from the results published by the ALICE Collaboration [19,
20, 21, 22]. The data at the centre-of-mass energy $\sqrt{s}=0.9$ TeV had the
smallest range in $p_{T}$ (for all the ALICE Collaboration results considered
here), of about an order of magnitude less than the experimental data at
$\sqrt{s}=2.76$ TeV and $\sqrt{s}=7$ TeV with ALICE.
In general the data were described very well; the figures showing the actual
fits results are not included in this paper since they form part of previous
publications. The least squares method was performed by the Minuit package
[23] as part of the fitting procedure in the code. There was no manual
selection in the choice of parameters: all parameters were initialized at the
beginning and the code returned the best-fit values. We did not fix the value
of $T$ when obtaining the other parameters. In particular, the value of $\mu$
did not affect $V$.
We did not observe any trend suggesting a deterioration of the fits with the
centre-of-mass energy. In Tables 1 and 2 we give the $\chi^{2}$ values.
Comparing the values of $\chi^{2}$ from 2.76 to 7.0 TeV, in Tables 1 and 2,
there was no clear trend with increasing energy.
Table 1: Fit results at $\sqrt{s}$ = 0.9 [19], 2.76 [20], 5.02 [21] and 7 TeV
[21, 22], using data from the ALICE Collaboration using Equations (6) and (7).
$\sqrt{s}$ (TeV) | Particle | $R_{0}$ (fm) | $q$ | $T_{0}$ (GeV) | $\chi^{2}$/NDF
---|---|---|---|---|---
0.9 | $\pi^{+}$ | 4.83 $\pm$ 0.14 | 1.148 $\pm$ 0.005 | 0.070 $\pm$ 0.002 | 22.73/30
| $\pi^{-}$ | 4.74 $\pm$ 0.13 | 1.145 $\pm$ 0.005 | 0.072 $\pm$ 0.002 | 15.83/30
| $K^{+}$ | 4.52 $\pm$ 1.30 | 1.175 $\pm$ 0.017 | 0.057 $\pm$ 0.013 | 13.02/24
| $K^{-}$ | 3.96 $\pm$ 0.96 | 1.161 $\pm$ 0.016 | 0.064 $\pm$ 0.013 | 6.21/24
| $p$ | 42.7 $\pm$ 19.8 | 1.158 $\pm$ 0.006 | 0.020 $\pm$ 0.004 | 14.29/21
| $\bar{p}$ | 7.44 $\pm$ 3.95 | 1.132 $\pm$ 0.014 | 0.052 $\pm$ 0.016 | 13.82/21
2.76 | $\pi^{+}+\pi^{-}$ | 4.80 $\pm$ 0.10 | 1.149 $\pm$ 0.002 | 0.077 $\pm$ 0.001 | 20.64/60
| $K^{+}+K^{-}$ | 2.51 $\pm$ 0.13 | 1.144 $\pm$ 0.002 | 0.096 $\pm$ 0.004 | 2.46/55
| $p+\bar{p}$ | 4.01 $\pm$ 0.62 | 1.121 $\pm$ 0.005 | 0.086 $\pm$ 0.008 | 3.51/46
5.02 | $\pi^{+}+\pi^{-}$ | 5.02 $\pm$ 0.11 | 1.155 $\pm$ 0.002 | 0.076 $\pm$ 0.002 | 20.13/55
| $K^{+}+K^{-}$ | 2.44 $\pm$ 0.17 | 1.15 $\pm$ 0.005 | 0.099 $\pm$ 0.006 | 1.52/48
| $p+\bar{p}$ | 3.60 $\pm$ 0.55 | 1.126 $\pm$ 0.005 | 0.091 $\pm$ 0.009 | 2.56/46
7.0 | $\pi^{+}+\pi^{-}$ | 5.66 $\pm$ 0.17 | 1.179 $\pm$ 0.003 | 0.066 $\pm$ 0.002 | 14.14/38
| $K^{+}+K^{-}$ | 2.51 $\pm$ 0.15 | 1.158 $\pm$ 0.005 | 0.097 $\pm$ 0.005 | 3.11/45
| $p+\bar{p}$ | 3.07 $\pm$ 0.41 | 1.124 $\pm$ 0.005 | 0.101 $\pm$ 0.008 | 6.03/43
Table 2: Fit results at $\sqrt{s}$ = 0.9 [19], 2.76 [20], 5.02 [21] and 7 TeV
[21, 22], using data from the ALICE Collaboration with $q$ from Table 1
following Equations (2) and (8).
$\sqrt{s}$ (TeV) | Particle | $R$ (fm) | $\mu$ (GeV) | $T$ (GeV) | $\chi^{2}$/NDF
---|---|---|---|---|---
0.9 | $\pi^{+}$ | 3.64 $\pm$ 0.21 | 0.055 $\pm$ 0.012 | 0.079 $\pm$ 0.002 | 3.66/30
| $\pi^{-}$ | 3.53 $\pm$ 0.21 | 0.059 $\pm$ 0.012 | 0.080 $\pm$ 0.002 | 2.18/30
| $K^{+}$ | 3.76 $\pm$ 0.33 | 0.029 $\pm$ 0.017 | 0.062 $\pm$ 0.003 | 5.31/24
| $K^{-}$ | 3.89 $\pm$ 0.35 | 0.003 $\pm$ 0.018 | 0.065 $\pm$ 0.003 | 3.38/24
| $p$ | 3.34 $\pm$ 0.27 | 0.233 $\pm$ 0.020 | 0.057 $\pm$ 0.007 | 7.44/21
| $\bar{p}$ | 3.93 $\pm$ 0.33 | 0.097 $\pm$ 0.024 | 0.065 $\pm$ 0.002 | 7.69/21
2.76 | $\pi^{+}+\pi^{-}$ | 4.32 $\pm$ 2.68 | 0.022 $\pm$ 0.130 | 0.080 $\pm$ 0.019 | 20.48/60
| $K^{+}+K^{-}$ | 4.75 $\pm$ 0.03 | $-$0.140 $\pm$ 0.008 | 0.075 $\pm$ 0.004 | 2.48/55
| $p+\bar{p}$ | 4.47 $\pm$ 5.50 | $-$0.071 $\pm$ 0.253 | 0.077 $\pm$ 0.030 | 3.52/46
5.02 | $\pi^{+}+\pi^{-}$ | 4.19 $\pm$ 2.64 | 0.038 $\pm$ 0.134 | 0.082 $\pm$ 0.021 | 20.14/55
| $K^{+}+K^{-}$ | 4.49 $\pm$ 0.03 | $-$0.142 $\pm$ 0.009 | 0.078 $\pm$ 0.0005 | 1.52/48
| $p+\bar{p}$ | 4.00 $\pm$ 4.48 | $-$0.075 $\pm$ 0.243 | 0.081 $\pm$ 0.031 | 2.56/46
7.0 | $\pi^{+}+\pi^{-}$ | 3.67 $\pm$ 0.02 | 0.081 $\pm$ 0.141 | 0.081 $\pm$ 0.003 | 14.15/38
| $K^{+}+K^{-}$ | 3.80 $\pm$ 0.22 | $-$0.098$\pm$ 0.014 | 0.082 $\pm$ 0.002 | 3.13/55
| $p+\bar{p}$ | 4.07 $\pm$ 0.27 | $-$0.127$\pm$ 0.018 | 0.085 $\pm$ 0.002 | 6.03/43
The fits to the transverse momentum distributions were then repeated using
Equation (2) but this time keeping the parameter $q$ fixed to the value
determined in the previous section and listed in Table 1. The results are
listed in Table 2, where we present the fit results for non-zero chemical
potential for $pp$ collisions at four different beam energies by the ALICE
Collaboration.
In the first case, we set the chemical potential equal to the mass of the
respective particle and compare our results to [18] for $pp$ collisions at
$0.9$ TeV with the CMS Collaboration; secondly, we set the chemical potential
as a free parameter in the fit to the data and analyze the fit results;
lastly, we calculated the chemical potential directly from Equation
$(\ref{poland_mu})$.
In Table 3 we present the extracted values of $T,\,\,q\,,R$ and $\mu$ at four
different energies with the CMS Collaboration.
Table 3: The extracted values of the parameters $T$, $q$, $R$, $\mu$ and $\chi^{2}/NDF$, using the data published in [24, 25, 26] for $pp$ collisions with the CMS experiment.
$\sqrt{s}$ (TeV) | Particle | $T$ (MeV) | $q$ | $R$ (fm) | $\mu$ (MeV) | $\chi^{2}/NDF$
---|---|---|---|---|---|---
$0.9$ [24] | $\pi^{+}$ | $77\pm 1$ | $1.164\pm 0.004$ | $0.070\pm 0.102$ | $66\pm 4$ | $8.111/18$
| $K^{+}$ | $74\pm 1$ | $1.158\pm 0.008$ | $3.724\pm 0.126$ | $-25\pm 9$ | $2.123/13$
| $p^{+}$ | $71\pm 1$ | $1.139\pm 0.003$ | $3.536\pm 0.105$ | $94\pm 9$ | $9.596/23$
$2.76$ [25] | $\pi^{+}$ | $76\pm 1$ | $1.189\pm 0.005$ | $3.906\pm 0.100$ | $80\pm 5$ | $5.711/18$
| $K^{+}$ | $78\pm 1$ | $1.162\pm 0.008$ | $3.883\pm 0.019$ | $-5\pm 1$ | $2.447/13$
| $p^{+}$ | $67\pm 1$ | $1.166\pm 0.004$ | $3.508\pm 0.099$ | $107\pm 9$ | $27.43/23$
$7.0$ [25] | $\pi^{+}$ | $77\pm 1$ | $1.203\pm 0.005$ | $3.994\pm 0.105$ | $89\pm 1$ | $14.29/18$
| $K^{+}$ | $87\pm 1$ | $1.152\pm 0.009$ | $3.900\pm 0.135$ | $-96\pm 11$ | $2.074/13$
| $p^{+}$ | $67\pm 1$ | $1.184\pm 0.004$ | $3.509\pm 0.099$ | $84\pm 9$ | $12.22/23$
$13.0$ [26] | $\pi^{+}$ | $76\pm 2$ | $1.215\pm 0.008$ | $3.932\pm 0.157$ | $88\pm 3$ | $3.546/18$
| $K^{+}$ | $88\pm 3$ | $1.142\pm 0.0150$ | $4.044\pm 0.27$ | $-124\pm 22$ | $1.828/13$
| $p^{+}$ | $59\pm 1$ | $1.213\pm 0.008$ | $3.135\pm 0.130$ | $191\pm 14$ | $8.892/22$
In Figure 1 we compare the values for $T_{0}$ and $T$ at four different beam
energies. The results obtained for $T$ were more stable for different particle
types than the values obtained for $T_{0}$. We will come back to this with
more detail later in this paper.
An interesting proposal to determine the chemical potential was made in [27],
where the observation was made that the radius $R_{0}$ given in Table 1 is
larger than the one obtained from a femtoscopy analysis [28] by a factor
$\kappa$ estimated to be about 3.5, i.e.,
$R_{\rm femto}\approx\frac{1}{\kappa}R_{0}.$ (10)
Hence in [27] the suggestion is made to identify the corresponding volume
$V_{\rm femto}$ with the volume $V$ appearing in Equation (1).
Hence
$V_{0}\approx V\cdot\kappa^{3}.$ (11)
Combining this with Equations $(\ref{T0})$ and $(\ref{V0})$ leads to a
chemical potential given by
$\mu=\frac{T_{0}}{q-1}\left(\kappa^{3(q-1)/q}-1\right).$ (12)
Hence, using this proposal, a knowledge of $T_{0}$ would lead to a
determination of $\mu$.
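Equation (12) is easy to evaluate numerically. The values below for $T_0$ and $q$ are illustrative (of the order of the pion entries in Table 1), with $\kappa\approx 3.5$ as estimated in [27, 28]:

```python
# Illustrative evaluation of Eq. (12): the chemical potential implied by
# identifying V with the femtoscopy volume, V0 ~ V * kappa^3.
T0, q, kappa = 0.072, 1.15, 3.5   # T0 in GeV; values illustrative only
mu = T0 / (q - 1.0) * (kappa**(3.0 * (q - 1.0) / q) - 1.0)
print(round(mu, 3))               # chemical potential in GeV
```

For these inputs the proposal yields $\mu$ of order 0.3 GeV, well above the fitted values in Table 2, consistent with the conclusion below that the femtoscopy identification is not supported.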
We compared the resulting values of the chemical potential $\mu$ using this
proposal [27] to the values obtained with the procedure outlined above
starting from Equation (2), and found that the results are very different;
hence, our results do not support this assumption, and the volume $V$
appearing in Equation (2) cannot be identified with the volume determined from
femtoscopy.
The volume $V$ must be considered to be specific to the Tsallis distribution
as is the case with all the other variables used in this paper.
A clearer picture of the energy dependence emerges when including results from
the NA61/SHINE Collaboration [14] for $\pi^{-}$’s. The procedure outlined
above was repeated using the data published in [14]: first we used Equation
(6) and collected the results in Table 4; next we fixed the values of $q$
obtained this way and repeated the fits using Equation (1), with the results
collected in Table 5.
Figure 1: A comparison of the values of the temperatures $T$ and $T_{0}$ of different hadron species for $pp$ collisions at $\sqrt{s}$ = 0.9 [19], 2.76 [20], 5.02 [21] and $7$ [22] TeV.
Table 4: The extracted values of the parameters $T_{0}$, $q$, $R_{0}$ and $\chi^{2}/NDF$, using the data published in [14] for $pp$ collisions with the NA61/SHINE Collaboration.
$\sqrt{s}$ (GeV) | Particle | $T_{0}$ (MeV) | $q$ | $R_{0}$ (fm) | $\chi^{2}/NDF$
---|---|---|---|---|---
$6.3$ | $\pi^{-}$ | $98\pm 6$ | $1.042\pm 0.015$ | $2.55\pm 0.14$ | $4.454/15$
$7.7$ | $\pi^{-}$ | $95\pm 3$ | $1.057\pm 0.008$ | $2.72\pm 0.09$ | $4.561/15$
$8.8$ | $\pi^{-}$ | $96\pm 2$ | $1.055\pm 0.006$ | $2.76\pm 0.06$ | $8.423/15$
$12.3$ | $\pi^{-}$ | $95\pm 2$ | $1.064\pm 0.006$ | $2.90\pm 0.06$ | $6.775/15$
$17.3$ | $\pi^{-}$ | $93\pm 3$ | $1.069\pm 0.006$ | $3.07\pm 0.08$ | $2.176/15$
Table 5: The extracted values of the parameters $T$, $\mu$, $R$ and $\chi^{2}/NDF$, using the data published in [14] for $pp$ collisions with the NA61/SHINE Collaboration.
$\sqrt{s}$ (GeV) | Particle | $R$ (fm) | $\mu$ (GeV) | $T$ (GeV) | $\chi^{2}/NDF$
---|---|---|---|---|---
$6.3$ | $\pi^{-}$ | $2.451\pm 0.399$ | $0.011\pm 0.046$ | $0.098\pm 0.003$ | $4.454/15$
$7.7$ | $\pi^{-}$ | $2.529\pm 0.223$ | $0.020\pm 0.024$ | $0.096\pm 0.002$ | $4.561/15$
$8.8$ | $\pi^{-}$ | $2.548\pm 0.016$ | $0.022\pm 0.002$ | $0.097\pm 0.001$ | $8.423/15$
$12.3$ | $\pi^{-}$ | $2.638\pm 0.171$ | $0.026\pm 0.018$ | $0.096\pm 0.001$ | $6.776/15$
$17.3$ | $\pi^{-}$ | $2.785\pm 0.216$ | $0.025\pm 0.021$ | $0.095\pm 0.002$ | $2.179/15$
The values for $T_{0}$ as a function of beam energy are shown in Figure 2. As
one can see, a fairly strong energy dependence is present when comparing the
two sets of data.
However, this picture changes when plotting the temperature $T$ as a function
of beam energy, as shown in Figure 3. The energy dependence becomes weaker and
the values of $T$ decrease with increasing beam energy from about 10 GeV all
the way up to 13,000 GeV. A similar decrease of the kinetic freeze-out
temperature was also observed by the STAR Collaboration [29] at the Brookhaven
National Laboratory.
Figure 2: The energy dependence of the temperature parameter $T_{0}$. The
triangular points are values of $T_{0}$ extracted from data in $pp$ collisions
obtained by the NA61/SHINE Collaboration [14] for pions (see Table 4). The
squares are the $T_{0}$ values in Table 1. All points were obtained by fits
using Equation (6).
Figure 3: The energy dependence of the temperature $T$ for pions in $pp$
collisions. The triangular points are values of $T$ extracted from data in
$pp$ collisions obtained by the NA61/SHINE Collaboration [14] for pions (see
Table 5). The squares are the $T$ values in Tables 2 and 3. All points were
obtained by fits using Equation (2). The straight line at $T$ = 0.1 GeV is
there to guide the eye only.
Similarly, when plotting the results obtained for the radius $R_{0}$, one sees
a strong dependence on the beam energy, as shown in Figure 4.
Figure 4: The energy dependence of the freeze-out radius $R_{0}$ of pions in
$pp$ collisions. The round (red) points are obtained from fits to the results
of the NA61/SHINE Collaboration [14] (see Table 4), the square points are for
the ALICE Collaboration data (see Table 1). The straight line at $R_{0}$ = 4
fm is there to guide the eye only.
However, similarly to the case of the temperatures $T$ and $T_{0}$, the
energy dependence is weakened when plotting the radius $R$, where only a very
mild energy dependence can be noticed; see Figure 5.
Figure 5: The energy dependence of the freeze-out radius $R$ of pions in $pp$
collisions. The round (red) points are obtained from fits to the results of
the NA61/SHINE Collaboration [14] (see Table 5), the square (blue) points are
for the ALICE Collaboration data (see Table 2), while the round (black) points
are for the CMS Collaboration data (see Table 3). The straight line at $R$ = 4
fm is there to guide the eye only.
Finally, the parameter most influenced by deviations from chemical
equilibrium is the chemical potential $\mu$, shown in Figure 6. Here one sees
a very clear increase with beam energy.
Figure 6: The energy dependence of the freeze-out chemical potential $\mu$
for pions in $pp$ collisions. The round (red) points are obtained from fits to
the results of the NA61/SHINE Collaboration [14] (see Table 5), the square
(blue) points are for the ALICE Collaboration data (see Table 2), while the
round (black) points are for the CMS Collaboration data (see Table 3).
## 4 Conclusions
In this paper we have taken into account the chemical potential present in the
Tsallis distribution Equation (1) by following a two step procedure. In the
first step we used the redundancy present in the variables $T,V,q$ and $\mu$
expressed in Equations (4) and (5) and performed all fits using Equation (6),
i.e., effectively setting the chemical potential equal to zero. The only
variable which is common between Equations (1) and (6) is the Tsallis
parameter $q$; hence, in the second step of our procedure we fixed the value
of $q$ and performed all fits using Equation (1). This way we finally obtained
the set of variables $T,V$ and $\mu$. The results are shown in several figures.
It is to be noted that $T$ and $R$ (as deduced from the volume $V$) show a
weak energy dependence in proton-proton ($pp$) collisions at the centre-of-
mass energies from $\sqrt{s}$ = 20 GeV up to 7 and 13 TeV. This is not the
case for the variables $T_{0}$ and $V_{0}$. The chemical potential at kinetic
freeze-out shows an increase with beam energy as presented in Figure 6. This
simplifies the resulting description of the thermal freeze-out stage in $pp$
collisions as the values of $T$ and $R$ vary mildly over a wide range of beam
energies.
## References
* [1] D. Adamová et al., Phys. Lett. B 2017, 772, 567–577.
* [2] E. Fermi, Prog. Theor. Phys. 1950, 5, 570–583.
* [3] E. Fermi, Phys. Rev. 1951, 81, 683–687.
* [4] W. Heisenberg, Z. Phys. 1952, 133, 65.
* [5] L. Landau, Izv. Akad. Nauk Ser. Fiz. 1953, 17, 51–64.
* [6] M. Tanabashi et al. Review of particle physics. Phys. Rev. D 2018, 98, 030001.
* [7] A. Andronic, P. Braun-Munzinger, K. Redlich, J. Stachel, Nature 2018, 561, 321–330.
* [8] A. Andronic, P. Braun-Munzinger, B. Friman, P.M. Lo, K. Redlich, J. Stachel, Phys. Lett. B 2019, 792, 304–309.
* [9] A. Bazavov et al. Phys. Lett. B 2019, 795, 15–21.
* [10] S. Borsanyi, Z. Fodor, J.N. Guenther, R. Kara, S.D. Katz, P. Parotto, A. Pasztor, C. Ratti, K.K. Szabo, Phys. Rev. Lett. 2020, 125, 052001.
* [11] J. Sollfrank, P. Koch, U. W. Heinz, Phys. Lett. B 1990, 252, 256–264.
* [12] C. Tsallis, J. Statist. Phys. 1988, 52, 479–487.
* [13] J. Cleymans and M. Paradza, arXiv 2020, arXiv:2010.05565.
* [14] N. Abgrall et al., Eur. Phys. J. C 2014, 74, 2794.
* [15] J. Cleymans and D. Worku, J. Phys. G 2012, 39, 025006.
* [16] J. Cleymans and D. Worku, Eur. Phys. J. A 2012, 48, 160.
* [17] J. Cleymans, G. Lykasov, A. Parvan, A. Sorin, O. Teryaev, D. Worku, Phys. Lett. B 2013, 723, 351–354.
* [18] G. Bíró, G.G. Barnaföldi, T.S. Biró, arXiv 2020, arXiv:2003.03278.
* [19] K. Aamodt et al., Eur. Phys. J. C 2011, 71, 1655.
* [20] B.B. Abelev et al., Phys. Lett. B 2014, 736, 196–207.
* [21] J. Adam et al., Phys. Lett. B 2016, 760, 720–735.
* [22] J. Adam et al., Eur. Phys. J. C 2015, 75, 226.
* [23] F. James and M. Roos, Comput. Phys. Commun. 1975, 10, 343.
* [24] S. Chatrchyan et al., JHEP 2011, 8, 86.
* [25] S. Chatrchyan et al., Eur. Phys. J. C 2012, 72, 2164.
* [26] A. M. Sirunyan et al. Phys. Rev. D 2017, 96, 112003.
* [27] M. Rybczynski and Z. Wlodarczyk, Eur. Phys. J. C 2014, 74, 2785.
* [28] T.C. Awes et al. Phys. Rev. D 2011, 84, 112004.
* [29] J. Adam et al. Phys. Rev. C 2020, 102, 034909.
# A DSA-like digital signature protocol
Leila<EMAIL_ADDRESS>and Omar<EMAIL_ADDRESS>
Laboratory of Mathematics, Cryptography, Mechanics
and Numerical Analysis, Fstm
University Hassan II of Casablanca, Morocco
###### Abstract
MSC 2010 : 94A60, 11T71
Keywords : Public key cryptography. Digital signature. DSA protocol. Discrete
logarithm problem.
## 1 Introduction
Modern data protection started with the work of Shannon [16] in $1949$ on
information theory. However, modern public key cryptography appeared clearly
when, in $1976$, Diffie and Hellman [5] showed how any two network users can
construct a common secret key even if they never meet. A little over a year
later the RSA [14] method was published. It is considered the most widely used
cryptosystem in daily life.
Digital signature is an important tool in cryptography. Its role in funds
transfer, online business, electronic mail, user identification and document
integrity is essential. Let us recall its mechanism. A trusted authority
prepares the keys for Alice, a network user: a secret key $k$ and a public key
$K$ depending on her identity parameters. If she wants to sign a document $D$,
she must solve a hard problem $Pb(K,D)$ which is a function of $K$ and $D$.
She is able to find a solution as she possesses a supplementary piece of
information: her private key $k$. For anybody else, the problem, consisting
generally of a difficult equation of a high mathematical level based on the
elements $K$ and $D$, is intractable even with the help of computers. No one
can forge Alice's personal digital signature on the document $D$. On the other
hand, to validate and accept the signature, anyone, and sometimes it is a
judge, can verify whether the answer furnished by Alice is correct or not.
One of the first concrete digital signature protocols was proposed by
ElGamal [6] in $1985$. It is based on the discrete logarithm problem,
computationally considered intractable [7, p. 103][17, p. 236][1]. Provided
that the signature system parameters are properly selected, the security of
the scheme has never been seriously threatened.
Many variants of this scheme have been created. In $1991$, Schnorr [15]
proposed a similar signature protocol inspired by the ElGamal system. The
digital signature algorithm, or DSA [18], also deduced from the ElGamal
signature equation, was definitively formulated by the National Institute of
Standards and Technology (NIST) in $1994$ [9]. Some variants [4] of the DSA
were published and several attacks were elaborated against it. In $1997$
Bellare et al. [3] presented an attack showing that if the DSA algorithm uses
a linear congruential pseudorandom number generator then it is possible to
recover the signer's secret key. In $2002$ Nguyen and Shparlinski [10]
published a polynomial time algorithm that can totally break the DSA protocol
given a few bits of the nonces used to sign a certain number of documents.
More recently, Poulakis [13] used lattice theory to construct a system of
linear equations that leads to the disclosure of the signer's secret key.
Finally, in $2017$, Angel et al. [2], with extensive experiments, elaborated a
method that exploits blocks in the ephemeral nonce to find the signer's
private key.
In this paper we propose a new digital signature scheme inspired by the DSA
algorithm. Its security and complexity are analyzed. Our method constitutes an
alternative if the classical DSA protocol is broken. The drawback of our
algorithm is that the signature consists of three parameters instead of two
for DSA, and one more modular exponentiation must be executed in the
verification step. For theoretical and pedagogical interest, we present an
extension of the method.
The paper is organized as follows: In the next section we briefly recall the
description of the classical DSA digital signature. In section $3$ we present
our contribution and we conclude in section $4$.
Classical notations will be adopted. So $\mathbb{N}$ is the set of all natural
integers. When $a,b,n\in\mathbb{N}$, we write $a=b\ mod\ n$ if $a$ is the
remainder of the division of the integer $b$ by $n$, and $a\equiv b\ [n]$ if
the number $n$ divides the difference $a-b$. If $p$ is a prime integer then
the set $(\dfrac{\mathbb{Z}}{p\mathbb{Z}})^{*}$ is the multiplicative group of
modular integers $\\{1,2,\ldots,p-1\\}$.
We begin by recalling the DSA signature method.
## 2 The standard DSA protocol[9, 18]
In this section, we describe the basic DSA scheme followed by the security
analysis of the method.
### 2.1 Keys production
The signer selects two primes $p$ and $q$ such that $q$ divides $p-1$,
$2^{t-1}<q<2^{t}$ with: $t\in\\{160,256,384,512\\}$, $2^{L-1}<p<2^{L}$,
$768<L<1024$ and $L$ is a multiple of 64.
Then, he chooses a primitive root $g$ mod $p$ and computes
$\alpha=g^{\frac{p-1}{q}}\>mod\>p$. The signer selects also an integer $x$
such that $1\leq x\leq q-1$ and calculates $y=\alpha^{x}\>mod\>p$. Finally, he
publishes $(p,q,g,y)$ and keeps the parameter $x$ secret as its private key.
### 2.2 The signature generation
Let $h$ be a secure hash function[7, p. 33] that produces a 160-bit output.
To sign a message $m$, the signer starts by selecting a random secret integer
$k$ smaller than $q$ and called the nonce[7, p. 397]. Then, he computes
successively $r=(\alpha^{k}\>mod\>p)\>mod\>q$ and
$s=\displaystyle\frac{h(m)+x\,r}{k}\ mod\>q$. Finally, the signature is the
pair $(r,s)$.
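The signing step above can be sketched in a few lines of Python. The toy group parameters in the usage comment are illustrative assumptions (far too small for real use, which mandates the sizes of section 2.1), and `hm` stands for the digest $h(m)$:

```python
import random

def dsa_sign(p, q, alpha, x, hm):
    """Sign the digest hm with private key x; returns the pair (r, s)."""
    while True:
        k = random.randrange(1, q)        # fresh secret nonce per signature
        r = pow(alpha, k, p) % q
        s = pow(k, -1, q) * (hm + x * r) % q   # k^{-1}(h(m) + x r) mod q
        if r != 0 and s != 0:
            return r, s

# toy group: p = 23, q = 11 (q divides p-1), alpha = 2 has order 11 mod 23
r, s = dsa_sign(23, 11, 2, 3, 5)
```

The retry loop discards the degenerate cases $r=0$ or $s=0$, as the standard prescribes.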
### 2.3 The signature verification
The verifier of the signature calculates:
$u_{1}=\displaystyle\frac{h(m)}{s}\>mod\>q$ and
$u_{2}=\displaystyle\frac{r}{s}\>mod\>q$. Then, he computes:
$v=((\alpha^{u_{1}}y^{u_{2}})\>mod\>p)\>mod\>q$.
Depending on whether $v=r$ or not, he accepts or rejects the signature.
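The verification side can be sketched the same way; the toy group and the worked signature in the comment are illustrative assumptions:

```python
def dsa_verify(p, q, alpha, y, hm, r, s):
    """Return True iff (r, s) is a valid DSA signature on the digest hm."""
    if not (0 < r < q and 0 < s < q):
        return False
    s_inv = pow(s, -1, q)
    u1 = hm * s_inv % q                  # u1 = h(m)/s mod q
    u2 = r * s_inv % q                   # u2 = r/s mod q
    v = pow(alpha, u1, p) * pow(y, u2, p) % p % q
    return v == r

# toy group p = 23, q = 11, alpha = 2, private key x = 3, so y = 8;
# nonce k = 4 on digest hm = 5 gives the signature (r, s) = (5, 5)
assert dsa_verify(23, 11, 2, 8, 5, 5, 5)
```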
### 2.4 Security of the method
To date, the DSA system is considered a secure digital signature scheme.
Indeed, to break it, attackers must first solve a famous hard mathematical
question: the discrete logarithm problem (DLP)[7, p. 103][17, p. 267].
Conversely, it is not known whether breaking the DSA scheme leads to an
algorithm for solving the DLP; this is a remarkable open problem. The
introduction of the second large prime $q$ in the DSA mechanism avoids Pohlig
and Hellman attack[11] on the discrete logarithm problem.
Several attacks have been mounted against the DSA protocol. The reader is invited to
see for instance references[2, 13, 10, 3]. It’s well known[8, p. 188] that, as
for ElGamal signature scheme[6], using the same nonce $k$ to sign two
different documents reveals the system secret key. Therefore it’s mandatory to
change the value of $k$ at each signature.
The following section describes our main contribution.
## 3 A DSA-like digital signature protocol
In this section, we present a new DSA-like digital signature and we analyze
its security and complexity.
### 3.1 Keys production
The fabrication of the keys is the same as for the DSA protocol. The signer
selects two primes $p$ and $q$ such that $q$ divides $p-1$, $2^{t-1}<q<2^{t}$
with: $t\in\\{160,256,384,512\\}$, $2^{L-1}<p<2^{L}$, $768<L<1024$ and $L$ is
a multiple of 64.
Then, he chooses a primitive root $g$ mod $p$ and computes
$\alpha=g^{\frac{p-1}{q}}\>mod\>p$. The signer selects also an integer $x$
such that $1\leq x\leq q-1$ and calculates $y=\alpha^{x}\>mod\>p$. Finally, he
publishes $(p,q,g,y)$ and keeps the parameter $x$ secret as its private key.
### 3.2 The signature generation
Let $h$ be a collision-resistant hash function[7, p. 323] such that the image
of any message belongs to the set $\\{1,2,\ldots,q\\}$.
To sign the document $m$, Alice must solve the modular signature equation:
$\alpha^{\frac{h(m)}{t}}y^{\frac{r}{t}}r^{\frac{s}{t}}\ mod\ p\equiv s\ [q]$
(1)
where the unknown parameters $r,s,t$ satisfy $0<r<p$ and $0<s,t<q$.
###### Theorem 1.
Alice, with her secret key $x$, is able to find a triplet $(r,s,t)$ that
satisfies the signature equation (1).
###### Proof.
Alice chooses two random numbers $k,l<q$ then computes $r=\alpha^{k}\ mod\ p$
and $s=\alpha^{l}\ mod\ p\ mod\ q$. To get a solution of equation (1) it
suffices to have
$\alpha^{\frac{h(m)}{t}}\alpha^{x\frac{r}{t}}\alpha^{k\frac{s}{t}}\equiv\alpha^{l}\
[p]$. On the other hand, the third parameter $t$ must satisfy
$\dfrac{h(m)+xr+ks}{t}\equiv l\ [q]$, that is,
$t=\dfrac{h(m)+xr+ks}{l}\ mod\ q$ (2)
∎
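The constructive proof above translates directly into code. The sketch below follows Theorem 1 with assumed toy parameters (`hm` stands for $h(m)$):

```python
import random

def sign_new(p, q, alpha, x, hm):
    """Produce a triplet (r, s, t) solving equation (1), as in Theorem 1."""
    while True:
        k = random.randrange(1, q)            # first secret nonce
        l = random.randrange(1, q)            # second secret nonce
        r = pow(alpha, k, p)
        s = pow(alpha, l, p) % q
        t = pow(l, -1, q) * (hm + x * r + k * s) % q   # equation (2)
        if s != 0 and t != 0:
            return r, s, t

# toy group p = 23, q = 11, alpha = 2, secret key x = 3
r, s, t = sign_new(23, 11, 2, 3, 5)
```

Note the extra cost compared to DSA: one more modular exponentiation (for $s$) per signature, as discussed in section 3.5.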
### 3.3 The signature verification
To verify Alice's signature $(r,s,t)$, Bob should do the following:
1\. He first obtains Alice's public key $(p,q,\alpha,y)$.
2\. He verifies that $0<r<p$ and $0<s,t<q$. If not, he rejects the signature.
3\. He computes $u_{1}=\displaystyle\frac{h(m)}{t}\>mod\>q$,
$u_{2}=\displaystyle\frac{r\>mod\>q}{t}\>$ $mod\>q$ and
$u_{3}=\displaystyle\frac{s}{t}\>mod\>q$.
4\. He determines $v=((\alpha^{u_{1}}y^{u_{2}}r^{u_{3}})\>mod\>p)\>mod\>q$
5\. If $v=s$, Bob accepts the signature, otherwise he rejects it.
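Steps 1 to 5 can be sketched as follows; the toy group and the precomputed signature in the comment are illustrative assumptions:

```python
def verify_new(p, q, alpha, y, hm, r, s, t):
    """Bob's steps 1-5 above, for public key (p, q, alpha, y) and digest hm."""
    if not (0 < r < p and 0 < s < q and 0 < t < q):   # step 2
        return False
    t_inv = pow(t, -1, q)
    u1 = hm * t_inv % q                               # step 3
    u2 = (r % q) * t_inv % q
    u3 = s * t_inv % q
    v = pow(alpha, u1, p) * pow(y, u2, p) * pow(r, u3, p) % p % q   # step 4
    return v == s                                     # step 5

# toy group p = 23, q = 11, alpha = 2, x = 3 (so y = 8); nonces k = 4,
# l = 7 on digest hm = 5 give the signature (r, s, t) = (16, 2, 4)
assert verify_new(23, 11, 2, 8, 5, 16, 2, 4)
```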
Before going on, we illustrate the procedure by an example. We took the same
two large primes $p$ and $q$ and the generator $g$ from reference[12].
Example. Number $p$ is the $1024$ bit-length prime:
$p=94772214835463005053734612688987420745400704367641322402256120323201\\\
101201888870137170705373498571313030316679878174804574980244779195907606\\\
098768964031739134779278848798229819934901324222106210711842549374102491\\\
417296346772453897799554117544442700769168461664359227744193913924495898\\\
621041399925210910234489$,
and $q=875964080856129786106302881659054003458244253873$.
The generator of the multiplicative group
$((\dfrac{\mathbb{Z}}{p\mathbb{Z}})^{*},.)$ is
$g=5401015700248054412670727427618885968561082170092747201289836023587070\\\
84212996041390972417923715770599593251420677788894704063133555201170818988\\\
61805293399530415510300321792173737741806916560867092300436919535062464506\\\
65697269251181380801651442389601287318661304902597519067842079816229492516\\\
762912476306877$.
We find that:
$\alpha=527223567708677258197455859133222937206317701786932418731523516024196571\\\
7085174644862184855390335598300773471653030132135612590607930128253659134294\\\
8495010997120001192272226027997156250022237643201186825665260281160455408116\\\
6154971036251524872300380557802833133742393048305876853382003116504889806209\\\
1119992$.
Suppose that the secret key is:
$x=371575259833906365510684947508061994685469500919$, so:
$y=630775473718828368762466985220658584945893263294126314383120993968845387\\\
9656101046604281277749079342479732012406513115015034410271324671714864739538\\\
5456320857555381477923626248021182202362864702913038804579836525851535288840\\\
6082786247355168380930043030984132308743260982091689595031807130951481230017\\\
18879430$.
Assume that Alice decides to sign the message $m$ such that $h(m)=123456789$.
She uses the random secret exponents $k=1250$ and $l=98561$. Therefore:
$r=578164865504179483400175892277942835618633089799304565306500743940987577\\\
3739310417729452466781593533114855965860037328909457721341899371409595718096\\\
8602212875285420808831668566865480631794760348104088294970278483930299680828\\\
49153157235106512542837849342137689407189514011990142602691770559506\\\
0711695234702599$;
$s=544621099954698016824748794802717914623249907270$ and
$t=556119013460694353294511753174948468444082504155$.
To test the validity of Alice's signature, Bob first determines $u_{1}$, $u_{2}$
and $u_{3}$:
$u_{1}=141694501602616348876138369891969021089672807186$;
$u_{2}=662220963645670062957535725628437873682297068731$;
$u_{3}=68770021753905823557652121472773639225383494491$;
then Bob calculates $v=\alpha^{u_{1}}y^{u_{2}}r^{u_{3}}\ mod\ p\ mod\ q$
$=544621099954698016824748794802717914623249907270$ which is exactly $s$. In
this case, Alice signature is accepted.
### 3.4 Security analysis
We discuss here some possible attacks. Suppose that Oscar, enemy of Alice,
tries to impersonate her by signing the message $m$ without knowing her secret
key $x$.
Attack 1. After receiving the signature parameters $(r,s,t)$ of a particular
message $m$, Oscar may want to find Alice's secret key $x$. If he uses
equation (1), he will be confronted with the discrete logarithm problem
$a^{x}\equiv b\ [q]$ where $a=\alpha^{r/t}\ mod\ p$ and
$b=s\alpha^{-h(m)/t}\,r^{-s/t}\ mod\ p$. If Oscar prefers to exploit relation
(2), he needs to know the two nonces $k$ and $l$. Their computation derives
from the discrete logarithm problem.
Attack 2. Assume now that Oscar arbitrarily fixes random values for two of the
parameters and tries to find the third one.
(i) If he fixes $r$ and $s$ in the signature equation (1) and wants to
determine the third unknown parameter $t$, he will be confronted with the
discrete logarithm problem $a^{t^{\prime}}\equiv s\ [p]$ where
$a=\alpha^{h(m)}y^{r}r^{s}\ mod\ p$ and $t^{\prime}=\dfrac{1}{t}\ mod\ q$.
(ii) If Oscar fixes $r$ and $t$ and wants to find the parameter $s$, he will
be confronted with the equation $ab^{s}\equiv s\ [q]$, where
$a=\alpha^{\frac{h(m)}{t}}y^{\frac{r}{t}}\ mod\ p$ and $b=r^{1/t}\ mod\ p$. To
our knowledge, there is no algorithm in the mathematical literature for solving
this kind of modular equation.
(iii) If Oscar fixes $s$ and $t$ and wants to calculate the parameter $r$, he
must solve the equation $a^{r}r^{b}\equiv c\ [q]$, where $a=y^{1/t}\ mod\ p$,
$b=\dfrac{s}{t}\ mod\ q$ and $c=s\alpha^{-{h(m)}/t}\ mod\ p$. There is no
known general method for solving this problem.
Attack 3. Assume that Alice used the same couple of exponents $(k,l)$ to sign
two distinct messages $m_{1}$ and $m_{2}$. Being aware of this fact, Oscar,
from the first message signature obtains $lt_{1}\equiv h(m_{1})+xr_{1}+ks_{1}\
[q]$ and from the second message $lt_{2}\equiv h(m_{2})+xr_{2}+ks_{2}\ [q]$.
As $r_{1}=r_{2}$ and $s_{1}=s_{2}$, Oscar is able to calculate the nonce $l$.
In contrast to the ElGamal and DSA schemes, it seems that there is no easy way
to compute the exponent $k$ and then to retrieve Alice's secret key $x$.
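The recovery of $l$ described in Attack 3 is one line of modular arithmetic: subtracting the two congruences eliminates $x$ and $k$, since $r$ and $s$ are shared. A sketch with assumed toy numbers:

```python
def recover_l(q, hm1, t1, hm2, t2):
    """Nonce-reuse attack of Attack 3: with the same (k, l) on two messages,
    subtracting the congruences gives l(t1 - t2) = h(m1) - h(m2)  [q]."""
    return (hm1 - hm2) * pow(t1 - t2, -1, q) % q

# toy group p = 23, q = 11, alpha = 2, x = 3; reusing k = 4, l = 7 on the
# digests 5 and 7 yields t1 = 4 and t2 = 9 (with identical r and s)
assert recover_l(11, 5, 4, 7, 9) == 7
```

As the text notes, recovering $l$ does not by itself expose $k$ or $x$ here, unlike in ElGamal and DSA.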
Attack 4. Let $n\in\mathbb{N}$. Suppose that Oscar has collected $n$ valid
signatures $(r_{i},s_{i},t_{i})$ for messages $m_{i}$,
$i\in\\{1,2,\ldots,n\\}$. Using (2), he will construct a system of $n$ modular
equations:
$(S)\left\\{\begin{array}[]{c}l_{1}t_{1}\equiv h(m_{1})+xr_{1}+k_{1}s_{1}\
[q]\\\ l_{2}t_{2}\equiv h(m_{2})+xr_{2}+k_{2}s_{2}\ [q]\\\ \vdots\ \ \ \ \ \
\vdots\ \ \ \ \ \ \vdots\\\ l_{n}t_{n}\equiv h(m_{n})+xr_{n}+k_{n}s_{n}\
[q]\\\ \end{array}\right.$
where $\forall i\in\\{1,2,\ldots,n\\}$, $r_{i}=\alpha^{k_{i}}\ mod\ p$ and
$s_{i}=\alpha^{l_{i}}\ mod\ p\ mod\ q$.
Since system (S) contains $2n+1$ unknown parameters
$x,k_{i},l_{i},i\in\\{1,2,\ldots,n\\}$, for only $n$ equations, it is not
difficult for Oscar to propose a valid solution. But Alice's secret key $x$ is
unique, and Oscar can never be sure that the value of $x$ he obtains is the
right one. So this attack is not efficient.
Attack 5. Let us analyze the existential forgery[17, p. 285]. Suppose that the
signature protocol is used without the hash function $h$. Oscar can put
$r=\alpha^{k}\,y^{k^{\prime}}\ mod\ p$, $s=\alpha^{l}\,y^{l^{\prime}}\ mod\ p\
mod\ q$ for arbitrary numbers $k,k^{\prime},l,l^{\prime}$. To solve the
signature equation (1), it suffices to solve the system:
$\displaystyle\left\\{\begin{array}[]{c}\dfrac{m}{t}+k\dfrac{s}{t}\equiv l\
[q]\\\ \dfrac{r}{t}+k^{\prime}\dfrac{s}{t}\equiv l^{\prime}\ [q]\\\
\end{array}\right.$, so $\displaystyle\left\\{\begin{array}[]{c}m\equiv tl-ks\
mod\ q\\\ t=\dfrac{1}{l^{\prime}}[r+k^{\prime}s]\ mod\ q\\\
\end{array}\right.$. Hence $(r,s,t)$ is a valid signature for the message $m$,
but this attack is not realistic.
Attack 6. Suppose that Alice's enemy Oscar is able to break the DSA scheme. In
other words, given $p,q,m,\alpha,y$, he can find integers $r,s<q$ such that
$\alpha^{\frac{h(m)}{s}}\,y^{\frac{r}{s}}\ mod\ p\equiv s\ [q]$. There is no
evidence that Oscar is able to solve equation (1)
$\alpha^{\frac{h(m)}{t}}y^{\frac{r}{t}}r^{\frac{s}{t}}\ mod\ p\equiv s\ [q]$.
It’s an advantage of our signature model: breaking the DSA scheme does not
lead to breaking our protocol.
Remark 1. We end this security analysis by asking a question for which we have
no answer: If someone is able to simultaneously break ElGamal, DSA and our own
signature protocols, can he solve the general discrete logarithm problem?
### 3.5 Complexity
Productions of public and private keys in our protocol and in the DSA scheme
are identical, so the number of operations to be executed is the same. In the
generation of the signature parameters, we have one more parameter than in the
DSA; to compute it, we use one supplementary modular exponentiation. For the
verification step, we calculate three exponentiations instead of two for the
DSA scheme.
Let $T_{exp}$ and $T_{mult}$ be the times necessary to compute respectively an
exponentiation and a multiplication. The total time to execute all operations
using our method is as follows:
$T_{tot}=7T_{exp}+8T_{mult}$ (3)
As $T_{exp}=O(\log^{3}n)$ and $T_{mult}=O(\log^{2}n)$, (see [7, p. 72]), the
final complexity of our signature scheme is
$T_{tot}=O(\log^{2}n+\log^{3}n)=O(\log^{3}n)$ (4)
This proves that the execution of the protocol works in a polynomial time.
### 3.6 Theoretical generalization
For its pedagogical and mathematical interest, we end this paper by giving an
extension of the signature equation (1). Let $h$ be a known and secure hash
function as mentioned in section $2$ and at the beginning of this section.
We fix an integer $n\in\mathbb{N}$ such that $n\geq 2$.
1. Alice begins by choosing her public key $(p,q,\alpha,y)$, where $p$ and $q$ primes such that $q$ divides $p-1$.
Element $\alpha$ is a generator of the subgroup of
$(\dfrac{\mathbb{Z}}{p\mathbb{Z}})^{*}$ whose order is $q$.
$y=\alpha^{x}\ mod\ p$ where $x$ is a secret parameter in
$\\{1,2,\ldots,q-1\\}$. Integer $x$ is Alice private key.
2. If Alice wants to produce a digital signature of a message $m$, she must solve the congruence:
$\alpha^{\frac{h(m)}{r_{n+1}}}\,y^{\frac{r_{1}}{r_{n+1}}}\,{r_{1}}^{\frac{{r_{2}}}{r_{n+1}}}\,{r_{2}}^{\frac{{r_{3}}}{r_{n+1}}}\,\ldots{r_{n-1}}^{\frac{{r_{n}}}{r_{n+1}}}\
mod\ p\equiv r_{n}\ [q]$ (5)
where the unknown parameters $r_{1},r_{2},\ldots,r_{n+1}$ verify
$0<r_{1},r_{2},\ldots,r_{n-1}<p{\rm\ and\ }0<r_{n},r_{n+1}<q.$ (6)
###### Theorem 2.
Alice, with her secret key $x$, can determine an $(n+1)$-tuple
$(r_{1},r_{2},\ldots,r_{n},r_{n+1})$ that verifies the modular relation (5).
###### Proof.
The signer Alice selects $n$ random numbers
$k_{1},k_{2},\ldots,k_{n}\in\mathbb{N}$ less than the prime $q$, then
computes $r_{i}=\alpha^{k_{i}}\ mod\ p$ for every $i\in\\{1,2,\ldots,n-1\\}$
and $r_{n}=\alpha^{k_{n}}\ mod\ p\ mod\ q$. We have:
Equation (5)
$\Longleftrightarrow\alpha^{\frac{h(m)}{r_{n+1}}}\,\alpha^{x\frac{r_{1}}{r_{n+1}}}\,\alpha^{{\displaystyle\sum_{i=1}^{n-1}}\,k_{i}\frac{r_{i+1}}{r_{n+1}}}\
mod\ p\ mod\ q=\alpha^{k_{n}}\ mod\ p\ mod\ q$.
It suffices to have
$\alpha^{\frac{h(m)}{r_{n+1}}}\,\alpha^{x\frac{r_{1}}{r_{n+1}}}\,\alpha^{{\displaystyle\sum_{i=1}^{n-1}}\,k_{i}\frac{r_{i+1}}{r_{n+1}}}\
mod\ p=\alpha^{k_{n}}\ mod\ p$, which is equivalent to :
$\dfrac{h(m)}{r_{n+1}}+x\frac{r_{1}}{r_{n+1}}+\displaystyle\sum_{i=1}^{n-1}\,k_{i}\frac{r_{i+1}}{r_{n+1}}\equiv
k_{n}\ [q]$. So Alice determines the last unknown parameter $r_{n+1}$ by
calculating
$r_{n+1}=\frac{1}{k_{n}}[h(m)+xr_{1}+\sum_{i=1}^{n-1}\,k_{i}r_{i+1}]\ mod\ q$
(7)
∎
3. If Bob receives from Alice her signature $(r_{1},r_{2},\ldots,r_{n},r_{n+1})$, he will be able to check whether the modular equation (5) is valid or not. He then decides whether to accept or reject this signature.
Remark 2. Let $k_{0}$ be Alice's secret key $x$, and let ${\overrightarrow{u}}$ and
${\overrightarrow{v}}$ be respectively the vectors $(k_{0},k_{1},\ldots,k_{n-1})$
and $(r_{1},r_{2},\ldots,r_{n})$. To easily memorize equality (7), observe
that $r_{n+1}=\dfrac{h(m)+\overrightarrow{u}.\overrightarrow{v}}{k_{n}}\ mod\ q$ where
$\overrightarrow{u}.\overrightarrow{v}$ denotes the classical inner product of
the two vectors $\overrightarrow{u}$ and $\overrightarrow{v}$.
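The generalized scheme can be sketched end to end for illustrative toy parameters: `sign_general` follows the constructive proof of Theorem 2, and `check_eq5` implements Bob's test of congruence (5). All group values are assumptions for demonstration only:

```python
import random

def sign_general(p, q, alpha, x, hm, n):
    """Return (r_1, ..., r_{n+1}) solving congruence (5), per Theorem 2."""
    while True:
        k = [random.randrange(1, q) for _ in range(n)]       # nonces k_1..k_n
        r = [pow(alpha, ki, p) for ki in k[:-1]]             # r_1..r_{n-1}
        r.append(pow(alpha, k[-1], p) % q)                   # r_n
        # equality (7): r_{n+1} = (h(m) + x r_1 + sum k_i r_{i+1}) / k_n mod q
        num = hm + x * r[0] + sum(k[i] * r[i + 1] for i in range(n - 1))
        r_last = num * pow(k[-1], -1, q) % q
        if r[-1] != 0 and r_last != 0:
            r.append(r_last)
            return r

def check_eq5(p, q, alpha, y, hm, rs):
    """Bob's verification of congruence (5) for rs = (r_1, ..., r_{n+1})."""
    n = len(rs) - 1
    inv = pow(rs[-1], -1, q)
    v = pow(alpha, hm * inv % q, p) * pow(y, rs[0] * inv % q, p) % p
    for i in range(n - 1):          # factors r_1^{r_2/r_{n+1}} ... r_{n-1}^{r_n/r_{n+1}}
        v = v * pow(rs[i], rs[i + 1] * inv % q, p) % p
    return v % q == rs[n - 1]

# toy group p = 23, q = 11, alpha = 2, x = 3 (y = 8), n = 3
rs = sign_general(23, 11, 2, 3, 5, 3)
assert check_eq5(23, 11, 2, 8, 5, rs)
```

For $n=2$ this reduces, up to naming, to the scheme of section 3.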
## 4 Conclusion
In this article, a new digital signature protocol was presented. We studied in
detail the security of the method and gave an analysis of its complexity. Our
contribution can be seen as an alternative if the DSA algorithm is totally
broken. For its purely mathematical and pedagogical interest, we furnished a
general form of our proposed signature equation.
## References
* [1] Addepalli V. N. Krishna, Addepalli Hari Narayana & K. Madhura Vani Fully homomorphic encryption with matrix based digital signature standard, Journal of Discrete Mathematical Sciences and Cryptography, 20:2, 439-444, DOI: 10.1080/09720529.2015.1101882 (2017).
* [2] J. Angel, R. Rahul, C. Ashokkumar, B. Menezes, DSA signing key recovery with noisy side channels and variable error rates, Progress in cryptology Indocrypt, pp147–165, Lecture Notes in Comput. Sci., 10698, Springer, Cham, (2017).
* [3] M. Bellare, S. Goldwasser, and D. Micciancio, Pseudo-random number generation within cryptographic algorithms: the DSS case, In Proc. of Crypto’97, volume 1294 of LNCS. IACR, Palo Alto, CA, Springer- Verlag, Berlin, (1997).
* [4] L. Chen-Yu, L. and L. Wei-Shen, Extended DSA. Journal of Discrete Mathematical Sciences and Cryptography Vol. 11, 5, pp545–550, (2008).
* [5] W. Diffie and M. E. Hellman, New directions in cryptography. Information Theory, IEEE Transactions 22, N. 6, pp644–654, (1976).
* [6] T. ElGamal, A public key cryptosystem and a signature scheme based on discrete logarithm problem. IEEE Trans. Info. Theory , IT-31, N. 4, pp469–472, (1985).
* [7] J. A. Menezes, P. C. Van Oorschot, and S. A. Vanstone, Handbook of applied cryptography. CRC press, (1996).
* [8] R. Mollin, An Introduction to cryptography, Seconde edition, Chapman & Hall/CRC, (2007).
* [9] National institute of standard and technology (NIST). FIPS Publication 186, DSA, Department of commerce, (1994).
* [10] P. Q. Nguyen and I. E. Shparlinski, The insecurity of the Digital Signature Algorithm with partially known nonces, J. of Cryptology, pp151–176, (2002).
* [11] P. Pohlig and M. Hellman, An Improved Algorithm for Computing Logarithms over GP(p) and Its Cryptographic Significance, IEEE Transaction on information theory, Vol. IT 24, n. 1, (1978).
* [12] T. Pornin, Deterministic usage of the digital signature algorithm (DSA) and elliptic curve digital signature algorithm (ECDSA). RFC 6979, (2013).
* [13] D. Poulakis, New lattice attacks on DSA schemes, J. Math. Cryptol. 10, no. 2, pp135–144, (2016).
* [14] Rivest, R., Shamir, A., & Adeleman, L. A method for obtaining digital signatures and public key cryptosystems, Communication of the ACM Vol. no 21 (1978).
* [15] C. P. Schnorr, Efficient Signature Generation by Smart Cards. Journal of Cryptology, pp161–174, (1991).
* [16] C. E. Shannon, Communication Theory of Secrecy Systems, Bell System Technical Journal vol 28, pp656–715, (1949).
* [17] D. R. Stinson, Cryptography, Theory and Practice, Third edition, Chapman & Hall/CRC (2006).
* [18] http://www.umich.edu/ x509/ssleay/fip186/fip186.htm.
e1e-mail<EMAIL_ADDRESS>
11institutetext: Institut für Kernphysik, Technische Universität Darmstadt,
D-64289 Darmstadt, Germany 22institutetext: Departamento de Física Atómica,
Molecular y Nuclear, Facultad de Física, Universidad de Sevilla, Apartado
1065, E-41080 Sevilla, Spain 33institutetext: Instituto Interuniversitario
Carlos I de Física Teórica y Computacional (iC1), Apdo. 1065, E-41080 Sevilla,
Spain 44institutetext: Centro Nacional de Aceleradores, U. Sevilla, J.
Andalucía, CSIC, Avda Thomas A Edison, 7, E-41092 Sevilla, Spain
55institutetext: School of Physics Science and Engineering, Tongji University,
Shanghai 200092, China 66institutetext: Institute for Advanced Study of Tongji
University, Shanghai 200092, China
# The Hussein-McVoy formula for inclusive breakup revisited
A Tribute to Mahir Hussein
M. Gómez Ramos addr1 J. Gómez-Camacho addr2,addr4 Jin Lei addr5,addr6 A. M.
Moro e1,addr2,addr3
(Received: date / Accepted: date)
###### Abstract
In 1985, Hussein and McVoy [Nucl. Phys. A445 (1985) 124] elucidated a formula
for the evaluation of the nonelastic breakup (“stripping”) contribution in
inclusive breakup reactions. The formula, based on the spectator core model,
acquires a particularly simple and appealing form in the eikonal limit, to the
extent that it has become the standard procedure to analyze single-nucleon
knockout reactions at intermediate energies. In this contribution, a critical
assessment of this formula is presented and its connection with other,
noneikonal expressions discussed. Some calculations comparing the different
formulae are also presented for one-nucleon removal in the 14O+9Be reaction at
several incident energies.
###### Keywords:
Breakup reactions; Inclusive breakup; Hussein-McVoy model
## 1 Introduction
Breakup reactions have been extensively used to extract nuclear structure
information (binding energies, spectroscopic factors, electric response to the
continuum, etc.) and have also helped to improve our understanding of the
dynamics of reactions among composite systems. When the projectile dissociates
into two fragments, the process can be described as an effective three-body
problem, which can be schematically represented as $a+A\rightarrow b+x+A$,
where $a$ represents the projectile which eventually dissociates into $b+x$.
Even in this simplified three-body picture, the theoretical description of the
process is not straightforward due to the presence of three particles in the
final state.
In some applications, one is interested in the inclusive process in which only
one of the fragments (say, $b$) is measured experimentally, that we represent
schematically as $A(a,b)X$. These inclusive cross sections are needed, for
example, in the application of the surrogate method Esc12 and in
spectroscopic studies by means of intermediate-energy knockout reactions Han03
; Tos01 ; Tos14 .
The evaluation of inclusive breakup reactions poses a challenging theoretical
problem because many processes can in principle contribute to the $b$ singles
cross section. When the two fragments $b$ and $x$ “survive” and the target
remains in its ground state, the process is referred to as elastic breakup
(denoted EBU hereafter), also called diffraction dissociation.
The remaining part of the inclusive breakup cross section, that we denote
globally as nonelastic breakup (NEB), includes those processes in which the
$x$ particle interacts nonelastically with the target nucleus. This involves,
for example, the transfer of $x$ to a bound state of the residual system
$B=A+x$, the fusion of $x$ forming a compound nucleus (incomplete fusion) or
simply the target excitation by $x$. If $x$ is a composite system, it also
includes any process in which the latter is broken or excited in any way. The
explicit evaluation of all these processes is not possible in general so
several authors proposed closed-form formulae which avoid the sum over the
final states. Interestingly, all these formulae display a common structure,
given by
$\left.\frac{d^{2}\sigma}{dE_{b}d\Omega_{b}}\right|_{\mathrm{NEB}}=-\frac{2}{\hbar
v_{a}}\rho_{b}(E_{b})\langle\varphi_{x}|W_{x}|\varphi_{x}\rangle,$ (1)
where $\rho_{b}(E_{b})=k_{b}\mu_{b}/[(2\pi)^{3}\hbar^{2}]$ is the density of
states (with $\mu_{b}$ the reduced mass of $b+B$ and $k_{b}$ their relative
wave number), $W_{x}$ is the imaginary part of the optical potential $U_{x}$,
which describes $x+A$ elastic scattering. Expression (1) offers an intuitively
appealing interpretation of nonelastic breakup. As particle $b$ is scattered,
fragments $x$ and $A$ interact with each other. In the original Hamiltonian,
the interaction between $x$ and $A$ will be represented by a real operator
depending on the internal degrees of freedom of the target nucleus. After
application of a Feshbach reduction, this interaction is replaced by the
complex potential $U_{xA}$, describing $x+A$ elastic scattering and whose
imaginary part accounts for nonelastic breakup events. The main difficulty
in this interpretation is to provide a proper description of the state
$|\varphi_{x}\rangle$ (the $x$-channel wave function hereafter), which should
describe the relative motion of $x$ and $A$, compatible with the incoming
boundary conditions, and with the fact that fragment $b$ will be finally
detected with a given momentum $\vec{k}_{b}$.
One of the first of these expressions was due to Hussein and McVoy HM85 ,
given explicitly in the next section. In the same work, they also derived an
approximate expression obtained by treating the distorted waves appearing in
(1) in the Glauber (also referred to as eikonal) approximation. Since this
approximation is valid at high energies, the HM formula so obtained is
expected to be accurate also at high energies. In fact, this eikonal formula
is the key tool to evaluate the NEB part of the inclusive cross section in
nucleon removal knockout reactions used to study spectroscopy of nucleon hole-
states. In these experiments, measured cross sections are compared with
theoretical calculations for EBU and NEB, with the latter being evaluated with
the Glauber version of the HM formula. These theoretical cross sections are
commonly evaluated assuming single-particle wavefunctions for the removed
nucleon and they are later multiplied by the required spectroscopic factors
derived, for example, from shell-model calculations. Then, the ratio
$R_{s}=\sigma^{\mathrm{exp}}/\sigma^{\mathrm{theo}}$ is computed. Typically,
one obtains $R_{s}<1$, which has been interpreted as an effect of additional
correlations not present in small-scale shell-model calculations, presumably
leading to a larger fragmentation of single-particle strengths (and a
subsequent reduction of spectroscopic factors). Moreover, these studies have
found a systematic dependence of this ratio on the separation energy of the
removed nucleon, with $R_{s}$ becoming smaller and smaller as the separation
energy becomes larger Tos14 . Some authors have interpreted this result as an
indication of additional correlations (coming from tensor and short-range
components of the nucleon-nucleon interaction). However, this interpretation
has been recently put into question by other authors, because this trend is
apparently not observed in other reactions, such as transfer Fla13 and
$(p,pN)$ reactions Ata18 ; Kaw18 ; Gom18 .
Clearly, the conclusions strongly rely on the validity of the formulae used in
the evaluation of the inclusive cross sections. Whereas there is a general
consensus on the evaluation of the EBU cross sections, with different
approaches leading to consistent results (as, for instance, the distorted-wave
Born approximation (DWBA) Bau83 , the continuum-discretized coupled-channels
(CDCC) method Aus87 and a variety of semiclassical approaches Typ94 ; Esb96 ;
Kid94 ; Cap04 ), the reliability of the NEB calculations has been more
controversial. One of the criticisms concerns the validity of the Glauber HM
formula at the energies commonly used in these experiments (several tens of
MeV per nucleon).
In this paper, we revisit the HM formula and its Glauber limit, and discuss
its connection with other inclusive breakup formulae proposed by other
authors. We present also preliminary numerical calculations comparing some of
these formulae, which show that the Glauber limit of the HM formula is a
reasonable approximation at the energies at which the nucleon-knockout
reactions have been measured.
Figure 1: Relevant coordinates for the three-body models described in the
text.
## 2 Review of inclusive breakup formulae
### 2.1 The Hussein-McVoy (HM) formula
One of the first attempts to provide a closed-form formula for the inclusive
breakup cross section was due to Hussein and McVoy in their seminal 1985 paper
HM85 . The HM derivation makes use of the spectator assumption for the
detected fragment $b$, which means that this fragment does not participate
directly in the breakup, so that the breakup is produced by any nonelastic
scattering of the participant $x$ with the target. By summing over all $x+A$
states which leave $A$ in an excited state, they arrived to the following
formula for the double differential cross section for NEB:
$\left.\frac{d^{2}\sigma}{dE_{b}d\Omega_{b}}\right|^{\mathrm{HM}}_{\mathrm{NEB}}=-\frac{2}{\hbar
v_{a}}\rho_{b}(E_{b})\langle\varphi^{\mathrm{HM}}_{x}|W_{x}|\varphi^{\mathrm{HM}}_{x}\rangle,$
(2)
where $\varphi^{\mathrm{HM}}_{x}$ is defined by the so-called non-
orthogonality overlap, which is given by:
$\varphi^{\mathrm{HM}}_{x}(\vec{r}_{x})=\langle{\vec{r}}_{x}|\varphi^{\mathrm{HM}}_{x}\rangle=\langle{\vec{r}}_{x}\chi^{(-)}_{b}|\chi_{a}^{(+)}\phi_{a}\rangle.$
(3)
Here, $|\phi_{a}\rangle$ is the projectile ground state,
$|\chi^{(+)}_{a}\rangle$ and $|\chi^{(-)}_{b}\rangle$ are distorted scattering
states for the $a+A$ and $b+B$ systems, respectively. $|\vec{r}_{x}\rangle$ is
a state with given separation of fragment and target. The relevant coordinates
are shown in Fig. 1.
### 2.2 The Eikonal Hussein-McVoy formula (EHM)
Hussein and McVoy obtained further insight into their formula (2) by treating
the distorted waves $\chi^{(+)}_{a}(\vec{r}_{a})$ and
$\chi^{(-)}_{b}(\vec{r}_{b})$ in the Glauber (also known as eikonal)
approximation. The Glauber approximation to an elastic-scattering distorted
wave is:
$\chi_{\vec{k}}^{(+)}({\vec{r}})=\mathrm{e}^{i\vec{k}\cdot\vec{r}}\exp\left[+i\int_{-\infty}^{z}\Delta
k\left(z^{\prime},b\right)\mathrm{d}z^{\prime}\right]$ (4)
where the incident momentum $\vec{k}$ points along the positive $z$-axis, and
$\vec{b}$ is the component of $\vec{r}$ perpendicular to $z$. The exponent in
the second factor is
$\Delta k\left(z^{\prime},b\right)\equiv-\frac{k}{2E}U(z^{\prime},b),$ (5)
which, when integrated along the entire trajectory, gives the optical phase
shift
$2\delta(b)=\int_{-\infty}^{\infty}\Delta
k\left(z^{\prime},b\right)\mathrm{d}z^{\prime}=2\int_{0}^{\infty}\Delta
k\left(z^{\prime},b\right)\mathrm{d}z^{\prime},$ (6)
that is related to the partial-wave optical S-matrix, i.e.,
$S(b)=\exp[{2i\delta(b)}]$.
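Equations (4)-(6) reduce to a one-dimensional integral per impact parameter. The short numerical sketch below evaluates $S(b)$ accordingly; the Woods-Saxon optical potential and the kinematic values are illustrative assumptions, not parameters taken from any of the works cited here:

```python
import numpy as np

def eikonal_S(b, k, E, U, z_max=30.0, n=4001):
    """S(b) = exp[2i*delta(b)], with 2*delta(b) given by Eq. (6)."""
    z = np.linspace(-z_max, z_max, n)
    r = np.sqrt(b**2 + z**2)
    dk = -(k / (2.0 * E)) * U(r)              # Eq. (5): local momentum shift
    return np.exp(1j * np.sum(dk) * (z[1] - z[0]))   # phase along the trajectory

# assumed absorptive Woods-Saxon potential (MeV): depth -40 - 20i,
# radius 3 fm, diffuseness 0.6 fm
U = lambda r: (-40.0 - 20.0j) / (1.0 + np.exp((r - 3.0) / 0.6))
```

With, say, $k=3$ fm$^{-1}$ and $E=100$ MeV, $|S(b)|$ is strongly damped for central impact parameters (absorption) and approaches unity for peripheral ones, as the eikonal picture requires.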
A key step in the HM derivation is the choice of the $U_{a}$ potential,
distorting the incident wave, which is taken as the sum of the corresponding
fragment-target potentials:
$U_{a}=U_{bA}+U_{xA}.$ (7)
As we shall see, this is a particularly fortunate choice, which takes into
account breakup effects in the entrance channel. Usual DWBA approaches would
use a distorting potential depending only on the $a-A$ coordinate, which does
not induce breakup. Hussein and McVoy could handle this more complicated
distorting potential because the eikonal approximation was assumed: the $x$
and $b$ fragments move with the same average velocity as the projectile and
hence their momenta are given by
$\vec{k}_{x}=\left(m_{x}/m_{\mathrm{a}}\right)\vec{k}_{\mathrm{a}},\quad\vec{k}_{b}=\left(m_{b}/m_{a}\right)\vec{k}_{a}.$
(8)
With the particular choice (7) and the assumption (8) one obtains the
following result
$\displaystyle\varphi^{\mathrm{EHM}}_{x}({\vec{r}}_{x})=$
$\displaystyle\int\mathrm{d}^{3}{\vec{r}}_{b}\chi_{b}^{(-)*}({\vec{r}}_{b})\chi_{a}^{(+)}({\vec{r}}_{a})\phi_{a}({\vec{r}}_{bx})$
(9) $\displaystyle=$
$\displaystyle\mathrm{e}^{i\vec{k}_{x}\cdot{\vec{r}}_{x}}\exp\left[i\int_{-\infty}^{z_{x}}\Delta
k_{x}\left(z^{\prime},b_{x}\right)\mathrm{d}z^{\prime}\right]$
$\displaystyle\times\int\mathrm{d}^{3}{\vec{r}}_{b}\mathrm{e}^{i\vec{q}\cdot{\vec{r}}_{b}}S_{bA}\left(b_{b}\right)\phi_{\mathrm{a}}\left({\vec{r}}_{bx}\right)$
with $\vec{q}=\vec{k}_{b}-\vec{k}^{\prime}_{b}$, the average momentum
transferred in $b-A$ elastic scattering.
The last factor is then conveniently expressed as
$\int\mathrm{d}^{3}{\vec{r}}_{b}\mathrm{e}^{i\vec{q}\cdot{\vec{r}}_{b}}S_{bA}\left(b_{b}\right)\phi_{\mathrm{a}}\left({\vec{r}}_{bx}\right)\equiv\mathrm{e}^{iq_{\parallel}z_{x}}\tilde{\phi}_{a,b}(\vec{q},\vec{b}_{x}).$
(10)
Replacing this result into (2) one obtains for the double differential cross
section:
$\displaystyle\left.\frac{d^{2}\sigma}{dE_{b}d\Omega_{b}}\right|^{\mathrm{EHM}}_{\mathrm{NEB}}$
$\displaystyle=\frac{2}{\hbar v_{a}}\rho_{b}(E_{b})\frac{E_{x}}{k_{x}}$
$\displaystyle\times\int\mathrm{d}^{2}\vec{b}_{x}\left|\tilde{\phi}_{a,b}(\vec{q},\vec{b}_{x})\right|^{2}\left[1-\left|S_{xA}(b_{x})\right|^{2}\right].$
(11)
It should be noticed that the NEB depends only on the asymptotic properties,
that is, the $S$ matrices, of the interaction of $b$ and $x$ with the target.
There is no sensitivity to the wavefunctions in the interaction region. This
is a result of the eikonal approximation, plus the particular choice of the
distorted interaction, which included the imaginary potential $W_{xA}$ which
ultimately generates the NEB.
In many applications, one is interested in the total yield of fragment $b$,
which is obtained upon integration of the previous formula over the angular
and energy variables, resulting in:
$\displaystyle\sigma_{\mathrm{NEB}}^{\mathrm{EHM}}=$
$\displaystyle\frac{2}{v_{\mathrm{a}}}(2\pi)^{3}\frac{E_{x}}{\hbar
k_{x}}\int\mathrm{d}^{3}\vec{r}_{b}\mathrm{d}^{3}\vec{r}_{x}\left|\phi_{a}\left(\vec{r}_{bx}\right)\right|^{2}$
$\displaystyle\times$
$\displaystyle\left|S_{bA}(b_{b})\right|^{2}\left[1-\left|S_{xA}\left(b_{x}\right)\right|^{2}\right].$
(12)
This equation has an appealing and intuitive form: the integrand contains the
product of the probabilities for the core being elastically scattered by the
target, $|S_{bA}(b_{b})|^{2}$, times the probability of the valence particle
being absorbed, $(1-|S_{xA}(b_{x})|^{2})$. These probabilities are weighted by
the projectile wave function squared, and integrated over all possible impact
parameters. Because of the Glauber approximation, Eq. (2.2) is expected to be
accurate at high energies (above $\sim$100 MeV per nucleon). In fact, this
formula has been extensively employed in the analysis of intermediate-energy
knockout reactions (see e.g. Han03 ; Tos01 ; Tos14 and references therein)
mostly aimed at obtaining spectroscopic information of nucleon hole states.
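The product structure of this integrand (core survival probability times valence absorption probability, weighted by the projectile density) can be illustrated with a toy Monte Carlo estimate. The Gaussian bound state, the smooth $|S(b)|^{2}$ transmission profiles and all radii below are assumptions chosen for illustration only (not the Woods-Saxon and Glauber inputs of a real calculation); kinematic prefactors are dropped and the transverse-coordinate reduction of Eq. (11) is used:

```python
import numpy as np

rng = np.random.default_rng(0)
a_size = 2.0  # Gaussian size parameter of the toy bound state (fm)

def s_mod_sq(b, R, d=0.6):
    """Toy transmission profile |S(b)|^2: ~0 inside radius R, ~1 outside."""
    return 1.0 / (1.0 + np.exp(-(b - R) / d))

# Sample the b-x relative coordinate from |phi_a|^2 (3D Gaussian)
n = 100_000
r_bx = rng.normal(scale=a_size / np.sqrt(2.0), size=(n, 3))  # fm

# Radial grid over the valence-particle impact parameter b_x
b_grid = np.linspace(0.0, 15.0, 300)
db = b_grid[1] - b_grid[0]

sigma = 0.0
for b in b_grid:
    # Core impact parameter: transverse part of b_x + r_bx (b_x along x-axis)
    b_b = np.hypot(b + r_bx[:, 0], r_bx[:, 1])
    core_survival = s_mod_sq(b_b, R=3.0).mean()   # <|S_bA|^2> over |phi_a|^2
    absorption = 1.0 - s_mod_sq(b, R=4.0)         # 1 - |S_xA|^2
    sigma += 2.0 * np.pi * b * db * absorption * core_survival

print(f"toy sigma_NEB ~ {sigma:.2f} fm^2 (~{10.0 * sigma:.0f} mb)")
```

The estimate peaks at impact parameters near the nuclear surface, where the core survives while the valence particle is absorbed, which is the physical picture behind the HM formula.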
### 2.3 The three-body (3B) model of Austern et al.
In Aus87 , Austern and collaborators derived a three-body formula for the
inclusive breakup cross section using as starting point the post-form
representation of the exact transition amplitude
$\displaystyle\frac{d^{2}\sigma}{d\Omega_{b}dE_{b}}$
$\displaystyle=\frac{2\pi}{\hbar
v_{a}}\rho(E_{b})\sum_{c}|\langle\chi^{(-)}_{b}\Psi^{c,(-)}_{xA}|V_{\mathrm{post}}|\Psi^{(+)}\rangle|^{2}$
$\displaystyle\times\delta(E-E_{b}-E^{c}),$ (13)
where $V_{\mathrm{post}}\equiv V_{bx}+U_{bA}-U_{bB}$ is the post-form
transition operator, $\Psi^{(+)}$ is the system wavefunction with the incident
wave in the $a+A$ channel, and $\Psi^{c,(-)}_{xA}$ are the eigenstates of the
$x+A$ system, with $c=0$ denoting the $x$ and $A$ ground states. Thus, for
$c=0$ this expression gives the EBU part, whereas the terms $c\neq 0$ give the
NEB contribution.
In this model, excitations of the target are not explicitly considered
(although they are effectively taken into account by means of the optical
potentials $U_{xA}$ and $U_{bA}$). Thus, the total wavefunction is given by
$|\Psi^{(+)}\rangle\approx|\Psi^{\mathrm{3B(+)}}\phi^{0}_{A}\rangle,$ (14)
where $|\phi^{0}_{A}\rangle$ is the target ground-state and
$|\Psi^{\mathrm{3B(+)}}\rangle$ is the solution of the three-body equation:
$[\hat{T}_{aA}+\hat{T}_{bx}+V_{bx}+U_{bA}+U_{xA}-E]|\Psi^{\mathrm{3B(+)}}\rangle=0.$
(15)
Using the Feshbach projection formalism and the optical model reduction they
obtain a closed-form expression for the inclusive breakup cross section. The
latter can be further split into its EBU and NEB components Kas82 . The EBU is
given by
$\displaystyle\left.\frac{d^{2}\sigma}{dE_{b}d\Omega_{b}}\right|^{\mathrm{3B}}_{\mathrm{EBU}}$
$\displaystyle=\frac{2\pi}{\hbar
v_{a}}\rho(E_{b})\int{d\Omega_{x}}\,\rho(E_{x})$
$\displaystyle\times\left|\left\langle\chi_{b}^{(-)}\chi_{x}^{(-)}\left(\mathbf{k}_{x}\right)\left|V_{\mathrm{post}}\right|\Psi^{\mathrm{3B(+)}}\right\rangle\right|^{2}$
(16)
where $\rho(E_{x})$ is the density of states of particle $x$, whereas the NEB
part is given by
$\left.\frac{d^{2}\sigma}{dE_{b}d\Omega_{b}}\right|^{\mathrm{3B}}_{\mathrm{NEB}}=-\frac{2}{\hbar
v_{a}}\rho_{b}(E_{b})\langle\varphi^{\mathrm{3B}}_{x}|W_{x}|\varphi^{\mathrm{3B}}_{x}\rangle.$
(17)
The three-body $x$-channel wavefunction
$\varphi^{\mathrm{3B}}_{x}({\vec{r}}_{x})$ is obtained by solving the
inhomogeneous equation
$(E^{+}_{x}-K_{x}-{U}_{x})\varphi^{\mathrm{3B}}_{x}({\vec{r}}_{x})=\langle{\vec{r}}_{x}\chi_{b}^{(-)}|V_{\mathrm{post}}|\Psi^{\mathrm{3B(+)}}\rangle$
(18)
where $E_{x}=E-E_{b}$. This equation can also be written in integral form as
$\displaystyle\varphi^{\mathrm{3B}}_{x}({\vec{r}}_{x})$
$\displaystyle=G^{(+)}_{x}\langle{\vec{r}}_{x}\,\chi_{b}^{(-)}|V_{\mathrm{post}}|\Psi^{\mathrm{3B(+)}}\rangle$
$\displaystyle=\int d^{3}\vec{r}^{\,\prime}_{x}\langle{\vec{r}}_{x}|G^{(+)}_{x}|\vec{r}^{\,\prime}_{x}\rangle\langle\vec{r}^{\,\prime}_{x}\,\chi_{b}^{(-)}|V_{\mathrm{post}}|\Psi^{\mathrm{3B(+)}}\rangle$
(19)
where $G^{(+)}_{x}=(E_{x}-K_{x}-{U}_{x}+i\epsilon)^{-1}$ is the optical
model Green’s function of particle $x$.
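As a sketch of how an inhomogeneous equation of the form (18) can be attacked numerically, the snippet below solves a one-dimensional (s-wave) analogue with a complex optical potential and an outgoing-wave boundary condition, then evaluates the absorption integral $-\langle\varphi|W|\varphi\rangle$ entering Eq. (17). Units with $\hbar^{2}/2\mu=1$ are used, and the potential and source term are toy assumptions:

```python
import numpy as np
from scipy.linalg import solve_banded

# s-wave radial analogue of (E - K - U) phi = source, units hbar^2/(2 mu) = 1
N, rmax = 1500, 40.0
r = np.linspace(rmax / N, rmax, N)
dr = r[1] - r[0]
k = 1.0
E = k**2
U = (-2.0 - 0.5j) * np.exp(-((r / 3.0) ** 2))  # toy complex optical potential
source = np.exp(-((r / 1.5) ** 2))             # toy short-ranged source term

# Tridiagonal system for (E + d^2/dr^2 - U) u = source, with u(0) = 0 implicit
ab = np.zeros((3, N), dtype=complex)
ab[0, 1:] = 1.0 / dr**2            # superdiagonal
ab[1, :] = E - U - 2.0 / dr**2     # main diagonal
ab[2, :-1] = 1.0 / dr**2           # subdiagonal
rhs = source.astype(complex)

# Outgoing-wave condition at rmax: u_N = u_{N-1} exp(i k dr)
ab[1, -1] = 1.0
ab[2, -2] = -np.exp(1j * k * dr)
rhs[-1] = 0.0

u = solve_banded((1, 1), ab, rhs)

# Absorption integral entering the NEB formula: -<u|W|u>, with W = Im U < 0
W = U.imag
neb = -np.sum(W * np.abs(u) ** 2) * dr
print(f"-<phi|W|phi> = {neb:.4f} (positive, as expected for absorption)")
```

Because the source term is short-ranged, the solution is purely outgoing at large $r$, mirroring the role of $V_{\mathrm{post}}$ in making the full problem tractable.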
Austern et al. also derived an interesting alternative expression for
$\varphi^{\mathrm{3B}}_{x}({\vec{r}}_{x})$, namely,
$\varphi^{\mathrm{3B}}_{x}({\vec{r}}_{x})=\langle{\vec{r}}_{x}\chi_{b}^{(-)}|\Psi^{\mathrm{3B(+)}}\rangle.$
(20)
It is worth noting that the formulae (20) and (2.3) are formally equivalent
and, as such, they must provide identical results. Although (20) may seem
simpler, in practice, it requires that the three-body wavefunction
$\Psi^{\mathrm{3B(+)}}$ be accurate in the full configuration space, since
there is no natural cutoff in the integration variable ${\vec{r}}_{b}$. By
contrast, in Eqs. (18) and (2.3), the presence of the $V_{\mathrm{post}}$
operator will tend to emphasize small $b-x$ separations and hence one requires
only an approximate three-body wavefunction accurate within that range. This
can be achieved, for instance, by expanding $\Psi^{\mathrm{3B(+)}}$ in terms of
$b-x$ eigenstates, as done in the CDCC method Aus87 , or in terms of Weinberg
states Pan13 . The implementation of the method with CDCC wavefunctions is
numerically challenging and the first calculation of this kind was only
recently reported Jin19 .
### 2.4 The Ichimura, Austern, Vincent (IAV) formula
Ichimura, Austern and Vincent IAV85 proposed a simpler DWBA version of the 3B
formula above. In DWBA, the exact wavefunction $\Psi^{(+)}$ is approximated by
the factorized form:
$|\Psi^{(+)}\rangle\approx|\chi_{a}^{(+)}\phi_{a}\phi^{0}_{A}\rangle.$ (21)
With this approximation, the NEB component of the $b$ singles cross section
becomes
$\left.\frac{d^{2}\sigma}{dE_{b}d\Omega_{b}}\right|^{\mathrm{IAV,post}}_{\mathrm{NEB}}=-\frac{2}{\hbar
v_{a}}\rho_{b}(E_{b})\langle\varphi^{\mathrm{post}}_{x}|W_{x}|\varphi^{\mathrm{post}}_{x}\rangle,$
(22)
that is formally identical to (17) but with the $x$-channel wavefunction given
now by
$\varphi^{\mathrm{post}}_{x}({\vec{r}}_{x})=G^{(+)}_{x}\langle{\vec{r}}_{x}\,\chi_{b}^{(-)}|V_{\mathrm{post}}|\chi_{a}^{(+)}\phi_{a}\rangle.$
(23)
The IAV model has been recently revisited by several groups Car16 ; Jin15 ;
Pot15 and its accuracy assessed against experimental data with considerable
success Jin17 ; Pot17 .
### 2.5 The Udagawa, Tamura (UT) formula
Udagawa and Tamura Uda81 derived a formula similar to that of IAV, but making
use of the prior form DWBA. Their final result can be again expressed in the
form (1),
$\left.\frac{d^{2}\sigma}{dE_{b}d\Omega_{b}}\right|^{\mathrm{UT}}_{\mathrm{NEB}}=-\frac{2}{\hbar
v_{i}}\rho_{b}(E_{b})\langle\varphi^{\mathrm{prior}}_{x}|W_{x}|\varphi^{\mathrm{prior}}_{x}\rangle,$
(24)
where $\varphi^{\mathrm{prior}}_{x}$ is the solution of an $x-A$ inhomogeneous
equation similar to Eq. (23) but replacing in the source term the post-form
transition operator, $V_{\mathrm{post}}$, by its prior form counterpart,
$V_{\mathrm{prior}}\equiv U_{xA}+U_{bA}-U_{a}$, i.e.:
$(E^{+}_{x}-K_{x}-{U}_{x})\varphi^{\mathrm{prior}}_{x}({\vec{r}}_{x})=\langle{\vec{r}}_{x}\,\chi_{b}^{(-)}|V_{\mathrm{prior}}|\chi^{(+)}_{a}\phi_{a}\rangle.$
(25)
## 3 Relation among theories
In this section we discuss the connection between the formalisms outlined in
the previous section, with emphasis on the HM formulae. Some of these
relations have already been discussed in previous works, most notably in the
comprehensive work of Ichimura Ich90 . We focus here on the HM formulae (2)
and (2.2) due to their relevance in the analysis and interpretation of knockout
studies.
The HM formula (2) can be readily obtained starting from Austern’s identity
(20), and making the replacement
$|\Psi^{\mathrm{3B(+)}}\rangle\approx|\chi_{a}^{(+)}\phi_{a}\rangle$. It is
also enlightening to see the connection of the HM formula (2) with the IAV
model. For that, one needs to transform first the IAV formula, Eq. (22) into
its prior form. This can be done using the following relation due to Li,
Udagawa and Tamura Li84 :
$\varphi^{\mathrm{post}}_{x}(\vec{r}_{x})=\varphi^{\mathrm{prior}}_{x}(\vec{r}_{x})+\varphi^{\mathrm{HM}}_{x}(\vec{r}_{x}).$
(26)
Replacing (26) into Eq. (22) one gets
$\displaystyle\left.\frac{d^{2}\sigma}{dE_{b}d\Omega_{b}}\right|^{\mathrm{IAV,prior}}_{\mathrm{NEB}}$
$\displaystyle=\left.\frac{d^{2}\sigma}{dE_{b}d\Omega_{b}}\right|^{\mathrm{UT}}_{\mathrm{NEB}}+\left.\frac{d^{2}\sigma}{dE_{b}d\Omega_{b}}\right|^{\mathrm{HM}}_{\mathrm{NEB}}$
$\displaystyle+\left.\frac{d^{2}\sigma}{dE_{b}d\Omega_{b}}\right|^{\mathrm{IN}}_{\mathrm{NEB}},$
(27)
with the interference (IN) term
$\left.\frac{d^{2}\sigma}{dE_{b}d\Omega_{b}}\right|^{\mathrm{IN}}_{\mathrm{NEB}}=-\frac{4}{\hbar
v_{a}}\rho_{b}(E_{b})\mathrm{Re}\langle\varphi^{\mathrm{prior}}_{x}|W_{xA}|\varphi^{\mathrm{HM}}_{x}\rangle.$
(28)
Equation (27) represents the post-prior equivalence of the NEB cross sections
in the IAV model, with the RHS corresponding to the prior-form expression of
this model. The first term is just the NEB formula proposed by Udagawa and
Tamura Uda81 , which is formally analogous to the IAV post-form formula (22),
but with the $x$-channel wave function given by
$\varphi^{\mathrm{prior}}_{x}({\vec{r}}_{x})$. The second term in (27) is the
HM formula of Eq. (2) and the last term arises due to the interference between
the UT and HM terms.
The HM formula [Eqs. (2) and (3)] is then recovered from (27) by neglecting
altogether the UT term. This result suggests that this HM formula is an
incomplete NEB theory, since the UT term can be very large, or even dominant
Jin15b. (This conclusion was in fact shared by M. Hussein himself who, at
least in recent years, was aware of the incompleteness of the HM formula
Hussein.)
The situation for the eikonal HM formula, Eq. (2.2), is qualitatively
different. In this case, we cannot start from the DWBA formula (27), since the
latter assumes that the auxiliary potential $U_{a}$ is a function of
the $a-A$ relative coordinate. This is not the case for the HM choice, Eq. (7).
If $U_{a}$ is allowed to be a function of both the ${\vec{r}}_{a}$ and
${\vec{r}}_{bx}$ coordinates, (26) becomes:
$\displaystyle\varphi^{\mathrm{post}}_{x}$
$\displaystyle=G_{x}\langle{\vec{r}}_{x}\,\chi_{b}^{(-)}|V_{\mathrm{prior}}|\psi^{3B(+)}_{xb}\rangle+\langle{\vec{r}}_{x}\chi^{(-)}_{b}|\psi^{3B(+)}_{xb}\rangle.$
(29)
But, for the HM choice of the $U_{a}$ potential, Eq. (7), $V_{\mathrm{prior}}$
vanishes identically, and one recovers Austern’s formula, Eq. (20). This shows
that the EHM approximation (2.2) incorporates three-body effects which go
beyond its non-eikonal form (2). In fact, Eq. (2.2) represents the Glauber
limit of the three-body formula by Austern et al. In this regard, one may expect
the eikonal HM formula to be more accurate than its original noneikonal
counterpart, whenever the Glauber approximation is justified (i.e., at high
energies).
## 4 Application to nucleon knockout from 14O
In this section, we present some preliminary numerical results comparing the
HM, EHM and IAV formulae. A more systematic study, including more detailed
observables, such as momentum distributions, is in progress and will be
presented elsewhere.
### 4.1 Practical considerations
Some remarks on the numerical implementation of the different models are in
order. The post-form IAV formula faces the problem of the marginal convergence
of the integrals appearing in the source term of Eq. (23), due to the
oscillatory character of the functions appearing in the initial and final
states. To overcome this problem, several regularization procedures have been
used in the literature. Huby and Mines Hub65 and Vincent Vin68 multiply the
source term by an exponential convergence factor, that damps the contribution
of the integral at large distances. Vincent and Fortune Vin70 use integration
in the complex plane to transform the oscillatory functions into decaying
exponentials. One may also replace the distorted waves
$\chi_{b}^{(-)}({\vec{r}}_{b})$ by wave packets constructed by averaging these
distorted waves over finite energy intervals Jin15 ; Jin15b . The resulting
averaged functions become square-integrable and the source term of Eq. (23)
vanishes at large distances.
Notice that this problem does not arise in the prior form version of this
formula because, in this case, the transition operator ($V_{\mathrm{prior}}$)
makes the source term short-ranged. In Jin15b , a numerical comparison between
the post and prior IAV formulae was performed for the 58Ni(d,p)X reaction and
they were found to yield almost identical results, provided a
regularization procedure was applied to the post formula.
### 4.2 Numerical results
For these test calculations we consider the one-neutron and one-proton removal
processes taking place in the 14O+9Be reaction at two different incident
energies, namely, 53 MeV/u and 80 MeV/u. Experimental data for this reaction
were reported in Ref. Fla12 and analyzed in terms of the eikonal model as
well as the semiclassical “transfer the continuum” method of Bonaccorso and
Brink Bon88 ; Bon91 , which provides an alternative to the eikonal method for
the case of neutron removal.
In the calculations presented here, we focus on the NEB part of the cross
section. We compare the DWBA IAV [Eq. (22)], the HM [Eq. (2)] and the Eikonal
HM formulae [Eq. (2.2)]. Owing to the aforementioned convergence issues, the
calculations with the IAV method were performed using its prior form formula.
Intrinsic spins are ignored for simplicity. For a meaningful comparison, the
same structure inputs and optical potentials are considered in all these
calculations. In particular, the wavefunction of the removed nucleon is
generated with a Woods-Saxon potential with parameters $R_{0}=3.29$ fm,
$a=0.67$ fm and the depth adjusted to reproduce the proton ($S_{p}=4.6$ MeV)
or neutron ($S_{n}=23.2$ MeV) separation energy, as appropriate. We used
energy-independent neutron-target and core-target potentials derived from the
$t\rho$ and $t\rho\rho$ approximation, respectively, assuming for the nucleon-
nucleon t-matrices the parametrization of Hof78 with nucleon-nucleon cross
sections from Cha90 and imaginary-to-real ratios from Ray79 and densities
extracted from Hartree-Fock calculations with the SkX interaction Bro98 . For
the IAV and HM calculations, the potential between projectile and target is
also needed. This potential has been computed via a folding of the neutron-
target and core-target potentials with the square of the wave-function between
neutron and core in the projectile. For the EHM calculation, the optical limit
of the Glauber approximation was used to derive the required S-matrices from
these potentials.
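For reference, the eikonal $S$-matrix obtained from a given optical potential can be sketched as $S(b)=\exp\!\big[-\tfrac{i}{\hbar v}\int U(\sqrt{b^{2}+z^{2}})\,\mathrm{d}z\big]$. The Gaussian potential and throwaway units below ($\hbar v = 1$) are assumptions for illustration, not the $t\rho$ inputs used in the actual calculations:

```python
import numpy as np

def eikonal_S(b, U_of_r, zmax=30.0, nz=2001, hbar_v=1.0):
    """S(b) = exp(i chi(b)), chi(b) = -(1/hbar v) * integral of U along z."""
    z = np.linspace(-zmax, zmax, nz)
    chi = -np.sum(U_of_r(np.sqrt(b**2 + z**2))) * (z[1] - z[0]) / hbar_v
    return np.exp(1j * chi)

# Toy complex potential U = V + iW; W < 0 gives absorption, so |S(b)| < 1
U = lambda rr: (-5.0 - 10.0j) * np.exp(-((rr / 3.0) ** 2))

for b in (0.0, 3.0, 6.0, 12.0):
    S = eikonal_S(b, U)
    print(f"b = {b:4.1f} fm   |S|^2 = {abs(S) ** 2:.3f}")
```

The resulting $|S(b)|^{2}$ rises smoothly from full absorption at small impact parameter to unity at large $b$, which is the transmission-profile behaviour the HM and EHM formulae rely on.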
Figure 2: Integrated nonelastic breakup cross sections for the 9Be(14O,13N)
(top) and 9Be(14O,13O) (bottom) reactions at 80 MeV/u and 53 MeV/u computed
with the IAV, HM and EHM formulae.
The results of these calculations are presented in Fig. 2, in the form of bar
diagrams. The upper and lower panels correspond to the one-proton and one-
neutron removal reactions. For the neutron removal case, we find that the
three methods give close results. This suggests that the nonorthogonality
term, which is absent in the HM calculation, is small in the case of removal
of a tightly bound nucleon. For the proton removal, the HM result tends to
overestimate the IAV result. This indicates that, at least in this case, the HM
and nonorthogonality terms interfere destructively (cf. the last term in Eq.
(27)). Remarkably, the Glauber version of the HM formula is in better
agreement with the IAV result than the original (i.e. non-eikonal) HM formula.
As discussed in the previous section, this result can be interpreted
recognizing that the EHM can be regarded as an approximation to the full
three-body IAV theory and, as such, includes effectively contributions from
the three terms in Eq. (27).
Two main conclusions can be drawn from this preliminary analysis. First, the
non-eikonal HM formula provides an accurate approximation to the more
elaborate IAV formula for deeply bound nucleons, but fails for more weakly
bound nucleons. Second, the EHM formula represents a very good approximation
to the IAV formula both for well bound and weakly bound nucleons and for
energies as low as 50 MeV/u. This result seems to add support to the use of
the EHM formula in the analysis of knockout reactions.
We note that some other approximations are implicit in the derivation of all
discussed formulae, including the IAV one. For example, all these formulae are
based on the spectator assumption for the core fragment. This means that the
latter is simply scattered elastically by the target nucleus, but does not
participate in the nucleon-removal dynamics. For example, possible
rescattering effects of the struck nucleon are not taken into account. These
rescattering effects, not considered by any of the presented theories, could
modify the cross section for nucleon removal, whose systematics are currently
under discussion Tos14 . A recent comprehensive review on the subject is given
in Ref. Aum20 .
## 5 Summary and conclusions
In this work, we have reexamined the nonelastic breakup formula devised by
Hussein and McVoy (HM) and extensively employed in knockout studies. We have
shown that this formula can be derived from the Ichimura, Austern, Vincent
(IAV) model, recently revisited and applied by several groups, as well as from
the three-body formula of Austern et al. Aus87 . We have also shown that,
owing to the particular choice of the auxiliary interaction $U_{aA}$, the
eikonal version of the HM formula (EHM) incorporates genuine three-body
effects. These effects are also present in the more general three-body formula
of Austern et al., but in a more complicated way.
Preliminary calculations for the one-nucleon removal in the 14O+9Be reaction
show that the EHM formula accurately reproduces the results of the IAV model.
For the removal of the strongly bound neutron ($S_{n}=23.2$ MeV), the
noneikonal HM formula is also in good agreement with the IAV result. However,
for the one-proton removal ($S_{p}=4.6$ MeV), the noneikonal HM formula tends
to overestimate the IAV result. It will be interesting to extend these
calculations to other systems and energies to see whether these conclusions
remain.
###### Acknowledgements.
This work has been partially supported by the Spanish Ministerio de Ciencia,
Innovación y Universidades under projects FIS2017-88410-P and
RTI2018-098117-B-C21 and by the European Union’s Horizon 2020 research and
innovation program under Grant Agreement No. 654002. J.L. acknowledges support
from the National Natural Science Foundation of China (Grants No. 12035011,
No. 11975167, and No. 11535004), by the National Key R&D Program of China
(Contracts No. 2018YFA0404403 and 2016YFE0129300), and by the Fundamental
Research Funds for the Central Universities (Grant No. 22120200101). M. G.-R.
acknowledges support from the Alexander von Humboldt foundation.
## References
* (1) J.E. Escher, J.T. Burke, F.S. Dietrich, N.D. Scielzo, I.J. Thompson, W. Younes, Rev. Mod. Phys. 84, 353 (2012). DOI 10.1103/RevModPhys.84.353
* (2) P. Hansen, J. Tostevin, Annu. Rev. of Nucl. and Part. Sci 53(1), 219 (2003). DOI 10.1146/annurev.nucl.53.041002.110406
* (3) J. Tostevin, Nucl. Phys. A 682(1), 320 (2001). DOI 10.1016/S0375-9474(00)00656-4
* (4) J.A. Tostevin, A. Gade, Phys. Rev. C 90, 057602 (2014). DOI 10.1103/PhysRevC.90.057602
* (5) M. Hussein, K. McVoy, Nucl. Phys. A 445(1), 124 (1985). DOI 10.1016/0375-9474(85)90364-1
* (6) F. Flavigny, et al., Phys. Rev. Lett. 110, 122503 (2013). DOI 10.1103/PhysRevLett.110.122503
* (7) L. Atar, et al., Phys. Rev. Lett. 120, 052501 (2018). DOI 10.1103/PhysRevLett.120.052501
* (8) S. Kawase, et al., Prog. Theor. Exp. Phys 2018, 021D01 (2018). DOI 10.1093/ptep/pty011
* (9) M. Gómez-Ramos, A. Moro, Phys. Lett. B 785, 511 (2018). DOI 10.1016/j.physletb.2018.08.058
* (10) G. Baur, R. Shyam, F. Rösel, D. Trautmann, Phys. Rev. C 28, 946 (1983). DOI 10.1103/PhysRevC.28.946
* (11) N. Austern, Y. Iseri, M. Kamimura, M. Kawai, G. Rawitscher, M. Yahiro, Phys. Rep. 154, 125 (1987). DOI 10.1016/0370-1573(87)90094-9
* (12) S. Typel, G. Baur, Phys. Rev. C50, 2104 (1994). DOI 10.1103/PhysRevC.50.2104
* (13) H. Esbensen, G.F. Bertsch, Nucl. Phys. A600, 37 (1996). DOI 10.1016/0375-9474(96)00006-1
* (14) T. Kido, K. Yabana, Y. Suzuki, Phys. Rev. C 50, R1276 (1994). DOI 10.1103/PhysRevC.50.R1276
* (15) P. Capel, G. Goldstein, D. Baye, Phys. Rev. C 70, 064605 (2004). DOI 10.1103/PhysRevC.70.064605
* (16) A. Kasano, M. Ichimura, Physics Letters B 115(2), 81 (1982). DOI 10.1016/0370-2693(82)90800-0
* (17) D.Y. Pang, N.K. Timofeyuk, R.C. Johnson, J.A. Tostevin, Phys. Rev. C 87, 064613 (2013). DOI 10.1103/PhysRevC.87.064613
* (18) J. Lei, A.M. Moro, Phys. Rev. Lett. 123, 232501 (2019). DOI 10.1103/PhysRevLett.123.232501
* (19) M. Ichimura, N. Austern, C.M. Vincent, Phys. Rev. C 32, 431 (1985). DOI 10.1103/PhysRevC.32.431
* (20) B.V. Carlson, R. Capote, M. Sin, Few-Body Syst. 57(5), 307 (2016). DOI 10.1007/s00601-016-1054-8
* (21) J. Lei, A.M. Moro, Phys. Rev. C 92, 044616 (2015). DOI 10.1103/PhysRevC.92.044616
* (22) G. Potel, F.M. Nunes, I.J. Thompson, Phys. Rev. C 92, 034611 (2015). DOI 10.1103/PhysRevC.92.034611
* (23) J. Lei, A.M. Moro, Phys. Rev. C 95, 044605 (2017). DOI 10.1103/PhysRevC.95.044605
* (24) G. Potel, G. Perdikakis, B.V. Carlson, M.C. Atkinson, W.H. Dickhoff, J.E. Escher, M.S. Hussein, J. Lei, W. Li, A.O. Macchiavelli, A.M. Moro, F.M. Nunes, S.D. Pain, J. Rotureau, The European Physical Journal A 53(9), 178 (2017). DOI 10.1140/epja/i2017-12371-9
* (25) T. Udagawa, T. Tamura, Phys. Rev. C 24, 1348 (1981). DOI 10.1103/PhysRevC.24.1348
* (26) M. Ichimura, Phys. Rev. C 41, 834 (1990). DOI 10.1103/PhysRevC.41.834
* (27) X.H. Li, T. Udagawa, T. Tamura, Phys. Rev. C 30, 1895 (1984). DOI 10.1103/PhysRevC.30.1895
* (28) J. Lei, A.M. Moro, Phys. Rev. C 92, 061602 (2015). DOI 10.1103/PhysRevC.92.061602
* (29) M. Hussein. Private Communication
* (30) R. Huby, J.R. Mines, Rev. Mod. Phys. 37, 406 (1965). DOI 10.1103/RevModPhys.37.406
* (31) C.M. Vincent, Phys. Rev. 175, 1309 (1968). DOI 10.1103/PhysRev.175.1309
* (32) C.M. Vincent, H.T. Fortune, Phys. Rev. C 2, 782 (1970). DOI 10.1103/PhysRevC.2.782
* (33) F. Flavigny, A. Obertelli, A. Bonaccorso, G.F. Grinyer, C. Louchart, L. Nalpas, A. Signoracci, Phys. Rev. Lett. 108, 252501 (2012). DOI 10.1103/PhysRevLett.108.252501
* (34) A. Bonaccorso, D.M. Brink, Phys. Rev. C 38, 1776 (1988). DOI 10.1103/PhysRevC.38.1776
* (35) A. Bonaccorso, D.M. Brink, Phys. Rev. C 44, 1559 (1991). DOI 10.1103/PhysRevC.44.1559
* (36) G. Hoffmann, et al., Physics Letters B 76(4), 383 (1978). DOI 10.1016/0370-2693(78)90888-2
* (37) S.K. Charagi, S.K. Gupta, Phys. Rev. C 41, 1610 (1990). DOI 10.1103/PhysRevC.41.1610. URL https://link.aps.org/doi/10.1103/PhysRevC.41.1610
* (38) L. Ray, Phys. Rev. C 20, 1857 (1979). DOI 10.1103/PhysRevC.20.1857
* (39) B. Alex Brown, Phys. Rev. C 58, 220 (1998). DOI 10.1103/PhysRevC.58.220
* (40) T. Aumann, et al., Prog. Part. Nucl. Phys. (accepted), arXiv:2012.12553 (2020)
# Disentangled Sequence Clustering for
Human Intention Inference
Mark Zolotas and Yiannis Demiris M. Zolotas and Y. Demiris are with the
Personal Robotics Lab, Dept. of Electrical and Electronic Engineering,
Imperial College London, SW7 2BT, UK; Email: {mark.zolotas12,
<EMAIL_ADDRESS>. This research was supported in part by an EPSRC
Doctoral Training Award to MZ, and a Royal Academy of Engineering Chair in
Emerging Technologies to YD.
###### Abstract
Equipping robots with the ability to infer human intent is a vital
precondition for effective collaboration. Most computational approaches
towards this objective derive a probability distribution of “intent”
conditioned on the robot’s perceived state. However, these approaches
typically assume task-specific labels of human intent are known a priori. To
overcome this constraint, we propose the Disentangled Sequence Clustering
Variational Autoencoder (DiSCVAE), a clustering framework capable of learning
such a distribution of intent in an unsupervised manner. The proposed
framework leverages recent advances in unsupervised learning to disentangle
latent representations of sequence data, separating time-varying local
features from time-invariant global attributes. As a novel extension, the
DiSCVAE also infers a discrete variable to form a latent mixture model and
thus enable clustering over these global sequence concepts, _e.g._ high-level
intentions. We evaluate the DiSCVAE on a real-world human-robot interaction
dataset collected using a robotic wheelchair. Our findings reveal that the
inferred discrete variable coincides with human intent, holding promise for
collaborative settings, such as shared control.
## I Introduction
Humans are remarkably proficient at inferring the implicit intentions of
others from their overt behaviour. Consequently, humans are adept at planning
their actions when collaborating together. Intention inference may therefore
prove equally imperative in creating fluid and effective human-robot
collaborations. Robots endowed with this ability have been extensively
explored [1, 2, 3], yet their integration into real-world settings remains an
open research problem.
One major impediment to real-world instances of robots performing human
intention inference is the assumption that a known representation of intent
exists. For example, most methods in collaborative robotics assume a discrete
set of task goals is known a priori. Under this assumption, the robot can
infer a distribution of human intent by applying Bayesian reasoning over the
entire goal space [4, 3]. Whilst such a distribution offers a versatile and
practical representation of intent, the need for predefined labels is not
always feasible unless restricted to a specific task scope.
Figure 1: Overview visualisation of the intention inference experiment on a
robotic wheelchair. Bottom: Recorded output of an actual human subject
navigating towards a goal (red arrow). Top Right: Maps of the three experiment
settings, with red stars denoting target locations. Top Left: Probability
histogram of the categorical variable modelling “intentions” at this
particular snapshot of the data for $K\,{=}\,6$ clusters. The bars are
coloured to align with the wheelchair trajectories generated by sampling from
the corresponding clusters. Multiple diverse trajectories can be sampled from
the same cluster and each trajectory’s length is dependent on the velocity
commands drawn from the generative model. Figure best viewed in colour.
Another fundamental challenge is that many diverse actions often fulfil the
same intention. A popular class of probabilistic algorithms for overcoming
this challenge are generative models, which derive a distribution of
observations by introducing latent random variables to capture any hidden
underlying structure. Within the confines of intention inference, the modelled
latent space is then presumed to represent all possible causal relations
between intentions and observed human behaviour [5, 6, 7]. The advent of deep
generative models, such as Variational Autoencoders (VAEs) [8, 9], has also
enabled efficient inference of this latent space from abundant sources of
high-dimensional data.
Inspired by the prospects of not only extracting hidden “intent” variables but
also interpreting their meaning, we frame the intention inference problem as a
process of disentangling the latent space. Disentanglement is a core research
thrust in representation learning that refers to the recovery of abstract
concepts from independent factors of variation assumed to be responsible for
generating the observed data [10, 11, 12]. The interpretable structure of
these independent factors is exceedingly desirable for human-in-the-loop
scenarios [13], like robotic wheelchair assistance; however, few applications
have transferred over to the robotics domain [7].
We strive to bridge this gap by proposing an unsupervised disentanglement
framework suitable for human intention inference. Capitalising on prior
disentanglement techniques, we learn a latent representation of sequence
observations that divides into a local (time-varying) and a global (time-
invariant) part [14, 15]. Our proposed variant simultaneously infers a
categorical variable to construct a mixture model and thereby form clusters in
the global latent space. In the scope of intention inference, we view the
continuous local variable as representative of desirable low-level
trajectories, whilst the discrete counterpart signifies high-level intentions.
To summarise, this paper’s contributions are:
* •
A framework for clustering disentangled representations of sequences, coined
as the Disentangled Sequence Clustering Variational Autoencoder (DiSCVAE);
* •
Findings from a robotic wheelchair experiment (see Fig. 1) that demonstrate
how clusters learnt without explicit supervision can be interpreted as user-
intended navigation behaviours, or strongly correlated with “labels” of such
intent in a semi-supervised context.
## II Preliminaries
Before defining the DiSCVAE, we describe supporting material from
representation learning, starting with the VAE displayed in Fig. 2a. The VAE
is a deep generative model consisting of a generative and recognition network.
These networks are jointly trained by applying the reparameterisation trick
[8, 9] and maximising the evidence lower bound (ELBO)
$\mathcal{L}_{\theta,\phi}(\mathbf{x})$ on the marginal log-likelihood:
$\displaystyle\log p_{\theta}(\mathbf{x})$
$\displaystyle\geq\mathcal{L}_{\theta,\phi}(\mathbf{x})$ (1)
$\displaystyle\equiv\operatorname{\mathbb{E}}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\\!\big{[}\log
p_{\theta}(\mathbf{x}|\mathbf{z})\big{]}-\text{KL}\big{(}{q_{\phi}(\mathbf{z}|\mathbf{x})\,||\,p_{\theta}(\mathbf{z})}\big{)},$
where the first term is the reconstruction error of reproducing observations
$\mathbf{x}$, and the second KL divergence term is a regulariser that
encourages the variational posterior $q_{\phi}(\mathbf{z}\,|\,\mathbf{x})$ to
be close to the prior $p_{\theta}(\mathbf{z})$. For notational convenience,
parameters $\phi$ and $\theta$ will be omitted hereafter.
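The reparameterisation trick and the two ELBO terms of Eq. (1) can be sketched in a few lines of PyTorch. This is a minimal sketch with illustrative layer sizes and a Bernoulli likelihood; the actual network architectures are assumptions, not the ones used in the paper:

```python
import torch
import torch.nn as nn

# Minimal VAE: reparameterised Gaussian posterior q(z|x), Bernoulli
# likelihood p(x|z), and analytic KL to a standard-normal prior p(z).
class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU())
        self.mu = nn.Linear(h, z_dim)
        self.logvar = nn.Linear(h, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(),
                                 nn.Linear(h, x_dim))

    def elbo(self, x):
        hid = self.enc(x)
        mu, logvar = self.mu(hid), self.logvar(hid)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam trick
        logits = self.dec(z)
        rec = -nn.functional.binary_cross_entropy_with_logits(
            logits, x, reduction="none").sum(-1)                  # log p(x|z)
        kl = 0.5 * (mu**2 + logvar.exp() - 1.0 - logvar).sum(-1)  # KL(q||p)
        return (rec - kl).mean()

torch.manual_seed(0)
vae = VAE()
x = torch.rand(8, 784).round()   # dummy binary batch
loss = -vae.elbo(x)              # maximising the ELBO = minimising -ELBO
loss.backward()
```

Because the sampling noise is factored out of the network parameters, gradients flow through both ELBO terms into the encoder and decoder.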
(a) VAE
(b) VRNN
(c) GMVAE
Figure 2: Deep generative models for: (a) variational inference [8, 9]; (b) a
sequential VAE that conditions on the deterministic hidden states of an RNN at
each timestep (VRNN [16]); (c) a VAE with a Gaussian mixture prior (GMVAE).
Dashed lines denote inference and bold lines indicate generation.
Deep generative models can also be parameterised by Recurrent Neural Networks
(RNNs) to represent temporal data under the VAE learning principle. A notable
example is the Variational RNN (VRNN) [16] shown in Fig. 2b, which conditions
on latent variables and observations from previous timesteps via its
deterministic hidden state,
$\mathbf{h}_{t}(\mathbf{x}_{t-1},\mathbf{z}_{t-1},\mathbf{h}_{t-1})$, leading
to the joint distribution:
$\displaystyle p(\mathbf{x}_{\leq T},\mathbf{z}_{\leq T})$
$\displaystyle=\prod_{t=1}^{T}p(\mathbf{x}_{t}\,|\,\mathbf{z}_{\leq
t},\mathbf{x}_{<t})p(\mathbf{z}_{t}\,|\,\mathbf{x}_{<t},\mathbf{z}_{<t})$ (2)
$\displaystyle=\prod_{t=1}^{T}p(\mathbf{x}_{t}\,|\,\mathbf{z}_{t},\mathbf{h}_{t})p(\mathbf{z}_{t}\,|\,\mathbf{h}_{t}),$
where the true posterior is conditioned on information pertaining to previous
observations $\mathbf{x}_{<t}$ and latent states $\mathbf{z}_{<t}$, hence
accounting for temporal dependencies. The VRNN state $\mathbf{h}_{t}$ is also
shared with the inference procedure to yield the variational posterior
distribution:
$q(\mathbf{z}_{\leq T}\,|\,\mathbf{x}_{\leq
T})=\prod_{t=1}^{T}q(\mathbf{z}_{t}|\mathbf{x}_{\leq
t},\mathbf{z}_{<t})=\prod_{t=1}^{T}q(\mathbf{z}_{t}|\mathbf{x}_{t},\mathbf{h}_{t}).$
(3)
The DiSCVAE developed in the following section adopts an approach akin to the
VRNN, where latent variables are injected into the forward autoregressive
dynamics.
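A single VRNN step following Eqs. (2)-(3) might look as below. The GRU cell, the diagonal-Gaussian parameterisation and all dimensions are illustrative assumptions rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x_dim, z_dim, h_dim = 10, 4, 32
rnn = nn.GRUCell(x_dim + z_dim, h_dim)          # h_t carries (x_{<t}, z_{<t})
prior_net = nn.Linear(h_dim, 2 * z_dim)         # p(z_t | h_t)
post_net = nn.Linear(h_dim + x_dim, 2 * z_dim)  # q(z_t | x_t, h_t)
dec_net = nn.Linear(h_dim + z_dim, x_dim)       # p(x_t | z_t, h_t)

def vrnn_step(x_t, h_t):
    """One timestep: infer z_t, decode x_t, KL(q||p), then advance h."""
    mu_p, logvar_p = prior_net(h_t).chunk(2, -1)
    mu_q, logvar_q = post_net(torch.cat([h_t, x_t], -1)).chunk(2, -1)
    z_t = mu_q + torch.randn_like(mu_q) * torch.exp(0.5 * logvar_q)
    x_mean = dec_net(torch.cat([h_t, z_t], -1))
    kl = 0.5 * (logvar_p - logvar_q
                + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                - 1.0).sum(-1)
    h_next = rnn(torch.cat([x_t, z_t], -1), h_t)  # h_{t+1}(x_t, z_t, h_t)
    return z_t, x_mean, kl, h_next

h = torch.zeros(1, h_dim)
total_kl = torch.zeros(1)
for t in range(5):                     # unroll a short dummy sequence
    x_t = torch.randn(1, x_dim)
    z_t, x_mean, kl, h = vrnn_step(x_t, h)
    total_kl = total_kl + kl
```

Note that both the prior and the posterior condition on the same recurrent state $\mathbf{h}_{t}$, which is what lets the prior evolve with the sequence instead of staying fixed.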
## III Disentangled Sequence Clustering Variational Autoencoder
In this section, we introduce the Disentangled Sequence Clustering VAE
(DiSCVAE)111Code available at: https://github.com/mazrk7/discvae, a framework
suited for human intention inference. Clustering is initially presented as a
Gaussian mixture adaptation of the VAE prior. The complete DiSCVAE is then
specified by combining this adaptation with a sequential model that
disentangles latent variables. Finally, we relate back to the intention
inference domain.
### III-A Clustering with Variational Autoencoders
A crucial aspect of generative models is choosing a prior capable of fostering
structure or clusters in the data. Previous research has tackled clustering
with VAEs by segmenting the latent space into distinct classes using a
Gaussian mixture prior, _i.e._ a GMVAE [17, 18].
Our approach is similar to earlier GMVAEs, except for two modifications.
First, we leverage the categorical reparameterisation trick to obtain
differentiable samples of discrete variables [19, 20]. Second, we alter the
ELBO to mitigate the precarious issues of posterior collapse and cluster
degeneracy [15]. Posterior collapse refers to latent variables being ignored
or overpowered by highly expressive decoders during training, such that the
posterior mimics the prior, while cluster degeneracy occurs when multiple modes
of the prior collapse into one [17].
The GMVAE outlined below is the foundation for how the DiSCVAE uncovers $K$
clusters (see Fig. 2c). Assuming observations $\mathbf{x}$ are generated
according to some stochastic process with discrete latent variable $y$ and
continuous latent variable $\mathbf{z}$, then the joint probability can be
written as:
$\displaystyle p(\mathbf{x},\mathbf{z},y)$
$\displaystyle=p(\mathbf{x}\,|\,\mathbf{z})p(\mathbf{z}\,|\,y)p(y)$ (4)
$\displaystyle y$ $\displaystyle\sim\text{Cat}(\bm{\pi})$
$\displaystyle\mathbf{z}$
$\displaystyle\sim\mathcal{N}\big{(}\bm{\mu}_{z}(y),\text{diag}(\bm{\sigma}^{2}_{z}(y))\big{)}$
$\displaystyle\mathbf{x}$
$\displaystyle\sim\mathcal{N}\big{(}\bm{\mu}_{x}(\mathbf{z}),\bm{I}\big{)}\;\text{or}\;\mathcal{B}\big{(}\bm{\mu}_{x}(\mathbf{z})\big{)},$
where functions $\bm{\mu}_{z}$, $\bm{\sigma}^{2}_{z}$ and $\bm{\mu}_{x}$ are
neural networks whose outputs parameterise the distributions of $\mathbf{z}$
and $\mathbf{x}$. The generative process involves three steps: (1) sampling
$y$ from a categorical distribution parameterised by probability vector
$\bm{\pi}$ with $\pi_{k}$ set to $K^{-1}$; (2) sampling $\mathbf{z}$ from the
marginal prior $p(\mathbf{z}\,|\,y)$, resulting in a Gaussian mixture with a
diagonal covariance matrix and uniform mixture weights; and (3) generating
data $\mathbf{x}$ from a likelihood function $p(\mathbf{x}\,|\,\mathbf{z})$.
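The three-step generative process can be sketched by ancestral sampling. The mixture parameters and linear decoder `W_x` below are illustrative stand-ins for the networks $\bm{\mu}_{z}$, $\bm{\sigma}^{2}_{z}$ and $\bm{\mu}_{x}$:

```python
import numpy as np

rng = np.random.default_rng(1)
K, D_z, D_x = 3, 2, 4

# Illustrative parameters; in the GMVAE these are outputs of neural networks.
mu_z = rng.normal(size=(K, D_z))
sigma_z = 0.1 * np.ones((K, D_z))
W_x = rng.normal(size=(D_x, D_z))  # stand-in for the decoder mean mu_x(z)

# (1) y ~ Cat(pi) with uniform weights pi_k = 1/K
y = rng.choice(K, p=np.full(K, 1.0 / K))
# (2) z ~ N(mu_z(y), diag(sigma_z^2(y))), i.e. one component of the mixture
z = mu_z[y] + sigma_z[y] * rng.normal(size=D_z)
# (3) x ~ N(mu_x(z), I) for real-valued data
x = W_x @ z + rng.normal(size=D_x)
```

Marginalising over $y$ in step (2) is what makes the prior on $\mathbf{z}$ a Gaussian mixture.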
A variational distribution $q(\mathbf{z},y\,|\,\mathbf{x})$ for the true
posterior can then be introduced in its factorised form as:
$q(\mathbf{z},y\,|\,\mathbf{x})=q(\mathbf{z}\,|\,\mathbf{x},y)q(y\,|\,\mathbf{x}),$
(5)
where both the multivariate Gaussian $q(\mathbf{z}\,|\,\mathbf{x},y)$ and
categorical $q(y\,|\,\mathbf{x})$ are also parameterised by neural networks,
with respective parameters, $\phi_{z}$ and $\phi_{y}$, omitted from notation.
Provided with these inference $q(.)$ and generative $p(.)$ networks, the ELBO
for this clustering model becomes:
$\displaystyle\mathcal{L}(\mathbf{x})$
$\displaystyle=\operatorname{\mathbb{E}}_{q(\mathbf{z},y\,|\,\mathbf{x})}\bigg{[}\log\frac{p(\mathbf{x},\mathbf{z},y)}{q(\mathbf{z},y\,|\,\mathbf{x})}\bigg{]}$
(6)
$\displaystyle=\operatorname{\mathbb{E}}_{q(\mathbf{z},y\,|\,\mathbf{x})}\big{[}\log
p(\mathbf{x}\,|\,\mathbf{z})\big{]}$
$\displaystyle\quad-\>\operatorname{\mathbb{E}}_{q(y\,|\,\mathbf{x})}\big{[}\text{KL}\big{(}{q(\mathbf{z}\,|\,\mathbf{x},y)\,||\,p(\mathbf{z}\,|\,y)}\big{)}\big{]}$
$\displaystyle\quad-\>\text{KL}\big{(}{q(y\,|\,\mathbf{x})\,||\,p(y)}\big{)},$
where the first term is the reconstruction loss of the data $\mathbf{x}$, and
the latter two terms push the variational posteriors close to their corresponding
priors. As the standard reparameterisation trick is intractable for non-
differentiable discrete samples, we employ a continuous relaxation of
$q(y\,|\,\mathbf{x})$ [19, 20] that removes the need to marginalise over all
$K$ class values.
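The categorical reparameterisation replaces a hard sample of $y$ with a temperature-controlled softmax over Gumbel-perturbed logits [19, 20]. A forward-pass sketch (in practice the point is that gradients flow through the relaxation, which is omitted here):

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Continuous relaxation of a categorical sample (Concrete / Gumbel-Softmax)."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())  # numerically stable softmax with temperature tau
    return y / y.sum()

rng = np.random.default_rng(2)
sample = gumbel_softmax(np.log(np.array([0.7, 0.2, 0.1])), tau=0.5, rng=rng)
# `sample` lies on the probability simplex; as tau -> 0 it approaches one-hot.
```

A single relaxed sample thus replaces an explicit sum over all $K$ classes in the ELBO.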
Optimising GMVAEs with powerful decoders is prone to cluster degeneracy, as the
over-regularisation effect of the KL term pushes the posterior over $y$ towards
uniform [17]. As the KL divergence is a known upper bound on mutual information
between a latent variable and data during training [11, 10], we instead
penalise mutual information in Eq. 6 by replacing
$\text{KL}\big{(}q(y\,|\,\mathbf{x})\,||\,p(y)\big{)}$ with entropy
$\mathcal{H}\big{(}q(y\,|\,\mathbf{x})\big{)}$ given uniform $p(y)$. We found
this modification to be empirically effective at preventing mode collapse and
it may even improve the other key trait of the DiSCVAE: disentanglement [11].
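This substitution rests on the identity $\text{KL}\big(q\,||\,\text{Uniform}(K)\big)=\log K-\mathcal{H}(q)$, which a few lines of numpy confirm numerically:

```python
import numpy as np

q = np.array([0.7, 0.2, 0.1])           # an example posterior q(y | x)
K = len(q)
H = -np.sum(q * np.log(q))              # entropy H(q(y | x))
kl_uniform = np.sum(q * np.log(q * K))  # KL(q || Uniform(K))

# Identity: KL(q || Uniform) = log K - H(q), so penalising the KL towards a
# uniform prior differs from maximising entropy only by the constant log K.
assert np.isclose(kl_uniform, np.log(K) - H)
```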
### III-B Model Specification
Having established how to categorise the VAE latent space learnt over static
data, we now derive the DiSCVAE (see Fig. 3) as a sequential extension that
automatically clusters and disentangles representations. Disentanglement
amongst sequential VAEs commonly partitions latent representations into time-
invariant and time-dependent subsets [14, 15]. Similarly, we express our
disentangled representation of some input sequence $\mathbf{x}_{\leq T}$ at
timestep $t$ as $\mathbf{z}_{t}\,{=}\,[\mathbf{z}_{G},\mathbf{z}_{t,L}]$,
where $\mathbf{z}_{G}$ and $\mathbf{z}_{t,L}$ encode global and local
features.
Figure 3: Computation graph of the inference $q(.)$ and generative $p(.)$
networks. Green blocks contain global variables $y$ and $\mathbf{z}_{G}$, with
a bidirectional LSTM conditioning over input sequence $\mathbf{x}_{\leq T}$.
Forward $\mathbf{h}_{t}^{\mathbf{z}_{G}}$ and backward
$\mathbf{g}_{t}^{\mathbf{z}_{G}}$ states then compute the $q(.)$ distribution
parameters. Orange blocks encompass the local sequence variable
$\mathbf{z}_{t,L}$, where an LSTM’s states $\mathbf{h}_{t}^{\mathbf{z}_{L}}$
are combined at each timestep with current inputs $\mathbf{x}_{t}$ to infer
$\mathbf{z}_{t,L}$. Generating $\mathbf{x}_{t}$ requires both $\mathbf{z}_{G}$
and $\mathbf{z}_{t,L}$. Figure best viewed in colour.
The novelty of our approach lies in how we solely cluster the global variable
$\mathbf{z}_{G}$ extracted from sequences. Related temporal clustering models
have either mapped the entire sequence $\mathbf{x}_{\leq T}$ to a discrete
latent manifold [13] or inferred a categorical factor of variation to cluster
over an entangled continuous latent representation [15]. The DiSCVAE, in
contrast, clusters high-level attributes $\mathbf{z}_{G}$ in isolation from
lower-level dynamics $\mathbf{z}_{t,L}$. Furthermore, this proposed formulation plays an
important role in our interpretation of intention inference, as is made
apparent in Section III-D.
Using the clustering scheme described in Section III-A, we define the
generative model $p(\mathbf{x}_{\leq T},\mathbf{z}_{\leq
T,L},\mathbf{z}_{G},y)$ as:
$p(\mathbf{z}_{G}\,|\,y)p(y)\prod_{t=1}^{T}p(\mathbf{x}_{t}\,|\,\mathbf{z}_{t,L},\mathbf{z}_{G},\mathbf{h}^{\mathbf{z}_{L}}_{t})p(\mathbf{z}_{t,L}\,|\,\mathbf{h}^{\mathbf{z}_{L}}_{t}).$
(7)
The mixture prior $p(\mathbf{z}_{G}\,|\,y)$ encourages mixture components
(indexed by $y$) to emerge in the latent space of variable $\mathbf{z}_{G}$.
Akin to a VRNN [16], the posterior of $\mathbf{z}_{t,L}$ is parameterised by
deterministic state $\mathbf{h}^{\mathbf{z}_{L}}_{t}$. We also highlight that
generating $\mathbf{x}_{t}$ depends on both $\mathbf{z}_{t,L}$ and
$\mathbf{z}_{G}$.
To perform posterior approximation, we adopt the variational distribution
$q(\mathbf{z}_{\leq T,L},\mathbf{z}_{G},y\,|\,\mathbf{x}_{\leq T})$ and
factorise it as:
$q(\mathbf{z}_{G}\,|\,\mathbf{x}_{\leq T},y)q(y\,|\,\mathbf{x}_{\leq
T})\prod_{t=1}^{T}q(\mathbf{z}_{t,L}\,|\,\mathbf{x}_{t},\mathbf{h}^{\mathbf{z}_{L}}_{t}),$
(8)
with a differentiable relaxation of categorical $y$ injected into the process
when training [19, 20].
Under the VAE paradigm, the DiSCVAE is trained by maximising the time-wise
objective:
$\displaystyle\mathcal{L}(\mathbf{x}_{\leq T})$
$\displaystyle=\operatorname{\mathbb{E}}_{q(\cdot)}\bigg{[}\log\frac{p(\mathbf{x}_{\leq
T},\mathbf{z}_{\leq T,L},\mathbf{z}_{G},y)}{q(\mathbf{z}_{\leq
T,L},\mathbf{z}_{G},y\,|\,\mathbf{x}_{\leq T})}\bigg{]}$ (9)
$\displaystyle=\operatorname{\mathbb{E}}_{q(\cdot)}\bigg{[}\sum_{t=1}^{T}\bigg{(}\log
p(\mathbf{x}_{t}\,|\,\mathbf{z}_{t,L},\mathbf{z}_{G},\mathbf{h}^{\mathbf{z}_{L}}_{t})$
$\displaystyle\quad-\>\text{KL}\big{(}q(\mathbf{z}_{t,L}\,|\,\mathbf{x}_{t},\mathbf{h}^{\mathbf{z}_{L}}_{t})\,||\,p(\mathbf{z}_{t,L}\,|\,\mathbf{h}^{\mathbf{z}_{L}}_{t})\big{)}\bigg{)}$
$\displaystyle\quad-\>\text{KL}\big{(}q(\mathbf{z}_{G}\,|\,\mathbf{x}_{\leq
T},y)\,||\,p(\mathbf{z}_{G}\,|\,y)\big{)}$
$\displaystyle\quad+\>\mathcal{H}\big{(}q(y\,|\,\mathbf{x}_{\leq
T})\big{)}\bigg{]}.$
This summation of lower bounds across timesteps is decomposed into: (1) the
expected log-likelihood of input sequences; (2) KL divergences for variables
$\mathbf{z}_{t,L}$ and $\mathbf{z}_{G}$; and (3) entropy regularisation to
alleviate mode collapse.
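Both KL terms in this objective are between diagonal Gaussians and so admit the standard closed form; a small sketch (the function name is ours):

```python
import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    """Closed-form KL(N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)))."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# KL of a distribution with itself is zero; a unit mean shift costs 0.5 nats.
assert np.isclose(kl_diag_gauss(np.zeros(3), np.ones(3), np.zeros(3), np.ones(3)), 0.0)
assert np.isclose(kl_diag_gauss(np.ones(1), np.ones(1), np.zeros(1), np.ones(1)), 0.5)
```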
### III-C Network Architecture
The DiSCVAE is graphically illustrated in Fig. 3. An RNN parameterises the
posteriors over $\mathbf{z}_{t,L}$, with the hidden state
$\mathbf{h}^{\mathbf{z}_{L}}_{t}$ providing the indirect conditioning on
$\mathbf{x}_{<t}$ and $\mathbf{z}_{<t,L}$ in Eqs. 7 and 8. For time-invariant
variables $y$ and $\mathbf{z}_{G}$, a bidirectional RNN extracts
feature representations from the entire sequence $\mathbf{x}_{\leq T}$,
analogous to prior architectures [14]. Bidirectional forward $\mathbf{h}_{t}$
and reverse $\mathbf{g}_{t}$ states are computed by iterating through
$\mathbf{x}_{\leq T}$ in both directions, before being merged by summation.
All RNNs use LSTM cells, and multilayer perceptrons (MLPs) are dispersed
throughout to output the means and variances of the Gaussian distributions.
### III-D Intention Inference
Input: Observation sequence $\mathbf{x}_{\leq t}$; sample length $n$;
Initialise: $\mathbf{h}_{t}\leftarrow\mathbf{0}$;
$\mathbf{z}_{t,L}\leftarrow\mathbf{0}$;
Output: Predicted states
$\tilde{\mathbf{x}}_{t+1},\ldots,\tilde{\mathbf{x}}_{t+n}$
Feed prefix $\mathbf{x}_{\leq t}$ into inference model (Eq. 8)
Assign to cluster $c$ (Eq. 10)
Draw fixed global sample from $p(\mathbf{z}_{G}\,|\,y=c)$
for $i\in\\{t+1,\ldots,t+n\\}$ do
Update
$\mathbf{h}_{i}\leftarrow\text{RNN}(\mathbf{z}_{i-1,L},\mathbf{x}_{i-1},\mathbf{h}_{i-1})$
Sample local dynamics from $p(\mathbf{z}_{i,L}\,|\,\mathbf{h}_{i})$
Predict $\tilde{\mathbf{x}}_{i}\sim
p(\mathbf{x}_{i}\,|\,\mathbf{z}_{i,L},\mathbf{h}_{i},\mathbf{z}_{G})$
end for
Algorithm 1 Sampling to produce diverse predictions of goal states from the
inferred cluster $c$
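Algorithm 1 can be sketched in a few lines of Python. The functions `rnn_step`, `sample_local` and `decode` below are illustrative stand-ins for the trained DiSCVAE modules, chosen only to make the rollout runnable:

```python
import numpy as np

rng = np.random.default_rng(3)
D_x, D_z, D_h = 4, 2, 8

# Illustrative stand-ins for the trained networks.
def rnn_step(z_prev, x_prev, h_prev):
    return np.tanh(z_prev.sum() + x_prev.sum() + 0.5 * h_prev)

def sample_local(h):                # draw from p(z_{i,L} | h_i)
    return rng.normal(size=D_z) + h[:D_z]

def decode(z_local, h, z_global):   # mean of p(x_i | z_{i,L}, h_i, z_G)
    return z_global[:D_x] + np.concatenate([z_local, h[:D_x - D_z]])

def predict(x_prefix, z_global, n):
    """Roll the generative model forward n steps past an observed prefix."""
    h = np.zeros(D_h)
    z_local, x = np.zeros(D_z), x_prefix[-1]
    preds = []
    for _ in range(n):
        h = rnn_step(z_local, x, h)        # update deterministic state
        z_local = sample_local(h)          # sample local dynamics
        x = decode(z_local, h, z_global)   # predict next state
        preds.append(x)
    return np.stack(preds)
```

Repeating the loop with fresh local samples, while holding the global draw from $p(\mathbf{z}_{G}\,|\,y=c)$ fixed, yields the diverse trajectories described above.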
Let us now recall the problem of intention inference. We first posit that the
latent class attribute $y$ could model a $K$-dimensional repertoire of action
plans when considering human interaction data for a specific task. From this
perspective, intention inference is a matter of assigning clusters (or action
plans) to observations $\mathbf{x}_{\leq T}$ of human behaviour and the
environment (_e.g._ joystick commands and sensor data). Human intent is thus
computed as the most probable element of the component posterior:
$c=\operatorname*{arg\,max}_{k}q(y_{k}\,|\,\mathbf{x}_{\leq T}),$ (10)
where $c$ is the assigned cluster identity, _i.e._ the inferred intention
label. The goal associated with this cluster is then modelled by
$\mathbf{z}_{G}$, and local variable $\mathbf{z}_{t,L}$ captures the various
behaviours capable of accomplishing the inferred plan.
In the robotic wheelchair scenario, most related works on intention estimation
represent user intent [21, 22] as a target wheelchair state
$\tilde{\mathbf{x}}_{T}$. Bayesian reasoning over the entire observation
sequence $\mathbf{x}_{\leq T}$ using an entangled latent variable can yield
such a state [5, 6, 3]. In contrast, the DiSCVAE employs a disentangled
representation $\mathbf{z}_{t}\,{=}\,[\mathbf{z}_{G},\mathbf{z}_{t,L}]$, where
the goal state variable is explicitly separated from the user action and
environment dynamics. The major benefit of this separation is controlled
generation, where repeatedly sampling $\mathbf{z}_{t,L}$ can enable diversity
in how trajectories $\tilde{\mathbf{x}}_{t}$ pan out according to the global
plan. The procedure for inferring intention label $c$ amongst a collection of
action plans and generating diverse trajectories is summarised in Algorithm 1.
## IV Intention Inference on Robotic Wheelchairs
To validate the DiSCVAE's utility for intention inference, we consider a
dataset of real users navigating a wheelchair. The objective here is to infer
user-intended action plans from observations of their joystick commands and
surroundings.
### IV-A Dataset
Eight healthy subjects (aged 25-33) with experience using a robotic wheelchair
were recruited to navigate three mapped environments (top right of Fig. 1).
Each subject was requested to manually control the wheelchair using its
joystick and follow a random route designated by goal arrows appearing on a
graphical interface, as in Fig. 1.
Experiment data collected during trials were recorded at a rate of 10 Hz, with
sequences of length $T\,{=}\,20$. This sequence length $T$ is inspired by
related work on estimating the short-term intentions of robotic wheelchair
operators [22]. Every sequence was composed of user joystick commands
$\mathbf{a}_{t}\,{\in}\,\mathbb{R}^{2}$ (linear and angular velocities), as
well as rangefinder readings $\mathbf{l}_{t}\,{\in}\,\mathbb{R}^{360}$
($1^{\circ}$ angular resolution), with both synchronised to the chosen system
frequency. The resulting dataset amounted to a total of 8883 sequences.
To assess the generalisability of our intention inference framework, we
segregate the dataset based on the experiment environment. As a result, trials
that took place in Map 3 are excluded from the training and validation sets,
leaving splits of 5881/1580/1422 for training/testing/validation. Dividing the
dataset in this way allows us to investigate performance under variations in
task context, verifying whether the DiSCVAE can elucidate human intent
irrespective of such change.
### IV-B Labelling Routine
Even without access to predefined labels for manoeuvres made by subjects while
pursuing task goals, we can assign approximate labels of user “intent” to aid
the analysis. To this end, an automated labelling routine is devised below.
Each sequence is initially categorised as either “narrow” or “wide” depending
on a measure of threat applied in obstacle avoidance for indoor navigation
[23]:
$s_{t}=\frac{1}{N}\sum_{i=1}^{N}\text{sat}_{[0,1]}\bigg{(}\frac{D_{s}+R-l^{i}_{t}}{D_{s}}\bigg{)},$
(11)
where the aggregate threat score $s_{t}$ at timestep $t$ for $N\,{=}\,360$
laser readings $l^{i}_{t}$ is a saturated function of these ranges, the
robot’s radius $R$ (0.5 m for the wheelchair), and a safe distance parameter
$D_{s}$ (set to 0.8 m). In essence, this score reflects the danger of imminent
obstacles and qualifies narrow sequences whenever it exceeds a certain
threshold.
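Eq. 11 is straightforward to implement, with $\text{sat}_{[0,1]}$ acting as a clamp; a sketch using the stated wheelchair values $R\,{=}\,0.5$ m and $D_{s}\,{=}\,0.8$ m:

```python
import numpy as np

def threat_score(ranges, R=0.5, D_s=0.8):
    """Aggregate threat s_t of Eq. 11: mean saturated per-beam danger."""
    danger = (D_s + R - ranges) / D_s
    return np.mean(np.clip(danger, 0.0, 1.0))  # sat_[0,1] is a clamp

# Distant readings are safe; readings at the robot's radius saturate to 1.
assert threat_score(np.full(360, 10.0)) == 0.0
assert threat_score(np.full(360, 0.5)) == 1.0
```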
Next, we discern the intended navigation manoeuvres of participants from the
wheelchair’s odometry. After empirically testing various thresholds for
translational and angular velocities, we determined six manoeuvres: in-place
rotations (left/right), forward and reverse motion, as well as forward turns
(left/right). This results in 12 classes that account for the influence of
both the environment and user actions. Referring to Fig. 1, the majority class
across Maps 1 & 2 is the wide in-place rotation (left and right), whilst for
Map 3 it is the narrow reverse. This switch in label frequency highlights the
task diversity caused by different maps.
### IV-C Implementation
Figure 4: Architecture for the robotic wheelchair experiment. Joystick and
laser data are fed into separate MLPs to produce a concatenated sequence,
$\mathbf{x}_{t}\,{\in}\,\mathbb{R}^{136}$, which feeds into the DiSCVAE
encoder (Fig. 3). Forward and backward states,
$\mathbf{h}^{\mathbf{z}_{G}}_{T}$ and $\mathbf{g}^{\mathbf{z}_{G}}_{1}$, allow
inference of $\mathbf{z}_{G}$, whilst $\mathbf{z}_{t,L}$ instead conditions on
hidden state $\mathbf{h}^{\mathbf{z}_{L}}_{t}$. These latent variables are
then passed onto MLPs that decode the joystick commands
$\tilde{\mathbf{a}}_{t}$ and range values $\tilde{\mathbf{l}}_{t}$.
The robotic wheelchair has an on-board computer and three laser sensors, two
at the front and one at the back for a full $360^{\circ}$ field of view. For
readers interested in the robotic platform, please refer to our earlier work
[24].
Fig. 4 portrays the network architecture for this experiment. Before entering
the network, input sequences are normalised per modality using the mean and
standard deviation of the training set. To process the two input modalities,
laser readings $\mathbf{l}_{\leq T}$ and control commands $\mathbf{a}_{\leq
T}$ are first passed through separate MLPs. The derived code vectors are then
concatenated into $\mathbf{x}_{\leq T}$ and fed into the DiSCVAE encoder to infer
latent variables $\mathbf{z}_{G}$ and $\mathbf{z}_{\leq T,L}$. Two individual
decoders are conditioned on these variables to reconstruct the original input
sequences. Both sensory modalities are modelled as Gaussian variables with
fixed variance. No odometry information was supplied at any point to this
network.
### IV-D Evaluation Protocol & Model Selection
The evaluation protocol for this experiment is as follows. Although labelled
data are unavailable in most practical settings, including ours, we are still
interested in assessing the prospects of the DiSCVAE for downstream tasks,
such as semi-supervised classification. Accordingly, we train a k-nearest
neighbour (KNN) classifier over the learnt latent representation,
$\mathbf{z}_{G}$, and judge intention estimation performance using two
widely used classification metrics: accuracy and the F1-score. Another typical
measure in the field is mean squared error (MSE) [6], hence we compare
trajectory predictions of user actions $\tilde{\mathbf{a}}_{t}$ and laser
readings $\tilde{\mathbf{l}}_{t}$ for 10 forward sampled states against
“ground truth” future states.
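The semi-supervised evaluation amounts to fitting a nearest-neighbour classifier on the inferred $\mathbf{z}_{G}$ codes. A minimal self-contained sketch (a library implementation such as scikit-learn's `KNeighborsClassifier` would be used in practice):

```python
import numpy as np

def knn_predict(z_train, y_train, z_query, k=5):
    """Minimal k-nearest-neighbour classifier over latent codes z_G."""
    d = np.linalg.norm(z_train[None, :, :] - z_query[:, None, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]      # indices of the k nearest codes
    votes = y_train[idx]
    return np.array([np.bincount(v).argmax() for v in votes])

# Two well-separated latent clusters with illustrative labels.
z_train = np.vstack([np.zeros((5, 2)), 10.0 * np.ones((5, 2))])
y_train = np.array([0] * 5 + [1] * 5)
preds = knn_predict(z_train, y_train, np.array([[0.1, 0.1], [9.9, 9.9]]), k=3)
```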
Using this protocol, model selection was conducted on the holdout validation
set. A grid search over the network hyperparameters found 512 hidden units to
be suitable for the single-layer MLPs (ReLU activations) and bidirectional
LSTM states. More layers and hidden units garnered no improvements in accuracy
and overall MSE. However, 128 units were chosen for the shared
$\mathbf{h}_{t}^{\mathbf{z}_{L}}$ state, as higher values traded improved MSE
for worse accuracy, and we opted for better classification. Table I also
reports on the dimensionality effects of global
$\mathbf{z}_{G}$ and local $\mathbf{z}_{t,L}$ for a fixed model setting. The
most noteworthy pattern observed is the steep fall in accuracy when
$\text{dim}(\mathbf{z}_{t,L})\,{>}\,16$. Given that smaller latent spaces
raised MSE, a balanced dimensionality of 16 was configured for local and
global features.
TABLE I: Hyperparameter Selection on Validation Set

| $\text{dim}(\mathbf{z}_{G})$ | 16 | 16 | 16 | 32 | 32 | 32 | 64 | 64 | 64 |
|---|---|---|---|---|---|---|---|---|---|
| $\text{dim}(\mathbf{z}_{t,L})$ | 16 | 32 | 64 | 16 | 32 | 64 | 16 | 32 | 64 |
| Acc (%) $\uparrow$ | 77.9 | 54.3 | 22.9 | 72.9 | 41.4 | 15 | 74.9 | 28.8 | 14.5 |
| MSE $\downarrow$ | 4.52 | 4.56 | 4.43 | 4.69 | 4.69 | 4.45 | 4.47 | 4.55 | 4.5 |
Another core design choice of the DiSCVAE is to select the number of clusters
$K$. Without access to ground truth labels, we rely on an unsupervised metric,
known as Normalised Mutual Information (NMI), to assess clustering quality.
The NMI score occupies the range $[0,1]$ and is thus comparable across
clusterings with different $K$. This metric has also been used amongst similar VAEs for
discrete representation learning [13]. Table II provides NMI scores as $K$
varies, where $K\,{=}\,13$ was settled on due to its marginal superiority and
resemblance to the class count from Section IV-B.
TABLE II: Normalised Mutual Information to determine $K$

| No. Clusters $K$ | 4 | 6 | 10 | 13 | 16 | 25 | 36 |
|---|---|---|---|---|---|---|---|
| NMI $\uparrow$ | 0.141 | 0.133 | 0.206 | 0.264 | 0.244 | 0.24 | 0.26 |
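NMI can be computed from the contingency table of two clusterings; a minimal sketch using the arithmetic-mean normalisation (one of several common variants, and an assumption on our part):

```python
import numpy as np

def nmi(labels_a, labels_b):
    """Normalised mutual information (arithmetic-mean normalisation)."""
    _, a = np.unique(labels_a, return_inverse=True)
    _, b = np.unique(labels_b, return_inverse=True)
    n = len(a)
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1.0 / n)           # joint assignment frequencies
    pa, pb = joint.sum(1), joint.sum(0)         # marginals of each clustering
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / np.outer(pa, pb)[nz]))
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return mi / (0.5 * (h(pa) + h(pb)))
```

Identical partitions score 1 regardless of how the cluster identities are named, which is what makes the metric comparable across different $K$.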
### IV-E Experimental Results
Six methods are considered in this experiment, each adopting the same network
structure as in Fig. 4:
* HMM: A ubiquitous baseline in the literature [6, 3];
* SeqSVM: A sequential SVM baseline [5];
* BiLSTM: A bidirectional LSTM classifier, akin to [25];
* VRNN: An autoregressive VAE model [16];
* DSeqVAE: A disentangled sequential autoencoder [14];
* DiSCVAE: The proposed model of Section III-B.
The top three supervised models learn mappings between the inputs and labels
identified in Section IV-B, where baselines utilised the trained BiLSTM
encoder for feature extraction. Meanwhile, the bottom three VAE-based methods
optimise their respective ELBOs, with a KNN trained on learnt latent variables
for a semi-supervised approach. Hyperparameters are consistent across methods,
_e.g._ equal dimensions for the static and global latent variables of the
DSeqVAE and DiSCVAE, respectively. The Adam optimiser [26] was used to train
models with a batch size of 32 and initial learning rate of $10^{-3}$ that
exponentially decayed by 0.5 every 10k steps. Of the candidate rates in the range
$3{\times}10^{-3}$ to $10^{-4}$, this learning rate gave the most stable and
effective ELBO optimisation. All models were optimised for 10 runs
at different random seeds with early stopping ($\sim$75 epochs for the
DiSCVAE).
For qualitative analysis, a key asset of the DiSCVAE is that sampling states
from different clusters can exhibit visually diverse characteristics. Fig. 1
portrays sampled trajectories from each mixture component during a subject’s
recorded interaction. There is clear variability in the trajectory outcomes
predicted at this wheelchair configuration ($K\,{=}\,6$ to ease trajectory
visualisation). The histogram over categorical $y$ (top left of Fig. 1) also
indicates that the most probable trajectory aligns with the wheelchair user’s
current goal (red arrow), _i.e._ the correct “intention”. As for generating
future environment states, Fig. 5 displays how samples from clusters manifest
when categorised as either “wide” or “narrow”.
Figure 5: 2D grids of predicted laser scans on the test set when sampling
from “wide” and “narrow” type clusters. Wide samples create spacious proximity
around the wheelchair (red dot), whilst narrow samples enclose space.
Table III contains quantitative results for this experiment. As anticipated,
the highly variable nature of wheelchair control in an unconstrained
navigation task makes classifying intent challenging. The baselines perform
poorly and even the supervised BiLSTM obtains a classification accuracy of
merely 56.3% on the unseen test environment. Nevertheless, learning
representations of user interaction data can reap benefits in intention
inference, as performance is drastically improved by a KNN classifier trained
over the latent spaces of the VAE-based methods. The DiSCVAE acquires the best
accuracy, F1-scores and MSE on joystick commands. The DSeqVAE instead attains
the best error rates on forecasted laser readings at the expense of under-
representing the relevant low-dimensional joystick signal. Cluster
specialisation in the DiSCVAE may explain the better
$\tilde{\mathbf{a}}_{MSE}$.
TABLE III: Performance on Test Set (10 random seeds)

| Model | Acc (%) $\uparrow$ | F1 $\uparrow$ | $\tilde{\mathbf{a}}_{MSE}\downarrow$ | $\tilde{\mathbf{l}}_{MSE}\downarrow$ |
|---|---|---|---|---|
| HMM | 12.3 $\pm$ 2.9 | 0.09 $\pm$ 0.02 | - | - |
| SeqSVM | 48.3 $\pm$ 0.9 | 0.41 $\pm$ 0.01 | - | - |
| BiLSTM | 56.3 $\pm$ 1.9 | 0.43 $\pm$ 0.02 | - | - |
| VRNN | 65.1 $\pm$ 2.8 | 0.58 $\pm$ 0.04 | 0.15 $\pm$ 0.02 | 2.8 $\pm$ 0.04 |
| DSeqVAE | 73.2 $\pm$ 2.0 | 0.65 $\pm$ 0.02 | 0.26 $\pm$ 0.02 | 2.14 $\pm$ 0.06 |
| DiSCVAE | 82.3 $\pm$ 1.8 | 0.78 $\pm$ 0.03 | 0.14 $\pm$ 0.01 | 2.7 $\pm$ 0.05 |
### IV-F Illuminating the Clusters
(a) Wheelchair Manoeuvres
(b) Spatial States
Figure 6: Assignment distribution of $y$ for $K\,{=}\,13$ with post-processed
labels for (a) wheelchair manoeuvres and (b) perceived spatial context. The
plot illuminates how various clusters are associated with user intent under
different environmental conditions. For example, most backward motion and
“narrow” state samples reside in cluster 2. Similar patterns are noticeable
for in-place rotations (0 and 9) and “wide” forward motion (4 and 10).
Straying away from the purely discriminative task of classifying intent, we
now use our framework to decipher the navigation behaviours, or “global”
factors of variation, intended by users. In particular, we plot assignment
distributions of $y$ on the test set examples to understand the underlying
meaning of our clustered latent space. “Local” factors of variation in this
application capture temporal dynamics in state, _e.g._ wheelchair velocities.
Fig. 6a provides further clarity on how certain clusters have learnt
independent wheelchair manoeuvres. For instance, cluster 2 is distinctly
linked with the wheelchair’s reverse motion. Likewise, clusters 0 and 9 pair
with left and right in-place rotations. The spatial state assignments shown in
Fig. 6b also delineate how these clusters are most often categorised as
“narrow”, which is to be expected of evasive actions taking place in cluttered
spaces. On the contrary, predominantly forward-oriented manoeuvres fall into
“wide” clusters (_e.g._ 4 and 10). These findings suggest that wheelchair
action plans have been aptly inferred.
### IV-G Prospects for Shared Control
Figure 7: Percentage of trials per map where shared control would have wrongly
intervened. The Model approach is significantly more likely to trigger
incorrect assistance. Less variable VRNN and DiSCVAE performance across maps
also hints at better robustness to changes in task conditions.
Lastly, we examine a shared control use-case, where intention inference plays
a vital role [1]. Shared control concerns the interaction between robots and
humans when both exert control over a system to accomplish a common goal [2].
Despite shared control being inactive for this experiment, we simulate its
operation in post-processing to gauge success.
More precisely, we address the known issue in shared control of administering
wrong assistance whenever there is a misalignment between the robot’s and
user’s internal models. To quantify this mismatch, we monitor the percentage
of each navigation trial where a shared control methodology [24] would have
intervened had it been operational. Given that the subjects were experienced,
healthy individuals who incurred no wheelchair collisions, it is safe to
assume they never required assistance. We compare wheelchair trajectories
produced by the VRNN, DiSCVAE, and a constant velocity “Model” using
differential drive kinematics.
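The constant velocity "Model" baseline can be sketched as a differential-drive kinematics rollout; the parameter names and values below are illustrative:

```python
import numpy as np

def rollout(x, y, theta, v, omega, dt=0.1, steps=20):
    """Constant-velocity baseline: Euler-integrated differential-drive kinematics."""
    traj = []
    for _ in range(steps):
        x += v * np.cos(theta) * dt   # forward translation along heading
        y += v * np.sin(theta) * dt
        theta += omega * dt           # heading change from angular velocity
        traj.append((x, y, theta))
    return np.array(traj)

# With zero angular velocity the wheelchair tracks a straight line.
straight = rollout(0.0, 0.0, 0.0, v=1.0, omega=0.0)
```

The `dt` of 0.1 s and 20 steps mirror the 10 Hz, $T\,{=}\,20$ setting of the dataset.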
Fig. 7 offers results on shared control intervention rates. Performing the
two-sided Mann-Whitney U test finds significantly better rates for the VRNN
and DiSCVAE over the Model across all maps ($p\leq 0.01$). Excluding Map
1 ($p\leq 0.05$), the positive trend of the DiSCVAE surpassing the VRNN
is not significant. The DiSCVAE nevertheless has the advantage of capturing
uncertainty around its estimated intent via the categorical $y$, _e.g._ when
a strict left-turn is hard to distinguish from a forward left-turn (blue and
red in Fig. 1). This holds potential for shared control seeking to realign
mismatched internal models by explaining to a user why the robot chose not to
assist under uncertainty [24].
## V Discussion
There are a few notable limitations to this work. One is that learning
disentangled representations is sensitive to hyperparameter tuning, as shown
in Section IV-D. To aid with model selection and prevent posterior collapse,
further investigation into different architectures and other information
theoretic advances is thus necessary [10, 11]. Moreover, disentanglement and
interpretability are difficult to define, often demanding access to labels for
validation [10, 12]. Therefore, a study into whether users believe the DiSCVAE
representations of intent are “interpretable” or helpful for the wheelchair
task is integral to any claim of disentanglement.
In human-robot interaction tasks, intention recognition is typically addressed
by equipping a robot with a probabilistic model that infers intent from human
actions [4, 3]. Whilst the growing interest in scalable learning techniques
for modelling agent intent has spurred on applications in robotics [25, 7],
disentanglement learning remains sparse in the literature. The only known
comparable work to ours is a conditional VAE that disentangled latent
variables in a multi-agent driving setting [7]. Albeit similar in principle,
we believe our approach is the first to infer a discrete “intent” variable
from human behaviour by clustering action plans.
## VI Conclusions
In this paper, we embraced an unsupervised outlook on human intention
inference through a framework that disentangles and clusters latent
representations of input sequences. A robotic wheelchair experiment on
intention inference gleaned insights into how our proposed DiSCVAE could
discern primitive action plans, _e.g._ rotating in-place or reversing,
without supervision. The elevated classification performance in semi-supervised
learning also suggests that disentanglement is a worthwhile avenue to
explore in intention inference.
There are numerous promising research directions for an unsupervised means of
inferring intent in human-robot interaction. The task-agnostic prior and
inferred global latent variable could be exploited in long-term downstream
tasks, such as user modelling, to support the wider adoption of collaborative
robotics in unconstrained environments. A truly interpretable latent structure
could also prove fruitful in assistive robots that warrant explanation by
visually relaying inferred intentions back to end-users [24].
## References
* [1] Y. Demiris, “Prediction of intent in robotics and multi-agent systems,” _Cogn. Process._ , vol. 8, no. 3, pp. 151–158, 2007.
Chirality imbalance and chiral magnetic effect under a parallel
electromagnetic field
Hayato <EMAIL_ADDRESS> and Katsuhiko <EMAIL_ADDRESS>
Department of Physics, Tokyo University of Science,
Kagurazaka 1-3, Shinjuku, Tokyo 162-8601, Japan
Abstract
We study the time evolution of the chirality imbalance $n_{5}$ and the chiral
magnetic effect (CME) under external parallel electromagnetic fields without
assuming an artificial chiral-asymmetry source. We adopt a time-dependent
Sauter-type electric field and a constant magnetic field, and obtain
analytical solutions of the Dirac equation for a massive fermion. We use the
point-split regularization to calculate the vacuum contribution in a
gauge-invariant way. As a result, we find that $n_{5}$ and the CME current
increase substantially as the electric field increases, and stay finite after
the electric field is switched off. The chirality imbalance and CME current
are shown to consist of a dominant contribution, which is essentially
proportional to the relativistic velocity, and a small oscillating part. We
find a simple analytical relation between $n_{5}$ and the fermion
pair-production rate from the vacuum. We also discuss the dynamical origin of
the chirality imbalance in detail.
## 1 Introduction
Recently, roles of the chiral anomaly have attracted considerable theoretical
and experimental interest in various subjects of physics. The chiral
(Adler-Bell-Jackiw) anomaly is the violation of the (partial) axial-vector
current conservation due to quantum effects[1, 2], and causes CP-violating
processes observed experimentally. For the last decade, macroscopic
manifestations of the chiral anomaly have been discussed in the context of
hydrodynamic and transport phenomena in systems with chiral fermions, e.g. the
quark-gluon plasma or the Dirac/Weyl semimetals[3, 4, 5]. One of the
important effects induced by the anomaly is the chiral magnetic effect (CME),
which is the generation of a "non-dissipative" electric current along the
direction of the magnetic field[6, 7, 8];
$\displaystyle{{\bm{J}}=\frac{\mu_{5}}{2\pi^{2}}{\bm{B}}}$ (1)
where $\mu_{5}$ is the chiral chemical potential. The chiral chemical
potential characterizes an asymmetry of the chirality of the system, and is
conjugate to the chirality imbalance of the fermions, $n_{5}$, which is a
difference of right-handed and left-handed fermion number densities,
$n_{5}\equiv
n_{R}-n_{L}\equiv\langle\bar{\psi}\gamma_{0}\gamma_{5}\psi\rangle$.
In the quark-gluon plasma produced in heavy-ion collisions, the interaction
with non-trivial gluonic fields would change quark chiralities and thus
produce a chirality imbalance between right- and left-handed quarks[4]. With
the strong magnetic field, $eB\sim m_{\pi}^{2}$, created in heavy-ion
collisions, the CME may produce an asymmetry of the charged-particle
distributions which can be measured experimentally[9]. On the other hand, the
CME is an important topic in condensed matter systems[5], where massless
Dirac modes have been realized in the Dirac/Weyl semimetals[10, 11].
An experimental observation of the CME in such a system is reported in ref.
[11].
For various applications in QCD and condensed matter, the existence of the
chirality imbalance $n_{5}$ and the chiral chemical potential $\mu_{5}$ is a
priori assumed in order to study specific transport phenomena. However, the
appearance of the initial chirality imbalance is still under debate. For
example, in the quark-gluon plasma, metastable local CP-violating domains may
be generated by transitions of the non-perturbative gluonic
configurations[7, 12, 13]. In ref. [11], for the semimetal system with the
electromagnetic field, $\mu_{5}$ is estimated as $\displaystyle{\mu_{5}=\hbar
v_{F}\left(\frac{3e^{2}}{4}{\bm{E}}\cdot{\bm{B}}\tau\right)}\>,$ where $\tau$
is the relaxation time of the chirality imbalance. In our opinion, it is
important to calculate the chirality imbalance and the chiral magnetic effect
within a field-theoretical method without introducing additional assumptions.
On the other hand, it is also necessary to clarify the use of eq. (1) in
equilibrium. Although the CME formula eq. (1) is used for various
applications, it has to be interpreted with care. It has been pointed out that
such a current is forbidden in an equilibrium system[14, 15]. There are also
some cautions from theoretical calculations[16, 17]. It seems that the
introduction of the chiral chemical potential implicitly assumes a system out
of equilibrium[18]. In order to clarify this issue, it is crucial to calculate
the time evolution of the chirality imbalance and the CME current within a
specific model and compare their characteristic time scales.
In addition, the CME current for a massive fermion is to be studied carefully.
It is well-known that the anomaly relation receives a contribution from the
mass-dependent term:
$\displaystyle\partial_{\mu}\int
d^{3}x\langle\bar{\psi}\gamma^{\mu}\gamma^{5}\psi\rangle=2im\int
d^{3}x\langle\bar{\psi}\gamma^{5}\psi\rangle+\frac{2\alpha}{\pi}\int
d^{3}x{\bm{E}}\cdot{\bm{B}}\,,$ (2)
with $\alpha=e^{2}/4\pi$ being the fine structure constant. To estimate the
contribution from the mass, one should calculate a vacuum expectation value of
the pseudoscalar density, which is also time-dependent. In particular, it is
of interest to understand a relation between CME and the spontaneous breakdown
of chiral symmetry. In ref. [19], the CME current is suppressed in the
insulator phase, which may correspond to the chiral symmetry breaking phase.
For these purposes, we study the time evolution of the chirality imbalance
$n_{5}$ and the chiral magnetic effect in the vacuum under the
electromagnetic field, solving the Dirac equation analytically without an
initial chiral chemical potential[20]. We consider the vacuum state (zero
temperature and zero fermion chemical potential) with external parallel
electromagnetic fields, which produce a chirality imbalance in the fermion
number density through the chiral anomaly. We adopt a time-dependent
half-pulse electric field (Sauter type) and a constant magnetic field in
order to solve the Dirac equation for a massive fermion analytically. To
calculate the infinite vacuum contribution in a gauge-invariant way, we use
the point-split regularization[21, 22] and calculate vacuum expectation
values of various bilinear fermion operators, including $n_{5}$ and the CME
current. In addition, we expect production of fermion-antifermion pairs from
the vacuum under the electric field by the Schwinger mechanism[23, 24]. We
systematically study relations between $n_{5}$, the CME current, and the
pair-creation rate using the Bogoliubov transformation. Our results are to be
compared with previous works based on the Schwinger mechanism with a constant
electromagnetic field[18], the Wigner-function method with collinear
electromagnetic fields[26], the Wigner-function method with the chiral
chemical potential[27], and the cylindrical Dirac equation with the chiral
chemical potential[28].
This paper is organized as follows. In Sec. 2, we show analytical solutions of
the Dirac equation with the parallel electromagnetic fields. Using them we
perform the canonical quantization and define the vacuum state at
$t\to{-\infty}$ in Sec. 3. We introduce the point-split regularization in Sec.
4 to calculate vacuum expectation values of the fermion operators in a
gauge-invariant way. We present our numerical results for the time evolution
of the
vacuum expectation values of the chirality imbalance and CME in Sec. 5. In
Sec. 6 we also discuss relations between the chirality imbalance and the
fermion pair-production rate, and give a simple formula for CME current. Using
them, we show how the chirality imbalance is dynamically generated in this
model. Finally, Sec. 7 is devoted to the summary and discussion.
## 2 Solutions of the Dirac equation under solvable external EM fields
### 2.1 Dirac equation with electromagnetic fields
We need an analytical solution of the Dirac equation under the constant
magnetic and the time-dependent electric field, which plays a key role in our
work. The Dirac equation for a fermion field $\psi(x)$ with mass $m$ under
an external electromagnetic potential, $A_{\mu}(x)$, is given by
$\displaystyle[i\slashed{D}-m]\psi(x)=0$ (5)
where we introduce the covariant derivative
$D_{\mu}=\partial_{\mu}+ieA_{\mu}(x)$. The squared Dirac equation is given by
$\displaystyle\slashed{D}^{2}\Phi(x)=-m^{2}\Phi(x)\;.$ (8)
Hereafter, we concentrate on finding solutions $\Phi(x)$ of the squared Dirac
equation, from which we can obtain $\psi(x)$ by a suitable projection,
$\displaystyle\psi(x)=(i\slashed{D}+m)\Phi(x)\;.$ (11)
We consider specific forms of the external electromagnetic field in this work
to obtain analytical solutions;
$\displaystyle\boldsymbol{B}$ $\displaystyle=(0,0,-B),$ (12)
$\displaystyle\boldsymbol{E}$ $\displaystyle=(0,0,E/\cosh^{2}(t/\tau))\;,$
(13)
where the parameters $B$ and $E$ are non-zero real constants, and $\tau>0$.
The magnetic field is time-independent and uniform along the $z$-direction. On
the other hand, the electric field is spatially homogeneous but time-dependent
with a pulse structure, which is known as the Sauter-type electric field[29].
The
corresponding electromagnetic potential is
$\displaystyle A_{\mu}=(0,0,Bx,E\tau(\tanh(t/\tau)+1))\;.$ (14)
Note that the vector potential is finite even at $t\to\pm\infty$. If we
adopted the constant electric field[24, 22], the vector potential would
diverge at $t\to\pm\infty$, $A_{3}=E\,t$.
To understand the roles of the electromagnetic field, it is convenient to
introduce the so-called "magnetic helicity" density[30] as
$\displaystyle h(t)\equiv\frac{1}{V}\int
d^{3}x\,\boldsymbol{A}\cdot\boldsymbol{B}\;.$ (15)
Although the magnetic helicity is not gauge invariant in general, it is useful
when we discuss the topological structure of the gauge field. With eq. (14),
the magnetic helicity density in our case is calculated as
$\displaystyle h(t)=-BE\tau(\tanh(t/\tau)+1)\;.$ (16)
At the initial state, $t\to-\infty$, both electric field and the magnetic
helicity $h(t)$ are zero, and they increase as $t$ increases. In the final
state, $t\to\infty$, ${\bm{E}}$ vanishes rapidly, while the magnetic helicity
$h(t)$ is kept finite. This peculiar behavior of the magnetic helicity is due
to the Sauter electric field, and appropriate to discuss the production of the
chirality imbalance, as we will show later.
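A short numerical check of this behavior, with illustrative values of $B$, $E$, $\tau$ (units $e=1$); note that for this particular gauge and field configuration the rate of change of $h(t)$ in eq. (16) coincides pointwise with $\boldsymbol{E}\cdot\boldsymbol{B}$:

```python
import numpy as np

B0, E0, tau = 1.0, 1.0, 2.0   # illustrative values, e = 1

def h(t):        # magnetic helicity density, eq. (16)
    return -B0 * E0 * tau * (np.tanh(t / tau) + 1.0)

def E_dot_B(t):  # E.B for the fields of eqs. (12)-(13)
    return (E0 / np.cosh(t / tau) ** 2) * (-B0)

t = np.linspace(-25.0, 25.0, 5001)
# For this configuration, dh/dt equals E.B at every instant.
assert np.allclose(np.gradient(h(t), t), E_dot_B(t), atol=1e-3)
# h vanishes initially but stays finite after the pulse has passed.
assert abs(h(-25.0)) < 1e-8
assert np.isclose(h(25.0), -2 * B0 * E0 * tau)
```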
With the chiral representation for the gamma matrices, the Dirac operator
$\slashed{D}$ and its squared form $\slashed{D}^{2}$ are given by
$\displaystyle i\slashed{D}$
$\displaystyle=\begin{pmatrix}0&0&-\hat{c}_{-}&i\hat{a}\\\
0&0&-i\hat{a}^{\dagger}&\hat{c}_{+}\\\ \hat{c}_{+}&-i\hat{a}&0&0\\\
i\hat{a}^{\dagger}&-\hat{c}_{-}&0&0\end{pmatrix}$
$\displaystyle\slashed{D}^{2}$
$\displaystyle=\begin{pmatrix}\hat{c}_{-}\hat{c}_{+}+\hat{a}\hat{a}^{\dagger}&0&0&0\\\
0&\hat{c}_{+}\hat{c}_{-}+\hat{a}^{\dagger}\hat{a}&0&0\\\
0&0&\hat{c}_{+}\hat{c}_{-}+\hat{a}\hat{a}^{\dagger}&0\\\
0&0&0&\hat{c}_{-}\hat{c}_{+}+\hat{a}^{\dagger}\hat{a}\end{pmatrix}$
where we have defined the following operators,
$\displaystyle\hat{c}_{+}$
$\displaystyle=(-i\partial_{z}+eE\tau(\tanh(t/\tau)+1))+i\partial_{t}$
$\displaystyle\hat{c}_{-}$
$\displaystyle=(-i\partial_{z}+eE\tau(\tanh(t/\tau)+1))-i\partial_{t}$
$\displaystyle\hat{a}$ $\displaystyle=(-i\partial_{y}+eBx)+\partial_{x}$
$\displaystyle\hat{a}^{\dagger}$
$\displaystyle=(-i\partial_{y}+eBx)-\partial_{x}\;.$
Because $\slashed{D}^{2}$ commutes with both $\partial_{y}$ and
$\partial_{z}$, the solution of the squared Dirac equation, $\Phi$, can be
written in a separable form,
$\Phi(t,\boldsymbol{x})=\exp(ip_{y}y+ip_{z}z)\,\phi(t,x)$, with the momenta in
the $y$ and $z$ directions being constants. For $\phi(t,x)$, we explicitly
introduce the four-component form as
$\displaystyle\phi(t,x)=\begin{pmatrix}\phi_{1}(t,x)\\\ \phi_{2}(t,x)\\\
\phi_{3}(t,x)\\\ \phi_{4}(t,x)\end{pmatrix}\;.$ (17)
We then obtain a set of equations for $\phi_{i}(t,x)~{}~{}(i=1,2,3,4)$ as
follows,
$\displaystyle[\hat{c}_{-}\hat{c}_{+}+\hat{a}\hat{a}^{\dagger}+m^{2}]\phi_{1}(t,x)=0$
(18)
$\displaystyle[\hat{c}_{+}\hat{c}_{-}+\hat{a}^{\dagger}\hat{a}+m^{2}]\phi_{2}(t,x)=0$
(19)
$\displaystyle[\hat{c}_{+}\hat{c}_{-}+\hat{a}\hat{a}^{\dagger}+m^{2}]\phi_{3}(t,x)=0$
(20)
$\displaystyle[\hat{c}_{-}\hat{c}_{+}+\hat{a}^{\dagger}\hat{a}+m^{2}]\phi_{4}(t,x)=0\;.$
(21)
Note that the operators $\hat{c}_{+},\hat{c}_{-}$ include only $t$ and
$\partial_{t}$ variables, whereas $\hat{a},\hat{a}^{\dagger}$ contain only $x$
and $\partial_{x}$. Hence, these equations can be solved as
$\displaystyle\Phi({\bm{x}})=\exp(ip_{y}y+ip_{z}z)\begin{pmatrix}f_{1}(t)g_{1}(x)\\\
f_{2}(t)g_{2}(x)\\\ f_{2}(t)g_{1}(x)\\\ f_{1}(t)g_{2}(x)\end{pmatrix}$ (22)
with eigenfunctions, $f_{i}(t),g_{i}(x)\;(i=1,2)$, which satisfy the following
eigenvalue equations,
$\displaystyle\hat{a}\,\hat{a}^{\dagger}g_{1}(x)=\kappa g_{1}(x)$ (23)
$\displaystyle\hat{a}^{\dagger}\,\hat{a}g_{2}(x)=\kappa g_{2}(x)$ (24)
$\displaystyle\hat{c}_{-}\,\hat{c}_{+}f_{1}(t)=-(\kappa+m^{2})f_{1}(t)$ (25)
$\displaystyle\hat{c}_{+}\,\hat{c}_{-}f_{2}(t)=-(\kappa+m^{2})f_{2}(t)\;.$
(26)
We note that the eigenvalue $\kappa$ is real and positive-semidefinite because
the operators $\hat{a}^{\dagger}\hat{a},\hat{a}\hat{a}^{\dagger}$ are
Hermitian.
### 2.2 Solutions for the $x$-dependent part
The eigenfunctions and eigenvalues of equations (23) and (24) are easily
obtained with the standard technique for the harmonic oscillator. We find
solutions in terms of the Hermite polynomials $\mathrm{H}_{n}$
$(n=0,1,2,\cdots)$:
$\displaystyle g_{1}(x)=g_{n-1,p_{y}}(x)$ (27)
$\displaystyle g_{2}(x)=g_{n,p_{y}}(x)$ (28)
with eigenvalues $\kappa=2eBn$ and eigenfunctions $g_{n,p_{y}}(x)$;
$\displaystyle g_{n,p_{y}}(x)$
$\displaystyle=\frac{1}{\sqrt{2^{n}n!}}(\frac{|eB|}{\pi})^{1/4}H_{n}(\eta)\exp(-\eta^{2}/2)$
(29) $\displaystyle\eta$
$\displaystyle\equiv\frac{1}{\sqrt{|eB|}}(p_{y}+|eB|x)\;\,.$
where $n$ denotes the Landau level. When $n=0$, a normalizable solution for
$g_{1}(x)$ does not exist; thus we define $g_{-1,p_{y}}(x)=0$.
The eigenfunctions satisfy the orthonormal condition as
$\displaystyle\int dx\,g_{n,p_{y}}(x)g_{m,p_{y}}(x)=\delta_{mn}\;.$ (30)
Moreover, the completeness identity also holds
$\displaystyle\sum_{n=0}^{\infty}g_{n,p_{y}}(x)g_{n,p_{y}}(x^{\prime})=\delta(x-x^{\prime})\;.$
(31)
Additionally, integration over $p_{y}$ gives the relation
$\displaystyle\int dp_{y}\,g_{n,p_{y}}(x)g_{m,p_{y}}(x)=|eB|\delta_{mn}$ (32)
which guarantees the orthogonality condition for $y$ in eq. (22).
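The orthonormality condition eq. (30) can be checked numerically; the sketch below works in the dimensionless variable $\eta$ of eq. (29), where the change of variable $dx=d\eta/\sqrt{|eB|}$ absorbs the $(|eB|/\pi)^{1/4}$ prefactor into $\pi^{-1/4}$:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval  # physicists' H_n

def g(n, eta):
    """Oscillator eigenfunction of eq. (29) in the variable eta."""
    coef = np.zeros(n + 1)
    coef[n] = 1.0
    norm = (2.0 ** n * math.factorial(n)) ** -0.5 * np.pi ** -0.25
    return norm * hermval(eta, coef) * np.exp(-eta ** 2 / 2.0)

eta = np.linspace(-20.0, 20.0, 8001)

def overlap(n, m):
    # trapezoidal approximation to the orthonormality integral, eq. (30)
    y = g(n, eta) * g(m, eta)
    return float(np.sum((y[1:] + y[:-1]) / 2.0) * (eta[1] - eta[0]))

for n in range(4):
    for m in range(4):
        assert abs(overlap(n, m) - (1.0 if n == m else 0.0)) < 1e-8
```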
### 2.3 Solutions for the $t$-dependent part
Next, we solve the equations for the time-dependent part. The operators
$\hat{c}_{-}\hat{c}_{+},\hat{c}_{+}\hat{c}_{-}$ in eq. (25) and eq. (26) are
written explicitly as
$\displaystyle\hat{c}_{+}\hat{c}_{-}=\partial_{t}^{2}+(p_{z}+eE\tau(\tanh(t/\tau)+1))^{2}+ie\frac{E}{\cosh^{2}(t/\tau)}$
(33)
$\displaystyle\hat{c}_{-}\hat{c}_{+}=\partial_{t}^{2}+(p_{z}+eE\tau(\tanh(t/\tau)+1))^{2}-ie\frac{E}{\cosh^{2}(t/\tau)}\;\;.$
(34)
which reduce to the hypergeometric differential equation. We obtain the
eigenfunctions $f_{1}(t)$ in (25) and $f_{2}(t)$ in (26) as follows:
$\displaystyle\tilde{\phi}_{n,p_{z}}^{(+)}(t)\equiv\sqrt{\frac{\omega(0)+p_{z}}{2\omega(0)}}u^{-\frac{i\tau\omega(0)}{2}}(1-u)^{\frac{i\tau\omega(1)}{2}}F\left({a,b\atop
c};u(t)\right)$ (35)
$\displaystyle\tilde{\phi}_{n,p_{z}}^{(-)}(t)\equiv\sqrt{\frac{\omega(0)-p_{z}}{2\omega(0)}}u^{\frac{i\tau\omega(0)}{2}}(1-u)^{-\frac{i\tau\omega(1)}{2}}$
$\displaystyle\hskip 71.13188pt\times F\left({1-a,1-b\atop 2-c};u(t)\right)$
(36)
where $F\left({a,b\atop c};u\right)$ is Gauss's hypergeometric function. The
parameters $a,b,c$ are given by
$\displaystyle
a=1-\frac{i\tau\omega_{n,p_{z}}(0)}{2}+\frac{i\tau\omega_{n,p_{z}}(1)}{2}+ieE\tau^{2}$
(37a) $\displaystyle
b=-\frac{i\tau\omega_{n,p_{z}}(0)}{2}+\frac{i\tau\omega_{n,p_{z}}(1)}{2}-ieE\tau^{2}$
(37b) $\displaystyle c=1-i\tau\omega_{n,p_{z}}(0)\;,$ (37c) where
$\displaystyle\omega_{n,p_{z}}^{2}(u)$ $\displaystyle=(p_{z}+2eE\tau
u)^{2}+2|eB|n+m^{2}$ (38) $\displaystyle u(t)$
$\displaystyle=\frac{1}{2}(\tanh(t/\tau)+1)\;.$ (39)
We find a simple relation,
$\displaystyle|\tilde{\phi}_{n,p_{z}}^{(+)}(t)|^{2}+|\tilde{\phi}_{n,p_{z}}^{(-)}(t)|^{2}=1,$
(40)
which holds independently of $t$ and is useful for further calculations.
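Eq. (40) can be verified numerically by evaluating eqs. (35)-(39) directly; the parameter values below are illustrative (units $e=1$, lowest Landau level), and the series implementation of the hypergeometric function is a minimal sketch valid for $|u|<1$:

```python
import math

def hyp2f1(a, b, c, u, nmax=6000):
    # Gauss series sum_k (a)_k (b)_k / ((c)_k k!) u^k, valid for |u| < 1.
    term, total = 1.0 + 0j, 1.0 + 0j
    for k in range(nmax):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1.0)) * u
        total += term
        if abs(term) < 1e-17 * abs(total):
            break
    return total

# illustrative parameters (e = 1): eqs. (37)-(39) with n = 0
E0, tau, m, pz, n, eB = 1.0, 1.0, 1.0, 0.3, 0, 1.0
w0 = math.sqrt(pz ** 2 + 2 * eB * n + m ** 2)                    # omega(0)
w1 = math.sqrt((pz + 2 * E0 * tau) ** 2 + 2 * eB * n + m ** 2)   # omega(1)
a = 1 - 0.5j * tau * w0 + 0.5j * tau * w1 + 1j * E0 * tau ** 2
b = -0.5j * tau * w0 + 0.5j * tau * w1 - 1j * E0 * tau ** 2
c = 1 - 1j * tau * w0

def phi(u):
    """phi^(+) and phi^(-) of eqs. (35)-(36) at u = u(t)."""
    fp = math.sqrt((w0 + pz) / (2 * w0)) \
        * u ** (-0.5j * tau * w0) * (1 - u) ** (0.5j * tau * w1) \
        * hyp2f1(a, b, c, u)
    fm = math.sqrt((w0 - pz) / (2 * w0)) \
        * u ** (0.5j * tau * w0) * (1 - u) ** (-0.5j * tau * w1) \
        * hyp2f1(1 - a, 1 - b, 2 - c, u)
    return fp, fm

for t in (-2.0, 0.0, 1.5):
    u = 0.5 * (math.tanh(t / tau) + 1.0)   # eq. (39)
    fp, fm = phi(u)
    assert abs(abs(fp) ** 2 + abs(fm) ** 2 - 1.0) < 1e-6
```

The relation reflects the unitary time evolution of the underlying first-order Dirac system, so it is preserved at all intermediate times, not just in the free asymptotic regions.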
### 2.4 Classical solutions of the Dirac equation
We then obtain solutions of the squared Dirac equation,
$\Phi_{n,p_{y},p_{z}}(x)$, as follows:
$\displaystyle\Phi_{n,p_{y},p_{z}}=\exp(ip_{y}y+ip_{z}z)$ $\displaystyle\hskip
14.22636pt\times\begin{pmatrix}g_{n-1,p_{y}}(x)\{N^{(+)}_{1}\tilde{\phi}_{n,p_{z}}^{*(-)}(t)+N^{(-)}_{1}\tilde{\phi}_{n,p_{z}}^{*(+)}(t)\}\\
g_{n,p_{y}}(x)\{N^{(-)}_{2}\tilde{\phi}_{n,p_{z}}^{(-)}(t)+N^{(+)}_{2}\tilde{\phi}_{n,p_{z}}^{(+)}(t)\}\\
g_{n-1,p_{y}}(x)\{N^{(-)}_{3}\tilde{\phi}_{n,p_{z}}^{(-)}(t)+N^{(+)}_{3}\tilde{\phi}_{n,p_{z}}^{(+)}(t)\}\\
g_{n,p_{y}}(x)\{N^{(+)}_{4}\tilde{\phi}_{n,p_{z}}^{*(-)}(t)+N^{(-)}_{4}\tilde{\phi}_{n,p_{z}}^{*(+)}(t)\}\end{pmatrix}$
(41)
where $N^{(\pm)}_{i}$ are normalization constants. To construct the solutions
of the Dirac equation, we properly choose solutions in eq. (41) and extract
the right/left-handed solutions by performing a suitable projection. Here, we
choose the four independent solutions proportional to
$N^{(\pm)}_{1},N^{(\pm)}_{4}$ in eq. (41), which satisfy the orthogonality and
completeness relations, as we will show later.
First, we obtain the "right-handed" solutions by operating
$i\gamma^{\mu}D_{\mu}+m$ on the first row of eq. (41):
$\displaystyle\psi_{\boldsymbol{p}}^{(+,\tilde{R})}$
$\displaystyle=N^{(+)}_{1}\exp(ip_{y}y+ip_{z}z)$
$\displaystyle\times\begin{pmatrix}\cos\theta_{n}\cdot
g_{n-1,p_{y}}(x)\cdot\tilde{\phi}_{n,p_{z}}^{*(-)}(t)\\\ 0\\\
g_{n-1,p_{y}}(x)\cdot\tilde{\phi}_{n,p_{z}}^{(+)}(t)\\\ i\sin\theta_{n}\cdot
g_{n,p_{y}}(x)\cdot\tilde{\phi}_{n,p_{z}}^{*(-)}(t)\end{pmatrix}$ (42)
$\displaystyle\psi_{\boldsymbol{p}}^{(-,\tilde{R})}$
$\displaystyle=N^{(-)}_{1}\exp(ip_{y}y+ip_{z}z)$
$\displaystyle\times\begin{pmatrix}\cos\theta_{n}\cdot
g_{n-1,p_{y}}(x)\cdot\tilde{\phi}_{n,p_{z}}^{*(+)}(t)\\\ 0\\\
-g_{n-1,p_{y}}(x)\cdot\tilde{\phi}_{n,p_{z}}^{(-)}(t)\\\ i\sin\theta_{n}\cdot
g_{n,p_{y}}(x)\cdot\tilde{\phi}_{n,p_{z}}^{*(+)}(t)\end{pmatrix}$ (43)
where $\theta_{n}$ is defined by
$\displaystyle\theta_{n}=\arctan(\frac{\sqrt{2|eB|n}}{m})\;.$ (44)
Hereafter, we use the shorthand notation $\boldsymbol{p}=(n,p_{y},p_{z})$ for
simplicity.
Similarly, we obtain the "left-handed" solutions by operating
$i\gamma^{\mu}D_{\mu}+m$ on the fourth row of eq. (41):
$\displaystyle\psi_{\boldsymbol{p}}^{(+,\tilde{L})}$
$\displaystyle=N^{(+)}_{4}\exp(ip_{y}y+ip_{z}z)$
$\displaystyle\times\begin{pmatrix}i\sin\theta_{n}\cdot
g_{n-1,p_{y}}(x)\cdot\tilde{\phi}_{n,p_{z}}^{*(-)}(t)\\\
g_{n,p_{y}}(x)\cdot\tilde{\phi}_{n,p_{z}}^{(+)}(t)\\\ 0\\\ \cos\theta_{n}\cdot
g_{n,p_{y}}(x)\cdot\tilde{\phi}_{n,p_{z}}^{*(-)}(t)\end{pmatrix}$ (45)
$\displaystyle\psi_{\boldsymbol{p}}^{(-,\tilde{L})}$
$\displaystyle=N^{(-)}_{4}\exp(ip_{y}y+ip_{z}z)$
$\displaystyle\times\begin{pmatrix}i\sin\theta_{n}\times
g_{n-1,p_{y}}(x)\cdot\tilde{\phi}_{n,p_{z}}^{*(+)}(t)\\\
-g_{n,p_{y}}(x)\cdot\tilde{\phi}_{n,p_{z}}^{(-)}(t)\\\ 0\\\
\cos\theta_{n}\cdot
g_{n,p_{y}}(x)\cdot\tilde{\phi}_{n,p_{z}}^{*(+)}(t)\end{pmatrix}$ (46)
In the massless limit, $m\to 0$, the solutions (42) and (43) are exact
eigenspinors of the chirality operator $\gamma^{5}$ with eigenvalue $+1$,
while (45) and (46) are chirality eigenstates with eigenvalue $-1$. Note that
$\psi_{0,p_{y},p_{z}}^{(\pm,\tilde{R})}=0$ because $g_{-1,p_{y}}=0$ and
$\sin\theta_{0}=0$.
These solutions of the Dirac equation form a complete orthonormal basis. By
choosing the normalization constants
$N^{(+)}_{1}=N^{(-)}_{1}=N^{(+)}_{4}=N^{(-)}_{4}=1$, the orthonormal relations
are given by
$\displaystyle\int
d^{3}x[\psi_{\boldsymbol{p^{\prime}}}^{(u^{\prime},s^{\prime})}(x)]^{\dagger}[\psi_{\boldsymbol{p}}^{(u,s)}(x)]$
$\displaystyle\hskip
28.45274pt=(2\pi)^{2}\delta_{uu^{\prime}}\delta_{ss^{\prime}}\delta_{nn^{\prime}}\delta(p_{y}-p_{y}^{\prime})\delta(p_{z}-p_{z}^{\prime})\;,$
which holds except for $\psi_{0,p_{y},p_{z}}^{(\pm,\tilde{R})}$. Moreover, one
can show the completeness relation for eqs. (42,43,45,46)
$\displaystyle\sum_{\boldsymbol{p}}\sum_{s=\tilde{R},\tilde{L}}\sum_{u=\pm}[\psi_{\boldsymbol{p}}^{(u,s)}(t,\boldsymbol{x})]_{\alpha}[\psi_{\boldsymbol{p}}^{\dagger(u,s)}(t,\boldsymbol{x}^{\prime})]_{\beta}$
$\displaystyle\hskip
28.45274pt=(2\pi)^{2}\delta_{\alpha\beta}\delta^{(3)}(\boldsymbol{x}-\boldsymbol{x}^{\prime})\;,$
which guarantees the validity of our construction from eq. (41).
## 3 Quantization and vacuum expectation values of currents
To construct the quantum field theory with the external EM field, we first
introduce the fermionic field operators from eqs. (42), (43), (45), (46);
$\displaystyle\hat{\psi}(x)$
$\displaystyle=\sum_{n=0}^{\infty}\int\frac{dp_{y}}{\sqrt{2\pi}}\int\frac{dp_{z}}{\sqrt{2\pi}}$
$\displaystyle\times\sum_{s=\tilde{R},\tilde{L}}(\hat{b}_{s,\boldsymbol{p}}\psi_{s,\boldsymbol{p}}^{(+)}(x)+\hat{d}_{s,-\boldsymbol{p}}^{\dagger}\psi_{s,\boldsymbol{p}}^{(-)}(x))$
(47)
where
$\hat{b}_{s,p}^{\dagger},\hat{d}_{s,p}^{\dagger}$
($\hat{b}_{s,p},\hat{d}_{s,p}$) are interpreted as the creation
(annihilation) operators of particles and antiparticles. These operators obey
the anti-commutation relations
$\displaystyle\{\hat{b}_{s,p},\hat{b}_{s^{\prime},p^{\prime}}^{\dagger}\}=\{\hat{d}_{s,p},\hat{d}_{s^{\prime},p^{\prime}}^{\dagger}\}=\delta_{ss^{\prime}}\delta_{nn^{\prime}}\delta(p_{y}-p_{y}^{\prime})\delta(p_{z}-p_{z}^{\prime})\,,$
which are equivalent to the anti-commutation relations for the field operators,
$\displaystyle\{\hat{\psi}_{\alpha}(t,\boldsymbol{x}),\hat{\psi}^{\dagger}_{\beta}(t,\boldsymbol{x}^{\prime})\}=\delta^{(3)}(\boldsymbol{x}-\boldsymbol{x}^{\prime})\delta_{\alpha\beta}\;.$
In order to describe the fermion field under the time-dependent EM field, we
adopt the Heisenberg picture in the following calculations, and define the
vacuum state $|0\rangle$ at $t\to-\infty$,
$\displaystyle\hat{b}_{s,\boldsymbol{p}}|0\rangle=0,~{}\hat{d}_{s,\boldsymbol{p}}|0\rangle=0~{}(\text{for
all }s,\boldsymbol{p}),~{}~{}\langle 0|0\rangle=1\;.$ (48)
We obtain asymptotic behavior of the eigenfunctions
$\tilde{\phi}_{n,p_{z}}^{(\pm)}(t)$ at $t\to-\infty$ as
$\displaystyle\tilde{\phi}_{n,p_{z}}^{(+)}(t)\propto\exp(-i\omega_{n,p_{z}}(0)t)~{}~{}~{}(t\to-\infty)$
(49a)
$\displaystyle\tilde{\phi}_{n,p_{z}}^{(-)}(t)\propto\exp(+i\omega_{n,p_{z}}(0)t)~{}~{}~{}(t\to-\infty)$
(49b)
Thus, the eigenfunction $\tilde{\phi}_{n,p_{z}}^{(+)}(t)$
($\tilde{\phi}_{n,p_{z}}^{(-)}(t)$) at $t\to-\infty$ coincides with a positive
(negative) energy solution of the free Dirac fermion.
By using the quantized fields, the classical current,
$j(\Gamma;x)=\bar{\psi}(x)\Gamma\psi(x)$, is replaced by the current operator
$\displaystyle\hat{j}({\Gamma};x)$
$\displaystyle=\frac{1}{2}[\hat{\bar{\psi}}(x),\Gamma\hat{\psi}(x)]$
$\displaystyle=\frac{1}{2}[\hat{\bar{\psi}}_{\alpha}(x)\Gamma_{\alpha\beta}\hat{\psi}_{\beta}(x)-\Gamma_{\alpha\beta}\hat{\psi}_{\beta}(x)\hat{\bar{\psi}}_{\alpha}(x)]$
where $\Gamma$ are products of $\gamma$ matrices, i.e.
$\Gamma=(1,i\gamma_{5},\gamma_{\mu},\gamma_{5}\gamma_{\mu})$. We can calculate
the vacuum expectation value (VEV) of the corresponding current as follows
$\displaystyle\langle\bar{\psi}(x)\Gamma\psi(x)\rangle$ $\displaystyle=\langle
0|\hat{j}({\Gamma};x)|0\rangle$ (50a)
$\displaystyle=\sum_{n=0}^{\infty}\int\frac{dp_{y}}{2\pi}\int\frac{dp_{z}}{2\pi}S_{\boldsymbol{p}}(x;\Gamma)$
(50b)
where we define $S_{\boldsymbol{p}}(x;\Gamma)$ as
$\displaystyle S_{\boldsymbol{p}}(x;\Gamma)$
$\displaystyle\equiv\frac{1}{2}\sum_{s=\tilde{R},\tilde{L}}[\bar{\psi}_{\boldsymbol{p}}^{(-,s)}(x)\Gamma\psi_{\boldsymbol{p}}^{(-,s)}(x)$
$\displaystyle\hskip
28.45274pt-\bar{\psi}_{\boldsymbol{p}}^{(+,s)}(x)\Gamma\psi_{\boldsymbol{p}}^{(+,s)}(x)]$
(51)
Using eqs. (42),(43),(45),(46) we find $S_{\boldsymbol{p}}(x;\Gamma)$ for
various $\Gamma$ as,
$\displaystyle
S_{\boldsymbol{p}}(x;\gamma^{0}\gamma^{5})=[g_{n-1,p_{y}}^{2}-g_{n,p_{y}}^{2}][|\tilde{\phi}_{n,p_{z}}^{(+)}|^{2}-|\tilde{\phi}_{n,p_{z}}^{(-)}|^{2}]$
(52) $\displaystyle
S_{\boldsymbol{p}}(x;\gamma^{3})=[g_{n-1,p_{y}}^{2}+g_{n,p_{y}}^{2}][|\tilde{\phi}_{n,p_{z}}^{(+)}|^{2}-|\tilde{\phi}_{n,p_{z}}^{(-)}|^{2}]$
(53) $\displaystyle
S_{\boldsymbol{p}}(x;i\gamma^{5})=2[g_{n-1,p_{y}}^{2}-g_{n,p_{y}}^{2}]\cos\theta_{n}\,\mbox{Im}[\tilde{\phi}_{n,p_{z}}^{(+)}\tilde{\phi}_{n,p_{z}}^{(-)}]$
(54) $\displaystyle
S_{\boldsymbol{p}}(x;\gamma^{0}\gamma^{3})=2[g_{n-1,p_{y}}^{2}+g_{n,p_{y}}^{2}]\cos\theta_{n}\,\mbox{Im}[\tilde{\phi}_{n,p_{z}}^{(+)}\tilde{\phi}_{n,p_{z}}^{(-)}]$
(55) $\displaystyle
S_{\boldsymbol{p}}(x;i\gamma^{1}\gamma^{2})=2[g_{n-1,p_{y}}^{2}-g_{n,p_{y}}^{2}]\cos\theta_{n}\,\mbox{Re}[\tilde{\phi}_{n,p_{z}}^{(+)}\tilde{\phi}_{n,p_{z}}^{(-)}]$
(56) $\displaystyle
S_{\boldsymbol{p}}(x;1)=2[g_{n-1,p_{y}}^{2}+g_{n,p_{y}}^{2}]\cos\theta_{n}\,\mbox{Re}[\tilde{\phi}_{n,p_{z}}^{(+)}\tilde{\phi}_{n,p_{z}}^{(-)}]\;.$
(57)
For further calculations, we integrate the right-hand side over $p_{y}$,
paying attention to $g_{-1,p_{y}}=0$; namely,
$\displaystyle\int dp_{y}[g_{n-1,p_{y}}^{2}(x)-g_{n,p_{y}}^{2}(x)]$
$\displaystyle=-|eB|\delta_{n0}$ (58) $\displaystyle\int
dp_{y}[g_{n-1,p_{y}}^{2}(x)+g_{n,p_{y}}^{2}(x)]$
$\displaystyle=|eB|\alpha_{n}$ (59)
where $\alpha_{n}$ are defined by
$\displaystyle\alpha_{n}=\begin{cases}1&\text{if~{}}n=0\\\
2&\text{if~{}}n=1,2,3,\cdots\;.\end{cases}$ (60)
## 4 Regularized VEV and Chiral Anomaly
### 4.1 Regularization and VEVs of currents
The VEVs of the currents derived in the previous section diverge when we
integrate over $p_{z}$; thus we need some regularization. Because these VEVs
result from a subtle cancellation of divergent integrals, the use of a
gauge-invariant regularization is essential. Here, we use the point-split
regularization[21, 22], which is known as a gauge-invariant regularization
scheme.
The regularization of the $p_{z}$ integral essentially introduces non-locality
in the $z$ direction. We replace the local current operator,
$\bar{\psi}(x)\Gamma{\psi}(x)$, by an integral of the non-local current as
follows:
$\displaystyle\bar{\psi}(z)\Gamma{\psi}(z)$ $\displaystyle=\int
dz^{\prime}\bar{\psi}(z^{\prime})\delta(z-z^{\prime})\Gamma{\psi}(z)$ (61)
$\displaystyle=\lim_{\varepsilon\to 0}\int
dz^{\prime}\bar{\psi}(z^{\prime})h_{\varepsilon}(z-z^{\prime})\Gamma{\psi}(z)$
(62)
where
$\displaystyle
h_{\varepsilon}(z-z^{\prime})\equiv\frac{1}{2\sqrt{\pi\varepsilon}}\exp(-\frac{(z-z^{\prime})^{2}}{4\varepsilon})$
(63)
which reduces to the delta function $\delta(z-z^{\prime})$ as
$\varepsilon\to 0$.
This non-locality clearly breaks the local gauge invariance of the matrix
elements. To recover the gauge invariance, we insert the Wilson line into the
non-local current,
$\displaystyle
U(z^{\prime},z)=\mbox{P}\,\exp[ie\int_{z^{\prime}}^{z}d\tilde{x}^{\mu}A_{\mu}(\tilde{x})]$
(64)
where P denotes path ordering and the choice of the integration path is
arbitrary. We choose the straight line connecting $z^{\prime}$ and $z$ for the
path function $U(z^{\prime},z)$:
$\displaystyle\bar{\psi}(x)\Gamma{\psi}(x)\to\int_{-\infty}^{\infty}dz^{\prime}\bar{\psi}(z^{\prime})h_{\varepsilon}(z-z^{\prime})U(z^{\prime},z)\Gamma{\psi}(z)\;.$
(65)
Carrying out the $z^{\prime}$ integration, we obtain the regularized VEV of
the current,
$\displaystyle\langle\bar{\psi}(x)\Gamma\psi(x)\rangle=\sum_{n=0}^{\infty}\int\frac{dp_{y}}{2\pi}\int\frac{dp_{z}}{2\pi}$
$\displaystyle\hskip
14.22636pt\times\sum_{s=\tilde{R},\tilde{L}}\,\exp(-\varepsilon{[p_{z}+eA_{z}(t)]^{2}})\;S_{\boldsymbol{p}}(x;\Gamma)$
(66)
The regularization factor, $\exp(-\varepsilon{[p_{z}+eA_{z}(t)]^{2}})$, is now
inserted into the integrand of the VEVs. Because the parameter $\varepsilon$
has the dimension of $(\mathrm{mass})^{-2}$, we introduce the cutoff parameter
$\Lambda^{2}\equiv 1/\varepsilon$, with $\Lambda$ having the dimension of
mass.
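How the Gaussian factor arises can be illustrated numerically: smearing a plane-wave phase $e^{ip(z'-z)}$ with $h_{\varepsilon}$ of eq. (63) yields exactly $e^{-\varepsilon p^{2}}$, which becomes $f_{\Lambda}$ once the Wilson line shifts $p_{z}\to p_{z}+eA_{z}(t)$. The values of $\varepsilon$ and $p$ below are illustrative:

```python
import numpy as np

eps, p = 0.1, 2.3   # illustrative point-splitting parameter and momentum

x = np.linspace(-30.0, 30.0, 20001)   # x = z' - z
# smearing function of eq. (63)
h = np.exp(-x ** 2 / (4.0 * eps)) / (2.0 * np.sqrt(np.pi * eps))

# Smeared plane-wave matrix element: trapezoidal integral of h(x) e^{ipx}.
integrand = h * np.exp(1j * p * x)
val = np.sum((integrand[1:] + integrand[:-1]) / 2.0) * (x[1] - x[0])

# The result is the Gaussian damping factor that appears in eq. (66).
assert np.isclose(val.real, np.exp(-eps * p ** 2), atol=1e-8)
assert abs(val.imag) < 1e-8
```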
We arrive at the final expressions for the regularized VEVs of the currents:
$\displaystyle\langle\bar{\psi}\gamma^{0}\gamma^{5}\psi\rangle=\frac{eB}{2\pi}\int\frac{dp_{z}}{2\pi}f_{\Lambda}(p_{z})[|\tilde{\phi}_{0,p_{z}}^{(+)}(t)|^{2}-|\tilde{\phi}_{0,p_{z}}^{(-)}(t)|^{2}]$
(67)
$\displaystyle\langle\bar{\psi}i\gamma^{1}\gamma^{2}\psi\rangle=2\frac{eB}{2\pi}\int\frac{dp_{z}}{2\pi}f_{\Lambda}(p_{z})\cdot\mbox{Re}[\tilde{\phi}_{0,p_{z}}^{(+)}(t)\,\tilde{\phi}_{0,p_{z}}^{(-)}(t)]$
(68)
$\displaystyle\langle\bar{\psi}i\gamma^{5}\psi\rangle=-2\frac{eB}{2\pi}\int\frac{dp_{z}}{2\pi}f_{\Lambda}(p_{z})\cdot\mbox{Im}[\tilde{\phi}_{0,p_{z}}^{(+)}(t)\,\tilde{\phi}_{0,p_{z}}^{(-)}(t)]$
(69)
$\displaystyle\langle\bar{\psi}\gamma^{3}\psi\rangle=\frac{|eB|}{2\pi}\int\frac{dp_{z}}{2\pi}\sum_{n=0}^{\infty}\alpha_{n}f_{\Lambda}(p_{z})[|\tilde{\phi}_{n,p_{z}}^{(+)}(t)|^{2}-|\tilde{\phi}_{n,p_{z}}^{(-)}(t)|^{2}]$
(70)
$\displaystyle\langle\bar{\psi}\psi\rangle=-2\frac{|eB|}{2\pi}\int\frac{dp_{z}}{2\pi}\sum_{n=0}^{\infty}\alpha_{n}f_{\Lambda}(p_{z})\cos\theta_{n}$
$\displaystyle\hskip
85.35826pt\times\mbox{Re}[\tilde{\phi}_{n,p_{z}}^{(+)}(t)\,\tilde{\phi}_{n,p_{z}}^{(-)}(t)]$
(71)
$\displaystyle\langle\bar{\psi}i\gamma^{0}\gamma^{3}\psi\rangle=2\frac{|eB|}{2\pi}\int\frac{dp_{z}}{2\pi}\sum_{n=0}^{\infty}\alpha_{n}f_{\Lambda}(p_{z})\cos\theta_{n}$
$\displaystyle\hskip
85.35826pt\times\mbox{Im}[\tilde{\phi}_{n,p_{z}}^{(+)}(t)\,\tilde{\phi}_{n,p_{z}}^{(-)}(t)]$
(72)
where $f_{\Lambda}(p_{z})=\exp(-[p_{z}+eA_{z}(t)]^{2}/\Lambda^{2})$.
It is clear that only the lowest Landau level (LLL), $n=0$, contributes to the
chirality imbalance eq. (67), the pseudoscalar density eq. (69), and the tensor
density eq. (68). On the other hand, we need to sum up contributions from all
possible Landau levels for the vector current eq. (70) as well as the scalar
density eq. (71). This is different from calculations with the chiral chemical
potential [27, 28], where the vector current is given by the contribution from
only the lowest Landau level. In our case, when there is no electric field
($t=-\infty$), $|\tilde{\phi}_{n,p_{z}}^{(+)}|^{2}$ and
$|\tilde{\phi}_{n,p_{z}}^{(-)}|^{2}$ in (70) show the same momentum
distribution for each $n$, and a cancellation gives a null vector current.
After the electric field is turned on ($t\geq 0$), however, the momentum
distributions of $|\tilde{\phi}_{n,p_{z}}^{(+)}|^{2}$ and
$|\tilde{\phi}_{n,p_{z}}^{(-)}|^{2}$ with the regularization function become
different in eqs. (35, 36), and the resulting vector current is finite for
each $n$, although the contributions from higher Landau levels are small. Our
results are consistent with ref. [22].
We also find that the VEVs for all other $\Gamma$s vanish. In particular, the
spin expectation value of the $z$ component vanishes,
$\langle\bar{\psi}\gamma_{3}\gamma_{5}\psi\rangle\sim\langle S_{z}\rangle=0$.
Thus, there is no magnetization of the vacuum due to the chiral anomaly. This
is in contrast with the non-zero result for the tensor matrix element,
$\langle\bar{\psi}\sigma_{12}\psi\rangle\neq 0$, in eq. (68). This difference
may come from the roles of antifermions in these matrix elements, i.e.
$\langle\bar{\psi}\gamma_{3}\gamma_{5}\psi\rangle$ expresses a sum of fermion
and antifermion contributions, while the tensor matrix element describes their
difference.
We are particularly interested in the $z$-component of the electric current,
$J_{z}=e\langle\bar{\psi}\gamma^{3}\psi\rangle$, in view of the chiral
magnetic effect. From eqs. (67) and (70), we find a simple relation between
$J_{z}$ and the chirality imbalance $n_{5}$ in the LLL approximation ($n=0$)
as
$\displaystyle J_{z}$ $\displaystyle\simeq
e\frac{|eB|}{2\pi}\int\frac{dp_{z}}{2\pi}f_{\Lambda}(p_{z})[|\tilde{\phi}_{0,p_{z}}^{(+)}(t)|^{2}-|\tilde{\phi}_{0,p_{z}}^{(-)}(t)|^{2}]$
$\displaystyle=e\frac{|eB|}{eB}n_{5}$ $\displaystyle=\mathrm{sgn}(eB)e\,n_{5}$
(73)
Eq. (73) tells us that $J_{z}$ is essentially proportional to $n_{5}$ in the
limit of a strong magnetic field, where the use of the LLL approximation can
be justified. The result agrees with the one obtained in the previous work
[8], although the existence of the chiral chemical potential is assumed in
ref. [8]. Here, we recover eq. (73) by considering the massive fermion under
the external EM fields, without assuming a chirality-asymmetric source.
### 4.2 Chiral anomaly with the regularization
We are also interested in the modification of the conservation law for the
axial-vector current[1]. Neglecting the surface term from the current
divergence, the chiral anomaly relation is given by
$\displaystyle\partial_{t}\int
d^{3}x\langle\bar{\psi}\gamma^{0}\gamma^{5}\psi\rangle=2im\int
d^{3}x\langle\bar{\psi}\gamma^{5}\psi\rangle+\frac{2\alpha}{\pi}\int
d^{3}x{\bm{E}}\cdot{\bm{B}}\,.$ (74)
Here, the second term of the RHS is just an input of the model calculation in
our case. On the other hand, we have already calculated the LHS and the first
term of the RHS individually. Thus, we can check the consistency of our
calculations with the point-split regularization.
For the LHS of the chiral anomaly, we simply calculate the time-derivative of
the chirality imbalance. If we used eq. (52) for the chirality imbalance
without invoking the momentum regularization, the time derivative would yield,
$\displaystyle\partial_{t}\langle\bar{\psi}\gamma^{0}\gamma^{5}\psi\rangle$
$\displaystyle=2im\cdot
2[g_{n-1,p_{y}}^{2}(x)-g_{n,p_{y}}^{2}(x)]\cos\theta_{n}\,\mbox{Im}[\tilde{\phi}_{n,p_{z}}^{(+)}(t)\,\tilde{\phi}_{n,p_{z}}^{(-)}(t)]$
$\displaystyle=2im\langle\bar{\psi}\gamma^{5}\psi\rangle\;\;.$ (75)
This is just a classical conservation law for the axial-vector current.
However, the gauge invariant regularization provides an additional time-
dependent factor coming from $\exp(-[p_{z}+eA_{z}(t)]^{2}/\Lambda^{2})$, in
the integrand of the chirality imbalance. Using the regularized result, eq.
(67), we obtain a modified conservation law as
$\displaystyle\partial_{t}\langle\bar{\psi}\gamma^{0}\gamma^{5}\psi\rangle=2im\langle\bar{\psi}\gamma^{5}\psi\rangle+\frac{2\alpha}{\pi}E_{z}(t)\,B\,F_{\Lambda}(t)$
(76)
where
$\displaystyle
F_{\Lambda}(t)=\int^{\infty}_{-\infty}dp_{z}\frac{p_{z}+eA_{z}(t)}{\Lambda^{2}}f_{\Lambda}(p_{z})[|\tilde{\phi}_{0,p_{z}}^{(+)}(t)|^{2}-|\tilde{\phi}_{0,p_{z}}^{(-)}(t)|^{2}]\;.$
This is the axial-vector current conservation law in our framework. If the
momentum cutoff is large enough, $\Lambda\gg m$, we obtain the simple relation
$\lim_{\Lambda\to\infty}F_{\Lambda}(t)=1$, which is explicitly shown in the
Appendix, and thus reproduce the correct anomaly relation eq. (74).
## 5 Time evolution of the VEVs of the currents
In this section, we show numerical results for the time evolution of the
chirality imbalance and the CME in the vacuum. Here, we have three independent
parameters of the model: the magnitudes of the electric and magnetic fields,
and the fermion mass, which are expressed in units of the electron mass
$m_{e}=0.5$ MeV. We also need the parameter $\Lambda$ in the regularization
function, and take $\Lambda=30m_{e}$, which is much larger than the fermion
mass scale.
In our study, we calculate the VEVs of the vacuum under the parallel constant
magnetic and time-dependent Sauter electric fields, whose magnitudes can be
fixed independently. The chirality imbalance is conveniently discussed in
terms of the magnetic helicity density $h$ defined in eq. (15). As we have
discussed in the previous section, our calculation is fully consistent with
the chiral anomaly relation. In the massless limit, it simplifies to
$\displaystyle\partial_{t}\int
d^{3}x\langle\bar{\psi}\gamma^{0}\gamma^{5}\psi\rangle$
$\displaystyle\simeq\frac{2\alpha}{\pi}\int d^{3}x\,{\bm{E}}\cdot{\bm{B}}$
$\displaystyle=-\partial_{t}\left(\frac{2\alpha}{\pi}\int
d^{3}x\,{\bm{A}}\cdot{\bm{B}}\right)$ (77)
since we consider a time-independent magnetic field. The integrand of the RHS
is just the magnetic helicity $h(t)$. Hence, with our EM field, the chirality
imbalance becomes
$\displaystyle
n_{5}(t=\infty)=-\frac{2\alpha}{\pi}h(t=\infty)=e^{2}BE\tau/\pi^{2},$ (78)
which is true only for massless fermions. Nevertheless, it is convenient to
express the chirality imbalance (and the CME) in units of the magnetic
helicity, $e^{2}BE\tau/\pi^{2}$.
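As a quick numerical check of eq. (78), one can verify that the Sauter pulse $E_{z}(t)=E/\cosh^{2}(t/\tau)$ of eq. (13) carries the total impulse $\int E_{z}\,dt=2E\tau$, which fixes the asymptotic gauge potential and hence the final helicity. The sketch below is illustrative only; the parameter values are those used in Fig. 1 and are otherwise arbitrary.

```python
import numpy as np

# Sauter pulse E_z(t) = E / cosh^2(t/tau); parameters as in Fig. 1 (arbitrary units).
E, tau = 4.0, 0.5
t = np.linspace(-50 * tau, 50 * tau, 200001)
E_t = E / np.cosh(t / tau) ** 2

# Trapezoidal rule; the integrated field equals 2*E*tau, which sets
# eA_z(t = infinity) = 2eE*tau and thereby h(t = infinity) in eq. (78).
impulse = np.sum(0.5 * (E_t[1:] + E_t[:-1]) * np.diff(t))
print(impulse)  # ~ 4.0 = 2 * E * tau
```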
In Fig. 1, we first show the chirality imbalance $n_{5}$ as a function of
$t$, together with the shape of the Sauter electric field (dash-dotted
curve). In the massless case (solid curve), $n_{5}$ increases due to the
electric field and approaches a finite value, $e^{2}BE\tau/\pi^{2}$, at
$t\to\infty$ even after the $E$ field has diminished. On the other hand, in
the case of a finite fermion mass, the chirality imbalance consists of both a
constant part and an oscillating part at $t\to\infty$. When the mass is
comparable with the magnitude of the electric field, $m^{2}\sim eE$, the
chirality imbalance is largely suppressed, as depicted by the dotted curve.
Thus, we find that the average chirality imbalance is almost zero if
$m^{2}>eE$. We will relate these results to the fermion pair production from
the vacuum in view of the Schwinger mechanism [24].
We also examine the effects of the magnetic field on the chirality imbalance.
If we increase the strength of the magnetic field, the magnitude of the
chirality imbalance is also increased, being simply proportional to the
magnetic helicity. However, the time dependence of $n_{5}$ is unchanged, as
expected.
Figure 1: The time evolution of the chirality imbalance $n_{5}$ (in units of
$e^{2}BE\tau/\pi^{2}$). $eE/m_{e}^{2}=4.0$, $\tau m_{e}=0.5$,
$eB/m_{e}^{2}=8.0$
We then show the vector current along the $z$-direction in Fig. 2, which can
be understood as the chiral magnetic effect. Again, the vector current is
shown in units of the magnetic helicity density. In the massless limit, the
vector current, depicted by the solid curve, consists of a dominant constant
part and a tiny oscillating part, which is somewhat different from the
behavior of the chirality imbalance $n_{5}$. This is because $n_{5}$ is solely
determined by the lowest Landau level contribution, while the vector current
gets contributions from higher Landau levels in eq. (70). The average CME
current almost vanishes for a small electric field, $m^{2}>eE$, similarly to
the chirality imbalance.
From Fig. 2, for $t/\tau\gg 1$ where there is no electric field, the CME
current for the massless fermion is expressed as
$\displaystyle
j_{z}\simeq\frac{e^{2}BE\tau}{\pi^{2}}=\frac{\alpha}{2\pi}B\left(8E\tau\right)\;.$
(79)
The form of eq.(79) is the same as eq. (1) if we substitute $8E\tau$ for
$\mu_{5}$. This crude identification is justified only if $t$ is large enough
compared with the time scale $\tau$ of the electric field in eq. (13).
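The rewriting in eq. (79) is a one-line identity: with $\alpha=e^{2}/4\pi$, the coefficient $e^{2}BE\tau/\pi^{2}$ is exactly $(\alpha/2\pi)B\,(8E\tau)$, which is what motivates the identification of $8E\tau$ with $\mu_{5}$ in eq. (1). A symbolic check (illustrative only):

```python
import sympy as sp

e, B, E, tau = sp.symbols('e B E tau', positive=True)
alpha = e**2 / (4 * sp.pi)  # fine-structure constant in these units

lhs = e**2 * B * E * tau / sp.pi**2            # eq. (79), left form
rhs = alpha / (2 * sp.pi) * B * (8 * E * tau)  # eq. (79), right form
assert sp.simplify(lhs - rhs) == 0
```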
Figure 2: The time evolution of the vector current density $j_{z}$. $\tau
m_{e}=0.5$, $eB/m_{e}^{2}=8.0$
For completeness, we also show the pseudoscalar density in Fig. 3, calculated
with eq. (69). As expected from the chiral anomaly relation eq. (76), the
pseudoscalar density is significant only at $t\sim 0$.
Figure 3: The time evolution of the pseudoscalar density.
$m_{e}=0.5\,\mathrm{MeV}$, $eE/m_{e}^{2}=4.0$, $\tau m_{e}=0.5$,
$eB/m_{e}^{2}=8.0$
## 6 Relation to the fermion pair-production
In order to understand the appearance of the chirality imbalance from the
vacuum, we relate it to the fermion pair production [24, 25]. To do so, we
find a relation between the "in-state" vacuum at $t\to-\infty$ and the
"out-state" vacuum at $t\to\infty$. As discussed in eqs. (80, 81), our
original "in-state" vacuum at $t\to-\infty$, $|0\rangle$, coincides with the
free-particle vacuum (although $B\neq 0$). However, due to the Sauter electric
field, the vacuum at $t\to\infty$, $|0\rangle_{\mbox{out}}$, is not the same
as the original vacuum $|0\rangle$.
To proceed with the calculation, we need the asymptotic forms of the
eigenfunctions $\tilde{\phi}_{{n,p_{z}}}^{(\pm)}(t)$ at $t\to-\infty$ and at
$t\to\infty$. For the "in-state", the eigenfunctions reduce to
$\displaystyle\tilde{\phi}_{n,p_{z}}^{(+)}(t)\propto\exp(-i\omega_{n,p_{z}}(0)t)~{}~{}~{}(t\to-\infty)$
(80)
$\displaystyle\tilde{\phi}_{n,p_{z}}^{(-)}(t)\propto\exp(+i\omega_{n,p_{z}}(0)t)~{}~{}~{}(t\to-\infty)$
(81)
which agree with the positive/negative energy plane-wave solutions with the
energies $\pm\omega(0)$ defined in eq. (38). On the other hand, with the help
of the connection formula for the Gauss hypergeometric function, the
"out-state" eigenfunctions are rewritten as
$\displaystyle\tilde{\phi}_{{n,p_{z}}}^{(+)}(t)$
$\displaystyle=\alpha_{n,p_{z}}\tilde{\phi}_{\mathrm{out},{n,p_{z}}}^{(+)}(t)-\beta_{n,p_{z}}^{*}\tilde{\phi}_{\mathrm{out},{n,p_{z}}}^{(-)}(t)$
(82) $\displaystyle\tilde{\phi}_{{n,p_{z}}}^{(-)}(t)$
$\displaystyle=\alpha_{n,p_{z}}^{*}\tilde{\phi}_{\mathrm{out},{n,p_{z}}}^{(-)}(t)+\beta_{n,p_{z}}\tilde{\phi}_{\mathrm{out},{n,p_{z}}}^{(+)}(t)$
(83)
where
$\displaystyle\tilde{\phi}_{\mathrm{out},n,p_{z}}^{(+)}(t)$
$\displaystyle\equiv\sqrt{\frac{\omega(1)+[p_{z}+2eE\tau]}{2\omega(1)}}u^{-\frac{i\tau\omega(0)}{2}}(1-u)^{\frac{i\tau\omega(1)}{2}}$
$\displaystyle\times F\left({a,b\atop 1+a+b-c};1-u\right)$ (84)
$\displaystyle\tilde{\phi}_{\mathrm{out},n,p_{z}}^{(-)}(t)$
$\displaystyle\equiv\sqrt{\frac{\omega(1)-[p_{z}+2eE\tau]}{2\omega(1)}}u^{\frac{i\tau\omega(0)}{2}}(1-u)^{-\frac{i\tau\omega(1)}{2}}$
$\displaystyle\times F\left({1-a,1-b\atop 1+c-a-b};1-u\right)\;.$ (85)
and
$\displaystyle\alpha_{n,p_{z}}$
$\displaystyle=\sqrt{\frac{\omega(0)+p_{z}}{\omega(0)}}\sqrt{\frac{\omega(1)}{\omega(1)+[p_{z}+2eE\tau]}}\frac{2i}{\tau[\omega(0)+\omega(1)-2eE\tau]}$
$\displaystyle\hskip
56.9055pt~{}~{}\times\frac{\Gamma(1-i\tau\omega(0))\Gamma(-i\tau\omega(1))}{\Gamma(-\frac{i\tau\omega(0)}{2}-\frac{i\tau\omega(1)}{2}-ieE\tau^{2})\Gamma(-\frac{i\tau\omega(0)}{2}-\frac{i\tau\omega(1)}{2}+ieE\tau^{2})}$
(86) $\displaystyle\beta_{n,p_{z}}$
$\displaystyle=\sqrt{\frac{\omega(0)+p_{z}}{\omega(0)}}\sqrt{\frac{\omega(1)}{\omega(1)-[p_{z}+2eE\tau]}}\frac{2i}{\tau[\omega(0)-\omega(1)-2eE\tau]}$
$\displaystyle\hskip
56.9055pt~{}~{}\times\frac{\Gamma(1+i\tau\omega(0))\Gamma(-i\tau\omega(1))}{\Gamma(\frac{i\tau\omega(0)}{2}-\frac{i\tau\omega(1)}{2}+ieE\tau^{2})\Gamma(\frac{i\tau\omega(0)}{2}-\frac{i\tau\omega(1)}{2}-ieE\tau^{2})}\;.$
(87)
$\tilde{\phi}_{\mathrm{out},n,p_{z}}^{(+)}(t)$ and
$\tilde{\phi}_{\mathrm{out},n,p_{z}}^{(-)}(t)$ are further simplified as
$\displaystyle\tilde{\phi}_{\mathrm{out},n,p_{z}}^{(+)}(t)\propto\exp(-i\omega(1)t)~{}~{}~{}(t\to\infty)$
(88)
$\displaystyle\tilde{\phi}_{\mathrm{out},n,p_{z}}^{(-)}(t)\propto\exp(i\omega(1)t)~{}~{}~{}(t\to\infty)\;.$
(89)
which are the free-fermion wave functions with the energy $\omega(1)$. From
these functions, we can construct the Bogoliubov transformation between the
"in-state" and the "out-state" [24, 25]. We have already introduced the
annihilation operators and the vacuum for the "in-state":
$\displaystyle\hat{b}_{s,\boldsymbol{p}}|0\rangle=0,~{}\hat{d}_{s,\boldsymbol{p}}|0\rangle=0~{}(\text{for
all }s,\boldsymbol{p}),~{}~{}\langle 0|0\rangle=1\;.$ (90)
Similarly, we define the ”out-state” vacuum with operators
$\hat{b}^{\mathrm{(out)}}_{s,\boldsymbol{p}},\hat{d}^{\mathrm{(out)}}_{s,\boldsymbol{p}}$,
$\displaystyle\hat{b}^{\mathrm{(out)}}_{s,\boldsymbol{p}}|0\rangle_{\mbox{out}}=0,~{}\hat{d}^{\mathrm{(out)}}_{s,\boldsymbol{p}}|0\rangle_{\mbox{out}}=0~{}(\text{for
all }s,\boldsymbol{p}),~{}~{}{}_{\mbox{out}}\langle 0|0\rangle_{\mbox{out}}=1\;.$ (91)
where operators $\hat{b}^{\mathrm{(out)}}_{s,\boldsymbol{p}}$,
$\hat{d}^{\mathrm{(out)}}_{s,\boldsymbol{p}}$ are introduced as coefficients
of $\tilde{\phi}_{\mathrm{out},n,p_{z}}^{(+)}(t)$ and
$\tilde{\phi}_{\mathrm{out},n,p_{z}}^{(-)}(t)$ in the standard way. Thus,
these operators are subject to the transformation,
$\displaystyle\begin{pmatrix}\hat{b}_{\mathbf{p,s}}^{\mathrm{(out)}}\\\
\hat{d}_{\mathbf{p,s}}^{\mathrm{(out)}\,\dagger}\end{pmatrix}=\begin{pmatrix}\alpha_{n,p_{z}}&\beta_{n,p_{z}}\\\
-\beta_{n,p_{z}}^{*}&\alpha_{n,p_{z}}^{*}\end{pmatrix}\begin{pmatrix}\hat{b}_{\mathbf{p,s}}\\\
\hat{d}_{\mathbf{p,s}}^{\dagger}\end{pmatrix}$ (92)
where the Bogoliubov coefficients satisfy the unitarity condition
$|\alpha_{n,p_{z}}|^{2}+|\beta_{n,p_{z}}|^{2}=1$. The expectation value of the
out-state number operator in the original vacuum becomes
$\displaystyle\langle
0|\hat{b}_{\mathbf{p,s}}^{\mathrm{(out)}\,\dagger}\,\hat{b}_{\mathbf{p,s}}^{\mathrm{(out)}}|0\rangle=|\beta_{n,p_{z}}|^{2}$
(93)
which is understood as the probability to find a fermion, produced by the
electric field with the quantum numbers $(n,p_{z})$, at $t=\infty$ [24, 25,
31]. It is well known that $|\beta_{n,p_{z}}|^{2}$ is significant only if the
electric field is larger than the fermion mass squared, $eE>m^{2}$, which
means spontaneous creation of fermion pairs from the vacuum under a strong
electric field. Thus, we naively expect that the chirality imbalance may
emerge for $eE\gg m^{2}$.
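The transformation in eq. (92) is a fermionic Bogoliubov rotation: the condition $|\alpha_{n,p_{z}}|^{2}+|\beta_{n,p_{z}}|^{2}=1$ is exactly what makes the $2\times 2$ matrix unitary, so the out-state operators obey the same anticommutation relations as the in-state ones. A toy check with randomly chosen (not physical) coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bogoliubov coefficients: any complex pair normalized to |a|^2 + |b|^2 = 1.
z = rng.normal(size=4)
a, b = z[0] + 1j * z[1], z[2] + 1j * z[3]
norm = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
a, b = a / norm, b / norm

# The matrix of eq. (92); unitarity follows from the normalization alone.
U = np.array([[a, b], [-np.conj(b), np.conj(a)]])
assert np.allclose(U @ U.conj().T, np.eye(2))
```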
Using these results, one can express the VEVs of the vacuum at $t=\infty$ in
terms of the Bogoliubov coefficients. For example, the chirality imbalance
$n_{5}$ at $t=\infty$ is calculated as
$\displaystyle n_{5}|_{t=\infty}$
$\displaystyle=\frac{eB}{2\pi}\int\frac{dp_{z}}{2\pi}f_{\Lambda}(p_{z})\left[-2|\beta_{0,p_{z}}|^{2}\frac{p_{z}+2eE\tau}{\omega(1)}\right.$
$\displaystyle\hskip
42.67912pt\left.-2\frac{m}{\omega(1)}\mbox{Re}[\alpha_{0,p_{z}}\beta_{0,p_{z}}\mathrm{e}^{-2i\omega(1)t}]\right]$
(94)
where the regularization function at $t=\infty$ is
$\displaystyle
f_{\Lambda}(p_{z})=\exp[-(p_{z}+2eE\tau)^{2}/\Lambda^{2}]\;.$ (95)
The first term is independent of time, and is simply proportional to
$|\beta_{0,p_{z}}|^{2}$, the probability to find a produced particle in the
lowest Landau level with $p_{z}$. On the other hand, the second term is
proportional to the mass, and may be interpreted as an "interference" term.
At first sight, $n_{5}$ seems simply determined by the magnitude of
$|\beta_{0,p_{z}}|^{2}$. However, the existence of the chirality imbalance
strongly depends on the details of the integration over $p_{z}$ in eq. (94),
which is sensitive to the parameter $\tau$, the time scale of the electric
field in eq. (13). We will discuss how a non-zero $n_{5}$ appears in some
detail. In the massless limit, the first term of eq. (94), which we call
$n_{5}^{(0)}$, becomes
$\displaystyle n_{5}^{(0)}$
$\displaystyle=\frac{eB}{2\pi}\int\frac{dp_{z}}{2\pi}f_{\Lambda}(p_{z})\left[-2|\beta_{0,p_{z}}|^{2}\frac{p_{z}+2eE\tau}{\sqrt{(p_{z}+2eE\tau)^{2}+m^{2}}}\right]$
$\displaystyle\to\frac{eB}{2\pi}\int\frac{dp_{z}}{2\pi}f_{\Lambda}(p_{z})\left[-2\,\mbox{sgn}[{p_{z}+2eE\tau}]\,|\beta_{0,p_{z}}|^{2}\,\right]\;.$
(96)
In the presence of the uniform magnetic field, all the fermions move along the
$z$-direction, and the spin of the fermions in the lowest Landau level, which
can contribute to $|\beta_{0,p_{z}}|$, is parallel to the $z$-direction.
Hence, the fermions with positive canonical momenta, $p_{z}+2eE\tau>0$, carry
the right-handed chirality, while those with $p_{z}+2eE\tau<0$ are
left-handed. If the electric field were zero, the imbalance $n_{5}^{(0)}$
would vanish because of a cancellation between the contributions from
$p_{z}>0$ and $p_{z}<0$ fermions, by virtue of the symmetric $p_{z}$
distribution of the pair-production probability $|\beta_{0,p_{z}}|^{2}$.
However, a non-zero electric field induces an asymmetry between the momentum
distributions of right- and left-handed fermions, in both the sign function
$\mbox{sgn}[{p_{z}+2eE\tau}]$ and the regularization function $f_{\Lambda}$,
which indeed generates the chirality imbalance in this model.
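The cancellation argument above can be illustrated with a toy model: take a Gaussian stand-in for the pair-production probability $|\beta_{0,p_{z}}|^{2}$, peaked at $p_{z}=-eE\tau$ as in Fig. 4, and weight it with the sign function $\mathrm{sgn}(p_{z}+2eE\tau)$ of eq. (96). The width, shape and normalization here are invented for illustration; only the symmetry argument is the point.

```python
import numpy as np

def n5_toy(eEtau, width=1.0):
    """Toy version of the integrand of eq. (96): sgn(p + 2 eEtau) * |beta|^2,
    with |beta|^2 modeled as a Gaussian centered at p = -eEtau (overall sign
    and prefactors omitted)."""
    p = np.linspace(-50.0, 50.0, 400001)
    beta2 = np.exp(-((p + eEtau) / width) ** 2)
    f = np.sign(p + 2 * eEtau) * beta2
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p))  # trapezoidal rule

print(abs(n5_toy(0.0)) < 1e-8)  # True: symmetric distribution -> cancellation
print(abs(n5_toy(2.0)) > 1.0)   # True: field-induced asymmetry -> net imbalance
```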
Figure 4: The $p_{z}$ distribution of the pair-production probability
$|\beta|^{2}$ and the sign function $(p_{z}+2eE\tau)/\omega(1)$.
To study $n_{5}^{(0)}$ in the case of finite mass, we show
$|\beta_{0,p_{z}}|^{2}$ and $(p_{z}+2eE\tau)/\omega(1)$ in Fig. 4, where
$(p_{z}+2eE\tau)/\omega(1)$ is no longer the sign function. The pair-creation
probability $|\beta_{0,p_{z}}|^{2}$ is peaked at $p_{z}=-eE\tau$, whereas
$(p_{z}+2eE\tau)/\omega(1)$ changes its sign at $p_{z}=-2eE\tau$. Hence, if
$\tau$ is very small ($\sim 0$), the integration over $p_{z}$ is negligible
due to a cancellation, and thus the resulting chirality imbalance almost
vanishes.
Figure 5: $\tau$ dependence of $n_{5}^{(0)}$ for several values of $eE$.
For completeness, we show the explicit $\tau$ dependence of the results. We
first show the chirality imbalance as a function of $\tau m_{e}$ in Fig. 5 for
several values of $eE$. If $eE<m^{2}$, the chirality imbalance is almost zero,
because the production of fermion pairs is forbidden. A larger electric field
simply gives a larger chirality imbalance. However, if the time scale $\tau$
is quite small, $\tau\ll 1/m_{e}$, the situation becomes different. In Fig.
6, we show $n_{5}^{(0)}$ for several values of $\tau m_{e}$. We find that,
even if the magnitude of the electric field is large enough, $n_{5}^{(0)}$ is
very small for $\tau m_{e}<0.01$. This is because a small $\tau$ cannot
provide enough asymmetry in the integrand of eq. (96).
Figure 6: $eE$ dependence of $n_{5}^{(0)}$
A similar argument holds for the chiral magnetic effect, the $z$-component of
the vector current at $t=\infty$. We can write the CME current in terms of the
Bogoliubov coefficients as
$\displaystyle\langle\bar{\psi}\gamma^{3}\psi\rangle|_{t=\infty}$
$\displaystyle=-\frac{|eB|}{2\pi}\int\frac{dp_{z}}{2\pi}f_{\Lambda}(p_{z})\sum_{n=0}^{\infty}\alpha_{n}\left[-2|\beta_{n,p_{z}}|^{2}\frac{p_{z}+2eE\tau}{\omega(1)}\right.$
$\displaystyle\left.\hskip
28.45274pt-2\frac{\sqrt{m^{2}+2eBn}}{\omega(1)}\mbox{Re}[\alpha_{n,p_{z}}\beta_{n,p_{z}}\mathrm{e}^{-2i\omega(1)t}]\right],$
(97)
which is similar to that of the chirality imbalance. The first term is
independent of time and essentially given by a product of
$|\beta_{n,p_{z}}|^{2}$ and $(p_{z}+2eE\tau)/\omega(1)$, the latter being
interpreted as the $z$-component of the relativistic velocity of the
particles. Hence, this term is understood as a classical analogue of the
$z$-component of the electric current carried by the produced fermions. Note
that the second, oscillating term is non-zero even in the massless limit.
## 7 Summary and Discussions
We have studied the chirality imbalance of the vacuum under a time-independent
magnetic field and a Sauter-type pulsed electric field. In particular, we have
focused on the time evolution of the chirality imbalance and the chiral
magnetic effect from $t=-\infty$ to $t=\infty$. Solving the squared Dirac
equation with the EM field, we have constructed the quantized fermion field
and the vacuum at $t=-\infty$. Then, we have calculated the vacuum expectation
values of various fermion current operators, including $n_{5}$ and the CME
current, in terms of the point-split regularization. The use of a
gauge-invariant regularization method is important in our study, because the
VEVs diverge upon the momentum integration. A subtle cancellation between
positive and negative energy states provides the non-zero contribution to the
CME. We note that the calculated VEVs are finite at $t=\infty$, in contrast to
the case with constant electric and magnetic fields, where several VEVs become
infinite at $t\to\infty$ [22]. In addition, we have found expressions for the
VEVs of other bilinear operators, e.g.
$\langle\bar{\psi}\gamma_{3}\gamma_{5}\psi\rangle=0$, whereas
$\langle\bar{\psi}\sigma_{12}\psi\rangle\neq 0$.
We have shown the time evolution of the chirality imbalance and the CME
current. The resulting chirality imbalance is finite at $t=\infty$, where the
Sauter electric field is already turned off. We have also demonstrated that a
part of the chirality imbalance consists of a time-oscillating contribution,
which is proportional to the fermion mass. The CME current also consists of a
dominant time-independent part and an oscillating part, similarly to the
chirality imbalance. We have also discussed the connection between the fermion
pair creation by the electric field and the chirality imbalance. As obtained
in eqs. (96, 97), there are simple physical interpretations for the generation
of $n_{5}$ and the CME.
The magnitudes of $n_{5}$ and the CME in this model are essentially determined
by the following conditions:
1. a magnitude of the electric field much larger than the fermion mass scale;
2. a sufficiently asymmetric $p_{z}$ distribution of the produced fermions (in
the integrand of the chirality imbalance).
Asymmetries of the pair-production rate between $p_{z}>0$ and $p_{z}<0$
particles are important for producing the chirality imbalance, and may depend
on the details of the external electromagnetic fields. In fact, the magnitude
of the chirality imbalance for the massive fermion changes considerably if we
change the time dependence of the electric field by using the Gaussian packet
formalism [32].
Although we consider external electromagnetic fields in this work, it may be
important to include the "back-reaction" on the external field. It is possible
to include back-reaction effects on the electric field within this framework
[24]. Work along this line is under consideration.
Acknowledgments
We acknowledge Prof. A. Suzuki for useful and critical comments. We also thank
all the members of the quark/hadron group in Tokyo University of Science for
useful conversations. K.S. thanks Dr. S. Saitoh for contributions on early
stage of this work.
Appendix: Anomaly relation
We provide a proof of the anomaly equation, eq. (76),
$\displaystyle\partial_{t}\langle\bar{\psi}\gamma^{0}\gamma^{5}\psi\rangle=2im\langle\bar{\psi}\gamma^{5}\psi\rangle+\frac{2\alpha}{\pi}E_{z}(t)\,B\,F_{\Lambda}(t)$
(98)
where
$\displaystyle
F_{\Lambda}(t)=\int^{\infty}_{-\infty}dp_{z}\frac{p_{z}+eA_{z}(t)}{\Lambda^{2}}\exp\left(-\frac{(p_{z}+eA_{z}(t))^{2}}{\Lambda^{2}}\right)[|\tilde{\phi}_{0,p_{z}}^{(+)}(t)|^{2}-|\tilde{\phi}_{0,p_{z}}^{(-)}(t)|^{2}]\to
1~{}~{}(\Lambda\to\infty)$
Using the solutions of the Dirac equation for the lowest Landau level,
$\tilde{\phi}_{0,p_{z}}$, we first rewrite the integrand of $F_{\Lambda}(t)$
as
$\displaystyle|\tilde{\phi}_{0,p_{z}}^{(+)}(t)|^{2}-|\tilde{\phi}_{0,p_{z}}^{(-)}(t)|^{2}=\frac{p_{z}+eA_{z}(t)}{\sqrt{m^{2}+(p_{z}+eA_{z}(t))^{2}}}+G(p_{z},t)\;.$
(99)
where the first term gives a finite contribution as $p_{z}\to\infty$, while
the second term, $G(p_{z},t)$, is a rapidly decreasing function,
$\lim_{|p_{z}|\to\infty}|G(p_{z},t)|=0$. We decompose the integrand of the
first term using
$\displaystyle\frac{p_{z}^{2}}{\sqrt{m^{2}+p_{z}^{2}}}=|p_{z}|+|p_{z}|\,H\!\left(\frac{p_{z}^{2}}{m^{2}}\right)\;,$
(100)
where the function $H$ satisfies $\lim_{p_{z}\to\infty}H(p_{z}^{2}/m^{2})=0$.
Thus, we rewrite the first term as
$\displaystyle\int^{\infty}_{-\infty}dp_{z}\frac{p_{z}+eA_{z}(t)}{\Lambda^{2}}\exp\left(-\frac{(p_{z}+eA_{z}(t))^{2}}{\Lambda^{2}}\right)\frac{p_{z}+eA_{z}(t)}{\sqrt{m^{2}+(p_{z}+eA_{z}(t))^{2}}}$
$\displaystyle=$
$\displaystyle\frac{2}{\Lambda^{2}}\int^{\infty}_{0}dp_{z}\,p_{z}\exp\left(-\frac{p_{z}^{2}}{\Lambda^{2}}\right)+\frac{2}{\Lambda^{2}}\int^{\infty}_{0}dp_{z}\,p_{z}\,H\!\left(\frac{p_{z}^{2}}{m^{2}}\right)\exp\left(-\frac{p_{z}^{2}}{\Lambda^{2}}\right)$
$\displaystyle=$ $\displaystyle
1+\frac{m^{2}}{\Lambda^{2}}\int^{\infty}_{0}du\,H(u)\exp\left(-\frac{m^{2}}{\Lambda^{2}}u\right)$
$\displaystyle\to$ $\displaystyle 1$
as $\Lambda\to\infty$, because $H(p_{z}^{2}/m^{2})\to 0$ as
$p_{z}\to\infty$.
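The statement that the first term tends to $1$ can also be checked numerically: the Gaussian-weighted integral of $p^{2}/\sqrt{m^{2}+p^{2}}$ approaches $1$ as $\Lambda/m$ grows. A sketch using `scipy.integrate.quad` (illustrative only; $m$ set to $1$ in arbitrary units):

```python
import numpy as np
from scipy.integrate import quad

def first_term(Lam, m=1.0):
    # (2/Lam^2) * int_0^inf dp  p^2 / sqrt(m^2 + p^2) * exp(-p^2/Lam^2)
    integrand = lambda p: (2.0 / Lam**2) * p**2 / np.sqrt(m**2 + p**2) \
                          * np.exp(-(p / Lam) ** 2)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

print(first_term(30.0))   # close to 1 for Lambda >> m
print(first_term(300.0))  # closer still
```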
On the other hand, the second term of eq. (99) can be shown to vanish in a
similar way. In our model with the Sauter electric field, $G(p_{z},t)$
decreases rapidly for $|p_{z}|\gg eE\tau$, independent of the time $t$, as
shown in Fig. 4. Hence, in the limit $\Lambda\to\infty$, i.e. $\Lambda\gg
eE\tau$, the integral of the second term is independent of $\Lambda$, and
proportional to $(eE\tau)^{2}$ by dimensional analysis. (If the fermion mass
is comparable with $(eE)^{1/2}$, the argument should be modified. However, the
essential result is not changed for $\Lambda\gg(eE)^{1/2}$ or $m$.) It leads
to
$\displaystyle\int^{\infty}_{-\infty}dp_{z}\frac{p_{z}+eA_{z}(t)}{\Lambda^{2}}\exp\left(-\frac{(p_{z}+eA_{z}(t))^{2}}{\Lambda^{2}}\right)G(p_{z},t)$
$\displaystyle=$
$\displaystyle\frac{(eE\tau)^{2}}{\Lambda^{2}}\times\mbox{($\Lambda$-independent
constant)}\to 0$
Thus, we recover the correct anomaly relation in eq. (76).
## References
* [1] S.L. Adler, Phys. Rev. 177, 2426 (1969); J.S. Bell and R. Jackiw, Nuovo Cim. A60, 47 (1969)
* [2] H. B. Nielsen and M. Ninomiya, Physics Letters B130, 389 (1983); J. Ambjorn, J. Greensite, C. Peterson, Nuclear Physics B221, 381 (1983)
* [3] D. T. Son and P. Surowka, Phys. Rev. Lett. 103:191601 (2009)
* [4] For reviews, D. E. Kharzeev, Prog. Part. Nucl. Phys. 75, 133 (2014); D. Kharzeev et al., Prog. Part. Nucl. Phys. 88, 1 (2016).
* [5] For a review, K. Landsteiner, Acta Phys. Pol. B 47, 2617 (2016)
* [6] A. Vilenkin, Phys. Rev. D22, 3080 (1980)
* [7] D. E. Kharzeev, L. McLerran, H. J. Warringa, Nucl. Phys. A803 :227 (2008).
* [8] K. Fukushima, D. E. Kharzeev and H.J. Warringa, Phys. Rev. D78, 074033 (2008)
* [9] For a review, J. Zhao and F. Wang, Prog. Part. Nucl. Phys. 107 (2019) 200-236
* [10] B. Q. Lv et al., Phys. Rev., X5, 031013 (2015)
* [11] Q. Li et al., Nature Phys. 12, 550-554 (2016)
* [12] D. E. Kharzeev, Annals of Phys.,325, 205 (2010)
* [13] N. Muller, S. Schlichting, and S. Sharma, Phys. Rev. Lett. 117 142301 (2016).
* [14] M.M. Vazifeh, M. Franz, Phys. Rev. Lett. 111, 027201 (2013) .
* [15] N. Yamamoto, Phys. Rev. D 92, 085011 (2015)
* [16] V. A. Rubakov, On chiral magnetic effect and holography, arXiv:1005.1888 [hep-ph]; A. Rebhan, A. Schmitt, S. Stricker, JHEP 1001, 026 (2010); M.A. Zubkov, Phys. Rev. D 93, 105036 (2016)
* [17] B. Feng, D.F. Hou, H. Liu, H.C. Ren, P.P. Wu, Y. Wu, Phys. Rev. D95, 114023 (2017); D-f Hou, H. Lui and H-c Ren, JHEP 05, 046 (2011).
* [18] P. Copinger, K. Fukushima, and S. Pu, Phys.Rev.Lett. 121, 261602 (2018)
* [19] D.L. Boyda et al., Annals of Phys. 396, 78 (2018)
* [20] H. Aoi and K. Suzuki, JPS Conference Proceedings 26, 031025 (2019)
* [21] For a review, J. Novotny and M. Schnabl, Point-splitting regularization of composite operators and anomalies, arXiv:hep-th/9803244
* [22] H. J. Warringa, Phys. Rev. D86, 085029 (2012)
* [23] W. Heisenberg and H. Euler, Z.Phys. 98 714 (1936); J. Schwinger, Phys. Rev. 82, 664 (1951)
* [24] For a review, F. Gelis and N. Tanji, Prog. Part. Nucl. Phys. 87 (2016) 1-49
* [25] N. Tanji, Annals of Phys. 324, 1691 (2009)
* [26] X.-L. Sheng, R.-H. Fang, Q. Wang, and D.H.Rischke, Phys.Rev.D 99 (2019), 056004; X.-L. Sheng, e-Print: 1912.01169 [nucl-th]
* [27] Ren-Da Dong, Ren-Hong Fang, De-Fu Hou, Duan She, Chinese Phys. C 44, 074106 (2020)
* [28] K. Fukushima, T. Shimazaki, L. Wang , Phys. Rev. D102, 014045 (2020)
* [29] F. Sauter, Z. Phys. 73, 547 (1932)
* [30] M. A. Berger, "Magnetic Helicity in Space Physics", American Geophysical Union 111, 1 (1999)
* [31] H. Taya, H. Fujii, and K. Itakura, Phys. Rev. D90, 014039 (2014)
* [32] H. Aoi and K. Suzuki, to be published
# A note on $2$-generated symmetric axial algebras of Monster type
Clara Franchi and Mario Mainardis Dipartimento di Matematica e Fisica,
Università Cattolica del Sacro Cuore, Via Musei 41, I-25121 Brescia, Italy
<EMAIL_ADDRESS>Dipartimento di Scienze Matematiche, Informatiche e
Fisiche, Università degli Studi di Udine, via delle Scienze 206, I-33100
Udine, Italy<EMAIL_ADDRESS>
###### Abstract.
In [10], Yabe gives an almost complete classification of primitive symmetric
$2$-generated axial algebras of Monster type. In this note, we construct a new
infinite-dimensional primitive $2$-generated symmetric axial algebra of
Monster type $(2,\frac{1}{2})$ over a field of characteristic $5$, and use
this algebra to complete the last case left open in Yabe’s classification.
## 1\. Introduction
Axial algebras of Monster type were introduced in [4] by Hall, Rehren and
Shpectorov, in order to generalise some subalgebras of the Griess algebra
(called Majorana algebras by Ivanov [5]) and create a new tool for better
understanding, and possibly unifying, the classification of finite simple
groups. They have recently appeared also in other branches of mathematics (see
[8, 9]). In [6, 7] Rehren started a systematic study of primitive
$2$-generated axial algebras of Monster type, constructing several classes of
new algebras. More examples have been found by Galt et al. [3], and,
independently, by Yabe [10]. In [10], Takahiro Yabe obtained an almost
complete classification of the primitive $2$-generated symmetric axial
algebras of Monster type. Yabe left open only the case of algebras over a
field of characteristic $5$ and axial dimension greater than $5$. Indeed,
something surprising happens in the latter case: we show that, over a field
${\mathbb{F}}$ of characteristic $5$, there exists a new infinite-dimensional
primitive $2$-generated symmetric axial algebra $\hat{\mathcal{H}}$ of Monster
type $(2,\frac{1}{2})$ such that any primitive $2$-generated symmetric axial
algebra of Monster type $(2,\frac{1}{2})$ is isomorphic to a quotient of
$\hat{\mathcal{H}}$ (in particular, the Highwater algebra $\mathcal{H}$ [1] is
a proper factor of $\hat{\mathcal{H}}$ over an infinite-dimensional ideal). As
a corollary, we complete Yabe’s classification.
For the definitions and further motivation we refer to [4, 6, 7]; for the
notation and the basic properties of axial algebras we refer to [1]. In
particular, throughout this paper, ${\mathbb{F}}$ is a field of characteristic
$5$. For a $2$-generated symmetric axial algebra $V$ of Monster type
$(\alpha,\beta)$ over ${\mathbb{F}}$, let $a_{0},a_{1}$ be the generating axes
of $V$. For $i\in\\{0,1\\}$, let $\tau_{i}$ be the Miyamoto involution
associated to $a_{i}$. Set $\rho:=\tau_{0}\tau_{1}$, and, for
$i\in{\mathbb{Z}}$, $a_{2i}:=a_{0}^{\rho^{i}}$ and
$a_{2i+1}:=a_{1}^{\rho^{i}}$. Note that, since $\rho$ is an automorphism of
$V$, for every $j\in{\mathbb{Z}}$, $a_{j}$ is an axis. Denote by
$\tau_{j}:=\tau_{a_{j}}$ the corresponding Miyamoto involution. The algebra
$V$ is symmetric if it admits an algebra automorphism $f$ that swaps $a_{0}$
and $a_{1}$, whence, for every $i\in{\mathbb{Z}}$, $a_{i}^{f}=a_{-i+1}$. Let
$\sigma_{i}$ be the element of $\langle\tau_{0},f\rangle$ that swaps $a_{0}$
with $a_{i}$. Since, by [1, Lemma 4.2], for $n\in{\mathbb{Z}}_{+}$ and
$i,j\in{\mathbb{Z}}$ such that $i\equiv_{n}j$,
$a_{i}a_{i+n}-\beta(a_{i}+a_{i+n})=a_{j}a_{j+n}-\beta(a_{j}+a_{j+n}),$
we can define
(1) $s_{\bar{\imath},n}:=a_{i}a_{i+n}-\beta(a_{i}+a_{i+n}),$
where $\bar{\imath}$ denotes the congruence class $i+n{\mathbb{Z}}$. Since $V$
is primitive, there is a linear function $\lambda_{a_{0}}:V\to{\mathbb{F}}$
such that every $v$ can be written in a unique way as
$v=\lambda_{a_{0}}(v)a_{0}+v_{0}+v_{\alpha}+v_{\beta}$, where $v_{0}$,
$v_{\alpha}$, $v_{\beta}$ are $0$-, $\alpha$-, $\beta$-eigenvectors for ${\rm
ad}_{a_{0}}$, respectively. For $i\in{\mathbb{Z}}$, set
$\lambda_{i}:=\lambda_{a_{0}}(a_{i})$ and let
(2) $a_{i}=\lambda_{a_{0}}(a_{i})a_{0}+u_{i}+v_{i}+w_{i}$
be the decomposition of $a_{i}$ into $ad_{a_{0}}$-eigenvectors, where $u_{i}$
is a $0$-eigenvector, $v_{i}$ is an $\alpha$-eigenvector and $w_{i}$ is a
$\beta$-eigenvector.
Following Yabe [10], we denote by $D$ the positive integer such that
$\\{a_{0},\ldots,a_{D-1}\\}$ is a basis for the linear span $\langle a_{i}\mid
i\in{\mathbb{Z}}\rangle$ of the set of the axes $a_{i}$’s. $D$ is called the
axial dimension of $V$. Our results are the following.
###### Theorem 1.
Let $V$ be a primitive $2$-generated symmetric axial algebra of Monster type
$(2,\frac{1}{2})$ over a field of characteristic $5$. If
$\lambda_{1}=\lambda_{2}=1$, then $V$ is isomorphic to a quotient of the
algebra $\hat{\mathcal{H}}$.
###### Corollary 2.
Let $V$ be a primitive $2$-generated symmetric axial algebra of Monster type
$(\alpha,\beta)$ over a field of characteristic $5$. If $D\geq 6$, then $V$ is
isomorphic to a quotient of one of the following:
1. (1)
the algebra $6A_{\alpha}$, as defined in [7];
2. (2)
the algebra $V_{8}(\alpha)$, as defined in [3];
3. (3)
the algebra $\hat{\mathcal{H}}$, as defined in Section 2.
Note that, for every $\alpha$, Rehren’s algebra $6A_{\alpha}$ and Yabe’s
algebra $\mbox{{VI}}_{2}(\alpha,\frac{-\alpha^{2}}{4(2\alpha-1)})$ coincide
and the $8$-dimensional algebra $V_{8}(\alpha)$ of type
$(\alpha,\frac{\alpha}{2})$ constructed in [3] coincides with Yabe’s algebra
$\mbox{{VI}}_{1}(\alpha,\frac{\alpha}{2})$. Furthermore, remarkably, over a
field of characteristic $5$, the Highwater algebra $\mathcal{H}$ (see [2] and
[10]) is isomorphic to a quotient of $\hat{\mathcal{H}}$; moreover, Yabe’s
algebras $\mbox{{V}}_{1}(2,\frac{1}{2})$ and $\mbox{{V}}_{2}(2,\frac{1}{2})$
and Rehren’s algebra $5A_{2}$ are all isomorphic, and are in turn a quotient of
$\mathcal{H}$. Finally, also the algebra $6A_{2}$ is a quotient algebra of
$\hat{\mathcal{H}}$.
## 2\. The algebra $\hat{\mathcal{H}}$
In this section, for every $i\in{\mathbb{Z}}$, denote by $\bar{\imath}$ the
congruence class $i+3{\mathbb{Z}}$. Let ${\hat{\mathcal{H}}}$ be an infinite-
dimensional ${\mathbb{F}}$-vector space with basis
$\mathcal{B}:=\\{\hat{a}_{i},\hat{s}_{\bar{0},j},\hat{s}_{\bar{1},3k},\hat{s}_{\bar{2},3k}\mid
i\in{\mathbb{Z}},\>j,k\in{\mathbb{Z}}_{+}\\}$,
${\hat{\mathcal{H}}}:=\bigoplus_{i\in{\mathbb{Z}}}{\mathbb{F}}\hat{a}_{i}\oplus\bigoplus_{j\in{\mathbb{Z}}_{+}}{\mathbb{F}}\hat{s}_{\bar{0},j}\oplus\bigoplus_{k\in{\mathbb{Z}}_{+}}({\mathbb{F}}\hat{s}_{\bar{1},3k}\oplus{\mathbb{F}}\hat{s}_{\bar{2},3k}).$
Set $\hat{s}_{\bar{0},0}:=0$ and, if $j\not\equiv_{3}0$,
$\hat{s}_{\bar{1},j}:=\hat{s}_{\bar{0},j}=:\hat{s}_{\bar{2},j}$. Let
$\hat{\tau}_{0}$ and $\hat{f}$ be the linear maps of $\hat{\mathcal{H}}$
defined on the basis elements by
$\hat{a}_{i}^{\hat{\tau}_{0}}=\hat{a}_{-i},\>(\hat{s}_{\bar{0},j})^{\hat{\tau}_{0}}=\hat{s}_{\bar{0},j},\>(\hat{s}_{\bar{1},3k})^{\hat{\tau}_{0}}=\hat{s}_{\bar{2},3k},\>\mbox{
and }\>(\hat{s}_{\bar{2},3k})^{\hat{\tau}_{0}}=\hat{s}_{\bar{1},3k},$
$\hat{a}_{i}^{\hat{f}}=\hat{a}_{-i+1},\>(\hat{s}_{\bar{0},j})^{\hat{f}}=\hat{s}_{\bar{0},j}\>\mbox{
if }\>j\not\equiv_{3}0,$
and
$(\hat{s}_{\bar{0},3k})^{\hat{f}}=\hat{s}_{\bar{1},3k},\>(\hat{s}_{\bar{1},3k})^{\hat{f}}=\hat{s}_{\bar{0},3k},\>(\hat{s}_{\bar{2},3k})^{\hat{f}}=\hat{s}_{\bar{2},3k}.$
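These conventions are easy to encode and check mechanically. The following Python sketch is our own illustration, not part of the paper: the tuples `('a', i)` and `('s', r, n)` are ad-hoc labels for the basis elements $\hat{a}_{i}$ and $\hat{s}_{\bar{r},n}$, with the identifications $\hat{s}_{\bar{0},0}=0$ and $\hat{s}_{\bar{1},n}=\hat{s}_{\bar{0},n}=\hat{s}_{\bar{2},n}$ for $3\nmid n$ built into the key normalization. It verifies that $\hat{\tau}_{0}$ and $\hat{f}$ are involutions and that $\hat{\theta}=\hat{\tau}_{0}\hat{f}$ translates the axes by one, a fact used later in the proof of Theorem 6.

```python
def s_key(r, n):
    """Normalized label for s-hat_{r,n}: s-hat_{0,0} = 0 (encoded as None),
    and s-hat_{1,n} = s-hat_{0,n} = s-hat_{2,n} whenever 3 does not divide n."""
    n = abs(n)
    if n == 0:
        return None
    return ('s', r % 3 if n % 3 == 0 else 0, n)

def tau0(key):
    """tau-hat_0: a_i -> a_{-i}; fixes s_{0,j}; swaps s_{1,3k} <-> s_{2,3k}."""
    if key[0] == 'a':
        return ('a', -key[1])
    _, r, n = key
    return s_key(-r, n)

def f(key):
    """f-hat: a_i -> a_{-i+1}; swaps s_{0,3k} <-> s_{1,3k}; fixes s_{2,3k}."""
    if key[0] == 'a':
        return ('a', -key[1] + 1)
    _, r, n = key
    return s_key({0: 1, 1: 0, 2: 2}[r], n)

def theta(key):
    """theta-hat = tau-hat_0 followed by f-hat."""
    return f(tau0(key))

sample = [('a', -4), ('a', 0), ('a', 7),
          s_key(0, 5), s_key(0, 3), s_key(1, 6), s_key(2, 9)]
assert all(tau0(tau0(k)) == k for k in sample)  # tau-hat_0 is an involution
assert all(f(f(k)) == k for k in sample)        # f-hat is an involution
assert theta(('a', 5)) == ('a', 6)              # theta-hat shifts the axes by 1
assert theta(s_key(1, 6)) == s_key(2, 6)        # and shifts the class r by 1
```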
Define a commutative non-associative product on ${\hat{\mathcal{H}}}$
extending by linearity the following values on the basis elements (where
$\delta_{\bar{\imath}\bar{r}}$ denotes the Kronecker delta and
$(-1)\ast{\bar{1}}:=-1$, $(-1)\ast{\bar{2}}:=1$, and $0\ast\bar{t}:=0$ for
every $t\in{\mathbb{Z}}$):
1. ($\hat{\mathcal{H}}_{1}$)
$\hat{a}_{i}\hat{a}_{j}:=-2(\hat{a}_{i}+\hat{a}_{j})+\hat{s}_{\bar{\imath},|i-j|}$,
2. ($\hat{\mathcal{H}}_{2}$)
$\hat{a}_{i}\hat{s}_{\bar{r},j}:=-2\hat{a}_{i}+(\hat{a}_{i-j}+\hat{a}_{i+j})-\hat{s}_{\bar{r},j}-(\delta_{\bar{\imath}\bar{r}}-1)\ast{(\bar{\imath}-\bar{r})}(\hat{s}_{\bar{r}-\bar{1},j}-\hat{s}_{\bar{r}+\bar{1},j})$,
3. ($\hat{\mathcal{H}}_{3}$)
$\hat{s}_{\bar{r},i}\hat{s}_{\bar{t},j}:=2(\hat{s}_{\bar{r},i}+\hat{s}_{\bar{t},j})-2(\hat{s}_{\bar{0},{|i-j|}}+\hat{s}_{\bar{1},{|i-j|}}+\hat{s}_{\bar{2},{|i-j|}}+\hat{s}_{\bar{0},{i+j}}+\hat{s}_{\bar{1},{i+j}}+\hat{s}_{\bar{2},{i+j}})$,
if $\\{i,j\\}\not\subseteq 3{\mathbb{Z}}$,
4. ($\hat{\mathcal{H}}_{4}$)
$\hat{s}_{\bar{0},3h}\hat{s}_{\bar{0},3k}:=2(\hat{s}_{\bar{0},3h}+\hat{s}_{\bar{0},3k})-(\hat{s}_{\bar{0},3|h-k|}+\hat{s}_{\bar{0},3(h+k)})$,
5. ($\hat{\mathcal{H}}_{5}$)
$\hat{s}_{\bar{0},3h}\hat{s}_{\bar{1},3k}:=2(\hat{s}_{\bar{0},3h}+\hat{s}_{\bar{1},3h}-\hat{s}_{\bar{2},3h}+\hat{s}_{\bar{0},3k}+\hat{s}_{\bar{1},3k}-\hat{s}_{\bar{2},3k})-(\hat{s}_{\bar{0},3|h-k|}+\hat{s}_{\bar{1},3|h-k|}-\hat{s}_{\bar{2},3|h-k|}+\hat{s}_{\bar{0},3(h+k)}+\hat{s}_{\bar{1},3(h+k)}-\hat{s}_{\bar{2},3(h+k)})$,
6. ($\hat{\mathcal{H}}_{6}$)
$\hat{s}_{\bar{0},3h}\hat{s}_{\bar{2},3k}:=2(\hat{s}_{\bar{0},3h}-\hat{s}_{\bar{1},3h}+\hat{s}_{\bar{2},3h}+\hat{s}_{\bar{0},3k}-\hat{s}_{\bar{1},3k}+\hat{s}_{\bar{2},3k})-(\hat{s}_{\bar{0},3|h-k|}-\hat{s}_{\bar{1},3|h-k|}+\hat{s}_{\bar{2},3|h-k|}+\hat{s}_{\bar{0},3(h+k)}-\hat{s}_{\bar{1},3(h+k)}+\hat{s}_{\bar{2},3(h+k)})$,
7. ($\hat{\mathcal{H}}_{7}$)
$\hat{s}_{\bar{1},3h}\hat{s}_{\bar{2},3k}:=2(-\hat{s}_{\bar{0},3h}+\hat{s}_{\bar{1},3h}+\hat{s}_{\bar{2},3h}-\hat{s}_{\bar{0},3k}+\hat{s}_{\bar{1},3k}+\hat{s}_{\bar{2},3k})-(-\hat{s}_{\bar{0},3|h-k|}+\hat{s}_{\bar{1},3|h-k|}+\hat{s}_{\bar{2},3|h-k|}-\hat{s}_{\bar{0},3(h+k)}+\hat{s}_{\bar{1},3(h+k)}+\hat{s}_{\bar{2},3(h+k)})$.
We now introduce some eigenvectors for ${\rm ad}_{\hat{a}_{0}}$ and study how
they multiply. For $i\in{\mathbb{Z}}_{+}$, set
$\hat{u}_{i}:=-2\hat{a}_{0}+(\hat{a}_{i}+\hat{a}_{-i})+2\hat{s}_{\bar{0},i},$
$\hat{v}_{i}:=-2\hat{a}_{0}+(\hat{a}_{i}+\hat{a}_{-i})-\hat{s}_{\bar{0},i},$
$\hat{w}_{i}:=\hat{a}_{i}-\hat{a}_{-i},$
$\overline{u}_{i}:=-2\hat{a}_{0}+(\hat{a}_{-i}+\hat{a}_{i})-(\hat{s}_{\bar{0},i}+\hat{s}_{\bar{1},i}+\hat{s}_{\bar{2},i}),$
$\overline{w}_{i}:=\hat{s}_{\bar{1},i}-\hat{s}_{\bar{2},i}.$
Then, the $\hat{u}_{i}$’s and $\overline{u}_{i}$’s are $0$-eigenvectors for
${\rm ad}_{\hat{a}_{0}}$, the $\hat{v}_{i}$’s are $2$-eigenvectors for ${\rm
ad}_{\hat{a}_{0}}$, the $\hat{w}_{i}$’s and $\overline{w}_{i}$’s are
$-2$-eigenvectors for ${\rm ad}_{\hat{a}_{0}}$. Note that, if
$i\not\equiv_{3}0$, $\overline{u}_{i}=\hat{u}_{i}$ and $\overline{w}_{i}=0$.
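These eigenvalue statements can be verified mechanically. The sketch below is our own illustration (the tuple labels are the same ad-hoc encoding as above; the $\hat{s}\hat{s}$ products are not needed for multiplication by $\hat{a}_{0}$ and are deliberately omitted). It implements $(\hat{\mathcal{H}}_{1})$ and $(\hat{\mathcal{H}}_{2})$ by linearity over ${\mathbb{F}}_{5}$ and checks the stated ${\rm ad}_{\hat{a}_{0}}$-eigenvalues for small indices.

```python
P = 5  # we work over a field of characteristic 5, where 1/2 = -2

def s_key(r, n):
    """Normalized label for s-hat_{r,n} (s_{0,0} = 0, a single class if 3 ∤ n)."""
    n = abs(n)
    return None if n == 0 else ('s', r % 3 if n % 3 == 0 else 0, n)

def vec(*terms):
    """Element of H-hat as a dict {basis label: nonzero coefficient mod 5}."""
    v = {}
    for c, k in terms:
        if k is not None and c % P:
            v[k] = (v.get(k, 0) + c) % P
            if not v[k]:
                del v[k]
    return v

def mul_keys(x, y):
    """Product of two basis labels via (H1) and (H2); s*s products are not
    needed for multiplication by a_0 and are omitted from this sketch."""
    if x[0] == 's' and y[0] == 's':
        raise NotImplementedError('s*s is not needed here')
    if x[0] == 's':
        x, y = y, x
    if y[0] == 'a':  # (H1): a_i a_j = -2(a_i + a_j) + s_{class(i), |i-j|}
        i, j = x[1], y[1]
        return vec((-2, x), (-2, y), (1, s_key(i, i - j)))
    i, (_, r, n) = x[1], y  # (H2): a_i s_{r,n}
    out = [(-2, x), (1, ('a', i - n)), (1, ('a', i + n)), (-1, y)]
    if i % 3 != r:  # the -(delta - 1)*(class(i) - r) correction term
        e = -1 if (i - r) % 3 == 1 else 1  # (-1)*[1] = -1, (-1)*[2] = 1
        out += [(-e, s_key(r - 1, n)), (e, s_key(r + 1, n))]
    return vec(*out)

def mul(u, v):
    """Bilinear extension of mul_keys."""
    out = []
    for kx, cx in u.items():
        for ky, cy in v.items():
            out += [(cx * cy * c, k) for k, c in mul_keys(kx, ky).items()]
    return vec(*out)

def scale(c, v):
    return vec(*((c * x, k) for k, x in v.items()))

def u_hat(i):  # 0-eigenvector
    return vec((-2, ('a', 0)), (1, ('a', i)), (1, ('a', -i)), (2, s_key(0, i)))

def v_hat(i):  # 2-eigenvector
    return vec((-2, ('a', 0)), (1, ('a', i)), (1, ('a', -i)), (-1, s_key(0, i)))

def w_hat(i):  # (-2)-eigenvector
    return vec((1, ('a', i)), (-1, ('a', -i)))

def u_bar(i):  # 0-eigenvector; equals u_hat(i) when 3 does not divide i
    return vec((-2, ('a', 0)), (1, ('a', -i)), (1, ('a', i)),
               (-1, s_key(0, i)), (-1, s_key(1, i)), (-1, s_key(2, i)))

def w_bar(i):  # (-2)-eigenvector; zero when 3 does not divide i
    return vec((1, s_key(1, i)), (-1, s_key(2, i)))

a0 = vec((1, ('a', 0)))
for i in range(1, 10):
    assert mul(a0, u_hat(i)) == {} and mul(a0, u_bar(i)) == {}
    assert mul(a0, v_hat(i)) == scale(2, v_hat(i))
    assert mul(a0, w_hat(i)) == scale(-2, w_hat(i))
    assert mul(a0, w_bar(i)) == scale(-2, w_bar(i))
```

Note in particular that $(\hat{\mathcal{H}}_{1})$ with $i=j$ gives $\hat{a}_{i}\hat{a}_{i}=-4\hat{a}_{i}=\hat{a}_{i}$, so the idempotency of the axes is built into the table.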
Moreover, it will be convenient to use the following notation: for
$i,j\in{\mathbb{Z}}_{+}$, set
$\hat{c}_{j}:=-2\hat{a}_{0}+(\hat{a}_{-j}+\hat{a}_{j}),$
$\hat{c}_{i,j}:=-2\hat{c}_{i}-2\hat{c}_{j}+\hat{c}_{|i-j|}+\hat{c}_{i+j},$
$\hat{\sigma}_{i,j}:=\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j}+\hat{s}_{-\bar{\imath},|i-j|}+\hat{s}_{\bar{\imath},|i-j|}+\hat{s}_{-\bar{\imath},i+j}+\hat{s}_{\bar{\imath},i+j},$
$\hat{u}_{i,j}:=-2\hat{u}_{i}-2\hat{u}_{j}+\hat{u}_{|i-j|}+\hat{u}_{i+j},$
$\hat{v}_{i,j}:=-2\hat{v}_{i}-2\hat{v}_{j}+\hat{v}_{|i-j|}+\hat{v}_{i+j}.$
We collect in the following lemma the main relations among the above vectors.
###### Lemma 3.
For all $i,j\in{\mathbb{Z}}_{+}$, we have
1. (1)
$\hat{u}_{j}=\hat{c}_{j}+2\hat{s}_{\bar{0},j}$,
2. (2)
$\hat{v}_{j}=\hat{c}_{j}-\hat{s}_{\bar{0},j}$,
3. (3)
$\overline{u}_{j}=\hat{c}_{j}-(\hat{s}_{\bar{0},j}+\hat{s}_{\bar{1},j}+\hat{s}_{\bar{2},j})$,
4. (4)
$\hat{u}_{i,j}=\hat{c}_{i,j}+\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j}+2\hat{s}_{\bar{0},|i-j|}+2\hat{s}_{\bar{0},i+j}$,
5. (5)
$\hat{v}_{i,j}=\hat{c}_{i,j}+2(\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j})-(\hat{s}_{\bar{0},|i-j|}+\hat{s}_{\bar{0},i+j})$,
6. (6)
$\hat{c}_{i}\hat{c}_{j}=\hat{\sigma}_{i,j}$,
7. (7)
$\hat{c}_{i}\hat{s}_{\bar{r},j}=\hat{c}_{i,j}=\hat{c}_{j}\hat{s}_{\bar{r},i}$.
###### Proof.
The first five assertions are immediate. A straightforward computation gives
the sixth:
$\displaystyle\hat{c}_{i}\hat{c}_{j}$ $\displaystyle=$
$\displaystyle(-2\hat{a}_{0}+(\hat{a}_{-i}+\hat{a}_{i}))(-2\hat{a}_{0}+(\hat{a}_{-j}+\hat{a}_{j}))$
$\displaystyle=$
$\displaystyle-\hat{a}_{0}-2(-2\hat{a}_{-i}-2\hat{a}_{0}+\hat{s}_{\bar{0},i}-2\hat{a}_{i}-2\hat{a}_{0}+\hat{s}_{\bar{0},i})$
$\displaystyle\>\>\>\>\>\>\>\>-2(-2\hat{a}_{0}-2\hat{a}_{-j}+\hat{s}_{\bar{0},j}-2\hat{a}_{0}-2\hat{a}_{j}+\hat{s}_{\bar{0},j})$
$\displaystyle\>\>\>\>\>\>\>\>+(-2\hat{a}_{-i}-2\hat{a}_{-j}+\hat{s}_{-\bar{\imath},|i-j|}-2\hat{a}_{i}-2\hat{a}_{-j}+\hat{s}_{\bar{\imath},i+j}$
$\displaystyle\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>\>-2\hat{a}_{-i}-2\hat{a}_{j}+\hat{s}_{-\bar{\imath},i+j}-2\hat{a}_{i}-2\hat{a}_{j}+\hat{s}_{\bar{\imath},|i-j|})$
$\displaystyle=$
$\displaystyle\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j}+\hat{s}_{-\bar{\imath},|i-j|}+\hat{s}_{\bar{\imath},|i-j|}+\hat{s}_{-\bar{\imath},i+j}+\hat{s}_{\bar{\imath},i+j}=\hat{\sigma}_{i,j}.$
Similarly, for the seventh, we have
$\begin{array}[]{l}\hat{c}_{i}\hat{s}_{\bar{r},j}=-(2\hat{a}_{0}-(\hat{a}_{-i}+\hat{a}_{i}))\hat{s}_{\bar{r},j}=\\\
=-2[-2\hat{a}_{0}+(\hat{a}_{-j}+\hat{a}_{j})-\hat{s}_{\bar{r},j}-(\delta_{\bar{0}\bar{r}}-1)\ast{(\bar{0}-\bar{r})}(\hat{s}_{\bar{r}-\bar{1},j}-\hat{s}_{\bar{r}+\bar{1},j})]\\\
+(-2\hat{a}_{-i}+(\hat{a}_{-i-j}+\hat{a}_{-i+j})-\hat{s}_{\bar{r},j}-(\delta_{-\bar{\imath}\bar{r}}-1)\ast{(-\bar{\imath}-\bar{r})}(\hat{s}_{\bar{r}-\bar{1},j}-\hat{s}_{\bar{r}+\bar{1},j}))\\\
+(-2\hat{a}_{i}+(\hat{a}_{i-j}+\hat{a}_{i+j})-\hat{s}_{\bar{r},j}-(\delta_{\bar{\imath}\bar{r}}-1)\ast{(\bar{\imath}-\bar{r})}(\hat{s}_{\bar{r}-\bar{1},j}-\hat{s}_{\bar{r}+\bar{1},j})).\\\
\end{array}$
Here, $\hat{s}_{\bar{r},j}$, $\hat{s}_{\bar{r}-\bar{1},j}$, and
$\hat{s}_{\bar{r}+\bar{1},j}$ cancel and the first equality of the last claim
follows after the terms are rearranged. Since, by the definition,
$\hat{c}_{i,j}$ is symmetric in $i$ and $j$, we have
$\hat{c}_{i,j}=\hat{c}_{j,i}=\hat{c}_{j}\hat{s}_{\bar{r},i}$. ∎
Note that also $\hat{\sigma}_{i,j}$ is symmetric in $i$ and $j$. This is
evident from Lemma 3.(6), since $\hat{\mathcal{H}}$ is commutative. Two more relations
will be useful in the sequel.
###### Lemma 4.
For all $i,j\in{\mathbb{Z}}_{+}$, we have
1. (1)
$\hat{s}_{\bar{0},i}(\hat{s}_{\bar{0},j}+\hat{s}_{\bar{1},j}+\hat{s}_{\bar{2},j})=\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j}+2(\hat{s}_{\bar{0},|i-j|}+\hat{s}_{\bar{0},i+j}),$
2. (2)
$\hat{u}_{i}-\overline{u}_{i}=-2\hat{s}_{\bar{0},i}+\hat{s}_{\bar{1},i}+\hat{s}_{\bar{2},i}$.
###### Proof.
A straightforward computation gives the claims. ∎
We are now ready to compute the products of the vectors $\hat{u}_{j}$,
$\overline{u}_{j}$, and $\hat{v}_{j}$.
###### Lemma 5.
For all $i,j\in{\mathbb{Z}}_{+}$, we have
1. (1)
$\hat{u}_{i}\hat{u}_{j}=-\hat{u}_{i,j}-2(\hat{u}_{|i-j|}-\overline{u}_{|i-j|})-2(\hat{u}_{i+j}-\overline{u}_{i+j})$,
2. (2)
$\hat{u}_{i}\hat{v}_{j}=\hat{v}_{i,j}$,
3. (3)
$\hat{v}_{i}\hat{v}_{j}=-2\hat{u}_{i,j}-(\hat{u}_{|i-j|}-\overline{u}_{|i-j|})-(\hat{u}_{i+j}-\overline{u}_{i+j})$,
4. (4)
$\hat{u}_{i}\overline{u}_{j}=-\hat{u}_{i,j}$,
5. (5)
$\hat{v}_{i}\overline{u}_{j}=\hat{v}_{i,j}$.
###### Proof.
By Lemma 3,
$\displaystyle\hat{u}_{i}\hat{u}_{j}$ $\displaystyle=$
$\displaystyle(\hat{c}_{i}+2\hat{s}_{\bar{0},i})(\hat{c}_{j}+2\hat{s}_{\bar{0},j})$
$\displaystyle=$
$\displaystyle\hat{c}_{i}\hat{c}_{j}+2\hat{c}_{i}\hat{s}_{\bar{0},j}+2\hat{s}_{\bar{0},i}\hat{c}_{j}-\hat{s}_{\bar{0},i}\hat{s}_{\bar{0},j}$
$\displaystyle=$
$\displaystyle\hat{\sigma}_{i,j}+2\hat{c}_{i,j}+2\hat{c}_{i,j}-\hat{s}_{\bar{0},i}\hat{s}_{\bar{0},j}$
$\displaystyle=$
$\displaystyle\hat{\sigma}_{i,j}-\hat{c}_{i,j}-\hat{s}_{\bar{0},i}\hat{s}_{\bar{0},j}.$
Assume $i\equiv_{3}0$. Then
$\hat{\sigma}_{i,j}=\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j}+2(\hat{s}_{\bar{0},|i-j|}+\hat{s}_{\bar{0},i+j})$.
Moreover,
(3)
$\hat{s}_{\bar{0},i}\hat{s}_{\bar{0},j}=2(\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j})-(\hat{s}_{\bar{0},|i-j|}+\hat{s}_{\bar{0},i+j}).$
This is immediate if $j\equiv_{3}0$, since in this case
$(\hat{\mathcal{H}}_{4})$ holds. If $j\not\equiv_{3}0$, then
$|i-j|\not\equiv_{3}0$ and $i+j\not\equiv_{3}0$. Thus
$\hat{s}_{\bar{0},|i-j|}=\hat{s}_{\bar{1},|i-j|}=\hat{s}_{\bar{2},|i-j|}$,
$\hat{s}_{\bar{0},i+j}=\hat{s}_{\bar{1},i+j}=\hat{s}_{\bar{2},i+j}$ and
$(\hat{\mathcal{H}}_{3})$ reduces to (3). Hence, by Lemma 3.(4)
$\displaystyle\hat{u}_{i}\hat{u}_{j}$ $\displaystyle=$
$\displaystyle-\hat{c}_{i,j}+\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j}+2(\hat{s}_{\bar{0},|i-j|}+\hat{s}_{\bar{0},i+j})-2(\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j})$
$\displaystyle+(\hat{s}_{\bar{0},|i-j|}+\hat{s}_{\bar{0},i+j})$
$\displaystyle=$
$\displaystyle-\hat{c}_{i,j}-(\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j})-2(\hat{s}_{\bar{0},|i-j|}+\hat{s}_{\bar{0},i+j})=-\hat{u}_{i,j}.$
Assume $i\not\equiv_{3}0$ and $j\not\equiv_{3}0$. Then
$\hat{\sigma}_{i,j}=\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j}+\hat{s}_{\bar{1},|i-j|}+\hat{s}_{\bar{2},|i-j|}+\hat{s}_{\bar{1},i+j}+\hat{s}_{\bar{2},i+j}$,
while the product $\hat{s}_{\bar{0},i}\hat{s}_{\bar{0},j}$ is given by
($\hat{\mathcal{H}}_{3}$). Hence
$\displaystyle\hat{u}_{i}\hat{u}_{j}$ $\displaystyle=$
$\displaystyle-\hat{c}_{i,j}+\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j}+\hat{s}_{\bar{1},|i-j|}+\hat{s}_{\bar{2},|i-j|}+\hat{s}_{\bar{1},i+j}+\hat{s}_{\bar{2},i+j}-2(\hat{s}_{\bar{0},i}+\hat{s}_{\bar{0},j})$
$\displaystyle+2(\hat{s}_{\bar{0},|i-j|}+\hat{s}_{\bar{1},|i-j|}+\hat{s}_{\bar{2},|i-j|}+\hat{s}_{\bar{0},i+j}+\hat{s}_{\bar{1},i+j}+\hat{s}_{\bar{2},i+j})$
$\displaystyle=$
$\displaystyle-\hat{u}_{i,j}-2(\hat{u}_{|i-j|}-\overline{u}_{|i-j|})-2(\hat{u}_{i+j}-\overline{u}_{i+j}).$
This proves the first assertion. The remaining assertions follow with similar
computations, using Lemma 4. ∎
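Before turning to Theorem 6, one can sanity-check Lemmas 3 and 5 by machine. The following sketch is again our own illustration; as an assumption of the sketch, the products $\hat{s}_{\bar{1},3h}\hat{s}_{\bar{1},3k}$ and $\hat{s}_{\bar{2},3h}\hat{s}_{\bar{2},3k}$, which are not listed among $(\hat{\mathcal{H}}_{1})$–$(\hat{\mathcal{H}}_{7})$, are taken to be the $\hat{f}$-images of $(\hat{\mathcal{H}}_{4})$. It implements the full multiplication table over ${\mathbb{F}}_{5}$ and tests Lemma 3.(6), Lemma 3.(7) for $\bar{r}=\bar{0}$ (the case used above), and Lemma 5.(1) on small indices.

```python
P = 5

def s_key(r, n):
    """Normalized label for s-hat_{r,n} (s_{0,0} = 0, a single class if 3 ∤ n)."""
    n = abs(n)
    return None if n == 0 else ('s', r % 3 if n % 3 == 0 else 0, n)

def vec(*terms):
    v = {}
    for c, k in terms:
        if k is not None and c % P:
            v[k] = (v.get(k, 0) + c) % P
            if not v[k]:
                del v[k]
    return v

def mul_keys(x, y):
    """Full multiplication table (H1)-(H7); s_{1,3h}s_{1,3k} and
    s_{2,3h}s_{2,3k} are taken to be f-hat-images of (H4), an assumption."""
    if x[0] == 's' and y[0] == 'a':
        x, y = y, x
    if x[0] == 'a' and y[0] == 'a':          # (H1)
        i, j = x[1], y[1]
        return vec((-2, x), (-2, y), (1, s_key(i, i - j)))
    if x[0] == 'a':                          # (H2)
        i, (_, r, n) = x[1], y
        out = [(-2, x), (1, ('a', i - n)), (1, ('a', i + n)), (-1, y)]
        if i % 3 != r:
            e = -1 if (i - r) % 3 == 1 else 1
            out += [(-e, s_key(r - 1, n)), (e, s_key(r + 1, n))]
        return vec(*out)
    (_, r, i), (_, t, j) = x, y
    if i % 3 or j % 3:                       # (H3)
        out = [(2, x), (2, y)]
        for m in (i - j, i + j):
            out += [(-2, s_key(c, m)) for c in (0, 1, 2)]
        return vec(*out)
    if r == t:                               # (H4) and its f-hat-images
        return vec((2, x), (2, y), (-1, s_key(r, i - j)), (-1, s_key(r, i + j)))
    eps = {c: 1 if c in (r, t) else -1 for c in (0, 1, 2)}   # (H5)-(H7)
    out = []
    for m, c0 in ((i, 2), (j, 2), (i - j, -1), (i + j, -1)):
        out += [(c0 * eps[c], s_key(c, m)) for c in (0, 1, 2)]
    return vec(*out)

def mul(u, v):
    out = []
    for kx, cx in u.items():
        for ky, cy in v.items():
            out += [(cx * cy * c, k) for k, c in mul_keys(kx, ky).items()]
    return vec(*out)

def comb(*pairs):  # linear combination of elements
    return vec(*((c * x, k) for c, v in pairs for k, x in v.items()))

def s_elem(r, n):
    return vec((1, s_key(r, n)))

def c_hat(i):
    return vec((-2, ('a', 0)), (1, ('a', -i)), (1, ('a', i)))

def u_hat(i):
    return vec((-2, ('a', 0)), (1, ('a', i)), (1, ('a', -i)), (2, s_key(0, i)))

def u_bar(i):
    return vec((-2, ('a', 0)), (1, ('a', -i)), (1, ('a', i)),
               (-1, s_key(0, i)), (-1, s_key(1, i)), (-1, s_key(2, i)))

def sigma(i, j):
    return vec((1, s_key(0, i)), (1, s_key(0, j)),
               (1, s_key(-i, i - j)), (1, s_key(i, i - j)),
               (1, s_key(-i, i + j)), (1, s_key(i, i + j)))

def pair_sub(g, i, j):  # g_{i,j} = -2 g_i - 2 g_j + g_{|i-j|} + g_{i+j}
    return comb((-2, g(i)), (-2, g(j)), (1, g(abs(i - j))), (1, g(i + j)))

def diff(n):  # u-hat_n minus u-bar_n
    return comb((1, u_hat(n)), (-1, u_bar(n)))

for i, j in [(1, 1), (1, 2), (2, 3), (3, 3), (2, 4)]:
    assert mul(c_hat(i), c_hat(j)) == sigma(i, j)                # Lemma 3.(6)
    assert mul(c_hat(i), s_elem(0, j)) == pair_sub(c_hat, i, j)  # Lemma 3.(7)
    assert mul(c_hat(j), s_elem(0, i)) == pair_sub(c_hat, i, j)

for i, j in [(1, 1), (1, 2), (2, 4), (3, 2)]:                    # Lemma 5.(1)
    assert mul(u_hat(i), u_hat(j)) == comb(
        (-1, pair_sub(u_hat, i, j)),
        (-2, diff(abs(i - j))), (-2, diff(i + j)))
```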
###### Theorem 6.
The algebra ${\hat{\mathcal{H}}}$ defined above is a primitive $2$-generated
symmetric axial algebra of Monster type $(2,\frac{1}{2})$ over any field of
characteristic $5$.
###### Proof.
Recall that in characteristic $5$, $\frac{1}{2}=-2$. It is easy to see that
the maps $\hat{\tau}_{0}$ and $\hat{f}$ are algebra automorphisms of
$\hat{\mathcal{H}}$ and that the map $\hat{\theta}:=\hat{\tau}_{0}\hat{f}$ induces
on the set $\\{\hat{a}_{i}\mid i\in{\mathbb{Z}}\\}$ the translation
$\hat{a}_{i}\mapsto\hat{a}_{i+1}$. Let $H:=\langle\\!\langle
\hat{a}_{0},\hat{a}_{1}\rangle\\!\rangle$ be the subalgebra of $\hat{\mathcal{H}}$
generated by $\hat{a}_{0}$ and $\hat{a}_{1}$. Note that
$\hat{s}_{\bar{0},1}=\hat{a}_{0}\hat{a}_{1}+2(\hat{a}_{0}+\hat{a}_{1})\in H$.
Also,
$\hat{a}_{-1}=\hat{a}_{0}\hat{s}_{\bar{0},1}+2\hat{a}_{0}-\hat{a}_{1}+\hat{s}_{\bar{0},1}\in
H$. Clearly,
$H=\langle\\!\langle\hat{a}_{0},\hat{a}_{1}\rangle\\!\rangle$ is invariant
under $\hat{f}$ and also
$H=\langle\\!\langle\hat{a}_{-1},\hat{a}_{0},\hat{a}_{1}\rangle\\!\rangle$ is
invariant under the involution $\hat{\tau}_{0}$. Thus $H$ is invariant under
$\hat{\theta}$ and so $H$ contains all the $\hat{a}_{i}$’s. It follows that
$H$ contains all the $\hat{s}_{\bar{r},j}$’s, that is, $H=\hat{\mathcal{H}}$.
Since, for every $i\in{\mathbb{Z}}$,
$\hat{a}_{i}=\hat{a}_{0}^{\hat{\theta}^{i}}$, to show that $\hat{\mathcal{H}}$
is an axial algebra of Monster type $(2,\frac{1}{2})$ it is enough to prove
that $\hat{a}_{0}$ is an axis with respect to the Monster fusion law in Table
1.
$\begin{array}[]{|c||c|c|c|c|}\hline\cr\star&1&0&2&-2\\\ \hline\cr\hline\cr
1&1&&2&-2\\\ \hline\cr 0&&0&2&-2\\\ \hline\cr 2&2&2&0,1&-2\\\
\hline\cr-2&-2&-2&-2&0,1,2\\\ \hline\cr\end{array}$ Table 1. Fusion law for
$\hat{\mathcal{H}}$
Further, for every $i\in{\mathbb{Z}}_{+}$, we have
(4)
$\langle\hat{a}_{0},\hat{a}_{i},\hat{a}_{-i},\hat{s}_{\bar{0},i}\rangle=\langle\hat{a}_{0},\hat{u}_{i},\hat{v}_{i},\hat{w}_{i}\rangle\mbox{
if }i\not\equiv_{3}0$
and
(5)
$\langle\hat{a}_{0},\hat{a}_{i},\hat{a}_{-i},\hat{s}_{\bar{0},i},\hat{s}_{\bar{1},i},\hat{s}_{\bar{2},i}\rangle=\langle\hat{a}_{0},\hat{u}_{i},\hat{v}_{i},\hat{w}_{i},\overline{u}_{i},\overline{w}_{i}\rangle\mbox{
if }i\equiv_{3}0.$
Hence, a basis of ${\rm ad}_{\hat{a}_{0}}$-eigenvectors for
$\hat{\mathcal{H}}$ is given by
$\hat{a}_{0},\>\hat{u}_{i},\>\hat{v}_{i},\>\hat{w}_{i}\>,\>\overline{u}_{3k},\>\overline{w}_{3k},\mbox{
with }\>i,k\in{\mathbb{Z}}_{+}.$
In particular, since $\hat{a}_{0}$ is the unique element of this basis that is
a $1$-eigenvector for ${\rm ad}_{\hat{a}_{0}}$, the algebra is primitive.
Finally, set
$H_{0}:=\langle\hat{u}_{i},\overline{u}_{i}\mid
i\in{\mathbb{Z}}_{+}\rangle,\>H_{2}:=\langle\hat{v}_{i}\mid
i\in{\mathbb{Z}}_{+}\rangle,\>H_{-2}:=\langle\hat{w}_{i},\overline{w}_{i}\mid
i\in{\mathbb{Z}}_{+}\rangle.$
Then, for $z\in\\{0,2,-2\\}$, $H_{z}$ is the $z$-eigenspace for ${\rm
ad}_{\hat{a}_{0}}$ and $\hat{\tau}_{0}$ acts as the identity on
$\langle\hat{a}_{0}\rangle\oplus H_{0}\oplus H_{2}$ and as the multiplication
by $-1$ on $H_{-2}$. Since $\hat{\tau}_{0}$ is an algebra automorphism, we have
$H_{z}H_{-2}\subseteq H_{-2}$ for every $z\in\\{0,2\\}$ and
$H_{-2}H_{-2}\subseteq\langle\hat{a}_{0}\rangle\oplus H_{0}\oplus H_{2}$. By
Lemma 5, we also have $H_{0}H_{0}\subseteq H_{0}$, $H_{0}H_{2}\subseteq
H_{2}$, and $H_{2}H_{2}\subseteq H_{0}$. Hence $\hat{\mathcal{H}}$ respects (a
restricted version of) the Monster fusion law and the result is proved. ∎
Note that
$\langle\hat{s}_{\bar{0},3k}-\hat{s}_{\bar{1},3k},\hat{s}_{\bar{0},3k}-\hat{s}_{\bar{2},3k},\hat{s}_{\bar{1},3k}-\hat{s}_{\bar{2},3k}\mid
k\in{\mathbb{Z}}_{+}\rangle$ is an $\hat{f}$-invariant ideal of
$\hat{\mathcal{H}}$ and the corresponding factor algebra is isomorphic to the
Highwater algebra $\mathcal{H}$. Moreover, the algebra $6A_{2}$ is isomorphic
to the factor of $\hat{\mathcal{H}}$ over the ideal linearly spanned by the
vectors
$\hat{a}_{i}-\hat{a}_{i-6},\mbox{ for }i\geq
3,\>\>\hat{s}_{\bar{0},4}-\hat{s}_{\bar{0},2},\>\>\hat{s}_{\bar{0},5}-\hat{s}_{\bar{0},1},\>\>\hat{s}_{\bar{0},j}-\hat{s}_{\bar{0},j-6},\mbox{
for }j\geq
6,\>\>\hat{x},\>\>\hat{x}^{\hat{f}},\>\>\hat{x}^{\hat{f}\hat{\tau}_{0}},$
where
$\hat{x}:=\hat{s}_{\bar{0},3}-\hat{s}_{\bar{0},1}-\hat{s}_{\bar{0},2}+\hat{a}_{-2}+\hat{a}_{-1}+\hat{a}_{1}+\hat{a}_{2}-2(\hat{a}_{0}+\hat{a}_{3}).$
## 3\. Proofs of the main results
In the next lemma we recall some basic properties of the elements
$s_{\bar{r},n}$ that will be used throughout the proof of Theorem 1 without
further reference.
###### Lemma 7.
Let $V$ be a primitive $2$-generated symmetric axial algebra of Monster type.
For every $n\in{\mathbb{Z}}_{+}$ and $i\in{\mathbb{Z}}$ the following hold
1. (1)
$(s_{\bar{r},n})^{\sigma_{i}}=s_{\bar{k},n}$, with $r+i\equiv k\>\bmod n$;
2. (2)
the group $\langle\tau_{0},f\rangle$ acts transitively on the set
$\\{s_{\bar{r},n}\mid\bar{r}\in{\mathbb{Z}}/n{\mathbb{Z}}\\}$, for each
$n\in{\mathbb{Z}}_{+}$;
3. (3)
$\lambda_{a_{0}}(s_{\bar{0},n})=\lambda_{n}-\beta-\beta\lambda_{n}$.
###### Proof.
The first assertion is Lemma 4.2 in [1], and (2) follows immediately from (1).
For the last one, note that, for every $x\in V$, we have
$\lambda_{a_{0}}(a_{0}x)=\lambda_{a_{0}}(x)$ (this follows immediately from
the linearity of $\lambda_{a_{0}}$, decomposing $x$ into a sum of ${\rm
ad}_{a_{0}}$-eigenvectors). Hence, by the definition of $s_{\bar{0},n}$, we
get
$\lambda_{a_{0}}(s_{\bar{0},n})=\lambda_{a_{0}}(a_{0}a_{n}-\beta(a_{0}+a_{n}))=\lambda_{n}-\beta-\beta\lambda_{n}.$
∎
Proof of Theorem 1. Let $V$ be a primitive $2$-generated symmetric axial
algebra of Monster type $(2,\frac{1}{2})$ over a field of characteristic $5$
such that $\lambda_{1}=\lambda_{2}=1$. By [1, Lemma 4.4], for
$h\in{\mathbb{Z}}$, the ${\rm ad}_{a_{0}}$-eigenvectors $u_{h}$ and $v_{h}$,
defined in Section 1, are as follows (recall that $\frac{1}{2}=-2$ in
characteristic $5$)
(6) $u_{h}=-2a_{0}+(a_{h}+a_{-h})+2s_{\bar{0},h},$ (7)
$v_{h}=a_{0}+2(a_{h}+a_{-h})-2s_{\bar{0},h}.$
By the fusion law, for every $h,k\in{\mathbb{Z}}$, the following identities
hold
(8) $a_{0}(u_{h}u_{k}-v_{h}v_{k}+\lambda_{a_{0}}(v_{h}v_{k})a_{0})=0$
and
(9) $a_{0}(u_{h}u_{k}+u_{h}v_{k})=2u_{h}v_{k}.$
By [1, Lemma 4.3], for every $j\in{\mathbb{Z}}$, we have
$a_{0}s_{\bar{0},j}=-2a_{0}+(a_{-j}+a_{j})-s_{\bar{0},j}$. Using the action of
the group of automorphisms $\langle\tau_{0},f\rangle$, we get, for every
$j,k\in{\mathbb{Z}}$, $r\in{\mathbb{Z}}$,
(10)
$a_{r+jk}s_{\bar{r},j}=-2a_{r+jk}+(a_{r+j(k-1)}+a_{r+j(k+1)})-s_{\bar{r},j}.$
Set $V_{0}:=\langle a_{i},s_{\bar{r},n}\mid
i\in{\mathbb{Z}},n\in{\mathbb{Z}}_{+},r\in{\mathbb{Z}}\rangle$ and, for
$t\in{\mathbb{Z}}$, denote by $[t]_{3}$ the congruence class
$t+3{\mathbb{Z}}$, $0\ast[t]_{3}:=0$, $(-1)\ast[1]_{3}:=-1$, and
$(-1)\ast[2]_{3}:=1$.
Claim. For every $i\in{\mathbb{Z}}_{+}$, $r\in{\mathbb{Z}}$, and
$t\in\\{0,1,2\\}$
1. (i)
$\lambda_{i}=1$ and $\lambda_{a_{0}}(s_{\bar{r},i})=0$;
2. (ii)
if $i\not\equiv_{3}0$, $s_{\bar{r},i}=s_{\bar{0},i}$;
3. (iii)
if $i\equiv_{3}0$ and $r\equiv_{3}t$, $s_{\bar{r},i}=s_{\bar{t},i}$;
4. (iv)
for every $l\in{\mathbb{Z}}$, $j\leq i$, $m\in{\mathbb{Z}}$, the products
$a_{l}s_{\bar{m},j}$ belong to $V_{0}$ and satisfy the formula
(11)
$a_{l}s_{\bar{m},j}=-2a_{l}+(a_{l-j}+a_{l+j})-s_{\bar{m},j}-(\delta_{[l]_{3}[m]_{3}}-1)\ast{[l-m]_{3}}(s_{\bar{m}-\bar{1},j}-s_{\bar{m}+\bar{1},j}).$
Note that, by the symmetries of $V$, part (iv) of the Claim holds if and only if,
for every $r\in\\{0,\dots,j-1\\}$, the products $a_{0}s_{\bar{r},j}$ satisfy
the corresponding formula in Equation (11).
We proceed by induction on $i$. Let $i=1$. By the hypothesis $\lambda_{1}=1$,
hence (i) holds by Lemma 7.(3) and (ii) holds trivially.
Let $i=2$. Again by the hypothesis, $\lambda_{2}=1$, hence (i) holds by Lemma
7.(3). Equation (1) in [1, Lemma 4.8] becomes
$-2(s_{\bar{0},2}-s_{\bar{1},2})=0$, whence
(12) $s_{\bar{1},2}=s_{\bar{0},2}$
and parts (ii) and (iv) hold.
Assume $i\geq 2$ and the result true for every $l\leq i$; we prove it for
$i+1$. By the fusion law,
$u_{1}u_{i}$, $u_{1}v_{i}$ are $0$\- and $2$-eigenvectors for ${\rm
ad}_{a_{0}}$, respectively. Further, since
$(s_{\bar{1},i+1})^{\tau_{0}}=s_{-\bar{1},i+1}=s_{\bar{\imath},i+1},$
we get that $s_{\bar{1},i+1}-s_{\bar{\imath},i+1}$ is negated by the map
$\tau_{0}$, in particular $s_{\bar{1},i+1}-s_{\bar{\imath},i+1}$ is a
$-2$-eigenvector for ${\rm ad}_{a_{0}}$. It follows that
$\lambda_{a_{0}}(u_{1}u_{i})=\lambda_{a_{0}}(u_{1}v_{i})=\lambda_{a_{0}}(s_{\bar{1},i+1}-s_{\bar{\imath},i+1})=0.$
By Equations (6) and (7) and linearity of $\lambda_{a_{0}}$, we get
$0=\lambda_{a_{0}}(u_{1}u_{i}+u_{1}v_{i})=2-2\lambda_{a_{0}}(s_{\bar{1},i+1})-2\lambda_{i+1},$
and
$0=\lambda_{a_{0}}(u_{1}u_{i})=\lambda_{a_{0}}(s_{\bar{0},1}s_{\bar{0},i})-2\lambda_{a_{0}}(s_{\bar{1},i+1})+2\lambda_{i+1}-2,$
whence
(13) $\lambda_{a_{0}}(s_{\bar{1},i+1})=1-\lambda_{i+1}\>\>\mbox{ and
}\>\>\lambda_{a_{0}}(s_{\bar{0},1}s_{\bar{0},i})=\lambda_{i+1}-1.$
As above, substituting $u_{1}$, $u_{i}$, $v_{1}$, and $v_{i}$ in Equation (8),
with $h=1$ and $k=i$, we get
(14)
$a_{0}(s_{\bar{1},i+1}+s_{\bar{\imath},i+1})=-2(1+\lambda_{i+1})a_{0}+2(a_{i+1}+a_{-i-1})-2s_{\bar{0},i+1}.$
On the other hand, since $s_{\bar{1},i+1}-s_{\bar{\imath},i+1}$ is a
$-2$-eigenvector for ${\rm ad}_{a_{0}}$,
(15)
$a_{0}(s_{\bar{1},i+1}-s_{\bar{\imath},i+1})=-2(s_{\bar{1},i+1}-s_{\bar{\imath},i+1}).$
Taking the sum and the difference of both members of Equations (14) and (15)
we get
(16) $\displaystyle a_{0}s_{\bar{1},i+1}$ $\displaystyle=$
$\displaystyle-(1+\lambda_{i+1})a_{0}+(a_{i+1}+a_{-i-1})-s_{\bar{0},i+1}-s_{\bar{1},i+1}+s_{\bar{\imath},i+1}$
and
(17) $\displaystyle a_{0}s_{\bar{2},i+1}$ $\displaystyle=$
$\displaystyle-(1+\lambda_{i+1})a_{0}+(a_{i+1}+a_{-i-1})-s_{\bar{0},i+1}+s_{\bar{1},i+1}-s_{\bar{\imath},i+1}.$
Assume first that $i+1\equiv_{3}0$. Substituting the expressions (6) and (7)
in Equation (9), with $h=1$ and $k=i$, and using Equation (14) we get
(18)
$s_{\bar{0},1}s_{\bar{0},i}=2(\lambda_{i+1}-1)a_{0}-2(s_{\bar{1},i+1}+s_{\bar{\imath},i+1}+s_{\bar{0},i+1})+2s_{\bar{0},1}-s_{\bar{0},i-1}+2s_{\bar{0},i}.$
Since $s_{\bar{0},1}$ and, by the inductive hypothesis, $s_{\bar{0},i-1}$ and
$s_{\bar{0},i}$ are $\sigma_{j}$-invariant for every $j\in{\mathbb{Z}}$,
subtracting to both members of Equation (18) their images under $\sigma_{i+1}$
we get
(19)
$0=s_{\bar{0},1}s_{\bar{0},i}-(s_{\bar{0},1}s_{\bar{0},i})^{\sigma_{i+1}}=2(\lambda_{i+1}-1)(a_{0}-a_{i+1}),$
whence either $\lambda_{i+1}=1$ or $a_{0}=a_{i+1}$. But in the latter case,
too, $\lambda_{i+1}=\lambda_{a_{0}}(a_{0})=1$. Hence, by Equation (13),
$\lambda_{a_{0}}(s_{\bar{1},i+1})=1-\lambda_{i+1}=0$, giving (i). In
particular, Equation (18) becomes
(20)
$s_{\bar{0},1}s_{\bar{0},i}=-2(s_{\bar{1},i+1}+s_{\bar{\imath},i+1}+s_{\bar{0},i+1})+2s_{\bar{0},1}-s_{\bar{0},i-1}+2s_{\bar{0},i}.$
Again, taking the image under $\sigma_{i}$ of both members of the above
equation, we get
$0=s_{\bar{0},1}s_{\bar{0},i}-(s_{\bar{0},1}s_{\bar{0},i})^{\sigma_{i}}=-2(s_{\bar{1},i+1}-s_{\bar{\imath}-\bar{1},i+1}),$
whence $s_{\bar{1},i+1}=s_{\bar{\imath}-\bar{1},i+1}$. Since, by Lemma 7.(2),
the group $\langle\tau_{0},f\rangle$ is transitive on the set
$\\{s_{\bar{r},i+1}\mid 0\leq r\leq i\\}$, it follows that (iii) holds, in
particular
(21) $s_{\bar{\imath},i+1}=s_{\bar{2},i+1}.$
Hence, in order to prove (iv), we just need to check that Equation (11) holds
for $l=0$ and $m\in\\{0,1,2\\}$. The case $m=0$ follows from Equation (10),
cases $m\in\\{1,2\\}$ follow from Equations (16), (17), and (21).
Assume now $i+1\equiv_{3}1$. Substituting the expressions (6) and (7) in
Equation (9), with $h=2$ and $k=i-1$, since by the inductive hypothesis (iv),
for every $l\in{\mathbb{Z}}$ and $j\leq i+1$, $r\in{\mathbb{Z}}$, the products
$a_{l}s_{\bar{r},j}$ are given by Equation (11), we get
$s_{\bar{0},2}s_{\bar{0},i-1}=2(\lambda_{i+1}-1)a_{0}-2(s_{\bar{1},i-3}+s_{\bar{2},i-3}+s_{\bar{0},i-3})+2s_{\bar{0},2}-s_{\bar{0},i+1}+2s_{\bar{0},i-1}.$
Since $s_{\bar{0},2}$ and, by the inductive hypothesis, $s_{\bar{0},i-1}$ and
$s_{\bar{1},i-3}+s_{\bar{2},i-3}+s_{\bar{0},i-3}$ are $\sigma_{j}$-invariant
for every $j\in{\mathbb{Z}}$, as above we obtain
(22)
$0=s_{\bar{0},2}s_{\bar{0},i-1}-(s_{\bar{0},2}s_{\bar{0},i-1})^{\sigma_{i+1}}=2(\lambda_{i+1}-1)(a_{0}-a_{i+1}),$
whence, as in the previous case, $\lambda_{i+1}=1$ giving (i) by Equation
(13). Similarly,
$0=s_{\bar{0},2}s_{\bar{0},i-1}-(s_{\bar{0},2}s_{\bar{0},i-1})^{\sigma_{1}}=s_{\bar{1},i+1}-s_{\bar{0},i+1},$
whence $s_{\bar{1},i+1}=s_{\bar{0},i+1}$. Since the group
$\langle\tau_{0},f\rangle$ is transitive on the set of all pairs
$(s_{\bar{r},i+1},s_{\bar{r}+\bar{1},i+1})$, it follows that
$s_{\bar{r},i+1}=s_{\bar{0},i+1}$, for every $r\in{\mathbb{Z}}$.
Finally, assume $i+1\equiv_{3}2$. Substituting the expressions (6) and (7) in
Equation (9), with $h=1$ and $k=i$, using the inductive hypothesis as above
(in particular $s_{\bar{\imath}-\bar{2},i-1}=s_{\bar{2},i-1}$), we get
$s_{\bar{0},1}s_{\bar{0},i}=2(\lambda_{i+1}-1)a_{0}-2(s_{\bar{0},i-1}+s_{\bar{1},i-1}+s_{\bar{2},i-1})+2s_{\bar{0},1}-s_{\bar{0},i+1}+2s_{\bar{0},i}.$
Since $s_{\bar{0},1}$ and, by the inductive hypothesis, $s_{\bar{0},i}$ and
$s_{\bar{0},i-1}+s_{\bar{1},i-1}+s_{\bar{2},i-1}$ are $\sigma_{j}$-invariant
for every $j\in{\mathbb{Z}}$, we have
(23)
$0=s_{\bar{0},1}s_{\bar{0},i}-(s_{\bar{0},1}s_{\bar{0},i})^{\sigma_{i+1}}=2(\lambda_{i+1}-1)(a_{0}-a_{i+1}),$
whence, as above, it follows that $\lambda_{i+1}=1$. Hence, by Equation (13),
$\lambda_{a_{0}}(s_{\bar{1},i+1})=0$, giving (i). Then,
$0=s_{\bar{0},1}s_{\bar{0},i}-(s_{\bar{0},1}s_{\bar{0},i})^{\sigma_{1}}=s_{\bar{1},i+1}-s_{\bar{0},i+1},$
whence we conclude as in the previous case. This finishes the inductive step
and the Claim is proved. As a consequence, we get that the subspace $V_{0}$ is
closed under multiplication by any axis $a_{i}$.
We now consider the products $s_{\bar{r},i}s_{\bar{t},j}$, for
$i,j\in{\mathbb{Z}}_{+}$, $r,t\in{\mathbb{Z}}$. Proceeding as above, by
Equation (9) with $h=i$ and $k=j$, we obtain
(24)
$s_{\bar{0},i}s_{\bar{0},j}=2(s_{\bar{0},i}+s_{\bar{0},j})-2(s_{\bar{0},{|i-j|}}+s_{\bar{1},{|i-j|}}+s_{\bar{2},{|i-j|}}+s_{\bar{0},{i+j}}+s_{\bar{1},{i+j}}+s_{\bar{2},{i+j}}),$
if $\\{i,j\\}\not\subseteq 3{\mathbb{Z}}$, and
(25)
$s_{\bar{0},i}s_{\bar{0},j}=2(s_{\bar{0},i}+s_{\bar{0},j})-(s_{\bar{0},|i-j|}+s_{\bar{0},i+j}),\>\>\mbox{
if }i\equiv_{3}j\equiv_{3}0.$
If $i\not\equiv_{3}0$, then $s_{\bar{0},i}=s_{\bar{1},i}=s_{\bar{2},i}$, thus,
applying $f$ and $\tau_{1}$ to Equation (24), we get
$s_{\bar{0},i}s_{\bar{1},j}=2(s_{\bar{0},i}+s_{\bar{1},j})-2(s_{\bar{0},{|i-j|}}+s_{\bar{1},{|i-j|}}+s_{\bar{2},{|i-j|}}+s_{\bar{0},{i+j}}+s_{\bar{1},{i+j}}+s_{\bar{2},{i+j}}),$
and
$s_{\bar{0},i}s_{\bar{2},j}=2(s_{\bar{0},i}+s_{\bar{2},j})-2(s_{\bar{0},{|i-j|}}+s_{\bar{1},{|i-j|}}+s_{\bar{2},{|i-j|}}+s_{\bar{0},{i+j}}+s_{\bar{1},{i+j}}+s_{\bar{2},{i+j}}).$
If $i\equiv_{3}j\equiv_{3}0$, in a similar way, from Equation (25), we get,
for any $r\in{\mathbb{Z}}$,
$s_{\bar{r},i}s_{\bar{r},j}=2(s_{\bar{r},i}+s_{\bar{r},j})-(s_{\bar{r},|i-j|}+s_{\bar{r},i+j}).$
The last products needed are $s_{\bar{r},3h}s_{\bar{t},3k}$, for
$h,k\in{\mathbb{Z}}_{+}$, $r,t\in{\mathbb{Z}}$ with $r\not\equiv_{3}t$. For
$k\in{\mathbb{Z}}_{+}$, set
(26)
$\overline{u}_{3k}:=a_{0}+2(a_{-3k}+a_{3k})-2(s_{\bar{0},3k}+s_{\bar{1},3k}+s_{\bar{2},3k}).$
Then, by Equation (11), $\overline{u}_{3k}$ is a $0$-eigenvector for ${\rm
ad}_{a_{0}}$. Hence, by the fusion law, we have
$a_{0}(\overline{u}_{3k}u_{3h}+\overline{u}_{3k}v_{3h})=2\overline{u}_{3k}v_{3h}.$
Substituting the expressions (26) and (7) in the above equation, we get
(27)
$s_{\bar{0},3h}(s_{\bar{1},3k}+s_{\bar{2},3k})=-s_{\bar{0},3h}-s_{\bar{0},3k}-2(s_{\bar{0},3|h-k|}+s_{\bar{0},3(h+k)}),$
whence, taking the images under $f$ of both sides, we get
(28)
$s_{\bar{1},3h}(s_{\bar{0},3k}+s_{\bar{2},3k})=-s_{\bar{1},3h}-s_{\bar{1},3k}-2(s_{\bar{1},3|h-k|}+s_{\bar{1},3(h+k)}).$
Taking the difference of the Equations (27) and (28), we obtain
$\displaystyle s_{\bar{2},3k}(s_{\bar{0},3h}-s_{\bar{1},3h})$ $\displaystyle=$
$\displaystyle-(s_{\bar{0},3h}-s_{\bar{1},3h}+s_{\bar{0},3k}-s_{\bar{1},3k})$
$\displaystyle-2(s_{\bar{0},3|h-k|}-s_{\bar{1},3|h-k|}+s_{\bar{0},3(h+k)}-s_{\bar{1},3(h+k)}).$
Since in Equation (3) we can swap $h$ and $k$, we finally get
$\displaystyle s_{\bar{0},3h}s_{\bar{1},3k}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left[s_{\bar{0},3h}(s_{\bar{1},3k}+s_{\bar{2},3k})-(s_{\bar{2},3h}(s_{\bar{0},3k}-s_{\bar{1},3k}))^{\tau_{1}}\right]$
$\displaystyle=$ $\displaystyle
2(s_{\bar{0},3h}+s_{\bar{1},3h}-s_{\bar{2},3h}+s_{\bar{0},3k}+s_{\bar{1},3k}-s_{\bar{2},3k})$
$\displaystyle-(s_{\bar{0},3|h-k|}+s_{\bar{1},3|h-k|}-s_{\bar{2},3|h-k|}+s_{\bar{0},3(h+k)}+s_{\bar{1},3(h+k)}-s_{\bar{2},3(h+k)}).$
Using the maps $\tau_{0}$ and $f$, we derive the formulas for the products
$s_{\bar{0},3h}s_{\bar{2},3k}$ and $s_{\bar{1},3h}s_{\bar{2},3k}$. Hence
$V_{0}$ is a subalgebra of $V$, and since $a_{0},a_{1}\in V_{0}$, we get
$V_{0}=V$. Therefore, the map
$\begin{array}[]{cllcl}\phi&\colon&\hat{\mathcal{H}}&\to&V\\\
&&\hat{a}_{i}&\mapsto&a_{i}\\\
&&\hat{s}_{\bar{r},j}&\mapsto&s_{\bar{r},j}\end{array}$
from the basis $\mathcal{B}$ of $\hat{\mathcal{H}}$ to $V$, extends to a
surjective linear map $\bar{\phi}:\hat{\mathcal{H}}\to V$. The map
$\bar{\phi}$ is in fact an algebra homomorphism, since $V$ satisfies the
multiplication table of the algebra $\hat{\mathcal{H}}$, and the result
follows. $\square$
Proof of Theorem 2. Let $V$ be a primitive $2$-generated axial algebra of
Monster type $(\alpha,\beta)$ over a field ${\mathbb{F}}$ of characteristic
$5$, such that $D\geq 6$. If $\alpha=4\beta$, condition $D\geq 6$ and the
proof of Proposition 4.12 in [1] yield $(\alpha,\beta)=(2,\frac{1}{2})$ and
$\lambda_{1}=\lambda_{2}=1$. Then, by Theorem 1, $V$ satisfies (3). If
$\alpha=2\beta$, the proof of Claim 5.18 in [10] is still valid in
characteristic $5$, proving (2). Similarly, if $\alpha\neq 4\beta,2\beta$, the
proof of Claim 5.19 in [10] is also valid in characteristic $5$ and gives (1).
$\square$
## References
* [1] Franchi, C., Mainardis, M., Shpectorov, S., $2$-generated axial algebras of Monster type $(\alpha,\beta)$. In preparation.
* [2] Franchi, C., Mainardis, M., Shpectorov, S., An infinite dimensional $2$-generated axial algebra of Monster type. https://arxiv.org/abs/2007.02430.
* [3] Galt, A., Joshi, V., Mamontov, A., Shpectorov, S., Staroletov, A., Double axes and subalgebras of Monster type in Matsuo algebras. https://arxiv.org/abs/2004.11180.
* [4] Hall, J., Rehren, F., Shpectorov, S.: Universal Axial Algebras and a Theorem of Sakuma, J. Algebra 421 (2015), 394-424.
* [5] Ivanov, A. A.: The Monster group and Majorana involutions. Cambridge Tracts in Mathematics 176, Cambridge Univ. Press, Cambridge (2009)
* [6] Rehren, F., Axial algebras, PhD thesis, University of Birmingham, 2015.
* [7] Rehren, F., Generalised dihedral subalgebras from the Monster, Trans. Amer. Math. Soc. 369 (2017), 6953-6986.
* [8] Nadirashvili, N., Tkachev, Vladimir G., Vladut, S., Nonlinear Elliptic Equations and Nonassociative Algebras, Math. Surveys and Monographs, vol. 200, AMS, Providence, RI (2014)
* [9] Tkachev, Vladimir G., Spectral properties of nonassociative algebras and breaking regularity for nonlinear elliptic type PDEs, Algebra i Analiz 31 (2) (2019), 51-74.
* [10] Yabe, T.: On the classification of $2$-generated axial algebras of Majorana type. https://arxiv.org/abs/2008.01871
# Logic Blog 2020 (the 10th anniversary blog)
Editor: André Nies<EMAIL_ADDRESS>
###### Contents
1. I Group theory and its connections to logic
1. 1 Nies and Stephan: non-word automatic free nil-2 groups
2. 2 Nies: open questions on classes of closed subgroups of $S_{\infty}$
3. 3 Ivanov and Majcher: amenable subgroups of $S(\omega)$
4. 4 Nies: Stone-type duality for totally disconnected locally compact groups
5. 5 Nies: Closed subgroups of $S(\omega)$ generated by their permutations of finite support
6. 6 Harrison-Trainor and Nies: $\Pi_{r}$-pseudofinite groups
2. II Computability theory and randomness
1. 7 Greenberg, Nies and Turetsky: Characterising SJT reducibility
2. 8 Nies and Stephan: Update on the SMB theorem for measures
3. 9 Nies and Tomamichel: the measure associated with an infinite sequence of qubits
3. III Set theory
1. 10 Yu: perfect subsets of uncountable sets of reals
The Logic Blog is a shared platform for
* •
rapidly announcing results and questions related to logic
* •
putting up results and their proofs for further research
* •
parking results for later use
* •
getting feedback before submission to a journal
* •
fostering collaboration.
Each year’s blog is posted on arXiv 2-3 months after the year has ended.
Logic Blog 2019 | (Link: http://arxiv.org/abs/2003.03361)
---|---
Logic Blog 2018 | (Link: http://arxiv.org/abs/1902.08725)
Logic Blog 2017 | (Link: http://arxiv.org/abs/1804.05331)
Logic Blog 2016 | (Link: http://arxiv.org/abs/1703.01573)
Logic Blog 2015 | (Link: http://arxiv.org/abs/1602.04432)
Logic Blog 2014 | (Link: http://arxiv.org/abs/1504.08163)
Logic Blog 2013 | (Link: http://arxiv.org/abs/1403.5719)
Logic Blog 2012 | (Link: http://arxiv.org/abs/1302.3686)
Logic Blog 2011 | (Link: http://arxiv.org/abs/1403.5721)
Logic Blog 2010 | (Link: http://dx.doi.org/2292/9821)
How does the Logic Blog work?
Writing and editing. The source files are in a shared dropbox. Ask André () in
order to gain access.
Citing. Postings can be cited. An example of a citation is:
H. Towsner, _Computability of Ergodic Convergence_. In André Nies (editor),
Logic Blog, 2012, Part 1, Section 1, available at
http://arxiv.org/abs/1302.3686.
The logic blog, once it is on arXiv, produces citations e.g. on Google
Scholar.
## Part I Group theory and its connections to logic
### 1\. Nies and Stephan: non-word automatic free nil-2 groups
We follow up on a post on word automaticity of various nilpotent groups by the
same authors last year [10, Section 6]. To be word automatic means that given an
appropriate encoding of the elements by strings, the domain and operations are
recognizable by finite automata. Other terms in use for this notion include
FA-presentable, and even “automatic”.
We supply a proof that examples of the last of three types that were provided
there are not word-automatic. The first two types were shown to be word
automatic there. For an odd prime $p$, let $L_{p}$ be the free class-2
nilpotent group of exponent $p$ and infinite rank. In the notation of [10, Section 6] one
has $N\cong Q$; let $y_{i,k}$ ($i<k$) be free generators of $N$; then $L_{p}$
is given by the function $\phi(x_{i},x_{k})=y_{i,k}$ if $i<k$. So we have
$L_{p}=\langle x_{i},y_{i,k}\mid
x_{i}^{p},[x_{i},x_{k}]y_{i,k}^{-1},[y_{i,k},x_{r}](i<k)\rangle$
thus the $y_{i,k}$ are actually redundant. This group in a sense generalises
the Heisenberg group over the ring $\mathbb{F}_{p}$, which would be the case
of two generators $x_{0},x_{1}$.
###### Theorem 1.1.
$L_{p}$ is not word automatic.
###### Proof.
Let $\alpha,\beta$ denote strings in the alphabet of digits $0,\ldots,{p-1}$,
which are thought of as extended by $0$’s if necessary. Let
$[\alpha]=\prod_{i<|\alpha|}x_{i}^{\alpha_{i}}$. Each element of $L_{p}$ has a
normal form
$[\alpha]\prod_{s<n}\prod_{r<s}[x_{r},x_{s}]^{\gamma_{r,s}}$
where $|\alpha|=n$ and $0\leq\gamma_{r,s}<p$. By the usual commutator rules in
a group that is nilpotent of class 2, for $|\alpha|=|\beta|=n$ and central
elements $c,d$,
$\big{[}[\alpha]c,[\beta]d\big{]}=\prod_{r<s\leq
n}[x_{r},x_{s}]^{\alpha_{r}\beta_{s}-\alpha_{s}\beta_{r}}.$
This identity will tacitly be used below.
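As a quick independent check of this identity (not part of the proof), one can verify it in the two-generator case, i.e. the Heisenberg group over $\mathbb{F}_p$ realised by $3\times 3$ unitriangular matrices. The Python below is an illustrative sketch; all names in it are ad hoc, not from the post.

```python
# Check [[alpha]c, [beta]d] = [x_0, x_1]^{alpha_0 beta_1 - alpha_1 beta_0}
# in the Heisenberg group over F_p (3x3 unitriangular matrices mod p).
p = 7

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) % p
                       for j in range(3)) for i in range(3))

def inv(A):
    # inverse of [[1,a,c],[0,1,b],[0,0,1]] is [[1,-a,ab-c],[0,1,-b],[0,0,1]]
    a, b, c = A[0][1], A[1][2], A[0][2]
    return ((1, -a % p, (a * b - c) % p), (0, 1, -b % p), (0, 0, 1))

def comm(A, B):  # [A, B] = A^{-1} B^{-1} A B
    return mul(mul(inv(A), inv(B)), mul(A, B))

def power(A, n):
    R = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
    for _ in range(n):
        R = mul(R, A)
    return R

x0 = ((1, 1, 0), (0, 1, 0), (0, 0, 1))
x1 = ((1, 0, 0), (0, 1, 1), (0, 0, 1))
z = comm(x0, x1)  # the central generator [x_0, x_1]

alpha, beta = (2, 1), (1, 3)
lhs = comm(mul(power(x0, alpha[0]), power(x1, alpha[1])),
           mul(power(x0, beta[0]), power(x1, beta[1])))
rhs = power(z, (alpha[0] * beta[1] - alpha[1] * beta[0]) % p)
assert lhs == rhs
```
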
Assume for a contradiction that $L_{p}$ has a finite automata presentation
with domain a regular set $D\subseteq\Sigma^{*}$, and FA-recognizable
operation $\circ$. Denote by $\preceq$ the length-lexicographical ordering on
$D$. Note that $\preceq$ can also be recognized by a finite automaton.
Let $C\subseteq D$ denote the regular set of strings that are in the centre of
$L_{p}$.
###### Claim 1.2.
For each finite set $S\subseteq D$, there is a $\preceq$-least string
$u=u_{S}\in D$ such that
* (i)
$\forall r[r\in S-C\Rightarrow[r,u]\not\in S]$.
* (ii)
$\forall v,w\in S([v,u]=[w,u]\to\exists c\in C\,cv=w)$.
To see this, let $k$ be so large that only $x_{i}$ with $i<k$ occur in the
normal form of any element of $S$, and let $u=x_{k}$.
For (i) note that $r$ contains some $x_{i}$ with $i<k$, so the normal form of
$[r,u]$ contains $[x_{i},x_{k}]$, while the normal form of an element of $S$
does not contain such commutators.
For (ii) let $v=[\alpha]c$ with $c$ central. Then the normal form of $[v,u]$
ends in $\prod_{i<k}[x_{i},x_{k}]^{\alpha_{i}}$, which allows us to recover
$\alpha$. This verifies the claim.
We now inductively define sequences
${\left\langle{z_{n}}\right\rangle}_{n\in{\mathbb{N}}}$ and
${\left\langle{u_{n}}\right\rangle}_{n\in{\mathbb{N}}}$ in $D$. Let $z_{0}$ be
the string representing the neutral element. Suppose now that $z_{n}$ has been
defined, and write $V_{n}$ for $\\{v\colon v\preceq z_{n}\\}$. Let $u_{n}$ be
$u_{S}$ for $S={V_{n}}$, where $u_{S}$ is defined in the claim above. Next let
$z_{n+1}$ be the $\preceq$-least string $z$ such that
(1.1) $\forall v,w\in V_{n}\,\bigwedge_{i<p}w\circ
u_{n}^{i}\circ[v,u_{n}]\preceq z.$
Note that $z_{n+1}=G(z_{n})$ for a function $G\colon D\to D$ that is first-
order definable in the structure $(D,\circ,\preceq)$. This implies that the
graph of $G$ can be recognized by a finite automaton in the usual sense of
automatic structures, and hence $|z_{n}|=O(n)$ by the pumping lemma.
In the following we write $u_{i,k}$ for $[u_{i},u_{k}]$ where $i\neq k$.
###### Claim 1.3.
For each $n$, the subgroup $\langle u_{0},\ldots,u_{n}\rangle$ generated by
$u_{0},\ldots,u_{n}$ is contained in $V_{n+1}$.
This is checked by induction on $n$. For $n=0$ we have $\langle
u_{0}\rangle\subseteq V_{1}$ because $w=v=1$ is allowed in (1.1). For the
inductive step, note that by the normal form (and freeness of $L_{p}$) each
element $y$ of $\langle u_{0},\ldots,u_{n}\rangle$ has the form
$\prod_{i\leq n}u_{i}^{\alpha_{i}}\prod_{s\leq
n}\prod_{r<s}u_{r,s}^{\gamma_{r,s}}$.
This can be rewritten as $wu_{n}^{\alpha_{n}}[v,u_{n}]$ where
$w=\prod_{i<n}u_{i}^{\alpha_{i}}\prod_{s<n}\prod_{r<s}u_{r,s}^{\gamma_{r,s}}$
and $v=\prod_{k<n}u_{k}^{\gamma_{k,n}}$.
By inductive hypothesis $w,v\in V_{n}$. So the element $y$ is in $V_{n+1}$ by
(1.1). This verifies the claim.
In the next claim we view elementary abelian $p$-groups as vector spaces over
the field $\mathbb{F}_{p}$.
###### Claim 1.4.
* (a)
The elements $Cu_{i}$ are linearly independent in $G/C$.
* (b)
The elements $u_{i,k}$ are linearly independent in $C$.
We use induction on a bound $n$ for the indices. For (a) note that
$u_{n+1}^{i}\not\in V_{n}C$ for $i<p$, for otherwise $v=u_{n+1}^{i}c\in V_{n}$
for some central $c$, and clearly $[v,u_{n+1}]=1$ contradicting the condition
(i) in Claim 1.2. Therefore $C\langle u_{n+1}\rangle\cap C\langle
u_{0},\ldots,u_{n}\rangle=0$.
For (b), inductively the $u_{i,k}$, $i<k\leq n$ are a basis for a subspace
$T_{0}\leq C$. The linear map given by $Cw\mapsto[w,u_{n+1}]$ is 1-1 by (ii)
of Claim 1.2. So, by (a), the $[u_{i},u_{n+1}]$ are independent, generating a
subspace $T_{1}$. The condition (i) of Claim 1.2 implies that $T_{0}\cap
T_{1}=0$. This concludes the inductive step and verifies the claim.
We now obtain our contradiction. Given $n$, by Claim 1.3 we have
$\prod_{i<k\leq n}u_{i,k}^{\gamma_{i,k}}\in V_{n}$
for each double sequence ${\left\langle{\gamma_{i,k}}\right\rangle}_{i<k\leq
n}$ of exponents in $[0,p)$. By Claim 1.4 all these elements are distinct.
Since $V_{n}$ consists of strings of length $O(n)$, we have $p^{n^{2}/2}$
distinct strings of length $O(n)$, which is false for large enough $n$. ∎
### 2\. Nies: open questions on classes of closed subgroups of $S_{\infty}$
The following started in discussions with A. Kechris at Caltech in 2016. It is
well known that the closed subgroups of $S(\omega)$ form a Borel space, and
there is a Borel action of $S(\omega)$ by conjugation (see e.g. [25]). This is
because being a subgroup is a Borel property in the usual Effros space of
closed subsets of $S(\omega)$. A natural class of closed subgroups of
$S(\omega)$ should be closed under conjugation, and often even is closed under
topological isomorphism (an exception being oligomorphicity). Here we ask
which natural classes of closed subgroups of $S(\omega)$ are Borel. It turns
out that even for the simplest classes this can be a hard problem.
Once a class is known to be Borel, one can study the relative complexity of
the topological isomorphism problem for this class with respect to Borel
reducibility $\leq_{B}$. For instance, Kechris et al. [25] showed that for the
Borel properties of compactness, local compactness, and Roelcke
precompactness, the isomorphism relation is Borel equivalent to $\mathrm{GI}$,
the isomorphism of countable graphs. This worked by adapting Mekler’s result
described below.
Before proceeding, we import some preliminaries from [25].
#### Effros structure of a Polish space
Given a Polish space $X$, let $\mathcal{F}(X)$ denote the set of closed
subsets of $X$. The _Effros structure_ on $X$ is the Borel space consisting of
$\mathcal{F}(X)$ together with the $\sigma$-algebra generated by the sets
$\mathcal{C}_{U}=\\{D\in\mathcal{F}(X)\colon D\cap U\neq\emptyset\\}$,
for open $U\subseteq X$. Clearly it suffices to take all the sets $U$ in a
countable basis ${\left\langle{U_{i}}\right\rangle}_{i\in{\mathbb{N}}}$ of
$X$. The inclusion relation on $\mathcal{F}(X)$ is Borel because for
$C,D\in\mathcal{F}(X)$ we have $C\subseteq D\leftrightarrow\forall
i\in{\mathbb{N}}\,[C\cap U_{i}\neq\emptyset\to D\cap U_{i}\neq\emptyset]$.
#### Representing elements of the Effros structure of $S(\omega)$
For a Polish group $G$, we have Borel actions
$G\curvearrowright\mathcal{F}(G)$ by translation and by conjugation. We will
only consider the case that $G=S(\omega)$. In the following $\sigma,\tau,\rho$
will denote injective maps on initial segments of the integers, that is, on
tuples of integers without repetitions. Let $[\sigma]$ denote the set of
permutations extending $\sigma$:
$[\sigma]=\\{f\in S(\omega)\colon\sigma\prec f\\}$
(this is often denoted $\mathcal{N}_{\sigma}$ in the literature). The sets
$[\sigma]$ form a base for the topology of pointwise convergence of
$S(\omega)$. For $f\in S(\omega)$ let $f\\!\upharpoonright_{n}$ be the initial
segment of $f$ of length $n$. Note that the $[f\\!\upharpoonright_{n}]$ form a
basis of neighbourhoods of $f$. Given $\sigma,\sigma^{\prime}$ let
$\sigma^{\prime}\circ\sigma$ be the composition as far as it is defined; for
instance, $(7,4,3,1,0)\circ(3,4,6)=(1,0)$. Similarly, let $\sigma^{-1}$ be the
inverse of $\sigma$ as far as it is defined.
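These partial operations are easy to mechanise. The following Python sketch (tuples encode $(\sigma(0),\ldots,\sigma(n-1))$ as in the text; the function names are mine) reproduces the example $(7,4,3,1,0)\circ(3,4,6)=(1,0)$.

```python
def compose(sp, s):
    """(sp o s) as far as it is defined: stop at the first i with s(i)
    outside the domain of sp.  Partial injections on initial segments
    are encoded as tuples (f(0), ..., f(n-1))."""
    out = []
    for v in s:
        if v >= len(sp):
            break
        out.append(sp[v])
    return tuple(out)

def invert(s):
    """Inverse of s, restricted to the longest initial segment of omega
    on which it is defined."""
    pre = {v: i for i, v in enumerate(s)}
    out = []
    while len(out) in pre:
        out.append(pre[len(out)])
    return tuple(out)

assert compose((7, 4, 3, 1, 0), (3, 4, 6)) == (1, 0)  # example from the text
```

Note that `invert((3, 4, 6))` returns the empty tuple, since $0$ is not in the range.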
###### Definition 2.1.
For $n\geq 0$, let $\tau_{n}$ denote the function $\tau$ defined on
$\\{0,\ldots,n\\}$ such that $\tau(i)=i$ for each $i\leq n$.
###### Definition 2.2.
For $P\in\mathcal{F}(S(\omega))$, by $T_{P}$ we denote the tree describing $P$
as a closed set in the sense that $[T_{P}]\cap S(\omega)=P$. Note that
$T_{P}=\\{\sigma\colon\,P\in\mathcal{C}_{[\sigma]}\\}$.
###### Lemma 2.3.
The closed subgroups of $S(\omega)$ form a Borel set $\mathcal{U}(S(\omega))$
in $\mathcal{F}(S(\omega))$.
###### Proof.
$D\in\mathcal{F}(S(\omega))$ is a subgroup iff the following three conditions
hold:
* •
$D\in\mathcal{C}_{[(0,1,\ldots,n-1)]}$ for each $n$
* •
$D\in\mathcal{C}_{[\sigma]}\to D\in\mathcal{C}_{[\sigma^{-1}]}$ for all
$\sigma$
* •
$D\in\mathcal{C}_{[\sigma]}\cap C_{[\tau]}\to
D\in\mathcal{C}_{[\tau\circ\sigma]}$ for all $\sigma,\tau$.
It now suffices to observe that all three conditions are Borel. ∎
Note that $\mathcal{U}(S(\omega))$ is a standard Borel space. The statement of
the lemma actually holds for each Polish group in place of $S(\omega)$.
So much for the preliminaries. We now note the known results on the complexity
of isomorphism. Let $G$ always denote a closed subgroup of $S(\omega)$
(another, and cumbersome, but persistent, term is non-Archimedean group). By
‘group’ we usually mean such a $G$.
##### The class of all closed subgroups of $S(\omega)$
It is not hard to verify that the isomorphism problem is analytic [25]. It is
Borel above ($\geq_{B}$) graph isomorphism $\mathrm{GI}$. Nothing else appears
to be known on its complexity. The following was asked in Kechris et al. [25].
###### Question 2.4.
Is the isomorphism relation between closed subgroups of $S(\omega)$ analytic
complete?
The isomorphism problem for abelian Polish groups is known to be analytic
complete [11], but the groups used there are not non-Archimedean (are
Archimedean?).
##### Discrete groups
Discreteness is Borel because it is equivalent to saying that the neutral
element is isolated. Note that $G$ is discrete iff $G$ is countable. The
isomorphism relation for discrete groups is Borel equivalent to graph
isomorphism. The upper bound is fairly standard, the lower bound is obtained
by A. Mekler’s technique [31] encoding countable graph isomorphism (for a
sufficiently rich class of graphs) into isomorphism of countable nil-2 groups
of exponent $p^{2}$, where $p$ is some fixed odd prime.
##### Procountable groups
The following are equivalent (see e.g. Malicki [29, Lemma 1]):
* (i)
$G$ is procountable, i.e., an inverse limit of a chain of countable groups
$G_{n+1}\to G_{n}$
* (ii)
$G$ is a closed subgroup of a Cartesian product of discrete groups
* (iii)
there is a neighbourhood base of the neutral element consisting of open normal subgroups.
It is also equivalent to ask that
* (iv)
$G$ has a compatible bi-invariant metric (such a metric will be necessarily
complete because $G$ is a Polish group).
This class includes the abelian closed subgroups of $S(\omega)$ (see below),
and of course the discrete groups. So graph isomorphism GI can be reduced to
its isomorphism problem.
##### Abelian groups
To be abelian is easily seen to be Borel because
$G$ is abelian iff $\forall\sigma,\tau\in T_{G}\,[\sigma^{-1}\tau^{-1}\sigma\tau\prec\text{id}_{\omega}]$.
(In fact any variety of groups is Borel, by a similar argument.) As a
nonlocally compact example, consider ${\mathbb{Z}}^{\omega}$. Also there is a
universal abelian closed subgroup of $S(\omega)$.
Each abelian closed subgroup of $S(\omega)$ has an invariant metric and hence
is pro-countable by Malicki [30, Lemma 2], also Malicki [29, Lemma 1]. Su Gao
(On Automorphism Groups of Countable Structures, JSL, 1998) has proved that if
$\operatorname{Aut}(M)$ is abelian (or merely solvable) for a countable
structure $M$, then the Scott sentence of $M$ has no uncountable model.
##### Topologically finitely generated groups
This property is analytic. The symmetric group $S(\omega)$ is f.g. because
the group of permutations with finite support is dense in $S(\omega)$, and is
contained in a 2-generated group, namely the group generated by the successor
function on ${\mathbb{Z}}$ and the transposition $(0,1)$.
###### Question 2.5.
Is being topologically finitely generated Borel?
Given $k\geq 2$, is being $k$-generated Borel?
We observe that among the compact groups, being $k$-generated is Borel. For in
a Borel way we can represent $G$ as $\varprojlim_{n\in{\mathbb{N}}}G_{n}$ for
discrete finite groups $G_{n}$ (with some unnamed projections $G_{n+1}\to
G_{n}$). Then $G$ is $k$-generated iff each $G_{n}$ is $k$-generated: for the
nontrivial implication, for each $n$ let $\overline{a}_{n}$ be a $k$-tuple of
generators for $G_{n}$. Now take a converging subsequence of a sequence of
pre-images of the $\overline{a}_{n}$ in $G^{k}$. The limit generates $G$.
Being 1-generated (monothetic) is Borel, because as it is well known, such a
group is either discrete, or an inverse limit of cyclic groups (e.g.
$({\mathbb{Z}}_{p},+)$) and hence compact. See e.g. Malicki [29, Lemma 5].
Hewitt and Ross in their book have a more detailed structure theorem for such
groups.
Among the abelian groups, being f.g. is Borel because such a group is pro-
countable. If it is $k$-generated, it has to be an inverse limit of
$k$-generated countable abelian groups. An onto map
${\mathbb{Z}}^{s}\to{\mathbb{Z}}^{s}$ is of course a bijection. So $G$ is
$k$-generated iff $G$ is of the form ${\mathbb{Z}}^{r}\times H$ where $H$ is a
product of $k-r$ procyclic groups. This condition is Borel.
Outside the abelian case, it is easy to provide an example of a complicated
procountable 2-generated group. In the free group $F(a,b)$ let
$v_{z}=b^{-z}ab^{z}$ ($z\in{\mathbb{Z}}$). For $k\in{\mathbb{N}}^{+}$ let
$N_{k}\leq F(a,b)$ be the normal subgroup generated by commutators
$[v_{0},v_{r}],r\geq k$. Then $N_{1}>N_{2}>\ldots$ and
$\bigcap_{k}N_{k}=\\{1\\}$. Let $G$ be the inverse limit of the system
${\left\langle{F(a,b)/N_{k}}\right\rangle}_{k\in{\mathbb{N}}}$ with the
natural projections.
##### Compactly generated groups
One says that $G$ is compactly generated if there is a compact subset $S$ that
topologically generates $G$. Note that if $G$ is locally compact, it has a
compact open subgroup $K$ (van Dantzig), so the compact subset $KS$ generates
$G$ algebraically.
###### Fact 2.6.
For locally compact groups $G\leq_{c}S(\omega)$, being compactly generated is
Borel.
To see this, note that $G$ is c.g. iff it is topologically generated by a
compact open set $C$. Such a set $C$ is given as a finite union of sets
$[T_{G}]\cap[\sigma]$ for strings on $T_{G}$, and we can describe
arithmetically whether a set $[T_{G}]\cap[\sigma]$ is compact. So we have to
express that there is $C$ such that for each $\eta$ on $T_{G}$, there is a
term $t$ and finitely many $\sigma_{i}$ with $[\sigma_{i}]\cap T_{G}\subseteq
C$ so that $t$ applied to the $\sigma_{i}$ yields an extension of $\eta$. This
is arithmetical.
###### Question 2.7.
Is being (topologically) compactly generated Borel?
##### Oligomorphic groups
To be oligomorphic means that for each $n$ there are only finitely many
$n$-orbits. Equivalently the orbit structure $M_{G}$ is $\omega$-categorical.
By Coquand’s work (elaborated in Ahlbrandt and Ziegler [2]) oligomorphic
groups $G,H$ are isomorphic iff $M_{G}$ and $M_{H}$ are bi-interpretable.
Nies, Schlicht and Tent [37] have proved that isomorphism of oligomorphic
groups is $\leq_{B}E_{\infty}$, the universal countable Borel equivalence
relation. So it is way below graph isomorphism. In fact it is unknown to be
nonsmooth. By Harrington-Kechris-Louveau, nonsmoothness is equivalent to an
affirmative answer to the following.
###### Question 2.8.
Is $E_{0}\leq_{B}$ isomorphism of oligomorphic groups?
##### Amenable groups (with A. Iwanow and B. Majcher)
Recall that a Polish group $G$ is amenable if each compact space it acts on
has an invariant probability measure. $G$ is extremely amenable if each such
action has a fixed point (so the point mass on it is the required probability
measure). For discrete groups, this is equivalent to the usual Folner
condition.
Discrete amenable groups form a Borel subset. For, applying Kuratowski-Ryll-Nardzewski
selectors, the Folner condition can be presented in a Borel form.
Nilpotent groups are amenable. Thus, Mekler’s result can be also applied to
the isomorphism relation of discrete amenable groups, making it equivalent to
GI.
Amenability (without the assumption of discreteness) is Borel by its
characterisation due to Schneider and Thom [43]. The description is more or
less in the style of Folner.
For more detail, including extreme amenability, see Section 3.
##### Maximal-closed groups
Fixing some bijection ${\mathbb{Q}}^{n}\leftrightarrow\omega$, the group
$\mathrm{AGL}({\mathbb{Q}}^{n})$ of affine linear transformations can be seen as a
closed subgroup of $S(\omega)$. Kaplan and Simon [24] showed that it is a
maximal closed subgroup (that is also countable). Agarwal and Kompatscher [1]
have provided continuum many maximal-closed groups that are not even
algebraically isomorphic, using “Henson digraphs” that were introduced in a
paper of Henson.
Clearly being maximal-closed is $\Pi^{1}_{1}$. It is not known to be Borel.
#### Recursion theoretic view
The Effros space is insufficient here. We need a more concise way to represent
closed subgroups of $S(\omega)$. They are given by trees without dead ends
satisfying a certain $\Pi^{0}_{1}$ condition.
Let $\mathbb{T}$ be the tree of all pairs
$\langle\sigma,\sigma^{\prime}\rangle$ of the same length $n$ such that
$\sigma(i)=k\leftrightarrow\sigma^{\prime}(k)=i$ for each $i,k<n$. In other
words, there is $f\in S(\omega)$ such that $\sigma\prec f$ and
$\sigma^{\prime}\prec f^{-1}$.
If $B$ is a subtree of $\mathbb{T}$ without dead ends, then for each $\langle
f,f^{\prime}\rangle\in[B]$, $f$ is a permutation of $\omega$ with inverse
$f^{\prime}$. We can formulate as a $\Pi^{0}_{1}$ condition on $B$ that
$\\{f\colon\langle f,f^{-1}\rangle\in[B]\\}$ is closed under inverses and
product.
If $B$ is a computable tree we say that the group given by $[B]$ is
computable.
###### Question 2.9.
Are there two compact, computably isomorphic computable subgroups of
$S(\omega)$ such that no computable copies are conjugate via a computable
permutation of $S(\omega)$?
### 3\. Ivanov and Majcher: amenable subgroups of $S(\omega)$
In this post we show that the properties of being amenable and extremely
amenable for Polish groups are Borel.
Given a Polish space ${\bf Y}$ let $\mathcal{F}({\bf Y})$ denote the set of
closed subsets of ${\bf Y}$. The Effros structure on $\mathcal{F}({\bf Y})$ is
the Borel space with respect to the $\sigma$-algebra generated by the sets
$\mathcal{C}_{U}=\\{D\in\mathcal{F}({\bf Y}):D\cap U\not=\emptyset\\},$
for open $U\subseteq{\bf Y}$. For various ${\bf Y}$ this space serves for
analysis of Borel complexity of families of closed subsets (see [25] for some
recent results). It is convenient to use the fact that there is a sequence of
Kuratowski-Ryll-Nardzewski selectors (selectors, in brief)
$s_{n}:\mathcal{F}({\bf Y})\rightarrow{\bf Y}$, $n\in\omega$, which are Borel
functions such that for every non-empty $F\in\mathcal{F}({\bf Y})$ the set
$\\{s_{n}(F);n\in\omega\\}$ is dense in $F$.
We consider $S(\omega)$ as a complete metric space by defining
$d(g,h)=\sum\\{2^{-n}\mid g(n)\not=h(n)\mbox{ or }g^{-1}(n)\not=h^{-1}(n)\\}.$
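As a sanity check on this metric, here is a Python sketch for finitely supported permutations, encoded as dicts with unlisted points fixed (the encoding and names are mine, not from the post).

```python
def d(g, h):
    """The metric above, restricted to finitely supported permutations
    given as dicts; points not listed are fixed."""
    def app(p, n):
        return p.get(n, n)
    ginv = {v: k for k, v in g.items()}
    hinv = {v: k for k, v in h.items()}
    support = set(g) | set(ginv) | set(h) | set(hinv)
    return sum(2.0 ** -n for n in support
               if app(g, n) != app(h, n) or app(ginv, n) != app(hinv, n))

# distance between the transposition (0 1) and the identity:
# they (and their inverses) differ exactly at 0 and 1, so d = 1 + 1/2
assert d({0: 1, 1: 0}, {}) == 1.5
```
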
Let $S_{<\infty}$ denote the set of all bijections between finite subsets of
$\omega$. Let
$S^{+}_{<\infty}=\\{\sigma\in S_{<\infty}\mid\mathsf{dom}[\sigma]\mbox{ is an
initial segment of }\omega\\}.$
The family
$\\{{\mathcal{N}}_{\sigma}\mid\sigma\in S^{+}_{<\infty}\\}$
is a basis of the Polish topology of $S(\omega)$.
We mention here that the set ${\mathcal{U}}(S(\omega))$ of all closed
subgroups of $S(\omega)$ is a Borel subset of ${\mathcal{F}}(S(\omega))$ (see
Lemma 2.5 of [25]).
Since $S(\omega)$ is a Polish group (in particular the multiplication is
continuous) we may extend the set of selectors $s_{n}$, $n\in\omega$, by group
words of the form $w(\bar{s})$ which define Borel maps
${\mathcal{F}}(S(\omega))\rightarrow S(\omega)$ and respectively
${\mathcal{U}}(S(\omega))\rightarrow S(\omega)$. In particular for any closed
$G\leq S(\omega)$ all $w(\bar{s})(G)$ form a dense subgroup. Below for
simplicity we will always assume that already all $s_{n}(G)$, $n\in\omega$,
form a dense subgroup of $G$.
#### 3.1. Closed subgroups and amenability
In this section we apply the description of amenable topological groups found
by F.M. Schneider and A. Thom in [44] in order to analyse amenability for
closed subgroups of $S(\omega)$.
Let $G$ be a topological group, $F_{1},F_{2}\subset G$ are finite and $U$ be
an identity neighbourhood. Let $R_{U}$ be a binary relation defined as
follows:
$R_{U}=\\{(x,y)\in F_{1}\times F_{2}:yx^{-1}\in U\\}.$
This relation defines a bipartite graph on $(F_{1},F_{2})$. Let
$\mu(F_{1},F_{2},U)=|F_{1}|-\mathsf{sup}\\{|S|-|N_{R}(S)|:S\subseteq
F_{1}\\},$
where $N_{R}(S)=\\{y\in F_{2}:(\exists x\in S)(x,y)\in R_{U}\\}$. By Hall’s
matching theorem this value is the matching number of the graph
$(F_{1},F_{2},R_{U})$. Theorem 4.5 of [44] gives the following description of
amenable topological groups.
Let $G$ be a Hausdorff topological group. The following are equivalent.
(1) $G$ is amenable.
(2) For every $\theta\in(0,1)$, every finite subset $E\subseteq G$, and every
identity neighbourhood $U$, there is a finite non-empty subset $F\subseteq G$
such that
$\forall g\in E(\mu(F,gF,U)\geq\theta|F|).$
(3) There exists $\theta\in(0,1)$ such that for every finite subset
$E\subseteq G$, and every identity neighbourhood $U$, there is a finite non-
empty subset $F\subseteq G$ such that
$\forall g\in E(\mu(F,gF,U)\geq\theta|F|).$
It is worth noting here that when an open neighbourhood $V$ contains $U$ the
number $\mu(F,gF,U)$ does not exceed $\mu(F,gF,V)$. In particular in the
formulation above we may consider neighbourhoods $U$ from a fixed base of
identity neighbourhoods. For example in the case of a closed $G\leq S(\omega)$
we may take all $U$ in the form of stabilizers $V_{[n]}=\\{f\in G:f(i)=i$ for
$i<n\\}$. It is also clear that we may restrict $\theta$ to rational
numbers. From now on we work in this case.
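The agreement of the deficiency expression with the matching number can be checked mechanically on small instances. The Python sketch below compares $\mu$, computed directly from its definition, with a brute-force maximum matching; the abstract `related` predicate stands in for the relation $yx^{-1}\in U$, and everything else is illustrative.

```python
from itertools import combinations

def mu(F1, F2, related):
    """|F1| - sup{ |S| - |N_R(S)| : S subset of F1 } (deficiency form)."""
    deficiency = 0
    for r in range(len(F1) + 1):
        for S in combinations(F1, r):
            N = {y for y in F2 for x in S if related(x, y)}
            deficiency = max(deficiency, len(S) - len(N))
    return len(F1) - deficiency

def matching_number(F1, F2, related):
    """Maximum matching via Kuhn's augmenting-path algorithm."""
    match = {}
    def augment(x, seen):
        for y in F2:
            if related(x, y) and y not in seen:
                seen.add(y)
                if y not in match or augment(match[y], seen):
                    match[y] = x
                    return True
        return False
    return sum(augment(x, set()) for x in F1)

# toy example: 0 and 1 only relate to 'a', so one of them stays unmatched
R = {(0, 'a'), (1, 'a'), (2, 'b')}
rel = lambda x, y: (x, y) in R
assert mu([0, 1, 2], ['a', 'b'], rel) == matching_number([0, 1, 2], ['a', 'b'], rel) == 2
```
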
###### Theorem 3.1.
The class of all amenable closed subgroups of $S(\omega)$ is Borel.
###### Proof.
Since the family of all closed subgroups of $S(\omega)$ is Borel it suffices
to prove the following claim.
CLAIM. For every basic open neighbourhood $U$ of the unity, any rational
$\theta\in(0,1)$ and any pair of tuples $\bar{s}$ and $\bar{s}^{\prime}$ of
selectors the family of all closed $Z\subseteq S(\omega)$ with the condition
$\forall
g\in\bar{s}(Z)(\mu(\bar{s}^{\prime}(Z),g\bar{s}^{\prime}(Z),U)\geq\theta|\bar{s}^{\prime}(Z)|)$
is Borel.
Indeed, let us denote the condition of the claim by
$\Phi(U,\theta,\bar{s},\bar{s}^{\prime})$. Then having Borelness as above we
see that the (countable) intersection by all $U$, $\theta$ and $\bar{s}$ of
the families
$\bigcup\\{\\{G\leq_{c}S(\omega):G\models
\Phi(U,\theta,\bar{s},\bar{s}^{\prime})\\}:\bar{s}^{\prime}\mbox{ is a tuple of selectors}\\}$
is also Borel. Note that this family exactly consists of closed subgroups $G$
having dense subgroups satisfying condition (2) of Schneider-Thom’s theorem.
It is well-known that groups having dense amenable subgroups are amenable. In
particular we see that the claim above implies the theorem.
Let us prove the claim. For a closed $Z\subseteq S(\omega)$, $g\in\bar{s}(Z)$
and $F=\\{f_{1},\ldots,f_{k}\\}$ consisting of entries of
$\bar{s}^{\prime}(Z)$ to guarantee the inequality $\mu(F,gF,U)\geq\theta|F|$
we only need to demand that for every $S\subseteq F$ the following inequality
holds:
$|S|-k+\theta\cdot k\leq|N_{R}(S)|,$
where $N_{R}(S)$ is defined with respect to $(F,gF)$ and $U$. To satisfy this
inequality we will use the observation that when $S^{\prime}\subseteq gF$ and
$\rho$ is a function $S^{\prime}\rightarrow S$ such that $gf(\rho(gf))^{-1}\in
$U$ for each $gf\in S^{\prime}$ then $|S^{\prime}|\leq|N_{R}(S)|$.
The following condition formalizes $\mu(F,gF,U)\geq\theta|F|$:
$\bigwedge_{S\subseteq F}\bigvee\\{\bigwedge_{gf\in
S^{\prime}}(gf(\rho(gf))^{-1}\in U):S^{\prime}\subseteq gF\mbox{ ,
}\rho:S^{\prime}\rightarrow S\mbox{ , }$ $|S|-k+\theta\cdot
k\leq|S^{\prime}|\\}.$
By the choice of $g$ and $F$ we see that all closed $Z\subseteq S(\omega)$
satisfying it form a Borel family. ∎
Let ${\bf U}$ be the Urysohn space. By [47] every Polish group is realized as
a closed subgroup of $Iso({\bf U})$. Applying the proof given above to
${\mathcal{F}}(Iso({\bf U}))$ we obtain the following corollary.
###### Corollary 3.2.
The class of all amenable closed subgroups of $Iso({\bf U})$ is Borel.
#### 3.2. Closed subgroups and extreme amenability
Let $G$ be a topological group. The group $G$ is said to be extremely amenable
if every continuous action of $G$ on a non-empty compact Hausdorff space
admits a fixed point.
We begin by fixing a left-invariant metric $d$ inducing the topology of
$S(\omega)$ (resp. $Iso({\bf U})$). Recall from ([42], Theorem 2.1.11) that
$G\leq_{c}S(\omega)$ is extremely amenable if and only if the left-translation
action of $G$ on $(G,d)$ is finitely oscillation stable. From ([42], Theorem
1.1.18) and ([32], proof of Theorem 3.1) this is equivalent to the following
condition:
> For any $\varepsilon>0$ and a finite $F\subset G$ there exists a finite
> $K\subseteq G$ such that for any function $c:K\rightarrow\\{0,1\\}$ there
> exists $i\in\\{0,1\\}$ and $g\in G$ such that for any $f\in F$ there exists
> $k\in c^{-1}(i)$ with $d(gf,k)<\varepsilon$.
We consider the case when $G$ has a countable base of the topology. By the
definition of extreme amenability if $G$ has a dense subgroup which is
extremely amenable, then $G$ is extremely amenable too. Now it is easy to see
that when $D\subseteq G$ is a countable dense subgroup of $G$, extreme
amenability of $G$ is equivalent to the condition above with the elements
taken in $D$.
We now see that when $G\in{\mathcal{F}}(S(\omega))$ (resp.
${\mathcal{F}}(Iso({\bf U}))$), extreme amenability of $G$ is equivalent to a
countable conjunction of the following conditions.
> Let $\bar{s}$ be a tuple of selectors and $\varepsilon\in{\bf Q}^{+}$. Then
> there is a tuple of selectors $\bar{t}$ such that for any function
> $c:\bar{t}\rightarrow\\{0,1\\}$ there exists $i\in\\{0,1\\}$ and a selector
> $s^{\prime}$ such that for any $s\in\bar{s}$ there exists $k\in c^{-1}(i)$
> with $d(s^{\prime}(G)s(G),k(G))<\varepsilon$.
We see that extreme amenability is a Borel property.
#### 3.3. Comments
1\. The argument given in Section 3.2 is adapted from the proof of Theorem 1.3
in [32]. Originally [32] considers the following situation. Let $G$ be a
Polish group and $\Gamma$ be a countable group. Let us consider the Polish
space $Hom(\Gamma,G)$ of all homomorphisms from $\Gamma$ to $G$. By Theorem 3.1
in [32] the subset of all $\pi\in Hom(\Gamma,G)$ such that
$\overline{\pi(\Gamma)}$ is extremely amenable is a $G_{\delta}$ subset of
$Hom(\Gamma,G)$. By Corollary 18 of [23] the set of all representations from
$Hom(\Gamma,G)$ whose image is an amenable subgroup of $G$ is also
$G_{\delta}$ in $Hom(\Gamma,G)$.
2\. Let ${\mathcal{G}}_{n}$ be the space of all $n$-generated (discrete)
groups with distinguished $n$-tuples of generators $(G,\bar{g})$ (so-called
marked groups). This is a compact space under the so-called Grigorchuk
topology.
In papers [3] and [48] descriptive complexity in ${\mathcal{G}}_{n}$ of some
versions of amenability is considered. The authors of [3] show that
amenability is ${\bf\Pi}^{0}_{2}$. They ask if it is
${\bf\Pi}^{0}_{2}$-complete.
A group $G$ is called elementarily amenable if it is in the smallest class of
groups which contains all abelian and finite ones and is closed under
quotients, subgroups, extensions corresponding to exact sequences
$1\rightarrow K\rightarrow G\rightarrow H\rightarrow 1$ and directed unions.
It is proved in [48] that elementary amenability is coanalytic and non-Borel.
3\. Let us fix an indexation of all computably enumerable groups on $\omega$
(i.e. computably presented groups). Under this indexation computable groups
correspond to groups with decidable word problem. It is easy to see that
Følner’s condition of amenability (or the Schneider–Thom condition of
Section 3.1 in the case of discrete groups) define a $\Pi^{0}_{2}$ subset of
indices. On the other hand applying Theorem 3 of [21] it is easy to see that
this property is $\Pi^{0}_{2}$-hard (it is a Markov property). Similarly one
easily obtains that extreme amenability is $\Pi^{0}_{2}$-complete.
E-mail: Aleksander.Iwanow@polsl.pl
### 4\. Nies: Stone-type duality for totally disconnected locally compact
groups
In this post all topological groups will be Polish, and they all have a basis
of neighborhoods of $1$ consisting of open subgroups. As is well-known, such a
group is topologically isomorphic to a closed subgroup of the symmetric group
on ${\mathbb{N}}$, denoted $S(\omega)$. A homeomorphic embedding into
$S(\omega)$ is obtained for instance by letting the group act by left
translation on the left cosets of open subgroups in that basis of
neighborhoods of $1$.
Nies, Schlicht and Tent [37] developed the notion of coarse groups for closed
subgroups of $S(\omega)$, which first appeared in [25]. The idea is to do
algebra with approximations of elements, rather than with the elements
themselves. The approximations are all the cosets of open subgroups (left or
right cosets; these give the same class). Open cosets form the domain of the
coarse group, and the structure is equipped with the ternary relation
$AB\subseteq C$. The authors in [37] apply the notion primarily to the class
of oligomorphic groups, but also to profinite groups. (Ivanov has pointed out
that in that case a closely related structure was studied much earlier by
Chatzidakis [5].)
Here we give a different approach to coarse groups, which is particularly
intended for the setting of totally disconnected locally compact (t.d.l.c.)
groups. General references for t.d.l.c. groups include Willis [49, 50].
In [37] all open cosets of a topological group $G$ were considered, but the
analysis was restricted to classes of groups $G$ which have only countably
many open subgroups. This is e.g. the case for Roelcke precompact groups (for
each open subgroup $U$ there is a finite set $F$ such that $UFU=G$). Such
groups are in a sense opposite to the t.d.l.c. groups: the intersection of
those two “large” classes consists merely of the profinite groups. However, a
superclass of both has also been studied: locally Roelcke precompact groups.
The coarse group $\mathcal{M}(G)$ of a t.d.l.c. group $G$ consists of the
_compact_ open cosets of $G$.
#### 4.1. Inductive groupoids, and inverse semigroups
A category is small if the objects form a set (rather than a proper class).
Recall that a groupoid is a small category such that each morphism $A$ has an
inverse, denoted $A^{-1}$. A partially ordered groupoid is a groupoid with a
partial order $\sqsubseteq$ on the set of morphisms (and therefore also on the
objects, which are identified with their identity morphisms) where the
functional and the order structure are compatible. An _inductive groupoid_ is
a partially ordered groupoid such that the partial order $\sqsubseteq$
restricted to the set of neutral elements is a semilattice. See Lawson [27,
Section 4.1].
A semigroup is called _regular_ if for each $a$ there is $b$, called the
inverse of $a$, such that $aba=a$ and $bab=b$. Inductive groupoids closely
correspond to _inverse semigroups_. These are regular semigroups where the
idempotents (elements $e$ such that $ee=e$) commute. In particular, the
(large) categories of inductive groupoids and of inverse semigroups are
isomorphic.
For instance, given an inductive groupoid, to define the semigroup operation,
simply let $AB=(A\mid V)(V\mid B)$, where $V$ is the meet of the right domain
of $A$ and the left domain of $B$, and $\mid$ denotes restriction, given by
Axiom (A2) below. See Lawson [27] for details.
The representation theorem due to Wagner and Preston (both 1954,
independently) realizes every inverse semigroup $S$ as an inverse semigroup of
partial bijections on $S$. An element $a\in S$ becomes the partial bijection
$\tau_{a}\colon a^{-1}S\to S$ given by $t\mapsto at$. Clearly
$\tau_{b}\circ\tau_{a}=\tau_{ba}$. If $S$ is a group this is just the left
Cayley representation. See Lawson [27, Section 4.1 and 4.5].
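The Wagner–Preston construction can be checked mechanically on a small example. The following sketch (our own illustration, not from the text: it uses the symmetric inverse monoid $I_2$ of all partial bijections of $\{0,1\}$, and all function names are ours) verifies that $a\mapsto\tau_a$ turns the semigroup product into composition of partial bijections.

```python
# Hedged sketch: Wagner-Preston representation for the symmetric
# inverse monoid I_2 (all partial bijections of {0, 1}).
from itertools import product

POINTS = (0, 1)

def all_partial_bijections():
    maps = set()
    for images in product((None, 0, 1), repeat=2):
        m = {i: v for i, v in zip(POINTS, images) if v is not None}
        if len(set(m.values())) == len(m):          # injective
            maps.add(frozenset(m.items()))
    return maps

def mul(b, a):
    """Semigroup product ba = composition "b after a" of partial bijections."""
    da, db = dict(a), dict(b)
    return frozenset((i, db[da[i]]) for i in da if da[i] in db)

def inv(a):
    return frozenset((v, i) for i, v in a)

S = all_partial_bijections()                        # the 7 elements of I_2

def tau(a):
    """Wagner-Preston: a becomes the partial bijection t -> at on inv(a)S."""
    dom = {mul(inv(a), s) for s in S}
    return {t: mul(a, t) for t in dom}

# Check that tau_b o tau_a = tau_{ba}, including equality of domains.
for a, b in product(S, repeat=2):
    ta, tb, tba = tau(a), tau(b), tau(mul(b, a))
    composite = {t: tb[ta[t]] for t in ta if ta[t] in tb}
    assert composite == tba
```

The domain equality $\operatorname{dom}(\tau_b\circ\tau_a)=(ba)^{-1}S$ is a standard inverse-semigroup identity; the loop confirms it for every pair in this small monoid.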
#### 4.2. The coarse groupoid of a topological group
Given a topological group for which the open subgroups form a nbhd basis of
$1$, an inductive groupoid is obtained as follows.
* •
Objects correspond to open subgroups. In the abstract setting they will be
called ∗subgroups. We use letters $U,V,W$ for them.
* •
Morphisms correspond to open cosets. Abstractly they are called ∗cosets. We
use letters $A,\ldots,E$ to denote them. $A\colon U\to V$ means that $A$ is a
right coset of $U$ and a left coset of $V$. In brief we often write
${}_{U}A_{V}$ for this. The usual notation in the theory of groupoids is
$U=\mathbf{d}(A)$ (for domain) and $V=\mathbf{r}(A)$ (for range).
We will treat coarse groupoids axiomatically. We begin with the following.
Notation and conventions. An object $U$ will be identified with the neutral
morphism $1_{U}$. So there are only morphisms, and objects merely form a
convenient manner of speaking. We write $RC(U)$ and $LC(U)$ for the sets of
right, resp. left ∗cosets of $U$. In formulas we also write ${}_{U}A$ to mean
that $A\in RC(U)$, and $A_{U}$ to mean that $A\in LC(U)$.
We axiomatically require the usual properties defining groupoids and partial
orders. For ease of language we adjoin a least element $0$ to the partial
order. We require that in the partial order $\sqsubseteq$ on the objects
(i.e., the ∗subgroups), any two elements $U,V$ have an infimum, denoted
$U\wedge V$. We write $A\perp B$, and say that $A$ and $B$ are incompatible, if $A\wedge B=0$. If
$M=\mathcal{M}(G)$ then $0$ is interpreted as the empty set.
We have the following axioms connecting the groupoid and partial order. (Keep
in mind that we identify $U$ and $1_{U}$.)
###### Axioms 4.1.
1. (A1)
If $A\sqsubseteq B$ then $A^{-1}\sqsubseteq B^{-1}$.
2. (A2)
Let $U\sqsubseteq V$.
($\downarrow$) If ${}_{V}B$ then $A\sqsubseteq B$ for some ${}_{U}A$.
($\uparrow$) If ${}_{U}A$ then $A\sqsubseteq B$ for some ${}_{V}B$.
3. (A3)
If $AB$ and $A^{\prime}B^{\prime}$ are defined and $A\sqsubseteq
A^{\prime},B\sqsubseteq B^{\prime}$, then $AB\sqsubseteq
A^{\prime}B^{\prime}$.
4. (A4)
If ${}_{U}A$ and ${}_{V}B$ and $U\sqsubseteq V$, then either $A\sqsubseteq B$
or $A\perp B$.
5. (A5)
If $A\not\sqsubseteq B$ then there is $C\sqsubseteq A$ such that $C\perp B$.
Remarks:
Note that Axioms (A1), (A2$\downarrow$) and (A3) are the usual axioms of
ordered groupoids, OG1, OG3 and OG2 respectively in Lawson [27, Section 4.1],
only the notation there is a bit different.
We have $A_{U}$ iff ${}_{U}A^{-1}$ by the definitions, which implies that the
axioms mentioning right ∗cosets also hold for left ∗cosets. See e.g. [27,
Section 4.1, Prop 3(6)] for a proof of the left coset version of
(A2$\downarrow$) which Lawson calls (OG3∗).
Axiom (A2$\uparrow$) doesn’t seem to occur in the ordered groupoids
literature. Axiom (A4) is special to the applications to topological groups we
have in mind here. It implies that different right ∗cosets of the same
∗subgroup are disjoint. Axiom (A5) essentially says that the topology is
Hausdorff.
#### The axioms are satisfied for structures of suitable open cosets
In the following, $G$ is a topological group as above with countably many open
subgroups, or $G$ is a t.d.l.c. group. Let $\mathcal{M}(G)$ denote the coarse
groupoid: the ∗subgroups are the open subgroups in the former case, and the
compact open subgroups in the t.d.l.c. case. The morphisms are the (compact)
open cosets. We have $A\colon U\to V$ if $A$ is a right coset of $U$ and a
left coset of $V$. Recall that in brief we write ${}_{U}A_{V}$ for this. It is
easily seen that the axioms above hold. To show that $\mathcal{M}(G)$ (with 0
interpreted as the empty set) is a lower semilattice, suppose that $x\in
aU\cap bV$ for subgroups $U,V$; then $xU=aU$ and $xV=bV$. Let $W=U\cap V$.
Then $aU\cap bV=xW$. Claim 4.6 below shows that this argument works in the
general axiomatic setting.
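The coset-intersection computation above can be illustrated concretely in a finite group; the choice of $\mathbb{Z}/12$ and the variable names below are ours, not from the text.

```python
# Hedged sketch: in Z/12, a nonempty intersection of cosets aU and bV
# is a single coset of W = U ∩ V, as in the argument above.
n = 12

def subgroup(g):                      # the cyclic subgroup <g> of Z/n
    return {(g * k) % n for k in range(n)}

U, V = subgroup(2), subgroup(3)       # <2> and <3> in Z/12
W = U & V                             # = <6> = {0, 6}

def coset(a, H):
    return {(a + h) % n for h in H}

for a in range(n):
    for b in range(n):
        I = coset(a, U) & coset(b, V)
        # either empty, or equal to xW for every x in the intersection
        assert not I or all(I == coset(x, W) for x in I)
```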
Note that $\exists A\colon U\to V$ iff $U$ and $V$ are conjugate in $G$. In
this case, there is $a\in G$ such that $Ua=A=aV$.
#### Some consequences of the axioms
First we check that the ordering relation of morphisms carries over to their
left and right domains.
###### Claim 4.2.
Suppose $A\colon U_{0}\to U_{1}$, $B\colon V_{0}\to V_{1}$ and $A\sqsubseteq
B$. Then $U_{i}\sqsubseteq V_{i}$ for $i=0,1$.
To verify this: by (A1) we have $A^{-1}\sqsubseteq B^{-1}$. Then by (A3) and
identifying $U$ with $1_{U}$, we have $U_{0}=AA^{-1}\sqsubseteq
BB^{-1}=V_{0}$. Similarly, we’ve got $U_{1}\sqsubseteq V_{1}$.
###### Claim 4.3.
For each $A\in M$ and each ∗subgroup $U$, there are a ∗subgroup $V\sqsubseteq
U$ and a left ∗coset $B$ of $V$ such that $B\sqsubseteq A$. A similar fact
holds for right ∗cosets.
To see this, suppose that $A$ is a left ∗coset of $W$. Let $V=W\wedge U$. By
Axiom (A2$\downarrow$) there is $B\sqsubseteq A$ such that $B$ is a left
∗coset of $V$, as required.
The following holds more generally in ordered groupoids.
###### Claim 4.4 ([27], Section 4.1, Prop 3(5)).
If $C\sqsubseteq AB$ then there are $A^{\prime}\sqsubseteq A$ and
$B^{\prime}\sqsubseteq B$ such that $C=A^{\prime}B^{\prime}$.
Next we show that each left ∗coset of a ∗subgroup $V$ is given by the left
∗cosets of a ∗subgroup $U$ it contains. (In a sense it is the “union” of these
cosets.)
###### Claim 4.5.
Suppose $U\sqsubseteq V$. If $B_{V}\neq C_{V}$ then there is $A_{U}\sqsubseteq
B$ such that $A\perp C$.
To verify this, we may suppose that $B\not\sqsubseteq C$. By Axiom (A5) there
are a ∗subgroup $W$ and $D_{W}\sqsubseteq B$ such that $D\perp C$. Let
$U^{\prime}=W\wedge U$. Let $E_{U^{\prime}}\sqsubseteq D$ by (A2$\downarrow$).
There is $A_{U}\sqsupseteq E$ by (A2$\uparrow$). Since $A\perp B$ fails
(because of $E$) we have $A\sqsubseteq B$ by (A4). However, $A\sqsubseteq C$
would imply $E\sqsubseteq C$ and hence contradict $D\perp C$. So $A\perp C$ by
(A4) again.
###### Claim 4.6.
Suppose $A_{U}\wedge B_{V}\neq 0$. Let $W=U\wedge V$. Then $A\wedge B$ is the
unique left ∗coset of $W$ contained in $A$ and $B$.
If $C_{W^{\prime}}\sqsubseteq A,B$, then $W^{\prime}\sqsubseteq W$ by Claim
4.2 and definition of $W$. So by (A2$\uparrow$) there is $D_{W}$ such that
$C\sqsubseteq D$. Letting $C=A\wedge B$, we see that $A\wedge B$ is a left
∗coset of $W$. If any left ∗coset of $W$ is contained in $A,B$ it equals $C$
by (A4).
#### Normal ∗subgroups
Recall that we write $LC(U)$ and $RC(U)$ for the sets of left, resp. right,
∗cosets of $U$. We say that ∗subgroups $U,V$ are _conjugate_ if $\exists
A\colon U\to V$, or in other words, $RC(U)\cap LC(V)\neq\emptyset$. In
$\mathcal{M}(G)$ this replicates the usual meaning of conjugacy. For the
slightly nontrivial direction, if $A=Ua=bV$ then $a^{-1}Ua=a^{-1}bV$. This is
a subgroup, so $a^{-1}b\in V$, and hence $U^{a}=V$. The axioms of groupoids
imply that conjugacy is an equivalence relation.
Normal ∗subgroups $V$ are the ones only conjugate to themselves: for each
$B\in RC(V)$ we have $B^{-1}VB=V$. This is equivalent to $VB=BV$ for each such
$B$, or equivalently defined by the condition $LC(V)=RC(V)$. In category
language, all morphisms with left domain $V$ also have right domain $V$, and
vice versa. So there is a natural group operation on $RC(V)$.
A rather trivial fact from group theory becomes more demanding in the
axiomatic setting of coarse groupoids.
###### Proposition 4.7.
Suppose that $U$ is a ∗subgroup such that $RC(U)$, or equivalently $LC(U)$, is
finite. Then there is a normal ∗subgroup $N\sqsubseteq U$.
In usual topological group theory, the argument is as follows. Since $U$ is a
subgroup of $G$ of finite index, the conjugacy class of $U$ is finite. Let
$N=\bigcap_{g\in G}U^{g}$. This is a finite intersection and hence defines an
open subgroup, and $N^{h}=\bigcap_{g}U^{gh}=N$ for each $h$. So $N$ is normal.
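The classical normal-core argument can be checked directly in a small group; the choice of $S_3$ and the helper names below are ours.

```python
# Hedged sketch: the normal core N = intersection of the conjugates U^g
# of a finite-index subgroup U, checked in the symmetric group S_3.
from itertools import permutations

def mul(p, q):                        # composition "p after q"
    return tuple(p[q[i]] for i in range(3))

def inv(p):                           # inverse permutation
    return tuple(sorted(range(3), key=lambda i: p[i]))

G = list(permutations(range(3)))
U = {(0, 1, 2), (1, 0, 2)}            # subgroup generated by the swap (0 1)

N = set(G)
for g in G:
    N &= {mul(mul(inv(g), u), g) for u in U}   # intersect with U^g

# N is normal: fixed setwise by every conjugation
assert all({mul(mul(inv(g), x), g) for x in N} == N for g in G)
```

Here the core is trivial, since the three conjugates of $\langle(0\,1)\rangle$ meet only in the identity.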
###### Proof.
Let $\mathcal{D}$ be the “conjugacy class” of $U$, namely
$\mathcal{D}=\\{B^{-1}UB\colon B\in RC(U)\\}$.
The hypothesis implies that $\mathcal{D}$ is finite, so let $N$ be the meet of
all its members. Then $N\sqsubseteq U$. We show that $N$ is normal as
required.
Given $C\in RC(N)$, we will show that $C\in LC(N)$. We define a bijection
$f_{C}\colon\mathcal{D}\to\mathcal{D}$ by
$f_{C}(W)=D^{-1}WD$ where $D\in RC(W)$ and $C\sqsubseteq D$;
note that $D$ exists and is unique because $N\sqsubseteq W$, so $f_{C}$ is
well-defined.
To verify that $f_{C}$ is a bijection, since $\mathcal{D}$ is finite it suffices
to show that $f_{C}$ is 1-1. Suppose $f_{C}(W_{0})=f_{C}(W_{1})=:V$. Then
$D_{0}^{-1}W_{0}D_{0}=D_{1}^{-1}W_{1}D_{1}=V$ where $C\sqsubseteq D_{0},D_{1}$
and $D_{i}\in RC(W_{i})$. Since $D_{0},D_{1}\in LC(V)$, and $D_{0},D_{1}$ are
not disjoint, this implies $D_{0}=D_{1}$ and hence $W_{0}=W_{1}$.
We have $N^{\prime}:=C^{-1}NC\sqsubseteq f_{C}(W)$ for each $W\in\mathcal{D}$
using (A1) and (A3). So, since $f_{C}$ is onto, $N^{\prime}\sqsubseteq N$.
Since $C\in LC(N^{\prime})$ and $N^{\prime}\sqsubseteq N$, there is $D\in
LC(N)$ such that $C\sqsubseteq D$ by (A2$\uparrow$). Then
$C^{\prime}:=D^{-1}\in RC(N)$, so $C^{\prime}\sqsubseteq D^{\prime}$ for some
$D^{\prime}\in LC(N)$ by the argument above. Then $D\sqsubseteq
E:=(D^{\prime})^{-1}\in RC(N)$ by (A1). So $C\sqsubseteq E$ and both are in
$RC(N)$. Hence $C=D=E$ by (A4). This shows that $LC(N)\subseteq RC(N)$, hence
also $RC(N)\subseteq LC(N)$ by taking inverses. So $RC(N)=LC(N)$ as required.
∎
#### The filter group associated with a coarse groupoid
We’ve seen how to turn a topological group into a coarse groupoid. Now we go
the opposite way. This is adapted from [37]. An alternative, possibly easier
way to do this is to take the topological group of “left automorphisms” of
$M$, as detailed in Prop. 4.18 below.
Let $M$ be a coarse groupoid. A filter on a p.o. is a proper subset that is
downward directed and upward closed. For $(M,\sqsubseteq)$, a filter is thus a
subset $x$ that is upward closed, and $A\wedge B$ exists and is in $x$, for
any $A,B\in x$.
###### Definition 4.8.
A _full filter_ is a filter $x$ on the partial order $(M,\sqsubseteq)$ such
that for each ∗subgroup $U\in M$, there is a left ∗coset and a right ∗coset in
x$. Note that these ∗cosets are unique by (A4). $\mathcal{F}(M)$ denotes the
set of full filters. We use variables $x,y,z$ for full filters.
###### Claim 4.9.
For each $A$ there is a full filter $x$ such that $A\in x$.
This follows by iterated applications of Claim 4.3.
We begin with the topology on $\mathcal{F}(M)$.
###### Definition 4.10 (Topology on the set of full filters).
As in [37] we define a topology on $\mathcal{F}(M)$ by declaring as subbasic
the open sets
$\widehat{A}=\\{x\in\mathcal{F}(M)\colon A\in x\\}$
where $A\in M$. These sets form a base since filters are directed.
Suppose $M$ is countable. The following improves [37, Prop 2.5], which states
that $\mathcal{F}(M)$ is a totally disconnected Polish space.
###### Proposition 4.11.
There is a homeomorphic embedding taking $\mathcal{F}(M)$ to a closed subset
of Baire space.
###### Proof.
Let ${\left\langle{U_{n}}\right\rangle}_{n\in{\mathbb{N}}}$ be a descending
sequence of ∗subgroups that is cofinal (every ∗subgroup contains a $U_{n}$).
Fix a bijection $f\colon{\mathbb{N}}\to M$. We define an injection $\Delta$
from $\mathcal{F}(M)$ into Baire space ${}^{\omega}\omega$.
Suppose that $x\in\mathcal{F}(M)$. Let $\Delta(x)(0)$ be the left ∗coset of
$U_{0}$ in $x$.
Suppose $\Delta(x)(2n)$ is defined and a left ∗coset of some $U_{r}$. Let
$\Delta(x)(2n+1)$ be the $k$ such that $A=f(k)\in x$, and $A$ is a right
∗coset of $U_{m}$ where $m>r$ is chosen least possible. This exists by Claim
4.3, since the sequence ${\left\langle{U_{n}}\right\rangle}$ is cofinal, and
(A2).
Similarly, suppose $\Delta(x)(2n+1)$ is defined and a right ∗coset of some
$U_{r}$. Let $\Delta(x)(2n+2)$ be the $k$ such that $A=f(k)\in x$, and $A$ is
a left ∗coset of $U_{m}$ where $m>r$ is chosen least possible.
By the axioms, $\Delta$ is injective because $x$ is the filter generated by
$\Delta(x)$. One checks that it is a homeomorphism because full filters
correspond to the paths on the subtree of the strings given by the possible
next choices at each step. ∎
Next we define the group operations on $\mathcal{F}(M)$. For filters $x,y$ we
let
$\displaystyle x^{-1}$ $\displaystyle=$ $\displaystyle\\{A^{-1}\colon\,A\in
x\\}$ $\displaystyle xy$ $\displaystyle=$ $\displaystyle\\{C\colon\exists
A\in x\,\exists B\in y\,[AB\sqsubseteq C]\\}$
Here writing $AB$ implies that it is defined, i.e. $\exists U\,[A_{U}\mbox{ and }{}_{U}B]$.
###### Claim 4.12.
If $x,y$ are full filters, then $x^{-1}$ and $z=xy$ are full filters.
That $x^{-1}$ is a full filter is straightforward. For the second statement,
clearly $z$ is upwards closed. We verify that $z$ is downwards directed.
Always let $i=0,1$. Suppose $C_{i}\in z$. Then there are $A_{i}\in x$,
$B_{i}\in y$ and $U_{i}$ such that ${A_{i}\ }_{U_{i}}B_{i}$ and
$A_{i}B_{i}\sqsubseteq C_{i}$.
Let $U=U_{0}\wedge U_{1}$. By definition of full filters there are $A\in x$
and $B\in y$ such that $A\ _{U}\,B$. Then $C=AB\in z$. By (A4) and since
filters are downwards directed, we have $A\sqsubseteq A_{i}$ and $B\sqsubseteq
B_{i}$. So by (A3) we have $C\sqsubseteq C_{i}$ as required.
The neutral element $e$ is the full filter consisting of all the ∗subgroups.
It is clear from the groupoid axioms that $(\mathcal{F}(M),\cdot)$ is a group
with this neutral element, and the inverse operation above.
That leaves continuity of the group operations. First, as in [37, Claim 2.11]
we need to check that the transfer from formal to semantic concepts works
between ∗cosets $A$ and the corresponding actual open cosets $\widehat{A}$.
Note that we are NOT claiming that these are the only open cosets; this is not
true unless we require further axioms special to the particular class of
groups we are interested in.
###### Claim 4.13.
Let $A,B,C\in M$.
* (a)
$A\sqsubseteq B\Longleftrightarrow\widehat{A}\subseteq\widehat{B}$.
* (b)
$\widehat{B^{-1}}=(\widehat{B})^{-1}$.
* (c)
If $A\cdot B$ is defined then $\widehat{A\cdot B}=\widehat{A}\widehat{B}$.
* (d)
$\widehat{U}$ is a subgroup of $\mathcal{F}(M)$.
* (e)
$A\in LC(U)\Longleftrightarrow\widehat{A}$ is a left coset of $\widehat{U}$.
Similarly for right cosets.
###### Proof.
(a) The implication $\Rightarrow$ is upward closure of full filters. For the
implication $\Leftarrow$, suppose that $A\not\sqsubseteq B$. By (A5) there is
$C\sqsubseteq A$ such that $C\perp B$. By Claim 4.9 let $x$ be a full filter
such that $C\in x$. Then $A\in x$ and $B\not\in x$. (b) is immediate. For (c),
$\supseteq$ is by definition, and $\subseteq$ follows from Claim 4.9.
(d) is immediate using $UU=U$.
(e) We follow [37]:
$\Rightarrow$: take any $x\in\widehat{A}$. We show that
$x\widehat{U}=\widehat{A}$.
For $x\widehat{U}\subseteq\widehat{A}$, let $y\in\widehat{U}$. Since $A\in
LC(U)$, we have $AU\sqsubseteq A$. So $x\cdot
y\in\widehat{A}\widehat{U}\subseteq\widehat{A}$ by (c).
For $\widehat{A}\subseteq x\widehat{U}$, let $y\in\widehat{A}$. To show that
$y\in x\widehat{U}$, or equivalently $x^{-1}y\in\widehat{U}$, note that we
have
$x^{-1}y\in\widehat{A}^{-1}\widehat{A}=\widehat{A^{-1}}\widehat{A}\subseteq\widehat{U}$
by (b) and (c).
$\Leftarrow$: Suppose $\widehat{A}=x\widehat{V}$. There is $B\in x$ such that
$B\in LC(V)$. By the forward implication, $\widehat{B}$ is a left coset of
$\widehat{V}$. Also $x\in\widehat{A}\cap\widehat{B}$, so $A\perp B$ fails.
Since $A,B\in LC(V)$ this implies $A=B$ by (A4).
The case of right cosets follows by taking inverses. ∎
Another transfer fact will be useful.
###### Claim 4.14.
The map $(x,y)\mapsto x\cdot y^{-1}$ is continuous on $\mathcal{F}(M)$.
This follows since the sets of the form $\widehat{D}$ form a basis. If
$xy^{-1}\in\widehat{D}$, i.e., $D\in xy^{-1}$, then by definition there are
$A\in x$ and $B\in y$ such that $AB^{-1}\sqsubseteq D$. Now, by Claim 4.13,
$\widehat{A}\widehat{B}^{-1}\subseteq\widehat{D}$ as required.
###### Claim 4.15.
For any left coset $x\widehat{V}$ in $\mathcal{F}(M)$, there is $A\in LC(V)$
such that $x\widehat{V}=\widehat{A}$.
###### Proof.
Since $x$ is a full filter, there is some left ∗coset $A$ of $V$ in $x$. We
claim that $x\widehat{V}=\widehat{A}$. We have
$x\widehat{V}\subseteq\widehat{A}\widehat{V}=\widehat{A}$, since $A\in x$ and
$\widehat{A}$ is a left coset of $\widehat{V}$ by Claim 4.13. To see that
$\widehat{A}\subseteq x\widehat{V}$, let $y\in\widehat{A}$. Since
$x,y\in\widehat{A}$,
$x^{-1}y\in\widehat{A}^{-1}\widehat{A}=\widehat{A^{-1}}\widehat{A}\subseteq\widehat{V}$
by Claim 4.13. Thus $y\in x\widehat{V}$. ∎
By definition of the topology, the open subgroups of $\mathcal{F}(M)$ form a
nbhd base of $1$. So if $M$ is countable, $\mathcal{F}(M)$ is a
non-Archimedean Polish group.
The operation $\mathcal{F}$ recovers a topological group from its coset
structure when that is countable. It also works in the t.d.l.c. setting where
$\mathcal{M}(G)$ denotes the compact open cosets.
###### Proposition 4.16 (cf. [25], after Claim 3.6, and [37], Prop 2.13).
Suppose that $G$ is a closed subgroup of $S(\omega)$ such that
$\mathcal{M}(G)$ is countable. There is a natural isomorphism of topological groups
$\Phi:G\cong\mathcal{F}(\mathcal{M}(G))$ given by $g\mapsto\\{A\colon A\ni
g\\}$,
with inverse given by $x\mapsto g$ where $\bigcap x=\\{g\\}$.
The inverse map simply sends a full filter $x$ to the point it converges to.
Note that $x$ isn’t really a filter in the sense of topology, only on certain
open sets, but that suffices for the convergence notion.
###### Example 4.17.
For an instructive example of a coarse groupoid, consider the oligomorphic
group $G=\operatorname{Aut}(\mathbb{Q},<)$. The open subgroups of $G$ are the
stabilizers of finite sets. If $U,V$ are stabilizers of sets of the same
finite cardinality, there is a unique morphism $A\colon U\to V$ in the sense
above, corresponding to the order-preserving bijection between the two sets.
The coarse groupoid for $\operatorname{Aut}(\mathbb{Q},<)$ is canonically
isomorphic to the groupoid of finite order-preserving maps on $\mathbb{Q}$,
with the partial order being reverse extension. For compatible maps $A,B$, the
meet $A\wedge B$ is the union of those maps.
A filter $x$ corresponds to an arbitrary order-preserving map $\psi$ on
$\mathbb{Q}$. The filter $x$ contains a right coset of each open subgroup iff
$\psi$ is total, and a left coset of each open subgroup iff $\psi$ is onto. So
the set of full filters corresponds to $\operatorname{Aut}(\mathbb{Q})$ as
expected. (Incidentally, this example shows that in Definition 4.8 we need
both types of cosets, and that not every maximal filter is full.)
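The uniqueness of morphisms in Example 4.17 can be verified on a finite sample: between any two finite subsets of $\mathbb{Q}$ of the same size there is exactly one order-preserving bijection. The sample of rationals and the helper names are our choices.

```python
# Hedged sketch for Example 4.17: exactly one order-preserving bijection
# between any two k-element subsets of a finite sample of Q.
from fractions import Fraction
from itertools import combinations, permutations

pts = [Fraction(i, 2) for i in range(5)]            # 0, 1/2, 1, 3/2, 2

def order_preserving(pairs):
    return all((a < c) == (b < d)
               for (a, b), (c, d) in combinations(pairs, 2))

for k in range(1, 4):
    for dom in combinations(pts, k):
        for rng in combinations(pts, k):
            witnesses = [img for img in permutations(rng)
                         if order_preserving(list(zip(dom, img)))]
            assert len(witnesses) == 1              # the sorted matching
```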
#### The filter group as an automorphism group
Let $M$ be a coarse groupoid. By $M_{\text{left}}$ we will denote the
structure with domain $M$ and the operations $\wedge$ and $(r_{B})_{B\in M}$
where $Ar_{B}=AB$ in case $\mathbf{r}(A)=\mathbf{l}(B)$, and $Ar_{B}=0$
otherwise. We show that the left action of $\mathcal{F}(M)$ on $M$ corresponds
to the automorphisms of this “rewrite” of $M$. (This is similar to showing that
a group is isomorphic to the automorphism group of a Cayley graph given by a
generating set, with edge relations labelled according to the generators.)
Note that for each automorphism $p$ of $M_{\text{left}}$, and each $A$, we
have
$\mathbf{r}(p(A))=\mathbf{r}(A).$
This is because where $U=\mathbf{r}(A)$, we have $p(A)U=p(AU)=p(A)$. Also,
note that $p$ is determined by its restriction to the ∗subgroups, because for
each right ∗coset $B$ of a ∗subgroup $U$ we have $p(B)=p(U)B$.
###### Proposition 4.18.
$\mathcal{F}(M)$ is topologically isomorphic to
$\operatorname{Aut}(M_{\text{left}})$ via a canonical isomorphism $\Theta$.
###### Proof.
For $x\in\mathcal{F}(M)$, the left action ${}_{U}B\mapsto A=x\cdot B$ is given
by $A=CB$ where $C_{U}\in x$. The isomorphism
$\Theta\colon\mathcal{F}(M)\to\operatorname{Aut}(M_{\text{left}})$ maps $x$ to
its left action:
$\Theta(x)(A)=x\cdot A$.
Clearly $\Theta(x)\in\operatorname{Aut}(M_{\text{left}})$, and $\Theta$
preserves the group operations.
Let $s_{M}\in\mathcal{F}(M)$ denote the full filter of ∗subgroups, which is
the neutral element of $\mathcal{F}(M)$. We claim that the inverse $\Phi$ of
$\Theta$ is given by
$\Phi(p)=p(s_{M})$.
Clearly $x=\Phi(p)$ is a filter. To show that $x$ is a _full_ filter, let $U$
be a ∗subgroup in $M$. Since $p$ is an automorphism, firstly, we have
$p(U)U=p(U)$, so $p(U)\in LC(U)$ and $p(U)\in x$. Secondly, there is $B$ such
that $p(B^{-1})=U$. Then $p(B^{-1}B)=UB=B$. So $B\in RC(U)$. Now
$V=B^{-1}B=\mathbf{r}(B)$ is a ∗subgroup. Since $p(V)=B$ we have $B\in x$.
We verify that $\Theta,\Phi$ are inverses of each other. $\Phi(\Theta(x))=x$
because
$A_{U}\in x\leftrightarrow xU=A\leftrightarrow\Theta(x)(U)=A$.
$\Theta(\Phi(p))=p$ because
$p(U)=A\leftrightarrow
A\in\Phi(p)\leftrightarrow\Phi(p)U=A\leftrightarrow\Theta(\Phi(p))(U)=A$.
To show $\Theta$ and $\Phi$ are continuous at $1$, note that if $p=\Theta(x)$,
then $p(U)=U$ is equivalent to $x\in\widehat{U}$.
∎
#### Profinite groups, and ∗compact coarse groupoids
As mentioned, Chatzidakis [5] carried out the first research related to the
application of coarse groupoids to profinite groups. In her version the coarse
groupoid was restricted to normal open ∗subgroups, which suffices in that
case. (Thanks to the Ivanovs for pointing this out.)
We say that a coarse groupoid $M$ is _∗ compact_ if $\forall U\,[LC(U)\text{
is finite}]$. This is, of course, equivalent to requiring that $\forall
U\,[RC(U)\text{ is finite}]$. Clearly $\mathcal{M}(G)$ for profinite $G$ has
this property. By Prop. 4.7, ∗compactness implies that each ∗subgroup $U$
contains a normal ∗subgroup $V$. (This was required separately in [37].)
###### Proposition 4.19.
Let $M$ be a coarse groupoid. Then
$M$ is ∗compact $\Leftrightarrow$ $\mathcal{F}(M)$ is compact.
###### Proof.
$\Leftarrow$: Given $U\in M$, by Claim 4.13(d) $\widehat{U}$ is open in
$\mathcal{F}(M)$. So it has finite index. By (e) of the same claim, this
implies that $LC(U)$ is finite.
$\Rightarrow$: Using Prop. 4.7, let
${\left\langle{N_{k}}\right\rangle}_{k\in{\mathbb{N}}}$ be a descending chain
of normal ∗subgroups such that $\forall U\exists k\,[N_{k}\sqsubseteq U]$. Let
$G_{k}$ be the group induced by $M$ on $LC(N_{k})$. We define an onto map
$p_{k}\colon G_{k+1}\to G_{k}$ as follows: given $A\in LC(N_{k+1})$, using
(A2$\uparrow$) let $p_{k}(A)=B$ where $A\sqsubseteq B\in LC(N_{k})$. Each
$p_{k}$ is a homomorphism by Axioms (A1, A2).
Let $G$ be the inverse limit: $G=\projlim_{k}(G_{k},p_{k})$. Thus
$G=(\\{f\in\prod_{k}G_{k}\colon\forall k\,f(k)=p_{k}(f(k+1))\\},\cdot)$,
which is a closed and hence compact subgroup of the Cartesian product of
the $G_{k}$. We claim that $G\cong(\mathcal{F}(M),\cdot)$ via the map $\Phi$
that sends $f\in G$ to the filter in $\mathcal{F}(M)$ generated by the ∗cosets
$f(k)$, namely
$\Phi(f)=\\{C\in M\colon\,\exists k\,f(k)\sqsubseteq C\\}$.
It is clear that $\Phi$ is a monomorphism. To show $\Phi$ is onto, given a
full filter $x\in\mathcal{F}(M)$, for each $k$ there is $f(k)=B_{k}\in
LC(N_{k})$ such that $B_{k}\in x$. Then $f\in G$, and clearly $\Phi(f)=x$.
Note that the $\widehat{N}_{k}$ form a base of nbhds of $1$ in $\mathcal{F}(M)$.
Since $\Phi^{-1}(\widehat{N}_{k})=\\{f\colon f(k)=N_{k}\\}$ and the latter
sets form a base of nbhds of $1$ in $G$, we get that $\Phi$ is a
homeomorphism. Thus $\mathcal{F}(M)$ is compact. ∎
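The inverse-limit construction in the proof can be sketched at finite depth. The diagram $G_k=\mathbb{Z}/2^k$ with $p_k$ reduction mod $2^k$ is our choice of example, and truncating at a finite depth is an approximation to the actual limit.

```python
# Hedged sketch: finite-depth approximation to the inverse limit
# projlim (Z/2^k, reduction mod 2^k).
from itertools import product

DEPTH = 5                         # levels G_1, ..., G_DEPTH

# f[k] lives in Z/2^(k+1); compatibility: f[k+1] mod 2^(k+1) == f[k]
G = {f for f in product(*(range(2 ** k) for k in range(1, DEPTH + 1)))
     if all(f[k + 1] % 2 ** (k + 1) == f[k] for k in range(DEPTH - 1))}

assert len(G) == 2 ** DEPTH       # each level doubles the choices

# componentwise addition preserves compatibility, so G is a group
for f, g in product(G, repeat=2):
    s = tuple((x + y) % 2 ** (k + 1) for k, (x, y) in enumerate(zip(f, g)))
    assert s in G
```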
#### Coarse groupoids versus diagrams, for profinite groups
We will characterize the coarse groupoids $\mathcal{M}(G)$ obtained from
profinite groups $G$ by adding an axiom to the ∗compactness condition.
Consider profinite $G=\projlim_{k}(G_{k},p_{k})$ where the $G_{k}$ are finite
groups and each $p_{k}\colon G_{k+1}\to G_{k}$ is an epimorphism. We say that
${\left\langle{G_{k},p_{k}}\right\rangle}$ is a _diagram_ for $G$. By the
proof of Prop. 4.19, each diagram for $G$ can be seen as a coarse groupoid $M$
with $\mathcal{F}(M)\cong G$. So a coarse groupoid for $G$ is not unique.
Intuitively, there may be open subgroups of $\mathcal{F}(M)$ that $M$ is
missing. To avoid this we need another axiom. In the axiom to follow, “CC”
stands for “completeness in case of compactness”. We will see that it implies
in the compact case that each open subgroup of $\mathcal{F}(M)$ has a “name”.
Axiom CC. Let $M$ be a ∗compact coarse groupoid. Let $N$ be a normal ∗subgroup
of $M$.
If a set $\mathcal{S}\subseteq LC(N)$ is closed under products and inverses,
then there is a ∗subgroup $U$ such that $A\sqsubseteq U\leftrightarrow
A\in\mathcal{S}$, for each $A\in LC(N)$.
Clearly $\mathcal{M}(G)$ for profinite $G$ satisfies this axiom. The axiom
implies the dual of Prop. 4.16 in the compact case:
###### Proposition 4.20.
Let $M$ be a ∗compact coarse groupoid satisfying Axiom CC. Then
$M\cong\mathcal{M}(\mathcal{F}(M))$ via the map $A\mapsto\widehat{A}$.
###### Proof.
By Claim 4.13 it suffices to show that the map is onto.
Firstly, let $\mathcal{U}$ be an open subgroup of $\mathcal{F}(M)$. By
definition of the topology and Prop. 4.7, there is a normal ∗subgroup $N$ in
$M$ such that $\widehat{N}\subseteq\mathcal{U}$. By Prop. 4.19,
$\mathcal{F}(M)$ is compact, so $\mathcal{U}$ is the union of finitely many
cosets of $\widehat{N}$. By Claim 4.15 each such coset has the form
$\widehat{A}$ for some $A\in LC(N)$. Let $\mathcal{S}$ be the set of such $A$
in $LC(N)$. The set $\mathcal{S}$ is closed under product and inverses since
$\mathcal{U}$ is a subgroup, using Claim 4.13. So there is a ∗subgroup $U$ as
in Axiom CC. Clearly $\widehat{U}=\mathcal{U}$.
Secondly, given a left coset $\mathcal{B}$ of an open subgroup $\mathcal{U}$
in $\mathcal{F}(M)$, by Claim 4.15 we have $\mathcal{B}=\widehat{B}$ for some
$B\in LC(U)$ as required. ∎
#### T.d.l.c. groups and ∗locally compact coarse groupoids
We say that a coarse groupoid $M$ is _∗ locally compact_ if for each ∗subgroup
$K\in M$ the coarse subgroupoid induced on $\\{A\colon\,A\sqsubseteq K\\}$ is
∗compact. Note that $\mathcal{M}(G)$ for t.d.l.c. $G$ has this property: by
van Dantzig’s theorem (that every t.d.l.c. group has an open compact subgroup)
$\mathcal{M}(G)$ is non-empty, and by definition $\mathcal{M}(G)$ consists of
compact open cosets.
We call $M$ _weakly ∗locally compact_ if for _some_ ∗subgroup $K\in M$ the
coarse subgroupoid induced on $\\{A\colon\,A\sqsubseteq K\\}$ is ∗compact.
###### Proposition 4.21.
Let $M$ be a coarse groupoid. Then
$M$ is weakly ∗locally compact $\Leftrightarrow$ $\mathcal{F}(M)$ is locally
compact.
###### Proof.
$\Leftarrow$: By van Dantzig’s theorem, $\mathcal{F}(M)$ has a compact open
subgroup $\mathcal{L}$. Let $K\in M$ be a ∗subgroup such that
$\widehat{K}\subseteq\mathcal{L}$. As above let $M_{K}$ be the coarse
subgroupoid of $M$ on $\\{A\colon\,A\sqsubseteq K\\}$. Then
$\mathcal{F}(M_{K})\cong\widehat{K}$. So $\mathcal{F}(M_{K})$ is compact.
Hence $M_{K}$ is ∗compact by Prop. 4.19. $\Rightarrow$: Let $M$ be weakly
∗locally compact via $K$. Then $\mathcal{F}(M_{K})\cong\widehat{K}$ is
compact. Since $\widehat{K}$ is an open subgroup of $\mathcal{F}(M)$ this
makes $\mathcal{F}(M)$ locally compact. ∎
###### Example 4.22.
(a) If $G$ is a countable discrete group, then $\mathcal{M}(G)$ consists of the
isomorphisms between finite subgroups.
(b) Let $G=(\mathbb{Q}_{p},+)$. The proper open subgroups are compact, and are
all of the form $U_{r}=p^{r}{\mathbb{Z}}_{p}$ for some $r\in{\mathbb{Z}}$. In
this abelian setting each morphism is an endomorphism. The group $G_{r}$ of
endomorphisms $A\colon\,U_{r}\to U_{r}$ has the structure of $C_{p^{\infty}}$
(the direct limit of the cyclic groups $C_{p^{n}}$ with the canonical
embeddings). Let $f(x)=px$ for $x\in C_{p^{\infty}}$ and view $f$ as a map
$G_{r}\to G_{r+1}$. Then for $A\in LC(U_{r}),B\in LC(U_{r+1})$, the ordering
relation $A\sqsubseteq B$ is equivalent to $f(A)=B$. We see that the coarse
groupoid resembles a diagram for a profinite group, except that it records not
only passage to closer approximations of elements (backwards), but also to
coarser ones (forwards).
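The direct-limit picture in (b) can be checked concretely. The sketch below is an illustration only (it takes $p=3$ as an assumed example and represents elements of $C_{p^{\infty}}=\mathbb{Z}[1/p]/\mathbb{Z}$ as fractions $a/p^{k}$ modulo 1); it verifies that $f(x)=px$ is a homomorphism mapping each level $C_{p^{k}}$ onto $C_{p^{k-1}}$ with kernel of order $p$:

```python
from fractions import Fraction

p = 3  # an assumed example prime

def level(k):
    """The cyclic subgroup C_{p^k} of C_{p^infty} = Z[1/p]/Z,
    with elements represented as fractions a/p^k taken modulo 1."""
    return {Fraction(a, p**k) % 1 for a in range(p**k)}

def f(x):
    """The map f(x) = p*x of the example (multiplication by p, mod 1)."""
    return (p * x) % 1

# f is a homomorphism of C_{p^infty}:
for x in level(3):
    for y in level(3):
        assert f((x + y) % 1) == (f(x) + f(y) ) % 1

# f maps C_{p^3} onto C_{p^2} with kernel of order p, so each level of the
# direct limit is a p-fold cover of the previous one.
assert {f(x) for x in level(3)} == level(2)
assert len({x for x in level(3) if f(x) == 0}) == p
```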
(c) $G_{d}=\operatorname{Aut}(T_{d})$ for $d\geq 2$. This is the group of
automorphisms of the $d$-regular tree $T_{d}$, viewed as an undirected graph
without a specified root; these groups were first studied by Tits. It is known
that each proper open subgroup is compact. Each compact subgroup (open or not)
is contained in the stabilizer of a vertex or of an edge, and these
stabilizers are compact open. See [12, p. 12]. It would be interesting to describe more of the
structure of $\mathcal{M}(G_{d})$.
Recall that in the locally compact setting, the coarse groupoid
$\mathcal{M}(G)$ has as a domain the compact open cosets of $G$. We replace
Axiom CC from the compact setting by a variant that works in the more general
setting.
Axiom CLC. Let $M$ be a ∗locally compact coarse groupoid. Let $N$ be a
∗subgroup of $M$.
If a finite set $\mathcal{S}\subseteq LC(N)$ is closed under products and
inverses, then there is a ∗subgroup $U$ such that $A\sqsubseteq
U\leftrightarrow A\in\mathcal{S}$ for each $A\in LC(N)$.
Clearly, if $G$ is t.d.l.c. then $\mathcal{M}(G)$ satisfies this axiom. We
verify that the axiom characterizes the ∗locally compact coarse groupoids
obtained in this way.
###### Proposition 4.23.
Let $L$ be a ∗locally compact coarse groupoid satisfying Axiom CLC. Then
$L\cong\mathcal{M}(\mathcal{F}(L))$ via the map $A\mapsto\widehat{A}$.
###### Proof.
As in Prop. 4.20, by Claim 4.13 it suffices to show that the map
$A\mapsto\widehat{A}$ is onto.
Firstly let $\mathcal{U}$ be a compact open subgroup of $\mathcal{F}(L)$.
There is $W\in L$ such that $\widehat{W}\subseteq\mathcal{U}$. Let
$L_{\mathcal{U}}=\\{A\in L\colon\,\widehat{A}\subseteq\mathcal{U}\\}$.
Clearly $L_{\mathcal{U}}$ is a coarse subgroupoid of $L$. In
$L_{\mathcal{U}}$, $RC(W)$ is finite, so by Prop. 4.7 there is $N\sqsubseteq
W$ such that $\mathcal{S}:=LC(N)=RC(N)$ in $L_{\mathcal{U}}$. Clearly the
hypothesis of Axiom CLC applies to $\mathcal{S}$, so we get a ∗subgroup $U$.
Then $\widehat{U}=\mathcal{U}$.
Secondly, given a left coset $\mathcal{B}$ of an open subgroup $\mathcal{U}$
in $\mathcal{F}(L)$, by Claim 4.15 we have $\mathcal{B}=\widehat{B}$ for some
$B\in LC(U)$ as above. ∎
#### $\mathcal{M}$ and $\mathcal{F}$ as functors
We will view closed subgroups of $S(\omega)$ (also called non-Archimedean
groups) as a category where the morphisms $f\colon G\to H$ are the continuous
epimorphisms. The kernel of $f$ is a closed normal subgroup. If $G$ is
compact/locally compact then $H$ has the same property.
Covariant functor, for the locally compact case. The open mapping theorem for
Hausdorff topological groups states that a surjective continuous homomorphism
of a $\sigma$-compact group onto a Baire (e.g., a locally compact) group is an
open mapping. (This is proved using Baire category.) Each (separable) t.d.l.c.
group is $\sigma$-compact, again by van Dantzig’s theorem. So if $G$ is locally compact
and $f\colon G\to H$ is onto, then for each compact open $A\subseteq G$, the
image $f(A)$ is open (and of course compact) in $H$.
On t.d.l.c. groups with continuous epimorphisms we can now view the operator
$\mathcal{M}$ as a covariant functor $\mathcal{M}_{+}$. If $f\colon G\to H$ is
such a morphism, then $\mathcal{M}_{+}(f)\colon\mathcal{M}(G)\to\mathcal{M}(H)$ is given by
$A\mapsto f(A)$. Here we view coarse groupoids as a weak category
$\mathcal{C}_{weak}$ where the morphisms $r\colon\,M\to N$ preserve the
groupoid structure and the partial order in the forward direction only.
Contravariant functor. If $f\colon G\to H$ is an epimorphism of non-
Archimedean groups, then we have a map
$\mathcal{M}_{-}(f)\colon\,\mathcal{M}(H)\to\mathcal{M}(G)$, where
$\mathcal{M}_{-}(f)(A)=f^{-1}(A)$. This clearly preserves the inductive
groupoid structure. We want to identify the right type of morphisms on coarse
groupoids so that $\mathcal{M}$ is a contravariant functor with inverse
$\mathcal{F}$ (suitably extended to morphisms) for the classes of groups of
interest. Consider a map $\mathcal{M}_{-}(f)$ above and let
$R\subseteq\mathcal{M}(G)$ be its range. $R$ consists of all open [compact]
cosets of subgroups of $G$ that contain the kernel of $f$. So,
* •
$R$ is closed upwards, and
* •
$R$ is closed under conjugation of subgroups. Thus if $\exists A[_{U}A_{V}]$
and $U\in R$ then $V\in R$.
So we have to take the strong category $\mathcal{C}_{str}$ with objects the
(countable) coarse groupoids and with morphisms $q\colon\,N\to M$ that
preserve the groupoid and the meet semilattice structure (in particular, they
preserve the ordering and hence have to be $1-1$) and have range with the two
properties above. For such a $q\colon\,N\to M$ we can define
$\mathcal{F}(q)\colon\,\mathcal{F}(M)\to\mathcal{F}(N)$ by $x\mapsto
q^{-1}(x)$.
###### Claim 4.24.
Suppose $q\colon N\to M$ is a morphism in $\mathcal{C}_{str}$.
Then $\mathcal{F}(q)\colon\mathcal{F}(M)\to\mathcal{F}(N)$ is a continuous
epimorphism.
### 5\. Nies: Closed subgroups of $S(\omega)$ generated by their permutations
of finite support
Let $SF(\omega)$ denote the group of permutations of $\omega$ that have
finite support. We note that $SF(\omega)$ plays a role in $S(\omega)$ similar
to the role that $\mathbb{Q}$ plays in ${\mathbb{R}}$. Clearly $SF(\omega)$ is
dense in $S(\omega)$. The inherited topology has as a basis of neighbourhoods
of $1$ the open subgroups $SF(\omega)\cap U_{n}$ of $SF(\omega)$, where
$U_{n}\leq_{o}S(\omega)$ is the pointwise stabilizer of $\\{0,\ldots,n\\}$.
For example the subgroup $T$ algebraically generated by
$\\{(01)(2n\,2n+1)\colon n\geq 1\\}$ of $SF(\omega)$ is not closed: the
generators converge to $(01)$, so $(01)$ lies in the closure of $T$, but a
parity argument shows $(01)\not\in T$.
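The convergence underlying this example can be checked directly. Below is a small sketch (an illustration only; finite-support permutations are represented as Python dicts that record only the moved points) verifying that the generators $(01)(2n\,2n{+}1)$ converge to $(01)$ pointwise, while a product of two distinct generators cancels the $(01)$ part:

```python
def compose(f, g):
    """Composition f∘g of finite-support permutations given as dicts
    mapping each moved point to its image (omitted points are fixed)."""
    pts = set(f) | set(g)
    h = {x: f.get(g.get(x, x), g.get(x, x)) for x in pts}
    return {x: y for x, y in h.items() if x != y}

def gen(n):
    """The generator (0 1)(2n 2n+1) of T."""
    return {0: 1, 1: 0, 2 * n: 2 * n + 1, 2 * n + 1: 2 * n}

target = {0: 1, 1: 0}  # the transposition (0 1)

# gen(n) agrees with (0 1) on {0, ..., 2n-1}, so gen(n) -> (0 1) in the
# topology of pointwise convergence; hence (0 1) is in the closure of T.
for n in range(1, 50):
    assert all(gen(n).get(x, x) == target.get(x, x) for x in range(2 * n))

# A product of two distinct generators cancels the (0 1) part:
assert compose(gen(1), gen(2)) == {2: 3, 3: 2, 4: 5, 5: 4}
```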
Given a closed subgroup $G$ of $S(\omega)$, it is of interest to find out how
much of $G$ can be recovered from the closed subgroup $G\cap SF(\omega)$ of
$SF(\omega)$. B. Majcher-Iwanow [28] called $G\cap SF(\omega)$ the _finitary
shadow_ of $G$. Often $G\cap SF(\omega)$ will be trivial, for instance if $G$
is torsion free.
Note that a closed subgroup $G$ of $S(\omega)$ is compact iff each $G$-orbit
is finite. Majcher–Iwanov [28] studied the distribution of finitary shadows of
compact subgroups of $S(\omega)$ within the subgroup lattice of $SF(\omega)$,
and made connections to cardinal characteristics.
One can also start from a subgroup $H$ of $SF(\omega)$ and study how it is
related to its closure $G=\overline{H}$ in $S(\omega)$. (Closures will be
taken in $S(\omega)$ unless otherwise stated.)
Firstly, for each open subgroup $U$ of $H$, its closure $\overline{U}$ is open
in $G$. For, suppose $U_{n}\cap H\leq U$. Since $U_{n}$ is closed we have
$\overline{U_{n}\cap H}=U_{n}\cap\overline{H}$. Hence $U_{n}\cap
G\leq\overline{U}$, so $\overline{U}$ is open in $G$.
Conversely, each open subgroup of $G$ is given as the closure of the open
subgroup $L\cap H$ of $H$:
###### Lemma 5.1.
Let $H\leq SF(\omega)$ and let $G=\overline{H}$ be the closure of $H$ in
$S(\omega)$. Then $L=\overline{L\cap H}$ for each open subgroup $L$ of $G$.
###### Proof.
Since $L$ is closed in $G$, we only need to verify the inclusion $\subseteq$.
Let $g\in L$. Since $L$ is open in $G$, there is $k$ such that for each $v$,
if $gv^{-1}\in U_{k}$ then $v\in L$. Now let $n\geq k$ be arbitrary. Since
$G=\overline{H}$, there is $r\in H$ such that $gr^{-1}\in U_{n}$. So $r\in L$.
This shows $g\in\overline{L\cap H}$. ∎
###### Remark 5.2.
To summarize, there is a natural isomorphism between the open subgroups $L$
of $G$ and the open subgroups $U$ of $H$, given by $L\mapsto L\cap H$, with
inverse $U\mapsto\overline{U}$.
#### The compact case
The textbook on permutation groups by Dixon and Mortimer [7, Lemma 8.3D]
contains a proof of the following equivalences for a group $H\leq SF(\omega)$:
(i) every $H$-orbit is finite (i.e., $\overline{H}$ is compact)
(ii) $H$ only has finite conjugacy classes (such $H$ is called an FC-group)
(iii) $H$ is residually finite.
The implication (i)$\to$(ii) is pretty elementary: for each $x\in H$ there is
a finite $H$-invariant set $\Delta$ such that the support of $x$ is contained
in $\Delta$. The number of conjugates of $x$ is then bounded by the number of
conjugates of $x\mid\Delta$ (restriction to $\Delta$) in $S_{\Delta}$.
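The counting step can be illustrated on a small finite-orbit example. The sketch below uses an assumed toy instance (with $H=S_{\{0,1,2\}}\times S_{\{3,4,5\}}$, $x=(01)$, and $\Delta=\{0,1,2\}$ as the $H$-invariant set containing the support of $x$) and checks that the number of $H$-conjugates of $x$ is bounded by the number of conjugates of $x\mid\Delta$ in $S_{\Delta}$:

```python
from itertools import permutations

# H = S_{{0,1,2}} x S_{{3,4,5}} acting on {0,...,5}: all orbits are finite,
# and Delta = {0,1,2} is an H-invariant set containing the support of x.
def h_elements():
    for a in permutations(range(3)):
        for b in permutations(range(3)):
            yield tuple(list(a) + [3 + i for i in b])

def conj(h, x):
    """The conjugate h x h^{-1}, with permutations given as tuples."""
    inv = [0] * len(h)
    for i, v in enumerate(h):
        inv[v] = i
    return tuple(h[x[inv[i]]] for i in range(len(h)))

x = (1, 0, 2, 3, 4, 5)                       # the transposition (0 1)
H_conjugates = {conj(h, x) for h in h_elements()}

x_delta = (1, 0, 2)                          # the restriction x | Delta
S_delta_conjugates = {conj(h, x_delta) for h in permutations(range(3))}

# The bound used in (i) -> (ii): each H-conjugate of x is the identity off
# Delta, so it is determined by its restriction to Delta, which is an
# S_Delta-conjugate of x | Delta.
assert len(H_conjugates) <= len(S_delta_conjugates)
```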
(ii)$\to$(iii) is also easy; the remaining implication (iii)$\to$(i) is
harder. It was proved in [33, Thm. 2].
It would be interesting to determine the complexity of the topological
isomorphism relation for closed subgroups $H$ of $SF(\omega)$ that are FC-
groups. Since $H=\overline{H}\cap SF(\omega)$, where the closure is taken in
$S(\omega)$ as usual, it is Borel reducible to topological isomorphism of
profinite groups, which by Kechris et al. [25] is Borel isomorphic to
countable graph isomorphism. However, the groups employed there to reduce
graph isomorphism do not have a finitary shadow satisfying the conditions
above. [25] uses an extension to the topological setting of Mekler’s
construction to code a countable graph into a countable step 2 nilpotent group
of exponent a fixed odd prime. For Mekler’s groups to be FC, the coded graph
would have to be co-locally finite (each vertex connected to cofinitely many
vertices). But for technical reasons related to the definability of the vertex
set, Mekler’s method can only encode graphs that are triangle- and square-free,
and such graphs are of course not co-locally finite.
#### The oligomorphic case
Recall that a group $H\leq S(\omega)$ is called oligomorphic if for each $r$
its action on $\omega$ has only finitely many $r$-orbits. It is open whether
$E_{0}$ can be Borel reduced to topological isomorphism of closed oligomorphic
subgroups of $S(\omega)$. See Section 2, or [37] for background.
For $H\leq SF(\omega)$, note that $H$ is oligomorphic iff $\overline{H}$ is.
The hope was that one can reduce $E_{0}$ to topological isomorphism of closed
subgroups of $SF(\omega)$, which should be easier to control. To be useful,
this approach required an affirmative answer to the following question:
###### Question 5.3.
Is the following true?
Let $H_{0},H_{1}\leq_{c}SF(\omega)$ be oligomorphic. Then
$H_{0},H_{1}$ are homeomorphic $\Leftrightarrow$
$\overline{H}_{0},\overline{H}_{1}$ are homeomorphic.
For the implication $\Rightarrow$, we note that by Remark 5.2 the structures
of open cosets for $\overline{H}_{i}$ and $H_{i}$ are isomorphic for $i=0,1$.
That is $\mathcal{M}(\overline{H}_{i})\cong\mathcal{M}(H_{i})$. By hypothesis
$\mathcal{M}(H_{0})\cong\mathcal{M}(H_{1})$. Since the $\overline{H}_{i}$ are
closed oligomorphic subgroups of $S(\omega)$,
$\mathcal{M}(\overline{H}_{0})\cong\mathcal{M}(\overline{H}_{1})$ implies that
they are topologically isomorphic by [37].
However, the intended application does not work, because oligomorphic
subgroups $H$ of $SF(\omega)$ are very restricted, and in particular there are
only countably many up to isomorphism.
Some examples of such groups $H$ come to mind. First take permutations of
finite support preserving the evens. This group is topologically isomorphic to
$SF(\omega)\times SF(\omega)$. More generally, take a finite power of
$SF(\omega)$. Another type of example is to let $H$ be the automorphism group
of a countably infinite structure obtained by taking a finite structure $M$
and an equivalence relation $E$ with a “copy” of $M$ on each class. For
instance, take one unary function symbol $f$, and let $M$ be a finite cycle
given by $f$. If $\phi$ is the permutation that is the union of all these
cycles of fixed length, then $H$ is the centralizer of $\phi$ in $SF(\omega)$.
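A finite truncation of the last example can be computed directly. In the sketch below (an illustration only; it cuts the picture down to $S_{6}$, with $\phi=(01)(23)(45)$ playing the role of the union of cycles of length 2) the centralizer of $\phi$ is the wreath product $C_{2}\wr S_{3}$, of order $2^{3}\cdot 3!$:

```python
from itertools import permutations
from math import factorial

n = 6
phi = (1, 0, 3, 2, 5, 4)        # the union of the 2-cycles (0 1)(2 3)(4 5)

def mul(f, g):
    """Composition f∘g of permutations of {0,...,n-1} given as tuples."""
    return tuple(f[g[i]] for i in range(n))

# The centralizer of phi in S_6 is the wreath product C_2 wr S_3: an element
# may flip each 2-cycle and permute the three 2-cycles among themselves.
centralizer = [h for h in permutations(range(n)) if mul(h, phi) == mul(phi, h)]
assert len(centralizer) == 2**3 * factorial(3)   # = 48
```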
Let $R$ be the random graph and $Q$ the random linear order. In contrast,
$\operatorname{Aut}(R)$ and $\operatorname{Aut}(Q)$ have no nontrivial members
of finite support at all (not hard to check). Even $G=\operatorname{Aut}(E)$,
where $E$ is an equivalence relation with just two infinite classes, is not
the closure of $G\cap SF(\omega)$: an automorphism of finite support maps each
equivalence class to itself, and so does any limit of such automorphisms,
whereas $G$ contains a permutation interchanging the two classes. More
generally, if $R$ is a definable
relation and $\phi(a)=b$ where $\phi$ is an automorphism of finite support,
then $R(a)$ almost equals $R(b)$.
Towards a full classification of oligomorphic subgroups $H$ of $SF(\omega)$,
we use some structure theory of subgroups of $SF(\omega)$ developed in the
1970s by Peter Neumann, e.g. [33], and in two short papers by Dan Segal,
independently. Here I’m mostly using the notes on finitary permutation groups
by Chris Pinnock, available at chrispinnock.com/phdpublications/. Unless
otherwise stated, references to theorems etc. refer to those notes.
Let us begin by assuming that $H\leq SF(\omega)$ is 1-transitive. The Jordan-
Wielandt Theorem says that if such a group $H$ is primitive, then
$H=SF(\omega)$ or $H$ equals the group of alternating permutations of finite
support (which has index 2 in $SF(\omega)$). So we can assume otherwise. Let
$B$ be a block of imprimitivity. $B$ has to be finite (Lemma 2.2 in Pinnock),
simply because there is a permutation $\tau\in H$ moving $B$ to a disjoint set
$\tau(B)$, so that $B$ is contained in the support of $\tau$. The equivalence
relation $E$ with classes made up of the translates of $B$ is $H$-invariant,
and hence a union of 2-orbits. Since there are only finitely many 2-orbits,
there must be a maximal block of imprimitivity. Then $H$ acts primitively on
$\Omega/E$ and each induced permutation there has finite support. So it acts
highly transitively on it. This is the second type of example above, the same
finite model in each equivalence class of $E$. Note that $H$ is a subgroup of
$G_{0}\wr R$ where $R=SF(\omega)$ or $R$ is the alternating group (Thm 2.3).
If $H$ is not 1-transitive, we are in the case of the first example above. It
is known that we obtain only finite products of 1-transitive groups, together
with a finite group.
For a more detailed treatment, see Iwanow [22] who works in the more general
setting of automorphism groups of countable saturated structures. He uses the
concept of cell, a permutation group that preserves an equivalence relation
with all classes finite, and induces the full symmetric group on the
equivalence classes. A 2-cell is a permutation group $G\leq_{c}S(\omega)$ such
that there exists a partition of $\omega$ into $G$-invariant classes $Y_{i}$
such that for any infinite $Y_{i}$ the group induced by $G$ on $Y_{i}$ is a
cell and is also induced by the pointwise stabilizer of the complement of
$Y_{i}$. Each oligomorphic 2-cell is a finite product of cells and a finite
algebraic closure of the empty set. Their number is countable as already
mentioned. Each cell is a finite cover of a pure set in the sense of Evans,
Macpherson, Ivanov, Finite covers, 1997. These covers have been classified.
### 6\. Harrison-Trainor and Nies: $\Pi_{r}$-pseudofinite groups
The authors of this post worked at the (now defunct) Research Centre
Coromandel in July. They started from some notes of Dan Segal (2014), which in
turn are based on [20]. The main concept in these notes is this.
###### Definition 6.1.
A group $G$ is called _pseudofinite_ if each first-order sentence true in $G$
also holds in some finite group.
In the first two sections we present a simplified account of some material in
the Segal notes. Our account is somewhat more general than the notes, firstly
as it takes into account the quantifier complexity of the sentences for which
a group needs to be pseudofinite, and secondly because a lot of this work can
be carried out in the setting of an arbitrary finitely axiomatised variety of
algebraic structures.
Throughout, let $G$ be an algebraic structure in a language $L$ with finitely
many function and constant symbols, and no relation symbols besides equality.
The $\Pi_{0}$ and the $\Sigma_{0}$ sentences are the quantifier free ones,
thought of as disjunctions of conjunctions of equalities and inequalities.
Note that satisfaction of $\Pi_{1}$ sentences is closed under taking
substructures.
###### Definition 6.2.
* (i)
For $r\in{\mathbb{N}}$ we say that an $L$-structure $G$ is _$\Sigma_{r}$
-pseudofinite_ if $G$ is a model of the $\Sigma_{r}$-theory of finite
$L$-structures. Equivalently, each $\Pi_{r}$ sentence that holds in $G$ also
holds in some finite $L$-structure.
* (ii)
Similarly, we define _$\Pi_{r}$ -pseudofiniteness_ of an $L$-structure $G$.
###### Definition 6.3.
We say that an $L$-structure $G$ has _named generators_ if $G$ is
$d$-generated for some $d\geq 1$, and the language $L$ contains finitely many
constant symbols $\overline{c}=(c_{1},\ldots,c_{d})$ naming such generators of
$G$.
In this case witnesses for outermost existential quantifiers can be named by
terms. So we have:
###### Fact 6.4.
Let $G$ be an $L$-structure with named generators. Let $\widetilde{G}$ be the
structure with the constants naming the generators omitted. Let $r\geq 1$. The
following are equivalent.
(i) $\widetilde{G}$ is $\Pi_{r+1}$-pseudofinite
(ii) $G$ is $\Pi_{r+1}$-pseudofinite
(iii) $G$ is $\Sigma_{r}$-pseudofinite.
###### Proof.
(ii)$\to$(i) and (i)$\to$(iii) are straightforward, and don’t rely on the fact
that the $c_{i}$ name generators.
For (iii)$\to$(ii), suppose a sentence
$\theta\equiv\exists\overline{x}\,\xi(\overline{x})$ holds in $G$, where
$\overline{x}$ denotes $(x_{1},\ldots,x_{m})$ and $\xi(\overline{x})$ is
$\Pi_{r}$. Since the $c_{i}$ name generators, we can pick $L$-terms $t_{j}$
without free variables as witnesses. The sentence $\xi(t_{1},\ldots,t_{m})$ is
$\Pi_{r}$ and holds in $G$. So it holds in some finite $L$-structure. Hence
$\theta$ holds in that structure. ∎
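The substitution of closed terms for witnesses can be seen in a toy case. The sketch below uses an assumed example not from the text ($G=(\mathbb{Z}/6,+,0)$ with a named generator $c=1$) and finds a closed term naming a witness for an existential sentence:

```python
# G = (Z/6, +, 0) with a named generator c = 1 (a hypothetical toy structure).
# Every element of G is the value of a closed term c + c + ... + c, so a
# witness for an outermost existential quantifier can be named by a term.
m, c = 6, 1

def term_value(k):
    """Value of the closed term consisting of k summands c."""
    return (k * c) % m

# The sentence "exists x: x + x + x = 0 and x != 0" holds in G via x = 2:
witness = 2
assert (3 * witness) % m == 0 and witness != 0

# Name the witness by a closed term, as in the proof of (iii) -> (ii):
k = next(k for k in range(1, m + 1) if term_value(k) == witness)
assert k == 2     # the term c + c names the witness
```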
The $\Sigma_{1}$-theory of finite groups equals the $\Sigma_{1}$-theory of the
trivial group, which is decidable. Every group satisfies the
$\Sigma_{1}$-theory of the trivial group and hence is
$\Sigma_{1}$-pseudofinite. So the notion of $\Sigma_{1}$-pseudofiniteness is
really only meaningful for groups with named generators.
In contrast, the $\Pi_{1}$-theory of finite groups is hereditarily undecidable
by a result of Slobodskoi [46]. However, it has infinitely many decidable
completions, e.g. the theory of $({\mathbb{Z}}^{n},+)$, for each $n\geq 1$. To
see this, let $H$ be the ultraproduct of all cyclic groups of prime order. $H$
is abelian, torsion free, not f.g. and satisfies the theory of finite groups.
Any subgroup of $H$ satisfies the $\Pi_{1}$-theory of $H$. To see that $H$ has
infinite rank, let $R_{i}$, $i\in{\mathbb{N}}$ be disjoint infinite sets of
primes, let $f_{i}(p)=1$ for $p\in R_{i}$ and $0$ otherwise. Then the
$[f_{i}]$ are linearly independent over ${\mathbb{Z}}$.
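The linear independence argument can be spot-checked numerically. The sketch below is an illustration (it splits the primes below 1000 into two disjoint sets standing in for $R_{0},R_{1}$; the split by position is an arbitrary choice) and verifies that a nontrivial combination $a_{0}f_{0}+a_{1}f_{1}$ is nonzero mod $p$ for every sufficiently large $p\in R_{0}$, hence nonzero in the ultraproduct:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

primes = [n for n in range(2, 1000) if is_prime(n)]
# Two disjoint sets of primes standing in for R_0, R_1 (an arbitrary split):
R = [set(primes[0::2]), set(primes[1::2])]

def f(i, p):
    """f_i(p) = 1 if p is in R_i, and 0 otherwise."""
    return 1 if p in R[i] else 0

# For p in R_0 with p > |a_0|, the combination a_0*f_0 + a_1*f_1 has value
# a_0 mod p, which is nonzero; so [f_0], [f_1] are independent over Z.
a0, a1 = 5, -7
bad = [p for p in R[0] if p > abs(a0) and (a0 * f(0, p) + a1 * f(1, p)) % p == 0]
assert bad == []
```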
#### 6.1. $\Sigma_{1}$-embedding of $G$ into an ultraproduct of witness
structures
The following construction is based on Houcine and Point [20]. We fix some
variety $\mathcal{V}$ of $L$-structures, such as groups. Suppose $G$ is as in
Definition 6.3, and $G$ is $\Sigma_{1}$-pseudofinite. Let
${\left\langle{\phi_{i}^{\prime}}\right\rangle}_{i\in{\mathbb{N}}}$ be a list
of the $\Pi_{1}$-sentences that hold in $G$, with $\phi_{0}^{\prime}$ being
the conjunction of the finitely many axioms for $\mathcal{V}$. Let
$\phi_{n}=\bigwedge_{i\leq n}\phi^{\prime}_{i}$. By hypothesis on $G$ there is
a finite $L$-structure $E_{n}$ (called a witness structure) such that
$E_{n}\models\phi_{n}$. Since we are only considering $\Pi_{1}$-sentences, we
may assume that each $E_{n}$ is generated by $\overline{c}^{E_{n}}$.
Let $\mathcal{U}$ be a free ultrafilter on ${\mathbb{N}}$. Let
$E=\prod_{n}E_{n}/\mathcal{U}$ be the corresponding ultraproduct. Define an
embedding $\beta\colon G\to E$ by setting for each $L$-term $s$
$\beta(s(\overline{c}^{G}))=s(\overline{c}^{E})$.
Note that $\beta$ is well-defined and $1-1$: for all terms $s,t$, if
$G\models s(\overline{c})=t(\overline{c})$ then the sentence
$s(\overline{c})=t(\overline{c})$ equals $\phi^{\prime}_{i}$ for some $i$, and
so $E_{n}\models s(\overline{c})=t(\overline{c})$ for each $n\geq i$. One
argues similarly for the case $G\models s(\overline{c})\neq t(\overline{c})$.
So $\beta$ is a monomorphism of $L$-structures.
###### Fact 6.5.
Suppose an $L$-structure $G$ with named generators as in Definition 6.3 is
$\Sigma_{1}$-pseudofinite. The map $\beta\colon G\to E$ preserves satisfaction
of $\Sigma_{1}$-formulas in both directions. Thus, for each
$p_{1},\ldots,p_{k}\in G$ and each quantifier free $L$-formula
$\theta(x_{1},\ldots,x_{k},\tilde{y})$,
$G\models\exists\tilde{y}\ \theta(p_{1},\ldots,p_{k},\tilde{y})\Leftrightarrow
E\models\exists\tilde{y}\ \theta(\beta(p_{1}),\ldots,\beta(p_{k}),\tilde{y})$.
###### Proof.
We may incorporate the parameters into $\theta$, so we may assume that there
are none. Let $\phi\equiv\exists\tilde{y}\ \theta(\tilde{y})$. For the
nontrivial implication suppose that $E\models\phi$. If $G\models\lnot\phi$,
then $E_{n}\models\lnot\phi$ for almost all $n$, contradiction. So
$G\models\phi$. ∎
#### 6.2. The number of factors needed for being in a verbal subgroup
In this section we only consider the case that $G$ is a group. We will explain
the result of Segal that if $G$ is $\Pi_{2}$-pseudofinite then the verbal
subgroups $G^{(n)}$ and $\langle G^{q}\rangle$, $n,q\geq 1$, are definable in
$G$ by a positive $\Sigma_{1}$ formula.
Below $G$ satisfies Definition 6.3 where $d$ denotes the number of generators.
The language $L$ consists of symbols for the group operations, the neutral
element, and constants $c_{1},\ldots,c_{d}$ naming the generators.
###### Definition 6.6.
Let $d,r\geq 1$. Let $w$ be a term with free variables $x_{1},\ldots,x_{k}$ in
the language of group theory, identified with an element of the free group
$F_{k}$. We say that $w$ is _$r$ -bounded for $d$_ if for each finite
$d$-generated group $H$, we have
$w(H)=w^{*r}(H)$.
Here for any group $H$, by $w(H)$ one denotes the usual verbal subgroup which
is generated by the values of $w$, and $w^{*r}(H)$ is the set of products of
up to $r$ many values of $w$ or their inverses.
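For a small concrete instance of $w(H)=w^{*r}(H)$, the sketch below (an illustration only) computes the values of the commutator term $w=[x,y]$ in $S_{4}$ by brute force and checks that they already form the verbal subgroup $w(S_{4})=A_{4}$, so $r=1$ suffices in this case:

```python
from itertools import permutations

n = 4
elements = list(permutations(range(n)))

def mul(f, g):
    """Composition f∘g of permutations given as tuples."""
    return tuple(f[g[i]] for i in range(n))

def inv(f):
    r = [0] * n
    for i, v in enumerate(f):
        r[v] = i
    return tuple(r)

# All values of the term w = [x, y] = x^{-1} y^{-1} x y in S_4:
values = {mul(mul(inv(x), inv(y)), mul(x, y)) for x in elements for y in elements}

# The verbal subgroup w(S_4) is generated by these values; close under products
# (values already contains the identity and is closed under inverses, since
# [x, y]^{-1} = [y, x]).
subgroup = set(values)
while True:
    products = {mul(a, b) for a in subgroup for b in subgroup}
    if products <= subgroup:
        break
    subgroup |= products

assert len(subgroup) == 12      # w(S_4) = A_4
assert subgroup == values       # every element of w(S_4) is a single commutator
```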
Nikolov and Segal [40, Thm 1.7], or [41, Thm. 1.2], show that this holds for
the terms $x^{q}$, where $q\geq 1$, and iterated commutators such as $[x,y]$
or $[[x,y],[z,w]]$. For such terms, the result below (for pseudofinite groups)
was established in [20, Prop. 3.8], but an additional hypothesis was needed
for the case of terms $x^{q}$.
###### Proposition 6.7.
Suppose a $d$-generated group $G$ with named generators as in Definition 6.3
is $\Sigma_{1}$-pseudofinite w.r.t. the language $L$. Suppose further that $w$
is $r$-bounded for $d$. Then $w(G)=w^{*r}(G)$.
By Fact 6.4 the hypothesis holds if $G$ is $\Pi_{2}$-pseudofinite w.r.t. the
language of group theory. In this case $w(G)$ is $\Sigma_{1}$-definable
without parameters in that language.
###### Proof.
Write $g_{i}=c_{i}^{G}$ and $\overline{g}=(g_{1},\ldots,g_{d})$. Suppose
$t(\overline{g})\in w(G)$ for some term $t$. That is, for some $m$, there are
$k$-tuples $\overline{z}_{j}$ of elements of $G$ and $\epsilon_{j}=-1,1$, for
$1\leq j\leq m$, such that
$t(\overline{g})=\prod_{1\leq j\leq m}w(\overline{z}_{j})^{\epsilon_{j}}$.
Then $E$ satisfies the $\Sigma_{1}$-sentence
$\exists\overline{z}_{1}\ldots\exists\overline{z}_{m}\,[t(\overline{c})=\prod_{1\leq
j\leq m}w(\overline{z}_{j})^{\epsilon_{j}}]$, and hence the ultrafilter
$\mathcal{U}$ contains the set $S$ of those $n$ such that $E_{n}$ satisfies
this sentence. By hypothesis on $w$ and since $E_{n}$ is $d$-generated by the
$c_{i}^{E_{n}}$, for each $n\in S$ there is a choice of $\eta_{\ell}=-1,1$,
for $1\leq\ell\leq r$, such that
$E_{n}\models\exists\overline{y}_{1}\ldots\exists\overline{y}_{r}\,[t(\overline{c})=\prod_{1\leq
j\leq r}w(\overline{y}_{j})^{\eta_{j}}]$.
Since $\mathcal{U}$ is an ultrafilter, there must be one choice
$(\eta_{\ell})_{1\leq\ell\leq r}$ so that the $n$ where this choice applies
form a set in $\mathcal{U}$. Hence for this choice we have
$E\models\exists\overline{y}_{1}\ldots\exists\overline{y}_{r}\,[t(\overline{c})=\prod_{1\leq
j\leq r}w(\overline{y}_{j})^{\eta_{j}}]$.
By Fact 6.5 this implies that $t(\overline{g})\in w^{*r}(G)$ as required. ∎
As Houcine and Point point out [20, Lemma 2.11], parameter definable quotients
and subgroups of pseudofinite groups are again pseudofinite. Also, finitely
generated pseudofinite groups that are of fixed exponent, or soluble, are
finite [20, Prop. 3.9]. So $G$ is an extension of a pseudofinite group (the
verbal subgroup given by a term as above) by a finite group satisfying the law
given by that term.
#### 6.3. Effectiveness considerations
We show that a variant of the basic construction above leading to Fact 6.5 is
effective using the structure $G$ as an oracle.
_Effective ultraproducts._ Given a sequence
${\left\langle{E_{n}}\right\rangle}$ of uniformly computable structures for
the same finite signature, and a non-principal ultrafilter $\mathcal{U}$ for
the Boolean algebra of recursive sets, one can form the structure
$E=\prod_{rec}E_{n}/{\mathcal{U}}$. It consists of
$\sim_{\mathcal{U}}$-equivalence classes of recursive functions $f$ such that
$f(n)\in E_{n}$ for each $n$. Here $f\sim_{\mathcal{U}}g$ denotes that
functions $f,g$ agree on a set in $\mathcal{U}$. We interpret the relation and
functions symbols on $E$ as usual.
A version of Los’ Theorem restricted to existential formulas holds.
###### Fact 6.8.
For each formula
$\theta(x_{1},\ldots,x_{m})\equiv\exists\widetilde{y}\,\xi(x_{1},\ldots,x_{m},\widetilde{y})$
with $\xi$ quantifier free, and each $[f_{1}],\ldots,[f_{m}]\in E$,
$E\models\theta([f_{1}],\ldots[f_{m}])\Leftrightarrow R=\\{n\colon
E_{n}\models\theta(f_{1}(n),\ldots f_{m}(n))\\}\in\mathcal{U}$.
###### Proof.
Note that the set $R$ is computable. For the implication from left to right,
suppose $[h_{1}],\ldots,[h_{k}]$ are witnesses for
$E\models\theta([f_{1}],\ldots[f_{m}])$, for recursive functions
$h_{1},\ldots,h_{k}$. This shows that $R\in\mathcal{U}$ by definition of the
ultraproduct.
For the implication from right to left, note that when $n$ is in the
computable set $R$ one can search for the witnesses $w_{n}$ in $E_{n}$. So one
can define computable witness functions $h_{1},\ldots,h_{k}$ for
$E\models\theta([f_{1}],\ldots[f_{m}])$, assigning a vacuous value in $E_{n}$
in case $n\not\in R$. ∎
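The witness search in the proof can be imitated in a toy setting. The sketch below uses assumed example data (finite structures $E_{n}=\mathbb{Z}/(2n+3)$ and the formula $\exists y\,(y+y=x)$) and computes a witness function by brute-force search, assigning a vacuous value when no witness exists:

```python
# Toy witness search: finite structures E_n = Z/(2n+3) (odd moduli, so that
# doubling is onto) and the existential formula theta(x) = "exists y: y+y = x".
def size(n):
    return 2 * n + 3

def witness(n, x):
    """Search E_n for y with y + y = x; return the vacuous value 0 if none."""
    m = size(n)
    for y in range(m):
        if (y + y) % m == x % m:
            return y
    return 0

# In Z/m with m odd every element is a double, so the search always succeeds,
# giving computable witness functions as in the proof above.
for n in range(50):
    m = size(n)
    for x in range(m):
        assert (2 * witness(n, x)) % m == x
```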
To obtain an effective version of Fact 6.5, let $\psi_{i}$ be an effective
list of all the $\Pi_{1}$ sentences in $L$, where $\psi_{0}$ is the
conjunction of the laws of $\mathcal{V}$. We modify the construction of the
$E_{n}$ as follows. Given $n$, look for the least stage $s$ and a finite
$L$-structure, called $E_{n}$, such that $E_{n}$ satisfies each $\psi_{i}$,
$i\leq n$, that has not been shown to fail in $G$ by stage $s$, in the sense
that a counterexample has been found among the first $s$ elements of $G$.
Now define the restricted ultraproduct $E$ as above. The argument for the fact
can be carried out as before: Suppose that $E\models\phi$ for an existential
sentence $\phi$. If $G\models\lnot\phi$, then $E_{n}\models\lnot\phi$ for
almost all $n$. This contradicts the weak version of Los’ theorem above.
The argument of Prop 6.7 also works with this restricted ultraproduct. Note
that the set $S$ in the proof of Prop 6.7 is computable since the $E_{n}$ are
finite and given by strong indices. The set of $n$ for which a particular
choice of $\eta_{\ell}$ works is also computable.
The upshot: if $G$ is computable, then we have a canonical ultraproduct
version $E$ of $G$, with a $\Sigma_{1}$ elementary embedding, and this version
$E$ is in a sense effective as well, except for the ultrafilter, which is
necessarily high in a sense specified and proved in a 2020 preprint by Lempp,
Miller, Nies and Soskova available at
www.cs.auckland.ac.nz/research/groups/CDMTCS/researchreports/download.php?selected-id=769.
## Part II Computability theory and randomness
### 7\. Greenberg, Nies and Turetsky: Characterising SJT reducibility
#### 7.1. The basic concepts
##### 7.1.1. SJT-reducibility and its equivalents
SJT-reducibility was introduced in [35, Exercise 8.4.37].
###### Definition 7.1 (Main: SJT-reducibility).
For an oracle $B$, a $B$–c.e. trace is a u.c.e. in $B$ sequence
${\left\langle{T_{n}}\right\rangle}_{n\in{\mathbb{N}}}$ of finite sets. For a
function $h$, such a trace is $h$-bounded if $|T_{n}|\leq h(n)$ for each $n$.
A set $A$ is jump-traceable if there is a computably bounded $\emptyset$–c.e.
trace ${\left\langle{T_{n}}\right\rangle}_{n\in{\mathbb{N}}}$ such that
$J^{A}(n)$ is in $T_{n}$ if it is defined.
For sets $A,B$, we write $A\leq_{SJR}B$ if for each order function $h$, there
is a $B$–c.e., $h$-bounded trace for $J^{A}$.
This is transitive by an argument similar to [34, Theorem 3.3].
(Question: Is “strongly superlow” reducibility equivalent to $\leq_{SJR}$?
This was also mentioned in [35, Exercise 8.4.37], there shown to be at least
as strong.)
###### Proposition 7.2.
For each $K$-trivial set $B$, there is a c.e. $K$-trivial set $A$ such that
$A\not\leq_{SJR}B$.
###### Proof.
For some fixed computable function $h$ there is a functional $\Psi$ and
$K$-trivial set $A$ such that $\Psi^{A}$ has no c.e. trace bounded by $h$, by
a result of [6]. (Also see [35, 8.5.1] where $h(n)=0.5\log\log n$ works.)
Encoding $\Psi$ into $J$, we get a fixed computable function $g$ so that the
statement holds for $J$ and $g$ instead of $\Psi$ and $h$. Relativizing to
$B$ we can retain the same $g$, so for each $B$ there is a set $A$, $K$-trivial
relative to $B$, such that $J^{A\oplus B}$ does not have a $B$-c.e. trace
bounded by $g$.
In particular, $A\not\leq_{SJR}B$.
If $B$ is $K$-trivial, it is low for $K$, and hence $A$ is also $K$-trivial.
Finally, there is a $K$-trivial c.e. set $\widehat{A}\geq_{T}A$, so we can make
$A$ c.e. ∎
###### Definition 7.3.
Let $c$ be a cost function. For sets $A,B$, we write $A\models_{B}c$ if there
is a $B$-computable enumeration of ${\left\langle{A_{s}}\right\rangle}$
satisfying $c$.
###### Definition 7.4 (Benign cost functions).
A cost function $\mathbf{c}$ is _benign_ [16] if from a rational $\epsilon>0$,
we can compute a bound on the length of any sequence $n_{1}<s_{1}\leq
n_{2}<s_{2}\leq\cdots\leq n_{\ell}<s_{\ell}$ such that
$\mathbf{c}(n_{i},s_{i})\geq\epsilon$ for all $i\leq\ell$. For example,
$\mathbf{c}_{\Omega}$ is benign, with the bound being $1/\epsilon$.
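To see the bound $1/\epsilon$ in action, here is a small sketch (not part of the post; we stand in for the approximation of $\Omega$ by the toy computable nondecreasing sequence $\Omega_{s}=\sum_{k<s}2^{-(k+1)}<1$, and use exact rationals to avoid floating-point ties):

```python
from fractions import Fraction

# Toy illustration of benignity: c(n, s) = Omega_s - Omega_n, where Omega_s is
# a computable nondecreasing approximation bounded by the total measure 1.

def omega_approx(s):
    return sum(Fraction(1, 2 ** (k + 1)) for k in range(s))

def cost(n, s):
    return omega_approx(s) - omega_approx(n)

def longest_chain(eps, horizon):
    """Greedily build n_1 < s_1 <= n_2 < s_2 <= ... with cost(n_i, s_i) >= eps."""
    length, n = 0, 0
    while n < horizon:
        s = n + 1
        while s <= horizon and cost(n, s) < eps:
            s += 1
        if s > horizon:
            break
        length += 1
        n = s  # the next n_i may equal the previous s_i
    return length

# The intervals (n_i, s_i] are disjoint and Omega grows by at least eps on
# each, so at most 1/eps of them fit below the total measure 1.
for eps in (Fraction(1, 2), Fraction(1, 4), Fraction(1, 8)):
    assert longest_chain(eps, 60) <= 1 / eps
```

The greedy search here is just one way to probe for long chains; the point is that any chain of $\epsilon$-costly pairs occupies disjoint increments of the approximation.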
Conjecture (strengthening the proposition above): for each benign $c$ and each
$B\models c$, there is a c.e. $A\models c$ such that $A\not\leq_{SJR}B$.
Conjecture: for each noncomputable c.e. set $E$ there are c.e.
$\leq_{SJR}$-incomparable sets $A,B\leq_{T}E$.
Fact: $\leq_{SJR}$ is $\Sigma^{0}_{3}$ on the $K$-trivials. Density might be
easy on the $K$-trivials using this.
##### 7.1.2. Two relevant randomness notions
###### Definition 7.5 (Demuth randomness).
A _Demuth test_ is a sequence
${\left\langle{G_{m}}\right\rangle}_{m\in{\mathbb{N}}}$ of open subsets of
$2^{{\mathbb{N}}}$ such that $\mathbf{\lambda}G_{m}\leq 2^{-m}$ and there is
an $\omega$-c.a. function $p$ such that $G_{m}=[W_{p(m)}]^{\prec}$. Since the
function $p$ is $\omega$-c.a., there are a computable approximation function
$p(m,s)$ and a computable bound $b$ such that $\lim_{s}p(m,s)=p(m)$ and the
number of changes is bounded by $b(m)$. One writes $G_{m}[t]$ for
$[W_{p(m,t),t}]^{\prec}$, the approximation of the $m$-th component at stage
$t$. One may assume that $\mathbf{\lambda}G_{m}[t]\leq 2^{-m}$ for each $t$ by
cutting off the enumeration of $G_{m}$ when it attempts to exceed that
measure. One says that $Z$ is _Demuth random_ if for each such test, one has
$Z\not\in G_{m}$ for almost every $m$.
###### Definition 7.6 (Weak Demuth randomness).
A _nested Demuth test_ is a Demuth test
${\left\langle{G_{m}}\right\rangle}_{m\in{\mathbb{N}}}$ such that
$G_{m}\supseteq G_{m+1}$ for each $m$. One says that $Z$ is _weakly Demuth
random_ if $Z\not\in\bigcap_{m}G_{m}$ for each nested Demuth test
${\left\langle{G_{m}}\right\rangle}_{m\in{\mathbb{N}}}$. Replacing $G_{m}[t]$
by $\bigcap_{i\leq m}G_{i}[t]$ (and noting that the number of changes remains
computably bounded), one may assume that the approximations are nested at each
stage $t$, i.e., $G_{m}[t]\supseteq G_{m+1}[t]$ for each $m$.
#### 7.2. Equivalent characterizations of $\leq_{SJR}$
###### Theorem 7.7.
The following are equivalent for $K$-trivial c.e. sets $A,B$.
* (a)
$A\leq_{SJR}B$
* (b)
$A\models_{B}\mathbf{c}$ for every benign cost function $\mathbf{c}$
* (c)
$A\leq_{T}B\oplus Y$ for each ML-random set $Y$ that is not weakly Demuth
random
* (d)
$A\leq_{T}B\oplus Y$ for each ML-random set $Y\in\mathcal{C}$, where
$\mathcal{C}$ is the class of the $\omega$-c.a., superlow, or superhigh sets.
We remark that the implication (b) $\Rightarrow$ (a) was essentially obtained
by Greenberg and Nies [16, Prop. 2.1]. They proved the following. Let $A$ be a
c.e., jump-traceable set, and let $h$ be an order function. Then there is a
benign cost function $c$ such that if $A$ obeys $c$, then $J^{A}$ has a c.e.
trace which is bounded by $h$. Suppose now that $A\models_{B}c$, and apply the
argument in the proof of [16, Prop. 2.1] to a $B$-computable enumeration of
$A$ witnessing this. Then the $h$-bounded trace
${\left\langle{T_{n}}\right\rangle}$ for $J^{A}$ constructed there is c.e.
relative to $B$.
For $\mathcal{C}\subseteq\mbox{\rm{MLR}}$, let $\mathcal{C}^{\diamond}$ denote
the class of c.e. sets that are computable from every member of $\mathcal{C}$.
By the implication (a) $\Rightarrow$ (c) we obtain:
###### Corollary 7.8.
Let $\mathcal{C}$ be a nonempty class of ML-randoms that contains no weakly
Demuth random. Then $\mathcal{C}^{\diamond}$ is downward closed under
$\leq_{SJR}$.
For instance, let $\mathcal{C}=\\{\Omega_{R}\\}$ for a co-infinite computable
set $R$. This shows that the subideals of $K$-trivials considered in [14, 15]
are SJT-ideals.
We will prove the implications of the theorem in a cycle, starting from (b).
The implications (b) $\Rightarrow$ (c) and (c) $\Rightarrow$ (d) rely on
literature results, or standard methods from the literature. The implication
(d) $\Rightarrow$ (a) combines methods from [36] and [4]. The implication (a)
$\Rightarrow$ (b) uses the box promotion method; for background see [17, 18].
##### 7.2.1. Proof of (b) $\Rightarrow$ (c)
Hirschfeldt and Miller showed that for each null $\Pi^{0}_{2}$ class
$\mathcal{H}\subseteq 2^{{\mathbb{N}}}$ there is a cost function $\mathbf{c}$
such that $A\models\mathbf{c}$ and $Y\in\mbox{\rm{MLR}}\cap\mathcal{H}$
implies $A\leq_{T}Y$; see [35, proof of 5.3.15] for a proof of this otherwise
unpublished result.
Suppose that a ML-random set $Y$ fails a nested Demuth test
${\left\langle{G_{m}}\right\rangle}$. Then $Y\in\mathcal{H}=\bigcap_{m}G_{m}$
which is a $\Pi^{0}_{2}$ class. We will apply the method of Hirschfeldt and
Miller but incorporating the additional oracle set $B$, and using the
particular representation of the $\Pi^{0}_{2}$ null class by a nested Demuth
test to ensure that the cost function $\mathbf{c}$ is benign. Also see the
post on weak Demuth randomness by Kučera and Nies in [8].
Let $p(m)$ and its approximation $p(m,t)$ be as in Definition 7.6 of nested
Demuth tests. We define the benign cost function $\mathbf{c}$ as follows.
Define for $k\leq t$
$\displaystyle r(k,t)=\min\\{m\colon\,\exists s.\,k<s\leq t\,[p(m,s-1)\neq p(m,s)]\\}$
$\displaystyle V_{k,t}=\bigcup_{k\leq s\leq t}G_{r(k,s)}[s]$
$\displaystyle\mathbf{c}(k,t)=\mathbf{\lambda}V_{k,t}.$
Clearly $r(k-1,t)\leq r(k,t)$ for $k>0$, hence the $V_{k,t}$ are nested and
$\mathbf{c}(k,t)$ is nonincreasing in $k$. Similarly, $\mathbf{c}(k,t)$ is
nondecreasing in $t$. Note that if $p(m,s)$ has stopped changing by a stage
$k$, then $r(k,t)>m$ for each $t\geq k$, and hence $V_{k,t}$ is contained in
the final version $G_{m}$ of the $m$-th component. By the conventions in
Definition 7.6 above, if $\mathbf{c}(k,t)>2^{-m}$ then $p(m,s-1)\neq p(m,s)$
for some $s$ in the interval $(k,t]$. Since the number of changes of $p(m,s)$
is bounded computably in $m$, this shows that the cost function $\mathbf{c}$
is indeed benign.
Suppose now that $A\models_{B}\mathbf{c}$ via a $B$-computable approximation
${\left\langle{A_{s}}\right\rangle}$. To show $A\leq_{\mathrm{T}}Y\oplus B$
for each ML-random set $Y\in\bigcap_{m}G_{m}$, we enumerate a
Solovay test $\mathcal{S}$ relative to $B$, i.e., a uniformly $\Sigma^{0}_{1}$
sequence ${\left\langle{\mathcal{S}_{n}}\right\rangle}$ relative to $B$ such
that $\sum_{n}\mathbf{\lambda}\mathcal{S}_{n}<\infty$. At stage $s$, when
$x<s$ is least such that $A_{s}(x)\neq A_{s-1}(x)$, list $V_{x,s}$ in
$\mathcal{S}$. This yields a Solovay test relative to $B$ by the hypothesis
that the approximation of $A$ obeys $\mathbf{c}$.
Since $B$ is low for ML-randomness, the set $Y$ is ML-random relative to $B$.
Hence $Y$ passes the Solovay test $\mathcal{S}$. So choose $s_{0}$ such that
$Y\not\in V$ for any $V$ listed in $\mathcal{S}$ after stage $s_{0}$. Given an
input $x\geq s_{0}$, using $Y$ as an oracle compute $t>x$ such that
$[Y\\!\upharpoonright_{t}]\subseteq V_{x,t}$. We claim that $A(x)=A_{t}(x)$.
Otherwise $A_{s}(x)\neq A_{s-1}(x)$ for some $s>t$, which would cause
$V_{x,s}$ (or some superset $V_{y,s}$, $y<x$) to be listed in $\mathcal{S}$ at
a stage greater than $s_{0}$; since $Y\in V_{x,t}\subseteq V_{x,s}$, this
contradicts the choice of $s_{0}$.
##### 7.2.2. Proof of (c) $\Rightarrow$ (d)
Note that each superlow set is $\omega$-c.a. So it suffices to show that no
$\omega$-c.a. set, and no superhigh set, is weakly Demuth random. For
$\omega$-c.a. sets this is immediate from the definitions. For superhigh sets,
this is a result of Kučera and Nies [26, Cor. 3.6].
##### 7.2.3. Proof of (d) $\Rightarrow$ (a)
For convenience we restate the implication to be shown:
Let $\mathcal{C}$ be the class of superlow, or of superhigh sets. Suppose $A$
and $B$ are $K$-trivial c.e. sets. If $A\leq_{\mathrm{T}}Y\oplus B$ for each
$Y\in\mathcal{C}\cap\mbox{\rm{MLR}}$, then $A\leq_{SJR}B$.
We remark that for this implication, it suffices to assume that $A$ is c.e.
and superlow, and that $B$ is Demuth traceable as discussed below; in
particular, the latter holds if $B$ is c.e. and superlow.
As mentioned, the proof of this implication combines methods from Nies [36]
and Bienvenu et al. [4]. Below we will review and use some technical notions
from these articles. First we consider [4, Def. 1.7]. Given an oracle $B$, a
$\textup{Demuth}_{\textup{BLR}}\langle B\rangle$ test generalises a Demuth
test ${\left\langle{G_{m}}\right\rangle}_{m\in{\mathbb{N}}}$ in that
$G_{m}\subseteq 2^{{\mathbb{N}}}$ is a $\Sigma^{0}_{1}(B)$ class with
$\mathbf{\lambda}G_{m}\leq 2^{-m}$, and there is a function $f$ taking $m$ to
an index for $G_{m}$ such that $f$ has a $B$-computable approximation $g$,
with $g(m,t)$ having a number of changes that is computably bounded in $m$.
(Here we will only need the case that $f$ is $\omega$-c.a., in which case the
only difference to usual Demuth tests is that the versions of the components
are uniformly $\Sigma^{0}_{1}(B)$, rather than $\Sigma^{0}_{1}$.)
The authors in [4] defined that an oracle $B$ is low for
$\textup{Demuth}_{\textup{BLR}}$ tests if every
$\textup{Demuth}_{\textup{BLR}}\langle B\rangle$ test can be covered by a
Demuth test, in the sense that passing the Demuth test implies passing the
$\textup{Demuth}_{\textup{BLR}}\langle B\rangle$ test. In [4, Thm. 1.8] they
characterized such oracles via a tracing condition called Demuth traceability.
For c.e. sets, this condition is equivalent to being jump traceable, or again
superlow [4, Prop. 4.3].
The following lemma holds for any oracle $B$. We will prove it shortly.
###### Lemma 7.9.
For a given order function $h$ and a superlow c.e. set $A$, one can build a
$\textup{Demuth}_{\textup{BLR}}\langle B\rangle$ test
$(\mathcal{H}_{m})_{m\in{\mathbb{N}}}$ such that, if
$A\leq_{\mathrm{T}}Y\oplus B$ for some ML-random set $Y$ passing this test,
then the function $J^{A}$ has a $B$-c.e. trace with bound $h$.
Nies [36] defined a class $\mathcal{C}\subseteq 2^{\mathbb{N}}$ to be _Demuth
test–compatible_ if each Demuth test is passed by a member of $\mathcal{C}$.
Using some methods from [13], he showed in [36, Section 4] that the superlow
ML-random sets, as well as the superhigh ML-random sets, are Demuth
test–compatible.
If a class $\mathcal{C}$ is Demuth test–compatible and $B$ is Demuth
traceable, then each $\textup{Demuth}_{\textup{BLR}}\langle B\rangle$-test is
passed by a set in $\mathcal{C}$. So the lemma suffices to establish the
implication (d) $\Rightarrow$ (a) in question.
To verify Lemma 7.9, let $\Phi$ be a Turing functional such that
$\Phi(0^{e}1Y\oplus B)=\Phi_{e}(Y\oplus B)$ for each $e,Y$. We reduce the
lemma to the following.
###### Claim 7.10.
There is a $\textup{Demuth}_{\textup{BLR}}\langle B\rangle$ test
$(\mathcal{S}_{m})_{m\in{\mathbb{N}}}$ such that, if $A=\Phi^{Y\oplus B}$ for
some $Y$ passing this test, then $J^{A}$ has a $B$-c.e. trace with bound $h$.
This claim suffices to obtain Lemma 7.9: let
$(\mathcal{H}_{m})_{m\in{\mathbb{N}}}$ be the
$\textup{Demuth}_{\textup{BLR}}\langle B\rangle$ test obtained as in [36,
Lemma 2.6] applied to the test $(\mathcal{S}_{m})_{m\in{\mathbb{N}}}$. Thus,
if a set $Y$ passes $(\mathcal{H}_{m})_{m\in{\mathbb{N}}}$, then $0^{e}1Y$
passes $(\mathcal{S}_{m})_{m\in{\mathbb{N}}}$ for each $e$. By hypothesis of
Lemma 7.9, $A\leq_{\mathrm{T}}Y\oplus B$ for some $Y$ passing
$(\mathcal{H}_{m})_{m\in{\mathbb{N}}}$, so we have $A=\Phi_{e}^{Y\oplus B}$
for some $e$, and hence $A=\Phi({0^{e}1Y}\oplus B)$. Since $0^{e}1Y$ passes
$(\mathcal{S}_{m})_{m\in{\mathbb{N}}}$, we can conclude from the claim that
$J^{A}$ has a $B$-c.e. trace with bound $h$.
It remains to prove Claim 7.10. Write $\mathcal{U}_{e}=[W^{B}_{e}]^{\prec}$.
For the duration of the proof of the claim, a sequence
${\left\langle{G_{m}}\right\rangle}$ of open sets will be called an
_$(A,B)$-special test_ if there is a Turing functional $\Gamma$ such that $G_{m}$ is
the final version of the sets $G_{m}[t]=\mathcal{U}_{\Gamma^{A}(m,t)}$ over
stages $t$, and there is a computable function $g$ such that the number of
changes of $\Gamma^{A}(m,t)$ is bounded by $g(m)$. A set $Y$ passes such a
test in the usual sense of Demuth tests, namely, $Y\not\in G_{m}$ for almost
all $m$.
We first observe that since $A$ is superlow and c.e., there is a
$\textup{Demuth}_{\textup{BLR}}\langle B\rangle$ test
${\left\langle{\mathcal{S}_{m}}\right\rangle}$ that covers
${\left\langle{G_{m}}\right\rangle}$ in the sense that each $Y$ passing
${\left\langle{\mathcal{S}_{m}}\right\rangle}$ passes
${\left\langle{G_{m}}\right\rangle}$. To see this, let $\Theta$ be a Turing
functional such that $\Theta^{X}(m,i)$ is the $i$-th value taken by
$\Gamma^{X}(m,t)$ as $t$ increases, for each oracle $X$. Note that by [36,
Lemma 2.7] there are a computable enumeration $(A_{s})_{s\in{\mathbb{N}}}$ of
$A$ and a computable function $f$ such that a computation
$\Theta^{A_{s}}(m,i)$ is destroyed at most $f(m,i)$ times. At stage $t$, let
$\mathcal{S}_{m}[t]=\mathcal{U}_{\Theta^{A_{t}}(m,i)}$
where $i$ is maximal such that the expression on the right is defined at stage
$t$. Clearly the number of times a version $\mathcal{S}_{m}[t]$ changes is
bounded by $\sum_{i=0}^{g(m)}f(m,i)$. Thus,
$(\mathcal{S}_{m})_{m\in{\mathbb{N}}}$ is a
$\textup{Demuth}_{\textup{BLR}}\langle B\rangle$ test. If an oracle $Y$ is
in $G_{m}$ then $Y$ is in the final version $\mathcal{S}_{m}[t]$, so the new
test indeed covers ${\left\langle{G_{m}}\right\rangle}$.
So to establish Claim 7.10 it suffices to build an $(A,B)$-special test
${\left\langle{G_{m}}\right\rangle}$ in place of
${\left\langle{\mathcal{S}_{m}}\right\rangle}$. To do so, we mostly follow
[36, proof of Thm 3.2]. For $m\in{\mathbb{N}}$ let
$I_{m}=\\{x\colon\,2^{m}\leq h(x)<2^{m+1}\\}$.
At stage $t$, let $u$ be the maximum use of the computations $J^{A}(x)$ for
$x\in I_{m}$ that exist. We enumerate into the current version $G_{m}[t]$ all
oracles $Z$ such that $\Phi_{t}^{Z\oplus B}\succeq A\\!\upharpoonright_{u}$,
as long as the measure stays below $2^{-m}$. Whenever a new computation
$J^{A}(x)$ for $x\in I_{m}$ converges at a stage, we start a new version of
$G_{m}$. Clearly, there will be at most $\\#I_{m}$ many versions.
More formally, there is a Turing functional $\Gamma$ such that for each string
$\alpha$ of length $t$,
$\mathcal{U}_{\Gamma^{\alpha}(m,t)}=\\{Z\colon\,\forall x\in I_{m}\ [J^{\alpha}_{t}(x)\downarrow\text{ with use }u\Rightarrow\alpha\\!\upharpoonright_{u}\preceq\Phi^{Z\oplus B}_{t}]\\}.$
Let $G_{m}[t]=\mathcal{U}_{\Gamma^{A}(m,t)}^{(\leq 2^{-m})}$. Here for a
$\Sigma^{0}_{1}(B)$ set $\mathcal{W}$ and rational $\epsilon>0$, as in the
Cut-off Lemma [36, Lemma 2.1] by $\mathcal{W}^{(\leq\epsilon)}$ one denotes
the $\Sigma^{0}_{1}(B)$ set given by the enumeration capped by measure
$\epsilon$. By the uniformity of the Cut-off Lemma, from $m,t$ with the help
of oracle $A$ we can compute an index for this effectively open class. Thus,
the versions $G_{m}[t]$ define an $(A,B)$-special test
$(G_{m})_{m\in{\mathbb{N}}}$.
The $B$-c.e. trace $(T_{x})_{x\in{\mathbb{N}}}$ is defined as follows. At
stage $t$, for each string $\alpha$ of length $t$ and each $x\in I_{m}$ such
that $y=J_{t}^{\alpha}(x)$ is defined and the measure of the current
approximation to the $B$-c.e. open set $\mathcal{U}_{\Gamma^{\alpha}(m,t)}$
exceeds $2^{-m}$, put $y$ into $T_{x}$. The idea is that, if $y=J^{A}(x)$,
then this must happen for some $\alpha\prec A$; otherwise $Y$ can be put into
$G_{m}$ because there is no cut-off.
The verification is similar to [36, proof of Thm 3.2] _mutatis mutandis_. We
omit the proofs of the two claims that follow, which are similar to the claims
in the proof of the corresponding result [36, Thm 3.2].
Claim 1. $(T_{x})_{x\in{\mathbb{N}}}$ is a $B$-c.e. trace such that for each
$x$ we have $\\#T_{x}\leq h(x)$.
Claim 2. For almost every $x$, if $y=J^{A}(x)$ is defined, then $y\in T_{x}$.
This completes the proof of Claim 7.10 and hence Lemma 7.9, and hence of the
implication in question.
##### 7.2.4. Proof of (a) $\Rightarrow$ (b)
This implication works in the context of much weaker assumptions on $A$ and
$B$. We state this separately.
###### Proposition 7.11.
If $A\leq_{SJR}B$ and $B$ is jump traceable, then for every benign cost
function $c$, we have $A\models_{B}c$.
First we need another lemma.
###### Lemma 7.12.
Suppose $T$ is a finite tree, and $v_{0},\dots,v_{n-1}\in T$ are pairwise
distinct, such that each $v_{i}$ has at least 2 children in $T$. Then $T$ has
at least $n+1$ leaves.
We omit the proof, which is a simple induction on $n$.
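As a quick sanity check (not a proof), the following sketch verifies the count of Lemma 7.12 on random finite binary trees, a toy model in which a branching node has exactly 2 children:

```python
import random

# Sanity check of Lemma 7.12 on random finite binary trees: a tree with n
# nodes having at least 2 children has at least n + 1 leaves.

def random_tree(rng, size):
    """A random prefix-closed set of binary strings, containing the root."""
    tree = {""}
    for _ in range(size):
        node = rng.choice(sorted(tree))
        tree.add(node + rng.choice("01"))
    return tree

def leaves_and_branching(tree):
    children = {v: [v + b for b in "01" if v + b in tree] for v in tree}
    leaves = sum(1 for v in tree if not children[v])
    branching = sum(1 for v in tree if len(children[v]) == 2)
    return leaves, branching

rng = random.Random(0)
for _ in range(200):
    leaves, branching = leaves_and_branching(random_tree(rng, rng.randrange(1, 40)))
    assert leaves >= branching + 1
```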
###### Proof of Proposition 7.11.
Fix $c$ a benign cost function and $f$ an $\omega$-c.a. function with
$c(f(n))<2^{-n}$ for all $n$. We also denote by $f$ a computable approximation
to $f$, so that $\lim_{s}f(n,s)=f(n)$ for all $n$, and such that for each $n$,
$|\\{f(n,s):s\in\omega\\}|\leq g(n)$, where $g$ is some total computable
function. We may assume that $f(n,s)$ is non-decreasing in both $n$ and $s$.
Fix $h$ such that $B$ is $h$-JT.
First we employ standard tricks to assume we already have the traces for the
partial functions we intend to build. We define sets $I_{n}^{e}$ and
$J_{n}^{e}$ for $e<n<\omega$:
* •
For each $e$, the sets $\\{I_{n}^{e}:e<n<\omega\\}$ partition $\omega^{[2e]}$
such that
$|I_{n}^{e}|=g(n)+\sum_{i<n}h({\left\langle{2e,n+1,i}\right\rangle});$
* •
For each $e$, the sets $\\{J_{n}^{e}:e<n<\omega\\}$ partition
$\omega^{[2e+1]}$ such that
$|J_{n}^{e}|=2^{\sum\\{h({\left\langle{2e+1,x,i}\right\rangle})\ :\ x\in
I_{n}^{e}\ \&\ i<n\\}}.$
We then define an order function $k$ such that for all $e<n$ and all $x\in
I^{e}_{n}\cup J^{e}_{n}$, we have $k(x)\leq n$.
We will build partial functions $\Phi^{A}$ and $\Psi^{B}$. Fix an effective
listing of all c.e. $h$-traces and all oracle c.e. $k$-traces. For
$e={\left\langle{e_{0},e_{1}}\right\rangle}$, we will construct $\Phi$ and
$\Psi$ on $\omega^{[2e]}\cup\omega^{[2e+1]}$ under the assumption that the
$e_{0}$th element of the first listing traces $\Psi^{B}$ and the $e_{1}$th
element of the second listing traces $\Phi^{A}$ with oracle $B$. For the
remainder of the construction, fix $e$, and let $(V_{y})_{y\in\omega}$ and
$(U_{x}^{-})_{x\in\omega}$ be these elements, respectively. We will drop the
$e$ superscripts on the $I$ and $J$.
Suppose first that $(U_{x})_{x\in\omega}$ were an oracle-free c.e. $k$-trace
of $\Phi^{A}$. We will have a module for each $n>e$. Our module for $n$ seeks
to test $A$ at various lengths, in particular at length $f(n)$. At stage
$s=n$, or at stage $s>n$ with $f(n,s)\neq f(n,s-1)$, the module will test
length $f(n,s)$. It will also test various lengths as they are provided to it
by the $n+1$ module.
For a length $\ell$, it will first test $A\\!\upharpoonright_{\ell}$ at an
element $x\in I_{n}$ – that is, we define $\Phi^{\sigma}(x)=\sigma$ for all
$\sigma\in 2^{\ell}$, and we monitor the strings enumerated into $U_{x}$. This
will narrow down the possibilities for $A\\!\upharpoonright_{\ell}$ to a set
of at most $n$ strings. The module will then test each of those strings in
$J_{n}$ (we will say more about how testing is done in $J_{n}$ in a moment).
If more than one of those strings were to pass this second test, we would
promote $\ell$, i.e. tell the $n-1$ module that it is responsible for testing
$A\\!\upharpoonright_{\ell}$. We will arrange that the $n$ module promotes at
most $n-1$ lengths, so that the $n-1$ module will need to handle at most
$g(n-1)+n-1$ lengths.
For $n=e+1$, the $n$ module will still declare lengths to be promoted, even
though there is no $n-1$ module to promote them to. This will not affect the
running of the $n$ module, and so it will continue on as if there were an
$n-1$ module.
Of course, $(U_{x})_{x\in\omega}$ only traces $\Phi^{A}$ with oracle $B$, so
we will need to rely on $(V_{y})_{y\in\omega}$ to approximate this. When we
decide to test a length $\ell$ at $x\in I_{n}$, we will simultaneously define
$\Psi^{Y}({\left\langle{2e+1,x,i}\right\rangle})$ for $i<n$ and all oracles
$Y$ to be the $i$th element enumerated into $U_{x}^{Y}$, if such an element
exists. Then
$A\\!\upharpoonright_{\ell}\in U_{x}\subseteq
2^{\ell}\cap\bigcup_{i<n}V_{{\left\langle{2e+1,x,i}\right\rangle}},$
which has size at most $\sum\\{h({\left\langle{2e+1,x,i}\right\rangle})\ :\
i<n\\}$. We will test each of these strings on $J_{n}$. So we will test at
most $\sum\\{h({\left\langle{2e+1,x,i}\right\rangle})\ :\ x\in I_{n}^{e}\ \&\
i<n\\}$ strings on $J_{n}$, which the reader will note is the exponent of the
size of $J_{n}$.
The oracle $B$, using $(U_{x}^{B})_{x\in\omega}$, will have an opinion as to
which lengths should be promoted. Again, we will arrange that $B$ sees at most
$n-1$ lengths which are to be promoted by the $n$ module. We define
$\Psi^{Y}({\left\langle{2e,n,i}\right\rangle})$ for $i<n-1$ and all oracles
$Y$ to be the $i$th length which $Y$ believes the $n$ module should promote.
So the lengths $B$ believes should be promoted by the $n$ module will be
elements of $\bigcup_{i<n-1}V_{{\left\langle{2e,n,i}\right\rangle}}$, which
has size at most $\sum_{i<n-1}h({\left\langle{2e,n,i}\right\rangle})$. Thus
the $n-1$ module will need to handle at most
$g(n-1)+\sum_{i<n-1}h({\left\langle{2e,n,i}\right\rangle})$, which the reader
will note is the size of $I_{n-1}$. So each $I_{n}$ is large enough to test
every length the $n$ module must consider.
We must explain what it means to test a string on $J_{n}$. We identify $J_{n}$
with $2^{\sum\\{h({\left\langle{2e+1,x,i}\right\rangle})\ :\ x\in I_{n}^{e}\
\&\ i<n\\}}$, which we think of as a hypercube of side length 2 and dimension
$\sum\\{h({\left\langle{2e+1,x,i}\right\rangle})\ :\ x\in I_{n}^{e}\ \&\
i<n\\}$. When we seek to test a string $\sigma$ on $J_{n}$, we choose an axis
$d$ of the hypercube, split the hypercube into two pieces orthogonal to this
axis, and define $\Phi^{\sigma}(x)=\sigma$ for all $x$ belonging to one of
these pieces. To that end, for each
$d<\sum\\{h({\left\langle{2e+1,x,i}\right\rangle})\ :\ x\in I_{n}^{e}\ \&\
i<n\\}$, let $J_{n}(d)=\\{\tau\in J_{n}:\tau(d)=0\\}$. For each string
$\sigma$ that we seek to test on $J_{n}$, we will choose a unique $d$ and
define $\Phi^{\sigma}(x)=\sigma$ for each $x\in J_{n}(d)$ with
$\Phi^{\sigma}(x)$ not already defined. Since we will seek to test at most
$\sum\\{h({\left\langle{2e+1,x,i}\right\rangle})\ :\ x\in I_{n}^{e}\ \&\
i<n\\}$ many strings on $J_{n}$, there are sufficient $d$ to give each
$\sigma$ a unique $d$.
Next, we must explain what it means to promote a length. Recall that this is
defined for each oracle $Y$, but we only care about it for oracle $B$. At a
stage $s$, let $\ell$ be the longest length which we have already decided
should be promoted by the $n$ module (or $\ell=0$ if there is no such length).
A string $\sigma$ which has been tested on $J_{n}$ is confirmed at $n$
(relative to $Y$) if $\sigma\in U_{x,s}^{Y}$ for each $x$ with
$\Phi^{\sigma}(x)=\sigma$. If there are distinct $\sigma_{0},\sigma_{1}$ of
the same length which are both confirmed at $n$ at stage $s$, and such that
$\sigma_{0}\\!\upharpoonright_{\ell}=\sigma_{1}\\!\upharpoonright_{\ell}$,
then $|\sigma_{0}|$ is to be promoted by the $n$ module. In particular, it
must be that $|\sigma_{0}|>\ell$.
This completes the description of the original construction.
We must argue that for each $n>e$, $B$ believes at most $n-1$ lengths should
be promoted by the $n$ module. Suppose not, and fix lengths
$0=\ell_{0}<\ell_{1}<\ell_{2}<\dots<\ell_{n}$ such that for $i>0$, $B$ believes
$\ell_{i}$ is to be promoted by the $n$ module. For each $i>0$, fix strings
$\sigma_{0}^{i}$ and $\sigma_{1}^{i}$ on the basis of which $B$ decided to
promote $\ell_{i}$.
By construction, $B$ decides $\ell_{i}$ should be promoted before it decides
$\ell_{i+1}$ should be, and thus
$\sigma_{0}^{i+1}\\!\upharpoonright_{\ell_{i}}=\sigma_{1}^{i+1}\\!\upharpoonright_{\ell_{i}}$
for $i>0$. Clearly this also holds for $i=0$. Now define the following
sequence of sets:
* •
$Z_{n}=\\{\sigma_{0}^{n},\sigma_{1}^{n}\\}$;
* •
For $0<i<n$,
$Z_{i}=Z_{i+1}\cup(\\{\sigma_{0}^{i},\sigma_{1}^{i}\\}\setminus\\{\sigma\\!\upharpoonright_{\ell_{i}}:\sigma\in
Z_{i+1}\\}).$
Note that each $Z_{i}$ is an antichain, and
$\\{\sigma_{0}^{i},\sigma_{1}^{i}\\}\subseteq\\{\sigma\\!\upharpoonright_{\ell_{i}}:\sigma\in
Z_{i}\\}$ by construction. Let
$T=\\{\sigma\\!\upharpoonright_{\ell_{i}}:\sigma\in Z_{1}\ \&\ i\leq n\\}$,
which we think of as a tree. Note that the leaves of $T$ are precisely
$Z_{1}$.
For $i<n$, let
$v_{i}=\sigma_{0}^{i+1}\\!\upharpoonright_{\ell_{i}}=\sigma_{1}^{i+1}\\!\upharpoonright_{\ell_{i}}$.
Then the $v_{i}$ are pairwise distinct and each has at least 2 children in $T$
(namely, $\sigma_{0}^{i+1}$ and $\sigma_{1}^{i+1}$). Thus $|Z_{1}|\geq n+1$ by
Lemma 7.12.
Let $D$ be the set of axes chosen for various $\sigma\in Z_{1}$, and define
$\tau\in J_{n}$ by
$\tau(d)=\left\\{\begin{array}[]{cl}0&d\in D,\\\ 1&d\not\in
D.\end{array}\right.$
Observe that $\tau\in J_{n}(d)\iff d\in D$.
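A tiny sketch of this bookkeeping (a toy model only: vertices of $J_{n}$ as bit tuples, and a hypothetical set $D$ of chosen axes):

```python
from itertools import product

# Toy model of the hypercube bookkeeping: vertices of J_n are bit tuples of
# length dim, half(d) models J_n(d), the half of the cube with coordinate d
# equal to 0, and the vertex tau built from the chosen axes D lies in J_n(d)
# exactly when d is in D.

dim = 5
J = list(product((0, 1), repeat=dim))

def half(d):
    return {tau for tau in J if tau[d] == 0}

D = {0, 2, 3}  # hypothetical axes chosen for the strings in Z_1
tau = tuple(0 if d in D else 1 for d in range(dim))

# Only strings whose chosen axis lies in D can ever make a definition at tau.
assert all((tau in half(d)) == (d in D) for d in range(dim))
```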
###### Claim 7.13.
For each $\sigma\in Z_{1}$, we make the definition
$\Phi^{\sigma}(\tau)=\sigma$.
###### Proof.
By construction, we will make this definition so long as we have not already
defined $\Phi^{\sigma}(\tau)$ to be something else. But our actions for any
string $\sigma^{\prime}\not\in Z_{1}$ will never do this, as such
$\sigma^{\prime}$ will have an axis $d\not\in D$, and so will not seek to make
a definition at $\tau$. And $\sigma^{\prime}\in Z_{1}\setminus\\{\sigma\\}$
will not do this, as they will only seek to make a definition for
$\Phi^{\sigma^{\prime}}(\tau)$, and $\sigma^{\prime}$ and $\sigma$ are
incomparable as $Z_{1}$ is an antichain. ∎
As each of the $\ell_{i}$ is promoted, we have $Z_{1}\subseteq U_{\tau}^{B}$,
contradicting $|U_{\tau}^{B}|\leq k(\tau)\leq n$.
Thus the construction can proceed. The remainder of the argument is relative
to $B$.
Fix $\widehat{\ell}$ the longest length which $B$ believes the $e+1$ module
should promote. Nonuniformly fix $A\\!\upharpoonright_{\widehat{\ell}}$. Let
$L(n,s)$ be the set of lengths being tested by the $n$ module at stage $s$. At
a stage $s$, define a partial sequence $\sigma_{n}^{s}$ for $e\leq n\leq s$
recursively:
* •
$\sigma_{e}^{s}=A\\!\upharpoonright_{\widehat{\ell}}$;
* •
Given $\sigma_{n}^{s}$, define $\sigma_{n+1}^{s}$ to be a string $\tau$
extending $\sigma_{n}^{s}$ with $|\tau|=\max L(n+1,s)$, and such that for each
$\ell\in L(n+1,s)$, $\tau\\!\upharpoonright_{\ell}$ is confirmed at $n+1$ by
stage $s$, if such a string $\tau$ exists.
###### Claim 7.14.
There is at most one possible choice for $\sigma_{n}^{s}$.
###### Proof.
For $n=e$, this is immediate.
For $n>e$, suppose there were two distinct strings $\tau_{0}$ and $\tau_{1}$
which are appropriate to pick for $\sigma_{n}^{s}$. Fix $\ell\in L(n,s)$ least
with $\tau_{0}\\!\upharpoonright_{\ell}\neq\tau_{1}\\!\upharpoonright_{\ell}$.
Then $\tau_{0}\\!\upharpoonright_{\ell},\tau_{1}\\!\upharpoonright_{\ell}$
witness the promotion of $\ell$ at stage $s$, and $\ell>|\sigma_{n-1}^{s}|$,
as $\tau_{0}$ and $\tau_{1}$ both extend $\sigma_{n-1}^{s}$. This contradicts
$|\sigma_{n-1}^{s}|=\max L(n-1,s)$ (or contradicts the definition of
$\widehat{\ell}$ if $n=e+1$). ∎
###### Claim 7.15.
Let $\ell_{n}=\max\bigcup_{s}L(n,s)$ for $n>e$, and $\ell_{e}=\widehat{\ell}$.
Then $A\\!\upharpoonright_{\ell_{n}}=\lim_{s}\sigma_{n}^{s}$ for $n\geq e$.
###### Proof.
Induction on $n$. The case $n=e$ is immediate.
For $n>e$, first observe that $\ell_{n-1}$ is either a length promoted by the
$n$ module (and so eventually an element of $L(n,s)$) or is $f(n-1,s)\leq
f(n,s)$ for some $s$, and so is bounded by an element of $L(n,s)$. Thus
$\ell_{n-1}\leq\ell_{n}$.
Now fix $s_{0}$ sufficiently large such that
$\sigma_{m}^{s}=A\\!\upharpoonright_{\ell_{m}}$ for all $m<n$ and $s\geq
s_{0}$, and such that $L(n,s_{0})=\bigcup_{s}L(n,s)$. As $\Phi^{A}$ is traced
by $(U_{x}^{B})_{x\in\omega}$, there is a stage $s_{1}\geq s_{0}$ such that
each $A\\!\upharpoonright_{\ell}$ for $\ell\in L(n,s_{0})$ is confirmed at
$n$. Then $A\\!\upharpoonright_{\ell_{n}}$ is a possible choice for
$\sigma_{n}^{s}$ for every $s\geq s_{1}$, and thus is $\sigma_{n}^{s}$. ∎
Define a sequence of stages $(s_{t})_{t\in\omega}$ as follows:
* •
$s_{0}=e$.
* •
Given $s_{t}$, $s_{t+1}$ is the least $s>s_{t}$ such that for every $n$ with
$e<n\leq t$, $\sigma_{n}^{s}$ exists.
Define $A_{t}=\sigma_{t}^{s_{t}}$.
###### Claim 7.16.
$(A_{t})_{t\in\omega}\models c$
###### Proof.
Suppose $e<n\leq t$ and $A_{t}(z)\neq A_{t+1}(z)$ for some $z$ with
$c(z,t)\geq 2^{-n}$. As $c(z,s_{t})\geq c(z,t)$, $z<f(n,s_{t})\in L(n,s_{t})$.
Thus $\sigma_{n}^{s_{t}}\neq\sigma_{n}^{s_{t+1}}$. Fix $m$ least with
$\sigma_{m}^{s_{t}}\neq\sigma_{m}^{s_{t+1}}$. Fix $\ell\in L(m,s_{t})$ least
with
$\sigma_{m}^{s_{t}}\\!\upharpoonright_{\ell}\neq\sigma_{m}^{s_{t+1}}\\!\upharpoonright_{\ell}$.
If no length less than $\ell$ and greater than $\max L(m-1,s_{t})$ is promoted
by the $m$ module at a stage $s\in(s_{t},s_{t+1}]$, then these witness the
promotion of $\ell$ at stage $s_{t+1}$, and $\ell>\max L(m-1,s_{t})$, as both
$\sigma_{m}^{s_{t}}$ and $\sigma_{m}^{s_{t+1}}$ extend
$\sigma_{m-1}^{s_{t}}=\sigma_{m-1}^{s_{t+1}}$. So whenever there is such a
$z,n$ and $t$, there is a promotion by an $m$ module for $m\leq n$ at a stage
after $s_{t}$.
There can be at most $\sum_{e<m\leq n}(m-1)<n^{2}$ such promotions over the
entire construction. Thus we can bound
$c((A_{t})_{t\in\omega})<\sum_{n}n^{2}\cdot 2^{-n}<\infty.\qed$
This ends the proof of Proposition 7.11. ∎
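The closing arithmetic can be checked mechanically; a minimal sketch (the helper name is ours, not from the construction):

```python
# Check of the final counting: sum_{e<m<=n}(m-1) < n^2, and the resulting
# cost bound sum_n n^2 * 2^{-n} is finite (over n >= 1 it equals 6).

def promotions_bound(e, n):
    return sum(m - 1 for m in range(e + 1, n + 1))

assert all(promotions_bound(e, n) < n ** 2
           for e in range(5) for n in range(e + 1, 30))

total = sum(n ** 2 * 2.0 ** -n for n in range(1, 200))
assert abs(total - 6.0) < 1e-12
```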
### 8\. Nies and Stephan: Update on the SMB theorem for measures
The purpose of this post is to provide an example showing that the boundedness
hypothesis in [39, Prop. 24] is necessary.
We briefly review some background and notation. Let $\mathbb{A}^{\infty}$
denote the topological space of one-sided infinite sequences of symbols in an
alphabet $\mathbb{A}$. Randomness notions etc. carry over from the case that
$\mathbb{A}=\\{0,1\\}$. A measure $\rho$ on $\mathbb{A}^{\infty}$ is called
_shift invariant_ if $\rho(G)=\rho(T^{-1}(G))$ for each open (and hence each
measurable) set $G$. The _empirical entropy_ of a measure $\rho$ along
$Z\in\mathbb{A}^{\infty}$ is given by the sequence of random variables
$h^{\rho}_{n}(Z)=-\frac{1}{n}\log_{|\mathbb{A}|}\rho[Z\\!\upharpoonright_{n}].$
A shift invariant measure $\rho$ on $\mathbb{A}^{\infty}$ is called _ergodic_
if every $\rho$-integrable function $f$ with $f\circ T=f$ is constant
$\rho$-almost surely. The following equivalent condition can be easier to
check: for any strings $u,v\in\mathbb{A}^{*}$,
$\lim_{N}\frac{1}{N}\sum_{k=0}^{N-1}\rho([u]\cap T^{-k}[v])=\rho[u]\rho[v].$
For ergodic $\rho$, the entropy $H(\rho)$ is defined as $\lim_{n}H_{n}(\rho)$,
where
$H_{n}(\rho)=-\frac{1}{n}\sum_{|w|=n}\rho[w]\log\rho[w].$
Thus, $H_{n}(\rho)=\mathbb{E}_{\rho}h^{\rho}_{n}$ is the expected empirical
entropy with respect to $\rho$. Recall that by concavity of the logarithm
function and subadditivity of entropy, $H(X,Y)\leq H(X)+H(Y)$, the limit
$H(\rho)=\lim_{n}H_{n}(\rho)$ exists and equals the infimum of the sequence.
###### Theorem 8.1 (SMB theorem, e.g. [45]).
Let $\rho$ be an ergodic measure on the space $\mathbb{A}^{\infty}$. For
$\rho$-almost every $Z$ we have $\lim_{n}h^{\rho}_{n}(Z)=H(\rho)$.
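In the simplest ergodic case, a Bernoulli measure, the theorem reduces to the law of large numbers; a hedged numeric sketch (the parameter $p=0.3$, the seed and the sample size are arbitrary choices for the demonstration):

```python
import random
from math import log2

# Illustration of the SMB theorem for a Bernoulli(p) measure on {0,1}^infty:
# h_n(Z) = -(1/n) log2 rho[Z|n] is an average of i.i.d. terms, so it
# converges to the entropy H(rho) = H(p) almost surely.

p = 0.3
H = -(p * log2(p) + (1 - p) * log2(1 - p))  # entropy of the measure

rng = random.Random(1)
n = 200_000
Z = [1 if rng.random() < p else 0 for _ in range(n)]
h_n = -sum(log2(p) if z else log2(1 - p) for z in Z) / n

assert abs(h_n - H) < 0.01
```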
###### Proposition 8.2 ([39], Prop. 7.2).
Let $\rho$ be a computable ergodic measure on the space $\mathbb{A}^{\infty}$
such that for some constant $D$, each $h_{n}^{\rho}$ is bounded above by $D$.
Suppose the measure $\mu$ is Martin-Löf a.c. with respect to $\rho$. Then
$\lim_{n}E_{\mu}|h^{\rho}_{n}-H(\rho)|=0$.
We now give an example showing that the boundedness hypothesis on the
$h_{n}^{\rho}$ is necessary. In fact we provide a computable ergodic measure
$\rho$ such that some finite measure $\mu\ll\rho$ makes the sequence
$E_{\mu}h_{n}^{\rho}$ converge to $\infty$. This condition $\mu\ll\rho$ (every
$\rho$ null set is a $\mu$ null set) is stronger than requiring that $\mu$ is
Martin-Löf a.c. with respect to $\rho$.
###### Example 8.3.
There is an ergodic computable measure $\rho$ (associated to a binary renewal
process) and a computable measure $\mu\ll\rho$ such that
$\lim_{n}\mathbb{E}_{\mu}h^{\rho}_{n}=\infty$. (We can then normalise $\mu$ to
become a probability measure, and still have the same conclusion.)
###### Proof.
Let $k$ range over positive natural numbers. The real $c=\sum_{k}2^{-k^{4}}$
is computable. Let $p_{k}=2^{-k^{4}}/c$ so that $\sum p_{k}=1$. Let
$b=\sum_{k}k\cdot p_{k}$ which is also computable.
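Before continuing, a hypothetical numerical sanity check of these constants (our own sketch, not part of the proof; truncating the sums is harmless since $2^{-k^{4}}$ decays extremely fast):

```python
# Sanity check of the constants in Example 8.3:
# c = sum_k 2^(-k^4), p_k = 2^(-k^4)/c, b = sum_k k*p_k, for k >= 1.
K = 5  # 2^(-k^4) is already astronomically small for k = 3
c = sum(2.0 ** (-k ** 4) for k in range(1, K + 1))
p = {k: 2.0 ** (-k ** 4) / c for k in range(1, K + 1)}
b = sum(k * p[k] for k in range(1, K + 1))

print(abs(sum(p.values()) - 1.0) < 1e-12)  # the p_k sum to 1
print(1.0 < b < 1.1)                       # b is dominated by the k = 1 term
```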
Let $\rho$ be the measure associated with the corresponding binary renewal
process, which is given by the conditions
$\rho[Z_{0}=1]=1/b$ and $\rho(10^{k}1\prec Z\mid Z_{0}=1)=p_{k}$.
Informally, the process has initial value $1$ with probability $1/b$, and
after each $1$ with probability $p_{k}$ it takes $k$ many 0s until it reaches
the next 1. See again e.g. [45, Ch. 1] where it is shown that $\rho$ is
ergodic. Write $v_{k}=10^{k}1$. Note that $\rho[v_{k}]=p_{k}/b$.
Define a function $f$ in $L_{1}(\rho)$ by $f(v_{k}\,\widehat{\
}\,Z)=k^{-2}/p_{k}$ and $f(X)=0$ for any $X$ not extending any $v_{k}$. It is
clear that $f$ is $L_{1}(\rho)$-computable, in the usual sense that there is
an effective sequence of basic functions ${\left\langle{f_{n}}\right\rangle}$
converging effectively to $f$: let $f_{n}(X)=f(X)$ in case $v_{k}\prec X$,
$k\leq n$, and $f_{n}(X)=0$ otherwise. Define the measure $\mu$ by
$d\mu=fd\rho$, i.e. $\mu(A)=\int_{A}fd\rho$. Thus $\mu[v_{k}]=k^{-2}/b$. Since
$\rho$ is computable and $f$ is $L_{1}(\rho)$-computable, $\mu$ is computable.
Also note that $\mu(2^{{\mathbb{N}}})=\int fd\rho$ is finite.
For any $n>2$, letting $k=n-2$, we have
$E_{\mu}h_{n}^{\rho}\geq-\frac{1}{n}\mu[v_{k}]\log\rho[v_{k}]=\frac{1}{nk^{2}b}\left(k^{4}+\log(bc)\right)\geq\frac{k^{2}}{nb}-O(1).$
∎
### 9\. Nies and Tomamichel: the measure associated with an infinite sequence
of qubits
For background and notation see the 2017 Logic Blog entry [9, Section 6].
Recall that mathematically, a qubit is a unit vector in the Hilbert space
$\mathbb{C}^{2}$. We give a brief summary on “infinite sequences” of qubits.
One considers the $C^{*}$ algebra $M_{\infty}=\lim_{n}M_{2^{n}}(\mathbb{C})$,
an approximately finite (AF) $C^{*}$ algebra. “Quantum Cantor space” consists
of the state set $\mathcal{S}(M_{\infty})$, which is a convex, compact,
connected set with a shift operator, deleting the first qubit.
Given a finite sequence of qubits, “deleting” a particular one generally
results in a statistical superposition of the remaining ones. This is why
$\mathcal{S}(M_{\infty})$ consists of coherent sequences of density matrices
$\mu={\left\langle{\mu_{n}}\right\rangle}_{n\in{\mathbb{N}}}$ where $\mu_{n}$
is in $M_{2^{n}}(\mathbb{C})$ (density matrices formalise such
superpositions), rather than just of sequences of unit vectors in
$(\mathbb{C}^{2})^{\otimes n}$. To be coherent means that
$T(\mu_{n+1})=\mu_{n}$ where $T$ is the partial trace operation deleting the
last qubit. For more background on this, as well as an algorithmic notion of
randomness for such states, see Nies and Scholz [38].
We defined in [9, Section 6] what it means for a state $\mu$ on $M_{\infty}$
to be qML-random with respect to a computable shift invariant state $\rho$.
For each density matrix $D\in M_{2^{n}}$ its diagonal $\overline{D}$ is also a
density matrix. This is because the operator
$A\mapsto\sum_{x\in\\{0,1\\}^{n}}P_{x}DP_{x}$ is completely positive and trace
preserving (here as usual $P_{x}=|x\rangle\langle x|$ is the projection onto
the subspace spanned by the basis vector given by $x$). Clearly
$\overline{\mu}_{n}=T(\overline{\mu_{n+1}})$ for each $n$ because taking the
partial trace means adding corresponding entries of the two square
$2^{n}\times 2^{n}$ blocks along the diagonal. So $\overline{\mu}$ is a
diagonal state, and hence corresponds to a measure on Cantor space. Clearly if
$\mu$ is computable then so is $\overline{\mu}$. Shift invariance is also
preserved by this operation.
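These observations can be checked numerically (a sketch with NumPy; the helper names are ours): the pinching map $A\mapsto\sum_{x}P_{x}AP_{x}$ preserves density matrices, and taking diagonals commutes with the partial trace.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d):
    """A random d x d density matrix: positive semidefinite with trace 1."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = a @ a.conj().T
    return m / np.trace(m)

def partial_trace_last_qubit(m):
    """Trace out the last qubit: (2^n x 2^n) -> (2^(n-1) x 2^(n-1))."""
    d = m.shape[0] // 2
    r = m.reshape(d, 2, d, 2)
    return np.einsum('iaja->ij', r)  # sum over the last-qubit index

def diagonal_part(m):
    """The pinching A -> sum_x P_x A P_x, i.e. keep only the diagonal."""
    return np.diag(np.diag(m))

mu2 = random_density_matrix(4)  # a state on 2 qubits
lhs = diagonal_part(partial_trace_last_qubit(mu2))
rhs = partial_trace_last_qubit(diagonal_part(mu2))
print(np.allclose(lhs, rhs))  # diagonals commute with the partial trace
print(np.isclose(np.trace(diagonal_part(mu2)).real, 1.0))  # still trace 1
```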
Ergodicity of $\overline{\mu}$ can be used as a test of the more complicated
ergodicity for $\mu$.
###### Fact 9.1.
If $\overline{\mu}$ is ergodic then so is $\mu$.
###### Proof.
Suppose $\mu=\alpha\eta+\beta\nu$ is a nontrivial convex decomposition of
$\mu$ into shift-invariant states. Then
$\overline{\mu}=\alpha\overline{\eta}+\beta\overline{\nu}$ is a nontrivial
convex decomposition of $\overline{\mu}$ into shift-invariant states as well.
∎
###### Fact 9.2.
Let $\rho$ be a computable shift-invariant measure. If a state $\mu$ is
qML-random with respect to $\rho$ then so is $\overline{\mu}$.
###### Proof.
Note that for each classical $\Sigma^{0}_{1}$ set $G$ we have
$\mu(G)=\overline{\mu}(G)$, where on the left hand side $G$ is interpreted as
an ascending sequence ${\left\langle{p_{m}}\right\rangle}$ of clopen
projections $p_{m}\in M_{2^{m}}$, and then $\mu(G)=\lim_{m}\mu(p_{m})$. But
$\mu(p_{m})=\mathrm{Tr}(\mu\\!\upharpoonright_{m}p_{m})=\mathrm{Tr}(\overline{\mu}\\!\upharpoonright_{m}p_{m})$
because $p_{m}$ is diagonal. ∎
We obtain a partial quantum version of Prop. 8.2. This answers one special
case of Conjecture 6.3 in [9] (unfortunately the roles of $\mu$ and $\rho$ are
exchanged there). The boundedness hypothesis turned out to be necessary by
Example 8.3, but was not present in the statement of the conjecture back then.
###### Proposition 9.3.
Let $\rho$ be a computable ergodic measure on the space $\mathbb{A}^{\infty}$
such that for some constant $D$, each $h_{n}^{\rho}$ is bounded above by $D$.
Write $s=H(\rho)$. Suppose the state $\mu$ is qML random with respect to
$\rho$. Then
$\lim_{n}\mathrm{Tr}(\mu\\!\upharpoonright_{n}|h^{\rho}_{n}-sI_{2^{n}}|)=0$.
Here the function $h^{\rho}_{n}$ is viewed as defined on strings of length
$n$, and in the expression above we identify it with the corresponding
diagonal matrix in $M_{2^{n}}$.
###### Proof.
Let $\overline{\mu}$ be the classical state (measure) such that
$\overline{\mu}\\!\upharpoonright_{n}$ is the diagonal of
$\mu\\!\upharpoonright_{n}$, as above. By the fact above we have
$\overline{\mu}\ll_{ML}\rho$, i.e., the non-$\rho$-MLR bit sequences form a
$\overline{\mu}$ null set. Now
$\mathrm{Tr}(\mu\\!\upharpoonright_{n}|h^{\rho}_{n}-sI_{2^{n}}|)=\mathrm{Tr}(\overline{\mu}\\!\upharpoonright_{n}|h^{\rho}_{n}-sI_{2^{n}}|)=E_{\overline{\mu}}|h^{\rho}_{n}-s|$.
It now suffices to apply Prop. 8.2. ∎
If $\rho$ is i.i.d. then the boundedness condition on the $h_{n}^{\rho}$
holds. This yields a new proof of [9, Thm. 6.4] (first turning the ergodic
state $\rho$ into a classical state by applying a fixed unitary “qubit-wise”,
as before).
## Part III Set theory
### 10\. Yu: perfect subsets of uncountable sets of reals
We make some remarks on a recent result:
###### Theorem 10.1 (Hamel, Horowitz, Shelah [19]).
Assume $ZF+DC$. If every uncountable Turing invariant set of reals has a
perfect subset, then so does every uncountable set of reals.
We obtained an improvement of the Theorem which was added in Section III of
the most recent version of [19].
###### Theorem 10.2 (Yu [19]).
Assume $ZF+AC_{\omega}$. For any analytic countable equivalence relation $E$,
if every uncountable $E$-invariant set of reals has a perfect subset, then so
does every uncountable set of reals.
Remark 1: Actually $AC_{\omega}$ can be removed from Theorems 10.1 and 10.2.
In the recursion-theoretic proof of Theorem 10.1, the first use of
$AC_{\omega}$ is to prove that $[Q]_{T}\cap A$ is uncountable. But this is
clearly unnecessary: otherwise $Q\subseteq[[Q]_{T}\cap A]_{T}$ would be
countable, and due to the uniformity this can be shown without appealing to
$AC_{\omega}$.
The second use of $AC_{\omega}$ is to prove that $Q_{e,i}\cap A$ is
uncountable for some $e,i$. But if $Q_{e,i}\cap A$ is countable for all $e,i$,
then the computation is uniform since $Q_{e,i}\cap A=Q_{e,i}\cap P$ is a
countable closed set. $AC_{\omega}$ can be removed from Theorem 10.2 for
similar reasons.
Remark 2: Ironically, we need $AC_{\omega}$ to prove the conclusion for every
countable Borel equivalence relation, since the implication from Borelness to
$\mathbf{\Sigma^{1}_{1}}$-ness requires $AC_{\omega}$. But for most natural
countable Borel equivalence relations, it seems $AC_{\omega}$ is unnecessary.
### References
* [1] L. Agarwal and M. Kompatscher. Pairwise nonisomorphic maximal-closed subgroups of Sym (N) via the classification of the reducts of the Henson digraphs. The Journal of Symbolic Logic, 83(2):395–415, 2018.
* [2] G. Ahlbrandt and M. Ziegler. Quasi finitely axiomatizable totally categorical theories. Annals of Pure and Applied Logic, 30(1):63–82, 1986.
* [3] M. Benli and B. Kaya. Descriptive complexity of subsets of the space of finitely generated groups. Arxiv: 1909.11163, 2019.
* [4] L. Bienvenu, R. Downey, N. Greenberg, A. Nies, and D. Turetsky. Characterizing lowness for Demuth randomness. The Journal of Symbolic Logic, 79(2):526–569, 2014.
* [5] Z. Chatzidakis. Model theory of profinite groups having the Iwasawa property. Illinois Journal of Mathematics, 42(1):70–96, 1998.
* [6] P. Cholak, R. Downey, and N. Greenberg. Strongly jump-traceability I: the computably enumerable case. Adv. in Math., 217:2045–2074, 2008.
* [7] J.D. Dixon and B. Mortimer. Permutation groups, volume 163. Springer Science & Business Media, 1996.
* [8] A. Nies (editor). Logic Blog 2011. Available at http://arxiv.org/abs/1403.5721, 2011.
* [9] A. Nies (editor). Logic Blog 2017. Available at http://arxiv.org/abs/1804.05331, 2017.
* [10] A. Nies (editor). Logic Blog 2019. Available at http://arxiv.org/abs/2003.03361, 2019.
* [11] V. Ferenczi, A. Louveau, and C. Rosendal. The complexity of classifying separable Banach spaces up to isomorphism. Journal of the London Mathematical Society, 79(2):323–345, 2009.
* [12] A. Figà-Talamanca and C. Nebbia. Harmonic Analysis and Representation Theory for Groups Acting on Homogenous Trees, volume 162. Cambridge University Press, 1991.
* [13] N. Greenberg, D. Hirschfeldt, and A. Nies. Characterizing the strongly jump-traceable sets via randomness. Adv. Math., 231(3-4):2252–2293, 2012.
* [14] N. Greenberg, J. S. Miller, and A. Nies. Computing from projections of random points. Journal of Mathematical Logic, page 1950014, 2019.
* [15] N. Greenberg, J. S. Miller, A. Nies, and D. Turetsky. Martin-Löf reducibility and cost functions. arXiv: https://arxiv.org/abs/1707.00258, 2017.
* [16] N. Greenberg and A. Nies. Benign cost functions and lowness properties. J. Symbolic Logic, 76:289–312, 2011.
* [17] N. Greenberg and D. Turetsky. Strong jump-traceability and Demuth randomness. Proc. Lond. Math. Soc., 108:738–779, 2014.
* [18] N. Greenberg and D. Turetsky. Strong jump-traceability. Bulletin of Symbolic Logic, 24(2):147–164, 2018.
* [19] C. Hamel, H. Horowitz, and S. Shelah. Turing invariant sets and the perfect set property. preprint arXiv:1912.12558, 2019.
* [20] A. O. Houcine and F. Point. Alternatives for pseudofinite groups. Journal of Group Theory, 16(4):461–495, 2013.
* [21] I. Bilanovic, J. Chubb, and A. Roven. Detecting properties from description of groups. Arch. Math. Log., 59:293–312, 2020.
* [22] A Ivanov. Closed groups induced by finitary permutations and their actions on trees. Proceedings of the American Mathematical Society, 130(3):875–882, 2002.
* [23] A. Kaichouh. Amenability and ramsey theory in the metric setting. Fund. Math., 231:19 – 38, 2015.
* [24] I. Kaplan and P. Simon. The affine and projective groups are maximal. Transactions of the American Mathematical Society, 368(7):5229–5245, 2016.
* [25] A. Kechris, A. Nies, and K. Tent. The complexity of topological group isomorphism. The Journal of Symbolic Logic, 83(3):1190–1203, 2018.
* [26] A. Kučera and A. Nies. Demuth randomness and computational complexity. Ann. Pure Appl. Logic, 162:504–513, 2011.
* [27] M. Lawson. Inverse semigroups: the theory of partial symmetries. World Scientific, 1998.
* [28] B. Majcher-Iwanow. Finitary shadows of compact subgroups of $s(\omega)$. Algebra Universalis, 81(2):25, 2020.
* [29] M. Malicki. Abelian pro-countable groups and non-Borel orbit equivalence relations. Mathematical Logic Quarterly, 62(6):575–579, 2016.
* [30] M. Malicki. Abelian pro-countable groups and orbit equivalence relations. arXiv preprint arXiv:1405.0693, 2014.
* [31] A. Mekler. Stability of nilpotent groups of class 2 and prime exponent. The Journal of Symbolic Logic, 46(04):781–788, 1981.
* [32] J. Melleray and T. Tsankov. Generic representations of abelian groups and extreme amenability. Isr. J. Math., 198:129 – 167, 2013.
* [33] P. Neumann. The structure of finitary permutation groups. Archiv der Mathematik, 27(1):3–17, 1976.
* [34] K.M. Ng. Beyond strong jump traceability. Proceedings of the London Mathematical Society, 2010. To appear.
* [35] A. Nies. Computability and Randomness, volume 51 of Oxford Logic Guides. Oxford University Press, Oxford, 2009. 444 pages. Paperback version 2011.
* [36] A. Nies. Computably enumerable sets below random sets. Ann. Pure Appl. Logic, 163(11):1596–1610, 2012.
* [37] A. Nies, P. Schlicht, and K. Tent. Oligomorphic groups are essentially countable. arXiv preprint arXiv:1903.08436, 2019.
* [38] A. Nies and V. Scholz. Martin-Löf random quantum states. Journal of Mathematical Physics, 60(9):092201, 2019. available at doi.org/10.1063/1.5094660.
* [39] A. Nies and F. Stephan. A weak randomness notion for measures. Available at https://arxiv.org/abs/1902.07871, 2019.
* [40] N. Nikolov and D. Segal. On finitely generated profinite groups. I. Strong completeness and uniform bounds. Ann. of Math. (2), 165(1):171–238, 2007.
* [41] N. Nikolov and D. Segal. Generators and commutators in finite groups; abstract quotients of compact groups. Inventiones mathematicae, 190(3):513–602, 2012.
* [42] V. G. Pestov. Dynamics of infinite-dimensional groups. Inst. Mat. Pura. Apl. (IMPA), Rio de Janeiro, 2005), 2005.
* [43] F.M. Schneider and A. Thom. Topological matchings and amenability. Fund. Math., 238:167 – 200, 2017.
* [44] F.M. Schneider and A. Thom. On Folner sets in topological groups. Compositio Mathematica, 154:1333–1361, 2018.
* [45] P. Shields. The Ergodic Theory of Discrete Sample Paths. Graduate Studies in Mathematics 13. American Mathematical Society, 1996.
* [46] A. M. Slobodskoi. Unsolvability of the universal theory of finite groups. Algebra i Logika, 20:207–230, 1981.
* [47] V. Uspenskij. On the group of isometries of the Urysohn universal metric space. Comment. Math. Univ. Carolinae, 31:181 – 182, 1990.
* [48] Ph. Wesolek and J. Williams. Chain conditions, elementary amenable groups, and descriptive set theory. Groups Geom. Dyn., 11:649 – 684, 2017.
* [49] G. Willis. The structure of totally disconnected, locally compact groups. Mathematische Annalen, 300(1):341–363, 1994.
* [50] G. Willis. Computing the scale of an endomorphism of a totally disconnected locally compact group. Axioms, 6(4):27, 2017.
# Experiences & Challenges with Server-Side WiFi Indoor Localization Using
Existing Infrastructure
Dheryta Jaisinghani, University of Northern Iowa, Cedar Falls, IA, USA
(<EMAIL_ADDRESS>); Vinayak Naik, BITS Pilani Goa, Goa, India
(<EMAIL_ADDRESS>); Rajesh Balan, Singapore Management University, Singapore
(<EMAIL_ADDRESS>); Archan Misra, Singapore Management University, Singapore
(<EMAIL_ADDRESS>); and Youngki Lee, Seoul National University, Seoul, South
Korea (<EMAIL_ADDRESS>)
###### Abstract.
Real-world deployments of WiFi-based indoor localization in large public
venues are few and far between as most state-of-the-art solutions require
either client or infrastructure-side changes. Hence, even though high location
accuracy is possible with these solutions, they are not practical due to cost
and/or client adoption reasons. The majority of public venues use commercial
controller-managed WLAN solutions that allow neither client nor
infrastructure changes. In fact, in such venues we have observed highly
heterogeneous devices with very low adoption rates for client-side apps.
In this paper, we present our experiences in deploying a scalable location
system for such venues. We show that server-side localization is not trivial
and present two unique challenges associated with this approach, namely
_Cardinality Mismatch_ and _High Client Scan Latency_. The “Mismatch”
challenge results in a significant mismatch between the set of access points
(APs) reporting a client in the offline and online phases, while the “Latency”
challenge results in a low number of APs reporting data for any particular
client. We collect three weeks of detailed ground truth data ($\approx 200$
landmarks), from a WiFi setup that has been deployed for more than four years,
to provide evidences for the extent and understanding the impact of these
problems. Our analysis of real-world client devices reveal that the current
trend for the clients is to reduce scans, thereby adversely impacting their
localization accuracy. We analyze how localization is impacted when scans are
minimal. We propose heuristics to alleviate reduction in the accuracy despite
lesser scans. Besides the number of scans, we summarize the other challenges
and pitfalls of real deployments which hamper the localization accuracy.
WiFi, Localization, Server-side, Device-agnostic, Large-scale Measurements.
Journal: TCPS. CCS Concepts: Networks → Network measurement; Networks →
Location based services; Human-centered computing → Empirical studies in
ubiquitous and mobile computing.
## 1\. Introduction
There has been a long and rich history of WiFi-based indoor localization
research (Rajalakshmi Nandakumar et al., 2012; C. Wu et al., 2013; Anshul Rai
et al., 2012; Stephen P. Tarzia et al., 2011; Nilanjan Banerjee et al., 2010;
Anthony LaMarca et al., 2005; Martin Azizyan et al., 2009; He Wang et al.,
2012; Liu et al., 2012; Moustafa Youssef and Ashok Agrawala, 2005; Sen et al.,
2012; John Krumm and John Platt, 2003; P. Bahl and V. N. Padmanabhan, 2000;
Abhishek Goswami et al., 2011; Krishna Chintalapudi et al., 2010; D. Niculescu
and Badri Nath, 2003; Swarun Kumar et al., 2014; Jon Gjengset et al., 2014;
Jie Xiong and Kyle Jamieson, 2013; Manikanta Kotaru et al., 2015; Moustafa
Youssef et al., 2006; Stuart A. Golden and Steve S. Bateman, 2007; Domenico
Giustiniano and Stefan Mangold, 2011; Mariakakis et al., 2014; Souvik Sen et
al., 2015; Donny Huang et al., 2014; Wei Wang et al., 2015; Yanzi Zhu et al.,
2015; Lei Yang et al., 2015; Teng Wei and Xinyu Zhang, 2015). However, in
spite of several breakthroughs, there are very few real-world deployments of
WiFi-based indoor localization systems in public spaces. The reasons for this
are many-fold, with three of the most common being – ($a$) the high cost of
deployment, ($b$) arguably, the lack of compelling business use, and ($c$) the
inability of existing solutions to seamlessly work with all devices. In fact,
current solutions impose a tradeoff between universality, accuracy, and
energy, for example, client-based solutions that combine inertial-based
tracking with WiFi scanning offer significantly better accuracy but require a
mobile application which will possibly drain energy faster and which will be
downloaded by only a fraction of visitors (Zafari et al., 2017).
In this paper, we present our experiences with deploying and operating a WiFi-
based indoor localization system across the entire campus of a small Asian
university. It is worth noting that the environment is very densely occupied,
by $\approx 10,000$ students and $1,500$ faculty and staff. The system has
been in production for more than four years. It is deployed at multiple
venues including two universities (Singapore Management University, University
of Massachusetts, Amherst), and four different public spaces (Mall, Convention
Center, Airport, and Sentosa Resort) (Kasthuri Jayarajah et al., 2016; Rajesh
Krishna Balan et al., 2014). These venues use the localization system for
various real-time analytics such as group detection, occupancy detection, and
queue detection while taking care of user privacy.
Our goal is to highlight challenges and propose easy-to-integrate solutions to
build a universal indoor localization system – one that can spot-localize all
WiFi-enabled devices on campus without any client- or infrastructure-side
modifications. The scale and nature of this real environment present a unique
set of challenges – ($a$) the infrastructure, i.e., the controller and
APs do not allow any changes, ($b$) devices cannot be modified in any way i.e.
no explicit/implicit participation for data generation, no app download
allowed, and no chipset changes allowed, and ($c$) only available data is RSSI
measurements from APs, which are centrally controlled by the controller, using
a _Real-Time Location Service_ (RTLS) interface (Coleman and Westcott, 2009).
It is worth noting that in the face of these challenges we have to rule
out more sophisticated state-of-the-art schemes, such as fine-grained CSI
measurements (Manikanta Kotaru et al., 2015), Angle-of-Arrival (D. Niculescu
and Badri Nath, 2003), Time-of-Flight (Mariakakis et al., 2014), SignalSLAM
(P. Mirowski et al., 2013), or Inertial Sensing (Fan Li et al., 2012).
Given the challenges, we adopt an offline fingerprint-based approach to
compute each device’s location. Fingerprints have been demonstrated to be more
accurate than model-based approaches in densely crowded spaces (Liu et al.,
2007) and hence widely preferred. Our localization software processes the RSSI
updates using well-known “classical” fingerprint-based technique (P. Bahl and
V. N. Padmanabhan, 2000). Given the wide usage of this approach, our
experiences and results apply to a majority of the localization algorithms.
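To fix ideas, the classical fingerprint technique amounts to a nearest-neighbour search in RSSI space. The following is a minimal sketch of that idea (landmark names, AP identifiers, and RSSI values are hypothetical, not from the deployment):

```python
import math

# Offline phase: one averaged RSSI fingerprint (dBm) per landmark.
offline_map = {
    "lobby":   {"ap1": -45, "ap2": -60, "ap3": -75},
    "cafe":    {"ap1": -70, "ap2": -50, "ap3": -55},
    "library": {"ap1": -80, "ap2": -72, "ap3": -48},
}

MISSING = -100  # RSSI assumed for an AP absent from a report

def distance(fp, obs):
    """Euclidean distance in RSSI space over the union of reporting APs."""
    aps = set(fp) | set(obs)
    return math.sqrt(sum((fp.get(a, MISSING) - obs.get(a, MISSING)) ** 2
                         for a in aps))

def localize(obs):
    """Online phase: return the landmark with the closest fingerprint."""
    return min(offline_map, key=lambda lm: distance(offline_map[lm], obs))

print(localize({"ap1": -44, "ap2": -62, "ap3": -77}))  # -> lobby
```

In the server-side setting, the observation vector `obs` is assembled from the RSSI values that the APs report for a given client via the RTLS feed, rather than from a client-side scan.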
Our primary contribution is to detail the cases where such a conventional
approach succeeds and where it fails. We highlight the related challenges for
making the approach work in current, large-scale WiFi networks, and then
develop appropriate solutions to overcome the observed challenges. We collect
three weeks of detailed ground truth data ($\approx 200$ landmarks) in our
large-scale deployment, carefully construct a set of experimental studies to
show two unique challenges – _Cardinality Mismatch_ and _High Client Scan
Latency_ associated with a server-side localization approach. The three weeks
of data is representative of our four years of data.
($a$) _Cardinality Mismatch:_ We define cardinality as the set of APs
reporting for a client located at a specific landmark. We first show that the
cardinality, during the online phase, is _often_ quite different from the
cardinality in the offline phase. Note that this divergence is in the _set_ of
reporting APs, and not just merely a mismatch in the values of the RSSI
vectors. Intuitively, this upends the very premise of fingerprint-based
systems that the cardinality seen at any landmark is the same during the
offline and online phases. This phenomenon arises from the dynamic power and
client management performed by a centralized controller in all commercial
grade WiFi networks (for example, those provided by Aruba, Cisco, and other
vendors) to achieve outcomes such as ($i$) minimize overall interference
(shift neighboring APs to alternative channels), ($ii$) enhance throughput
(shift clients to alternative APs), and ($iii$) reduce energy consumption
(shut down redundant APs during periods of low load).
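A simple way to quantify this mismatch is the fraction of reporting APs not shared between the two phases (a sketch; the metric and the AP identifiers are our own illustration, not necessarily the definition used in the measurements):

```python
def cardinality_mismatch(offline_aps, online_aps):
    """Fraction of the union of reporting APs NOT seen in both phases
    (0.0 = identical sets, 1.0 = disjoint sets)."""
    offline_aps, online_aps = set(offline_aps), set(online_aps)
    union = offline_aps | online_aps
    if not union:
        return 0.0
    return 1.0 - len(offline_aps & online_aps) / len(union)

offline = {"ap1", "ap2", "ap3", "ap4"}       # APs reporting during training
online = {"ap3", "ap4", "ap5"}               # controller shifted channels/APs
print(cardinality_mismatch(offline, online))  # -> 0.6
```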
($b$) _High Client Scan Latency:_ Most localization systems use client-side
localization techniques where clients actively scan the network when they need
a location fix. Specialized mobile apps trigger active scans at the client
devices. However, when using server-side localization, the location system has
no way to induce scans from client devices. This is because such a localization
system is deployed in public places, e.g., universities and malls, where
changes at the client device are not feasible. Hence, the system can only
“see” clients when clients scan as part of their normal behavior. However, as
we show in Section 4, most clients today, by default, prefer not to scan.
We analyze different usage modes of the devices. Our observations reveal that
mobile clients trigger scans only during handover, while stationary clients
trigger scans only when their screen is turned on.
These phenomena do not exist in the small-scale deployments often used in past
pilot studies, where each AP is configured independently. In large-scale
deployments, where it is fairly common to use controller-managed WLANs with a
large number of devices, these phenomena invariably persist to a great extent.
To exemplify, we noticed $57.30$% instances of cardinality mismatch in $2.4$
GHz and $30.60$% in $5$ GHz in our deployment. We observed the $90^{th}$
percentile of the client scan interval to be $20$ minutes. When localizing
with fingerprint-based solutions in such environments, these phenomena
translate to either _minimal_ or, even worse, _no_ matching APs, resulting in
substantial delays between client location updates and “teleporting” of
clients across locations.
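Scan-interval statistics such as these percentiles can be computed from per-client scan timestamps observed at the server side; a minimal sketch with made-up timestamps (the nearest-rank percentile here is a simplification):

```python
def percentile(values, q):
    """Nearest-rank percentile (q in [0, 100]) of a list of numbers."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(q / 100 * (len(s) - 1))))
    return s[k]

# Hypothetical scan timestamps (seconds) for one client.
scan_times = [0, 30, 95, 100, 700, 1900, 1960, 3160]
intervals = [b - a for a, b in zip(scan_times, scan_times[1:])]
print(percentile(intervals, 90))  # -> 1200
```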
It is important to note that not only is the schedule of these algorithms
non-deterministic, but so is their distribution across the offline and online
phases. This is attributed to the fundamental fact that the dynamics of WiFi
networks, such as load and interference, are non-deterministic in most cases,
and that the controller algorithm is a black box to us. Furthermore, the
differences in signal propagation and scanning behavior of $2.4$ and $5$ GHz
contribute to these problems. We believe that we are the first to present the
challenges of server-side localization as well as their mitigation. Our
proposals are device-agnostic, simple, and easily integrable with any large-
scale WiFi deployment to efficiently localize devices.
Key Contributions:
* •
We identify and describe a couple of novel and fundamental problems associated
with a server-side localization framework. In particular, we provide evidence
for the (i) “Cardinality Mismatch” and (ii) “High Client Scan Latency”
problems, explain why these problems are progressively becoming more
significant in commercial WiFi deployments. We discuss the reasons why these
problems are non-trivial to be solved given the challenge of no
client/infrastructure-side allowed. Our entire analysis is for both frequency
bands – $2.4$ and $5$ GHz.
* •
We provide valuable insights about the causes of these problems with extensive
evaluations based on the ground truth data collected over three weeks for
$200$ landmarks. Motivated from the real-world usage of clients, we study
their 4 states – Disconnected, Inactive, Intermittent, and Active. For each of
these states, we analyze – (i) their scanning behavior while being mobile as
well as stationary and (ii) the impact of these states on the achieved
cardinality in the presence and the absence of scans. We demonstrate the
impact of considering “only” scanning frames as compared to “only” non-
scanning frames on the localization errors. Furthermore, we compare our
results with real-world scenario of having a mix of both categories of frames.
* •
We propose heuristics to improve the accuracy of the localization in the face
of these problems. We see an improvement from a minimum of $35.40$% to a
maximum of $100$%. We show an improvement in the higher percentiles over
SignalSLAM (P. Mirowski et al., 2013). This shows that our lessons learned
have the potential of improving the existing localization algorithms.
* •
We describe our experiences with deploying, managing, and improving a
fingerprint-based WiFi localization system, which has been operational, since
$2013$, across the entire campus of Singapore Management University. We not
only focus on the final “best solution” that uses RTLS data feeds, but also
discuss the challenges and pitfalls encountered over the years.
Paper Organization: We discuss the related works in Section 2. We present the
system architecture and the details of data collection in Section 3. We
introduce the challenges, their evidences, and propose the solutions in
Section 4. We discuss the challenges of localizing clients in real world
deployments and the limitations of our proposed solutions in Section 5. We
conclude in Section 6.
## 2\. Related Work
In this section, we discuss existing solutions and their limitations for
indoor localization.
Fingerprint vs. Model-based Solutions: One of the oldest localization
techniques use either a fingerprint-based (John Krumm and John Platt, 2003; P.
Bahl and V. N. Padmanabhan, 2000; Martin Azizyan et al., 2009; Sen et al.,
2012; Moustafa Youssef and Ashok Agrawala, 2005; Anshul Rai et al., 2012;
Zheng Yang et al., 2012; Liu et al., 2012) or model-based (Krishna
Chintalapudi et al., 2010; Hyuk Lim et al., 2010; Abhishek Goswami et al.,
2011; D. Turner et al., 2011) approach, or a combination of both (Liqun Li et
al., 2014a). Overall, fingerprint-based solutions tend to have much higher
accuracies than other approaches albeit with a high setup and maintenance cost
(Liu et al., 2007). The fingerprint-based approach was pioneered by Radar (P.
Bahl and V. N. Padmanabhan, 2000) and has spurred numerous follow-on research.
For example, Horus (Moustafa Youssef and Ashok Agrawala, 2005) uses a
probabilistic technique to construct statistical radio maps, which can infer
locations with centimeter level accuracy. PinLoc (Sen et al., 2012)
incorporates physical layer information into location fingerprints. Liu et al.
(Liu et al., 2012) improved accuracy by adopting a peer-assisted approach
based on p$2$p acoustic ranging estimates. Another thread of research in this
line is to reduce the fingerprinting effort, for example, using crowdsourcing
(Zheng Yang et al., 2012; C. Wu et al., 2013; Anshul Rai et al., 2012;
Yuanfang Chen et al., 2013) and down-sampling (John Krumm and John Platt,
2003). The mathematical signal propagation model approach (Abhishek Goswami et
al., 2011; Krishna Chintalapudi et al., 2010) has the benefit of easy
deployment (no need for fingerprints) although its accuracy suffers when the
environment layout or crowd dynamics change (D. Turner et al., 2011). Systems,
such as EZ, improve the accuracy by additionally using GPS to guide the model
construction.
Client vs. Infrastructure-based solutions: There is a rich history of client-
based indoor location solutions, to name a few, SignalSLAM (P. Mirowski et
al., 2013), SurroundSense (Martin Azizyan et al., 2009), UnLoc (He Wang et
al., 2012), and many others (Rajalakshmi Nandakumar et al., 2012; C. Wu et
al., 2013; Anshul Rai et al., 2012; Stephen P. Tarzia et al., 2011; Nilanjan
Banerjee et al., 2010; Anthony LaMarca et al., 2005). All of them share some
commonalities in that they extract sensor signals (of various types) from
client devices to localize. The location algorithms usually run on the device
itself; however, it is also possible to run the algorithm on a server and use
the signals from multiple clients to achieve better performance (Liu et al.,
2012). Overall, client-based solutions have very high accuracy (centimeter
resolution in some cases (Moustafa Youssef and Ashok Agrawala, 2005; Sen et
al., 2012)). An alternative would be to pull signal measurements directly from
the WiFi infrastructure, similar to what our solution does. The research
community has only lightly explored this approach since it requires full
access to the WLAN controllers, which is usually proprietary. Our main
competitors are the commercial WiFi providers themselves. In particular, both
Cisco (Cisco, 2017) and Aruba (Aruba, 2017a) offer location services. These
solutions use server-side tracking coupled with model-based approaches (to
eliminate fingerprint setup overhead).
Other Solutions: There are several other solutions, complementary to the
signal strength-based technique. Time-based solutions (Moustafa Youssef et
al., 2006; Stuart A. Golden and Steve S. Bateman, 2007; Domenico Giustiniano
and Stefan Mangold, 2011; Mariakakis et al., 2014; Souvik Sen et al., 2015)
use the arrival time of signals to estimate the distance between client and
AP, while angle-based solutions (D. Niculescu and Badri Nath, 2003; Swarun
Kumar et al., 2014; Jon Gjengset et al., 2014; Jie Xiong and Kyle Jamieson,
2013; Manikanta Kotaru et al., 2015) utilize angle-of-arrival information,
estimated from an antenna array, to locate mobile users. Recently, the notion
of passive location tracking (Donny Huang et al., 2014; Wei Wang et al., 2015;
Yanzi Zhu et al., 2015; Lei Yang et al., 2015; Teng Wei and Xinyu Zhang, 2015)
has been proposed, which does not assume people carry devices. In large and
crowded venues, however, the feasibility and accuracy of such passive tracking
is still an open question. Other systems include light-based localization
(Liqun Li et al., 2014b; Pan Hu et al., 2013; Naveed Ul Hassan et al., 2015)
and acoustic-based localization (Zengbin Zhang et al., 2012; Sanjib Sur et
al., 2014; Viktor Erdélyi et al., 2018; Md Abdullah Al Hafiz Khan et al., 2015).
Limitations of above solutions: These solutions can achieve higher accuracy,
but they have at least one of the following limitations – ($a$) need for
customized hardware, which is impractical in large-scale deployments, ($b$)
installation of a client application, making them hard to scale, ($c$) rooting
the client OS - Android or iOS, which limits their generalizability, ($d$)
high energy consumption, ($e$) high error rates in dense networks, and ($f$)
proprietary and expensive to deploy (especially solutions from vendors like
Cisco and Aruba).
To summarize, even though several wonderful solutions are available, their
scalability is still a question. Therefore, we advocate a server-side
localization approach with fingerprints. Our aim is not to compare the
efficacy of different approaches, but to address the challenges of practical
and widely deployed device-agnostic indoor localization using today’s WiFi
standards and hardware, for example, use of $5$ GHz band and controller-based
architecture.
## 3\. System Architecture and Data Collection
In this section, we present the details about system architecture and the
dataset.
### 3.1. Background & Deployment
This work began in $2013$ when we started deploying a WiFi-based localization
solution across the entire campus. It has since gone through many major and
minor evolutions. However, in this paper, we focus our evaluation and results
on just one venue – a university, as we have full access to that venue.
Our university campus has seven schools in different buildings. Five buildings
have six floors; the remaining two have five and three floors respectively,
with a floor area of $\approx 70,000$ $m^{2}$. Landmarks, characterized by
water sprinklers deployed every three meters on a given floor, denote
particular locations. There are $3203$ landmarks across thirty-eight floors of
seven schools. WLAN deployment includes $750$\+ dual-band APs, centrally
controlled by eleven WiFi controllers, with $\approx 4000$ associated clients
per day.
Figure 1. Block diagram of Indoor Localization System. Note that lines between AP and C denote the coverage area of AP and not the association. Legend: AP - Access Point, C - Client, WLAN - Wireless LAN, RTLS - Real-Time Location System

Field | Description
---|---
Timestamp | AP Epoch time (milliseconds)
Client MAC | SHA$1$ of original MAC address
Age | #Seconds since the client was last seen at an AP
Channel | Band ($2.4$/$5$ GHz) on which client was seen
AP | MAC address of the Access Point
Association Status | Client’s association status (associated/unassociated)
Data Rate | MAC layer bit-rate of last transmission by the client
RSSI | Average RSSI for duration when client was seen
Table 1. Details of RTLS data feeds
### 3.2. System Architecture
Figure 1 represents the primary building blocks of the system. The system is
bootstrapped with APs configured by the WLAN controller to send RTLS data
feeds every $5$ seconds to the RTLS server. Most commercial WLAN
infrastructures allow such a configuration. Once configured, APs bypass WLAN
controller and report RTLS data feeds directly to our Location Server. Table 1
presents all the fields contained in an RTLS data feed per client. The
reported RSSI value is not on a per-frame basis, but a summarized value from
multiple received frames. The Location Server analyzes these RTLS data feeds
for the signal strengths reported by different APs to estimate the location of
a client. Note that the APs do not report the type of frames. They gather
information from their current channel of operation and scan other channels to
collect data. Vendors have microscopic details of what APs measure (Aruba,
2017b); however, as end-users, we do not have access to any more information
than what is specified. Nevertheless, even this information at large scale
gives us a view of the entire network from a single vantage point.
### 3.3. Recording of the Fingerprints
We define a fingerprint as a vector of RSSI from APs for a given client. We
consider two types of fingerprints – offline and online. An offline
fingerprint is collected and stored in a database before the process of
localization is bootstrapped, while an online fingerprint is collected in
real-time.
Offline Fingerprinting A two-dimensional offline fingerprint map is prepared
for each landmark on a per-floor basis. The client devices used for
fingerprinting were dual-band Android phones, which were associated with the
network, and they actively scanned for APs. For each landmark, the device
collected data for $5$ minutes. While the clients scan their vicinity, APs
collate RSSI reports for the client and send their measurements as RTLS data
feeds to the Location Server. For a given landmark $L_{i}$, an offline
fingerprint takes the following form:
(1) $<L_{i},B,AP_{1}:RSSI_{1};...;AP_{n}:RSSI_{n}>$
We maintain fingerprints for both the $2.4$ and $5$ GHz frequency bands. Band
$B$, in the above equation, takes the value of the band being recorded. The
vectors are
stored in a database on the Location Server.
Online Fingerprinting Localization of a client is done with online
fingerprints. An online fingerprint takes the same syntax as offline
fingerprints in Equation 1, except the landmark, as shown below:
(2) $<B,AP_{1}:RSSI_{1};...;AP_{m}:RSSI_{m}>$
Now, we match this online fingerprint with offline fingerprints of each
landmark to calculate the distance in signal space, as discussed in (P. Bahl
and V. N. Padmanabhan, 2000). The landmark with minimum distance in signal
space is reported as the probable location of the client.
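The nearest-neighbor matching described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the database contents, AP names, the missing-AP penalty, and the Euclidean metric over the union of APs are all our assumptions.

```python
import math

# Hypothetical offline fingerprint database: landmark -> {AP id: mean RSSI (dBm)}
OFFLINE_DB = {
    "L1": {"ap1": -45, "ap2": -60, "ap3": -72},
    "L2": {"ap1": -70, "ap2": -50, "ap3": -55},
}

MISSING_RSSI = -100  # assumed penalty when an AP is absent from one vector

def signal_distance(online, offline):
    """Euclidean distance in signal space over the union of reporting APs."""
    aps = set(online) | set(offline)
    return math.sqrt(sum(
        (online.get(ap, MISSING_RSSI) - offline.get(ap, MISSING_RSSI)) ** 2
        for ap in aps))

def localize(online):
    """Return the landmark whose offline fingerprint is closest in signal space."""
    return min(OFFLINE_DB, key=lambda lm: signal_distance(online, OFFLINE_DB[lm]))
```

`localize({"ap1": -46, "ap2": -61})` would pick `"L1"` here, since its offline vector is the closer of the two in signal space.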
### 3.4. Pre-processing of the Data
Now, we present the details of data collection and its processing.
Figure 2. Floor Map of the school where we collected ground truth data.
#### 3.4.1. Collection of the Ground Truth
We collect the ground truth data for online fingerprints. We want to correlate
the data collection with real-world usage scenarios. Therefore, we choose four
most common states of WiFi devices as per their WiFi association status and
data transmission. The states are – ($a$) Disconnected, and WiFi Associated
but ($b$) never actively used by the user, ($c$) intermittently used, or ($d$)
actively used. These states implicitly modulate the scanning
frequency. We use a separate phone for each state; thus, we use $4$ Samsung
Galaxy S$7$ phones to record ground truth for each landmark.
State ($a$) client is disconnected. In this state, WiFi is turned on but not
associated with any AP, and the screen remains off throughout the data
collection. Therefore, the only traffic generated by this client is scanning
traffic; there is no data traffic. We ensure that this client did not follow MAC address
randomization, which most latest devices follow in the unassociated state
(Jeremy Martin et al., 2017). State ($b$) client is associated but inactive.
In this state, WiFi is turned on, it is associated but the screen remains off
throughout the data collection. State ($c$) client is associated and
intermittently used. In this state, WiFi is on, the client is associated and
the user intermittently uses the device. This is one of the most common states
for mobile devices, and previous research (Dheryta Jaisinghani et al., 2017)
states that scanning is triggered whenever the screen of the device is lit up. State ($d$)
client is associated and actively used. In this state, WiFi is on, the client
is associated, and a YouTube video plays throughout the data collection. This
state generates the most data traffic, i.e., non-scanning traffic. Each client
stayed at a landmark for about a minute before moving to the next landmark.
We manually noted down start time and end time for every landmark at the
granularity of seconds. We did this exercise for $3203$ landmarks of our
university and collected $86$ hours worth of data, which accounts for $54,096$
files carrying $274$ GB of data. The time to localize one client is
$40$ seconds; processing the entire dataset would take $\approx 100$ days.
Therefore, given the size of the entire dataset, we present our analysis of
$200$ landmarks, which accounts for $3121$ files with $15.3$ GB of data.
Figure 2 shows the floor map of one of the schools whose data we use for our
analysis.
Our aim is to demonstrate the challenges associated with fingerprint-based
localization. These challenges apply to all the solutions that employ
fingerprint-based localization, irrespective of the type of device present in
the network. The variation of RSSI with device heterogeneity is well known
(Yogita Chapre et al., 2013) and that will further exacerbate the problems
identified by this paper. We collect ground truth with only one device so that
we can highlight issues without any complications added by heterogeneous
devices.
#### 3.4.2. Pre-processing of the RTLS Data Feeds
Our code reads every feed to extract the details of APs reporting a particular
client. RTLS data feeds may contain stale records for a client. Therefore, we
filter the raw RTLS data feeds for the latest values, keeping records with age
less than or equal to $15$ seconds and RSSI greater than or equal to $-72$
dBm. The threshold for age is a heuristic to take the most recent readings.
The threshold for RSSI is decided based on the fact that a client loses
association when RSSI is below $-72$ dBm.
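The filtering step can be sketched as follows. The thresholds come from the text; the record format and field names (`age`, `rssi`) are our assumptions.

```python
MAX_AGE_S = 15      # heuristic from the text: keep only recent sightings
MIN_RSSI_DBM = -72  # a client loses association below this RSSI

def filter_feeds(feeds):
    """Drop stale or too-weak RTLS records.

    `feeds` is an iterable of dicts carrying (at least) an `age` in seconds
    and an `rssi` in dBm, per the fields listed in Table 1."""
    return [f for f in feeds
            if f["age"] <= MAX_AGE_S and f["rssi"] >= MIN_RSSI_DBM]
```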
For our analysis, we classify MAC layer frames in two classes ($a$) _Scanning
Frames_ – high power and low bit rate probe requests and ($b$) _Non-Scanning
Frames_ – all other MAC layer frames. Offline fingerprints are derived from
the scanning frames, which are known to provide accurate distance estimates as
they are transmitted at full power. In the offline phase, a client is
configured to scan continuously. However, in the online phase we have no
control over the scanning behavior of the client, resulting in a mix of
scanning and non-scanning frames. Therefore, while localizing with the
fingerprints, RSSIs available for matching are from the different categories
of frames. RTLS data feeds do not report the type of frame and do not have a
one-to-one mapping of MAC layer frames to the feeds. Therefore, we devise a
probabilistic approach to identify these frames.
We designed a set of controlled experiments, where we configured the client
in one of two settings at a time: ($a$) send scanning frames only and ($b$)
send non-scanning frames only. These two settings are mutually exclusive. We
collected the traffic from the client with a sniffer as well as the
corresponding RTLS data feeds. Then, we compare both the logs – sniffer and
RTLS, to confirm the frame types and the corresponding data rates.
Our analysis reveals that when a client is associated and sending non-scanning
frames, the AP to which it is associated reports the client as _associated_.
The data rates of the RTLS data feeds vary among the various $802.11g$ rates,
e.g. $1$, $2$, $5.5$, …, $54$ Mbps. Even though our network deployment is
dual-band and supports the latest $802.11$ standards, including $802.11ac$,
the rates reported in the RTLS data feeds follow $802.11g$. We do not have any
visibility into the controller’s algorithm to deduce the reason for this
mismatch in the reported data rates. However, when a client sends scanning
frames, all the APs that can see the client report the client as
_unassociated_, and the reported data rate is fixed at $1$, $6$, or
$24$ Mbps, as per the configured probe response rate.
We use these facts to differentiate non-scanning and scanning RTLS data feeds.
We believe this approach correctly infers scanning frames because ($a$) the
data rates are fixed to $1$, $6$, or $24$ Mbps, ($b$) when an associated
client scans, other APs report that client as unassociated, and ($c$) an
unassociated client can only send either scanning or association frames.
However, our approach may still misclassify frames in the following cases –
($a$) when an associated client scans and the AP to which it is associated
reports it. This AP reports the client as associated and its data rate as
$1$, $6$, or $24$ Mbps; these rates may also arise from non-scanning frames.
We identify such feeds
as non-scanning. ($b$) When an unassociated client sends association or
authentication frames. In this case also, the rates overlap with the scanning
data rates and the association status is reported as unassociated. Here, we
incorrectly identify non-scanning frames as scanning frames. However, these
cases are rare. For other cases, our approach is deterministically correct.
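The inference rules above can be condensed into a small classifier. This is a sketch: the field values are assumptions, while the fixed probe-response rates and the association-status rule follow the measurements described above.

```python
SCAN_RATES_MBPS = {1, 6, 24}  # configured probe response rates

def classify_feed(association_status, data_rate_mbps):
    """Label an RTLS feed as 'scanning' or 'non-scanning'.

    An unassociated client seen at a fixed probe-response rate is treated as
    scanning; everything else as non-scanning. As discussed in the text, this
    mislabels a few corner cases (e.g. association/authentication frames)."""
    if association_status == "unassociated" and data_rate_mbps in SCAN_RATES_MBPS:
        return "scanning"
    return "non-scanning"
```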
## 4\. Challenges Discovered
In this section, we give evidence of the issues, namely Cardinality Mismatch
and High Client Scan Latencies. We compare the severity of these issues for
both the frequency bands, identify the causes behind these issues, and measure
their impact on the issues.
(a) Offline Phase
(b) Online Phase
Figure 3. Cardinalities observed during the offline and online phases. For
both the phases, the cardinalities are lower for $5$ GHz. During the online
phase, there is a substantial decrease in the cardinality for both bands as
compared to the offline phase.
(a) Client Close To AP
(b) Client Far From AP
Figure 4. Variations in RSSI for scanning and non-scanning frames in two
scenarios – ($a$) client close to the AP and ($b$) client far from the AP. In
both cases, the RSSI from scanning frames varies far less than that from
non-scanning frames.

Figure 5. Frequency of scanning in both bands increases as RSSI reduces.
$2.4$ GHz experiences higher scan frequencies.
### 4.1. Evidence of the Issues
The Cardinality Mismatch arises from the dynamic power and client management
performed by a centralized controller as well as the client-side power
management. Given the dynamic nature of these management policies, it is not
possible to estimate their implications on the Cardinality Mismatch, and
thereby on the localization errors. We take an empirical approach to ($a$)
find out the severity of these implications on the Cardinality Mismatch and
the localization error and ($b$) identify the tunability of the implicating
factors.
Figure 3 plots the differences in cardinality between the offline and online
phases for $2.4$ and $5$ GHz. Figure 3(a) shows the cardinality observed in
our offline fingerprints. Figure 3(b) shows the cardinality observed during
the online phase. While the maximum cardinality is $16$ during the offline
phase, it is merely $6$ in the online phase. This shows the extent of the
Cardinality Mismatch. In the online phase, only $1$ AP reports for a client
$80$% of the time in $5$ GHz and $40$% of the time in $2.4$ GHz. Any fingerprint-
based algorithm will be adversely affected by such a big difference in the
cardinality. For each band, we quantify the extent of the Cardinality
Mismatch. Overall, across all the cardinalities, $2.4$ GHz has
$57.30\%$ mismatches and $5$ GHz has $30.6\%$ mismatches. The $5$ GHz band is
more adversely affected by the Cardinality Mismatch issue because it
experiences lower cardinality, which increases the chances of a mismatch.
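As a rough sketch, cardinality and a simplified mismatch fraction can be computed as below. The feed format and the definition of a mismatch as "online cardinality below the offline cardinality for the same landmark" are our simplifying assumptions.

```python
def cardinality(feed_group):
    """Number of distinct APs reporting a client in one localization window."""
    return len({f["ap"] for f in feed_group})

def mismatch_fraction(online_groups, offline_card):
    """Fraction of online windows whose cardinality falls below the offline
    cardinality recorded for the same landmark (simplified definition of a
    Cardinality Mismatch)."""
    mismatches = sum(1 for g in online_groups if cardinality(g) < offline_card)
    return mismatches / len(online_groups)
```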
$2.4$ GHz always sees higher cardinality than $5$ GHz, both during the offline
and online phases. This is because – ($a$) signals in $2.4$ GHz travel farther
than those of $5$ GHz, and ($b$) more scanning frames are transmitted in
$2.4$ GHz. Unlike the data frames, the scanning frames are broadcast and
hence heard by more APs. As the number of scanning frames increases,
more APs hear them and respond, thereby increasing the cardinality.
Besides, the RSSI variation for the scanning frames is lower than that of the
data frames. To validate this, we perform a controlled experiment with a
stationary client and collect the client’s traffic using a sniffer. The client
has an ongoing data transmission, and periodic scanning is triggered every $15$
seconds. From the sniffed packet capture, we extract per-frame RSSI. The
experiment is repeated for two scenarios- ($a$) the client is close to the AP
and ($b$) the client is far from the AP. With these two scenarios, we emulate
the client behavior for low and high RSSI from the AP.
Figure 4 shows the RSSI measurements in two cases. In the first scenario, when
the client is close to an AP, RSSI of the scanning frames varies by up to $10$
dB and for non-scanning frames it varies by up to $50$ dB. Similarly, in the
second scenario, when the client is far from AP, RSSI of scanning frames
varies by up to $5$ dB and for non-scanning frames it varies by up to $30$ dB.
Both our experiments validate that the RSSI from scanning frames varies far
less than that from non-scanning frames. This means the online RSSI from
scanning frames matches more closely and is a much more reliable indicator of
the client’s position. We want to study how clients in their default configuration
behave in _real_ networks; therefore we do not modify the default behavior of
client driver in any way. We repeated the experiment with devices of Samsung,
Nexus, Xiaomi, and iPhone.
Next, we study the effect of the band on the frequency of scanning. We collect
WiFi traffic with sniffers listening on the channels in operation at that time
in both the bands for $6$ hours. Data from $200$ WiFi clients is recorded.
Figure 5 shows the plot. For both $2.4$ and $5$ GHz bands, the frequency
increases as RSSI reduces. Overall, the frequency is lower for $5$ GHz, even
though most ($\approx 2X$) of the clients in our network associate in $5$ GHz.
The higher the frequency of scanning, the lower the chance of Cardinality Mismatch.
Our comparative analysis of the two bands revealed that instances of frame
loss and poor connection quality, which cause scanning, are much rarer in
$5$ GHz due to lower interference. The analysis of the scanning behavior of
our clients reveals that – ($a$) $90^{th}$ percentile values of scanning
intervals are on the order of a few thousand seconds, which is a lot for
fingerprint-based solutions, ($b$) $5$ GHz is the least preferred band of scanning, and ($c$)
clients rarely scan both the frequency bands. Hence, we rule out the
possibility of the reduced range of $5$ GHz resulting in lesser scanning
frames.
### 4.2. Causes Behind the Issues
Next, we study the combined effect of frequency of scanning, i.e., number of
scans per hour, and transmission distance on the cardinality. For this, we
consider clients configured in one of the four states as discussed in Section
3.4.1. Note that each state implicitly controls the amount of scanning. We do
not manually control the scanning behavior, so as to imitate the real world.
Figure 6 shows the interval between two consecutive scans, i.e., the inter-
scan arrival time, observed for each client state when the client moves from
one landmark to another. We find that the median scan intervals are $15$-$20$
seconds for a client with either WiFi intermittently used or continuously
used, while it is $26$-$47$ seconds for a client whose WiFi is disconnected or
not actively used. However, in all the cases, $90^{th}$ percentile values are
in the thousands of seconds, which signifies that clients scan infrequently in
reality.
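The inter-scan interval statistics above can be computed directly from per-client scan timestamps. A sketch under an assumed input format (a list of scan times in seconds); the nearest-rank percentile used here is one of several common definitions.

```python
from statistics import median

def inter_scan_intervals(scan_times_s):
    """Intervals between consecutive scans, from timestamps in seconds."""
    ts = sorted(scan_times_s)
    return [b - a for a, b in zip(ts, ts[1:])]

def percentile(values, p):
    """Nearest-rank percentile (0 < p <= 100)."""
    vs = sorted(values)
    k = max(0, int(round(p / 100 * len(vs))) - 1)
    return vs[k]
```

For a client that scans at `[0, 16, 35, 1035]` seconds, the median interval is modest while the tail interval is three orders of magnitude larger, mirroring the skew reported in Figure 6.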
Figure 6. Inter Scan Arrival Time for a client in four states – Disconnected, Inactive, Intermittent, and Active. The intermittent and active states have median scanning time of $16$-$19$ seconds, while it increases to $26$-$47$ seconds for the inactive and disconnected clients. Notice that the upper quartile inter scan arrival times go up to $1000$s of seconds, which means clients mostly do not scan and hence the reduced cardinality.

Phone | Disconnected | Inactive | Intermittent | Active
---|---|---|---|---
iPhone $6$ (iOS $10.3$) | Random | ✗ | ✗ | ✗
Nexus $5$X (Android $7.1$) | $100$-$300$s | ✗ | Screen Off$\rightarrow$On | Once
Galaxy S$7$ (Android $6.0$) | $100$-$2600$s | $1200$s | Screen Off$\rightarrow$On | ✗
Galaxy S$3$ (Android $4.0$) | $240$s | $300$-$1300$s | Screen Off$\rightarrow$On | ✗
Moto G$4$ (Android $7.1$) | $100$-$150$s | $10$-$1000$s | Screen Off$\rightarrow$On | $15$s
SonyXperia (Android $6.0$) | Once | $5$-$30$s | Screen Off$\rightarrow$On | Once
Table 2. Scanning Behavior of the Stationary clients. Most devices either do
not trigger scans or perform minimal scans while in the Disconnected,
Inactive, or Active states. It may take up to $2600$ seconds for devices to
trigger a scan. This latency reduces the localization accuracy. An exception
is the Intermittent state, in which all the Android devices trigger scans as
and when the screen is turned on; otherwise, the devices stay silent. The
reduced number of active scans, across the states, results in inaccurate
localization.
We evaluate the scanning behavior of stationary clients by closely monitoring
$6$ different phone models. Table 2 summarizes the measurements and confirms
our observations of the reduced scans. Unlike mobile clients, stationary
clients tend to scan much less. In the Intermittent state, clients trigger a
scan almost every time the screen is turned on. This is especially true for
the Android clients. However, scanning is infrequent in all the other states,
even if the client is in the Active state. Such a behavior results in the
reduced cardinality and therefore inaccurate localization.
Figure 7. The number of APs reporting in the absence and presence of scanning
for the $4$ client states, per band – ($a$) Disconnected, $2.4$ GHz; ($b$)
Disconnected, $5$ GHz; ($c$) Inactive, $2.4$ GHz; ($d$) Inactive, $5$ GHz;
($e$) Intermittent, $2.4$ GHz; ($f$) Intermittent, $5$ GHz; ($g$) Active,
$2.4$ GHz; ($h$) Active, $5$ GHz. When a client is disconnected, the APs
report only when the client scans (panels ($a$) and ($b$)). For the remaining
$3$ states, APs report irrespective of the scanning status; however, the
number of reporting APs is consistently higher when the client scans.
$2.4$ GHz always has more APs reporting than $5$ GHz.
In the absence of client scans, APs get only non-scanning frames. For each of
the four states of the client, we study how many APs report that client, i.e.,
the cardinality. With this analysis, we are able to compare the cardinality in
the presence and absence of scans, for both $2.4$ and $5$ GHz bands. We see
that as the frequency of scanning increases, more APs respond and the
cardinality increases. We expand the result for each state of the client in
Figure 7. The cardinality is consistently higher for $2.4$ GHz than for
$5$ GHz, and it further increases in the presence of scans. This implies that
a higher frequency of scanning possibly reduces the Cardinality Mismatch and
vice versa.
Figure 8. Variations in RSSI in $2.4$ GHz and $5$ GHz for a stationary client
as measured at the server. Notice that $5$ GHz is more stable than $2.4$
GHz.
Frames | Transmission Distance | RSSI Variation | Frequency of Transmission
---|---|---|---
Scanning | _High_ – ✓ | _Low_ – ✓ | _Low_ – ✗
Non-Scanning | _Low_ – ✗ | _High_ – ✗ | _High_ – ✓
Band | Transmission Distance | RSSI Variation | Frequency of Scanning
---|---|---|---
$2.4$ GHz | _High_ – ✓ | _High_ – ✗ | _High_ – ✓
$5$ GHz | _Low_ – ✗ | _Low_ – ✓ | _Low_ – ✗
Table 3. A summary of the causes and their impact (✓- Reduces Localization
Errors, ✗- Increases Localization Errors). The causes conflict with each
other, making server-side localization non-trivial.
However, a downside for $2.4$ GHz is that the frames (both scanning and non-
scanning) have higher variation in RSSI. This means, even though the extent of
the Cardinality Mismatch is lower, the RSSI will differ more in $2.4$ GHz. To
confirm this, we analyze the RSSI from a stationary client by enabling its
association in one band at a time and disabling the other band altogether. We
use the RSSI recorded at the RTLS server for a duration of $1$ hour. As shown
in Figure 8, even with more scanning information available $2.4$ GHz is more
prone to RSSI fluctuations than $5$ GHz. The reasons for this behavior are
($a$) the range of $2.4$ GHz is almost double than that of $5$ GHz and ($b$) a
lesser number of non-overlapping channels makes it susceptible to
interference. Therefore, RSSI from $2.4$ GHz results in predicting distant and
transient locations. We validate this in different locations with devices of
four other models.
To summarize, there is a significant extent of Cardinality Mismatch and High
Client Scan Latency. There is a difference in the extent of the issues for the
two classes of frames and the two bands of operation. While scanning frames
have a longer transmission distance and less variation in RSSI, they are not
often sent by the clients. The factors favoring $2.4$ GHz are a longer
transmission distance and a higher frequency of scanning. However, low
variation in RSSI works in favor of $5$ GHz. We summarize these observations
in Table 3.
(a) Same Floor
(b) Different Floor
Figure 9. Localization errors reported with baseline fingerprinting algorithm.
The errors are measured as Same Floor errors in meters and Different Floor
errors in percentage of floors for Cardinalities ranging from $1$ to $6$.
Cardinalities >$3$ are Not Applicable to $5$ GHz due to cardinality mismatch.
### 4.3. Impact of Causes on Localization Errors
We now evaluate the impact of the causes on the localization errors. We
implemented server-side localization using a well-known fingerprint-based
method (P. Bahl and V. N. Padmanabhan, 2000). Since we use server-side
processing, we do not require any client-side modification. Our proposals do
not make assumptions about hardware or operating system of the clients or the
controller. Although each adaptation of fingerprint-based technique from the
existing body of work may result in different errors, our exercise gives us a
baseline that cuts across all the adaptations. The $2.4$ and $5$ GHz bands
differ in distance of transmission, variation in RSSI, and frequency of
scanning. We measure the localization errors for both the bands.
We report localization errors for each value of the cardinality in online
phase to understand how the error varies as a function of the cardinality. We
consider a multi-storey building; hence, the clients may be localized across
floors, irrespective of their current position. Therefore, we measure the
errors in terms of ($a$) Different Floor and ($b$) Same Floor errors.
Different Floor error is the percentage of total records for which a wrong
floor is estimated. These are seen at the higher percentiles. For the rest of
the records, the Same Floor error is the distance in meters between the actual
and the predicted landmark on the floor. The errors at the higher percentiles
are essential for security applications, for example, an error by a floor in
localizing a suspect can make or break the evidence. We want to minimize both
the errors.
Figure 9 shows the results for the Baseline fingerprinting-based localization.
Overall, we observe that the errors are high for the low cardinalities, and
the errors for $2.4$ GHz are larger compared to those of $5$ GHz. This is
despite the fact that more APs hear clients and the frequency of transmission
of scanning frames is higher in $2.4$ GHz. We conclude that the variation in
RSSI, including that induced by the transmission power control, has a
significant impact on the cardinality and therefore on the localization
errors.
(a) 2.4 GHz
(b) 5 GHz
Figure 10. Bifurcation of Localization Errors with and without scanning frames
for $2.4$ and $5$ GHz. Localizing a client with only scanning frames reduces
errors in $2.4$ GHz, while only non-scanning frames reduce errors in $5$ GHz.
In order to understand the significance of scanning frames on localization
errors, we study the spectrum of errors caused due to each type of frame –
scanning and non-scanning. Results in Figure 9 combine both types of frames;
we now bifurcate the results for each frame type. The analysis is categorized
for – ($a$) both scanning and non-scanning frames, ($b$) only scanning frames,
and ($c$) only non-scanning frames. Figure 10 shows the localization errors on
the same floor and different floors for both frequency bands.
Scanning frames not only reduce the localization errors on the same floor in
$2.4$ GHz, but floor errors also start only at as high as the $85^{th}$
percentile. With non-scanning frames, floor errors start at as low as the
$62^{nd}$ percentile. The reason is that the scanning frames, as opposed to
non-scanning frames, do not incorporate transmit power control, implying less
or no RSSI variation, which in turn helps in a close match of online and
offline fingerprints. This improves accuracy in $2.4$ GHz. Non-scanning
frames, in contrast, increase the localization errors in $2.4$ GHz. The reason
is that the non-scanning frames fetch RSSI from the AP to which the client is
associated and from APs that are on the same or overlapping channels as the AP
of association. A client connects to a farther AP because ($a$) in $2.4$ GHz,
a farther AP can temporarily have better RSSI, ($b$) algorithms such as load
balancing cause overloaded APs, which may be nearer to the client, to not
respond while farther APs respond, and ($c$) sticky clients do not
disassociate from an AP with low RSSI, which is a limitation of client drivers.
The results for $5$ GHz contrast with those of $2.4$ GHz. Non-scanning frames
reduce the localization errors in $5$ GHz. Important to note here is that the
$5$ GHz band hardly encounters floor errors, as seen in Figure 10(b). The
primary reason is the $2$x reduced range of $5$ GHz ($15$ meters) when
compared to $2.4$ GHz ($30$ meters). Non-scanning frames in $5$ GHz fetch RSSI
from the AP to which the client is associated, which is the strongest AP.
Hence, the RSSI from this AP has minimal variation due to reduced
interference. Therefore, it matches closely with the fingerprint maps and
reduces errors. However, the scanning frames in $5$ GHz do not fetch RSSI from
the AP to which the client is associated, which is the strongest AP. RSSI from
other APs, with which the client is not associated, may or may not closely
match the offline fingerprints. Hence, the accuracy suffers.
We analyze the localization errors when a mix of scanning and non-scanning
frames is used for localizing the client. In this case, the errors range
between the numbers reported by only scanning and only non-scanning frames.
Overall, we see that the $5$ GHz band outperforms $2.4$ GHz, irrespective of
which type of frames is considered for the localization. However, the accuracy
of localization in $2.4$ GHz is severely affected by the minimal scans. Thus,
preferring scanning frames in $2.4$ GHz and non-scanning frames in $5$ GHz is
beneficial for improving localization accuracy.
Figure 11. The localization error with three floor detection heuristics:
(a) same-floor errors, (b) different-floor errors. The lowest localization
errors were seen for the AP of Association heuristic. On average, across all
the cardinalities, the AP of Association heuristic reduces the same-floor and
different-floor errors by $58$% and $78$% in $2.4$ GHz, and by $3.8$% and
$46$% in $5$ GHz, respectively. Note that cardinalities greater than 3 are
not applicable in 5 GHz.
### 4.4. Reducing the Localization Errors
The challenge in large-scale deployments is that clients or APs cannot be
modified, so the odds of installing mobile apps that can trigger scans are
very low. Furthermore, the latest phones, to save bandwidth, avoid triggering
scans until absolutely necessary. Therefore, even though scanning frames have
the potential to improve localization accuracy in 2.4 GHz, their frequency is
not under one's control. Since the causes identified in Table 3 conflict with
each other, eliminating the two issues, Cardinality Mismatch and Low
Cardinality, is not trivial.
To improve, our approach is to make the best use of whatever RSSI we receive
during the online phase. We use heuristics to select APs from the online
phase and thereby reduce localization errors. We know the location of each
AP and use this information to shortlist APs from the online fingerprint.
The algorithm first selects a floor and then shortlists all the APs located
on that floor. We use the shortlisted APs to find a match with the offline
fingerprints. For selecting the floor, we explore three heuristics:
($a$) Maximum Number of APs, the floor for which the maximum number of APs
report the client; ($b$) AP with Maximum RSSI, the floor from which the
strongest RSSI is received; and ($c$) AP of Association, the floor of the AP
with which the client is presently associated.
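As an illustrative sketch (not the authors' implementation), the floor-selection heuristics and AP shortlisting described above could look as follows in Python. The data structures (`online_rssi`, `ap_floor`, `assoc_ap`) and function names are our own assumptions for illustration:

```python
from collections import Counter

def select_floor(online_rssi, ap_floor, assoc_ap=None, heuristic="association"):
    """Pick a floor from an online fingerprint {ap_id: rssi_dBm}.

    ap_floor maps AP id -> floor.  The three heuristics from the text:
      'count'       - floor on which the maximum number of APs report the client,
      'max_rssi'    - floor of the AP with the strongest RSSI,
      'association' - floor of the AP of association (requires assoc_ap).
    """
    if heuristic == "count":
        return Counter(ap_floor[ap] for ap in online_rssi).most_common(1)[0][0]
    if heuristic == "max_rssi":
        # RSSI is in dBm, so the numerically largest value is the strongest
        return ap_floor[max(online_rssi, key=online_rssi.get)]
    if heuristic == "association":
        return ap_floor[assoc_ap]
    raise ValueError(f"unknown heuristic: {heuristic}")

def shortlist(online_rssi, ap_floor, floor):
    """Keep only APs on the selected floor for matching against offline fingerprints."""
    return {ap: rssi for ap, rssi in online_rssi.items() if ap_floor[ap] == floor}
```

The shortlisted dictionary would then be matched against the offline fingerprints by whatever distance metric the localization algorithm uses.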
Figure 11 shows how the localization errors vary for the three heuristics at
the $85^{th}$ percentile. There is clearly an improvement for both same-floor
and different-floor errors. Floor detection with Maximum Number of APs gives
the least improvement; in fact, until cardinality $4$ it performs worse than
the Baseline. A cause is that distant APs respond, especially in $2.4$ GHz,
which has a longer transmission range, and thus localization errors increase.
Next come floor detection with the AP with Maximum RSSI and with the AP of
Association. The AP with Maximum RSSI or the AP of Association is typically
closest to the client, except when the controller does load balancing and
transmit power control. There is marginal improvement for $5$ GHz. Since
Figure 11 only shows data for the $85^{th}$ percentile, we plot the CDF of
the error for cardinality=$1$ in Figure 12 for $2.4$ GHz. We see that the
error reduces across all percentiles. We see similar results for other
cardinalities but omit them due to space constraints.
Figure 12. Localization errors with three floor detection heuristics – ($a$)
Maximum Number of APs, ($b$) AP with Maximum RSSI, and ($c$) AP of Association
– at $Cardinality$=$1$ for $2.4$ GHz. AP of Association performs the best for
both bands. Maximum Number of APs performs worse than the Baseline, while the
other two significantly reduce the errors.
We compare our results with SignalSLAM (P. Mirowski et al., 2013), which is
deployed in a public space (a mall), since our deployments are similar; we
have similar observations in other venues too. We find that their $90^{th}$
percentile error is about $15$ meters, and we perform similarly. In fact,
their AP visibility algorithm has a $90^{th}$ percentile of $24.3$ meters,
and we perform better than this in $5$ GHz. Given the complexity of the
algorithm SignalSLAM incorporates and the amount of sensing it needs, we
believe our approach is preferable even if it trails by a few meters of
accuracy, particularly because it is simple and scalable.
## 5\. Discussion
Now, we discuss the practical challenges encountered while localizing clients
in real deployments and limitations of our solution.
### 5.1. Challenges Of Real Deployments
Real deployments have a myriad of practical challenges that hamper the
efficiency and the accuracy of an empirical study. For instance, there can be
sudden and unexpected crowd movement, which is known to increase signal
variations. Furthermore, as and when required, network administrators replace
old APs or deploy new ones. These administrative decisions are not under our
control, yet such changes severely affect the offline fingerprints and change
the floor heat maps, which ultimately affects location accuracy. Preparing
fingerprints for an entire campus with several thousand landmarks is already
tedious; such developments make iterating on the process even more cumbersome.
Beyond insufficient measurement and latency issues, various contextual
dynamics make fingerprint-based systems erroneous. The primary reason is
that such dynamic changes result in significant fluctuation in RSSI
measurements, which affects the distance calculation of the localization
algorithms. These fluctuations can happen quite frequently, as many different
factors affect the RSSI between an AP and its clients, such as crowds
blocking the signal path, AP-side power control for load balancing, and
client-side power control to save battery. In Section 4 we shed light on most
of these factors. Lastly, all MAC addresses in our system are anonymized; we
do not map devices to people, to preserve user privacy.
### 5.2. Limitations
A major limitation of this work is that we have not considered an exhaustive
set of devices. Given the multitude of device vendors, it is practically
impossible to consider every device for this kind of in-depth analysis. We
did, however, cover the latest devices, including iPhone and Android devices.
The second limitation is that, even though we collected data for both lightly
loaded (semester off, very few students on campus) and heavily loaded
(semester on, most students on campus) days, we tested our observations on
the lightly loaded dataset and only on a subset of the heavily loaded days.
We do not yet know the behavior of the system during heavily loaded days in
its entirety. Specifically, the load, in terms of the number of clients and
traffic, is expected to increase interference and thus signal variations;
this study remains future work. The third limitation is that we do not
consider the effect of MAC address randomization algorithms, which make
clients untrackable. Although there is active research suggesting ways to map
randomized MACs to actual MACs (Jeremy Martin et al., 2017), given its
complexity we do not employ these techniques.
## 6\. Conclusion
To conclude, we presented two major issues that need to be addressed to
perform server-side localization. We validated these challenges with
large-scale data from a production WLAN deployed across a university campus.
We discussed the causes of these challenges and their impact. We proposed
heuristics that handle the challenges and reduce the localization errors. Our
findings apply to all server-side localization algorithms that use
fingerprinting techniques. Most of this work provides real-world evidence of
“where” and “what” may go wrong when practically localizing clients in a
device-agnostic manner.
## References
* Abhishek Goswami et al. (2011) Abhishek Goswami, Luis E. Ortiz, and Samir R. Das. 2011\. WiGEM: A Learning-based Approach for Indoor Localization _(CoNEXT)_. http://doi.acm.org/10.1145/2079296.2079299
* Anshul Rai et al. (2012) Anshul Rai, Krishna Kant Chintalapudi, Venkata N. Padmanabhan, and Rijurekha Sen. 2012. Zee: Zero-effort Crowdsourcing for Indoor Localization _(MobiCom)_. http://doi.acm.org/10.1145/2348543.2348580
* Anthony LaMarca et al. (2005) Anthony LaMarca, Yatin Chawathe, Sunny Consolvo, Jeffrey Hightower, Ian Smith, James Scott, Timothy Sohn, James Howard, Jeff Hughes, Fred Potter, Jason Tabert, Pauline Powledge, Gaetano Borriello, and Bill Schilit. 2005. Place Lab: Device Positioning Using Radio Beacons in the Wild _(Pervasive Computing)_. https://doi.org/10.1007/11428572_8
* Aruba (2017a) Aruba. 2017a. https://www.arubanetworks.com.
* Aruba (2017b) Aruba. 2017b. Deep dive Radio technologies for indoor location. https://community.arubanetworks.com/aruba.
* C. Wu et al. (2013) C. Wu, Z. Yang, Y. Liu, and W. Xi. 2013\. WILL: Wireless Indoor Localization without Site Survey _(IEEE TPDS)_.
* Cisco (2017) Cisco. 2017. https://www.cisco.com.
* Coleman and Westcott (2009) Coleman and Westcott. 2009\. _CWNA Certified Wireless Network Administrator Official Study Guide_. John Wiley & Sons.
* D. Niculescu and Badri Nath (2003) D. Niculescu and Badri Nath. 2003. Ad hoc positioning system (APS) using AOA _(Infocom)_.
* D. Turner et al. (2011) D. Turner, S. Savage, and A. C. Snoeren. 2011. On the empirical performance of self-calibrating WiFi location systems _(LCN)_.
* Dheryta Jaisinghani et al. (2017) Dheryta Jaisinghani, Vinayak Naik, Sanjit K. Kaul, and Sumit Roy. 2017. Sniffer-based inference of the causes of active scanning in WiFi networks _(NCC)_.
* Domenico Giustiniano and Stefan Mangold (2011) Domenico Giustiniano and Stefan Mangold. 2011. CAESAR: Carrier Sense-based Ranging in Off-the-shelf 802.11 Wireless LAN _(CoNEXT)_. http://doi.acm.org/10.1145/2079296.2079306
* Donny Huang et al. (2014) Donny Huang, Rajalakshmi Nandakumar, and Shyamnath Gollakota. 2014. Feasibility and Limits of Wi-Fi Imaging _(SenSys)_. http://doi.acm.org/10.1145/2668332.2668344
* Fan Li et al. (2012) Fan Li, Chunshui Zhao, Guanzhong Ding, Jian Gong, Chenxing Liu, and Feng Zhao. 2012\. A reliable and accurate indoor localization method using phone inertial sensors _(Ubicomp)_.
* He Wang et al. (2012) He Wang, Souvik Sen, Ahmed Elgohary, Moustafa Farid, Moustafa Youssef, and Romit Roy Choudhury. 2012. No Need to War-drive: Unsupervised Indoor Localization _(MobiSys)_. http://doi.acm.org/10.1145/2307636.2307655
* Hyuk Lim et al. (2010) Hyuk Lim, LuChuan Kung, Jennifer C. Hou, and Haiyun Luo. 2010. Zero-configuration Indoor Localization over IEEE 802.11 Wireless Infrastructure _(Wireless Networks)_. http://dx.doi.org/10.1007/s11276-008-0140-3
* Jeremy Martin et al. (2017) Jeremy Martin, Travis Mayberry, Collin Donahue, Lucas Foppe, Lamont Brown, Chadwick Riggins, Erik C. Rye, and Dane Brown. 2017. A Study of MAC Address Randomization in Mobile Devices and When it Fails _(CoRR)_. http://arxiv.org/abs/1703.02874
* Jie Xiong and Kyle Jamieson (2013) Jie Xiong and Kyle Jamieson. 2013. ArrayTrack: A Fine-Grained Indoor Location System _(NSDI)_. https://www.usenix.org/conference/nsdi13/technical-sessions/presentation/xiong
* John Krumm and John Platt (2003) John Krumm and John Platt. 2003. _Minimizing Calibration Effort for an Indoor 802.11 Device Location Measurement System_. Technical Report. NIPS.
* Jon Gjengset et al. (2014) Jon Gjengset, Jie Xiong, Graeme McPhillips, and Kyle Jamieson. 2014. Phaser: Enabling Phased Array Signal Processing on Commodity WiFi Access Points _(MobiCom)_. http://doi.acm.org/10.1145/2639108.2639139
* Kasthuri Jayarajah et al. (2016) Kasthuri Jayarajah, Rajesh Krishna Balan, Meera Radhakrishnan, Archan Misra, and Youngki Lee. 2016. LiveLabs: Building In-Situ Mobile Sensing & Behavioural Experimentation TestBeds _(MobiSys)_. http://doi.acm.org/10.1145/2906388.2906400
* Krishna Chintalapudi et al. (2010) Krishna Chintalapudi, Anand Padmanabha Iyer, and Venkata N. Padmanabhan. 2010. Indoor Localization Without the Pain _(MobiCom)_. http://doi.acm.org/10.1145/1859995.1860016
* Lei Yang et al. (2015) Lei Yang, Qiongzheng Lin, Xiangyang Li, Tianci Liu, and Yunhao Liu. 2015. See Through Walls with COTS RFID System! _(MobiCom)_. http://doi.acm.org/10.1145/2789168.2790100
* Liqun Li et al. (2014a) Liqun Li, Guobin Shen, Chunshui Zhao, Thomas Moscibroda, Jyh Han Lin, and Feng Zhao. 2014a. Experiencing and Handling the Diversity in Data Density and Environmental Locality in an Indoor Positioning Service _(MobiCom)_. http://doi.acm.org/10.1145/2639108.2639118
* Liqun Li et al. (2014b) Liqun Li, Pan Hu, Chunyi Peng, Guobin Shen, and Feng Zhao. 2014b. Epsilon: A Visible Light Based Positioning System _(NSDI)_. http://dl.acm.org/citation.cfm?id=2616448.2616479
* Liu et al. (2007) H. Liu, H. Darabi, P. Banerjee, and J. Liu. 2007\. Survey of Wireless Indoor Positioning Techniques and Systems _(IEEE SMC)_.
* Liu et al. (2012) Hongbo Liu, Yu Gan, Jie Yang, Simon Sidhom, Yan Wang, Yingying Chen, and Fan Ye. 2012. Push the Limit of WiFi Based Localization for Smartphones _(MobiCom)_. http://doi.acm.org/10.1145/2348543.2348581
* Manikanta Kotaru et al. (2015) Manikanta Kotaru, Kiran Joshi, Dinesh Bharadia, and Sachin Katti. 2015. SpotFi: Decimeter Level Localization Using WiFi _(SIGCOMM)_. http://doi.acm.org/10.1145/2829988.2787487
* Mariakakis et al. (2014) Alex T. Mariakakis, Souvik Sen, Jeongkeun Lee, and Kyu-Han Kim. 2014\. SAIL: Single Access Point-based Indoor Localization _(MobiSys)_. http://doi.acm.org/10.1145/2594368.2594393
* Martin Azizyan et al. (2009) Martin Azizyan, Ionut Constandache, and Romit Roy Choudhury. 2009\. SurroundSense: Mobile Phone Localization via Ambience Fingerprinting _(MobiCom)_. http://doi.acm.org/10.1145/1614320.1614350
* Md Abdullah Al Hafiz Khan et al. (2015) Md Abdullah Al Hafiz Khan, H M Sajjad Hossain, and Nirmalya Roy. 2015. Infrastructure-less Occupancy Detection and Semantic Localization in Smart Environments _(MOBIQUITOUS)_. http://dx.doi.org/10.4108/eai.22-7-2015.2260062
* Moustafa Youssef et al. (2006) Moustafa Youssef, Adel Youssef, Chuck Rieger, Udaya Shankar, and Ashok Agrawala. 2006\. PinPoint: An Asynchronous Time-Based Location Determination System _(MobiSys)_. http://doi.acm.org/10.1145/1134680.1134698
* Moustafa Youssef and Ashok Agrawala (2005) Moustafa Youssef and Ashok Agrawala. 2005. The Horus WLAN Location Determination System _(MobiSys)_. http://doi.acm.org/10.1145/1067170.1067193
* Naveed Ul Hassan et al. (2015) Naveed Ul Hassan, Aqsa Naeem, Muhammad Adeel Pasha, Tariq Jadoon, and Chau Yuen. 2015\. Indoor Positioning Using Visible LED Lights: A Survey. _ACM Comput. Surv._ (2015). http://doi.acm.org/10.1145/2835376
* Nilanjan Banerjee et al. (2010) Nilanjan Banerjee, Sharad Agarwal, Paramvir Bahl, Ranveer Chandra, Alec Wolman, and Mark Corner. 2010. Virtual Compass: Relative Positioning to Sense Mobile Social Interactions _(Pervasive Computing)_. http://dx.doi.org/10.1007/978-3-642-12654-3_1
* P. Bahl and V. N. Padmanabhan (2000) P. Bahl and V. N. Padmanabhan. 2000. RADAR: an in-building RF-based user location and tracking system _(Infocom)_.
* P. Mirowski et al. (2013) P. Mirowski, T. K. Ho, Saehoon Yi, and M. MacDonald. 2013. SignalSLAM: Simultaneous localization and mapping with mixed WiFi, Bluetooth, LTE and magnetic signals _(IPIN)_.
* Pan Hu et al. (2013) Pan Hu, Liqun Li, Chunyi Peng, Guobin Shen, and Feng Zhao. 2013. Pharos: Enable Physical Analytics Through Visible Light Based Indoor Localization _(HotNets)_. http://doi.acm.org/10.1145/2535771.2535790
* Rajalakshmi Nandakumar et al. (2012) Rajalakshmi Nandakumar, Krishna Kant Chintalapudi, and Venkata N. Padmanabhan. 2012. Centaur: Locating Devices in an Office Environment _(MobiCom)_. http://doi.acm.org/10.1145/2348543.2348579
* Rajesh Krishna Balan et al. (2014) Rajesh Krishna Balan, Archan Misra, and Youngki Lee. 2014\. LiveLabs: Building an In-situ Real-time Mobile Experimentation Testbed _(HotMobile)_. http://doi.acm.org/10.1145/2565585.2565597
* Sanjib Sur et al. (2014) Sanjib Sur, Teng Wei, and Xinyu Zhang. 2014. Autodirective Audio Capturing Through a Synchronized Smartphone Array _(MobiSys)_. http://doi.acm.org/10.1145/2594368.2594380
* Sen et al. (2012) Souvik Sen, Božidar Radunovic, Romit Roy Choudhury, and Tom Minka. 2012. You Are Facing the Mona Lisa: Spot Localization Using PHY Layer Information _(MobiSys)_. http://doi.acm.org/10.1145/2307636.2307654
* Souvik Sen et al. (2015) Souvik Sen, Dongho Kim, Stephane Laroche, Kyu-Han Kim, and Jeongkeun Lee. 2015\. Bringing CUPID Indoor Positioning System to Practice _(WWW)_. https://doi.org/10.1145/2736277.2741686
* Stephen P. Tarzia et al. (2011) Stephen P. Tarzia, Peter A. Dinda, Robert P. Dick, and Gokhan Memik. 2011. Indoor Localization Without Infrastructure Using the Acoustic Background Spectrum _(MobiSys)_.
* Stuart A. Golden and Steve S. Bateman (2007) Stuart A. Golden and Steve S. Bateman. 2007. Sensor Measurements for Wi-Fi Location with Emphasis on Time-of-Arrival Ranging _(TMC)_. http://dx.doi.org/10.1109/TMC.2007.1002
* Swarun Kumar et al. (2014) Swarun Kumar, Stephanie Gil, Dina Katabi, and Daniela Rus. 2014. Accurate Indoor Localization with Zero Start-up Cost _(MobiCom)_. http://doi.acm.org/10.1145/2639108.2639142
* Teng Wei and Xinyu Zhang (2015) Teng Wei and Xinyu Zhang. 2015. mTrack: High-Precision Passive Tracking Using Millimeter Wave Radios _(MobiCom)_. http://doi.acm.org/10.1145/2789168.2790113
* Viktor Erdélyi et al. (2018) Viktor Erdélyi, Trung Kien Le, Bobby Bhattacharjee, Peter Druschel, and Nobutaka Ono. 2018\. Sonoloc: Scalable positioning of commodity mobile devices _(MobiSys)_.
* Wei Wang et al. (2015) Wei Wang, Alex X. Liu, Muhammad Shahzad, Kang Ling, and Sanglu Lu. 2015. Understanding and Modeling of WiFi Signal Based Human Activity Recognition _(MobiCom)_. http://doi.acm.org/10.1145/2789168.2790093
* Yanzi Zhu et al. (2015) Yanzi Zhu, Yibo Zhu, Ben Y. Zhao, and Haitao Zheng. 2015\. Reusing 60GHz Radios for Mobile Radar Imaging _(MobiCom)_. http://doi.acm.org/10.1145/2789168.2790112
* Yogita Chapre et al. (2013) Yogita Chapre, Prasant Mohapatra, Sanjay Jha, and Aruna Seneviratne. 2013. Received signal strength indicator and its analysis in a typical WLAN system (short paper) _(LCN)_.
* Yuanfang Chen et al. (2013) Yuanfang Chen, Noel Crespi, Lin Lv, Mingchu Li, Antonio M. Ortiz, and Lei Shu. 2013\. Locating Using Prior Information: Wireless Indoor Localization Algorithm _(SIGCOMM)_. http://doi.acm.org/10.1145/2486001.2491688
* Zafari et al. (2017) F. Zafari, A. Gkelias, and K. Leung. 2017\. A Survey of Indoor Localization Systems and Technologies. _ArXiv e-prints_ (2017).
* Zengbin Zhang et al. (2012) Zengbin Zhang, David Chu, Xiaomeng Chen, and Thomas Moscibroda. 2012. SwordFight: Enabling a New Class of Phone-to-phone Action Games on Commodity Phones _(MobiSys)_. http://doi.acm.org/10.1145/2307636.2307638
* Zheng Yang et al. (2012) Zheng Yang, Chenshu Wu, and Yunhao Liu. 2012. Locating in Fingerprint Space: Wireless Indoor Localization with Little Human Intervention _(MobiCom)_. http://doi.acm.org/10.1145/2348543.2348578
# Long range correlations and slow time scales in a boundary driven granular
model
Andrea Plati Dipartimento di Fisica, Università di Roma Sapienza, P.le Aldo
Moro 2, 00185 Roma, Italy Andrea Puglisi Istituto dei Sistemi Complessi -
CNR and Dipartimento di Fisica, Università di Roma Sapienza, P.le Aldo Moro 2,
00185, Rome, Italy
###### Abstract
We consider a velocity field with linear viscous interactions defined on a one
dimensional lattice. Brownian baths with different parameters can be coupled
to the boundary sites and to the bulk sites, determining different kinds of
non-equilibrium steady states or free-cooling dynamics. Analytical results for
spatial and temporal correlations are obtained by diagonalising the system’s
equations in the infinite-size limit. We demonstrate that spatial correlations
are scale-free and time scales become exceedingly long when the system is
driven only at the boundaries. In contrast, when a bath is also coupled to
the bulk sites, an exponential correlation decay is found with a finite
characteristic length. This is also true in the free-cooling regime, but in
this case the correlation length grows diffusively in
time. We discuss the crucial role of boundary driving for long-range
correlations and slow time-scales, proposing an analogy between this
simplified dynamical model and dense vibro-fluidized granular materials.
Several generalizations and connections with the statistical physics of active
matter are also suggested.
## Introduction
The emergence of long-range order, or collective behavior (CB), in non-
equilibrium systems such as granular materials and living organisms is a
matter of great interest for fundamental physics and applications [1, 2].
Examples, recently observed in experiments and numerical simulations, are
motility induced phase transitions in bacteria [3, 4, 5, 6], collective
migration in epithelial cells [7], persistent collective rotations in granular
systems [8, 9, 10]. An important class of CB instances includes flocking and
swarming in animals, systematically studied by physicists in the last 25 years
[11, 12, 13]. The great variety of systems in which CB has been observed makes
the formulation of a rigorous and unifying definition for them a difficult
task. Generally speaking we can say that CB occurs when a many-body system
_acts as a whole_. Indeed, a common property of the previous examples is the
interplay between different length scales: the interactions act on microscopic
distances while correlations extend to macroscopic scales, comparable with the
system size. In the study of CB it is common, in fact, to look at spatial
correlation functions of the relevant fields: if this function has a typical
decay length $\xi$ then we can divide the system in almost independent
subsystems of size $\sim\xi$. If the correlation function decays without a
typical length it is said to be _scale-free_ : in this case the dynamics of
every particle is correlated with the whole system. We underline that _scale-
free_ spatial correlations appear naturally in critical phase transitions at
equilibrium [14], but a general and well established theoretical framework to
understand the appearance of long-range ordering in non-equilibrium systems is
still lacking: sometimes equilibrium-like approaches are successful (effective
Hamiltonian/temperatures) [15, 16] while in other cases fully non-equilibrium
tools have to be developed [17, 18, 19].
In this paper, we provide analytical results about the occurrence of _scale-
free_ (more precisely, power-law decaying) correlations in a velocity field
defined on a one-dimensional lattice with interactions mediated by viscous
friction. We will show that this behavior is observed in the non-equilibrium
stationary state (NESS) obtained by coupling only the boundaries of the system
with a thermal bath. We call this phase the Non-Homogeneously Heated Phase
(NHHP). If the particles in the bulk are also put in contact with a bath, a
different regime is found, the Homogeneously Heated Phase (HHP), where the
spatial correlation is exponential with a characteristic length scale that
goes to infinity when the contact between the bulk and the bath vanishes. The
NHHP is also characterized by slow relaxation times that scale with the
square of the system size.
Lattices (particularly in 1D) bring two main advantages: (i) analytical
calculations are often possible, and (ii) they help to isolate minimal
ingredients for the occurrence of the phenomenon under study. Considering just
the non-equilibrium context, 1D models have been used to study thermal
conduction [20, 21, 22], non-equilibrium fluctuations [23, 24], correlations
and response with non-symmetric couplings [25], velocity alignment in active
matter [26], systems with Vicsek-like interactions [27, 28], and velocity
fields in granular materials [29, 30, 31]. In the following we consider only
linear interactions between variables, which allows us to work in the
framework of multivariate linear stochastic processes. Despite their
simplicity, this class of models continues to be a powerful tool when dealing
with dynamics driven out of equilibrium, as in biological systems [32, 33].
As discussed in the next section, our model can be thought of as an extreme
simplification of a vibrated granular system at strong compression. The
search for emergent collective motion in it is then also motivated by the
recent experimental and numerical evidence of slow collective behavior in
vibro-fluidized granular materials [8, 9]. This phenomenon is not yet fully
understood, and our study tackles this problem, revealing that non-homogeneous
heating and frictional interactions (i.e., standard features of vibrated
granular matter) are minimal ingredients for developing slow collective
dynamics.
The manuscript is organized as follows. In section "Model" we present our
model, discussing its phenomenology and its relation with real granular
systems and previously studied non-equilibrium 1D models. Section "Results"
contains the key steps of the calculation of the spatial correlation function
in the NHHP and in the HHP, shedding light on the limit in which diverging
correlation lengths and times are obtained. We also show the validity of our
results beyond the assumptions used to perform the analytical calculations.
Finally, in "Discussion" we draw conclusions and sketch some perspectives. In
the Supplemental Material (SM), details of the calculations are provided in
addition to some insights about the cooling state and the active equivalent of
our model.
## Model
### Definition and phenomenology
We consider a velocity field on a one-dimensional lattice of size $L$. The
$i$th particle interacts with its nearest neighbors $j$ through a viscous
force with coefficient $\gamma$: $F_{i}=-\sum_{j}\gamma(v_{i}-v_{j})$. The
boundary (bulk) sites are coupled with an external bath defined by a drag
coefficient $\gamma_{b}$ ($\gamma_{a}$) and respective temperatures, which
can differ between the boundaries and the bulk. Considering particles with
unit mass, the equations of the model are:
$\displaystyle\dot{v}_{i}=-(2\gamma+\gamma_{a})v_{i}+\gamma(v_{i+1}+v_{i-1})+\sqrt{2\gamma_{a}T_{a}}\eta_{i}(t)$
(1a) $\displaystyle\dot{v}_{1}=-(\gamma+\gamma_{b})v_{1}+\gamma
v_{2}+\sqrt{2\gamma_{b}T_{1}}\eta_{1}(t)$ (1b)
$\displaystyle\dot{v}_{L}=-(\gamma+\gamma_{b})v_{L}+\gamma
v_{L-1}+\sqrt{2\gamma_{b}T_{L}}\eta_{L}(t)$ (1c)
where the first equation holds for $1<i<L$ and the $\eta_{i}(t)$s are Gaussian
white noises with unit variance:
$\langle\eta_{i}(t)\eta_{j}(t^{\prime})\rangle=\delta_{ij}\delta(t-t^{\prime})$.
In this model, the way in which energy is supplied to the system is consistent
with the fluctuation-dissipation theorem: for each viscous force
($\gamma_{a(b)}$) there is a stochastic counterpart at finite temperature
($T_{a(b)}$). This is not true for the interaction force defined by $\gamma$,
because it is related to the viscosity of the material that forms the grains;
the associated temperature (typical of thermal agitation at the molecular
scale) can thus be reasonably neglected in a granular context. We refer to
the NHHP when $\gamma_{a}=0$, so that only the first and the $L$th sites are
heated, while in the HHP we consider a general $\gamma_{a}\neq 0$. We note
that the HHP is not strictly spatially homogeneous, because the viscous
coefficients and temperatures depend on position: we call it homogeneously
heated meaning that in this phase _all_ the particles are coupled with a
bath.
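A minimal Euler–Maruyama integration of Eqs. (1) can be sketched as follows. This is our own illustrative code, not the authors' implementation; parameter defaults follow the caption of Fig. 1, and `dt` must be small enough for numerical stability:

```python
import numpy as np

def simulate(L=50, gamma=5.0, gamma_b=10.0, gamma_a=0.0,
             T_edge=0.002, T_a=0.0, dt=0.005, steps=2000, seed=0):
    """Euler-Maruyama integration of the boundary-driven velocity field,
    Eqs. (1). gamma_a = 0 gives the NHHP; gamma_a > 0 the HHP.
    T_edge plays the role of T_1 = T_L. Returns the final velocity field."""
    rng = np.random.default_rng(seed)
    v = np.zeros(L)
    sdt = np.sqrt(dt)
    for _ in range(steps):
        drift = np.empty(L)
        # bulk sites 1 < i < L: viscous coupling to neighbors plus bulk drag
        drift[1:-1] = -(2 * gamma + gamma_a) * v[1:-1] + gamma * (v[2:] + v[:-2])
        # boundary sites: one neighbor each, coupled to the boundary bath
        drift[0] = -(gamma + gamma_b) * v[0] + gamma * v[1]
        drift[-1] = -(gamma + gamma_b) * v[-1] + gamma * v[-2]
        noise = np.empty(L)
        noise[1:-1] = np.sqrt(2 * gamma_a * T_a) * rng.standard_normal(L - 2)
        noise[0] = np.sqrt(2 * gamma_b * T_edge) * rng.standard_normal()
        noise[-1] = np.sqrt(2 * gamma_b * T_edge) * rng.standard_normal()
        v += drift * dt + noise * sdt
    return v
```

Running this with `gamma_a=0` and inspecting the field reproduces the NHHP phenomenology of Fig. 1c: large aligned clusters of sites with similar moduli.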
Figure 1: a,b,c) Snapshots of the velocity field in the stationary state of
the two phases. We exclude the first five (very hot) sites near the
boundaries to have a clearer view of the field. Each panel shows the
vectors in linear scale and the moduli in log scale in order to better
appreciate the phenomenology of the system. Orange and blue bars distinguish
the two directions. We note that a large cluster of particles with the same
direction and similar modulus is found in the NHHP only, signaling that in
terms of correlations the key parameter is $\gamma_{a}$ rather than $T_{a}$.
d) Autocorrelation times for each site, defined as the time $\tau_{i}$ for
which $\Gamma_{i}(\tau_{i})=0.4$. The autocorrelation function is defined as
$\Gamma(t^{\prime})=\lim_{t\to\infty}\langle
v_{i}(t)v_{i}(t+t^{\prime})\rangle/\langle v_{i}^{2}(t)\rangle$, where the
brackets refer to a time average in the stationary state. We note that in the
NHHP the dynamics is far slower than in the HHP, also when $T_{a}=0$. The
snapshots are obtained by numerical integration of Eqs. (1) with $L=50$,
$\gamma=5$, $\gamma_{b}=10$, $\gamma_{a}=\\{3,0\\}$, $T_{1}=T_{L}=0.002$,
$T_{a}=\\{0.002,0\\}$ after a time $t_{M}=10^{8}/\gamma$ and with a temporal
step $dt=0.05/\gamma$.
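The autocorrelation time $\tau_{i}$ used in Fig. 1d (the first lag at which $\Gamma$ drops below $0.4$) can be estimated from a stationary single-site time series with a sketch like the following (illustrative code of ours, assuming a long record sampled at fixed intervals):

```python
import numpy as np

def autocorr_time(series, threshold=0.4):
    """First lag at which the normalized autocorrelation
    Gamma(t') = <v(t) v(t+t')> / <v^2> drops below `threshold`,
    with the time average taken over the stationary record."""
    s = np.asarray(series, dtype=float)
    s = s - s.mean()
    n = len(s)
    # unbiased-normalized ACF: acf[0] == 1 by construction
    acf = np.correlate(s, s, mode="full")[n - 1:] / (np.arange(n, 0, -1) * s.var())
    below = np.nonzero(acf < threshold)[0]
    return int(below[0]) if below.size else n
```

Applied to each site's velocity record, this yields the per-site times of Fig. 1d; in the NHHP these times grow dramatically compared to the HHP.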
As we discuss in the next paragraphs, this is a linear model and a full
solution can be found in the context of multivariate stochastic processes.
Nevertheless, a numerical integration of Eqs. (1) can be useful to gain
physical insight into the phenomenology at play. In Fig. 1 we show some
instantaneous snapshots of the system in the stationary state for three
different conditions: HHP with $T_{a}\neq 0$, HHP with $T_{a}=0$ and NHHP. We
note that in the NHHP (panel c) almost all the velocities are aligned with
similar moduli while in the HHP we have smaller aligned domains with moduli
that decay sharply moving away from the boundaries when $T_{a}=0$ (panel b)
and a random configuration when $T_{a}\neq 0$ (panel a). This comparison makes
clear that, in terms of correlations, the key parameter is $\gamma_{a}$
rather than $T_{a}$: indeed, a situation where the sites exhibit collective
behavior (in the intuitive sense that they _act as a whole_) is only found in
the NHHP. In Fig. 1d the typical correlation time for each site is shown, and
we can see that in the NHHP the dynamics is far slower than in the other two
conditions. It is worth noting that this model does not present any
directional asymmetry, so the true mean value of the velocity field (i.e.,
obtained by an average over long times or, equivalently, over all the
realizations of the noises) is zero also in the NHHP, even though single-time
configurations clearly show an explicit global alignment. The phenomenology
of the NHHP can then be described as the occurrence of slow and collective
fluctuations around the expected mean value.
### Relation with real granular systems and other models
We note that the kind of interaction used in Eqs. 1 is typical of contact
models for granular materials [34, 35]. In these models, the grains (that are
disks or spheres depending on the geometry) interact when a distance smaller
than the sum of their radius is reached. In this condition, the particles
penetrate each other and the dynamics is ruled by contact forces that are
split into a normal and tangential component with respect to the vector
connecting the centers of the grains. Both of these contributions contain a
(linear or non-linear) elastic term that depends on the normal/tangential
displacement and a dissipative one that depends on the normal/tangential
relative velocity. The latter has, in many cases, exactly the form of the
viscous interaction we use in our model [36]. In view of this, we can say
that if we fix the centers of $L$ grains on the lattice sites so that they
are partially overlapped, then the dynamics of the particles’ velocities
would be given by Eqs. 1. Neglecting the dynamics of positions (they do not
appear at all in Eqs. 1) is surely the most relevant approximation of our
approach: in the SM (S5) we briefly discuss how to go beyond it.
Figure 2: Sketch of the model and relation with higher dimensional systems. On
the left we suggest a hypothetical 2D dense granular system where particles
are roughly located on the vertices of a regular lattice. A possible mapping
from the 2D to the 1D system involves taking the mean horizontal velocity of
the $i$th layer of the 2D system and replacing it with the $v_{i}$ of the
1D system. The dynamics in the vertical direction is neglected, an
approximation which is justified by the presence of the vertical confinement,
while the periodic boundary conditions (indicated by the dotted lines) are
representative of a ’free’ direction in which the grains can flow without
obstacles. This can be realized experimentally, for instance, in a 3D
cylindrical geometry, where the velocities of the grains in the tangential
direction (with respect to the central axis of the cylinder) constitute the
horizontal velocities in the putative 2D system sketched here; see for
instance [8, 9].
Red grains are in direct contact with the external source of energy coming
from the boundaries ($\gamma_{b}$,$T_{1(L)}$) while the green ones are in
contact with the bulk bath, which is switched off in the NHHP.
Nevertheless, the physics described by our model can realistically represent
the condition of permanent contacts in which dense granular matter is found in
vertically-vibrated setups. Such systems are widely studied experimentally;
they consist of assemblies of grains confined in a box vibrated with a noisy
or sinusoidal signal along the $z$ direction. For low
driving energies, the particles are always arranged in a dense packing where
they vibrate in permanent contact with each other experiencing very rare and
slow rearrangements. This implies, if the geometry is narrow enough, that just
the external layers of the system are in direct contact with the vibrating
walls while the others never touch them. This last fact tells us that, in
addition to the specific form of the viscous forces and the permanent
interactions, the way in which the external energy injection is modeled in the
NHHP also resembles the conditions of a vibrated granular system in a dense
state. Moreover, if layers of particles are mapped into lattice sites, a 1D
chain can also be representative of higher-dimensional systems (see Fig. 2).
On the other hand, the HHP corresponds to a setup where all the particles
interact with the vibrating walls, as happens for instance in vibrated
monolayers [37].
The idea of considering velocity fields defined on lattices, i.e. neglecting
the evolution of the positions and density fluctuations in the dynamics, has
been widely exploited in granular literature [29, 30, 31] especially for
dilute systems. In these previous works, however, there is no continuous
interaction, but only instantaneous collisions occurring between pairs of
neighboring grains picked up at random, at every time step. Many results have
been obtained by solving (analytically or numerically) the corresponding
master equation or performing its hydrodynamic limit, revealing that these
models are a powerful tool to investigate complex phenomena observed in
experiments and simulations of realistic granular systems such as shock waves,
anomalous transport and current fluctuations [38, 39].
To summarize motivations and background, our model reflects three main
characteristics of dense granular materials in vertically-vibrated setups,
i.e. viscous forces, permanent contacts and energy injection localized at the
boundaries. It can then be considered the high-density variant of a
well-established family of models previously investigated.
It is important to note that the dilute models can also exhibit long-range
correlations [38, 39]. Nevertheless, those are finite-size effects found in
the homogeneous cooling state [40], i.e. without external driving and with
conserved total momentum. As we briefly discuss in the next paragraph and more
clearly in the SM (S4), our model makes clear that there is a sharp difference
between the correlations of the cooling state and the NESS ones.
### Compact SDE formulation of the model
Defining the vectors $\bm{V}=(v_{1},\dots,v_{L})$ ,
$\bm{\eta}(t)=(\eta_{1}(t),\dots,\eta_{L}(t))$ and the adimensional parameters
$\beta=\gamma_{b}/\gamma$ and $\alpha=\gamma_{a}/\gamma$, we can rewrite Eqs.
1 as a multivariate Ornstein-Uhlenbeck process, obtaining the following
stochastic differential equation (SDE):
$\dot{\bm{V}}=-\hat{A}\bm{V}+\hat{B}\bm{\eta}(t)$ (2)
where
$\hat{B}=\text{diag}(\sqrt{2\gamma_{b}T_{1}},\sqrt{2\gamma_{a}T_{a}},\dots,\sqrt{2\gamma_{a}T_{a}},\sqrt{2\gamma_{b}T_{L}})$
and:
$\hat{A}=\gamma\begin{pmatrix}1+\beta&-1&&&\bm{0}\\\ -1&2+\alpha&-1&&\\\
&\ddots&\ddots&\ddots&\\\ &&-1&2+\alpha&-1\\\
\bm{0}&&&-1&1+\beta\end{pmatrix}$ (3)
is an $L\times L$ tridiagonal symmetric matrix.
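As a concrete illustration, the matrices of Eqs. (2)-(3) can be assembled numerically; the following sketch (all parameter values are illustrative placeholders, not taken from the paper) also checks that $\hat{A}$ is symmetric and positive definite, which is the condition for a stationary state discussed below.

```python
import numpy as np

# Sketch: assemble the drift matrix A-hat of Eq. (3) and the diagonal
# noise matrix B-hat of Eq. (2). Parameter values are illustrative.
def build_A(L, gamma, alpha, beta):
    A = np.diag(np.full(L, 2.0 + alpha))      # bulk: two neighbours + bulk bath
    A += np.diag(np.full(L - 1, -1.0), 1)     # viscous coupling to the right...
    A += np.diag(np.full(L - 1, -1.0), -1)    # ...and to the left
    A[0, 0] = A[-1, -1] = 1.0 + beta          # boundaries: one neighbour + boundary bath
    return gamma * A

def build_B(L, gamma, alpha, beta, T1, Ta, TL):
    d = np.full(L, np.sqrt(2 * alpha * gamma * Ta))  # bulk: sqrt(2 gamma_a T_a)
    d[0] = np.sqrt(2 * beta * gamma * T1)            # first site: sqrt(2 gamma_b T_1)
    d[-1] = np.sqrt(2 * beta * gamma * TL)           # last site: sqrt(2 gamma_b T_L)
    return np.diag(d)

A = build_A(L=8, gamma=5.0, alpha=0.4, beta=1.4)
B = build_B(L=8, gamma=5.0, alpha=0.4, beta=1.4, T1=1.0, Ta=0.5, TL=0.2)
assert np.allclose(A, A.T)                    # A-hat is tridiagonal symmetric
assert np.all(np.linalg.eigvalsh(A) > 0)      # a stationary state exists
```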
The information about space-time correlations of the system is encoded in the
two-time correlation matrix $\hat{\sigma}(t,s)$, whose entries are defined as
$\sigma_{jm}(t,s)=\langle
v_{j}(t)v_{m}(s)\rangle\equiv\langle\left[v_{j}(t)-\langle
v_{j}(t)\rangle\right]\left[v_{m}(s)-\langle v_{m}(s)\rangle\right]\rangle$.
We now define the quantity of principal interest in this paper, i.e. the
static spatial correlation function of the velocity field:
$\zeta_{jm}=\frac{\sigma_{jm}}{\sqrt{\sigma_{jj}\sigma_{mm}}}\quad\text{where}\quad\sigma_{jm}=\left\langle
v_{j}v_{m}\right\rangle.$ (4)
With this definition we have $\zeta_{jm}=1$ if $j=m$ or $v_{j}=v_{m}$ and
$\zeta_{jm}=0$ if $\langle v_{j}v_{m}\rangle=0$. It is then clear that our
goal is to solve Eq. 2 and find the stationary correlation matrix
$\hat{\sigma}=\lim_{t\to\infty}\hat{\sigma}(t,t)$ that exists if $\hat{A}$ is
positive semi-definite. Under these conditions, regardless of the symmetry of
$\hat{A}$, the correlation matrix can be found by inverting the relation [41]:
$\hat{A}\hat{\sigma}+\hat{\sigma}\hat{A}^{T}=\hat{B}\hat{B}^{T}.$ (5)
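Eq. (5) is straightforward to check numerically. The sketch below (illustrative parameters) solves it with SciPy's Lyapunov solver, which handles equations of the form $\hat{A}X+X\hat{A}^{T}=Q$, and compares the result with the eigendecomposition route described next:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch: stationary covariance from Eq. (5), A σ + σ Aᵀ = B Bᵀ, obtained
# both with SciPy's Lyapunov solver and via the eigendecomposition of the
# symmetric A. All parameter values are illustrative.
L, gamma, alpha, beta, T1, Ta, TL = 8, 5.0, 0.4, 1.4, 1.0, 0.5, 0.2
A = gamma * (np.diag(np.full(L, 2 + alpha))
             + np.diag(np.full(L - 1, -1.0), 1)
             + np.diag(np.full(L - 1, -1.0), -1))
A[0, 0] = A[-1, -1] = gamma * (1 + beta)
b2 = np.full(L, 2 * alpha * gamma * Ta)          # bulk: 2 γ_a T_a
b2[0], b2[-1] = 2 * beta * gamma * T1, 2 * beta * gamma * TL
BBt = np.diag(b2)

sigma_lyap = solve_continuous_lyapunov(A, BBt)   # solves A X + X Aᵀ = B Bᵀ

lam, S = np.linalg.eigh(A)                       # A = S Λ Sᵀ, S orthogonal
M = S.T @ BBt @ S
sigma_eig = S @ (M / (lam[:, None] + lam[None, :])) @ S.T

assert np.allclose(sigma_lyap, sigma_eig)        # the two routes agree
```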
Nevertheless, a more direct way to obtain an analytic expression of
$\hat{\sigma}$ can be followed exploiting the fact that $\hat{A}$ is
symmetric. In this case there exists a unitary matrix $\hat{S}$ such that
$\hat{S}\hat{S}^{+}=\hat{I}$ and
$\hat{S}^{+}\hat{A}\hat{S}$=$\hat{S}^{+}\hat{A}^{T}\hat{S}=\hat{\lambda}=\text{diag}(\lambda_{1},\lambda_{2},\dots,\lambda_{L})$
where $\hat{I}$ is the identity matrix, the $\lambda_{j}$s are the eigenvalues
of $\hat{A}$, while $S_{ji}$ is the $j$th component of its $i$th eigenvector.
With these hypotheses and in the case of
$\hat{B}=\text{diag}(b_{1},\dots,b_{L})$ we can write the covariance matrix in
the two-times (with $t\geq s$) and non-stationary case:
$\hat{\sigma}(t,s)=\hat{S}\left(\hat{C}(t,s)+\hat{G}(t,s)\right)\hat{S}^{+}$
(6)
where:
$\displaystyle\hat{C}(t,s)=\exp(-\hat{\lambda}t)\hat{S}^{+}\langle\bm{V}(0),\bm{V}^{T}(0)\rangle\hat{S}\exp(-\hat{\lambda}s)$
(7a) $\displaystyle
G_{jm}(t,s)=\frac{\left(e^{-\lambda_{j}(t-s)}-e^{-(\lambda_{j}+\lambda_{m})s}\right)\sum_{n}S_{jn}^{+}S_{nm}b_{n}^{2}}{\lambda_{j}+\lambda_{m}}.$
(7b)
The first matrix represents the transient, and the brackets refer to the
average over initial conditions, while the NESS is described by
$\lim_{s\to\infty}G(t,s)$. Without noises, Eq. (7a) would be the solution of
Eq. 2, representing the correlations in the cooling state. We note that the
two correlation matrices have different mathematical structures. The
consequences of this, together with some properties of the cooling state, are
discussed in the SM (S4), while in the next paragraphs we will neglect
$\hat{C}$, concentrating on the NESS. Defining
$\hat{\sigma}(t^{\prime})=\lim_{t\to\infty}\hat{\sigma}(t+t^{\prime},t)$ and
through Eqs. (6) and (7b) it is also possible to evaluate the single particle
autocorrelation function
$\Gamma_{j}(t^{\prime})\equiv\sigma_{jj}(t^{\prime})/\sigma_{jj}(0)$:
$\Gamma_{j}(t^{\prime})=\frac{1}{\sigma_{jj}}\sum_{k}q_{jk}S^{+}_{kj}e^{-\lambda_{k}t^{\prime}},\quad
q_{jk}=\sum_{ls}\frac{S_{jl}S^{+}_{ks}S_{sl}b^{2}_{s}}{\lambda_{l}+\lambda_{k}}$
(8)
from which it is clear that, as expected for a linear system, the
autocorrelation function is a sum of exponential terms with different
characteristic times given by the inverse of the eigenvalues,
$\tau_{k}=1/\lambda_{k}$.
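This structure can be verified directly: in the stationary state of the OU process the lagged covariance is $\hat{\sigma}(t^{\prime})=e^{-\hat{A}t^{\prime}}\hat{\sigma}$, so $\Gamma_{j}$ inherits the decay rates $\lambda_{k}$. A minimal sketch with illustrative parameters:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Sketch: single-particle autocorrelation Γ_j(t') = σ_jj(t')/σ_jj(0) for
# the stationary process, using σ(t') = exp(-A t') σ. Parameters are
# illustrative; Cauchy-Schwarz guarantees |Γ_j| ≤ 1, and every decay rate
# exceeds γ_a = α γ (= 2 here), cf. Eq. (8).
L, gamma, alpha, beta, Ta = 8, 5.0, 0.4, 1.4, 0.5
A = gamma * (np.diag(np.full(L, 2 + alpha))
             + np.diag(np.full(L - 1, -1.0), 1)
             + np.diag(np.full(L - 1, -1.0), -1))
A[0, 0] = A[-1, -1] = gamma * (1 + beta)
b2 = np.full(L, 2 * alpha * gamma * Ta)
b2[0] = b2[-1] = 2 * beta * gamma * Ta
sigma = solve_continuous_lyapunov(A, np.diag(b2))

def Gamma(j, tp):
    # lagged stationary covariance, normalized by the variance
    return (expm(-A * tp) @ sigma)[j, j] / sigma[j, j]

assert abs(Gamma(3, 0.0) - 1.0) < 1e-12
assert Gamma(3, 0.1) < 1.0           # decorrelation at finite lag
assert abs(Gamma(3, 5.0)) < 1e-2     # all rates exceed γ_a = α γ = 2
```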
We will derive $\sigma_{jm}$ in a specific case where the diagonalisation of
$\hat{A}$ can be done analytically, and then follow a numerical technique of
diagonalisation [42] to show the robustness of our main result, i.e. the
power-law decay of spatial correlations. Before doing that, we briefly review
the techniques that have been used to solve similar problems, highlighting the
differences with the present case.
These kinds of lattice models, and also more complex ones (with higher
dimension and second order dynamics), when translational invariance holds, can
be mapped onto a system of independent equations for the modes in the Bravais
lattice, allowing a full solution [6]. However, our model (both NHHP and HHP)
does not have periodic boundary conditions, and the bath parameters depend on
the particular site position. Assuming translational invariance would mean giving
up some crucial aspects of our investigation. To keep a reasonable connection
with dense granular matter it is important to have a source of energy that
acts differently at the boundary and in the bulk of the system. Nevertheless,
in the next section we will discuss some common aspects between the HHP and
translational invariant systems.
We also point out that the continuous limit of Eq. 1a leads to the following
equation for the velocity field:
$\partial_{t}v(x,t)=-\gamma_{a}v(x,t)+\partial_{xx}v(x,t)+\sqrt{2T_{a}\gamma_{a}}\xi(x,t)$
with
$\langle\xi(x,t)\xi(x^{\prime},t^{\prime})\rangle=\delta(x-x^{\prime})\delta(t-t^{\prime})$.
Equations of this form applied to a density field describe a diffusion process
with traps and noise. The variation of the field at the point $x$ is indeed
given by a noise, a diffusive term and a loss term ($-\gamma_{a}v(x,t)$) that
represents the possibility for the particles to be permanently trapped. These
processes can be used to describe the dynamics of mobile defects in crystals
where translational invariance is assumed and the problem can be easily solved
in Fourier space [43]. Our case is different: external thermostats are
necessary to keep the system stationary, and they break translational
invariance. In the general case with space-dependent parameters, correlations
can be studied by diagonalising the matrix $\hat{A}$ or by exploiting Eq. 5
combined with physical constraints on $\hat{\sigma}$. The former strategy, used by us and
recently applied in [25, 22], is more convenient when possible because it also
gives access to time-dependent properties. The latter has been used to study
temperature profiles in non-equilibrium harmonic chains [20]. It is
important to stress that a crucial difference between the present work and the
aforementioned ones is that we deal with interactions acting on relative
velocities and not (only) on displacements. Indeed, we have a direct
competition between baths $\gamma_{a(b)}$ and interaction $\gamma$ in
$\hat{A}$, while in heated harmonic chains only the coupling constants appear
in the interaction matrix.
#### Toeplitz condition
In order to obtain an explicit form of Eq. 6 we consider the case of
$\gamma_{b}=\gamma+\gamma_{a}$, so that $\beta=1+\alpha$, making $\hat{A}$ a
Toeplitz matrix whose eigenvalues and eigenvectors are respectively:
$\lambda_{j}=\gamma(2+\alpha-2\cos(j\Pi)),\quad
S_{jm}=\sqrt{\frac{2\Pi}{\pi}}\sin\left(jm\Pi\right)$ (9)
where $\Pi=\pi/(L+1)$. Substituting these into Eq. (7b) and taking $t=s\to\infty$,
Eq. 6 becomes:
$\sigma_{jm}(\alpha)=\frac{2\Pi^{2}}{\gamma\pi^{2}}\sum_{lk}\frac{\sin\left(jl\Pi\right)\sin\left(mk\Pi\right)\left[\sum_{n}b^{2}_{n}\sin(ln\Pi)\sin(kn\Pi)\right]}{\Delta(\alpha)-\cos\left(k\Pi\right)-\cos\left(l\Pi\right)},$
(10)
where $\Delta(\alpha)=2+\alpha$. The sums run from 1 to $L$ and:
$b_{n}^{2}=\begin{cases}2(\gamma+\gamma_{a})T_{1},\quad n=1\\\
2\gamma_{a}T_{a},\quad 1<n<L\\\ 2(\gamma+\gamma_{a})T_{L},\quad n=L.\\\
\end{cases}$ (11)
We point out that Eq. (10) is symmetric with respect to the center of the
lattice (i.e. $\sigma_{1m}=\sigma_{L(L+1-m)}$) if the coefficients $b_{n}$ are too.
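Both the spectrum of Eq. (9) and the double sum of Eq. (10) can be checked against a direct solution of Eq. (5); the sketch below uses small illustrative parameters:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch: in the Toeplitz case β = 1 + α, compare (i) the analytic
# spectrum of Eq. (9) with numerical diagonalisation, and (ii) the double
# sum of Eq. (10), with coefficients from Eq. (11), against the direct
# solution of Eq. (5). All parameter values are illustrative.
L, gamma, alpha = 30, 5.0, 0.3
beta = 1 + alpha                                 # Toeplitz condition
T1, Ta, TL = 1.0, 0.5, 0.2
Pi = np.pi / (L + 1)
idx = np.arange(1, L + 1)

A = gamma * (np.diag(np.full(L, 2 + alpha))
             + np.diag(np.full(L - 1, -1.0), 1)
             + np.diag(np.full(L - 1, -1.0), -1))
A[0, 0] = A[-1, -1] = gamma * (1 + beta)

# (i) Eq. (9): λ_j = γ(2 + α - 2 cos(jΠ)), already sorted ascending
lam_analytic = gamma * (2 + alpha - 2 * np.cos(idx * Pi))
assert np.allclose(lam_analytic, np.linalg.eigvalsh(A))

# (ii) Eq. (10) with the b_n² of Eq. (11)
b2 = np.full(L, 2 * alpha * gamma * Ta)
b2[0] = 2 * (gamma + alpha * gamma) * T1
b2[-1] = 2 * (gamma + alpha * gamma) * TL
Ssin = np.sin(np.outer(idx, idx) * Pi)           # unnormalized sin(jmΠ)
inner = Ssin @ np.diag(b2) @ Ssin                # Σ_n b_n² sin(lnΠ) sin(knΠ)
denom = (2 + alpha) - np.add.outer(np.cos(idx * Pi), np.cos(idx * Pi))
sigma_sum = 2 * Pi**2 / (gamma * np.pi**2) * (Ssin @ (inner / denom) @ Ssin)

sigma_lyap = solve_continuous_lyapunov(A, np.diag(b2))
assert np.allclose(sigma_sum, sigma_lyap)
```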
## Results
### Power-Law correlations and slow time scales in the NHHP
We first study the NHHP, so we set $\gamma_{a}=0$ and use the Toeplitz
condition, which now reads $\gamma_{b}=\gamma$, i.e. $\beta=1$. Exploiting the
limit for large systems ($L\gg 1$), we can exchange sums with integrals as
$\Pi\sum_{k=1}^{k=L}f(k\Pi)\rightarrow\int_{0}^{\pi}dzf(z)$. We note that in
Eq. (10), when $\gamma_{a}=0$, the sum over $n$ is actually made of two terms.
The one multiplied by $\gamma_{b}T_{L}$ has a sign that depends on the parity
of $l$ and $k$, and this leads to a subleading contribution if one considers
$L\gg 1$ and $j,m\ll L$ (see S1 in the SM). Neglecting it and defining
$\Sigma_{jm}(\alpha)=\int_{0}^{\pi}dzds\frac{\sin(jz)\sin(ms)\sin(z)\sin(s)}{\Delta(\alpha)-\cos\left(z\right)-\cos\left(s\right)}$
(12)
we obtain the covariance matrix for the NHHP:
$\sigma^{\text{NHHP}}_{jm}=\frac{4T_{1}}{\pi^{2}}\Sigma_{jm}(0).$ (13)
The integral contained in $\Sigma_{jm}(0)$ is difficult to evaluate
explicitly, but the following asymptotic behaviors can be derived in the limit
$L\gg m\gg 1$:
$\displaystyle\sigma^{\text{NHHP}}_{mm}\sim\frac{1}{m^{2}}$ (14a)
$\displaystyle\sigma^{\text{NHHP}}_{1m}\sim\frac{8T_{1}}{\pi m^{3}}$ (14b)
$\displaystyle\zeta^{\text{NHHP}}_{1m}\sim\frac{1}{m^{2}}$ (14c)
As explained in the SM (S2), these results are obtained by expressing
$\sigma^{\text{NHHP}}_{jm}$ as a power series of $(jm)^{-1}$ by multiple
integration by parts and estimating suitable upper bounds. The limit $L\gg
m\gg 1$ is important because we want to study the asymptotic behavior of the
correlations in the range for which they are not affected by the opposite
boundary of the system. This is the reason why we predict just a decay for the
variance $\sigma_{mm}$ even if it must grow approaching the $L$th site if
$T_{L}\neq 0$. This growth for large $m$ is given by the term proportional to
$\gamma_{b}T_{L}$ that we have neglected going from Eq. (10) to Eq. (13).
Eq. (14c) clearly states that the bulk sites are correlated with the first
(heated) one by a power law decay with exponent 2. Regarding the correlations
between particles in the bulk, they show a decay even slower than a power law.
We discuss them in the last paragraph of this section. Regarding time scales,
looking at Eq. (8) and at the specific form of the eigenvalues of $\hat{A}$ in
Eq. (9) for $\alpha=0$, we see that, when $j/L\ll 1$, the slowest time scales
in the single particle autocorrelation function behave as:
$\tau_{j}^{\text{NHHP}}=1/\lambda_{j}\sim\tau L^{2}$ (15)
where $\tau=1/\gamma$. We note that the emergence of characteristic times that
scale with the system size, together with _scale free_ correlations, is fully
consistent: the information that influences the dynamics of every particle
comes from all across the system, and so the time needed to receive it must
increase with the system size.
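The scaling of Eq. (15) is easy to check: in the NHHP Toeplitz case $\lambda_{1}=\gamma(2-2\cos\Pi)\approx\gamma\pi^{2}/(L+1)^{2}$, so doubling $L$ roughly quadruples $\tau_{1}=1/\lambda_{1}$. A sketch with an illustrative $\gamma$:

```python
import numpy as np

# Sketch: in the NHHP (γ_a = 0, Toeplitz condition β = 1) the slowest
# relaxation time τ_1 = 1/λ_1 grows as L², Eq. (15). We compare the
# smallest eigenvalue of A at two system sizes (illustrative γ).
def lambda_min(L, gamma=5.0):
    A = gamma * (np.diag(np.full(L, 2.0))
                 + np.diag(np.full(L - 1, -1.0), 1)
                 + np.diag(np.full(L - 1, -1.0), -1))
    return np.linalg.eigvalsh(A)[0]

r = lambda_min(100) / lambda_min(200)
# doubling L should roughly quadruple τ_1 = 1/λ_1
assert 3.8 < r < 4.2
```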
### Finite Correlation Length and Times in the HHP
The emergence of _scale free_ correlations is often considered a remarkable
fact in physical systems. Nevertheless, we are dealing with a model, so it is
important to understand whether this result arises from a mere algebraic
coincidence or whether it is consistent with the usual framework in which
_scale free_ correlations are understood, i.e. a particular limit in which a
finite correlation length diverges. The study of the HHP comes into play to
provide evidence for this last scenario. We point out that by studying the HHP with
periodic boundary conditions, and therefore assuming translational invariance
(i.e. extending Eq. 1a to all the particles in the system), it is quite easy
to derive an exponential decay for the stationary spatial correlation
function. This can be done by expressing Eq. 1a in the Bravais lattice or by
studying the continuous limit of $\dot{\sigma}_{jm}=\langle
v_{j}\dot{v}_{m}+v_{m}\dot{v}_{j}\rangle=0$. Nevertheless, we want to study
the passage from the HHP to the NHHP when $\gamma_{a}\to 0$ so we proceed with
space dependent parameters from Eq. (10). This expression, in the HHP,
contains all the contributions given by Eq. (11). Performing the large system
limit and taking into account just the leading terms we arrive at the
following expression for the covariance matrix in the HHP (see S3 in the SM
for details):
$\begin{split}\sigma_{jm}^{\text{HHP}}(\alpha)=\frac{2\alpha
T_{a}}{\pi}\int_{0}^{\pi}dz\frac{\sin(jz)\sin(mz)}{\Delta(\alpha)-2\cos(z)}+\frac{4T_{1}}{\pi^{2}}\left[1+\alpha\left(1-\frac{T_{a}}{T_{1}}\right)\right]\Sigma_{jm}(\alpha)\end{split}$
(16)
where we see that for $\alpha=0$ Eq. (13) is recovered. It is important to
note that, trying to express the above equation as a power series in $m^{-1}$,
one finds that all the coefficients are zero, signaling a decay faster than
any power law. In order to go straight to the result we consider homogeneous
amplitude of noises i.e. $T_{1}=T_{L}=T_{a}\gamma_{a}/(\gamma+\gamma_{a})$ so
that the second term of Eq. (16) vanishes. In this condition the matrix
$\hat{B}$ is proportional to the identity so the system can reach
thermodynamic equilibrium. We then take the Fourier transform
$\tilde{\sigma}_{j\omega}(\alpha)=\int dm\exp(i\omega
m)\sigma^{\text{HHP}}_{jm}(\alpha)$ and study the limit $\omega\ll 1$ ($m\gg
1$):
$\tilde{\sigma}_{j\omega}(\alpha)\propto\int_{0}^{\pi}dz\frac{\delta(\omega-z)\sin(jz)}{\Delta(\alpha)-2\cos(z)}\sim\frac{\sin(j\omega)}{\alpha+\omega^{2}}$
(17)
whose inverse Fourier transform for $m>j$ is proportional to an exponential
with characteristic length $1/\sqrt{\alpha}$, so we have that
$\sigma^{\text{HHP}}_{jm}(\alpha)\sim\exp(-\sqrt{\alpha}m)$. This last result
is valid for a generic $j\ll L$ so it holds also for particles in the bulk. We
note that $\alpha\to 0$ is a singular limit because the pole of the last term
of the above equation tends to the real axis. Regarding the variances needed
to calculate $\zeta_{jm}$, we can write:
$\sigma^{\text{HHP}}_{mm}(\alpha)=\frac{2\alpha
T_{a}}{\pi}\int_{0}^{\pi}dz\frac{\sin^{2}(mz)}{\Delta(\alpha)-2\cos(z)}=T_{a}\sqrt{\frac{\alpha}{4+\alpha}}+o(m^{-1}),\quad
m\gg 1$ (18)
As expected, in the HHP the asymptotic temperature is a constant, which we
explicitly calculate in the SM (S3). We point out that this variance has two
reasonable limiting cases: for $\alpha=0$ it is $o(m^{-1})$ consistently with
the NHHP while $\lim_{\alpha\to\infty}\sigma^{\text{HHP}}_{mm}(\alpha)=T_{a}$
representing the condition for which the external bath overcomes the
interaction so that the variables are in equilibrium with the thermostats.
From this and by the definition of Eq. 4 we can conclude that spatial
correlations in the HHP follow an exponential decay with a finite
characteristic length scale $\xi$:
$\zeta^{\text{HHP}}_{jm}\sim e^{-m/\xi}\quad m\gg 1,\quad\xi=\alpha^{-1/2}.$
(19)
In the SM (S3) we show that this trend holds also without equal noise
amplitudes so it is not strictly related to the equilibrium condition. We note
that looking at this result in the framework of critical phenomena we would
have a critical point at $\alpha_{c}=0$ and a correlation length that diverges
as $\xi\sim(\alpha-\alpha_{c})^{-\nu}$ with a critical exponent $\nu=1/2$.
This critical point would then coincide with the NHHP. Indeed, in this phase,
the system behaves as in a critical regime where spatial correlations exhibit
a power law decay. Nevertheless, we make clear that this is just an analogy
and we do not interpret our results as a phase transition. Moreover, it is
important to recall that an equivalent equilibrium phase transition governed
by temperature could not occur because we are considering a 1D system. In
equilibrium cases there is actually a transition at zero temperature but it
coincides with a physical state with no dynamics. In other words, the model
described by Eqs. 1 cannot be mapped onto an Ising or Heisenberg-like
Hamiltonian system while maintaining the same properties. We also note that the same
scaling relation between correlation length and characteristic time of the
bath has also been found in dilute granular systems with a hydrodynamic
approach [44] and in dense active systems [26]. Nevertheless, in these two
translationally invariant systems the equivalent limit $\alpha=0$ is
meaningless, because in the first case it removes the driving while in the
second one it implies a deterministic constant self-propulsion. In Fig. 3 we
show that the exponential to power law crossover and the scaling for $\xi$
derived in the large system limit are clearly visible also for finite size
lattices.
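The $\xi=\alpha^{-1/2}$ scaling can be reproduced in a few lines by solving Eq. (5) and extracting an effective correlation length from the decay of $\zeta_{1m}$. The sketch below (a Fig.-3-like setup with illustrative parameter values) checks that quadrupling $\alpha$ halves $\xi$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch: effective correlation length from the exponential decay of
# ζ_1m in the HHP, checking the scaling ξ ~ α^{-1/2} of Eq. (19).
# Toeplitz condition β = 1 + α, T1 = Ta, TL = 0; values illustrative.
def xi_eff(alpha, L=400, gamma=5.0, T1=1e-3):
    beta = 1 + alpha
    A = gamma * (np.diag(np.full(L, 2 + alpha))
                 + np.diag(np.full(L - 1, -1.0), 1)
                 + np.diag(np.full(L - 1, -1.0), -1))
    A[0, 0] = A[-1, -1] = gamma * (1 + beta)
    b2 = np.full(L, 2 * alpha * gamma * T1)   # bulk bath, Ta = T1
    b2[0] = 2 * beta * gamma * T1             # heated first site
    b2[-1] = 0.0                              # TL = 0
    sigma = solve_continuous_lyapunov(A, np.diag(b2))
    zeta = sigma[0] / np.sqrt(sigma[0, 0] * np.diag(sigma))
    m1, m2 = 30, 60                           # bulk sites, 1 << m << L
    return (m2 - m1) / np.log(zeta[m1] / zeta[m2])

ratio = xi_eff(0.01) / xi_eff(0.04)
assert 1.7 < ratio < 2.3                      # ξ(α)/ξ(4α) ≈ 2
```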
Figure 3: a) Spatial correlation function calculated via Eq. (4). The entries
of $\hat{\sigma}$ are obtained from Eq. (6) with $t=s\gg 1$ and diagonalising
$\hat{A}$. The parameters of the system are: $L=500$, $\gamma=5$,
$\beta=1+\alpha$ (i.e. Toeplitz condition) and $\alpha\in[0.002,5]$. We
observe an exponential decay with a growing correlation length that turns into
a power law when $\alpha=0$. b) Scaling of the correlation length obtained
from an exponential fit of $\zeta^{\text{HHP}}_{1m}$ for different
combinations of parameters. We can see that the relation $\xi=\alpha^{-1/2}$
does not depend on the microscopic details of the system. Quasi-Toeplitz cases
are discussed in the next paragraph. In both panels we used
$T_{1}=T_{a}=0.001$ and $T_{L}=0$.
To discuss the characteristic time scales in the HHP as well, we note from
Eq. (9) that $\lambda_{j}>\gamma_{a}$ $\forall$ $j$, and so for finite
$\alpha$ and $j/L\ll 1$ we have that:
$\tau^{\text{HHP}}_{j}\sim 1/\gamma_{a}=\tau_{a}.$ (20)
This result is consistent with the fact that being correlated with a finite
fraction of the system implies a finite time to receive the information that
effectively determines the dynamics.
To conclude the comparison between HHP and NHHP, we stress that the difference
between the two phases originates in the structure of the eigenvalues of
$\hat{A}$. In particular, for both space and time correlations, the crucial
ingredient is that the spectrum of $\hat{A}$ accumulates at $\gamma_{a}$ for
$L\gg 1$ (Eq. (9)). Consequently, it accumulates at a finite value in the HHP
and at zero in the NHHP. The crossover between the two phases is then governed
by the limit $\alpha\to 0$, which leads to diverging correlation lengths and
times.
### Beyond the Toeplitz case
Up to now we have considered the special case $\beta=1+\alpha$ for which
$\hat{A}$ is a uniform Toeplitz matrix. Now we want to study the system with a
general viscous constant $\gamma_{b}\neq\gamma+\gamma_{a}$ at the boundaries.
Are the results obtained in the previous paragraphs still valid in this more
general case? In order to answer this question, we follow a procedure,
systematically explained in [42], to diagonalise quasi-uniform Toeplitz
matrices, i.e. matrices that deviate from the Toeplitz form only in a few
external borders. It does not give an analytical expression of the eigenvalues
and eigenvectors, but it guarantees some constraints on their form and allows
one to find their values by numerically solving a set of transcendental
equations. In
order to make our notation uniform with [42], we note that
$\hat{A}=\gamma(2+\alpha)\hat{I}-\gamma\hat{A}^{\prime}$ where:
$\hat{A}^{\prime}=\begin{pmatrix}x&1&&&\bm{0}\\\ 1&0&1&&\\\
&\ddots&\ddots&\ddots&\\\ &&1&0&1\\\ \bm{0}&&&1&x\end{pmatrix}$ (21)
and $x=1-\beta+\alpha$, so that for $\beta=1+\alpha$ we recover the Toeplitz
case. Defining $\lambda^{\prime}_{j}$ ($S^{\prime}_{ij}$) as the eigenvalues
(eigenvectors) of $\hat{A}^{\prime}$, we have
$\lambda_{j}=\gamma(2+\alpha)-\gamma\lambda^{\prime}_{j}$ and
$S_{jm}=S^{\prime}_{jm}$. If the eigenvalues are parametrized as
$\lambda^{\prime}_{j}=2\cos(k_{j})$ then we can find them by solving:
$k_{j}=\frac{\pi
j+2\phi(k_{j})}{L+1},\quad\phi(k)=k-\tan^{-1}\left(\frac{\sin(k)}{\cos(k)-x}\right)$
(22)
that determine the allowed values of $k_{j}$. The entries of the eigenvector
matrix $\hat{S}$ can then be directly obtained starting from the numerical
solution of Eq. (22) [42].
Figure 4: a) Spatial correlation function for different quasi-Toeplitz cases
in both HHP and NHHP. We can see that the two phases are stable also for large
values of negative $x$. The entries of $\hat{\sigma}$ are obtained from Eq.
(6) for $t=s\gg 1$ and diagonalising $\hat{A}$. b) Spectra of $\hat{A}$ for
different values of $x$ and $\alpha=2.1$. The spectra always accumulate at the
boundary of the band $[\gamma_{a},4\gamma+\gamma_{a}]$ and out-of-band
eigenvalues can occur only for $|x|>1$. We also note that in the range of
interest for the NHHP ($x\in[-\infty,1]$) the spectra are always positive,
ensuring the stability of the system. In both panels we used $L=500$,
$\gamma=5$, $T_{1}=T_{a}=0.001$ and $T_{L}=0$.
Once all the $\lambda_{j}$ and $S_{jm}$ are calculated, we can use Eq. (7b) in
the stationary case to obtain the covariance matrix and consequently the
correlation functions. In Fig. 4 we show the correlation function for some
quasi-Toeplitz cases for both the HHP and the NHHP finding the same asymptotic
behavior obtained for the Toeplitz one in Fig. 3a. Also the scaling for $\xi$
in the HHP does not change (see Fig. 3b). We note that the difference in terms
of parameters between the Toeplitz and quasi-Toeplitz cases is that in the
former we have just one adimensional ratio between viscous constants, i.e.
$\alpha=\gamma_{a}/\gamma$, while in the latter we can independently fix
$\beta=\gamma_{b}/\gamma$ and $\alpha$.
Given the form with which the eigenvalues are parametrized, they can take
values only in the band $\lambda^{\prime}_{j}\in[-2,2]$ and, equivalently,
$\lambda_{j}\in[\gamma_{a},4\gamma+\gamma_{a}]$. Nevertheless, for absolute
values of $x$ large enough, out-of-band eigenvalues can occur [42]. This fact
would compromise the existence of a stationary state in the NHHP because
$\hat{A}$ would cease to be positive semi-definite. A more refined inspection
of the spectral properties is then needed. Since $\beta>0$ by definition, we
are sure that $x\in[-\infty,1)$ in the NHHP. For $L\gg 1$ and $|x|>1$ two
out-of-band eigenvalues $\lambda^{\text{out}}_{1,2}$ emerge, converging to a common
value given by $\lambda^{\text{out}}_{1,2}=\gamma(2+\alpha-x-x^{-1})$ which,
in our case, is strictly positive, preventing any stability problem (see Fig.
4b). Moreover, as shown in the same panel, we can see that the spectrum of
$\hat{A}$ always accumulates at the boundary of the band independently of the
value of $x$. This is also clear by taking $j/L\ll 1$ or $\sim 1$ in Eqs. (22)
and verifying that $k_{j}$ tends respectively to $0$ or $\pi$. Consequently,
the $\lambda^{\prime}_{j}$s always accumulate at $2$ and the $\lambda_{j}$s at
$\gamma_{a}$. This generalizes our result about the power law decay in the
NHHP (i.e. with $\gamma_{a}=0$) to any $\gamma_{b}>0$ because, as explained in
the previous paragraphs, its origin lies in the accumulation of the
$\lambda_{j}$ spectrum at zero (see also Fig. 4).
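The out-of-band prediction $\lambda^{\text{out}}_{1,2}=\gamma(2+\alpha-x-x^{-1})$ is easy to verify numerically; a sketch with illustrative parameters (recall $x=1-\beta+\alpha$, so fixing $x$ fixes $\beta$):

```python
import numpy as np

# Sketch: numerical check of the out-of-band eigenvalues
# λ_out = γ(2 + α − x − 1/x) that appear for |x| > 1 in the
# quasi-Toeplitz case. All parameter values are illustrative.
L, gamma, alpha, x = 100, 5.0, 0.4, -2.0
beta = 1 + alpha - x                          # from x = 1 − β + α
A = gamma * (np.diag(np.full(L, 2 + alpha))
             + np.diag(np.full(L - 1, -1.0), 1)
             + np.diag(np.full(L - 1, -1.0), -1))
A[0, 0] = A[-1, -1] = gamma * (1 + beta)
lam = np.linalg.eigvalsh(A)
lam_out = gamma * (2 + alpha - x - 1 / x)     # = 24.5 for these values
# for x < −1 the two out-of-band eigenvalues sit above the band
# [γ_a, 4γ + γ_a] = [2, 22] and converge to a common value for L >> 1
assert np.all(lam[:-2] < gamma * (4 + alpha) + 1e-9)
assert np.allclose(lam[-2:], lam_out, atol=1e-6)
```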
### Correlations in the bulk and finite size effects
In the previous paragraphs we focused on the correlation function with respect
to the first site, $\zeta_{1m}$, in the limit $L\gg m\gg 1$. These conditions,
particularly in the NHHP, were crucial ingredients for the calculations. Moreover,
in Fig. 3a and Fig. 4a we have always shown the correlation function in the
case of $T_{L}=0$ in order to treat cases more compatible with our
calculations where the terms proportional to $T_{L}\sim\mathcal{O}(1/L)$ are
neglected. In this condition the only source of stochasticity is the bath on
the first site so the finite size effects do not substantially affect the
shape of $\zeta_{1m}$. Thus, the power law regime in the NHHP spans almost all
the system size.
Here we want to discuss the behavior of spatial correlations between particles
in the bulk (i.e. $\zeta_{jm}$ with $1\ll j,m\ll L$) and the finite size
effects for $T_{L}\neq 0$. In Fig. 5 we show $\zeta_{j(j+m-1)}$ with $j=1,L/2$
for different values of $L$ and $\alpha$. In all the cases we have
$T_{1}=T_{a}=T_{L}\neq 0$. The correlation function with respect to $L/2$ is
representative of the bulk, and we can see from Fig. 5 that in the HHP it
presents an exponential decay with a correlation length independent of $L$,
while in the NHHP it decays slower than a power law:
$\zeta^{\text{NHHP}}_{L/2(L/2+m-1)}$ remains essentially constant up to a
sharp cutoff that increases by raising $L$. Regarding
$\zeta^{\text{NHHP}}_{1m}$ for $T_{L}\neq 0$, we can still observe the power
law decay $\sim m^{-2}$ predicted in the previous paragraphs, but with a sharp
cutoff that occurs when $m$ is large enough and that depends on $L$. In Fig. 5b
we show the same curves as a function of $m/(L/2)$ and we note that the
cutoffs of the correlation functions in the NHHP collapse, signaling that
their size scales linearly with $L$. In other words, this confirms that, even
when the boundary effects affect the shape of $\zeta_{jm}$, the NHHP presents
_scale-free_ correlations. Indeed, the only typical correlation length that
one can define grows with the system size. As expected, the correlation
functions in the HHP separate when plotted as a function of $m/(L/2)$ because
their decay is strictly determined by $\alpha$ regardless of $L$.
Figure 5: a) Spatial correlation function with respect to the site
$j=1,\frac{L}{2}$ for $\beta=2$, $T_{1}=T_{a}=T_{L}=0.001$ and different
values of $\alpha$ and $L$. The entries of $\hat{\sigma}$ are obtained from
Eq. (6) for $t=s\gg 1$ and diagonalising $\hat{A}$. b) Same curves shown
in the left panel but as a function of the rescaled distance $m/(L/2)$. The
collapse of the cutoffs is a signature of _scale-free_ correlations [13].
## Discussion
We studied spatial and temporal correlations in the NESS reached by a velocity
field with viscous interactions defined on the lattice and coupled with
Brownian baths. The model reproduces three main characteristics of vibrated
granular matter at high density, i.e. dissipative forces, permanent contacts
and non-homogeneous energy injection. The typical correlation lengths and
times have a finite characteristic scale when the bulk particles are coupled
to an external bath (HHP regime); however such a scale diverges with the
system size, as in a _scale-free_ scenario, when the thermal bath is removed
from the bulk particles and kept acting on the boundary sites only (NHHP
regime). Solving this model as a diagonalisable multivariate Ornstein-
Uhlenbeck process, we unveiled the role of non-homogeneous heating in the
development of slow and collective dynamics. We conclude that keeping the bath
only at the boundaries allows one to obtain a driven NESS in which the
internal (deterministic) dynamics - and the corresponding propagation of
information and fluctuations - is not hindered by external disturbances. From
a mathematical point of view, this is reflected in the spectral properties of
the interaction matrix, whose spectrum accumulates at zero even in the
presence of noises at the boundaries of the lattice. Our findings provide an
example of a mechanism by which power law decays of correlations can occur out
of equilibrium,
shedding light on the emergence of collective behavior in dense granular
matter. Further investigations of this model, considering both harmonic and
viscous interactions, are promising steps towards the understanding of more
general non-equilibrium systems such as active matter and biological
assemblies.
## Supplemental Materials: Details of calculations
### S1: Subleading terms in the large system limit
Here we show how, performing the large system limit ($L\gg 1$), subleading
terms $\sim 1/L$ occur. Starting from Eq. (10) we consider the contribution
proportional to $b_{L}^{2}$:
$b_{L}^{2}\Pi^{2}\sum_{lk}\frac{\sin\left(jl\Pi\right)\sin\left(mk\Pi\right)\sin(lL\Pi)\sin(kL\Pi)}{\Delta(\alpha)-\cos\left(k\Pi\right)-\cos\left(l\Pi\right)}$
(23)
where $\Pi=\pi/(L+1)$ and we note that:
$\sin(lL\Pi)\sin(kL\Pi)=\left(-1\right)^{k+l+2}\sin(l\Pi)\sin(k\Pi)$.
Considering a generic function $f$ we can write
$\Pi^{2}\sum_{lk}(-1)^{k+l+2}f(jl\Pi,mk\Pi)=\Pi^{2}\sum_{nh}\big{[}f(2jn\Pi,2mh\Pi)-f(2jn\Pi+j\Pi,2mh\Pi)+f(2jn\Pi+j\Pi,2mh\Pi+m\Pi)\\\
-f(2jn\Pi,2mh\Pi+m\Pi)\big{]}$ (24)
which, taking the large system limit $L\gg 1$ and replacing sums with
integrals as
$\Pi\sum_{m=0}^{m=L/2}f(2m\Pi)\rightarrow\frac{1}{4}\int_{0}^{\pi}dxf(x)$,
becomes:
$\frac{1}{4}\int_{0}^{\pi}dzds\left[f(jz,ms)-f(jz+j\Pi,ms)+f(jz+j\Pi,ms+m\Pi)-f(jz,ms+m\Pi)\right]\sim\mathcal{O}(1/L),\quad
L\gg 1,\quad m\vee j\ll L$ (25)
because, at zeroth order, all the terms in the integrand cancel. This
explains why the term proportional to $b_{L}^{2}$ in Eq. (10) can be
neglected once the large system limit is taken, provided $j\vee m$ is small
enough. This is consistent with the idea that the effect of the bath acting
on the $L$th site can be neglected only if $\sigma_{jm}$ is evaluated at
sites far away from $L$.
### S2: Covariance matrix in the NHHP
Here we give some details about the calculations needed to derive the
asymptotic predictions of Eqs. (14) from Eq. (13). To do so we start from the
latter equation in a form more suitable for the next calculations:
$\sigma^{\text{NHHP}}_{jm}=\lim_{L\to\infty}\frac{4T_{1}}{\pi^{2}}\int_{\frac{\pi}{L+1}}^{\frac{\pi
L}{L+1}}dz\int_{\frac{\pi}{L+1}}^{\frac{\pi
L}{L+1}}ds\sin(jz)\sin(ms)g(z,s)\quad\text{where}\quad
g(z,s)=\frac{\sin(z)\sin(s)}{2-\cos(z)-\cos(s)}.$ (26)
In this expression we have kept the large-$L$ limit explicit because the
integrand $g$, as a function of both $z$ and $s$, is singular at the point
$(0,0)$: its correct value at the origin is fixed by the large-$L$ limit of
the integration domain
$[\frac{\pi}{L+1},\frac{\pi L}{L+1}]\times[\frac{\pi}{L+1},\frac{\pi L}{L+1}]$
in the $zs$ plane. More specifically, $0\leq g(z,s)\leq 1$
$\forall z,s\in[0,\pi]$ and $\lim_{z\to 0}g(z^{a},z^{b})\sim z^{a-b}$ if
$a\geq b$. In the remainder we take the integration intervals as
$[\frac{\pi}{L+1},\pi]$ because the singularity sits only at the origin.
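The stated properties of $g$ can be spot-checked numerically; the sketch below (illustrative, grid size chosen arbitrarily) samples $g$ on $[0,\pi]^2$ and probes the diagonal limit, which is in fact exact since $g(z,z)=\cos^{2}(z/2)$:

```python
import math

# Sample g on a midpoint grid of [0,π]^2 and check 0 <= g <= 1;
# along the diagonal, g(z,z) = cos^2(z/2) -> 1 for z -> 0.
def g(z, s):
    return math.sin(z) * math.sin(s) / (2 - math.cos(z) - math.cos(s))

pts = [math.pi * (i + 0.5) / 200 for i in range(200)]
gmax = max(g(z, s) for z in pts for s in pts)
gmin = min(g(z, s) for z in pts for s in pts)
```

The upper bound follows from $g(z,s)\leq\cos(z/2)\cos(s/2)$ by the AM-GM inequality applied to $\sin^{2}(z/2)+\sin^{2}(s/2)$ in the denominator.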
Integrating twice by parts and noting that $g(\pi,s)=g(z,\pi)=0$ $\forall$
$z,s$ we have:
$\sigma^{\text{NHHP}}_{jm}=\lim_{L\to\infty}\frac{4T_{1}}{\pi^{2}jm}\Biggl{[}\cos\left(\frac{j\pi}{L+1}\right)\cos\left(\frac{m\pi}{L+1}\right)g\left(\frac{\pi}{L+1},\frac{\pi}{L+1}\right)+\cos\left(\frac{m\pi}{L+1}\right)\int_{\frac{\pi}{L+1}}^{\pi}dz\cos\left(jz\right)\partial_{z}g\left(z,\frac{\pi}{L+1}\right)\\\
+\cos\left(\frac{j\pi}{L+1}\right)\int_{\frac{\pi}{L+1}}^{\pi}ds\cos\left(ms\right)\partial_{s}g\left(\frac{\pi}{L+1},s\right)+\int_{\frac{\pi}{L+1}}^{\pi}dsdz\cos\left(jz\right)\cos\left(ms\right)\partial_{zs}g(z,s)\Biggr{]}.$
(27)
We want to show that $\sigma^{\text{NHHP}}_{jm}\sim(jm)^{-1}$, so we have to
demonstrate that the sum of the terms in the square brackets is
$\mathcal{O}(1)$ for $m,j\gg 1$ in the large $L$ limit. The first term
clearly tends to $1$ when $L\to\infty$ regardless of the values of $j$ and
$m$ (remember that $j,m\ll L$): indeed
$g\left(\frac{\pi}{L+1},\frac{\pi}{L+1}\right)=\cos^{2}\left(\frac{\pi}{2(L+1)}\right)\to 1$.
Reintroducing $\Pi=\pi/(L+1)$ we can express Eq. (27) as:
$\sigma^{\text{NHHP}}_{jm}\sim\frac{4T_{1}}{\pi^{2}jm}\left[1+C_{jm}\right]\quad\text{where}\quad
C_{jm}=\lim_{L\to\infty}\left[\cos(m\Pi)I_{j}+\cos(j\Pi)I_{m}+I_{jm}\right]$
(28)
and where $I_{j}$, $I_{m}$ and $I_{jm}$ are respectively the integrals of the
second, third and fourth term in the square brackets of Eq. (27). The estimate
of the asymptotic behavior of these integrals is not trivial because the
derivatives of $g(z,s)$ diverge at the origin. We therefore proceed by
estimating upper bounds. It is important to note that, in order to
demonstrate $\sigma^{\text{NHHP}}_{jm}\sim(jm)^{-1}$, requiring
$C_{jm}\sim\mathcal{O}(1)$ or $|C_{jm}|\leq 1$ is not enough, because it
would allow contributions such as $-1\pm o(1/j)$ that imply the emergence of
a faster decay. What we must show instead is that $|C_{jm}|\leq c$ with
$c<1$: in this way $C_{jm}$ cannot cancel the $1$ in Eq.
(28). Starting with $I_{j}$, we define $u(z)=\partial_{z}g(z,\frac{\pi}{L+1})$
and rewrite it as:
$I_{j}=\int_{\frac{\pi}{L+1}}^{\pi+\frac{\pi}{L+1}}dz\cos\left(jz\right)u(z)+\mathcal{O}(1/L)$
(29)
Now we note that the interval of integration is much larger than the period
$T_{j}=\frac{2\pi}{j}$ of the cosine, so we can split it into a sum of
contributions over consecutive periods. Without loss of generality we can
assume $j$ even and exploit the periodicity of the cosine, obtaining:
$I_{j}=\sum_{k=1}^{k=j/2}\int_{(k-1)T_{j}+\Pi}^{kT_{j}+\Pi}dz\cos(jz)u(z)=\frac{1}{j}\int_{\Pi}^{2\pi+\Pi}dx\cos(x)\sum_{k=1}^{k=j/2}u\left(\frac{x}{j}+(k-1)T_{j}\right)$
(30)
where we have changed variable as $x=jz+2\pi(k-1)$ and reintroduced the
symbol $\Pi=\frac{\pi}{L+1}$. Now we use the fact that $T_{j}\ll 1$ to
exchange the sum over $k$ with an integral, as
$\sum_{k}f\left((k-1)T_{j}\right)\to T_{j}^{-1}\int d\phi_{j}f(\phi_{j})$, and
return to an expression containing $g$:
$I_{j}=\frac{1}{2\pi}\int_{\Pi}^{2\pi+\Pi}dx\cos(x)\int_{0}^{\pi-\frac{2\pi}{j}}d\phi_{j}u\left(\frac{x}{j}+\phi_{j}\right)=\frac{1}{2\pi}\int_{\Pi}^{2\pi+\Pi}dx\cos(x)\left[g\left(\frac{x}{j}+\pi-\frac{2\pi}{j},\Pi\right)-g\left(\frac{x}{j},\Pi\right)\right].$
(31)
The function $g$ admits a regular series expansion around the point
$(\pi,0)$. Using it, it is easy to verify that the integral of the first term
in the brackets gives $\mathcal{O}(1/j)$ contributions. We cannot perform such
an estimate for $g(x/j,\Pi)$ because its derivatives near the origin are not
well defined. Nevertheless, we know that $g(x/j,\Pi)\in[0,1]$ $\forall$
$x\in[\Pi,2\pi+\Pi]$ if $j$ is sufficiently large, so we can estimate an upper
bound for $I_{j}$ (and $I_{m}$) as $\lim_{L\to\infty}|I_{j(m)}|\leq 1/\pi$
for $j\gg 1$. This follows because, given an interval $T$ of length $2\pi$,
with $T_{+(-)}$ the sub-interval where the cosine is positive (negative), and
$g(x)\in[0,1]$ for $x\in T$, we can write:
$\Bigg{|}\int_{T}\cos(x)g(x)dx\Bigg{|}=\Bigg{|}\bigg{|}\int_{T_{+}}\cos(x)g(x)dx\bigg{|}-\bigg{|}\int_{T_{-}}\cos(x)g(x)dx\bigg{|}\Bigg{|}\leq\frac{1}{2}\int_{T}\big{|}\cos(x)\big{|}dx=2$
(32)
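This cancellation bound can be illustrated numerically on the concrete integrand; in the sketch below (parameters $j$ and $L$ chosen arbitrarily for the test, not taken from the paper) the weighted integral stays well inside the bound:

```python
import math

# For any h(x) in [0,1] on a 2π-long interval T, |∫_T cos(x) h(x) dx| <= 2.
# Here h(x) = g(x/j, Π), evaluated by a midpoint rule.
def g(z, s):
    return math.sin(z) * math.sin(s) / (2 - math.cos(z) - math.cos(s))

j, L = 50, 1000
Pi = math.pi / (L + 1)
n = 20000
dx = 2 * math.pi / n
val = sum(math.cos(Pi + (i + 0.5) * dx) * g((Pi + (i + 0.5) * dx) / j, Pi)
          for i in range(n)) * dx
```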
With the same kind of calculations leading to Eq. (31) we obtain:
$I_{jm}=\frac{1}{4\pi^{2}}\int_{\Pi}^{2\pi+\Pi}dxdy\cos(x)\cos(y)g\left(\frac{x}{j},\frac{y}{m}\right)+\mathcal{O}((mj)^{-1}).$
(33)
Using inequalities similar to those of Eq. (32), but for 2D integrals, we
estimate the upper bound of Eq. (33) as $\lim_{L\to\infty}|I_{jm}|\leq
2/\pi^{2}$ for $j,m\gg 1$. Putting these results together in the definition of
$C_{jm}$ of Eq. (28), we are sure that in the large $L$ limit:
$|C_{jm}|\leq\lim_{L\to\infty}\left[|I_{j}|+|I_{m}|+|I_{jm}|\right]\leq\frac{2}{\pi}\left(1+\frac{1}{\pi}\right)\simeq
0.83926<1\quad\text{for}\quad j,m\gg 1$ (34)
We conclude that $\sigma^{\text{NHHP}}_{jm}\sim(jm)^{-1}$ from which Eq. (14a)
is straightforward.
It is important to note that, in order to obtain Eqs. (30) and (31), we need
both $j$ and $m\gg 1$, so we have to estimate the asymptotic behavior of
$\sigma^{\text{NHHP}}_{1m}$ in another way. It can be rewritten as
$\sigma^{\text{NHHP}}_{1m}=\frac{4T_{1}}{\pi^{2}}\int_{0}^{\pi}dzds\sin(ms)g_{1}(s,z)\quad\text{where}\quad
g_{1}(s,z)=\frac{\sin^{2}(z)\sin(s)}{2-\cos(z)-\cos(s)}$ (35)
and $g_{1}$ is regular at the origin because $\lim_{z\to
0}g_{1}(z^{a},z^{b})=0$ $\forall$ $a,b>0$. We can perform the integral over
$z$ obtaining
$\int_{0}^{\pi}dzg_{1}(z,s)=\pi\left[2-\cos(s)-\sqrt{6-2\cos(s)}\sin(s/2)\right]\sin(s)$
where the first two terms in the brackets vanish when also the integral over
$s$ is performed ($m$ is an integer). We have now that
$\sigma^{\text{NHHP}}_{1m}=-\frac{4T_{1}}{\pi}\int_{0}^{\pi}ds\sin(ms)f(s)$
where $f(s)=\sin(s)\sqrt{6-2\cos(s)}\sin(s/2)$. Integrating four
times by parts and noting that $f(0)=f(\pi)=f^{\prime\prime}(\pi)=0$ while
$f^{\prime\prime}(0)=2$ we obtain:
$\sigma^{\text{NHHP}}_{1m}=\frac{8T_{1}}{\pi m^{3}}+R_{m}\sim\frac{8T_{1}}{\pi
m^{3}}+\mathcal{O}(m^{-5})\quad m\gg 1$ (36)
where $R_{m}\propto m^{-4}\int_{0}^{\pi}ds\sin(ms)f^{(4)}(s)$; since
$f^{(4)}$ is bounded on $[0,\pi]$, a further integration by parts shows that
this remainder is $\mathcal{O}(m^{-5})$. The last quantity needed for Eqs.
(14) is
$\sigma^{\text{NHHP}}_{11}=\frac{4T_{1}}{\pi^{2}}\int_{0}^{\pi}dzds\sin(z)\sin(s)g(z,s)=\frac{4T_{1}}{\pi^{2}}\left(\pi^{2}-8\pi/3\right)$,
which is finite and does not depend on $m$, so the asymptotic behavior of
$\zeta_{1m}$ directly follows from the ones derived for Eqs. (14a) and (14b).
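The closed form of the last double integral can be verified numerically; a minimal midpoint-rule sketch (grid size chosen arbitrarily):

```python
import math

# Midpoint-rule check of ∫₀^π ∫₀^π sin(z) sin(s) g(z,s) dz ds = π² - 8π/3.
def g(z, s):
    return math.sin(z) * math.sin(s) / (2 - math.cos(z) - math.cos(s))

n = 400
hstep = math.pi / n
pts = [(i + 0.5) * hstep for i in range(n)]
val = sum(math.sin(z) * math.sin(s) * g(z, s)
          for z in pts for s in pts) * hstep ** 2
exact = math.pi ** 2 - 8 * math.pi / 3   # ≈ 1.49199
```

The integrand is bounded and continuous (it vanishes along every path into the origin), so a plain midpoint rule converges without special treatment of the singular point of $g$.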
### S3: Covariance matrix in the HHP
In order to derive Eq. (16) from Eq. (10) we have to discuss the contributions
coming from the sum $\sum_{n}b_{n}^{2}\sin(ln\Pi)\sin(kn\Pi)$ that appears in
the latter. As explained in Sec. S1, the term proportional to
$b^{2}_{L}$ gives a subleading term $\mathcal{O}(1/L)$ in the large system
limit, while the one proportional to $b_{1}^{2}$ gives
$4T_{1}(1+\alpha)(\pi)^{-2}\Sigma_{jm}(\alpha)$. Regarding the other
contributions, we exploit orthogonality to express the remaining sum as:
$\sum_{n=2}^{n=L-1}\sin(ln\Pi)\sin(kn\Pi)=\frac{L+1}{2}\delta_{kl}-\sin(l\Pi)\sin(k\Pi)-\sin(lL\Pi)\sin(kL\Pi)$
(37)
where again the last term gives $\mathcal{O}(1/L)$ corrections for $L\gg 1$.
Thus, using this equation and neglecting subleading terms, Eq. (10) becomes:
$\sigma_{jm}(\alpha)=\Pi^{2}\sum_{lk}\frac{\sin\left(jl\Pi\right)\sin\left(mk\Pi\right)}{\Delta(\alpha)-\cos\left(k\Pi\right)-\cos\left(l\Pi\right)}\left[\frac{2\alpha
T_{a}(L+1)}{\pi^{2}}\delta_{kl}+\frac{4T_{1}}{\pi^{2}}\left(1+\alpha\left(1-\frac{T_{a}}{T_{1}}\right)\right)\sin(l\Pi)\sin(k\Pi)\right]$
(38)
that in the large system limit gives Eq. (16).
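The finite-sum identity of Eq. (37) is exact (discrete sine orthogonality on $n=1,\dots,L$ minus the $n=1$ and $n=L$ terms) and can be spot-checked numerically, e.g. at $L=12$:

```python
import math

# Check Eq. (37): sum over n = 2..L-1 of sin(lnΠ)sin(knΠ) equals
# (L+1)/2 δ_kl minus the n=1 and n=L boundary terms, for all (l, k).
L = 12
Pi = math.pi / (L + 1)
dev = 0.0
for l in range(1, L + 1):
    for k in range(1, L + 1):
        s = sum(math.sin(l * n * Pi) * math.sin(k * n * Pi)
                for n in range(2, L))          # n = 2 .. L-1
        rhs = ((L + 1) / 2 if l == k else 0.0) \
            - math.sin(l * Pi) * math.sin(k * Pi) \
            - math.sin(l * L * Pi) * math.sin(k * L * Pi)
        dev = max(dev, abs(s - rhs))
```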
In the main text we proceed from Eq. (16) by considering a constant noise
amplitude, i.e. $T_{1}=T_{a}\gamma_{a}/(\gamma+\gamma_{a})$. In this way the
term proportional to $\Sigma(\alpha)$ vanishes and one can shorten the
calculations, concentrating just on the integral over $z$. To verify that the
asymptotic behavior of Eq. (19) holds also without constant noise amplitude,
we have to show that $\Sigma_{jm}(\alpha)$ does not decay more slowly than
$\exp({-\sqrt{\alpha}m})$. We then consider the Fourier transform
$\tilde{\Sigma}_{j\omega}(\alpha)=\int dm\exp(i\omega m)\Sigma_{jm}(\alpha)$
for small $\omega$:
$\tilde{\Sigma}_{j\omega}\sim\int_{0}^{\pi}dz\frac{\sin(jz)\sin(z)\omega}{1+\alpha-\cos(z)+\frac{\omega^{2}}{2}}\quad\text{so}\quad\Sigma_{jm}\sim\int_{0}^{\pi}dz\frac{\sin(jz)\sin(z)}{1+\alpha-\cos(z)}\exp(-m\sqrt{2(1+\alpha-\cos(z))})$
(39)
and from this last expression it is simple to show that
$|\Sigma_{jm}|\leq\frac{\pi}{\alpha}\exp(-\sqrt{2\alpha}m)$. We are then sure
that its behavior for large $m$ is subleading with respect to
$\exp(-\sqrt{\alpha}m)$.
To complete the discussion of the exponential decay in the HHP we need to
evaluate the result of Eq. (18). We write this integral after one
integration by parts, obtaining:
$\frac{2\alpha
T_{a}}{\pi}\int_{0}^{\pi}dz\frac{\sin^{2}(mz)}{\Delta(\alpha)-2\cos(z)}=\frac{2\alpha
T_{a}}{\pi}\left[\frac{\pi}{2(4+\alpha)}-\int_{0}^{\pi}dz\frac{z\sin(z)}{(\Delta(\alpha)-2\cos(z))^{2}}-\int_{0}^{\pi}dz\frac{\sin(mz)\sin(z)}{2m(\Delta(\alpha)-2\cos(z))^{2}}\right]$
(40)
from which we have that
$\sigma^{\text{HHP}}_{mm}(\alpha)=T_{a}\sqrt{\frac{\alpha}{4+\alpha}}+o(m^{-1})$.
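The large-$m$ limit of the diagonal element can be checked directly on the integral in Eq. (40); the sketch below assumes $\Delta(\alpha)=2+\alpha$ (an assumption consistent with the denominators $1+\alpha-\cos(z)$ appearing above) and arbitrary illustrative parameters:

```python
import math

# Midpoint-rule check that (2αT_a/π) ∫₀^π sin²(mz)/(2+α-2cos z) dz
# tends to T_a sqrt(α/(4+α)) for large m.
alpha, Ta, m = 1.0, 1.0, 200
n = 50000
hstep = math.pi / n
integral = sum(
    math.sin(m * z) ** 2 / (2 + alpha - 2 * math.cos(z))
    for z in [(i + 0.5) * hstep for i in range(n)]
) * hstep
sigma_mm = 2 * alpha * Ta / math.pi * integral
target = Ta * math.sqrt(alpha / (4 + alpha))
```

The oscillating part $\cos(2mz)$ of $\sin^{2}(mz)$ integrates to an exponentially small correction, so already at $m=200$ the plateau value is reached to high accuracy.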
### S4: Spatial correlation in the cooling state
An important question that often arises in granular systems regards the
relation between the properties of the cooling dynamics and those of the
NESS obtained with the injection of energy. In our case we obtain the cooling
state by switching off all the temperatures in the lattice (matrix $\hat{B}$
with all zero entries). In this situation the covariance matrix is simply
given by Eq. (7a), where the brackets $\langle\rangle$ refer to an average
over the initial conditions. Exploiting the symmetry of $\hat{A}$ we can
rewrite it as:
$\sigma_{jm}(t,s)=\sum_{nhkl}S_{hn}e^{-\lambda_{n}t}S^{+}_{nk}\langle
v_{k}(0)v_{l}(0)\rangle S_{lh}e^{-\lambda_{h}s}S^{+}_{hj}$ (41)
Taking initial conditions independently and identically distributed around
$0$ with unit variance, so that $\langle v_{k}(0)v_{l}(0)\rangle=\delta_{kl}$,
and exploiting the orthogonality of the eigenvectors we have:
$\sigma_{jm}(t,s)=\sum_{n}S_{jn}e^{-\lambda_{n}(t+s)}S^{+}_{nm}$ (42)
In the Toeplitz case for $t=s$ this becomes:
$\sigma_{jm}(t)=\frac{2\exp(-2(2\gamma+\gamma_{a})t)\Pi}{\pi}\sum_{n}\sin\left(jn\Pi\right)\sin\left(nm\Pi\right)\exp\left(4\gamma
t\cos\left(n\Pi\right)\right).$ (43)
where we note that for $t=0$ one has $\sigma_{jm}(0)=\delta_{jm}$, as imposed
by the initial state. The same uncorrelated condition, expected for
non-interacting systems, is also obtained with $\gamma=0$. Another important
property of $\sigma_{jm}(t)$ is that the dependence on $\gamma_{a}$ factors
out of the sum, so it cancels when calculating
$\zeta_{jm}=\sigma_{jm}/\sqrt{\sigma_{jj}\sigma_{mm}}$.
Moreover, also the dependence on $\gamma$ can be removed just by using the
dimensionless time $\tilde{t}=\gamma t$. To conclude, during the cooling the
behavior of spatial correlations is crucially different from the one observed
in the two heated phases studied in the main text. In particular, the
parameter $\alpha$ does not play a crucial role as in the NESS. This is an
intriguing result, because it shows that an external source of energy does
something more than just keeping alive the dynamics that characterizes the
system when it cools down.
Figure 6: Spatial correlation function in the cooling state after different
times $\tilde{t}$. We observe a collapse by rescaling the horizontal axis by
$\sqrt{\tilde{t}}$.
In Fig. 6 we show $\zeta_{1x}(\tilde{t})$ for different times $\tilde{t}$ and
we clearly observe that it presents a finite cutoff that grows with the delay
time $\tilde{t}$. We can understand this by noting that information
propagates through the system in time. In Fig. 6b we show that, rescaling
space by $\sqrt{\tilde{t}}$, all the curves collapse: the information thus
propagates as $\xi(t)\propto\sqrt{\gamma t}$. This result is fully consistent
with the diffusion-like coarsening dynamics of vortices found in other models
for granular velocity fields [45, 29, 46]. In those models, however, the
cooling state is closer to "dilute" situations where interactions are
sequences of separate binary collisions.
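The structure of Eqs. (42)-(43) can be illustrated with the known eigenbasis of the Toeplitz chain; the sketch below assumes orthonormal eigenvectors $S_{jn}=\sqrt{2/(L+1)}\sin(jn\Pi)$ and eigenvalues $\lambda_{n}=\gamma_{a}+2\gamma(1-\cos(n\Pi))$ (consistent with the exponent in Eq. (43)) and checks that at $t=0$ the covariance reduces to the identity, while at later times all its entries shrink:

```python
import math

# Cooling-state covariance σ_jm(t) from the spectral sum of Eq. (42).
L, gamma, gamma_a = 20, 0.7, 0.3
Pi = math.pi / (L + 1)

def sigma(j, m, t):
    return sum(
        2 / (L + 1) * math.sin(j * n * Pi) * math.sin(n * m * Pi)
        * math.exp(-2 * (gamma_a + 2 * gamma * (1 - math.cos(n * Pi))) * t)
        for n in range(1, L + 1)
    )

# deviation from δ_jm at t = 0 (discrete sine orthonormality)
dev0 = max(abs(sigma(j, m, 0.0) - (1.0 if j == m else 0.0))
           for j in range(1, L + 1) for m in range(1, L + 1))
```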
### S5: Reintroduction of space and connection with active matter
Although it is reasonably justified by empirical observations, neglecting
the positional dynamics remains the main approximation of our model. A way to
reintroduce it in our description is to consider a harmonic potential between
nearest neighbors in the lattice. The equation of motion for each particle
would then be of the form
$\displaystyle\dot{x}_{i}=v_{i}$ (44a)
$\displaystyle\dot{v}_{i}=-(\gamma_{a(b)}+2\gamma)v_{i}-2kx_{i}+k(x_{i+1}+x_{i-1})+\gamma(v_{i+1}+v_{i-1})+\sqrt{2T_{a(i)}\gamma_{a(b)}}\xi_{i}(t)$
(44b)
where we consider again a bath on the boundaries characterized by
($\gamma_{b}$, $T_{1(L)}$) and a bath on the bulk ($\gamma_{a}$, $T_{a}$).
It is interesting to note that equations of the same form are obtained when
considering a 1D chain of overdamped active particles with harmonic
interactions, where self-propulsion is modeled by a colored noise $\eta$
(AOUP):
$\displaystyle\dot{x}_{i}=-k(x_{i}-x_{i+1})-k(x_{i}-x_{i-1})+\eta_{i}(t)$
(45a)
$\displaystyle\dot{\eta}_{i}=-\gamma_{a}\eta_{i}+\sqrt{2T_{a}\gamma_{a}}\xi_{i}(t)$
(45b)
where the $\xi_{i}$ are Gaussian white noises with unit variance.
Differentiating the first of these equations with respect to time and
following standard manipulations, we get [47]:
$\displaystyle\dot{x}_{i}=v_{i}$ (46a)
$\displaystyle\dot{v}_{i}=-2k\gamma_{a}x_{i}-(\gamma_{a}+2k)v_{i}+k\gamma_{a}(x_{i+1}+x_{i-1})+k(v_{i+1}+v_{i-1})+\sqrt{2T_{a}\gamma_{a}}\xi_{i}(t)$
(46b)
which are formally equivalent to Eqs. (44). If we consider the particles fixed
on the lattice and neglect the positional dynamics, we find the analogue of
the granular case studied in the main text, with a transition at
$\gamma_{a}=0$.
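The elimination of $\eta$ leading from Eqs. (45) to Eqs. (46) is a purely algebraic identity and can be checked on a random state (noise terms set to zero; the boundary second-differences, which drop out of the identity, are left arbitrary):

```python
import random

# With η_i = v_i + k(2x_i - x_{i+1} - x_{i-1}) and η̇_i = -γ_a η_i,
# the time derivative of Eq. (45a) must match the drift of Eq. (46b).
random.seed(1)
k, ga = 1.3, 0.8
x = [random.uniform(-1, 1) for _ in range(3)]    # x_{i-1}, x_i, x_{i+1}
eta = [random.uniform(-1, 1) for _ in range(3)]  # colored noise at each site
# second differences in Eq. (45a); only the central one is fixed by x
d2 = [random.uniform(-1, 1), 2 * x[1] - x[2] - x[0], random.uniform(-1, 1)]
v = [-k * d2[i] + eta[i] for i in range(3)]      # velocities from Eq. (45a)
# d/dt of Eq. (45a) at the central site (noise omitted):
vdot_elim = -k * (2 * v[1] - v[2] - v[0]) - ga * eta[1]
# drift of Eq. (46b) at the central site:
vdot_46 = (-2 * k * ga * x[1] - (ga + 2 * k) * v[1]
           + k * ga * (x[2] + x[0]) + k * (v[2] + v[0]))
```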
While in the granular chain removing the bath from the bulk corresponds to a
specific and realistic physical condition (granular materials are often
driven only through the boundaries), in the active case it seems meaningless.
A self-propelled harmonic chain modeled by Eqs. (46) has been studied taking
into account the positional dynamics and assuming spatially homogeneous
self-propulsion [26]. The authors perform calculations based on translational
invariance (they solve the system in the Bravais reciprocal lattice). This
assumption is crucial, and it is also the main difference with our approach,
in which we are interested in the effect of non-homogeneous heating. The
interesting connection with our investigation is that they find a correlation
length that scales as $\xi\sim\sqrt{1/\gamma_{a}}$, as in our case [26].
The study of correlations in this kind of 1D system with both positional
dynamics and non-homogeneous heating is, to the best of our knowledge, still
lacking. We are currently working in this direction.
## References
* [1] Narayan, V., Ramaswamy, S. & Menon, N. Long-lived giant number fluctuations in a swarming granular nematic. _Science_ 317, 105–108, DOI: 10.1126/science.1140414 (2007).
* [2] Kumar, N., Soni, H., Ramaswamy, S. & Sood, A. Flocking at a distance in active granular matter. _Nat. Comm._ 5, 1–9, DOI: 10.1038/ncomms5688 (2014).
* [3] Fily, Y. & Marchetti, M. C. Athermal phase separation of self-propelled particles with no alignment. _Phys. Rev. Lett._ 108, 235702, DOI: 10.1103/PhysRevLett.108.235702 (2012).
* [4] Redner, G. S., Hagan, M. F. & Baskaran, A. Structure and dynamics of a phase-separating active colloidal fluid. _Phys. Rev. Lett._ 110, 055701, DOI: 10.1103/PhysRevLett.110.055701 (2013).
* [5] Cates, M. E. & Tailleur, J. Motility-induced phase separation. _Annu. Rev. Condens. Matter Phys._ 6, 219–244, DOI: 10.1146/annurev-conmatphys-031214-014710 (2015).
* [6] Caprini, L., Marini Bettolo Marconi, U. & Puglisi, A. Spontaneous velocity alignment in motility-induced phase separation. _Phys. Rev. Lett._ 124, 078001, DOI: 10.1103/PhysRevLett.124.078001 (2020).
* [7] Alert, R. & Trepat, X. Physical models of collective cell migration. _Annual Review of Condensed Matter Physics_ 11, 77–101, DOI: 10.1146/annurev-conmatphys-031218-013516 (2020).
* [8] Scalliet, C., Gnoli, A., Puglisi, A. & Vulpiani, A. Cages and anomalous diffusion in vibrated dense granular media. _Phys. Rev. Lett._ 114, 198001, DOI: 10.1103/PhysRevLett.114.198001 (2015).
* [9] Plati, A., Baldassarri, A., Gnoli, A., Gradenigo, G. & Puglisi, A. Dynamical collective memory in fluidized granular materials. _Phys. Rev. Lett._ 123, 038002, DOI: 10.1103/PhysRevLett.123.038002 (2019).
* [10] Plati, A. & Puglisi, A. Slow time scales in a dense vibrofluidized granular material. _Phys. Rev. E_ 102, 012908, DOI: 10.1103/PhysRevE.102.012908 (2020).
* [11] Vicsek, T., Czirók, A., Ben-Jacob, E., Cohen, I. & Shochet, O. Novel type of phase transition in a system of self-driven particles. _Phys. Rev. Lett._ 75, 1226, DOI: 10.1103/PhysRevLett.75.1226 (1995).
* [12] Toner, J. & Tu, Y. Flocks, herds, and schools: A quantitative theory of flocking. _Phys. Rev. E_ 58, 4828, DOI: 10.1103/PhysRevE.58.4828 (1998).
* [13] Cavagna, A. _et al._ Scale-free correlations in starling flocks. _Proceedings of the National Academy of Sciences_ 107, 11865–11870, DOI: 10.1073/pnas.1005766107 (2010). https://www.pnas.org/content/107/26/11865.full.pdf.
* [14] Ma, S.-K. _Modern Theory of Critical Phenomena_ (Routledge, 2018).
* [15] Cavagna, A. _et al._ Dynamical renormalization group approach to the collective behavior of swarms. _Phys. Rev. Lett._ 123, 268001, DOI: 10.1103/PhysRevLett.123.268001 (2019).
* [16] Gradenigo, G., Ferrero, E. E., Bertin, E. & Barrat, J.-L. Edwards thermodynamics for a driven athermal system with dry friction. _Phys. Rev. Lett._ 115, 140601, DOI: 10.1103/PhysRevLett.115.140601 (2015).
* [17] Garrido, P. L., Lebowitz, J. L., Maes, C. & Spohn, H. Long-range correlations for conservative dynamics. _Phys. Rev. A_ 42, 1954, DOI: 10.1103/PhysRevA.42.1954 (1990).
* [18] Grinstein, G., Lee, D.-H. & Sachdev, S. Conservation laws, anisotropy, and “self-organized criticality” in noisy nonequilibrium systems. _Phys. Rev. Lett._ 64, 1927, DOI: 10.1103/PhysRevLett.64.1927 (1990).
* [19] Bertini, L., De Sole, A., Gabrielli, D., Jona-Lasinio, G. & Landim, C. Macroscopic fluctuation theory. _Rev. Mod. Phys._ 87, 593, DOI: 10.1103/RevModPhys.87.593 (2015).
* [20] Rieder, Z., Lebowitz, J. L. & Lieb, E. Properties of a harmonic crystal in a stationary nonequilibrium state. _Journal of Mathematical Physics_ 8, 1073–1078, DOI: 10.1063/1.1705319 (1967). https://doi.org/10.1063/1.1705319.
* [21] Lepri, S., Livi, R. & Politi, A. Thermal conduction in classical low-dimensional lattices. _Physics reports_ 377, 1–80, DOI: 10.1016/S0370-1573(02)00558-6 (2003).
* [22] Falasco, G., Baiesi, M., Molinaro, L., Conti, L. & Baldovin, F. Energy repartition for a harmonic chain with local reservoirs. _Phys. Rev. E_ 92, 022129, DOI: 10.1103/PhysRevE.92.022129 (2015).
* [23] Derrida, B. An exactly soluble non-equilibrium system: the asymmetric simple exclusion process. _Phys. Rep._ 301, 65–83, DOI: 10.1016/S0370-1573(98)00006-4 (1998).
* [24] Prados, A., Lasanta, A. & Hurtado, P. I. Large fluctuations in driven dissipative media. _Phys. Rev. Lett._ 107, 140601, DOI: 10.1103/PhysRevLett.107.140601 (2011).
* [25] Ishiwata, R., Yaguchi, R. & Sugiyama, Y. Correlations and responses for a system of $n$ coupled linear oscillators with asymmetric interactions. _Phys. Rev. E_ 102, 012150, DOI: 10.1103/PhysRevE.102.012150 (2020).
* [26] Caprini, L. & Marconi, U. M. B. Time-dependent properties of interacting active matter: Dynamical behavior of one-dimensional systems of self-propelled particles. _Phys. Rev. Research_ 2, 033518, DOI: 10.1103/PhysRevResearch.2.033518 (2020).
* [27] Manacorda, A. & Puglisi, A. Lattice model to derive the fluctuating hydrodynamics of active particles with inertia. _Phys. Rev. Lett._ 119, 208003, DOI: 10.1103/PhysRevLett.119.208003 (2017).
* [28] Buttà, P., Flandoli, F., Ottobre, M. & Zegarlinski, B. A non-linear kinetic model of self-propelled particles with multiple equilibria. _Kinetic and Related Models_ 12, 791–827, DOI: 10.3934/krm.2019031 (2019).
* [29] Baldassarri, A., Marini Bettolo Marconi, U. & Puglisi, A. Cooling of a lattice granular fluid as an ordering process. _Phys. Rev. E_ 65, 051301, DOI: 10.1103/PhysRevE.65.051301 (2002).
* [30] Lasanta, A., Manacorda, A., Prados, A. & Puglisi, A. Fluctuating hydrodynamics and mesoscopic effects of spatial correlations in dissipative systems with conserved momentum. _New J. Phys._ 17, 083039, DOI: 10.1088/1367-2630/17/8/083039 (2015).
* [31] Baldassarri, A., Puglisi, A. & Prados, A. Hydrodynamics of granular particles on a line. _Phys. Rev. E_ 97, 062905, DOI: 10.1103/PhysRevE.97.062905 (2018).
* [32] Battle, C. _et al._ Broken detailed balance at mesoscopic scales in active biological systems. _Science_ 352, 604–607, DOI: 10.1126/science.aac8167 (2016). https://science.sciencemag.org/content/352/6285/604.full.pdf.
* [33] Mura, F., Gradziuk, G. & Broedersz, C. P. Nonequilibrium scaling behavior in driven soft biological assemblies. _Phys. Rev. Lett._ 121, 038002, DOI: 10.1103/PhysRevLett.121.038002 (2018).
* [34] Herrmann, H., Hovi, J.-P. & Luding, S. _Physics of Dry Granular Media_ (Springer, Dordrecht, 1998).
* [35] Brilliantov, N. V., Spahn, F., Hertzsch, J. M. & Pöschel, T. Model for collisions in granular gases. _Physical Review E_ 53, 5382–5392, DOI: 10.1103/PhysRevE.53.5382 (1996).
* [36] Actually, in many contact models, the dissipative tangential force can switch from a viscous form to a (non-linear) Coulomb one if the normal force between the two grains is small enough. Here we are considering cases where dense granular matter is confined by a container and an external field (such as gravity), and we assume that the particles are sufficiently compressed to consider just the linear viscous term.
* [37] Puglisi, A., Gnoli, A., Gradenigo, G., Sarracino, A. & Villamaina, D. Structure factors in granular experiments with homogeneous fluidization. _J. Chem. Phys._ 136, 014704, DOI: 10.1063/1.3673876 (2012).
* [38] Plata, C., Manacorda, A., Lasanta, A., Puglisi, A. & Prados, A. Lattice models for granular-like velocity fields: finite-size effects. _J. Stat. Mech._ 2016, 093203, DOI: 10.1088/1742-5468/2016/09/093203 (2016).
* [39] Manacorda, A., Plata, C. A., Lasanta, A., Puglisi, A. & Prados, A. Lattice models for granular-like velocity fields: hydrodynamic description. _J. Stat. Phys._ 164, 810–841, DOI: 10.1007/s10955-016-1575-z (2016).
* [40] Puglisi, A. _Transport and Fluctuations in Granular Fluids: From Boltzmann Equation to Hydrodynamics, Diffusion and Motor Effects_ (Springer, 2014).
* [41] Gardiner, C. _Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences_ (Springer-Verlag, Berlin, 1990).
* [42] Banchi, L. & Vaia, R. Spectral problem for quasi-uniform nearest-neighbor chains. _Journal of Mathematical Physics_ 54, 043501, DOI: 10.1063/1.4797477 (2013).
* [43] Schroeder, K. Diffusion in crystals with traps: A simple phenomenological model. _Zeitschrift fur Physik B Condensed Matter_ 25, 91–95, DOI: 10.1007/BF01343313 (1976).
* [44] Gradenigo, G., Sarracino, A., Villamaina, D. & Puglisi, A. Fluctuating hydrodynamics and correlation lengths in a driven granular fluid. _Journal of Statistical Mechanics: Theory and Experiment_ 2011, DOI: 10.1088/1742-5468/2011/08/P08017 (2011).
* [45] Van Noije, T., Ernst, M., Brito, R. & Orza, J. Mesoscopic theory of granular fluids. _Phys. Rev. Lett._ 79, 411 (1997).
* [46] Baldassarri, A., Puglisi, A. & Sarracino, A. Coarsening in granular systems. _C. R. Physique_ 16, 291–302 (2015).
* [47] Maggi, C., Marconi, U. M. B., Gnan, N. & Di Leonardo, R. Multidimensional stationary probability distribution for interacting active particles. _Scientific reports_ 5, 10742 (2015).
## Acknowledgements
The authors are indebted to Marco Baldovin and Lorenzo Caprini for fruitful
scientific discussions and to Alessandro Manacorda for the careful reading of
the manuscript. The authors acknowledge the financial support of Regione Lazio
through the Grant “Progetti Gruppi di Ricerca” N. 85-2017-15257 and from the
MIUR PRIN2017 project 201798CZLJ.
## Author contributions statement
A. Pl. conceived the theory. A. Pl. and A. Pu. contributed to the writing of
the manuscript.
## Competing interests
The authors declare no competing financial interests.
# A unified approach to study the existence and numerical solution of
functional differential equation
Dang Quang A${}^{\text{a}}$, Dang Quang Long${}^{\text{b}}$
${}^{\text{a}}$ Center for Informatics and Computing, VAST
18 Hoang Quoc Viet, Cau Giay, Hanoi, Vietnam
Email<EMAIL_ADDRESS>
${}^{\text{b}}$ Institute of Information Technology, VAST,
18 Hoang Quoc Viet, Cau Giay, Hanoi, Vietnam
Email<EMAIL_ADDRESS>
###### Abstract
In this paper we consider a class of boundary value problems for a third
order nonlinear functional differential equation. By reducing the problem to
an operator equation we establish the existence and uniqueness of a solution
and construct a numerical method for solving it. We prove that the method is
of second order accuracy and obtain an estimate for the total error. Some
examples demonstrate the validity of the obtained theoretical results and the
efficiency of the numerical method. The approach used for the third order
nonlinear functional differential equation can be applied to functional
differential equations of any order.
Keywords: Third order boundary value problem; Functional differential
equation; Existence and uniqueness of solution; Iterative method; Total error.
AMS Subject Classification: 34B15, 65L10
## 1 Introduction
Functional differential equations have numerous applications in engineering
and the sciences [6]. Therefore, over the last decades they have been studied
by many authors, and there are many works concerning the numerical solution
of both initial and boundary value problems for them. The methods used are
diverse, including the collocation method [10], iterative methods [1, 8],
neural networks [7, 9], and so on. Below we mention the results of some
typical works.
First, it is worth mentioning the work of Reutskiy in 2015 [10], in which
the author considered the linear pantograph functional differential equation
with proportional delay
$\displaystyle
u^{(n)}=\sum_{j=0}^{J}\sum_{k=0}^{n-1}p^{jk}(x)u^{(k)}(\alpha_{j}x)+f(x),\quad
x\in[0,T]$
associated with initial or boundary conditions. Here the $\alpha_{j}$ are
constants ($0<\alpha_{j}<1$). The author proposed a method where the original
equation is replaced by an approximate equation having an exact analytic
solution with a set of free parameters; these free parameters are determined
by a collocation procedure. Many examples show the efficiency of the method,
but no error estimates are obtained.
In 2016 Bica et al. [1] considered the boundary value problem (BVP)
$\begin{split}x^{(2p)}(t)=f(t,x(t),x(\varphi(t))),\quad t\in[a,b],\\\
x^{(i)}(a)=a_{i},\;x^{(i)}(b)=b_{i},\quad i=\overline{0,p-1}\end{split}$ (1)
where $\varphi:[a,b]\rightarrow\mathbb{R},\;a\leq\varphi(t)\leq b,\forall
t\in[a,b]$. For solving the problem, the authors constructed successive
approximations for the equivalent integral equation with the use of cubic
spline interpolation at each iterative step. The error estimate was obtained
for the approximate solution under very strong conditions, including
$(\alpha+13\beta)(b-a)M_{G}<1$, where $\alpha$ and $\beta$ are the Lipschitz
coefficients of the function $f(s,u,v)$ in the variables $u$ and $v$,
respectively; $M_{G}$ is a number such that $|G(t,s)|\leq M_{G}\;\forall
t,s\in[a,b]$, $G(t,s)$ being the Green function for the above problem. Some
numerical experiments demonstrate the convergence of the proposed iterative
method. Regrettably, the proof of the error estimate for the fourth order
nonlinear BVP contains a vital mistake: the authors tacitly assumed that the
partial derivatives $\frac{\partial^{3}G}{\partial
s^{3}},\frac{\partial^{4}G}{\partial s^{4}}$ are continuous in
$[a,b]\times[a,b]$. This is invalid because $\frac{\partial^{3}G}{\partial
s^{3}}$ has a discontinuity on the line $s=t$. Due to this mistake the authors
obtained that the error of the method for the fourth order BVP is $O(h^{4})$.
Moreover, although in [1] the method is constructed for a general function
$\varphi(t)$, in all numerical examples only the particular case
$\varphi(t)=\alpha t$ was considered, and the conditions of convergence were
not verified.
Recently, in 2018, Khuri and Sayfy [8] proposed a Green-function-based
iterative method for functional differential equations of arbitrary order.
But the scope of application of the method is very limited due to the
difficulty of calculating the integrals at each iteration.
For solving functional differential equations, besides analytical and
numerical methods, computational intelligence algorithms have recently also
been used (see, e.g., [7, 9]), where feed-forward artificial neural networks
of different architectures are applied. These algorithms are heuristic, so no
error estimates are obtained, and they require large computational efforts.
In this paper we propose a new approach to functional differential equations
(FDE). Although this approach can be applied to functional differential
equations of any order, with nonlinear terms containing derivatives, for
simplicity we consider the FDE of the form
$u^{\prime\prime\prime}=f(t,u(t),u(\varphi(t))),\quad t\in[0,a]$ (2)
associated with the general boundary conditions
$\begin{split}B_{1}[u]=\alpha_{1}u(0)+\beta_{1}u^{\prime}(0)+\gamma_{1}u^{\prime\prime}(0)=b_{1},\\\
B_{2}[u]=\alpha_{2}u(0)+\beta_{2}u^{\prime}(0)+\gamma_{2}u^{\prime\prime}(0)=b_{2},\\\
B_{3}[u]=\alpha_{3}u(1)+\beta_{3}u^{\prime}(1)+\gamma_{3}u^{\prime\prime}(1)=b_{3},\\\
\end{split}$ (3)
or
$\begin{split}B_{1}[u]=\alpha_{1}u(0)+\beta_{1}u^{\prime}(0)+\gamma_{1}u^{\prime\prime}(0)=b_{1},\\\
B_{2}[u]=\alpha_{2}u(1)+\beta_{2}u^{\prime}(1)+\gamma_{2}u^{\prime\prime}(1)=b_{2},\\\
B_{3}[u]=\alpha_{3}u(1)+\beta_{3}u^{\prime}(1)+\gamma_{3}u^{\prime\prime}(1)=b_{3},\\\
\end{split}$ (4)
such that
$Rank\begin{pmatrix}\alpha_{1}&\beta_{1}&\gamma_{1}&0&0&0\\\
\alpha_{2}&\beta_{2}&\gamma_{2}&0&0&0\\\
0&0&0&\alpha_{3}&\beta_{3}&\gamma_{3}\\\ \end{pmatrix}=3.$
As in (1), the function $\varphi(t)$ is assumed to be continuous and maps
$[0,a]$ into itself.
Developing the unified approach of the previous works [2, 3] for the fully
third order nonlinear differential equation
$\displaystyle
u^{\prime\prime\prime}=f(t,u(t),u^{\prime}(t),u^{\prime\prime}(t)),$
in this paper we establish the existence and
uniqueness of solution of the problem (2)-(3) and propose an iterative method
for finding the solution at both continuous and discrete levels. Some examples
demonstrate the validity of obtained theoretical results and the efficiency of
the proposed numerical method.
## 2 Existence and uniqueness of solution
Following the approach in [2, 3] (see also [4, 5]) to investigate the problem
(2)-(3) we introduce the nonlinear operator $A$ defined in the space of
continuous functions $C[0,a]$ by the formula:
$(A\psi)(t)=f(t,u(t),u(\varphi(t))),$ (5)
where $u(t)$ is the solution of the problem
$\begin{split}u^{\prime\prime\prime}(t)&=\psi(t),\quad 0<t<a\\\
B_{1}[u]&=b_{1},B_{2}[u]=b_{2},B_{3}[u]=b_{3},\end{split}$ (6)
where $B_{1}[u],B_{2}[u],B_{3}[u]$ are defined by (3). It is easy to verify
the following
###### Proposition 2.1
If the function $\psi$ is a fixed point of the operator $A$, i.e., $\psi$ is
the solution of the operator equation
$A\psi=\psi,$ (7)
where $A$ is defined by (5)-(6) then the function $u(t)$ determined from the
BVP (6) is a solution of the BVP (2)-(3). Conversely, if the function $u(t)$
is the solution of the BVP (2)-(3) then the function
$\psi(t)=f(t,u(t),u(\varphi(t)))$
satisfies the operator equation (7).
Now, let $G(t,s)$ be the Green function of the problem (6). Then the solution
of the problem can be represented in the form
$u(t)=g(t)+\int_{0}^{a}G(t,s)\psi(s)ds,$ (8)
where $g(t)$ is the polynomial of second degree satisfying the boundary
conditions
$B_{1}[g]=b_{1},B_{2}[g]=b_{2},B_{3}[g]=b_{3}.$ (9)
Denote
$M_{0}=\max_{0\leq t\leq a}\int_{0}^{a}|G(t,s)|ds.$ (10)
For any positive number $M$ define the domain
$\mathcal{D}_{M}=\Big{\\{}(t,u,v)\mid 0\leq t\leq
a;|u|\leq\|g\|+M_{0}M;|v|\leq\|g\|+M_{0}M\Big{\\}},$ (11)
where $\|g\|=\max_{0\leq t\leq a}|g(t)|$.
As usual, we denote by $B[0,M]$ the closed ball of the radius $M$ centered at
$0$ in the space of continuous functions $C[0,a]$.
###### Theorem 2.2
Assume that:
(i) The function $\varphi(t)$ is a continuous map from $[0,a]$ to $[0,a]$.
(ii) The function $f(t,u,v)$ is continuous and bounded by $M$ in the domain
$\mathcal{D_{M}}$, i.e.,
$|f(t,u,v)|\leq M\quad\forall(t,u,v)\in\mathcal{D_{M}}.$ (12)
(iii) The function $f(t,u,v)$ satisfies the Lipschitz conditions in the
variables $u,v$ with the coefficients $L_{1},L_{2}\geq 0$ in
$\mathcal{D}_{M}$, i.e.,
$\begin{split}|f(t,u_{2},v_{2})-f(t,u_{1},v_{1})|\leq
L_{1}|u_{2}-u_{1}|+L_{2}|v_{2}-v_{1}|\quad\\\
\forall(t,u_{i},v_{i})\in\mathcal{D}_{M}\;(i=1,2)\end{split}$ (13)
(iv)
$q:=(L_{1}+L_{2})M_{0}<1.$ (14)
Then the problem (2)-(3) has a unique solution $u(t)\in C^{3}[0,a]$, satisfying
the estimate
$|u(t)|\leq\|g\|+M_{0}M\quad\forall t\in[0,a].$ (15)
Proof. The proof of the theorem proceeds in the following steps.
First we show that the operator $A$ maps $B[0,M]$ into $B[0,M]$.
Indeed, for any $\psi\in B[0,M]$ we have $\|\psi\|\leq M$. Let $u(t)$ be the
solution of the problem (6). From (8) it follows
$|u(t)|\leq\|g\|+M_{0}M\quad\forall t\in[0,a].$ (16)
Since $0\leq\varphi(t)\leq a$ we also have
$|u(\varphi(t))|\leq\|g\|+M_{0}M\quad\forall t\in[0,a].$
Therefore, if $t\in[0,a]$ then $(t,u(t),u(\varphi(t)))\in\mathcal{D}_{M}$. By
the assumption (12) we have $|f(t,u(t),u(\varphi(t)))|\leq M\quad\forall
t\in[0,a]$. In view of (5) we have $|(A\psi)(t)|\leq M\quad\forall t\in[0,a]$.
This means $\|A\psi\|\leq M$, i.e., $A\psi\in B[0,M]$.
Next, we prove that $A$ is a contraction in $B[0,M]$. Let
$\psi_{1},\psi_{2}\in B[0,M]$ and let $u_{1}(t),u_{2}(t)$ be the corresponding
solutions of the problem (6). Then from the assumption (13) we obtain
$|(A\psi_{2})(t)-(A\psi_{1})(t)|\leq
L_{1}|u_{2}(t)-u_{1}(t)|+L_{2}|u_{2}(\varphi(t))-u_{1}(\varphi(t))|.$ (17)
From the representations
$u_{i}(t)=g(t)+\int_{0}^{a}G(t,s)\psi_{i}(s)ds,\quad(i=1,2)$
and (10) it is easy to obtain
$\displaystyle|u_{2}(t)-u_{1}(t)|$ $\displaystyle\leq
M_{0}\|\psi_{2}-\psi_{1}\|,$
$\displaystyle|u_{2}(\varphi(t))-u_{1}(\varphi(t))|$ $\displaystyle\leq
M_{0}\|\psi_{2}-\psi_{1}\|$
Combining the above estimates and (17), in view of the assumption (14) we
obtain
$\displaystyle\|A\psi_{2}-A\psi_{1}\|\leq q\|\psi_{2}-\psi_{1}\|,\quad q<1.$
Thus, $A$ is a contraction mapping in $B[0,M]$.
Therefore, the operator equation (7) has a unique solution $\psi\in B[0,M]$.
By Proposition 2.1 the solution of the problem (6) for this right-hand side
$\psi(t)$ is the solution of the original problem (2)-(3).
## 3 Solution method and its convergence
Consider the following iterative method:
1.
Given $\psi_{0}\in B[0,M]$, for example,
$\psi_{0}(t)=f(t,0,0).$ (18)
2.
Knowing $\psi_{k}(t)$ $(k=0,1,...)$ compute
$\begin{split}u_{k}(t)&=g(t)+\int_{0}^{a}G(t,s)\psi_{k}(s)ds,\\\
v_{k}(t)&=g(\varphi(t))+\int_{0}^{a}G(\varphi(t),s)\psi_{k}(s)ds,\end{split}$
(19)
3.
Update
$\psi_{k+1}(t)=f(t,u_{k}(t),v_{k}(t)).$ (20)
Set
$p_{k}=\dfrac{q^{k}}{1-q},\;d=\|\psi_{1}-\psi_{0}\|.$ (21)
###### Theorem 3.1 (Convergence)
Under the assumptions of Theorem 2.2 the above iterative method converges and
there holds the estimate
$\|u_{k}-u\|\leq M_{0}p_{k}d,$
where $u$ is the exact solution of the problem (2)-(3) and $M_{0}$ is given by
(10).
This theorem follows straightforwardly from the convergence of the successive
approximation method for finding the fixed point of the operator $A$, the
representation (8), and the first equation in (19).
To numerically realize the above iterative method we construct the
corresponding discrete iterative method. For this purpose cover the interval
$[0,a]$ by the uniform grid
$\bar{\omega}_{h}=\\{t_{i}=ih,\;h=a/N,i=0,1,...,N\\}$ and denote by
$\Psi_{k}(t),U_{k}(t),V_{k}(t)$ the grid functions, which are defined on the
grid $\bar{\omega}_{h}$ and approximate the functions
$\psi_{k}(t),u_{k}(t),v_{k}(t)$ on this grid, respectively.
Below we describe the discrete iterative method:
1.
Given
$\Psi_{0}(t_{i})=f(t_{i},0,0),\ i=0,...,N.$ (22)
2.
Knowing $\Psi_{k}(t_{i}),\;k=0,1,...;\;i=0,...,N,$ compute approximately the
definite integrals (19) by the trapezoidal rule
$\begin{split}U_{k}(t_{i})&=g(t_{i})+\sum_{j=0}^{N}h\rho_{j}G(t_{i},t_{j})\Psi_{k}(t_{j}),\\\
V_{k}(t_{i})&=g(\xi_{i})+\sum_{j=0}^{N}h\rho_{j}G(\xi_{i},t_{j})\Psi_{k}(t_{j}),i=0,...,N,\end{split}$
(23)
where $\rho_{j}$ are the weights
$\rho_{j}=\begin{cases}1/2,\;j=0,N\\\ 1,\;j=1,2,...,N-1\end{cases}$
and $\xi_{i}=\varphi(t_{i})$.
3.
Update
$\Psi_{k+1}(t_{i})=f(t_{i},U_{k}(t_{i}),V_{k}(t_{i})).$ (24)
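The discrete scheme (22)-(24) can be sketched in Python as follows. The function and argument names (`solve_fde`, `f`, `phi`, `g`, `G`) are illustrative; the sanity check at the bottom uses the Green function stated for Example 1 in Section 4 with the constant right-hand side $f\equiv 6$, for which the exact solution of $u'''=6$, $u(0)=u'(0)=u'(1)=0$ is $u=t^{3}-\frac{3}{2}t^{2}$.

```python
def solve_fde(f, phi, g, G, a, N, tol=1e-10, max_iter=100):
    """Discrete iterative method (22)-(24): fixed-point iteration on the grid
    values Psi of psi = u''' with trapezoidal quadrature of the Green-function
    representation (19) of u."""
    h = a / N
    t = [i * h for i in range(N + 1)]
    rho = [0.5 if j in (0, N) else 1.0 for j in range(N + 1)]  # trapezoid weights
    Psi = [f(ti, 0.0, 0.0) for ti in t]                        # step 1: Psi_0
    for _ in range(max_iter):
        # step 2: U_k and V_k by the trapezoidal rule
        U = [g(ti) + sum(h * rho[j] * G(ti, t[j]) * Psi[j] for j in range(N + 1))
             for ti in t]
        V = [g(phi(ti)) + sum(h * rho[j] * G(phi(ti), t[j]) * Psi[j] for j in range(N + 1))
             for ti in t]
        # step 3: update Psi
        new = [f(t[i], U[i], V[i]) for i in range(N + 1)]
        if max(abs(p - q) for p, q in zip(new, Psi)) <= tol:
            break
        Psi = new
    return t, U

# Sanity check with the Green function of Example 1 (Section 4) and f = 6:
G = lambda t, s: s / 2 * (t**2 - 2*t + s) if s <= t else t**2 / 2 * (s - 1)
ts, U = solve_fde(lambda t, u, v: 6.0, lambda t: t / 2, lambda t: 0.0, G, 1.0, 100)
err = max(abs(U[i] - (ts[i]**3 - 1.5 * ts[i]**2)) for i in range(len(ts)))
```

Since this right-hand side does not depend on $u$, the iteration stops after one step, and the remaining error comes purely from the $O(h^{2})$ quadrature.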
Now we study the convergence of the above discrete iterative method. For this
purpose we need some auxiliary results.
###### Proposition 3.2
If the function $f(t,u,v)$ has all partial derivatives continuous up to second
order and the function $\varphi(t)$ also has continuous derivatives up to
second order then the functions $\psi_{k}(t),u_{k}(t),v_{k}(t)$ constructed by
the iterative method (18)-(20) also have continuous derivatives up to second
order.
This proposition is obvious.
###### Proposition 3.3
For any function $\psi(t)\in C^{2}[0,a]$ there hold the estimates
$\displaystyle\int_{0}^{a}G(t_{i},s)\psi(s)ds=\sum_{j=0}^{N}h\rho_{j}G(t_{i},s_{j})\psi(s_{j})+O(h^{2}),$
(25)
$\displaystyle\int_{0}^{a}G(\xi_{i},s)\psi(s)ds=\sum_{j=0}^{N}h\rho_{j}G(\xi_{i},s_{j})\psi(s_{j})+O(h^{2}),$
(26)
where in order to avoid possible confusion we denote $s_{j}=t_{j}$.
Proof. The validity of (25) is guaranteed by [3, Proposition 3]. Here we
notice that (25) is not automatically deduced from the estimate for the
composite trapezoidal rule because the function
$\frac{\partial^{2}G(t_{i},s)}{\partial s^{2}}$ has discontinuity at
$s=t_{i}$.
Now we prove the estimate (26). Since $0\leq\xi_{i}=\varphi(t_{i})\leq a$,
there are two cases.
Case 1: $\xi_{i}$ coincides with one node $s_{j}$ of the grid
$\bar{\omega_{h}}$, i.e., there exists $s_{j}\in\bar{\omega_{h}}$ such that
$\xi_{i}=s_{j}$. Because the Green function $G(t,s)$, as a function of $s$, is
continuous at $s=\xi_{i}$ and is a polynomial in $s$ on the
intervals $[0,\xi_{i}]$ and $[\xi_{i},a]$, we have
$\displaystyle\int_{0}^{a}G(\xi_{i},s)\psi(s)ds=\int_{0}^{\xi_{i}}G(\xi_{i},s)\psi(s)ds+\int_{\xi_{i}}^{a}G(\xi_{i},s)\psi(s)ds$
$\displaystyle=h\Big{(}\frac{1}{2}G(\xi_{i},s_{0})\psi(s_{0})+\sum_{m=1}^{j-1}G(\xi_{i},s_{m})\psi(s_{m})+\frac{1}{2}G(\xi_{i},s_{j})\psi(s_{j})\Big{)}+O(h^{2})$
$\displaystyle+h\Big{(}\frac{1}{2}G(\xi_{i},s_{j})\psi(s_{j})+\sum_{m=j+1}^{N-1}G(\xi_{i},s_{m})\psi(s_{m})+\frac{1}{2}G(\xi_{i},s_{N})\psi(s_{N})\Big{)}+O(h^{2})$
$\displaystyle=\sum_{j=0}^{N}h\rho_{j}G(\xi_{i},s_{j})\psi(s_{j})+O(h^{2}).$
Thus, (26) is proved for Case 1.
Case 2: $\xi_{i}$ lies between $s_{l}$ and $s_{l+1}$, i.e.,
$s_{l}<\xi_{i}<s_{l+1}$ for some $l=\overline{0,N-1}$. In this case, we
represent
$\int_{0}^{a}G(\xi_{i},s)\psi(s)ds=\int_{0}^{s_{l}}F(s)ds+\int_{s_{l}}^{\xi_{i}}F(s)ds+\int_{\xi_{i}}^{s_{l+1}}F(s)ds+\int_{s_{l+1}}^{a}F(s)ds.$
(27)
Here, for short we denote $F(s)=G(\xi_{i},s)\psi(s)$. Note that $F(s)\in
C^{2}$ in $[s_{l},\xi_{i}]$ and $[\xi_{i},s_{l+1}]$. Applying the composite
trapezoidal rule to the first and the last integrals in the right-hand side of
(27) we obtain
$\displaystyle T_{1}:$
$\displaystyle=\int_{0}^{s_{l}}F(s)ds+\int_{s_{l+1}}^{a}F(s)ds$ (28)
$\displaystyle=\sum_{j=0}^{l}\rho_{j}^{(l-)}F(s_{j})+\sum_{j=l+1}^{N}\rho_{j}^{(l+)}F(s_{j})+O(h^{2}),$
where
$\displaystyle\rho_{j}^{(l-)}=\left\\{\begin{array}[]{ll}\frac{1}{2},\quad
j=0,l\\\
1,\quad 0<j<l\end{array}\right.,\quad\rho_{j}^{(l+)}=\left\\{\begin{array}[]{ll}\frac{1}{2},\quad
j=l+1,N\\\ 1,\quad l+1<j<N\end{array}\right.$
For calculating the second and the third integrals in the right-hand side of
(27) we use the trapezoidal rule
$\displaystyle T_{2}:$
$\displaystyle=\int_{s_{l}}^{\xi_{i}}F(s)ds+\int_{\xi_{i}}^{s_{l+1}}F(s)ds$
(29)
$\displaystyle=\frac{1}{2}\big{[}(F(s_{l})+F(\xi_{i}))(\xi_{i}-s_{l})+(F(\xi_{i})+F(s_{l+1}))(s_{l+1}-\xi_{i})\big{]}+O(h^{3}).$
Using the points $s_{l}$ and $s_{l+1}$ for linearly interpolating $F(s)$ in
the point $\xi_{i}$ we have
$\displaystyle
F(\xi_{i})=F(s_{l})\frac{\xi_{i}-s_{l+1}}{s_{l}-s_{l+1}}+F(s_{l+1})\frac{\xi_{i}-s_{l}}{s_{l+1}-s_{l}}+O(h^{2}).$
From here we obtain
$F(\xi_{i})(s_{l+1}-s_{l})=F(s_{l})(s_{l+1}-\xi_{i})+F(s_{l+1})(\xi_{i}-s_{l})+O(h^{3}).$
(30)
Now, transforming $T_{2}$ we have
$\displaystyle
T_{2}=\frac{1}{2}\big{[}F(s_{l})(\xi_{i}-s_{l})+F(s_{l+1})(s_{l+1}-\xi_{i})+F(\xi_{i})(s_{l+1}-s_{l})\big{]}+O(h^{3}).$
Further, in view of (30) it is easy to obtain
$T_{2}=\frac{1}{2}h(F(s_{l})+F(s_{l+1}))+O(h^{3}).$
Taking into account the above estimate, (28) and (27) we have
$\int_{0}^{a}G(\xi_{i},s)\psi(s)ds=\sum_{j=0}^{N}h\rho_{j}G(\xi_{i},s_{j})\psi(s_{j})+O(h^{2}).$
Thus, (26) is proved for Case 2 and the proof of Proposition 3.3 is complete.
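The mechanism behind Case 2 — a kink of the integrand at an off-grid point costs only $O(h^{2})$ for the plain composite trapezoidal sum — can be checked numerically. The integrand $|s-\xi|$ below is an illustrative stand-in for $F(s)=G(\xi_{i},s)\psi(s)$:

```python
import math

def trapezoid(F, a, N):
    """Composite trapezoidal rule with the weights rho_j of (23)."""
    h = a / N
    return sum(h * (0.5 if j in (0, N) else 1.0) * F(j * h) for j in range(N + 1))

xi = 1.0 / math.sqrt(2.0)              # a kink position not on the grid
F = lambda s: abs(s - xi)              # C^2 only on [0, xi] and [xi, 1]
exact = (xi**2 + (1.0 - xi)**2) / 2.0  # integral of |s - xi| over [0, 1]

err100 = abs(trapezoid(F, 1.0, 100) - exact)
err200 = abs(trapezoid(F, 1.0, 200) - exact)
```

Only the single cell containing $\xi$ contributes here, and its error is at most $h^{2}/4$, so refining the grid keeps the total error at the $O(h^{2})$ level.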
###### Remark 3.4
If in Proposition 3.3 we replace $G(t_{i},s)$ and $G(\xi_{i},s)$ by
$|G(t_{i},s)|$ and $|G(\xi_{i},s)|$, respectively, then we obtain the analogous
estimates
$\displaystyle\int_{0}^{a}|G(t_{i},s)|\psi(s)ds=\sum_{j=0}^{N}h\rho_{j}|G(t_{i},s_{j})|\psi(s_{j})+O(h^{2}),$
(31)
$\displaystyle\int_{0}^{a}|G(\xi_{i},s)|\psi(s)ds=\sum_{j=0}^{N}h\rho_{j}|G(\xi_{i},s_{j})|\psi(s_{j})+O(h^{2}),$
(32)
###### Proposition 3.5
Under the assumptions of Theorem 2.2 we have the estimates
$\displaystyle\|\Psi_{k}-\psi_{k}\|_{\bar{\omega_{h}}}=O(h^{2}),$ (33)
$\displaystyle\|U_{k}-u_{k}\|_{\bar{\omega_{h}}}=O(h^{2}),$ (34)
where $\|.\|_{\bar{\omega_{h}}}$ is the max-norm of grid function defined on
the grid $\bar{\omega_{h}}$.
Proof. We prove the proposition by induction. For $k=0$ we have at once
$\|\Psi_{0}-\psi_{0}\|_{\bar{\omega_{h}}}=0$ because
$\Psi_{0}(t_{i})=f(t_{i},0,0)$ and
$\psi_{0}(t_{i})=f(t_{i},0,0),\;i=\overline{0,N}$. Next, by (19) and
Proposition 3.3 we have
$\displaystyle u_{0}(t_{i})$
$\displaystyle=g(t_{i})+\int_{0}^{a}G(t_{i},s)\psi_{0}(s)ds$
$\displaystyle=g(t_{i})+\sum_{j=0}^{N}h\rho_{j}G(t_{i},s_{j})\psi_{0}(s_{j})+O(h^{2}).$
On the other hand, by (23) we have
$\displaystyle U_{0}(t_{i})$
$\displaystyle=g(t_{i})+\sum_{j=0}^{N}h\rho_{j}G(t_{i},s_{j})\Psi_{0}(s_{j}).$
Therefore,
$\displaystyle|U_{0}(t_{i})-u_{0}(t_{i})|=O(h^{2}).$
It implies $\|U_{0}-u_{0}\|_{\bar{\omega_{h}}}=O(h^{2})$. Thus, the estimates
(33) and (34) are valid for $k=0$.
Now, suppose that these estimates are valid for some $k\geq 0$. We shall show
that they are valid for $k+1$. Indeed, from (20), (24) and the Lipschitz
conditions for the function $f(t,u,v)$ we have
$\begin{split}|\Psi_{k+1}(t_{i})-\psi_{k+1}(t_{i})|&=|f(t_{i},U_{k}(t_{i}),V_{k}(t_{i}))-f(t_{i},u_{k}(t_{i}),v_{k}(t_{i}))|\\\
&\leq
L_{1}|U_{k}(t_{i})-u_{k}(t_{i})|+L_{2}|V_{k}(t_{i})-v_{k}(t_{i})|.\end{split}$
(35)
Now estimate $|V_{k}(t_{i})-v_{k}(t_{i})|$. We have by Proposition 3.3
$\displaystyle v_{k}(t_{i})$
$\displaystyle=g(\varphi(t_{i}))+\int_{0}^{a}G(\varphi(t_{i}),s)\psi_{k}(s)ds$
$\displaystyle=g(\xi_{i})+\sum_{j=0}^{N}h\rho_{j}G(\xi_{i},s_{j})\psi_{k}(s_{j})+O(h^{2}).$
In view of (23) we have
$\begin{split}|V_{k}(t_{i})-v_{k}(t_{i})|&=|\sum_{j=0}^{N}h\rho_{j}G(\xi_{i},s_{j})(\Psi_{k}(s_{j})-\psi_{k}(s_{j}))|+O(h^{2})\\\
&\leq\sum_{j=0}^{N}h\rho_{j}|G(\xi_{i},s_{j})|\|\Psi_{k}-\psi_{k}\|_{\bar{\omega_{h}}}+O(h^{2}).\end{split}$
(36)
Notice that (32) for $\psi(s)=1$ gives
$\displaystyle\int_{0}^{a}|G(\xi_{i},s)|ds=\sum_{j=0}^{N}h\rho_{j}|G(\xi_{i},s_{j})|+O(h^{2}).$
From here it follows that
$\begin{split}&\sum_{j=0}^{N}h\rho_{j}|G(\xi_{i},s_{j})|=\int_{0}^{a}|G(\xi_{i},s)|ds+O(h^{2})\\\
&\leq\max_{0\leq t\leq
a}\int_{0}^{a}|G(t,s)|ds+O(h^{2})=M_{0}+O(h^{2}).\end{split}$
Thanks to this estimate, from (36) we obtain
$|V_{k}(t_{i})-v_{k}(t_{i})|\leq M_{0}\|\Psi_{k}-\psi_{k}\|_{\bar{\omega_{h}}}+O(h^{2}).$
So, due to the induction hypothesis it implies
$\|V_{k}-v_{k}\|_{\bar{\omega_{h}}}=O(h^{2}).$ (37)
Combining the induction hypothesis
$\|U_{k}-u_{k}\|_{\bar{\omega_{h}}}=O(h^{2})$ and (37), from (35) we obtain
$\|\Psi_{k+1}-\psi_{k+1}\|_{\bar{\omega_{h}}}=O(h^{2}).$ (38)
In order to prove
$\|U_{k+1}-u_{k+1}\|_{\bar{\omega_{h}}}=O(h^{2}).$ (39)
we take into account that
$\displaystyle|U_{k+1}(t_{i})-u_{k+1}(t_{i})|\leq\sum_{j=0}^{N}h\rho_{j}|G(t_{i},s_{j})||\Psi_{k+1}(s_{j})-\psi_{k+1}(s_{j})|+O(h^{2}).$
By a similar argument as above and using the proved estimate (38), it is
easy to obtain
$\displaystyle|U_{k+1}(t_{i})-u_{k+1}(t_{i})|=O(h^{2}),$
or (39).
Thus, Proposition 3.5 is proved.
Now combining Proposition 3.5 with Theorem 3.1 we obtain the following result.
###### Theorem 3.6
Under the assumptions of Theorem 2.2 for the approximate solution of the
problem (2)-(3) obtained by the discrete iterative method (22)-(24) we have
the estimate
$\|U_{k}-u\|_{\bar{\omega_{h}}}\leq M_{0}p_{k}d+O(h^{2}),$
where $p_{k}$ and $d$ are defined by (21).
###### Remark 3.7
The results in Sections 2 and 3 are obtained for the nonlinear third order FDE
with nonlinear term $f=f(t,u(t),u(\varphi(t)))$. Analogously, it is possible
to obtain similar results of existence and convergence of the iterative method
at continuous level for the general case
$f=f(t,u(t),u(\varphi(t)),u^{\prime}(\varphi_{1}(t)),u^{\prime\prime}(\varphi_{2}(t))).$
But for numerical realization of the iterative method one must take into
account that the second derivative $\frac{\partial^{2}G(t,s)}{\partial
t^{2}}$ of the Green function has a discontinuity on the line $s=t$. In this
case, for computing integrals containing $\frac{\partial G(t,s)}{\partial t}$
and $\frac{\partial^{2}G(t,s)}{\partial t^{2}}$, one must use the
formulas constructed in our previous work [3].
###### Remark 3.8
The technique developed in this paper for the third order nonlinear FDE can be
applied to nonlinear FDE of any order.
## 4 Examples
In all numerical examples below we perform the iterative method (22)-(24)
until $\|\Psi_{k}-\Psi_{k-1}\|_{\bar{\omega_{h}}}\leq 10^{-10}$. In the tables
of convergence results, $Error=\|U_{K}-u\|_{\bar{\omega_{h}}}$, where $K$ is
the number of iterations performed.
Example 1. Consider the following problem
$\displaystyle u^{\prime\prime\prime}(t)$
$\displaystyle=e^{t}-\frac{1}{4}u(t)+\frac{1}{4}u^{2}(\frac{t}{2}),\quad
0<t<1,$ (40) $\displaystyle u(0)$
$\displaystyle=1,\;u^{\prime}(0)=1,\;u^{\prime}(1)=e$
with the exact solution $u(t)=e^{t}.$ The Green function for the above problem
is
$\displaystyle
G(t,s)=\left\\{\begin{array}[]{ll}\dfrac{s}{2}(t^{2}-2t+s),\quad 0\leq s\leq
t\leq 1,\\\ \,\,\dfrac{t^{2}}{2}(s-1),\quad 0\leq t\leq s\leq 1.\\\
\end{array}\right.$
So, we have
$M_{0}=\max_{0\leq t\leq 1}\int_{0}^{1}|G(t,s)|ds=\frac{1}{12}.$
The second degree polynomial satisfying the boundary conditions of the problem
is
$g(t)=1+t+\frac{e-1}{2}t^{2}.$
Therefore, $\|g\|=2+\dfrac{e-1}{2}=2.8591$. In this example
$f(t,u,v)=e^{t}-\frac{1}{4}u+\frac{1}{4}v^{2}$. It is easy to verify that for
$M=6.5$ we have $|f(t,u,v)|\leq M$ in the domain $\mathcal{D}_{M}$ defined by
(11). Moreover, in this domain the function $f(t,u,v)$ satisfies the Lipschitz
conditions in $u$ and $v$ with the coefficients $L_{1}=\frac{1}{4}$ and
$L_{2}=1.7004$. Therefore, $q:=(L_{1}+L_{2})M_{0}=0.16$. Thus, all the
assumptions of Theorem 2.2 are satisfied. By the theorem, the problem (40) has
a unique solution. This is the above exact solution.
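The computation behind Table 1 can be sketched as follows; the function name `solve_example1` is illustrative, and the scheme is exactly (22)-(24) with the Green function and $g(t)$ given above:

```python
import math

def solve_example1(N, tol=1e-10, max_iter=50):
    """Discrete iterative method (22)-(24) applied to problem (40)."""
    h = 1.0 / N
    t = [i * h for i in range(N + 1)]
    rho = [0.5 if j in (0, N) else 1.0 for j in range(N + 1)]
    G = lambda x, s: s / 2 * (x**2 - 2*x + s) if s <= x else x**2 / 2 * (s - 1)
    g = lambda x: 1 + x + (math.e - 1) / 2 * x**2
    f = lambda x, u, v: math.exp(x) - u / 4 + v**2 / 4
    Psi = [f(ti, 0.0, 0.0) for ti in t]
    for k in range(1, max_iter + 1):
        U = [g(ti) + sum(h * rho[j] * G(ti, t[j]) * Psi[j] for j in range(N + 1))
             for ti in t]
        V = [g(ti / 2) + sum(h * rho[j] * G(ti / 2, t[j]) * Psi[j] for j in range(N + 1))
             for ti in t]
        new = [f(t[i], U[i], V[i]) for i in range(N + 1)]
        if max(abs(p - q) for p, q in zip(new, Psi)) <= tol:
            break
        Psi = new
    err = max(abs(U[i] - math.exp(t[i])) for i in range(N + 1))
    return k, err

K, err = solve_example1(50)
```

With $N=50$ only a handful of iterations are needed, and the resulting error is at the $O(h^{2})$ level.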
The results of convergence of the iterative method (22)-(24) are given in
Table 1.
Table 1: The convergence in Example 1.
$N$ | $h^{2}$ | $K$ | $Error$
---|---|---|---
50 | 4.0000e-04 | 3 | 6.1899e-05
100 | 1.0000e-04 | 3 | 1.5475e-05
150 | 4.4444e-05 | 3 | 6.877e-06
200 | 2.5000e-05 | 3 | 3.8688e-06
300 | 1.1111e-05 | 3 | 1.7195e-06
400 | 6.2500e-06 | 3 | 9.6721e-07
500 | 4.0000e-06 | 3 | 6.1901e-07
800 | 1.5625e-06 | 3 | 6.1901e-07
1000 | 1.0000e-06 | 3 | 1.5475e-07
From this table we see that the results of computation support the conclusion
that the accuracy of the iterative method is $O(h^{2})$.
###### Remark 4.1
Theorem 3.6 gives sufficient conditions for convergence of the iterative
method (22)-(24). In the cases when these conditions are not satisfied, the
iterative method can still converge to some solution. For example, for the
case $f(t,u,v)=e^{t}+u^{2}+v^{2}+1$ with the same boundary conditions as in
(40) the iterative method converges after 15 iterations. And for the case
$f(t,u,v)=e^{2t}-u^{3}+v^{2}+5$, after $16$ iterations the iterative process
reaches $TOL=10^{-10}$. Notice that the number of iterations does not depend
on the grid size, as in Example 1.
Example 2. Consider the following problem
$\displaystyle u^{\prime\prime\prime}(t)$
$\displaystyle=\sin(u^{2}(t))+\cos(u^{2}(t^{2})),\quad 0<t<1,$ (41)
$\displaystyle u(0)$
$\displaystyle=0,\;u^{\prime}(0)=\pi,\;u^{\prime}(1)=-\pi.$
For this problem $f(t,u,v)=\sin(u^{2})+\cos(v^{2})$ and $\varphi(t)=t^{2}$. It
is easy to verify that all the conditions of Theorem 2.2 are satisfied;
therefore the problem has a unique solution. Also, by Theorem 3.6 the
iterative method (22)-(24) converges. With $TOL=10^{-10}$ the iterative method
for any number of grid points stops after $8$ iterations. The graph of the
approximate solution is depicted in Figure 1.
Figure 1: The graph of the approximate solution in Example $2$.
Example 3. (Example 5 in [8]) Consider the problem
$\displaystyle u^{\prime\prime\prime}(t)$ $\displaystyle=-1+2u^{2}(t/2),\quad
0<t<\pi,$ (42) $\displaystyle u(0)$
$\displaystyle=0,\;u^{\prime}(0)=1,\;u(\pi)=0.$
The exact solution is $u(t)=\sin(t)$. For this problem the Green function
has the form
$\displaystyle
G(t,s)=\left\\{\begin{array}[]{ll}-\dfrac{t^{2}(\pi-s)^{2}}{2\pi^{2}}+\dfrac{(t-s)^{2}}{2},\quad
0\leq s\leq t\leq\pi,\\\ \,\,-\dfrac{t^{2}(\pi-s)^{2}}{2\pi^{2}},\quad 0\leq
t\leq s\leq\pi\\\ \end{array}\right.$
and the function $f(t,u,v)=-1+2v^{2}$.
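For this example the second degree polynomial satisfying the boundary conditions is $g(t)=t-t^{2}/\pi$ (derived here from $u(0)=0$, $u'(0)=1$, $u(\pi)=0$; it is not stated in the text). A minimal sketch reproducing the second-order accuracy, with an illustrative function name:

```python
import math

def solve_example3(N, tol=1e-10, max_iter=200):
    """Discrete iterative method (22)-(24) applied to problem (42)."""
    a = math.pi
    h = a / N
    t = [i * h for i in range(N + 1)]
    rho = [0.5 if j in (0, N) else 1.0 for j in range(N + 1)]
    def G(x, s):
        common = -x**2 * (a - s)**2 / (2 * a**2)
        return common + (x - s)**2 / 2 if s <= x else common
    g = lambda x: x - x**2 / a            # u(0) = 0, u'(0) = 1, u(pi) = 0
    f = lambda x, u, v: -1.0 + 2.0 * v**2
    Psi = [f(ti, 0.0, 0.0) for ti in t]
    for _ in range(max_iter):
        U = [g(ti) + sum(h * rho[j] * G(ti, t[j]) * Psi[j] for j in range(N + 1))
             for ti in t]
        V = [g(ti / 2) + sum(h * rho[j] * G(ti / 2, t[j]) * Psi[j] for j in range(N + 1))
             for ti in t]
        new = [f(t[i], U[i], V[i]) for i in range(N + 1)]
        if max(abs(p - q) for p, q in zip(new, Psi)) <= tol:
            break
        Psi = new
    return max(abs(U[i] - math.sin(t[i])) for i in range(N + 1))

e50, e100 = solve_example3(50), solve_example3(100)
```

Halving $h$ should reduce the error by a factor close to $4$, in line with Table 2.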
The results of convergence of the iterative method (22)-(24) for this example
are given in Table 2.
Table 2: The convergence in Example 3.
$N$ | $h^{2}$ | $K$ | $Error$
---|---|---|---
50 | 4.0000e-04 | 25 | 1.4455e-04
100 | 1.0000e-04 | 25 | 3.6142e-05
150 | 4.4444e-05 | 25 | 1.6063e-05
200 | 2.5000e-05 | 25 | 9.0345e-06
300 | 1.1111e-05 | 25 | 4.0155e-06
400 | 6.2500e-06 | 25 | 2.2587e-06
500 | 4.0000e-06 | 25 | 1.4456e-06
800 | 1.5625e-06 | 25 | 5.6467e-07
1000 | 1.0000e-06 | 25 | 3.6139e-07
From Table 2 it is also seen that the numerical method has the accuracy
$O(h^{2})$.
## 5 Conclusion
In this paper we have proposed a unified approach to nonlinear functional
differential equations, illustrated on boundary value problems for nonlinear
third order functional differential equations as a particular case. We have
established the existence and uniqueness of solution and proved the
convergence of a discrete iterative method for finding the solution. Some
examples demonstrate the validity of the theoretical results and the
efficiency of the numerical method.
The proposed approach can be applied to boundary value problems for nonlinear
functional differential equations of any order. It also can be applied to
integro-differential equations.
## References
* [1] Bica, A. M., Curila, M., Curila, S., Two-point boundary value problems associated to functional differential equations of even order solved by iterated splines, Applied Numerical Mathematics 110 (2016) 128-147.
* [2] Dang, Q.A, Dang, Q.L., A unified approach to fully third order nonlinear boundary value problems, J. Nonlinear Funct. Anal. 2020 (2020), Article ID 9, https://doi.org/10.23952/jnfa.2020.9
* [3] Dang, Q.A., Dang, Q.L., Simple numerical methods of second and third-order convergence for solving a fully third-order nonlinear boundary value problem, Numer Algor (2020). https://doi.org/10.1007/s11075-020-01016-2
* [4] Dang, Q.A, Ngo, T.K.Q.: Existence results and iterative method for solving the cantilever beam equation with fully nonlinear term, Nonlinear Anal. Real World Appl. 36, 56-68 (2017)
* [5] Dang, Q.A, Dang, Q.L., Ngo, T.K.Q.: A novel efficient method for nonlinear boundary value problems, Numer. Algor. 76, 427-439 (2017)
* [6] Hale, Jack K., Theory of functional differential equations, Springer-Verlag, New York, 1977.
* [7] Hou C-C, Simos T.E., Famelis I.T., Neural network solution of pantograph type differential equations. Math Meth Appl Sci. 2020;1-6. https://doi.org/10.1002/mma.6126
* [8] Khuri, S.A. , Sayfy, A. , Numerical solution of functional differential equations: a Green’s function-based iterative approach, International Journal of Computer Mathematics Volume 95, 2018 - Issue 10 , 1937-1949.
* [9] Raja, M. A. Z. , Numerical treatment for boundary value problems of Pantograph functional differential equation using computational intelligence algorithms, Applied Soft Computing, Volume 24, 2014, Pages 806-821.
* [10] Reutskiy, S.Yu., A new collocation method for approximate solution of the pantograph functional differential equations with proportional delay, Applied Mathematics and Computation 266 (2015) 642-655.
* [11] Yang, C., Hou, J. Jacobi spectral approximation for boundary value problems of nonlinear fractional pantograph differential equations. Numer Algor (2020). https://doi.org/10.1007/s11075-020-00924-7
# Levenberg-Marquardt method and partial exact penalty parameter selection
in bilevel optimization
Andrey Tin† and Alain B. Zemkoho‡
_Centre for Operational Research, Management Sciences and Information Systems
(CORMSIS)_
_and School of Mathematical Sciences, University of Southampton, SO17 1BJ
Southampton, UK_
## Abstract
We consider the optimistic bilevel optimization problem, known to have a wide
range of applications in engineering, which we transform into a single-level
optimization problem by means of the lower-level optimal value function
reformulation. Subsequently, based on the partial calmness concept, we build
an equation system, which is parameterized by the corresponding partial exact
penalization parameter. We then design and analyze a Levenberg-Marquardt
method to solve this parametric system of equations. Considering the fact that
the selection of the partial exact penalization parameter is a critical issue
when numerically solving a bilevel optimization problem, we conduct a careful
experimental study to this effect, in the context of the Levenberg-Marquardt
method, while using the Bilevel Optimization LIBrary (BOLIB) series of test
problems.
††footnotetext: ${\dagger}$ e-mail<EMAIL_ADDRESS>${\ddagger}$ e-mail<EMAIL_ADDRESS>
AT was supported by a University of Southampton Vice Chancellor Scholarship,
while ABZ was supported by the EPSRC grant EP/V049038/1 and the Alan Turing
Institute under the EPSRC grant EP/N510129/1.
## 1 Introduction
We consider the optimistic bilevel optimization problem
$\underset{x,\,y}{\min\;}F(x,y)\;\mbox{ s.t. }\;G(x,y)\leq 0,\;y\in
S(x):=\arg\underset{y}{\min}\leavevmode\nobreak\ \\{f(x,y):\;g(x,y)\leq
0\\},\\\ $ (1.1)
where $F:\mathds{R}^{n}\times\mathds{R}^{m}\rightarrow\mathds{R}$,
$f:\mathds{R}^{n}\times\mathds{R}^{m}\rightarrow\mathds{R}$,
$G:\mathds{R}^{n}\times\mathds{R}^{m}\rightarrow\mathds{R}^{q}$, and
$g:\mathds{R}^{n}\times\mathds{R}^{m}\rightarrow\mathds{R}^{p}$. As usual, we
refer to $F$ (resp. $f$) as the upper-level (resp. lower-level) objective
function and $G$ (resp. $g$) stands for the upper-level (resp. lower-level)
constraint function. Solving problem (1.1) is very difficult because of the
implicit nature of the lower-level optimal solution set-valued mapping
$S:\mathds{R}^{n}\rightrightarrows\mathds{R}^{m}$ that partly describes the
feasible set of problem (1.1). Problem (1.1) has a wide range of applications
in engineering; to have a flavour of this, see, e.g., [7] and references
therein.
One of the most common single-level transformations of problem (1.1) is the
Karush-Kuhn-Tucker (KKT) reformulation, which consists of replacing the
inclusion $y\in S(x)$ with the equivalent KKT conditions of the lower-level problem,
under appropriate assumptions; see, e.g., [1, 8, 5, 15, 24]. As shown in [5],
the first drawback of the KKT reformulation is that it is not necessarily
equivalent to the original problem (1.1). Secondly, from the numerical
perspective, the KKT reformulation involves derivatives from the lower-level
problem that can require the calculation of second (resp. third) order
derivatives for first (resp. second) order optimization-type methods. Thirdly,
it is shown in the new paper [31] that the lower-level value function (LLVF)
reformulation
$\underset{x,\,y}{\min}\leavevmode\nobreak\ F(x,y)\;\mbox{ s.t. }\;G(x,y)\leq
0,\;\,g(x,y)\leq 0,\;\,f(x,y)\leq\varphi(x),$ (1.2)
where the LLVF $\varphi$ is defined by
$\varphi(x):=\min\leavevmode\nobreak\ \left\\{f(x,y)\left|\leavevmode\nobreak\
g(x,y)\leq 0\right.\right\\},$ (1.3)
can lead to a better numerical performance than the KKT reformulation for
certain problem classes; in particular for the Bilevel Optimization LIBrary
(BOLIB) examples [33]. It is also important to note that problem (1.2)–(1.3) is
equivalent to the original problem (1.1) without any assumption.
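To make the role of $\varphi$ concrete, here is a minimal sketch computing the LLVF by brute force for a hypothetical lower-level problem (the function $f$ and the box constraint $y\in[0,2]$ are illustrative, not from the paper); the constraint $f(x,y)\leq\varphi(x)$ in (1.2) then holds exactly when $y$ solves the lower-level problem:

```python
def f(x, y):
    # hypothetical lower-level objective, convex in y
    return (y - 1.0) ** 2 + x * y

def phi(x, n=4001):
    """LLVF phi(x) = min_y { f(x, y) : 0 <= y <= 2 } by grid search."""
    ys = [2.0 * i / (n - 1) for i in range(n)]
    return min(f(x, y) for y in ys)

# Closed form for comparison: the unconstrained minimizer y* = 1 - x/2 lies
# in [0, 2] for x in [-2, 2], giving phi(x) = x - x**2 / 4 there.
```

Of course, for realistic problems $\varphi$ is only implicitly available, and this implicitness (together with its non-differentiability) is precisely what makes (1.2)–(1.3) hard.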
There are a number of recent studies on solution methods for bilevel programs
based on the LLVF reformulation. For example, [20, 22, 27] develop global
optimization techniques for (1.2)–(1.3), and [18, 28] propose algorithms
computing stationary points for (1.2)–(1.3) in the special case where $G$
(resp. $g$) is independent of $y$ (resp. $x$). However, the value function in
the latter work is approximated by an entropy function, which is difficult to
compute for problems with a large number of lower-level variables, as the
corresponding approximation relies on integral calculations.
More recently, [12, 31] have proposed Newton-type methods for problem
(1.2)–(1.3) based on a special transformation that enables the optimality
conditions of this problem to be squared. Naturally, detailed optimality
conditions for (1.2)–(1.3) are non-square and are directly dealt with in [14]
with Gauss-Newton-type methods. However, these methods require inverse
calculations at each iteration for matrices that are not necessarily
nonsingular. Hence, to expand the applicability of this method to a wider
class of problems, we consider and study the convergence of a regularized
version in this paper, commonly known as the Levenberg-Marquardt method.
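To fix ideas, the core Levenberg-Marquardt iteration can be sketched as follows. This is a generic version with a fixed damping parameter $\mu$ (an illustrative simplification; the method of Section 2, applied to the optimality system of (2.3), manages this parameter more carefully):

```python
def gauss_solve(M, b):
    """Gaussian elimination with partial pivoting for the small LM subproblem."""
    n = len(b)
    M = [row[:] for row in M]
    b = b[:]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            c = M[r][i] / M[i][i]
            b[r] -= c * b[i]
            for j in range(i, n):
                M[r][j] -= c * M[i][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def levenberg_marquardt(Phi, J, z, mu=1e-2, tol=1e-10, max_iter=100):
    """Solve Phi(z) = 0: at each step solve (J^T J + mu I) d = -J^T Phi(z)."""
    n = len(z)
    for _ in range(max_iter):
        r, A = Phi(z), J(z)
        if max(abs(v) for v in r) < tol:
            break
        m = len(r)
        M = [[sum(A[k][i] * A[k][j] for k in range(m)) + (mu if i == j else 0.0)
              for j in range(n)] for i in range(n)]
        b = [-sum(A[k][i] * r[k] for k in range(m)) for i in range(n)]
        d = gauss_solve(M, b)
        z = [z[i] + d[i] for i in range(n)]
    return z

# Toy system: intersection of the unit circle with the line z0 = z1.
Phi = lambda z: [z[0]**2 + z[1]**2 - 1.0, z[0] - z[1]]
J = lambda z: [[2 * z[0], 2 * z[1]], [1.0, -1.0]]
z = levenberg_marquardt(Phi, J, [1.0, 0.5])
```

The regularization term $\mu I$ keeps the linear system solvable even where $J^{\top}J$ is singular, which is precisely the motivation over the plain Gauss-Newton scheme of [14].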
Recall that the standard approach to derive optimality conditions for problem
(1.2)–(1.3), necessary to build this Levenberg-Marquardt method, is based on
the concept of partial calmness. The consequence of this is that all the
methods studied in the papers [14, 12, 31, 18, 28] are based on a partial
exact penalization parameter, which needs a special care to be selected.
However, none of these papers pays special attention to how to select this
parameter. One of the goals of the current paper is to conduct a numerical
study on the best way to select this parameter, based on the BOLIB test set
[33], and in the context of the Levenberg-Marquardt method to be introduced in
the next section. This study will certainly enlighten the process of
selecting the partial exact penalization parameter in a broader class of
methods to solve optimistic bilevel optimization problems.
Another difficulty in working with the LLVF reformulation is that $\varphi$ is
typically a non-differentiable function. This will be handled in this paper by
using upper estimates of the subdifferential of the function (see, e.g., [9,
6, 8, 30]) that will lead to a relatively simple system of optimality
conditions, which depends on the aforementioned partial exact penalization
parameter.
The remainder of the paper proceeds as follows. In the next section, we start
by transforming the necessary optimality conditions into a system of
equations. To do so, we substitute the corresponding complementarity
conditions by the well-known Fischer-Burmeister function [11] that we
subsequently perturb to build a smooth system of equations. In Section 3, we
focus our attention on partial exact penalization, including a detailed
discussion on how to choose the penalty parameter, as well as the related
ill-behaviour. This analysis plays an important role in the performance of the
method, given that particular attention is paid to ways to avoid the
aforementioned ill-behaviour. Furthermore, we will present results on the
experimental order of convergence of the method and line search stepsize
parameter at the last iteration. The results in Section 4 are compared with
known solutions of the problems to check the performance of the method under
different partial exact penalization parameter scenarios.
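The Fischer-Burmeister substitution mentioned above can be sketched directly; the smoothing by a parameter $\mu>0$ shown here is one common perturbation form, stated as an assumption rather than the paper's exact choice:

```python
import math

def fb(a, b):
    """Fischer-Burmeister function: fb(a, b) = 0  iff  a >= 0, b >= 0, a*b = 0."""
    return a + b - math.sqrt(a * a + b * b)

def fb_smooth(a, b, mu):
    """Perturbed variant, continuously differentiable everywhere for mu > 0."""
    return a + b - math.sqrt(a * a + b * b + 2.0 * mu)
```

Replacing each complementarity pair by $fb(\cdot,\cdot)=0$ turns the complementarity conditions into plain equations, and the smoothed variant removes the kink of $fb$ at the origin.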
## 2 The algorithm and convergence analysis
We start this section with some definitions necessary to state optimality
conditions for the bilevel program (1.2)–(1.3), which will be used to
construct the algorithm. The lower-level problem is _fully convex_ if the
functions $f$ and $g_{i}$, $i=1,\ldots,p$ are convex in $(x,y)$. A point
$(\bar{x},\bar{y})$ is said to be _lower-level regular_ if there exists a
vector $d\in\mathbb{R}^{m}$ such that we have
$\nabla_{y}g_{i}(\bar{x},\bar{y})^{\top}d<0\;\;\mbox{ for }\;\;i\in
I_{g}(\bar{x},\bar{y}):=\left\\{i:\;\,g_{i}(\bar{x},\bar{y})=0\right\\}.$
(2.1)
Obviously, this is the _Mangasarian-Fromovitz constraint qualification_ (MFCQ)
for the lower-level problem in (1.1). Similarly, a point $(\bar{x},\bar{y})$
is _upper-level regular_ if there exists
$d\in\mathbb{R}^{n}\times\mathbb{R}^{m}$ such that
$\begin{array}[]{rl}\nabla g_{i}(\bar{x},\bar{y})^{\top}d<0&\text{for
}\;\;i\in I_{g}(\bar{x},\bar{y}),\\\ \nabla
G_{j}(\bar{x},\bar{y})^{\top}d<0&\text{for }\;\;j\in
I_{G}(\bar{x},\bar{y}):=\left\\{j:\;\,G_{j}(\bar{x},\bar{y})=0\right\\}.\\\
\end{array}$ (2.2)
Finally, to write the necessary optimality conditions for problem (1.2)–(1.3),
it is standard to use the following partial calmness concept [30]:
###### Definition 2.1.
Let $(\bar{x},\bar{y})$ be a local optimal solution of problem (1.2)–(1.3).
The problem is partially calm at $(\bar{x},\bar{y})$ if there exist
$\lambda>0$ and a neighbourhood $U$ of $(\bar{x},\bar{y},0)$ such that
$F(x,y)-F(\bar{x},\bar{y})+\lambda|u|\geq 0,\;\,\forall(x,y,u)\in
U:\;G(x,y)\leq 0,\;g(x,y)\leq 0,\;f(x,y)-\varphi(x)-u=0.$
The following relationship shows that the partial calmness concept enables the
penalization of the optimal value function constraint $f(x,y)-\varphi(x)\leq
0$, to generate a tractable optimization problem, as it is well-known that
standard constraint qualifications (including the MFCQ, for example) fail for
problem (1.2)–(1.3); see [9, 30].
###### Theorem 2.2 ([30]).
Let $(\bar{x},\bar{y})$ be a local minimizer of (1.2)–(1.3). Then, this
problem is partially calm at $(\bar{x},\bar{y})$ if and only if there is some
$\bar{\lambda}>0$ such that for any $\lambda\geq\bar{\lambda}$, the point
$(\bar{x},\bar{y})$ is also a local minimizer of
$\underset{x,\,y}{\min}\;F(x,y)+\lambda(f(x,y)-\varphi(x))\;\mbox{ s.t. }\;G(x,y)\leq 0,\;\,g(x,y)\leq
0.$ (2.3)
Problem (2.3) is known as the _partial exact penalization_ problem of
(1.2)–(1.3), as only the optimal value constraint is penalized. Next, we state
the necessary optimality conditions for problem (1.2)–(1.3) based on the
aforementioned penalization, while using the upper- and lower-level regularity
conditions and an estimate of the subdifferential of $\varphi$ in (1.3); see,
e.g., [9, 6, 8, 30], for the details.
###### Theorem 2.3.
Let $(\bar{x},\bar{y})$ be a local optimal solution of problem (1.2)–(1.3), where
all involved functions are assumed to be continuously differentiable,
$\varphi$ is finite around $\bar{x}$, and the lower-level problem is fully
convex. Furthermore, suppose that the problem is partially calm at
$(\bar{x},\bar{y})$, and the lower- and upper-level regularity conditions are
both satisfied at $(\bar{x},\bar{y})$. Then, there exist $\lambda>0$, as well
as $u,w\in\mathbb{R}^{p}$ and $v\in\mathbb{R}^{q}$ such that
$\displaystyle\nabla F(\bar{x},\bar{y})+\nabla
g(\bar{x},\bar{y})^{T}(u-\lambda w)+\nabla G(\bar{x},\bar{y})^{T}v=0,$ (2.4)
$\displaystyle\nabla_{y}f(\bar{x},\bar{y})+\nabla_{y}g(\bar{x},\bar{y})^{T}w=0,$
(2.5) $\displaystyle u\geq 0,\;\;g(\bar{x},\bar{y})\leq
0,\;\;u^{T}g(\bar{x},\bar{y})=0,$ (2.6) $\displaystyle v\geq
0,\;\;G(\bar{x},\bar{y})\leq 0,\;\;v^{T}G(\bar{x},\bar{y})=0,$ (2.7)
$\displaystyle w\geq 0,\;\;g(\bar{x},\bar{y})\leq
0,\;\;w^{T}g(\bar{x},\bar{y})=0.$ (2.8)
In this result, partial calmness and full convexity are essential and
fundamentally related to the nature of the bilevel optimization. Hence, it is
important to highlight a few classes of problems satisfying these assumptions.
Partial calmness has been the main tool to derive optimality conditions for
(1.1) from the perspective of the optimal value function; see, e.g., [6, 8, 9,
30]. It automatically holds if $G$ is independent from $y$ and the lower-level
problem is defined by
$f(x,y):=c^{\top}y\;\;\mbox{ and }\;\,g(x,y):=A(x)+By,$ (2.9)
where $A:\mathbb{R}^{n}\rightarrow\mathbb{R}^{p}$, $c\in\mathbb{R}^{m}$, and
$B\in\mathbb{R}^{p\times m}$. More generally, various sufficient conditions
ensuring that partial calmness holds have been studied in the literature; see
[30] for the seminal work on the subject. More recently, the paper [19] has
revisited the condition, proposed a fresh perspective, and established new
sufficient conditions for it to be satisfied.
As for full convexity, it will automatically hold for the class of problems
defined in (2.9), provided that each component of the function $A$ is convex.
Another class of nonlinear fully convex lower-level problems is given in [17].
Note however that when it is not possible to guarantee that this assumption is
satisfied, there are at least two alternative scenarios to obtain the same
optimality conditions as in Theorem 2.3. The first is to replace the full
convexity assumption by the _inner semicontinuity_ of the optimal solution
set-valued mapping $S$ in (1.1). Secondly, note that a much weaker qualification
condition known as _inner semicompactness_ can also be used here. However,
under the latter assumption, it will additionally be required to have
$S(\bar{x})=\\{\bar{y}\\}$ in order to get the optimality conditions in
(2.4)–(2.8). The concept of inner semicontinuity (resp. semicompactness) of
$S$ is closely related to the lower semicontinuity (resp. upper
semicontinuity) of set-valued mappings; for more details on these notions and
their ramifications on bilevel programs, see, e.g., [6, 8, 9].
It is important to mention that various other necessary optimality conditions,
different from the above ones, can be obtained, depending on the assumptions
made. Details of different stationarity concepts for (1.2) can be found in the
latter references, as well as in [32].
To reformulate the complementarity conditions (2.6)–(2.8) into a system of
equations, we use the well-known _Fischer-Burmeister_ function [11]
$\phi(a,b):=\sqrt{a^{2}+b^{2}}-a-b$, which ensures that
$\phi(a,b)=0\;\;\iff\;\;a\geq 0,\;\;b\geq 0,\;\;ab=0.$
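As a quick numerical sanity check (a sketch of ours, not part of the paper), this equivalence can be verified directly:

```python
import math

def fischer_burmeister(a, b):
    # phi(a, b) = sqrt(a^2 + b^2) - a - b
    return math.sqrt(a * a + b * b) - a - b

# phi vanishes exactly on the complementarity set {a >= 0, b >= 0, ab = 0}
assert abs(fischer_burmeister(0.0, 3.0)) < 1e-12   # a = 0, b >= 0
assert abs(fischer_burmeister(2.0, 0.0)) < 1e-12   # a >= 0, b = 0
assert fischer_burmeister(1.0, 1.0) != 0.0         # ab > 0 violates complementarity
assert fischer_burmeister(-1.0, 2.0) != 0.0        # a < 0 violates nonnegativity
```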
This leads to the reformulation of the optimality conditions (2.4)–(2.8) into
the system of equations:
$\displaystyle\Upsilon^{\lambda}(z):=\left(\begin{array}[]{rr}\nabla_{x}F(x,y)+\nabla_{x}g(x,y)^{T}(u-\lambda
w)+\nabla_{x}G(x,y)^{T}v\\\ \nabla_{y}F(x,y)+\nabla_{y}g(x,y)^{T}(u-\lambda
w)+\nabla_{y}G(x,y)^{T}v\\\ \nabla_{y}f(x,y)+\nabla_{y}g(x,y)^{T}w\\\
\sqrt{u^{2}+g(x,y)^{2}}-u+g(x,y)\\\ \sqrt{v^{2}+G(x,y)^{2}}-v+G(x,y)\\\
\sqrt{w^{2}+g(x,y)^{2}}-w+g(x,y)\end{array}\right)=0,$ (2.16)
where we have $z:=(x,y,u,v,w)$ and
$\sqrt{u^{2}+g(x,y)^{2}}-u+g(x,y):=\left(\begin{array}[]{c}\sqrt{u_{1}^{2}+g_{1}(x,y)^{2}}-u_{1}+g_{1}(x,y)\\\
\vdots\\\ \sqrt{u_{p}^{2}+g_{p}(x,y)^{2}}-u_{p}+g_{p}(x,y)\end{array}\right).$
(2.17)
$\sqrt{v^{2}+G(x,y)^{2}}-v+G(x,y)$ and $\sqrt{w^{2}+g(x,y)^{2}}-w+g(x,y)$ are
defined as in (2.17). The superscript $\lambda$ is used to emphasize the fact
that this number is a parameter and not a variable for equation (2.16). One
can easily check that this system is made of $n+2m+p+q+p$ real-valued
equations and $n+m+p+q+p$ variables. Clearly, this means that (2.16) is an
_overdetermined_ system and the Jacobian of $\Upsilon^{\lambda}(z)$, when it
exists, is a non-square matrix.
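For a concrete count (a sketch using the dimensions $n=1$, $m=2$, $p=3$, $q=1$ of the BOLIB example discussed later in this section):

```python
# dimension count for system (2.16) with n = 1, m = 2, p = 3, q = 1
n, m, p, q = 1, 2, 3, 1
equations = n + m + m + p + q + p   # x-stationarity, y-stationarity, lower-level block, three FB blocks
variables = n + m + p + q + p       # z = (x, y, u, v, w)
assert (equations, variables) == (12, 10)
assert equations - variables == m   # the system is overdetermined by exactly m rows
```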
To focus our attention on the main ideas of this paper, we smooth the
function $\Upsilon^{\lambda}$ in (2.16) using the _smoothed Fischer-Burmeister
function_ (see [16]) defined by
$\phi^{\mu}_{j}(x,y,u):=\sqrt{u^{2}_{j}+g_{j}(x,y)^{2}+2\mu}-u_{j}+g_{j}(x,y),\;j=1,\ldots,p,$
(2.18)
where the perturbation $\mu>0$ helps to guarantee its differentiability at
points $(x,y,u)$ where $u_{j}=g_{j}(x,y)=0$ for some $j\in\\{1,\ldots,p\\}$. It
is well-known (see the latter reference) that for $j=1,\ldots,p$,
$\phi^{\mu}_{j}(x,y,u)=0\;\;\Longleftrightarrow\;\;\left[u_{j}>0,\;-g_{j}(x,y)>0,\;-u_{j}g_{j}(x,y)=\mu\right].$
(2.19)
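A short sketch (our own, with hypothetical values) checking the characterization (2.19) and the effect of the perturbation at the kink $u_{j}=g_{j}(x,y)=0$:

```python
import math

def phi_mu(u, g, mu):
    # smoothed Fischer-Burmeister residual (2.18): sqrt(u^2 + g^2 + 2*mu) - u + g
    return math.sqrt(u * u + g * g + 2.0 * mu) - u + g

mu = 0.5
# u > 0, -g > 0 and -u*g = mu, so the residual vanishes, as stated in (2.19)
assert abs(phi_mu(1.0, -0.5, mu)) < 1e-12
# once -u*g differs from mu, the residual is nonzero
assert abs(phi_mu(1.0, -0.6, mu)) > 1e-6
# at the kink (u, g) = (0, 0) the square root argument stays positive (= 2*mu),
# which is what restores differentiability there
assert abs(phi_mu(0.0, 0.0, mu) - math.sqrt(2.0 * mu)) < 1e-12
```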
The smoothed version of system (2.16) then becomes
$\displaystyle\Upsilon^{\lambda}_{\mu}(z)=\left(\begin{array}[]{rr}\nabla_{x}F(x,y)+\nabla_{x}g(x,y)^{T}(u-\lambda
w)+\nabla_{x}G(x,y)^{T}v\\\ \nabla_{y}F(x,y)+\nabla_{y}g(x,y)^{T}(u-\lambda
w)+\nabla_{y}G(x,y)^{T}v\\\ \nabla_{y}f(x,y)+\nabla_{y}g(x,y)^{T}w\\\
\sqrt{u^{2}+g(x,y)^{2}+2\mu}-u+g(x,y)\\\
\sqrt{v^{2}+G(x,y)^{2}+2\mu}-v+G(x,y)\\\
\sqrt{w^{2}+g(x,y)^{2}+2\mu}-w+g(x,y)\end{array}\right)=0,$ (2.26)
following the convention in (2.17), where $\mu$ is a vector of appropriate
dimensions with sufficiently small positive elements. Under the assumption
that all the functions involved in problem (1.1) are continuously
differentiable, $\Upsilon^{\lambda}_{\mu}$ is also a continuously
differentiable function for any $\lambda>0$ and $\mu>0$. We can easily check
that for a fixed value of $\lambda>0$,
$\|\Upsilon^{\lambda}_{\mu}(z)-\Upsilon^{\lambda}(z)\|\;\;\longrightarrow
0\;\;\mbox{ as }\;\;\mu\downarrow 0.$ (2.27)
Hence, based on this scheme, our aim is to consider a sequence $\\{\mu_{k}\\}$
decreasing to $0$ such that equation (2.16) is approximately solved while
leading to
$\Upsilon^{\lambda}_{\mu^{k}}(z^{k})\;\;\longrightarrow 0\;\;\mbox{ as
}\;\;k\uparrow\infty$
for a fixed value of $\lambda>0$. In order to proceed, let us define the least
squares problem
$\min\;\Phi^{\lambda}_{\mu}(z):=\frac{1}{2}\left\lVert\Upsilon^{\lambda}_{\mu}(z)\right\rVert^{2}.$
(2.28)
Before we introduce the smoothed Levenberg-Marquardt method that will be one
of the main focus points of this paper, note that for fixed $\lambda>0$ and
$\mu>0$,
$\nabla\Upsilon^{\lambda}_{\mu}(z)=\left[\begin{array}[]{cccc}\nabla^{2}L^{\lambda}(z)&\nabla
g(x,y)^{T}&\nabla G(x,y)^{T}&-\lambda\nabla g(x,y)^{T}\\\
\nabla(\nabla_{y}\mathcal{L}(z))&O&O&\nabla_{y}g(x,y)^{T}\\\
\mathcal{T}^{\mu}\nabla g(x,y)&\Gamma^{\mu}&O&O\\\ \mathcal{A}^{\mu}\nabla
G(x,y)&O&\mathcal{B}^{\mu}&O\\\ \Theta^{\mu}\nabla
g(x,y)&O&O&\mathcal{K}^{\mu}\\\ \end{array}\right]$ (2.29)
with the pair $\left(\mathcal{T}^{\mu},\Gamma^{\mu}\right)$ defined by
$\mathcal{T}^{\mu}:=\mathrm{diag}\\{\tau_{1}^{\mu},\ldots,\tau_{p}^{\mu}\\}$ and
$\Gamma^{\mu}:=\mathrm{diag}\\{\gamma_{1}^{\mu},\ldots,\gamma_{p}^{\mu}\\}$, where
$\tau^{\mu}_{j}:=\frac{g_{j}(x,y)}{\sqrt{u_{j}^{2}+g_{j}(x,y)^{2}+2\mu}}+1\;\mbox{
and
}\;\gamma^{\mu}_{j}:=\frac{u_{j}}{\sqrt{u_{j}^{2}+g_{j}(x,y)^{2}+2\mu}}-1,\;\,j=1,\ldots,p.$ (2.30)
The pairs $\left(\mathcal{A}^{\mu},\,\mathcal{B}^{\mu}\right)$ and
$\left(\Theta^{\mu},\mathcal{K}^{\mu}\right)$ are defined in a similar way,
based on $(v_{j},G_{j}(x,y))$, $j=1,\ldots,q$ and $(w_{j},g_{j}(x,y))$,
$j=1,\ldots,p$, respectively.
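The diagonal entries in (2.30) are precisely the partial derivatives of the smoothed residual (2.18) with respect to $g_{j}$ and $u_{j}$; a small finite-difference sketch (function names and test values are ours) confirms this:

```python
import math

def phi_mu(u, g, mu):
    # smoothed Fischer-Burmeister residual (2.18)
    return math.sqrt(u * u + g * g + 2.0 * mu) - u + g

def tau(u, g, mu):
    # diagonal entry of T^mu in (2.30): d(phi_mu)/dg
    return g / math.sqrt(u * u + g * g + 2.0 * mu) + 1.0

def gamma_(u, g, mu):
    # diagonal entry of Gamma^mu in (2.30): d(phi_mu)/du
    return u / math.sqrt(u * u + g * g + 2.0 * mu) - 1.0

u, g, mu, h = 0.7, -0.3, 1e-2, 1e-6
fd_g = (phi_mu(u, g + h, mu) - phi_mu(u, g - h, mu)) / (2 * h)  # central difference in g
fd_u = (phi_mu(u + h, g, mu) - phi_mu(u - h, g, mu)) / (2 * h)  # central difference in u
assert abs(fd_g - tau(u, g, mu)) < 1e-6
assert abs(fd_u - gamma_(u, g, mu)) < 1e-6
```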
We now move on to present some definitions that will help us state the
algorithm with _line search_. It is well-known that line search helps to
choose the optimal step length to avoid overshooting an optimal solution in the
direction $d^{k}$ and also to globalize the convergence of the method, i.e., to
have more flexibility in the starting point $z^{0}$. The optimal step length
$\gamma_{k}$ can be calculated through minimizing
$\Phi^{\lambda}(z^{k}+\gamma_{k}d^{k})$, with respect to $\gamma_{k}$, such
that
$\Phi^{\lambda}(z^{k}+\gamma_{k}d^{k})\leq\Phi^{\lambda}(z^{k})+\sigma\gamma_{k}\nabla\Phi^{\lambda}(z^{k})^{T}d^{k}\;\mbox{
for }\;0<\sigma<1.$
That is, we are looking for
$\gamma_{k}=\arg\min_{\gamma\in\mathbb{R}}\|\Upsilon^{\lambda}(z^{k}+\gamma
d^{k})\|^{2}$. To implement line search, it is standard to use the _Armijo
condition_, which guarantees a sufficient decrease at the next iterate.
###### Definition 2.4.
Fixing $d$ and $z$, consider the function
$\phi_{\lambda}(\gamma):=\Phi^{\lambda}(z+\gamma d)$. Then, the _Armijo_
condition will be said to hold if
$\phi_{\lambda}(\gamma)\leq\phi_{\lambda}(0)+\gamma\sigma\phi^{\prime}_{\lambda}(0)$
for some $0<\sigma<1$.
The practical implementation of the Armijo condition is based on backtracking.
###### Definition 2.5.
Let $\rho\in(0,1)$ and $\bar{\gamma}>0$. _Backtracking_ is the process of
checking over a sequence $\bar{\gamma}$, $\rho\bar{\gamma}$,
$\rho^{2}\bar{\gamma}$, …, until a number $\gamma$ is found satisfying the
Armijo condition.
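A minimal sketch of Definitions 2.4-2.5 (Armijo condition with backtracking), written for a generic merit function with the standard right-hand side $\phi_{\lambda}(0)+\gamma\sigma\phi_{\lambda}^{\prime}(0)$; the quadratic toy function is ours:

```python
def backtrack(phi, dphi0, gamma_bar=1.0, rho=0.5, sigma=1e-4):
    # Try gamma_bar, rho*gamma_bar, rho^2*gamma_bar, ... (Definition 2.5)
    # until the Armijo condition phi(gamma) <= phi(0) + sigma*gamma*phi'(0) holds.
    gamma = gamma_bar
    while phi(gamma) > phi(0) + sigma * gamma * dphi0:
        gamma *= rho
    return gamma

# toy merit function along a descent direction, with phi'(0) = -2
phi = lambda g: (1.0 - g) ** 2
gamma = backtrack(phi, dphi0=-2.0)
assert phi(gamma) <= phi(0) + 1e-4 * gamma * (-2.0)
```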
Line search is widely used in continuous optimization; see, e.g., [21] for
more details. For the implementation in this paper, we start with stepsize
$\gamma_{0}:=1$; then, if the condition
$\left\lVert\Upsilon^{\lambda}_{\mu}(z^{k}+\gamma_{k}d^{k})\right\rVert^{2}<\left\lVert\Upsilon^{\lambda}_{\mu}(z^{k})\right\rVert^{2}+\sigma\gamma_{k}\Upsilon^{\lambda}_{\mu}(z^{k})^{T}\nabla\Upsilon_{\mu}^{\lambda}(z^{k})d^{k},$
is not satisfied, we set $\gamma_{k}:=\gamma_{k}/2$ and check again until the
condition above is satisfied. More generally, the algorithm proceeds as
follows:
###### Algorithm 2.6.
Smoothed Levenberg-Marquardt Method for Bilevel Optimization
Step 0: Choose
$(\lambda,\mu,K,\epsilon,\alpha_{0})\in\left(\mathbb{R}^{*}_{+}\right)^{5}$,
$(\rho,\sigma,\gamma_{0})\in(0,1)^{3}$,
$z^{0}:=(x^{0},y^{0},u^{0},v^{0},w^{0})$, and set $k:=0$.
Step 1: If $\left\lVert\Upsilon^{\lambda}_{\mu}(z^{k})\right\rVert<\epsilon$
or $k\geq K$, then stop.
Step 2: Calculate the Jacobian $\nabla\Upsilon_{\mu}^{\lambda}(z^{k})$ and
subsequently find a vector $d^{k}$ satisfying
$\left(\nabla\Upsilon_{\mu}^{\lambda}(z^{k})^{\top}\nabla\Upsilon_{\mu}^{\lambda}(z^{k})+\alpha_{k}I\right)d^{k}=-\nabla\Upsilon_{\mu}^{\lambda}(z^{k})^{\top}\Upsilon^{\lambda}_{\mu}(z^{k}),$
(2.31) where $I$ denotes the identity matrix of appropriate size.
Step 3: While
$\left\lVert\Upsilon^{\lambda}_{\mu}(z^{k}+\gamma_{k}d^{k})\right\rVert^{2}\geq\left\lVert\Upsilon^{\lambda}_{\mu}(z^{k})\right\rVert^{2}+\sigma\gamma_{k}\Upsilon^{\lambda}_{\mu}(z^{k})^{T}\nabla\Upsilon_{\mu}^{\lambda}(z^{k})d^{k}$,
do $\gamma_{k}\leftarrow\rho\gamma_{k}$ end.
Step 4: Set $z^{k+1}:=z^{k}+\gamma_{k}d^{k}$, $k:=k+1$, and go to Step 1.
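Under stated assumptions (a generic smooth residual map `r` standing in for $\Upsilon^{\lambda}_{\mu}$, a finite-difference Jacobian instead of the analytic one in (2.29), and the LM parameter (2.32) with $\eta=1$), the steps above can be sketched as follows; all names and the toy residual are ours:

```python
import numpy as np

def jacobian(r, z, h=1e-7):
    # forward-difference approximation of the Jacobian of the residual map r
    rz = r(z)
    return np.column_stack([(r(z + h * e) - rz) / h for e in np.eye(z.size)])

def lm_solve(r, z0, eps=1e-8, K=100, rho=0.5, sigma=1e-4):
    z = np.asarray(z0, dtype=float)
    for _ in range(K):
        rz = r(z)
        if np.linalg.norm(rz) < eps:                    # Step 1: stopping test
            break
        J = jacobian(r, z)                              # Step 2: LM direction from (2.31)
        alpha = np.linalg.norm(rz)                      # LM parameter (2.32), eta = 1
        d = np.linalg.solve(J.T @ J + alpha * np.eye(z.size), -J.T @ rz)
        gamma, slope = 1.0, (J.T @ rz) @ d              # Step 3: backtracking line search
        while (np.linalg.norm(r(z + gamma * d)) ** 2
               >= np.linalg.norm(rz) ** 2 + sigma * gamma * slope):
            gamma *= rho
        z = z + gamma * d                               # Step 4: update the iterate
    return z

# toy overdetermined residual (3 equations, 2 unknowns) with root z = (1, 2)
r = lambda z: np.array([z[0] - 1.0, z[1] - 2.0, (z[0] - 1.0) * z[1]])
z = lm_solve(r, np.array([3.0, 0.0]))
assert np.linalg.norm(r(z)) < 1e-6
```

Since $\left(\nabla\Upsilon_{\mu}^{\lambda}\right)^{\top}\nabla\Upsilon_{\mu}^{\lambda}+\alpha_{k}I$ is positive definite for $\alpha_{k}>0$, the direction $d^{k}$ is well defined even for the non-square Jacobian of the overdetermined system.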
Note that in Step 0, $\mathbb{R}^{*}_{+}:=(0,\,\infty)$. Before we move on to
focus our attention on the practical implementation details of this algorithm,
we present its convergence result, which is based on the following selection
of the Levenberg–Marquardt (LM) parameter $\alpha_{k}$:
$\alpha_{k}=\|\Upsilon_{\mu}^{\lambda}(z^{k})\|^{\eta}\,\mbox{ for any choice
of }\,\eta\in\left[1,\,2\right].$ (2.32)
###### Theorem 2.7 ([10]).
Consider Algorithm 2.6 with fixed values for the parameters $\lambda>0$ and
$\mu>0$ and let $\alpha_{k}$ be defined as in (2.32). Then, the sequence
$\\{z^{k}\\}$ generated by the algorithm converges quadratically to $\bar{z}$
satisfying $\Upsilon_{\mu}^{\lambda}(\bar{z})=0$, under the following
assumptions:
1. 1.
$\Upsilon_{\mu}^{\lambda}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N+m}$ is
continuously differentiable and
$\nabla\Upsilon_{\mu}^{\lambda}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{(N+m)\times N}$
is locally Lipschitz continuous in a neighbourhood of $\bar{z}$.
2. 2.
There exist $C>0$ and $\delta>0$ such that
$C\,\mathrm{dist}(z,Z^{\lambda}_{\mu})\leq\left\lVert\Upsilon_{\mu}^{\lambda}(z)\right\rVert\;\;\text{for
all }\;\,z\in\mathcal{B}(\bar{z},\delta),$
where _dist_ denotes the distance function and $Z^{\lambda}_{\mu}$ corresponds
to the solution set of equation (2.26).
For fixed values of $\lambda>0$ and $\mu>0$, assumption 1 in this theorem is
automatically satisfied if all the functions involved in problem (1.1) are
twice continuously differentiable. According to [29], assumption 2 of Theorem
2.7 is fulfilled if the matrix $\nabla\Upsilon_{\mu}^{\lambda}$ has full
column rank. Various conditions guaranteeing that
$\nabla\Upsilon_{\mu}^{\lambda}$ has full rank have been developed in [14].
Below, we present an example of bilevel program satisfying the first and
second assumptions of Theorem 2.7.
###### Example 2.8.
Consider the following instance of problem (1.1) from the BOLIB library [33,
LamprielloSagratelli2017Ex33]:
$\begin{array}[]{rll}F(x,y)&:=&x^{2}+(y_{1}+y_{2})^{2},\\\
G(x,y)&:=&-x+0.5,\\\ f(x,y)&:=&y_{1},\\\
g(x,y)&:=&\left(\begin{array}[]{c}-x-y_{1}-y_{2}+1\\\
-y\end{array}\right).\end{array}$
Obviously, assumption 1 holds. According to [14], for this example, the
columns of $\nabla\Upsilon_{\mu}^{\lambda}$ are linearly independent at the
solution point
$\bar{z}:=(\bar{x},\,\bar{y}_{1},\,\bar{y}_{2},\,\bar{u}_{1},\,\bar{u}_{2},\,\bar{u}_{3},\,\bar{v},\,\bar{w}_{1},\,\bar{w}_{2},\,\bar{w}_{3})=(0.5,\,0,\,0.5,\,1,\,\lambda,\,0,\,0,\,0,\,1,\,0)$
with the parameters chosen as $\mu=2\times 10^{-2}$ and $\lambda=10^{-2}$. ∎
On the selection of the LM parameter $\alpha_{k}$, we conducted a preliminary
analysis based on the BOLIB library test set [33]. It was observed that for
almost all the corresponding examples, the choice
$\alpha_{k}:=\left\|\Upsilon_{\mu}^{\lambda}(z^{k})\right\|^{\eta}$ with
$\eta\in(1,2]$ leads to a very poor performance of Algorithm 2.6. The typical
behaviour of the algorithm for $\eta\in(1,2]$ is shown in the following
example.
###### Example 2.9.
Consider the following scenario of problem (1.1) from [33, AllendeStill2013]:
$\begin{array}[]{rll}F(x,y)&:=&x_{1}^{2}-2x_{1}+x_{2}^{2}-2x_{2}+y_{1}^{2}+y_{2}^{2},\\\
G(x,y)&:=&\left(\begin{array}[]{c}-x\\\ -y\\\ x_{1}-2\end{array}\right),\\\
f(x,y)&:=&y_{1}^{2}-2x_{1}y_{1}+y_{2}^{2}-2x_{2}y_{2},\\\
g(x,y)&:=&\left(\begin{array}[]{c}(y_{1}-1)^{2}-0.25\\\
(y_{2}-1)^{2}-0.25\end{array}\right).\end{array}$
Figure 1 shows the progression of
$\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert$ generated from Algorithm
2.6 with $\alpha_{k}$ selected as in (2.32) while setting $\eta=1$ and
$\eta=2$, respectively. Clearly, after about 100 iterations
$\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert$ blows up relatively quickly
when $\eta=2$, while it falls and stabilizes within a certain tolerance for
$\eta=1$.
(a) $\alpha_{k}:=\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert$
(b) $\alpha_{k}:=\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert^{2}$
Figure 1: Typical behaviour of Algorithm 2.6 for two scenarios of the LM
parameter
It is worth noting that the scale on the y-axis of Figure 1(b) is quite large.
Hence, it might not be apparent that solutions at the early iterations of the
algorithm are much better for $\eta=1$ compared to the ones obtained in the
case where $\eta=2$. ∎
Note that for almost all of the examples in the BOLIB test set [33], we have a
behaviour of Algorithm 2.6 similar to that of Figure 1(b) when
$\alpha_{k}=\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert^{\eta}$ for many
different choices of $\eta\in(1,2]$. It is important to mention that the
behaviour of the algorithm is not surprising, as it is well-known in the
literature that with the sequence
$\alpha_{k}:=\left\lVert\Upsilon_{\mu}^{\lambda}(z^{k})\right\rVert^{2}$, for
example, the Levenberg–Marquardt method often faces some issues. Namely, when
the sequence $z^{k}$ is close to the solution set,
$\alpha_{k}:=\left\lVert\Upsilon_{\mu}^{\lambda}(z^{k})\right\rVert^{2}$ could
become smaller than the machine precision and hence lose its role. On the
other hand, when the sequence is far away from the solution set,
$\left\lVert\Upsilon^{\lambda}_{\mu}(z^{k})\right\rVert^{2}$ may be very
large, making movement towards the solution set very slow. Hence, from
now on, we use $\alpha_{k}:=\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert$,
with $k=0,\,1,\ldots$, for all the analysis of Algorithm 2.6 conducted in this
paper. Note however that there are various other approaches to select the LM
parameter $\alpha_{k}$ in the literature; see, e.g., [2, 10, 29].
To conclude this section, it is important to recall that from the perspective
of bilevel optimization, the partial exact penalization parameter $\lambda$ is
a key element of Algorithm 2.6, as it originates from the penalization of the
value function constraint $f(x,y)\leq\varphi(x)$. Additionally, unlike the
other parameters involved in the algorithm, which have benefited from many
years of research, as will become clear in Section 4, it remains unknown how
best to select $\lambda$ while solving problem (1.1) via the value
function reformulation (1.2)–(1.3). Hence, the focus of the remaining parts of
this paper will be on the selection of $\lambda$ and assessing its impact on
the efficiency of Algorithm 2.6.
## 3 Partial exact penalty parameter selection
The aim of this section is to explore the best way to select the penalization
parameter $\lambda$. Based on Theorem 2.2, we should obtain the solution for
some threshold value $\bar{\lambda}$ of the penalty parameter, and the
algorithm should return the solution for any $\lambda\geq\bar{\lambda}$.
Increasing $\lambda$ at every iteration therefore seems to be the natural
approach to obtain and retain the solution. To proceed, we use the increasing
sequence $\lambda_{k}:=0.5\times(1.05)^{k}$, where $k$ is the iteration
counter of the algorithm. The main reason for this choice is that the final
value of $\lambda$ should be neither too small, so that solutions can be
recovered, nor too large, so that ill-conditioning is avoided. It was observed
that growing the sequence too aggressively (e.g., with $\lambda_{k}:=2^{k}$)
forces the algorithm to diverge. Also, it was observed that for fixed and
small values of $\lambda$ (i.e., $\lambda<1$), Algorithm 2.6 performs well for
many examples. This justifies choosing a starting value below $1$ for the
varying parameter. Overall, the aim here is to vary $\lambda$ as described
above and assess the potentially best ranges for selecting the parameter,
based on our test set from BOLIB [33].
### 3.1 How far can we go with the value of $\lambda$?
We start here by acknowledging that it is very difficult to check in practice
whether partial calmness (cf. Definition 2.1) holds. Nevertheless, we would
like to consider Theorem 2.2 as the basis of our experimental analysis, and
ask ourselves how large $\lambda$ needs to be for Algorithm 2.6 to converge.
Intuitively, one would think that taking $\lambda$ to be an arbitrarily large
number should be fine. However, this is usually not the case in practice. One
of the main reasons is that for too large values of $\lambda$, Algorithm 2.6
does not behave well. In particular, if we run Algorithm 2.6 with varying
$\lambda$ for a very large number of iterations, the value of the $Error$
blows up at some point and the values
$\left\lVert\Upsilon^{\lambda}_{\mu}(z^{k})\right\rVert$ stop decreasing.
Recall that ill-conditioning (and possibly the _ill-behaviour_) refers to one
eigenvalue of the Hessian involved in
$\nabla\Upsilon_{\mu}^{\lambda}(z^{k})$ being much larger than the other
eigenvalues, which adversely affects the curvature for gradient
methods [25]. To analyze this ill-behaviour in this section, we run our
algorithm for $1,000$ iterations with no stopping criteria and let
$\lambda$ vary indefinitely. We subsequently look at the iteration at which the
algorithm blows up and record the value of $\lambda$ there. To proceed, we
denote by $\lambda_{ill}$ the first value of $\lambda$ for which the ill-
behaviour is observed for each example. The table below presents the number of
BOLIB examples whose $\lambda_{ill}$ falls approximately within certain
intervals.
Table 1: Approximate intervals for the values of $\lambda$ when the ill-behaviour starts

$\lambda_{ill}$ | $\lambda_{ill}<10^{7}$ | $10^{7}<\lambda_{ill}<10^{9}$ | $10^{9}<\lambda_{ill}<10^{11}$ | $10^{11}<\lambda_{ill}<10^{20}$ | Not observed
---|---|---|---|---|---
Examples | 1 | 6 | 72 | 11 | 34
On average, the ill-behaviour typically occurs after about $500$
iterations (where $\lambda_{ill}\approx 10^{10}$), as can be seen in the
table above. For $34$ problems, the ill-behaviour was not observed within the
scope of $1,000$ iterations. We also see that for most of the problems
($72/124$), the ill-behaviour happens for $10^{9}<\lambda<10^{11}$. We further
observe that the algorithm behaves well for values of the penalty parameter
$\lambda<10^{9}$, with only $7/124$ examples demonstrating ill-behaviour for
such $\lambda$. This makes the choice of very large values of $\lambda$
unattractive. Mainly, the analysis shows that choosing $\lambda>10^{9}$
could cause the algorithm to diverge. Hence, based on our method, selecting
$\lambda\leq 10^{7}$ seems to be a very safe choice for our BOLIB test set.
This is useful for the choice of a fixed $\lambda$, as we can choose values
smaller than $10^{7}$. For the case of varying $\lambda$, the values are
controlled by the stopping criteria proposed in the next section. The complete
results on the values of $\lambda_{ill}$ for each example will be presented in
Table 2 below.
Interestingly, 34 of the 124 test problems do not demonstrate any signs of
ill-behaviour even if we run the algorithm for $1,000$ iterations with
$\lambda_{k}=0.5\times 1.05^{k}$. A potential reason could simply be that the
parameter $\lambda$ does not get sufficiently large after $1,000$ iterations
to cause problems for these examples. It could also be that the eigenvalues of
the Hessian are not affected by large values of $\lambda$ for these examples.
There is also the possibility that the elements of the Hessian involved in
(2.31) do not depend on $\lambda$ at all, as is the case for $20/34$ of these
problems, where the function $g$ is linear in $(x,y)$ or not present. Next, we
focus on assessing which magnitudes of the penalty parameter $\lambda$ seem to
lead to the best performance of our method.
### 3.2 Do the values of $\lambda$ really need to be large?
It is clear from the previous subsection that to reduce the chances for
Algorithm 2.6 to diverge or exhibit some ill-behaviour, we should
approximately select $\lambda<10^{7}$. However, it is still unclear whether
only large values of $\lambda$ would be sensible to ensure a good behaviour of
the algorithm. In other words, it is important to know whether relatively
small values of $\lambda$ can lead to a good behaviour for Algorithm 2.6. To
assess this, we attempt here to identify inflection points, i.e., values of
$k$ where we have
$\left\lVert\Upsilon^{\lambda_{k}}_{\mu}(z^{k})\right\rVert<\epsilon$ as
$\lambda_{k}$ varies increasingly as described in the introduction of this
section. We would then record the value of $\lambda_{k}$ at these points.
Ideally, we want to find the threshold $\bar{\lambda}$ such that the solution
is retained for all $\lambda>\bar{\lambda}$ in the sense of Theorem 2.2. To
proceed, we extract the information on the final
$Error^{*}:=\left\lVert\Upsilon^{\lambda}(z^{*})\right\rVert$ for each example
from [26] and then rerun the algorithm with the varying penalty parameter
$\lambda_{k}:=0.5\times 1.05^{k}$ and the new stopping criterion $Error\leq
1.1\,Error^{*}$, while relaxing all of the other stopping criteria (see details
in Section 4). This way, we stop once we observe
$Error_{k}:=\left\lVert\Upsilon^{\lambda_{k}}(z)\right\rVert$ close to the
$Error^{*}$ that we obtained in our experiments [26]. It is worth noting that
it only makes sense to test the 72/117 ($61.54\%$) of the examples for which
the algorithm performed well and produced a good solution. This approach can
be thought of as finding the inflection point; for instance, for a run as in
Figure 2(a), we want to stop at the inflection point after 125-130 iterations.
The illustration of how we aim to stop at the inflection point is presented in
Figure 2(a)–(b) below, where we have
$\left\lVert\Upsilon^{\lambda}(z)\right\rVert$ on the y-axis and the
iterations on the x-axis.
(a) Complete run 1
(b) Inflection point stop 1
(c) Complete run 2
(d) Inflection point stop 2
Figure 2: (a) and (b) illustrate the inflection point identification for
Example AllendeStill2013 and similarly (c) and (d) correspond to Example
Anetal2009; the examples are taken from BOLIB [33].
For some of the examples, we obtained a better $Error$ than the initial
$Error^{*}$. For these cases, we stopped very early, as $Error_{k}\leq
1.1\,Error^{*}$ was typically met at an early iteration $k$ (where $\lambda$ was
still small), as demonstrated in Figure 2(c)–(d) above. This demonstrates the
disadvantage of $\lambda$ being an increasing sequence: if the algorithm makes
many iterations, the parameter $\lambda$ keeps increasing with no possibility
of returning to smaller values. It turns out that for some examples, a smaller
value of $\lambda$ was as good as, or even better than, large values for
recovering a solution. This further justifies the choice to start from a small
value of $\lambda$, that is, $\lambda_{0}<1$, and to increase it slowly.
With the setting to stop whenever $Error_{k}\leq 1.1\,Error^{*}$, we often stop
very early. Hence, we do not always get the $\bar{\lambda}$ that represents the
inflection point, which we aimed to get; cf. Figure 2(c)–(d), where (c) has a
scale of $10^{4}$ on the y-axis. However, we can clearly see from Figure 2(c)
that the inflection point lies around 190-200 iterations, where the value of
$\lambda$ is much bigger. It is clear that we stopped earlier due to a
small value of $Error$ after 12-45 iterations. It was observed that such a
scenario is typical for the examples in the considered test set. For this
reason, we introduce $\lambda^{*}$ as the _large threshold_ of
$\lambda$. This value represents the value of the penalty
parameter at the inflection point, where the solution starts to be recovered
for large values of $\lambda$ ($\lambda>6.02$), while we also record
$\bar{\lambda}$ as the first (smallest) value of $\lambda$ for which a good
solution was obtained. For instance, with $\lambda$ defined as
$\lambda_{k}:=0.5\times 1.05^{k}$ in Figure 2(c), we have $\bar{\lambda}=0.5\times
1.05^{12}$ and $\lambda^{*}=0.5\times 1.05^{190}$, as we obtain a good solution
for small $\lambda$ after 12 iterations and for large $\lambda$ after 190
iterations. We note that the value $\lambda>6.02$ corresponds to the
penalty parameter after at least 50 iterations, as in the case of
varying $\lambda$ we have $\lambda=0.5\times 1.05^{51}=6.02$.
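The quoted values of the schedule $\lambda_{k}=0.5\times 1.05^{k}$ are easy to verify with a short computation (a sketch):

```python
# penalty parameter schedule used throughout Section 3
lam = lambda k: 0.5 * 1.05 ** k

# at iteration k = 51 the parameter reaches the threshold 6.02 quoted in the text
assert abs(lam(51) - 6.02) < 0.01
# around iteration 495 it reaches roughly 1.54e10, the magnitude at which the
# ill-behaviour typically starts (cf. Table 2)
assert abs(lam(495) / 1.54e10 - 1.0) < 0.01
```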
The complete results of detecting $\bar{\lambda}$ and $\lambda^{*}$ are
presented in Table 2 below. It was observed that the
behaviour of the method follows the same pattern for the majority of the
examples. Typically, we obtain and retain a good solution for a few small
values of $\lambda$; then the value of the $Error$ blows up and takes some
iterations to start decreasing, before coming back to obtaining and retaining
a good solution for large values of $\lambda$. Such a pattern is clearly
demonstrated in Figure 2(c). This behaviour is interesting, as usually only
large values of penalty parameters are used in practice [4, 23], as intuitively
suggested by Theorem 2.2. As mentioned in [13], some methods require penalty
parameters to increase to infinity to obtain convergence.
Table 2: Ill behaviour and two thresholds for $\lambda$

Problem number | Problem name | $\textbf{iter}_{ill}$ | $\lambda_{ill}$ | | $\bar{\lambda}$ | | $\lambda^{*}$ | iter for $\lambda^{*}$
---|---|---|---|---|---|---|---|---
1 | AiyoshiShimizu1984Ex2 | 495 | 1.54e+10 | | $9.92*10^{4}$ | | $9.92*10^{4}$ | 250
2 | AllendeStill2013 | Not observed | NA | | 245 | | 245 | 127
3 | AnEtal2009 | Not observed | NA | | - | | - | -
4 | Bard1988Ex1 | 520 | 5.22e+10 | | 0.608 | | 245 | 127
5 | Bard1988Ex2 | 536 | 1.14e+11 | | - | | - | -
6 | Bard1988Ex3 | Not observed | NA | | 0.525 | | 54.1 | 96
7 | Bard1991Ex1 | Not observed | NA | | 0.608 | | 183 | 121
8 | BardBook1998 | 502 | 2.17e+10 | | - | | - | -
9 | CalamaiVicente1994a | 476 | 6.1e+09 | | 0.608 | | 27.3 | 82
10 | CalamaiVicente1994b | 490 | 1.21e+10 | | 0.739 | | 79.9 | 104
11 | CalamaiVicente1994c | 490 | 1.21e+10 | | - | | - | -
12 | CalveteGale1999P1 | 538 | 1.26e+11 | | 0.525 | | $1.57*10^{3}$ | 165
13 | ClarkWesterberg1990a | 520 | 5.22e+10 | | - | | - | -
14 | Colson2002BIPA1 | 510 | 3.2e+10 | | 792 | | 792 | 151
15 | Colson2002BIPA2 | 900 | 5.88e+18 | | 1.33 | | $1.65*10^{3}$ | 166
16 | Colson2002BIPA3 | 110 | 107 | | - | | - | -
17 | Colson2002BIPA4 | Not observed | NA | | 0.551 | | $2.68*10^{3}$ | 176
18 | Colson2002BIPA5 | 550 | 2.25e+11 | | - | | - | -
19 | Dempe1992a | Not observed | NA | | - | | - | -
20 | Dempe1992b | Not observed | NA | | 0.551 | | $1.29*10^{3}$ | 161
21 | DempeDutta2012Ex24 | Not observed | NA | | - | | - | -
22 | DempeDutta2012Ex31 | Not observed | NA | | 0.525 | | 137 | 115
23 | DempeEtal2012 | 470 | 4.55e+09 | | - | | - | -
24 | DempeFranke2011Ex41 | 492 | 1.33e+10 | | 0.704 | | 44.5 | 92
25 | DempeFranke2011Ex42 | 495 | 1.54e+10 | | 0.67 | | 54.1 | 96
26 | DempeFranke2014Ex38 | 495 | 1.54e+10 | | 0.551 | | 79.9 | 104
27 | DempeLohse2011Ex31a | 502 | 2.17e+10 | | 0.525 | | 6.02 | 51
28 | DempeLohse2011Ex31b | 510 | 3.2e+10 | | - | | - | -
29 | DeSilva1978 | 495 | 1.54e+10 | | 0.551 | | 69 | 101
30 | FalkLiu1995 | 440 | 1.05e+09 | | $1.1*10^{4}$ | | $1.1*10^{4}$ | 205
31 | FloudasEtal2013 | 505 | 2.51e+10 | | 0.525 | | 183 | 121
32 | FloudasZlobec1998 | 510 | 3.2e+10 | | - | | - | -
33 | GumusFloudas2001Ex1 | 530 | 8.5e+10 | | - | | - | -
34 | GumusFloudas2001Ex3 | 510 | 3.2e+10 | | - | | - | -
35 | GumusFloudas2001Ex4 | 502 | 2.17e+10 | | - | | - | -
36 | GumusFloudas2001Ex5 | 495 | 1.54e+10 | | 3.04 | | 38.4 | 89
37 | HatzEtal2013 | Not observed | NA | | 0.608 | | 6.02 | 51
38 | HendersonQuandt1958 | Not observed | NA | | $5.64*10^{7}$ | | $5.64*10^{7}$ | 380
39 | HenrionSurowiec2011 | Not observed | NA | | 0.551 | | 6.02 | 51
40 | IshizukaAiyoshi1992a | 495 | 1.54e+10 | | - | | - | -
41 | KleniatiAdjiman2014Ex3 | 445 | 1.34e+09 | | - | | - | -
42 | KleniatiAdjiman2014Ex4 | 485 | 9.46e+09 | | 0.739 | | 42.4 | 91
43 | LamparSagrat2017Ex23 | Not observed | NA | | 0.525 | | 137 | 115
44 | LamparSagrat2017Ex31 | Not observed | NA | | 0.525 | | 6.02 | 51
45 | LamparSagrat2017Ex32 | Not observed | NA | | 0.551 | | 284 | 130
46 | LamparSagrat2017Ex33 | 495 | 1.54e+10 | | 0.551 | | 102 | 109
47 | LamparSagrat2017Ex35 | Not observed | NA | | 0.525 | | 298 | 131
48 | LucchettiEtal1987 | 495 | 1.54e+10 | | 0.525 | | 6.02 | 51
49 | LuDebSinha2016a | Not observed | NA | | 0.943 | | 6.02 | 51
50 | LuDebSinha2016b | Not observed | NA | | 0.67 | | 6.02 | 51
51 | LuDebSinha2016c | Not observed | NA | | - | | - | -
52 | LuDebSinha2016d | 890 | 3.61e+18 | | - | | - | -
53 | LuDebSinha2016e | 900 | 5.88e+18 | | - | | - | -
54 | LuDebSinha2016f | Not observed | NA | | - | | - | -
55 | MacalHurter1997 | Not observed | NA | | 0.551 | | 6.02 | 51
56 | Mirrlees1999 | Not observed | NA | | 0.579 | | 6.02 | 51
57 | MitsosBarton2006Ex38 | 398 | 1.36e+08 | | 6.64 | | 7.32 | 55
58 | MitsosBarton2006Ex39 | 400 | 1.5e+08 | | - | | - | -
59 | MitsosBarton2006Ex310 | 470 | 4.55e+09 | | 56.8 | | 56.8 | 97
60 | MitsosBarton2006Ex311 | 452 | 1.89e+09 | | - | | - | -
61 | MitsosBarton2006Ex312 | 470 | 4.55e+09 | | 0.99 | | 6.02 | 51
62 | MitsosBarton2006Ex313 | 505 | 2.51e+10 | | 0.579 | | 54.1 | 96
63 | MitsosBarton2006Ex314 | 460 | 2.79e+09 | | 0.855 | | 11.9 | 65
64 | MitsosBarton2006Ex315 | 470 | 4.55e+09 | | 26 | | 56.8 | 97
65 | MitsosBarton2006Ex316 | Not observed | NA | | 1.33 | | 6.02 | 51
66 | MitsosBarton2006Ex317 | 420 | 3.97e+08 | | 3.7 | | 6.02 | 51
67 | MitsosBarton2006Ex318 | Not observed | NA | | 1.04 | | 6.02 | 51
68 | MitsosBarton2006Ex319 | 398 | 1.36e+08 | | - | | - | -
69 | MitsosBarton2006Ex320 | 485 | 9.46e+09 | | 0.943 | | 6.32 | 52
70 | MitsosBarton2006Ex321 | 452 | 1.89e+09 | | 1.09 | | 10.3 | 62
71 | MitsosBarton2006Ex322 | 470 | 4.55e+09 | | 0.99 | | 20.4 | 76
72 | MitsosBarton2006Ex323 | 505 | 2.51e+10 | | - | | - | -
73 | MitsosBarton2006Ex324 | 495 | 1.54e+10 | | 0.855 | | 6.02 | 51
74 | MitsosBarton2006Ex325 | 505 | 2.51e+10 | | - | | - | -
75 | MitsosBarton2006Ex326 | 505 | 2.51e+10 | | - | | - | -
76 | MitsosBarton2006Ex327 | 475 | 5.81e+09 | | 1.26 | | 7.32 | 55
77 | MitsosBarton2006Ex328 | 510 | 3.2e+10 | | - | | - | -
78 | MorganPatrone2006a | 500 | 1.97e+10 | | 0.525 | | 6.02 | 51
79 | MorganPatrone2006b | 505 | 2.51e+10 | | 0.551 | | 6.02 | 51
80 | MorganPatrone2006c | 470 | 4.55e+09 | | 56.8 | | 56.8 | 97
81 | MuuQuy2003Ex1 | Not observed | NA | | - | | - | -
82 | MuuQuy2003Ex2 | Not observed | NA | | - | | - | -
83 | NieEtal2017Ex34 | 520 | 5.22e+10 | | 0.551 | | 118 | 112
84 | NieEtal2017Ex52 | Not observed | NA | | - | | - | -
85 | NieEtal2017Ex54 | 495 | 1.54e+10 | | - | | - | -
86 | NieEtal2017Ex57 | 850 | 5.13e+17 | | - | | - | -
87 | NieEtal2017Ex58 | 780 | 1.69e+16 | | - | | - | -
88 | NieEtal2017Ex61 | Not observed | NA | | - | | - | -
89 | Outrata1990Ex1a | 505 | 2.51e+10 | | 0.608 | | 192 | 122
90 | Outrata1990Ex1b | 510 | 3.2e+10 | | 0.551 | | 223 | 125
91 | Outrata1990Ex1c | 495 | 1.54e+10 | | 0.99 | | 718 | 149
92 | Outrata1990Ex1d | 495 | 1.54e+10 | | - | | - | -
93 | Outrata1990Ex1e | 495 | 1.54e+10 | | 0.855 | | 873 | 153
94 | Outrata1990Ex2a | 520 | 5.22e+10 | | 0.551 | | 245 | 127
95 | Outrata1990Ex2b | 470 | 4.55e+09 | | 1.78 | | 6.02 | 51
96 | Outrata1990Ex2c | 510 | 3.2e+10 | | 0.551 | | 72.5 | 102
97 | Outrata1990Ex2d | 520 | 5.22e+10 | | - | | - | -
98 | Outrata1990Ex2e | 470 | 4.55e+09 | | 0.898 | | 6.02 | 51
99 | Outrata1993Ex31 | 520 | 5.22e+10 | | 0.551 | | 144 | 116
100 | Outrata1993Ex32 | 680 | 1.28e+14 | | - | | - | -
101 | Outrata1994Ex31 | 910 | 9.58e+18 | | - | | - | -
102 | OutrataCervinka2009 | 505 | 2.51e+10 | | - | | - | -
103 | PaulaviciusEtal2017a | 400 | 1.5e+08 | | 107 | | 107 | 110
104 | PaulaviciusEtal2017b | 490 | 1.21e+10 | | - | | - | -
105 | SahinCiric1998Ex2 | 510 | 3.2e+10 | | - | | - | -
106 | ShimizuAiyoshi1981Ex1 | 495 | 1.54e+10 | | 46.7 | | 46.7 | 93
107 | ShimizuAiyoshi1981Ex2 | 520 | 5.22e+10 | | - | | - | -
108 | ShimizuEtal1997a | 910 | 9.58e+18 | | 0.638 | | 212 | 124
109 | ShimizuEtal1997b | Not observed | NA | | - | | - | -
110 | SinhaMaloDeb2014TP3 | Not observed | NA | | 0.525 | | 651 | 147
111 | SinhaMaloDeb2014TP6 | 530 | 8.5e+10 | | - | | - | -
112 | SinhaMaloDeb2014TP7 | Not observed | NA | | $5.75*10^{10}$ | | $5.75*10^{10}$ | 522
113 | SinhaMaloDeb2014TP8 | 520 | 5.22e+10 | | - | | - | -
114 | SinhaMaloDeb2014TP9 | 505 | 2.51e+10 | | - | | - | -
115 | SinhaMaloDeb2014TP10 | 480 | 7.41e+09 | | - | | - | -
116 | TuyEtal2007 | 495 | 1.54e+10 | | 2.9 | | 16.8 | 72
117 | Vogel2002 | Not observed | NA | | - | | - | -
118 | WanWangLv2011 | 520 | 5.22e+10 | | - | | - | -
119 | YeZhu2010Ex42 | Not observed | NA | | 0.814 | | 6.02 | 51
120 | YeZhu2010Ex43 | Not observed | NA | | 4.49 | | 6.02 | 51
121 | Yezza1996Ex31 | 460 | 2.79e+09 | | 223 | | 223 | 125
122 | Yezza1996Ex41 | 490 | 1.21e+10 | | 0.608 | | 46.7 | 93
123 | Zlobec2001a | 360 | 2.12e+07 | | - | | - | -
124 | Zlobec2001b | 495 | 1.54e+10 | | - | | - | -
We now proceed to find the thresholds of $\lambda$ at which we start obtaining
a solution and retain it for larger values of $\lambda$. We use the technique
of finding the small threshold $\bar{\lambda}$ and the large threshold
$\lambda^{*}$, which was briefly discussed earlier. To find the thresholds we
first extract the value of the final $Error^{*}$ for each example from [26].
To find $\bar{\lambda}$ we run the algorithm with $\lambda:=0.5\times
1.05^{k}$ and the new stopping criterion:
$\textbf{Stop if }\phantom{-}Error\leq 1.1Error^{*}.$
Of course, we also relax all of the aforementioned stopping criteria, as
$Error$ is the main measure here and we know that the desired value of $Error$
exists. We then define $\bar{\lambda}:=0.5\times 1.05^{\bar{k}}$, where
$\bar{k}$ is the iteration at which the algorithm stops, i.e. the first
iteration with $Error<1.1Error^{*}$. For most of the cases we detect
$\bar{\lambda}$ early, owing to a good solution within the first few
iterations, in the same manner as shown in Figure 2 (c). Since for many
examples the condition $Error\leq 1.1Error^{*}$ is satisfied at early
iterations (before the inflection point is reached), we further introduce
$\lambda^{*}$, _the large threshold_ $\lambda$. The value of $\lambda^{*}$
represents the value of the penalty parameter at the inflection point, where
the solution starts to be recovered for large values of $\lambda$. It is
obtained in the same way as $\bar{\lambda}$, with the only difference that we
additionally impose the condition to stop after at least 50 iterations. To
obtain $\lambda^{*}$ we run the algorithm with $\lambda:=0.5\times 1.05^{k}$
and the following stopping criterion:
$\textbf{Stop if }\phantom{-}Error\leq
1.1Error^{*}\phantom{-}\&\phantom{-}iter>50.$
Then the large threshold is defined as $\lambda^{*}:=0.5\times 1.05^{k^{*}}$,
where $k^{*}$ is the iteration at which the algorithm stops, i.e. the first
iteration with $Error<1.1Error^{*}$ and $k>50$. This way $\bar{\lambda}$
represents the first (smallest) value of $\lambda$ for which a good solution
was obtained, while $\lambda^{*}$ represents the actual threshold after which
the solution is retained for large values of $\lambda$. Stopping at the
inflection point for the large threshold $\lambda^{*}$ was demonstrated in
Figure 2 (b), and for the small threshold $\bar{\lambda}$ in Figure 2 (d).
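The two-threshold search described above can be sketched as follows. This is a minimal sketch, not the authors' code: `error_at` is a hypothetical oracle standing in for a full run of Algorithm 2.6 with a given penalty parameter, and `error_star` is the reference value $Error^{*}$ taken from [26].

```python
def find_thresholds(error_at, error_star, max_iter=1000):
    """Search for the small and large thresholds along the schedule
    lambda_k = 0.5 * 1.05**k.  `error_at(lam)` is a hypothetical oracle
    returning the Error reached by the method for penalty parameter lam."""
    lam_bar = lam_star = None
    for k in range(max_iter):
        lam = 0.5 * 1.05 ** k
        if error_at(lam) <= 1.1 * error_star:
            if lam_bar is None:
                lam_bar = lam        # small threshold: first good lambda
            if lam_star is None and k > 50:
                lam_star = lam       # large threshold: stop only after k > 50
                break
    return lam_bar, lam_star
```

When the first good value of $\lambda$ only appears after iteration 50, the two thresholds coincide, matching the $\bar{\lambda}=\lambda^{*}$ cases reported in Table 2.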
It makes sense to test only the examples where the algorithm performed well
and produced a good solution. For the remaining examples, the value of
$Error^{*}$ would not make sense as a stopping measure, and the algorithm does
not obtain good solutions for them anyway. From an optimistic perspective, we
consider a solution recovered when the optimal value of the upper-level
objective is attained within some prescribed tolerance. Taking a tolerance of
$20\%$, the total number of solutions recovered by the method with varying
$\lambda$ is 72/117 ($61.54\%$) for the cases where a solution was reported in
BOLIB [33]. This result will be shown in more detail in Section 4.2. Let us
look at the thresholds $\bar{\lambda}$ and $\lambda^{*}$ for these examples in
the figure below, where the value of $\lambda$ is shown on the $y$-axis and
the example numbers on the $x$-axis.
Figure 3: Small threshold $\bar{\lambda}$ and large threshold $\lambda^{*}$
for examples with good solutions
For $68/72$ problems, we observe that the large threshold of the penalty
parameter satisfies $\lambda^{*}\leq 10^{4}$, which shows that we can usually
obtain a good solution within 205 iterations. This further justifies stopping
the algorithm after 200 iterations (see the next section) if there is no
significant step-to-step improvement. As for the main outcome of Figure 3, we
observe that the small threshold is smaller than the large threshold
($\bar{\lambda}<\lambda^{*}$) for $59/72$ ($82\%$) problems. This clearly
shows that for the majority of the problems for which we recover a solution
with $\lambda:=0.5\times 1.05^{k}$, we obtain a good solution for small
$\lambda$ as well as for large $\lambda$; in other words, a good solution is
typically obtained for small and for large values of $\lambda$, but not for
the medium values ($\bar{\lambda}<\lambda<\lambda^{*}$). This demonstrates
that small $\lambda$ could in principle be good for the method. For the
remaining $18\%$ of the problems we have $\bar{\lambda}=\lambda^{*}$, meaning
that no good solution was obtained for $\lambda<6$ for these examples.
As for the general observations of Figure 3, for $42/72$ ($58.33\%$) examples
the large threshold $\lambda^{*}$ is reached between iterations $90$ and
$176$, with $40<\lambda^{*}<2680$. For $7/72$ problems the threshold is in the
range $6.02<\lambda^{*}\leq 40$, and for only $4/72$ problems
$\lambda^{*}>1.1\times 10^{4}$. Once again, this confirms that $\lambda$
typically does not need to be large. It also suggests optimal values of
$\lambda$ for the tested examples, at least for our solution method. We
observe that for $19/72$ problems the algorithm stops at iteration 51,
yielding $\lambda^{*}=6.02$; this should be treated carefully, as the
inflection point could possibly be reached before iteration $50$ for these
examples. Nevertheless, $\lambda^{*}=6.02$ is still a good value of the
penalty parameter for these examples, as solutions are retained for
$\lambda>\lambda^{*}$.
As will be observed in Section 4, we can actually argue that smaller values of
$\lambda$ work better for our method, not only for varying $\lambda$ but also
for fixed $\lambda$. Together with the fact that we often observe the
behaviour demonstrated in Figure 2 (c), it follows that small $\lambda$ could
be more attractive for the method we implement. For some examples we even get
better values of $Error$ and better solutions for small values of $\lambda$.
Hence we conclude that small values of $\lambda$ can generate good solutions.
Since it is typical to use large values of $\lambda$ for other penalization
methods (e.g., in [3, 4, 23]), it is interesting to ask why small $\lambda$
worked better in our case. This could be due to the specific nature of the
method, or to the fact that we do not perform full penalization in the usual
sense. Another reason could come from the structure of the problems in the
test set. The exact reason for this behaviour remains an open question. What
is important is that small values of $\lambda$ could possibly also work well
for other penalty methods and for optimization problems of a different nature.
This result contradicts the typical choice of a large penalty parameter for
general penalization methods. As a conclusion for our framework, we can claim
that for our method $\lambda$ need not be large.
### 3.3 Partially calm examples
Intuitively, one would think that for partially calm examples, Algorithm 2.6
would behave well, in the sense that increasing $\lambda$ would lead to good
convergence behaviour. To show that this is not necessarily the case, we start
by considering the following result, which identifies a class of bilevel
programs of the form (1.1) that are automatically partially calm.
###### Theorem 3.1 ([19]).
Consider a bilevel program (1.1), where $G$ is independent of $y$ and the
lower-level optimal solution map is defined as follows, with
$c\in\mathbb{R}^{m}$, $d\in\mathbb{R}^{p}$, $B\in\mathbb{R}^{p\times m}$, and
$A:\mathbb{R}^{n}\rightarrow\mathbb{R}^{p}$:
$S(x):=\arg\min_{y}\left\\{c^{T}y|\;A(x)+By\leq d\right\\}.$ (3.1)
In this case, problem (1.1) is partially calm at any of its local optimal
solutions.
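For illustration, the lower-level structure (3.1) can be instantiated on a toy problem and solved as a linear program. The data below are made up for this sketch and do not come from BOLIB, and `scipy.optimize.linprog` merely stands in for whatever LP solver one prefers.

```python
# Toy instance of the lower-level structure in Theorem 3.1:
#   S(x) = argmin_y { c^T y : A(x) + B y <= d },
# which makes the bilevel program partially calm.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0])                 # lower-level objective: maximise y
B = np.array([[1.0], [-1.0]])
d = np.array([2.0, 0.0])
A = lambda x: np.array([x, 0.0])     # left-hand side depending on x only

def lower_level_solution(x):
    # Constraints rewritten as B y <= d - A(x) for linprog
    res = linprog(c, A_ub=B, b_ub=d - A(x), bounds=[(None, None)])
    return res.x[0]
```

For $x=0.5$ the constraints reduce to $0\leq y\leq 1.5$, so the lower-level solution is $y^{*}=1.5$.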
Examples 8, 40, 43, 45, 46, 118, and 123 in the BOLIB library (see Table
LABEL:combinedtable) are of the form described in this result. The expectation
is that these examples will follow the pattern of retaining the solution after
some threshold, that is, for $\lambda>\lambda^{*}$, as they fit the
theoretical structure behind the penalty approach described in Theorem 2.2.
Note that all of these examples follow the pattern shown in Figure 1(a).
However, if we relax the stopping criteria used to mitigate the effects of
ill-conditioning, as discussed in the previous two subsections, varying
$\lambda$ for 1000 iterations for these seven partially calm examples leads to
the three typical scenarios demonstrated in Figure 4.
(a) Example 8
(b) Example 123
(c) Example 45
Figure 4: (a) and (b) are obtained for 1000 iterations, while (c) is based on
500 iterations.
In the first case of Figure 4, we can clearly see that the algorithm performs
well, retaining the solution for a number of iterations, but then blows up at
one point (after 500 iterations) and never returns to reasonable solution
values. Examples 40 and 46 also follow this pattern. Example 123 (second
picture in Figure 4) shows a slightly different behaviour, where a zig-zagging
pattern is observed: Algorithm 2.6 blows up at some point and starts zig-
zagging away from the solution after obtaining it for a smaller value of
$\lambda$. Zig-zagging is a very common issue in penalty methods and is often
caused by ill-conditioning [21]. Note that Example 118 exhibits a similar
behaviour. This is somewhat similar to the first scenario; however, we treat
it separately, as zig-zagging is often cited as a danger caused by ill-
conditioning of the penalty function. The last picture of Figure 4 shows a
case where Algorithm 2.6 runs very well, with no ill-behaviour observed over
all the iterations performed. It is possible that the algorithm could blow up
after more iterations if we kept increasing $\lambda$. It is also possible
that ill-conditioning does not occur for this example at all, as the Hessian
of $\Upsilon^{\lambda}$ (2.16) is not affected by $\lambda$. Out of the seven
BOLIB problems considered here, only Examples 43 and 45 follow this pattern.
## 4 Performance comparison for the Levenberg-Marquardt method under fixed
and varying partial exact penalty parameter
Following the discussion from the previous section, two approaches for
selecting the penalty parameter $\lambda$ are tested and compared on the
Levenberg-Marquardt method. Recall that for the varying $\lambda$ case, we
define the penalty parameter sequence as $\lambda_{k}:=0.5\times 1.05^{k}$,
where $k$ is the iteration number. When fixed values of the penalty parameter
are considered, ten different values of $\lambda$ are used for the experiments;
i.e., $\lambda\in\\{10^{6},10^{5},...,10^{-3}\\}$. For the fixed values of
$\lambda$ one can choose the best $\lambda$ for each example to see whether at
least one of the selected values recovers the solution. As in the previous
section, the examples used for the experiments are from the Bilevel
Optimization LIBrary of Test Problems (BOLIB) [33], which contains 124
nonlinear examples. The experiments are run in MATLAB, version R2016b, on
MACI64. Here we present a summary of the results obtained; more details for
each example are reported in the supplementary material [26]. It is important
to mention that the algorithm always converges, unlike its Gauss-Newton
counterpart studied in [14], where the corresponding algorithm diverges for
some values of $\lambda$.
### 4.1 Practical implementation details
For Step 0 of Algorithm 2.6 we set the tolerance to $\epsilon:=10^{-5}$ and
the maximum number of iterations to be $K:=1000$. We also choose
$\alpha_{0}:=\left\lVert\Upsilon^{\lambda}(z^{0})\right\rVert$ (if $\lambda$
is varying, we set $\lambda:=\lambda_{0}$ here), $\gamma_{0}:=1$, $\rho=0.5$,
and $\sigma=10^{-2}$. The selection of $\sigma$ is based on the overall
performance of the algorithm while the other parameters are standard in the
literature.
_Starting point._ The experiments have shown that the algorithm performs much
better if the starting point $(x^{0},y^{0})$ is feasible. As a default setup,
we start with $x^{0}=1_{n}$ and $y^{0}=1_{m}$. If the default starting point
violates at least one constraint and the algorithm diverges, we choose a
feasible starting point; see [26] for such choices. To be more precise, if for
some $i$, $G_{i}(x^{0},y^{0})>0$, or for some $j$ we have
$g_{j}(x^{0},y^{0})>0$, and the algorithm does not converge to a reasonable
solution, we generate a starting point such that $G_{i}(x^{0},y^{0})=0$ or
$g_{j}(x^{0},y^{0})=0$. Subsequently, the Lagrange multipliers are initialised
at $u^{0}=\max\,\left\\{0.01,\;-g(x^{0},y^{0})\right\\}$,
$v^{0}=\max\,\left\\{0.01,\;-G(x^{0},y^{0})\right\\}$, and $w^{0}=u^{0}$.
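The default initialisation described above can be sketched as follows. This is an illustrative sketch only: `G` and `g` are assumed to be callables returning the upper- and lower-level constraint values, and the feasibility-repair step for a bad default start is omitted.

```python
import numpy as np

def initial_point(G, g, n, m):
    """Default initialisation: start at (1_n, 1_m) and set the Lagrange
    multipliers componentwise to max{0.01, -g} and max{0.01, -G}."""
    x0, y0 = np.ones(n), np.ones(m)          # default start (1_n, 1_m)
    u0 = np.maximum(0.01, -g(x0, y0))        # lower-level multipliers
    v0 = np.maximum(0.01, -G(x0, y0))        # upper-level multipliers
    w0 = u0.copy()                           # w^0 = u^0
    return x0, y0, u0, v0, w0
```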
_Smoothing parameter $\mu$._ The smoothing process, i.e., the use of the
parameter $\mu$, is only applied when necessary (Step 2 and Step 3), where the
derivative evaluation for $\Upsilon^{\lambda}_{\mu}$ is required. We tried
both the case where $\mu$ is fixed to a small constant for all iterations and
the case where the smoothing parameter is a sequence $\mu_{k}\downarrow 0$.
Setting the decreasing sequence to $\mu_{k}:=0.001/(1.5^{k})$ and comparing
its behaviour with a fixed small value ($\mu:=10^{-11}$) in the context of
Algorithm 2.6, we observed that both options lead to almost the same results.
Hence, we stick to what is theoretically more suitable, that is, the
decreasing smoothing sequence $\mu_{k}:=0.001/(1.5^{k})$.
_Descent direction check and update._ For the sake of efficiency, we calculate
the direction $d^{k}$ by solving (2.31) with the Gaussian elimination.
Considering the line search in Step 3, if the sufficient decrease condition
$\left\lVert\Upsilon^{\lambda}(z^{k}+\gamma_{k}d^{k})\right\rVert^{2}<\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert^{2}+\sigma\gamma_{k}\nabla\Upsilon^{\lambda}(z^{k})^{T}\Upsilon^{\lambda}(z^{k})d^{k}$
is not satisfied, we redefine $\gamma_{k}:=\gamma_{k}/2$ and check again.
Recall that the Levenberg-Marquardt direction can be interpreted as a
combination of the Gauss-Newton and steepest descent directions. In fact, if
$\alpha_{k}=0$ this direction is a Gauss-Newton direction, and as
$\alpha_{k}\to\infty$ the direction $d^{k}$ from (2.31) tends to a steepest
descent direction. Hence, if the Levenberg-Marquardt direction is not a
descent direction at some iteration, we give more weight to the steepest
descent direction: whenever
$\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert>\left\lVert\Upsilon^{\lambda}(z^{k-1})\right\rVert$,
setting $\alpha_{k+1}:=10000\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert$
has led to an overall good performance of Algorithm 2.6 for the test set used
in this paper.
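A minimal sketch of one step of this scheme, under stated assumptions: the system (2.31) is solved directly (standing in for Gaussian elimination), and `res`/`jac` are hypothetical callables for $\Upsilon^{\lambda}$ and its Jacobian. This is an illustration, not the authors' implementation.

```python
import numpy as np

def lm_step(res, jac, z, alpha, sigma=1e-2, gamma0=1.0):
    """One Levenberg-Marquardt step with stepsize halving.  The direction
    solves (J^T J + alpha I) d = -J^T r; gamma is halved while the
    sufficient-decrease condition is violated."""
    r, J = res(z), jac(z)
    d = np.linalg.solve(J.T @ J + alpha * np.eye(len(z)), -J.T @ r)
    gamma = gamma0
    while np.sum(res(z + gamma * d) ** 2) > np.sum(r ** 2) \
            + sigma * gamma * (J.T @ r) @ d:
        gamma /= 2.0                 # backtrack: give up part of the step
        if gamma < 1e-12:
            break
    return z + gamma * d
```

As $\alpha\to 0$ the step approaches a Gauss-Newton step, while a large $\alpha$ (as in the restart rule above) pushes $d$ toward the steepest descent direction.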
_Stopping Criteria._ The primary stopping criterion for Algorithm 2.6 is
$\left\lVert\Upsilon^{\lambda}_{\mu}(z^{k})\right\rVert<\epsilon$, as
requested in Step 1. However, robust safeguards are needed to deal with ill-
behaviours typically due to the size of the penalty parameter $\lambda$.
Hence, for the practical implementation of the method, we set
$\epsilon=10^{-5}$ and stop if one of the following six conditions is
satisfied:
1. 1.
$\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert<\epsilon$,
2. 2.
$\big{|}\left\lVert\Upsilon^{\lambda}(z^{k-1})\right\rVert-\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert\big{|}<10^{-9}$,
3. 3.
$\big{|}\left\lVert\Upsilon^{\lambda}(z^{k-1})\right\rVert-\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert\big{|}<10^{-4}$
and $iter>200$,
4. 4.
$\left\lVert\Upsilon^{\lambda}(z^{k-1})\right\rVert-\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert<0$
and $\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert<10$ and $iter>175$,
5. 5.
$\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert<10^{-2}$ and $iter>500$,
6. 6.
$\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert>10^{2}$ and $iter>200$.
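The six tests above can be transcribed directly into a single predicate; this is a sketch, with `err_prev` and `err_curr` standing for $\lVert\Upsilon^{\lambda}(z^{k-1})\rVert$ and $\lVert\Upsilon^{\lambda}(z^{k})\rVert$.

```python
def should_stop(err_prev, err_curr, it, eps=1e-5):
    """Return True if any of the six stopping conditions holds at
    iteration `it`."""
    return (err_curr < eps                                       # 1
            or abs(err_prev - err_curr) < 1e-9                   # 2
            or (abs(err_prev - err_curr) < 1e-4 and it > 200)    # 3
            or (err_prev - err_curr < 0
                and err_curr < 10 and it > 175)                  # 4
            or (err_curr < 1e-2 and it > 500)                    # 5
            or (err_curr > 1e2 and it > 200))                    # 6
```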
The additional stopping criteria are important to ensure that the algorithm
does not run for too long; the danger of running it for too long is that
ill-conditioning could occur. Further, we typically observe that the solution
is recovered before the algorithm stops. This is due to the nature of the
overdetermined system: we do not know beforehand the tolerance with which we
can solve for $\left\lVert\Upsilon^{\lambda}(z)\right\rVert$, as
$\Upsilon^{\lambda}$ is an overdetermined system. Hence, it is hard to select
an $\epsilon$ that fits all examples and allows them to be solved with a
suitable tolerance. With the stopping criteria defined above we avoid running
unnecessary iterations while retaining the obtained solution. To keep the
algorithm from running too long and to prevent $\lambda$ from becoming too
large, we impose the additional stopping criteria 3, 5, and 6 above. These
criteria are motivated by the observed behaviour of the algorithm.
For almost all of the examples we observe that after 100-150 iterations we
obtain a value reasonably close to the solution, but we cannot know beforehand
what tolerance of $\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert$ to use
for stopping. Choosing a small $\epsilon$ would not always work due to the
overdetermined nature of the system being solved, while choosing $\epsilon$
too big would lead to worse solutions and possibly fail to recover some of
them. Further, a quick check has shown that the ill-conditioning issue
typically takes place after 500 iterations for the majority of the problems.
For these reasons we stop if the improvement of the $Error$ value from step to
step becomes too small,
$\big{|}\left\lVert\Upsilon^{\lambda}(z^{k-1})\right\rVert-\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert\big{|}<10^{-4}$,
after the algorithm has performed $200$ iterations. Since ill-conditioning is
likely to happen after $500$ iterations, we stop if by that time we have
obtained a reasonably small $Error$,
$\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert<10^{-2}$. Finally, if it
turns out that the system cannot be solved with a good tolerance, so that a
reasonably small value of the $Error$ would not be obtained, we stop if the
$Error$ after $200$ iterations is large,
$\left\lVert\Upsilon^{\lambda}(z^{k})\right\rVert>10^{2}$. In this way the
additional stopping criteria play the role of a safeguard against
ill-conditioning and also prevent the algorithm from running for too long once
a good solution is obtained.
### 4.2 Accuracy of the upper-level objective function
Here, we compare the values of the upper-level objective functions at points
computed by the Levenberg-Marquardt algorithm with fixed $\lambda$ and varying
$\lambda$. For comparison purposes, we focus our attention only on 117 BOLIB
examples [33], as solutions are not known for the other seven problems. To
proceed, let $\bar{F}_{A}$ be the value of the upper-level objective function
at the point $(\bar{x},\bar{y})$ obtained by the algorithm and $\bar{F}_{K}$
the value of this function at the best known solution point reported in the
literature (see the corresponding references in [33]). We consider all fixed
$\lambda\in\\{10^{6},10^{5},...,10^{-3}\\}$ and varying $\lambda$ in one graph
and present the results in Figure 5 below, where we plot the relative error
$(\bar{F}_{A}-\bar{F}_{K})/(1+|\bar{F}_{K}|)$ on the $y$-axis and the number
of examples on the $x$-axis, starting from the $30$th example. We further plot
the results for the best fixed value of $\lambda$. The graph is plotted in
order of increasing error. From Figure 5 we can clearly see that many more
solutions were recovered for small values of fixed $\lambda$ than for large
values. For instance, with an allowable accuracy error of $\leq 20\%$ we
recover solutions for $78.63\%$ of the problems for fixed
$\lambda\in\\{10^{-2},10^{-3}\\}$, while for
$\lambda\in\\{10^{6},10^{5},10^{4},10^{3},10^{2}\\}$ we recover at most
$40.17\%$ of the solutions. Interestingly, the worst performance is observed
for fixed $\lambda=100$. With varying $\lambda$, the algorithm performed in
between the large and small fixed values of $\lambda$, recovering $59.83\%$ of
the solutions with an accuracy error of $\leq 20\%$. It is worth saying that
implementing Algorithm 2.6 with $\lambda:=0.5\times 1.05^{k}$ still recovers
over half of the solutions, which is not too bad. However, fixing $\lambda$ to
be small recovers far more solutions, which shows that varying $\lambda$ is
not the most efficient option for our case.
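The recovery statistics quoted in this subsection amount to counting examples whose relative error stays within a tolerance. A sketch, with `F_A` and `F_K` as hypothetical lists of computed and best-known objective values:

```python
def recovery_rate(F_A, F_K, tol=0.2):
    """Fraction of examples whose upper-level objective is recovered within
    `tol`, using the relative error (F_A - F_K)/(1 + |F_K|) from the text."""
    errors = [(fa - fk) / (1.0 + abs(fk)) for fa, fk in zip(F_A, F_K)]
    return sum(e <= tol for e in errors) / len(errors)
```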
It was further observed that for some examples only $\lambda\geq 10^{3}$
performed well, while for others small values ($\lambda<1$) showed good
performance. If we were able to pick the best fixed $\lambda$ for each
example, we would obtain a negligible (less than $10\%$) error in the
upper-level objective function for $85.47\%$ of the tested problems. With an
accuracy error of $\leq 25\%$ our algorithm recovered solutions for $88.9\%$
of the problems for the best fixed $\lambda$ and for $61.54\%$ with varying
$\lambda$. This means that if one can choose the best fixed $\lambda$ from a
set of different values, fixing $\lambda$ is much more attractive for the
algorithm. However, if one has no way to choose the best value, or a set of
potential values cannot be constructed efficiently for certain problems,
varying $\lambda$ could be the better option. Nevertheless, for the test set
of small problems from BOLIB [33], fixing $\lambda$ to be small performed much
better than varying $\lambda$ as an increasing sequence. Further, if one could
run the algorithm for all fixed $\lambda$ and choose the best one, the
algorithm with fixed $\lambda$ performs extremely well compared to varying
$\lambda$. In other words, the algorithm almost always finds a good solution
for at least one value of fixed $\lambda\in\\{10^{6},10^{5},...,10^{-3}\\}$.
Figure 5: Error of the upper-level objective value for examples with known
solutions
### 4.3 Feasibility check
Considering the structure of the feasible set of problem (1.2), it is critical
to check whether the points computed by our algorithms satisfy the value
function constraint $f(x,y)\leq\varphi(x)$, as it is not explicitly included
in the expression of $\Upsilon^{\lambda}$ (2.16). If the lower-level problem
is convex in $y$ and a solution generated by our algorithms satisfies (2.5)
and (2.8), then it will verify the value function constraint. Conversely, to
guarantee that a point $(x,y)$ with $y\in S(x)$ satisfies (2.5) and (2.8), a
constraint qualification (CQ) is necessary. Note that conditions (2.5) and
(2.8) are incorporated in the stopping criterion of Algorithm 2.6. To check
whether the points obtained are feasible, we first identify the BOLIB examples
where the lower-level problem is convex w.r.t. $y$. As shown in [14], a
significant number of test examples have linear lower-level constraints; for
these examples, lower-level convexity is automatically satisfied. We detect 49
examples for which some of these assumptions are not satisfied, that is,
problems having a non-convex lower-level objective or some nonconvex
lower-level constraints. For these examples, we compare the obtained solutions
with the known ones from the literature. Let $f_{A}$ stand for
$f(\bar{x},\bar{y})$ obtained by one of the tested algorithms and $f_{K}$ for
the known optimal value of the lower-level objective function. In the graph
below we have the lower-level relative error, $(f_{A}-f_{K})/(1+|f_{K}|)$, on
the $y$-axis, where the error is plotted in increasing order. In Figure 6
below we present results for all fixed
$\lambda\in\\{10^{6},10^{5},...,10^{-3}\\}$ as well as varying $\lambda$
defined as $\lambda:=0.5\times 1.05^{k}$.
From Figure 6 we can see that for 20 problems the relative error of the
lower-level objective is negligible ($<5\%$) for all values of fixed $\lambda$
and varying $\lambda$. We have seen that lower-level convexity and a CQ hold
for 74 test examples; we consider solutions for these problems to be feasible
for the lower-level problem. Taking a satisfactory feasibility error to be
$<20\%$ and using the information from the graph, we claim that feasibility is
satisfied for at most $100$ ($80.65\%$) problems for fixed
$\lambda\in\\{10^{6},10^{5},10^{4},10^{3}\\}$, for $101-104$
($81.45-83.87\%$) problems for
$\lambda\in\\{10^{3},10^{2},10^{1},10^{0},10^{-1}\\}$, and for $106$
($85.48\%$) problems for $\lambda\in\\{10^{-2},10^{-3}\\}$. We further observe
that feasibility is satisfied for 101 (81.4%) problems for varying $\lambda$.
If we could choose the best fixed $\lambda$ for each of the examples, we could
also claim that feasibility is satisfied for 108 (87.1%) problems. From Figure
6 we note that slightly better feasibility was observed for smaller values of
fixed $\lambda$ than for the larger ones, and that varying $\lambda$ showed
average performance between these magnitudes in terms of feasibility.
Figure 6: Feasibility error for the lower-level problem in increasing order
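The acceptance test used for the nonconvex examples can be sketched as a one-line predicate on the lower-level relative error; the $20\%$ default matches the satisfactory feasibility error used above, and `f_A`, `f_K` are hypothetical scalars for the computed and known lower-level objective values.

```python
def lower_level_feasible(f_A, f_K, tol=0.2):
    """Feasibility proxy for nonconvex examples: accept the point when the
    lower-level relative error (f_A - f_K)/(1 + |f_K|) is below `tol`."""
    return (f_A - f_K) / (1.0 + abs(f_K)) < tol
```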
### 4.4 Experimental order of convergence
Recall that the experimental order of convergence (EOC) is defined by
$\mbox{EOC}:=\max\left\\{\frac{\log\|\Upsilon^{\lambda}(z^{K-1})\|}{\log\|\Upsilon^{\lambda}(z^{K-2})\|},\,\frac{\log\|\Upsilon^{\lambda}(z^{K})\|}{\log\|\Upsilon^{\lambda}(z^{K-1})\|}\right\\},$
where $K$ is the number of the last iteration [12]. If $K=1$, no EOC will be
calculated (EOC$=\infty$). EOC is important to estimate the local behaviour of
the algorithm and to show whether this practical convergence reflects the
theoretical convergence result stated earlier. Let us consider EOC for fixed
$\lambda\in\\{10^{6},...,10^{-3}\\}$ and for varying $\lambda$
($\lambda=0.5\times 1.05^{k}$) in Figure 7 below.
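The EOC formula translates directly into code; a sketch operating on the sequence of residual norms $\lVert\Upsilon^{\lambda}(z^{k})\rVert$:

```python
import math

def eoc(residuals):
    """Experimental order of convergence from the last three residual
    norms; returns infinity when fewer than three iterations are
    available (the K = 1 case in the text)."""
    if len(residuals) < 3:
        return math.inf
    r_km2, r_km1, r_k = residuals[-3:]
    return max(math.log(r_km1) / math.log(r_km2),
               math.log(r_k) / math.log(r_km1))
```

For instance, a residual sequence squaring at each step, such as $10^{-2},10^{-4},10^{-8}$, gives EOC $=2$ (quadratic convergence).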
It is clear from this picture that for most of the examples our method has
shown linear experimental convergence. This is slightly below the quadratic
convergence established by Theorem 2.7. It is however important to note that
the method always converges to a number, although sometimes the output might
not be the optimal point for the problem. There are a few examples that showed
better convergence for each value of $\lambda$, with the best ones being
$\lambda\in\\{10^{-3},10^{-2},10^{-1},10^{6}\\}$ as seen in the figure above.
These fixed values have shown slightly better EOC performance than varying
$\lambda$. Varying $\lambda$ showed slightly better convergence than fixed
$\lambda\in\\{10^{0},10^{1},10^{2},10^{3},10^{4},10^{5}\\}$. EOC bigger than
$1.2$ has been obtained for less than 5 (4.03%) examples for fixed
$\lambda\in\\{10^{0},10^{1},10^{2},10^{3},10^{4},10^{5}\\}$, while varying
$\lambda$ showed such EOC for 11 (8.87%) examples. Fixed $\lambda=10^{6}$ has
shown almost the same result as varying $\lambda$ with rate of convergence
greater than $1.2$ for 12 (9.67%) examples, while $\lambda=10^{-1}$ has
demonstrated such EOC for 14 (11.29%) examples and
$\lambda\in\\{10^{-2},10^{-3}\\}$ for 17 (13.71%) examples. Finally, in the
graph above, we can see that for all values of $\lambda$ only a few ($\leq
4/124$) examples have worse than linear convergence.
Figure 7: Observed EOC at the last iterations for all examples (in decreasing
order)
### 4.5 Line search stepsize
Let us now look at the line search stepsize, $\gamma_{k}$, at the last step of
the algorithm for each example. Consider all fixed $\lambda$ and varying
$\lambda$ in Figure 8 below. This is important for two reasons.
Firstly, it shows how often line search was used at the last iteration, that
is, how often implementing line search was clearly important. Secondly, since
the main convergence results are for the pure method, it is instructive to
note how often the pure (full) step was made at the last iteration. This can
then be compared with the experimental convergence results in the previous
subsection, namely with Figure 7.
Figure 8: Stepsize made at the last iteration for all examples (in decreasing
order)
In the figure above, whenever the stepsize on the y-axis equals $1$, the full
step was made at the last iteration. For these cases the convergence
results shown in Theorem 2.7 can be considered valid. From the graphs above
we observe that the stepsize at the last iteration was either $\gamma_{k}=1$ or
$\gamma_{k}<0.05$. For varying $\lambda$, the algorithm would typically take a
small step at the last iteration. It seems that the algorithm with varying
$\lambda$ benefits more from the line search technique than the algorithm
with fixed $\lambda$. Possibly, the pure Levenberg-Marquardt method with
varying $\lambda$ would not converge for most of the problems. Interestingly,
for fixed values of $\lambda$ the stepsize at the last iteration was
$\gamma_{k}<0.05$ much more often for the values of $\lambda$ that showed
worse performance in terms of recovering solutions (i.e.
$\lambda\in\\{10^{1},10^{2}\\}$). We also observe that for medium values of
$\lambda$ ($\lambda\in\\{10^{4},10^{3},10^{2},10^{1},10^{0}\\}$) the full
stepsize was made for less than half of the examples. For large values
$\lambda\in\\{10^{6},10^{5}\\}$ the full step was made for $63.71\%$ and
$70.16\%$ of the problems respectively. Furthermore, the small values of
$\lambda$ for which more solutions were recovered would take the full step at
the last iteration for most of the examples. For instance, with
$\lambda=10^{-3}$ and $\lambda=10^{-2}$ the full step was made at the last
iteration for $73.39\%$ of the problems, while for $\lambda=10^{-1}$ the full
step was made for $75.81\%$ of the problems. In terms of the fixed
$\lambda_{best}$, it is interesting to observe that the full step was used for
only 82/124 ($66.13\%$) of the problems, meaning that for a third of the
problems line search was invoked in the last step for the best tested value of
$\lambda$. This also coincides with the results of Figure 7, where, with
smaller values of $\lambda$, the algorithm showed faster-than-linear
convergence for more examples than for large values of $\lambda$. Small steps
in the other instances were likely due to an inefficient search direction of
the method at the last iteration.
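The small stepsizes discussed above typically arise from repeated halving in a backtracking line search. The sketch below is a generic Armijo-type rule, a hypothetical illustration rather than the authors' exact implementation; the names `merit` and `backtracking_stepsize` are ours:

```python
def backtracking_stepsize(merit, z, d, grad_dot_d, gamma0=1.0,
                          rho=0.5, c=1e-4, max_halvings=20):
    """Generic Armijo backtracking: start from the full step gamma0 = 1
    and halve until the sufficient-decrease condition holds.
    grad_dot_d is the directional derivative of the merit function at z."""
    gamma = gamma0
    f0 = merit(z)
    for _ in range(max_halvings):
        trial = [zi + gamma * di for zi, di in zip(z, d)]
        if merit(trial) <= f0 + c * gamma * grad_dot_d:
            return gamma
        gamma *= rho
    return gamma

# Minimise f(z) = z1^2 + z2^2 from z = (1, 1) along d = -grad = (-2, -2).
f = lambda z: z[0] ** 2 + z[1] ** 2
g = backtracking_stepsize(f, [1.0, 1.0], [-2.0, -2.0], grad_dot_d=-8.0)
print(g)  # 0.5: the full step overshoots, one halving suffices
```

With $\rho=0.5$, the reachable stepsizes are $1, 0.5, 0.25, \dots$, so stepsizes below $0.05$ (as in Figure 8) correspond to five or more halvings.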
## 5 Final comments
We introduce a smoothing Levenberg-Marquardt method to solve LLVF-based
optimality conditions for optimistic bilevel programs. Since these optimality
conditions are parameterized by a certain $\lambda$, its selection is
carefully studied in light of the BOLIB library test set. Surprisingly, small
values of this partial exact penalty parameter showed good behaviour for our
method, as they generally performed very well, although the classical
literature on exact penalization usually suggests large values. However,
relatively medium values of $\lambda$ did not perform as well. Furthermore, as
both the varying and fixed scenarios were considered for $\lambda$, it was
observed that both approaches showed a linear experimental order of
convergence for most of the examples. The average CPU time for all fixed
$\lambda$ is $0.243$ seconds, while it is $0.193$ seconds for
$\lambda_{best}$ and $0.525$ seconds for varying $\lambda$. The algorithm with
varying $\lambda$ thus turns out to be more than twice as slow as with a fixed
$\lambda$. However, running it for all fixed values of $\lambda$ can take
considerably more time than a single run with varying $\lambda$.
## References
* [1] G.B. Allende and G. Still, Solving bi-level programs with the KKT-approach, Mathematical Programming 131:37-48 (2012)
* [2] R. Behling, A. Fischer, G. Haeser, A. Ramos and K. Schönefeld, On the constrained error bound condition and the projected Levenberg-Marquardt method, Optimization 66(8):1397-1411 (2016)
* [3] D.P. Bertsekas, Constrained optimization and Lagrange multiplier methods, Academic Press (1982)
* [4] J.V. Burke, An exact penalization viewpoint of constrained optimization, SIAM Journal on Control and Optimization 29(4):968-998 (1991)
* [5] S. Dempe and J. Dutta, Is bilevel programming a special case of mathematical programming with equilibrium constraints? Mathematical Programming 131:37-48 (2012)
* [6] S. Dempe, J. Dutta, and B.S. Mordukhovich, New necessary optimality conditions in optimistic bilevel programming, Optimization 56 (5-6):577-604 (2007)
* [7] S. Dempe and A.B. Zemkoho (eds.), Bilevel optimization: advances and next challenges, Springer (2020)
* [8] S. Dempe and A.B. Zemkoho, The bilevel programming problem: reformulations, constraint qualification and optimality conditions, Mathematical Programming 138:447-473 (2013)
* [9] S. Dempe and A.B. Zemkoho, The generalized Mangasarian-Fromowitz constraint qualification and optimality conditions for bilevel programs, Journal of Optimization Theory and Applications 148(1):46-68 (2011)
* [10] J.Y. Fan and Y.X. Yuan, On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption, Computing 74:23-39 (2005)
* [11] A. Fischer, A special Newton-type optimization method, Optimization 24(3):269-284 (1992)
* [12] A. Fischer, A.B. Zemkoho, and S. Zhou, Semismooth Newton-type method for bilevel optimization: global convergence and extensive numerical experiments, arXiv:1912.07079 (2019)
* [13] R. Fletcher, An ideal penalty function for constrained optimization, IMA Journal of Applied Mathematics 15:319-342 (1975)
* [14] J. Fliege, A. Tin, and A.B. Zemkoho, Gauss-Newton-type methods for bilevel optimization, Computational Optimization and Applications, doi.org/10.1007/s10589-020-00254-3 (2021)
* [15] J. Herskovits, M.T. Filho, and A. Leontiev, An interior point technique for solving bilevel programming problems, Optimization and Engineering 14(3):381-394 (2013)
* [16] C. Kanzow, Some noninterior continuation methods for linear complementarity problems, SIAM Journal on Matrix Analysis and Applications 17(4):851-868 (1996)
* [17] L. Lampariello and S. Sagratella, Numerically tractable optimistic bilevel problems, Computational Optimization and Applications 76(2):277-303 (2020)
* [18] G.-H. Lin, M. Xu, and J.J. Ye, On solving simple bilevel programs with a nonconvex lower level program, Mathematical Programming 144(1-2):277-305 (2014)
* [19] P. Mehlitz, L.I. Minchenko, and A.B. Zemkoho, A note on partial calmness for bilevel optimization problems with linear structures at the lower level, Optimization letters, doi.org/10.1007/s11590-020-01636-6 (2020)
* [20] A. Mitsos, P. Lemonidis, and P.I. Barton, Global solution of bilevel programs with a nonconvex inner program, Journal of Global Optimization 42(4):475-513 (2008)
* [21] J. Nocedal and S.J. Wright, Numerical optimization, Springer (1999)
* [22] R. Paulavicius, J. Gao, P. Kleniati, and C.S. Adjiman, BASBL: Branch-and-sandwich bilevel solver. Implementation and computational study with the BASBLib test sets, Computers & Chemical Engineering 132 (2020)
* [23] G. Di Pillo and L. Grippo, Exact penalty functions in constrained optimization, SIAM Journal on Control and Optimization 27(6):1333-1360 (1989)
* [24] S. Pineda, H. Bylling, and J.M. Morales, Efficiently solving linear bilevel programming problems using off-the-shelf optimization software, Optimization and Engineering 19(1):187-211 (2018)
* [25] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery, Numerical recipes in C: the art of scientific computing (2nd Ed), Cambridge University Press (1992)
* [26] A. Tin and A.B. Zemkoho, Supplementary material for “Levenberg-Marquardt method for bilevel optimization”, School of Mathematical Sciences, University of Southampton, UK (2020)
* [27] W. Wiesemann, A. Tsoukalas, P. Kleniati, and B. Rustem, Pessimistic bilevel optimization, SIAM Journal on Optimization 23(1):353-380 (2013)
* [28] M. Xu and J.J. Ye, A smoothing augmented Lagrangian method for solving simple bilevel programs, Computational Optimization and Applications 59(1-2):353-377 (2014)
* [29] N. Yamashita and M. Fukushima, On the rate of convergence of the Levenberg-Marquardt method, Computing 15:237-249 (2001)
* [30] J.J. Ye and D.L. Zhu, Optimality conditions for bilevel programming problems, Optimization 33:9-27 (1995)
* [31] A.B. Zemkoho and S. Zhou, Theoretical and numerical comparison of the Karush–Kuhn–Tucker and value function reformulations in bilevel optimization, Computational Optimization and Applications doi.org/10.1007/s10589-020-00250-7 (2021)
* [32] A.B. Zemkoho, Bilevel programming: reformulations, regularity and stationarity, PhD thesis, Department of Mathematics and Computer Science, TU Bergakademie Freiberg, Freiberg, Germany (2012)
* [33] S. Zhou, A.B. Zemkoho, and A. Tin, BOLIB: Bilevel Optimization LIBrary of test problems, In: S. Dempe and A.B. Zemkoho (eds.), Bilevel optimization: advances and next challenges, Springer (2020)
# Debiasing Pre-trained Contextualised Embeddings
Masahiro Kaneko
Tokyo Metropolitan University
<EMAIL_ADDRESS>

Danushka Bollegala
University of Liverpool, Amazon
<EMAIL_ADDRESS>

Danushka Bollegala holds concurrent appointments as a Professor at the
University of Liverpool and as an Amazon Scholar. This paper describes work
performed at the University of Liverpool and is not associated with Amazon.
###### Abstract
In comparison to the numerous debiasing methods proposed for the static non-
contextualised word embeddings, the discriminative biases in contextualised
embeddings have received relatively little attention. We propose a fine-tuning
method that can be applied at token- or sentence-levels to debias pre-trained
contextualised embeddings. Our proposed method can be applied to any pre-
trained contextualised embedding model without retraining those
models. Using gender bias as an illustrative example, we then conduct a
systematic study using several state-of-the-art (SoTA) contextualised
representations on multiple benchmark datasets to evaluate the level of biases
encoded in different contextualised embeddings before and after debiasing
using the proposed method. We find that applying token-level debiasing for all
tokens and across all layers of a contextualised embedding model produces the
best performance. Interestingly, we observe that there is a trade-off between
creating an accurate vs. unbiased contextualised embedding model, and
different contextualised embedding models respond differently to this trade-
off.
## § 1 Introduction
Contextualised word embeddings have significantly improved performance in
numerous natural language processing (NLP) applications Devlin et al. (2019);
Liu et al. (2019); Clark et al. (2020) and have become the de facto
standard for input text representations. Compared to static word embeddings
Pennington et al. (2014); Mikolov et al. (2013) that represent a word by a
single vector in all contexts it occurs, contextualised embeddings use dynamic
context dependent vectors for representing a word in a specific context.
Unfortunately, however, it has been shown that, similar to their non-contextual
counterparts, contextualised text embeddings also encode various types of
unfair biases Zhao et al. (2019); Bordia and Bowman (2019); May et al. (2019);
Tan and Celis (2019); Bommasani et al. (2020); Kurita et al. (2019). This is a
worrying situation because such biases can easily propagate to the downstream
NLP applications that use contextualised text embeddings.
Different types of unfair and discriminative biases such as gender, racial and
religious biases have been observed in static word embeddings Bolukbasi et al.
(2016); Zhao et al. (2018a); Rudinger et al. (2018); Zhao et al. (2018b);
Elazar and Goldberg (2018); Kaneko and Bollegala (2019). As discussed later in
§ 2 different methods have been proposed for debiasing static word embeddings
such as projection-based methods Kaneko and Bollegala (2019); Zhao et al.
(2018b); Bolukbasi et al. (2016); Ravfogel et al. (2020) and adversarial
methods Xie et al. (2017); Gonen and Goldberg (2019). In contrast, despite
multiple studies reporting contextualised embeddings to be unfairly
biased, methods for debiasing contextualised embeddings remain relatively
underexplored Dev et al. (2020); Nadeem et al. (2020); Nangia et al. (2020).
Compared to static word embeddings, debiasing contextualised embeddings is
significantly more challenging due to several reasons as we discuss next.
First, compared to static word embedding models where the semantic
representation of a word is limited to a single vector, contextualised
embedding models have a significantly large number of parameters related in
complex ways. For example, BERT-large model Devlin et al. (2019) contains 24
layers, 16 attention heads and 340M parameters. Therefore, it is not obvious
which parameters are responsible for the unfair biases related to a particular
word. Because of this reason, projection-based methods, popularly used for
debiasing pre-trained static word embeddings, cannot be directly applied to
debias pre-trained contextualised word embeddings.
Second, in the case of contextualised embeddings, the biases associated with a
particular word’s representation are a function of both the target word itself
and the context in which it occurs. Therefore, the same word can show unfair
biases in some contexts and not in the others. It is important to consider the
words that co-occur with the target word in different contexts when debiasing
a contextualised embedding model.
Third, pre-training large-scale contextualised embeddings from scratch is time
consuming and requires specialised hardware such as GPU/TPU clusters. On the
other hand, fine-tuning a pre-trained contextualised embedding model for a
particular task (possibly using labelled data for the target task) is
relatively less expensive. Consequently, the standard practice in the NLP
community has been to share pre-trained contextualised embedding models (e.g.,
via https://huggingface.co/transformers/pretrained_models.html) and fine-tune
them as needed. Therefore, it is
desirable that a debiasing method proposed for contextualised embedding models
can be applied as a fine-tuning method. In this view, counterfactual data
augmentation methods Zmigrod et al. (2019); Hall Maudslay et al. (2019); Zhao
et al. (2019) that swap gender pronouns in the training corpus for creating a
gender balanced version of the training data are less attractive when
debiasing contextualised embeddings because we must retrain those models on
the balanced corpora, which is more expensive compared to fine-tuning.
Using gender bias as a running example, we address the above-mentioned
challenges by proposing a debiasing method that fine-tunes pre-trained
contextualised word embeddings (code and debiased embeddings:
https://github.com/kanekomasahiro/context-debias). Our proposed method retains
the semantic information learnt by the contextualised embedding model with
respect to gender-related words, while simultaneously removing any
stereotypical biases in the pre-trained model. In particular, our proposed
method is agnostic to the internal architecture of the contextualised
embedding method and we apply it to debias different pre-trained embeddings
such as BERT, RoBERTa Liu et al. (2019), ALBERT Lan et al. (2020), DistilBERT
Sanh et al. (2019) and ELECTRA Clark et al. (2020). Moreover, our proposed
method can be applied at token-level or at sentence-level, enabling us to
debias at different granularities and on different layers in the pre-trained
contextualised embedding model.
Following prior work, we compare the proposed debiasing method in two
sentence-level tasks: Sentence Encoder Association Test (SEAT; May et al.,
2019) and Multi-genre co-reference-based Natural Language Inference (MNLI; Dev
et al., 2020). Experimental results show that the proposed method not only
debiases all contextualised word embedding models compared, but also preserves
useful semantic information for solving downstream tasks such as sentiment
classification Socher et al. (2013), paraphrase detection Dolan and Brockett
(2005), semantic textual similarity measurement Cer et al. (2017), natural
language inference Dagan et al. (2005); Bar-Haim et al. (2006) and solving
Winograd schema Levesque et al. (2012). We consider gender bias as a running
example throughout this paper and evaluate the proposed method with respect to
its ability to overcome gender bias in contextualised word embeddings, and
defer extensions to other types of biases to future work.
## § 2 Related Work
Prior work on debiasing word embeddings can be broadly categorised into two
groups depending on whether they consider static or contextualised word
embeddings. Although we focus on contextualised embeddings in this paper, we
first briefly describe prior work on debiasing static embeddings for
completeness of the discussion.
#### Bias in Static Word Embeddings:
Bolukbasi et al. (2016) proposed a post-processing approach that projects
gender-neutral words into a subspace, which is orthogonal to the gender
direction defined by a list of gender-definitional words. However, their
method ignores gender-definitional words during the subsequent debiasing
process, and focuses only on words that are _not_ predicted as gender-
definitional by a classifier. Therefore, if the classifier erroneously
predicts a stereotypical word as gender-definitional, it would not get
debiased. Zhao et al. (2018b) modified the original GloVe Pennington et al.
(2014) objective to learn gender-neutral word embeddings (GN-GloVe) from a
given corpus. Unlike the above-mentioned methods, Kaneko and Bollegala (2019)
proposed GP-GloVe, a post-processing method to preserve gender-related
information with autoencoder Kaneko and Bollegala (2020), while removing
discriminatory biases from stereotypical cases.
Adversarial learning Xie et al. (2017); Elazar and Goldberg (2018); Li et al.
(2018) for debiasing first encode the inputs and then two classifiers are
jointly trained – one predicting the target task (for which we must ensure
high prediction accuracy) and the other for protected attributes (that must
not be easily predictable). Elazar and Goldberg (2018) showed that although it
is possible to obtain chance-level development-set accuracy for the protected
attributes during training, a post-hoc classifier trained on the encoded
inputs can still manage to reach substantially high accuracies for the
protected attributes. They conclude that adversarial learning alone does not
guarantee invariant representations for the protected attributes. Ravfogel et
al. (2020) found that iteratively projecting word embeddings onto the null
space of the gender direction further improves the debiasing performance.
#### Benchmarks for biases in Static Embeddings:
Word Embedding Association Test (WEAT; Caliskan et al., 2017) quantifies
various biases (e.g. gender, race and age) using semantic similarities between
word embeddings. Word Association Test (WAT) measures gender bias over a large
set of words Du et al. (2019) by calculating the gender information vector for
each word in a word association graph created in the Small World of Words
project (SWOWEN; Deyne et al., 2019) by propagating masculine and feminine
words via a random walk Zhou et al. (2003). SemBias dataset Zhao et al.
(2018b) contains three types of word-pairs: (a) Definition, a gender-
definition word pair (e.g. hero – heroine), (b) Stereotype, a gender-
stereotype word pair (e.g., manager – secretary) and (c) None, two other word-
pairs with similar meanings unrelated to gender (e.g., jazz – blues, pencil –
pen). It uses the cosine similarity between the gender directional vector,
$(\vv{he}-\vv{she})$, and the offset vector $(\boldsymbol{a}-\boldsymbol{b})$
for each word pair, $(a,b)$, in each set to measure gender bias. WinoBias Zhao
et al. (2018a) uses the ability to predict gender pronouns with equal
probabilities for gender neutral nouns such as occupations as a test for the
gender bias in embeddings.
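To make the WEAT-style association concrete, the following is a minimal sketch of the effect size between target sets $X, Y$ and attribute sets $A, B$ via cosine similarity, following Caliskan et al. (2017); the toy 2-d vectors and function names are ours, not from any of the cited benchmarks:

```python
import numpy as np

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: how differently target word sets X and Y
    associate with attribute word sets A and B. Arguments are arrays
    whose rows are word vectors."""
    def cos(u, V):
        return (V @ u) / (np.linalg.norm(V, axis=1) * np.linalg.norm(u))
    def s(w):  # association of one word with A versus B
        return cos(w, A).mean() - cos(w, B).mean()
    sX = np.array([s(x) for x in X])
    sY = np.array([s(y) for y in Y])
    return (sX.mean() - sY.mean()) / np.concatenate([sX, sY]).std(ddof=1)

# Toy embeddings: X aligns with A, Y aligns with B -> positive effect size.
A = np.array([[1.0, 0.0], [0.9, 0.1]])
B = np.array([[0.0, 1.0], [0.1, 0.9]])
X = np.array([[1.0, 0.1], [0.8, 0.0]])
Y = np.array([[0.1, 1.0], [0.0, 0.8]])
print(weat_effect_size(X, Y, A, B) > 0)  # True
```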
#### Bias in Contextualised Word Embeddings:
May et al. (2019) extended WEAT using templates to create a sentence-level
benchmark for evaluating bias called SEAT. In addition to the attributes
proposed in WEAT, they proposed two additional bias types: _angry black woman_
and _double binds_ (when a woman performs a role typically done by a
man, she is seen as arrogant). They show that, compared to static
embeddings, contextualised embeddings such as BERT, GPT and ELMo are less
biased. However, similar to WEAT, SEAT also only has positive predictive
ability and cannot detect the absence of a bias. Bommasani et al. (2020)
evaluated the bias in contextualised embeddings by first distilling static
embeddings from contextualised embeddings and then using WEAT tests for
different types of biases such as gender (male, female), racial (White,
Hispanic, Asian) and religion (Christianity, Islam). They found that
aggregating the contextualised embedding of a particular word in different
contexts via averaging to be the best method for creating a static embedding
from a contextualised embedding.
Zhao et al. (2019) showed that contextualised ELMo embeddings also learn
gender biases present in the training corpus. Moreover, these biases propagate
to a downstream coreference resolution task. They showed that data
augmentation by swapping gender helps more than neutralisation by a
projection. They obtain the embedding of two input sentences with reversed
gender from ELMo, and obtain the debiased embedding by averaging them. This approach can
only be applied to feature-based embeddings, so it cannot be applied to fine-
tuning based embeddings like BERT. We directly debias the contextual
embeddings. Additionally, data augmentation requires re-training of the
embeddings, which is often costly compared to fine-tuning. Kurita et al.
(2019) created masked templates such as “__ is a nurse” and used BERT to
predict the masked gender pronouns. They used the log-odds between male and
female pronoun predictions as an evaluation measure and showed that BERT to be
biased according to it. Karve et al. (2019) learnt conceptor matrices using
class definitions in the WEAT and used the negated conceptors to debias ELMo
and BERT. Although their method was effective for ELMo, the results on BERT
were mixed. This method can only be applied to context-independent vectors,
and it requires the creation of static embeddings from BERT and ELMo as a pre-
processing step for debiasing the context-dependent vectors. Therefore, we do
not compare against this method in the present study, where we evaluate on
context-dependent vectors.
Dev et al. (2020) used natural language inference (NLI) as a bias evaluation
task, where the goal is to ascertain if one sentence (i.e. premise) entails or
contradicts another (i.e. hypothesis), or if neither conclusion holds (i.e.
neutral). The premise-hypothesis pairs are constructed to elicit various types
of discriminative biases. They showed that orthogonal projection to gender
direction Dev and Phillips (2019) can be used to debias contextualised
embeddings as well. However, their method can be applied only to the
noncontextualised layers (ELMo’s Layer 1 and BERT’s subtoken layer). In
contrast, our proposed method can be applied to _all_ layers in a
contextualised embedding and outperforms their method on the same NLI task.
Moreover, our debiasing approach does not require task-dependent data.
## § 3 Debiasing Contextualised Embeddings
Figure 1: Types of hidden states in $E$ considered in the proposed method. The
blue boxes in the middle correspond to the hidden states of the target token.
We propose a method for debiasing pre-trained contextualised word embeddings
in a fine-tuning setting that simultaneously (a) preserves the semantic
information in the pre-trained contextualised word embedding model, and (b)
removes discriminative gender-related biases via an orthogonal projection in
the intermediate (hidden) layers by operating at token- or sentence-levels.
Fine-tuning allows debiasing to be carried out without requiring large amounts
of training data or computational resources. Our debiasing method is
independent of model architectures or their pre-training methods, and can be
adapted to a wide range of contextualised embeddings as shown in § 4.3.
Let us define two types of words: _attribute_ words ($\mathcal{V}_{a}$) and
_target_ words ($\mathcal{V}_{t}$). For example, in the case of gender bias,
attribute words consist of multiple word sets such as feminine (e.g. she,
woman, her) and masculine (e.g. he, man, him) words, whereas target words can
be occupations (e.g. doctor, nurse, professor), which we expect to be gender
neutral. We then extract sentences that contain an attribute or a target word.
Sentences contain more than one attribute (or target) words are excluded to
avoid ambiguities. Let us denote the set of sentences extracted for an
attribute or a target word $w$ by $\Omega(w)$. Moreover, let
$\mathcal{A}=\bigcup_{w\in\mathcal{V}_{a}}\Omega(w)$ and
$\mathcal{T}=\bigcup_{w\in\mathcal{V}_{t}}\Omega(w)$ be the sets of sentences
containing respectively all of the attribute and target words. We require that
the debiased contextualised word embeddings preserve semantic information
w.r.t. the sentences in $\mathcal{A}$, and remove any discriminative biases
w.r.t. the sentences in $\mathcal{T}$.
Let us consider a contextualised word embedding model $E$, with pre-trained
model parameters $\boldsymbol{\theta}_{e}$. For an input sentence $x$, let us
denote the embedding of token $w$ in the $i$-th layer of $E$ by
$E_{i}(w,x;\boldsymbol{\theta}_{e})$. Moreover, let the total number of layers
in $E$ be $N$. In our experiments, we consider different types of encoder
models as $E$. To formalise the requirement that the debiased word
embedding $E_{i}(t,x;\boldsymbol{\theta}_{e})$ of a target word
$t\in\mathcal{V}_{t}$ must not contain any information related to a protected
attribute $a$, we consider the inner-product between the noncontextualised
embedding $\boldsymbol{v}_{i}(a)$ of $a$ and
$E_{i}(t,x;\boldsymbol{\theta}_{e})$ as a loss $L_{i}$ given by (1).
$\displaystyle
L_{i}\\!=\\!\\!\\!\sum_{t\in\mathcal{V}_{t}}\sum_{x\in\Omega(t)}\sum_{a\in\mathcal{V}_{a}}\left({\boldsymbol{v}_{i}(a)}{}^{\top}E_{i}(t,x;\boldsymbol{\theta}_{e})\right)^{2}$
(1)
Here, $\boldsymbol{v}_{i}(a)$ is computed by averaging the contextualised
embedding of $a$ in the $i$-th layer of $E$ over all sentences in $\Omega(a)$
following Bommasani et al. (2020) and is given by (2).
$\displaystyle\boldsymbol{v}_{i}(a)=\frac{1}{|\Omega(a)|}\sum_{x\in\Omega(a)}E_{i}(a,x;\boldsymbol{\theta}_{e})$
(2)
Here, $|\Omega(a)|$ denotes the total number of sentences in $\Omega(a)$. If a
word is split into multiple sub-tokens, we compute the contextualised
embedding of the word by averaging the contextualised embeddings of its
constituent sub-tokens. Minimising the loss $L_{i}$ defined by (1) with
respect to $\boldsymbol{\theta}_{e}$ forces the hidden states of $E$ to be
orthogonal to the protected attributes such as gender.
Although removing discriminative biases in $E$ is our main objective, we must
simultaneously preserve as much as possible of the useful information encoded
in the pre-trained model for the downstream tasks. We model this as a
regulariser where we measure the squared $\ell_{2}$ distance between the
contextualised word embedding of a word $w$ in the $i$-th layer in the
original model, parametrised by $\boldsymbol{\theta}_{\textrm{pre}}$, and the
debiased model as in (3).
$\displaystyle L_{\textrm{reg}}\\!=\\!\\!\sum_{x\in\mathcal{A}}\sum_{w\in
x}\sum_{i=1}^{N}\left|\left|E_{i}(w,x;\boldsymbol{\theta}_{e})-E_{i}(w,x;\boldsymbol{\theta}_{\textrm{pre}})\right|\right|^{2}$
(3)
The overall training objective is then given by (4) as the linearly weighted
sum of the two losses defined by (1) and (3).
$\displaystyle L=\alpha L_{i}+\beta L_{\textrm{reg}}$ (4)
Here, coefficients $\alpha,\beta\in[0,1]$ satisfy $\alpha+\beta=1$.
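The losses in Eqs. (1)-(4) can be sketched numerically as follows. This is a minimal numpy illustration, not the authors' implementation; the array shapes, random values, and variable names are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Eq. (2): v_i(a) is the noncontextualised attribute embedding, the
# average of the attribute word's contextualised embeddings E_i(a, x)
# over all sentences x in Omega(a).
attr_contexts = rng.normal(size=(5, dim))
v_a = attr_contexts.mean(axis=0)

# Eq. (1): summed squared inner products between v_i(a) and the
# target-word embeddings E_i(t, x; theta_e); minimising this forces
# the hidden states to be orthogonal to the protected attribute.
target_embs = rng.normal(size=(3, dim))
L_bias = np.sum((target_embs @ v_a) ** 2)

# Eq. (3): squared l2 distance to the frozen pre-trained embeddings
# E_i(w, x; theta_pre) preserves semantic information.
pretrained_embs = target_embs + 0.01 * rng.normal(size=(3, dim))
L_reg = np.sum((target_embs - pretrained_embs) ** 2)

# Eq. (4): linearly weighted total loss with alpha + beta = 1.
alpha, beta = 0.2, 0.8
L = alpha * L_bias + beta * L_reg
print(L >= 0.0)  # True
```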
As shown in Figure 1, a contextualised word embedding model typically contains
multiple layers. It is not obvious which hidden states of $E$ are best for
calculating $L_{i}$ for the purpose of debiasing. Therefore, we compute
$L_{i}$ for different layers in a particular contextualised word embedding
model in our experiments. Specifically, we consider three settings: debiasing
only the first layer, last layer or all layers. Moreover, $L_{i}$ can be
computed only for the target words in a sentence $x$ as in (1), or can be
summed up for _all_ words in $w\in x$ (i.e.
$\sum_{t\in\mathcal{V}_{t}}\sum_{x\in\Omega(t)}\sum_{w\in
x}\left(\boldsymbol{v}_{i}(a){}^{\top}E_{i}(w,x;\boldsymbol{\theta}_{e})\right)^{2}$).
We refer to the former as token-level debiasing and the latter as
sentence-level debiasing. Collectively this gives us six different settings
for the proposed
debiasing method, which we evaluate experimentally in § 4.3.
## § 4 Experiments
### § 4.1 Datasets
| Model | Layer | Unit | SEAT-6 | SEAT-7 | SEAT-8 | #† | SST-2 | MRPC | STS-B | RTE | WNLI | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BERT | all | token | 0.68† | -0.09 | 0.60† | 2 | 92.1 | 85.6 | 83.1 | 60.0 | 53.5 | 74.9 |
| | | sent | 1.13† | 0.34 | 0.12 | 1 | 91.9 | 82.6 | 80.0 | 54.2 | 40.8 | 69.9 |
| | last | token | 1.02† | -1.18 | 0.47† | 2 | 92.2 | 86.9 | 82.3 | 58.1 | 56.3 | 75.2 |
| | | sent | 1.51† | -0.60 | 1.52† | 2 | 92.3 | 84.6 | 82.9 | 62.1 | 56.3 | 75.6 |
| | first | token | 0.88† | 0.33 | 0.86† | 2 | 92.4 | 87.1 | 82.6 | 62.1 | 50.7 | 75.0 |
| | | sent | 0.94† | 0.32 | 0.97† | 2 | 91.9 | 86.1 | 83.0 | 63.9 | 46.5 | 74.3 |
| | original | – | 1.04† | 0.18 | 0.81† | 2 | 92.8 | 86.7 | 82.4 | 60.6 | 56.3 | 75.8 |
| | random | – | 1.16† | -0.08 | -0.29 | 1 | 92.2 | 87.4 | 81.9 | 63.2 | 54.9 | 75.9 |
| RoBERTa | all | token | 0.51† | 0.15 | 0.02 | 1 | 78.1 | 81.6 | 73.7 | 53.8 | 56.3 | 68.7 |
| | | sent | 1.27† | 0.86† | 1.14† | 3 | 80.3 | 82.8 | 74.4 | 50.9 | 56.3 | 68.9 |
| | last | token | 1.17† | -0.60 | 0.45† | 2 | 79.9 | 83.7 | 74.1 | 52.3 | 56.3 | 69.3 |
| | | sent | 0.98† | 0.75† | 0.87† | 3 | 69.5 | 81.5 | 72.9 | 52.7 | 56.3 | 66.6 |
| | first | token | 1.15† | 0.26 | 0.54† | 2 | 77.8 | 81.1 | 74.5 | 54.5 | 56.3 | 68.8 |
| | | sent | 1.21† | 0.32 | 0.50† | 2 | 79.0 | 82.5 | 74.5 | 51.6 | 56.3 | 68.8 |
| | original | – | 1.21† | 1.34† | 1.01† | 3 | 93.8 | 91.2 | 89.8 | 71.8 | 56.3 | 80.6 |
| | random | – | 1.39† | 0.40† | 0.39† | 3 | 73.4 | 82.5 | 73.9 | 53.4 | 49.3 | 66.5 |
| ALBERT | all | token | 0.16 | 0.02 | 0.18 | 0 | 78.1 | 80.5 | 67.5 | 54.9 | 56.3 | 67.5 |
| | | sent | 0.18 | -0.05 | -0.77 | 0 | 77.3 | 81.7 | 69.9 | 46.9 | 56.3 | 66.4 |
| | last | token | 0.83† | -1.15 | -0.76 | 1 | 77.8 | 81.2 | 68.9 | 47.3 | 56.3 | 66.3 |
| | | sent | 0.69† | -0.06 | -0.10 | 1 | 78.3 | 80.1 | 71.3 | 55.2 | 56.3 | 68.2 |
| | first | token | 0.09 | 0.28 | 0.97† | 1 | 77.9 | 81.6 | 70.0 | 52.0 | 56.3 | 67.6 |
| | | sent | 0.25 | 0.60† | 1.18† | 2 | 75.9 | 81.3 | 70.1 | 53.1 | 54.9 | 67.1 |
| | original | – | 0.30 | 0.48† | 1.12† | 2 | 92.2 | 89.9 | 87.7 | 70.0 | 56.3 | 79.2 |
| | random | – | 0.41† | 0.34 | 1.08† | 2 | 78.2 | 79.9 | 71.8 | 47.3 | 56.3 | 66.7 |
| DistilBERT | all | token | 0.70† | -0.83 | -0.66 | 1 | 90.4 | 87.8 | 80.8 | 56.0 | 42.3 | 71.5 |
| | | sent | 1.34† | 1.01† | 0.97† | 3 | 91.4 | 83.3 | 78.8 | 57.4 | 53.5 | 72.9 |
| | last | token | 1.11† | -0.03 | 1.38† | 2 | 90.9 | 88.5 | 80.3 | 55.6 | 38.0 | 70.7 |
| | | sent | 1.57† | -1.34 | 0.27 | 1 | 90.8 | 90.2 | 80.9 | 58.5 | 43.7 | 72.8 |
| | first | token | 1.19† | 0.59† | 0.52† | 3 | 90.8 | 90.8 | 80.4 | 55.2 | 38.0 | 71.0 |
| | | sent | 1.19† | 0.60† | 0.55† | 3 | 91.1 | 90.9 | 80.1 | 55.2 | 36.6 | 70.8 |
| | original | – | 1.26† | 0.31 | 0.74† | 2 | 90.8 | 89.3 | 80.6 | 56.0 | 38.0 | 70.9 |
| | random | – | 1.35† | 0.66† | -0.25 | 2 | 91.1 | 89.1 | 80.5 | 56.3 | 40.8 | 71.6 |
| ELECTRA | all | token | 0.33 | 0.10 | 0.15 | 0 | 90.3 | 87.7 | 79.4 | 52.7 | 57.7 | 73.6 |
| | | sent | 0.42† | 0.21 | 0.33 | 1 | 90.7 | 87.1 | 79.5 | 52.3 | 54.9 | 72.9 |
| | last | token | 0.55† | 0.07 | 0.24 | 1 | 90.8 | 87.3 | 79.8 | 51.6 | 46.5 | 71.2 |
| | | sent | 0.50† | 0.42† | 0.32† | 3 | 90.5 | 87.3 | 80.1 | 54.5 | 40.8 | 70.6 |
| | first | token | 0.31 | 0.10 | 0.33 | 0 | 90.4 | 86.9 | 79.7 | 53.1 | 56.3 | 73.4 |
| | | sent | 0.29 | 0.22 | 0.30 | 0 | 90.4 | 87.6 | 79.7 | 53.4 | 56.3 | 73.5 |
| | original | – | 0.16 | 0.46† | 0.04 | 1 | 90.5 | 87.9 | 80.4 | 54.5 | 46.5 | 72.0 |
| | random | – | 0.43† | 0.49† | -0.22 | 2 | 90.4 | 87.7 | 78.5 | 51.3 | 54.9 | 72.6 |

Table 1: Gender bias of contextualised embeddings on SEAT. † denotes
significant bias effects at $\alpha<0.01$.
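The SEAT scores in Table 1 are WEAT-style effect sizes (Caliskan et al., 2017) applied to sentence representations. A minimal pure-Python sketch of the effect-size computation (the function names and toy vectors are ours, for illustration only):

```python
import math

def cos(u, v):
    """Cosine similarity between two vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def assoc(w, A, B):
    """Mean similarity of w with attribute set A minus with attribute set B."""
    return sum(cos(w, a) for a in A) / len(A) - sum(cos(w, b) for b in B) / len(B)

def effect_size(X, Y, A, B):
    """WEAT/SEAT effect size between target sets X, Y and attribute sets A, B:
    (mean_x s(x,A,B) - mean_y s(y,A,B)) / std_{w in X∪Y} s(w,A,B)."""
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    all_s = sx + sy
    mean = sum(all_s) / len(all_s)
    std = math.sqrt(sum((s - mean) ** 2 for s in all_s) / (len(all_s) - 1))
    return (sum(sx) / len(sx) - sum(sy) / len(sy)) / std
```

A score near zero indicates no measured bias; the † marks in Table 1 come from the accompanying permutation test, which this sketch omits.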
We used SEAT tests 6, 7 and 8 (May et al., 2019) to evaluate gender bias. We use NLI
as a downstream evaluation task and use the Multi-Genre Natural Language
Inference data (MNLI; Williams et al., 2018) for training and development
following Dev et al. (2020). In NLI, the task is to classify a given
hypothesis and premise sentence-pair as entailing, contradicting, or neutral.
We programmatically generated the evaluation set following Dev et al. (2020)
by filling occupation words and gender words in template sentences. The
templates take the form “The subject verb a/an object.” and the created
sentence-pairs are assumed to be neutral.
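Such neutral sentence-pairs can be generated programmatically by filling the templates with gender and occupation words. The word lists below are hypothetical placeholders (the paper uses the lists of Zhao et al. (2018b) and Kaneko and Bollegala (2019)):

```python
# Placeholder word lists for illustration only.
gender_words = ["woman", "man"]
occupation_words = ["nurse", "engineer"]
verbs_objects = [("bought", "car"), ("ate", "apple")]

def make_pairs():
    """Fill 'The subject verb a/an object.' templates; each (premise,
    hypothesis) pair differs only in the subject and is labelled neutral."""
    pairs = []
    for verb, obj in verbs_objects:
        article = "an" if obj[0] in "aeiou" else "a"
        for occ in occupation_words:
            hypothesis = f"The {occ} {verb} {article} {obj}."
            for g in gender_words:
                premise = f"The {g} {verb} {article} {obj}."
                pairs.append((premise, hypothesis))  # gold label: neutral
    return pairs
```

A biased model will tend to predict entailment or contradiction for some gender–occupation combinations instead of the expected neutral label.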
We used the word lists created by Zhao et al. (2018b) as the attribute lists
of feminine and masculine words. As the stereotype list of target words, we
used the list created by Kaneko and Bollegala (2019). From the
News-commentary-v15 corpus (http://www.statmt.org/wmt20/translation-task.html)
we extracted 11,023, 42,489 and 34,148 sentences respectively for the
feminine, masculine and stereotype words. We excluded sentences with more than
128 tokens from the training data, and randomly sampled 1,000 sentences of
each type as development data.
We used the GLUE benchmark (Wang et al., 2018) to evaluate whether the useful
information in the pre-trained embeddings is retained after debiasing. To
evaluate the debiased models with minimal effects from task-specific
fine-tuning, we used the following small-scale training data: Stanford Sentiment
Treebank (SST-2; Socher et al., 2013), Microsoft Research Paraphrase Corpus
(MRPC; Dolan and Brockett, 2005), Semantic Textual Similarity Benchmark
(STS-B; Cer et al., 2017), Recognising Textual Entailment (RTE; Dagan et al.,
2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al.,
2009), and Winograd Schema Challenge (WNLI; Levesque et al., 2012). We
evaluate the performance of the contextualised embeddings on the corresponding
development data.
### § 4.2 Hyperparameters
We used BERT (bert-base-uncased; Devlin et al., 2019), RoBERTa (roberta-base;
Liu et al., 2019), ALBERT (albert-base-v2; Lan et al., 2020), DistilBERT
(distilbert-base-uncased; Sanh et al., 2019) and ELECTRA
(electra-small-discriminator; Clark et al., 2020) in our experiments (we used
https://github.com/huggingface/transformers). DistilBERT has 6 layers and the
others 12. We used the development data in SEAT-6 for hyperparameter tuning.
The hyperparameters of the models, except the learning rate and batch size,
are set to their default values as in run_glue.py. Using greedy search, the
learning rate was set to 5e-5 and the batch size to 32 during debiasing.
Optimal values of $\alpha=0.2$ and $\beta=0.8$ were found by a greedy search
in $[0,1]$ with $0.1$ increments. For the GLUE and MNLI experiments, we set
the learning rate to 2e-5 and the batch size to 16. Experiments were conducted
on a GeForce GTX 1080 Ti GPU.
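The greedy search mentioned above amounts to coordinate-wise optimisation: each hyperparameter is tuned over the grid in turn while the others stay fixed at their current best values. A sketch (the scoring function is hypothetical):

```python
def greedy_search(score, grid, init):
    """Coordinate-wise greedy search over a shared value grid.

    score : callable mapping a {name: value} dict to a scalar (higher better)
    grid  : candidate values tried for each hyperparameter
    init  : initial {name: value} assignment
    """
    best = dict(init)
    for name in best:
        best_val, best_score = best[name], score(best)
        for v in grid:
            cand = dict(best, **{name: v})
            s = score(cand)
            if s > best_score:
                best_val, best_score = v, s
        best[name] = best_val  # fix this coordinate before tuning the next
    return best
```

Unlike a full grid search, this evaluates only `len(grid)` candidates per hyperparameter, at the cost of possibly missing interactions between them.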
### § 4.3 Debiasing vs. Preserving Information
Table 1 shows the results on SEAT and GLUE, where original denotes the
pre-trained contextualised models prior to debiasing. We see that all original
models other than ELECTRA contain significant levels of gender bias. Overall,
the all-token method, which conducts token-level debiasing across all layers,
performs best. Prior work has shown that biases are learnt at each layer
(Bommasani et al., 2020), so it is important to debias all layers. Moreover,
we see that token-level debiasing is more effective than sentence-level
debiasing. This is because in token-level debiasing the loss is computed only
on the target word, providing a more direct debiasing update for it than
sentence-level debiasing, which sums the losses over all tokens in a sentence.
To test the importance of carefully selecting the target words considering the
types of biases that we want to remove from the embeddings, we implement a
random baseline where we randomly select target and attribute words from
$\mathcal{V}_{a}\cup\mathcal{V}_{t}$ and perform all-token debiasing. We see
that random debiases BERT to some extent but is not effective on other models.
This result shows that the proposed debiasing method is _not_ merely a
regularisation technique that imposes constraints on any arbitrary set of
words, but it is essential to carefully select the target words used for
debiasing.
The results on GLUE show that for BERT, DistilBERT and ELECTRA, the debiased
embeddings report performance comparable to the original embeddings in most
settings. This confirms that the proposed debiasing method preserves
sufficient semantic information from the original embeddings for learning
accurate prediction models in the downstream NLP tasks. (Although on WNLI
all-token debiasing improves performance for DistilBERT and ELECTRA compared
to the respective original models, this is insignificant as WNLI contains only
146 test instances.) However, the performance of RoBERTa and ALBERT decreases
significantly compared to their original versions after debiasing. We suspect
that these models are more sensitive to fine-tuning and hence lose their
pre-trained information during the debiasing process. We defer the development
of techniques to address this issue to future research.
### § 4.4 Measuring Bias with Inference
| Model | MNLI-m | MNLI-mm | NN | FN | T:0.7 |
|---|---|---|---|---|---|
| Dev et al. (2020) | 80.8 | 81.1 | 85.5 | 97.3 | 88.3 |
| all-token | 80.7 | 81.2 | 87.8 | 96.8 | 89.3 |
| original | 80.8 | 81.0 | 82.3 | 96.4 | 83.2 |
| random | 80.5 | 81.1 | 85.8 | 96.4 | 87.0 |

Table 2: Debiasing results for BERT on MNLI.
| Model | Layer | SEAT-6 | SEAT-7 | SEAT-8 |
|---|---|---|---|---|
| BERT | all | 0.44 | 0.25 | 0.46 |
| | last | 0.56 | 0.12 | 0.47 |
| | first | 0.52 | 0.22 | 0.49 |
| RoBERTa | all | 0.59 | 0.23 | 0.61 |
| | last | 0.73 | 0.24 | 0.65 |
| | first | 0.69 | 0.28 | 0.59 |
| ALBERT | all | 0.46 | 0.48 | 0.24 |
| | last | 1.15 | 0.26 | 0.60 |
| | first | 0.54 | 0.89 | 0.95 |
| DistilBERT | all | 0.66 | -0.16 | 0.37 |
| | last | 0.88 | 0.19 | 0.35 |
| | first | 0.90 | 0.40 | 0.52 |
| ELECTRA | all | 0.21 | 0.02 | 0.18 |
| | last | 0.34 | 0.20 | 0.21 |
| | first | 0.28 | 0.13 | 0.34 |

Table 3: Scores averaged over all layers in each embedding debiased at
token-level, measured on the SEAT tests.
Figure 2: Scatter plots of gender information of hidden states for original
and debiased stereotype words. Panels: (a) BERT, (b) RoBERTa, (c) ALBERT,
(d) DistilBERT, (e) ELECTRA.
Following Dev et al. (2020), we use the multi-genre co-reference-based natural
language inference (MNLI) dataset for evaluating gender bias. This dataset
contains sentence triples where a premise must be neutral in entailment w.r.t.
two hypotheses. If the predictions made by a classifier that uses word
embeddings as features deviate from neutrality, the embedding is considered
biased.
Given a set containing $M$ test instances, let the entailment predictor’s
probabilities for the $m$-th instance for the entail, neutral and
contradiction labels be respectively $e_{m}$, $n_{m}$ and $c_{m}$. Then, they
proposed the following measures to quantify the bias: (1) Net Neutral (NN):
${\rm NN}=\frac{1}{M}\sum_{m=1}^{M}n_{m}$; (2) Fraction Neutral (FN): ${\rm
FN}=\frac{1}{M}\sum_{m=1}^{M}\textbf{1}[{\rm neutral}={\rm
max}(e_{m},n_{m},c_{m})]$; and (3) Threshold $\tau$ (T:$\tau$): ${\rm
T{:}}\tau=\frac{1}{M}\sum_{m=1}^{M}\textbf{1}[n_{m}\geq\tau]$, where we used
$\tau=0.7$ following Dev et al. (2020). For an ideal (bias-free) embedding,
all three measures would be 1.
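Given per-instance label probabilities from the entailment predictor, the three measures can be computed as follows (a sketch; the function name and input layout are ours):

```python
def bias_metrics(probs, tau=0.7):
    """Compute NN, FN and T:tau from entailment predictions.

    probs : list of (entail, neutral, contradiction) probability triples,
            one per test instance.
    """
    M = len(probs)
    nn = sum(n for _, n, _ in probs) / M                      # Net Neutral
    fn = sum(1 for e, n, c in probs if n == max(e, n, c)) / M # Fraction Neutral
    t_tau = sum(1 for _, n, _ in probs if n >= tau) / M       # Threshold tau
    return nn, fn, t_tau
```

All three values approach 1 as the predictor becomes consistently neutral on the bias-probing pairs.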
In Table 2, we compare our proposed method against the _noncontextualised
debiasing_ method proposed by Dev et al. (2020) where they debias Layer 1 of
BERT-large model using an orthogonal projection to the gender direction during
training and evaluation. In addition to the above-mentioned measures, we also
report the entailment accuracy on the matched (in-domain) and mismatched
(cross-domain) sets, denoted respectively by MNLI-m and MNLI-mm in Table 2, to
evaluate the semantic information preserved in the embeddings after debiasing.
We see that the proposed method outperforms noncontextualised debiasing (Dev
et al., 2020) in NN and T:0.7, and its performance on the MNLI task is
comparable
method can not only debias well but can also preserve the pre-trained
information. Moreover, it is consistent with the results reported in Table 1
and shows that debiasing all layers is more effective than only the first
layer as done by Dev et al. (2020).
### § 4.5 The Importance of Debiasing All Layers
In Table 1, we investigated the bias at the final layer, but it is known that
contextualised representations are learnt at each layer (Bommasani et al.,
2020). Therefore, to investigate whether debiasing each layer removes the
biases from the entire contextualised embedding, we evaluate the debiased
embeddings at each layer on the SEAT 6, 7 and 8 datasets and report the
averaged metrics for the all-token, first-token and last-token methods in
Table 3. We see that, on average, the first-token and last-token methods
retain more bias than all-token. Therefore, we conclude that it is not enough
to debias only the first and last layers, even in DistilBERT, which has a
small number of layers. These results show that biases in the entire
contextualised embedding cannot be reliably removed by debiasing only selected
layers, underscoring the importance of consistently debiasing all layers.
### § 4.6 Visualizing Debiasing Results
To further illustrate the effect of debiasing using the proposed all-token
method, we visualise the similarity scores of a stereotypical word with
feminine and masculine dimensions as follows. First, for each target word $t$,
its hidden state, $E_{i}(t,x)$ in the $i$-th layer of the model $E$ in a
sentence $x$ is computed. Next, we average those hidden states across all
sentences in the dataset that contain $t$ to obtain
$\hat{E}_{i}(t)=\frac{1}{|\mathcal{T}|}\sum_{x\in\mathcal{T}}E_{i}(t,x)$.
Likewise, we compute $\hat{E}_{i}(f)$ and $\hat{E}_{i}(m)$ respectively for
each feminine ($f$) and masculine ($m$) word. Next, we compute $s_{i}^{f}$,
the cosine similarity between each $\hat{E}_{i}(f)$ and the feminine vector
$\boldsymbol{v}_{i}(f)$, and $s_{i}^{m}$, the cosine similarity between each
$\hat{E}_{i}(m)$ and the masculine vector $\boldsymbol{v}_{i}(m)$.
$\boldsymbol{s}_{i}^{f}$ and $\boldsymbol{s}_{i}^{m}$, respectively, are
averaged over all layers in a contextualised embedding model to obtain
$\boldsymbol{s}_{\rm Avg}^{f}$ and $\boldsymbol{s}_{\rm Avg}^{m}$, which
represent how much gender information each gender word contains on average.
We then compute the cosine similarity, $\boldsymbol{s}_{i}^{t,f}$, between
each stereotype word’s averaged embedding, $\hat{E}_{i}(t)$ and the feminine
vector $\boldsymbol{v}_{i}(f)$. Similarly, we compute the cosine similarity
$\boldsymbol{s}_{i}^{t,m}$ between each stereotype word’s averaged embedding
$\hat{E}_{i}(t)$ and the masculine vector $\boldsymbol{v}_{i}(m)$. We then
average $\boldsymbol{s}^{t,f}$ and $\boldsymbol{s}^{t,m}$ over the layers in
$E$ respectively, to compute $\boldsymbol{s}_{\rm Avg}^{t,f}$ and
$\boldsymbol{s}_{\rm Avg}^{t,m}$, which represent how much gender information
each stereotype word contains on average. Finally, we visualise the normalised
female and male gender scores given respectively by $\boldsymbol{s}_{\rm
Avg}^{t,f}/\boldsymbol{s}_{\rm Avg}^{f}$ and $\boldsymbol{s}_{\rm
Avg}^{t,m}/\boldsymbol{s}_{\rm Avg}^{m}$. For example, a zero
$\boldsymbol{s}_{\rm Avg}^{t,f}/\boldsymbol{s}_{\rm Avg}^{f}$ value indicates
that $t$ does not contain female gender related information, whereas a value
of one indicates that it contains all information about the female gender.
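The normalised score computation described above can be sketched as follows for a single stereotype word and one gender direction (pure Python; the data layout and function name are ours for illustration):

```python
import math

def cos(u, v):
    """Cosine similarity between two vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def mean(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def normalised_score(target_states, gender_states, gender_dirs):
    """Normalised gender score of a stereotype word, averaged over layers.

    target_states[i] : hidden states of the stereotype word at layer i
    gender_states[i] : hidden states of the gender word at layer i
    gender_dirs[i]   : gender direction vector v_i at layer i
    """
    # cosine of the averaged embedding with the gender direction, per layer
    s_t = [cos(mean(ts), v) for ts, v in zip(target_states, gender_dirs)]
    s_g = [cos(mean(gs), v) for gs, v in zip(gender_states, gender_dirs)]
    # average over layers, then normalise by the gender-word score
    return (sum(s_t) / len(s_t)) / (sum(s_g) / len(s_g))
```

A score near zero means the stereotype word carries little gender information relative to an explicitly gendered word; a score near one means it carries comparable gender information.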
Figure 2 shows each stereotype word with its normalised female and male gender
scores on the $x$ and $y$ axes respectively. For each word, a yellow circle
denotes its original embedding, and a blue triangle denotes the result of
debiasing using the all-token method.
We see that with the original embeddings, stereotypical words are distributed
close to one, indicating that they are highly gender-specific. The debiased
BERT, DistilBERT and ELECTRA show word distributions similar in shape to their
original embeddings, with an overall movement towards zero. In contrast, for
RoBERTa the debiased embeddings remain spread from zero to around one, while
for ALBERT the debiased embeddings are tightly clustered around zero, unlike
the original distribution. This shows that RoBERTa and ALBERT do not retain
the structure of the original distribution after debiasing: ALBERT
over-debiases the pre-trained embeddings of stereotypical words, whereas
RoBERTa under-debiases them. This trend was already confirmed in the
downstream evaluation tasks reported in Table 1.
## § 5 Conclusion
We proposed a debiasing method for pre-trained contextualised word embeddings,
operating at token- or sentence-levels. Our experimental results showed that
the proposed method effectively debiases discriminative gender-related biases,
while preserving useful semantic information in the pre-trained embeddings.
The results also showed that the proposed method debiases more effectively
than previous studies while retaining performance on downstream tasks.
## References
* Bar-Haim et al. (2006) Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, and Danilo Giampiccolo. 2006. The second pascal recognising textual entailment challenge. _Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment_.
* Bentivogli et al. (2009) Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth pascal recognizing textual entailment challenge. In _Proceedings of the Text Analysis Conference (TAC’09)_.
* Bolukbasi et al. (2016) Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In _NIPS_.
* Bommasani et al. (2020) Rishi Bommasani, Kelly Davis, and Claire Cardie. 2020. Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4758–4781, Online. Association for Computational Linguistics.
* Bordia and Bowman (2019) Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word-level language models. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop_ , pages 7–15, Minneapolis, Minnesota. Association for Computational Linguistics.
* Caliskan et al. (2017) Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. _Science_ , 356:183–186.
* Cer et al. (2017) Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In _Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)_ , pages 1–14, Vancouver, Canada. Association for Computational Linguistics.
* Clark et al. (2020) K. Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. _ArXiv_ , abs/2003.10555.
* Dagan et al. (2005) Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In _Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment_ , MLCW’05, pages 177–190, Berlin, Heidelberg. Springer-Verlag.
* Dev et al. (2020) Sunipa Dev, Tao Li, Jeff Phillips, and Vivek Srikumar. 2020. On Measuring and Mitigating Biased Inferences of Word Embeddings. In _AAAI_.
* Dev and Phillips (2019) Sunipa Dev and Jeff M. Phillips. 2019. Attenuating bias in word vectors. In _The 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan_ , volume 89 of _Proceedings of Machine Learning Research_ , pages 879–887. PMLR.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Deyne et al. (2019) Simon De Deyne, Danielle J. Navarro, Amy Perfors, Marc Brysbaert, and Gert Storms. 2019. The “small world of words” english word association norms for over 12,000 cue words. _Behavior Research Methods_ , 51:987–1006.
* Dolan and Brockett (2005) William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In _Proceedings of the Third International Workshop on Paraphrasing (IWP2005)_.
* Du et al. (2019) Yupei Du, Yuanbin Wu, and Man Lan. 2019. Exploring human gender stereotypes with word association test. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 6133–6143, Hong Kong, China. Association for Computational Linguistics.
* Elazar and Goldberg (2018) Yanai Elazar and Yoav Goldberg. 2018. Adversarial Removal of Demographic Attributes from Text Data. In _Proc. of EMNLP_.
* Giampiccolo et al. (2007) Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. _The Third PASCAL Recognizing Textual Entailment Challenge_ , pages 1–9. Association for Computational Linguistics, USA.
* Gonen and Goldberg (2019) Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 609–614, Minneapolis, Minnesota. Association for Computational Linguistics.
* Hall Maudslay et al. (2019) Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It’s all in the name: Mitigating gender bias with name-based counterfactual data substitution. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 5266–5274, Hong Kong, China. Association for Computational Linguistics.
* Kaneko and Bollegala (2019) Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre-trained word embeddings. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 1641–1650, Florence, Italy. Association for Computational Linguistics.
* Kaneko and Bollegala (2020) Masahiro Kaneko and Danushka Bollegala. 2020. Autoencoding improves pre-trained word embeddings. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 1699–1713, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Karve et al. (2019) Saket Karve, Lyle Ungar, and João Sedoc. 2019. Conceptor debiasing of word representations evaluated on WEAT. In _Proceedings of the First Workshop on Gender Bias in Natural Language Processing_ , pages 40–48, Florence, Italy. Association for Computational Linguistics.
* Kurita et al. (2019) Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In _Proceedings of the First Workshop on Gender Bias in Natural Language Processing_ , pages 166–172, Florence, Italy. Association for Computational Linguistics.
* Lan et al. (2020) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. _ArXiv_ , abs/1909.11942.
* Levesque et al. (2012) Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In _Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning_ , KR’12, pages 552–561. AAAI Press.
* Li et al. (2018) Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 25–30. Association for Computational Linguistics.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_.
* May et al. (2019) Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 622–628, Minneapolis, Minnesota. Association for Computational Linguistics.
* Mikolov et al. (2013) Tomas Mikolov, Kai Chen, and Jeffrey Dean. 2013. Efficient estimation of word representation in vector space. In _ICLR_.
* Nadeem et al. (2020) Moin Nadeem, Anna Bethke, and Siva Reddy. 2020. StereoSet: Measuring stereotypical bias in pretrained language models.
* Nangia et al. (2020) Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models. In _Proc. of EMNLP_.
* Pennington et al. (2014) Jeffery Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: global vectors for word representation. In _EMNLP_ , pages 1532–1543.
* Ravfogel et al. (2020) Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020\. Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. In _Proc. of ACL_.
* Rudinger et al. (2018) Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 8–14. Association for Computational Linguistics.
* Sanh et al. (2019) Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. _ArXiv_ , abs/1910.01108.
* Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In _Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing_ , pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
* Tan and Celis (2019) Yi Chern Tan and L. Elisa Celis. 2019. Assessing social and intersectional biases in contextualized word representations. In _Advances in Neural Information Processing Systems 32_ , pages 13230–13241. Curran Associates, Inc.
* Wang et al. (2018) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In _Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pages 353–355, Brussels, Belgium. Association for Computational Linguistics.
* Williams et al. (2018) Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
* Xie et al. (2017) Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. 2017. Controllable invariance through adversarial feature learning. In _Proc. of NIPS_.
* Zhao et al. (2019) Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 629–634, Minneapolis, Minnesota. Association for Computational Linguistics.
* Zhao et al. (2018a) Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 15–20. Association for Computational Linguistics.
* Zhao et al. (2018b) Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. 2018b. Learning Gender-Neutral Word Embeddings. In _Proc. of EMNLP_ , pages 4847–4853.
* Zhou et al. (2003) Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Schölkopf. 2003. Learning with local and global consistency. In _NIPS_.
* Zmigrod et al. (2019) Ran Zmigrod, Sebastian J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology. In _Proc. of ACL_.
# Dictionary-based Debiasing of Pre-trained Word Embeddings
Masahiro Kaneko
Tokyo Metropolitan University
<EMAIL_ADDRESS>

Danushka Bollegala
University of Liverpool, Amazon
<EMAIL_ADDRESS>

Danushka Bollegala holds concurrent appointments as a Professor at the
University of Liverpool and as an Amazon Scholar. This paper describes work
performed at the University of Liverpool and is not associated with Amazon.
###### Abstract
Word embeddings trained on large corpora have been shown to encode high levels
of unfair discriminatory gender, racial, religious and ethnic biases. In
contrast, human-written dictionaries describe the meanings of words in a
concise, objective and unbiased manner. We propose a method for debiasing
pre-trained word embeddings using dictionaries, without requiring access to
the original training resources or any knowledge regarding the word embedding
algorithms used. Unlike prior work, our proposed method does not require the
types of biases to be pre-defined in the form of word lists, and learns the
constraints that must be satisfied by unbiased word embeddings automatically
from dictionary definitions of the words. Specifically, we learn an encoder to
generate a debiased version of an input word embedding such that it (a)
retains the semantics of the pre-trained word embeddings, (b) agrees with the
unbiased definition of the word according to the dictionary, and (c) remains
orthogonal to the vector space spanned by any biased basis vectors in the pre-
trained word embedding space. Experimental results on standard benchmark
datasets show that the proposed method can accurately remove unfair biases
encoded in pre-trained word embeddings, while preserving useful semantics.
## § 1 Introduction
Although pre-trained word embeddings are useful due to their low
dimensionality, memory and compute efficiency, they have been shown to encode not
only the semantics of words but also unfair discriminatory biases such as
gender, racial or religious biases Bolukbasi et al. (2016); Zhao et al.
(2018a); Rudinger et al. (2018); Zhao et al. (2018b); Elazar and Goldberg
(2018); Kaneko and Bollegala (2019). On the other hand, human-written
dictionaries act as an impartial, objective and unbiased source of word
meaning. Although methods that learn word embeddings by purely using
dictionaries have been proposed Tissier et al. (2017), they have coverage and
data sparseness related issues because pre-compiled dictionaries do not
capture the meanings of neologisms or provide numerous contexts as in a
corpus. Consequently, prior work has shown that word embeddings learnt from
large text corpora outperform those created from dictionaries in downstream
NLP tasks Alsuhaibani et al. (2019); Bollegala et al. (2016).
We must overcome several challenges when using dictionaries to debias pre-
trained word embeddings. First, not all words in the embeddings will appear in
the given dictionary. Dictionaries often have limited coverage and will not
cover neologisms, orthographic variants of words etc. that are likely to
appear in large corpora. A lexicalised debiasing method would generalise
poorly to the words not in the dictionary. Second, it is not known a priori
what biases are hidden inside a set of pre-trained word embedding vectors.
Depending on the source of documents used for training the embeddings,
different types of biases will be learnt and amplified by different word
embedding learning algorithms to different degrees Zhao et al. (2017).
Prior work on debiasing required the biases to be pre-defined Kaneko and
Bollegala (2019). For example, Hard-Debias (HD; Bolukbasi et al., 2016) and
Gender Neutral Glove (GN-GloVe; Zhao et al., 2018b) require lists of _male_
and _female_ pronouns for defining the _gender_ direction. However, gender
bias is only one of the many biases that exist in pre-trained word embeddings.
It is inconvenient to prepare lists of words covering all different types of
biases we must remove from pre-trained word embeddings. Moreover, such pre-
compiled word lists are likely to be incomplete and inadequately cover some
biases. Indeed, Gonen and Goldberg (2019) showed empirical evidence that such
debiasing methods do not remove all discriminative biases from word
embeddings. Unfair biases have adversely affected several NLP tasks such as
machine translation Vanmassenhove et al. (2018) and language generation Sheng
et al. (2019). Racial biases have also been shown to affect criminal
prosecutions Manzini et al. (2019) and career adverts Lambrecht and Tucker
(2016). These findings show the difficulty of defining different biases using
pre-compiled lists of words, which is a requirement in previously proposed
debiasing methods for static word embeddings.
We propose a method that uses a dictionary as a source of bias-free
definitions of words for debiasing pre-trained word embeddings (code and
debiased embeddings: https://github.com/kanekomasahiro/dict-debias).
Specifically, we learn an encoder that filters out biases from the input
embeddings. The debiased embeddings are required to simultaneously satisfy
three criteria: (a) must preserve all non-discriminatory information in the
pre-trained embeddings (_semantic preservation_), (b) must be similar to the
dictionary definition of the words (_dictionary agreement_), and (c) must be
orthogonal to the subspace spanned by the basis vectors in the pre-trained
word embedding space that corresponds to discriminatory biases (_bias
orthogonality_). We implement the semantic preservation and dictionary
agreement using two decoders, whereas the bias orthogonality is enforced by a
parameter-free projection. The debiasing encoder and the decoders are learnt
end-to-end by a joint optimisation method. Our proposed method is agnostic to
the details of the algorithms used to learn the input word embeddings.
Moreover, unlike counterfactual data augmentation methods for debiasing
Zmigrod et al. (2019); Hall Maudslay et al. (2019), we do _not_ require access
to the original training resources used for learning the input word
embeddings.
Our proposed method overcomes the above-described challenges as follows.
First, instead of learning a lexicalised debiasing model, we operate on the
word embedding space when learning the encoder. Therefore, we can use the
words that are in the intersection of the vocabularies of the pre-trained word
embeddings and the dictionary to learn the encoder, enabling us to generalise
to the words not in the dictionary. Second, we do _not_ require pre-compiled
word lists specifying the biases. The dictionary acts as a clean, unbiased
source of word meaning that can be considered as _positive_ examples of
debiased meanings. In contrast to the existing debiasing methods that require
us to pre-define _what to remove_, the proposed method can be seen as using
the dictionary as a guideline for _what to retain_ during debiasing.
We evaluate the proposed method using four standard benchmark datasets for
evaluating the biases in word embeddings: Word Embedding Association Test
(WEAT; Caliskan et al., 2017), Word Association Test (WAT; Du et al., 2019),
SemBias Zhao et al. (2018b) and WinoBias Zhao et al. (2018a). Our experimental
results show that the proposed debiasing method accurately removes unfair
biases from three widely used pre-trained embeddings: Word2Vec Mikolov et al.
(2013b), GloVe Pennington et al. (2014) and fastText Bojanowski et al. (2017).
Moreover, our evaluations on semantic similarity and word analogy benchmarks
show that the proposed debiasing method preserves useful semantic information
in word embeddings, while removing unfair biases.
## § 2 Related Work
Dictionaries have been popularly used for learning word embeddings Budanitsky
and Hirst (2006, 2001); Jiang and Conrath (1997). Methods that use both
dictionaries (or lexicons) and corpora to jointly learn word embeddings
Tissier et al. (2017); Alsuhaibani et al. (2019); Bollegala et al. (2016) or
post-process Glavaš and Vulić (2018); Faruqui et al. (2015) have also been
proposed. However, learning embeddings from dictionaries alone results in
coverage and data sparseness issues Bollegala et al. (2016) and does not
guarantee bias-free embeddings Lauscher and Glavas (2019). To the best of our
knowledge, we are the first to use dictionaries for debiasing pre-trained word
embeddings.
Bolukbasi et al. (2016) proposed a post-processing approach that projects
gender-neutral words into a subspace, which is orthogonal to the gender
dimension defined by a list of gender-definitional words. They refer to words
associated with gender (e.g., _she_, _actor_) as gender-definitional words,
and the remainder as gender-neutral. They proposed a _hard-debiasing_ method
where the gender direction is computed as the vector difference between the
embeddings of the corresponding gender-definitional words, and a _soft-
debiasing_ method, which balances the objective of preserving the inner-
products between the original word embeddings, while projecting the word
embeddings into a subspace orthogonal to the gender definitional words. Both
hard and soft debiasing methods ignore gender-definitional words during the
subsequent debiasing process, and focus only on words that a classifier does
_not_ predict as gender-definitional. Therefore, if the classifier erroneously
predicts a stereotypical word as a gender-definitional word, it would not get
debiased.
Zhao et al. (2018b) modified the GloVe Pennington et al. (2014) objective to
learn gender-neutral word embeddings (GN-GloVe) from a given corpus. They
maximise the squared $\ell_{2}$ distance between gender-related sub-vectors,
while simultaneously minimising the GloVe objective. Unlike the above-
mentioned methods, Kaneko and Bollegala (2019) proposed a post-processing
method to preserve gender-related information with autoencoder Kaneko and
Bollegala (2020), while removing discriminatory biases from stereotypical
cases (GP-GloVe). However, all prior debiasing methods require us to pre-
define the biases in the form of explicit word lists containing gender and
stereotypical word associations. In contrast, we use dictionaries as a source
of bias-free semantic definitions of words and do not require pre-defining the
biases to be removed. Although we focus on static word embeddings in this
paper, unfair biases have been found in contextualised word embeddings as well
Zhao et al. (2019); Vig (2019); Bordia and Bowman (2019); May et al. (2019).
Adversarial learning methods Xie et al. (2017); Elazar and Goldberg (2018); Li
et al. (2018) for debiasing first encode the inputs and then two classifiers
are jointly trained – one predicting the target task (for which we must ensure
high prediction accuracy) and the other protected attributes (that must not be
easily predictable). However, Elazar and Goldberg (2018) showed that although
it is possible to obtain chance-level development-set accuracy for the
protected attributes during training, a post-hoc classifier trained on the
encoded inputs can still manage to reach substantially high accuracies for the
protected attributes. They conclude that adversarial learning alone does not
guarantee invariant representations for the protected attributes. Ravfogel et
al. (2020) found that iteratively projecting word embeddings onto the null
space of the gender direction further improves the debiasing performance.
To evaluate biases, Caliskan et al. (2017) proposed the Word Embedding
Association Test (WEAT) inspired by the Implicit Association Test (IAT;
Greenwald et al., 1998). Ethayarajh et al. (2019) showed that WEAT
systematically overestimates biases and proposed a correction. The ability to
correctly answer gender-related word analogies Zhao et al. (2018b) and resolve
gender-related coreferences Zhao et al. (2018a); Rudinger et al. (2018) have
been used as extrinsic tasks for evaluating the bias in word embeddings. We
describe these evaluation benchmarks later in § 4.3.
## § 3 Dictionary-based Debiasing
Let us denote the $n$-dimensional pre-trained word embedding of a word $w$ by
$\boldsymbol{w}\in\mathbb{R}^{n}$ trained on some resource $\mathcal{C}$ such
as a text corpus. Moreover, let us assume that we are given a dictionary
$\mathcal{D}$ containing the definition, $s(w)$ of $w$. If the pre-trained
embeddings distinguish among the different senses of $w$, then we can use the
gloss for the corresponding sense of $w$ in the dictionary as $s(w)$. However,
the majority of word embedding learning methods do not produce sense-specific
word embeddings. In this case, we can either use all glosses for $w$ in
$\mathcal{D}$ by concatenating them, or select the gloss for the dominant
(most frequent) sense of $w$. (Prior work on debiasing static word embeddings
does not use contextual information, which is required for determining word
senses; for comparability reasons, neither do we.) Without any loss of
generality, in the remainder of this paper, we will use $s(w)$ to collectively
denote a gloss selected by any one of the above-mentioned criteria, with or
without considering the word senses (in § 5.3, we evaluate the effect of using
all vs. the dominant gloss).
Next, we define the objective functions optimised by the proposed method for
the purpose of learning unbiased word embeddings. Given $\boldsymbol{w}$, we
model the debiasing process as the task of learning an encoder,
$E(\boldsymbol{w};\boldsymbol{\theta}_{e})$ that returns an $m(\leq
n)$-dimensional debiased version of $\boldsymbol{w}$. In the case where we
would like to preserve the dimensionality of the input embeddings, we can set
$m=n$, or $m<n$ to further compress the debiased embeddings.
Because the pre-trained embeddings encode rich semantic information from
large text corpora, often far exceeding the meanings covered in the
dictionary, we must preserve this semantic information as much as possible
during the debiasing process. We refer to this constraint as _semantic
preservation_. Semantic preservation is likely to lead to good performance in
downstream NLP applications that use pre-trained word embeddings. For this
purpose, we decode the encoded version of $\boldsymbol{w}$ using a decoder,
$D_{c}$, parametrised by $\boldsymbol{\theta}_{c}$ and define $J_{c}$ to be
the reconstruction loss given by (1).
$\displaystyle
J_{c}(w)=\left|\left|\boldsymbol{w}-D_{c}(E(\boldsymbol{w};\boldsymbol{\theta}_{e});\boldsymbol{\theta}_{c})\right|\right|_{2}^{2}$
(1)
Following our assumption that the dictionary definition, $s(w)$, of $w$ is a
concise and unbiased description of the meaning of $w$, we would like to
ensure that the encoded version of $\boldsymbol{w}$ is similar to $s(w)$. We
refer to this constraint as _dictionary agreement_. To formalise dictionary
agreement empirically, we first represent $s(w)$ by a sentence embedding
vector $\boldsymbol{s}(w)\in\mathbb{R}^{n}$. Different sentence embedding
methods can be used for this purpose such as convolutional neural networks Kim
(2014), recurrent neural networks Peters et al. (2018) or transformers Devlin
et al. (2019). For simplicity, we use the smoothed inverse frequency (SIF;
Arora et al., 2017) for creating $\boldsymbol{s}(w)$ in this paper. SIF
computes the embedding of a sentence as the weighted average of the pre-
trained word embeddings of the words in the sentence, where the weights are
computed as the inverse unigram probability. Next, the first principal
component vector of the sentence embeddings is removed. The dimensionality of
the sentence embeddings created using SIF is equal to that of the pre-trained
word embeddings used. Therefore, in our case we have both
$\boldsymbol{w},\boldsymbol{s}(w)\in\mathbb{R}^{n}$.
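As a concrete illustration, the SIF construction of $\boldsymbol{s}(w)$ described above can be sketched as follows. This is a minimal NumPy sketch with our own function and variable names; the weighting constant `a` is the smoothing parameter from Arora et al. (2017):

```python
import numpy as np

def sif_embedding(sentences, word_vecs, unigram_prob, a=1e-3):
    """Smoothed inverse frequency (SIF) sentence embeddings.

    sentences: list of token lists; word_vecs: dict word -> np.ndarray;
    unigram_prob: dict word -> corpus unigram probability p(w).
    """
    # 1) Weighted average: the weight a / (a + p(w)) down-weights
    #    frequent words (a smoothed inverse of the unigram probability).
    embs = []
    for tokens in sentences:
        vecs = [word_vecs[t] * (a / (a + unigram_prob[t]))
                for t in tokens if t in word_vecs]
        embs.append(np.mean(vecs, axis=0))
    X = np.vstack(embs)
    # 2) Remove the projection onto the first principal component
    #    (the top right-singular vector of the embedding matrix).
    u = np.linalg.svd(X, full_matrices=False)[2][0]
    return X - np.outer(X @ u, u)
```

The returned embeddings have the same dimensionality as the input word vectors, matching the statement that $\boldsymbol{w},\boldsymbol{s}(w)\in\mathbb{R}^{n}$.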
We decode the debiased embedding $E(\boldsymbol{w};\boldsymbol{\theta}_{e})$
of $w$ using a decoder $D_{d}$, parametrised by $\boldsymbol{\theta}_{d}$ and
compute the squared $\ell_{2}$ distance between it and $\boldsymbol{s}(w)$ to
define an objective $J_{d}$ given by (2).
$\displaystyle
J_{d}(w)=\left|\left|\boldsymbol{s}(w)-D_{d}(E(\boldsymbol{w};\boldsymbol{\theta}_{e});\boldsymbol{\theta}_{d})\right|\right|_{2}^{2}$
(2)
Recalling that our goal is to remove unfair biases from pre-trained word
embeddings and we assume dictionary definitions to be free of such biases, we
define an objective function that explicitly models this requirement. We refer
to this requirement as the _bias orthogonality_ of the debiased embeddings.
For this purpose, we first project the pre-trained word embedding
$\boldsymbol{w}$ of a word $w$ into a subspace that is orthogonal to the
dictionary definition vector $\boldsymbol{s}(w)$. Let us denote this
projection by $\phi(\boldsymbol{w},\boldsymbol{s}(w))\in\mathbb{R}^{n}$. We
require that the debiased word embedding,
$E(\boldsymbol{w};\boldsymbol{\theta}_{e})$, must be orthogonal to
$\phi(\boldsymbol{w},\boldsymbol{s}(w))$, and formalise this as the
minimisation of the squared inner-product given in (3).
$\displaystyle
J_{a}(w)=\left(E(\phi(\boldsymbol{w},\boldsymbol{s}(w));\boldsymbol{\theta}_{e}){}^{\top}E(\boldsymbol{w};\boldsymbol{\theta}_{e})\right)^{2}$
(3)
Note that because $\phi(\boldsymbol{w},\boldsymbol{s}(w))$ lives in the space
spanned by the original (prior to encoding) vector space, we must first encode
it using $E$ before considering the orthogonality requirement.
To derive $\phi(\boldsymbol{w},\boldsymbol{s}(w))$, let us assume the
$n$-dimensional basis vectors in the $\mathbb{R}^{n}$ vector space spanned by
the pre-trained word embeddings to be
$\boldsymbol{b}_{1},\boldsymbol{b}_{2},\ldots,\boldsymbol{b}_{n}$. Moreover,
without loss of generality, let the subspace spanned by the subset of the
first $k(<n)$ basis vectors
$\boldsymbol{b}_{1},\boldsymbol{b}_{2},\ldots,\boldsymbol{b}_{k}$ be
$\mathcal{B}\subseteq\mathbb{R}^{n}$. The projection
$\boldsymbol{v}_{\mathcal{B}}$ of a vector $\boldsymbol{v}\in\mathbb{R}^{n}$
onto $\mathcal{B}$ can be expressed using the basis vectors as in (4).
$\displaystyle\boldsymbol{v}_{\mathcal{B}}=\sum_{j=1}^{k}(\boldsymbol{v}{}^{\top}\boldsymbol{b}_{j})\boldsymbol{b}_{j}$
(4)
To show that $\boldsymbol{v}-\boldsymbol{v}_{\mathcal{B}}$ is orthogonal to
$\boldsymbol{v}_{\mathcal{B}}$ for any $\boldsymbol{v}\in\mathbb{R}^{n}$, let
us express $\boldsymbol{v}-\boldsymbol{v}_{\mathcal{B}}$ using the basis
vectors as given in (5).
$\displaystyle\boldsymbol{v}-\boldsymbol{v}_{\mathcal{B}}$
$\displaystyle=\sum_{i=1}^{n}(\boldsymbol{v}{}^{\top}\boldsymbol{b}_{i})\boldsymbol{b}_{i}-\sum_{j=1}^{k}(\boldsymbol{v}{}^{\top}\boldsymbol{b}_{j})\boldsymbol{b}_{j}$
$\displaystyle=\sum_{i=k+1}^{n}(\boldsymbol{v}{}^{\top}\boldsymbol{b}_{i})\boldsymbol{b}_{i}$
(5)
We see that there are no basis vectors in common between the summations in (4)
and (5). Therefore,
$\boldsymbol{v}_{\mathcal{B}}{}^{\top}(\boldsymbol{v}-\boldsymbol{v}_{\mathcal{B}})=0$
for all $\boldsymbol{v}\in\mathbb{R}^{n}$.
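The argument in (4) and (5) can be checked numerically. The following sketch (our own illustrative code, not part of the paper) builds an orthonormal basis for a random subspace, projects a vector onto it via (4), and verifies that the residual is orthogonal to the projection:

```python
import numpy as np

def project_onto_subspace(v, B):
    """Projection of v onto the subspace spanned by the orthonormal
    columns of B, computed as in (4): sum_j (v . b_j) b_j."""
    return B @ (B.T @ v)

rng = np.random.default_rng(42)
# Orthonormal basis (k = 3 columns) for a random subspace of R^5,
# obtained via QR decomposition.
B = np.linalg.qr(rng.normal(size=(5, 3)))[0]
v = rng.normal(size=5)
v_B = project_onto_subspace(v, B)
# The residual v - v_B has no component along any b_j (eq. 5),
# so it is orthogonal to the projection v_B.
assert abs(v_B @ (v - v_B)) < 1e-10
```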
Considering that $\boldsymbol{s}(w)$ defines a direction that does not contain
any unfair biases, we can compute the vector rejection of $\boldsymbol{w}$ on
$\boldsymbol{s}(w)$ following this result. Specifically, we subtract the
projection of $\boldsymbol{w}$ along the unit vector defining the direction of
$\boldsymbol{s}(w)$ to compute $\phi$ as in (6).
$\displaystyle\phi(\boldsymbol{w},\boldsymbol{s}(w))=\boldsymbol{w}-\boldsymbol{w}{}^{\top}\boldsymbol{s}(w)\frac{\boldsymbol{s}(w)}{\left|\left|\boldsymbol{s}(w)\right|\right|^{2}}$
(6)
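A minimal NumPy sketch of the vector rejection in (6), with our own helper name; it subtracts the projection of $\boldsymbol{w}$ onto the unit vector in the direction of $\boldsymbol{s}(w)$:

```python
import numpy as np

def phi(w, s):
    """Vector rejection of w on s: remove from w its component along
    the unit vector s_hat = s / ||s||, as in (6)."""
    s_hat = s / np.linalg.norm(s)
    return w - (w @ s_hat) * s_hat
```

By construction, `phi(w, s)` is orthogonal to `s`, so whatever lies along the bias-free definition direction is removed, leaving the component that (3) then pushes the debiased embedding away from.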
We consider the linearly-weighted sum of the above-defined three objective
functions as the total objective function as given in (7).
$\displaystyle J(w)=\alpha J_{c}(w)+\beta J_{d}(w)+\gamma J_{a}(w)$ (7)
Here, $\alpha,\beta,\gamma\geq 0$ are scalar coefficients satisfying
$\alpha+\beta+\gamma=1$. Later, in § 4 we experimentally determine the values
of $\alpha,\beta$ and $\gamma$ using a development dataset.
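To make the combined objective concrete, the following NumPy sketch computes (7) for a single word, with $E$, $D_c$ and $D_d$ simplified to single linear maps with tanh outputs (matching the single-layer architecture described in § 4.1). The weight matrices and the default coefficients here are illustrative assumptions, not the trained model:

```python
import numpy as np

def total_loss(w, s_w, We, Wc, Wd, alpha=0.99998, beta=1e-5, gamma=1e-5):
    """Single-example version of J(w) in (7)."""
    E = lambda x: np.tanh(We @ x)                  # encoder E(.; theta_e)
    h = E(w)                                       # debiased embedding of w
    J_c = np.sum((w - np.tanh(Wc @ h)) ** 2)       # reconstruction loss (1)
    J_d = np.sum((s_w - np.tanh(Wd @ h)) ** 2)     # dictionary agreement (2)
    s_hat = s_w / np.linalg.norm(s_w)
    phi = w - (w @ s_hat) * s_hat                  # bias direction, eq. (6)
    J_a = float(E(phi) @ h) ** 2                   # bias orthogonality (3)
    return alpha * J_c + beta * J_d + gamma * J_a
```

In practice the three losses are minimised jointly over mini-batches with an optimiser such as Adam; this sketch only shows how the three terms are assembled.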
## § 4 Experiments
### § 4.1 Word Embeddings
In our experiments, we use the following publicly available pre-trained word
embeddings: Word2Vec (300-dimensional embeddings for ca. 3M words learned from
the Google News corpus Mikolov et al. (2013a);
https://code.google.com/archive/p/word2vec/), GloVe (300-dimensional
embeddings for ca. 2.1M words learned from the Common Crawl Pennington et al.
(2014); https://github.com/stanfordnlp/GloVe), and fastText (300-dimensional
embeddings for ca. 1M words learned from Wikipedia 2017, the UMBC webbase
corpus and statmt.org news Bojanowski et al. (2017);
https://fasttext.cc/docs/en/english-vectors.html).
As the dictionary definitions, we used the glosses in the WordNet Fellbaum
(1998), which has been popularly used to learn word embeddings in prior work
Tissier et al. (2017); Bosc and Vincent (2018); Washio et al. (2019). However,
we note that our proposed method does not depend on any WordNet-specific
features, thus in principle can be applied to any dictionary containing
definition sentences. Words that do not appear in the vocabulary of the pre-
trained embeddings are ignored when computing $\boldsymbol{s}(w)$ for the
headwords $w$ in the dictionary. Therefore, if all the words in a dictionary
definition are ignored, then we remove the corresponding headword from
training. Consequently, we are left with 54,528, 64,779 and 58,015 words
respectively for Word2Vec, GloVe and fastText embeddings in the training
dataset. We randomly sampled 1,000 words from this dataset and held them out
as a development set for tuning various hyperparameters in the proposed
method.
$E$, $D_{c}$ and $D_{d}$ are implemented as single-layer feed-forward neural
networks with a hyperbolic tangent activation at the outputs. It is known that
pre-training is effective when using autoencoders $E$ and $D_{c}$ for
debiasing Kaneko and Bollegala (2019). Therefore, we randomly select 5000
words from each pre-trained word embedding set and pre-train the autoencoders
on those words with a mini-batch size of 512. During pre-training, we select
the model with the lowest loss according to (1) on the development set.
### § 4.2 Hyperparameters
During optimisation, we applied dropout Srivastava et al. (2014) with
probability $0.05$ to $\boldsymbol{w}$ and $E(w)$. We used Adam Kingma and Ba
(2015) with
initial learning rate set to $0.0002$ as the optimiser to find the parameters
$\boldsymbol{\theta}_{e},\boldsymbol{\theta}_{c}$, and
$\boldsymbol{\theta}_{d}$ and a mini-batch size of 4. The optimal values of
all hyperparameters are found by minimising the total loss over the
development dataset following a Monte-Carlo search. The optimal
hyperparameter values found were $\alpha=0.99998$, $\beta=0.00001$ and
$\gamma=0.00001$. Note that the scales of the different losses differ, so
the absolute values of the hyperparameters do _not_ indicate the significance
of a component loss. For example, if we rescale all losses to the same range,
we have $L_{c}=0.005\alpha$, $L_{d}=0.269\beta$ and $L_{a}=21.1999\gamma$.
Therefore, debiasing ($L_{d}$) and orthogonalisation ($L_{a}$) contributions
are significant.
We used a single GeForce GTX 1080 Ti GPU. Debiasing completes in less than an
hour because our method only fine-tunes the embeddings. Our debiasing model
has 270,900 parameters.
### § 4.3 Evaluation Datasets
We use the following datasets to evaluate the degree of the biases in word
embeddings.
#### WEAT:
Word Embedding Association Test (WEAT; Caliskan et al., 2017), quantifies
various biases (e.g. gender, race and age) using semantic similarities between
word embeddings. It compares two equal-sized sets of _target_ words
$\mathcal{X}$ and $\mathcal{Y}$ (e.g. European and African names), with two
sets of _attribute_ words $\mathcal{A}$ and $\mathcal{B}$ (e.g. _pleasant_ vs.
_unpleasant_). The bias score,
$s(\mathcal{X},\mathcal{Y},\mathcal{A},\mathcal{B})$, for each target is
calculated as follows:
$\displaystyle s(\mathcal{X},\mathcal{Y},\mathcal{A},\mathcal{B})$
$\displaystyle=\sum_{\boldsymbol{x}\in\mathcal{X}}k(\boldsymbol{x},\mathcal{A},\mathcal{B})$
$\displaystyle-\sum_{\boldsymbol{y}\in\mathcal{Y}}k(\boldsymbol{y},\mathcal{A},\mathcal{B})$
(8) $\displaystyle k(\boldsymbol{t},\mathcal{A},\mathcal{B})$
$\displaystyle=\textrm{mean}_{\boldsymbol{a}\in\mathcal{A}}f(\boldsymbol{t},\boldsymbol{a})$
$\displaystyle-\textrm{mean}_{\boldsymbol{b}\in\mathcal{B}}f(\boldsymbol{t},\boldsymbol{b})$
(9)
Here, $f$ is the cosine similarity between the word embeddings. The one-sided
$p$-value for the permutation test regarding $\mathcal{X}$ and $\mathcal{Y}$
is calculated as the probability of
$s(\mathcal{X}_{i},\mathcal{Y}_{i},\mathcal{A},\mathcal{B})>s(\mathcal{X},\mathcal{Y},\mathcal{A},\mathcal{B})$,
where $(\mathcal{X}_{i},\mathcal{Y}_{i})$ ranges over equal-sized partitions
of $\mathcal{X}\cup\mathcal{Y}$. The effect size is calculated as the
normalised measure given by (10).
$\displaystyle\frac{\textrm{mean}_{\boldsymbol{x}\in\mathcal{X}}k(\boldsymbol{x},\mathcal{A},\mathcal{B})-\textrm{mean}_{\boldsymbol{y}\in\mathcal{Y}}k(\boldsymbol{y},\mathcal{A},\mathcal{B})}{\textrm{sd}_{\boldsymbol{t}\in\mathcal{X}\cup\mathcal{Y}}k(\boldsymbol{t},\mathcal{A},\mathcal{B})}$
(10)
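The WEAT statistics in (8)-(10) can be sketched as follows (our own function names; the rows of each matrix are word embeddings):

```python
import numpy as np

def _cos(a, b):
    """Cosine similarity between two embedding vectors (the function f)."""
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def weat_effect_size(X, Y, A, B):
    """WEAT effect size (10): the difference in mean association between
    the target sets X and Y, normalised by the pooled standard deviation."""
    def k(t):  # association of target t with attribute sets A and B, eq. (9)
        return (np.mean([_cos(t, a) for a in A])
                - np.mean([_cos(t, b) for b in B]))
    s_X = [k(x) for x in X]
    s_Y = [k(y) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)
```

A positive effect size indicates that the targets in `X` are more strongly associated with the attributes in `A` than those in `Y` are; swapping `X` and `Y` flips the sign.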
Embeddings | Word2Vec | GloVe | fastText
---|---|---|---
Org/Deb | Org/Deb | Org/Deb
T1: flowers vs. insects | $1.46^{\dagger}$/$\mathbf{1.35}^{\dagger}$ | $\mathbf{1.48}^{\dagger}$/$1.54^{\dagger}$ | $1.29^{\dagger}$/$\mathbf{1.09}^{\dagger}$
T2: instruments vs. weapons | $1.56^{\dagger}$/$\mathbf{1.43}^{\dagger}$ | $1.49^{\dagger}$/$\mathbf{1.41}^{\dagger}$ | $1.56^{\dagger}$/$\mathbf{1.34}^{\dagger}$
T3: European vs. African American names | $0.46^{\dagger}$/$\mathbf{0.16}^{\dagger}$ | $1.33^{\dagger}$/$\mathbf{1.04}^{\dagger}$ | $0.79^{\dagger}$/$\mathbf{0.46}^{\dagger}$
T4: male vs. female | $1.91^{\dagger}$/$\mathbf{1.87}^{\dagger}$ | $1.86^{\dagger}$/$\mathbf{1.85}^{\dagger}$ | $1.65^{\dagger}$/$\mathbf{1.42}^{\dagger}$
T5: math vs. art | $0.85^{\dagger}$/$\mathbf{0.53}^{\dagger}$ | $\mathbf{0.43}^{\dagger}$/$0.82^{\dagger}$ | $1.14^{\dagger}$/$\mathbf{0.86}^{\dagger}$
T6: science vs. art | $1.18^{\dagger}$/$\mathbf{0.96}^{\dagger}$ | $\mathbf{1.21}^{\dagger}$/$1.44^{\dagger}$ | $1.16^{\dagger}$/$\mathbf{0.88}^{\dagger}$
T7: physical vs. mental conditions | 0.90/$\mathbf{0.57}$ | 1.03/$\mathbf{0.98}$ | 0.83/$\mathbf{0.63}$
T8: older vs. younger names | $\mathbf{-0.08}$/$-0.10$ | $1.07^{\dagger}$/$\mathbf{0.92}^{\dagger}$ | -0.32/$\mathbf{-0.13}$
T9: WAT | $0.48^{\dagger}$/$\mathbf{0.45}^{\dagger}$ | $0.59^{\dagger}$/$\mathbf{0.58}^{\dagger}$ | $0.54^{\dagger}$/$\mathbf{0.51}^{\dagger}$
Table 1: Rows T1-T8 show WEAT bias effects for the cosine similarity and row
T9 shows the Pearson correlations on the WAT dataset with cosine similarity.
${\dagger}$ indicates bias effects that are insignificant at $\alpha<0.01$.
#### WAT:
Word Association Test (WAT) is a method to measure gender bias over a large
set of words Du et al. (2019). It calculates the gender information vector for
each word in a word association graph created with the Small World of Words
project (SWOWEN; Deyne et al., 2019) by propagating information related to
masculine and feminine words $(w_{m}^{i},w_{f}^{i})\in\mathcal{L}$ using a
random walk approach Zhou et al. (2003). The gender information is represented
as a 2-dimensional vector ($b_{m}$, $b_{f}$), where $b_{m}$ and $b_{f}$ denote
respectively the masculine and feminine orientations of a word. The gender
information vectors of masculine words, feminine words and other words are
initialised respectively with vectors (1, 0), (0, 1) and (0, 0). The bias
score of a word is defined as $\log(b_{m}/b_{f})$. We evaluate the gender bias
of word embeddings using the Pearson correlation coefficient between the bias
score of each word and the score given by (11) computed as the averaged
difference of cosine similarities between masculine and feminine words.
$\displaystyle\frac{1}{|\mathcal{L}|}\sum_{i=1}^{|\mathcal{L}|}\left(f(w,w_{m}^{i})-f(w,w_{f}^{i})\right)$
(11)
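The embedding-side score in (11) can be sketched as follows (our own helper name; `pairs` is the list $\mathcal{L}$ of (masculine, feminine) embedding pairs):

```python
import numpy as np

def gender_score(w, pairs):
    """Averaged difference of cosine similarities between a word embedding
    w and each (masculine, feminine) embedding pair, as in (11)."""
    cos = lambda a, b: (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.mean([cos(w, m) - cos(w, f) for m, f in pairs]))
```

The WAT evaluation then reports the Pearson correlation (e.g. via `np.corrcoef`) between this score and the graph-propagated bias score $\log(b_{m}/b_{f})$ over all words.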
#### SemBias:
SemBias dataset Zhao et al. (2018b) contains three types of word-pairs: (a)
Definition, a gender-definition word pair (e.g. hero – heroine), (b)
Stereotype, a gender-stereotype word pair (e.g., manager – secretary) and (c)
None, two other word-pairs with similar meanings unrelated to gender (e.g.,
jazz – blues, pencil – pen). We use the cosine similarity between the
$\vv{he}-\vv{she}$ gender directional vector and
$\boldsymbol{a}-\boldsymbol{b}$ for the word pairs $(a,b)$ in the above lists to measure
gender bias. Zhao et al. (2018b) used a subset of 40 instances associated with
2 seed word-pairs, not used in the training split, to evaluate the
generalisability of a debiasing method. For unbiased word embeddings, we
expect high similarity scores in Definition category and low similarity scores
in Stereotype and None categories.
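The SemBias scoring described above reduces to a single cosine similarity (a sketch with our own names; `he` and `she` are the embeddings of the seed pronouns):

```python
import numpy as np

def sembias_score(he, she, a, b):
    """Cosine similarity between the he - she gender direction and the
    difference vector of a candidate word pair (a, b)."""
    g, d = he - she, a - b
    return float((g @ d) / (np.linalg.norm(g) * np.linalg.norm(d)))
```

For a well-debiased space, Definition pairs should score high while Stereotype and None pairs should score low.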
Embeddings | Word2Vec | GloVe | fastText
---|---|---|---
Org/Deb | Org/Deb | Org/Deb
definition | 83.0/83.9 | 83.0/83.4 | 92.0/93.2
stereotype | 13.4/12.3 | 12.0/11.4 | 5.5/4.3
none | 3.6/3.9 | 5.0/5.2 | 2.5/2.5
sub-definition | 50.0/57.5 | 67.5/67.5 | 82.5/85.0
sub-stereotype | 40.0/32.5 | 27.5/27.5 | 12.5/10.0
sub-none | 10.0/10.0 | 5.0/5.0 | 5.0/5.0
Table 2: Prediction accuracies for gender relational analogies on SemBias.
#### WinoBias/OntoNotes:
We use the WinoBias dataset Zhao et al. (2018a) and OntoNotes Weischedel et
al. (2013) for coreference resolution to evaluate the effectiveness of our
proposed debiasing method in a downstream task. WinoBias contains two types of
sentences that require linking gendered pronouns to either male or female
stereotypical occupations. In Type 1, co-reference decisions must be made
using world knowledge about some given circumstances. However, in Type 2,
these tests can be resolved using syntactic information and understanding of
the pronoun. It involves two conditions: the pro-stereotyped (pro) condition
links pronouns to occupations dominated by the gender of the pronoun, and the
anti-stereotyped (anti) condition links pronouns to occupations not dominated
by the gender of the pronoun. For a correctly debiased set of word embeddings,
the difference between pro and anti is expected to be small. We use the model
proposed by Lee et al. (2017) and implemented in AllenNLP Gardner et al.
(2017) as the coreference resolution method.
We used a bias comparison tool
(https://github.com/hljames/compare-embedding-bias) for evaluation on the WEAT
dataset. Since the WAT code was not published, we contacted the authors to
obtain the code and used it for evaluation. We used the evaluation code from
GP-GloVe (https://github.com/kanekomasahiro/gp_debias) to evaluate on the
SemBias dataset, and AllenNLP (https://github.com/allenai/allennlp) to
evaluate on the WinoBias and OntoNotes datasets. We used the
evaluate_word_pairs and evaluate_word_analogies functions in gensim
(https://github.com/RaRe-Technologies/gensim) to evaluate on the word
embedding benchmarks.
## § 5 Results
### § 5.1 Overall Results
We initialise the word embeddings of the model with the original (Org) and
debiased (Deb) word embeddings and compare the coreference resolution accuracy
using F1 as the evaluation measure.
Embeddings | Word2Vec | GloVe | fastText
---|---|---|---
Org/Deb | Org/Deb | Org/Deb
Type 1-pro | 70.1/69.4 | 70.8/69.5 | 70.1/69.7
Type 1-anti | 49.9/50.5 | 50.9/52.1 | 52.0/51.6
Avg | 60.0/60.0 | 60.9/60.8 | 61.1/60.7
Diff | 20.2/18.9 | 19.9/17.4 | 18.1/18.1
Type 2-pro | 84.7/83.7 | 79.6/78.9 | 83.8/82.5
Type 2-anti | 77.9/77.5 | 66.0/66.4 | 75.1/76.4
Avg | 81.3/80.6 | 72.8/72.7 | 79.5/79.5
Diff | 6.8/6.2 | 13.6/12.5 | 8.7/6.1
OntoNotes | 62.6/62.7 | 62.5/62.9 | 63.3/63.4
Table 3: F1 on OntoNotes and WinoBias test set. WinoBias results have Type-1
and Type-2 in pro and anti stereotypical conditions. Average (Avg) and
difference (Diff) of anti and pro stereotypical scores are shown.
In Table 1, we show the WEAT bias effects for cosine similarity and the
Pearson correlation coefficients on the WAT dataset. We see that the proposed
method significantly reduces various biases in all word embeddings on both
WEAT and WAT. In particular, for Word2Vec and fastText, almost all biases are
removed.
Table 2 shows the percentages of word-pairs correctly classified as
Definition, Stereotype or None. The results on Definition and Stereotype in
SemBias show that our proposed method successfully debiases word embeddings.
In addition, we see that the SemBias-subset can be debiased for Word2Vec and
fastText.
Table 3 shows the performance on WinoBias for Type 1 and Type 2 in pro and
anti stereotypical conditions. In most settings, the difference (Diff) is
smaller for the debiased than for the original word embeddings, which
demonstrates the
effectiveness of our proposed method. From the results for Avg, we see that
debiasing is achieved with almost no loss in performance. In addition, the
debiased scores on the OntoNotes are higher than the original scores for all
word embeddings.
### § 5.2 Comparison with Existing Methods
| GloVe | HD | GN-GloVe | GP-GloVe | Ours
---|---|---|---|---|---
T1 | $0.89^{\dagger}$ | $0.97^{\dagger}$ | $1.10^{\dagger}$ | $1.24^{\dagger}$ | $\bf 0.74^{\dagger}$
T2 | $1.25^{\dagger}$ | $1.23^{\dagger}$ | $1.25^{\dagger}$ | $1.31^{\dagger}$ | $\bf 1.22^{\dagger}$
T5 | 0.49 | -0.40 | 0.00 | 0.21 | 0.35
T6 | $1.22^{\dagger}$ | -0.11 | $1.13^{\dagger}$ | $0.78^{\dagger}$ | $1.05^{\dagger}$
T7 | 1.19 | 1.23 | 1.11 | 1.01 | 1.03
Table 4: WEAT bias effects for the cosine similarity on prior methods and
proposed method. ${\dagger}$ indicates bias effects that are insignificant at
$\alpha<0.01$. T* are aligned with those in Table 1.
We compare the proposed method against the existing debiasing methods
Bolukbasi et al. (2016); Zhao et al. (2018b); Kaneko and Bollegala (2019)
mentioned in § 2 on WEAT, which contains different types of biases. We debias
the GloVe embeddings (https://github.com/uclanlp/gn_glove) used in Zhao et al.
(2018b). All word embeddings used in these experiments are the pre-trained
word embeddings used in the existing debiasing methods. Words in evaluation
sets T3, T4 and T8 are not covered by the input pre-trained embeddings and are
hence not considered in this evaluation. From Table 4 we see that only the
proposed method removes all biases accurately. T5 and T6 are tests for gender
bias; although prior debiasing methods do well on those tests, they are unable
to address other types of biases. Notably, the proposed method debiases more
accurately than previous methods that use word lists for gender debiasing,
such as Bolukbasi et al. (2016) in T5 and Zhao et al. (2018b) in T6.
### § 5.3 Dominant Gloss vs All Glosses
Embeddings | Word2Vec | GloVe | fastText
---|---|---|---
Dom/All | Dom/All | Dom/All
definition | 83.4/83.9 | 83.9/83.4 | 92.5/93.2
stereotype | 12.7/12.3 | 11.8/11.4 | 4.8/4.3
none | 3.9/3.9 | 4.3/5.2 | 2.7/2.5
sub-definition | 55.0/57.5 | 67.5/67.5 | 77.5/85.0
sub-stereotype | 35.0/32.5 | 27.5/27.5 | 12.5/10.0
sub-none | 10.0/10.0 | 5.0/5.0 | 10.0/5.0
Table 5: Performance obtained when using only the dominant gloss (Dom) or all
glosses (All) on SemBias.
In Table 5, we investigate the effect of using the dominant gloss (i.e. the
gloss for the most frequent sense of the word) when creating $s(w)$ on SemBias
benchmark as opposed to using all glosses (same as in Table 2). We see that
debiasing using all glosses is more effective than using only the dominant
gloss.
### § 5.4 Word Embedding Benchmarks
Embeddings | Word2Vec | GloVe | fastText
---|---|---|---
| Org/Deb | Org/Deb | Org/Deb
WS | 62.4/60.3 | 60.6/68.9 | 64.4/67.0
SIMLEX | 44.7/46.5 | 39.5/45.1 | 44.2/47.3
RG | 75.4/77.9 | 68.1/74.1 | 75.0/79.6
MTurk | 63.1/63.6 | 62.7/69.4 | 67.2/69.9
RW | 75.4/77.9 | 68.1/74.1 | 75.0/79.6
MEN | 68.1/69.4 | 67.7/76.7 | 67.6/71.8
MSR | 73.6/72.6 | 73.8/75.1 | 83.9/80.5
Google | 74.0/73.7 | 76.8/77.3 | 87.1/85.7
Table 6: The Spearman correlation coefficients between human ratings and
cosine similarity scores computed using word embeddings for the word pairs in
semantic similarity benchmarks.
Figure 1: Cosine similarity between neutral occupation words and the vector
directions for gender ($\vv{he}-\vv{she}$), race
($\vv{Caucasoid}-\vv{Negroid}$), and age ($\vv{elder}-\vv{youth}$). Panels:
(a) Original Word2Vec; (b) Debiased Word2Vec.
It is important that a debiasing method removes only discriminatory biases and
preserves semantic information in the original word embeddings. If the
debiasing method removes more information than necessary from the original
word embeddings, performance will drop when those debiased embeddings are used
in NLP applications. Therefore, to evaluate the semantic information preserved
after debiasing, we use semantic similarity and word analogy benchmarks as
described next.
#### Semantic Similarity:
The semantic similarity between two words is calculated as the cosine
similarity between their word embeddings and compared against the human
ratings using the Spearman correlation coefficient. The following datasets are
used: Word Similarity 353 (WS; Finkelstein et al., 2001), SimLex Hill et al.
(2015), Rubenstein-Goodenough (RG; Rubenstein and Goodenough, 1965), MTurk
Halawi et al. (2012), rare words (RW; Luong et al., 2013) and MEN Bruni et al.
(2012).
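The evaluation protocol above can be sketched in a few lines; `similarity_benchmark_score` is an illustrative name, and skipping word pairs that are out of vocabulary is an assumption about the protocol, not something the paper states.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_benchmark_score(pairs, ratings, emb):
    """Spearman correlation between human ratings and embedding cosine
    similarities, as used for WS, SimLex, RG, MTurk, RW and MEN.
    Pairs with out-of-vocabulary words are skipped (an assumption)."""
    sims, gold = [], []
    for (a, b), r in zip(pairs, ratings):
        if a in emb and b in emb:
            sims.append(cosine(emb[a], emb[b]))
            gold.append(r)
    rho, _ = spearmanr(sims, gold)
    return rho
```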
#### Word Analogy:
In word analogy, we predict $d$ that completes the proportional analogy “$a$
is to $b$ as $c$ is to what?”, for four words $a$, $b$, $c$ and $d$. We use
CosAdd Levy and Goldberg (2014), which determines $d$ by maximising the cosine
similarity between the two vectors
($\boldsymbol{b}-\boldsymbol{a}+\boldsymbol{c}$) and $\boldsymbol{d}$.
Following Zhao et al. (2018b), we evaluate on MSR Mikolov et al. (2013c) and
Google analogy datasets Mikolov et al. (2013a) as shown in Table 6.
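CosAdd itself is straightforward to sketch. The function below is an illustrative implementation over a small in-memory vocabulary, not the paper's evaluation code; excluding the three query words from the candidates follows the standard protocol.

```python
import numpy as np

def cos_add(a, b, c, emb):
    """Solve 'a is to b as c is to ?' with CosAdd (Levy and Goldberg, 2014):
    return the vocabulary word d maximising cos(b - a + c, d), excluding
    the three query words themselves."""
    target = emb[b] - emb[a] + emb[c]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for w, v in emb.items():
        if w in (a, b, c):
            continue
        sim = float(np.dot(target, v) / np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = w, sim
    return best
```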
From Table 6 we see that, for all word embeddings, the embeddings debiased
using the proposed method accurately preserve the semantic information in the
original embeddings. In fact, on the semantic similarity benchmarks, except
for the Word2Vec embeddings on the WS dataset, the accuracy of the embeddings
has _improved_ after the debiasing process, which is a desirable side-effect.
We believe this is because the information in the dictionary definitions is
used during the debiasing process. Overall, our proposed method removes unfair
biases while retaining (and sometimes further improving) the semantic
information contained in the original word embeddings.
Turning to the word analogy tasks, we see that for GloVe embeddings the
performance has improved after debiasing, whereas for Word2Vec and fastText
embeddings the opposite is true. A similar drop in performance on word analogy
tasks has been reported in prior work Zhao et al. (2018b). Besides CosAdd,
multiple alternative methods have been proposed for solving analogies using
pre-trained word embeddings, such as CosMult, PairDiff and supervised
operators Bollegala et al. (2015, 2014); Hakami et al. (2018). Moreover,
concerns have been raised about the protocols used in prior work for
evaluating word embeddings on word analogy tasks and about their correlation
with downstream tasks Schluter (2018). Therefore, we defer further
investigation of this behaviour to future work.
### § 5.5 Visualising the Outcome of Debiasing
We analyse the effect of debiasing by calculating the cosine similarity
between neutral occupational words and the gender ($\vv{he}-\vv{she}$), race
($\vv{Caucasoid}-\vv{Negroid}$) and age ($\vv{elder}-\vv{youth}$) directions.
The list of neutral occupational words is based on Bolukbasi et al. (2016) and
is given in the Supplementary. Figure 1 shows the visualisation for Word2Vec.
We see that in the original Word2Vec, some occupation words lie notably far
from the origin (0.0) along the gender direction. Moreover, age-related words
show an overall bias towards “elder”. Compared to the original Word2Vec, our
debiased Word2Vec gathers the vectors around the origin along all of the
gender, race and age directions.
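The quantity plotted in Figure 1 can be sketched as a projection onto a normalized difference vector; the function name and the toy word choices below are illustrative, not taken from the paper.

```python
import numpy as np

def direction_similarity(words, pos, neg, emb):
    """Cosine similarity of each word with a bias direction such as
    he - she (gender) or elder - youth (age). Values near 0 suggest
    little bias along that direction; the sign indicates which pole
    the word leans towards."""
    d = emb[pos] - emb[neg]
    d = d / np.linalg.norm(d)
    return {w: float(np.dot(emb[w], d) / np.linalg.norm(emb[w]))
            for w in words if w in emb}
```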
On the other hand, several words retain a high cosine similarity with the
female side of the gender direction after debiasing. We speculate that in rare
cases their definition sentences themselves contain biases. For example, in
WordNet the definitions for “homemaker” and “nurse” include gender-oriented
phrases such as “a wife who manages a household while her husband earns the
family income” and “a woman who is the custodian of children.” Removing biases
from dictionaries before using them for debiasing remains an interesting
future challenge. In the meantime, it is necessary to pay attention to biases
contained in the definition sentences when performing debiasing with
dictionaries. Combining definitions from multiple dictionaries could help
mitigate the biases of any single dictionary. Another future research
direction is to evaluate the proposed method on languages other than English
using multilingual dictionaries.
## § 6 Conclusion
We proposed a method to remove biases from pre-trained word embeddings using
dictionaries, without requiring pre-defined word lists. The experimental
results on a series of benchmark datasets show that the proposed method can
remove unfair biases, while retaining useful semantic information encoded in
pre-trained word embeddings.
## References
* Alsuhaibani et al. (2019) Mohammed Alsuhaibani, Takanori Maehara, and Danushka Bollegala. 2019. Joint learning of hierarchical word embeddings from a corpus and a taxonomy. In _Proc. of the Automated Knowledge Base Construction Conference_.
* Arora et al. (2017) Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In _Proc. of ICLR_.
* Bojanowski et al. (2017) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. _Transactions of the Association for Computational Linguistics_ , 5:135–146.
* Bollegala et al. (2015) Danushka Bollegala, Takanori Maehara, and Ken-ichi Kawarabayashi. 2015. Embedding semantic relations into word representations. In _Proc. of IJCAI_ , pages 1222 – 1228.
* Bollegala et al. (2014) Danushka Bollegala, Takanori Maehara, Yuichi Yoshida, and Ken-ichi Kawarabayashi. 2014. Learning word representations from relational graphs. In _Proc. of 29th AAAI Conference on Artificial Intelligence (AAAI 2015)_ , pages 2146 – 2152.
* Bollegala et al. (2016) Danushka Bollegala, Alsuhaibani Mohammed, Takanori Maehara, and Ken-ichi Kawarabayashi. 2016. Joint word representation learning using a corpus and a semantic lexicon. In _Proc. of AAAI_ , pages 2690–2696.
* Bolukbasi et al. (2016) Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In _NIPS_.
* Bordia and Bowman (2019) Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word-level language models. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop_ , pages 7–15, Minneapolis, Minnesota. Association for Computational Linguistics.
* Bosc and Vincent (2018) Tom Bosc and Pascal Vincent. 2018. Auto-encoding dictionary definitions into consistent word embeddings. In _EMNLP_.
* Bruni et al. (2012) Elia Bruni, Gemma Boleda, Marco Baroni, and Nam Khanh Tran. 2012. Distributional semantics in technicolor. In _Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 136–145. Association for Computational Linguistics.
* Budanitsky and Hirst (2006) A. Budanitsky and G. Hirst. 2006. Evaluating wordnet-based measures of semantic distance. _Computational Linguistics_ , 32(1):13–47.
* Budanitsky and Hirst (2001) Alexander Budanitsky and Graeme Hirst. 2001. Semantic distance in wordnet: An experimental, application-oriented evaluation of five measures. In _NAACL 2001_.
* Caliskan et al. (2017) Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. _Science_ , 356:183–186.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Deyne et al. (2019) Simon De Deyne, Danielle J. Navarro, Amy Perfors, Marc Brysbaert, and Gert Storms. 2019. The “small world of words” english word association norms for over 12,000 cue words. _Behavior Research Methods_ , 51:987–1006.
* Du et al. (2019) Yupei Du, Yuanbin Wu, and Man Lan. 2019. Exploring human gender stereotypes with word association test. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 6133–6143, Hong Kong, China. Association for Computational Linguistics.
* Elazar and Goldberg (2018) Yanai Elazar and Yoav Goldberg. 2018. Adversarial Removal of Demographic Attributes from Text Data. In _Proc. of EMNLP_.
* Ethayarajh et al. (2019) Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019. Understanding undesirable word embedding associations. In _Proceedings of the 57th Conference of the Association for Computational Linguistics_ , pages 1696–1705. Association for Computational Linguistics.
* Faruqui et al. (2015) Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In _Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 1606–1615, Denver, Colorado. Association for Computational Linguistics.
* Fellbaum (1998) Christiane Fellbaum, editor. 1998. _WordNet: an electronic lexical database_. MIT Press.
* Finkelstein et al. (2001) Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In _Proceedings of the 10th International Conference on World Wide Web_ , WWW ’01, pages 406–414, New York, NY, USA. ACM.
* Gardner et al. (2017) Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform. In _Proceedings of Workshop for NLP Open Source Software (NLP-OSS)_.
* Glavaš and Vulić (2018) Goran Glavaš and Ivan Vulić. 2018. Explicit retrofitting of distributional word vectors. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 34–45. Association for Computational Linguistics.
* Gonen and Goldberg (2019) Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 609–614, Minneapolis, Minnesota. Association for Computational Linguistics.
* Greenwald et al. (1998) Anthony G. Greenwald, Debbie E. McGhee, and Jordan L. K. Schwatz. 1998. Measuring individual differences in implicit cognition: The implicit association test. _Journal of Personality and Social Psychology_ , 74(6):1464–1480.
* Hakami et al. (2018) Huda Hakami, Kohei Hayashi, and Danushka Bollegala. 2018. Why does PairDiff work? - a mathematical analysis of bilinear relational compositional operators for analogy detection. In _Proceedings of the 27th International Conference on Computational Linguistics_ , pages 2493–2504, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
* Halawi et al. (2012) Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In _Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , KDD ’12, pages 1406–1414, New York, NY, USA. ACM.
* Hall Maudslay et al. (2019) Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It’s all in the name: Mitigating gender bias with name-based counterfactual data substitution. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 5266–5274, Hong Kong, China. Association for Computational Linguistics.
* Hill et al. (2015) Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. _Computational Linguistics_ , 41(4):665–695.
* Jiang and Conrath (1997) Jay J. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In _10th Intl. Conf. Research on Computational Linguistics (ROCLING)_ , pages 19 – 33.
* Kaneko and Bollegala (2019) Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre-trained word embeddings. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 1641–1650, Florence, Italy. Association for Computational Linguistics.
* Kaneko and Bollegala (2020) Masahiro Kaneko and Danushka Bollegala. 2020. Autoencoding improves pre-trained word embeddings. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 1699–1713, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Kim (2014) Yoon Kim. 2014. Convolutional neural networks for sentence classification. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1746–1751, Doha, Qatar. Association for Computational Linguistics.
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In _Proc. of ICLR_.
* Lambrecht and Tucker (2016) Anja Lambrecht and Catherine E. Tucker. 2016. Algorithmic bias? an empirical study into apparent gender-based discrimination in the display of stem career ads. _SSRN Electronic Journal_.
* Lauscher and Glavas (2019) Anne Lauscher and Goran Glavas. 2019. Are we consistently biased? multidimensional analysis of biases in distributional word vectors. In _*SEM@NAACL-HLT_.
* Lee et al. (2017) Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 188–197, Copenhagen, Denmark. Association for Computational Linguistics.
* Levy and Goldberg (2014) Omer Levy and Yoav Goldberg. 2014. Linguistic regularities in sparse and explicit word representations. In _CoNLL_.
* Li et al. (2018) Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 25–30. Association for Computational Linguistics.
* Luong et al. (2013) Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In _Proceedings of the Seventeenth Conference on Computational Natural Language Learning_ , pages 104–113. Association for Computational Linguistics.
* Manzini et al. (2019) Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 615–621, Minneapolis, Minnesota. Association for Computational Linguistics.
* May et al. (2019) Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 622–628. Association for Computational Linguistics.
* Mikolov et al. (2013a) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013a. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, _Advances in Neural Information Processing Systems 26_ , pages 3111–3119. Curran Associates, Inc.
* Mikolov et al. (2013b) Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In _NAACL-HLT_ , pages 746 – 751.
* Mikolov et al. (2013c) Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In _Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 746–751, Atlanta, Georgia. Association for Computational Linguistics.
* Pennington et al. (2014) Jeffery Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In _EMNLP_ , pages 1532–1543.
* Peters et al. (2018) Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In _Proc. of NAACL-HLT_.
* Ravfogel et al. (2020) Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection. In _Proc. of ACL_.
* Rubenstein and Goodenough (1965) Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. _Commun. ACM_ , 8(10):627–633.
* Rudinger et al. (2018) Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 8–14. Association for Computational Linguistics.
* Schluter (2018) Natalie Schluter. 2018. The word analogy testing caveat. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 242–246. Association for Computational Linguistics.
* Sheng et al. (2019) Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3405–3410, Hong Kong, China. Association for Computational Linguistics.
* Srivastava et al. (2014) Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. _J. Mach. Learn. Res._ , 15:1929–1958.
* Tissier et al. (2017) Julien Tissier, Christopher Gravier, and Amaury Habrard. 2017. Dict2vec : Learning word embeddings using lexical dictionaries. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 254–263, Copenhagen, Denmark. Association for Computational Linguistics.
* Vanmassenhove et al. (2018) Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 3003–3008. Association for Computational Linguistics.
* Vig (2019) Jesse Vig. 2019. Visualizing Attention in Transformer-Based Language Representation Models.
* Washio et al. (2019) Koki Washio, Satoshi Sekine, and Tsuneaki Kato. 2019. Bridging the defined and the defining: Exploiting implicit lexical semantic relations in definition modeling. In _EMNLP/IJCNLP_.
* Weischedel et al. (2013) Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19. _Linguistic Data Consortium, Philadelphia, PA_ , 23.
* Xie et al. (2017) Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. 2017. Controllable invariance through adversarial feature learning. In _Proc. of NIPS_.
* Zhao et al. (2019) Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 629–634, Minneapolis, Minnesota. Association for Computational Linguistics.
* Zhao et al. (2017) Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 2941–2951, Copenhagen, Denmark. Association for Computational Linguistics.
* Zhao et al. (2018a) Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 15–20. Association for Computational Linguistics.
* Zhao et al. (2018b) Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. 2018b. Learning Gender-Neutral Word Embeddings. In _Proc. of EMNLP_ , pages 4847–4853.
* Zhou et al. (2003) Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Schölkopf. 2003. Learning with local and global consistency. In _NIPS_.
* Zmigrod et al. (2019) Ran Zmigrod, Sebastian J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology. In _Proc. of ACL_.
# A Dual-arm Robot that Autonomously Lifts Up and Tumbles Heavy Plates Using
Crane Pulley Blocks
Shogo Hayakawa1, Weiwei Wan1∗, Keisuke Koyama1 and Kensuke Harada1,2
1Graduate School of Engineering Science, Osaka University, Japan. 2National
Inst. of AIST, Japan. Contact: Weiwei Wan, <EMAIL_ADDRESS>
###### Abstract
This paper develops a planner that plans the action sequences and motion for a
dual-arm robot to lift up and flip heavy plates using crane pulley blocks. The
problem is motivated by the low payload of modern collaborative robots.
Instead of directly manipulating heavy plates that collaborative robots cannot
afford, the paper develops a planner for collaborative robots to operate crane
pulley blocks. The planner assumes a target plate is pre-attached to the crane
hook. It optimizes dual-arm action sequences and plans the robot’s dual-arm
motion that pulls the rope of the crane pulley blocks to lift up the plate.
The crane pulley blocks reduce the payload that each robotic arm needs to
bear. When the plate is lifted up to a satisfying pose, the planner plans a
pushing motion for one of the robot arms to tumble over the plate while
considering force and moment constraints. The article presents the technical
details of the planner and several experiments and analyses carried out using
a dual-arm robot built from two Universal Robots UR3 arms. The influence of
various parameters and optimization goals is investigated and compared in
depth. The results show that the proposed planner is flexible and efficient.
###### Index Terms:
Manipulation Planning, Grasping, Grippers and Other End-Effectors
## I Introduction
Although modern industrial robots can lift payloads of as much as 1000$kg$,
they tend to be expensive, bulky, and dangerous, and have to be fenced inside
a work cell to keep human workers safe. They are not suitable for human-
intensive manufacturing sites. Collaborative robots, on the other hand, are
developed to work together with humans; they are smaller and considered safe.
However, they tend to have small payloads to ensure a good response to
collisions. Heavy loads and collaboration thus form a trade-off that leads to
an interesting and unsolved problem – how to use collaborative robots to
manipulate heavy objects.
Previously, researchers in robot manipulation suggested working on heavy
objects using non-prehensile manipulation, which leverages external supports
to bear a part of the total workload [1][2][3]. Despite this cleverness, the
masses of the target objects remain limited: they must meet the force
requirements of the external supports. Also, carrying out complicated
manipulation tasks, such as turning an object over, is difficult and requires
sophisticated optimization and planning [4][5]. Against this background, we
explore new manipulation methods for working on heavy objects with
collaborative robots.
The idea we have in mind is to treat robots as human workers and plan for them
to manipulate various objects using human tools. For heavy objects in
particular, we propose to use crane pulley blocks as the tool. The policy is
inspired by a production process seen in a factory that produces sewage press
machines, which are used to dehydrate coal. The pressboard of a sewage press
machine can be as heavy as 1000$kg$. In the factory, as shown in Fig.1, human
workers need to flip and clean both sides of the board before installing it on
the cylinder axis. The factory developed a production procedure in which human
workers perform the flipping task using a gantry crane. They attach the board
to the crane hook using bearing belts and activate the crane to lift up the
board. When the board is raised to a satisfying pose, as shown in Fig.1(c),
the workers turn the board over by pushing it. Motivated by the human workers’
actions in the sewage machine factory, in this research we aim to perform a
similar task using collaborative robots that operate crane pulley blocks.
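For intuition on why pulley blocks help, the quasi-static textbook estimate is that a block with $n$ supporting rope segments divides the load by $n$, less friction losses in the sheaves. The sketch below is our own illustrative arithmetic under that assumption, not the paper's model; the function name and parameters are hypothetical.

```python
def required_pulling_force(load_kg, n_falls, efficiency=1.0, g=9.81):
    """Quasi-static pulling force (in newtons) needed on the free rope end
    of a pulley block with `n_falls` supporting rope segments.
    `efficiency` (0..1] roughly accounts for sheave friction."""
    return load_kg * g / (n_falls * efficiency)
```

For example, a 4-fall block lowers the force for a 100 kg plate from roughly 981 N to about 245 N, bringing the task closer to a collaborative robot's payload range.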
Figure 1: Human workers in a sewage press machine factory flip press boards
with the help of a gantry crane. (a) Attaching the board to the crane hook
using bearing belts. (b) Activating the crane to lift up the board. (c) The
board is lifted up to a satisfying pose. (d) The board is flipped and is
returned to the workbench.
We develop a planner that optimizes the action sequences and plans the motion
for a collaborative dual-arm robot that operates crane pulley blocks to lift
up and tumble heavy plate-shaped objects. We assume target plates are pre-
attached to the crane hook. Depth vision is used to recognize the position and
orientation of the crane rope and the plates. Based on the recognized rope and
plate poses, we optimize the initial and goal pulling poses for the rope and
plan the robot’s dual-arm motion that pulls the crane rope to lift up the
plates. When the plates are lifted up to a satisfying pose, we compute the
relations between contact wrenches and gravitational forces, and plan a
pushing motion for one of the robot arms to tumble over the plates.
Our main contribution is twofold. First, to our knowledge, we are the first to
develop a dual-arm robot that manipulates a plate much heavier than its
maximum end-effector load by operating crane pulley blocks. Second, we
formulate the constraints involved in using the crane pulley blocks and
perform multiple optimizations that let the robot autonomously determine the
pulling arms, pulling actions, and tumbling trajectories. We present the
technical details of the planner and several experiments and analyses carried
out using the dual-arm robot and different plates. The influence of various
parameters and optimization goals is investigated and compared in depth. The
results show that the proposed planner is flexible and efficient.
In the remaining part of this article, we first review related work in Section
II. Then, we present an overview of the method in Section III. After that, we
show the technical details in Section IV, with experiments and analysis
carried out using a dual-arm robot made by two Universal Robots UR3 arms and
plates with different parameters in Section V. Conclusions and future work are
presented in the last section.
## II Related Work
We review related work on the manipulation planning of heavy objects. The
studies are divided into two categories. The first concentrates on firmly
grasped or firmly attached objects. The goal is to keep balance or avoid
breaking weak joints. For example, Harada et al. [6] used the Zero Moment
Point (ZMP) of the humanoid robot-object system to compensate a humanoid
robot’s motion, and thus enable the humanoid robot to lift up and carry a heavy
object stably. Urakubo et al. [7] studied lifting up a heavy object with small
joint torques. They took advantage of singular configurations to avoid
breaking the joints of a two-link robot. Berenson et al. [8] formulated the
manipulation planning of heavy objects as a probabilistic sampling problem
while considering constrained manifolds. Yang et al. [9] developed a tele-
operating system considering mixed force and motion commands to avoid
unexpected excessive force caused by massive payload. Zhu et al. [10]
presented a planner for a wall climbing manipulator. The influence of self-
weight was considered in determining the footholds for the suction modules.
These studies assume that a robot firmly grasps the heavy object, which could
be undesirable as heavy objects are generally large. A robotic gripper may not
be able to grasp them with force or form closure. Also, even if a robot can
grasp a part of a large, heavy object, it may remain challenging for the robot
to pick up the object due to the fragile fingertip or fingerpad contacts. For
this reason, non-prehensile manipulation is widely used to plan to manipulate
heavy objects. It is the second category of studies we would like to review.
Non-prehensile manipulation means manipulating an object without firmly
grasping it. The representative non-prehensile manipulation primitives include
pushing, sliding, tumbling, pivoting, lifting, caging, etc. Pushing was
extensively studied by Mason [11], and is currently widely used to manipulate
objects. When a robot pushes an object, one needs to consider the trajectories
of both the object and the robot itself. Lynch and Mason [12], and more
recently Zhou and Mason [13], analyzed this control and planning problem and
implemented stable pushing. Recent progress in pushing includes Zhou et al.
[14] and Yu et al. [15], which use probabilistic models to infer object
states. Song et al. [16] proposed a nested approach to manipulate multiple
objects together using pushing and learning. With the help of pushing, a robot
can manipulate objects that cannot be directly grasped and lifted. For
example, Murooka et al. [2] presented a humanoid robot that pushed heavy
objects by using whole-body contacts. Topping et al. [17] modeled and planned
a small quadruped robot’s motion to open large doors.
Sliding is similar to pushing, but instead of exerting force sideways, sliding
assumes pressing against the frictional surface. It also enables a small robot
to manipulate large and heavy objects. Zhang et al. [18] presented a dynamic
model to plan the motion for a legged robot to perform various sliding tasks
such as driving, inchworming, scooting, etc. Hang et al. [19] developed a
pregrasp policy that slides thin objects to the corner of a table for easier
pick-up.
Tumbling means rotating an object while pressing it against a surface. Bai et
al.[20] analyzed tumbling an object using a multi-fingered robotic hand. The
motion is induced by a tilted palm and gravity. Fingers were used to prevent
the tumbling from overshooting. Cabezas et al. [21] presented a tumbling
planner that accepts a given trajectory of rotation and computes the quasi-
dynamic contacts. Correa et al. [22] and Cheng et al. [23] respectively
developed new usage of suction cups by considering using them as a tip for
tumbling.
Rolling is a variation of tumbling, where the manipulated object is rotated
continuously along a surface [24][25].
Pivoting is a method that moves an object by leaving the object alternatively
supported by corner points as if the object is walking on them. It is an
extended version of tumbling. Aiyama et al. [26] first proposed the idea
of robotic pivoting. Yoshida et al. [1] used a humanoid robot to pivot and move
a heavy box.
Besides the primitives, researchers also employed high-level graph search to
plan composite non-prehensile manipulation. For example, Maeda et al.[27]
proposed using a manipulation-feasibility graph to host the contact states
[28] of an object and plan multi-finger graspless manipulation by searching
the graph. Lee et al. [29] proposed a hierarchical method that used a contact-
state graph in lower layers to identify object-environment contacts, and
determined robot contact sequences and maneuverings in a higher layer.
Compared with the above non-prehensile manipulation studies, our work differs
in two ways. First, we do not rely on the robot alone. Our non-prehensile
manipulation uses crane pulley blocks to extend the capacity of the robots.
A similar idea can be found in a recent publication that uses
a collaborative robot to operate a manual pallet jack [30]. We plan both the
dual-arm robot action sequences and the motion details to pull and return the
crane pulley blocks' rope and thus ease non-prehensile flipping. Second, we plan a
robot pushing motion to turn over heavy plates. We study a quasi-static
prediction of a single contact and use it to generate the trajectory for
tumbling plates.
## III Overview of the Proposed Method
This section presents an overview of our proposed method. We base our
discussion on the hardware setup shown in Fig.2. The proposed method is not
limited to this hardware setup, but we use it as an example to give readers
a concrete conception of the working scenario.
Figure 2: An exemplary hardware setup. The goal is to optimize the robot
action sequences and plan the robot motion that pulls up the heavy plate using
the crane pulley blocks, and tumbles over the plate at a satisfying pose.
The proposed method is divided into two phases. The first phase is a rope-pulling
phase. At the beginning of this phase, the method detects the rope's
position using point clouds obtained from a depth sensor. Many well-developed
algorithms, for example, RANSAC [31] and ICP [32], can be used to perform the
detection. The detected rope, together with the pre-annotated grasp poses that
grasp a section of the rope, is used to examine which robot arm to use and
optimize the initial and goal rope-pulling poses. The two arms are examined
sequentially, but they do not necessarily move one by one. The actuation of
arms is determined by a quality function designed to maximize the pulling
distance, minimize the pulling force, and enlarge grasping flexibility so that
the plate object can be quickly and safely lifted up. An arm motion will be
planned and executed to pull the crane rope if an optimal goal is found. After
pulling the rope, the method performs a second visual detection to find the
plate’s pose. It determines if a plate is at a satisfying pose for tumbling.
If the pose is not satisfying, the routine returns to the rope detection and
pulling optimization to pull down the rope continuously. Or else, the method
switches to a tumbling phase that optimizes the trajectory of a robot arm that
flips the plate using sliding-push. The tumbling is also monitored by vision
to determine if the plate is well flipped or not. Once the plate’s pose is
considered to be reversed, the robot uses its two arms to return the crane
rope and lower the plate down to the table.
The detailed algorithms of the two phases will be presented in
Sections IV and V, respectively. Their performance will be examined and
analyzed in Section VI.
## IV Pulling the Rope Using Optimized Poses
In this section, we present the optimal rope-pulling planning algorithms used
in the rope pulling phase. The robot lifts up a plate by repeatedly carrying
out the planned rope-pulling motion. As shown in Fig.3, the planning is
performed repeatedly in closed loops. Inside each loop, the robot uses a
quality function to determine the best initial and goal pulling poses, and
plans and executes the planned pulling actions until the plate is lifted up to
a satisfying angle. If the planning or execution in a loop fails, the planner
will invalidate the failed grasping point or pulling pose and try another
candidate. The robot autonomously switches between single-arm or dual-arm
actions following the determined goals.
Figure 3: Workflow of the optimal rope-pulling planning algorithms. The
planning is performed repeatedly in closed loops, as denoted by the red arrows
and green arrows. Inside each loop, the robot uses a quality function to
determine the best initial and goal pulling poses, and plans and executes the
planned actions until a plate is lifted over a threshold angle.
The components of Fig.3 will be presented in detail in the remaining part of
this section. Frame box 1) will be presented in subsection A. Frame box 2)
will be presented in subsection B. Frame boxes 3)-5) will be presented in
subsection C. The closed-loop arrows (the red arrows and the green arrows
respectively form closed loops) and the switches of arms will be presented in
subsection D. Frame box 6) will be presented in subsection E.
### IV-A Searching for the Initial Grasping Point and Pulling Poses
Figure 4: (a) Searching for the initial grasp point and pulling poses
considering cylinder elements and pre-annotated grasp poses. The lower part of
the figure shows the cylindrical modeling of the rope and the pre-annotated
grasps for a cylinder element. (b, c, d) Sampling and determining an optimal
init-goal pair considering the pulling distance computed using $l_{i}$, the
pulling load computed using $\theta_{i}$, and the chance of successful motion
planning computed by reasoning shared grasps. The random green dots are the
sampled goals. The two clusters of hands in (d) show the grasping poses at the
initial and goal points. The red and green ones are the shared grasping poses
at both the initial and goal points. The blue ones in the upper cluster are
not accessible at the goal. In the shared grasping poses, the red ones are IK-
infeasible or collided. The green ones are the finally determined candidate
grasping poses. Their related arm pulling poses are also rendered in green
color.
The initial grasping point and initial pulling poses for the rope-pulling
motion are determined considering the rope’s point cloud obtained inside each
loop. We use a series of connected cylinders to model a detected rope point
cloud, as shown in Fig.4(a). The cylinder elements have the same size. Grasp
poses are pre-annotated for a cylinder element. To determine initial pulling
poses, we scan the cylinder elements of a detected rope from its top-most
position and select the first one that is not invalidated by previous loops as
the initial grasping point. Then, the pose of an arm is computed by solving
Inverse Kinematics (IK) considering the pre-annotated grasp poses at the
selected point. After that, the collision-free ones of the solved arm poses
are saved as the initial pulling poses. The green rendering in the upper part
of Fig.4(a) exemplifies a determined initial pulling pose. The red rendering
shows an invalidated grasping pose, as the arm elbow at this configuration
collides with the robot body.
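The scan-and-filter procedure above can be sketched as follows. The cylinder representation, the IK solver, and the collision checker are hypothetical stand-ins for the actual system components.

```python
def select_initial_pulling_poses(cylinder_elements, annotated_grasps,
                                 solve_ik, in_collision):
    # Scan the cylinder elements of the detected rope from the top-most
    # position and select the first one not invalidated by previous loops.
    for elem in sorted(cylinder_elements, key=lambda e: -e["z"]):
        if elem.get("invalidated", False):
            continue
        # Solve IK for every pre-annotated grasp pose at this element and
        # keep only the collision-free solutions as initial pulling poses.
        poses = []
        for grasp in annotated_grasps:
            joints = solve_ik(elem, grasp)   # None when no IK solution exists
            if joints is not None and not in_collision(joints):
                poses.append(joints)
        if poses:
            return elem, poses               # initial grasping point + poses
    return None, []                          # no feasible initial pose found
```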
### IV-B Determining the Optimal Init-Goal Pair
After the initial grasping point and pulling poses are decided, the planner
continues to decide goal points and the pulling poses at them to form init-
goal pairs. The goal points are selected from a set of probabilistically
sampled points in a robot arm’s workspace, as shown in Fig.4(b-d). Since the
two arms have different workspaces, their goal points are sampled differently.
The most effective point in the sampled set is selected to form the initial-
goal pair, where effectiveness is evaluated considering three aspects: the
motion distance between the init-goal pair; the load that a pulling arm bears
when moving between the init-goal pair; and the number of shared grasp poses
between the init-goal pair. The three aspects are respectively quantified as
quality parameters $f_{length}$, $f_{load}$, and $f_{grasps}$. The values of
these quality parameters are normalized to (0, 1). Their details are as
follows.
#### IV-B1 $f_{length}$
The $f_{length}$ quality assesses the motion distance between the init-goal
pair. Its goal is to enlarge the length of each pulling motion. $f_{length}$
is computed using equation (1).
$f_{length}=(l_{i}-l_{min})/(l_{max}-l_{min}).$ (1)
Here, $l_{i}$ is the length between the upper pin point of the pulley blocks
and a sampled goal point. It is graphically explained in Fig.4(b). $l_{max}$
and $l_{min}$ are respectively the largest and shortest of all $l_{i}$. The
denominator $(l_{max}-l_{min})$ normalizes the values. $f_{length}$ reaches
1 when the furthest goal sample is selected.
#### IV-B2 $f_{load}$
The $f_{load}$ quality assesses the load that a pulling arm bears when moving
between the init-goal pair. Its goal is to reduce the load of a pulling arm.
The $f_{load}$ value is computed using the tilting angle of the stretched
rope. A Free Body Diagram (FBD) analysis shows that a larger tilting angle
requires a robot arm to bear larger forces. Thus we use the cosine value of
the angle to approximate $f_{load}$:
$f_{load}=\cos{\theta_{i}}$ (2)
Fig.4(d) graphically explains $\theta_{i}$. $f_{load}$ reaches 1 when the rope
is pulled vertically. The robot arm bears the smallest resisting force at this
extreme.
#### IV-B3 $f_{grasps}$
The $f_{grasps}$ quality aims to improve the chance of successful motion
planning. It is computed using the number of shared grasp poses at the initial
grasping point and the goal point as follows:
$f_{grasps}=n_{goal}/n_{init}.$ (3)
A shared grasp pose is defined as a grasp pose that is identical in a cylinder
element's local coordinate frame at both the initial grasping point and the
goal point. Shared grasps are computed by reasoning about the intersection of all
available (IK-feasible and collision-free) grasp poses at the init-goal pair.
$n_{init}$ in the equation indicates the number of available grasp poses at
the initial grasping point. $n_{goal}$ indicates the number of grasp poses at
the goal point that share the same local transformation as those in
$n_{init}$. A large $f_{grasps}$ means more candidate initial and goal pulling
poses for following configuration-space motion planning, which implies a
higher chance of getting a feasible motion trajectory.
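The shared-grasp reasoning can be sketched as a set intersection, assuming each pre-annotated grasp is identified by an id in the cylinder element's local frame (a representational assumption, not the paper's data structure):

```python
def f_grasps(init_available, goal_available):
    # Grasp poses available (IK-feasible and collision-free) at each point,
    # identified by their local-frame ids. A shared grasp has the same id
    # at both the initial grasping point and the goal point.
    shared = set(init_available) & set(goal_available)
    n_init, n_goal = len(init_available), len(shared)
    return n_goal / n_init, shared      # equation (3) and the shared set
```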
The most effective goal point is determined considering the three quality
parameters using the following quality function.
$\displaystyle Q$ $\displaystyle=\boldsymbol{\omega}^{T}\boldsymbol{f}$ (4)
Here, $\boldsymbol{\omega}$=[$\omega_{length}$, $\omega_{load}$,
$\omega_{grasps}$] is the weighting coefficient vector.
$\boldsymbol{f}$=[$f_{length}$, $f_{load}$, $f_{grasps}$] collects the three
quality parameters. The quality function helps to find a goal point that leads
to a large pulling distance, a small pulling force, and a high chance of finding a
successful pulling motion. The weighting coefficients can be tuned following
application requirements. Their influence on the final performance will be
analyzed in the experimental section.
For all sampled goal points, we compute their $Q$ and choose the one with the
largest $Q$ value as the optimal goal. The initial grasping point and the
optimal goal point together form an init-goal point pair. The goal arm pulling
poses are determined by considering the collision-free IK
solutions of the shared grasp poses at the init-goal pair. The two clusters of
hands in Fig.4(d) show the grasping poses at the initial and goal points. The
blue ones in the upper cluster indicate the unshared grasping poses. They are
inaccessible at the goal. The ones with the other colors indicate the shared
grasping poses. In the shared grasping poses, the red ones are IK-infeasible
or collided. The green ones are the finally determined candidates. Their
related arm pulling poses are rendered for better observation.
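Putting the three quality parameters together, goal selection reduces to an argmax of the weighted sum in equation (4). The sketch below takes precomputed per-goal quantities as inputs; the function and parameter names are illustrative, not the paper's API.

```python
import numpy as np

def select_optimal_goal(goals, pin, thetas, n_shared, n_init, w):
    """goals: (N,3) sampled goal points; pin: upper pin point of the pulley
    blocks; thetas: (N,) rope tilting angles; n_shared: (N,) shared grasp
    counts; n_init: available grasps at the init point; w: (3,) weights."""
    l = np.linalg.norm(goals - pin, axis=1)
    f_length = (l - l.min()) / (l.max() - l.min())            # equation (1)
    f_load = np.cos(thetas)                                   # equation (2)
    f_grasps = np.asarray(n_shared) / n_init                  # equation (3)
    Q = np.asarray(w) @ np.vstack([f_length, f_load, f_grasps])  # equation (4)
    return int(np.argmax(Q)), Q
```

The goal with the largest $Q$ forms the init-goal pair together with the initial grasping point.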
### IV-C Planning the Rope-Pulling Motion
After the initial and goal pulling poses are determined, the robot generates a
motion to pull a rope from the initial pose to the goal pose. The motion
includes two sections. In the first section, the pulling arm moves from its
current pose to the initial pulling pose. The section is planned using Bi-
directional Rapidly-exploring Random Trees (Bi-RRT) [8]. In the second
section, the pulling arm pulls the rope by moving from the initial pulling
pose to the goal pulling pose. The section is planned as a linear trajectory
using Quadratic Programming (QP) [33][34]. If the planning succeeds, the robot
will execute the successfully planned motion to pull up the plate. Or else,
the planner will continue to try another init-goal pair. The planning and
execution routine is performed repeatedly until the plate’s tilting angle
exceeds a threshold. We use a threshold because we need to ensure that the
plate is not lifted too much and separated from the table. The plate must
remain in contact with the table until the end of the rope-pulling phase
so that a robot arm can start the tumbling manipulation. Before the execution in
each loop, the plate’s post-execution tilting angle is predicted to prevent it
from exceeding the threshold. The post-execution angle prediction is performed
using the following equation.
$\displaystyle\tilde{\alpha}_{i+1}=\alpha_{i}+(\alpha_{i}-\alpha_{i-1})\times\frac{d_{i}}{d_{i-1}},~{}(i\geq 1)$ (5)
Here, $\alpha_{i-1}$ and $d_{i-1}$ are respectively the plate's previous tilting
angle and the length of the previous rope-pulling motion. $\alpha_{i}$ and
$d_{i}$ are the plate's current tilting angle and the length of the current
rope-pulling motion. The plate's tilting angle after executing the current
rope-pulling motion is estimated as $\tilde{\alpha}_{i+1}$. The various
symbols are graphically explained in Fig.5. The prediction is based on the
proportional relationship between the length of the pulled rope and the change
of the angle in a previous execution. It is decoupled from a specific plate
and provides an upper-bound estimation for the post-execution angle.
Figure 5: Predicting a plate’s next tilting angle $\tilde{\alpha}_{i+1}$ using
the current titling angle $\alpha_{i}$ and previous tilting angle
$\alpha_{i-1}$.
If the predicted angle exceeds the threshold, the length of the next pulling
motion will be adjusted to avoid over-lifting. The following equation shows
the adjustment, with which the plate's tilting angle after finishing the
rope-pulling phase approximates the threshold angle $\alpha_{thld}$.
$\displaystyle\hat{d}_{i}=d_{i-1}\times\frac{\alpha_{thld}-\alpha_{i}}{\alpha_{i}-\alpha_{i-1}}$ (6)
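Equations (5) and (6) amount to a proportional extrapolation and its inverse; in code:

```python
def predict_next_tilt(alpha_prev, alpha_cur, d_prev, d_cur):
    # Equation (5): extrapolate the post-execution tilting angle from the
    # angle change per unit rope length observed in the previous execution.
    return alpha_cur + (alpha_cur - alpha_prev) * d_cur / d_prev

def adjust_pull_length(alpha_prev, alpha_cur, d_prev, alpha_thld):
    # Equation (6): shorten the next pull so the plate stops near the
    # threshold angle instead of over-lifting.
    return d_prev * (alpha_thld - alpha_cur) / (alpha_cur - alpha_prev)
```

For instance, if the previous 0.1 m pull raised the plate from 10 to 20 degrees, an equal next pull is predicted to reach 30 degrees; to stop at 25 degrees, the pull is shortened to 0.05 m.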
### IV-D Determining the Action Arms and Sequences
As mentioned in subsection C, the planning and execution loop is performed repeatedly
until the plate's tilting angle exceeds a threshold. The two arms share the
repetitions sequentially: the planner optimizes init-goal pairs and plans and
executes a pulling motion for the two arms one by one. It switches to the second
arm either after pulling the cable or without any pulling action.
Particularly, if no init-goal pair is found or no successful motion is
planned, the planner switches to the next arm without pulling, exhibiting
autonomy in determining the action arms and sequences.
The arrows in Fig.3 show the mentioned switches. The green arrows indicate the
flow with pulling executions. The current arm along the flow is actuated to
pull the cable. It will switch to the other arm after execution. The red
arrows indicate the flow without pulling. During the planning, a failure may
be caused by different reasons like no available initial pose, no available
init-goal pair, failed to find a motion for an init-goal pair, etc. If the
reason is a collision between the lifted plate and the other arm, the planner
will try moving the other arm away and continue the currently planned results
(as shown by “Yes2” in Fig.3). Or else, the planner will jump over the current
init-goal pair and perform replanning by iterating to other candidates (as
shown by the “No” condition after “$Succ.$” and the “Yes1” condition after
“$PlateColl.$” in the figure). If none of the sampled goals form a feasible
init-goal pair, the current arm will not perform the pulling action. Instead,
it simply re-grips by opening the gripper, moving to the init pulling pose,
and closing the gripper. The planner switches to plan for the other arm after
the re-gripping. A final failure is reported when all cylinder elements on a
detected rope are traversed, and none of them lead to a feasible initial
pulling pose.
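The switching logic of Fig.3 can be condensed into a loop of the following shape; the callbacks are hypothetical placeholders for the planner, executor, and perception modules.

```python
def rope_pulling_phase(arms, plan_pull, execute, regrip, tilt_angle, thld):
    """Alternate the two arms until the plate's tilting angle exceeds thld.
    plan_pull(arm) returns a motion, or None when no feasible init-goal
    pair is found; in the latter case the arm only re-grips the rope."""
    i = 0
    while tilt_angle() < thld:
        arm = arms[i % 2]
        motion = plan_pull(arm)
        if motion is not None:
            execute(arm, motion)     # green flow: pull, then switch arms
        else:
            regrip(arm)              # red flow: re-grip without pulling
        i += 1                       # switch to the other arm
    return i                         # number of loop iterations performed
```

Depending on how often each arm finds a feasible init-goal pair, this loop naturally produces single-arm, strictly alternating, or mixed action sequences.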
Following the green and red workflows, the action arms and sequences are
determined autonomously. In an extreme case, a robot may use a single arm to
pull while never finding feasible goals for the other arm. In another extreme
case, the robot may use the two arms one by one when both arms find feasible
init-goal pairs. More moderate cases are that the robot sometimes uses the left
arm and sometimes the right arm, depending on the number of feasible init-goal
pairs found during the repeated planning.
Fig.6 shows two examples. In Fig.6(a), the robot is pulling up a small plate.
The robot finished a right-arm execution in (a.1) and is checking the
collisions of the shared left-arm grasping poses. The top-most cylinder
element on the detected rope is chosen as the init point. The rendered left
arms are the shared IK-feasible poses. There are both collided and collision-
free ones. The collided poses are shown in red, and the collision-free poses
are shown in green. In (a.2), the robot plans a linear motion to move the left
arm from an initial pulling pose to a goal pulling pose. Fig.6(b) shows the
case of a larger plate. All shared IK-feasible grasp poses of the left arm
collide with the plate in (b.1). The planner reaches a final failure and
switches to the right arm without triggering execution. In (b.2), the right
arm finds a new init-goal pair and plans a linear motion to move the right arm
from an initial pulling pose to a goal pulling pose.
Figure 6: Two examples of switching the action arms. The goal is to lift the
board from the yellow pose to the green pose. (a) The robot successfully finds
an init-goal pair for its left arm and plans a pulling motion. (b) The robot
fails to find a goal for its left arm because of the bulky plate’s
obstruction. It switches to the right arm without execution.
### IV-E Including Re-Recognition in the Loop
The plate pose is re-recognized using a depth camera after each execution to
acquire its real tilting angle. Like recognizing a rope, RANSAC and ICP are
used to extract the plate’s largest planar surface. The real tilting angle of
the plate is computed considering the normal of the extracted planar surface.
It is then used to update $\alpha_{i}$ in equation (5) as well as to
predict the $\tilde{\alpha}_{i+1}$ of the next rope-pulling action. Fig.7 shows
an example. The point cloud of the scene in (a) is acquired and shown in (b).
The estimated planar surface normal and real tilting angle are illustrated
together with the point cloud. The optimization, planning, execution, and
recognition workflows mentioned in this section’s subsections are repeated for
the two arms sequentially until the plate is lifted up to a given threshold
angle.
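Once the plate's largest planar surface is extracted, the real tilting angle follows from the angle between the estimated surface normal and the table normal. A minimal numpy sketch, assuming the table normal is the world z-axis:

```python
import numpy as np

def tilt_from_normal(plane_normal, table_normal=(0.0, 0.0, 1.0)):
    # Angle between the extracted plane normal and the table normal,
    # insensitive to the sign of the estimated normal.
    n = np.asarray(plane_normal, dtype=float)
    t = np.asarray(table_normal, dtype=float)
    c = abs(n @ t) / (np.linalg.norm(n) * np.linalg.norm(t))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```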
Figure 7: A plate’s real tilting angle is re-recognized using a depth camera
after each execution. The algorithm extracts the largest planar surface and
computes the real tilting angle considering the normal of the extracted
surface.
## V Tumbling the Plate Using Sliding-Push
After lifting the plate to the threshold angle, one robot arm will tumble it
over by pushing. Here, we assume that a plate always has edge contacts with
the table. It will be tumbled by rotating around the edge contacts. The
rotation trajectory of a plate is modeled as a time-variant function, and the
robot arm follows the trajectory by using sliding-push. We define a sliding-
push as a pushing policy that allows sliding on the contact surface of a
plate. Like the rope-pulling actions, the sliding-push motion is performed
continuously until the plate is rotated beyond a threshold pose where the
force and moment reach equilibrium when the robot moves away from the pushing
point.
Besides the edge contacts, we also assume the following conditions and
constraints in planning the sliding-push: (1) All forces appear inside a
vertical plane perpendicular to the contact edges; (2) The plate and the
environment are rigid; (3) The push between two nearby time instants is quasi-
static; (4) The friction at the contact point follows the Coulomb friction
model; (5) Plate-slipping on the table surface is not considered, but finger-
sliding on the contact surface is allowed. The fifth assumption in particular is
a strong constraint and leads to fewer solutions, but we keep it to
narrow down the search space. A plate has three contacts during pushing: the rope
connection, robot fingertip contact, and table contact. The force at the
connecting point with the rope is difficult to control. The contact between
the fingertip and the plate is assumed to be a point contact. During pushing,
the plate may both slide on the table surface and rotate around the contact
edge, making it extremely difficult to perform optimization. To avoid these
problems, we adopt the fifth assumption to constrain a robot arm to rotate a
plate without allowing it to slide on the table surface. Pushing actions will
be carefully optimized considering the constraint on table slipping.
Meanwhile, we allow a pushing finger to slide on the plate's contact surface
to maintain a high success rate.
Based on these assumed constraints, a plate’s motion trajectory is determined
once the contact edge is known. The tumbling action is implemented by
optimizing a robot arm’s pushing trajectory while considering the plate’s
motion trajectory. As the pushing arm tumbles a plate, the remaining robot arm
holds or loosens the crane rope alternatively to avoid jerks. Take the plate
pose at a time instant $t_{i}$ shown in Fig.8 for example. We divide the
pushing at this time instant into two states $s_{i}$ and $s_{i}^{\prime}$. The
plate is pushed from a previous pose at $t_{i-1}$ to the current pose in the
first state. The crane rope is straightened, and a tension force $T$ appears
along the rope. In the second state, the rope is loosened by the other robot
arm, and the rope tension $T$ disappears. The tumbling is performed across a
sequence of these states at different time instants in the following way.
$\underbrace{s_{0}\xrightarrow{lsn}s_{0}^{\prime}}_{t_{0}}\xrightarrow{hld}\underbrace{s_{1}\xrightarrow{lsn}s_{1}^{\prime}}_{t_{1}}\xrightarrow{hld}\ldots\underbrace{s_{n-1}\xrightarrow{lsn}s_{n-1}^{\prime}}_{t_{n-1}}\xrightarrow{hld}\underbrace{s_{n}}_{t_{n}}$
Between each $s_{i-1}^{\prime}$ and $s_{i}$, the other arm holds ($hld$ for
abbreviation) the crane rope. Between each $s_{i}$ and $s_{i}^{\prime}$, the
crane rope is loosened ($lsn$ for abbreviation). We analyze the forces at each
time instant and determine the pushing points at each of them by minimizing
the balancing forces and reducing the rope tension. The formal expression for
the minimization is as follows.
$\displaystyle\underset{\boldsymbol{r}_{1}}{\text{min}}\quad{k_{1}(F_{0}^{T}F_{0})}+{k_{2}(F_{1}^{T}F_{1})}+{k_{3}(T^{T}T)}$ (7a)
s.t. $\displaystyle T+G+\sum_{j=0}F_{j}=0$ (7b)
$\displaystyle\boldsymbol{r}_{t}\times T+\boldsymbol{r}_{g}\times G+\sum_{j=0}\boldsymbol{r}_{j}\times F_{j}=0$ (7c)
$\displaystyle F_{1}^{T}F_{1}\in(0,30)$ (7d)
$\displaystyle\frac{f_{0x}}{f_{0y}}\in(0,\mu_{0})$ (7e)
$\displaystyle\mathtt{acos}(\frac{F_{1}\cdot\boldsymbol{r}_{1}}{||F_{1}||\,||\boldsymbol{r}_{1}||})\in(\mathtt{atan}\frac{1}{\mu_{1}},\pi-\mathtt{atan}\frac{1}{\mu_{1}})$ (7f)
$\displaystyle\boldsymbol{r}_{1}(t_{i})^{T}\boldsymbol{r}_{1}(t_{i})\in(0,l)$ (7g)
$\displaystyle|\boldsymbol{r}_{1}(t_{i})-\boldsymbol{r}_{1}(t_{i-1})|\leq|\boldsymbol{v}_{max}|(t_{i}-t_{i-1})$ (7h)
$\displaystyle\mathtt{acos}(\frac{(\boldsymbol{r}(t_{i+1})-\boldsymbol{r}(t_{i}))\cdot(\boldsymbol{r}(t_{i})-\boldsymbol{r}(t_{i-1}))}{||\boldsymbol{r}(t_{i+1})-\boldsymbol{r}(t_{i})||\,||\boldsymbol{r}(t_{i})-\boldsymbol{r}(t_{i-1})||})\leq\gamma$ (7i)
The meanings of the various variables are listed below. They are also
graphically illustrated in Fig.8 for readers’ convenience.
* $F_{0}$
Force at the table contact point $\boldsymbol{p}_{0}$ in an $s_{i}$ state. An
edge contact is simplified into a point contact. $F_{0}$ = [$f_{0x}$,
$f_{0y}$].
* $F_{1}$
Force at the robot finger tip contact point $\boldsymbol{p}_{1}$ (pushing
point) in an $s_{i}$ state. $F_{1}$ = [$f_{1x}$, $f_{1y}$].
* $G$
Gravity.
* $T$
Tension caused by a stretched crane rope. It exists only in an $s_{i}$ state.
* $\boldsymbol{r}_{t}$
The vector pointing from the rotation center to the rope connecting point.
* $\boldsymbol{r}_{g}$
The vector pointing from the rotation center to the plate’s center of mass.
* $\boldsymbol{r}_{j}$
The vector pointing from the rotation center to $\boldsymbol{p}_{j}$.
* $\mu_{0}$
The Coulomb friction coefficient at $\boldsymbol{p}_{0}$.
* $\mu_{1}$
The Coulomb friction coefficient at $\boldsymbol{p}_{1}$.
* $\boldsymbol{v}_{max}$
The maximum speed of a pushing arm.
* $(t_{i})$
If a symbol does not carry this suffix, it denotes a value at the current time
instant $t_{i}$. Otherwise, the symbol denotes a value at the time instant
shown in the parentheses.
* $\gamma$
The maximum allowable angle between current and previous pushing directions.
Figure 8: The rotational trajectory of a plate is modeled as a time-variant
function across $t_{0}$, $t_{1}$, …, $t_{n}$. A pushing at a time instant
$t_{i}$ is divided into two states $s_{i}$ and $s_{i}^{\prime}$. The goal of
optimization is to simultaneously minimize $F_{0}$, $F_{1}$, and $T$, while
considering a changing
$\boldsymbol{r}_{1}=\overrightarrow{\boldsymbol{p}_{0}\boldsymbol{p}_{1}}$
(sliding-push).
The optimization goal in equation (7) is to minimize the
balancing force needed in the entire system. The idea behind this optimization
is that by minimizing the force applied to the entire system, the robot can
push the plate with a minimum force and reduce the external forces from the
rope and the desk. The smaller forces will also decrease the risk of rotating
in an unexpected direction (caused by non-vertical rope tension) or slipping
on the table. The constraints (7b), (7c) balance the forces and torques at an
$s_{i}$ state. The constraints (7d), (7e), (7f) add bounds to the force
exerted by a robot and the friction cone at the contact points at the $s_{i}$
state. The constraint (7g) limits $\boldsymbol{r}_{1}$ to be on a contact
surface. The constraint (7h) ensures the contact points at two consecutive
time instants are reachable. The last constraint (7i) smooths the change in
the pushing direction. It prevents the pushing direction at a time instant
from getting largely diverted from its precedence. Both constraints (7g) and
(7i) are important to finding a practical pushing trajectory. Their roles and
difference are illustrated in Fig.9. Essentially, the constraint (7g) limits
the pushing distance between adjacent pushing points. The constraint (7i)
limits the changes in pushing directions.
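At each state $s_{i}$, the optimization can be approximated by screening sampled candidates for $\boldsymbol{r}_{1}$ against the constraints and scoring the survivors with the quadratic objective. The force solver below is a hypothetical stand-in for the quasi-static balance of (7b)-(7c), and only the force-bound and table-friction constraints are shown.

```python
import numpy as np

def choose_push_point(candidates, solve_forces, mu0, k=(1.0, 1.0, 1.0)):
    """candidates: iterable of r1 vectors; solve_forces(r1) -> (F0, F1, T),
    the balancing forces from (7b)-(7c). Returns the feasible r1 with the
    smallest objective k1*|F0|^2 + k2*|F1|^2 + k3*|T|^2, or None."""
    best, best_cost = None, np.inf
    for r1 in candidates:
        F0, F1, T = (np.asarray(f, dtype=float) for f in solve_forces(r1))
        if not 0.0 < F1 @ F1 < 30.0:         # (7d): bound the pushing force
            continue
        if not 0.0 < F0[0] / F0[1] < mu0:    # (7e): table friction cone
            continue
        cost = k[0] * (F0 @ F0) + k[1] * (F1 @ F1) + k[2] * (T @ T)
        if cost < best_cost:
            best, best_cost = r1, cost
    return best
```

A full implementation would also enforce (7f)-(7i) and run a proper constrained solver, but the screening structure is the same.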
Figure 9: (a) Equation (7h) constrains the maximum distance between adjacent
pushing positions. It makes the blue arrows in the figure short. (b) Equation
(7i) constrains the deviation of a subsequent pushing direction with its
precedence. It helps to avoid oscillation and make the trajectory smooth.
The minimization is performed across all $s_{0}$, $s_{0}^{\prime}$, $s_{1}$,
$s_{1}^{\prime}$, … states to determine a sequence of optimal
$\boldsymbol{r}_{1}$, i.e., an optimal sliding-push trajectory. Note that
the robot kinematic constraint is not considered during the optimization.
Instead of including it integrally as a constraint, we examine the kinematic
constraint lazily after a sequence is found. If the robot IK is not solvable,
we switch to the next best $\boldsymbol{r}_{1}$ until success or a failure is
reported. With the IK check, the optimization finds both a sequence
of pushing points and a sequence of robot joint trajectories that produce the
pushing motion along the pushing points. The joint trajectories will be
executed by the target arm. The other arm plays the role of holding and
loosening the crane rope following the changes of states. As mentioned at the
beginning of this section, the resulting motion is a sliding-push. The contact
points at different $t_{i}$ are not necessarily the same in a plate’s local
coordinate system. The pushing finger may slide on the contact surface over
consecutive $t_{i}$ during the tumbling.
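The lazy kinematic check can be sketched as follows, with a hypothetical IK routine that returns None for unreachable pushing points:

```python
def first_reachable(push_points_ranked, solve_arm_ik):
    # push_points_ranked: candidate r1 values ordered by optimization
    # cost (best first). Return the first one whose IK is solvable.
    for r1 in push_points_ranked:
        traj = solve_arm_ik(r1)      # None when the point is unreachable
        if traj is not None:
            return r1, traj
    return None, None                # report failure
```

Deferring the IK test keeps the force optimization simple while still guaranteeing that the executed pushing points are kinematically reachable.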
In addition, the contact edge between a plate and a table surface may change
during the tumbling, since a plate always has a nonzero thickness. Our tumbling
planner takes into account the changes of the contact edge and optimizes the
pushing trajectory considering the changing rotation axes. Fig.10(a.1-2) show the
found pushing trajectories for two plates with different thicknesses. The
trajectories experience slight changes as the plate rotates onto a second
contact edge.
Finally, after the pushing robot arm finishes the whole planned trajectory, it
works together with the other arm to return the rope and to further lower down
the plate on the table. Fig.10(b) illustrates the returning actions. They are
also planned online with visual rope detection. The plate's real tilting angle
is monitored during returning to make sure it is completely lowered onto the
table. The difference is that there is no optimization; the robot moves along a
straight line pointing to the upper pulley block. The full task is reported as
completed after the plate reaches the table surface.
Figure 10: (a.1-2) The optimized trajectories experience slight changes as the
plate rotates onto a second contact edge. The change gets significant as the
plate becomes thick. (b) Returning the rope using both arms to completely
lower down a plate onto the table.
## VI Experiments and Analysis
The proposed method is implemented and examined using the robot system shown
in Fig.2. The system includes two UR3e robots with two Robotiq F85 two-finger
grippers at each robot end flange. A Kinect V2 (Microsoft) sensor is used to
acquire 3D point clouds. The computer used for planning and optimization is a
PC with Intel Core i9-9900K CPU, 32.0GB memory, and GeForce GTX 1080Ti GPU.
The programming language for implementing the algorithms is Python.
### VI-A Influence of the Goal Quality Function for Pulling
First, we analyze the influence of the quality function presented in equation
(4), and examine the changes of the selected pulling goals under different
weights. We perform the analysis by setting one of the three weights to a
higher value while keeping the other two to zero and observe the planned
results. Specifically, we compare $\omega$ = [$\omega_{length}$,
$\omega_{load}$, $\omega_{grasps}$] = $[1,0,0]^{T}$, $[0,1,0]^{T}$, and
$[0,0,1]^{T}$ and show the distribution of the selected goals for the left and
right arms respectively.
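Assuming equation (4) combines the three terms as a weighted sum, which the ablation setup above suggests, goal selection can be sketched as follows (function and variable names are illustrative, not from the authors' implementation; the per-candidate scores are assumed precomputed and normalised):

```python
def select_goal(goals, f_length, f_load, f_grasps, w):
    """Pick the pulling goal maximising the weighted quality
    w[0]*f_length + w[1]*f_load + w[2]*f_grasps for each candidate."""
    def quality(i):
        return w[0] * f_length[i] + w[1] * f_load[i] + w[2] * f_grasps[i]
    best_index = max(range(len(goals)), key=quality)
    return goals[best_index]

# With w = [1, 0, 0] only the pulling-distance term matters, so the
# candidate with the largest f_length wins:
goals = ["near", "mid", "far"]
best = select_goal(goals,
                   f_length=[0.1, 0.5, 0.9],
                   f_load=[0.9, 0.5, 0.1],
                   f_grasps=[0.2, 0.9, 0.2],
                   w=[1, 0, 0])
```

Setting a single weight to a nonzero value, as in the three cases below, isolates the corresponding term exactly as in this sketch.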
#### VI-A1 $\omega$ = $[1,0,0]^{T}$
In this case, only $f_{length}$ influences the rope-pulling motion. It drives
the robot to select goal points that lead to a large pulling distance.
Fig.11(a) and (b) show an example of a selected init-goal pose pair and a
distribution of the selected goal points under the weight setting,
respectively. The robot selects faraway points for both the left and right
arms to pull the rope with long distances.
Figure 11: Results when $\omega$ = $[1,0,0]^{T}$: (a) An example of a selected
init-goal pose pair. (b) A distribution of the selected goals under 15 times
of simulation.
#### VI-A2 $\omega$ = $[0,1,0]^{T}$
In this case, only $f_{load}$ affects the pulling motion. Fig.12 exemplifies
an init-goal pair and a statistical distribution of the selected goal points.
The robot tends to select goals that make the stretched rope have small
tilting angles, reducing the forces that an arm needs to bear.
Figure 12: Results when $\omega$ = $[0,1,0]^{T}$: (a) An example of a selected
init-goal pose pair. (b) Distribution of the selected goals after 15 times of
simulation.
#### VI-A3 $\omega$ = $[0,0,1]^{T}$
In this case, only $f_{grasps}$ affects the rope-pulling motion. This
parameter is calculated based on the number of grasping poses simultaneously
available to the initial and goal points of a pulling motion. Since the two
arms have higher manipulability near the body center, the robot has more
feasible grasp poses when a goal point is near the central line. This
expectation is validated by Fig.13. In the upper two examples of this figure,
the left and right hands pull the rope to points near the center. In the lower
two examples, we intentionally included an obstacle at the center for
comparison. When obstacles are present, the number of grasp poses near the
center is reduced. As a result, points away from the center are selected as
goals. The $f_{grasps}$ parameter allows the robot to select goals considering
both kinematic and geometric constraints.
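Since $f_{grasps}$ counts grasp poses that are simultaneously feasible at the initial and goal points, a hedged sketch of its computation might look like this; the `is_feasible` callable is a stand-in for the planner's real reachability and collision checks, not an interface from the paper:

```python
def f_grasps(grasp_candidates, init_point, goal_point, is_feasible):
    """Count grasp poses feasible at BOTH the initial and the goal point.

    is_feasible(grasp, point) should return True when the arm can reach
    `point` with `grasp` without collision (e.g. with an obstacle)."""
    return sum(1 for g in grasp_candidates
               if is_feasible(g, init_point) and is_feasible(g, goal_point))

# Toy example: an obstacle near the center blocks the "top" grasp there,
# so a central goal admits fewer shared grasps than a side goal.
grasps = ["top", "left", "right"]
feasible = {("top", "center"): False, ("top", "side"): True,
            ("left", "center"): True, ("left", "side"): True,
            ("right", "center"): True, ("right", "side"): True}
check = lambda g, p: feasible[(g, p)]

central = f_grasps(grasps, "side", "center", check)   # fewer shared grasps
lateral = f_grasps(grasps, "side", "side", check)     # all grasps survive
```

This mirrors the behavior described above: obstacles reduce the count near the center, pushing the selected goals away from it.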
Figure 13: Results when $\omega$ = $[0,0,1]^{T}$: (a.1) An example of a
selected init-goal pose pair. (a.2) Distribution of the selected goals after
15 times of simulation. (b.1-2) The exemplary selected pair and sample
distributions when an obstacle (the purple block) is intentionally placed in
the robot workspace.
### VI-B Influence of the Constraints and Parameters for Tumbling
Optimization
Second, we study the influence of different constraints and parameter settings
on the optimized tumbling motion. We are especially interested in: (1)
Necessity of the trajectory constraints (7h), (7i); (2) Different values of
$\boldsymbol{v}_{max}$ and $\gamma$; (3) Different friction coefficients,
center of mass (com) positions, and hooking positions. For each of the
different constraints and values, we run our algorithms using plates of
different sizes (40$mm\times$300$mm$, 150$mm\times$300$mm$,
44$mm\times$500$mm$) and compare the optimized results. The sizes are
consistent with the ones used in the real-world experiments (Table I).
#### VI-B1 Necessity of the trajectory constraints (7h) and (7i)
We study the change of the trajectories under constraints (7h) and (7i) and
examine their necessity. The results are shown in Fig.14. Each column of the
figure uses a plate of a different size. The four rows are respectively:
(a.1-a.3) Both (7h) and (7i) are considered; (b.1-b.3) Only (7h) is
considered; (c.1-c.3) Only (7i) is considered; (d.1-d.3) Neither (7h) nor (7i)
is considered. The results imply that both (7h) and (7i) are necessary for
obtaining a smooth and stable pushing trajectory. When constraint (7i) is
removed, the trajectory oscillates. The oscillation can be observed by
comparing the trajectories in sub-figure (b.2) with that of (a.2). When
constraint (7h) is removed, the distance between adjacent pushing points
becomes large. The large distance can be easily observed by comparing the
second half of the trajectory in (c.2) with that of (a.2). When both (7h) and
(7i) are omitted, the optimized trajectories involve jitters and jerks and
become unsuitable for robotic execution. Note that in getting these results,
we set the parameters $\boldsymbol{v}_{max}$ = $30mm$/$s$, and $\gamma$ =
$20^{\circ}$ when they are used.
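Read this way, (7h) bounds the distance between adjacent pushing points (a speed limit $\boldsymbol{v}_{max}$ per time step) and (7i) bounds the direction change between consecutive segments by $\gamma$. A hedged sketch of such point-wise checks follows; the exact constraint forms are given in the paper's equation (7), so this is only an interpretation:

```python
import math

def satisfies_trajectory_constraints(points, v_max, gamma_deg, dt=1.0):
    """Check a discretised pushing trajectory against a (7h)-style speed
    bound and a (7i)-style direction-change bound.

    points: list of (x, y) pushing points sampled once per time step dt."""
    gamma = math.radians(gamma_deg)
    for i in range(1, len(points)):
        dx = points[i][0] - points[i - 1][0]
        dy = points[i][1] - points[i - 1][1]
        if math.hypot(dx, dy) > v_max * dt:   # (7h): adjacent points too far apart
            return False
        if i >= 2:
            pdx = points[i - 1][0] - points[i - 2][0]
            pdy = points[i - 1][1] - points[i - 2][1]
            a, b = math.atan2(dy, dx), math.atan2(pdy, pdx)
            diff = abs((a - b + math.pi) % (2 * math.pi) - math.pi)
            if diff > gamma:                  # (7i): direction change too large
                return False
    return True

# A smooth, slow trajectory passes; a jumpy one violates the speed bound:
smooth = [(0, 0), (20, 2), (40, 3), (60, 5)]
jumpy = [(0, 0), (60, 0)]
ok_smooth = satisfies_trajectory_constraints(smooth, v_max=30, gamma_deg=20)
ok_jumpy = satisfies_trajectory_constraints(jumpy, v_max=30, gamma_deg=20)
```

Dropping the speed check reproduces the large adjacent-point distances seen without (7h); dropping the angle check permits the oscillation seen without (7i).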
Figure 14: Necessity of the trajectory constraints (7h) and (7i). Each column
of the figure uses a plate with a different size (40$mm\times$300$mm$,
150$mm\times$300$mm$, 44$mm\times$500$mm$). The four rows are respectively:
(a.1-a.3) Both (7h) and (7i) are considered; (b.1-b.3) Only (7h) is
considered; (c.1-c.3) Only (7i) is considered; (d.1-d.3) Neither (7h) nor (7i)
is considered.
#### VI-B2 Different values of $\boldsymbol{v}_{max}$ and $\gamma$
In this part, we investigate the influence of the parameters
$\boldsymbol{v}_{max}$ and $\gamma$ in the trajectory constraints on the
optimization. The resulting trajectories using different $\boldsymbol{v}_{max}$
and $\gamma$ values are shown in Fig.15. Like Fig.14, each column of the
figure uses a plate with a different size. For the results in the upper two
rows, the $\gamma$ parameter in constraint (7i) is changed, while
$\boldsymbol{v}_{max}$ is fixed. For the results in the lower two rows, the
value of $\boldsymbol{v}_{max}$ in constraint (7h) is changed, and (7i) is
fixed.
Figure 15: Influence of $\boldsymbol{v}_{max}$ and $\gamma$ values on the
optimized trajectories. (a.1-a.3) and (b.1-b.3): The $\gamma$ parameter in
constraint (7i) is changed, while $\boldsymbol{v}_{max}$ is fixed; (c.1-c.3)
and (d.1-d.3): The value of $\boldsymbol{v}_{max}$ in constraint (7h) is
changed and (7i) is fixed.
For Fig.15(a.1) and (b.1), we set $\boldsymbol{v}_{max}$ = $45mm/s$ and
$60mm/s$ respectively, and fixed $\gamma$ = $20^{\circ}$. No significant
change is observed in this case, as constraint (7i) plays the dominant role.
Likewise, for (a.3) and (b.3), we set $\boldsymbol{v}_{max}$ = $60mm/s$ and
$75mm/s$, and fixed $\gamma$ = $20^{\circ}$. There is no significant change
observed. For (a.2) and (b.2), we set $\boldsymbol{v}_{max}$ = $50mm/s$ and
$70mm/s$, and fixed $\gamma$=$20^{\circ}$. In this case, the distances between
adjacent pushing points become larger. The observation shows that the
constraint (7h) is effective.
Specifically, for Fig.15(c.1) and (d.1), we set $\gamma$ = $40^{\circ}$ and
$60^{\circ}$ respectively, and fixed $\boldsymbol{v}_{max}$ = $30mm/s$. By
comparing them, we can observe that as the value of $\gamma$ increases, the
trajectory is biased downward. For Fig.15(c.2) and (d.2), we set $\gamma$ =
$30^{\circ}$ and $50^{\circ}$ respectively, and fixed $\boldsymbol{v}_{max}$ =
$30mm/s$. By comparing them, we can observe that as the value of $\gamma$
increases, the degree of oscillation becomes stronger. For Fig.15(c.3) and
(d.3), we set $\gamma$ = $40^{\circ}$ and $60^{\circ}$ respectively, and fixed
$\boldsymbol{v}_{max}$ = $45mm/s$. The results show that the downward bias
becomes slightly larger in the second half of the trajectory (although no
significant difference is observable). The observation indicates that the
constraint (7i) is effective.
#### VI-B3 Different friction coefficients, center of mass (com) positions,
and hooking positions
Figure 16: Influence of different friction coefficient ($\mu_{1}$), center of
mass (com) position ($\boldsymbol{r}_{g}$), and hooking position
($\boldsymbol{r}_{h}$) on the optimized trajectories. (a.1-a.3) $\mu_{1}$ =
0.05; (b.1-b.3) $\mu_{1}$ = 0.6; (c.1-c.3) $\boldsymbol{r}_{g}$ is changed to
the $\boldsymbol{r}_{g_{2}}$ shown in Fig.17; (d.1-d.3) $\boldsymbol{r}_{h}$
is changed to the $\boldsymbol{r}_{h_{1}}$ shown in Fig.17.
In this section, we investigate the changes in trajectory when the friction
coefficient at the pushing point ($\mu_{1}$), center of mass (com) position
($\boldsymbol{r}_{g}$), or hooking position ($\boldsymbol{r}_{h}$) vary. The
results are shown in Fig.16. Each column of the figure uses a plate of a
different size. For Fig.16(a.1), (a.2), and (a.3), $\mu_{1}$ is set to 0.05.
When the plates are thin, such as (a.1) and (a.3), the optimized trajectories
bias downward in the second half. In contrast, when plates are thick, like the
one shown in (a.2), the optimized trajectory is biased upward. For Fig.16(b.1),
(b.2), and (b.3), $\mu_{1}$ is changed to 0.6. Compared with the results for
0.05, the optimized trajectories remain neutral in (b.1) and (b.3) but are biased
downward in (b.2). The results show that $\mu_{1}$ significantly influences
the choices of pushing points in the second half of the trajectory.
Figure 17: We study the influence of the center of mass (com) position
($\boldsymbol{r}_{g}$) and hooking position ($\boldsymbol{r}_{h}$) on the
optimized trajectories by varying each of them to two different positions.
These positions are denoted by $\boldsymbol{r}_{g_{1}}$,
$\boldsymbol{r}_{g_{2}}$, $\boldsymbol{r}_{h_{1}}$, and
$\boldsymbol{r}_{h_{2}}$ in the figure.
Next, we investigate the influences of $\boldsymbol{r}_{g}$ and
$\boldsymbol{r}_{h}$. We especially consider two choices for them, as shown by
$\boldsymbol{r}_{g_{1}}$, $\boldsymbol{r}_{g_{2}}$, $\boldsymbol{r}_{h_{1}}$,
and $\boldsymbol{r}_{h_{2}}$ in Fig.17. $\boldsymbol{r}_{g_{1}}$ and
$\boldsymbol{r}_{g_{2}}$ are the geometric center of the plate rectangle and a
point offset from it by one quarter of the plate, respectively.
$\boldsymbol{r}_{h_{1}}$ and
$\boldsymbol{r}_{h_{2}}$ are the top-left and top-right corners of the plate
rectangle. All previous trajectories were obtained considering
$\boldsymbol{r}_{g_{1}}$ and $\boldsymbol{r}_{h_{1}}$. In this part, we change
them to the other positions for comparison. Fig.16(c.1), (c.2), and (c.3) show
the trajectories with respect to $\boldsymbol{r}_{g_{2}}$ and
$\boldsymbol{r}_{h_{1}}$. The results show that when the com is shifted to the
bottom of the plate, the maximum load on a robot hand becomes smaller and the
hand adjusts itself over a wider range. Fig.16(d.1), (d.2), and (d.3) show the
trajectories with respect to $\boldsymbol{r}_{g_{1}}$ and
$\boldsymbol{r}_{h_{2}}$. For all of them, the starting positions get lower.
This may be because the robot hand bears a smaller force when pushing starts
from a point far from the hooking position.
### VI-C Results of Various Plates
Third, we examine the performance of the proposed method using three plates
shown in Fig.18: an acrylic board, a stainless box, and a plywood board. The
parameters of these plates are shown in Table I. We assume a plate initially
lies on the table in front of the robot and the crane hook is pre-connected.
Figure 18: Three different plates used in the experiments. (a) An Acrylic
Board. (b) A Stainless Box. (c) A Plywood Plate. Their various parameters are
shown in Table I.
TABLE I: Parameters of the Plates Used in Experiments
Plate Name | h, l, w (mm) | m (kg) | $\boldsymbol{r}_{g}$ | $\mu_{0}$ | $\mu_{1}$
---|---|---|---|---|---
Acrylic Board | 300, 300, 40 | 4.0 | Geom.Cent. | 0.5 | 0.4
Stainless Box | 300, 400, 150 | 6.0 | Geom.Cent. | 0.4 | 0.1
Plywood Board | 500, 400, 44 | 6.4 | Geom.Cent. | 0.6 | 0.3
* $l$ - length; $w$ - width; $h$ - height; $m$ - mass.
Fig.19 shows some snapshots of both the planning environment and real-world
execution results for the three plates. In these cases, the weight of the goal
quality function is chosen as $\omega$ = $[1,1,1]^{T}$. The robot managed to
pull up the plates with satisfying performance. Readers are encouraged to
watch the supplementary video submitted together with this manuscript to
better observe the optimized lifting up actions and tumbling trajectories for
the different plates.
Figure 19: Snapshots of the robotic executions for the three plates ($\omega$
= $[1,1,1]^{T}$; Watch our supplementary video for details).
For the real-world execution, we are most interested in the number of pulling
actions, the pulling distances, and the forces borne by the pulling hands. We
show them in detail in Tables II-IV.
First, in Table II, we compare the execution results of $\omega$ =
$[1,1,1]^{T}$ with three other extreme candidates $\omega$ = $[1,0,0]^{T}$,
$\omega$ = $[0,1,0]^{T}$, and $\omega$ = $[0,0,1]^{T}$. The results indicate
that when the weight is chosen as $\omega$ = $[1,1,1]^{T}$, the robot works
with a smaller number of actions and relatively large average pulling
distances. Meanwhile, the hands bear moderate pulling forces. When the weight
is chosen to be $\omega$ = $[1,0,0]^{T}$, the rope’s pulling length becomes
dominantly large, and the robot hands bear more forces. When the weight is
chosen as $\omega$ = $[0,1,0]^{T}$, the forces borne by the hands are highly
suppressed, but the number of actions increases. When the weight is chosen as
$\omega$ = $[0,0,1]^{T}$, there is no significant influence on the length and
force values. However, without the parameter $f_{grasps}$, collisions between
the robot and obstacles cannot be taken into account.
TABLE II: Comparison of the Real-World Pulling Actions Under Different Weights
| $w$ = [1,1,1]T | $w$ = [1,0,0]T
---|---|---
Plate Name | # Actn. | Avg.Dist. (L, R) (mm) | Forces (L, R) (N) | # Actn. | Avg.Dist. (L, R) (mm) | Forces (L, R) (N)
Acrylic Board | L2, R3 | (367.68, 430.06) | (3.98, 6.42) | L2, R3 | (448.50, 456.41) | (6.64, 7.48)
Stainless Box | L2, R3 | (294.99, 429.37) | (5.70, 6.51) | L2, R3 | (350.41, 422.39) | (5.83, 7.17)
Plywood Board | L3, R6 | (332.65, 388.62) | (6.07, 8.12) | L3, R6 | (411.65, 437.13) | (8.21, 8.12)
Total Average | 3.17 | 373.90 | 6.13 | 3.17 | 421.08 | 7.24
| $w$=[0,1,0]T | $w$=[0,0,1]T
Plate Name | # Actn. | Avg.Dist. (L, R) (mm) | Forces (L, R) (N) | # Actn. | Avg.Dist. (L, R) (mm) | Forces (L, R) (N)
Acrylic Board | L3, R3 | (348.71, 340.69) | (4.93, 3.86) | L4, R5 | (283.86, 257.14) | (5.59, 5.96)
Stainless Box | L3, R3 | (236.63, 350.60) | (5.64, 5.94) | L4, R4 | (288.68, 297.82) | (7.17, 5.92)
Plywood Board | L3, R7 | (334.52, 327.46) | (5.99, 5.86) | L4, R9 | (297.88, 296.20) | (6.62, 7.45)
Total Average | 3.67 | 323.10 | 5.37 | 5.00 | 286.93 | 6.45
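As a sanity check, the "Total Average" entries of the $\omega$ = $[1,1,1]^{T}$ column in Table II can be reproduced from the six arm-wise values (two arms for each of the three plates):

```python
# Per-arm action counts and average pulling distances (mm) for w = [1,1,1],
# taken from Table II (acrylic, stainless, plywood; left then right arm).
actions = [2, 3, 2, 3, 3, 6]
distances = [367.68, 430.06, 294.99, 429.37, 332.65, 388.62]

avg_actions = sum(actions) / len(actions)     # 19 / 6, ~3.17 actions per arm
avg_distance = sum(distances) / len(distances)  # ~373.90 mm
```

The same per-arm averaging reproduces the "Total Average" rows of the other three weight settings.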
Table III further shows the details of pulling lengths and forces for each
pulling action when $w$ is chosen as [$1$,$1$,$1$]T. The robot alternately
used its left and right arms to pull up the acrylic board and the stainless
box. For the plywood board, the robot alternately used the two arms in the
beginning but switched to right-alone actions after step 6. The reason is that
the plywood board is a large plate; it blocked the left-arm actions after
being lifted to a high pose.
TABLE III: Detailed Pulling Lengths and Forces of Each Pulling Action Using $w$=[$1$,$1$,$1$]T
Plate Name | Item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
---|---|---|---|---|---|---|---|---|---|---
Acrylic Board | Actn. Arm | Rgt | Lft | Rgt | Lft | Rgt | - | - | - | -
| Distance ($mm$) | 443.64 | 377.75 | 471.32 | 357.61 | 375.23 | - | - | - | -
| Force ($N$) | 8.48 | 4.3 | 5.58 | 3.65 | 5.72 | - | - | - | -
| Time ($s$) | 7.57 | 6.09 | 4.02 | 5.94 | 4.36 | - | - | - | -
Stainless Box | Actn. Arm | Rgt | Lft | Rgt | Lft | Rgt | - | - | - | -
| Distance ($mm$) | 449.99 | 328.19 | 475.72 | 261.78 | 362.41 | - | - | - | -
| Force ($N$) | 7.80 | 5.45 | 5.71 | 5.95 | 6.01 | - | - | - | -
| Time ($s$) | 6.41 | 8.45 | 4.08 | 5.87 | 3.96 | - | - | - | -
Plywood Board | Actn. Arm | Rgt | Lft | Rgt | Lft | Rgt | Lft | Rgt | Rgt | Rgt
| Distance ($mm$) | 455.69 | 367.77 | 449.4 | 343.95 | 425.74 | 286.24 | 394.68 | 394.94 | 211.28
| Force ($N$) | 10.79 | 6.08 | 8.6 | 5.88 | 5.49 | 6.26 | 8.21 | 7.22 | 9.41
| Time ($s$) | 8.55 | 5.39 | 4.00 | 5.99 | 4.47 | 6.36 | 3.99 | 4.15 | 3.85
Table IV shows the detailed average time costs for the executions of lifting
and tumbling each plate using $\omega$ = $[1,1,1]^{T}$. The most time-
consuming part of the proposed planner is optimizing the init-goal pair,
including sampling goal points, evaluating goal grasps, and reasoning about
shared initial and goal grasp poses. These steps have to be performed on all
sampled points to obtain the defined qualities. The QP solver and RRT planner used to generate
the movement and pulling actions also take some time to search and optimize.
TABLE IV: Time Costs for Each Part of a Pulling Action
| Acrylic | Stainless | Plywood
---|---|---|---
$Items$ | Rgt | Lft | Rgt | Lft | Rgt | Lft
Initial Grasps ($s$) | 0.16 | 0.59 | 0.15 | 0.58 | 0.18 | 0.55
Init-Goal Pairs ($s$) | 3.02 | 4.31 | 3.13 | 4.28 | 3.13 | 4.38
QP + RRT ($s$) | 1.41 | 1.76 | 1.43 | 1.78 | 1.48 | 1.69
## VII Conclusions
This paper proposed an optimization and planning method for a dual-arm robot
to lift up and tumble heavy plates using crane pulley blocks. The optimal
pulling actions for the pulley blocks were achieved by maximizing or minimizing
the length of a pulling motion, the load borne by the pulling hand, and the
chances of finding a successful motion. Visual detection was included in each
optimization and execution loop to update the states of the plate. The optimal
tumbling was implemented by considering the force and moment applied to the
plate. An optimal sliding-push trajectory was planned by minimizing the forces
needed to maintain equilibrium. After tumbling, the plate was lowered down
onto the table to completely finish a task. The action sequences and motion
details of both pulling and tumbling were autonomously decided following
respective optimizations. Experiments and analysis showed that the
optimizations responded well to changing plate sizes, weights, materials, etc.
The robot was able to flexibly and efficiently adapt its action sequences and
motion details to different scenarios.
The focus of this manuscript is the optimization and planning aspect. We did
not consider the rope and assumed it to be pre-attached to the plates.
Autonomously manipulating and attaching a crane rope using the robots and
peripherals will be our future work.
# A Primer for Neural Arithmetic Logic Modules
Bhumika Mistry<EMAIL_ADDRESS>
Katayoun Farrahi<EMAIL_ADDRESS>
Jonathon Hare<EMAIL_ADDRESS>
Department of Vision, Learning and Control
Electronics and Computer Science
University of Southampton
Southampton, SO17 1BJ, United Kingdom
###### Abstract
Neural Arithmetic Logic Modules have become a growing area of interest, though
they remain a niche field. These modules are neural networks which aim to achieve
systematic generalisation in learning arithmetic and/or logic operations such
as $\{+,-,\times,\div,\leq,\textrm{AND}\}$ while also being interpretable.
This paper is the first to discuss the current state of progress of this
field, explaining key works, starting with the Neural Arithmetic Logic Unit
(NALU). Focusing on the shortcomings of the NALU, we provide an in-depth
analysis to reason about design choices of recent modules. A cross-comparison
between modules is made on experiment setups and findings, where we highlight
inconsistencies in a fundamental experiment that prevent direct comparison
across papers. To alleviate the existing inconsistencies, we create a
benchmark which compares all existing arithmetic NALMs. We finish by providing
a novel discussion of existing applications for NALU and research directions
requiring further exploration.
Keywords: arithmetic, neural networks, extrapolation, interpretability,
systematic generalisation
## 1 Introduction
The ability to learn by composition of already known knowledge is a form of
systematic generalisation (Fodor et al., 1988), also termed compositional
generalisation (Lake, 2019). Humans can apply such generalisations for
arithmetic after learning the relevant underlying rules. For example,
combining primitive operations such as addition ($a+b$) and multiplication
($a\times b$) on already observed inputs to produce more complex expressions
(such as $(a+b)\times(c+d)$). Humans can also transfer their skills in
applying operations on a limited set of numbers (for example, between 1 and
100) to ranges of unobserved numbers. This ability to extrapolate, that is, to
generalise to out-of-distribution (OOD) data, is a desirable property for
neural networks. Research suggests neural networks struggle to extrapolate
even for the simplest of tasks such as learning the identity function (Trask
et al., 2018). Rather than generalising, networks lean towards memorisation in
which the model memorises the training labels (Zhang et al., 2020).
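An intuition for the identity-function failure with bounded activations: a tanh network can fit $y=x$ near zero (since $\tanh(x)\approx x$ for small $x$), but its output is bounded by the magnitude of its output weights, so it must fail far outside the training range. A hand-constructed sketch, not a trained network:

```python
import math

def tanh_net(x, w1=1.0, w2=1.0):
    """One-hidden-unit tanh 'network' y = w2 * tanh(w1 * x).

    Near zero, tanh(x) ~ x, so the identity is approximated well;
    far from zero the output saturates at |w2| regardless of x."""
    return w2 * math.tanh(w1 * x)

in_range_error = abs(tanh_net(0.1) - 0.1)   # tiny: tanh(0.1) ~ 0.0997
ood_error = abs(tanh_net(10.0) - 10.0)      # large: output saturates near 1
```

Trained networks show the same qualitative behavior: interpolation succeeds while extrapolation error grows with distance from the training range, which is what Trask et al. (2018) report.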
To address this issue, Trask et al. (2018) introduce the first of a new class
of neural modules which we term Neural Arithmetic Logic Modules (NALMs). Their
module, the NALU, aims to learn systematic generalisation for arithmetic
computations. For example, learning the relation between input
[$x_{1},x_{2},x_{3},x_{4}$] and output $o_{1}$, where the input elements are
real numbers and the output is the expression $x_{1}+x_{3}-x_{2}$. To achieve this,
they assume an inductive bias such that particular weight values can be
intuitively interpreted as different primitive arithmetic operations. For
example, using a weight of 1 to represent addition and a weight of -1 to
represent subtraction. This form of interpretability is comparable to the
definition of decomposable transparency by Lipton (2016). Though NALU shows
promising improvements over networks such as Multilayer Perceptrons (MLPs) for
extrapolation, the unit still presents various shortcomings in architecture,
convergence, and transparency. These areas for improvement inspired the design
of other modules (Heim et al., 2020; Madsen and Johansen, 2020; Schlör et al.,
2020; Rana et al., 2019). Due to the growing interest in NALMs, we believe it
is important to have a resource, this paper, to explain current motivations,
strengths, weaknesses and gaps in this line of research.
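As a concrete toy illustration of the inductive bias discussed above, the expression $o_{1}=x_{1}+x_{3}-x_{2}$ corresponds to a single weight vector whose entries lie in $\{-1,0,1\}$:

```python
# Under the NALM inductive bias, a weight of 1 selects an input for
# addition, -1 for subtraction, and 0 ignores it. The vector below
# encodes o1 = x1 + x3 - x2 over the input [x1, x2, x3, x4].
w = [1, -1, 1, 0]

def linear_out(w, x):
    """Single output unit: weighted sum of the inputs."""
    return sum(wi * xi for wi, xi in zip(w, x))

out = linear_out(w, [2.0, 5.0, 4.0, 9.0])   # 2 - 5 + 4 = 1.0
```

Learning then amounts to driving continuous weights toward such discrete values, which is what the weight constructions of NALU-style modules encourage, and which makes a converged module readable as an arithmetic expression.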
### 1.1 Contributions
1. 1.
We provide the first definition to describe this research field by defining a
NALM—a neural network with the ability to model logic and/or arithmetic in a
generalisable manner to OOD data whilst expressing an interpretable solution.
2. 2.
We explain how recent modules are designed to overcome various shortcomings of
NALU including: inability to process negative inputs and outputs, lack of
convergence and adhering to its inductive bias, weak modelling of the division
operation, and lack of compositionality.
3. 3.
We highlight how a popular experiment for testing modules' arithmetic
capabilities is inconsistent between different papers with regards to
hyperparameters and experiment setup. This leads us to provide a new benchmark
for comparing existing (and future) arithmetic NALMs.
4. 4.
Using the first NALM, the NALU, as a focal point we show the usefulness of
NALUs in larger differentiable applications which require arithmetic and
extrapolation capabilities, while also highlighting situations in which NALU
is sub-optimal.
5. 5.
We outline possible research directions regarding modelling division,
robustness across different training ranges, compositionality of modelled
expressions, and analysing the impact of NALMs when integrated with other
networks.
### 1.2 Outline
In this paper we begin by defining a NALM, motivating their aim and uses in
Section 2. Sections 3 and 4 explain the definitions of key NALMs: NALU, iNALU,
NAU, NMU, and NPU to build understanding. Using the first NALM, the NALU, as a
focal point, Section 5 provides an in-depth analysis of the shortcomings of
NALU to understand the motivation behind design choices for more recent NALMs.
Section 6 highlights inconsistencies in experiment setup and compares findings
across existing modules. Additionally, we provide our own findings comparing
arithmetic NALMs using a consistent experiment setup. Section 7 discusses the
findings of NALMs which specialise in logic operations. Section 8 shows the
diversity in NALU’s use in applications, while also indicating situations in
which NALU is sub-optimal. Section 9 considers all discussed issues and
outlines remaining gaps, suggesting possible research directions to take as a
result. We end by providing related work in Section 10 which takes a step back
and considers the wider research around areas relevant to NALMs including
extrapolative mathematics, inductive biases and specialist modules.
#### 1.2.1 Mathematical Notation
When the individual NALM modules are defined, the mathematical notation will
be in element-wise form, which shows how to calculate an output element
$y_{o}$ indexed at $o$ given a single data sample (input vector $\mathbf{x}$).
For completeness, we also provide illustrations for each module in
matrix/vector form, with symbols and colouring following the key in Appendix A.
## 2 What are NALMs and Why use them?
We begin by defining NALMs. More specifically, before we detail instances of
NALMs, we first answer three questions: 1. What is a NALM? 2. What is the
aim of a NALM? 3. Why is a NALM useful?
From answering these questions, we shall arrive at the following definition: A
NALM is a neural network that performs arithmetic and/or logic based
expressions which can extrapolate to out-of-distribution (OOD) data when
parameters are appropriately learnt whilst expressing an interpretable
solution.
### 2.1 What is a NALM?
NALM stands for Neural Arithmetic Logic Module. Neural refers to neural
networks. Arithmetic refers to the ability to learn arithmetic operations such
as addition. Logic refers to the ability to learn operations such as
selection, comparison based logic (e.g., greater than) and boolean based logic
(e.g., conjunction). Module refers to the architecture of the neural units
which learn the arithmetic and/or logic operations. The term module
encompasses both a single (sub-)unit and multiple (sub-)units combined
together.
What kind of operations can be learnt? Existing work has tried to model
arithmetic operations including addition, subtraction, multiplication,
division, square, and square-root. Logic based operations include logic rules
(e.g., conjunction) (Reimann and Schwung, 2019), control logic (e.g., $<=$)
(Faber and Wattenhofer, 2020) and selection of relevant inputs.
How are operations learnt? Because a NALM is a neural network, a module can
model the relation between input and output vectors via supervised learning
which trains weights through backpropagation. To learn the relation between
input and output, the module must learn to select the relevant elements of the
input and apply the relevant arithmetic operation(s) to them to create the
output.
How is data represented? The input and outputs are both vectors. Each vector
element is a real-valued number which is implemented as a floating point
number. Each output element can learn a different arithmetic expression. For a
single data sample, this can be illustrated in Figure 1 where we assume that
the NALM used (made from two stacked sub-units) can learn addition,
subtraction and multiplication. In practice data would be given in batch form.
Figure 1: High-level example of the input output structure into a NALM. Both
networks are the same. The generalised network defines the notation of each
element in the input and output. The explicit network is an example of valid
input and output values.
### 2.2 What is the Aim of a NALM?
The main aim of NALMs is to provide systematic generalisation in learning
arithmetic and/or logic expressions. Once the learning state (training) has
ended, if the correct weights have been learned, the NALM is able to also
extrapolate to unseen data (i.e., OOD data).
What does interpretability mean for NALMs? Imagine modelling the relation
between input $\mathbf{x}$ and output $\mathbf{y}$ with a module $f$
parameterised by $\theta$, i.e., $\mathbf{y}=f_{\theta}(\mathbf{x})$. We say a
NALM is interpretable if you can set the module’s parameters ($\theta$) to
express the underlying relation between $\mathbf{x}$ and $\mathbf{y}$ in a
provable way. Simply put, the weights of a NALM can be set such that, if the
expression which NALM calculates is written out, we get an expression which
holds for all valid inputs. For example, for modelling the addition of two
inputs ($x_{1}$ and $x_{2}$), having a model which takes in the two inputs and
applies a dot product with a weight vector set to ones results in
$\mathbf{y}=f_{\theta}(\mathbf{x})=\begin{bmatrix}w_{1}&w_{2}\end{bmatrix}\cdot\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}=\begin{bmatrix}1&1\end{bmatrix}\cdot\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}$, which will always result in the output being $x_{1}+x_{2}$ no matter the values of $x_{1}$ and $x_{2}$.
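This interpretability argument can be sketched numerically. The following is a minimal NumPy example with hand-set (not learnt) weights:

```python
import numpy as np

# Hedged sketch: an "interpretable" addition module is just a dot product
# whose weights are set to ones. Because the weights exactly encode the
# expression y = x1 + x2, the prediction is correct for any inputs,
# including values far outside any hypothetical training range.
w = np.array([1.0, 1.0])  # both inputs selected and added

def f(x):
    return w @ x  # y = 1*x1 + 1*x2

print(f(np.array([2.0, 3.0])))    # 5.0
print(f(np.array([1e6, -4e6])))   # -3000000.0, still exact far from training data
```

Because the weights are exactly the discrete values of the underlying expression, no extrapolation error occurs at any input scale.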
More broadly speaking, the type of interpretability we want from a NALM is
decomposable transparency (Lipton, 2016). Transparency means to understand how
the model works. Decomposability is transparency at component level defined by
Lipton (2016) as ‘each part of the model - each input, parameter, and
calculation - admits an intuitive explanation’. For example, for modelling
$\textrm{force}=\textrm{mass}\times\textrm{acceleration}$, there are: the two
inputs into the NALM representing mass and acceleration, the parameters
representing the operation (multiplication) which are set to 1, and the
calculation that multiplies the two inputs resulting in the value for the
force.
What does extrapolation on OOD data mean for NALMs? OOD data refers to data
which is sampled outside the training distribution. For example, if trained on
a range [0,10] a valid OOD range could be [11,20]. Extrapolation is the
ability to correctly predict the output when given OOD data. In the context of
NALMs, extrapolation means that the model successfully learns the underlying
principles for modelling the (arithmetic/logic) operations it is designed for.
From a practical viewpoint, a NALM with successful extrapolative capabilities
can be considered a module whose only loss in predictive accuracy is due to
the numerical imprecision of hardware.
### 2.3 Why is a NALM useful?
The ability to learn arithmetic seems trivial in comparison to other
architectures such as LSTMs, CNNs or Transformers which can be used as
standalone networks which learn tasks such as arithmetic, object recognition
and language modelling. So, why care about NALMs?
Learning arithmetic, though it may seem a simple task, remains unsolved for
neural networks. Solving this problem requires precisely learning the
underlying rules of arithmetic so that failures do not occur on OOD data.
Therefore, before considering more complex tasks, solving the
simple tasks seems reasonable.
Furthermore, even though NALMs specialise in arithmetic there is no
restriction in using them as part of larger end-to-end neural networks. For
example, a NALM can be attached to a CNN via residual connections (Rana et
al., 2020) to improve counting in images. Utilising a NALM as a specialist module biased
towards arithmetic operations provides more focused learning. In Section 8, we
show a vast array of applications in which NALMs are being utilised. Being
used as a sub-component in a larger network requires the sub-component to
learn regardless of the data distribution it receives; therefore, the ability
to extrapolate is essential.
## 3 Overview of the NALU Architecture
Figure 2: NALU architecture. Example of a 3-feature input and 2-feature output
model.
The NALU, illustrated in Figure 2, provides the ability to model basic
arithmetic operations, specifically: addition, subtraction, multiplication,
division. NALU requires no indication of which operation to apply and aims to
learn weights that provide extrapolation capabilities if correctly converged.
NALU comprises two sub-units: a summative unit which models $\\{+,-\\}$ and
a multiplicative unit which models $\\{\times,\div\\}$. Following the notation
of Madsen and Johansen (2020) we denote the sub-units as $\mathrm{NAC}_{+}$
and $\mathrm{NAC}_{\bullet}$ respectively. Formally, for calculating a
specific output value, the NALU is expressed as:
$\displaystyle W_{i,o}=\tanh(\widehat{W_{i,o}})\odot\operatorname*{sigmoid}(\widehat{M_{i,o}})$ (1)
$\displaystyle\mathrm{NAC}_{+}:a_{o}=\sum_{i=1}^{I}\left(W_{i,o}\cdot x_{i}\right)$ (2)
$\displaystyle\mathrm{NAC}_{\bullet}:m_{o}=\exp\left(\sum_{i=1}^{I}(W_{i,o}\cdot\ln(|x_{i}|+\epsilon))\right)$ (3)
$\displaystyle g_{o}=\operatorname*{sigmoid}\left(\sum_{i=1}^{I}(G_{i,o}\cdot x_{i})\right)$ (4)
$\displaystyle\mathrm{NALU}:\hat{y}_{o}=g_{o}\cdot a_{o}+(1-g_{o})\cdot m_{o}$ (5)
where $\widehat{\bm{W}},\widehat{\bm{M}}\in\mathbb{R}^{I\times O}$ are learnt
matrices ($I$ and $O$ represent input and output dimension sizes). A non-
linear transformation is applied to each matrix and then both are combined via
element-wise multiplication to form $\bm{W}$ (Equation 1). Due to the range
values of $\mathrm{tanh}$ and $\operatorname*{sigmoid}$, $\bm{W}$ aims to have
an inductive bias towards the values $\\{-1,0,1\\}$, which can be interpreted as
selecting a particular operation within a sub-unit (i.e., intra-sub-unit
selection). For example, in $\mathrm{NAC}_{+}$ +1 is addition and -1 is
subtraction, and in $\mathrm{NAC}_{\bullet}$ +1 is multiplication and -1 is
division. In both sub-units, 0 represents not selecting/ignoring an input
element. A sigmoidal gating mechanism (Equation 4) enables selection between
the sub-units (inter-sub-unit), where an open gate, 1, selects the
$\mathrm{NAC}_{+}$ and closed gate, 0, selects the $\mathrm{NAC}_{\bullet}$.
Once trained, the gating should ideally select a single sub-unit. ${\bm{G}}$ is
learnt, and the gating vector $\bm{g}$ represents which sub-unit to use for
each element in the output vector. Finally, Equation 5 gates the sub-units and
sums the result to give the output. NALU’s gating only allows for each output
element to have a mixture of operations from the same sub-unit. Therefore,
each output element is an expression of a combination of operations from
either $\\{+,-\\}$ or $\\{\times,\div\\}$ but not $\\{+,-,\times,\div\\}$.
This issue is fixed by stacking NALUs such that the output of one NALU is the
input of another. A step-by-step example for the NALU is given in Appendix C.
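Equations 1–5 can be sketched as a forward pass in NumPy. This is an illustrative reading of the published equations, not a training-ready implementation; the saturation constant `big` and the hand-set parameters are our own choices so that the single output computes $x_1 + x_2$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nalu_forward(x, W_hat, M_hat, G, eps=1e-7):
    """Single NALU layer (Equations 1-5) for one input vector x.
    W_hat, M_hat, G have shape (I, O)."""
    W = np.tanh(W_hat) * sigmoid(M_hat)          # Eq. 1: weights biased to {-1,0,1}
    a = x @ W                                    # Eq. 2: additive sub-unit NAC+
    m = np.exp(np.log(np.abs(x) + eps) @ W)      # Eq. 3: multiplicative sub-unit
    g = sigmoid(x @ G)                           # Eq. 4: gate between sub-units
    return g * a + (1.0 - g) * m                 # Eq. 5: gated combination

# Hand-set parameters: saturating tanh/sigmoid makes W ~ [[1], [1]], and a
# large positive G opens the gate (g ~ 1) for positive inputs, selecting NAC+.
big = 20.0
W_hat = np.array([[big], [big]])
M_hat = np.array([[big], [big]])
G = np.array([[big], [big]])
x = np.array([2.0, 3.0])
print(nalu_forward(x, W_hat, M_hat, G))  # approximately [5.] (i.e., 2 + 3)
```

With the same weights but a gate forced shut, the output would instead approximate the product $2\times 3=6$ from the multiplicative sub-unit.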
Next, we overview architectures of some recent modules.
## 4 NALU Influenced Modules
NALU has inspired the creation of other modules including: Improved NALU
(iNALU) (Schlör et al., 2020), Neural Addition Unit (NAU)/ Neural
Multiplication Unit (NMU) (Madsen and Johansen, 2020), Neural Power Unit (NPU)
(Heim et al., 2020), Golden Ratio NALU (G-NALU) (Rana et al., 2019), Neural
Logic Rule Layer (NLRL) (Reimann and Schwung, 2019) and Neural Status Register
(NSR) (Faber and Wattenhofer, 2020). This section will go through each
module’s definition, providing a consistent notation along with an
illustration of the module’s architecture. (Module illustrations from the
original papers are provided in Appendix B.)
### 4.1 iNALU
Figure 3: iNALU architecture. Example of a 3-feature input and 2-feature
output model.
The iNALU identifies key issues in NALU and modifies the unit to incorporate
solutions (detailed in Section 5). In particular, they use:
* •
Independent weight matrices. To allow the multiplicative and summative paths
to learn their own set of $\bm{\hat{W}}$ and $\bm{\hat{M}}$ weights to be used
in calculating $\bm{a}$ for the $\mathrm{NAC}_{+}$ and $\bm{m}$ for the
$\mathrm{NAC}_{\bullet}$.
$\displaystyle W^{\textrm{A}}_{i,o}$
$\displaystyle=\tanh(\widehat{W^{\textrm{A}}_{i,o}})\cdot\operatorname*{sigmoid}(\widehat{M^{\textrm{A}}_{i,o}}),$
(6) $\displaystyle W^{\textrm{M}}_{i,o}$
$\displaystyle=\tanh(\widehat{W^{\textrm{M}}_{i,o}})\cdot\operatorname*{sigmoid}(\widehat{M^{\textrm{M}}_{i,o}}).$
(7)
* •
Clipping. Clipping the multiplicative weights using the equation below (with
$\omega=20$) and clipping the gradient of learnable parameters between
[-0.1,0.1].
$\displaystyle m_{o}$
$\displaystyle=\exp\left(\min\left(\sum_{i=1}^{I}\left(W^{\textrm{M}}_{i,o}\cdot\ln(\max(|x_{i}|,\epsilon))\right),\;\omega\right)\right)\;.$ (8)
* •
Multiplicative sign correction. Retrieve the output sign of the multiplicative
path,
$\displaystyle msv_{o}$
$\displaystyle=\prod_{i=1}^{I}\left(\mathrm{sign}(x_{i})\cdot|W^{\textrm{M}}_{i,o}|+1-|W^{\textrm{M}}_{i,o}|\right)\;.$
(9)
* •
Regularisation. Include a regularisation loss term which avoids having near-
zero learnable parameters,
$\displaystyle\mathcal{R}_{\mathrm{sparse}}=\frac{1}{t}\sum_{\mathclap{\bm{\theta}\in\\{{\bm{\widehat{W^{\textrm{A}}}},\bm{\widehat{M^{\textrm{A}}}},\bm{\widehat{W^{\textrm{M}}}},\bm{\widehat{M^{\textrm{M}}}},\bm{g}}\\}}}\frac{\sum_{o}^{O}\sum_{i}^{I}\max(\min(-\theta_{i,o},\theta_{i,o})+t,0)}{O\cdot
I}\;,$ (10)
where $t=20$. This activates if the loss is under 1 and there have been over
10 iterations of training data.
* •
Reinitialisation. Reinitialise the model weights if the average loss collected
over a number of consecutive iterations has not improved. More specifically,
reinitialisation is checked every $10^{th}$ iteration once over 10,000
iterations have occurred, and is triggered if the average loss over the first
half of those iterations is less than the average loss over the second half
plus its standard deviation, and the average loss over the latter half is
larger than 1.
* •
Independent gating. Remove the dependence of the input values when learning
the gate parameters,
$\displaystyle g_{o}=$ $\displaystyle\operatorname*{sigmoid}(\hat{g}_{o})\;,$ (11)
where $\hat{g}_{o}$ is a directly learnt parameter.
The iNALU expression for calculating a single output element indexed at $o$ is
$\displaystyle\mathrm{iNALU}:\hat{y_{o}}$ $\displaystyle=g_{o}\cdot
a_{o}+(1-g_{o})\cdot m_{o}\cdot msv_{o}\;.$ (12)
### 4.2 NAU and NMU
Figure 4: NAU architecture. Example of a 3-feature input and 2-feature output
model. Figure 5: NMU architecture. Example of a 3-feature input and 2-feature
output model.
The NAU and NMU are sub-units for addition/subtraction and multiplication
respectively. Architecture and initialisations of the sub-units have strong
theoretical justifications and empirical results to validate design choices.
The NAU and NMU definitions for calculating an output element indexed at $o$
are:
$\displaystyle\textrm{NAU}:a_{o}$
$\displaystyle=\sum_{i=1}^{I}\left(W_{i,o}\cdot\mathrm{x}_{i}\right),$ (13)
$\displaystyle\textrm{NMU}:m_{o}$
$\displaystyle=\prod_{i=1}^{I}\left(W_{i,o}\cdot\mathrm{x}_{i}+1-W_{i,o}\right)\;,$
(14)
where $\bm{W}$ is unique to each sub-unit. Prior to applying the weights
of a sub-unit to the input vector, each element of $\bm{W}$ is clamped between
[-1,1] if using the NAU, or [0,1] if using the NMU. Therefore, considering
discrete weights $\\{-1,0,1\\}$, Equation 13 will do the summation of the
inputs where each input is either added ($W_{i,o}=1$), ignored ($W_{i,o}=0$),
or subtracted ($W_{i,o}=-1$). When considering the discrete weight values of
the NMU $\\{0,1\\}$, the result is the product of the inputs where each input
is either multiplied ($W_{i,o}=1$) or not selected ($W_{i,o}=0$). Rather than
allowing the product of the inputs to be multiplied by 0 whenever an
irrelevant input (i.e., with a weight of 0) is processed, Equation 14 converts
that input’s factor to 1 (the multiplicative identity), so the input has no
effect on the final output.
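The two sub-units can be sketched directly from Equations 13 and 14. The weight matrices below are hand-set to their ideal discrete values rather than learnt:

```python
import numpy as np

def nau(x, W):
    """NAU (Eq. 13): weighted sum; ideal weights lie in {-1, 0, 1}."""
    W = np.clip(W, -1.0, 1.0)
    return x @ W

def nmu(x, W):
    """NMU (Eq. 14): product of (W*x + 1 - W) factors; ideal weights in {0, 1}.
    A weight of 0 turns its factor into 1 (the multiplicative identity),
    so irrelevant inputs are ignored instead of zeroing the whole product."""
    W = np.clip(W, 0.0, 1.0)
    # x[:, None] broadcasts each input element against every output column.
    return np.prod(W * x[:, None] + 1.0 - W, axis=0)

x = np.array([2.0, 3.0, 5.0])
print(nau(x, np.array([[1.0], [-1.0], [0.0]])))  # x1 - x2 -> [-1.]
print(nmu(x, np.array([[1.0], [0.0], [1.0]])))   # x1 * x3 -> [10.]
```

Note how the NMU call selects $x_1$ and $x_3$: the zero weight on $x_2$ turns its factor into 1, leaving the product $2\times 5=10$ untouched.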
To enforce the module weights to become discrete values, the following
regularisation loss term is also used,
$\mathcal{R}_{sparse}=\frac{1}{I\cdot
O}\sum_{o=1}^{O}\sum_{i=1}^{I}\min\left(|W_{i,o}|,1-|W_{i,o}|\right).$ (15)
A scaling factor
$\lambda=\hat{\lambda}\max\left(\min\left(\frac{iteration_{i}-\lambda_{start}}{\lambda_{end}-\lambda_{start}},1\right),0\right),$
(16)
is multiplied with $\mathcal{R}_{sparse}$ to get the final regularisation
value, where the overall strength is controlled by a predefined $\widehat{\lambda}$.
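Equations 15 and 16 can be sketched directly. The iteration numbers and $\hat{\lambda}$ below are arbitrary illustrative choices, not values from the original papers:

```python
import numpy as np

def r_sparse(W):
    """Sparsity penalty (Eq. 15): zero when every weight is exactly in
    {-1, 0, 1}, maximal (0.5) when weights sit at +/-0.5."""
    W = np.abs(W)
    return np.mean(np.minimum(W, 1.0 - W))

def scale(iteration, lam_hat, lam_start, lam_end):
    """Warm-up scaling (Eq. 16): 0 before lam_start, lam_hat after lam_end,
    linear in between."""
    frac = (iteration - lam_start) / (lam_end - lam_start)
    return lam_hat * max(min(frac, 1.0), 0.0)

W = np.array([[1.0, 0.5], [0.0, -0.9]])
print(r_sparse(W))                    # (0 + 0.5 + 0 + 0.1) / 4 = 0.15
print(scale(5000, 10.0, 4000, 8000))  # halfway through warm-up -> 2.5
print(scale(0, 10.0, 4000, 8000))     # before warm-up starts -> 0.0
```

The warm-up prevents the sparsity term from dominating the mean squared error loss early in training, before the weights have moved near a plausible solution.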
### 4.3 NPU and RealNPU
Figure 6: NPU architecture. Example of a 3-feature input and 2-feature output
model.
The NPU (Equation 17) focuses on improving the division ability of the
$\mathrm{NAC}_{\bullet}$ by applying a complex log transformation and using
real and complex weight matrices ($\bm{W^{\textrm{RE}}}$ and
$\bm{W^{\textrm{IM}}}$ respectively). NPU based modules can model products of
arbitrary powers ($\prod x_{i}^{w_{i}}$); therefore, the learnable weight
parameters do not need to be discrete. For example, modelling the square-
root operation requires $W^{\textrm{RE}}_{i,o}=0.5$. The $\bm{r}$ (Equation
18) converts values close to 0 into 1 to avoid the output multiplication
becoming 0. To do this, a relevance gate ($\bm{g}$) is learnt representing if
an input element is relevant and should be used as part of an output
expression ($g_{i}=1$) or not be selected ($g_{i}=0$). Furthermore, each
element of $\bm{g}$ is clipped between the range [0,1] (Equation 20).
$\displaystyle\mathrm{NPU}:y_{o}=$
$\displaystyle\exp\left(\sum_{i=1}^{I}(W^{\textrm{RE}}_{i,o}\cdot\ln(r_{i}))-\sum_{i=1}^{I}(W^{\textrm{IM}}_{i,o}\cdot
k_{i})\right)$ (17)
$\displaystyle\cdot\cos\left(\sum_{i=1}^{I}(W^{\textrm{IM}}_{i,o}\cdot\ln(r_{i}))+\sum_{i=1}^{I}(W^{\textrm{RE}}_{i,o}\cdot
k_{i})\right)$
where
$\displaystyle r_{i}$ $\displaystyle=g_{i}\odot(|x_{i}|+\epsilon)+(1-g_{i}),$ (18)
$\displaystyle k_{i}$ $\displaystyle=\begin{cases}0&x_{i}\geq 0\\\pi g_{i}&x_{i}<0\end{cases},$ (19)
and $\displaystyle g_{i}$ $\displaystyle=\mathrm{min}(\mathrm{max}(g_{i},0),1)\;.$ (20)
Additionally a simplified version of the NPU exists, named the RealNPU,
considering only real values of Equation 17:
$\displaystyle\mathrm{RealNPU}:y_{o}=$
$\displaystyle\exp\left(\sum_{i=1}^{I}(W^{\textrm{RE}}_{i,o}\cdot\ln(r_{i}))\right)\cdot\cos\left(\sum_{i=1}^{I}(W^{\textrm{RE}}_{i,o}\cdot k_{i})\right).$ (21)
As the NPU and RealNPU can express arbitrary powers, using a regulariser to
enforce discrete parameters like in the iNALU, NAU or NMU would restrict the
expressiveness. Therefore, Heim et al. (2020) use a scaled L1 penalty, where
the scaling value $\beta$ grows between predefined values $\beta_{start}$ to
$\beta_{end}$ and is increased every $\beta_{step}=10,000$ iterations by a
growth factor $\beta_{growth}=10$.
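The RealNPU (Equation 21) can be sketched for a single output. The weights and gates below are hand-set (not learnt) to show two capabilities the text describes: fractional powers and division with a negative input:

```python
import numpy as np

def real_npu(x, W_re, g, eps=1e-9):
    """RealNPU (Eq. 21) for one output element; illustrative sketch."""
    g = np.clip(g, 0.0, 1.0)                  # Eq. 20: relevance gate in [0,1]
    r = g * (np.abs(x) + eps) + (1.0 - g)     # Eq. 18: gated magnitude (1 if irrelevant)
    k = np.where(x < 0, np.pi * g, 0.0)       # Eq. 19: encodes negative signs as pi
    return np.exp(W_re @ np.log(r)) * np.cos(W_re @ k)

g = np.array([1.0, 1.0])  # both inputs relevant
# Fractional weights model roots: y = x1^0.5 * x2^0.5 = sqrt(4*9) = 6
print(real_npu(np.array([4.0, 9.0]), np.array([0.5, 0.5]), g))
# Weight -1 models division, and cos(pi) recovers the lost sign:
# y = x1 / x2 = -8 / 2 = -4
print(real_npu(np.array([-8.0, 2.0]), np.array([1.0, -1.0]), g))
```

The second call shows the sign-retrieval mechanism: $\exp$ produces the magnitude $8/2=4$, while $\cos(\pi)=-1$ restores the negative sign.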
### 4.4 G-NALU
Figure 7: G-NALU architecture. Example of a 3-feature input and 2-feature
output model.
The G-NALU replaces the exponent base in the $\mathrm{tanh}$ and
$\operatorname*{sigmoid}$ operations when calculating NALU’s weight matrix
with a golden ratio base value:
$\displaystyle\phi$ $\displaystyle=\frac{1+\sqrt{5}}{2}\approx 1.618$ (22)
$\displaystyle\operatorname*{sigmoid_{gr}}$
$\displaystyle=\frac{1}{(1+\phi^{-x})}$ (23)
$\displaystyle\operatorname*{tanh_{gr}}$
$\displaystyle=\frac{\phi^{2x}-1}{\phi^{2x}+1}$ (24)
The use of a golden ratio base also requires the $\mathrm{NAC}_{\bullet}$
definition (Equation 3) to be modified into Equation 25 to allow for the
$\ln$-$\exp$ transformation to work.
$\displaystyle\mathrm{NAC}_{\bullet}:m_{o}$
$\displaystyle=\phi^{\left(\sum_{i=1}^{I}\frac{(W_{i,o}\cdot\ln(|x_{i}|+\epsilon))}{\ln(\phi)}\right)}$
(25)
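Equations 22–24 are straightforward to sketch, and a quick comparison shows the intended effect: the golden-ratio base saturates more slowly than base $e$, keeping gradients alive for larger pre-activations:

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio, ~1.618 (Eq. 22)

def sigmoid_gr(x):
    return 1.0 / (1.0 + PHI ** (-x))          # Eq. 23

def tanh_gr(x):
    return (PHI ** (2 * x) - 1) / (PHI ** (2 * x) + 1)  # Eq. 24

# Same fixed points as the standard activations...
print(sigmoid_gr(0.0))  # 0.5
print(tanh_gr(0.0))     # 0.0
# ...but gentler saturation than the base-e sigmoid at the same input:
print(sigmoid_gr(4.0), 1 / (1 + np.exp(-4.0)))
```

At $x=4$, the golden-ratio sigmoid is still around 0.87 while the standard sigmoid is already near 0.98, so the golden-ratio version retains a noticeably larger gradient there.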
### 4.5 NLRL
Figure 8: NLRL architecture. Example of a 3-feature input and 2-feature output
model.
The NLRL, Figure 8, is a module to express boolean logic rules and simple
arithmetic operations (add, subtract and multiply) via modelling AND
(conjunction), OR (disjunction) and NOT (negation). By stacking NLRLs
together, it is also possible to represent more complex relations including
implication, exclusive OR and equivalence. The architecture is designed under
the assumption of modelling the logic rules on booleans, therefore the input
values must be booleans. The default NLRL architecture consists of the
following four parts in which the three base operators (negation, conjunction
and disjunction) are modelled:
* •
Negation gating, which models the negation operator. The (negation) gate
determines if an input element should be negated (gate value of 1) or simply
passed along (gate value of 0).
$\displaystyle\hat{x}_{i,o}$
$\displaystyle=(1-\operatorname*{sigmoid}(G^{\textrm{NEG}}_{i,o}))\cdot
x_{i,o}+\operatorname*{sigmoid}(G^{\textrm{NEG}}_{i,o})\cdot(1-x_{i,o})\;.$
(26)
* •
OR calculation, which applies disjunctions (weight value of 1) for the output
of the input gating. This can also be used to model addition and subtraction.
$\displaystyle z_{o}^{\textrm{OR}}$
$\displaystyle=\bigotimes_{i=2}^{I}(\begin{bmatrix}1&-A_{i,o}\cdot\hat{x}_{i,o}\end{bmatrix}\otimes\begin{bmatrix}-1&A_{1,o}\cdot\hat{x}_{1,o}\end{bmatrix})\bm{1}+1\;.$
(27)
* •
AND calculation, which applies conjunctions (weight value of 1) over the
output of the input gating. This can also be used to model multiplication. The
definition is the same as the $\mathrm{NAC}_{\bullet}$ (Equation 3) used in
the NALU.
$\displaystyle z_{o}^{\textrm{AND}}$
$\displaystyle=\exp\left(\sum_{i=1}^{I}(A_{i,o}\cdot\ln(|\hat{x}_{i,o}|+\epsilon))\right)\;.$
(28)
* •
Output gating, which determines whether an output value should use the AND
calculation (gate value 0) or the OR calculation (gate value 1).
$\displaystyle\hat{y}_{o}$
$\displaystyle=(1-\operatorname*{sigmoid}(G^{\textrm{OUT}}_{i,o}))\cdot z_{o}^{\textrm{AND}}+\operatorname*{sigmoid}(G^{\textrm{OUT}}_{i,o})\cdot z_{o}^{\textrm{OR}}\;.$ (29)
Three parameter matrices need to be learnt: one for learning the gate
values for negation ($\bm{G^{\textrm{NEG}}}$), another for learning the
(shared) weight values for the AND and OR calculations ($\bm{A}$) and one for
learning the gate values for the output ($\bm{G^{\textrm{OUT}}}$).
Optionally, applying De Morgan’s laws, which allow a conjunction to be
represented using only negation and disjunction (and a disjunction using only
negation and conjunction), makes it possible to modify the architecture to
need only either the AND or the OR calculation block. The changes
require removing the unwanted calculation block and replacing the output gate
with a negation gate. Using only the negation and conjunction operators is
favoured as the implementation of disjunction requires using the Kronecker
product which scales poorly with input size.
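The negation gate (Equation 26) and the AND block (Equation 28) can be sketched on boolean inputs. The gate parameters below are hand-set to saturated values so the gates behave discretely; the constant `big` is our own illustrative choice:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def negation_gate(x, G_neg):
    """Eq. 26: a gate near 1 negates a boolean input, near 0 passes it through."""
    s = sigmoid(G_neg)
    return (1 - s) * x + s * (1 - x)

def and_block(x_hat, A, eps=1e-12):
    """Eq. 28: conjunction via the log-exp product (same form as NAC*).
    With weights in {0, 1} this is the product of the selected booleans."""
    return np.exp(A @ np.log(np.abs(x_hat) + eps))

big = 20.0  # saturates the sigmoid so gates act as 0/1
x = np.array([1.0, 0.0])                              # booleans: True, False
x_hat = negation_gate(x, np.array([-big, big]))       # pass x1, negate x2
print(np.round(and_block(x_hat, np.array([1.0, 1.0]))))  # x1 AND (NOT x2) -> 1.0
```

Stacking such blocks, as the text notes, extends this to implication, exclusive OR, and equivalence.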
### 4.6 NSR
Figure 9: NSR architecture. Example of a 3-feature input and 2-feature output
model.
The NSR (inspired by the physical status registers found in the Arithmetic
Logic Units of computers) models comparison based control logic: $<$, $>$, $!=$,
$=$, $>=$, $<=$. Simply put, the NSR does quantitative reasoning by learning
what input elements to compare and how to compare them. A NSR will output two
elements. The first represents if the comparison is true (or false) and the
second is the negation of the first output (i.e., $1-o_{1}$). The negation is
given such that when the NSR is used in a downstream task, the other layers can
have access to either branch of the comparison. To do this, the NSR
architecture does the following:
1. 1.
Learns two matrices ($\bm{V^{\textrm{OP1}}}$ and $\bm{V^{\textrm{OP2}}}$)
whose purpose is to select two inputs to be operands
($\bm{\widehat{x^{\textrm{OP1}}}}$ and $\bm{\widehat{x^{\textrm{OP2}}}}$) of
the comparison function.
$\displaystyle\widehat{x_{o}^{\textrm{OP1}}}$
$\displaystyle=\sum_{i}^{I}(x_{i}\cdot\operatorname*{softmax}(V_{i,o}^{\textrm{OP1}}))$
(30) $\displaystyle\widehat{x_{o}^{\textrm{OP2}}}$
$\displaystyle=\sum_{i}^{I}(x_{i}\cdot\operatorname*{softmax}(V_{i,o}^{\textrm{OP2}}))$
(31)
2. 2.
Takes the difference of the two selected operands.
$\displaystyle\widehat{x_{o}}$
$\displaystyle=\widehat{x_{o}^{\textrm{OP1}}}-\widehat{x_{o}^{\textrm{OP2}}}$
(32)
3. 3.
Scales the difference with a hyperparameter ($\lambda$) to avoid vanishing
gradients. The authors indicate an inverse relation between $\lambda$ and the
difference of the input values which can be used to set the $\lambda$ value
(Faber and Wattenhofer, 2020, Figure 5).
$\displaystyle\widehat{x_{o}}$ $\displaystyle=\lambda\cdot\widehat{x_{o}}$
(33)
4. 4.
Calculates the sign bit ($\bm{\widehat{x^{\pm}}}$) and zero bit
($\widehat{\bm{x^{0}}}$) of the difference value using smooth continuous
functions.
$\displaystyle\widehat{x_{o}^{\pm}}$ $\displaystyle=\tanh(\widehat{x_{o}})$
(34) $\displaystyle\widehat{x_{o}^{0}}$
$\displaystyle=1-2\tanh(\widehat{x_{o}})^{2}$ (35)
5. 5.
Learns a scale value (for each bit) and shared shift value.
6. 6.
Applies the scale and shift to the bit values, takes the sum of the results
and passes the result through a $\operatorname*{sigmoid}$. The resulting value
represents the probability of the comparison being true/false.
$\displaystyle z_{o}$ $\displaystyle=\widehat{x_{o}^{\pm}}\cdot
W^{\pm}_{i,o}+\widehat{x_{o}^{0}}\cdot W^{0}_{i,o}+b_{o}$ (36) $\displaystyle
y_{o}$ $\displaystyle=\operatorname*{sigmoid}(z_{o})$ (37)
7. 7.
Returns as output the comparison value and its negation value ($1-y_{o}$).
Given two inputs (relevant for the comparison), the NSR will compute the sign
and zero bit of the difference of the two operands. The sign and zero bit
definitions are continuous relaxations of the discrete definitions which
rescale the bounds to avoid the gradients of partial derivatives becoming
zero. That is: $\widehat{x_{o}^{\pm}}=\begin{cases}1&\textrm{if }\widehat{x_{o}}>0\\0&\textrm{if }\widehat{x_{o}}=0\\-1&\textrm{if }\widehat{x_{o}}<0\end{cases}\quad$ and $\quad\widehat{x_{o}^{0}}=\begin{cases}1&\textrm{if }\widehat{x_{o}}=0\\-1&\textrm{if }\widehat{x_{o}}\neq 0\end{cases}.$
To improve robustness to different initialisations, the NSR also implements
redundancy which learns multiple independent operand pairs
($\widehat{x_{o}^{\textrm{OP1}}}$, $\widehat{x_{o}^{\textrm{OP2}}}$) in
parallel for each output element. Each pair will have its own sign and zero
bit, hence learning its own set of scale and shift values. These independent
paths get aggregated together by summing the different $z_{o}$’s together.
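Steps 1–7 can be sketched for a single output (without redundancy). All parameters below are hand-set rather than learnt, chosen so the selection matrices pick out $x_1$ and $x_2$ and the output approximates whether $x_1 > x_2$:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nsr(x, V1, V2, W_sign, W_zero, b, lam=1.0):
    """NSR forward pass for one output (steps 1-7); illustrative sketch."""
    op1 = x @ softmax(V1)                 # Eq. 30: select first operand
    op2 = x @ softmax(V2)                 # Eq. 31: select second operand
    d = lam * (op1 - op2)                 # Eqs. 32-33: scaled difference
    sign_bit = np.tanh(d)                 # Eq. 34: relaxed sign bit
    zero_bit = 1 - 2 * np.tanh(d) ** 2    # Eq. 35: relaxed zero bit
    y = sigmoid(sign_bit * W_sign + zero_bit * W_zero + b)  # Eqs. 36-37
    return y, 1 - y                       # comparison value and its negation

big = 20.0  # saturates the softmax/sigmoid so selections act discretely
# Select x1 as operand 1 and x2 as operand 2; weight the sign bit so that
# the output approximates P(x1 > x2).
y, not_y = nsr(np.array([3.0, 7.0]), np.array([big, -big]),
               np.array([-big, big]), W_sign=big, W_zero=0.0, b=0.0)
print(round(float(y)), round(float(not_y)))  # 3 > 7 is false -> 0 1
```

With redundancy enabled, several such operand pairs would be computed in parallel and their pre-sigmoid values $z_o$ summed before Equation 37.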
## 5 NALU’s Shortcomings and Existing Solutions
We detail the weaknesses of NALU and explain existing solutions. We focus
especially on the iNALU, NAU, NMU and NPU when looking at solutions, as these
modules focus on overcoming the shortcomings of NALU. A summary of the
discussed NALU issues and proposed solutions is given in Table 1.
| Shortcoming | NAU/NMU | iNALU | NPU/RealNPU | CalcNet (G-NALU) |
|---|---|---|---|---|
| $\mathrm{NAC}_{\bullet}$ cannot have negative inputs/targets | NMU: remove log-exponent transformation | Sign correction (mixed sign vector) | Sign retrieval | Fixed rules and sign parsing |
| Convergence of gate parameters | Stacking instead of gating | Independent gating, separate weights per sub-unit and regularisation loss | - | - |
| Fragile initialisation | Theoretically valid initialisation scheme | Reinitialise model | - | - |
| Weight inductive bias of $\\{-1,0,1\\}$ not met (non-discrete solutions) | Regularisation loss term and clipping | Regularisation loss term | (see below)* | - |
| Gradient propagation | Linear weight matrix | $\mathrm{NAC}_{\bullet}$ clip and gradient clip | Relevance gating | Replace sigmoid and tanh exponents with golden ratio |
| Singularity (values close to 0) | NMU: remove log-exponent transformation | $\mathrm{NAC}_{\bullet}$ clip | Complex space transformation and relevance gating | - |
| Compositionality | - | - | - | Parsing algorithm |

* The NPU and RealNPU support fractional weights (e.g., 0.5 representing square-root) and therefore do not enforce discretisation.

Table 1: Summarised NALU shortcomings and existing proposed solutions.
### 5.1 Mixed Sign Inputs and Negative Outputs
The $\mathrm{NAC}_{\bullet}$ cannot deal with mixed sign inputs/negative
outputs. Equation 3 requires converting negative inputs into their positive
counterparts because the log transformation cannot evaluate negative values.
Therefore the sign of the input is lost, causing the $\mathrm{NAC}_{\bullet}$
to be unable to have negative target values. The use of an exponent also
causes the inability to have negative outputs, as the range of an exponent is
$\mathbb{R}_{>0}$. To allow for negative targets, a module can incorporate
logic to deal with assigning the correct sign to the output such as the
iNALU’s sign correction mechanism (Schlör et al., 2020) or the NPU’s inherent
sign retrieval (Heim et al., 2020).
The sign correction mechanism creates a mixed sign vector ($\bm{msv}$)
$\in\mathbb{R}^{O\times 1}$, consisting of elements $\\{-1,1\\}$ (assuming
$\bm{W}$ has converged to integers $\\{-1,0,1\\}$), where each element
represents the correct sign for each output element. (Notice the similarity
in calculation between the NMU (Equation 14) and iNALU’s $\bm{msv}$ (Equation
9).) The $\bm{msv}$ is element-wise multiplied with Equation 3, applying the
relevant sign to the outputs of the multiplicative sub-unit. The
$+1-|W_{i,o}|$ term means unselected inputs ($W_{i,o}=0$) avoid affecting the
final sign value, as they only multiply the $msv_{o}$ value by 1. An alternate
way to view the $\bm{msv}$ is as a gating mechanism,
$\mathrm{sign}(x_{i})\cdot|W_{i,o}|+1\cdot(1-|W_{i,o}|)$,
where an on gate ($W_{i,o}=\pm 1$) gives the sign and an off gate ($W_{i,o}=0$)
returns 1.
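The gating view of Equation 9 can be checked with a small NumPy sketch, using hand-set discrete weights:

```python
import numpy as np

def msv(x, W_col):
    """Mixed sign vector (Eq. 9) for one output: recovers the sign that
    NAC* loses by taking |x|. Off-gate weights (W = 0) contribute a factor 1."""
    return np.prod(np.sign(x) * np.abs(W_col) + 1 - np.abs(W_col))

x = np.array([-2.0, 3.0, -5.0])
# Select x1 and x3, ignore x2: (-2) * (-5) = 10, so the recovered sign is +1.
print(msv(x, np.array([1.0, 0.0, 1.0])))   # 1.0
# Select x1 and x2: (-2) * 3 = -6, so the recovered sign is -1.
print(msv(x, np.array([1.0, 1.0, 0.0])))   # -1.0
```

Multiplying these values into Equation 3's magnitude-only output yields the correctly signed result.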
In the case of a RealNPU, the latter half of its definition (in matrix form),
that is $\odot\,\cos(\bm{W^{\textrm{RE}}}\bm{k})$, can be interpreted as a
sign retrieval mechanism. $\bm{k}$ represents positive inputs as 0 and
negative inputs as $\pi$ (assuming the gate value converged to select the
input). Assuming convergence, $\bm{W^{\textrm{RE}}}$ values are $\\{1,-1\\}$
representing $\\{\times,\div\\}$. Two outcomes are possible from evaluating
the expression: $\cos(\pm\pi)=-1$ or $\cos(0)=1$, where the output value
represents the sign of the input value.
Alternatively, it is possible to remove the need for transformations in the
log/exponent space in Equation 3, as Madsen and Johansen (2020) do when
defining the NMU (Equation 14). This means negative targets can be expressed
because the sign is no longer removed from the input.
### 5.2 Gating Parameter Convergence
The NALU gate, responsible for selection between the $\mathrm{NAC}_{+}$ and
$\mathrm{NAC}_{\bullet}$ modules, is unable to converge reliably. This is due
to the different convergence properties of the $\mathrm{NAC}_{+}$ and
$\mathrm{NAC}_{\bullet}$ (Madsen and Johansen, 2020, Appendix C.5), which
results in gate weights that converge to a discrete value but not the correct
value. For example, converging to 1 when the true gate value is 0 and vice
versa. In cases where the correct gate is selected, the NALU still fails to
converge consistently (Madsen and Johansen, 2020, Appendix C.5) implying
additional architectural issues exist for the unit. Alternatively, partial
convergence of gate (and weight) parameters leads to a leaky gate effect
(Schlör et al., 2020), where non-discretised parameters lead to non-optimal
solutions. Such solutions may perform well on the interpolation data used
during training but will not generalise to OOD data. This issue is amplified
when additional NALU modules are stacked.
Even when using the improved NAU and NMU modules, gating still leads to
inferior results, therefore Madsen and Johansen (2020) replace module gating
with module stacking. Schlör et al. (2020) suggests using separate weights for
the iNALU sub-units (Equations 6 and 7) to improve convergence, and
independent gating (removing $\bm{x}$ from Equation 4) so learning $\bm{g}$ is
no longer influenced by the input (see Equation 11). Removing the dependence
on the input removes contradictory constraints on the gating that would
otherwise lead to suboptimal solutions. Taking the example given by Schlör et al. (2020, Section
3.3.6), imagine two different input samples $\bm{x_{1}}=[2,2]$ and
$\bm{x_{2}}=-\bm{x_{1}}=[-2,-2]$ and the task of adding, i.e., calculating
$2+2=4$ for the first input and $-2-2=-4$ for the second. Assuming we use the
NALU gating method, it implies that
$\bm{g}=\operatorname*{sigmoid}(\bm{x_{1}}\bm{G})=\operatorname*{sigmoid}(\bm{x_{2}}\bm{G})$
= $\operatorname*{sigmoid}(-\bm{x_{1}}\bm{G})$. However, since
$\operatorname*{sigmoid}(z)\neq\operatorname*{sigmoid}(-z)$ for any non-zero
$z$, this constraint cannot be satisfied.
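The contradiction can be checked numerically. With input-dependent gating, the two samples produce complementary gate values for any choice of $\bm{G}$; the particular $\bm{G}$ below is an arbitrary illustrative choice:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# NALU's gate depends on the input: g = sigmoid(x @ G). Two samples that need
# the SAME gate (both tasks are additions) pull G in opposite directions.
G = np.array([1.0, 1.0])   # any nonzero G exhibits the conflict
x1 = np.array([2.0, 2.0])
x2 = -x1

g1 = sigmoid(x1 @ G)
g2 = sigmoid(x2 @ G)
print(g1)  # ~0.982: gate open  (selects NAC+)
print(g2)  # ~0.018: gate closed (selects NAC*), despite the same task
# Since sigmoid(-z) = 1 - sigmoid(z), the two gates always sum to 1,
# so no G can drive both towards 1 simultaneously.
```

Learning the gate as a free parameter (Equation 11) removes the input from this computation, dissolving the conflict.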
### 5.3 Bias Considerations
Figure 10: Regularisation functions used to induce sparsity in learnable
parameters. Left: Sparsity regularisation used on the NAU and NMU (see
Equation 15), forcing values towards $\\{-1,0,1\\}$. Middle: Scaling function
(Equation 16) to control the importance of the sparsity regularisation for the
NAU and NMU. For this example, $\hat{\lambda}$ is set to 1 and the scale
factor will grow between iterations 4 to 6. Right: Sparsity regularisation for
a single parameter used on the iNALU (see Equation 10).
The weight biases are achieved by adding a regularisation term for sparsity
and using weight clamping (Madsen and Johansen, 2020; Schlör et al., 2020).
The regularisation penalty encourages weights to converge to the discrete
values. Illustrative examples of the Madsen and Johansen (2020) and Schlör et
al. (2020) regularisation functions are found in Figure 10. Madsen and
Johansen (2020) use sparsity regularisation (Equation 15) to enforce the
relevant biases for both the NAU $\\{-1,0,1\\}$ and the NMU $\\{0,1\\}$. Note that the absolute value of $W_{i,o}$ is not necessary when using the NMU. The regularisation
activates and warms up over a predefined period of time to avoid overpowering
the main mean squared error loss term (Equation 16). Clamping is also applied to the weights beforehand, restricting them to the ranges of the desired biases. iNALU uses a
piece-wise function (Equation 10) for regularisation on weight
($\widehat{\bm{W}^{\textrm{A}}}$, $\widehat{\bm{M^{\textrm{A}}}}$,
$\widehat{\bm{W}^{\textrm{M}}}$, $\widehat{\bm{M^{\textrm{M}}}}$) and gate
($\bm{g}$) parameters to encourage discrete values that do not converge to
near-zero values. Intuitively, this regularisation penalises the parameter to
encourage it to move towards -t or t. Therefore, by having a large
positive/negative value, when the parameter goes through a
$\operatorname*{sigmoid}$ or $\tanh$ activation (see Equations 6, 7 and 11),
the resulting value will be close to $\\{-1,0,1\\}$. Rather than a warmup
period, regularisation occurs only once the loss is under a pre-defined
threshold and stops once a discretisation threshold $t=20$ is met.
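As a concrete illustration, the following is a minimal sketch of a $\min(|W_{i,o}|, 1-|W_{i,o}|)$-style sparsity penalty (the shape plotted in Figure 10, left); the averaging and exact form here are our simplification, not the papers' implementation:

```python
def sparsity_penalty(W):
    # Mean over all elements of min(|w|, 1 - |w|): zero exactly when every
    # weight already sits at a discrete value in {-1, 0, 1}, and maximal
    # (0.5) when weights sit halfway between the discrete targets.
    flat = [w for row in W for w in row]
    return sum(min(abs(w), 1 - abs(w)) for w in flat) / len(flat)
```

In training, this term would be scaled (Equation 16) and added to the main MSE loss only after the warmup period described above.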
### 5.4 Initialisation Considerations
Good initialisations are crucial for convergence. Assuming the Madsen and
Johansen (2020) implementation of NALU is used for initialisation, the weight matrices are initialised from a uniform distribution with the range calculated from the
fan values,333https://github.com/AndreasMadsen/stable-
nalu/blob/2db888bf2dfcb1bba8d8065b94b7dab9dd178332/stable_nalu/layer/nac.py#L22
and the gate matrix from a Xavier uniform initialisation with a sigmoid
gain.444https://github.com/AndreasMadsen/stable-
nalu/blob/2db888bf2dfcb1bba8d8065b94b7dab9dd178332/stable_nalu/layer/_abstract_nalu.py#L90
However, empirical results show difficulty for both optimisation and robustness with such initialisations. Fragility in optimisation makes convergence to the expected parameter values difficult to achieve (Madsen and Johansen, 2020), especially when redundant inputs that require sparse solutions exist. When redundancy exists, non-sparse solutions do not extrapolate and lack transparency.
To ease optimisation, Madsen and Johansen (2020) use a linear weight matrix
construction (removing the need of non-linear transformations), while Schlör
et al. (2020) use clipping on the $\mathrm{NAC}_{\bullet}$ calculation (see
Equation 8). The minimum of the input is clipped to $\epsilon=10^{-7}$ and the
result of the $\ln$ operation is clipped to be at most $\omega=20$. This clipping avoids exploding intermediary results. Additionally,
gradient clipping is used to avoid exploding gradients.
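The input and $\ln$ clipping described above can be sketched as follows (a scalar simplification of the element-wise clipping Schlör et al. (2020) apply inside the $\mathrm{NAC}_{\bullet}$ calculation):

```python
import math

EPS = 1e-7    # minimum input value (the epsilon in the text)
OMEGA = 20.0  # maximum value of the ln result (the omega in the text)

def clipped_log(x):
    # Floor the input at EPS before taking ln (avoiding log of zero or
    # negative values), then cap the result at OMEGA so intermediary
    # values cannot explode.
    return min(math.log(max(x, EPS)), OMEGA)
```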
Figure 11: Adapted from Rana et al. (2019, Figure 2 and Figure 4) for showing
the values for $\bm{W}$ used in NALU calculated over the domain of
$\widehat{\bm{W}}$ and $\widehat{\bm{M}}$. Left: Using NALU’s calculation of
$\bm{W}$ where $\tanh$ and $\operatorname*{sigmoid}$ are calculated with base
e. Right: G-NALU’s calculation of $\bm{W}$ where $\tanh$ and
$\operatorname*{sigmoid}$ are calculated with a golden ratio base resulting in
smoother value transition.
Rana et al. (2019) modify the non-linear activations of the weight matrices
in the NALU for smoother gradient propagation as shown by Figure 11. In
contrast, to avoid falling into local optima, iNALU allows multiple reinitialisations of a model during training to counteract the non-optimal initialisation in NALU, which contributes to vanishing gradients and convergence to local minima. Reinitialisation occurs every $m^{th}$ epoch if
the following two conditions are met: (1) the loss has not improved in the
last $n$ steps, and (2) the loss is larger than a pre-defined threshold. The main disadvantage of reinitialising multiple times during training is that it can require running more iterations, which may be infeasible. For example, for a
standalone NALM it is possible to keep reinitialising until a reasonable
solution is found, however if the NALM is used as a subcomponent in a larger
neural network then reinitialisation can be too costly. Through a grid search they find that initialising the means of the gate matrix and the NALU weight matrices $\widehat{\bm{M}}$ and $\widehat{\bm{W}}$ to 0, -1 and 1 respectively results in the most stable modules. However, even when using
such initialisations, the stability problem remains for division (Schlör et
al., 2020, Table 1).
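The reinitialisation schedule can be sketched as a simple predicate; the names and the exact notion of "not improved" below are our assumptions, and iNALU's implementation may differ:

```python
def should_reinitialise(losses, epoch, m, n, threshold):
    # Reinitialise every m-th epoch when (1) the minimum loss over the last n
    # recorded steps is no better than the best loss before them, and
    # (2) the current loss is still above the pre-defined threshold.
    if epoch % m != 0 or len(losses) <= n:
        return False
    no_improvement = min(losses[-n:]) >= min(losses[:-n])
    return no_improvement and losses[-1] > threshold
```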
### 5.5 Division
Figure 12: Taken from Madsen and Johansen (2020, Figure 2b). An example
illustration of the unstable optimisation issue arising when using a stacked
$\mathrm{NAC}+$ $\mathrm{NAC}_{\bullet}$ with $\epsilon=0.1$. The plot
represents the root mean squared loss surface when modelling
$(x_{1}+x_{2})\cdot(x_{1}+x_{2}+x_{3}+x_{4})$ for the input $[1,1.2,1.8,2]$.
$w_{1}$ and $w_{2}$ represent the weight values to use for the $\mathrm{NAC}+$
and $\mathrm{NAC}_{\bullet}$ weight matrices such that
$\mathbf{W}_{1}=\left[\begin{smallmatrix}w_{1}&w_{1}&0&0\\\
w_{1}&w_{1}&w_{1}&w_{1}\end{smallmatrix}\right]$ and
$\mathbf{W}_{2}=\left[\begin{smallmatrix}w_{2}&w_{2}\end{smallmatrix}\right]$
.
Division is NALU’s weakest operation (Trask et al., 2018). Having both
division and multiplication in the same module causes optimisation
difficulties. Madsen and Johansen (2020) highlight the singularity issue (caused by division by 0, or by values close to 0 bounded by an epsilon value) in the $\mathrm{NAC}_{\bullet}$, which causes exploding outputs (see Figure 12). This issue is amplified by the operations being applied in log space. The NMU removes the use of log and is therefore not epsilon bound. Furthermore, the
NMU is only designed for multiplication. The NPU takes Madsen and Johansen
(2020)’s interpretation of multiplication (using products of power functions)
but applies it in a complex space enabling division and multiplication (Heim
et al., 2020). Though the NPU cannot fully solve the singularity issue as a
log transformation is still applied to the inputs, the relevance gating (see
Equation 18) aids in smoothing the loss surface to provide better convergence.
Schlör et al. (2020) observe that reinitialising modules numerous times during
training can still lead to failure, implying that the issue lies in unit
architecture as well as initialisation. Hence, division remains an open issue.
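The singularity can be demonstrated with a scalar sketch of the $\mathrm{NAC}_{\bullet}$ forward pass, $\exp\big(\sum_{i} w_{i} \ln(|x_{i}| + \epsilon)\big)$, where a negative weight corresponds to division (the weights here are hand-set for illustration, not learned):

```python
import math

def nac_mult(x, w, eps=0.1):
    # NAC-dot forward pass (sketch): exp(sum_i w_i * ln(|x_i| + eps)).
    # w_i = 1 multiplies by x_i; w_i = -1 divides by x_i.
    return math.exp(sum(wi * math.log(abs(xi) + eps) for xi, wi in zip(x, w)))

# With eps = 0 and weights [1, -1] the module computes x1 / x2 exactly, but
# an input near zero under a negative weight makes the output explode;
# a non-zero eps bounds the output at the cost of biasing the result.
```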
### 5.6 Compositionality
A single NALU is unable to output expressions whose operations are from both
$\\{+,-\\}$ and $\\{\times,\div\\}$, for example $x_{1}+x_{2}*x_{3}$. Bogin et
al. (2020) hint at NALU’s inflexibility to learn different expressions from the same weights: once trained, the learnt expression of a NALU is static, meaning that expressions with a different ordering of operations will not work. Rana
et al. (2019) develop CalcNet, a parsing algorithm, such that the expression
to learn is decomposed into its intermediary sub-expressions which obey the
rules of precedence (i.e., BIDMAS) and then is solved in a compositional
manner. However, decomposition requires fixed rules and pre-trained sub-units, which are undesirable: in order to decompose, the input must also contain the operations used, meaning that the model is exposed to a priori knowledge of the underlying function.
## 6 Experiments and Findings of Modules for Arithmetic Tasks
To better understand the existing evaluation of modules, we go through the
experiments used in the papers for: NALU, iNALU, NAU, NMU, and NPU. We begin
by indicating inconsistencies across papers for the two-layer arithmetic task
setup, highlighting the different evaluation techniques used by each paper and underlining the need for task standardisation. Inter-module comparison using
existing findings is made to infer the best module per operation. We end this
section by introducing a Single Module Arithmetic task to act as a
standardised benchmark for comparison against all existing arithmetic NALM
modules.
### 6.1 Why are the Square and Square-Root Operations not included in this
Discussion?
Though Trask et al. (2018) mention that the NALU can learn to model squaring and square-rooting, we will purposefully avoid analysing the ability of
the multiplicative modules to do square ($a^{2}$) and square-root ($\sqrt{a}$)
operations. Rather, we focus only on the four core arithmetic operations:
addition, subtraction, multiplication and division.
Of the NALMs described previously in Sections 3 and 4 only the NALU, G-NALU,
NPU and Real NPU are able to model squaring and square-rooting (without
modifications to the original architecture/training methodology). The other
multiplicative modules (NMU and iNALU) would have difficulty in modelling these operations, as explained below.
The squared operation can be solved when using a multiplication module in two
ways, differing by the way the input is represented. The first way expects two
inputs of the same value, which essentially results in a multiplication
($a\times a$), and the second way expects one input using it as the base and
applying it with an exponent of two ($a^{2}$). The second way requires using a
weight value of two (to correspond with the exponent), but this breaks the inductive bias of many of the modules that weights should have a magnitude of at most 1, which is enforced using clipping. Therefore, we avoid analysing the square operation.
As for the square-root operation, the interpretable weight value is 0.5, so square-rooting is modelled as $a^{\frac{1}{2}}$. This contradicts the inductive bias of discrete weights of the NMU and iNALU, which is enforced using regularisation penalties. Therefore, we avoid analysing the square-root operation.
### 6.2 Two Layer Arithmetic Task
A task consistently used in all papers is the ‘Static Simple Function
Learning’ experiment (Trask et al., 2018), which evaluates the ability of a
module to learn a trivial two-operation function. Madsen and Johansen (2019)
introduce their own experiment setup (including details for reproducibility)
which they utilise in their later work (Madsen and Johansen, 2020) under the
name ‘Arithmetic Datasets’ task. Specifically, given an input vector
$\mathbb{R}^{100}$ of floats, the first (addition) layer should learn to
output two values (denoted a and b) which are the sums of two different
partially overlapping slices (i.e., subsets) of the input, and the second
layer should perform an operation on a and b. Figure 13 illustrates such an
example. Due to the rigorous setup, evaluation metrics, and available code, we
strongly suggest the Madsen and Johansen (2019) experiment be used to test and
compare new modules for the Two Layer Arithmetic task.
Figure 13: Taken from Madsen and Johansen (2020, Figure 6). Illustration on
how to get from input vector to target scalar for the Arithmetic Dataset Task.
This setup is solved using a stacked addition-multiplication module.
iNALU’s experiments 4 (‘Influence of Initalization’) and 5 (‘Simple Function Learning Task’) are variants of this task but differ from the Madsen and Johansen (2020) setup. The experiments calculate a and b differently by not
allowing for overlap between a and b, and allowing a and b to be made up of
random (instead of consecutive) elements of the input. Also, the interpolation
and extrapolation ranges are different. Heim et al. (2020) claim that their ‘Large Scale Arithmetic’ task is equivalent to the Arithmetic Dataset task. However, there are key distinctions between the two, meaning the results from the two papers are not directly comparable. We highlight in Table 2 differences between the three experiment setups.555We do not compare Trask et al. (2018) as no details on the experiment setup are given. We do not compare
Rana et al. (2019) as they do not include this experiment.
Property | Madsen and Johansen (2020) | Heim et al. (2020) | Schlör et al. (2020)
---|---|---|---
Hidden size | 2 | 100 | 2
Iterations for one run | 5,000,000 | 50,000 | 100,000
Number of seeds | 100 | 10 | 10
Learning rates | 1e-3 | 1e-2 for addition and 5e-3 for all other operations | 1e-3
Subset and overlap ratios | 0.25 and 0.5 | 0.5 and 0.25 (for addition, subtraction, and multiplication) | $0.3\dot{3}$ and 0
Division | a/b | 1/a | a/b
Interpolation and extrapolation ranges∗ | Train: U[1,2) for all operations. Test: U[2,6). | Train: S(-1,1) for addition, subtraction, and multiplication, S(0,0.5) for division. Test: S(-4,4) for addition, subtraction and multiplication, S(-0.5,0.5) for division. | Train: U[-3,3] and TN${}_{(\mu=0,\sigma=1)}$[-3,3]. Test: U[-5,5] and TN${}_{(\mu=3.5,\sigma=\frac{1}{6})}$[3,4] respectively.
Programming framework | Pytorch (Python) | Flux (Julia) | Tensorflow (Python)
Table 2: Differences in the two-layer arithmetic task setups used in Madsen and Johansen (2020), Heim et al. (2020) and Schlör et al. (2020). ‘a’ and ‘b’ represent
summed slices of the input, and are the expected output values for the
addition module. ∗U=Uniform, S=Sobol and TN=Truncated Normal.
#### 6.2.1 Evaluation Metrics
Currently, no de facto method exists for measuring arithmetic extrapolation
performance on models. The purpose of evaluation metrics is to reflect whether
a model solution is the true solution and be able to rank different model
solutions against each other. We therefore explain the different metrics used
in previous works. Trask et al. (2018) calculate a score for each model, where the score is $\frac{\textrm{MSE loss of the model}}{\textrm{MSE loss of a randomly initialised model (with no training)}}$. A score of 0 reflects perfect accuracy while a score larger than 100 means the solution is worse than the baseline model. Though this method is good for relative rankings between different models, there is no indication of the relative performance against the gold solution (Schlör et al., 2020). Furthermore, a randomly
initialised model will most likely have poor performance, so the scaled errors
of the other models seem better than they actually are. Heim et al. (2020) measure the median of the MSE with confidence intervals using the median absolute deviation. Compared to the mean, the median is less sensitive to outliers and skewed results; however, as a result it discards information about individual errors, which can be helpful when considering factors such as the extent of robustness against different initialisations. Both Madsen and Johansen (2019) and Schlör et al. (2020) measure the MSE, but also compare if the
MSE is within a threshold value representing the error of an ideal solution to
a given precision. This threshold comparison produces a success metric in
which each seed can be compared in a pass/fail situation which is averaged to
a success rate.
Evaluation metrics used on the Arithmetic Dataset Task. Madsen and Johansen (2019) extend the use of threshold-based success by using: configuration-sensitive success thresholds, two additional metrics to measure speed of convergence and sparsity, and confidence intervals for each metric, where each interval is calculated using a different distribution family to best match the metric. Specifically, there are three evaluation metrics: (1) the success on
the extrapolation dataset against a near optimal solution (success rate), (2)
the first iteration which the task is considered solved (speed of
convergence), and (3) the extent of discretisation towards the weights’
inductive biases (sparsity error). The sparsity error, calculated as $\max\limits_{i,o}(\min(|W_{i,o}|,1-|W_{i,o}|))$, measures the NALM weight element which is furthest away from the acceptable discrete weights for a NALM. For example, for the NMU if a weight was at 0.7 it would get a sparsity
error of 0.3. A success means the MSE of the trained model is lower than a
threshold value (i.e., the MSE of a near optimal solution). For the Arithmetic
Dataset Task, the threshold is a simulated MSE on 1,000,000 data samples using
a model where each weight of the addition is off an optimal weight value by
$\epsilon=$1e-5. A near optimal solution is used over an optimal solution as
it considers accumulated numerical precision errors (a limitation of hardware
rather than module architecture). Each metric is calculated over different
seeds where the total number of seeds should be enough to demonstrate issues
on robustness, while keeping computation time reasonable. 95% confidence
intervals are calculated for each metric. The success rate uses a Binomial distribution because trials (i.e., runs on a single seed) are pass/fail situations. The convergence metric uses a Gamma distribution and sparsity
error uses a Beta distribution. Both Beta and Gamma can easily approximate the
normal distribution and support its corresponding metric.
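The sparsity error can be sketched directly from its definition (matching the 0.7 example above):

```python
def sparsity_error(W):
    # Max over weight elements of min(|w|, 1 - |w|): the distance of the
    # worst-discretised weight from the nearest value in {-1, 0, 1}.
    return max(min(abs(w), 1 - abs(w)) for row in W for w in row)
```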
### 6.3 Additional Experiments
This section briefly summarises additional experiments given in the NALM
papers. We do not cross-compare papers for each experiment as there are too few similarities between experiments.
Trask et al. (2018) carry out a recurrent version of their static task
experiment to test the $\mathrm{NAC}_{+}$, where the subsets a and b are
accumulated over multiple timesteps. The purpose of this task is to generate
much larger output values to test NALU on. As well as pure arithmetic tasks,
Trask et al. (2018) test NALU in other settings such as: translating numbers
in text form into the numerical form (for example ‘two hundred and one’ to
201), a block grid-world which requires travelling from point A to B in
exactly n timesteps, and program evaluation for programs with arithmetic and
control operations. However, the NALU is not utilised for its capabilities as a NALM in the text-to-number task, as the NALU is applied to an LSTM’s hidden state vector; therefore it is questionable if the arithmetic capabilities of NALU are being used, as the NALU may also have to decode the numerical values
from the LSTM vector. MNIST is also used to evaluate NALU’s abilities on being
part of end-to-end applications. This includes exploring counting the
occurrence of different digits, addition of a sequence of digits, and parity
prediction.
Madsen and Johansen (2020) also use MNIST for testing the module’s abilities
to act as a recurrent module for adding/multiplying the digits. Madsen and
Johansen (2020) additionally provide experiments to express the validity of
their modules. This includes modifying the number of redundant hidden units, different input training ranges, ablation on multiplication, stress testing the stacked NAU-NMU against different input sizes, overlap ratios and subset ratios, showing the failure of gating in convergence, and tuning the regularisation parameters.
Schlör et al. (2020) provide three additional experiments. Experiment 1
(‘Minimal Arithmetic Task’) uses a single-layer to do a single operation with
no redundancy to see the effect of different input distributions. Experiment 2
(‘Input Magnitude’) examines the effect of training data by controlling the magnitude of the interpolation data. NALU fails on magnitudes greater than 1.
iNALU remains unaffected for addition and subtraction. Multiplication
performance is coupled to magnitude where extrapolation error increases with
magnitude. Division is uncorrelated to the input magnitude. To increase
problem difficulty, experiment 3 (‘Simple Arithmetic Task’) introduces
redundancy where from 10 inputs only 2 are relevant. NALU improves on
performance for exponentially distributed data when redundant inputs are introduced. iNALU shows improvements for multiplication, where the module is able to succeed on previously failed training ranges such as an exponential distribution with a scale parameter of 5 (meaning $\lambda$ is 0.2), but worsens for division.
Heim et al. (2020) highlight the relevance gate’s use via a toy experiment to
select one of the two inputs. They show the relevance gate transforms regions
away from the solution which contain no gradient information into regions with
more instructive gradients (Heim et al., 2020, Figure 3). Additionally, they
demonstrate an application of a stacked NAU-NPU module for equation discovery
for an epidemiological model.
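The relevance gate's effect can be sketched as interpolating each input towards 1, the multiplicative identity, so an irrelevant input drops out of the product rather than flattening the loss surface. This form is our reading of Heim et al.'s Equation 18; the actual NPU applies it with learned gate parameters:

```python
def relevance_gate(x, g):
    # Each gate value g_i in [0, 1] interpolates input x_i towards 1:
    # g_i = 1 keeps the input; g_i = 0 replaces it with the multiplicative
    # identity, so it contributes nothing to a downstream product.
    return [gi * xi + (1.0 - gi) for xi, gi in zip(x, g)]
```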
### 6.4 Cross Module Comparison
We compare existing findings across modules. NALU is no longer considered the
state-of-the-art for neural arithmetic operation learning. For each operation
the best module is as follows - addition or subtraction: NAU, multiplication:
NMU, division: NPU (or RealNPU if the task is trivial).
iNALU generally outperforms NALU at the cost of additional parameters and
complexities to the model. The magnitude of iNALU’s improvement varies: Schlör et al. (2020) claim vast improvements, while Heim et al. (2020) claim minor ones. For division, the iNALU and NALU performances remain comparable.
Success on multiplication is dependent on the input training range. Heim et al. (2020) state the NMU outperforms iNALU on multiplication (as expected), but also on addition and subtraction. The reason lies in the architecture used. The model is a stacked NAU-NMU, meaning the addition/subtraction would be modelled by the NAU. The NMU would therefore only be required to act as a selector, selecting the output of the summation (that is, have a single weight at 1 and the rest at 0). Consequently, if two NMUs are stacked together we expect failure on a pure addition/subtraction task, as shown in Madsen and Johansen (2020, Appendix C.7). Surprisingly, the two-layer NMU was able to get
56% success for subtraction, though 0% success for addition (Madsen and
Johansen, 2020, Table 6). Heim et al. (2020) is the only work (at the time of
writing this paper) to experimentally compare the main modules mentioned.
Results show that the NPU outperforms the iNALU for multiplication and
division. When stacked on top of a NAU, the NPU performs similarly to the NMU for addition and subtraction. The NPU is outperformed by the NMU for multiplication; however, it is more consistent in convergence across different runs. For addition and subtraction, the NAU-NMU is the sparsest module (having
the least number of non-zero weights). Arithmetic tasks using the basic arithmetic operations require the weight and gate values to be discrete. Regularisation penalties have been a popular approach (Madsen and Johansen,
2020; Schlör et al., 2020) to achieve this. The NPU uses L1 regularisation for
arithmetic tasks, encouraging sparsity over discretisation due to its ability
to express fractional powers. However, the main cause of sparsity in the NPU modules is the relevance gating. If this gating is removed (denoted by the NaiveNPU in the experiments), models are consistently less sparse for all operations (Heim et al., 2020, Figure 7).
### 6.5 Single Module Arithmetic Task
Having a standardised benchmark is essential for fair comparison of modules.
As stated previously, so far, no such benchmark exists. Therefore, we provide
results on a Single Module Arithmetic Task, training modules on their
respective operations over a range of different interpolation distributions
and testing over a range of extrapolation distributions.666Code is available
at: https://github.com/bmistry4/nalm-benchmark
Why not use the two-layered Arithmetic Dataset Task? The Arithmetic Dataset
Task requires modules to perform three sub-tasks: selection, operation,
stacking. Selection is the ability to deal with input redundancy for both
modules (though more-so for the first layer addition module). Operation is the
ability to carry out the correct operation/s (i.e., addition and
multiplication). Stacking sees if training can propagate through two layers.
Even with only two layers, there are already multiple components being
assessed in a single task, making it difficult to analyse where issues lie.
Therefore, to gain a better understanding of individual NALMs, we propose an
experiment which evaluates if the operation/s the module specialises in can be
learnt.
Setup. A single module is used. The input size is two and output size is one,
hence there is no input redundancy. Hence, the objective is to model:
$y=x_{1}\;\circ\;x_{2}$ where $\circ\in\\{+,-,\times,\div\\}$. We test the:
NALU, iNALU, G-NALU, $\mathrm{NAC}_{+}$, $\mathrm{NAC}_{\bullet}$, NAU, NMU,
NPU, and Real NPU. Each run trains for 50,000 iterations to allow enough iterations for convergence. An MSE loss is used with an Adam optimiser.
Interpolation (training/validation) and extrapolation (test) ranges are
presented in Table 3. Early stopping is applied using a validation dataset
sampled from the interpolation range. Experiment/hyper-parameters set can be
found in Appendix D.
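Data generation for this setup reduces to sampling two uniform inputs and applying the target operation. The following is a sketch (function and parameter names are ours; the actual benchmark code is at the repository above):

```python
import random

def make_batch(op, lo, hi, batch_size=128, seed=0):
    # Single Module Arithmetic Task data (sketch): inputs x1, x2 ~ U[lo, hi),
    # target y = x1 op x2 for op in {+, -, *, /}.
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    rng = random.Random(seed)
    xs = [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(batch_size)]
    ys = [ops[op](a, b) for a, b in xs]
    return xs, ys
```

Extrapolation test data uses the same generator with the wider ranges of Table 3.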
Interpolation | Extrapolation
---|---
[-20, -10) | [-40, -20)
[-2, -1) | [-6, -2)
[-1.2, -1.1) | [-6.1, -1.2)
[-0.2, -0.1) | [-2, -0.2)
[-2, 2) | [[-6, -2), [2, 6)]
[0.1, 0.2) | [0.2, 2)
[1, 2) | [2, 6)
[1.1, 1.2) | [1.2, 6)
[10, 20) | [20, 40)
Table 3: Interpolation (train/validation) and extrapolation (test) ranges used
for the Single Module Arithmetic Task. Data (as floats) is drawn from a
Uniform distribution with the range values as the lower and upper bounds.
Evaluation. We adopt Madsen and Johansen (2019)’s evaluation scheme used
for the Arithmetic Dataset Task (explained in Section 6.2.1) but adapt the
expression used to generate the predictions of an $\epsilon$-perfect model
($y_{o}^{\epsilon}$). The expression used depends on the operation to model as
shown below:
$\displaystyle\textbf{Addition:}\,y_{o}^{\epsilon}$ $\displaystyle=(x_{1}+x_{2})-\left(\sum_{i=1}^{I}|x_{i}|\right)\epsilon$
$\displaystyle\textbf{Subtraction:}\,y_{o}^{\epsilon}$ $\displaystyle=(x_{1}-x_{2})-\left(\sum_{i=1}^{I}|x_{i}|\right)\epsilon$
$\displaystyle\textbf{Multiplication:}\,y_{o}^{\epsilon}$ $\displaystyle=(x_{1}x_{2})(1-\epsilon)^{2}\times\prod_{i\in X_{irr}}(1-|x_{i}|\epsilon)$
$\displaystyle\textbf{Division:}\,y_{o}^{\epsilon}$ $\displaystyle=\frac{x_{1}(1-\epsilon)}{x_{2}(1+\epsilon)}\times\prod_{i\in X_{irr}}(1-|x_{i}|\epsilon)$
Assume $x_{1}$ and $x_{2}$ are the operands to apply the operation to, and that any remaining features ($x_{3},...,x_{n}$) are irrelevant to the calculation and part of the set $X_{irr}$. We use $I$ to denote the total number of input
features. In each case the $\epsilon$ for each feature will contribute some
error towards the prediction. A simulated MSE is then generated with an
$\epsilon=1e-5$ like in Madsen and Johansen (2019) and used as the threshold
value to determine if a NALM converges successfully for a particular range by
comparing the NALMs extrapolation error against the threshold value.777Other
variations for generating the threshold exist which are further discussed in
Appendix E.
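For example, the addition threshold can be simulated directly from the ε-perfect expression above. This is a sketch with names of our choosing; the benchmark draws 1,000,000 samples from the extrapolation range:

```python
def simulated_threshold_add(samples, eps=1e-5):
    # MSE between the true sum (x1 + x2) and the epsilon-perfect prediction
    # (x1 + x2) - (|x1| + |x2|) * eps; the difference is (|x1| + |x2|) * eps.
    errs = [((abs(x1) + abs(x2)) * eps) ** 2 for x1, x2 in samples]
    return sum(errs) / len(errs)
```

A NALM then counts as successful on a range when its extrapolation MSE falls below this simulated value.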
#### 6.5.1 Results
We present the NALMs’ performances on the four main arithmetic operations.
Each figure consists of plots for each evaluation metric (success rate, speed
of convergence and sparsity error) discussed in the evaluation paragraph
above, with confidence intervals calculated over 25 seeds.
Addition (Figure 14).
Figure 14: Performance on Single Module Task for addition.
The NAU has full success for all ranges, correlating with sparsity errors around 0, meaning that weights successfully converge to the expected value of 1. The iNALU also has full success but takes longer to solve and has a
slightly larger sparsity error than the NAU. The NALU struggles with
consistent performance especially for the small positive range
($\mathcal{U}$[0.1,0.2)), large positive range ($\mathcal{U}$[10,20)) and
range with both positive and negative inputs ($\mathcal{U}$[-2,2)). The low
sparsity error implies that discrete values are being converged to, though not
to the correct ones. The $\mathrm{NAC}_{+}$ also struggles to obtain
consistent results over different ranges like the NALU. The G-NALU performs
the worst of all the modules obtaining non-zero success on only 4 of the 9
ranges.
Subtraction (Figure 15).
Figure 15: Performance on Single Module Task for subtraction.
The NAU has full success for all ranges. The solved-at iteration remains low, similar to addition, with perfect sparsity when converged. However,
ranges $\mathcal{U}$[-1.2,-1.1) and $\mathcal{U}$[1.1,1.2) require over double
the number of iterations to be solved compared to the rest of the ranges
implying that small ranges with a mean of 1 can cause more challenging loss
landscapes. The difficulty of these two ranges also holds for all other
modules which have near 0 success (except the iNALU which has at least 40%
success). The iNALU has full success on all ranges excluding
$\mathcal{U}$[-1.2,-1.1) and $\mathcal{U}$[1.1,1.2). Like addition, the solve speed and sparsity error of iNALU remain larger than the NAU’s. The NALU
struggles much more with subtraction than with addition (except for
$\mathcal{U}$[10,20)). The $\mathrm{NAC}_{+}$ outperforms NALU on 4 of the 9
ranges. The G-NALU does not outperform the NALU on any ranges.
Multiplication (Figure 16).
Figure 16: Performance on Single Module Task for multiplication.
The NMU, iNALU, NPU and RealNPU have full success on range $\mathcal{U}$[1,2).
The NMU struggles with some negative input ranges, i.e.,
$\mathcal{U}$[-1.2,-1.1) and $\mathcal{U}$[-2, -1). Though NPUs in theory can
learn with negative inputs, empirical results suggest the modules struggle.
The NPU and Real NPU perform the same for all ranges except one, suggesting
that the problem is not complex enough to require the use of the imaginary
weight matrix. However, $\mathcal{U}$[-2,2) is an example in which $W_{im}$ is
utilised (achieving 32% more success than the RealNPU). Even though this range allows either of the input values to be positive or negative, the learnt weights should be [1,1] for the real weights and [0,0] for the imaginary weights. The NALU can solve some ranges but no range with full
success. The $\mathrm{NAC}_{\bullet}$ outperforms the NALU on the 2 ranges where it has success, but fails to achieve any success on the remaining 7 ranges. The
iNALU outperforms the NALU on 7 ranges where it gains full success on 5 of
those ranges. The G-NALU does not outperform the NALU on any ranges.
Division (Figure 17).
Figure 17: Performance on Single Module Task for division.
No model solves division for all ranges. The iNALU is the only module to have
a success rate of 1 on any range, fully solving 3 of the 9 ranges. This
highlights the difficulty in modelling division even for the simplest case, aligning with prior claims (Madsen and Johansen, 2020). The NPU and RealNPU perform perfectly for $\mathcal{U}$[1,2). The RealNPU has better performance than the NPU for negative input ranges. The $\mathrm{NAC}_{\bullet}$ is able to achieve some success on 4 ranges, while the NALU and the G-NALU cannot achieve any success on any of the 9 ranges. The failure on $\mathcal{U}$[-2,2) for
the NALU, iNALU and G-NALU is expected due to the inability to process mixed
sign inputs caused by the modules’ log-exponent transformation.
Testing limits of full success modules. Achieving full success on all the ranges for an operation only occurred three times: the iNALU for addition, and the NAU for addition and subtraction. To determine to what extent this holds, we
experiment with introducing redundant units to the input, resulting in an
increase of the task difficulty. The iNALU (Figure 18) shows multiple failure
ranges at 10 input units (8 redundant inputs), achieving reasonable success
only on the larger ranges.
Figure 18: iNALU failures on the Single Module Task for addition with 10
inputs (8 redundant inputs).
The NAU fails at 10 inputs (8 redundant inputs) for addition (Figure 19) and at 100 inputs (98 redundant inputs) for subtraction (Figure 20), on the same ranges $\mathcal{U}$[-1.2,-1.1) and $\mathcal{U}$[1.1,1.2).
Figure 19: NAU failures on the Single Module Task for addition with 10 inputs
(8 redundant inputs).
Figure 20: NAU failures on the Single Module Task for subtraction with 100
inputs (98 redundant inputs).
Summary. The Single Module Task assesses the stability of individual NALMs.
Having stability at this granularity is especially important, because if a NALM is unstable as a stand-alone unit, how can we expect it to converge when applied in larger networks where it is only a subcomponent? Overall, we find
that a majority of NALMs are not robust to different training ranges. NALMs
which achieved full success on all ranges for a operation on the two-input
setup break when redundant inputs are introduced, however the extent varies
depending on the module. Division is the most challenging operation followed
by multiplication, subtraction and addition. NALMs which specialise in at most
two operations are found to outperform the NALU in a majority of the cases. Of
the NALMs which can model the four operations, i.e., the NALU, iNALU and
G-NALU, the iNALU performs best on average over all operations, though the
performance gain is less significant for multiplication and division.
## 7 Experiments and Findings of Modules for Logic Tasks
This section summarises the experiments reported for the two existing logic
based NALMs: the NLRL and the NSR.
### 7.1 NLRL
Preliminary results are given which test basic logic and arithmetic
operations: AND, OR, NOT and XOR; multiplication; addition with division;
identity; and constant selection. Each model consists of a stacked NLRL.
Different numbers of intermediary units per layer are tested, where the
authors conclude that increasing the units improves performance until a
saturation point (found to be 8) is reached. A multi-operation based task
requires stacking layers of NLRL, which introduces redundancy as the stacked
output and input layers both use negation gating. Therefore, if stacking is
required, it is suggested to remove cases with consecutive negation gating
layers and only have a single layer. Using a module with both AND and OR
results in faster convergence (fewer iterations) compared to using a module
with only AND operations, but has a longer computation time per training
iteration.
### 7.2 NSR
Faber and Wattenhofer (2020) first check if the NSR can learn comparison
operations on both an integer and floating point input setting. Results show
that the NSR can learn the comparison functions with both input types and can
extrapolate well. Modules struggle with learning the $=$ and $\neq$
operations, but performance can be improved by introducing redundancy through
additional sets of weights during training (Faber and Wattenhofer, 2020,
Section 4.4). The NSR can be attached with a NAU to learn piecewise functions.
Findings suggest a simple continuous function (such as the absolute difference
between two inputs) can be learnt with extrapolation capabilities, but a non-
continuous function cannot. The NSR can be converted into a recurrent module
to find the minimum of a list and to count the occurrence of a number in a
sequence. The minimum task performs perfectly on all extrapolation settings;
however, the counting task’s performance degrades as sequence length
increases. An additional task requires finding the shortest paths in a Graph
Neural Network (GNN), where the network should learn to imitate the
Bellman-Ford algorithm. This task is used to show that the recurrent NSR can
learn to aggregate numbers to a minimum. When extrapolating to larger graphs,
performance improved with larger edge weights. Finally, an MNIST digit
comparison task tests whether the NSR can be used as a downstream module for a
CNN in an end-to-end manner. Findings show that the NSR based network cannot
outperform a vanilla CNN but is comparable to an MLP based network, where the
underperformance was suggested to be a result of a weak learning signal.
## 8 Applications of NALU
This section describes uses of NALU as a sub-component in architectures to
tackle practical problems outside the domain of solving arithmetic on numeric
inputs. Success and failure cases are mentioned. We choose to focus on NALU
applications on the basis that the improved modules discussed above can be
applied in place of NALU to provide additional performance gains to the
mentioned applications.
### 8.1 Existing Applications
Xiao et al. (2020) insert a NALU layer between a two-layer Gated Recurrent
Unit (GRU) and a dense layer to predict vehicle trajectories on complex road
sections (containing constantly changing directions). The NALU improves
extrapolation capabilities to deal with abnormal input cases outside the range
of the GRU hidden state outputs.
Raj et al. (2020) combine $\mathrm{NAC}_{+}$ modules before LSTM cells for
fast training in the extraction of temporal features to classify videos for
badminton strokes. They further experiment in using $\mathrm{NAC}_{+}$ modules
with a dense layer to learn temporal transformations, finding better
performance than the LSTM based module and the dense modules being quicker to
train. They justify the use of the $\mathrm{NAC}_{+}$ as a way to produce
sparse representations of frames, as non-relevant pixels would not be selected
by the $\mathrm{NAC}_{+}$ resulting in 0 values, while relevant pixels
accumulate.
Zhang et al. (2019a) use deep reinforcement learning to learn to schedule
views on content-delivery-networks (CDNs) for crowdsourced-live-streaming
(CLS). NALU’s extrapolative ability alleviates the issue of data bias (which
is the failure of models outside the training range) by using the NALU to
build an offline simulator to train the agent when learning to choose actions.
The simulator is composed of a two layer LSTM with a NALU layer attached to
the end. Zhang et al. (2019b) propose a novel framework (named Livesmart) for
cost-efficient CLS scheduling on CDNs with a quality-of-service (QoS)
guarantee. Two components required in Livesmart contain models using NALU. The
first component (named new viewer predictor) uses a stacked LSTM-NALU to
predict workloads from new viewers. The second component (named QoS
characterizer) predicts the QoS of a CDN provider. This component uses a stack
of Convolutional Neural Networks (CNNs), LSTM and NALU. Both components use
the NALU’s ability to capture OOD data to aid in dealing with rare
events/unexpected data.
Wu et al. (2020) combines layers of $\mathrm{NAC}_{+}$ to learn addition
and subtraction on vector embeddings to form novel compositions for creating
analogies. Modules are applied to the output of an attention module (scoring
candidate analogies) that is passed through an MLP. The output of the
$\mathrm{NAC}_{+}$ modules is passed to a LSTM producing the final analogy
encoding.
The NALU has also been used with CNNs. Rajaa and Sahoo (2019) applies stacked
NALUs to the end of convolution units to predict future stock prices.
Rana et al. (2020) utilises the $\mathrm{NAC}_{+}$/NALU as residual
connection modules in larger convolutional networks such as U-Net and fully
convolutional regression networks for cell counting in images. Such
connections enable better generalisation when transitioning to data with
higher cell counts than the training data. However, no observations are made
as to what the units learn that leads to an improvement in cell counting over
the baseline models.
Chennupati et al. (2020) uses the NALU as part of a larger architecture to
predict the runtime of code on different hardware devices configured using
hyperparameters. The NALU predicts the reuse profile of the program, keeping
track of the count of memory references accessed in the execution trace. The
NALU outperforms a genetic programming approach for doing such a prediction.
Teitelman et al. (2020) explores the problem domain of cloning black-box
functionality in a generalisable and interpretable way. A decision tree is
trained to differentiate between different tasks of the black box. Each leaf
of the tree is assigned a neural network comprising stacked dense layers
with a NALU layer between them. Each neural network is able to learn the
black-box behaviour for a particular task. Like Xiao et al. (2020), results
showed that NALU is required to learn the more complex tasks.
Finally, Sestili et al. (2018) suggests the NALU has potential use in networks
which predict security defects in code. This is due to the module’s ability to
work with numerical inputs in a generalisable manner, instead of limiting the
application to be bound to a fixed token vocabulary requiring lookups.
### 8.2 Applications Where NALU Is Inferior
We discuss examples of situations in which NALU modules are a sub-optimal
architecture choice for applicational settings. Madsen and Johansen (2020)
show that the NAU/NMU outperforms NALU in the MNIST sequence task for both
addition and multiplication. Dai and Muggleton (2020) show the arithmetic
ability (named background knowledge) of the NALU is incapable of performing
the MNIST task for addition or products when combined with an LSTM. Instead,
they show that a neural model for symbolic learning, which learns logic
programs using pre-defined rules as background knowledge, can perform with
over 95% accuracy. However, we question whether the failure is a result of the
NALU or of the misuse of its abilities by combining it with an LSTM. For
example, as the inputs are images, unless the LSTM converts each image into a
numerical value which the NALU can process arithmetically, it can be suggested
that the LSTM is completing the task without the numerical capabilities of the
NALM. Jacovi et al. (2019) show that in black box cloning for the Trask et al.
(2018) MNIST addition task, their EstiNet model, which captures
non-differentiable models, outperforms the NALU. Though it can be argued that
a more relevant comparison would test the $\mathrm{NAC}_{+}$ or the NAU,
which are solely designed for addition. Joseph-Rivlin et al. (2019) show that
although the $\mathrm{NAC}_{\bullet}$ can learn the order for a polynomial
transformation to a high accuracy, it is still outperformed by a pre-defined
order two polynomial model. Results suggest that the $\mathrm{NAC}_{\bullet}$
may not have fully converged to express integer orders. Dobbels et al. (2020)
found the NALU was unable to extrapolate for the task of predicting far-
infrared radiation fluxes from ultraviolet-mid-infrared fluxes. Though no
clear reason was stated, the lack of extrapolation could be attributed to the
co-dependence of features caused by applying fully connected layers prior
to the module. Jia et al. (2020) considers the NALU as a hardware component,
concluding that the NALU has too high an area and power cost to be feasible
for practical use. Implementing the NALU for addition costs 17 times the area
of a digital adder, and the memory requirements for weight storage make it
energy inefficient for CPU operations.
## 9 Remaining Gaps
This section discusses areas which remain to be fully addressed. We focus on:
benchmarks, division, robustness, compositionality, and interpretability of
more complex architectures.
Having benchmarks is important in allowing for reliable comparison between
modules. Such benchmarks should include a simple synthetic dataset which we
detail in this paper, and a real-world data benchmark (which remains to be
created) with a systematic evaluation.
Division remains a challenge. To date no module has been able to reliably
solve division. Currently the NPU by Heim et al. (2020) is the best module to
use, though it would struggle with input values close to zero. Madsen and
Johansen (2020) argue that modelling division is not possible due to the
singularity issue. One suggestion for dealing with the zero case is to take
influence from Reimann and Schwung (2019), which includes an option for
signalling an invalid output (in their case, all off values).
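As a minimal sketch of this suggestion (our own formulation; the function name and tolerance are illustrative and not taken from Reimann and Schwung (2019)), a division module could expose an explicit invalid output for near-zero denominators rather than emitting an unbounded value:

```python
# Guarded division: inputs whose denominator falls inside a small
# tolerance band around zero yield an explicit "invalid" signal
# (here None) instead of a numerically exploding quotient.
def safe_divide(x1, x2, tol=1e-6):
    if abs(x2) < tol:
        return None
    return x1 / x2

print(safe_divide(6.0, 3.0))  # 2.0
print(safe_divide(6.0, 0.0))  # None
```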
One goal of these modules is to be able to extrapolate. To achieve this, a
module should be robust to being trained on any input range. Madsen and
Johansen (2020) show that modules are unable to achieve full success on all
tested ranges (with the stacked NAU-NMU failing on a training range of
[1.1,1.2], being unable to obtain a single success). Reinitialisation of
weights (Schlör et al., 2020) during training could provide a solution;
however, this seems unlikely given that Madsen and Johansen (2020) test
against 100 model initialisations, and using reinitialisation for a NALM that
is part of a large end-to-end network may not be economical.
Compositionality is desirable. A model should be flexible, having the option
to select different types of operations and model complex mathematical
expressions. Currently the two popular approaches are gating and stacking.
Gating has been found to not work as expected and give convergence issues.
Stacking, though more reliable, has less options in operation selection than
gating. Deep stacking of modules (in a non-recurrent fashion) remains
untested.
It remains to be understood how modules influence learning of other modules
(such as recurrent networks and CNNs) in their representations. For example,
seeing if representations are more interpretable because of being trained with
a module.
## 10 Related Work
We now take a step back, overviewing related works, as part of the bigger
picture in deep learning. In particular we focus on other arithmetic based
architectures, inductive biases and specialist modules.
### 10.1 Extrapolative Mathematics
In contrast to specialists, generic neural architectures have also been
investigated for learning mathematics. Two such examples include convolutional
recurrent networks (used in the Neural GPU) (Kaiser and Sutskever, 2016) and
transformers (Lample and Charton, 2020), both of which have been shown to be
Turing complete (that is, Turing complete under the assumption that
arbitrary, rather than finite, precision is used; Pérez et al., 2019). Neural
GPUs are constructed from convolutional gated recurrent units. The Neural GPUs
can extrapolate to long sequence lengths (2000) from being trained on length
20 inputs, but use binary inputs rather than real numbers (Kaiser and
Sutskever, 2016). However, various training techniques need to be
implemented, such as curriculum learning, relaxed parameter sharing and
dropout; such techniques are not required for training NALMs. Furthermore,
only a few Neural GPU models generalise to such long sequences, but this has
been improved on in Freivalds and Liepins (2017) by simplifying the
architecture/training and introducing diagonal gating and hard non-linearities
with additional cost functions. Transformers, which can process numerical
values, remain unsuccessful even for simple extrapolation tasks such as
multiplication (Saxton et al., 2019). In contrast, once a
NALM learns to apply an operation (by converging to the relevant interpretable
weight) it will always calculate the operation correctly. For more complex
maths such as integration, generalisation over different input/output sequence
length data generators have also been identified as a weak point in
transformers (Lample and Charton, 2020).
Other approaches which can process raw numerical inputs include using
reinforcement learning, non-specialised MLP architectures and symbolic
regression. Chen et al. (2018) uses a multi-level hierarchical reinforcement
learning approach allowing for operations to be decomposed into simpler
operations and solved via specialised skill modules. A proximal policy
optimisation strategy is used to train the modules responsible for decomposing
and calling the specialised skill modules. Furthermore, a curriculum learning
methodology is adopted by using a teacher-student continual learning strategy
to control the task difficulty setting when learning. In experiments for
modelling the four arithmetic operations, the model is able to learn short
sequence lengths (5) to full accuracy but cannot learn longer lengths (20).
Similar to NALMs, the model especially struggles with division. A downside is
that the arithmetic operation(s) must be defined in the input, unlike in NALMs
where only the input values need to be given. Nollet et al. (2020) uses
non-specialised MLP architectures to learn long multiplication and addition
for up to 7 digits. Breaking the task into processing steps representing
sub-operations allows the input to act as external memory. Similar to
Kaiser and Sutskever (2016)’s need for curriculum learning, active learning
was required to control the difficulty of the dataset to learn long
multi-digit multiplication. Though less interpretable than NALU weights,
certain neurons in the MLP were found to encode digit operations for some
operands. However, extrapolation performance to longer digits remained
untested.
In short, though various alternatives to NALMs exist, each has its own
shortcomings regarding input format, extrapolation, and robustness.
### 10.2 Inductive Biases
Using an inductive bias, to give control over the learning space of the model,
can be a critical factor in achieving generalisation (Mitchell, 1980). In
NALMs, weights have inductive biases such that particular weights (e.g.,
discretised values) represent applying an arithmetic/logic operation. Other
forms include utilising knowledge of the task to incorporate biases directly
into the architecture. For example, using periodic activation functions (sin
and cos) (Martius and Lampert, 2017) or simplifying expressions through
symmetry/separability (Udrescu and Tegmark, 2020) to model physics based
expressions. Alternatively, regularisation can be used as a form of bias,
incorporated with additional auxiliary loss terms (Lopedoto and Weyde, 2020).
NALMs can use such losses to induce discretisation (Schlör et al., 2020;
Madsen and Johansen, 2020). Though auxiliary losses can only be minimised at
training time and result in optimising an alternate objective to the original
loss, a possible way to alleviate this is through the use of unsupervised
fine-tuning and nested optimisation (Alet et al., 2021).
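As a simplified sketch of such a discretisation bias (our own form, not an exact reproduction of any paper's regulariser), each weight can be penalised by its distance to the nearest allowed discrete value:

```python
# Auxiliary discretisation loss: penalise each weight by its distance
# to the nearest value in the allowed discrete set (e.g. {-1, 0, 1}
# for NAU-style weights), scaled by a schedule coefficient lam.
def discretisation_loss(weights, allowed=(-1.0, 0.0, 1.0), lam=0.01):
    return lam * sum(min(abs(w - a) for a in allowed) for w in weights)

# Weights close to the discrete pattern [1, -1, 0] incur a small
# penalty: 0.01 * (0.02 + 0.02 + 0.05)
print(discretisation_loss([0.98, -1.02, 0.05]))
```

Added to the task loss, a term of this kind pulls weights towards interpretable discrete values, with `lam` typically ramped up on a schedule once the weights have started converging.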
### 10.3 Specialist Modules
NALMs can be viewed as specialist modules for arithmetic/logic. Using modules
can provide better systematic generalisation than generic architectures
(Bahdanau et al., 2019). Other works on specialist modules include Zhang et
al. (2019c), who introduce a module which learns a permutation-invariant
representation of a set. This is achieved by learning a pairwise ordering cost
function with constraints to acquire desirable properties such as symmetry and
the identity value. Importantly, this module does not require a priori
knowledge of the inputs, which is analogous to how NALMs do not need to know
the operation to learn. Zhang et al. (2018) creates a module to learn to count
objects in images from attention weights with the ability to reduce double
counting of visual objects. The module takes in object proposals and converts
them into a graph whose edges can be removed/scaled in a fully differentiable
manner to recover the object count. In particular, they learn piecewise linear
monotonically increasing functions designed to handle overlapping object
proposals, enforcing the constraint that the extreme cases of bounding boxes
being fully distinct or fully overlapping return 0 and 1 respectively, with
all other cases interpolating within [0,1]. This is analogous to how many
NALMs use the bounding parameter ranges to represent particular operations,
such as 1 for addition and -1 for subtraction, and interpolate between them
when learning. Furthermore, this counting module is an example of
a specialist which can be integrated into larger networks to improve
performance for more complex tasks (Kim et al., 2018).
Utilising different specialists encourages factorisation of the task resulting
in searching for reusable components. Hu et al. (2017) produce specialist
modules for compositional reasoning tasks in visual question answering. For
example, the ‘find’ and ‘relocate’ modules output attention maps which can be
applied to the visual input in order to carry out the sub-task. ‘Find’ would
be able to focus on an object/attribute (e.g., the red sphere) and ‘relocate’
can infer spatial relations (e.g., focus on object A behind object B). Jiang
and Bansal (2019) use similar types of modules with attention maps as outputs
but only use a text based modality. Modules with generic structures such as
LSTMs can also be turned into specialist modules by controlling the update
dynamics through inter-module competition and sparse communication (Goyal et
al., 2021; Goyal and Bengio, 2020). However, due to the generic nature, such
modules will not be interpretable at a parameter level like NALMs.
## 11 Conclusion
Neural Arithmetic Logic Modules (NALMs) are a promising area of research for
systematic generalisation. Focusing on the first Neural Arithmetic Unit, the
NALU, we explained the unit’s limitations along with existing solutions from
other modules: the iNALU, NAU, NMU, NPU, and CalcNet. We also detail the two
logic NALMs inspired by the NALU: the NLRL and the NSR. There exists a range
of applications for the NALU, though some uses remain questionable.
Cross-comparing modules suggests inconsistencies in experiment methodology and
limitations existing
in the current state-of-the-art modules. A new benchmark is provided for
comparing arithmetic modules named the ‘Single Module Arithmetic Task’.
Finally, we outline remaining research gaps regarding: solving division,
robustness, compositionality and interpretability of complex architectures.
Acknowledgments
We would like to thank Andreas Madsen for informative discussions and
explanations regarding the Neural Arithmetic Units, and the anonymous
reviewers who have helped improve the manuscript. B.M. is supported by the EPSRC
Doctoral Training Partnership (EP/R513325/1). J.H. received funding from the
EPSRC Centre for Spatial Computational Learning (EP/S030069/1). The authors
acknowledge the use of the IRIDIS High-Performance Computing Facility, the ECS
Alpha Cluster, and associated support services at the University of
Southampton in the completion of this work.
## Appendix A Architecture Illustration Key
Figure 21: Key containing the symbols and colouring system used for
architecture illustrations.
## Appendix B Module Illustrations
Table 4 displays, in chronological order, the module architecture
illustrations given in their respective papers. ∗Note that we modified the
NALU architecture from Trask et al. (2018, Figure 2b) as the learned gate
matrix ($\mathbb{R}^{3\times 4}$) is mistakenly drawn as a vector
($\mathbb{R}^{3}$) in the original figure.
Table 4: Module architecture illustrations taken from the original papers. Module | Architecture
---|---
NALU∗ (Trask et al., 2018) |
NLRL (Reimann and Schwung, 2019) |
G-NALU (Rajaa and Sahoo, 2019) | (No figure exists)
NAU (Madsen and Johansen, 2020) | (No figure exists)
NMU (Madsen and Johansen, 2020) |
NSR (Faber and Wattenhofer, 2020) |
iNALU (Schlör et al., 2020) |
NPU (Heim et al., 2020) |
## Appendix C Step-by-step Example using the NALU
To better understand the internal process of a NALM, we provide a
worked-through example of subtraction using the NALU with parameters that can
extrapolate.
Task: Subtract the second input value from the first, where the input is
$\bm{x}=\left[\begin{smallmatrix}2&3&4\end{smallmatrix}\right]$. The output
value should be $\left[\begin{smallmatrix}-1\end{smallmatrix}\right]$.
Steps:
1. 1.
Calculate $\tanh(\bm{\widehat{W}})$. The operation is $+x_{1}-x_{2}$ so
$\tanh(\bm{\widehat{W}})=\left[\begin{smallmatrix}1\\\ -1\\\
0\end{smallmatrix}\right]$.
2. 2.
Calculate $\operatorname*{sigmoid}(\bm{\widehat{M}})$. The first two input
values are selected and the third is ignored, so
$\operatorname*{sigmoid}(\bm{\widehat{M}})=\left[\begin{smallmatrix}1\\\ 1\\\
0\end{smallmatrix}\right]$.
3. 3.
Calculate
$\tanh(\bm{\widehat{W}})\odot\operatorname*{sigmoid}(\bm{\widehat{M}})$ to
obtain $\bm{W}=\left[\begin{smallmatrix}1\\\ -1\\\ 0\end{smallmatrix}\right]$.
4. 4.
Calculate the result of the summative path.
$\displaystyle\textrm{NAC}_{+}$ $\displaystyle=\bm{xW}$
$\displaystyle=\left[\begin{smallmatrix}2&3&4\end{smallmatrix}\right]\left[\begin{smallmatrix}1\\\
-1\\\ 0\end{smallmatrix}\right]$
$\displaystyle=\left[\begin{smallmatrix}(2\times 1)+(3\times-1)+(4\times
0)\end{smallmatrix}\right]$
$\displaystyle=\left[\begin{smallmatrix}2-3+0\end{smallmatrix}\right]$
$\displaystyle=\left[\begin{smallmatrix}-1\end{smallmatrix}\right].$
5. 5.
Calculate the result of the multiplicative path. (For simplicity, let us
assume $\epsilon=0$.)
$\displaystyle\textrm{NAC}_{\bullet}$
$\displaystyle=\exp(\bm{W}\ln(|\bm{x}|+\epsilon))$
$\displaystyle=\exp(\left[\begin{smallmatrix}1\\\ -1\\\
0\end{smallmatrix}\right]\ln(|\left[\begin{smallmatrix}2&3&4\end{smallmatrix}\right]|))$
$\displaystyle=\exp(\left[\begin{smallmatrix}1\\\ -1\\\
0\end{smallmatrix}\right]\left[\begin{smallmatrix}\ln(2)&\ln(3)&\ln(4)\end{smallmatrix}\right])$
$\displaystyle=\exp(\left[\begin{smallmatrix}\ln(2^{1})+\ln(3^{-1})+\ln(4^{0})\end{smallmatrix}\right])$
$\displaystyle=\exp(\ln([2^{1}\times 3^{-1}\times 4^{0}]))$
$\displaystyle=\exp(\ln(\left[\begin{smallmatrix}2\times\frac{1}{3}\times
1\end{smallmatrix}\right]))$
$\displaystyle=\exp(\ln(\left[\begin{smallmatrix}\frac{2}{3}\end{smallmatrix}\right]))$
$\displaystyle=\left[\begin{smallmatrix}\frac{2}{3}\end{smallmatrix}\right].$
6. 6.
The target expression requires the summative path ($\textrm{NAC}_{+}$) and
ignores the multiplicative path ($\textrm{NAC}_{\bullet}$), therefore gate is
$\operatorname*{sigmoid}(\bm{x}\bm{G})=\left[\begin{smallmatrix}1\end{smallmatrix}\right]$.
7. 7.
Combine all the pieces to get the output.
$\displaystyle\bm{\hat{y}}$
$\displaystyle=\bm{g}\odot\bm{a}+(\bm{1}-\bm{g})\odot\bm{m}$
$\displaystyle=\left[\begin{smallmatrix}1\end{smallmatrix}\right]\odot\left[\begin{smallmatrix}-1\end{smallmatrix}\right]+(\left[\begin{smallmatrix}1\end{smallmatrix}\right]-\left[\begin{smallmatrix}1\end{smallmatrix}\right])\odot\left[\begin{smallmatrix}\frac{2}{3}\end{smallmatrix}\right]$
$\displaystyle=\left[\begin{smallmatrix}-1\end{smallmatrix}\right]+\left[\begin{smallmatrix}0\end{smallmatrix}\right]$
$\displaystyle=\left[\begin{smallmatrix}-1\end{smallmatrix}\right].$
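The seven steps above can be condensed into a short forward pass in plain Python (our own illustration, with the converged weight vector $\bm{W}=[1,-1,0]$ and gate $g=1$ hard-coded rather than learned):

```python
import math

# NALU forward pass: a gated mix of the summative (NAC+) and
# multiplicative (NAC*) paths, both sharing one weight vector W.
def nalu_forward(x, W, g, eps=1e-7):
    add = sum(wi * xi for wi, xi in zip(W, x))                # NAC+ path
    mul = math.exp(sum(wi * math.log(abs(xi) + eps)
                       for wi, xi in zip(W, x)))              # NAC* path
    return g * add + (1 - g) * mul

x = [2.0, 3.0, 4.0]
W = [1.0, -1.0, 0.0]              # +x1 - x2, ignore x3
print(nalu_forward(x, W, g=1.0))  # -1.0
```

With $g=1$ the multiplicative path (which evaluates to $\frac{2}{3}$) is zeroed out, and the summative path alone produces the target $-1$.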
## Appendix D Single Module Task Module Parameters
Refer to Tables 5, 6, 7, and 8 for the breakdown of parameters used in the
Single Module Task for experiments with input size 2.
Parameter | Single Module Task
---|---
Layers | 1
Input size | 2
Subset ratio | 0.5
Overlap ratio | 0
Total iterations | 50000
Train samples | 128 per batch
Validation samples∗ | 10000
Test samples∗ | 10000
Seeds | 25
Optimiser | Adam (with default parameters)
Learning rate† | 1.00E-03
Table 5: Parameters which are applied to all modules. ∗Validation and test datasets generate one batch of samples at the start which gets used for evaluation for all iterations. †Will hold unless specified otherwise. Module | Operation | Parameter | Value
---|---|---|---
NPU, RealNPU | Mul | ($\beta_{start}$,$\beta_{end}$) | (1e-7,1e-5)
Div | ($\beta_{start}$,$\beta_{end}$) | (1e-9,1e-7)
Mul, Div | $\beta_{growth}$ | 10
$\beta_{step}$ | 10000
Learning rate | 5.00E-03
Table 6: Parameters specific to the NPU and RealNPU modules for the Single Module Task. Parameter | Single Module Task
---|---
$\hat{\lambda}$ | 0.01 (NAU), 10 (NMU)
$\lambda_{start}$ | 20000
$\lambda_{end}$ | 35000
Table 7: Parameters specific to the NAU and NMU modules for the Single Module Task. Parameter | Single Module Task
---|---
$\omega$ | 20
$t$ | 20
Gradient clip range | [-0.1,0.1]
Max stored losses (for reinitialisation check) | 5000
Minimum number of epochs before regularisation starts | 10000
Table 8: Parameters specific to the iNALU for the Single Module Task.
## Appendix E Single Module Task: Alternative Options for Generating a
Success Threshold
Other methods can be used to generate the $\epsilon$-threshold. The factors
which can be changed include:
* •
the $\epsilon$-perfect model, e.g., we could use an $\epsilon$-perfect NALU
expression which uses log space.
* •
the comparison metric against the perfect model. MSE is used, but other
metrics such as PCC or MAPE are also valid, with each metric having its own
biases.
* •
the value of $\epsilon$ to control the tolerance of the threshold. Larger
values would be more tolerant while smaller values are harsher.
All these can be modified and should be considered if creating a new threshold
evaluation scheme. However, the three points to be consistent on, no matter
the chosen evaluation method, are to: (1) be task and range dependent, (2) use
the same threshold when comparing models on the same task, and (3) not make
the generation of the threshold dependent on the benchmarked model.
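As an illustrative sketch (our own; the function and parameter names are hypothetical) of generating such a threshold for a two-input addition task: perturb the perfect solution by $\epsilon$, measure its MSE over samples from the evaluation range, and use that value as the success threshold.

```python
import random

# Generate an epsilon-threshold for x1 + x2: an eps-perfect model has
# both weights equal to 1 + eps; its MSE on data drawn from the target
# range becomes the threshold below which a model counts as a success.
def addition_threshold(low, high, eps=1e-5, n=10000, seed=0):
    rng = random.Random(seed)
    w1, w2 = 1.0 + eps, 1.0 + eps
    se = 0.0
    for _ in range(n):
        x1, x2 = rng.uniform(low, high), rng.uniform(low, high)
        se += (w1 * x1 + w2 * x2 - (x1 + x2)) ** 2
    return se / n

print(addition_threshold(1.0, 2.0))  # tiny but non-zero MSE
```

Note the threshold is range dependent (the same $\epsilon$ produces a larger MSE on wider ranges) and is generated without reference to any benchmarked model, satisfying the consistency points above.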
A suggestion for an additional metric would be to generate an
$\epsilon$-threshold used to measure whether the model weights have converged
enough prior to applying any (discretisation) regularisation. Using the NMU as
an example, a pre-regularisation threshold would set $\epsilon$ to be
$<0.5$. The resultant threshold will therefore determine whether the weights
have converged towards the expected discrete value before the regularisation
begins to be applied. Having this threshold helps determine where the module
is having difficulties, i.e., whether the problem is finding the global
minimum or whether it lies in regularising the weights to their final values,
since the regularisation (if given enough priority) would force values to
round to the nearest integer.
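A minimal sketch of this pre-regularisation check (our own illustrative implementation, not taken from any surveyed paper):

```python
# Check whether every weight is already within eps < 0.5 of the
# discrete value the target solution requires, i.e. whether it has
# converged in the right direction before regularisation starts.
def converged_towards(weights, targets, eps=0.49):
    return all(abs(w - t) < eps for w, t in zip(weights, targets))

# NMU-style selection weights heading towards the pattern [1, 1, 0]:
print(converged_towards([0.8, 0.7, 0.1], [1.0, 1.0, 0.0]))  # True
print(converged_towards([0.4, 0.7, 0.1], [1.0, 1.0, 0.0]))  # False
```

A model failing this check before regularisation begins points to an optimisation problem rather than a discretisation one.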
## Appendix F Results for the Single Module Task
### F.1 Addition
Table 9: Results for addition. Comparison of the success-rate, model convergence iteration, and the sparsity error, with 95% confidence interval on the “single layer” task. Each value is a summary of 25 different seeds. Bold values refers to the best result for a evaluation metric for a single module across the different ranges. Model | Range | Success | Solved at | Sparsity error
---|---|---|---|---
| | Rate | Mean | Mean
| U[-0.2,-0.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $28\%{~{}}^{+20\%}_{-14\%}$ | $4.9\cdot 10^{4}{~{}}^{+8.2\cdot 10^{2}}_{-8.3\cdot 10^{2}}$ | $1.4\cdot 10^{-5}{~{}}^{+2.3\cdot 10^{-6}}_{-2.3\cdot 10^{-6}}$
| U[-2,-1) | $64\%{~{}}^{+16\%}_{-19\%}$ | $4.8\cdot 10^{4}{~{}}^{+7.3\cdot 10^{2}}_{-7.4\cdot 10^{2}}$ | $1.2\cdot 10^{-5}{~{}}^{+1.0\cdot 10^{-6}}_{-1.0\cdot 10^{-6}}$
| U[-2,2) | $40\%{~{}}^{+19\%}_{-17\%}$ | $4.9\cdot 10^{4}{~{}}^{+6.4\cdot 10^{2}}_{-6.4\cdot 10^{2}}$ | $1.5\cdot 10^{-5}{~{}}^{+1.0\cdot 10^{-6}}_{-1.0\cdot 10^{-6}}$
| U[-20,-10) | $\mathbf{84\%}{~{}}^{+10\%}_{-19\%}$ | $4.5\cdot 10^{4}{~{}}^{+8.2\cdot 10^{2}}_{-8.3\cdot 10^{2}}$ | $\mathbf{5.4\cdot 10^{-6}}{~{}}^{+1.1\cdot 10^{-6}}_{-1.1\cdot 10^{-6}}$
| U[0.1,0.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1,2) | $68\%{~{}}^{+15\%}_{-20\%}$ | $4.8\cdot 10^{4}{~{}}^{+7.5\cdot 10^{2}}_{-7.5\cdot 10^{2}}$ | $1.2\cdot 10^{-5}{~{}}^{+1.1\cdot 10^{-6}}_{-1.1\cdot 10^{-6}}$
| U[1.1,1.2) | $36\%{~{}}^{+19\%}_{-16\%}$ | $4.9\cdot 10^{4}{~{}}^{+8.3\cdot 10^{2}}_{-8.3\cdot 10^{2}}$ | $1.4\cdot 10^{-5}{~{}}^{+1.7\cdot 10^{-6}}_{-1.7\cdot 10^{-6}}$
$\mathrm{NAC}_{+}$ | U[10,20) | $\mathbf{84\%}{~{}}^{+10\%}_{-19\%}$ | $\mathbf{4.5\cdot 10^{4}}{~{}}^{+7.8\cdot 10^{2}}_{-7.8\cdot 10^{2}}$ | $\mathbf{5.4\cdot 10^{-6}}{~{}}^{+1.1\cdot 10^{-6}}_{-1.1\cdot 10^{-6}}$
| U[-0.2,-0.1) | $\mathbf{24\%}{~{}}^{+19\%}_{-13\%}$ | $4.4\cdot 10^{4}{~{}}^{+2.5\cdot 10^{3}}_{-2.5\cdot 10^{3}}$ | $6.7\cdot 10^{-6}{~{}}^{+3.0\cdot 10^{-6}}_{-3.0\cdot 10^{-6}}$
| U[-1.2,-1.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $8\%{~{}}^{+17\%}_{-6\%}$ | $4.7\cdot 10^{4}{~{}}^{+3.9\cdot 10^{3}}_{-3.9\cdot 10^{3}}$ | $9.0\cdot 10^{-6}{~{}}^{+2.7\cdot 10^{-5}}_{-9.0\cdot 10^{-6}}$
| U[0.1,0.2) | $12\%{~{}}^{+18\%}_{-8\%}$ | $\mathbf{4.0\cdot 10^{4}}{~{}}^{+6.5\cdot 10^{2}}_{-6.5\cdot 10^{2}}$ | $4.6\cdot 10^{-6}{~{}}^{+2.0\cdot 10^{-6}}_{-2.0\cdot 10^{-6}}$
| U[1,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1.1,1.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
G-NALU | U[10,20) | $4\%{~{}}^{+16\%}_{-3\%}$ | $4.2\cdot 10^{4}$ | $\mathbf{3.5\cdot 10^{-6}}$
| U[-0.2,-0.1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.7\cdot 10^{4}{~{}}^{+1.9\cdot 10^{2}}_{-1.9\cdot 10^{2}}$ | $7.8\cdot 10^{-7}{~{}}^{+4.1\cdot 10^{-8}}_{-4.1\cdot 10^{-8}}$
| U[-1.2,-1.1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.6\cdot 10^{4}{~{}}^{+2.0\cdot 10^{2}}_{-2.0\cdot 10^{2}}$ | $2.0\cdot 10^{-6}{~{}}^{+1.5\cdot 10^{-7}}_{-1.5\cdot 10^{-7}}$
| U[-2,-1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.6\cdot 10^{4}{~{}}^{+7.8\cdot 10^{1}}_{-7.8\cdot 10^{1}}$ | $2.0\cdot 10^{-6}{~{}}^{+1.5\cdot 10^{-7}}_{-1.5\cdot 10^{-7}}$
| U[-2,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.6\cdot 10^{4}{~{}}^{+2.0\cdot 10^{2}}_{-2.0\cdot 10^{2}}$ | $1.4\cdot 10^{-6}{~{}}^{+2.9\cdot 10^{-7}}_{-2.9\cdot 10^{-7}}$
| U[-20,-10) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $\mathbf{1.6\cdot 10^{4}}{~{}}^{+4.3\cdot 10^{2}}_{-4.3\cdot 10^{2}}$ | $5.8\cdot 10^{-6}{~{}}^{+3.1\cdot 10^{-7}}_{-3.1\cdot 10^{-7}}$
| U[0.1,0.2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.9\cdot 10^{4}{~{}}^{+7.8\cdot 10^{1}}_{-7.8\cdot 10^{1}}$ | $\mathbf{8.8\cdot 10^{-8}}{~{}}^{+1.2\cdot 10^{-8}}_{-1.2\cdot 10^{-8}}$
| U[1,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.6\cdot 10^{4}{~{}}^{+1.1\cdot 10^{2}}_{-1.1\cdot 10^{2}}$ | $1.8\cdot 10^{-7}{~{}}^{+4.5\cdot 10^{-8}}_{-4.5\cdot 10^{-8}}$
| U[1.1,1.2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.6\cdot 10^{4}{~{}}^{+1.1\cdot 10^{2}}_{-1.1\cdot 10^{2}}$ | $3.1\cdot 10^{-7}{~{}}^{+1.9\cdot 10^{-8}}_{-1.9\cdot 10^{-8}}$
iNALU | U[10,20) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $2.0\cdot 10^{4}{~{}}^{+7.8\cdot 10^{1}}_{-7.8\cdot 10^{1}}$ | $4.6\cdot 10^{-7}{~{}}^{+8.3\cdot 10^{-8}}_{-8.3\cdot 10^{-8}}$
| U[-0.2,-0.1) | $52\%{~{}}^{+18\%}_{-19\%}$ | $3.8\cdot 10^{4}{~{}}^{+4.2\cdot 10^{3}}_{-4.5\cdot 10^{3}}$ | $7.5\cdot 10^{-6}{~{}}^{+2.1\cdot 10^{-6}}_{-2.1\cdot 10^{-6}}$
| U[-1.2,-1.1) | $40\%{~{}}^{+19\%}_{-17\%}$ | $4.2\cdot 10^{4}{~{}}^{+4.8\cdot 10^{3}}_{-5.5\cdot 10^{3}}$ | $9.9\cdot 10^{-6}{~{}}^{+3.0\cdot 10^{-6}}_{-3.0\cdot 10^{-6}}$
| U[-2,-1) | $64\%{~{}}^{+16\%}_{-19\%}$ | $4.4\cdot 10^{4}{~{}}^{+3.0\cdot 10^{3}}_{-3.3\cdot 10^{3}}$ | $9.1\cdot 10^{-6}{~{}}^{+1.4\cdot 10^{-6}}_{-1.4\cdot 10^{-6}}$
| U[-2,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $68\%{~{}}^{+15\%}_{-20\%}$ | $4.5\cdot 10^{4}{~{}}^{+1.8\cdot 10^{3}}_{-1.9\cdot 10^{3}}$ | $5.3\cdot 10^{-6}{~{}}^{+2.0\cdot 10^{-6}}_{-2.0\cdot 10^{-6}}$
| U[0.1,0.2) | $16\%{~{}}^{+19\%}_{-10\%}$ | $\mathbf{2.5\cdot 10^{4}}{~{}}^{+1.8\cdot 10^{3}}_{-1.9\cdot 10^{3}}$ | $\mathbf{2.9\cdot 10^{-6}}{~{}}^{+1.7\cdot 10^{-6}}_{-1.7\cdot 10^{-6}}$
| U[1,2) | $\mathbf{80\%}{~{}}^{+11\%}_{-19\%}$ | $4.4\cdot 10^{4}{~{}}^{+2.2\cdot 10^{3}}_{-2.4\cdot 10^{3}}$ | $9.1\cdot 10^{-6}{~{}}^{+8.8\cdot 10^{-7}}_{-8.8\cdot 10^{-7}}$
| U[1.1,1.2) | $76\%{~{}}^{+13\%}_{-19\%}$ | $4.5\cdot 10^{4}{~{}}^{+2.5\cdot 10^{3}}_{-2.8\cdot 10^{3}}$ | $1.0\cdot 10^{-5}{~{}}^{+1.1\cdot 10^{-6}}_{-1.1\cdot 10^{-6}}$
NALU | U[10,20) | $20\%{~{}}^{+19\%}_{-11\%}$ | $3.8\cdot 10^{4}{~{}}^{+8.4\cdot 10^{3}}_{-8.9\cdot 10^{3}}$ | $4.6\cdot 10^{-6}{~{}}^{+7.1\cdot 10^{-6}}_{-4.6\cdot 10^{-6}}$
| U[-0.2,-0.1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $5.0\cdot 10^{3}{~{}}^{+2.6\cdot 10^{2}}_{-2.6\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-1.2,-1.1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $5.0\cdot 10^{3}{~{}}^{+2.6\cdot 10^{2}}_{-2.6\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-2,-1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $5.0\cdot 10^{3}{~{}}^{+2.6\cdot 10^{2}}_{-2.6\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-2,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $\mathbf{4.2\cdot 10^{3}}{~{}}^{+2.6\cdot 10^{2}}_{-2.8\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-20,-10) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $5.0\cdot 10^{3}{~{}}^{+2.6\cdot 10^{2}}_{-2.6\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[0.1,0.2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $5.0\cdot 10^{3}{~{}}^{+2.6\cdot 10^{2}}_{-2.6\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[1,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $5.0\cdot 10^{3}{~{}}^{+2.6\cdot 10^{2}}_{-2.6\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[1.1,1.2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $5.0\cdot 10^{3}{~{}}^{+2.6\cdot 10^{2}}_{-2.6\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
NAU | U[10,20) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $5.0\cdot 10^{3}{~{}}^{+2.6\cdot 10^{2}}_{-2.6\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
Table 9: Results for addition. Comparison of the success-rate, model convergence iteration, and the sparsity error, with 95% confidence interval on the “single layer” task. Each value is a summary of 25 different seeds. Bold values refer to the best result for an evaluation metric for a single module across the different ranges.
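The asymmetric success-rate intervals above (e.g. $84\%{~{}}^{+10\%}_{-19\%}$ over 25 seeds) are consistent with a Wilson score interval on a binomial proportion at $z = 1.96$. As an illustration of how such intervals can be computed (not necessarily the authors' exact procedure):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 gives ~95%)."""
    p_hat = successes / n
    denom = n + z * z
    center = (successes + z * z / 2) / denom
    halfwidth = (z / denom) * math.sqrt(n * p_hat * (1 - p_hat) + z * z / 4)
    return center - halfwidth, center + halfwidth

# Example: 21 successful seeds out of 25, i.e. a success rate of 84%.
lo, hi = wilson_interval(21, 25)
print(f"84% (+{round(100 * (hi - 0.84))}% / -{round(100 * (0.84 - lo))}%)")
# → 84% (+10% / -19%)
```

Note that at 0/25 successes the interval is $0\%{~{}}^{+13\%}_{-0\%}$ and at 25/25 it is $100\%{~{}}^{+0\%}_{-13\%}$, matching the degenerate rows in the tables.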
### F.2 Subtraction
Table 10: Results for subtraction. Comparison of the success-rate, model convergence iteration, and the sparsity error, with 95% confidence interval on the “single layer” task. Each value is a summary of 25 different seeds. Bold values refer to the best result for an evaluation metric for a single module across the different ranges.

Model | Range | Success | Solved at | Sparsity error
---|---|---|---|---
| | Rate | Mean | Mean
| U[-0.2,-0.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $20\%{~{}}^{+19\%}_{-11\%}$ | $4.4\cdot 10^{4}{~{}}^{+5.2\cdot 10^{3}}_{-5.4\cdot 10^{3}}$ | $2.2\cdot 10^{-5}{~{}}^{+4.0\cdot 10^{-6}}_{-4.4\cdot 10^{-6}}$
| U[-2,2) | $52\%{~{}}^{+18\%}_{-19\%}$ | $4.6\cdot 10^{4}{~{}}^{+2.8\cdot 10^{3}}_{-3.0\cdot 10^{3}}$ | $\mathbf{1.4\cdot 10^{-5}}{~{}}^{+1.7\cdot 10^{-6}}_{-1.7\cdot 10^{-6}}$
| U[-20,-10) | $\mathbf{84\%}{~{}}^{+10\%}_{-19\%}$ | $\mathbf{4.1\cdot 10^{4}}{~{}}^{+2.2\cdot 10^{3}}_{-2.4\cdot 10^{3}}$ | $1.6\cdot 10^{-5}{~{}}^{+3.9\cdot 10^{-6}}_{-3.9\cdot 10^{-6}}$
| U[0.1,0.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1,2) | $20\%{~{}}^{+19\%}_{-11\%}$ | $4.4\cdot 10^{4}{~{}}^{+5.2\cdot 10^{3}}_{-5.4\cdot 10^{3}}$ | $2.2\cdot 10^{-5}{~{}}^{+4.0\cdot 10^{-6}}_{-4.3\cdot 10^{-6}}$
| U[1.1,1.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
$\mathrm{NAC}_{+}$ | U[10,20) | $\mathbf{84\%}{~{}}^{+10\%}_{-19\%}$ | $\mathbf{4.1\cdot 10^{4}}{~{}}^{+2.2\cdot 10^{3}}_{-2.4\cdot 10^{3}}$ | $1.6\cdot 10^{-5}{~{}}^{+3.9\cdot 10^{-6}}_{-3.9\cdot 10^{-6}}$
| U[-0.2,-0.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $4\%{~{}}^{+16\%}_{-3\%}$ | $\mathbf{4.2\cdot 10^{4}}$ | $\mathbf{2.0\cdot 10^{-5}}$
| U[0.1,0.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1.1,1.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
G-NALU | U[10,20) | $\mathbf{20\%}{~{}}^{+19\%}_{-11\%}$ | $4.5\cdot 10^{4}{~{}}^{+3.3\cdot 10^{3}}_{-3.4\cdot 10^{3}}$ | $2.1\cdot 10^{-5}{~{}}^{+1.1\cdot 10^{-5}}_{-1.1\cdot 10^{-5}}$
| U[-0.2,-0.1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.7\cdot 10^{4}{~{}}^{+1.3\cdot 10^{2}}_{-1.3\cdot 10^{2}}$ | $2.5\cdot 10^{-7}{~{}}^{+2.7\cdot 10^{-8}}_{-2.7\cdot 10^{-8}}$
| U[-1.2,-1.1) | $40\%{~{}}^{+19\%}_{-17\%}$ | $2.1\cdot 10^{4}{~{}}^{+3.9\cdot 10^{2}}_{-3.9\cdot 10^{2}}$ | $\mathbf{1.3\cdot 10^{-8}}{~{}}^{+8.0\cdot 10^{-9}}_{-8.0\cdot 10^{-9}}$
| U[-2,-1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.7\cdot 10^{4}{~{}}^{+2.7\cdot 10^{2}}_{-2.7\cdot 10^{2}}$ | $5.8\cdot 10^{-7}{~{}}^{+1.2\cdot 10^{-7}}_{-1.2\cdot 10^{-7}}$
| U[-2,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.7\cdot 10^{4}{~{}}^{+1.6\cdot 10^{2}}_{-1.6\cdot 10^{2}}$ | $8.7\cdot 10^{-7}{~{}}^{+8.2\cdot 10^{-8}}_{-8.2\cdot 10^{-8}}$
| U[-20,-10) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $\mathbf{1.6\cdot 10^{4}}{~{}}^{+5.7\cdot 10^{2}}_{-5.6\cdot 10^{2}}$ | $9.4\cdot 10^{-7}{~{}}^{+2.1\cdot 10^{-7}}_{-2.1\cdot 10^{-7}}$
| U[0.1,0.2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.7\cdot 10^{4}{~{}}^{+1.9\cdot 10^{2}}_{-1.9\cdot 10^{2}}$ | $1.5\cdot 10^{-7}{~{}}^{+2.0\cdot 10^{-8}}_{-2.0\cdot 10^{-8}}$
| U[1,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.7\cdot 10^{4}{~{}}^{+1.5\cdot 10^{2}}_{-1.5\cdot 10^{2}}$ | $1.7\cdot 10^{-7}{~{}}^{+2.7\cdot 10^{-8}}_{-2.7\cdot 10^{-8}}$
| U[1.1,1.2) | $56\%{~{}}^{+17\%}_{-19\%}$ | $2.0\cdot 10^{4}{~{}}^{+1.9\cdot 10^{2}}_{-1.9\cdot 10^{2}}$ | $4.4\cdot 10^{-8}{~{}}^{+4.4\cdot 10^{-8}}_{-4.4\cdot 10^{-8}}$
iNALU | U[10,20) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.7\cdot 10^{4}{~{}}^{+7.7\cdot 10^{2}}_{-7.8\cdot 10^{2}}$ | $7.6\cdot 10^{-7}{~{}}^{+6.7\cdot 10^{-8}}_{-6.7\cdot 10^{-8}}$
| U[-0.2,-0.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $12\%{~{}}^{+18\%}_{-8\%}$ | $4.6\cdot 10^{4}{~{}}^{+3.4\cdot 10^{3}}_{-3.5\cdot 10^{3}}$ | $2.2\cdot 10^{-5}{~{}}^{+3.1\cdot 10^{-6}}_{-2.9\cdot 10^{-6}}$
| U[-2,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $20\%{~{}}^{+19\%}_{-11\%}$ | $\mathbf{3.8\cdot 10^{4}}{~{}}^{+7.4\cdot 10^{3}}_{-8.1\cdot 10^{3}}$ | $\mathbf{1.4\cdot 10^{-5}}{~{}}^{+1.0\cdot 10^{-5}}_{-1.0\cdot 10^{-5}}$
| U[0.1,0.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1,2) | $12\%{~{}}^{+18\%}_{-8\%}$ | $4.8\cdot 10^{4}{~{}}^{+2.0\cdot 10^{3}}_{-2.0\cdot 10^{3}}$ | $2.3\cdot 10^{-5}{~{}}^{+1.7\cdot 10^{-6}}_{-1.4\cdot 10^{-6}}$
| U[1.1,1.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
NALU | U[10,20) | $\mathbf{84\%}{~{}}^{+10\%}_{-19\%}$ | $4.3\cdot 10^{4}{~{}}^{+2.3\cdot 10^{3}}_{-2.6\cdot 10^{3}}$ | $2.3\cdot 10^{-5}{~{}}^{+8.3\cdot 10^{-6}}_{-8.3\cdot 10^{-6}}$
| U[-0.2,-0.1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $6.0\cdot 10^{3}{~{}}^{+6.4\cdot 10^{2}}_{-6.6\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-1.2,-1.1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.6\cdot 10^{4}{~{}}^{+1.0\cdot 10^{3}}_{-1.1\cdot 10^{3}}$ | $2.7\cdot 10^{-7}{~{}}^{+1.2\cdot 10^{-7}}_{-1.2\cdot 10^{-7}}$
| U[-2,-1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $6.0\cdot 10^{3}{~{}}^{+6.2\cdot 10^{2}}_{-6.4\cdot 10^{2}}$ | $3.1\cdot 10^{-8}{~{}}^{+1.3\cdot 10^{-8}}_{-1.3\cdot 10^{-8}}$
| U[-2,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $\mathbf{4.0\cdot 10^{3}}{~{}}^{+2.7\cdot 10^{2}}_{-2.8\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-20,-10) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $5.9\cdot 10^{3}{~{}}^{+6.3\cdot 10^{2}}_{-6.5\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[0.1,0.2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $6.1\cdot 10^{3}{~{}}^{+6.3\cdot 10^{2}}_{-6.5\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[1,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $6.0\cdot 10^{3}{~{}}^{+6.2\cdot 10^{2}}_{-6.4\cdot 10^{2}}$ | $2.4\cdot 10^{-8}{~{}}^{+1.2\cdot 10^{-8}}_{-1.2\cdot 10^{-8}}$
| U[1.1,1.2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.6\cdot 10^{4}{~{}}^{+1.0\cdot 10^{3}}_{-1.1\cdot 10^{3}}$ | $3.2\cdot 10^{-7}{~{}}^{+1.5\cdot 10^{-7}}_{-1.5\cdot 10^{-7}}$
NAU | U[10,20) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $5.9\cdot 10^{3}{~{}}^{+6.2\cdot 10^{2}}_{-6.4\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
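The sparsity-error column measures how far the learned weights are from the ideal discrete values $\{-1, 0, 1\}$; a value near $10^{-16}$ means the layer converged to an essentially discrete solution. A minimal sketch, assuming the definition $\max_{i} \min(|w_i|, |1 - |w_i||)$ common in the neural-arithmetic literature:

```python
def sparsity_error(weights: list[float]) -> float:
    """Distance of the worst weight from the nearest ideal value in {-1, 0, 1}.

    Assumes the common definition max_i min(|w_i|, |1 - |w_i||); 0 means the
    layer is perfectly discrete, 0.5 is the worst possible value.
    """
    return max(min(abs(w), abs(1 - abs(w))) for w in weights)

print(sparsity_error([0.0, 1.0, -1.0]))  # perfectly discrete weights → 0.0
print(sparsity_error([0.0, 0.5, 1.0]))   # 0.5 is maximally ambiguous → 0.5
```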
### F.3 Multiplication
Table 11: Results for multiplication. Comparison of the success-rate, model convergence iteration, and the sparsity error, with 95% confidence interval on the “single layer” task. Each value is a summary of 25 different seeds. Bold values refer to the best result for an evaluation metric for a single module across the different ranges.

Model | Range | Success | Solved at | Sparsity error
---|---|---|---|---
| | Rate | Mean | Mean
| U[-0.2,-0.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $\mathbf{88\%}{~{}}^{+8\%}_{-18\%}$ | $4.5\cdot 10^{4}{~{}}^{+9.9\cdot 10^{2}}_{-9.8\cdot 10^{2}}$ | $\mathbf{1.7\cdot 10^{-6}}{~{}}^{+5.0\cdot 10^{-7}}_{-5.0\cdot 10^{-7}}$
| U[0.1,0.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1.1,1.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
$\mathrm{NAC}_{\bullet}$ | U[10,20) | $\mathbf{88\%}{~{}}^{+8\%}_{-18\%}$ | $\mathbf{4.5\cdot 10^{4}}{~{}}^{+9.9\cdot 10^{2}}_{-9.9\cdot 10^{2}}$ | $\mathbf{1.7\cdot 10^{-6}}{~{}}^{+5.0\cdot 10^{-7}}_{-5.0\cdot 10^{-7}}$
| U[-0.2,-0.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $16\%{~{}}^{+19\%}_{-10\%}$ | $4.9\cdot 10^{4}{~{}}^{+1.5\cdot 10^{3}}_{-1.5\cdot 10^{3}}$ | $3.8\cdot 10^{-6}{~{}}^{+2.4\cdot 10^{-6}}_{-2.4\cdot 10^{-6}}$
| U[0.1,0.2) | $\mathbf{32\%}{~{}}^{+20\%}_{-15\%}$ | $\mathbf{4.5\cdot 10^{4}}{~{}}^{+3.0\cdot 10^{3}}_{-3.0\cdot 10^{3}}$ | $2.1\cdot 10^{-5}{~{}}^{+4.0\cdot 10^{-6}}_{-4.0\cdot 10^{-6}}$
| U[1,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1.1,1.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
G-NALU | U[10,20) | $16\%{~{}}^{+19\%}_{-10\%}$ | $4.9\cdot 10^{4}{~{}}^{+1.5\cdot 10^{3}}_{-1.5\cdot 10^{3}}$ | $\mathbf{3.5\cdot 10^{-6}}{~{}}^{+2.3\cdot 10^{-6}}_{-2.3\cdot 10^{-6}}$
| U[-0.2,-0.1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $2.1\cdot 10^{4}{~{}}^{+1.8\cdot 10^{2}}_{-1.9\cdot 10^{2}}$ | $1.9\cdot 10^{-9}{~{}}^{+1.7\cdot 10^{-9}}_{-1.7\cdot 10^{-9}}$
| U[-1.2,-1.1) | $12\%{~{}}^{+18\%}_{-8\%}$ | $2.0\cdot 10^{4}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-2,-1) | $68\%{~{}}^{+15\%}_{-20\%}$ | $2.2\cdot 10^{4}{~{}}^{+9.3\cdot 10^{2}}_{-9.7\cdot 10^{2}}$ | $3.5\cdot 10^{-9}{~{}}^{+7.4\cdot 10^{-9}}_{-3.5\cdot 10^{-9}}$
| U[-2,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.7\cdot 10^{4}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$ | $4.3\cdot 10^{-8}{~{}}^{+1.4\cdot 10^{-8}}_{-1.4\cdot 10^{-8}}$
| U[-20,-10) | $12\%{~{}}^{+18\%}_{-8\%}$ | $1.5\cdot 10^{4}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$ | $7.2\cdot 10^{-7}{~{}}^{+6.5\cdot 10^{-7}}_{-6.5\cdot 10^{-7}}$
| U[0.1,0.2) | $4\%{~{}}^{+16\%}_{-3\%}$ | $2.1\cdot 10^{4}$ | $1.0\cdot 10^{-9}$
| U[1,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.7\cdot 10^{4}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[1.1,1.2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.9\cdot 10^{4}{~{}}^{+1.5\cdot 10^{2}}_{-1.5\cdot 10^{2}}$ | $5.7\cdot 10^{-8}{~{}}^{+4.9\cdot 10^{-9}}_{-4.9\cdot 10^{-9}}$
iNALU | U[10,20) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $\mathbf{1.5\cdot 10^{4}}{~{}}^{+1.5\cdot 10^{2}}_{-1.5\cdot 10^{2}}$ | $1.5\cdot 10^{-7}{~{}}^{+2.3\cdot 10^{-8}}_{-2.3\cdot 10^{-8}}$
| U[-0.2,-0.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $12\%{~{}}^{+18\%}_{-8\%}$ | $4.6\cdot 10^{4}{~{}}^{+3.4\cdot 10^{3}}_{-3.4\cdot 10^{3}}$ | $9.7\cdot 10^{-6}{~{}}^{+5.8\cdot 10^{-6}}_{-5.8\cdot 10^{-6}}$
| U[-2,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $52\%{~{}}^{+18\%}_{-19\%}$ | $4.3\cdot 10^{4}{~{}}^{+2.7\cdot 10^{3}}_{-2.9\cdot 10^{3}}$ | $1.3\cdot 10^{-6}{~{}}^{+7.9\cdot 10^{-7}}_{-7.9\cdot 10^{-7}}$
| U[0.1,0.2) | $60\%{~{}}^{+17\%}_{-19\%}$ | $\mathbf{3.7\cdot 10^{4}}{~{}}^{+3.6\cdot 10^{3}}_{-3.9\cdot 10^{3}}$ | $1.7\cdot 10^{-5}{~{}}^{+3.0\cdot 10^{-6}}_{-3.0\cdot 10^{-6}}$
| U[1,2) | $8\%{~{}}^{+17\%}_{-6\%}$ | $4.6\cdot 10^{4}{~{}}^{+5.8\cdot 10^{3}}_{-5.9\cdot 10^{3}}$ | $1.0\cdot 10^{-5}{~{}}^{+6.1\cdot 10^{-6}}_{-6.1\cdot 10^{-6}}$
| U[1.1,1.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
NALU | U[10,20) | $\mathbf{84\%}{~{}}^{+10\%}_{-19\%}$ | $4.3\cdot 10^{4}{~{}}^{+1.8\cdot 10^{3}}_{-1.9\cdot 10^{3}}$ | $\mathbf{1.2\cdot 10^{-6}}{~{}}^{+5.0\cdot 10^{-7}}_{-5.0\cdot 10^{-7}}$
| U[-0.2,-0.1) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $2.1\cdot 10^{4}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-1.2,-1.1) | $68\%{~{}}^{+15\%}_{-20\%}$ | $\mathbf{2.0\cdot 10^{3}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-2,-1) | $80\%{~{}}^{+11\%}_{-19\%}$ | $\mathbf{2.0\cdot 10^{3}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-2,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $2.3\cdot 10^{3}{~{}}^{+1.7\cdot 10^{2}}_{-1.7\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-20,-10) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $2.4\cdot 10^{3}{~{}}^{+1.8\cdot 10^{2}}_{-1.9\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[0.1,0.2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $5.5\cdot 10^{3}{~{}}^{+4.4\cdot 10^{2}}_{-4.3\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[1,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $2.8\cdot 10^{3}{~{}}^{+1.6\cdot 10^{2}}_{-1.7\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[1.1,1.2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $3.0\cdot 10^{3}{~{}}^{+2.4\cdot 10^{2}}_{-2.5\cdot 10^{2}}$ | $2.3\cdot 10^{-7}{~{}}^{+8.4\cdot 10^{-8}}_{-8.4\cdot 10^{-8}}$
NMU | U[10,20) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $2.6\cdot 10^{3}{~{}}^{+1.9\cdot 10^{2}}_{-2.0\cdot 10^{2}}$ | $\mathbf{1.0\cdot 10^{-16}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-0.2,-0.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,2) | $40\%{~{}}^{+19\%}_{-17\%}$ | $\mathbf{3.6\cdot 10^{3}}{~{}}^{+3.9\cdot 10^{2}}_{-4.5\cdot 10^{2}}$ | $1.2\cdot 10^{-7}{~{}}^{+2.3\cdot 10^{-8}}_{-2.3\cdot 10^{-8}}$
| U[-20,-10) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[0.1,0.2) | $12\%{~{}}^{+18\%}_{-8\%}$ | $1.7\cdot 10^{4}{~{}}^{+1.1\cdot 10^{3}}_{-1.1\cdot 10^{3}}$ | $1.1\cdot 10^{-5}{~{}}^{+2.3\cdot 10^{-5}}_{-1.1\cdot 10^{-5}}$
| U[1,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.5\cdot 10^{4}{~{}}^{+3.2\cdot 10^{3}}_{-3.1\cdot 10^{3}}$ | $1.1\cdot 10^{-6}{~{}}^{+5.4\cdot 10^{-7}}_{-5.4\cdot 10^{-7}}$
| U[1.1,1.2) | $28\%{~{}}^{+20\%}_{-14\%}$ | $4.4\cdot 10^{3}{~{}}^{+5.5\cdot 10^{2}}_{-5.9\cdot 10^{2}}$ | $3.9\cdot 10^{-6}{~{}}^{+4.2\cdot 10^{-6}}_{-3.9\cdot 10^{-6}}$
NPU | U[10,20) | $84\%{~{}}^{+10\%}_{-19\%}$ | $1.8\cdot 10^{4}{~{}}^{+4.7\cdot 10^{3}}_{-3.6\cdot 10^{3}}$ | $\mathbf{4.5\cdot 10^{-8}}{~{}}^{+2.4\cdot 10^{-8}}_{-2.4\cdot 10^{-8}}$
| U[-0.2,-0.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,2) | $8\%{~{}}^{+17\%}_{-6\%}$ | $\mathbf{4.0\cdot 10^{3}}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$ | $1.8\cdot 10^{-7}{~{}}^{+NaN\cdot 10^{-Inf}}_{-NaN\cdot 10^{-Inf}}$
| U[-20,-10) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[0.1,0.2) | $12\%{~{}}^{+18\%}_{-8\%}$ | $1.7\cdot 10^{4}{~{}}^{+1.1\cdot 10^{3}}_{-1.1\cdot 10^{3}}$ | $2.2\cdot 10^{-5}{~{}}^{+4.5\cdot 10^{-5}}_{-2.2\cdot 10^{-5}}$
| U[1,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.5\cdot 10^{4}{~{}}^{+3.2\cdot 10^{3}}_{-3.1\cdot 10^{3}}$ | $2.2\cdot 10^{-6}{~{}}^{+1.1\cdot 10^{-6}}_{-1.1\cdot 10^{-6}}$
| U[1.1,1.2) | $28\%{~{}}^{+20\%}_{-14\%}$ | $4.4\cdot 10^{3}{~{}}^{+5.5\cdot 10^{2}}_{-5.9\cdot 10^{2}}$ | $7.8\cdot 10^{-6}{~{}}^{+8.5\cdot 10^{-6}}_{-7.8\cdot 10^{-6}}$
Real NPU | U[10,20) | $84\%{~{}}^{+10\%}_{-19\%}$ | $1.8\cdot 10^{4}{~{}}^{+4.7\cdot 10^{3}}_{-3.6\cdot 10^{3}}$ | $\mathbf{9.1\cdot 10^{-8}}{~{}}^{+4.7\cdot 10^{-8}}_{-4.7\cdot 10^{-8}}$
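The “solved at” and sparsity-error columns report a mean over the converged seeds with an asymmetric 95% interval. One common way to obtain such asymmetric intervals is a percentile bootstrap over the per-seed values; the sketch below illustrates the idea under that assumption (the paper's exact interval construction may differ):

```python
import random
import statistics

def bootstrap_ci(values, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of per-seed values."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(values, k=len(values)))
        for _ in range(n_resamples)
    )
    lo = means[int(n_resamples * alpha / 2)]
    hi = means[int(n_resamples * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical convergence iterations from a handful of seeds:
iters = [4.4e4, 4.6e4, 4.1e4, 4.8e4, 5.0e4, 4.3e4, 4.5e4, 4.7e4]
lo, hi = bootstrap_ci(iters)
print(f"mean {statistics.fmean(iters):.3g} in [{lo:.3g}, {hi:.3g}]")
```

Because the resampled means need not be symmetric around the sample mean, the upper and lower offsets can differ, as seen in the tables.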
### F.4 Division
Table 12: Results for division. Comparison of the success-rate, model convergence iteration, and the sparsity error, with 95% confidence interval on the “single layer” task. Each value is a summary of 25 different seeds. Bold values refer to the best result for an evaluation metric for a single module across the different ranges.

Model | Range | Success | Solved at | Sparsity error
---|---|---|---|---
| | Rate | Mean | Mean
| U[-0.2,-0.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $8\%{~{}}^{+17\%}_{-6\%}$ | $\mathbf{4.6\cdot 10^{4}}{~{}}^{+4.8\cdot 10^{3}}_{-4.9\cdot 10^{3}}$ | $\mathbf{2.5\cdot 10^{-5}}{~{}}^{+3.0\cdot 10^{-6}}_{-2.4\cdot 10^{-6}}$
| U[-2,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $\mathbf{16\%}{~{}}^{+19\%}_{-10\%}$ | $4.6\cdot 10^{4}{~{}}^{+4.4\cdot 10^{3}}_{-4.5\cdot 10^{3}}$ | $3.3\cdot 10^{-5}{~{}}^{+6.1\cdot 10^{-6}}_{-5.9\cdot 10^{-6}}$
| U[0.1,0.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1,2) | $8\%{~{}}^{+17\%}_{-6\%}$ | $\mathbf{4.6\cdot 10^{4}}{~{}}^{+4.8\cdot 10^{3}}_{-4.9\cdot 10^{3}}$ | $\mathbf{2.5\cdot 10^{-5}}{~{}}^{+3.0\cdot 10^{-6}}_{-2.4\cdot 10^{-6}}$
| U[1.1,1.2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
$\mathrm{NAC}_{\bullet}$ | U[10,20) | $\mathbf{16\%}{~{}}^{+19\%}_{-10\%}$ | $4.6\cdot 10^{4}{~{}}^{+4.6\cdot 10^{3}}_{-4.8\cdot 10^{3}}$ | $3.2\cdot 10^{-5}{~{}}^{+6.6\cdot 10^{-6}}_{-5.6\cdot 10^{-6}}$
| U[-0.2,-0.1) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,2) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[0.1,0.2) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1,2) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1.1,1.2) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
G-NALU | U[10,20) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-0.2,-0.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[0.1,0.2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.8\cdot 10^{4}{~{}}^{+2.0\cdot 10^{2}}_{-2.0\cdot 10^{2}}$ | $3.1\cdot 10^{-8}{~{}}^{+1.3\cdot 10^{-8}}_{-1.3\cdot 10^{-8}}$
| U[1,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $\mathbf{1.7\cdot 10^{4}}{~{}}^{+2.0\cdot 10^{2}}_{-2.0\cdot 10^{2}}$ | $2.1\cdot 10^{-7}{~{}}^{+2.8\cdot 10^{-8}}_{-2.8\cdot 10^{-8}}$
| U[1.1,1.2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $1.8\cdot 10^{4}{~{}}^{+2.3\cdot 10^{2}}_{-2.3\cdot 10^{2}}$ | $1.7\cdot 10^{-7}{~{}}^{+4.0\cdot 10^{-8}}_{-4.0\cdot 10^{-8}}$
iNALU | U[10,20) | $20\%{~{}}^{+19\%}_{-11\%}$ | $2.4\cdot 10^{4}{~{}}^{+2.6\cdot 10^{3}}_{-2.7\cdot 10^{3}}$ | $\mathbf{2.3\cdot 10^{-8}}{~{}}^{+5.9\cdot 10^{-8}}_{-2.3\cdot 10^{-8}}$
| U[-0.2,-0.1) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,2) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[0.1,0.2) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1,2) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[1.1,1.2) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
NALU | U[10,20) | $\mathbf{0\%}{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-0.2,-0.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-1.2,-1.1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,-1) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-2,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[0.1,0.2) | $88\%{~{}}^{+8\%}_{-18\%}$ | $1.7\cdot 10^{4}{~{}}^{+3.1\cdot 10^{3}}_{-4.2\cdot 10^{3}}$ | $\mathbf{3.1\cdot 10^{-7}}{~{}}^{+5.2\cdot 10^{-8}}_{-5.2\cdot 10^{-8}}$
| U[1,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $2.1\cdot 10^{4}{~{}}^{+4.6\cdot 10^{3}}_{-4.1\cdot 10^{3}}$ | $9.8\cdot 10^{-7}{~{}}^{+1.4\cdot 10^{-6}}_{-9.8\cdot 10^{-7}}$
| U[1.1,1.2) | $16\%{~{}}^{+19\%}_{-10\%}$ | $\mathbf{1.5\cdot 10^{3}}{~{}}^{+4.2\cdot 10^{2}}_{-4.9\cdot 10^{2}}$ | $3.9\cdot 10^{-7}{~{}}^{+3.9\cdot 10^{-8}}_{-3.9\cdot 10^{-8}}$
NPU | U[10,20) | $4\%{~{}}^{+16\%}_{-3\%}$ | $4.9\cdot 10^{4}$ | $7.2\cdot 10^{-7}$
| U[-0.2,-0.1) | $32\%{~{}}^{+20\%}_{-15\%}$ | $7.8\cdot 10^{3}{~{}}^{+2.7\cdot 10^{3}}_{-2.6\cdot 10^{3}}$ | $7.7\cdot 10^{-7}{~{}}^{+1.4\cdot 10^{-7}}_{-1.4\cdot 10^{-7}}$
| U[-1.2,-1.1) | $12\%{~{}}^{+18\%}_{-8\%}$ | $8.3\cdot 10^{3}{~{}}^{+3.1\cdot 10^{3}}_{-3.2\cdot 10^{3}}$ | $3.8\cdot 10^{-6}{~{}}^{+9.5\cdot 10^{-6}}_{-3.8\cdot 10^{-6}}$
| U[-2,-1) | $84\%{~{}}^{+10\%}_{-19\%}$ | $2.1\cdot 10^{4}{~{}}^{+4.2\cdot 10^{3}}_{-5.2\cdot 10^{3}}$ | $7.9\cdot 10^{-7}{~{}}^{+4.5\cdot 10^{-7}}_{-4.5\cdot 10^{-7}}$
| U[-2,2) | $0\%{~{}}^{+13\%}_{-0\%}$ | — | —
| U[-20,-10) | $4\%{~{}}^{+16\%}_{-3\%}$ | $5.0\cdot 10^{3}$ | $\mathbf{1.2\cdot 10^{-7}}$
| U[0.1,0.2) | $88\%{~{}}^{+8\%}_{-18\%}$ | $1.7\cdot 10^{4}{~{}}^{+3.1\cdot 10^{3}}_{-4.2\cdot 10^{3}}$ | $6.3\cdot 10^{-7}{~{}}^{+1.0\cdot 10^{-7}}_{-1.0\cdot 10^{-7}}$
| U[1,2) | $\mathbf{100\%}{~{}}^{+0\%}_{-13\%}$ | $2.1\cdot 10^{4}{~{}}^{+4.6\cdot 10^{3}}_{-4.1\cdot 10^{3}}$ | $2.0\cdot 10^{-6}{~{}}^{+2.7\cdot 10^{-6}}_{-2.0\cdot 10^{-6}}$
| U[1.1,1.2) | $16\%{~{}}^{+19\%}_{-10\%}$ | $\mathbf{1.5\cdot 10^{3}}{~{}}^{+4.2\cdot 10^{2}}_{-4.9\cdot 10^{2}}$ | $7.7\cdot 10^{-7}{~{}}^{+7.7\cdot 10^{-8}}_{-7.7\cdot 10^{-8}}$
Real NPU | U[10,20) | $4\%{~{}}^{+16\%}_{-3\%}$ | $4.9\cdot 10^{4}$ | $1.4\cdot 10^{-6}$
## References
* Alet et al. (2021) Ferran Alet, Kenji Kawaguchi, Maria Bauza Villalonga, Nurullah Giray Kuru, Tomas Perez, and Leslie Pack Kaelbling. Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time, 2021. URL https://openreview.net/forum?id=w8iCTOJvyD.
* Bahdanau et al. (2019) Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, and Aaron Courville. Systematic generalization: What is required and can it be learned? In _International Conference on Learning Representations_ , 2019. URL https://openreview.net/forum?id=HkezXnA9YX.
* Bogin et al. (2020) Ben Bogin, Sanjay Subramanian, Matt Gardner, and Jonathan Berant. Latent compositional representations improve systematic generalization in grounded question answering. _arXiv preprint arXiv:2007.00266_ , 2020. URL https://arxiv.org/pdf/2007.00266.pdf.
* Chen et al. (2018) Kaiyu Chen, Yihan Dong, Xipeng Qiu, and Zitian Chen. Neural arithmetic expression calculator. _CoRR_ , abs/1809.08590, 2018. URL http://arxiv.org/abs/1809.08590.
* Chennupati et al. (2020) Gopinath Chennupati, Nandakishore Santhi, Phillip Romero, and Stephan J. Eidenbenz. Machine learning enabled scalable performance prediction of scientific codes. _CoRR_ , abs/2010.04212, 2020. URL https://arxiv.org/abs/2010.04212.
* Dai and Muggleton (2020) Wang-Zhou Dai and Stephen H. Muggleton. Abductive knowledge induction from raw data. _CoRR_ , abs/2010.03514, 2020. URL https://arxiv.org/abs/2010.03514.
* Dobbels et al. (2020) Wouter Dobbels, Maarten Baes, Sébastien Viaene, S Bianchi, JI Davies, V Casasola, CJR Clark, J Fritz, M Galametz, F Galliano, et al. Predicting the global far-infrared sed of galaxies via machine learning techniques. _Astronomy & Astrophysics_, 634:A57, 2020. URL https://arxiv.org/pdf/1910.06330.pdf.
* Faber and Wattenhofer (2020) Lukas Faber and Roger Wattenhofer. Neural status registers. _CoRR_ , abs/2004.07085, 2020. URL https://arxiv.org/abs/2004.07085.
* Fodor et al. (1988) Jerry A Fodor, Zenon W Pylyshyn, et al. Connectionism and cognitive architecture: A critical analysis. _Cognition_ , 28(1-2):3–71, 1988. URL https://uh.edu/~garson/F&P1.PDF.
* Freivalds and Liepins (2017) Karlis Freivalds and Renars Liepins. Improving the neural GPU architecture for algorithm learning. _CoRR_ , abs/1702.08727, 2017. URL http://arxiv.org/abs/1702.08727.
* Goyal and Bengio (2020) Anirudh Goyal and Yoshua Bengio. Inductive biases for deep learning of higher-level cognition. _CoRR_ , abs/2011.15091, 2020. URL https://arxiv.org/abs/2011.15091.
* Goyal et al. (2021) Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, and Bernhard Schölkopf. Recurrent independent mechanisms. In _International Conference on Learning Representations_ , 2021. URL https://openreview.net/forum?id=mLcmdlEUxy-.
* Heim et al. (2020) Niklas Heim, Tomas Pevny, and Vasek Smidl. Neural power units. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_ , volume 33, pages 6573–6583. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/48e59000d7dfcf6c1d96ce4a603ed738-Paper.pdf.
* Hu et al. (2017) Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. Learning to reason: End-to-end module networks for visual question answering. In _Proceedings of the IEEE International Conference on Computer Vision_ , pages 804–813, 2017. doi: 10.1109/ICCV.2017.93.
* Jacovi et al. (2019) Alon Jacovi, Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, and Jonathan Berant. Neural network gradient-based learning of black-box function interfaces. In _International Conference on Learning Representations_ , 2019. URL https://openreview.net/forum?id=r1e13s05YX.
* Jia et al. (2020) T. Jia, Y. Ju, R. Joseph, and J. Gu. Ncpu: An embedded neural cpu architecture on resource-constrained low power devices for real-time end-to-end performance. In _2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO)_ , pages 1097–1109, 2020. doi: 10.1109/MICRO50266.2020.00091. URL https://ieeexplore.ieee.org/document/9251958.
* Jiang and Bansal (2019) Yichen Jiang and Mohit Bansal. Self-assembling modular networks for interpretable multi-hop reasoning. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 4474–4484, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1455. URL https://aclanthology.org/D19-1455.
* Joseph-Rivlin et al. (2019) M. Joseph-Rivlin, A. Zvirin, and R. Kimmel. Momenêt: Flavor the moments in learning to classify shapes. In _2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)_ , pages 4085–4094, 2019. URL https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9022223.
* Kaiser and Sutskever (2016) Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In _4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings_. International Conference on Learning Representations, ICLR, 2016. URL http://arxiv.org/abs/1511.08228.
* Kim et al. (2018) Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. Bilinear attention networks. In _Neural Information Processing Systems_ , pages 1571–1581, 2018. URL http://papers.nips.cc/paper/7429-bilinear-attention-networks.
* Lake (2019) Brenden M Lake. Compositional generalization through meta sequence-to-sequence learning. In _Advances in Neural Information Processing Systems_ , pages 9791–9801, 2019. URL https://proceedings.neurips.cc/paper/2019/file/f4d0e2e7fc057a58f7ca4a391f01940a-Paper.pdf.
* Lample and Charton (2020) Guillaume Lample and François Charton. Deep learning for symbolic mathematics. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=S1eZYeHFDS.
* Lipton (2016) Zachary Chase Lipton. The mythos of model interpretability. _CoRR_ , abs/1606.03490, 2016. URL http://arxiv.org/abs/1606.03490.
* Lopedoto and Weyde (2020) Enrico Lopedoto and Tillman Weyde. Relex: Regularisation for linear extrapolation in neural networks with rectified linear units. In Max Bramer and Richard Ellis, editors, _Artificial Intelligence XXXVII_ , pages 159–165, Cham, 2020. Springer International Publishing. ISBN 978-3-030-63799-6.
* Madsen and Johansen (2019) Andreas Madsen and Alexander Rosenberg Johansen. Measuring arithmetic extrapolation performance. In _Science meets Engineering of Deep Learning at 33rd Conference on Neural Information Processing Systems (NeurIPS 2019)_ , volume abs/1910.01888, Vancouver, Canada, October 2019. URL http://arxiv.org/abs/1910.01888.
* Madsen and Johansen (2020) Andreas Madsen and Alexander Rosenberg Johansen. Neural arithmetic units. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=H1gNOeHKPS.
* Martius and Lampert (2017) Georg S Martius and Christoph Lampert. Extrapolation and learning equations. In _5th International Conference on Learning Representations, ICLR 2017 - Workshop Track Proceedings_ , 2017. URL https://openreview.net/pdf?id=BkgRp0FYe.
* Mitchell (1980) Tom M Mitchell. The need for biases in learning generalizations. _Readings in Machine Learning_ , (CBM-TR-117):184–191, 1980. URL http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.19.5466.
* Nollet et al. (2020) Bastien Nollet, Mathieu Lefort, and Frédéric Armetta. Learning arithmetic operations with a multistep deep learning. In _2020 International Joint Conference on Neural Networks (IJCNN)_ , pages 1–8. IEEE, 2020. URL https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9206963.
* Pérez et al. (2019) Jorge Pérez, Javier Marinković, and Pablo Barceló. On the turing completeness of modern neural network architectures. In _International Conference on Learning Representations_ , 2019. URL https://openreview.net/forum?id=HyGBdo0qFm.
* Raj et al. (2020) Aditya Raj, Pooja Consul, and Sakar K Pal. Fast neural accumulator (nac) based badminton video action classification. In _Proceedings of SAI Intelligent Systems Conference_ , pages 452–467. Springer, 2020. URL https://link.springer.com/chapter/10.1007/978-3-030-55180-3_34.
* Rajaa and Sahoo (2019) Shangeth Rajaa and Jajati Keshari Sahoo. Convolutional feature extraction and neural arithmetic logic units for stock prediction. In _International Conference on Advances in Computing and Data Sciences_ , pages 349–359. Springer, 2019. URL https://link.springer.com/chapter/10.1007/978-981-13-9939-8_31.
* Rana et al. (2019) Ashish Rana, Avleen Malhi, and Kary Främling. Exploring numerical calculations with calcnet. In _2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)_ , pages 1374–1379. IEEE, 2019. URL https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8995315.
* Rana et al. (2020) Ashish Rana, Taranveer Singh, Harpreet Singh, Neeraj Kumar, and Prashant Singh Rana. Systematically designing better instance counting models on cell images with neural arithmetic logic units, 2020. URL https://arxiv.org/abs/2004.06674.
* Reimann and Schwung (2019) Jan Niclas Reimann and Andreas Schwung. Neural logic rule layers. _CoRR_ , abs/1907.00878, 2019. URL http://arxiv.org/abs/1907.00878.
* Saxton et al. (2019) David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning abilities of neural models. In _International Conference on Learning Representations_ , 2019. URL https://openreview.net/forum?id=H1gR5iR5FX.
* Schlör et al. (2020) Daniel Schlör, Markus Ring, and Andreas Hotho. iNALU: Improved neural arithmetic logic unit. _Frontiers in Artificial Intelligence_ , 3:71, 2020. ISSN 2624-8212. doi: 10.3389/frai.2020.00071. URL https://www.frontiersin.org/article/10.3389/frai.2020.00071.
* Sestili et al. (2018) Carson D. Sestili, William S. Snavely, and Nathan M. VanHoudnos. Towards security defect prediction with AI. _CoRR_ , abs/1808.09897, 2018. URL http://arxiv.org/abs/1808.09897.
* Teitelman et al. (2020) Daniel Teitelman, Itay Naeh, and Shie Mannor. Stealing black-box functionality using the deep neural tree architecture. _CoRR_ , abs/2002.09864, 2020. URL https://arxiv.org/abs/2002.09864.
* Trask et al. (2018) Andrew Trask, Felix Hill, Scott E Reed, Jack Rae, Chris Dyer, and Phil Blunsom. Neural arithmetic logic units. In _Advances in Neural Information Processing Systems_ , pages 8035–8044, 2018.
* Udrescu and Tegmark (2020) Silviu-Marian Udrescu and Max Tegmark. AI Feynman: A physics-inspired method for symbolic regression. _Science Advances_ , 6(16):eaay2631, 2020.
* Wu et al. (2020) Bo Wu, Haoyu Qin, Alireza Zareian, Carl Vondrick, and Shih-Fu Chang. Analogical reasoning for visually grounded language acquisition. _CoRR_ , abs/2007.11668, 2020. URL https://arxiv.org/abs/2007.11668.
* Xiao et al. (2020) Zhu Xiao, Fancheng Li, Ronghui Wu, Hongbo Jiang, Yupeng Hu, Ju Ren, Chenglin Cai, and Arun Iyengar. Trajdata: On vehicle trajectory collection with commodity plug-and-play obu devices. _IEEE Internet of Things Journal_ , 7(9):9066–9079, 2020. doi: 10.1109/JIOT.2020.3001566. URL https://ieeexplore.ieee.org/document/9115028.
* Zhang et al. (2020) Chiyuan Zhang, Samy Bengio, Moritz Hardt, Michael C. Mozer, and Yoram Singer. Identity crisis: Memorization and generalization under extreme overparameterization. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=B1l6y0VFPr.
* Zhang et al. (2019a) Rui-Xiao Zhang, Tianchi Huang, Ming Ma, Haitian Pang, Xin Yao, Chenglei Wu, and Lifeng Sun. Enhancing the crowdsourced live streaming: A deep reinforcement learning approach. In _Proceedings of the 29th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video_ , NOSSDAV ’19, page 55–60, New York, NY, USA, 2019a. Association for Computing Machinery. ISBN 9781450362986. doi: 10.1145/3304112.3325607. URL https://doi.org/10.1145/3304112.3325607.
* Zhang et al. (2019b) Rui-Xiao Zhang, Ming Ma, Tianchi Huang, Haitian Pang, Xin Yao, Chenglei Wu, Jiangchuan Liu, and Lifeng Sun. Livesmart: A qos-guaranteed cost-minimum framework of viewer scheduling for crowdsourced live streaming. In _Proceedings of the 27th ACM International Conference on Multimedia_ , MM ’19, page 420–428, New York, NY, USA, 2019b. Association for Computing Machinery. ISBN 9781450368896. doi: 10.1145/3343031.3351013. URL https://doi.org/10.1145/3343031.3351013.
* Zhang et al. (2018) Yan Zhang, Jonathon Hare, and Adam Prügel-Bennett. Learning to count objects in natural images for visual question answering. In _International Conference on Learning Representations_ , 2018. URL https://openreview.net/forum?id=B12Js_yRb.
* Zhang et al. (2019c) Yan Zhang, Jonathon Hare, and Adam Prügel-Bennett. Learning representations of sets through optimized permutations. In _International Conference on Learning Representations_ , 2019c. URL https://openreview.net/forum?id=HJMCcjAcYX.